From d71d2be0ab380a2d1fbaf051e43b368330b4d27c Mon Sep 17 00:00:00 2001
From: Manuel Bottini
Date: Thu, 21 May 2020 17:14:36 +0100
Subject: COMPMID-3069: Removing deprecated functions and classes from 20.05
 release

Change-Id: Ic4d20995d6c6bb76d07113e86247bad2722e4e83
Signed-off-by: Manuel Bottini
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/3244
Tested-by: Arm Jenkins
Reviewed-by: Michele Di Giorgio
Comments-Addressed: Arm Jenkins
---
 docs/00_introduction.dox | 19 +++++++++++--------
 1 file changed, 11 insertions(+), 8 deletions(-)

diff --git a/docs/00_introduction.dox b/docs/00_introduction.dox
index 6fed3080f9..39b83f27a3 100644
--- a/docs/00_introduction.dox
+++ b/docs/00_introduction.dox
@@ -299,6 +299,9 @@ v20.05 Public major release
     - NEGEMMLowpQuantizeDownInt32ToUint8Scale
  - Removed CPP kernels / functions:
    - CPPFlipWeightsKernel
+ - Removed PoolingLayerInfo constructors without Data Layout.
+ - Removed CLDepthwiseConvolutionLayer3x3
+ - Removed NEDepthwiseConvolutionLayerOptimized
 
 v20.02.1 Maintenance release
  - Added Android-NN build script.
@@ -308,7 +311,7 @@ v20.02 Public major release
  - Various optimisations.
  - Added new data type QASYMM8_SIGNED support for:
    - @ref CLDepthwiseConvolutionLayer
-   - @ref CLDepthwiseConvolutionLayer3x3
+   - CLDepthwiseConvolutionLayer3x3
    - @ref CLGEMMConvolutionLayer
    - @ref CLGEMMLowpMatrixMultiplyCore
    - @ref CLGEMMLowpMatrixMultiplyReshapedOnlyRHSKernel
@@ -340,9 +343,9 @@ v20.02 Public major release
    - @ref NEFill
    - @ref NEGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPointKernel / @ref NEGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPoint
  - Deprecated NEON functions / interfaces:
-   - @ref CLDepthwiseConvolutionLayer3x3
-   - @ref NEDepthwiseConvolutionLayerOptimized
-   - @ref PoolingLayerInfo constructors without Data Layout.
+   - CLDepthwiseConvolutionLayer3x3
+   - NEDepthwiseConvolutionLayerOptimized
+   - PoolingLayerInfo constructors without Data Layout.
  - Added support for quantization with multiplier greater than 1 on NEON and CL.
  - Added support for quantized inputs of type QASYMM8_SIGNED and QASYMM8 to @ref CLQuantizationLayer.
  - Added the ability to build bootcode for bare metal.
@@ -486,8 +489,8 @@ v19.08 Public major release
  - Added an optimized depthwise convolution layer kernel for 5x5 filters (NEON only)
  - Added support to enable OpenCL kernel cache. Added example showing how to load the prebuilt OpenCL kernels from a binary cache file
  - Altered @ref QuantizationInfo interface to support per-channel quantization.
- - The @ref CLDepthwiseConvolutionLayer3x3 will be included by @ref CLDepthwiseConvolutionLayer to accommodate for future optimizations.
- - The @ref NEDepthwiseConvolutionLayerOptimized will be included by @ref NEDepthwiseConvolutionLayer to accommodate for future optimizations.
+ - The CLDepthwiseConvolutionLayer3x3 will be included by @ref CLDepthwiseConvolutionLayer to accommodate for future optimizations.
+ - The NEDepthwiseConvolutionLayerOptimized will be included by @ref NEDepthwiseConvolutionLayer to accommodate for future optimizations.
  - Removed inner_border_right and inner_border_top parameters from @ref CLDeconvolutionLayer interface
  - Removed inner_border_right and inner_border_top parameters from @ref NEDeconvolutionLayer interface
  - Optimized the NEON assembly kernel for GEMMLowp. The new implementation fuses the output stage and quantization with the matrix multiplication kernel
@@ -815,7 +818,7 @@ v18.02 Public major release
    - @ref NEDepthwiseConvolutionLayer
    - @ref NESoftmaxLayer
  - Added FP16 support to:
-   - @ref CLDepthwiseConvolutionLayer3x3
+   - CLDepthwiseConvolutionLayer3x3
    - @ref CLDepthwiseConvolutionLayer
  - Added broadcasting support to @ref NEArithmeticAddition / @ref CLArithmeticAddition / @ref CLPixelWiseMultiplication
  - Added fused batched normalization and activation to @ref CLBatchNormalizationLayer and @ref NEBatchNormalizationLayer
@@ -941,7 +944,7 @@ v17.09 Public major release
    - @ref NEReshapeLayerKernel / @ref NEReshapeLayer
 
  - New OpenCL kernels / functions:
-   - @ref CLDepthwiseConvolutionLayer3x3NCHWKernel @ref CLDepthwiseConvolutionLayer3x3NHWCKernel CLDepthwiseIm2ColKernel CLDepthwiseVectorToTensorKernel CLDepthwiseWeightsReshapeKernel / @ref CLDepthwiseConvolutionLayer3x3 @ref CLDepthwiseConvolutionLayer CLDepthwiseSeparableConvolutionLayer
+   - @ref CLDepthwiseConvolutionLayer3x3NCHWKernel @ref CLDepthwiseConvolutionLayer3x3NHWCKernel CLDepthwiseIm2ColKernel CLDepthwiseVectorToTensorKernel CLDepthwiseWeightsReshapeKernel / CLDepthwiseConvolutionLayer3x3 @ref CLDepthwiseConvolutionLayer CLDepthwiseSeparableConvolutionLayer
    - @ref CLDequantizationLayerKernel / @ref CLDequantizationLayer
    - @ref CLDirectConvolutionLayerKernel / @ref CLDirectConvolutionLayer
    - @ref CLFlattenLayer
--
cgit v1.2.1
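
Note on migration: per the v19.08 changelog entry quoted above, the removed CLDepthwiseConvolutionLayer3x3 was folded into the generic @ref CLDepthwiseConvolutionLayer, which selects the optimised 3x3 path internally when the weights allow it; NEDepthwiseConvolutionLayerOptimized maps to @ref NEDepthwiseConvolutionLayer in the same way. The snippet below is a minimal sketch of calling the generic CL function using the library's standard runtime API; the tensor shapes, data type and padding are illustrative assumptions and are not taken from this commit.

// Minimal sketch: using CLDepthwiseConvolutionLayer where the removed
// CLDepthwiseConvolutionLayer3x3 was previously used. Shapes, data type and
// padding below are illustrative assumptions only.
#include "arm_compute/core/Types.h"
#include "arm_compute/runtime/CL/CLScheduler.h"
#include "arm_compute/runtime/CL/CLTensor.h"
#include "arm_compute/runtime/CL/functions/CLDepthwiseConvolutionLayer.h"

using namespace arm_compute;

int main()
{
    // Initialise the OpenCL scheduler (context, queue, kernel library).
    CLScheduler::get().default_init();

    // Example 32x32 input with 16 channels and 3x3 depthwise weights (FP32).
    CLTensor src, weights, biases, dst;
    src.allocator()->init(TensorInfo(TensorShape(32U, 32U, 16U), 1, DataType::F32));
    weights.allocator()->init(TensorInfo(TensorShape(3U, 3U, 16U), 1, DataType::F32));
    biases.allocator()->init(TensorInfo(TensorShape(16U), 1, DataType::F32));
    dst.allocator()->init(TensorInfo(TensorShape(32U, 32U, 16U), 1, DataType::F32));

    // The generic function dispatches to the optimised 3x3 kernels internally
    // when the configuration supports them.
    CLDepthwiseConvolutionLayer dwconv;
    dwconv.configure(&src, &weights, &biases, &dst,
                     PadStrideInfo(1, 1, 1, 1) /* stride 1, pad 1 */);

    // Allocate the backing CL buffers, then (after filling them) run and sync.
    src.allocator()->allocate();
    weights.allocator()->allocate();
    biases.allocator()->allocate();
    dst.allocator()->allocate();

    dwconv.run();
    CLScheduler::get().sync();
    return 0;
}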