author    Michele Di Giorgio <michele.digiorgio@arm.com>    2021-01-19 15:29:02 +0000
committer Sheri Zhang <sheri.zhang@arm.com>    2021-01-19 18:21:47 +0000
commit    bd2c8e1be0c83d243a9e2bc8eec60853f8dc701a (patch)
tree      4e644b3bd1e0df1d05939d7e75a16be26d3d1c00 /docs
parent    0f7ef8ab2171093855a8f21bd39c8fd7066dd629 (diff)
download  ComputeLibrary-bd2c8e1be0c83d243a9e2bc8eec60853f8dc701a.tar.gz
Fix doxygen references to new kernels
Resolves COMPMID-4117

Change-Id: I9945a92402e34b9cfe0ba9ef2a961b168bf62721
Signed-off-by: Michele Di Giorgio <michele.digiorgio@arm.com>
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/4883
Reviewed-by: Pablo Marquez Tello <pablo.tello@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
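For context on what this commit fixes: in Doxygen, `@ref Name` generates a hyperlink to the documentation page of `Name`, and the build emits an "unable to resolve reference" warning when that entity is no longer part of the documented API. The patch therefore drops `@ref` in front of kernels that have been removed or made internal, leaving plain text. A minimal before/after sketch (illustrative only, using one of the kernel names from the diff below):

```cpp
/**
 * Before: warns once NEArithmeticSubtractionKernel leaves the documented API
 *  - @ref NEArithmeticSubtractionKernel
 *
 * After: plain text, no link target required, no Doxygen warning
 *  - NEArithmeticSubtractionKernel
 */
```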
Diffstat (limited to 'docs')
-rw-r--r--  docs/00_introduction.dox   | 28
-rw-r--r--  docs/04_adding_operator.dox | 2
2 files changed, 15 insertions(+), 15 deletions(-)
diff --git a/docs/00_introduction.dox b/docs/00_introduction.dox
index 5e8769c366..effeb7b8d2 100644
--- a/docs/00_introduction.dox
+++ b/docs/00_introduction.dox
@@ -110,8 +110,8 @@ v20.11 Public major release
This is planned to be resolved in 21.02 release.
- Added new data type QASYMM8_SIGNED support for @ref NEROIAlignLayer.
- Added new data type S32 support for:
- - @ref NEArithmeticSubtraction
- - @ref NEArithmeticSubtractionKernel
+ - NEArithmeticSubtraction
+ - NEArithmeticSubtractionKernel
- @ref NEPixelWiseMultiplication
- @ref NEPixelWiseMultiplicationKernel
- @ref NEElementwiseDivision
@@ -430,12 +430,12 @@ v20.08 Public major release
- More robust script for running benchmarks
- Removed padding from:
- @ref NEPixelWiseMultiplicationKernel
- - @ref NEHeightConcatenateLayerKernel
+ - NEHeightConcatenateLayerKernel
- @ref NEThresholdKernel
- - @ref NEBatchConcatenateLayerKernel
+ - NEBatchConcatenateLayerKernel
- @ref NETransposeKernel
- @ref NEBatchNormalizationLayerKernel
- - @ref NEArithmeticSubtractionKernel
+ - NEArithmeticSubtractionKernel
- @ref NEBoundingBoxTransformKernel
- @ref NELogits1DMaxKernel
- @ref NELogits1DSoftmaxKernel
@@ -444,8 +444,8 @@ v20.08 Public major release
- NEYOLOLayerKernel
- NEUpsampleLayerKernel
- NEFloorKernel
- - @ref NEWidthConcatenateLayerKernel
- - @ref NEDepthConcatenateLayerKernel
+ - NEWidthConcatenateLayerKernel
+ - NEDepthConcatenateLayerKernel
- @ref NENormalizationLayerKernel
- @ref NEL2NormalizeLayerKernel
- @ref NEFillArrayKernel
@@ -526,7 +526,7 @@ v20.05 Public major release
- @ref NEQLSTMLayerNormalizationKernel
- Added HARD_SWISH support in:
- @ref CLActivationLayerKernel
- - @ref NEActivationLayerKernel
+ - NEActivationLayerKernel
- Deprecated OpenCL kernels / functions:
- CLGEMMLowpQuantizeDownInt32ToUint8Scale
- CLGEMMLowpQuantizeDownInt32ToUint8ScaleByFloat
@@ -697,7 +697,7 @@ v19.08 Public major release
- @ref NENegLayer
- @ref NEPReluLayer
- @ref NESinLayer
- - @ref NEBatchConcatenateLayerKernel
+ - NEBatchConcatenateLayerKernel
- @ref NEDepthToSpaceLayerKernel / @ref NEDepthToSpaceLayer
- @ref NEDepthwiseConvolutionLayerNativeKernel
- @ref NEGEMMLowpQuantizeDownInt32ToInt16ScaleByFixedPointKernel
@@ -749,7 +749,7 @@ v19.05 Public major release
- @ref NEFFTRadixStageKernel
- @ref NEFFTScaleKernel
- @ref NEGEMMLowpOffsetContributionOutputStageKernel
- - @ref NEHeightConcatenateLayerKernel
+ - NEHeightConcatenateLayerKernel
- @ref NESpaceToBatchLayerKernel / @ref NESpaceToBatchLayer
- @ref NEFFT1D
- @ref NEFFT2D
@@ -882,7 +882,7 @@ v19.02 Public major release
- @ref CLROIAlignLayer
- @ref CLGenerateProposalsLayer
- Added QASYMM8 support to the following kernels:
- - @ref NEArithmeticAdditionKernel
+ - NEArithmeticAdditionKernel
- @ref NEScale
- Added new tests and improved validation and benchmarking suites.
- Deprecated functions/interfaces
@@ -1062,7 +1062,7 @@ v18.02 Public major release
- Added FP16 support to:
- CLDepthwiseConvolutionLayer3x3
- @ref CLDepthwiseConvolutionLayer
- - Added broadcasting support to @ref NEArithmeticAddition / @ref CLArithmeticAddition / @ref CLPixelWiseMultiplication
+ - Added broadcasting support to NEArithmeticAddition / @ref CLArithmeticAddition / @ref CLPixelWiseMultiplication
- Added fused batched normalization and activation to @ref CLBatchNormalizationLayer and @ref NEBatchNormalizationLayer
- Added support for non-square pooling to @ref NEPoolingLayer and @ref CLPoolingLayer
- New OpenCL kernels / functions:
@@ -1218,7 +1218,7 @@ v17.06 Public major release
- @ref CPPDetectionWindowNonMaximaSuppressionKernel
- New NEON kernels / functions:
- @ref NEBatchNormalizationLayerKernel / @ref NEBatchNormalizationLayer
- - @ref NEDepthConcatenateLayerKernel / NEDepthConcatenateLayer
+ - NEDepthConcatenateLayerKernel / NEDepthConcatenateLayer
- @ref NEDirectConvolutionLayerKernel / @ref NEDirectConvolutionLayer
- NELocallyConnectedMatrixMultiplyKernel / NELocallyConnectedLayer
- @ref NEWeightsReshapeKernel / @ref NEConvolutionLayerReshapeWeights
@@ -1276,7 +1276,7 @@ v17.03 Sources preview
- @ref CLNormalizationLayerKernel / @ref CLNormalizationLayer
- @ref CLLaplacianPyramid, @ref CLLaplacianReconstruct
- New NEON kernels / functions:
- - @ref NEActivationLayerKernel / @ref NEActivationLayer
+ - NEActivationLayerKernel / @ref NEActivationLayer
- GEMM refactoring + FP16 support (Requires armv8.2 CPU): @ref NEGEMMInterleave4x4Kernel, @ref NEGEMMTranspose1xWKernel, @ref NEGEMMMatrixMultiplyKernel, @ref NEGEMMMatrixAdditionKernel / @ref NEGEMM
- @ref NEPoolingLayerKernel / @ref NEPoolingLayer
diff --git a/docs/04_adding_operator.dox b/docs/04_adding_operator.dox
index 13be712549..9e6f3751b8 100644
--- a/docs/04_adding_operator.dox
+++ b/docs/04_adding_operator.dox
@@ -121,7 +121,7 @@ For OpenCL:
The run will call the function defined in the .cl file.
For the NEON backend case:
-@snippet src/core/NEON/kernels/NEReshapeLayerKernel.cpp NEReshapeLayerKernel Kernel
+@snippet src/core/cpu/kernels/CpuReshapeKernel.cpp NEReshapeLayerKernel Kernel
In the NEON case, there is no need to add an extra file and we implement the kernel in the same NEReshapeLayerKernel.cpp file.
If the tests are already in place, the new kernel can be tested using the existing tests by adding the configure and run of the kernel to the compute_target() in the fixture.
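For context on the `@snippet` change above: Doxygen copies into the documentation page whatever source text sits between a pair of identical marker comments in the referenced file, so the directive only had to be repointed from NEReshapeLayerKernel.cpp to CpuReshapeKernel.cpp where the markers now live. A minimal sketch of the mechanism, with hypothetical file, marker, and function names (the real marker is `NEReshapeLayerKernel Kernel` in src/core/cpu/kernels/CpuReshapeKernel.cpp):

```cpp
#include <cstring> // std::memcpy
#include <vector>

// Doxygen extracts everything between the two identical marker comments
// below when a .dox file contains:
//   @snippet this_file.cpp ExampleReshapeKernel Kernel
// [ExampleReshapeKernel Kernel]
// A reshape is a pure copy: element order is unchanged and only the
// logical shape differs, so the kernel body is a flat memcpy.
void run_reshape(const std::vector<float> &src, std::vector<float> &dst)
{
    dst.resize(src.size());
    std::memcpy(dst.data(), src.data(), src.size() * sizeof(float));
}
// [ExampleReshapeKernel Kernel]
```

Because the marker name, not the file path, identifies the snippet, the commit could keep the old `NEReshapeLayerKernel Kernel` marker name while moving it into the new CpuReshapeKernel.cpp file.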