From ee301b384f4aeb697a5c249b8bb848d784146582 Mon Sep 17 00:00:00 2001
From: Jakub Sujak
Date: Fri, 4 Jun 2021 09:46:08 +0100
Subject: Fix errata in documentation

This patch addresses the following errata found in the project documentation:

* Common typos.
* Missing use of trademarks.
* Incomplete operator descriptions.
* Examples of code that have since been removed from the library.
* Plus clarification over the usage of the `All` category for data types and layouts.

In addition, the Operator list was not generated properly due to:

* Non-matching cases in the filenames (i.e. `Elementwise` and `ElementWise`). For consistency, all usages of the latter have been renamed to the former.
* Extra data layout tables in the headers for the `NESlice` and `NEStridedSlice` functions (note: not present in the CL counterparts) meant documentation for those functions was generated twice.

Resolves: COMPMID-4561, COMPMID-4562, COMPMID-4563

Change-Id: I1eb24559545397749e636ffbf927727fb1bc6201
Signed-off-by: Jakub Sujak
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/5769
Comments-Addressed: Arm Jenkins
Tested-by: Arm Jenkins
Reviewed-by: Sheri Zhang
Reviewed-by: SiCong Li
---
 docs/user_guide/operator_list.dox | 36 +++++++++++++++++++-----------------
 1 file changed, 19 insertions(+), 17 deletions(-)

diff --git a/docs/user_guide/operator_list.dox b/docs/user_guide/operator_list.dox
index fc41265738..05cc892d40 100644
--- a/docs/user_guide/operator_list.dox
+++ b/docs/user_guide/operator_list.dox
@@ -45,14 +45,14 @@ The main data-types that the Machine Learning functions support are the followin
  • F16: 16-bit half precision floating point
  • S32: 32-bit signed integer
  • U8: 8-bit unsigned char
- • All: include all above data types
+ • All: Agnostic to any specific data type

 Compute Library supports the following data layouts (fast changing dimension from right to left):
    • NHWC: The native layout of Compute Library that delivers the best performance where channels are in the fastest changing dimension
  • NCHW: Legacy layout where width is in the fastest changing dimension
- • All: include all above data layouts
+ • All: Agnostic to any specific data layout
  where N = batches, C = channels, H = height, W = width

@@ -264,7 +264,7 @@ where N = batches, C = channels, H = height, W = width
 BitwiseAnd
-Function to performe bitwise AND between 2 tensors.
+Function to perform bitwise AND between 2 tensors.
 • ANEURALNETWORKS_LOGICAL_AND

@@ -292,7 +292,7 @@ where N = batches, C = channels, H = height, W = width
 BitwiseNot
-Function to performe bitwise NOT.
+Function to perform bitwise NOT.
 • ANEURALNETWORKS_LOGICAL_NOT

@@ -320,7 +320,7 @@ where N = batches, C = channels, H = height, W = width
 BitwiseOr
-Function to performe bitwise OR between 2 tensors.
+Function to perform bitwise OR between 2 tensors.
 • ANEURALNETWORKS_LOGICAL_OR

@@ -348,7 +348,7 @@ where N = batches, C = channels, H = height, W = width
 BitwiseXor
-Function to performe bitwise XOR between 2 tensors.
+Function to perform bitwise XOR between 2 tensors.
 • n/a

@@ -535,7 +535,7 @@ where N = batches, C = channels, H = height, W = width
 ConvertFullyConnectedWeights
-Function to tranpose the wieghts for the fully connected layer.
+Function to transpose the weights for the fully connected layer.
 • n/a

@@ -678,7 +678,7 @@ where N = batches, C = channels, H = height, W = width
 DeconvolutionLayer
-Function to compute a deconvolution or tranpose convolution.
+Function to compute a deconvolution or transpose convolution.
 • ANEURALNETWORKS_TRANSPOSE_CONV_2D

@@ -957,7 +957,7 @@ where N = batches, C = channels, H = height, W = width
 QASYMM8_SIGNED  QSYMM8_PER_CHANNEL  S32  QASYMM8_SIGNED
-ElementWiseOperations
+ElementwiseOperations
 Function to perform in Cpu: - Div - Max - Min - Pow - SquaredDiff - Comparisons (Equal, greater, greater_equal, less, less_equal, not_equal) Function to perform in CL: - Add - Sub - Div - Max - Min - Pow - SquaredDiff
@@ -1242,6 +1242,7 @@ where N = batches, C = channels, H = height, W = width
 src  dst
 F16  F16
 F32  F32
+S32  S32

 CLSinLayer

@@ -1408,7 +1409,7 @@ where N = batches, C = channels, H = height, W = width
 FillBorder
-Function to .
+Function to fill the borders within the XY-planes.
 • n/a

@@ -1620,7 +1621,7 @@ where N = batches, C = channels, H = height, W = width
 F16  F16  F16  F16
-GEMMConv2D
+GEMMConv2d
 General Matrix Multiplication.
@@ -2193,7 +2194,7 @@ where N = batches, C = channels, H = height, W = width
 PixelWiseMultiplication
-Function to performe a multiplication.
+Function to perform a multiplication.
 • ANEURALNETWORKS_MUL

@@ -2237,11 +2238,12 @@ where N = batches, C = channels, H = height, W = width
 S16  U8   S16
 S16  S16  S16
 F16  F16  F16
-F32  S32  F32
+F32  F32  F32
+S32  S32  S32

 PoolingLayer
-Function to performe pooling with the specified pooling operation.
+Function to perform pooling with the specified pooling operation.
 • ANEURALNETWORKS_AVERAGE_POOL_2D

@@ -2449,7 +2451,7 @@ where N = batches, C = channels, H = height, W = width
 ReduceMean
-Function to performe reduce mean operation.
+Function to perform reduce mean operation.
 • ANEURALNETWORKS_MEAN

@@ -2483,7 +2485,7 @@ where N = batches, C = channels, H = height, W = width
 ReductionOperation
-Function to performe reduce with the following operations - ARG_IDX_MAX: Index of the max value - ARG_IDX_MIN: Index of the min value - MEAN_SUM: Mean of sum - PROD: Product - SUM_SQUARE: Sum of squares - SUM: Sum - MIN: Min - MAX: Max
+Function to perform reduce with the following operations - ARG_IDX_MAX: Index of the max value - ARG_IDX_MIN: Index of the min value - MEAN_SUM: Mean of sum - PROD: Product - SUM_SQUARE: Sum of squares - SUM: Sum - MIN: Min - MAX: Max
 • ANEURALNETWORKS_REDUCE_ALL

@@ -3100,7 +3102,7 @@ where N = batches, C = channels, H = height, W = width
 WinogradInputTransform
-Function to.
+Function to perform a Winograd transform on the input tensor.
 • n/a