///
/// Copyright (c) 2021 Arm Limited.
///
/// SPDX-License-Identifier: MIT
///
/// Permission is hereby granted, free of charge, to any person obtaining a copy
/// of this software and associated documentation files (the "Software"), to
/// deal in the Software without restriction, including without limitation the
/// rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
/// sell copies of the Software, and to permit persons to whom the Software is
/// furnished to do so, subject to the following conditions:
///
/// The above copyright notice and this permission notice shall be included in all
/// copies or substantial portions of the Software.
///
/// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
/// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
/// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
/// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
/// SOFTWARE.
///
namespace arm_compute
{
/** @page operators_list Supported Operators

@tableofcontents

@section S9_1_operators_list Supported Operators

Compute Library supports the operators listed in the table below.

Compute Library supports a wide range of data types; detailed information can be found directly in the documentation of each kernel/function.
The main data types that the Machine Learning functions support are the following:
  • QASYMM8: 8-bit unsigned asymmetric quantized
  • QASYMM8_SIGNED: 8-bit signed asymmetric quantized
  • QSYMM8: 8-bit symmetric quantized
  • QSYMM8_PER_CHANNEL: 8-bit symmetric quantized, per channel
  • QSYMM16: 16-bit symmetric quantized
  • F16: 16-bit half-precision floating point
  • F32: 32-bit single-precision floating point
  • U8: 8-bit unsigned integer
  • S16: 16-bit signed integer
  • S32: 32-bit signed integer

Compute Library supports the following data layouts (fast-changing dimension from right to left):
  • NHWC: channels are in the fastest-changing dimension
  • NCHW: width is in the fastest-changing dimension
  • All: the function is agnostic to the data layout

where N = batches, C = channels, H = height, W = width
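For illustration, the following minimal sketch shows how a tensor's data type, layout and quantization information are described before it is passed to one of the functions below. The function name, shape, scale and zero point are arbitrary example values, not part of the table:

@code{.cpp}
#include "arm_compute/core/QuantizationInfo.h"
#include "arm_compute/core/TensorInfo.h"
#include "arm_compute/core/Types.h"

using namespace arm_compute;

void describe_tensor_example() // hypothetical helper, for illustration only
{
    // Example only: a QASYMM8 tensor in the NHWC layout.
    // TensorShape lists the fastest-changing dimension first, i.e. (C, W, H, N) for NHWC.
    TensorInfo info(TensorShape(64U, 32U, 32U, 1U), 1, DataType::QASYMM8);
    info.set_data_layout(DataLayout::NHWC);
    info.set_quantization_info(QuantizationInfo(0.0078125f, 128)); // arbitrary scale / zero point
}
@endcode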
Function | Description | Equivalent Android NNAPI Op | Backends | Data Layouts | Data Types
ActivationLayer Function to simulate an activation layer with the specified activation function.
  • ANEURALNETWORKS_ELU
  • ANEURALNETWORKS_HARD_SWISH
  • ANEURALNETWORKS_LOGISTIC
  • ANEURALNETWORKS_RELU
  • ANEURALNETWORKS_RELU1
  • ANEURALNETWORKS_RELU6
  • ANEURALNETWORKS_TANH
NEActivationLayer
  • All
src | dst
QASYMM8 | QASYMM8
QASYMM8_SIGNED | QASYMM8_SIGNED
QSYMM16 | QSYMM16
F16 | F16
F32 | F32
CLActivationLayer
  • All
src | dst
QASYMM8 | QASYMM8
QASYMM8_SIGNED | QASYMM8_SIGNED
QSYMM16 | QSYMM16
F16 | F16
F32 | F32
ConcatenateLayer Function to concatenate tensors along a given axis.
  • ANEURALNETWORKS_CONCATENATION
NEConcatenateLayer
  • All
src | dst
QASYMM8 | QASYMM8
QASYMM8_SIGNED | QASYMM8_SIGNED
F16 | F16
F32 | F32
CLConcatenateLayer
  • All
src | dst
QASYMM8 | QASYMM8
QASYMM8_SIGNED | QASYMM8_SIGNED
F16 | F16
F32 | F32
ConvertFullyConnectedWeights Function to transpose the weights for the fully connected layer.
  • None
NEConvertFullyConnectedWeights
  • NHWC
  • NCHW
src | dst
All | All
CLConvertFullyConnectedWeights
  • NHWC
  • NCHW
src | dst
All | All
Copy Function to copy a tensor.
  • None
NECopy
  • All
src | dst
All | All
CLCopy
  • All
src | dst
All | All
DequantizationLayer Function to dequantize the values in a tensor
  • ANEURALNETWORKS_DEQUANTIZE
NEDequantizationLayer
  • All
src | dst
QASYMM8 | F16
QASYMM8 | F32
QASYMM8_SIGNED | F16
QASYMM8_SIGNED | F32
QSYMM8_PER_CHANNEL | F16
QSYMM8_PER_CHANNEL | F32
QSYMM8 | F16
QSYMM8 | F32
QSYMM16 | F16
QSYMM16 | F32
CLDequantizationLayer
  • All
src | dst
QASYMM8 | F16
QASYMM8 | F32
QASYMM8_SIGNED | F16
QASYMM8_SIGNED | F32
QSYMM8_PER_CHANNEL | F16
QSYMM8_PER_CHANNEL | F32
QSYMM8 | F16
QSYMM8 | F32
QSYMM16 | F16
QSYMM16 | F32
DirectConvolutionLayer Function to compute a direct 2D convolution.
  • ANEURALNETWORKS_CONV_2D
NEDirectConvolutionLayer
  • NHWC
  • NCHW
src0 | src1 | src2 | dst
F16 | F16 | F16 | F16
F32 | F32 | F32 | F32
CLDirectConvolutionLayer
  • NHWC
  • NCHW
src0 | src1 | src2 | dst
F16 | F16 | F16 | F16
F32 | F32 | F32 | F32
QASYMM8 | QASYMM8 | S32 | QASYMM8
QASYMM8_SIGNED | QASYMM8_SIGNED | S32 | QASYMM8_SIGNED
FFT1D Fast Fourier Transform 1D
  • None
NEFFT1D
  • All
src | dst
F32 | F32
CLFFT1D
  • All
src | dst
F32 | F32
F16 | F16
FFT2D Fast Fourier Transform 2D
  • None
NEFFT2D
  • All
src | dst
F32 | F32
CLFFT2D
  • All
src | dst
F32 | F32
F16 | F16
FFTConvolutionLayer Fast Fourier Transform Convolution
  • ANEURALNETWORKS_CONV_2D
NEFFTConvolutionLayer
  • All
src | dst
F32 | F32
CLFFTConvolutionLayer
  • All
src | dst
F32 | F32
F16 | F16
Fill Set the values of a tensor with a given value
  • ANEURALNETWORKS_FILL
NEFill
  • All
src | dst
All | All
CLFill
  • All
src | dst
All | All
Floor Function to round each element of a tensor down to the nearest integer.
  • ANEURALNETWORKS_FLOOR
NEFloor
  • All
src | dst
F32 | F32
F16 | F16
CLFloor
  • All
src | dst
F32 | F32
F16 | F16
Permute Function to permute the dimensions of an ND tensor.
  • ANEURALNETWORKS_TRANSPOSE
NEPermute
  • NHWC
  • NCHW
src | dst
All | All
CLPermute
  • NHWC
  • NCHW
src | dst
All | All
PixelWiseMultiplication Function to perform a pixel-wise multiplication.
  • ANEURALNETWORKS_MUL
NEPixelWiseMultiplication
  • All
src0 | src1 | dst
QASYMM8 | QASYMM8 | QASYMM8
QASYMM8_SIGNED | QASYMM8_SIGNED | QASYMM8_SIGNED
QSYMM16 | QSYMM16 | QSYMM16
QSYMM16 | QSYMM16 | S32
U8 | U8 | U8
U8 | U8 | S16
U8 | S16 | S16
S16 | U8 | S16
S16 | S16 | S16
F16 | F16 | F16
F32 | F32 | F32
CLPixelWiseMultiplication
  • All
src0 | src1 | dst
QASYMM8 | QASYMM8 | QASYMM8
QASYMM8_SIGNED | QASYMM8_SIGNED | QASYMM8_SIGNED
QSYMM16 | QSYMM16 | QSYMM16
QSYMM16 | QSYMM16 | S32
U8 | U8 | U8
U8 | U8 | S16
U8 | S16 | S16
S16 | U8 | S16
S16 | S16 | S16
F16 | F16 | F16
F32 | F32 | F32
PoolingLayer Function to perform pooling with the specified pooling operation.
  • ANEURALNETWORKS_AVERAGE_POOL_2D
  • ANEURALNETWORKS_L2_POOL_2D
  • ANEURALNETWORKS_MAX_POOL_2D
NEPoolingLayer
  • NHWC
  • NCHW
src | dst
QASYMM8 | QASYMM8
QASYMM8_SIGNED | QASYMM8_SIGNED
F16 | F16
F32 | F32
CLPoolingLayer
  • NHWC
  • NCHW
src | dst
QASYMM8 | QASYMM8
QASYMM8_SIGNED | QASYMM8_SIGNED
F16 | F16
F32 | F32
PReluLayer Function to compute the activation layer with the PRELU activation function.
  • ANEURALNETWORKS_PRELU
NEPReluLayer
  • All
src | dst
QASYMM8 | QASYMM8
QASYMM8_SIGNED | QASYMM8_SIGNED
F16 | F16
F32 | F32
CLPReluLayer
  • All
src | dst
QASYMM8 | QASYMM8
QASYMM8_SIGNED | QASYMM8_SIGNED
F16 | F16
F32 | F32
QuantizationLayer Function to quantize the values in a tensor
  • ANEURALNETWORKS_QUANTIZE
NEQuantizationLayer
  • All
src | dst
QASYMM8 | QASYMM8
QASYMM8 | QASYMM8_SIGNED
QASYMM8 | QASYMM16
QASYMM8_SIGNED | QASYMM8
QASYMM8_SIGNED | QASYMM8_SIGNED
QASYMM8_SIGNED | QASYMM16
F16 | QASYMM8
F16 | QASYMM8_SIGNED
F16 | QASYMM16
F32 | QASYMM8
F32 | QASYMM8_SIGNED
F32 | QASYMM16
CLQuantizationLayer
  • All
src | dst
QASYMM8 | QASYMM8
QASYMM8 | QASYMM8_SIGNED
QASYMM8 | QASYMM16
QASYMM8_SIGNED | QASYMM8
QASYMM8_SIGNED | QASYMM8_SIGNED
QASYMM8_SIGNED | QASYMM16
F16 | QASYMM8
F16 | QASYMM8_SIGNED
F16 | QASYMM16
F32 | QASYMM8
F32 | QASYMM8_SIGNED
F32 | QASYMM16
ReshapeLayer Function to reshape a tensor
  • ANEURALNETWORKS_RESHAPE
  • ANEURALNETWORKS_SQUEEZE
NEReshapeLayer
  • All
src | dst
All | All
CLReshapeLayer
  • All
src | dst
All | All
Scale Function to resize a tensor using one of the interpolation methods: Bilinear or Nearest neighbor.
  • ANEURALNETWORKS_RESIZE_BILINEAR
  • ANEURALNETWORKS_RESIZE_NEAREST_NEIGHBOR
NEScale
  • NHWC
  • NCHW
src | dst
QASYMM8 | QASYMM8
QASYMM8_SIGNED | QASYMM8_SIGNED
F16 | F16
F32 | F32
U8 | U8
S16 | S16
CLScale
  • NHWC
  • NCHW
src | dst
QASYMM8 | QASYMM8
QASYMM8_SIGNED | QASYMM8_SIGNED
F16 | F16
F32 | F32
U8 | U8
S16 | S16
Slice Function to perform tensor slicing.
  • ANEURALNETWORKS_SLICE
NESlice
  • All
src | dst
All | All
CLSlice
  • All
src | dst
All | All
StridedSlice Function to extract a strided slice of a tensor.
  • ANEURALNETWORKS_STRIDED_SLICE
NEStridedSlice
  • All
src | dst
All | All
CLStridedSlice
  • All
src | dst
All | All
Transpose Function to transpose a 2D tensor.
  • ANEURALNETWORKS_TRANSPOSE
NETranspose
  • All
src | dst
All | All
CLTranspose
  • All
src | dst
All | All
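As a usage illustration, the sketch below configures and runs one of the functions listed above, NEActivationLayer, on F32 tensors. The tensor shape and the choice of the RELU activation are arbitrary example values:

@code{.cpp}
#include "arm_compute/core/Types.h"
#include "arm_compute/runtime/NEON/functions/NEActivationLayer.h"
#include "arm_compute/runtime/Tensor.h"

using namespace arm_compute;

int main()
{
    // Describe the source and destination tensors (same shape, F32).
    Tensor src, dst;
    src.allocator()->init(TensorInfo(TensorShape(16U, 16U), 1, DataType::F32));
    dst.allocator()->init(TensorInfo(TensorShape(16U, 16U), 1, DataType::F32));

    // Configure the function first, then allocate the backing memory.
    NEActivationLayer act;
    act.configure(&src, &dst, ActivationLayerInfo(ActivationLayerInfo::ActivationFunction::RELU));
    src.allocator()->allocate();
    dst.allocator()->allocate();

    // ... fill src with input data here ...

    act.run(); // executes the activation on the CPU backend
    return 0;
}
@endcode

The CL backend functions in the table follow the same configure()/run() pattern on CLTensor objects, after the CLScheduler has been initialized.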
*/
} // namespace