author    | Sheri Zhang <sheri.zhang@arm.com> | 2021-04-22 14:41:12 +0100
committer | Sheri Zhang <sheri.zhang@arm.com> | 2021-04-28 12:52:32 +0000
commit    | a47dcc229d912d4e4bb5afa37220d20451f243a7 (patch)
tree      | f8b296701fbdebfc7d29abc09144c49619bcca1c /arm_compute/runtime/NEON/functions/NEQuantizationLayer.h
parent    | 2b7fee089c76226bfafcae77ba49f1eddb1e01da (diff)
download  | ComputeLibrary-a47dcc229d912d4e4bb5afa37220d20451f243a7.tar.gz
Update operator list documentation
All the common information for the operators is stored in OperatorList.h.
All data type and data layout information for the operators is stored in the function header files.
Partially resolve: COMPMID-4199
Signed-off-by: Sheri Zhang <sheri.zhang@arm.com>
Change-Id: I272948cfb3f84e42232a82dd84c0158d84642099
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/5511
Reviewed-by: SiCong Li <sicong.li@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Diffstat (limited to 'arm_compute/runtime/NEON/functions/NEQuantizationLayer.h')
-rw-r--r-- | arm_compute/runtime/NEON/functions/NEQuantizationLayer.h | 19
1 file changed, 19 insertions, 0 deletions
diff --git a/arm_compute/runtime/NEON/functions/NEQuantizationLayer.h b/arm_compute/runtime/NEON/functions/NEQuantizationLayer.h
index 9e2d9ecf24..a7fadfc7cd 100644
--- a/arm_compute/runtime/NEON/functions/NEQuantizationLayer.h
+++ b/arm_compute/runtime/NEON/functions/NEQuantizationLayer.h
@@ -52,6 +52,25 @@ public:
     NEQuantizationLayer &operator=(NEQuantizationLayer &&) = default;
     /** Set the input and output tensors.
      *
+     * Valid data layouts:
+     * - All
+     *
+     * Valid data type configurations:
+     * |src                |dst            |
+     * |:------------------|:--------------|
+     * |QASYMM8            |QASYMM8        |
+     * |QASYMM8            |QASYMM8_SIGNED |
+     * |QASYMM8            |QASYMM16       |
+     * |QASYMM8_SIGNED     |QASYMM8        |
+     * |QASYMM8_SIGNED     |QASYMM8_SIGNED |
+     * |QASYMM8_SIGNED     |QASYMM16       |
+     * |F16                |QASYMM8        |
+     * |F16                |QASYMM8_SIGNED |
+     * |F16                |QASYMM16       |
+     * |F32                |QASYMM8        |
+     * |F32                |QASYMM8_SIGNED |
+     * |F32                |QASYMM16       |
+     *
      * @param[in]  input  Source tensor. The dimensions over the third will be interpreted as batches. Data types supported: QASYMM8/QASYMM8_SIGNED/F32/F16.
      * @param[out] output Destination tensor with the same dimensions of input. Data types supported: QASYMM8/QASYMM8_SIGNED/QASYMM16
      */