author    | Georgios Pinitas <georgios.pinitas@arm.com> | 2019-11-21 14:10:25 +0000
committer | Georgios Pinitas <georgios.pinitas@arm.com> | 2019-11-27 10:56:10 +0000
commit    | 448a81fcec04333364a1e3266d5081596d3a0477 (patch)
tree      | bd5382a58fae39a8014157423a8ff339d39e14b9 /arm_compute/core/NEON/kernels/NEGEMMLowpOffsetContributionOutputStageKernel.h
parent    | 449cbf9c20287fca9a56898cdc5821c061a66ce3 (diff)
download  | ComputeLibrary-448a81fcec04333364a1e3266d5081596d3a0477.tar.gz
COMPMID-2805: Add QASYMM8_SIGNED support in NEGEMMLowpOutputStage
Add support for requantizing down from S32 to Int8 with fixed-point
requantization. This involves the following steps (a scalar sketch
follows the list):
- Compute a fixed-point multiplication between each entry of the input
  and result_fixedpoint_multiplier
- Add the bias to the result if the bias tensor is not a nullptr
- Perform a round-to-nearest division by a power of two using result_shift
- Add offset to each result
- Clamp the value between the specified min and max bounds
- Cast to int8 data type
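
A minimal scalar model of the pipeline above, in C++. The function name and
the exact rounding details are illustrative assumptions, not the library's
NEON implementation, which uses vqrdmulh/rounding-shift intrinsics whose
rounding differs slightly for negative products:

    #include <algorithm>
    #include <cstdint>

    // Scalar sketch of the S32 -> QASYMM8_SIGNED fixed-point requantization
    // described in the commit message. Names are hypothetical.
    int8_t requantize_s32_to_int8(int32_t acc, int32_t bias,
                                  int32_t result_fixedpoint_multiplier,
                                  int32_t result_shift, int32_t result_offset,
                                  int8_t min_bound, int8_t max_bound)
    {
        // Add the bias first, if a bias tensor is present.
        acc += bias;
        // Fixed-point multiply: doubled, rounded high half of the 64-bit
        // product, i.e. (acc * multiplier + 2^30) >> 31.
        const int64_t prod = static_cast<int64_t>(acc) * result_fixedpoint_multiplier;
        int32_t       res  = static_cast<int32_t>((prod + (INT64_C(1) << 30)) >> 31);
        // Round-to-nearest division by 2^result_shift.
        if(result_shift > 0)
        {
            res = (res + (1 << (result_shift - 1))) >> result_shift;
        }
        // Add the output offset, clamp to the requested bounds, cast to int8.
        res += result_offset;
        res = std::max<int32_t>(min_bound, std::min<int32_t>(max_bound, res));
        return static_cast<int8_t>(res);
    }

For QASYMM8 the same pipeline applies, with unsigned 8-bit bounds and a final
cast to uint8 instead.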
Change-Id: I641b3fac0833c568d8565ccb859bbc561a24c17d
Signed-off-by: Georgios Pinitas <georgios.pinitas@arm.com>
Reviewed-on: https://review.mlplatform.org/c/2340
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Michele Di Giorgio <michele.digiorgio@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Diffstat (limited to 'arm_compute/core/NEON/kernels/NEGEMMLowpOffsetContributionOutputStageKernel.h')
-rw-r--r-- | arm_compute/core/NEON/kernels/NEGEMMLowpOffsetContributionOutputStageKernel.h | 4
1 file changed, 2 insertions, 2 deletions
diff --git a/arm_compute/core/NEON/kernels/NEGEMMLowpOffsetContributionOutputStageKernel.h b/arm_compute/core/NEON/kernels/NEGEMMLowpOffsetContributionOutputStageKernel.h
index c284ca5c5f..dadc5c221b 100644
--- a/arm_compute/core/NEON/kernels/NEGEMMLowpOffsetContributionOutputStageKernel.h
+++ b/arm_compute/core/NEON/kernels/NEGEMMLowpOffsetContributionOutputStageKernel.h
@@ -83,7 +83,7 @@ public:
      * @param[in]  vector_sum_row Input row-vector of sums of all the entries in each row of matrix A.
      * @param[in]  bias           Biases tensor. Only shared biases supported and it can be a nullptr if the addition of biases is not required.
      *                            Biases are 1D tensor with dimensions [OFM]. Data type supported: Same as @p mm_result.
-     * @param[out] output         Output tensor containing the final quantized result. Data type supported: QASYMM8
+     * @param[out] output         Output tensor containing the final quantized result. Data type supported: QASYMM8/QASYMM8_SIGNED
      * @param[in]  k              Number of matrix A columns or Matrix B rows
      * @param[in]  a_offset       Offset to be added to each element of the matrix A.
      * @param[in]  b_offset       Offset to be added to each element of the matrix B.
@@ -100,7 +100,7 @@ public:
      *                            Note: vector_sum_row can be a nullptr in case b_offset = 0. Data type supported: same as @p mm_result
      * @param[in]  bias           Biases tensor info. Only shared biases supported and it can be a nullptr if the addition of biases is not required.
      *                            Biases are 1D tensor with dimensions [OFM]. Data type supported: Same as @p mm_result.
-     * @param[in]  output         Output tensor info containing the final quantized result. Data type supported: QASYMM8
+     * @param[in]  output         Output tensor info containing the final quantized result. Data type supported: QASYMM8/QASYMM8_SIGNED
      * @param[in]  a_offset       Offset to be added to each element of the matrix A.
      * @param[in]  b_offset       Offset to be added to each element of the matrix B.
      * @param[in]  output_stage   GEMMLowp output stage info, providing the type of quantization and the necessary parameters.
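
The diff itself only widens the documented output data types. As a hedged
illustration of how a caller might probe the new path, the sketch below
assumes a static validate() overload whose parameter order matches the
doxygen above (mm_result, vector_sum_col, vector_sum_row, bias, output,
a_offset, b_offset, output_stage); the shapes and quantization values are
made up for the example:

    #include "arm_compute/core/NEON/kernels/NEGEMMLowpOffsetContributionOutputStageKernel.h"
    #include "arm_compute/core/TensorInfo.h"
    #include "arm_compute/core/Types.h"

    using namespace arm_compute;

    bool supports_signed_output()
    {
        // Illustrative shapes: a 4x8 S32 accumulator and a shared 1D bias.
        const TensorInfo mm_result(TensorShape(8U, 4U), 1, DataType::S32);
        const TensorInfo bias(TensorShape(8U), 1, DataType::S32);
        const TensorInfo output(TensorShape(8U, 4U), 1, DataType::QASYMM8_SIGNED);

        GEMMLowpOutputStageInfo output_stage{};
        output_stage.type                = GEMMLowpOutputStageType::QUANTIZE_DOWN_FIXEDPOINT;
        output_stage.gemmlowp_offset     = 10;          // offset added to each result
        output_stage.gemmlowp_multiplier = 1 << 30;     // result_fixedpoint_multiplier
        output_stage.gemmlowp_shift      = 5;           // result_shift
        output_stage.gemmlowp_min_bound  = -128;        // int8 clamp bounds
        output_stage.gemmlowp_max_bound  = 127;

        // With a_offset = b_offset = 0, vector_sum_col/vector_sum_row may be nullptr.
        const Status status = NEGEMMLowpOffsetContributionOutputStageKernel::validate(
            &mm_result, nullptr, nullptr, &bias, &output, 0, 0, output_stage);
        return bool(status);
    }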