From 448a81fcec04333364a1e3266d5081596d3a0477 Mon Sep 17 00:00:00 2001
From: Georgios Pinitas
Date: Thu, 21 Nov 2019 14:10:25 +0000
Subject: COMPMID-2805: Add QASYMM8_SIGNED support in NEGEMMLowpOutputStage

Add support for requantizing down from S32 to Int8 with fixed point
requantization. This involves the following:
- Compute the fixed point multiplication of each input entry by result_fixedpoint_multiplier
- Add the bias to the result if the bias tensor is not a nullptr
- Perform a round-to-nearest division by a power of two using result_shift
- Add the offset to each result
- Clamp the value between the specified min and max bounds
- Cast to the int8 data type

Change-Id: I641b3fac0833c568d8565ccb859bbc561a24c17d
Signed-off-by: Georgios Pinitas
Reviewed-on: https://review.mlplatform.org/c/2340
Comments-Addressed: Arm Jenkins
Reviewed-by: Michele Di Giorgio
Tested-by: Arm Jenkins
---
 .../core/NEON/kernels/NEGEMMLowpOffsetContributionOutputStageKernel.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arm_compute/core/NEON/kernels/NEGEMMLowpOffsetContributionOutputStageKernel.h b/arm_compute/core/NEON/kernels/NEGEMMLowpOffsetContributionOutputStageKernel.h
index c284ca5c5f..dadc5c221b 100644
--- a/arm_compute/core/NEON/kernels/NEGEMMLowpOffsetContributionOutputStageKernel.h
+++ b/arm_compute/core/NEON/kernels/NEGEMMLowpOffsetContributionOutputStageKernel.h
@@ -83,7 +83,7 @@ public:
      * @param[in]  vector_sum_row Input row-vector of sums of all the entries in each row of matrix A.
      * @param[in]  bias           Biases tensor. Only shared biases supported and it can be a nullptr if the addition of biases is not required.
      *                            Biases are 1D tensor with dimensions [OFM]. Data type supported: Same as @p mm_result.
-     * @param[out] output         Output tensor containing the final quantized result. Data type supported: QASYMM8
+     * @param[out] output         Output tensor containing the final quantized result. Data type supported: QASYMM8/QASYMM8_SIGNED
      * @param[in]  k              Number of matrix A columns or Matrix B rows
      * @param[in]  a_offset       Offset to be added to each element of the matrix A.
      * @param[in]  b_offset       Offset to be added to each element of the matrix B.
@@ -100,7 +100,7 @@ public:
      *                            Note: vector_sum_row can be a nullptr in case b_offset = 0. Data type supported: same as @p mm_result
      * @param[in] bias            Biases tensor info. Only shared biases supported and it can be a nullptr if the addition of biases is not required.
      *                            Biases are 1D tensor with dimensions [OFM]. Data type supported: Same as @p mm_result.
-     * @param[in] output          Output tensor info containing the final quantized result. Data type supported: QASYMM8
+     * @param[in] output          Output tensor info containing the final quantized result. Data type supported: QASYMM8/QASYMM8_SIGNED
      * @param[in] a_offset        Offset to be added to each element of the matrix A.
      * @param[in] b_offset        Offset to be added to each element of the matrix B.
      * @param[in] output_stage    GEMMLowp output stage info, providing the type of quantization and the necessary parameters.
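
For reference, the snippet below is a minimal scalar sketch of the fixed-point requantization steps listed in the commit message above; it is not the NEON kernel itself. The helper names (requantize_s32_to_int8, rounding_divide_by_pow2) and the exact signature are illustrative assumptions, and in this sketch the bias is folded into the S32 accumulator before requantization, which is one common placement.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>

// Round-to-nearest division by 2^exponent, in the spirit of
// gemmlowp-style rounding division by a power of two.
inline int32_t rounding_divide_by_pow2(int32_t x, int exponent)
{
    const int32_t mask      = (int32_t(1) << exponent) - 1;
    const int32_t remainder = x & mask;
    const int32_t threshold = (mask >> 1) + ((x < 0) ? 1 : 0);
    return (x >> exponent) + ((remainder > threshold) ? 1 : 0);
}

// Hypothetical helper: requantize one S32 accumulator entry down to int8
// with fixed-point requantization.
inline int8_t requantize_s32_to_int8(int32_t        acc,
                                     const int32_t *bias,
                                     std::size_t    channel,
                                     int32_t        result_fixedpoint_multiplier,
                                     int            result_shift,
                                     int32_t        result_offset,
                                     int32_t        min_bound,
                                     int32_t        max_bound)
{
    // Add the (optional) bias to the S32 accumulator, if a bias tensor was given.
    if(bias != nullptr)
    {
        acc += bias[channel];
    }

    // Fixed-point multiplication by result_fixedpoint_multiplier:
    // round(acc * multiplier / 2^31), i.e. a rounding doubling high multiply.
    const int64_t ab    = static_cast<int64_t>(acc) * result_fixedpoint_multiplier;
    const int64_t nudge = (ab >= 0) ? (int64_t(1) << 30) : 1 - (int64_t(1) << 30);
    int32_t value       = static_cast<int32_t>((ab + nudge) / (int64_t(1) << 31));

    // Round-to-nearest division by a power of two using result_shift.
    value = rounding_divide_by_pow2(value, result_shift);

    // Add the output offset.
    value += result_offset;

    // Clamp between the specified min and max bounds
    // (e.g. [-128, 127] for a QASYMM8_SIGNED output).
    value = std::max(min_bound, std::min(max_bound, value));

    // Cast to the int8 data type.
    return static_cast<int8_t>(value);
}
```

The min/max bounds are what let the same output stage serve both unsigned and signed targets: for QASYMM8_SIGNED the clamp keeps the requantized value inside the int8 range before the final cast.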