author     Freddie Liardet <frederick.liardet@arm.com>    2021-04-22 14:55:17 +0100
committer  frederick.liardet <frederick.liardet@arm.com>  2021-04-28 14:17:10 +0000
commit     e92b0458a0432254f8e49bc306aebfe172bb4d0e (patch)
tree       c22bfced606c2b00990b74032c1bcd70529b74ca /arm_compute/runtime/CL/functions/CLDirectDeconvolutionLayer.h
parent     a47dcc229d912d4e4bb5afa37220d20451f243a7 (diff)
download   ComputeLibrary-e92b0458a0432254f8e49bc306aebfe172bb4d0e.tar.gz
Add per-channel quantization support for CLDeconvolutionLayer
Add QSYMM8_PER_CHANNEL support on weight input for CLDeconvolutionLayer.
When the weights are of a per-channel quantized type, the "Direct" method is
always used.
Also reduce the number of QSYMM8_PER_CHANNEL tests for NEDeconvolutionLayer.
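As background for what the new weight type means: QSYMM8_PER_CHANNEL stores one symmetric scale (zero-point 0) per output channel (OFM) of the weight tensor, instead of a single scale for the whole tensor. A minimal sketch, assuming hypothetical helper names (this is illustrative Python, not the Compute Library API):

```python
# Hypothetical sketch (not the Compute Library API): per-channel symmetric
# 8-bit quantization keeps one scale per OFM channel, with zero-point 0.

def quantize_channel(values, scale):
    """Quantize one channel's float weights to int8 with a symmetric scale."""
    return [max(-127, min(127, round(v / scale))) for v in values]

def dequantize_channel(q, scale):
    return [x * scale for x in q]

# One flattened [width * height * IFM] slice per OFM channel; each channel
# gets its own scale, so small-magnitude channels keep their precision.
channels = [[0.5, -1.0, 0.25], [0.02, -0.04, 0.01]]
scales = [max(abs(v) for v in ch) / 127.0 for ch in channels]
quantized = [quantize_channel(ch, s) for ch, s in zip(channels, scales)]
```

With a single per-tensor scale, the second channel above would collapse to a handful of quantization levels; per-channel scales avoid that, which is why it is the common choice for convolution and deconvolution weights.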
Resolves: COMPMID-3438
Signed-off-by: Freddie Liardet <frederick.liardet@arm.com>
Change-Id: I1330cac5142e19d21e322574fb8d912558745b02
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/5484
Reviewed-by: Michele Di Giorgio <michele.digiorgio@arm.com>
Reviewed-by: Giorgio Arena <giorgio.arena@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Diffstat (limited to 'arm_compute/runtime/CL/functions/CLDirectDeconvolutionLayer.h')
-rw-r--r--  arm_compute/runtime/CL/functions/CLDirectDeconvolutionLayer.h  8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/arm_compute/runtime/CL/functions/CLDirectDeconvolutionLayer.h b/arm_compute/runtime/CL/functions/CLDirectDeconvolutionLayer.h
index 232b9f59b6..a23500e16b 100644
--- a/arm_compute/runtime/CL/functions/CLDirectDeconvolutionLayer.h
+++ b/arm_compute/runtime/CL/functions/CLDirectDeconvolutionLayer.h
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2019-2020 Arm Limited.
+ * Copyright (c) 2019-2021 Arm Limited.
  *
  * SPDX-License-Identifier: MIT
  *
@@ -89,7 +89,7 @@ public:
      *
      * @param[in,out] input   Input tensor. 3 lower dimensions represent a single input, and an optional 4th dimension for batch of inputs.
      *                        Data types supported: QASYMM8_SIGNED/QASYMM8/F16/F32.
-     * @param[in]     weights The 4d weights with dimensions [width, height, IFM, OFM]. Data type supported: Same as @p input.
+     * @param[in]     weights The 4d weights with dimensions [width, height, IFM, OFM]. Data type supported: Same as @p input or QSYMM8_PER_CHANNEL if @p input is QASYMM8/QASYMM8_SIGNED.
      * @param[in]     bias    (Optional) The biases have one dimension.
      *                        Data type supported: Should match @p input data type, except for input of QASYMM8 and QASYMM8_SIGNED type where biases should be of S32 type
      * @param[out]    output  Output tensor. The output has the same number of dimensions as the @p input.
@@ -103,7 +103,7 @@ public:
      * @param[in]     compile_context The compile context to be used.
      * @param[in,out] input           Input tensor. 3 lower dimensions represent a single input, and an optional 4th dimension for batch of inputs.
      *                                Data types supported: QASYMM8_SIGNED/QASYMM8/F16/F32.
-     * @param[in]     weights         The 4d weights with dimensions [width, height, IFM, OFM]. Data type supported: Same as @p input.
+     * @param[in]     weights         The 4d weights with dimensions [width, height, IFM, OFM]. Data type supported: Same as @p input or QSYMM8_PER_CHANNEL if @p input is QASYMM8/QASYMM8_SIGNED.
      * @param[in]     bias            (Optional) The biases have one dimension.
      *                                Data type supported: Should match @p input data type, except for input of QASYMM8 and QASYMM8_SIGNED type where biases should be of S32 type
      * @param[out]    output          Output tensor. The output has the same number of dimensions as the @p input.
@@ -117,7 +117,7 @@ public:
      *
      * @param[in] input   Input tensor info. 3 lower dimensions represent a single input, and an optional 4th dimension for batch of inputs.
      *                    Data types supported: QASYMM8_SIGNED/QASYMM8/F16/F32.
-     * @param[in] weights The 4d weights info with dimensions [width, height, IFM, OFM]. Data type supported: Same as @p input.
+     * @param[in] weights The 4d weights info with dimensions [width, height, IFM, OFM]. Data type supported: Same as @p input or QSYMM8_PER_CHANNEL if @p input is QASYMM8/QASYMM8_SIGNED.
      * @param[in] bias    (Optional) The biases have one dimension.
      *                    Data type supported: Should match @p input data type, except for input of QASYMM8 and QASYMM8_SIGNED type where biases should be of S32 type
      * @param[in] output  Output tensor info. The output has the same number of dimensions as the @p input.
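The S32 bias requirement in the doc-comments above follows from fixed-point arithmetic: for quantized input, the int32 convolution accumulator holds values scaled by input_scale * weight_scale, so each bias must be pre-quantized to that same scale before it can be added. A sketch with made-up scale and bias values (illustrative only, not Compute Library code):

```python
# Made-up example values: for QASYMM8/QASYMM8_SIGNED input, each bias value is
# stored as S32 with scale input_scale * weight_scale[c] (zero-point 0), so it
# can be added directly to the int32 accumulator of its output channel.
input_scale = 0.05
weight_scales = [0.01, 0.02, 0.015, 0.03]   # one scale per OFM channel
biases = [0.4, -1.2, 0.05, 2.0]             # float biases per channel

bias_s32 = [round(b / (input_scale * w)) for b, w in zip(biases, weight_scales)]
```

Because the weight scale varies per channel under QSYMM8_PER_CHANNEL, the effective bias scale varies per channel too, which is why the bias keeps the wide S32 type rather than an 8-bit one.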