| field | value | date |
|---|---|---|
| author | Manuel Bottini <manuel.bottini@arm.com> | 2020-04-08 10:15:51 +0100 |
| committer | Manuel Bottini <manuel.bottini@arm.com> | 2020-04-17 14:10:38 +0000 |
| commit | 4c6bd514a8d424a29b776754f1b3426fa3a8c339 (patch) | |
| tree | ed20ea238ae069ab138b62ea6879e0c1b955b606 /arm_compute/core/CL/kernels/CLGEMMLowpQuantizeDownInt32ScaleByFloatKernel.h | |
| parent | d2f6d96cdc2ad2169c4abd0a8c4884f61ed6d186 (diff) | |
| download | ComputeLibrary-4c6bd514a8d424a29b776754f1b3426fa3a8c339.tar.gz | |
COMPMID-3280: Make all ML primitives for CL use the new interface - Part 1
- Only CLKernels have been updated
Change-Id: Ife55b847c2e39e712a186eb6ca452503d5b66937
Signed-off-by: Manuel Bottini <manuel.bottini@arm.com>
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/3001
Reviewed-by: Michele Di Giorgio <michele.digiorgio@arm.com>
Reviewed-by: Michalis Spyrou <michalis.spyrou@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Diffstat (limited to 'arm_compute/core/CL/kernels/CLGEMMLowpQuantizeDownInt32ScaleByFloatKernel.h')

 arm_compute/core/CL/kernels/CLGEMMLowpQuantizeDownInt32ScaleByFloatKernel.h | 10 ++++++++++
 1 file changed, 10 insertions(+), 0 deletions(-)
diff --git a/arm_compute/core/CL/kernels/CLGEMMLowpQuantizeDownInt32ScaleByFloatKernel.h b/arm_compute/core/CL/kernels/CLGEMMLowpQuantizeDownInt32ScaleByFloatKernel.h
index 439f569d07..0b5b22cafc 100644
--- a/arm_compute/core/CL/kernels/CLGEMMLowpQuantizeDownInt32ScaleByFloatKernel.h
+++ b/arm_compute/core/CL/kernels/CLGEMMLowpQuantizeDownInt32ScaleByFloatKernel.h
@@ -67,6 +67,16 @@ public:
      * @param[in]  info            Output stage info. Used to pass the quantized output data type
      */
     void configure(const ICLTensor *input, const ICLTensor *bias, ICLTensor *output, const GEMMLowpOutputStageInfo *info);
+    /** Initialise the kernel's input and output.
+     *
+     * @param[in]  compile_context The compile context to be used.
+     * @param[in]  input           Input tensor. Data type supported: S32
+     * @param[in]  bias            Biases tensor. Only shared biases supported and it can be a nullptr if the biases addition is not required.
+     *                             Biases are 1D tensor with dimensions [OFM]. Data type supported: Same as @p input.
+     * @param[out] output          Output tensor. Data type supported: QASYMM8/QASYMM8_SIGNED
+     * @param[in]  info            Output stage info. Used to pass the quantized output data type
+     */
+    void configure(CLCompileContext &compile_context, const ICLTensor *input, const ICLTensor *bias, ICLTensor *output, const GEMMLowpOutputStageInfo *info);
     /** Static function to check if given info will lead to a valid configuration of @ref CLGEMMLowpQuantizeDownInt32ScaleByFloatKernel
      *
      * @param[in]  input           Input tensor. Data type supported: S32
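For readers tracking the interface change, here is a minimal, hypothetical call-site sketch (not part of this commit) contrasting the legacy overload with the new one. It assumes the compile context is obtained via CLKernelLibrary::get().get_compile_context(), an accessor present in recent library versions; the tensor and output-stage setup is elided, and the helper name configure_quantize_down is invented for illustration.

```cpp
// Sketch only: illustrates the call-site difference introduced by the new overload.
// Assumption: get_compile_context() is available on CLKernelLibrary; it does not
// appear in this diff.
#include "arm_compute/core/CL/CLCompileContext.h"
#include "arm_compute/core/CL/CLKernelLibrary.h"
#include "arm_compute/core/CL/kernels/CLGEMMLowpQuantizeDownInt32ScaleByFloatKernel.h"
#include "arm_compute/runtime/CL/CLTensor.h"

using namespace arm_compute;

void configure_quantize_down(CLTensor &input, CLTensor &bias, CLTensor &output,
                             const GEMMLowpOutputStageInfo &info)
{
    CLGEMMLowpQuantizeDownInt32ScaleByFloatKernel kernel;

    // Legacy overload: the kernel is compiled through the global
    // CLKernelLibrary singleton behind the scenes.
    // kernel.configure(&input, &bias, &output, &info);

    // New overload: the CLCompileContext is passed explicitly, so the caller
    // controls the context the kernel is built against.
    CLCompileContext &compile_context = CLKernelLibrary::get().get_compile_context();
    kernel.configure(compile_context, &input, &bias, &output, &info);
}
```

Passing the context as an argument removes the kernel's hidden dependency on the global kernel library, which presumably is what allows the later parts of COMPMID-3280 to thread a single compile context through functions and runtime objects as well.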