author | Sang-Hoon Park <sang-hoon.park@arm.com> | 2021-03-02 09:41:13 +0000 |
---|---|---|
committer | Sang-Hoon Park <sang-hoon.park@arm.com> | 2021-03-03 15:44:15 +0000 |
commit | 5ff38da7e18e91243a7f6b8e642f8b40f5846068 (patch) | |
tree | f0e1ef0fa2ddeca0e173de33d6d526387d5a4185 /arm_compute/runtime/CL/functions | |
parent | 473cb01e84cef6cab057e9492bfa3b68f708e5d7 (diff) | |
download | ComputeLibrary-5ff38da7e18e91243a7f6b8e642f8b40f5846068.tar.gz | |
Create ClPRelu operator
Turn the class that lived in the experimental namespace into a
ClOperator, in preparation for porting to the new interface.

As part of this change, in-place computation is now handled
correctly, in line with the class description, and test cases
covering in-place computation have been added.
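
For illustration, the sketch below shows one way the public CLPReluLayer could be exercised with in-place computation, i.e. passing the input tensor as the output. The configure/run calls are the function's public interface; the tensor shape, data type, and main() scaffolding are assumptions made for this example only.

```cpp
#include "arm_compute/core/TensorInfo.h"
#include "arm_compute/core/TensorShape.h"
#include "arm_compute/core/Types.h"
#include "arm_compute/runtime/CL/CLScheduler.h"
#include "arm_compute/runtime/CL/CLTensor.h"
#include "arm_compute/runtime/CL/functions/CLPReluLayer.h"

using namespace arm_compute;

int main()
{
    // Initialise the OpenCL scheduler (context, queue, kernel cache).
    CLScheduler::get().default_init();

    // Shape and data type are illustrative assumptions.
    CLTensor input{}, alpha{};
    input.allocator()->init(TensorInfo(TensorShape(16U, 16U, 3U), 1, DataType::F32));
    alpha.allocator()->init(TensorInfo(TensorShape(16U, 16U, 3U), 1, DataType::F32));

    // Passing the input tensor as the output requests in-place computation,
    // the behaviour this change aligns with the class description.
    CLPReluLayer prelu{};
    prelu.configure(&input, &alpha, &input);

    // Allocate the backing CL buffers after configuration, then execute.
    input.allocator()->allocate();
    alpha.allocator()->allocate();
    prelu.run();
    CLScheduler::get().sync();
    return 0;
}
```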
Partially Implements: COMPMID-4184
Signed-off-by: Sang-Hoon Park <sang-hoon.park@arm.com>
Change-Id: I71c18ab47fe0370a2060d5303a58ff3650c0093f
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/5201
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Michele Di Giorgio <michele.digiorgio@arm.com>
Diffstat (limited to 'arm_compute/runtime/CL/functions')
-rw-r--r-- | arm_compute/runtime/CL/functions/CLPReluLayer.h | 37 |
1 file changed, 0 insertions, 37 deletions
```diff
diff --git a/arm_compute/runtime/CL/functions/CLPReluLayer.h b/arm_compute/runtime/CL/functions/CLPReluLayer.h
index 1751fda030..7b6667044e 100644
--- a/arm_compute/runtime/CL/functions/CLPReluLayer.h
+++ b/arm_compute/runtime/CL/functions/CLPReluLayer.h
@@ -32,43 +32,6 @@ namespace arm_compute
 class CLCompileContext;
 class ICLTensor;
 class ITensorInfo;
-
-namespace experimental
-{
-/** Basic function to run @ref arm_compute::opencl::kernels::ClArithmeticKernel for PRELU
- *
- * @note The function implements an activation layer with the PRELU activation function.
- */
-class CLPReluLayer : public ICLOperator
-{
-public:
-    /** Default Constructor */
-    CLPReluLayer();
-    /** Set the input and output tensor.
-     *
-     * @note If the output tensor is a nullptr or is equal to the input, the activation function will be performed in-place
-     *
-     * @param[in]  compile_context The compile context to be used.
-     * @param[in]  input           Source tensor. Data types supported: QASYMM8/QASYMM8_SIGNED/F16/F32.
-     * @param[in]  alpha           PRelu layer parameters. Data types supported: same of @p input.
-     * @param[out] output          Destination tensor. Data type supported: same as @p input
-     */
-    void configure(const CLCompileContext &compile_context, ITensorInfo *input, ITensorInfo *alpha, ITensorInfo *output);
-    /** Static function to check if given info will lead to a valid configuration of @ref CLPReluLayer
-     *
-     * @param[in] input  Source tensor info. Data types supported: QASYMM8/QASYMM8_SIGNED/F16/F32.
-     * @param[in] alpha  PRelu layer parameters. Data types supported: same of @p input.
-     * @param[in] output Destination tensor info. Data type supported: same as @p input
-     *
-     * @return a status
-     */
-    static Status validate(const ITensorInfo *input, const ITensorInfo *alpha, const ITensorInfo *output);
-
-    // Inherited methods overridden:
-    void run(ITensorPack &tensors) override;
-};
-} // namespace experimental
-
 /** Basic function to run @ref opencl::kernels::ClArithmeticKernel for PRELU
  *
  * @note The function implements an activation layer with the PRELU activation function.
```
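
For reference, PRELU computes f(x) = x for positive x and f(x) = alpha * x otherwise, with alpha supplied as a tensor rather than a single scalar. The minimal scalar sketch below, not part of the library, illustrates the semantics assuming equal-length flat arrays:

```cpp
#include <cstddef>

// Reference PRELU semantics: out[i] = in[i] if in[i] > 0, else alpha[i] * in[i].
// Calling this with out == in performs the computation in-place, mirroring the
// in-place behaviour described for the CL operator when output equals input.
void prelu_reference(const float *in, const float *alpha, float *out, std::size_t n)
{
    for(std::size_t i = 0; i < n; ++i)
    {
        out[i] = (in[i] > 0.0f) ? in[i] : alpha[i] * in[i];
    }
}
```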