| author | Gian Marco Iodice <gianmarco.iodice@arm.com> | 2021-04-16 15:08:59 +0100 |
|---|---|---|
| committer | Gian Marco Iodice <gianmarco.iodice@arm.com> | 2021-07-02 15:56:45 +0000 |
| commit | 8155c0253c00aa9e26651361460c66feb39829a6 (patch) | |
| tree | 41dacc432d4d1f1daa32d20d15e5120c11b9fa56 /src/core/CL/cl_kernels/activation_quant_helpers.h | |
| parent | 2eb5d16b839cbc28c6cb7f0de7a0bf15290b425a (diff) | |
| download | ComputeLibrary-8155c0253c00aa9e26651361460c66feb39829a6.tar.gz | |
Rework OpenCL Depthwise Convolution
- Remove the dedicated kernels for NCHW; NCHW now goes through NHWC plus a permute
- Remove the specialized kernels for 3x3 NHWC
- Simplify CLDepthwiseConvolutionLayer.cpp to call just the native
  implementation for both floating-point and quantized data types
- Develop two parametric OpenCL kernels for the depthwise convolution layer in
  NHWC (floating-point and quantized)
- Add support for exporting the weights to cl_image
- Extend the tests for depthwise convolution on OpenCL
Resolves COMPMID-4417
Change-Id: Ibe533f79c2860f9cac8e921895d5a8f947753a5c
Signed-off-by: Gian Marco Iodice <gianmarco.iodice@arm.com>
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/5893
Reviewed-by: Giorgio Arena <giorgio.arena@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Diffstat (limited to 'src/core/CL/cl_kernels/activation_quant_helpers.h')
| -rw-r--r-- | src/core/CL/cl_kernels/activation_quant_helpers.h | 9 |
|---|---|---|

1 file changed, 7 insertions(+), 2 deletions(-)
```diff
diff --git a/src/core/CL/cl_kernels/activation_quant_helpers.h b/src/core/CL/cl_kernels/activation_quant_helpers.h
index a32e4e94a3..c420578546 100644
--- a/src/core/CL/cl_kernels/activation_quant_helpers.h
+++ b/src/core/CL/cl_kernels/activation_quant_helpers.h
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2019-2020 Arm Limited.
+ * Copyright (c) 2019-2021 Arm Limited.
  *
  * SPDX-License-Identifier: MIT
  *
@@ -51,7 +51,12 @@ inline TYPE lu_brelu_op(TYPE x)
 // Hard Swish Activation
 inline TYPE hard_swish_op(TYPE x)
 {
-    return (x * ((min(max((TYPE)(x + (TYPE)3.f), (TYPE)0.f), (TYPE)6.f)) * (TYPE)0.166666667f));
+    return (x * ((min(max((TYPE)(x + (TYPE)3.f), (TYPE)0.f), (TYPE)6.f)) * (TYPE)0.166666667f));
+}
+
+inline TYPE identiy_op(TYPE x)
+{
+    return x;
 }

 #define ACTIVATION_OP2(op, x) op##_op(x)
```