author    ramelg01 <ramy.elgammal@arm.com>  2022-02-08 23:01:31 +0000
committer Ramy Elgammal <ramy.elgammal@arm.com>  2022-02-11 11:01:10 +0000
commit    89aa4eb56d56c81a9d53f94dffa5fa88742e986c (patch)
tree      64ac3cb37d44fcfb8cf7add9100a8f0230a51d8f /src/gpu/cl/kernels/ClDepthConcatenateKernel.cpp
parent    2134d1bdb81e4959560d5becea06c43c083a9811 (diff)
download  ComputeLibrary-89aa4eb56d56c81a9d53f94dffa5fa88742e986c.tar.gz
Improve start-up time for concatenation layers
- Pass tensor dimensions at runtime rather than at compile time
- Add a guard macro to compile only the kernel of interest
Resolves: COMPMID-5121
Signed-off-by: Ramy Elgammal <ramy.elgammal@arm.com>
Change-Id: I76b7c0cf56d803f58ebff5494c904ace2a86ef5a
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/7097
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Gian Marco Iodice <gianmarco.iodice@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Diffstat (limited to 'src/gpu/cl/kernels/ClDepthConcatenateKernel.cpp')
-rw-r--r-- | src/gpu/cl/kernels/ClDepthConcatenateKernel.cpp | 9 |
1 file changed, 7 insertions, 2 deletions
diff --git a/src/gpu/cl/kernels/ClDepthConcatenateKernel.cpp b/src/gpu/cl/kernels/ClDepthConcatenateKernel.cpp
index d716f1e430..9704294d62 100644
--- a/src/gpu/cl/kernels/ClDepthConcatenateKernel.cpp
+++ b/src/gpu/cl/kernels/ClDepthConcatenateKernel.cpp
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2017-2021 Arm Limited.
+ * Copyright (c) 2017-2022 Arm Limited.
  *
  * SPDX-License-Identifier: MIT
  *
@@ -91,8 +91,13 @@ void ClDepthConcatenateKernel::configure(const CLCompileContext &compile_context
         build_opts.add_option("-DSCALE_OUT=" + float_to_string_with_full_precision(oq_info.scale));
     }
 
+    std::string kernel_name = "concatenate";
+
+    // A macro guard to compile ONLY the kernel of interest
+    build_opts.add_option("-D" + upper_string(kernel_name));
+
     // Create kernel
-    _kernel = create_kernel(compile_context, "concatenate", build_opts.options());
+    _kernel = create_kernel(compile_context, kernel_name, build_opts.options());
 
     // Configure kernel window
     auto win = calculate_max_window(*dst, Steps(num_elems_processed_per_iteration));