|
Partially Resolves: COMPMID-6926
Signed-off-by: Ramy Elgammal <ramy.elgammal@arm.com>
Change-Id: I9d13c4319042f639a8c5be385b63857d77fefff2
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/11768
Reviewed-by: Michael Tyler <michael.tyler@arm.com>
Reviewed-by: Gunes Bayir <gunes.bayir@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Benchmark: Arm Jenkins <bsgcomp@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
|
|
- Add support for mixed sign quantized convolution.
- Add support for mixed sign dequantized GEMM.
- Add SME FP16 GEMV kernel.
- Change SME vector length function to use RDSVL instead of static variable.
- Add GEMM dilation support internally (not exposed yet).
- Remove unused "get_default_activation_values" functions.
- Add SVE fixed format interleaved BF16 DOT kernel.
- Updates and optimizations to assembly kernels.
Resolves COMPMID-6926
Change-Id: I227f502502611d4cc4111c89e30c53ce94079544
Signed-off-by: Michael Tyler <michael.tyler@arm.com>
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/11570
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Gunes Bayir <gunes.bayir@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Benchmark: Arm Jenkins <bsgcomp@arm.com>
|
|
* Enable FP16 kernels in
NEROIAlignLayerKernel
NEComputeAllAnchorsKernel
NEBoundingBoxTransformKernel
NEInstanceNormalizationLayerKernel
NEBatchNormalizationLayerKernel
* The FP16 kernels had been disabled due to the use of __ARM_FEATURE_FP16_VECTOR_ARITHMETIC (see the guard sketch below)
* Resolves MLCE-1305
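A minimal sketch of the kind of preprocessor guard involved. The macro combination below is an assumption for illustration (ARM_COMPUTE_ENABLE_FP16 is the library's FP16 build flag); it is not the exact change made to these kernels.

```cpp
// guard_sketch.cpp -- illustrative only; macro names are assumptions.
#include <cstdio>

#if defined(ARM_COMPUTE_ENABLE_FP16) && defined(__ARM_FEATURE_FP16_VECTOR_ARITHMETIC)
// FP16 path: compiled only when both the library flag and the compiler
// feature macro are available for this translation unit.
void fp16_kernel_stub() { std::puts("FP16 kernel compiled in"); }
#else
// Fallback so the file still builds for non-FP16 targets.
void fp16_kernel_stub() { std::puts("FP16 kernel not available"); }
#endif

int main() { fp16_kernel_stub(); }
```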
Change-Id: Ib8dd3cad631667018b25db4ba76007dbfb4bf5a5
Signed-off-by: Pablo Marquez Tello <pablo.tello@arm.com>
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/11677
Reviewed-by: Viet-Hoa Do <viet-hoa.do@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Benchmark: Arm Jenkins <bsgcomp@arm.com>
|
|
* The softmax kernel was using SME2 instructions on non-SME2 devices
* Resolves MLCE-1304
Change-Id: I9d7d94443e7c9df4e7c1a05eeef6838f530b357b
Signed-off-by: Pablo Marquez Tello <pablo.tello@arm.com>
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/11676
Benchmark: Arm Jenkins <bsgcomp@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Viet-Hoa Do <viet-hoa.do@arm.com>
|
|
Resolves: COMPMID-6901
Change-Id: Idcd3f5f5d90f4073aaf116c0586e46013fbd64f7
Signed-off-by: Gunes Bayir <gunes.bayir@arm.com>
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/11605
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Viet-Hoa Do <viet-hoa.do@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Benchmark: Arm Jenkins <bsgcomp@arm.com>
|
|
1. Remove the unnecessary restriction that limited the LITTLE-core exclusion to systems with little, mid and big cores.
2. Allow the suggested number of threads to be overridden when the user sets the number of threads to a lower value.
Resolves [COMPMID-7014]
Signed-off-by: Omar Al Khatib <omar.alkhatib@arm.com>
Change-Id: Ifb76ef4454f38dd2e3e5781b5dfea07c044aeb74
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/11604
Benchmark: Arm Jenkins <bsgcomp@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Pablo Marquez Tello <pablo.tello@arm.com>
|
|
On systems with BIG/MID/LITTLE cores, we need to exclude the LITTLE cores.
This change updates CPUInfo to detect the number of LITTLE cores and sets num_threads to TOTAL_CORES - NUM_LITTLE_CORES (see the sketch below).
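An illustrative sketch of the thread-count policy described above; the helper and its parameters are hypothetical, not the library's CPUInfo API. The user-override clamp reflects the follow-up entry above.

```cpp
// Derive a default thread count that excludes LITTLE cores (illustrative only).
#include <algorithm>
#include <cstdio>

int suggested_threads(int total_cores, int little_cores, int user_requested = 0)
{
    // Default: use everything except the LITTLE cores.
    int suggestion = std::max(1, total_cores - little_cores);
    // Honour an explicitly lower user setting, as in the related entry above.
    if (user_requested > 0)
        suggestion = std::min(suggestion, user_requested);
    return suggestion;
}

int main()
{
    std::printf("%d\n", suggested_threads(/*total*/ 8, /*little*/ 4));    // prints 4
    std::printf("%d\n", suggested_threads(8, 4, /*user_requested*/ 2));   // prints 2
}
```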
Resolves [COMPMID-7014]
Signed-off-by: Omar Al Khatib <omar.alkhatib@arm.com>
Change-Id: I3e1772e5b64d1c45304860be43233b7e5dd8dba1
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/11565
Reviewed-by: Pablo Marquez Tello <pablo.tello@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Benchmark: Arm Jenkins <bsgcomp@arm.com>
|
|
* Resolves MLCE-1285
Change-Id: I22a37972aefe1c0f04accbc798baa18358ed8959
Signed-off-by: Pablo Marquez Tello <pablo.tello@arm.com>
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/11552
Reviewed-by: Viet-Hoa Do <viet-hoa.do@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Benchmark: Arm Jenkins <bsgcomp@arm.com>
|
|
- Enable FP16 code when building multi_isa for the armv8-a architecture in
  order to run on higher architectures, e.g. v8.2-A or v8.6-A.
- When this build is run on a plain v8 device, validation will stop it,
  flagging that the architecture does not support FP16.
Resolves: COMPMID-7013
Signed-off-by: Ramy Elgammal <ramy.elgammal@arm.com>
Change-Id: I0d445e2fade31c1156d7a6e142edf2a7f84d3622
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/11544
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Pablo Marquez Tello <pablo.tello@arm.com>
Benchmark: Arm Jenkins <bsgcomp@arm.com>
|
|
In NEQuantizeLayer for QASYMM8_SIGNED, the rounding was inconsistent
between the unrolled loop and the leftover loop, which meant identical
values (e.g. 0.5) at different indices of a Tensor could round to
different values (0 or 1 in this case). We have changed vcvtaq to
vcvtnq to round to the nearest, with ties to even. This matches the
default fegetround setting, so it is a sensible default.
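A scalar illustration of the two rounding behaviours involved (not the actual NEON intrinsics): vcvtaq rounds to nearest with ties away from zero, while vcvtnq rounds to nearest with ties to even, matching the default fegetround() setting.

```cpp
#include <cfenv>
#include <cmath>
#include <cstdio>

int main()
{
    std::fesetround(FE_TONEAREST);
    for (float v : {0.5f, 1.5f, 2.5f, -0.5f})
    {
        float ties_away = std::round(v);      // like vcvtaq:  1, 2, 3, -1
        float ties_even = std::nearbyint(v);  // like vcvtnq:  0, 2, 2, -0
        std::printf("%5.1f -> away: %4.1f  even: %4.1f\n", v, ties_away, ties_even);
    }
}
```

The 0.5 case is exactly the inconsistency described above: the two modes give 1 and 0 respectively.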
Relates-to: COMPMID-6994
Signed-off-by: Jonathan Deakin <jonathan.deakin@arm.com>
Change-Id: I8e7ecb1b8dbdd3e887697a92046af99ed33fc78f
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/11532
Reviewed-by: Gunes Bayir <gunes.bayir@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Benchmark: Arm Jenkins <bsgcomp@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
|
|
Resolves: [COMPMID-6917]
Change-Id: Id8b96efd29f6c61dd43a371341c6e1fe087953e9
Signed-off-by: Omar Al Khatib <omar.alkhatib@arm.com>
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/11509
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Gunes Bayir <gunes.bayir@arm.com>
Benchmark: Arm Jenkins <bsgcomp@arm.com>
|
|
SVE BF16 kernels need to check for svebf16(), not just bf16().
Change-Id: I89494aac40166eba59719bed9822194a48ac282d
Signed-off-by: David Mansell <David.Mansell@arm.com>
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/11520
Reviewed-by: Pablo Marquez Tello <pablo.tello@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Benchmark: Arm Jenkins <bsgcomp@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
|
|
As the reorder kernel is called with WeightFormat OHWIo8 on hardware that
does not support it (e.g. a 128-bit vector length), adapt the test case and
add a kernel implementation for this edge case.
This fixes the mismatching values that appeared when the OHWIo8 fixture
was run with a 128-bit vector length.
Resolves: ONCPUML-1523, COMPMID-6281
Signed-off-by: Radu Salavat <radu.salavat@arm.com>
Change-Id: Iaa1a3b486d1725a2d6031051aa544082c1bbe913
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/11421
Reviewed-by: Gunes Bayir <gunes.bayir@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Benchmark: Arm Jenkins <bsgcomp@arm.com>
|
|
Change-Id: I69aa973e61df950060807a31230a1edd91add498
Signed-off-by: David Mansell <David.Mansell@arm.com>
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/11514
Reviewed-by: Gunes Bayir <gunes.bayir@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Benchmark: Arm Jenkins <bsgcomp@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
|
|
Resolves: COMPMID-6899
Change-Id: I3743f2c9e5c21e1ec9f4c81d08c148666afad33a
Signed-off-by: Gunes Bayir <gunes.bayir@arm.com>
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/11505
Benchmark: Arm Jenkins <bsgcomp@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Jakub Sujak <jakub.sujak@arm.com>
Reviewed-by: Sang Won Ha <sangwon.ha@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
|
|
accumulated
Similar to https://review.mlplatform.org/c/ml/ComputeLibrary/+/11500, s8f32 kernels do not support accumulate mode. This patch modifies the kernel selection and also adds more tests to better exercise these cases.
Partially Resolves: COMPMID-6995
Change-Id: I40e19446c012eb7334e4511e254cce0d635aa234
Signed-off-by: Gunes Bayir <gunes.bayir@arm.com>
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/11503
Benchmark: Arm Jenkins <bsgcomp@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Radu Salavat <radu.salavat@arm.com>
Reviewed-by: Jakub Sujak <jakub.sujak@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
|
|
SME2 kernels use a different accumulation buffer, and the destination tensor is not copied into this buffer as the initial value, which causes mismatches. This patch modifies the kernel selection algorithm so that it does not select SME2 kernels when accumulation is required.
Resolves: COMPMID-6995
Change-Id: I82da3cba41729f938a046f26b41b63ff5716c02d
Signed-off-by: Gunes Bayir <gunes.bayir@arm.com>
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/11500
Reviewed-by: Jakub Sujak <jakub.sujak@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Benchmark: Arm Jenkins <bsgcomp@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
|
|
Resolves: COMPMID-6894, COMPMID-6896
Change-Id: I9d29fd3701a7e0f28d83f81a6c42a7234c2587c3
Signed-off-by: Gunes Bayir <gunes.bayir@arm.com>
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/11477
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Ramy Elgammal <ramy.elgammal@arm.com>
Dynamic-Fusion: Ramy Elgammal <ramy.elgammal@arm.com>
Benchmark: Arm Jenkins <bsgcomp@arm.com>
|
|
* All per-channel requantizing hybrid assembly kernels require
these buffers to be padded.
* Resolves MLCE-1255
Change-Id: I892b8ee9b31e079189ec72f3fc6da4ce5efda974
Signed-off-by: Pablo Marquez Tello <pablo.tello@arm.com>
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/11491
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Gunes Bayir <gunes.bayir@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Benchmark: Arm Jenkins <bsgcomp@arm.com>
|
|
Resolves: [COMPMID-6891, COMPMID-6892]
Change-Id: I5b094fff1bff4c4c59cc44f7d6beab0e40133d8e
Signed-off-by: Mohammed Suhail Munshi <MohammedSuhail.Munshi@arm.com>
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/11394
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Gunes Bayir <gunes.bayir@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Benchmark: Arm Jenkins <bsgcomp@arm.com>
|
|
Signed-off-by: Sunita Nadampalli <nadampal@amazon.com>
Change-Id: I21eca31d97d6e2ca8279adb9db65f11540e72689
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/11396
Benchmark: Arm Jenkins <bsgcomp@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Pablo Marquez Tello <pablo.tello@arm.com>
|
|
- Add support for QASYMM8_SIGNED*QASYMM8_SIGNED->F32 in
  CpuGemmLowpMatrixMultiplyCore
- Add an s8f32 kernel using the existing s8->s32 kernels with a new
  DequantizeFloat OutputStage; the structure is similar to Requantize32
  but works the opposite way around (see the sketch after this list).
- Add SME s8f32 kernels with integrated support for DequantizeFloat.
- Add scale to CpuGemmLowpOffsetContributionKernel.
- Add virtual dequantize scale to gemm_common, only implemented for
gemm_interleaved.
- Update year to 2024 in generate_build_files.
- Add dynamic flag to QuantizationInfo which signals to operators that
it can change after configuration
- Add support for dynamic quantization in NEGEMMLowpMatrixMultiplyCore
- Add dynamic quantization fixture by extending
GEMMLowpGenericMatrixMultiplyCoreValidationFixture
- Add GEMMLowpDequantizedMatrixMultiplyValidationFixture
- Store k (number of cols of A) rather than k_offset in the offset
contribution kernels so that we can recompute it when the other
offsets change
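A small sketch of the dequantization math that an s8f32 output stage performs on the raw integer accumulators, using the offset-contribution terms listed above. The function and its names are illustrative, not the library's API; the real kernels fuse this with the GEMM.

```cpp
#include <cstdint>
#include <vector>

// acc:     raw int32 accumulators of the s8 x s8 GEMM (m x n, row-major)
// row_sum: per-row sums of A (length m); col_sum: per-column sums of B (length n)
// k:       depth of the GEMM (number of columns of A)
void dequantize_output(const std::vector<int32_t> &acc, const std::vector<int32_t> &row_sum,
                       const std::vector<int32_t> &col_sum, int m, int n, int k,
                       int32_t a_offset, int32_t b_offset, float a_scale, float b_scale,
                       std::vector<float> &dst)
{
    const float dst_scale = a_scale * b_scale; // combined dequantization scale
    for (int i = 0; i < m; ++i)
        for (int j = 0; j < n; ++j)
        {
            // Remove the offset contributions, then scale to float.
            const int32_t corrected = acc[i * n + j]
                                    - a_offset * col_sum[j]
                                    - b_offset * row_sum[i]
                                    + k * a_offset * b_offset;
            dst[i * n + j] = dst_scale * static_cast<float>(corrected);
        }
}
```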
Relates-to: ONCPUML-1444, MLINFSW-439
Co-authored-by: Milos Puzovic <Milos.Puzovic@arm.com>
Co-authored-by: David Mansell <David.Mansell@arm.com>
Change-Id: I58a3acf2c09289a303e52eea6b336a696a5bc8da
Signed-off-by: Jonathan Deakin <jonathan.deakin@arm.com>
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/11022
Reviewed-by: Gunes Bayir <gunes.bayir@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Benchmark: Arm Jenkins <bsgcomp@arm.com>
|
|
In addition to the softmax kernel, this patch fixes minor issues in the fp32 implementation.
Resolves: COMPMID-6920
Change-Id: Ibbd9f0af5f2a93fba0e92d72ba437279c34149d3
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/11402
Benchmark: Arm Jenkins <bsgcomp@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Viet-Hoa Do <viet-hoa.do@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
|
|
Instead of dispatching the sum post-op for GEMM kernels to a separate
kernel plus an add, which requires an extra destination-sized allocation
and three extra loads/stores per element, do it directly in the GEMM kernel.
Resolves: ONCPUML-1442
Signed-off-by: Radu Salavat <radu.salavat@arm.com>
Co-authored-by: Milos Puzovic <milos.puzovic@arm.com>
Change-Id: I7a1f2da3300875fa1ac88b705a34390969518077
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/11298
Reviewed-by: Gunes Bayir <gunes.bayir@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Benchmark: Arm Jenkins <bsgcomp@arm.com>
|
|
* This fixes the GCC 12 compiler error:
Assuming signed overflow does not occur when simplifying
conditional to constant [-Werror=strict-overflow]
* Resolves ARMCL-1130
Change-Id: I01e10ebca2dbfcd166c1f4128921953e31016038
Signed-off-by: Pablo Marquez Tello <pablo.tello@arm.com>
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/11381
Reviewed-by: Gunes Bayir <gunes.bayir@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Benchmark: Arm Jenkins <bsgcomp@arm.com>
|
|
of 6 for accumulation and updated heuristics
Change-Id: Ib52ea6825e164f4a8b8422eab7991b50af0b0d7c
Signed-off-by: Milos Puzovic <milos.puzovic@arm.com>
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/11354
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Jakub Sujak <jakub.sujak@arm.com>
Benchmark: Arm Jenkins <bsgcomp@arm.com>
|
|
PyTorch® autocast() function
The full range of tests must be added with the [MLINFSW-482] epic, due to the lack of reordering kernels implemented in ACL.
Co-Authored-By: David Mansell <David.Mansell@arm.com>
Change-Id: I820d316295a1ec94fdc89c37e4144a268f914c36
Signed-off-by: Renato Arantes <renato.arantes@arm.com>
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/11169
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Gunes Bayir <gunes.bayir@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Benchmark: Arm Jenkins <bsgcomp@arm.com>
|
|
- The Neon(TM) implementation converts integers to float and performs the division because there are no vector integer division instructions. However, the leftover loop still uses integer division, which makes results inconsistent depending on where we are in the tensor.
- SVE path does it in integer domain.
- OpenCL(TM) does it similar to Neon(TM) vector path.
- Reference implementation does it in integer domain.
These differences cause intermittent mismatches. This patch ensures all paths follow the same logic.
In addition, the new Neon(TM) implementation is faster than the FP32-converted version.
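A small standalone illustration of why mixing the two division paths inside one tensor can produce mismatches: for large operands, a float-domain divide (the vector path) and an integer divide (the leftover loop) do not always agree. The values are chosen for illustration only.

```cpp
#include <cstdint>
#include <cstdio>

int main()
{
    const int32_t a = 2147483640, b = 3;
    const int32_t integer_path = a / b;                                        // 715827880
    const int32_t float_path   = static_cast<int32_t>(static_cast<float>(a) /
                                                      static_cast<float>(b));  // 715827904
    std::printf("integer: %d  float-domain: %d\n", integer_path, float_path);
}
```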
Resolves: COMPMID-6925
Change-Id: Ia12606d57f40a7d331b9b698f87fd4321496b275
Signed-off-by: Gunes Bayir <gunes.bayir@arm.com>
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/11316
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Pablo Marquez Tello <pablo.tello@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Benchmark: Arm Jenkins <bsgcomp@arm.com>
|
|
arm_gemm fuses the actual bias addition with the output stage in quantized gemm.
The output stage, in its very basic form, is:
A_offset * B_offset - sum(A_row_i) * B_offset - sum(B_col_j) * A_offset
Matrix B is usually constant (e.g. the weight matrix in convolutions). Therefore, except for the middle term above, the expression is constant across output rows because the column sums of matrix B are pre-calculated.
The bias is also usually constant. When it is, it makes sense to add the bias vector to the above sum and perform just a single addition on top of the output tensor.
For this to happen, the column sum computation of the B tensor must account for the bias. This is ensured by the set_quantized_bias() method in the interface, which passes the bias pointer and strides to arm_gemm.
Gemv_pretransposed does not implement set_quantized_bias() and uses the parent function, which does nothing. Therefore, the bias is not added to the output. This causes tests to fail.
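A worked sketch of the output stage described above, with the constant bias folded into the pre-computed per-column term. The symbols and helpers are illustrative only, not arm_gemm's interface.

```cpp
#include <cstdint>
#include <vector>

// Per-column constant: k*a_off*b_off - a_off*col_sum_B[j], with the bias folded in.
std::vector<int32_t> column_constants(const std::vector<int32_t> &col_sum_b,
                                      const std::vector<int32_t> &bias, // may be empty
                                      int k, int32_t a_offset, int32_t b_offset)
{
    std::vector<int32_t> out(col_sum_b.size());
    for (std::size_t j = 0; j < col_sum_b.size(); ++j)
    {
        out[j] = k * a_offset * b_offset - a_offset * col_sum_b[j];
        if (!bias.empty())
            out[j] += bias[j]; // constant bias: a single addition per output element
    }
    return out;
}

// Final correction per output element: the column constant plus the row-dependent term.
inline int32_t corrected(int32_t acc, int32_t col_const, int32_t row_sum_a, int32_t b_offset)
{
    return acc + col_const - b_offset * row_sum_a;
}
```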
Resolves: COMPMID-6928
Change-Id: Iba24fabc65fdc47edb12db6abff2fb47784c0743
Signed-off-by: Gunes Bayir <gunes.bayir@arm.com>
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/11310
Benchmark: Arm Jenkins <bsgcomp@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Jakub Sujak <jakub.sujak@arm.com>
|
|
Resolves: COMPMID-6927
Signed-off-by: David Mansell <David.Mansell@arm.com>
Change-Id: Ib426fdc11ddbdbd0028d64547f3eaf312ca5fcce
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/11301
Benchmark: Arm Jenkins <bsgcomp@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Gunes Bayir <gunes.bayir@arm.com>
|
|
* Resolves COMPMID-6931
Change-Id: I3ed0c509807e26bddfcd20be71b12ec4cbb5cce6
Signed-off-by: Pablo Marquez Tello <pablo.tello@arm.com>
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/11277
Reviewed-by: Jakub Sujak <jakub.sujak@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Benchmark: Arm Jenkins <bsgcomp@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
|
|
* Resolves MLCE-1219
Change-Id: If997180ec88c35d6af05a06c8c5ef95681e67c05
Signed-off-by: Pablo Marquez Tello <pablo.tello@arm.com>
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/11182
Reviewed-by: Viet-Hoa Do <viet-hoa.do@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Benchmark: Arm Jenkins <bsgcomp@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
|
|
The include of alloca.h should be guarded against _WIN64 and __OpenBSD__ (see the sketch below).
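A minimal sketch of the guard described above; <alloca.h> is not available on Windows or OpenBSD, where alloca comes from <malloc.h> or <stdlib.h> instead.

```cpp
#if !defined(_WIN64) && !defined(__OpenBSD__)
#include <alloca.h>
#endif
```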
Partially Resolves: COMPMID-6595
Signed-off-by: Gunes Bayir <gunes.bayir@arm.com>
Change-Id: I6a52ec129d92e290d033f75baeb4a598669daae0
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/11180
Reviewed-by: Viet-Hoa Do <viet-hoa.do@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Benchmark: Arm Jenkins <bsgcomp@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
|
|
This patch fuses the transposition taking place in Acl with the transformations done in arm_gemm (called pretranspose_b_array) if the underlying kernel and transform supports it. This should improve start-up time (as it's for constant Rhs matrices) and memory footprint. The transformations in arm_gemm are kernel specific. The Rhs matrix is transformed into certain layouts to improve the performance.
Resolves: COMPMID-6595
Change-Id: Id2932dd966e59f903c279417bebcea83d9a42464
Signed-off-by: Gunes Bayir <gunes.bayir@arm.com>
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/11144
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Viet-Hoa Do <viet-hoa.do@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Benchmark: Arm Jenkins <bsgcomp@arm.com>
|
|
* cl-clang is used to build ACL natively on WoA (Windows on Arm)
* Resolves MLCE-1209
Change-Id: I040e84f526f16324138a074badf764ac099090e3
Signed-off-by: Pablo Marquez Tello <pablo.tello@arm.com>
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/11126
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Jakub Sujak <jakub.sujak@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Benchmark: Arm Jenkins <bsgcomp@arm.com>
|
|
This patch also fixes a bug where the split dimension was wrong in
CpuDepthwiseConv2dAssemblyDispatch::run. It was set to DimY, which is
cols, but it should have been DimZ. This was rarely an issue in practice
because typically the number of cols is greater than the number of
threads anyway.
Relates to: ONCPUML-1443
Co-authored-by: Milos Puzovic <Milos.Puzovic@arm.com>
Change-Id: Ifed2fce22ddeb7cd77e6a6ae1083694427f91e04
Signed-off-by: Jonathan Deakin <jonathan.deakin@arm.com>
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/11083
Benchmark: Arm Jenkins <bsgcomp@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Jakub Sujak <jakub.sujak@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
|
|
kernels.
Change-Id: I81b71ecc0d2e776d132091e074798a79b3141bec
Signed-off-by: David Mansell <David.Mansell@arm.com>
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/11085
Reviewed-by: Jakub Sujak <jakub.sujak@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Benchmark: Arm Jenkins <bsgcomp@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
|
|
The code in convolver.hpp generates pointers into either the
appropriate point in the input activation tensor or the padding buffer
for each kernel point of each output point of the convolution. This is
done at runtime interspersed with the data transform and matrix
multiplication steps. As such, it can have a significant impact on
performance, particularly for low input channel counts.
This change improves the performance of this code by streamlining the
checks for out of range input points (which must be directed to the
padding buffer). The previous implementation checked all four borders
for every point. The revised code does the checks one at a time, and
for any failing check applies the result to as many output points as
possible without repeating the other checks.
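An illustrative 1-D sketch of the pointer-generation idea described above: instead of checking every kernel point against all borders, handle a whole out-of-range run of points with a single check. This is not the library's actual convolver code; names and the stride-1 assumption are mine.

```cpp
#include <vector>

// For one output point at `out_x` with `kernel_w` taps (stride 1), fill `ptrs`
// with pointers into the input row, or the padding buffer for out-of-range taps.
void gather_row_pointers(const float *in_row, int in_w, int pad_left,
                         int out_x, int kernel_w,
                         const float *pad_buffer, std::vector<const float *> &ptrs)
{
    ptrs.resize(kernel_w);
    const int start = out_x - pad_left; // input x of the first kernel tap
    int k = 0;
    // Run of taps before the left border: one check covers all of them.
    for (; k < kernel_w && start + k < 0; ++k)
        ptrs[k] = pad_buffer;
    // Run of in-range taps: no further border checks per tap.
    for (; k < kernel_w && start + k < in_w; ++k)
        ptrs[k] = in_row + (start + k);
    // Remaining taps fall past the right border.
    for (; k < kernel_w; ++k)
        ptrs[k] = pad_buffer;
}
```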
Signed-off-by: David Mansell <David.Mansell@arm.com>
Change-Id: I36a4fa114b425c1bcba2be40acf36718522519f5
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/11004
Benchmark: Arm Jenkins <bsgcomp@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Gunes Bayir <gunes.bayir@arm.com>
|
|
Resolves: COMPMID-6753
Signed-off-by: Anitha Raj <anitha.raj@arm.com>
Change-Id: I80df0479eb4c7cc2c5380df708844cc9ffdd2aed
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/11001
Reviewed-by: Gunes Bayir <gunes.bayir@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Benchmark: Arm Jenkins <bsgcomp@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
|
|
* CONVERT_TO_TENSOR4D_STRUCT_NO_STEP is implemented and used
  in some CL kernels in a way that causes a divide-by-zero issue.
  - Since the steps are all zero, the issue might have been
    ignored by the compiler.
Resolves: COMPMID-6795
Signed-off-by: Viet-Hoa Do <viet-hoa.do@arm.com>
Change-Id: I0fb38fc62d63671b8abefa39b3d9b3ca6f49c7fe
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/10967
Reviewed-by: Gunes Bayir <gunes.bayir@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Benchmark: Arm Jenkins <bsgcomp@arm.com>
|
|
Resolves: [COMPMID-6799]
Signed-off-by: Mohammed Suhail Munshi <MohammedSuhail.Munshi@arm.com>
Change-Id: I47baeeea75f1d03609d1fa1e9a10d2f53d5694f7
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/10969
Benchmark: Arm Jenkins <bsgcomp@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Viet-Hoa Do <viet-hoa.do@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
|
|
- Locks the pointer before checking its validity to prevent a race condition
Signed-off-by: Mohammed Suhail Munshi <MohammedSuhail.Munshi@arm.com>
Change-Id: I6872b10d058ee7f3707ba641f44bb6116e26880a
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/10960
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Viet-Hoa Do <viet-hoa.do@arm.com>
Benchmark: Arm Jenkins <bsgcomp@arm.com>
|
|
The reorders supported at the moment are:
ab->BA4b4a
ab->BA8b4a
Co-Authored-By: David Mansell <David.Mansell@arm.com>
Change-Id: Ic466465629ce3bcdcee0089e251485b79b60e1f3
Signed-off-by: Renato Arantes <renato.arantes@arm.com>
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/10775
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Jakub Sujak <jakub.sujak@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Benchmark: Arm Jenkins <bsgcomp@arm.com>
|
|
- Enables an FP16 LUT for the logistic activation
- Adds a LUTManager to re-use the LUT where appropriate (see the sketch below)
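A sketch of a 2^16-entry logistic (sigmoid) LUT indexed by the raw FP16 bit pattern, assuming an Arm toolchain where __fp16 is available as a storage type. The LUTManager mentioned above caches such tables for reuse; this standalone builder is purely illustrative.

```cpp
#include <array>
#include <cmath>
#include <cstdint>
#include <cstring>
#include <memory>

using Fp16LogisticLut = std::array<__fp16, 65536>;

std::shared_ptr<Fp16LogisticLut> make_logistic_lut()
{
    auto lut = std::make_shared<Fp16LogisticLut>();
    for (uint32_t bits = 0; bits < 65536; ++bits)
    {
        __fp16 x;
        const uint16_t b16 = static_cast<uint16_t>(bits);
        std::memcpy(&x, &b16, sizeof(x));                   // reinterpret bit pattern as fp16
        const float y = 1.0f / (1.0f + std::exp(-static_cast<float>(x)));
        (*lut)[bits] = static_cast<__fp16>(y);              // logistic(x), stored as fp16
    }
    return lut; // a LUTManager-style cache could hand out this shared_ptr on demand
}
```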
Signed-off-by: Mohammed Suhail Munshi <MohammedSuhail.Munshi@arm.com>
Change-Id: I94667b63b452a8e58a1eb59cb0b5866178954523
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/10864
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Gunes Bayir <gunes.bayir@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Benchmark: Arm Jenkins <bsgcomp@arm.com>
|
|
The issue appears when this kernel is used by convolution operators, because the stride calculations consider only simple matrix multiplication.
In conv2d-triggered runs, the Rhs does not have the same dimensions as the Lhs and Dst. There are also cases where the Lhs and Dst are interpreted as 3D, with their X and Y dimensions (in the convolution sense) collapsed into one.
Resolves: COMPMID-6764
Change-Id: If443e6eb8f7a5cca1acc58b37c598122a013e69b
Signed-off-by: Gunes Bayir <gunes.bayir@arm.com>
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/10913
Benchmark: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Jakub Sujak <jakub.sujak@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
|
|
This patch adds the latest GPUs as GPU targets and sets up kernel selection heuristics for MatMul to address some nightly issues.
Resolves: COMPMID-6766
Change-Id: I29dbb08c5ecfb3fcd63230b0b1675ab557074aca
Signed-off-by: Gunes Bayir <gunes.bayir@arm.com>
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/10902
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Jakub Sujak <jakub.sujak@arm.com>
Benchmark: Arm Jenkins <bsgcomp@arm.com>
|
|
This fix modifies some of the conversions done in the generate proposals kernel that cause DDK issues while compiling the kernel.
The issues are mostly related to conversion from i64 to fp16 and do not affect fp32. Firstly, the size_t type identifier is converted to unsigned int. On its own this compiled, but it caused mismatches, even on older devices where the kernel was passing before. Therefore, the fp16 conversion is delayed until vector construction, where the integers are now converted to fp32 and then to fp16. Although this may not be ideal, it seems like the best solution.
Resolves: COMPMID-6756
Signed-off-by: Gunes Bayir <gunes.bayir@arm.com>
Change-Id: Iee61216c908fe51431985b80c3653fc32add4741
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/10879
Benchmark: Arm Jenkins <bsgcomp@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Jakub Sujak <jakub.sujak@arm.com>
Reviewed-by: Pablo Marquez Tello <pablo.tello@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
|
|
The graph example has fixed quantization information given for certain layers, and some of the offsets exceed the 8-bit range for the Int8 data type.
This shouldn't have been the case, and the offsets should respect the 8-bit quantization specification laid out here: https://www.tensorflow.org/lite/performance/quantization_spec
However, the mechanism added in the helper function introduces robustness against such irregularities at little or no cost, and is therefore added as a fix (see the sketch below).
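A sketch of the kind of robustness described above: clamp a supplied quantization offset to the representable range of the data type. The helper below is hypothetical, not the example's actual function.

```cpp
#include <algorithm>
#include <cstdint>

int32_t sanitize_offset_qasymm8_signed(int32_t offset)
{
    // QASYMM8_SIGNED zero points must fit in [-128, 127].
    return std::clamp<int32_t>(offset, INT8_MIN, INT8_MAX);
}
```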
Resolves: COMPMID-6748
Change-Id: If39bf323382f109fa100ee2b87ce63cc7bc89759
Signed-off-by: Gunes Bayir <gunes.bayir@arm.com>
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/10858
Reviewed-by: SiCong Li <sicong.li@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Benchmark: Arm Jenkins <bsgcomp@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
|
|
The function pointer for clImportMemoryARM should be loaded in a portable way, as recommended by Khronos® and outlined here:
https://registry.khronos.org/OpenCL/specs/3.0-unified/html/OpenCL_Ext.html#getting-opencl-api-extension-function-pointers
using the clGetExtensionFunctionAddressForPlatform() call (see the sketch below).
All extensions should ideally be loaded using the above-mentioned function.
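A minimal sketch of the portable extension-loading pattern referenced above: query the clImportMemoryARM entry point per platform instead of assuming it can be linked directly. The exact function-pointer signature comes from the cl_arm_import_memory extension and is deliberately not spelled out here.

```cpp
#include <CL/cl.h>

void *load_cl_import_memory_arm(cl_platform_id platform)
{
    // Returns nullptr if the platform does not expose the extension.
    return clGetExtensionFunctionAddressForPlatform(platform, "clImportMemoryARM");
}
```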
Resolves: COMPMID-6732
Signed-off-by: Gunes Bayir <gunes.bayir@arm.com>
Change-Id: I482b6bde721267d5e8c08301e5780d28a9c5ba85
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/10852
Benchmark: Arm Jenkins <bsgcomp@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Jakub Sujak <jakub.sujak@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
|
|
Resolves: COMPMID-6622
Signed-off-by: Viet-Hoa Do <viet-hoa.do@arm.com>
Change-Id: Ibac276618bdda125dcbb9c851c547f12739b15b4
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/10749
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Gunes Bayir <gunes.bayir@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Benchmark: Arm Jenkins <bsgcomp@arm.com>
|