path: root/tests/validation/NEON/MatMul.cpp
2024-03-22  [ONCPUML-1451] Guard bf16 to bf16 tests with ARM_COMPUTE_ENABLE_FIXED_FORMAT_KERNELS  (Renato Arantes)

    Change-Id: I6a01fe1e19a9d3e38908309d766fe7fc43775490
    Signed-off-by: Renato Arantes <renato.arantes@arm.com>
    Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/11338
    Benchmark: Arm Jenkins <bsgcomp@arm.com>
    Tested-by: Arm Jenkins <bsgcomp@arm.com>
    Reviewed-by: Gunes Bayir <gunes.bayir@arm.com>
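For context, a minimal standalone sketch of the compile-time guard this commit applies. The macro name comes from the commit subject; everything else (the flag variable, the output) is illustrative and not the actual test code:

    // Sketch only: code that depends on fixed-format kernels is compiled
    // solely when the build defines ARM_COMPUTE_ENABLE_FIXED_FORMAT_KERNELS,
    // since the bf16-to-bf16 path needs fixed-format/reordering support.
    #include <iostream>

    #ifdef ARM_COMPUTE_ENABLE_FIXED_FORMAT_KERNELS
    constexpr bool bf16_to_bf16_tests_enabled = true;
    #else
    constexpr bool bf16_to_bf16_tests_enabled = false;
    #endif

    int main()
    {
        std::cout << "bf16-to-bf16 MatMul tests "
                  << (bf16_to_bf16_tests_enabled ? "enabled" : "skipped") << '\n';
    }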
2024-03-21  [ONCPUML-1451] Add matmul kernel to enable bf16 to bf16 operations via PyTorch® autocast() function  (Renato Arantes)

    The full range of tests must be added with the [MLINFSW-482] epic due to
    the lack of reordering kernels implemented in Acl.

    Co-Authored-By: David Mansell <David.Mansell@arm.com>
    Change-Id: I820d316295a1ec94fdc89c37e4144a268f914c36
    Signed-off-by: Renato Arantes <renato.arantes@arm.com>
    Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/11169
    Tested-by: Arm Jenkins <bsgcomp@arm.com>
    Reviewed-by: Gunes Bayir <gunes.bayir@arm.com>
    Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
    Benchmark: Arm Jenkins <bsgcomp@arm.com>
2024-03-19  Increase MatMul and DilatedConv test Q8 thresholds to 1  (Gunes Bayir)

    The tolerance for quantized tests is better set to 1 because of possible
    rounding differences between the Acl and reference implementations.

    Resolves: COMPMID-6929
    Change-Id: I6f317631322b702e6a9579593befff65bbf46151
    Signed-off-by: Gunes Bayir <gunes.bayir@arm.com>
    Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/11319
    Reviewed-by: Pablo Marquez Tello <pablo.tello@arm.com>
    Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
    Benchmark: Arm Jenkins <bsgcomp@arm.com>
    Tested-by: Arm Jenkins <bsgcomp@arm.com>
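A small standalone sketch (not Acl test code; names and values are illustrative) of why an absolute threshold of 1 is a sensible floor for quantized comparisons: two correct implementations can round the same intermediate value in different directions, so matching outputs may legitimately differ by one quantization step:

    #include <cstdint>
    #include <cstdlib>
    #include <iostream>

    // E.g. 12.5 may round to 12 or 13 depending on rounding mode, so a
    // one-step difference between two correct quantized results is expected.
    bool within_tolerance(std::uint8_t actual, std::uint8_t expected, int tolerance = 1)
    {
        return std::abs(int(actual) - int(expected)) <= tolerance;
    }

    int main()
    {
        std::cout << within_tolerance(12, 13) << ' '   // passes: off by one step
                  << within_tolerance(12, 14) << '\n'; // fails: off by two
    }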
2024-01-24  Fix tolerance issue in BF16 MatMul tests  (Gunes Bayir)

    BF16 kernels are not expected to meet the same tolerance/accuracy
    standards as full float kernels. The reference implementation is a
    standard floating point implementation, thus resulting in small
    mismatches. We increase the tolerance of the MatMul BF16 tests and add
    more tests to cover more shapes. Previously, the only tested bf16 kernel
    was a64_hybrid_fp32bf16fp32_mmla_4x24. With the inclusion of new shapes,
    the heuristics also choose a64_hybrid_fp32bf16fp32_mmla_6x16 and stress
    this kernel as well, covering every implementation.

    Resolves: COMPMID-6654
    Signed-off-by: Gunes Bayir <gunes.bayir@arm.com>
    Change-Id: I15342606912013c123b94c7e0ea2e6bbb25680d7
    Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/11014
    Benchmark: Arm Jenkins <bsgcomp@arm.com>
    Reviewed-by: Jakub Sujak <jakub.sujak@arm.com>
    Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
    Tested-by: Arm Jenkins <bsgcomp@arm.com>
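To see why bf16 needs a looser tolerance than fp32, here is a standalone sketch (assumed values, not from the commit) that truncates an fp32 value to bf16 precision and measures the relative error against the fp32 original:

    #include <cmath>
    #include <cstdint>
    #include <cstring>
    #include <iostream>

    // Truncate an fp32 value to bf16 precision by dropping the low 16 bits;
    // bf16 keeps only 8 mantissa bits versus fp32's 23.
    float to_bf16_and_back(float x)
    {
        std::uint32_t bits;
        std::memcpy(&bits, &x, sizeof(bits));
        bits &= 0xFFFF0000u;
        std::memcpy(&x, &bits, sizeof(x));
        return x;
    }

    int main()
    {
        const float ref  = 3.14159f; // stand-in for an fp32 reference value
        const float bf16 = to_bf16_and_back(ref);
        // Prints a small but nonzero relative error, far above fp32 roundoff.
        std::cout << "relative error vs fp32: " << std::fabs(bf16 - ref) / ref << '\n';
    }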
2023-09-04  Make zip and combine variadic  (Viet-Hoa Do)

    * Illustrate the benefit by writing the CPU MatMul test dataset in a
      more readable way.

    Part of: COMPMID-6353
    Signed-off-by: Viet-Hoa Do <viet-hoa.do@arm.com>
    Change-Id: Id5dbc13a051709237bbcc4dd88716d0b24ecfd5d
    Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/10227
    Tested-by: Arm Jenkins <bsgcomp@arm.com>
    Reviewed-by: Jakub Sujak <jakub.sujak@arm.com>
    Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
    Benchmark: Arm Jenkins <bsgcomp@arm.com>
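The readability win can be sketched outside the framework: with a variadic combine(), a Cartesian product over N dataset axes is one flat call instead of N-1 nested two-argument calls. The combine() below is a tiny stand-in written for this sketch, not the framework's implementation, and the axis values are hypothetical:

    #include <iostream>
    #include <tuple>
    #include <vector>

    // Base case: a single axis becomes a vector of 1-tuples.
    template <typename T>
    std::vector<std::tuple<T>> combine(const std::vector<T> &axis)
    {
        std::vector<std::tuple<T>> out;
        for (const auto &v : axis)
            out.emplace_back(v);
        return out;
    }

    // Variadic case: prepend each value of the first axis to the
    // Cartesian product of the remaining axes.
    template <typename T, typename... Ts>
    std::vector<std::tuple<T, Ts...>> combine(const std::vector<T> &axis,
                                              const std::vector<Ts> &...rest)
    {
        std::vector<std::tuple<T, Ts...>> out;
        const auto tail = combine(rest...);
        for (const auto &v : axis)
            for (const auto &t : tail)
                out.push_back(std::tuple_cat(std::make_tuple(v), t));
        return out;
    }

    int main()
    {
        // before: combine(combine(m_values, transpose_a), transpose_b)
        // after:  combine(m_values, transpose_a, transpose_b)
        auto dataset = combine(std::vector<int>{8, 16, 32},      // hypothetical M sizes
                               std::vector<bool>{false, true},   // transpose LHS?
                               std::vector<bool>{false, true});  // transpose RHS?
        std::cout << dataset.size() << " test configurations\n"; // 12
    }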
2023-04-26  Only define validation test tolerance for quantized types in case of aarch64 for Neon™ MatMul  (Ramy Elgammal)

    Partially Resolves: COMPMID-6026
    Signed-off-by: Ramy Elgammal <ramy.elgammal@arm.com>
    Change-Id: I273b213abba275f1609eae33058e3acbee2a7146
    Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/9489
    Benchmark: Arm Jenkins <bsgcomp@arm.com>
    Tested-by: Arm Jenkins <bsgcomp@arm.com>
    Reviewed-by: Viet-Hoa Do <viet-hoa.do@arm.com>
    Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
2023-04-24  Disable Neon/MatMul/Quantized for armv7a  (Ramy Elgammal)

    - All the GeMM CPU assembly kernels for integer datatypes require
      aarch64.

    Resolves: COMPMID-6026
    Signed-off-by: Ramy Elgammal <ramy.elgammal@arm.com>
    Change-Id: I34bb0d5ca5cc3684b996df851227fcd0ad452586
    Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/9481
    Benchmark: Arm Jenkins <bsgcomp@arm.com>
    Tested-by: Arm Jenkins <bsgcomp@arm.com>
    Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
    Reviewed-by: Mohmun02 <MohammedSuhail.Munshi@arm.com>
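The two entries above hinge on the same compile-time guard: quantized MatMul tolerances and tests exist only in aarch64 builds, because the integer GeMM assembly kernels are not implemented for armv7a. A standalone sketch of the pattern (the tolerance name and value are illustrative, not the actual test code):

    #include <iostream>

    #ifdef __aarch64__
    // Only defined where the quantized assembly kernels exist.
    constexpr int tolerance_quant = 1; // illustrative value
    #endif

    int main()
    {
    #ifdef __aarch64__
        std::cout << "quantized MatMul tests run, tolerance " << tolerance_quant << '\n';
    #else
        std::cout << "quantized MatMul tests skipped on this target\n";
    #endif
    }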
2023-04-19  Add quantized support for CPU MatMul  (Viet-Hoa Do)

    Resolves: COMPMID-5899
    Signed-off-by: Viet-Hoa Do <viet-hoa.do@arm.com>
    Change-Id: I89d96e292c3492ba9b1900a3e5683f9dcd11dfc6
    Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/9440
    Tested-by: Arm Jenkins <bsgcomp@arm.com>
    Reviewed-by: Gunes Bayir <gunes.bayir@arm.com>
    Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
    Benchmark: Arm Jenkins <bsgcomp@arm.com>
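For background on what a quantized matmul computes, a standalone sketch (not Acl code; quantization parameters and shapes are invented for illustration): 8-bit inputs are interpreted through (scale, zero_point), accumulated in int32 with zero points removed, and requantized to the output's parameters:

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <cstdint>
    #include <iostream>
    #include <vector>

    struct QuantInfo
    {
        float        scale;
        std::int32_t zero_point;
    };

    // Quantize a real value to an 8-bit code under the given parameters.
    std::uint8_t quantize(float v, QuantInfo q)
    {
        const std::int32_t r = static_cast<std::int32_t>(std::lround(v / q.scale)) + q.zero_point;
        return static_cast<std::uint8_t>(std::min(255, std::max(0, r)));
    }

    // Dot product of one LHS row with one RHS column: accumulate in int32,
    // scale back to real values, then requantize to the output parameters.
    std::uint8_t qmatmul_1x1(const std::vector<std::uint8_t> &lhs,
                             const std::vector<std::uint8_t> &rhs,
                             QuantInfo ql, QuantInfo qr, QuantInfo qo)
    {
        std::int32_t acc = 0;
        for (std::size_t i = 0; i < lhs.size(); ++i)
            acc += (std::int32_t(lhs[i]) - ql.zero_point) * (std::int32_t(rhs[i]) - qr.zero_point);
        const float real = acc * ql.scale * qr.scale;
        return quantize(real, qo);
    }

    int main()
    {
        const QuantInfo q{0.5f, 10};
        const std::vector<std::uint8_t> a{12, 14}, b{16, 18};
        // Real values: a = {1, 2}, b = {3, 4}; dot product = 11 -> code 32.
        std::cout << int(qmatmul_1x1(a, b, q, q, q)) << '\n';
    }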
2023-04-13  Implement MatMul Function and Operator with Floating Point support for CPU  (Mohammed Suhail Munshi)

    - Implements the MatMul function and operator for the floating point
      datatypes FP16/FP32.
    - Includes support for transposing dynamic tensors prior to matrix
      multiplication.
    - Adds tests for 2D/3D/4D+ tensors in MatMul with the F32/F16 datatypes
      (with all combinations of transposed/not-transposed tensors).
    - Updates the fixture to allow for testing fused activation in MatMul.
    - Adds tests for MatMul with and without fused activation.

    Resolves: COMPMID-5898
    Signed-off-by: Mohammed Suhail Munshi <MohammedSuhail.Munshi@arm.com>
    Change-Id: Iefa84b26dd723c9a51e6c3f91023152c6c31ace2
    Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/9411
    Reviewed-by: SiCong Li <sicong.li@arm.com>
    Tested-by: Arm Jenkins <bsgcomp@arm.com>
    Benchmark: Arm Jenkins <bsgcomp@arm.com>
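As a reference point for what these tests validate, a plain standalone matmul (a sketch, not the Acl operator or its reference implementation) computing C = A * B for row-major matrices; the operator introduced here additionally handles the transposed variants, 3D/4D+ batches, and fused activation described above:

    #include <cstddef>
    #include <iostream>
    #include <vector>

    // Reference C = A * B for row-major A (MxK) and B (KxN).
    std::vector<float> matmul(const std::vector<float> &a, const std::vector<float> &b,
                              std::size_t M, std::size_t K, std::size_t N)
    {
        std::vector<float> c(M * N, 0.0f);
        for (std::size_t m = 0; m < M; ++m)
            for (std::size_t n = 0; n < N; ++n)
                for (std::size_t k = 0; k < K; ++k)
                    c[m * N + n] += a[m * K + k] * b[k * N + n];
        return c;
    }

    int main()
    {
        const std::vector<float> a{1, 2, 3, 4, 5, 6};    // 2x3
        const std::vector<float> b{7, 8, 9, 10, 11, 12}; // 3x2
        const auto c = matmul(a, b, 2, 3, 2);
        std::cout << c[0] << ' ' << c[1] << ' '
                  << c[2] << ' ' << c[3] << '\n';        // 58 64 139 154
    }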