path: root/src/cpu/kernels/add/generic/neon
Age  Commit message  Author
2022-12-29  Use CPU quantized addition kernel for quantized subtraction  (Omar Al Khatib)
Resolves: COMPMID-5629

Signed-off-by: Omar Al Khatib <omar.alkhatib@arm.com>
Change-Id: I061ea5bdafa3a01e66ff869d158f26a38d19e125
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/8835
Benchmark: Arm Jenkins <bsgcomp@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Gunes Bayir <gunes.bayir@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
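A minimal standalone sketch (not the ComputeLibrary API; QuantInfo and quantized_add are hypothetical helpers) of why an addition kernel can also compute quantized subtraction: negating the second input's scale turns dequant(b) into -dequant(b), so a + (-b) equals a - b.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <cstdio>

struct QuantInfo { float scale; int32_t offset; };

// Hypothetical reference addition of two quantized uint8 values: dequantize,
// add in float, requantize to the output parameters.
static uint8_t quantized_add(uint8_t a, uint8_t b, QuantInfo qa, QuantInfo qb, QuantInfo qo)
{
    const float fa = qa.scale * (static_cast<int32_t>(a) - qa.offset);
    const float fb = qb.scale * (static_cast<int32_t>(b) - qb.offset);
    const int32_t q = static_cast<int32_t>(std::lround((fa + fb) / qo.scale)) + qo.offset;
    return static_cast<uint8_t>(std::clamp(q, 0, 255));
}

int main()
{
    const QuantInfo qa{0.5f, 10}, qb{0.25f, 20}, qo{0.5f, 0};
    const uint8_t a = 100, b = 60;

    // Subtraction by reusing the addition routine: flip the sign of b's scale.
    const QuantInfo qb_negated{-qb.scale, qb.offset};
    const uint8_t diff = quantized_add(a, b, qa, qb_negated, qo);

    // dequant(a) - dequant(b) = 45 - 10 = 35, which requantizes to 70 here.
    std::printf("quantized difference = %u\n", diff);
    return 0;
}
```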
2022-11-01  Fix fixed-point quantized addition  (Viet-Hoa Do)
* The range check condition is incorrect. The maximum value of 8-bit quantized data is 255, not 1023.
* Use 256 to make the check slightly stricter than it needs to be.

Resolves: COMPMID-5458
Signed-off-by: Viet-Hoa Do <viet-hoa.do@arm.com>
Change-Id: I5c071680531574b409a7ce71a732f6480caa10d8
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/8537
Reviewed-by: Jakub Sujak <jakub.sujak@arm.com>
Reviewed-by: Gunes Bayir <gunes.bayir@arm.com>
Benchmark: Arm Jenkins <bsgcomp@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
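The kind of range check the fix describes, as a hedged sketch: the worst-case value is bounded using 256 rather than 255 (the true maximum of 8-bit quantized data), so the fixed-point path is only taken when the result certainly fits. The function name and the limit parameter are illustrative, not the library's code.

```cpp
#include <cmath>

// Decide whether the fixed-point path can represent the worst-case value.
// 256 is used instead of 255 (the real maximum of 8-bit quantized data) so the
// check is slightly stricter than it strictly needs to be.
bool fixed_point_add_possible(float scale0, float scale1, float offset, float fixed_point_limit)
{
    const float max_input_value = 256.0f; // stricter bound on the [0, 255] range
    const float worst_case      = (std::fabs(scale0) + std::fabs(scale1)) * max_input_value
                                  + std::fabs(offset);
    return worst_case <= fixed_point_limit;
}
```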
2022-10-27  Fix fixed-point quantized addition  (Viet-Hoa Do)
* Use the same rounding function for the left-over part as for the vectorized part.

Resolves: COMPMID-5640
Signed-off-by: Viet-Hoa Do <viet-hoa.do@arm.com>
Change-Id: I07450b2a43390b77539b78cd5d3e6772bdc38548
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/8520
Benchmark: Arm Jenkins <bsgcomp@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Gunes Bayir <gunes.bayir@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
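A hedged illustration of the bug class addressed here: if the vectorized body and the scalar left-over loop round differently, only the tail elements of each row come out wrong. The helper below is hypothetical; round-half-away-from-zero is assumed as the shared rounding mode.

```cpp
#include <cmath>
#include <cstdint>

// Shared rounding helper used by both the vector body and the left-over loop.
inline int32_t round_half_away_from_zero(float v)
{
    return static_cast<int32_t>(v >= 0.f ? std::floor(v + 0.5f) : std::ceil(v - 0.5f));
}

void add_row(const float *a, const float *b, int32_t *out, int len)
{
    int i = 0;
    // ... a vectorized body would process len / 4 * 4 elements here,
    //     rounding with the equivalent of round_half_away_from_zero() ...
    for (; i < len; ++i)
    {
        // Left-over loop: must use the *same* rounding as the vector body,
        // not e.g. a plain static_cast<int32_t>() truncation.
        out[i] = round_half_away_from_zero(a[i] + b[i]);
    }
}
```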
2022-10-20  Update reinterpret tensor as 1D for CPU add  (Viet-Hoa Do)
* Use the same implementation as other layers.

Resolves: COMPMID-5108
Signed-off-by: Viet-Hoa Do <viet-hoa.do@arm.com>
Change-Id: I5a50259b398b71ca1f61b5ee8daa539bf8263fac
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/8501
Benchmark: Arm Jenkins <bsgcomp@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Gunes Bayir <gunes.bayir@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
2022-10-03  Optimize CPU add layer on quantized data  (Viet-Hoa Do)
* Use fixed-point arithmetic where possible.
* Various optimizations for the FP32-based implementation. This implementation is kept as the fallback solution for unrealistic quantization parameters that exceed the range of the fixed-point solution.

Resolves: COMPMID-5458
Signed-off-by: Viet-Hoa Do <viet-hoa.do@arm.com>
Change-Id: I221d2d3801ecaae4fe0b7cf6ae8ef00ca3743665
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/8317
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Gunes Bayir <gunes.bayir@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Benchmark: Arm Jenkins <bsgcomp@arm.com>
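A scalar sketch of the fixed-point idea, under assumptions: the 20-fractional-bit format and the unsigned 8-bit clamp range are illustrative, not necessarily what the kernel uses. Per element, the work reduces to integer multiplies, adds and a shift; the FP32 path remains as the fallback when the precomputed factors would not fit the fixed-point range.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

struct QuantInfo { float scale; int32_t offset; };

uint8_t quantized_add_fixedpoint(uint8_t a, uint8_t b,
                                 QuantInfo qa, QuantInfo qb, QuantInfo qo)
{
    constexpr int shift = 20; // assumed number of fractional bits

    // Precomputed once per call in a real kernel, outside the element loop.
    const int32_t m0 = static_cast<int32_t>(std::lround((qa.scale / qo.scale) * (1 << shift)));
    const int32_t m1 = static_cast<int32_t>(std::lround((qb.scale / qo.scale) * (1 << shift)));
    const int64_t bias = (static_cast<int64_t>(qo.offset) << shift)
                         - static_cast<int64_t>(m0) * qa.offset
                         - static_cast<int64_t>(m1) * qb.offset
                         + (1 << (shift - 1)); // rounding term

    // Per element: two integer multiplies, two adds and a shift; no floating point.
    const int64_t acc = static_cast<int64_t>(m0) * a + static_cast<int64_t>(m1) * b + bias;
    return static_cast<uint8_t>(std::clamp<int64_t>(acc >> shift, 0, 255));
}
```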
2022-08-01  Optimize add layer by considering the input tensors as 1D array  (Gunes Bayir)
Resolves: COMPMID-5108

Change-Id: I544f8160fbe5b4ffbef348d1fbd3dd626a6e1bdb
Signed-off-by: Gunes Bayir <gunes.bayir@arm.com>
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/8002
Reviewed-by: Gian Marco Iodice <gianmarco.iodice@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Benchmark: Arm Jenkins <bsgcomp@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
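A sketch of the collapse-to-1D idea (not the library's windowing API): when both inputs and the output are dense, share the same shape and no broadcasting is involved, the nested per-dimension iteration can be replaced by one flat loop, which is easier to vectorize and has less per-window overhead.

```cpp
#include <cstddef>

void add_as_1d(const float *src0, const float *src1, float *dst, std::size_t total_elements)
{
    // One contiguous loop over every element, regardless of the tensor's rank.
    // A real kernel would vectorize this body explicitly; the compiler can also
    // auto-vectorize it.
    for (std::size_t i = 0; i < total_elements; ++i)
    {
        dst[i] = src0[i] + src1[i];
    }
}
```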
2021-11-28  Decouple CpuAddKernel  (Dana Zlotnik)
1- NEON supported data types are: fp32, fp16, u8, s16, s32, q8, q_s8, q16
2- SVE supported data types are: fp32, fp16, u8, s16, s32
3- SVE2 supported data types are: q8, q_s8, q16
4- Re-arrange the SVE folder structure

** Guards need to be removed and testing added once the multi-ISA build system and validation tests are available.

Resolves COMPMID-4635
Change-Id: I90e4f6a219478aa9ad5c4a6b9858496afa8af42d
Signed-off-by: Dana Zlotnik <dana.zlotnik@arm.com>
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/6711
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Giorgio Arena <giorgio.arena@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
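A hedged sketch of what such a decoupling commonly looks like: each data-type/ISA combination registers its own function together with a selection predicate, and the kernel picks one at configure time. The table and names below are illustrative, not ComputeLibrary's actual registration mechanism.

```cpp
#include <string>
#include <vector>

enum class DataType { F32, F16, U8, S16, S32, QASYMM8, QASYMM8_SIGNED, QSYMM16 };

struct AddKernelEntry
{
    std::string name;
    bool (*is_selected)(DataType dt);                      // selection predicate
    void (*run)(const void *, const void *, void *, int);  // type-erased kernel body
};

// Example NEON F32 implementation slot (body omitted for brevity).
void neon_fp32_add(const void *, const void *, void *, int) { /* ... */ }

static const std::vector<AddKernelEntry> available_kernels = {
    { "neon_fp32_add", [](DataType dt) { return dt == DataType::F32; }, neon_fp32_add },
    // fp16, u8, s16, s32 and the quantized variants would register here, with
    // SVE/SVE2 entries guarded by runtime CPU feature checks.
};

const AddKernelEntry *select_add_kernel(DataType dt)
{
    for (const auto &k : available_kernels)
    {
        if (k.is_selected(dt))
        {
            return &k;
        }
    }
    return nullptr; // unsupported configuration
}
```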