path: root/src/cpu/operators
Age        | Commit message                                                                  | Author
2023-08-07 | Document the Conv2D heuristic                                                   | Gian Marco Iodice
2023-07-28 | Retain back-compatibility for arm_compute/core/Types.h                          | SiCong Li
2023-07-19 | Add support for input S64/U64 in CpuCastKernel                                  | Pablo Marquez Tello
2023-07-10 | Do not include headers necessary for logging when logging is disabled           | Matthew Bentham
2023-07-04 | Depthwise channel pre-multiplication                                            | Michael Tyler
2023-06-23 | Address the issues with the ACL coverage pipeline failures related to matmul.   | Renato Arantes
2023-06-16 | Add Fused Activation to OpenCL MatMul                                           | Mohammed Suhail Munshi
2023-06-15 | Break up Utils.h a bit to reduce unused code being included everywhere          | Matthew Bentham
2023-06-15 | Break up arm_compute/core/Types.h a bit                                         | Matthew Bentham
2023-05-10 | Re-enable dynamic weights in Neon™ depthwise convolution                        | Ramy Elgammal
2023-05-05 | Connect CLMatMul function to quantized kernels and resolve NE BatchMatMul int...| Jakub Sujak
2023-05-05 | Disable dynamic weights in unsupported operators                                | Viet-Hoa Do
2023-05-03 | Fix im2col for fast-maths mode with padding.                                    | Renato Arantes
2023-05-03 | Fix CPU MatMul broadcast detection                                              | Viet-Hoa Do
2023-05-02 | Fix fully connected and matmul mismatches                                       | Viet-Hoa Do
2023-04-26 | Integrate multi-threaded pretranspose_B_array                                   | SiCong Li
2023-04-19 | Add quantized support for CPU MatMul                                            | Viet-Hoa Do
2023-04-14 | Fix dynamic weights for CPU connected layer                                     | Viet-Hoa Do
2023-04-13 | Implement MatMul Function and Operator with Floating Point support for CPU      | Mohammed Suhail Munshi
2023-03-21 | Add dynamic weights for CPU fully connected layer                               | Viet-Hoa Do
2023-03-13 | [ONCPUML-1174] Allow src/weights mismatch for fixed format                      | Jonathan Deakin
2023-03-03 | NEGEMMLowpMatrixMultiplyCore should be configured for optimized int8 kernel.    | Ethan Doe
2023-03-01 | Add support for kernel indices in Maxpool                                       | Adnan AlSinan
2023-02-01 | Add new operator AddMulAdd for Neon™ backend for Float/Quantized types          | Gunes Bayir
2023-02-01 | Remove fixed format strides hack                                                | Jonathan Deakin
2023-01-18 | Add broadcast batched matmul validation cases                                   | SiCong Li
2023-01-11 | Deprecated BF16 support in DepthConvert                                         | Pablo Marquez Tello
2022-11-01 | Updateable weights in depthwise convolution                                     | Milos Puzovic
2022-11-01 | Add check for Batch Matmul in GemmAssemblyDispatch                              | Mohammed Suhail Munshi
2022-10-20 | Update reinterpret tensor as 1D for CPU add                                     | Viet-Hoa Do
2022-10-12 | Optimize Neon™ Logistic Activation                                              | Mohammed Suhail Munshi
2022-10-12 | Adding documentation section explaining how BF16 is used                        | Ramy Elgammal
2022-10-07 | Optimize Neon™ SUB operator by squashing execution window                       | Jakub Sujak
2022-09-16 | Optimize Quantized/Integer Bilinear Scale for Neon™                             | Gunes Bayir
2022-09-14 | Interpreting tensor as 1D for CPU multiplication                                | Viet-Hoa Do
2022-09-12 | Add test for NEGEMM to test a batched matrix multiplication with variable inp...| Adnan AlSinan
2022-09-09 | Optimize FP32/16 Bilinear Scale Kernel for Neon™                                | Gunes Bayir
2022-09-08 | Disable Winograd on fp16 if fast-math = false                                   | Ramy Elgammal
2022-08-08 | Fix for AI benchmark ResNet regression                                          | Viet-Hoa Do
2022-08-04 | [ONCPUML-970] Fast math mode for fixed format kernels                           | Pablo Marquez Tello
2022-08-03 | [ONCPUML-968] Fixed format kernel support in additional APIs                    | Milos Puzovic
2022-08-01 | Optimize add layer by considering the input tensors as 1D array                 | Gunes Bayir
2022-07-27 | Fix compilation error raised in Nightly_NEW                                     | Ramy Elgammal
2022-07-26 | Fix for inclusion of "arm_gemm" from src into "Types.h" from core               | Ramy Elgammal
2022-07-25 | Enable march=armv8.6-a in non multi-isa builds                                  | Pablo Marquez Tello
2022-07-19 | [ONCPUML-951] Variable weight support for Convolution.                          | Francesco Petrogalli
2022-07-14 | Integrate new winograd APIs from MLTech                                         | ramelg01
2022-06-30 | Wrong arguments for running activation function in CpuGemmDirectConv2d          | Michalis Spyrou
2022-05-24 | [arm_gemm] Import fixed-format kernels from gemm_linux.                         | Francesco.Petrogalli@arm.com
2022-05-06 | Use svcreate instead of list initializations.                                   | Michalis Spyrou