path: root/src/cpu/operators
Age        | Commit message                                                                   | Author
2023-05-10 | Re-enable dynamic weights in Neon™ depthwise convolution                         | Ramy Elgammal
2023-05-05 | Connect CLMatMul function to quantized kernels and resolve NE BatchMatMul int... | Jakub Sujak
2023-05-05 | Disable dynamic weights in unsupported operators                                 | Viet-Hoa Do
2023-05-03 | Fix im2col for fast-maths mode with padding.                                     | Renato Arantes
2023-05-03 | Fix CPU MatMul broadcast detection                                               | Viet-Hoa Do
2023-05-02 | Fix fully connected and matmul mismatches                                        | Viet-Hoa Do
2023-04-26 | Integrate multi-threaded pretranspose_B_array                                    | SiCong Li
2023-04-19 | Add quantized support for CPU MatMul                                             | Viet-Hoa Do
2023-04-14 | Fix dynamic weights for CPU connected layer                                      | Viet-Hoa Do
2023-04-13 | Implement MatMul Function and Operator with Floating Point support for CPU       | Mohammed Suhail Munshi
2023-03-21 | Add dynamic weights for CPU fully connected layer                                | Viet-Hoa Do
2023-03-13 | [ONCPUML-1174] Allow src/weights mismatch for fixed format                       | Jonathan Deakin
2023-03-03 | NEGEMMLowpMatrixMultiplyCore should be configured for optimized int8 kernel.     | Ethan Doe
2023-03-01 | Add support for kernel indices in Maxpool                                        | Adnan AlSinan
2023-02-01 | Add new operator AddMulAdd for Neon™ backend for Float/Quantized types           | Gunes Bayir
2023-02-01 | Remove fixed format strides hack                                                 | Jonathan Deakin
2023-01-18 | Add broadcast batched matmul validation cases                                    | SiCong Li
2023-01-11 | Deprecated BF16 support in DepthConvert                                          | Pablo Marquez Tello
2022-11-01 | Updateable weights in depthwise convolution                                      | Milos Puzovic
2022-11-01 | Add check for Batch Matmul in GemmAssemblyDispatch                               | Mohammed Suhail Munshi
2022-10-20 | Update reinterpret tensor as 1D for CPU add                                      | Viet-Hoa Do
2022-10-12 | Optimize Neon™ Logistic Activation                                               | Mohammed Suhail Munshi
2022-10-12 | Adding documentation section explaining how BF16 is used                         | Ramy Elgammal
2022-10-07 | Optimize Neon™ SUB operator by squashing execution window                        | Jakub Sujak
2022-09-16 | Optimize Quantized/Integer Bilinear Scale for Neon™                              | Gunes Bayir
2022-09-14 | Interpreting tensor as 1D for CPU multiplication                                 | Viet-Hoa Do
2022-09-12 | Add test for NEGEMM to test a batched matrix multiplication with variable inp... | Adnan AlSinan
2022-09-09 | Optimize FP32/16 Bilinear Scale Kernel for Neon™                                 | Gunes Bayir
2022-09-08 | Disable Winograd on fp16 if fast-math = false                                    | Ramy Elgammal
2022-08-08 | Fix for AI benchmark ResNet regression                                           | Viet-Hoa Do
2022-08-04 | [ONCPUML-970] Fast math mode for fixed format kernels                            | Pablo Marquez Tello
2022-08-03 | [ONCPUML-968] Fixed format kernel support in additional APIs                     | Milos Puzovic
2022-08-01 | Optimize add layer by considering the input tensors as 1D array                  | Gunes Bayir
2022-07-27 | Fix compilation error raised in Nightly_NEW                                      | Ramy Elgammal
2022-07-26 | Fix for inclusion of "arm_gemm" from src into "Types.h" from core                | Ramy Elgammal
2022-07-25 | Enable march=armv8.6-a in non multi-isa builds                                   | Pablo Marquez Tello
2022-07-19 | [ONCPUML-951] Variable weight support for Convolution.                           | Francesco Petrogalli
2022-07-14 | Integrate new winograd APIs from MLTech                                          | ramelg01
2022-06-30 | Wrong arguments for running activation function in CpuGemmDirectConv2d           | Michalis Spyrou
2022-05-24 | [arm_gemm] Import fixed-format kernels from gemm_linux.                          | Francesco.Petrogalli@arm.com
2022-05-06 | Use svcreate instead of list initializations.                                    | Michalis Spyrou
2022-04-22 | [CpuGemmConv2d] Extract skip_im2col and skip_col2im computation.                 | Francesco.Petrogalli@arm.com
2022-04-13 | Add support for int8 CpuPool3d                                                   | Adnan AlSinan
2022-04-12 | Fix CpuGemmAssemblyDispatch::has_opt_impl.                                       | Francesco.Petrogalli@arm.com
2022-04-06 | [arm_gemm] Use static validate to find arm_gemm kernels.                         | Francesco.Petrogalli@arm.com
2022-04-01 | Add CPU Pool3d FP16/32 implementation                                            | Adnan AlSinan
2022-03-16 | Remove deprecated interface from arm_compute.                                    | Francesco.Petrogalli@arm.com
2022-02-14 | Port MaxUnpoolingLayer kernel and add KernelSelect validation test               | Dana Zlotnik
2022-02-01 | Enable kernel selection testing (Phase #2)                                       | Yair Schwarzbaum
2021-11-09 | Enable fast_math in CpuFullyConnected                                            | cfRod