path: root/src/cpu
Age        | Commit message                                                                  | Author
2023-02-03 | Fix armv7a failing GEMMConvolutionLayer tests                                   | Mohammed Suhail Munshi
2023-02-02 | Fix GEMMLowp/Batched MatMul mismatches on CPU                                   | Mohammed Suhail Munshi
2023-02-01 | Add new operator AddMulAdd for Neon™ backend for Float/Quantized types          | Gunes Bayir
2023-02-01 | Remove fixed format strides hack                                                | Jonathan Deakin
2023-01-18 | Add broadcast batched matmul validation cases                                   | SiCong Li
2023-01-18 | Revert "Update CPU kernels to remove x19"                                       | Michael Tyler
2023-01-16 | Update CPU kernels to remove x19                                                | Michael Tyler
2023-01-11 | Deprecated BF16 support in DepthConvert                                         | Pablo Marquez Tello
2022-12-29 | Use CPU quantized addition kernel for quantized subtraction                     | Omar Al Khatib
2022-12-21 | Fixed various mismatches in CpuCastKernel                                       | Pablo Marquez Tello
2022-11-30 | Fix build error for unused variables in data type specific builds               | Gunes Bayir
2022-11-23 | ONCPUML-1072: Remove double definition of get_mws for Mul kernel                | fadara01
2022-11-22 | ONCPUML-1072: Tuned MWS values (for N1, V1) for binary operators used by oneDNN | Fadi Arafeh
2022-11-15 | Fix regression caused by mws in ActivationLayer                                 | Mohammed Suhail Munshi
2022-11-15 | Fixed Arm NN unit test failure caused by quantised multiplication patch.        | Omar Al Khatib
2022-11-09 | Fix CPU multiplication layer threading overhead                                 | Viet-Hoa Do
2022-11-08 | SVE Hard-Swish via Lookup table for quantized input                             | Pablo Marquez Tello
2022-11-07 | Optimize CPU mul layer on quantized data                                        | Omar Al Khatib
2022-11-01 | Fix fixed-point quantized addition                                              | Viet-Hoa Do
2022-11-01 | Updateable weights in depthwise convolution                                     | Milos Puzovic
2022-11-01 | Add threshold for floating-point SOFT_RELU activation                           | Milos Puzovic
2022-11-01 | Add check for Batch Matmul in GemmAssemblyDispatch                              | Mohammed Suhail Munshi
2022-10-27 | Fix fixed-point quantized addition                                              | Viet-Hoa Do
2022-10-20 | Update reinterpret tensor as 1D for CPU add                                     | Viet-Hoa Do
2022-10-20 | Add test in GEMMLowp for batch matmul                                           | Mohammed Suhail Munshi
2022-10-19 | Fix FFTConvolutionLayer test                                                    | Viet-Hoa Do
2022-10-12 | Optimize Neon™ Logistic Activation                                              | Mohammed Suhail Munshi
2022-10-12 | Adding documentation section explaining how BF16 is used                        | Ramy Elgammal
2022-10-10 | Fix LUT-based activation layer                                                  | Viet-Hoa Do
2022-10-07 | Optimize Neon™ SUB operator by squashing execution window                       | Jakub Sujak
2022-10-03 | Fix Batch Matmul nightly failure                                                | Adnan AlSinan
2022-10-03 | Optimize CPU add layer on quantized data                                        | Viet-Hoa Do
2022-09-26 | Add FP32 Neon™ swish activation                                                 | Jonathan Deakin
2022-09-22 | Fix unresolved symbol for target armv7a + Android                               | Pablo Marquez Tello
2022-09-16 | Fix bug in QASYMM8_SIGNED to F32 cast layer                                     | Viet-Hoa Do
2022-09-16 | Optimize Quantized/Integer Bilinear Scale for Neon™                             | Gunes Bayir
2022-09-14 | Interpreting tensor as 1D for CPU multiplication                                | Viet-Hoa Do
2022-09-14 | Adding GELU activation                                                          | Murray Kornelsen
2022-09-14 | INT8 Quantized MeanStdDevNorm (LayerNorm)                                       | Murray Kornelsen
2022-09-12 | Add test for NEGEMM to test a batched matrix multiplication with variable inp...| Adnan AlSinan
2022-09-09 | Optimize FP32/16 Bilinear Scale Kernel for Neon™                                | Gunes Bayir
2022-09-08 | Disable Winograd on fp16 if fast-math = false                                   | Ramy Elgammal
2022-09-02 | F16 Specialization for MeanStdDevNorm                                           | Murray Kornelsen
2022-08-24 | Fix add for tensors with non-matching strides                                   | Jonathan Deakin
2022-08-18 | Use Neon™ kernels for FP Bilinear Resize for SVE                                | Gunes Bayir
2022-08-17 | Add LUT for quantized sigmoid function                                          | Viet-Hoa Do
2022-08-08 | Fix for AI benchmark ResNet regression                                          | Viet-Hoa Do
2022-08-04 | [ONCPUML-970] Fast math mode for fixed format kernels                           | Pablo Marquez Tello
2022-08-03 | [ONCPUML-968] Fixed format kernel support in additional APIs                    | Milos Puzovic
2022-08-01 | Optimize add layer by considering the input tensors as 1D array                 | Gunes Bayir
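Several commits in this log (the SVE Hard-Swish via lookup table, "Add LUT for quantized sigmoid function", and "Fix LUT-based activation layer") rely on the same idea: for an 8-bit quantized input there are only 256 possible values, so an activation function can be precomputed into a 256-entry table and applied with a single lookup per element. The following is a minimal sketch of that idea only; `make_sigmoid_lut` is a hypothetical helper for illustration, not the Compute Library's actual API.

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <cstdint>

// Hypothetical helper (not the Compute Library API): build a 256-entry
// table mapping every possible QASYMM8 input value to the quantized
// sigmoid of its dequantized value. Assumes the usual asymmetric
// quantization scheme: real = scale * (q - offset).
std::array<uint8_t, 256> make_sigmoid_lut(float in_scale, int32_t in_offset,
                                          float out_scale, int32_t out_offset)
{
    std::array<uint8_t, 256> lut{};
    for (int q = 0; q < 256; ++q)
    {
        const float x   = in_scale * (static_cast<float>(q) - static_cast<float>(in_offset)); // dequantize
        const float y   = 1.0f / (1.0f + std::exp(-x));                                       // sigmoid
        const int32_t o = static_cast<int32_t>(std::lround(y / out_scale)) + out_offset;      // requantize
        lut[q] = static_cast<uint8_t>(std::clamp(o, 0, 255));                                 // saturate
    }
    return lut;
}

// Applying the activation then costs one table lookup per element,
// with no exp() in the inner loop:
void sigmoid_qasymm8(const uint8_t *src, uint8_t *dst, size_t n,
                     const std::array<uint8_t, 256> &lut)
{
    for (size_t i = 0; i < n; ++i)
    {
        dst[i] = lut[src[i]];
    }
}
```

On SVE-capable cores the inner lookup loop can additionally be vectorized with table-lookup instructions, which is what makes the approach attractive for the kernels listed above.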