ComputeLibrary.git
path: root/src/cpu
Age | Commit message | Author
2023-02-08 | Add support for dilation > 1 in assembly DepthwiseConvolution | Pablo Marquez Tello
2023-02-03 | Fix armv7a failing GEMMConvolutionLayer tests | Mohammed Suhail Munshi
2023-02-01 | Fix GEMMLowp/Batched MatMul mismatches on CPU | Mohammed Suhail Munshi
2023-02-01 | Add new operator AddMulAdd for Neon™ backend for Float/Quantized types | Gunes Bayir
2023-02-01 | Remove fixed format strides hack | Jonathan Deakin
2023-01-18 | Add broadcast batched matmul validation cases | SiCong Li
2023-01-18 | Revert "Update CPU kernels to remove x19" | Michael Tyler
2023-01-16 | Update CPU kernels to remove x19 | Michael Tyler
2023-01-11 | Deprecated BF16 support in DepthConvert | Pablo Marquez Tello
2022-12-29 | Use CPU quantized addition kernel for quantized subtraction | Omar Al Khatib
2022-12-21 | Fixed various mismatches in CpuCastKernel | Pablo Marquez Tello
2022-11-30 | Fix build error for unused variables in data type specific builds | Gunes Bayir
2022-11-23 | ONCPUML-1072: Remove double definition of get_mws for Mul kernel | fadara01
2022-11-22 | ONCPUML-1072: Tuned MWS values (for N1, V1) for binary operators used by oneDNN | Fadi Arafeh
2022-11-15 | Fix regression caused by mws in ActivationLayer | Mohammed Suhail Munshi
2022-11-15 | Fixed Arm NN unit test failure caused by quantised multiplication patch. | Omar Al Khatib
2022-11-09 | Fix CPU multiplication layer threading overhead | Viet-Hoa Do
2022-11-08 | SVE Hard-Swish via Lookup table for quantized input | Pablo Marquez Tello
2022-11-07 | Optimize CPU mul layer on quantized data | Omar Al Khatib
2022-11-01 | Fix fixed-point quantized addition | Viet-Hoa Do
2022-11-01 | Updateable weights in depthwise convolution | Milos Puzovic
2022-11-01 | Add threshold for floating-point SOFT_RELU activation | Milos Puzovic
2022-11-01 | Add check for Batch Matmul in GemmAssemblyDispatch | Mohammed Suhail Munshi
2022-10-27 | Fix fixed-point quantized addition | Viet-Hoa Do
2022-10-20 | Update reinterpret tensor as 1D for CPU add | Viet-Hoa Do
2022-10-20 | Add test in GEMMLowp for batch matmul | Mohammed Suhail Munshi
2022-10-19 | Fix FFTConvolutionLayer test | Viet-Hoa Do
2022-10-12 | Optimize Neon™ Logistic Activation | Mohammed Suhail Munshi
2022-10-12 | Adding documentation section explaining how BF16 is used | Ramy Elgammal
2022-10-10 | Fix LUT-based activation layer | Viet-Hoa Do
2022-10-07 | Optimize Neon™ SUB operator by squashing execution window | Jakub Sujak
2022-10-03 | Fix Batch Matmul nightly failure | Adnan AlSinan
2022-10-03 | Optimize CPU add layer on quantized data | Viet-Hoa Do
2022-09-26 | Add FP32 Neon™ swish activation | Jonathan Deakin
2022-09-22 | Fix unresolved symbol for target armv7a + Android | Pablo Marquez Tello
2022-09-16 | Fix bug in QASYMM8_SIGNED to F32 cast layer | Viet-Hoa Do
2022-09-16 | Optimize Quantized/Integer Bilinear Scale for Neon™ | Gunes Bayir
2022-09-14 | Interpreting tensor as 1D for CPU multiplication | Viet-Hoa Do
2022-09-14 | Adding GELU activation | Murray Kornelsen
2022-09-14 | INT8 Quantized MeanStdDevNorm (LayerNorm) | Murray Kornelsen
2022-09-12 | Add test for NEGEMM to test a batched matrix multiplication with variable inp... | Adnan AlSinan
2022-09-09 | Optimize FP32/16 Bilinear Scale Kernel for Neon™ | Gunes Bayir
2022-09-08 | Disable Winograd on fp16 if fast-math = false | Ramy Elgammal
2022-09-02 | F16 Specialization for MeanStdDevNorm | Murray Kornelsen
2022-08-24 | Fix add for tensors with non-matching strides | Jonathan Deakin
2022-08-18 | Use Neon™ kernels for FP Bilinear Resize for SVE | Gunes Bayir
2022-08-17 | Add LUT for quantized sigmoid function | Viet-Hoa Do
2022-08-08 | Fix for AI benchmark ResNet regression | Viet-Hoa Do
2022-08-04 | [ONCPUML-970] Fast math mode for fixed format kernels | Pablo Marquez Tello
2022-08-03 | [ONCPUML-968] Fixed format kernel support in additional APIs | Milos Puzovic