ComputeLibrary.git
Branches:
branches/arm_compute_19_02
branches/arm_compute_19_05
branches/arm_compute_19_08
branches/arm_compute_19_11
branches/arm_compute_20_02
branches/arm_compute_20_05
branches/arm_compute_20_08
branches/arm_compute_20_11
branches/arm_compute_21_02
branches/arm_compute_21_05
branches/arm_compute_21_08
branches/arm_compute_21_11
branches/arm_compute_22_02
branches/arm_compute_22_05
branches/arm_compute_22_08
branches/arm_compute_22_11
branches/arm_compute_23_02
branches/arm_compute_23_02_1
branches/arm_compute_23_05
branches/arm_compute_23_05_1
branches/arm_compute_23_08
branches/arm_compute_23_11
branches/arm_compute_24_01
branches/arm_compute_24_02
branches/arm_compute_24_02_1
branches/arm_compute_24_04
branches/arm_compute_24_05
branches/arm_compute_24_06
branches/arm_compute_24_07
dev/21_02_int8_optim
dev/21_05_int8_optim
main
master
release_candidate
Age        | Commit message                                                           | Author
2022-11-22 | space-to-depth shape calculator fix                                      | Ramy Elgammal
2022-11-18 | Add num_threads_to_use to OMPScheduler based on workload size            | cfRod
2022-11-17 | Fix documentation about BF16 acceleration                                | Viet-Hoa Do
2022-11-15 | Fix release notes for 22.11                                              | Viet-Hoa Do
2022-11-15 | Fix regression caused by mws in ActivationLayer                          | Mohammed Suhail Munshi
2022-11-15 | Fix GemmLowp BatchMatMul Tests to use quantized Outputs                  | Mohammed Suhail Munshi
2022-11-15 | Fixed Arm NN unit test failure caused by quantised multiplication patch. | Omar Al Khatib
2022-11-15 | Add release notes for 22.11                                              | Viet-Hoa Do
2022-11-14 | Optimize Transposed Convolution for CL backend (FP32/16)                 | Gunes Bayir
2022-11-14 | Update README                                                            | Viet-Hoa Do
2022-11-14 | Optimize T_QUANTIZE8_ASYMMETRIC for Mali™ G52                            | Pablo Marquez Tello
2022-11-10 | Fix compiler warnings in dynamic fusion                                  | SiCong Li
2022-11-09 | Fix CPU multiplication layer threading overhead                          | Viet-Hoa Do
2022-11-08 | SVE Hard-Swish via Lookup table for quantized input                      | Pablo Marquez Tello
2022-11-07 | Optimize CPU mul layer on quantized data                                 | Omar Al Khatib
2022-11-04 | Fix compiler warnings in dynamic fusion                                  | SiCong Li
2022-11-04 | Update SONAME_VERSION in SConscript                                      | Viet-Hoa Do
2022-11-03 | Fix activation block in gemm.cl                                          | Gian Marco Iodice
2022-11-02 | Add Dynamic Fusion GpuConv2d FP32/FP16 testcase                          | Ramy Elgammal
2022-11-02 | Partially Revert "Add threshold for floating-point SOFT_RELU activation" | Gunes Bayir
2022-11-01 | Fix fixed-point quantized addition                                       | Viet-Hoa Do
2022-11-01 | Updateable weights in depthwise convolution                              | Milos Puzovic
2022-11-01 | Add threshold for floating-point SOFT_RELU activation                    | Milos Puzovic
2022-11-01 | Add check for Batch Matmul in GemmAssemblyDispatch                       | Mohammed Suhail Munshi
2022-11-01 | Rewrite dynamic fusion                                                   | SiCong Li
2022-11-01 | Rework direct convolution heuristic on OpenCL                            | Gian Marco Iodice
2022-10-27 | Fix fixed-point quantized addition                                       | Viet-Hoa Do
2022-10-25 | Fix compiler warning                                                     | Pablo Marquez Tello
2022-10-24 | Add FP16 tanh based on rational approximation                            | Jonathan Deakin
2022-10-21 | Fix mapfile generation in Clang                                          | Pablo Marquez Tello
2022-10-20 | Update reinterpret tensor as 1D for CPU add                              | Viet-Hoa Do
2022-10-20 | Add test in GEMMLowp for batch matmul                                    | Mohammed Suhail Munshi
2022-10-19 | Fix FFTConvolutionLayer test                                             | Viet-Hoa Do
2022-10-12 | Add scons option to generate Map files.                                  | Pablo Marquez Tello
2022-10-12 | Optimize Neon™ Logistic Activation                                       | Mohammed Suhail Munshi
2022-10-12 | Adding documentation section explaining how BF16 is used                 | Ramy Elgammal
2022-10-10 | Use https to embed MathJax to documentation                              | Viet-Hoa Do
2022-10-10 | Fix LUT-based activation layer                                           | Viet-Hoa Do
2022-10-07 | Workaround CL compiler issue on FP16                                     | Viet-Hoa Do
2022-10-07 | Optimize Neon™ SUB operator by squashing execution window                | Jakub Sujak
2022-10-06 | Rework DepthwiseConvolution heuristic on OpenCL                          | Gian Marco Iodice
2022-10-06 | Improve start-up time in gemmlowp reshaped rhs only.                     | Adnan AlSinan
2022-10-04 | Update GEMM reshaped rhs only heuristic                                  | Gian Marco Iodice
2022-10-03 | Force CL kernel compilation with 64 registers                            | Viet-Hoa Do
2022-10-03 | Fix Batch Matmul nightly failure                                         | Adnan AlSinan
2022-10-03 | Enable FP16 when the target is armv8.6-a                                 | Pablo Marquez Tello
2022-10-03 | Optimize CPU add layer on quantized data                                 | Viet-Hoa Do
2022-09-28 | Fix overflow in NEActivationLayer for FP16 type                          | Pablo Marquez Tello
2022-09-26 | Add FP32 Neon™ swish activation                                          | Jonathan Deakin
2022-09-23 | CPU GEMM: Fix overreads in SVE merges.                                   | David Mansell