ComputeLibrary.git

Branches:
branches/arm_compute_19_02
branches/arm_compute_19_05
branches/arm_compute_19_08
branches/arm_compute_19_11
branches/arm_compute_20_02
branches/arm_compute_20_05
branches/arm_compute_20_08
branches/arm_compute_20_11
branches/arm_compute_21_02
branches/arm_compute_21_05
branches/arm_compute_21_08
branches/arm_compute_21_11
branches/arm_compute_22_02
branches/arm_compute_22_05
branches/arm_compute_22_08
branches/arm_compute_22_11
branches/arm_compute_23_02
branches/arm_compute_23_02_1
branches/arm_compute_23_05
branches/arm_compute_23_05_1
branches/arm_compute_23_08
branches/arm_compute_23_11
branches/arm_compute_24_01
branches/arm_compute_24_02
branches/arm_compute_24_02_1
branches/arm_compute_24_04
branches/arm_compute_24_05
branches/arm_compute_24_06
branches/arm_compute_24_07
dev/21_02_int8_optim
dev/21_05_int8_optim
main
master
release_candidate
release_candidate_tmp
Log (path: root/src)
Age  Commit message  Author
2022-11-14  Optimize Transposed Convolution for CL backend (FP32/16)  (Gunes Bayir)
2022-11-14  Optimize T_QUANTIZE8_ASYMMETRIC for Mali™ G52  (Pablo Marquez Tello)
2022-11-10  Fix compiler warnings in dynamic fusion  (SiCong Li)
2022-11-09  Fix CPU multiplication layer threading overhead  (Viet-Hoa Do)
2022-11-08  SVE Hard-Swish via Lookup table for quantized input  (Pablo Marquez Tello)
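The Hard-Swish-via-lookup-table entry above relies on a standard trick for 8-bit quantized activations: an 8-bit input can only take 256 distinct values, so any elementwise activation can be precomputed once into a 256-entry table and then applied as a single gather per element. A minimal Python sketch of the idea — not ComputeLibrary's SVE implementation; the quantization parameters below are invented example values:

```python
import numpy as np

def hard_swish(x):
    # Hard-Swish as defined in MobileNetV3: x * relu6(x + 3) / 6
    return x * np.clip(x + 3.0, 0.0, 6.0) / 6.0

def build_lut(in_scale, in_zero, out_scale, out_zero):
    # Precompute the activation for every possible uint8 input value.
    q = np.arange(256, dtype=np.float32)
    real = (q - in_zero) * in_scale                  # dequantize
    act = hard_swish(real)                           # activation in float
    requant = np.round(act / out_scale) + out_zero   # requantize
    return np.clip(requant, 0, 255).astype(np.uint8)

# Illustrative-only quantization parameters.
lut = build_lut(in_scale=0.05, in_zero=128, out_scale=0.05, out_zero=128)

def quantized_hard_swish(q_input):
    # At runtime the whole activation collapses to one table lookup per element.
    return lut[q_input]
```

The table build is done once per layer configuration; the per-element cost is then independent of how expensive the activation function itself is.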
2022-11-07  Optimize CPU mul layer on quantized data  (Omar Al Khatib)
2022-11-04  Fix compiler warnings in dynamic fusion  (SiCong Li)
2022-11-03  Fix activation block in gemm.cl  (Gian Marco Iodice)
2022-11-02  Partially Revert "Add threshold for floating-point SOFT_RELU activation"  (Gunes Bayir)
2022-11-01  Fix fixed-point quantized addition  (Viet-Hoa Do)
2022-11-01  Updateable weights in depthwise convolution  (Milos Puzovic)
2022-11-01  Add threshold for floating-point SOFT_RELU activation  (Milos Puzovic)
2022-11-01  Add check for Batch Matmul in GemmAssemblyDispatch  (Mohammed Suhail Munshi)
2022-11-01  Rewrite dynamic fusion  (SiCong Li)
2022-11-01  Rework direct convolution heuristic on OpenCL  (Gian Marco Iodice)
2022-10-27  Fix fixed-point quantized addition  (Viet-Hoa Do)
2022-10-24  Add FP16 tanh based on rational approximation  (Jonathan Deakin)
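Rational approximations like the FP16 tanh above replace a transcendental function with a ratio of low-degree polynomials, which maps to a handful of fused multiply-adds plus one divide. Purely as an illustration (ComputeLibrary's coefficients and range handling may differ), the classic [3/2] Padé approximant of tanh matches its Taylor series through the x⁵ term:

```python
def tanh_pade(x):
    # [3/2] Pade approximant: tanh(x) ~= x * (x^2 + 15) / (6*x^2 + 15).
    # Accurate near zero; real kernels clamp or switch forms for large |x|.
    x2 = x * x
    return x * (x2 + 15.0) / (6.0 * x2 + 15.0)
```

For |x| < 1 this stays within about 1e-3 of the true tanh, which is often within FP16 rounding anyway; production code typically saturates the result toward ±1 outside the accurate range.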
2022-10-20  Update reinterpret tensor as 1D for CPU add  (Viet-Hoa Do)
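Reinterpreting a tensor as 1D, as in the entry above, is a common CPU optimisation: when every operand is dense and contiguous, an elementwise op over an N-dimensional execution window can collapse into a single flat loop, removing per-row iteration overhead. A rough sketch of the idea — a hypothetical helper, not the ComputeLibrary API:

```python
import numpy as np

def elementwise_add(a, b):
    # Hypothetical helper: if both inputs are dense and contiguous, treat
    # them as flat 1D buffers so the kernel runs one long inner loop
    # instead of iterating an N-dimensional execution window.
    if a.flags["C_CONTIGUOUS"] and b.flags["C_CONTIGUOUS"] and a.shape == b.shape:
        flat = a.reshape(-1) + b.reshape(-1)   # single 1D pass
        return flat.reshape(a.shape)
    return a + b                               # general N-D path

x = np.ones((2, 3, 4), dtype=np.float32)
y = np.full((2, 3, 4), 2.0, dtype=np.float32)
z = elementwise_add(x, y)   # every element is 3.0
```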
2022-10-20  Add test in GEMMLowp for batch matmul  (Mohammed Suhail Munshi)
2022-10-19  Fix FFTConvolutionLayer test  (Viet-Hoa Do)
2022-10-12  Optimize Neon™ Logistic Activation  (Mohammed Suhail Munshi)
2022-10-12  Adding documentation section explaining how BF16 is used  (Ramy Elgammal)
2022-10-10  Fix LUT-based activation layer  (Viet-Hoa Do)
2022-10-07  Workaround CL compiler issue on FP16  (Viet-Hoa Do)
2022-10-07  Optimize Neon™ SUB operator by squashing execution window  (Jakub Sujak)
2022-10-06  Rework DepthwiseConvolution heuristic on OpenCL  (Gian Marco Iodice)
2022-10-06  Improve start-up time in gemmlowp reshaped rhs only.  (Adnan AlSinan)
2022-10-04  Update GEMM reshaped rhs only heuristic  (Gian Marco Iodice)
2022-10-03  Force CL kernel compilation with 64 registers  (Viet-Hoa Do)
2022-10-03  Fix Batch Matmul nightly failure  (Adnan AlSinan)
2022-10-03  Optimize CPU add layer on quantized data  (Viet-Hoa Do)
2022-09-28  Fix overflow in NEActivationLayer for FP16 type  (Pablo Marquez Tello)
2022-09-26  Add FP32 Neon™ swish activation  (Jonathan Deakin)
2022-09-23  CPU GEMM: Fix overreads in SVE merges.  (David Mansell)
2022-09-22  Fix unresolved symbol for target armv7a + Android  (Pablo Marquez Tello)
2022-09-16  Fix validation in validate_image2d_support_on_rhs  (Gian Marco Iodice)
2022-09-16  Fix bug in QASYMM8_SIGNED to F32 cast layer  (Viet-Hoa Do)
2022-09-16  Optimize Quantized/Integer Bilinear Scale for Neon™  (Gunes Bayir)
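Bilinear scaling, the operation optimised in the entry above, samples the four neighbouring input pixels of each output position and blends them with weights given by the fractional part of the mapped coordinate. A minimal, framework-agnostic sketch for a single-channel 2D image — the coordinate mapping and border policy here are simplified assumptions, not ComputeLibrary's exact sampling rules:

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    in_h, in_w = img.shape
    out = np.empty((out_h, out_w), dtype=np.float32)
    for oy in range(out_h):
        for ox in range(out_w):
            # Map the output pixel back into input coordinates.
            fy = oy * in_h / out_h
            fx = ox * in_w / out_w
            y0, x0 = int(fy), int(fx)
            y1, x1 = min(y0 + 1, in_h - 1), min(x0 + 1, in_w - 1)
            wy, wx = fy - y0, fx - x0
            # Blend the four neighbours by their fractional weights.
            top = img[y0, x0] * (1 - wx) + img[y0, x1] * wx
            bot = img[y1, x0] * (1 - wx) + img[y1, x1] * wx
            out[oy, ox] = top * (1 - wy) + bot * wy
    return out
```

For quantized/integer inputs the same blend is typically done in widened integer arithmetic with fixed-point weights, then narrowed back to the output type.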
2022-09-14  Interpreting tensor as 1D for CPU multiplication  (Viet-Hoa Do)
2022-09-14  Fix invalid memory access for dynamically fused Cl Elementwise kernels  (SiCong Li)
2022-09-14  Adding GELU activation  (Murray Kornelsen)
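GELU, added in the entry above, is defined as x·Φ(x) where Φ is the standard normal CDF; implementations generally choose between the exact erf form and the cheaper tanh approximation. Both forms sketched generically (which one this commit uses is not stated in the log):

```python
import math

def gelu_exact(x):
    # Exact form: x * Phi(x), with the normal CDF written via erf.
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def gelu_tanh(x):
    # Widely used tanh approximation of the same curve.
    c = math.sqrt(2.0 / math.pi)
    return 0.5 * x * (1.0 + math.tanh(c * (x + 0.044715 * x ** 3)))
```

The two agree to roughly 1e-3 over typical activation ranges, which is why the tanh form is popular on hardware without a fast erf.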
2022-09-14  INT8 Quantized MeanStdDevNorm (LayerNorm)  (Murray Kornelsen)
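MeanStdDevNorm normalises each row to zero mean and unit variance, which is the core of LayerNorm; an INT8 variant accumulates the statistics in wider arithmetic and requantizes the result. The floating-point reference computation, as a sketch only:

```python
import numpy as np

def mean_stddev_norm(x, eps=1e-8):
    # Normalize each row: subtract its mean, divide by its std deviation.
    # eps guards against division by zero on constant rows.
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

x = np.array([[1.0, 2.0, 3.0, 4.0]], dtype=np.float32)
y = mean_stddev_norm(x)
# each row of y now has (approximately) zero mean and unit variance
```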
2022-09-12  Add test for NEGEMM to test a batched matrix multiplication with variable inp...  (Adnan AlSinan)
2022-09-09  Rework heuristic in ClConv2d  (Gian Marco Iodice)
2022-09-09  Optimize FP32/16 Bilinear Scale Kernel for Neon™  (Gunes Bayir)
2022-09-09  Add a macro guard in all OpenCL kernels in gemmlowp.cl  (Gian Marco Iodice)
2022-09-08  Disable Winograd on fp16 if fast-math = false  (Ramy Elgammal)
2022-09-07  Optimize depthwise convolution on OpenCL  (Gian Marco Iodice)
2022-09-02  F16 Specialization for MeanStdDevNorm  (Murray Kornelsen)
2022-09-02  Enable Winograd-based conv2d when IFM>=8 on Gpu  (Gian Marco Iodice)