Age        | Commit message                                                                   | Author
2022-11-17 | Fix documentation about BF16 acceleration (v22.11, branches/arm_compute_22_11)  | Viet-Hoa Do
2022-11-15 | Fix release notes for 22.11                                                      | Viet-Hoa Do
2022-11-15 | Add release notes for 22.11                                                      | Viet-Hoa Do
2022-11-15 | Update README                                                                    | Viet-Hoa Do
2022-11-11 | Fix compiler warnings in dynamic fusion                                          | SiCong Li
2022-11-09 | Fix CPU multiplication layer threading overhead                                  | Viet-Hoa Do
2022-11-04 | Fix compiler warnings in dynamic fusion                                          | SiCong Li
2022-11-04 | Fix activation block in gemm.cl                                                  | Gian Marco Iodice
2022-11-03 | Update SONAME_VERSION in SConscript                                              | Viet-Hoa Do
2022-11-02 | Add Dynamic Fusion GpuConv2d FP32/FP16 testcase                                  | Ramy Elgammal
2022-11-02 | Partially Revert "Add threshold for floating-point SOFT_RELU activation"         | Gunes Bayir
2022-11-01 | Fix fixed-point quantized addition                                               | Viet-Hoa Do
2022-11-01 | Updateable weights in depthwise convolution                                      | Milos Puzovic
2022-11-01 | Add threshold for floating-point SOFT_RELU activation                            | Milos Puzovic
2022-11-01 | Add check for Batch Matmul in GemmAssemblyDispatch                               | Mohammed Suhail Munshi
2022-11-01 | Rewrite dynamic fusion                                                           | SiCong Li
2022-11-01 | Rework direct convolution heuristic on OpenCL                                    | Gian Marco Iodice
2022-10-27 | Fix fixed-point quantized addition                                               | Viet-Hoa Do
2022-10-25 | Fix compiler warning                                                             | Pablo Marquez Tello
2022-10-24 | Add FP16 tanh based on rational approximation                                    | Jonathan Deakin
2022-10-21 | Fix mapfile generation in Clang                                                  | Pablo Marquez Tello
2022-10-20 | Update reinterpret tensor as 1D for CPU add                                      | Viet-Hoa Do
2022-10-20 | Add test in GEMMLowp for batch matmul                                            | Mohammed Suhail Munshi
2022-10-19 | Fix FFTConvolutionLayer test                                                     | Viet-Hoa Do
2022-10-12 | Add scons option to generate Map files                                           | Pablo Marquez Tello
2022-10-12 | Optimize Neon™ Logistic Activation                                               | Mohammed Suhail Munshi
2022-10-12 | Add documentation section explaining how BF16 is used                            | Ramy Elgammal
2022-10-10 | Use https to embed MathJax in documentation                                      | Viet-Hoa Do
2022-10-10 | Fix LUT-based activation layer                                                   | Viet-Hoa Do
2022-10-07 | Workaround CL compiler issue on FP16                                             | Viet-Hoa Do
2022-10-07 | Optimize Neon™ SUB operator by squashing execution window                        | Jakub Sujak
2022-10-06 | Rework DepthwiseConvolution heuristic on OpenCL                                  | Gian Marco Iodice
2022-10-06 | Improve start-up time in gemmlowp reshaped rhs only                              | Adnan AlSinan
2022-10-04 | Update GEMM reshaped rhs only heuristic                                          | Gian Marco Iodice
2022-10-03 | Force CL kernel compilation with 64 registers                                    | Viet-Hoa Do
2022-10-03 | Fix Batch Matmul nightly failure                                                 | Adnan AlSinan
2022-10-03 | Enable FP16 when the target is armv8.6-a                                         | Pablo Marquez Tello
2022-10-03 | Optimize CPU add layer on quantized data                                         | Viet-Hoa Do
2022-09-28 | Fix overflow in NEActivationLayer for FP16 type                                  | Pablo Marquez Tello
2022-09-26 | Add FP32 Neon™ swish activation                                                  | Jonathan Deakin
2022-09-23 | CPU GEMM: Fix overreads in SVE merges                                            | David Mansell
2022-09-22 | Fix unresolved symbol for target armv7a + Android                                | Pablo Marquez Tello
2022-09-21 | Add test for ClGemmLowpMatrixMultiplyCore to test a batched matrix multiplica... | Ramy Elgammal
2022-09-16 | Fix validation in validate_image2d_support_on_rhs                                | Gian Marco Iodice
2022-09-16 | Fix bug in QASYMM8_SIGNED to F32 cast layer                                      | Viet-Hoa Do
2022-09-16 | Optimize Quantized/Integer Bilinear Scale for Neon™                              | Gunes Bayir
2022-09-14 | Interpret tensor as 1D for CPU multiplication                                    | Viet-Hoa Do
2022-09-14 | Fix invalid memory access for dynamically fused Cl Elementwise kernels           | SiCong Li
2022-09-14 | Add GELU activation                                                              | Murray Kornelsen
2022-09-14 | INT8 Quantized MeanStdDevNorm (LayerNorm)                                        | Murray Kornelsen