Date        Commit message  (Author)

2022-11-22  Remove dynamic fusion prototype with tests and examples  (SiCong Li)
2022-11-22  ONCPUML-1072: Tuned MWS values (for N1, V1) for binary operators used by oneDNN  (Fadi Arafeh)
2022-11-22  Space-to-depth shape calculator fix  (Ramy Elgammal)
2022-11-18  Add num_threads_to_use to OMPScheduler based on workload size  (cfRod)
2022-11-17  Fix documentation about BF16 acceleration  (Viet-Hoa Do)
2022-11-15  Fix release notes for 22.11  (Viet-Hoa Do)
2022-11-15  Fix regression caused by MWS in ActivationLayer  (Mohammed Suhail Munshi)
2022-11-15  Fix GemmLowp BatchMatMul tests to use quantized outputs  (Mohammed Suhail Munshi)
2022-11-15  Fixed Arm NN unit test failure caused by quantised multiplication patch  (Omar Al Khatib)
2022-11-15  Add release notes for 22.11  (Viet-Hoa Do)
2022-11-14  Optimize transposed convolution for CL backend (FP32/16)  (Gunes Bayir)
2022-11-14  Update README  (Viet-Hoa Do)
2022-11-14  Optimize T_QUANTIZE8_ASYMMETRIC for Mali™ G52  (Pablo Marquez Tello)
2022-11-10  Fix compiler warnings in dynamic fusion  (SiCong Li)
2022-11-09  Fix CPU multiplication layer threading overhead  (Viet-Hoa Do)
2022-11-08  SVE Hard-Swish via lookup table for quantized input  (Pablo Marquez Tello)
2022-11-07  Optimize CPU mul layer on quantized data  (Omar Al Khatib)
2022-11-04  Fix compiler warnings in dynamic fusion  (SiCong Li)
2022-11-04  Update SONAME_VERSION in SConscript  (Viet-Hoa Do)
2022-11-03  Fix activation block in gemm.cl  (Gian Marco Iodice)
2022-11-02  Add dynamic fusion GpuConv2d FP32/FP16 testcase  (Ramy Elgammal)
2022-11-02  Partially revert "Add threshold for floating-point SOFT_RELU activation"  (Gunes Bayir)
2022-11-01  Fix fixed-point quantized addition  (Viet-Hoa Do)
2022-11-01  Updateable weights in depthwise convolution  (Milos Puzovic)
2022-11-01  Add threshold for floating-point SOFT_RELU activation  (Milos Puzovic)
2022-11-01  Add check for batch MatMul in GemmAssemblyDispatch  (Mohammed Suhail Munshi)
2022-11-01  Rewrite dynamic fusion  (SiCong Li)
2022-11-01  Rework direct convolution heuristic on OpenCL  (Gian Marco Iodice)
2022-10-27  Fix fixed-point quantized addition  (Viet-Hoa Do)
2022-10-25  Fix compiler warning  (Pablo Marquez Tello)
2022-10-24  Add FP16 tanh based on rational approximation  (Jonathan Deakin)
2022-10-21  Fix mapfile generation in Clang  (Pablo Marquez Tello)
2022-10-20  Update reinterpret tensor as 1D for CPU add  (Viet-Hoa Do)
2022-10-20  Add test in GEMMLowp for batch MatMul  (Mohammed Suhail Munshi)
2022-10-19  Fix FFTConvolutionLayer test  (Viet-Hoa Do)
2022-10-12  Add scons option to generate map files  (Pablo Marquez Tello)
2022-10-12  Optimize Neon™ logistic activation  (Mohammed Suhail Munshi)
2022-10-12  Add documentation section explaining how BF16 is used  (Ramy Elgammal)
2022-10-10  Use https to embed MathJax in documentation  (Viet-Hoa Do)
2022-10-10  Fix LUT-based activation layer  (Viet-Hoa Do)
2022-10-07  Work around CL compiler issue on FP16  (Viet-Hoa Do)
2022-10-07  Optimize Neon™ SUB operator by squashing execution window  (Jakub Sujak)
2022-10-06  Rework DepthwiseConvolution heuristic on OpenCL  (Gian Marco Iodice)
2022-10-06  Improve start-up time in gemmlowp reshaped rhs only  (Adnan AlSinan)
2022-10-04  Update GEMM reshaped rhs only heuristic  (Gian Marco Iodice)
2022-10-03  Force CL kernel compilation with 64 registers  (Viet-Hoa Do)
2022-10-03  Fix batch MatMul nightly failure  (Adnan AlSinan)
2022-10-03  Enable FP16 when the target is armv8.6-a  (Pablo Marquez Tello)
2022-10-03  Optimize CPU add layer on quantized data  (Viet-Hoa Do)
2022-09-28  Fix overflow in NEActivationLayer for FP16 type  (Pablo Marquez Tello)