path: root/src/core/CL/cl_kernels/common
Age        | Commit message                                                                      | Author
2024-04-30 | Add fp16 and integer data type support for ScatterNd in Gpu                        | Gunes Bayir
2024-04-25 | Add update/index/output (m+1)/2d/(m+n) support for CLScatter                       | Gunes Bayir
2024-04-22 | Scatter GPU Kernel Implementation for 1D tensors.                                  | Mohammed Suhail Munshi
2024-03-20 | Make Cpu/Gpu/Ref scalar/vectoral S32 division consistent                           | Gunes Bayir
2024-01-18 | Fix divide-by-zero compilation error                                               | Viet-Hoa Do
2023-12-22 | Fix nightly issue caused by gemm_reshaped_only_rhs_mmul kernel                     | Gunes Bayir
2023-12-14 | Fix validation error in CL generate proposals kernel                               | Gunes Bayir
2023-11-15 | Fix device issue with CL softmax                                                   | Viet-Hoa Do
2023-10-31 | [GPU] Update Reverse layer to allow negative axis and reversed axis order          | Adnan AlSinan
2023-10-31 | Optimize CL softmax                                                                | Viet-Hoa Do
2023-10-12 | Remove padding from CL comparison operator                                         | Viet-Hoa Do
2023-10-11 | Optimize CL reduction operation                                                    | Viet-Hoa Do
2023-10-05 | Optimize CLTranspose operator                                                      | Jakub Sujak
2023-09-29 | Implement Quantized Matmul T/T and T/Nt kernels using MMUL extension               | Gunes Bayir
2023-09-28 | Implement Quantized Matmul Nt/T kernel using MMUL extension                        | Gunes Bayir
2023-09-21 | Optimize the main loop in mat_mul_native_quantized_mmul_nt_nt                      | Gunes Bayir
2023-09-18 | Implement Quantized MatMul kernel using MMUL extension                             | Gunes Bayir
2023-09-14 | Add skeleton of ClMatMulLowpNativeMMULKernel                                       | Gunes Bayir
2023-09-04 | Remove legacy PostOps code                                                         | Jakub Sujak
2023-08-14 | Optimize CLReduce for Min/Max Axis=0                                               | Gunes Bayir
2023-08-03 | Fix CL Tile operator                                                               | Viet-Hoa Do
2023-07-21 | Enable S64 output in CLArgMinMax                                                   | Pablo Marquez Tello
2023-07-11 | Add Bias to MatMul Kernels and add support for use in Fully Connected Layer        | Mohammed Suhail Munshi
2023-07-06 | Fix nightly failures in MatMulLowpNativeKernel when using bounded activation ...   | Mohammed Suhail Munshi
2023-07-05 | Rewrote CLArgMinMax for axis 0                                                     | Pablo Marquez Tello
2023-06-29 | Implement FP32/16 MatMul Lhs T Rhs T/NT kernel using MMUL extension                | Gunes Bayir
2023-06-26 | Use MatMul in fully connected layer with dynamic weights when supported            | Mohammed Suhail Munshi
2023-06-23 | Implement FP32/FP16 MatMul NT/T kernel using the MMUL extension                    | Ramy Elgammal
2023-06-19 | Implement FP32/FP16 MatMul NT/NT kernel using the MMUL extension                   | SiCong Li
2023-06-16 | Add Fused Activation to OpenCL MatMul                                              | Mohammed Suhail Munshi
2023-05-03 | Support multi-dimensional indices in the CL Gather Layer up to four-dimension...   | Omar Al Khatib
2023-04-28 | Fix the gather layer indices check                                                 | Viet-Hoa Do
2023-04-27 | Add quantized CL MatMul kernel for LHS NT, RHS T                                   | Jakub Sujak
2023-04-20 | Implement CL kernel for a native batched matmul Quantized - LHS transposed, R...   | Omar Al Khatib
2023-04-17 | Add quantized CL MatMul kernels for Lhs NT/T, Rhs NT                               | Gunes Bayir
2023-03-29 | Add quantized support for unary elementwise in CPU                                 | Viet-Hoa Do
2023-03-24 | Add Texture Pipe Support for Matmul Lhs T/NT Rhs T kernels                         | Ramy Elgammal
2023-03-24 | Add Texture Pipe Support for Matmul Lhs T/NT Rhs NT kernels                        | Gunes Bayir
2023-03-20 | Implement OpenCL MatMul for Lhs T Rhs T/NT FP32/16                                 | Gunes Bayir
2023-03-17 | Implementation of RSQRT for quantized int8                                         | Ramy Elgammal
2023-03-17 | Implement OpenCL MatMul for Lhs NT Rhs T/NT FP32/16                                | Ramy Elgammal
2022-12-23 | Make CLReshape kernel window based on dst instead of src                           | Ramy Elgammal
2022-11-03 | Fix activation block in gemm.cl                                                    | Gian Marco Iodice
2022-10-06 | Improve start-up time in gemmlowp reshaped rhs only.                               | Adnan AlSinan
2022-09-09 | Add a macro guard in all OpenCL kernels in gemmlowp.cl                             | Gian Marco Iodice
2022-09-02 | F16 Specialization for MeanStdDevNorm                                              | Murray Kornelsen
2022-07-22 | Add GemmLowp MMUL Reshaped Only Rhs Support for QASYMM8/QASYMM8_SIGNED             | Freddie Liardet
2022-07-13 | Add Gemm MMUL Reshaped Only Rhs Support for FP32/FP16                              | Gunes Bayir
2022-06-27 | Implement new Elementwise Dynamic Fusion Operators: Div, Floor                     | Michalis Spyrou
2022-05-31 | Add cl_khr_integer_dot_product extension support                                   | Viet-Hoa Do