path: root/src
Age | Commit message | Author
2023-07-18 | Break up core/Utils.h to reduce unused code being included everywhere | Matthew Bentham
2023-07-14 | Port ClTemplateCast into Ckw | Adnan AlSinan
2023-07-14 | Port ClTemplateActivation into Ckw | Adnan AlSinan
2023-07-13 | Added S64/U64 support for the input in CLCast | Pablo Marquez Tello
2023-07-13 | Fix excessive calls to clReleaseCommandQueue | SiCong Li
2023-07-13 | Enable premultiplication for depthwise convolution | Michael Tyler
2023-07-12 | Add compute kernel writer arguments export | Viet-Hoa Do
2023-07-11 | Add Bias to MatMul Kernels and add support for use in Fully Connected Layer | Mohammed Suhail Munshi
2023-07-10 | Port operations to CKW prototype | Nikolaj Jensen
2023-07-10 | Disable kernel size 3 in argminmax for axis 0 | Pablo Marquez Tello
2023-07-10 | Do not include headers necessary for logging when logging is disabled | Matthew Bentham
2023-07-07 | Enable transpose convolution with non-square kernels | Viet-Hoa Do
2023-07-07 | Fix unsupported configuration in CLFullyConnected validation | Gunes Bayir
2023-07-06 | Fix nightly failures in MatMulLowpNativeKernel when using bounded activation ... | Mohammed Suhail Munshi
2023-07-06 | Move CKW prototype to separate directory | Viet-Hoa Do
2023-07-05 | Rewrote CLArgMinMax for axis 0 | Pablo Marquez Tello
2023-07-05 | Fix unused function warning | Michael Tyler
2023-07-04 | Depthwise channel pre-multiplication | Michael Tyler
2023-07-04 | Add Kernel Writer driver code to dynamic fusion | SiCong Li
2023-06-29 | Implement FP32/16 MatMul Lhs T Rhs T/NT kernel using MMUL extension | Gunes Bayir
2023-06-26 | Add helpers to set CKW tensor components as OpenCL kernel arguments | Jakub Sujak
2023-06-26 | Remove dependency on fp16 definitions from some core include files | Matthew Bentham
2023-06-26 | Use MatMul in fully connected layer with dynamic weights when supported | Mohammed Suhail Munshi
2023-06-23 | Implement FP32/FP16 MatMul NT/T kernel using the MMUL extension | Ramy Elgammal
2023-06-23 | Address the issues with the ACL coverage pipeline failures related to matmul. | Renato Arantes
2023-06-23 | Fix doxygen warnings | ramy.elgammal@arm.com
2023-06-22 | Bazel and CMake optional fp16 support | David Svantesson
2023-06-21 | Fix CPU depthwise convolution in case of large padding | Viet-Hoa Do
2023-06-21 | Enable vfma in armv7a/aarch32 when present | Pablo Marquez Tello
2023-06-19 | Implement FP32/FP16 MatMul NT/NT kernel using the MMUL extension | SiCong Li
2023-06-16 | Add Fused Activation to OpenCL MatMul | Mohammed Suhail Munshi
2023-06-15 | Break up Utils.h a bit to reduce unused code being included everywhere | Matthew Bentham
2023-06-15 | Break up arm_compute/core/Types.h a bit | Matthew Bentham
2023-06-12 | Add multi-sketch support for dynamic fusion | Viet-Hoa Do
2023-06-12 | Refactor activation LUT computation | Pablo Marquez Tello
2023-06-09 | Reorder destructor in src | David Svantesson
2023-06-07 | Fix build error for armv7a | Pablo Marquez Tello
2023-06-07 | Fix guards for FP16 depthwise kernels | Michael Tyler
2023-06-06 | Fix ScaleKernel validate method. | Pablo Marquez Tello
2023-06-05 | Update CPU kernel implementations and guard directives | Michael Tyler
2023-05-17 | Move lut kernel to sve2 category | SiCong Li
2023-05-17 | Revert "Check for nullptr when failing to load OpenCL libraries" | Omar Al Khatib
2023-05-16 | Check for nullptr when failing to load OpenCL libraries | Jakub Sujak
2023-05-12 | Fix performance regression in FP16 Deconvolution | Jakub Sujak
2023-05-11 | Fix invalid vector length in CL | Viet-Hoa Do
2023-05-11 | Remove check for bias in CPU Depthwise Convolution | Jakub Sujak
2023-05-10 | Remove inclusion of NEReorderKernel header from NEReorderLayer | Ramy Elgammal
2023-05-10 | Re-enable dynamic weights in Neon™ depthwise convolution | Ramy Elgammal
2023-05-05 | Connect CLMatMul function to quantized kernels and resolve NE BatchMatMul int... | Jakub Sujak
2023-05-05 | Disable dynamic weights in unsupported operators | Viet-Hoa Do