path: root/src
Age | Commit message | Author
2018-09-17 | COMPMID-421: Added FP16 support to Softmax. | Pablo Tello
2018-09-17 | COMPMID-421: Added FP16 support to the NEON Direct Convolution function. | Pablo Tello
2018-09-17 | COMPMID-421: Added FP16 support in Pooling Layer | Pablo Tello
2018-09-17 | COMPMID-417: Enable CPU target selection | Moritz Pflanzer
2018-09-17 | COMPMID-417: Allow loading of custom OpenCL library | Moritz Pflanzer
2018-09-17 | COMPMID-421: Added FP16 support in ActivationLayer. | Pablo Tello
2018-09-17 | COMPMID-421: Added FP16 support to Arithmetic Subtraction. | Pablo Tello
2018-09-17 | COMPMID-446: Add support for QS8/QS16 CL Arithmetic Add/Sub | Michele Di Giorgio
2018-09-17 | COMPMID-401: Implement FixedPointPosition conversion for NEON. | Georgios Pinitas
2018-09-17 | COMPMID-410 Port BatchNormalization to use fixed point 16 | Michalis Spyrou
2018-09-17 | COMPMID-425 Port CLBatchnormalization to support QS8/QS16 | Michalis Spyrou
2018-09-17 | COMPMID-417: Add Leaky RELU support for both NEON/CL. | Georgios Pinitas
2018-09-17 | COMPMID-444: Add support for QS8/QS16 NEON Arithmetic Add/Sub/Mul. | Michele Di Giorgio
2018-09-17 | COMPMID-443 Collapse higher dimension for pooling layer and normalization layer | steniu01
2018-09-17 | COMPMID-443 Change CLSoftMaxLayerKernel to use 3D tensor and collapse the hig... | steniu01
2018-09-17 | COMPMID-406: Port CLActivationLayer to use QS8/QS16. | Georgios Pinitas
2018-09-17 | COMPMID-417: Port DepthConcatenate to QS8/QS16 for NEON/CL. | Georgios Pinitas
2018-09-17 | COMPMID-421: Added FP16 support to NENormalizationLayer and NEPixelWiseMultipl... | Pablo Tello
2018-09-17 | COMPMID-421: Added FP16 support to Arithmetic Addition. | Pablo Tello
2018-09-17 | COMPMID-443 Use 3D tensor for pixel multiply (Needed for Normalization Layer) | Anthony Barbier
2018-09-17 | COMPMID-443: Collapse higher dimensions for activation layer | Anthony Barbier
2018-09-17 | COMPMID-443: Use 3D tensors for fill_border_image | Anthony Barbier
2018-09-17 | COMPMID-431 Port CLDepthConvert to use 8-bit and 16-bit fixed point | steniu01
2018-09-17 | COMPMID-417 Checking CL non uniform support at runtime. | steniu01
2018-09-17 | COMPMID-428: Port NESoftmaxLayer to 16-bit fixed point. | Georgios Pinitas
2018-09-17 | COMPMID-429: Port CLSoftmaxLayer to QS16. | Georgios Pinitas
2018-09-17 | COMPMID-421: Added F16 support in FC Layer. | Pablo Tello
2018-09-17 | COMPMID-440, COMPMID-441 - Port CLConvolutionLayer and CLFullyConnectedLayer ... | Gian Marco Iodice
2018-09-17 | COMPMID-409: Add support for QS8 and QS16 CLPixelWiseMultiplication. | Michele Di Giorgio
2018-09-17 | COMPMID-427: Port NEActivationLayer in 16bit fixed point. | Georgios Pinitas
2018-09-17 | COMPMID-417: Fix assert in GEMMTranspose | Moritz Pflanzer
2018-09-17 | COMPMID-417: Fix output access window in ChannelExtract Kernels. | Georgios Pinitas
2018-09-17 | COMPMID-417: Auto initialize for SoftmaxLayer NEON/CL. | Georgios Pinitas
2018-09-17 | COMPMID-417: DepthConvert NEON for QS8/QS16. | Georgios Pinitas
2018-09-17 | COMPMID-420, COMPMID-414 - Port CLConvolutionLayer and CLFullyConnectedLayer ... | Gian Marco Iodice
2018-09-17 | COMPMID-417 - Fixed bug in gemm_interleave_16bit and gemm_interleave_32_bit d... | Gian Marco Iodice
2018-09-17 | COMPMID-436, COMPMID-437 - Port NEConvolutionLayer & NEFullyConnectedLayer to... | Gian Marco Iodice
2018-09-17 | COMPMID-434 - Port CLGEMM to support 16 bit fixed point | Gian Marco Iodice
2018-09-17 | COMPMID-433 - Port NEGEMM to support 16 bit fixed point | Gian Marco Iodice
2018-09-17 | COMPMID-418 Add check and fix comments after preprocessor conditions | Anthony Barbier
2018-09-17 | COMPMID-417: Remove val_to_string | Moritz Pflanzer
2018-09-17 | COMPMID-421: Fixed FP16 support in Neon GEMM. | Pablo Tello
2018-09-17 | COMPMID-417: Auto configuration for Add/Sub/Mul Neon/CL. | Georgios Pinitas
2018-09-17 | COMPMID-417: Auto initialization for PoolingLayer for NEON/CL. | Georgios Pinitas
2018-09-17 | COMPMID-417: Autoconfigure for BatchNormalization CL/NEON. | Georgios Pinitas
2018-09-17 | COMPMID-417: Add autoconfigure in NormalizationLayer CL/NEON. | Georgios Pinitas
2018-09-17 | COMPMID-408 Create OpenCL complex math functions for 8 bit fixed point arithm... | Michalis Spyrou
2018-09-17 | COMPMID-432 - Extended Convolution Layer to support rectangular kernels | Gian Marco Iodice
2018-09-17 | COMPMID-411 - Port CLGEMM to support 8 bit fixed point | Gian Marco Iodice
2018-09-17 | COMPMID-421: Fixed a problem in Convolution Layer reference values for FP16. | Pablo Tello