path: root/src/backends/reference/workloads/BaseIterator.hpp
2020-11-09  IVGCVSW-5091 Add Logical ops frontend and ref impl  (James Conroy)
* Add frontend and reference implementation for logical ops NOT, AND, OR.
* Unary NOT uses the existing ElementwiseUnary layer and ElementwiseUnary descriptor.
* Binary AND/OR uses the new LogicalBinary layer and LogicalBinary descriptor.
* Add serialization/deserialization support and add missing ElementwiseUnary deserializer code.
* Add an additional Boolean decoder in BaseIterator.hpp.
Signed-off-by: James Conroy <james.conroy@arm.com>
Change-Id: Id343b01174053a166de1b98b6175e04a5065f720
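For context, a minimal sketch of what a Boolean decoder over BOOL8 data can look like; the class name, members and signatures below are assumptions, not the code from this commit (the real decoder plugs into the file's Decoder interface):

    #include <cstdint>

    // Hypothetical sketch: reads BOOL8 tensor data (one byte per element) as bool
    // so the logical NOT/AND/OR reference kernels can consume it directly.
    class BooleanDecoderSketch
    {
    public:
        explicit BooleanDecoderSketch(const uint8_t* data) : m_Data(data) {}

        bool Get() const { return *m_Data != 0; }                      // 0 -> false, non-zero -> true
        BooleanDecoderSketch& operator++() { ++m_Data; return *this; } // step to the next element

    private:
        const uint8_t* m_Data;
    };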
2020-10-01  IVGCVSW-5325 Fix non-channel per axis quantization  (Finn Williams)
Signed-off-by: Finn Williams <Finn.Williams@arm.com>
Change-Id: Ie0cf69b2cd76d6ecedab43d3d9ae267d23bbc052
2020-09-28  IVGCVSW-5325 Speed up the reference backend  (Finn Williams)
Change-Id: Id8bd0a0418be31d975b944b54bbacb25051ffb2e
Signed-off-by: Finn Williams <Finn.Williams@arm.com>
2020-06-30  IVGCVSW-5007 Implement an Int32 reference Elementwise workload  (Finn Williams)
Signed-off-by: Finn Williams <Finn.Williams@arm.com>
Change-Id: I6592169b74ac4294bc09647879aec0718c641f91
2020-04-06  IVGCVSW-4485 Remove Boost assert  (Narumol Prangnawarat)
* Change boost assert to armnn assert
* Change include file to armnn assert
* Fix ARMNN_ASSERT_MSG issue with multiple conditions
* Change BOOST_ASSERT to BOOST_TEST where appropriate
* Remove unused include statements
Signed-off-by: Narumol Prangnawarat <narumol.prangnawarat@arm.com>
Change-Id: I5d0fa3a37b7c1c921216de68f0073aa34702c9ff
2020-03-19  IVGCVSW-4565 TENSOR_BOOL8 data type not supported in AndroidNN Driver  (Sadik Armagan)
* Enabled Boolean and Int32 data types in Reference Comparison inputs
* Added decoder for Boolean data type
* Refactored ClGreaterWorkload to work with any data type
* Refactored NeonGreaterWorkload to work with any data type
!android-nn-driver:2902
Signed-off-by: Sadik Armagan <sadik.armagan@arm.com>
Change-Id: I92772810b744b388831c9dca0119ebf8cb7a447e
2020-03-10  IVGCVSW-4482 Remove boost::ignore_unused  (Jan Eilers)
!referencetests:229377
Signed-off-by: Jan Eilers <jan.eilers@arm.com>
Change-Id: Ia9b360b4a057fe7bbce5b268092627c09a0dba82
2020-03-09  IVGCVSW-4517 Implement BFloat16 Encoder and Decoder  (Narumol Prangnawarat)
* Add ConvertFloat32ToBFloat16
* Add ConvertBFloat16ToFloat32
* Add BFloat16Encoder
* Add BFloat16Decoder
* Unit tests
Signed-off-by: Narumol Prangnawarat <narumol.prangnawarat@arm.com>
Change-Id: I198888384c923aba28cfbed09a02edc6f8194b3e
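As a rough illustration of what the BFloat16 decode/encode steps amount to (helper names here are illustrative; the real ConvertBFloat16ToFloat32 / ConvertFloat32ToBFloat16 utilities also handle rounding and whole-tensor conversion):

    #include <cstdint>
    #include <cstring>

    // BFloat16 keeps the upper 16 bits of an IEEE-754 float32, so decoding is a
    // 16-bit shift plus a bit copy. Encoding by truncation drops the low 16 bits;
    // the real conversion utilities may round instead of truncating.
    inline float DecodeBFloat16Sketch(uint16_t bits16)
    {
        uint32_t bits32 = static_cast<uint32_t>(bits16) << 16;
        float value;
        std::memcpy(&value, &bits32, sizeof(value));
        return value;
    }

    inline uint16_t EncodeBFloat16Sketch(float value)
    {
        uint32_t bits32;
        std::memcpy(&bits32, &value, sizeof(bits32));
        return static_cast<uint16_t>(bits32 >> 16);
    }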
2020-02-07  IVGCVSW-4386 Add ArmNN reference support for QAsymmS8  (Ryan OShea)
* Added Quantization Scheme for QAsymmS8
* Added Unit Tests for QAsymmS8
* Renamed QAsymm8 calls to QAsymmU8
Signed-off-by: Ryan OShea <Ryan.OShea2@arm.com>
Change-Id: I897b4e018ba1d808cc3f8c113f2be2dbad49c8db
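A small illustration of the asymmetric signed 8-bit scheme this commit adds, real = scale * (quantized - offset); the helper names are hypothetical, not from the ArmNN sources:

    #include <algorithm>
    #include <cmath>
    #include <cstdint>

    // Illustrative QAsymmS8 quantize/dequantize helpers (names are hypothetical).
    inline int8_t QuantizeQAsymmS8Sketch(float value, float scale, int32_t offset)
    {
        int32_t q = static_cast<int32_t>(std::round(value / scale)) + offset;
        return static_cast<int8_t>(std::min(127, std::max(-128, q)));  // clamp to int8 range
    }

    inline float DequantizeQAsymmS8Sketch(int8_t value, float scale, int32_t offset)
    {
        return scale * static_cast<float>(static_cast<int32_t>(value) - offset);
    }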
2020-01-20  Remove inclusion of ArmNN.hpp where it is unnecessary.  (Matthew Bentham)
Signed-off-by: Matthew Bentham <Matthew.Bentham@arm.com>
Change-Id: Idb583f8de4470eefb47c90189cd3c90e74e0440a
2019-12-09  IVGCVSW-4211 Add Signed 8 bit Quantisation support into the Reference backend  (Finn Williams)
!android-nn-driver:2435
Signed-off-by: Finn Williams <Finn.Williams@arm.com>
Change-Id: I10ecd4a8937725953396805f33a3562a5384c4d4
2019-11-29  IVGCVSW-4209 Create a public API for the ArmNN Utils  (Matteo Martincigh)
* Moved the relevant armnnUtils headers to the new location: include/armnnUtils
* Update the header usage throughout the source code
!android-nn-driver:2387
Signed-off-by: Matteo Martincigh <matteo.martincigh@arm.com>
Change-Id: I2ba15cebcacafad2b5a1a7b9c3312ffc585e09d6
2019-11-06  IVGCVSW-3837 Add support for per-axis quantization to reference Convolution2d workload  (Aron Virginas-Tar)
Signed-off-by: Aron Virginas-Tar <Aron.Virginas-Tar@arm.com>
Change-Id: I0ac08ba4864d48e6f64c4ac645dad8ea850be112
2019-11-05  IVGCVSW-3836 Add support for Int32 per-axis scales  (Aron Virginas-Tar)
* Added ScaledInt32PerAxisDecoder implementation
* Added new case for Signed32 in MakeDecoder that returns a ScaledInt32PerAxisDecoder if the tensor info has multiple quantization scales
Signed-off-by: Aron Virginas-Tar <Aron.Virginas-Tar@arm.com>
Change-Id: I8b3c11091644da993044d2a0fe2aba6b06b5af56
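To illustrate the per-axis bias case described above: each INT32 bias value is dequantized with the scale of the slice (typically the output channel) it belongs to. A hypothetical helper, not the real decoder:

    #include <cstdint>
    #include <vector>

    // Hypothetical per-axis bias dequantization: one quantization scale per slice
    // along the quantization axis, selected by the slice index of the element.
    inline float DequantizePerAxisBiasSketch(int32_t rawValue,
                                             unsigned int sliceIndex,
                                             const std::vector<float>& scales)
    {
        return scales[sliceIndex] * static_cast<float>(rawValue);
    }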
2019-11-04  IVGCVSW-3835 Create Encoder and Decoder for QSymm8PerAxis  (Keith Davis)
* Add QuantizedSymm8PerAxis to armnn DataType (Types.hpp)
* Add Quantize and Dequantize templates for int8 in TypeUtils to be able to compute QSymm8 of the weight
* Create PerAxisIterator for per-axis quantization
* Create QSymm8PerAxisDecoder
* Create QSymm8PerAxisEncoder
Signed-off-by: Keith Davis <keith.davis@arm.com>
Change-Id: Ibcfe0288a197b7ee50b543bdbd77b7edb8a547c2
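A rough sketch of the per-axis idea behind PerAxisIterator and QSymm8PerAxisDecoder: the iterator tracks which slice along the quantization axis the current element falls in and applies that slice's scale (symmetric, so no offset). The layout assumption and names below are illustrative, not the real classes:

    #include <cstdint>
    #include <utility>
    #include <vector>

    // Hypothetical per-axis symmetric int8 decoder. Assumes elements of one axis
    // slice are stored contiguously, so scale index = elementIndex / sliceSize;
    // the real PerAxisIterator handles arbitrary axes and strides.
    class QSymm8PerAxisDecoderSketch
    {
    public:
        QSymm8PerAxisDecoderSketch(const int8_t* data, std::vector<float> scales, unsigned int sliceSize)
            : m_Data(data), m_Scales(std::move(scales)), m_SliceSize(sliceSize), m_Index(0) {}

        float Get() const
        {
            unsigned int scaleIndex = m_Index / m_SliceSize;  // which slice is the current element in?
            return m_Scales[scaleIndex] * static_cast<float>(m_Data[m_Index]);
        }

        QSymm8PerAxisDecoderSketch& operator++() { ++m_Index; return *this; }

    private:
        const int8_t*      m_Data;
        std::vector<float> m_Scales;
        unsigned int       m_SliceSize;
        unsigned int       m_Index;
    };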
2019-09-10  IVGCVSW-3824 Implement Float 16 Encoder and Decoder  (Matthew Jackson)
* Implement Float 16 Encoder and Decoder
* Add Stack Float 16 layer and create workload tests
Signed-off-by: Matthew Jackson <matthew.jackson@arm.com>
Change-Id: Ice4678226f4d22c06ebcc6db3052d42ce0c1bd67
2019-08-02  IVGCVSW-3609 Fix decoding and encoding of INT32 tensors  (Aron Virginas-Tar)
* Added Int32Decoder and Int32Encoder to decode and encode INT32 tensors
* Changed MakeDecoder to return ScaledInt32Decoder only if the scale is different from 0, i.e. for quantized bias tensors
Signed-off-by: Aron Virginas-Tar <Aron.Virginas-Tar@arm.com>
Change-Id: I278061d445d1c549c7ace11f51aa172ce7c691ae
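The selection rule described above can be summarised as: a zero quantization scale means plain INT32 data (Int32Decoder), a non-zero scale means a quantized bias tensor (ScaledInt32Decoder, which multiplies by the scale on read). A hypothetical one-function illustration, not the real MakeDecoder:

    #include <cstdint>

    // Illustration of the two Signed32 decode paths (function name is hypothetical).
    inline float DecodeSigned32Sketch(int32_t rawValue, float quantizationScale)
    {
        if (quantizationScale != 0.0f)
        {
            return quantizationScale * static_cast<float>(rawValue);  // ScaledInt32Decoder path (bias tensors)
        }
        return static_cast<float>(rawValue);                          // Int32Decoder path (plain INT32 tensors)
    }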
2019-07-02  IVGCVSW-3307 Don't assume TensorInfo::Map() can be called before Execute()  (Matthew Bentham)
Change-Id: I445c69d2e99d8c93622e739af61f721e61b0f90f
Signed-off-by: Matthew Bentham <Matthew.Bentham@arm.com>
2019-05-27  IVGCVSW-3134 Refactor FullyConnected workloads into single workload  (Francis Murtagh)
* Refactor FullyConnected workloads into single workload.
* Refactor FullyConnected ref implementation to use Encoders and Decoders to support all DataTypes.
* Deleted RefFullyConnectedFloat32Workload and RefFullyConnected2dUint8Workload.
Change-Id: Iad30fb0287ab7491e1297997e7d61f1d00785541
Signed-off-by: Francis Murtagh <francis.murtagh@arm.com>
2019-05-23  IVGCVSW-3025: Refactor reference Convolution2d workload  (Mike Kelly)
* Refactored RefConvolution2dWorkload to support all DataTypes through Encoders and Decoders.
* Added Convolute function to ConvImpl that uses Encoders and Decoders to support all DataTypes.
* Deleted RefConvolution2dFloat32Workload and RefConvolution2dUint8Workload.
Signed-off-by: Mike Kelly <mike.kelly@arm.com>
Signed-off-by: Teresa Charlin <teresa.charlinreyes@arm.com>
Change-Id: Ic5ef0f499d08b948fa65fdee54b5f681fd0b1c05
2019-05-07  IVGCVSW-2997 Refactor reference LSTM workload  (Nattapat Chaimanowong)
Signed-off-by: Nattapat Chaimanowong <nattapat.chaimanowong@arm.com>
Change-Id: I6883f878d9f701a55153292769d2fc0530d2529e
2019-04-10  IVGCVSW-2947 Remove boost dependency from include/TypesUtils.hpp  (Aron Virginas-Tar)
!android-nn-driver:968
Signed-off-by: Aron Virginas-Tar <Aron.Virginas-Tar@arm.com>
Change-Id: I03ccb4842b060a9893567542bfcadc180bbc7311
2019-04-10  IVGCVSW-2946 RefElementwiseWorkload configures prior to first execute  (Derek Lamberti)
+ Added PostAllocationConfigure() method to workload interface
+ Elementwise function now deduces types based on Functor
- Replaced RefComparisonWorkload with RefElementwiseWorkload specialization
+ Fixed up unit tests and minor formatting
Change-Id: I33d08797767bba01cf4efb2904920ce0f950a4fe
Signed-off-by: Derek Lamberti <derek.lamberti@arm.com>
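To illustrate the functor-driven pattern mentioned above: the arithmetic is supplied as a functor (e.g. std::plus<float>) and element access goes through decoder/encoder iterators, so one loop serves every binary elementwise operator. The tiny decoder/encoder structs below are stand-ins, not the real BaseIterator classes:

    #include <functional>

    // Minimal stand-ins for a float decoder/encoder pair (hypothetical).
    struct FloatDecoderSketch
    {
        const float* ptr;
        float Get() const { return *ptr; }
        FloatDecoderSketch& operator++() { ++ptr; return *this; }
    };

    struct FloatEncoderSketch
    {
        float* ptr;
        void Set(float v) { *ptr = v; }
        FloatEncoderSketch& operator++() { ++ptr; return *this; }
    };

    // One loop for all binary elementwise operators; the functor fixes the maths.
    template <typename Functor>
    void ElementwiseSketch(unsigned int numElements,
                           FloatDecoderSketch in0, FloatDecoderSketch in1, FloatEncoderSketch out)
    {
        Functor f;
        for (unsigned int i = 0; i < numElements; ++i, ++in0, ++in1, ++out)
        {
            out.Set(f(in0.Get(), in1.Get()));
        }
    }

    // Usage (a, b, result are float arrays of length n):
    // ElementwiseSketch<std::plus<float>>(n, {a}, {b}, {result});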
2019-04-09  IVGCVSW-2862 Extend the Elementwise Workload to support QSymm16 Data Type  (Sadik Armagan)
IVGCVSW-2863 Unit test per Elementwise operator with QSymm16 Data Type
* Added QSymm16 support for Elementwise Operators
* Added QSymm16 unit tests for Elementwise Operators
Change-Id: I4e4e2938f9ed2cbbb1f05fb0f7dc476768550277
Signed-off-by: Sadik Armagan <sadik.armagan@arm.com>
2019-04-08  IVGCVSW-2861 Refactor the Reference Elementwise workload  (Sadik Armagan)
* Refactor Reference Comparison workload
* Removed templating based on the DataType
* Implemented BaseIterator to do decode/encode
Change-Id: I18f299f47ee23772f90152c1146b42f07465e105
Signed-off-by: Sadik Armagan <sadik.armagan@arm.com>
Signed-off-by: Kevin May <kevin.may@arm.com>
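This is the commit that introduced the decode/encode abstraction the later entries build on: workloads read inputs through a typed Get() and write outputs through a typed Set(), so one implementation covers every underlying DataType. A minimal sketch of that idea follows; interface names echo the log, but the exact signatures are assumptions, not the file's contents:

    #include <cstdint>

    // Sketch of the iterator-based decode abstraction (signatures are assumptions).
    template <typename T>
    class DecoderSketch
    {
    public:
        virtual ~DecoderSketch() = default;
        virtual T Get() const = 0;                 // read the current element as T
        virtual DecoderSketch& operator++() = 0;   // advance to the next element
    };

    // Concrete example: dequantizes QAsymmU8 data to float on the fly, so a
    // float-based reference kernel never sees the raw quantized values.
    class QAsymmU8DecoderSketch : public DecoderSketch<float>
    {
    public:
        QAsymmU8DecoderSketch(const uint8_t* data, float scale, int32_t offset)
            : m_Data(data), m_Scale(scale), m_Offset(offset) {}

        float Get() const override
        {
            return m_Scale * static_cast<float>(static_cast<int32_t>(*m_Data) - m_Offset);
        }

        QAsymmU8DecoderSketch& operator++() override { ++m_Data; return *this; }

    private:
        const uint8_t* m_Data;
        float          m_Scale;
        int32_t        m_Offset;
    };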