path: root/src/backends
Age  Commit message  Author
2020-11-23  IVGCVSW-5569 Fix Unittest failure while building using EthosNAcc backend  (Narumol Prangnawarat)
  * Correct the id when EthosN is enabled
  Signed-off-by: Narumol Prangnawarat <narumol.prangnawarat@arm.com>
  Change-Id: I5203e615f809e56c7597ffeeec56b5ad38d4ff17
2020-11-20  IVGCVSW-5563 Fix crash on model with FullyConnected Sigmoid Activation  (Kevin May)
  * Add supported activations check to Neon FullyConnected validate
  Signed-off-by: Kevin May <kevin.may@arm.com>
  Change-Id: I67a36eb83d0568d000e928e27eba3c84e32cdc72
2020-11-18  IVGCVSW-5092 Add CL Logical workload  (James Conroy)
  * Add CL Logical workloads for NOT, AND and OR.
  * Enable Layer and IsSupported tests on CL.
  Signed-off-by: James Conroy <james.conroy@arm.com>
  Change-Id: I8b7227b2487fdbbb55a4baf6e61f290313947de1
2020-11-18  IVGCVSW-5093 Add NEON Logical workload  (James Conroy)
  * Add NEON Logical workloads for NOT, AND and OR.
  * Enable Layer and IsSupported tests on NEON.
  Signed-off-by: James Conroy <james.conroy@arm.com>
  Change-Id: Ibca59530457a664ca3d77751825642f8daf52fab
2020-11-18  Fix logical VTS skip  (Narumol Prangnawarat)
  * Add Boolean support for Reshape
  * Use LogicalUnary factory and data type for LogicalNot
  Signed-off-by: Narumol Prangnawarat <narumol.prangnawarat@arm.com>
  Change-Id: I8e072fde200b7716556ae67f79616458cf98ff20
2020-11-17  MLCE-278-IVGCVSW-5530 FusedActivation issues  (Mike Kelly)
  * GetOverriddenDataType was returning incorrect quantization data
  * Optimized CpuAcc and GpuAcc SubGraphs fail validation on debug versions of ArmNN
  Signed-off-by: Mike Kelly <mike.kelly@arm.com>
  Change-Id: Ie97935cc2af67bd9aeebc94b63dafa458bd1aa8c
2020-11-17  IVGCVSW-5530 'Cannot run SSD Mobilenet f16/uint8 on CpuRef via ExecuteNetwork'  (Sadik Armagan)
  * Added FP16 DataType support to DetectionPostProcess
  * For the DetectionPostProcess layer, the output is always Float32 regardless of input type
  Signed-off-by: Sadik Armagan <sadik.armagan@arm.com>
  Change-Id: I21f63dd08f0863e9a98e105b3009bab3da1ab0c3
2020-11-17  MLCE-278 Issue with signed-int8 quantized model  (Teresa Charlin)
  Signed-off-by: Teresa Charlin <teresa.charlinreyes@arm.com>
  Change-Id: I144ebfca524f4cdee9cc82eef3995c6b32bfc40b
2020-11-13  IVGCVSW-5189 Fix error running EfficientNet-Lite on GpuAcc  (Narumol Prangnawarat)
  * Correct datatype of QAsymmS8
  Signed-off-by: Narumol Prangnawarat <narumol.prangnawarat@arm.com>
  Change-Id: Id4987b91e06d87735254d3cdd5c9adbe11cc8870
2020-11-13  IVGCVSW-5328-5329 Fuse Activation Cleanup  (Mike Kelly)
  * Resolved the review items in the main review.
  Signed-off-by: Mike Kelly <mike.kelly@arm.com>
  Change-Id: I5da34b74ac204569ea2d210fb5a069beb7d0835b
2020-11-13  IVGCVSW-5328-5329 Fuse Activation  (Mike Kelly)
  * Added Fused Activation Optimization to both CL and Neon backends.
  * Added Fused Activation support to all the CL and Neon workloads that support it.
  * Changed ProfilingTest network to be a Convolution layer followed by an Abs layer rather than an Activation layer.
  * Added IBackendInternal::OptimizeSubgraphView function that can accept a ModelOptions.
  * Network will now call OptimizeSubgraphView passing in the ModelOptions.
  Signed-off-by: Keith Davis <keith.davis@arm.com>
  Signed-off-by: Mike Kelly <mike.kelly@arm.com>
  Signed-off-by: Teresa Charlin <teresa.charlinreyes@arm.com>
  Change-Id: Ib536ac3cbafc7d9b35c139ad9a65b7735262cd9d
2020-11-13  IVGCVSW-5495 Fix validation for per-channel quant  (James Conroy)
  * Now enter the if-block if bias OR weights have multiple quantization scales.
  Signed-off-by: James Conroy <james.conroy@arm.com>
  Change-Id: I5eba0ceac9b347d0e3467e86d72d587b749b9521
2020-11-12  Update ACL pin to d7341fb9e3b24b904edf7ac9d83e1e063bc77765  (Teresa Charlin)
  * Use NEConvolutionLayer
  Signed-off-by: Teresa Charlin <teresa.charlinreyes@arm.com>
  Change-Id: Ieb81fafaf34a63be8daf297ebe1bb0e4079daf4e
2020-11-09  IVGCVSW-5091 Add Logical ops frontend and ref impl  (James Conroy)
  * Add frontend and reference implementation for logical ops NOT, AND, OR.
  * Unary NOT uses existing ElementwiseUnary layer and ElementwiseUnary descriptor.
  * Binary AND/OR uses new layer LogicalBinary and new LogicalBinary descriptor.
  * Add serialization/deserialization support and add missing ElementwiseUnary deserializer code.
  * Add additional Boolean decoder in BaseIterator.hpp.
  Signed-off-by: James Conroy <james.conroy@arm.com>
  Change-Id: Id343b01174053a166de1b98b6175e04a5065f720
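A minimal sketch of how the new frontend pieces could be wired together, assuming the INetwork::AddLogicalBinaryLayer / AddElementwiseUnaryLayer entry points and the LogicalBinaryDescriptor / UnaryOperation::LogicalNot names match what this change adds; it is illustrative, not code from the patch:

    // Illustrative only: builds a tiny graph using the logical ops described above.
    // AddLogicalBinaryLayer and the LogicalBinary* names are assumed from this change.
    #include <armnn/INetwork.hpp>
    #include <armnn/Descriptors.hpp>
    #include <armnn/Tensor.hpp>

    void BuildLogicalGraph(armnn::INetwork& net)
    {
        armnn::TensorInfo boolInfo({1, 4}, armnn::DataType::Boolean);

        armnn::IConnectableLayer* inA = net.AddInputLayer(0, "a");
        armnn::IConnectableLayer* inB = net.AddInputLayer(1, "b");

        // Binary AND via the new LogicalBinary layer and descriptor.
        armnn::LogicalBinaryDescriptor andDesc(armnn::LogicalBinaryOperation::LogicalAnd);
        armnn::IConnectableLayer* andLayer = net.AddLogicalBinaryLayer(andDesc, "and");

        // Unary NOT reuses the existing ElementwiseUnary layer.
        armnn::ElementwiseUnaryDescriptor notDesc(armnn::UnaryOperation::LogicalNot);
        armnn::IConnectableLayer* notLayer = net.AddElementwiseUnaryLayer(notDesc, "not");

        armnn::IConnectableLayer* out = net.AddOutputLayer(0, "out");

        inA->GetOutputSlot(0).Connect(andLayer->GetInputSlot(0));
        inB->GetOutputSlot(0).Connect(andLayer->GetInputSlot(1));
        andLayer->GetOutputSlot(0).Connect(notLayer->GetInputSlot(0));
        notLayer->GetOutputSlot(0).Connect(out->GetInputSlot(0));

        inA->GetOutputSlot(0).SetTensorInfo(boolInfo);
        inB->GetOutputSlot(0).SetTensorInfo(boolInfo);
        andLayer->GetOutputSlot(0).SetTensorInfo(boolInfo);
        notLayer->GetOutputSlot(0).SetTensorInfo(boolInfo);
    }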
2020-11-09  IVGCVSW-5327 Add to Layer a binary blob to host the activation layer info  (Keith Davis)
  Signed-off-by: Keith Davis <keith.davis@arm.com>
  Change-Id: I0a07dea96a86849701ba387dbea148909a6d729b
2020-10-30  IVGCVSW-5322 Fix segfault between Neon and Cl layers  (Narumol Prangnawarat)
  * Fallback to memory copy if memory import is not supported
  * Remove direct compatibility between Neon and Cl Tensors
  * Unit tests for fallback from Neon to Cl and Cl to Neon
  Signed-off-by: Narumol Prangnawarat <narumol.prangnawarat@arm.com>
  Change-Id: Iec00a77423fb23b37a6b1aefee1b2ec4d649efca
2020-10-28  IVGCVSW-5433 Remove boost::transform_iterator and make_transform_iterator  (Finn Williams)
  Signed-off-by: Finn Williams <Finn.Williams@arm.com>
  Change-Id: I28aace7092cff5743353df1b1de8e7a4691554d3
2020-10-23  GitHub#465 Fix NonMaxSuppression  (antkillerfarm)
  Once a box's visited flag is set to true, that box should not be visited again. For example, if we put 10 boxes (ordered by score) into NonMaxSuppression:
  * Step 1: Suppose Boxes 2/3/6/8 are suppressed by Box 1. Boxes 4/5/7/9/10 survive.
  * Step 2 (correct behaviour): Box 4 is used to suppress the surviving boxes. Prior to this commit, Box 4 could be suppressed by Box 2, even though Box 2 had already been suppressed by Box 1.
  Signed-off-by: Antkillerfarm <antkillerfarm@gmail.com>
  Change-Id: I38d7a84287649827a16565748592fb562b4df5d5
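A standalone sketch of the corrected loop described above, assuming the boxes are already sorted by descending score; the Box struct, Iou() helper and the threshold are illustrative placeholders, not the reference implementation:

    // Illustrative sketch of the fix: only boxes that have NOT been suppressed may
    // suppress later boxes, and a suppressed box is never revisited.
    #include <algorithm>
    #include <cstddef>
    #include <vector>

    struct Box { float x1, y1, x2, y2; };

    float Iou(const Box& a, const Box& b)
    {
        const float ix1 = std::max(a.x1, b.x1);
        const float iy1 = std::max(a.y1, b.y1);
        const float ix2 = std::min(a.x2, b.x2);
        const float iy2 = std::min(a.y2, b.y2);
        const float inter = std::max(0.0f, ix2 - ix1) * std::max(0.0f, iy2 - iy1);
        const float areaA = (a.x2 - a.x1) * (a.y2 - a.y1);
        const float areaB = (b.x2 - b.x1) * (b.y2 - b.y1);
        return inter / (areaA + areaB - inter);
    }

    std::vector<int> NonMaxSuppression(const std::vector<Box>& boxes, float iouThreshold)
    {
        std::vector<bool> suppressed(boxes.size(), false);
        std::vector<int> kept;

        for (std::size_t i = 0; i < boxes.size(); ++i)
        {
            if (suppressed[i]) { continue; }   // a suppressed box must not be visited again
            kept.push_back(static_cast<int>(i));

            for (std::size_t j = i + 1; j < boxes.size(); ++j)
            {
                if (!suppressed[j] && Iou(boxes[i], boxes[j]) > iouThreshold)
                {
                    suppressed[j] = true;      // e.g. Box 1 suppresses Boxes 2/3/6/8
                }
            }
        }
        return kept;
    }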
2020-10-14  IVGCVSW-5335 Added documentation for fast_math  (Mike Kelly)
  * Added documentation for fast_math to CLBackendModelContext
  * Added documentation for fast_math to NeonBackendModelContext
  Signed-off-by: Mike Kelly <mike.kelly@arm.com>
  Change-Id: I43a0568ae6914e074a80130a051e5d9bb849f2ba
2020-10-13  IVGCVSW-4489 Remove remaining occurrence of boost::format  (Matthew Sloyan)
  * Replaced with fmt::format in Descriptors.cpp.
  * Removed remaining boost/format headers in ArmNN codebase.
  * Removed additional boost header in Network.cpp
  Signed-off-by: Matthew Sloyan <matthew.sloyan@arm.com>
  Change-Id: Ib98b83bf4ec99ef98ce7a3635ec0dd478c3e43e1
2020-10-08  IVGCVSW-5363 Add Unmap layer and Unmap workload  (Jim Flynn)
  Signed-off-by: Narumol Prangnawarat <narumol.prangnawarat@arm.com>
  Signed-off-by: Jim Flynn <jim.flynn@arm.com>
  Change-Id: Ie5ecfa67e4763d0c058905592fe2e2fd7315f85c
2020-10-08  Remove Resize from list of layers that need padding in Neon  (Teresa Charlin)
  Signed-off-by: Teresa Charlin <teresa.charlinreyes@arm.com>
  Change-Id: I054f0b71d4e9581c637fa09e40f6b661e58e39f3
2020-10-07  IVGCVSW-5362 Add Map layer and Map workload  (Jim Flynn)
  Signed-off-by: Jim Flynn <jim.flynn@arm.com>
  Signed-off-by: Narumol Prangnawarat <narumol.prangnawarat@arm.com>
  Change-Id: Id2227c58809b84c7a7af61f7c0d88ad7d45ce558
2020-10-05  Update ACL pin to fc2f6d0427e1d886fcccc68867d1af1ccd96608b  (Teresa Charlin)
  * Set use_padding to false in neon workload
  Signed-off-by: Teresa Charlin <teresa.charlinreyes@arm.com>
  Change-Id: Ia5367de331efe1d28dea4dfbefc0713720da81f9
2020-10-02  IVGCVSW-5334 Remove remaining boost::numeric_cast from armnn  (Matthew Sloyan)
  * Floating point casts now use armnn::numeric_cast.
  * Also removed remaining header imports.
  Signed-off-by: Matthew Sloyan <matthew.sloyan@arm.com>
  Change-Id: I2d37847d67f164fc0a0ae17f34d49ff3d2210c30
2020-10-02  IVGCVSW-5294 Remove boost::format from armnn backends  (James Ward)
  * Replaced with fmt::format
  * One case required std::stringstream instead
  Signed-off-by: James Ward <james.ward@arm.com>
  Change-Id: Ife7c4cf5f143e43373f42edf6124158af132abc5
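A hedged before/after illustration of the substitution (the message text below is made up, not taken from the backends):

    #include <fmt/format.h>
    #include <string>

    // Before (boost): boost::str(boost::format("Layer %1% is not supported") % layerName)
    // After (fmt):
    std::string UnsupportedMessage(const std::string& layerName)
    {
        return fmt::format("Layer {} is not supported", layerName);
    }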
2020-10-01  Include layer GUID in SerializeToDot output  (Rob Hughes)
  Change-Id: I1a6df60683cc51fcd9739b6dc98f1e722becf045
  Signed-off-by: Robert Hughes <robert.hughes@arm.com>
2020-10-01  COMPMID-3784 Fix 1 CTS MUL INT32 failure due to using SATURATE  (Teresa Charlin)
  * LargeGraph_TENSOR_INT32_Rank4/26
  Signed-off-by: Teresa Charlin <teresa.charlinreyes@arm.com>
  Change-Id: I9d07444db56e26c13a77bf022938644ed7953d6b
2020-10-01  IVGCVSW-5325 Fix non-channel per-axis quantization  (Finn Williams)
  Signed-off-by: Finn Williams <Finn.Williams@arm.com>
  Change-Id: Ie0cf69b2cd76d6ecedab43d3d9ae267d23bbc052
2020-09-30  Refactored Optimize(...) function to throw exceptions instead of returning null  (Mike Kelly)
  * INetwork::Optimize(...) states that the function should throw an exception if it fails, but the implementation in Network.cpp returned null in some scenarios instead. This has led to some confusion amongst users.
  Signed-off-by: Mike Kelly <mike.kelly@arm.com>
  Change-Id: I358d1293232c9464772aa0e39ab3355e3570c823
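A caller-side sketch of the new contract, assuming the usual armnn::Optimize overload that takes a backend list and a device spec; the exact signatures are recalled from the public headers and should be treated as approximate:

    // Illustrative: Optimize() now signals failure by throwing rather than returning null,
    // so callers can drop the null check and handle armnn::Exception instead.
    #include <armnn/ArmNN.hpp>
    #include <armnn/Exceptions.hpp>
    #include <iostream>
    #include <vector>

    armnn::IOptimizedNetworkPtr OptimizeOrReport(const armnn::INetwork& network,
                                                 armnn::IRuntime& runtime)
    {
        std::vector<armnn::BackendId> backends = { armnn::Compute::CpuAcc, armnn::Compute::CpuRef };
        try
        {
            return armnn::Optimize(network, backends, runtime.GetDeviceSpec());
        }
        catch (const armnn::Exception& e)
        {
            std::cerr << "Optimize failed: " << e.what() << std::endl;
            throw;
        }
    }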
2020-09-28  IVGCVSW-5325 Speed up the reference backend  (Finn Williams)
  Change-Id: Id8bd0a0418be31d975b944b54bbacb25051ffb2e
  Signed-off-by: Finn Williams <Finn.Williams@arm.com>
2020-09-25  IVGCVSW-4973 Enable QLstm projection unit tests on CL  (Teresa Charlin)
  * Cosmetic changes on ClQLstmWorkload
  Signed-off-by: Teresa Charlin <teresa.charlinreyes@arm.com>
  Change-Id: I61f55343263e623aaae042d8dfe8c294540e98f1
2020-09-25  Update ACL pin to 840a72cc745c60eccbd26fe192b035ec68b2ee41  (Nikhil Raj)
  * Change tensor to non-const to fix build error caused by ACL fix for QLSTM
  Signed-off-by: Nikhil Raj <nikhil.raj@arm.com>
  Change-Id: I7ab0f644dfb3cb3cf21bda73028e9368f3354f4a
2020-09-24  Add int32 and int64 ArgMax op support  (Inki Dae)
  This patch adds int32 and int64 ArgMax op support. ArmNN already has an ArgMax op, but it is unused and does not support an int64 output type. This patch therefore adds a new data type, Signed64, and an ArgMinMax computation function for int64 support. By default, the output tensor type of the ArgMax op is int64 for TensorFlow Lite models, so this patch makes the appropriate function (ArgMax for int64 or int32) be called according to the parsed output_type value. With this patch, ArmNN supports both int64 and int32 for the ArgMinMax op.
  Changelog v1:
  - Check whether the output data type of the ArgMinMax op is valid.
  - Use a template function to support the int32 and int64 variants of the ArgMinMax function.
  - Keep using Signed32 as the default data type of m_Output_Type.
  Change-Id: I7a8e7e38dd9e5acc81464571d8b4d51378fc7f14
  Signed-off-by: Inki Dae <inki.dae@samsung.com>
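A self-contained sketch of the dispatch idea described above (one template instantiated for int32 and int64 outputs, selected from the parsed output_type); the names and types here are illustrative, not the ArmNN reference code:

    // Illustrative: one template handles both Signed32 and Signed64 outputs for a
    // flat ArgMax over the innermost axis.
    #include <cstdint>
    #include <cstddef>
    #include <vector>

    template <typename OutputT>
    void ArgMaxInner(const float* input, OutputT* output, std::size_t outerSize, std::size_t axisSize)
    {
        for (std::size_t o = 0; o < outerSize; ++o)
        {
            const float* row = input + o * axisSize;
            std::size_t best = 0;
            for (std::size_t i = 1; i < axisSize; ++i)
            {
                if (row[i] > row[best]) { best = i; }
            }
            output[o] = static_cast<OutputT>(best);
        }
    }

    // Dispatch according to the requested output type (int64 by default for TFLite
    // models, per the commit message above).
    enum class OutputType { Signed32, Signed64 };

    void RunArgMax(const std::vector<float>& input, std::size_t axisSize, OutputType type,
                   std::vector<int32_t>& out32, std::vector<int64_t>& out64)
    {
        const std::size_t outer = input.size() / axisSize;
        if (type == OutputType::Signed64)
        {
            out64.resize(outer);
            ArgMaxInner(input.data(), out64.data(), outer, axisSize);
        }
        else
        {
            out32.resize(outer);
            ArgMaxInner(input.data(), out32.data(), outer, axisSize);
        }
    }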
2020-09-22  IVGCVSW-5318 'Create a Neon/CL Workload Unit Test fast_math option enabled'  (Sadik Armagan)
  * Unit test implemented to make sure it returns WINOGRAD
  * Updated the enable-fast-math option in ExecuteNetwork to be consistent
  Signed-off-by: Sadik Armagan <sadik.armagan@arm.com>
  Change-Id: Id64f114ae47966def69a9eef0770a4251ee56a41
2020-09-17  IVGCVSW-5300 Remove some boost::numeric_cast from armnn/backends  (Matthew Sloyan)
  * Replaced with armnn/utility/NumericCast.hpp
  * Some exclusions in the reference backend, excluded because they require a float implementation in NumericCast.hpp
  Signed-off-by: Matthew Sloyan <matthew.sloyan@arm.com>
  Change-Id: I9e4e9cd502c865452128fa04415fd6f250baa855
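A small illustration of the substitution, assuming armnn::numeric_cast from armnn/utility/NumericCast.hpp behaves as a checked narrowing cast like the boost version it replaces:

    #include <armnn/utility/NumericCast.hpp>

    // Before: unsigned int n = boost::numeric_cast<unsigned int>(someInt);
    // After:
    unsigned int ToUnsigned(int someInt)
    {
        // armnn::numeric_cast performs a checked narrowing conversion; the failure
        // behaviour is assumed to mirror the boost call it replaces.
        return armnn::numeric_cast<unsigned int>(someInt);
    }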
2020-09-15  IVGCVSW-5317 'Add enable_fast_math Option to ExecuteNetwork'  (Sadik Armagan)
  Signed-off-by: Sadik Armagan <sadik.armagan@arm.com>
  Change-Id: I4eb3e27837aea926593d49f9ccea07bab8388d5b
2020-09-14  IVGCVSW-5157 'Pipe ModelOption through Network::LoadNetwork() to Workload factory'  (Sadik Armagan)
  * Pass ModelOptions to WorkloadFactory
  * Updated signature of CL and NEON Convolution2d workloads; added FastMathEnabled param.
  Signed-off-by: Sadik Armagan <sadik.armagan@arm.com>
  Change-Id: I536178be8e4dd4083489e69febadaf0feeba46d2
2020-09-10  IVGCVSW-5156 Introduce ModelOptions to OptimizedNetwork  (Sadik Armagan)
  * Introduced ModelOptions to IBackendInternal
  * Introduced ModelOptions to Network
  * Added FastMathEnabled parameter to Conv2d Validate function in CL and NEON
  * Added Optimizer tests
  Signed-off-by: Ryan OShea <Ryan.OShea2@arm.com>
  Signed-off-by: Sadik Armagan <sadik.armagan@arm.com>
  Change-Id: Ib54c1e82cb3d89a52756ed499cf91b6a7fdb2063
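A hedged sketch of how a caller might pass the FastMathEnabled model option through to optimization; the option name follows the commit message, while the BackendOptions constructor and OptimizerOptions::m_ModelOptions plumbing are assumed from the public API of this period:

    // Illustrative: request fast math on the GPU backend via ModelOptions.
    #include <armnn/ArmNN.hpp>
    #include <armnn/BackendOptions.hpp>
    #include <vector>

    armnn::IOptimizedNetworkPtr OptimizeWithFastMath(const armnn::INetwork& network,
                                                     armnn::IRuntime& runtime)
    {
        // "FastMathEnabled" is the parameter named above; it is consumed by the
        // CL/NEON backend model contexts.
        armnn::BackendOptions gpuFastMath("GpuAcc", { { "FastMathEnabled", true } });

        armnn::OptimizerOptions options;
        options.m_ModelOptions.push_back(gpuFastMath);

        std::vector<armnn::BackendId> backends = { armnn::Compute::GpuAcc };
        return armnn::Optimize(network, backends, runtime.GetDeviceSpec(), options);
    }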
2020-09-03  Update ACL pin to ec4dee8c68a3d0f6d63db184bfb2f4589429778e  (Teresa Charlin)
  * Axis for LogSoftMax and SoftMax can be either positive or negative
  Signed-off-by: Teresa Charlin <teresa.charlinreyes@arm.com>
  Change-Id: I36b0507ad7600c0a98c3b8be3c0350045ee05b84
  Signed-off-by: Nikhil Raj <nikhil.raj@arm.com>
2020-09-03  IVGCVSW-5261 Fix undefined reference to GetIdStatic()  (David Monahan)
  * Moved DynamicBackend tests to only build when ArmnnRef is enabled, due to a dependency on them dynamically loading the ArmnnRef backend object
  Signed-off-by: David Monahan <david.monahan@arm.com>
  Change-Id: Iee0480e7d0cf505bbb5c26629829d3d20fb60051
2020-08-31  IVGCVSW-5256 Use CreateTensorHandle() function from TensorHandleFactory in the tests for layers Q, R & T  (Finn Williams)
  Signed-off-by: Finn Williams <Finn.Williams@arm.com>
  Change-Id: I6fc613d31785298a0b7ed18f1abdd59bafed1e8e
2020-08-31  IVGCVSW-5231 Remove CreateTensorHandle in the test where there is NO_DEPRECATE_WARN  (Keith Davis)
  * Done for all elementwise layers, Activation, BatchNorm, BatchToSpace
  Signed-off-by: Keith Davis <keith.davis@arm.com>
  Change-Id: Id1d15a0960233026aecf7a07e0d3f006e07e4abf
2020-08-31  IVGCVSW-5253 Use CreateTensorHandle() function from TensorHandleFactory in the tests for layers M-P  (Finn Williams)
  Signed-off-by: Finn Williams <Finn.Williams@arm.com>
  Change-Id: I324eee7d750e30f714e0d346b7da7b69866ff935
2020-08-31  IVGCVSW-5252 Use CreateTensorHandle() function from TensorHandleFactory in the tests for layers between G-L  (Finn Williams)
  Signed-off-by: Finn Williams <Finn.Williams@arm.com>
  Change-Id: I197351a479fb211787bd12a73c9618d2ded95898
2020-08-31  IVGCVSW-5249 Use CreateTensorHandle from ITensorHandleFactory in the test for all layers between C-D  (Keith Davis)
  Signed-off-by: Keith Davis <keith.davis@arm.com>
  Change-Id: I9583adf50e67e63e73833f400d1c50fbff57f60c
2020-08-31  IVGCVSW-5250 Remove CreateTensorHandle in the test for layers between E-F  (Finn Williams)
  * Refactored Floor and FullyConnected tests
  Signed-off-by: Finn Williams <Finn.Williams@arm.com>
  Change-Id: Iad87254e638bdcb5d7b334b16ec87a0c981e48a0
2020-08-28  IVGCVSW-5257 'Remove CreateTensorHandle in the test for layers beginning with S'  (Sadik Armagan)
  * Refactored SpaceToDepth, Splitter, Stack and StridedSlice unit tests to use TensorHandleFactory for creating TensorHandles
  Signed-off-by: Sadik Armagan <sadik.armagan@arm.com>
  Change-Id: Ib22bb09cd2120c02c548099eaa06db6e6f00b15e
2020-08-28  IVGCVSW-4979 'Remove CreateTensorHandle using WorkloadFactory in workload tests'  (Sadik Armagan)
  * Small refactor in unit tests using TensorHandleFactory to use reference instead of pointer
  Signed-off-by: Sadik Armagan <sadik.armagan@arm.com>
  Change-Id: I1a702941890034a45029c014c8b11e185f45a807
2020-08-27  IVGCVSW-5257 'Remove CreateTensorHandle in the test for layers beginning with S'  (Sadik Armagan)
  * Refactored SoftmaxTestImpl to use TensorHandleFactory to create TensorHandles
  Signed-off-by: Sadik Armagan <sadik.armagan@arm.com>
  Change-Id: I83559a89187bbed0d6f34ca589ea81c694bf5683