path: root/src/backends/aclCommon/ArmComputeTensorUtils.cpp
Age  Commit message  Author
2020-09-24  Add int32 and int64 ArgMax op support  (Inki Dae)
This patch adds int32 and int64 ArgMax op support. ArmNN already has an ArgMax op, but it is unused and does not support an int64 output type, so this patch adds a new data type, Signed64, and an ArgMinMax computation function for int64. By default the output tensor type of ArgMax is int64 for TensorFlow Lite models, so the appropriate function (ArgMax for int64 or int32) is selected according to the parsed output_type value. With this patch, ArmNN supports both int64 and int32 for the ArgMinMax op.

Changelog v1:
 * Check whether the output data type of the ArgMinMax op is valid.
 * Use a template function to support the int32 and int64 variants of the ArgMinMax function (see the sketch below).
 * Keep Signed32 as the default data type of m_Output_Type.

Change-Id: I7a8e7e38dd9e5acc81464571d8b4d51378fc7f14
Signed-off-by: Inki Dae <inki.dae@samsung.com>
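For context, a minimal sketch of the templated ArgMax described above; the function name and flat-input shape are illustrative assumptions, not ArmNN's actual reference implementation:

    #include <cstddef>
    #include <vector>

    // Illustrative only: ArgMax over a flat, non-empty input, with the index
    // type templated so the same code serves Signed32 and Signed64 outputs.
    template <typename OutputT>
    OutputT ArgMaxFlat(const std::vector<float>& input)
    {
        std::size_t bestIndex = 0;
        for (std::size_t i = 1; i < input.size(); ++i)
        {
            if (input[i] > input[bestIndex])
            {
                bestIndex = i;
            }
        }
        return static_cast<OutputT>(bestIndex);
    }

The caller would pick the instantiation from the parsed output_type value, e.g. ArgMaxFlat<int64_t>(data) when the model requests Signed64 and ArgMaxFlat<int32_t>(data) for the Signed32 default.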
2020-06-23  IVGCVSW-4622 Add NEON FILL Workload  (Sadik Armagan)
 * Added Neon workload for the Fill operator
 * Enabled Fill operator tests on Neon
 * NEFill does not have a validate() function yet, so IsLayerSupported() returns true for the moment
 * Added INT32 as a supported type for CpuRef

Signed-off-by: Sadik Armagan <sadik.armagan@arm.com>
Change-Id: I17bf5ec13750f46322a30653e15ba2a514f61f08
2020-06-03  remove BOM from files  (Laurent Carlier)
Change-Id: Ia4b4bb3be0ed6e933c77d58f8e9879b1370e9537
Signed-off-by: Laurent Carlier <laurent.carlier@arm.com>
2020-04-09  IVGCVSW-4641 Investigate HAL 1.3 VTS Failures  (Sadik Armagan)
 * Add QASYMM8_SIGNED data type support to NeonTensorHandle

Signed-off-by: Sadik Armagan <sadik.armagan@arm.com>
Change-Id: Iae34f7d67de83642606ccd8c61a1b72df7f2bb3a
2020-04-06  IVGCVSW-4485 Remove Boost assert  (Narumol Prangnawarat)
 * Change Boost asserts to ArmNN asserts
 * Change include files to the ArmNN assert header
 * Fix ARMNN_ASSERT_MSG issue with multiple conditions (see the sketch below)
 * Change BOOST_ASSERT to BOOST_TEST where appropriate
 * Remove unused include statements

Signed-off-by: Narumol Prangnawarat <narumol.prangnawarat@arm.com>
Change-Id: I5d0fa3a37b7c1c921216de68f0073aa34702c9ff
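For context, a minimal sketch of the kind of grouping pitfall an assert-with-message macro can hit when the condition has multiple clauses; the macro definitions here are illustrative assumptions, not necessarily ArmNN's actual ones:

    #include <cassert>

    // Illustrative assumption: a naive macro splices the message into the
    // asserted expression without grouping the condition.
    #define MY_ASSERT_MSG_NAIVE(COND, MSG) assert(COND && MSG)

    // Parenthesising both arguments keeps multi-clause conditions grouped:
    // with the naive form, `a || b` expands to `a || b && "msg"`, which
    // parses as `a || (b && "msg")` and draws operator-precedence warnings.
    #define MY_ASSERT_MSG(COND, MSG) assert((COND) && (MSG))

    void CheckBackends(bool hasCpuAcc, bool hasGpuAcc)
    {
        MY_ASSERT_MSG(hasCpuAcc || hasGpuAcc,
                      "at least one accelerated backend must be available");
    }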
2020-03-31  IVGCVSW-4633 Add conversion of BF16 support to Neon  (Narumol Prangnawarat)
 * Add NeonConvertBf16ToFp32Workload
 * Add NeonConvertFp32ToBf16Workload
 * Add BFloat16 type support to NeonConstantWorkload and NeonTensorHandle
 * Add ConvertBf16ToFp32Weight when ConvertBf16ToFp32Layer is added
 * Unit tests

Signed-off-by: Narumol Prangnawarat <narumol.prangnawarat@arm.com>
Change-Id: Id5b44a203add5e0c98c1ca4e2162115741b56644
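For context, a minimal sketch of the underlying BF16/FP32 conversion (standalone helpers, not the NeonConvert* workloads themselves):

    #include <cstdint>
    #include <cstring>

    // BFloat16 keeps the top 16 bits of an IEEE-754 float32, so widening to
    // FP32 is a 16-bit left shift of the bit pattern.
    float Bf16ToFp32(std::uint16_t bf16Bits)
    {
        std::uint32_t fp32Bits = static_cast<std::uint32_t>(bf16Bits) << 16;
        float result;
        std::memcpy(&result, &fp32Bits, sizeof(result)); // bit-cast without UB
        return result;
    }

    // Narrowing FP32 -> BF16 by truncating the low 16 bits (production code
    // typically rounds to nearest-even rather than truncating).
    std::uint16_t Fp32ToBf16(float value)
    {
        std::uint32_t fp32Bits = 0;
        std::memcpy(&fp32Bits, &value, sizeof(fp32Bits));
        return static_cast<std::uint16_t>(fp32Bits >> 16);
    }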
2020-03-02  IVGCVSW-4375 Add support for Transpose  (Mike Kelly)
 * Added TransposeLayer
 * Added CL, Neon and Ref Workloads
 * Added Transpose utilities
 * Added Serializer and Deserializer support
 * Added Quantizer support

Signed-off-by: Mike Kelly <mike.kelly@arm.com>
Change-Id: I04c755ba7cb5b1edf72b3c9f3c0314878032e3c7
2020-02-07  IVGCVSW-4386 Add ArmNN reference support for QAsymmS8  (Ryan OShea)
 * Added Quantization Scheme for QAsymmS8 (see the sketch below)
 * Added Unit Tests for QAsymmS8
 * Renamed QAsymm8 calls to QAsymmU8

Signed-off-by: Ryan OShea <Ryan.OShea2@arm.com>
Change-Id: I897b4e018ba1d808cc3f8c113f2be2dbad49c8db
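For context, a minimal sketch of an asymmetric signed 8-bit scheme such as QAsymmS8; the helper names are illustrative, not ArmNN's actual quantization API:

    #include <algorithm>
    #include <cmath>
    #include <cstdint>

    // q = round(x / scale) + offset, clamped to the int8 range [-128, 127].
    std::int8_t QuantizeQAsymmS8(float value, float scale, std::int32_t offset)
    {
        std::int32_t q = static_cast<std::int32_t>(std::round(value / scale)) + offset;
        q = std::max<std::int32_t>(-128, std::min<std::int32_t>(127, q));
        return static_cast<std::int8_t>(q);
    }

    // Dequantization reverses it: x = scale * (q - offset).
    float DequantizeQAsymmS8(std::int8_t quantized, float scale, std::int32_t offset)
    {
        return scale * static_cast<float>(quantized - offset);
    }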
2020-01-31  IVGCVSW-4388 Update ACL pin to 6a342648ae50beb8457871862f14fc9baef6b74f  (Teresa Charlin)
!android-nn-driver:2671

Signed-off-by: Teresa Charlin <teresa.charlinreyes@arm.com>
Change-Id: Ifeb6be7812fbb98b37f2a1439bfd5a3215de2a62
2020-01-29  IVGCVSW-4149 Enable quantisation multiplier > 1 in all convolutions  (Ryan OShea)
Signed-off-by: Ryan OShea <Ryan.OShea2@arm.com>
Change-Id: I9652844a868ce8e05c0433c051e7079cf203c422
2020-01-24  IVGCVSW-4370 Deprecate DataType::QuantizedSymm8PerAxis  (Derek Lamberti)
!android-nn-driver:2622

Change-Id: If99d3eff71ff66ba28af1e5af248299fe04511b9
Signed-off-by: Derek Lamberti <derek.lamberti@arm.com>
2020-01-13  Rename quantized data types to remove ambiguity for signed/unsigned payloads  (Derek Lamberti)
!android-nn-driver:2572

Change-Id: I8fe52ceb09987b3d05c539409510f535165455cc
Signed-off-by: Derek Lamberti <derek.lamberti@arm.com>
2019-12-09  IVGCVSW-4211 Add Signed 8 bit Quantisation support into the Reference backend  (Finn Williams)
!android-nn-driver:2435

Signed-off-by: Finn Williams <Finn.Williams@arm.com>
Change-Id: I10ecd4a8937725953396805f33a3562a5384c4d4
2019-11-27  IVGCVSW-4148 Extend reporting of quant multiplier > 1 as unsupported on ACL to per-axis case  (Aron Virginas-Tar)
Signed-off-by: Aron Virginas-Tar <Aron.Virginas-Tar@arm.com>
Change-Id: I66a8360b6d86e95325dee58927dcbe62ccf6ad58
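For context, a minimal sketch of the kind of check this extends, assuming the historical ACL constraint that the requantization multiplier of a quantized convolution must stay below 1; the function name is illustrative:

    #include <vector>

    // The requantization multiplier is m = (inputScale * weightScale) / outputScale.
    // With per-axis quantization, every per-channel weight scale has to pass
    // the same check before the layer can be claimed as supported on ACL.
    bool AreQuantMultipliersSupported(float inputScale,
                                      const std::vector<float>& weightScales,
                                      float outputScale)
    {
        for (float weightScale : weightScales)
        {
            if ((inputScale * weightScale) / outputScale >= 1.0f)
            {
                return false;
            }
        }
        return true;
    }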
2019-11-19  MLCE-144 CTS NNAPI test cases failed  (Mike Kelly)
 * Fixed numerous CTS/VTS failures related to Quantization

Signed-off-by: Mike Kelly <mike.kelly@arm.com>
Change-Id: If5c20256366e80b6b9bbc46b2a1c410a9b8c48e1
2019-11-08  IVGCVSW-4108 Fixed invalid data type exception  (Mike Kelly)
 * Added support for QuantizedSymm8PerAxis to ArmComputeTensorUtils.

Signed-off-by: Mike Kelly <mike.kelly@arm.com>
Change-Id: Ib8662f216bc4b6b54e0099780f73bcf6ef05384b
2019-11-05  IVGCVSW-3843 Add support of per-axis quantization to BuildArmComputeTensorInfo  (Aron Virginas-Tar)
Signed-off-by: Aron Virginas-Tar <Aron.Virginas-Tar@arm.com>
Change-Id: I0bb0e9da306eee3e19dc9967a6c8bb01da998deb
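For context, a minimal sketch of what per-axis (per-channel) quantization parameters look like; the struct and helper are illustrative, not the actual BuildArmComputeTensorInfo plumbing:

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // One scale per slice along the quantization dimension; the symmetric
    // per-axis scheme uses a zero point of 0 for every channel.
    struct PerAxisQuantization
    {
        std::vector<float> scales;   // one scale per channel
        unsigned int       quantDim; // dimension the scales apply to
    };

    // Dequantize a single weight element given its channel index.
    float DequantizePerAxis(std::int8_t quantizedValue,
                            const PerAxisQuantization& qInfo,
                            std::size_t channel)
    {
        return qInfo.scales[channel] * static_cast<float>(quantizedValue);
    }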
2019-10-10  IVGCVSW-3967 Avg_Pooling2d Fails on CL NHWC FP16  (Sadik Armagan)
 * Enable fp_mixed_precision flag for the failing test case

Signed-off-by: Sadik Armagan <sadik.armagan@arm.com>
Change-Id: If13552165eb6598a84d213b82847b56a8c5f2783
2019-07-25  IVGCVSW-3521 CpuAcc V1.2 pad Failures  (Mike Kelly)
 * Pad value for QASYMM8 is no longer stored in quantized form.

Signed-off-by: Mike Kelly <mike.kelly@arm.com>
Change-Id: I048e1d233353c0560ae03a7cc1ed5199295352bc
2019-06-28  IVGCVSW-3162 Support CL workload for TransposeConv2D  (Aron Virginas-Tar)
Signed-off-by: Aron Virginas-Tar <Aron.Virginas-Tar@arm.com>
Change-Id: I3b021c0828d30298d99ddb211c9aae17fe3636f0
2019-01-23  IVGCVSW-2467 Update Boolean type support  (Nattapat Chaimanowong)
Change-Id: I0ab3339e8803a3e4e700d8fec9883eccc524b31e
2019-01-04  MLCE-77 Depthwise Convolution with depth multiplier > 1 doesn't work  (Matteo Martincigh)
 * Unified ArmNN's weight format to [ M, I, H, W ] for the depthwise convolution
 * Added conversion utilities to permute/reshape the weights as appropriate when using the CL and Neon backends (see the sketch below)
 * Updated the reference implementation of the convolution
 * Updated the relevant unit tests accordingly

!android-nn-driver:459
Change-Id: I07d0818efa9d1ca1e5dad82983aac1fe78eadb18
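For context, a minimal sketch of the kind of weight permutation such conversion utilities perform; the function is illustrative and does not claim the specific layout CL or Neon expects:

    #include <array>
    #include <cstddef>
    #include <vector>

    // Rearrange a dense 4-D weight tensor stored in [ M, I, H, W ] order into
    // a new dimension order given by perm, where perm[i] names which source
    // dimension becomes destination dimension i.
    std::vector<float> PermuteWeights4d(const std::vector<float>& src,
                                        const std::array<std::size_t, 4>& srcShape,
                                        const std::array<std::size_t, 4>& perm)
    {
        std::array<std::size_t, 4> dstShape{};
        for (std::size_t i = 0; i < 4; ++i)
        {
            dstShape[i] = srcShape[perm[i]];
        }

        std::vector<float> dst(src.size());
        std::array<std::size_t, 4> idx{}; // index into the source tensor
        for (idx[0] = 0; idx[0] < srcShape[0]; ++idx[0])
        for (idx[1] = 0; idx[1] < srcShape[1]; ++idx[1])
        for (idx[2] = 0; idx[2] < srcShape[2]; ++idx[2])
        for (idx[3] = 0; idx[3] < srcShape[3]; ++idx[3])
        {
            // Row-major flat offsets in the source and destination layouts.
            const std::size_t srcOffset =
                ((idx[0] * srcShape[1] + idx[1]) * srcShape[2] + idx[2]) * srcShape[3] + idx[3];
            const std::size_t dstOffset =
                ((idx[perm[0]] * dstShape[1] + idx[perm[1]]) * dstShape[2] + idx[perm[2]]) * dstShape[3] + idx[perm[3]];
            dst[dstOffset] = src[srcOffset];
        }
        return dst;
    }

For example, perm = {2, 3, 1, 0} would rearrange [ M, I, H, W ] weights into an [ H, W, I, M ] layout.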
2019-01-02  MLCE-82 Add Neon Mean support and unit tests  (Matthew Bentham)
Factor out the new BuildArmComputeReductionCoordinates function from the CL backend into ArmComputeTensorUtils. Update the NEON LayerSupport and WorkloadFactory objects.

Change-Id: Icc975ec699199bffafbdb207323df509d35e1e04
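For context, a minimal sketch of converting ArmNN reduction axes into ACL-style coordinates; the reversal reflects that ArmNN lists dimensions outermost-first while ACL indexes them innermost-first, and the plain std::vector return type is an illustrative stand-in for arm_compute::Coordinates:

    #include <vector>

    // An ArmNN axis `a` on a tensor with `numDims` dimensions maps to ACL
    // dimension (numDims - 1 - a).
    std::vector<unsigned int> BuildReductionAxes(unsigned int numDims,
                                                 const std::vector<unsigned int>& armnnAxes)
    {
        std::vector<unsigned int> aclAxes;
        aclAxes.reserve(armnnAxes.size());
        for (unsigned int axis : armnnAxes)
        {
            aclAxes.push_back(numDims - 1 - axis);
        }
        return aclAxes;
    }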
2018-12-20  IVGCVSW-2164 Added ACL implementation of SpaceToBatchNd operation to ArmNN  (Sadik Armagan)
!android-nn-driver:428

Change-Id: I42e59ad96d2c80f46b085182855d34b710a74dfe
2018-11-20  IVGCVSW-1199 Disable auto-flattening of Compute Library tensors  (Matthew Bentham)
This is one of the reasons why the tests in https://review.mlplatform.org/#/c/ml/armnn/+/237/ are failing (but not the only reason).

Change-Id: If485bade2a6dd013cba826cec71d748fc7747249
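For context, a minimal sketch of what such flattening does, assuming it refers to collapsing trailing dimensions of size 1 (so a shape like [ 2, 3, 1, 1 ] would be reported as [ 2, 3 ], losing the rank ArmNN declared):

    #include <cstddef>
    #include <vector>

    // Drop trailing size-1 dimensions; disabling this behaviour preserves the
    // tensor rank that ArmNN reported.
    std::vector<std::size_t> FlattenTrailingOnes(std::vector<std::size_t> shape)
    {
        while (shape.size() > 1 && shape.back() == 1)
        {
            shape.pop_back();
        }
        return shape;
    }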
2018-11-02  IVGCVSW-1946: Remove armnn/src from the include paths  (Aron Virginas-Tar)
Change-Id: I663a0a0fccb43ee960ec070121a59df9db0bb04e
2018-10-10  IVGCVSW-1921: move common Acl code to a separate folder  (David Beck)
Change-Id: I400be8e7c0cc5a31eb9d2a7396da145d50d51b6e