path: root/src/armnn/optimizations
Age | Commit message | Author
2021-01-25  IVGCVSW-5619 Add OptimizerOptions and NetworkProperties to ArmNN Delegate (Narumol Prangnawarat)
* Add OptimizerOptions, NetworkProperties and DebugCallbackFunction to DelegateOptions
* Enable OptimizerOptions when the network is being optimized
* Enable NetworkProperties when loading the network
* Enable DebugCallbackFunction
* Add an error message when loading the network
* Log a warning instead of an error when an operator is not supported but could fall back to another backend
* Improve uint16_t CompareData
* Unit tests
Signed-off-by: Narumol Prangnawarat <narumol.prangnawarat@arm.com>
Change-Id: I353035afb442774bfeb1c62570a90755c2ceaf38
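A minimal sketch of how the new options might be wired into the delegate from application code. The backend-list constructor and the SetOptimizerOptions setter are assumptions based on this entry; check DelegateOptions.hpp for the exact accessors in a given ArmNN version.

    // Sketch only: wiring OptimizerOptions into the TfLite delegate options.
    #include <armnn/ArmNN.hpp>
    #include <armnn_delegate.hpp>
    #include <vector>

    int main()
    {
        armnn::OptimizerOptions optimizerOptions;
        optimizerOptions.m_ReduceFp32ToFp16 = false; // keep FP32 precision
        optimizerOptions.m_Debug            = true;  // insert DebugLayers while optimizing

        // Prefer the accelerated CPU backend, fall back to the reference backend.
        std::vector<armnn::BackendId> backends = { armnn::Compute::CpuAcc,
                                                   armnn::Compute::CpuRef };

        armnnDelegate::DelegateOptions delegateOptions(backends); // assumed constructor
        delegateOptions.SetOptimizerOptions(optimizerOptions);    // assumed setter from this change

        // delegateOptions would then be passed to armnnDelegate::TfLiteArmnnDelegateCreate(...).
        return 0;
    }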
2020-11-17  IVGCVSW-5530 'Cannot run SSD Mobilenet f16/uint8 on CpuRef via ExecuteNetwork' (Sadik Armagan)
* Added FP16 DataType support to DetectionPostProcess
* The DetectionPostProcess layer output is always Float32, regardless of the input type
Signed-off-by: Sadik Armagan <sadik.armagan@arm.com>
Change-Id: I21f63dd08f0863e9a98e105b3009bab3da1ab0c3
2020-11-08  IVGCVSW-5315 Create FuseBatchNorm class (Mike Kelly)
Signed-off-by: Teresa Charlin <teresa.charlinreyes@arm.com>
Signed-off-by: Mike Kelly <mike.kelly@arm.com>
Change-Id: Id0625c58dbeea79874bf986b70d136ed9390bf83
2020-10-29  IVGCVSW-5314 Create OptimizeForExclusiveConnection (Teresa Charlin)
* FuseBatchNorm class has been added to facilitate testing
* Only Convolution2D FP32 being fused
Signed-off-by: Teresa Charlin <teresa.charlinreyes@arm.com>
Change-Id: I049c4770946ddca21b08516d4c9f4d0d22bf9b45
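The fusion folds the batch-normalisation parameters into the convolution's weights and bias so the BatchNormalization layer can be dropped. An illustrative per-output-channel computation (not the actual FuseBatchNorm code), using the standard formulas w' = w * gamma / sqrt(var + eps) and b' = (b - mean) * gamma / sqrt(var + eps) + beta:

    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct BatchNormParams
    {
        std::vector<float> gamma, beta, mean, variance;
        float eps;
    };

    // Weights laid out as [outputChannels][weightsPerChannel] for simplicity.
    void FoldBatchNormIntoConv(std::vector<std::vector<float>>& weights,
                               std::vector<float>& bias,
                               const BatchNormParams& bn)
    {
        for (std::size_t oc = 0; oc < weights.size(); ++oc)
        {
            const float scale = bn.gamma[oc] / std::sqrt(bn.variance[oc] + bn.eps);
            for (float& w : weights[oc])
            {
                w *= scale;                                                // w' = w * scale
            }
            bias[oc] = (bias[oc] - bn.mean[oc]) * scale + bn.beta[oc];     // b' = (b - mean) * scale + beta
        }
    }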
2020-09-15  IVGCVSW-5305 AddBroadcastReshapeLayer as optimizer (Narumol Prangnawarat)
* Remove AddBroadcastReshapeLayer from TfLiteParser
* Add AddBroadcastReshapeLayer as optimizer
* AddBroadcastReshapeLayer optimizer unit tests
* Load-scope dynamic tensor broadcasting unit tests
Signed-off-by: Narumol Prangnawarat <narumol.prangnawarat@arm.com>
Change-Id: I3549e85b71b41cbd4d96c0f1ece7887acbca76d1
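The optimizer's job is to insert a Reshape in front of the lower-rank input of a broadcastable binary operation so both inputs end up with the same rank. An illustrative shape-expansion helper (not the actual AddBroadcastReshapeLayer code):

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Pad a lower-rank shape with leading 1s, e.g. [4] -> [1, 1, 1, 4] for an
    // elementwise op whose other input is [1, 2, 3, 4].
    std::vector<uint32_t> ExpandRankForBroadcast(const std::vector<uint32_t>& shape,
                                                 std::size_t targetRank)
    {
        std::vector<uint32_t> expanded(targetRank, 1u);       // start with all-1 dims
        const std::size_t offset = targetRank - shape.size(); // right-align original dims
        for (std::size_t i = 0; i < shape.size(); ++i)
        {
            expanded[offset + i] = shape[i];
        }
        return expanded;
    }
    // The optimizer then inserts a ReshapeLayer with this shape on the lower-rank
    // input's connection, instead of the TfLiteParser doing it at parse time.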
2020-06-03  remove BOM from files (Laurent Carlier)
Change-Id: Ia4b4bb3be0ed6e933c77d58f8e9879b1370e9537
Signed-off-by: Laurent Carlier <laurent.carlier@arm.com>
2020-05-19  IVGCVSW-4669 Set destination tensorInfo in MoveAllConnections() (Finn Williams)
Signed-off-by: Finn Williams <Finn.Williams@arm.com>
Change-Id: I3563209dcb3db1b40cf2db3855adc631b5e323be
2020-04-10  IVGCVSW-4483 Remove boost::polymorphic_downcast (Jan Eilers)
* Exchange boost::polymorphic_downcast with armnn::PolymorphicDowncast
* Remove unnecessary includes of boost::polymorphic_downcast
Signed-off-by: Jan Eilers <jan.eilers@arm.com>
Change-Id: Ie603fb82860fe05fee547dc78073230cc62b2e1f
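armnn::PolymorphicDowncast (from armnn/utility/PolymorphicDowncast.hpp) keeps the same semantics as the Boost helper: the downcast is checked with dynamic_cast in debug builds and is a plain static_cast in release. A self-contained sketch of the replacement pattern, using stand-in types rather than the real Layer classes:

    #include <armnn/utility/PolymorphicDowncast.hpp>

    struct Layer              { virtual ~Layer() = default; };
    struct ReshapeLayer final : Layer { unsigned int m_Rank = 4; };

    unsigned int GetReshapeRank(Layer* base)
    {
        // Before: auto* reshape = boost::polymorphic_downcast<ReshapeLayer*>(base);
        auto* reshape = armnn::PolymorphicDowncast<ReshapeLayer*>(base);
        return reshape->m_Rank;
    }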
2020-04-06  IVGCVSW-4485 Remove Boost assert (Narumol Prangnawarat)
* Change boost assert to armnn assert
* Change include file to armnn assert
* Fix ARMNN_ASSERT_MSG issue with multiple conditions
* Change BOOST_ASSERT to BOOST_TEST where appropriate
* Remove unused include statements
Signed-off-by: Narumol Prangnawarat <narumol.prangnawarat@arm.com>
Change-Id: I5d0fa3a37b7c1c921216de68f0073aa34702c9ff
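The replacement macros live in armnn/utility/Assert.hpp. A small sketch of the before/after usage; the variables here are placeholders, not code from these optimizations:

    #include <armnn/utility/Assert.hpp>

    void CheckInputs(unsigned int numInputs, unsigned int expected)
    {
        // Before: BOOST_ASSERT(numInputs == expected);
        ARMNN_ASSERT(numInputs == expected);

        // Before: BOOST_ASSERT_MSG(numInputs > 0 && numInputs == expected, "...");
        ARMNN_ASSERT_MSG(numInputs > 0 && numInputs == expected,
                         "Unexpected number of inputs");
    }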
2020-03-26  IVGCVSW-4597 Modify BF16 optimizer to convert only inputs and weights of Convolution2d and FullyConnected layers (Narumol Prangnawarat)
* Add InsertConvertFp32ToBf16LayersBefore
* Add ConvertWeight to ConvertFp32NetworkToBf16Impl for Conv2d and FullyConnected
* Allow different input and output data types when the input is BF16 and the output is FP32 for Conv2d and FullyConnected layers
* Unit tests
Signed-off-by: Narumol Prangnawarat <narumol.prangnawarat@arm.com>
Change-Id: Ic8f92ff28edcae08a72a3114a28f50c4619f919b
2020-03-20  IVGCVSW-4520 Implement BFloat16 Optimizer (Narumol Prangnawarat)
* Add ReduceFp32ToBf16 to OptimizerOptions
* Add ConvertFp32NetworkToBf16
* Add utility functions to insert conversion layers
* Add constant conversion BF16 <-> FP32
* Unit tests
Signed-off-by: Narumol Prangnawarat <narumol.prangnawarat@arm.com>
Change-Id: Iaff77e20c721400b052cb37eb9ef6fe16d7abaff
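From the caller's side, the new pass is switched on through the OptimizerOptions passed to armnn::Optimize. A minimal sketch, assuming an already-built network and runtime and an illustrative backend preference:

    #include <armnn/ArmNN.hpp>

    armnn::IOptimizedNetworkPtr OptimizeToBf16(const armnn::INetworkPtr& network,
                                               armnn::IRuntime& runtime)
    {
        armnn::OptimizerOptions options;
        options.m_ReduceFp32ToBf16 = true;  // run ConvertFp32NetworkToBf16

        return armnn::Optimize(*network,
                               { armnn::Compute::CpuRef },   // backend preference, illustrative
                               runtime.GetDeviceSpec(),
                               options);
    }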
2020-03-10  IVGCVSW-4482 Remove boost::ignore_unused (Jan Eilers)
!referencetests:229377
Signed-off-by: Jan Eilers <jan.eilers@arm.com>
Change-Id: Ia9b360b4a057fe7bbce5b268092627c09a0dba82
2020-03-03  IVGCVSW-4375 Add support for Transpose to optimizations (Mike Kelly)
* Changed some existing Permutation specific optimizations to also support Transpose
* Added MoveTransposeUp optimization
* Added TransposeAsReshape optimization
* Added tests for Transpose optimizations
* Added missing layer tests for Transpose
Signed-off-by: Mike Kelly <mike.kelly@arm.com>
Change-Id: I20d099b284861402ae94aaa5dbf34907327a485f
2019-12-30  IVGCVSW-4246 Clean build optimizations with -Wextra (Derek Lamberti)
Change-Id: I2e0884c66855071eb3aa72b86de06c6ed6389d50
Signed-off-by: Derek Lamberti <derek.lamberti@arm.com>
2019-11-29  MLCE-143 Fixed driver crash during CTS tests (Mike Kelly)
* Only apply the Optimization when the base ReshapeLayer is connected to the child ReshapeLayer and no other Layer.
Signed-off-by: Mike Kelly <mike.kelly@arm.com>
Change-Id: Iccd676d657f9e7c829813f1bec9c82db8745d069
2019-11-29  IVGCVSW-4209 Create a public API for the ArmNN Utils (Matteo Martincigh)
* Moved the relevant armnnUtils headers to the new location: include/armnnUtils
* Update the header usage throughout the source code
!android-nn-driver:2387
Signed-off-by: Matteo Martincigh <matteo.martincigh@arm.com>
Change-Id: I2ba15cebcacafad2b5a1a7b9c3312ffc585e09d6
2019-11-15  IVGCVSW-4119 Fix FP16 to FP32 fallback mechanism in optimizer to work with Dequantize (Aron Virginas-Tar)
* Check the output data type as well as the input data type when determining whether we should attempt to fall back to FP32 if FP16 is not supported
* Override the output type for Dequantize in IsLayerSupported() instead of the input type
* Updated the original input type from FP16 to FP32 in InsertConvertFp32ToFp16LayersAfter()
Signed-off-by: Aron Virginas-Tar <Aron.Virginas-Tar@arm.com>
Change-Id: Ic6477fd17cea5a91bd8bf9ae0cf836520897d5b7
2019-09-27  NNXSW-1826 Add an optimization step which combines Permute and BatchToSpace into DepthToSpace (Rob Hughes)
This is only possible in some limited cases, but it removes an extra layer from the graph and so should improve performance whenever it applies.
Change-Id: I7b3e6ba5dacb4fdb816ad270edaecda1436ab4cf
Signed-off-by: Rob Hughes <robert.hughes@arm.com>
2019-09-24  NNXSW-1826 OptimizeConsecutiveReshapes: remove unnecessary call to MoveAllConnections (Rob Hughes)
This is called at a time when newReshape has nothing connected to its output slot (as it has just been created) and so is a no-op. The code comment indicated that the intention was to connect the newReshape to its *input*, but that has already been done in the InsertNewLayer() call above, so the comment was incorrect. There is a unit test covering this case ("OptimizeConsecutiveReshapesTest").
Change-Id: I933d5d1c6eb32f5a8269fb5d7c809cd7c89680d1
Signed-off-by: Rob Hughes <robert.hughes@arm.com>
2019-05-13  IVGCVSW-3058 Segmentation fault running Resnetv2.50 tflite int8 model (Matteo Martincigh)
* The execution crashed because the weights of a convolution were null during the network execution
* Copied the weights and bias when folding a pad layer into a convolution
Change-Id: I3ae72143d04cac90d4f878cdf3b1a08b2f2a5c90
Signed-off-by: Matteo Martincigh <matteo.martincigh@arm.com>
2019-04-19  IVGCVSW-2925: Combine Pad with Convolution2d in the Optimizer (Nina Drozd)
* Added new optimization for folding a Pad layer into the Convolution2d layer following it
* Added new test in OptimizerTests.cpp
* Added the new optimization to All optimizations
* Added a call to the new optimization in Optimize in Network.cpp
* Updated CMakeLists.txt
Signed-off-by: Nina Drozd <nina.drozd@arm.com>
Change-Id: I682e07c71bbd42c49c02dda30a848a9ab2b16e7e
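The idea of the fold, in isolation: the spatial padding of the Pad layer is added onto the Convolution2d descriptor's padding fields so the Pad layer can be removed from the graph. An illustrative sketch (not the actual optimization code), assuming an NHWC layout so that dimension 1 is height and dimension 2 is width:

    #include <armnn/Descriptors.hpp>

    armnn::Convolution2dDescriptor FoldPadIntoConv2d(const armnn::PadDescriptor& pad,
                                                     armnn::Convolution2dDescriptor conv)
    {
        conv.m_PadTop    += pad.m_PadList[1].first;   // H: padding before
        conv.m_PadBottom += pad.m_PadList[1].second;  // H: padding after
        conv.m_PadLeft   += pad.m_PadList[2].first;   // W: padding before
        conv.m_PadRight  += pad.m_PadList[2].second;  // W: padding after
        return conv;
    }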
2018-12-11  IVGCVSW-1434 Add debug mode to Optimizer (keidav01)
* Modified optimizer to support debug mode via DebugLayer
Change-Id: Ic8f313778e55540c182cf99876c44a0823be04c6
2018-11-02  IVGCVSW-1946: Remove armnn/src from the include paths (Aron Virginas-Tar)
Change-Id: I663a0a0fccb43ee960ec070121a59df9db0bb04e
2018-10-10  IVGCVSW-1913: Fix for ValidationTest.concat_float_3_relaxed (arovir01)
* Added RefPermuteFloat16Workload to serve as a fallback when CL does not support the required permute configuration for FP16
* Move Half.hpp to armnnUtils as the utils library should not be including private header files from the armnn library
Change-Id: Ibf0f698451e8406f7ed7cce470dab60b6d16361d
2018-10-10  IVGCVSW-1900: CL backend folder structure (David Beck)
* Moving backends/ClWorkloads to backends/cl
* Moving pure Cl workload related code to backends/cl/workloads
Change-Id: I019a3c6b4da5e7a23074bf03fb057e63199ad129
2018-09-17  IVGCVSW-1807: change license text in file headers (David Beck)
All changes are the same:
 //
 // Copyright © 2017 ARM Ltd. All rights reserved.
-// See LICENSE file in the project root for full license information.
+// SPDX-License-Identifier: MIT
 //
Change-Id: I37eae011411133663ca9d2b059714d92f8bf8e24
2018-08-31  Release 18.08 (telsoa01)
2018-03-29  Release 18.03 (surmeh01)
2018-03-09  Release 18.02 (telsoa01)
Change-Id: Id3c11dc5ee94ef664374a988fcc6901e9a232fa6