path: root/tests/InferenceModel.hpp
Age  Commit message  Author
2024-05-13  Add deprecation notices for items to be removed in 24.08.  (Colm Donelan)
* Onnx parser.
* Async execution interface.
* Shim and support library.
* Arm NN converter.
* GpuFsa backend.
Signed-off-by: Colm Donelan <colm.donelan@arm.com>
Change-Id: Ia9adae4da6d9bd2b92a4f4492a022e8337f57f14
2023-04-12  IVGCVSW-7197 Implement Pimpl Idiom for OptimizerOptions  (John Mcloughlin)
Signed-off-by: John Mcloughlin <john.mcloughlin@arm.com> Change-Id: Id4bdc31e3e6f18ccaef232c29a2d2825c915b21c
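As a rough illustration of the Pimpl (pointer-to-implementation) idiom named in the commit above (hypothetical class and member names, not the actual OptimizerOptions code), the public header keeps only an opaque pointer so the data members can change without breaking the ABI:

    // Options.hpp (hypothetical): no data members in the public header.
    #include <memory>

    class Options
    {
    public:
        Options();
        ~Options();
        void SetReduceFp32ToFp16(bool value);
        bool GetReduceFp32ToFp16() const;
    private:
        struct Impl;                    // defined only in the .cpp file
        std::unique_ptr<Impl> m_Impl;   // members can change without touching this header
    };

    // Options.cpp (hypothetical): the implementation detail lives here.
    struct Options::Impl
    {
        bool m_ReduceFp32ToFp16 = false;
    };

    Options::Options() : m_Impl(std::make_unique<Impl>()) {}
    Options::~Options() = default;
    void Options::SetReduceFp32ToFp16(bool value) { m_Impl->m_ReduceFp32ToFp16 = value; }
    bool Options::GetReduceFp32ToFp16() const { return m_Impl->m_ReduceFp32ToFp16; }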
2023-01-06  IVGCVSW-7031 Generate static execute network  (Ryan OShea)
* Build ExecNet lib dependencies as object libs except libarmnn
* Disable PIPE when building static ExecNet
* Remove multiple definition from AsyncExecutionCallback
* Disable DynamicBackend for ExecNet static build
* Disable inference tests for TfLiteParser and ONNX during static ExecNet
* Remove Tensorflow Parser if condition
* Add Disable thread macro to InferenceModel
* Don't compile dynamic backend symbols in Runtime.cpp for Baremetal and ExecNet static
Signed-off-by: Ryan OShea <ryan.oshea3@arm.com>
Change-Id: If41c063eab5f05b3df0a6e064924a36a177f116a
2022-11-16  IVGCVSW-7214 Disable BF16-Turbo-Mode and remove conversion layers  (Ryan OShea)
* Remove Bf16ToFp32 conversion layer
* Remove Fp32ToBf16 conversion layer
* Remove Bf16 conversion tests
* Throw exception if the m_ReduceFp32ToBf16 optimizer option is set to true
* Provide comments on enabling fast math in order to use bf16
* Update docs to inform users to enable fast math for bf16
Execute Network changes:
* Require bf16_turbo_mode to also have fast_math_enabled set to true
* Remove setting of the m_ReduceFp32ToBf16 optimizer option
Signed-off-by: Ryan OShea <ryan.oshea3@arm.com>
Change-Id: Ibaa6da9d29c96a1ce32ff5196b0847fde9f04a1c
2022-10-19  MLCE-545 INT8 TFLite model execution abnormal  (Keith Davis)
* Add functionality to print output tensors to file in tempdir
* UnitTests
Signed-off-by: Keith Davis <keith.davis@arm.com>
Change-Id: Idfb4c186544187db1fecdfca11c662540f645439
2022-07-28  Revert "Revert "IVGCVSW-6650 Refactor ExecuteNetwork""  (Teresa Charlin)
This reverts commit 1a7f033768acb27da11503bd29abb468d2e77f9e.
List of fixes to be able to add this code again:
* "emplacing_back" the vector inputTensors into the vector m_InputTensorsVec outside the for loop
* GetIOInfo() uses IOptimizedNetwork instead of INetwork, where the inferred shapes are not saved
* Add missing data type Signed32 to SetupInputsAndOutputs()
* PrintOutputTensors() prints the actual output without dequantizing
* Add profilingDetailsMethod as an input to networkProperties in the ArmNNExecutor constructor
* Fix typos
Change-Id: I91de166f87228282db3efa27431fe91458834442
Signed-off-by: Teresa Charlin <teresa.charlinreyes@arm.com>
Change-Id: Ic6634d48892d11e5f146cdf285e1e333e93e9937
Signed-off-by: Francis Murtagh <francis.murtagh@arm.com>
2022-07-08  Revert "IVGCVSW-6650 Refactor ExecuteNetwork"  (Nikhil Raj Arm)
This reverts commit 615e06f54a4c4139e81e289991ba4084aa2f69d3.
Reason for revert: breaking nightlies and tests.
Change-Id: I06a4a0119463188a653bb749033f78514645bd0c
2022-07-08  IVGCVSW-6650 Refactor ExecuteNetwork  (Finn Williams)
* Remove InferenceModel
* Add automatic IO type, shape and name configuration
* Deprecate various redundant options
* Add internal output comparison
Signed-off-by: Finn Williams <finn.williams@arm.com>
Change-Id: I2eca248bc91e1655a99ed94990efb8059f541fa9
2022-05-18  IVGCVSW-6929 Support for models with implicit expanded dimensions  (Mike Kelly)
* Added allow-expanded-dims to TFLite parser and ArmNN delegate
* If true, ArmNN will disregard dimensions with a size of 1 when validating tensor shapes. Tensor sizes must still match.
* This allows us to support models where tensors have expanded dimensions (i.e. extra dimensions with a size of 1).
* Fixed bug in Network where it assumed that only the first option could be ShapeInferenceMethod.
* Fixed bug where m_ShapeInferenceMethod was lost when copying or moving Graphs.
* Changed Delegate to pass "infer-output-shape", "allow-expanded-dims" and other BackendOptions through to the Network during construction.
Signed-off-by: Mike Kelly <mike.kelly@arm.com>
Change-Id: Ibe7c5ae6597796fc9164cb07bd372bd7f8f8cacf
2022-02-15  IVGCVSW-6786 Add import if memory aligned option to ExecuteNetwork  (Jim Flynn)
Change-Id: Ib038e7b2616195a64715e3a7126da1368bbca1d3 Signed-off-by: Jim Flynn <jim.flynn@arm.com>
2022-01-27  IVGCVSW-6739 'Issues on Logging API'  (Sadik Armagan)
* Enabled using the same instance of SimpleLogger
* Removed some trailing new lines on some log messages
Signed-off-by: Sadik Armagan <sadik.armagan@arm.com>
Change-Id: I4b917c0ca5011afc9b39dad50715290ba15a1246
2021-10-28  IVGCVSW-6513: Compilation failure in armnn-mobilenet-quant in ML-Examples  (Francis Murtagh)
* Move TContainer to armnnUtils library
Signed-off-by: Francis Murtagh <francis.murtagh@arm.com>
Change-Id: I3c0f895d11b66f6ee224ac689a19d0477f990b98
2021-10-22  IVGCVSW-6359 Create a single definition of TContainer  (David Monahan)
* Added a single definition of TContainer to include/armnn/Utils.hpp
* Change all files which contained their own identical definitions of TContainer to use the new one
Signed-off-by: David Monahan <David.Monahan@arm.com>
Change-Id: I63e633693a430bbbd6a29001cafa19742ef8309a
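Purely as a hedged sketch of what such a single shared alias might look like (the element types are assumptions based on the variant types mentioned elsewhere in this log, not the verbatim Arm NN definition):

    #include <mapbox/variant.hpp>
    #include <cstdint>
    #include <vector>

    // Assumed shape of a shared input/output container alias.
    using TContainer = mapbox::util::variant<std::vector<float>,
                                             std::vector<int>,
                                             std::vector<std::uint8_t>,
                                             std::vector<std::int8_t>>;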
2021-10-18  IVGCVSW-6450 Add Support of Models with Dynamic Batch Tensor to ONNX parser  (Narumol Prangnawarat)
Signed-off-by: Narumol Prangnawarat <narumol.prangnawarat@arm.com> Change-Id: Ia7dbf0735619d406d6b4e34a71f14f20d92586e6
2021-10-15  Profile optimizer in ExecuteNetwork  (Derek Lamberti)
Signed-off-by: Derek Lamberti <derek.lamberti@arm.com> Change-Id: I04fb80c967bba4bb377de419bde618c1cbb80075
2021-08-20  IVGCVSW-6249 Add ProfilingDetails Macros to all workloads in Ref, Neon, CL  (Keith Davis)
* Add functionality to only output network details in ExNet
Signed-off-by: Keith Davis <keith.davis@arm.com>
Change-Id: I0c45e67193f308ce7b86f1bb1a918a266fefba2e
2021-08-10  IVGCVSW-6292 Allow profiling details to be switched off during profiling  (Keith Davis)
* Add switch for network details during profiling
Signed-off-by: Keith Davis <keith.davis@arm.com>
Change-Id: I8bd49fd58f0e0255598106e9ab36806ee78391d6
2021-08-10  IVGCVSW-6289 Separate tensor shape inference and validation calls  (Finn Williams)
* Pass m_shapeInferenceMethod to OptimizerOptions in ExecuteNetwork
Signed-off-by: Finn Williams <Finn.Williams@arm.com>
Change-Id: I90280fb7629092d3b66e8a3968ca9e35a0df854a
2021-08-04  IVGCVSW-5980 JSON profiling output  (Keith Davis)
* Add new ProfilingDetails class to construct operator details string
* Add new macro which helps append layer details to ostream
* Add ProfilingEnabled to NetworkProperties so that profiling can be realised when loading the network
* Add further optional info to WorkloadInfo specific to convolutions
* Generalise some JsonPrinter functions into JsonUtils for reusability
* Remove explicit enabling of profiling within InferenceModel as it is done when loading network
* Add ProfilingDetails macros to ConvolutionWorkloads for validation
Signed-off-by: Keith Davis <keith.davis@arm.com>
Change-Id: Ie84bc7dc667e72e6bcb635544f9ead7af1765690
2021-07-21  NNXSW-3081 Move Filesystem.hpp and Threads.hpp to public include  (Rob Hughes)
!android-nn-driver:5966 Change-Id: Ice0b4d2872bb0e09bfc0763034a206c3a8f24af4 Signed-off-by: Rob Hughes <robert.hughes@arm.com>
2021-06-23  IVGCVSW-6062 Rework the async threadpool  (Finn Williams)
!android-nn-driver:5802
* Extract the threadpool from LoadedNetwork/Runtime
* Refactor the threadpool to handle multiple networks
* Trim IAsyncExecutionCallback and add an InferenceId to AsyncExecutionCallback
* Add AsyncCallbackManager class
Signed-off-by: Finn Williams <Finn.Williams@arm.com>
Change-Id: I36aa2ad29c16bc10ee0706adfeb6b27f60012afb
2021-06-01  IVGCVSW-5833 Move the ProfilingGuid out of Types.hpp to its own header in profiling common  (Nikhil Raj)
!android-nn-driver:5691
Signed-off-by: Nikhil Raj <nikhil.raj@arm.com>
Change-Id: Ib71af0831e324ac6bd27b1a36f4a6ec1a703b14a
2021-05-26  IVGCVSW-6009 Enable creating thread pool with 1 thread  (Kevin May)
* Allow the user to create a thread pool with a single thread
* This is in keeping with how the android-nn-driver was implemented
* Add it to ExecuteNetwork thread pool creation
Signed-off-by: Kevin May <kevin.may@arm.com>
Change-Id: I05b8048a9e0e45ae11d2b585080af28d9d008d81
2021-05-26  IVGCVSW-6009 Integrate threadpool into ExNet  (Kevin May)
* Remove concurrent flag from ExecuteNetwork as it is possible to deduce if SimultaneousIterations > 1
* Add void RunAsync()
* Refactor some unit tests
Change-Id: I7021d4821b0e460470908294cbd9462850e8b361
Signed-off-by: Keith Davis <keith.davis@arm.com>
Signed-off-by: Kevin May <kevin.may@arm.com>
2021-04-29  IVGCVSW-5819 5820 5821 Add MemorySourceFlags to TensorHandleFactoryRegistry::GetFactory  (Francis Murtagh)
* Modify Layer::CreateTensorHandles to include MemorySource
* Modify INetworkProperties to add MemorySource
* Disable Neon/Cl fallback tests until full import implementation complete
Change-Id: Ia4fff6ea3d4bf6afca33aae358125ccaec7f9a38
Signed-off-by: Francis Murtagh <francis.murtagh@arm.com>
2021-04-29  IVGCVSW-5775 'Add Async Support to ExecuteNetwork'  (Sadik Armagan)
* Enabled async mode with '-n, concurrent' and 'simultaneous-iterations' in ExecuteNetwork
* The number of input files provided (comma-separated) should equal the number of inputs multiplied by the number of simultaneous iterations
!armnn:5443
Signed-off-by: Sadik Armagan <sadik.armagan@arm.com>
Change-Id: Ibeb318010430bf4ae61a02b18b1bf88f3657774c
2021-04-16  IVGCVSW-5720 Remove the Caffe Parser from ArmNN  (Nikhil Raj)
Signed-off-by: Nikhil Raj <nikhil.raj@arm.com> Change-Id: Ib00be204f549efa9aa5971ecf65c2dec4a10b10f
2021-04-07  Fix graph copy memory spike  (Finn Williams)
* Change layer storage of ConstTensors to std::shared_ptr<ConstCpuTensorHandle>
* Change clone to share ConstTensor rather than copy
* Remove uses of non-const GetTensor() call
* Reduce scope of non-optimized network in ExeNet, so memory can be released after use
Signed-off-by: Finn Williams <Finn.Williams@arm.com>
Change-Id: Ibb2c7309d12411d21405bd6024c76bcdf5404545
2021-03-03  IVGCVSW-5612 Fix tiny_wav2letter_relu_fixed_int8 delegate output  [experimental/abi-tests]  (Finn Williams)
* Fix delegate per-channel quantization
* Change delegate to check reshape options before inputs
* Add int8 "qsymms8" option to ExecuteNetwork
* Add option to run ExecuteNetwork on tflite without the delegate
!referencetests:301301
Signed-off-by: Finn Williams <Finn.Williams@arm.com>
Change-Id: If3e12599b17aff1199d7ab0a55e1c901e480083d
2021-02-15  IVGCVSW-5686 Add GpuAcc MLGO tuning file configuration argument  (Finn Williams)
Signed-off-by: Finn Williams <Finn.Williams@arm.com> Change-Id: I3f320499c379162f9d1b00cc8816bd144cd7eee4
2021-02-12  IVGCVSW-5685 Add CpuAcc specific configuration option numberOfThreads  (Matthew Sloyan)
* Added ability to set number of threads used in CpuAcc backend
* Enabled number-of-threads option in ExecuteNetwork
* Added TfLiteDelegate ModelOptions test
* Added unsigned int type to BackendOptions.hpp
Signed-off-by: Matthew Sloyan <matthew.sloyan@arm.com>
Change-Id: Ia576d4f45cbe5df3654bc730bb5ebd5181d82b5a
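As a hedged sketch of the model-option route the bullets describe (the "NumberOfThreads" key and the unsigned value type are assumptions drawn from the bullets above, so check them against your Arm NN release):

    #include <armnn/BackendOptions.hpp>
    #include <armnn/INetwork.hpp>

    int main()
    {
        // Ask the CpuAcc backend for four worker threads via a backend model option.
        armnn::OptimizerOptions optimizerOptions;
        armnn::BackendOptions cpuAccOptions("CpuAcc",
                                            {{"NumberOfThreads", static_cast<unsigned int>(4)}});
        optimizerOptions.m_ModelOptions.push_back(cpuAccOptions);
        return 0;
    }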
2021-01-29  IVGCVSW-5484 Add Network loading time to InferenceModel  (Matthew Sloyan)
* Added output log to capture time taken to load network into runtime.
* This time is cut down when loading a cached network.
Signed-off-by: Matthew Sloyan <matthew.sloyan@arm.com>
Change-Id: I043c177f17d01df35fbe0752ec5d77e350749164
2021-01-12  IVGCVSW-5484 Add CacheLoadedNetwork options to ExecuteNetwork  (Matthew Sloyan)
* Enable ability to save/load ClContext in ExecuteNetwork.
Signed-off-by: Matthew Sloyan <matthew.sloyan@arm.com>
Change-Id: I58c61a53f6713853eb06520cc372ed47baf7f8c4
2020-10-30  IVGCVSW-5265 Removing more Boost references from test executables.  (Colm Donelan)
* Removed unused includes from InferenceModel.hpp.
* Replaced use of boost multi-array with vectors in YoloInferenceTest.
Signed-off-by: Colm Donelan <Colm.Donelan@arm.com>
Change-Id: Ieadf3471ed170b09859187c83616c8e249f94543
2020-10-14  IVGCVSW-5280 Switch tests/InferenceTest and derived tests over to cxxopts  (James Ward)
* refactor AddCommandLineOptions() functions to allow checking of required options
* add CxxoptsUtils.hpp file for convenience functions
!referencetests:268500
Signed-off-by: James Ward <james.ward@arm.com>
Change-Id: Ica954b210b2981b7cd10995f0d75fcb2a2f7b443
2020-09-30  IVGCVSW-4519 Remove Boost Variant and apply_visitor variant  (James Ward)
* replace boost::variant with mapbox::util::variant
* replace boost::apply_visitor with mapbox::util::apply_visitor
* replace boost::get with mapbox::util::get
Signed-off-by: James Ward <james.ward@arm.com>
Change-Id: I38460cabbcd5e56d4d61151bfe3dcb5681ce696e
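A minimal, self-contained sketch of the mapbox call pattern named in the bullets above (the value types and the Printer visitor are illustrative only):

    #include <mapbox/variant.hpp>
    #include <iostream>
    #include <string>

    struct Printer
    {
        void operator()(int value) const               { std::cout << "int: " << value << "\n"; }
        void operator()(const std::string& text) const { std::cout << "string: " << text << "\n"; }
    };

    int main()
    {
        mapbox::util::variant<int, std::string> value = std::string("hello");
        mapbox::util::apply_visitor(Printer{}, value);               // replaces boost::apply_visitor
        std::cout << mapbox::util::get<std::string>(value) << "\n";  // replaces boost::get
        return 0;
    }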
2020-09-15  IVGCVSW-5317 'Add enable_fast_math Option to ExecuteNetwork'  (Sadik Armagan)
Signed-off-by: Sadik Armagan <sadik.armagan@arm.com> Change-Id: I4eb3e27837aea926593d49f9ccea07bab8388d5b
2020-09-11  IVGCVSW-5299 Remove some boost::numeric_cast from armnn/tests  (Matthew Sloyan)
* Replaced with armnn/utility/NumericCast.hpp
* Removed combinations without float implementation in NumericCast.hpp
Signed-off-by: Matthew Sloyan <matthew.sloyan@arm.com>
Change-Id: Ia4ec605f063cdb0071fff302ef48c610f9f9505e
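A small hedged sketch of the replacement call (assuming armnn::numeric_cast in armnn/utility/NumericCast.hpp mirrors the boost::numeric_cast calling convention):

    #include <armnn/utility/NumericCast.hpp>
    #include <cstddef>

    int main()
    {
        const std::size_t elementCount = 128;
        // Checked narrowing conversion in place of boost::numeric_cast.
        const unsigned int count = armnn::numeric_cast<unsigned int>(elementCount);
        return (count == 128u) ? 0 : 1;
    }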
2020-09-10  IVGCVSW-5293 Remove boost::format from armnn/tests  (James Ward)
* Replaced boost::format with fmt::format
Signed-off-by: James Ward <james.ward@arm.com>
Change-Id: Icf5a6508e7be3d31bc063643491fc5e0607f21fa
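For reference, a minimal sketch of the fmt call pattern that stands in for boost::format (names and values are illustrative):

    #include <fmt/format.h>
    #include <iostream>
    #include <string>

    int main()
    {
        // Brace placeholders replace the %1%/%2% positional syntax of boost::format.
        const std::string message = fmt::format("Tensor '{}' has {} elements", "input_0", 150528);
        std::cout << message << "\n";
        return 0;
    }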
2020-07-29  IVGCVSW-4980 Introduce InferAndValidate option to ExecuteNetwork for parsers  (Sadik Armagan)
* Introduced infer-output-shape option to TfLiteParser in ExecuteNetwork app
!armnn:3591
Signed-off-by: Sadik Armagan <sadik.armagan@arm.com>
Change-Id: I30bd5e51ac2b6759169e22a44586fd97986f2402
2020-06-30  IVGCVSW-4487 Remove boost::filesystem  (Francis Murtagh)
* Replace filesystem::path
* Replace filesystem::exists
* Replace filesystem::is_directory
* Replace filesystem::directory_iterator
* Replace filesystem::filesystem_error exception
* Replace filesystem::temp_directory_path
* Replace filesystem::unique_path
* Replace filesystem::ofstream with std::ofstream
* Replace filesystem::remove
* Replace filesystem::is_regular_file
* Replace boost::optional with armnn::Optional in touched files
* Remove some superfluous includes
* Update build guides, GlobalConfig.cmake and CMakeLists.txt
* Remove redundant armnnUtils::Filesystem::Remove function.
* Remove redundant armnnUtils::Filesystem::GetFileSize function.
Temporarily adding back Boost::filesystem to enable Boost::dll.
Signed-off-by: Francis Murtagh <francis.murtagh@arm.com>
Signed-off-by: Colm Donelan <Colm.Donelan@arm.com>
Change-Id: Ifa46d4a0097d2612ddacd8e9736c0b36e365fb11
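As a hedged illustration of the std::filesystem-style calls that stand in for the Boost ones listed above (Arm NN may route some of these through its own Filesystem utilities, so the direct std:: calls here are an assumption):

    #include <filesystem>
    #include <fstream>

    namespace fs = std::filesystem;

    int main()
    {
        const fs::path filePath = fs::temp_directory_path() / "armnn_example.txt";

        if (!fs::exists(filePath) || !fs::is_regular_file(filePath))
        {
            std::ofstream stream(filePath);   // replaces filesystem::ofstream
            stream << "example contents\n";
        }

        fs::remove(filePath);                 // replaces filesystem::remove
        return 0;
    }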
2020-05-22  Adding more performance metrics  (alered01)
* Implemented CLTuning flow for ExecuteNetwork tests
* Added --tuning-path to specify tuning file to use/create
* Added --tuning-level to specify tuning level to use as well as enable extra tuning run to generate the tuning file
* Fixed issue where TuningLevel was being parsed incorrectly
* Added measurements for initialization, network parsing, network optimization, tuning, and shutdown
* Added flag to control number of iterations inference is run for
Signed-off-by: alered01 <Alex.Redshaw@arm.com>
Change-Id: Ic739ff26e136e32aff9f0995217c1c3207008ca4
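A generic sketch of the kind of wall-clock measurement those extra metrics imply (plain std::chrono, not the exact helper used by the tests):

    #include <chrono>
    #include <iostream>

    int main()
    {
        const auto start = std::chrono::high_resolution_clock::now();
        // ... parse, optimize and load the network here ...
        const auto stop = std::chrono::high_resolution_clock::now();

        const double milliseconds =
            std::chrono::duration<double, std::milli>(stop - start).count();
        std::cout << "Network optimization time: " << milliseconds << " ms\n";
        return 0;
    }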
2020-04-20  IVGCVSW-4513 Remove boost/algorithm/string *  (David Monahan)
* Removed split, classification, trim, string, join, contains
* Added StringUtils.hpp to replace the removed Boost String functionality
Signed-off-by: David Monahan <david.monahan@arm.com>
Change-Id: I8aa938dc3942cb65c512cccb2c069da66aa24668
2020-04-06  IVGCVSW-4485 Remove Boost assert  (Narumol Prangnawarat)
* Change boost assert to armnn assert
* Change include file to armnn assert
* Fix ARMNN_ASSERT_MSG issue with multiple conditions
* Change BOOST_ASSERT to BOOST_TEST where appropriate
* Remove unused include statements
Signed-off-by: Narumol Prangnawarat <narumol.prangnawarat@arm.com>
Change-Id: I5d0fa3a37b7c1c921216de68f0073aa34702c9ff
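A short hedged sketch of the armnn assert macros the bullets refer to (the armnn/utility/Assert.hpp header path is assumed):

    #include <armnn/utility/Assert.hpp>
    #include <vector>

    int main()
    {
        std::vector<float> buffer(16, 0.0f);

        ARMNN_ASSERT(!buffer.empty());                                    // replaces BOOST_ASSERT
        ARMNN_ASSERT_MSG(buffer.size() == 16, "unexpected buffer size");  // replaces BOOST_ASSERT_MSG
        return 0;
    }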
2020-04-03  IVGCVSW-4514 Remove lexical_cast.hpp  (David Monahan)
Signed-off-by: David Monahan <david.monahan@arm.com> Change-Id: I992379f03d1cfe3c019bb23786458d4f22df6b17
2020-03-24  IVGCVSW-4521 Add bf16-turbo-mode option to ExecuteNetwork  (Narumol Prangnawarat)
Signed-off-by: Narumol Prangnawarat <narumol.prangnawarat@arm.com> Change-Id: I57ec47adf98680254fa481fb91d5a98dea8f032e
2019-12-05  Replace boost logging with simple logger  (Derek Lamberti)
!referencetests:214319
* Reduces Arm NN binary size by ~15%
* Also fixed test logging black hole issues
Change-Id: Iba27db304d9a8088fa46aeb0b52225d93bb56bc8
Signed-off-by: Derek Lamberti <derek.lamberti@arm.com>
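A hedged sketch of the simple logger's call pattern (the ARMNN_LOG macro, the ConfigureLogging entry point and its argument order are assumptions; check armnn/Logging.hpp for the exact signature):

    #include <armnn/Logging.hpp>

    int main()
    {
        // Assumed argument order: print to stdout, print to debug output, minimum severity.
        armnn::ConfigureLogging(true, false, armnn::LogSeverity::Info);

        ARMNN_LOG(info) << "Network loaded";
        ARMNN_LOG(warning) << "Falling back to the reference backend";
        return 0;
    }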
2019-12-02  IVGCVSW-4206 Optionally parse unsupported ops in ExecuteNetwork  (Derek Lamberti)
Change-Id: I593e2540bd870d70aabb2c959f4e63a899967269 Signed-off-by: Derek Lamberti <derek.lamberti@arm.com>
2019-10-31  GitHub #292 Move BackendRegistry.hpp to the public API  (Matteo Martincigh)
* Moved BackendRegistry.hpp to include/armnn
* Updated makefiles and sources accordingly
Signed-off-by: Matteo Martincigh <matteo.martincigh@arm.com>
Change-Id: I4d83abb581d523218a880c879fcf30c9611f7fd7
2019-10-25  IVGCVSW-4008 Add profiling mode to ExecuteNetwork  (Aron Virginas-Tar)
* Removed the requirement for specifying a data file for each input tensor
* Added the possibility to generate dummy tensor data (filled with 0s) if no data files are specified by the user
* Warn the user when they request to save the output to a file but the input was generated, therefore rendering the output useless
Signed-off-by: Aron Virginas-Tar <Aron.Virginas-Tar@arm.com>
Change-Id: I8baed116dcd99fe380e419db322dc7e04ab1c653