Age | Commit message | Author |
|
!android-nn-driver:9431
Signed-off-by: Jim Flynn <jim.flynn@arm.com>
Change-Id: I58143445b5c5cf2aafd0838156c9543adce21e6a
|
|
Signed-off-by: John Mcloughlin <john.mcloughlin@arm.com>
Change-Id: Id4bdc31e3e6f18ccaef232c29a2d2825c915b21c
|
|
The initial model load and tensor allocation operations against the
TfLiteInterpreter were not checking return codes, resulting in
segmentation faults.
Signed-off-by: Colm Donelan <colm.donelan@arm.com>
Change-Id: I812785f0af9012c97570065d200f72eaf781165a
|
|
The -A -B -C options in ExecuteNetwork were attempting to calculate
the RMS error over output tensors. However, the calculation was mixing
tensor elements and bytes. This patch changes the calculation to a
per-byte RMS error calculation.
Signed-off-by: Colm Donelan <colm.donelan@arm.com>
Change-Id: If30230a16cfed1a8804b4d54ed1abcd371f26664
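The per-byte calculation described above can be sketched as follows. This is an illustrative helper, not the actual ExecuteNetwork code; the function name is hypothetical:

```cpp
#include <cmath>
#include <cstddef>
#include <cstdint>

// Hypothetical helper mirroring the fix: compute the RMS error over the raw
// bytes of two tensor buffers, never mixing element counts with byte counts.
double ByteRmsError(const void* expected, const void* actual, std::size_t numBytes)
{
    const auto* a = static_cast<const std::uint8_t*>(expected);
    const auto* b = static_cast<const std::uint8_t*>(actual);
    double sumSquared = 0.0;
    for (std::size_t i = 0; i < numBytes; ++i)
    {
        const double diff = static_cast<double>(a[i]) - static_cast<double>(b[i]);
        sumSquared += diff * diff;
    }
    return std::sqrt(sumSquared / static_cast<double>(numBytes));
}
```

Working on raw bytes keeps the divisor and the summed terms in the same unit, which is the mismatch the patch removed.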
|
|
ProfilingOptions is not used in DelegateOptions. Instead the parameters
are passed in through the RuntimeOptions. This is done in ExecuteNetwork
and TfliteExecutor.
Signed-off-by: Colm Donelan <colm.donelan@arm.com>
Change-Id: Iaab3d4ef277c47e1ff82a51ba2648f5f51ec3e2c
|
|
* When the output of a network is a boolean from a comparison layer,
ExecuteNetwork was missing the data type when writing the output tensor.
Signed-off-by: Ryan OShea <ryan.oshea3@arm.com>
Change-Id: Id9c1609462395a68e8c1842c77a4a033a10f74e8
|
|
* When the TfLiteExecutor attempted to populate the input tensors it did
not check whether a tensor was constant. This was causing
segmentation faults.
Signed-off-by: Colm Donelan <colm.donelan@arm.com>
Change-Id: I80a4cc788de4ffe08afb2df9185d04fcb8b27c3a
|
|
* Check if BuildExecutor returns null in ExecuteNetwork.
* Check if tflite BuildFromFile returns null in TfliteExecutor.
Signed-off-by: Colm Donelan <colm.donelan@arm.com>
Change-Id: I42b6e5f26dfd127dd16b6b322184900846317c41
|
|
one works fine
* All ArmNNExecutors now share a single IRuntime.
* All armnn_delegates now share a single IRuntime.
* Increased delegate major version.
Signed-off-by: Mike Kelly <mike.kelly@arm.com>
Change-Id: I95cbdc32655ec0beb476dbb2d60f1a0209df8f04
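The single-shared-runtime pattern above can be sketched with a function-local static. FakeRuntime and the accessor are illustrative stand-ins, not ArmNN's IRuntime API:

```cpp
#include <memory>

// Stand-in for armnn::IRuntime; only illustrates the sharing pattern.
struct FakeRuntime
{
    int m_NetworksLoaded = 0;
};

std::shared_ptr<FakeRuntime> GetSharedRuntime()
{
    // One instance is created on first use and handed to every caller,
    // so multiple executors or delegates never spin up duplicate runtimes.
    static std::shared_ptr<FakeRuntime> runtime = std::make_shared<FakeRuntime>();
    return runtime;
}
```

Every call returns a pointer to the same underlying object, which is why sharing it across executors and delegates is an ABI-visible change (hence the delegate major version bump).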
|
|
Signed-off-by: Cathal Corbett <cathal.corbett@arm.com>
Change-Id: Ic6bbbaa04aabaa5c3fd525acd5121f07d3392120
|
|
* Build ExecNet lib dependencies as object libs except libarmnn
* Disable PIPE when building static ExecNet
* Remove multiple definition from AsyncExecutionCallback
* Disable DynamicBackend for ExecNet Static build
* Disable inference tests for TfLiteParser and ONNX during static ExecNet
* Remove Tensorflow Parser if condition
* Add Disable thread macro to InferenceModel
* Don't compile dynamic backend symbols in Runtime.cpp for Baremetal and
ExecNet Static
Signed-off-by: Ryan OShea <ryan.oshea3@arm.com>
Change-Id: If41c063eab5f05b3df0a6e064924a36a177f116a
|
|
Two problems here:
* First the Delegate was using the parameter options after the execution
of std::move on it.
* In ExecuteNetworkParams 3 GPU backend options were instead being set as
optimizer options.
Signed-off-by: Colm Donelan <colm.donelan@arm.com>
Change-Id: I61c7fad8a5819a0a4aec0243899019a342c5cc5f
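The first bug is the classic use-after-move pitfall. A minimal sketch of the correct ordering, with hypothetical names rather than the real DelegateOptions API:

```cpp
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// Illustrative sketch of the fix: read everything needed from 'options'
// BEFORE std::move, because a moved-from object's state is valid but
// unspecified, so reading it afterwards is a bug.
struct Delegate
{
    explicit Delegate(std::vector<std::string> options)
    {
        m_OptionCount = options.size();   // read first...
        m_Options = std::move(options);   // ...then move
        // Any further reads of 'options' here would reproduce the bug.
    }

    std::size_t m_OptionCount = 0;
    std::vector<std::string> m_Options;
};
```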
|
|
Move call to 'SetupInputAndOutputs' to after LoadedNetwork is available.
Change-Id: I101e297d1d7b2517011d4ef3f1a4927566845474
Signed-off-by: Matthew Bentham <matthew.bentham@arm.com>
|
|
- Remove Bf16ToFp32 Conversion Layer
- Remove Fp32ToBf16 Conversion Layer
- Remove B16 Conversion tests
* Throw exception if m_ReduceFp32ToBf16 optimizer option is set to true
* Provide comments to enable fast math in order to use bf16
* Update docs to inform users to enable fast math for bf16
Execute Network Changes
* Require bf16_turbo_mode to also have fast_math_enabled set to true
- Remove setting m_ReduceFp32ToBf16 optimizer option
Signed-off-by: Ryan OShea <ryan.oshea3@arm.com>
Change-Id: Ibaa6da9d29c96a1ce32ff5196b0847fde9f04a1c
|
|
Signed-off-by: Kevin May <kevin.may@arm.com>
Change-Id: If837e4bec7940b53d18d0da32f3e736215dd2a03
|
|
Signed-off-by: Teresa Charlin <teresa.charlinreyes@arm.com>
Change-Id: I2fd0f6aac6ffff695b17df7455f252f6013c0d43
|
|
* The intention is to keep the flexibility given by the ExNet before the refactor.
* When there are more iterations than input files, the files are reused in order.
* When there are fewer iterations than input files, the extra files are discarded.
Signed-off-by: Adam Jalkemo <adam.jalkemo@arm.com>
Signed-off-by: Teresa Charlin <teresa.charlinreyes@arm.com>
Change-Id: I2fbe69f8affe0e3a5cc86fc1748164967f0c2d64
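The selection rule above amounts to modulo indexing. A minimal sketch with illustrative names:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Iteration i uses file i % N: files repeat in order when there are more
// iterations than files, and trailing files are simply never used when
// there are fewer. Names are illustrative, not the ExecuteNetwork code.
std::string FileForIteration(const std::vector<std::string>& inputFiles,
                             std::size_t iteration)
{
    return inputFiles[iteration % inputFiles.size()];
}
```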
|
|
not match"
This reverts commit 6c95836e894f88c4bab6b22f974341f0dd2dddaa.
Signed-off-by: Teresa Charlin <teresa.charlinreyes@arm.com>
Change-Id: I8be2147feb557a0849de5785fb63b464abc7dbb9
|
|
Signed-off-by: Cathal Corbett <cathal.corbett@arm.com>
Change-Id: Ib30fc633a10b6ff8090b50314278fe5dc46fb250
|
|
* Add functionality to print output tensors to file in tempdir
* UnitTests
Signed-off-by: Keith Davis <keith.davis@arm.com>
Change-Id: Idfb4c186544187db1fecdfca11c662540f645439
|
|
Signed-off-by: Jim Flynn <jim.flynn@arm.com>
Change-Id: I3573078206272c3a72a2b3acf8781ab458ea6c90
|
|
* When a model with multiple outputs was used and output to file, e.g.
with "-w ./boxes,./classes,./scores,./detection", the results were
not saved in the correct files.
* Applies only to the ArmNNExecutor.
Change-Id: I2899322622a4c3fd1d0ddc75b100b81669417660
|
|
* There were issues when the folder name contained "armnn" and
a .tflite model was used, as the wrong parser was selected.
* Now only the extension, and not the full path string, is
considered when selecting the parser.
Change-Id: If7964d2ce5535f7d25762d2a2d7e810bf1a1ed43
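Extension-based selection can be sketched as below. The function and return values are illustrative, not ArmNN's parser factory:

```cpp
#include <string>

// Only the file extension is inspected, so a path such as
// "armnn_models/foo.tflite" no longer selects the wrong parser just
// because the directory name contains "armnn".
std::string ParserForModel(const std::string& path)
{
    const auto dot = path.find_last_of('.');
    const std::string ext = (dot == std::string::npos) ? "" : path.substr(dot);
    if (ext == ".tflite") { return "tflite"; }
    if (ext == ".onnx")   { return "onnx"; }
    return "armnn";
}
```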
|
|
* Some CL kernels are not run after the first inference, and this breaks
the profiler, which expects a measurement for every kernel on each run.
* Add a function HasKernelMeasurements() to ascertain if the Event is
returning kernel measurements and if so insert 0.0 values for any missing
kernel measurements.
* Fix ExecuteNetwork to only print a json object after all inferences
have completed
Signed-off-by: Kevin May <kevin.may@arm.com>
Change-Id: I99f2bb0db847f5a52ab4c5705b072155c6b6f333
|
|
* Signed32 missing from CompareAndPrintOutput
Signed-off-by: Keith Davis <keith.davis@arm.com>
Change-Id: If3c93fb0d73c566ddcf439fceaa6d629029df18f
|
|
In ArmNNExecutor::ArmNNExecutor the call to m_Runtime->LoadNetwork was
ignoring the Status result and continuing to execute with a failed
network. In addition throwing an exception from the constructor resulted
in a segmentation fault.
* Modify IExecutor to allow the constructor to mark itself as failed.
* Modify ArmNNExecutor to mark itself as failed when LoadNetwork returns
an error.
* Modify ExecuteNetwork to check the value of m_constructionFailed.
Signed-off-by: Colm Donelan <colm.donelan@arm.com>
Change-Id: Idf222cb2b66e1051875dc67046734f2b00b288d1
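The failure-flag pattern described in the bullets can be sketched as follows. The class and member names mirror the description but are illustrative, not the real IExecutor interface:

```cpp
// Instead of throwing from the constructor (which segfaulted here), the
// executor records the failure in a member and the caller checks it
// before using the object.
class Executor
{
public:
    explicit Executor(bool loadNetworkSucceeded)
    {
        if (!loadNetworkSucceeded)
        {
            m_ConstructionFailed = true;  // mark, don't throw
        }
    }

    bool HasConstructionFailed() const { return m_ConstructionFailed; }

private:
    bool m_ConstructionFailed = false;
};
```

The caller (ExecuteNetwork in the real fix) then bails out cleanly when the flag is set rather than executing a half-constructed executor.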
|
|
* The model was declared in the TfLiteExecutor constructor instead of initializing m_Model.
* Working with a model that has 4 outputs we saw that the output names were not correct; this got fixed too.
Signed-off-by: Teresa Charlin <teresa.charlinreyes@arm.com>
Change-Id: I48f194ad4ba6af43d43e6eea336eb87ffee02dcc
|
|
Signed-off-by: Cathal Corbett <cathal.corbett@arm.com>
Change-Id: If00d8dab2846c484a1969fb152cb9f8bd16e1b3e
|
|
* dot file is now generated when -v is given. It was only being generated when using the delegate as executor
* output name read from m_Params.m_OutputNames instead of m_TfLiteInterpreter
* typo: "delage" instead of "delegate"
* QAsymmS8 templated as int8, instead of uint8
Change-Id: Ie13ae0f7e6395c0ebcb5ecda32e72082dee8aa6c
Signed-off-by: Teresa Charlin <teresa.charlinreyes@arm.com>
Change-Id: Iac97a23927ba42290ebeb3446bbd36da15045e07
|
|
This reverts commit 1a7f033768acb27da11503bd29abb468d2e77f9e.
List of fixes to be able to add this code again:
* "emplacing_back" the vector inputTensors into the vector m_InputTensorsVec outside the for loop
* GetIOInfo() uses IOptimizedNetwork instead of INetwork, where the inferred shapes are not saved
* Add missing data type Signed32 to SetupInputsAndOutputs()
* PrintOutputTensors() prints the actual output without dequantizing
* Add profilingDetailsMethod as input in networkProperties in ArmNNExecutor constructor
* Fix typos
Change-Id: I91de166f87228282db3efa27431fe91458834442
Signed-off-by: Teresa Charlin <teresa.charlinreyes@arm.com>
Change-Id: Ic6634d48892d11e5f146cdf285e1e333e93e9937
Signed-off-by: Francis Murtagh <francis.murtagh@arm.com>
|
|
* "Asynchronous Execution with std::launch:async..."
* "Asynchronous Execution with Arm NN thread pool..."
Signed-off-by: Teresa Charlin <teresa.charlinreyes@arm.com>
Change-Id: I93f6ae92fd5599d1042f0dfced7e90ef85e20463
|
|
* Remove ARMNN_TF_LITE_DELEGATE and DARMNN_TF_LITE_DELEGATE
Signed-off-by: Teresa Charlin <teresa.charlinreyes@arm.com>
Change-Id: I3fc08da3fa0b733e6791c42f6bc59494f2bc26a6
|
|
This reverts commit 615e06f54a4c4139e81e289991ba4084aa2f69d3.
Reason for revert: <Breaking nightlies and tests>
Change-Id: I06a4a0119463188a653bb749033f78514645bd0c
|
|
* Remove InferenceModel
* Add automatic IO type, shape and name configuration
* Deprecate various redundant options
* Add internal output comparison
Signed-off-by: Finn Williams <finn.williams@arm.com>
Change-Id: I2eca248bc91e1655a99ed94990efb8059f541fa9
|
|
(CpuRef)
* Fixed bug occurring in Ref Gather Workload.
Signed-off-by: Cathal Corbett <cathal.corbett@arm.com>
Change-Id: I3ee79f475fd9909bfbd4afb58f698439f26d6d65
|
|
dimensions
* Added allow-expanded-dims to TFLite parser and ArmNN delegate
* If true ArmNN will disregard dimensions with a size of 1 when
validating tensor shapes. Tensor sizes must still match.
* This allows us to support models where tensors have expanded
dimensions (i.e. extra dimensions with a size of 1).
* Fixed bug in Network where it assumed that only the first option
could be ShapeInferenceMethod.
* Fixed bug where m_ShapeInferenceMethod was lost when copying or
moving Graphs.
* Changed Delegate to pass "infer-output-shape", "allow-expanded-dims"
and other BackendOptions through to the Network during construction.
Signed-off-by: Mike Kelly <mike.kelly@arm.com>
Change-Id: Ibe7c5ae6597796fc9164cb07bd372bd7f8f8cacf
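The allow-expanded-dims comparison can be sketched as squeezing out unit dimensions before matching. This is an illustrative helper, not ArmNN's validation code:

```cpp
#include <vector>

// Dimensions of size 1 are disregarded when matching shapes, so {1,2,3}
// matches {2,3,1}, while differing non-unit dimensions (and thus differing
// element counts) still fail.
bool ShapesMatchIgnoringUnitDims(const std::vector<unsigned int>& a,
                                 const std::vector<unsigned int>& b)
{
    auto squeeze = [](const std::vector<unsigned int>& dims)
    {
        std::vector<unsigned int> out;
        for (unsigned int d : dims)
        {
            if (d != 1) { out.push_back(d); }
        }
        return out;
    };
    return squeeze(a) == squeeze(b);
}
```

Because only size-1 dimensions are dropped, the total number of elements is unchanged by the squeeze, satisfying the "tensor sizes must still match" requirement.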
|
|
* Add shorthand argument for no print
* Add Execute network option to reuse buffers
* Add new synchronous execute method to reuse buffers
Signed-off-by: Ryan OShea <ryan.oshea3@arm.com>
Change-Id: Ia7ee99b2ba9a21043c9575d7546bf25208357141
|
|
* Updated ABI version to 29 due to being the first ABI break in 22.05
!android-nn-driver:7226
Signed-off-by: Cathal Corbett <cathal.corbett@arm.com>
Change-Id: I9c50007dcd5b5e792757e7bd1213606df5ffec36
|
|
Change-Id: Ib038e7b2616195a64715e3a7126da1368bbca1d3
Signed-off-by: Jim Flynn <jim.flynn@arm.com>
|
|
Change-Id: I30a46f3368bbbf33019eac4fa1245f6ff69deacd
Signed-off-by: Jim Flynn <jim.flynn@arm.com>
|
|
* Change to check for success instead of specific failure
* Fix which map index is used when assigning outputs
Signed-off-by: Ryan OShea <ryan.oshea3@arm.com>
Change-Id: I13d8e989e35789ad3e2465d595905c5a5603ae0f
|
|
Signed-off-by: Finn Williams <finn.williams@arm.com>
Change-Id: Ic5ebf7b80468b7751c234c43a90ec4cbf4c59ffe
|
|
* Created an individual IRuntime shared pointer in ExecuteNetwork main() each time
MainImpl() is called. This prevents an additional runtime being created when the
delegate is used.
Signed-off-by: Cathal Corbett <cathal.corbett@arm.com>
Change-Id: Ia4b508fbf2bbd25467c6235fed2f05662a7aecc0
|
|
* Adds ExecuteNetwork when building the delegate only
* Adds timings to delegate subgraph creation
* Adds execution times
Signed-off-by: Jan Eilers <jan.eilers@arm.com>
Change-Id: Ieff2f67ea8dbb6c2a708f8810e84a20485b7a631
|
|
Deep_speech models
* Fixed output bindings in ExecuteNetwork when using delegate
* Added extra comments (for my own comprehension)
Change-Id: Ia1330903a2b724be7d637f6c5b6e4349e7d33e2e
Signed-off-by: Tamas Nyiri <tamas.nyiri@arm.com>
|
|
This reverts commit 2d9956162dd002a41f7fb4fa6753195d33524c7f.
Reason for revert: After some discussion, this does technically implement Float16 support for ExecuteNetwork, but not in a way which matches most use cases and is likely to cause issues in the future. Reverting for now.
Change-Id: I4ce6de6879216e694631f5dc68e46fb793fae0a9
|
|
* Allows the user to specify float16 as a datatype
* Does not contain support for float16 on the TfLiteDelegate via
ExecuteNetwork
Signed-off-by: David Monahan <David.Monahan@arm.com>
Change-Id: Icba56feedab32662e2cf671cc46ada899cf40c6c
|
|
* In ExecuteNetwork MainImpl compare the data types of outputs on the
loaded model with those specified by the user through --output-type.
Issue a warning if there is a mismatch.
Signed-off-by: Colm Donelan <colm.donelan@arm.com>
Change-Id: Ic5add9734dc239eddca0972a9e560e54abdb1093
|
|
* Move TContainer to armnnUtils library
Signed-off-by: Francis Murtagh <francis.murtagh@arm.com>
Change-Id: I3c0f895d11b66f6ee224ac689a19d0477f990b98
|
|
* Pass through the value of m_EnableProfiling from ExecuteNetwork to
DelegateOptions.
* If internal profiling is enabled print it out from inside the delegate.
* Remove an unnecessary ProfilerImpl instance from WorkingMemhandle.hpp
* Remove an unnecessary parameter from TfLiteDelegateMainImpl in
ExecuteNetwork.
Signed-off-by: Colm Donelan <colm.donelan@arm.com>
Change-Id: Ia1d4b1eb3a05ca5b4d80cc39e138c7fac182d948
|