Age | Commit message | Author |
|
* Move the Conv2D and DepthwiseConv2D validation to the Optimization level
when the weights and tensors are provided as constant inputs
* Take the offset and scale values into account when doing INT8 to FP32 dequantization (see the sketch below)
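A minimal sketch of the per-tensor dequantization arithmetic referred to above (illustrative only, not the Arm NN implementation; the function and parameter names are made up here):

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Dequantize an INT8 tensor to FP32 using the tensor's scale and
    // (zero-point) offset: real = scale * (quantized - offset).
    std::vector<float> DequantizeInt8ToFp32(const std::vector<int8_t>& quantized,
                                            float scale, int32_t offset)
    {
        std::vector<float> real(quantized.size());
        for (std::size_t i = 0; i < quantized.size(); ++i)
        {
            real[i] = scale * (static_cast<int32_t>(quantized[i]) - offset);
        }
        return real;
    }

Ignoring the offset (or using a per-network default scale) would shift or rescale every dequantized value, which is why both quantization parameters have to be read from the tensor info.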
Signed-off-by: Sadik Armagan <sadik.armagan@arm.com>
Change-Id: I1f81f15640395ac041923b10dbe9151159715117
|
|
* ProfilingDetails assumed that every workload description included
both tensors and parameters. This is not always the case.
* Modify ProfilingDetails::AddDetailsToString to check the next
element to be printed before deciding to add a separator and new line.
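A minimal sketch of that look-ahead pattern (illustrative only; the real ProfilingDetails code and its field types differ):

    #include <cstddef>
    #include <sstream>
    #include <string>
    #include <vector>

    // Join the fields of a workload description, only emitting a separator
    // and newline when there is actually a next element to print.
    std::string AddDetailsToString(const std::vector<std::string>& fields)
    {
        std::ostringstream out;
        for (std::size_t i = 0; i < fields.size(); ++i)
        {
            out << fields[i];
            if (i + 1 < fields.size()) // look ahead before adding ",\n"
            {
                out << ",\n";
            }
        }
        return out.str();
    }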
Signed-off-by: Colm Donelan <colm.donelan@arm.com>
Change-Id: I2577b0e8a149d0a172ee12975e18b78238d8256e
|
|
Signed-off-by: Teresa Charlin <teresa.charlinreyes@arm.com>
Change-Id: Ib91b734d4add47e23ad00f76e53f1873ff617831
|
|
* Adding the check only if it's not a const layer, which is needed to run the ai_benchmark_v5_yolo_v4_tiny_quant.tflite model
* We still won't be able to run the model due to IVGCVSW-7158
Signed-off-by: Nikhil Raj <nikraj01@e126673.cambridge.arm.com>
Change-Id: Ib7e77a0b5a64be0c92a8e4eae45729f799770b37
|
|
Signed-off-by: Jim Flynn <jim.flynn@arm.com>
Change-Id: I3a3aab7b5042349cb2df8517678306665e037610
|
|
Signed-off-by: Cathal Corbett <cathal.corbett@arm.com>
Change-Id: I5c68b81a67fc2b5a33cf62753351440564bb868e
|
|
* Changed long variable declaration to int
Signed-off-by: Samuel Yap <samuel.yap@arm.com>
Change-Id: I2df6f8f6df8780e48e09f7e68c04626a8a8a207d
|
|
* Added a case for Bf16 to the switch and changed the assertion to an exception
so that it shows up in Release builds (see the sketch below).
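A minimal sketch of why an exception is preferred over an assertion here (illustrative only; the enum, function name and exception type are made up, not the armnn ones): assertions compile out when NDEBUG is defined in Release builds, whereas a thrown exception still fires.

    #include <stdexcept>

    enum class DataType { Float32, Float16, BFloat16 };

    unsigned int GetDataTypeSize(DataType type)
    {
        switch (type)
        {
            case DataType::Float32:  return 4u;
            case DataType::Float16:  return 2u;
            case DataType::BFloat16: return 2u; // newly added Bf16 case
            default:
                // assert(false) would vanish in Release (NDEBUG) builds;
                // throwing keeps the error visible there too.
                throw std::runtime_error("Unknown DataType");
        }
    }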
Signed-off-by: Francis Murtagh <francis.murtagh@arm.com>
Change-Id: I817260dc7b7667386c4aa734bea649383866a785
|
|
* Fixed caching issue.
Signed-off-by: Sadik Armagan <sadik.armagan@arm.com>
Change-Id: Ic7b3e0bd4438b2fd1b3dbfa86b6c89d625bbf9dd
|
|
running Arm NN Unittest
Signed-off-by: Cathal Corbett <cathal.corbett@arm.com>
Change-Id: I567452000287babad345e61ea85ea84f362f48e0
|
|
ConvertLayers.
* ConvertBf16ToFp32Layer
* ConvertFp16ToFp32Layer
* ConvertFp32ToBf16Layer
* ConvertFp32ToFp16Layer
Signed-off-by: Teresa Charlin <teresa.charlinreyes@arm.com>
Change-Id: I5e763519a12f017dc14b09ea191fdb3b7398c0d7
|
|
* Originated from a GitHub issue: https://github.com/ARM-software/armnn/issues/667
* Initially, Arm NN supports the Pool2d operation because there is no padding
on the Pool2d itself. The Neon failure occurs when a Pad layer is followed by an
average Pool2d, due to the folding optimization.
* Here we prevent the folding optimization from happening for this special case
and instead add it as a backend-specific optimization.
Signed-off-by: Cathal Corbett <cathal.corbett@arm.com>
Change-Id: Ia0fd90c3a6b4b9d29c81106f154617d2e893e26b
|
|
* Correcting some typos
Signed-off-by: Teresa Charlin <teresa.charlinreyes@arm.com>
Change-Id: Icb21dc4828e51afa38816bd454926fc41e9e82cb
|
|
This reverts commit 1a7f033768acb27da11503bd29abb468d2e77f9e.
List of fixes to be able to add this code again:
* "emplacing_back" the vector inputTensors into the vector m_InputTensorsVec outside the for loop
* GetIOInfo() uses IOptimizedNetwork instead of INetwork, where the inferred shapes are not saved
* Add missing data type Signed32 to SetupInputsAndOutputs()
* PrintOutputTensors() prints the actual output without dequantizing
* Add profilingDetailsMethod as input in networkProperties in ArmNNExecutor constructor
* Fix typos
Change-Id: I91de166f87228282db3efa27431fe91458834442
Signed-off-by: Teresa Charlin <teresa.charlinreyes@arm.com>
Change-Id: Ic6634d48892d11e5f146cdf285e1e333e93e9937
Signed-off-by: Francis Murtagh <francis.murtagh@arm.com>
|
|
* Descriptors added for BatchMatMul
* Layer definition added
* Input validation added (will likely change when opt. param support comes in)
* Ref workload implementation for BatchMatMul added (will also change with opt. param support; a naive sketch follows after this list)
* Ref layer tests made for BatchMatMul
* CMake and other build files updated
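A naive sketch of what a reference batch matrix multiplication computes (illustrative only; the actual Ref workload works on descriptors, tensor handles and broadcasting rules, none of which are shown here):

    #include <cstddef>
    #include <vector>

    // Multiply two batches of row-major matrices:
    // x is [batch, m, k], y is [batch, k, n], result is [batch, m, n].
    std::vector<float> BatchMatMul(const std::vector<float>& x,
                                   const std::vector<float>& y,
                                   std::size_t batch, std::size_t m,
                                   std::size_t k, std::size_t n)
    {
        std::vector<float> out(batch * m * n, 0.0f);
        for (std::size_t b = 0; b < batch; ++b)
        {
            for (std::size_t i = 0; i < m; ++i)
            {
                for (std::size_t j = 0; j < n; ++j)
                {
                    float acc = 0.0f;
                    for (std::size_t p = 0; p < k; ++p)
                    {
                        acc += x[b * m * k + i * k + p] * y[b * k * n + p * n + j];
                    }
                    out[b * m * n + i * n + j] = acc;
                }
            }
        }
        return out;
    }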
Signed-off-by: Samuel Yap <samuel.yap@arm.com>
Change-Id: Ic885301da543ee0fbe7922b85e7f9658c4efc617
|
|
Fp32NetworkToBf16Converter
* Fuse FP32ToBF16Layers with Constant Layer so Conv2d/FullyConnected
can have their weights redirected.
* If BF16 is unsupported in Conv2d or FullyConnected, revert the fused
Constant Layer to FP32
Change-Id: If523c708a822659d64597d9ae39cca1c2f84b76f
Signed-off-by: Francis Murtagh <francis.murtagh@arm.com>
|
|
* Refactor backend capability checks in LoadedNetwork.
* ImportInputs should check that the number of tensors does not exceed the
number of inputs.
* In EnqueueWorkload the check on the count of input tensors was ignoring
pre-imported inputs.
* Added checks to verify ImportInputs/ImportOutputs worked as expected
in EndToEndTestImpl.
* Improve documentation on ImportInputs/ImportOutputs in IRuntime.hpp.
* Disabled import tests in CL and Neon EndToEndTests that cannot work.
Signed-off-by: Colm Donelan <colm.donelan@arm.com>
Change-Id: Iae4b2644a1c9f01ee72bce1afb211661cc9ae2e3
|
|
* ExecutionData holds a void* which can be assigned to the data required
for execution in a backend (see the sketch after this list). WorkingMemDescriptors
are used in the Ref backend; they hold TensorHandles for inputs and outputs.
* Updated ExecuteAsync functions to take ExecutionData.
* Added CreateExecutionData and UpdateExecutionData to IBackendInternal.
* Streamlined experimental IWorkingMemHandle API by removing map related
function and unused m_workingMemDescriptorMap from WorkingMemHandle.
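A minimal sketch of the ExecutionData idea from the first bullet (illustrative only; apart from the names ExecutionData and WorkingMemDescriptor, the types below are simplified stand-ins, not the armnn classes):

    #include <vector>

    struct TensorHandlePlaceholder {}; // stand-in for a real tensor handle type

    // Stand-in for the Ref backend's per-execution inputs and outputs.
    struct WorkingMemDescriptor
    {
        std::vector<TensorHandlePlaceholder*> m_Inputs;
        std::vector<TensorHandlePlaceholder*> m_Outputs;
    };

    // Backend-agnostic handle: each backend decides what m_Data points at.
    struct ExecutionData
    {
        void* m_Data = nullptr;
    };

    void ExecuteAsync(ExecutionData& executionData)
    {
        // The Ref backend interprets m_Data as a WorkingMemDescriptor;
        // other backends can attach whatever execution data they need.
        auto* descriptor = static_cast<WorkingMemDescriptor*>(executionData.m_Data);
        (void)descriptor; // ... run the workload with these inputs/outputs ...
    }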
Signed-off-by: Matthew Sloyan <matthew.sloyan@arm.com>
Change-Id: I54b0aab12872011743a141eb42dae200227769af
|
|
Signed-off-by: Teresa Charlin <teresa.charlinreyes@arm.com>
Change-Id: I979a6f43c0d6ec49effb9a87339dbcd07678d2bd
|
|
Signed-off-by: Jim Flynn <jim.flynn@arm.com>
Change-Id: I97dee6982e0a7be01c13e9e803c0997547a39ff1
|
|
* Enabled import host memory in SL as default
* Updated import host memory functionality in GpuAcc
Signed-off-by: Sadik Armagan <sadik.armagan@arm.com>
Change-Id: I22132b1e1008159b0e7247219762e3e9ae5eba10
|
|
* Add virtual GetSlotIndex to IInputSlot
* Fix logic in GetWorkingCopy to use the index of slots, so as not
to add slots to the cloned subgraphView if they are not in the original subgraphView
* Add test to cover cases when not all inputSlots to subgraphView layer
are part of the original subgraphView
* Mark SubgraphView::GetWorkingCopy() as const
Change-Id: I1d540f84c57f97f6c834ec06ca13393ffa55d379
|
|
Signed-off-by: Teresa Charlin <teresa.charlinreyes@arm.com>
Change-Id: I1fedfdf2cd8871d6b307fce8620f40adadf75f04
|
|
instead of immediately before output
Signed-off-by: Teresa Charlin <teresa.charlinreyes@arm.com>
Change-Id: I2d89a1efdabfdb4be24a8998a03fe1f502d26183
|
|
Signed-off-by: Nikhil Raj <nikhil.raj@arm.com>
Change-Id: I9ccaefbe28ea572e9e2b4a2168574804667f7460
|
|
* Added non-const variants of existing const member functions in
IInputSlot and IOutputSlot to retrieve non-const IConnectableLayer
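A minimal sketch of the const/non-const accessor pattern described above (illustrative only; a simplified interface, not the real armnn headers):

    class IConnectableLayer; // forward declaration

    class IInputSlot
    {
    public:
        virtual ~IInputSlot() = default;

        // Existing const accessor.
        virtual const IConnectableLayer& GetOwningIConnectableLayer() const = 0;

        // Newly added non-const variant returning a mutable layer reference.
        virtual IConnectableLayer& GetOwningIConnectableLayer() = 0;
    };

Overloading on const-ness lets callers that hold a non-const slot mutate the owning layer without a const_cast, while const callers keep the read-only view.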
Signed-off-by: Nabeel Ahmad <nabeel.ahmad@arm.com>
Change-Id: Ic3388b578324edb4d2cca36acce6560ad1ce83c5
|
|
This reverts commit a0f8b15d4ddb5075f380003ff31b271d389d3b66.
Reason for revert: <Test ClDmaBufInternalTests review >
Change-Id: Ibc4a77fa008643849da7330391942e4c87b941e2
|
|
This reverts commit 03bf98a8bc51ad20eef4b9ca5fbf6ce15e063721.
Reason for revert: Caused failures in tests located in internal repo.
Change-Id: If35cb0ede349b270e4e7827324382e09455d8cfa
|
|
Only one bool is used to indicate whether inputs should be imported.
However, it's possible for the user to want to import inputs but not
export outputs. In addition, it's possible for a user to enable import
during optimize but then pass a memory source that does not require
import (a sketch follows after the list below).
* Add m_ExportEnabled to INetwork.hpp.
* Modify Network::dNetwork to consider both m_ImportEnabled
and m_ExportEnabled.
* Add ValidateSourcesMatchOptimizedNetwork to LoadedNetwork to validate
import options between optimize and network load.
* Update the TfLite delegate to consider the exportEnabled flag in the
optimizer.
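A minimal sketch of the two independent flags and the memory-source check described above (illustrative only; the struct and function bodies here are hypothetical, not the armnn API):

    #include <stdexcept>

    enum class MemorySource { Undefined, Malloc, DmaBuf };

    // Hypothetical stand-in for the optimizer options: import and export
    // are now controlled independently.
    struct OptimizerFlags
    {
        bool m_ImportEnabled = false;
        bool m_ExportEnabled = false;
    };

    // Validate that the memory source supplied at network load time matches
    // what was requested when the network was optimized.
    void ValidateSourcesMatchOptimizedNetwork(const OptimizerFlags& flags,
                                              MemorySource inputSource)
    {
        bool sourceRequiresImport = (inputSource != MemorySource::Undefined);
        if (flags.m_ImportEnabled != sourceRequiresImport)
        {
            throw std::runtime_error(
                "Import option passed to Optimize() does not match the "
                "memory source passed at network load time");
        }
    }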
!armnn-internal-tests:425350
Signed-off-by: Colm Donelan <colm.donelan@arm.com>
Change-Id: I776eab81595898e43f91ab40306962eae61329f4
|
|
* Updated Serializer CMakeLists.txt to build armnnSerializerObj
* Added constant tensors as input support to SL
Signed-off-by: Sadik Armagan <sadik.armagan@arm.com>
Change-Id: I22f6cf50147d99a01f7fe70d7446b114a4c57af3
|
|
* Fixed Segfault when parsing Unidirectional Sequence LSTM
Signed-off-by: Mike Kelly <mike.kelly@arm.com>
Change-Id: Ic69a4190c60ef595be64bc2c356e540319381b7e
|
|
* Fix made to experimental/armnn_shim_sl branch also required for armnn master branch.
* TestGenerated/GeneratedTests.Sync/argmax_1 fix.
Signed-off-by: Cathal Corbett <cathal.corbett@arm.com>
Change-Id: Idb0324ff59e1ed13caf5f4bf899d1d3220d823d4
|
|
* Removed the pre-generated ArmnnSchema_generated.h
* This version was generated using flatbuffers v1.12.0 and it contains
code that's incompatible with newer versions
* Android.mk will look for ArmnnSchema_generated.h in the armnnGenerated
directory in the armnn directory.
* The Serializer and Deserializer will look for ArmnnSchema_generated.h
in the armnnGenerated directory.
!android-nn-driver:7626
Signed-off-by: Mike Kelly <mike.kelly@arm.com>
Change-Id: I13ff6b6c78740cf1f82750f56caab83200e6a3e5
Signed-off-by: Mike Kelly <mike.kelly@arm.com>
|
|
* Deserializer.cpp
* Length() has been deprecated in flatbuffers v.1.12.0 or earlier.
* SerializerTests.cpp
* armnn::BaseDescriptor& descriptor is unused.
Signed-off-by: Mike Kelly <mike.kelly@arm.com>
Change-Id: Icf0f09863f13dfd86c2c209c36c7f74f194c707b
|
|
Make some things private that don't need to be public in RefElementwiseWorkload.
Remove non-workload header files from RefWorkloads.hpp - the non-workload header
files are implementation details of individual workloads, whereas RefWorkloads.hpp
should only contain the workload definitions needed for RefWorkloadFactory.
Signed-off-by: Matthew Bentham <matthew.bentham@arm.com>
Change-Id: I4c28963a027162a6560e56cf84b6c0063283e48f
|
|
* Test already existed but bias was not enabled, so it yielded a false positive
* Updated Conv2d and FC to have const layers as inputs
Signed-off-by: Keith Davis <keith.davis@arm.com>
Change-Id: Id4193adef2ac67b3a4681345e4dc01414cbbbad7
|
|
* BackendHelper.cpp IsXXXLayerSupported doesn't get as far as Neon/Cl
Validate functions where arm_compute::Status is returned.
* Conv2d, Depthwise, DilatedDepthwise and FullyConnected
* Tidy up if() -> if ()
* Clean up logic in FullyConnected so that isLayerSupported gets called
Signed-off-by: Francis Murtagh <francis.murtagh@arm.com>
Signed-off-by: Teresa Charlin <teresa.charlinreyes@arm.com>
Change-Id: I5da1a882f4a2f55e90aa984b2b9548a847cb3a2d
|
|
* Use new INetwork::AddConvolution2dLayer
instead of deprecated version
* Remove duplicated test in SerializerTests
* Fix some cosmetic issues
Signed-off-by: Keith Davis <keith.davis@arm.com>
Change-Id: I3407815bfdc1cdc01ca0a667b8e4d80d8621783f
|
|
* Add functionality to check for ConstantTensorsAsInputs to GetConstantTensorsByRef
* Reorder optimizations so RedirectMembersToConstantInputs occurs after
Conversion of Constants
* Ensure graph is in topological order after loading in OptimizedNet
* Fixed test to check release of m_LayerOutputs.
Signed-off-by: Francis Murtagh <francis.murtagh@arm.com>
Change-Id: I7cff50798d7217e8ea0d2f9b153eabd10174a566
|
|
* No trailing permute layer after a constant layer
* Unit test for optimization
Signed-off-by: Cathal Corbett <cathal.corbett@arm.com>
Change-Id: I0d098f5af41d2c55df7cef1ccfb848093320ddc1
|
|
* Support Float16 as input to Dequantize layer
* Add Optimization to substitute Const+Dequantize layers with Const layer
Signed-off-by: Teresa Charlin <teresa.charlinreyes@arm.com>
Change-Id: I58bb7e3871ca480c7b6fca93c4efb2de84e09e64
Signed-off-by: David <david.monahan@arm.com>
|
|
dimensions
* Added allow-expanded-dims to TFLite parser and ArmNN delegate
* If true, Arm NN will disregard dimensions with a size of 1 when
validating tensor shapes; tensor sizes must still match (see the sketch after this list).
* This allows us to support models where tensors have expanded
dimensions (i.e. extra dimensions with a size of 1).
* Fixed bug in Network where it assumed that only the first option
could be ShapeInferenceMethod.
* Fixed bug where m_ShapeInferenceMethod was lost when copying or
moving Graphs.
* Changed Delegate to pass "infer-output-shape", "allow-expanded-dims"
and other BackendOptions through to the Network during construction.
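A minimal sketch of the shape check implied by allow-expanded-dims (illustrative only, not the Arm NN implementation): dimensions of size 1 are dropped before comparing shapes, so the total element counts still have to match.

    #include <vector>

    // Returns true if two shapes are equal after disregarding dimensions of
    // size 1, e.g. {1, 2, 3, 1} matches {2, 3}.
    bool ShapesMatchIgnoringExpandedDims(const std::vector<unsigned int>& a,
                                         const std::vector<unsigned int>& b)
    {
        auto squeeze = [](const std::vector<unsigned int>& shape)
        {
            std::vector<unsigned int> squeezed;
            for (unsigned int dim : shape)
            {
                if (dim != 1) { squeezed.push_back(dim); }
            }
            return squeezed;
        };
        return squeeze(a) == squeeze(b);
    }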
Signed-off-by: Mike Kelly <mike.kelly@arm.com>
Change-Id: Ibe7c5ae6597796fc9164cb07bd372bd7f8f8cacf
|
|
* Resolves: IVGCVSW-6952
Signed-off-by: Finn Williams <finn.williams@arm.com>
Change-Id: Ic85bd5267cf94e0ee8461ff4e62b9db3cb80877a
|
|
* The signature change is an ABI/API break, so the old signature is kept as an overload and forwarded to the new function (see the sketch below).
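A minimal sketch of the overload-and-forward pattern mentioned above (illustrative only; the function and parameter names are made up):

    #include <string>

    struct ExtraOptions {}; // stand-in for the newly added parameter

    // New signature.
    void Configure(const std::string& name, const ExtraOptions& options)
    {
        (void)name;
        (void)options;
        // ... real work happens here ...
    }

    // Old signature kept so existing callers keep compiling and linking;
    // it simply forwards to the new function with default options.
    void Configure(const std::string& name)
    {
        Configure(name, ExtraOptions{});
    }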
Signed-off-by: Francis Murtagh <francis.murtagh@arm.com>
Change-Id: I8590a6fd65986b5aeff905c1e761cb5c51042e99
|
|
!android-nn-driver:7477
Signed-off-by: Cathal Corbett <cathal.corbett@arm.com>
Change-Id: Ibf633ccccc385bd980934ff829407d21981323ef
|
|
Remove use of std::unary_function and std::binary_function, which were
deprecated in C++11.
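A minimal sketch of the kind of change involved (illustrative only, with a made-up functor): std::unary_function only supplied the argument_type/result_type typedefs, so a functor can simply stop deriving from it, or declare the typedefs itself if anything still needs them.

    // Before: deprecated in C++11, removed in C++17.
    // struct IsNegative : public std::unary_function<int, bool>
    // {
    //     bool operator()(int value) const { return value < 0; }
    // };

    // After: a plain functor, no base class required.
    struct IsNegative
    {
        bool operator()(int value) const { return value < 0; }
    };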
Signed-off-by: Matthew Bentham <matthew.bentham@arm.com>
Change-Id: I9e4624f570b475595c9e28bdf185ddcc2ddceb2f
|
|
* Update Front-end and Tools.
* Updated Serializer, Deserializer and unit tests to reflect this.
* Updated TfLiteDelegate, TfLiteParser and OnnxParser.
* Updated Ref.
* Fixed resulting Neon / CL tests
* Unified optimizers for conv2d ops
* Optimizer Fix - Fp32ToBf16
* Partial implementation for ACL backends to fix VTS failures
!android-nn-driver:7477
Signed-off-by: Keith Davis <keith.davis@arm.com>
Change-Id: I5fb18877f7ee32643e15a9818945356274bb401b
|
|
* Add IsSupported for Pooling3d
* Add CreateWorkload case for Pooling3d
* Create new NeonPooling3dWorkload header and source files
* Add Pooling3d workload to NeonWorkloads.hpp
* Add float32 tests for Pooling3d workload
* Add Uint8 tests for Cl and NE pooling3d
Signed-off-by: Ryan OShea <ryan.oshea3@arm.com>
Change-Id: Ic992e1233d1eb8db52df2c8446183df1c907bc4d
|
|
* IVGCVSW-6940 ConstTensorsAsInput: DepthwiseConvolution2d - Complete Neon and Cl Bug Fix
* Bug fix to enable Cl and Neon backend compatibility with ConstantTensorsAsInputs
* Updated Cl and Neon FullyConnected workloads to handle constant
weights and bias as inputs rather than reading from member variables.
* Prevent non-const weights and biases from passing CL and NEON validation
for Depthwise Convolution.
Signed-off-by: Cathal Corbett <cathal.corbett@arm.com>
Change-Id: I0f505ff5998a183152f843d0f6cc74327ba920e7
|
|
* Added backend specific optimization & test for CpuAcc and GpuAcc: PermuteDepthwiseConv2dWeights
Signed-off-by: Cathal Corbett <cathal.corbett@arm.com>
Change-Id: I600476b2e9c557a39818a574c1091c9d650b21b1
|