author    Mike Kelly <mike.kelly@arm.com>      2022-05-16 23:10:42 +0100
committer Ryan OShea <ryan.oshea3@arm.com>    2022-05-19 11:06:34 +0100
commit    21fe06fad6760a0d453f2de9c8dd790983ae940c (patch)
tree      bad2f314defadd4b340343d99b6e157b46622039 /tests/ExecuteNetwork/ExecuteNetwork.cpp
parent    b5e03cc39cdabc49bf117c119073f60e9d36a474 (diff)
IVGCVSW-6929 Support for models with implicit expanded dimensions
* Added an allow-expanded-dims option to the TFLite parser and the ArmNN delegate.
* If enabled, ArmNN disregards dimensions with a size of 1 when validating
  tensor shapes; total tensor sizes must still match.
* This allows us to support models whose tensors have expanded dimensions
  (i.e. extra dimensions with a size of 1).
* Fixed a bug in Network that assumed only the first backend option could be
  a ShapeInferenceMethod.
* Fixed a bug where m_ShapeInferenceMethod was lost when copying or
  moving Graphs.
* Changed the Delegate to pass "infer-output-shape", "allow-expanded-dims"
  and other BackendOptions through to the Network during construction.
Signed-off-by: Mike Kelly <mike.kelly@arm.com>
Change-Id: Ibe7c5ae6597796fc9164cb07bd372bd7f8f8cacf
Diffstat (limited to 'tests/ExecuteNetwork/ExecuteNetwork.cpp')
 tests/ExecuteNetwork/ExecuteNetwork.cpp | 1 +
 1 file changed, 1 insertion(+)
diff --git a/tests/ExecuteNetwork/ExecuteNetwork.cpp b/tests/ExecuteNetwork/ExecuteNetwork.cpp
index ddabf3c11f..f0a3d0821e 100644
--- a/tests/ExecuteNetwork/ExecuteNetwork.cpp
+++ b/tests/ExecuteNetwork/ExecuteNetwork.cpp
@@ -389,6 +389,7 @@ int MainImpl(const ExecuteNetworkParams& params,
     // Creates an InferenceModel, which will parse the model and load it into an IRuntime.
     typename InferenceModel<TParser, TDataType>::Params inferenceModelParams;
     inferenceModelParams.m_ModelPath = params.m_ModelPath;
+    inferenceModelParams.m_AllowExpandedDims = params.m_AllowExpandedDims;
     inferenceModelParams.m_IsModelBinary = params.m_IsModelBinary;
     inferenceModelParams.m_ComputeDevices = params.m_ComputeDevices;
     inferenceModelParams.m_DynamicBackendsPath = params.m_DynamicBackendsPath;