| author | Jan Eilers <jan.eilers@arm.com> | 2020-10-15 18:34:43 +0100 |
|---|---|---|
| committer | Jan Eilers <jan.eilers@arm.com> | 2020-10-20 13:48:50 +0100 |
| commit | 45274909b06a4882ada92899c58ee66194446135 (patch) | |
| tree | 61a67ce012ef80fbd5d5f23cc8a22ba39ea2c7f2 /tests/ExecuteNetwork/ExecuteNetworkParams.hpp | |
| parent | 3c24f43ff9afb50898d6a73ccddbc0936f72fdad (diff) | |
| download | armnn-45274909b06a4882ada92899c58ee66194446135.tar.gz | |
IVGCVSW-5284 Refactor ExecuteNetwork
* Removed boost program options and replaced it with cxxopts
* Unified adding, parsing and validation of program options
into the struct ProgramOptions
* Program options are now parsed directly into ExecuteNetworkParams
which can be passed directly to MainImpl
* Split NetworkExecutionUtils into header and source
* Removed RunTest
* Removed RunCsvTest
* Removed RunClTuning
* Moved MainImpl back to ExecuteNetwork.cpp
* Added additional util functions
The functionality of ExecuteNetwork remains the same, with two exceptions:
CL tuning runs now need to be started separately, and there is no short
option for fp16-turbo-mode because -h is reserved by cxxopts for printing
the help message.
Signed-off-by: Jan Eilers <jan.eilers@arm.com>
Change-Id: Ib9689375c81e1a184c17bb3ea66c3550430bbe09
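The flow described above, options collected in one place and handed to MainImpl as a ready-made ExecuteNetworkParams, can be sketched roughly as follows. This is a minimal illustration in plain C++: the real code parses with cxxopts, the struct has many more fields, and the helper names here (ToParams, m_RawOptions) are hypothetical:

```cpp
#include <map>
#include <string>
#include <vector>

// Trimmed-down stand-in for the real struct in ExecuteNetworkParams.hpp
// (hypothetical subset; the actual struct has ~30 fields).
struct ExecuteNetworkParams
{
    std::string m_ModelPath;
    bool        m_EnableProfiling = false;
    size_t      m_Iterations      = 1;
};

// Hypothetical sketch of the ProgramOptions idea: one type that owns the
// raw option values and produces a ready-to-use params struct, so that
// adding, parsing and validation all live in one place.
struct ProgramOptions
{
    std::map<std::string, std::string> m_RawOptions;

    ExecuteNetworkParams ToParams() const
    {
        ExecuteNetworkParams params;
        params.m_ModelPath       = m_RawOptions.at("model-path");
        params.m_EnableProfiling = m_RawOptions.at("enable-profiling") == "true";
        params.m_Iterations      = std::stoul(m_RawOptions.at("iterations"));
        return params;
    }
};

// MainImpl then only ever sees the finished params struct.
int MainImpl(const ExecuteNetworkParams& params)
{
    return params.m_ModelPath.empty() ? 1 : 0;
}
```

The benefit of this shape is that MainImpl has no dependency on the parsing library at all; swapping boost program options for cxxopts only touches the ProgramOptions side.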
Diffstat (limited to 'tests/ExecuteNetwork/ExecuteNetworkParams.hpp')
-rw-r--r-- | tests/ExecuteNetwork/ExecuteNetworkParams.hpp | 48 |
1 file changed, 48 insertions, 0 deletions
```diff
diff --git a/tests/ExecuteNetwork/ExecuteNetworkParams.hpp b/tests/ExecuteNetwork/ExecuteNetworkParams.hpp
new file mode 100644
index 0000000000..5490230ede
--- /dev/null
+++ b/tests/ExecuteNetwork/ExecuteNetworkParams.hpp
@@ -0,0 +1,48 @@
+//
+// Copyright © 2020 Arm Ltd and Contributors. All rights reserved.
+// SPDX-License-Identifier: MIT
+//
+
+#pragma once
+
+#include <armnn/BackendId.hpp>
+#include <armnn/Tensor.hpp>
+
+/// Holds all parameters necessary to execute a network
+/// Check ExecuteNetworkProgramOptions.cpp for a description of each parameter
+struct ExecuteNetworkParams
+{
+    using TensorShapePtr = std::unique_ptr<armnn::TensorShape>;
+
+    std::vector<armnn::BackendId> m_ComputeDevices;
+    bool m_DequantizeOutput;
+    std::string m_DynamicBackendsPath;
+    bool m_EnableBf16TurboMode;
+    bool m_EnableFastMath = false;
+    bool m_EnableFp16TurboMode;
+    bool m_EnableLayerDetails = false;
+    bool m_EnableProfiling;
+    bool m_GenerateTensorData;
+    bool m_InferOutputShape = false;
+    std::vector<std::string> m_InputNames;
+    std::vector<std::string> m_InputTensorDataFilePaths;
+    std::vector<TensorShapePtr> m_InputTensorShapes;
+    std::vector<std::string> m_InputTypes;
+    bool m_IsModelBinary;
+    size_t m_Iterations;
+    std::string m_ModelFormat;
+    std::string m_ModelPath;
+    std::vector<std::string> m_OutputNames;
+    std::vector<std::string> m_OutputTensorFiles;
+    std::vector<std::string> m_OutputTypes;
+    bool m_ParseUnsupported = false;
+    bool m_PrintIntermediate;
+    bool m_QuantizeInput;
+    size_t m_SubgraphId;
+    double m_ThresholdTime;
+    int m_TuningLevel;
+    std::string m_TuningPath;
+
+    // Ensures that the parameters for ExecuteNetwork fit together
+    void ValidateParams();
+};
```
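The diff only declares ValidateParams(); its definition lands in the corresponding source file, which is not part of this hunk. A consistency check of the kind the header comment describes ("ensures that the parameters fit together") might look like the following. This is a hedged sketch over a trimmed-down subset of the struct, not the actual Arm NN implementation:

```cpp
#include <stdexcept>
#include <string>
#include <vector>

// Trimmed-down stand-in for the struct in this diff (hypothetical subset).
struct ExecuteNetworkParams
{
    std::vector<std::string> m_InputNames;
    std::vector<std::string> m_InputTensorDataFilePaths;
    bool m_GenerateTensorData = false;

    // Sketch of a cross-field check: parameters that are individually valid
    // can still contradict each other, which is what this method catches.
    void ValidateParams() const
    {
        // If input data is read from files (rather than generated), there
        // must be exactly one file path per named input tensor.
        if (!m_GenerateTensorData &&
            m_InputTensorDataFilePaths.size() != m_InputNames.size())
        {
            throw std::runtime_error(
                "Number of input data files does not match number of inputs");
        }
    }
};
```

Centralising checks like this in one method means MainImpl can assume a coherent parameter set and needs no per-option error handling of its own.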