ArmNN 22.08
OptimizerOptions Struct Reference
ArmNN performs an optimization on each model/network before it gets loaded for execution. More...
#include <INetwork.hpp>
Public Member Functions

- OptimizerOptions ()
- OptimizerOptions (bool reduceFp32ToFp16, bool debug, bool reduceFp32ToBf16, bool importEnabled, ModelOptions modelOptions={}, bool exportEnabled=false)
- OptimizerOptions (bool reduceFp32ToFp16, bool debug, bool reduceFp32ToBf16=false, ShapeInferenceMethod shapeInferenceMethod=armnn::ShapeInferenceMethod::ValidateOnly, bool importEnabled=false, ModelOptions modelOptions={}, bool exportEnabled=false)
- const std::string ToString () const
Public Attributes

- bool m_ReduceFp32ToFp16
  Reduces all Fp32 operators in the model to Fp16 for faster processing. More...
- bool m_Debug
- bool m_ReduceFp32ToBf16
  Reduces all Fp32 operators in the model to Bf16 for faster processing. More...
- ShapeInferenceMethod m_shapeInferenceMethod
- bool m_ImportEnabled
- ModelOptions m_ModelOptions
- bool m_ProfilingEnabled
- bool m_ExportEnabled
Detailed Description

ArmNN performs an optimization on each model/network before it gets loaded for execution. OptimizerOptions provides a set of features that allows the user to customize this optimization on a per-model basis.
Definition at line 127 of file INetwork.hpp.
Constructor & Destructor Documentation

OptimizerOptions() [1/3]

OptimizerOptions ()  (inline)
Definition at line 129 of file INetwork.hpp.
OptimizerOptions() [2/3]

OptimizerOptions (bool reduceFp32ToFp16, bool debug, bool reduceFp32ToBf16, bool importEnabled, ModelOptions modelOptions={}, bool exportEnabled=false)  (inline)
Definition at line 140 of file INetwork.hpp.
References armnn::ValidateOnly.
OptimizerOptions() [3/3]

OptimizerOptions (bool reduceFp32ToFp16, bool debug, bool reduceFp32ToBf16=false, ShapeInferenceMethod shapeInferenceMethod=armnn::ShapeInferenceMethod::ValidateOnly, bool importEnabled=false, ModelOptions modelOptions={}, bool exportEnabled=false)  (inline)
Definition at line 157 of file INetwork.hpp.
Member Function Documentation

ToString()

const std::string ToString () const  (inline)
Definition at line 175 of file INetwork.hpp.
References BackendOptions::BackendOption::GetName(), BackendOptions::BackendOption::GetValue(), BackendOptions::Var::ToString(), and armnn::ValidateOnly.
Referenced by armnn::Optimize().
Member Data Documentation

bool m_Debug
Definition at line 211 of file INetwork.hpp.
Referenced by InferenceModel< IParser, TDataType >::InferenceModel(), armnn::Optimize(), ArmNNExecutor::PrintNetworkInfo(), TEST_SUITE(), and ExecuteNetworkParams::ValidateParams().
bool m_ExportEnabled
Definition at line 233 of file INetwork.hpp.
Referenced by armnn::Optimize(), and TEST_SUITE().
bool m_ImportEnabled
Definition at line 224 of file INetwork.hpp.
Referenced by armnn::Optimize(), and TEST_SUITE().
ModelOptions m_ModelOptions
Definition at line 227 of file INetwork.hpp.
Referenced by InferenceModel< IParser, TDataType >::InferenceModel(), armnn::Optimize(), ArmNNExecutor::PrintNetworkInfo(), TEST_CASE_FIXTURE(), TEST_SUITE(), and ExecuteNetworkParams::ValidateParams().
bool m_ProfilingEnabled
Definition at line 230 of file INetwork.hpp.
Referenced by GetSoftmaxProfilerJson(), InferenceModel< IParser, TDataType >::InferenceModel(), armnn::Optimize(), ArmNNExecutor::PrintNetworkInfo(), and ExecuteNetworkParams::ValidateParams().
bool m_ReduceFp32ToBf16
Reduces all Fp32 operators in the model to Bf16 for faster processing.
This feature works best if all operators of the model are in Fp32. ArmNN will add conversion layers between layers that weren't in Fp32 in the first place, or where the operator is not supported in Bf16. The overhead of these conversions can lead to slower overall performance if too many conversions are required.
Definition at line 218 of file INetwork.hpp.
Referenced by InferenceModel< IParser, TDataType >::InferenceModel(), armnn::Optimize(), ArmNNExecutor::PrintNetworkInfo(), and ExecuteNetworkParams::ValidateParams().
bool m_ReduceFp32ToFp16
Reduces all Fp32 operators in the model to Fp16 for faster processing.
This feature works best if all operators of the model are in Fp32. ArmNN will add conversion layers between layers that weren't in Fp32 in the first place, or where the operator is not supported in Fp16. The overhead of these conversions can lead to slower overall performance if too many conversions are required.
Definition at line 208 of file INetwork.hpp.
Referenced by InferenceModel< IParser, TDataType >::InferenceModel(), armnn::Optimize(), ArmNNExecutor::PrintNetworkInfo(), TEST_SUITE(), and ExecuteNetworkParams::ValidateParams().
ShapeInferenceMethod m_shapeInferenceMethod
Definition at line 221 of file INetwork.hpp.
Referenced by InferenceModel< IParser, TDataType >::InferenceModel(), armnn::Optimize(), ArmNNExecutor::PrintNetworkInfo(), and ExecuteNetworkParams::ValidateParams().