22.02

ArmNN performs an optimization on each model/network before it gets loaded for execution.

#include <INetwork.hpp>

Public Member Functions

  OptimizerOptions ()
  OptimizerOptions (bool reduceFp32ToFp16, bool debug, bool reduceFp32ToBf16, bool importEnabled, ModelOptions modelOptions={})
  OptimizerOptions (bool reduceFp32ToFp16, bool debug, bool reduceFp32ToBf16=false, ShapeInferenceMethod shapeInferenceMethod=armnn::ShapeInferenceMethod::ValidateOnly, bool importEnabled=false, ModelOptions modelOptions={})
  const std::string ToString () const
Public Attributes

  bool m_ReduceFp32ToFp16
      Reduces all Fp32 operators in the model to Fp16 for faster processing.
  bool m_Debug
  bool m_ReduceFp32ToBf16
      Reduces all Fp32 operators in the model to Bf16 for faster processing.
  ShapeInferenceMethod m_shapeInferenceMethod
  bool m_ImportEnabled
  ModelOptions m_ModelOptions
  bool m_ProfilingEnabled
Detailed Description

ArmNN performs an optimization on each model/network before it gets loaded for execution. OptimizerOptions provides a set of features that allows the user to customize this optimization on a per-model basis.

Definition at line 137 of file INetwork.hpp.
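As a sketch of typical use (assuming the Arm NN 22.02 C++ API; the CpuRef backend and the network source are placeholders, not prescribed by this page), the options are filled in and passed to armnn::Optimize:

```cpp
#include <armnn/ArmNN.hpp>

armnn::IOptimizedNetworkPtr OptimizeWithOptions(armnn::INetwork& network)
{
    // Create a runtime so the device spec can be queried for the optimizer.
    armnn::IRuntimePtr runtime =
        armnn::IRuntime::Create(armnn::IRuntime::CreationOptions());

    // Customize the optimization for this model.
    armnn::OptimizerOptions options;
    options.m_ReduceFp32ToFp16 = true;   // run Fp32 operators in Fp16 where supported
    options.m_Debug            = false;  // no debug instrumentation

    // CpuRef is used here only as a placeholder backend choice.
    return armnn::Optimize(network,
                           {armnn::Compute::CpuRef},
                           runtime->GetDeviceSpec(),
                           options);
}
```

The returned IOptimizedNetworkPtr is what gets loaded into the runtime for execution, so the options chosen here apply once per model, as the description above states.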
Constructor & Destructor Documentation

OptimizerOptions ()
inline
Definition at line 139 of file INetwork.hpp.

OptimizerOptions (bool reduceFp32ToFp16, bool debug, bool reduceFp32ToBf16, bool importEnabled, ModelOptions modelOptions={})
inline
Definition at line 149 of file INetwork.hpp.
References armnn::ValidateOnly.

OptimizerOptions (bool reduceFp32ToFp16, bool debug, bool reduceFp32ToBf16=false, ShapeInferenceMethod shapeInferenceMethod=armnn::ShapeInferenceMethod::ValidateOnly, bool importEnabled=false, ModelOptions modelOptions={})
inline
Definition at line 165 of file INetwork.hpp.

Member Function Documentation

const std::string ToString () const
inline
Definition at line 182 of file INetwork.hpp.
References BackendOptions::BackendOption::GetName(), BackendOptions::BackendOption::GetValue(), BackendOptions::Var::ToString(), and armnn::ValidateOnly.
Referenced by armnn::Optimize().
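ToString() can be used to record the active option set, for example before calling armnn::Optimize (a sketch; writing to std::cout is an arbitrary choice, any logging sink works):

```cpp
#include <armnn/ArmNN.hpp>
#include <iostream>

int main()
{
    armnn::OptimizerOptions options;
    options.m_ProfilingEnabled = true;

    // Dump the full option set for debugging / reproducibility.
    std::cout << options.ToString() << std::endl;
    return 0;
}
```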
Member Data Documentation

bool m_Debug
Definition at line 217 of file INetwork.hpp.
Referenced by InferenceModel< IParser, TDataType >::InferenceModel(), armnn::Optimize(), TEST_SUITE(), and ExecuteNetworkParams::ValidateParams().
bool m_ImportEnabled
Definition at line 230 of file INetwork.hpp.
Referenced by armnn::Optimize(), and TEST_SUITE().
ModelOptions m_ModelOptions
Definition at line 233 of file INetwork.hpp.
Referenced by InferenceModel< IParser, TDataType >::InferenceModel(), armnn::Optimize(), TEST_CASE_FIXTURE(), TEST_SUITE(), and ExecuteNetworkParams::ValidateParams().
bool m_ProfilingEnabled
Definition at line 236 of file INetwork.hpp.
Referenced by GetSoftmaxProfilerJson(), InferenceModel< IParser, TDataType >::InferenceModel(), armnn::Optimize(), and ExecuteNetworkParams::ValidateParams().
bool m_ReduceFp32ToBf16
Reduces all Fp32 operators in the model to Bf16 for faster processing.
This feature works best if all operators in the model are in Fp32. ArmNN will insert conversion layers around any layer that was not in Fp32 in the first place, or whose operator is not supported in Bf16. The overhead of these conversions can lead to slower overall performance if too many conversions are required.
Definition at line 224 of file INetwork.hpp.
Referenced by InferenceModel< IParser, TDataType >::InferenceModel(), armnn::Optimize(), and ExecuteNetworkParams::ValidateParams().
bool m_ReduceFp32ToFp16
Reduces all Fp32 operators in the model to Fp16 for faster processing.
This feature works best if all operators in the model are in Fp32. ArmNN will insert conversion layers around any layer that was not in Fp32 in the first place, or whose operator is not supported in Fp16. The overhead of these conversions can lead to slower overall performance if too many conversions are required.
Definition at line 214 of file INetwork.hpp.
Referenced by InferenceModel< IParser, TDataType >::InferenceModel(), armnn::Optimize(), TEST_SUITE(), and ExecuteNetworkParams::ValidateParams().
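The reduction can also be requested through the constructor rather than by mutating the member after construction (a sketch assuming the three-or-more-argument constructor listed above, with its documented defaults):

```cpp
#include <armnn/ArmNN.hpp>

// reduceFp32ToFp16 = true, debug = false; the remaining parameters keep
// their documented defaults (reduceFp32ToBf16 = false,
// shapeInferenceMethod = ValidateOnly, importEnabled = false).
armnn::OptimizerOptions options(/*reduceFp32ToFp16=*/true, /*debug=*/false);
```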
ShapeInferenceMethod m_shapeInferenceMethod
Definition at line 227 of file INetwork.hpp.
Referenced by InferenceModel< IParser, TDataType >::InferenceModel(), armnn::Optimize(), and ExecuteNetworkParams::ValidateParams().