ArmNN 22.11

OptimizerOptions Struct Reference

ArmNN performs an optimization on each model/network before it gets loaded for execution.

#include <INetwork.hpp>
Public Member Functions

    OptimizerOptions ()
    OptimizerOptions (bool reduceFp32ToFp16, bool debug, bool reduceFp32ToBf16, bool importEnabled, ModelOptions modelOptions={}, bool exportEnabled=false, bool debugToFile=false)
    OptimizerOptions (bool reduceFp32ToFp16, bool debug, bool reduceFp32ToBf16=false, ShapeInferenceMethod shapeInferenceMethod=armnn::ShapeInferenceMethod::ValidateOnly, bool importEnabled=false, ModelOptions modelOptions={}, bool exportEnabled=false, bool debugToFile=false, bool allowExpandedDims=false)
    const std::string ToString () const
Public Attributes

    bool m_ReduceFp32ToFp16
        Reduces all Fp32 operators in the model to Fp16 for faster processing.
    bool m_Debug
    bool m_DebugToFile
    bool m_ReduceFp32ToBf16
        Reduces all Fp32 operators in the model to Bf16 for faster processing.
    ShapeInferenceMethod m_shapeInferenceMethod
    bool m_ImportEnabled
    ModelOptions m_ModelOptions
    bool m_ProfilingEnabled
    bool m_ExportEnabled
    bool m_AllowExpandedDims
Detailed Description

ArmNN performs an optimization on each model/network before it gets loaded for execution. OptimizerOptions provides a set of features that allows the user to customize this optimization on a per-model basis.

Definition at line 127 of file INetwork.hpp.
Constructor & Destructor Documentation

OptimizerOptions() [1/3]
    inline
    Definition at line 129 of file INetwork.hpp.

OptimizerOptions() [2/3]
    inline
    Definition at line 142 of file INetwork.hpp.
    References armnn::ValidateOnly.

OptimizerOptions() [3/3]
    inline
    Definition at line 161 of file INetwork.hpp.
Member Function Documentation

ToString()
    inline
    Definition at line 182 of file INetwork.hpp.
    References BackendOptions::BackendOption::GetName(), BackendOptions::BackendOption::GetValue(), BackendOptions::Var::ToString(), and armnn::ValidateOnly.
    Referenced by armnn::Optimize().
Member Data Documentation

bool m_AllowExpandedDims
    Definition at line 248 of file INetwork.hpp.

bool m_Debug
    Definition at line 220 of file INetwork.hpp.
    Referenced by armnn::Optimize().

bool m_DebugToFile
    Definition at line 223 of file INetwork.hpp.
    Referenced by armnn::Optimize().

bool m_ExportEnabled
    Definition at line 245 of file INetwork.hpp.
    Referenced by armnn::Optimize().

bool m_ImportEnabled
    Definition at line 236 of file INetwork.hpp.
    Referenced by armnn::Optimize().

ModelOptions m_ModelOptions
    Definition at line 239 of file INetwork.hpp.
    Referenced by armnn::Optimize(), ArmnnDriverImpl::PrepareArmnnModel(), and ArmnnDriverImpl::PrepareArmnnModelFromCache().

bool m_ProfilingEnabled
    Definition at line 242 of file INetwork.hpp.
    Referenced by armnn::Optimize(), ArmnnDriverImpl::PrepareArmnnModel(), and ArmnnDriverImpl::PrepareArmnnModelFromCache().
bool m_ReduceFp32ToBf16
    Reduces all Fp32 operators in the model to Bf16 for faster processing.
    This feature works best when every operator in the model is in Fp32. ArmNN inserts conversion layers around any layer that was not originally in Fp32, or whose operator is not supported in Bf16. The overhead of these conversions can make overall performance slower if too many of them are required.
    Definition at line 230 of file INetwork.hpp.
    Referenced by armnn::Optimize().
bool m_ReduceFp32ToFp16
    Reduces all Fp32 operators in the model to Fp16 for faster processing.
    This feature works best when every operator in the model is in Fp32. ArmNN inserts conversion layers around any layer that was not originally in Fp32, or whose operator is not supported in Fp16. The overhead of these conversions can make overall performance slower if too many of them are required.
    Definition at line 217 of file INetwork.hpp.
    Referenced by armnn::Optimize(), ArmnnDriverImpl::PrepareArmnnModel(), and ArmnnDriverImpl::PrepareArmnnModelFromCache().
ShapeInferenceMethod m_shapeInferenceMethod
    Definition at line 233 of file INetwork.hpp.
    Referenced by armnn::Optimize().