ArmNN 22.11
OptimizerOptions Struct Reference

ArmNN performs an optimization on each model/network before it is loaded for execution.

#include <INetwork.hpp>

Public Member Functions

 OptimizerOptions ()
 
 OptimizerOptions (bool reduceFp32ToFp16, bool debug, bool reduceFp32ToBf16, bool importEnabled, ModelOptions modelOptions={}, bool exportEnabled=false, bool debugToFile=false)
 
 OptimizerOptions (bool reduceFp32ToFp16, bool debug, bool reduceFp32ToBf16=false, ShapeInferenceMethod shapeInferenceMethod=armnn::ShapeInferenceMethod::ValidateOnly, bool importEnabled=false, ModelOptions modelOptions={}, bool exportEnabled=false, bool debugToFile=false, bool allowExpandedDims=false)
 
const std::string ToString () const
 

Public Attributes

bool m_ReduceFp32ToFp16
 Reduces all Fp32 operators in the model to Fp16 for faster processing.
 
bool m_Debug
 
bool m_DebugToFile
 
bool m_ReduceFp32ToBf16
 Reduces all Fp32 operators in the model to Bf16 for faster processing.
 
ShapeInferenceMethod m_shapeInferenceMethod
 
bool m_ImportEnabled
 
ModelOptions m_ModelOptions
 
bool m_ProfilingEnabled
 
bool m_ExportEnabled
 
bool m_AllowExpandedDims
 

Detailed Description

ArmNN performs an optimization on each model/network before it is loaded for execution.

OptimizerOptions provides a set of features that let the user customize this optimization on a per-model basis.

Examples:
CustomMemoryAllocatorSample.cpp.

Definition at line 127 of file INetwork.hpp.
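
A minimal usage sketch (assuming an armnn::INetworkPtr named network has already been built): the options are filled in and handed to armnn::Optimize() together with the backend preferences and the runtime's device spec.

    // Sketch: 'network' is assumed to have been built beforehand.
    armnn::IRuntimePtr runtime = armnn::IRuntime::Create(armnn::IRuntime::CreationOptions());

    armnn::OptimizerOptions options;
    options.m_ReduceFp32ToFp16 = true;   // run Fp32 layers in Fp16 where the backend supports it

    armnn::IOptimizedNetworkPtr optNet = armnn::Optimize(*network,
                                                         {armnn::Compute::CpuAcc, armnn::Compute::CpuRef},
                                                         runtime->GetDeviceSpec(),
                                                         options);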

Constructor & Destructor Documentation

◆ OptimizerOptions() [1/3]

OptimizerOptions ( )
inline

Definition at line 129 of file INetwork.hpp.

    OptimizerOptions()
        : m_ReduceFp32ToFp16(false)
        , m_Debug(false)
        , m_DebugToFile(false)
        , m_ReduceFp32ToBf16(false)
        , m_shapeInferenceMethod(armnn::ShapeInferenceMethod::ValidateOnly)
        , m_ImportEnabled(false)
        , m_ModelOptions()
        , m_ProfilingEnabled(false)
        , m_ExportEnabled(false)
        , m_AllowExpandedDims(false)
    {}

◆ OptimizerOptions() [2/3]

OptimizerOptions ( bool  reduceFp32ToFp16,
bool  debug,
bool  reduceFp32ToBf16,
bool  importEnabled,
ModelOptions  modelOptions = {},
bool  exportEnabled = false,
bool  debugToFile = false 
)
inline

Definition at line 142 of file INetwork.hpp.

References armnn::ValidateOnly.

    OptimizerOptions(bool reduceFp32ToFp16, bool debug, bool reduceFp32ToBf16,
                     bool importEnabled, ModelOptions modelOptions = {},
                     bool exportEnabled = false, bool debugToFile = false)
        : m_ReduceFp32ToFp16(reduceFp32ToFp16)
        , m_Debug(debug)
        , m_DebugToFile(debugToFile)
        , m_ReduceFp32ToBf16(reduceFp32ToBf16)
        , m_shapeInferenceMethod(armnn::ShapeInferenceMethod::ValidateOnly)
        , m_ImportEnabled(importEnabled)
        , m_ModelOptions(modelOptions)
        , m_ProfilingEnabled(false)
        , m_ExportEnabled(exportEnabled)
        , m_AllowExpandedDims(false)
    {
        if (m_ReduceFp32ToFp16 && m_ReduceFp32ToBf16)
        {
            throw InvalidArgumentException("BFloat16 and Float16 optimization cannot be enabled at the same time.");
        }
    }
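
The Fp16 and Bf16 reductions are mutually exclusive; as an illustrative sketch, the following construction would throw:

    // Requests both reductions at once, so the constructor throws
    // armnn::InvalidArgumentException.
    armnn::OptimizerOptions bad(/*reduceFp32ToFp16=*/true,
                                /*debug=*/false,
                                /*reduceFp32ToBf16=*/true,
                                /*importEnabled=*/false);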

◆ OptimizerOptions() [3/3]

OptimizerOptions ( bool  reduceFp32ToFp16,
bool  debug,
bool  reduceFp32ToBf16 = false,
ShapeInferenceMethod  shapeInferenceMethod = armnn::ShapeInferenceMethod::ValidateOnly,
bool  importEnabled = false,
ModelOptions  modelOptions = {},
bool  exportEnabled = false,
bool  debugToFile = false,
bool  allowExpandedDims = false 
)
inline

Definition at line 161 of file INetwork.hpp.

    OptimizerOptions(bool reduceFp32ToFp16, bool debug, bool reduceFp32ToBf16 = false,
                     ShapeInferenceMethod shapeInferenceMethod = armnn::ShapeInferenceMethod::ValidateOnly,
                     bool importEnabled = false, ModelOptions modelOptions = {},
                     bool exportEnabled = false, bool debugToFile = false,
                     bool allowExpandedDims = false)
        : m_ReduceFp32ToFp16(reduceFp32ToFp16)
        , m_Debug(debug)
        , m_DebugToFile(debugToFile)
        , m_ReduceFp32ToBf16(reduceFp32ToBf16)
        , m_shapeInferenceMethod(shapeInferenceMethod)
        , m_ImportEnabled(importEnabled)
        , m_ModelOptions(modelOptions)
        , m_ProfilingEnabled(false)
        , m_ExportEnabled(exportEnabled)
        , m_AllowExpandedDims(allowExpandedDims)
    {
        if (m_ReduceFp32ToFp16 && m_ReduceFp32ToBf16)
        {
            throw InvalidArgumentException("BFloat16 and Float16 optimization cannot be enabled at the same time.");
        }
    }
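
As a sketch, this overload is the one to use when a non-default shape inference method is wanted:

    // Infer unspecified output shapes during optimization instead of
    // only validating the shapes already present.
    armnn::OptimizerOptions options(/*reduceFp32ToFp16=*/false,
                                    /*debug=*/false,
                                    /*reduceFp32ToBf16=*/false,
                                    armnn::ShapeInferenceMethod::InferAndValidate);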

Member Function Documentation

◆ ToString()

const std::string ToString ( ) const
inline

Definition at line 182 of file INetwork.hpp.

References BackendOptions::BackendOption::GetName(), BackendOptions::BackendOption::GetValue(), BackendOptions::Var::ToString(), and armnn::ValidateOnly.

Referenced by armnn::Optimize().

    const std::string ToString() const
    {
        std::stringstream stream;
        stream << "OptimizerOptions: \n";
        stream << "\tReduceFp32ToFp16: " << m_ReduceFp32ToFp16 << "\n";
        stream << "\tReduceFp32ToBf16: " << m_ReduceFp32ToBf16 << "\n";
        stream << "\tDebug: " << m_Debug << "\n";
        stream << "\tDebug to file: " << m_DebugToFile << "\n";
        stream << "\tShapeInferenceMethod: " <<
            (m_shapeInferenceMethod == ShapeInferenceMethod::ValidateOnly ? "ValidateOnly" : "InferAndValidate") << "\n";
        stream << "\tImportEnabled: " << m_ImportEnabled << "\n";
        stream << "\tExportEnabled: " << m_ExportEnabled << "\n";
        stream << "\tProfilingEnabled: " << m_ProfilingEnabled << "\n";
        stream << "\tAllowExpandedDims: " << m_AllowExpandedDims << "\n";

        stream << "\tModelOptions: \n";
        for (auto optionsGroup : m_ModelOptions)
        {
            for (size_t i = 0; i < optionsGroup.GetOptionCount(); i++)
            {
                const armnn::BackendOptions::BackendOption option = optionsGroup.GetOption(i);
                stream << "\t\tBackend: " << optionsGroup.GetBackendId() << "\n"
                       << "\t\t\tOption: " << option.GetName() << "\n"
                       << "\t\t\tValue: " << std::string(option.GetValue().ToString()) << "\n";
            }
        }

        return stream.str();
    }
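
A short usage sketch: ToString() is handy for logging the options actually in effect before calling armnn::Optimize().

    #include <iostream>

    armnn::OptimizerOptions options;
    options.m_ProfilingEnabled = true;
    std::cout << options.ToString();   // one line per flag, plus any per-backend ModelOptions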

Member Data Documentation

◆ m_AllowExpandedDims

bool m_AllowExpandedDims

Definition at line 248 of file INetwork.hpp.

◆ m_Debug

bool m_Debug

Definition at line 220 of file INetwork.hpp.

Referenced by armnn::Optimize().

◆ m_DebugToFile

bool m_DebugToFile

Definition at line 223 of file INetwork.hpp.

Referenced by armnn::Optimize().

◆ m_ExportEnabled

bool m_ExportEnabled

Definition at line 245 of file INetwork.hpp.

Referenced by armnn::Optimize().

◆ m_ImportEnabled

bool m_ImportEnabled
Examples:
CustomMemoryAllocatorSample.cpp.

Definition at line 236 of file INetwork.hpp.

Referenced by armnn::Optimize().
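
As a sketch, import is often enabled together with export when input and output buffers are provided by the caller (for instance via a custom allocator, as in CustomMemoryAllocatorSample.cpp):

    armnn::OptimizerOptions options;
    options.m_ImportEnabled = true;   // map caller-provided input buffers instead of copying
    options.m_ExportEnabled = true;   // map caller-provided output buffers instead of copying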

◆ m_ModelOptions

ModelOptions m_ModelOptions

Definition at line 239 of file INetwork.hpp.
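
As a sketch, per-backend options are appended as armnn::BackendOptions entries ("FastMathEnabled" is a published GpuAcc backend option; substitute whichever options the target backend accepts):

    // ModelOptions is a std::vector<armnn::BackendOptions>; each entry carries
    // name/value options for one backend.
    armnn::BackendOptions gpuAcc("GpuAcc", {{"FastMathEnabled", true}});

    armnn::OptimizerOptions options;
    options.m_ModelOptions.push_back(gpuAcc);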

◆ m_ProfilingEnabled

bool m_ProfilingEnabled

Definition at line 242 of file INetwork.hpp.

◆ m_ReduceFp32ToBf16

bool m_ReduceFp32ToBf16

Reduces all Fp32 operators in the model to Bf16 for faster processing.

This feature works best if all operators of the model are in Fp32. ArmNN inserts conversion layers around operators that were not in Fp32 to begin with, or that are not supported in Bf16. The overhead of these conversions can lead to slower overall performance if too many conversions are required.

Definition at line 230 of file INetwork.hpp.

Referenced by armnn::Optimize().

◆ m_ReduceFp32ToFp16

bool m_ReduceFp32ToFp16

Reduces all Fp32 operators in the model to Fp16 for faster processing.

This feature works best if all operators of the model are in Fp32. ArmNN inserts conversion layers around operators that were not in Fp32 to begin with, or that are not supported in Fp16. The overhead of these conversions can lead to slower overall performance if too many conversions are required.

Definition at line 217 of file INetwork.hpp.

Referenced by armnn::Optimize(), ArmnnDriverImpl::PrepareArmnnModel(), and ArmnnDriverImpl::PrepareArmnnModelFromCache().

◆ m_shapeInferenceMethod

ShapeInferenceMethod m_shapeInferenceMethod

Definition at line 233 of file INetwork.hpp.

Referenced by armnn::Optimize().


The documentation for this struct was generated from the following file:

INetwork.hpp