ArmNN 21.11

OptimizerOptions Struct Reference

ArmNN performs an optimization on each model/network before it gets loaded for execution. More...

#include <INetwork.hpp>

Public Member Functions

 OptimizerOptions ()
 
 OptimizerOptions (bool reduceFp32ToFp16, bool debug, bool reduceFp32ToBf16, bool importEnabled, ModelOptions modelOptions={})
 
 OptimizerOptions (bool reduceFp32ToFp16, bool debug, bool reduceFp32ToBf16=false, ShapeInferenceMethod shapeInferenceMethod=armnn::ShapeInferenceMethod::ValidateOnly, bool importEnabled=false, ModelOptions modelOptions={})
 

Public Attributes

bool m_ReduceFp32ToFp16
 Reduces all Fp32 operators in the model to Fp16 for faster processing. More...
 
bool m_Debug
 
bool m_ReduceFp32ToBf16
 Reduces all Fp32 operators in the model to Bf16 for faster processing. More...
 
ShapeInferenceMethod m_shapeInferenceMethod
 
bool m_ImportEnabled
 
ModelOptions m_ModelOptions
 
bool m_ProfilingEnabled
 

Detailed Description

ArmNN performs an optimization on each model/network before it gets loaded for execution.

OptimizerOptions provides a set of features that allow the user to customize this optimization on a per-model basis.

Examples:
CustomMemoryAllocatorSample.cpp.

Definition at line 120 of file INetwork.hpp.
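
Below is a minimal usage sketch (not part of the original reference; the trivial pass-through network is a stand-in for a real parsed model): construct an OptimizerOptions, hand it to armnn::Optimize(), and load the result into the runtime.

    #include <armnn/ArmNN.hpp>

    int main()
    {
        using namespace armnn;

        // Create the runtime and a trivial network (input wired straight to output).
        IRuntimePtr runtime = IRuntime::Create(IRuntime::CreationOptions());
        INetworkPtr network = INetwork::Create();

        IConnectableLayer* input  = network->AddInputLayer(0);
        IConnectableLayer* output = network->AddOutputLayer(0);
        input->GetOutputSlot(0).Connect(output->GetInputSlot(0));
        input->GetOutputSlot(0).SetTensorInfo(TensorInfo({1, 3}, DataType::Float32));

        // Customize the optimization on a per-model basis.
        OptimizerOptions optimizerOptions;
        optimizerOptions.m_ReduceFp32ToFp16 = true; // trade Fp32 precision for speed

        IOptimizedNetworkPtr optimizedNetwork = Optimize(*network,
                                                         {Compute::CpuRef},
                                                         runtime->GetDeviceSpec(),
                                                         optimizerOptions);

        NetworkId networkId;
        runtime->LoadNetwork(networkId, std::move(optimizedNetwork));
        return 0;
    }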

Constructor & Destructor Documentation

◆ OptimizerOptions() [1/3]

OptimizerOptions ( )
inline

Definition at line 122 of file INetwork.hpp.

    OptimizerOptions()
        : m_ReduceFp32ToFp16(false)
        , m_Debug(false)
        , m_ReduceFp32ToBf16(false)
        , m_shapeInferenceMethod(armnn::ShapeInferenceMethod::ValidateOnly)
        , m_ImportEnabled(false)
        , m_ModelOptions()
        , m_ProfilingEnabled(false)
    {}

◆ OptimizerOptions() [2/3]

OptimizerOptions ( bool  reduceFp32ToFp16,
bool  debug,
bool  reduceFp32ToBf16,
bool  importEnabled,
ModelOptions  modelOptions = {} 
)
inline

Definition at line 132 of file INetwork.hpp.

References armnn::ValidateOnly.

    OptimizerOptions(bool reduceFp32ToFp16, bool debug, bool reduceFp32ToBf16,
                     bool importEnabled, ModelOptions modelOptions = {})
        : m_ReduceFp32ToFp16(reduceFp32ToFp16)
        , m_Debug(debug)
        , m_ReduceFp32ToBf16(reduceFp32ToBf16)
        , m_shapeInferenceMethod(armnn::ShapeInferenceMethod::ValidateOnly)
        , m_ImportEnabled(importEnabled)
        , m_ModelOptions(modelOptions)
        , m_ProfilingEnabled(false)
    {
        if (m_ReduceFp32ToFp16 && m_ReduceFp32ToBf16)
        {
            throw InvalidArgumentException("BFloat16 and Float16 optimization cannot be enabled at the same time.");
        }
    }
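
As an illustrative sketch (assuming the reconstructed guard above), requesting both reductions through this constructor throws InvalidArgumentException:

    #include <armnn/ArmNN.hpp>
    #include <iostream>

    int main()
    {
        try
        {
            // Fp16 and Bf16 reduction are mutually exclusive.
            armnn::OptimizerOptions options(/*reduceFp32ToFp16=*/true,
                                            /*debug=*/false,
                                            /*reduceFp32ToBf16=*/true,
                                            /*importEnabled=*/false);
        }
        catch (const armnn::InvalidArgumentException& e)
        {
            std::cout << e.what() << std::endl;
        }
        return 0;
    }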

◆ OptimizerOptions() [3/3]

OptimizerOptions ( bool  reduceFp32ToFp16,
bool  debug,
bool  reduceFp32ToBf16 = false,
ShapeInferenceMethod  shapeInferenceMethod = armnn::ShapeInferenceMethod::ValidateOnly,
bool  importEnabled = false,
ModelOptions  modelOptions = {} 
)
inline

Definition at line 148 of file INetwork.hpp.

    OptimizerOptions(bool reduceFp32ToFp16, bool debug, bool reduceFp32ToBf16 = false,
                     ShapeInferenceMethod shapeInferenceMethod = armnn::ShapeInferenceMethod::ValidateOnly,
                     bool importEnabled = false, ModelOptions modelOptions = {})
        : m_ReduceFp32ToFp16(reduceFp32ToFp16)
        , m_Debug(debug)
        , m_ReduceFp32ToBf16(reduceFp32ToBf16)
        , m_shapeInferenceMethod(shapeInferenceMethod)
        , m_ImportEnabled(importEnabled)
        , m_ModelOptions(modelOptions)
        , m_ProfilingEnabled(false)
    {
        if (m_ReduceFp32ToFp16 && m_ReduceFp32ToBf16)
        {
            throw InvalidArgumentException("BFloat16 and Float16 optimization cannot be enabled at the same time.");
        }
    }
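A short sketch of typical use (an illustration, not from the original page): selecting armnn::ShapeInferenceMethod::InferAndValidate through this constructor while leaving the trailing parameters at their defaults.

    armnn::OptimizerOptions options(/*reduceFp32ToFp16=*/false,
                                    /*debug=*/true,
                                    /*reduceFp32ToBf16=*/false,
                                    armnn::ShapeInferenceMethod::InferAndValidate);
    // m_ImportEnabled remains false and m_ModelOptions remains empty.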

Member Data Documentation

◆ m_Debug

bool m_Debug

◆ m_ImportEnabled

bool m_ImportEnabled
Examples:
CustomMemoryAllocatorSample.cpp.

Definition at line 186 of file INetwork.hpp.

Referenced by armnn::Optimize(), and TEST_SUITE().

◆ m_ModelOptions

ModelOptions m_ModelOptions

Definition at line 189 of file INetwork.hpp.

◆ m_ProfilingEnabled

bool m_ProfilingEnabled

◆ m_ReduceFp32ToBf16

bool m_ReduceFp32ToBf16

Reduces all Fp32 operators in the model to Bf16 for faster processing.

This feature works best if all operators of the model are in Fp32. ArmNN will add conversion layers between layers that weren't in Fp32 in the first place or if the operator is not supported in Bf16. The overhead of these conversions can lead to a slower overall performance if too many conversions are required.

Definition at line 180 of file INetwork.hpp.

Referenced by InferenceModel< IParser, TDataType >::InferenceModel(), armnn::Optimize(), and ExecuteNetworkParams::ValidateParams().
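
A brief sketch (illustrative only) of enabling this flag on a default-constructed options object:

    armnn::OptimizerOptions options;   // both reductions are off by default
    options.m_ReduceFp32ToBf16 = true; // reduce Fp32 operators to Bf16 where supported
    // Works best when the whole model is Fp32; otherwise the inserted
    // conversion layers can outweigh the Bf16 speedup.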

◆ m_ReduceFp32ToFp16

bool m_ReduceFp32ToFp16

Reduces all Fp32 operators in the model to Fp16 for faster processing.

This feature works best if all operators of the model are in Fp32. ArmNN will add conversion layers between layers that weren't in Fp32 in the first place or if the operator is not supported in Fp16. The overhead of these conversions can lead to a slower overall performance if too many conversions are required.

Definition at line 170 of file INetwork.hpp.

Referenced by InferenceModel< IParser, TDataType >::InferenceModel(), armnn::Optimize(), TEST_SUITE(), and ExecuteNetworkParams::ValidateParams().
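
For comparison, a minimal sketch (illustrative only): enabling the Fp16 reduction at construction time. With only two arguments the call resolves to the third constructor, so the remaining parameters keep their defaults:

    armnn::OptimizerOptions options(/*reduceFp32ToFp16=*/true, /*debug=*/false);
    // Same caveat as Bf16: too many inserted conversion layers can slow the model overall.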

◆ m_shapeInferenceMethod

ShapeInferenceMethod m_shapeInferenceMethod

Definition at line 183 of file INetwork.hpp.

The documentation for this struct was generated from the following file:

INetwork.hpp