ArmNN 24.02
OptimizerOptionsOpaqueImpl Struct Reference

#include <Network.hpp>

Public Member Functions

 ~OptimizerOptionsOpaqueImpl ()=default
 
 OptimizerOptionsOpaqueImpl ()
 
 OptimizerOptionsOpaqueImpl (bool reduceFp32ToFp16, bool debug, bool reduceFp32ToBf16, bool importEnabled, ModelOptions modelOptions={}, bool exportEnabled=false, bool debugToFile=false)
 
 OptimizerOptionsOpaqueImpl (bool reduceFp32ToFp16, bool debug, bool reduceFp32ToBf16, ShapeInferenceMethod shapeInferenceMethod, bool importEnabled, ModelOptions modelOptions, bool exportEnabled, bool debugToFile, bool allowExpandedDims)
 

Public Attributes

bool m_ReduceFp32ToFp16 = false
 Reduces all Fp32 operators in the model to Fp16 for faster processing.

bool m_Debug = false
 Add debug data for easier troubleshooting.

bool m_DebugToFile = false
 Pass debug data to separate output files for easier troubleshooting.

bool m_ReduceFp32ToBf16 = false
 Note: this feature has been replaced by enabling Fast Math in the compute library backend options.

ShapeInferenceMethod m_shapeInferenceMethod = armnn::ShapeInferenceMethod::ValidateOnly
 Infer output size when not available.

bool m_ImportEnabled = false
 Enable Import.

ModelOptions m_ModelOptions
 Enable Model Options.

bool m_ProfilingEnabled = false
 Enable profiling dump of the optimizer phase.

bool m_ExportEnabled = false
 Enable Export.

bool m_AllowExpandedDims = false
 When calculating tensor sizes, dimensions of size == 1 will be ignored.

Detailed Description

Definition at line 307 of file Network.hpp.

Constructor & Destructor Documentation

◆ ~OptimizerOptionsOpaqueImpl()

~OptimizerOptionsOpaqueImpl ()=default

◆ OptimizerOptionsOpaqueImpl() [1/3]

OptimizerOptionsOpaqueImpl ( )
inline explicit

Definition at line 311 of file Network.hpp.

    : m_ReduceFp32ToFp16(false)
    , m_Debug(false)
    , m_DebugToFile(false)
    , m_ReduceFp32ToBf16(false)
    , m_shapeInferenceMethod(armnn::ShapeInferenceMethod::ValidateOnly)
    , m_ImportEnabled(false)
    , m_ModelOptions()
    , m_ProfilingEnabled(false)
    , m_ExportEnabled(false)
    , m_AllowExpandedDims(false)
    {
    }

References armnn::ValidateOnly.

◆ OptimizerOptionsOpaqueImpl() [2/3]

OptimizerOptionsOpaqueImpl (bool reduceFp32ToFp16,
                            bool debug,
                            bool reduceFp32ToBf16,
                            bool importEnabled,
                            ModelOptions modelOptions = {},
                            bool exportEnabled = false,
                            bool debugToFile = false)
inline explicit

Definition at line 325 of file Network.hpp.

OptimizerOptionsOpaqueImpl(bool reduceFp32ToFp16, bool debug, bool reduceFp32ToBf16,
                           bool importEnabled, ModelOptions modelOptions = {},
                           bool exportEnabled = false, bool debugToFile = false)
    : m_ReduceFp32ToFp16(reduceFp32ToFp16)
    , m_Debug(debug)
    , m_DebugToFile(debugToFile)
    , m_ReduceFp32ToBf16(reduceFp32ToBf16)
    , m_shapeInferenceMethod(armnn::ShapeInferenceMethod::ValidateOnly)
    , m_ImportEnabled(importEnabled)
    , m_ModelOptions(modelOptions)
    , m_ProfilingEnabled(false)
    , m_ExportEnabled(exportEnabled)
    , m_AllowExpandedDims(false)
    {
    }

◆ OptimizerOptionsOpaqueImpl() [3/3]

OptimizerOptionsOpaqueImpl (bool reduceFp32ToFp16,
                            bool debug,
                            bool reduceFp32ToBf16,
                            ShapeInferenceMethod shapeInferenceMethod,
                            bool importEnabled,
                            ModelOptions modelOptions,
                            bool exportEnabled,
                            bool debugToFile,
                            bool allowExpandedDims)
inline explicit

Definition at line 341 of file Network.hpp.

OptimizerOptionsOpaqueImpl(bool reduceFp32ToFp16, bool debug, bool reduceFp32ToBf16,
                           ShapeInferenceMethod shapeInferenceMethod, bool importEnabled,
                           ModelOptions modelOptions, bool exportEnabled, bool debugToFile,
                           bool allowExpandedDims)
    : m_ReduceFp32ToFp16(reduceFp32ToFp16)
    , m_Debug(debug)
    , m_DebugToFile(debugToFile)
    , m_ReduceFp32ToBf16(reduceFp32ToBf16)
    , m_shapeInferenceMethod(shapeInferenceMethod)
    , m_ImportEnabled(importEnabled)
    , m_ModelOptions(modelOptions)
    , m_ProfilingEnabled(false)
    , m_ExportEnabled(exportEnabled)
    , m_AllowExpandedDims(allowExpandedDims)
    {
    }

References armnn::debug.

Member Data Documentation

◆ m_AllowExpandedDims

bool m_AllowExpandedDims = false

When calculating tensor sizes, dimensions of size == 1 will be ignored.

Definition at line 393 of file Network.hpp.
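To illustrate the intent of this flag, here is a hypothetical, self-contained sketch (not ArmNN code; the helper names are invented): when size-1 dimensions are ignored, two shapes that differ only by such dimensions compare as equivalent.

```cpp
#include <algorithm>
#include <cassert>
#include <iterator>
#include <vector>

// Drop all dimensions of size 1 from a shape, mimicking the behaviour
// described for m_AllowExpandedDims.
std::vector<unsigned int> SqueezeOnes(const std::vector<unsigned int>& shape)
{
    std::vector<unsigned int> out;
    std::copy_if(shape.begin(), shape.end(), std::back_inserter(out),
                 [](unsigned int d) { return d != 1; });
    return out;
}

// Two shapes match if they are equal after ignoring size-1 dimensions,
// e.g. {1, 3, 224, 224} matches {3, 224, 224}.
bool ShapesMatchIgnoringOnes(const std::vector<unsigned int>& a,
                             const std::vector<unsigned int>& b)
{
    return SqueezeOnes(a) == SqueezeOnes(b);
}
```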

◆ m_Debug

bool m_Debug = false

Add debug data for easier troubleshooting.

Definition at line 368 of file Network.hpp.

◆ m_DebugToFile

bool m_DebugToFile = false

Pass debug data to separate output files for easier troubleshooting.

Definition at line 371 of file Network.hpp.

◆ m_ExportEnabled

bool m_ExportEnabled = false

Enable Export.

Definition at line 390 of file Network.hpp.

◆ m_ImportEnabled

bool m_ImportEnabled = false

Enable Import.

Definition at line 381 of file Network.hpp.

◆ m_ModelOptions

ModelOptions m_ModelOptions

Enable Model Options.

Definition at line 384 of file Network.hpp.
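ModelOptions carries backend-specific key/value options. The sketch below is a minimal self-contained mirror of that idea, not the real armnn::BackendOptions type; it shows the shape of the data, using the Fast Math option (the replacement for m_ReduceFp32ToBf16) as the example entry.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <variant>
#include <vector>

// Hypothetical mirror of a backend option value (bool, int, or string).
using OptionValue = std::variant<bool, int, std::string>;

// One named set of options for a single backend.
struct BackendOptionsSketch
{
    std::string m_BackendId;
    std::map<std::string, OptionValue> m_Options;
};

// Model options are a list of per-backend option sets.
using ModelOptionsSketch = std::vector<BackendOptionsSketch>;

// Build options enabling the Fast Math path on the CpuAcc backend.
ModelOptionsSketch MakeFastMathOptions()
{
    return { { "CpuAcc", { { "FastMathEnabled", OptionValue(true) } } } };
}
```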

◆ m_ProfilingEnabled

bool m_ProfilingEnabled = false

Enable profiling dump of the optimizer phase.

Definition at line 387 of file Network.hpp.

◆ m_ReduceFp32ToBf16

bool m_ReduceFp32ToBf16 = false

Note: this feature has been replaced by enabling Fast Math in the compute library backend options.

This is currently a placeholder option.

Definition at line 375 of file Network.hpp.

◆ m_ReduceFp32ToFp16

bool m_ReduceFp32ToFp16 = false

Reduces all Fp32 operators in the model to Fp16 for faster processing.

If the first preferred backend does not have Fp16 support, this option is disabled. If a converted Fp16 value would be infinity, it is rounded to the closest finite Fp16 value.

Note: this feature works best if all operators in the model are in Fp32. ArmNN adds conversion layers around layers that were not in Fp32 in the first place, or whose operator is not supported in Fp16. The overhead of these conversions can lead to slower overall performance if too many conversions are required.

Definition at line 365 of file Network.hpp.
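The rounding rule above (overflowing Fp32 values clamp to the closest finite Fp16 value rather than becoming infinity) can be sketched with a self-contained helper. This is an illustration of the described behaviour, not ArmNN's conversion code; 65504 is the largest finite Fp16 (IEEE 754 binary16) value.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Clamp a finite float into the representable Fp16 range so that the
// subsequent Fp32 -> Fp16 conversion cannot overflow to infinity.
float ClampToFiniteFp16Range(float value)
{
    constexpr float maxFiniteFp16 = 65504.0f; // largest finite binary16 value
    if (std::isfinite(value))
    {
        return std::clamp(value, -maxFiniteFp16, maxFiniteFp16);
    }
    return value; // infinities/NaNs already present are left untouched
}
```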

◆ m_shapeInferenceMethod

ShapeInferenceMethod m_shapeInferenceMethod = armnn::ShapeInferenceMethod::ValidateOnly

Infer output size when not available.

Definition at line 378 of file Network.hpp.
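The difference between the two ShapeInferenceMethod values (ValidateOnly validates declared output shapes; InferAndValidate can also fill in shapes that were not specified) can be sketched as follows. This is a hypothetical illustration with invented names, not ArmNN's shape-inference code.

```cpp
#include <cassert>
#include <optional>
#include <stdexcept>
#include <vector>

using Shape = std::vector<unsigned int>;

// Mirror of the two documented shape-inference behaviours.
enum class Method { ValidateOnly, InferAndValidate };

// ValidateOnly: a declared output shape must be present and match the
// inferred shape. InferAndValidate: a missing shape is filled in from
// inference; a declared shape is still validated.
Shape ResolveOutputShape(std::optional<Shape> declared, const Shape& inferred, Method m)
{
    if (!declared.has_value())
    {
        if (m == Method::InferAndValidate)
        {
            return inferred; // fill in the missing shape
        }
        throw std::runtime_error("output shape not specified");
    }
    if (*declared != inferred)
    {
        throw std::runtime_error("shape mismatch");
    }
    return *declared;
}
```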


The documentation for this struct was generated from the following file:
Network.hpp