ArmNN 21.08
tflite Namespace Reference

Functions

TfLiteDelegate * tflite_plugin_create_delegate (char **options_keys, char **options_values, size_t num_options, void(*report_error)(const char *))
 Create an ArmNN delegate plugin.
 
void tflite_plugin_destroy_delegate (TfLiteDelegate *delegate)
 Destroy a given delegate plugin.
 

Variables

std::vector< std::string > gpu_options
 This file defines two symbols that need to be exported to use the TFLite external delegate provider.
 

Function Documentation

◆ tflite_plugin_create_delegate()

TfLiteDelegate* tflite::tflite_plugin_create_delegate (char** options_keys,
                                                       char** options_values,
                                                       size_t num_options,
                                                       void (*report_error)(const char*))

Create an ArmNN delegate plugin.

Available options:

Option key: "backends"
Possible values: ["EthosNPU"/"GpuAcc"/"CpuAcc"/"CpuRef"]
Description: A comma-separated list (without whitespace) of backends to use for execution. Execution falls back to the next backend in the list if the previous one does not support an operation, e.g. "GpuAcc,CpuAcc"

Option key: "logging-severity"
Possible values: ["trace"/"debug"/"info"/"warning"/"error"/"fatal"]
Description: Sets the logging severity level for ArmNN. Logging is turned off if this option is not provided.

Option key: "gpu-tuning-level"
Possible values: ["0"/"1"/"2"/"3"]
Description: 0=UseOnly(default), 1=RapidTuning, 2=NormalTuning, 3=ExhaustiveTuning. Requires the gpu-tuning-file option. Levels 1, 2 and 3 create a tuning file; level 0 applies the tunings from an existing file

Option key: "gpu-mlgo-tuning-file"
Possible values: [filenameString]
Description: File name for the MLGO tuning file

Option key: "gpu-tuning-file"
Possible values: [filenameString]
Description: File name for the tuning file.

Option key: "gpu-kernel-profiling-enabled"
Possible values: ["true"/"false"]
Description: Enables GPU kernel profiling

Option key: "save-cached-network"
Possible values: ["true"/"false"]
Description: Enables saving of the cached network to a file, specified with the cached-network-filepath option

Option key: "cached-network-filepath"
Possible values: [filenameString]
Description: If non-empty, the given file is used to load/save the cached network. If save-cached-network is set, the cached network is saved to the given file; the file must already exist for saving to succeed. If save-cached-network is not set, the cached network is loaded from the given file. This removes the initial kernel compilation time and speeds up the first execution.

Option key: "enable-fast-math"
Possible values: ["true"/"false"]
Description: Enables fast_math options in backends that support it

Option key: "number-of-threads"
Possible values: ["1"-"64"]
Description: Assign the number of threads used by the CpuAcc backend. The default is 0 (the backend decides how many threads to use).

Option key: "reduce-fp32-to-fp16"
Possible values: ["true"/"false"]
Description: Reduce Fp32 data to Fp16 for faster processing

Option key: "reduce-fp32-to-bf16"
Possible values: ["true"/"false"]
Description: Reduce Fp32 data to Bf16 for faster processing

Option key: "debug-data"
Possible values: ["true"/"false"]
Description: Add debug data for easier troubleshooting

Option key: "memory-import"
Possible values: ["true"/"false"]
Description: Enable memory import

Parameters
  [in]      options_keys    Delegate option names
  [in]      options_values  Delegate option values
  [in]      num_options     Number of delegate options
  [in,out]  report_error    Error callback function

Returns
  An ArmNN delegate if it succeeds, else NULL
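
A minimal caller sketch follows, assuming the application links directly against the delegate library (or has otherwise resolved the symbol). The option values, the error callback name and the declarations are illustrative assumptions, not part of this page. Note that the option values must be writable buffers, because the "backends" value is tokenised in place with strtok (see the definition below).

// Hedged example: building the key/value option arrays and creating the delegate.
// TfLiteDelegate comes from tensorflow/lite/c/common.h in a real build.
#include <cstddef>
#include <cstdio>
#include "tensorflow/lite/c/common.h"

// Declarations of the two symbols exported by the ArmNN delegate plugin.
extern "C" {
TfLiteDelegate* tflite_plugin_create_delegate(char** options_keys,
                                              char** options_values,
                                              size_t num_options,
                                              void (*report_error)(const char*));
void tflite_plugin_destroy_delegate(TfLiteDelegate* delegate);
}

static void ReportError(const char* message)
{
    std::fprintf(stderr, "ArmNN delegate error: %s\n", message);
}

int main()
{
    // Writable value buffers: the plugin tokenises the "backends" value in place.
    char backendsKey[]   = "backends";
    char backendsValue[] = "GpuAcc,CpuAcc";   // falls back to CpuAcc if GpuAcc lacks support
    char severityKey[]   = "logging-severity";
    char severityValue[] = "info";
    char threadsKey[]    = "number-of-threads";
    char threadsValue[]  = "4";

    char* keys[]   = {backendsKey, severityKey, threadsKey};
    char* values[] = {backendsValue, severityValue, threadsValue};

    TfLiteDelegate* delegate =
        tflite_plugin_create_delegate(keys, values, 3, ReportError);
    if (delegate == nullptr)
    {
        return 1; // creation failed; ReportError will have printed the reason
    }

    // ... register the delegate with a TfLite interpreter here ...

    tflite_plugin_destroy_delegate(delegate);
    return 0;
}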

Definition at line 116 of file armnn_external_delegate.cpp.

References DelegateOptions::AddBackendOption(), OptimizerOptions::m_Debug, OptimizerOptions::m_ImportEnabled, OptimizerOptions::m_ModelOptions, OptimizerOptions::m_ReduceFp32ToBf16, OptimizerOptions::m_ReduceFp32ToFp16, armnn::numeric_cast(), DelegateOptions::SetBackends(), DelegateOptions::SetLoggingSeverity(), DelegateOptions::SetOptimizerOptions(), armnnDelegate::TfLiteArmnnDelegateCreate(), and armnnDelegate::TfLiteArmnnDelegateOptionsDefault().

{
    // Returning null indicates an error during delegate creation, so we initialize with that
    TfLiteDelegate* delegate = nullptr;
    try
    {
        // (Initializes with CpuRef backend)
        armnnDelegate::DelegateOptions options = armnnDelegate::TfLiteArmnnDelegateOptionsDefault();
        armnn::OptimizerOptions optimizerOptions;
        for (size_t i = 0; i < num_options; ++i)
        {
            // Process backends
            if (std::string(options_keys[i]) == std::string("backends"))
            {
                // The backend option is a comma separated string of backendIDs that needs to be split
                std::vector<armnn::BackendId> backends;
                char* pch;
                pch = strtok(options_values[i], ",");
                while (pch != NULL)
                {
                    backends.push_back(pch);
                    pch = strtok(NULL, ",");
                }
                options.SetBackends(backends);
            }
            // Process logging level
            else if (std::string(options_keys[i]) == std::string("logging-severity"))
            {
                options.SetLoggingSeverity(options_values[i]);
            }
            // Process GPU backend options
            else if (std::string(options_keys[i]) == std::string("gpu-tuning-level"))
            {
                armnn::BackendOptions option("GpuAcc", {{"TuningLevel", atoi(options_values[i])}});
                options.AddBackendOption(option);
            }
            else if (std::string(options_keys[i]) == std::string("gpu-mlgo-tuning-file"))
            {
                armnn::BackendOptions option("GpuAcc", {{"MLGOTuningFilePath", std::string(options_values[i])}});
                options.AddBackendOption(option);
            }
            else if (std::string(options_keys[i]) == std::string("gpu-tuning-file"))
            {
                armnn::BackendOptions option("GpuAcc", {{"TuningFile", std::string(options_values[i])}});
                options.AddBackendOption(option);
            }
            else if (std::string(options_keys[i]) == std::string("gpu-kernel-profiling-enabled"))
            {
                armnn::BackendOptions option("GpuAcc", {{"KernelProfilingEnabled", (*options_values[i] != '0')}});
                options.AddBackendOption(option);
            }
            else if (std::string(options_keys[i]) == std::string("save-cached-network"))
            {
                armnn::BackendOptions option("GpuAcc", {{"SaveCachedNetwork", (*options_values[i] != '0')}});
                optimizerOptions.m_ModelOptions.push_back(option);
            }
            else if (std::string(options_keys[i]) == std::string("cached-network-filepath"))
            {
                armnn::BackendOptions option("GpuAcc", {{"CachedNetworkFilePath", std::string(options_values[i])}});
                optimizerOptions.m_ModelOptions.push_back(option);
            }
            // Process GPU & CPU backend options
            else if (std::string(options_keys[i]) == std::string("enable-fast-math"))
            {
                armnn::BackendOptions modelOptionGpu("GpuAcc", {{"FastMathEnabled", (*options_values[i] != '0')}});
                optimizerOptions.m_ModelOptions.push_back(modelOptionGpu);

                armnn::BackendOptions modelOptionCpu("CpuAcc", {{"FastMathEnabled", (*options_values[i] != '0')}});
                optimizerOptions.m_ModelOptions.push_back(modelOptionCpu);
            }
            // Process CPU backend options
            else if (std::string(options_keys[i]) == std::string("number-of-threads"))
            {
                unsigned int numberOfThreads = armnn::numeric_cast<unsigned int>(atoi(options_values[i]));
                armnn::BackendOptions modelOption("CpuAcc", {{"NumberOfThreads", numberOfThreads}});
                optimizerOptions.m_ModelOptions.push_back(modelOption);
            }
            // Process reduce-fp32-to-fp16 option
            else if (std::string(options_keys[i]) == std::string("reduce-fp32-to-fp16"))
            {
                optimizerOptions.m_ReduceFp32ToFp16 = *options_values[i] != '0';
            }
            // Process reduce-fp32-to-bf16 option
            else if (std::string(options_keys[i]) == std::string("reduce-fp32-to-bf16"))
            {
                optimizerOptions.m_ReduceFp32ToBf16 = *options_values[i] != '0';
            }
            // Process debug-data
            else if (std::string(options_keys[i]) == std::string("debug-data"))
            {
                optimizerOptions.m_Debug = *options_values[i] != '0';
            }
            // Process memory-import
            else if (std::string(options_keys[i]) == std::string("memory-import"))
            {
                optimizerOptions.m_ImportEnabled = *options_values[i] != '0';
            }
            else
            {
                throw armnn::Exception("Unknown option for the ArmNN Delegate given: " + std::string(options_keys[i]));
            }
        }
        options.SetOptimizerOptions(optimizerOptions);
        delegate = TfLiteArmnnDelegateCreate(options);
    }
    catch (const std::exception& ex)
    {
        if (report_error)
        {
            report_error(ex.what());
        }
    }
    return delegate;
}

◆ tflite_plugin_destroy_delegate()

void tflite::tflite_plugin_destroy_delegate (TfLiteDelegate* delegate)

Destroy a given delegate plugin.

Parameters
  [in]  delegate  Delegate to destruct

Definition at line 238 of file armnn_external_delegate.cpp.

References armnnDelegate::TfLiteArmnnDelegateDelete().

{
    armnnDelegate::TfLiteArmnnDelegateDelete(delegate);
}

Variable Documentation

◆ gpu_options

std::vector<std::string> gpu_options
Initial value:
{"gpu-tuning-level",
"gpu-tuning-file",
"gpu-kernel-profiling-enabled"}

This file defines two symbols that need to be exported to use the TFLite external delegate provider.

This is a plugin that can be used for fast integration of delegates into benchmark tests and other tools. It allows loading of a dynamic delegate library at runtime.

The external delegate also has TensorFlow Lite Python bindings, so the dynamic external delegate can be used directly from the TensorFlow Lite Python APIs.

See tensorflow/lite/delegates/external for details, or the TensorFlow Lite external delegate guide.
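
To illustrate the contract, the sketch below resolves the two exported symbols from the delegate shared library at runtime, which is essentially what the external delegate provider does. The library name "libarmnnDelegate.so", the option values and the helper type aliases are assumptions for this example.

// Hedged example: dynamically loading the delegate library and resolving the two exported symbols.
#include <cstddef>
#include <cstdio>
#include <dlfcn.h>

// Opaque forward declaration; the full definition lives in tensorflow/lite/c/common.h.
struct TfLiteDelegate;
using CreateFn  = TfLiteDelegate* (*)(char**, char**, size_t, void (*)(const char*));
using DestroyFn = void (*)(TfLiteDelegate*);

int main()
{
    // Library name is an assumption; adjust to wherever the delegate is installed.
    void* handle = dlopen("libarmnnDelegate.so", RTLD_NOW);
    if (!handle)
    {
        std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    // These are the two symbols the external delegate provider expects to find.
    auto create  = reinterpret_cast<CreateFn>(dlsym(handle, "tflite_plugin_create_delegate"));
    auto destroy = reinterpret_cast<DestroyFn>(dlsym(handle, "tflite_plugin_destroy_delegate"));
    if (!create || !destroy)
    {
        std::fprintf(stderr, "missing delegate symbols\n");
        dlclose(handle);
        return 1;
    }

    // Writable buffers, as the plugin may tokenise option values in place.
    char key[]   = "backends";
    char value[] = "CpuAcc";
    char* keys[]   = {key};
    char* values[] = {value};

    if (TfLiteDelegate* delegate = create(keys, values, 1, nullptr))
    {
        // ... hand the delegate to a TfLite interpreter ...
        destroy(delegate);
    }

    dlclose(handle);
    return 0;
}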

Definition at line 29 of file armnn_external_delegate.cpp.