ArmNN 22.02
ProgramOptions Struct Reference

Holds and parses program options for the ExecuteNetwork application. More...

#include <ExecuteNetworkProgramOptions.hpp>

Public Member Functions

 ProgramOptions ()
 Initializes ProgramOptions by adding options to the underlying cxxopts::Options object. More...
 
 ProgramOptions (int ac, const char *av[])
 Runs ParseOptions() on initialization. More...
 
void ParseOptions (int ac, const char *av[])
 Parses program options from the command line or another source and stores the values in member variables. More...
 
void ValidateExecuteNetworkParams ()
 Ensures that the parameters for ExecuteNetwork fit together. More...
 
void ValidateRuntimeOptions ()
 Ensures that the runtime options are valid. More...
 

Public Attributes

cxxopts::Options m_CxxOptions
 
cxxopts::ParseResult m_CxxResult
 
ExecuteNetworkParams m_ExNetParams
 
armnn::IRuntime::CreationOptions m_RuntimeOptions
 

Detailed Description

Holds and parses program options for the ExecuteNetwork application.

Definition at line 21 of file ExecuteNetworkProgramOptions.hpp.
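
A minimal usage sketch (assumed wiring, not code taken verbatim from the ExecuteNetwork sources; the variable names are illustrative): main() constructs ProgramOptions from the command-line arguments, which parses and validates them, and the resulting settings are then read from the public members.

    #include "ExecuteNetworkProgramOptions.hpp"

    int main(int argc, const char* argv[])
    {
        // The (int, const char*[]) constructor runs ParseOptions(), which parses the
        // command line and validates both the ExecuteNetwork parameters and the
        // runtime options.
        ProgramOptions options(argc, argv);

        // Parsed settings are exposed through the public members.
        const ExecuteNetworkParams& execParams          = options.m_ExNetParams;
        const armnn::IRuntime::CreationOptions& runtime = options.m_RuntimeOptions;

        // ... hand execParams and runtime to the rest of ExecuteNetwork (omitted) ...
        return 0;
    }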

Constructor & Destructor Documentation

◆ ProgramOptions() [1/2]

Initializes ProgramOptions by adding options to the underlying cxxopts::Options object.

(Does not parse any options)

Definition at line 171 of file ExecuteNetworkProgramOptions.cpp.

References ARMNN_ASSERT_MSG, ARMNN_LOG, armnn::BackendRegistryInstance(), BackendRegistry::GetBackendIdsAsString(), ExecuteNetworkParams::m_CachedNetworkFilePath, IRuntime::CreationOptions::ExternalProfilingOptions::m_CapturePeriod, ExecuteNetworkParams::m_Concurrent, m_CxxOptions, ExecuteNetworkParams::m_DequantizeOutput, ExecuteNetworkParams::m_DontPrintOutputs, IRuntime::CreationOptions::m_DynamicBackendsPath, ExecuteNetworkParams::m_EnableBf16TurboMode, ExecuteNetworkParams::m_EnableDelegate, ExecuteNetworkParams::m_EnableFastMath, ExecuteNetworkParams::m_EnableFp16TurboMode, ExecuteNetworkParams::m_EnableLayerDetails, ExecuteNetworkParams::m_EnableProfiling, IRuntime::CreationOptions::ExternalProfilingOptions::m_EnableProfiling, m_ExNetParams, IRuntime::CreationOptions::ExternalProfilingOptions::m_FileFormat, IRuntime::CreationOptions::ExternalProfilingOptions::m_FileOnly, IRuntime::CreationOptions::ExternalProfilingOptions::m_IncomingCaptureFile, ExecuteNetworkParams::m_InferOutputShape, ExecuteNetworkParams::m_Iterations, ExecuteNetworkParams::m_MLGOTuningFilePath, ExecuteNetworkParams::m_ModelPath, ExecuteNetworkParams::m_NumberOfThreads, IRuntime::CreationOptions::ExternalProfilingOptions::m_OutgoingCaptureFile, ExecuteNetworkParams::m_OutputDetailsOnlyToStdOut, ExecuteNetworkParams::m_OutputDetailsToStdOut, ExecuteNetworkParams::m_ParseUnsupported, ExecuteNetworkParams::m_PrintIntermediate, IRuntime::CreationOptions::m_ProfilingOptions, ExecuteNetworkParams::m_QuantizeInput, m_RuntimeOptions, ExecuteNetworkParams::m_SaveCachedNetwork, ExecuteNetworkParams::m_SimultaneousIterations, ExecuteNetworkParams::m_SubgraphId, ExecuteNetworkParams::m_ThreadPoolSize, ExecuteNetworkParams::m_ThresholdTime, IRuntime::CreationOptions::ExternalProfilingOptions::m_TimelineEnabled, ExecuteNetworkParams::m_TuningLevel, and ExecuteNetworkParams::m_TuningPath.

171  : m_CxxOptions{"ExecuteNetwork",
172  "Executes a neural network model using the provided input "
173  "tensor. Prints the resulting output tensor."}
174 {
175  try
176  {
177  // cxxopts doesn't provide a mechanism to ensure required options are given. There is a
178  // separate function CheckRequiredOptions() for that.
179  m_CxxOptions.add_options("a) Required")
180  ("c,compute",
181  "Which device to run layers on by default. If a single device doesn't support all layers in the model "
182  "you can specify a second or third to fall back on. Possible choices: "
183  + armnn::BackendRegistryInstance().GetBackendIdsAsString()
184  + " NOTE: Multiple compute devices need to be passed as a comma separated list without whitespaces "
185  "e.g. GpuAcc,CpuAcc,CpuRef or by repeating the program option e.g. '-c CpuAcc -c CpuRef'. "
186  "Duplicates are ignored.",
187  cxxopts::value<std::vector<std::string>>())
188 
189  ("f,model-format",
190  "armnn-binary, onnx-binary, onnx-text, tflite-binary",
191  cxxopts::value<std::string>())
192 
193  ("m,model-path",
194  "Path to model file, e.g. .armnn, .prototxt, .tflite, .onnx",
195  cxxopts::value<std::string>(m_ExNetParams.m_ModelPath))
196 
197  ("i,input-name",
198  "Identifier of the input tensors in the network separated by comma.",
199  cxxopts::value<std::string>())
200 
201  ("o,output-name",
202  "Identifier of the output tensors in the network separated by comma.",
203  cxxopts::value<std::string>());
204 
205  m_CxxOptions.add_options("b) General")
206  ("b,dynamic-backends-path",
207  "Path where to load any available dynamic backend from. "
208  "If left empty (the default), dynamic backends will not be used.",
209  cxxopts::value<std::string>(m_RuntimeOptions.m_DynamicBackendsPath))
210 
211  ("n,concurrent",
212  "This option is for Arm NN internal asynchronous testing purposes. "
213  "False by default. If set to true will use std::launch::async or the Arm NN thread pool, "
214  "if 'thread-pool-size' is greater than 0, for asynchronous execution.",
215  cxxopts::value<bool>(m_ExNetParams.m_Concurrent)->default_value("false")->implicit_value("true"))
216 
217  ("d,input-tensor-data",
218  "Path to files containing the input data as a flat array separated by whitespace. "
219  "Several paths can be passed by separating them with a comma if the network has multiple inputs "
220  "or you wish to run the model multiple times with different input data using the 'iterations' option. "
221  "If not specified, the network will be run with dummy data (useful for profiling).",
222  cxxopts::value<std::string>()->default_value(""))
223 
224  ("h,help", "Display usage information")
225 
226  ("infer-output-shape",
227  "Infers output tensor shape from input tensor shape and validates where applicable (where supported by "
228  "parser)",
229  cxxopts::value<bool>(m_ExNetParams.m_InferOutputShape)->default_value("false")->implicit_value("true"))
230 
231  ("iterations",
232  "Number of iterations to run the network for, default is set to 1. "
233  "If you wish to run the model with different input data for every execution you can do so by "
234  "supplying more input file paths to the 'input-tensor-data' option. "
235  "Note: The number of input files provided must be divisible by the number of inputs of the model. "
236  "e.g. Your model has 2 inputs and you supply 4 input files. If you set 'iterations' to 6 the first "
237  "run will consume the first two inputs, the second the next two and the last will begin from the "
238  "start and use the first two inputs again. "
239  "Note: If the 'concurrent' option is enabled all iterations will be run asynchronously.",
240  cxxopts::value<size_t>(m_ExNetParams.m_Iterations)->default_value("1"))
241 
242  ("l,dequantize-output",
243  "If this option is enabled, all quantized outputs will be dequantized to float. "
244  "If unset, default to not get dequantized. "
245  "Accepted values (true or false)"
246  " (Not available when executing ArmNNTfLiteDelegate or TfliteInterpreter)",
247  cxxopts::value<bool>(m_ExNetParams.m_DequantizeOutput)->default_value("false")->implicit_value("true"))
248 
249  ("p,print-intermediate-layers",
250  "If this option is enabled, the output of every graph layer will be printed.",
251  cxxopts::value<bool>(m_ExNetParams.m_PrintIntermediate)->default_value("false")
252  ->implicit_value("true"))
253 
254  ("parse-unsupported",
255  "Add unsupported operators as stand-in layers (where supported by parser)",
256  cxxopts::value<bool>(m_ExNetParams.m_ParseUnsupported)->default_value("false")->implicit_value("true"))
257 
258  ("do-not-print-output",
259  "The default behaviour of ExecuteNetwork is to print the resulting outputs on the console. "
260  "This behaviour can be changed by adding this flag to your command.",
261  cxxopts::value<bool>(m_ExNetParams.m_DontPrintOutputs)->default_value("false")->implicit_value("true"))
262 
263  ("q,quantize-input",
264  "If this option is enabled, all float inputs will be quantized as appropriate for the model's inputs. "
265  "If unset, default to not quantized. Accepted values (true or false)"
266  " (Not available when executing ArmNNTfLiteDelegate or TfliteInterpreter)",
267  cxxopts::value<bool>(m_ExNetParams.m_QuantizeInput)->default_value("false")->implicit_value("true"))
268  ("r,threshold-time",
269  "Threshold time is the maximum allowed time for inference measured in milliseconds. If the actual "
270  "inference time is greater than the threshold time, the test will fail. By default, no threshold "
271  "time is used.",
272  cxxopts::value<double>(m_ExNetParams.m_ThresholdTime)->default_value("0.0"))
273 
274  ("s,input-tensor-shape",
275  "The shape of the input tensors in the network as a flat array of integers separated by comma."
276  "Several shapes can be passed by separating them with a colon (:).",
277  cxxopts::value<std::string>())
278 
279  ("v,visualize-optimized-model",
280  "Enables built optimized model visualizer. If unset, defaults to off.",
281  cxxopts::value<bool>(m_ExNetParams.m_EnableLayerDetails)->default_value("false")
282  ->implicit_value("true"))
283 
284  ("w,write-outputs-to-file",
285  "Comma-separated list of output file paths keyed with the binding-id of the output slot. "
286  "If left empty (the default), the output tensors will not be written to a file.",
287  cxxopts::value<std::string>())
288 
289  ("x,subgraph-number",
290  "Id of the subgraph to be executed. Defaults to 0."
291  " (Not available when executing ArmNNTfLiteDelegate or TfliteInterpreter)",
292  cxxopts::value<size_t>(m_ExNetParams.m_SubgraphId)->default_value("0"))
293 
294  ("y,input-type",
295  "The type of the input tensors in the network separated by comma. "
296  "If unset, defaults to \"float\" for all defined inputs. "
297  "Accepted values (float, int, qasymms8 or qasymmu8).",
298  cxxopts::value<std::string>())
299 
300  ("z,output-type",
301  "The type of the output tensors in the network separated by comma. "
302  "If unset, defaults to \"float\" for all defined outputs. "
303  "Accepted values (float, int, qasymms8 or qasymmu8).",
304  cxxopts::value<std::string>())
305 
306  ("T,tflite-executor",
307  "Set the executor for the tflite model: parser, delegate, tflite. "
308  "parser is the ArmNNTfLiteParser, "
309  "delegate is the ArmNNTfLiteDelegate, "
310  "tflite is the TfliteInterpreter",
311  cxxopts::value<std::string>()->default_value("parser"))
312 
313  ("D,armnn-tflite-delegate",
314  "Enable Arm NN TfLite delegate. "
315  "DEPRECATED: This option is deprecated please use tflite-executor instead",
316  cxxopts::value<bool>(m_ExNetParams.m_EnableDelegate)->default_value("false")->implicit_value("true"))
317 
318  ("simultaneous-iterations",
319  "Number of simultaneous iterations to async-run the network for, default is set to 1 (disabled). "
320  "When thread-pool-size is set the Arm NN thread pool is used. Otherwise std::launch::async is used. "
321  "DEPRECATED: This option is deprecated and will be removed soon. "
322  "Please use the option 'iterations' combined with 'concurrent' instead.",
323  cxxopts::value<size_t>(m_ExNetParams.m_SimultaneousIterations)->default_value("1"))
324 
325  ("thread-pool-size",
326  "Number of Arm NN threads to use when running the network asynchronously via the Arm NN thread pool. "
327  "The default is set to 0 which equals disabled. If 'thread-pool-size' is greater than 0 the "
328  "'concurrent' option is automatically set to true.",
329  cxxopts::value<size_t>(m_ExNetParams.m_ThreadPoolSize)->default_value("0"));
330 
331  m_CxxOptions.add_options("c) Optimization")
332  ("bf16-turbo-mode",
333  "If this option is enabled, FP32 layers, "
334  "weights and biases will be converted to BFloat16 where the backend supports it",
335  cxxopts::value<bool>(m_ExNetParams.m_EnableBf16TurboMode)
336  ->default_value("false")->implicit_value("true"))
337 
338  ("enable-fast-math",
339  "Enables fast_math options in backends that support it. Using the fast_math flag can lead to "
340  "performance improvements but may result in reduced or different precision.",
341  cxxopts::value<bool>(m_ExNetParams.m_EnableFastMath)->default_value("false")->implicit_value("true"))
342 
343  ("number-of-threads",
344  "Assign the number of threads used by the CpuAcc backend. "
345  "Input value must be between 1 and 64. "
346  "Default is set to 0 (Backend will decide number of threads to use).",
347  cxxopts::value<unsigned int>(m_ExNetParams.m_NumberOfThreads)->default_value("0"))
348 
349  ("save-cached-network",
350  "Enables saving of the cached network to a file given with the cached-network-filepath option. "
351  "See also --cached-network-filepath",
352  cxxopts::value<bool>(m_ExNetParams.m_SaveCachedNetwork)
353  ->default_value("false")->implicit_value("true"))
354 
355  ("cached-network-filepath",
356  "If non-empty, the given file will be used to load/save the cached network. "
357  "If save-cached-network is given then the cached network will be saved to the given file. "
358  "To save the cached network a file must already exist. "
359  "If save-cached-network is not given then the cached network will be loaded from the given file. "
360  "This will remove initial compilation time of kernels and speed up the first execution.",
361  cxxopts::value<std::string>(m_ExNetParams.m_CachedNetworkFilePath)->default_value(""))
362 
363  ("fp16-turbo-mode",
364  "If this option is enabled, FP32 layers, "
365  "weights and biases will be converted to FP16 where the backend supports it",
366  cxxopts::value<bool>(m_ExNetParams.m_EnableFp16TurboMode)
367  ->default_value("false")->implicit_value("true"))
368 
369  ("tuning-level",
370  "Sets the tuning level which enables a tuning run which will update/create a tuning file. "
371  "Available options are: 1 (Rapid), 2 (Normal), 3 (Exhaustive). "
372  "Requires tuning-path to be set, default is set to 0 (No tuning run)",
373  cxxopts::value<int>(m_ExNetParams.m_TuningLevel)->default_value("0"))
374 
375  ("tuning-path",
376  "Path to tuning file. Enables use of CL tuning",
377  cxxopts::value<std::string>(m_ExNetParams.m_TuningPath))
378 
379  ("MLGOTuningFilePath",
380  "Path to tuning file. Enables use of CL MLGO tuning",
381  cxxopts::value<std::string>(m_ExNetParams.m_MLGOTuningFilePath));
382 
383  m_CxxOptions.add_options("d) Profiling")
384  ("a,enable-external-profiling",
385  "If enabled external profiling will be switched on",
386  cxxopts::value<bool>(m_RuntimeOptions.m_ProfilingOptions.m_EnableProfiling)
387  ->default_value("false")->implicit_value("true"))
388 
389  ("e,event-based-profiling",
390  "Enables built in profiler. If unset, defaults to off.",
391  cxxopts::value<bool>(m_ExNetParams.m_EnableProfiling)->default_value("false")->implicit_value("true"))
392 
393  ("g,file-only-external-profiling",
394  "If enabled then the 'file-only' test mode of external profiling will be enabled",
395  cxxopts::value<bool>(m_RuntimeOptions.m_ProfilingOptions.m_FileOnly)
396  ->default_value("false")->implicit_value("true"))
397 
398  ("file-format",
399  "If profiling is enabled specifies the output file format",
400  cxxopts::value<std::string>(m_RuntimeOptions.m_ProfilingOptions.m_FileFormat)->default_value("binary"))
401 
402  ("j,outgoing-capture-file",
403  "If specified the outgoing external profiling packets will be captured in this binary file",
404  cxxopts::value<std::string>(m_RuntimeOptions.m_ProfilingOptions.m_OutgoingCaptureFile))
405 
406  ("k,incoming-capture-file",
407  "If specified the incoming external profiling packets will be captured in this binary file",
408  cxxopts::value<std::string>(m_RuntimeOptions.m_ProfilingOptions.m_IncomingCaptureFile))
409 
410  ("timeline-profiling",
411  "If enabled timeline profiling will be switched on, requires external profiling",
412  cxxopts::value<bool>(m_RuntimeOptions.m_ProfilingOptions.m_TimelineEnabled)
413  ->default_value("false")->implicit_value("true"))
414 
415  ("u,counter-capture-period",
416  "If profiling is enabled in 'file-only' mode this is the capture period that will be used in the test",
417  cxxopts::value<uint32_t>(m_RuntimeOptions.m_ProfilingOptions.m_CapturePeriod)->default_value("150"))
418 
419  ("output-network-details",
420  "Outputs layer tensor infos and descriptors to std out along with profiling events. Defaults to off.",
421  cxxopts::value<bool>(m_ExNetParams.m_OutputDetailsToStdOut)->default_value("false")
422  ->implicit_value("true"))
423  ("output-network-details-only",
424  "Outputs layer tensor infos and descriptors to std out without profiling events. Defaults to off.",
425  cxxopts::value<bool>(m_ExNetParams.m_OutputDetailsOnlyToStdOut)->default_value("false")
426  ->implicit_value("true"));
427 
428  }
429  catch (const std::exception& e)
430  {
431  ARMNN_ASSERT_MSG(false, "Caught unexpected exception");
432  ARMNN_LOG(fatal) << "Fatal internal error: " << e.what();
433  exit(EXIT_FAILURE);
434  }
435 }
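
Most of the options above use cxxopts' bound-value pattern: passing a member by reference into cxxopts::value<T>() makes parse() write the parsed value straight into that member, while default_value() and implicit_value() cover the unset and flag-only cases. Below is a minimal stand-alone sketch of that pattern; the DemoParams struct and option names are hypothetical, and the cxxopts header path may differ from the copy bundled with Arm NN.

    #include <cxxopts.hpp>  // header name/path is an assumption; it may differ in the Arm NN third-party tree
    #include <cstddef>
    #include <iostream>

    struct DemoParams        // hypothetical stand-in for ExecuteNetworkParams
    {
        bool        m_Concurrent = false;
        std::size_t m_Iterations = 1;
    };

    int main(int argc, char* argv[])
    {
        DemoParams params;
        cxxopts::Options options("demo", "cxxopts bound-value pattern");
        options.add_options()
            ("n,concurrent", "Run inferences asynchronously",
             cxxopts::value<bool>(params.m_Concurrent)->default_value("false")->implicit_value("true"))
            ("iterations", "Number of iterations",
             cxxopts::value<std::size_t>(params.m_Iterations)->default_value("1"));

        // parse() fills the bound members directly; no lookup in the parse result is needed.
        options.parse(argc, argv);

        std::cout << "concurrent=" << params.m_Concurrent
                  << " iterations=" << params.m_Iterations << std::endl;
        return 0;
    }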

◆ ProgramOptions() [2/2]

ProgramOptions (int ac, const char *av[])

Runs ParseOptions() on initialization.

Definition at line 437 of file ExecuteNetworkProgramOptions.cpp.

References ParseOptions().

437  : ProgramOptions()
438 {
439  ParseOptions(ac, av);
440 }

Member Function Documentation

◆ ParseOptions()

void ParseOptions (int ac, const char *av[])

Parses program options from the command line or another source and stores the values in member variables.

It also checks the validity of the parsed parameters. Throws a cxxopts exception if parsing fails or an armnn exception if parameters are not valid.
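
As a hedged illustration of those two failure modes (an assumed helper, not part of ExecuteNetwork; cxxopts::OptionException is an assumption about the bundled cxxopts version, and armnn::Exception is the generic Arm NN base exception):

    #include "ExecuteNetworkProgramOptions.hpp"
    #include <armnn/Exceptions.hpp>
    #include <cxxopts.hpp>  // header name/path may differ in the Arm NN third-party tree
    #include <cstdlib>
    #include <iostream>

    int ParseOrFail(int argc, const char* argv[], ProgramOptions& options)
    {
        try
        {
            options.ParseOptions(argc, argv);       // parse, then validate
        }
        catch (const cxxopts::OptionException& e)   // command line could not be parsed
        {
            std::cerr << "Invalid command line: " << e.what() << std::endl;
            return EXIT_FAILURE;
        }
        catch (const armnn::Exception& e)           // parsed parameters are not valid
        {
            std::cerr << "Invalid parameters: " << e.what() << std::endl;
            return EXIT_FAILURE;
        }
        return EXIT_SUCCESS;
    }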

Definition at line 442 of file ExecuteNetworkProgramOptions.cpp.

References ARMNN_LOG, ExecuteNetworkParams::ArmNNTfLiteDelegate, ExecuteNetworkParams::ArmNNTfLiteParser, CheckForDeprecatedOptions(), CheckOptionDependencies(), CheckRequiredOptions(), GetBackendIDs(), IRuntime::CreationOptions::m_BackendOptions, ExecuteNetworkParams::m_ComputeDevices, ExecuteNetworkParams::m_Concurrent, m_CxxOptions, m_CxxResult, ExecuteNetworkParams::m_DynamicBackendsPath, IRuntime::CreationOptions::m_DynamicBackendsPath, ExecuteNetworkParams::m_EnableDelegate, IRuntime::CreationOptions::m_EnableGpuProfiling, ExecuteNetworkParams::m_EnableProfiling, m_ExNetParams, ExecuteNetworkParams::m_GenerateTensorData, ExecuteNetworkParams::m_InputNames, ExecuteNetworkParams::m_InputTensorDataFilePaths, ExecuteNetworkParams::m_InputTensorShapes, ExecuteNetworkParams::m_InputTypes, ExecuteNetworkParams::m_Iterations, ExecuteNetworkParams::m_MLGOTuningFilePath, ExecuteNetworkParams::m_ModelFormat, ExecuteNetworkParams::m_OutputNames, ExecuteNetworkParams::m_OutputTensorFiles, ExecuteNetworkParams::m_OutputTypes, m_RuntimeOptions, ExecuteNetworkParams::m_SimultaneousIterations, ExecuteNetworkParams::m_TfLiteExecutor, ExecuteNetworkParams::m_ThreadPoolSize, ExecuteNetworkParams::m_TuningLevel, ExecuteNetworkParams::m_TuningPath, ParseArray(), ParseStringList(), armnn::stringUtils::StringTrimCopy(), ExecuteNetworkParams::TfliteInterpreter, ValidateExecuteNetworkParams(), and ValidateRuntimeOptions().

Referenced by main(), and ProgramOptions().

443 {
444  // Parses the command-line.
445  m_CxxResult = m_CxxOptions.parse(ac, av);
446 
447  if (m_CxxResult.count("help") || ac <= 1)
448  {
449  std::cout << m_CxxOptions.help() << std::endl;
450  exit(EXIT_SUCCESS);
451  }
452 
453  CheckRequiredOptions(m_CxxResult);
454  CheckOptionDependencies(m_CxxResult);
455  CheckForDeprecatedOptions(m_CxxResult);
456 
457  // Some options can't be assigned directly because they need some post-processing:
458  auto computeDevices = GetOptionValue<std::vector<std::string>>("compute", m_CxxResult);
459  m_ExNetParams.m_ComputeDevices = GetBackendIDs(computeDevices);
460  m_ExNetParams.m_ModelFormat =
461  armnn::stringUtils::StringTrimCopy(GetOptionValue<std::string>("model-format", m_CxxResult));
462  m_ExNetParams.m_InputNames =
463  ParseStringList(GetOptionValue<std::string>("input-name", m_CxxResult), ",");
464  m_ExNetParams.m_InputTensorDataFilePaths =
465  ParseStringList(GetOptionValue<std::string>("input-tensor-data", m_CxxResult), ",");
466  m_ExNetParams.m_OutputNames =
467  ParseStringList(GetOptionValue<std::string>("output-name", m_CxxResult), ",");
468  m_ExNetParams.m_InputTypes =
469  ParseStringList(GetOptionValue<std::string>("input-type", m_CxxResult), ",");
470  m_ExNetParams.m_OutputTypes =
471  ParseStringList(GetOptionValue<std::string>("output-type", m_CxxResult), ",");
472  m_ExNetParams.m_OutputTensorFiles =
473  ParseStringList(GetOptionValue<std::string>("write-outputs-to-file", m_CxxResult), ",");
474  m_ExNetParams.m_GenerateTensorData =
475  m_ExNetParams.m_InputTensorDataFilePaths.empty();
476  m_ExNetParams.m_DynamicBackendsPath = m_RuntimeOptions.m_DynamicBackendsPath;
477 
478  m_RuntimeOptions.m_EnableGpuProfiling = m_ExNetParams.m_EnableProfiling;
479 
480  std::string tfliteExecutor = GetOptionValue<std::string>("tflite-executor", m_CxxResult);
481 
482  if (tfliteExecutor.size() == 0 || tfliteExecutor == "parser")
483  {
484  m_ExNetParams.m_TfLiteExecutor = ExecuteNetworkParams::TfLiteExecutor::ArmNNTfLiteParser;
485  }
486  else if (tfliteExecutor == "delegate")
487  {
488  m_ExNetParams.m_TfLiteExecutor = ExecuteNetworkParams::TfLiteExecutor::ArmNNTfLiteDelegate;
489  }
490  else if (tfliteExecutor == "tflite")
491  {
492  m_ExNetParams.m_TfLiteExecutor = ExecuteNetworkParams::TfLiteExecutor::TfliteInterpreter;
493  }
494  else
495  {
496  ARMNN_LOG(info) << fmt::format("Invalid tflite-executor option '{}'.", tfliteExecutor);
497  throw armnn::InvalidArgumentException ("Invalid tflite-executor option");
498  }
499 
500  // For backwards compatibility when deprecated options are used
501  if (m_ExNetParams.m_EnableDelegate)
502  {
503  m_ExNetParams.m_TfLiteExecutor = ExecuteNetworkParams::TfLiteExecutor::ArmNNTfLiteDelegate;
504  }
505  if (m_ExNetParams.m_SimultaneousIterations > 1)
506  {
507  m_ExNetParams.m_Iterations = m_ExNetParams.m_SimultaneousIterations;
508  m_ExNetParams.m_Concurrent = true;
509  }
510 
511  // Set concurrent to true if the user expects to run inferences asynchronously
512  if (m_ExNetParams.m_ThreadPoolSize > 0)
513  {
514  m_ExNetParams.m_Concurrent = true;
515  }
516 
517  // Parse input tensor shape from the string we got from the command-line.
518  std::vector<std::string> inputTensorShapesVector =
519  ParseStringList(GetOptionValue<std::string>("input-tensor-shape", m_CxxResult), ":");
520 
521  if (!inputTensorShapesVector.empty())
522  {
523  m_ExNetParams.m_InputTensorShapes.reserve(inputTensorShapesVector.size());
524 
525  for(const std::string& shape : inputTensorShapesVector)
526  {
527  std::stringstream ss(shape);
528  std::vector<unsigned int> dims = ParseArray(ss);
529 
530  m_ExNetParams.m_InputTensorShapes.push_back(
531  std::make_unique<armnn::TensorShape>(static_cast<unsigned int>(dims.size()), dims.data()));
532  }
533  }
534 
535  // We have to validate ExecuteNetworkParams first so that the tuning path and level is validated
536  ValidateExecuteNetworkParams();
537 
538  // Parse CL tuning parameters to runtime options
539  if (!m_ExNetParams.m_TuningPath.empty())
540  {
541  m_RuntimeOptions.m_BackendOptions.emplace_back(
542  armnn::BackendOptions
543  {
544  "GpuAcc",
545  {
546  {"TuningLevel", m_ExNetParams.m_TuningLevel},
547  {"TuningFile", m_ExNetParams.m_TuningPath.c_str()},
548  {"KernelProfilingEnabled", m_ExNetParams.m_EnableProfiling},
549  {"MLGOTuningFilePath", m_ExNetParams.m_MLGOTuningFilePath}
550  }
551  }
552  );
553  }
554 
555  ValidateRuntimeOptions();
556 }

◆ ValidateExecuteNetworkParams()

void ValidateExecuteNetworkParams ( )

Ensures that the parameters for ExecuteNetwork fit together.

Definition at line 156 of file ExecuteNetworkProgramOptions.cpp.

References m_ExNetParams, and ExecuteNetworkParams::ValidateParams().

Referenced by ParseOptions().

157 {
158  m_ExNetParams.ValidateParams();
159 }

◆ ValidateRuntimeOptions()

void ValidateRuntimeOptions ( )

Ensures that the runtime options are valid.

Definition at line 161 of file ExecuteNetworkProgramOptions.cpp.

References LogAndThrowFatal(), IRuntime::CreationOptions::ExternalProfilingOptions::m_EnableProfiling, IRuntime::CreationOptions::m_ProfilingOptions, m_RuntimeOptions, and IRuntime::CreationOptions::ExternalProfilingOptions::m_TimelineEnabled.

Referenced by ParseOptions().

162 {
163  if (m_RuntimeOptions.m_ProfilingOptions.m_TimelineEnabled &&
164  !m_RuntimeOptions.m_ProfilingOptions.m_EnableProfiling)
165  {
166  LogAndThrowFatal("Timeline profiling requires external profiling to be turned on");
167  }
168 }

Member Data Documentation

◆ m_CxxOptions

cxxopts::Options m_CxxOptions

Definition at line 41 of file ExecuteNetworkProgramOptions.hpp.

Referenced by ParseOptions(), and ProgramOptions().

◆ m_CxxResult

cxxopts::ParseResult m_CxxResult

Definition at line 42 of file ExecuteNetworkProgramOptions.hpp.

Referenced by ParseOptions().

◆ m_ExNetParams

ExecuteNetworkParams m_ExNetParams

Referenced by ParseOptions(), ProgramOptions(), and ValidateExecuteNetworkParams().

◆ m_RuntimeOptions

armnn::IRuntime::CreationOptions m_RuntimeOptions

Referenced by ParseOptions(), ProgramOptions(), and ValidateRuntimeOptions().


The documentation for this struct was generated from the following files:

ExecuteNetworkProgramOptions.hpp
ExecuteNetworkProgramOptions.cpp