ArmNN 21.08
ProgramOptions Struct Reference

Holds and parses program options for the ExecuteNetwork application. More...

#include <ExecuteNetworkProgramOptions.hpp>

Public Member Functions

 ProgramOptions ()
 Initializes ProgramOptions by adding options to the underlying cxxopts::options object. More...
 
 ProgramOptions (int ac, const char *av[])
 Runs ParseOptions() on initialization. More...
 
void ParseOptions (int ac, const char *av[])
 Parses program options from the command line or another source and stores the values in member variables. More...
 
void ValidateExecuteNetworkParams ()
 Ensures that the parameters for ExecuteNetwork fit together. More...
 
void ValidateRuntimeOptions ()
 Ensures that the runtime options are valid. More...
 

Public Attributes

cxxopts::Options m_CxxOptions
 
cxxopts::ParseResult m_CxxResult
 
ExecuteNetworkParams m_ExNetParams
 
armnn::IRuntime::CreationOptions m_RuntimeOptions
 

Detailed Description

Holds and parses program options for the ExecuteNetwork application.

Definition at line 21 of file ExecuteNetworkProgramOptions.hpp.
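
The struct is normally driven from an application's main(). A minimal usage sketch (a hypothetical main(), not the actual ExecuteNetwork entry point), assuming ExecuteNetworkProgramOptions.hpp is on the include path:

#include "ExecuteNetworkProgramOptions.hpp"

#include <iostream>

int main(int argc, const char* argv[])
{
    // The two-argument constructor adds all options and immediately runs
    // ParseOptions(), so the members below are already populated and validated.
    ProgramOptions options(argc, argv);

    std::cout << "Model: "      << options.m_ExNetParams.m_ModelPath  << "\n"
              << "Iterations: " << options.m_ExNetParams.m_Iterations << std::endl;
    return 0;
}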

Constructor & Destructor Documentation

◆ ProgramOptions() [1/2]

Initializes ProgramOptions by adding options to the underlying cxxopts::options object.

(Does not parse any options)

Definition at line 171 of file ExecuteNetworkProgramOptions.cpp.

References ARMNN_ASSERT_MSG, ARMNN_LOG, armnn::BackendRegistryInstance(), BackendRegistry::GetBackendIdsAsString(), ExecuteNetworkParams::m_CachedNetworkFilePath, IRuntime::CreationOptions::ExternalProfilingOptions::m_CapturePeriod, ExecuteNetworkParams::m_Concurrent, m_CxxOptions, ExecuteNetworkParams::m_DequantizeOutput, IRuntime::CreationOptions::m_DynamicBackendsPath, ExecuteNetworkParams::m_EnableBf16TurboMode, ExecuteNetworkParams::m_EnableDelegate, ExecuteNetworkParams::m_EnableFastMath, ExecuteNetworkParams::m_EnableFp16TurboMode, ExecuteNetworkParams::m_EnableLayerDetails, ExecuteNetworkParams::m_EnableProfiling, IRuntime::CreationOptions::ExternalProfilingOptions::m_EnableProfiling, m_ExNetParams, IRuntime::CreationOptions::ExternalProfilingOptions::m_FileFormat, IRuntime::CreationOptions::ExternalProfilingOptions::m_FileOnly, IRuntime::CreationOptions::ExternalProfilingOptions::m_IncomingCaptureFile, ExecuteNetworkParams::m_InferOutputShape, ExecuteNetworkParams::m_Iterations, ExecuteNetworkParams::m_MLGOTuningFilePath, ExecuteNetworkParams::m_ModelPath, ExecuteNetworkParams::m_NumberOfThreads, IRuntime::CreationOptions::ExternalProfilingOptions::m_OutgoingCaptureFile, ExecuteNetworkParams::m_OutputDetailsToStdOut, ExecuteNetworkParams::m_ParseUnsupported, ExecuteNetworkParams::m_PrintIntermediate, IRuntime::CreationOptions::m_ProfilingOptions, ExecuteNetworkParams::m_QuantizeInput, m_RuntimeOptions, ExecuteNetworkParams::m_SaveCachedNetwork, ExecuteNetworkParams::m_SimultaneousIterations, ExecuteNetworkParams::m_SubgraphId, ExecuteNetworkParams::m_ThreadPoolSize, ExecuteNetworkParams::m_ThresholdTime, IRuntime::CreationOptions::ExternalProfilingOptions::m_TimelineEnabled, ExecuteNetworkParams::m_TuningLevel, and ExecuteNetworkParams::m_TuningPath.

171  : m_CxxOptions{"ExecuteNetwork",
172  "Executes a neural network model using the provided input "
173  "tensor. Prints the resulting output tensor."}
174 {
175  try
176  {
177  // cxxopts doesn't provide a mechanism to ensure required options are given. There is a
178  // separate function CheckRequiredOptions() for that.
179  m_CxxOptions.add_options("a) Required")
180  ("c,compute",
181  "Which device to run layers on by default. If a single device doesn't support all layers in the model "
182  "you can specify a second or third to fall back on. Possible choices: "
184  + " NOTE: Multiple compute devices need to be passed as a comma separated list without whitespaces "
185  "e.g. GpuAcc,CpuAcc,CpuRef or by repeating the program option e.g. '-c Cpuacc -c CpuRef'. "
186  "Duplicates are ignored.",
187  cxxopts::value<std::vector<std::string>>())
188 
189  ("f,model-format",
190  "armnn-binary, onnx-binary, onnx-text, tflite-binary",
191  cxxopts::value<std::string>())
192 
193  ("m,model-path",
194  "Path to model file, e.g. .armnn, , .prototxt, .tflite, .onnx",
195  cxxopts::value<std::string>(m_ExNetParams.m_ModelPath))
196 
197  ("i,input-name",
198  "Identifier of the input tensors in the network separated by comma.",
199  cxxopts::value<std::string>())
200 
201  ("o,output-name",
202  "Identifier of the output tensors in the network separated by comma.",
203  cxxopts::value<std::string>());
204 
205  m_CxxOptions.add_options("b) General")
206  ("b,dynamic-backends-path",
207  "Path where to load any available dynamic backend from. "
208  "If left empty (the default), dynamic backends will not be used.",
209  cxxopts::value<std::string>(m_RuntimeOptions.m_DynamicBackendsPath))
210 
211  ("n,concurrent",
212  "This option is for Arm NN internal asynchronous testing purposes. "
213  "False by default. If set to true will use std::launch::async or the Arm NN thread pool, "
214  "if 'thread-pool-size' is greater than 0, for asynchronous execution.",
215  cxxopts::value<bool>(m_ExNetParams.m_Concurrent)->default_value("false")->implicit_value("true"))
216 
217  ("d,input-tensor-data",
218  "Path to files containing the input data as a flat array separated by whitespace. "
219  "Several paths can be passed by separating them with a comma if the network has multiple inputs "
220  "or you wish to run the model multiple times with different input data using the 'iterations' option. "
221  "If not specified, the network will be run with dummy data (useful for profiling).",
222  cxxopts::value<std::string>()->default_value(""))
223 
224  ("h,help", "Display usage information")
225 
226  ("infer-output-shape",
227  "Infers output tensor shape from input tensor shape and validate where applicable (where supported by "
228  "parser)",
229  cxxopts::value<bool>(m_ExNetParams.m_InferOutputShape)->default_value("false")->implicit_value("true"))
230 
231  ("iterations",
232  "Number of iterations to run the network for, default is set to 1. "
233  "If you wish to run the model with different input data for every execution you can do so by "
234  "supplying more input file paths to the 'input-tensor-data' option. "
235  "Note: The number of input files provided must be divisible by the number of inputs of the model. "
236  "e.g. Your model has 2 inputs and you supply 4 input files. If you set 'iterations' to 6 the first "
237  "run will consume the first two inputs, the second the next two and the last will begin from the "
238  "start and use the first two inputs again. "
239  "Note: If the 'concurrent' option is enabled all iterations will be run asynchronously.",
240  cxxopts::value<size_t>(m_ExNetParams.m_Iterations)->default_value("1"))
241 
242  ("l,dequantize-output",
243  "If this option is enabled, all quantized outputs will be dequantized to float. "
244  "If unset, default to not get dequantized. "
245  "Accepted values (true or false)",
246  cxxopts::value<bool>(m_ExNetParams.m_DequantizeOutput)->default_value("false")->implicit_value("true"))
247 
248  ("p,print-intermediate-layers",
249  "If this option is enabled, the output of every graph layer will be printed.",
250  cxxopts::value<bool>(m_ExNetParams.m_PrintIntermediate)->default_value("false")
251  ->implicit_value("true"))
252 
253  ("parse-unsupported",
254  "Add unsupported operators as stand-in layers (where supported by parser)",
255  cxxopts::value<bool>(m_ExNetParams.m_ParseUnsupported)->default_value("false")->implicit_value("true"))
256 
257  ("q,quantize-input",
258  "If this option is enabled, all float inputs will be quantized as appropriate for the model's inputs. "
259  "If unset, default to not quantized. Accepted values (true or false)",
260  cxxopts::value<bool>(m_ExNetParams.m_QuantizeInput)->default_value("false")->implicit_value("true"))
261 
262  ("r,threshold-time",
263  "Threshold time is the maximum allowed time for inference measured in milliseconds. If the actual "
264  "inference time is greater than the threshold time, the test will fail. By default, no threshold "
265  "time is used.",
266  cxxopts::value<double>(m_ExNetParams.m_ThresholdTime)->default_value("0.0"))
267 
268  ("s,input-tensor-shape",
269  "The shape of the input tensors in the network as a flat array of integers separated by comma."
270  "Several shapes can be passed by separating them with a colon (:).",
271  cxxopts::value<std::string>())
272 
273  ("v,visualize-optimized-model",
274  "Enables built optimized model visualizer. If unset, defaults to off.",
275  cxxopts::value<bool>(m_ExNetParams.m_EnableLayerDetails)->default_value("false")
276  ->implicit_value("true"))
277 
278  ("w,write-outputs-to-file",
279  "Comma-separated list of output file paths keyed with the binding-id of the output slot. "
280  "If left empty (the default), the output tensors will not be written to a file.",
281  cxxopts::value<std::string>())
282 
283  ("x,subgraph-number",
284  "Id of the subgraph to be executed. Defaults to 0.",
285  cxxopts::value<size_t>(m_ExNetParams.m_SubgraphId)->default_value("0"))
286 
287  ("y,input-type",
288  "The type of the input tensors in the network separated by comma. "
289  "If unset, defaults to \"float\" for all defined inputs. "
290  "Accepted values (float, int, qasymms8 or qasymmu8).",
291  cxxopts::value<std::string>())
292 
293  ("z,output-type",
294  "The type of the output tensors in the network separated by comma. "
295  "If unset, defaults to \"float\" for all defined outputs. "
296  "Accepted values (float, int, qasymms8 or qasymmu8).",
297  cxxopts::value<std::string>())
298 
299  ("T,tflite-executor",
300  "Set the executor for the tflite model: parser, delegate, tflite"
301  "parser is the ArmNNTfLiteParser, "
302  "delegate is the ArmNNTfLiteDelegate, "
303  "tflite is the TfliteInterpreter",
304  cxxopts::value<std::string>()->default_value("parser"))
305 
306  ("D,armnn-tflite-delegate",
307  "Enable Arm NN TfLite delegate. "
308  "DEPRECATED: This option is deprecated please use tflite-executor instead",
309  cxxopts::value<bool>(m_ExNetParams.m_EnableDelegate)->default_value("false")->implicit_value("true"))
310 
311  ("simultaneous-iterations",
312  "Number of simultaneous iterations to async-run the network for, default is set to 1 (disabled). "
313  "When thread-pool-size is set the Arm NN thread pool is used. Otherwise std::launch::async is used."
314  "DEPRECATED: This option is deprecated and will be removed soon. "
315  "Please use the option 'iterations' combined with 'concurrent' instead.",
316  cxxopts::value<size_t>(m_ExNetParams.m_SimultaneousIterations)->default_value("1"))
317 
318  ("thread-pool-size",
319  "Number of Arm NN threads to use when running the network asynchronously via the Arm NN thread pool. "
320  "The default is set to 0 which equals disabled. If 'thread-pool-size' is greater than 0 the "
321  "'concurrent' option is automatically set to true.",
322  cxxopts::value<size_t>(m_ExNetParams.m_ThreadPoolSize)->default_value("0"));
323 
324  m_CxxOptions.add_options("c) Optimization")
325  ("bf16-turbo-mode",
326  "If this option is enabled, FP32 layers, "
327  "weights and biases will be converted to BFloat16 where the backend supports it",
328  cxxopts::value<bool>(m_ExNetParams.m_EnableBf16TurboMode)
329  ->default_value("false")->implicit_value("true"))
330 
331  ("enable-fast-math",
332  "Enables fast_math options in backends that support it. Using the fast_math flag can lead to "
333  "performance improvements but may result in reduced or different precision.",
334  cxxopts::value<bool>(m_ExNetParams.m_EnableFastMath)->default_value("false")->implicit_value("true"))
335 
336  ("number-of-threads",
337  "Assign the number of threads used by the CpuAcc backend. "
338  "Input value must be between 1 and 64. "
339  "Default is set to 0 (Backend will decide number of threads to use).",
340  cxxopts::value<unsigned int>(m_ExNetParams.m_NumberOfThreads)->default_value("0"))
341 
342  ("save-cached-network",
343  "Enables saving of the cached network to a file given with the cached-network-filepath option. "
344  "See also --cached-network-filepath",
345  cxxopts::value<bool>(m_ExNetParams.m_SaveCachedNetwork)
346  ->default_value("false")->implicit_value("true"))
347 
348  ("cached-network-filepath",
349  "If non-empty, the given file will be used to load/save the cached network. "
350  "If save-cached-network is given then the cached network will be saved to the given file. "
351  "To save the cached network a file must already exist. "
352  "If save-cached-network is not given then the cached network will be loaded from the given file. "
353  "This will remove initial compilation time of kernels and speed up the first execution.",
354  cxxopts::value<std::string>(m_ExNetParams.m_CachedNetworkFilePath)->default_value(""))
355 
356  ("fp16-turbo-mode",
357  "If this option is enabled, FP32 layers, "
358  "weights and biases will be converted to FP16 where the backend supports it",
359  cxxopts::value<bool>(m_ExNetParams.m_EnableFp16TurboMode)
360  ->default_value("false")->implicit_value("true"))
361 
362  ("tuning-level",
363  "Sets the tuning level which enables a tuning run which will update/create a tuning file. "
364  "Available options are: 1 (Rapid), 2 (Normal), 3 (Exhaustive). "
365  "Requires tuning-path to be set, default is set to 0 (No tuning run)",
366  cxxopts::value<int>(m_ExNetParams.m_TuningLevel)->default_value("0"))
367 
368  ("tuning-path",
369  "Path to tuning file. Enables use of CL tuning",
370  cxxopts::value<std::string>(m_ExNetParams.m_TuningPath))
371 
372  ("MLGOTuningFilePath",
373  "Path to tuning file. Enables use of CL MLGO tuning",
374  cxxopts::value<std::string>(m_ExNetParams.m_MLGOTuningFilePath));
375 
376  m_CxxOptions.add_options("d) Profiling")
377  ("a,enable-external-profiling",
378  "If enabled external profiling will be switched on",
379  cxxopts::value<bool>(m_RuntimeOptions.m_ProfilingOptions.m_EnableProfiling)
380  ->default_value("false")->implicit_value("true"))
381 
382  ("e,event-based-profiling",
383  "Enables built in profiler. If unset, defaults to off.",
384  cxxopts::value<bool>(m_ExNetParams.m_EnableProfiling)->default_value("false")->implicit_value("true"))
385 
386  ("g,file-only-external-profiling",
387  "If enabled then the 'file-only' test mode of external profiling will be enabled",
388  cxxopts::value<bool>(m_RuntimeOptions.m_ProfilingOptions.m_FileOnly)
389  ->default_value("false")->implicit_value("true"))
390 
391  ("file-format",
392  "If profiling is enabled specifies the output file format",
393  cxxopts::value<std::string>(m_RuntimeOptions.m_ProfilingOptions.m_FileFormat)->default_value("binary"))
394 
395  ("j,outgoing-capture-file",
396  "If specified the outgoing external profiling packets will be captured in this binary file",
397  cxxopts::value<std::string>(m_RuntimeOptions.m_ProfilingOptions.m_OutgoingCaptureFile))
398 
399  ("k,incoming-capture-file",
400  "If specified the incoming external profiling packets will be captured in this binary file",
401  cxxopts::value<std::string>(m_RuntimeOptions.m_ProfilingOptions.m_IncomingCaptureFile))
402 
403  ("timeline-profiling",
404  "If enabled timeline profiling will be switched on, requires external profiling",
405  cxxopts::value<bool>(m_RuntimeOptions.m_ProfilingOptions.m_TimelineEnabled)
406  ->default_value("false")->implicit_value("true"))
407 
408  ("u,counter-capture-period",
409  "If profiling is enabled in 'file-only' mode this is the capture period that will be used in the test",
410  cxxopts::value<uint32_t>(m_RuntimeOptions.m_ProfilingOptions.m_CapturePeriod)->default_value("150"))
411 
412  ("output-network-details",
413  "Outputs layer tensor infos and descriptors to std out. Defaults to off.",
414  cxxopts::value<bool>(m_ExNetParams.m_OutputDetailsToStdOut)->default_value("false")
415  ->implicit_value("true"));
416  }
417  catch (const std::exception& e)
418  {
419  ARMNN_ASSERT_MSG(false, "Caught unexpected exception");
420  ARMNN_LOG(fatal) << "Fatal internal error: " << e.what();
421  exit(EXIT_FAILURE);
422  }
423 }
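
Each option above is bound directly to a member with cxxopts::value<T>(member), so a later parse() call writes the parsed value straight into m_ExNetParams or m_RuntimeOptions; default_value() supplies the value when the option is absent and implicit_value() when the option is given without an argument. A stripped-down sketch of that binding pattern (the struct and option names here are invented for illustration, and the cxxopts include path may differ in your build):

#include <cxxopts/cxxopts.hpp>

#include <iostream>
#include <string>

struct DemoParams      // hypothetical stand-in for ExecuteNetworkParams
{
    std::string m_ModelPath;
    bool        m_Verbose = false;
};

int main(int argc, char* argv[])
{
    DemoParams params;
    cxxopts::Options options("demo", "cxxopts value-binding demo");

    options.add_options()
        ("m,model-path", "Path to the model file",
         cxxopts::value<std::string>(params.m_ModelPath))
        ("v,verbose", "Enable verbose output",
         // absent -> "false" (default_value), present without argument -> "true" (implicit_value)
         cxxopts::value<bool>(params.m_Verbose)->default_value("false")->implicit_value("true"));

    options.parse(argc, argv);   // writes directly into 'params'

    std::cout << "model: " << params.m_ModelPath
              << " verbose: " << std::boolalpha << params.m_Verbose << std::endl;
    return 0;
}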

◆ ProgramOptions() [2/2]

ProgramOptions (int ac, const char *av[])

Runs ParseOptions() on initialization.

Definition at line 425 of file ExecuteNetworkProgramOptions.cpp.

References ParseOptions().

425  : ProgramOptions()
426 {
427  ParseOptions(ac, av);
428 }

Member Function Documentation

◆ ParseOptions()

void ParseOptions (int ac, const char *av[])

Parses program options from the command line or another source and stores the values in member variables.

It also checks the validity of the parsed parameters. Throws a cxxopts exception if parsing fails or an armnn exception if parameters are not valid.

Definition at line 430 of file ExecuteNetworkProgramOptions.cpp.

References ARMNN_LOG, ExecuteNetworkParams::ArmNNTfLiteDelegate, ExecuteNetworkParams::ArmNNTfLiteParser, CheckForDeprecatedOptions(), CheckOptionDependencies(), CheckRequiredOptions(), GetBackendIDs(), IRuntime::CreationOptions::m_BackendOptions, ExecuteNetworkParams::m_ComputeDevices, ExecuteNetworkParams::m_Concurrent, m_CxxOptions, m_CxxResult, ExecuteNetworkParams::m_DynamicBackendsPath, IRuntime::CreationOptions::m_DynamicBackendsPath, ExecuteNetworkParams::m_EnableDelegate, IRuntime::CreationOptions::m_EnableGpuProfiling, ExecuteNetworkParams::m_EnableProfiling, m_ExNetParams, ExecuteNetworkParams::m_GenerateTensorData, ExecuteNetworkParams::m_InputNames, ExecuteNetworkParams::m_InputTensorDataFilePaths, ExecuteNetworkParams::m_InputTensorShapes, ExecuteNetworkParams::m_InputTypes, ExecuteNetworkParams::m_Iterations, ExecuteNetworkParams::m_MLGOTuningFilePath, ExecuteNetworkParams::m_ModelFormat, ExecuteNetworkParams::m_OutputNames, ExecuteNetworkParams::m_OutputTensorFiles, ExecuteNetworkParams::m_OutputTypes, m_RuntimeOptions, ExecuteNetworkParams::m_SimultaneousIterations, ExecuteNetworkParams::m_TfLiteExecutor, ExecuteNetworkParams::m_ThreadPoolSize, ExecuteNetworkParams::m_TuningLevel, ExecuteNetworkParams::m_TuningPath, ParseArray(), ParseStringList(), armnn::stringUtils::StringTrimCopy(), ExecuteNetworkParams::TfliteInterpreter, ValidateExecuteNetworkParams(), and ValidateRuntimeOptions().

Referenced by main(), and ProgramOptions().

431 {
432  // Parses the command-line.
433  m_CxxResult = m_CxxOptions.parse(ac, av);
434 
435  if (m_CxxResult.count("help") || ac <= 1)
436  {
437  std::cout << m_CxxOptions.help() << std::endl;
438  exit(EXIT_SUCCESS);
439  }
440 
441  CheckRequiredOptions(m_CxxResult);
442  CheckOptionDependencies(m_CxxResult);
443  CheckForDeprecatedOptions(m_CxxResult);
444 
445  // Some options can't be assigned directly because they need some post-processing:
446  auto computeDevices = GetOptionValue<std::vector<std::string>>("compute", m_CxxResult);
447  m_ExNetParams.m_ComputeDevices = GetBackendIDs(computeDevices);
448  m_ExNetParams.m_ModelFormat =
449  armnn::stringUtils::StringTrimCopy(GetOptionValue<std::string>("model-format", m_CxxResult));
450  m_ExNetParams.m_InputNames =
451  ParseStringList(GetOptionValue<std::string>("input-name", m_CxxResult), ",");
452  m_ExNetParams.m_InputTensorDataFilePaths =
453  ParseStringList(GetOptionValue<std::string>("input-tensor-data", m_CxxResult), ",");
454  m_ExNetParams.m_OutputNames =
455  ParseStringList(GetOptionValue<std::string>("output-name", m_CxxResult), ",");
456  m_ExNetParams.m_InputTypes =
457  ParseStringList(GetOptionValue<std::string>("input-type", m_CxxResult), ",");
458  m_ExNetParams.m_OutputTypes =
459  ParseStringList(GetOptionValue<std::string>("output-type", m_CxxResult), ",");
460  m_ExNetParams.m_OutputTensorFiles =
461  ParseStringList(GetOptionValue<std::string>("write-outputs-to-file", m_CxxResult), ",");
462  m_ExNetParams.m_GenerateTensorData =
463  m_ExNetParams.m_InputTensorDataFilePaths.empty();
464  m_ExNetParams.m_DynamicBackendsPath = m_RuntimeOptions.m_DynamicBackendsPath;
465 
466  m_RuntimeOptions.m_EnableGpuProfiling = m_ExNetParams.m_EnableProfiling;
467 
468  std::string tfliteExecutor = GetOptionValue<std::string>("tflite-executor", m_CxxResult);
469 
470  if (tfliteExecutor.size() == 0 || tfliteExecutor == "parser")
471  {
472  m_ExNetParams.m_TfLiteExecutor = ExecuteNetworkParams::TfLiteExecutor::ArmNNTfLiteParser;
473  }
474  else if (tfliteExecutor == "delegate")
475  {
476  m_ExNetParams.m_TfLiteExecutor = ExecuteNetworkParams::TfLiteExecutor::ArmNNTfLiteDelegate;
477  }
478  else if (tfliteExecutor == "tflite")
479  {
480  m_ExNetParams.m_TfLiteExecutor = ExecuteNetworkParams::TfLiteExecutor::TfliteInterpreter;
481  }
482  else
483  {
484  ARMNN_LOG(info) << fmt::format("Invalid tflite-executor option '{}'.", tfliteExecutor);
485  throw armnn::InvalidArgumentException ("Invalid tflite-executor option");
486  }
487 
488  // For backwards compatibility when deprecated options are used
489  if (m_ExNetParams.m_EnableDelegate)
490  {
491  m_ExNetParams.m_TfLiteExecutor = ExecuteNetworkParams::TfLiteExecutor::ArmNNTfLiteDelegate;
492  }
493  if (m_ExNetParams.m_SimultaneousIterations > 1)
494  {
495  m_ExNetParams.m_Iterations = m_ExNetParams.m_SimultaneousIterations;
496  m_ExNetParams.m_Concurrent = true;
497  }
498 
499  // Set concurrent to true if the user expects to run inferences asynchronously
500  if (m_ExNetParams.m_ThreadPoolSize > 0)
501  {
502  m_ExNetParams.m_Concurrent = true;
503  }
504 
505  // Parse input tensor shape from the string we got from the command-line.
506  std::vector<std::string> inputTensorShapesVector =
507  ParseStringList(GetOptionValue<std::string>("input-tensor-shape", m_CxxResult), ":");
508 
509  if (!inputTensorShapesVector.empty())
510  {
511  m_ExNetParams.m_InputTensorShapes.reserve(inputTensorShapesVector.size());
512 
513  for(const std::string& shape : inputTensorShapesVector)
514  {
515  std::stringstream ss(shape);
516  std::vector<unsigned int> dims = ParseArray(ss);
517 
518  m_ExNetParams.m_InputTensorShapes.push_back(
519  std::make_unique<armnn::TensorShape>(static_cast<unsigned int>(dims.size()), dims.data()));
520  }
521  }
522 
523  // We have to validate ExecuteNetworkParams first so that the tuning path and level is validated
524  ValidateExecuteNetworkParams();
525 
526  // Parse CL tuning parameters to runtime options
527  if (!m_ExNetParams.m_TuningPath.empty())
528  {
529  m_RuntimeOptions.m_BackendOptions.emplace_back(
530  armnn::BackendOptions
531  {
532  "GpuAcc",
533  {
534  {"TuningLevel", m_ExNetParams.m_TuningLevel},
535  {"TuningFile", m_ExNetParams.m_TuningPath.c_str()},
536  {"KernelProfilingEnabled", m_ExNetParams.m_EnableProfiling},
537  {"MLGOTuningFilePath", m_ExNetParams.m_MLGOTuningFilePath}
538  }
539  }
540  );
541  }
542 
543  ValidateRuntimeOptions();
544 }
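
As noted above, ParseOptions() lets cxxopts parsing errors propagate and throws an armnn exception when the parsed parameters are invalid, so callers usually wrap it (or the parsing constructor) in a try/catch. A minimal sketch, assuming ExecuteNetworkProgramOptions.hpp is on the include path and the cxxopts 2.x exception base class cxxopts::OptionException (newer cxxopts releases use different exception type names):

#include "ExecuteNetworkProgramOptions.hpp"

#include <armnn/Exceptions.hpp>
#include <cxxopts/cxxopts.hpp>

#include <cstdlib>
#include <iostream>

int main(int argc, const char* argv[])
{
    ProgramOptions options;                     // adds the options but does not parse yet
    try
    {
        options.ParseOptions(argc, argv);
    }
    catch (const cxxopts::OptionException& e)   // malformed command line
    {
        std::cerr << "Error parsing the command line: " << e.what() << std::endl;
        return EXIT_FAILURE;
    }
    catch (const armnn::Exception& e)           // parameters failed validation
    {
        std::cerr << "Invalid parameters: " << e.what() << std::endl;
        return EXIT_FAILURE;
    }

    // ... continue with options.m_ExNetParams and options.m_RuntimeOptions ...
    return 0;
}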

◆ ValidateExecuteNetworkParams()

void ValidateExecuteNetworkParams ( )

Ensures that the parameters for ExecuteNetwork fit together.

Definition at line 156 of file ExecuteNetworkProgramOptions.cpp.

References m_ExNetParams, and ExecuteNetworkParams::ValidateParams().

Referenced by ParseOptions().

157 {
158  m_ExNetParams.ValidateParams();
159 }

◆ ValidateRuntimeOptions()

void ValidateRuntimeOptions ( )

Ensures that the runtime options are valid.

Referenced by ParseOptions().

Member Data Documentation

◆ m_CxxOptions

cxxopts::Options m_CxxOptions

Definition at line 41 of file ExecuteNetworkProgramOptions.hpp.

Referenced by ParseOptions(), and ProgramOptions().

◆ m_CxxResult

cxxopts::ParseResult m_CxxResult

Definition at line 42 of file ExecuteNetworkProgramOptions.hpp.

Referenced by ParseOptions().

◆ m_ExNetParams

ExecuteNetworkParams m_ExNetParams

Referenced by ParseOptions(), ProgramOptions(), and ValidateExecuteNetworkParams().

◆ m_RuntimeOptions

armnn::IRuntime::CreationOptions m_RuntimeOptions

Referenced by ParseOptions(), and ProgramOptions().
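
The runtime options gathered here (dynamic backend path, external profiling settings and, after ParseOptions(), any GpuAcc tuning BackendOptions) are meant to be handed to the Arm NN runtime. A minimal sketch using the armnn::IRuntime::Create() factory; the helper name is invented for illustration:

#include "ExecuteNetworkProgramOptions.hpp"

#include <armnn/IRuntime.hpp>

// Hypothetical helper: create the Arm NN runtime from the parsed runtime options.
armnn::IRuntimePtr CreateRuntime(const ProgramOptions& options)
{
    // IRuntime::Create() consumes the armnn::IRuntime::CreationOptions held in m_RuntimeOptions.
    return armnn::IRuntime::Create(options.m_RuntimeOptions);
}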


The documentation for this struct was generated from the following files:

ExecuteNetworkProgramOptions.hpp
ExecuteNetworkProgramOptions.cpp