ArmNN 21.02
ParserFlatbuffersFixture Struct Reference

#include <ParserFlatbuffersFixture.hpp>

Inheritance diagram for ParserFlatbuffersFixture: PositiveActivationFixture derives from this struct.

Public Member Functions

 ParserFlatbuffersFixture ()
 
void Setup ()
 
void SetupSingleInputSingleOutput (const std::string &inputName, const std::string &outputName)
 
bool ReadStringToBinary ()
 
template<std::size_t NumOutputDimensions, armnn::DataType ArmnnType>
void RunTest (size_t subgraphId, const std::vector< armnn::ResolveType< ArmnnType >> &inputData, const std::vector< armnn::ResolveType< ArmnnType >> &expectedOutputData)
 Executes the network with the given input tensor and checks the result against the given output tensor. More...
 
template<std::size_t NumOutputDimensions, armnn::DataType ArmnnType>
void RunTest (size_t subgraphId, const std::map< std::string, std::vector< armnn::ResolveType< ArmnnType >>> &inputData, const std::map< std::string, std::vector< armnn::ResolveType< ArmnnType >>> &expectedOutputData)
 Executes the network with the given input tensors and checks the results against the given output tensors. More...
 
template<std::size_t NumOutputDimensions, armnn::DataType ArmnnType1, armnn::DataType ArmnnType2>
void RunTest (size_t subgraphId, const std::map< std::string, std::vector< armnn::ResolveType< ArmnnType1 >>> &inputData, const std::map< std::string, std::vector< armnn::ResolveType< ArmnnType2 >>> &expectedOutputData, bool isDynamic=false)
 Multiple Inputs, Multiple Outputs w/ Variable Datatypes and different dimension sizes. More...
 
template<std::size_t NumOutputDimensions, armnn::DataType inputType1, armnn::DataType inputType2, armnn::DataType outputType>
void RunTest (size_t subgraphId, const std::map< std::string, std::vector< armnn::ResolveType< inputType1 >>> &input1Data, const std::map< std::string, std::vector< armnn::ResolveType< inputType2 >>> &input2Data, const std::map< std::string, std::vector< armnn::ResolveType< outputType >>> &expectedOutputData)
 Multiple Inputs with different DataTypes, Multiple Outputs w/ Variable DataTypes Executes the network with the given input tensors and checks the results against the given output tensors. More...
 
template<armnn::DataType ArmnnType1, armnn::DataType ArmnnType2>
void RunTest (std::size_t subgraphId, const std::map< std::string, std::vector< armnn::ResolveType< ArmnnType1 >>> &inputData, const std::map< std::string, std::vector< armnn::ResolveType< ArmnnType2 >>> &expectedOutputData)
 Multiple Inputs, Multiple Outputs w/ Variable Datatypes and different dimension sizes. More...
 
void CheckTensors (const TensorRawPtr &tensors, size_t shapeSize, const std::vector< int32_t > &shape, tflite::TensorType tensorType, uint32_t buffer, const std::string &name, const std::vector< float > &min, const std::vector< float > &max, const std::vector< float > &scale, const std::vector< int64_t > &zeroPoint)
 

Static Public Member Functions

static std::string GenerateDetectionPostProcessJsonString (const armnn::DetectionPostProcessDescriptor &descriptor)
 

Public Attributes

std::vector< uint8_t > m_GraphBinary
 
std::string m_JsonString
 
ITfLiteParserPtr m_Parser
 
armnn::IRuntimePtr m_Runtime
 
armnn::NetworkId m_NetworkIdentifier
 
std::string m_SingleInputName
 If the single-input-single-output overload of Setup() is called, these will store the input and output name so they don't need to be passed to the single-input-single-output overload of RunTest(). More...
 
std::string m_SingleOutputName
 

Detailed Description

Definition at line 36 of file ParserFlatbuffersFixture.hpp.

Constructor & Destructor Documentation

◆ ParserFlatbuffersFixture()

Definition at line 38 of file ParserFlatbuffersFixture.hpp.

References m_Parser.

    ParserFlatbuffersFixture() :
        m_Parser(nullptr, &ITfLiteParser::Destroy),
        m_Runtime(armnn::IRuntime::Create(armnn::IRuntime::CreationOptions())),
        m_NetworkIdentifier(0)
    {
        ITfLiteParser::TfLiteParserOptions options;
        options.m_StandInLayerForUnsupported = true;
        options.m_InferAndValidate = true;

        m_Parser.reset(ITfLiteParser::CreateRaw(armnn::Optional<ITfLiteParser::TfLiteParserOptions>(options)));
    }

(The m_Runtime and m_NetworkIdentifier initializers were elided in the extracted listing and are reconstructed here from the referenced IRuntime::Create.)

Member Function Documentation

◆ CheckTensors()

void CheckTensors ( const TensorRawPtr & tensors,
size_t  shapeSize,
const std::vector< int32_t > &  shape,
tflite::TensorType  tensorType,
uint32_t  buffer,
const std::string &  name,
const std::vector< float > &  min,
const std::vector< float > &  max,
const std::vector< float > &  scale,
const std::vector< int64_t > &  zeroPoint 
)
inline

Definition at line 205 of file ParserFlatbuffersFixture.hpp.

References m_Parser, and armnn::VerifyTensorInfoDataType().

    {
        BOOST_CHECK(tensors);
        BOOST_CHECK_EQUAL(shapeSize, tensors->shape.size());
        BOOST_CHECK_EQUAL_COLLECTIONS(shape.begin(), shape.end(), tensors->shape.begin(), tensors->shape.end());
        BOOST_CHECK_EQUAL(tensorType, tensors->type);
        BOOST_CHECK_EQUAL(buffer, tensors->buffer);
        BOOST_CHECK_EQUAL(name, tensors->name);
        BOOST_CHECK(tensors->quantization);
        BOOST_CHECK_EQUAL_COLLECTIONS(min.begin(), min.end(), tensors->quantization.get()->min.begin(),
                                      tensors->quantization.get()->min.end());
        BOOST_CHECK_EQUAL_COLLECTIONS(max.begin(), max.end(), tensors->quantization.get()->max.begin(),
                                      tensors->quantization.get()->max.end());
        BOOST_CHECK_EQUAL_COLLECTIONS(scale.begin(), scale.end(), tensors->quantization.get()->scale.begin(),
                                      tensors->quantization.get()->scale.end());
        BOOST_CHECK_EQUAL_COLLECTIONS(zeroPoint.begin(), zeroPoint.end(),
                                      tensors->quantization.get()->zero_point.begin(),
                                      tensors->quantization.get()->zero_point.end());
    }
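Outside of Boost.Test, each collection check above amounts to elementwise equality: same length, equal elements at every position. A minimal stdlib sketch of that check (CollectionsEqual is a hypothetical helper, not part of the fixture):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Elementwise equality of two collections, as BOOST_CHECK_EQUAL_COLLECTIONS
// verifies: identical length and equal elements at every position.
bool CollectionsEqual(const std::vector<int32_t>& expected, const std::vector<int32_t>& actual)
{
    return expected.size() == actual.size() &&
           std::equal(expected.begin(), expected.end(), actual.begin());
}
```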

◆ GenerateDetectionPostProcessJsonString()

static std::string GenerateDetectionPostProcessJsonString ( const armnn::DetectionPostProcessDescriptor &  descriptor )
inlinestatic

Definition at line 178 of file ParserFlatbuffersFixture.hpp.

References DetectionPostProcessDescriptor::m_DetectionsPerClass, DetectionPostProcessDescriptor::m_MaxClassesPerDetection, DetectionPostProcessDescriptor::m_MaxDetections, DetectionPostProcessDescriptor::m_NmsIouThreshold, DetectionPostProcessDescriptor::m_NmsScoreThreshold, DetectionPostProcessDescriptor::m_NumClasses, DetectionPostProcessDescriptor::m_ScaleH, DetectionPostProcessDescriptor::m_ScaleW, DetectionPostProcessDescriptor::m_ScaleX, DetectionPostProcessDescriptor::m_ScaleY, and DetectionPostProcessDescriptor::m_UseRegularNms.

    {
        flexbuffers::Builder detectPostProcess;
        detectPostProcess.Map([&]() {
            detectPostProcess.Bool("use_regular_nms", descriptor.m_UseRegularNms);
            detectPostProcess.Int("max_detections", descriptor.m_MaxDetections);
            detectPostProcess.Int("max_classes_per_detection", descriptor.m_MaxClassesPerDetection);
            detectPostProcess.Int("detections_per_class", descriptor.m_DetectionsPerClass);
            detectPostProcess.Int("num_classes", descriptor.m_NumClasses);
            detectPostProcess.Float("nms_score_threshold", descriptor.m_NmsScoreThreshold);
            detectPostProcess.Float("nms_iou_threshold", descriptor.m_NmsIouThreshold);
            detectPostProcess.Float("h_scale", descriptor.m_ScaleH);
            detectPostProcess.Float("w_scale", descriptor.m_ScaleW);
            detectPostProcess.Float("x_scale", descriptor.m_ScaleX);
            detectPostProcess.Float("y_scale", descriptor.m_ScaleY);
        });
        detectPostProcess.Finish();

        // Create JSON string
        std::stringstream strStream;
        std::vector<uint8_t> buffer = detectPostProcess.GetBuffer();
        std::copy(buffer.begin(), buffer.end(), std::ostream_iterator<int>(strStream, ","));

        return strStream.str();
    }
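Note that the return value is not literal JSON: it is the finished flexbuffer rendered as comma-separated decimal integers, presumably so it can be spliced into the JSON graph description. That final serialization step can be sketched in isolation (BytesToCsv is a hypothetical name; the bytes below are arbitrary):

```cpp
#include <algorithm>
#include <cstdint>
#include <iterator>
#include <sstream>
#include <string>
#include <vector>

// Render a byte buffer as "b0,b1,b2," (trailing comma included), exactly as
// the std::copy / std::ostream_iterator<int> combination in the helper does.
std::string BytesToCsv(const std::vector<uint8_t>& buffer)
{
    std::stringstream strStream;
    std::copy(buffer.begin(), buffer.end(), std::ostream_iterator<int>(strStream, ","));
    return strStream.str();
}
```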

◆ ReadStringToBinary()

bool ReadStringToBinary ( )
inline

Definition at line 101 of file ParserFlatbuffersFixture.hpp.

References ARMNN_ASSERT_MSG, g_TfLiteSchemaText, g_TfLiteSchemaText_len, and RunTest().

Referenced by Setup().

    {
        std::string schemafile(&g_TfLiteSchemaText[0], &g_TfLiteSchemaText[g_TfLiteSchemaText_len]);

        // parse schema first, so we can use it to parse the data after
        flatbuffers::Parser parser;

        bool ok = parser.Parse(schemafile.c_str());
        ARMNN_ASSERT_MSG(ok, "Failed to parse schema file");

        ok &= parser.Parse(m_JsonString.c_str());
        ARMNN_ASSERT_MSG(ok, "Failed to parse json input");

        if (!ok)
        {
            return false;
        }

        {
            const uint8_t* bufferPtr = parser.builder_.GetBufferPointer();
            size_t size = static_cast<size_t>(parser.builder_.GetSize());
            m_GraphBinary.assign(bufferPtr, bufferPtr + size);
        }
        return ok;
    }

(The schemafile declaration was elided in the extracted listing and is reconstructed here from the referenced g_TfLiteSchemaText and g_TfLiteSchemaText_len.)
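The final block copies the parser's finished flatbuffer (raw pointer plus size) into the owning m_GraphBinary vector. That pointer-range assign can be sketched on its own (CopyBuffer is a hypothetical helper):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Copy a raw byte buffer (pointer + size) into an owning std::vector,
// mirroring the m_GraphBinary.assign(bufferPtr, bufferPtr + size) call.
std::vector<uint8_t> CopyBuffer(const uint8_t* bufferPtr, size_t size)
{
    std::vector<uint8_t> graphBinary;
    graphBinary.assign(bufferPtr, bufferPtr + size);
    return graphBinary;
}
```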

◆ RunTest() [1/5]

void RunTest ( size_t  subgraphId,
const std::vector< armnn::ResolveType< armnnType >> &  inputData,
const std::vector< armnn::ResolveType< armnnType >> &  expectedOutputData 
)

Executes the network with the given input tensor and checks the result against the given output tensor.

Single Input, Single Output: this overload assumes the network has a single input and a single output.

Definition at line 256 of file ParserFlatbuffersFixture.hpp.

References m_SingleInputName, and m_SingleOutputName.

Referenced by ReadStringToBinary().

{
    RunTest<NumOutputDimensions, armnnType>(subgraphId,
                                            { { m_SingleInputName, inputData } },
                                            { { m_SingleOutputName, expectedOutputData } });
}

◆ RunTest() [2/5]

void RunTest ( size_t  subgraphId,
const std::map< std::string, std::vector< armnn::ResolveType< armnnType >>> &  inputData,
const std::map< std::string, std::vector< armnn::ResolveType< armnnType >>> &  expectedOutputData 
)

Executes the network with the given input tensors and checks the results against the given output tensors.

Multiple Inputs, Multiple Outputs: this overload supports multiple inputs and multiple outputs, identified by name.

Definition at line 270 of file ParserFlatbuffersFixture.hpp.

{
    RunTest<NumOutputDimensions, armnnType, armnnType>(subgraphId, inputData, expectedOutputData);
}

◆ RunTest() [3/5]

void RunTest ( size_t  subgraphId,
const std::map< std::string, std::vector< armnn::ResolveType< armnnType1 >>> &  inputData,
const std::map< std::string, std::vector< armnn::ResolveType< armnnType2 >>> &  expectedOutputData,
bool  isDynamic = false 
)

Multiple Inputs, Multiple Outputs with variable data types and different dimension sizes.

Executes the network with the given input tensors and checks the results against the given output tensors. This overload supports multiple inputs and multiple outputs, identified by name, and allows the input data type to differ from the output data type.

Definition at line 284 of file ParserFlatbuffersFixture.hpp.

References CompareTensors(), TensorInfo::GetNumDimensions(), m_NetworkIdentifier, m_Parser, m_Runtime, and armnn::VerifyTensorInfoDataType().

{
    using DataType2 = armnn::ResolveType<armnnType2>;

    // Setup the armnn input tensors from the given vectors.
    armnn::InputTensors inputTensors;
    FillInputTensors<armnnType1>(inputTensors, inputData, subgraphId);

    // Allocate storage for the output tensors to be written to and setup the armnn output tensors.
    std::map<std::string, boost::multi_array<DataType2, NumOutputDimensions>> outputStorage;
    armnn::OutputTensors outputTensors;
    for (auto&& it : expectedOutputData)
    {
        armnn::LayerBindingId outputBindingId = m_Parser->GetNetworkOutputBindingInfo(subgraphId, it.first).first;
        armnn::TensorInfo outputTensorInfo = m_Runtime->GetOutputTensorInfo(m_NetworkIdentifier, outputBindingId);

        // Check that output tensors have correct number of dimensions (NumOutputDimensions specified in test)
        auto outputNumDimensions = outputTensorInfo.GetNumDimensions();
        BOOST_CHECK_MESSAGE((outputNumDimensions == NumOutputDimensions),
                            fmt::format("Number of dimensions expected {}, but got {} for output layer {}",
                                        NumOutputDimensions,
                                        outputNumDimensions,
                                        it.first));

        armnn::VerifyTensorInfoDataType(outputTensorInfo, armnnType2);
        outputStorage.emplace(it.first, MakeTensor<DataType2, NumOutputDimensions>(outputTensorInfo));
        outputTensors.push_back(
            { outputBindingId, armnn::Tensor(outputTensorInfo, outputStorage.at(it.first).data()) });
    }

    m_Runtime->EnqueueWorkload(m_NetworkIdentifier, inputTensors, outputTensors);

    // Compare each output tensor to the expected values
    for (auto&& it : expectedOutputData)
    {
        armnn::BindingPointInfo bindingInfo = m_Parser->GetNetworkOutputBindingInfo(subgraphId, it.first);
        auto outputExpected = MakeTensor<DataType2, NumOutputDimensions>(bindingInfo.second, it.second, isDynamic);
        BOOST_TEST(CompareTensors(outputExpected, outputStorage[it.first], false, isDynamic));
    }
}

◆ RunTest() [4/5]

void RunTest ( size_t  subgraphId,
const std::map< std::string, std::vector< armnn::ResolveType< inputType1 >>> &  input1Data,
const std::map< std::string, std::vector< armnn::ResolveType< inputType2 >>> &  input2Data,
const std::map< std::string, std::vector< armnn::ResolveType< outputType >>> &  expectedOutputData 
)

Multiple Inputs with different DataTypes, Multiple Outputs with variable DataTypes. Executes the network with the given input tensors and checks the results against the given output tensors.

This overload supports multiple inputs and multiple outputs, identified by name, and allows the input data types to differ from the output data type.

Definition at line 382 of file ParserFlatbuffersFixture.hpp.

References CompareTensors(), TensorInfo::GetNumDimensions(), m_NetworkIdentifier, m_Parser, m_Runtime, and armnn::VerifyTensorInfoDataType().

{
    using DataType2 = armnn::ResolveType<outputType>;

    // Setup the armnn input tensors from the given vectors.
    armnn::InputTensors inputTensors;
    FillInputTensors<inputType1>(inputTensors, input1Data, subgraphId);
    FillInputTensors<inputType2>(inputTensors, input2Data, subgraphId);

    // Allocate storage for the output tensors to be written to and setup the armnn output tensors.
    std::map<std::string, boost::multi_array<DataType2, NumOutputDimensions>> outputStorage;
    armnn::OutputTensors outputTensors;
    for (auto&& it : expectedOutputData)
    {
        armnn::LayerBindingId outputBindingId = m_Parser->GetNetworkOutputBindingInfo(subgraphId, it.first).first;
        armnn::TensorInfo outputTensorInfo = m_Runtime->GetOutputTensorInfo(m_NetworkIdentifier, outputBindingId);

        // Check that output tensors have correct number of dimensions (NumOutputDimensions specified in test)
        auto outputNumDimensions = outputTensorInfo.GetNumDimensions();
        BOOST_CHECK_MESSAGE((outputNumDimensions == NumOutputDimensions),
                            fmt::format("Number of dimensions expected {}, but got {} for output layer {}",
                                        NumOutputDimensions,
                                        outputNumDimensions,
                                        it.first));

        armnn::VerifyTensorInfoDataType(outputTensorInfo, outputType);
        outputStorage.emplace(it.first, MakeTensor<DataType2, NumOutputDimensions>(outputTensorInfo));
        outputTensors.push_back(
            { outputBindingId, armnn::Tensor(outputTensorInfo, outputStorage.at(it.first).data()) });
    }

    m_Runtime->EnqueueWorkload(m_NetworkIdentifier, inputTensors, outputTensors);

    // Compare each output tensor to the expected values
    for (auto&& it : expectedOutputData)
    {
        armnn::BindingPointInfo bindingInfo = m_Parser->GetNetworkOutputBindingInfo(subgraphId, it.first);
        auto outputExpected = MakeTensor<DataType2, NumOutputDimensions>(bindingInfo.second, it.second);
        BOOST_TEST(CompareTensors(outputExpected, outputStorage[it.first], false));
    }
}

◆ RunTest() [5/5]

void RunTest ( std::size_t  subgraphId,
const std::map< std::string, std::vector< armnn::ResolveType< armnnType1 >>> &  inputData,
const std::map< std::string, std::vector< armnn::ResolveType< armnnType2 >>> &  expectedOutputData 
)

Multiple Inputs, Multiple Outputs with variable data types and different dimension sizes.

Executes the network with the given input tensors and checks the results against the given output tensors. This overload supports multiple inputs and multiple outputs, identified by name, and allows the input data type to differ from the output data type.

Definition at line 334 of file ParserFlatbuffersFixture.hpp.

References m_NetworkIdentifier, m_Parser, m_Runtime, and armnn::VerifyTensorInfoDataType().

{
    using DataType2 = armnn::ResolveType<armnnType2>;

    // Setup the armnn input tensors from the given vectors.
    armnn::InputTensors inputTensors;
    FillInputTensors<armnnType1>(inputTensors, inputData, subgraphId);

    armnn::OutputTensors outputTensors;
    outputTensors.reserve(expectedOutputData.size());
    std::map<std::string, std::vector<DataType2>> outputStorage;
    for (auto&& it : expectedOutputData)
    {
        armnn::BindingPointInfo bindingInfo = m_Parser->GetNetworkOutputBindingInfo(subgraphId, it.first);
        armnn::VerifyTensorInfoDataType(bindingInfo.second, armnnType2);

        std::vector<DataType2> out(it.second.size());
        outputStorage.emplace(it.first, out);
        outputTensors.push_back({ bindingInfo.first,
                                  armnn::Tensor(bindingInfo.second,
                                                outputStorage.at(it.first).data()) });
    }

    m_Runtime->EnqueueWorkload(m_NetworkIdentifier, inputTensors, outputTensors);

    // Checks the results.
    for (auto&& it : expectedOutputData)
    {
        std::vector<armnn::ResolveType<armnnType2>> out = outputStorage.at(it.first);
        for (unsigned int i = 0; i < out.size(); ++i)
        {
            BOOST_TEST(it.second[i] == out[i], boost::test_tools::tolerance(0.000001f));
        }
    }
}
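The per-element check above relies on Boost.Test's tolerance, which is a relative tolerance. A simplified, absolute-difference sketch of the same idea (AllClose is a hypothetical helper; 0.000001f mirrors the value used above):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Compare expected and actual outputs elementwise within a tolerance.
// Note: BOOST_TEST with boost::test_tools::tolerance applies a *relative*
// tolerance; this sketch uses a simpler absolute difference.
bool AllClose(const std::vector<float>& expected, const std::vector<float>& actual,
              float tolerance = 0.000001f)
{
    if (expected.size() != actual.size())
    {
        return false;
    }
    for (std::size_t i = 0; i < expected.size(); ++i)
    {
        if (std::fabs(expected[i] - actual[i]) > tolerance)
        {
            return false;
        }
    }
    return true;
}
```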

◆ Setup()

void Setup ( )
inline

Definition at line 61 of file ParserFlatbuffersFixture.hpp.

References armnn::CpuRef, armnn::Optimize(), ReadStringToBinary(), and armnn::Success.

Referenced by BOOST_FIXTURE_TEST_CASE(), and SetupSingleInputSingleOutput().

    {
        bool ok = ReadStringToBinary();
        if (!ok) {
            throw armnn::Exception("LoadNetwork failed while reading binary input");
        }

        armnn::INetworkPtr network = m_Parser->CreateNetworkFromBinary(m_GraphBinary);

        if (!network) {
            throw armnn::Exception("The parser failed to create an ArmNN network");
        }

        auto optimized = Optimize(*network, { armnn::Compute::CpuRef },
                                  m_Runtime->GetDeviceSpec());
        std::string errorMessage;

        armnn::Status ret = m_Runtime->LoadNetwork(m_NetworkIdentifier, move(optimized), errorMessage);

        if (ret != armnn::Status::Success)
        {
            throw armnn::Exception(
                fmt::format("The runtime failed to load the network. "
                            "Error was: {}. in {} [{}:{}]",
                            errorMessage,
                            __func__,
                            __FILE__,
                            __LINE__));
        }
    }

◆ SetupSingleInputSingleOutput()

void SetupSingleInputSingleOutput ( const std::string &  inputName,
const std::string &  outputName 
)
inline

Definition at line 93 of file ParserFlatbuffersFixture.hpp.

References Setup().

Referenced by BOOST_AUTO_TEST_CASE().

    {
        // Store the input and output name so they don't need to be passed to the single-input-single-output RunTest().
        m_SingleInputName = inputName;
        m_SingleOutputName = outputName;
        Setup();
    }

Member Data Documentation

◆ m_GraphBinary

std::vector<uint8_t> m_GraphBinary

Definition at line 50 of file ParserFlatbuffersFixture.hpp.

◆ m_JsonString

std::string m_JsonString

Definition at line 51 of file ParserFlatbuffersFixture.hpp.

◆ m_NetworkIdentifier

armnn::NetworkId m_NetworkIdentifier

Definition at line 54 of file ParserFlatbuffersFixture.hpp.

Referenced by RunTest().

◆ m_Parser

ITfLiteParserPtr m_Parser

Definition at line 52 of file ParserFlatbuffersFixture.hpp.

Referenced by CheckTensors(), ParserFlatbuffersFixture(), and RunTest().

◆ m_Runtime

armnn::IRuntimePtr m_Runtime

Definition at line 53 of file ParserFlatbuffersFixture.hpp.

Referenced by RunTest().

◆ m_SingleInputName

std::string m_SingleInputName

If the single-input-single-output overload of Setup() is called, these will store the input and output name so they don't need to be passed to the single-input-single-output overload of RunTest().

Definition at line 58 of file ParserFlatbuffersFixture.hpp.

Referenced by RunTest().

◆ m_SingleOutputName

std::string m_SingleOutputName

Definition at line 59 of file ParserFlatbuffersFixture.hpp.

Referenced by RunTest().


The documentation for this struct was generated from the following file: ParserFlatbuffersFixture.hpp