ArmNN
 20.11
ParserFlatbuffersFixture Struct Reference

#include <ParserFlatbuffersFixture.hpp>

Inheritance diagram for ParserFlatbuffersFixture:
PositiveActivationFixture

Public Member Functions

 ParserFlatbuffersFixture ()
 
void Setup ()
 
void SetupSingleInputSingleOutput (const std::string &inputName, const std::string &outputName)
 
bool ReadStringToBinary ()
 
template<std::size_t NumOutputDimensions, armnn::DataType ArmnnType>
void RunTest (size_t subgraphId, const std::vector< armnn::ResolveType< ArmnnType >> &inputData, const std::vector< armnn::ResolveType< ArmnnType >> &expectedOutputData)
 Executes the network with the given input tensor and checks the result against the given output tensor. More...
 
template<std::size_t NumOutputDimensions, armnn::DataType ArmnnType>
void RunTest (size_t subgraphId, const std::map< std::string, std::vector< armnn::ResolveType< ArmnnType >>> &inputData, const std::map< std::string, std::vector< armnn::ResolveType< ArmnnType >>> &expectedOutputData)
 Executes the network with the given input tensors and checks the results against the given output tensors. More...
 
template<std::size_t NumOutputDimensions, armnn::DataType ArmnnType1, armnn::DataType ArmnnType2>
void RunTest (size_t subgraphId, const std::map< std::string, std::vector< armnn::ResolveType< ArmnnType1 >>> &inputData, const std::map< std::string, std::vector< armnn::ResolveType< ArmnnType2 >>> &expectedOutputData, bool isDynamic=false)
 Multiple Inputs, Multiple Outputs w/ Variable Datatypes and different dimension sizes. More...
 
template<armnn::DataType ArmnnType1, armnn::DataType ArmnnType2>
void RunTest (std::size_t subgraphId, const std::map< std::string, std::vector< armnn::ResolveType< ArmnnType1 >>> &inputData, const std::map< std::string, std::vector< armnn::ResolveType< ArmnnType2 >>> &expectedOutputData)
 Multiple Inputs, Multiple Outputs w/ Variable Datatypes and different dimension sizes. More...
 
void CheckTensors (const TensorRawPtr &tensors, size_t shapeSize, const std::vector< int32_t > &shape, tflite::TensorType tensorType, uint32_t buffer, const std::string &name, const std::vector< float > &min, const std::vector< float > &max, const std::vector< float > &scale, const std::vector< int64_t > &zeroPoint)
 

Static Public Member Functions

static std::string GenerateDetectionPostProcessJsonString (const armnn::DetectionPostProcessDescriptor &descriptor)
 

Public Attributes

std::vector< uint8_t > m_GraphBinary
 
std::string m_JsonString
 
ITfLiteParserPtr m_Parser
 
armnn::IRuntimePtr m_Runtime
 
armnn::NetworkId m_NetworkIdentifier
 
std::string m_SingleInputName
 If the single-input-single-output overload of Setup() is called, these will store the input and output name so they don't need to be passed to the single-input-single-output overload of RunTest(). More...
 
std::string m_SingleOutputName
 

Detailed Description

Definition at line 36 of file ParserFlatbuffersFixture.hpp.

Constructor & Destructor Documentation

◆ ParserFlatbuffersFixture()

Definition at line 38 of file ParserFlatbuffersFixture.hpp.

References m_Parser.

 :
 m_Parser(nullptr, &ITfLiteParser::Destroy),
 m_Runtime(armnn::IRuntime::Create(armnn::IRuntime::CreationOptions())),
 m_NetworkIdentifier(0)
 {
     ITfLiteParser::TfLiteParserOptions options;
     options.m_StandInLayerForUnsupported = true;
     options.m_InferAndValidate = true;

     m_Parser.reset(ITfLiteParser::CreateRaw(armnn::Optional<ITfLiteParser::TfLiteParserOptions>(options)));
 }

Member Function Documentation

◆ CheckTensors()

void CheckTensors ( const TensorRawPtr &  tensors,
size_t  shapeSize,
const std::vector< int32_t > &  shape,
tflite::TensorType  tensorType,
uint32_t  buffer,
const std::string &  name,
const std::vector< float > &  min,
const std::vector< float > &  max,
const std::vector< float > &  scale,
const std::vector< int64_t > &  zeroPoint 
)
inline

Definition at line 193 of file ParserFlatbuffersFixture.hpp.

 {
     BOOST_CHECK(tensors);
     BOOST_CHECK_EQUAL(shapeSize, tensors->shape.size());
     BOOST_CHECK_EQUAL_COLLECTIONS(shape.begin(), shape.end(), tensors->shape.begin(), tensors->shape.end());
     BOOST_CHECK_EQUAL(tensorType, tensors->type);
     BOOST_CHECK_EQUAL(buffer, tensors->buffer);
     BOOST_CHECK_EQUAL(name, tensors->name);
     BOOST_CHECK(tensors->quantization);
     BOOST_CHECK_EQUAL_COLLECTIONS(min.begin(), min.end(), tensors->quantization.get()->min.begin(),
                                   tensors->quantization.get()->min.end());
     BOOST_CHECK_EQUAL_COLLECTIONS(max.begin(), max.end(), tensors->quantization.get()->max.begin(),
                                   tensors->quantization.get()->max.end());
     BOOST_CHECK_EQUAL_COLLECTIONS(scale.begin(), scale.end(), tensors->quantization.get()->scale.begin(),
                                   tensors->quantization.get()->scale.end());
     BOOST_CHECK_EQUAL_COLLECTIONS(zeroPoint.begin(), zeroPoint.end(),
                                   tensors->quantization.get()->zero_point.begin(),
                                   tensors->quantization.get()->zero_point.end());
 }

◆ GenerateDetectionPostProcessJsonString()

static std::string GenerateDetectionPostProcessJsonString ( const armnn::DetectionPostProcessDescriptor &  descriptor)
inlinestatic

Definition at line 166 of file ParserFlatbuffersFixture.hpp.

References DetectionPostProcessDescriptor::m_DetectionsPerClass, DetectionPostProcessDescriptor::m_MaxClassesPerDetection, DetectionPostProcessDescriptor::m_MaxDetections, DetectionPostProcessDescriptor::m_NmsIouThreshold, DetectionPostProcessDescriptor::m_NmsScoreThreshold, DetectionPostProcessDescriptor::m_NumClasses, DetectionPostProcessDescriptor::m_ScaleH, DetectionPostProcessDescriptor::m_ScaleW, DetectionPostProcessDescriptor::m_ScaleX, DetectionPostProcessDescriptor::m_ScaleY, and DetectionPostProcessDescriptor::m_UseRegularNms.

 {
     flexbuffers::Builder detectPostProcess;
     detectPostProcess.Map([&]() {
         detectPostProcess.Bool("use_regular_nms", descriptor.m_UseRegularNms);
         detectPostProcess.Int("max_detections", descriptor.m_MaxDetections);
         detectPostProcess.Int("max_classes_per_detection", descriptor.m_MaxClassesPerDetection);
         detectPostProcess.Int("detections_per_class", descriptor.m_DetectionsPerClass);
         detectPostProcess.Int("num_classes", descriptor.m_NumClasses);
         detectPostProcess.Float("nms_score_threshold", descriptor.m_NmsScoreThreshold);
         detectPostProcess.Float("nms_iou_threshold", descriptor.m_NmsIouThreshold);
         detectPostProcess.Float("h_scale", descriptor.m_ScaleH);
         detectPostProcess.Float("w_scale", descriptor.m_ScaleW);
         detectPostProcess.Float("x_scale", descriptor.m_ScaleX);
         detectPostProcess.Float("y_scale", descriptor.m_ScaleY);
     });
     detectPostProcess.Finish();

     // Create JSON string
     std::stringstream strStream;
     std::vector<uint8_t> buffer = detectPostProcess.GetBuffer();
     std::copy(buffer.begin(), buffer.end(), std::ostream_iterator<int>(strStream, ","));

     return strStream.str();
 }

◆ ReadStringToBinary()

bool ReadStringToBinary ( )
inline

Definition at line 101 of file ParserFlatbuffersFixture.hpp.

References ARMNN_ASSERT_MSG, g_TfLiteSchemaText, g_TfLiteSchemaText_len, and RunTest().

Referenced by Setup().

 {
     std::string schemafile(&g_TfLiteSchemaText[0], g_TfLiteSchemaText_len);

     // parse schema first, so we can use it to parse the data after
     flatbuffers::Parser parser;

     bool ok = parser.Parse(schemafile.c_str());
     ARMNN_ASSERT_MSG(ok, "Failed to parse schema file");

     ok &= parser.Parse(m_JsonString.c_str());
     ARMNN_ASSERT_MSG(ok, "Failed to parse json input");

     if (!ok)
     {
         return false;
     }

     {
         const uint8_t* bufferPtr = parser.builder_.GetBufferPointer();
         size_t size = static_cast<size_t>(parser.builder_.GetSize());
         m_GraphBinary.assign(bufferPtr, bufferPtr + size);
     }
     return ok;
 }

◆ RunTest() [1/4]

void RunTest ( size_t  subgraphId,
const std::vector< armnn::ResolveType< armnnType >> &  inputData,
const std::vector< armnn::ResolveType< armnnType >> &  expectedOutputData 
)

Single Input, Single Output. Executes the network with the given input tensor and checks the result against the given output tensor. This overload assumes the network has a single input and a single output.

Definition at line 222 of file ParserFlatbuffersFixture.hpp.

References m_SingleInputName, and m_SingleOutputName.

Referenced by ReadStringToBinary().

{
    RunTest<NumOutputDimensions, armnnType>(subgraphId,
                                            { { m_SingleInputName, inputData } },
                                            { { m_SingleOutputName, expectedOutputData } });
}

◆ RunTest() [2/4]

void RunTest ( size_t  subgraphId,
const std::map< std::string, std::vector< armnn::ResolveType< armnnType >>> &  inputData,
const std::map< std::string, std::vector< armnn::ResolveType< armnnType >>> &  expectedOutputData 
)

Multiple Inputs, Multiple Outputs. Executes the network with the given input tensors and checks the results against the given output tensors. This overload supports multiple inputs and multiple outputs, identified by name.

Definition at line 236 of file ParserFlatbuffersFixture.hpp.

{
    RunTest<NumOutputDimensions, armnnType, armnnType>(subgraphId, inputData, expectedOutputData);
}

◆ RunTest() [3/4]

void RunTest ( size_t  subgraphId,
const std::map< std::string, std::vector< armnn::ResolveType< armnnType1 >>> &  inputData,
const std::map< std::string, std::vector< armnn::ResolveType< armnnType2 >>> &  expectedOutputData,
bool  isDynamic = false 
)

Multiple Inputs, Multiple Outputs w/ Variable Datatypes and different dimension sizes. Executes the network with the given input tensors and checks the results against the given output tensors. This overload supports multiple inputs and multiple outputs, identified by name, and allows the input datatype to differ from the output datatype.

Definition at line 250 of file ParserFlatbuffersFixture.hpp.

References CompareTensors(), TensorInfo::GetNumDimensions(), m_NetworkIdentifier, m_Parser, m_Runtime, and armnn::VerifyTensorInfoDataType().

{
    using DataType2 = armnn::ResolveType<armnnType2>;

    // Setup the armnn input tensors from the given vectors.
    armnn::InputTensors inputTensors;
    for (auto&& it : inputData)
    {
        armnn::BindingPointInfo bindingInfo = m_Parser->GetNetworkInputBindingInfo(subgraphId, it.first);
        armnn::VerifyTensorInfoDataType(bindingInfo.second, armnnType1);
        inputTensors.push_back({ bindingInfo.first, armnn::ConstTensor(bindingInfo.second, it.second.data()) });
    }

    // Allocate storage for the output tensors to be written to and setup the armnn output tensors.
    std::map<std::string, boost::multi_array<DataType2, NumOutputDimensions>> outputStorage;
    armnn::OutputTensors outputTensors;
    for (auto&& it : expectedOutputData)
    {
        armnn::LayerBindingId outputBindingId = m_Parser->GetNetworkOutputBindingInfo(subgraphId, it.first).first;
        armnn::TensorInfo outputTensorInfo = m_Runtime->GetOutputTensorInfo(m_NetworkIdentifier, outputBindingId);

        // Check that output tensors have correct number of dimensions (NumOutputDimensions specified in test)
        auto outputNumDimensions = outputTensorInfo.GetNumDimensions();
        BOOST_CHECK_MESSAGE((outputNumDimensions == NumOutputDimensions),
                            fmt::format("Number of dimensions expected {}, but got {} for output layer {}",
                                        NumOutputDimensions,
                                        outputNumDimensions,
                                        it.first));

        armnn::VerifyTensorInfoDataType(outputTensorInfo, armnnType2);
        outputStorage.emplace(it.first, MakeTensor<DataType2, NumOutputDimensions>(outputTensorInfo));
        outputTensors.push_back(
            { outputBindingId, armnn::Tensor(outputTensorInfo, outputStorage.at(it.first).data()) });
    }

    m_Runtime->EnqueueWorkload(m_NetworkIdentifier, inputTensors, outputTensors);

    // Compare each output tensor to the expected values
    for (auto&& it : expectedOutputData)
    {
        armnn::BindingPointInfo bindingInfo = m_Parser->GetNetworkOutputBindingInfo(subgraphId, it.first);
        auto outputExpected = MakeTensor<DataType2, NumOutputDimensions>(bindingInfo.second, it.second, isDynamic);
        BOOST_TEST(CompareTensors(outputExpected, outputStorage[it.first], false, isDynamic));
    }
}

◆ RunTest() [4/4]

void RunTest ( std::size_t  subgraphId,
const std::map< std::string, std::vector< armnn::ResolveType< armnnType1 >>> &  inputData,
const std::map< std::string, std::vector< armnn::ResolveType< armnnType2 >>> &  expectedOutputData 
)

Multiple Inputs, Multiple Outputs w/ Variable Datatypes and different dimension sizes. Executes the network with the given input tensors and checks the results against the given output tensors. This overload supports multiple inputs and multiple outputs, identified by name, and allows the input datatype to differ from the output datatype.

Definition at line 305 of file ParserFlatbuffersFixture.hpp.

References m_NetworkIdentifier, m_Parser, m_Runtime, and armnn::VerifyTensorInfoDataType().

{
    using DataType2 = armnn::ResolveType<armnnType2>;

    // Setup the armnn input tensors from the given vectors.
    armnn::InputTensors inputTensors;
    for (auto&& it : inputData)
    {
        armnn::BindingPointInfo bindingInfo = m_Parser->GetNetworkInputBindingInfo(subgraphId, it.first);
        armnn::VerifyTensorInfoDataType(bindingInfo.second, armnnType1);

        inputTensors.push_back({ bindingInfo.first, armnn::ConstTensor(bindingInfo.second, it.second.data()) });
    }

    armnn::OutputTensors outputTensors;
    outputTensors.reserve(expectedOutputData.size());
    std::map<std::string, std::vector<DataType2>> outputStorage;
    for (auto&& it : expectedOutputData)
    {
        armnn::BindingPointInfo bindingInfo = m_Parser->GetNetworkOutputBindingInfo(subgraphId, it.first);
        armnn::VerifyTensorInfoDataType(bindingInfo.second, armnnType2);

        std::vector<DataType2> out(it.second.size());
        outputStorage.emplace(it.first, out);
        outputTensors.push_back({ bindingInfo.first,
                                  armnn::Tensor(bindingInfo.second,
                                                outputStorage.at(it.first).data()) });
    }

    m_Runtime->EnqueueWorkload(m_NetworkIdentifier, inputTensors, outputTensors);

    // Checks the results.
    for (auto&& it : expectedOutputData)
    {
        std::vector<armnn::ResolveType<armnnType2>> out = outputStorage.at(it.first);
        for (unsigned int i = 0; i < out.size(); ++i)
        {
            BOOST_TEST(it.second[i] == out[i], boost::test_tools::tolerance(0.000001f));
        }
    }
}

◆ Setup()

void Setup ( )
inline

Definition at line 61 of file ParserFlatbuffersFixture.hpp.

References armnn::CpuRef, armnn::Optimize(), ReadStringToBinary(), and armnn::Success.

Referenced by BOOST_FIXTURE_TEST_CASE(), and SetupSingleInputSingleOutput().

 {
     bool ok = ReadStringToBinary();
     if (!ok)
     {
         throw armnn::Exception("LoadNetwork failed while reading binary input");
     }

     armnn::INetworkPtr network = m_Parser->CreateNetworkFromBinary(m_GraphBinary);

     if (!network)
     {
         throw armnn::Exception("The parser failed to create an ArmNN network");
     }

     auto optimized = Optimize(*network, { armnn::Compute::CpuRef },
                               m_Runtime->GetDeviceSpec());
     std::string errorMessage;

     armnn::Status ret = m_Runtime->LoadNetwork(m_NetworkIdentifier, std::move(optimized), errorMessage);

     if (ret != armnn::Status::Success)
     {
         throw armnn::Exception(
             fmt::format("The runtime failed to load the network. "
                         "Error was: {}. in {} [{}:{}]",
                         errorMessage,
                         __func__,
                         __FILE__,
                         __LINE__));
     }
 }

◆ SetupSingleInputSingleOutput()

void SetupSingleInputSingleOutput ( const std::string &  inputName,
const std::string &  outputName 
)
inline

Definition at line 93 of file ParserFlatbuffersFixture.hpp.

References Setup().

Referenced by BOOST_AUTO_TEST_CASE().

 {
     // Store the input and output name so they don't need to be passed to the single-input-single-output RunTest().
     m_SingleInputName = inputName;
     m_SingleOutputName = outputName;
     Setup();
 }

Member Data Documentation

◆ m_GraphBinary

std::vector<uint8_t> m_GraphBinary

Definition at line 50 of file ParserFlatbuffersFixture.hpp.

◆ m_JsonString

std::string m_JsonString

Definition at line 51 of file ParserFlatbuffersFixture.hpp.

◆ m_NetworkIdentifier

armnn::NetworkId m_NetworkIdentifier

Definition at line 54 of file ParserFlatbuffersFixture.hpp.

Referenced by RunTest().

◆ m_Parser

ITfLiteParserPtr m_Parser

Definition at line 52 of file ParserFlatbuffersFixture.hpp.

Referenced by ParserFlatbuffersFixture(), and RunTest().

◆ m_Runtime

armnn::IRuntimePtr m_Runtime

Definition at line 53 of file ParserFlatbuffersFixture.hpp.

Referenced by RunTest().

◆ m_SingleInputName

std::string m_SingleInputName

If the single-input-single-output overload of Setup() is called, these will store the input and output name so they don't need to be passed to the single-input-single-output overload of RunTest().

Definition at line 58 of file ParserFlatbuffersFixture.hpp.

Referenced by RunTest().

◆ m_SingleOutputName

std::string m_SingleOutputName

Definition at line 59 of file ParserFlatbuffersFixture.hpp.

Referenced by RunTest().


The documentation for this struct was generated from the following file: ParserFlatbuffersFixture.hpp