ArmNN
 21.02
ParserPrototxtFixture< TParser > Struct Template Reference

#include <ParserPrototxtFixture.hpp>

Public Member Functions

 ParserPrototxtFixture ()
 
template<std::size_t NumOutputDimensions>
void RunTest (const std::vector< float > &inputData, const std::vector< float > &expectedOutputData)
 Executes the network with the given input tensor and checks the result against the given output tensor. More...
 
template<std::size_t NumOutputDimensions>
void RunComparisonTest (const std::map< std::string, std::vector< float >> &inputData, const std::map< std::string, std::vector< uint8_t >> &expectedOutputData)
 Executes the network with the given input tensors and checks the results against the given output tensors. More...
 
template<std::size_t NumOutputDimensions, typename T = float>
void RunTest (const std::map< std::string, std::vector< float >> &inputData, const std::map< std::string, std::vector< T >> &expectedOutputData)
 Executes the network with the given input tensors and checks the results against the given output tensors. More...
 
void SetupSingleInputSingleOutput (const std::string &inputName, const std::string &outputName)
 Parses and loads the network defined by the m_Prototext string. More...
 
void SetupSingleInputSingleOutput (const armnn::TensorShape &inputTensorShape, const std::string &inputName, const std::string &outputName)
 
void SetupSingleInputSingleOutput (const armnn::TensorShape &inputTensorShape, const armnn::TensorShape &outputTensorShape, const std::string &inputName, const std::string &outputName)
 
void Setup (const std::map< std::string, armnn::TensorShape > &inputShapes, const std::vector< std::string > &requestedOutputs)
 
void Setup ()
 
armnn::IOptimizedNetworkPtr SetupOptimizedNetwork (const std::map< std::string, armnn::TensorShape > &inputShapes, const std::vector< std::string > &requestedOutputs)
 

Public Attributes

std::string m_Prototext
 
std::unique_ptr< TParser, void(*)(TParser *parser)> m_Parser
 
armnn::IRuntimePtr m_Runtime
 
armnn::NetworkId m_NetworkIdentifier
 
armnn::TensorShape m_SingleOutputShape
 This will store the output shape so it doesn't need to be passed to the single-input-single-output overload of RunTest(). More...
 
std::string m_SingleInputName
 If the single-input-single-output overload of Setup() is called, these will store the input and output name so they don't need to be passed to the single-input-single-output overload of RunTest(). More...
 
std::string m_SingleOutputName
 

Detailed Description

template<typename TParser>
struct armnnUtils::ParserPrototxtFixture< TParser >

Definition at line 23 of file ParserPrototxtFixture.hpp.

Constructor & Destructor Documentation

◆ ParserPrototxtFixture()

Definition at line 25 of file ParserPrototxtFixture.hpp.

26  : m_Parser(TParser::Create())
27  , m_Runtime(armnn::IRuntime::Create(armnn::IRuntime::CreationOptions()))
28  , m_NetworkIdentifier(-1)
29  {
30  }
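
A typical usage pattern is to derive a per-test fixture from ParserPrototxtFixture, fill m_Prototext in the derived constructor and then call one of the Setup overloads. The following is a minimal sketch only; the fixture name, tensor names and model text are hypothetical, and the ONNX parser is assumed:

    struct SimpleNetworkFixture : public armnnUtils::ParserPrototxtFixture<armnnOnnxParser::IOnnxParser>
    {
        SimpleNetworkFixture()
        {
            // m_Prototext holds the textual model definition that Setup() later parses.
            m_Prototext = R"(
                # model definition in the parser's text format goes here
            )";
            // Stores "input"/"output" so the single-input-single-output RunTest() overload can be used.
            SetupSingleInputSingleOutput("input", "output");
        }
    };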

Member Function Documentation

◆ RunComparisonTest()

void RunComparisonTest ( const std::map< std::string, std::vector< float >> &  inputData,
const std::map< std::string, std::vector< uint8_t >> &  expectedOutputData 
)

Executes the network with the given input tensors and checks the results against the given output tensors.

Calls RunTest with output type of uint8_t for checking comparison operators.

Definition at line 175 of file ParserPrototxtFixture.hpp.

Referenced by ParserPrototxtFixture< armnnOnnxParser::IOnnxParser >::ParserPrototxtFixture().

178 {
179  RunTest<NumOutputDimensions, uint8_t>(inputData, expectedOutputData);
180 }
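
For example, a test for a comparison layer can supply float inputs and uint8_t (boolean) expected outputs. A hypothetical sketch, assuming a ComparisonFixture derived from ParserPrototxtFixture whose constructor has set up a two-input network:

    BOOST_FIXTURE_TEST_CASE(GreaterProducesBooleanOutput, ComparisonFixture)
    {
        // Boolean results are represented as uint8_t values of 0 or 1.
        RunComparisonTest<4>({ { "input0", { 1.0f, 5.0f } },
                               { "input1", { 2.0f, 4.0f } } },
                             { { "output", { 0, 1 } } });
    }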

◆ RunTest() [1/2]

void RunTest ( const std::vector< float > &  inputData,
const std::vector< float > &  expectedOutputData 
)

Executes the network with the given input tensor and checks the result against the given output tensor.

This overload assumes that the network has a single input and a single output.

Definition at line 167 of file ParserPrototxtFixture.hpp.

Referenced by ParserPrototxtFixture< armnnOnnxParser::IOnnxParser >::ParserPrototxtFixture().

169 {
170  RunTest<NumOutputDimensions>({ { m_SingleInputName, inputData } }, { { m_SingleOutputName, expectedOutputData } });
171 }
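
Assuming SetupSingleInputSingleOutput() has already been called (typically from the derived fixture's constructor), a test case only needs to supply flat data vectors. A hypothetical example using the SimpleNetworkFixture sketched above:

    BOOST_FIXTURE_TEST_CASE(SimpleReluTest, SimpleNetworkFixture)
    {
        // The template argument is the rank of the output tensor; the input and
        // expected data are matched against the stored input/output names.
        RunTest<4>({ -1.0f, 0.0f, 2.0f, 3.0f }, { 0.0f, 0.0f, 2.0f, 3.0f });
    }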

◆ RunTest() [2/2]

void RunTest ( const std::map< std::string, std::vector< float >> &  inputData,
const std::map< std::string, std::vector< T >> &  expectedOutputData 
)

Executes the network with the given input tensors and checks the results against the given output tensors.

This overload supports multiple inputs and multiple outputs, identified by name.

Definition at line 184 of file ParserPrototxtFixture.hpp.

186 {
187  // Sets up the armnn input tensors from the given vectors.
188  armnn::InputTensors inputTensors;
189  for (auto&& it : inputData)
190  {
191  armnn::BindingPointInfo bindingInfo = m_Parser->GetNetworkInputBindingInfo(it.first);
192  inputTensors.push_back({ bindingInfo.first, armnn::ConstTensor(bindingInfo.second, it.second.data()) });
193  }
194 
195  // Allocates storage for the output tensors to be written to and sets up the armnn output tensors.
196  std::map<std::string, boost::multi_array<T, NumOutputDimensions>> outputStorage;
197  armnn::OutputTensors outputTensors;
198  for (auto&& it : expectedOutputData)
199  {
200  armnn::BindingPointInfo bindingInfo = m_Parser->GetNetworkOutputBindingInfo(it.first);
201  outputStorage.emplace(it.first, MakeTensor<T, NumOutputDimensions>(bindingInfo.second));
202  outputTensors.push_back(
203  { bindingInfo.first, armnn::Tensor(bindingInfo.second, outputStorage.at(it.first).data()) });
204  }
205 
206  m_Runtime->EnqueueWorkload(m_NetworkIdentifier, inputTensors, outputTensors);
207 
208  // Compares each output tensor to the expected values.
209  for (auto&& it : expectedOutputData)
210  {
211  armnn::BindingPointInfo bindingInfo = m_Parser->GetNetworkOutputBindingInfo(it.first);
212  if (bindingInfo.second.GetNumElements() != it.second.size())
213  {
214  throw armnn::Exception(fmt::format("Output tensor {0} is expected to have {1} elements. "
215  "{2} elements supplied. {3}",
216  it.first,
217  bindingInfo.second.GetNumElements(),
218  it.second.size(),
219  CHECK_LOCATION().AsString()));
220  }
221 
222  // If the expected output shape is set, the output tensor checks will be carried out.
223  if (m_SingleOutputShape.GetNumDimensions() != 0)
224  {
225 
226  if (bindingInfo.second.GetShape().GetNumDimensions() == NumOutputDimensions &&
227  bindingInfo.second.GetShape().GetNumDimensions() == m_SingleOutputShape.GetNumDimensions())
228  {
229  for (unsigned int i = 0; i < m_SingleOutputShape.GetNumDimensions(); ++i)
230  {
231  if (m_SingleOutputShape[i] != bindingInfo.second.GetShape()[i])
232  {
233  // This exception message could not be created by fmt:format because of an oddity in
234  // the operator << of TensorShape.
235  std::stringstream message;
236  message << "Output tensor " << it.first << " is expected to have "
237  << bindingInfo.second.GetShape() << " shape. "
238  << m_SingleOutputShape << " shape supplied. "
239  << CHECK_LOCATION().AsString();
240  throw armnn::Exception(message.str());
241  }
242  }
243  }
244  else
245  {
246  throw armnn::Exception(fmt::format("Output tensor {0} is expected to have {1} dimensions. "
247  "{2} dimensions supplied. {3}",
248  it.first,
249  bindingInfo.second.GetShape().GetNumDimensions(),
250  NumOutputDimensions,
251  CHECK_LOCATION().AsString()));
252  }
253  }
254 
255  auto outputExpected = MakeTensor<T, NumOutputDimensions>(bindingInfo.second, it.second);
256  if (std::is_same<T, uint8_t>::value)
257  {
258  BOOST_TEST(CompareTensors(outputExpected, outputStorage[it.first], true));
259  }
260  else
261  {
262  BOOST_TEST(CompareTensors(outputExpected, outputStorage[it.first]));
263  }
264  }
265 }
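
A test with several named inputs or outputs passes the data as maps keyed by tensor name. A hypothetical sketch, assuming a MultiInputFixture whose constructor called Setup() with matching input shapes and requested outputs (see the Setup() example below):

    BOOST_FIXTURE_TEST_CASE(TwoInputAddTest, MultiInputFixture)
    {
        RunTest<2>({ { "input0", { 1.0f, 2.0f } },
                     { "input1", { 3.0f, 4.0f } } },
                   { { "output", { 4.0f, 6.0f } } });
    }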

◆ Setup() [1/2]

void Setup ( const std::map< std::string, armnn::TensorShape > &  inputShapes,
const std::vector< std::string > &  requestedOutputs 
)

Definition at line 120 of file ParserPrototxtFixture.hpp.

122 {
123  std::string errorMessage;
124 
125  armnn::INetworkPtr network =
126  m_Parser->CreateNetworkFromString(m_Prototext.c_str(), inputShapes, requestedOutputs);
127  auto optimized = Optimize(*network, { armnn::Compute::CpuRef }, m_Runtime->GetDeviceSpec());
128  armnn::Status ret = m_Runtime->LoadNetwork(m_NetworkIdentifier, move(optimized), errorMessage);
129  if (ret != armnn::Status::Success)
130  {
131  throw armnn::Exception(fmt::format("LoadNetwork failed with error: '{0}' {1}",
132  errorMessage,
133  CHECK_LOCATION().AsString()));
134  }
135 }
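
Tests that do not use the single-input-single-output helpers can call this overload directly, giving the input shapes and the outputs to request from the parser. A hypothetical sketch of such a fixture:

    struct MultiInputFixture : public armnnUtils::ParserPrototxtFixture<armnnOnnxParser::IOnnxParser>
    {
        MultiInputFixture()
        {
            m_Prototext = R"(
                # model definition with two inputs and one output goes here
            )";
            Setup({ { "input0", armnn::TensorShape({ 1, 2 }) },
                    { "input1", armnn::TensorShape({ 1, 2 }) } },
                  { "output" });
        }
    };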

◆ Setup() [2/2]

void Setup ( )

Definition at line 138 of file ParserPrototxtFixture.hpp.

Referenced by BOOST_FIXTURE_TEST_CASE(), ParserPrototxtFixture< armnnOnnxParser::IOnnxParser >::ParserPrototxtFixture(), and ParserPrototxtFixture< armnnOnnxParser::IOnnxParser >::SetupSingleInputSingleOutput().

139 {
140  std::string errorMessage;
141 
142  armnn::INetworkPtr network =
143  m_Parser->CreateNetworkFromString(m_Prototext.c_str());
144  auto optimized = Optimize(*network, { armnn::Compute::CpuRef }, m_Runtime->GetDeviceSpec());
145  armnn::Status ret = m_Runtime->LoadNetwork(m_NetworkIdentifier, move(optimized), errorMessage);
146  if (ret != armnn::Status::Success)
147  {
148  throw armnn::Exception(fmt::format("LoadNetwork failed with error: '{0}' {1}",
149  errorMessage,
150  CHECK_LOCATION().AsString()));
151  }
152 }

◆ SetupOptimizedNetwork()

armnn::IOptimizedNetworkPtr SetupOptimizedNetwork ( const std::map< std::string, armnn::TensorShape > &  inputShapes,
const std::vector< std::string > &  requestedOutputs 
)

Definition at line 155 of file ParserPrototxtFixture.hpp.

Referenced by BOOST_FIXTURE_TEST_CASE(), and ParserPrototxtFixture< armnnOnnxParser::IOnnxParser >::ParserPrototxtFixture().

158 {
159  armnn::INetworkPtr network =
160  m_Parser->CreateNetworkFromString(m_Prototext.c_str(), inputShapes, requestedOutputs);
161  auto optimized = Optimize(*network, { armnn::Compute::CpuRef }, m_Runtime->GetDeviceSpec());
162  return optimized;
163 }
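
This is useful when a test wants the optimized network itself rather than a loaded one. A hypothetical sketch, assuming a PrototextOnlyFixture that fills m_Prototext in its constructor but does not call Setup():

    BOOST_FIXTURE_TEST_CASE(OptimizeOnly, PrototextOnlyFixture)
    {
        armnn::IOptimizedNetworkPtr optimized =
            SetupOptimizedNetwork({ { "input", armnn::TensorShape({ 1, 3 }) } }, { "output" });

        // The optimized network can then be loaded explicitly, mirroring what Setup() does internally.
        std::string errorMessage;
        armnn::Status ret = m_Runtime->LoadNetwork(m_NetworkIdentifier, std::move(optimized), errorMessage);
        BOOST_TEST((ret == armnn::Status::Success));
    }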

◆ SetupSingleInputSingleOutput() [1/3]

void SetupSingleInputSingleOutput ( const std::string &  inputName,
const std::string &  outputName 
)

Parses and loads the network defined by the m_Prototext string.

Definition at line 85 of file ParserPrototxtFixture.hpp.

Referenced by BOOST_FIXTURE_TEST_CASE(), and ParserPrototxtFixture< armnnOnnxParser::IOnnxParser >::ParserPrototxtFixture().

87 {
88  // Stores the input and output name so they don't need to be passed to the single-input-single-output RunTest().
89  m_SingleInputName = inputName;
90  m_SingleOutputName = outputName;
91  Setup({ }, { outputName });
92 }

◆ SetupSingleInputSingleOutput() [2/3]

void SetupSingleInputSingleOutput ( const armnn::TensorShape inputTensorShape,
const std::string &  inputName,
const std::string &  outputName 
)

Definition at line 95 of file ParserPrototxtFixture.hpp.

98 {
99  // Stores the input and output name so they don't need to be passed to the single-input-single-output RunTest().
100  m_SingleInputName = inputName;
101  m_SingleOutputName = outputName;
102  Setup({ { inputName, inputTensorShape } }, { outputName });
103 }

◆ SetupSingleInputSingleOutput() [3/3]

void SetupSingleInputSingleOutput ( const armnn::TensorShape inputTensorShape,
const armnn::TensorShape outputTensorShape,
const std::string &  inputName,
const std::string &  outputName 
)

Definition at line 106 of file ParserPrototxtFixture.hpp.

110 {
111  // Stores the input name, the output name and the output tensor shape
112  // so they don't need to be passed to the single-input-single-output RunTest().
113  m_SingleInputName = inputName;
114  m_SingleOutputName = outputName;
115  m_SingleOutputShape = outputTensorShape;
116  Setup({ { inputName, inputTensorShape } }, { outputName });
117 }
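
This overload is useful when the model text does not carry shape information and the test also wants the output shape checked by RunTest(). A hypothetical call from a derived fixture's constructor:

    // Supplies explicit shapes; the output shape is stored in m_SingleOutputShape
    // and verified against the parsed network's output inside RunTest().
    SetupSingleInputSingleOutput(armnn::TensorShape({ 1, 1, 2, 2 }),   // input shape
                                 armnn::TensorShape({ 1, 1, 2, 2 }),   // expected output shape
                                 "input",
                                 "output");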

Member Data Documentation

◆ m_NetworkIdentifier

armnn::NetworkId m_NetworkIdentifier

◆ m_Parser

std::unique_ptr< TParser, void(*)(TParser *parser)> m_Parser

◆ m_Prototext

std::string m_Prototext

◆ m_Runtime

armnn::IRuntimePtr m_Runtime

◆ m_SingleInputName

std::string m_SingleInputName

If the single-input-single-output overload of Setup() is called, these will store the input and output name so they don't need to be passed to the single-input-single-output overload of RunTest().

Definition at line 75 of file ParserPrototxtFixture.hpp.

Referenced by ParserPrototxtFixture< armnnOnnxParser::IOnnxParser >::RunTest(), and ParserPrototxtFixture< armnnOnnxParser::IOnnxParser >::SetupSingleInputSingleOutput().

◆ m_SingleOutputName

std::string m_SingleOutputName

◆ m_SingleOutputShape

armnn::TensorShape m_SingleOutputShape

This will store the output shape so it doesn't need to be passed to the single-input-single-output overload of RunTest().

Definition at line 81 of file ParserPrototxtFixture.hpp.

Referenced by ParserPrototxtFixture< armnnOnnxParser::IOnnxParser >::RunTest(), and ParserPrototxtFixture< armnnOnnxParser::IOnnxParser >::SetupSingleInputSingleOutput().


The documentation for this struct was generated from the following file:

ParserPrototxtFixture.hpp