ArmNN 21.08
ParserPrototxtFixture< TParser > Struct Template Reference

#include <ParserPrototxtFixture.hpp>

Public Member Functions

 ParserPrototxtFixture ()
 
template<std::size_t NumOutputDimensions>
void RunTest (const std::vector< float > &inputData, const std::vector< float > &expectedOutputData)
 Executes the network with the given input tensor and checks the result against the given output tensor. More...
 
template<std::size_t NumOutputDimensions>
void RunComparisonTest (const std::map< std::string, std::vector< float >> &inputData, const std::map< std::string, std::vector< uint8_t >> &expectedOutputData)
 Executes the network with the given input tensors and checks the results against the given output tensors. More...
 
template<std::size_t NumOutputDimensions, typename T = float>
void RunTest (const std::map< std::string, std::vector< float >> &inputData, const std::map< std::string, std::vector< T >> &expectedOutputData)
 Executes the network with the given input tensors and checks the results against the given output tensors. More...
 
void SetupSingleInputSingleOutput (const std::string &inputName, const std::string &outputName)
 Parses and loads the network defined by the m_Prototext string. More...
 
void SetupSingleInputSingleOutput (const armnn::TensorShape &inputTensorShape, const std::string &inputName, const std::string &outputName)
 
void SetupSingleInputSingleOutput (const armnn::TensorShape &inputTensorShape, const armnn::TensorShape &outputTensorShape, const std::string &inputName, const std::string &outputName)
 
void Setup (const std::map< std::string, armnn::TensorShape > &inputShapes, const std::vector< std::string > &requestedOutputs)
 
void Setup ()
 
armnn::IOptimizedNetworkPtr SetupOptimizedNetwork (const std::map< std::string, armnn::TensorShape > &inputShapes, const std::vector< std::string > &requestedOutputs)
 

Public Attributes

std::string m_Prototext
 
std::unique_ptr< TParser, void(*)(TParser *parser)> m_Parser
 
armnn::IRuntimePtr m_Runtime
 
armnn::NetworkId m_NetworkIdentifier
 
armnn::TensorShape m_SingleOutputShape
 This will store the output shape so it doesn't need to be passed to the single-input-single-output overload of RunTest(). More...
 
std::string m_SingleInputName
 If the single-input-single-output overload of Setup() is called, these will store the input and output name so they don't need to be passed to the single-input-single-output overload of RunTest(). More...
 
std::string m_SingleOutputName
 

Detailed Description

template<typename TParser>
struct armnnUtils::ParserPrototxtFixture< TParser >

Definition at line 24 of file ParserPrototxtFixture.hpp.

Constructor & Destructor Documentation

◆ ParserPrototxtFixture()

Member Function Documentation

◆ RunComparisonTest()

void RunComparisonTest ( const std::map< std::string, std::vector< float >> &  inputData,
const std::map< std::string, std::vector< uint8_t >> &  expectedOutputData 
)

Executes the network with the given input tensors and checks the results against the given output tensors.

Calls RunTest with output type of uint8_t for checking comparison operators.

Definition at line 176 of file ParserPrototxtFixture.hpp.

Referenced by ParserPrototxtFixture< TParser >::ParserPrototxtFixture().

179 {
180  RunTest<NumOutputDimensions, uint8_t>(inputData, expectedOutputData);
181 }

◆ RunTest() [1/2]

void RunTest ( const std::vector< float > &  inputData,
const std::vector< float > &  expectedOutputData 
)

Executes the network with the given input tensor and checks the result against the given output tensor.

This overload assumes that the network has a single input and a single output.

Definition at line 168 of file ParserPrototxtFixture.hpp.

References ParserPrototxtFixture< TParser >::m_SingleInputName, and ParserPrototxtFixture< TParser >::m_SingleOutputName.

Referenced by ParserPrototxtFixture< TParser >::ParserPrototxtFixture().

170 {
171  RunTest<NumOutputDimensions>({ { m_SingleInputName, inputData } }, { { m_SingleOutputName, expectedOutputData } });
172 }

◆ RunTest() [2/2]

void RunTest ( const std::map< std::string, std::vector< float >> &  inputData,
const std::map< std::string, std::vector< T >> &  expectedOutputData 
)

Executes the network with the given input tensors and checks the results against the given output tensors.

This overload supports multiple inputs and multiple outputs, identified by name.

Definition at line 185 of file ParserPrototxtFixture.hpp.

References CHECK_LOCATION, CompareTensors(), TensorShape::GetNumDimensions(), ParserPrototxtFixture< TParser >::m_NetworkIdentifier, ParserPrototxtFixture< TParser >::m_Parser, ParserPrototxtFixture< TParser >::m_Runtime, and ParserPrototxtFixture< TParser >::m_SingleOutputShape.

187 {
188  // Sets up the armnn input tensors from the given vectors.
189  armnn::InputTensors inputTensors;
190  for (auto&& it : inputData)
191  {
192  armnn::BindingPointInfo bindingInfo = m_Parser->GetNetworkInputBindingInfo(it.first);
193  inputTensors.push_back({ bindingInfo.first, armnn::ConstTensor(bindingInfo.second, it.second.data()) });
194  }
195 
196  // Allocates storage for the output tensors to be written to and sets up the armnn output tensors.
197  std::map<std::string, std::vector<T>> outputStorage;
198  armnn::OutputTensors outputTensors;
199  for (auto&& it : expectedOutputData)
200  {
201  armnn::BindingPointInfo bindingInfo = m_Parser->GetNetworkOutputBindingInfo(it.first);
202  outputStorage.emplace(it.first, std::vector<T>(bindingInfo.second.GetNumElements()));
203  outputTensors.push_back(
204  { bindingInfo.first, armnn::Tensor(bindingInfo.second, outputStorage.at(it.first).data()) });
205  }
206 
207  m_Runtime->EnqueueWorkload(m_NetworkIdentifier, inputTensors, outputTensors);
208 
209  // Compares each output tensor to the expected values.
210  for (auto&& it : expectedOutputData)
211  {
212  armnn::BindingPointInfo bindingInfo = m_Parser->GetNetworkOutputBindingInfo(it.first);
213  if (bindingInfo.second.GetNumElements() != it.second.size())
214  {
215  throw armnn::Exception(fmt::format("Output tensor {0} is expected to have {1} elements. "
216  "{2} elements supplied. {3}",
217  it.first,
218  bindingInfo.second.GetNumElements(),
219  it.second.size(),
220  CHECK_LOCATION().AsString()));
221  }
222 
223  // If the expected output shape is set, the output tensor checks will be carried out.
224  if (m_SingleOutputShape.GetNumDimensions() != 0)
225  {
226 
227  if (bindingInfo.second.GetShape().GetNumDimensions() == NumOutputDimensions &&
228  bindingInfo.second.GetShape().GetNumDimensions() == m_SingleOutputShape.GetNumDimensions())
229  {
230  for (unsigned int i = 0; i < m_SingleOutputShape.GetNumDimensions(); ++i)
231  {
232  if (m_SingleOutputShape[i] != bindingInfo.second.GetShape()[i])
233  {
234  // This exception message could not be created by fmt:format because of an oddity in
235  // the operator << of TensorShape.
236  std::stringstream message;
237  message << "Output tensor " << it.first << " is expected to have "
238  << bindingInfo.second.GetShape() << " shape. "
239  << m_SingleOutputShape << " shape supplied. "
240  << CHECK_LOCATION().AsString();
241  throw armnn::Exception(message.str());
242  }
243  }
244  }
245  else
246  {
247  throw armnn::Exception(fmt::format("Output tensor {0} is expected to have {1} dimensions. "
248  "{2} dimensions supplied. {3}",
249  it.first,
250  bindingInfo.second.GetShape().GetNumDimensions(),
251  NumOutputDimensions,
252  CHECK_LOCATION().AsString()));
253  }
254  }
255 
256  auto outputExpected = it.second;
257  auto shape = bindingInfo.second.GetShape();
258  if (std::is_same<T, uint8_t>::value)
259  {
260  auto result = CompareTensors(outputExpected, outputStorage[it.first], shape, shape, true);
261  CHECK_MESSAGE(result.m_Result, result.m_Message.str());
262  }
263  else
264  {
265  auto result = CompareTensors(outputExpected, outputStorage[it.first], shape, shape);
266  CHECK_MESSAGE(result.m_Result, result.m_Message.str());
267  }
268  }
269 }

◆ Setup() [1/2]

void Setup ( const std::map< std::string, armnn::TensorShape > &  inputShapes,
const std::vector< std::string > &  requestedOutputs 
)

Definition at line 121 of file ParserPrototxtFixture.hpp.

References CHECK_LOCATION, armnn::CpuRef, ParserPrototxtFixture< TParser >::m_NetworkIdentifier, ParserPrototxtFixture< TParser >::m_Parser, ParserPrototxtFixture< TParser >::m_Prototext, ParserPrototxtFixture< TParser >::m_Runtime, armnn::Optimize(), and armnn::Success.

123 {
124  std::string errorMessage;
125 
126  armnn::INetworkPtr network =
127  m_Parser->CreateNetworkFromString(m_Prototext.c_str(), inputShapes, requestedOutputs);
128  auto optimized = Optimize(*network, { armnn::Compute::CpuRef }, m_Runtime->GetDeviceSpec());
129  armnn::Status ret = m_Runtime->LoadNetwork(m_NetworkIdentifier, move(optimized), errorMessage);
130  if (ret != armnn::Status::Success)
131  {
132  throw armnn::Exception(fmt::format("LoadNetwork failed with error: '{0}' {1}",
133  errorMessage,
134  CHECK_LOCATION().AsString()));
135  }
136 }

◆ Setup() [2/2]

void Setup ( )

Definition at line 139 of file ParserPrototxtFixture.hpp.

References CHECK_LOCATION, armnn::CpuRef, ParserPrototxtFixture< TParser >::m_NetworkIdentifier, ParserPrototxtFixture< TParser >::m_Parser, ParserPrototxtFixture< TParser >::m_Prototext, ParserPrototxtFixture< TParser >::m_Runtime, armnn::Optimize(), and armnn::Success.

Referenced by ParserPrototxtFixture< TParser >::ParserPrototxtFixture(), ParserPrototxtFixture< TParser >::SetupSingleInputSingleOutput(), and TEST_SUITE().

140 {
141  std::string errorMessage;
142 
143  armnn::INetworkPtr network =
144  m_Parser->CreateNetworkFromString(m_Prototext.c_str());
145  auto optimized = Optimize(*network, { armnn::Compute::CpuRef }, m_Runtime->GetDeviceSpec());
146  armnn::Status ret = m_Runtime->LoadNetwork(m_NetworkIdentifier, move(optimized), errorMessage);
147  if (ret != armnn::Status::Success)
148  {
149  throw armnn::Exception(fmt::format("LoadNetwork failed with error: '{0}' {1}",
150  errorMessage,
151  CHECK_LOCATION().AsString()));
152  }
153 }

◆ SetupOptimizedNetwork()

armnn::IOptimizedNetworkPtr SetupOptimizedNetwork ( const std::map< std::string, armnn::TensorShape > &  inputShapes,
const std::vector< std::string > &  requestedOutputs 
)

Definition at line 156 of file ParserPrototxtFixture.hpp.

References armnn::CpuRef, ParserPrototxtFixture< TParser >::m_Parser, ParserPrototxtFixture< TParser >::m_Prototext, ParserPrototxtFixture< TParser >::m_Runtime, and armnn::Optimize().

Referenced by ParserPrototxtFixture< TParser >::ParserPrototxtFixture().

159 {
160  armnn::INetworkPtr network =
161  m_Parser->CreateNetworkFromString(m_Prototext.c_str(), inputShapes, requestedOutputs);
162  auto optimized = Optimize(*network, { armnn::Compute::CpuRef }, m_Runtime->GetDeviceSpec());
163  return optimized;
164 }

◆ SetupSingleInputSingleOutput() [1/3]

void SetupSingleInputSingleOutput ( const std::string &  inputName,
const std::string &  outputName 
)

Parses and loads the network defined by the m_Prototext string.

Definition at line 86 of file ParserPrototxtFixture.hpp.

References ParserPrototxtFixture< TParser >::m_SingleInputName, ParserPrototxtFixture< TParser >::m_SingleOutputName, and ParserPrototxtFixture< TParser >::Setup().

Referenced by ParserPrototxtFixture< TParser >::ParserPrototxtFixture().

88 {
89  // Stores the input and output name so they don't need to be passed to the single-input-single-output RunTest().
90  m_SingleInputName = inputName;
91  m_SingleOutputName = outputName;
92  Setup({ }, { outputName });
93 }

◆ SetupSingleInputSingleOutput() [2/3]

void SetupSingleInputSingleOutput ( const armnn::TensorShape inputTensorShape,
const std::string &  inputName,
const std::string &  outputName 
)

Definition at line 96 of file ParserPrototxtFixture.hpp.

References ParserPrototxtFixture< TParser >::m_SingleInputName, ParserPrototxtFixture< TParser >::m_SingleOutputName, and ParserPrototxtFixture< TParser >::Setup().

99 {
100  // Stores the input and output name so they don't need to be passed to the single-input-single-output RunTest().
101  m_SingleInputName = inputName;
102  m_SingleOutputName = outputName;
103  Setup({ { inputName, inputTensorShape } }, { outputName });
104 }

◆ SetupSingleInputSingleOutput() [3/3]

void SetupSingleInputSingleOutput ( const armnn::TensorShape inputTensorShape,
const armnn::TensorShape outputTensorShape,
const std::string &  inputName,
const std::string &  outputName 
)

Definition at line 107 of file ParserPrototxtFixture.hpp.

References ParserPrototxtFixture< TParser >::m_SingleInputName, ParserPrototxtFixture< TParser >::m_SingleOutputName, ParserPrototxtFixture< TParser >::m_SingleOutputShape, and ParserPrototxtFixture< TParser >::Setup().

111 {
112  // Stores the input name, the output name and the output tensor shape
113  // so they don't need to be passed to the single-input-single-output RunTest().
114  m_SingleInputName = inputName;
115  m_SingleOutputName = outputName;
116  m_SingleOutputShape = outputTensorShape;
117  Setup({ { inputName, inputTensorShape } }, { outputName });
118 }

Member Data Documentation

◆ m_NetworkIdentifier

◆ m_Parser

std::unique_ptr<TParser, void(*)(TParser* parser)> m_Parser

◆ m_Prototext

◆ m_Runtime

◆ m_SingleInputName

std::string m_SingleInputName

If the single-input-single-output overload of Setup() is called, these will store the input and output name so they don't need to be passed to the single-input-single-output overload of RunTest().

Definition at line 76 of file ParserPrototxtFixture.hpp.

Referenced by ParserPrototxtFixture< TParser >::RunTest(), and ParserPrototxtFixture< TParser >::SetupSingleInputSingleOutput().

◆ m_SingleOutputName

◆ m_SingleOutputShape

armnn::TensorShape m_SingleOutputShape

This will store the output shape so it doesn't need to be passed to the single-input-single-output overload of RunTest().

Definition at line 82 of file ParserPrototxtFixture.hpp.

Referenced by ParserPrototxtFixture< TParser >::RunTest(), and ParserPrototxtFixture< TParser >::SetupSingleInputSingleOutput().


The documentation for this struct was generated from the following file: ParserPrototxtFixture.hpp