ArmNN 20.02
CaffeParserBase Class Reference

#include <CaffeParser.hpp>

Inheritance diagram for CaffeParserBase:
ICaffeParser ← CaffeParserBase ← CaffeParser, RecordByRecordCaffeParser
(CaffeParserBase derives from ICaffeParser; CaffeParser and RecordByRecordCaffeParser derive from CaffeParserBase.)

Public Member Functions

virtual armnn::INetworkPtr CreateNetworkFromTextFile (const char *graphFile, const std::map< std::string, armnn::TensorShape > &inputShapes, const std::vector< std::string > &requestedOutputs) override
 Create the network from a protobuf text file on disk. More...
 
virtual armnn::INetworkPtr CreateNetworkFromString (const char *protoText, const std::map< std::string, armnn::TensorShape > &inputShapes, const std::vector< std::string > &requestedOutputs) override
 Creates the network directly from protobuf text in a string. Useful for debugging/testing. More...
 
virtual BindingPointInfo GetNetworkInputBindingInfo (const std::string &name) const override
 Retrieves binding info (layer id and tensor info) for the network input identified by the given layer name. More...
 
virtual BindingPointInfo GetNetworkOutputBindingInfo (const std::string &name) const override
 Retrieves binding info (layer id and tensor info) for the network output identified by the given layer name. More...
 
 CaffeParserBase ()
 
- Public Member Functions inherited from ICaffeParser
virtual armnn::INetworkPtr CreateNetworkFromBinaryFile (const char *graphFile, const std::map< std::string, armnn::TensorShape > &inputShapes, const std::vector< std::string > &requestedOutputs)=0
 Create the network from a protobuf binary file on the disk. More...
 

Protected Types

using OperationParsingFunction = void(CaffeParserBase::*)(const caffe::LayerParameter &layerParam)
 

Protected Member Functions

armnn::TensorInfo BlobShapeToTensorInfo (const caffe::BlobShape &blobShape) const
 Converts Caffe's protobuf tensor shape format to ArmNN's. More...
 
void TrackInputBinding (armnn::IConnectableLayer *layer, armnn::LayerBindingId id, const armnn::TensorInfo &tensorInfo)
 
void TrackOutputBinding (armnn::IConnectableLayer *layer, armnn::LayerBindingId id, const armnn::TensorInfo &tensorInfo)
 
void SetArmnnOutputSlotForCaffeTop (const std::string &caffeTopName, armnn::IOutputSlot &armnnOutputSlot)
 
armnn::IOutputSlot & GetArmnnOutputSlotForCaffeTop (const std::string &caffeTopName) const
 Retrieves the Armnn IOutputSlot representing the given Caffe top. More...
 
void Cleanup ()
 
armnn::INetworkPtr CreateNetworkFromNetParameter (caffe::NetParameter &netParam, const std::map< std::string, armnn::TensorShape > &inputShapes, const std::vector< std::string > &requestedOutputs)
 Parses a NetParameter loaded into memory from one of the other CreateNetwork*. More...
 
void LoadNetParam (caffe::NetParameter &netParameter)
 Does the actual conversion from caffe::NetParameter to armnn::INetwork. More...
 
std::vector< const caffe::LayerParameter * > GetInputs (const caffe::LayerParameter &layerParam)
 Find the Caffe layers listed as inputs (bottoms) for a given layer. More...
 
void ResolveInPlaceLayers (caffe::NetParameter &netParameter)
 Modifies the Caffe network to replace "in-place" layers (whose top() and bottom() are both the same) with regular layers. More...
 
void ParseInputLayer (const caffe::LayerParameter &layerParam)
 Adds an armnn layer to m_Network given a Caffe LayerParameter of the correct type, and is responsible for recording any newly created IOutputSlots using SetArmnnOutputSlotForCaffeTop(). More...
 
void ParseConvLayer (const caffe::LayerParameter &layerParam)
 
void ParsePoolingLayer (const caffe::LayerParameter &layerParam)
 
void ParseReluLayer (const caffe::LayerParameter &layerParam)
 
void ParseLRNLayer (const caffe::LayerParameter &layerParam)
 
void ParseInnerProductLayer (const caffe::LayerParameter &layerParam)
 
void ParseSoftmaxLayer (const caffe::LayerParameter &layerParam)
 
void ParseEltwiseLayer (const caffe::LayerParameter &layerParam)
 
void ParseConcatLayer (const caffe::LayerParameter &layerParam)
 
void ParseBatchNormLayer (const caffe::LayerParameter &layerParam)
 
void ParseScaleLayer (const caffe::LayerParameter &layerParam)
 
void ParseSplitLayer (const caffe::LayerParameter &layerParam)
 
void ParseDropoutLayer (const caffe::LayerParameter &layerParam)
 
void AddConvLayerWithSplits (const caffe::LayerParameter &layerParam, const armnn::Convolution2dDescriptor &desc, unsigned int kernelW, unsigned int kernelH)
 ParseConv may use these helpers depending on the group parameter. More...
 
void AddConvLayerWithDepthwiseConv (const caffe::LayerParameter &layerParam, const armnn::Convolution2dDescriptor &desc, unsigned int kernelW, unsigned int kernelH)
 
- Protected Member Functions inherited from ICaffeParser
virtual ~ICaffeParser ()
 

Static Protected Member Functions

static void TrackBindingPoint (armnn::IConnectableLayer *layer, armnn::LayerBindingId id, const armnn::TensorInfo &tensorInfo, const char *bindingPointDesc, std::unordered_map< std::string, BindingPointInfo > &nameToBindingInfo)
 
static std::pair< armnn::LayerBindingId, armnn::TensorInfo > GetBindingInfo (const std::string &layerName, const char *bindingPointDesc, const std::unordered_map< std::string, BindingPointInfo > &bindingInfos)
 

Protected Attributes

std::unordered_map< std::string, BindingPointInfo > m_NetworkInputsBindingInfo
 maps input layer names to their corresponding ids and tensor infos More...
 
std::unordered_map< std::string, BindingPointInfo > m_NetworkOutputsBindingInfo
 maps output layer names to their corresponding ids and tensor infos More...
 
armnn::INetworkPtr m_Network
 
std::map< std::string, armnn::TensorShape > m_InputShapes
 
std::unordered_map< std::string, armnn::IOutputSlot * > m_ArmnnOutputSlotForCaffeTop
 As we add armnn layers we store the armnn IOutputSlot which corresponds to the Caffe tops. More...
 
std::vector< std::string > m_RequestedOutputs
 
std::map< std::string, const caffe::LayerParameter * > m_CaffeLayersByTopName
 

Static Protected Attributes

static const std::map< std::string, OperationParsingFunction > ms_CaffeLayerNameToParsingFunctions
 Maps Caffe layer names to parsing member functions. More...
 

Additional Inherited Members

- Static Public Member Functions inherited from ICaffeParser
static ICaffeParser * CreateRaw ()
 
static ICaffeParserPtr Create ()
 
static void Destroy (ICaffeParser *parser)
 

Detailed Description

Definition at line 26 of file CaffeParser.hpp.
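
Typical client code reaches this class through the ICaffeParser factory rather than instantiating CaffeParserBase directly. A minimal usage sketch follows (it requires ArmNN 20.02 with the Caffe parser built; "model.prototxt" and the "data"/"prob" tensor names are placeholders, not part of this API):

```cpp
#include "armnnCaffeParser/ICaffeParser.hpp"
#include "armnn/ArmNN.hpp"

armnn::INetworkPtr ParseExample()
{
    armnnCaffeParser::ICaffeParserPtr parser = armnnCaffeParser::ICaffeParser::Create();

    // Shapes must be supplied for inputs whose shape is not given in the prototxt.
    std::map<std::string, armnn::TensorShape> inputShapes{
        { "data", armnn::TensorShape({ 1, 3, 224, 224 }) } };
    std::vector<std::string> requestedOutputs{ "prob" };

    armnn::INetworkPtr network =
        parser->CreateNetworkFromTextFile("model.prototxt", inputShapes, requestedOutputs);

    // Binding info (layer id + tensor info) is available once parsing succeeds.
    armnnCaffeParser::BindingPointInfo inputBinding  = parser->GetNetworkInputBindingInfo("data");
    armnnCaffeParser::BindingPointInfo outputBinding = parser->GetNetworkOutputBindingInfo("prob");
    (void)inputBinding; (void)outputBinding;
    return network;
}
```

The string and binary variants (CreateNetworkFromString, CreateNetworkFromBinaryFile) follow the same pattern, differing only in how the NetParameter is obtained.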

Member Typedef Documentation

◆ OperationParsingFunction

using OperationParsingFunction = void(CaffeParserBase::*)(const caffe::LayerParameter& layerParam)
protected

Definition at line 115 of file CaffeParser.hpp.
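
OperationParsingFunction is a pointer-to-member-function type, which is what lets ms_CaffeLayerNameToParsingFunctions dispatch on the Caffe layer type string. A self-contained sketch of the same pattern (the Parser and LayerParam types here are illustrative stand-ins, not ArmNN classes):

```cpp
#include <cassert>
#include <map>
#include <string>

// Stand-in for caffe::LayerParameter.
struct LayerParam
{
    std::string type;
    std::string name;
};

class Parser
{
public:
    // Same shape as OperationParsingFunction:
    // a pointer to a member function taking the layer parameter.
    using ParsingFunction = void (Parser::*)(const LayerParam&);

    void Parse(const LayerParam& layer)
    {
        auto it = ms_Dispatch.find(layer.type);
        if (it == ms_Dispatch.end())
        {
            m_Last = "unsupported:" + layer.type;
            return;
        }
        (this->*(it->second))(layer);  // invoke through the member pointer
    }

    std::string m_Last;  // records what was last parsed, for illustration

private:
    void ParseConv(const LayerParam& layer)    { m_Last = "conv:" + layer.name; }
    void ParseSoftmax(const LayerParam& layer) { m_Last = "softmax:" + layer.name; }

    static const std::map<std::string, ParsingFunction> ms_Dispatch;
};

// Mirrors ms_CaffeLayerNameToParsingFunctions: layer type string -> parsing member.
const std::map<std::string, Parser::ParsingFunction> Parser::ms_Dispatch = {
    { "Convolution", &Parser::ParseConv },
    { "Softmax",     &Parser::ParseSoftmax },
};
```
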

Constructor & Destructor Documentation

◆ CaffeParserBase()

Definition at line 274 of file CaffeParser.cpp.

275  : m_Network(nullptr, nullptr)
276 {
277 
278 }

Member Function Documentation

◆ AddConvLayerWithDepthwiseConv()

void AddConvLayerWithDepthwiseConv ( const caffe::LayerParameter &  layerParam,
const armnn::Convolution2dDescriptor &  desc,
unsigned int  kernelW,
unsigned int  kernelH 
)
protected

Definition at line 611 of file CaffeParser.cpp.

References CaffeParserBase::BlobShapeToTensorInfo(), CHECK_LOCATION, IOutputSlot::Connect(), CaffeParserBase::GetArmnnOutputSlotForCaffeTop(), armnnUtils::GetTensorInfo(), Convolution2dDescriptor::m_BiasEnabled, CaffeParserBase::m_Network, Convolution2dDescriptor::m_PadBottom, Convolution2dDescriptor::m_PadLeft, DepthwiseConvolution2dDescriptor::m_PadLeft, Convolution2dDescriptor::m_PadRight, Convolution2dDescriptor::m_PadTop, Convolution2dDescriptor::m_StrideX, Convolution2dDescriptor::m_StrideY, armnn::numeric_cast(), CaffeParserBase::SetArmnnOutputSlotForCaffeTop(), and armnnCaffeParser::TensorDescToBlobShape().

Referenced by CaffeParserBase::ParseConvLayer().

615 {
616  BOOST_ASSERT(layerParam.type() == "Convolution");
617  ValidateNumInputsOutputs(layerParam, 1, 1);
618 
619  ConvolutionParameter convParam = layerParam.convolution_param();
620  BlobShape inputShape = TensorDescToBlobShape(GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).GetTensorInfo());
621 
622  DepthwiseConvolution2dDescriptor desc;
623  desc.m_PadLeft = convDesc.m_PadLeft;
624  desc.m_PadRight = convDesc.m_PadRight;
625  desc.m_PadTop = convDesc.m_PadTop;
626  desc.m_PadBottom = convDesc.m_PadBottom;
627  desc.m_StrideX = convDesc.m_StrideX;
628  desc.m_StrideY = convDesc.m_StrideY;
629  desc.m_BiasEnabled = convDesc.m_BiasEnabled;
630 
631  unsigned int numFilters = convParam.num_output();
632 
633  BlobShape outputShape;
634  outputShape.add_dim(0);
635  outputShape.set_dim(0, inputShape.dim(0));
636  outputShape.add_dim(1);
637  outputShape.set_dim(1, numFilters);
638  outputShape.add_dim(2);
639  outputShape.set_dim(
640  2, (static_cast<int>(
641  static_cast<float>(inputShape.dim(2) + 2 * desc.m_PadBottom - kernelH) /
642  static_cast<float>(desc.m_StrideY)) + 1));
643  outputShape.add_dim(3);
644  outputShape.set_dim(
645  3, (static_cast<int>(
646  static_cast<float>(inputShape.dim(3) + 2 * desc.m_PadRight - kernelW) /
647  static_cast<float>(desc.m_StrideX)) + 1));
648 
649  // Load the weight data
650  size_t allWeightsSize = boost::numeric_cast<size_t>(inputShape.dim(1) * kernelH * kernelW);
651  vector<float> weightData(allWeightsSize);
652 
653  GetDataFromBlob(layerParam, weightData, 0);
654 
655  // depth multiplier will be 1 for the depthwise convolution
656  const unsigned int weightDimSizes[4] = {
657  static_cast<unsigned int>(1), // depth multiplier
658  static_cast<unsigned int>(inputShape.dim(1)), // #channels
659  kernelH,
660  kernelW};
661 
662  armnn::IConnectableLayer* returnLayer = nullptr;
663  ConstTensor weights(TensorInfo(4, weightDimSizes, DataType::Float32), weightData.data());
664  Optional<ConstTensor> optionalBiases;
665  vector<float> biasData;
666  if (desc.m_BiasEnabled)
667  {
668  TensorInfo biasInfo;
669 
670  biasData.resize(boost::numeric_cast<size_t>(outputShape.dim(1)), 1.f);
671  GetDataFromBlob(layerParam, biasData, 1);
672 
673  const unsigned int biasDimSizes[1] = {static_cast<unsigned int>(outputShape.dim(1))};
674  biasInfo = TensorInfo(1, biasDimSizes, DataType::Float32);
675 
676  ConstTensor biases(biasInfo, biasData.data());
677  optionalBiases = Optional<ConstTensor>(biases);
678  }
679  returnLayer = m_Network->AddDepthwiseConvolution2dLayer(desc,
680  weights,
681  optionalBiases,
682  layerParam.name().c_str());
683 
684  if (!returnLayer)
685  {
686  throw ParseException(
687  boost::str(
688  boost::format(
689  "Failed to create depthwise convolution layer. "
690  "Layer=%1% #filters=%2% %3%") %
691  layerParam.name() %
692  numFilters %
693  CHECK_LOCATION().AsString()));
694  }
695  armnn::IOutputSlot& inputConnection = GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0));
696  inputConnection.Connect(returnLayer->GetInputSlot(0));
697  returnLayer->GetOutputSlot(0).SetTensorInfo(BlobShapeToTensorInfo(outputShape));
698  SetArmnnOutputSlotForCaffeTop(layerParam.top(0), returnLayer->GetOutputSlot(0));
699 }

◆ AddConvLayerWithSplits()

void AddConvLayerWithSplits ( const caffe::LayerParameter &  layerParam,
const armnn::Convolution2dDescriptor &  desc,
unsigned int  kernelW,
unsigned int  kernelH 
)
protected

ParseConv may use these helpers depending on the group parameter.

Definition at line 419 of file CaffeParser.cpp.

References CaffeParserBase::BlobShapeToTensorInfo(), CHECK_LOCATION, IOutputSlot::Connect(), CaffeParserBase::GetArmnnOutputSlotForCaffeTop(), IConnectableLayer::GetInputSlot(), IConnectableLayer::GetNumOutputSlots(), IConnectableLayer::GetOutputSlot(), armnnUtils::GetTensorInfo(), Convolution2dDescriptor::m_BiasEnabled, CaffeParserBase::m_Network, Convolution2dDescriptor::m_PadBottom, Convolution2dDescriptor::m_PadRight, Convolution2dDescriptor::m_StrideX, Convolution2dDescriptor::m_StrideY, armnn::numeric_cast(), CaffeParserBase::SetArmnnOutputSlotForCaffeTop(), IOutputSlot::SetTensorInfo(), OriginsDescriptor::SetViewOriginCoord(), ViewsDescriptor::SetViewOriginCoord(), ViewsDescriptor::SetViewSize(), and armnnCaffeParser::TensorDescToBlobShape().

Referenced by CaffeParserBase::ParseConvLayer().

423 {
424  BOOST_ASSERT(layerParam.type() == "Convolution");
425  ValidateNumInputsOutputs(layerParam, 1, 1);
426 
427  ConvolutionParameter convParam = layerParam.convolution_param();
428  BlobShape inputShape = TensorDescToBlobShape(GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).GetTensorInfo());
429  const unsigned int numGroups = convParam.has_group() ? convParam.group() : 1;
430 
431  // assume these were already verified by the caller, ParseConvLayer()
432  BOOST_ASSERT(numGroups < inputShape.dim(1));
433  BOOST_ASSERT(numGroups > 1);
434 
435  // Handle grouping
436  armnn::IOutputSlot& inputConnection = GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0));
437 
438  vector<string> convLayerNames(numGroups);
439  vector<armnn::IConnectableLayer*> convLayers(numGroups);
440  convLayerNames[0] = layerParam.name();
441 
442  // This convolution is to be applied to chunks of the input data so add a splitter layer
443 
444  // Redirect the convolution input to the splitter
445  unsigned int splitterDimSizes[4] = {static_cast<unsigned int>(inputShape.dim(0)),
446  static_cast<unsigned int>(inputShape.dim(1)),
447  static_cast<unsigned int>(inputShape.dim(2)),
448  static_cast<unsigned int>(inputShape.dim(3))};
449 
450  // Split dimension 1 of the splitter output shape and conv input shapes
451  // according to the number of groups
452 
453  splitterDimSizes[1] /= numGroups;
454  inputShape.set_dim(1, splitterDimSizes[1]);
455 
456  // This is used to describe how the input is to be split
457  ViewsDescriptor splitterDesc(numGroups);
458 
459  // Create an output node for each group, giving each a unique name
460  for (unsigned int g = 0; g < numGroups; ++g)
461  {
462  // Work out the names of the splitter layers child convolutions
463  stringstream ss;
464  ss << layerParam.name() << "_" << g;
465  convLayerNames[g] = ss.str();
466 
467  splitterDesc.SetViewOriginCoord(g, 1, splitterDimSizes[1] * g);
468 
469  // Set the size of the views.
470  for (unsigned int dimIdx=0; dimIdx < 4; dimIdx++)
471  {
472  splitterDesc.SetViewSize(g, dimIdx, splitterDimSizes[dimIdx]);
473  }
474  }
475 
476  const std::string splitterLayerName = std::string("splitter_") + layerParam.bottom(0);
477  armnn::IConnectableLayer* splitterLayer = m_Network->AddSplitterLayer(splitterDesc, splitterLayerName.c_str());
478 
479  inputConnection.Connect(splitterLayer->GetInputSlot(0));
480  for (unsigned int i = 0; i < splitterLayer->GetNumOutputSlots(); i++)
481  {
482  splitterLayer->GetOutputSlot(i).SetTensorInfo(BlobShapeToTensorInfo(inputShape));
483  }
484 
485  unsigned int numFilters = convParam.num_output();
486 
487  // Populates convolution output tensor descriptor dimensions.
488  BlobShape outputShape;
489  outputShape.add_dim(0);
490  outputShape.set_dim(0, inputShape.dim(0));
491  outputShape.add_dim(1);
492  // Ensures that dimension 1 of the convolution output is split according to the number of groups.
493  outputShape.set_dim(1, numFilters / numGroups);
494  outputShape.add_dim(2);
495  outputShape.set_dim(
496  2, (static_cast<int>(
497  static_cast<float>(inputShape.dim(2) + 2 * desc.m_PadBottom - kernelH) /
498  static_cast<float>(desc.m_StrideY)) + 1));
499  outputShape.add_dim(3);
500  outputShape.set_dim(
501  3, (static_cast<int>(
502  static_cast<float>(inputShape.dim(3) + 2 * desc.m_PadRight - kernelW) /
503  static_cast<float>(desc.m_StrideX)) + 1));
504 
505  // Load the weight data for ALL groups
506  vector<float> weightData(boost::numeric_cast<size_t>(numGroups *
507  inputShape.dim(1) * // number of input channels
508  outputShape.dim(1) * // number of output channels
509  kernelH *
510  kernelW));
511  GetDataFromBlob(layerParam, weightData, 0);
512 
513  const unsigned int weightDimSizes[4] = {
514  static_cast<unsigned int>(outputShape.dim(1)),
515  static_cast<unsigned int>(inputShape.dim(1)),
516  kernelH,
517  kernelW};
518 
519  TensorInfo biasInfo;
520  vector<float> biasData;
521 
522  if (desc.m_BiasEnabled)
523  {
524  biasData.resize(boost::numeric_cast<size_t>(numGroups * outputShape.dim(1)), 1.f);
525  GetDataFromBlob(layerParam, biasData, 1);
526 
527  const unsigned int biasDimSizes[1] = {static_cast<unsigned int>(outputShape.dim(1))};
528  biasInfo = TensorInfo(1, biasDimSizes, DataType::Float32);
529  }
530 
531  const unsigned int numWeightsPerGroup = boost::numeric_cast<unsigned int>(weightData.size()) / numGroups;
532  const unsigned int numBiasesPerGroup = boost::numeric_cast<unsigned int>(biasData.size()) / numGroups;
533 
534  for (unsigned int g = 0; g < numGroups; ++g)
535  {
536  // Sets the slot index, group 0 should be connected to the 0th output of the splitter
537  // group 1 should be connected to the 1st output of the splitter.
538 
539  // Pulls out the weights for this group from that loaded from the model file earlier.
540  ConstTensor weights(TensorInfo(4, weightDimSizes, DataType::Float32),
541  weightData.data() + numWeightsPerGroup * g);
542 
543  IConnectableLayer* convLayer = nullptr;
544  Optional<ConstTensor> optionalBiases;
545  if (desc.m_BiasEnabled)
546  {
547  // Pulls out the biases for this group from that loaded from the model file earlier.
548  ConstTensor biases(biasInfo, biasData.data() + numBiasesPerGroup * g);
549  optionalBiases = Optional<ConstTensor>(biases);
550  }
551  convLayer = m_Network->AddConvolution2dLayer(desc,
552  weights,
553  optionalBiases,
554  convLayerNames[g].c_str());
555  convLayers[g] = convLayer;
556 
557  // If we have more than one group then the input to the nth convolution is the splitter layer's nth output,
558  // otherwise it's the regular input to this layer.
559  armnn::IOutputSlot& splitterInputConnection =
560  splitterLayer ? splitterLayer->GetOutputSlot(g) : inputConnection;
561  splitterInputConnection.Connect(convLayer->GetInputSlot(0));
562  convLayer->GetOutputSlot(0).SetTensorInfo(BlobShapeToTensorInfo(outputShape));
563  }
564 
565  // If the convolution was performed in chunks, add a layer to concatenate the results
566 
567  // The merge input shape matches that of the convolution output
568  unsigned int concatDimSizes[4] = {static_cast<unsigned int>(outputShape.dim(0)),
569  static_cast<unsigned int>(outputShape.dim(1)),
570  static_cast<unsigned int>(outputShape.dim(2)),
571  static_cast<unsigned int>(outputShape.dim(3))};
572 
573  // This is used to describe how the input is to be concatenated
574  OriginsDescriptor concatDesc(numGroups);
575 
576  // Now create an input node for each group, using the name from
577  // the output of the corresponding convolution
578  for (unsigned int g = 0; g < numGroups; ++g)
579  {
580  concatDesc.SetViewOriginCoord(g, 1, concatDimSizes[1] * g);
581  }
582 
583  // Make sure the output from the concat is the correct size to hold the data for all groups
584  concatDimSizes[1] *= numGroups;
585  outputShape.set_dim(1, concatDimSizes[1]);
586 
587  // Finally add the concat layer
588  IConnectableLayer* concatLayer = m_Network->AddConcatLayer(concatDesc, layerParam.name().c_str());
589 
590  if (!concatLayer)
591  {
592  throw ParseException(
593  boost::str(
594  boost::format(
595  "Failed to create final concat layer for Split+Convolution+Concat. "
596  "Layer=%1% #groups=%2% #filters=%3% %4%") %
597  layerParam.name() %
598  numGroups %
599  numFilters %
600  CHECK_LOCATION().AsString()));
601  }
602 
603  for (unsigned int g = 0; g < numGroups; ++g)
604  {
605  convLayers[g]->GetOutputSlot(0).Connect(concatLayer->GetInputSlot(g));
606  }
607  concatLayer->GetOutputSlot(0).SetTensorInfo(armnn::TensorInfo(4, concatDimSizes, DataType::Float32));
608  SetArmnnOutputSlotForCaffeTop(layerParam.top(0), concatLayer->GetOutputSlot(0));
609 }
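
The grouped path above splits dimension 1 (channels) into numGroups equal chunks, runs one convolution per chunk, and concatenates the results along the same dimension. The per-group bookkeeping reduces to this arithmetic (a standalone sketch, names illustrative):

```cpp
#include <cassert>

struct GroupSplit
{
    unsigned int channelsPerGroup;  // splitter view size along dim 1 (line 453)
    unsigned int filtersPerGroup;   // conv output channels per group (line 493)
    unsigned int concatChannels;    // concat output size along dim 1 (line 584)
};

// Dim 1 is divided by the group count on the way in
// and multiplied back by it after the concat.
GroupSplit ComputeGroupSplit(unsigned int inputChannels,
                             unsigned int numFilters,
                             unsigned int numGroups)
{
    GroupSplit s;
    s.channelsPerGroup = inputChannels / numGroups;
    s.filtersPerGroup  = numFilters / numGroups;
    s.concatChannels   = s.filtersPerGroup * numGroups;
    return s;
}
```
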

◆ BlobShapeToTensorInfo()

TensorInfo BlobShapeToTensorInfo ( const caffe::BlobShape &  blobShape) const
protected

Converts Caffe's protobuf tensor shape format to ArmNN's.

Definition at line 314 of file CaffeParser.cpp.

Referenced by CaffeParserBase::AddConvLayerWithDepthwiseConv(), CaffeParserBase::AddConvLayerWithSplits(), CaffeParserBase::ParseConvLayer(), and CaffeParserBase::ParseInputLayer().

315 {
316  std::vector<unsigned int> shape;
317  for (int j = 0; j < blobShape.dim_size(); ++j)
318  {
319  shape.push_back(static_cast<unsigned int>(blobShape.dim(j)));
320  }
321 
322  return TensorInfo(boost::numeric_cast<unsigned int>(shape.size()), shape.data(), DataType::Float32);
323 }
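
The conversion is a straight copy of every dim into an unsigned vector, which ArmNN then wraps as a Float32 TensorInfo of that rank. The same logic without the protobuf and ArmNN types (a standalone equivalent for illustration, not the library code):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// BlobShape stores dims as 64-bit integers; TensorInfo wants unsigned int.
// Mirrors the loop at lines 316-320 above.
std::vector<unsigned int> BlobDimsToShape(const std::vector<int64_t>& dims)
{
    std::vector<unsigned int> shape;
    shape.reserve(dims.size());
    for (int64_t d : dims)
    {
        shape.push_back(static_cast<unsigned int>(d));
    }
    return shape;
}
```
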

◆ Cleanup()

void Cleanup ( )
protected

Definition at line 1860 of file CaffeParser.cpp.

References CaffeParserBase::m_ArmnnOutputSlotForCaffeTop, CaffeParserBase::m_CaffeLayersByTopName, CaffeParserBase::m_InputShapes, and CaffeParserBase::m_RequestedOutputs.

Referenced by RecordByRecordCaffeParser::CreateNetworkFromBinaryFile(), and CaffeParserBase::CreateNetworkFromNetParameter().

1860  {
1861  // cleanup, in case we reuse this parser
1862  m_InputShapes.clear();
1863  m_RequestedOutputs.clear();
1864  m_ArmnnOutputSlotForCaffeTop.clear();
1865  // NOTE: when we get the text/string format
1866  // optimised for memory then this data structure can
1867  // also move to the CaffeParser class
1868  m_CaffeLayersByTopName.clear();
1869 }

◆ CreateNetworkFromNetParameter()

INetworkPtr CreateNetworkFromNetParameter ( caffe::NetParameter &  netParam,
const std::map< std::string, armnn::TensorShape > &  inputShapes,
const std::vector< std::string > &  requestedOutputs 
)
protected

Parses a NetParameter loaded into memory from one of the other CreateNetwork*.

Definition at line 1829 of file CaffeParser.cpp.

References CaffeParserBase::Cleanup(), INetwork::Create(), CaffeParserBase::LoadNetParam(), CaffeParserBase::m_InputShapes, CaffeParserBase::m_Network, CaffeParserBase::m_NetworkInputsBindingInfo, CaffeParserBase::m_NetworkOutputsBindingInfo, and CaffeParserBase::m_RequestedOutputs.

Referenced by CaffeParser::CreateNetworkFromBinaryFile(), CaffeParserBase::CreateNetworkFromString(), and CaffeParserBase::CreateNetworkFromTextFile().

1832 {
1833  m_NetworkInputsBindingInfo.clear();
1834  m_NetworkOutputsBindingInfo.clear();
1835 
1836  m_Network = INetwork::Create();
1837 
1838  m_InputShapes = inputShapes;
1839  if (requestedOutputs.size() == 0)
1840  {
1841  throw ParseException("requestedOutputs must have at least one entry");
1842  }
1843  m_RequestedOutputs = requestedOutputs;
1844 
1845  try
1846  {
1847  LoadNetParam(netParam);
1848  }
1849  catch (const ParseException& e)
1850  {
1851  Cleanup();
1852  throw e;
1853  }
1854 
1855  Cleanup();
1856 
1857  return move(m_Network);
1858 }

◆ CreateNetworkFromString()

INetworkPtr CreateNetworkFromString ( const char *  protoText,
const std::map< std::string, armnn::TensorShape > &  inputShapes,
const std::vector< std::string > &  requestedOutputs 
)
override virtual

Creates the network directly from protobuf text in a string. Useful for debugging/testing.

Implements ICaffeParser.

Definition at line 1769 of file CaffeParser.cpp.

References CHECK_LOCATION, and CaffeParserBase::CreateNetworkFromNetParameter().

1772 {
1773  // Parses the string into a message.
1774  NetParameter netParam;
1775  bool success = google::protobuf::TextFormat::ParseFromString(protoText, &netParam);
1776 
1777  if (!success)
1778  {
1779  throw ParseException(
1780  boost::str(
1781  boost::format(
1782  "Failed to parse graph string %1%") %
1783  CHECK_LOCATION().AsString()));
1784  }
1785 
1786  return CreateNetworkFromNetParameter(netParam, inputShapes, requestedOutputs);
1787 }

◆ CreateNetworkFromTextFile()

INetworkPtr CreateNetworkFromTextFile ( const char *  graphFile,
const std::map< std::string, armnn::TensorShape > &  inputShapes,
const std::vector< std::string > &  requestedOutputs 
)
override virtual

Create the network from a protobuf text file on disk.

Implements ICaffeParser.

Definition at line 1733 of file CaffeParser.cpp.

References CHECK_LOCATION, and CaffeParserBase::CreateNetworkFromNetParameter().

1736 {
1737  FILE* fd = fopen(graphFile, "r");
1738 
1739  if (fd == nullptr)
1740  {
1741  throw FileNotFoundException(
1742  boost::str(
1743  boost::format(
1744  "Failed to open graph file: %1% %2%") %
1745  graphFile %
1746  CHECK_LOCATION().AsString()));
1747  }
1748 
1749  // Parses the file into a message.
1750  NetParameter netParam;
1751  auto input = new google::protobuf::io::FileInputStream(fileno(fd));
1752  bool success = google::protobuf::TextFormat::Parse(input, &netParam);
1753  delete input;
1754  fclose(fd);
1755 
1756  if (!success)
1757  {
1758  throw ParseException(
1759  boost::str(
1760  boost::format(
1761  "Failed to parse graph file: %1% %2%") %
1762  graphFile %
1763  CHECK_LOCATION().AsString()));
1764  }
1765 
1766  return CreateNetworkFromNetParameter(netParam, inputShapes, requestedOutputs);
1767 }

◆ GetArmnnOutputSlotForCaffeTop()

armnn::IOutputSlot & GetArmnnOutputSlotForCaffeTop ( const std::string &  caffeTopName) const
protected

Retrieves the Armnn IOutputSlot representing the given Caffe top.

Throws if it cannot be found (e.g. not parsed yet).

Definition at line 1533 of file CaffeParser.cpp.

References CHECK_LOCATION, and CaffeParserBase::m_ArmnnOutputSlotForCaffeTop.

Referenced by CaffeParserBase::AddConvLayerWithDepthwiseConv(), CaffeParserBase::AddConvLayerWithSplits(), RecordByRecordCaffeParser::CreateNetworkFromBinaryFile(), CaffeParserBase::LoadNetParam(), CaffeParserBase::ParseBatchNormLayer(), CaffeParserBase::ParseConcatLayer(), CaffeParserBase::ParseConvLayer(), CaffeParserBase::ParseDropoutLayer(), CaffeParserBase::ParseEltwiseLayer(), CaffeParserBase::ParseInnerProductLayer(), CaffeParserBase::ParseLRNLayer(), CaffeParserBase::ParsePoolingLayer(), CaffeParserBase::ParseReluLayer(), CaffeParserBase::ParseScaleLayer(), CaffeParserBase::ParseSoftmaxLayer(), and CaffeParserBase::ParseSplitLayer().

1534 {
1535  auto it = m_ArmnnOutputSlotForCaffeTop.find(caffeTopName);
1536  if (it != m_ArmnnOutputSlotForCaffeTop.end())
1537  {
1538  return *it->second;
1539  }
1540  else
1541  {
1542  throw ParseException(
1543  boost::str(
1544  boost::format(
1545  "Could not find armnn output slot for Caffe top '%1%' %2%") %
1546  caffeTopName %
1547  CHECK_LOCATION().AsString()));
1548  }
1549 }

◆ GetBindingInfo()

std::pair< armnn::LayerBindingId, armnn::TensorInfo > GetBindingInfo ( const std::string &  layerName,
const char *  bindingPointDesc,
const std::unordered_map< std::string, BindingPointInfo > &  bindingInfos 
)
static protected

Definition at line 296 of file CaffeParser.cpp.

References CHECK_LOCATION.

Referenced by CaffeParserBase::GetNetworkInputBindingInfo(), and CaffeParserBase::GetNetworkOutputBindingInfo().

299 {
300  auto it = nameToBindingInfo.find(layerName);
301  if (it == nameToBindingInfo.end())
302  {
303  throw InvalidArgumentException(
304  boost::str(
305  boost::format(
306  "Unknown binding %1% for layer '%2%'. %3%") %
307  bindingPointDesc %
308  layerName %
309  CHECK_LOCATION().AsString()));
310  }
311  return it->second;
312 }

◆ GetInputs()

vector< const LayerParameter * > GetInputs ( const caffe::LayerParameter &  layerParam)
protected

Finds the Caffe layers listed as inputs (bottoms) for a given layer.

Definition at line 339 of file CaffeParser.cpp.

References CHECK_LOCATION, and CaffeParserBase::m_CaffeLayersByTopName.

Referenced by CaffeParserBase::LoadNetParam().

340 {
341  std::vector<const caffe::LayerParameter*> ret;
342  ret.reserve(boost::numeric_cast<size_t>(layerParam.bottom_size()));
343  for (int j = 0; j < layerParam.bottom_size(); ++j)
344  {
345  std::string inputName = layerParam.bottom(j);
346  auto inputIt = m_CaffeLayersByTopName.find(inputName);
347  if (inputIt == m_CaffeLayersByTopName.end())
348  {
349  throw ParseException(
350  boost::str(
351  boost::format(
352  "Can't find Caffe layer with top called '%1%', "
353  "which is listed as an input of '%2%'. %3%") %
354  inputName %
355  layerParam.name() %
356  CHECK_LOCATION().AsString()));
357  }
358  ret.push_back(inputIt->second);
359  }
360 
361  return ret;
362 }

◆ GetNetworkInputBindingInfo()

BindingPointInfo GetNetworkInputBindingInfo ( const std::string &  name) const
overridevirtual

Retrieves binding info (layer id and tensor info) for the network input identified by the given layer name.

Implements ICaffeParser.

Definition at line 286 of file CaffeParser.cpp.

References CaffeParserBase::GetBindingInfo(), and CaffeParserBase::m_NetworkInputsBindingInfo.

287 {
288  return GetBindingInfo(name, "input", m_NetworkInputsBindingInfo);
289 }

◆ GetNetworkOutputBindingInfo()

BindingPointInfo GetNetworkOutputBindingInfo ( const std::string &  name) const
overridevirtual

Retrieves binding info (layer id and tensor info) for the network output identified by the given layer name.

Implements ICaffeParser.

Definition at line 291 of file CaffeParser.cpp.

References CaffeParserBase::GetBindingInfo(), and CaffeParserBase::m_NetworkOutputsBindingInfo.

292 {
293  return GetBindingInfo(name, "output", m_NetworkOutputsBindingInfo);
294 }

◆ LoadNetParam()

void LoadNetParam ( caffe::NetParameter &  netParameter)
protected

Performs the actual conversion from caffe::NetParameter to armnn::INetwork.

Definition at line 1632 of file CaffeParser.cpp.

References CHECK_LOCATION, IOutputSlot::Connect(), CaffeParserBase::GetArmnnOutputSlotForCaffeTop(), CaffeParserBase::GetInputs(), CaffeParserBase::m_CaffeLayersByTopName, CaffeParserBase::m_Network, CaffeParserBase::m_NetworkOutputsBindingInfo, CaffeParserBase::m_RequestedOutputs, CaffeParserBase::ms_CaffeLayerNameToParsingFunctions, armnn::numeric_cast(), CaffeParserBase::ResolveInPlaceLayers(), and CaffeParserBase::TrackOutputBinding().

Referenced by CaffeParserBase::CreateNetworkFromNetParameter().

1633 {
1634  // Caffe models sometimes have an implicit input layer.
1635  // In that case, add an explicit one.
1636  if (netParameter.input_size() > 0)
1637  {
1638  LayerParameter* newLayer = netParameter.add_layer();
1639 
1640  newLayer->set_type("Input");
1641  newLayer->set_name(netParameter.input(0));
1642  newLayer->add_top(netParameter.input(0));
1643 
1644  InputParameter* inputParam = newLayer->mutable_input_param();
1645  BlobShape* shape = inputParam->add_shape();
1646 
1647  int dim_size = netParameter.input_dim_size();
1648  for (int i = 0; i < dim_size; ++i)
1649  {
1650  shape->add_dim(netParameter.input_dim(i));
1651  }
1652  }
1653 
1654  // Replaces in-place layers with regular ones to make the rest of the parsing easier.
1655  ResolveInPlaceLayers(netParameter);
1656 
1657  // Creates a lookup of Caffe layers by name.
1658  for (int i = 0; i < netParameter.layer_size(); ++i)
1659  {
1660  const caffe::LayerParameter& layer = netParameter.layer(i);
1661  for (int i = 0; i < layer.top_size(); ++i)
1662  {
1663  m_CaffeLayersByTopName[layer.top(i)] = &layer;
1664  }
1665  }
1666 
1667  // Finds the output layers the user requested.
1668  std::vector<const caffe::LayerParameter*> targetLayers;
1669  for (const std::string& requestedOutputName : m_RequestedOutputs)
1670  {
1671  auto nodeIt = m_CaffeLayersByTopName.find(requestedOutputName);
1672  if (nodeIt == m_CaffeLayersByTopName.end())
1673  {
1674  throw ParseException(
1675  boost::str(
1676  boost::format(
1677  "Couldn't find requested output layer '%1%' in graph %2%") %
1678  requestedOutputName %
1679  CHECK_LOCATION().AsString()));
1680  }
1681  targetLayers.push_back(nodeIt->second);
1682  }
1683 
1684  // Sorts them into a linear ordering such that all inputs of a node are before the node itself.
1685  std::vector<const caffe::LayerParameter*> sortedNodes;
1686  if (!armnnUtils::GraphTopologicalSort<const caffe::LayerParameter*>(
1687  targetLayers,
1688  [this](const caffe::LayerParameter* node)
1689  {
1690  return GetInputs(*node);
1691  },
1692  sortedNodes))
1693  {
1694  throw ParseException(
1695  boost::str(
1696  boost::format(
1697  "Cycle detected in graph. #nodes: %1% %2%") %
1698  sortedNodes.size() %
1699  CHECK_LOCATION().AsString()));
1700  }
1701 
1702  // Parses each node in order, knowing that all inputs of a node will be processed before the node itself.
1703  for (const caffe::LayerParameter* current : sortedNodes)
1704  {
1705  auto it = ms_CaffeLayerNameToParsingFunctions.find(current->type());
1706  if (it == ms_CaffeLayerNameToParsingFunctions.end())
1707  {
1708  throw ParseException(
1709  boost::str(
1710  boost::format("Unsupported layer type: '%1%' for layer %2% %3%") %
1711  current->type() %
1712  current->name() %
1713  CHECK_LOCATION().AsString()));
1714  }
1715  auto func = it->second;
1716  (this->*func)(*current);
1717  }
1718 
1719  // Adds ArmNN output layers connected to each requested output.
1720  for (const std::string& requestedOutput : m_RequestedOutputs)
1721  {
1722  armnn::IOutputSlot& outputSlot = GetArmnnOutputSlotForCaffeTop(requestedOutput);
1723 
1724  const armnn::LayerBindingId outputId = armnn::numeric_cast<armnn::LayerBindingId>(
1725      m_NetworkOutputsBindingInfo.size());
1726  armnn::IConnectableLayer* const outputLayer = m_Network->AddOutputLayer(outputId, requestedOutput.c_str());
1727  outputSlot.Connect(outputLayer->GetInputSlot(0));
1728 
1729  TrackOutputBinding(outputLayer, outputId, outputLayer->GetInputSlot(0).GetConnection()->GetTensorInfo());
1730  }
1731 }
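
The sort step above can be sketched in isolation. The following is a minimal standalone depth-first topological sort with cycle detection, using hypothetical names (it is not ArmNN's armnnUtils::GraphTopologicalSort, whose signature is only assumed here): each node is emitted only after all of its inputs, and a back-edge reports failure, matching the "Cycle detected in graph" error path above.

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>
#include <vector>

// Returns false if a cycle is found; otherwise fills 'sorted' so that every
// node appears after all of its inputs.
bool TopologicalSort(const std::vector<std::string>& targets,
                     const std::function<std::vector<std::string>(const std::string&)>& getInputs,
                     std::vector<std::string>& sorted)
{
    enum class Mark { None, Temp, Done };
    std::map<std::string, Mark> marks;

    std::function<bool(const std::string&)> visit = [&](const std::string& node) -> bool
    {
        Mark& m = marks[node];
        if (m == Mark::Done) { return true; }   // Already emitted.
        if (m == Mark::Temp) { return false; }  // Back-edge: cycle detected.
        m = Mark::Temp;
        for (const std::string& input : getInputs(node))
        {
            if (!visit(input)) { return false; }
        }
        m = Mark::Done;
        sorted.push_back(node);                 // All inputs already in 'sorted'.
        return true;
    };

    for (const std::string& target : targets)
    {
        if (!visit(target)) { return false; }
    }
    return true;
}
```

Starting from the requested output layers only (as LoadNetParam does with targetLayers) also prunes any layer the outputs do not depend on.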

◆ ParseBatchNormLayer()

void ParseBatchNormLayer ( const caffe::LayerParameter &  layerParam)
protected

Definition at line 1338 of file CaffeParser.cpp.

References CHECK_LOCATION, Connect(), armnn::Float32, CaffeParserBase::GetArmnnOutputSlotForCaffeTop(), IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), TensorInfo::GetShape(), armnnUtils::GetTensorInfo(), BatchNormalizationDescriptor::m_Eps, CaffeParserBase::m_Network, CaffeParserBase::SetArmnnOutputSlotForCaffeTop(), and IOutputSlot::SetTensorInfo().

1339 {
1340  ValidateNumInputsOutputs(layerParam, 1, 1);
1341 
1342  const TensorInfo& inputInfo = GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).GetTensorInfo();
1343 
1344  string name = layerParam.name();
1345 
1346  BatchNormParameter param = layerParam.batch_norm_param();
1347  // If use_global_stats is not explicitly set in the model, assume it to be true (its default value
1348  // when the network is in the testing phase).
1349  if (param.has_use_global_stats())
1350  {
1351  if (!param.use_global_stats())
1352  {
1353  throw ParseException(
1354  boost::str(
1355  boost::format(
1356  "Error parsing Batch Norm layer '%1%': "
1357  "Parameter 'use_global_stats' is set to false, which is "
1358  "unsupported (value used for training). %2%") %
1359  name %
1360  CHECK_LOCATION().AsString()));
1361  }
1362  }
1363 
1364  BatchNormalizationDescriptor desc;
1365  desc.m_Eps = param.eps();
1366 
1367  unsigned int channels = inputInfo.GetShape()[1];
1368  unsigned int shape[] = {channels};
1369 
1370  vector<float> meanData(channels);
1371  GetDataFromBlob(layerParam, meanData, 0);
1372 
1373  vector<float> varianceData(channels);
1374  GetDataFromBlob(layerParam, varianceData, 1);
1375 
1376  // Reads moving average factor and applies scaling (if required).
1377  const BlobProto& blob = layerParam.blobs(boost::numeric_cast<int>(2));
1378  const float movingAverageFactor = blob.data(boost::numeric_cast<int>(0));
1379  if(movingAverageFactor != 0.0f)
1380  {
1381  const float scaleFactor = 1.0f / movingAverageFactor;
1382  auto scaleFunction = [scaleFactor](float f) -> float { return f * scaleFactor; };
1383 
1384  std::transform(varianceData.begin(), varianceData.end(), varianceData.begin(), scaleFunction);
1385  std::transform(meanData.begin(), meanData.end(), meanData.begin(), scaleFunction);
1386  }
1387 
1388  // Identifies scale operation.
1389  vector<float> betaData(channels, 0.0f);
1390  vector<float> gammaData(channels, 1.0f);
1391 
1392  ConstTensor mean(TensorInfo(1, shape, armnn::DataType::Float32), meanData);
1393  ConstTensor variance(TensorInfo(1, shape, armnn::DataType::Float32), varianceData);
1394  ConstTensor beta(TensorInfo(1, shape, armnn::DataType::Float32), betaData);
1395  ConstTensor gamma(TensorInfo(1, shape, armnn::DataType::Float32), gammaData);
1396 
1397  armnn::IConnectableLayer* const batchNormLayer = m_Network->AddBatchNormalizationLayer(desc,
1398  mean, variance, beta, gamma, name.c_str());
1399  GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).Connect(batchNormLayer->GetInputSlot(0));
1400  batchNormLayer->GetOutputSlot(0).SetTensorInfo(inputInfo);
1401  SetArmnnOutputSlotForCaffeTop(layerParam.top(0), batchNormLayer->GetOutputSlot(0));
1402 }
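
The arithmetic being set up above can be checked on its own. Caffe's BatchNorm blobs hold running statistics that must be scaled by 1/movingAverageFactor (blob 2), after which inference applies y = gamma * (x - mean) / sqrt(variance + eps) + beta, with gamma = 1 and beta = 0 as in the betaData/gammaData vectors above (Caffe keeps learned scale/shift in a separate Scale layer). This is a per-value sketch with hypothetical names, not ArmNN's layer implementation:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Applies inference-mode batch normalization to one channel's values, after
// scaling the stored mean/variance by 1/movingAverageFactor as the parser does.
std::vector<float> BatchNormInference(const std::vector<float>& x,
                                      float rawMean, float rawVariance,
                                      float movingAverageFactor, float eps)
{
    // A zero factor leaves the stored statistics unscaled, as in the code above.
    const float scale = (movingAverageFactor != 0.0f) ? 1.0f / movingAverageFactor : 1.0f;
    const float mean = rawMean * scale;
    const float variance = rawVariance * scale;

    std::vector<float> y(x.size());
    for (std::size_t i = 0; i < x.size(); ++i)
    {
        // gamma = 1, beta = 0: pure normalization.
        y[i] = (x[i] - mean) / std::sqrt(variance + eps);
    }
    return y;
}
```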

◆ ParseConcatLayer()

void ParseConcatLayer ( const caffe::LayerParameter &  layerParam)
protected

Definition at line 1277 of file CaffeParser.cpp.

References CHECK_LOCATION, IOutputSlot::Connect(), CaffeParserBase::GetArmnnOutputSlotForCaffeTop(), IConnectableLayer::GetInputSlot(), TensorInfo::GetNumDimensions(), IConnectableLayer::GetOutputSlot(), TensorInfo::GetShape(), armnnUtils::GetTensorInfo(), CaffeParserBase::m_Network, CaffeParserBase::SetArmnnOutputSlotForCaffeTop(), IOutputSlot::SetTensorInfo(), and OriginsDescriptor::SetViewOriginCoord().

1278 {
1279  unsigned int numInputs = static_cast<unsigned int>(layerParam.bottom_size());
1280  // We assume concat happens along the channel dimension, which is 1 in (0, 1, 2, 3).
1281  unsigned int concatDim = 1;
1282  unsigned int numOfDims = 4;
1283 
1284  // we only consider 4-D tensor here
1285  OriginsDescriptor concatDescriptor(static_cast<uint32_t>(numInputs), numOfDims);
1286  std::vector<unsigned int>mergeDimSizes(numOfDims, 0u);
1287 
1288  unsigned int mergeDim = 0;
1289  for (unsigned int viewIndex = 0; viewIndex < numInputs; ++viewIndex)
1290  {
1291  const TensorInfo& inputInfo = GetArmnnOutputSlotForCaffeTop(
1292  layerParam.bottom(boost::numeric_cast<int>(viewIndex))).GetTensorInfo();
1293  // Checks whether the dimensions of the input tensors are actually 4.
1294  if (inputInfo.GetNumDimensions()!=4)
1295  {
1296  throw ParseException(
1297  boost::str(
1298  boost::format(
1299  "The number of dimensions for input tensors of "
1300  "the concatenation op should be 4. Inputs of %1% has "
1301  "%2% dimensions. %3%") %
1302  layerParam.name() %
1303  inputInfo.GetNumDimensions() %
1304  CHECK_LOCATION().AsString()));
1305  }
1306 
1307  mergeDimSizes[0] = inputInfo.GetShape()[0];
1308  mergeDimSizes[1] = inputInfo.GetShape()[1];
1309  mergeDimSizes[2] = inputInfo.GetShape()[2];
1310  mergeDimSizes[3] = inputInfo.GetShape()[3];
1311 
1312  for (unsigned int j = 0; j < concatDim; ++j)
1313  {
1314  concatDescriptor.SetViewOriginCoord(viewIndex, j, 0);
1315  }
1316 
1317  concatDescriptor.SetViewOriginCoord(viewIndex, concatDim, mergeDim);
1318  mergeDim += mergeDimSizes[concatDim];
1319 
1320  for (unsigned int j = concatDim+1; j < numOfDims; ++j)
1321  {
1322  concatDescriptor.SetViewOriginCoord(viewIndex, j, 0);
1323  }
1324  }
1325  mergeDimSizes[concatDim] = mergeDim;
1326 
1327  armnn::IConnectableLayer* concatlayer = m_Network->AddConcatLayer(concatDescriptor, layerParam.name().c_str());
1328  for (unsigned int i = 0; i < numInputs; ++i)
1329  {
1330  armnn::IOutputSlot& outputSlot = GetArmnnOutputSlotForCaffeTop(layerParam.bottom(boost::numeric_cast<int>(i)));
1331  outputSlot.Connect(concatlayer->GetInputSlot(i));
1332  }
1333 
1334  concatlayer->GetOutputSlot(0).SetTensorInfo(armnn::TensorInfo(numOfDims, mergeDimSizes.data(), DataType::Float32));
1335  SetArmnnOutputSlotForCaffeTop(layerParam.top(0), concatlayer->GetOutputSlot(0));
1336 }
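
The origin bookkeeping in the loop above reduces to a running offset along the concatenation axis. A small sketch under the same assumptions (4-D NCHW views, concatenation along dim 1; the function name is illustrative, not part of ArmNN):

```cpp
#include <cassert>
#include <vector>

// For each input view, every origin coordinate is zero except the one on the
// concatenation axis, which is the running sum of preceding channel counts.
std::vector<std::vector<unsigned int>> ConcatViewOrigins(
    const std::vector<unsigned int>& channelsPerInput,
    unsigned int numDims = 4,
    unsigned int concatDim = 1)
{
    std::vector<std::vector<unsigned int>> origins;
    unsigned int runningOffset = 0;
    for (unsigned int channels : channelsPerInput)
    {
        std::vector<unsigned int> origin(numDims, 0u);
        origin[concatDim] = runningOffset;
        origins.push_back(origin);
        runningOffset += channels;  // Matches 'mergeDim += mergeDimSizes[concatDim]'.
    }
    return origins;
}
```

For three inputs with 3, 5, and 2 channels, the views start at channel offsets 0, 3, and 8, and the merged output has 10 channels.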

◆ ParseConvLayer()

void ParseConvLayer ( const caffe::LayerParameter &  layerParam)
protected

Definition at line 701 of file CaffeParser.cpp.

References CaffeParserBase::AddConvLayerWithDepthwiseConv(), CaffeParserBase::AddConvLayerWithSplits(), CaffeParserBase::BlobShapeToTensorInfo(), CHECK_LOCATION, IOutputSlot::Connect(), GET_OPTIONAL_WITH_VECTOR_FALLBACK, CaffeParserBase::GetArmnnOutputSlotForCaffeTop(), armnnUtils::GetTensorInfo(), Convolution2dDescriptor::m_BiasEnabled, CaffeParserBase::m_Network, Convolution2dDescriptor::m_PadBottom, Convolution2dDescriptor::m_PadLeft, Convolution2dDescriptor::m_PadRight, Convolution2dDescriptor::m_PadTop, Convolution2dDescriptor::m_StrideX, Convolution2dDescriptor::m_StrideY, CaffeParserBase::SetArmnnOutputSlotForCaffeTop(), and armnnCaffeParser::TensorDescToBlobShape().

702 {
703  // Ignored Caffe Parameters
704  // * Dilation Size
705  // * Weight Filler
706  // * Bias Filler
707  // * Engine
708  // * Force nd_im2col
709  // * Axis
710 
711  // Not Available ArmNN Interface Parameters
712  // * Rounding policy;
713 
714  BOOST_ASSERT(layerParam.type() == "Convolution");
715  ValidateNumInputsOutputs(layerParam, 1, 1);
716 
717  ConvolutionParameter convParam = layerParam.convolution_param();
718  BlobShape inputShape = TensorDescToBlobShape(GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).GetTensorInfo());
719  const unsigned int numGroups = convParam.has_group() ? convParam.group() : 1;
720  unsigned int numFilters = convParam.num_output();
721 
722  const auto notFound = std::numeric_limits<unsigned int>::max();
723 
724  unsigned int kernelH = GET_OPTIONAL_WITH_VECTOR_FALLBACK(convParam, ConvolutionParameter,
725  kernel_h, kernel_size, unsigned int, notFound);
726  unsigned int kernelW = GET_OPTIONAL_WITH_VECTOR_FALLBACK(convParam, ConvolutionParameter,
727  kernel_w, kernel_size, unsigned int, notFound);
728 
729  unsigned int strideH = GET_OPTIONAL_WITH_VECTOR_FALLBACK(convParam, ConvolutionParameter,
730  stride_h, stride, unsigned int, 1u);
731  unsigned int strideW = GET_OPTIONAL_WITH_VECTOR_FALLBACK(convParam, ConvolutionParameter,
732  stride_w, stride, unsigned int, 1u);
733 
734  unsigned int padH = GET_OPTIONAL_WITH_VECTOR_FALLBACK(convParam, ConvolutionParameter,
735  pad_h, pad, unsigned int, 0u);
736  unsigned int padW = GET_OPTIONAL_WITH_VECTOR_FALLBACK(convParam, ConvolutionParameter,
737  pad_w, pad, unsigned int, 0u);
738 
739  Convolution2dDescriptor convolution2dDescriptor;
740  convolution2dDescriptor.m_PadLeft = padW;
741  convolution2dDescriptor.m_PadRight = padW;
742  convolution2dDescriptor.m_PadTop = padH;
743  convolution2dDescriptor.m_PadBottom = padH;
744  convolution2dDescriptor.m_StrideX = strideW;
745  convolution2dDescriptor.m_StrideY = strideH;
746  convolution2dDescriptor.m_BiasEnabled = convParam.has_bias_term() ? convParam.bias_term() : true;
747 
748  if (numGroups > numFilters)
749  {
750  throw ParseException(
751  boost::str(
752  boost::format(
753  "Error parsing Convolution: %1%. "
754  "The 'group'=%2% parameter cannot be larger than the "
755  "number of filters supplied ='%3%'. %4%") %
756  layerParam.name() %
757  numGroups %
758  numFilters %
759  CHECK_LOCATION().AsString()));
760  }
761 
762  if (inputShape.dim_size() != 4)
763  {
764  throw ParseException(
765  boost::str(
766  boost::format(
767  "Convolution input shape is expected to have 4 dimensions. "
768  "%1%'s input has only %2%. %3%") %
769  layerParam.name() %
770  inputShape.dim_size() %
771  CHECK_LOCATION().AsString()));
772  }
773 
774  if (numGroups > 1)
775  {
776  if (numGroups > inputShape.dim(1))
777  {
778  throw ParseException(
779  boost::str(
780  boost::format(
781  "Error parsing Convolution: %1%. "
782  "The 'group'=%2% parameter cannot be larger than the "
783  "channel of the input shape=%3% (in NCHW format). %4%") %
784  layerParam.name() %
785  numGroups %
786  inputShape.dim(1) %
787  CHECK_LOCATION().AsString()));
788  }
789  else if (numGroups == inputShape.dim(1))
790  {
791  // we use a depthwise convolution here, because the number of groups equals to the
792  // input channels
793  AddConvLayerWithDepthwiseConv(layerParam, convolution2dDescriptor, kernelW, kernelH);
794  return;
795  }
796  else
797  {
798  // we split the input by channels into channels/groups separate convolutions
799  // and concatenate the results afterwards
800  AddConvLayerWithSplits(layerParam, convolution2dDescriptor, kernelW, kernelH);
801  return;
802  }
803  }
804 
805  // NOTE: at this point we only need to handle #group=1 case, all other cases should be
806  // handled by the AddConvLayer* helpers
807 
808  // Populate convolution output tensor descriptor dimensions
809  BlobShape outputShape;
810  outputShape.add_dim(0);
811  outputShape.set_dim(0, inputShape.dim(0));
812  outputShape.add_dim(1);
813  outputShape.set_dim(1, numFilters);
814  outputShape.add_dim(2);
815  outputShape.set_dim(
816  2, (static_cast<int>(
817  static_cast<float>(inputShape.dim(2) + 2 * padH - kernelH) /
818  static_cast<float>(strideH)) + 1));
819  outputShape.add_dim(3);
820  outputShape.set_dim(
821  3, (static_cast<int>(
822  static_cast<float>(inputShape.dim(3) + 2 * padW - kernelW) /
823  static_cast<float>(strideW)) + 1));
824 
825  // Load the weight data for ALL groups
826  vector<float> weightData(boost::numeric_cast<size_t>(inputShape.dim(1) *
827  outputShape.dim(1) *
828  kernelH *
829  kernelW));
830  GetDataFromBlob(layerParam, weightData, 0);
831 
832  const unsigned int weightDimSizes[4] = {
833  static_cast<unsigned int>(outputShape.dim(1)), // output channels
834  static_cast<unsigned int>(inputShape.dim(1)), // input channels
835  kernelH,
836  kernelW};
837 
838  armnn::IConnectableLayer* returnLayer = nullptr;
839 
840  // Pull out the weights for this group from that loaded from the model file earlier
841  ConstTensor weights(TensorInfo(4, weightDimSizes, DataType::Float32), weightData.data());
842  Optional<ConstTensor> optionalBiases;
843  vector<float> biasData;
844  if (convolution2dDescriptor.m_BiasEnabled)
845  {
846  TensorInfo biasInfo;
847 
848  biasData.resize(boost::numeric_cast<size_t>(outputShape.dim(1)), 1.f);
849  GetDataFromBlob(layerParam, biasData, 1);
850 
851  const unsigned int biasDimSizes[1] = {static_cast<unsigned int>(outputShape.dim(1))};
852  biasInfo = TensorInfo(1, biasDimSizes, DataType::Float32);
853 
854  // Pull out the biases for this group from that loaded from the model file earlier
855  ConstTensor biases(biasInfo, biasData.data());
856  optionalBiases = Optional<ConstTensor>(biases);
857  }
858  returnLayer = m_Network->AddConvolution2dLayer(convolution2dDescriptor,
859  weights,
860  optionalBiases,
861  layerParam.name().c_str());
862 
863  armnn::IOutputSlot& inputConnection = GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0));
864  inputConnection.Connect(returnLayer->GetInputSlot(0));
865  returnLayer->GetOutputSlot(0).SetTensorInfo(BlobShapeToTensorInfo(outputShape));
866 
867  if (!returnLayer)
868  {
869  throw ParseException(
870  boost::str(
871  boost::format(
872  "Failed to create Convolution layer. "
873  "Layer=%1% #groups=%2% #filters=%3% %4%") %
874  layerParam.name() %
875  numGroups %
876  numFilters %
877  CHECK_LOCATION().AsString()));
878  }
879 
880  SetArmnnOutputSlotForCaffeTop(layerParam.top(0), returnLayer->GetOutputSlot(0));
881 }
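
The output-shape computation above implements the usual convolution formula, outDim = floor((inDim + 2*pad - kernel) / stride) + 1. The source casts through float before truncating; for the non-negative integer operands involved, plain integer division gives the same result, as in this standalone sketch (the function name is illustrative):

```cpp
#include <cassert>

// Output extent of a convolution along one spatial dimension:
// floor((inDim + 2*pad - kernel) / stride) + 1.
unsigned int ConvOutputDim(unsigned int inDim, unsigned int pad,
                           unsigned int kernel, unsigned int stride)
{
    return (inDim + 2 * pad - kernel) / stride + 1;
}
```

For example, a 224-wide input with a 7-wide kernel, padding 3, and stride 2 produces a 112-wide output.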

◆ ParseDropoutLayer()

void ParseDropoutLayer ( const caffe::LayerParameter &  layerParam)
protected

Definition at line 1477 of file CaffeParser.cpp.

References CHECK_LOCATION, CaffeParserBase::GetArmnnOutputSlotForCaffeTop(), and CaffeParserBase::SetArmnnOutputSlotForCaffeTop().

1478 {
1479  // Ignored for inference, so patch the single input to its single output.
1480  if (layerParam.bottom_size() != 1 || layerParam.top_size() != 1)
1481  {
1482  throw ParseException(
1483  boost::str(
1484  boost::format(
1485  "Dropout layer '%1%' should have exactly 1 bottom and 1 top. "
1486  "#bottoms=%2% #tops=%3% %4%") %
1487  layerParam.name() %
1488  layerParam.bottom_size() %
1489  layerParam.top_size() %
1490  CHECK_LOCATION().AsString()));
1491  }
1492  SetArmnnOutputSlotForCaffeTop(layerParam.top(0), GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)));
1493 }

◆ ParseEltwiseLayer()

void ParseEltwiseLayer ( const caffe::LayerParameter &  layerParam)
protected

Definition at line 1230 of file CaffeParser.cpp.

References CHECK_LOCATION, Connect(), CaffeParserBase::GetArmnnOutputSlotForCaffeTop(), IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), armnnUtils::GetTensorInfo(), CaffeParserBase::m_Network, CaffeParserBase::SetArmnnOutputSlotForCaffeTop(), and IOutputSlot::SetTensorInfo().

1231 {
1232  ValidateNumInputsOutputs(layerParam, 2, 1);
1233 
1234  const TensorInfo& inputInfo = GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).GetTensorInfo();
1235 
1236  // Ignored Caffe Parameters:
1237  // coeff
1238 
1239  EltwiseParameter_EltwiseOp operation = EltwiseParameter_EltwiseOp_SUM; // Defaults to sum as per caffe.
1240 
1241  if (layerParam.has_eltwise_param() && layerParam.eltwise_param().has_operation())
1242  {
1243  operation = layerParam.eltwise_param().operation();
1244  }
1245 
1246  armnn::IConnectableLayer* newLayer = nullptr;
1247  switch (operation)
1248  {
1249  case EltwiseParameter_EltwiseOp_SUM:
1250  {
1251  newLayer = m_Network->AddAdditionLayer(layerParam.name().c_str());
1252  break;
1253  }
1254  case EltwiseParameter_EltwiseOp_PROD:
1255  {
1256  newLayer = m_Network->AddMultiplicationLayer(layerParam.name().c_str());
1257  break;
1258  }
1259  default:
1260  {
1261  throw ParseException(
1262  boost::str(
1263  boost::format(
1264  "Unsupported operation %1% in Eltwise layer %2% %3%") %
1265  operation %
1266  layerParam.name() %
1267  CHECK_LOCATION().AsString()));
1268  }
1269  }
1270 
1271  GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).Connect(newLayer->GetInputSlot(0));
1272  GetArmnnOutputSlotForCaffeTop(layerParam.bottom(1)).Connect(newLayer->GetInputSlot(1));
1273  newLayer->GetOutputSlot(0).SetTensorInfo(inputInfo);
1274  SetArmnnOutputSlotForCaffeTop(layerParam.top(0), newLayer->GetOutputSlot(0));
1275 }
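
Only Caffe's SUM and PROD eltwise operations are mapped (to ArmNN addition and multiplication layers), and the coeff parameter is ignored. This sketch mirrors that mapping on raw vectors, with other operations rejected the way the parser throws ParseException; the enum and function names are illustrative:

```cpp
#include <cassert>
#include <stdexcept>
#include <vector>

enum class EltwiseOp { Sum, Prod };

// Elementwise combination of two equal-length vectors, supporting only the
// two operations the parser handles.
std::vector<float> Eltwise(EltwiseOp op,
                           const std::vector<float>& a,
                           const std::vector<float>& b)
{
    std::vector<float> out(a.size());
    for (std::size_t i = 0; i < a.size(); ++i)
    {
        switch (op)
        {
            case EltwiseOp::Sum:  out[i] = a[i] + b[i]; break;
            case EltwiseOp::Prod: out[i] = a[i] * b[i]; break;
            default: throw std::invalid_argument("Unsupported eltwise operation");
        }
    }
    return out;
}
```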

◆ ParseInnerProductLayer()

void ParseInnerProductLayer ( const caffe::LayerParameter &  layerParam)
protected

Definition at line 1134 of file CaffeParser.cpp.

References Connect(), CaffeParserBase::GetArmnnOutputSlotForCaffeTop(), IConnectableLayer::GetInputSlot(), TensorInfo::GetNumDimensions(), IConnectableLayer::GetOutputSlot(), TensorInfo::GetShape(), armnnUtils::GetTensorInfo(), FullyConnectedDescriptor::m_BiasEnabled, CaffeParserBase::m_Network, FullyConnectedDescriptor::m_TransposeWeightMatrix, CaffeParserBase::SetArmnnOutputSlotForCaffeTop(), and IOutputSlot::SetTensorInfo().

1135 {
1136  InnerProductParameter param = layerParam.inner_product_param();
1137 
1138  ValidateNumInputsOutputs(layerParam, 1, 1);
1139 
1140  unsigned int outputSize = param.num_output();
1141 
1142  // Ignored Caffe Parameters:
1143  // Weight Filler
1144  // Bias Filler
1145  // Engine
1146  // Axis
1147 
1148  FullyConnectedDescriptor tensorFullyConnectedDescriptor;
1149 
1150  if (param.has_transpose())
1151  {
1152  // If true, assumes transposed weights.
1153  tensorFullyConnectedDescriptor.m_TransposeWeightMatrix = param.transpose();
1154  }
1155  else
1156  {
1157  // Caffe defaults to transposed.
1158  tensorFullyConnectedDescriptor.m_TransposeWeightMatrix = true;
1159  }
1160 
1161  const TensorInfo& inputInfo = GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).GetTensorInfo();
1162 
1163  TensorInfo weightInfo;
1164  TensorInfo biasInfo;
1165 
1166  // Allows implicit flattening of extra dimensions.
1167  unsigned int inputSize = inputInfo.GetShape()[1];
1168  for (unsigned int i = 2; i < inputInfo.GetNumDimensions(); ++i)
1169  {
1170  inputSize *= inputInfo.GetShape()[i];
1171  }
1172 
1173  const float* weightDataPtr = GetArrayPtrFromBlob(layerParam, 0);
1174  const unsigned int swTD[2] = { outputSize, inputSize };
1175  ConstTensor weights(TensorInfo(2, swTD, DataType::Float32), weightDataPtr);
1176 
1177  tensorFullyConnectedDescriptor.m_BiasEnabled = true;
1178  // Todo: check whether bias enabled.
1179  armnn::IConnectableLayer* fullyConnectedLayer = nullptr;
1180  if (tensorFullyConnectedDescriptor.m_BiasEnabled)
1181  {
1182  // BIAS VALUE
1183  const float* biasDataPtr = GetArrayPtrFromBlob(layerParam, 1);
1184 
1185  const unsigned int sbTD[1] = { outputSize };
1186 
1187  ConstTensor biases(TensorInfo(1, sbTD, DataType::Float32), biasDataPtr);
1188 
1189  fullyConnectedLayer = m_Network->AddFullyConnectedLayer(tensorFullyConnectedDescriptor,
1190  weights,
1191  Optional<ConstTensor>(biases),
1192  layerParam.name().c_str());
1193  }
1194  else
1195  {
1196  fullyConnectedLayer = m_Network->AddFullyConnectedLayer(tensorFullyConnectedDescriptor,
1197  weights,
1198  EmptyOptional(),
1199  layerParam.name().c_str());
1200  }
1201 
1202  TensorInfo outputInfo({ inputInfo.GetShape()[0], outputSize }, DataType::Float32);
1203  GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).Connect(fullyConnectedLayer->GetInputSlot(0));
1204  fullyConnectedLayer->GetOutputSlot(0).SetTensorInfo(outputInfo);
1205  SetArmnnOutputSlotForCaffeTop(layerParam.top(0), fullyConnectedLayer->GetOutputSlot(0));
1206 }
Interface for a layer that is connectable to other layers via InputSlots and OutputSlots.
Definition: INetwork.hpp:61
const TensorShape & GetShape() const
Definition: Tensor.hpp:88
armnn::IOutputSlot & GetArmnnOutputSlotForCaffeTop(const std::string &caffeTopName) const
Retrieves the Armnn IOutputSlot representing the given Caffe top.
bool m_TransposeWeightMatrix
Enable/disable transpose weight matrix.
virtual void SetTensorInfo(const TensorInfo &tensorInfo)=0
A FullyConnectedDescriptor for the FullyConnectedLayer.
bool m_BiasEnabled
Enable/disable bias.
A tensor defined by a TensorInfo (shape and data type) and an immutable backing store.
Definition: Tensor.hpp:199
EmptyOptional is used to initialize the Optional class in case we want to have default value for an O...
Definition: Optional.hpp:32
virtual const IInputSlot & GetInputSlot(unsigned int index) const =0
Get a const input slot handle by slot index.
armnn::TensorInfo GetTensorInfo(unsigned int numberOfBatches, unsigned int numberOfChannels, unsigned int height, unsigned int width, const armnn::DataLayout dataLayout, const armnn::DataType dataType)
Definition: TensorUtils.cpp:38
virtual const IOutputSlot & GetOutputSlot(unsigned int index) const =0
Get the const output slot handle by slot index.
void Connect(armnn::IConnectableLayer *from, armnn::IConnectableLayer *to, const armnn::TensorInfo &tensorInfo, unsigned int fromIndex, unsigned int toIndex)
Definition: TestUtils.cpp:12
void SetArmnnOutputSlotForCaffeTop(const std::string &caffeTopName, armnn::IOutputSlot &armnnOutputSlot)
unsigned int GetNumDimensions() const
Definition: Tensor.hpp:92

◆ ParseInputLayer()

void ParseInputLayer ( const caffe::LayerParameter &  layerParam)
protected

Adds an ArmNN layer to m_Network given a Caffe LayerParameter of the correct type, and records any newly created IOutputSlots using SetArmnnOutputSlotForCaffeTop().

Definition at line 364 of file CaffeParser.cpp.

References CaffeParserBase::BlobShapeToTensorInfo(), CHECK_LOCATION, CaffeParserBase::m_InputShapes, CaffeParserBase::m_Network, CaffeParserBase::m_NetworkInputsBindingInfo, armnn::numeric_cast(), CaffeParserBase::SetArmnnOutputSlotForCaffeTop(), TensorInfo::SetShape(), and CaffeParserBase::TrackInputBinding().

365 {
366  BOOST_ASSERT(layerParam.type() == "Input");
367  ValidateNumInputsOutputs(layerParam, 0, 1);
368 
369  const InputParameter& param = layerParam.input_param();
370 
371  const armnn::LayerBindingId inputId = boost::numeric_cast<armnn::LayerBindingId>(
372  m_NetworkInputsBindingInfo.size());
373  armnn::IConnectableLayer* const inputLayer = m_Network->AddInputLayer(inputId, layerParam.name().c_str());
374 
375  // Decides the tensor info for this input. This can be specified in the Caffe network but can also
376  // be overriden by user input (m_inputShapes).
377  armnn::TensorInfo inputTensorInfo;
378 
379  const BlobShape* originalShape = param.shape_size() > 0 && param.shape(0).dim_size() > 0 ?
380  &param.shape(0) : nullptr;
381  if (originalShape)
382  {
383  inputTensorInfo = BlobShapeToTensorInfo(*originalShape);
384  }
385 
386  auto overrideIt = m_InputShapes.find(layerParam.name());
387  if (overrideIt != m_InputShapes.end())
388  {
389  const TensorShape& overrideShape = overrideIt->second;
390  if (originalShape &&
391  ( originalShape->dim(1) != overrideShape[1]
392  || originalShape->dim(2) != overrideShape[2]
393  || originalShape->dim(3) != overrideShape[3]))
394  {
395  throw ParseException(
396  boost::str(
397  boost::format(
398  "Parsed input shape for '%1%' is incompatible with the override provided. %2%") %
399  layerParam.name() %
400  CHECK_LOCATION().AsString()));
401  }
402  inputTensorInfo.SetShape(overrideShape);
403  }
404  else if (!originalShape)
405  {
406  throw ParseException(
407  boost::str(
408  boost::format(
409  "No input descriptor given for '%1%' and no input shape found in caffe model. %2%") %
410  layerParam.name() %
411  CHECK_LOCATION().AsString()));
412  }
413 
414  TrackInputBinding(inputLayer, inputId, inputTensorInfo);
415  inputLayer->GetOutputSlot(0).SetTensorInfo(inputTensorInfo);
416  SetArmnnOutputSlotForCaffeTop(layerParam.top(0), inputLayer->GetOutputSlot(0));
417 }
Interface for a layer that is connectable to other layers via InputSlots and OutputSlots.
Definition: INetwork.hpp:61
int LayerBindingId
Type of identifiers for bindable layers (inputs, outputs).
Definition: Types.hpp:171
void SetShape(const TensorShape &newShape)
Definition: Tensor.hpp:90
std::unordered_map< std::string, BindingPointInfo > m_NetworkInputsBindingInfo
maps input layer names to their corresponding ids and tensor infos
std::enable_if_t< std::is_unsigned< Source >::value && std::is_unsigned< Dest >::value, Dest > numeric_cast(Source source)
Definition: NumericCast.hpp:33
armnn::TensorInfo BlobShapeToTensorInfo(const caffe::BlobShape &blobShape) const
Converts Caffe's protobuf tensor shape format to ArmNN's.
std::map< std::string, armnn::TensorShape > m_InputShapes
#define CHECK_LOCATION()
Definition: Exceptions.hpp:192
void TrackInputBinding(armnn::IConnectableLayer *layer, armnn::LayerBindingId id, const armnn::TensorInfo &tensorInfo)
void SetArmnnOutputSlotForCaffeTop(const std::string &caffeTopName, armnn::IOutputSlot &armnnOutputSlot)

◆ ParseLRNLayer()

void ParseLRNLayer ( const caffe::LayerParameter &  layerParam)
protected

Definition at line 1027 of file CaffeParser.cpp.

References CHECK_LOCATION, Connect(), CaffeParserBase::GetArmnnOutputSlotForCaffeTop(), IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), armnnUtils::GetTensorInfo(), NormalizationDescriptor::m_Alpha, NormalizationDescriptor::m_Beta, NormalizationDescriptor::m_K, CaffeParserBase::m_Network, NormalizationDescriptor::m_NormChannelType, NormalizationDescriptor::m_NormMethodType, NormalizationDescriptor::m_NormSize, armnn::numeric_cast(), CaffeParserBase::SetArmnnOutputSlotForCaffeTop(), and IOutputSlot::SetTensorInfo().

1028 {
1029  ValidateNumInputsOutputs(layerParam, 1, 1);
1030 
1031  LRNParameter param = layerParam.lrn_param();
1032 
1033  const TensorInfo& inputInfo = GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).GetTensorInfo();
1034 
1035  // Ignored BATCH NORMALIZATION Caffe Parameters.
1036  // Ignored MVN Caffe Parameters.
1037  // Ignored LRN Caffe Parameters.
1038  // Engine
1039 
1040  NormalizationDescriptor normalizationDescriptor;
1041  if (param.has_norm_region())
1042  {
1043  LRNParameter_NormRegion n = param.norm_region();
1044  switch (n)
1045  {
1046  case LRNParameter_NormRegion_ACROSS_CHANNELS:
1047  {
1048  normalizationDescriptor.m_NormChannelType = NormalizationAlgorithmChannel::Across;
1049  break;
1050  }
1051  case LRNParameter_NormRegion_WITHIN_CHANNEL:
1052  {
1053  normalizationDescriptor.m_NormChannelType = NormalizationAlgorithmChannel::Within;
1054  break;
1055  }
1056  default:
1057  {
1058  throw ParseException(
1059  boost::str(
1060  boost::format(
1061  "Unknown region %1% for LRN layer %2% %3%") %
1062  n %
1063  layerParam.name() %
1064  CHECK_LOCATION().AsString()));
1065  }
1066  }
1067  }
1068  else
1069  {
1070  // Caffe defaults to normalization across channels.
1071  normalizationDescriptor.m_NormChannelType = NormalizationAlgorithmChannel::Across;
1072  }
1073 
1074  normalizationDescriptor.m_NormMethodType = NormalizationAlgorithmMethod::LocalBrightness;
1075  if (param.has_local_size())
1076  {
1077  normalizationDescriptor.m_NormSize = param.local_size();
1078  }
1079  else
1080  {
1081  throw ParseException(
1082  boost::str(
1083  boost::format(
1084  "local_size not defined for LRN layer %1% %2%") %
1085  layerParam.name() %
1086  CHECK_LOCATION().AsString()));
1087  }
1088 
1089  if (param.has_alpha())
1090  {
1091  normalizationDescriptor.m_Alpha = param.alpha();
1092  normalizationDescriptor.m_Alpha /= boost::numeric_cast<float>(param.local_size());
1093  }
1094  else
1095  {
1096  throw ParseException(
1097  boost::str(
1098  boost::format(
1099  "Alpha not defined for LRN layer %1% %2%") %
1100  layerParam.name() %
1101  CHECK_LOCATION().AsString()));
1102  }
1103  if (param.has_beta())
1104  {
1105  normalizationDescriptor.m_Beta = param.beta();
1106  }
1107  else
1108  {
1109  throw ParseException(
1110  boost::str(
1111  boost::format(
1112  "Beta not defined for LRN layer %1% %2%") %
1113  layerParam.name() %
1114  CHECK_LOCATION().AsString()));
1115  }
1116 
1117  if (param.has_k())
1118  {
1119  normalizationDescriptor.m_K = param.k();
1120  }
1121  else
1122  {
1123  normalizationDescriptor.m_K = 1;
1124  }
1125 
1126  IConnectableLayer* const normLayer = m_Network->AddNormalizationLayer(normalizationDescriptor,
1127  layerParam.name().c_str());
1128  GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).Connect(normLayer->GetInputSlot(0));
1129  normLayer->GetOutputSlot(0).SetTensorInfo(inputInfo);
1130 
1131  SetArmnnOutputSlotForCaffeTop(layerParam.top(0), normLayer->GetOutputSlot(0));
1132 }
Interface for a layer that is connectable to other layers via InputSlots and OutputSlots.
Definition: INetwork.hpp:61
float m_K
Kappa value used for the across channel normalization equation.
armnn::IOutputSlot & GetArmnnOutputSlotForCaffeTop(const std::string &caffeTopName) const
Retrieves the Armnn IOutputSlot representing the given Caffe top.
float m_Alpha
Alpha value for the normalization equation.
virtual void SetTensorInfo(const TensorInfo &tensorInfo)=0
NormalizationAlgorithmMethod m_NormMethodType
Normalization method algorithm to use (LocalBrightness, LocalContrast).
std::enable_if_t< std::is_unsigned< Source >::value && std::is_unsigned< Dest >::value, Dest > numeric_cast(Source source)
Definition: NumericCast.hpp:33
#define CHECK_LOCATION()
Definition: Exceptions.hpp:192
NormalizationAlgorithmChannel m_NormChannelType
Normalization channel algorithm to use (Across, Within).
virtual const IInputSlot & GetInputSlot(unsigned int index) const =0
Get a const input slot handle by slot index.
armnn::TensorInfo GetTensorInfo(unsigned int numberOfBatches, unsigned int numberOfChannels, unsigned int height, unsigned int width, const armnn::DataLayout dataLayout, const armnn::DataType dataType)
Definition: TensorUtils.cpp:38
virtual const IOutputSlot & GetOutputSlot(unsigned int index) const =0
Get the const output slot handle by slot index.
void Connect(armnn::IConnectableLayer *from, armnn::IConnectableLayer *to, const armnn::TensorInfo &tensorInfo, unsigned int fromIndex, unsigned int toIndex)
Definition: TestUtils.cpp:12
void SetArmnnOutputSlotForCaffeTop(const std::string &caffeTopName, armnn::IOutputSlot &armnnOutputSlot)
A NormalizationDescriptor for the NormalizationLayer.
float m_Beta
Beta value for the normalization equation.
uint32_t m_NormSize
Depth radius value.

◆ ParsePoolingLayer()

void ParsePoolingLayer ( const caffe::LayerParameter &  layerParam)
protected

Definition at line 883 of file CaffeParser.cpp.

References CHECK_LOCATION, Connect(), GET_OPTIONAL_WITH_FALLBACK, CaffeParserBase::GetArmnnOutputSlotForCaffeTop(), IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), TensorInfo::GetShape(), armnnUtils::GetTensorInfo(), CaffeParserBase::m_Network, Pooling2dDescriptor::m_OutputShapeRounding, Pooling2dDescriptor::m_PadBottom, Pooling2dDescriptor::m_PaddingMethod, Pooling2dDescriptor::m_PadLeft, Pooling2dDescriptor::m_PadRight, Pooling2dDescriptor::m_PadTop, Pooling2dDescriptor::m_PoolHeight, Pooling2dDescriptor::m_PoolType, Pooling2dDescriptor::m_PoolWidth, Pooling2dDescriptor::m_StrideX, Pooling2dDescriptor::m_StrideY, armnn::Max, CaffeParserBase::SetArmnnOutputSlotForCaffeTop(), and IOutputSlot::SetTensorInfo().

884 {
885  // Ignored Caffe Parameters
886  // Stochastic Pooling
887  // Engine
888 
889  ValidateNumInputsOutputs(layerParam, 1, 1);
890  PoolingParameter param = layerParam.pooling_param();
891  const TensorInfo& inputInfo = GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).GetTensorInfo();
892 
893  const auto notFound = std::numeric_limits<unsigned int>::max();
894 
895  unsigned int kernel_h = GET_OPTIONAL_WITH_FALLBACK(param, PoolingParameter,
896  kernel_h, kernel_size, unsigned int, notFound);
897  unsigned int kernel_w = GET_OPTIONAL_WITH_FALLBACK(param, PoolingParameter,
898  kernel_w, kernel_size, unsigned int, notFound);
899 
900  if ((kernel_h == notFound || kernel_w == notFound) && param.has_global_pooling())
901  {
902  kernel_h = inputInfo.GetShape()[2];
903  kernel_w = inputInfo.GetShape()[3];
904  }
905 
906  unsigned int stride_h = GET_OPTIONAL_WITH_FALLBACK(param, PoolingParameter,
907  stride_h, stride, unsigned int, notFound);
908  unsigned int stride_w = GET_OPTIONAL_WITH_FALLBACK(param, PoolingParameter,
 909  stride_w, stride, unsigned int, notFound);
910 
911  if ((stride_h == notFound || stride_w == notFound) && param.has_global_pooling())
912  {
913  stride_h = 1;
914  stride_w = 1;
915  }
916 
917  unsigned int pad_h = GET_OPTIONAL_WITH_FALLBACK(param, PoolingParameter,
918  pad_h, pad, unsigned int, 0u);
919  unsigned int pad_w = GET_OPTIONAL_WITH_FALLBACK(param, PoolingParameter,
920  pad_w, pad, unsigned int, 0u);
921 
922  // Populate Weight and Bias Filter Descriptor
923  Pooling2dDescriptor pooling2dDescriptor;
924  if (param.has_pool())
925  {
926  PoolingParameter_PoolMethod p = param.pool();
927  switch (p)
928  {
929  case PoolingParameter_PoolMethod_MAX:
930  {
931  pooling2dDescriptor.m_PoolType = PoolingAlgorithm::Max;
932  break;
933  }
934  case PoolingParameter_PoolMethod_AVE:
935  {
936  pooling2dDescriptor.m_PoolType = PoolingAlgorithm::Average;
937  break;
938  }
939  case PoolingParameter_PoolMethod_STOCHASTIC:
940  {
941  throw ParseException(
942  boost::str(
943  boost::format(
944  "Pooling Layer: Stochastic Pooling Not Supported. Layer=%1% %2%") %
945  layerParam.name() %
946  CHECK_LOCATION().AsString()));
947  }
948  default:
949  {
950  throw ParseException(
951  boost::str(
952  boost::format(
953  "Pooling Layer: unknown pooling method: %1% for layer: %2% %3%") %
954  p %
955  layerParam.name() %
956  CHECK_LOCATION().AsString()));
957  }
958  }
959  }
960  else
961  {
962  throw ParseException(
963  boost::str(
964  boost::format(
965  "No Pooling Method Defined for %1% %2%") %
966  layerParam.name() %
967  CHECK_LOCATION().AsString()));
968  }
969 
970  pooling2dDescriptor.m_PadLeft = pad_w;
971  pooling2dDescriptor.m_PadRight = pad_w;
972  pooling2dDescriptor.m_PadTop = pad_h;
973  pooling2dDescriptor.m_PadBottom = pad_h;
974  pooling2dDescriptor.m_StrideX = stride_w;
975  pooling2dDescriptor.m_StrideY = stride_h;
976  pooling2dDescriptor.m_PoolWidth = kernel_w;
977  pooling2dDescriptor.m_PoolHeight = kernel_h;
978 
979  pooling2dDescriptor.m_OutputShapeRounding = OutputShapeRounding::Ceiling;
980  pooling2dDescriptor.m_PaddingMethod = PaddingMethod::IgnoreValue;
981 
982  armnn::IConnectableLayer* poolingLayer = m_Network->AddPooling2dLayer(pooling2dDescriptor,
983  layerParam.name().c_str());
984 
985  TensorInfo outputInfo(
986  { inputInfo.GetShape()[0],
987  inputInfo.GetShape()[1],
988  static_cast<unsigned int>(ceil(
989  static_cast<float>(inputInfo.GetShape()[2] + 2 * pad_h - kernel_h) /
990  boost::numeric_cast<float>(stride_h))) + 1,
991  static_cast<unsigned int>(ceil(
992  static_cast<float>(inputInfo.GetShape()[3] + 2 * pad_w - kernel_w) /
993  boost::numeric_cast<float>(stride_w))) + 1 },
994  DataType::Float32);
995 
996  GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).Connect(poolingLayer->GetInputSlot(0));
997  poolingLayer->GetOutputSlot(0).SetTensorInfo(outputInfo);
998  SetArmnnOutputSlotForCaffeTop(layerParam.top(0), poolingLayer->GetOutputSlot(0));
999 }
Interface for a layer that is connectable to other layers via InputSlots and OutputSlots.
Definition: INetwork.hpp:61
uint32_t m_PadBottom
Padding bottom value in the height dimension.
const TensorShape & GetShape() const
Definition: Tensor.hpp:88
uint32_t m_PadLeft
Padding left value in the width dimension.
armnn::IOutputSlot & GetArmnnOutputSlotForCaffeTop(const std::string &caffeTopName) const
Retrieves the Armnn IOutputSlot representing the given Caffe top.
uint32_t m_PoolWidth
Pooling width value.
PaddingMethod m_PaddingMethod
The padding method to be used. (Exclude, IgnoreValue).
uint32_t m_PadTop
Padding top value in the height dimension.
uint32_t m_StrideX
Stride value when proceeding through input for the width dimension.
virtual void SetTensorInfo(const TensorInfo &tensorInfo)=0
uint32_t m_PoolHeight
Pooling height value.
uint32_t m_PadRight
Padding right value in the width dimension.
#define GET_OPTIONAL_WITH_FALLBACK(PARAM, PARAM_TYPE, OPTIONAL_VALUE, FALLBACK_VALUE, VALUE_TYPE, DEFAULT_VALUE)
#define CHECK_LOCATION()
Definition: Exceptions.hpp:192
PoolingAlgorithm m_PoolType
The pooling algorithm to use (Max, Average, L2).
OutputShapeRounding m_OutputShapeRounding
The rounding method for the output shape. (Floor, Ceiling).
virtual const IInputSlot & GetInputSlot(unsigned int index) const =0
Get a const input slot handle by slot index.
armnn::TensorInfo GetTensorInfo(unsigned int numberOfBatches, unsigned int numberOfChannels, unsigned int height, unsigned int width, const armnn::DataLayout dataLayout, const armnn::DataType dataType)
Definition: TensorUtils.cpp:38
virtual const IOutputSlot & GetOutputSlot(unsigned int index) const =0
Get the const output slot handle by slot index.
void Connect(armnn::IConnectableLayer *from, armnn::IConnectableLayer *to, const armnn::TensorInfo &tensorInfo, unsigned int fromIndex, unsigned int toIndex)
Definition: TestUtils.cpp:12
A Pooling2dDescriptor for the Pooling2dLayer.
void SetArmnnOutputSlotForCaffeTop(const std::string &caffeTopName, armnn::IOutputSlot &armnnOutputSlot)
uint32_t m_StrideY
Stride value when proceeding through input for the height dimension.

◆ ParseReluLayer()

void ParseReluLayer ( const caffe::LayerParameter &  layerParam)
protected

Definition at line 1001 of file CaffeParser.cpp.

References Connect(), CaffeParserBase::GetArmnnOutputSlotForCaffeTop(), IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), armnnUtils::GetTensorInfo(), ActivationDescriptor::m_A, ActivationDescriptor::m_Function, CaffeParserBase::m_Network, CaffeParserBase::SetArmnnOutputSlotForCaffeTop(), and IOutputSlot::SetTensorInfo().

1002 {
1003  ValidateNumInputsOutputs(layerParam, 1, 1);
1004 
1005  const string& name = layerParam.name();
1006  const ReLUParameter& param = layerParam.relu_param();
1007 
1008  ActivationDescriptor activationDescriptor;
1009  const float negativeSlope = param.negative_slope();
1010  if (negativeSlope == 0.0f)
1011  {
1012  activationDescriptor.m_Function = ActivationFunction::ReLu;
1013  }
1014  else
1015  {
1016  activationDescriptor.m_Function = ActivationFunction::LeakyReLu;
1017  activationDescriptor.m_A = negativeSlope;
1018  }
1019 
1020  const TensorInfo& inputInfo = GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).GetTensorInfo();
1021  IConnectableLayer* const activationLayer = m_Network->AddActivationLayer(activationDescriptor, name.c_str());
1022  GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).Connect(activationLayer->GetInputSlot(0));
1023  activationLayer->GetOutputSlot(0).SetTensorInfo(inputInfo);
1024  SetArmnnOutputSlotForCaffeTop(layerParam.top(0), activationLayer->GetOutputSlot(0));
1025 }
Interface for a layer that is connectable to other layers via InputSlots and OutputSlots.
Definition: INetwork.hpp:61
armnn::IOutputSlot & GetArmnnOutputSlotForCaffeTop(const std::string &caffeTopName) const
Retrieves the Armnn IOutputSlot representing the given Caffe top.
virtual void SetTensorInfo(const TensorInfo &tensorInfo)=0
An ActivationDescriptor for the ActivationLayer.
Definition: Descriptors.hpp:20
float m_A
Alpha upper bound value used by the activation functions. (BoundedReLu, Linear, TanH).
Definition: Descriptors.hpp:37
virtual const IInputSlot & GetInputSlot(unsigned int index) const =0
Get a const input slot handle by slot index.
armnn::TensorInfo GetTensorInfo(unsigned int numberOfBatches, unsigned int numberOfChannels, unsigned int height, unsigned int width, const armnn::DataLayout dataLayout, const armnn::DataType dataType)
Definition: TensorUtils.cpp:38
virtual const IOutputSlot & GetOutputSlot(unsigned int index) const =0
Get the const output slot handle by slot index.
void Connect(armnn::IConnectableLayer *from, armnn::IConnectableLayer *to, const armnn::TensorInfo &tensorInfo, unsigned int fromIndex, unsigned int toIndex)
Definition: TestUtils.cpp:12
void SetArmnnOutputSlotForCaffeTop(const std::string &caffeTopName, armnn::IOutputSlot &armnnOutputSlot)
ActivationFunction m_Function
The activation function to use (Sigmoid, TanH, Linear, ReLu, BoundedReLu, SoftReLu, LeakyReLu, Abs, Sqrt, Square).
Definition: Descriptors.hpp:35

◆ ParseScaleLayer()

void ParseScaleLayer ( const caffe::LayerParameter &  layerParam)
protected

Definition at line 1404 of file CaffeParser.cpp.

References CHECK_LOCATION, Connect(), armnn::Float32, CaffeParserBase::GetArmnnOutputSlotForCaffeTop(), IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), TensorInfo::GetShape(), armnnUtils::GetTensorInfo(), BatchNormalizationDescriptor::m_Eps, CaffeParserBase::m_Network, CaffeParserBase::SetArmnnOutputSlotForCaffeTop(), and IOutputSlot::SetTensorInfo().

1405 {
1406  // Current suboptimal solution: add a batch normalization layer with 0 mean and 1 variance.
1407  ValidateNumInputsOutputs(layerParam, 1, 1);
1408 
1409  const TensorInfo& inputInfo = GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).GetTensorInfo();
1410 
1411  string name = layerParam.name();
1412 
1413  ScaleParameter param = layerParam.scale_param();
1414  if (param.axis() != 1)
1415  {
1416  // Would have to use something other than BatchNormalizationLayer in this case
1417  throw ParseException(
1418  boost::str(
1419  boost::format(
1420  "Loading Scale Layer: Only axis 1 is supported currently. "
1421  "Layer=%1% Axis=%2% %3%") %
1422  layerParam.name() %
1423  param.axis() %
1424  CHECK_LOCATION().AsString()));
1425  }
1426 
1427  unsigned int channels = inputInfo.GetShape()[1];
1428  unsigned int shape[] = {channels};
1429 
1431  desc.m_Eps = 0.0f; // Don't need epsilon if variance is 1.
1432  vector<float> meanData(channels, 0.0f);
1433  vector<float> varianceData(channels, 1.0f);
1434  vector<float> betaData(channels, 0.0f);
1435  vector<float> gammaData(channels);
1436 
1437  GetDataFromBlob(layerParam, gammaData, 0);
1438 
1439  if(param.has_bias_term())
1440  {
1441  GetDataFromBlob(layerParam, betaData, 1);
1442  }
1443 
1444  ConstTensor mean(TensorInfo(1, shape, armnn::DataType::Float32), meanData);
1445  ConstTensor variance(TensorInfo(1, shape, armnn::DataType::Float32), varianceData);
1446  ConstTensor beta(TensorInfo(1, shape, armnn::DataType::Float32), betaData);
1447  ConstTensor gamma(TensorInfo(1, shape, armnn::DataType::Float32), gammaData);
1448 
1449  armnn::IConnectableLayer* const batchNormLayer = m_Network->AddBatchNormalizationLayer(desc,
1450  mean, variance, beta, gamma, name.c_str());
1451  GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).Connect(batchNormLayer->GetInputSlot(0));
1452  batchNormLayer->GetOutputSlot(0).SetTensorInfo(inputInfo);
1453  SetArmnnOutputSlotForCaffeTop(layerParam.top(0), batchNormLayer->GetOutputSlot(0));
1454 }
Interface for a layer that is connectable to other layers via InputSlots and OutputSlots.
Definition: INetwork.hpp:61
const TensorShape & GetShape() const
Definition: Tensor.hpp:88
armnn::IOutputSlot & GetArmnnOutputSlotForCaffeTop(const std::string &caffeTopName) const
Retrieves the Armnn IOutputSlot representing the given Caffe top.
float m_Eps
Value to add to the variance. Used to avoid dividing by zero.
virtual void SetTensorInfo(const TensorInfo &tensorInfo)=0
A tensor defined by a TensorInfo (shape and data type) and an immutable backing store.
Definition: Tensor.hpp:199
#define CHECK_LOCATION()
Definition: Exceptions.hpp:192
virtual const IInputSlot & GetInputSlot(unsigned int index) const =0
Get a const input slot handle by slot index.
armnn::TensorInfo GetTensorInfo(unsigned int numberOfBatches, unsigned int numberOfChannels, unsigned int height, unsigned int width, const armnn::DataLayout dataLayout, const armnn::DataType dataType)
Definition: TensorUtils.cpp:38
virtual const IOutputSlot & GetOutputSlot(unsigned int index) const =0
Get the const output slot handle by slot index.
void Connect(armnn::IConnectableLayer *from, armnn::IConnectableLayer *to, const armnn::TensorInfo &tensorInfo, unsigned int fromIndex, unsigned int toIndex)
Definition: TestUtils.cpp:12
void SetArmnnOutputSlotForCaffeTop(const std::string &caffeTopName, armnn::IOutputSlot &armnnOutputSlot)
A BatchNormalizationDescriptor for the BatchNormalizationLayer.

◆ ParseSoftmaxLayer()

void ParseSoftmaxLayer ( const caffe::LayerParameter &  layerParam)
protected

Definition at line 1208 of file CaffeParser.cpp.

References Connect(), CaffeParserBase::GetArmnnOutputSlotForCaffeTop(), IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), armnnUtils::GetTensorInfo(), SoftmaxDescriptor::m_Axis, CaffeParserBase::m_Network, CaffeParserBase::SetArmnnOutputSlotForCaffeTop(), and IOutputSlot::SetTensorInfo().

1209 {
1210  ValidateNumInputsOutputs(layerParam, 1, 1);
1211 
1212  SoftmaxParameter param = layerParam.softmax_param();
1213 
1214  const TensorInfo& inputInfo = GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).GetTensorInfo();
1215 
1216  // Ignored Caffe Parameters:
1217  // axis
1218  // Engine
1219 
1220  armnn::SoftmaxDescriptor softmaxDescriptor;
1221  softmaxDescriptor.m_Axis = 1;
1222  armnn::IConnectableLayer* const softmaxLayer = m_Network->AddSoftmaxLayer(
1223  softmaxDescriptor,
1224  layerParam.name().c_str());
1225  GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).Connect(softmaxLayer->GetInputSlot(0));
1226  softmaxLayer->GetOutputSlot(0).SetTensorInfo(inputInfo);
1227  SetArmnnOutputSlotForCaffeTop(layerParam.top(0), softmaxLayer->GetOutputSlot(0));
1228 }
Interface for a layer that is connectable to other layers via InputSlots and OutputSlots.
Definition: INetwork.hpp:61
int m_Axis
Scalar, defaulted to the last index (-1), specifying the dimension the activation will be performed o...
armnn::IOutputSlot & GetArmnnOutputSlotForCaffeTop(const std::string &caffeTopName) const
Retrieves the Armnn IOutputSlot representing the given Caffe top.
virtual void SetTensorInfo(const TensorInfo &tensorInfo)=0
virtual const IInputSlot & GetInputSlot(unsigned int index) const =0
Get a const input slot handle by slot index.
armnn::TensorInfo GetTensorInfo(unsigned int numberOfBatches, unsigned int numberOfChannels, unsigned int height, unsigned int width, const armnn::DataLayout dataLayout, const armnn::DataType dataType)
Definition: TensorUtils.cpp:38
virtual const IOutputSlot & GetOutputSlot(unsigned int index) const =0
Get the const output slot handle by slot index.
void Connect(armnn::IConnectableLayer *from, armnn::IConnectableLayer *to, const armnn::TensorInfo &tensorInfo, unsigned int fromIndex, unsigned int toIndex)
Definition: TestUtils.cpp:12
void SetArmnnOutputSlotForCaffeTop(const std::string &caffeTopName, armnn::IOutputSlot &armnnOutputSlot)
A SoftmaxDescriptor for the SoftmaxLayer.

◆ ParseSplitLayer()

void ParseSplitLayer ( const caffe::LayerParameter &  layerParam)
protected

Definition at line 1456 of file CaffeParser.cpp.

References CHECK_LOCATION, CaffeParserBase::GetArmnnOutputSlotForCaffeTop(), and CaffeParserBase::SetArmnnOutputSlotForCaffeTop().

1457 {
1458  // Used in caffe to duplicate memory - not necessary in armnn.
1459  if (layerParam.bottom_size() != 1)
1460  {
1461  throw ParseException(
1462  boost::str(
1463  boost::format(
1464  "Split layer '%1%' should have exactly 1 bottom. "
1465  "#bottoms=%2% %3%") %
1466  layerParam.name() %
1467  layerParam.bottom_size() %
1468  CHECK_LOCATION().AsString()));
1469  }
1470  armnn::IOutputSlot& outputSlot = GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0));
1471  for (int i = 0; i < layerParam.top_size(); i++)
1472  {
1473  SetArmnnOutputSlotForCaffeTop(layerParam.top(i), outputSlot);
1474  }
1475 }
armnn::IOutputSlot & GetArmnnOutputSlotForCaffeTop(const std::string &caffeTopName) const
Retrieves the Armnn IOutputSlot representing the given Caffe top.
An output connection slot for a layer.
Definition: INetwork.hpp:37
#define CHECK_LOCATION()
Definition: Exceptions.hpp:192
void SetArmnnOutputSlotForCaffeTop(const std::string &caffeTopName, armnn::IOutputSlot &armnnOutputSlot)

◆ ResolveInPlaceLayers()

void ResolveInPlaceLayers ( caffe::NetParameter &  netParameter)
protected

Modifies the Caffe network to replace "in-place" layers (whose top() and bottom() are both the same) with regular layers.

This simplifies further parsing.

Definition at line 1572 of file CaffeParser.cpp.

References CHECK_LOCATION.

Referenced by RecordByRecordCaffeParser::CreateNetworkFromBinaryFile(), and CaffeParserBase::LoadNetParam().

1573 {
1574  // Finds layers with the same top.
1575  std::map<std::string, std::vector<caffe::LayerParameter*>> layersByTop;
1576  for (int layerIdx = 0; layerIdx < netParameter.layer_size(); ++layerIdx)
1577  {
1578  caffe::LayerParameter& layer = *netParameter.mutable_layer(layerIdx);
1579  std::string name = layer.name();
1580  for (int i = 0; i < layer.top_size(); ++i)
1581  {
1582  layersByTop[layer.top(i)].push_back(&layer);
1583  }
1584  }
1585 
1586  // For each set of layers with the same top, resolves them to a linear chain rather than in-place layers.
1587  // Note that for 'regular' layers, there will be a single layer in each group and so this will be a no-op.
1588  for (auto layersWithSameTopIt : layersByTop)
1589  {
1590  const std::string& top = layersWithSameTopIt.first;
1591  const std::vector<caffe::LayerParameter*>& layersWithSameTop = layersWithSameTopIt.second;
1592 
1593  // Chains the layers together in the order that they are listed in the prototxt (hopefully this is correct).
1594  // Note that the last layer will not have its top modified so that other layers will continue to reference it.
1595  for (unsigned int layerIdx = 0; layerIdx < layersWithSameTop.size() - 1; ++layerIdx)
1596  {
1597  caffe::LayerParameter& layer1 = *layersWithSameTop[layerIdx];
1598  caffe::LayerParameter& layer2 = *layersWithSameTop[layerIdx+1];
1599  if (layer1.top_size() != 1)
1600  {
1601  throw ParseException(
1602  boost::str(
1603  boost::format(
1604  "Node '%1%' is an in-place layer but doesn't have exactly one "
1605  "top. It has %2% instead. %3%") %
1606  layer1.name() %
1607  layer1.top_size() %
1608  CHECK_LOCATION().AsString()));
1609  }
1610  std::string newTop = layer1.name() + "_top";
1611  layer1.set_top(0, newTop);
1612  if (layer2.bottom_size() != 1 || layer2.bottom(0) != top)
1613  {
1614  throw ParseException(
1615  boost::str(
1616  boost::format(
1617  "Node '%1%' is an in-place layer but "
1618  "doesn't have exactly one bottom, or it doesn't match its top. "
1619  "#bottoms=%2%, first bottom is %3%, top is %4% %5%") %
1620  layer2.name() % layer2.bottom_size() %
1621  layer2.bottom(0) %
1622  top %
1623  CHECK_LOCATION().AsString()));
1624  }
1625  layer2.set_bottom(0, newTop);
1626  }
1627  }
1628 }
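The chaining step above can be modelled with a minimal, self-contained sketch. The `Layer` struct and `ResolveInPlace` function are hypothetical stand-ins for caffe::LayerParameter and the member function (assuming, for brevity, single-bottom/single-top layers, which is exactly the case the real code enforces with its ParseExceptions):

```cpp
#include <map>
#include <string>
#include <vector>

// Hypothetical stand-in for caffe::LayerParameter: one bottom, one top.
struct Layer {
    std::string name, bottom, top;
};

// Mirrors the rewiring step: layers sharing a top become a linear chain.
// The last layer in each group keeps the original top name, so layers
// elsewhere in the network that reference it are unaffected.
void ResolveInPlace(std::vector<Layer>& layers) {
    std::map<std::string, std::vector<Layer*>> byTop;
    for (auto& layer : layers) {
        byTop[layer.top].push_back(&layer);
    }
    for (auto& entry : byTop) {
        std::vector<Layer*>& group = entry.second;
        for (std::size_t i = 0; i + 1 < group.size(); ++i) {
            std::string newTop = group[i]->name + "_top";
            group[i]->top = newTop;        // rename this layer's output...
            group[i + 1]->bottom = newTop; // ...and feed it to the next layer
        }
    }
}
```

For 'regular' layers the group has a single element, so the inner loop never runs and the layer is left untouched, matching the no-op noted in the source comment.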

◆ SetArmnnOutputSlotForCaffeTop()

void SetArmnnOutputSlotForCaffeTop ( const std::string &  caffeTopName,
armnn::IOutputSlot &  armnnOutputSlot 
)
protected

Definition at line 1551 of file CaffeParser.cpp.

References CHECK_LOCATION, and CaffeParserBase::m_ArmnnOutputSlotForCaffeTop.

Referenced by CaffeParserBase::AddConvLayerWithDepthwiseConv(), CaffeParserBase::AddConvLayerWithSplits(), CaffeParserBase::ParseBatchNormLayer(), CaffeParserBase::ParseConcatLayer(), CaffeParserBase::ParseConvLayer(), CaffeParserBase::ParseDropoutLayer(), CaffeParserBase::ParseEltwiseLayer(), CaffeParserBase::ParseInnerProductLayer(), CaffeParserBase::ParseInputLayer(), CaffeParserBase::ParseLRNLayer(), CaffeParserBase::ParsePoolingLayer(), CaffeParserBase::ParseReluLayer(), CaffeParserBase::ParseScaleLayer(), CaffeParserBase::ParseSoftmaxLayer(), and CaffeParserBase::ParseSplitLayer().

1553 {
1554  auto it = m_ArmnnOutputSlotForCaffeTop.find(caffeTopName);
1555  if (it == m_ArmnnOutputSlotForCaffeTop.end())
1556  {
1557  m_ArmnnOutputSlotForCaffeTop[caffeTopName] = &armnnOutputSlot;
1558  }
1559  else
1560  {
1561  throw ParseException(
1562  boost::str(
1563  boost::format(
1564  "Attempting to add duplicate entry for Caffe top '%1%' %2%") %
1565  caffeTopName %
1566  CHECK_LOCATION().AsString()));
1567  }
1568 }

◆ TrackBindingPoint()

void TrackBindingPoint ( armnn::IConnectableLayer *  layer,
armnn::LayerBindingId  id,
const armnn::TensorInfo &  tensorInfo,
const char *  bindingPointDesc,
std::unordered_map< std::string, BindingPointInfo > &  nameToBindingInfo 
)
staticprotected

Definition at line 1509 of file CaffeParser.cpp.

References CHECK_LOCATION, and IConnectableLayer::GetName().

Referenced by CaffeParserBase::TrackInputBinding(), and CaffeParserBase::TrackOutputBinding().

1514 {
1515  const std::string layerName = layer->GetName();
1516  auto it = nameToBindingInfo.find(layerName);
1517  if (it == nameToBindingInfo.end())
1518  {
1519  nameToBindingInfo[layerName] = std::make_pair(id, tensorInfo);
1520  }
1521  else
1522  {
1523  throw ParseException(
1524  boost::str(
1525  boost::format(
1526  "Id %1% used by more than one %2% layer %3%") %
1527  id %
1528  bindingPointDesc %
1529  CHECK_LOCATION().AsString()));
1530  }
1531 }
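The insert-or-throw contract shown above can be modelled with a self-contained sketch. The `BindingInfo` alias and `Track` function are illustrative stand-ins (an int replaces armnn::TensorInfo, and std::runtime_error replaces ParseException), not the ArmNN types:

```cpp
#include <stdexcept>
#include <string>
#include <unordered_map>
#include <utility>

// Stand-in for BindingPointInfo: a binding id paired with tensor info.
using BindingInfo = std::pair<int, int>;

// Mirrors TrackBindingPoint's contract: each layer name may be registered
// at most once per binding table; a duplicate registration throws.
void Track(const std::string& layerName, int id, int tensorInfo,
           std::unordered_map<std::string, BindingInfo>& table) {
    if (table.find(layerName) != table.end()) {
        throw std::runtime_error("duplicate binding for layer: " + layerName);
    }
    table[layerName] = {id, tensorInfo};
}
```

TrackInputBinding and TrackOutputBinding below are thin wrappers that pick which table (inputs or outputs) the entry goes into.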

◆ TrackInputBinding()

void TrackInputBinding ( armnn::IConnectableLayer *  layer,
armnn::LayerBindingId  id,
const armnn::TensorInfo &  tensorInfo 
)
protected

Definition at line 1495 of file CaffeParser.cpp.

References IConnectableLayer::GetName(), CaffeParserBase::m_NetworkInputsBindingInfo, and CaffeParserBase::TrackBindingPoint().

Referenced by CaffeParserBase::ParseInputLayer().

1498 {
1499  return TrackBindingPoint(layer, id, tensorInfo, layer->GetName(), m_NetworkInputsBindingInfo);
1500 }

◆ TrackOutputBinding()

void TrackOutputBinding ( armnn::IConnectableLayer *  layer,
armnn::LayerBindingId  id,
const armnn::TensorInfo &  tensorInfo 
)
protected

Definition at line 1502 of file CaffeParser.cpp.

References IConnectableLayer::GetName(), CaffeParserBase::m_NetworkOutputsBindingInfo, and CaffeParserBase::TrackBindingPoint().

Referenced by RecordByRecordCaffeParser::CreateNetworkFromBinaryFile(), and CaffeParserBase::LoadNetParam().

1505 {
1506  return TrackBindingPoint(layer, id, tensorInfo, layer->GetName(), m_NetworkOutputsBindingInfo);
1507 }

Member Data Documentation

◆ m_ArmnnOutputSlotForCaffeTop

std::unordered_map<std::string, armnn::IOutputSlot*> m_ArmnnOutputSlotForCaffeTop
protected

As ArmNN layers are added, this stores the armnn::IOutputSlot that corresponds to each Caffe top.

Definition at line 131 of file CaffeParser.hpp.

Referenced by CaffeParserBase::Cleanup(), CaffeParserBase::GetArmnnOutputSlotForCaffeTop(), and CaffeParserBase::SetArmnnOutputSlotForCaffeTop().

◆ m_CaffeLayersByTopName

std::map<std::string, const caffe::LayerParameter*> m_CaffeLayersByTopName
protected

◆ m_InputShapes

◆ m_Network

◆ m_NetworkInputsBindingInfo

std::unordered_map<std::string, BindingPointInfo> m_NetworkInputsBindingInfo
protected

Maps input layer names to their corresponding ids and tensor infos.
◆ m_NetworkOutputsBindingInfo

std::unordered_map<std::string, BindingPointInfo> m_NetworkOutputsBindingInfo
protected

Maps output layer names to their corresponding ids and tensor infos.
◆ m_RequestedOutputs

std::vector<std::string> m_RequestedOutputs
protected

◆ ms_CaffeLayerNameToParsingFunctions

const std::map< std::string, CaffeParserBase::OperationParsingFunction > ms_CaffeLayerNameToParsingFunctions
staticprotected

The documentation for this class was generated from the following files: