ArmNN
 20.05
CaffeParserBase Class Reference

#include <CaffeParser.hpp>

Inheritance: CaffeParserBase inherits ICaffeParser, and is inherited by CaffeParser and RecordByRecordCaffeParser.

Public Member Functions

virtual armnn::INetworkPtr CreateNetworkFromTextFile (const char *graphFile, const std::map< std::string, armnn::TensorShape > &inputShapes, const std::vector< std::string > &requestedOutputs) override
 Create the network from a protobuf text file on disk. More...
 
virtual armnn::INetworkPtr CreateNetworkFromString (const char *protoText, const std::map< std::string, armnn::TensorShape > &inputShapes, const std::vector< std::string > &requestedOutputs) override
 Creates the network directly from protobuf text in a string. Useful for debugging/testing. More...
 
virtual BindingPointInfo GetNetworkInputBindingInfo (const std::string &name) const override
 Retrieves binding info (layer id and tensor info) for the network input identified by the given layer name. More...
 
virtual BindingPointInfo GetNetworkOutputBindingInfo (const std::string &name) const override
 Retrieves binding info (layer id and tensor info) for the network output identified by the given layer name. More...
 
 CaffeParserBase ()
 
- Public Member Functions inherited from ICaffeParser
virtual armnn::INetworkPtr CreateNetworkFromBinaryFile (const char *graphFile, const std::map< std::string, armnn::TensorShape > &inputShapes, const std::vector< std::string > &requestedOutputs)=0
 Create the network from a protobuf binary file on the disk. More...
 

Protected Types

using OperationParsingFunction = void(CaffeParserBase::*)(const caffe::LayerParameter &layerParam)
 

Protected Member Functions

armnn::TensorInfo BlobShapeToTensorInfo (const caffe::BlobShape &blobShape) const
 Converts Caffe's protobuf tensor shape format to ArmNN's. More...
 
void TrackInputBinding (armnn::IConnectableLayer *layer, armnn::LayerBindingId id, const armnn::TensorInfo &tensorInfo)
 
void TrackOutputBinding (armnn::IConnectableLayer *layer, armnn::LayerBindingId id, const armnn::TensorInfo &tensorInfo)
 
void SetArmnnOutputSlotForCaffeTop (const std::string &caffeTopName, armnn::IOutputSlot &armnnOutputSlot)
 
armnn::IOutputSlot & GetArmnnOutputSlotForCaffeTop (const std::string &caffeTopName) const
 Retrieves the Armnn IOutputSlot representing the given Caffe top. More...
 
void Cleanup ()
 
armnn::INetworkPtr CreateNetworkFromNetParameter (caffe::NetParameter &netParam, const std::map< std::string, armnn::TensorShape > &inputShapes, const std::vector< std::string > &requestedOutputs)
 Parses a NetParameter loaded into memory by one of the other CreateNetwork* functions. More...
 
void LoadNetParam (caffe::NetParameter &netParameter)
 Does the actual conversion from caffe::NetParameter to armnn::INetwork. More...
 
std::vector< const caffe::LayerParameter * > GetInputs (const caffe::LayerParameter &layerParam)
 Find the Caffe layers listed as inputs (bottoms) for a given layer. More...
 
void ResolveInPlaceLayers (caffe::NetParameter &netParameter)
 Modifies the Caffe network to replace "in-place" layers (whose top() and bottom() are both the same) with regular layers. More...
 
void ParseInputLayer (const caffe::LayerParameter &layerParam)
 Adds an armnn layer to m_Network given a Caffe LayerParameter of the correct type and is responsible for recording any newly created IOutputSlots using SetArmnnOutputSlotForCaffeTop(). More...
 
void ParseConvLayer (const caffe::LayerParameter &layerParam)
 
void ParsePoolingLayer (const caffe::LayerParameter &layerParam)
 
void ParseReluLayer (const caffe::LayerParameter &layerParam)
 
void ParseLRNLayer (const caffe::LayerParameter &layerParam)
 
void ParseInnerProductLayer (const caffe::LayerParameter &layerParam)
 
void ParseSoftmaxLayer (const caffe::LayerParameter &layerParam)
 
void ParseEltwiseLayer (const caffe::LayerParameter &layerParam)
 
void ParseConcatLayer (const caffe::LayerParameter &layerParam)
 
void ParseBatchNormLayer (const caffe::LayerParameter &layerParam)
 
void ParseScaleLayer (const caffe::LayerParameter &layerParam)
 
void ParseSplitLayer (const caffe::LayerParameter &layerParam)
 
void ParseDropoutLayer (const caffe::LayerParameter &layerParam)
 
void AddConvLayerWithSplits (const caffe::LayerParameter &layerParam, const armnn::Convolution2dDescriptor &desc, unsigned int kernelW, unsigned int kernelH)
 ParseConv may use these helpers depending on the group parameter. More...
 
void AddConvLayerWithDepthwiseConv (const caffe::LayerParameter &layerParam, const armnn::Convolution2dDescriptor &desc, unsigned int kernelW, unsigned int kernelH)
 
- Protected Member Functions inherited from ICaffeParser
virtual ~ICaffeParser ()
 

Static Protected Member Functions

static void TrackBindingPoint (armnn::IConnectableLayer *layer, armnn::LayerBindingId id, const armnn::TensorInfo &tensorInfo, const char *bindingPointDesc, std::unordered_map< std::string, BindingPointInfo > &nameToBindingInfo)
 
static std::pair< armnn::LayerBindingId, armnn::TensorInfo > GetBindingInfo (const std::string &layerName, const char *bindingPointDesc, const std::unordered_map< std::string, BindingPointInfo > &nameToBindingInfo)
 

Protected Attributes

std::unordered_map< std::string, BindingPointInfo > m_NetworkInputsBindingInfo
 maps input layer names to their corresponding ids and tensor infos More...
 
std::unordered_map< std::string, BindingPointInfo > m_NetworkOutputsBindingInfo
 maps output layer names to their corresponding ids and tensor infos More...
 
armnn::INetworkPtr m_Network
 
std::map< std::string, armnn::TensorShape > m_InputShapes
 
std::unordered_map< std::string, armnn::IOutputSlot * > m_ArmnnOutputSlotForCaffeTop
 As we add armnn layers we store the armnn IOutputSlot which corresponds to the Caffe tops. More...
 
std::vector< std::string > m_RequestedOutputs
 
std::map< std::string, const caffe::LayerParameter * > m_CaffeLayersByTopName
 

Static Protected Attributes

static const std::map< std::string, OperationParsingFunction > ms_CaffeLayerNameToParsingFunctions
 Maps Caffe layer names to parsing member functions. More...
 

Additional Inherited Members

- Static Public Member Functions inherited from ICaffeParser
static ICaffeParser * CreateRaw ()
 
static ICaffeParserPtr Create ()
 
static void Destroy (ICaffeParser *parser)
 

Detailed Description

Definition at line 26 of file CaffeParser.hpp.

Member Typedef Documentation

◆ OperationParsingFunction

using OperationParsingFunction = void(CaffeParserBase::*)(const caffe::LayerParameter& layerParam)
protected

Definition at line 115 of file CaffeParser.hpp.

Constructor & Destructor Documentation

◆ CaffeParserBase()

Definition at line 275 of file CaffeParser.cpp.

276  : m_Network(nullptr, nullptr)
277 {
278 
279 }

Member Function Documentation

◆ AddConvLayerWithDepthwiseConv()

void AddConvLayerWithDepthwiseConv ( const caffe::LayerParameter &  layerParam,
const armnn::Convolution2dDescriptor &  convDesc,
unsigned int  kernelW,
unsigned int  kernelH 
)
protected

Definition at line 612 of file CaffeParser.cpp.

References ARMNN_ASSERT, CaffeParserBase::BlobShapeToTensorInfo(), CHECK_LOCATION, IOutputSlot::Connect(), CaffeParserBase::GetArmnnOutputSlotForCaffeTop(), armnnUtils::GetTensorInfo(), Convolution2dDescriptor::m_BiasEnabled, CaffeParserBase::m_Network, Convolution2dDescriptor::m_PadBottom, Convolution2dDescriptor::m_PadLeft, DepthwiseConvolution2dDescriptor::m_PadLeft, Convolution2dDescriptor::m_PadRight, Convolution2dDescriptor::m_PadTop, Convolution2dDescriptor::m_StrideX, Convolution2dDescriptor::m_StrideY, armnn::numeric_cast(), CaffeParserBase::SetArmnnOutputSlotForCaffeTop(), and armnnCaffeParser::TensorDescToBlobShape().

Referenced by CaffeParserBase::ParseConvLayer().

616 {
617  ARMNN_ASSERT(layerParam.type() == "Convolution");
618  ValidateNumInputsOutputs(layerParam, 1, 1);
619 
620  ConvolutionParameter convParam = layerParam.convolution_param();
621  BlobShape inputShape = TensorDescToBlobShape(GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).GetTensorInfo());
622 
623  DepthwiseConvolution2dDescriptor desc;
624  desc.m_PadLeft = convDesc.m_PadLeft;
625  desc.m_PadRight = convDesc.m_PadRight;
626  desc.m_PadTop = convDesc.m_PadTop;
627  desc.m_PadBottom = convDesc.m_PadBottom;
628  desc.m_StrideX = convDesc.m_StrideX;
629  desc.m_StrideY = convDesc.m_StrideY;
630  desc.m_BiasEnabled = convDesc.m_BiasEnabled;
631 
632  unsigned int numFilters = convParam.num_output();
633 
634  BlobShape outputShape;
635  outputShape.add_dim(0);
636  outputShape.set_dim(0, inputShape.dim(0));
637  outputShape.add_dim(1);
638  outputShape.set_dim(1, numFilters);
639  outputShape.add_dim(2);
640  outputShape.set_dim(
641  2, (static_cast<int>(
642  static_cast<float>(inputShape.dim(2) + 2 * desc.m_PadBottom - kernelH) /
643  static_cast<float>(desc.m_StrideY)) + 1));
644  outputShape.add_dim(3);
645  outputShape.set_dim(
646  3, (static_cast<int>(
647  static_cast<float>(inputShape.dim(3) + 2 * desc.m_PadRight - kernelW) /
648  static_cast<float>(desc.m_StrideX)) + 1));
649 
650  // Load the weight data
651  size_t allWeightsSize = boost::numeric_cast<size_t>(inputShape.dim(1) * kernelH * kernelW);
652  vector<float> weightData(allWeightsSize);
653 
654  GetDataFromBlob(layerParam, weightData, 0);
655 
656  // depth multiplier will be 1 for the depthwise convolution
657  const unsigned int weightDimSizes[4] = {
658  static_cast<unsigned int>(1), // depth multiplier
659  static_cast<unsigned int>(inputShape.dim(1)), // #channels
660  kernelH,
661  kernelW};
662 
663  armnn::IConnectableLayer* returnLayer = nullptr;
664  ConstTensor weights(TensorInfo(4, weightDimSizes, DataType::Float32), weightData.data());
665  Optional<ConstTensor> optionalBiases;
666  vector<float> biasData;
667  if (desc.m_BiasEnabled)
668  {
669  TensorInfo biasInfo;
670 
671  biasData.resize(boost::numeric_cast<size_t>(outputShape.dim(1)), 1.f);
672  GetDataFromBlob(layerParam, biasData, 1);
673 
674  const unsigned int biasDimSizes[1] = {static_cast<unsigned int>(outputShape.dim(1))};
675  biasInfo = TensorInfo(1, biasDimSizes, DataType::Float32);
676 
677  ConstTensor biases(biasInfo, biasData.data());
678  optionalBiases = Optional<ConstTensor>(biases);
679  }
680  returnLayer = m_Network->AddDepthwiseConvolution2dLayer(desc,
681  weights,
682  optionalBiases,
683  layerParam.name().c_str());
684 
685  if (!returnLayer)
686  {
687  throw ParseException(
688  boost::str(
689  boost::format(
690  "Failed to create depthwise convolution layer. "
691  "Layer=%1% #filters=%2% %3%") %
692  layerParam.name() %
693  numFilters %
694  CHECK_LOCATION().AsString()));
695  }
696  armnn::IOutputSlot& inputConnection = GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0));
697  inputConnection.Connect(returnLayer->GetInputSlot(0));
698  returnLayer->GetOutputSlot(0).SetTensorInfo(BlobShapeToTensorInfo(outputShape));
699  SetArmnnOutputSlotForCaffeTop(layerParam.top(0), returnLayer->GetOutputSlot(0));
700 }

◆ AddConvLayerWithSplits()

void AddConvLayerWithSplits ( const caffe::LayerParameter &  layerParam,
const armnn::Convolution2dDescriptor &  desc,
unsigned int  kernelW,
unsigned int  kernelH 
)
protected

ParseConv may use these helpers depending on the group parameter.

Definition at line 420 of file CaffeParser.cpp.

References ARMNN_ASSERT, CaffeParserBase::BlobShapeToTensorInfo(), CHECK_LOCATION, IOutputSlot::Connect(), CaffeParserBase::GetArmnnOutputSlotForCaffeTop(), IConnectableLayer::GetInputSlot(), IConnectableLayer::GetNumOutputSlots(), IConnectableLayer::GetOutputSlot(), armnnUtils::GetTensorInfo(), Convolution2dDescriptor::m_BiasEnabled, CaffeParserBase::m_Network, Convolution2dDescriptor::m_PadBottom, Convolution2dDescriptor::m_PadRight, Convolution2dDescriptor::m_StrideX, Convolution2dDescriptor::m_StrideY, armnn::numeric_cast(), CaffeParserBase::SetArmnnOutputSlotForCaffeTop(), IOutputSlot::SetTensorInfo(), OriginsDescriptor::SetViewOriginCoord(), ViewsDescriptor::SetViewOriginCoord(), ViewsDescriptor::SetViewSize(), and armnnCaffeParser::TensorDescToBlobShape().

Referenced by CaffeParserBase::ParseConvLayer().

424 {
425  ARMNN_ASSERT(layerParam.type() == "Convolution");
426  ValidateNumInputsOutputs(layerParam, 1, 1);
427 
428  ConvolutionParameter convParam = layerParam.convolution_param();
429  BlobShape inputShape = TensorDescToBlobShape(GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).GetTensorInfo());
430  const unsigned int numGroups = convParam.has_group() ? convParam.group() : 1;
431 
432  // assume these were already verified by the caller, ParseConvLayer()
433  ARMNN_ASSERT(numGroups < inputShape.dim(1));
434  ARMNN_ASSERT(numGroups > 1);
435 
436  // Handle grouping
437  armnn::IOutputSlot& inputConnection = GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0));
438 
439  vector<string> convLayerNames(numGroups);
440  vector<armnn::IConnectableLayer*> convLayers(numGroups);
441  convLayerNames[0] = layerParam.name();
442 
443  // This convolution is to be applied to chunks of the input data so add a splitter layer
444 
445  // Redirect the convolution input to the splitter
446  unsigned int splitterDimSizes[4] = {static_cast<unsigned int>(inputShape.dim(0)),
447  static_cast<unsigned int>(inputShape.dim(1)),
448  static_cast<unsigned int>(inputShape.dim(2)),
449  static_cast<unsigned int>(inputShape.dim(3))};
450 
451  // Split dimension 1 of the splitter output shape and conv input shapes
452  // according to the number of groups
453 
454  splitterDimSizes[1] /= numGroups;
455  inputShape.set_dim(1, splitterDimSizes[1]);
456 
457  // This is used to describe how the input is to be split
458  ViewsDescriptor splitterDesc(numGroups);
459 
460  // Create an output node for each group, giving each a unique name
461  for (unsigned int g = 0; g < numGroups; ++g)
462  {
463  // Work out the names of the splitter layer's child convolutions
464  stringstream ss;
465  ss << layerParam.name() << "_" << g;
466  convLayerNames[g] = ss.str();
467 
468  splitterDesc.SetViewOriginCoord(g, 1, splitterDimSizes[1] * g);
469 
470  // Set the size of the views.
471  for (unsigned int dimIdx=0; dimIdx < 4; dimIdx++)
472  {
473  splitterDesc.SetViewSize(g, dimIdx, splitterDimSizes[dimIdx]);
474  }
475  }
476 
477  const std::string splitterLayerName = std::string("splitter_") + layerParam.bottom(0);
478  armnn::IConnectableLayer* splitterLayer = m_Network->AddSplitterLayer(splitterDesc, splitterLayerName.c_str());
479 
480  inputConnection.Connect(splitterLayer->GetInputSlot(0));
481  for (unsigned int i = 0; i < splitterLayer->GetNumOutputSlots(); i++)
482  {
483  splitterLayer->GetOutputSlot(i).SetTensorInfo(BlobShapeToTensorInfo(inputShape));
484  }
485 
486  unsigned int numFilters = convParam.num_output();
487 
488  // Populates convolution output tensor descriptor dimensions.
489  BlobShape outputShape;
490  outputShape.add_dim(0);
491  outputShape.set_dim(0, inputShape.dim(0));
492  outputShape.add_dim(1);
493  // Ensures that dimension 1 of the convolution output is split according to the number of groups.
494  outputShape.set_dim(1, numFilters / numGroups);
495  outputShape.add_dim(2);
496  outputShape.set_dim(
497  2, (static_cast<int>(
498  static_cast<float>(inputShape.dim(2) + 2 * desc.m_PadBottom - kernelH) /
499  static_cast<float>(desc.m_StrideY)) + 1));
500  outputShape.add_dim(3);
501  outputShape.set_dim(
502  3, (static_cast<int>(
503  static_cast<float>(inputShape.dim(3) + 2 * desc.m_PadRight - kernelW) /
504  static_cast<float>(desc.m_StrideX)) + 1));
505 
506  // Load the weight data for ALL groups
507  vector<float> weightData(boost::numeric_cast<size_t>(numGroups *
508  inputShape.dim(1) * // number of input channels
509  outputShape.dim(1) * // number of output channels
510  kernelH *
511  kernelW));
512  GetDataFromBlob(layerParam, weightData, 0);
513 
514  const unsigned int weightDimSizes[4] = {
515  static_cast<unsigned int>(outputShape.dim(1)),
516  static_cast<unsigned int>(inputShape.dim(1)),
517  kernelH,
518  kernelW};
519 
520  TensorInfo biasInfo;
521  vector<float> biasData;
522 
523  if (desc.m_BiasEnabled)
524  {
525  biasData.resize(boost::numeric_cast<size_t>(numGroups * outputShape.dim(1)), 1.f);
526  GetDataFromBlob(layerParam, biasData, 1);
527 
528  const unsigned int biasDimSizes[1] = {static_cast<unsigned int>(outputShape.dim(1))};
529  biasInfo = TensorInfo(1, biasDimSizes, DataType::Float32);
530  }
531 
532  const unsigned int numWeightsPerGroup = boost::numeric_cast<unsigned int>(weightData.size()) / numGroups;
533  const unsigned int numBiasesPerGroup = boost::numeric_cast<unsigned int>(biasData.size()) / numGroups;
534 
535  for (unsigned int g = 0; g < numGroups; ++g)
536  {
537  // Sets the slot index, group 0 should be connected to the 0th output of the splitter
538  // group 1 should be connected to the 1st output of the splitter.
539 
540  // Pulls out the weights for this group from that loaded from the model file earlier.
541  ConstTensor weights(TensorInfo(4, weightDimSizes, DataType::Float32),
542  weightData.data() + numWeightsPerGroup * g);
543 
544  IConnectableLayer* convLayer = nullptr;
545  Optional<ConstTensor> optionalBiases;
546  if (desc.m_BiasEnabled)
547  {
548  // Pulls out the biases for this group from that loaded from the model file earlier.
549  ConstTensor biases(biasInfo, biasData.data() + numBiasesPerGroup * g);
550  optionalBiases = Optional<ConstTensor>(biases);
551  }
552  convLayer = m_Network->AddConvolution2dLayer(desc,
553  weights,
554  optionalBiases,
555  convLayerNames[g].c_str());
556  convLayers[g] = convLayer;
557 
558  // If we have more than one group then the input to the nth convolution is the splitter layer's nth output,
559  // otherwise it's the regular input to this layer.
560  armnn::IOutputSlot& splitterInputConnection =
561  splitterLayer ? splitterLayer->GetOutputSlot(g) : inputConnection;
562  splitterInputConnection.Connect(convLayer->GetInputSlot(0));
563  convLayer->GetOutputSlot(0).SetTensorInfo(BlobShapeToTensorInfo(outputShape));
564  }
565 
566  // If the convolution was performed in chunks, add a layer to concatenate the results
567 
568  // The merge input shape matches that of the convolution output
569  unsigned int concatDimSizes[4] = {static_cast<unsigned int>(outputShape.dim(0)),
570  static_cast<unsigned int>(outputShape.dim(1)),
571  static_cast<unsigned int>(outputShape.dim(2)),
572  static_cast<unsigned int>(outputShape.dim(3))};
573 
574  // This is used to describe how the input is to be concatenated
575  OriginsDescriptor concatDesc(numGroups);
576 
577  // Now create an input node for each group, using the name from
578  // the output of the corresponding convolution
579  for (unsigned int g = 0; g < numGroups; ++g)
580  {
581  concatDesc.SetViewOriginCoord(g, 1, concatDimSizes[1] * g);
582  }
583 
584  // Make sure the output from the concat is the correct size to hold the data for all groups
585  concatDimSizes[1] *= numGroups;
586  outputShape.set_dim(1, concatDimSizes[1]);
587 
588  // Finally add the concat layer
589  IConnectableLayer* concatLayer = m_Network->AddConcatLayer(concatDesc, layerParam.name().c_str());
590 
591  if (!concatLayer)
592  {
593  throw ParseException(
594  boost::str(
595  boost::format(
596  "Failed to create final concat layer for Split+Convolution+Concat. "
597  "Layer=%1% #groups=%2% #filters=%3% %4%") %
598  layerParam.name() %
599  numGroups %
600  numFilters %
601  CHECK_LOCATION().AsString()));
602  }
603 
604  for (unsigned int g = 0; g < numGroups; ++g)
605  {
606  convLayers[g]->GetOutputSlot(0).Connect(concatLayer->GetInputSlot(g));
607  }
608  concatLayer->GetOutputSlot(0).SetTensorInfo(armnn::TensorInfo(4, concatDimSizes, DataType::Float32));
609  SetArmnnOutputSlotForCaffeTop(layerParam.top(0), concatLayer->GetOutputSlot(0));
610 }

◆ BlobShapeToTensorInfo()

TensorInfo BlobShapeToTensorInfo ( const caffe::BlobShape &  blobShape) const
protected

Converts Caffe's protobuf tensor shape format to ArmNN's.

Definition at line 315 of file CaffeParser.cpp.

Referenced by CaffeParserBase::AddConvLayerWithDepthwiseConv(), CaffeParserBase::AddConvLayerWithSplits(), CaffeParserBase::ParseConvLayer(), and CaffeParserBase::ParseInputLayer().

316 {
317  std::vector<unsigned int> shape;
318  for (int j = 0; j < blobShape.dim_size(); ++j)
319  {
320  shape.push_back(static_cast<unsigned int>(blobShape.dim(j)));
321  }
322 
323  return TensorInfo(boost::numeric_cast<unsigned int>(shape.size()), shape.data(), DataType::Float32);
324 }

◆ Cleanup()

void Cleanup ( )
protected

Definition at line 1861 of file CaffeParser.cpp.

References CaffeParserBase::m_ArmnnOutputSlotForCaffeTop, CaffeParserBase::m_CaffeLayersByTopName, CaffeParserBase::m_InputShapes, and CaffeParserBase::m_RequestedOutputs.

Referenced by RecordByRecordCaffeParser::CreateNetworkFromBinaryFile(), and CaffeParserBase::CreateNetworkFromNetParameter().

1861  {
1862  // cleanup, in case we reuse this parser
1863  m_InputShapes.clear();
1864  m_RequestedOutputs.clear();
1865  m_ArmnnOutputSlotForCaffeTop.clear();
1866  // NOTE: when we get the text/string format
1867  // optimised for memory then this data structure can
1868  // also move to the CaffeParser class
1869  m_CaffeLayersByTopName.clear();
1870 }

◆ CreateNetworkFromNetParameter()

INetworkPtr CreateNetworkFromNetParameter ( caffe::NetParameter &  netParam,
const std::map< std::string, armnn::TensorShape > &  inputShapes,
const std::vector< std::string > &  requestedOutputs 
)
protected

Parses a NetParameter loaded into memory by one of the other CreateNetwork* functions.

Definition at line 1830 of file CaffeParser.cpp.

References CaffeParserBase::Cleanup(), INetwork::Create(), CaffeParserBase::LoadNetParam(), CaffeParserBase::m_InputShapes, CaffeParserBase::m_Network, CaffeParserBase::m_NetworkInputsBindingInfo, CaffeParserBase::m_NetworkOutputsBindingInfo, and CaffeParserBase::m_RequestedOutputs.

Referenced by CaffeParser::CreateNetworkFromBinaryFile(), CaffeParserBase::CreateNetworkFromString(), and CaffeParserBase::CreateNetworkFromTextFile().

1833 {
1834  m_NetworkInputsBindingInfo.clear();
1835  m_NetworkOutputsBindingInfo.clear();
1836 
1837  m_Network = INetwork::Create();
1838 
1839  m_InputShapes = inputShapes;
1840  if (requestedOutputs.size() == 0)
1841  {
1842  throw ParseException("requestedOutputs must have at least one entry");
1843  }
1844  m_RequestedOutputs = requestedOutputs;
1845 
1846  try
1847  {
1848  LoadNetParam(netParam);
1849  }
1850  catch (const ParseException& e)
1851  {
1852  Cleanup();
1853  throw e;
1854  }
1855 
1856  Cleanup();
1857 
1858  return move(m_Network);
1859 }

◆ CreateNetworkFromString()

INetworkPtr CreateNetworkFromString ( const char *  protoText,
const std::map< std::string, armnn::TensorShape > &  inputShapes,
const std::vector< std::string > &  requestedOutputs 
)
overridevirtual

Creates the network directly from protobuf text in a string. Useful for debugging/testing.

Implements ICaffeParser.

Definition at line 1770 of file CaffeParser.cpp.

References CHECK_LOCATION, and CaffeParserBase::CreateNetworkFromNetParameter().

1773 {
1774  // Parses the string into a message.
1775  NetParameter netParam;
1776  bool success = google::protobuf::TextFormat::ParseFromString(protoText, &netParam);
1777 
1778  if (!success)
1779  {
1780  throw ParseException(
1781  boost::str(
1782  boost::format(
1783  "Failed to parse graph string %1%") %
1784  CHECK_LOCATION().AsString()));
1785  }
1786 
1787  return CreateNetworkFromNetParameter(netParam, inputShapes, requestedOutputs);
1788 }

◆ CreateNetworkFromTextFile()

INetworkPtr CreateNetworkFromTextFile ( const char *  graphFile,
const std::map< std::string, armnn::TensorShape > &  inputShapes,
const std::vector< std::string > &  requestedOutputs 
)
overridevirtual

Create the network from a protobuf text file on disk.

Implements ICaffeParser.

Definition at line 1734 of file CaffeParser.cpp.

References CHECK_LOCATION, and CaffeParserBase::CreateNetworkFromNetParameter().

1737 {
1738  FILE* fd = fopen(graphFile, "r");
1739 
1740  if (fd == nullptr)
1741  {
1742  throw FileNotFoundException(
1743  boost::str(
1744  boost::format(
1745  "Failed to open graph file: %1% %2%") %
1746  graphFile %
1747  CHECK_LOCATION().AsString()));
1748  }
1749 
1750  // Parses the file into a message.
1751  NetParameter netParam;
1752  auto input = new google::protobuf::io::FileInputStream(fileno(fd));
1753  bool success = google::protobuf::TextFormat::Parse(input, &netParam);
1754  delete input;
1755  fclose(fd);
1756 
1757  if (!success)
1758  {
1759  throw ParseException(
1760  boost::str(
1761  boost::format(
1762  "Failed to parse graph file: %1% %2%") %
1763  graphFile %
1764  CHECK_LOCATION().AsString()));
1765  }
1766 
1767  return CreateNetworkFromNetParameter(netParam, inputShapes, requestedOutputs);
1768 }

◆ GetArmnnOutputSlotForCaffeTop()

armnn::IOutputSlot & GetArmnnOutputSlotForCaffeTop ( const std::string &  caffeTopName) const
protected

Retrieves the Armnn IOutputSlot representing the given Caffe top.

Throws if it cannot be found (e.g. not parsed yet).

Definition at line 1534 of file CaffeParser.cpp.

References CHECK_LOCATION, and CaffeParserBase::m_ArmnnOutputSlotForCaffeTop.

Referenced by CaffeParserBase::AddConvLayerWithDepthwiseConv(), CaffeParserBase::AddConvLayerWithSplits(), RecordByRecordCaffeParser::CreateNetworkFromBinaryFile(), CaffeParserBase::LoadNetParam(), CaffeParserBase::ParseBatchNormLayer(), CaffeParserBase::ParseConcatLayer(), CaffeParserBase::ParseConvLayer(), CaffeParserBase::ParseDropoutLayer(), CaffeParserBase::ParseEltwiseLayer(), CaffeParserBase::ParseInnerProductLayer(), CaffeParserBase::ParseLRNLayer(), CaffeParserBase::ParsePoolingLayer(), CaffeParserBase::ParseReluLayer(), CaffeParserBase::ParseScaleLayer(), CaffeParserBase::ParseSoftmaxLayer(), and CaffeParserBase::ParseSplitLayer().

1535 {
1536  auto it = m_ArmnnOutputSlotForCaffeTop.find(caffeTopName);
1537  if (it != m_ArmnnOutputSlotForCaffeTop.end())
1538  {
1539  return *it->second;
1540  }
1541  else
1542  {
1543  throw ParseException(
1544  boost::str(
1545  boost::format(
1546  "Could not find armnn output slot for Caffe top '%1%' %2%") %
1547  caffeTopName %
1548  CHECK_LOCATION().AsString()));
1549  }
1550 }

◆ GetBindingInfo()

std::pair< armnn::LayerBindingId, armnn::TensorInfo > GetBindingInfo ( const std::string &  layerName,
const char *  bindingPointDesc,
const std::unordered_map< std::string, BindingPointInfo > &  nameToBindingInfo 
)
staticprotected

Definition at line 297 of file CaffeParser.cpp.

References CHECK_LOCATION.

Referenced by CaffeParserBase::GetNetworkInputBindingInfo(), and CaffeParserBase::GetNetworkOutputBindingInfo().

300 {
301  auto it = nameToBindingInfo.find(layerName);
302  if (it == nameToBindingInfo.end())
303  {
304  throw ParseException(
305  boost::str(
306  boost::format(
307  "Unknown binding %1% for layer '%2%'. %3%") %
308  bindingPointDesc %
309  layerName %
310  CHECK_LOCATION().AsString()));
311  }
312  return it->second;
313 }

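The lookup-or-throw pattern used by GetBindingInfo() can be sketched without any ArmNN or Boost dependency. FindBindingInfo and the BindingInfo alias below are hypothetical stand-ins for the real helper and its (armnn::LayerBindingId, armnn::TensorInfo) pair:

```cpp
#include <cassert>
#include <stdexcept>
#include <string>
#include <unordered_map>
#include <utility>

// Hypothetical stand-in for the (armnn::LayerBindingId, armnn::TensorInfo) pair.
using BindingInfo = std::pair<int, std::string>;

// Mirrors the shape of CaffeParserBase::GetBindingInfo: return the entry
// registered for layerName, or throw naming the binding point when absent.
BindingInfo FindBindingInfo(const std::string& layerName,
                            const char* bindingPointDesc,
                            const std::unordered_map<std::string, BindingInfo>& bindingInfos)
{
    auto it = bindingInfos.find(layerName);
    if (it == bindingInfos.end())
    {
        throw std::runtime_error("Unknown binding " + std::string(bindingPointDesc) +
                                 " for layer '" + layerName + "'");
    }
    return it->second;
}
```

GetNetworkInputBindingInfo() and GetNetworkOutputBindingInfo() below are thin wrappers over this kind of lookup, passing the input and output maps respectively.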
◆ GetInputs()

vector< const LayerParameter * > GetInputs ( const caffe::LayerParameter &  layerParam)
protected

Finds the Caffe layers listed as inputs (bottoms) for a given layer.

Definition at line 340 of file CaffeParser.cpp.

References CHECK_LOCATION, and CaffeParserBase::m_CaffeLayersByTopName.

Referenced by CaffeParserBase::LoadNetParam().

341 {
342  std::vector<const caffe::LayerParameter*> ret;
343  ret.reserve(boost::numeric_cast<size_t>(layerParam.bottom_size()));
344  for (int j = 0; j < layerParam.bottom_size(); ++j)
345  {
346  std::string inputName = layerParam.bottom(j);
347  auto inputIt = m_CaffeLayersByTopName.find(inputName);
348  if (inputIt == m_CaffeLayersByTopName.end())
349  {
350  throw ParseException(
351  boost::str(
352  boost::format(
353  "Can't find Caffe layer with top called '%1%', "
354  "which is listed as an input of '%2%'. %3%") %
355  inputName %
356  layerParam.name() %
357  CHECK_LOCATION().AsString()));
358  }
359  ret.push_back(inputIt->second);
360  }
361 
362  return ret;
363 }
std::map< std::string, const caffe::LayerParameter * > m_CaffeLayersByTopName

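The bottom-resolution logic above can be exercised standalone. Layer and GetInputLayers below are hypothetical simplifications of caffe::LayerParameter and CaffeParserBase::GetInputs, using only std types:

```cpp
#include <cassert>
#include <map>
#include <stdexcept>
#include <string>
#include <vector>

// Minimal stand-in for caffe::LayerParameter: a layer has a name,
// a list of bottoms (inputs) and a list of tops (outputs).
struct Layer
{
    std::string name;
    std::vector<std::string> bottoms;
    std::vector<std::string> tops;
};

// Mirrors CaffeParserBase::GetInputs: resolve each bottom name to the layer
// that produced it via the top-name lookup, throwing when none is found.
std::vector<const Layer*> GetInputLayers(const Layer& layer,
                                         const std::map<std::string, const Layer*>& layersByTopName)
{
    std::vector<const Layer*> ret;
    ret.reserve(layer.bottoms.size());
    for (const std::string& inputName : layer.bottoms)
    {
        auto it = layersByTopName.find(inputName);
        if (it == layersByTopName.end())
        {
            throw std::runtime_error("Can't find layer with top '" + inputName +
                                     "', listed as an input of '" + layer.name + "'");
        }
        ret.push_back(it->second);
    }
    return ret;
}
```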
◆ GetNetworkInputBindingInfo()

BindingPointInfo GetNetworkInputBindingInfo ( const std::string &  name) const
overridevirtual

Retrieves binding info (layer id and tensor info) for the network input identified by the given layer name.

Implements ICaffeParser.

Definition at line 287 of file CaffeParser.cpp.

References CaffeParserBase::GetBindingInfo(), and CaffeParserBase::m_NetworkInputsBindingInfo.

288 {
289  return GetBindingInfo(name, "input", m_NetworkInputsBindingInfo);
290 }
static std::pair< armnn::LayerBindingId, armnn::TensorInfo > GetBindingInfo(const std::string &layerName, const char *bindingPointDesc, const std::unordered_map< std::string, BindingPointInfo > &bindingInfos)
std::unordered_map< std::string, BindingPointInfo > m_NetworkInputsBindingInfo
maps input layer names to their corresponding ids and tensor infos

◆ GetNetworkOutputBindingInfo()

BindingPointInfo GetNetworkOutputBindingInfo ( const std::string &  name) const
overridevirtual

Retrieves binding info (layer id and tensor info) for the network output identified by the given layer name.

Implements ICaffeParser.

Definition at line 292 of file CaffeParser.cpp.

References CaffeParserBase::GetBindingInfo(), and CaffeParserBase::m_NetworkOutputsBindingInfo.

293 {
294  return GetBindingInfo(name, "output", m_NetworkOutputsBindingInfo);
295 }
std::unordered_map< std::string, BindingPointInfo > m_NetworkOutputsBindingInfo
maps output layer names to their corresponding ids and tensor infos

◆ LoadNetParam()

void LoadNetParam ( caffe::NetParameter &  netParameter)
protected

Performs the actual conversion from caffe::NetParameter to armnn::INetwork.

Definition at line 1633 of file CaffeParser.cpp.

References CHECK_LOCATION, IOutputSlot::Connect(), CaffeParserBase::GetArmnnOutputSlotForCaffeTop(), CaffeParserBase::GetInputs(), CaffeParserBase::m_CaffeLayersByTopName, CaffeParserBase::m_Network, CaffeParserBase::m_NetworkOutputsBindingInfo, CaffeParserBase::m_RequestedOutputs, CaffeParserBase::ms_CaffeLayerNameToParsingFunctions, armnn::numeric_cast(), CaffeParserBase::ResolveInPlaceLayers(), and CaffeParserBase::TrackOutputBinding().

Referenced by CaffeParserBase::CreateNetworkFromNetParameter().

1634 {
1635  // Caffe models sometimes have an implicit input layer.
1636  // In that case, add an explicit one.
1637  if (netParameter.input_size() > 0)
1638  {
1639  LayerParameter* newLayer = netParameter.add_layer();
1640 
1641  newLayer->set_type("Input");
1642  newLayer->set_name(netParameter.input(0));
1643  newLayer->add_top(netParameter.input(0));
1644 
1645  InputParameter* inputParam = newLayer->mutable_input_param();
1646  BlobShape* shape = inputParam->add_shape();
1647 
1648  int dim_size = netParameter.input_dim_size();
1649  for (int i = 0; i < dim_size; ++i)
1650  {
1651  shape->add_dim(netParameter.input_dim(i));
1652  }
1653  }
1654 
1655  // Replaces in-place layers with regular ones to make the rest of the parsing easier.
1656  ResolveInPlaceLayers(netParameter);
1657 
1658  // Creates a lookup of Caffe layers by name.
1659  for (int i = 0; i < netParameter.layer_size(); ++i)
1660  {
1661  const caffe::LayerParameter& layer = netParameter.layer(i);
1662  for (int i = 0; i < layer.top_size(); ++i)
1663  {
1664  m_CaffeLayersByTopName[layer.top(i)] = &layer;
1665  }
1666  }
1667 
1668  // Finds the output layers the user requested.
1669  std::vector<const caffe::LayerParameter*> targetLayers;
1670  for (const std::string& requestedOutputName : m_RequestedOutputs)
1671  {
1672  auto nodeIt = m_CaffeLayersByTopName.find(requestedOutputName);
1673  if (nodeIt == m_CaffeLayersByTopName.end())
1674  {
1675  throw ParseException(
1676  boost::str(
1677  boost::format(
1678  "Couldn't find requested output layer '%1%' in graph %2%") %
1679  requestedOutputName %
1680  CHECK_LOCATION().AsString()));
1681  }
1682  targetLayers.push_back(nodeIt->second);
1683  }
1684 
1685  // Sorts them into a linear ordering such that all inputs of a node are before the node itself.
1686  std::vector<const caffe::LayerParameter*> sortedNodes;
1687  if (!armnnUtils::GraphTopologicalSort<const caffe::LayerParameter*>(
1688  targetLayers,
1689  [this](const caffe::LayerParameter* node)
1690  {
1691  return GetInputs(*node);
1692  },
1693  sortedNodes))
1694  {
1695  throw ParseException(
1696  boost::str(
1697  boost::format(
1698  "Cycle detected in graph. #nodes: %1% %2%") %
1699  sortedNodes.size() %
1700  CHECK_LOCATION().AsString()));
1701  }
1702 
1703  // Parses each node in order, knowing that all inputs of a node will be processed before the node itself.
1704  for (const caffe::LayerParameter* current : sortedNodes)
1705  {
1706  auto it = ms_CaffeLayerNameToParsingFunctions.find(current->type());
1707  if (it == ms_CaffeLayerNameToParsingFunctions.end())
1708  {
1709  throw ParseException(
1710  boost::str(
1711  boost::format("Unsupported layer type: '%1%' for layer %2% %3%") %
1712  current->type() %
1713  current->name() %
1714  CHECK_LOCATION().AsString()));
1715  }
1716  auto func = it->second;
1717  (this->*func)(*current);
1718  }
1719 
1720  // Adds ArmNN output layers connected to each requested output.
1721  for (const std::string& requestedOutput : m_RequestedOutputs)
1722  {
1723  armnn::IOutputSlot& outputSlot = GetArmnnOutputSlotForCaffeTop(requestedOutput);
1724 
1725  const armnn::LayerBindingId outputId = armnn::numeric_cast<armnn::LayerBindingId>(
1726  m_NetworkOutputsBindingInfo.size());
1727  armnn::IConnectableLayer* const outputLayer = m_Network->AddOutputLayer(outputId, requestedOutput.c_str());
1728  outputSlot.Connect(outputLayer->GetInputSlot(0));
1729 
1730  TrackOutputBinding(outputLayer, outputId, outputLayer->GetInputSlot(0).GetConnection()->GetTensorInfo());
1731  }
1732 }
Interface for a layer that is connectable to other layers via InputSlots and OutputSlots.
Definition: INetwork.hpp:61
static const std::map< std::string, OperationParsingFunction > ms_CaffeLayerNameToParsingFunctions
Maps Caffe layer names to parsing member functions.
std::vector< std::string > m_RequestedOutputs
int LayerBindingId
Type of identifiers for bindable layers (inputs, outputs).
Definition: Types.hpp:171
void TrackOutputBinding(armnn::IConnectableLayer *layer, armnn::LayerBindingId id, const armnn::TensorInfo &tensorInfo)
void ResolveInPlaceLayers(caffe::NetParameter &netParameter)
Modifies the Caffe network to replace "in-place" layers (whose top() and bottom() are both the same) ...
An output connection slot for a layer.
Definition: INetwork.hpp:37
std::enable_if_t< std::is_unsigned< Source >::value &&std::is_unsigned< Dest >::value, Dest > numeric_cast(Source source)
Definition: NumericCast.hpp:33
virtual int Connect(IInputSlot &destination)=0

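LoadNetParam() below relies on armnnUtils::GraphTopologicalSort to order layers so that every input is parsed before its consumer, and treats a sort failure as a cycle. A minimal DFS version of the same idea (TopologicalSort is a hypothetical sketch over int node ids, not the ArmNN utility):

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <vector>

// DFS topological sort in the spirit of armnnUtils::GraphTopologicalSort:
// starting from the target nodes, emit every node after all of its inputs.
// Returns false when a back edge (cycle) is found.
bool TopologicalSort(const std::vector<int>& targets,
                     const std::function<std::vector<int>(int)>& getInputs,
                     std::vector<int>& sorted)
{
    std::map<int, int> state; // 0 = unvisited, 1 = in progress, 2 = done
    std::function<bool(int)> visit = [&](int node) -> bool
    {
        if (state[node] == 2) { return true; }   // already emitted
        if (state[node] == 1) { return false; }  // back edge -> cycle
        state[node] = 1;
        for (int input : getInputs(node))
        {
            if (!visit(input)) { return false; }
        }
        state[node] = 2;
        sorted.push_back(node);
        return true;
    };
    for (int target : targets)
    {
        if (!visit(target)) { return false; }
    }
    return true;
}
```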
◆ ParseBatchNormLayer()

void ParseBatchNormLayer ( const caffe::LayerParameter &  layerParam)
protected

Definition at line 1339 of file CaffeParser.cpp.

References CHECK_LOCATION, Connect(), armnn::Float32, CaffeParserBase::GetArmnnOutputSlotForCaffeTop(), IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), TensorInfo::GetShape(), armnnUtils::GetTensorInfo(), BatchNormalizationDescriptor::m_Eps, CaffeParserBase::m_Network, CaffeParserBase::SetArmnnOutputSlotForCaffeTop(), and IOutputSlot::SetTensorInfo().

1340 {
1341  ValidateNumInputsOutputs(layerParam, 1, 1);
1342 
1343  const TensorInfo& inputInfo = GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).GetTensorInfo();
1344 
1345  string name = layerParam.name();
1346 
1347  BatchNormParameter param = layerParam.batch_norm_param();
1348  // If use_global_stats is not explicitly set in the model, assume it to be true (its default value
1349  // when the network is in the testing phase).
1350  if (param.has_use_global_stats())
1351  {
1352  if (!param.use_global_stats())
1353  {
1354  throw ParseException(
1355  boost::str(
1356  boost::format(
1357  "Error parsing Batch Norm layer '%1%': "
1358  "Parameter 'use_global_stats' is set to false, which is "
1359  "unsupported (value used for training). %2%") %
1360  name %
1361  CHECK_LOCATION().AsString()));
1362  }
1363  }
1364 
1365  BatchNormalizationDescriptor desc;
1366  desc.m_Eps = param.eps();
1367 
1368  unsigned int channels = inputInfo.GetShape()[1];
1369  unsigned int shape[] = {channels};
1370 
1371  vector<float> meanData(channels);
1372  GetDataFromBlob(layerParam, meanData, 0);
1373 
1374  vector<float> varianceData(channels);
1375  GetDataFromBlob(layerParam, varianceData, 1);
1376 
1377  // Reads moving average factor and applies scaling (if required).
1378  const BlobProto& blob = layerParam.blobs(boost::numeric_cast<int>(2));
1379  const float movingAverageFactor = blob.data(boost::numeric_cast<int>(0));
1380  if(movingAverageFactor != 0.0f)
1381  {
1382  const float scaleFactor = 1.0f / movingAverageFactor;
1383  auto scaleFunction = [scaleFactor](float f) -> float { return f * scaleFactor; };
1384 
1385  std::transform(varianceData.begin(), varianceData.end(), varianceData.begin(), scaleFunction);
1386  std::transform(meanData.begin(), meanData.end(), meanData.begin(), scaleFunction);
1387  }
1388 
1389  // Identifies scale operation.
1390  vector<float> betaData(channels, 0.0f);
1391  vector<float> gammaData(channels, 1.0f);
1392 
1393  ConstTensor mean(TensorInfo(1, shape, armnn::DataType::Float32), meanData);
1394  ConstTensor variance(TensorInfo(1, shape, armnn::DataType::Float32), varianceData);
1395  ConstTensor beta(TensorInfo(1, shape, armnn::DataType::Float32), betaData);
1396  ConstTensor gamma(TensorInfo(1, shape, armnn::DataType::Float32), gammaData);
1397 
1398  armnn::IConnectableLayer* const batchNormLayer = m_Network->AddBatchNormalizationLayer(desc,
1399  mean, variance, beta, gamma, name.c_str());
1400  GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).Connect(batchNormLayer->GetInputSlot(0));
1401  batchNormLayer->GetOutputSlot(0).SetTensorInfo(inputInfo);
1402  SetArmnnOutputSlotForCaffeTop(layerParam.top(0), batchNormLayer->GetOutputSlot(0));
1403 }
const TensorShape & GetShape() const
Definition: Tensor.hpp:88
float m_Eps
Value to add to the variance. Used to avoid dividing by zero.
virtual void SetTensorInfo(const TensorInfo &tensorInfo)=0
A tensor defined by a TensorInfo (shape and data type) and an immutable backing store.
Definition: Tensor.hpp:199
virtual const IInputSlot & GetInputSlot(unsigned int index) const =0
Get a const input slot handle by slot index.
armnn::TensorInfo GetTensorInfo(unsigned int numberOfBatches, unsigned int numberOfChannels, unsigned int height, unsigned int width, const armnn::DataLayout dataLayout, const armnn::DataType dataType)
Definition: TensorUtils.cpp:38
virtual const IOutputSlot & GetOutputSlot(unsigned int index) const =0
Get the const output slot handle by slot index.
void Connect(armnn::IConnectableLayer *from, armnn::IConnectableLayer *to, const armnn::TensorInfo &tensorInfo, unsigned int fromIndex, unsigned int toIndex)
Definition: TestUtils.cpp:12
void SetArmnnOutputSlotForCaffeTop(const std::string &caffeTopName, armnn::IOutputSlot &armnnOutputSlot)
A BatchNormalizationDescriptor for the BatchNormalizationLayer.

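The moving-average handling in ParseBatchNormLayer() above (blob index 2) rescales both the running mean and variance by the reciprocal of the stored factor before building the constant tensors. A standalone sketch of that step (NormaliseBatchNormStats is a hypothetical helper, not part of the parser):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Mirrors the moving-average handling in ParseBatchNormLayer: Caffe stores the
// running statistics unnormalised, so when the third blob holds a non-zero
// moving-average factor, mean and variance are both multiplied by 1/factor.
void NormaliseBatchNormStats(std::vector<float>& mean,
                             std::vector<float>& variance,
                             float movingAverageFactor)
{
    if (movingAverageFactor != 0.0f)
    {
        const float scaleFactor = 1.0f / movingAverageFactor;
        auto scale = [scaleFactor](float f) { return f * scaleFactor; };
        std::transform(mean.begin(), mean.end(), mean.begin(), scale);
        std::transform(variance.begin(), variance.end(), variance.begin(), scale);
    }
}
```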
◆ ParseConcatLayer()

void ParseConcatLayer ( const caffe::LayerParameter &  layerParam)
protected

Definition at line 1278 of file CaffeParser.cpp.

References CHECK_LOCATION, IOutputSlot::Connect(), CaffeParserBase::GetArmnnOutputSlotForCaffeTop(), IConnectableLayer::GetInputSlot(), TensorInfo::GetNumDimensions(), IConnectableLayer::GetOutputSlot(), TensorInfo::GetShape(), armnnUtils::GetTensorInfo(), CaffeParserBase::m_Network, CaffeParserBase::SetArmnnOutputSlotForCaffeTop(), IOutputSlot::SetTensorInfo(), and OriginsDescriptor::SetViewOriginCoord().

1279 {
1280  unsigned int numInputs = static_cast<unsigned int>(layerParam.bottom_size());
1281  // We assume concat happens along the channel dimension, which is 1 in (0, 1, 2, 3).
1282  unsigned int concatDim = 1;
1283  unsigned int numOfDims = 4;
1284 
1285  // we only consider 4-D tensor here
1286  OriginsDescriptor concatDescriptor(static_cast<uint32_t>(numInputs), numOfDims);
1287  std::vector<unsigned int>mergeDimSizes(numOfDims, 0u);
1288 
1289  unsigned int mergeDim = 0;
1290  for (unsigned int viewIndex = 0; viewIndex < numInputs; ++viewIndex)
1291  {
1292  const TensorInfo& inputInfo = GetArmnnOutputSlotForCaffeTop(
1293  layerParam.bottom(boost::numeric_cast<int>(viewIndex))).GetTensorInfo();
1294  // Checks whether the dimensions of the input tensors are actually 4.
1295  if (inputInfo.GetNumDimensions()!=4)
1296  {
1297  throw ParseException(
1298  boost::str(
1299  boost::format(
1300  "The number of dimensions for input tensors of "
1301  "the concatenation op should be 4. Inputs of %1% has "
1302  "%2% dimensions. %3%") %
1303  layerParam.name() %
1304  inputInfo.GetNumDimensions() %
1305  CHECK_LOCATION().AsString()));
1306  }
1307 
1308  mergeDimSizes[0] = inputInfo.GetShape()[0];
1309  mergeDimSizes[1] = inputInfo.GetShape()[1];
1310  mergeDimSizes[2] = inputInfo.GetShape()[2];
1311  mergeDimSizes[3] = inputInfo.GetShape()[3];
1312 
1313  for (unsigned int j = 0; j < concatDim; ++j)
1314  {
1315  concatDescriptor.SetViewOriginCoord(viewIndex, j, 0);
1316  }
1317 
1318  concatDescriptor.SetViewOriginCoord(viewIndex, concatDim, mergeDim);
1319  mergeDim += mergeDimSizes[concatDim];
1320 
1321  for (unsigned int j = concatDim+1; j < numOfDims; ++j)
1322  {
1323  concatDescriptor.SetViewOriginCoord(viewIndex, j, 0);
1324  }
1325  }
1326  mergeDimSizes[concatDim] = mergeDim;
1327 
1328  armnn::IConnectableLayer* concatlayer = m_Network->AddConcatLayer(concatDescriptor, layerParam.name().c_str());
1329  for (unsigned int i = 0; i < numInputs; ++i)
1330  {
1331  armnn::IOutputSlot& outputSlot = GetArmnnOutputSlotForCaffeTop(layerParam.bottom(boost::numeric_cast<int>(i)));
1332  outputSlot.Connect(concatlayer->GetInputSlot(i));
1333  }
1334 
1335  concatlayer->GetOutputSlot(0).SetTensorInfo(armnn::TensorInfo(numOfDims, mergeDimSizes.data(), DataType::Float32));
1336  SetArmnnOutputSlotForCaffeTop(layerParam.top(0), concatlayer->GetOutputSlot(0));
1337 }
An OriginsDescriptor for the ConcatLayer.
unsigned int GetNumDimensions() const
Definition: Tensor.hpp:92

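The view-origin bookkeeping in ParseConcatLayer() above accumulates each input's channel count into the origin of the next view along the concatenation axis. A standalone sketch of that accumulation (ComputeConcatOrigins is a hypothetical helper):

```cpp
#include <cassert>
#include <vector>

// Mirrors the origin computation in ParseConcatLayer: for concatenation along
// the channel axis, view i starts at the sum of the channel counts of views
// 0..i-1, and the output channel count is the total.
std::vector<unsigned int> ComputeConcatOrigins(const std::vector<unsigned int>& channelCounts,
                                               unsigned int& totalChannels)
{
    std::vector<unsigned int> origins;
    origins.reserve(channelCounts.size());
    unsigned int mergeDim = 0;
    for (unsigned int channels : channelCounts)
    {
        origins.push_back(mergeDim);
        mergeDim += channels;
    }
    totalChannels = mergeDim;
    return origins;
}
```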
◆ ParseConvLayer()

void ParseConvLayer ( const caffe::LayerParameter &  layerParam)
protected

Definition at line 702 of file CaffeParser.cpp.

References CaffeParserBase::AddConvLayerWithDepthwiseConv(), CaffeParserBase::AddConvLayerWithSplits(), ARMNN_ASSERT, CaffeParserBase::BlobShapeToTensorInfo(), CHECK_LOCATION, IOutputSlot::Connect(), GET_OPTIONAL_WITH_VECTOR_FALLBACK, CaffeParserBase::GetArmnnOutputSlotForCaffeTop(), armnnUtils::GetTensorInfo(), Convolution2dDescriptor::m_BiasEnabled, CaffeParserBase::m_Network, Convolution2dDescriptor::m_PadBottom, Convolution2dDescriptor::m_PadLeft, Convolution2dDescriptor::m_PadRight, Convolution2dDescriptor::m_PadTop, Convolution2dDescriptor::m_StrideX, Convolution2dDescriptor::m_StrideY, CaffeParserBase::SetArmnnOutputSlotForCaffeTop(), and armnnCaffeParser::TensorDescToBlobShape().

703 {
704  // Ignored Caffe Parameters
705  // * Dilation Size
706  // * Weight Filler
707  // * Bias Filler
708  // * Engine
709  // * Force nd_im2col
710  // * Axis
711 
712  // Not Available ArmNN Interface Parameters
713  // * Rounding policy;
714 
715  ARMNN_ASSERT(layerParam.type() == "Convolution");
716  ValidateNumInputsOutputs(layerParam, 1, 1);
717 
718  ConvolutionParameter convParam = layerParam.convolution_param();
719  BlobShape inputShape = TensorDescToBlobShape(GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).GetTensorInfo());
720  const unsigned int numGroups = convParam.has_group() ? convParam.group() : 1;
721  unsigned int numFilters = convParam.num_output();
722 
723  const auto notFound = std::numeric_limits<unsigned int>::max();
724 
725  unsigned int kernelH = GET_OPTIONAL_WITH_VECTOR_FALLBACK(convParam, ConvolutionParameter,
726  kernel_h, kernel_size, unsigned int, notFound);
727  unsigned int kernelW = GET_OPTIONAL_WITH_VECTOR_FALLBACK(convParam, ConvolutionParameter,
728  kernel_w, kernel_size, unsigned int, notFound);
729 
730  unsigned int strideH = GET_OPTIONAL_WITH_VECTOR_FALLBACK(convParam, ConvolutionParameter,
731  stride_h, stride, unsigned int, 1u);
732  unsigned int strideW = GET_OPTIONAL_WITH_VECTOR_FALLBACK(convParam, ConvolutionParameter,
733  stride_w, stride, unsigned int, 1u);
734 
735  unsigned int padH = GET_OPTIONAL_WITH_VECTOR_FALLBACK(convParam, ConvolutionParameter,
736  pad_h, pad, unsigned int, 0u);
737  unsigned int padW = GET_OPTIONAL_WITH_VECTOR_FALLBACK(convParam, ConvolutionParameter,
738  pad_w, pad, unsigned int, 0u);
739 
740  Convolution2dDescriptor convolution2dDescriptor;
741  convolution2dDescriptor.m_PadLeft = padW;
742  convolution2dDescriptor.m_PadRight = padW;
743  convolution2dDescriptor.m_PadTop = padH;
744  convolution2dDescriptor.m_PadBottom = padH;
745  convolution2dDescriptor.m_StrideX = strideW;
746  convolution2dDescriptor.m_StrideY = strideH;
747  convolution2dDescriptor.m_BiasEnabled = convParam.has_bias_term() ? convParam.bias_term() : true;
748 
749  if (numGroups > numFilters)
750  {
751  throw ParseException(
752  boost::str(
753  boost::format(
754  "Error parsing Convolution: %1%. "
755  "The 'group'=%2% parameter cannot be larger than the "
756  "number of filters supplied ='%3%'. %4%") %
757  layerParam.name() %
758  numGroups %
759  numFilters %
760  CHECK_LOCATION().AsString()));
761  }
762 
763  if (inputShape.dim_size() != 4)
764  {
765  throw ParseException(
766  boost::str(
767  boost::format(
768  "Convolution input shape is expected to have 4 dimensions. "
769  "%1%'s input has only %2%. %3%") %
770  layerParam.name() %
771  inputShape.dim_size() %
772  CHECK_LOCATION().AsString()));
773  }
774 
775  if (numGroups > 1)
776  {
777  if (numGroups > inputShape.dim(1))
778  {
779  throw ParseException(
780  boost::str(
781  boost::format(
782  "Error parsing Convolution: %1%. "
783  "The 'group'=%2% parameter cannot be larger than the "
784  "channel of the input shape=%3% (in NCHW format). %4%") %
785  layerParam.name() %
786  numGroups %
787  inputShape.dim(1) %
788  CHECK_LOCATION().AsString()));
789  }
790  else if (numGroups == inputShape.dim(1))
791  {
792  // we use a depthwise convolution here, because the number of groups equals to the
793  // input channels
794  AddConvLayerWithDepthwiseConv(layerParam, convolution2dDescriptor, kernelW, kernelH);
795  return;
796  }
797  else
798  {
799  // we split the input by channels into channels/groups separate convolutions
800  // and concatenate the results afterwards
801  AddConvLayerWithSplits(layerParam, convolution2dDescriptor, kernelW, kernelH);
802  return;
803  }
804  }
805 
806  // NOTE: at this point we only need to handle #group=1 case, all other cases should be
807  // handled by the AddConvLayer* helpers
808 
809  // Populate convolution output tensor descriptor dimensions
810  BlobShape outputShape;
811  outputShape.add_dim(0);
812  outputShape.set_dim(0, inputShape.dim(0));
813  outputShape.add_dim(1);
814  outputShape.set_dim(1, numFilters);
815  outputShape.add_dim(2);
816  outputShape.set_dim(
817  2, (static_cast<int>(
818  static_cast<float>(inputShape.dim(2) + 2 * padH - kernelH) /
819  static_cast<float>(strideH)) + 1));
820  outputShape.add_dim(3);
821  outputShape.set_dim(
822  3, (static_cast<int>(
823  static_cast<float>(inputShape.dim(3) + 2 * padW - kernelW) /
824  static_cast<float>(strideW)) + 1));
825 
826  // Load the weight data for ALL groups
827  vector<float> weightData(boost::numeric_cast<size_t>(inputShape.dim(1) *
828  outputShape.dim(1) *
829  kernelH *
830  kernelW));
831  GetDataFromBlob(layerParam, weightData, 0);
832 
833  const unsigned int weightDimSizes[4] = {
834  static_cast<unsigned int>(outputShape.dim(1)), // output channels
835  static_cast<unsigned int>(inputShape.dim(1)), // input channels
836  kernelH,
837  kernelW};
838 
839  armnn::IConnectableLayer* returnLayer = nullptr;
840 
841  // Pull out the weights for this group from that loaded from the model file earlier
842  ConstTensor weights(TensorInfo(4, weightDimSizes, DataType::Float32), weightData.data());
843  Optional<ConstTensor> optionalBiases;
844  vector<float> biasData;
845  if (convolution2dDescriptor.m_BiasEnabled)
846  {
847  TensorInfo biasInfo;
848 
849  biasData.resize(boost::numeric_cast<size_t>(outputShape.dim(1)), 1.f);
850  GetDataFromBlob(layerParam, biasData, 1);
851 
852  const unsigned int biasDimSizes[1] = {static_cast<unsigned int>(outputShape.dim(1))};
853  biasInfo = TensorInfo(1, biasDimSizes, DataType::Float32);
854 
855  // Pull out the biases for this group from that loaded from the model file earlier
856  ConstTensor biases(biasInfo, biasData.data());
857  optionalBiases = Optional<ConstTensor>(biases);
858  }
859  returnLayer = m_Network->AddConvolution2dLayer(convolution2dDescriptor,
860  weights,
861  optionalBiases,
862  layerParam.name().c_str());
863 
864  armnn::IOutputSlot& inputConnection = GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0));
865  inputConnection.Connect(returnLayer->GetInputSlot(0));
866  returnLayer->GetOutputSlot(0).SetTensorInfo(BlobShapeToTensorInfo(outputShape));
867 
868  if (!returnLayer)
869  {
870  throw ParseException(
871  boost::str(
872  boost::format(
873  "Failed to create Convolution layer. "
874  "Layer=%1% #groups=%2% #filters=%3% %4%") %
875  layerParam.name() %
876  numGroups %
877  numFilters %
878  CHECK_LOCATION().AsString()));
879  }
880 
881  SetArmnnOutputSlotForCaffeTop(layerParam.top(0), returnLayer->GetOutputSlot(0));
882 }
uint32_t m_PadBottom
Padding bottom value in the height dimension.
bool m_BiasEnabled
Enable/disable bias.
A Convolution2dDescriptor for the Convolution2dLayer.
uint32_t m_PadRight
Padding right value in the width dimension.
uint32_t m_PadTop
Padding top value in the height dimension.
uint32_t m_StrideX
Stride value when proceeding through input for the width dimension.
#define ARMNN_ASSERT(COND)
Definition: Assert.hpp:14
armnn::TensorInfo BlobShapeToTensorInfo(const caffe::BlobShape &blobShape) const
Converts Caffe's protobuf tensor shape format to ArmNN's.
uint32_t m_StrideY
Stride value when proceeding through input for the height dimension.
void AddConvLayerWithSplits(const caffe::LayerParameter &layerParam, const armnn::Convolution2dDescriptor &desc, unsigned int kernelW, unsigned int kernelH)
ParseConv may use these helpers depending on the group parameter.
#define GET_OPTIONAL_WITH_VECTOR_FALLBACK(PARAM, PARAM_TYPE, OPTIONAL_VALUE, FALLBACK_VECTOR, VALUE_TYPE, DEFAULT_VALUE)
void AddConvLayerWithDepthwiseConv(const caffe::LayerParameter &layerParam, const armnn::Convolution2dDescriptor &desc, unsigned int kernelW, unsigned int kernelH)
uint32_t m_PadLeft
Padding left value in the width dimension.
BlobShape TensorDescToBlobShape(const TensorInfo &desc)

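The output height/width computed in ParseConvLayer() above follow the usual convolution formula, out = floor((in + 2*pad - kernel) / stride) + 1; integer division gives the same result as the float truncation in the source for non-negative operands. A standalone sketch (ConvOutputSize is a hypothetical helper, not part of the parser):

```cpp
#include <cassert>

// Mirrors the output-size computation in ParseConvLayer:
// out = floor((in + 2*pad - kernel) / stride) + 1, per spatial dimension.
unsigned int ConvOutputSize(unsigned int in, unsigned int pad,
                            unsigned int kernel, unsigned int stride)
{
    return (in + 2 * pad - kernel) / stride + 1;
}
```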
◆ ParseDropoutLayer()

void ParseDropoutLayer ( const caffe::LayerParameter &  layerParam)
protected

Definition at line 1478 of file CaffeParser.cpp.

References CHECK_LOCATION, CaffeParserBase::GetArmnnOutputSlotForCaffeTop(), and CaffeParserBase::SetArmnnOutputSlotForCaffeTop().

1479 {
1480  // Ignored for inference, so patch the single input to its single output.
1481  if (layerParam.bottom_size() != 1 || layerParam.top_size() != 1)
1482  {
1483  throw ParseException(
1484  boost::str(
1485  boost::format(
1486  "Dropout layer '%1%' should have exactly 1 bottom and 1 top. "
1487  "#bottoms=%2% #tops=%3% %4%") %
1488  layerParam.name() %
1489  layerParam.bottom_size() %
1490  layerParam.top_size() %
1491  CHECK_LOCATION().AsString()));
1492  }
1493  SetArmnnOutputSlotForCaffeTop(layerParam.top(0), GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)));
1494 }

◆ ParseEltwiseLayer()

void ParseEltwiseLayer ( const caffe::LayerParameter &  layerParam)
protected

Definition at line 1231 of file CaffeParser.cpp.

References CHECK_LOCATION, Connect(), CaffeParserBase::GetArmnnOutputSlotForCaffeTop(), IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), armnnUtils::GetTensorInfo(), CaffeParserBase::m_Network, CaffeParserBase::SetArmnnOutputSlotForCaffeTop(), and IOutputSlot::SetTensorInfo().

1232 {
1233  ValidateNumInputsOutputs(layerParam, 2, 1);
1234 
1235  const TensorInfo& inputInfo = GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).GetTensorInfo();
1236 
1237  // Ignored Caffe Parameters:
1238  // coeff
1239 
1240  EltwiseParameter_EltwiseOp operation = EltwiseParameter_EltwiseOp_SUM; // Defaults to sum as per caffe.
1241 
1242  if (layerParam.has_eltwise_param() && layerParam.eltwise_param().has_operation())
1243  {
1244  operation = layerParam.eltwise_param().operation();
1245  }
1246 
1247  armnn::IConnectableLayer* newLayer = nullptr;
1248  switch (operation)
1249  {
1250  case EltwiseParameter_EltwiseOp_SUM:
1251  {
1252  newLayer = m_Network->AddAdditionLayer(layerParam.name().c_str());
1253  break;
1254  }
1255  case EltwiseParameter_EltwiseOp_PROD:
1256  {
1257  newLayer = m_Network->AddMultiplicationLayer(layerParam.name().c_str());
1258  break;
1259  }
1260  default:
1261  {
1262  throw ParseException(
1263  boost::str(
1264  boost::format(
1265  "Unsupported operation %1% in Eltwise layer %2% %3%") %
1266  operation %
1267  layerParam.name() %
1268  CHECK_LOCATION().AsString()));
1269  }
1270  }
1271 
1272  GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).Connect(newLayer->GetInputSlot(0));
1273  GetArmnnOutputSlotForCaffeTop(layerParam.bottom(1)).Connect(newLayer->GetInputSlot(1));
1274  newLayer->GetOutputSlot(0).SetTensorInfo(inputInfo);
1275  SetArmnnOutputSlotForCaffeTop(layerParam.top(0), newLayer->GetOutputSlot(0));
1276 }

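The operation dispatch in ParseEltwiseLayer() above maps SUM (Caffe's default) to an addition layer and PROD to a multiplication layer, rejecting everything else. A standalone sketch that applies the same choice directly to data (EltwiseOp and Eltwise are hypothetical stand-ins for the Caffe enum and the ArmNN layers):

```cpp
#include <cassert>
#include <cstddef>
#include <stdexcept>
#include <vector>

// Hypothetical stand-in for caffe::EltwiseParameter_EltwiseOp.
enum class EltwiseOp { Sum, Prod };

// Mirrors the dispatch in ParseEltwiseLayer: SUM becomes elementwise addition,
// PROD elementwise multiplication, anything else is rejected. Assumes the two
// inputs have the same shape, as the parser does.
std::vector<float> Eltwise(EltwiseOp op,
                           const std::vector<float>& a,
                           const std::vector<float>& b)
{
    std::vector<float> out(a.size());
    for (std::size_t i = 0; i < a.size(); ++i)
    {
        switch (op)
        {
            case EltwiseOp::Sum:  out[i] = a[i] + b[i]; break;
            case EltwiseOp::Prod: out[i] = a[i] * b[i]; break;
            default: throw std::runtime_error("Unsupported Eltwise operation");
        }
    }
    return out;
}
```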
◆ ParseInnerProductLayer()

void ParseInnerProductLayer ( const caffe::LayerParameter &  layerParam)
protected

Definition at line 1135 of file CaffeParser.cpp.

References Connect(), CaffeParserBase::GetArmnnOutputSlotForCaffeTop(), IConnectableLayer::GetInputSlot(), TensorInfo::GetNumDimensions(), IConnectableLayer::GetOutputSlot(), TensorInfo::GetShape(), armnnUtils::GetTensorInfo(), FullyConnectedDescriptor::m_BiasEnabled, CaffeParserBase::m_Network, FullyConnectedDescriptor::m_TransposeWeightMatrix, CaffeParserBase::SetArmnnOutputSlotForCaffeTop(), and IOutputSlot::SetTensorInfo().

1136 {
1137  InnerProductParameter param = layerParam.inner_product_param();
1138 
1139  ValidateNumInputsOutputs(layerParam, 1, 1);
1140 
1141  unsigned int outputSize = param.num_output();
1142 
1143  // Ignored Caffe Parameters:
1144  // Weight Filler
1145  // Bias Filler
1146  // Engine
1147  // Axis
1148 
1149  FullyConnectedDescriptor tensorFullyConnectedDescriptor;
1150 
1151  if (param.has_transpose())
1152  {
1153  // If true, assumes transposed weights.
1154  tensorFullyConnectedDescriptor.m_TransposeWeightMatrix = param.transpose();
1155  }
1156  else
1157  {
1158  // Caffe defaults to transposed.
1159  tensorFullyConnectedDescriptor.m_TransposeWeightMatrix = true;
1160  }
1161 
1162  const TensorInfo& inputInfo = GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).GetTensorInfo();
1163 
1164  TensorInfo weightInfo;
1165  TensorInfo biasInfo;
1166 
1167  // Allows implicit flattening of extra dimensions.
1168  unsigned int inputSize = inputInfo.GetShape()[1];
1169  for (unsigned int i = 2; i < inputInfo.GetNumDimensions(); ++i)
1170  {
1171  inputSize *= inputInfo.GetShape()[i];
1172  }
1173 
1174  const float* weightDataPtr = GetArrayPtrFromBlob(layerParam, 0);
1175  const unsigned int swTD[2] = { outputSize, inputSize };
1176  ConstTensor weights(TensorInfo(2, swTD, DataType::Float32), weightDataPtr);
1177 
1178  tensorFullyConnectedDescriptor.m_BiasEnabled = true;
1179  // Todo: check whether bias enabled.
1180  armnn::IConnectableLayer* fullyConnectedLayer = nullptr;
1181  if (tensorFullyConnectedDescriptor.m_BiasEnabled)
1182  {
1183  // BIAS VALUE
1184  const float* biasDataPtr = GetArrayPtrFromBlob(layerParam, 1);
1185 
1186  const unsigned int sbTD[1] = { outputSize };
1187 
1188  ConstTensor biases(TensorInfo(1, sbTD, DataType::Float32), biasDataPtr);
1189 
1190  fullyConnectedLayer = m_Network->AddFullyConnectedLayer(tensorFullyConnectedDescriptor,
1191  weights,
1192  Optional<ConstTensor>(biases),
1193  layerParam.name().c_str());
1194  }
1195  else
1196  {
1197  fullyConnectedLayer = m_Network->AddFullyConnectedLayer(tensorFullyConnectedDescriptor,
1198  weights,
1199  EmptyOptional(),
1200  layerParam.name().c_str());
1201  }
1202 
1203  TensorInfo outputInfo({ inputInfo.GetShape()[0], outputSize }, DataType::Float32);
1204  GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).Connect(fullyConnectedLayer->GetInputSlot(0));
1205  fullyConnectedLayer->GetOutputSlot(0).SetTensorInfo(outputInfo);
1206  SetArmnnOutputSlotForCaffeTop(layerParam.top(0), fullyConnectedLayer->GetOutputSlot(0));
1207 }

◆ ParseInputLayer()

void ParseInputLayer ( const caffe::LayerParameter &  layerParam)
protected

Adds an ArmNN layer to m_Network, given a Caffe LayerParameter of the correct type, and records any newly created IOutputSlots using SetArmnnOutputSlotForCaffeTop().

Definition at line 365 of file CaffeParser.cpp.

References ARMNN_ASSERT, CaffeParserBase::BlobShapeToTensorInfo(), CHECK_LOCATION, CaffeParserBase::m_InputShapes, CaffeParserBase::m_Network, CaffeParserBase::m_NetworkInputsBindingInfo, armnn::numeric_cast(), CaffeParserBase::SetArmnnOutputSlotForCaffeTop(), TensorInfo::SetShape(), and CaffeParserBase::TrackInputBinding().

366 {
367  ARMNN_ASSERT(layerParam.type() == "Input");
368  ValidateNumInputsOutputs(layerParam, 0, 1);
369 
370  const InputParameter& param = layerParam.input_param();
371 
372  const armnn::LayerBindingId inputId =
373  boost::numeric_cast<armnn::LayerBindingId>(m_NetworkInputsBindingInfo.size());
374  armnn::IConnectableLayer* const inputLayer = m_Network->AddInputLayer(inputId, layerParam.name().c_str());
375 
376  // Decides the tensor info for this input. This can be specified in the Caffe network but can also
377  // be overriden by user input (m_inputShapes).
378  armnn::TensorInfo inputTensorInfo;
379 
380  const BlobShape* originalShape = param.shape_size() > 0 && param.shape(0).dim_size() > 0 ?
381  &param.shape(0) : nullptr;
382  if (originalShape)
383  {
384  inputTensorInfo = BlobShapeToTensorInfo(*originalShape);
385  }
386 
387  auto overrideIt = m_InputShapes.find(layerParam.name());
388  if (overrideIt != m_InputShapes.end())
389  {
390  const TensorShape& overrideShape = overrideIt->second;
391  if (originalShape &&
392  ( originalShape->dim(1) != overrideShape[1]
393  || originalShape->dim(2) != overrideShape[2]
394  || originalShape->dim(3) != overrideShape[3]))
395  {
396  throw ParseException(
397  boost::str(
398  boost::format(
399  "Parsed input shape for '%1%' is incompatible with the override provided. %2%") %
400  layerParam.name() %
401  CHECK_LOCATION().AsString()));
402  }
403  inputTensorInfo.SetShape(overrideShape);
404  }
405  else if (!originalShape)
406  {
407  throw ParseException(
408  boost::str(
409  boost::format(
410  "No input descriptor given for '%1%' and no input shape found in caffe model. %2%") %
411  layerParam.name() %
412  CHECK_LOCATION().AsString()));
413  }
414 
415  TrackInputBinding(inputLayer, inputId, inputTensorInfo);
416  inputLayer->GetOutputSlot(0).SetTensorInfo(inputTensorInfo);
417  SetArmnnOutputSlotForCaffeTop(layerParam.top(0), inputLayer->GetOutputSlot(0));
418 }

◆ ParseLRNLayer()

void ParseLRNLayer ( const caffe::LayerParameter &  layerParam)
protected

Definition at line 1028 of file CaffeParser.cpp.

References CHECK_LOCATION, Connect(), CaffeParserBase::GetArmnnOutputSlotForCaffeTop(), IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), armnnUtils::GetTensorInfo(), NormalizationDescriptor::m_Alpha, NormalizationDescriptor::m_Beta, NormalizationDescriptor::m_K, CaffeParserBase::m_Network, NormalizationDescriptor::m_NormChannelType, NormalizationDescriptor::m_NormMethodType, NormalizationDescriptor::m_NormSize, armnn::numeric_cast(), CaffeParserBase::SetArmnnOutputSlotForCaffeTop(), and IOutputSlot::SetTensorInfo().

1029 {
1030  ValidateNumInputsOutputs(layerParam, 1, 1);
1031 
1032  LRNParameter param = layerParam.lrn_param();
1033 
1034  const TensorInfo& inputInfo = GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).GetTensorInfo();
1035 
1036  // Ignored BATCH NORMALIZATION Caffe Parameters.
1037  // Ignored MVN Caffe Parameters.
1038  // Ignored LRN Caffe Parameters.
1039  // Engine
1040 
1041  NormalizationDescriptor normalizationDescriptor;
1042  if (param.has_norm_region())
1043  {
1044  LRNParameter_NormRegion n = param.norm_region();
1045  switch (n)
1046  {
1047  case LRNParameter_NormRegion_ACROSS_CHANNELS:
1048  {
1049  normalizationDescriptor.m_NormChannelType = NormalizationAlgorithmChannel::Across;
1050  break;
1051  }
1052  case LRNParameter_NormRegion_WITHIN_CHANNEL:
1053  {
1054  normalizationDescriptor.m_NormChannelType = NormalizationAlgorithmChannel::Within;
1055  break;
1056  }
1057  default:
1058  {
1059  throw ParseException(
1060  boost::str(
1061  boost::format(
1062  "Unknown region %1% for LRN layer %2% %3%") %
1063  n %
1064  layerParam.name() %
1065  CHECK_LOCATION().AsString()));
1066  }
1067  }
1068  }
1069  else
1070  {
1071  // Caffe defaults to normalization across channels.
1072  normalizationDescriptor.m_NormChannelType = NormalizationAlgorithmChannel::Across;
1073  }
1074 
1075  normalizationDescriptor.m_NormMethodType = NormalizationAlgorithmMethod::LocalBrightness;
1076  if (param.has_local_size())
1077  {
1078  normalizationDescriptor.m_NormSize = param.local_size();
1079  }
1080  else
1081  {
1082  throw ParseException(
1083  boost::str(
1084  boost::format(
1085  "local_size not defined for LRN layer %1% %2%") %
1086  layerParam.name() %
1087  CHECK_LOCATION().AsString()));
1088  }
1089 
1090  if (param.has_alpha())
1091  {
1092  normalizationDescriptor.m_Alpha = param.alpha();
1093  normalizationDescriptor.m_Alpha /= boost::numeric_cast<float>(param.local_size());
1094  }
1095  else
1096  {
1097  throw ParseException(
1098  boost::str(
1099  boost::format(
1100  "Alpha not defined for LRN layer %1% %2%") %
1101  layerParam.name() %
1102  CHECK_LOCATION().AsString()));
1103  }
1104  if (param.has_beta())
1105  {
1106  normalizationDescriptor.m_Beta = param.beta();
1107  }
1108  else
1109  {
1110  throw ParseException(
1111  boost::str(
1112  boost::format(
1113  "Beta not defined for LRN layer %1% %2%") %
1114  layerParam.name() %
1115  CHECK_LOCATION().AsString()));
1116  }
1117 
1118  if (param.has_k())
1119  {
1120  normalizationDescriptor.m_K = param.k();
1121  }
1122  else
1123  {
1124  normalizationDescriptor.m_K = 1;
1125  }
1126 
1127  IConnectableLayer* const normLayer = m_Network->AddNormalizationLayer(normalizationDescriptor,
1128  layerParam.name().c_str());
1129  GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).Connect(normLayer->GetInputSlot(0));
1130  normLayer->GetOutputSlot(0).SetTensorInfo(inputInfo);
1131 
1132  SetArmnnOutputSlotForCaffeTop(layerParam.top(0), normLayer->GetOutputSlot(0));
1133 }

◆ ParsePoolingLayer()

void ParsePoolingLayer ( const caffe::LayerParameter &  layerParam)
protected

Definition at line 884 of file CaffeParser.cpp.

References CHECK_LOCATION, Connect(), GET_OPTIONAL_WITH_FALLBACK, CaffeParserBase::GetArmnnOutputSlotForCaffeTop(), IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), TensorInfo::GetShape(), armnnUtils::GetTensorInfo(), CaffeParserBase::m_Network, Pooling2dDescriptor::m_OutputShapeRounding, Pooling2dDescriptor::m_PadBottom, Pooling2dDescriptor::m_PaddingMethod, Pooling2dDescriptor::m_PadLeft, Pooling2dDescriptor::m_PadRight, Pooling2dDescriptor::m_PadTop, Pooling2dDescriptor::m_PoolHeight, Pooling2dDescriptor::m_PoolType, Pooling2dDescriptor::m_PoolWidth, Pooling2dDescriptor::m_StrideX, Pooling2dDescriptor::m_StrideY, armnn::Max, CaffeParserBase::SetArmnnOutputSlotForCaffeTop(), and IOutputSlot::SetTensorInfo().

885 {
886  // Ignored Caffe Parameters
887  // Stochastic Pooling
888  // Engine
889 
890  ValidateNumInputsOutputs(layerParam, 1, 1);
891  PoolingParameter param = layerParam.pooling_param();
892  const TensorInfo& inputInfo = GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).GetTensorInfo();
893 
894  const auto notFound = std::numeric_limits<unsigned int>::max();
895 
896  unsigned int kernel_h = GET_OPTIONAL_WITH_FALLBACK(param, PoolingParameter,
897  kernel_h, kernel_size, unsigned int, notFound);
898  unsigned int kernel_w = GET_OPTIONAL_WITH_FALLBACK(param, PoolingParameter,
899  kernel_w, kernel_size, unsigned int, notFound);
900 
901  if ((kernel_h == notFound || kernel_w == notFound) && param.has_global_pooling())
902  {
903  kernel_h = inputInfo.GetShape()[2];
904  kernel_w = inputInfo.GetShape()[3];
905  }
906 
907  unsigned int stride_h = GET_OPTIONAL_WITH_FALLBACK(param, PoolingParameter,
908  stride_h, stride, unsigned int, notFound);
909  unsigned int stride_w = GET_OPTIONAL_WITH_FALLBACK(param, PoolingParameter,
910  stride_w, stride, unsigned int, notFound);
911 
912  if ((stride_h == notFound || stride_w == notFound) && param.has_global_pooling())
913  {
914  stride_h = 1;
915  stride_w = 1;
916  }
917 
918  unsigned int pad_h = GET_OPTIONAL_WITH_FALLBACK(param, PoolingParameter,
919  pad_h, pad, unsigned int, 0u);
920  unsigned int pad_w = GET_OPTIONAL_WITH_FALLBACK(param, PoolingParameter,
921  pad_w, pad, unsigned int, 0u);
922 
923  // Populate Weight and Bias Filter Descriptor
924  Pooling2dDescriptor pooling2dDescriptor;
925  if (param.has_pool())
926  {
927  PoolingParameter_PoolMethod p = param.pool();
928  switch (p)
929  {
930  case PoolingParameter_PoolMethod_MAX:
931  {
932  pooling2dDescriptor.m_PoolType = PoolingAlgorithm::Max;
933  break;
934  }
935  case PoolingParameter_PoolMethod_AVE:
936  {
937  pooling2dDescriptor.m_PoolType = PoolingAlgorithm::Average;
938  break;
939  }
940  case PoolingParameter_PoolMethod_STOCHASTIC:
941  {
942  throw ParseException(
943  boost::str(
944  boost::format(
945  "Pooling Layer: Stochastic Pooling Not Supported. Layer=%1% %2%") %
946  layerParam.name() %
947  CHECK_LOCATION().AsString()));
948  }
949  default:
950  {
951  throw ParseException(
952  boost::str(
953  boost::format(
954  "Pooling Layer: unknown pooling method: %1% for layer: %2% %3%") %
955  p %
956  layerParam.name() %
957  CHECK_LOCATION().AsString()));
958  }
959  }
960  }
961  else
962  {
963  throw ParseException(
964  boost::str(
965  boost::format(
966  "No Pooling Method Defined for %1% %2%") %
967  layerParam.name() %
968  CHECK_LOCATION().AsString()));
969  }
970 
971  pooling2dDescriptor.m_PadLeft = pad_w;
972  pooling2dDescriptor.m_PadRight = pad_w;
973  pooling2dDescriptor.m_PadTop = pad_h;
974  pooling2dDescriptor.m_PadBottom = pad_h;
975  pooling2dDescriptor.m_StrideX = stride_w;
976  pooling2dDescriptor.m_StrideY = stride_h;
977  pooling2dDescriptor.m_PoolWidth = kernel_w;
978  pooling2dDescriptor.m_PoolHeight = kernel_h;
979 
980  pooling2dDescriptor.m_OutputShapeRounding = OutputShapeRounding::Ceiling;
981  pooling2dDescriptor.m_PaddingMethod = PaddingMethod::IgnoreValue;
982 
983  armnn::IConnectableLayer* poolingLayer = m_Network->AddPooling2dLayer(pooling2dDescriptor,
984  layerParam.name().c_str());
985 
986  TensorInfo outputInfo(
987  { inputInfo.GetShape()[0],
988  inputInfo.GetShape()[1],
989  static_cast<unsigned int>(ceil(
990  static_cast<float>(inputInfo.GetShape()[2] + 2 * pad_h - kernel_h) /
991  boost::numeric_cast<float>(stride_h))) + 1,
992  static_cast<unsigned int>(ceil(
993  static_cast<float>(inputInfo.GetShape()[3] + 2 * pad_w - kernel_w) /
994  boost::numeric_cast<float>(stride_w))) + 1 },
995  DataType::Float32);
996 
997  GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).Connect(poolingLayer->GetInputSlot(0));
998  poolingLayer->GetOutputSlot(0).SetTensorInfo(outputInfo);
999  SetArmnnOutputSlotForCaffeTop(layerParam.top(0), poolingLayer->GetOutputSlot(0));
1000 }

◆ ParseReluLayer()

void ParseReluLayer ( const caffe::LayerParameter &  layerParam)
protected

Definition at line 1002 of file CaffeParser.cpp.

References Connect(), CaffeParserBase::GetArmnnOutputSlotForCaffeTop(), IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), armnnUtils::GetTensorInfo(), ActivationDescriptor::m_A, ActivationDescriptor::m_Function, CaffeParserBase::m_Network, CaffeParserBase::SetArmnnOutputSlotForCaffeTop(), and IOutputSlot::SetTensorInfo().

1003 {
1004  ValidateNumInputsOutputs(layerParam, 1, 1);
1005 
1006  const string& name = layerParam.name();
1007  const ReLUParameter& param = layerParam.relu_param();
1008 
1009  ActivationDescriptor activationDescriptor;
1010  const float negativeSlope = param.negative_slope();
1011  if (negativeSlope == 0.0f)
1012  {
1013  activationDescriptor.m_Function = ActivationFunction::ReLu;
1014  }
1015  else
1016  {
1017  activationDescriptor.m_Function = ActivationFunction::LeakyReLu;
1018  activationDescriptor.m_A = negativeSlope;
1019  }
1020 
1021  const TensorInfo& inputInfo = GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).GetTensorInfo();
1022  IConnectableLayer* const activationLayer = m_Network->AddActivationLayer(activationDescriptor, name.c_str());
1023  GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).Connect(activationLayer->GetInputSlot(0));
1024  activationLayer->GetOutputSlot(0).SetTensorInfo(inputInfo);
1025  SetArmnnOutputSlotForCaffeTop(layerParam.top(0), activationLayer->GetOutputSlot(0));
1026 }

◆ ParseScaleLayer()

void ParseScaleLayer ( const caffe::LayerParameter &  layerParam)
protected

Definition at line 1405 of file CaffeParser.cpp.

References CHECK_LOCATION, Connect(), armnn::Float32, CaffeParserBase::GetArmnnOutputSlotForCaffeTop(), IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), TensorInfo::GetShape(), armnnUtils::GetTensorInfo(), BatchNormalizationDescriptor::m_Eps, CaffeParserBase::m_Network, CaffeParserBase::SetArmnnOutputSlotForCaffeTop(), and IOutputSlot::SetTensorInfo().

1406 {
1407  // Current suboptimal solution: add a batch normalization layer with 0 mean and 1 variance.
1408  ValidateNumInputsOutputs(layerParam, 1, 1);
1409 
1410  const TensorInfo& inputInfo = GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).GetTensorInfo();
1411 
1412  string name = layerParam.name();
1413 
1414  ScaleParameter param = layerParam.scale_param();
1415  if (param.axis() != 1)
1416  {
1417  // Would have to use something other than BatchNormalizationLayer in this case
1418  throw ParseException(
1419  boost::str(
1420  boost::format(
1421  "Loading Scale Layer: Only axis 1 is supported currently. "
1422  "Layer=%1% Axis=%2% %3%") %
1423  layerParam.name() %
1424  param.axis() %
1425  CHECK_LOCATION().AsString()));
1426  }
1427 
1428  unsigned int channels = inputInfo.GetShape()[1];
1429  unsigned int shape[] = {channels};
1430 
1431  BatchNormalizationDescriptor desc;
1432  desc.m_Eps = 0.0f; // Don't need epsilon if variance is 1.
1433  vector<float> meanData(channels, 0.0f);
1434  vector<float> varianceData(channels, 1.0f);
1435  vector<float> betaData(channels, 0.0f);
1436  vector<float> gammaData(channels);
1437 
1438  GetDataFromBlob(layerParam, gammaData, 0);
1439 
1440  if(param.has_bias_term())
1441  {
1442  GetDataFromBlob(layerParam, betaData, 1);
1443  }
1444 
1445  ConstTensor mean(TensorInfo(1, shape, armnn::DataType::Float32), meanData);
1446  ConstTensor variance(TensorInfo(1, shape, armnn::DataType::Float32), varianceData);
1447  ConstTensor beta(TensorInfo(1, shape, armnn::DataType::Float32), betaData);
1448  ConstTensor gamma(TensorInfo(1, shape, armnn::DataType::Float32), gammaData);
1449 
1450  armnn::IConnectableLayer* const batchNormLayer = m_Network->AddBatchNormalizationLayer(desc,
1451  mean, variance, beta, gamma, name.c_str());
1452  GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).Connect(batchNormLayer->GetInputSlot(0));
1453  batchNormLayer->GetOutputSlot(0).SetTensorInfo(inputInfo);
1454  SetArmnnOutputSlotForCaffeTop(layerParam.top(0), batchNormLayer->GetOutputSlot(0));
1455 }

◆ ParseSoftmaxLayer()

void ParseSoftmaxLayer ( const caffe::LayerParameter &  layerParam)
protected

Definition at line 1209 of file CaffeParser.cpp.

References Connect(), CaffeParserBase::GetArmnnOutputSlotForCaffeTop(), IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), armnnUtils::GetTensorInfo(), SoftmaxDescriptor::m_Axis, CaffeParserBase::m_Network, CaffeParserBase::SetArmnnOutputSlotForCaffeTop(), and IOutputSlot::SetTensorInfo().

1210 {
1211  ValidateNumInputsOutputs(layerParam, 1, 1);
1212 
1213  SoftmaxParameter param = layerParam.softmax_param();
1214 
1215  const TensorInfo& inputInfo = GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).GetTensorInfo();
1216 
1217  // Ignored Caffe Parameters:
1218  // axis
1219  // Engine
1220 
1221  armnn::SoftmaxDescriptor softmaxDescriptor;
1222  softmaxDescriptor.m_Axis = 1;
1223  armnn::IConnectableLayer* const softmaxLayer = m_Network->AddSoftmaxLayer(
1224  softmaxDescriptor,
1225  layerParam.name().c_str());
1226  GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).Connect(softmaxLayer->GetInputSlot(0));
1227  softmaxLayer->GetOutputSlot(0).SetTensorInfo(inputInfo);
1228  SetArmnnOutputSlotForCaffeTop(layerParam.top(0), softmaxLayer->GetOutputSlot(0));
1229 }

◆ ParseSplitLayer()

void ParseSplitLayer ( const caffe::LayerParameter &  layerParam)
protected

Definition at line 1457 of file CaffeParser.cpp.

References CHECK_LOCATION, CaffeParserBase::GetArmnnOutputSlotForCaffeTop(), and CaffeParserBase::SetArmnnOutputSlotForCaffeTop().

1458 {
1459  // Used in caffe to duplicate memory - not necessary in armnn.
1460  if (layerParam.bottom_size() != 1)
1461  {
1462  throw ParseException(
1463  boost::str(
1464  boost::format(
1465  "Split layer '%1%' should have exactly 1 bottom. "
1466  "#bottoms=%2% %3%") %
1467  layerParam.name() %
1468  layerParam.bottom_size() %
1469  CHECK_LOCATION().AsString()));
1470  }
1471  armnn::IOutputSlot& outputSlot = GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0));
1472  for (int i = 0; i < layerParam.top_size(); i++)
1473  {
1474  SetArmnnOutputSlotForCaffeTop(layerParam.top(i), outputSlot);
1475  }
1476 }

◆ ResolveInPlaceLayers()

void ResolveInPlaceLayers ( caffe::NetParameter &  netParameter)
protected

Modifies the Caffe network to replace "in-place" layers (whose top() and bottom() are both the same) with regular layers.

This simplifies further parsing.

Definition at line 1573 of file CaffeParser.cpp.

References CHECK_LOCATION.

Referenced by RecordByRecordCaffeParser::CreateNetworkFromBinaryFile(), and CaffeParserBase::LoadNetParam().

{
    // Finds layers with the same top.
    std::map<std::string, std::vector<caffe::LayerParameter*>> layersByTop;
    for (int layerIdx = 0; layerIdx < netParameter.layer_size(); ++layerIdx)
    {
        caffe::LayerParameter& layer = *netParameter.mutable_layer(layerIdx);
        std::string name = layer.name();
        for (int i = 0; i < layer.top_size(); ++i)
        {
            layersByTop[layer.top(i)].push_back(&layer);
        }
    }

    // For each set of layers with the same top, resolves them to a linear chain rather than in-place layers.
    // Note that for 'regular' layers, there will be a single layer in each group and so this will be a no-op.
    for (auto layersWithSameTopIt : layersByTop)
    {
        const std::string& top = layersWithSameTopIt.first;
        const std::vector<caffe::LayerParameter*>& layersWithSameTop = layersWithSameTopIt.second;

        // Chains the layers together in the order that they are listed in the prototxt (hopefully this is correct).
        // Note that the last layer will not have its top modified so that other layers will continue to reference it.
        for (unsigned int layerIdx = 0; layerIdx < layersWithSameTop.size() - 1; ++layerIdx)
        {
            caffe::LayerParameter& layer1 = *layersWithSameTop[layerIdx];
            caffe::LayerParameter& layer2 = *layersWithSameTop[layerIdx+1];
            if (layer1.top_size() != 1)
            {
                throw ParseException(
                    boost::str(
                        boost::format(
                            "Node '%1%' is an in-place layer but doesn't have exactly one "
                            "top. It has %2% instead. %3%") %
                            layer1.name() %
                            layer1.top_size() %
                            CHECK_LOCATION().AsString()));
            }
            std::string newTop = layer1.name() + "_top";
            layer1.set_top(0, newTop);
            if (layer2.bottom_size() != 1 || layer2.bottom(0) != top)
            {
                throw ParseException(
                    boost::str(
                        boost::format(
                            "Node '%1%' is an in-place layer but "
                            "doesn't have exactly one bottom, or it doesn't match its top. "
                            "#bottoms=%2%, first bottom is %3%, top is %4% %5%") %
                            layer2.name() %
                            layer2.bottom_size() %
                            layer2.bottom(0) %
                            top %
                            CHECK_LOCATION().AsString()));
            }
            layer2.set_bottom(0, newTop);
        }
    }
}

◆ SetArmnnOutputSlotForCaffeTop()

void SetArmnnOutputSlotForCaffeTop(const std::string& caffeTopName, armnn::IOutputSlot& armnnOutputSlot)
protected

Definition at line 1552 of file CaffeParser.cpp.

References CHECK_LOCATION, and CaffeParserBase::m_ArmnnOutputSlotForCaffeTop.

Referenced by CaffeParserBase::AddConvLayerWithDepthwiseConv(), CaffeParserBase::AddConvLayerWithSplits(), CaffeParserBase::ParseBatchNormLayer(), CaffeParserBase::ParseConcatLayer(), CaffeParserBase::ParseConvLayer(), CaffeParserBase::ParseDropoutLayer(), CaffeParserBase::ParseEltwiseLayer(), CaffeParserBase::ParseInnerProductLayer(), CaffeParserBase::ParseInputLayer(), CaffeParserBase::ParseLRNLayer(), CaffeParserBase::ParsePoolingLayer(), CaffeParserBase::ParseReluLayer(), CaffeParserBase::ParseScaleLayer(), CaffeParserBase::ParseSoftmaxLayer(), and CaffeParserBase::ParseSplitLayer().

{
    auto it = m_ArmnnOutputSlotForCaffeTop.find(caffeTopName);
    if (it == m_ArmnnOutputSlotForCaffeTop.end())
    {
        m_ArmnnOutputSlotForCaffeTop[caffeTopName] = &armnnOutputSlot;
    }
    else
    {
        throw ParseException(
            boost::str(
                boost::format(
                    "Attempting to add duplicate entry for Caffe top '%1%' %2%") %
                    caffeTopName %
                    CHECK_LOCATION().AsString()));
    }
}

◆ TrackBindingPoint()

static void TrackBindingPoint(armnn::IConnectableLayer* layer, armnn::LayerBindingId id, const armnn::TensorInfo& tensorInfo, const char* bindingPointDesc, std::unordered_map<std::string, BindingPointInfo>& nameToBindingInfo)
static, protected

Definition at line 1510 of file CaffeParser.cpp.

References CHECK_LOCATION, and IConnectableLayer::GetName().

Referenced by CaffeParserBase::TrackInputBinding(), and CaffeParserBase::TrackOutputBinding().

{
    const std::string layerName = layer->GetName();
    auto it = nameToBindingInfo.find(layerName);
    if (it == nameToBindingInfo.end())
    {
        nameToBindingInfo[layerName] = std::make_pair(id, tensorInfo);
    }
    else
    {
        throw ParseException(
            boost::str(
                boost::format(
                    "Id %1% used by more than one %2% layer %3%") %
                    id %
                    bindingPointDesc %
                    CHECK_LOCATION().AsString()));
    }
}

◆ TrackInputBinding()

void TrackInputBinding(armnn::IConnectableLayer* layer, armnn::LayerBindingId id, const armnn::TensorInfo& tensorInfo)
protected

Definition at line 1496 of file CaffeParser.cpp.

References IConnectableLayer::GetName(), CaffeParserBase::m_NetworkInputsBindingInfo, and CaffeParserBase::TrackBindingPoint().

Referenced by CaffeParserBase::ParseInputLayer().

{
    return TrackBindingPoint(layer, id, tensorInfo, layer->GetName(), m_NetworkInputsBindingInfo);
}

◆ TrackOutputBinding()

void TrackOutputBinding(armnn::IConnectableLayer* layer, armnn::LayerBindingId id, const armnn::TensorInfo& tensorInfo)
protected

Definition at line 1503 of file CaffeParser.cpp.

References IConnectableLayer::GetName(), CaffeParserBase::m_NetworkOutputsBindingInfo, and CaffeParserBase::TrackBindingPoint().

Referenced by RecordByRecordCaffeParser::CreateNetworkFromBinaryFile(), and CaffeParserBase::LoadNetParam().

{
    return TrackBindingPoint(layer, id, tensorInfo, layer->GetName(), m_NetworkOutputsBindingInfo);
}

Member Data Documentation

◆ m_ArmnnOutputSlotForCaffeTop

std::unordered_map<std::string, armnn::IOutputSlot*> m_ArmnnOutputSlotForCaffeTop
protected

As we add armnn layers we store the armnn IOutputSlot which corresponds to the Caffe tops.

Definition at line 131 of file CaffeParser.hpp.

Referenced by CaffeParserBase::Cleanup(), CaffeParserBase::GetArmnnOutputSlotForCaffeTop(), and CaffeParserBase::SetArmnnOutputSlotForCaffeTop().

◆ m_CaffeLayersByTopName

std::map<std::string, const caffe::LayerParameter*> m_CaffeLayersByTopName
protected

◆ m_InputShapes

◆ m_Network

◆ m_NetworkInputsBindingInfo

std::unordered_map<std::string, BindingPointInfo> m_NetworkInputsBindingInfo
protected

Maps input layer names to their corresponding ids and tensor infos.

◆ m_NetworkOutputsBindingInfo

std::unordered_map<std::string, BindingPointInfo> m_NetworkOutputsBindingInfo
protected

Maps output layer names to their corresponding ids and tensor infos.

◆ m_RequestedOutputs

std::vector<std::string> m_RequestedOutputs
protected

◆ ms_CaffeLayerNameToParsingFunctions

const std::map< std::string, CaffeParserBase::OperationParsingFunction > ms_CaffeLayerNameToParsingFunctions
staticprotected

The documentation for this class was generated from the following files: