ArmNN 20.11
CaffeParserBase Class Reference

#include <CaffeParser.hpp>

Inheritance diagram for CaffeParserBase: inherits from ICaffeParser; inherited by CaffeParser and RecordByRecordCaffeParser.

Public Member Functions

virtual armnn::INetworkPtr CreateNetworkFromTextFile (const char *graphFile, const std::map< std::string, armnn::TensorShape > &inputShapes, const std::vector< std::string > &requestedOutputs) override
 Create the network from a protobuf text file on disk. More...
 
virtual armnn::INetworkPtr CreateNetworkFromString (const char *protoText, const std::map< std::string, armnn::TensorShape > &inputShapes, const std::vector< std::string > &requestedOutputs) override
 Creates the network directly from protobuf text in a string. Useful for debugging/testing. More...
 
virtual BindingPointInfo GetNetworkInputBindingInfo (const std::string &name) const override
 Retrieves binding info (layer id and tensor info) for the network input identified by the given layer name. More...
 
virtual BindingPointInfo GetNetworkOutputBindingInfo (const std::string &name) const override
 Retrieves binding info (layer id and tensor info) for the network output identified by the given layer name. More...
 
 CaffeParserBase ()
 
- Public Member Functions inherited from ICaffeParser
virtual armnn::INetworkPtr CreateNetworkFromBinaryFile (const char *graphFile, const std::map< std::string, armnn::TensorShape > &inputShapes, const std::vector< std::string > &requestedOutputs)=0
 Create the network from a protobuf binary file on disk. More...
 

Protected Types

using OperationParsingFunction = void(CaffeParserBase::*)(const caffe::LayerParameter &layerParam)
 

Protected Member Functions

armnn::TensorInfo BlobShapeToTensorInfo (const caffe::BlobShape &blobShape) const
 Converts Caffe's protobuf tensor shape format to ArmNN's. More...
 
void TrackInputBinding (armnn::IConnectableLayer *layer, armnn::LayerBindingId id, const armnn::TensorInfo &tensorInfo)
 
void TrackOutputBinding (armnn::IConnectableLayer *layer, armnn::LayerBindingId id, const armnn::TensorInfo &tensorInfo)
 
void SetArmnnOutputSlotForCaffeTop (const std::string &caffeTopName, armnn::IOutputSlot &armnnOutputSlot)
 
armnn::IOutputSlot & GetArmnnOutputSlotForCaffeTop (const std::string &caffeTopName) const
 Retrieves the Armnn IOutputSlot representing the given Caffe top. More...
 
void Cleanup ()
 
armnn::INetworkPtr CreateNetworkFromNetParameter (caffe::NetParameter &netParam, const std::map< std::string, armnn::TensorShape > &inputShapes, const std::vector< std::string > &requestedOutputs)
 Parses a NetParameter loaded into memory from one of the other CreateNetwork*. More...
 
void LoadNetParam (caffe::NetParameter &netParameter)
 Does the actual conversion from caffe::NetParameter to armnn::INetwork. More...
 
std::vector< const caffe::LayerParameter * > GetInputs (const caffe::LayerParameter &layerParam)
 Find the Caffe layers listed as inputs (bottoms) for a given layer. More...
 
void ResolveInPlaceLayers (caffe::NetParameter &netParameter)
 Modifies the Caffe network to replace "in-place" layers (whose top() and bottom() are both the same) with regular layers. More...
 
void ParseInputLayer (const caffe::LayerParameter &layerParam)
 Adds an armnn layer to m_Network given a Caffe LayerParameter of the correct type, and is responsible for recording any newly created IOutputSlots using SetArmnnOutputSlotForCaffeTop(). More...
 
void ParseConvLayer (const caffe::LayerParameter &layerParam)
 
void ParsePoolingLayer (const caffe::LayerParameter &layerParam)
 
void ParseReluLayer (const caffe::LayerParameter &layerParam)
 
void ParseLRNLayer (const caffe::LayerParameter &layerParam)
 
void ParseInnerProductLayer (const caffe::LayerParameter &layerParam)
 
void ParseSoftmaxLayer (const caffe::LayerParameter &layerParam)
 
void ParseEltwiseLayer (const caffe::LayerParameter &layerParam)
 
void ParseConcatLayer (const caffe::LayerParameter &layerParam)
 
void ParseBatchNormLayer (const caffe::LayerParameter &layerParam)
 
void ParseScaleLayer (const caffe::LayerParameter &layerParam)
 
void ParseSplitLayer (const caffe::LayerParameter &layerParam)
 
void ParseDropoutLayer (const caffe::LayerParameter &layerParam)
 
void AddConvLayerWithSplits (const caffe::LayerParameter &layerParam, const armnn::Convolution2dDescriptor &desc, unsigned int kernelW, unsigned int kernelH)
 ParseConvLayer() may use these helpers depending on the group parameter. More...
 
void AddConvLayerWithDepthwiseConv (const caffe::LayerParameter &layerParam, const armnn::Convolution2dDescriptor &desc, unsigned int kernelW, unsigned int kernelH)
 
- Protected Member Functions inherited from ICaffeParser
virtual ~ICaffeParser ()
 

Static Protected Member Functions

static void TrackBindingPoint (armnn::IConnectableLayer *layer, armnn::LayerBindingId id, const armnn::TensorInfo &tensorInfo, const char *bindingPointDesc, std::unordered_map< std::string, BindingPointInfo > &nameToBindingInfo)
 
static std::pair< armnn::LayerBindingId, armnn::TensorInfo > GetBindingInfo (const std::string &layerName, const char *bindingPointDesc, const std::unordered_map< std::string, BindingPointInfo > &nameToBindingInfo)
 

Protected Attributes

std::unordered_map< std::string, BindingPointInfo > m_NetworkInputsBindingInfo
 maps input layer names to their corresponding ids and tensor infos More...
 
std::unordered_map< std::string, BindingPointInfo > m_NetworkOutputsBindingInfo
 maps output layer names to their corresponding ids and tensor infos More...
 
armnn::INetworkPtr m_Network
 
std::map< std::string, armnn::TensorShape > m_InputShapes
 
std::unordered_map< std::string, armnn::IOutputSlot * > m_ArmnnOutputSlotForCaffeTop
 As we add armnn layers we store the armnn IOutputSlot which corresponds to the Caffe tops. More...
 
std::vector< std::string > m_RequestedOutputs
 
std::map< std::string, const caffe::LayerParameter * > m_CaffeLayersByTopName
 

Static Protected Attributes

static const std::map< std::string, OperationParsingFunction > ms_CaffeLayerNameToParsingFunctions
 Maps Caffe layer names to parsing member functions. More...
 

Additional Inherited Members

- Static Public Member Functions inherited from ICaffeParser
static ICaffeParser * CreateRaw ()
 
static ICaffeParserPtr Create ()
 
static void Destroy (ICaffeParser *parser)
 

Detailed Description

Definition at line 26 of file CaffeParser.hpp.

Member Typedef Documentation

◆ OperationParsingFunction

using OperationParsingFunction = void(CaffeParserBase::*)(const caffe::LayerParameter& layerParam)
protected

Definition at line 115 of file CaffeParser.hpp.

Constructor & Destructor Documentation

◆ CaffeParserBase()

Definition at line 267 of file CaffeParser.cpp.

267 CaffeParserBase::CaffeParserBase()
268  : m_Network(nullptr, nullptr)
269 {
270 
271 }

Member Function Documentation

◆ AddConvLayerWithDepthwiseConv()

void AddConvLayerWithDepthwiseConv ( const caffe::LayerParameter &  layerParam,
const armnn::Convolution2dDescriptor &  convDesc,
unsigned int  kernelW,
unsigned int  kernelH 
)
protected

Definition at line 594 of file CaffeParser.cpp.

References ARMNN_ASSERT, CaffeParserBase::BlobShapeToTensorInfo(), CHECK_LOCATION, IOutputSlot::Connect(), CaffeParserBase::GetArmnnOutputSlotForCaffeTop(), armnnUtils::GetTensorInfo(), Convolution2dDescriptor::m_BiasEnabled, CaffeParserBase::m_Network, Convolution2dDescriptor::m_PadBottom, Convolution2dDescriptor::m_PadLeft, DepthwiseConvolution2dDescriptor::m_PadLeft, Convolution2dDescriptor::m_PadRight, Convolution2dDescriptor::m_PadTop, Convolution2dDescriptor::m_StrideX, Convolution2dDescriptor::m_StrideY, armnn::numeric_cast(), CaffeParserBase::SetArmnnOutputSlotForCaffeTop(), and armnnCaffeParser::TensorDescToBlobShape().

Referenced by CaffeParserBase::ParseConvLayer().

598 {
599  ARMNN_ASSERT(layerParam.type() == "Convolution");
600  ValidateNumInputsOutputs(layerParam, 1, 1);
601 
602  ConvolutionParameter convParam = layerParam.convolution_param();
603  BlobShape inputShape = TensorDescToBlobShape(GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).GetTensorInfo());
604 
605  DepthwiseConvolution2dDescriptor desc;
606  desc.m_PadLeft = convDesc.m_PadLeft;
607  desc.m_PadRight = convDesc.m_PadRight;
608  desc.m_PadTop = convDesc.m_PadTop;
609  desc.m_PadBottom = convDesc.m_PadBottom;
610  desc.m_StrideX = convDesc.m_StrideX;
611  desc.m_StrideY = convDesc.m_StrideY;
612  desc.m_BiasEnabled = convDesc.m_BiasEnabled;
613 
614  unsigned int numFilters = convParam.num_output();
615 
616  BlobShape outputShape;
617  outputShape.add_dim(0);
618  outputShape.set_dim(0, inputShape.dim(0));
619  outputShape.add_dim(1);
620  outputShape.set_dim(1, numFilters);
621  outputShape.add_dim(2);
622  outputShape.set_dim(
623  2, (static_cast<int>(
624  static_cast<float>(inputShape.dim(2) + 2 * desc.m_PadBottom - kernelH) /
625  static_cast<float>(desc.m_StrideY)) + 1));
626  outputShape.add_dim(3);
627  outputShape.set_dim(
628  3, (static_cast<int>(
629  static_cast<float>(inputShape.dim(3) + 2 * desc.m_PadRight - kernelW) /
630  static_cast<float>(desc.m_StrideX)) + 1));
631 
632  // Load the weight data
633  size_t allWeightsSize = armnn::numeric_cast<size_t>(inputShape.dim(1) * kernelH * kernelW);
634  vector<float> weightData(allWeightsSize);
635 
636  GetDataFromBlob(layerParam, weightData, 0);
637 
638  // depth multiplier will be 1 for the depthwise convolution
639  const unsigned int weightDimSizes[4] = {
640  static_cast<unsigned int>(1), // depth multiplier
641  static_cast<unsigned int>(inputShape.dim(1)), // #channels
642  kernelH,
643  kernelW};
644 
645  armnn::IConnectableLayer* returnLayer = nullptr;
646  ConstTensor weights(TensorInfo(4, weightDimSizes, DataType::Float32), weightData.data());
647  Optional<ConstTensor> optionalBiases;
648  vector<float> biasData;
649  if (desc.m_BiasEnabled)
650  {
651  TensorInfo biasInfo;
652 
653  biasData.resize(armnn::numeric_cast<size_t>(outputShape.dim(1)), 1.f);
654  GetDataFromBlob(layerParam, biasData, 1);
655 
656  const unsigned int biasDimSizes[1] = {static_cast<unsigned int>(outputShape.dim(1))};
657  biasInfo = TensorInfo(1, biasDimSizes, DataType::Float32);
658 
659  ConstTensor biases(biasInfo, biasData.data());
660  optionalBiases = Optional<ConstTensor>(biases);
661  }
662  returnLayer = m_Network->AddDepthwiseConvolution2dLayer(desc,
663  weights,
664  optionalBiases,
665  layerParam.name().c_str());
666 
667  if (!returnLayer)
668  {
669  throw ParseException(
670  fmt::format("Failed to create depthwise convolution layer. "
671  "Layer={} #filters={} {}",
672  layerParam.name(),
673  numFilters,
674  CHECK_LOCATION().AsString()));
675  }
676  armnn::IOutputSlot& inputConnection = GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0));
677  inputConnection.Connect(returnLayer->GetInputSlot(0));
678  returnLayer->GetOutputSlot(0).SetTensorInfo(BlobShapeToTensorInfo(outputShape));
679  SetArmnnOutputSlotForCaffeTop(layerParam.top(0), returnLayer->GetOutputSlot(0));
680 }
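The output height/width values written into dims 2 and 3 of `outputShape` above follow the standard convolution shape formula, with the parser approximating the total padding as twice `m_PadBottom` (height) or `m_PadRight` (width), i.e. assuming symmetric padding. A standalone sketch of just that arithmetic (`ConvOutputDim` is a hypothetical name, not an ArmNN function):

```cpp
#include <cassert>

// Output spatial extent of a convolution, mirroring the cast-heavy expression
// used when populating outputShape: floor((in + 2*pad - kernel) / stride) + 1.
unsigned int ConvOutputDim(unsigned int inDim,
                           unsigned int pad,
                           unsigned int kernel,
                           unsigned int stride)
{
    return static_cast<unsigned int>(
        static_cast<float>(inDim + 2 * pad - kernel) / static_cast<float>(stride)) + 1;
}
```

For example, a 3x3 kernel with padding 1 and stride 1 preserves the input extent, while stride 2 with no padding roughly halves it.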

◆ AddConvLayerWithSplits()

void AddConvLayerWithSplits ( const caffe::LayerParameter &  layerParam,
const armnn::Convolution2dDescriptor &  desc,
unsigned int  kernelW,
unsigned int  kernelH 
)
protected

ParseConvLayer() may use these helpers depending on the group parameter.

Definition at line 404 of file CaffeParser.cpp.

References ARMNN_ASSERT, CaffeParserBase::BlobShapeToTensorInfo(), CHECK_LOCATION, IOutputSlot::Connect(), CaffeParserBase::GetArmnnOutputSlotForCaffeTop(), IConnectableLayer::GetInputSlot(), IConnectableLayer::GetNumOutputSlots(), IConnectableLayer::GetOutputSlot(), armnnUtils::GetTensorInfo(), Convolution2dDescriptor::m_BiasEnabled, CaffeParserBase::m_Network, Convolution2dDescriptor::m_PadBottom, Convolution2dDescriptor::m_PadRight, Convolution2dDescriptor::m_StrideX, Convolution2dDescriptor::m_StrideY, armnn::numeric_cast(), CaffeParserBase::SetArmnnOutputSlotForCaffeTop(), IOutputSlot::SetTensorInfo(), OriginsDescriptor::SetViewOriginCoord(), ViewsDescriptor::SetViewOriginCoord(), ViewsDescriptor::SetViewSize(), and armnnCaffeParser::TensorDescToBlobShape().

Referenced by CaffeParserBase::ParseConvLayer().

408 {
409  ARMNN_ASSERT(layerParam.type() == "Convolution");
410  ValidateNumInputsOutputs(layerParam, 1, 1);
411 
412  ConvolutionParameter convParam = layerParam.convolution_param();
413  BlobShape inputShape = TensorDescToBlobShape(GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).GetTensorInfo());
414  const unsigned int numGroups = convParam.has_group() ? convParam.group() : 1;
415 
416  // assume these were already verified by the caller ParseConvLayer() function
417  ARMNN_ASSERT(numGroups < inputShape.dim(1));
418  ARMNN_ASSERT(numGroups > 1);
419 
420  // Handle grouping
421  armnn::IOutputSlot& inputConnection = GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0));
422 
423  vector<string> convLayerNames(numGroups);
424  vector<armnn::IConnectableLayer*> convLayers(numGroups);
425  convLayerNames[0] = layerParam.name();
426 
427  // This convolution is to be applied to chunks of the input data so add a splitter layer
428 
429  // Redirect the convolution input to the splitter
430  unsigned int splitterDimSizes[4] = {static_cast<unsigned int>(inputShape.dim(0)),
431  static_cast<unsigned int>(inputShape.dim(1)),
432  static_cast<unsigned int>(inputShape.dim(2)),
433  static_cast<unsigned int>(inputShape.dim(3))};
434 
435  // Split dimension 1 of the splitter output shape and conv input shapes
436  // according to the number of groups
437 
438  splitterDimSizes[1] /= numGroups;
439  inputShape.set_dim(1, splitterDimSizes[1]);
440 
441  // This is used to describe how the input is to be split
442  ViewsDescriptor splitterDesc(numGroups);
443 
444  // Create an output node for each group, giving each a unique name
445  for (unsigned int g = 0; g < numGroups; ++g)
446  {
447  // Work out the names of the splitter layers child convolutions
448  stringstream ss;
449  ss << layerParam.name() << "_" << g;
450  convLayerNames[g] = ss.str();
451 
452  splitterDesc.SetViewOriginCoord(g, 1, splitterDimSizes[1] * g);
453 
454  // Set the size of the views.
455  for (unsigned int dimIdx=0; dimIdx < 4; dimIdx++)
456  {
457  splitterDesc.SetViewSize(g, dimIdx, splitterDimSizes[dimIdx]);
458  }
459  }
460 
461  const std::string splitterLayerName = std::string("splitter_") + layerParam.bottom(0);
462  armnn::IConnectableLayer* splitterLayer = m_Network->AddSplitterLayer(splitterDesc, splitterLayerName.c_str());
463 
464  inputConnection.Connect(splitterLayer->GetInputSlot(0));
465  for (unsigned int i = 0; i < splitterLayer->GetNumOutputSlots(); i++)
466  {
467  splitterLayer->GetOutputSlot(i).SetTensorInfo(BlobShapeToTensorInfo(inputShape));
468  }
469 
470  unsigned int numFilters = convParam.num_output();
471 
472  // Populates convolution output tensor descriptor dimensions.
473  BlobShape outputShape;
474  outputShape.add_dim(0);
475  outputShape.set_dim(0, inputShape.dim(0));
476  outputShape.add_dim(1);
477  // Ensures that dimension 1 of the convolution output is split according to the number of groups.
478  outputShape.set_dim(1, numFilters / numGroups);
479  outputShape.add_dim(2);
480  outputShape.set_dim(
481  2, (static_cast<int>(
482  static_cast<float>(inputShape.dim(2) + 2 * desc.m_PadBottom - kernelH) /
483  static_cast<float>(desc.m_StrideY)) + 1));
484  outputShape.add_dim(3);
485  outputShape.set_dim(
486  3, (static_cast<int>(
487  static_cast<float>(inputShape.dim(3) + 2 * desc.m_PadRight - kernelW) /
488  static_cast<float>(desc.m_StrideX)) + 1));
489 
490  // Load the weight data for ALL groups
491  vector<float> weightData(armnn::numeric_cast<size_t>(numGroups *
492  inputShape.dim(1) * // number of input channels
493  outputShape.dim(1) * // number of output channels
494  kernelH *
495  kernelW));
496  GetDataFromBlob(layerParam, weightData, 0);
497 
498  const unsigned int weightDimSizes[4] = {
499  static_cast<unsigned int>(outputShape.dim(1)),
500  static_cast<unsigned int>(inputShape.dim(1)),
501  kernelH,
502  kernelW};
503 
504  TensorInfo biasInfo;
505  vector<float> biasData;
506 
507  if (desc.m_BiasEnabled)
508  {
509  biasData.resize(armnn::numeric_cast<size_t>(numGroups * outputShape.dim(1)), 1.f);
510  GetDataFromBlob(layerParam, biasData, 1);
511 
512  const unsigned int biasDimSizes[1] = {static_cast<unsigned int>(outputShape.dim(1))};
513  biasInfo = TensorInfo(1, biasDimSizes, DataType::Float32);
514  }
515 
516  const unsigned int numWeightsPerGroup = armnn::numeric_cast<unsigned int>(weightData.size()) / numGroups;
517  const unsigned int numBiasesPerGroup = armnn::numeric_cast<unsigned int>(biasData.size()) / numGroups;
518 
519  for (unsigned int g = 0; g < numGroups; ++g)
520  {
521  // Sets the slot index, group 0 should be connected to the 0th output of the splitter
522  // group 1 should be connected to the 1st output of the splitter.
523 
524  // Pulls out the weights for this group from that loaded from the model file earlier.
525  ConstTensor weights(TensorInfo(4, weightDimSizes, DataType::Float32),
526  weightData.data() + numWeightsPerGroup * g);
527 
528  IConnectableLayer* convLayer = nullptr;
529  Optional<ConstTensor> optionalBiases;
530  if (desc.m_BiasEnabled)
531  {
532  // Pulls out the biases for this group from that loaded from the model file earlier.
533  ConstTensor biases(biasInfo, biasData.data() + numBiasesPerGroup * g);
534  optionalBiases = Optional<ConstTensor>(biases);
535  }
536  convLayer = m_Network->AddConvolution2dLayer(desc,
537  weights,
538  optionalBiases,
539  convLayerNames[g].c_str());
540  convLayers[g] = convLayer;
541 
542  // If we have more than one group then the input to the nth convolution is the splitter layer's nth output,
543  // otherwise it's the regular input to this layer.
544  armnn::IOutputSlot& splitterInputConnection =
545  splitterLayer ? splitterLayer->GetOutputSlot(g) : inputConnection;
546  splitterInputConnection.Connect(convLayer->GetInputSlot(0));
547  convLayer->GetOutputSlot(0).SetTensorInfo(BlobShapeToTensorInfo(outputShape));
548  }
549 
550  // If the convolution was performed in chunks, add a layer to concatenate the results
551 
552  // The merge input shape matches that of the convolution output
553  unsigned int concatDimSizes[4] = {static_cast<unsigned int>(outputShape.dim(0)),
554  static_cast<unsigned int>(outputShape.dim(1)),
555  static_cast<unsigned int>(outputShape.dim(2)),
556  static_cast<unsigned int>(outputShape.dim(3))};
557 
558  // This is used to describe how the input is to be concatenated
559  OriginsDescriptor concatDesc(numGroups);
560 
561  // Now create an input node for each group, using the name from
562  // the output of the corresponding convolution
563  for (unsigned int g = 0; g < numGroups; ++g)
564  {
565  concatDesc.SetViewOriginCoord(g, 1, concatDimSizes[1] * g);
566  }
567 
568  // Make sure the output from the concat is the correct size to hold the data for all groups
569  concatDimSizes[1] *= numGroups;
570  outputShape.set_dim(1, concatDimSizes[1]);
571 
572  // Finally add the concat layer
573  IConnectableLayer* concatLayer = m_Network->AddConcatLayer(concatDesc, layerParam.name().c_str());
574 
575  if (!concatLayer)
576  {
577  throw ParseException(
578  fmt::format("Failed to create final concat layer for Split+Convolution+Concat. "
579  "Layer={} #groups={} #filters={} {}",
580  layerParam.name(),
581  numGroups,
582  numFilters,
583  CHECK_LOCATION().AsString()));
584  }
585 
586  for (unsigned int g = 0; g < numGroups; ++g)
587  {
588  convLayers[g]->GetOutputSlot(0).Connect(concatLayer->GetInputSlot(g));
589  }
590  concatLayer->GetOutputSlot(0).SetTensorInfo(armnn::TensorInfo(4, concatDimSizes, DataType::Float32));
591  SetArmnnOutputSlotForCaffeTop(layerParam.top(0), concatLayer->GetOutputSlot(0));
592 }
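The per-group weight and bias handling above relies on each group owning an equal contiguous chunk of the flat buffer loaded from the model file, so group `g` reads from `weightData.data() + numWeightsPerGroup * g`. A minimal sketch of that offset arithmetic (`GroupSlice` and `WeightSliceForGroup` are hypothetical names, not ArmNN API):

```cpp
#include <cassert>
#include <cstddef>

// Describes the contiguous chunk of a flat weight buffer that belongs to one
// convolution group.
struct GroupSlice
{
    std::size_t offset; // index of the first weight belonging to this group
    std::size_t count;  // number of weights per group
};

// Equal partition of the buffer: every group gets totalWeights / numGroups
// consecutive elements, starting at perGroup * group.
GroupSlice WeightSliceForGroup(std::size_t totalWeights,
                               unsigned int numGroups,
                               unsigned int group)
{
    const std::size_t perGroup = totalWeights / numGroups;
    return { perGroup * group, perGroup };
}
```

The same arithmetic applies to the bias buffer via `numBiasesPerGroup`, and to the splitter/concat view origins, which advance by `channelsPerGroup * g` along dimension 1.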

◆ BlobShapeToTensorInfo()

TensorInfo BlobShapeToTensorInfo ( const caffe::BlobShape &  blobShape) const
protected

Converts Caffe's protobuf tensor shape format to ArmNN's.

Definition at line 305 of file CaffeParser.cpp.

Referenced by CaffeParserBase::AddConvLayerWithDepthwiseConv(), CaffeParserBase::AddConvLayerWithSplits(), CaffeParserBase::ParseConvLayer(), and CaffeParserBase::ParseInputLayer().

306 {
307  std::vector<unsigned int> shape;
308  for (int j = 0; j < blobShape.dim_size(); ++j)
309  {
310  shape.push_back(static_cast<unsigned int>(blobShape.dim(j)));
311  }
312 
313  return TensorInfo(armnn::numeric_cast<unsigned int>(shape.size()), shape.data(), DataType::Float32);
314 }
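The conversion above narrows each signed protobuf dimension to `unsigned int` and always produces a Float32 tensor. A standalone analogue of the shape part, without the ArmNN `TensorInfo` type (`BlobDimsToShape` is a hypothetical name):

```cpp
#include <cassert>
#include <vector>

// Caffe's BlobShape stores dimensions as signed 64-bit values; the parser
// narrows each one to unsigned int. This sketch keeps only that shape
// conversion, dropping the Float32 TensorInfo wrapper.
std::vector<unsigned int> BlobDimsToShape(const std::vector<long long>& blobDims)
{
    std::vector<unsigned int> shape;
    shape.reserve(blobDims.size());
    for (long long d : blobDims)
    {
        shape.push_back(static_cast<unsigned int>(d));
    }
    return shape;
}
```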

◆ Cleanup()

void Cleanup ( )
protected

Definition at line 1782 of file CaffeParser.cpp.

References CaffeParserBase::m_ArmnnOutputSlotForCaffeTop, CaffeParserBase::m_CaffeLayersByTopName, CaffeParserBase::m_InputShapes, and CaffeParserBase::m_RequestedOutputs.

Referenced by RecordByRecordCaffeParser::CreateNetworkFromBinaryFile(), and CaffeParserBase::CreateNetworkFromNetParameter().

1782  {
1783  // cleanup, in case we reuse this parser
1784  m_InputShapes.clear();
1785  m_RequestedOutputs.clear();
1786  m_ArmnnOutputSlotForCaffeTop.clear();
1787  // NOTE: when we get the text/string format
1788  // optimised for memory then this data structure can
1789  // also move to the CaffeParser class
1790  m_CaffeLayersByTopName.clear();
1791 }

◆ CreateNetworkFromNetParameter()

INetworkPtr CreateNetworkFromNetParameter ( caffe::NetParameter &  netParam,
const std::map< std::string, armnn::TensorShape > &  inputShapes,
const std::vector< std::string > &  requestedOutputs 
)
protected

Parses a NetParameter loaded into memory from one of the other CreateNetwork*.

Definition at line 1751 of file CaffeParser.cpp.

References CaffeParserBase::Cleanup(), INetwork::Create(), CaffeParserBase::LoadNetParam(), CaffeParserBase::m_InputShapes, CaffeParserBase::m_Network, CaffeParserBase::m_NetworkInputsBindingInfo, CaffeParserBase::m_NetworkOutputsBindingInfo, and CaffeParserBase::m_RequestedOutputs.

Referenced by CaffeParser::CreateNetworkFromBinaryFile(), CaffeParserBase::CreateNetworkFromString(), and CaffeParserBase::CreateNetworkFromTextFile().

1754 {
1755  m_NetworkInputsBindingInfo.clear();
1756  m_NetworkOutputsBindingInfo.clear();
1757 
1758  m_Network = INetwork::Create();
1759 
1760  m_InputShapes = inputShapes;
1761  if (requestedOutputs.size() == 0)
1762  {
1763  throw ParseException("requestedOutputs must have at least one entry");
1764  }
1765  m_RequestedOutputs = requestedOutputs;
1766 
1767  try
1768  {
1769  LoadNetParam(netParam);
1770  }
1771  catch (const ParseException& e)
1772  {
1773  Cleanup();
1774  throw e;
1775  }
1776 
1777  Cleanup();
1778 
1779  return move(m_Network);
1780 }
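The try/catch above clears per-parse state on both the failure and success paths so the same parser object can be reused for another network. A self-contained sketch of that discipline, using hypothetical names rather than ArmNN types:

```cpp
#include <cassert>
#include <stdexcept>
#include <string>
#include <vector>

// Parser-like object whose per-parse state must not leak into the next parse,
// whether the current parse succeeds or throws.
struct ParserState
{
    std::vector<std::string> requestedOutputs;

    std::string Parse(const std::string& input)
    {
        try
        {
            if (input.empty())
            {
                throw std::runtime_error("empty network description");
            }
        }
        catch (...)
        {
            requestedOutputs.clear(); // clean up before re-throwing
            throw;
        }
        std::string result = "network(" + input + ")";
        requestedOutputs.clear(); // clean up on the success path too
        return result;
    }
};
```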

◆ CreateNetworkFromString()

INetworkPtr CreateNetworkFromString ( const char *  protoText,
const std::map< std::string, armnn::TensorShape > &  inputShapes,
const std::vector< std::string > &  requestedOutputs 
)
overridevirtual

Creates the network directly from protobuf text in a string. Useful for debugging/testing.

Implements ICaffeParser.

Definition at line 1697 of file CaffeParser.cpp.

References CHECK_LOCATION, and CaffeParserBase::CreateNetworkFromNetParameter().

1700 {
1701  // Parses the string into a message.
1702  NetParameter netParam;
1703  bool success = google::protobuf::TextFormat::ParseFromString(protoText, &netParam);
1704 
1705  if (!success)
1706  {
1707  throw ParseException(
1708  fmt::format("Failed to parse graph string {}",
1709  CHECK_LOCATION().AsString()));
1710  }
1711 
1712  return CreateNetworkFromNetParameter(netParam, inputShapes, requestedOutputs);
1713 }

◆ CreateNetworkFromTextFile()

INetworkPtr CreateNetworkFromTextFile ( const char *  graphFile,
const std::map< std::string, armnn::TensorShape > &  inputShapes,
const std::vector< std::string > &  requestedOutputs 
)
overridevirtual

Create the network from a protobuf text file on disk.

Implements ICaffeParser.

Definition at line 1665 of file CaffeParser.cpp.

References CHECK_LOCATION, and CaffeParserBase::CreateNetworkFromNetParameter().

1668 {
1669  FILE* fd = fopen(graphFile, "r");
1670 
1671  if (fd == nullptr)
1672  {
1673  throw FileNotFoundException(
1674  fmt::format("Failed to open graph file: {} {}",
1675  graphFile,
1676  CHECK_LOCATION().AsString()));
1677  }
1678 
1679  // Parses the file into a message.
1680  NetParameter netParam;
1681  auto input = new google::protobuf::io::FileInputStream(fileno(fd));
1682  bool success = google::protobuf::TextFormat::Parse(input, &netParam);
1683  delete input;
1684  fclose(fd);
1685 
1686  if (!success)
1687  {
1688  throw ParseException(
1689  fmt::format("Failed to parse graph file: {} {}",
1690  graphFile,
1691  CHECK_LOCATION().AsString()));
1692  }
1693 
1694  return CreateNetworkFromNetParameter(netParam, inputShapes, requestedOutputs);
1695 }

◆ GetArmnnOutputSlotForCaffeTop()

armnn::IOutputSlot & GetArmnnOutputSlotForCaffeTop ( const std::string &  caffeTopName) const
protected

Retrieves the Armnn IOutputSlot representing the given Caffe top.

Throws if it cannot be found (e.g. not parsed yet).

Definition at line 1478 of file CaffeParser.cpp.

References CHECK_LOCATION, and CaffeParserBase::m_ArmnnOutputSlotForCaffeTop.

Referenced by CaffeParserBase::AddConvLayerWithDepthwiseConv(), CaffeParserBase::AddConvLayerWithSplits(), RecordByRecordCaffeParser::CreateNetworkFromBinaryFile(), CaffeParserBase::LoadNetParam(), CaffeParserBase::ParseBatchNormLayer(), CaffeParserBase::ParseConcatLayer(), CaffeParserBase::ParseConvLayer(), CaffeParserBase::ParseDropoutLayer(), CaffeParserBase::ParseEltwiseLayer(), CaffeParserBase::ParseInnerProductLayer(), CaffeParserBase::ParseLRNLayer(), CaffeParserBase::ParsePoolingLayer(), CaffeParserBase::ParseReluLayer(), CaffeParserBase::ParseScaleLayer(), CaffeParserBase::ParseSoftmaxLayer(), and CaffeParserBase::ParseSplitLayer().

1479 {
1480  auto it = m_ArmnnOutputSlotForCaffeTop.find(caffeTopName);
1481  if (it != m_ArmnnOutputSlotForCaffeTop.end())
1482  {
1483  return *it->second;
1484  }
1485  else
1486  {
1487  throw ParseException(
1488  fmt::format("Could not find armnn output slot for Caffe top '{}' {}",
1489  caffeTopName,
1490  CHECK_LOCATION().AsString()));
1491  }
1492 }

◆ GetBindingInfo()

std::pair< armnn::LayerBindingId, armnn::TensorInfo > GetBindingInfo ( const std::string &  layerName,
const char *  bindingPointDesc,
const std::unordered_map< std::string, BindingPointInfo > &  nameToBindingInfo 
)
staticprotected

Definition at line 289 of file CaffeParser.cpp.

References CHECK_LOCATION.

Referenced by CaffeParserBase::GetNetworkInputBindingInfo(), and CaffeParserBase::GetNetworkOutputBindingInfo().

292 {
293  auto it = nameToBindingInfo.find(layerName);
294  if (it == nameToBindingInfo.end())
295  {
296  throw InvalidArgumentException(
297  fmt::format("Unknown binding {} for layer '{}'. {}",
298  bindingPointDesc,
299  layerName,
300  CHECK_LOCATION().AsString()));
301  }
302  return it->second;
303 }

◆ GetInputs()

vector< const LayerParameter * > GetInputs ( const caffe::LayerParameter &  layerParam)
protected

Find the Caffe layers listed as inputs (bottoms) for a given layer.

Definition at line 330 of file CaffeParser.cpp.

References CHECK_LOCATION, and CaffeParserBase::m_CaffeLayersByTopName.

Referenced by CaffeParserBase::LoadNetParam().

331 {
332  std::vector<const caffe::LayerParameter*> ret;
333  ret.reserve(armnn::numeric_cast<size_t>(layerParam.bottom_size()));
334  for (int j = 0; j < layerParam.bottom_size(); ++j)
335  {
336  std::string inputName = layerParam.bottom(j);
337  auto inputIt = m_CaffeLayersByTopName.find(inputName);
338  if (inputIt == m_CaffeLayersByTopName.end())
339  {
340  throw ParseException(
341  fmt::format("Can't find Caffe layer with top called '{}', "
342  "which is listed as an input of '{}'. {}",
343  inputName,
344  layerParam.name(),
345  CHECK_LOCATION().AsString()));
346  }
347  ret.push_back(inputIt->second);
348  }
349 
350  return ret;
351 }
std::map< std::string, const caffe::LayerParameter * > m_CaffeLayersByTopName

◆ GetNetworkInputBindingInfo()

BindingPointInfo GetNetworkInputBindingInfo ( const std::string &  name) const
overridevirtual

Retrieves binding info (layer id and tensor info) for the network input identified by the given layer name.

Implements ICaffeParser.

Definition at line 279 of file CaffeParser.cpp.

References CaffeParserBase::GetBindingInfo(), and CaffeParserBase::m_NetworkInputsBindingInfo.

280 {
281  return GetBindingInfo(name, "input", m_NetworkInputsBindingInfo);
282 }
static std::pair< armnn::LayerBindingId, armnn::TensorInfo > GetBindingInfo(const std::string &layerName, const char *bindingPointDesc, const std::unordered_map< std::string, BindingPointInfo > &bindingInfos)
std::unordered_map< std::string, BindingPointInfo > m_NetworkInputsBindingInfo
maps input layer names to their corresponding ids and tensor infos

◆ GetNetworkOutputBindingInfo()

BindingPointInfo GetNetworkOutputBindingInfo ( const std::string &  name) const
overridevirtual

Retrieves binding info (layer id and tensor info) for the network output identified by the given layer name.

Implements ICaffeParser.

Definition at line 284 of file CaffeParser.cpp.

References CaffeParserBase::GetBindingInfo(), and CaffeParserBase::m_NetworkOutputsBindingInfo.

285 {
286  return GetBindingInfo(name, "output", m_NetworkOutputsBindingInfo);
287 }
std::unordered_map< std::string, BindingPointInfo > m_NetworkOutputsBindingInfo
maps output layer names to their corresponding ids and tensor infos

◆ LoadNetParam()

void LoadNetParam ( caffe::NetParameter &  netParameter)
protected

does the actual conversion from caffe::NetParameter to armnn::INetwork

Definition at line 1569 of file CaffeParser.cpp.

References CHECK_LOCATION, IOutputSlot::Connect(), CaffeParserBase::GetArmnnOutputSlotForCaffeTop(), CaffeParserBase::GetInputs(), CaffeParserBase::m_CaffeLayersByTopName, CaffeParserBase::m_Network, CaffeParserBase::m_NetworkOutputsBindingInfo, CaffeParserBase::m_RequestedOutputs, CaffeParserBase::ms_CaffeLayerNameToParsingFunctions, armnn::numeric_cast(), CaffeParserBase::ResolveInPlaceLayers(), and CaffeParserBase::TrackOutputBinding().

Referenced by CaffeParserBase::CreateNetworkFromNetParameter().

1570 {
1571  // Caffe models sometimes have an implicit input layer.
1572  // In that case, add an explicit one.
1573  if (netParameter.input_size() > 0)
1574  {
1575  LayerParameter* newLayer = netParameter.add_layer();
1576 
1577  newLayer->set_type("Input");
1578  newLayer->set_name(netParameter.input(0));
1579  newLayer->add_top(netParameter.input(0));
1580 
1581  InputParameter* inputParam = newLayer->mutable_input_param();
1582  BlobShape* shape = inputParam->add_shape();
1583 
1584  int dim_size = netParameter.input_dim_size();
1585  for (int i = 0; i < dim_size; ++i)
1586  {
1587  shape->add_dim(netParameter.input_dim(i));
1588  }
1589  }
1590 
1591  // Replaces in-place layers with regular ones to make the rest of the parsing easier.
1592  ResolveInPlaceLayers(netParameter);
1593 
1594  // Creates a lookup of Caffe layers by name.
1595  for (int i = 0; i < netParameter.layer_size(); ++i)
1596  {
1597  const caffe::LayerParameter& layer = netParameter.layer(i);
1598  for (int i = 0; i < layer.top_size(); ++i)
1599  {
1600  m_CaffeLayersByTopName[layer.top(i)] = &layer;
1601  }
1602  }
1603 
1604  // Finds the output layers the user requested.
1605  std::vector<const caffe::LayerParameter*> targetLayers;
1606  for (const std::string& requestedOutputName : m_RequestedOutputs)
1607  {
1608  auto nodeIt = m_CaffeLayersByTopName.find(requestedOutputName);
1609  if (nodeIt == m_CaffeLayersByTopName.end())
1610  {
1611  throw ParseException(
1612  fmt::format("Couldn't find requested output layer '{}' in graph {}",
1613  requestedOutputName,
1614  CHECK_LOCATION().AsString()));
1615  }
1616  targetLayers.push_back(nodeIt->second);
1617  }
1618 
1619  // Sorts them into a linear ordering such that all inputs of a node are before the node itself.
1620  std::vector<const caffe::LayerParameter*> sortedNodes;
1621  if (!armnnUtils::GraphTopologicalSort<const caffe::LayerParameter*>(
1622  targetLayers,
1623  [this](const caffe::LayerParameter* node)
1624  {
1625  return GetInputs(*node);
1626  },
1627  sortedNodes))
1628  {
1629  throw ParseException(
1630  fmt::format("Cycle detected in graph. #nodes: {} {}",
1631  sortedNodes.size(),
1632  CHECK_LOCATION().AsString()));
1633  }
1634 
1635  // Parses each node in order, knowing that all inputs of a node will be processed before the node itself.
1636  for (const caffe::LayerParameter* current : sortedNodes)
1637  {
1638  auto it = ms_CaffeLayerNameToParsingFunctions.find(current->type());
1639  if (it == ms_CaffeLayerNameToParsingFunctions.end())
1640  {
1641  throw ParseException(
1642  fmt::format("Unsupported layer type: '{}' for layer {} {}",
1643  current->type(),
1644  current->name(),
1645  CHECK_LOCATION().AsString()));
1646  }
1647  auto func = it->second;
1648  (this->*func)(*current);
1649  }
1650 
1651  // Adds ArmNN output layers connected to each requested output.
1652  for (const std::string& requestedOutput : m_RequestedOutputs)
1653  {
1654  armnn::IOutputSlot& outputSlot = GetArmnnOutputSlotForCaffeTop(requestedOutput);
1655 
1656  const armnn::LayerBindingId outputId = armnn::numeric_cast<armnn::LayerBindingId>(
1657  m_NetworkOutputsBindingInfo.size());
1658  armnn::IConnectableLayer* const outputLayer = m_Network->AddOutputLayer(outputId, requestedOutput.c_str());
1659  outputSlot.Connect(outputLayer->GetInputSlot(0));
1660 
1661  TrackOutputBinding(outputLayer, outputId, outputLayer->GetInputSlot(0).GetConnection()->GetTensorInfo());
1662  }
1663 }
Interface for a layer that is connectable to other layers via InputSlots and OutputSlots.
Definition: INetwork.hpp:61
armnn::IOutputSlot & GetArmnnOutputSlotForCaffeTop(const std::string &caffeTopName) const
Retrieves the Armnn IOutputSlot representing the given Caffe top.
static const std::map< std::string, OperationParsingFunction > ms_CaffeLayerNameToParsingFunctions
Maps Caffe layer names to parsing member functions.
std::vector< std::string > m_RequestedOutputs
int LayerBindingId
Type of identifiers for bindable layers (inputs, outputs).
Definition: Types.hpp:202
void TrackOutputBinding(armnn::IConnectableLayer *layer, armnn::LayerBindingId id, const armnn::TensorInfo &tensorInfo)
void ResolveInPlaceLayers(caffe::NetParameter &netParameter)
Modifies the Caffe network to replace "in-place" layers (whose top() and bottom() are both the same) ...
An output connection slot for a layer.
Definition: INetwork.hpp:37
std::vector< const caffe::LayerParameter * > GetInputs(const caffe::LayerParameter &layerParam)
Find the Caffe layers listed as inputs (bottoms) for a given layer.
std::enable_if_t< std::is_unsigned< Source >::value &&std::is_unsigned< Dest >::value, Dest > numeric_cast(Source source)
Definition: NumericCast.hpp:35
virtual int Connect(IInputSlot &destination)=0

◆ ParseBatchNormLayer()

void ParseBatchNormLayer ( const caffe::LayerParameter &  layerParam)
protected

Definition at line 1293 of file CaffeParser.cpp.

References CHECK_LOCATION, Connect(), armnn::Float32, CaffeParserBase::GetArmnnOutputSlotForCaffeTop(), IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), TensorInfo::GetShape(), armnnUtils::GetTensorInfo(), BatchNormalizationDescriptor::m_Eps, CaffeParserBase::m_Network, CaffeParserBase::SetArmnnOutputSlotForCaffeTop(), and IOutputSlot::SetTensorInfo().

1294 {
1295  ValidateNumInputsOutputs(layerParam, 1, 1);
1296 
1297  const TensorInfo& inputInfo = GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).GetTensorInfo();
1298 
1299  string name = layerParam.name();
1300 
1301  BatchNormParameter param = layerParam.batch_norm_param();
1302  // If use_global_stats is not explicitly set in the model, assume it to be true (its default value
1303  // when the network is in the testing phase).
1304  if (param.has_use_global_stats())
1305  {
1306  if (!param.use_global_stats())
1307  {
1308  throw ParseException(
1309  fmt::format("Error parsing Batch Norm layer '{}': "
1310  "Parameter 'use_global_stats' is set to false, which is "
1311  "unsupported (value used for training). {}",
1312  name,
1313  CHECK_LOCATION().AsString()));
1314  }
1315  }
1316 
1317  BatchNormalizationDescriptor desc;
1318  desc.m_Eps = param.eps();
1319 
1320  unsigned int channels = inputInfo.GetShape()[1];
1321  unsigned int shape[] = {channels};
1322 
1323  vector<float> meanData(channels);
1324  GetDataFromBlob(layerParam, meanData, 0);
1325 
1326  vector<float> varianceData(channels);
1327  GetDataFromBlob(layerParam, varianceData, 1);
1328 
1329  // Reads moving average factor and applies scaling (if required).
1330  const BlobProto& blob = layerParam.blobs(armnn::numeric_cast<int>(2));
1331  const float movingAverageFactor = blob.data(armnn::numeric_cast<int>(0));
1332  if(movingAverageFactor != 0.0f)
1333  {
1334  const float scaleFactor = 1.0f / movingAverageFactor;
1335  auto scaleFunction = [scaleFactor](float f) -> float { return f * scaleFactor; };
1336 
1337  std::transform(varianceData.begin(), varianceData.end(), varianceData.begin(), scaleFunction);
1338  std::transform(meanData.begin(), meanData.end(), meanData.begin(), scaleFunction);
1339  }
1340 
1341  // Identifies scale operation.
1342  vector<float> betaData(channels, 0.0f);
1343  vector<float> gammaData(channels, 1.0f);
1344 
1345  ConstTensor mean(TensorInfo(1, shape, armnn::DataType::Float32), meanData);
1346  ConstTensor variance(TensorInfo(1, shape, armnn::DataType::Float32), varianceData);
1347  ConstTensor beta(TensorInfo(1, shape, armnn::DataType::Float32), betaData);
1348  ConstTensor gamma(TensorInfo(1, shape, armnn::DataType::Float32), gammaData);
1349 
1350  armnn::IConnectableLayer* const batchNormLayer = m_Network->AddBatchNormalizationLayer(desc,
1351  mean, variance, beta, gamma, name.c_str());
1352  GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).Connect(batchNormLayer->GetInputSlot(0));
1353  batchNormLayer->GetOutputSlot(0).SetTensorInfo(inputInfo);
1354  SetArmnnOutputSlotForCaffeTop(layerParam.top(0), batchNormLayer->GetOutputSlot(0));
1355 }
const TensorShape & GetShape() const
Definition: Tensor.hpp:187
float m_Eps
Value to add to the variance. Used to avoid dividing by zero.
virtual void SetTensorInfo(const TensorInfo &tensorInfo)=0
A tensor defined by a TensorInfo (shape and data type) and an immutable backing store.
Definition: Tensor.hpp:314
virtual const IInputSlot & GetInputSlot(unsigned int index) const =0
Get a const input slot handle by slot index.
armnn::TensorInfo GetTensorInfo(unsigned int numberOfBatches, unsigned int numberOfChannels, unsigned int height, unsigned int width, const armnn::DataLayout dataLayout, const armnn::DataType dataType)
Definition: TensorUtils.cpp:38
virtual const IOutputSlot & GetOutputSlot(unsigned int index) const =0
Get the const output slot handle by slot index.
void Connect(armnn::IConnectableLayer *from, armnn::IConnectableLayer *to, const armnn::TensorInfo &tensorInfo, unsigned int fromIndex, unsigned int toIndex)
Definition: TestUtils.cpp:12
void SetArmnnOutputSlotForCaffeTop(const std::string &caffeTopName, armnn::IOutputSlot &armnnOutputSlot)
A BatchNormalizationDescriptor for the BatchNormalizationLayer.

◆ ParseConcatLayer()

void ParseConcatLayer ( const caffe::LayerParameter &  layerParam)
protected

Definition at line 1234 of file CaffeParser.cpp.

References CHECK_LOCATION, IOutputSlot::Connect(), CaffeParserBase::GetArmnnOutputSlotForCaffeTop(), IConnectableLayer::GetInputSlot(), TensorInfo::GetNumDimensions(), IConnectableLayer::GetOutputSlot(), TensorInfo::GetShape(), armnnUtils::GetTensorInfo(), CaffeParserBase::m_Network, CaffeParserBase::SetArmnnOutputSlotForCaffeTop(), IOutputSlot::SetTensorInfo(), and OriginsDescriptor::SetViewOriginCoord().

1235 {
1236  unsigned int numInputs = static_cast<unsigned int>(layerParam.bottom_size());
1237  // We assume concat happens along the channel dimension, which is 1 in (0, 1, 2, 3).
1238  unsigned int concatDim = 1;
1239  unsigned int numOfDims = 4;
1240 
1241  // we only consider 4-D tensor here
1242  OriginsDescriptor concatDescriptor(static_cast<uint32_t>(numInputs), numOfDims);
1243  std::vector<unsigned int>mergeDimSizes(numOfDims, 0u);
1244 
1245  unsigned int mergeDim = 0;
1246  for (unsigned int viewIndex = 0; viewIndex < numInputs; ++viewIndex)
1247  {
1248  const TensorInfo& inputInfo = GetArmnnOutputSlotForCaffeTop(
1249  layerParam.bottom(armnn::numeric_cast<int>(viewIndex))).GetTensorInfo();
1250  // Checks whether the dimensions of the input tensors are actually 4.
1251  if (inputInfo.GetNumDimensions()!=4)
1252  {
1253  throw ParseException(
1254  fmt::format("The number of dimensions for input tensors of "
1255  "the concatenation op should be 4. Inputs of {} has "
1256  "{} dimensions. {}",
1257  layerParam.name(),
1258  inputInfo.GetNumDimensions(),
1259  CHECK_LOCATION().AsString()));
1260  }
1261 
1262  mergeDimSizes[0] = inputInfo.GetShape()[0];
1263  mergeDimSizes[1] = inputInfo.GetShape()[1];
1264  mergeDimSizes[2] = inputInfo.GetShape()[2];
1265  mergeDimSizes[3] = inputInfo.GetShape()[3];
1266 
1267  for (unsigned int j = 0; j < concatDim; ++j)
1268  {
1269  concatDescriptor.SetViewOriginCoord(viewIndex, j, 0);
1270  }
1271 
1272  concatDescriptor.SetViewOriginCoord(viewIndex, concatDim, mergeDim);
1273  mergeDim += mergeDimSizes[concatDim];
1274 
1275  for (unsigned int j = concatDim+1; j < numOfDims; ++j)
1276  {
1277  concatDescriptor.SetViewOriginCoord(viewIndex, j, 0);
1278  }
1279  }
1280  mergeDimSizes[concatDim] = mergeDim;
1281 
1282  armnn::IConnectableLayer* concatlayer = m_Network->AddConcatLayer(concatDescriptor, layerParam.name().c_str());
1283  for (unsigned int i = 0; i < numInputs; ++i)
1284  {
1285  armnn::IOutputSlot& outputSlot = GetArmnnOutputSlotForCaffeTop(layerParam.bottom(armnn::numeric_cast<int>(i)));
1286  outputSlot.Connect(concatlayer->GetInputSlot(i));
1287  }
1288 
1289  concatlayer->GetOutputSlot(0).SetTensorInfo(armnn::TensorInfo(numOfDims, mergeDimSizes.data(), DataType::Float32));
1290  SetArmnnOutputSlotForCaffeTop(layerParam.top(0), concatlayer->GetOutputSlot(0));
1291 }
An OriginsDescriptor for the ConcatLayer.
unsigned int GetNumDimensions() const
Definition: Tensor.hpp:191

◆ ParseConvLayer()

void ParseConvLayer ( const caffe::LayerParameter &  layerParam)
protected

Definition at line 682 of file CaffeParser.cpp.

References CaffeParserBase::AddConvLayerWithDepthwiseConv(), CaffeParserBase::AddConvLayerWithSplits(), ARMNN_ASSERT, CaffeParserBase::BlobShapeToTensorInfo(), CHECK_LOCATION, IOutputSlot::Connect(), GET_OPTIONAL_WITH_VECTOR_FALLBACK, CaffeParserBase::GetArmnnOutputSlotForCaffeTop(), armnnUtils::GetTensorInfo(), Convolution2dDescriptor::m_BiasEnabled, CaffeParserBase::m_Network, Convolution2dDescriptor::m_PadBottom, Convolution2dDescriptor::m_PadLeft, Convolution2dDescriptor::m_PadRight, Convolution2dDescriptor::m_PadTop, Convolution2dDescriptor::m_StrideX, Convolution2dDescriptor::m_StrideY, CaffeParserBase::SetArmnnOutputSlotForCaffeTop(), and armnnCaffeParser::TensorDescToBlobShape().

683 {
684  // Ignored Caffe Parameters
685  // * Dilation Size
686  // * Weight Filler
687  // * Bias Filler
688  // * Engine
689  // * Force nd_im2col
690  // * Axis
691 
692  // Not Available ArmNN Interface Parameters
693  // * Rounding policy;
694 
695  ARMNN_ASSERT(layerParam.type() == "Convolution");
696  ValidateNumInputsOutputs(layerParam, 1, 1);
697 
698  ConvolutionParameter convParam = layerParam.convolution_param();
699  BlobShape inputShape = TensorDescToBlobShape(GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).GetTensorInfo());
700  const unsigned int numGroups = convParam.has_group() ? convParam.group() : 1;
701  unsigned int numFilters = convParam.num_output();
702 
703  const auto notFound = std::numeric_limits<unsigned int>::max();
704 
705  unsigned int kernelH = GET_OPTIONAL_WITH_VECTOR_FALLBACK(convParam, ConvolutionParameter,
706  kernel_h, kernel_size, unsigned int, notFound);
707  unsigned int kernelW = GET_OPTIONAL_WITH_VECTOR_FALLBACK(convParam, ConvolutionParameter,
708  kernel_w, kernel_size, unsigned int, notFound);
709 
710  unsigned int strideH = GET_OPTIONAL_WITH_VECTOR_FALLBACK(convParam, ConvolutionParameter,
711  stride_h, stride, unsigned int, 1u);
712  unsigned int strideW = GET_OPTIONAL_WITH_VECTOR_FALLBACK(convParam, ConvolutionParameter,
713  stride_w, stride, unsigned int, 1u);
714 
715  unsigned int padH = GET_OPTIONAL_WITH_VECTOR_FALLBACK(convParam, ConvolutionParameter,
716  pad_h, pad, unsigned int, 0u);
717  unsigned int padW = GET_OPTIONAL_WITH_VECTOR_FALLBACK(convParam, ConvolutionParameter,
718  pad_w, pad, unsigned int, 0u);
719 
720  Convolution2dDescriptor convolution2dDescriptor;
721  convolution2dDescriptor.m_PadLeft = padW;
722  convolution2dDescriptor.m_PadRight = padW;
723  convolution2dDescriptor.m_PadTop = padH;
724  convolution2dDescriptor.m_PadBottom = padH;
725  convolution2dDescriptor.m_StrideX = strideW;
726  convolution2dDescriptor.m_StrideY = strideH;
727  convolution2dDescriptor.m_BiasEnabled = convParam.has_bias_term() ? convParam.bias_term() : true;
728 
729  if (numGroups > numFilters)
730  {
731  throw ParseException(
732  fmt::format("Error parsing Convolution: {}. "
733  "The 'group'={} parameter cannot be larger than the "
734  "number of filters supplied ='{}'. {}",
735  layerParam.name(),
736  numGroups,
737  numFilters,
738  CHECK_LOCATION().AsString()));
739  }
740 
741  if (inputShape.dim_size() != 4)
742  {
743  throw ParseException(
744  fmt::format("Convolution input shape is expected to have 4 dimensions. "
745  "{}'s input has only {}. {}",
746  layerParam.name(),
747  inputShape.dim_size(),
748  CHECK_LOCATION().AsString()));
749  }
750 
751  if (numGroups > 1)
752  {
753  if (numGroups > inputShape.dim(1))
754  {
755  throw ParseException(
756  fmt::format("Error parsing Convolution: {}. "
757  "The 'group'={} parameter cannot be larger than the "
758  "channel of the input shape={} (in NCHW format). {}",
759  layerParam.name(),
760  numGroups,
761  inputShape.dim(1),
762  CHECK_LOCATION().AsString()));
763  }
764  else if (numGroups == inputShape.dim(1))
765  {
766  // we use a depthwise convolution here, because the number of groups equals to the
767  // input channels
768  AddConvLayerWithDepthwiseConv(layerParam, convolution2dDescriptor, kernelW, kernelH);
769  return;
770  }
771  else
772  {
773  // we split the input by channels into channels/groups separate convolutions
774  // and concatenate the results afterwards
775  AddConvLayerWithSplits(layerParam, convolution2dDescriptor, kernelW, kernelH);
776  return;
777  }
778  }
779 
780  // NOTE: at this point we only need to handle #group=1 case, all other cases should be
781  // handled by the AddConvLayer* helpers
782 
783  // Populate convolution output tensor descriptor dimensions
784  BlobShape outputShape;
785  outputShape.add_dim(0);
786  outputShape.set_dim(0, inputShape.dim(0));
787  outputShape.add_dim(1);
788  outputShape.set_dim(1, numFilters);
789  outputShape.add_dim(2);
790  outputShape.set_dim(
791  2, (static_cast<int>(
792  static_cast<float>(inputShape.dim(2) + 2 * padH - kernelH) /
793  static_cast<float>(strideH)) + 1));
794  outputShape.add_dim(3);
795  outputShape.set_dim(
796  3, (static_cast<int>(
797  static_cast<float>(inputShape.dim(3) + 2 * padW - kernelW) /
798  static_cast<float>(strideW)) + 1));
799 
800  // Load the weight data for ALL groups
801  vector<float> weightData(armnn::numeric_cast<size_t>(inputShape.dim(1) *
802  outputShape.dim(1) *
803  kernelH *
804  kernelW));
805  GetDataFromBlob(layerParam, weightData, 0);
806 
807  const unsigned int weightDimSizes[4] = {
808  static_cast<unsigned int>(outputShape.dim(1)), // output channels
809  static_cast<unsigned int>(inputShape.dim(1)), // input channels
810  kernelH,
811  kernelW};
812 
813  armnn::IConnectableLayer* returnLayer = nullptr;
814 
815  // Pull out the weights for this group from that loaded from the model file earlier
816  ConstTensor weights(TensorInfo(4, weightDimSizes, DataType::Float32), weightData.data());
817  Optional<ConstTensor> optionalBiases;
818  vector<float> biasData;
819  if (convolution2dDescriptor.m_BiasEnabled)
820  {
821  TensorInfo biasInfo;
822 
823  biasData.resize(armnn::numeric_cast<size_t>(outputShape.dim(1)), 1.f);
824  GetDataFromBlob(layerParam, biasData, 1);
825 
826  const unsigned int biasDimSizes[1] = {static_cast<unsigned int>(outputShape.dim(1))};
827  biasInfo = TensorInfo(1, biasDimSizes, DataType::Float32);
828 
829  // Pull out the biases for this group from that loaded from the model file earlier
830  ConstTensor biases(biasInfo, biasData.data());
831  optionalBiases = Optional<ConstTensor>(biases);
832  }
833  returnLayer = m_Network->AddConvolution2dLayer(convolution2dDescriptor,
834  weights,
835  optionalBiases,
836  layerParam.name().c_str());
837 
838  armnn::IOutputSlot& inputConnection = GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0));
839  inputConnection.Connect(returnLayer->GetInputSlot(0));
840  returnLayer->GetOutputSlot(0).SetTensorInfo(BlobShapeToTensorInfo(outputShape));
841 
842  if (!returnLayer)
843  {
844  throw ParseException(
845  fmt::format("Failed to create Convolution layer. "
846  "Layer={} #groups={} #filters={} {}",
847  layerParam.name(),
848  numGroups,
849  numFilters,
850  CHECK_LOCATION().AsString()));
851  }
852 
853  SetArmnnOutputSlotForCaffeTop(layerParam.top(0), returnLayer->GetOutputSlot(0));
854 }
uint32_t m_PadBottom
Padding bottom value in the height dimension.
bool m_BiasEnabled
Enable/disable bias.
A Convolution2dDescriptor for the Convolution2dLayer.
uint32_t m_PadRight
Padding right value in the width dimension.
uint32_t m_PadTop
Padding top value in the height dimension.
uint32_t m_StrideX
Stride value when proceeding through input for the width dimension.
#define ARMNN_ASSERT(COND)
Definition: Assert.hpp:14
armnn::TensorInfo BlobShapeToTensorInfo(const caffe::BlobShape &blobShape) const
Converts Caffe's protobuf tensor shape format to ArmNN's.
uint32_t m_StrideY
Stride value when proceeding through input for the height dimension.
void AddConvLayerWithSplits(const caffe::LayerParameter &layerParam, const armnn::Convolution2dDescriptor &desc, unsigned int kernelW, unsigned int kernelH)
ParseConv may use these helpers depending on the group parameter.
#define GET_OPTIONAL_WITH_VECTOR_FALLBACK(PARAM, PARAM_TYPE, OPTIONAL_VALUE, FALLBACK_VECTOR, VALUE_TYPE, DEFAULT_VALUE)
void AddConvLayerWithDepthwiseConv(const caffe::LayerParameter &layerParam, const armnn::Convolution2dDescriptor &desc, unsigned int kernelW, unsigned int kernelH)
uint32_t m_PadLeft
Padding left value in the width dimension.
BlobShape TensorDescToBlobShape(const TensorInfo &desc)

◆ ParseDropoutLayer()

void ParseDropoutLayer ( const caffe::LayerParameter &  layerParam)
protected

Definition at line 1426 of file CaffeParser.cpp.

References CHECK_LOCATION, CaffeParserBase::GetArmnnOutputSlotForCaffeTop(), and CaffeParserBase::SetArmnnOutputSlotForCaffeTop().

1427 {
1428  // Ignored for inference, so patch the single input to its single output.
1429  if (layerParam.bottom_size() != 1 || layerParam.top_size() != 1)
1430  {
1431  throw ParseException(
1432  fmt::format("Dropout layer '{}' should have exactly 1 bottom and 1 top. "
1433  "#bottoms={} #tops={} {}",
1434  layerParam.name(),
1435  layerParam.bottom_size(),
1436  layerParam.top_size(),
1437  CHECK_LOCATION().AsString()));
1438  }
1439  SetArmnnOutputSlotForCaffeTop(layerParam.top(0), GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)));
1440 }

◆ ParseEltwiseLayer()

void ParseEltwiseLayer ( const caffe::LayerParameter &  layerParam)
protected

Definition at line 1189 of file CaffeParser.cpp.

References CHECK_LOCATION, Connect(), CaffeParserBase::GetArmnnOutputSlotForCaffeTop(), IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), armnnUtils::GetTensorInfo(), CaffeParserBase::m_Network, CaffeParserBase::SetArmnnOutputSlotForCaffeTop(), and IOutputSlot::SetTensorInfo().

1190 {
1191  ValidateNumInputsOutputs(layerParam, 2, 1);
1192 
1193  const TensorInfo& inputInfo = GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).GetTensorInfo();
1194 
1195  // Ignored Caffe Parameters:
1196  // coeff
1197 
1198  EltwiseParameter_EltwiseOp operation = EltwiseParameter_EltwiseOp_SUM; // Defaults to sum as per caffe.
1199 
1200  if (layerParam.has_eltwise_param() && layerParam.eltwise_param().has_operation())
1201  {
1202  operation = layerParam.eltwise_param().operation();
1203  }
1204 
1205  armnn::IConnectableLayer* newLayer = nullptr;
1206  switch (operation)
1207  {
1208  case EltwiseParameter_EltwiseOp_SUM:
1209  {
1210  newLayer = m_Network->AddAdditionLayer(layerParam.name().c_str());
1211  break;
1212  }
1213  case EltwiseParameter_EltwiseOp_PROD:
1214  {
1215  newLayer = m_Network->AddMultiplicationLayer(layerParam.name().c_str());
1216  break;
1217  }
1218  default:
1219  {
1220  throw ParseException(
1221  fmt::format("Unsupported operation {} in Eltwise layer {} {}",
1222  operation,
1223  layerParam.name(),
1224  CHECK_LOCATION().AsString()));
1225  }
1226  }
1227 
1228  GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).Connect(newLayer->GetInputSlot(0));
1229  GetArmnnOutputSlotForCaffeTop(layerParam.bottom(1)).Connect(newLayer->GetInputSlot(1));
1230  newLayer->GetOutputSlot(0).SetTensorInfo(inputInfo);
1231  SetArmnnOutputSlotForCaffeTop(layerParam.top(0), newLayer->GetOutputSlot(0));
1232 }

◆ ParseInnerProductLayer()

void ParseInnerProductLayer ( const caffe::LayerParameter &  layerParam)
protected

Definition at line 1093 of file CaffeParser.cpp.

References Connect(), CaffeParserBase::GetArmnnOutputSlotForCaffeTop(), IConnectableLayer::GetInputSlot(), TensorInfo::GetNumDimensions(), IConnectableLayer::GetOutputSlot(), TensorInfo::GetShape(), armnnUtils::GetTensorInfo(), FullyConnectedDescriptor::m_BiasEnabled, CaffeParserBase::m_Network, FullyConnectedDescriptor::m_TransposeWeightMatrix, CaffeParserBase::SetArmnnOutputSlotForCaffeTop(), and IOutputSlot::SetTensorInfo().

1094 {
1095  InnerProductParameter param = layerParam.inner_product_param();
1096 
1097  ValidateNumInputsOutputs(layerParam, 1, 1);
1098 
1099  unsigned int outputSize = param.num_output();
1100 
1101  // Ignored Caffe Parameters:
1102  // Weight Filler
1103  // Bias Filler
1104  // Engine
1105  // Axis
1106 
1107  FullyConnectedDescriptor tensorFullyConnectedDescriptor;
1108 
1109  if (param.has_transpose())
1110  {
1111  // If true, assumes transposed weights.
1112  tensorFullyConnectedDescriptor.m_TransposeWeightMatrix = param.transpose();
1113  }
1114  else
1115  {
1116  // Caffe defaults to transposed.
1117  tensorFullyConnectedDescriptor.m_TransposeWeightMatrix = true;
1118  }
1119 
1120  const TensorInfo& inputInfo = GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).GetTensorInfo();
1121 
1122  TensorInfo weightInfo;
1123  TensorInfo biasInfo;
1124 
1125  // Allows implicit flattening of extra dimensions.
1126  unsigned int inputSize = inputInfo.GetShape()[1];
1127  for (unsigned int i = 2; i < inputInfo.GetNumDimensions(); ++i)
1128  {
1129  inputSize *= inputInfo.GetShape()[i];
1130  }
1131 
1132  const float* weightDataPtr = GetArrayPtrFromBlob(layerParam, 0);
1133  const unsigned int swTD[2] = { outputSize, inputSize };
1134  ConstTensor weights(TensorInfo(2, swTD, DataType::Float32), weightDataPtr);
1135 
1136  tensorFullyConnectedDescriptor.m_BiasEnabled = true;
1137  // Todo: check whether bias enabled.
1138  armnn::IConnectableLayer* fullyConnectedLayer = nullptr;
1139  if (tensorFullyConnectedDescriptor.m_BiasEnabled)
1140  {
1141  // BIAS VALUE
1142  const float* biasDataPtr = GetArrayPtrFromBlob(layerParam, 1);
1143 
1144  const unsigned int sbTD[1] = { outputSize };
1145 
1146  ConstTensor biases(TensorInfo(1, sbTD, DataType::Float32), biasDataPtr);
1147 
1148  fullyConnectedLayer = m_Network->AddFullyConnectedLayer(tensorFullyConnectedDescriptor,
1149  weights,
1150  Optional<ConstTensor>(biases),
1151  layerParam.name().c_str());
1152  }
1153  else
1154  {
1155  fullyConnectedLayer = m_Network->AddFullyConnectedLayer(tensorFullyConnectedDescriptor,
1156  weights,
1157  EmptyOptional(),
1158  layerParam.name().c_str());
1159  }
1160 
1161  TensorInfo outputInfo({ inputInfo.GetShape()[0], outputSize }, DataType::Float32);
1162  GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).Connect(fullyConnectedLayer->GetInputSlot(0));
1163  fullyConnectedLayer->GetOutputSlot(0).SetTensorInfo(outputInfo);
1164  SetArmnnOutputSlotForCaffeTop(layerParam.top(0), fullyConnectedLayer->GetOutputSlot(0));
1165 }

◆ ParseInputLayer()

void ParseInputLayer ( const caffe::LayerParameter &  layerParam)
protected

Adds an ArmNN layer to m_Network given a Caffe LayerParameter of the correct type, and records any newly created IOutputSlots using SetArmnnOutputSlotForCaffeTop().

Definition at line 353 of file CaffeParser.cpp.

References ARMNN_ASSERT, CaffeParserBase::BlobShapeToTensorInfo(), CHECK_LOCATION, CaffeParserBase::m_InputShapes, CaffeParserBase::m_Network, CaffeParserBase::m_NetworkInputsBindingInfo, armnn::numeric_cast(), CaffeParserBase::SetArmnnOutputSlotForCaffeTop(), TensorInfo::SetShape(), and CaffeParserBase::TrackInputBinding().

354 {
355  ARMNN_ASSERT(layerParam.type() == "Input");
356  ValidateNumInputsOutputs(layerParam, 0, 1);
357 
358  const InputParameter& param = layerParam.input_param();
359 
360  const armnn::LayerBindingId inputId = armnn::numeric_cast<armnn::LayerBindingId>(
361  m_NetworkInputsBindingInfo.size());
362  armnn::IConnectableLayer* const inputLayer = m_Network->AddInputLayer(inputId, layerParam.name().c_str());
363 
364  // Decides the tensor info for this input. This can be specified in the Caffe network but can also
365  // be overridden by user input (m_InputShapes).
366  armnn::TensorInfo inputTensorInfo;
367 
368  const BlobShape* originalShape = param.shape_size() > 0 && param.shape(0).dim_size() > 0 ?
369  &param.shape(0) : nullptr;
370  if (originalShape)
371  {
372  inputTensorInfo = BlobShapeToTensorInfo(*originalShape);
373  }
374 
375  auto overrideIt = m_InputShapes.find(layerParam.name());
376  if (overrideIt != m_InputShapes.end())
377  {
378  const TensorShape& overrideShape = overrideIt->second;
379  if (originalShape &&
380  ( originalShape->dim(1) != overrideShape[1]
381  || originalShape->dim(2) != overrideShape[2]
382  || originalShape->dim(3) != overrideShape[3]))
383  {
384  throw ParseException(
385  fmt::format("Parsed input shape for '{}' is incompatible with the override provided. {}",
386  layerParam.name(),
387  CHECK_LOCATION().AsString()));
388  }
389  inputTensorInfo.SetShape(overrideShape);
390  }
391  else if (!originalShape)
392  {
393  throw ParseException(
394  fmt::format("No input descriptor given for '{}' and no input shape found in caffe model. {}",
395  layerParam.name(),
396  CHECK_LOCATION().AsString()));
397  }
398 
399  TrackInputBinding(inputLayer, inputId, inputTensorInfo);
400  inputLayer->GetOutputSlot(0).SetTensorInfo(inputTensorInfo);
401  SetArmnnOutputSlotForCaffeTop(layerParam.top(0), inputLayer->GetOutputSlot(0));
402 }

◆ ParseLRNLayer()

void ParseLRNLayer ( const caffe::LayerParameter &  layerParam)
protected

Definition at line 994 of file CaffeParser.cpp.

References CHECK_LOCATION, Connect(), CaffeParserBase::GetArmnnOutputSlotForCaffeTop(), IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), armnnUtils::GetTensorInfo(), NormalizationDescriptor::m_Alpha, NormalizationDescriptor::m_Beta, NormalizationDescriptor::m_K, CaffeParserBase::m_Network, NormalizationDescriptor::m_NormChannelType, NormalizationDescriptor::m_NormMethodType, NormalizationDescriptor::m_NormSize, armnn::numeric_cast(), CaffeParserBase::SetArmnnOutputSlotForCaffeTop(), and IOutputSlot::SetTensorInfo().

995 {
996  ValidateNumInputsOutputs(layerParam, 1, 1);
997 
998  LRNParameter param = layerParam.lrn_param();
999 
1000  const TensorInfo& inputInfo = GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).GetTensorInfo();
1001 
1002  // Ignored BATCH NORMALIZATION Caffe Parameters.
1003  // Ignored MVN Caffe Parameters.
1004  // Ignored LRN Caffe Parameters.
1005  // Engine
1006 
1007  NormalizationDescriptor normalizationDescriptor;
1008  if (param.has_norm_region())
1009  {
1010  LRNParameter_NormRegion n = param.norm_region();
1011  switch (n)
1012  {
1013  case LRNParameter_NormRegion_ACROSS_CHANNELS:
1014  {
1015  normalizationDescriptor.m_NormChannelType = NormalizationAlgorithmChannel::Across;
1016  break;
1017  }
1018  case LRNParameter_NormRegion_WITHIN_CHANNEL:
1019  {
1020  normalizationDescriptor.m_NormChannelType = NormalizationAlgorithmChannel::Within;
1021  break;
1022  }
1023  default:
1024  {
1025  throw ParseException(
1026  fmt::format("Unknown region {} for LRN layer {} {}",
1027  n,
1028  layerParam.name(),
1029  CHECK_LOCATION().AsString()));
1030  }
1031  }
1032  }
1033  else
1034  {
1035  // Caffe defaults to normalization across channels.
1036  normalizationDescriptor.m_NormChannelType = NormalizationAlgorithmChannel::Across;
1037  }
1038 
1039  normalizationDescriptor.m_NormMethodType = NormalizationAlgorithmMethod::LocalBrightness;
1040  if (param.has_local_size())
1041  {
1042  normalizationDescriptor.m_NormSize = param.local_size();
1043  }
1044  else
1045  {
1046  throw ParseException(
1047  fmt::format("local_size not defined for LRN layer {} {}",
1048  layerParam.name(),
1049  CHECK_LOCATION().AsString()));
1050  }
1051 
1052  if (param.has_alpha())
1053  {
1054  normalizationDescriptor.m_Alpha = param.alpha();
1055  normalizationDescriptor.m_Alpha /= armnn::numeric_cast<float>(param.local_size());
1056  }
1057  else
1058  {
1059  throw ParseException(
1060  fmt::format("Alpha not defined for LRN layer {} {}",
1061  layerParam.name(),
1062  CHECK_LOCATION().AsString()));
1063  }
1064  if (param.has_beta())
1065  {
1066  normalizationDescriptor.m_Beta = param.beta();
1067  }
1068  else
1069  {
1070  throw ParseException(
1071  fmt::format("Beta not defined for LRN layer {} {}",
1072  layerParam.name(),
1073  CHECK_LOCATION().AsString()));
1074  }
1075 
1076  if (param.has_k())
1077  {
1078  normalizationDescriptor.m_K = param.k();
1079  }
1080  else
1081  {
1082  normalizationDescriptor.m_K = 1;
1083  }
1084 
1085  IConnectableLayer* const normLayer = m_Network->AddNormalizationLayer(normalizationDescriptor,
1086  layerParam.name().c_str());
1087  GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).Connect(normLayer->GetInputSlot(0));
1088  normLayer->GetOutputSlot(0).SetTensorInfo(inputInfo);
1089 
1090  SetArmnnOutputSlotForCaffeTop(layerParam.top(0), normLayer->GetOutputSlot(0));
1091 }

◆ ParsePoolingLayer()

void ParsePoolingLayer ( const caffe::LayerParameter &  layerParam)
protected

Definition at line 856 of file CaffeParser.cpp.

References CHECK_LOCATION, Connect(), GET_OPTIONAL_WITH_FALLBACK, CaffeParserBase::GetArmnnOutputSlotForCaffeTop(), IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), TensorInfo::GetShape(), armnnUtils::GetTensorInfo(), CaffeParserBase::m_Network, Pooling2dDescriptor::m_OutputShapeRounding, Pooling2dDescriptor::m_PadBottom, Pooling2dDescriptor::m_PaddingMethod, Pooling2dDescriptor::m_PadLeft, Pooling2dDescriptor::m_PadRight, Pooling2dDescriptor::m_PadTop, Pooling2dDescriptor::m_PoolHeight, Pooling2dDescriptor::m_PoolType, Pooling2dDescriptor::m_PoolWidth, Pooling2dDescriptor::m_StrideX, Pooling2dDescriptor::m_StrideY, armnn::Max, CaffeParserBase::SetArmnnOutputSlotForCaffeTop(), and IOutputSlot::SetTensorInfo().

857 {
858  // Ignored Caffe Parameters
859  // Stochastic Pooling
860  // Engine
861 
862  ValidateNumInputsOutputs(layerParam, 1, 1);
863  PoolingParameter param = layerParam.pooling_param();
864  const TensorInfo& inputInfo = GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).GetTensorInfo();
865 
866  const auto notFound = std::numeric_limits<unsigned int>::max();
867 
868  unsigned int kernel_h = GET_OPTIONAL_WITH_FALLBACK(param, PoolingParameter,
869  kernel_h, kernel_size, unsigned int, notFound);
870  unsigned int kernel_w = GET_OPTIONAL_WITH_FALLBACK(param, PoolingParameter,
871  kernel_w, kernel_size, unsigned int, notFound);
872 
873  if ((kernel_h == notFound || kernel_w == notFound) && param.has_global_pooling())
874  {
875  kernel_h = inputInfo.GetShape()[2];
876  kernel_w = inputInfo.GetShape()[3];
877  }
878 
879  unsigned int stride_h = GET_OPTIONAL_WITH_FALLBACK(param, PoolingParameter,
880  stride_h, stride, unsigned int, notFound);
881  unsigned int stride_w = GET_OPTIONAL_WITH_FALLBACK(param, PoolingParameter,
882  stride_w, stride, unsigned int, notFound);
883 
884  if ((stride_h == notFound || stride_w == notFound) && param.has_global_pooling())
885  {
886  stride_h = 1;
887  stride_w = 1;
888  }
889 
890  unsigned int pad_h = GET_OPTIONAL_WITH_FALLBACK(param, PoolingParameter,
891  pad_h, pad, unsigned int, 0u);
892  unsigned int pad_w = GET_OPTIONAL_WITH_FALLBACK(param, PoolingParameter,
893  pad_w, pad, unsigned int, 0u);
894 
895  // Populate Weight and Bias Filter Descriptor
896  Pooling2dDescriptor pooling2dDescriptor;
897  if (param.has_pool())
898  {
899  PoolingParameter_PoolMethod p = param.pool();
900  switch (p)
901  {
902  case PoolingParameter_PoolMethod_MAX:
903  {
904  pooling2dDescriptor.m_PoolType = PoolingAlgorithm::Max;
905  break;
906  }
907  case PoolingParameter_PoolMethod_AVE:
908  {
909  pooling2dDescriptor.m_PoolType = PoolingAlgorithm::Average;
910  break;
911  }
912  case PoolingParameter_PoolMethod_STOCHASTIC:
913  {
914  throw ParseException(
915  fmt::format("Pooling Layer: Stochastic Pooling Not Supported. Layer={} {}",
916  layerParam.name(),
917  CHECK_LOCATION().AsString()));
918  }
919  default:
920  {
921  throw ParseException(
922  fmt::format("Pooling Layer: unknown pooling method: {} for layer: {} {}",
923  p,
924  layerParam.name(),
925  CHECK_LOCATION().AsString()));
926  }
927  }
928  }
929  else
930  {
931  throw ParseException(
932  fmt::format("No Pooling Method Defined for {} {}",
933  layerParam.name(),
934  CHECK_LOCATION().AsString()));
935  }
936 
937  pooling2dDescriptor.m_PadLeft = pad_w;
938  pooling2dDescriptor.m_PadRight = pad_w;
939  pooling2dDescriptor.m_PadTop = pad_h;
940  pooling2dDescriptor.m_PadBottom = pad_h;
941  pooling2dDescriptor.m_StrideX = stride_w;
942  pooling2dDescriptor.m_StrideY = stride_h;
943  pooling2dDescriptor.m_PoolWidth = kernel_w;
944  pooling2dDescriptor.m_PoolHeight = kernel_h;
945 
946  pooling2dDescriptor.m_OutputShapeRounding = OutputShapeRounding::Ceiling;
947  pooling2dDescriptor.m_PaddingMethod = PaddingMethod::IgnoreValue;
948 
949  armnn::IConnectableLayer* poolingLayer = m_Network->AddPooling2dLayer(pooling2dDescriptor,
950  layerParam.name().c_str());
951 
952  TensorInfo outputInfo(
953  { inputInfo.GetShape()[0],
954  inputInfo.GetShape()[1],
955  static_cast<unsigned int>(ceil(
956  static_cast<float>(inputInfo.GetShape()[2] + 2 * pad_h - kernel_h) /
957  armnn::numeric_cast<float>(stride_h))) + 1,
958  static_cast<unsigned int>(ceil(
959  static_cast<float>(inputInfo.GetShape()[3] + 2 * pad_w - kernel_w) /
960  armnn::numeric_cast<float>(stride_w))) + 1 },
961  DataType::Float32);
962 
963  GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).Connect(poolingLayer->GetInputSlot(0));
964  poolingLayer->GetOutputSlot(0).SetTensorInfo(outputInfo);
965  SetArmnnOutputSlotForCaffeTop(layerParam.top(0), poolingLayer->GetOutputSlot(0));
966 }

◆ ParseReluLayer()

void ParseReluLayer ( const caffe::LayerParameter &  layerParam)
protected

Definition at line 968 of file CaffeParser.cpp.

References Connect(), CaffeParserBase::GetArmnnOutputSlotForCaffeTop(), IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), armnnUtils::GetTensorInfo(), ActivationDescriptor::m_A, ActivationDescriptor::m_Function, CaffeParserBase::m_Network, CaffeParserBase::SetArmnnOutputSlotForCaffeTop(), and IOutputSlot::SetTensorInfo().

969 {
970  ValidateNumInputsOutputs(layerParam, 1, 1);
971 
972  const string& name = layerParam.name();
973  const ReLUParameter& param = layerParam.relu_param();
974 
975  ActivationDescriptor activationDescriptor;
976  const float negativeSlope = param.negative_slope();
977  if (negativeSlope == 0.0f)
978  {
979  activationDescriptor.m_Function = ActivationFunction::ReLu;
980  }
981  else
982  {
983  activationDescriptor.m_Function = ActivationFunction::LeakyReLu;
984  activationDescriptor.m_A = negativeSlope;
985  }
986 
987  const TensorInfo& inputInfo = GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).GetTensorInfo();
988  IConnectableLayer* const activationLayer = m_Network->AddActivationLayer(activationDescriptor, name.c_str());
989  GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).Connect(activationLayer->GetInputSlot(0));
990  activationLayer->GetOutputSlot(0).SetTensorInfo(inputInfo);
991  SetArmnnOutputSlotForCaffeTop(layerParam.top(0), activationLayer->GetOutputSlot(0));
992 }

◆ ParseScaleLayer()

void ParseScaleLayer ( const caffe::LayerParameter &  layerParam)
protected

Definition at line 1357 of file CaffeParser.cpp.

References CHECK_LOCATION, Connect(), armnn::Float32, CaffeParserBase::GetArmnnOutputSlotForCaffeTop(), IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), TensorInfo::GetShape(), armnnUtils::GetTensorInfo(), BatchNormalizationDescriptor::m_Eps, CaffeParserBase::m_Network, CaffeParserBase::SetArmnnOutputSlotForCaffeTop(), and IOutputSlot::SetTensorInfo().

1358 {
1359  // Current suboptimal solution: add a batch normalization layer with 0 mean and 1 variance.
1360  ValidateNumInputsOutputs(layerParam, 1, 1);
1361 
1362  const TensorInfo& inputInfo = GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).GetTensorInfo();
1363 
1364  string name = layerParam.name();
1365 
1366  ScaleParameter param = layerParam.scale_param();
1367  if (param.axis() != 1)
1368  {
1369  // Would have to use something other than BatchNormalizationLayer in this case
1370  throw ParseException(
1371  fmt::format("Loading Scale Layer: Only axis 1 is supported currently. "
1372  "Layer={} Axis={} {}",
1373  layerParam.name(),
1374  param.axis(),
1375  CHECK_LOCATION().AsString()));
1376  }
1377 
1378  unsigned int channels = inputInfo.GetShape()[1];
1379  unsigned int shape[] = {channels};
1380 
1381  BatchNormalizationDescriptor desc;
1382  desc.m_Eps = 0.0f; // Don't need epsilon if variance is 1.
1383  vector<float> meanData(channels, 0.0f);
1384  vector<float> varianceData(channels, 1.0f);
1385  vector<float> betaData(channels, 0.0f);
1386  vector<float> gammaData(channels);
1387 
1388  GetDataFromBlob(layerParam, gammaData, 0);
1389 
1390  if(param.has_bias_term())
1391  {
1392  GetDataFromBlob(layerParam, betaData, 1);
1393  }
1394 
1395  ConstTensor mean(TensorInfo(1, shape, armnn::DataType::Float32), meanData);
1396  ConstTensor variance(TensorInfo(1, shape, armnn::DataType::Float32), varianceData);
1397  ConstTensor beta(TensorInfo(1, shape, armnn::DataType::Float32), betaData);
1398  ConstTensor gamma(TensorInfo(1, shape, armnn::DataType::Float32), gammaData);
1399 
1400  armnn::IConnectableLayer* const batchNormLayer = m_Network->AddBatchNormalizationLayer(desc,
1401  mean, variance, beta, gamma, name.c_str());
1402  GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).Connect(batchNormLayer->GetInputSlot(0));
1403  batchNormLayer->GetOutputSlot(0).SetTensorInfo(inputInfo);
1404  SetArmnnOutputSlotForCaffeTop(layerParam.top(0), batchNormLayer->GetOutputSlot(0));
1405 }

◆ ParseSoftmaxLayer()

void ParseSoftmaxLayer ( const caffe::LayerParameter &  layerParam)
protected

Definition at line 1167 of file CaffeParser.cpp.

References Connect(), CaffeParserBase::GetArmnnOutputSlotForCaffeTop(), IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), armnnUtils::GetTensorInfo(), SoftmaxDescriptor::m_Axis, CaffeParserBase::m_Network, CaffeParserBase::SetArmnnOutputSlotForCaffeTop(), and IOutputSlot::SetTensorInfo().

1168 {
1169  ValidateNumInputsOutputs(layerParam, 1, 1);
1170 
1171  SoftmaxParameter param = layerParam.softmax_param();
1172 
1173  const TensorInfo& inputInfo = GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).GetTensorInfo();
1174 
1175  // Ignored Caffe Parameters:
1176  // axis
1177  // Engine
1178 
1179  armnn::SoftmaxDescriptor softmaxDescriptor;
1180  softmaxDescriptor.m_Axis = 1;
1181  armnn::IConnectableLayer* const softmaxLayer = m_Network->AddSoftmaxLayer(
1182  softmaxDescriptor,
1183  layerParam.name().c_str());
1184  GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0)).Connect(softmaxLayer->GetInputSlot(0));
1185  softmaxLayer->GetOutputSlot(0).SetTensorInfo(inputInfo);
1186  SetArmnnOutputSlotForCaffeTop(layerParam.top(0), softmaxLayer->GetOutputSlot(0));
1187 }

◆ ParseSplitLayer()

void ParseSplitLayer ( const caffe::LayerParameter &  layerParam)
protected

Definition at line 1407 of file CaffeParser.cpp.

References CHECK_LOCATION, CaffeParserBase::GetArmnnOutputSlotForCaffeTop(), and CaffeParserBase::SetArmnnOutputSlotForCaffeTop().

1408 {
1409  // Used in caffe to duplicate memory - not necessary in armnn.
1410  if (layerParam.bottom_size() != 1)
1411  {
1412  throw ParseException(
1413  fmt::format("Split layer '{}' should have exactly 1 bottom. "
1414  "#bottoms={} {}",
1415  layerParam.name(),
1416  layerParam.bottom_size(),
1417  CHECK_LOCATION().AsString()));
1418  }
1419  armnn::IOutputSlot& outputSlot = GetArmnnOutputSlotForCaffeTop(layerParam.bottom(0));
1420  for (int i = 0; i < layerParam.top_size(); i++)
1421  {
1422  SetArmnnOutputSlotForCaffeTop(layerParam.top(i), outputSlot);
1423  }
1424 }

◆ ResolveInPlaceLayers()

void ResolveInPlaceLayers ( caffe::NetParameter &  netParameter)
protected

Modifies the Caffe network to replace "in-place" layers (whose top() and bottom() are both the same) with regular layers.

This simplifies further parsing.

Definition at line 1513 of file CaffeParser.cpp.

References CHECK_LOCATION.

Referenced by RecordByRecordCaffeParser::CreateNetworkFromBinaryFile(), and CaffeParserBase::LoadNetParam().

1514 {
1515  // Finds layers with the same top.
1516  std::map<std::string, std::vector<caffe::LayerParameter*>> layersByTop;
1517  for (int layerIdx = 0; layerIdx < netParameter.layer_size(); ++layerIdx)
1518  {
1519  caffe::LayerParameter& layer = *netParameter.mutable_layer(layerIdx);
1520  std::string name = layer.name();
1521  for (int i = 0; i < layer.top_size(); ++i)
1522  {
1523  layersByTop[layer.top(i)].push_back(&layer);
1524  }
1525  }
1526 
1527  // For each set of layers with the same top, resolves them to a linear chain rather than in-place layers.
1528  // Note that for 'regular' layers, there will be a single layer in each group and so this will be a no-op.
1529  for (auto layersWithSameTopIt : layersByTop)
1530  {
1531  const std::string& top = layersWithSameTopIt.first;
1532  const std::vector<caffe::LayerParameter*>& layersWithSameTop = layersWithSameTopIt.second;
1533 
1534  // Chains the layers together in the order that they are listed in the prototxt (hopefully this is correct).
1535  // Note that the last layer will not have its top modified so that other layers will continue to reference it.
1536  for (unsigned int layerIdx = 0; layerIdx < layersWithSameTop.size() - 1; ++layerIdx)
1537  {
1538  caffe::LayerParameter& layer1 = *layersWithSameTop[layerIdx];
1539  caffe::LayerParameter& layer2 = *layersWithSameTop[layerIdx+1];
1540  if (layer1.top_size() != 1)
1541  {
1542  throw ParseException(
1543  fmt::format("Node '{}' is an in-place layer but doesn't have exactly one "
1544  "top. It has {} instead. {}",
1545  layer1.name(),
1546  layer1.top_size(),
1547  CHECK_LOCATION().AsString()));
1548  }
1549  std::string newTop = layer1.name() + "_top";
1550  layer1.set_top(0, newTop);
1551  if (layer2.bottom_size() != 1 || layer2.bottom(0) != top)
1552  {
1553  throw ParseException(
1554  fmt::format("Node '{}' is an in-place layer but "
1555  "doesn't have exactly one bottom, or it doesn't match its top. "
1556  "#bottoms={}, first bottom is {}, top is {} {}",
1557  layer2.name(),
1558  layer2.bottom_size(),
1558  layer2.bottom(0),
1559  top,
1560  CHECK_LOCATION().AsString()));
1561  }
1562  layer2.set_bottom(0, newTop);
1563  }
1564  }
1565 }

◆ SetArmnnOutputSlotForCaffeTop()

void SetArmnnOutputSlotForCaffeTop ( const std::string &  caffeTopName,
armnn::IOutputSlot &  armnnOutputSlot 
)
protected

Definition at line 1494 of file CaffeParser.cpp.

References CHECK_LOCATION, and CaffeParserBase::m_ArmnnOutputSlotForCaffeTop.

Referenced by CaffeParserBase::AddConvLayerWithDepthwiseConv(), CaffeParserBase::AddConvLayerWithSplits(), CaffeParserBase::ParseBatchNormLayer(), CaffeParserBase::ParseConcatLayer(), CaffeParserBase::ParseConvLayer(), CaffeParserBase::ParseDropoutLayer(), CaffeParserBase::ParseEltwiseLayer(), CaffeParserBase::ParseInnerProductLayer(), CaffeParserBase::ParseInputLayer(), CaffeParserBase::ParseLRNLayer(), CaffeParserBase::ParsePoolingLayer(), CaffeParserBase::ParseReluLayer(), CaffeParserBase::ParseScaleLayer(), CaffeParserBase::ParseSoftmaxLayer(), and CaffeParserBase::ParseSplitLayer().

1496 {
1497  auto it = m_ArmnnOutputSlotForCaffeTop.find(caffeTopName);
1498  if (it == m_ArmnnOutputSlotForCaffeTop.end())
1499  {
1500  m_ArmnnOutputSlotForCaffeTop[caffeTopName] = &armnnOutputSlot;
1501  }
1502  else
1503  {
1504  throw ParseException(
1505  fmt::format("Attempting to add duplicate entry for Caffe top '{}' {}",
1506  caffeTopName,
1507  CHECK_LOCATION().AsString()));
1508  }
1509 }

◆ TrackBindingPoint()

void TrackBindingPoint ( armnn::IConnectableLayer *  layer,
armnn::LayerBindingId  id,
const armnn::TensorInfo &  tensorInfo,
const char *  bindingPointDesc,
std::unordered_map< std::string, BindingPointInfo > &  nameToBindingInfo 
)
staticprotected

Definition at line 1456 of file CaffeParser.cpp.

References CHECK_LOCATION, and IConnectableLayer::GetName().

Referenced by CaffeParserBase::TrackInputBinding(), and CaffeParserBase::TrackOutputBinding().

1461 {
1462  const std::string layerName = layer->GetName();
1463  auto it = nameToBindingInfo.find(layerName);
1464  if (it == nameToBindingInfo.end())
1465  {
1466  nameToBindingInfo[layerName] = std::make_pair(id, tensorInfo);
1467  }
1468  else
1469  {
1470  throw ParseException(
1471  fmt::format("Id {} used by more than one {} layer {}",
1472  id,
1473  bindingPointDesc,
1474  CHECK_LOCATION().AsString()));
1475  }
1476 }

◆ TrackInputBinding()

void TrackInputBinding ( armnn::IConnectableLayer *  layer,
armnn::LayerBindingId  id,
const armnn::TensorInfo &  tensorInfo 
)
protected

Definition at line 1442 of file CaffeParser.cpp.

References IConnectableLayer::GetName(), CaffeParserBase::m_NetworkInputsBindingInfo, and CaffeParserBase::TrackBindingPoint().

Referenced by CaffeParserBase::ParseInputLayer().

1445 {
1446  return TrackBindingPoint(layer, id, tensorInfo, layer->GetName(), m_NetworkInputsBindingInfo);
1447 }

◆ TrackOutputBinding()

void TrackOutputBinding ( armnn::IConnectableLayer *  layer,
armnn::LayerBindingId  id,
const armnn::TensorInfo &  tensorInfo 
)
protected

Definition at line 1449 of file CaffeParser.cpp.

References IConnectableLayer::GetName(), CaffeParserBase::m_NetworkOutputsBindingInfo, and CaffeParserBase::TrackBindingPoint().

Referenced by RecordByRecordCaffeParser::CreateNetworkFromBinaryFile(), and CaffeParserBase::LoadNetParam().

1452 {
1453  return TrackBindingPoint(layer, id, tensorInfo, layer->GetName(), m_NetworkOutputsBindingInfo);
1454 }

Member Data Documentation

◆ m_ArmnnOutputSlotForCaffeTop

std::unordered_map<std::string, armnn::IOutputSlot*> m_ArmnnOutputSlotForCaffeTop
protected

As we add armnn layers we store the armnn IOutputSlot which corresponds to the Caffe tops.

Definition at line 131 of file CaffeParser.hpp.

Referenced by CaffeParserBase::Cleanup(), CaffeParserBase::GetArmnnOutputSlotForCaffeTop(), and CaffeParserBase::SetArmnnOutputSlotForCaffeTop().

◆ m_CaffeLayersByTopName

std::map<std::string, const caffe::LayerParameter*> m_CaffeLayersByTopName
protected

◆ m_InputShapes

std::map<std::string, armnn::TensorShape> m_InputShapes
protected

◆ m_Network

armnn::INetworkPtr m_Network
protected
◆ m_NetworkInputsBindingInfo

std::unordered_map<std::string, BindingPointInfo> m_NetworkInputsBindingInfo
protected

Maps input layer names to their corresponding ids and tensor infos.

◆ m_NetworkOutputsBindingInfo

std::unordered_map<std::string, BindingPointInfo> m_NetworkOutputsBindingInfo
protected

Maps output layer names to their corresponding ids and tensor infos.

◆ m_RequestedOutputs

std::vector<std::string> m_RequestedOutputs
protected

◆ ms_CaffeLayerNameToParsingFunctions

const std::map< std::string, CaffeParserBase::OperationParsingFunction > ms_CaffeLayerNameToParsingFunctions
staticprotected

The documentation for this class was generated from the following files: