ArmNN 22.02
Convolution2dLayer Class Reference

This layer represents a convolution 2d operation. More...

#include <Convolution2dLayer.hpp>

Inheritance diagram for Convolution2dLayer:
Convolution2dLayer → LayerWithParameters< Convolution2dDescriptor > → Layer → IConnectableLayer

Public Member Functions

virtual std::unique_ptr< IWorkload > CreateWorkload (const IWorkloadFactory &factory) const override
 Makes a workload for the Convolution2d type. More...
 
Convolution2dLayer * Clone (Graph &graph) const override
 Creates a dynamically-allocated copy of this layer. More...
 
void ValidateTensorShapesFromInputs () override
 Check if the input tensor shape(s) will lead to a valid configuration of Convolution2dLayer. More...
 
std::vector< TensorShape > InferOutputShapes (const std::vector< TensorShape > &inputShapes) const override
 By default returns inputShapes if the number of inputs are equal to number of outputs, otherwise infers the output shapes from given input shapes and layer properties. More...
 
ARMNN_NO_DEPRECATE_WARN_BEGIN void Accept (ILayerVisitor &visitor) const override
 
ARMNN_NO_DEPRECATE_WARN_END void ExecuteStrategy (IStrategy &strategy) const override
 Apply a visitor to this layer. More...
 
void SerializeLayerParameters (ParameterStringifyFunction &fn) const override
 Helper to serialize the layer parameters to string. More...
 
- Public Member Functions inherited from LayerWithParameters< Convolution2dDescriptor >
const Convolution2dDescriptor & GetParameters () const override
 If the layer has a descriptor return it. More...
 
void SerializeLayerParameters (ParameterStringifyFunction &fn) const override
 Helper to serialize the layer parameters to string (currently used in DotSerializer and company). More...
 
- Public Member Functions inherited from Layer
 Layer (unsigned int numInputSlots, unsigned int numOutputSlots, LayerType type, const char *name)
 
 Layer (unsigned int numInputSlots, unsigned int numOutputSlots, LayerType type, DataLayout layout, const char *name)
 
const std::string & GetNameStr () const
 
const OutputHandler & GetOutputHandler (unsigned int i=0) const
 
OutputHandler & GetOutputHandler (unsigned int i=0)
 
ShapeInferenceMethod GetShapeInferenceMethod () const
 
const std::vector< InputSlot > & GetInputSlots () const
 
const std::vector< OutputSlot > & GetOutputSlots () const
 
std::vector< InputSlot >::iterator BeginInputSlots ()
 
std::vector< InputSlot >::iterator EndInputSlots ()
 
std::vector< OutputSlot >::iterator BeginOutputSlots ()
 
std::vector< OutputSlot >::iterator EndOutputSlots ()
 
bool IsOutputUnconnected ()
 
void ResetPriority () const
 
LayerPriority GetPriority () const
 
LayerType GetType () const override
 Returns the armnn::LayerType of this layer. More...
 
DataType GetDataType () const
 
const BackendId & GetBackendId () const
 
void SetBackendId (const BackendId &id)
 
virtual void CreateTensorHandles (const TensorHandleFactoryRegistry &registry, const IWorkloadFactory &factory, const bool IsMemoryManaged=true)
 
void VerifyLayerConnections (unsigned int expectedConnections, const CheckLocation &location) const
 
virtual void ReleaseConstantData ()
 
template<typename Op >
void OperateOnConstantTensors (Op op)
 
const char * GetName () const override
 Returns the name of the layer. More...
 
unsigned int GetNumInputSlots () const override
 Returns the number of connectable input slots. More...
 
unsigned int GetNumOutputSlots () const override
 Returns the number of connectable output slots. More...
 
const InputSlot & GetInputSlot (unsigned int index) const override
 Get a const input slot handle by slot index. More...
 
InputSlot & GetInputSlot (unsigned int index) override
 Get the input slot handle by slot index. More...
 
const OutputSlot & GetOutputSlot (unsigned int index=0) const override
 Get the const output slot handle by slot index. More...
 
OutputSlot & GetOutputSlot (unsigned int index=0) override
 Get the output slot handle by slot index. More...
 
void SetGuid (LayerGuid guid)
 
LayerGuid GetGuid () const final
 Returns the unique id of the layer. More...
 
void AddRelatedLayerName (const std::string layerName)
 
const std::list< std::string > & GetRelatedLayerNames ()
 
virtual void Reparent (Graph &dest, std::list< Layer *>::const_iterator iterator)=0
 
void BackendSelectionHint (Optional< BackendId > backend) final
 Provide a hint for the optimizer as to which backend to prefer for this layer. More...
 
Optional< BackendId > GetBackendHint () const
 
void SetShapeInferenceMethod (ShapeInferenceMethod shapeInferenceMethod)
 
template<typename T >
std::shared_ptr< T > GetAdditionalInformation () const
 
void SetAdditionalInfoForObject (const AdditionalInfoObjectPtr &additionalInfo)
 
- Public Member Functions inherited from IConnectableLayer
ARMNN_NO_DEPRECATE_WARN_BEGIN ARMNN_DEPRECATED_MSG_REMOVAL_DATE ("Accept is deprecated. The ILayerVisitor that works in conjunction with this " "Accept function is deprecated. Use IStrategy in combination with " "ExecuteStrategy instead, which is an ABI/API stable version of the " "visitor pattern.", "22.05") virtual void Accept(ILayerVisitor &visitor) const =0
 Apply a visitor to this layer. More...
 

Public Attributes

std::shared_ptr< ConstTensorHandle > m_Weight
 A unique pointer to store Weight values. More...
 
std::shared_ptr< ConstTensorHandle > m_Bias
 A unique pointer to store Bias values. More...
 

Protected Member Functions

 Convolution2dLayer (const Convolution2dDescriptor &param, const char *name)
 Constructor to create a Convolution2dLayer. More...
 
 ~Convolution2dLayer ()=default
 Default destructor. More...
 
ConstantTensors GetConstantTensorsByRef () override
 Retrieve the handles to the constant values stored by the layer. More...
 
- Protected Member Functions inherited from LayerWithParameters< Convolution2dDescriptor >
 LayerWithParameters (unsigned int numInputSlots, unsigned int numOutputSlots, LayerType type, const Convolution2dDescriptor &param, const char *name)
 
 ~LayerWithParameters ()=default
 
WorkloadInfo PrepInfoAndDesc (QueueDescriptor &descriptor) const
 Helper function to reduce duplication in *LayerCreateWorkload. More...
 
void ExecuteStrategy (IStrategy &strategy) const override
 Apply a visitor to this layer. More...
 
- Protected Member Functions inherited from Layer
virtual ~Layer ()=default
 
template<typename QueueDescriptor >
void CollectQueueDescriptorInputs (QueueDescriptor &descriptor, WorkloadInfo &info) const
 
template<typename QueueDescriptor >
void CollectQueueDescriptorOutputs (QueueDescriptor &descriptor, WorkloadInfo &info) const
 
void ValidateAndCopyShape (const TensorShape &outputShape, const TensorShape &inferredShape, const ShapeInferenceMethod shapeInferenceMethod, const std::string &layerName, const unsigned int outputSlotIndex=0)
 
void VerifyShapeInferenceType (const TensorShape &outputShape, ShapeInferenceMethod shapeInferenceMethod)
 
template<typename QueueDescriptor >
WorkloadInfo PrepInfoAndDesc (QueueDescriptor &descriptor) const
 Helper function to reduce duplication in *LayerCreateWorkload. More...
 
template<typename LayerType , typename ... Params>
LayerType * CloneBase (Graph &graph, Params &&... params) const
 
void SetAdditionalInfo (QueueDescriptor &descriptor) const
 
- Protected Member Functions inherited from IConnectableLayer
 ~IConnectableLayer ()
 Objects are not deletable via the handle. More...
 

Additional Inherited Members

- Public Types inherited from LayerWithParameters< Convolution2dDescriptor >
using DescriptorType = Convolution2dDescriptor
 
- Public Types inherited from IConnectableLayer
using ConstantTensors = std::vector< std::reference_wrapper< std::shared_ptr< ConstTensorHandle > >>
 
- Protected Attributes inherited from LayerWithParameters< Convolution2dDescriptor >
Convolution2dDescriptor m_Param
 The parameters for the layer (not including tensor-valued weights etc.). More...
 
- Protected Attributes inherited from Layer
AdditionalInfoObjectPtr m_AdditionalInfoObject
 
std::vector< OutputHandler > m_OutputHandlers
 
ShapeInferenceMethod m_ShapeInferenceMethod
 

Detailed Description

This layer represents a convolution 2d operation.

Definition at line 15 of file Convolution2dLayer.hpp.
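
A Convolution2dLayer is normally created indirectly, by adding a convolution to an INetwork. Below is a minimal sketch of that path, assuming the ArmNN 22.02 AddConvolution2dLayer overload that takes the weights and an optional bias directly; all shapes, names and values are illustrative only, and the OHWI weight layout for NHWC is an assumption consistent with InferOutputShapes() reading the output channel count from filterShape[0].

#include <armnn/Descriptors.hpp>
#include <armnn/INetwork.hpp>
#include <armnn/Optional.hpp>
#include <armnn/Tensor.hpp>
#include <armnn/Types.hpp>

#include <vector>

int main()
{
    using namespace armnn;

    // 3x3 convolution, stride 1, no padding, NHWC, bias disabled.
    Convolution2dDescriptor desc;
    desc.m_StrideX     = 1;
    desc.m_StrideY     = 1;
    desc.m_DataLayout  = DataLayout::NHWC;
    desc.m_BiasEnabled = false;

    // Weights: 8 output channels, 3x3 filter, 1 input channel (OHWI layout for NHWC).
    std::vector<float> weightData(8 * 3 * 3 * 1, 0.5f);
    TensorInfo weightInfo(TensorShape({ 8, 3, 3, 1 }), DataType::Float32);
    weightInfo.SetConstant();
    ConstTensor weights(weightInfo, weightData);

    INetworkPtr network = INetwork::Create();
    IConnectableLayer* input  = network->AddInputLayer(0, "input");
    IConnectableLayer* conv   = network->AddConvolution2dLayer(desc, weights, EmptyOptional(), "conv2d");
    IConnectableLayer* output = network->AddOutputLayer(0, "output");

    input->GetOutputSlot(0).Connect(conv->GetInputSlot(0));
    conv->GetOutputSlot(0).Connect(output->GetInputSlot(0));

    // 1x8x8x1 NHWC input; for this configuration InferOutputShapes() yields 1x6x6x8.
    input->GetOutputSlot(0).SetTensorInfo(TensorInfo(TensorShape({ 1, 8, 8, 1 }), DataType::Float32));
    conv->GetOutputSlot(0).SetTensorInfo(TensorInfo(TensorShape({ 1, 6, 6, 8 }), DataType::Float32));

    return 0;
}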

Constructor & Destructor Documentation

◆ Convolution2dLayer()

Convolution2dLayer (const Convolution2dDescriptor & param, const char * name)
protected

Constructor to create a Convolution2dLayer.

Parameters
[in] param  Convolution2dDescriptor to configure the convolution2d operation.
[in] name  Optional name for the layer.

Definition at line 23 of file Convolution2dLayer.cpp.

References armnn::Convolution2d.

Convolution2dLayer::Convolution2dLayer(const Convolution2dDescriptor& param, const char* name)
    : LayerWithParameters(1, 1, LayerType::Convolution2d, param, name)
{
}

◆ ~Convolution2dLayer()

~Convolution2dLayer ()
protected, default

Default destructor.

Member Function Documentation

◆ Accept()

ARMNN_NO_DEPRECATE_WARN_BEGIN void Accept (ILayerVisitor & visitor) const
override

Definition at line 148 of file Convolution2dLayer.cpp.

References ARMNN_NO_DEPRECATE_WARN_END, Layer::GetName(), LayerWithParameters< Convolution2dDescriptor >::GetParameters(), ManagedConstTensorHandle::GetTensorInfo(), Convolution2dLayer::m_Bias, Convolution2dLayer::m_Weight, and ManagedConstTensorHandle::Map().

void Convolution2dLayer::Accept(ILayerVisitor& visitor) const
{
    ManagedConstTensorHandle managedWeight(m_Weight);
    ConstTensor weightsTensor(managedWeight.GetTensorInfo(), managedWeight.Map());

    Optional<ConstTensor> optionalBiasTensor = EmptyOptional();
    ManagedConstTensorHandle managedBias(m_Bias);
    if (GetParameters().m_BiasEnabled)
    {
        ConstTensor biasTensor(managedBias.GetTensorInfo(), managedBias.Map());
        optionalBiasTensor = Optional<ConstTensor>(biasTensor);
    }

    visitor.VisitConvolution2dLayer(this, GetParameters(), weightsTensor, optionalBiasTensor, GetName());
}

◆ Clone()

Convolution2dLayer * Clone (Graph & graph) const
override, virtual

Creates a dynamically-allocated copy of this layer.

Parameters
[in] graph  The graph into which this layer is being cloned.

Implements Layer.

Definition at line 69 of file Convolution2dLayer.cpp.

References Layer::GetName(), Convolution2dLayer::m_Bias, LayerWithParameters< Convolution2dDescriptor >::m_Param, and Convolution2dLayer::m_Weight.

Convolution2dLayer* Convolution2dLayer::Clone(Graph& graph) const
{
    auto layer = CloneBase<Convolution2dLayer>(graph, m_Param, GetName());

    layer->m_Weight = m_Weight ? m_Weight : nullptr;

    if (layer->m_Param.m_BiasEnabled)
    {
        layer->m_Bias = m_Bias ? m_Bias : nullptr;
    }

    return std::move(layer);
}

◆ CreateWorkload()

std::unique_ptr< IWorkload > CreateWorkload (const IWorkloadFactory & factory) const
override, virtual

Makes a workload for the Convolution2d type.

Parameters
[in] graph  The graph where this layer can be found.
[in] factory  The workload factory which will create the workload.
Returns
A pointer to the created workload, or nullptr if not created.

Implements Layer.

Definition at line 49 of file Convolution2dLayer.cpp.

References ARMNN_ASSERT_MSG, ARMNN_SCOPED_PROFILING_EVENT, armnn::Convolution2d, IWorkloadFactory::CreateWorkload(), Convolution2dLayer::m_Bias, Convolution2dQueueDescriptor::m_Bias, Convolution2dDescriptor::m_BiasEnabled, LayerWithParameters< Convolution2dDescriptor >::m_Param, Convolution2dLayer::m_Weight, Convolution2dQueueDescriptor::m_Weight, LayerWithParameters< Convolution2dDescriptor >::PrepInfoAndDesc(), Layer::SetAdditionalInfo(), and armnn::Undefined.

std::unique_ptr<IWorkload> Convolution2dLayer::CreateWorkload(const IWorkloadFactory& factory) const
{
    // on this level constant data should not be released..
    ARMNN_ASSERT_MSG(m_Weight != nullptr, "Convolution2dLayer: Weights data should not be null.");
    ARMNN_SCOPED_PROFILING_EVENT(Compute::Undefined, "Convolution2dLayer_CreateWorkload");
    Convolution2dQueueDescriptor descriptor;

    descriptor.m_Weight = m_Weight.get();

    if (m_Param.m_BiasEnabled)
    {
        ARMNN_ASSERT_MSG(m_Bias != nullptr, "Convolution2dLayer: Bias data should not be null.");
        descriptor.m_Bias = m_Bias.get();
    }

    SetAdditionalInfo(descriptor);

    return factory.CreateWorkload(LayerType::Convolution2d, descriptor, PrepInfoAndDesc(descriptor));
}

◆ ExecuteStrategy()

ARMNN_NO_DEPRECATE_WARN_END void ExecuteStrategy (IStrategy & strategy) const
override, virtual

Apply a visitor to this layer.

Reimplemented from Layer.

Definition at line 165 of file Convolution2dLayer.cpp.

References IStrategy::ExecuteStrategy(), Layer::GetName(), LayerWithParameters< Convolution2dDescriptor >::GetParameters(), ManagedConstTensorHandle::GetTensorInfo(), Convolution2dLayer::m_Bias, Convolution2dLayer::m_Weight, and ManagedConstTensorHandle::Map().

void Convolution2dLayer::ExecuteStrategy(IStrategy& strategy) const
{
    ManagedConstTensorHandle managedWeight(m_Weight);
    std::vector<armnn::ConstTensor> constTensors { { managedWeight.GetTensorInfo(), managedWeight.Map() } };

    ManagedConstTensorHandle managedBias(m_Bias);
    if (GetParameters().m_BiasEnabled)
    {
        constTensors.emplace_back(ConstTensor(managedBias.GetTensorInfo(), managedBias.Map()));
    }

    strategy.ExecuteStrategy(this, GetParameters(), constTensors, GetName());
}
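
For orientation, a minimal IStrategy implementation that simply reports what ExecuteStrategy hands it could look like the sketch below. It assumes the armnn/IStrategy.hpp header and the pure-virtual IStrategy::ExecuteStrategy signature listed in the member functions above; PrintingStrategy itself is hypothetical and not part of ArmNN.

#include <armnn/INetwork.hpp>
#include <armnn/IStrategy.hpp>
#include <armnn/Tensor.hpp>

#include <iostream>
#include <vector>

// Hypothetical strategy: prints each visited layer's name and how many constant
// tensors (weights, optional bias) the layer passes to ExecuteStrategy.
class PrintingStrategy : public armnn::IStrategy
{
public:
    void ExecuteStrategy(const armnn::IConnectableLayer* layer,
                         const armnn::BaseDescriptor& /*descriptor*/,
                         const std::vector<armnn::ConstTensor>& constants,
                         const char* name,
                         const armnn::LayerBindingId /*id*/) override
    {
        std::cout << (name ? name : layer->GetName())
                  << ": " << constants.size() << " constant tensor(s)" << std::endl;
    }
};

// Typical use (assuming INetwork::ExecuteStrategy): PrintingStrategy s; network->ExecuteStrategy(s);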

◆ GetConstantTensorsByRef()

Layer::ConstantTensors GetConstantTensorsByRef ()
override, protected, virtual

Retrieve the handles to the constant values stored by the layer.

Returns
A vector of the constant tensors stored by this layer.

Reimplemented from Layer.

Definition at line 141 of file Convolution2dLayer.cpp.

References ARMNN_NO_DEPRECATE_WARN_BEGIN, Convolution2dLayer::m_Bias, and Convolution2dLayer::m_Weight.

Layer::ConstantTensors Convolution2dLayer::GetConstantTensorsByRef()
{
    // For API stability DO NOT ALTER order and add new members to the end of vector
    return {m_Weight, m_Bias};
}

◆ InferOutputShapes()

std::vector< TensorShape > InferOutputShapes (const std::vector< TensorShape > & inputShapes) const
override, virtual

By default returns inputShapes if the number of inputs are equal to number of outputs, otherwise infers the output shapes from given input shapes and layer properties.

Parameters
[in] inputShapes  The input shapes the layer has.
Returns
A vector to the inferred output shape.

Reimplemented from Layer.

Definition at line 83 of file Convolution2dLayer.cpp.

References ARMNN_ASSERT, ARMNN_ASSERT_MSG, DataLayoutIndexed::GetHeightIndex(), DataLayoutIndexed::GetWidthIndex(), Convolution2dDescriptor::m_DataLayout, Convolution2dDescriptor::m_DilationX, Convolution2dDescriptor::m_DilationY, Convolution2dDescriptor::m_PadBottom, Convolution2dDescriptor::m_PadLeft, Convolution2dDescriptor::m_PadRight, Convolution2dDescriptor::m_PadTop, LayerWithParameters< Convolution2dDescriptor >::m_Param, Convolution2dDescriptor::m_StrideX, Convolution2dDescriptor::m_StrideY, and armnn::NHWC.

Referenced by Convolution2dInferOutputShapeTest(), and Convolution2dLayer::ValidateTensorShapesFromInputs().

std::vector<TensorShape> Convolution2dLayer::InferOutputShapes(const std::vector<TensorShape>& inputShapes) const
{
    ARMNN_ASSERT(inputShapes.size() == 2);
    const TensorShape& inputShape = inputShapes[0];
    const TensorShape filterShape = inputShapes[1];

    // If we support multiple batch dimensions in the future, then this assert will need to change.
    ARMNN_ASSERT_MSG(inputShape.GetNumDimensions() == 4, "Convolutions will always have 4D input.");

    ARMNN_ASSERT(m_Param.m_StrideX > 0);
    ARMNN_ASSERT(m_Param.m_StrideY > 0);

    DataLayoutIndexed dataLayoutIndex(m_Param.m_DataLayout);

    unsigned int inWidth = inputShape[dataLayoutIndex.GetWidthIndex()];
    unsigned int inHeight = inputShape[dataLayoutIndex.GetHeightIndex()];
    unsigned int inBatchSize = inputShape[0];

    unsigned int filterWidth = filterShape[dataLayoutIndex.GetWidthIndex()];
    unsigned int dilatedFilterWidth = filterWidth + (m_Param.m_DilationX - 1) * (filterWidth - 1);
    unsigned int readWidth = (inWidth + m_Param.m_PadLeft + m_Param.m_PadRight) - dilatedFilterWidth;
    unsigned int outWidth = 1 + (readWidth / m_Param.m_StrideX);

    unsigned int filterHeight = filterShape[dataLayoutIndex.GetHeightIndex()];
    unsigned int dilatedFilterHeight = filterHeight + (m_Param.m_DilationY - 1) * (filterHeight - 1);
    unsigned int readHeight = (inHeight + m_Param.m_PadTop + m_Param.m_PadBottom) - dilatedFilterHeight;
    unsigned int outHeight = 1 + (readHeight / m_Param.m_StrideY);

    unsigned int outChannels = filterShape[0];
    unsigned int outBatchSize = inBatchSize;

    TensorShape tensorShape = m_Param.m_DataLayout == DataLayout::NHWC ?
        TensorShape( { outBatchSize, outHeight, outWidth, outChannels } ) :
        TensorShape( { outBatchSize, outChannels, outHeight, outWidth });

    return std::vector<TensorShape>({ tensorShape });
}
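
As a worked instance of the width arithmetic above, with purely illustrative numbers that are not taken from the ArmNN sources: an 8-pixel-wide input with one pixel of padding on each side, a 3-wide filter dilated by 2 and a stride of 2 gives an output width of 3. The same steps, written out:

#include <iostream>

int main()
{
    // Illustrative values: 8-wide input, 3-wide filter, pad 1 left/right, stride 2, dilation 2.
    unsigned int inWidth = 8, filterWidth = 3;
    unsigned int padLeft = 1, padRight = 1, strideX = 2, dilationX = 2;

    unsigned int dilatedFilterWidth = filterWidth + (dilationX - 1) * (filterWidth - 1); // 3 + 1*2 = 5
    unsigned int readWidth = (inWidth + padLeft + padRight) - dilatedFilterWidth;        // 10 - 5  = 5
    unsigned int outWidth  = 1 + (readWidth / strideX);                                  // 1 + 5/2 = 3

    std::cout << outWidth << std::endl; // prints 3; height follows the same pattern,
                                        // outChannels comes from filterShape[0] and the batch size passes through
    return 0;
}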

◆ SerializeLayerParameters()

void SerializeLayerParameters (ParameterStringifyFunction & fn) const
override, virtual

Helper to serialize the layer parameters to string.

(currently used in DotSerializer and company).

Reimplemented from Layer.

Definition at line 29 of file Convolution2dLayer.cpp.

References InputSlot::GetConnection(), DataLayoutIndexed::GetHeightIndex(), Layer::GetInputSlot(), TensorInfo::GetShape(), IOutputSlot::GetTensorInfo(), DataLayoutIndexed::GetWidthIndex(), Convolution2dDescriptor::m_DataLayout, LayerWithParameters< Convolution2dDescriptor >::m_Param, Convolution2dLayer::m_Weight, and LayerWithParameters< Parameters >::SerializeLayerParameters().

void Convolution2dLayer::SerializeLayerParameters(ParameterStringifyFunction& fn) const
{
    //using DescriptorType = Parameters;
    const std::vector<TensorShape>& inputShapes =
    {
        GetInputSlot(0).GetConnection()->GetTensorInfo().GetShape(),
        m_Weight->GetTensorInfo().GetShape()
    };
    const TensorShape filterShape = inputShapes[1];
    DataLayoutIndexed dataLayoutIndex(m_Param.m_DataLayout);
    unsigned int filterWidth = filterShape[dataLayoutIndex.GetWidthIndex()];
    unsigned int filterHeight = filterShape[dataLayoutIndex.GetHeightIndex()];
    unsigned int outChannels = filterShape[0];

    fn("OutputChannels",std::to_string(outChannels));
    fn("FilterWidth",std::to_string(filterWidth));
    fn("FilterHeight",std::to_string(filterHeight));
    LayerWithParameters<Convolution2dDescriptor>::SerializeLayerParameters(fn);
}
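
For orientation, a sketch of consuming the stringify callback. It assumes ParameterStringifyFunction is the std::function<void(const std::string&, const std::string&)> alias declared in ArmNN's internal SerializeLayerParameters.hpp; CollectParameters is a hypothetical helper, not ArmNN API.

#include <Convolution2dLayer.hpp>

#include <map>
#include <string>

// Hypothetical helper: gathers the name/value pairs emitted by SerializeLayerParameters.
std::map<std::string, std::string> CollectParameters(const armnn::Convolution2dLayer& convLayer)
{
    std::map<std::string, std::string> params;
    armnn::ParameterStringifyFunction fn =
        [&params](const std::string& name, const std::string& value)
        {
            params[name] = value; // e.g. "FilterWidth" -> "3", "OutputChannels" -> "8"
        };

    convLayer.SerializeLayerParameters(fn); // also emits the Convolution2dDescriptor fields via the base class
    return params;
}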

◆ ValidateTensorShapesFromInputs()

void ValidateTensorShapesFromInputs ()
override, virtual

Check if the input tensor shape(s) will lead to a valid configuration of Convolution2dLayer.

Parameters
[in] shapeInferenceMethod  Indicates if output shape shall be overwritten or just validated.

Implements Layer.

Definition at line 121 of file Convolution2dLayer.cpp.

References ARMNN_ASSERT, ARMNN_ASSERT_MSG, CHECK_LOCATION, InputSlot::GetConnection(), Layer::GetInputSlot(), Layer::GetOutputSlot(), TensorInfo::GetShape(), IOutputSlot::GetTensorInfo(), OutputSlot::GetTensorInfo(), Convolution2dLayer::InferOutputShapes(), Layer::m_ShapeInferenceMethod, Convolution2dLayer::m_Weight, Layer::ValidateAndCopyShape(), Layer::VerifyLayerConnections(), and Layer::VerifyShapeInferenceType().

void Convolution2dLayer::ValidateTensorShapesFromInputs()
{
    VerifyLayerConnections(1, CHECK_LOCATION());

    const TensorShape& outputShape = GetOutputSlot(0).GetTensorInfo().GetShape();

    VerifyShapeInferenceType(outputShape, m_ShapeInferenceMethod);

    // check if we m_Weight data is not nullptr
    ARMNN_ASSERT_MSG(m_Weight != nullptr, "Convolution2dLayer: Weights data should not be null.");

    auto inferredShapes = InferOutputShapes({
        GetInputSlot(0).GetConnection()->GetTensorInfo().GetShape(),
        m_Weight->GetTensorInfo().GetShape() });

    ARMNN_ASSERT(inferredShapes.size() == 1);

    ValidateAndCopyShape(outputShape, inferredShapes[0], m_ShapeInferenceMethod, "Convolution2dLayer");
}

Member Data Documentation

◆ m_Bias

std::shared_ptr< ConstTensorHandle > m_Bias

A unique pointer to store Bias values.

◆ m_Weight

std::shared_ptr< ConstTensorHandle > m_Weight

A unique pointer to store Weight values.

The documentation for this class was generated from the following files:
Convolution2dLayer.hpp
Convolution2dLayer.cpp