ArmNN
 22.02
DepthwiseConvolution2dQueueDescriptor Struct Reference

Depthwise Convolution 2D layer workload data. More...

#include <WorkloadData.hpp>

Inheritance diagram for DepthwiseConvolution2dQueueDescriptor:
DepthwiseConvolution2dQueueDescriptor → QueueDescriptorWithParameters< DepthwiseConvolution2dDescriptor > → QueueDescriptor

Public Member Functions

 DepthwiseConvolution2dQueueDescriptor ()
 
void Validate (const WorkloadInfo &workloadInfo) const
 
- Public Member Functions inherited from QueueDescriptorWithParameters< DepthwiseConvolution2dDescriptor >
virtual ~QueueDescriptorWithParameters ()=default
 
- Public Member Functions inherited from QueueDescriptor
virtual ~QueueDescriptor ()=default
 
void ValidateInputsOutputs (const std::string &descName, unsigned int numExpectedIn, unsigned int numExpectedOut) const
 
template<typename T >
const T * GetAdditionalInformation () const
 

Public Attributes

const ConstTensorHandle * m_Weight
 
const ConstTensorHandle * m_Bias
 
- Public Attributes inherited from QueueDescriptorWithParameters< DepthwiseConvolution2dDescriptor >
DepthwiseConvolution2dDescriptor m_Parameters
 
- Public Attributes inherited from QueueDescriptor
std::vector< ITensorHandle * > m_Inputs
 
std::vector< ITensorHandle * > m_Outputs
 
void * m_AdditionalInfoObject
 

Additional Inherited Members

- Protected Member Functions inherited from QueueDescriptorWithParameters< DepthwiseConvolution2dDescriptor >
 QueueDescriptorWithParameters ()=default
 
 QueueDescriptorWithParameters (QueueDescriptorWithParameters const &)=default
 
QueueDescriptorWithParameters & operator= (QueueDescriptorWithParameters const &)=default
 
- Protected Member Functions inherited from QueueDescriptor
 QueueDescriptor ()
 
 QueueDescriptor (QueueDescriptor const &)=default
 
QueueDescriptor & operator= (QueueDescriptor const &)=default
 

Detailed Description

Depthwise Convolution 2D layer workload data.

Note
The weights are in the format [1, H, W, I*M], where I is the input channel size, M is the depthwise multiplier, and H and W are the height and width of the filter kernel. If per-channel quantization is applied, the weights are quantized along the last dimension/axis (I*M), which corresponds to the output channel size. In that case the weights tensor has I*M scales, one for each element of the quantization axis. Be aware of this when reshaping the weights tensor: splitting the I*M axis, e.g. [1, H, W, I*M] -> [H, W, I, M], will not work without taking care of the corresponding quantization scales. If no per-channel quantization is applied, reshaping the weights tensor causes no issues. Preconfigured permutation functions are available.

Definition at line 235 of file WorkloadData.hpp.

Constructor & Destructor Documentation

◆ DepthwiseConvolution2dQueueDescriptor()

Definition at line 237 of file WorkloadData.hpp.

238  : m_Weight(nullptr)
239  , m_Bias(nullptr)
240  {
241  }

Member Function Documentation

◆ Validate()

void Validate ( const WorkloadInfo & workloadInfo) const

Definition at line 1381 of file WorkloadData.cpp.

References armnn::BFloat16, armnn::Float16, armnn::Float32, armnn::GetBiasDataType(), TensorInfo::GetDataType(), TensorInfo::GetShape(), WorkloadInfo::m_InputTensorInfos, WorkloadInfo::m_OutputTensorInfos, armnn::NCHW, armnn::QAsymmS8, armnn::QAsymmU8, armnn::QSymmS16, and OptionalReferenceSwitch< std::is_reference< T >::value, T >::value().

1382 {
1383  const std::string descriptorName{"DepthwiseConvolution2dQueueDescriptor"};
1384 
1385  ValidateNumInputs(workloadInfo, descriptorName, 1);
1386  ValidateNumOutputs(workloadInfo, descriptorName, 1);
1387 
1388  const TensorInfo& inputTensorInfo = workloadInfo.m_InputTensorInfos[0];
1389  const TensorInfo& outputTensorInfo = workloadInfo.m_OutputTensorInfos[0];
1390 
1391  ValidateTensorNumDimensions(inputTensorInfo, descriptorName, 4, "input");
1392  ValidateTensorNumDimensions(outputTensorInfo, descriptorName, 4, "output");
1393 
1394  ValidatePointer(m_Weight, descriptorName, "weight");
1395 
1396  const TensorInfo& weightTensorInfo = m_Weight->GetTensorInfo();
1397  ValidateTensorNumDimensions(weightTensorInfo, descriptorName, 4, "weight");
1398 
1399  if (m_Parameters.m_DilationX < 1 || m_Parameters.m_DilationY < 1)
1400  {
1401  throw InvalidArgumentException(
1402  fmt::format("{}: dilationX (provided {}) and dilationY (provided {}) "
1403  "cannot be smaller than 1.",
1404  descriptorName, m_Parameters.m_DilationX, m_Parameters.m_DilationY));
1405  }
1406 
1407  if (m_Parameters.m_StrideX <= 0 || m_Parameters.m_StrideY <= 0 )
1408  {
1409  throw InvalidArgumentException(
1410  fmt::format("{}: strideX (provided {}) and strideY (provided {}) "
1411  "cannot be either negative or 0.",
1412  descriptorName, m_Parameters.m_StrideX, m_Parameters.m_StrideY));
1413  }
1414 
1415  const unsigned int channelIndex = (m_Parameters.m_DataLayout == DataLayout::NCHW) ? 1 : 3;
1416 
1417  // Expected weight shape: [ 1, H, W, I*M ] - This shape does NOT depend on the data layout
1418  // inputChannels * channelMultiplier should be equal to outputChannels.
1419  const unsigned int numWeightOutputChannels = weightTensorInfo.GetShape()[3]; // I*M=Cout
1420  const unsigned int numOutputChannels = outputTensorInfo.GetShape()[channelIndex];
1421  if (numWeightOutputChannels != numOutputChannels)
1422  {
1423  throw InvalidArgumentException(fmt::format(
1424  "{0}: The weight format in armnn is expected to be [1, H, W, Cout]."
1425  "But 4th dimension is not equal to Cout. Cout = {1} Provided weight shape: [{2}, {3}, {4}, {5}]",
1426  descriptorName,
1427  numOutputChannels,
1428  weightTensorInfo.GetShape()[0],
1429  weightTensorInfo.GetShape()[1],
1430  weightTensorInfo.GetShape()[2],
1431  weightTensorInfo.GetShape()[3]));
1432  }
1433  if (weightTensorInfo.GetShape()[0] != 1)
1434  {
1435  throw InvalidArgumentException(fmt::format(
1436  "{0}: The weight format in armnn is expected to be [1, H, W, Cout]."
1437  "But first dimension is not equal to 1. Provided weight shape: [{1}, {2}, {3}, {4}]",
1438  descriptorName,
1439  weightTensorInfo.GetShape()[0],
1440  weightTensorInfo.GetShape()[1],
1441  weightTensorInfo.GetShape()[2],
1442  weightTensorInfo.GetShape()[3]));
1443  }
1444 
1445  ValidateWeightDataType(inputTensorInfo, weightTensorInfo, descriptorName);
1446 
1447  Optional<TensorInfo> optionalBiasTensorInfo;
1448  if (m_Parameters.m_BiasEnabled)
1449  {
1450  ValidatePointer(m_Bias, descriptorName, "bias");
1451 
1452  optionalBiasTensorInfo = MakeOptional<TensorInfo>(m_Bias->GetTensorInfo());
1453  const TensorInfo& biasTensorInfo = optionalBiasTensorInfo.value();
1454 
1455  ValidateBiasTensorQuantization(biasTensorInfo, inputTensorInfo, weightTensorInfo, descriptorName);
1456  ValidateTensorDataType(biasTensorInfo, GetBiasDataType(inputTensorInfo.GetDataType()), descriptorName, "bias");
1457  }
1458  ValidatePerAxisQuantization(inputTensorInfo,
1459  outputTensorInfo,
1460  weightTensorInfo,
1461  optionalBiasTensorInfo,
1462  descriptorName);
1463 
1464  std::vector<DataType> supportedTypes =
1465  {
1466  DataType::BFloat16,
1467  DataType::Float32,
1468  DataType::Float16,
1469  DataType::QAsymmS8,
1470  DataType::QAsymmU8,
1471  DataType::QSymmS16
1472  };
1473 
1474  ValidateDataTypes(inputTensorInfo, supportedTypes, descriptorName);
1475  ValidateTensorDataTypesMatch(inputTensorInfo, outputTensorInfo, descriptorName, "input", "output");
1476 }
bool m_BiasEnabled: Enable/disable bias.
const TensorShape & GetShape() const (Definition: Tensor.hpp:191)
DataLayout m_DataLayout: The data layout to be used (NCHW, NHWC).
uint32_t m_DilationY: Dilation factor value for height dimension.
const TensorInfo & GetTensorInfo() const
std::vector< TensorInfo > m_InputTensorInfos
uint32_t m_StrideX: Stride value when proceeding through input for the width dimension.
uint32_t m_DilationX: Dilation factor value for width dimension.
DataType GetDataType() const (Definition: Tensor.hpp:198)
std::vector< TensorInfo > m_OutputTensorInfos
DataType GetBiasDataType(DataType inputDataType)
uint32_t m_StrideY: Stride value when proceeding through input for the height dimension.

Member Data Documentation

◆ m_Bias

const ConstTensorHandle* m_Bias

◆ m_Weight

const ConstTensorHandle* m_Weight

The documentation for this struct was generated from the following files:
WorkloadData.hpp
WorkloadData.cpp