ArmNN
 22.08
DepthwiseConvolution2dQueueDescriptor Struct Reference

Depthwise Convolution 2D layer workload data.

#include <WorkloadData.hpp>

Inheritance diagram for DepthwiseConvolution2dQueueDescriptor:
QueueDescriptor → QueueDescriptorWithParameters< DepthwiseConvolution2dDescriptor > → DepthwiseConvolution2dQueueDescriptor

Public Member Functions

 DepthwiseConvolution2dQueueDescriptor ()
 
void Validate (const WorkloadInfo &workloadInfo) const
 
- Public Member Functions inherited from QueueDescriptorWithParameters< DepthwiseConvolution2dDescriptor >
virtual ~QueueDescriptorWithParameters ()=default
 
- Public Member Functions inherited from QueueDescriptor
virtual ~QueueDescriptor ()=default
 
void ValidateTensorNumDimensions (const TensorInfo &tensor, std::string const &descName, unsigned int numDimensions, std::string const &tensorName) const
 
void ValidateTensorNumDimNumElem (const TensorInfo &tensorInfo, unsigned int numDimension, unsigned int numElements, std::string const &tensorName) const
 
void ValidateInputsOutputs (const std::string &descName, unsigned int numExpectedIn, unsigned int numExpectedOut) const
 
template<typename T >
const T * GetAdditionalInformation () const
 

Public Attributes

const ConstTensorHandle * m_Weight
 
const ConstTensorHandle * m_Bias
 
- Public Attributes inherited from QueueDescriptorWithParameters< DepthwiseConvolution2dDescriptor >
DepthwiseConvolution2dDescriptor m_Parameters
 
- Public Attributes inherited from QueueDescriptor
std::vector< ITensorHandle * > m_Inputs
 
std::vector< ITensorHandle * > m_Outputs
 
void * m_AdditionalInfoObject
 
bool m_AllowExpandedDims = false
 

Additional Inherited Members

- Protected Member Functions inherited from QueueDescriptorWithParameters< DepthwiseConvolution2dDescriptor >
 QueueDescriptorWithParameters ()=default
 
 QueueDescriptorWithParameters (QueueDescriptorWithParameters const &)=default
 
QueueDescriptorWithParameters & operator= (QueueDescriptorWithParameters const &)=default
 
- Protected Member Functions inherited from QueueDescriptor
 QueueDescriptor ()
 
 QueueDescriptor (QueueDescriptor const &)=default
 
QueueDescriptor & operator= (QueueDescriptor const &)=default
 

Detailed Description

Depthwise Convolution 2D layer workload data.

Note
The weights are in the format [1, H, W, I*M], where I is the input channel size, M is the depthwise multiplier, and H, W are the height and width of the filter kernel. If per-channel quantization is applied, the weights are quantized along the last dimension/axis (I*M), which corresponds to the output channel size; the weights tensor then carries I*M scales, one for each element of the quantization axis. Be aware of this when reshaping the weights tensor: splitting the I*M axis, e.g. [1, H, W, I*M] -> [H, W, I, M], won't work without also taking care of the corresponding quantization scales. If no per-channel quantization is applied, reshaping the weights tensor won't cause any issues. Preconfigured permutation functions are available for this purpose.
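As an illustration of the per-channel quantization described above, the following sketch (not taken from the ArmNN sources; the shapes, scale values and the QSymmS8 data type are assumptions for the example) builds a weight TensorInfo in the [1, H, W, I*M] layout with one quantization scale per output channel, using TensorInfo::SetQuantizationScales() and TensorInfo::SetQuantizationDim():

#include <armnn/Optional.hpp>
#include <armnn/Tensor.hpp>

#include <vector>

// Hypothetical example values: I = 8 input channels, M = 2 depth multiplier, 3x3 kernel.
armnn::TensorInfo MakeDepthwiseWeightInfo()
{
    constexpr unsigned int inputChannels   = 8; // I
    constexpr unsigned int depthMultiplier = 2; // M
    constexpr unsigned int kernelHeight    = 3; // H
    constexpr unsigned int kernelWidth     = 3; // W

    // Weight layout is [1, H, W, I*M]; the last axis corresponds to the output channels.
    armnn::TensorInfo weightInfo(
        armnn::TensorShape({ 1, kernelHeight, kernelWidth, inputChannels * depthMultiplier }),
        armnn::DataType::QSymmS8);

    // Per-channel quantization: I*M scales, one per element of the last axis.
    // A reshape such as [1, H, W, I*M] -> [H, W, I, M] would also have to remap these scales.
    std::vector<float> scales(inputChannels * depthMultiplier, 0.05f); // placeholder scale values
    weightInfo.SetQuantizationScales(scales);
    weightInfo.SetQuantizationDim(armnn::Optional<unsigned int>(3u));

    return weightInfo;
}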

Definition at line 247 of file WorkloadData.hpp.

Constructor & Destructor Documentation

◆ DepthwiseConvolution2dQueueDescriptor()

Definition at line 249 of file WorkloadData.hpp.

DepthwiseConvolution2dQueueDescriptor()
    : m_Weight(nullptr)
    , m_Bias(nullptr)
{
}

Member Function Documentation

◆ Validate()

void Validate (const WorkloadInfo & workloadInfo) const

Definition at line 1411 of file WorkloadData.cpp.

References armnn::BFloat16, armnn::Float16, armnn::Float32, armnn::GetBiasDataType(), TensorInfo::GetDataType(), TensorInfo::GetShape(), WorkloadInfo::m_InputTensorInfos, WorkloadInfo::m_OutputTensorInfos, armnn::NCHW, armnn::QAsymmS8, armnn::QAsymmU8, armnn::QSymmS16, QueueDescriptor::ValidateTensorNumDimensions(), and OptionalReferenceSwitch< std::is_reference< T >::value, T >::value().

void DepthwiseConvolution2dQueueDescriptor::Validate(const WorkloadInfo& workloadInfo) const
{
    const std::string descriptorName{"DepthwiseConvolution2dQueueDescriptor"};

    uint32_t numInputs = 2;
    if (m_Parameters.m_BiasEnabled)
    {
        numInputs = 3;
    }

    ValidateNumInputs(workloadInfo, descriptorName, numInputs);
    ValidateNumOutputs(workloadInfo, descriptorName, 1);

    const TensorInfo& inputTensorInfo  = workloadInfo.m_InputTensorInfos[0];
    const TensorInfo& outputTensorInfo = workloadInfo.m_OutputTensorInfos[0];

    ValidateTensorNumDimensions(inputTensorInfo, descriptorName, 4, "input");
    ValidateTensorNumDimensions(outputTensorInfo, descriptorName, 4, "output");

    const TensorInfo& weightTensorInfo = workloadInfo.m_InputTensorInfos[1];
    ValidateTensorNumDimensions(weightTensorInfo, descriptorName, 4, "weight");

    if (m_Parameters.m_DilationX < 1 || m_Parameters.m_DilationY < 1)
    {
        throw InvalidArgumentException(
            fmt::format("{}: dilationX (provided {}) and dilationY (provided {}) "
                        "cannot be smaller than 1.",
                        descriptorName, m_Parameters.m_DilationX, m_Parameters.m_DilationY));
    }

    if (m_Parameters.m_StrideX <= 0 || m_Parameters.m_StrideY <= 0)
    {
        throw InvalidArgumentException(
            fmt::format("{}: strideX (provided {}) and strideY (provided {}) "
                        "cannot be either negative or 0.",
                        descriptorName, m_Parameters.m_StrideX, m_Parameters.m_StrideY));
    }

    if (weightTensorInfo.GetShape()[0] != 1)
    {
        throw InvalidArgumentException(fmt::format(
            "{0}: The weight format in armnn is expected to be [1, H, W, Cout]. "
            "But first dimension is not equal to 1. Provided weight shape: [{1}, {2}, {3}, {4}]",
            descriptorName,
            weightTensorInfo.GetShape()[0],
            weightTensorInfo.GetShape()[1],
            weightTensorInfo.GetShape()[2],
            weightTensorInfo.GetShape()[3]));
    }

    const unsigned int channelIndex = (m_Parameters.m_DataLayout == DataLayout::NCHW) ? 1 : 3;
    const unsigned int numWeightOutputChannelsRefFormat = weightTensorInfo.GetShape()[3];
    const unsigned int numWeightOutputChannelsAclFormat = weightTensorInfo.GetShape()[1];
    const unsigned int numOutputChannels = outputTensorInfo.GetShape()[channelIndex];

    // Weights format has two valid options: [1, H, W, Cout] (CpuRef) or [1, Cout, H, W] (CpuAcc/GpuAcc).
    bool validRefFormat = (numWeightOutputChannelsRefFormat == numOutputChannels);
    bool validAclFormat = (numWeightOutputChannelsAclFormat == numOutputChannels);

    if (!(validRefFormat || validAclFormat))
    {
        throw InvalidArgumentException(fmt::format(
            "{0}: The weight format in armnn is expected to be [1, H, W, Cout] (CpuRef) or [1, Cout, H, W] "
            "(CpuAcc/GpuAcc). But neither the 4th (CpuRef) or 2nd (CpuAcc/GpuAcc) dimension is equal to Cout. "
            "Cout = {1} Provided weight shape: [{2}, {3}, {4}, {5}]",
            descriptorName,
            numOutputChannels,
            weightTensorInfo.GetShape()[0],
            weightTensorInfo.GetShape()[1],
            weightTensorInfo.GetShape()[2],
            weightTensorInfo.GetShape()[3]));
    }

    ValidateWeightDataType(inputTensorInfo, weightTensorInfo, descriptorName);

    Optional<TensorInfo> optionalBiasTensorInfo;
    if (m_Parameters.m_BiasEnabled)
    {
        optionalBiasTensorInfo = MakeOptional<TensorInfo>(workloadInfo.m_InputTensorInfos[2]);
        const TensorInfo& biasTensorInfo = optionalBiasTensorInfo.value();

        ValidateBiasTensorQuantization(biasTensorInfo, inputTensorInfo, weightTensorInfo, descriptorName);
        ValidateTensorDataType(biasTensorInfo, GetBiasDataType(inputTensorInfo.GetDataType()), descriptorName, "bias");
    }
    ValidatePerAxisQuantization(inputTensorInfo,
                                outputTensorInfo,
                                weightTensorInfo,
                                optionalBiasTensorInfo,
                                descriptorName);

    std::vector<DataType> supportedTypes =
    {
        DataType::BFloat16,
        DataType::Float16,
        DataType::Float32,
        DataType::QAsymmS8,
        DataType::QAsymmU8,
        DataType::QSymmS16
    };

    ValidateDataTypes(inputTensorInfo, supportedTypes, descriptorName);
    ValidateTensorDataTypesMatch(inputTensorInfo, outputTensorInfo, descriptorName, "input", "output");
}
Referenced members and functions:

bool m_BiasEnabled: Enable/disable bias.
DataLayout m_DataLayout: The data layout to be used (NCHW, NHWC).
uint32_t m_StrideX: Stride value when proceeding through input for the width dimension.
uint32_t m_StrideY: Stride value when proceeding through input for the height dimension.
uint32_t m_DilationX: Dilation factor value for width dimension.
uint32_t m_DilationY: Dilation factor value for height dimension.
std::vector< TensorInfo > m_InputTensorInfos
std::vector< TensorInfo > m_OutputTensorInfos
const TensorShape & GetShape() const: Definition at line 191 of file Tensor.hpp.
DataType GetDataType() const: Definition at line 198 of file Tensor.hpp.
DataType GetBiasDataType(DataType inputDataType)
void ValidateTensorNumDimensions(const TensorInfo &tensor, std::string const &descName, unsigned int numDimensions, std::string const &tensorName) const
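
For context, here is a minimal sketch of how a caller might exercise Validate(). The tensor shapes, strides, padding and include paths are assumptions for illustration, not taken from this page. Validate() only inspects the TensorInfos carried in WorkloadInfo, so no real ITensorHandle objects are needed to trigger the checks:

#include <armnn/Tensor.hpp>
#include <armnn/backends/WorkloadData.hpp> // assumed header locations; they may differ
#include <armnn/backends/WorkloadInfo.hpp> // depending on the ArmNN include setup

void ValidateDepthwiseDescriptorExample()
{
    using namespace armnn;

    // NHWC input [N, H, W, Cin], weights in the CpuRef layout [1, H, W, Cout], output [N, H, W, Cout].
    TensorInfo inputInfo (TensorShape({ 1, 16, 16, 8 }), DataType::Float32);
    TensorInfo weightInfo(TensorShape({ 1, 3, 3, 8 }),   DataType::Float32);
    TensorInfo outputInfo(TensorShape({ 1, 16, 16, 8 }), DataType::Float32);

    DepthwiseConvolution2dQueueDescriptor descriptor;
    descriptor.m_Parameters.m_DataLayout  = DataLayout::NHWC;
    descriptor.m_Parameters.m_StrideX     = 1;
    descriptor.m_Parameters.m_StrideY     = 1;
    descriptor.m_Parameters.m_PadLeft     = 1;
    descriptor.m_Parameters.m_PadRight    = 1;
    descriptor.m_Parameters.m_PadTop      = 1;
    descriptor.m_Parameters.m_PadBottom   = 1;
    descriptor.m_Parameters.m_BiasEnabled = false;

    // The weights are passed as the second input. The handles themselves are not
    // dereferenced by Validate(), so placeholders are enough for this sketch.
    descriptor.m_Inputs  = { nullptr, nullptr };
    descriptor.m_Outputs = { nullptr };

    WorkloadInfo info;
    info.m_InputTensorInfos  = { inputInfo, weightInfo };
    info.m_OutputTensorInfos = { outputInfo };

    descriptor.Validate(info); // throws InvalidArgumentException if the descriptor is inconsistent
}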

Member Data Documentation

◆ m_Bias

const ConstTensorHandle* m_Bias

◆ m_Weight

const ConstTensorHandle* m_Weight


The documentation for this struct was generated from the following files:

WorkloadData.hpp
WorkloadData.cpp