ArmNN 22.11
DepthwiseConvolution2dQueueDescriptor Struct Reference

Depthwise Convolution 2D layer workload data. More...

#include <WorkloadData.hpp>

Inheritance diagram for DepthwiseConvolution2dQueueDescriptor:
QueueDescriptor -> QueueDescriptorWithParameters< DepthwiseConvolution2dDescriptor > -> DepthwiseConvolution2dQueueDescriptor

Public Member Functions

 DepthwiseConvolution2dQueueDescriptor ()
 
void Validate (const WorkloadInfo &workloadInfo) const
 
- Public Member Functions inherited from QueueDescriptorWithParameters< DepthwiseConvolution2dDescriptor >
virtual ~QueueDescriptorWithParameters ()=default
 
- Public Member Functions inherited from QueueDescriptor
virtual ~QueueDescriptor ()=default
 
void ValidateTensorNumDimensions (const TensorInfo &tensor, std::string const &descName, unsigned int numDimensions, std::string const &tensorName) const
 
void ValidateTensorNumDimNumElem (const TensorInfo &tensorInfo, unsigned int numDimension, unsigned int numElements, std::string const &tensorName) const
 
void ValidateInputsOutputs (const std::string &descName, unsigned int numExpectedIn, unsigned int numExpectedOut) const
 
template<typename T >
const T * GetAdditionalInformation () const
 

Public Attributes

const ConstTensorHandle * m_Weight
 
const ConstTensorHandle * m_Bias
 
- Public Attributes inherited from QueueDescriptorWithParameters< DepthwiseConvolution2dDescriptor >
DepthwiseConvolution2dDescriptor m_Parameters
 
- Public Attributes inherited from QueueDescriptor
std::vector< ITensorHandle * > m_Inputs
 
std::vector< ITensorHandle * > m_Outputs
 
void * m_AdditionalInfoObject
 
bool m_AllowExpandedDims = false
 

Additional Inherited Members

- Protected Member Functions inherited from QueueDescriptorWithParameters< DepthwiseConvolution2dDescriptor >
 QueueDescriptorWithParameters ()=default
 
 QueueDescriptorWithParameters (QueueDescriptorWithParameters const &)=default
 
QueueDescriptorWithParameters & operator= (QueueDescriptorWithParameters const &)=default
 
- Protected Member Functions inherited from QueueDescriptor
 QueueDescriptor ()
 
 QueueDescriptor (QueueDescriptor const &)=default
 
QueueDescriptor & operator= (QueueDescriptor const &)=default
 

Detailed Description

Depthwise Convolution 2D layer workload data.

Note
The weights are in the format [1, H, W, I*M], where I is the input channel count, M is the depthwise multiplier, and H and W are the height and width of the filter kernel. If per-channel quantization is applied, the weights are quantized along the last dimension/axis (I*M), which corresponds to the output channel count, so the weight tensor carries I*M scales, one for each element along the quantization axis. Be aware of this when reshaping the weight tensor: splitting the I*M axis, e.g. [1, H, W, I*M] -> [H, W, I, M], won't work without also rearranging the corresponding quantization scales. If no per-channel quantization is applied, reshaping the weight tensor causes no issues. There are preconfigured permutation functions available here.

Definition at line 247 of file WorkloadData.hpp.
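
For illustration, the weight layout and per-channel scales described above can be expressed directly with armnn::TensorInfo. The sketch below is illustrative only (kernel size, channel counts, scale values, the QSymmS8 data type and the helper name MakeDepthwiseWeightInfo are assumptions, not taken from this page):

#include <armnn/Tensor.hpp>
#include <armnn/Types.hpp>
#include <vector>

armnn::TensorInfo MakeDepthwiseWeightInfo()
{
    using namespace armnn;

    // 3x3 kernel, 8 input channels, depthwise multiplier 2 -> I*M = 16 output channels.
    constexpr unsigned int H = 3, W = 3, I = 8, M = 2;

    // One quantization scale per output channel (I*M values along the last axis).
    std::vector<float> perChannelScales(I * M, 0.05f);

    // Weights laid out as [1, H, W, I*M]; the quantization dimension is the I*M axis (index 3).
    return TensorInfo(TensorShape({1, H, W, I * M}),
                      DataType::QSymmS8,
                      perChannelScales,
                      3 /* quantizationDim */);
}

Reshaping such a tensor to [H, W, I, M] would also require regrouping perChannelScales, because each scale is tied to a position along the original I*M axis.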

Constructor & Destructor Documentation

◆ DepthwiseConvolution2dQueueDescriptor()

Definition at line 249 of file WorkloadData.hpp.

249  DepthwiseConvolution2dQueueDescriptor()
250      : m_Weight(nullptr)
251      , m_Bias(nullptr)
252  {
253  }

Member Function Documentation

◆ Validate()

void Validate (const WorkloadInfo & workloadInfo) const

Definition at line 1412 of file WorkloadData.cpp.

References armnn::BFloat16, armnn::Float16, armnn::Float32, armnn::GetBiasDataType(), TensorInfo::GetDataType(), TensorInfo::GetShape(), WorkloadInfo::m_InputTensorInfos, WorkloadInfo::m_OutputTensorInfos, armnn::NCHW, armnn::QAsymmS8, armnn::QAsymmU8, armnn::QSymmS16, QueueDescriptor::ValidateTensorNumDimensions(), and OptionalReferenceSwitch< std::is_reference< T >::value, T >::value().

1413 {
1414     const std::string descriptorName{"DepthwiseConvolution2dQueueDescriptor"};
1415 
1416     uint32_t numInputs = 2;
1417     if (m_Parameters.m_BiasEnabled)
1418     {
1419         numInputs = 3;
1420     }
1421 
1422     ValidateNumInputs(workloadInfo, descriptorName, numInputs);
1423     ValidateNumOutputs(workloadInfo, descriptorName, 1);
1424 
1425     const TensorInfo& inputTensorInfo  = workloadInfo.m_InputTensorInfos[0];
1426     const TensorInfo& outputTensorInfo = workloadInfo.m_OutputTensorInfos[0];
1427 
1428     ValidateTensorNumDimensions(inputTensorInfo, descriptorName, 4, "input");
1429     ValidateTensorNumDimensions(outputTensorInfo, descriptorName, 4, "output");
1430 
1431     const TensorInfo& weightTensorInfo = workloadInfo.m_InputTensorInfos[1];
1432     ValidateTensorNumDimensions(weightTensorInfo, descriptorName, 4, "weight");
1433 
1434     if (m_Parameters.m_DilationX < 1 || m_Parameters.m_DilationY < 1 )
1435     {
1436         throw InvalidArgumentException(
1437             fmt::format("{}: dilationX (provided {}) and dilationY (provided {}) "
1438                         "cannot be smaller than 1.",
1439                         descriptorName, m_Parameters.m_DilationX, m_Parameters.m_DilationY));
1440     }
1441 
1442     if (m_Parameters.m_StrideX <= 0 || m_Parameters.m_StrideY <= 0 )
1443     {
1444         throw InvalidArgumentException(
1445             fmt::format("{}: strideX (provided {}) and strideY (provided {}) "
1446                         "cannot be either negative or 0.",
1447                         descriptorName, m_Parameters.m_StrideX, m_Parameters.m_StrideY));
1448     }
1449 
1450     if (weightTensorInfo.GetShape()[0] != 1)
1451     {
1452         throw InvalidArgumentException(fmt::format(
1453             "{0}: The weight format in armnn is expected to be [1, H, W, Cout]."
1454             "But first dimension is not equal to 1. Provided weight shape: [{1}, {2}, {3}, {4}]",
1455             descriptorName,
1456             weightTensorInfo.GetShape()[0],
1457             weightTensorInfo.GetShape()[1],
1458             weightTensorInfo.GetShape()[2],
1459             weightTensorInfo.GetShape()[3]));
1460     }
1461 
1462     const unsigned int channelIndex = (m_Parameters.m_DataLayout == DataLayout::NCHW) ? 1 : 3;
1463     const unsigned int numWeightOutputChannelsRefFormat = weightTensorInfo.GetShape()[3];
1464     const unsigned int numWeightOutputChannelsAclFormat = weightTensorInfo.GetShape()[1];
1465     const unsigned int numOutputChannels = outputTensorInfo.GetShape()[channelIndex];
1466 
1467     // Weights format has two valid options: [1, H, W, Cout] (CpuRef) or [1, Cout, H, W] (CpuAcc/GpuAcc).
1468     bool validRefFormat = (numWeightOutputChannelsRefFormat == numOutputChannels);
1469     bool validAclFormat = (numWeightOutputChannelsAclFormat == numOutputChannels);
1470 
1471     if (!(validRefFormat || validAclFormat))
1472     {
1473         throw InvalidArgumentException(fmt::format(
1474             "{0}: The weight format in armnn is expected to be [1, H, W, Cout] (CpuRef) or [1, Cout, H, W] "
1475             "(CpuAcc/GpuAcc). But neither the 4th (CpuRef) or 2nd (CpuAcc/GpuAcc) dimension is equal to Cout."
1476             "Cout = {1} Provided weight shape: [{2}, {3}, {4}, {5}]",
1477             descriptorName,
1478             numOutputChannels,
1479             weightTensorInfo.GetShape()[0],
1480             weightTensorInfo.GetShape()[1],
1481             weightTensorInfo.GetShape()[2],
1482             weightTensorInfo.GetShape()[3]));
1483     }
1484 
1485     ValidateWeightDataType(inputTensorInfo, weightTensorInfo, descriptorName);
1486 
1487     Optional<TensorInfo> optionalBiasTensorInfo;
1488     if (m_Parameters.m_BiasEnabled)
1489     {
1490         optionalBiasTensorInfo = MakeOptional<TensorInfo>(workloadInfo.m_InputTensorInfos[2]);
1491         const TensorInfo& biasTensorInfo = optionalBiasTensorInfo.value();
1492 
1493         ValidateBiasTensorQuantization(biasTensorInfo, inputTensorInfo, weightTensorInfo, descriptorName);
1494         ValidateTensorDataType(biasTensorInfo, GetBiasDataType(inputTensorInfo.GetDataType()), descriptorName, "bias");
1495     }
1496     ValidatePerAxisQuantization(inputTensorInfo,
1497                                 outputTensorInfo,
1498                                 weightTensorInfo,
1499                                 optionalBiasTensorInfo,
1500                                 descriptorName);
1501 
1502     std::vector<DataType> supportedTypes =
1503     {
1504         DataType::BFloat16,
1505         DataType::Float16,
1506         DataType::Float32,
1507         DataType::QAsymmS8,
1508         DataType::QAsymmU8,
1509         DataType::QSymmS16,
1510     };
1511 
1512     ValidateDataTypes(inputTensorInfo, supportedTypes, descriptorName);
1513     ValidateTensorDataTypesMatch(inputTensorInfo, outputTensorInfo, descriptorName, "input", "output");
1514 }
Referenced members and helpers:

bool m_BiasEnabled: Enable/disable bias.
DataLayout m_DataLayout: The data layout to be used (NCHW, NHWC).
uint32_t m_StrideX: Stride value when proceeding through input for the width dimension.
uint32_t m_StrideY: Stride value when proceeding through input for the height dimension.
uint32_t m_DilationX: Dilation factor value for width dimension.
uint32_t m_DilationY: Dilation factor value for height dimension.
std::vector< TensorInfo > m_InputTensorInfos
std::vector< TensorInfo > m_OutputTensorInfos
const TensorShape & GetShape() const (Definition: Tensor.hpp:191)
DataType GetDataType() const (Definition: Tensor.hpp:198)
DataType GetBiasDataType(DataType inputDataType)
void ValidateTensorNumDimensions(const TensorInfo &tensor, std::string const &descName, unsigned int numDimensions, std::string const &tensorName) const
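
As a usage sketch (not taken from the ArmNN sources: the shapes, parameter values, include paths and the function name ValidateDepthwiseDescriptor are assumptions), a caller might populate the descriptor and run Validate() like this:

#include <armnn/Tensor.hpp>
#include <armnn/Types.hpp>
#include <armnn/backends/WorkloadData.hpp>   // include path may differ between ArmNN versions

void ValidateDepthwiseDescriptor()
{
    using namespace armnn;

    // NHWC input of 1x8x8x16 with depthwise multiplier 1, so Cout = I*M = 16.
    // Weights use the [1, H, W, I*M] layout expected by the reference backend.
    TensorInfo inputInfo (TensorShape({1, 8, 8, 16}), DataType::Float32);
    TensorInfo weightInfo(TensorShape({1, 3, 3, 16}), DataType::Float32);
    TensorInfo outputInfo(TensorShape({1, 8, 8, 16}), DataType::Float32);

    DepthwiseConvolution2dQueueDescriptor descriptor;
    descriptor.m_Parameters.m_StrideX     = 1;
    descriptor.m_Parameters.m_StrideY     = 1;
    descriptor.m_Parameters.m_PadLeft     = 1;
    descriptor.m_Parameters.m_PadRight    = 1;
    descriptor.m_Parameters.m_PadTop      = 1;
    descriptor.m_Parameters.m_PadBottom   = 1;
    descriptor.m_Parameters.m_BiasEnabled = false;
    descriptor.m_Parameters.m_DataLayout  = DataLayout::NHWC;

    // With bias disabled, Validate() expects two input tensor infos (data, weights)
    // and one output; it throws InvalidArgumentException if anything is inconsistent.
    WorkloadInfo workloadInfo;
    workloadInfo.m_InputTensorInfos  = { inputInfo, weightInfo };
    workloadInfo.m_OutputTensorInfos = { outputInfo };

    descriptor.Validate(workloadInfo);
}

With m_BiasEnabled left false, Validate() expects exactly two input tensor infos and one output; enabling the bias makes a third input tensor info mandatory, as the listing above shows.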

Member Data Documentation

◆ m_Bias

const ConstTensorHandle* m_Bias

◆ m_Weight

const ConstTensorHandle* m_Weight


The documentation for this struct was generated from the following files:

WorkloadData.hpp
WorkloadData.cpp