ArmNN 20.08
FullyConnectedQueueDescriptor Struct Reference

#include <WorkloadData.hpp>

Inheritance diagram for FullyConnectedQueueDescriptor:
QueueDescriptor
  QueueDescriptorWithParameters< FullyConnectedDescriptor >
    FullyConnectedQueueDescriptor

Public Member Functions

 FullyConnectedQueueDescriptor ()
 
void Validate (const WorkloadInfo &workloadInfo) const
 
- Public Member Functions inherited from QueueDescriptor
void ValidateInputsOutputs (const std::string &descName, unsigned int numExpectedIn, unsigned int numExpectedOut) const
 

Public Attributes

const ConstCpuTensorHandle * m_Weight
 
const ConstCpuTensorHandle * m_Bias
 
- Public Attributes inherited from QueueDescriptorWithParameters< FullyConnectedDescriptor >
FullyConnectedDescriptor m_Parameters
 
- Public Attributes inherited from QueueDescriptor
std::vector< ITensorHandle * > m_Inputs
 
std::vector< ITensorHandle * > m_Outputs
 

Additional Inherited Members

- Protected Member Functions inherited from QueueDescriptorWithParameters< FullyConnectedDescriptor >
 ~QueueDescriptorWithParameters ()=default
 
 QueueDescriptorWithParameters ()=default
 
 QueueDescriptorWithParameters (QueueDescriptorWithParameters const &)=default
 
QueueDescriptorWithParameters & operator= (QueueDescriptorWithParameters const &)=default
 
- Protected Member Functions inherited from QueueDescriptor
 ~QueueDescriptor ()=default
 
 QueueDescriptor ()=default
 
 QueueDescriptor (QueueDescriptor const &)=default
 
QueueDescriptor & operator= (QueueDescriptor const &)=default
 

Detailed Description

Definition at line 147 of file WorkloadData.hpp.

Constructor & Destructor Documentation

◆ FullyConnectedQueueDescriptor()

Definition at line 149 of file WorkloadData.hpp.

149  FullyConnectedQueueDescriptor()
150  : m_Weight(nullptr)
151  , m_Bias(nullptr)
152  {
153  }

Member Function Documentation

◆ Validate()

void Validate (const WorkloadInfo & workloadInfo) const

Definition at line 992 of file WorkloadData.cpp.

References armnn::BFloat16, armnn::Float16, armnn::Float32, armnn::GetBiasDataType(), TensorInfo::GetDataType(), TensorInfo::GetNumDimensions(), WorkloadInfo::m_InputTensorInfos, WorkloadInfo::m_OutputTensorInfos, armnn::QAsymmS8, armnn::QAsymmU8, and armnn::QSymmS16.

993 {
994  const std::string descriptorName{"FullyConnectedQueueDescriptor"};
995 
996  ValidateNumInputs(workloadInfo, descriptorName, 1);
997  ValidateNumOutputs(workloadInfo, descriptorName, 1);
998 
999  const TensorInfo& inputTensorInfo = workloadInfo.m_InputTensorInfos[0];
1000  const TensorInfo& outputTensorInfo = workloadInfo.m_OutputTensorInfos[0];
1001 
1002  ValidateTensorNumDimensions(outputTensorInfo, descriptorName, 2, "output");
1003 
1004  if (!(inputTensorInfo.GetNumDimensions() == 2 || inputTensorInfo.GetNumDimensions() == 4))
1005  {
1006  throw InvalidArgumentException(descriptorName + ": Input tensor must have 2 or 4 dimensions.");
1007  }
1008 
1009  ValidatePointer(m_Weight, descriptorName, "weight");
1010 
1011  const TensorInfo& weightTensorInfo = m_Weight->GetTensorInfo();
1012  ValidateTensorNumDimensions(weightTensorInfo, descriptorName, 2, "weight");
1013 
1014  if (m_Parameters.m_BiasEnabled)
1015  {
1016  ValidatePointer(m_Bias, descriptorName, "bias");
1017 
1018  // Validates type and quantization values.
1019  const TensorInfo& biasTensorInfo = m_Bias->GetTensorInfo();
1020  ValidateBiasTensorQuantization(biasTensorInfo, inputTensorInfo, weightTensorInfo, descriptorName);
1021 
1022  ValidateTensorDataType(biasTensorInfo, GetBiasDataType(inputTensorInfo.GetDataType()), descriptorName, "bias");
1023  ValidateTensorNumDimensions(biasTensorInfo, descriptorName, 1, "bias");
1024  }
1025 
1026  // Check the supported data types
1027  std::vector<DataType> supportedTypes =
1028  {
1029  DataType::BFloat16,
1030  DataType::Float16,
1031  DataType::Float32,
1032  DataType::QAsymmS8,
1033  DataType::QAsymmU8,
1034  DataType::QSymmS16
1035  };
1036 
1037  ValidateDataTypes(inputTensorInfo, supportedTypes, descriptorName);
1038 
1039  // For FullyConnected, we allow to have BFloat16 input with Float32 output for optimization.
1040  if (inputTensorInfo.GetDataType() == DataType::BFloat16)
1041  {
1042  if (outputTensorInfo.GetDataType() != DataType::BFloat16 && outputTensorInfo.GetDataType() != DataType::Float32)
1043  {
1044  throw InvalidArgumentException(descriptorName + ": " + " Output tensor type must be BFloat16 or Float32 "
1045  "for BFloat16 input.");
1046  }
1047  }
1048  else
1049  {
1050  ValidateTensorDataTypesMatch(inputTensorInfo, outputTensorInfo, descriptorName, "input", "output");
1051  }
1052 }

Member Data Documentation

◆ m_Bias

const ConstCpuTensorHandle* m_Bias

◆ m_Weight

const ConstCpuTensorHandle* m_Weight

Definition at line 155 of file WorkloadData.hpp.

Referenced by BOOST_AUTO_TEST_CASE(), and FullyConnectedLayer::CreateWorkload().


The documentation for this struct was generated from the following files:

WorkloadData.hpp
WorkloadData.cpp