ArmNN 20.02
RefPadWorkload< DataType > Class Template Reference

#include <RefPadWorkload.hpp>

Inheritance diagram for RefPadWorkload< DataType >:
RefPadWorkload< DataType > derives from TypedWorkload< PadQueueDescriptor, DataType >, which derives from BaseWorkload< PadQueueDescriptor >, which derives from IWorkload.

Public Member Functions

void Execute () const override
 
- Public Member Functions inherited from TypedWorkload< PadQueueDescriptor, DataType >
 TypedWorkload (const PadQueueDescriptor &descriptor, const WorkloadInfo &info)
 
- Public Member Functions inherited from BaseWorkload< PadQueueDescriptor >
 BaseWorkload (const PadQueueDescriptor &descriptor, const WorkloadInfo &info)
 
void PostAllocationConfigure () override
 
const PadQueueDescriptor & GetData () const
 
profiling::ProfilingGuid GetGuid () const final
 
- Public Member Functions inherited from IWorkload
virtual ~IWorkload ()
 
virtual void RegisterDebugCallback (const DebugCallbackFunction &)
 

Static Public Member Functions

static const std::string & GetName ()
 

Additional Inherited Members

- Protected Attributes inherited from BaseWorkload< PadQueueDescriptor >
const PadQueueDescriptor m_Data
 
const profiling::ProfilingGuid m_Guid
 

Detailed Description

template<armnn::DataType DataType>
class armnn::RefPadWorkload< DataType >

Definition at line 17 of file RefPadWorkload.hpp.

Member Function Documentation

◆ Execute()

void Execute ( ) const
override, virtual

Implements IWorkload.

Definition at line 21 of file RefPadWorkload.cpp.

References ARMNN_SCOPED_PROFILING_EVENT, armnn::CpuRef, armnn::GetTensorInfo(), and armnn::Pad().


{
    using T = ResolveType<DataType>;

    ARMNN_SCOPED_PROFILING_EVENT(Compute::CpuRef, "RefPadWorkload_Execute");

    const TensorInfo& inputInfo = GetTensorInfo(m_Data.m_Inputs[0]);
    const TensorInfo& outputInfo = GetTensorInfo(m_Data.m_Outputs[0]);

    const T* inputData = GetInputTensorData<T>(0, m_Data);
    T* outputData = GetOutputTensorData<T>(0, m_Data);

    Pad(inputInfo, outputInfo, m_Data.m_Parameters.m_PadList, inputData, outputData, m_Data.m_Parameters.m_PadValue);
}

◆ GetName()

static const std::string& GetName ( )
inline, static

Definition at line 21 of file RefPadWorkload.hpp.

References armnn::GetDataTypeName().

{
    static const std::string name = std::string("RefPad") + GetDataTypeName(DataType) + "Workload";
    return name;
}

The documentation for this class was generated from the following files:

RefPadWorkload.hpp
RefPadWorkload.cpp