ArmNN 21.11
NeonPooling2dWorkload Class Reference

#include <NeonPooling2dWorkload.hpp>

Inheritance diagram for NeonPooling2dWorkload:
NeonPooling2dWorkload → BaseWorkload< Pooling2dQueueDescriptor > → IWorkload

Public Member Functions

 NeonPooling2dWorkload (const Pooling2dQueueDescriptor &descriptor, const WorkloadInfo &info)
 
void Execute () const override
 
- Public Member Functions inherited from BaseWorkload< Pooling2dQueueDescriptor >
 BaseWorkload (const Pooling2dQueueDescriptor &descriptor, const WorkloadInfo &info)
 
void ExecuteAsync (WorkingMemDescriptor &workingMemDescriptor) override
 
void PostAllocationConfigure () override
 
const Pooling2dQueueDescriptor & GetData () const
 
profiling::ProfilingGuid GetGuid () const final
 
- Public Member Functions inherited from IWorkload
virtual ~IWorkload ()
 
virtual void RegisterDebugCallback (const DebugCallbackFunction &)
 

Additional Inherited Members

- Protected Attributes inherited from BaseWorkload< Pooling2dQueueDescriptor >
Pooling2dQueueDescriptor m_Data
 
const profiling::ProfilingGuid m_Guid
 

Detailed Description

A 2D pooling workload for the Neon (CPU) backend: the constructor configures an arm_compute::NEPoolingLayer from the Pooling2dQueueDescriptor, and Execute() runs it.

Definition at line 22 of file NeonPooling2dWorkload.hpp.

Constructor & Destructor Documentation

◆ NeonPooling2dWorkload()

NeonPooling2dWorkload ( const Pooling2dQueueDescriptor & descriptor,
const WorkloadInfo & info
)

Definition at line 36 of file NeonPooling2dWorkload.cpp.

References ARMNN_REPORT_PROFILING_WORKLOAD_DESC, BaseWorkload< Pooling2dQueueDescriptor >::m_Data, QueueDescriptor::m_Inputs, QueueDescriptor::m_Outputs, QueueDescriptorWithParameters< LayerDescriptor >::m_Parameters, and QueueDescriptor::ValidateInputsOutputs().

    : BaseWorkload<Pooling2dQueueDescriptor>(descriptor, info)
{
    // Report Profiling Details
    ARMNN_REPORT_PROFILING_WORKLOAD_DESC("NeonPooling2dWorkload_Construct",
                                         descriptor.m_Parameters,
                                         info,
                                         this->GetGuid());

    m_Data.ValidateInputsOutputs("NeonPooling2dWorkload", 1, 1);

    arm_compute::ITensor& input = PolymorphicDowncast<IAclTensorHandle*>(m_Data.m_Inputs[0])->GetTensor();
    arm_compute::ITensor& output = PolymorphicDowncast<IAclTensorHandle*>(m_Data.m_Outputs[0])->GetTensor();

    arm_compute::DataLayout aclDataLayout = ConvertDataLayout(m_Data.m_Parameters.m_DataLayout);
    input.info()->set_data_layout(aclDataLayout);
    output.info()->set_data_layout(aclDataLayout);

    arm_compute::PoolingLayerInfo layerInfo = BuildArmComputePoolingLayerInfo(m_Data.m_Parameters);

    auto layer = std::make_unique<arm_compute::NEPoolingLayer>();
    layer->configure(&input, &output, layerInfo);
    m_PoolingLayer.reset(layer.release());
}
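The workload is normally created by the Neon backend's workload factory, but the constructor's requirements can be illustrated directly. The sketch below is a minimal, hypothetical example rather than code from the ArmNN sources: the include paths and the RunMaxPool helper are assumptions, and the ITensorHandle pointers must in practice be ACL-backed Neon handles (IAclTensorHandle), otherwise the downcast in the constructor will fail.

    // Minimal construction sketch (assumption: internal ArmNN backend headers
    // such as these are on the include path; the exact paths may differ).
    #include <armnn/Descriptors.hpp>
    #include <armnn/Tensor.hpp>
    #include <backendsCommon/WorkloadData.hpp>            // Pooling2dQueueDescriptor
    #include <neon/workloads/NeonPooling2dWorkload.hpp>   // this class

    void RunMaxPool(armnn::ITensorHandle* input, armnn::ITensorHandle* output)
    {
        // 2x2 max pooling with stride 2 in NHWC layout.
        armnn::Pooling2dDescriptor poolDesc;
        poolDesc.m_PoolType   = armnn::PoolingAlgorithm::Max;
        poolDesc.m_PoolWidth  = 2;
        poolDesc.m_PoolHeight = 2;
        poolDesc.m_StrideX    = 2;
        poolDesc.m_StrideY    = 2;
        poolDesc.m_DataLayout = armnn::DataLayout::NHWC;

        // One input and one output handle, matching the
        // ValidateInputsOutputs("NeonPooling2dWorkload", 1, 1) check above.
        armnn::Pooling2dQueueDescriptor queueDesc;
        queueDesc.m_Parameters = poolDesc;
        queueDesc.m_Inputs.push_back(input);    // must be a Neon/ACL tensor handle
        queueDesc.m_Outputs.push_back(output);  // must be a Neon/ACL tensor handle

        // Shapes and data types seen by validation and profiling.
        armnn::WorkloadInfo info;
        info.m_InputTensorInfos  = { armnn::TensorInfo({1, 4, 4, 1}, armnn::DataType::Float32) };
        info.m_OutputTensorInfos = { armnn::TensorInfo({1, 2, 2, 1}, armnn::DataType::Float32) };

        armnn::NeonPooling2dWorkload workload(queueDesc, info);
        workload.Execute();   // delegates to arm_compute::NEPoolingLayer::run()
    }
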

Member Function Documentation

◆ Execute()

void Execute ( ) const override

Implements IWorkload.

Definition at line 62 of file NeonPooling2dWorkload.cpp.

References ARMNN_SCOPED_PROFILING_EVENT_NEON_GUID, and BaseWorkload< Pooling2dQueueDescriptor >::GetGuid().

{
    ARMNN_SCOPED_PROFILING_EVENT_NEON_GUID("NeonPooling2dWorkload_Execute", this->GetGuid());
    m_PoolingLayer->run();
}
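As a usage note (a sketch under the same assumptions as the construction example above, with numInferences as a hypothetical loop bound): once the workload holds valid Neon tensor handles, each inference writes the input tensor memory and calls Execute(), which the scoped profiling event wraps around arm_compute::NEPoolingLayer::run().

    // Hypothetical call pattern (workload constructed as in the earlier sketch).
    for (int i = 0; i < numInferences; ++i)
    {
        // ... write the next input into the bound input tensor handle ...
        workload.Execute();   // runs the configured NEPoolingLayer under a profiling event
    }
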

The documentation for this class was generated from the following files:
NeonPooling2dWorkload.hpp
NeonPooling2dWorkload.cpp