ArmNN 21.02
ClArgMinMaxWorkload Class Reference

#include <ClArgMinMaxWorkload.hpp>

Inheritance diagram for ClArgMinMaxWorkload:
ClArgMinMaxWorkload -> BaseWorkload< ArgMinMaxQueueDescriptor > -> IWorkload

Public Member Functions

 ClArgMinMaxWorkload (const ArgMinMaxQueueDescriptor &descriptor, const WorkloadInfo &info, const arm_compute::CLCompileContext &clCompileContext)
 
virtual void Execute () const override
 
- Public Member Functions inherited from BaseWorkload< ArgMinMaxQueueDescriptor >
 BaseWorkload (const ArgMinMaxQueueDescriptor &descriptor, const WorkloadInfo &info)
 
void PostAllocationConfigure () override
 
const ArgMinMaxQueueDescriptor & GetData () const
 
profiling::ProfilingGuid GetGuid () const final
 
- Public Member Functions inherited from IWorkload
virtual ~IWorkload ()
 
virtual void RegisterDebugCallback (const DebugCallbackFunction &)
 

Additional Inherited Members

- Protected Attributes inherited from BaseWorkload< ArgMinMaxQueueDescriptor >
const ArgMinMaxQueueDescriptor m_Data
 
const profiling::ProfilingGuid m_Guid
 

Detailed Description

Definition at line 20 of file ClArgMinMaxWorkload.hpp.
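
ClArgMinMaxWorkload is the OpenCL (GPU) workload for the ArgMinMax layer: it configures and runs Compute Library's CLArgMinMaxLayer. It is normally created by the CL backend's workload factory rather than constructed by hand. A minimal construction sketch, assuming only what this page documents (the header name and the three constructor arguments); the free function name is illustrative:

    // Sketch only: constructing the workload directly. In practice the CL
    // workload factory creates it and supplies the CLCompileContext.
    #include <ClArgMinMaxWorkload.hpp> // as stated at the top of this page
    #include <memory>

    std::unique_ptr<armnn::IWorkload> MakeArgMinMaxWorkloadSketch(
        const armnn::ArgMinMaxQueueDescriptor& descriptor,
        const armnn::WorkloadInfo& info,
        const arm_compute::CLCompileContext& clCompileContext)
    {
        // The constructor configures CLArgMinMaxLayer; Execute() later runs it.
        return std::make_unique<armnn::ClArgMinMaxWorkload>(descriptor, info, clCompileContext);
    }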

Constructor & Destructor Documentation

◆ ClArgMinMaxWorkload()

ClArgMinMaxWorkload ( const ArgMinMaxQueueDescriptor &  descriptor,
const WorkloadInfo &  info,
const arm_compute::CLCompileContext &  clCompileContext 
)

Definition at line 55 of file ClArgMinMaxWorkload.cpp.

References armnnUtils::GetUnsignedAxis(), ArgMinMaxDescriptor::m_Axis, BaseWorkload< ArgMinMaxQueueDescriptor >::m_Data, ArgMinMaxDescriptor::m_Function, QueueDescriptor::m_Inputs, WorkloadInfo::m_InputTensorInfos, QueueDescriptor::m_Outputs, QueueDescriptorWithParameters< LayerDescriptor >::m_Parameters, armnn::Max, and armnn::numeric_cast().

58  : BaseWorkload<ArgMinMaxQueueDescriptor>(descriptor, info)
59 {
60  arm_compute::ICLTensor& input = static_cast<IClTensorHandle*>(this->m_Data.m_Inputs[0])->GetTensor();
61  arm_compute::ICLTensor& output = static_cast<IClTensorHandle*>(this->m_Data.m_Outputs[0])->GetTensor();
62 
63  auto numDims = info.m_InputTensorInfos[0].GetNumDimensions();
64  auto unsignedAxis = armnnUtils::GetUnsignedAxis(numDims, m_Data.m_Parameters.m_Axis);
65  int aclAxis = armnn::numeric_cast<int>(CalcAclAxis(numDims, unsignedAxis));
66 
67  if (m_Data.m_Parameters.m_Function == ArgMinMaxFunction::Max)
68  {
69  m_ArgMinMaxLayer.configure(&input, aclAxis, &output, arm_compute::ReductionOperation::ARG_IDX_MAX);
70  }
71  else
72  {
73  m_ArgMinMaxLayer.configure(clCompileContext,
74  &input,
75  aclAxis,
76  &output,
77  arm_compute::ReductionOperation::ARG_IDX_MIN);
78  }
79 }
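A note on the axis handling above: m_Data.m_Parameters.m_Axis may be negative (counting back from the innermost dimension), GetUnsignedAxis() normalises it into the range [0, numDims), and CalcAclAxis, a file-local helper in ClArgMinMaxWorkload.cpp that is not documented on this page, maps the result onto Compute Library's reversed dimension order. The standalone sketch below illustrates that assumed mapping; the helpers suffixed with Sketch are illustrative and not part of the ArmNN API:

    // Hedged sketch of the axis conversion used by the constructor.
    #include <cassert>

    // armnnUtils::GetUnsignedAxis wraps a possibly negative axis into [0, numDims).
    unsigned int GetUnsignedAxisSketch(unsigned int numDims, int axis)
    {
        return static_cast<unsigned int>(axis < 0 ? axis + static_cast<int>(numDims) : axis);
    }

    // Assumed behaviour of CalcAclAxis: reverse the dimension index, since ACL
    // numbers dimensions from the innermost outwards.
    unsigned int CalcAclAxisSketch(unsigned int numDims, unsigned int unsignedAxis)
    {
        return (numDims - unsignedAxis) - 1;
    }

    int main()
    {
        // Example: a 4D tensor with ArmNN axis -1 (the innermost dimension).
        unsigned int numDims = 4;
        unsigned int unsignedAxis = GetUnsignedAxisSketch(numDims, -1);  // 3
        unsigned int aclAxis = CalcAclAxisSketch(numDims, unsignedAxis); // 0
        assert(unsignedAxis == 3 && aclAxis == 0);
        return 0;
    }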

Member Function Documentation

◆ Execute()

void Execute ( ) const
override virtual

Implements IWorkload.

Definition at line 81 of file ClArgMinMaxWorkload.cpp.

References ARMNN_SCOPED_PROFILING_EVENT_CL, CHECK_LOCATION, and armnn::RunClFunction().

82 {
83  ARMNN_SCOPED_PROFILING_EVENT_CL("ClArgMinMaxWorkload_Execute");
84  RunClFunction(m_ArgMinMaxLayer, CHECK_LOCATION());
85 }
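Execute() opens a CL profiling scope for the workload name and hands the configured layer to RunClFunction(), which is declared in ClWorkloadUtils.hpp and only referenced (not reproduced) on this page. The sketch below shows the general pattern that helper implements: run the ACL function and surface OpenCL failures together with the call site captured by CHECK_LOCATION(). The exact ArmNN body may differ:

    // Hedged sketch of the RunClFunction() pattern; ArmNN itself catches
    // cl::Error and rethrows an ArmNN exception carrying a CheckLocation.
    #include <arm_compute/runtime/IFunction.h>
    #include <stdexcept>
    #include <string>

    // "location" stands in for the armnn::CheckLocation produced by CHECK_LOCATION().
    void RunClFunctionSketch(arm_compute::IFunction& function, const std::string& location)
    {
        try
        {
            function.run(); // enqueue and execute the configured CL kernels
        }
        catch (const std::exception& error)
        {
            // Attach the call site so a failing kernel is attributable to its workload.
            throw std::runtime_error(std::string(error.what()) + " at " + location);
        }
    }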

The documentation for this class was generated from the following files:

ClArgMinMaxWorkload.hpp
ClArgMinMaxWorkload.cpp