ArmNN 20.02
RefArgMinMaxWorkload Class Reference

#include <RefArgMinMaxWorkload.hpp>

Inheritance diagram for RefArgMinMaxWorkload:
RefArgMinMaxWorkload → BaseWorkload< ArgMinMaxQueueDescriptor > → IWorkload

Public Member Functions

 RefArgMinMaxWorkload (const ArgMinMaxQueueDescriptor &descriptor, const WorkloadInfo &info)
 
virtual void Execute () const override
 
- Public Member Functions inherited from BaseWorkload< ArgMinMaxQueueDescriptor >
 BaseWorkload (const ArgMinMaxQueueDescriptor &descriptor, const WorkloadInfo &info)
 
void PostAllocationConfigure () override
 
const ArgMinMaxQueueDescriptor & GetData () const
 
profiling::ProfilingGuid GetGuid () const final
 
- Public Member Functions inherited from IWorkload
virtual ~IWorkload ()
 
virtual void RegisterDebugCallback (const DebugCallbackFunction &)
 

Additional Inherited Members

- Protected Attributes inherited from BaseWorkload< ArgMinMaxQueueDescriptor >
const ArgMinMaxQueueDescriptor m_Data
 
const profiling::ProfilingGuid m_Guid
 

Detailed Description

Definition at line 13 of file RefArgMinMaxWorkload.hpp.

Constructor & Destructor Documentation

◆ RefArgMinMaxWorkload()

RefArgMinMaxWorkload ( const ArgMinMaxQueueDescriptor & descriptor,
const WorkloadInfo & info 
)
explicit

Definition at line 16 of file RefArgMinMaxWorkload.cpp.

19  : BaseWorkload<ArgMinMaxQueueDescriptor>(descriptor, info) {}

Member Function Documentation

◆ Execute()

void Execute ( ) const
overridevirtual

Implements IWorkload.

Definition at line 21 of file RefArgMinMaxWorkload.cpp.

References armnn::ArgMinMax, ARMNN_SCOPED_PROFILING_EVENT, armnn::CpuRef, armnn::GetTensorInfo(), ArgMinMaxDescriptor::m_Axis, BaseWorkload< ArgMinMaxQueueDescriptor >::m_Data, ArgMinMaxDescriptor::m_Function, QueueDescriptor::m_Inputs, QueueDescriptor::m_Outputs, and QueueDescriptorWithParameters< LayerDescriptor >::m_Parameters.

22 {
23  ARMNN_SCOPED_PROFILING_EVENT(Compute::CpuRef, "RefArgMinMaxWorkload_Execute");
24 
25  const TensorInfo &inputTensorInfo = GetTensorInfo(m_Data.m_Inputs[0]);
26 
27  std::unique_ptr<Decoder<float>> decoderPtr = MakeDecoder<float>(inputTensorInfo, m_Data.m_Inputs[0]->Map());
28  Decoder<float> &decoder = *decoderPtr;
29 
30  const TensorInfo &outputTensorInfo = GetTensorInfo(m_Data.m_Outputs[0]);
31 
32  int32_t* output = GetOutputTensorData<int32_t>(0, m_Data);
33 
34  ArgMinMax(decoder, output, inputTensorInfo, outputTensorInfo, m_Data.m_Parameters.m_Function,
35            m_Data.m_Parameters.m_Axis);
36 }
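The Execute() body above decodes the input tensor, then reduces along the axis given by m_Parameters.m_Axis, writing one int32 index per output element. The following is a minimal, self-contained sketch of that arg-min/arg-max reduction over a flat float buffer; it is not the ArmNN kernel itself, and the Function enum and ArgMinMaxSketch helper are hypothetical stand-ins for armnn::ArgMinMaxFunction and the reference ArgMinMax kernel.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical stand-in for armnn::ArgMinMaxFunction (Min or Max).
enum class Function { Min, Max };

// Reduce `input` (row-major, shape `dims`) along `axis`, returning the
// winning index per output element as int32, like the reference kernel's
// int32_t* output. Sketch only; real tensor handling is more involved.
std::vector<int32_t> ArgMinMaxSketch(const std::vector<float>& input,
                                     const std::vector<unsigned>& dims,
                                     unsigned axis,
                                     Function fn)
{
    // outer = product of dims before `axis`, inner = product of dims after it.
    unsigned outer = 1, inner = 1;
    for (unsigned i = 0; i < axis; ++i)               { outer *= dims[i]; }
    for (unsigned i = axis + 1; i < dims.size(); ++i) { inner *= dims[i]; }
    const unsigned axisSize = dims[axis];

    std::vector<int32_t> output(outer * inner);
    for (unsigned o = 0; o < outer; ++o)
    {
        for (unsigned i = 0; i < inner; ++i)
        {
            int32_t bestIdx = 0;
            float   bestVal = input[o * axisSize * inner + i];
            for (unsigned a = 1; a < axisSize; ++a)
            {
                const float v = input[(o * axisSize + a) * inner + i];
                const bool better = (fn == Function::Max) ? (v > bestVal)
                                                          : (v < bestVal);
                if (better) { bestVal = v; bestIdx = static_cast<int32_t>(a); }
            }
            output[o * inner + i] = bestIdx;
        }
    }
    return output;
}
```

Note the output shape drops the reduced axis entirely, which is why the real workload writes plain int32_t values through GetOutputTensorData<int32_t> rather than going through an Encoder.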

The documentation for this class was generated from the following files:
RefArgMinMaxWorkload.hpp
RefArgMinMaxWorkload.cpp