ArmNN 21.02
RefArgMinMaxWorkload Class Reference

#include <RefArgMinMaxWorkload.hpp>

Inheritance diagram for RefArgMinMaxWorkload:
[Inheritance diagram: IWorkload → BaseWorkload< ArgMinMaxQueueDescriptor > → RefArgMinMaxWorkload]

Public Member Functions

 RefArgMinMaxWorkload (const ArgMinMaxQueueDescriptor &descriptor, const WorkloadInfo &info)
 
virtual void Execute () const override
 
- Public Member Functions inherited from BaseWorkload< ArgMinMaxQueueDescriptor >
 BaseWorkload (const ArgMinMaxQueueDescriptor &descriptor, const WorkloadInfo &info)
 
void PostAllocationConfigure () override
 
const ArgMinMaxQueueDescriptor & GetData () const
 
profiling::ProfilingGuid GetGuid () const final
 
- Public Member Functions inherited from IWorkload
virtual ~IWorkload ()
 
virtual void RegisterDebugCallback (const DebugCallbackFunction &)
 

Additional Inherited Members

- Protected Attributes inherited from BaseWorkload< ArgMinMaxQueueDescriptor >
const ArgMinMaxQueueDescriptor m_Data
 
const profiling::ProfilingGuid m_Guid
 

Detailed Description

Reference (CpuRef) backend workload for the ArgMinMax layer: Execute() computes the argmin/argmax indices along the configured axis of the input tensor.

Definition at line 13 of file RefArgMinMaxWorkload.hpp.

Constructor & Destructor Documentation

◆ RefArgMinMaxWorkload()

RefArgMinMaxWorkload ( const ArgMinMaxQueueDescriptor & descriptor,
const WorkloadInfo & info
)
explicit

Definition at line 16 of file RefArgMinMaxWorkload.cpp.

RefArgMinMaxWorkload::RefArgMinMaxWorkload(const ArgMinMaxQueueDescriptor& descriptor,
                                           const WorkloadInfo& info)
    : BaseWorkload<ArgMinMaxQueueDescriptor>(descriptor, info) {}
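For orientation, here is a minimal usage sketch in the style of ArmNN's backend tests: it fills in the descriptor members referenced on this page and runs the workload directly. The tensor handles and TensorInfo objects (inputHandle, outputHandle, inputInfo, outputInfo) are hypothetical placeholders for handles created through a backend's tensor-handle machinery.

// Minimal sketch, assuming inputHandle/outputHandle are pre-allocated
// armnn::ITensorHandle* for the input and output tensors, and
// inputInfo/outputInfo are their armnn::TensorInfo objects
// (all hypothetical names).
armnn::ArgMinMaxQueueDescriptor descriptor;
descriptor.m_Parameters.m_Function    = armnn::ArgMinMaxFunction::Max;  // find Max (or Min)
descriptor.m_Parameters.m_Axis        = 1;                              // axis to reduce across
descriptor.m_Parameters.m_Output_Type = armnn::DataType::Signed32;      // int32 indices
descriptor.m_Inputs.push_back(inputHandle);
descriptor.m_Outputs.push_back(outputHandle);

armnn::WorkloadInfo info;
info.m_InputTensorInfos  = { inputInfo };
info.m_OutputTensorInfos = { outputInfo };

armnn::RefArgMinMaxWorkload workload(descriptor, info);
workload.PostAllocationConfigure();
workload.Execute();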

Member Function Documentation

◆ Execute()

void Execute ( ) const
override, virtual

Implements IWorkload.

Definition at line 21 of file RefArgMinMaxWorkload.cpp.

References armnn::ArgMinMax, ARMNN_SCOPED_PROFILING_EVENT, armnn::CpuRef, armnn::GetTensorInfo(), ArgMinMaxDescriptor::m_Axis, BaseWorkload< ArgMinMaxQueueDescriptor >::m_Data, ArgMinMaxDescriptor::m_Function, QueueDescriptor::m_Inputs, ArgMinMaxDescriptor::m_Output_Type, QueueDescriptor::m_Outputs, QueueDescriptorWithParameters< LayerDescriptor >::m_Parameters, and armnn::Signed32.

{
    ARMNN_SCOPED_PROFILING_EVENT(Compute::CpuRef, "RefArgMinMaxWorkload_Execute");

    const TensorInfo &inputTensorInfo = GetTensorInfo(m_Data.m_Inputs[0]);

    std::unique_ptr<Decoder<float>> decoderPtr = MakeDecoder<float>(inputTensorInfo, m_Data.m_Inputs[0]->Map());
    Decoder<float> &decoder = *decoderPtr;

    const TensorInfo &outputTensorInfo = GetTensorInfo(m_Data.m_Outputs[0]);

    if (m_Data.m_Parameters.m_Output_Type == armnn::DataType::Signed32) {
        int32_t *output = GetOutputTensorData<int32_t>(0, m_Data);
        ArgMinMax(decoder, output, inputTensorInfo, outputTensorInfo, m_Data.m_Parameters.m_Function,
                  m_Data.m_Parameters.m_Axis);
    } else {
        int64_t *output = GetOutputTensorData<int64_t>(0, m_Data);
        ArgMinMax(decoder, output, inputTensorInfo, outputTensorInfo, m_Data.m_Parameters.m_Function,
                  m_Data.m_Parameters.m_Axis);
    }
}
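The branch on m_Output_Type only changes the index element type: Signed32 writes int32_t indices, while any other output type writes int64_t indices through the same reference ArgMinMax kernel. To make the kernel's behaviour concrete, below is a standalone sketch (not ArmNN code) of the argmax semantics the reference kernel applies; the function name, the fixed 2D row-major layout, and the axis-1 reduction are illustrative assumptions.

#include <cstdint>
#include <vector>

// Hypothetical standalone illustration: per-row argmax over a row-major
// rows x cols float buffer, producing Signed32-style indices as the
// workload does when m_Output_Type == DataType::Signed32.
std::vector<int32_t> ArgMaxAxis1(const std::vector<float>& data, int rows, int cols)
{
    std::vector<int32_t> out(rows);
    for (int r = 0; r < rows; ++r)
    {
        int best = 0;                           // index of the largest value seen so far
        for (int c = 1; c < cols; ++c)
        {
            if (data[r * cols + c] > data[r * cols + best])
            {
                best = c;
            }
        }
        out[r] = static_cast<int32_t>(best);    // one index per reduced row
    }
    return out;
}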

The documentation for this class was generated from the following files:
RefArgMinMaxWorkload.hpp
RefArgMinMaxWorkload.cpp