ArmNN 20.05
SampleDynamicAdditionWorkload Class Reference

#include <SampleDynamicAdditionWorkload.hpp>

Inheritance diagram for SampleDynamicAdditionWorkload:
IWorkload ← BaseWorkload< AdditionQueueDescriptor > ← SampleDynamicAdditionWorkload

Public Member Functions

 SampleDynamicAdditionWorkload (const AdditionQueueDescriptor &descriptor, const WorkloadInfo &info)
 
void Execute () const override
 
- Public Member Functions inherited from BaseWorkload< AdditionQueueDescriptor >
 BaseWorkload (const AdditionQueueDescriptor &descriptor, const WorkloadInfo &info)
 
void PostAllocationConfigure () override
 
const AdditionQueueDescriptor & GetData () const
 
profiling::ProfilingGuid GetGuid () const final
 
- Public Member Functions inherited from IWorkload
virtual ~IWorkload ()
 
virtual void RegisterDebugCallback (const DebugCallbackFunction &)
 

Additional Inherited Members

- Protected Attributes inherited from BaseWorkload< AdditionQueueDescriptor >
const AdditionQueueDescriptor m_Data
 
const profiling::ProfilingGuid m_Guid
 

Detailed Description
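
This sample workload performs an element-wise addition: Execute() reads two Float32 input tensors with the same number of elements from the queue descriptor and writes their per-element sum to the output tensor, as shown in the member documentation below.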

Definition at line 13 of file SampleDynamicAdditionWorkload.hpp.

Constructor & Destructor Documentation

◆ SampleDynamicAdditionWorkload()

SampleDynamicAdditionWorkload (const AdditionQueueDescriptor & descriptor, const WorkloadInfo & info)

Definition at line 34 of file SampleDynamicAdditionWorkload.cpp.

    : BaseWorkload(descriptor, info)
{}

Member Function Documentation

◆ Execute()

void Execute () const override

Implements IWorkload.

Definition at line 39 of file SampleDynamicAdditionWorkload.cpp.

References armnn::GetInputTensorData(), TensorInfo::GetNumElements(), armnn::GetOutputTensorData(), armnn::GetTensorInfo(), armnn::info, BaseWorkload< AdditionQueueDescriptor >::m_Data, and QueueDescriptor::m_Inputs.

{
    const TensorInfo& info = GetTensorInfo(m_Data.m_Inputs[0]);
    unsigned int num = info.GetNumElements();

    const float* inputData0 = GetInputTensorData(0, m_Data);
    const float* inputData1 = GetInputTensorData(1, m_Data);
    float* outputData = GetOutputTensorData(0, m_Data);

    for (unsigned int i = 0; i < num; ++i)
    {
        outputData[i] = inputData0[i] + inputData1[i];
    }
}
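
A minimal usage sketch follows. It is not taken from the ArmNN sources: the RunSampleAddition helper, the include paths, and the way the three ITensorHandle pointers are obtained (e.g. from the sample backend's tensor handle factory) are assumptions, and any namespace qualification of the sample workload itself is omitted. Only AdditionQueueDescriptor, WorkloadInfo, and the workload's constructor and Execute() come from this page.

#include <armnn/Tensor.hpp>                   // armnn::TensorInfo
#include <backendsCommon/WorkloadData.hpp>    // armnn::AdditionQueueDescriptor (include path assumed)
#include "SampleDynamicAdditionWorkload.hpp"

// Hypothetical helper: the three handles are assumed to wrap Float32 buffers
// whose shape matches tensorInfo.
void RunSampleAddition(armnn::ITensorHandle* input0,
                       armnn::ITensorHandle* input1,
                       armnn::ITensorHandle* output,
                       const armnn::TensorInfo& tensorInfo)
{
    armnn::AdditionQueueDescriptor descriptor;
    descriptor.m_Inputs  = { input0, input1 };
    descriptor.m_Outputs = { output };

    armnn::WorkloadInfo info;
    info.m_InputTensorInfos  = { tensorInfo, tensorInfo };
    info.m_OutputTensorInfos = { tensorInfo };

    SampleDynamicAdditionWorkload workload(descriptor, info);
    workload.Execute();   // writes input0[i] + input1[i] into the output buffer
}

Execute() walks the flat Float32 buffers once using the element count of the first input, so both inputs and the output must hold at least tensorInfo.GetNumElements() elements.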

The documentation for this class was generated from the following files:
SampleDynamicAdditionWorkload.hpp
SampleDynamicAdditionWorkload.cpp