ArmNN 23.02
NeonMultiplicationWorkload Class Reference

#include <NeonMultiplicationWorkload.hpp>

Inheritance diagram for NeonMultiplicationWorkload:
NeonMultiplicationWorkload → NeonBaseWorkload< MultiplicationQueueDescriptor > → BaseWorkload< MultiplicationQueueDescriptor > → IWorkload

Public Member Functions

 NeonMultiplicationWorkload (const MultiplicationQueueDescriptor &descriptor, const WorkloadInfo &info)
 
virtual void Execute () const override
 
- Public Member Functions inherited from NeonBaseWorkload< MultiplicationQueueDescriptor >
 NeonBaseWorkload (const MultiplicationQueueDescriptor &descriptor, const WorkloadInfo &info)
 
void ReplaceInputTensorHandle (ITensorHandle *tensorHandle, unsigned int slot) override
 
void ReplaceOutputTensorHandle (ITensorHandle *tensorHandle, unsigned int slot) override
 
- Public Member Functions inherited from BaseWorkload< MultiplicationQueueDescriptor >
 BaseWorkload (const MultiplicationQueueDescriptor &descriptor, const WorkloadInfo &info)
 
void ExecuteAsync (ExecutionData &executionData) override
 
void PostAllocationConfigure () override
 
const MultiplicationQueueDescriptor & GetData () const
 
arm::pipe::ProfilingGuid GetGuid () const final
 
virtual bool SupportsTensorHandleReplacement () const override
 
- Public Member Functions inherited from IWorkload
virtual ~IWorkload ()
 
virtual arm::pipe::ProfilingGuid GetGuid () const =0
 
virtual bool SupportsTensorHandleReplacement () const =0
 
virtual void RegisterDebugCallback (const DebugCallbackFunction &)
 
virtual armnn::Optional< armnn::MemoryRequirements > GetMemoryRequirements ()
 

Additional Inherited Members

- Protected Member Functions inherited from NeonBaseWorkload< MultiplicationQueueDescriptor >
virtual void Reconfigure ()
 
- Protected Attributes inherited from BaseWorkload< MultiplicationQueueDescriptor >
MultiplicationQueueDescriptor m_Data
 
const arm::pipe::ProfilingGuid m_Guid
 

Detailed Description

Wraps arm_compute::NEPixelWiseMultiplication to perform an element-wise multiplication of two input tensors on the Arm Neon backend.

Definition at line 23 of file NeonMultiplicationWorkload.hpp.

Constructor & Destructor Documentation

◆ NeonMultiplicationWorkload()

NeonMultiplicationWorkload ( const MultiplicationQueueDescriptor & descriptor,
const WorkloadInfo & info
)

Definition at line 47 of file NeonMultiplicationWorkload.cpp.

    : NeonBaseWorkload<MultiplicationQueueDescriptor>(descriptor, info)
{
    m_Data.ValidateInputsOutputs("NeonMultiplicationWorkload", 2, 1);

    arm_compute::ITensor& input1 = PolymorphicDowncast<IAclTensorHandle*>(m_Data.m_Inputs[0])->GetTensor();
    arm_compute::ITensor& input2 = PolymorphicDowncast<IAclTensorHandle*>(m_Data.m_Inputs[1])->GetTensor();
    arm_compute::ITensor& output = PolymorphicDowncast<IAclTensorHandle*>(m_Data.m_Outputs[0])->GetTensor();

    auto convertPolicy = (IsQuantizedType(info.m_InputTensorInfos[0].GetDataType()) ||
                          IsQuantizedType(info.m_InputTensorInfos[1].GetDataType())) ?
                          arm_compute::ConvertPolicy::SATURATE :
                          arm_compute::ConvertPolicy::WRAP;

    const arm_compute::ActivationLayerInfo activationInfo = ConvertAdditionalInfoToAclActivationLayerInfo(descriptor);

    // At the time of writing, configure() will fail if a rounding policy other than TO_ZERO is supplied to it,
    // when providing a scale of 1.0 for F32 tensors, even though the provided rounding policy appears to be
    // ignored for F32 tensors.
    auto layer = std::make_unique<arm_compute::NEPixelWiseMultiplication>();
    layer->configure(&input1,
                     &input2,
                     &output,
                     1.0f,
                     convertPolicy,
                     arm_compute::RoundingPolicy::TO_ZERO,
                     activationInfo);
    m_PixelWiseMultiplication.reset(layer.release());
}

References armnn::ConvertAdditionalInfoToAclActivationLayerInfo(), armnn::info, armnn::IsQuantizedType(), BaseWorkload< MultiplicationQueueDescriptor >::m_Data, QueueDescriptor::m_Inputs, QueueDescriptor::m_Outputs, and QueueDescriptor::ValidateInputsOutputs().

Member Function Documentation

◆ Execute()

void Execute () const override

Implements IWorkload.

Definition at line 78 of file NeonMultiplicationWorkload.cpp.

{
    ARMNN_SCOPED_PROFILING_EVENT_NEON_GUID("NeonMultiplicationWorkload_Execute", this->GetGuid());
    m_PixelWiseMultiplication->run();
}

References ARMNN_SCOPED_PROFILING_EVENT_NEON_GUID, and BaseWorkload< MultiplicationQueueDescriptor >::GetGuid().


The documentation for this class was generated from the following files:
NeonMultiplicationWorkload.hpp
NeonMultiplicationWorkload.cpp