ArmNN 24.05
NeonTensorHandle Class Reference

#include <NeonTensorHandle.hpp>

Inheritance diagram for NeonTensorHandle
Collaboration diagram for NeonTensorHandle

Public Member Functions

 NeonTensorHandle (const TensorInfo &tensorInfo)
 
 NeonTensorHandle (const TensorInfo &tensorInfo, DataLayout dataLayout, MemorySourceFlags importFlags=static_cast< MemorySourceFlags >(MemorySource::Malloc))
 
arm_compute::ITensor & GetTensor () override
 
arm_compute::ITensor const & GetTensor () const override
 
virtual void Allocate () override
 Indicate to the memory manager that this resource is no longer active. More...
 
virtual void Manage () override
 Indicate to the memory manager that this resource is active. More...
 
virtual ITensorHandle * GetParent () const override
 Get the parent tensor if this is a subtensor. More...
 
virtual arm_compute::DataType GetDataType () const override
 
virtual void SetMemoryGroup (const std::shared_ptr< arm_compute::IMemoryGroup > &memoryGroup) override
 
virtual const void * Map (bool) const override
 Map the tensor data for access. More...
 
virtual void Unmap () const override
 Unmap the tensor data. More...
 
TensorShape GetStrides () const override
 Get the strides for each dimension ordered from largest to smallest where the smallest value is the same as the size of a single element in the tensor. More...
 
TensorShape GetShape () const override
 Get the number of elements for each dimension ordered from slowest iterating dimension to fastest iterating dimension. More...
 
void SetImportFlags (MemorySourceFlags importFlags)
 
MemorySourceFlags GetImportFlags () const override
 Get flags describing supported import sources. More...
 
void SetImportEnabledFlag (bool importEnabledFlag)
 
bool CanBeImported (void *memory, MemorySource source) override
 Implementations must determine if this memory block can be imported. More...
 
virtual bool Import (void *memory, MemorySource source) override
 Import externally allocated memory. More...
 
virtual std::shared_ptr< ITensorHandle > DecorateTensorHandle (const TensorInfo &tensorInfo) override
 Returns a decorated version of this TensorHandle allowing us to override the TensorInfo for it. More...
 
- Public Member Functions inherited from ITensorHandle
virtual ~ITensorHandle ()
 
void * Map (bool blocking=true)
 Map the tensor data for access. More...
 
void Unmap ()
 Unmap the tensor data that was previously mapped with a call to Map(). More...
 
virtual void Unimport ()
 Unimport externally allocated memory. More...
 

Detailed Description

Definition at line 28 of file NeonTensorHandle.hpp.

Constructor & Destructor Documentation

◆ NeonTensorHandle() [1/2]

NeonTensorHandle ( const TensorInfo & tensorInfo )
inline

Definition at line 31 of file NeonTensorHandle.hpp.

32  : m_ImportFlags(static_cast<MemorySourceFlags>(MemorySource::Malloc)),
33  m_Imported(false),
34  m_IsImportEnabled(false),
35  m_TypeAlignment(GetDataTypeSize(tensorInfo.GetDataType()))
36  {
37  armnn::armcomputetensorutils::BuildArmComputeTensor(m_Tensor, tensorInfo);
38  }

References armnn::Malloc.

◆ NeonTensorHandle() [2/2]

NeonTensorHandle ( const TensorInfo & tensorInfo,
DataLayout  dataLayout,
MemorySourceFlags  importFlags = static_cast<MemorySourceFlags>(MemorySource::Malloc) 
)
inline

Definition at line 40 of file NeonTensorHandle.hpp.

43  : m_ImportFlags(importFlags),
44  m_Imported(false),
45  m_IsImportEnabled(false),
46  m_TypeAlignment(GetDataTypeSize(tensorInfo.GetDataType()))
47 
48 
49  {
50  armnn::armcomputetensorutils::BuildArmComputeTensor(m_Tensor, tensorInfo, dataLayout);
51  }
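
A minimal usage sketch of the two constructors; the shape, data type and layout values are illustrative assumptions, not taken from this page:

    #include <NeonTensorHandle.hpp>

    // Describe the tensor: armnn::TensorInfo takes a shape and a data type.
    armnn::TensorInfo info({ 1, 3, 224, 224 }, armnn::DataType::Float32);

    // Default construction: import flags default to MemorySource::Malloc.
    armnn::NeonTensorHandle handle(info);

    // Or pass an explicit data layout and import flags.
    armnn::NeonTensorHandle nhwcHandle(info, armnn::DataLayout::NHWC,
                                       static_cast<armnn::MemorySourceFlags>(armnn::MemorySource::Malloc));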

Member Function Documentation

◆ Allocate()

virtual void Allocate ( )
inlineoverridevirtual

Indicate to the memory manager that this resource is no longer active.

This is used to compute overlapping lifetimes of resources.

Implements ITensorHandle.

Definition at line 56 of file NeonTensorHandle.hpp.

57  {
58  // If we have enabled Importing, don't Allocate the tensor
59  if (!m_IsImportEnabled)
60  {
61  armnn::armcomputetensorutils::InitialiseArmComputeTensorEmpty(m_Tensor);
62  }
63  };

◆ CanBeImported()

bool CanBeImported ( void *  memory,
MemorySource  source 
)
inlineoverridevirtual

Implementations must determine if this memory block can be imported.

This might be based on alignment or memory source type.

Returns
true if this memory can be imported; false by default (the memory cannot be imported).

Reimplemented from ITensorHandle.

Definition at line 119 of file NeonTensorHandle.hpp.

120  {
121  if (source != MemorySource::Malloc || reinterpret_cast<uintptr_t>(memory) % m_TypeAlignment)
122  {
123  return false;
124  }
125  return true;
126  }

References armnn::Malloc.

Referenced by NeonTensorHandle::Import().
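
A short sketch of the check this member performs; buffer is a hypothetical caller-owned allocation and handle a previously constructed NeonTensorHandle:

    std::vector<float> buffer(16);  // malloc-backed and aligned to sizeof(float)
    bool importable = handle.CanBeImported(buffer.data(), armnn::MemorySource::Malloc);
    // false for any source other than MemorySource::Malloc, or for a pointer
    // that is not aligned to the tensor's element size.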

◆ DecorateTensorHandle()

std::shared_ptr< ITensorHandle > DecorateTensorHandle ( const TensorInfo & tensorInfo )
overridevirtual

Returns a decorated version of this TensorHandle allowing us to override the TensorInfo for it.

Parameters
tensorInfo  the overridden TensorInfo.

Reimplemented from ITensorHandle.

Definition at line 12 of file NeonTensorHandle.cpp.

13 {
14  auto* parent = const_cast<NeonTensorHandle*>(this);
15  auto decorated = std::make_shared<NeonTensorHandleDecorator>(parent, tensorInfo);
16  m_Decorated.emplace_back(decorated);
17  return decorated;
18 }
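
A hedged sketch of decorating a handle with an overridden TensorInfo; the reshaped info is an illustrative assumption:

    // Reinterpret the same backing tensor through a different TensorInfo.
    armnn::TensorInfo overriddenInfo({ 16, 1 }, armnn::DataType::Float32);
    std::shared_ptr<armnn::ITensorHandle> decorated = handle.DecorateTensorHandle(overriddenInfo);
    // The decorator points back at this handle as its parent, and this handle
    // keeps the decorator alive in m_Decorated.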

◆ GetDataType()

virtual arm_compute::DataType GetDataType ( ) const
inlineoverridevirtual

Implements IAclTensorHandle.

Definition at line 77 of file NeonTensorHandle.hpp.

78  {
79  return m_Tensor.info()->data_type();
80  }

◆ GetImportFlags()

MemorySourceFlags GetImportFlags ( ) const
inlineoverridevirtual

Get flags describing supported import sources.

Reimplemented from ITensorHandle.

Definition at line 109 of file NeonTensorHandle.hpp.

110  {
111  return m_ImportFlags;
112  }

◆ GetParent()

virtual ITensorHandle* GetParent ( ) const
inlineoverridevirtual

Get the parent tensor if this is a subtensor.

Returns
a pointer to the parent tensor, or nullptr if this is not a subtensor.

Implements ITensorHandle.

Definition at line 75 of file NeonTensorHandle.hpp.

75 { return nullptr; }

◆ GetShape()

TensorShape GetShape ( ) const
inlineoverridevirtual

Get the number of elements for each dimension ordered from slowest iterating dimension to fastest iterating dimension.

Returns
a TensorShape filled with the number of elements for each dimension.

Implements ITensorHandle.

Definition at line 99 of file NeonTensorHandle.hpp.

100  {
101  return armcomputetensorutils::GetShape(m_Tensor.info()->tensor_shape());
102  }

Referenced by NeonRankWorkload::Execute().

◆ GetStrides()

TensorShape GetStrides ( ) const
inlineoverridevirtual

Get the strides for each dimension ordered from largest to smallest where the smallest value is the same as the size of a single element in the tensor.

Returns
a TensorShape filled with the strides for each dimension.

Implements ITensorHandle.

Definition at line 94 of file NeonTensorHandle.hpp.

95  {
96  return armcomputetensorutils::GetStrides(m_Tensor.info()->strides_in_bytes());
97  }
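
Both GetShape() and GetStrides() translate the arm_compute tensor metadata into armnn::TensorShape values, as in this brief sketch:

    armnn::TensorShape shape   = handle.GetShape();   // element counts, slowest to fastest dimension
    armnn::TensorShape strides = handle.GetStrides(); // byte strides, largest to smallest
    unsigned int rank = shape.GetNumDimensions();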

◆ GetTensor() [1/2]

arm_compute::ITensor const& GetTensor ( ) const
inlineoverridevirtual

Implements IAclTensorHandle.

Definition at line 54 of file NeonTensorHandle.hpp.

54 { return m_Tensor; }

◆ GetTensor() [2/2]

arm_compute::ITensor& GetTensor ( )
inlineoverridevirtual

Implements IAclTensorHandle.

Definition at line 53 of file NeonTensorHandle.hpp.

53 { return m_Tensor; }

◆ Import()

virtual bool Import ( void *  memory,
MemorySource  source 
)
inlineoverridevirtual

Import externally allocated memory.

Parameters
memory  base address of the memory being imported.
source  source of the allocation for the memory being imported.
Returns
true on success or false on failure

Reimplemented from ITensorHandle.

Definition at line 128 of file NeonTensorHandle.hpp.

129  {
130  if (m_ImportFlags & static_cast<MemorySourceFlags>(source))
131  {
132  if (source == MemorySource::Malloc && m_IsImportEnabled)
133  {
134  if (!CanBeImported(memory, source))
135  {
136  throw MemoryImportException("NeonTensorHandle::Import Attempting to import unaligned memory");
137  }
138 
139  // m_Tensor not yet Allocated
140  if (!m_Imported && !m_Tensor.buffer())
141  {
142  arm_compute::Status status = m_Tensor.allocator()->import_memory(memory);
143  // Use the overloaded bool operator of Status to check if it worked, if not throw an exception
144  // with the Status error message
145  m_Imported = bool(status);
146  if (!m_Imported)
147  {
148  throw MemoryImportException(status.error_description());
149  }
150  return m_Imported;
151  }
152 
153  // m_Tensor.buffer() initially allocated with Allocate().
154  if (!m_Imported && m_Tensor.buffer())
155  {
156  throw MemoryImportException(
157  "NeonTensorHandle::Import Attempting to import on an already allocated tensor");
158  }
159 
160  // m_Tensor.buffer() previously imported.
161  if (m_Imported)
162  {
163  arm_compute::Status status = m_Tensor.allocator()->import_memory(memory);
164  // Use the overloaded bool operator of Status to check if it worked, if not throw an exception
165  // with the Status error message
166  m_Imported = bool(status);
167  if (!m_Imported)
168  {
169  throw MemoryImportException(status.error_description());
170  }
171  return m_Imported;
172  }
173  }
174  else
175  {
176  throw MemoryImportException("NeonTensorHandle::Import is disabled");
177  }
178  }
179  else
180  {
181  throw MemoryImportException("NeonTensorHandle::Incorrect import flag");
182  }
183  return false;
184  }

References NeonTensorHandle::CanBeImported(), and armnn::Malloc.
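
A hedged end-to-end sketch of the import path; the tensor shape, buffer and flags are illustrative assumptions:

    armnn::TensorInfo info({ 1, 16 }, armnn::DataType::Float32);
    armnn::NeonTensorHandle handle(info);

    handle.SetImportFlags(static_cast<armnn::MemorySourceFlags>(armnn::MemorySource::Malloc));
    handle.SetImportEnabledFlag(true);   // take the import path instead of Manage()/Allocate()

    std::vector<float> buffer(info.GetNumElements()); // malloc-backed, float-aligned
    try
    {
        // Throws MemoryImportException on misaligned memory, a disabled import path,
        // an already allocated tensor, or a failed arm_compute import.
        bool imported = handle.Import(buffer.data(), armnn::MemorySource::Malloc);
    }
    catch (const armnn::MemoryImportException&)
    {
        // Handle the failure, e.g. fall back to Allocate() and copying the data.
    }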

◆ Manage()

virtual void Manage ( )
inlineoverridevirtual

Indicate to the memory manager that this resource is active.

This is used to compute overlapping lifetimes of resources.

Implements ITensorHandle.

Definition at line 65 of file NeonTensorHandle.hpp.

66  {
67  // If we have enabled Importing, don't manage the tensor
68  if (!m_IsImportEnabled)
69  {
70  ARMNN_THROW_INVALIDARG_MSG_IF_FALSE(m_MemoryGroup, "arm_compute::MemoryGroup is null.");
71  m_MemoryGroup->manage(&m_Tensor);
72  }
73  }

References ARMNN_THROW_INVALIDARG_MSG_IF_FALSE.
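
When importing is disabled, SetMemoryGroup(), Manage() and Allocate() make up the managed lifetime path, roughly as in this sketch (the memory group setup is an illustrative assumption):

    auto memoryGroup = std::make_shared<arm_compute::MemoryGroup>();
    handle.SetMemoryGroup(memoryGroup);

    handle.Manage();    // registers m_Tensor with the memory group
    handle.Allocate();  // initialises the arm_compute tensor's backing memory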

◆ Map()

virtual const void* Map ( bool  blocking) const
inlineoverridevirtual

Map the tensor data for access.

Parameters
blocking  hint to block the calling thread until all other accesses are complete (backend dependent).
Returns
pointer to the first element of the mapped data.

Implements ITensorHandle.

Definition at line 87 of file NeonTensorHandle.hpp.

88  {
89  return static_cast<const void*>(m_Tensor.buffer() + m_Tensor.info()->offset_first_element_in_bytes());
90  }
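
A brief sketch of reading the tensor contents back through the mapping interface; a float element type is an assumption:

    const float* data = static_cast<const float*>(handle.Map(/*blocking=*/true));
    float firstElement = data[0];   // first element of the mapped data
    handle.Unmap();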

◆ SetImportEnabledFlag()

void SetImportEnabledFlag ( bool  importEnabledFlag)
inline

Definition at line 114 of file NeonTensorHandle.hpp.

115  {
116  m_IsImportEnabled = importEnabledFlag;
117  }

◆ SetImportFlags()

void SetImportFlags ( MemorySourceFlags  importFlags)
inline

Definition at line 104 of file NeonTensorHandle.hpp.

105  {
106  m_ImportFlags = importFlags;
107  }

◆ SetMemoryGroup()

virtual void SetMemoryGroup ( const std::shared_ptr< arm_compute::IMemoryGroup > &  memoryGroup)
inlineoverridevirtual

Implements IAclTensorHandle.

Definition at line 82 of file NeonTensorHandle.hpp.

83  {
84  m_MemoryGroup = PolymorphicPointerDowncast<arm_compute::MemoryGroup>(memoryGroup);
85  }

◆ Unmap()

virtual void Unmap ( ) const
inlineoverridevirtual

Unmap the tensor data.

Implements ITensorHandle.

Definition at line 92 of file NeonTensorHandle.hpp.

92 {}

The documentation for this class was generated from the following files:
NeonTensorHandle.hpp
NeonTensorHandle.cpp