ArmNN 23.02
RefTensorHandle Class Reference

#include <RefTensorHandle.hpp>

Inheritance diagram for RefTensorHandle:
ITensorHandle

Public Member Functions

 RefTensorHandle (const TensorInfo &tensorInfo, std::shared_ptr< RefMemoryManager > &memoryManager)
 
 RefTensorHandle (const TensorInfo &tensorInfo)
 
 ~RefTensorHandle ()
 
virtual void Manage () override
 Indicate to the memory manager that this resource is active. More...
 
virtual void Allocate () override
 Indicate to the memory manager that this resource is no longer active. More...
 
virtual ITensorHandle * GetParent () const override
 Get the parent tensor if this is a subtensor. More...
 
virtual const void * Map (bool) const override
 Map the tensor data for access. More...
 
virtual void Unmap () const override
 Unmap the tensor data. More...
 
TensorShape GetStrides () const override
 Get the strides for each dimension ordered from largest to smallest where the smallest value is the same as the size of a single element in the tensor. More...
 
TensorShape GetShape () const override
 Get the number of elements for each dimension ordered from slowest iterating dimension to fastest iterating dimension. More...
 
const TensorInfo & GetTensorInfo () const
 
virtual MemorySourceFlags GetImportFlags () const override
 Get flags describing supported import sources. More...
 
virtual bool Import (void *memory, MemorySource source) override
 Import externally allocated memory. More...
 
virtual bool CanBeImported (void *memory, MemorySource source) override
 Implementations must determine if this memory block can be imported. More...
 
virtual const void * Map (bool blocking=true) const=0
 Map the tensor data for access. More...
 
void * Map (bool blocking=true)
 Map the tensor data for access. More...
 
- Public Member Functions inherited from ITensorHandle
virtual ~ITensorHandle ()
 
void * Map (bool blocking=true)
 Map the tensor data for access. More...
 
void Unmap ()
 Unmap the tensor data that was previously mapped with a call to Map(). More...
 
virtual void Unimport ()
 Unimport externally allocated memory. More...
 

Detailed Description

Definition at line 15 of file RefTensorHandle.hpp.
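
RefTensorHandle is the tensor handle used by ArmNN's reference (CPU) backend. The sketch below is purely illustrative: it assumes the reference backend headers (RefTensorHandle.hpp and, by assumption, RefMemoryManager.hpp) are on the include path and that RefMemoryManager can be default-constructed. It shows the two ways a handle obtains storage: managed through a shared RefMemoryManager pool, or unmanaged via its own allocation.

#include <RefTensorHandle.hpp>
#include <RefMemoryManager.hpp> // assumed header name for the reference backend's memory manager
#include <armnn/Tensor.hpp>
#include <memory>

using namespace armnn;

void LifetimeSketch() // illustrative name
{
    TensorInfo info({ 2, 3 }, DataType::Float32);

    // Managed path: the handle registers its lifetime, then is bound to a pool.
    auto memoryManager = std::make_shared<RefMemoryManager>(); // assumes default constructibility
    RefTensorHandle managed(info, memoryManager);
    managed.Manage();   // declare the resource active; must precede Allocate()
    managed.Allocate(); // bind the handle to its pool inside the memory manager

    // Unmanaged path: no memory manager, so Allocate() creates the handle's own block.
    RefTensorHandle unmanaged(info);
    unmanaged.Allocate();

    // Map()/Unmap() bracket any access to the tensor data.
    float* data = static_cast<float*>(unmanaged.Map(true));
    data[0] = 1.0f;
    unmanaged.Unmap();
}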

Constructor & Destructor Documentation

◆ RefTensorHandle() [1/2]

RefTensorHandle ( const TensorInfo &  tensorInfo,
std::shared_ptr< RefMemoryManager > &  memoryManager 
)

Definition at line 10 of file RefTensorHandle.cpp.

10  :
11  m_TensorInfo(tensorInfo),
12  m_MemoryManager(memoryManager),
13  m_Pool(nullptr),
14  m_UnmanagedMemory(nullptr),
15  m_ImportedMemory(nullptr)
16 {
17 
18 }

◆ RefTensorHandle() [2/2]

RefTensorHandle ( const TensorInfo &  tensorInfo )

Definition at line 20 of file RefTensorHandle.cpp.

21  : m_TensorInfo(tensorInfo),
22  m_Pool(nullptr),
23  m_UnmanagedMemory(nullptr),
24  m_ImportedMemory(nullptr)
25 {
26 
27 }

◆ ~RefTensorHandle()

Definition at line 29 of file RefTensorHandle.cpp.

30 {
31  ::operator delete(m_UnmanagedMemory);
32 }

Member Function Documentation

◆ Allocate()

void Allocate ( )
overridevirtual

Indicate to the memory manager that this resource is no longer active.

This is used to compute overlapping lifetimes of resources.

Implements ITensorHandle.

Definition at line 45 of file RefTensorHandle.cpp.

46 {
47  if (!m_UnmanagedMemory)
48  {
49  if (!m_Pool)
50  {
51  // unmanaged
52  m_UnmanagedMemory = ::operator new(m_TensorInfo.GetNumBytes());
53  }
54  else
55  {
56  m_MemoryManager->Allocate(m_Pool);
57  }
58  }
59  else
60  {
61  throw InvalidArgumentException("RefTensorHandle::Allocate Trying to allocate a RefTensorHandle"
62  "that already has allocated memory.");
63  }
64 }

References TensorInfo::GetNumBytes().
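
As the listing shows, a handle can only be allocated once: a second call to Allocate() takes the else branch and throws InvalidArgumentException. A minimal sketch of that behaviour, with an illustrative function name:

#include <RefTensorHandle.hpp>
#include <armnn/Exceptions.hpp>
#include <armnn/Tensor.hpp>

using namespace armnn;

void AllocateOnce() // illustrative name
{
    TensorInfo info({ 1 }, DataType::Float32);
    RefTensorHandle handle(info);

    handle.Allocate();     // first call: allocates unmanaged memory
    try
    {
        handle.Allocate(); // second call: throws, the handle already owns memory
    }
    catch (const InvalidArgumentException&)
    {
        // expected: the handle was already allocated
    }
}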

◆ CanBeImported()

bool CanBeImported ( void *  memory,
MemorySource  source 
)
overridevirtual

Implementations must determine if this memory block can be imported.

This might be based on alignment or memory source type.

Returns
true if this memory can be imported.
false by default, cannot be imported.

Reimplemented from ITensorHandle.

Definition at line 128 of file RefTensorHandle.cpp.

129 {
130  if (source == MemorySource::Malloc)
131  {
132  uintptr_t alignment = GetDataTypeSize(m_TensorInfo.GetDataType());
133  if (reinterpret_cast<uintptr_t>(memory) % alignment)
134  {
135  return false;
136  }
137  return true;
138  }
139  return false;
140 }

References TensorInfo::GetDataType(), armnn::GetDataTypeSize(), and armnn::Malloc.

Referenced by RefTensorHandle::Import().
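
As the listing shows, only MemorySource::Malloc is accepted, and the candidate address must be aligned to the size of one element (GetDataTypeSize of the tensor's DataType). A minimal sketch, with an illustrative function name:

#include <RefTensorHandle.hpp>
#include <armnn/Tensor.hpp>
#include <vector>

using namespace armnn;

bool AlignmentCheck() // illustrative name
{
    TensorInfo info({ 4 }, DataType::Float32);
    RefTensorHandle handle(info);

    std::vector<float> buffer(4); // naturally float-aligned heap storage
    char* raw = reinterpret_cast<char*>(buffer.data());

    bool ok  = handle.CanBeImported(buffer.data(), MemorySource::Malloc); // true: aligned, Malloc source
    bool bad = handle.CanBeImported(raw + 1, MemorySource::Malloc);       // false: not element-aligned
    bool dma = handle.CanBeImported(buffer.data(), MemorySource::DmaBuf); // false: only Malloc is accepted
    return ok && !bad && !dma;
}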

◆ GetImportFlags()

MemorySourceFlags GetImportFlags ( ) const
overridevirtual

Get flags describing supported import sources.

Reimplemented from ITensorHandle.

Definition at line 105 of file RefTensorHandle.cpp.

106 {
107  return static_cast<MemorySourceFlags>(MemorySource::Malloc);
108 }

References armnn::Malloc.
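
MemorySourceFlags is a plain bit mask, so a caller can test whether a particular MemorySource is supported before attempting an import. A small sketch, with an illustrative helper name:

#include <RefTensorHandle.hpp>
#include <armnn/MemorySources.hpp>

using namespace armnn;

bool SupportsMallocImport(const RefTensorHandle& handle) // illustrative name
{
    MemorySourceFlags flags = handle.GetImportFlags();
    // Each MemorySource value is a single bit, so a bitwise AND tests membership.
    return (flags & static_cast<MemorySourceFlags>(MemorySource::Malloc)) != 0;
}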

◆ GetParent()

virtual ITensorHandle* GetParent ( ) const
inlineoverridevirtual

Get the parent tensor if this is a subtensor.

Returns
a pointer to the parent tensor. Otherwise nullptr if not a subtensor.

Implements ITensorHandle.

Definition at line 28 of file RefTensorHandle.hpp.

29  {
30  return nullptr;
31  }

◆ GetShape()

TensorShape GetShape ( ) const
inlineoverridevirtual

Get the number of elements for each dimension ordered from slowest iterating dimension to fastest iterating dimension.

Returns
a TensorShape filled with the number of elements for each dimension.

Implements ITensorHandle.

Definition at line 44 of file RefTensorHandle.hpp.

45  {
46  return m_TensorInfo.GetShape();
47  }

References TensorInfo::GetShape().

◆ GetStrides()

TensorShape GetStrides ( ) const
inlineoverridevirtual

Get the strides for each dimension ordered from largest to smallest where the smallest value is the same as the size of a single element in the tensor.

Returns
a TensorShape filled with the strides for each dimension

Implements ITensorHandle.

Definition at line 39 of file RefTensorHandle.hpp.

40  {
41  return GetUnpaddedTensorStrides(m_TensorInfo);
42  }

References armnn::GetUnpaddedTensorStrides().
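
Because the returned strides are expressed in bytes, with the smallest stride equal to the element size, they can be combined with a multi-dimensional index to locate an element in the mapped buffer. The helper below is a hypothetical sketch built only from the accessors documented on this page:

#include <RefTensorHandle.hpp>
#include <armnn/Tensor.hpp>
#include <vector>

using namespace armnn;

// Hypothetical helper: byte offset of the element at the given multi-dimensional index.
unsigned int ByteOffset(const RefTensorHandle& handle, const std::vector<unsigned int>& index)
{
    TensorShape strides = handle.GetStrides(); // bytes, ordered from largest to smallest
    unsigned int offset = 0;
    for (unsigned int d = 0; d < strides.GetNumDimensions(); ++d)
    {
        offset += index[d] * strides[d];
    }
    return offset;
}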

◆ GetTensorInfo()

const TensorInfo& GetTensorInfo ( ) const
inline

Definition at line 49 of file RefTensorHandle.hpp.

50  {
51  return m_TensorInfo;
52  }

◆ Import()

bool Import ( void *  memory,
MemorySource  source 
)
overridevirtual

Import externally allocated memory.

Parameters
memory    base address of the memory being imported.
source    source of the allocation for the memory being imported.
Returns
true on success or false on failure

Reimplemented from ITensorHandle.

Definition at line 110 of file RefTensorHandle.cpp.

111 {
112  if (source == MemorySource::Malloc)
113  {
114  // Check memory alignment
115  if(!CanBeImported(memory, source))
116  {
117  m_ImportedMemory = nullptr;
118  return false;
119  }
120 
121  m_ImportedMemory = memory;
122  return true;
123  }
124 
125  return false;
126 }

References RefTensorHandle::CanBeImported(), and armnn::Malloc.
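
A typical import hands the handle a caller-owned, Malloc-backed buffer that satisfies the alignment rule checked by CanBeImported(); the buffer must remain valid for as long as the handle uses it. A minimal sketch, with an illustrative function name:

#include <RefTensorHandle.hpp>
#include <armnn/Tensor.hpp>
#include <vector>

using namespace armnn;

bool ImportSketch() // illustrative name
{
    TensorInfo info({ 2, 2 }, DataType::Float32);
    RefTensorHandle handle(info);

    // Caller-owned, float-aligned storage sized from the tensor info.
    std::vector<float> buffer(info.GetNumElements(), 0.0f);

    if (!handle.Import(buffer.data(), MemorySource::Malloc))
    {
        return false; // rejected: unsupported source or misaligned address
    }

    // After a successful import, Map() returns the imported pointer.
    float* data = static_cast<float*>(handle.Map(true));
    data[0] = 42.0f;
    handle.Unmap();
    return true;
}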

◆ Manage()

void Manage ( )
overridevirtual

Indicate to the memory manager that this resource is active.

This is used to compute overlapping lifetimes of resources.

Implements ITensorHandle.

Definition at line 34 of file RefTensorHandle.cpp.

35 {
36  ARMNN_ASSERT_MSG(!m_Pool, "RefTensorHandle::Manage() called twice");
37  ARMNN_ASSERT_MSG(!m_UnmanagedMemory, "RefTensorHandle::Manage() called after Allocate()");
38 
39  if (m_MemoryManager)
40  {
41  m_Pool = m_MemoryManager->Manage(m_TensorInfo.GetNumBytes());
42  }
43 }

References ARMNN_ASSERT_MSG, and TensorInfo::GetNumBytes().

◆ Map() [1/3]

void * Map ( bool blocking = true )
inline

Map the tensor data for access.

Must be paired with call to Unmap().

Parameters
blocking    hint to block the calling thread until all other accesses are complete. (backend dependent)
Returns
pointer to the first element of the mapped data.

Definition at line 43 of file ITensorHandle.hpp.

44  {
45  return const_cast<void*>(static_cast<const ITensorHandle*>(this)->Map(blocking));
46  }

◆ Map() [2/3]

virtual const void * Map ( bool blocking = true ) const
pure virtual

Map the tensor data for access.

Parameters
blocking    hint to block the calling thread until all other accesses are complete. (backend dependent)
Returns
pointer to the first element of the mapped data.

◆ Map() [3/3]

const void * Map ( bool  blocking) const
overridevirtual

Map the tensor data for access.

Parameters
blocking    hint to block the calling thread until all other accesses are complete. (backend dependent)
Returns
pointer to the first element of the mapped data.

Implements ITensorHandle.

Definition at line 66 of file RefTensorHandle.cpp.

67 {
68  return GetPointer();
69 }
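
The three Map() entries above are the const override defined here, the pure virtual it implements, and the non-const convenience wrapper inherited from ITensorHandle (listed among the public member functions above). A short sketch of both access paths; note that the reference backend's override takes blocking as an unnamed parameter, so the hint has no effect here:

#include <RefTensorHandle.hpp>

using namespace armnn;

void MapUnmapSketch(RefTensorHandle& handle) // illustrative name; assumes the handle already has memory
{
    // Non-const wrapper: writable pointer, paired with Unmap().
    float* writable = static_cast<float*>(handle.Map(true));
    writable[0] = 1.0f;
    handle.Unmap();

    // Const override: read-only access through a const reference.
    const RefTensorHandle& constHandle = handle;
    const float* readOnly = static_cast<const float*>(constHandle.Map(true));
    float first = readOnly[0];
    constHandle.Unmap();
    static_cast<void>(first); // value unused in this sketch
}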

◆ Unmap()

virtual void Unmap ( ) const
inlineoverridevirtual

Unmap the tensor data.

Implements ITensorHandle.

Definition at line 36 of file RefTensorHandle.hpp.

37  {}

The documentation for this class was generated from the following files:
RefTensorHandle.hpp
RefTensorHandle.cpp