author    Georgios Pinitas <georgios.pinitas@arm.com>  2019-06-17 17:46:17 +0100
committer Georgios Pinitas <georgios.pinitas@arm.com>  2019-06-24 14:27:42 +0000
commit    db09b3783ff9af67c6d373b12aa9a6aff3c5d0f1 (patch)
tree      270aaba4e8e1553a32bc3e492f480fdbbaec3bd3 /docs
parent    3689fcd5915cd902cb4ea5f618f2a6e42f6dc4a1 (diff)
download  ComputeLibrary-db09b3783ff9af67c6d373b12aa9a6aff3c5d0f1.tar.gz
COMPMID-2392: Add documentation for import_memory interface
Change-Id: I943aefafe4131fc366d7ec336c9b94e89ce4fb8d
Signed-off-by: Georgios Pinitas <georgios.pinitas@arm.com>
Reviewed-on: https://review.mlplatform.org/c/1362
Reviewed-by: Michalis Spyrou <michalis.spyrou@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Diffstat (limited to 'docs')
-rw-r--r--  docs/01_library.dox | 24
1 file changed, 23 insertions(+), 1 deletion(-)
diff --git a/docs/01_library.dox b/docs/01_library.dox
index 359ca4794a..85af8a0ded 100644
--- a/docs/01_library.dox
+++ b/docs/01_library.dox
@@ -449,7 +449,29 @@ conv1.run();
conv2.run();
@endcode
-@section S4_8_opencl_tuner OpenCL Tuner
+@section S4_8_import_memory Import Memory Interface
+
+The @ref TensorAllocator and @ref CLTensorAllocator objects provide an interface for importing existing memory into a tensor as its backing memory.
+
+A simple NEON example is the following:
+@code{.cpp}
+// External backing memory
+void* external_ptr = ...;
+
+// Create and initialize tensor
+Tensor tensor;
+tensor.allocator()->init(tensor_info);
+
+// Import existing pointer as backing memory
+tensor.allocator()->import_memory(external_ptr);
+@endcode
+
+It is important to note the following:
+- Ownership of the backing memory is not transferred to the tensor itself.
+- The tensor must not be managed by a memory manager.
+- Padding requirements must be accounted for by the client code. In other words, if the tensor requires padding after the function configuration step, the imported backing memory must be large enough to cover it. Padding can be checked through the @ref TensorInfo::padding() interface.
+
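+The padding requirement above can be illustrated with a small, self-contained sketch. Note that padded_buffer_size below is a hypothetical helper written for illustration only, not a library function; real client code would obtain the padding values from @ref TensorInfo::padding() after configuration and size the imported buffer accordingly.
+
+@code{.cpp}
+#include <cassert>
+#include <cstddef>
+
+// Hypothetical helper mirroring what client code must do: the imported
+// buffer has to cover the padded extent of the tensor, not just its
+// logical shape. In real code the pad_* values would come from
+// TensorInfo::padding() after the function configuration step.
+std::size_t padded_buffer_size(std::size_t width, std::size_t height,
+                               std::size_t element_size,
+                               std::size_t pad_left, std::size_t pad_right,
+                               std::size_t pad_top, std::size_t pad_bottom)
+{
+    const std::size_t padded_w = width + pad_left + pad_right;
+    const std::size_t padded_h = height + pad_top + pad_bottom;
+    return padded_w * padded_h * element_size;
+}
+
+int main()
+{
+    // A 16x16 float tensor with 1 element of padding on each side needs
+    // (16 + 2) * (16 + 2) * 4 = 1296 bytes of backing memory,
+    // not the unpadded 16 * 16 * 4 = 1024 bytes.
+    assert(padded_buffer_size(16, 16, sizeof(float), 1, 1, 1, 1) == 1296);
+    return 0;
+}
+@endcode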
+@section S4_9_opencl_tuner OpenCL Tuner
OpenCL kernels when dispatched to the GPU take two arguments:
- The Global Workgroup Size (GWS): That's the number of times to run an OpenCL kernel to process all the elements we want to process.