authorSiCong Li <sicong.li@arm.com>2022-03-21 15:34:21 +0000
committerSiCong Li <sicong.li@arm.com>2022-04-25 10:50:35 +0000
commit0a3948394e7e77344201b8732e9c20fcb5fa9a38 (patch)
treed80a559557f8ed6d573d9156bf1517d9ea85c3d0
parentf55cca5f17da72004108f92047d3177b7cdb1a76 (diff)
downloadComputeLibrary-0a3948394e7e77344201b8732e9c20fcb5fa9a38.tar.gz
Document data layout of weight tensors in convolution layers
Resolves COMPMID-5187 Signed-off-by: SiCong Li <sicong.li@arm.com> Change-Id: I4fddd1f1e7134896a40f62553d705fa5e411e00b Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/7405 Reviewed-by: Gian Marco Iodice <gianmarco.iodice@arm.com> Comments-Addressed: Arm Jenkins <bsgcomp@arm.com> Tested-by: Arm Jenkins <bsgcomp@arm.com>
-rw-r--r--  docs/user_guide/data_layout.dox  |  28
1 file changed, 25 insertions(+), 3 deletions(-)
diff --git a/docs/user_guide/data_layout.dox b/docs/user_guide/data_layout.dox
index ae69bbf457..711b85f08c 100644
--- a/docs/user_guide/data_layout.dox
+++ b/docs/user_guide/data_layout.dox
@@ -1,5 +1,5 @@
///
-/// Copyright (c) 2021 Arm Limited.
+/// Copyright (c) 2021-2022 Arm Limited.
///
/// SPDX-License-Identifier: MIT
///
@@ -29,8 +29,7 @@ namespace arm_compute
@section data_layout_support_supported_data_layout Supported Data Layouts
-Compute Library supports the following data layouts and
-the right-most letter represents the fastest changing dimension:
+With regard to convolution layers, Compute Library supports the following data layouts for input and output tensors:
- NHWC: The native layout of Compute Library that delivers the best performance where channels are in the fastest changing dimension
- NCHW: Legacy layout where width is in the fastest changing dimension
@@ -38,5 +37,28 @@ the right-most letter represents the fastest changing dimension:
, where N = batch, C = channel, H = height, W = width, D = depth.
+Note: The right-most letter represents the fastest changing dimension, which is the "lower dimension".
+The corresponding @ref TensorShape for each of the data layouts would be initialized as:
+
+- NHWC: TensorShape(C, W, H, N)
+- NCHW: TensorShape(W, H, C, N)
+- NDHWC: TensorShape(C, W, H, D, N)
+
+For 2D convolution, the weight / filter tensors are arranged in 4 dimensions: Height (H), Width (W), Input channel (I), Output channel (O).
+For 3D convolution, the additional Depth dimension means exactly the same as the Depth in the input / output layout.
+
+The layout of the weight tensors changes with that of the input / output tensors, and the dimensions can be mapped as:
+
+- Weight Height -> Height
+- Weight Width -> Width
+- Weight Input channel -> Channel
+- Weight Output channel -> Batch
+
+Therefore, the corresponding weight layouts for each input / output layout are:
+
+- (input/output tensor) NHWC: (weight tensor) OHWI
+- (input/output tensor) NCHW: (weight tensor) OIHW
+- (input/output tensor) NDHWC: (weight tensor) ODHWI
+
*/
} // namespace