From 1b71259bd697cfd40595972423c733daf8c38184 Mon Sep 17 00:00:00 2001
From: Viet-Hoa Do
Date: Wed, 16 Nov 2022 16:11:45 +0000
Subject: Fix documentation about BF16 acceleration

* Fix the heading and the code block.

Resolves: COMPMID-5546
Signed-off-by: Viet-Hoa Do
Change-Id: I60162b0e0aaf2a71a70e517aaeb8c75dd82d8dd9
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/8652
Benchmark: Arm Jenkins
Tested-by: Arm Jenkins
Reviewed-by: Pablo Marquez Tello
Comments-Addressed: Arm Jenkins
---
 docs/user_guide/library.dox | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/docs/user_guide/library.dox b/docs/user_guide/library.dox
index b95e0bace3..0501322254 100644
--- a/docs/user_guide/library.dox
+++ b/docs/user_guide/library.dox
@@ -54,14 +54,16 @@ When the fast-math flag is enabled, both Arm® Neon™ and CL convolution layers
 - no-fast-math: No Winograd support
 - fast-math: Supports Winograd 3x3,3x1,1x3,5x1,1x5,7x1,1x7,5x5,7x7
 
-@section BF16 acceleration
-
-- Required toolchain: android-ndk-r23-beta5 or later
-- To build for BF16: "neon" flag should be set "=1" and "arch" has to be "=armv8.6-a", "=armv8.6-a-sve", or "=armv8.6-a-sve2" using following command:
-- scons arch=armv8.6-a-sve neon=1 opencl=0 extra_cxx_flags="-fPIC" benchmark_tests=0 validation_tests=0 validation_examples=1 os=android Werror=0 toolchain_prefix=aarch64-linux-android29
-- To enable BF16 acceleration when running FP32 "fast-math" has to be enabled and that works only for Neon convolution layer using cpu gemm.
- In this scenario on CPU: the CpuGemmConv2d kernel performs the conversion from FP32, type of input tensor, to BF16 at block level to exploit the arithmetic capabilities dedicated to BF16. Then transforms back to FP32, the output
- tensor type.
+@section bf16_acceleration BF16 acceleration
+
+Required toolchain: android-ndk-r23-beta5 or later.
+
+To build for BF16: "neon" flag should be set "=1" and "arch" has to be "=armv8.6-a", "=armv8.6-a-sve", or "=armv8.6-a-sve2". For example:
+
+    scons arch=armv8.6-a-sve neon=1 opencl=0 extra_cxx_flags="-fPIC" benchmark_tests=0 validation_tests=0 validation_examples=1 os=android Werror=0 toolchain_prefix=aarch64-linux-android29
+
+To enable BF16 acceleration when running FP32 "fast-math" has to be enabled and that works only for Neon convolution layer using cpu gemm.
+In this scenario on CPU: the CpuGemmConv2d kernel performs the conversion from FP32, type of input tensor, to BF16 at block level to exploit the arithmetic capabilities dedicated to BF16. Then transforms back to FP32, the output tensor type.
 
 @section architecture_thread_safety Thread-safety
 
--
cgit v1.2.1
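
For reference, the fast-math behaviour the amended documentation describes is requested per layer through the library's public API. Below is a minimal C++ sketch (not part of the patch) of enabling fast math on the Neon convolution layer; the file name and tensor shapes are illustrative, and it assumes a Compute Library build for armv8.6-a (as produced by the scons command above) so that CpuGemmConv2d can take the BF16 path.

    // bf16_fast_math_example.cpp -- illustrative only.
    // Assumes an armv8.6-a (BF16-capable) build of Compute Library.
    #include "arm_compute/core/TensorInfo.h"
    #include "arm_compute/core/Types.h"
    #include "arm_compute/runtime/NEON/functions/NEConvolutionLayer.h"
    #include "arm_compute/runtime/Tensor.h"

    using namespace arm_compute;

    int main()
    {
        Tensor src, weights, biases, dst;
        // All tensors are FP32; shapes are arbitrary examples (W, H, C[, N]).
        src.allocator()->init(TensorInfo(TensorShape(224U, 224U, 3U), 1, DataType::F32));
        weights.allocator()->init(TensorInfo(TensorShape(3U, 3U, 3U, 16U), 1, DataType::F32));
        biases.allocator()->init(TensorInfo(TensorShape(16U), 1, DataType::F32));
        dst.allocator()->init(TensorInfo(TensorShape(222U, 222U, 16U), 1, DataType::F32));

        NEConvolutionLayer conv;
        // The enable_fast_math argument opts this layer into fast math. On a
        // BF16-capable build, CpuGemmConv2d may then convert the FP32 input to
        // BF16 at block level and convert the accumulated result back to FP32.
        conv.configure(&src, &weights, &biases, &dst,
                       PadStrideInfo(1, 1, 0, 0), WeightsInfo(), Size2D(1U, 1U),
                       ActivationLayerInfo(), /* enable_fast_math = */ true);

        src.allocator()->allocate();
        weights.allocator()->allocate();
        biases.allocator()->allocate();
        dst.allocator()->allocate();

        conv.run();
        return 0;
    }

Note that enable_fast_math is a request rather than a guarantee: on builds or CPUs without BF16 support, the layer simply runs its regular FP32 path.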