From 31441595009182c985dacbedc70c41ee6664d070 Mon Sep 17 00:00:00 2001
From: Ryan OShea
Date: Mon, 7 Nov 2022 16:20:48 +0000
Subject: IVGCVSW-7214 Disable BF16-Turbo-Mode and remove conversion layers

- Remove Bf16ToFp32 Conversion Layer
- Remove Fp32ToBf16 Conversion Layer
- Remove Bf16 Conversion tests
* Throw exception if m_ReduceFp32ToBf16 optimizer option is set to true
* Provide comments to enable fast math in order to use bf16
* Update docs to inform users to enable fast math for bf16

Execute Network Changes
* Require bf16_turbo_mode to also have fast_math_enabled set to true
- Remove setting m_ReduceFp32ToBf16 optimizer option

Signed-off-by: Ryan OShea
Change-Id: Ibaa6da9d29c96a1ce32ff5196b0847fde9f04a1c
---
 python/pyarmnn/src/pyarmnn/swig/modules/armnn_network.i | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/python/pyarmnn/src/pyarmnn/swig/modules/armnn_network.i b/python/pyarmnn/src/pyarmnn/swig/modules/armnn_network.i
index f91bccc449..c9eef8630d 100644
--- a/python/pyarmnn/src/pyarmnn/swig/modules/armnn_network.i
+++ b/python/pyarmnn/src/pyarmnn/swig/modules/armnn_network.i
@@ -25,8 +25,8 @@ Struct for holding options relating to the Arm NN optimizer. See `Optimize`.
 Contains:
     m_debug (bool): Add debug data for easier troubleshooting.
-    m_ReduceFp32ToBf16 (bool): Reduces Fp32 network to BFloat16 (Bf16) for faster processing. Layers
-                               that can not be reduced will be left in Fp32.
+    m_ReduceFp32ToBf16 (bool): This feature has been replaced by enabling Fast Math in compute library backend options.
+                               This is currently a placeholder option.
     m_ReduceFp32ToFp16 (bool): Reduces Fp32 network to Fp16 for faster processing. Layers
                                that can not be reduced will be left in Fp32.
     m_ImportEnabled (bool): Enable memory import of inport tensors.
--
cgit v1.2.1
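
For scripts that previously set m_ReduceFp32ToBf16, a minimal pyarmnn sketch of the replacement flow is below. It assumes the Python bindings expose BackendOptions, BackendOption, and an m_ModelOptions field mirroring the C++ API (check your pyarmnn build for the exact signatures); 'model.tflite' is a hypothetical path.

```python
# Minimal sketch, not part of the patch: request fast-math (and thus Bf16)
# kernels through Compute Library backend options instead of the retired
# m_ReduceFp32ToBf16 flag. BackendOptions/BackendOption/m_ModelOptions are
# assumed to mirror the C++ API; 'model.tflite' is a hypothetical path.
import pyarmnn as ann

runtime = ann.IRuntime(ann.CreationOptions())
parser = ann.ITfLiteParser()
network = parser.CreateNetworkFromBinaryFile('model.tflite')

# "FastMathEnabled" lets the CpuAcc backend choose faster kernels,
# including BFloat16 ones on hardware that supports them.
cpu_acc_options = ann.BackendOptions(ann.BackendId('CpuAcc'))
cpu_acc_options.AddOption(ann.BackendOption('FastMathEnabled', True))

opt_options = ann.OptimizerOptions()
opt_options.m_ModelOptions = [cpu_acc_options]  # assumed field, as in C++
# m_ReduceFp32ToBf16 is now a placeholder; setting it to True throws.

opt_network, messages = ann.Optimize(
    network,
    [ann.BackendId('CpuAcc'), ann.BackendId('CpuRef')],
    runtime.GetDeviceSpec(),
    opt_options)
```

The same pairing applies on the ExecuteNetwork side: per this patch, bf16_turbo_mode is only honoured when fast_math_enabled is also set.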