author     Ryan OShea <ryan.oshea3@arm.com>   2022-11-07 16:20:48 +0000
committer  ryan.oshea3 <ryan.oshea3@arm.com>  2022-11-16 15:22:50 +0000
commit     31441595009182c985dacbedc70c41ee6664d070
tree       248a85295aeff4022c9b395fc97748b0a0aa6b35 /src/backends/neon/workloads/NeonConvertBf16ToFp32Workload.hpp
parent     bd18eab07a8f30492de1e462b1815189014cb8d5
download   armnn-31441595009182c985dacbedc70c41ee6664d070.tar.gz
IVGCVSW-7214 Disable BF16-Turbo-Mode and remove conversion layers
- Remove Bf16ToFp32 Conversion Layer
- Remove Fp32ToBf16 Conversion Layer
- Remove Bf16 Conversion tests
* Throw an exception if the m_ReduceFp32ToBf16 optimizer option is set to true
* Provide comments explaining that fast math must be enabled in order to use bf16
* Update docs to inform users to enable fast math for bf16
ExecuteNetwork changes:
* Require bf16_turbo_mode to also have fast_math_enabled set to true
- Remove setting of the m_ReduceFp32ToBf16 optimizer option
Signed-off-by: Ryan OShea <ryan.oshea3@arm.com>
Change-Id: Ibaa6da9d29c96a1ce32ff5196b0847fde9f04a1c
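
The behaviour the commit message describes can be sketched as a small validation routine. This is a hypothetical, standalone illustration only (the struct, function names, and exception messages are illustrative, not the actual Arm NN API): requesting the now-removed Fp32-to-Bf16 reduction is rejected outright, and ExecuteNetwork's bf16 turbo mode is only accepted together with fast math, which is what allows backends to use bf16 kernels internally.

```cpp
#include <stdexcept>

// Illustrative stand-in for the relevant optimizer options
// (Arm NN's real OptimizerOptions carries more fields).
struct OptimizerOptions
{
    bool m_ReduceFp32ToBf16 = false;
    bool m_FastMathEnabled  = false;
};

// Validate options in the spirit of this change:
// - m_ReduceFp32ToBf16 == true is no longer supported and throws
// - bf16 turbo mode requires fast math to be enabled
void ValidateOptions(const OptimizerOptions& opts, bool bf16TurboMode)
{
    if (opts.m_ReduceFp32ToBf16)
    {
        throw std::invalid_argument(
            "The m_ReduceFp32ToBf16 optimizer option is no longer supported");
    }
    if (bf16TurboMode && !opts.m_FastMathEnabled)
    {
        throw std::invalid_argument(
            "bf16_turbo_mode requires fast_math_enabled to be set to true");
    }
}
```

With this check in place, a caller passing `bf16_turbo_mode` without `fast_math_enabled` fails early instead of silently running without bf16 acceleration.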
Diffstat (limited to 'src/backends/neon/workloads/NeonConvertBf16ToFp32Workload.hpp')
-rw-r--r-- | src/backends/neon/workloads/NeonConvertBf16ToFp32Workload.hpp | 31 |
1 file changed, 0 insertions, 31 deletions
```diff
diff --git a/src/backends/neon/workloads/NeonConvertBf16ToFp32Workload.hpp b/src/backends/neon/workloads/NeonConvertBf16ToFp32Workload.hpp
deleted file mode 100644
index 9d44ad2cac..0000000000
--- a/src/backends/neon/workloads/NeonConvertBf16ToFp32Workload.hpp
+++ /dev/null
@@ -1,31 +0,0 @@
-//
-// Copyright © 2020 Arm Ltd and Contributors. All rights reserved.
-// SPDX-License-Identifier: MIT
-//
-
-#pragma once
-
-#include <armnn/backends/Workload.hpp>
-#include <armnn/backends/WorkloadData.hpp>
-#include <neon/workloads/NeonWorkloadUtils.hpp>
-
-namespace armnn
-{
-
-class NeonConvertBf16ToFp32Workload : public BFloat16ToFloat32Workload<ConvertBf16ToFp32QueueDescriptor>
-{
-public:
-    NeonConvertBf16ToFp32Workload(const ConvertBf16ToFp32QueueDescriptor& descriptor, const WorkloadInfo& info);
-    virtual void Execute() const override;
-    // Replace input tensor handle with the given TensorHandle
-    void ReplaceInputTensorHandle(ITensorHandle* tensorHandle, unsigned int slot) override;
-
-    // Replace output tensor handle with the given TensorHandle
-    void ReplaceOutputTensorHandle(ITensorHandle* tensorHandle, unsigned int slot) override;
-private:
-    using TensorHandlePair = std::pair<const ITensorHandle*, ITensorHandle*>;
-    std::vector<TensorHandlePair> m_TensorHandlePairs;
-    virtual void Reconfigure();
-};
-
-} //namespace armnn
```
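
The deleted workload wrapped a BFloat16-to-Float32 tensor conversion. The scalar conversion underneath is trivial, because bfloat16 is by definition the upper 16 bits of an IEEE-754 binary32 value. Below is a minimal standalone sketch of that bit-level conversion; it is not the Arm NN implementation (which operates on whole tensor buffers through the backend workload machinery), just the arithmetic the removed layer performed per element:

```cpp
#include <cstdint>
#include <cstring>

// Convert one bfloat16 value (stored in a uint16_t) to float.
// bfloat16 keeps the sign, the full 8-bit exponent, and the top
// 7 mantissa bits of binary32, so widening is a 16-bit left shift.
float Bf16ToFp32(uint16_t bf16)
{
    uint32_t bits = static_cast<uint32_t>(bf16) << 16;
    float result;
    std::memcpy(&result, &bits, sizeof(result)); // bit-cast, no UB
    return result;
}
```

For example, `0x3F80` widens to `0x3F800000`, which is `1.0f`. The reverse direction (Fp32 to Bf16) is lossy: it drops the low 16 mantissa bits, which is why the turbo mode this commit disables was an accuracy/speed trade-off in the first place.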