author    Finn Williams <Finn.Williams@arm.com>   2021-03-22 17:51:06 +0000
committer finn.williams <finn.williams@arm.com>   2021-04-07 16:42:38 +0000
commit    4422ceca976a88aac49b21808a43e465bc87a35e (patch)
tree      d4f7f3d86394f74b679c907ad3f7fc7f4537933f /src/armnn/optimizations/ConvertFp32NetworkToBf16.hpp
parent    b70ec417989490a2a72c66ecd6c737df1c094f4c (diff)
download  armnn-4422ceca976a88aac49b21808a43e465bc87a35e.tar.gz
Fix graph copy memory spike
* Change layer storage of ConstTensors to std::shared_ptr<ConstCpuTensorHandle>
* Change clone to share ConstTensor rather than copy
* Remove uses of non-const GetTensor() call
* Reduce scope of non-optimized network in ExeNet, so memory can be released after use
Signed-off-by: Finn Williams <Finn.Williams@arm.com>
Change-Id: Ibb2c7309d12411d21405bd6024c76bcdf5404545
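The shared-ownership change described in the first two bullets can be sketched as below. The names here (ConstTensorHandle, m_Weight, Clone) are simplified stand-ins for illustration, not Arm NN's actual classes: the point is that holding the weights through a std::shared_ptr lets a cloned layer alias the same buffer instead of deep-copying it, which is what removes the memory spike during graph copy.

```cpp
#include <memory>
#include <utility>
#include <vector>

// Hypothetical stand-in for a const tensor handle: owns immutable weight data.
struct ConstTensorHandle
{
    explicit ConstTensorHandle(std::vector<float> data) : m_Data(std::move(data)) {}
    const float* GetConstTensor() const { return m_Data.data(); }
    std::vector<float> m_Data;
};

struct Layer
{
    // Shared ownership: clones alias the same weights instead of copying them.
    std::shared_ptr<ConstTensorHandle> m_Weight;

    Layer Clone() const
    {
        Layer copy;
        copy.m_Weight = m_Weight; // bumps the refcount; no allocation, no data copy
        return copy;
    }
};
```

With exclusive ownership (e.g. a unique_ptr), Clone() would have to allocate and copy every weight tensor, doubling peak memory while both graphs are alive; sharing defers any copy and lets the buffer be freed once the last owner goes away.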
Diffstat (limited to 'src/armnn/optimizations/ConvertFp32NetworkToBf16.hpp')
-rw-r--r--  src/armnn/optimizations/ConvertFp32NetworkToBf16.hpp  7
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/src/armnn/optimizations/ConvertFp32NetworkToBf16.hpp b/src/armnn/optimizations/ConvertFp32NetworkToBf16.hpp
index c45ab2cded..a0856a485b 100644
--- a/src/armnn/optimizations/ConvertFp32NetworkToBf16.hpp
+++ b/src/armnn/optimizations/ConvertFp32NetworkToBf16.hpp
@@ -27,9 +27,10 @@ inline LayerT* ConvertWeight(Layer* l)
 {
     std::vector<BFloat16> newValues(info.GetNumElements());

-    armnnUtils::FloatingPointConverter::ConvertFloat32ToBFloat16(layer->m_Weight->template GetTensor<float>(),
-                                                                 info.GetNumElements(),
-                                                                 newValues.data());
+    armnnUtils::FloatingPointConverter::ConvertFloat32ToBFloat16(
+        layer->m_Weight->template GetConstTensor<float>(),
+        info.GetNumElements(),
+        newValues.data());

     TensorInfo newInfo(info);
     newInfo.SetDataType(DataType::BFloat16);
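For context on the conversion the hunk calls into: a bfloat16 value is simply the top 16 bits of an IEEE-754 float32 (sign, 8-bit exponent, 7 mantissa bits). The following is a minimal truncating sketch of such a conversion, not Arm NN's FloatingPointConverter implementation, which may round rather than truncate:

```cpp
#include <cstdint>
#include <cstring>

// Truncating fp32 -> bf16: keep the high 16 bits, drop the low 16 mantissa bits.
static std::uint16_t Fp32ToBf16(float f)
{
    std::uint32_t bits;
    std::memcpy(&bits, &f, sizeof(bits)); // bit-cast without violating aliasing rules
    return static_cast<std::uint16_t>(bits >> 16);
}

// Widening bf16 -> fp32 is exact: reattach 16 zero bits.
static float Bf16ToFp32(std::uint16_t h)
{
    std::uint32_t bits = static_cast<std::uint32_t>(h) << 16;
    float f;
    std::memcpy(&f, &bits, sizeof(f));
    return f;
}
```

Values whose low mantissa bits are already zero (powers of two, small integers) survive the round trip exactly; other values lose precision in the fp32 -> bf16 direction.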