| author | Teresa Charlin <teresa.charlinreyes@arm.com> | 2022-10-27 11:37:29 +0100 |
|---|---|---|
| committer | TeresaARM <teresa.charlinreyes@arm.com> | 2022-10-28 10:23:36 +0000 |
| commit | 4d85adf436092d01ca0957967156e36060e8be68 (patch) | |
| tree | ad13e2320c98f93c058b0cd43186005e7258d1ee /src/armnn | |
| parent | 0f86ecfce593a302ebd2baf8b70c9f6f50616f81 (diff) | |
| download | armnn-4d85adf436092d01ca0957967156e36060e8be68.tar.gz | |
IVGCVSW-7296 REDUCE_PROD tests fail when using Tf 2.10
* In TensorFlow, the data types that ArmNN treats as quantized (int8 and uint8) can also be non-quantized.
* This patch creates two models:
  * ArmNN: a model in which int8 and uint8 are always quantized, but the scale can be 1 and the offset 0
  * TFLite: a model in which int8 and uint8 can be either quantized or non-quantized
Signed-off-by: Teresa Charlin <teresa.charlinreyes@arm.com>
Change-Id: Id960f2f30988f2bbec88cb4e0c52c189ac957bae
Diffstat (limited to 'src/armnn')
| -rw-r--r-- | src/armnn/Network.cpp | 6 |
|---|---|---|

1 file changed, 4 insertions(+), 2 deletions(-)
```diff
diff --git a/src/armnn/Network.cpp b/src/armnn/Network.cpp
index d3ce7ab62c..9d00a69518 100644
--- a/src/armnn/Network.cpp
+++ b/src/armnn/Network.cpp
@@ -574,8 +574,10 @@ bool CheckScaleSetOnQuantizedType(Layer* layer, Optional<std::vector<std::string
     for (unsigned int i = 0; i < numOutputs; i++) {
         OutputSlot& outputSlot = layer->GetOutputSlot(i);
         TensorInfo info = outputSlot.GetTensorInfo();
-        if (DataType::QAsymmU8 == info.GetDataType()) {
-            if (0.f == info.GetQuantizationScale()) {
+        if (DataType::QAsymmU8 == info.GetDataType())
+        {
+            if (0.f == info.GetQuantizationScale())
+            {
                 noErrors = false;
                 std::stringstream ss;
                 ss << "output " << i << " of layer " << GetLayerTypeAsCString(layer->GetType())
```