author | Narumol Prangnawarat <narumol.prangnawarat@arm.com> | 2019-02-25 17:26:05 +0000
---|---|---
committer | Narumol Prangnawarat <narumol.prangnawarat@arm.com> | 2019-02-26 17:41:15 +0000
commit | 4628d05455dfc179f0437913185e76888115a98a (patch) |
tree | a8eac68ee5aee88a7071ac6f13af7932b98caa87 | /tests/InferenceModel.hpp
parent | 452869973b9a45c9c44820d16f92f7dfc96e9aef (diff) |
download | armnn-4628d05455dfc179f0437913185e76888115a98a.tar.gz |
IVGCVSW-2560 Verify Inference test for TensorFlow Lite MobileNet SSD
* Assign output shape of MobileNet SSD to ArmNN network
* Add m_OverridenOutputShapes to TfLiteParser to set shape in GetNetworkOutputBindingInfo
* Use input quantization instead of output quantization params
* Correct data and datatype in Inference test
Change-Id: I01ac2e07ed08e8928ba0df33a4847399e1dd8394
Signed-off-by: Narumol Prangnawarat <narumol.prangnawarat@arm.com>
Signed-off-by: Aron Virginas-Tar <Aron.Virginas-Tar@arm.com>
Diffstat (limited to 'tests/InferenceModel.hpp')
-rw-r--r-- | tests/InferenceModel.hpp | 7
1 file changed, 7 insertions, 0 deletions
diff --git a/tests/InferenceModel.hpp b/tests/InferenceModel.hpp
index 6e73f5275f..25ccbee45a 100644
--- a/tests/InferenceModel.hpp
+++ b/tests/InferenceModel.hpp
@@ -577,6 +577,13 @@ public:
                               m_OutputBindings[outputIndex].second.GetQuantizationOffset());
     }
 
+    QuantizationParams GetInputQuantizationParams(unsigned int inputIndex = 0u) const
+    {
+        CheckInputIndexIsValid(inputIndex);
+        return std::make_pair(m_InputBindings[inputIndex].second.GetQuantizationScale(),
+                              m_InputBindings[inputIndex].second.GetQuantizationOffset());
+    }
+
     std::vector<QuantizationParams> GetAllQuantizationParams() const
     {
         std::vector<QuantizationParams> quantizationParams;