path: root/tests/MobileNetSsdInferenceTest.hpp
Age           Commit message                                                            Author
2019-04-15    IVGCVSW-2928 Fix issue with GPU profiling                                 Matthew Bentham
Correctly enable GPU profiling when test profiling is enabled. Remove extra copy of the profiling-enabled flag from InferenceModel::Params and correctly pass around the copy that is in InferenceTestOptions.
!referencetests:180329
Change-Id: I0daa1bab2e7068fc479bf417a553183b1d922166
Signed-off-by: Matthew Bentham <matthew.bentham@arm.com>
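Editor's note: a minimal sketch of the wiring this commit describes, assuming the test harness carries a single m_EnableProfiling flag on InferenceTestOptions (the simplified option struct below is illustrative, not the harness's real definition). armnn::IRuntime::CreationOptions::m_EnableGpuProfiling is the runtime knob that turns on GPU kernel profiling.

    #include <armnn/IRuntime.hpp>

    // Simplified stand-in for the test harness options (illustrative only).
    struct InferenceTestOptions
    {
        bool m_EnableProfiling = false;
    };

    armnn::IRuntimePtr CreateRuntimeForTest(const InferenceTestOptions& testOptions)
    {
        armnn::IRuntime::CreationOptions options;
        // Forward the single test-level flag instead of keeping a second copy
        // of it in InferenceModel::Params.
        options.m_EnableGpuProfiling = testOptions.m_EnableProfiling;
        return armnn::IRuntime::Create(options);
    }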
2019-02-26    IVGCVSW-2560 Verify Inference test for TensorFlow Lite MobileNet SSD      Narumol Prangnawarat
* Assign output shape of MobileNet SSD to ArmNN network
* Add m_OverridenOutputShapes to TfLiteParser to set shape in GetNetworkOutputBindingInfo
* Use input quantization instead of output quantization params
* Correct data and datatype in Inference test
Change-Id: I01ac2e07ed08e8928ba0df33a4847399e1dd8394
Signed-off-by: Narumol Prangnawarat <narumol.prangnawarat@arm.com>
Signed-off-by: Aron Virginas-Tar <Aron.Virginas-Tar@arm.com>
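Editor's note: a hedged sketch of the output-shape override described above. The member name m_OverridenOutputShapes comes from the commit message; the class layout, key type, and helper functions below are assumptions for illustration, not TfLiteParser's real interface.

    #include <armnn/Tensor.hpp>
    #include <map>
    #include <string>

    class TfLiteParserSketch
    {
    public:
        // Record a fixed shape for a named output (e.g. the SSD detection outputs
        // whose shapes are not fully described by the .tflite file).
        void OverrideOutputShape(const std::string& outputName, const armnn::TensorShape& shape)
        {
            m_OverridenOutputShapes[outputName] = shape;
        }

        // When producing output binding info, prefer the overridden shape if present.
        armnn::TensorInfo GetOutputTensorInfo(const std::string& outputName,
                                              armnn::TensorInfo inferredInfo) const
        {
            auto it = m_OverridenOutputShapes.find(outputName);
            if (it != m_OverridenOutputShapes.end())
            {
                inferredInfo.SetShape(it->second);
            }
            return inferredInfo;
        }

    private:
        std::map<std::string, armnn::TensorShape> m_OverridenOutputShapes;
    };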
2019-02-20    IVGCVSW-2436 Modify MobileNet SSD inference test                          Narumol Prangnawarat
* Change MobileNet SSD input to uint8
* Get quantization scale and offset from the model
* Change data layout to NHWC to match the TensorFlow Lite layout
* Update expected output to the result from TfLite with quantized data
Change-Id: I07104d56286893935779169356234de53f1c9492
Signed-off-by: Narumol Prangnawarat <narumol.prangnawarat@arm.com>
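Editor's note: the input preparation this commit describes amounts to quantizing float pixel data with the scale and zero-point offset read from the model. A minimal sketch, assuming a free helper function (the name QuantizeInput is hypothetical):

    #include <algorithm>
    #include <cmath>
    #include <cstdint>
    #include <vector>

    // q = round(value / scale) + offset, clamped to the uint8 range.
    std::vector<uint8_t> QuantizeInput(const std::vector<float>& values,
                                       float quantizationScale,
                                       int32_t quantizationOffset)
    {
        std::vector<uint8_t> quantized;
        quantized.reserve(values.size());
        for (float value : values)
        {
            int32_t q = static_cast<int32_t>(std::round(value / quantizationScale)) + quantizationOffset;
            quantized.push_back(static_cast<uint8_t>(std::clamp(q, 0, 255)));
        }
        return quantized;
    }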
2019-02-11    IVGCVSW-2529 DeepSpeech v1 test                                           Ferran Balaguer
Change-Id: Ieb99ac1aa347cee4b28b831753855c4614220648
2019-01-30    IVGCVSW-2437 Inference test for TensorFlow Lite MobileNet SSD             Aron Virginas-Tar
Change-Id: If7ee1efa3ee79d9eca41c5a6219b3fc42e740efe