path: root/tests/TfLiteMobileNetSsd-Armnn
2019-08-23  IVGCVSW-3547 Use ExecuteNetwork to run a dynamic backend end to end test  (Matteo Martincigh)

    * Added path override for dynamic backend loading
    * Do not default to CpuRef, as there could be dynamic backends loaded at runtime
    * Do not check right away whether the backends are correct, as more of them can be loaded at runtime as dynamic backends

    Change-Id: If23f79aa1480b8dfce57e49b1746c23b6b9e6f82
    Signed-off-by: Matteo Martincigh <matteo.martincigh@arm.com>
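The change above can be illustrated with a minimal sketch of how a dynamic backend path override is applied when the Arm NN runtime is created. This is written against the public Arm NN C++ API, not the ExecuteNetwork source itself, and the directory path is a placeholder.

    // Minimal sketch: apply a dynamic backend path override at runtime creation.
    #include <armnn/ArmNN.hpp>
    #include <armnn/BackendId.hpp>
    #include <iostream>

    int main()
    {
        armnn::IRuntime::CreationOptions options;

        // Point the runtime at a directory of dynamic backend shared objects
        // instead of relying only on the statically registered backends.
        options.m_DynamicBackendsPath = "/path/to/dynamic/backends"; // placeholder

        armnn::IRuntimePtr runtime = armnn::IRuntime::Create(options);

        // Dynamic backends registered from the override path appear here next to
        // the built-in ones, which is why the test no longer defaults to CpuRef
        // or validates the requested backends before the runtime is created.
        for (const armnn::BackendId& id : runtime->GetDeviceSpec().GetSupportedBackends())
        {
            std::cout << id.Get() << std::endl;
        }
        return 0;
    }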
2019-04-15  IVGCVSW-2928 Fix issue with GPU profiling  (Matthew Bentham)

    Correctly enable GPU profiling when test profiling is enabled. Remove extra
    copy of the profiling-enabled flag from InferenceModel::Params and correctly
    pass around the copy that is in InferenceTestOptions.

    !referencetests:180329
    Change-Id: I0daa1bab2e7068fc479bf417a553183b1d922166
    Signed-off-by: Matthew Bentham <matthew.bentham@arm.com>
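A minimal sketch of the idea behind this fix, using only the public Arm NN API (InferenceTestOptions and InferenceModel are internal test helpers and are not reproduced here): a single profiling flag is forwarded both to the GPU backend at runtime creation and to the per-network profiler, instead of being duplicated.

    // Minimal sketch: one profiling flag drives both GPU (OpenCL) profiling and
    // the per-network Arm NN profiler.
    #include <armnn/ArmNN.hpp>
    #include <memory>

    armnn::IRuntimePtr MakeRuntime(bool enableProfiling)
    {
        armnn::IRuntime::CreationOptions options;
        // This is the switch that enables OpenCL kernel-level profiling on GpuAcc.
        options.m_EnableGpuProfiling = enableProfiling;
        return armnn::IRuntime::Create(options);
    }

    void EnableNetworkProfiling(armnn::IRuntime& runtime,
                                armnn::NetworkId networkId,
                                bool enableProfiling)
    {
        // The per-network profiler is enabled separately after LoadNetwork().
        std::shared_ptr<armnn::IProfiler> profiler = runtime.GetProfiler(networkId);
        if (profiler)
        {
            profiler->EnableProfiling(enableProfiling);
        }
    }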
2019-02-01  IVGCVSW-2604 Fix bug that made it impossible to execute inference tests on certain backends  (Aron Virginas-Tar)

    * Read compute devices from the command line as strings and convert them into BackendId objects afterwards

    Change-Id: Icded1c572778f5a213644e3052ff6dfe7022128b
    Signed-off-by: Aron Virginas-Tar <Aron.Virginas-Tar@arm.com>
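A minimal sketch of the conversion described above, assuming only armnn::BackendId from the public API (the helper name is illustrative): device names arrive from the command line as plain strings and are only turned into BackendId objects afterwards.

    // Minimal sketch: convert command-line device name strings to BackendIds.
    #include <armnn/BackendId.hpp>
    #include <string>
    #include <vector>

    std::vector<armnn::BackendId> ToBackendIds(const std::vector<std::string>& deviceNames)
    {
        std::vector<armnn::BackendId> backendIds;
        backendIds.reserve(deviceNames.size());
        for (const std::string& name : deviceNames)
        {
            // BackendId is constructible directly from its string form,
            // e.g. "CpuRef", "CpuAcc" or "GpuAcc".
            backendIds.emplace_back(name);
        }
        return backendIds;
    }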
2019-01-30  IVGCVSW-2437 Inference test for TensorFlow Lite MobileNet SSD  (Aron Virginas-Tar)

    Change-Id: If7ee1efa3ee79d9eca41c5a6219b3fc42e740efe