Correctly enable GPU profiling when test profiling is enabled.
Remove extra copy of the profiling-enabled flag from InferenceModel::Params
and correctly pass around the copy that is in InferenceTestOptions.
!referencetests:180329
Change-Id: I0daa1bab2e7068fc479bf417a553183b1d922166
Signed-off-by: Matthew Bentham <matthew.bentham@arm.com>
* Assign output shape of MobileNet SSD to ArmNN network
* Add m_OverridenOutputShapes to TfLiteParser to set shape in GetNetworkOutputBindingInfo
* Use input quantization params instead of output quantization params
* Correct data and data type in Inference test
Change-Id: I01ac2e07ed08e8928ba0df33a4847399e1dd8394
Signed-off-by: Narumol Prangnawarat <narumol.prangnawarat@arm.com>
Signed-off-by: Aron Virginas-Tar <Aron.Virginas-Tar@arm.com>
* Change MobileNet SSD input to uint8
* Get quantization scale and offset from the model
* Change data layout to NHWC to match the TensorFlow Lite layout
* Update expected output to match the TfLite result with quantized data
Change-Id: I07104d56286893935779169356234de53f1c9492
Signed-off-by: Narumol Prangnawarat <narumol.prangnawarat@arm.com>
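The commit above reads the quantization scale and offset (zero point) from the model rather than hard-coding them. As a minimal sketch of what those parameters mean, the snippet below applies the standard TensorFlow Lite affine quantization formula, real_value = scale * (quantized_value - zero_point), to uint8 data; the scale and zero-point values here are placeholders, not taken from any actual MobileNet SSD model.

```python
import numpy as np

def dequantize(q, scale, zero_point):
    """TensorFlow Lite affine quantization:
    real_value = scale * (quantized_value - zero_point)."""
    # Widen to int32 first so the subtraction cannot wrap around in uint8.
    return scale * (q.astype(np.int32) - zero_point)

# Placeholder params: a real model's scale/zero-point come from its
# tensor metadata, not these example values.
scale, zero_point = 0.0078125, 128
q = np.array([0, 128, 255], dtype=np.uint8)
print(dequantize(q, scale, zero_point))  # -1.0, 0.0, 0.9921875
```

With per-tensor params read from the model like this, the same uint8 buffer can be fed directly to the network while comparisons against reference outputs are done in real-valued space.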
Change-Id: Ieb99ac1aa347cee4b28b831753855c4614220648
Change-Id: If7ee1efa3ee79d9eca41c5a6219b3fc42e740efe