Diffstat (limited to 'docs/use_cases/object_detection.md')
-rw-r--r--  docs/use_cases/object_detection.md  81
1 file changed, 81 insertions, 0 deletions
diff --git a/docs/use_cases/object_detection.md b/docs/use_cases/object_detection.md
index e946c1b..583a8e5 100644
--- a/docs/use_cases/object_detection.md
+++ b/docs/use_cases/object_detection.md
@@ -6,6 +6,7 @@
- [Building the code sample application from sources](./object_detection.md#building-the-code-sample-application-from-sources)
- [Build options](./object_detection.md#build-options)
- [Build process](./object_detection.md#build-process)
+ - [Build with VSI support](./object_detection.md#build-with-vsi-support)
- [Add custom input](./object_detection.md#add-custom-input)
- [Add custom model](./object_detection.md#add-custom-model)
- [Setting up and running Ethos-U NPU code sample](./object_detection.md#setting-up-and-running-ethos_u-npu-code-sample)
@@ -64,6 +65,14 @@ specifies:
- `object_detection_ACTIVATION_BUF_SZ`: The intermediate, or activation, buffer size reserved for the NN model.
By default, it is set to 2MiB and is enough for most models.
+- `VSI_ENABLED`: Build the application with support for the [Virtual Streaming Interface (VSI)](https://arm-software.github.io/AVH/main/simulation/html/group__arm__vsi.html)
+ available on the Arm® Corstone™-300 and Arm® Corstone™-310 FVPs.
+ This adds the option to run the application using frames consumed from the host's webcam as input in place of static images.
+
+- `VSI_IMAGE_INPUT`: When used together with the `VSI_ENABLED` flag, the VSI option in the application consumes images
+  from the filesystem location specified by `object_detection_FILE_PATH` over VSI, instead of consuming frames from the
+  host's webcam. This can be useful for automated testing.
+
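+These options are passed to `cmake` like any other cache variable. As a sketch (the source directory and the value
+shown here are illustrative, not required), overriding the default activation buffer size looks like:
+
+```commandline
+cmake .. -Dobject_detection_ACTIVATION_BUF_SZ=0x00200000
+```
+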
To build **ONLY** the Object Detection example application, add `-DUSE_CASE_BUILD=object_detection` to the `cmake` command
line, as specified in: [Building](../documentation.md#Building).
@@ -142,6 +151,31 @@ The `bin` folder contains the following files:
- `Images-object_detection.txt`: Tells the FPGA which memory regions to use for loading the binaries
in the `sectors/...` folder.
+### Build with VSI support
+
+The Object Detection use case can be compiled to consume input from the
+[Virtual Streaming Interface (VSI)](https://arm-software.github.io/AVH/main/simulation/html/group__arm__vsi.html)
+available on the Arm® Corstone™-300 and Arm® Corstone™-310 FVPs.
+
+By default, this consumes frames from the webcam attached to the host, which are then used to perform face detection.
+
+To build the use case with VSI support, supply the following additional argument to CMake:
+
+```commandline
+-DVSI_ENABLED=1
+```
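+
+For example, a complete `cmake` configure command for a VSI-enabled build might look like the following (this is a
+sketch; the source directory location is an assumption, and other options from
+[Build options](./object_detection.md#build-options) can be added as required):
+
+```commandline
+cmake .. \
+    -DUSE_CASE_BUILD=object_detection \
+    -DVSI_ENABLED=1
+```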
+
+For testing purposes, it can be useful to specify still images to be consumed over VSI instead of the webcam feed.
+To do this, supply the following arguments to CMake:
+
+```commandline
+-DVSI_ENABLED=1
+-DVSI_IMAGE_INPUT=1
+```
+
+When `VSI_IMAGE_INPUT` is set, images will be read from the default location.
+This can be overridden via `object_detection_FILE_PATH`; see the [Add custom input](./object_detection.md#add-custom-input) section below.
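+
+As an illustration (the path shown here is hypothetical), the CMake arguments for consuming images from a custom
+location over VSI might look like:
+
+```commandline
+-DVSI_ENABLED=1
+-DVSI_IMAGE_INPUT=1
+-Dobject_detection_FILE_PATH=/home/user/images
+```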
+
### Add custom input
The application object detection is set up to perform inferences on data found in the folder, or an individual file,
@@ -259,6 +293,27 @@ using:
~/FVP_install_location/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55 ./bin/mps3-sse-300/ethos-u-object_detection.axf
```
+If the application has been built with VSI support, additional arguments are needed:
+
+```commandline
+~/FVP_install_location/models/Linux64_GCC-9.3/FVP_Corstone_SSE-300_Ethos-U55 \
+ -a ./bin/ethos-u-object_detection.axf \
+ -C mps3_board.v_path=./scripts/py/vsi
+```
+
+Note that VSI support requires version 11.22.35 of the FVP or above. Run the following command to check the version:
+
+```log
+./FVP_Corstone_SSE-300_Ethos-U55 --version
+
+Fast Models [11.22.35 (Aug 18 2023)]
+Copyright 2000-2023 ARM Limited.
+All Rights Reserved.
+
+
+Info: /OSCI/SystemC: Simulation stopped by user.
+```
+
A log output appears on the terminal:
```log
@@ -290,6 +345,23 @@ Choice:
```
+If `VSI_ENABLED` has been set, a sixth option will appear:
+
+```log
+User input required
+Enter option number from:
+
+ 1. Run detection on next ifm
+ 2. Run detection ifm at chosen index
+ 3. Run detection on all ifm
+ 4. Show NN model info
+ 5. List ifm
+ 6. Run detection using VSI as input
+
+Choice:
+
+```
+
What the preceding choices do:
1. Run detection on next ifm: Runs a single inference on the next in line image from the collection of the compiled images.
@@ -351,6 +423,15 @@ What the preceding choices do:
INFO - 3 => pitch_and_roll.bmp
```
+6. Run detection using VSI as input: Begins consuming frames from the webcam connected to the host.
+   Preprocessing and inference are run on each frame, which is then displayed with a bounding box around any
+   detected face. The next frame is then consumed and the process repeats.
+
+   Alternatively, if `VSI_IMAGE_INPUT` has been passed, this option behaves similarly to option 1, with inference
+   performed on a set of pre-defined images. However, these images are read from the local filesystem over VSI at
+   runtime, whereas option 1 uses images that have been pre-compiled into the application.
+
### Running Object Detection
Please select the first menu option to execute Object Detection.