author    Isabella Gottardi <isabella.gottardi@arm.com>    2022-01-27 16:39:37 +0000
committer Kshitij Sisodia <kshitij.sisodia@arm.com>    2022-02-08 16:32:28 +0000
commit    3107aa2152de9be8317e62da1d0327bcad6552e2 (patch)
tree      2ba12a5dd39f28ae1b646e132fbe575c6a442ee9 /docs
parent    5cdfa9b834dc5a94c70f9f2b1f5c849dc5439e85 (diff)
MLECO-2873: Object detection usecase follow-up
Change-Id: Ic14e93a50fb7b3f3cfd9497bac1280794cc0fc15
Signed-off-by: Isabella Gottardi <isabella.gottardi@arm.com>
Diffstat (limited to 'docs')
-rw-r--r--  docs/use_cases/ad.md                |  2
-rw-r--r--  docs/use_cases/img_class.md         |  6
-rw-r--r--  docs/use_cases/inference_runner.md  |  7
-rw-r--r--  docs/use_cases/object_detection.md  | 74
4 files changed, 44 insertions, 45 deletions
diff --git a/docs/use_cases/ad.md b/docs/use_cases/ad.md
index c14c921..553e3b8 100644
--- a/docs/use_cases/ad.md
+++ b/docs/use_cases/ad.md
@@ -23,7 +23,7 @@ Use-case code could be found in the following directory: [source/use_case/ad](..
### Preprocessing and feature extraction
-The Anomaly Detection model that is used with the Code Samples andexpects audio data to be preprocessed in a specific
+The Anomaly Detection model that is used with the Code Samples expects audio data to be preprocessed in a specific
way before performing an inference.
Therefore, this section provides an overview of the feature extraction process used.
diff --git a/docs/use_cases/img_class.md b/docs/use_cases/img_class.md
index 7924ed5..e2df09d 100644
--- a/docs/use_cases/img_class.md
+++ b/docs/use_cases/img_class.md
@@ -321,7 +321,7 @@ What the preceding choices do:
```log
INFO - Model info:
INFO - Model INPUT tensors:
- INFO - tensor type is UINT8
+ INFO - tensor type is INT8
INFO - tensor occupies 150528 bytes with dimensions
INFO - 0: 1
INFO - 1: 224
@@ -338,10 +338,10 @@ What the preceding choices do:
INFO - Quant dimension: 0
INFO - Scale[0] = 0.03906
INFO - ZeroPoint[0] = -128
- INFO - Activation buffer (a.k.a tensor arena) size used: 1510012
+ INFO - Activation buffer (a.k.a tensor arena) size used: 1510004
INFO - Number of operators: 1
INFO - Operator 0: ethos-u
-
+
```
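As an aside, the `Scale` and `ZeroPoint` entries above describe the affine quantization of the *INT8* input tensor. Below is a minimal sketch of the mapping, assuming nothing beyond the values printed in this log:

```python
import numpy as np

# Affine (de)quantization using the values reported by "Show NN model info":
#   real_value = Scale * (quantized_value - ZeroPoint)
SCALE = 0.03906      # Scale[0] from the log above
ZERO_POINT = -128    # ZeroPoint[0] from the log above

def quantize(real: np.ndarray) -> np.ndarray:
    """Map float values into the INT8 domain the input tensor expects."""
    q = np.round(real / SCALE) + ZERO_POINT
    return np.clip(q, -128, 127).astype(np.int8)

def dequantize(q: np.ndarray) -> np.ndarray:
    """Map INT8 tensor values back to real values."""
    return SCALE * (q.astype(np.float32) - ZERO_POINT)

print(quantize(np.array([0.0, 0.5, 1.0])))   # e.g. [-128 -115 -102]
```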
5. List Images: Prints a list of image indexes. The original filenames are embedded in the application, like so:
diff --git a/docs/use_cases/inference_runner.md b/docs/use_cases/inference_runner.md
index 01bf4d0..2824def 100644
--- a/docs/use_cases/inference_runner.md
+++ b/docs/use_cases/inference_runner.md
@@ -59,7 +59,7 @@ following:
- `inference_runner_DYNAMIC_MEM_LOAD_ENABLED`: This can be set to ON or OFF, to allow dynamic model load capability for use with MPS3 FVPs. See section [Building with dynamic model load capability](./inference_runner.md#building-with-dynamic-model-load-capability) below for more details.
-To build **ONLY** the Inference Runner example application, add `-DUSE_CASE_BUILD=inferece_runner` to the `cmake`
+To build **ONLY** the Inference Runner example application, add `-DUSE_CASE_BUILD=inference_runner` to the `cmake`
command line, as specified in: [Building](../documentation.md#Building).
### Build process
@@ -199,7 +199,7 @@ To install the FVP:
### Starting Fast Model simulation
-Once completed the building step, the application binary `ethos-u-infernce_runner.axf` can be found in the `build/bin`
+Once the build step completes, the application binary `ethos-u-inference_runner.axf` can be found in the `build/bin`
folder.
Assuming that the install location of the FVP was set to `~/FVP_install_location`, then the simulation can be started by
@@ -287,9 +287,11 @@ cmake .. \
```
Once the configuration completes, running:
+
```commandline
make -j
```
+
will build the application, which expects the neural network model and the IFM to be loaded at
specific addresses. These addresses are defined in
[corstone-sse-300.cmake](../../scripts/cmake/subsystem-profiles/corstone-sse-300.cmake) for the MPS3
@@ -314,6 +316,7 @@ binary blob.
--data /path/to/custom-ifm.bin@0x92000000 \
--dump cpu0=/path/to/output.bin@Memory:0x93000000,1024
```
+
The above command dumps a 1 KiB (1024-byte) file containing the output tensors as a binary blob,
once the model and IFM data at the specified file paths have been consumed and the inference has
executed successfully.
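Because the dumped file is raw tensor memory, it can also be inspected offline. Below is a minimal sketch, assuming an *INT8* output tensor; the file path mirrors the `--dump` example above, while the scale and zero point are placeholders that must be taken from the model info log:

```python
import numpy as np

DUMP_PATH = "output.bin"   # file written by the --dump option above
OUT_SCALE = 0.00390625     # placeholder: real value comes from the model info log
OUT_ZERO_POINT = -128      # placeholder: real value comes from the model info log

raw = np.fromfile(DUMP_PATH, dtype=np.int8)   # 1024 bytes -> 1024 INT8 values
real = OUT_SCALE * (raw.astype(np.float32) - OUT_ZERO_POINT)
print(f"{raw.size} values, min {real.min():.4f}, max {real.max():.4f}")
```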
diff --git a/docs/use_cases/object_detection.md b/docs/use_cases/object_detection.md
index e0d8899..8062325 100644
--- a/docs/use_cases/object_detection.md
+++ b/docs/use_cases/object_detection.md
@@ -33,12 +33,10 @@ See [Prerequisites](../documentation.md#prerequisites)
In addition to the already specified build option in the main documentation, the Object Detection use-case
specifies:
-- `object_detection_MODEL_TFLITE_PATH` - The path to the NN model file in the `TFLite` format. The model is then processed and
+- `object_detection_MODEL_TFLITE_PATH` - The path to the NN model file in the *TFLite* format. The model is then processed and
included in the application `axf` file. The default value points to one of the delivered set of models.
-
- Note that the parameters `TARGET_PLATFORM`, and `ETHOS_U_NPU_ENABLED` must be aligned with
- the chosen model. In other words:
-
+ Note that the parameters `TARGET_PLATFORM` and `ETHOS_U_NPU_ENABLED` must be aligned with
+ the chosen model. In other words:
- If `ETHOS_U_NPU_ENABLED` is set to `On` or `1`, then the NN model is assumed to be optimized. The model naturally
falls back to the Arm® *Cortex®-M* CPU if an unoptimized model is supplied.
- If `ETHOS_U_NPU_ENABLED` is set to `Off` or `0`, the NN model is assumed to be unoptimized. Supplying an optimized
@@ -47,17 +45,21 @@ specifies:
- `object_detection_FILE_PATH`: The path to the directory containing the images, or a path to a single image file, that is to
be used in the application. The default value points to the `resources/object_detection/samples` folder containing the
delivered set of images.
-
- For further information, please refer to: [Add custom input data section](./object_detection.md#add-custom-input).
+ For further information, please refer to: [Add custom input data section](./object_detection.md#add-custom-input).
- `object_detection_IMAGE_SIZE`: The NN model requires input images to be of a specific size. This parameter defines the size
- of the image side in pixels. Images are considered squared. The default value is `224`, which is what the supplied
- *MobilenetV2-1.0* model expects.
+ of the image side in pixels. Images are considered square. The default value is `192`, which is what the supplied
+ *YOLO Fastest* model expects.
+
+- `object_detection_ANCHOR_1`: First anchor array estimated during *YOLO Fastest* model training.
-- `object_detection_ACTIVATION_BUF_SZ`: The intermediate, or activation, buffer size reserved for the NN model. By default, it
- is set to 2MiB and is enough for most models.
+- `object_detection_ANCHOR_2`: Second anchor array estimated during *YOLO Fastest* model training.
-- `USE_CASE_BUILD`: is set to `object_detection` to only build this example.
+- `object_detection_CHANNELS_IMAGE_DISPLAYED`: The user can decide whether to display the image on the LCD screen in grayscale or RGB.
+  This parameter defines the number of channels to use: 1 for grayscale, 3 for RGB. The default value is `3` (see the sketch after this list).
+
+- `object_detection_ACTIVATION_BUF_SZ`: The intermediate, or activation, buffer size reserved for the NN model.
+ By default, it is set to 2MiB and is enough for most models.
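As noted above, here is a rough sketch of what the channels option amounts to. This is an illustration only, not the application's actual conversion code; the luma weights are a common convention rather than something taken from the use case:

```python
import numpy as np

# Illustration of object_detection_CHANNELS_IMAGE_DISPLAYED:
# 3 -> show the HxWx3 RGB frame as-is; 1 -> show a single grayscale channel.
def to_grayscale(rgb: np.ndarray) -> np.ndarray:
    """Reduce an HxWx3 uint8 image to an HxW uint8 grayscale image."""
    weights = np.array([0.299, 0.587, 0.114])      # ITU-R BT.601 luma weights
    return (rgb.astype(np.float32) @ weights).astype(np.uint8)

frame = np.zeros((192, 192, 3), dtype=np.uint8)    # 192 matches the default IMAGE_SIZE
gray = to_grayscale(frame)
assert gray.shape == (192, 192)
```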
To build **ONLY** the Object Detection example application, add `-DUSE_CASE_BUILD=object_detection` to the `cmake` command
line, as specified in: [Building](../documentation.md#Building).
@@ -67,7 +69,7 @@ line, as specified in: [Building](../documentation.md#Building).
> **Note:** This section describes the process for configuring the build for the *MPS3: SSE-300*. To build for a
> different target platform, please refer to: [Building](../documentation.md#Building).
-Create a build directory and navigate inside, like so:
+To **only** build the `object_detection` example, create a build directory, and then navigate inside.
```commandline
mkdir build_object_detection && cd build_object_detection
@@ -80,7 +82,7 @@ build **only** Object Detection application to run on the *Ethos-U55* Fast Model
cmake ../ -DUSE_CASE_BUILD=object_detection
```
-To configure a build that can be debugged using Arm DS specify the build type as `Debug` and then use the `Arm Compiler`
+To configure a build that can be debugged using Arm DS, specify the build type as `Debug` and use the `Arm Compiler`
toolchain file:
```commandline
@@ -135,8 +137,8 @@ The `bin` folder contains the following files:
- `sectors/object_detection`: Folder containing the built application. It is split into files for loading into different FPGA memory
regions.
-- `sectors/images.txt`: Tells the FPGA which memory regions to use for loading the binaries in the `sectors/..`
- folder.
+- `Images-object_detection.txt`: Tells the FPGA which memory regions to use for loading the binaries
+ in the `sectors/...` folder.
### Add custom input
@@ -188,28 +190,25 @@ of any image does not match `IMAGE_SIZE`, then it is rescaled and padded so that
### Add custom model
-The application performs inference using the model pointed to by the CMake parameter `MODEL_TFLITE_PATH`.
+The application performs inference using the model pointed to by the CMake parameter `object_detection_MODEL_TFLITE_PATH`.
> **Note:** If you want to run the model using an *Ethos-U*, ensure that your custom model has been successfully run
> through the Vela compiler *before* continuing.
For further information: [Optimize model with Vela compiler](../sections/building.md#Optimize-custom-model-with-Vela-compiler).
-Then, you must set `object_detection_MODEL_TFLITE_PATH` to the location of the Vela processed model file and
-`object_detection_LABELS_TXT_FILE` to the location of the associated labels file.
-
For example:
```commandline
cmake .. \
-Dobject_detection_MODEL_TFLITE_PATH=<path/to/custom_model_after_vela.tflite> \
- -Dobject_detection_LABELS_TXT_FILE=<path/to/labels_custom_model.txt> \
-DUSE_CASE_BUILD=object_detection
```
> **Note:** Clean the build directory before re-running the CMake command.
-The `.tflite` model file pointed to by `object_detection_MODEL_TFLITE_PATH` is converted to C++ files during the CMake configuration stage. They are then compiled into
+The `.tflite` model file pointed to by `object_detection_MODEL_TFLITE_PATH` is converted to
+C++ files during the CMake configuration stage. They are then compiled into
the application for performing inference with.
The log from the configuration stage tells you what model path has been used, for example:
@@ -217,11 +216,8 @@ The log from the configuration stage tells you what model path and labels file h
```log
-- User option object_detection_MODEL_TFLITE_PATH is set to <path/to/custom_model_after_vela.tflite>
...
--- User option object_detection_LABELS_TXT_FILE is set to <path/to/labels_custom_model.txt>
-...
-- Using <path/to/custom_model_after_vela.tflite>
-++ Converting custom_model_after_vela.tflite to\
-custom_model_after_vela.tflite.cc
+++ Converting custom_model_after_vela.tflite to custom_model_after_vela.tflite.cc
...
```
@@ -251,15 +247,14 @@ To install the FVP:
### Starting Fast Model simulation
-The pre-built application binary `ethos-u-object_detection.axf` can be found in the `bin/mps3-sse-300` folder of the delivery
-package.
+The pre-built application binary `ethos-u-object_detection.axf` can be
+found in the `bin/mps3-sse-300` folder of the delivery package.
Assuming that the install location of the FVP was set to `~/FVP_install_location`, then the simulation can be started by
using:
```commandline
-~/FVP_install_location/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55
-./bin/mps3-sse-300/ethos-u-object_detection.axf
+~/FVP_install_location/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55 ./bin/mps3-sse-300/ethos-u-object_detection.axf
```
A log output appears on the terminal:
@@ -302,19 +297,19 @@ What the preceding choices do:
> **Note:** Please make sure to select an image index from within the range of supplied images during application
> build. By default, a pre-built application has four images, with indexes from `0` to `3`.
-3. Run detection on all ifm4: Triggers sequential inference executions on all built-in images.
+3. Run detection on all ifm: Triggers sequential inference executions on all built-in images.
-4. Show NN model info: Prints information about the model data type, input, and output, tensor sizes:
+4. Show NN model info: Prints information about the model data type and the input and output tensor sizes. For example:
```log
- INFO - Model info:
+ INFO - Allocating tensors
INFO - Model INPUT tensors:
- INFO - tensor type is UINT8
- INFO - tensor occupies 150528 bytes with dimensions
+ INFO - tensor type is INT8
+ INFO - tensor occupies 36864 bytes with dimensions
INFO - 0: 1
- INFO - 1: 224
- INFO - 2: 224
- INFO - 3: 3
+ INFO - 1: 192
+ INFO - 2: 192
+ INFO - 3: 1
INFO - Quant dimension: 0
INFO - Scale[0] = 0.003921
INFO - ZeroPoint[0] = -128
@@ -340,7 +335,8 @@ What the preceding choices do:
INFO - Activation buffer (a.k.a tensor arena) size used: 443992
INFO - Number of operators: 3
INFO - Operator 0: ethos-u
-
+ INFO - Operator 1: RESIZE_NEAREST_NEIGHBOR
+ INFO - Operator 2: ethos-u
```
5. List Images: Prints a list of image indexes. The original filenames are embedded in the application, like so: