 docs/use_cases/img_class.md | 256 +-
 1 file changed, 136 insertions(+), 120 deletions(-)
diff --git a/docs/use_cases/img_class.md b/docs/use_cases/img_class.md
index 2a31322..9a3451d 100644
--- a/docs/use_cases/img_class.md
+++ b/docs/use_cases/img_class.md
@@ -15,13 +15,12 @@
## Introduction
-This document describes the process of setting up and running the Arm® Ethos™-U55 Image Classification
-example.
+This document describes the process of setting up and running the Arm® *Ethos™-U55* Image Classification example.
-Use case solves classical computer vision problem: image classification. The ML sample was developed using MobileNet v2
-model trained on ImageNet dataset.
+This use-case example solves the classical computer vision problem of image classification. The ML sample was developed
+using the *MobileNet v2* model that was trained on the *ImageNet* dataset.
-Use case code could be found in [source/use_case/img_class](../../source/use_case/img_class]) directory.
+Use-case code can be found in the following directory: [source/use_case/img_class](../../source/use_case/img_class).
### Prerequisites
@@ -31,57 +30,62 @@ See [Prerequisites](../documentation.md#prerequisites)
### Build options
-In addition to the already specified build option in the main documentation, Image Classification use case specifies:
+In addition to the build options already specified in the main documentation, the Image Classification use-case
+specifies:
-- `img_class_MODEL_TFLITE_PATH` - Path to the NN model file in TFLite format. Model will be processed and included into
- the application axf file. The default value points to one of the delivered set of models. Note that the parameters
- `img_class_LABELS_TXT_FILE`,`TARGET_PLATFORM` and `ETHOS_U55_ENABLED` should be aligned with the chosen model, i.e.:
- - if `ETHOS_U55_ENABLED` is set to `On` or `1`, the NN model is assumed to be optimized. The model will naturally
- fall back to the Arm® Cortex®-M CPU if an unoptimized model is supplied.
+- `img_class_MODEL_TFLITE_PATH` - The path to the NN model file in the `TFLite` format. The model is then processed and
+ included in the application `axf` file. The default value points to one of the delivered set of models.
+
+ Note that the parameters `img_class_LABELS_TXT_FILE`, `TARGET_PLATFORM`, and `ETHOS_U55_ENABLED` must be aligned with
+ the chosen model. In other words:
+
+ - If `ETHOS_U55_ENABLED` is set to `On` or `1`, then the NN model is assumed to be optimized. The model naturally
+ falls back to the Arm® *Cortex®-M* CPU if an unoptimized model is supplied.
- if `ETHOS_U55_ENABLED` is set to `Off` or `0`, the NN model is assumed to be unoptimized. Supplying an optimized
- model in this case will result in a runtime error.
+ model in this case results in a runtime error.
-- `img_class_FILE_PATH`: Path to the directory containing images, or path to a single image file, to be used file(s) in
- the application. The default value points to the resources/img_class/samples folder containing the delivered
- set of images. See more in the [Add custom input data section](#add-custom-input).
+- `img_class_FILE_PATH`: The path to the directory containing the images, or a path to a single image file, that is to
+ be used in the application. The default value points to the `resources/img_class/samples` folder containing the
+ delivered set of images.
-- `img_class_IMAGE_SIZE`: The NN model requires input images to be of a specific size. This parameter defines the
- size of the image side in pixels. Images are considered squared. Default value is 224, which is what the supplied
- MobilenetV2-1.0 model expects.
+ For further information, please refer to: [Add custom input data section](#add-custom-input).
-- `img_class_LABELS_TXT_FILE`: Path to the labels' text file to be baked into the application. The file is used to
- map classified classes index to the text label. Change this parameter to point to the custom labels file to map
- custom NN model output correctly.\
- The default value points to the delivered labels.txt file inside the delivery package.
+- `img_class_IMAGE_SIZE`: The NN model requires input images to be of a specific size. This parameter defines the size
+ of the image side in pixels. Images are considered square. The default value is `224`, which is what the supplied
+ *MobilenetV2-1.0* model expects.
-- `img_class_ACTIVATION_BUF_SZ`: The intermediate/activation buffer size reserved for the NN model. By default, it
- is set to 2MiB and should be enough for most models.
+- `img_class_LABELS_TXT_FILE`: The path to the text file that contains the labels. The file is used to map a classified
+ class index to its text label. The default value points to the delivered `labels.txt` file inside the delivery
+ package. Change this parameter to point to a custom labels file to map custom NN model output correctly.
-- `USE_CASE_BUILD`: set to img_class to build only this example.
+- `img_class_ACTIVATION_BUF_SZ`: The intermediate, or activation, buffer size reserved for the NN model. By default, it
+ is set to 2MiB and is enough for most models.
-In order to build **ONLY** Image Classification example application add to the `cmake` command line specified in
-[Building](../documentation.md#Building) `-DUSE_CASE_BUILD=img_class`.
+- `USE_CASE_BUILD`: Set to `img_class` to build only this example.
+
+To build **ONLY** the Image Classification example application, add `-DUSE_CASE_BUILD=img_class` to the `cmake` command
+line, as specified in: [Building](../documentation.md#Building).
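
Taken together, the options above can be combined into a single CMake configuration. The following is a sketch only:
the model, image, and labels paths are hypothetical placeholders and must point to real files in your checkout:

```commandline
cmake .. \
    -Dimg_class_MODEL_TFLITE_PATH=/path/to/my_model_vela.tflite \
    -Dimg_class_FILE_PATH=/tmp/custom_images \
    -Dimg_class_LABELS_TXT_FILE=/path/to/labels_my_model.txt \
    -Dimg_class_IMAGE_SIZE=224 \
    -DUSE_CASE_BUILD=img_class
```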
### Build process
-> **Note:** This section describes the process for configuring the build for `MPS3: SSE-300` for different target platform
-see [Building](../documentation.md#Building).
+> **Note:** This section describes the process for configuring the build for the *MPS3: SSE-300*. To build for a
+> different target platform, please refer to: [Building](../documentation.md#Building).
-Create a build directory folder and navigate inside:
+Create a build directory and navigate inside, like so:
```commandline
mkdir build_img_class && cd build_img_class
```
-On Linux, execute the following command to build **only** Image Classification application to run on the Ethos-U55 Fast
-Model when providing only the mandatory arguments for CMake configuration:
+On Linux, when providing only the mandatory arguments for the CMake configuration, execute the following command to
+build **only** the Image Classification application to run on the *Ethos-U55* Fast Model:
```commandline
cmake ../ -DUSE_CASE_BUILD=img_class
```
-To configure a build that can be debugged using Arm-DS, we can just specify
-the build type as `Debug` and use the `Arm Compiler` toolchain file:
+To configure a build that can be debugged using Arm DS, specify the build type as `Debug` and then use the
+`Arm Compiler` toolchain file:
```commandline
cmake .. \
@@ -90,15 +94,15 @@ cmake .. \
-DUSE_CASE_BUILD=img_class
```
-Also see:
+For further information, please refer to:
- [Configuring with custom TPIP dependencies](../sections/building.md#configuring-with-custom-tpip-dependencies)
- [Using Arm Compiler](../sections/building.md#using-arm-compiler)
- [Configuring the build for simple_platform](../sections/building.md#configuring-the-build-for-simple_platform)
-- [Working with model debugger from Arm FastModel Tools](../sections/building.md#working-with-model-debugger-from-arm-fastmodel-tools)
+- [Working with model debugger from Arm Fast Model Tools](../sections/building.md#working-with-model-debugger-from-arm-fastmodel-tools)
-> **Note:** If re-building with changed parameters values, it is highly advised to clean the build directory and re-run
->the CMake command.
+> **Note:** If re-building with changed parameter values, we recommend that you clean the build directory and re-run
+> the CMake command.
If the CMake command succeeds, build the application as follows:
@@ -106,9 +110,9 @@ If the CMake command succeeds, build the application as follows:
make -j4
```
-Add VERBOSE=1 to see compilation and link details.
+To see compilation and link details, add `VERBOSE=1`.
-Results of the build will be placed under `build/bin` folder:
+Results of the build are placed under the `build/bin` folder, like so:
```tree
bin
@@ -118,30 +122,32 @@ bin
 └── sectors
├── images.txt
└── img_class
- ├── dram.bin
+ ├── ddr.bin
└── itcm.bin
```
-Where:
+The `bin` folder contains the following files:
-- `ethos-u-img_class.axf`: The built application binary for the Image Classification use case.
+- `ethos-u-img_class.axf`: The built application binary for the Image Classification use-case.
-- `ethos-u-img_class.map`: Information from building the application (e.g. libraries used, what was optimized, location
- of objects)
+- `ethos-u-img_class.map`: Information from building the application. For example: The libraries used, what was
+ optimized, and the location of objects.
- `ethos-u-img_class.htm`: Human readable file containing the call graph of application functions.
-- `sectors/img_class`: Folder containing the built application, split into files for loading into different FPGA memory regions.
+- `sectors/img_class`: Folder containing the built application. It is split into files for loading into different FPGA memory
+ regions.
-- `sectors/images.txt`: Tells the FPGA which memory regions to use for loading the binaries in sectors/** folder.
+- `sectors/images.txt`: Tells the FPGA which memory regions to use for loading the binaries in the `sectors/`
+ folder.
### Add custom input
-The application performs inference on input data found in the folder, or an individual file set by the CMake parameter
-img_class_FILE_PATH.
+The application, by default, is set up to perform inferences on data found in the folder, or on an individual file,
+that is pointed to by the CMake parameter `img_class_FILE_PATH`.
-To run the application with your own images, first create a folder to hold them and then copy the custom images into
-this folder, for example:
+To run the application with your own images, first create a folder to hold them and then copy the custom images into
+this folder. For example:
```commandline
mkdir /tmp/custom_images
@@ -151,7 +157,7 @@ cp custom_image1.bmp /tmp/custom_images/
> **Note:** Clean the build directory before re-running the CMake command.
-Next set `img_class_FILE_PATH` to the location of this folder when building:
+Next, set `img_class_FILE_PATH` to the location of this folder when building:
```commandline
cmake .. \
@@ -159,11 +165,11 @@ cmake .. \
-DUSE_CASE_BUILD=img_class
```
-The images found in the `img_class_FILE_PATH` folder will be picked up and automatically converted to C++ files during
-the CMake configuration stage and then compiled into the application during the build phase for performing inference
+The images found in the `img_class_FILE_PATH` folder are picked up and automatically converted to C++ files during the
+CMake configuration stage. They are then compiled into the application during the build phase for performing inference
with.
-The log from the configuration stage should tell you what image directory path has been used:
+The log from the configuration stage tells you what image directory path has been used:
```log
-- User option img_class_FILE_PATH is set to /tmp/custom_images
@@ -178,26 +184,29 @@ The log from the configuration stage should tell you what image directory path h
-- img_class_IMAGE_SIZE=224
```
-After compiling, your custom images will have now replaced the default ones in the application.
+After compiling, your custom images have now replaced the default ones in the application.
-> **Note:** The CMake parameter IMAGE_SIZE should match the model input size. When building the application,
-if the size of any image does not match IMAGE_SIZE then it will be rescaled and padded so that it does.
+> **Note:** The CMake parameter `IMAGE_SIZE` must match the model input size. When building the application, if the size
+of any image does not match `IMAGE_SIZE`, then it is rescaled and padded so that it does.
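
The expected input size follows directly from this parameter: for the default 224x224 square RGB input with one byte
per channel (a `UINT8` model), the input tensor occupies 224 * 224 * 3 = 150,528 bytes, matching the input tensor size
reported by the "Show NN model info" menu option described later in this document. As a quick check:

```commandline
# Input tensor size for a square RGB image of side IMAGE_SIZE (default 224),
# one byte per channel for a UINT8 model:
awk 'BEGIN { print 224 * 224 * 3 }'
# prints 150528
```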
### Add custom model
-The application performs inference using the model pointed to by the CMake parameter MODEL_TFLITE_PATH.
+The application performs inference using the model pointed to by the CMake parameter `MODEL_TFLITE_PATH`.
+
+> **Note:** If you want to run the model using an *Ethos-U55*, ensure that your custom model has been successfully run
+> through the Vela compiler *before* continuing.
-> **Note:** If you want to run the model using Ethos-U55, ensure your custom model has been run through the Vela compiler
->successfully before continuing. See [Optimize model with Vela compiler](../sections/building.md#Optimize-custom-model-with-Vela-compiler).
+For further information, please refer to: [Optimize model with Vela compiler](../sections/building.md#Optimize-custom-model-with-Vela-compiler).
-To run the application with a custom model you will need to provide a labels_<model_name>.txt file of labels
-associated with the model. Each line of the file should correspond to one of the outputs in your model. See the provided
-labels_mobilenet_v2_1.0_224.txt file for an example.
+To run the application with a custom model, you must provide a `labels_<model_name>.txt` file of labels that are
+associated with the model. Each line of the file must correspond to one of the outputs in your model.
+
+Refer to the provided `labels_mobilenet_v2_1.0_224.txt` file for an example.
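
As a minimal illustration of the expected layout, the following sketch creates a tiny, hypothetical labels file. A real
labels file for the supplied MobileNet model has 1001 lines, one per model output:

```commandline
# Hypothetical labels file: line N maps to model output index N.
printf 'background\ntench\ngoldfish\n' > labels_my_model.txt

# The line count must match the number of outputs in the model:
wc -l < labels_my_model.txt
# prints 3 for this example file
```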
Then, you must set `img_class_MODEL_TFLITE_PATH` to the location of the Vela processed model file and
`img_class_LABELS_TXT_FILE` to the location of the associated labels file.
-An example:
+For example:
```commandline
cmake .. \
@@ -208,11 +217,11 @@ cmake .. \
> **Note:** Clean the build directory before re-running the CMake command.
-The `.tflite` model file pointed to by `img_class_MODEL_TFLITE_PATH` and labels text file pointed to by
-`img_class_LABELS_TXT_FILE` will be converted to C++ files during the CMake configuration stage and then compiled into
+The `.tflite` model file pointed to by `img_class_MODEL_TFLITE_PATH`, and the labels text file pointed to by
+`img_class_LABELS_TXT_FILE` are converted to C++ files during the CMake configuration stage. They are then compiled into
the application for performing inference with.
-The log from the configuration stage should tell you what model path and labels file have been used:
+The log from the configuration stage tells you what model path and labels file have been used, for example:
```log
-- User option img_class_MODEL_TFLITE_PATH is set to <path/to/custom_model_after_vela.tflite>
@@ -227,38 +236,44 @@ custom_model_after_vela.tflite.cc
...
```
-After compiling, your custom model will have now replaced the default one in the application.
+After compiling, your custom model has now replaced the default one in the application.
## Setting up and running Ethos-U55 code sample
### Setting up the Ethos-U55 Fast Model
-The FVP is available publicly from [Arm Ecosystem FVP downloads](https://developer.arm.com/tools-and-software/open-source-software/arm-platforms-software/arm-ecosystem-fvps).
+The FVP is available publicly from
+[Arm Ecosystem FVP downloads](https://developer.arm.com/tools-and-software/open-source-software/arm-platforms-software/arm-ecosystem-fvps).
+
+For the *Ethos-U55* evaluation, please download the MPS3 version of the Arm® *Corstone™-300* model that contains both
+the *Ethos-U55* and *Cortex-M55*. The model is currently only supported on Linux-based machines.
-For Ethos-U55 evaluation, please download the MPS3 version of the Arm® Corstone™-300 model that contains Ethos-U55 and
-Cortex-M55. The model is currently only supported on Linux based machines. To install the FVP:
+To install the FVP:
-- Unpack the archive
+- Unpack the archive.
-- Run the install script in the extracted package
+- Run the install script in the extracted package:
```commandline
-$./FVP_Corstone_SSE-300_Ethos-U55.sh
+./FVP_Corstone_SSE-300_Ethos-U55.sh
```
-- Follow the instructions to install the FVP to your desired location
+- Follow the instructions to install the FVP to the required location.
### Starting Fast Model simulation
-Pre-built application binary ethos-u-img_class.axf can be found in the bin/mps3-sse-300 folder of the delivery package.
-Assuming the install location of the FVP was set to ~/FVP_install_location, the simulation can be started by:
+The pre-built application binary `ethos-u-img_class.axf` can be found in the `bin/mps3-sse-300` folder of the delivery
+package.
+
+Assuming that the install location of the FVP was set to `~/FVP_install_location`, then the simulation can be started by
+using:
```commandline
~/FVP_install_location/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55
./bin/mps3-sse-300/ethos-u-img_class.axf
```
-A log output should appear on the terminal:
+A log output appears on the terminal:
```log
telnetterminal0: Listening for serial connection on port 5000
@@ -267,13 +282,13 @@ telnetterminal2: Listening for serial connection on port 5002
telnetterminal5: Listening for serial connection on port 5003
```
-This will also launch a telnet window with the sample application's standard output and error log entries containing
-information about the pre-built application version, TensorFlow Lite Micro library version used, data type as well as
-the input and output tensor sizes of the model compiled into the executable binary.
+This also launches a telnet window with the standard output of the sample application. It also includes error log
+entries containing information about the pre-built application version, TensorFlow Lite Micro library version used, and
+data types. The log also includes the input and output tensor sizes of the model compiled into the executable binary.
-After the application has started if `img_class_FILE_PATH` pointed to a single file (or a folder containing a single image)
-the inference starts immediately. In case of multiple inputs choice, it outputs a menu and waits for the user input from
-telnet terminal:
+After the application has started, if `img_class_FILE_PATH` points to a single file, or even a folder that contains a
+single image, then the inference starts immediately. If there are multiple inputs, it outputs a menu and then waits for
+input from the user:
```log
User input required
@@ -289,45 +304,46 @@ Choice:
```
-1. “Classify next image” menu option will run single inference on the next in line image from the collection of the
- compiled images.
+What the preceding choices do:
+
+1. Classify next image: Runs a single inference on the next image from the collection of compiled images.
-2. “Classify image at chosen index” menu option will run single inference on the chosen image.
+2. Classify image at chosen index: Runs inference on the chosen image.
- > **Note:** Please make sure to select image index in the range of supplied images during application build.
- By default, pre-built application has 4 images, indexes from 0 to 3.
+ > **Note:** Please make sure to select an image index from within the range of images supplied during application
+ > build. By default, a pre-built application has four images, with indexes from `0` to `3`.
-3. “Run classification on all images” menu option triggers sequential inference executions on all built-in images.
+3. Run classification on all images: Triggers sequential inference executions on all built-in images.
-4. “Show NN model info” menu option prints information about model data type, input and output tensor sizes:
+4. Show NN model info: Prints information about the model data type, and the input and output tensor sizes:
```log
INFO - uTFL version: 2.5.0
INFO - Model info:
INFO - Model INPUT tensors:
- INFO - tensor type is UINT8
- INFO - tensor occupies 150528 bytes with dimensions
- INFO - 0: 1
- INFO - 1: 224
- INFO - 2: 224
- INFO - 3: 3
+ INFO - tensor type is UINT8
+ INFO - tensor occupies 150528 bytes with dimensions
+ INFO - 0: 1
+ INFO - 1: 224
+ INFO - 2: 224
+ INFO - 3: 3
INFO - Quant dimension: 0
INFO - Scale[0] = 0.007812
INFO - ZeroPoint[0] = 128
INFO - Model OUTPUT tensors:
- INFO - tensor type is UINT8
- INFO - tensor occupies 1001 bytes with dimensions
- INFO - 0: 1
- INFO - 1: 1001
+ INFO - tensor type is UINT8
+ INFO - tensor occupies 1001 bytes with dimensions
+ INFO - 0: 1
+ INFO - 1: 1001
INFO - Quant dimension: 0
INFO - Scale[0] = 0.098893
INFO - ZeroPoint[0] = 58
INFO - Activation buffer (a.k.a tensor arena) size used: 521760
INFO - Number of operators: 1
- INFO - Operator 0: ethos-u
+ INFO - Operator 0: ethos-u
```
-5. “List Images” menu option prints a list of pair image indexes - the original filenames embedded in the application:
+5. List Images: Prints a list of image indexes paired with the original filenames embedded in the application, like so:
```log
INFO - List of Files:
@@ -341,7 +357,7 @@ Choice:
Please select the first menu option to execute Image Classification.
-The following example illustrates application output for classification:
+The following example illustrates an application output for classification:
```log
INFO - Running inference on image 0 => cat.bmp
@@ -361,31 +377,31 @@ INFO - NPU IDLE cycles: 914
INFO - NPU total cycles: 7490172
```
-It could take several minutes to complete one inference run (average time is 2-3 minutes).
+It can take several minutes to complete one inference run. The average time is around 2-3 minutes.
-The log shows the inference results for “image 0” (0 - index) that corresponds to “cat.bmp” in the sample image resource
-folder.
+The log shows the inference results for `image 0`, at index `0`, which corresponds to `cat.bmp` in the sample image
+resource folder.
The profiling section of the log shows that for this inference:
-- Ethos-U55's PMU report:
+- *Ethos-U55* PMU report:
- - 7,490,172 total cycle: The number of NPU cycles
+ - 7,490,172 total cycle: The number of NPU cycles.
- - 7,489,258 active cycles: number of NPU cycles that were used for computation
+ - 7,489,258 active cycles: The number of NPU cycles that were used for computation.
- - 914 idle cycles: number of cycles for which the NPU was idle
+ - 914 idle cycles: The number of cycles for which the NPU was idle.
- - 2,489,726 AXI0 read beats: The number of AXI beats with read transactions from AXI0 bus.
- AXI0 is the bus where Ethos-U55 NPU reads and writes to the computation buffers (activation buf/tensor arenas).
+ - 2,489,726 AXI0 read beats: The number of AXI beats with read transactions from the AXI0 bus. AXI0 is the bus where
+ the *Ethos-U55* NPU reads and writes to the computation buffers (activation buffers, or tensor arenas).
- 1,098,726 AXI0 write beats: The number of AXI beats with write transactions to AXI0 bus.
- - 471,129 AXI1 read beats: The number of AXI beats with read transactions from AXI1 bus.
- AXI1 is the bus where Ethos-U55 NPU reads the model (read only)
+ - 471,129 AXI1 read beats: The number of AXI beats with read transactions from the AXI1 bus. AXI1 is the bus where
+ the *Ethos-U55* NPU reads the model, so it is read-only.
-- For FPGA platforms, CPU cycle count can also be enabled. For FVP, however, CPU cycle counters should not be used as
- the CPU model is not cycle-approximate or cycle-accurate.
+- For FPGA platforms, a CPU cycle count can also be enabled. However, do not use cycle counters for FVP, as the CPU
+ model is not cycle-approximate or cycle-accurate.
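
As a quick sanity check on the PMU figures above, the active and idle cycle counts should sum to the total NPU cycle
count reported in the log:

```commandline
# 7,489,258 active cycles + 914 idle cycles = 7,490,172 total NPU cycles:
awk 'BEGIN { print 7489258 + 914 }'
# prints 7490172
```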
-The application prints the top 5 classes with indexes, confidence score and labels from associated
-labels_mobilenet_v2_1.0_224.txt file. The FVP window also shows the output on its LCD section.
+The application prints the top five classes with indexes, a confidence score, and labels from the associated
+*labels_mobilenet_v2_1.0_224.txt* file. The FVP window also shows the output on its LCD section.
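
The scores printed by the application are derived from the quantized `UINT8` output tensor. Using the output
quantization parameters from the model info log (`Scale[0] = 0.098893`, `ZeroPoint[0] = 58`), a raw output value can be
converted back to a real-valued score with the standard TFLite affine dequantization formula
`real = Scale * (quantized - ZeroPoint)`. The raw value `200` below is a hypothetical example:

```commandline
# Dequantize a hypothetical raw UINT8 output value of 200:
awk 'BEGIN { printf "%.6f\n", 0.098893 * (200 - 58) }'
# prints 14.042806
```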