author    Isabella Gottardi <isabella.gottardi@arm.com>  2022-03-08 15:27:49 +0000
committer Isabella Gottardi <isabella.gottardi@arm.com>  2022-03-11 10:57:23 +0000
commit    1716efd0b35889b580276e27c8b6f661c9858cd0 (patch)
tree      1c014d324fec1695d4f5bb8e26f9c4fdb795ee82
parent    e7f512592818574d98c4b3ba09b4d3315fe025bd (diff)
download  ml-embedded-evaluation-kit-1716efd0b35889b580276e27c8b6f661c9858cd0.tar.gz
MLECO-3006: Fixing some minor errors in documentation
Change-Id: I24cd544780f46fcec8f154b440f7bb959c20a459
Signed-off-by: Isabella Gottardi <isabella.gottardi@arm.com>
-rw-r--r--  docs/documentation.md                  32
-rw-r--r--  docs/sections/building.md              12
-rw-r--r--  docs/sections/customizing.md            8
-rw-r--r--  docs/sections/deployment.md             2
-rw-r--r--  docs/sections/troubleshooting.md        6
-rw-r--r--  docs/use_cases/asr.md                   3
-rw-r--r--  docs/use_cases/img_class.md             3
-rw-r--r--  docs/use_cases/inference_runner.md      7
-rw-r--r--  docs/use_cases/kws.md                   3
-rw-r--r--  docs/use_cases/kws_asr.md               3
-rw-r--r--  docs/use_cases/visual_wake_word.md      3
-rw-r--r--  model_conditioning_examples/Readme.md  22
12 files changed, 52 insertions, 52 deletions
diff --git a/docs/documentation.md b/docs/documentation.md
index f911cff..9a00cc4 100644
--- a/docs/documentation.md
+++ b/docs/documentation.md
@@ -203,10 +203,12 @@ What these folders contain:
through `CMSIS_SRC_PATH` variable.
The static library is used by platform code.
-- `components` directory contains drivers code for different devices used in platforms. Such as UART, LCD and others.
- A platform can include those as sources in a build to enable usage of corresponding HW devices. Most of the use-cases
- use UART and LCD, thus if you want to run default ML use-cases on a custom platform, you will have to add
- implementation for your devices here (or re-use existing code if it is compatible with your platform).
+- `components` directory contains drivers for different modules that can be reused for different platforms.
+ These contain common functions for Arm Ethos-U NPU initialization, timing adapter block helpers and others.
+ Each component produces a static library that could potentially be linked into the platform library to enable
+ usage of corresponding modules from the platform sources. For example, most of the use-cases use NPU and
+ timing adapter initialization. If you want to run default ML use-cases on a custom platform, you could re-use
+ existing code from this directory provided it is compatible with your platform.
- `platform/mps3`\
`platform/simple`:
@@ -228,18 +230,22 @@ What these folders contain:
The native profile allows the application to be built for execution on the build machine, for example x86. It bypasses
platform devices, stubbing them out with standard C or C++ library calls.
-- `platforms/bare-metal/bsp/mem_layout`: Contains the platform-specific linker scripts.
-
## Models and resources
-The models used in the use-cases implemented in this project can be downloaded from: [Arm ML-Zoo](https://github.com/ARM-software/ML-zoo).
+The models used in the use-cases implemented in this project can be downloaded from:
+
+- [Arm ML-Zoo](https://github.com/ARM-software/ML-zoo) ( [Apache 2.0 License](https://github.com/ARM-software/ML-zoo/blob/master/LICENSE) )
+
+ - [Mobilenet V2](https://github.com/ARM-software/ML-zoo/tree/e0aa361b03c738047b9147d1a50e3f2dcb13dbcb/models/image_classification/mobilenet_v2_1.0_224/tflite_int8)
+ - [MicroNet for Keyword Spotting](https://github.com/ARM-software/ML-zoo/tree/9f506fe52b39df545f0e6c5ff9223f671bc5ae00/models/keyword_spotting/micronet_medium/tflite_int8)
+ - [Wav2Letter](https://github.com/ARM-software/ML-zoo/tree/1a92aa08c0de49a7304e0a7f3f59df6f4fd33ac8/models/speech_recognition/wav2letter/tflite_pruned_int8)
+ - [MicroNet for Anomaly Detection](https://github.com/ARM-software/ML-zoo/tree/7c32b097f7d94aae2cd0b98a8ed5a3ba81e66b18/models/anomaly_detection/micronet_medium/tflite_int8)
+ - [MicroNet for Visual Wake Word](https://github.com/ARM-software/ML-zoo/raw/7dd3b16bb84007daf88be8648983c07f3eb21140/models/visual_wake_words/micronet_vww4/tflite_int8/vww4_128_128_INT8.tflite)
+ - [RNNoise](https://github.com/ARM-software/ML-zoo/raw/a061600058097a2785d6f1f7785e5a2d2a142955/models/noise_suppression/RNNoise/tflite_int8/rnnoise_INT8.tflite)
+
+- [Emza Visual Sense ModelZoo](https://github.com/emza-vs/ModelZoo) ( [Apache 2.0 License](https://github.com/emza-vs/ModelZoo/blob/v1.0/LICENSE) )
-- [Mobilenet V2](https://github.com/ARM-software/ML-zoo/tree/e0aa361b03c738047b9147d1a50e3f2dcb13dbcb/models/image_classification/mobilenet_v2_1.0_224/tflite_int8)
-- [MicroNet for Keyword Spotting](https://github.com/ARM-software/ML-zoo/tree/9f506fe52b39df545f0e6c5ff9223f671bc5ae00/models/keyword_spotting/micronet_medium/tflite_int8)
-- [Wav2Letter](https://github.com/ARM-software/ML-zoo/tree/1a92aa08c0de49a7304e0a7f3f59df6f4fd33ac8/models/speech_recognition/wav2letter/tflite_pruned_int8)
-- [MicroNet for Anomaly Detection](https://github.com/ARM-software/ML-zoo/tree/7c32b097f7d94aae2cd0b98a8ed5a3ba81e66b18/models/anomaly_detection/micronet_medium/tflite_int8)
-- [MicroNet for Visual Wake Word](https://github.com/ARM-software/ML-zoo/raw/7dd3b16bb84007daf88be8648983c07f3eb21140/models/visual_wake_words/micronet_vww4/tflite_int8/vww4_128_128_INT8.tflite)
-- [RNNoise](https://github.com/ARM-software/ML-zoo/raw/a061600058097a2785d6f1f7785e5a2d2a142955/models/noise_suppression/RNNoise/tflite_int8/rnnoise_INT8.tflite)
+ - [YOLO Fastest](https://github.com/emza-vs/ModelZoo/blob/v1.0/object_detection/yolo-fastest_192_face_v4.tflite)
When using the *Ethos-U* NPU backend, the Vela compiler optimizes the NN model. Operators that the NPU does not
support, but that TensorFlow Lite Micro does, fall back to execution on the CPU.
diff --git a/docs/sections/building.md b/docs/sections/building.md
index 2f122f9..4f4e6dd 100644
--- a/docs/sections/building.md
+++ b/docs/sections/building.md
@@ -184,7 +184,7 @@ The build parameters are:
- `Sram_Only`
> **Note:** The `Shared_Sram` memory mode is available on both *Ethos-U55* and *Ethos-U65* NPU, `Dedicated_Sram` only
- > for *Ethos-U65* NPU and `Sram_Only` only for Ethos-U55* NPU.
+ > for *Ethos-U65* NPU and `Sram_Only` only for *Ethos-U55* NPU.
- `ETHOS_U_NPU_CONFIG_ID`: This parameter is set by default based on the value of `ETHOS_U_NPU_ID`.
For Ethos-U55, it defaults to the `H128` indicating that the Ethos-U55 128 MAC optimised model
@@ -259,7 +259,7 @@ The build process uses three major steps:
- Some files such as neural network models, network inputs, and output labels are automatically converted into C/C++
arrays, see: [Automatic file generation](./building.md#automatic-file-generation).
-3. Build the application.\
+3. Build the application.
Application and third-party libraries are now built. For further information, see:
[Building the configured project](./building.md#building-the-configured-project).
@@ -271,12 +271,12 @@ Certain third-party sources are required to be present on the development machin
repository to link against.
1. [TensorFlow Lite Micro repository](https://github.com/tensorflow/tensorflow)
-2. [Ethos-U55 NPU core driver repository](https://review.mlplatform.org/admin/repos/ml/ethos-u/ethos-u-core-driver)
+2. [Ethos-U NPU core driver repository](https://review.mlplatform.org/admin/repos/ml/ethos-u/ethos-u-core-driver)
3. [CMSIS-5](https://github.com/ARM-software/CMSIS_5.git)
+4. [Ethos-U NPU core platform repository](https://review.mlplatform.org/admin/repos/ml/ethos-u/ethos-u-core-platform)
> **Note:** If you are using non-git project sources, run `python3 ./download_dependencies.py` and ignore further git
> instructions. Proceed to [Fetching resource files](./building.md#fetching-resource-files) section.
->
To pull the submodules:
@@ -290,7 +290,7 @@ This downloads all of the required components and places them in a tree, like so
dependencies
├── cmsis
├── core-driver
- ├── core-software
+ ├── core-platform
└── tensorflow
```
@@ -391,7 +391,7 @@ mkdir build && cd build
#### Using GNU Arm Embedded toolchain
On Linux, if using `Arm GNU embedded toolchain`, execute the following command to build the application to run on the
-Arm® *Ethos™-U55* NPU when providing only the mandatory arguments for CMake configuration:
+Arm® *Ethos™-U* NPU when providing only the mandatory arguments for CMake configuration:
```commandline
cmake ../
diff --git a/docs/sections/customizing.md b/docs/sections/customizing.md
index ef90e5e..2302809 100644
--- a/docs/sections/customizing.md
+++ b/docs/sections/customizing.md
@@ -21,7 +21,7 @@
This section describes how to implement a custom Machine Learning application running on Arm® *Corstone™-300* based FVP
or on the Arm® MPS3 FPGA prototyping board.
-the Arm® *Ethos™-U55* code sample software project offers a way to incorporate more use-case code into the existing
+The Arm® *Ethos™-U* code sample software project offers a way to incorporate more use-case code into the existing
infrastructure. It also provides a build system that automatically picks up added functionality and produces
corresponding executable for each use-case. This is achieved by following certain configuration and code implementation
conventions.
@@ -679,7 +679,7 @@ in the root of your use-case. However, the name of the file is not important.
> - `use_case` – The name of the current use-case.
> - `UC_SRC` – A list of use-case sources.
> - `UC_INCLUDE` – The path to the use-case headers.
-> - `ETHOS_U_NPU_ENABLED` – The flag indicating if the current build supports Ethos-U55.
+> - `ETHOS_U_NPU_ENABLED` – The flag indicating if the current build supports *Ethos™-U* NPU.
> - `TARGET_PLATFORM` – The target platform being built for.
> - `TARGET_SUBSYSTEM` – If target platform supports multiple subsystems, this is the name of the subsystem.
> - All standard build options.
@@ -691,9 +691,9 @@ so:
```cmake
if (ETHOS_U_NPU_ENABLED)
- set(DEFAULT_MODEL_PATH ${DEFAULT_MODEL_DIR}/helloworldmodel_uint8_vela_${DEFAULT_NPU_CONFIG_ID}.tflite)
+ set(DEFAULT_MODEL_PATH ${DEFAULT_MODEL_DIR}/helloworldmodel_vela_${DEFAULT_NPU_CONFIG_ID}.tflite)
else()
- set(DEFAULT_MODEL_PATH ${DEFAULT_MODEL_DIR}/helloworldmodel_uint8.tflite)
+ set(DEFAULT_MODEL_PATH ${DEFAULT_MODEL_DIR}/helloworldmodel.tflite)
endif()
```
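For clarity, the same selection can be mirrored in a short Python sketch (illustrative only — the build itself uses the CMake logic above; the `H128` default configuration ID is taken from the building guide):

```python
def default_model_path(model_dir: str, npu_enabled: bool, npu_config_id: str = "H128") -> str:
    """Mirror of the CMake snippet above: pick the Vela-optimised model
    when the Ethos-U NPU is enabled, otherwise the plain model."""
    if npu_enabled:
        return f"{model_dir}/helloworldmodel_vela_{npu_config_id}.tflite"
    return f"{model_dir}/helloworldmodel.tflite"

print(default_model_path("resources", npu_enabled=True))
# resources/helloworldmodel_vela_H128.tflite
```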
diff --git a/docs/sections/deployment.md b/docs/sections/deployment.md
index 034fb19..045bda0 100644
--- a/docs/sections/deployment.md
+++ b/docs/sections/deployment.md
@@ -25,7 +25,7 @@ The FVP is available publicly from the following page:
Please ensure that you download the correct archive from the list under Arm® *Corstone™-300*. You need the one which:
- Emulates MPS3 board and *not* for MPS2 FPGA board,
-- Contains support for Arm® *Ethos™-U55*.
+- Contains support for Arm® *Ethos™-U55* and *Ethos-U65* processors.
### Setting up the MPS3 Arm Corstone-300 FVP
diff --git a/docs/sections/troubleshooting.md b/docs/sections/troubleshooting.md
index 612e40e..8b2646a 100644
--- a/docs/sections/troubleshooting.md
+++ b/docs/sections/troubleshooting.md
@@ -42,7 +42,7 @@ It shows that the configuration of the Vela compiled `.tflite` file doesn't matc
The Vela configuration parameter `accelerator-config`, used when producing the `.tflite` file that is built into the
application, should match the MACs configuration that the FVP is emulating.
For example, if the `accelerator-config` from the Vela command was `ethos-u55-128`, the FVP should be emulating the
-128 MACs configuration of the Ethos-U55 block(default FVP configuration). If the `accelerator-config` used was
+128 MACs configuration of the Ethos™-U55 block (the default FVP configuration). If the `accelerator-config` used was
`ethos-u55-256`, the FVP must be executed with additional command line parameter to instruct it to emulate the
256 MACs configuration instead.
@@ -56,8 +56,8 @@ logged over UART. These include the MACs/cc configuration of the FVP.
INFO - MPS3 core clock has been set to: 32000000Hz
INFO - CPU ID: 0x410fd220
INFO - CPU: Cortex-M55 r0p0
-INFO - Ethos-U55 device initialised
-INFO - Ethos-U55 version info:
+INFO - Ethos-U device initialised
+INFO - Ethos-U version info:
INFO - Arch: v1.0.6
INFO - Driver: v0.16.0
INFO - MACs/cc: 128
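As a quick way to apply this check, the MAC count embedded in the Vela `accelerator-config` string can be compared against the `MACs/cc` line from the FVP log above (a hypothetical helper for illustration, not part of the kit):

```python
def macs_from_accelerator_config(cfg: str) -> int:
    """Extract the MAC count from a Vela accelerator-config, e.g. 'ethos-u55-128' -> 128."""
    return int(cfg.rsplit("-", 1)[-1])

def config_matches_fvp(accelerator_config: str, fvp_log_line: str) -> bool:
    """Compare a Vela config against an FVP log line such as 'INFO - MACs/cc: 128'."""
    fvp_macs = int(fvp_log_line.rsplit(":", 1)[-1])
    return macs_from_accelerator_config(accelerator_config) == fvp_macs

print(config_matches_fvp("ethos-u55-128", "INFO - MACs/cc: 128"))  # True
print(config_matches_fvp("ethos-u55-256", "INFO - MACs/cc: 128"))  # False: FVP needs reconfiguring
```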
diff --git a/docs/use_cases/asr.md b/docs/use_cases/asr.md
index 46ef584..adeb838 100644
--- a/docs/use_cases/asr.md
+++ b/docs/use_cases/asr.md
@@ -346,8 +346,7 @@ Assuming that the install location of the FVP was set to `~/FVP_install_location
using:
```commandline
-~/FVP_install_location/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55
-./bin/mps3-sse-300/ethos-u-asr.axf
+~/FVP_install_location/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55 ./bin/mps3-sse-300/ethos-u-asr.axf
```
A log output appears on the terminal:
diff --git a/docs/use_cases/img_class.md b/docs/use_cases/img_class.md
index 494ec61..7db6e39 100644
--- a/docs/use_cases/img_class.md
+++ b/docs/use_cases/img_class.md
@@ -269,8 +269,7 @@ Assuming that the install location of the FVP was set to `~/FVP_install_location
using:
```commandline
-~/FVP_install_location/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55
-./bin/mps3-sse-300/ethos-u-img_class.axf
+~/FVP_install_location/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55 ./bin/mps3-sse-300/ethos-u-img_class.axf
```
A log output appears on the terminal:
diff --git a/docs/use_cases/inference_runner.md b/docs/use_cases/inference_runner.md
index 0aa671a..1082c5c 100644
--- a/docs/use_cases/inference_runner.md
+++ b/docs/use_cases/inference_runner.md
@@ -205,8 +205,7 @@ Assuming that the install location of the FVP was set to `~/FVP_install_location
using:
```commandline
-~/FVP_install_location/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55
-./bin/mps3-sse-300/ethos-u-inference_runner.axf
+~/FVP_install_location/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55 -a ./bin/mps3-sse-300/ethos-u-inference_runner.axf
```
A log output appears on the terminal:
@@ -309,8 +308,8 @@ binary blob.
> the model size can be a maximum of 32MiB. The IFM and OFM spaces are both reserved as 16MiB sections.
```commandline
-~/FVP_install_location/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55 -a \
- ./bin/ethos-u-inference_runner.axf \
+~/FVP_install_location/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55 \
+ -a ./bin/ethos-u-inference_runner.axf \
--data /path/to/custom-model.tflite@0x90000000 \
--data /path/to/custom-ifm.bin@0x92000000 \
--dump cpu0=/path/to/output.bin@Memory:0x93000000,1024
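The load addresses in the command above line up with the reserved windows described in the note (model: 32 MiB maximum, IFM and OFM: 16 MiB each). A quick sanity check, assuming the OFM window ends at `0x94000000`:

```python
MODEL_BASE = 0x90000000  # custom model blob
IFM_BASE = 0x92000000    # input feature map blob
OFM_BASE = 0x93000000    # output dump region
OFM_END = 0x94000000     # assumed end of the reserved OFM window

MIB = 1024 * 1024
model_window = (IFM_BASE - MODEL_BASE) // MIB
ifm_window = (OFM_BASE - IFM_BASE) // MIB
ofm_window = (OFM_END - OFM_BASE) // MIB

print(model_window, ifm_window, ofm_window)  # 32 16 16
```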
diff --git a/docs/use_cases/kws.md b/docs/use_cases/kws.md
index d07dff2..bda22bf 100644
--- a/docs/use_cases/kws.md
+++ b/docs/use_cases/kws.md
@@ -313,8 +313,7 @@ Assuming that the install location of the FVP was set to `~/FVP_install_location
using:
```commandline
-~/FVP_install_location/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55
-./bin/mps3-sse-300/ethos-u-kws.axf
+~/FVP_install_location/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55 ./bin/mps3-sse-300/ethos-u-kws.axf
```
A log output appears on the terminal:
diff --git a/docs/use_cases/kws_asr.md b/docs/use_cases/kws_asr.md
index 8013634..d8b2fee 100644
--- a/docs/use_cases/kws_asr.md
+++ b/docs/use_cases/kws_asr.md
@@ -404,8 +404,7 @@ Assuming that the install location of the FVP was set to `~/FVP_install_location
using:
```commandline
-$ ~/FVP_install_location/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55
-./bin/mps3-sse-300/ethos-u-kws_asr.axf
+$ ~/FVP_install_location/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55 ./bin/mps3-sse-300/ethos-u-kws_asr.axf
```
A log output appears on the terminal:
diff --git a/docs/use_cases/visual_wake_word.md b/docs/use_cases/visual_wake_word.md
index a6f6130..99aa3f2 100644
--- a/docs/use_cases/visual_wake_word.md
+++ b/docs/use_cases/visual_wake_word.md
@@ -249,8 +249,7 @@ Pre-built application binary ethos-u-vww.axf can be found in the bin/mps3-sse-30
package. Assuming the install location of the FVP was set to `~/FVP_install_location`, the simulation can be started by:
```commandline
-$ ~/FVP_install_location/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55
-./bin/mps3-sse-300/ethos-u-vww.axf
+$ ~/FVP_install_location/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55 ./bin/mps3-sse-300/ethos-u-vww.axf
```
A log output should appear on the terminal:
diff --git a/model_conditioning_examples/Readme.md b/model_conditioning_examples/Readme.md
index bb00b79..9bfc968 100644
--- a/model_conditioning_examples/Readme.md
+++ b/model_conditioning_examples/Readme.md
@@ -12,7 +12,7 @@
## Introduction
This folder contains short example scripts that demonstrate some methods available in TensorFlow to condition your model
-in preparation for deployment on Arm Ethos NPU.
+in preparation for deployment on Arm® Ethos™ NPU.
These scripts will cover three main topics:
@@ -22,7 +22,7 @@ These scripts will cover three main topics:
The objective of these scripts is not to be a single source of knowledge on everything related to model conditioning.
Instead the aim is to provide the reader with a quick starting point that demonstrates some commonly used tools that
-will enable models to run on Arm Ethos NPU and also optimize them to enable maximum performance from the Arm Ethos NPU.
+will enable models to run on the Arm® Ethos-U NPU and also optimize them for maximum performance from the NPU.
Links to more in-depth guides available on the TensorFlow website are provided in the [references](#references) section
in this Readme.
@@ -54,8 +54,8 @@ The produced TensorFlow Lite model files will be saved in a `conditioned_models`
## Quantization
-Most machine learning models are trained using 32bit floating point precision. However, Arm Ethos NPU performs
-calculations in 8bit integer precision. As a result, it is required that any model you wish to deploy on Arm Ethos NPU is
+Most machine learning models are trained using 32-bit floating point precision. However, the Arm® Ethos-U NPU performs
+calculations in 8-bit integer precision. As a result, any model you wish to deploy on the Arm® Ethos-U NPU must
first be fully quantized to 8 bits.
TensorFlow provides two methods of quantization and the scripts in this folder will demonstrate these:
@@ -94,7 +94,7 @@ Quantizing your model can result in accuracy drops depending on your model. Howe
drop when using post-training quantization is usually minimal. After post-training quantization is complete you will
have a fully quantized TensorFlow Lite model.
-If you are targetting an Arm Ethos-U55 NPU then the output TensorFlow Lite file will also need to be passed through the Vela
+If you are targeting an Arm® Ethos-U NPU then the output TensorFlow Lite file will also need to be passed through the Vela
compiler for further optimizations before it can be used.
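Both quantization routes end in the same 8-bit affine representation. A minimal pure-Python sketch of that mapping (illustrative only — not the project's code, and not the TensorFlow Lite converter):

```python
def quantize_params(rmin: float, rmax: float, qmin: int = -128, qmax: int = 127):
    """Scale and zero-point for asymmetric 8-bit (int8) quantization."""
    rmin, rmax = min(rmin, 0.0), max(rmax, 0.0)  # range must contain 0
    scale = (rmax - rmin) / (qmax - qmin)
    zero_point = int(round(qmin - rmin / scale))
    return scale, zero_point

def quantize(values, scale, zero_point, qmin=-128, qmax=127):
    """Map floats to clamped int8 values."""
    return [int(max(qmin, min(qmax, round(v / scale) + zero_point))) for v in values]

def dequantize(quantized, scale, zero_point):
    """Recover the (approximate) floats the int8 values represent."""
    return [(q - zero_point) * scale for q in quantized]

scale, zp = quantize_params(-1.0, 1.0)
q = quantize([-1.0, 0.0, 0.5, 1.0], scale, zp)
restored = dequantize(q, scale, zp)
# Each restored value differs from the original by at most one quantization step.
```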
### Quantization aware training
@@ -117,7 +117,7 @@ As well as simulating quantization and adjusting weights, the ranges for variabl
model can be fully quantized afterwards. Once you have finished quantization aware training the TensorFlow Lite converter is
used to produce a fully quantized TensorFlow Lite model.
-If you are targetting an Arm Ethos-U55 NPU then the output TensorFlow Lite file will also need to be passed through the Vela
+If you are targeting an Arm® Ethos-U NPU then the output TensorFlow Lite file will also need to be passed through the Vela
compiler for further optimizations before it can be used.
## Weight pruning
@@ -128,7 +128,7 @@ calculations so are safe to be removed or 'pruned' from the model. This is accom
values to 0, resulting in a sparse model.
Compression algorithms can then take advantage of this to reduce model size in memory, which can be very important when
-deploying on small embedded systems. Moreover, Arm Ethos NPU can take advantage of model sparsity to further accelerate
+deploying on small embedded systems. Moreover, Arm® Ethos-U NPU can take advantage of model sparsity to further accelerate
execution of a model.
Training with weight pruning will force your model to have a certain percentage of its weights set (or 'pruned') to 0
@@ -139,9 +139,9 @@ is desired.
Weight pruning can be further combined with quantization so you have a model that is both pruned and quantized, meaning
that the memory-saving effects of both can be combined. Quantization then allows the model to be used with
-Arm Ethos NPU.
+Arm® Ethos-U NPU.
-If you are targetting an Arm Ethos-U55 NPU then the output TensorFlow Lite file will also need to be passed through the Vela
+If you are targeting an Arm® Ethos-U NPU then the output TensorFlow Lite file will also need to be passed through the Vela
compiler for further optimizations before it can be used.
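The magnitude-based pruning described above can be sketched in plain Python (a toy illustration of the idea, not the TensorFlow Model Optimization API):

```python
def prune_by_magnitude(weights, sparsity: float):
    """Zero out the smallest-magnitude weights until `sparsity` fraction are 0."""
    n_prune = int(len(weights) * sparsity)
    # Indices of the weights with the smallest absolute values.
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    pruned = list(weights)
    for i in order[:n_prune]:
        pruned[i] = 0.0
    return pruned

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.02, 0.3, -0.08]
pruned = prune_by_magnitude(w, sparsity=0.5)
# Half of the weights (the smallest by magnitude) are now exactly 0,
# giving compression algorithms a sparse tensor to exploit.
```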
## Weight clustering
@@ -158,9 +158,9 @@ better adjusted to the reduced precision.
Weight clustering can be further combined with quantization so you have a model that is both clustered and quantized,
meaning that the memory-saving effects of both can be combined. Quantization then allows the model to be used with
-Arm Ethos NPU.
+Arm® Ethos-U NPU.
-If you are targetting an Arm Ethos-U55 NPU then the output TensorFlow Lite file will also need to be passed through the Vela
+If you are targeting an Arm® Ethos-U NPU then the output TensorFlow Lite file will also need to be passed through the Vela
compiler for further optimizations before it can be used (see [Optimize model with Vela compiler](./building.md#optimize-custom-model-with-vela-compiler)).
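Weight clustering as described can be sketched with a tiny 1-D k-means-style loop in plain Python (illustrative only; the real flow uses the TensorFlow Model Optimization toolkit):

```python
def cluster_weights(weights, n_clusters: int, iters: int = 10):
    """Cluster weights to n_clusters centroids (1-D k-means), then snap each
    weight to its nearest centroid so only n_clusters unique values remain."""
    lo, hi = min(weights), max(weights)
    # Linearly spaced initial centroids across the weight range.
    centroids = [lo + (hi - lo) * i / (n_clusters - 1) for i in range(n_clusters)]
    for _ in range(iters):
        # Assign each weight to its nearest centroid.
        groups = [[] for _ in centroids]
        for w in weights:
            j = min(range(n_clusters), key=lambda k: abs(w - centroids[k]))
            groups[j].append(w)
        # Move each centroid to the mean of its assigned weights.
        centroids = [sum(g) / len(g) if g else c for g, c in zip(groups, centroids)]
    return [min(centroids, key=lambda c: abs(w - c)) for w in weights]

w = [0.1, 0.12, 0.5, 0.52, -0.3, -0.28, 0.11]
clustered = cluster_weights(w, n_clusters=3)
# Only a handful of unique values remain, which compresses well.
```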
## References