author     Isabella Gottardi <isabella.gottardi@arm.com>  2021-05-12 08:27:15 +0100
committer  Isabella Gottardi <isabella.gottardi@arm.com>  2021-05-18 09:48:12 +0100
commit     56ee6207c1524ddc4c444c6e48e05eb34105985a (patch)
tree       d4fc7823961034e95364f44b34fb098b34b99d0d /docs
parent     f4e2c4736f19d2e06fede715bb49c475f93d79a9 (diff)
download   ml-embedded-evaluation-kit-56ee6207c1524ddc4c444c6e48e05eb34105985a.tar.gz
MLECO-1858: Documentation update

* Removing `_` in front of private functions and members

Signed-off-by: Isabella Gottardi <isabella.gottardi@arm.com>
Change-Id: I5a5d652f9647ebb16d2d2bd16ab980e73f7be3cf
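The rename this commit message describes follows a common C++ convention, visible in the customizing.md hunk as `_m_opResolver` → `m_opResolver`. A minimal self-contained sketch of the convention (the class and member names here are illustrative stand-ins, not the kit's actual code):

```cpp
#include <cassert>
#include <string>

namespace arm {
namespace app {

/* Illustrative only: private data members carry an "m_" prefix with no
 * leading underscore. Identifiers beginning with an underscore are
 * reserved for the implementation in several contexts, so the old
 * "_m_" spelling is avoided after this commit. */
class DemoModel {
public:
    void SetLabel(const std::string& label) { this->m_label = label; }
    const std::string& GetLabel() const { return this->m_label; }

private:
    std::string m_label{"hello_world"};   /* was: _m_label in the old style */
};

} /* namespace app */
} /* namespace arm */
```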
Diffstat (limited to 'docs')
-rw-r--r--  docs/documentation.md                    178
-rw-r--r--  docs/sections/building.md                123
-rw-r--r--  docs/sections/coding_guidelines.md         2
-rw-r--r--  docs/sections/customizing.md              72
-rw-r--r--  docs/sections/deployment.md               20
-rw-r--r--  docs/sections/memory_considerations.md    22
-rw-r--r--  docs/sections/run.md                      44
-rw-r--r--  docs/sections/testing_benchmarking.md      4
-rw-r--r--  docs/sections/troubleshooting.md           4
-rw-r--r--  docs/use_cases/ad.md                       8
-rw-r--r--  docs/use_cases/asr.md                      6
-rw-r--r--  docs/use_cases/img_class.md                9
-rw-r--r--  docs/use_cases/inference_runner.md        10
-rw-r--r--  docs/use_cases/kws.md                      9
-rw-r--r--  docs/use_cases/kws_asr.md                  7
15 files changed, 182 insertions, 336 deletions
diff --git a/docs/documentation.md b/docs/documentation.md
index 050ca60..8ab9fa3 100644
--- a/docs/documentation.md
+++ b/docs/documentation.md
@@ -1,28 +1,18 @@
# Arm® ML embedded evaluation kit
-## Table of Contents
-
-- [Arm® ML embedded evaluation kit](./documentation.md#arm-ml-embedded-evaluation-kit)
- - [Table of Contents](./documentation.md#table-of-content)
- - [Trademarks](./documentation.md#trademarks)
- - [Prerequisites](./documentation.md#prerequisites)
- - [Additional reading](./documentation.md#additional-reading)
- - [Repository structure](./documentation.md#repository-structure)
- - [Models and resources](./documentation.md#models-and-resources)
- - [Building](./documentation.md#building)
- - [Deployment](./documentation.md#deployment)
- - [Running code samples applications](./documentation.md#running-code-samples-applications)
- - [Implementing custom ML application](./documentation.md#implementing-custom-ml-application)
- - [Testing and benchmarking](./documentation.md#testing-and-benchmarking)
- - [Memory considerations](./documentation.md#memory-considerations)
- - [Troubleshooting](./documentation.md#troubleshooting)
- - [Contribution guidelines](./documentation.md#contribution-guidelines)
- - [Coding standards and guidelines](./documentation.md#coding-standards-and-guidelines)
- - [Code Reviews](./documentation.md#code-reviews)
- - [Testing](./documentation.md#testing)
- - [Communication](./documentation.md#communication)
- - [Licenses](./documentation.md#licenses)
- - [Appendix](./documentation.md#appendix)
+- [Arm® ML embedded evaluation kit](#arm_ml-embedded-evaluation-kit)
+ - [Trademarks](#trademarks)
+ - [Prerequisites](#prerequisites)
+ - [Additional reading](#additional-reading)
+ - [Repository structure](#repository-structure)
+ - [Models and resources](#models-and-resources)
+ - [Building](#building)
+ - [Deployment](#deployment)
+ - [Implementing custom ML application](#implementing-custom-ml-application)
+ - [Testing and benchmarking](#testing-and-benchmarking)
+ - [Memory considerations](#memory-considerations)
+ - [Troubleshooting](#troubleshooting)
+ - [Appendix](#appendix)
## Trademarks
@@ -222,16 +212,22 @@ The project can be built for MPS3 FPGA and FVP emulating MPS3. Default values fo
will build executable models with Ethos-U55 NPU support.
See:
-- [Building the Code Samples application from sources](./sections/building.md#building-the-ml-embedded-code-sample-applications-from-sources)
- - [Contents](./sections/building.md#contents)
+- [Building the ML embedded code sample applications from sources](./sections/building.md#building-the-ml-embedded-code-sample-applications-from-sources)
- [Build prerequisites](./sections/building.md#build-prerequisites)
- [Build options](./sections/building.md#build-options)
- [Build process](./sections/building.md#build-process)
- [Preparing build environment](./sections/building.md#preparing-build-environment)
+ - [Fetching submodules](./sections/building.md#fetching-submodules)
+ - [Fetching resource files](./sections/building.md#fetching-resource-files)
- [Create a build directory](./sections/building.md#create-a-build-directory)
- - [Configuring the build for `MPS3: SSE-300`](./sections/building.md#configuring-the-build-for-mps3-sse-300)
+ - [Configuring the build for MPS3 SSE-300](./sections/building.md#configuring-the-build-for-mps3-sse-300)
+ - [Using GNU Arm Embedded Toolchain](./sections/building.md#using-gnu-arm-embedded-toolchain)
+ - [Using Arm Compiler](./sections/building.md#using-arm-compiler)
+ - [Generating project for Arm Development Studio](./sections/building.md#generating-project-for-arm-development-studio)
+ - [Working with model debugger from Arm FastModel Tools](./sections/building.md#working-with-model-debugger-from-arm-fastmodel-tools)
+ - [Configuring with custom TPIP dependencies](./sections/building.md#configuring-with-custom-tpip-dependencies)
- [Configuring native unit-test build](./sections/building.md#configuring-native-unit-test-build)
- - [Configuring the build for `simple_platform`](./sections/building.md#configuring-the-build-for-simple_platform)
+ - [Configuring the build for simple_platform](./sections/building.md#configuring-the-build-for-simple_platform)
- [Building the configured project](./sections/building.md#building-the-configured-project)
- [Building timing adapter with custom options](./sections/building.md#building-timing-adapter-with-custom-options)
- [Add custom inputs](./sections/building.md#add-custom-inputs)
@@ -245,16 +241,11 @@ This section describes how to deploy the code sample applications on the Fixed V
See:
- [Deployment](./sections/deployment.md)
- - [Fixed Virtual Platform](./sections/deployment.md#fixed-Virtual-Platform)
- - [Setting up the MPS3 Corstone-300 FVP](./sections/deployment.md#Setting-up-the-MPS3-Corstone-300-FVP)
- - [Deploying on an FVP emulating MPS3](./sections/deployment.md#Deploying-on-an-FVP-emulating-MPS3)
- - [MPS3 board](./sections/deployment.md#MPS3-board)
- - [Deployment on MPS3 board](./sections/deployment.md#Deployment-on-MPS3-board)
-
-## Running code samples applications
-
-This section covers the process for getting started with pre-built binaries for the code samples.
-See [Running applications](./sections/run.md).
+ - [Fixed Virtual Platform](./sections/deployment.md#fixed-virtual-platform)
+ - [Setting up the MPS3 Corstone-300 FVP](./sections/deployment.md#setting-up-the-mps3-arm-corstone-300-fvp)
+ - [Deploying on an FVP emulating MPS3](./sections/deployment.md#deploying-on-an-fvp-emulating-mps3)
+ - [MPS3 board](./sections/deployment.md#mps3-board)
+ - [Deployment on MPS3 board](./sections/deployment.md#deployment-on-mps3-board)
## Implementing custom ML application
@@ -268,20 +259,20 @@ corresponding executable for each use-case.
See:
-- [Customizing](./sections/customizing.md)
- - [Software project description](./sections/customizing.md#Software-project-description)
+- [Implementing custom ML application](./sections/customizing.md)
+ - [Software project description](./sections/customizing.md#software-project-description)
- [HAL API](./sections/customizing.md#hal-api)
- [Main loop function](./sections/customizing.md#main-loop-function)
- [Application context](./sections/customizing.md#application-context)
- - [Profiler](./sections/customizing.md#Profiler)
- - [NN Model API](./sections/customizing.md#NN-model-API)
- - [Adding custom ML use-case](./sections/customizing.md#Adding-custom-ML-use-case)
- - [Implementing main loop](./sections/customizing.md#Implementing-main-loop)
- - [Implementing custom NN model](./sections/customizing.md#Implementing-custom-NN-model)
+ - [Profiler](./sections/customizing.md#profiler)
+ - [NN Model API](./sections/customizing.md#nn-model-api)
+ - [Adding custom ML use-case](./sections/customizing.md#adding-custom-ml-use-case)
+ - [Implementing main loop](./sections/customizing.md#implementing-main-loop)
+ - [Implementing custom NN model](./sections/customizing.md#implementing-custom-nn-model)
- [Executing inference](./sections/customizing.md#executing-inference)
- [Printing to console](./sections/customizing.md#printing-to-console)
- [Reading user input from console](./sections/customizing.md#reading-user-input-from-console)
- - [Output to MPS3 LCD](./sections/customizing.md#output-to-MPS3-LCD)
+ - [Output to MPS3 LCD](./sections/customizing.md#output-to-mps3-lcd)
- [Building custom use-case](./sections/customizing.md#building-custom-use-case)
## Testing and benchmarking
@@ -297,103 +288,12 @@ See [Memory considerations](./sections/memory_considerations.md)
See:
- [Troubleshooting](./sections/troubleshooting.md)
- - [Inference results are incorrect for my custom files](./sections/troubleshooting.md#Inference-results-are-incorrect-for-my-custom-files)
- - [The application does not work with my custom model](./sections/troubleshooting.md#The-application-does-not-work-with-my-custom-model)
-
-## Contribution guidelines
-
-Contributions are only accepted under the following conditions:
-
-- The contribution have certified origin and give us your permission. To manage this process we use
- [Developer Certificate of Origin (DCO) V1.1](https://developercertificate.org/).
- To indicate that contributors agree to the the terms of the DCO, it's neccessary "sign off" the
- contribution by adding a line with name and e-mail address to every git commit message:
-
- ```log
- Signed-off-by: John Doe <john.doe@example.org>
- ```
-
- This can be done automatically by adding the `-s` option to your `git commit` command.
- You must use your real name, no pseudonyms or anonymous contributions are accepted.
-
-- You give permission according to the [Apache License 2.0](../LICENSE_APACHE_2.0.txt).
-
- In each source file, include the following copyright notice:
-
- ```copyright
- /*
- * Copyright (c) <years additions were made to project> <your name>, Arm Limited. All rights reserved.
- * SPDX-License-Identifier: Apache-2.0
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
- ```
-
-### Coding standards and guidelines
-
-This repository follows a set of guidelines, best practices, programming styles and conventions,
-see:
-
-- [Coding standards and guidelines](./sections/coding_guidelines.md)
- - [Introduction](./sections/coding_guidelines.md#introduction)
- - [Language version](./sections/coding_guidelines.md#language-version)
- - [File naming](./sections/coding_guidelines.md#file-naming)
- - [File layout](./sections/coding_guidelines.md#file-layout)
- - [Block Management](./sections/coding_guidelines.md#block-management)
- - [Naming Conventions](./sections/coding_guidelines.md#naming-conventions)
- - [C++ language naming conventions](./sections/coding_guidelines.md#c_language-naming-conventions)
- - [C language naming conventions](./sections/coding_guidelines.md#c-language-naming-conventions)
- - [Layout and formatting conventions](./sections/coding_guidelines.md#layout-and-formatting-conventions)
- - [Language usage](./sections/coding_guidelines.md#language-usage)
-
-### Code Reviews
-
-Contributions must go through code review. Code reviews are performed through the
-[mlplatform.org Gerrit server](https://review.mlplatform.org). Contributors need to signup to this
-Gerrit server with their GitHub account credentials.
-In order to be merged a patch needs to:
-
-- get a "+1 Verified" from the pre-commit job.
-- get a "+2 Code-review" from a reviewer, it means the patch has the final approval.
-
-### Testing
-
-Prior to submitting a patch for review please make sure that all build variants works and unit tests pass.
-Contributions go through testing at the continuous integration system. All builds, tests and checks must pass before a
-contribution gets merged to the master branch.
-
-## Communication
-
-Please, if you want to start public discussion, raise any issues or questions related to this repository, use
-[https://discuss.mlplatform.org/c/ml-embedded-evaluation-kit](https://discuss.mlplatform.org/c/ml-embedded-evaluation-kit/)
-forum.
-
-## Licenses
-
-The ML Embedded applications samples are provided under the Apache 2.0 license, see [License Apache 2.0](../LICENSE_APACHE_2.0.txt).
-
-Application input data sample files are provided under their original license:
-
-| | Licence | Provenience |
-|---------------|---------|---------|
-| [Automatic Speech Recognition Samples](../resources/asr/samples/files.md) | [Creative Commons Attribution 4.0 International Public License](../resources/LICENSE_CC_4.0.txt) | <http://www.openslr.org/12/> |
-| [Image Classification Samples](../resources/img_class/samples/files.md) | [Creative Commons Attribution 1.0](../resources/LICENSE_CC_1.0.txt) | <https://www.pexels.com> |
-| [Keyword Spotting Samples](../resources/kws/samples/files.md) | [Creative Commons Attribution 4.0 International Public License](../resources/LICENSE_CC_4.0.txt) | <http://download.tensorflow.org/data/speech_commands_v0.02.tar.gz> |
-| [Keyword Spotting and Automatic Speech Recognition Samples](../resources/kws_asr/samples/files.md) | [Creative Commons Attribution 4.0 International Public License](../resources/LICENSE_CC_4.0.txt) | <http://download.tensorflow.org/data/speech_commands_v0.02.tar.gz> |
+ - [Inference results are incorrect for my custom files](./sections/troubleshooting.md#inference-results-are-incorrect-for-my-custom-files)
+ - [The application does not work with my custom model](./sections/troubleshooting.md#the-application-does-not-work-with-my-custom-model)
## Appendix
See:
- [Appendix](./sections/appendix.md)
- - [Cortex-M55 Memory map overview](./sections/appendix.md#cortex-m55-memory-map-overview)
+ - [Cortex-M55 Memory map overview](./sections/appendix.md#arm_cortex_m55-memory-map-overview-for-corstone_300-reference-design)
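A pattern runs through this file's link fixes: the old fragments used mixed case (e.g. `#Setting-up-the-MPS3-Corstone-300-FVP`), while the replacements use the lowercase slug a markdown renderer actually generates from the heading text. A rough approximation of that slug rule for checking links locally — not the exact algorithm of any particular renderer (some anchors in this commit, such as `#arm_ml-embedded-evaluation-kit`, encode the trademark symbol differently):

```cpp
#include <cassert>
#include <cctype>
#include <string>

/* Approximate GitHub-style anchor slug for a markdown heading:
 * lowercase, keep [A-Za-z0-9_-], turn whitespace runs into one hyphen,
 * drop other punctuation (":", "`", ...). */
std::string HeadingToAnchor(const std::string& heading)
{
    std::string slug;
    for (unsigned char c : heading) {
        if (std::isalnum(c) != 0 || c == '_' || c == '-') {
            slug.push_back(static_cast<char>(std::tolower(c)));
        } else if (std::isspace(c) != 0 && !slug.empty() && slug.back() != '-') {
            slug.push_back('-');
        }
    }
    return slug;
}
```

For example, `HeadingToAnchor("Configuring the build for MPS3: SSE-300")` yields `configuring-the-build-for-mps3-sse-300`, matching the corrected link in the building.md hunk below.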
diff --git a/docs/sections/building.md b/docs/sections/building.md
index 4b1514b..ff5b518 100644
--- a/docs/sections/building.md
+++ b/docs/sections/building.md
@@ -1,9 +1,6 @@
# Building the ML embedded code sample applications from sources
-## Contents
-
- [Building the ML embedded code sample applications from sources](#building-the-ml-embedded-code-sample-applications-from-sources)
- - [Contents](#contents)
- [Build prerequisites](#build-prerequisites)
- [Build options](#build-options)
- [Build process](#build-process)
@@ -11,7 +8,7 @@
- [Fetching submodules](#fetching-submodules)
- [Fetching resource files](#fetching-resource-files)
- [Create a build directory](#create-a-build-directory)
- - [Configuring the build for MPS3: SSE-300](#configuring-the-build-for-mps3-sse-300)
+ - [Configuring the build for MPS3 SSE-300](#configuring-the-build-for-mps3-sse-300)
- [Using GNU Arm Embedded Toolchain](#using-gnu-arm-embedded-toolchain)
- [Using Arm Compiler](#using-arm-compiler)
- [Generating project for Arm Development Studio](#generating-project-for-arm-development-studio)
@@ -34,9 +31,8 @@ Before proceeding, please, make sure that the following prerequisites
are fulfilled:
- GNU Arm embedded toolchain 10.2.1 (or higher) or the Arm Compiler version 6.14 (or higher)
- is installed and available on the path.
-
- Test the compiler by running:
+ is installed and available on the path.
+ Test the compiler by running:
```commandline
armclang -v
@@ -47,11 +43,12 @@ are fulfilled:
Component: ARM Compiler 6.14
```
- Alternatively,
+ Alternatively,
```commandline
arm-none-eabi-gcc --version
```
+
```log
arm-none-eabi-gcc (GNU Arm Embedded Toolchain 10-2020-q4-major) 10.2.1 20201103 (release)
Copyright (C) 2020 Free Software Foundation, Inc.
@@ -59,11 +56,11 @@ are fulfilled:
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
```
- > **Note:** Add compiler to the path, if needed:
- >
- > `export PATH=/path/to/armclang/bin:$PATH`
- > OR
- > `export PATH=/path/to/gcc-arm-none-eabi-toolchain/bin:$PATH`
+> **Note:** Add compiler to the path, if needed:
+>
+> `export PATH=/path/to/armclang/bin:$PATH`
+> OR
+> `export PATH=/path/to/gcc-arm-none-eabi-toolchain/bin:$PATH`
- Compiler license, if using the proprietary Arm Compiler, is configured correctly.
@@ -78,9 +75,9 @@ are fulfilled:
cmake version 3.16.2
```
- > **Note:** Add cmake to the path, if needed:
- >
- > `export PATH=/path/to/cmake/bin:$PATH`
+> **Note:** Add cmake to the path, if needed:
+>
+> `export PATH=/path/to/cmake/bin:$PATH`
- Python 3.6 or above is installed. Test python version by running:
@@ -112,7 +109,7 @@ are fulfilled:
...
```
- > **Note:** Add it to the path environment variable, if needed.
+> **Note:** Add it to the path environment variable, if needed.
- Access to the Internet to download the third party dependencies, specifically: TensorFlow Lite Micro, Arm® Ethos™-U55 NPU
driver and CMSIS. Instructions for downloading these are listed under [preparing build environment](#preparing-build-environment).
@@ -220,8 +217,8 @@ The build parameters are:
> **Note:** For details on the specific use case build options, follow the
> instructions in the use-case specific documentation.
-> Also, when setting any of the CMake configuration parameters that expect a directory/file path , it is advised
->to **use absolute paths instead of relative paths**.
+> Also, when setting any of the CMake configuration parameters that expect a directory/file path, it is advised
+> to **use absolute paths instead of relative paths**.
## Build process
@@ -274,9 +271,8 @@ dependencies
```
> **NOTE**: The default source paths for the TPIP sources assume the above directory structure, but all of the relevant
->paths can be overridden by CMake configuration arguments `TENSORFLOW_SRC_PATH`, `ETHOS_U55_DRIVER_SRC_PATH`,
->and `CMSIS_SRC_PATH`.
-
+> paths can be overridden by CMake configuration arguments `TENSORFLOW_SRC_PATH`, `ETHOS_U55_DRIVER_SRC_PATH`,
+> and `CMSIS_SRC_PATH`.
#### Fetching resource files
@@ -300,7 +296,7 @@ Create a build directory in the root of the project and navigate inside:
mkdir build && cd build
```
-### Configuring the build for MPS3: SSE-300
+### Configuring the build for MPS3 SSE-300
#### Using GNU Arm Embedded Toolchain
@@ -308,7 +304,6 @@ On Linux, if using `Arm GNU embedded toolchain`, execute the following command
to build the application to run on the Arm® Ethos™-U55 NPU when providing only
the mandatory arguments for CMake configuration:
-
```commandline
cmake ../
```
@@ -317,7 +312,6 @@ The above command will build for the default target platform `mps3`, the default
`sse-300`, and using the default toolchain file for the target as `bare-metal-gcc.` This is
equivalent to:
-
```commandline
cmake .. \
-DCMAKE_TOOLCHAIN_FILE=scripts/cmake/toolchains/bare-metal-gcc.cmake
@@ -722,7 +716,6 @@ Vela Compiler documentation for more details.
> **Note:** By default, use of the Ethos-U55 NPU is enabled in the CMake configuration.
This could be changed by passing `-DETHOS_U55_ENABLED`.
-
## Automatic file generation
As mentioned in the previous sections, some files such as neural network
@@ -763,55 +756,55 @@ For example, the generated utility functions for image classification are:
- `build/generated/include/InputFiles.hpp`
-```c++
-#ifndef GENERATED_IMAGES_H
-#define GENERATED_IMAGES_H
+ ```C++
+ #ifndef GENERATED_IMAGES_H
+ #define GENERATED_IMAGES_H
-#include <cstdint>
+ #include <cstdint>
-#define NUMBER_OF_FILES (2U)
-#define IMAGE_DATA_SIZE (150528U)
+ #define NUMBER_OF_FILES (2U)
+ #define IMAGE_DATA_SIZE (150528U)
-extern const uint8_t im0[IMAGE_DATA_SIZE];
-extern const uint8_t im1[IMAGE_DATA_SIZE];
+ extern const uint8_t im0[IMAGE_DATA_SIZE];
+ extern const uint8_t im1[IMAGE_DATA_SIZE];
-const char* get_filename(const uint32_t idx);
-const uint8_t* get_img_array(const uint32_t idx);
+ const char* get_filename(const uint32_t idx);
+ const uint8_t* get_img_array(const uint32_t idx);
-#endif /* GENERATED_IMAGES_H */
-```
+ #endif /* GENERATED_IMAGES_H */
+ ```
- `build/generated/src/InputFiles.cc`
-```c++
-#include "InputFiles.hpp"
-
-static const char *img_filenames[] = {
- "img1.bmp",
- "img2.bmp",
-};
-
-static const uint8_t *img_arrays[] = {
- im0,
- im1
-};
-
-const char* get_filename(const uint32_t idx)
-{
- if (idx < NUMBER_OF_FILES) {
- return img_filenames[idx];
+ ```C++
+ #include "InputFiles.hpp"
+
+ static const char *img_filenames[] = {
+ "img1.bmp",
+ "img2.bmp",
+ };
+
+ static const uint8_t *img_arrays[] = {
+ im0,
+ im1
+ };
+
+ const char* get_filename(const uint32_t idx)
+ {
+ if (idx < NUMBER_OF_FILES) {
+ return img_filenames[idx];
+ }
+ return nullptr;
}
- return nullptr;
-}
-const uint8_t* get_img_array(const uint32_t idx)
-{
- if (idx < NUMBER_OF_FILES) {
- return img_arrays[idx];
+ const uint8_t* get_img_array(const uint32_t idx)
+ {
+ if (idx < NUMBER_OF_FILES) {
+ return img_arrays[idx];
+ }
+ return nullptr;
}
- return nullptr;
-}
-```
+ ```
These headers are generated using python templates, that are in `scripts/py/templates/*.template`.
@@ -940,4 +933,4 @@ build/generated/
└── <uc2_model_name>.tflite.cc
```
-Next section of the documentation: [Deployment](../documentation.md#Deployment).
+Next section of the documentation: [Deployment](deployment.md).
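The generated `InputFiles` accessors re-indented in the hunk above share one bounds-checked lookup pattern. A self-contained sketch of that pattern, with tiny stand-in arrays in place of the real `IMAGE_DATA_SIZE`-byte image data:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

#define NUMBER_OF_FILES (2U)

/* Stand-ins for the generated image arrays (the real ones are generated
 * from .bmp files and hold 150528 bytes each). */
static const uint8_t im0[] = {0x00};
static const uint8_t im1[] = {0x01};

static const char* img_filenames[] = {"img1.bmp", "img2.bmp"};
static const uint8_t* img_arrays[] = {im0, im1};

/* Same shape as the generated InputFiles.cc: a valid index returns the
 * entry, anything out of range returns nullptr. */
const char* get_filename(const uint32_t idx)
{
    return (idx < NUMBER_OF_FILES) ? img_filenames[idx] : nullptr;
}

const uint8_t* get_img_array(const uint32_t idx)
{
    return (idx < NUMBER_OF_FILES) ? img_arrays[idx] : nullptr;
}
```

Callers are expected to check for `nullptr` before use, which is why the generated code guards the index rather than indexing directly.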
diff --git a/docs/sections/coding_guidelines.md b/docs/sections/coding_guidelines.md
index 752fe54..664b548 100644
--- a/docs/sections/coding_guidelines.md
+++ b/docs/sections/coding_guidelines.md
@@ -1,7 +1,5 @@
# Coding standards and guidelines
-## Contents
-
- [Introduction](#introduction)
- [Language version](#language-version)
- [File naming](#file-naming)
diff --git a/docs/sections/customizing.md b/docs/sections/customizing.md
index adf7749..056bc55 100644
--- a/docs/sections/customizing.md
+++ b/docs/sections/customizing.md
@@ -1,9 +1,6 @@
# Implementing custom ML application
-## Contents
-
- [Implementing custom ML application](#implementing-custom-ml-application)
- - [Contents](#contents)
- [Software project description](#software-project-description)
- [HAL API](#hal-api)
- [Main loop function](#main-loop-function)
@@ -69,14 +66,14 @@ sources are in the `use-case` subfolder.
> headers in an `include` directory, C/C++ sources in a `src` directory.
> For example:
>
->```tree
->use_case
-> └──img_class
-> ├── include
-> │ └── *.hpp
-> └── src
-> └── *.cc
->```
+> ```tree
+> use_case
+> └──img_class
+> ├── include
+> │ └── *.hpp
+> └── src
+> └── *.cc
+> ```
## HAL API
@@ -165,7 +162,7 @@ To access them, include `hal.h` header.
Example of the API initialization in the main function:
-```c++
+```C++
#include "hal.h"
int main ()
@@ -203,7 +200,7 @@ The main loop function has external linkage and main executable for the
use-case will have reference to the function defined in the use-case
code.
-```c++
+```C++
void main_loop(hal_platform& platform){
...
@@ -224,7 +221,7 @@ loop iterations. Include AppContext.hpp to use ApplicationContext class.
For example:
-```c++
+```C++
#include "hal.h"
#include "AppContext.hpp"
@@ -260,7 +257,7 @@ system timing information.
Usage example:
-```c++
+```C++
Profiler profiler{&platform, "Inference"};
profiler.StartProfiling();
@@ -306,13 +303,13 @@ To use this abstraction, import TensorFlowLiteMicro.hpp header.
> **Convention:** Each ML use-case must have extension of this class and implementation of the protected virtual methods:
>
->```c++
+> ```C++
> virtual const uint8_t* ModelPointer() = 0;
> virtual size_t ModelSize() = 0;
> virtual const tflite::MicroOpResolver& GetOpResolver() = 0;
> virtual bool EnlistOperations() = 0;
> virtual size_t GetActivationBufferSize() = 0;
->```
+> ```
>
> Network models have different set of operators that must be registered with
> tflite::MicroMutableOpResolver object in the EnlistOperations method.
@@ -361,7 +358,7 @@ Create a `MainLoop.cc` file in the `src` directory (the one created under
important. Define `main_loop` function with the signature described in
[Main loop function](#main-loop-function):
-```c++
+```C++
#include "hal.h"
void main_loop(hal_platform& platform) {
@@ -370,7 +367,7 @@ void main_loop(hal_platform& platform) {
```
The above is already a working use-case, if you compile and run it (see
-[Building custom usecase](#Building-custom-use-case)) the application will start, print
+[Building custom usecase](#building-custom-use-case)) the application will start, print
message to console and exit straight away.
Now, you can start filling this function with logic.
@@ -389,7 +386,7 @@ declare required methods.
For example:
-```c++
+```C++
#ifndef HELLOWORLDMODEL_HPP
#define HELLOWORLDMODEL_HPP
@@ -415,7 +412,7 @@ class HelloWorldModel: public Model {
static constexpr int ms_maxOpCnt = 5;
/* A mutable op resolver instance. */
- tflite::MicroMutableOpResolver<ms_maxOpCnt> _m_opResolver;
+ tflite::MicroMutableOpResolver<ms_maxOpCnt> m_opResolver;
};
} /* namespace app */
} /* namespace arm */
@@ -437,13 +434,13 @@ The following example shows how to add the custom Ethos-U55 operator with
TensorFlow Lite Micro framework. We will use the ARM_NPU define to exclude
the code if the application was built without NPU support.
-```c++
+```C++
#include "HelloWorldModel.hpp"
bool arm::app::HelloWorldModel::EnlistOperations() {
#if defined(ARM_NPU)
- if (kTfLiteOk == this->_m_opResolver.AddEthosU()) {
+ if (kTfLiteOk == this->m_opResolver.AddEthosU()) {
info("Added %s support to op resolver\n",
tflite::GetString_ETHOSU());
} else {
@@ -465,7 +462,7 @@ This generation the C++ array from the .tflite file, logic needs to be defined i
the `usecase.cmake` file for this `HelloWorld` example.
For more details on `usecase.cmake`, see [Building custom use case](#building-custom-use-case).
-For details on code generation flow in general, see [Automatic file generation](./building.md#Automatic-file-generation)
+For details on code generation flow in general, see [Automatic file generation](./building.md#automatic-file-generation)
The TensorFlow Lite model data is read during Model::Init() method execution, see
`application/tensorflow-lite-micro/Model.cc` for more details. Model invokes
@@ -476,7 +473,7 @@ file `build/generated/hello_world/src/<model_file_name>.cc`. Generated
file is added to the compilation automatically.
Use `${use-case}_MODEL_TFLITE_PATH` build parameter to include custom
-model to the generation/compilation process (see [Build options](./building.md/#build-options)).
+model to the generation/compilation process (see [Build options](./building.md#build-options)).
## Executing inference
@@ -506,7 +503,7 @@ to generate C++ sources from the provided images with
The following code adds inference invocation to the main loop function:
-```c++
+```C++
#include "hal.h"
#include "HelloWorldModel.hpp"
@@ -541,7 +538,7 @@ The code snippet has several important blocks:
- Creating HelloWorldModel object and initializing it.
- ```c++
+ ```C++
arm::app::HelloWorldModel model;
/* Load the model */
@@ -553,7 +550,7 @@ The code snippet has several important blocks:
- Getting pointers to allocated input and output tensors.
- ```c++
+ ```C++
TfLiteTensor *outputTensor = model.GetOutputTensor();
TfLiteTensor *inputTensor = model.GetInputTensor();
```
@@ -561,20 +558,20 @@ The code snippet has several important blocks:
- Copying input data to the input tensor. We assume input tensor size
to be 1000 uint8 elements.
- ```c++
+ ```C++
memcpy(inputTensor->data.data, inputData, 1000);
```
- Running inference
- ```c++
+ ```C++
model.RunInference();
```
- Reading inference results: data and data size from the output
tensor. We assume that output layer has uint8 data type.
- ```c++
+ ```C++
Const uint32_t tensorSz = outputTensor->bytes ;
const uint8_t *outputData = tflite::GetTensorData<uint8>(outputTensor);
@@ -584,7 +581,7 @@ Adding profiling for Ethos-U55 is easy. Include `Profiler.hpp` header and
invoke `StartProfiling` and `StopProfiling` around inference
execution.
-```c++
+```C++
Profiler profiler{&platform, "Inference"};
profiler.StartProfiling();
@@ -617,7 +614,7 @@ Default output level is info = level 2.
Platform data acquisition module has get_input function to read keyboard
input from the UART. It can be used as follows:
-```c++
+```C++
char ch_input[128];
platform.data_acq->get_input(ch_input, sizeof(ch_input));
```
@@ -647,7 +644,7 @@ screen it will go outside the screen boundary.
Example that prints "Hello world" on the LCD:
-```c++
+```C++
std::string hello("Hello world");
platform.data_psn->present_data_text(hello.c_str(), hello.size(), 10, 35, 0);
```
@@ -665,7 +662,7 @@ Image presentation function has the following signature:
For example, the following code snippet visualizes an input tensor data
for MobileNet v2 224 (down sampling it twice):
-```c++
+```C++
platform.data_psn->present_data_image((uint8_t *) inputTensor->data.data, 224, 224, 3, 10, 35, 2);
```
@@ -717,7 +714,6 @@ USER_OPTION(${use_case}_MODEL_TFLITE_PATH "Neural network model in tflite format
FILEPATH
)
-# Generate model file
generate_tflite_code(
MODEL_PATH ${${use_case}_MODEL_TFLITE_PATH}
DESTINATION ${SRC_GEN_DIR}
@@ -729,7 +725,7 @@ up by the build system. More information on auto-generations is available under
[Automatic file generation](./building.md#Automatic-file-generation).
To build you application follow the general instructions from
-[Add Custom inputs](#add-custom-inputs) and specify the name of the use-case in the
+[Add Custom inputs](./building.md#add-custom-inputs) and specify the name of the use-case in the
build command:
```commandline
@@ -744,4 +740,4 @@ As a result, `ethos-u-hello_world.axf` should be created, MPS3 build
will also produce `sectors/hello_world` directory with binaries and
`images-hello_world.txt` to be copied to the board MicroSD card.
-Next section of the documentation: [Testing and benchmarking](../documentation.md#Testing-and-benchmarking).
+Next section of the documentation: [Testing and benchmarking](testing_benchmarking.md).
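One step in the inference walkthrough above deserves a defensive note: the snippet memcpy()s a fixed 1000 bytes into the input tensor, which is only safe if the model's input tensor really is that size. A bounds-checked variant of that step, using a stand-in buffer type since the real `TfLiteTensor` comes from TensorFlow Lite Micro and is not reproduced here:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

/* Minimal stand-in for a TfLiteTensor-style buffer. */
struct Tensor {
    uint8_t data[16] = {};
    uint32_t bytes = 16;
};

/* Copy input data into the tensor only if it fits, instead of assuming
 * a fixed element count as the snippet above does. */
bool CopyToInputTensor(Tensor& tensor, const uint8_t* src, uint32_t srcBytes)
{
    if (src == nullptr || srcBytes > tensor.bytes) {
        return false;   /* refuse to overflow the tensor buffer */
    }
    std::memcpy(tensor.data, src, srcBytes);
    return true;
}
```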
diff --git a/docs/sections/deployment.md b/docs/sections/deployment.md
index 10acbcf..b852887 100644
--- a/docs/sections/deployment.md
+++ b/docs/sections/deployment.md
@@ -1,9 +1,6 @@
# Deployment
-## Contents
-
- [Deployment](#deployment)
- - [Contents](#contents)
- [Fixed Virtual Platform](#fixed-virtual-platform)
- [Setting up the MPS3 Arm Corstone-300 FVP](#setting-up-the-mps3-arm-corstone-300-fvp)
- [Deploying on an FVP emulating MPS3](#deploying-on-an-fvp-emulating-mps3)
@@ -27,11 +24,6 @@ Download the correct archive from the list under `Arm Corstone-300`. We need the
- Emulates MPS3 board (not for MPS2 FPGA board)
- Contains support for Arm® Ethos™-U55
-For FVP, the elf or the axf file can be run using the Fast Model
-executable as outlined under the [Starting Fast Model simulation](./setup.md/#starting-fast-model-simulation)
-except for the binary being pointed at here
-is the one just built using the steps in the previous section.
-
### Setting up the MPS3 Arm Corstone-300 FVP
For Ethos-U55 sample application, please download the MPS3 version of the
@@ -48,12 +40,12 @@ currently only supported on Linux based machines. To install the FVP:
### Deploying on an FVP emulating MPS3
-This section assumes that the FVP has been installed (see [Setting up the MPS3 Arm Corstone-300 FVP](#Setting-up-the-MPS3-Arm-Corstone-300-FVP)) to the user's home directory `~/FVP_Corstone_SSE-300_Ethos-U55`.
+This section assumes that the FVP has been installed (see [Setting up the MPS3 Arm Corstone-300 FVP](#setting-up-the-mps3-arm-corstone-300-fvp)) to the user's home directory `~/FVP_Corstone_SSE-300_Ethos-U55`.
The installation, typically, will have the executable under `~/FVP_Corstone_SSE-300_Ethos-U55/model/<OS>_<compiler-version>/`
directory. For the example below, we assume it to be `~/FVP_Corstone_SSE-300_Ethos-U55/models/Linux64_GCC-6.4`.
-To run a use case on the FVP, from the [Build directory](../sections/building.md#Create-a-build-directory):
+To run a use case on the FVP, from the [Build directory](../sections/building.md#create-a-build-directory):
```commandline
~/FVP_Corstone_SSE-300_Ethos-U55/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55 -a ./bin/ethos-u-<use_case>.axf
@@ -71,6 +63,8 @@ This will also launch a telnet window with the sample application's standard out
information about the pre-built application version, TensorFlow Lite Micro library version used, data type as well as
the input and output tensor sizes of the model compiled into the executable binary.
+> **Note:** For details on the specific use-case, follow the instructions in the corresponding documentation.
+
After the application has started it outputs a menu and waits for the user input from telnet terminal.
For example, the image classification use case can be started by:
@@ -79,6 +73,10 @@ For example, the image classification use case can be started by:
~/FVP_Corstone_SSE-300_Ethos-U55/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55 -a ./bin/ethos-u-img_class.axf
```
+![FVP](../media/fvp.png)
+
+![FVP Terminal](../media/fvpterminal.png)
+
The FVP supports many command line parameters:
- passed by using `-C <param>=<value>`. The most important ones are:
@@ -278,4 +276,4 @@ off.
...
```
-Next section of the main documentation, [Running code samples applications](../documentation.md#Running-code-samples-applications).
+Next section of the documentation: [Implementing custom ML application](customizing.md).
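The `-C <param>=<value>` option shape described above composes mechanically, so launch commands can be assembled programmatically. A minimal Python sketch; the parameter name `ethosu.num_macs` and the install path are illustrative assumptions — check your FVP's `--list-params` output for the authoritative list:

```python
from pathlib import Path

def fvp_command(axf, fvp_dir, overrides=None):
    """Build an FVP invocation as a list of argv tokens.

    Each override becomes a '-C key=value' pair, mirroring the FVP's syntax.
    """
    cmd = [str(Path(fvp_dir).expanduser() / "FVP_Corstone_SSE-300_Ethos-U55")]
    for key, value in (overrides or {}).items():
        cmd += ["-C", f"{key}={value}"]
    cmd += ["-a", axf]
    return cmd

# 'ethosu.num_macs' is an illustrative parameter name; verify with --list-params.
cmd = fvp_command(
    "./bin/ethos-u-img_class.axf",
    "~/FVP_Corstone_SSE-300_Ethos-U55/models/Linux64_GCC-6.4",
    {"ethosu.num_macs": 256},
)
print(" ".join(cmd))
```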
diff --git a/docs/sections/memory_considerations.md b/docs/sections/memory_considerations.md
index 7db0eba..48651f1 100644
--- a/docs/sections/memory_considerations.md
+++ b/docs/sections/memory_considerations.md
@@ -1,9 +1,8 @@
# Memory considerations
-## Table of Contents
+## Contents
- [Memory considerations](#memory-considerations)
- - [Table of Contents](#table-of-contents)
- [Introduction](#introduction)
- [Understanding memory usage from Vela output](#understanding-memory-usage-from-vela-output)
- [Total SRAM used](#total-sram-used)
@@ -43,7 +42,7 @@ When the neural network model is compiled with Vela, a summary report that inclu
usage is generated. For example, compiling the keyword spotting model [ds_cnn_clustered_int8](https://github.com/ARM-software/ML-zoo/blob/master/models/keyword_spotting/ds_cnn_large/tflite_clustered_int8/ds_cnn_clustered_int8.tflite)
with Vela produces, among others, the following output:
-```
+```log
Total SRAM used 70.77 KiB
Total Off-chip Flash used 430.78 KiB
```
@@ -74,9 +73,10 @@ the `tensor arena`. Vela supports optimizing the model for this configuration wi
memory mode. See [vela.ini](../../scripts/vela/vela.ini). To make use of a neural network model
optimised for this configuration, the linker script for the target platform would need to be
changed. By default, the linker scripts are set up to support the default configuration only. See
-[Memory constraints](#Memory-constraints) for snippet of a script.
+[Memory constraints](#memory-constraints) for a snippet of such a script.
> Note
+>
> 1. The default configuration is represented by `Shared_Sram` memory mode.
> 2. `Dedicated_Sram` mode is only applicable for Arm® Ethos™-U65.
@@ -104,11 +104,11 @@ in the linker script.
The following numbers have been obtained from Vela for `Shared_Sram` memory mode and the SRAM and
flash memory requirements for the different use cases of the evaluation kit. Note that the SRAM usage
does not include memory used by TensorFlow Lite Micro and this will need to be topped up as explained
-under [Total SRAM used](#Total-SRAM-used).
+under [Total SRAM used](#total-sram-used).
- [Keyword spotting model](https://github.com/ARM-software/ML-zoo/tree/master/models/keyword_spotting/ds_cnn_large/tflite_clustered_int8) requires
- - 70.7 KiB of SRAM
- - 430.7 KiB of flash memory.
+ - 70.7 KiB of SRAM
+ - 430.7 KiB of flash memory.
- [Image classification model](https://github.com/ARM-software/ML-zoo/tree/master/models/image_classification/mobilenet_v2_1.0_224/tflite_uint8) requires
- 638.6 KiB of SRAM
@@ -122,13 +122,13 @@ under [Total SRAM used](#Total-SRAM-used).
Both the MPS3 Fixed Virtual Platform and the MPS3 FPGA platform share the linker script for Arm® Corstone™-300
design. The design is set by the CMake configuration parameter `TARGET_SUBSYSTEM` as described in
-[build options](./building.md#Build-options).
+[build options](./building.md#build-options).
The memory map exposed by this design is presented in [Appendix 1](./appendix.md). This can be used as a reference
when editing the linker script, especially to make sure that region boundaries are respected. The snippet from the
scatter file is presented below:
-```
+```log
;---------------------------------------------------------
; First load region (ITCM)
;---------------------------------------------------------
@@ -235,4 +235,6 @@ by the Arm® Ethos™-U55 NPU block frequently. A bigger region of memory for st
network model is placed in the DDR/flash region under LOAD_REGION_1. The two load regions are necessary
as the MPS3's motherboard configuration controller limits the load size at address 0x00000000 to 1MiB.
This has implications on how the application **is deployed** on MPS3 as explained under the section
-[Deployment on MPS3](./deployment.md#MPS3-board).
+[Deployment on MPS3](./deployment.md#mps3-board).
+
+Next section of the documentation: [Troubleshooting](troubleshooting.md).
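The Vela summary lines shown in this section are easy to consume programmatically when tracking memory budgets across builds. A minimal Python sketch, assuming only the two `Total ... used` lines quoted from the log excerpt above:

```python
import re

def parse_vela_summary(log_text):
    """Extract memory figures (in KiB) from a Vela summary report."""
    pattern = re.compile(r"Total (SRAM|Off-chip Flash) used\s+([\d.]+)\s+KiB")
    return {kind: float(value) for kind, value in pattern.findall(log_text)}

summary = """
Total SRAM used                 70.77 KiB
Total Off-chip Flash used      430.78 KiB
"""
usage = parse_vela_summary(summary)
print(usage)  # {'SRAM': 70.77, 'Off-chip Flash': 430.78}
```

Remember that the SRAM figure excludes the TensorFlow Lite Micro overhead, which must be topped up as described under [Total SRAM used](#total-sram-used).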
diff --git a/docs/sections/run.md b/docs/sections/run.md
deleted file mode 100644
index 900101d..0000000
--- a/docs/sections/run.md
+++ /dev/null
@@ -1,44 +0,0 @@
-
-# Running Ethos-U55 Code Samples
-
-## Contents
-
-- [Starting Fast Model simulation](#starting-fast-model-simulation)
-
-This section covers the process for getting started with pre-built binaries for the Code Samples.
-
-## Starting Fast Model simulation
-
-Once built application binaries and assuming the install location of the FVP
-was set to ~/FVP_install_location, the simulation can be started by:
-
-```commandline
-FVP_install_location/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55
-./bin/mps3-sse-300/ethos-u-<use_case>.axf
-```
-
-This will start the Fast Model simulation for the chosen use-case.
-
-A log output should appear on the terminal:
-
-```log
-telnetterminal0: Listening for serial connection on port 5000
-telnetterminal1: Listening for serial connection on port 5001
-telnetterminal2: Listening for serial connection on port 5002
-telnetterminal5: Listening for serial connection on port 5003
-```
-
-This will also launch a telnet window with the sample application's
-standard output and error log entries containing information about the
-pre-built application version, TensorFlow Lite Micro library version
-used, data type as well as the input and output tensor sizes of the
-model compiled into the executable binary.
-
-![FVP](../media/fvp.png)
-
-![FVP Terminal](../media/fvpterminal.png)
-
-> **Note:**
-For details on the specific use-case follow the instructions in the corresponding documentation.
-
-Next section of the documentation: [Implementing custom ML application](../documentation.md#Implementing-custom-ML-application).
diff --git a/docs/sections/testing_benchmarking.md b/docs/sections/testing_benchmarking.md
index e2ed434..7932dde 100644
--- a/docs/sections/testing_benchmarking.md
+++ b/docs/sections/testing_benchmarking.md
@@ -1,7 +1,5 @@
# Testing and benchmarking
-## Contents
-
- [Testing](#testing)
- [Benchmarking](#benchmarking)
@@ -86,4 +84,4 @@ For example:
Time in ms: 210
```
-Next section of the main documentation: [Troubleshooting](../documentation.md#Troubleshooting).
+Next section of the documentation: [Memory Considerations](memory_considerations.md).
diff --git a/docs/sections/troubleshooting.md b/docs/sections/troubleshooting.md
index 5e52a4e..a4f60fb 100644
--- a/docs/sections/troubleshooting.md
+++ b/docs/sections/troubleshooting.md
@@ -1,7 +1,5 @@
# Troubleshooting
-## Contents
-
- [Inference results are incorrect for my custom files](#inference-results-are-incorrect-for-my-custom-files)
- [The application does not work with my custom model](#the-application-does-not-work-with-my-custom-model)
@@ -26,4 +24,4 @@ Check that cmake parameters match your new models input requirements.
It is a python tool available from <https://pypi.org/project/ethos-u-vela/>.
The source code is hosted on <https://git.mlplatform.org/ml/ethos-u/ethos-u-vela.git/>.
-Next section of the documentation: [Contribution guidelines](../documentation.md#Contribution-guidelines).
+Next section of the documentation: [Appendix](appendix.md).
diff --git a/docs/use_cases/ad.md b/docs/use_cases/ad.md
index 5f210b1..661cf49 100644
--- a/docs/use_cases/ad.md
+++ b/docs/use_cases/ad.md
@@ -110,6 +110,7 @@ On Linux, execute the following command to build **only** Anomaly Detection appl
```commandline
cmake ../ -DUSE_CASE_BUILD=ad
```
+
To configure a build that can be debugged using Arm-DS, we can just specify
the build type as `Debug` and use the `Arm Compiler` toolchain file:
@@ -121,10 +122,11 @@ cmake .. \
```
Also see:
-- [Configuring with custom TPIP dependencies](../sections/building.md#Configuring-with-custom-TPIP-dependencies)
+
+- [Configuring with custom TPIP dependencies](../sections/building.md#configuring-with-custom-tpip-dependencies)
- [Using Arm Compiler](../sections/building.md#using-arm-compiler)
-- [Configuring the build for simple_platform](../sections/building.md#Configuring-the-build-for-simple_platform)
-- [Working with model debugger from Arm FastModel Tools](../sections/building.md#Working-with-model-debugger-from-Arm-FastModel-Tools)
+- [Configuring the build for simple_platform](../sections/building.md#configuring-the-build-for-simple_platform)
+- [Working with model debugger from Arm FastModel Tools](../sections/building.md#working-with-model-debugger-from-arm-fastmodel-tools)
> **Note:** If re-building with changed parameter values, it is highly advised to clean the build directory and re-run
> the CMake command.
diff --git a/docs/use_cases/asr.md b/docs/use_cases/asr.md
index ec10fdb..a8142aa 100644
--- a/docs/use_cases/asr.md
+++ b/docs/use_cases/asr.md
@@ -162,10 +162,10 @@ cmake .. \
```
Also see:
-- [Configuring with custom TPIP dependencies](../sections/building.md#Configuring-with-custom-TPIP-dependencies)
+- [Configuring with custom TPIP dependencies](../sections/building.md#configuring-with-custom-tpip-dependencies)
- [Using Arm Compiler](../sections/building.md#using-arm-compiler)
-- [Configuring the build for simple_platform](../sections/building.md#Configuring-the-build-for-simple_platform)
-- [Working with model debugger from Arm FastModel Tools](../sections/building.md#Working-with-model-debugger-from-Arm-FastModel-Tools)
+- [Configuring the build for simple_platform](../sections/building.md#configuring-the-build-for-simple_platform)
+- [Working with model debugger from Arm FastModel Tools](../sections/building.md#working-with-model-debugger-from-arm-fastmodel-tools)
> **Note:** If re-building with changed parameter values, it is highly advised to clean the build directory and re-run
> the CMake command.
diff --git a/docs/use_cases/img_class.md b/docs/use_cases/img_class.md
index 68a5285..75f0bd6 100644
--- a/docs/use_cases/img_class.md
+++ b/docs/use_cases/img_class.md
@@ -91,10 +91,11 @@ cmake .. \
```
Also see:
-- [Configuring with custom TPIP dependencies](../sections/building.md#Configuring-with-custom-TPIP-dependencies)
+
+- [Configuring with custom TPIP dependencies](../sections/building.md#configuring-with-custom-tpip-dependencies)
- [Using Arm Compiler](../sections/building.md#using-arm-compiler)
-- [Configuring the build for simple_platform](../sections/building.md#Configuring-the-build-for-simple_platform)
-- [Working with model debugger from Arm FastModel Tools](../sections/building.md#Working-with-model-debugger-from-Arm-FastModel-Tools)
+- [Configuring the build for simple_platform](../sections/building.md#configuring-the-build-for-simple_platform)
+- [Working with model debugger from Arm FastModel Tools](../sections/building.md#working-with-model-debugger-from-arm-fastmodel-tools)
> **Note:** If re-building with changed parameter values, it is highly advised to clean the build directory and re-run
> the CMake command.
@@ -228,7 +229,7 @@ custom_model_after_vela.tflite.cc
After compiling, your custom model will have now replaced the default one in the application.
-## Setting-up and running Ethos-U55 code sample
+## Setting up and running Ethos-U55 code sample
### Setting up the Ethos-U55 Fast Model
diff --git a/docs/use_cases/inference_runner.md b/docs/use_cases/inference_runner.md
index ebc4677..b8004ed 100644
--- a/docs/use_cases/inference_runner.md
+++ b/docs/use_cases/inference_runner.md
@@ -70,6 +70,7 @@ Model when providing only the mandatory arguments for CMake configuration:
```commandline
cmake ../ -DUSE_CASE_BUILD=inference_runner
```
+
To configure a build that can be debugged using Arm-DS, we can just specify
the build type as `Debug` and use the `Arm Compiler` toolchain file:
@@ -81,10 +82,11 @@ cmake .. \
```
Also see:
-- [Configuring with custom TPIP dependencies](../sections/building.md#Configuring-with-custom-TPIP-dependencies)
+
+- [Configuring with custom TPIP dependencies](../sections/building.md#configuring-with-custom-tpip-dependencies)
- [Using Arm Compiler](../sections/building.md#using-arm-compiler)
-- [Configuring the build for simple_platform](../sections/building.md#Configuring-the-build-for-simple_platform)
-- [Working with model debugger from Arm FastModel Tools](../sections/building.md#Working-with-model-debugger-from-Arm-FastModel-Tools)
+- [Configuring the build for simple_platform](../sections/building.md#configuring-the-build-for-simple_platform)
+- [Working with model debugger from Arm FastModel Tools](../sections/building.md#working-with-model-debugger-from-arm-fastmodel-tools)
> **Note:** If re-building with changed parameter values, it is highly advised to clean the build directory and re-run
> the CMake command.
@@ -162,7 +164,7 @@ custom_model_after_vela.tflite.cc
After compiling, your custom model will have now replaced the default one in the application.
-## Setting-up and running Ethos-U55 code sample
+## Setting up and running Ethos-U55 code sample
### Setting up the Ethos-U55 Fast Model
diff --git a/docs/use_cases/kws.md b/docs/use_cases/kws.md
index 8811efb..bf3e088 100644
--- a/docs/use_cases/kws.md
+++ b/docs/use_cases/kws.md
@@ -131,10 +131,11 @@ cmake .. \
```
Also see:
-- [Configuring with custom TPIP dependencies](../sections/building.md#Configuring-with-custom-TPIP-dependencies)
+
+- [Configuring with custom TPIP dependencies](../sections/building.md#configuring-with-custom-tpip-dependencies)
- [Using Arm Compiler](../sections/building.md#using-arm-compiler)
-- [Configuring the build for simple_platform](../sections/building.md#Configuring-the-build-for-simple_platform)
-- [Working with model debugger from Arm FastModel Tools](../sections/building.md#Working-with-model-debugger-from-Arm-FastModel-Tools)
+- [Configuring the build for simple_platform](../sections/building.md#configuring-the-build-for-simple_platform)
+- [Working with model debugger from Arm FastModel Tools](../sections/building.md#working-with-model-debugger-from-arm-fastmodel-tools)
> **Note:** If re-building with changed parameter values, it is highly advised to clean the build directory and re-run the CMake command.
@@ -261,7 +262,7 @@ custom_model_after_vela.tflite.cc
After compiling, your custom model will have now replaced the default one in the application.
-## Setting-up and running Ethos-U55 code sample
+## Setting up and running Ethos-U55 code sample
### Setting up the Ethos-U55 Fast Model
diff --git a/docs/use_cases/kws_asr.md b/docs/use_cases/kws_asr.md
index b63ee3a..745a108 100644
--- a/docs/use_cases/kws_asr.md
+++ b/docs/use_cases/kws_asr.md
@@ -202,10 +202,11 @@ cmake .. \
```
Also see:
-- [Configuring with custom TPIP dependencies](../sections/building.md#Configuring-with-custom-TPIP-dependencies)
+
+- [Configuring with custom TPIP dependencies](../sections/building.md#configuring-with-custom-tpip-dependencies)
- [Using Arm Compiler](../sections/building.md#using-arm-compiler)
-- [Configuring the build for simple_platform](../sections/building.md#Configuring-the-build-for-simple_platform)
-- [Working with model debugger from Arm FastModel Tools](../sections/building.md#Working-with-model-debugger-from-Arm-FastModel-Tools)
+- [Configuring the build for simple_platform](../sections/building.md#configuring-the-build-for-simple_platform)
+- [Working with model debugger from Arm FastModel Tools](../sections/building.md#working-with-model-debugger-from-arm-fastmodel-tools)
> **Note:** If re-building with changed parameter values, it is highly advised to clean the build directory and re-run the CMake command.
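The `custom_model_after_vela.tflite.cc` files referenced throughout these use-case pages are generated C arrays wrapping the Vela-optimised model. A hedged sketch of the general technique — the symbol name, alignment attribute, and exact formatting are assumptions; the kit's own generation scripts are authoritative:

```python
def tflite_to_c_array(model_bytes, symbol="nn_model"):
    """Render raw .tflite bytes as a C source snippet.

    The 16-byte alignment and 'nn_model' symbol are illustrative choices.
    """
    body = ",\n    ".join(
        ", ".join(f"0x{b:02x}" for b in model_bytes[i:i + 12])
        for i in range(0, len(model_bytes), 12)
    )
    return (
        f"const unsigned char {symbol}[] __attribute__((aligned(16))) = {{\n"
        f"    {body}\n}};\n"
        f"const unsigned int {symbol}_len = {len(model_bytes)};\n"
    )

with_header = tflite_to_c_array(b"\x00\x01\x02")
print(with_header)
```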