author    Cisco Cervellera <cisco.cervellera@arm.com>  2021-11-16 09:54:20 +0000
committer Cisco Cervellera <cisco.cervellera@arm.com>  2021-11-16 09:54:20 +0000
commit    e7a0393973a1a1c1ed05b1bf1838fe931416890a (patch)
tree      8127d74bf1024ee6f45a07100216162d346ae2a2 /docs/sections
parent    b52b585f1c9ee3f2800fa51f3031117ed1396abd (diff)
download  ml-embedded-evaluation-kit-e7a0393973a1a1c1ed05b1bf1838fe931416890a.tar.gz
MLECO-2520: Change md files to have correct file links
Change-Id: I3ec18583c321eb2815a670d56f4958e610331d6d
Diffstat (limited to 'docs/sections')
-rw-r--r--  docs/sections/arm_virtual_hardware.md   |  4
-rw-r--r--  docs/sections/building.md               | 62
-rw-r--r--  docs/sections/coding_guidelines.md      | 22
-rw-r--r--  docs/sections/customizing.md            | 42
-rw-r--r--  docs/sections/deployment.md             | 16
-rw-r--r--  docs/sections/memory_considerations.md  | 24
-rw-r--r--  docs/sections/testing_benchmarking.md   |  6
-rw-r--r--  docs/sections/troubleshooting.md        | 10
8 files changed, 93 insertions(+), 93 deletions(-)
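The change applied across these eight files is mechanical: every bare in-page anchor of the form `](#some-heading)` becomes a self-referencing file link `](./<file>.md#some-heading)`. The commit does not say how the edit was made; a rewrite of this kind could be scripted, for example (a hypothetical sketch, not the tool actually used):

```python
import re
from pathlib import Path

def prefix_anchor_links(md_path: Path) -> int:
    """Rewrite bare in-page anchors '](#x)' into self-referencing
    file links '](./<file>.md#x)'. Returns the number of links changed."""
    text = md_path.read_text()
    new_text, n = re.subn(r'\]\(#', f'](./{md_path.name}#', text)
    if n:
        md_path.write_text(new_text)
    return n

if __name__ == "__main__":
    # Hypothetical usage: apply the rewrite to every file in docs/sections.
    for md in sorted(Path("docs/sections").glob("*.md")):
        print(f"{md.name}: {prefix_anchor_links(md)} links rewritten")
```

Running it a second time is a no-op, since rewritten links no longer contain the bare `](#` pattern.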
diff --git a/docs/sections/arm_virtual_hardware.md b/docs/sections/arm_virtual_hardware.md
index ca60a28..cb5ed48 100644
--- a/docs/sections/arm_virtual_hardware.md
+++ b/docs/sections/arm_virtual_hardware.md
@@ -1,5 +1,5 @@
-- [Overview](#overview)
- - [Getting started](#getting-started)
+- [Overview](./arm_virtual_hardware.md#overview)
+ - [Getting started](./arm_virtual_hardware.md#getting-started)
# Overview
diff --git a/docs/sections/building.md b/docs/sections/building.md
index f2911a8..28130d6 100644
--- a/docs/sections/building.md
+++ b/docs/sections/building.md
@@ -1,29 +1,29 @@
# Building the ML embedded code sample applications from sources
-- [Building the ML embedded code sample applications from sources](#building-the-ml-embedded-code-sample-applications-from-sources)
- - [Build prerequisites](#build-prerequisites)
- - [Build options](#build-options)
- - [Build process](#build-process)
- - [Preparing build environment](#preparing-build-environment)
- - [Fetching submodules](#fetching-submodules)
- - [Fetching resource files](#fetching-resource-files)
- - [Building for default configuration](#building-for-default-configuration)
- - [Create a build directory](#create-a-build-directory)
- - [Configuring the build for MPS3 SSE-300](#configuring-the-build-for-mps3-sse_300)
- - [Using GNU Arm Embedded toolchain](#using-gnu-arm-embedded-toolchain)
- - [Using Arm Compiler](#using-arm-compiler)
- - [Generating project for Arm Development Studio](#generating-project-for-arm-development-studio)
- - [Working with model debugger from Arm Fast Model Tools](#working-with-model-debugger-from-arm-fast-model-tools)
- - [Configuring with custom TPIP dependencies](#configuring-with-custom-tpip-dependencies)
- - [Configuring native unit-test build](#configuring-native-unit_test-build)
- - [Configuring the build for simple-platform](#configuring-the-build-for-simple_platform)
- - [Building the configured project](#building-the-configured-project)
- - [Building timing adapter with custom options](#building-timing-adapter-with-custom-options)
- - [Add custom inputs](#add-custom-inputs)
- - [Add custom model](#add-custom-model)
- - [Optimize custom model with Vela compiler](#optimize-custom-model-with-vela-compiler)
- - [Building for different Ethos-U NPU variants](#building-for-different-ethos_u-npu-variants)
- - [Automatic file generation](#automatic-file-generation)
+- [Building the ML embedded code sample applications from sources](./building.md#building-the-ml-embedded-code-sample-applications-from-sources)
+ - [Build prerequisites](./building.md#build-prerequisites)
+ - [Build options](./building.md#build-options)
+ - [Build process](./building.md#build-process)
+ - [Preparing build environment](./building.md#preparing-build-environment)
+ - [Fetching submodules](./building.md#fetching-submodules)
+ - [Fetching resource files](./building.md#fetching-resource-files)
+ - [Building for default configuration](./building.md#building-for-default-configuration)
+ - [Create a build directory](./building.md#create-a-build-directory)
+ - [Configuring the build for MPS3 SSE-300](./building.md#configuring-the-build-for-mps3-sse_300)
+ - [Using GNU Arm Embedded toolchain](./building.md#using-gnu-arm-embedded-toolchain)
+ - [Using Arm Compiler](./building.md#using-arm-compiler)
+ - [Generating project for Arm Development Studio](./building.md#generating-project-for-arm-development-studio)
+ - [Working with model debugger from Arm Fast Model Tools](./building.md#working-with-model-debugger-from-arm-fast-model-tools)
+ - [Configuring with custom TPIP dependencies](./building.md#configuring-with-custom-tpip-dependencies)
+ - [Configuring native unit-test build](./building.md#configuring-native-unit_test-build)
+ - [Configuring the build for simple-platform](./building.md#configuring-the-build-for-simple_platform)
+ - [Building the configured project](./building.md#building-the-configured-project)
+ - [Building timing adapter with custom options](./building.md#building-timing-adapter-with-custom-options)
+ - [Add custom inputs](./building.md#add-custom-inputs)
+ - [Add custom model](./building.md#add-custom-model)
+ - [Optimize custom model with Vela compiler](./building.md#optimize-custom-model-with-vela-compiler)
+ - [Building for different Ethos-U NPU variants](./building.md#building-for-different-ethos_u-npu-variants)
+ - [Automatic file generation](./building.md#automatic-file-generation)
This section assumes that you are using an **x86 Linux** build machine.
@@ -109,7 +109,7 @@ Before proceeding, it is *essential* to ensure that the following prerequisites
- Access to the internet to download the third-party dependencies, specifically: TensorFlow Lite Micro, Arm®
*Ethos™-U55* NPU driver, and CMSIS. Instructions for downloading these are listed under:
- [preparing build environment](#preparing-build-environment).
+ [preparing build environment](./building.md#preparing-build-environment).
## Build options
@@ -173,7 +173,7 @@ defaults to a configuration ID from `H32`, `H64`, `H256` and `Y512`.
configuration for all the use cases. If the user has overridden the use-case specific model path
parameter, the `ETHOS_U_NPU_CONFIG_ID` parameter becomes irrelevant for that use-case. Also, the
model files for the chosen `ETHOS_U_NPU_CONFIG_ID` are expected to exist in the default locations.
-See [Fetching resource files](#fetching-resource-files) for details on how to do this for your
+See [Fetching resource files](./building.md#fetching-resource-files) for details on how to do this for your
chosen configuration.
- `CPU_PROFILE_ENABLED`: Sets whether profiling information for the CPU core should be displayed. By default, this is
@@ -232,7 +232,7 @@ paths instead of relative paths**.
The build process uses three major steps:
1. Prepare the build environment by downloading third-party sources required, see
- [Preparing build environment](#preparing-build-environment).
+ [Preparing build environment](./building.md#preparing-build-environment).
2. Configure the build for the platform chosen. This stage includes:
- CMake options configuration
@@ -240,11 +240,11 @@ The build process uses three major steps:
downloaded from [Arm ML-Zoo](https://github.com/ARM-software/ML-zoo). For native builds, the network input and
output data for tests are downloaded.
- Some files such as neural network models, network inputs, and output labels are automatically converted into C/C++
- arrays, see: [Automatic file generation](#automatic-file-generation).
+ arrays, see: [Automatic file generation](./building.md#automatic-file-generation).
3. Build the application.\
Application and third-party libraries are now built. For further information, see:
- [Building the configured project](#building-the-configured-project).
+ [Building the configured project](./building.md#building-the-configured-project).
### Preparing build environment
@@ -258,7 +258,7 @@ repository to link against.
3. [CMSIS-5](https://github.com/ARM-software/CMSIS_5.git)
> **Note:** If you are using non git project sources, run `python3 ./download_dependencies.py` and ignore further git
-> instructions. Proceed to [Fetching resource files](#fetching-resource-files) section.
+> instructions. Proceed to [Fetching resource files](./building.md#fetching-resource-files) section.
>
To pull the submodules:
@@ -293,7 +293,7 @@ python3 ./set_up_default_resources.py
This fetches every model into the `resources_downloaded` directory. It also optimizes the models using the Vela compiler
for the default 128 MACs configuration of the Arm® *Ethos™-U55* NPU and for the default 256 MACs configuration of the Arm® *Ethos™-U65* NPU.
-> **Note:** This script requires Python version 3.6 or higher. Please make sure all [build prerequisites](#build-prerequisites)
+> **Note:** This script requires Python version 3.6 or higher. Please make sure all [build prerequisites](./building.md#build-prerequisites)
> are satisfied.
If you need to optimize the models for a different Ethos-U configuration, you can pass a
diff --git a/docs/sections/coding_guidelines.md b/docs/sections/coding_guidelines.md
index c1eba00..039b1e0 100644
--- a/docs/sections/coding_guidelines.md
+++ b/docs/sections/coding_guidelines.md
@@ -1,16 +1,16 @@
# Coding standards and guidelines
-- [Coding standards and guidelines](#coding-standards-and-guidelines)
- - [Introduction](#introduction)
- - [Language version](#language-version)
- - [File naming](#file-naming)
- - [File layout](#file-layout)
- - [Block Management](#block-management)
- - [Naming Conventions](#naming-conventions)
- - [CPP language naming conventions](#cpp-language-naming-conventions)
- - [C language naming conventions](#c-language-naming-conventions)
- - [Layout and formatting conventions](#layout-and-formatting-conventions)
- - [Language usage](#language-usage)
+- [Coding standards and guidelines](./coding_guidelines.md#coding-standards-and-guidelines)
+ - [Introduction](./coding_guidelines.md#introduction)
+ - [Language version](./coding_guidelines.md#language-version)
+ - [File naming](./coding_guidelines.md#file-naming)
+ - [File layout](./coding_guidelines.md#file-layout)
+ - [Block Management](./coding_guidelines.md#block-management)
+ - [Naming Conventions](./coding_guidelines.md#naming-conventions)
+ - [CPP language naming conventions](./coding_guidelines.md#cpp-language-naming-conventions)
+ - [C language naming conventions](./coding_guidelines.md#c-language-naming-conventions)
+ - [Layout and formatting conventions](./coding_guidelines.md#layout-and-formatting-conventions)
+ - [Language usage](./coding_guidelines.md#language-usage)
## Introduction
diff --git a/docs/sections/customizing.md b/docs/sections/customizing.md
index 854a3ed..3bf9b26 100644
--- a/docs/sections/customizing.md
+++ b/docs/sections/customizing.md
@@ -1,21 +1,21 @@
# Implementing custom ML application
-- [Implementing custom ML application](#implementing-custom-ml-application)
- - [Software project description](#software-project-description)
- - [Hardware Abstraction Layer API](#hardware-abstraction-layer-api)
- - [Main loop function](#main-loop-function)
- - [Application context](#application-context)
- - [Profiler](#profiler)
- - [NN Model API](#nn-model-api)
- - [Adding custom ML use-case](#adding-custom-ml-use_case)
- - [Implementing main loop](#implementing-main-loop)
- - [Implementing custom NN model](#implementing-custom-nn-model)
- - [Define ModelPointer and ModelSize methods](#define-modelpointer-and-modelsize-methods)
- - [Executing inference](#executing-inference)
- - [Printing to console](#printing-to-console)
- - [Reading user input from console](#reading-user-input-from-console)
- - [Output to MPS3 LCD](#output-to-mps3-lcd)
- - [Building custom use-case](#building-custom-use_case)
+- [Implementing custom ML application](./customizing.md#implementing-custom-ml-application)
+ - [Software project description](./customizing.md#software-project-description)
+ - [Hardware Abstraction Layer API](./customizing.md#hardware-abstraction-layer-api)
+ - [Main loop function](./customizing.md#main-loop-function)
+ - [Application context](./customizing.md#application-context)
+ - [Profiler](./customizing.md#profiler)
+ - [NN Model API](./customizing.md#nn-model-api)
+ - [Adding custom ML use-case](./customizing.md#adding-custom-ml-use_case)
+ - [Implementing main loop](./customizing.md#implementing-main-loop)
+ - [Implementing custom NN model](./customizing.md#implementing-custom-nn-model)
+ - [Define ModelPointer and ModelSize methods](./customizing.md#define-modelpointer-and-modelsize-methods)
+ - [Executing inference](./customizing.md#executing-inference)
+ - [Printing to console](./customizing.md#printing-to-console)
+ - [Reading user input from console](./customizing.md#reading-user-input-from-console)
+ - [Output to MPS3 LCD](./customizing.md#output-to-mps3-lcd)
+ - [Building custom use-case](./customizing.md#building-custom-use_case)
This section describes how to implement a custom Machine Learning application running on Arm® *Corstone™-300* based FVP
or on the Arm® MPS3 FPGA prototyping board.
@@ -323,7 +323,7 @@ use_case
```
Start with creation of a sub-directory under the `use_case` directory and two additional directories `src` and `include`
-as described in the [Software project description](#software-project-description) section.
+as described in the [Software project description](./customizing.md#software-project-description) section.
## Implementing main loop
@@ -336,9 +336,9 @@ Main loop has knowledge about the platform and has access to the platform compon
Layer (HAL).
Start by creating a `MainLoop.cc` file in the `src` directory (the one created under
-[Adding custom ML use case](#adding-custom-ml-use-case)). The name used is not important.
+[Adding custom ML use case](./customizing.md#adding-custom-ml-use-case)). The name used is not important.
-Now define the `main_loop` function with the signature described in [Main loop function](#main-loop-function):
+Now define the `main_loop` function with the signature described in [Main loop function](./customizing.md#main-loop-function):
```C++
#include "hal.h"
@@ -348,7 +348,7 @@ void main_loop(hal_platform& platform) {
}
```
-The preceding code is already a working use-case. If you compile and run it (see [Building custom usecase](#building-custom-use-case)),
+The preceding code is already a working use-case. If you compile and run it (see [Building custom usecase](./customizing.md#building-custom-use-case)),
then the application starts and prints a message to console and exits straight away.
You can now start filling this function with logic.
@@ -358,7 +358,7 @@ You can now start filling this function with logic.
Before inference could be run with a custom NN model, TensorFlow Lite Micro framework must learn about the operators, or
layers, included in the model. You must register operators using the `MicroMutableOpResolver` API.
-The *Ethos-U* code samples project has an abstraction around TensorFlow Lite Micro API (see [NN model API](#nn-model-api)).
+The *Ethos-U* code samples project has an abstraction around TensorFlow Lite Micro API (see [NN model API](./customizing.md#nn-model-api)).
Create `HelloWorldModel.hpp` in the use-case include sub-directory, extend Model abstract class,
and then declare the required methods.
diff --git a/docs/sections/deployment.md b/docs/sections/deployment.md
index 3e58464..5d858ce 100644
--- a/docs/sections/deployment.md
+++ b/docs/sections/deployment.md
@@ -1,12 +1,12 @@
# Deployment
-- [Deployment](#deployment)
- - [Fixed Virtual Platform](#fixed-virtual-platform)
- - [Setting up the MPS3 Arm Corstone-300 FVP](#setting-up-the-mps3-arm-corstone_300-fvp)
- - [Deploying on an FVP emulating MPS3](#deploying-on-an-fvp-emulating-mps3)
- - [MPS3 board](#mps3-board)
- - [MPS3 board top-view](#mps3-board-top_view)
- - [Deployment on MPS3 board](#deployment-on-mps3-board)
+- [Deployment](./deployment.md#deployment)
+ - [Fixed Virtual Platform](./deployment.md#fixed-virtual-platform)
+ - [Setting up the MPS3 Arm Corstone-300 FVP](./deployment.md#setting-up-the-mps3-arm-corstone_300-fvp)
+ - [Deploying on an FVP emulating MPS3](./deployment.md#deploying-on-an-fvp-emulating-mps3)
+ - [MPS3 board](./deployment.md#mps3-board)
+ - [MPS3 board top-view](./deployment.md#mps3-board-top_view)
+ - [Deployment on MPS3 board](./deployment.md#deployment-on-mps3-board)
The sample application for Arm® *Ethos™-U55* can be deployed on two target platforms:
@@ -45,7 +45,7 @@ To install the FVP:
### Deploying on an FVP emulating MPS3
This section assumes that the FVP has been installed (see
-[Setting up the MPS3 Arm Corstone-300 FVP](#setting-up-the-mps3-arm-corstone-300-fvp))
+[Setting up the MPS3 Arm Corstone-300 FVP](./deployment.md#setting-up-the-mps3-arm-corstone-300-fvp))
to the home directory of the user: `~/FVP_Corstone_SSE-300`.
The installation, typically, has the executable under `~/FVP_Corstone_SSE-300/model/<OS>_<compiler-version>/`
diff --git a/docs/sections/memory_considerations.md b/docs/sections/memory_considerations.md
index 89baf41..89acb1e 100644
--- a/docs/sections/memory_considerations.md
+++ b/docs/sections/memory_considerations.md
@@ -1,16 +1,16 @@
# Memory considerations
-- [Memory considerations](#memory-considerations)
- - [Introduction](#introduction)
- - [Memory available on the target platform](#memory-available-on-the-target-platform)
- - [Parameters linked to SRAM size definitions](#parameters-linked-to-sram-size-definitions)
- - [Understanding memory usage from Vela output](#understanding-memory-usage-from-vela-output)
- - [Total SRAM used](#total-sram-used)
- - [Total Off-chip Flash used](#total-off_chip-flash-used)
- - [Memory mode configurations](#memory-mode-configurations)
- - [Tensor arena and neural network model memory placement](#tensor-arena-and-neural-network-model-memory-placement)
- - [Memory usage for ML use-cases](#memory-usage-for-ml-use_cases)
- - [Memory constraints](#memory-constraints)
+- [Memory considerations](./memory_considerations.md#memory-considerations)
+ - [Introduction](./memory_considerations.md#introduction)
+ - [Memory available on the target platform](./memory_considerations.md#memory-available-on-the-target-platform)
+ - [Parameters linked to SRAM size definitions](./memory_considerations.md#parameters-linked-to-sram-size-definitions)
+ - [Understanding memory usage from Vela output](./memory_considerations.md#understanding-memory-usage-from-vela-output)
+ - [Total SRAM used](./memory_considerations.md#total-sram-used)
+ - [Total Off-chip Flash used](./memory_considerations.md#total-off_chip-flash-used)
+ - [Memory mode configurations](./memory_considerations.md#memory-mode-configurations)
+ - [Tensor arena and neural network model memory placement](./memory_considerations.md#tensor-arena-and-neural-network-model-memory-placement)
+ - [Memory usage for ML use-cases](./memory_considerations.md#memory-usage-for-ml-use_cases)
+ - [Memory constraints](./memory_considerations.md#memory-constraints)
## Introduction
@@ -199,7 +199,7 @@ The following numbers have been obtained from Vela for the `Shared_Sram` memory
memory requirements for the different use-cases of the evaluation kit.
> **Note:** The SRAM usage does not include memory used by TensorFlow Lite Micro and must be topped up as explained
-> under [Total SRAM used](#total-sram-used).
+> under [Total SRAM used](./memory_considerations.md#total-sram-used).
- [Keyword spotting model](https://github.com/ARM-software/ML-zoo/tree/68b5fbc77ed28e67b2efc915997ea4477c1d9d5b//models/keyword_spotting/ds_cnn_large/tflite_clustered_int8)
requires
diff --git a/docs/sections/testing_benchmarking.md b/docs/sections/testing_benchmarking.md
index a08789f..d1cd9df 100644
--- a/docs/sections/testing_benchmarking.md
+++ b/docs/sections/testing_benchmarking.md
@@ -1,8 +1,8 @@
# Testing and benchmarking
-- [Testing and benchmarking](#testing-and-benchmarking)
- - [Testing](#testing)
- - [Benchmarking](#benchmarking)
+- [Testing and benchmarking](./testing_benchmarking.md#testing-and-benchmarking)
+ - [Testing](./testing_benchmarking.md#testing)
+ - [Benchmarking](./testing_benchmarking.md#benchmarking)
## Testing
diff --git a/docs/sections/troubleshooting.md b/docs/sections/troubleshooting.md
index fc81ffd..794bfb0 100644
--- a/docs/sections/troubleshooting.md
+++ b/docs/sections/troubleshooting.md
@@ -1,10 +1,10 @@
# Troubleshooting
-- [Troubleshooting](#troubleshooting)
- - [Inference results are incorrect for my custom files](#inference-results-are-incorrect-for-my-custom-files)
- - [The application does not work with my custom model](#the-application-does-not-work-with-my-custom-model)
- - [NPU configuration mismatch error when running inference](#npu-configuration-mismatch-error-when-running-inference)
- - [Problem installing Vela](#problem-installing-vela)
+- [Troubleshooting](./troubleshooting.md#troubleshooting)
+ - [Inference results are incorrect for my custom files](./troubleshooting.md#inference-results-are-incorrect-for-my-custom-files)
+ - [The application does not work with my custom model](./troubleshooting.md#the-application-does-not-work-with-my-custom-model)
+ - [NPU configuration mismatch error when running inference](./troubleshooting.md#npu-configuration-mismatch-error-when-running-inference)
+ - [Problem installing Vela](./troubleshooting.md#problem-installing-vela)
## Inference results are incorrect for my custom files
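After a rewrite like the one in this commit, it is worth checking that every `./file.md#anchor` target still resolves to a real heading in the target file. A hedged sketch of such a checker follows; it assumes GitHub-style anchor slugs, which do not match every anchor in this repository (several anchors here use `_` where the heading contains `-`, e.g. `#configuring-the-build-for-mps3-sse_300`), so the slug rule would need adapting to the actual renderer:

```python
import re
from pathlib import Path

def slugify(heading: str) -> str:
    """GitHub-style anchor slug: lowercase, strip most punctuation,
    spaces to hyphens. Other renderers use different rules."""
    s = re.sub(r'[^\w\- ]', '', heading.strip().lower())
    return s.replace(' ', '-')

def check_links(docs_dir: Path) -> list:
    """Return 'file: ./target.md#anchor' strings for links whose anchor
    does not match any heading slug in the target file."""
    anchors = {}
    for md in docs_dir.glob("*.md"):
        headings = re.findall(r'^#+\s+(.*)$', md.read_text(), re.M)
        anchors[md.name] = {slugify(h) for h in headings}
    broken = []
    for md in sorted(docs_dir.glob("*.md")):
        for target, anchor in re.findall(r'\]\(\./([\w.]+\.md)#([\w\-]+)\)',
                                         md.read_text()):
            if anchor not in anchors.get(target, set()):
                broken.append(f"{md.name}: ./{target}#{anchor}")
    return broken
```

Pointing `check_links` at `docs/sections` after the rewrite would flag any link whose file or anchor no longer exists.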