Diffstat (limited to 'docs/sections/customizing.md')

 docs/sections/customizing.md | 566
 1 file changed, 268 insertions(+), 298 deletions(-)
diff --git a/docs/sections/customizing.md b/docs/sections/customizing.md
index ae911d9..2df32d5 100644
--- a/docs/sections/customizing.md
+++ b/docs/sections/customizing.md
@@ -2,33 +2,33 @@
- [Implementing custom ML application](#implementing-custom-ml-application)
- [Software project description](#software-project-description)
- - [HAL API](#hal-api)
+ - [Hardware Abstraction Layer (HAL) API](#hardware-abstraction-layer-hal-api)
- [Main loop function](#main-loop-function)
- [Application context](#application-context)
- [Profiler](#profiler)
- [NN Model API](#nn-model-api)
- - [Adding custom ML use case](#adding-custom-ml-use-case)
+ - [Adding custom ML use-case](#adding-custom-ml-use-case)
- [Implementing main loop](#implementing-main-loop)
- [Implementing custom NN model](#implementing-custom-nn-model)
+ - [Define `ModelPointer` and `ModelSize` methods](#define-modelpointer-and-modelsize-methods)
- [Executing inference](#executing-inference)
- [Printing to console](#printing-to-console)
- [Reading user input from console](#reading-user-input-from-console)
- [Output to MPS3 LCD](#output-to-mps3-lcd)
- - [Building custom use case](#building-custom-use-case)
+ - [Building custom use-case](#building-custom-use-case)
-This section describes how to implement a custom Machine Learning
-application running on `Arm® Corstone™-300` based FVP or on the Arm® MPS3 FPGA prototyping board.
+This section describes how to implement a custom Machine Learning application running on an Arm® *Corstone™-300* based FVP
+or on the Arm® MPS3 FPGA prototyping board.
-Arm® Ethos™-U55 code sample software project offers a simple way to incorporate
-additional use-case code into the existing infrastructure and provides a build
-system that automatically picks up added functionality and produces corresponding
-executable for each use-case. This is achieved by following certain configuration
-and code implementation conventions.
+The Arm® *Ethos™-U55* code sample software project offers a way to incorporate more use-case code into the existing
+infrastructure. It also provides a build system that automatically picks up added functionality and produces a
+corresponding executable for each use-case. This is achieved by following certain configuration and code implementation
+conventions.
-The following sign will indicate the important conventions to apply:
+The following sign indicates the important conventions to apply:
-> **Convention:** The code is developed using C++11 and C99 standards.
-> This is governed by TensorFlow Lite for Microcontrollers framework.
+> **Convention:** The code is developed using `C++11` and `C99` standards. This is governed by the TensorFlow Lite for
+> Microcontrollers framework.
## Software project description
@@ -54,17 +54,14 @@ As mentioned in the [Repository structure](../documentation.md#repository-struct
└── Readme.md
```
-Where `source` contains C/C++ sources for the platform and ML applications.
-Common code related to the Ethos-U55 code samples software
-framework resides in the `application` sub-folder and ML application specific logic (use-cases)
-sources are in the `use-case` subfolder.
-
-> **Convention**: Separate use-cases must be organized in sub-folders under the use-case folder.
-> The name of the directory is used as a name for this use-case and could be provided
-> as a `USE_CASE_BUILD` parameter value.
-> It is expected by the build system that sources for the use-case are structured as follows:
-> headers in an `include` directory, C/C++ sources in a `src` directory.
-> For example:
+The `source` folder contains C/C++ sources for the platform and ML applications. Common code related to the
+*Ethos-U55* code samples software framework resides in the `application` sub-folder, and ML application-specific logic
+(the use-cases) lives in the `use-case` sub-folder.
+
+> **Convention**: Separate use-cases must be organized in sub-folders under the use-case folder. The name of the
+> directory is used as a name for this use-case and can be provided as a `USE_CASE_BUILD` parameter value. The build
+> system expects that sources for the use-case are structured as follows: Headers in an `include` directory and C/C++
+> sources in a `src` directory. For example:
>
> ```tree
> use_case
@@ -75,92 +72,84 @@ sources are in the `use-case` subfolder.
> └── *.cc
> ```
-## HAL API
+## Hardware Abstraction Layer (HAL) API
-Hardware abstraction layer is represented by the following interfaces.
-To access them, include `hal.h` header.
+The HAL is represented by the following interfaces. To access them, include the `hal.h` header.
-- `hal_platform` structure:
- Structure that defines a platform context to be used by the application
+- `hal_platform` structure: Defines a platform context to be used by the application.
| Attribute name | Description |
|--------------------|----------------------------------------------------------------------------------------------|
- | inited | Initialization flag. Is set after the platform_init() function is called. |
- | plat_name | Platform name. it is set to "mps3-bare" for MPS3 build and "FVP" for Fast Model build. |
- | data_acq | Pointer to data acquisition module responsible for user interaction and other data collection for the application logic. |
- | data_psn | Pointer to data presentation module responsible for data output through components available in the selected platform: LCD -- for MPS3, console -- for Fast Model. |
- | timer | Pointer to platform timer implementation (see platform_timer) |
- | platform_init | Pointer to platform initialization function. |
- | platform_release | Pointer to platform release function |
-
-- `hal_init` function:
- Initializes the HAL structure based on compile time config. This
- should be called before any other function in this API.
+ | `inited`           | Initialization flag. It is set after the `platform_init()` function is called. |
+ | `plat_name`        | Platform name. It is set to `mps3-bare` for the MPS3 build and `FVP` for the Fast Model build. |
+ | `data_acq`         | Pointer to the data acquisition module responsible for user interaction and other data collection for the application logic. |
+ | `data_psn`         | Pointer to the data presentation module responsible for data output through components available in the selected platform: the LCD for MPS3 and the console for Fast Model. |
+ | `timer`            | Pointer to the platform timer implementation (see `platform_timer`). |
+ | `platform_init`    | Pointer to the platform initialization function. |
+ | `platform_release` | Pointer to the platform release function. |
+
+- `hal_init` function: Initializes the HAL structure based on the compile time configuration. This must be called before
+ any other function in this API.
  | Parameter name | Description |
|------------------|-----------------------------------------------------|
- | platform | Pointer to a pre-allocated `hal_platform` struct. |
- | data_acq | Pointer to a pre-allocated data acquisition module |
- | data_psn | Pointer to a pre-allocated data presentation module |
- | timer | Pointer to a pre-allocated timer module |
- | return | zero if successful, error code otherwise |
+ | `platform` | Pointer to a pre-allocated `hal_platform` struct. |
+ | `data_acq` | Pointer to a pre-allocated data acquisition module |
+ | `data_psn` | Pointer to a pre-allocated data presentation module |
+ | `timer` | Pointer to a pre-allocated timer module |
+ | `return`         | Zero if successful, an error code otherwise. |
-- `hal_platform_init` function:
- Initializes the HAL platform and all the modules on the platform the
- application requires to run.
+- `hal_platform_init` function: Initializes the HAL platform and every module on the platform that the application
+ requires to run.
| Parameter name | Description |
| ----------------| ------------------------------------------------------------------- |
- | platform | Pointer to a pre-allocated and initialized `hal_platform` struct. |
- | return | zero if successful, error code otherwise. |
+ | `platform` | Pointer to a pre-allocated and initialized `hal_platform` struct. |
+ | `return`       | Zero if successful, an error code otherwise. |
-- `hal_platform_release` function
- Releases the HAL platform. This should release resources acquired.
+- `hal_platform_release` function: Releases the HAL platform and any acquired resources.
| Parameter name | Description |
| ----------------| ------------------------------------------------------------------- |
- | platform | Pointer to a pre-allocated and initialized `hal_platform` struct. |
+ | `platform` | Pointer to a pre-allocated and initialized `hal_platform` struct. |
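
Putting the three functions together, a minimal platform lifecycle could look like the following sketch. This is an
illustration only: the module types come from the tables in this section, and the zero-on-success convention follows
the `hal_init` description.

```C++
#include "hal.h"

int main()
{
    hal_platform platform;
    data_acq_module data_acq;
    data_psn_module data_psn;
    platform_timer timer;

    /* Wire the pre-allocated modules into the HAL structure. */
    if (0 != hal_init(&platform, &data_acq, &data_psn, &timer)) {
        return 1;
    }

    /* Bring up every module the application needs on this platform. */
    if (0 != hal_platform_init(&platform)) {
        return 1;
    }

    /* ... application logic ... */

    /* Release the platform and any acquired resources. */
    hal_platform_release(&platform);
    return 0;
}
```
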
-- `data_acq_module` structure:
- Structure to encompass the data acquisition module and it's methods.
+- `data_acq_module` structure: Encompasses the data acquisition module and its methods.
| Attribute name | Description |
|----------------|----------------------------------------------------|
- | inited | Initialization flag. Is set after the system_init () function is called. |
- | system_name | Channel name. It is set to "UART" for MPS3 build and fastmodel builds. |
- | system_init | Pointer to data acquisition module initialization function. The pointer is set according to the platform selected during the build. This function is called by the platforminitialization routines. |
- | get_input | Pointer to a function reading user input. The pointer is set according to the selected platform during the build. For MPS3 and fastmodel environments, the function reads data from UART. |
+ | `inited`      | Initialization flag. It is set after the `system_init()` function is called. |
+ | `system_name` | Channel name. It is set to `UART` for both MPS3 and Fast Model builds. |
+ | `system_init` | Pointer to data acquisition module initialization function. The pointer is set according to the platform selected during the build. This function is called by the platform initialization routines. |
+ | `get_input` | Pointer to a function reading user input. The pointer is set according to the selected platform during the build. For MPS3 and Fast Model environments, the function reads data from UART. |
-- `data_psn_module` structure:
- Structure to encompass the data presentation module and its methods.
+- `data_psn_module` structure: Encompasses the data presentation module and its methods.
| Attribute name | Description |
|--------------------|------------------------------------------------|
- | inited | Initialization flag. It is set after the system_init () function is called. |
- | system_name | System component name used to present data. It is set to "lcd" for MPS3 build and to "log_psn" for fastmodel build. In case of fastmodel, all pixel drawing functions are replaced by console output of the data summary. |
- | system_init | Pointer to data presentation module initialization function. The pointer is set according to the platform selected during the build. This function is called by the platform initialization routines. |
- | present_data_image | Pointer to a function to draw an image. The pointer is set according to the selected platform during the build. For MPS3, the image will be drawn on the LCD; for fastmodel image summary will be printed in the UART (coordinates, channel info, downsample factor) |
- | present_data_text | Pointer to a function to print a text. The pointer is set according to the selected platform during the build. For MPS3, the text will be drawn on the LCD; for fastmodel text will be printed in the UART. |
- | present_box | Pointer to a function to draw a rectangle. The pointer is set according to the selected platform during the build. For MPS3, the image will be drawn on the LCD; for fastmodel image summary will be printed in the UART. |
- | clear | Pointer to a function to clear the output. The pointer is set according to the selected platform during the build. For MPS3, the function will clear the LCD; for fastmodel will do nothing. |
- | set_text_color | Pointer to a function to set text color for the next call of present_data_text() function. The pointer is set according to the selected platform during the build. For MPS3, the function will set the color for the text printed on the LCD; for fastmodel -- will do nothing. |
-
-- `platform_timer` structure:
- Structure to hold a platform specific timer implementation.
+ | `inited`           | Initialization flag. It is set after the `system_init()` function is called. |
+ | `system_name` | System component name used to present data. It is set to `lcd` for the MPS3 build and to `log_psn` for the Fast Model build. For Fast Model, the console output of the data summary replaces all pixel drawing functions. |
+ | `system_init` | Pointer to data presentation module initialization function. The pointer is set according to the platform selected during the build. This function is called by the platform initialization routines. |
+ | `present_data_image` | Pointer to a function to draw an image. The pointer is set according to the selected platform during the build. For MPS3, the image is drawn on the LCD. For Fast Model, the image summary is printed in the UART (coordinates, channel info, downsample factor). |
+ | `present_data_text` | Pointer to a function to print a text. The pointer is set according to the selected platform during the build. For MPS3, the text is drawn on the LCD. For Fast Model, the text is printed in the UART. |
+ | `present_box` | Pointer to a function to draw a rectangle. The pointer is set according to the selected platform during the build. For MPS3, the image is drawn on the LCD. For Fast Model, the image summary is printed in the UART. |
+ | `clear` | Pointer to a function to clear the output. The pointer is set according to the selected platform during the build. For MPS3, the function clears the LCD. For Fast Model, nothing happens. |
+ | `set_text_color` | Pointer to a function to set text color for the next call of `present_data_text()` function. The pointer is set according to the selected platform during the build. For MPS3, the function sets the color for the text printed on the LCD. For Fast Model, nothing happens. |
+
+- `platform_timer` structure: Holds a platform-specific timer implementation.
| Attribute name | Description |
|---------------------|------------------------------------------------|
- | inited | Initialization flag. It is set after the timer is initialized by the `hal_platform_init` function. |
- | reset | Pointer to a function to reset a timer. |
- | get_time_counter | Pointer to a function to get current time counter. |
- | get_duration_ms | Pointer to a function to calculate duration between two time-counters in milliseconds. |
- | get_duration_us | Pointer to a function to calculate duration between two time-counters in microseconds |
- | get_cpu_cycle_diff | Pointer to a function to calculate duration between two time-counters in Cortex-M55 cycles. |
- | get_npu_cycle_diff | Pointer to a function to calculate duration between two time-counters in Ethos-U55 cycles. Available only when project is configured with ETHOS_U55_ENABLED set. |
- | start_profiling | Wraps `get_time_counter` function with additional profiling initialisation, if required. |
- | stop_profiling | Wraps `get_time_counter` function along with additional instructions when profiling ends, if required. |
-
-Example of the API initialization in the main function:
+ | `inited` | Initialization flag. It is set after the timer is initialized by the `hal_platform_init` function. |
+ | `reset` | Pointer to a function to reset a timer. |
+ | `get_time_counter` | Pointer to a function to get current time counter. |
+ | `get_duration_ms` | Pointer to a function to calculate duration between two time-counters in milliseconds. |
+ | `get_duration_us`    | Pointer to a function to calculate duration between two time-counters in microseconds. |
+ | `get_cpu_cycle_diff` | Pointer to a function to calculate duration between two time-counters in *Cortex-M55* cycles. |
+ | `get_npu_cycle_diff` | Pointer to a function to calculate duration between two time-counters in *Ethos-U55* cycles. Available only when the project is configured with `ETHOS_U55_ENABLED` set. |
+ | `start_profiling`    | Wraps the `get_time_counter` function with additional profiling initialization, if necessary. |
+ | `stop_profiling`     | Wraps the `get_time_counter` function with additional instructions when profiling ends, if necessary. |
+
+An example of the API initialization in the main function:
```C++
#include "hal.h"
@@ -189,16 +178,13 @@ int main ()
## Main loop function
-Code samples application main function will delegate the use-case
-logic execution to the main loop function that must be implemented for
-each custom ML scenario.
+The code samples application main function delegates the use-case logic execution to the main loop function, which must be
+implemented for each custom ML scenario.
-Main loop function takes the initialized *hal_platform* structure
-pointer as an argument.
+The main loop function takes the initialized `hal_platform` structure pointer as an argument.
-The main loop function has external linkage and main executable for the
-use-case will have reference to the function defined in the use-case
-code.
+The main loop function has external linkage and the main executable for the use-case references the function defined in
+the use-case code.
```C++
void main_loop(hal_platform& platform){
@@ -210,14 +196,14 @@ void main_loop(hal_platform& platform){
## Application context
-Application context could be used as a holder for a state between main
-loop iterations. Include AppContext.hpp to use ApplicationContext class.
+The application context can be used as a holder for state between main loop iterations. Include `AppContext.hpp` to use
+the `ApplicationContext` class.
| Method name | Description |
|--------------|------------------------------------------------------------------|
-| Set | Saves given value as a named attribute in the context. |
-| Get | Gets the saved attribute from the context by the given name. |
-| Has | Checks if an attribute with a given name exists in the context. |
+| `Set`        | Saves the given value as a named attribute in the context.       |
+| `Get` | Gets the saved attribute from the context by the given name. |
+| `Has` | Checks if an attribute with a given name exists in the context. |
For example:
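
A minimal sketch is shown below. The templated `Set`/`Get` signatures and the `arm::app` namespace are assumptions
based on how the class is used elsewhere in the code samples.

```C++
#include "hal.h"
#include "AppContext.hpp"

void main_loop(hal_platform& platform)
{
    arm::app::ApplicationContext caseContext;

    /* State stored in the context survives across loop iterations. */
    caseContext.Set<uint32_t>("imgIndex", 0);

    while (true) {
        if (caseContext.Has("imgIndex")) {
            auto idx = caseContext.Get<uint32_t>("imgIndex");
            caseContext.Set<uint32_t>("imgIndex", idx + 1);
        }
        break; /* A single pass is enough for this sketch. */
    }
}
```
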
@@ -241,21 +227,20 @@ void main_loop(hal_platform& platform) {
## Profiler
-Profiler is a helper class assisting in collection of timings and
-Ethos-U55 cycle counts for operations. It uses platform timer to get
-system timing information.
+The profiler is a helper class that assists with the collection of timings and *Ethos-U55* cycle counts for operations.
+It uses the platform timer to get system timing information.
| Method name | Description |
|-------------------------|----------------------------------------------------------------|
-| StartProfiling | Starts profiling and records the starting timing data. |
-| StopProfiling | Stops profiling and records the ending timing data. |
-| StopProfilingAndReset | Stops the profiling and internally resets the platform timers. |
-| Reset | Resets the profiler and clears all collected data. |
-| GetAllResultsAndReset | Gets all the results as string and resets the profiler. |
-| PrintProfilingResult | Prints collected profiling results and resets the profiler. |
-| SetName | Set the profiler name. |
+| `StartProfiling` | Starts profiling and records the starting timing data. |
+| `StopProfiling` | Stops profiling and records the ending timing data. |
+| `StopProfilingAndReset` | Stops the profiling and internally resets the platform timers. |
+| `Reset` | Resets the profiler and clears all collected data. |
+| `GetAllResultsAndReset` | Gets all the results as a string and resets the profiler.      |
+| `PrintProfilingResult` | Prints collected profiling results and resets the profiler. |
+| `SetName`               | Sets the profiler name.                                        |
-Usage example:
+An example of it in use:
```C++
Profiler profiler{&platform, "Inference"};
@@ -269,39 +254,38 @@ profiler.PrintProfilingResult();
## NN Model API
-Model (refers to neural network model) is an abstract class wrapping the
-underlying TensorFlow Lite Micro API and providing methods to perform
-common operations such as TensorFlow Lite Micro framework
-initialization, inference execution, accessing input and output tensor
-objects.
+The Model, which refers to a neural network model, is an abstract class wrapping the underlying TensorFlow Lite Micro
+API. It provides methods to perform common operations, such as TensorFlow Lite Micro framework initialization,
+inference execution, and accessing input and output tensor objects.
-To use this abstraction, import TensorFlowLiteMicro.hpp header.
+To use this abstraction, import the `TensorFlowLiteMicro.hpp` header.
| Method name | Description |
|--------------------------|------------------------------------------------------------------------------|
-| GetInputTensor | Returns the pointer to the model's input tensor. |
-| GetOutputTensor | Returns the pointer to the model's output tensor |
-| GetType | Returns the model's data type |
-| GetInputShape | Return the pointer to the model's input shape |
-| GetOutputShape | Return the pointer to the model's output shape. |
-| GetNumInputs | Return the number of input tensors the model has. |
-| GetNumOutputs | Return the number of output tensors the model has. |
-| LogTensorInfo | Logs the tensor information to stdout for the given tensor pointer: tensor name, tensor address, tensor type, tensor memory size and quantization params. |
-| LogInterpreterInfo | Logs the interpreter information to stdout. |
-| Init | Initializes the TensorFlow Lite Micro framework, allocates require memory for the model. |
-| GetAllocator | Gets the allocator pointer for the instance. |
-| IsInited | Checks if this model object has been initialized. |
-| IsDataSigned | Checks if the model uses signed data type. |
-| RunInference | Runs the inference (invokes the interpreter). |
-| ShowModelInfoHandler | Model information handler common to all models. |
-| GetTensorArena | Returns pointer to memory region to be used for tensors allocations. |
-| ModelPointer | Returns the pointer to the NN model data array. |
-| ModelSize | Returns the model size. |
-| GetOpResolver | Returns the reference to the TensorFlow Lite Micro operator resolver. |
-| EnlistOperations | Registers required operators with TensorFlow Lite Micro operator resolver. |
-| GetActivationBufferSize | Returns the size of the tensor arena memory region. |
-
-> **Convention:** Each ML use-case must have extension of this class and implementation of the protected virtual methods:
+| `GetInputTensor`          | Returns the pointer to the model's input tensor. |
+| `GetOutputTensor`         | Returns the pointer to the model's output tensor. |
+| `GetType`                 | Returns the model's data type. |
+| `GetInputShape`           | Returns the pointer to the model's input shape. |
+| `GetOutputShape`          | Returns the pointer to the model's output shape. |
+| `GetNumInputs`            | Returns the number of input tensors the model has. |
+| `GetNumOutputs`           | Returns the number of output tensors the model has. |
+| `LogTensorInfo`           | Logs the tensor information to `stdout` for the given tensor pointer. Includes: tensor name, tensor address, tensor type, tensor memory size, and quantization parameters. |
+| `LogInterpreterInfo`      | Logs the interpreter information to `stdout`. |
+| `Init`                    | Initializes the TensorFlow Lite Micro framework and allocates the required memory for the model. |
+| `GetAllocator`            | Gets the allocator pointer for the instance. |
+| `IsInited`                | Checks if this model object has been initialized. |
+| `IsDataSigned`            | Checks if the model uses a signed data type. |
+| `RunInference`            | Runs the inference by invoking the interpreter. |
+| `ShowModelInfoHandler`    | Model information handler common to all models. |
+| `GetTensorArena`          | Returns a pointer to the memory region to be used for tensor allocations. |
+| `ModelPointer`            | Returns the pointer to the NN model data array. |
+| `ModelSize`               | Returns the model size. |
+| `GetOpResolver`           | Returns the reference to the TensorFlow Lite Micro operator resolver. |
+| `EnlistOperations`        | Registers the required operators with the TensorFlow Lite Micro operator resolver. |
+| `GetActivationBufferSize` | Returns the size of the tensor arena memory region. |
+
+> **Convention:** Each ML use-case must have an extension of this class and an implementation of the protected virtual
+> methods:
>
> ```C++
> virtual const uint8_t* ModelPointer() = 0;
@@ -311,25 +295,25 @@ To use this abstraction, import TensorFlowLiteMicro.hpp header.
> virtual size_t GetActivationBufferSize() = 0;
> ```
>
-> Network models have different set of operators that must be registered with
-> tflite::MicroMutableOpResolver object in the EnlistOperations method.
-> Network models could require different size of activation buffer that is returned as
-> tensor arena memory for TensorFlow Lite Micro framework by the GetTensorArena
-> and GetActivationBufferSize methods.
+> Network models have different sets of operators that must be registered with the `tflite::MicroMutableOpResolver`
+> object in the `EnlistOperations` method. Network models can also require different sizes of activation buffer, which
+> is returned as tensor arena memory for the TensorFlow Lite Micro framework by the `GetTensorArena` and
+> `GetActivationBufferSize` methods.
+>
+> **Note:** Please see the `MobileNetModel.hpp` and `MobileNetModel.cc` files from the image classification ML application
+> use-case as an example of the model base class extension.
+
+## Adding custom ML use-case
-Please see `MobileNetModel.hpp` and `MobileNetModel.cc` files from image
-classification ML application use-case as an example of the model base
-class extension.
+This section describes how to implement an additional use-case and then compile it into the binary executable to run with
+Fast Model or MPS3 FPGA board.
-## Adding custom ML use case
+It covers common major steps: The application main loop creation, a description of the NN model, and inference
+execution.
-This section describes how to implement additional use-case and compile
-it into the binary executable to run with Fast Model or MPS3 FPGA board.
-It covers common major steps: application main loop creation,
-description of the NN model, inference execution.
+In addition, a few useful examples are provided: Reading user input, printing to the console, and drawing images on
+the MPS3 LCD.
-In addition, few useful examples are provided: reading user input,
-printing into console, drawing images into MPS3 LCD.
+For example:
```tree
use_case
@@ -338,25 +322,23 @@ use_case
└── src
```
-Start with creation of a sub-directory under the `use_case` directory and
-two other directories `src` and `include` as described in
-[Software project description](#software-project-description) section:
+Start by creating a sub-directory under the `use_case` directory and two additional directories, `src` and `include`,
+as described in the [Software project description](#software-project-description) section.
## Implementing main loop
-Use-case main loop is the place to put use-case main logic. Essentially,
-it is an infinite loop that reacts on user input, triggers use-case
-conditional logic based on the input and present results back to the
-user. However, it could also be a simple logic that runs a single inference
-and then exits.
+The use-case main loop is the place to put the use-case main logic. It is an infinite loop that reacts to user input,
+triggers use-case conditional logic based on the input, and presents results back to the user.
+
+However, it could also be a simple logic that runs a single inference and then exits.
-Main loop has knowledge about the platform and has access to the
-platform components through the hardware abstraction layer (referred to as HAL).
+The main loop has knowledge about the platform and has access to the platform components through the Hardware Abstraction
+Layer (HAL).
-Create a `MainLoop.cc` file in the `src` directory (the one created under
-[Adding custom ML use case](#adding-custom-ml-use-case)), the name is not
-important. Define `main_loop` function with the signature described in
-[Main loop function](#main-loop-function):
+Start by creating a `MainLoop.cc` file in the `src` directory (the one created under
+[Adding custom ML use-case](#adding-custom-ml-use-case)). The name used is not important.
+
+Now define the `main_loop` function with the signature described in [Main loop function](#main-loop-function):
```C++
#include "hal.h"
@@ -366,23 +348,19 @@ void main_loop(hal_platform& platform) {
}
```
-The above is already a working use-case, if you compile and run it (see
-[Building custom usecase](#building-custom-use-case)) the application will start, print
-message to console and exit straight away.
+The preceding code is already a working use-case. If you compile and run it (see [Building custom use-case](#building-custom-use-case)),
+the application starts, prints a message to the console, and exits straight away.
-Now, you can start filling this function with logic.
+You can now start filling this function with logic.
## Implementing custom NN model
-Before inference could be run with a custom NN model, TensorFlow Lite
-Micro framework must learn about the operators/layers included in the
-model. Developer must register operators using `MicroMutableOpResolver`
-API.
+Before inference can be run with a custom NN model, the TensorFlow Lite Micro framework must learn about the
+operators, or layers, included in the model. You must register the operators using the `MicroMutableOpResolver` API.
-Ethos-U55 code samples project has an abstraction around TensorFlow
-Lite Micro API (see [NN model API](#nn-model-api)). Create `HelloWorldModel.hpp` in
-the use-case include sub-directory, extend Model abstract class and
-declare required methods.
+The *Ethos-U55* code samples project has an abstraction around the TensorFlow Lite Micro API (see [NN model API](#nn-model-api)).
+Create `HelloWorldModel.hpp` in the use-case `include` sub-directory, extend the `Model` abstract class,
+and then declare the required methods.
For example:
@@ -420,19 +398,20 @@ class HelloWorldModel: public Model {
#endif /* HELLOWORLDMODEL_HPP */
```
-Create `HelloWorldModel.cc` file in the `src` sub-directory and define the methods
-there. Include `HelloWorldModel.hpp` created earlier. Note that `Model.hpp`
-included in the header provides access to TensorFlow Lite Micro's operation
-resolver API.
+Create the `HelloWorldModel.cc` file in the `src` sub-directory and define the methods there. Include
+`HelloWorldModel.hpp` created earlier.
+
+> **Note:** The `Model.hpp` included in the header provides access to TensorFlow Lite Micro's operation resolver API.
-Please, see `use_case/img_class/src/MobileNetModel.cc` for
-code examples.
-If you are using a TensorFlow Lite model compiled with Vela, it is important to add
-custom Ethos-U55 operator to the operators list.
+Please refer to `use_case/img_class/src/MobileNetModel.cc` for code examples.
-The following example shows how to add the custom Ethos-U55 operator with
-TensorFlow Lite Micro framework. We will use the ARM_NPU define to exclude
-the code if the application was built without NPU support.
+If you are using a TensorFlow Lite model compiled with Vela, it is important to add a custom *Ethos-U55* operator to the
+operators list.
+
+The following example shows how to add the custom *Ethos-U55* operator with the TensorFlow Lite Micro framework. The
+`ARM_NPU` define is used to exclude this code if the application was built without NPU support.
+
+For example:
```C++
#include "HelloWorldModel.hpp"
@@ -453,53 +432,51 @@ bool arm::app::HelloWorldModel::EnlistOperations() {
}
```
-To minimize application memory footprint, it is advised to register only
-operators used by the NN model.
+To minimize the memory footprint of the application, we advise you to only register operators that are used by the NN
+model.
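
A sketch of a minimal registration is shown below. The operator set is illustrative only, and the `m_opResolver`
member name is an assumption about the `Model` base class.

```C++
bool arm::app::HelloWorldModel::EnlistOperations()
{
    /* Register only the operators the NN model actually uses. */
    this->m_opResolver.AddConv2D();
    this->m_opResolver.AddDepthwiseConv2D();
    this->m_opResolver.AddAveragePool2D();
    this->m_opResolver.AddSoftmax();
    return true;
}
```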
+
+### Define `ModelPointer` and `ModelSize` methods
+
+These functions are wrappers around the functions generated in the C++ file that contains the neural network model as
+an array. The logic that generates this C++ array from the `.tflite` file must be defined in the `usecase.cmake` file
+for this `HelloWorld` example.
-Define `ModelPointer` and `ModelSize` methods. These functions are wrappers around the
-functions generated in the C++ file containing the neural network model as an array.
-This generation the C++ array from the .tflite file, logic needs to be defined in
-the `usecase.cmake` file for this `HelloWorld` example.
+For more details on `usecase.cmake`, refer to: [Building custom use-case](#building-custom-use-case).
-For more details on `usecase.cmake`, see [Building custom use case](#building-custom-use-case).
-For details on code generation flow in general, see [Automatic file generation](./building.md#automatic-file-generation)
+For details on code generation flow in general, refer to: [Automatic file generation](./building.md#automatic-file-generation).
-The TensorFlow Lite model data is read during Model::Init() method execution, see
-`application/tensorflow-lite-micro/Model.cc` for more details. Model invokes
-`ModelPointer()` function which calls the `GetModelPointer()` function to get
-neural network model data memory address. The `GetModelPointer()` function
-will be generated during the build and could be found in the
-file `build/generated/hello_world/src/<model_file_name>.cc`. Generated
-file is added to the compilation automatically.
+The TensorFlow Lite model data is read during the `Model::Init()` method execution. Please refer to
+`application/tensorflow-lite-micro/Model.cc` for more details.
-Use `${use-case}_MODEL_TFLITE_PATH` build parameter to include custom
-model to the generation/compilation process (see [Build options](./building.md#build-options)).
+Model invokes the `ModelPointer()` function which calls the `GetModelPointer()` function to get the neural network model
+data memory address. The `GetModelPointer()` function is generated during the build and can be found in the file
+`build/generated/hello_world/src/<model_file_name>.cc`. The file generated is automatically added to the compilation.
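
As a sketch, the two wrappers can simply forward to the generated functions. The `GetModelLen()` name is an assumption
here; the exact names come out of the generation step.

```C++
#include "HelloWorldModel.hpp"

/* Provided by the C++ file generated from the .tflite model during the build. */
extern const uint8_t* GetModelPointer();
extern size_t GetModelLen();

const uint8_t* arm::app::HelloWorldModel::ModelPointer()
{
    return GetModelPointer();
}

size_t arm::app::HelloWorldModel::ModelSize()
{
    return GetModelLen();
}
```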
+
+Use the `${use_case}_MODEL_TFLITE_PATH` build parameter to include a custom model in the generation, or compilation,
+process. Please refer to: [Build options](./building.md#build-options) for further information.
## Executing inference
-To run an inference successfully it is required to have:
+To run an inference successfully, you must have:
-- a TensorFlow Lite model file
-- extended Model class
-- place to add the code to invoke inference
-- main loop function
-- and some input data.
+- A TensorFlow Lite model file.
+- An extended Model class.
+- A place to add the code to invoke inference.
+- A main loop function.
+- Some input data.
-For the hello_world example below, the input array is not populated.
-However, for real-world scenarios, this data should either be read from
-an on-board device or be prepared in the form of C++ sources before
-compilation and be baked into the application.
+For the `hello_world` example below, the input array is not populated. However, for real-world scenarios, this data
+must either be read from an on-board device or be prepared in the form of C++ sources before compilation, and then
+baked into the application.
-For example, the image classification application has extra build steps
-to generate C++ sources from the provided images with
-`generate_images_code` CMake function.
+For example, the image classification application requires extra build steps to generate C++ sources from the provided
+images with the `generate_images_code` CMake function.
-> **Note:** Check the input data type for your NN model and input array data type are the same.
-> For example, generated C++ sources for images store image data as uint8 array. For models that were
-> quantized to int8 data type, it is important to convert image data to int8 correctly before inference execution.
-> Asymmetric data type to symmetric data type conversion involves positioning zero value, i.e. subtracting an
-> offset for uint8 values. Please check image classification application source for the code example
-> (ConvertImgToInt8 function).
+> **Note:** Check that the input data type for your NN model and input array data type are the same. For example,
+> generated C++ sources for images store image data as a `uint8` array. For models that were quantized to an `int8` data
+> type, convert the image data to `int8` correctly *before* inference execution. Converting asymmetric data to symmetric
+> data involves positioning the zero value. In other words, subtracting an offset for `uint8` values. Please check the
+> image classification application source for the code example, such as the `ConvertImgToInt8` function.
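
A sketch of that conversion for a single input tensor follows; `imageData` is a hypothetical `uint8_t` source buffer,
and the kit's `ConvertImgToInt8` function remains the reference implementation.

```C++
/* Shift asymmetric uint8 data into the signed int8 domain expected by an
 * int8-quantized model by subtracting the 128 offset. */
for (size_t i = 0; i < inputTensor->bytes; ++i) {
    inputTensor->data.int8[i] = static_cast<int8_t>(static_cast<int>(imageData[i]) - 128);
}
```
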
The following code adds inference invocation to the main loop function:
@@ -555,8 +532,7 @@ The code snippet has several important blocks:
TfLiteTensor *inputTensor = model.GetInputTensor();
```
-- Copying input data to the input tensor. We assume input tensor size
- to be 1000 uint8 elements.
+- Copying input data to the input tensor. We assume the input tensor size to be 1000 `uint8` elements.
```C++
memcpy(inputTensor->data.data, inputData, 1000);
@@ -568,8 +544,8 @@ The code snippet has several important blocks:
model.RunInference();
```
-- Reading inference results: data and data size from the output
- tensor. We assume that output layer has uint8 data type.
+- Reading inference results: Data and data size from the output tensor. We assume that the output layer has a `uint8`
+ data type.
```C++
const uint32_t tensorSz = outputTensor->bytes;
@@ -577,9 +553,10 @@ The code snippet has several important blocks:
const uint8_t *outputData = tflite::GetTensorData<uint8_t>(outputTensor);
```
-Adding profiling for Ethos-U55 is easy. Include `Profiler.hpp` header and
-invoke `StartProfiling` and `StopProfiling` around inference
-execution.
+To add profiling for the *Ethos-U55*, include the `Profiler.hpp` header and invoke both `StartProfiling` and
+`StopProfiling` around inference execution.
+
+For example:
```C++
Profiler profiler{&platform, "Inference"};
@@ -593,110 +570,105 @@ profiler.PrintProfilingResult();
## Printing to console
-Provided examples already used some function to print messages to the
-console. The full list of available functions:
+The preceding examples used some functions to print messages to the console.
+
+However, for clarity, here is the full list of available functions:
- `printf`
-- `trace` - printf wrapper for tracing messages
-- `debug` - printf wrapper for debug messages
-- `info` - printf wrapper for informational messages
-- `warn` - printf wrapper for warning messages
-- `printf_err` - printf wrapper for error messages
+- `trace` - `printf` wrapper for tracing messages.
+- `debug` - `printf` wrapper for debug messages.
+- `info` - `printf` wrapper for informational messages.
+- `warn` - `printf` wrapper for warning messages.
+- `printf_err` - `printf` wrapper for error messages.
-`printf` wrappers could be switched off with `LOG_LEVEL` define:
+The `printf` wrappers can be switched off with the `LOG_LEVEL` define:
-trace (0) < debug (1) < info (2) < warn (3) < error (4).
+`trace (0) < debug (1) < info (2) < warn (3) < error (4)`.
-Default output level is info = level 2.
+> **Note:** The default output level is `info = level 2`.
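
For example, assuming the wrappers follow standard `printf` semantics:

```C++
info("Starting the use-case\n");
debug("Intermediate value: %d\n", 42);  /* Suppressed at the default LOG_LEVEL of 2. */
warn("Input image is empty\n");
printf_err("Inference failed\n");
```
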
## Reading user input from console
-Platform data acquisition module has get_input function to read keyboard
-input from the UART. It can be used as follows:
+The platform data acquisition module provides the `get_input` function to read keyboard input from the UART. It can be
+used as follows:
```C++
char ch_input[128];
platform.data_acq->get_input(ch_input, sizeof(ch_input));
```
-The function will block until user provides an input.
+The function blocks until the user provides an input.
## Output to MPS3 LCD
-Platform presentation module has functions to print text or an image to
-the board LCD:
+The platform presentation module has functions to print text or an image to the board LCD:
- `present_data_text`
- `present_data_image`
Text presentation function has the following signature:
-- `const char* str`: string to print.
-- `const uint32_t str_sz`: string size.
-- `const uint32_t pos_x`: x coordinate of the first letter in pixels.
-- `const uint32_t pos_y`: y coordinate of the first letter in pixels.
-- `const uint32_t alow_multiple_lines`: signals whether the text is
- allowed to span multiple lines on the screen, or should be truncated
- to the current line.
+- `const char* str`: The string to print.
+- `const uint32_t str_sz`: The string size.
+- `const uint32_t pos_x`: The x coordinate of the first letter in pixels.
+- `const uint32_t pos_y`: The y coordinate of the first letter in pixels.
+- `const uint32_t alow_multiple_lines`: Signals whether the text is allowed to span multiple lines on the screen, or
+ must be truncated to the current line.
-This function does not wrap text, if the given string cannot fit on the
-screen it will go outside the screen boundary.
+This function does not wrap text. If the given string cannot fit on the screen, it goes outside the screen boundary.
-Example that prints "Hello world" on the LCD:
+Here is an example that prints "Hello world" on the LCD screen:
```C++
std::string hello("Hello world");
platform.data_psn->present_data_text(hello.c_str(), hello.size(), 10, 35, 0);
```
-Image presentation function has the following signature:
+The image presentation function has the following signature:
-- `uint8_t* data`: image data pointer;
-- `const uint32_t width`: image width;
-- `const uint32_t height`: image height;
-- `const uint32_t channels`: number of channels. Only 1 and 3 channels are supported now.
-- `const uint32_t pos_x`: x coordinate of the first pixel.
-- `const uint32_t pos_y`: y coordinate of the first pixel.
-- `const uint32_t downsample_factor`: the factor by which the image is to be down sampled.
+- `uint8_t* data`: The image data pointer.
+- `const uint32_t width`: The image width.
+- `const uint32_t height`: The image height.
+- `const uint32_t channels`: The number of channels. Only 1 and 3 channels are supported now.
+- `const uint32_t pos_x`: The x coordinate of the first pixel.
+- `const uint32_t pos_y`: The y coordinate of the first pixel.
+- `const uint32_t downsample_factor`: The factor by which the image is to be downsampled.
-For example, the following code snippet visualizes an input tensor data
-for MobileNet v2 224 (down sampling it twice):
+For example, the following code snippet visualizes the input tensor data for `MobileNet v2 224`, downsampling it by a
+factor of two:
```C++
platform.data_psn->present_data_image((uint8_t *) inputTensor->data.data, 224, 224, 3, 10, 35, 2);
```
-Please see [hal-api](#hal-api) section for other data presentation
-functions.
+Please refer to the [Hardware Abstraction Layer (HAL) API](#hardware-abstraction-layer-hal-api) section for more data presentation functions.
-## Building custom use case
+## Building custom use-case
-There is one last thing to do before building and running a use-case
-application: create a `usecase.cmake` file in the root of your use-case,
-the name of the file is not important.
+There is one last thing to do before building and running a use-case application. You must create a `usecase.cmake`
+file in the root of your use-case. The name of the file is not important.
> **Convention:** The build system searches for a CMake file in each use-case directory and includes it in the build
-> flow. This file could be used to specify additional application specific build options, add custom build steps or
-> override standard compilation and linking flags.
-> Use `USER_OPTION` function to add additional build option. Prefix variable name with `${use_case}` (use-case name) to
-> avoid names collisions with other CMake variables.
-> Some useful variable names visible in use-case CMake file:
+> flow. This file can be used to specify additional application-specific build options, add custom build steps, or
+> override standard compilation and linking flags. Use the `USER_OPTION` function to add further build options. Prefix
+> the variable name with `${use_case}`, the use-case name, to avoid name collisions with other CMake variables. Here
+> are some useful variable names visible in use-case CMake file:
>
-> - `DEFAULT_MODEL_PATH` – default model path to use if use-case specific `${use_case}_MODEL_TFLITE_PATH` is not set
->in the build arguments.
->- `TARGET_NAME` – name of the executable.
-> - `use_case` – name of the current use-case.
-> - `UC_SRC` – list of use-case sources.
-> - `UC_INCLUDE` – path to the use-case headers.
-> - `ETHOS_U55_ENABLED` – flag indicating if the current build supports Ethos-U55.
-> - `TARGET_PLATFORM` – Target platform being built for.
+> - `DEFAULT_MODEL_PATH` – The default model path to use if use-case specific `${use_case}_MODEL_TFLITE_PATH` is not set
+> in the build arguments.
+> - `TARGET_NAME` – The name of the executable.
+> - `use_case` – The name of the current use-case.
+> - `UC_SRC` – A list of use-case sources.
+> - `UC_INCLUDE` – The path to the use-case headers.
+> - `ETHOS_U55_ENABLED` – The flag indicating if the current build supports Ethos-U55.
+> - `TARGET_PLATFORM` – The target platform being built for.
> - `TARGET_SUBSYSTEM` – If target platform supports multiple subsystems, this is the name of the subsystem.
> - All standard build options.
-> - `CMAKE_CXX_FLAGS` and `CMAKE_C_FLAGS` – compilation flags.
-> - `CMAKE_EXE_LINKER_FLAGS` – linker flags.
+> - `CMAKE_CXX_FLAGS` and `CMAKE_C_FLAGS` – The compilation flags.
+> - `CMAKE_EXE_LINKER_FLAGS` – The linker flags.
-For the hello world use-case it will be enough to create
-`helloworld.cmake` file and set DEFAULT_MODEL_PATH:
+For the `hello_world` use-case, it is enough to create a `helloworld.cmake` file and set the `DEFAULT_MODEL_PATH`, like
+so:
```cmake
if (ETHOS_U55_ENABLED EQUAL 1)
@@ -720,13 +692,12 @@ generate_tflite_code(
)
```
-This ensures that the model path pointed by `${use_case}_MODEL_TFLITE_PATH` is converted to a C++ array and is picked
-up by the build system. More information on auto-generations is available under section
+This ensures that the model path pointed to by `${use_case}_MODEL_TFLITE_PATH` is converted to a C++ array and is picked
+up by the build system. More information on auto-generation is available under the section:
[Automatic file generation](./building.md#Automatic-file-generation).
-To build you application follow the general instructions from
-[Add Custom inputs](./building.md#add-custom-inputs) and specify the name of the use-case in the
-build command:
+To build your application, follow the general instructions from [Add Custom inputs](./building.md#add-custom-inputs) and
+then specify the name of the use-case in the build command, like so:
```commandline
cmake .. \
@@ -736,8 +707,7 @@ cmake .. \
-DCMAKE_TOOLCHAIN_FILE=scripts/cmake/toolchains/bare-metal-armclang.cmake
```
-As a result, `ethos-u-hello_world.axf` should be created, MPS3 build
-will also produce `sectors/hello_world` directory with binaries and
-`sectors/images.txt` to be copied to the board MicroSD card.
+As a result, the file `ethos-u-hello_world.axf` is created. The MPS3 build also produces the `sectors/hello_world`
+directory with binaries and the file `sectors/images.txt` to be copied to the MicroSD card on the board.
-Next section of the documentation: [Testing and benchmarking](testing_benchmarking.md).
+The next section of the documentation covers [Testing and benchmarking](testing_benchmarking.md).