author    Isabella Gottardi <isabella.gottardi@arm.com>  2021-04-21 16:18:29 +0100
committer Alexander Efremov <alexander.efremov@arm.com>  2021-04-22 18:40:06 +0000
commit    4e42268805a52849c06e0e5a63ec8b38b4b1f5b8 (patch)
tree      cdadb5cc3f8b499d596da5d14e68730a64fac1c7
parent    e03f1af5dcbe7f448cc63456ece6302590c38387 (diff)
download  ml-embedded-evaluation-kit-4e42268805a52849c06e0e5a63ec8b38b4b1f5b8.tar.gz
MLECO-1884: Fix documentation error: example for adding new model for a use case
Signed-off-by: Isabella Gottardi <isabella.gottardi@arm.com>
Change-Id: I42269fcd9aa03a94f057ab0b9f8cf7274c476577
-rw-r--r--  docs/sections/customizing.md | 197
1 file changed, 105 insertions(+), 92 deletions(-)
diff --git a/docs/sections/customizing.md b/docs/sections/customizing.md
index 8781855..841923b 100644
--- a/docs/sections/customizing.md
+++ b/docs/sections/customizing.md
@@ -16,7 +16,7 @@
- [Building custom use case](#building-custom-use-case)
This section describes how to implement a custom Machine Learning
-application running on Fast Model FVP or on the Arm MPS3 FPGA prototyping board.
+application running on `Arm® Corstone™-300` based FVP or on the Arm® MPS3 FPGA prototyping board.
The Arm® Ethos™-U55 code sample software project offers a simple way to incorporate
additional use-case code into the existing infrastructure and provides a build
@@ -27,7 +27,7 @@ and code implementation conventions.
The following sign will indicate the important conventions to apply:
> **Convention:** The code is developed using C++11 and C99 standards.
-This is governed by TensorFlow Lite for Microcontrollers framework.
+> This is governed by the TensorFlow Lite for Microcontrollers framework.
## Software project description
@@ -55,15 +55,15 @@ As mentioned in the [Repository structure](../documentation.md#repository-struct
Where `source` contains C/C++ sources for the platform and ML applications.
Common code related to the Ethos-U55 code samples software
-framework resides in the *application* sub-folder and ML application specific logic (use-cases)
-sources are in the *use-case* subfolder.
+framework resides in the `application` sub-folder, and ML application-specific logic (use-cases)
+sources are in the `use-case` sub-folder.
> **Convention**: Separate use-cases must be organized in sub-folders under the use-case folder.
-The name of the directory is used as a name for this use-case and could be provided
-as a `USE_CASE_BUILD` parameter value.
-It is expected by the build system that sources for the use-case are structured as follows:
-headers in an include directory, C/C++ sources in a src directory.
-For example:
+> The name of the directory is used as the name of the use-case and can be provided
+> as a `USE_CASE_BUILD` parameter value.
+> The build system expects the use-case sources to be structured as follows:
+> headers in an `include` directory, C/C++ sources in a `src` directory.
+> For example:
>
>```tree
>use_case
@@ -77,61 +77,60 @@ For example:
## HAL API
The hardware abstraction layer is represented by the following interfaces.
-To access them, include hal.h header.
+To access them, include the `hal.h` header.
-- *hal_platfrom* structure:\
+- `hal_platform` structure:
Structure that defines a platform context to be used by the application.
- | Attribute name | Description |
- |--------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------|
- | inited | Initialization flag. Is set after the platfrom_init() function is called. |
- | plat_name | Platform name. it is set to "mps3-bare" for MPS3 build and "FVP" for Fast Model build. |
- | data_acq | Pointer to data acquisition module responsible for user interaction and other data collection for the application logic. |
+ | Attribute name | Description |
+ |--------------------|----------------------------------------------------------------------------------------------|
+ | inited | Initialization flag. Is set after the platform_init() function is called. |
+ | plat_name | Platform name. It is set to "mps3-bare" for MPS3 build and "FVP" for Fast Model build. |
+ | data_acq | Pointer to data acquisition module responsible for user interaction and other data collection for the application logic. |
| data_psn | Pointer to data presentation module responsible for data output through components available in the selected platform: LCD -- for MPS3, console -- for Fast Model. |
- | timer | Pointer to platform timer implementation (see platform_timer) |
- | platform_init | Pointer to platform initialization function. |
- | platform_release | Pointer to platform release function |
+ | timer | Pointer to platform timer implementation (see platform_timer) |
+ | platform_init | Pointer to platform initialization function. |
+ | platform_release | Pointer to platform release function |
-- *hal_init* function:\
+- `hal_init` function:
Initializes the HAL structure based on compile time config. This
should be called before any other function in this API.
| Parameter name | Description |
|------------------|-----------------------------------------------------|
- | platform | Pointer to a pre-allocated *hal_platfrom* struct. |
+ | platform | Pointer to a pre-allocated `hal_platform` struct. |
| data_acq | Pointer to a pre-allocated data acquisition module |
| data_psn | Pointer to a pre-allocated data presentation module |
| timer | Pointer to a pre-allocated timer module |
| return | zero if successful, error code otherwise |
-- *hal_platform_init* function:\
+- `hal_platform_init` function:
Initializes the HAL platform and all the modules on the platform the
application requires to run.
| Parameter name | Description |
| ----------------| ------------------------------------------------------------------- |
- | platform | Pointer to a pre-allocated and initialized *hal_platfrom* struct. |
+ | platform | Pointer to a pre-allocated and initialized `hal_platform` struct. |
| return | zero if successful, error code otherwise. |
-- *hal_platform_release* function\
+- `hal_platform_release` function:
Releases the HAL platform. This should release any acquired resources.
| Parameter name | Description |
| ----------------| ------------------------------------------------------------------- |
- | platform | Pointer to a pre-allocated and initialized *hal_platfrom* struct. |
+ | platform | Pointer to a pre-allocated and initialized `hal_platform` struct. |
-- *data_acq_module* structure:\
- Structure to encompass the data acquisition module and it's
- methods.
+- `data_acq_module` structure:
+ Structure to encompass the data acquisition module and its methods.
| Attribute name | Description |
|----------------|----------------------------------------------------|
| inited | Initialization flag. Is set after the system_init() function is called. |
- | system_name | Channel name. It is set to "UART" for MPS3 build and fastmodel builds. |
- | system_init | Pointer to data acquisition module initialization function. The pointer is set according to the platform selected during the build. This function is called by the platforminitialization routines. |
+ | system_name | Channel name. It is set to "UART" for MPS3 build and fastmodel builds. |
+ | system_init | Pointer to data acquisition module initialization function. The pointer is set according to the platform selected during the build. This function is called by the platform initialization routines. |
| get_input | Pointer to a function reading user input. The pointer is set according to the selected platform during the build. For MPS3 and fastmodel environments, the function reads data from UART. |
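
As an illustration, the application logic might poll this module for user input once the platform is up. A minimal sketch, assuming `get_input` fills a caller-provided character buffer (the exact signature and buffer size are illustrative assumptions):

```c++
/* Sketch: reading user input through the data acquisition module.
 * The get_input signature and buffer size are assumed for illustration. */
char userInput[64] = {0};
platform.data_acq->get_input(userInput, sizeof(userInput));
```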
-- *data_psn_module* structure:\
+- `data_psn_module` structure:
Structure to encompass the data presentation module and its methods.
| Attribute name | Description |
@@ -144,19 +143,21 @@ To access them, include hal.h header.
| present_box | Pointer to a function to draw a rectangle. The pointer is set according to the selected platform during the build. For MPS3, the image will be drawn on the LCD; for fastmodel image summary will be printed in the UART. |
| clear | Pointer to a function to clear the output. The pointer is set according to the selected platform during the build. For MPS3, the function will clear the LCD; for fastmodel will do nothing. |
| set_text_color | Pointer to a function to set text color for the next call of present_data_text() function. The pointer is set according to the selected platform during the build. For MPS3, the function will set the color for the text printed on the LCD; for fastmodel -- will do nothing. |
- | set_led | Pointer to a function controlling an LED (led_num) with on/off |
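
As an illustration, presenting a text string through this module might look like the sketch below. The argument list of `present_data_text` (string, length, x/y position, multi-line flag) and the `COLOR_GREEN` constant are assumptions for illustration:

```c++
/* Sketch: printing a message on the platform's output device.
 * The present_data_text argument list and colour constant are assumed. */
const char msg[] = "Hello";
platform.data_psn->set_text_color(COLOR_GREEN);
platform.data_psn->present_data_text(msg, sizeof(msg) - 1, 10, 20, false);
```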
-- *platform_timer* structure:\
+- `platform_timer` structure:
Structure to hold a platform-specific timer implementation.
- | Attribute name | Description |
- |--------------------|------------------------------------------------|
- | inited | Initialization flag. It is set after the timer is initialized by the *hal_platform_init* function. |
- | reset | Pointer to a function to reset a timer. |
- | get_time_counter | Pointer to a function to get current time counter. |
- | get_duration_ms | Pointer to a function to calculate duration between two time-counters in milliseconds. |
- | get_duration_us | Pointer to a function to calculate duration between two time-counters in microseconds |
+ | Attribute name | Description |
+ |---------------------|------------------------------------------------|
+ | inited | Initialization flag. It is set after the timer is initialized by the `hal_platform_init` function. |
+ | reset | Pointer to a function to reset a timer. |
+ | get_time_counter | Pointer to a function to get current time counter. |
+ | get_duration_ms | Pointer to a function to calculate duration between two time-counters in milliseconds. |
+ | get_duration_us | Pointer to a function to calculate duration between two time-counters in microseconds. |
+ | get_cpu_cycle_diff | Pointer to a function to calculate duration between two time-counters in Cortex-M55 cycles. |
| get_npu_cycle_diff | Pointer to a function to calculate duration between two time-counters in Ethos-U55 cycles. Available only when project is configured with ETHOS_U55_ENABLED set. |
+ | start_profiling | Wraps `get_time_counter` function with additional profiling initialisation, if required. |
+ | stop_profiling | Wraps `get_time_counter` function along with additional instructions when profiling ends, if required. |
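
A short sketch of timing a section of code with these fields (the `time_counter` type and the pointer-based `get_duration_ms` arguments are assumptions for illustration):

```c++
/* Sketch: measuring elapsed time with the platform timer. */
platform.timer->reset();
time_counter start = platform.timer->get_time_counter();

/* ... code under measurement ... */

time_counter end = platform.timer->get_time_counter();
time_t elapsedMs = platform.timer->get_duration_ms(&start, &end);
info("Operation took %ld ms\n", (long)elapsedMs);
```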
Example of the API initialization in the main function:
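
A plausible sketch of that initialization, assembled from the function descriptions above (module variable names are illustrative):

```c++
#include "hal.h"

int main() {
    hal_platform platform;
    data_acq_module data_acq;
    data_psn_module data_psn;
    platform_timer timer;

    /* Initialize the HAL structure with the platform modules... */
    if (0 != hal_init(&platform, &data_acq, &data_psn, &timer)) {
        return 1;
    }
    /* ...then bring up the platform and the modules it needs. */
    if (0 != hal_platform_init(&platform)) {
        return 1;
    }

    main_loop(platform);

    /* Release acquired resources before exiting. */
    hal_platform_release(&platform);
    return 0;
}
```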
@@ -211,10 +212,10 @@ void main_loop(hal_platform& platform){
Application context could be used as a holder for state between main
loop iterations. Include `AppContext.hpp` to use the `ApplicationContext` class.
-| Method name | Description |
-|--------------|-----------------------------------------------------------------|
-| Set | Saves given value as a named attribute in the context. |
-| Get | Gets the saved attribute from the context by the given name. |
+| Method name | Description |
+|--------------|------------------------------------------------------------------|
+| Set | Saves given value as a named attribute in the context. |
+| Get | Gets the saved attribute from the context by the given name. |
| Has | Checks if an attribute with a given name exists in the context. |
For example:
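
A minimal sketch of this pattern, assuming the accessors are templated on the stored type (the attribute names here are hypothetical):

```c++
#include <cstdint>

#include "AppContext.hpp"

void main_loop(hal_platform& platform) {
    arm::app::ApplicationContext caseContext;

    /* Save state under named attributes... */
    caseContext.Set<hal_platform&>("platform", platform);
    caseContext.Set<uint32_t>("clipIndex", 0);

    /* ...and read it back in a later iteration. */
    if (caseContext.Has("clipIndex")) {
        auto idx = caseContext.Get<uint32_t>("clipIndex");
        caseContext.Set<uint32_t>("clipIndex", idx + 1);
    }
}
```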
@@ -249,7 +250,8 @@ system timing information.
| StopProfiling | Stops profiling and records the ending timing data. |
| StopProfilingAndReset | Stops the profiling and internally resets the platform timers. |
| Reset | Resets the profiler and clears all collected data. |
-| GetAllResultsAndReset | Gets the results as string and resets the profiler. |
+| GetAllResultsAndReset | Gets all the results as a string and resets the profiler. |
+| PrintProfilingResult | Prints collected profiling results and resets the profiler. |
| SetName | Sets the profiler name. |
Usage example:
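
A minimal sketch of such usage, assuming a `StartProfiling` counterpart to `StopProfiling`, an already-initialized model, and illustrative constructor arguments:

```c++
#include "Profiler.hpp"

/* Sketch: profiling a single inference run. */
arm::app::Profiler profiler{&platform, "hello_world"};

profiler.StartProfiling("Inference");
model.RunInference();
profiler.StopProfiling();

/* Print the collected results and reset the profiler. */
profiler.PrintProfilingResult();
```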
@@ -276,38 +278,45 @@ To use this abstraction, import TensorFlowLiteMicro.hpp header.
| Method name | Description |
|--------------------------|------------------------------------------------------------------------------|
-| GetInputTensor | Returns the pointer to the model\'s input tensor. |
-| GetOutputTensor | Returns the pointer to the model\'s output tensor |
-| GetType | Returns the model's data type |
-| GetInputShape | Return the pointer to the model\'s input shape |
-| GetOutputShape | Return the pointer to the model\'s output shape |
-| LogTensorInfo | Logs the tensor information to stdout for the given tensor pointer: tensor name, tensor address, tensor type, tensor memory size and quantization params. |
-| LogInterpreterInfo | Logs the interpreter information to stdout. |
-| Init | Initializes the TensorFlow Lite Micro framework, allocates require memory for the model. |
+| GetInputTensor | Returns the pointer to the model's input tensor. |
+| GetOutputTensor | Returns the pointer to the model's output tensor. |
+| GetType | Returns the model's data type. |
+| GetInputShape | Returns the pointer to the model's input shape. |
+| GetOutputShape | Returns the pointer to the model's output shape. |
+| GetNumInputs | Returns the number of input tensors the model has. |
+| GetNumOutputs | Returns the number of output tensors the model has. |
+| LogTensorInfo | Logs the tensor information to stdout for the given tensor pointer: tensor name, tensor address, tensor type, tensor memory size and quantization params. |
+| LogInterpreterInfo | Logs the interpreter information to stdout. |
+| Init | Initializes the TensorFlow Lite Micro framework, allocates required memory for the model. |
+| GetAllocator | Gets the allocator pointer for the instance. |
| IsInited | Checks if this model object has been initialized. |
| IsDataSigned | Checks if the model uses signed data type. |
| RunInference | Runs the inference (invokes the interpreter). |
-| GetOpResolver() | Returns the reference to the TensorFlow Lite Micro operator resolver. |
-| EnlistOperations | Registers required operators with TensorFlow Lite Micro operator resolver. |
+| ShowModelInfoHandler | Model information handler common to all models. |
| GetTensorArena | Returns the pointer to the memory region to be used for tensor allocations. |
+| ModelPointer | Returns the pointer to the NN model data array. |
+| ModelSize | Returns the model size. |
+| GetOpResolver | Returns the reference to the TensorFlow Lite Micro operator resolver. |
+| EnlistOperations | Registers required operators with TensorFlow Lite Micro operator resolver. |
| GetActivationBufferSize | Returns the size of the tensor arena memory region. |
-> **Convention**: Each ML use-case must have extension of this class and implementation of the protected virtual methods:
+> **Convention:** Each ML use-case must have an extension of this class and an implementation of the protected virtual methods:
>
>```c++
->virtual const tflite::MicroOpResolver& GetOpResolver() = 0;
->virtual bool EnlistOperations() = 0;
->virtual uint8_t* GetTensorArena() = 0;
->virtual size_t GetActivationBufferSize() = 0;
+> virtual const uint8_t* ModelPointer() = 0;
+> virtual size_t ModelSize() = 0;
+> virtual const tflite::MicroOpResolver& GetOpResolver() = 0;
+> virtual bool EnlistOperations() = 0;
+> virtual size_t GetActivationBufferSize() = 0;
>```
>
->Network models have different set of operators that must be registered with
-tflite::MicroMutableOpResolver object in the EnlistOperations method.
-Network models could require different size of activation buffer that is returned as
-tensor arena memory for TensorFlow Lite Micro framework by the GetTensorArena
-and GetActivationBufferSize methods.
+> Network models have different sets of operators that must be registered with
+> the tflite::MicroMutableOpResolver object in the EnlistOperations method.
+> Network models may require different activation buffer sizes, and the buffer is returned as
+> tensor arena memory for the TensorFlow Lite Micro framework by the GetTensorArena
+> and GetActivationBufferSize methods.
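
As an illustration, a hypothetical `MyModel` class using only a few layer types might implement these two methods as follows; the operator set is illustrative, and `_m_opResolver` is assumed to be a `MicroMutableOpResolver` member as in the header example later in this document:

```c++
/* Sketch: registering only the operators a hypothetical model needs. */
bool arm::app::MyModel::EnlistOperations()
{
    this->_m_opResolver.AddFullyConnected();
    this->_m_opResolver.AddRelu();
    this->_m_opResolver.AddSoftmax();
    return true;
}

const tflite::MicroOpResolver& arm::app::MyModel::GetOpResolver()
{
    return this->_m_opResolver;
}
```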
-Please see MobileNetModel.hpp and MobileNetModel.cc files from image
+Please see the `MobileNetModel.hpp` and `MobileNetModel.cc` files from the image
classification ML application use-case as an example of the model base
class extension.
@@ -328,8 +337,8 @@ use_case
└── src
```
-Start with creation of a sub-directory under the *use_case* directory and
-two other directories *src* and *include* as described in
+Start by creating a sub-directory under the `use_case` directory and
+two other directories, `src` and `include`, as described in the
[Software project description](#software-project-description) section:
## Implementing main loop
@@ -343,9 +352,9 @@ and then exits.
Main loop has knowledge about the platform and has access to the
platform components through the hardware abstraction layer (referred to as HAL).
-Create a *MainLoop.cc* file in the *src* directory (the one created under
+Create a `MainLoop.cc` file in the `src` directory (the one created under
[Adding custom ML use case](#adding-custom-ml-use-case)); the name is not
-important. Define *main_loop* function with the signature described in
+important. Define the `main_loop` function with the signature described in
[Main loop function](#main-loop-function):
```c++
void main_loop(hal_platform& platform) {
    /* Use-case logic to be implemented here. */
}
```
@@ -366,17 +375,20 @@ Now, you can start filling this function with logic.
Before inference can be run with a custom NN model, the TensorFlow Lite
Micro framework must learn about the operators/layers included in the
-model. Developer must register operators using *MicroMutableOpResolver*
+model. The developer must register operators using the `MicroMutableOpResolver`
API.
The Ethos-U55 code samples project has an abstraction around the TensorFlow
-Lite Micro API (see [NN model API](#nn-model-api)). Create *HelloWorld.hpp* in
+Lite Micro API (see [NN model API](#nn-model-api)). Create `HelloWorldModel.hpp` in
the use-case include sub-directory, extend the Model abstract class and
declare the required methods.
For example:
```c++
+#ifndef HELLOWORLDMODEL_HPP
+#define HELLOWORLDMODEL_HPP
+
#include "Model.hpp"
namespace arm {
@@ -396,22 +408,24 @@ class HelloWorldModel: public Model {
private:
/* Maximum number of individual operations that can be enlisted. */
- static constexpr int _m_maxOpCnt = 5;
+ static constexpr int _ms_maxOpCnt = 5;
/* A mutable op resolver instance. */
- tflite::MicroMutableOpResolver<_maxOpCnt> _m_opResolver;
+ tflite::MicroMutableOpResolver<_ms_maxOpCnt> _m_opResolver;
};
} /* namespace app */
} /* namespace arm */
+
+#endif /* HELLOWORLDMODEL_HPP */
```
-Create `HelloWorld.cc` file in the `src` sub-directory and define the methods
+Create a `HelloWorldModel.cc` file in the `src` sub-directory and define the methods
there. Include `HelloWorldModel.hpp` created earlier. Note that `Model.hpp`
included in the header provides access to TensorFlow Lite Micro's operation
resolver API.
-Please, see `use_case/image_classifiaction/src/MobileNetModel.cc` for
-code examples.\
+Please, see `use_case/img_class/src/MobileNetModel.cc` for
+code examples.
If you are using a TensorFlow Lite model compiled with Vela, it is important to add
the custom Ethos-U55 operator to the operators list.
@@ -424,15 +438,15 @@ the code if the application was built without NPU support.
bool arm::app::HelloWorldModel::EnlistOperations() {
- #if defined(ARM_NPU)
- if (kTfLiteOk == this->_opResolver.AddEthosU()) {
+#if defined(ARM_NPU)
+ if (kTfLiteOk == this->_m_opResolver.AddEthosU()) {
info("Added %s support to op resolver\n",
tflite::GetString_ETHOSU());
} else {
printf_err("Failed to add Arm NPU support to op resolver.");
return false;
}
- #endif /* ARM_NPU */
+#endif /* ARM_NPU */
return true;
}
@@ -449,15 +463,15 @@ the `usecase.cmake` file for this `HelloWorld` example.
For more details on `usecase.cmake`, see [Building custom use case](#building-custom-use-case).
For details on the code generation flow in general, see [Automatic file generation](./building.md#Automatic-file-generation).
-The TensorFlow Lite model data is read during Model::init() method execution, see
-*application/tensorflow-lite-micro/Model.cc* for more details. Model invokes
+The TensorFlow Lite model data is read during `Model::Init()` method execution; see
+`application/tensorflow-lite-micro/Model.cc` for more details. Model invokes
`ModelPointer()` function which calls the `GetModelPointer()` function to get
the neural network model data memory address. The `GetModelPointer()` function
will be generated during the build and can be found in the
file `build/generated/hello_world/src/<model_file_name>.cc`. The generated
file is added to the compilation automatically.
-Use \${use-case}_MODEL_TFLITE_PATH build parameter to include custom
+Use the `${use-case}_MODEL_TFLITE_PATH` build parameter to include a custom
model in the generation/compilation process (see [Build options](./building.md/#build-options)).
## Executing inference
@@ -477,15 +491,14 @@ compilation and be baked into the application.
For example, the image classification application has extra build steps
to generate C++ sources from the provided images with
-*generate_images_code* CMake function.
-
-> **Note:**
-Check the input data type for your NN model and input array data type are the same.
-For example, generated C++ sources for images store image data as uint8 array. For models that were
-quantized to int8 data type, it is important to convert image data to int8 correctly before inference execution.
-Asymmetric data type to symmetric data type conversion involves positioning zero value, i.e. subtracting an
-offset for uint8 values. Please check image classification application source for the code example
-(ConvertImgToInt8 function).
+the `generate_images_code` CMake function.
+
+> **Note:** Check that the input data type for your NN model and the input array data type are the same.
+> For example, generated C++ sources for images store image data as a uint8 array. For models that were
+> quantized to the int8 data type, it is important to convert image data to int8 correctly before inference execution.
+> Asymmetric to symmetric data type conversion involves re-positioning the zero value, i.e. subtracting an
+> offset from uint8 values. Please check the image classification application source for a code example
+> (the ConvertImgToInt8 function).
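
A minimal sketch of that conversion, done in place over a generated image array (compare with the `ConvertImgToInt8` function; this standalone helper is illustrative):

```c++
#include <cstddef>
#include <cstdint>

/* Sketch: shift uint8 [0..255] image data to int8 [-128..127] by
 * subtracting the 128 zero-point offset, in place. */
static void ConvertToInt8(uint8_t* data, size_t size)
{
    for (size_t i = 0; i < size; ++i) {
        data[i] = static_cast<uint8_t>(static_cast<int32_t>(data[i]) - 128);
    }
    /* The same buffer can now be read through an int8_t* view. */
}
```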
The following code adds inference invocation to the main loop function: