# Inference Runner Code Sample

- [Introduction](#introduction)
  - [Prerequisites](#prerequisites)
- [Building the Code Samples application from sources](#building-the-code-samples-application-from-sources)
  - [Build options](#build-options)
  - [Build process](#build-process)
  - [Add custom model](#add-custom-model)
- [Setting-up and running Ethos-U55 code sample](#setting-up-and-running-ethos-u55-code-sample)
  - [Setting up the Ethos-U55 Fast Model](#setting-up-the-ethos-u55-fast-model)
  - [Starting Fast Model simulation](#starting-fast-model-simulation)
  - [Running Inference Runner](#running-inference-runner)
- [Inference Runner processing information](#inference-runner-processing-information)

## Introduction

This document describes the process of setting up and running the Arm® Ethos™-U55 NPU Inference Runner.
The Inference Runner is intended for quickly checking profiling results for any desired network, provided it has been
processed by the Vela compiler.

A simple model is provided with the Inference Runner as an example, but it is expected that the user will replace this
model with one they wish to profile. See [Add custom model](./inference_runner.md#Add-custom-model) for more details.

The Inference Runner populates all input tensors of the provided model with randomly generated data and then performs
an inference. The profiling results are then displayed in the console.

The use case code can be found in the [source/use_case/inference_runner](../../source/use_case/inference_runner) directory.

### Prerequisites

See [Prerequisites](../documentation.md#prerequisites)

## Building the Code Samples application from sources

### Build options

In addition to the build options already specified in the main documentation, the Inference Runner use case adds the following:

- `inference_runner_MODEL_TFLITE_PATH` - Path to the NN model file in TFLite format. The model will be processed and
  included into the application axf file. The default value points to one of the delivered set of models.
  Note that the parameters `TARGET_PLATFORM` and `ETHOS_U55_ENABLED` must be aligned with the chosen model, i.e.:
  - if `ETHOS_U55_ENABLED` is set to `On` or `1`, the NN model is assumed to be optimized. The model will naturally
    fall back to the Arm® Cortex®-M CPU if an unoptimized model is supplied.
  - if `ETHOS_U55_ENABLED` is set to `Off` or `0`, the NN model is assumed to be unoptimized. Supplying an optimized
    model in this case will result in a runtime error.

- `inference_runner_ACTIVATION_BUF_SZ`: The intermediate/activation buffer size reserved for the NN model. By
    default, it is set to 2MiB, which should be enough for most models (see the example after this list for how to
    override it).
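
For example (a minimal sketch; the 4 MiB value below is an arbitrary illustration, not a recommendation), the buffer
size in bytes can be overridden at configuration time:

```commandline
# 0x00400000 (4 MiB) is an arbitrary example size; adjust for your model
cmake \
    -DTARGET_PLATFORM=mps3 \
    -DTARGET_SUBSYSTEM=sse-300 \
    -DCMAKE_TOOLCHAIN_FILE=scripts/cmake/bare-metal-toolchain.cmake \
    -Dinference_runner_ACTIVATION_BUF_SZ=0x00400000 \
    -DUSE_CASE_BUILD=inference_runner ..
```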

To build **only** the Inference Runner example application, add `-DUSE_CASE_BUILD=inference_runner` to the `cmake`
command line specified in [Building](../documentation.md#Building).

### Build process

> **Note:** This section describes the process for configuring the build for the `MPS3: SSE-300` platform. To
> configure the build for a different target platform, see the [Building](../documentation.md#Building) section.

Create a build directory and navigate inside:

```commandline
mkdir build_inference_runner && cd build_inference_runner
```

On Linux, execute the following command to build **only** the Inference Runner application to run on the Ethos-U55
Fast Model, providing only the mandatory arguments for the CMake configuration:

```commandline
cmake \
    -DTARGET_PLATFORM=mps3 \
    -DTARGET_SUBSYSTEM=sse-300 \
    -DCMAKE_TOOLCHAIN_FILE=./scripts/cmake/bare-metal-toolchain.cmake \
    -DUSE_CASE_BUILD=inference_runner ..
```

For Windows, add `-G "MinGW Makefiles"`:

```commandline
cmake \
    -G "MinGW Makefiles" \
    -DTARGET_PLATFORM=mps3 \
    -DTARGET_SUBSYSTEM=sse-300 \
    -DCMAKE_TOOLCHAIN_FILE=./scripts/cmake/bare-metal-toolchain.cmake \
    -DUSE_CASE_BUILD=inference_runner ..
```

The `CMAKE_TOOLCHAIN_FILE` option points to the toolchain-specific file that sets the compiler and platform-specific
parameters.

To configure a build that can be debugged using Arm-DS, we can just specify
the build type as `Debug`:

```commandline
cmake \
    -DTARGET_PLATFORM=mps3 \
    -DTARGET_SUBSYSTEM=sse-300 \
    -DCMAKE_TOOLCHAIN_FILE=scripts/cmake/bare-metal-toolchain.cmake \
    -DCMAKE_BUILD_TYPE=Debug \
    -DUSE_CASE_BUILD=inference_runner ..
```

To configure a build that can be debugged using a tool that only supports
DWARF format 3 (for example, Arm's Model Debugger), we can use:

```commandline
cmake \
    -DTARGET_PLATFORM=mps3 \
    -DTARGET_SUBSYSTEM=sse-300 \
    -DCMAKE_TOOLCHAIN_FILE=scripts/cmake/bare-metal-toolchain.cmake \
    -DCMAKE_BUILD_TYPE=Debug \
    -DARMCLANG_DEBUG_DWARF_LEVEL=3 \
    -DUSE_CASE_BUILD=inference_runner ..
```

> **Note:** If building for different Ethos-U55 configurations, see
> [Configuring build for different Arm Ethos-U55 configurations](../sections/building.md#Configuring-build-for-different-Arm-Ethos-U55-configurations).

If the TensorFlow source tree is not in its default expected location,
set the path using `TENSORFLOW_SRC_PATH`.
Similarly, if the Ethos-U55 driver is not in the default location,
`ETHOS_U55_DRIVER_SRC_PATH` can be used to configure the location. For example:

```commandline
cmake \
    -DTARGET_PLATFORM=mps3 \
    -DTARGET_SUBSYSTEM=sse-300 \
    -DCMAKE_TOOLCHAIN_FILE=scripts/cmake/bare-metal-toolchain.cmake \
    -DTENSORFLOW_SRC_PATH=/my/custom/location/tensorflow \
    -DETHOS_U55_DRIVER_SRC_PATH=/my/custom/location/core_driver \
    -DUSE_CASE_BUILD=inference_runner ..
```

The `CMSIS_SRC_PATH` parameter can also be used to override the CMSIS sources that TensorFlow uses by default for
compilation. For example, to use the CMSIS sources fetched by the ethos-u helper script, we can use:

```commandline
cmake \
    -DTARGET_PLATFORM=mps3 \
    -DTARGET_SUBSYSTEM=sse-300 \
    -DCMAKE_TOOLCHAIN_FILE=scripts/cmake/bare-metal-toolchain.cmake \
    -DTENSORFLOW_SRC_PATH=../ethos-u/core_software/tensorflow \
    -DETHOS_U55_DRIVER_SRC_PATH=../ethos-u/core_software/core_driver \
    -DCMSIS_SRC_PATH=../ethos-u/core_software/cmsis \
    -DUSE_CASE_BUILD=inference_runner ..
```

> **Note:** If re-building with changed parameter values, it is highly advised to clean the build directory and re-run
> the CMake command.

If the CMake command succeeded, build the application as follows:

```commandline
make -j4
```

For Windows, use `mingw32-make`.

Add `VERBOSE=1` to see compilation and link details.
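
For example, to build with four parallel jobs and verbose output (`VERBOSE=1` is the standard mechanism for
CMake-generated Makefiles):

```commandline
# VERBOSE=1 prints the full compiler and linker command lines
make -j4 VERBOSE=1
```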

Results of the build will be placed under the `build/bin` folder:

```tree
bin
 ├── ethos-u-inference_runner.axf
 ├── ethos-u-inference_runner.htm
 ├── ethos-u-inference_runner.map
 ├── images-inference_runner.txt
 └── sectors
      └── inference_runner
           ├── dram.bin
           └── itcm.bin
```

Where:

- `ethos-u-inference_runner.axf`: The built application binary for the Inference Runner use case.

- `ethos-u-inference_runner.map`: Information from building the application (for example, the libraries used, what was
    optimized, and the location of objects).

- `ethos-u-inference_runner.htm`: Human readable file containing the call graph of application functions.

- `sectors/`: Folder containing the built application, split into files for loading into different FPGA memory regions.

- `images-inference_runner.txt`: Tells the FPGA which memory regions to use for loading the binaries in the `sectors/`
    folder.

### Add custom model

The application performs inference using the model pointed to by the CMake parameter `inference_runner_MODEL_TFLITE_PATH`.

> **Note:** If you want to run the model using Ethos-U55, ensure your custom model has been run through the Vela compiler
>successfully before continuing. See [Optimize model with Vela compiler](../sections/building.md#Optimize-custom-model-with-Vela-compiler).
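
As a hedged illustration (assuming the `ethos-u-vela` Python package and a 128 MAC Ethos-U55 configuration; adjust
`--accelerator-config` to match your build), the Vela compilation step looks roughly like this:

```commandline
pip install ethos-u-vela
# ethos-u55-128 is an assumed configuration; use the one matching your target
vela custom_model.tflite \
    --accelerator-config=ethos-u55-128 \
    --output-dir=./vela_out
```

By default, Vela writes the optimized model as `custom_model_vela.tflite` under the chosen output directory.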

Then, you must set `inference_runner_MODEL_TFLITE_PATH` to the location of the Vela processed model file.

An example:

```commandline
cmake \
  -Dinference_runner_MODEL_TFLITE_PATH=<path/to/custom_model_after_vela.tflite> \
  -DTARGET_PLATFORM=mps3 \
  -DTARGET_SUBSYSTEM=sse-300 \
  -DCMAKE_TOOLCHAIN_FILE=scripts/cmake/bare-metal-toolchain.cmake ..
```

For Windows, add `-G "MinGW Makefiles"` to the CMake command.

> **Note:** Clean the build directory before re-running the CMake command.

The `.tflite` model file pointed to by `inference_runner_MODEL_TFLITE_PATH` is converted to C++ files during the CMake
configuration stage and is then compiled into the application that performs the inference.

The log from the configuration stage should tell you what model path has been used:

```stdout
-- User option inference_runner_MODEL_TFLITE_PATH is set to <path/to/custom_model_after_vela.tflite>
...
-- Using <path/to/custom_model_after_vela.tflite>
++ Converting custom_model_after_vela.tflite to\
custom_model_after_vela.tflite.cc
...
```

After compiling, your custom model will have replaced the default one in the application.

## Setting-up and running Ethos-U55 code sample

### Setting up the Ethos-U55 Fast Model

The FVP is available publicly from
[Arm Ecosystem FVP downloads](https://developer.arm.com/tools-and-software/open-source-software/arm-platforms-software/arm-ecosystem-fvps).

For Ethos-U55 evaluation, please download the MPS3 version of the Arm® Corstone™-300 model that contains the Ethos-U55
and Cortex-M55. The model is currently only supported on Linux-based machines. To install the FVP:

- Unpack the archive

- Run the install script in the extracted package

```commandline
./FVP_Corstone_SSE-300_Ethos-U55.sh
```

- Follow the instructions to install the FVP to your desired location
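
For a scripted setup, the installer can typically be run non-interactively. The flags below are an assumption based on
the standard Arm FVP installer scripts (verify them with `--help` first), and `~/FVP_install_location` is an arbitrary
destination:

```commandline
# Flags assumed from the common Arm FVP installer interface; check --help first
./FVP_Corstone_SSE-300_Ethos-U55.sh --i-agree-to-the-contained-eula --no-interactive -d ~/FVP_install_location
```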

### Starting Fast Model simulation

Once the building step has completed, the application binary `ethos-u-inference_runner.axf` can be found in the
`build/bin` folder. Assuming the install location of the FVP was set to `~/FVP_install_location`, the simulation can
be started by:

```commandline
~/FVP_install_location/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55 \
    ./bin/mps3-sse-300/ethos-u-inference_runner.axf
```
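
When running headless (for example, over SSH), it can be useful to suppress the telnet window and print the UART
output directly on the console. The configuration parameters below are assumptions based on the Corstone-300 FVP's
documented options; list the parameters supported by your FVP version with `--list-params`:

```commandline
# Parameter names are assumptions; confirm them via --list-params for your FVP version
~/FVP_install_location/models/Linux64_GCC-6.4/FVP_Corstone_SSE-300_Ethos-U55 \
    -C mps3_board.telnet_terminal.start_telnet=0 \
    -C mps3_board.uart0.out_file=- \
    ./bin/mps3-sse-300/ethos-u-inference_runner.axf
```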

A log output should appear on the terminal:

```log
telnetterminal0: Listening for serial connection on port 5000
telnetterminal1: Listening for serial connection on port 5001
telnetterminal2: Listening for serial connection on port 5002
telnetterminal5: Listening for serial connection on port 5003
```

This also launches a telnet window with the sample application's standard output and error log entries. These contain
information about the pre-built application version, the TensorFlow Lite Micro library version used, the data type,
and the input and output tensor sizes of the model compiled into the executable binary.
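
The telnet window is normally launched automatically. If it is not, you can connect to the first serial port manually
(assuming a local simulation and the default port shown in the log above):

```commandline
# Port 5000 is the default shown in the FVP log above
telnet localhost 5000
```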

### Running Inference Runner

After the application has started, the inference begins immediately and the results are output on the telnet terminal.

The following example illustrates application output:

```log
[INFO] Profile for Inference:
       Active NPU cycles: 26976
       Idle NPU cycles: 196
```

After running an inference on randomly generated data, the log output shows the profiling results for this inference:

- Ethos-U55's PMU report:

  - 26,976 active cycles: number of cycles that were used for computation

  - 196 idle cycles: number of cycles for which the NPU was idle

- For FPGA platforms, the CPU cycle count can also be enabled (see the sketch after this list). For the FVP, however,
    CPU cycle counters should not be used as the CPU model is not cycle-approximate or cycle-accurate.
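
As a sketch of how the CPU cycle counters might be enabled for an FPGA build, the evaluation kit exposes a
`CPU_PROFILE_ENABLED` CMake option for this purpose; treat the option name as an assumption and verify it against the
build options in your source tree:

```commandline
# CPU_PROFILE_ENABLED is assumed from the kit's common build options; do not use on the FVP
cmake \
    -DTARGET_PLATFORM=mps3 \
    -DTARGET_SUBSYSTEM=sse-300 \
    -DCMAKE_TOOLCHAIN_FILE=scripts/cmake/bare-metal-toolchain.cmake \
    -DCPU_PROFILE_ENABLED=1 \
    -DUSE_CASE_BUILD=inference_runner ..
```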