# Delegate Build Guide

This guide assumes that Arm NN has been built with the Arm NN TF Lite Delegate using the [Arm NN Build Tool](../build-tool/README.md).<br>
The Arm NN TF Lite Delegate can also be obtained by downloading the [Pre-Built Binaries on the GitHub homepage](../README.md).

**Table of Contents:**
- [Running DelegateUnitTests](#running-delegateunittests)
- [Run the TFLite Model Benchmark Tool](#run-the-tflite-model-benchmark-tool)
  - [Download the TFLite Model Benchmark Tool](#download-the-tflite-model-benchmark-tool)
  - [Execute the benchmarking tool with the Arm NN TF Lite Delegate](#execute-the-benchmarking-tool-with-the-arm-nn-tf-lite-delegate)
- [Integrate the Arm NN TfLite Delegate into your project](#integrate-the-arm-nn-tflite-delegate-into-your-project)

## Running DelegateUnitTests

To verify that the build was successful, you can run the unit tests for the delegate, which can be found in
the delegate's build directory. These tests were created with [Doctest](https://github.com/onqtam/doctest). Using test filters you can
exclude tests that your build is not configured for. Here, we run all test suites that have `CpuAcc` in their name.
```bash
cd <PATH_TO_ARMNN_BUILD_DIRECTORY>/delegate/build
./DelegateUnitTests --test-suite=*CpuAcc*
```
If you have also built with GPU acceleration, you might want to extend your test-suite filter:
```bash
./DelegateUnitTests --test-suite=*CpuAcc*,*GpuAcc*
```
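
If you are unsure which suites your binary contains, Doctest can list them. The `--list-test-suites` flag below is a standard Doctest option; the exact suites shown will depend on how your build was configured.
```bash
# List all test suites compiled into the binary, then run a chosen subset.
./DelegateUnitTests --list-test-suites
./DelegateUnitTests --test-suite=*GpuAcc*
```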

## Run the TFLite Model Benchmark Tool

The [TFLite Model Benchmark](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/tools/benchmark) Tool has a useful command line interface to test the TF Lite Delegate.
We can use this to demonstrate the use of the Arm NN TF Lite Delegate and its options.

Some examples of this can be viewed in this [YouTube demonstration](https://www.youtube.com/watch?v=NResQ1kbm-M&t=920s).

### Download the TFLite Model Benchmark Tool

Binary builds of the benchmarking tool for various platforms are available [here](https://www.tensorflow.org/lite/performance/measurement#native_benchmark_binary). In this example we will target an aarch64 Linux environment. We will also download a sample uint8 tflite model from the [Arm ML Model Zoo](https://github.com/ARM-software/ML-zoo).

```bash
mkdir $BASEDIR/benchmarking
cd $BASEDIR/benchmarking
# Get the benchmarking binary.
wget https://storage.googleapis.com/tensorflow-nightly-public/prod/tensorflow/release/lite/tools/nightly/latest/linux_aarch64_benchmark_model -O benchmark_model
# Make it executable.
chmod +x benchmark_model
# Download a sample model from the Arm ML Model Zoo.
wget https://github.com/ARM-software/ML-zoo/blob/master/models/image_classification/mobilenet_v2_1.0_224/tflite_uint8/mobilenet_v2_1.0_224_quantized_1_default_1.tflite?raw=true -O mobilenet_v2_1.0_224_quantized_1_default_1.tflite
```
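
As an optional sanity check, you can confirm that the binary targets aarch64 and that both downloads completed before moving on:
```bash
# Confirm the benchmark binary is an aarch64 executable.
file benchmark_model
# Confirm the model file is present and non-empty.
ls -lh mobilenet_v2_1.0_224_quantized_1_default_1.tflite
```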

### Execute the benchmarking tool with the Arm NN TF Lite Delegate

You should still be in $BASEDIR/benchmarking from the previous step.
```bash
LD_LIBRARY_PATH=<PATH_TO_ARMNN_BUILD_DIRECTORY> ./benchmark_model --graph=mobilenet_v2_1.0_224_quantized_1_default_1.tflite --external_delegate_path="<PATH_TO_ARMNN_BUILD_DIRECTORY>/delegate/libarmnnDelegate.so" --external_delegate_options="backends:CpuAcc;logging-severity:info"
```
The `external_delegate_options` here are specific to the Arm NN delegate. They are used to specify a target Arm NN backend or to enable/disable various options in Arm NN. A full description can be found in the parameters of the function `tflite_plugin_create_delegate`.
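
For example, if you have built with GPU support, you could target GpuAcc first and fall back to CpuAcc while enabling extra options. The option names used below (`backends`, `enable-fast-math`, `number-of-threads`, `logging-severity`) come from the parameters of `tflite_plugin_create_delegate`; treat this invocation as an illustrative sketch and check that function for the options available in your version.
```bash
# Prefer GpuAcc, fall back to CpuAcc; enable fast-math and use 4 CPU threads.
LD_LIBRARY_PATH=<PATH_TO_ARMNN_BUILD_DIRECTORY> ./benchmark_model \
  --graph=mobilenet_v2_1.0_224_quantized_1_default_1.tflite \
  --external_delegate_path="<PATH_TO_ARMNN_BUILD_DIRECTORY>/delegate/libarmnnDelegate.so" \
  --external_delegate_options="backends:GpuAcc,CpuAcc;enable-fast-math:true;number-of-threads:4;logging-severity:info"
```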

## Integrate the Arm NN TfLite Delegate into your project

The delegate can be integrated into your C++ project by creating a TfLite Interpreter and
instructing it to use the Arm NN delegate for graph execution. This should look similar
to the following code snippet.
```cpp
// Create the TfLite Interpreter
std::unique_ptr<tflite::Interpreter> armnnDelegateInterpreter;
tflite::InterpreterBuilder(tfLiteModel, tflite::ops::builtin::BuiltinOpResolver())
    (&armnnDelegateInterpreter);

// Create the Arm NN delegate from a set of backends, e.g. { armnn::Compute::CpuAcc }
armnnDelegate::DelegateOptions delegateOptions(backends);
std::unique_ptr<TfLiteDelegate, decltype(&armnnDelegate::TfLiteArmnnDelegateDelete)>
    theArmnnDelegate(armnnDelegate::TfLiteArmnnDelegateCreate(delegateOptions),
                     armnnDelegate::TfLiteArmnnDelegateDelete);

// Instruct the Interpreter to use the Arm NN delegate for graph execution
armnnDelegateInterpreter->ModifyGraphWithDelegate(theArmnnDelegate.get());
```
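
Once the delegate is attached, inference follows the standard TfLite flow. The continuation below is a minimal sketch assuming the quantized MobileNet model from the benchmarking example (a single uint8 input of shape 1x224x224x3 and a single uint8 output); `typed_input_tensor` and `typed_output_tensor` are standard `tflite::Interpreter` accessors.
```cpp
// Allocate tensor buffers before running inference.
armnnDelegateInterpreter->AllocateTensors();

// Fill the first input tensor with preprocessed image data (uint8 for this model).
uint8_t* inputData = armnnDelegateInterpreter->typed_input_tensor<uint8_t>(0);
// ... copy 1*224*224*3 bytes of image data into inputData ...

// Run inference; the delegated parts of the graph execute through Arm NN.
armnnDelegateInterpreter->Invoke();

// Read the class scores back from the first output tensor.
const uint8_t* outputData = armnnDelegateInterpreter->typed_output_tensor<uint8_t>(0);
```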

For further information on using TfLite Delegates please visit the [TensorFlow website](https://www.tensorflow.org/lite/guide).

For more details on the options you can pass to the Arm NN delegate, please check the parameters of the function `tflite_plugin_create_delegate`.