<div align="center">
  <img src="Arm_NN_horizontal_blue.png" class="center" alt="Arm NN Logo" width="300"/>
</div>

* [Quick Start Guides](#quick-start-guides)
* [Pre-Built Binaries](#pre-built-binaries)
* [Software Overview](#software-overview)
* [Get Involved](#get-involved)
* [Contributions](#contributions)
* [Disclaimer](#disclaimer)
* [License](#license)
* [Third-Party](#third-party)
* [Build Flags](#build-flags)

# Arm NN

**_Arm NN_** is the **most performant** machine learning (ML) inference engine for Android and Linux, accelerating ML
on **Arm Cortex-A CPUs and Arm Mali GPUs**. This ML inference engine is an open-source SDK which bridges the gap
between existing neural network frameworks and power-efficient Arm IP.

Arm NN outperforms generic ML libraries thanks to **Arm architecture-specific optimizations** (e.g. SVE2) provided by
the **[Arm Compute Library (ACL)](https://github.com/ARM-software/ComputeLibrary/)**. To target Arm Ethos-N NPUs, Arm NN
uses the [Ethos-N NPU Driver](https://github.com/ARM-software/ethos-n-driver-stack). For Arm Cortex-M acceleration,
please see [CMSIS-NN](https://github.com/ARM-software/CMSIS_5).

Arm NN is written in portable **C++14** and built with [CMake](https://cmake.org/), enabling builds for a wide
variety of target platforms from a wide variety of host environments. **Python** developers can interface with Arm NN
through our **Arm NN TF Lite Delegate**.

## Quick Start Guides
**The Arm NN TF Lite Delegate provides the widest ML operator support in Arm NN** and is an easy way to accelerate
your ML model. To start using the TF Lite Delegate, first download the **[Pre-Built Binaries](#pre-built-binaries)** for
the latest release of Arm NN. Using a Python interpreter, you can load your TF Lite model into the Arm NN TF Lite
Delegate and run accelerated inference. Please see this
**[Quick Start Guide](delegate/DelegateQuickStartGuide.md)** on GitHub or this more comprehensive
**[Arm Developer Guide](https://developer.arm.com/documentation/102561/latest/)** for information on how to accelerate
your TF Lite model using the Arm NN TF Lite Delegate.
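
As a rough sketch, loading a model into the delegate from Python looks something like the following. This assumes the `tflite_runtime` package is installed, `libarmnnDelegate.so` from the pre-built binaries is on your loader path, and `model.tflite` is a placeholder for your own converted model; the backend names and option keys follow the delegate's documented options but should be checked against the Quick Start Guide for your release.

```python
# Minimal sketch: run a TF Lite model through the Arm NN TF Lite Delegate.
# "model.tflite", the library path and the backend list are placeholders.

def armnn_delegate_options(backends=("CpuAcc", "GpuAcc"), log_level="info"):
    """Build the options dict passed to tflite.load_delegate() for Arm NN."""
    return {"backends": ",".join(backends), "logging-severity": log_level}

def run_inference(model_path="model.tflite"):
    # Imported lazily so the helper above works without TF Lite installed.
    import tflite_runtime.interpreter as tflite

    delegate = tflite.load_delegate("libarmnnDelegate.so",
                                    options=armnn_delegate_options())
    interpreter = tflite.Interpreter(model_path=model_path,
                                     experimental_delegates=[delegate])
    interpreter.allocate_tensors()
    interpreter.invoke()  # input tensors should be populated before this call
    return interpreter
```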

The fastest way to integrate Arm NN into an **Android app** is by using our **Arm NN AAR (Android Archive) file with
Android Studio**. The AAR file nicely packages up the Arm NN TF Lite Delegate, Arm NN itself and ACL; ready to be
integrated into your Android ML application. Using the AAR allows you to benefit from the **vast operator support** of
the Arm NN TF Lite Delegate. We held an **[Arm AI Tech Talk](https://www.youtube.com/watch?v=Zu4v0nqq2FA)** on how to
accelerate an ML Image Segmentation app in 5 minutes using this AAR file, with the supporting guide
**[here](https://developer.arm.com/documentation/102744/latest)**. To download the Arm NN AAR file, please see the
**[Pre-Built Binaries](#pre-built-binaries)** section below.

We also provide Debian packages for Arm NN, which are a quick way to start using Arm NN and the TF Lite Parser
(albeit with less ML operator support than the TF Lite Delegate). There is an installation guide available
[here](InstallationViaAptRepository.md) which provides instructions on how to install the Arm NN Core and the TF Lite
Parser for Ubuntu 20.04.

To build Arm NN from scratch, we provide the **[Arm NN Build Tool](build-tool/README.md)**. This tool consists of
**parameterized bash scripts** accompanied by a **Dockerfile** for building Arm NN and its dependencies, including
**[Arm Compute Library (ACL)](https://github.com/ARM-software/ComputeLibrary/)**. This tool supersedes most of the
existing Arm NN build guides and is a user-friendly way to build Arm NN. The main benefit of building Arm NN from
scratch is the ability to **choose exactly which components to build, targeted for your ML project**.


## Pre-Built Binaries

| Operating System | Architecture-specific Release Archive (Download) |
|---|---|
| Android (AAR) | [![](https://img.shields.io/badge/download-android--aar-orange)](https://github.com/ARM-software/armnn/releases/download/v22.11/armnn_delegate_jni-22.11.aar) |
| Android 10 "Q/Quince Tart" (API level 29) | [![](https://img.shields.io/badge/download-arm64--v8.2a-blue)](https://github.com/ARM-software/armnn/releases/download/v22.11/ArmNN-android-29-arm64-v8.2-a.tar.gz) [![](https://img.shields.io/badge/download-arm64--v8a-red)](https://github.com/ARM-software/armnn/releases/download/v22.11/ArmNN-android-29-arm64-v8a.tar.gz) |
| Android 11 "R/Red Velvet Cake" (API level 30) | [![](https://img.shields.io/badge/download-arm64--v8.2a-blue)](https://github.com/ARM-software/armnn/releases/download/v22.11/ArmNN-android-30-arm64-v8.2-a.tar.gz) [![](https://img.shields.io/badge/download-arm64--v8a-red)](https://github.com/ARM-software/armnn/releases/download/v22.11/ArmNN-android-30-arm64-v8a.tar.gz) |
| Android 12 "S/Snow Cone" (API level 31) | [![](https://img.shields.io/badge/download-arm64--v8.2a-blue)](https://github.com/ARM-software/armnn/releases/download/v22.11/ArmNN-android-31-arm64-v8.2-a.tar.gz) [![](https://img.shields.io/badge/download-arm64--v8a-red)](https://github.com/ARM-software/armnn/releases/download/v22.11/ArmNN-android-31-arm64-v8a.tar.gz) |
| Android 13 "T/Tiramisu" (API level 32) | [![](https://img.shields.io/badge/download-arm64--v8.2a-blue)](https://github.com/ARM-software/armnn/releases/download/v22.11/ArmNN-android-32-arm64-v8.2-a.tar.gz) [![](https://img.shields.io/badge/download-arm64--v8a-red)](https://github.com/ARM-software/armnn/releases/download/v22.11/ArmNN-android-32-arm64-v8a.tar.gz) |
| Linux | [![](https://img.shields.io/badge/download-aarch64-green)](https://github.com/ARM-software/armnn/releases/download/v22.08/ArmNN-linux-aarch64.tar.gz) [![](https://img.shields.io/badge/download-x86__64-yellow)](https://github.com/ARM-software/armnn/releases/download/v22.08/ArmNN-linux-x86_64.tar.gz) |


## Software Overview
The Arm NN SDK supports ML models in **TensorFlow Lite** (TF Lite) and **ONNX** formats.

**Arm NN's TF Lite Delegate** accelerates TF Lite models through **Python or C++ APIs**. Supported TF Lite operators
are accelerated by Arm NN, and any unsupported operators fall back to the reference TF Lite runtime -
ensuring extensive ML operator support. **The recommended way to use Arm NN is to
[convert your model to TF Lite format](https://www.tensorflow.org/lite/convert) and use the TF Lite Delegate.** Please
refer to the [Quick Start Guides](#quick-start-guides) for more information on how to use the TF Lite Delegate.
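
As an illustrative sketch of that conversion step using TensorFlow's standard converter API (assuming TensorFlow is installed; the SavedModel directory and output filename below are placeholders):

```python
# Sketch: convert a TensorFlow SavedModel to TF Lite format so the resulting
# .tflite file can be run through the Arm NN TF Lite Delegate.

def convert_to_tflite(saved_model_dir, output_path="model.tflite"):
    import tensorflow as tf  # imported lazily; only needed for conversion

    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    tflite_model = converter.convert()  # returns the serialized model as bytes
    with open(output_path, "wb") as f:
        f.write(tflite_model)
    return output_path
```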

Arm NN also provides **TF Lite and ONNX parsers**, which are C++ libraries for integrating TF Lite or ONNX models
into your ML application. Please note that these parsers do not provide as extensive ML operator coverage as the
Arm NN TF Lite Delegate.

**Android** ML application developers have a number of options for using Arm NN:
* Use our Arm NN AAR (Android Archive) file with **Android Studio** as described in the
[Quick Start Guides](#quick-start-guides) section
* Download and use our [Pre-Built Binaries](#pre-built-binaries) for the Android platform
* Build Arm NN from scratch with the Android NDK using this [GitHub guide](BuildGuideAndroidNDK.md)

Arm also provides an [Android-NN-Driver](https://github.com/ARM-software/android-nn-driver) which implements a
hardware abstraction layer (HAL) for the Android NNAPI. When the Android NN Driver is integrated on an Android device,
ML models used in Android applications will automatically be accelerated by Arm NN.

For more information about the Arm NN components, please refer to our
[documentation](https://github.com/ARM-software/armnn/wiki/Documentation).

Arm NN is a key component of the [machine learning platform](https://mlplatform.org/), which is part of the
[Linaro Machine Intelligence Initiative](https://www.linaro.org/news/linaro-announces-launch-of-machine-intelligence-initiative/).

For FAQs and troubleshooting advice, see the [FAQ](docs/FAQ.md) or take a look at previous
[GitHub Issues](https://github.com/ARM-software/armnn/issues).

## Get Involved
The best way to get involved is by using our software. If you need help or encounter an issue, please raise it as a
[GitHub Issue](https://github.com/ARM-software/armnn/issues). Feel free to have a look at any of our open issues too.
We also welcome feedback on our documentation.

Feature requests for which no volunteer has come forward are closed, but given the 'Help wanted' label; these can be
found [here](https://github.com/ARM-software/armnn/issues?q=is%3Aissue+label%3A%22Help+wanted%22+).
Once you find a suitable issue, feel free to re-open it and add a comment, so that Arm NN engineers know you are
working on it and can help.

The 'Help wanted' label is removed once the feature is implemented.

## Contributions
The Arm NN project welcomes contributions. For more details on contributing to Arm NN, please see the
[Contributing page](https://mlplatform.org/contributing/) on the [MLPlatform.org](https://mlplatform.org/) website,
or see the [Contributor Guide](CONTRIBUTING.md).

In particular, if you would like to implement your own backend alongside our CPU, GPU and NPU backends, there are
guides for backend development: the [Backend development guide](src/backends/README.md) and the
[Dynamic backend development guide](src/dynamic/README.md).


## Disclaimer
The armnn/tests directory contains tests used during Arm NN development. Many of them depend on third-party IP, model
protobufs and image files not distributed with Arm NN. The dependencies for some tests are available freely on
the Internet, for those who wish to experiment, but they won't run out of the box.

## License
Arm NN is provided under the [MIT](https://spdx.org/licenses/MIT.html) license.
See [LICENSE](LICENSE) for more information. Contributions to this project are accepted under the same license.

Individual files contain the following tag instead of the full license text.

    SPDX-License-Identifier: MIT

This enables machine processing of license information based on the SPDX License Identifiers that are available
here: http://spdx.org/licenses/

## Third-party
Third-party tools used by Arm NN:

| Tool | License (SPDX ID) | Description | Version | Provenance |
|---|---|---|---|---|
| cxxopts | MIT | A lightweight C++ option parser library | SHA 12e496da3d486b87fa9df43edea65232ed852510 | https://github.com/jarro2783/cxxopts |
| doctest | MIT | Header-only C++ testing framework | 2.4.6 | https://github.com/onqtam/doctest |
| fmt | MIT | {fmt} is an open-source formatting library providing a fast and safe alternative to C stdio and C++ iostreams. | 7.0.1 | https://github.com/fmtlib/fmt |
| ghc | MIT | A header-only single-file std::filesystem-compatible helper library | 1.3.2 | https://github.com/gulrak/filesystem |
| half | MIT | IEEE 754-conformant 16-bit half-precision floating-point library | 1.12.0 | http://half.sourceforge.net |
| mapbox/variant | BSD | A header-only alternative to 'boost::variant' | 1.1.3 | https://github.com/mapbox/variant |
| stb | MIT | Image loader, resizer and writer | 2.16 | https://github.com/nothings/stb |


## Build Flags
Arm NN uses the following security-related build flags in its code:

| Build flags |
|---|
| -Wall |
| -Wextra |
| -Wold-style-cast |
| -Wno-missing-braces |
| -Wconversion |
| -Wsign-conversion |
| -Werror |