ArmNN
 22.02
README.md
# Introduction

* [Quick Start Guides](#getting-started-quick-start-guides)
* [Software Components overview](#software-components-overview)
* [Where to find more information](#where-to-find-more-information)
* [How to get involved](#how-to-get-involved)
* [Contributions](#contributions)
* [Disclaimer](#disclaimer)
* [License](#license)
* [Third-party](#third-party)
* [Build process](#build-process)

Arm NN is a key component of the [machine learning platform](https://mlplatform.org/), which is part of the
[Linaro Machine Intelligence Initiative](https://www.linaro.org/news/linaro-announces-launch-of-machine-intelligence-initiative/).

The Arm NN SDK is a set of open-source software and tools that enables machine learning workloads on power-efficient
devices. It provides a bridge between existing neural network frameworks and power-efficient Cortex-A CPUs,
Arm Mali GPUs and Arm Ethos NPUs.

<img align="center" width="400" src="https://developer.arm.com/-/media/Arm Developer Community/Images/Block Diagrams/Arm-NN/Arm-NN-Frameworks-Diagram.png"/>

The Arm NN SDK utilizes the Compute Library to target programmable cores, such as Cortex-A CPUs and Mali GPUs,
as efficiently as possible. To target Ethos NPUs, the NPU driver is utilized. We also welcome new contributors to provide
their [own driver and backend](src/backends/README.md). Note that Arm NN does not provide support for Cortex-M CPUs.

Arm NN supports models created with **TensorFlow Lite** (TfLite) and **ONNX**.
Arm NN analyses a given model and replaces the operations within it with implementations specifically designed for the
hardware you want to execute it on. This can result in a significant boost in execution speed. How much faster your neural
network runs depends on the operations it contains and the available hardware. Below you can see the speedups
we have measured in our experiments with a few common networks.

\image html PerformanceChart.png

Arm NN is written using portable C++14 and the build system uses [CMake](https://cmake.org/), therefore it is possible
to build for a wide variety of target platforms from a wide variety of host environments.

## Getting started: Quick Start Guides
Arm NN provides quick start guides that help you set up Arm NN and run models quickly. The quickest way to build Arm NN is to either use our **Debian package** or use the prebuilt binaries available in the [Assets](https://github.com/ARM-software/armnn/releases) section of every Arm NN release.
There is an installation guide available [here](InstallationViaAptRepository.md) which provides step-by-step instructions on how to install the Arm NN Core,
the TfLite Parser and PyArmNN for Ubuntu 20.04. These guides can be used with the **prebuilt binaries**.
At present we have added a [quick start guide](delegate/DelegateQuickStartGuide.md) that shows you how to integrate the delegate into TfLite to run models using Python.
More guides will be added here in the future.
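
The delegate quick start essentially boils down to loading the Arm NN delegate library into a TfLite interpreter. The sketch below follows that pattern; the delegate path, backend names and model path are placeholders you would replace with values from your installation.

```python
# Sketch of loading the Arm NN TfLite delegate from Python, following the
# delegate quick start guide. Paths and backend names are illustrative.

def armnn_delegate_options(backends=("CpuAcc", "CpuRef"), log_level="info"):
    """Build the option dict passed to tflite.load_delegate for the Arm NN delegate."""
    return {"backends": ",".join(backends), "logging-severity": log_level}

def load_interpreter(model_path, delegate_path="libarmnnDelegate.so"):
    # Imported lazily so the option helper stays usable without TfLite installed.
    import tflite_runtime.interpreter as tflite

    armnn_delegate = tflite.load_delegate(library=delegate_path,
                                          options=armnn_delegate_options())
    return tflite.Interpreter(model_path=model_path,
                              experimental_delegates=[armnn_delegate])
```

Once the interpreter is created, inference uses the usual TfLite calls (`allocate_tensors()`, `invoke()`); any operation the delegate cannot accelerate falls back to the default TfLite kernels.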


## Software Components overview
Depending on what kind of framework (TensorFlow Lite, ONNX) you've been using to create your model, there are multiple
software tools available within Arm NN that can serve your needs.

Generally, there is a **parser** available **for each supported framework**. Arm NN parsers are C++ libraries that you can integrate into your application to load, optimize and execute your model.
Each parser allows you to run models from one framework. If you would like to run an ONNX model, you can make use of the **Onnx-Parser**. There is also a parser available for TfLite models, but the preferred way to execute TfLite models is using our TfLite-Delegate. We also provide **Python bindings** for our parsers and the Arm NN core.
We call the result **PyArmNN**. Your application can therefore be conveniently written either in C++ using the "original"
Arm NN library or in Python using PyArmNN. You can find tutorials on how to set up and use our parsers in our Doxygen
documentation. The latest version can be found in the [wiki section](https://github.com/ARM-software/armnn/wiki/Documentation)
of this repository.
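
As an illustration of the parser flow (parse, optimize, load onto a runtime), here is a PyArmNN sketch following the PyArmNN tutorials; the model path and backend names are placeholders, and the calls assume the `pyarmnn` bindings are installed.

```python
# Sketch of the PyArmNN flow: parse -> optimize -> load onto a runtime.
# API calls follow the PyArmNN tutorials; paths and backends are placeholders.

def preferred_backends(names=("CpuAcc", "CpuRef")):
    """Backend preference list; Arm NN tries each backend from left to right."""
    return list(names)

def load_and_optimize(model_path, backends=("CpuAcc", "CpuRef")):
    import pyarmnn as ann  # requires the PyArmNN bindings

    parser = ann.ITfLiteParser()
    network = parser.CreateNetworkFromBinaryFile(model_path)

    runtime = ann.IRuntime(ann.CreationOptions())
    opt_network, _ = ann.Optimize(
        network,
        [ann.BackendId(b) for b in preferred_backends(backends)],
        runtime.GetDeviceSpec(),
        ann.OptimizerOptions())

    net_id, _ = runtime.LoadNetwork(opt_network)
    return runtime, net_id
```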

Arm NN's software toolkit comes with the **TfLite Delegate**, which can be integrated into TfLite.
TfLite will then delegate the operations that Arm NN can accelerate to Arm NN; every other operation will still be
executed by the usual TfLite runtime. This is our **recommended way to accelerate TfLite models**. As with our parsers,
there are tutorials in our Doxygen documentation that can be found in the [wiki section](https://github.com/ARM-software/armnn/wiki/Documentation).

If you would like to use **Arm NN on Android**, you can follow this guide, which explains [how to build Arm NN using the Android NDK](BuildGuideAndroidNDK.md).
You might also want to take a look at another repository, which implements a hardware abstraction layer (HAL) for
Android. The repository is called [Android-NN-Driver](https://github.com/ARM-software/android-nn-driver) and, when
integrated into Android, it will automatically run neural networks with Arm NN.


## Where to find more information
The section above introduces the most important components that Arm NN provides.
You can find a complete list in our **Doxygen documentation**. The
latest version can be found in the [wiki section](https://github.com/ARM-software/armnn/wiki/Documentation) of our GitHub
repository.

For FAQs and troubleshooting advice, see [FAQ.md](docs/FAQ.md)
or take a look at previous [GitHub issues](https://github.com/ARM-software/armnn/issues).

## How to get involved
If you would like to get involved but don't know where to start, a good place to look is our GitHub issues.

Feature requests without a volunteer to implement them are closed but given the 'Help wanted' label; these can be found
[here](https://github.com/ARM-software/armnn/issues?q=is%3Aissue+label%3A%22Help+wanted%22+).
Once you find a suitable issue, feel free to re-open it and add a comment
so that other people know you are working on it and can help.

When the feature is implemented, the 'Help wanted' label will be removed.
## Contributions
The Arm NN project welcomes contributions. For more details on contributing to Arm NN, see the [Contributing page](https://mlplatform.org/contributing/)
on the [MLPlatform.org](https://mlplatform.org/) website, or see the [Contributor Guide](ContributorGuide.md).

In particular, if you'd like to implement your own backend alongside our CPU, GPU and NPU backends, there are guides for
backend development:
[Backend development guide](src/backends/README.md), [Dynamic backend development guide](src/dynamic/README.md)

## Disclaimer
The armnn/tests directory contains tests used during Arm NN development. Many of them depend on third-party IP, model
protobufs and image files not distributed with Arm NN. The dependencies of some of the tests are available freely on
the Internet, for those who wish to experiment, but they won't run out of the box.

## License
Arm NN is provided under the [MIT](https://spdx.org/licenses/MIT.html) license.
See [LICENSE](LICENSE) for more information. Contributions to this project are accepted under the same license.

Individual files contain the following tag instead of the full license text.

    SPDX-License-Identifier: MIT

This enables machine processing of license information based on the SPDX License Identifiers that are available here: http://spdx.org/licenses/
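
For illustration, a source file header carrying this tag typically looks like the following (the copyright line is an example and varies per file):

```
//
// Copyright © 2022 Arm Ltd and Contributors. All rights reserved.
// SPDX-License-Identifier: MIT
//
```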


## Third-party
Third-party tools used by Arm NN:

| Tool | License (SPDX ID) | Description | Version | Provenance |
|----------------|-------------------|------------------------------------------------------------------|-------------|-------------------|
| cxxopts | MIT | A lightweight C++ option parser library | SHA 12e496da3d486b87fa9df43edea65232ed852510 | https://github.com/jarro2783/cxxopts |
| doctest | MIT | Header-only C++ testing framework | 2.4.6 | https://github.com/onqtam/doctest |
| fmt | MIT | {fmt} is an open-source formatting library providing a fast and safe alternative to C stdio and C++ iostreams. | 7.0.1 | https://github.com/fmtlib/fmt |
| ghc | MIT | A header-only single-file std::filesystem compatible helper library | 1.3.2 | https://github.com/gulrak/filesystem |
| half | MIT | IEEE 754 conformant 16-bit half-precision floating point library | 1.12.0 | http://half.sourceforge.net |
| mapbox/variant | BSD | A header-only alternative to 'boost::variant' | 1.1.3 | https://github.com/mapbox/variant |
| stb | MIT | Image loader, resize and writer | 2.16 | https://github.com/nothings/stb |


## Build process
Arm NN uses the following security-related build flags in its code:

| Build flags |
|---------------------|
| -Wall |
| -Wextra |
| -Wold-style-cast |
| -Wno-missing-braces |
| -Wconversion |
| -Wsign-conversion |
| -Werror |
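
In a CMake-based project, flags like these can be attached to a build target as follows (the target name is illustrative):

```cmake
# Illustrative: apply the warning flags listed above to a single target.
target_compile_options(my_armnn_app PRIVATE
    -Wall -Wextra -Wold-style-cast -Wno-missing-braces
    -Wconversion -Wsign-conversion -Werror)
```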