# Introduction

* [Software tools overview](#software-tools-overview)
* [Where to find more information](#where-to-find-more-information)
* [Contributions](#contributions)
* [Disclaimer](#disclaimer)
* [License](#license)
* [Third-Party](#third-party)

Arm NN is a key component of the [machine learning platform](https://mlplatform.org/), which is part of the
[Linaro Machine Intelligence Initiative](https://www.linaro.org/news/linaro-announces-launch-of-machine-intelligence-initiative/).

The Arm NN SDK is a set of open-source software and tools that enables machine learning workloads on power-efficient
devices. It provides a bridge between existing neural network frameworks and power-efficient Cortex-A CPUs,
Arm Mali GPUs and Arm Ethos NPUs.

<img align="center" width="400" src="https://developer.arm.com/-/media/Arm Developer Community/Images/Block Diagrams/Arm-NN/Arm-NN-Frameworks-Diagram.png"/>

Arm NN SDK utilizes the Compute Library to target programmable cores, such as Cortex-A CPUs and Mali GPUs,
as efficiently as possible. To target Ethos NPUs, the NPU driver is used. We also welcome new contributors to provide
their [own driver and backend](src/backends/README.md). Note that Arm NN does not provide support for Cortex-M CPUs.

The latest release supports models created with **TensorFlow Lite** (TfLite) and **ONNX**.
Arm NN analyses a given model and replaces the operations within it with implementations designed specifically for the
hardware you want to execute it on. This can significantly speed up execution. How much faster your neural
network runs depends on the operations it contains and the available hardware. Below you can see the speedup
we have measured in our experiments with a few common networks.

<img align="center" width="700" src="https://developer.arm.com/-/media/developer/Other Images/Arm_NN_performance_relative_to_other_NN_frameworks_diagram.png"/>

Arm NN is written using portable C++14 and the build system uses [CMake](https://cmake.org/), so it is possible
to build for a wide variety of target platforms from a wide variety of host environments.


## Getting started: Software tools overview
Depending on which framework (TensorFlow Lite or ONNX) you have used to create your model, there are multiple
software tools available within Arm NN that can serve your needs.

Generally, there is a **parser** available **for each supported framework**. Each parser allows you to run models from
one framework; for example, the TfLite parser lets you run TfLite models. You can integrate these parsers into your own
application to load, optimize and execute your model. We also provide **Python bindings** for our parsers and the Arm NN core,
which we call **PyArmNN**. Your application can therefore be conveniently written in either C++ using the "original"
Arm NN library or in Python using PyArmNN. You can find tutorials on how to set up and use our parsers in our Doxygen
documentation. The latest version can be found in the [wiki section](https://github.com/ARM-software/armnn/wiki/Documentation)
of this repository.
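
As a rough illustration of that C++ flow, a minimal sketch using the TfLite parser might look like the following. The model path, backend choice and omitted error handling are assumptions made for the sketch; the Doxygen tutorials show the complete, current API.

```cpp
#include <armnn/ArmNN.hpp>
#include <armnnTfLiteParser/ITfLiteParser.hpp>

#include <vector>

int main()
{
    // Parse a TfLite flatbuffer into an Arm NN network ("model.tflite" is a placeholder path).
    auto parser = armnnTfLiteParser::ITfLiteParser::Create();
    armnn::INetworkPtr network = parser->CreateNetworkFromBinaryFile("model.tflite");

    // Create a runtime and optimize the network for a chosen backend
    // (here the Neon-accelerated CPU backend, "CpuAcc").
    armnn::IRuntime::CreationOptions options;
    armnn::IRuntimePtr runtime = armnn::IRuntime::Create(options);
    std::vector<armnn::BackendId> backends = { armnn::BackendId("CpuAcc") };
    armnn::IOptimizedNetworkPtr optNet =
        armnn::Optimize(*network, backends, runtime->GetDeviceSpec());

    // Load the optimized network into the runtime; input and output tensors are then
    // bound and the network is executed with EnqueueWorkload().
    armnn::NetworkId networkId;
    runtime->LoadNetwork(networkId, std::move(optNet));
    return 0;
}
```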

Admittedly, building Arm NN and its parsers from source is not always easy to accomplish. We are trying to improve
usability by providing Arm NN as a **Debian package**. Our Debian package is the easiest way to install the Arm NN core,
the TfLite parser and PyArmNN (more support is on the way): [Installation via Apt Repository](InstallationViaAptRepository.md)

The newest member of Arm NN's software toolkit is the **TfLite Delegate**. The delegate can be integrated into TfLite:
TfLite will then delegate to Arm NN the operations that Arm NN can accelerate, while every other operation is still
executed by the usual TfLite runtime. This is our **recommended way to accelerate TfLite models**. As with our parsers,
there are tutorials in our Doxygen documentation that can be found in the [wiki section](https://github.com/ARM-software/armnn/wiki/Documentation).
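
As a rough sketch of how the delegate is used, registration with a standard TfLite interpreter might look like this. The model path and backend list are assumptions; the delegate guide in the Doxygen documentation covers the exact build steps and options.

```cpp
#include <tensorflow/lite/interpreter.h>
#include <tensorflow/lite/kernels/register.h>
#include <tensorflow/lite/model.h>

#include <armnn_delegate.hpp>
#include <DelegateOptions.hpp>

int main()
{
    // Build a standard TfLite interpreter ("model.tflite" is a placeholder path).
    auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite");
    tflite::ops::builtin::BuiltinOpResolver resolver;
    std::unique_ptr<tflite::Interpreter> interpreter;
    tflite::InterpreterBuilder(*model, resolver)(&interpreter);

    // Create the Arm NN delegate, preferring the GPU backend with a CPU fallback.
    std::vector<armnn::BackendId> backends = { armnn::Compute::GpuAcc, armnn::Compute::CpuAcc };
    armnnDelegate::DelegateOptions delegateOptions(backends);
    std::unique_ptr<TfLiteDelegate, decltype(&armnnDelegate::TfLiteArmnnDelegateDelete)>
        theArmnnDelegate(armnnDelegate::TfLiteArmnnDelegateCreate(delegateOptions),
                         armnnDelegate::TfLiteArmnnDelegateDelete);

    // Hand the operations Arm NN can accelerate to the delegate; everything else
    // stays on the regular TfLite runtime.
    interpreter->ModifyGraphWithDelegate(theArmnnDelegate.get());

    // Allocate tensors, fill the inputs and run inference as usual.
    interpreter->AllocateTensors();
    interpreter->Invoke();
    return 0;
}
```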

If you would like to use **Arm NN on Android**, you can follow this guide, which explains [how to build Arm NN using the Android NDK]().
You might also want to take a look at another repository which implements a hardware abstraction layer (HAL) for
Android. The repository is called [Android-NN-Driver](https://github.com/ARM-software/android-nn-driver) and, when
integrated into Android, it will automatically run neural networks with Arm NN.


## Where to find more information
The section above introduces the most important tools that Arm NN provides.
You can find a complete list in our **Doxygen documentation**. The
latest version can be found in the [wiki section](https://github.com/ARM-software/armnn/wiki/Documentation) of our GitHub
repository.

For FAQs and troubleshooting advice, see [FAQ.md](docs/FAQ.md)
or take a look at previous [GitHub issues](https://github.com/ARM-software/armnn/issues).


## Note
1. The following tools are **deprecated** in Arm NN 21.02 and will be removed in 21.05:
   * TensorflowParser
   * CaffeParser
   * Quantizer

2. Ubuntu Linux 16.04 LTS will no longer be supported as of April 30, 2021.
   At that time, Ubuntu 16.04 LTS will no longer receive security patches or other software updates.
   Consequently, from the 21.08 release at the end of August 2021, Arm NN will no longer be officially
   supported on Ubuntu 16.04 LTS and will instead be supported on Ubuntu 18.04 LTS.

3. We are currently in the process of removing [boost](https://www.boost.org/) as a dependency of Arm NN. This process
   is finished for everything apart from our unit tests, which means you don't need boost to build and use Arm NN, but
   you do need it to execute our unit tests. Boost will soon be removed from Arm NN entirely.


## How to get involved
If you would like to get involved but don't know where to start, a good place to look is in our GitHub Issues.

Feature requests without a volunteer to implement them are closed but carry the 'Help wanted' label; these can be found
[here](https://github.com/ARM-software/armnn/issues?q=is%3Aissue+label%3A%22Help+wanted%22+).
Once you find a suitable issue, feel free to re-open it and add a comment
so that other people know you are working on it and can help.

When the feature is implemented, the 'Help wanted' label will be removed.

## Contributions
The Arm NN project welcomes contributions. For more details on contributing to Arm NN, see the [Contributing page](https://mlplatform.org/contributing/)
on the [MLPlatform.org](https://mlplatform.org/) website, or see the [Contributor Guide](ContributorGuide.md).

In particular, if you would like to implement your own backend next to our CPU, GPU and NPU backends, there are guides for
backend development:
[Backend development guide](src/backends/README.md), [Dynamic backend development guide](src/dynamic/README.md)


## Disclaimer
The armnn/tests directory contains tests used during Arm NN development. Many of them depend on third-party IP, model
protobufs and image files not distributed with Arm NN. The dependencies of some of the tests are available freely on
the Internet, for those who wish to experiment, but they won't run out of the box.


## License
Arm NN is provided under the [MIT](https://spdx.org/licenses/MIT.html) license.
See [LICENSE](LICENSE) for more information. Contributions to this project are accepted under the same license.

Individual files contain the following tag instead of the full license text.

    SPDX-License-Identifier: MIT

This enables machine processing of license information based on the SPDX License Identifiers that are available here: http://spdx.org/licenses/


## Third-party
Third-party tools used by Arm NN:

| Tool           | License (SPDX ID) | Description | Version | Provenance
|----------------|-------------------|------------------------------------------------------------------|-------------|-------------------
| cxxopts        | MIT | A lightweight C++ option parser library | SHA 12e496da3d486b87fa9df43edea65232ed852510 | https://github.com/jarro2783/cxxopts
| fmt            | MIT | {fmt} is an open-source formatting library providing a fast and safe alternative to C stdio and C++ iostreams. | 7.0.1 | https://github.com/fmtlib/fmt
| ghc            | MIT | A header-only single-file std::filesystem compatible helper library | 1.3.2 | https://github.com/gulrak/filesystem
| half           | MIT | IEEE 754 conformant 16-bit half-precision floating point library | 1.12.0 | http://half.sourceforge.net
| mapbox/variant | BSD | A header-only alternative to 'boost::variant' | 1.1.3 | https://github.com/mapbox/variant
| stb            | MIT | Image loader, resize and writer | 2.16 | https://github.com/nothings/stb