author     James Conroy <james.conroy@arm.com>   2022-05-20 18:45:51 +0100
committer  Ryan OShea <ryan.oshea3@arm.com>      2022-05-23 23:30:52 +0100
commit     7bbb79b53f95af00cfbf888fd97b1bdca81612ed (patch)
tree       8edb150e74108f36669b06a1ec0de2b73c04dab6
parent     630ce65543c08d8e7fca5be80f9a64122744d135 (diff)
download   armnn-22.05.tar.gz

Improve front-facing documentation (tag: v22.05)

* Cleanup and make homepage docs more concise and direct
* Highlight use of the TF Lite Delegate, guides and AI Tech talk

Signed-off-by: James Conroy <james.conroy@arm.com>
Change-Id: I7d0221bf804dd1769568beccbca93d23cdba24b6
-rw-r--r--  README.md | 175
1 file changed, 99 insertions(+), 76 deletions(-)
diff --git a/README.md b/README.md
index 4c7f4a6b1f..f64aed0144 100644
--- a/README.md
+++ b/README.md
@@ -1,100 +1,122 @@
-# Introduction
+# Arm NN
* [Quick Start Guides](#quick-start-guides)
-* [Software tools overview](#software-tools-overview)
-* [Where to find more information](#where-to-find-more-information)
+* [Pre-Built Binaries](#pre-built-binaries)
+* [Software Overview](#software-overview)
+* [Get Involved](#get-involved)
* [Contributions](#contributions)
* [Disclaimer](#disclaimer)
* [License](#license)
* [Third-Party](#third-party)
+* [Build Flags](#build-flags)
+
+**_Arm NN_** is the **most performant** machine learning (ML) inference engine for Android and Linux, accelerating ML
+on **Arm Cortex-A CPUs and Arm Mali GPUs**. This ML inference engine is an open source SDK which bridges the gap
+between existing neural network frameworks and power-efficient Arm IP.
+
+Arm NN outperforms generic ML libraries thanks to **Arm architecture-specific optimizations** (e.g. SVE2), which it
+gains by utilizing **[Arm Compute Library (ACL)](https://github.com/ARM-software/ComputeLibrary/)**. To target Arm
+Ethos-N NPUs, Arm NN
+utilizes the [Ethos-N NPU Driver](https://github.com/ARM-software/ethos-n-driver-stack). For Arm Cortex-M acceleration,
+please see [CMSIS-NN](https://github.com/ARM-software/CMSIS_5).
+
+Arm NN is written using portable **C++14** and built using [CMake](https://cmake.org/), enabling builds for a wide
+variety of target platforms, from a wide variety of host environments. **Python** developers can interface with Arm NN
+through the use of our **Arm NN TF Lite Delegate**.
+
+
+## Quick Start Guides
+**The Arm NN TF Lite Delegate provides the widest ML operator support in Arm NN** and is an easy way to accelerate
+your ML model. To start using the TF Lite Delegate, first download the **[Pre-Built Binaries](#pre-built-binaries)** for
+the latest release of Arm NN. Using a Python interpreter, you can load your TF Lite model into the Arm NN TF Lite
+Delegate and run accelerated inference. Please see this
+**[Quick Start Guide](delegate/DelegateQuickStartGuide.md)** on GitHub or this more comprehensive
+**[Arm Developer Guide](https://developer.arm.com/documentation/102561/latest/)** for information on how to accelerate
+your TF Lite model using the Arm NN TF Lite Delegate.
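+
+As an illustration, the Python flow looks roughly like the minimal sketch below. It is a sketch only: it assumes the
+`tflite_runtime` package is installed, a `libarmnnDelegate.so` has been extracted from the release archive, and a
+`model.tflite` is on disk; the paths and option values shown are placeholders rather than fixed names.
+
+```python
+import tflite_runtime.interpreter as tflite
+
+# Load the Arm NN TF Lite Delegate from the pre-built binaries (path is an assumption).
+armnn_delegate = tflite.load_delegate(
+    library="<path-to-armnn-binaries>/libarmnnDelegate.so",
+    options={"backends": "CpuAcc,GpuAcc,CpuRef", "logging-severity": "info"})
+
+# Hand the delegate to the standard TF Lite interpreter.
+interpreter = tflite.Interpreter(
+    model_path="model.tflite",
+    experimental_delegates=[armnn_delegate])
+interpreter.allocate_tensors()
+```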
+
+The fastest way to integrate Arm NN into an **Android app** is by using our **Arm NN AAR (Android Archive) file with
+Android Studio**. The AAR file conveniently packages up the Arm NN TF Lite Delegate, Arm NN itself and ACL, ready to be
+integrated into your Android ML application. Using the AAR allows you to benefit from the **vast operator support** of
+the Arm NN TF Lite Delegate. We held an **[Arm AI Tech Talk](https://www.youtube.com/watch?v=Zu4v0nqq2FA)** on how to
+accelerate an ML Image Segmentation app in 5 minutes using this AAR file, with the supporting guide
+**[here](https://developer.arm.com/documentation/102744/latest)**. To download the Arm NN AAR file, please see the
+**[Pre-Built Binaries](#pre-built-binaries)** section below.
+
+We also provide Debian packages for Arm NN, which are a quick way to start using Arm NN and the TF Lite Parser
+(albeit with less ML operator support than the TF Lite Delegate). There is an installation guide available
+[here](InstallationViaAptRepository.md) which provides instructions on how to install the Arm NN Core and the TF Lite
+Parser for Ubuntu 20.04.
+
+
+## Pre-Built Binaries
+
+| Operating System | Architecture-specific Release Archive (Download) |
+|-----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Android (AAR) | [![](https://img.shields.io/badge/download-android--aar-orange)](https://github.com/ARM-software/armnn/releases/download/v22.02/ArmnnDelegate-release.aar) |
+| Android 27 | [![](https://img.shields.io/badge/download-arm64--v8.2a-blue)](https://github.com/ARM-software/armnn/releases/download/v22.02/ArmNN-android-27-arm64-v8.2-a.tar.gz) [![](https://img.shields.io/badge/download-arm64--v8a-red)](https://github.com/ARM-software/armnn/releases/download/v22.02/ArmNN-android-27-arm64-v8a.tar.gz) [![](https://img.shields.io/badge/download-arm64--v7a-brightgreen)](https://github.com/ARM-software/armnn/releases/download/v22.02/ArmNN-android-27-armv7a.tar.gz) |
+| Android 28 | [![](https://img.shields.io/badge/download-arm64--v8.2a-blue)](https://github.com/ARM-software/armnn/releases/download/v22.02/ArmNN-android-28-arm64-v8.2-a.tar.gz) [![](https://img.shields.io/badge/download-arm64--v8a-red)](https://github.com/ARM-software/armnn/releases/download/v22.02/ArmNN-android-28-arm64-v8a.tar.gz) [![](https://img.shields.io/badge/download-arm64--v7a-brightgreen)](https://github.com/ARM-software/armnn/releases/download/v22.02/ArmNN-android-28-armv7a.tar.gz) |
+| Android 29 | [![](https://img.shields.io/badge/download-arm64--v8.2a-blue)](https://github.com/ARM-software/armnn/releases/download/v22.02/ArmNN-android-29-arm64-v8.2-a.tar.gz) [![](https://img.shields.io/badge/download-arm64--v8a-red)](https://github.com/ARM-software/armnn/releases/download/v22.02/ArmNN-android-29-arm64-v8a.tar.gz) [![](https://img.shields.io/badge/download-arm64--v7a-brightgreen)](https://github.com/ARM-software/armnn/releases/download/v22.02/ArmNN-android-29-armv7a.tar.gz) |
+| Linux | [![](https://img.shields.io/badge/download-aarch64-green)](https://github.com/ARM-software/armnn/releases/download/v22.02/ArmNN-linux-aarch64.tar.gz) [![](https://img.shields.io/badge/download-x86__64-yellow)](https://github.com/ARM-software/armnn/releases/download/v22.02/ArmNN-linux-x86_64.tar.gz) |
+
+
+## Software Overview
+The Arm NN SDK supports ML models in **TensorFlow Lite** (TF Lite) and **ONNX** formats.
+
+**Arm NN's TF Lite Delegate** accelerates TF Lite models through **Python or C++ APIs**. Supported TF Lite operators
+are accelerated by Arm NN, while any unsupported operators fall back to the reference TF Lite runtime,
+ensuring extensive ML operator support. **The recommended way to use Arm NN is to
+[convert your model to TF Lite format](https://www.tensorflow.org/lite/convert) and use the TF Lite Delegate.** Please
+refer to the [Quick Start Guides](#quick-start-guides) for more information on how to use the TF Lite Delegate.
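+
+Continuing the delegate sketch from the Quick Start section above, running an inference is then plain TF Lite
+interpreter usage; which operators ran in Arm NN and which fell back is decided when the model is loaded and is
+transparent to this code. The zeroed input below is purely illustrative.
+
+```python
+import numpy as np
+
+# Query the model's input/output metadata and feed a dummy input of the right shape and dtype.
+input_details = interpreter.get_input_details()
+output_details = interpreter.get_output_details()
+dummy_input = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
+interpreter.set_tensor(input_details[0]["index"], dummy_input)
+
+# Supported operators execute in Arm NN; any others run in the reference TF Lite kernels.
+interpreter.invoke()
+output = interpreter.get_tensor(output_details[0]["index"])
+```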
+
+Arm NN also provides **TF Lite and ONNX parsers** which are C++ libraries for integrating TF Lite or ONNX models
+into your ML application. Please note that these parsers provide less extensive ML operator coverage than the
+Arm NN TF Lite Delegate.
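+
+For completeness, a rough sketch of the parser flow follows. The parsers themselves are C++ libraries, but to keep
+these examples in one language it is shown via the PyArmNN Python bindings; the API names below follow the published
+PyArmNN examples and should be treated as an illustration, not a reference.
+
+```python
+import pyarmnn as ann
+
+# Parse a TF Lite model into an Arm NN network.
+parser = ann.ITfLiteParser()
+network = parser.CreateNetworkFromBinaryFile("model.tflite")
+
+# Optimize the network for the preferred backends, keeping CpuRef as a reference fallback.
+runtime = ann.IRuntime(ann.CreationOptions())
+preferred_backends = [ann.BackendId("CpuAcc"), ann.BackendId("CpuRef")]
+opt_network, messages = ann.Optimize(
+    network, preferred_backends, runtime.GetDeviceSpec(), ann.OptimizerOptions())
+
+# Load the optimized network into the runtime, ready for inference.
+net_id, _ = runtime.LoadNetwork(opt_network)
+```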
+
+**Android** ML application developers have a number of options for using Arm NN:
+* Use our Arm NN AAR (Android Archive) file with **Android Studio** as described in the
+[Quick Start Guides](#quick-start-guides) section
+* Download and use our [Pre-Built Binaries](#pre-built-binaries) for the Android platform
+* Build Arm NN from scratch with the Android NDK using this [GitHub guide](BuildGuideAndroidNDK.md)
+
+Arm also provides an [Android-NN-Driver](https://github.com/ARM-software/android-nn-driver) which implements a
+hardware abstraction layer (HAL) for the Android NNAPI. When the Android NN Driver is integrated on an Android device,
+ML models used in Android applications will automatically be accelerated by Arm NN.
+
+For more information about the Arm NN components, please refer to our
+[documentation](https://github.com/ARM-software/armnn/wiki/Documentation).
Arm NN is a key component of the [machine learning platform](https://mlplatform.org/), which is part of the
[Linaro Machine Intelligence Initiative](https://www.linaro.org/news/linaro-announces-launch-of-machine-intelligence-initiative/).
-The Arm NN SDK is a set of open-source software and tools that enables machine learning workloads on power-efficient
-devices. It provides a bridge between existing neural network frameworks and power-efficient Cortex-A CPUs,
-Arm Mali GPUs and Arm Ethos NPUs.
+For FAQs and troubleshooting advice, see the [FAQ](docs/FAQ.md) or take a look at previous
+[GitHub Issues](https://github.com/ARM-software/armnn/issues).
-<img align="center" width="400" src="https://developer.arm.com/-/media/Arm Developer Community/Images/Block Diagrams/Arm-NN/Arm-NN-Frameworks-Diagram.png"/>
-Arm NN SDK utilizes the Compute Library to target programmable cores, such as Cortex-A CPUs and Mali GPUs,
-as efficiently as possible. To target Ethos NPUs the NPU-Driver is utilized. We also welcome new contributors to provide
-their [own driver and backend](src/backends/README.md). Note, Arm NN does not provide support for Cortex-M CPUs.
+## Get Involved
+The best way to get involved is by using our software. If you need help or encounter an issue, please raise it as a
+[GitHub Issue](https://github.com/ARM-software/armnn/issues). Feel free to have a look at any of our open issues too.
+We also welcome feedback on our documentation.
-Arm NN support models created with **TensorFlow Lite** (TfLite) and **ONNX**.
-Arm NN analysis a given model and replaces the operations within it with implementations particularly designed for the
-hardware you want to execute it on. This results in a great boost of execution speed. How much faster your neural
-network can be executed depends on the operations it contains and the available hardware. Below you can see the speedup
-we've been experiencing in our experiments with a few common networks.
-
-\image html PerformanceChart.png
-
-Arm NN is written using portable C++14 and the build system uses [CMake](https://cmake.org/), therefore it is possible
-to build for a wide variety of target platforms, from a wide variety of host environments.
-
-
-## Getting started: Quick Start Guides
-Arm NN has added some quick start guides that will help you to setup Arm NN and run models quickly. The quickest way to build Arm NN is to either use our **Debian package** or use the prebuilt binaries available in the [Assets](https://github.com/ARM-software/armnn/releases) section of every Arm NN release.
-There is an installation guide available [here](InstallationViaAptRepository.md) which provides step by step instructions on how to install the Arm NN Core,
-the TfLite Parser and PyArmNN for Ubuntu 20.04. These guides can be used with the **prebuilt binaries**.
-At present we have added a [quick start guide](delegate/DelegateQuickStartGuide.md) that will show you how to integrate the delegate into TfLite to run models using python.
-More guides will be added here in the future.
-
-
-## Software Components overview
-Depending on what kind of framework (Tensorflow Lite, ONNX) you've been using to create your model there are multiple
-software tools available within Arm NN that can serve your needs.
-
-Generally, there is a **parser** available **for each supported framework**. ArmNN-Parsers are C++ libraries that you can integrate into your application to load, optimize and execute your model.
-Each parser allows you to run models from one framework. If you would like to run an ONNX model you can make use of the **Onnx-Parser**. There also is a parser available for TfLite models but the preferred way to execute TfLite models is using our TfLite-Delegate. We also provide **python bindings** for our parsers and the Arm NN core.
-We call the result **PyArmNN**. Therefore your application can be conveniently written in either C++ using the "original"
-Arm NN library or in Python using PyArmNN. You can find tutorials on how to setup and use our parsers in our doxygen
-documentation. The latest version can be found in the [wiki section](https://github.com/ARM-software/armnn/wiki/Documentation)
-of this repository.
-
-Arm NN's software toolkit comes with the **TfLite Delegate** which can be integrated into TfLite.
-TfLite will then delegate operations, that can be accelerated with Arm NN, to Arm NN. Every other operation will still be
-executed with the usual TfLite runtime. This is our **recommended way to accelerate TfLite models**. As with our parsers
-there are tutorials in our doxygen documentation that can be found in the [wiki section](https://github.com/ARM-software/armnn/wiki/Documentation).
-
-If you would like to use **Arm NN on Android** you can follow this guide which explains [how to build Arm NN using the AndroidNDK](BuildGuideAndroidNDK.md).
-But you might also want to take a look at another repository which implements a hardware abstraction layer (HAL) for
-Android. The repository is called [Android-NN-Driver](https://github.com/ARM-software/android-nn-driver) and when
-integrated into Android it will automatically run neural networks with Arm NN.
-
-
-## Where to find more information
-The section above introduces the most important components that Arm NN provides.
-You can find a complete list in our **doxygen documentation**. The
-latest version can be found in the [wiki section](https://github.com/ARM-software/armnn/wiki/Documentation) of our github
-repository.
-
-For FAQs and troubleshooting advice, see [FAQ.md](docs/FAQ.md)
-or take a look at previous [github issues](https://github.com/ARM-software/armnn/issues).
-
-
-## How to get involved
-If you would like to get involved but don't know where to start, a good place to look is in our Github Issues.
-
-Feature requests without a volunteer to implement them are closed, but have the 'Help wanted' label, these can be found
-[here](https://github.com/ARM-software/armnn/issues?q=is%3Aissue+label%3A%22Help+wanted%22+).
-Once you find a suitable Issue, feel free to re-open it and add a comment,
-so that other people know you are working on it and can help.
+Feature requests without a volunteer to implement them are closed, but have the 'Help wanted' label; these can be
+found [here](https://github.com/ARM-software/armnn/issues?q=is%3Aissue+label%3A%22Help+wanted%22+).
+Once you find a suitable Issue, feel free to re-open it and add a comment, so that Arm NN engineers know you are
+working on it and can help.
When the feature is implemented the 'Help wanted' label will be removed.
+
## Contributions
-The Arm NN project welcomes contributions. For more details on contributing to Arm NN see the [Contributing page](https://mlplatform.org/contributing/)
-on the [MLPlatform.org](https://mlplatform.org/) website, or see the [Contributor Guide](ContributorGuide.md).
+The Arm NN project welcomes contributions. For more details on contributing to Arm NN please see the
+[Contributing page](https://mlplatform.org/contributing/) on the [MLPlatform.org](https://mlplatform.org/) website,
+or see the [Contributor Guide](ContributorGuide.md).
Particularly if you'd like to implement your own backend next to our CPU, GPU and NPU backends there are guides for
-backend development:
-[Backend development guide](src/backends/README.md), [Dynamic backend development guide](src/dynamic/README.md)
+backend development: [Backend development guide](src/backends/README.md),
+[Dynamic backend development guide](src/dynamic/README.md).
## Disclaimer
The armnn/tests directory contains tests used during Arm NN development. Many of them depend on third-party IP, model
-protobufs and image files not distributed with Arm NN. The dependencies of some of the tests are available freely on
+protobufs and image files not distributed with Arm NN. The dependencies for some tests are available freely on
the Internet, for those who wish to experiment, but they won't run out of the box.
@@ -106,7 +128,8 @@ Individual files contain the following tag instead of the full license text.
SPDX-License-Identifier: MIT
-This enables machine processing of license information based on the SPDX License Identifiers that are available here: http://spdx.org/licenses/
+This enables machine processing of license information based on the SPDX License Identifiers that are available
+here: http://spdx.org/licenses/
## Third-party
@@ -123,7 +146,7 @@ Third party tools used by Arm NN:
| stb | MIT | Image loader, resize and writer | 2.16 | https://github.com/nothings/stb
-## Build process
+## Build Flags
Arm NN uses the following security-related build flags in its code:
| Build flags |