author    Jan Eilers <jan.eilers@arm.com>  2021-01-27 20:04:41 +0000
committer Jan Eilers <jan.eilers@arm.com>  2021-02-01 17:06:24 +0000
commit    06fabe12d7d115045517880bae78a4b973af7547 (patch)
tree      41700520a6da284301ae99d03657f1899d975d0b /README.md
parent    85d3671618f0d40b71ebbc80373389140390c2cd (diff)
download  armnn-06fabe12d7d115045517880bae78a4b973af7547.tar.gz
IVGCVSW-5605 Doxygen: Use readme.md as mainpage in doxygen
* Gives the readme file an update
* Removes introduction.dox
* Adds FAQ to doxygen

Signed-off-by: Jan Eilers <jan.eilers@arm.com>
Change-Id: Ibb67e7f2cac7e55556295eb7851c616561b17042
Diffstat (limited to 'README.md')
-rw-r--r--  README.md  106
1 files changed, 74 insertions, 32 deletions
diff --git a/README.md b/README.md
index e6959eb66f..69f1561c93 100644
--- a/README.md
+++ b/README.md
@@ -1,52 +1,96 @@
-# Arm NN
+# Introduction
-Arm NN is a key component of the [machine learning platform](https://mlplatform.org/), which is part of the [Linaro Machine Intelligence Initiative](https://www.linaro.org/news/linaro-announces-launch-of-machine-intelligence-initiative/). For more information on the machine learning platform and Arm NN, see: <https://mlplatform.org/>, also there is further Arm NN information available from <https://developer.arm.com/products/processors/machine-learning/arm-nn>
+* [Software tools overview](#software-tools-overview)
+* [Where to find more information](#where-to-find-more-information)
+* [Contributions](#contributions)
+* [Disclaimer](#disclaimer)
+* [License](#license)
+* [Third-Party](#third-party)
-There is a getting started guide here using TensorFlow: <https://developer.arm.com/solutions/machine-learning-on-arm/developer-material/how-to-guides/configuring-the-arm-nn-sdk-build-environment-for-tensorflow>
+Arm NN is a key component of the [machine learning platform](https://mlplatform.org/), which is part of the
+[Linaro Machine Intelligence Initiative](https://www.linaro.org/news/linaro-announces-launch-of-machine-intelligence-initiative/).
-There is a getting started guide here using TensorFlow Lite: <https://developer.arm.com/solutions/machine-learning-on-arm/developer-material/how-to-guides/configuring-the-arm-nn-sdk-build-environment-for-tensorflow-lite>
+The Arm NN SDK is a set of open-source software and tools that enables machine learning workloads on power-efficient
+devices. It provides a bridge between existing neural network frameworks and power-efficient Cortex-A CPUs,
+Arm Mali GPUs and Arm Ethos NPUs.
-There is a getting started guide here using Caffe: <https://developer.arm.com/solutions/machine-learning-on-arm/developer-material/how-to-guides/configure-the-arm-nn-sdk-build-environment-for-caffe>
+<img align="center" width="400" src="https://developer.arm.com/-/media/Arm Developer Community/Images/Block Diagrams/Arm-NN/Arm-NN-Frameworks-Diagram.png"/>
-There is a getting started guide here using ONNX: <https://developer.arm.com/solutions/machine-learning-on-arm/developer-material/how-to-guides/configuring-the-arm-nn-sdk-build-environment-for-onnx>
+Arm NN SDK utilizes the Compute Library to target programmable cores, such as Cortex-A CPUs and Mali GPUs,
+as efficiently as possible. To target Ethos NPUs, Arm NN uses the NPU driver. We also welcome new contributors to provide
+their [own driver and backend](src/backends/README.md). Note that Arm NN does not provide support for Cortex-M CPUs.
-There is a guide for backend development: [Backend development guide](src/backends/README.md)
+The latest release supports models created with **Caffe**, **TensorFlow**, **TensorFlow Lite** (TfLite) and **ONNX**.
+Arm NN analyses a given model and replaces the operations within it with implementations specifically designed for the
+hardware you want to execute it on. This results in a significant boost in execution speed. How much faster your neural
+network can be executed depends on the operations it contains and the available hardware. Below you can see the speedup
+we've measured in our experiments with a few common networks.
-There is a guide for installation of ArmNN, Tensorflow Lite Parser and PyArmnn via our Apt Repository: [Installation via Apt Repository](InstallationViaAptRepository.md)
+<img align="center" width="700" src="https://developer.arm.com/-/media/developer/Other Images/Arm_NN_performance_relative_to_other_NN_frameworks_diagram.png"/>
-There is a getting started guide for our ArmNN TfLite Delegate: [Build the TfLite Delegate natively](delegate/BuildGuideNative.md)
+Arm NN is written using portable C++14 and the build system uses [CMake](https://cmake.org/), therefore it is possible
+to build for a wide variety of target platforms, from a wide variety of host environments.
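
Since the build system is CMake, a downstream project can consume a prebuilt Arm NN in the usual way. The fragment below is a minimal sketch, not an official example; the `ARMNN_ROOT` variable and the plain `armnn` library name are assumptions you should adjust to your own install layout:

```cmake
# Hypothetical consumer project linking against a prebuilt Arm NN.
# ARMNN_ROOT is an assumed cache variable pointing at your Arm NN install.
cmake_minimum_required(VERSION 3.13)
project(armnn-consumer CXX)
set(CMAKE_CXX_STANDARD 14)   # Arm NN is portable C++14

add_executable(simple_sample main.cpp)
target_include_directories(simple_sample PRIVATE ${ARMNN_ROOT}/include)
target_link_directories(simple_sample PRIVATE ${ARMNN_ROOT}/lib)
target_link_libraries(simple_sample PRIVATE armnn)
```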
-API Documentation is available at https://github.com/ARM-software/armnn/wiki/Documentation.
-Dox files to generate Arm NN doxygen files can be found at armnn/docs/. Following generation the xhtml files can be found at armnn/documentation/
+## Getting started: Software tools overview
+Depending on which framework (TensorFlow, TensorFlow Lite, Caffe, ONNX) you have used to create your model, there are
+multiple software tools available within Arm NN that can serve your needs.
-### Build Instructions
+Generally, there is a **parser** available **for each supported framework**. Each parser allows you to run models from
+one framework, e.g. the TfLite parser lets you run TfLite models. You can integrate these parsers into your own
+application to load, optimize and execute your model. We also provide **Python bindings** for our parsers and the Arm NN core,
+called **PyArmNN**. Your application can therefore be conveniently written in either C++ using the "original"
+Arm NN library or in Python using PyArmNN. You can find tutorials on how to set up and use our parsers in our doxygen
+documentation. The latest version can be found in the [wiki section](https://github.com/ARM-software/armnn/wiki/Documentation)
+of this repository.
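
The load → optimize → execute flow described above can be sketched in pseudocode-style Python. The call names below follow the PyArmNN tutorials and may differ between releases; `model.tflite` and the backend list are placeholders, so treat this as an illustration rather than copy-paste code:

```python
# Sketch of the PyArmNN workflow with the TfLite parser (illustrative only).
import pyarmnn as ann

parser = ann.ITfLiteParser()
network = parser.CreateNetworkFromBinaryFile('model.tflite')  # placeholder path

runtime = ann.IRuntime(ann.CreationOptions())

# Prefer the accelerated CPU backend, fall back to the reference backend.
preferred_backends = [ann.BackendId('CpuAcc'), ann.BackendId('CpuRef')]
opt_network, messages = ann.Optimize(network, preferred_backends,
                                     runtime.GetDeviceSpec(),
                                     ann.OptimizerOptions())

net_id, _ = runtime.LoadNetwork(opt_network)  # ready for EnqueueWorkload calls
```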
-Arm tests the build system of Arm NN with the following build environments:
+Admittedly, building Arm NN and its parsers from source is not always easy. To improve usability, we also provide Arm NN
+as a **Debian package**. The Debian package is the easiest way to install the Arm NN core,
+the TfLite parser and PyArmNN (more support is on the way): [Installation via Apt Repository](InstallationViaAptRepository.md)
-* Android NDK: [How to use Android NDK to build Arm NN](BuildGuideAndroidNDK.md)
-* Cross compilation from x86_64 Ubuntu to arm64 Linux: [Arm NN Cross Compilation](BuildGuideCrossCompilation.md)
-* Native compilation under aarch64 Debian 9
+The newest member of Arm NN's software toolkit is the **TfLite Delegate**. The delegate can be integrated into TfLite:
+TfLite then delegates the operations that Arm NN can accelerate to Arm NN, while every other operation is still
+executed by the usual TfLite runtime. This is our **recommended way to accelerate TfLite models**. As with our parsers,
+there are tutorials in our doxygen documentation that can be found in the [wiki section](https://github.com/ARM-software/armnn/wiki/Documentation).
-Arm NN is written using portable C++14 and the build system uses [CMake](https://cmake.org/), therefore it is possible to build for a wide variety of target platforms, from a wide variety of host environments.
+If you would like to use **Arm NN on Android**, you can follow this guide, which explains
+[how to build Arm NN using the Android NDK](BuildGuideAndroidNDK.md).
+You might also want to take a look at another repository, which implements a hardware abstraction layer (HAL) for
+Android: the [Android-NN-Driver](https://github.com/ARM-software/android-nn-driver). When
+integrated into Android, it automatically runs neural networks with Arm NN.
-The armnn/tests directory contains tests used during Arm NN development. Many of them depend on third-party IP, model protobufs and image files not distributed with Arm NN. The dependencies of some of the tests are available freely on the Internet, for those who wish to experiment.
-The 'armnn/samples' directory contains SimpleSample.cpp, a very basic example of the ArmNN SDK API in use, and DynamicSample.cpp, a very basic example of using the ArmNN SDK API with the standalone sample dynamic backend.
+## Where to find more information
+The section above introduces the most important tools that Arm NN provides.
+You can find a complete list in our **doxygen documentation**. The
+latest version can be found in the [wiki section](https://github.com/ARM-software/armnn/wiki/Documentation) of our GitHub
+repository.
-The 'ExecuteNetwork' program, in armnn/tests/ExecuteNetwork, has no additional dependencies beyond those required by Arm NN and the model parsers. It takes any model and any input tensor, and simply prints out the output tensor. Run it with no arguments to see command-line help.
+For FAQs and troubleshooting advice, see [FAQ.md](docs/FAQ.md)
+or take a look at previous [GitHub issues](https://github.com/ARM-software/armnn/issues).
-The 'ArmnnConverter' program, in armnn/src/armnnConverter, has no additional dependencies beyond those required by Arm NN and the model parsers. It takes a model in TensorFlow format and produces a serialized model in Arm NN format. Run it with no arguments to see command-line help. Note that this program can only convert models for which all operations are supported by the serialization tool [src/armnnSerializer](src/armnnSerializer/README.md).
-The 'ArmnnQuantizer' program, in armnn/src/armnnQuantizer, has no additional dependencies beyond those required by Arm NN and the model parsers. It takes a 32-bit float network and converts it into a quantized asymmetric 8-bit or quantized symmetric 16-bit network.
-Static quantization is supported by default but dynamic quantization can be enabled if CSV file of raw input tensors is specified. Run it with no arguments to see command-line help.
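
The asymmetric 8-bit scheme mentioned above can be illustrated with the standard affine quantization mapping. This is generic textbook math, not ArmnnQuantizer's actual implementation, and the helper name below is made up for the example:

```python
# Affine (asymmetric) 8-bit quantization: map floats to uint8 via a
# scale and a zero point. Illustrative only, not ArmnnQuantizer's code.

def quantize_qasymm8(values):
    """Return (scale, zero_point, quantized uint8 values) for a float list."""
    lo = min(min(values), 0.0)              # the representable range must
    hi = max(max(values), 0.0)              # always include zero
    scale = (hi - lo) / 255.0 or 1.0        # guard against a zero-width range
    zero_point = round(-lo / scale)         # the uint8 value that maps to 0.0
    quantized = [max(0, min(255, round(v / scale) + zero_point))
                 for v in values]           # clamp into [0, 255]
    return scale, zero_point, quantized

scale, zp, q = quantize_qasymm8([-0.5, 0.0, 1.0, 2.05])
# zp == 50; q == [0, 50, 150, 255]
```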
+## Note
+We are currently in the process of removing [boost](https://www.boost.org/) as a dependency of Arm NN. This process
+is finished for everything apart from our unit tests. This means you don't need boost to build and use Arm NN, but
+you still need it to run our unit tests. Boost will soon be removed from Arm NN entirely.
-Note that Arm NN needs to be built against a particular version of [ARM's Compute Library](https://github.com/ARM-software/ComputeLibrary). The get_compute_library.sh in the scripts subdirectory will clone the compute library from the review.mlplatform.org github repository into a directory alongside armnn named 'clframework' and checks out the correct revision.
-For FAQs and troubleshooting advice, see [FAQ.md](docs/FAQ.md)
+## Contributions
+The Arm NN project welcomes contributions. For more details on contributing to Arm NN see the [Contributing page](https://mlplatform.org/contributing/)
+on the [MLPlatform.org](https://mlplatform.org/) website, or see the [Contributor Guide](ContributorGuide.md).
-### License
+In particular, if you'd like to implement your own backend next to our CPU, GPU and NPU backends, there are guides for
+backend development:
+[Backend development guide](src/backends/README.md), [Dynamic backend development guide](src/dynamic/README.md)
+
+## Disclaimer
+The armnn/tests directory contains tests used during Arm NN development. Many of them depend on third-party IP, model
+protobufs and image files not distributed with Arm NN. The dependencies of some of the tests are available freely on
+the Internet, for those who wish to experiment, but they won't run out of the box.
+
+
+## License
Arm NN is provided under the [MIT](https://spdx.org/licenses/MIT.html) license.
See [LICENSE](LICENSE) for more information. Contributions to this project are accepted under the same license.
@@ -56,17 +100,15 @@ Individual files contain the following tag instead of the full license text.
This enables machine processing of license information based on the SPDX License Identifiers that are available here: http://spdx.org/licenses/
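
The tag referred to above is the standard SPDX one-line identifier. In a C++ source file it is simply a comment near the top of the file, for example:

```cpp
// SPDX-License-Identifier: MIT
```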
+
+## Third-party
Third party tools used by Arm NN:
| Tool | License (SPDX ID) | Description | Version | Provenance
|----------------|-------------------|------------------------------------------------------------------|-------------|-------------------
| cxxopts | MIT | A lightweight C++ option parser library | SHA 12e496da3d486b87fa9df43edea65232ed852510 | https://github.com/jarro2783/cxxopts
| fmt | MIT | {fmt} is an open-source formatting library providing a fast and safe alternative to C stdio and C++ iostreams. | 7.0.1 | https://github.com/fmtlib/fmt
-| ghc | MIT | A header-only single-file std::filesystem compatible helper library | 1.3.2 | https://github.com/gulrak/filesystem
-| half | MIT | IEEE 754 conformant 16-bit half-precision floating point library | 1.12.0 | http://half.sourceforge.net
+| ghc | MIT | A header-only single-file std::filesystem compatible helper library | 1.3.2 | https://github.com/gulrak/filesystem
+| half | MIT | IEEE 754 conformant 16-bit half-precision floating point library | 1.12.0 | http://half.sourceforge.net
| mapbox/variant | BSD | A header-only alternative to 'boost::variant' | 1.1.3 | https://github.com/mapbox/variant
| stb | MIT | Image loader, resize and writer | 2.16 | https://github.com/nothings/stb
-
-### Contributions
-
-The Arm NN project welcomes contributions. For more details on contributing to Arm NN see the [Contributing page](https://mlplatform.org/contributing/) on the [MLPlatform.org](https://mlplatform.org/) website, or see the [Contributor Guide](ContributorGuide.md).