author    Jim Flynn <jim.flynn@arm.com>  2021-03-16 14:42:50 +0000
committer Keith Davis <keith.davis@arm.com>  2021-03-18 12:29:42 +0000
commit    be0938e22e9fa6765a74f45f9f65181d0a2dc051 (patch)
tree      1ac9aee7249d7598511d23a31fa12f8369d6e4e9
parent    f0f00819fea340a791bbef9d8bb425951e065770 (diff)
download  armnn-be0938e22e9fa6765a74f45f9f65181d0a2dc051.tar.gz
IVGCVSW-5778 Update README.md to say 16.04 LTS will be replaced by 18.04 LTS from the 21.08 release
Change-Id: I89ef90fe696706323715caa5f1a86b6dde978181 Signed-off-by: Jim Flynn <jim.flynn@arm.com>
-rw-r--r--  README.md  61
1 file changed, 33 insertions(+), 28 deletions(-)
diff --git a/README.md b/README.md
index a81712f69d..8dbe7687fd 100644
--- a/README.md
+++ b/README.md
@@ -7,41 +7,41 @@
* [License](#license)
* [Third-Party](#third-party)
-Arm NN is a key component of the [machine learning platform](https://mlplatform.org/), which is part of the
-[Linaro Machine Intelligence Initiative](https://www.linaro.org/news/linaro-announces-launch-of-machine-intelligence-initiative/).
+Arm NN is a key component of the [machine learning platform](https://mlplatform.org/), which is part of the
+[Linaro Machine Intelligence Initiative](https://www.linaro.org/news/linaro-announces-launch-of-machine-intelligence-initiative/).
-The Arm NN SDK is a set of open-source software and tools that enables machine learning workloads on power-efficient
-devices. It provides a bridge between existing neural network frameworks and power-efficient Cortex-A CPUs,
+The Arm NN SDK is a set of open-source software and tools that enables machine learning workloads on power-efficient
+devices. It provides a bridge between existing neural network frameworks and power-efficient Cortex-A CPUs,
Arm Mali GPUs and Arm Ethos NPUs.
<img align="center" width="400" src="https://developer.arm.com/-/media/Arm Developer Community/Images/Block Diagrams/Arm-NN/Arm-NN-Frameworks-Diagram.png"/>
-Arm NN SDK utilizes the Compute Library to target programmable cores, such as Cortex-A CPUs and Mali GPUs,
-as efficiently as possible. To target Ethos NPUs the NPU-Driver is utilized. We also welcome new contributors to provide
+Arm NN SDK utilizes the Compute Library to target programmable cores, such as Cortex-A CPUs and Mali GPUs,
+as efficiently as possible. To target Ethos NPUs the NPU-Driver is utilized. We also welcome new contributors to provide
their [own driver and backend](src/backends/README.md). Note, Arm NN does not provide support for Cortex-M CPUs.
-The latest release supports models created with **Caffe**, **TensorFlow**, **TensorFlow Lite** (TfLite) and **ONNX**.
-Arm NN analysis a given model and replaces the operations within it with implementations particularly designed for the
-hardware you want to execute it on. This results in a great boost of execution speed. How much faster your neural
-network can be executed depends on the operations it contains and the available hardware. Below you can see the speedup
+The latest release supports models created with **Caffe**, **TensorFlow**, **TensorFlow Lite** (TfLite) and **ONNX**.
+Arm NN analyses a given model and replaces the operations within it with implementations designed specifically for the
+hardware you want to execute it on. This results in a significant boost in execution speed. How much faster your neural
+network can be executed depends on the operations it contains and the available hardware. Below you can see the speedup
we've been experiencing in our experiments with a few common networks.
<img align="center" width="700" src="https://developer.arm.com/-/media/developer/Other Images/Arm_NN_performance_relative_to_other_NN_frameworks_diagram.png"/>
-Arm NN is written using portable C++14 and the build system uses [CMake](https://cmake.org/), therefore it is possible
+Arm NN is written using portable C++14 and the build system uses [CMake](https://cmake.org/), therefore it is possible
to build for a wide variety of target platforms, from a wide variety of host environments.
## Getting started: Software tools overview
-Depending on what kind of framework (Tensorflow, Caffe, ONNX) you've been using to create your model there are multiple
+Depending on which framework (TensorFlow, Caffe, ONNX) you used to create your model, there are multiple
software tools available within Arm NN that can serve your needs.
-Generally, there is a **parser** available **for each supported framework**. Each parser allows you to run models from
-one framework e.g. the TfLite-Parser lets you run TfLite models. You can integrate these parsers into your own
+Generally, there is a **parser** available **for each supported framework**. Each parser allows you to run models from
+one framework, e.g. the TfLite-Parser lets you run TfLite models. You can integrate these parsers into your own
application to load, optimize and execute your model. We also provide **python bindings** for our parsers and the Arm NN core.
We call the result **PyArmNN**. Therefore, your application can be conveniently written in either C++ using the "original"
Arm NN library or in Python using PyArmNN. You can find tutorials on how to set up and use our parsers in our doxygen
-documentation. The latest version can be found in the [wiki section](https://github.com/ARM-software/armnn/wiki/Documentation)
+documentation. The latest version can be found in the [wiki section](https://github.com/ARM-software/armnn/wiki/Documentation)
of this repository.
Admittedly, building Arm NN and its parsers from source is not always easy to accomplish. We are trying to increase our
@@ -54,18 +54,18 @@ executed with the usual TfLite runtime. This is our **recommended way to acceler
there are tutorials in our doxygen documentation that can be found in the [wiki section](https://github.com/ARM-software/armnn/wiki/Documentation).
If you would like to use **Arm NN on Android** you can follow this guide which explains [how to build Arm NN using the AndroidNDK]().
-But you might also want to take a look at another repository which implements a hardware abstraction layer (HAL) for
-Android. The repository is called [Android-NN-Driver](https://github.com/ARM-software/android-nn-driver) and when
+But you might also want to take a look at another repository which implements a hardware abstraction layer (HAL) for
+Android. The repository is called [Android-NN-Driver](https://github.com/ARM-software/android-nn-driver) and when
integrated into Android it will automatically run neural networks with Arm NN.
## Where to find more information
The section above introduces the most important tools that Arm NN provides.
-You can find a complete list in our **doxygen documentation**. The
-latest version can be found in the [wiki section](https://github.com/ARM-software/armnn/wiki/Documentation) of our github
+You can find a complete list in our **doxygen documentation**. The
+latest version can be found in the [wiki section](https://github.com/ARM-software/armnn/wiki/Documentation) of our github
repository.
-For FAQs and troubleshooting advice, see [FAQ.md](docs/FAQ.md)
+For FAQs and troubleshooting advice, see [FAQ.md](docs/FAQ.md)
or take a look at previous [github issues](https://github.com/ARM-software/armnn/issues).
@@ -75,23 +75,28 @@ or take a look at previous [github issues](https://github.com/ARM-software/armnn
* CaffeParser
* Quantizer
-2. We are currently in the process of removing [boost](https://www.boost.org/) as a dependency to Arm NN. This process
-is finished for everything apart from our unit tests. This means you don't need boost to build and use Arm NN but
-you need it to execute our unit tests. Boost will soon be removed from Arm NN entirely.
+2. Ubuntu Linux 16.04 LTS will no longer be supported by April 30, 2021.
+ At that time, Ubuntu 16.04 LTS will no longer receive security patches or other software updates.
+   Consequently, from the 21.08 release at the end of August 2021, Arm NN will no longer be officially
+   supported on Ubuntu 16.04 LTS; it will be supported on Ubuntu 18.04 LTS instead.
+
+3. We are currently in the process of removing [boost](https://www.boost.org/) as a dependency of Arm NN. This process
+ is finished for everything apart from our unit tests. This means you don't need boost to build and use Arm NN but
+ you need it to execute our unit tests. Boost will soon be removed from Arm NN entirely.
## Contributions
-The Arm NN project welcomes contributions. For more details on contributing to Arm NN see the [Contributing page](https://mlplatform.org/contributing/)
+The Arm NN project welcomes contributions. For more details on contributing to Arm NN see the [Contributing page](https://mlplatform.org/contributing/)
on the [MLPlatform.org](https://mlplatform.org/) website, or see the [Contributor Guide](ContributorGuide.md).
-Particularly if you'd like to implement your own backend next to our CPU, GPU and NPU backends there are guides for
-backend development:
+Particularly if you'd like to implement your own backend next to our CPU, GPU and NPU backends there are guides for
+backend development:
[Backend development guide](src/backends/README.md), [Dynamic backend development guide](src/dynamic/README.md)
## Disclaimer
-The armnn/tests directory contains tests used during Arm NN development. Many of them depend on third-party IP, model
-protobufs and image files not distributed with Arm NN. The dependencies of some of the tests are available freely on
+The armnn/tests directory contains tests used during Arm NN development. Many of them depend on third-party IP, model
+protobufs and image files not distributed with Arm NN. The dependencies of some of the tests are available freely on
the Internet, for those who wish to experiment, but they won't run out of the box.