author    Keith Davis <keith.davis@arm.com>  2021-10-21 12:24:11 +0100
committer Keith Davis <keith.davis@arm.com>  2021-10-21 16:39:51 +0100
commit    446814b0900802e93f40b57c1a0dcb461267676d (patch)
tree      2557da781651b2e7a6f2a1c5ea01eb72470df917
parent    fdb27e2c8875fa2bb354557d5291894fcb7940b0 (diff)
download  armnn-446814b0900802e93f40b57c1a0dcb461267676d.tar.gz
IVGCVSW-6237 Assess documentation impact and update relevant places
* Update Tensorflow and CMake versions
* Change Delegate python guide to be Quick Start guide
* Add links to Github prebuilt binaries

Signed-off-by: Keith Davis <keith.davis@arm.com>
Change-Id: I10797fdb6794391d80315b57a128587548df77f6
-rw-r--r--  BuildGuideCrossCompilation.md                                                  4
-rw-r--r--  delegate/BuildGuideNative.md                                                  77
-rw-r--r--  delegate/DelegateQuickStartGuide.md (renamed from delegate/IntegrateDelegateIntoPython.md)  48
-rw-r--r--  docker/x86_64/Dockerfile                                                       4
-rw-r--r--  samples/ImageClassification/README.md                                         25
5 files changed, 67 insertions, 91 deletions
diff --git a/BuildGuideCrossCompilation.md b/BuildGuideCrossCompilation.md
index 9f778e0419..72f7f02f62 100644
--- a/BuildGuideCrossCompilation.md
+++ b/BuildGuideCrossCompilation.md
@@ -137,12 +137,12 @@ onnx/onnx.proto --proto_path=. --proto_path=../google/x86_64_pb_install/include
```
## Build TfLite
-* Building TfLite (Tensorflow version 2.3.1)
+* Building TfLite (Tensorflow version 2.5.1)
```bash
cd $HOME/armnn-devenv
git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow/
-git checkout fcc4b966f1265f466e82617020af93670141b009
+git checkout tags/v2.5.1
cd ..
mkdir tflite
cd tflite
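# From here the guide configures and builds TfLite with CMake. A minimal
# sketch of that step (the exact flags are an assumption, not the guide's text):
cmake $HOME/armnn-devenv/tensorflow/tensorflow/lite
cmake --build . -j "$(nproc)"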
diff --git a/delegate/BuildGuideNative.md b/delegate/BuildGuideNative.md
index 62aa5c0eda..62d6673925 100644
--- a/delegate/BuildGuideNative.md
+++ b/delegate/BuildGuideNative.md
@@ -23,19 +23,18 @@ natively (no cross-compilation required). This is to keep this guide simple.
# Dependencies
Build Dependencies:
- * Tensorflow Lite: this guide uses version 2.3.1 . Other versions may work.
+ * Tensorflow Lite: this guide uses version 2.5.1. Other versions may work.
* Flatbuffers 1.12.0
- * Arm NN 20.11 or higher
+ * Arm NN 21.11 or higher
Required Tools:
- * Git. This guide uses version 2.17.1 . Other versions might work.
- * pip. This guide uses version 20.3.3 . Other versions might work.
- * wget. This guide uses version 1.17.1 . Other versions might work.
- * zip. This guide uses version 3.0 . Other versions might work.
- * unzip. This guide uses version 6.00 . Other versions might work.
- * cmake 3.7.0 or higher. This guide uses version 3.7.2
- * scons. This guide uses version 2.4.1 . Other versions might work.
- * bazel. This guide uses version 3.1.0 . Other versions might work.
+ * Git. This guide uses version 2.17.1. Other versions might work.
+ * pip. This guide uses version 20.3.3. Other versions might work.
+ * wget. This guide uses version 1.17.1. Other versions might work.
+ * zip. This guide uses version 3.0. Other versions might work.
+ * unzip. This guide uses version 6.00. Other versions might work.
+ * cmake 3.16.0 or higher. This guide uses version 3.16.0.
+ * scons. This guide uses version 2.4.1. Other versions might work.
Our first step is to build all of the build dependencies mentioned above. We will have to create quite a few
directories, so to make navigation a bit easier, define a base directory for the project. At this stage we can also
@@ -47,23 +46,22 @@ cd $BASEDIR
apt-get update && apt-get install git wget unzip zip python cmake scons
```
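For reference, a minimal sketch of how the base directory can be defined (the path `$HOME/armnn-delegate` is an assumption; any location works):
```bash
export BASEDIR=$HOME/armnn-delegate
mkdir -p $BASEDIR && cd $BASEDIR
```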
## Build Tensorflow Lite for C++
-Tensorflow has a few dependencies on it's own. It requires the python packages pip3, numpy, wheel,
-and also bazel which is used to compile Tensoflow. A description on how to build bazel can be
-found [here](https://docs.bazel.build/versions/master/install-compile-source.html). There are multiple ways.
-I decided to compile from source because that should work for any platform and therefore adds the most value
-to this guide. Depending on your operating system and architecture there might be an easier way.
+Tensorflow has a few dependencies of its own. It requires the python packages pip3 and numpy,
+and also Bazel or CMake, which are used to compile Tensorflow. A description of how to build Bazel can be
+found [here](https://docs.bazel.build/versions/master/install-compile-source.html). For this guide, however, we will
+compile with CMake. Depending on your operating system and architecture there might be an easier way.
```bash
-# Install the required python packages
-pip3 install -U pip numpy wheel
-
-# Bazel has a dependency on JDK (The specific JDK version depends on the bazel version but default-jdk tends to work.)
-sudo apt-get install default-jdk
-# Build Bazel
-wget -O bazel-3.1.0-dist.zip https://github.com/bazelbuild/bazel/releases/download/3.1.0/bazel-3.1.0-dist.zip
-unzip -d bazel bazel-3.1.0-dist.zip
-cd bazel
-env EXTRA_BAZEL_ARGS="--host_javabase=@local_jdk//:jdk" bash ./compile.sh
-# This creates an "output" directory where the bazel binary can be found
+wget -O cmake-3.16.0.tar.gz https://cmake.org/files/v3.16/cmake-3.16.0.tar.gz
+tar -xzf cmake-3.16.0.tar.gz -C $BASEDIR # unpacks into $BASEDIR/cmake-3.16.0
+
+# If you have an older CMake installed, remove it first in order to upgrade
+yes | sudo apt-get purge cmake
+hash -r
+
+cd $BASEDIR/cmake-3.16.0
+./bootstrap
+make
+sudo make install
```
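After `make install` completes, it is worth checking that the shell now resolves the new binary:
```bash
cmake --version # should report cmake version 3.16.0
```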
### Download and build Tensorflow Lite
@@ -72,26 +70,13 @@ env EXTRA_BAZEL_ARGS="--host_javabase=@local_jdk//:jdk" bash ./compile.sh
cd $BASEDIR
git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow/
-git checkout tags/v2.3.1 # Minimum version required for the delegate
-```
-Before we build, a target for tensorflow lite needs to be defined in the `BUILD` file. This can be
-found in the root directory of Tensorflow. Append the following target to the file:
-```bash
-cc_binary(
- name = "libtensorflow_lite_all.so",
- linkshared = 1,
- deps = [
- "//tensorflow/lite:framework",
- "//tensorflow/lite/kernels:builtin_ops",
- ],
-)
+git checkout tags/v2.5.1 # Minimum version required for the delegate is v2.3.1
```
-Now the build process can be started. When calling "configure", as below, a dialog shows up that asks the
-user to specify additional options. If you don't have any particular needs to your build, decline all
-additional options and choose default values.
+Now the build process can be started. When calling `cmake`, as below, you can specify a number of build
+flags, but if you have no need to configure your tensorflow build you can follow the exact commands below:
```bash
-PATH="$BASEDIR/bazel/output:$PATH" ./configure
-$BASEDIR/bazel/output/bazel build --config=opt --config=monolithic --strip=always libtensorflow_lite_all.so
+mkdir $BASEDIR/tflite-output && cd $BASEDIR/tflite-output
+cmake $BASEDIR/tensorflow/tensorflow/lite # Point CMake at TfLite's own CMakeLists.txt
+cmake --build . # $BASEDIR/tflite-output will be your TFLITE_LIB_ROOT directory
```
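If the build succeeds, the TfLite static library should be in the output directory. A quick sanity check, assuming the standard layout of the TfLite CMake build:
```bash
ls $BASEDIR/tflite-output/libtensorflow-lite.a
```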
## Build Flatbuffers
@@ -154,7 +139,7 @@ with the additional cmake arguments shown below
cd $BASEDIR/armnn/delegate && mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=release # A release build rather than a debug build.
-DTENSORFLOW_ROOT=$BASEDIR/tensorflow \ # The root directory where tensorflow can be found.
- -DTFLITE_LIB_ROOT=$BASEDIR/tensorflow/bazel-bin \ # Directory where tensorflow libraries can be found.
+ -DTFLITE_LIB_ROOT=$BASEDIR/tflite-output \ # Directory where tensorflow libraries can be found.
-DFLATBUFFERS_ROOT=$BASEDIR/flatbuffers-1.12.0/install \ # Flatbuffers install directory.
-DArmnn_DIR=$BASEDIR/armnn/build \ # Directory where the Arm NN library can be found
-DARMNN_SOURCE_DIR=$BASEDIR/armnn # The top directory of the Arm NN repository.
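# Once configured, a typical next step is to compile and run the delegate's
# unit tests (a sketch; the binary name is an assumption from this guide's build):
make
./DelegateUnitTests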
@@ -201,7 +186,7 @@ cmake .. -DARMCOMPUTE_ROOT=$BASEDIR/ComputeLibrary \
-DBUILD_UNIT_TESTS=0 \
-DBUILD_ARMNN_TFLITE_DELEGATE=1 \
-DTENSORFLOW_ROOT=$BASEDIR/tensorflow \
- -DTFLITE_LIB_ROOT=$BASEDIR/tensorflow/bazel-bin \
+ -DTFLITE_LIB_ROOT=$BASEDIR/tflite-output \
-DFLATBUFFERS_ROOT=$BASEDIR/flatbuffers-1.12.0/install
make
```
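To confirm the delegate library was produced, a quick check (the output path is an assumption based on this guide's directory layout):
```bash
ls $BASEDIR/armnn/build/delegate/libarmnnDelegate.so
```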
diff --git a/delegate/IntegrateDelegateIntoPython.md b/delegate/DelegateQuickStartGuide.md
index 967b9e30e9..ed462b2a1b 100644
--- a/delegate/IntegrateDelegateIntoPython.md
+++ b/delegate/DelegateQuickStartGuide.md
@@ -1,6 +1,6 @@
-# Integrate the TfLite delegate into TfLite using Python
-If you have built the TfLite delegate as a separate dynamic library then this tutorial will show you how you can
-integrate it in TfLite to run models using python.
+# TfLite Delegate Quick Start Guide
+If you have downloaded the ArmNN Github binaries or built the TfLite delegate yourself, this tutorial will show
+you how to integrate it into TfLite to run models using Python.
Here is an example python script showing how to do this. In this script we are making use of the
[external adaptor](https://www.tensorflow.org/lite/performance/implementing_delegate#option_2_leverage_external_delegate)
@@ -11,7 +11,7 @@ import tflite_runtime.interpreter as tflite
# Load TFLite model and allocate tensors.
# (if you are using the complete tensorflow package you can find load_delegate in tf.experimental.load_delegate)
-armnn_delegate = tflite.load_delegate( library="<your-armnn-build-dir>/delegate/libarmnnDelegate.so",
+armnn_delegate = tflite.load_delegate( library="<path-to-armnn-binaries>/libarmnnDelegate.so",
options={"backends": "CpuAcc,GpuAcc,CpuRef", "logging-severity":"info"})
# Delegates/Executes all operations supported by ArmNN to/with ArmNN
interpreter = tflite.Interpreter(model_path="<your-armnn-repo-dir>/delegate/python/test/test_data/mock_model.tflite",
@@ -36,17 +36,18 @@ print(output_data)
# Prepare the environment
Pre-requisites:
- * Dynamically build Arm NN Delegate library
+ * Dynamically build Arm NN Delegate library or download the ArmNN binaries
* python3 (Depends on TfLite version)
* virtualenv
* numpy (Depends on TfLite version)
- * tflite_runtime (>=2.0, depends on Arm NN Delegate)
+ * tflite_runtime (>=2.5, depends on Arm NN Delegate)
-If you haven't built the delegate yet then take a look at the [build guide](./BuildGuideNative.md).
+If you haven't built the delegate yet then take a look at the [build guide](./BuildGuideNative.md). Otherwise,
+you can download the binaries [here](https://github.com/ARM-software/armnn/releases/tag/v21.11).
We recommend creating a virtual environment for this tutorial. For the following code to work python3 is needed. Please
also check the documentation of the TfLite version you want to use. There might be additional prerequisites for the python
-version.
+version. We will use Tensorflow Lite 2.5.1 for this guide.
```bash
# Install python3 (We ended up with python3.5.3) and virtualenv
sudo apt-get install python3-pip
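# Create and activate a virtual environment (a sketch; the venv location is an
# assumption, any directory works)
pip3 install virtualenv
virtualenv -p python3 $BASEDIR/venv
source $BASEDIR/venv/bin/activate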
@@ -67,30 +68,17 @@ TfLite models. But since Arm NN is only an inference engine itself this is a per
`tflite_runtime` is also much smaller than the whole tensorflow package and better suited to run models on
mobile and embedded devices.
-At the time of writing, there are no packages of either `tensorflow` or `tflite_runtime` available on `pypi` that
-are built for an arm architecture. That means installing them using `pip` on your development board is currently not
-possible. The TfLite [website](https://www.tensorflow.org/lite/guide/python) points you at prebuilt `tflite_runtime`
-packages. However, that limits you to specific TfLite and Python versions. For this reason we will build the
-`tflite_runtime` from source.
+The TfLite [website](https://www.tensorflow.org/lite/guide/python) shows you two methods to download the `tflite_runtime` package.
+In our experience, the pip command works for most systems, including Debian. However, if you're using an older version of Tensorflow,
+you may need to build the pip package from source. You can find more information [here](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/tools/pip_package/README.md).
+In our case, with Tensorflow Lite 2.5.1, we can install it with:
-You will have downloaded the tensorflow repository in order to build the Arm NN delegate. In there you can find further
-instructions on how to build the `tflite_runtime` under `tensorflow/lite/tools/pip_package/README.md`. This tutorial
-uses bazel to build it natively but there are scripts for cross-compilation available as well.
-```bash
-# Add the directory where bazel is built to your PATH so that the script can find it
-PATH=$PATH:your/build/dir/bazel/output
-# Run the following script to build tflite_runtime natively.
-tensorflow/lite/tools/pip_package/build_pip_package_with_bazel.sh
```
-The execution of the script creates a `.whl` file which can be used by `pip` to install the TfLite Runtime package.
-The build-script produces some output in which you can find the location where the `.whl` file was created. Then all that is
-left to do is to install all necessary python packages with `pip`.
-```bash
-pip install tensorflow/lite/tools/pip_package/gen/tflite_pip/python3/dist/tflite_runtime-2.3.1-py3-none-any.whl numpy
+pip3 install --extra-index-url https://google-coral.github.io/py-repo/ tflite_runtime
```
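To confirm the package is importable from within the virtual environment, a quick check:
```bash
python3 -c "import tflite_runtime; print(tflite_runtime.__version__)"
```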
Your virtual environment is now all set up. Copy the final python script into a python file e.g.
-`ExternalDelegatePythonTutorial.py`. Modify the python script above and replace `<your-armnn-build-dir>` and
+`ExternalDelegatePythonTutorial.py`. Modify the python script above and replace `<path-to-armnn-binaries>` and
`<your-armnn-repo-dir>` with the directories you have set up. If you've been using the [native build guide](./BuildGuideNative.md)
this will be `$BASEDIR/armnn/build` and `$BASEDIR/armnn`.
@@ -100,7 +88,7 @@ python ExternalDelegatePythonTutorial.py
```
The output should look similar to this:
```bash
-Info: ArmNN v23.0.0
+Info: ArmNN v27.0.0
Info: Initialization time: 0.56 ms
@@ -116,5 +104,5 @@ You can also test the functionality of the external delegate adaptor by running
pip install pytest
cd armnn/delegate/python/test
# You can deselect tests that require backends that your hardware doesn't support using markers e.g. -m "not GpuAccTest"
-pytest --delegate-dir="<your-armnn-build-dir>/armnn/delegate/libarmnnDelegate.so" -m "not GpuAccTest"
-```
+pytest --delegate-dir="<path-to-armnn-binaries>/libarmnnDelegate.so" -m "not GpuAccTest"
+```
\ No newline at end of file
diff --git a/docker/x86_64/Dockerfile b/docker/x86_64/Dockerfile
index 3a17635fea..314017b8e1 100644
--- a/docker/x86_64/Dockerfile
+++ b/docker/x86_64/Dockerfile
@@ -122,10 +122,10 @@ RUN cd $HOME/armnn-devenv/ && git clone https://review.mlplatform.org/ml/Compute
git checkout $($HOME/armnn-devenv/armnn/scripts/get_compute_library.sh -p) && \
scons Werror=0 arch=arm64-v8a neon=1 opencl=1 embed_kernels=1 extra_cxx_flags="-fPIC" -j$(nproc) internal_only=0
-# Build Tensorflow 2.3.1
+# Build Tensorflow 2.5.1
RUN cd $HOME/armnn-devenv && git clone https://github.com/tensorflow/tensorflow.git && \
cd tensorflow && \
- git checkout fcc4b966f1265f466e82617020af93670141b009 && \
+ git checkout a4dfb8d1a71385bd6d122e4f27f86dcebb96712d && \
../armnn/scripts/generate_tensorflow_protobuf.sh ../tensorflow-protobuf ../google/x86_64_pb_install
# Download Flatbuffer
diff --git a/samples/ImageClassification/README.md b/samples/ImageClassification/README.md
index e34e12a922..ed80244c50 100644
--- a/samples/ImageClassification/README.md
+++ b/samples/ImageClassification/README.md
@@ -8,14 +8,17 @@ TensorFlow Lite Python package.
This repository assumes you have built, or have downloaded, the
`libarmnnDelegate.so` and `libarmnn.so` from the GitHub releases page. You will
-also need to have built the TensorFlow Lite library from source.
+also need to have built the TensorFlow Lite library from source if you plan on building
+these ArmNN library files yourself.
If you have not already installed these, please follow our guides in the ArmNN
repository. The guide to build the delegate can be found
[here](../../delegate/BuildGuideNative.md) and the guide to integrate the
delegate into Python can be found
-[here](../../delegate/IntegrateDelegateIntoPython.md).
+[here](../../delegate/DelegateQuickStartGuide.md).
+This guide will assume you have retrieved the binaries
+from the ArmNN Github page, so there is no need to build Tensorflow from source.
## Getting Started
@@ -73,12 +76,12 @@ from the Arm ML-Zoo).
pip3 install -r requirements.txt
```
-6. Copy over your `libtensorflow_lite_all.so` and `libarmnn.so` library files
+6. Copy over your `libarmnnDelegate.so` and `libarmnn.so` library files
you built or downloaded to the application folder before trying this
application. For example:
```bash
- cp path/to/tensorflow/directory/tensorflow/bazel-bin/libtensorflow_lite_all.so .
+ cp /path/to/armnn/binaries/libarmnnDelegate.so .
cp /path/to/armnn/binaries/libarmnn.so .
```
@@ -89,12 +92,12 @@ You should now have the following folder structure:
```
.
├── README.md
-├── run_classifier.py # script for the demo
-├── libtensorflow_lite_all.so # tflite library built from tensorflow
+├── run_classifier.py # script for the demo
+├── libarmnnDelegate.so
├── libarmnn.so
-├── cat.png # downloaded example image
-├── mobilenet_v2_1.0_224_quantized_1_default_1.tflite #tflite model from ml-zoo
-└── labelmappings.txt # model labelmappings for output processing
+├── cat.png # downloaded example image
+├── mobilenet_v2_1.0_224_quantized_1_default_1.tflite # tflite model from ml-zoo
+└── labelmappings.txt # model label mappings for output processing
```
## Run the model
@@ -104,7 +107,7 @@ python3 run_classifier.py \
--input_image cat.png \
--model_file mobilenet_v2_1.0_224_quantized_1_default_1.tflite \
--label_file labelmappings.txt \
---delegate_path /path/to/delegate/libarmnnDelegate.so.24 \
+--delegate_path /path/to/armnn/binaries/libarmnnDelegate.so \
--preferred_backends GpuAcc CpuAcc CpuRef
```
@@ -122,7 +125,7 @@ Lite Delegate requires one extra step when loading in your model:
```python
import tflite_runtime.interpreter as tflite
-armnn_delegate = tflite.load_delegate("/path/to/delegate/libarmnnDelegate.so",
+armnn_delegate = tflite.load_delegate("/path/to/armnn/binaries/libarmnnDelegate.so",
options={
"backends": "GpuAcc,CpuAcc,CpuRef",
"logging-severity": "info"