author    Colm Donelan <Colm.Donelan@arm.com>  2021-04-23 12:29:28 +0100
committer Colm Donelan <colm.donelan@arm.com>  2021-04-26 16:01:33 +0000
commit    0e3afb33786e77d631595a885d86a788cdcd27ea (patch)
tree      243995cecd43389bae739f8877186e22e4162c41
parent    937565b90bb33eea785898c44db2942dd7af56e7 (diff)
download  armnn-0e3afb33786e77d631595a885d86a788cdcd27ea.tar.gz

IVGCVSW-5762 Update delegate build guide to remove tensorflow build.

* Removed the build of tensorflow.
* Fixed some build parameters.
* Added minor fixes to improve usability.

Signed-off-by: Colm Donelan <Colm.Donelan@arm.com>
Change-Id: I9d03438ff8c3015c442d9662ae3d2b8e7cd58382
-rw-r--r--  delegate/BuildGuideNative.md | 94
1 file changed, 43 insertions(+), 51 deletions(-)
diff --git a/delegate/BuildGuideNative.md b/delegate/BuildGuideNative.md
index 6bee1576ea..ec1dc4994d 100644
--- a/delegate/BuildGuideNative.md
+++ b/delegate/BuildGuideNative.md
@@ -11,7 +11,7 @@ natively (no cross-compilation required). This is to keep this guide simple.
**Table of content:**
- [Delegate build guide introduction](#delegate-build-guide-introduction)
- [Dependencies](#dependencies)
- * [Build Tensorflow for C++](#build-tensorflow-for-c--)
+ * [Build Tensorflow Lite for C++](#build-tensorflow-lite-for-c--)
* [Build Flatbuffers](#build-flatbuffers)
* [Build the Arm Compute Library](#build-the-arm-compute-library)
* [Build the Arm NN Library](#build-the-arm-nn-library)
@@ -23,7 +23,7 @@ natively (no cross-compilation required). This is to keep this guide simple.
# Dependencies
Build Dependencies:
- * Tensorflow and Tensorflow Lite. This guide uses version 2.3.1 . Other versions might work.
+ * Tensorflow Lite: this guide uses version 2.3.1. Other versions may work.
* Flatbuffers 1.12.0
* Arm NN 20.11 or higher
@@ -39,54 +39,44 @@ Required Tools:
Our first step is to build all the build dependencies mentioned above. We will have to create quite a few
directories. To make navigation a bit easier, define a base directory for the project. At this stage we can also
-install all the tools that are required during the build.
+install all the tools that are required during the build. This guide assumes you are using a Bash shell.
```bash
-export BASEDIR=/home
+export BASEDIR=~/ArmNNDelegate
+mkdir $BASEDIR
cd $BASEDIR
apt-get update && apt-get install git wget unzip zip python cmake scons
```
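Since every later step depends on these tools, it can save time to confirm they are actually on `PATH` before continuing. This is only a convenience sketch; the tool list mirrors the `apt-get` line above:

```shell
# Report any of the required build tools that are not on PATH
missing=""
for tool in git wget unzip zip cmake scons; do
  command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done
if [ -n "$missing" ]; then
  echo "Missing tools:$missing"
else
  echo "All required tools found"
fi
```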
-
-## Build Tensorflow for C++
-Tensorflow has a few dependencies on it's own. It requires the python packages pip3, numpy, wheel, keras_preprocessing
-and also bazel which is used to compile Tensoflow. A description on how to build bazel can be
-found [here](https://docs.bazel.build/versions/master/install-compile-source.html). There are multiple ways.
-I decided to compile from source because that should work for any platform and therefore adds the most value
+## Build Tensorflow Lite for C++
+Tensorflow has a few dependencies of its own. It requires the python packages pip3, numpy, wheel,
+and also bazel, which is used to compile Tensorflow. A description of how to build bazel can be
+found [here](https://docs.bazel.build/versions/master/install-compile-source.html). There are multiple ways.
+I decided to compile from source because that should work for any platform and therefore adds the most value
to this guide. Depending on your operating system and architecture there might be an easier way.
```bash
-# Install the python packages
+# Install the required python packages
pip3 install -U pip numpy wheel
-pip3 install -U keras_preprocessing --no-deps
-# Bazel has a dependency on JDK (The JDK version depends on the bazel version you want to build)
-apt-get install openjdk-11-jdk
+# Bazel has a dependency on JDK (The specific JDK version depends on the bazel version but default-jdk tends to work.)
+sudo apt-get install default-jdk
# Build Bazel
wget -O bazel-3.1.0-dist.zip https://github.com/bazelbuild/bazel/releases/download/3.1.0/bazel-3.1.0-dist.zip
unzip -d bazel bazel-3.1.0-dist.zip
cd bazel
env EXTRA_BAZEL_ARGS="--host_javabase=@local_jdk//:jdk" bash ./compile.sh
# This creates an "output" directory where the bazel binary can be found
-
-# Download Tensorflow
+```
+
+### Download and build Tensorflow Lite
+
+```bash
cd $BASEDIR
git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow/
git checkout tags/v2.3.1 # Minimum version required for the delegate
```
-Before tensorflow can be built, targets need to be defined in the `BUILD` file that can be
-found in the root directory of Tensorflow. Append the following two targets to the file:
-```
-cc_binary(
- name = "libtensorflow_all.so",
- linkshared = 1,
- deps = [
- "//tensorflow/core:framework",
- "//tensorflow/core:tensorflow",
- "//tensorflow/cc:cc_ops",
- "//tensorflow/cc:client_session",
- "//tensorflow/cc:scope",
- "//tensorflow/c:c_api",
- ],
-)
+Before we build, a target for tensorflow lite needs to be defined in the `BUILD` file, which can be
+found in the root directory of Tensorflow. Append the following target to the file:
+```bash
cc_binary(
name = "libtensorflow_lite_all.so",
linkshared = 1,
@@ -96,18 +86,15 @@ cc_binary(
],
)
```
-Now the build process can be started. When calling "configure", as below, a dialog shows up that asks the
-user to specify additional options. If you don't have any particular needs to your build, decline all
-additional options and choose default values. Building `libtensorflow_all.so` requires quite some time.
-This might be a good time to get yourself another drink and take a break.
+Now the build process can be started. When calling "configure", as below, a dialog shows up that asks the
+user to specify additional options. If you don't have any particular needs for your build, decline all
+additional options and choose default values.
```bash
PATH="$BASEDIR/bazel/output:$PATH" ./configure
-$BASEDIR/bazel/output/bazel build --define=grpc_no_ares=true --config=opt --config=monolithic --strip=always --config=noaws libtensorflow_all.so
$BASEDIR/bazel/output/bazel build --config=opt --config=monolithic --strip=always libtensorflow_lite_all.so
```
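Once bazel finishes, the later cmake steps will look for the resulting shared library under `bazel-bin`. A quick check can confirm it is in place (the path is an assumption based on the directory layout used in this guide):

```shell
# The delegate build later expects to find the TfLite library here
TFLITE_LIB="$BASEDIR/tensorflow/bazel-bin/libtensorflow_lite_all.so"
if [ -f "$TFLITE_LIB" ]; then
  echo "TfLite library found: $TFLITE_LIB"
else
  echo "TfLite library not found - re-check the bazel build output"
fi
```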
## Build Flatbuffers
-
Flatbuffers is a memory-efficient cross-platform serialization library as
described [here](https://google.github.io/flatbuffers/). It is used in tflite to store models and is also a dependency
of the delegate. After downloading the right version it can be built and installed using cmake.
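The commands themselves fall outside this hunk; as a sketch, a typical cmake build of flatbuffers 1.12.0 might look like the following. The download URL and install prefix are assumptions, chosen to match the `flatbuffers-1.12.0/install` path used later in this guide:

```shell
# Download, build and install flatbuffers 1.12.0 (paths are assumptions)
cd $BASEDIR
wget https://github.com/google/flatbuffers/archive/v1.12.0.tar.gz
tar xf v1.12.0.tar.gz
cd flatbuffers-1.12.0
mkdir -p build install && cd build
cmake .. -DCMAKE_INSTALL_PREFIX=$BASEDIR/flatbuffers-1.12.0/install
make install
```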
@@ -129,16 +116,16 @@ both Arm CPUs and GPUs. The Arm Compute Library is used directly by Arm NN to ru
Arm CPUs and GPUs.
It is important to have the right version of ACL and Arm NN to make it work. Luckily, Arm NN and ACL are developed
-very closely and released together. If you would like to use the Arm NN version "20.11" you can use the same "20.11"
+very closely and released together. If you would like to use the Arm NN version "20.11" you should use the same "20.11"
version for ACL too.
-To build the Arm Compute Library on your platform, download the Arm Compute Library and checkout the branch
-that contains the version you want to use and build it using `scons`.
+To build the Arm Compute Library on your platform, download the Arm Compute Library and checkout the tag
+that contains the version you want to use. Build it using `scons`.
```bash
cd $BASEDIR
git clone https://review.mlplatform.org/ml/ComputeLibrary
cd ComputeLibrary/
-git checkout <branch_name> # e.g. branches/arm_compute_20_11
+git checkout <tag_name> # e.g. v20.11
# The machine used for this guide only has a Neon CPU which is why I only have "neon=1" but if
# your machine has an arm Gpu you can enable that by adding `opencl=1 embed_kernels=1` to the command below
scons arch=arm64-v8a neon=1 extra_cxx_flags="-fPIC" benchmark_tests=0 validation_tests=0
@@ -146,8 +133,8 @@ scons arch=arm64-v8a neon=1 extra_cxx_flags="-fPIC" benchmark_tests=0 validation
## Build the Arm NN Library
-After building ACL we can now continue building Arm NN. To do so, download the repository and checkout the same
-version as you did for ACL. Create a build directory and use cmake to build it.
+With ACL built we can now move on to building Arm NN. To do so, download the repository and check out the same
+version as you did for ACL. Create a build directory and use `cmake` to build it.
```bash
cd $BASEDIR
git clone "https://review.mlplatform.org/ml/armnn"
@@ -161,14 +148,14 @@ make
# Build the TfLite Delegate (Stand-Alone)
-The delegate as well as Arm NN is built using cmake. Create a build directory as usual and build the Delegate
+The delegate, like Arm NN, is built using `cmake`. Create a build directory as usual and build the delegate
with the additional cmake arguments shown below:
```bash
cd $BASEDIR/armnn/delegate && mkdir build && cd build
-cmake .. -DTENSORFLOW_LIB_DIR=$BASEDIR/tensorflow/bazel-bin \ # Directory where tensorflow libraries can be found
- -DTENSORFLOW_ROOT=$BASEDIR/tensorflow \ # The top directory of the tensorflow repository
- -DTFLITE_LIB_ROOT=$BASEDIR/tensorflow/bazel-bin \ # In our case the same as TENSORFLOW_LIB_DIR
- -DFLATBUFFERS_ROOT=$BASEDIR/flatbuffers-1.12.0/install \ # The install directory
+cmake .. -DCMAKE_BUILD_TYPE=release \ # A release build rather than a debug build.
+ -DTENSORFLOW_ROOT=$BASEDIR/tensorflow \ # The root directory where tensorflow can be found.
+ -DTFLITE_LIB_ROOT=$BASEDIR/tensorflow/bazel-bin \ # Directory where tensorflow libraries can be found.
+ -DFLATBUFFERS_ROOT=$BASEDIR/flatbuffers-1.12.0/install \ # Flatbuffers install directory.
-DArmnn_DIR=$BASEDIR/armnn/build \ # Directory where the Arm NN library can be found
-DARMNN_SOURCE_DIR=$BASEDIR/armnn # The top directory of the Arm NN repository.
# Required are the includes for Arm NN
@@ -195,18 +182,25 @@ In the introduction it was mentioned that there is a way to integrate the delega
pretty straightforward. The cmake arguments that were previously used for the delegate have to be added
to the Arm NN cmake arguments. In addition, another argument, `BUILD_ARMNN_TFLITE_DELEGATE`, needs to be added to
instruct Arm NN to build the delegate as well. The new commands to build Arm NN are as follows:
+
+Download Arm NN if you have not already done so:
```bash
cd $BASEDIR
git clone "https://review.mlplatform.org/ml/armnn"
cd armnn
git checkout <branch_name> # e.g. branches/armnn_20_11
+```
+Build Arm NN with the delegate included:
+```bash
+cd $BASEDIR
+cd armnn
+rm -rf build # Remove any previous cmake build.
mkdir build && cd build
# if you've got an arm Gpu add `-DARMCOMPUTECL=1` to the command below
cmake .. -DARMCOMPUTE_ROOT=$BASEDIR/ComputeLibrary \
-DARMCOMPUTENEON=1 \
-DBUILD_UNIT_TESTS=0 \
-DBUILD_ARMNN_TFLITE_DELEGATE=1 \
- -DTENSORFLOW_LIB_DIR=$BASEDIR/tensorflow/bazel-bin \
-DTENSORFLOW_ROOT=$BASEDIR/tensorflow \
-DTFLITE_LIB_ROOT=$BASEDIR/tensorflow/bazel-bin \
-DFLATBUFFERS_ROOT=$BASEDIR/flatbuffers-1.12.0/install
@@ -214,7 +208,6 @@ make
```
The delegate library can then be found in `build/armnn/delegate`.
-
# Integrate the Arm NN TfLite Delegate into your project
The delegate can be integrated into your C++ project by creating a TfLite Interpreter and
@@ -237,4 +230,3 @@ armnnDelegateInterpreter->ModifyGraphWithDelegate(theArmnnDelegate.get());
```
For further information on using TfLite Delegates
please visit the [tensorflow website](https://www.tensorflow.org/lite/guide).
-
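When compiling such a project, the include and library paths follow from the build steps above. The link line below is only a sketch: the source file name is a placeholder, the paths assume the directory layout used in this guide, and the exact library set may differ depending on your build options:

```shell
# Hypothetical compile/link line for a project using the Arm NN TfLite delegate.
# File names and paths are placeholders based on the layout used in this guide.
g++ my_inference.cpp -o my_inference -std=c++14 \
    -I$BASEDIR/armnn/include \
    -I$BASEDIR/armnn/delegate/include \
    -I$BASEDIR/tensorflow \
    -L$BASEDIR/armnn/build -larmnn \
    -L$BASEDIR/armnn/build/delegate -larmnnDelegate \
    -L$BASEDIR/tensorflow/bazel-bin -ltensorflow_lite_all
```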