authorRyan OShea <Ryan.OShea2@arm.com>2020-02-12 16:15:27 +0000
committerRyan OShea <Ryan.OShea2@arm.com>2020-03-11 16:43:38 +0000
commitf3a43238858a91bbd3719efc5ae6e1a3992b2d23 (patch)
treee73a8c592a04da98856d353fc291f43cb11e0114 /docs
parentc2522c47c4f3b24167b7cefadb5616302d4535b0 (diff)
downloadarmnn-f3a43238858a91bbd3719efc5ae6e1a3992b2d23.tar.gz
IVGCVSW-3726 - Doxygen Beautification
* Added .dox files for main sections * Merged .md files into .dox files * Updated Doxyfile * Stylesheet for Doxygen Signed-off-by: Ryan OShea <Ryan.OShea2@arm.com> Change-Id: Ic13c28b3235fca91aeb463cd5063750aa6d85be8
Diffstat (limited to 'docs')
-rw-r--r--docs/00_introduction.dox826
-rw-r--r--docs/01_parsers.dox290
-rw-r--r--docs/02_deserializer_serializer.dox182
-rw-r--r--docs/03_converter_quantizer.dox60
-rw-r--r--docs/04_backends.dox470
-rw-r--r--docs/05_other_tools.dox107
-rw-r--r--docs/Doxyfile60
-rw-r--r--docs/stylesheet.css221
8 files changed, 2202 insertions, 14 deletions
diff --git a/docs/00_introduction.dox b/docs/00_introduction.dox
new file mode 100644
index 0000000000..981e03387b
--- /dev/null
+++ b/docs/00_introduction.dox
@@ -0,0 +1,826 @@
+/// Copyright (c) 2017 ARM Limited.
+///
+/// SPDX-License-Identifier: MIT
+///
+/// Permission is hereby granted, free of charge, to any person obtaining a copy
+/// of this software and associated documentation files (the "Software"), to deal
+/// in the Software without restriction, including without limitation the rights
+/// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+/// copies of the Software, and to permit persons to whom the Software is
+/// furnished to do so, subject to the following conditions:
+///
+/// The above copyright notice and this permission notice shall be included in all
+/// copies or substantial portions of the Software.
+///
+/// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+/// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+/// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+/// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+/// SOFTWARE.
+///
+
+
+namespace armnn{
+/** @mainpage Introduction
+
+@tableofcontents
+@section S0_1_armnn ArmNN
+
+Arm NN is a key component of the [machine learning platform](https://mlplatform.org/), which is part of the [Linaro Machine Intelligence Initiative](https://www.linaro.org/news/linaro-announces-launch-of-machine-intelligence-initiative/). For more information on the machine learning platform and Arm NN, see <https://mlplatform.org/>. Further Arm NN information is available from <https://developer.arm.com/products/processors/machine-learning/arm-nn>
+
+There is a getting-started guide for TensorFlow here: <https://developer.arm.com/technologies/machine-learning-on-arm/developer-material/how-to-guides/configuring-the-arm-nn-sdk-build-environment-for-tensorflow>
+
+There is a getting-started guide for TensorFlow Lite here: <https://developer.arm.com/technologies/machine-learning-on-arm/developer-material/how-to-guides/configuring-the-arm-nn-sdk-build-environment-for-tensorflow-lite>
+
+There is a getting-started guide for Caffe here: <https://developer.arm.com/technologies/machine-learning-on-arm/developer-material/how-to-guides/configuring-the-arm-nn-sdk-build-environment-for-caffe>
+
+There is a getting-started guide for ONNX here: <https://developer.arm.com/technologies/machine-learning-on-arm/developer-material/how-to-guides/configuring-the-arm-nn-sdk-build-environment-for-onnx>
+
+There is a guide for backend development <a href="backends.xhtml">here</a>.
+<br/><br/><br/><br/>
+
+@section S1_license License
+
+Arm NN is provided under the [MIT](https://spdx.org/licenses/MIT.html) license.
+
+__**MIT License**__
+
+Copyright (c) 2017-2020 ARM Limited.
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
+
+
+Individual files contain the following tag instead of the full license text.
+
+ SPDX-License-Identifier: MIT
+
+This enables machine processing of license information based on the SPDX License Identifiers that are available [here](http://spdx.org/licenses/)
+<br/><br/><br/><br/>
+
+@section S2_1_contributions Contributor Guide
+
+The Arm NN project is open for external contributors and welcomes contributions. Arm NN is licensed under the [MIT license](https://spdx.org/licenses/MIT.html) and all accepted contributions must have the same license. For more details on contributing to Arm NN see the [Contributing page](https://mlplatform.org/contributing/) on the [MLPlatform.org](https://mlplatform.org/) website.
+
+@subsection S2_1_dco Developer Certificate of Origin (DCO)
+
+Before the Arm NN project accepts your contribution, you need to certify its origin and give us your permission. To manage this process we use Developer Certificate of Origin (DCO) V1.1 (https://developercertificate.org/).
+
+To indicate that you agree to the terms of the DCO, you "sign off" your contribution by adding a line with your name and e-mail address to every git commit message:
+
+Signed-off-by: John Doe <john.doe@example.org>
+
+You must use your real name; no pseudonyms or anonymous contributions are accepted.
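+
+For example, `git commit -s` adds the sign-off line automatically from your configured git identity:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ git config user.name "John Doe"
+ git config user.email john.doe@example.org
+ git commit -s -m "Describe your change"
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~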
+
+@subsection S2_2_releases Releases
+
+Official Arm NN releases are published through the official [Arm NN Github repository](https://github.com/ARM-software/armnn).
+
+@subsection S2_3_development_repository Developer Repository
+
+The Arm NN development repository is the [mlplatform.org git repository](https://git.mlplatform.org/ml/armnn.git/), hosted by [Linaro](https://www.linaro.org/).
+
+@subsection S2_4_code_review Code Review
+
+Contributions must go through code review. Code reviews are performed through the [mlplatform.org Gerrit server](https://review.mlplatform.org). Contributors need to sign up to this Gerrit server with their GitHub account credentials.
+
+Only reviewed contributions can go to the master branch of Arm NN.
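+
+For example, a typical Gerrit submission flow looks like the following sketch (the clone URL is an assumption here; use the clone and push instructions shown by the Gerrit server itself):
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ # Clone URL is an assumption; copy the exact one from review.mlplatform.org.
+ git clone https://review.mlplatform.org/ml/armnn
+ cd armnn
+ # ...make and commit your signed-off change, then push it for review:
+ git push origin HEAD:refs/for/master
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~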
+
+@subsection S2_5_continuous_integration Continuous Integration
+
+Contributions to Arm NN go through testing on Arm's CI system. All unit, integration and regression tests must pass before a contribution is merged to the Arm NN master branch.
+
+@subsection S2_6_communications Communications
+
+We encourage all Arm NN developers to subscribe to the [Arm NN developer mailing list](https://lists.linaro.org/mailman/listinfo/armnn-dev).
+<br/><br/><br/><br/>
+
+
+
+
+
+
+
+
+
+@section S3_build_instructions Build Instructions
+
+Arm tests the build system of Arm NN with the following build environments:
+
+* Android NDK
+* Cross compilation from x86_64 Ubuntu to arm64 Linux
+* Native compilation under aarch64 Debian 9
+
+
+Arm NN is written using portable C++14 and the build system uses [CMake](https://cmake.org/), therefore it is possible to build for a wide variety of target platforms, from a wide variety of host environments.
+
+The armnn/tests directory contains tests used during Arm NN development. Many of them depend on third-party IP, model protobufs and image files not distributed with Arm NN. The dependencies of some of the tests are available freely on the Internet, for those who wish to experiment.
+
+The 'armnn/samples' directory contains SimpleSample.cpp, a very basic example of the ArmNN SDK API in use.
+
+The 'ExecuteNetwork' program, in armnn/tests/ExecuteNetwork, has no additional dependencies beyond those required by Arm NN and the model parsers. It takes any model and any input tensor, and simply prints out the output tensor. Run it with no arguments to see command-line help.
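+
+As an illustrative sketch only (the option names below are assumptions; run ExecuteNetwork with no arguments for the authoritative command-line help), an invocation might look like:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ # Hypothetical example: run a TensorFlow Lite model on the CpuAcc backend.
+ # Option names are assumptions; check the program's own help output for the real ones.
+ ./tests/ExecuteNetwork --model-format tflite-binary \
+                        --model-path mobilenet_v1_1.0_224.tflite \
+                        --input-name input \
+                        --output-name MobilenetV1/Predictions/Reshape_1 \
+                        --compute CpuAcc
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~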
+
+The 'ArmnnConverter' program, in armnn/src/armnnConverter, has no additional dependencies beyond those required by Arm NN and the model parsers. It takes a model in TensorFlow format and produces a serialized model in Arm NN format. Run it with no arguments to see command-line help. Note that this program can only convert models for which all operations are supported by the serialization tool [src/armnnSerializer](src/armnnSerializer/README.md).
+
+The 'ArmnnQuantizer' program, in armnn/src/armnnQuantizer, has no additional dependencies beyond those required by Arm NN and the model parsers. It takes a 32-bit float network and converts it into a quantized asymmetric 8-bit or quantized symmetric 16-bit network.
+Static quantization is supported by default but dynamic quantization can be enabled if a CSV file of raw input tensors is specified. Run it with no arguments to see command-line help.
+
+Note that Arm NN needs to be built against a particular version of [Arm's Compute Library](https://github.com/ARM-software/ComputeLibrary). The get_compute_library.sh script in the scripts subdirectory clones the Compute Library from the review.mlplatform.org repository into a directory alongside armnn named 'clframework' and checks out the correct revision.
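+
+A minimal usage sketch (assuming the script is run from the armnn source directory with no arguments):
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ cd ~/armnn-devenv/armnn
+ ./scripts/get_compute_library.sh
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~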
+<br/><br/><br/><br/>
+
+@subsection S3_1_android_ndk_build_guide How to use the Android NDK to build ArmNN
+
+<ul>
+ <li> [Introduction](#introduction) </li>
+ <li> [Download the Android NDK and make a standalone toolchain](#downloadNDK) </li>
+ <li> [Build the Boost C++ libraries](#buildBoost) </li>
+ <li> [Build the Compute Library](#buildCL) </li>
+ <li> [Build Google's Protobuf library](#buildProtobuf) </li>
+ <li> [Download TensorFlow](#downloadTF) </li>
+ <li> [Build ArmNN](#buildArmNN) </li>
+ <li> [Run ArmNN UnitTests on an Android device](#runArmNNUnitTests) </li>
+</ul>
+
+## <a name="introduction">Introduction</a>
+These are step-by-step instructions for using the Android NDK to build ArmNN.
+They have been tested on a clean install of Ubuntu 18.04, and should also work with other OS versions.
+The instructions show how to build the ArmNN core library and the optional TensorFlow parser.
+All downloaded or generated files will be saved inside the `~/armnn-devenv` directory.
+
+
+## <a name="downloadNDK">Download the Android NDK and make a standalone toolchain</a>
+
+### Download the Android NDK from [the official website](https://developer.android.com/ndk/downloads/index.html):
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ mkdir -p ~/armnn-devenv/toolchains
+ cd ~/armnn-devenv/toolchains
+
+ #For Mac OS, change the NDK download link accordingly.
+ wget https://dl.google.com/android/repository/android-ndk-r17b-linux-x86_64.zip
+ unzip android-ndk-r17b-linux-x86_64.zip
+ export NDK=~/armnn-devenv/toolchains/android-ndk-r17b
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ You may want to append `export NDK=~/armnn-devenv/toolchains/android-ndk-r17b` to your `~/.bashrc` (or `~/.bash_profile` in Mac OS).
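+
+ For example, to make the setting persistent:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ echo 'export NDK=~/armnn-devenv/toolchains/android-ndk-r17b' >> ~/.bashrc
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~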
+
+### Make a standalone toolchain:
+
+ (Requires python if not previously installed: `sudo apt install python`)
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ # Create an arm64 API 26 libc++ toolchain.
+ $NDK/build/tools/make_standalone_toolchain.py \
+ --arch arm64 \
+ --api 26 \
+ --stl=libc++ \
+ --install-dir=$HOME/armnn-devenv/toolchains/aarch64-android-r17b
+ export PATH=$HOME/armnn-devenv/toolchains/aarch64-android-r17b/bin:$PATH
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ You may want to append `export PATH=$HOME/armnn-devenv/toolchains/aarch64-android-r17b/bin:$PATH` to your `~/.bashrc` (or `~/.bash_profile` in Mac OS).
+
+## <a name="buildBoost">Build the Boost C++ libraries</a>
+
+### Download Boost version 1.64:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+ mkdir ~/armnn-devenv/boost
+ cd ~/armnn-devenv/boost
+ wget https://dl.bintray.com/boostorg/release/1.64.0/source/boost_1_64_0.tar.bz2
+ tar xvf boost_1_64_0.tar.bz2
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+### Build:
+
+ (Requires gcc if not previously installed: `sudo apt install gcc`)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ echo "using gcc : arm : aarch64-linux-android-clang++ ;" > $HOME/armnn-devenv/boost/user-config.jam
+ cd ~/armnn-devenv/boost/boost_1_64_0
+ ./bootstrap.sh --prefix=$HOME/armnn-devenv/boost/install
+ ./b2 install --user-config=$HOME/armnn-devenv/boost/user-config.jam \
+ toolset=gcc-arm link=static cxxflags=-fPIC --with-filesystem \
+ --with-test --with-log --with-program_options -j16
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+## <a name="buildCL">Build the Compute Library</a>
+
+### Clone the Compute Library:
+
+ (Requires Git if not previously installed: `sudo apt install git`)
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ cd ~/armnn-devenv
+ git clone https://github.com/ARM-software/ComputeLibrary.git
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+### Build:
+
+ (Requires SCons if not previously installed: `sudo apt install scons`)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ cd ComputeLibrary
+ scons arch=arm64-v8a neon=1 opencl=1 embed_kernels=1 extra_cxx_flags="-fPIC" \
+ benchmark_tests=0 validation_tests=0 os=android -j16
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+## <a name="buildProtobuf">Build Google's Protobuf library</a>
+
+### Clone protobuf:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ mkdir ~/armnn-devenv/google
+ cd ~/armnn-devenv/google
+ git clone https://github.com/google/protobuf.git
+ cd protobuf
+ git checkout -b v3.5.2 v3.5.2
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+### Build a native (x86) version of the protobuf libraries and compiler (protoc):
+
+ (Requires curl, autoconf, libtool, and other build dependencies if not previously installed: `sudo apt install curl autoconf libtool build-essential g++`)
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ ./autogen.sh
+ mkdir x86_build
+ cd x86_build
+ ../configure --prefix=$HOME/armnn-devenv/google/x86_pb_install
+ make install -j16
+ cd ..
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+### Build the arm64 version of the protobuf libraries:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ mkdir arm64_build
+ cd arm64_build
+ CC=aarch64-linux-android-clang \
+ CXX=aarch64-linux-android-clang++ \
+ CFLAGS="-fPIE -fPIC" LDFLAGS="-pie -llog" \
+ ../configure --host=aarch64-linux-android \
+ --prefix=$HOME/armnn-devenv/google/arm64_pb_install \
+ --with-protoc=$HOME/armnn-devenv/google/x86_pb_install/bin/protoc
+ make install -j16
+ cd ..
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+## <a name="downloadTF">Download TensorFlow</a>
+### Clone TensorFlow source code:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ cd ~/armnn-devenv/google/
+ git clone https://github.com/tensorflow/tensorflow.git
+ cd tensorflow/
+ git checkout a0043f9262dc1b0e7dc4bdf3a7f0ef0bebc4891e
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ You need tensorflow/contrib/makefile/tf_proto_files.txt from TensorFlow to generate the TensorFlow protobuf definitions. This file is not available in the TensorFlow master branch.
+
+## <a name="buildArmNN">Build ArmNN</a>
+
+### Clone ArmNN source code:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ cd ~/armnn-devenv/
+ git clone https://github.com/ARM-software/armnn.git
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+### Generate TensorFlow protobuf definitions:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ cd ~/armnn-devenv/google/tensorflow
+ ~/armnn-devenv/armnn/scripts/generate_tensorflow_protobuf.sh \
+ $HOME/armnn-devenv/google/tf_pb $HOME/armnn-devenv/google/x86_pb_install
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+### Build ArmNN:
+
+ (Requires CMake if not previously installed: `sudo apt install cmake`)
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ mkdir ~/armnn-devenv/armnn/build
+ cd ~/armnn-devenv/armnn/build
+ CXX=aarch64-linux-android-clang++ \
+ CC=aarch64-linux-android-clang \
+ CXX_FLAGS="-fPIE -fPIC" \
+ cmake .. \
+ -DCMAKE_SYSTEM_NAME=Android \
+ -DCMAKE_ANDROID_ARCH_ABI=arm64-v8a \
+ -DCMAKE_ANDROID_STANDALONE_TOOLCHAIN=$HOME/armnn-devenv/toolchains/aarch64-android-r17b/ \
+ -DCMAKE_EXE_LINKER_FLAGS="-pie -llog" \
+ -DARMCOMPUTE_ROOT=$HOME/armnn-devenv/ComputeLibrary/ \
+ -DARMCOMPUTE_BUILD_DIR=$HOME/armnn-devenv/ComputeLibrary/build \
+ -DBOOST_ROOT=$HOME/armnn-devenv/boost/install/ \
+ -DARMCOMPUTENEON=1 -DARMCOMPUTECL=1 -DARMNNREF=1 \
+ -DTF_GENERATED_SOURCES=$HOME/armnn-devenv/google/tf_pb/ -DBUILD_TF_PARSER=1 \
+ -DPROTOBUF_ROOT=$HOME/armnn-devenv/google/arm64_pb_install/
+ make -j16
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+## <a name="runArmNNUnitTests">Run the ArmNN unit tests on an Android device</a>
+
+
+### Push the build results to an Android device and make symbolic links for shared libraries:
+ The adb version we have used for testing is 1.0.41.
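+ Before pushing, you can confirm your local adb version and that the target device is visible:
+
+~~~~~~~~~~~~~~~~~~~.sh
+
+ adb version
+ adb devices
+
+~~~~~~~~~~~~~~~~~~~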
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ adb push libarmnnTfParser.so /data/local/tmp/
+ adb push libarmnn.so /data/local/tmp/
+ adb push UnitTests /data/local/tmp/
+ adb push $NDK/sources/cxx-stl/llvm-libc++/libs/arm64-v8a/libc++_shared.so /data/local/tmp/
+ adb push $HOME/armnn-devenv/google/arm64_pb_install/lib/libprotobuf.so /data/local/tmp/libprotobuf.so.15.0.1
+ adb shell 'ln -s libprotobuf.so.15.0.1 /data/local/tmp/libprotobuf.so.15'
+ adb shell 'ln -s libprotobuf.so.15.0.1 /data/local/tmp/libprotobuf.so'
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+### Push the files needed for the unit tests (they are a mix of files, directories and symbolic links):
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ adb shell mkdir -p /data/local/tmp/src/backends/backendsCommon/test/testSharedObject
+ adb push -p ~/armnn-devenv/armnn/build/src/backends/backendsCommon/test/testSharedObject/* /data/local/tmp/src/backends/backendsCommon/test/testSharedObject/
+
+ adb shell mkdir -p /data/local/tmp/src/backends/backendsCommon/test/testDynamicBackend
+ adb push -p ~/armnn-devenv/armnn/build/src/backends/backendsCommon/test/testDynamicBackend/* /data/local/tmp/src/backends/backendsCommon/test/testDynamicBackend/
+
+ adb shell mkdir -p /data/local/tmp/src/backends/backendsCommon/test/backendsTestPath1
+ adb push -p ~/armnn-devenv/armnn/build/src/backends/backendsCommon/test/backendsTestPath1/* /data/local/tmp/src/backends/backendsCommon/test/backendsTestPath1/
+
+ adb shell mkdir -p /data/local/tmp/src/backends/backendsCommon/test/backendsTestPath2
+ adb push -p ~/armnn-devenv/armnn/build/src/backends/backendsCommon/test/backendsTestPath2/Arm_CpuAcc_backend.so /data/local/tmp/src/backends/backendsCommon/test/backendsTestPath2/
+ adb shell ln -s Arm_CpuAcc_backend.so /data/local/tmp/src/backends/backendsCommon/test/backendsTestPath2/Arm_CpuAcc_backend.so.1
+ adb shell ln -s Arm_CpuAcc_backend.so.1 /data/local/tmp/src/backends/backendsCommon/test/backendsTestPath2/Arm_CpuAcc_backend.so.1.2
+ adb shell ln -s Arm_CpuAcc_backend.so.1.2 /data/local/tmp/src/backends/backendsCommon/test/backendsTestPath2/Arm_CpuAcc_backend.so.1.2.3
+ adb push -p ~/armnn-devenv/armnn/build/src/backends/backendsCommon/test/backendsTestPath2/Arm_GpuAcc_backend.so /data/local/tmp/src/backends/backendsCommon/test/backendsTestPath2/
+ adb shell ln -s nothing /data/local/tmp/src/backends/backendsCommon/test/backendsTestPath2/Arm_no_backend.so
+
+ adb shell mkdir -p /data/local/tmp/src/backends/backendsCommon/test/backendsTestPath3
+
+ adb shell mkdir -p /data/local/tmp/src/backends/backendsCommon/test/backendsTestPath5
+ adb push -p ~/armnn-devenv/armnn/build/src/backends/backendsCommon/test/backendsTestPath5/* /data/local/tmp/src/backends/backendsCommon/test/backendsTestPath5/
+
+ adb shell mkdir -p /data/local/tmp/src/backends/backendsCommon/test/backendsTestPath6
+ adb push -p ~/armnn-devenv/armnn/build/src/backends/backendsCommon/test/backendsTestPath6/* /data/local/tmp/src/backends/backendsCommon/test/backendsTestPath6/
+
+ adb shell mkdir -p /data/local/tmp/src/backends/backendsCommon/test/backendsTestPath7
+
+ adb shell mkdir -p /data/local/tmp/src/backends/backendsCommon/test/backendsTestPath9
+ adb push -p ~/armnn-devenv/armnn/build/src/backends/backendsCommon/test/backendsTestPath9/* /data/local/tmp/src/backends/backendsCommon/test/backendsTestPath9/
+
+ adb shell mkdir -p /data/local/tmp/src/backends/dynamic/reference
+ adb push -p ~/armnn-devenv/armnn/build/src/backends/dynamic/reference/Arm_CpuRef_backend.so /data/local/tmp/src/backends/dynamic/reference/
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+### Run ArmNN unit tests:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ adb shell 'LD_LIBRARY_PATH=/data/local/tmp /data/local/tmp/UnitTests'
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ If libarmnnUtils.a is present in `~/armnn-devenv/armnn/build/` and the unit tests run without failure, then the build was successful.
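+
+ A quick way to confirm the library is present on the host (a sketch):
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ ls -l ~/armnn-devenv/armnn/build/libarmnnUtils.a
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~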
+<br/><br/><br/><br/>
+
+@subsection S3_2_cross_compilations_build_guide Cross Compilation Build Guide
+
+<ul>
+ <li> [Introduction](#introduction) </li>
+ <li> [Cross-compiling ToolChain](#installCCT) </li>
+ <li> [Build and install Google's Protobuf library](#buildProtobuf) </li>
+ <li> [Build Caffe for x86_64](#buildCaffe) </li>
+ <li> [Build Boost library for arm64](#installBaarch) </li>
+ <li> [Build Compute Library](#buildCL) </li>
+ <li> [Build ArmNN](#buildANN) </li>
+ <li> [Run Unit Tests](#unittests) </li>
+ <li> [Troubleshooting and Errors](#troubleshooting) </li>
+</ul>
+
+
+## <a name="introduction">Introduction</a>
+These are step-by-step instructions for cross-compiling ArmNN on an x86_64 system to target an Arm64 system. This build flow has been tested with Ubuntu 16.04.
+The instructions show how to build the ArmNN core library and the Boost, Protobuf, Caffe and Compute Library dependencies necessary for compilation.
+
+## <a name="installCCT">Cross-compiling ToolChain</a>
+
+### Install the standard cross-compilation libraries for arm64:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ sudo apt install crossbuild-essential-arm64
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+## <a name="buildProtobuf">Build and install Google's Protobuf library</a>
+
+### Get protobuf-all-3.5.1.tar.gz from [here](https://github.com/protocolbuffers/protobuf/releases/tag/v3.5.1).
+
+### Extract:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ tar -zxvf protobuf-all-3.5.1.tar.gz
+ cd protobuf-3.5.1
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+### Build a native (x86_64) version of the protobuf libraries and compiler (protoc):
+ (Requires curl, autoconf, libtool, and other build dependencies if not previously installed: sudo apt install curl autoconf libtool build-essential g++)
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ mkdir x86_64_build
+ cd x86_64_build
+ ../configure --prefix=$HOME/armnn-devenv/google/x86_64_pb_install
+ make install -j16
+ cd ..
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+### Build the arm64 version of the protobuf libraries:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ mkdir arm64_build
+ cd arm64_build
+ CC=aarch64-linux-gnu-gcc \
+ CXX=aarch64-linux-gnu-g++ \
+ ../configure --host=aarch64-linux \
+ --prefix=$HOME/armnn-devenv/google/arm64_pb_install \
+ --with-protoc=$HOME/armnn-devenv/google/x86_64_pb_install/bin/protoc
+ make install -j16
+ cd ..
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+## <a name="buildCaffe">Build Caffe for x86_64</a>
+
+### Ubuntu 16.04 installation. These steps are taken from the full Caffe installation documentation [here](http://caffe.berkeleyvision.org/install_apt.html)
+
+### Install dependencies:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ sudo apt-get install libleveldb-dev libsnappy-dev libopencv-dev libhdf5-serial-dev
+ sudo apt-get install --no-install-recommends libboost-all-dev
+ sudo apt-get install libgflags-dev libgoogle-glog-dev liblmdb-dev
+ sudo apt-get install libopenblas-dev
+ sudo apt-get install libatlas-base-dev
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+### Download Caffe-Master from [here](https://github.com/BVLC/caffe).
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ git clone https://github.com/BVLC/caffe.git
+ cd caffe
+ cp Makefile.config.example Makefile.config
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+### Adjust Makefile.config as necessary for your environment, for example:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ #CPU only version:
+ CPU_ONLY := 1
+
+ #Add hdf5 and protobuf include and library directories (Replace $HOME with explicit /home/username dir):
+ INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial/ $HOME/armnn-devenv/google/x86_64_pb_install/include/
+ LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib /usr/lib/x86_64-linux-gnu/hdf5/serial/ $HOME/armnn-devenv/google/x86_64_pb_install/lib/
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+### Setup environment:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ export PATH=$HOME/armnn-devenv/google/x86_64_pb_install/bin/:$PATH
+ export LD_LIBRARY_PATH=$HOME/armnn-devenv/google/x86_64_pb_install/lib/:$LD_LIBRARY_PATH
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+### Compilation with Make:
+
+~~~~~~~~~~~~~~~~.sh
+
+ make all
+ make test
+ make runtest
+
+~~~~~~~~~~~~~~~~
+
+ These should all run without errors.
+### caffe.pb.h and caffe.pb.cc will be needed when building ArmNN's Caffe Parser
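+
+ A sketch for locating the generated files (the exact output directory depends on how Caffe was built; the path below assumes Caffe was cloned under ~/armnn-devenv):
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ find ~/armnn-devenv/caffe -name 'caffe.pb.*'
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~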
+
+## <a name="installBaarch">Build Boost library for arm64</a>
+
+### Build Boost library for arm64
+ Download Boost version 1.64 from [here](http://www.boost.org/doc/libs/1_64_0/more/getting_started/unix-variants.html).
+ Using any version of Boost greater than 1.64 will cause the ArmNN build to fail due to dependency issues.
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+ tar -zxvf boost_1_64_0.tar.gz
+ cd boost_1_64_0
+ echo "using gcc : arm : aarch64-linux-gnu-g++ ;" > user_config.jam
+ ./bootstrap.sh --prefix=$HOME/armnn-devenv/boost_arm64_install
+ ./b2 install toolset=gcc-arm link=static cxxflags=-fPIC --with-filesystem --with-test --with-log --with-program_options -j32 --user-config=user_config.jam
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+## <a name="buildCL">Build Compute Library</a>
+
+### Building the Arm Compute Library:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ git clone https://github.com/ARM-software/ComputeLibrary.git
+ cd ComputeLibrary/
+ scons arch=arm64-v8a neon=1 opencl=1 embed_kernels=1 extra_cxx_flags="-fPIC" -j8 internal_only=0
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+## <a name="buildANN">Build ArmNN</a>
+
+### Compile ArmNN for arm64:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ git clone https://github.com/ARM-software/armnn.git
+ cd armnn
+ mkdir build
+ cd build
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+### Use CMake to configure your build environment: update the following script and run it from the armnn/build directory to set up the ArmNN build:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ #!/bin/bash
+ CXX=aarch64-linux-gnu-g++ \
+ CC=aarch64-linux-gnu-gcc \
+ cmake .. \
+ -DARMCOMPUTE_ROOT=$HOME/armnn-devenv/ComputeLibrary \
+ -DARMCOMPUTE_BUILD_DIR=$HOME/armnn-devenv/ComputeLibrary/build/ \
+ -DBOOST_ROOT=$HOME/armnn-devenv/boost_arm64_install/ \
+ -DARMCOMPUTENEON=1 -DARMCOMPUTECL=1 -DARMNNREF=1 \
+ -DCAFFE_GENERATED_SOURCES=$HOME/armnn-devenv/caffe/build/src \
+ -DBUILD_CAFFE_PARSER=1 \
+ -DPROTOBUF_ROOT=$HOME/armnn-devenv/google/x86_64_pb_install/ \
+ -DPROTOBUF_LIBRARY_DEBUG=$HOME/armnn-devenv/google/arm64_pb_install/lib/libprotobuf.so.15.0.1 \
+ -DPROTOBUF_LIBRARY_RELEASE=$HOME/armnn-devenv/google/arm64_pb_install/lib/libprotobuf.so.15.0.1
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+### Run the build
+
+~~~~~~~~~~~~~.sh
+
+ make -j32
+
+~~~~~~~~~~~~~
+
+## <a name="unittests">Run Unit Tests</a>
+
+### Copy the build folder to an arm64 linux machine
+
+### Copy the libprotobuf.so.15.0.1 library file to the build folder
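+
+ For example (a sketch; `<user>` and `<arm64-host>` are placeholders for your target machine, and the paths assume the layout used earlier in this guide):
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ scp -r ~/armnn-devenv/armnn/build <user>@<arm64-host>:~/build
+ scp $HOME/armnn-devenv/google/arm64_pb_install/lib/libprotobuf.so.15.0.1 <user>@<arm64-host>:~/build/
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~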
+
+### cd to the build folder on your arm64 machine and set your LD_LIBRARY_PATH to its current location:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ cd build/
+ export LD_LIBRARY_PATH=`pwd`
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+### Create a symbolic link to libprotobuf.so.15.0.1:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ ln -s libprotobuf.so.15.0.1 ./libprotobuf.so.15
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+### Run the UnitTests:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ ./UnitTests
+ Running 567 test cases...
+
+ *** No errors detected
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+## <a name="troubleshooting">Troubleshooting and Errors:</a>
+## Error adding symbols: File in wrong format
+
+### When building armNN:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ /usr/local/lib/libboost_log.a: error adding symbols: File in wrong format
+ collect2: error: ld returned 1 exit status
+ CMakeFiles/armnn.dir/build.make:4028: recipe for target 'libarmnn.so' failed
+ make[2]: *** [libarmnn.so] Error 1
+ CMakeFiles/Makefile2:105: recipe for target 'CMakeFiles/armnn.dir/all' failed
+ make[1]: *** [CMakeFiles/armnn.dir/all] Error 2
+ Makefile:127: recipe for target 'all' failed
+ make: *** [all] Error 2
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+### The Boost libraries are not compiled for the correct architecture; try recompiling them for arm64
+
+## Virtual memory exhausted
+### When compiling the boost libraries:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ virtual memory exhausted: Cannot allocate memory
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+### Not enough memory available to compile. Increase the amount of RAM or swap space available.
+
+
+## Unrecognized command line option '-m64'
+
+### When compiling the boost libraries:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ aarch64-linux-gnu-g++: error: unrecognized command line option ‘-m64’
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+### Clean the boost library directory before trying to build with a different architecture:
+
+~~~~~~~~~~~~~~~~~~~.sh
+
+ sudo ./b2 clean
+
+~~~~~~~~~~~~~~~~~~~
+
+### It should show the following for arm64:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ - 32-bit : no
+ - 64-bit : yes
+ - arm : yes
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+
+## Missing libz.so.1
+
+### When compiling armNN:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ /usr/lib/gcc-cross/aarch64-linux-gnu/5/../../../../aarch64-linux-gnu/bin/ld: warning: libz.so.1, needed by /home/<username>/armNN/usr/lib64/libprotobuf.so.15.0.0, not found (try using -rpath or -rpath-link)
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+### The arm64 libraries for libz.so.1 are missing; these can be added by adding a second architecture to dpkg and explicitly installing them:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ sudo dpkg --add-architecture arm64
+ sudo apt-get update
+ sudo apt-get install zlib1g:arm64
+ sudo ldconfig
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+### If apt-get update returns 404 errors for the arm64 repos, refer to the "Unable to install arm64 packages after adding arm64 architecture" section below.
+
+### Alternatively, the missing arm64 version of libz.so.1 can be downloaded and installed from a .deb package [here](https://launchpad.net/ubuntu/wily/arm64/zlib1g/1:1.2.8.dfsg-2ubuntu4)
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ sudo dpkg -i zlib1g_1.2.8.dfsg-2ubuntu4_arm64.deb
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+## Unable to install arm64 packages after adding arm64 architecture
+
+### Using sudo apt-get update should add all of the required repos for arm64, but if it does not, or you are getting 404 errors, the following instructions can be used to add the repos manually:
+
+### From this Ask Ubuntu answer [here](https://askubuntu.com/questions/430705/how-to-use-apt-get-to-download-multi-arch-library/430718).
+
+### Open /etc/apt/sources.list with your preferred text editor.
+
+### Mark all the current (default) repos as \[arch=<current_os_arch>], e.g.
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ xenial main restricted
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+### Then add the following:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ deb [arch=arm64] http://ports.ubuntu.com/ xenial main restricted
+ deb [arch=arm64] http://ports.ubuntu.com/ xenial-updates main restricted
+ deb [arch=arm64] http://ports.ubuntu.com/ xenial universe
+ deb [arch=arm64] http://ports.ubuntu.com/ xenial-updates universe
+ deb [arch=arm64] http://ports.ubuntu.com/ xenial multiverse
+ deb [arch=arm64] http://ports.ubuntu.com/ xenial-updates multiverse
+ deb [arch=arm64] http://ports.ubuntu.com/ xenial-backports main restricted universe multiverse
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+### Update and install again:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ sudo apt-get update
+ sudo apt-get install zlib1g:arm64
+ sudo ldconfig
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+## Undefined references to google::protobuf:: functions
+
+### When compiling armNN there are multiple errors of the following type:
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ libarmnnCaffeParser.so: undefined reference to `google::protobuf:*
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+### The protobuf compilation libraries are missing or out of date.
+ Use the command `protoc --version` to check which version of protobuf is available (version 3.5.1 is required).
+ Follow the instructions above to install protobuf 3.5.1.
+ Note that this will require you to recompile Caffe for x86_64.
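+
+ A quick check of the installed version (the output line shown is what protobuf 3.5.1 typically prints):
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ protoc --version
+ # libprotoc 3.5.1
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~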
+
+## Errors on strict-aliasing rules when compiling the Compute Library
+
+### When compiling the Compute Library there are multiple errors on strict-aliasing rules:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ cc1plus: error: unrecognized command line option ‘-Wno-implicit-fallthrough’ [-Werror]
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+### Add Werror=0 to the scons command:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+
+ scons arch=arm64-v8a neon=1 opencl=1 embed_kernels=1 extra_cxx_flags="-fPIC" -j8 internal_only=0 Werror=0
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+**/
+}
diff --git a/docs/01_parsers.dox b/docs/01_parsers.dox
new file mode 100644
index 0000000000..e73334744f
--- /dev/null
+++ b/docs/01_parsers.dox
@@ -0,0 +1,290 @@
+/// Copyright (c) 2017 ARM Limited.
+///
+/// SPDX-License-Identifier: MIT
+///
+/// Permission is hereby granted, free of charge, to any person obtaining a copy
+/// of this software and associated documentation files (the "Software"), to deal
+/// in the Software without restriction, including without limitation the rights
+/// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+/// copies of the Software, and to permit persons to whom the Software is
+/// furnished to do so, subject to the following conditions:
+///
+/// The above copyright notice and this permission notice shall be included in all
+/// copies or substantial portions of the Software.
+///
+/// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+/// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+/// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+/// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+/// SOFTWARE.
+///
+
+namespace armnn
+{
+/**
+@page parsers Parsers
+
+@tableofcontents
+@section S4_caffe_parser ArmNN Caffe Parser
+
+`armnnCaffeParser` is a library for loading neural networks defined in Caffe protobuf files into the Arm NN runtime.
+
+## Caffe layers supported by the Arm NN SDK
+This reference guide provides a list of Caffe layers the Arm NN SDK currently supports.
+
+## Although some other neural networks might work, Arm tests the Arm NN SDK with Caffe implementations of the following neural networks:
+
+- AlexNet.
+- Inception-BN.
+- Resnet_50, Resnet_101 and Resnet_152.
+- VGG_CNN_S, VGG_16 and VGG_19.
+- Yolov1_tiny.
+- Lenet.
+- MobileNetv1.
+
+Using these datasets:
+- Cifar10.
+
+## The Arm NN SDK supports the following machine learning layers for Caffe networks:
+
+- BatchNorm, in inference mode.
+- Convolution, excluding the Dilation Size, Weight Filler, Bias Filler, Engine, Force nd_im2col, and Axis parameters.
+  Caffe doesn't support depthwise convolution directly; the equivalent layer is implemented through the notion of groups. ArmNN supports groups this way:
+  - when group=1, it is a normal conv2d
+  - when group=#input_channels, it can be replaced with a depthwise convolution
+  - when group>1 && group<#input_channels, the input is split into the given number of groups, a separate convolution is applied to each, and the results are merged
+- Concat, along the channel dimension only.
+- Dropout, in inference mode.
+- Element wise, excluding the coefficient parameter.
+- Inner Product, excluding the Weight Filler, Bias Filler, Engine, and Axis parameters.
+- Input.
+- Local Response Normalisation (LRN), excluding the Engine parameter.
+- Pooling, excluding the Stochastic Pooling and Engine parameters.
+- ReLU.
+- Scale.
+- Softmax, excluding the Axis and Engine parameters.
+- Split.
+
+More machine learning layers will be supported in future releases.
+
+Please note that certain deprecated Caffe features are not supported by the armnnCaffeParser. If you think that Arm NN should be able to load your model according to the list of supported layers, but you are getting strange error messages, then try upgrading your model to the latest format using Caffe, either by saving it to a new file or using the upgrade utilities in `caffe/tools`.
+<br/><br/><br/><br/>
+
+@section S5_onnx_parser ArmNN Onnx Parser
+
+`armnnOnnxParser` is a library for loading neural networks defined in ONNX protobuf files into the Arm NN runtime.
+
+## ONNX operators that the Arm NN SDK supports
+
+This reference guide provides a list of ONNX operators the Arm NN SDK currently supports.
+
+The Arm NN SDK ONNX parser currently only supports fp32 operators.
+
+## Fully supported
+
+- Add
+ - See the ONNX [Add documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#Add) for more information
+- AveragePool
+ - See the ONNX [AveragePool documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#AveragePool) for more information.
+- Constant
+ - See the ONNX [Constant documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#Constant) for more information.
+- GlobalAveragePool
+ - See the ONNX [GlobalAveragePool documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#GlobalAveragePool) for more information.
+- MaxPool
+ - See the ONNX [max_pool documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#MaxPool) for more information.
+- Relu
+ - See the ONNX [Relu documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#Relu) for more information.
+- Reshape
+ - See the ONNX [Reshape documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#Reshape) for more information.
+
+## Partially supported
+
+- Conv
+  - The parser only supports 2D convolutions with a dilation rate of [1, 1] and group = 1 or group = #Nb_of_channel (depthwise convolution).
+ See the ONNX [Conv documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#Conv) for more information.
+- BatchNormalization
+ - The parser does not support training mode. See the ONNX [BatchNormalization documentation](https://github.com/onnx/onnx/blob/master/docs/Operators.md#BatchNormalization) for more information.
+- MatMul
+ - The parser only supports constant weights in a fully connected layer.
+
+## Tested networks
+
+Arm tested these operators with the following ONNX fp32 neural networks:
+- Simple MNIST. See the ONNX [MNIST documentation](https://github.com/onnx/models/tree/master/mnist) for more information.
+- Mobilenet_v2. See the ONNX [MobileNet documentation](https://github.com/onnx/models/tree/master/models/image_classification/mobilenet) for more information.
+
+More machine learning operators will be supported in future releases.
+<br/><br/><br/><br/>
+
+@section S6_tf_lite_parser ArmNN Tf Lite Parser
+
+`armnnTfLiteParser` is a library for loading neural networks defined by TensorFlow Lite FlatBuffers files
+into the Arm NN runtime.
+
+## TensorFlow Lite operators that the Arm NN SDK supports
+
+This reference guide provides a list of TensorFlow Lite operators the Arm NN SDK currently supports.
+
+## Fully supported
+
+The Arm NN SDK TensorFlow Lite parser currently supports the following operators:
+
+- ADD
+- AVERAGE_POOL_2D, Supported Fused Activation: RELU , RELU6 , TANH, NONE
+- BATCH_TO_SPACE
+- CONCATENATION, Supported Fused Activation: RELU , RELU6 , TANH, NONE
+- CONV_2D, Supported Fused Activation: RELU , RELU6 , TANH, NONE
+- DEPTHWISE_CONV_2D, Supported Fused Activation: RELU , RELU6 , TANH, NONE
+- FULLY_CONNECTED, Supported Fused Activation: RELU , RELU6 , TANH, NONE
+- LOGISTIC
+- L2_NORMALIZATION
+- MAX_POOL_2D, Supported Fused Activation: RELU , RELU6 , TANH, NONE
+- MAXIMUM
+- MEAN
+- MINIMUM
+- MUL
+- PACK
+- PAD
+- RELU
+- RELU6
+- RESHAPE
+- RESIZE_BILINEAR
+- SLICE
+- SOFTMAX
+- SPACE_TO_BATCH
+- SPLIT
+- SQUEEZE
+- STRIDED_SLICE
+- SUB
+- TANH
+- TRANSPOSE
+- TRANSPOSE_CONV
+- UNPACK
+
+## Custom Operator
+
+- TFLite_Detection_PostProcess
+
+## Tested networks
+
+Arm tested these operators with the following TensorFlow Lite neural networks:
+- [Quantized MobileNet](http://download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_1.0_224_quant.tgz)
+- [Quantized SSD MobileNet](http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_quantized_300x300_coco14_sync_2018_07_18.tar.gz)
+- DeepSpeech v1 converted from [TensorFlow model](https://github.com/mozilla/DeepSpeech/releases/tag/v0.4.1)
+- DeepSpeaker
+
+More machine learning operators will be supported in future releases.
+<br/><br/><br/><br/>
+
+@section S7_tf_parser ArmNN Tensorflow Parser
+
+`armnnTfParser` is a library for loading neural networks defined by TensorFlow protobuf files into the Arm NN runtime.
+
+## TensorFlow operators that the Arm NN SDK supports
+
+This reference guide provides a list of TensorFlow operators the Arm NN SDK currently supports.
+
+The Arm NN SDK TensorFlow parser currently only supports fp32 operators.
+
+## Fully supported
+
+- avg_pool
+ - See the TensorFlow [avg_pool documentation](https://www.tensorflow.org/api_docs/python/tf/nn/avg_pool) for more information.
+- bias_add
+ - See the TensorFlow [bias_add documentation](https://www.tensorflow.org/api_docs/python/tf/nn/bias_add) for more information.
+- conv2d
+ - See the TensorFlow [conv2d documentation](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d) for more information.
+- expand_dims
+ - See the TensorFlow [expand_dims documentation](https://www.tensorflow.org/api_docs/python/tf/expand_dims) for more information.
+- gather
+ - See the TensorFlow [gather documentation](https://www.tensorflow.org/api_docs/python/tf/gather) for more information.
+- identity
+ - See the TensorFlow [identity documentation](https://www.tensorflow.org/api_docs/python/tf/identity) for more information.
+- local_response_normalization
+ - See the TensorFlow [local_response_normalization documentation](https://www.tensorflow.org/api_docs/python/tf/nn/local_response_normalization) for more information.
+- max_pool
+ - See the TensorFlow [max_pool documentation](https://www.tensorflow.org/api_docs/python/tf/nn/max_pool) for more information.
+- placeholder
+ - See the TensorFlow [placeholder documentation](https://www.tensorflow.org/api_docs/python/tf/placeholder) for more information.
+- reduce_mean
+  - See the TensorFlow [reduce_mean documentation](https://www.tensorflow.org/api_docs/python/tf/reduce_mean) for more information.
+- relu
+ - See the TensorFlow [relu documentation](https://www.tensorflow.org/api_docs/python/tf/nn/relu) for more information.
+- relu6
+ - See the TensorFlow [relu6 documentation](https://www.tensorflow.org/api_docs/python/tf/nn/relu6) for more information.
+- rsqrt
+ - See the TensorFlow [rsqrt documentation](https://www.tensorflow.org/api_docs/python/tf/math/rsqrt) for more information.
+- shape
+ - See the TensorFlow [shape documentation](https://www.tensorflow.org/api_docs/python/tf/shape) for more information.
+- sigmoid
+ - See the TensorFlow [sigmoid documentation](https://www.tensorflow.org/api_docs/python/tf/sigmoid) for more information.
+- softplus
+ - See the TensorFlow [softplus documentation](https://www.tensorflow.org/api_docs/python/tf/nn/softplus) for more information.
+- squeeze
+ - See the TensorFlow [squeeze documentation](https://www.tensorflow.org/api_docs/python/tf/squeeze) for more information.
+- tanh
+ - See the TensorFlow [tanh documentation](https://www.tensorflow.org/api_docs/python/tf/tanh) for more information.
+
+## Partially supported
+
+- add
+ - The parser does not support all forms of [broadcast composition](https://www.tensorflow.org/performance/xla/broadcasting), only broadcasting of scalars and 1D tensors. See the TensorFlow [add operator documentation](https://www.tensorflow.org/api_docs/python/tf/add) for more information.
+- add_n
+ - The parser does not support all forms of [broadcast composition](https://www.tensorflow.org/performance/xla/broadcasting), only broadcasting of scalars and 1D tensors. See the TensorFlow [add operator documentation](https://www.tensorflow.org/api_docs/python/tf/add_n) for more information.
+- concat
+ - Arm NN supports concatenation along the channel dimension for data formats NHWC and NCHW.
+- constant
+ - The parser does not support the optional `shape` argument. It always infers the shape of the output tensor from `value`. See the TensorFlow [constant documentation](https://www.tensorflow.org/api_docs/python/tf/constant) for further information.
+- depthwise_conv2d_native
+ - The parser only supports a dilation rate of (1,1,1,1). See the TensorFlow [depthwise_conv2d_native documentation](https://www.tensorflow.org/api_docs/python/tf/nn/depthwise_conv2d_native) for more information.
+- equal
+ - The parser does not support all forms of [broadcast composition](https://www.tensorflow.org/performance/xla/broadcasting), only broadcasting of 4D and 1D tensors. See the TensorFlow [equal operator documentation](https://www.tensorflow.org/api_docs/python/tf/math/equal) for more information.
+- fused_batch_norm
+ - The parser does not support training outputs. See the TensorFlow [fused_batch_norm documentation](https://www.tensorflow.org/api_docs/python/tf/nn/fused_batch_norm) for more information.
+- greater
+ - The parser does not support all forms of [broadcast composition](https://www.tensorflow.org/performance/xla/broadcasting), only broadcasting of 4D and 1D tensors. See the TensorFlow [greater operator documentation](https://www.tensorflow.org/api_docs/python/tf/math/greater) for more information.
+- matmul
+ - The parser only supports constant weights in a fully connected layer. See the TensorFlow [matmul documentation](https://www.tensorflow.org/api_docs/python/tf/matmul) for more information.
+- maximum
+ where maximum is used in one of the following ways
+ - max(mul(a, x), x)
+ - max(mul(x, a), x)
+ - max(x, mul(a, x))
+  - max(x, mul(x, a))
+  This is interpreted as an ActivationLayer with a LeakyRelu activation function. Any other usage of max will result in the insertion of a simple maximum layer. The parser does not support all forms of [broadcast composition](https://www.tensorflow.org/performance/xla/broadcasting). See the TensorFlow [maximum documentation](https://www.tensorflow.org/api_docs/python/tf/maximum) for more information.
+- minimum
+ - The parser does not support all forms of [broadcast composition](https://www.tensorflow.org/performance/xla/broadcasting), only broadcasting of 4D and 1D tensors. See the TensorFlow [minimum operator documentation](https://www.tensorflow.org/api_docs/python/tf/math/minimum) for more information.
+- multiply
+ - The parser does not support all forms of [broadcast composition](https://www.tensorflow.org/performance/xla/broadcasting), only broadcasting of scalars and 1D tensors. See the TensorFlow [multiply documentation](https://www.tensorflow.org/api_docs/python/tf/multiply) for more information.
+- pad
+ - Only supports tf.pad function with mode = 'CONSTANT' and constant_values = 0. See the TensorFlow [pad documentation](https://www.tensorflow.org/api_docs/python/tf/pad) for more information.
+- realdiv
+ - The parser does not support all forms of [broadcast composition](https://www.tensorflow.org/performance/xla/broadcasting), only broadcasting of scalars and 1D tensors. See the TensorFlow [realdiv documentation](https://www.tensorflow.org/api_docs/python/tf/realdiv) for more information.
+- reshape
+ - The parser does not support reshaping to or from 4D. See the TensorFlow [reshape documentation](https://www.tensorflow.org/api_docs/python/tf/reshape) for more information.
+- resize_images
+ - The parser only supports `ResizeMethod.BILINEAR` with `align_corners=False`. See the TensorFlow [resize_images documentation](https://www.tensorflow.org/api_docs/python/tf/image/resize_images) for more information.
+- softmax
+ - The parser only supports 2D inputs and does not support selecting the `softmax` dimension. See the TensorFlow [softmax documentation](https://www.tensorflow.org/api_docs/python/tf/nn/softmax) for more information.
+- split
+ - Arm NN supports split along the channel dimension for data formats NHWC and NCHW.
+- subtract
+  - The parser does not support all forms of [broadcast composition](https://www.tensorflow.org/performance/xla/broadcasting), only broadcasting of scalars and 1D tensors. See the TensorFlow [subtract documentation](https://www.tensorflow.org/api_docs/python/tf/math/subtract) for more information.
+
+## Tested networks
+
+Arm tests these operators with the following TensorFlow fp32 neural networks:
+- Lenet
+- mobilenet_v1_1.0_224. The Arm NN SDK only supports the non-quantized version of the network. See the [MobileNet_v1 documentation](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md) for more information on quantized networks.
+- inception_v3. The Arm NN SDK only supports the official inception_v3 transformed model. See the TensorFlow documentation on [preparing models for mobile deployment](https://www.tensorflow.org/mobile/prepare_models) for more information on how to transform the inception_v3 network.
+
+Using these datasets:
+- Cifar10
+- Simple MNIST. For more information check out the [tutorial](https://developer.arm.com/technologies/machine-learning-on-arm/developer-material/how-to-guides/deploying-a-tensorflow-mnist-model-on-arm-nn) on the Arm Developer portal.
+
+More machine learning operators will be supported in future releases.
+
+**/
+}
diff --git a/docs/02_deserializer_serializer.dox b/docs/02_deserializer_serializer.dox
new file mode 100644
index 0000000000..1cd0516a1e
--- /dev/null
+++ b/docs/02_deserializer_serializer.dox
@@ -0,0 +1,182 @@
+/// Copyright (c) 2017 ARM Limited.
+///
+/// SPDX-License-Identifier: MIT
+///
+/// Permission is hereby granted, free of charge, to any person obtaining a copy
+/// of this software and associated documentation files (the "Software"), to deal
+/// in the Software without restriction, including without limitation the rights
+/// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+/// copies of the Software, and to permit persons to whom the Software is
+/// furnished to do so, subject to the following conditions:
+///
+/// The above copyright notice and this permission notice shall be included in all
+/// copies or substantial portions of the Software.
+///
+/// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+/// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+/// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+/// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+/// SOFTWARE.
+///
+
+namespace armnn
+{
+/**
+@page serializers The ArmNN Serializer and Deserializer
+@tableofcontents
+
+@section S8_serializer The ArmNN Serializer
+
+The `armnnSerializer` is a library for serializing an Arm NN network to a stream.
+
+## The layers that the ArmNN SDK Serializer currently supports
+
+This reference guide provides a list of layers which can be serialized currently by the Arm NN SDK.
+
+## Fully supported
+
+The Arm NN SDK Serializer currently supports the following layers:
+
+- Activation
+- Addition
+- ArgMinMax
+- BatchToSpaceNd
+- BatchNormalization
+- Comparison
+- Concat
+- Constant
+- Convolution2d
+- DepthToSpace
+- DepthwiseConvolution2d
+- Dequantize
+- DetectionPostProcess
+- Division
+- ElementwiseUnary
+- Floor
+- FullyConnected
+- Gather
+- Input
+- InstanceNormalization
+- L2Normalization
+- LogSoftmax
+- Lstm
+- Maximum
+- Mean
+- Merge
+- Minimum
+- Multiplication
+- Normalization
+- Output
+- Pad
+- Permute
+- Pooling2d
+- Prelu
+- Quantize
+- QuantizedLstm
+- Reshape
+- Resize
+- Slice
+- Softmax
+- SpaceToBatchNd
+- SpaceToDepth
+- Splitter
+- Stack
+- StandIn
+- StridedSlice
+- Subtraction
+- Switch
+- TransposeConvolution2d
+
+More machine learning layers will be supported in future releases.
+
+## Deprecated layers
+
+Some layers have been deprecated and replaced by other layers. In order to maintain backward compatibility, serializations of these deprecated layers will deserialize to the layers that have replaced them, as follows:
+
+- Equal will deserialize as Comparison
+- Merger will deserialize as Concat
+- Greater will deserialize as Comparison
+- ResizeBilinear will deserialize as Resize
+- Abs will deserialize as ElementwiseUnary
+- Rsqrt will deserialize as ElementwiseUnary
+<br/><br/><br/><br/>
+
+@section S9_deserializer The ArmNN Deserializer
+
+The `armnnDeserializer` is a library for loading neural networks defined by Arm NN FlatBuffers files
+into the Arm NN runtime.
+
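+As an illustration, the following minimal sketch shows how a serialized network file could be loaded back into
+an INetwork. It is an assumption-based example (the helper function name is made up); refer to the
+`IDeserializer` header in `include/armnnDeserializer` for the exact API.
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.cpp
+#include <armnn/INetwork.hpp>
+#include <armnnDeserializer/IDeserializer.hpp>
+
+#include <cstdint>
+#include <fstream>
+#include <iterator>
+#include <string>
+#include <vector>
+
+// Sketch: recreate an INetwork from a FlatBuffers file produced by the serializer.
+armnn::INetworkPtr DeserializeNetwork(const std::string& inputPath)
+{
+    // Read the whole file into memory.
+    std::ifstream stream(inputPath, std::ios::binary);
+    std::vector<std::uint8_t> content((std::istreambuf_iterator<char>(stream)),
+                                      std::istreambuf_iterator<char>());
+
+    // Recreate the network from the binary content.
+    auto deserializer = armnnDeserializer::IDeserializer::Create();
+    return deserializer->CreateNetworkFromBinary(content);
+}
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+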
+## The layers that the Arm NN SDK Deserializer currently supports
+
+This reference guide provides a list of layers which can be deserialized currently by the Arm NN SDK.
+
+## Fully supported
+
+The Arm NN SDK Deserializer currently supports the following layers:
+
+- Abs
+- Activation
+- Addition
+- ArgMinMax
+- BatchToSpaceNd
+- BatchNormalization
+- Concat
+- Comparison
+- Constant
+- Convolution2d
+- DepthToSpace
+- DepthwiseConvolution2d
+- Dequantize
+- DetectionPostProcess
+- Division
+- Floor
+- FullyConnected
+- Gather
+- Input
+- InstanceNormalization
+- L2Normalization
+- LogSoftmax
+- Lstm
+- Maximum
+- Mean
+- Merge
+- Minimum
+- Multiplication
+- Normalization
+- Output
+- Pad
+- Permute
+- Pooling2d
+- Prelu
+- Quantize
+- QuantizedLstm
+- Reshape
+- Rsqrt
+- Slice
+- Softmax
+- SpaceToBatchNd
+- SpaceToDepth
+- Splitter
+- Stack
+- StandIn
+- StridedSlice
+- Subtraction
+- Switch
+- TransposeConvolution2d
+- Resize
+
+More machine learning layers will be supported in future releases.
+
+## Deprecated layers
+
+Some layers have been deprecated and replaced by other layers. In order to maintain backward compatibility, serializations of these deprecated layers will deserialize to the layers that have replaced them, as follows:
+
+- Equal will deserialize as Comparison
+- Merger will deserialize as Concat
+- Greater will deserialize as Comparison
+- ResizeBilinear will deserialize as Resize
+
+**/
+} \ No newline at end of file
diff --git a/docs/03_converter_quantizer.dox b/docs/03_converter_quantizer.dox
new file mode 100644
index 0000000000..ebfacd473e
--- /dev/null
+++ b/docs/03_converter_quantizer.dox
@@ -0,0 +1,60 @@
+/// Copyright (c) 2017 ARM Limited.
+///
+/// SPDX-License-Identifier: MIT
+///
+/// Permission is hereby granted, free of charge, to any person obtaining a copy
+/// of this software and associated documentation files (the "Software"), to deal
+/// in the Software without restriction, including without limitation the rights
+/// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+/// copies of the Software, and to permit persons to whom the Software is
+/// furnished to do so, subject to the following conditions:
+///
+/// The above copyright notice and this permission notice shall be included in all
+/// copies or substantial portions of the Software.
+///
+/// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+/// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+/// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+/// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+/// SOFTWARE.
+///
+
+namespace armnn
+{
+/**
+@page converter_and_quantizer The ArmNN Converter and Quantizer
+@tableofcontents
+
+@section S10_converter The ArmNN Converter
+
+The `ArmnnConverter` is a program for converting neural networks from other formats to Arm NN format.
+The program currently supports models in Caffe, ONNX, TensorFlow Protocol Buffers and TensorFlow Lite FlatBuffers formats. Run the program with no arguments to see command-line help.
+
+For more information about the layers that are supported, see <a href="parsers.xhtml">parsers</a>.
+<br/><br/><br/><br/>
+
+@section S11_quantizer The ArmNN Quantizer
+
+The `ArmnnQuantizer` is a program for loading a 32-bit float network into ArmNN and converting it into a quantized asymmetric 8-bit or quantized symmetric 16-bit network.
+It performs static quantization by default; dynamic quantization is enabled if a CSV file of raw input tensors is provided. Run the program with no arguments to see command-line help.
+
+
+| Option | Long option | Description |
+| ---|---|---|
+| -h | --help | Display help messages |
+| -f | --infile | Input file containing float 32 ArmNN Input Graph |
+| -s | --scheme | Quantization scheme, "QAsymm8" or "QSymm16". Default value: QAsymm8 |
+| -c | --csvfile | CSV file containing paths for raw input tensors for dynamic quantization. If unset, static quantization is used |
+| -p | --preserve-data-type | Preserve the input and output data types. If unset, input and output data types are not preserved |
+| -d | --outdir | Directory that output file will be written to |
+| -o | --outfile | ArmNN output file name |
+
+<br/>
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+./ArmnnQuantizer -f /path/to/armnn/input/graph/ -s "QSymm16" -c /path/to/csv/file -p 1 -d /path/to/output -o outputFileName
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+**/
+}
diff --git a/docs/04_backends.dox b/docs/04_backends.dox
new file mode 100644
index 0000000000..cc6d01372e
--- /dev/null
+++ b/docs/04_backends.dox
@@ -0,0 +1,470 @@
+/// Copyright (c) 2017 ARM Limited.
+///
+/// SPDX-License-Identifier: MIT
+///
+/// Permission is hereby granted, free of charge, to any person obtaining a copy
+/// of this software and associated documentation files (the "Software"), to deal
+/// in the Software without restriction, including without limitation the rights
+/// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+/// copies of the Software, and to permit persons to whom the Software is
+/// furnished to do so, subject to the following conditions:
+///
+/// The above copyright notice and this permission notice shall be included in all
+/// copies or substantial portions of the Software.
+///
+/// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+/// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+/// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+/// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+/// SOFTWARE.
+///
+
+namespace armnn
+{
+/**
+@page backends Backend Developer Guides
+@tableofcontents
+
+@section S12_backend_developer_guide Backend Developer Guide
+
+Arm NN allows adding new backends through the `Pluggable Backend` mechanism.
+
+@subsection S12_1_backend_developer_guide How to add a new backend
+
+Backends reside under [src/backends](./), in separate subfolders. For Linux builds they must have a `backend.cmake` file,
+which is read automatically by [src/backends/backends.cmake](backends.cmake). The `backend.cmake` file
+under the backend-specific folder is then included by the main CMakeLists.txt file at the root of the
+Arm NN source tree.
+
+### The backend.cmake file
+
+The `backend.cmake` has three main purposes:
+
+1. It makes sure the artifact (a cmake OBJECT library) is linked into the Arm NN shared library by appending the name of the library to the `armnnLibraries` list.
+2. It makes sure that the subdirectory where backend sources reside gets included into the build.
+3. To include backend-specific unit tests, the object library for the unit tests needs to be added to the `armnnUnitTestLibraries` list.
+
+Example `backend.cmake` file taken from [reference/backend.cmake](reference/backend.cmake):
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.cmake
+#
+# Make sure the reference backend is included in the build.
+# By adding the subdirectory, cmake requires the presence of CMakeLists.txt
+# in the reference (backend) folder.
+#
+add_subdirectory(${PROJECT_SOURCE_DIR}/src/backends/reference)
+
+#
+# Add the cmake OBJECT libraries built by the reference backend to the
+# list of libraries linked against the Arm NN shared library.
+#
+list(APPEND armnnLibraries armnnRefBackend armnnRefBackendWorkloads)
+
+#
+# Backend specific unit tests can be integrated through the
+# armnnUnitTestLibraries variable. This makes sure that the
+# UnitTests executable can run the backend-specific unit
+# tests.
+#
+list(APPEND armnnUnitTestLibraries armnnRefBackendUnitTests)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+### The CMakeLists.txt file
+
+As described in the previous section, adding a new backend will require creating a `CMakeLists.txt` in
+the backend folder. This follows the standard cmake conventions, and is required to build a static cmake OBJECT library
+to be linked into the Arm NN shared library. As with any cmake build, the code can be structured into
+subfolders and modules as the developer sees fit.
+
+Example can be found under [reference/CMakeLists.txt](reference/CMakeLists.txt).
+
+### The backend.mk file
+
+Arm NN on Android uses the native Android build system. New backends are integrated by creating a
+`backend.mk` file, which has a single variable called `BACKEND_SOURCES` listing all cpp
+files to be built by the Android build system for the Arm NN shared library.
+
+Optionally, backend-specific unit tests can be added similarly, by
+appending the list of cpp files to the `BACKEND_TEST_SOURCES` variable.
+
+Example taken from [reference/backend.mk](reference/backend.mk):
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.make
+BACKEND_SOURCES := \
+ RefLayerSupport.cpp \
+ RefWorkloadFactory.cpp \
+ workloads/Activation.cpp \
+ workloads/ElementwiseFunction.cpp \
+ workloads/Broadcast.cpp \
+ ...
+
+BACKEND_TEST_SOURCES := \
+ test/RefCreateWorkloadTests.cpp \
+ test/RefEndToEndTests.cpp \
+ test/RefJsonPrinterTests.cpp \
+ ...
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+@subsection S12_2_backend_developer_guide How to Add Common Code Across Backends
+
+For multiple backends that need common code, there is support for including them in the build
+similarly to the backend code. This requires adding three files under a subfolder at the same level
+as the backends folders. These are:
+
+1. common.cmake
+2. common.mk
+3. CMakeLists.txt
+
+They work the same way as the backend files. The only difference between them is that the
+common code is built first, so the backend code can depend on it.
+
+[aclCommon](aclCommon) is an example of this concept; the corresponding files are:
+
+1. [aclCommon/common.cmake](aclCommon/common.cmake)
+2. [aclCommon/common.mk](aclCommon/common.mk)
+3. [aclCommon/CMakeLists.txt](aclCommon/CMakeLists.txt)
+
+@subsection S12_3_backend_developer_guide Identifying Backends
+
+Backends are identified by a string that must be unique across backends. This string is
+wrapped in the [BackendId](../../include/armnn/BackendId.hpp) object for backward compatibility
+with previous Arm NN versions.
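+
+As a small sketch (the "SampleDynamic" id and the function name below are made up for illustration), a BackendId
+can be constructed directly from a string, and the built-in compute devices also convert to BackendId values:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.cpp
+#include <armnn/BackendId.hpp>
+
+#include <vector>
+
+// Sketch: building a backend preference list that mixes a custom id with built-in backends.
+std::vector<armnn::BackendId> MakeBackendPreferences()
+{
+    // A made-up id for a custom backend, for illustration only.
+    armnn::BackendId customBackend("SampleDynamic");
+
+    // Built-in compute devices (CpuAcc, CpuRef, GpuAcc) convert to BackendId as well.
+    return { customBackend,
+             armnn::BackendId(armnn::Compute::CpuAcc),
+             armnn::BackendId(armnn::Compute::CpuRef) };
+}
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~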
+
+@subsection S12_4_backend_developer_guide The IBackendInternal Interface
+
+All backends need to implement the [IBackendInternal](../../include/armnn/backends/IBackendInternal.hpp) interface.
+The interface functions to be implemented are:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.cpp
+ virtual IMemoryManagerUniquePtr CreateMemoryManager() const = 0;
+ virtual IWorkloadFactoryPtr CreateWorkloadFactory(
+ const IMemoryManagerSharedPtr& memoryManager = nullptr) const = 0;
+ virtual IBackendContextPtr CreateBackendContext(const IRuntime::CreationOptions&) const = 0;
+ virtual IBackendProfilingContextPtr CreateBackendProfilingContext(const IRuntime::CreationOptions& creationOptions,
+ armnn::profiling::IBackendProfiling& backendProfiling) const = 0;
+ virtual ILayerSupportSharedPtr GetLayerSupport() const = 0;
+ virtual Optimizations GetOptimizations() const = 0;
+ virtual SubgraphUniquePtr OptimizeSubgraph(const SubgraphView& subgraph, bool& optimizationAttempted) const;
+ virtual OptimizationViews OptimizeSubgraphView(const SubgraphView& subgraph) const;
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Note that `GetOptimizations()` and `SubgraphUniquePtr OptimizeSubgraph(const SubgraphView& subgraph, bool& optimizationAttempted)`
+have been deprecated.
+The method `OptimizationViews OptimizeSubgraphView(const SubgraphView& subgraph)` should be used instead to
+apply specific optimizations to a given sub-graph.
+
+The Arm NN framework then creates instances of the IBackendInternal interface with the help of the
+[BackendRegistry](../../include/armnn/BackendRegistry.hpp) singleton.
+
+**Important:** the `IBackendInternal` object is not guaranteed to have a longer lifetime than
+the objects it creates. It is only intended to be a single entry point for the factory functions it has.
+It is best implemented as a lightweight, stateless object that makes no assumptions about the relationship
+between its own lifetime and the lifetime of the objects it creates.
+
+For each backend one needs to register a factory function that can
+be retrieved using a [BackendId](../../include/armnn/BackendId.hpp).
+The Arm NN framework creates the backend interfaces dynamically when
+it sees fit and it keeps these objects for a short period of time. Examples:
+
+- During optimization Arm NN needs to decide which layers are supported by the backend.
+  To do this, it creates a backend, calls its `GetLayerSupport()` function and uses the returned
+  `ILayerSupport` object to help decide this.
+- During optimization Arm NN can run backend-specific optimizations. After splitting the graph into
+ sub-graphs based on backends, it calls the `OptimizeSubgraphView()` function on each of them and, if possible,
+ substitutes the corresponding sub-graph in the original graph with its optimized version.
+- When the Runtime is initialized it creates an optional `IBackendContext` object and keeps this context alive
+ for the Runtime's lifetime. It notifies this context object before and after a network is loaded or unloaded.
+- When the LoadedNetwork creates the backend-specific workloads for the layers, it creates a backend
+ specific workload factory and calls this to create the workloads.
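+
+As a minimal sketch of this flow (the function name is made up and error handling is omitted), a short-lived
+backend object can be created from its registered factory and queried:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.cpp
+#include <armnn/BackendId.hpp>
+#include <armnn/BackendRegistry.hpp>
+#include <armnn/backends/IBackendInternal.hpp>
+
+// Sketch: create a temporary backend instance from the registry and query it.
+void QueryBackend(const armnn::BackendId& backendId)
+{
+    // Look up the factory function that was registered for this id.
+    auto factory = armnn::BackendRegistryInstance().GetFactory(backendId);
+
+    // Create a (short-lived) backend instance and use its interfaces.
+    armnn::IBackendInternalUniquePtr backend = factory();
+    auto layerSupport = backend->GetLayerSupport();
+    // ... use layerSupport, CreateWorkloadFactory(), etc. ...
+}
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~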
+
+@subsection S12_5_backend_developer_guide The BackendRegistry
+
+As mentioned above, all backends need to be registered through the BackendRegistry so Arm NN knows
+about them. Registration requires a unique backend ID string and a lambda function that
+returns a unique pointer to the [IBackendInternal interface](../../include/armnn/backends/IBackendInternal.hpp).
+
+For registering a backend only this lambda function needs to exist, not the actual backend. This
+allows dynamically creating the backend objects when they are needed.
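+
+A minimal registration sketch, modelled on the pattern used by the built-in backends, is shown below. Here
+"SampleBackend" is a hypothetical class implementing `IBackendInternal`, and its header is assumed to exist:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.cpp
+#include <armnn/BackendRegistry.hpp>
+#include <armnn/backends/IBackendInternal.hpp>
+
+#include "SampleBackend.hpp" // hypothetical backend header
+
+namespace
+{
+
+// Sketch: registering a factory lambda under a unique backend id. The backend
+// object itself is only created later, when Arm NN actually needs it.
+armnn::BackendRegistry::StaticRegistryInitializer g_RegisterSampleBackend
+{
+    armnn::BackendRegistryInstance(),
+    armnn::BackendId("SampleBackend"),
+    []()
+    {
+        return armnn::IBackendInternalUniquePtr(new SampleBackend());
+    }
+};
+
+} // anonymous namespace
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~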
+
+The BackendRegistry also provides a few convenience functions, for example to query the registered backends
+and to check whether a given backend is registered or not.
+
+Dynamic backends are registered during the runtime creation.
+
+@subsection S12_6_backend_developer_guide The ILayerSupport Interface
+
+Arm NN uses the [ILayerSupport](../../include/armnn/ILayerSupport.hpp) interface to decide if a layer
+with a set of parameters (i.e. input and output tensors, descriptor, weights, filter, kernel if any) is
+supported on a given backend. The backends communicate this information by implementing
+the `GetLayerSupport()` function on the `IBackendInternal` interface.
+
+Examples of this can be found in the [RefLayerSupport header](reference/RefLayerSupport.hpp)
+and the [RefLayerSupport implementation](reference/RefLayerSupport.cpp).
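+
+A minimal sketch of such a query is shown below (the helper function name is made up, and the backend instance
+is assumed to have been created beforehand, for example via the BackendRegistry):
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.cpp
+#include <armnn/Descriptors.hpp>
+#include <armnn/ILayerSupport.hpp>
+#include <armnn/Optional.hpp>
+#include <armnn/Tensor.hpp>
+#include <armnn/backends/IBackendInternal.hpp>
+
+#include <string>
+
+// Sketch: ask a backend whether it supports a Softmax layer with the given tensors.
+bool IsSoftmaxSupportedOn(const armnn::IBackendInternalUniquePtr& backend,
+                          const armnn::TensorInfo& input,
+                          const armnn::TensorInfo& output,
+                          const armnn::SoftmaxDescriptor& descriptor)
+{
+    auto layerSupport = backend->GetLayerSupport();
+
+    // If the layer is not supported, "reason" explains why.
+    std::string reason;
+    return layerSupport->IsSoftmaxSupported(input, output, descriptor,
+                                            armnn::Optional<std::string&>(reason));
+}
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~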
+
+@subsection S12_7_backend_developer_guide The IWorkloadFactory Interface
+
+The [IWorkloadFactory interface](backendsCommon/WorkloadFactory.hpp) is used for creating the backend
+specific workloads. The factory function that creates the IWorkloadFactory object in the IBackendInternal
+interface takes an IMemoryManager object.
+
+To create a workload object the `IWorkloadFactory` takes a `WorkloadInfo` object that holds
+the input and output tensor information and a workload specific queue descriptor.
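+
+A minimal sketch of creating an Addition workload through a backend's factory is shown below (the header paths,
+relative to `src/backends`, are assumptions, and the tensor handle setup in the queue descriptor is omitted):
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.cpp
+#include <armnn/Tensor.hpp>
+#include <backendsCommon/Workload.hpp>
+#include <backendsCommon/WorkloadData.hpp>
+#include <backendsCommon/WorkloadFactory.hpp>
+
+#include <memory>
+
+// Sketch: create an Addition workload from a queue descriptor and a WorkloadInfo.
+std::unique_ptr<armnn::IWorkload> CreateAdditionWorkload(armnn::IWorkloadFactory& factory,
+                                                         const armnn::TensorInfo& input0,
+                                                         const armnn::TensorInfo& input1,
+                                                         const armnn::TensorInfo& output)
+{
+    // The descriptor's m_Inputs/m_Outputs would normally point at the backend's
+    // ITensorHandle objects; they are omitted in this sketch.
+    armnn::AdditionQueueDescriptor descriptor;
+
+    // The WorkloadInfo carries the input and output tensor information.
+    armnn::WorkloadInfo info;
+    info.m_InputTensorInfos  = { input0, input1 };
+    info.m_OutputTensorInfos = { output };
+
+    return factory.CreateAddition(descriptor, info);
+}
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~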
+
+@subsection S12_8_backend_developer_guide The IMemoryManager Interface
+
+Backends may choose to implement custom memory management. Arm NN supports this concept through the following
+mechanism:
+
+- the `IBackendInternal` interface has a `CreateMemoryManager()` function, which is called before
+ creating the workload factory
+- the memory manager is passed to the `CreateWorkloadFactory(...)` function so the workload factory can
+ use it for creating the backend-specific workloads
+- the LoadedNetwork calls `Acquire()` on the memory manager before it starts executing the network and
+ it calls `Release()` in its destructor
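+
+The sequence described above, as a minimal sketch (assuming a valid `IBackendInternal` instance and omitting
+error handling):
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.cpp
+#include <armnn/backends/IBackendInternal.hpp>
+
+// Sketch: create the memory manager, hand it to the workload factory, and
+// bracket execution with Acquire()/Release().
+void RunWithCustomMemoryManagement(armnn::IBackendInternal& backend)
+{
+    armnn::IBackendInternal::IMemoryManagerSharedPtr memoryManager = backend.CreateMemoryManager();
+    armnn::IBackendInternal::IWorkloadFactoryPtr factory = backend.CreateWorkloadFactory(memoryManager);
+
+    if (memoryManager)
+    {
+        memoryManager->Acquire();
+    }
+
+    // ... create and execute workloads through "factory" here ...
+
+    if (memoryManager)
+    {
+        memoryManager->Release();
+    }
+}
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~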
+
+@subsection S12_9_backend_developer_guide The Optimizations
+
+The backends may choose to implement backend-specific optimizations.
+This is supported through the method `OptimizationViews OptimizeSubgraphView(const SubgraphView& subgraph)` of
+the backend interface, which allows the backends to apply their specific optimizations to a given sub-graph.
+
+The `OptimizeSubgraphView(...)` method returns an OptimizationViews object containing three lists:
+
+- A list of the sub-graph substitutions: a "substitution" is a pair of sub-graphs, the first is the "substitutable" sub-graph,
+ representing the part of the original graph that has been optimized by the backend, while the second is the "replacement" sub-graph,
+  containing the actual optimized layers that will replace the corresponding "substitutable" sub-graph in the original graph
+- A list of the failed sub-graphs: these are the parts of the original sub-graph that are not supported by the backend,
+ thus have been rejected. Arm NN will try to re-allocate these parts on other backends if available.
+- A list of the untouched sub-graphs: these are the parts of the original sub-graph that have not been optimized,
+ but that can run (unoptimized) on the backend.
+
+The previous way for backends to provide a list of optimizations to the Optimizer (through the `GetOptimizations()` method)
+is still in place for backward compatibility, but it is now considered deprecated and will be removed in a future release.
+
+@subsection S12_10_backend_developer_guide The IBackendContext Interface
+
+Backends may need to be notified whenever a network is loaded or unloaded. To support that, one can implement the optional
+[IBackendContext](../../include/armnn/backends/IBackendContext.hpp) interface. The framework calls the `CreateBackendContext(...)`
+method for each backend in the Runtime. If the backend returns a valid unique pointer to a backend context, then the
+runtime will hold this for its entire lifetime. It then calls the following interface functions for each stored context:
+
+- `BeforeLoadNetwork(NetworkId networkId)`
+- `AfterLoadNetwork(NetworkId networkId)`
+- `BeforeUnloadNetwork(NetworkId networkId)`
+- `AfterUnloadNetwork(NetworkId networkId)`
+
+@subsection S12_11_backend_developer_guide Dynamic Backends
+
+Backends can also be loaded by Arm NN dynamically at runtime.
+To be properly loaded and used, the backend instances must comply to the standard interface for dynamic backends and to the versioning
+rules that enforce ABI compatibility.
+
+@subsection S12_12_backend_developer_guide Dynamic Backends Base Interface
+
+The dynamic backend shared object must expose the following interface for Arm NN to handle it correctly:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.cpp
+extern "C"
+{
+const char* GetBackendId();
+void GetVersion(uint32_t* outMajor, uint32_t* outMinor);
+void* BackendFactory();
+}
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Interface details:
+
+- `extern "C"` is needed to use avoid C++ name mangling, necessary to allow Arm NN to dynamically load the symbols.
+- `GetBackendId()`: must return the unique id of the dynamic backends.
+ If at the time of the loading the id already exists in the internal Arm NN's backend registry, the backend will be skipped and
+ not loaded in Arm NN
+- `GetVersion()`: must return the version of the dynamic backend.
+ The version must indicate the version of the Backend API the dynamic backend has been built with.
+ The current Backend API version can be found by inspecting the IBackendInternal interface.
+ At the time of loading, the version of the backend will be checked against the version of the Backend API Arm NN is built with.
+ If the backend version is not compatible with the current Backend API, the backend will not be loaded as it will be assumed that
+ it is not ABI compatible with the current Arm NN build.
+- `BackendFactory()`: must return a valid instance of the backend.
+ The backend instance is an object that must inherit from the version of the IBackendInternal interface declared by GetVersion().
+ It is the backend developer's responsibility to ensure that the backend implementation correctly reflects the version declared by
+ GetVersion(), and that the object returned by the BackendFactory() function is a valid and well-formed instance of the IBackendInternal
+ interface.
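+
+A minimal sketch of these three entry points is shown below. "SampleDynamicBackend" is a hypothetical class
+implementing `IBackendInternal`, and the id string and version numbers are illustrative only:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.cpp
+#include "SampleDynamicBackend.hpp" // hypothetical backend header
+
+#include <cstdint>
+
+extern "C"
+{
+
+const char* GetBackendId()
+{
+    // Unique id under which the backend will be registered.
+    return "SampleDynamic";
+}
+
+void GetVersion(uint32_t* outMajor, uint32_t* outMinor)
+{
+    // Report the Backend API version this backend was built against.
+    *outMajor = 2;
+    *outMinor = 0;
+}
+
+void* BackendFactory()
+{
+    // Return a new backend instance; Arm NN wraps and owns the returned object.
+    return new SampleDynamicBackend();
+}
+
+} // extern "C"
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~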
+
+@subsection S12_13_backend_developer_guide Dynamic Backend Versioning and ABI Compatibility
+
+Dynamic backend versioning policy:
+
+Updates to Arm NN's Backend API follow these rules: changes to the Backend API (the IBackendInternal interface) that break
+ABI compatibility with the previous API version will be indicated by a change of the API's major version, while changes
+that guarantee ABI compatibility with the previous API version will be indicated by a change in the API's minor version.
+
+For example:
+
+- Dynamic backend version 2.4 (i.e. built with Backend API version 2.4) is compatible with Arm NN's Backend API version 2.4
+ (same version, backend built against the same Backend API)
+- Dynamic backend version 2.1 (i.e. built with Backend API version 2.1) is compatible with Arm NN's Backend API version 2.4
+ (same major version, backend built against earlier compatible API)
+- Dynamic backend version 2.5 (i.e. built with Backend API version 2.5) is not compatible with Arm NN's Backend API version 2.4
+ (same major version, backend built against later incompatible API, backend might require update to the latest compatible backend API)
+- Dynamic backend version 2.0 (i.e. built with Backend API version 2.0) is not compatible with Arm NN's Backend API version 1.0
+ (backend requires a completely new API version)
+- Dynamic backend version 2.0 (i.e. built with Backend API version 2.0) is not compatible with Arm NN's Backend API version 3.0
+ (backward compatibility in the Backend API is broken)
+
+@subsection S12_13_backend_developer_guide Dynamic Backend Loading Paths
+
+During the creation of the Runtime, Arm NN will scan a given set of paths searching for suitable dynamic backend objects to load.
+A list of (absolute) paths can be specified at compile-time by setting a define named `DYNAMIC_BACKEND_PATHS` in the form of a colon-separated list of strings.
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+-DDYNAMIC_BACKEND_PATHS="PATH_1:PATH_2...:PATH_N"
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The paths will be processed in the same order as they are indicated in the macro.
+
+It is also possible to override those paths at runtime when creating the Runtime object by setting the value of the `m_DynamicBackendsPath` member in the CreationOptions class.
+Only one path is allowed for the override via the CreationOptions class.
+By setting the value of the `m_DynamicBackendsPath` to a path in the filesystem, Arm NN will entirely ignore the list of paths passed via the
+`DYNAMIC_BACKEND_PATHS` compiler directive.
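+
+A minimal sketch of the runtime override (the directory shown is an arbitrary example path):
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.cpp
+#include <armnn/IRuntime.hpp>
+
+// Sketch: point the runtime at a single directory containing dynamic backends.
+armnn::IRuntimePtr CreateRuntimeWithDynamicBackends()
+{
+    armnn::IRuntime::CreationOptions options;
+    options.m_DynamicBackendsPath = "/path/to/dynamic/backends";
+
+    // Dynamic backends found in that directory are loaded and registered
+    // while the runtime is being created.
+    return armnn::IRuntime::Create(options);
+}
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~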
+
+All the specified paths are validated before processing (they must exist, must be directories, and must be absolute paths);
+in case of error a warning message will be added to the log, but Arm NN's execution will not be stopped.
+If none of the paths is valid, then no dynamic backends will be loaded into Arm NN's runtime.
+
+Passing an empty list of paths at compile-time and providing no path override at runtime will effectively disable the
+dynamic backend loading feature, and no dynamic backends will be loaded into Arm NN's runtime.
+
+@subsection S12_14_backend_developer_guide Dynamic Backend File Naming Convention
+
+During the creation of a Runtime object, Arm NN will scan the paths specified for dynamic backend loading searching for suitable backend objects.
+Arm NN will try to load only the files that match the following accepted naming scheme:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+<vendor>_<name>_backend.so[<version>] (e.g. "Arm_GpuAcc_backend.so" or "Arm_GpuAcc_backend.so.1.2.3")
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Only alphanumeric characters are allowed for both the `<vendor>` and the `<name>` fields, namely lowercase and/or uppercase characters,
+and/or numerical digits (see the table below for examples).
+Only dots and numbers are allowed for the optional `<version>` field.
+
+Symlinks to other files are allowed to support the standard linux shared object versioning:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+Arm_GpuAcc_backend.so -> Arm_GpuAcc_backend.so.1.2.3
+Arm_GpuAcc_backend.so.1 -> Arm_GpuAcc_backend.so.1.2.3
+Arm_GpuAcc_backend.so.1.2 -> Arm_GpuAcc_backend.so.1.2.3
+Arm_GpuAcc_backend.so.1.2.3
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Files are identified by their full canonical path, so it is allowed to have files with the same name in different directories.
+However, if those are actually the same dynamic backend, only the first in order of parsing will be loaded.
+
+Examples:
+
+| Filename | Description |
+| -------------------------------------------------------- | ------------------------------------------------- |
+| Arm_GpuAcc_backend.so | valid: basic backend name |
+| Arm_GpuAcc_backend.so.1 | valid: single field version number |
+| Arm_GpuAcc_backend.so.1.2 | valid: multiple field version number |
+| Arm_GpuAcc_backend.so.1.2.3 | valid: multiple field version number |
+| Arm_GpuAcc_backend.so.10.1.27 | valid: Multiple digit version |
+| Arm_GpuAcc_backend.so.10.1.33. | not valid: dot not followed by version number |
+| Arm_GpuAcc_backend.so.3.4..5 | not valid: dot not followed by version number |
+| Arm_GpuAcc_backend.so.1,1.1 | not valid: comma instead of dot in the version |
+| Arm123_GpuAcc_backend.so | valid: digits in vendor name are allowed |
+| Arm_GpuAcc456_backend.so | valid: digits in backend id are allowed |
+| Arm%Co_GpuAcc_backend.so | not valid: invalid character in vendor name |
+| Arm_Gpu.Acc_backend.so | not valid: invalid character in backend id |
+| GpuAcc_backend.so | not valid: missing vendor name |
+| _GpuAcc_backend.so | not valid: missing vendor name |
+| Arm__backend.so | not valid: missing backend id |
+| Arm_GpuAcc.so | not valid: missing "backend" at the end |
+| __backend.so | not valid: missing vendor name and backend id |
+| __.so | not valid: missing all fields |
+| Arm_GpuAcc_backend | not valid: missing at least ".so" at the end |
+| Arm_GpuAcc_backend_v1.2.so | not valid: extra version info at the end |
+| Arm_CpuAcc_backend.so | valid: basic backend name |
+| Arm_CpuAcc_backend.so.1 -> Arm_CpuAcc_backend.so | valid: symlink to valid backend file |
+| Arm_CpuAcc_backend.so.1.2 -> Arm_CpuAcc_backend.so.1 | valid: symlink to valid symlink |
+| Arm_CpuAcc_backend.so.1.2.3 -> Arm_CpuAcc_backend.so.1.2 | valid: symlink to valid symlink |
+| Arm_no_backend.so -> nothing | not valid: symlink resolves to non-existent file |
+| pathA/Arm_GpuAcc_backend.so | valid: basic backend name |
+| pathB/Arm_GpuAcc_backend.so | valid: but duplicated from pathA/ |
+
+Arm NN will try to load the dynamic backends in the same order as they are parsed from the filesystem.
+
+@subsection S12_15_backend_developer_guide Dynamic Backend Examples
+
+The source code includes an example that is used to generate some mock dynamic backends for testing purposes. The source files are:
+
+- TestDynamicBackend.hpp
+- TestDynamicBackend.cpp
+
+This example is useful for going through all the use cases that constitute an invalid dynamic backend object, such as
+an invalid/malformed implementation of the shared object interface, or an invalid value returned by any of the interface methods
+that would prevent Arm NN from making use of the dynamic backend.
+
+A dynamic implementation of the reference backend is also provided. The source files are:
+
+- RefDynamicBackend.hpp
+- RefDynamicBackend.cpp
+
+The implementation itself is quite simple and straightforward. Since an implementation of this particular backend was already available,
+the dynamic version is just a wrapper around the original code that simply returns the backend id, version and an instance of the
+backend itself via the factory function.
+For the sake of the example, the source code of the reference backend is used to build the dynamic version (as you would for any new
+dynamic backend), while all the other symbols needed are provided by linking the dynamic backend against Arm NN.
+
+The makefile used for building the reference dynamic backend is also provided: [CMakeLists.txt](dynamic/reference/CMakeLists.txt)
+
+A unit test that loads the reference backend dynamically and that exercises it is also included in the file
+[DynamicBackendTests.cpp](dynamic/backendsCommon/test/DynamicBackendTests.cpp), by the test case `CreateReferenceDynamicBackend`.
+In the test, a path on the filesystem that contains only the reference dynamic backend is scanned for valid
+dynamic backend files (using the override option in `CreationOptions`).
+In this example the file is named `Arm_CpuRef_backend.so`, which is compliant with the expected naming scheme for dynamic backends.
+A `DynamicBackend` is created in the runtime to represent the newly loaded backend, then the backend is registered in the Backend
+Registry with the id "CpuRef" (returned by `GetBackendId()`).
+The unit test makes sure that the backend is actually registered in Arm NN, before trying to create an instance of the backend by
+calling the factory function provided through the shared object interface (`BackendFactory()`).
+The backend instance is used to verify that everything is in order, testing basic 2D convolution support by making use of the
+Layer Support API and the Workload Factory.
+At the end of the test, the runtime object goes out of scope, the dynamic backend instance is automatically destroyed, and
+the shared object is closed.
+
+<br/><br/><br/><br/><br/>
+
+@section S13_dynamic_backend_guide Standalone Dynamic Backend Developer Guide
+
+Arm NN allows adding new dynamic backends. Dynamic Backends can be compiled as standalone against Arm NN
+and can be loaded by Arm NN dynamically at runtime.
+
+To be properly loaded and used, the backend instances must comply to the standard interface for dynamic backends
+and to the versioning rules that enforce ABI compatibility.
+The details of how to add dynamic backends can be found in [src/backends/README.md](../backends/README.md).
+
+@subsection S13_1_dynamic_backend_guide Standalone Dynamic Backend Example
+
+The source code includes an example that generates a dynamic implementation of the reference backend.
+The source files are:
+- RefDynamicBackend.hpp
+- RefDynamicBackend.cpp
+
+The makefile used for building the standalone reference dynamic backend is also provided:
+CMakeLists.txt
+
+@subsection S13_2_dynamic_backend_guide Dynamic Backend Loading Paths
+
+During the creation of the Runtime, Arm NN will scan a given set of paths searching for suitable dynamic backend objects to load.
+A list of (absolute) paths can be specified at compile-time by setting a define named `DYNAMIC_BACKEND_PATHS`
+in the form of a colon-separated list of strings.
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+-DDYNAMIC_BACKEND_PATHS="PATH_1:PATH_2...:PATH_N"
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The paths will be processed in the same order as they are indicated in the macro.
+
+**/
+} \ No newline at end of file
diff --git a/docs/05_other_tools.dox b/docs/05_other_tools.dox
new file mode 100644
index 0000000000..898565443d
--- /dev/null
+++ b/docs/05_other_tools.dox
@@ -0,0 +1,107 @@
+/// Copyright (c) 2017 ARM Limited.
+///
+/// SPDX-License-Identifier: MIT
+///
+/// Permission is hereby granted, free of charge, to any person obtaining a copy
+/// of this software and associated documentation files (the "Software"), to deal
+/// in the Software without restriction, including without limitation the rights
+/// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+/// copies of the Software, and to permit persons to whom the Software is
+/// furnished to do so, subject to the following conditions:
+///
+/// The above copyright notice and this permission notice shall be included in all
+/// copies or substantial portions of the Software.
+///
+/// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+/// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+/// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+/// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+/// SOFTWARE.
+///
+
+namespace armnn
+{
+/**
+@page other_tools Other Tools
+@tableofcontents
+
+@section S14_image_csv_file_generator The ImageCSVFileGenerator
+
+The `ImageCSVFileGenerator` is a program for creating a CSV file that contains a list of .raw tensor files. These
+.raw tensor files can be generated using the `ImageTensorGenerator`.
+
+| Option | Long option | Description |
+| ---|---|---|
+| -h | --help | Display help messages |
+| -i | --indir | Directory that .raw files are stored in |
+| -o | --outfile | Output CSV file path |
+
+Example usage: <br/>
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+./ImageCSVFileGenerator -i /path/to/directory/ -o /output/path/csvfile.csv
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+<br/><br/><br/><br/>
+
+@section S15_image_tensor_generator The ImageTensorGenerator
+
+The `ImageTensorGenerator` is a program for pre-processing a .jpg image before generating a .raw tensor file from it.
+
+Build option:
+To build the ImageTensorGenerator, pass the following option to CMake:
+* -DBUILD_ARMNN_QUANTIZER=1
+
+| Option | Long option | Description |
+| ---|---|---|
+| -h | --help | Display help messages |
+| -f | --model-format | Format of the intended model file that uses the images. Different formats have different image normalization styles. Accepted values: caffe, tensorflow, tflite |
+| -i | --infile | Input image file to generate tensor from |
+| -o | --outfile | Output raw tensor file path |
+| -z | --output-type | The data type of the output tensors. If unset, defaults to "float" for all defined inputs. Accepted values: float, int or qasymm8 |
+| | --new-width |Resize image to new width. Keep original width if unspecified |
+| | --new-height | Resize image to new height. Keep original height if unspecified |
+| -l | --layout | Output data layout, "NHWC" or "NCHW". Default value: NHWC |
+
+Example usage: <br/>
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+./ImageTensorGenerator -i /path/to/image/dog.jpg -o /output/path/dog.raw --new-width 224 --new-height 224
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+<br/><br/><br/><br/>
+
+@section S16_model_accuracy_tool_armnn The ModelAccuracyTool-ArmNN
+
+The `ModelAccuracyTool-Armnn` is a program for measuring the Top 5 accuracy results of a model against an image dataset.
+
+Prerequisites:
+1. The model must be a model file in .armnn format. The `ArmnnConverter` can be used to convert a model to this format.
+
+Build option:
+To build the ModelAccuracyTool, pass the following options to CMake:
+* -DFLATC_DIR=/path/to/flatbuffers/x86build/
+* -DBUILD_ACCURACY_TOOL=1
+* -DBUILD_ARMNN_SERIALIZER=1
+
+| Option | Long option | Description |
+| ---|---|---|
+| -h | --help | Display help messages |
+| -m | --model-path | Path to armnn format model file |
+| -f | --model-format | The model format. Supported values: caffe, tensorflow, tflite |
+| -i | --input-name | Identifier of the input tensors in the network separated by comma |
+| -o | --output-name | Identifier of the output tensors in the network separated by comma |
+| -d | --data-dir | Path to directory containing the ImageNet test data |
+| -p | --model-output-labels | Path to model output labels file |
+| -v | --validation-labels-path | Path to ImageNet Validation Label file |
+| -l | --data-layout | Data layout. Supported values: NHWC, NCHW. Default: NHWC |
+| -c | --compute | Which device to run layers on by default. Possible choices: CpuRef, CpuAcc, GpuAcc. Default: CpuAcc, CpuRef |
+| -r | --validation-range | The range of the images to be evaluated. Specified in the form <begin index>:<end index>. The index starts at 1 and the range is inclusive. By default the evaluation will be performed on all images. |
+| -b | --blacklist-path | Path to a blacklist file where each line denotes the index of an image to be excluded from evaluation. |
+
+Example usage: <br/>
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.sh
+./ModelAccuracyTool -m /path/to/model/model.armnn -f tflite -i input -o output -d /path/to/test/directory/ -p /path/to/model-output-labels -v /path/to/file/val.txt -c CpuRef -r 1:100
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+<br/><br/>
+
+**/
+} \ No newline at end of file
diff --git a/docs/Doxyfile b/docs/Doxyfile
index ac636f4136..8f104cb41c 100644
--- a/docs/Doxyfile
+++ b/docs/Doxyfile
@@ -1,5 +1,28 @@
# Doxyfile 1.8.12
+# Copyright (c) 2020 ARM Limited.
+#
+# SPDX-License-Identifier: MIT
+#
+# Permission is hereby granted, free of charge, to any person obtaining a copy
+# of this software and associated documentation files (the "Software"), to deal
+# in the Software without restriction, including without limitation the rights
+# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+# copies of the Software, and to permit persons to whom the Software is
+# furnished to do so, subject to the following conditions:
+#
+# The above copyright notice and this permission notice shall be included in all
+# copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+# SOFTWARE.
+#
+
# This file describes the settings to be used by the documentation system
# doxygen (www.doxygen.org) for a project.
#
@@ -38,7 +61,7 @@ PROJECT_NAME = "ArmNN"
# could be handy for archiving the generated documentation or if some version
# control system is used.
-PROJECT_NUMBER = NotReleased
+PROJECT_NUMBER = 20.02
# Using the PROJECT_BRIEF tag one can provide an optional one line description
# for a project that appears at the top of each page and should give viewer a
@@ -187,7 +210,7 @@ SHORT_NAMES = NO
# description.)
# The default value is: NO.
-JAVADOC_AUTOBRIEF = NO
+JAVADOC_AUTOBRIEF = Yes
# If the QT_AUTOBRIEF tag is set to YES then doxygen will interpret the first
# line (until the first dot) of a Qt-style comment as the brief description. If
@@ -538,7 +561,7 @@ HIDE_SCOPE_NAMES = YES
# YES the compound reference will be hidden.
# The default value is: NO.
-HIDE_COMPOUND_REFERENCE= NO
+#HIDE_COMPOUND_REFERENCE= NO
# If the SHOW_INCLUDE_FILES tag is set to YES then doxygen will put a list of
# the files that are included by a file in the documentation of that file.
@@ -551,7 +574,7 @@ SHOW_INCLUDE_FILES = YES
# which file to include in order to use the member.
# The default value is: NO.
-SHOW_GROUPED_MEMB_INC = NO
+#SHOW_GROUPED_MEMB_INC = NO
# If the FORCE_LOCAL_INCLUDES tag is set to YES then doxygen will list include
# files with double quotes in the documentation rather than with sharp brackets.
@@ -756,7 +779,7 @@ WARN_IF_DOC_ERROR = YES
# parameter documentation, but not about the absence of documentation.
# The default value is: NO.
-WARN_NO_PARAMDOC = NO
+WARN_NO_PARAMDOC = YES
# If the WARN_AS_ERROR tag is set to YES then doxygen will immediately stop when
# a warning is encountered.
@@ -772,7 +795,7 @@ WARN_AS_ERROR = NO
# FILE_VERSION_FILTER)
# The default value is: $file:$line: $text.
-WARN_FORMAT = "$file:$line: $text"
+WARN_FORMAT = "$file:$line:[DOXY_WARN] $text"
# The WARN_LOGFILE tag can be used to specify a file to which warning and error
# messages should be written. If left blank the output is written to standard
@@ -790,7 +813,16 @@ WARN_LOGFILE =
# spaces. See also FILE_PATTERNS and EXTENSION_MAPPING
# Note: If this tag is empty the current directory is searched.
-INPUT = include/ src/ tests/ docs/
+INPUT = ./docs/00_introduction.dox \
+ ./docs/01_parsers.dox \
+ ./docs/02_deserializer_serializer.dox \
+ ./docs/03_converter_quantizer.dox \
+ ./docs/04_backends.dox \
+ ./docs/05_other_tools.dox \
+ ./include/ \
+ ./src/ \
+ ./tests/ \
+ ./docs/
# This tag can be used to specify the character encoding of the source files
# that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses
@@ -889,7 +921,7 @@ EXCLUDE_SYMLINKS = NO
# Note that the wildcards are matched against the file with absolute path, so to
# exclude all test directories for example use the pattern */test/*
-EXCLUDE_PATTERNS =
+EXCLUDE_PATTERNS = *.md
# The EXCLUDE_SYMBOLS tag can be used to specify one or more symbol names
# (namespaces, classes, functions, etc.) that should be excluded from the
@@ -906,7 +938,7 @@ EXCLUDE_SYMBOLS = caffe tensorflow cl armcomputetensorutils
# that contain example code fragments that are included (see the \include
# command).
-EXAMPLE_PATH =
+EXAMPLE_PATH = ./samples/
# If the value of the EXAMPLE_PATH tag contains directories, you can use the
# EXAMPLE_PATTERNS tag to specify one or more wildcard pattern (like *.cpp and
@@ -1008,7 +1040,7 @@ INLINE_SOURCES = YES
# Fortran comments will always remain visible.
# The default value is: YES.
-STRIP_CODE_COMMENTS = YES
+STRIP_CODE_COMMENTS = NO
# If the REFERENCED_BY_RELATION tag is set to YES then for each documented
# function all documented functions referencing it will be listed.
@@ -1137,7 +1169,7 @@ HTML_OUTPUT = html
# The default value is: .html.
# This tag requires that the tag GENERATE_HTML is set to YES.
-HTML_FILE_EXTENSION = .html
+HTML_FILE_EXTENSION = .xhtml
# The HTML_HEADER tag can be used to specify a user-defined HTML header file for
# each generated HTML page. If the tag is left blank doxygen will generate a
@@ -1192,7 +1224,7 @@ HTML_STYLESHEET =
# list). For an example see the documentation.
# This tag requires that the tag GENERATE_HTML is set to YES.
-HTML_EXTRA_STYLESHEET =
+HTML_EXTRA_STYLESHEET = ./docs/stylesheet.css
# The HTML_EXTRA_FILES tag can be used to specify one or more extra images or
# other source files which should be copied to the HTML output directory. Note
@@ -1533,7 +1565,7 @@ FORMULA_TRANSPARENT = YES
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.
-USE_MATHJAX = NO
+USE_MATHJAX = YES
# When MathJax is enabled you can set the default output format to be used for
# the MathJax output. See the MathJax site (see:
@@ -2376,7 +2408,7 @@ DIRECTORY_GRAPH = YES
# The default value is: png.
# This tag requires that the tag HAVE_DOT is set to YES.
-DOT_IMAGE_FORMAT = png
+DOT_IMAGE_FORMAT = svg
# If DOT_IMAGE_FORMAT is set to svg, then this option can be set to YES to
# enable generation of interactive SVG images that allow zooming and panning.
diff --git a/docs/stylesheet.css b/docs/stylesheet.css
new file mode 100644
index 0000000000..f6ed8aadb9
--- /dev/null
+++ b/docs/stylesheet.css
@@ -0,0 +1,221 @@
+/* Changes to tabs.css */
+
+.tabs, .tabs2, .tabs3 {
+ /* box-shadow: 0px 5px 30px rgba(0, 0, 0, 0.3); */
+ position: relative;
+}
+
+.tablist li {
+ line-height: 32px;
+}
+
+.tablist a {
+ color: #FFFFFF;
+ text-shadow: none;
+}
+
+.tablist a:hover {
+ text-shadow: 0px 1px 1px rgba(0, 0, 0, 1.0);
+ text-decoration: none;
+ background-repeat: no-repeat;
+ background-image: url('tab_s.png');
+}
+
+.tablist li.current a {
+ text-shadow: none;
+}
+
+/* Changes to navtree.css */
+
+#nav-tree .selected {
+ background-image: url('tab_a_nav.png');
+ border-radius: 15px;
+ text-shadow: none;
+}
+
+#nav-tree .label a {
+ color: #444444;
+}
+
+#nav-tree .selected a {
+ color: #007fa3;
+ font-weight: bold
+}
+
+#nav-tree {
+ background-color: #fafafa;
+}
+
+#doc-content {
+ background-color: #fafafa;
+}
+
+.ui-resizable-e {
+ background: none;
+ background-color : lightgray;
+ width:4px;
+}
+
+#nav-tree {
+ background-image: none;
+ background-color: #fafafa;
+}
+
+
+/* Changes to doxygen.css */
+
+h2.groupheader {
+ border-bottom: 1px solid #979797;
+ color: #4C4C4C;
+}
+
+h1, h2, h3, h4, h5, h6 {
+ font-weight : normal;
+}
+
+h1.glow, h2.glow, h3.glow, h4.glow, h5.glow, h6.glow {
+ text-shadow: 0 0 15px #007fa3;
+}
+
+div.qindex, div.navtab{
+ background-color: #EBEBEB;
+ border: 1px solid #B4B4B4;
+}
+
+div.qindex, div.navpath {
+ position : relative;
+}
+
+a {
+ color: #444444;
+}
+
+.contents a:visited {
+ color: #666666;
+}
+
+a.qindexHL {
+ background-color: #AFAFAf;
+ border: 1px double #9D9D9D;
+}
+
+a.code, a.code:visited {
+ color: #444444;
+}
+
+a.codeRef, a.codeRef:visited {
+ color: #444444;
+}
+
+div.fragment {
+ background-color: #FCFCFC;
+ border: 1px solid #CFCFCF;
+}
+
+div.line.glow {
+ background-color: #007fa3;
+}
+
+body {
+ background-color: #EEE;
+}
+
+.memberdecls td.glow, .fieldtable tr.glow {
+ background-color: #007fa3;
+}
+
+.memitem.glow {
+ /* box-shadow: 0 0 15px orange; */
+}
+
+.memproto, dl.reflist dt {
+ border-top: 1px solid #B8B8B8;
+ border-left: 1px solid #B8B8B8;
+ border-right: 1px solid #B8B8B8;
+ color: #333333;
+ background-color: #E2E2E2;
+}
+
+.memdoc, dl.reflist dd {
+ border-bottom: 1px solid #B8B8B8;
+ border-left: 1px solid #B8B8B8;
+ border-right: 1px solid #B8B8B8;
+ background-color: #FCFCFC;
+}
+
+table.doxtable td, table.doxtable th {
+ border: 1px solid #2D2D2D;
+}
+
+table.doxtable th {
+ background-color: #373737;
+}
+
+.navpath li.navelem a
+{
+ color: white;
+ text-shadow: none;
+}
+
+.navpath li.navelem a:hover
+{
+ color:white;
+ text-shadow : 0px 1px 1px rgba(0, 0, 0, 1.0);
+}
+
+dl.note
+{
+ border-color: #f68a33;
+}
+
+#projectlogo
+{
+ width:150px;
+ text-align:left;
+}
+
+#projectname
+{
+ font: 200% Tahoma, Arial,sans-serif;
+ color : #676767;
+ overflow:hidden;
+}
+
+#projectname #armdevcenter
+{
+ float:right;
+ padding-right: 20px;
+}
+
+#eula
+{
+ font-size: 80%;
+ font-weight: bold;
+}
+
+#titlearea
+{
+ background-color : white;
+ border-top: 5px solid white;
+ border-left: 10px solid white;
+ border-bottom: none;
+}
+
+a.copyright {
+ color: #FFFFFF;
+}
+
+a.copyright:hover {
+ color: #FFFFFF;
+}
+
+a.copyright:visited {
+ color: #FFFFFF;
+}
+
+div.toc h3 {
+ font: bold 12px/1.2 Arial,FreeSans,sans-serif;
+ color: #007fa3;
+ border-bottom: 0 none;
+ margin: 0;
+}