author      Narumol Prangnawarat <narumol.prangnawarat@arm.com>    2023-09-20 16:04:58 +0100
committer   Narumol Prangnawarat <narumol.prangnawarat@arm.com>    2023-09-26 15:32:19 +0100
commit      a2135bb3737bd7c86c6ea9ed8df2272e5f3ebcb0 (patch)
tree        3a3b4b7ac8f3127bb64381e12f9380354688cb48
parent      4a43c9403306d10cd7905c9cbd1f4962655db001 (diff)
download    armnn-a2135bb3737bd7c86c6ea9ed8df2272e5f3ebcb0.tar.gz
IVGCVSW-8053 Update TensorFlow and FlatBuffers versions on ArmNN guides
Signed-off-by: Narumol Prangnawarat <narumol.prangnawarat@arm.com>
Change-Id: I975cf4ccbddd1ea756e1d9f2148dcb8e320346f5
-rw-r--r--   BuildGuideAndroidNDK.md                 10
-rw-r--r--   delegate/DelegateQuickStartGuide.md     10
-rw-r--r--   samples/ObjectDetection/Readme.md       20
-rwxr-xr-x   scripts/build_android_ndk_guide.sh      12
-rw-r--r--   shim/BuildGuideShimSupportLibrary.md     2
5 files changed, 27 insertions, 27 deletions
diff --git a/BuildGuideAndroidNDK.md b/BuildGuideAndroidNDK.md
index a133956dbb..8e634f3ff1 100644
--- a/BuildGuideAndroidNDK.md
+++ b/BuildGuideAndroidNDK.md
@@ -86,13 +86,13 @@ cd..
Download Flatbuffers:
```bash
cd $WORKING_DIR
-wget https://github.com/google/flatbuffers/archive/v2.0.6.tar.gz
-tar xf v2.0.6.tar.gz
+wget https://github.com/google/flatbuffers/archive/v23.5.26.tar.gz
+tar xf v23.5.26.tar.gz
```
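The GitHub archive for tag v23.5.26 unpacks to a directory named `flatbuffers-23.5.26`, which the build steps below change into; an optional sanity check (not part of the guide) is:
```bash
# Confirm the archive extracted to the directory the following build steps cd into
ls -d $WORKING_DIR/flatbuffers-23.5.26
```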
Build Flatbuffers for x86:
```bash
-cd $WORKING_DIR/flatbuffers-2.0.6
+cd $WORKING_DIR/flatbuffers-23.5.26
rm -f CMakeCache.txt
@@ -113,7 +113,7 @@ Note: -fPIC is added to allow users to use the libraries in shared objects.
Build Flatbuffers for Android:
```bash
-cd $WORKING_DIR/flatbuffers-2.0.6
+cd $WORKING_DIR/flatbuffers-23.5.26
rm -f CMakeCache.txt
@@ -170,7 +170,7 @@ First clone Tensorflow manually and check out the version Arm NN was tested with
cd $WORKING_DIR
git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow
-git fetch && git checkout "6f692f73cb2043b4a0b0446539cd8c15b3dd9220"
+git fetch && git checkout v2.14.0-rc1
```
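Because the checkout is now a release tag rather than a raw commit SHA, the working tree can be verified against the intended release with an optional check (not part of the guide):
```bash
# Should print the v2.14.0-rc1 tag for the detached checkout
git describe --tags
# The underlying commit SHA, if a comparison against release notes is needed
git rev-parse HEAD
```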
Or use the script that Arm NN provides:
```bash
diff --git a/delegate/DelegateQuickStartGuide.md b/delegate/DelegateQuickStartGuide.md
index 1665e0c158..6a14af4477 100644
--- a/delegate/DelegateQuickStartGuide.md
+++ b/delegate/DelegateQuickStartGuide.md
@@ -36,11 +36,11 @@ print(output_data)
# Prepare the environment
Pre-requisites:
- * Dynamically build Arm NN Delegate library or download the Arm NN binaries (built with a particular SHA of Tensorflow 2.12.0, which is 6f692f73cb2043b4a0b0446539cd8c15b3dd9220)
+ * Dynamically build the Arm NN Delegate library or download the Arm NN binaries (built against Tensorflow v2.14.0-rc1, commit SHA dd01672d9a99ac372cc77a2a84faf0aedaefa36c)
* python3 (Depends on TfLite version)
* virtualenv
* numpy (Depends on TfLite version)
- * tflite_runtime (2.12 currently available)
+ * tflite_runtime (v2.14.0-rc1 currently available)
If you haven't built the delegate yet then take a look at the [build guide](./BuildGuideNative.md). Otherwise, you can download the binaries [here](https://github.com/ARM-software/armnn/releases/). Set the following environment variable to the location of the .so binary files:
@@ -50,7 +50,7 @@ export LD_LIBRARY_PATH=<path_to_so_binary_files>
We recommend creating a virtual environment for this tutorial. For the following code to work python3 is needed. Please
also check the documentation of the TfLite version you want to use. There might be additional prerequisites for the python
-version. We will use Tensorflow Lite 2.12.0 for this guide.
+version. We will use Tensorflow Lite 2.14.0 for this guide.
```bash
# Install python3 (We ended up with python3.5.3) and virtualenv
sudo apt-get install python3-pip
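# A hedged sketch of the next step (not the guide's exact commands):
# create and activate a virtual environment for the TfLite runtime
pip3 install virtualenv
virtualenv -p python3 tflite_env
source tflite_env/bin/activate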
@@ -74,10 +74,10 @@ mobile and embedded devices.
The TfLite [website](https://www.tensorflow.org/lite/guide/python) shows you two methods to download the `tflite_runtime` package.
In our experience, the use of the pip command works for most systems including debian. However, if you're using an older version of Tensorflow,
you may need to build the pip package from source. You can find more information [here](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/tools/pip_package/README.md).
-But in our case, with Tensorflow Lite 2.12.0, we can install through:
+But in our case, with Tensorflow Lite 2.14.0, we can install through:
```
-pip3 install --extra-index-url https://google-coral.github.io/py-repo/ tflite_runtime==2.12.0
+pip3 install --extra-index-url https://google-coral.github.io/py-repo/ tflite_runtime==2.14.0
```
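An optional way to confirm the install before moving on (not part of the guide):
```bash
# Show the tflite_runtime version pip actually installed
pip3 show tflite_runtime
# Confirm the interpreter module imports cleanly inside the virtual environment
python3 -c "import tflite_runtime.interpreter as tflite; print('tflite_runtime OK')"
```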
Your virtual environment is now all setup. Copy the final python script into a python file e.g.
diff --git a/samples/ObjectDetection/Readme.md b/samples/ObjectDetection/Readme.md
index 87a38b3ff7..2b3a8d3e04 100644
--- a/samples/ObjectDetection/Readme.md
+++ b/samples/ObjectDetection/Readme.md
@@ -20,8 +20,8 @@ with detections shown in bounding boxes, class labels and confidence.
This example utilizes OpenCV functions to capture and output video data.
1. Public Arm NN C++ API is provided by Arm NN library.
2. For Delegate file mode following dependencies exist:
-2.1 Tensorflow version 2.10
-2.2 Flatbuffers version 2.0.6
+2.1 Tensorflow version 2.14
+2.2 Flatbuffers version 23.5.26
2.3 Arm NN delegate library
## System
@@ -97,7 +97,7 @@ Please see [find_opencv.cmake](./cmake/find_opencv.cmake) for implementation det
### Tensorflow Lite (Needed only in delegate file mode)
-This application uses [Tensorflow Lite)](https://www.tensorflow.org/) version 2.10 for demonstrating use of 'armnnDelegate'.
+This application uses [Tensorflow Lite](https://www.tensorflow.org/) version 2.14 for demonstrating use of 'armnnDelegate'.
armnnDelegate is a library for accelerating certain TensorFlow Lite operators on Arm hardware by providing
the TensorFlow Lite interpreter with an alternative implementation of the operators via its delegation mechanism.
You may clone and build Tensorflow lite and provide the path to its root and output library directories through the cmake
@@ -106,13 +106,13 @@ For implementation details see the scripts FindTfLite.cmake and FindTfLiteSrc.cm
The application links with the Tensorflow lite library libtensorflow-lite.a
-#### Download and build Tensorflow Lite version. We currently use Tf 2.12 SHA which has a fix for the Cmake build.
+#### Download and build Tensorflow Lite. We currently use TF 2.14 for the CMake build.
Example for Tensorflow Lite native compilation
```commandline
sudo apt install build-essential
git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow/tensorflow
-git checkout 6f692f73cb2043b4a0b0446539cd8c15b3dd9220
+git checkout v2.14.0-rc1
mkdir build && cd build
cmake ../lite -DTFLITE_ENABLE_XNNPACK=OFF
make
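# Optional check (not part of the Readme): the sample links against libtensorflow-lite.a,
# so confirm the static library was produced; its location in the build directory is an
# assumption about the TfLite CMake output layout
ls libtensorflow-lite.a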
@@ -120,19 +120,19 @@ make
### Flatbuffers (needed only in delegate file mode)
-This application uses [Flatbuffers)](https://google.github.io/flatbuffers/) version 1.12.0 for serialization
+This application uses [Flatbuffers](https://google.github.io/flatbuffers/) version 23.5.26 for serialization.
You may clone and build Flatbuffers and provide the path to its root directory through the cmake
flag FLATBUFFERS_ROOT.
Please see [FindFlatbuffers.cmake] for implementation details.
The application links with the Flatbuffers library libflatbuffers.a
-#### Download and build flatbuffers version 2.0.6
+#### Download and build flatbuffers version 23.5.26
Example for flatbuffer native compilation
```commandline
-wget https://github.com/google/flatbuffers/archive/v2.0.6.tar.gz
-tar xf v2.0.6.tar.gz
-cd flatbuffers-2.0.6
+wget https://github.com/google/flatbuffers/archive/v23.5.26.tar.gz
+tar xf v23.5.26.tar.gz
+cd flatbuffers-23.5.26
mkdir install && cd install
cmake .. -DCMAKE_INSTALL_PREFIX:PATH=`pwd`
make install
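# Hedged example (not from the Readme): pass the install prefix created above to the sample's
# CMake via the FLATBUFFERS_ROOT flag mentioned earlier (assuming the flag accepts this prefix)
cmake <path_to_ObjectDetection_sample> -DFLATBUFFERS_ROOT=$(pwd)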
diff --git a/scripts/build_android_ndk_guide.sh b/scripts/build_android_ndk_guide.sh
index 90a2e3373a..51ff9b1b05 100755
--- a/scripts/build_android_ndk_guide.sh
+++ b/scripts/build_android_ndk_guide.sh
@@ -109,14 +109,14 @@ function GetAndBuildCmake319 {
function GetAndBuildFlatbuffers {
cd $WORKING_DIR
- if [[ ! -d flatbuffers-2.0.6 ]]; then
+ if [[ ! -d flatbuffers-23.5.26 ]]; then
echo "+++ Getting Flatbuffers"
- wget https://github.com/google/flatbuffers/archive/v2.0.6.tar.gz
- tar xf v2.0.6.tar.gz
+ wget https://github.com/google/flatbuffers/archive/v23.5.26.tar.gz
+ tar xf v23.5.26.tar.gz
fi
#Build FlatBuffers
echo "+++ Building x86 Flatbuffers library"
- cd $WORKING_DIR/flatbuffers-2.0.6
+ cd $WORKING_DIR/flatbuffers-23.5.26
rm -f CMakeCache.txt
@@ -134,7 +134,7 @@ function GetAndBuildFlatbuffers {
make all install -j16
echo "+++ Building Android Flatbuffers library"
- cd $WORKING_DIR/flatbuffers-2.0.6
+ cd $WORKING_DIR/flatbuffers-23.5.26
rm -f CMakeCache.txt
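The CMake invocation the script uses for the Android build falls outside this hunk; as a hedged illustration only, a typical NDK cross-compile of FlatBuffers (with `$NDK` standing in for the NDK install path, and the ABI/platform values being assumptions) looks like:
```bash
# Generic NDK CMake pattern for an in-source FlatBuffers build; not the script's actual flags
cmake . -DCMAKE_TOOLCHAIN_FILE=$NDK/build/cmake/android.toolchain.cmake \
        -DANDROID_ABI=arm64-v8a \
        -DANDROID_PLATFORM=android-27 \
        -DFLATBUFFERS_BUILD_TESTS=OFF
```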
@@ -215,7 +215,7 @@ function GetAndBuildComputeLibrary {
}
function GetAndBuildTFLite {
- TENSORFLOW_REVISION="6f692f73cb2043b4a0b0446539cd8c15b3dd9220" # TF r2.12 + PR #60015 to fix Cmake build.
+ TENSORFLOW_REVISION="tags/v2.14.0-rc1" # TF 2.14 rc1
TFLITE_ROOT_DIR=${WORKING_DIR}/tensorflow/tensorflow/lite
cd $WORKING_DIR
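The hunk above only shows the revision variable changing; assuming the function checks out `$TENSORFLOW_REVISION` as before, the new value is equivalent to the following manual steps:
```bash
# Manual equivalent of the updated revision (a sketch; the script's checkout line is not shown here)
cd $WORKING_DIR/tensorflow
git fetch --tags
git checkout tags/v2.14.0-rc1
```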
diff --git a/shim/BuildGuideShimSupportLibrary.md b/shim/BuildGuideShimSupportLibrary.md
index 4a45596ed6..98c626fee0 100644
--- a/shim/BuildGuideShimSupportLibrary.md
+++ b/shim/BuildGuideShimSupportLibrary.md
@@ -18,7 +18,7 @@ This work is currently in an experimental phase.
The following are required to build the Arm NN support library
* Android NDK r25
* Detailed setup can be found in [BuildGuideAndroidNDK.md](../BuildGuideAndroidNDK.md)
-* Flatbuffer version 2.0.6
+* Flatbuffer version 23.5.26
* Detailed setup can be found in [BuildGuideCrossCompilation.md](../BuildGuideCrossCompilation.md)
The following is required to build the Arm NN shim