Diffstat (limited to 'docs')
-rw-r--r--  docs/FAQ.md                      |  37
-rw-r--r--  docs/FAQ.md.license              |   4
-rw-r--r--  docs/IntegratorGuide.md          | 135
-rw-r--r--  docs/IntegratorGuide.md.license  |   4
4 files changed, 116 insertions, 64 deletions
diff --git a/docs/FAQ.md b/docs/FAQ.md
index bd79bb04..9b6f099b 100644
--- a/docs/FAQ.md
+++ b/docs/FAQ.md
@@ -17,39 +17,6 @@ https://android.googlesource.com/platform/test/vts-testcase/hal/+/f74899c6c09b52
An acceptable workaround is to increase the timeout defined in AndroidTest.xml, in a similar way to https://android.googlesource.com/platform/test/vts-testcase/hal/+/f74899c6c09b52703e6db0323dffb4ae52539db4.
-Problems seen when trying to build the android-nn-driver obtained from GitHub
------------------------------------------------------------------------------
-
-Some users have encountered difficulties when attempting to build copies of the android-nn-driver obtained from GitHub. The build reports missing module source paths from armnn, clframework, flatbuffers-1.12.0 or boost_1_64_0.
-These errors can look
-like this:
-
-'error: vendor/arm/android-nn-driver/Android.bp:45:1: variant "android_arm64_armv7": module "armnn-arm_compute" "module source path "vendor/arm/android-nn-driver/clframework/build/android-arm64v8a/src/core/CL" does not exist'
-
-These errors are due to missing dependencies or incompatiblities between the android-nn-driver and armnn or clframework versions. The android-nn-driver requires boost_1_64_0 to build unit tests. The versions of android-nn-driver, armnn and clframework will have to match for them to work together. For example, the 19.08 version of android-nn-driver, clframework and armnn will work together but none of them will work with earlier or later versions of the others.
-
-In order to ensure that the correct versions of flatbuffers, boost, armnn and the clframework are obtained you can do the following:
-
-1. Delete or move any flatbuffers, boost, armnn or clframework directories from the android-nn-driver directory.
-2. Run the setup.sh script in the android-nn-driver directory.
-
-This will download the correct versions of flatbuffers, boost, armnn and the clframework and the android-nn-driver should build
-correctly. Alternatively you can go to the GitHub pages for android-nn-driver, armnn and computelibrary (clframework) and download versions with the same release tag.
-
-As an example, for 20.05 these would be:
-
-https://github.com/ARM-software/android-nn-driver/tree/v20.05
-https://github.com/ARM-software/armnn/tree/v20.05
-https://github.com/ARM-software/computelibrary/tree/v20.05
-
-The correct version of boost (1_64_0) can be downloaded from:
-
-https://www.boost.org/
-
-The correct version of flatbuffers (1.12.0) can be downloaded from:
-
-https://github.com/google/flatbuffers/archive/v1.12.0.tar.gz
-
Instance Normalization test failures
------------------------------------
@@ -58,7 +25,7 @@ There is a known issue in the Android NNAPI implementation of Instance Normaliza
VTS and CTS test failures
-------------------------
-With the release of the Android 10 R2 CTS some errors and crashes were discovered in the 19.08 and 19.11 releases of armnn, the android-nn-driver and ComputeLibrary. 19.08.01 and 19.11.01 releases of armnn, the android-nn-driver and ComputeLibrary were prepared that fix all these issues on CpuAcc and GpuAcc. If using 19.08 or 19.11 we recommend that you upgrade to the 19.08.01 or 19.11.01 releases. These issues have also been fixed in the 20.02 and later releases of armnn, the android-nn-driver and ComputeLibrary.
+With the Android 10 R2 CTS, some errors and crashes were discovered in the 19.08 and 19.11 releases of armnn, the android-nn-driver and ComputeLibrary. The 19.08.01 and 19.11.01 releases of armnn, the android-nn-driver and ComputeLibrary were prepared to fix all of these issues on CpuAcc and GpuAcc. If you are using 19.08 or 19.11, we recommend that you upgrade to the latest releases.
These fixes also required patches to be made to the Android Q test framework. You may encounter CTS and VTS test failures when attempting to build copies of the android-nn-driver against older versions of Android Q.
@@ -75,4 +42,4 @@ In order to fix these failures you will have to update to a version of Android Q
The Android 10 R3 CTS that can be downloaded from https://source.android.com/compatibility/cts/downloads contains all these patches.
-There is a known issue that even with these patches CTS tests "TestRandomGraph/RandomGraphTest#LargeGraph_TENSOR_FLOAT16_Rank3/41" and "TestRandomGraph/RandomGraphTest#LargeGraph_TENSOR_FLOAT16_Rank2/20 " will still fail on CpuRef. These failures are caused by a LogSoftmax layer followed by a Floor layer which blows up the slight difference between fp16 to fp32. This issue only affects CpuRef with Android Q. These tests are not failing for Android R. \ No newline at end of file
+There is a known issue that, even with these patches, the CTS tests "TestRandomGraph/RandomGraphTest#LargeGraph_TENSOR_FLOAT16_Rank3/41" and "TestRandomGraph/RandomGraphTest#LargeGraph_TENSOR_FLOAT16_Rank2/20" will still fail on CpuRef. These failures are caused by a LogSoftmax layer followed by a Floor layer, which amplifies the slight difference between fp16 and fp32. This issue only affects CpuRef with Android Q. These tests do not fail on Android R.
diff --git a/docs/FAQ.md.license b/docs/FAQ.md.license
new file mode 100644
index 00000000..68a3f516
--- /dev/null
+++ b/docs/FAQ.md.license
@@ -0,0 +1,4 @@
+#
+# Copyright © 2019-2022 Arm Ltd and Contributors. All rights reserved.
+# SPDX-License-Identifier: MIT
+#
diff --git a/docs/IntegratorGuide.md b/docs/IntegratorGuide.md
index 82177f72..55c9b9a7 100644
--- a/docs/IntegratorGuide.md
+++ b/docs/IntegratorGuide.md
@@ -5,10 +5,11 @@ This document describes how to integrate the Arm NN Android NNAPI driver into an
### Prerequisites
-1. Android source tree for Android P (we have tested against Android P version 9.0.0_r3) , in the directory `<ANDROID_ROOT>`
-2. Android source tree for Android Q (we have tested against Android Q version 10.0.0_r39), in the directory `<ANDROID_ROOT>`
+1. Android source tree for Android Q (we have tested against Android Q version 10.0.0_r39), in the directory `<ANDROID_ROOT>`
2. Android source tree for Android R (we have tested against Android R version 11.0.0_r3), in the directory `<ANDROID_ROOT>`
-3. Mali OpenCL driver integrated into the Android source tree
+3. Android source tree for Android S (we have tested against Android S version 12.0.0_r1), in the directory `<ANDROID_ROOT>`
+4. Android source tree for Android T (we have tested against Android T pre-release tag - TP1A.220624.003), in the directory `<ANDROID_ROOT>`
+5. Mali OpenCL driver integrated into the Android source tree
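+
+As an example of obtaining one of the source trees listed above, the Android R tree could be checked out into the directory that will serve as `<ANDROID_ROOT>` as follows (a minimal sketch; see https://source.android.com/setup/build/downloading for the full, current instructions):
+<pre>
+repo init -u https://android.googlesource.com/platform/manifest -b android-11.0.0_r3
+repo sync -j8
+</pre>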
### Procedure
@@ -20,11 +21,7 @@ To update the build environment, add to the contents of the variable `PRODUCT_PA
within the device-specific makefile that is located in the `<ANDROID_ROOT>/device/<manufacturer>/<product>`
directory. This file is normally called `device.mk`:
-`Android.mk` contains the module definition of all versions (1.0, 1.1, 1.2 and 1.3) of the Arm NN driver.
-For Android P, a new version of NN API is available (1.1), thus the following should be added to `device.mk` instead:
-<pre>
-PRODUCT_PACKAGES += android.hardware.neuralnetworks@1.1-service-armnn
-</pre>
+`Android.mk` contains the module definition of all versions (1.1, 1.2 and 1.3) of the Arm NN driver.
For Android Q, a new version of the NN API is available (1.2),
thus the following should be added to `device.mk` instead:
@@ -32,7 +29,7 @@ thus the following should be added to `device.mk` instead:
PRODUCT_PACKAGES += android.hardware.neuralnetworks@1.2-service-armnn
</pre>
-For Android R, new version of the NN API is available (1.3),
+For Android R, S and T, a new version of the NN API is available (1.3),
thus the following should be added to `device.mk` instead:
<pre>
PRODUCT_PACKAGES += android.hardware.neuralnetworks@1.3-service-armnn
@@ -44,34 +41,25 @@ ARMNN_COMPUTE_NEON_ENABLE or ARMNN_REF_ENABLE in `device.mk`:
ARMNN_COMPUTE_CL_ENABLE := 1
</pre>
-For Android P, Q and R the vendor manifest.xml requires the Neural Network HAL information.
-For Android P use HAL version 1.1 as below. For Android Q substitute 1.2 where necessary. For Android R substitute 1.3 where necessary.
+For all Android versions, the vendor manifest.xml requires the Neural Network HAL information.
+For Android Q, use HAL version 1.2 as below. For later Android versions, substitute 1.3 where necessary.
```xml
<hal format="hidl">
<name>android.hardware.neuralnetworks</name>
<transport>hwbinder</transport>
- <version>1.1</version>
+ <version>1.2</version>
<interface>
<name>IDevice</name>
<instance>armnn</instance>
</interface>
- <fqname>@1.1::IDevice/armnn</fqname>
+ <fqname>@1.2::IDevice/armnn</fqname>
</hal>
```
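+
+Once the resulting image has been built and flashed, one way to confirm that the manifest entry has taken effect is to list the registered HALs on the device and look for the armnn instance (a quick sanity check, not part of the official procedure):
+<pre>
+adb shell lshal | grep neuralnetworks
+</pre>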
-4. Build Android as normal, i.e. run `make` in `<ANDROID_ROOT>`
+4. Build Android as normal (https://source.android.com/setup/build/building)
5. To confirm that the Arm NN driver has been built, check for the driver service executable at
-Android P
-<pre>
-<ANDROID_ROOT>/out/target/product/<product>/system/vendor/bin/hw
-</pre>
-For example, if the Arm NN driver has been built with the NN API 1.1, check for the following file:
-<pre>
-<ANDROID_ROOT>/out/target/product/<product>/system/vendor/bin/hw/android.hardware.neuralnetworks@1.1-service-armnn
-</pre>
-
-Android Q and later has a different path:
+Android Q and later
<pre>
<ANDROID_ROOT>/out/target/product/<product>/vendor/bin/hw
</pre>
@@ -81,9 +69,8 @@ Android Q and later has a different path:
1. Run the Arm NN driver service executable in the background.
Use the corresponding version of the driver for the Android version you are running.
i.e
-android.hardware.neuralnetworks@1.1-service-armnn for Android P,
android.hardware.neuralnetworks@1.2-service-armnn for Android Q and
-android.hardware.neuralnetworks@1.3-service-armnn for Android R
+android.hardware.neuralnetworks@1.3-service-armnn for Android R, S and T
<pre>
It is also possible to use a specific backend by using the -c option.
The following is an example of using the CpuAcc backend for Android Q:
@@ -107,14 +94,104 @@ Rapid means that only 3 lws values should be tested for each kernel.
The recommended way of using it with Arm NN is to generate the tuning data during development of the Android image for a device, and use it in read-only mode during normal operation:
1. Run the Arm NN driver service executable in tuning mode. The path to the tuning data must be writable by the service.
-The following examples assume that the 1.1 version of the driver is being used:
+The following examples assume that the 1.2 version of the driver is being used:
<pre>
-adb shell /system/vendor/bin/hw/android.hardware.neuralnetworks@1.1-service-armnn --cl-tuned-parameters-file &lt;PATH_TO_TUNING_DATA&gt; --cl-tuned-parameters-mode UpdateTunedParameters --cl-tuning-level exhaustive &
+adb shell /system/vendor/bin/hw/android.hardware.neuralnetworks@1.2-service-armnn --cl-tuned-parameters-file &lt;PATH_TO_TUNING_DATA&gt; --cl-tuned-parameters-mode UpdateTunedParameters --cl-tuning-level exhaustive &
</pre>
2. Run a representative set of Android NNAPI testing loads. In this mode of operation, each NNAPI workload will be slow the first time it is executed, as the tuning parameters are being selected. Subsequent executions will use the tuning data which has been generated.
3. Stop the service.
4. Deploy the tuned parameters file to a location readable by the Arm NN driver service (for example, to a location within /vendor/etc).
5. During normal operation, pass the location of the tuning data to the driver service (this would normally be done by passing arguments via Android init in the service .rc definition):
<pre>
-adb shell /system/vendor/bin/hw/android.hardware.neuralnetworks@1.1-service-armnn --cl-tuned-parameters-file &lt;PATH_TO_TUNING_DATA&gt; &
+adb shell /system/vendor/bin/hw/android.hardware.neuralnetworks@1.2-service-armnn --cl-tuned-parameters-file &lt;PATH_TO_TUNING_DATA&gt; &
</pre>
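+
+As an illustration of step 4, on a development build the tuned parameters file could be deployed to /vendor/etc like this (a sketch only; the file name cl_tuned_params.bin is an assumption, and writing to /vendor requires a rooted, remountable image):
+<pre>
+adb root
+adb remount
+adb push cl_tuned_params.bin /vendor/etc/cl_tuned_params.bin
+</pre>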
+
+### Specifying the Capabilities for the Driver
+
+The Android NNAPI framework provides a means for a driver to report its Capabilities. These are relevant when there are multiple drivers on a device and the NNAPI needs to choose one to use for a model. The Android NNAPI documentation gives an overview of Capabilities and how they are used at https://source.android.com/docs/core/interaction/neural-networks
+
+These values are hardware dependent. Since we cannot know how any specific device is configured, or what values the authors of other drivers might report, we leave it to the integrator to specify them. The Android documentation linked above also provides guidelines on measuring performance when generating these values.
+
+As the Arm NN driver service initialises, it looks for system properties containing the performance values to return when the NNAPI service requests the driver's Capabilities. The properties must all be 32-bit float values and specify execution performance as well as power usage (in some circumstances Android may prefer low power consumption over high performance).
+
+As each new HAL version was introduced, the number of properties increased. The following lists, for each HAL version, the system properties that are looked for when the driver starts.
+
+#### HAL 1.0
+
+Initially, the HAL 1.0 service only supported Float32 and quantized int8.
+
+* ArmNN.float32Performance.execTime
+* ArmNN.float32Performance.powerUsage
+* ArmNN.quantized8Performance.execTime
+* ArmNN.quantized8Performance.powerUsage
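+
+As an illustration, the HAL 1.0 values above could be set temporarily from a host shell during bring-up (a sketch only; the numbers are placeholders, and setting them with setprop assumes the device's sepolicy allows it - on a production image they would normally be defined in the vendor init scripts or property files). The driver service reads them as it initialises, so it must be (re)started afterwards:
+<pre>
+adb shell setprop ArmNN.float32Performance.execTime 1.0
+adb shell setprop ArmNN.float32Performance.powerUsage 1.0
+adb shell setprop ArmNN.quantized8Performance.execTime 1.0
+adb shell setprop ArmNN.quantized8Performance.powerUsage 1.0
+</pre>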
+
+#### HAL 1.1
+
+HAL 1.1 added a performance setting for relaxedFloat32toFloat16Performance.
+
+* ArmNN.float32Performance.execTime
+* ArmNN.float32Performance.powerUsage
+* ArmNN.quantized8Performance.execTime
+* ArmNN.quantized8Performance.powerUsage
+* ArmNN.relaxedFloat32toFloat16Performance.execTime
+* ArmNN.relaxedFloat32toFloat16Performance.powerUsage
+
+#### HAL 1.2
+
+HAL 1.2 added support for a number of new operand types.
+
+* ArmNN.relaxedFloat32toFloat16Performance.execTime
+* ArmNN.relaxedFloat32toFloat16Performance.powerUsage
+* Armnn.operandTypeTensorFloat32Performance.execTime
+* Armnn.operandTypeTensorFloat32Performance.powerUsage
+* Armnn.operandTypeFloat32Performance.execTime
+* Armnn.operandTypeFloat32Performance.powerUsage
+* Armnn.operandTypeTensorFloat16Performance.execTime
+* Armnn.operandTypeTensorFloat16Performance.powerUsage
+* Armnn.operandTypeFloat16Performance.execTime
+* Armnn.operandTypeFloat16Performance.powerUsage
+* Armnn.operandTypeTensorQuant8AsymmPerformance.execTime
+* Armnn.operandTypeTensorQuant8AsymmPerformance.powerUsage
+* Armnn.operandTypeTensorQuant16SymmPerformance.execTime
+* Armnn.operandTypeTensorQuant16SymmPerformance.powerUsage
+* Armnn.operandTypeTensorQuant8SymmPerformance.execTime
+* Armnn.operandTypeTensorQuant8SymmPerformance.powerUsage
+* Armnn.operandTypeTensorQuant8SymmPerChannelPerformance.execTime
+* Armnn.operandTypeTensorQuant8SymmPerChannelPerformance.powerUsage
+* Armnn.operandTypeTensorInt32Performance.execTime
+* Armnn.operandTypeTensorInt32Performance.powerUsage
+* Armnn.operandTypeInt32Performance.execTime
+* Armnn.operandTypeInt32Performance.powerUsage
+
+#### HAL 1.3
+
+HAL 1.3 added support for the control flow operations If and While. Please note that Arm NN does not currently support If or While; until it does, these system properties can safely be ignored.
+
+* ArmNN.relaxedFloat32toFloat16Performance.execTime
+* ArmNN.relaxedFloat32toFloat16Performance.powerUsage
+* ArmNN.ifPerformance.execTime
+* ArmNN.ifPerformance.powerUsage
+* ArmNN.whilePerformance.execTime
+* ArmNN.whilePerformance.powerUsage
+* Armnn.operandTypeTensorFloat32Performance.execTime
+* Armnn.operandTypeTensorFloat32Performance.powerUsage
+* Armnn.operandTypeFloat32Performance.execTime
+* Armnn.operandTypeFloat32Performance.powerUsage
+* Armnn.operandTypeTensorFloat16Performance.execTime
+* Armnn.operandTypeTensorFloat16Performance.powerUsage
+* Armnn.operandTypeFloat16Performance.execTime
+* Armnn.operandTypeFloat16Performance.powerUsage
+* Armnn.operandTypeTensorQuant8AsymmPerformance.execTime
+* Armnn.operandTypeTensorQuant8AsymmPerformance.powerUsage
+* Armnn.operandTypeTensorQuant8AsymmSignedPerformance.execTime
+* Armnn.operandTypeTensorQuant8AsymmSignedPerformance.powerUsage
+* Armnn.operandTypeTensorQuant16SymmPerformance.execTime
+* Armnn.operandTypeTensorQuant16SymmPerformance.powerUsage
+* Armnn.operandTypeTensorQuant8SymmPerformance.execTime
+* Armnn.operandTypeTensorQuant8SymmPerformance.powerUsage
+* Armnn.operandTypeTensorQuant8SymmPerChannelPerformance.execTime
+* Armnn.operandTypeTensorQuant8SymmPerChannelPerformance.powerUsage
+* Armnn.operandTypeTensorInt32Performance.execTime
+* Armnn.operandTypeTensorInt32Performance.powerUsage
+* Armnn.operandTypeInt32Performance.execTime
+* Armnn.operandTypeInt32Performance.powerUsage
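+
+To check which of these properties are currently defined on a running device, the full property list can be filtered for the relevant prefixes (the -i flag covers both the ArmNN. and Armnn. spellings used above):
+<pre>
+adb shell getprop | grep -i armnn
+</pre>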
diff --git a/docs/IntegratorGuide.md.license b/docs/IntegratorGuide.md.license
new file mode 100644
index 00000000..68a3f516
--- /dev/null
+++ b/docs/IntegratorGuide.md.license
@@ -0,0 +1,4 @@
+#
+# Copyright © 2019-2022 Arm Ltd and Contributors. All rights reserved.
+# SPDX-License-Identifier: MIT
+#