authorNattapat Chaimanowong <nattapat.chaimanowong@arm.com>2019-02-21 17:05:39 +0000
committerLes Bell <les.bell@arm.com>2019-02-22 09:03:23 +0000
commitc64ea9fdf975a65e9a1dd67b44469add270d6f8b (patch)
tree57cfba7e745d2acd7e40b65eea4dd043a8203294
parentb5b9bdf14d032f3133d5a76835742bbc8291494d (diff)
downloadarmnn-c64ea9fdf975a65e9a1dd67b44469add270d6f8b.tar.gz
IVGCVSW-2588 Update README files for 19.02
*Also update related Support.md files

Change-Id: If832980fdebb136ab02333d99512fd39b9093b2b
Signed-off-by: Nattapat Chaimanowong <nattapat.chaimanowong@arm.com>
-rw-r--r--  README.md                                        |  4
-rw-r--r--  src/armnnConverter/README.md                     |  5
-rw-r--r--  src/armnnTfLiteParser/TensorFlowLiteSupport.md   | 10
-rw-r--r--  src/armnnTfParser/TensorFlowSupport.md           | 88
-rw-r--r--  src/backends/README.md                           | 13
5 files changed, 78 insertions, 42 deletions
diff --git a/README.md b/README.md
index aef722f980..0e965fb833 100644
--- a/README.md
+++ b/README.md
@@ -24,9 +24,11 @@ Arm NN is written using portable C++14 and the build system uses [CMake](https:/
The armnn/tests directory contains tests used during ArmNN development. Many of them depend on third-party IP, model protobufs and image files not distributed with ArmNN. The dependencies of some of the tests are available freely on the Internet, for those who wish to experiment.
+The 'armnn/samples' directory contains SimpleSample.cpp, a very basic example of the ArmNN SDK API in use.
+
The 'ExecuteNetwork' program, in armnn/tests/ExecuteNetwork, has no additional dependencies beyond those required by ArmNN and the model parsers. It takes any model and any input tensor, and simply prints out the output tensor. Run with no arguments to see command-line help.
-The 'armnn/samples' directory contains SimpleSample.cpp. A very basic example of the ArmNN SDK API in use.
+The 'ArmnnConverter' program, in armnn/src/ArmnnConverter, has no additional dependencies beyond those required by ArmNN and the model parsers. It takes a model in TensorFlow format and produces a serialized model in ArmNN format. Run with no arguments to see command-line help. Note that this program can only convert models for which all operations are supported by the serialization tool (src/armnnSerializer).
Note that Arm NN needs to be built against a particular version of ARM's Compute Library. The get_compute_library.sh in the scripts subdirectory will clone the compute library from the review.mlplatform.org github repository into a directory alongside armnn named 'clframework' and checks out the correct revision.
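For readers new to the SDK, here is a minimal sketch of the kind of API usage SimpleSample.cpp demonstrates. It is not a copy of that file: the layer choice, tensor shapes and backend selection below are illustrative assumptions, and only the public armnn headers are used.

```cpp
#include <armnn/ArmNN.hpp>

#include <iostream>
#include <vector>

int main()
{
    // Build a trivial network: input -> ReLU activation -> output.
    armnn::INetworkPtr network = armnn::INetwork::Create();

    armnn::IConnectableLayer* input  = network->AddInputLayer(0);
    armnn::ActivationDescriptor reluDesc;
    reluDesc.m_Function = armnn::ActivationFunction::ReLu;
    armnn::IConnectableLayer* relu   = network->AddActivationLayer(reluDesc, "relu");
    armnn::IConnectableLayer* output = network->AddOutputLayer(0);

    input->GetOutputSlot(0).Connect(relu->GetInputSlot(0));
    relu->GetOutputSlot(0).Connect(output->GetInputSlot(0));

    armnn::TensorInfo info({1, 4}, armnn::DataType::Float32);
    input->GetOutputSlot(0).SetTensorInfo(info);
    relu->GetOutputSlot(0).SetTensorInfo(info);

    // Optimize for the reference backend and load the result into a runtime.
    armnn::IRuntime::CreationOptions options;
    armnn::IRuntimePtr runtime = armnn::IRuntime::Create(options);
    armnn::IOptimizedNetworkPtr optNet =
        armnn::Optimize(*network, {armnn::Compute::CpuRef}, runtime->GetDeviceSpec());

    armnn::NetworkId netId;
    runtime->LoadNetwork(netId, std::move(optNet));

    // Run a single inference on some made-up data.
    std::vector<float> inputData{-1.0f, 0.0f, 2.0f, -3.0f};
    std::vector<float> outputData(4);

    armnn::InputTensors inputTensors{
        {0, armnn::ConstTensor(runtime->GetInputTensorInfo(netId, 0), inputData.data())}};
    armnn::OutputTensors outputTensors{
        {0, armnn::Tensor(runtime->GetOutputTensorInfo(netId, 0), outputData.data())}};
    runtime->EnqueueWorkload(netId, inputTensors, outputTensors);

    for (float v : outputData) { std::cout << v << " "; }
    std::cout << std::endl;
    return 0;
}
```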
diff --git a/src/armnnConverter/README.md b/src/armnnConverter/README.md
new file mode 100644
index 0000000000..489b483d9b
--- /dev/null
+++ b/src/armnnConverter/README.md
@@ -0,0 +1,5 @@
+# The ArmnnConverter
+
+The `ArmnnConverter` is a program for converting neural networks from other formats to the Arm NN format. Currently the program only supports models in the TensorFlow Protocol Buffers format. Run the program with no arguments to see command-line help.
+
+For more information about the layers that are supported, see [TensorFlowSupport.md](../armnnTfParser/TensorFlowSupport.md) and [SerializerSupport.md](../armnnSerializer/SerializerSupport.md).
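As a rough illustration of what the converter does internally, the sketch below parses a frozen TensorFlow graph with the armnnTfParser and writes it back out with the armnnSerializer. The input/output binding names, tensor shape and file paths are placeholder assumptions, and error handling is omitted.

```cpp
#include <armnnTfParser/ITfParser.hpp>
#include <armnnSerializer/ISerializer.hpp>
#include <armnn/INetwork.hpp>

#include <fstream>

int main()
{
    // Parse a frozen TensorFlow model. "input", "output" and the shape are
    // placeholders for the real binding names and dimensions of the model.
    armnnTfParser::ITfParserPtr parser = armnnTfParser::ITfParser::Create();
    armnn::INetworkPtr network = parser->CreateNetworkFromBinaryFile(
        "model.pb",
        { { "input", armnn::TensorShape({ 1, 224, 224, 3 }) } },
        { "output" });

    // Serialize the parsed network into the Arm NN format.
    auto serializer = armnnSerializer::ISerializer::Create();
    serializer->Serialize(*network);

    std::ofstream file("model.armnn", std::ios::binary);
    serializer->SaveSerializedToStream(file);
    return 0;
}
```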
diff --git a/src/armnnTfLiteParser/TensorFlowLiteSupport.md b/src/armnnTfLiteParser/TensorFlowLiteSupport.md
index 507552da99..375ee4dbe1 100644
--- a/src/armnnTfLiteParser/TensorFlowLiteSupport.md
+++ b/src/armnnTfLiteParser/TensorFlowLiteSupport.md
@@ -8,6 +8,8 @@ The Arm NN SDK TensorFlow Lite parser currently only supports uint8.
The Arm NN SDK TensorFlow Lite parser currently supports the following operators:
+* ADD
+
* AVERAGE_POOL_2D, Supported Fused Activation: RELU , RELU6 , TANH, NONE
* CONCATENATION, Supported Fused Activation: RELU , RELU6 , TANH, NONE
@@ -18,8 +20,16 @@ The Arm NN SDK TensorFlow Lite parser currently supports the following operators
* FULLY_CONNECTED, Supported Fused Activation: RELU , RELU6 , TANH, NONE
+* LOGISTIC
+
* MAX_POOL_2D, Supported Fused Activation: RELU , RELU6 , TANH, NONE
+* MEAN
+
+* MUL
+
+* PAD
+
* RELU
* RELU6
diff --git a/src/armnnTfParser/TensorFlowSupport.md b/src/armnnTfParser/TensorFlowSupport.md
index 954f8e86f9..2e09768e42 100644
--- a/src/armnnTfParser/TensorFlowSupport.md
+++ b/src/armnnTfParser/TensorFlowSupport.md
@@ -12,11 +12,19 @@ See the TensorFlow [avg_pool documentation](https://www.tensorflow.org/api_docs/
**bias_add**
- See the TensorFlow [bias_add documentation](https://www.tensorflow.org/api_docs/python/tf/nn/bias_add) for more information.
+See the TensorFlow [bias_add documentation](https://www.tensorflow.org/api_docs/python/tf/nn/bias_add) for more information.
**conv2d**
- See the TensorFlow [conv2d documentation](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d) for more information.
+See the TensorFlow [conv2d documentation](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d) for more information.
+
+**expand_dims**
+
+See the TensorFlow [expand_dims documentation](https://www.tensorflow.org/api_docs/python/tf/expand_dims) for more information.
+
+**gather**
+
+See the TensorFlow [gather documentation](https://www.tensorflow.org/api_docs/python/tf/gather) for more information.
**identity**
@@ -30,25 +38,33 @@ See the TensorFlow [local_response_normalization documentation](https://www.tens
See the TensorFlow [max_pool documentation](https://www.tensorflow.org/api_docs/python/tf/nn/max_pool) for more information.
+**placeholder**
+
+See the TensorFlow [placeholder documentation](https://www.tensorflow.org/api_docs/python/tf/placeholder) for more information.
+
**reduce_mean**
See the TensorFlow [reduce_mean documentation](https://www.tensorflow.org/api_docs/python/tf/reduce_mean) for more information.
**relu**
- See the TensorFlow [relu documentation](https://www.tensorflow.org/api_docs/python/tf/nn/relu) for more information.
+See the TensorFlow [relu documentation](https://www.tensorflow.org/api_docs/python/tf/nn/relu) for more information.
**relu6**
- See the TensorFlow [relu6 documentation](https://www.tensorflow.org/api_docs/python/tf/nn/relu6) for more information.
+See the TensorFlow [relu6 documentation](https://www.tensorflow.org/api_docs/python/tf/nn/relu6) for more information.
+
+**rsqrt**
+
+See the TensorFlow [rsqrt documentation](https://www.tensorflow.org/api_docs/python/tf/math/rsqrt) for more information.
**shape**
- See the TensorFlow [shape documentation](https://www.tensorflow.org/api_docs/python/tf/shape) for more information.
+See the TensorFlow [shape documentation](https://www.tensorflow.org/api_docs/python/tf/shape) for more information.
**sigmoid**
- See the TensorFlow [sigmoid documentation](https://www.tensorflow.org/api_docs/python/tf/sigmoid) for more information.
+See the TensorFlow [sigmoid documentation](https://www.tensorflow.org/api_docs/python/tf/sigmoid) for more information.
**softplus**
@@ -62,22 +78,6 @@ See the TensorFlow [squeeze documentation](https://www.tensorflow.org/api_docs/p
See the TensorFlow [tanh documentation](https://www.tensorflow.org/api_docs/python/tf/tanh) for more information.
-**expand_dims**
-
-See the TensorFlow [expand_dims documentation](https://www.tensorflow.org/api_docs/python/tf/expand_dims) for more information.
-
-**placeholder**
-
-See the TensorFlow [placeholder documentation](https://www.tensorflow.org/api_docs/python/tf/placeholder) for more information.
-
-**minimum**
-
-See the TensorFlow [minimum documentation](https://www.tensorflow.org/api_docs/python/tf/math/minimum) for more information.
-
-**greater**
-
-See the TensorFlow [greater documentation](https://www.tensorflow.org/api_docs/python/tf/math/greater) for more information.
-
## Partially supported
**add**
@@ -100,18 +100,45 @@ The parser does not support the optional `shape` argument. It always infers the
The parser only supports a dilation rate of (1,1,1,1). See the TensorFlow [depthwise_conv2d_native documentation](https://www.tensorflow.org/api_docs/python/tf/nn/depthwise_conv2d_native) for more information.
+**equal**
+
+The parser does not support all forms of [broadcast composition](https://www.tensorflow.org/performance/xla/broadcasting), only broadcasting of 4D and 1D tensors. See the TensorFlow [equal operator documentation](https://www.tensorflow.org/api_docs/python/tf/math/equal) for more information.
+
**fused_batch_norm**
The parser does not support training outputs. See the TensorFlow [fused_batch_norm documentation](https://www.tensorflow.org/api_docs/python/tf/nn/fused_batch_norm) for more information.
+**greater**
+
+The parser does not support all forms of [broadcast composition](https://www.tensorflow.org/performance/xla/broadcasting), only broadcasting of 4D and 1D tensors. See the TensorFlow [greater operator documentation](https://www.tensorflow.org/api_docs/python/tf/math/greater) for more information.
+
**matmul**
The parser only supports constant weights in a fully connected layer. See the TensorFlow [matmul documentation](https://www.tensorflow.org/api_docs/python/tf/matmul) for more information.
+**maximum**
+
+Where maximum is used in one of the following ways:
+
+* max(mul(a, x), x)
+* max(mul(x, a), x)
+* max(x, mul(a, x))
+* max(x, mul(x, a))
+
+This is interpreted as an ActivationLayer with a LeakyRelu activation function (see the note after this hunk). Any other usage of max will result in the insertion of a simple maximum layer. The parser does not support all forms of [broadcast composition](https://www.tensorflow.org/performance/xla/broadcasting). See the TensorFlow [maximum documentation](https://www.tensorflow.org/api_docs/python/tf/maximum) for more information.
+
+**minimum**
+
+The parser does not support all forms of [broadcast composition](https://www.tensorflow.org/performance/xla/broadcasting), only broadcasting of 4D and 1D tensors. See the TensorFlow [minimum operator documentation](https://www.tensorflow.org/api_docs/python/tf/math/minimum) for more information.
+
**multiply**
The parser does not support all forms of [broadcast composition](https://www.tensorflow.org/performance/xla/broadcasting), only broadcasting of scalars and 1D tensors. See the TensorFlow [multiply documentation](https://www.tensorflow.org/api_docs/python/tf/multiply) for more information.
+**pad**
+
+Only supports tf.pad function with mode = 'CONSTANT' and constant_values = 0. See the TensorFlow [pad documentation](https://www.tensorflow.org/api_docs/python/tf/pad) for more information.
+
**realdiv**
The parser does not support all forms of [broadcast composition](https://www.tensorflow.org/performance/xla/broadcasting), only broadcasting of scalars and 1D tensors. See the TensorFlow [realdiv documentation](https://www.tensorflow.org/api_docs/python/tf/realdiv) for more information.
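Regarding the maximum pattern in the hunk above: the LeakyRelu substitution follows directly from the usual definition, with the constant `a` playing the role of the leak factor (assuming `0 < a < 1`):

```latex
\mathrm{LeakyReLU}_{a}(x) = \max(a\,x,\; x) =
\begin{cases}
  x,    & x \geq 0 \\
  a\,x, & x < 0
\end{cases}
```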
@@ -132,21 +159,6 @@ The parser only supports 2D inputs and does not support selecting the `softmax`
Arm NN supports split along the channel dimension for data formats NHWC and NCHW.
-**maximum**
-
-where maximum is used in one of the following ways
-
-* max(mul(a, x), x)
-* max(mul(x, a), x)
-* max(x, mul(a, x))
-* max(x, mul(x, a)
-
-This is interpreted as a ActivationLayer with a LeakyRelu activation function. Any other usage of max will result in the insertion of a simple maximum layer. See the TensorFlow [maximum documentation](https://www.tensorflow.org/api_docs/python/tf/maximum) for more information.
-
-**pad**
-
-Only supports tf.pad function with mode = 'CONSTANT' and constant_values = 0. See the TensorFlow [pad documentation](https://www.tensorflow.org/api_docs/python/tf/pad) for more information.
-
**subtract**
The parser does not support all forms of [broadcast composition](https://www.tensorflow.org/performance/xla/broadcasting), only broadcasting of scalars and 1D tensors. See the TensorFlow [subtract documentation](https://www.tensorflow.org/api_docs/python/tf/math/subtract) for more information.
@@ -165,4 +177,4 @@ Arm tests these operators with the following TensorFlow fp32 neural networks:
* inception_v3. The Arm NN SDK only supports the official inception_v3 transformed model. See the TensorFlow documentation on [preparing models for mobile deployment](https://www.tensorflow.org/mobile/prepare_models) for more information on how to transform the inception_v3 network.
-More machine learning operators will be supported in future releases. \ No newline at end of file
+More machine learning operators will be supported in future releases.
diff --git a/src/backends/README.md b/src/backends/README.md
index 60e4d0baa7..c269ea08da 100644
--- a/src/backends/README.md
+++ b/src/backends/README.md
@@ -116,8 +116,12 @@ The interface functions to be implemented are:
virtual IBackendContextPtr CreateBackendContext(const IRuntime::CreationOptions&) const = 0;
virtual Optimizations GetOptimizations() const = 0;
virtual ILayerSupportSharedPtr GetLayerSupport() const = 0;
+ virtual SubGraphUniquePtr OptimizeSubGraph(const SubGraph& subGraph, bool& optimizationAttempted) const = 0;
```
+Note that ```GetOptimizations()``` has been deprecated.
+The method ```OptimizeSubGraph(...)``` should be used instead to apply specific optimizations to a given sub-graph.
+
The ArmNN framework then creates instances of the IBackendInternal interface with the help of the
[BackendRegistry](backendsCommon/BackendRegistry.hpp) singleton.
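As an illustration only, a backend implementing this interface might look roughly like the sketch below. `MyCustomBackend` and its backend id are hypothetical, and the exact set of overrides should be checked against the IBackendInternal header of the release in use.

```cpp
#include <backendsCommon/IBackendInternal.hpp>

// Hypothetical example backend, sketching the overrides listed above.
class MyCustomBackend : public armnn::IBackendInternal
{
public:
    static const armnn::BackendId& GetIdStatic()
    {
        static const armnn::BackendId id{"MyCustomBackend"};
        return id;
    }
    const armnn::BackendId& GetId() const override { return GetIdStatic(); }

    IWorkloadFactoryPtr CreateWorkloadFactory(
        const IMemoryManagerSharedPtr& memoryManager) const override;

    IBackendContextPtr CreateBackendContext(
        const armnn::IRuntime::CreationOptions& options) const override;

    ILayerSupportSharedPtr GetLayerSupport() const override;

    // Deprecated: kept only for backward compatibility.
    Optimizations GetOptimizations() const override { return Optimizations{}; }

    // Preferred hook for backend-specific optimizations.
    SubGraphUniquePtr OptimizeSubGraph(const SubGraph& subGraph,
                                       bool& optimizationAttempted) const override;
};
```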
@@ -186,9 +190,12 @@ mechanism:
## The Optimizations
-The backends may choose to implement backend-specific optimizations. This is supported through the ```GetOptimizations()```
-method of the IBackendInternal interface. This function may return a vector of optimization objects and the optimizer
-runs these after all general optimization is performed on the network.
+The backends may choose to implement backend-specific optimizations.
+This is supported through the ```OptimizeSubGraph(...)``` method added to the backend interface,
+which allows the backends to apply their specific optimizations to a given sub-graph.
+
+The previous mechanism, where backends provided a list of optimizations to the Optimizer (through the ```GetOptimizations()``` method),
+is still in place for backward compatibility, but it is now considered deprecated and will be removed in a future release.
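A sketch of how such an override might use the `optimizationAttempted` flag is shown below; `MyCustomBackend` and its helper functions are hypothetical, and the exact contract should be confirmed against the optimizer sources.

```cpp
armnn::IBackendInternal::SubGraphUniquePtr
MyCustomBackend::OptimizeSubGraph(const armnn::SubGraph& subGraph,
                                  bool& optimizationAttempted) const
{
    // Nothing this backend can improve: report that no optimization was
    // attempted so the optimizer keeps the original sub-graph.
    if (!CanOptimize(subGraph))               // CanOptimize() is a hypothetical helper
    {
        optimizationAttempted = false;
        return nullptr;
    }

    // Otherwise build and hand back a replacement sub-graph.
    optimizationAttempted = true;
    return BuildOptimizedSubGraph(subGraph);  // hypothetical helper
}
```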
## The IBackendContext interface