/// Copyright (c) 2021 ARM Limited and Contributors. All rights reserved.
///
/// SPDX-License-Identifier: MIT
///

namespace armnn
{
/**
@page delegate TfLite Delegate
@tableofcontents


@section delegateintro About the delegate
'armnnDelegate' is a library for accelerating certain TensorFlow Lite (TfLite) operators on Arm hardware. It can be
integrated into TfLite using its delegation mechanism. TfLite will then delegate the execution of operators supported by
Arm NN to Arm NN.

The main difference to our @ref S6_tf_lite_parser is the number of operators you can run with it. If none of the active
backends support an operation in your model, you won't be able to execute it with our parser. In contrast, TfLite only
delegates operations to the armnnDelegate if Arm NN supports them and executes the remaining operations itself. In other
words, every TfLite model can be executed and every operation in your model that we can accelerate will be accelerated.
That is the reason why the armnnDelegate is our recommended way to accelerate TfLite models.

If you need help building the armnnDelegate, please take a look at our [build guide](delegate/BuildGuideNative.md).
An example of how to set up TfLite to integrate the armnnDelegate can be found in this
guide: [Integrate the delegate into python](delegate/IntegrateDelegateIntoPython.md)
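
A minimal C++ sketch of the integration flow is shown below. It assumes the public delegate header armnn_delegate.hpp,
the GpuAcc and CpuAcc backends and a placeholder model file model.tflite; treat it as an illustration of how the
delegate is created and attached to a TfLite interpreter rather than a complete application (see the guides above for
build details).

@code{.cpp}
#include <armnn_delegate.hpp>
#include <armnn/BackendId.hpp>

#include <tensorflow/lite/interpreter.h>
#include <tensorflow/lite/kernels/register.h>
#include <tensorflow/lite/model.h>

#include <memory>
#include <vector>

int main()
{
    // Load the TfLite model and build an interpreter as usual.
    auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite"); // placeholder path
    tflite::ops::builtin::BuiltinOpResolver resolver;
    std::unique_ptr<tflite::Interpreter> interpreter;
    tflite::InterpreterBuilder(*model, resolver)(&interpreter);

    // Create the Arm NN delegate, preferring GpuAcc and falling back to CpuAcc.
    std::vector<armnn::BackendId> backends = { armnn::Compute::GpuAcc, armnn::Compute::CpuAcc };
    armnnDelegate::DelegateOptions delegateOptions(backends);
    std::unique_ptr<TfLiteDelegate, decltype(&armnnDelegate::TfLiteArmnnDelegateDelete)>
        armnnDelegatePtr(armnnDelegate::TfLiteArmnnDelegateCreate(delegateOptions),
                         armnnDelegate::TfLiteArmnnDelegateDelete);

    // Hand the supported subgraphs over to Arm NN; any operator the delegate does not
    // support stays on the default TfLite runtime.
    interpreter->ModifyGraphWithDelegate(armnnDelegatePtr.get());

    interpreter->AllocateTensors();
    interpreter->Invoke();
    return 0;
}
@endcode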


@section delegatesupport Supported Operators
This reference guide provides a list of TensorFlow Lite operators the Arm NN SDK currently supports.

@subsection delegatefullysupported Fully supported

The Arm NN SDK TensorFlow Lite delegate currently supports the following operators:

- ABS

- ADD

- ARGMAX

- ARGMIN

- AVERAGE_POOL_2D, Supported Fused Activation: RELU, RELU6, TANH, NONE

- BATCH_TO_SPACE_ND

- CONCATENATION, Supported Fused Activation: RELU, RELU6, TANH, NONE

- CONV_2D, Supported Fused Activation: RELU, RELU6, TANH, NONE

- DEPTH_TO_SPACE

- DEPTHWISE_CONV_2D, Supported Fused Activation: RELU, RELU6, TANH, NONE

- DEQUANTIZE

- DIV

- EQUAL

- ELU

- EXP

- FULLY_CONNECTED, Supported Fused Activation: RELU, RELU6, TANH, NONE

- FLOOR

- GATHER

- GREATER

- GREATER_OR_EQUAL

- HARD_SWISH

- LESS

- LESS_OR_EQUAL

- LOCAL_RESPONSE_NORMALIZATION

- LOGICAL_AND

- LOGICAL_NOT

- LOGICAL_OR

- LOGISTIC

- LOG_SOFTMAX

- L2_NORMALIZATION

- L2_POOL_2D

- MAXIMUM

- MAX_POOL_2D, Supported Fused Activation: RELU, RELU6, TANH, NONE

- MEAN

- MINIMUM

- MUL

- NEG

- NOT_EQUAL

- PAD

- QUANTIZE

- RESHAPE

- RESIZE_BILINEAR

- RESIZE_NEAREST_NEIGHBOR

- RELU

- RELU6

- RSQRT

- SOFTMAX

- SPACE_TO_BATCH_ND

- SPACE_TO_DEPTH

- SPLIT

- SPLIT_V

- SQRT

- SUB

- TANH

- TRANSPOSE

- TRANSPOSE_CONV

More machine learning operators will be supported in future releases.
**/
}