author     Nikhil Raj <nikhil.raj@arm.com>  2022-06-17 13:24:58 +0100
committer  Nikhil Raj <nikhil.raj@arm.com>  2022-06-17 13:24:58 +0100
commit     d5d43d82c0137e08553e44345c609cdd1a7931c7 (patch)
tree       f1509f7fa94db0373a2c127682dd3d0ccc1915bd /22.05.01/operator_list.xhtml
parent     549b9600a6eaf0727fa084465a75f173edf8f381 (diff)
download   armnn-d5d43d82c0137e08553e44345c609cdd1a7931c7.tar.gz
Update Doxygen for 22.05 patch release
* Pooling3D added to tfLite delegate
* Available in tag 22.05.01
Signed-off-by: Nikhil Raj <nikhil.raj@arm.com>
Change-Id: I8d605bba4e87d30baa2c6d7b338c78a4400dc021
Diffstat (limited to '22.05.01/operator_list.xhtml')
-rw-r--r--  22.05.01/operator_list.xhtml  4248
1 file changed, 4248 insertions, 0 deletions
diff --git a/22.05.01/operator_list.xhtml b/22.05.01/operator_list.xhtml new file mode 100644 index 0000000000..b359e17e4a --- /dev/null +++ b/22.05.01/operator_list.xhtml @@ -0,0 +1,4248 @@ +<!-- Copyright (c) 2020 ARM Limited. --> +<!-- --> +<!-- SPDX-License-Identifier: MIT --> +<!-- --> +<!-- HTML header for doxygen 1.8.13--> +<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> +<html xmlns="http://www.w3.org/1999/xhtml"> +<head> +<meta http-equiv="Content-Type" content="text/xhtml;charset=UTF-8"/> +<meta http-equiv="X-UA-Compatible" content="IE=9"/> +<meta name="generator" content="Doxygen 1.8.13"/> +<meta name="robots" content="NOINDEX, NOFOLLOW" /> +<meta name="viewport" content="width=device-width, initial-scale=1"/> +<title>ArmNN: Arm NN Operators</title> +<link href="tabs.css" rel="stylesheet" type="text/css"/> +<script type="text/javascript" src="jquery.js"></script> +<script type="text/javascript" src="dynsections.js"></script> +<link href="navtree.css" rel="stylesheet" type="text/css"/> +<script type="text/javascript" src="resize.js"></script> +<script type="text/javascript" src="navtreedata.js"></script> +<script type="text/javascript" src="navtree.js"></script> +<script type="text/javascript"> + $(document).ready(initResizable); +</script> +<link href="search/search.css" rel="stylesheet" type="text/css"/> +<script type="text/javascript" src="search/searchdata.js"></script> +<script type="text/javascript" src="search/search.js"></script> +<script type="text/x-mathjax-config"> + MathJax.Hub.Config({ + extensions: ["tex2jax.js"], + jax: ["input/TeX","output/HTML-CSS"], +}); +</script><script type="text/javascript" src="http://cdn.mathjax.org/mathjax/latest/MathJax.js"></script> +<link href="doxygen.css" rel="stylesheet" type="text/css" /> +<link href="stylesheet.css" rel="stylesheet" type="text/css"/> +</head> +<body> +<div id="top"><!-- do not remove this div, it is closed by 
doxygen! --> +<div id="titlearea"> +<table cellspacing="0" cellpadding="0"> + <tbody> + <tr style="height: 56px;"> + <img alt="ArmNN" src="Arm_NN_horizontal_blue.png" style="max-width: 10rem; margin-top: .5rem; margin-left: 10px"/> + <td style="padding-left: 0.5em;"> + <div id="projectname"> +  <span id="projectnumber">22.05.01</span> + </div> + </td> + </tr> + </tbody> +</table> +</div> +<!-- end header part --> +<!-- Generated by Doxygen 1.8.13 --> +<script type="text/javascript"> +var searchBox = new SearchBox("searchBox", "search",false,'Search'); +</script> +<script type="text/javascript" src="menudata.js"></script> +<script type="text/javascript" src="menu.js"></script> +<script type="text/javascript"> +$(function() { + initMenu('',true,false,'search.php','Search'); + $(document).ready(function() { init_search(); }); +}); +</script> +<div id="main-nav"></div> +</div><!-- top --> +<div id="side-nav" class="ui-resizable side-nav-resizable"> + <div id="nav-tree"> + <div id="nav-tree-contents"> + <div id="nav-sync" class="sync"></div> + </div> + </div> + <div id="splitbar" style="-moz-user-select:none;" + class="ui-resizable-handle"> + </div> +</div> +<script type="text/javascript"> +$(document).ready(function(){initNavTree('operator_list.xhtml','');}); +</script> +<div id="doc-content"> +<!-- window showing the filter options --> +<div id="MSearchSelectWindow" + onmouseover="return searchBox.OnSearchSelectShow()" + onmouseout="return searchBox.OnSearchSelectHide()" + onkeydown="return searchBox.OnSearchSelectKey(event)"> +</div> + +<!-- iframe showing the search results (closed by default) --> +<div id="MSearchResultsWindow"> +<iframe src="javascript:void(0)" frameborder="0" + name="MSearchResults" id="MSearchResults"> +</iframe> +</div> + +<div class="header"> + <div class="headertitle"> +<div class="title">Arm NN Operators </div> </div> +</div><!--header--> +<div class="contents"> +<div class="toc"><h3>Table of Contents</h3> +<ul><li class="level1"><a 
href="#S5_1_operator_list">Arm NN Operators</a></li> +</ul> +</div> +<div class="textblock"><h1><a class="anchor" id="S5_1_operator_list"></a> +Arm NN Operators</h1> +<p>Arm NN supports the operators listed in the table below.</p> +<p>Arm NN supports a wide range of data types. The main data types that the Machine Learning functions support are the following: </p><ul> +<li> +<b>BFLOAT16:</b> 16-bit non-standard brain floating point </li> +<li> +<b>QASYMMU8:</b> 8-bit unsigned asymmetric quantized </li> +<li> +<b>QASYMMS8:</b> 8-bit signed asymmetric quantized </li> +<li> +<b>QUANTIZEDSYMM8PERAXIS:</b> 8-bit signed symmetric per-axis quantized </li> +<li> +<b>QSYMMS8:</b> 8-bit signed symmetric quantized </li> +<li> +<b>QSYMMS16:</b> 16-bit signed symmetric quantized </li> +<li> +<b>FLOAT32:</b> 32-bit single precision floating point </li> +<li> +<b>FLOAT16:</b> 16-bit half precision floating point </li> +<li> +<b>SIGNED32:</b> 32-bit signed integer </li> +<li> +<b>BOOLEAN:</b> 8-bit unsigned char </li> +<li> +<b>All:</b> Agnostic to any specific data type </li> +</ul> +<p>Arm NN supports the following data layouts (fastest changing dimension from right to left): </p><ul> +<li> +<b>NHWC:</b> Layout where channels are in the fastest changing dimension </li> +<li> +<b>NCHW:</b> Layout where width is in the fastest changing dimension </li> +<li> +<b>All:</b> Agnostic to any specific data layout </li> +</ul> +<p>where N = batches, C = channels, H = height, W = width</p> +<a class="anchor" id="multi_row"></a> +<table class="doxtable"> +<caption></caption> +<tr> +<th>Operator </th><th>Description </th><th>Equivalent Android NNAPI Operator </th><th>Backends </th><th>Data Layouts </th><th>Data Types </th></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_abs_layer.xhtml">AbsLayer</a> </td><td rowspan="3"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to perform an absolute operation. 
</td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_ABS </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +<tr> +<td>SIGNED32 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>SIGNED32 </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_activation_layer.xhtml" title="This layer represents an activation operation with the specified activation function. ">ActivationLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to apply the specified activation function. 
</td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_ABS </li> +<li> +ANEURALNETWORKS_ELU </li> +<li> +ANEURALNETWORKS_HARD_SWISH </li> +<li> +ANEURALNETWORKS_LOGISTIC </li> +<li> +ANEURALNETWORKS_PRELU </li> +<li> +ANEURALNETWORKS_RELU </li> +<li> +ANEURALNETWORKS_RELU1 </li> +<li> +ANEURALNETWORKS_RELU6 </li> +<li> +ANEURALNETWORKS_SQRT </li> +<li> +ANEURALNETWORKS_TANH </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_addition_layer.xhtml" title="This layer represents an addition operation. ">AdditionLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to add 2 tensors. 
</td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_ADD </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +<tr> +<td>SIGNED32 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +<tr> +<td>SIGNED32 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +<tr> +<td>SIGNED32 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_arg_min_max_layer.xhtml" title="This layer represents a ArgMinMax operation. ">ArgMinMaxLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to calculate the index of the minimum or maximum values in a tensor based on an axis. 
</td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_ARGMAX </li> +<li> +ANEURALNETWORKS_ARGMIN </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +<tr> +<td>SIGNED32 </td></tr> +<tr> +<td>SIGNED64 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>SIGNED32 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>SIGNED32 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_batch_normalization_layer.xhtml" title="This layer represents a batch normalization operation. ">BatchNormalizationLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to perform batch normalization. 
</td><td rowspan="3"><ul> +<li> +N/A </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +NHWC </li> +<li> +NCHW </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +NHWC </li> +<li> +NCHW </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_batch_to_space_nd_layer.xhtml" title="This layer represents a BatchToSpaceNd operation. ">BatchToSpaceNdLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to perform a batch to space transformation. </td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_BATCH_TO_SPACE_ND </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +NHWC </li> +<li> +NCHW </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +NHWC </li> +<li> +NCHW </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_cast_layer.xhtml" title="This layer represents a cast operation. 
">CastLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to cast a tensor to a type. </td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_CAST </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QSYMMS8 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +<tr> +<td>SIGNED32 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>SIGNED32 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>SIGNED32 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_channel_shuffle_layer.xhtml">ChannelShuffleLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to reorganize the channels of a tensor. 
</td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_CHANNEL_SHUFFLE </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QSYMMS8 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_comparison_layer.xhtml" title="This layer represents a comparison operation. ">ComparisonLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to compare 2 tensors. 
</td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_EQUAL </li> +<li> +ANEURALNETWORKS_GREATER </li> +<li> +ANEURALNETWORKS_GREATER_EQUAL </li> +<li> +ANEURALNETWORKS_LESS </li> +<li> +ANEURALNETWORKS_LESS_EQUAL </li> +<li> +ANEURALNETWORKS_NOT_EQUAL </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>BOOLEAN </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +<tr> +<td>SIGNED32 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_concat_layer.xhtml" title="This layer represents a merge operation. ">ConcatLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to concatenate tensors along a given axis. 
</td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_CONCATENATION </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_constant_layer.xhtml" title="A layer that the constant data can be bound to. ">ConstantLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to provide a constant tensor. 
</td><td rowspan="3"><ul> +<li> +N/A </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +<tr> +<td>SIGNED32 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_convert_bf16_to_fp32_layer.xhtml" title="This layer converts data type BFloat16 to Float32. ">ConvertBf16ToFp32Layer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to convert <a class="el" href="classarmnn_1_1_b_float16.xhtml">BFloat16</a> tensor to Float32 tensor. </td><td rowspan="3"><ul> +<li> +N/A </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_convert_fp16_to_fp32_layer.xhtml" title="This layer converts data type Float 16 to Float 32. 
">ConvertFp16ToFp32Layer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to convert Float16 tensor to Float32 tensor. </td><td rowspan="3"><ul> +<li> +N/A </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_convert_fp32_to_bf16_layer.xhtml" title="This layer converts data type Float32 to BFloat16. ">ConvertFp32ToBf16Layer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to convert Float32 tensor to <a class="el" href="classarmnn_1_1_b_float16.xhtml">BFloat16</a> tensor. </td><td rowspan="3"><ul> +<li> +N/A </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_convert_fp32_to_fp16_layer.xhtml" title="This layer converts data type Float 32 to Float 16. 
">ConvertFp32ToFp16Layer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to convert Float32 tensor to Float16 tensor. </td><td rowspan="3"><ul> +<li> +N/A </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_convolution2d_layer.xhtml" title="This layer represents a convolution 2d operation. ">Convolution2dLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to compute a convolution operation. 
</td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_CONV_2D </li> +<li> +ANEURALNETWORKS_GROUPED_CONV_2D </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +NHWC </li> +<li> +NCHW </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>SIGNED32 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QUANTIZEDSYMM8PERAXIS </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +NHWC </li> +<li> +NCHW </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>SIGNED32 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QUANTIZEDSYMM8PERAXIS </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_convolution3d_layer.xhtml" title="This layer represents a convolution 3d operation. ">Convolution3dLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to compute a 3D convolution operation. 
</td><td rowspan="3"><ul> +<li> +N/A </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +NDHWC </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +N/A </li> +</ul> +</td><td><ul> +<li> +N/A </li> +</ul> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +N/A </li> +</ul> +</td><td><ul> +<li> +N/A </li> +</ul> +</td></tr> +<tr> +<td rowspan="1"><a class="el" href="classarmnn_1_1_debug_layer.xhtml" title="This layer visualizes the data flowing through the network. ">DebugLayer</a> </td><td rowspan="1" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to print out inter layer tensor information. </td><td rowspan="1"><ul> +<li> +N/A </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +<tr> +<td>SIGNED32 </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_depth_to_space_layer.xhtml" title="This layer represents a DepthToSpace operation. ">DepthToSpaceLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to perform Depth to Space transformation. 
</td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_DEPTH_TO_SPACE </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +NHWC </li> +<li> +NCHW </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +NHWC </li> +<li> +NCHW </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_depthwise_convolution2d_layer.xhtml" title="This layer represents a depthwise convolution 2d operation. ">DepthwiseConvolution2dLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to compute a depthwise convolution operation. 
</td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_DEPTHWISE_CONV_2D </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +NHWC </li> +<li> +NCHW </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>SIGNED32 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QUANTIZEDSYMM8PERAXIS </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +NHWC </li> +<li> +NCHW </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>SIGNED32 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QUANTIZEDSYMM8PERAXIS </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_dequantize_layer.xhtml" title="This layer dequantizes the input tensor. ">DequantizeLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to dequantize the values in a tensor. 
</td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_DEQUANTIZE </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QUANTIZEDSYMM8PERAXIS </td></tr> +<tr> +<td>QSYMMS8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QUANTIZEDSYMM8PERAXIS </td></tr> +<tr> +<td>QSYMMS8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="2"><a class="el" href="classarmnn_1_1_detection_post_process_layer.xhtml" title="This layer represents a detection postprocess operator. ">DetectionPostProcessLayer</a> </td><td rowspan="2" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to generate the detection output based on center size encoded boxes, class prediction and anchors by doing non maximum suppression (NMS). 
</td><td rowspan="2"><ul> +<li> +ANEURALNETWORKS_DETECTION_POSTPROCESSING </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_division_layer.xhtml" title="This layer represents a division operation. ">DivisionLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to divide 2 tensors. </td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_DIV </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +<tr> +<td>SIGNED32 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_elementwise_base_layer.xhtml" title="NOTE: this is an abstract class to encapsulate the element wise operations, it does not implement: st...">ElementwiseBaseLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" 
href="classarmnn_1_1_layer.xhtml">Layer</a> to perform Add - Div - Max - Min - Mul operations. </td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_ADD </li> +<li> +ANEURALNETWORKS_DIV </li> +<li> +ANEURALNETWORKS_MAXIMUM </li> +<li> +ANEURALNETWORKS_MINIMUM </li> +<li> +ANEURALNETWORKS_MUL </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +<tr> +<td>SIGNED32 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +<tr> +<td>SIGNED32 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +<tr> +<td>SIGNED32 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_elementwise_unary_layer.xhtml" title="This layer represents an elementwiseUnary operation. ">ElementwiseUnaryLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to perform Rsqrt - Exp - Neg - Log - Abs - Sin - Sqrt operations. 
</td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_ABS </li> +<li> +ANEURALNETWORKS_EXP </li> +<li> +ANEURALNETWORKS_LOG </li> +<li> +ANEURALNETWORKS_NEG </li> +<li> +ANEURALNETWORKS_RSQRT </li> +<li> +ANEURALNETWORKS_SIN </li> +<li> +ANEURALNETWORKS_SQRT </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>SIGNED32 </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="1"><a class="el" href="classarmnn_1_1_fake_quantization_layer.xhtml" title="This layer represents a fake quantization operation. ">FakeQuantizationLayer</a> </td><td rowspan="1" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to quantize float values and dequantize afterwards. The current implementation does not dequantize the values. </td><td rowspan="1"><ul> +<li> +N/A </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_fill_layer.xhtml" title="This layer represents a fill operation. ">FillLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to set the values of a tensor with a given value. 
</td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_FILL </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>SIGNED32 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_floor_layer.xhtml" title="This layer represents a floor operation. ">FloorLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to round the value to the lowest whole number. </td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_FLOOR </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_fully_connected_layer.xhtml" title="This layer represents a fully connected operation. ">FullyConnectedLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to perform a fully connected / dense operation. 
</td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_FULLY_CONNECTED </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +NHWC </li> +<li> +NCHW </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>SIGNED32 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +NHWC </li> +<li> +NCHW </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>SIGNED32 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_gather_layer.xhtml" title="This layer represents a Gather operator. ">GatherLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to perform the gather operation along the chosen axis. 
</td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_GATHER </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +<tr> +<td>SIGNED32 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_gather_nd_layer.xhtml" title="This layer represents a GatherNd operator. ">GatherNdLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to perform the gatherNd operation. 
</td><td rowspan="3"><ul> +<li> +N/A </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +<tr> +<td>SIGNED32 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +<tr> +<td>SIGNED32 </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +<tr> +<td>SIGNED32 </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="1"><a class="el" href="classarmnn_1_1_input_layer.xhtml" title="A layer user-provided data can be bound to (e.g. inputs, outputs). ">InputLayer</a> </td><td rowspan="1" style="width:200px;">Special layer used to provide input data to the computational network. </td><td rowspan="1"><ul> +<li> +N/A </li> +</ul> +</td><td>All </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_instance_normalization_layer.xhtml" title="This layer represents an instance normalization operation. ">InstanceNormalizationLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to perform an instance normalization on a given axis. 
</td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_INSTANCE_NORMALIZATION </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +NHWC </li> +<li> +NCHW </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +NHWC </li> +<li> +NCHW </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_l2_normalization_layer.xhtml" title="This layer represents a L2 normalization operation. ">L2NormalizationLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to perform an L2 normalization on a given axis. 
</td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_L2_NORMALIZATION </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +NHWC </li> +<li> +NCHW </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +NHWC </li> +<li> +NCHW </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_log_softmax_layer.xhtml" title="This layer represents a log softmax operation. ">LogSoftmaxLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to perform the log softmax activations given logits. 
</td><td rowspan="3"><ul> +<li> +N/A </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_logical_binary_layer.xhtml" title="This layer represents a Logical Binary operation. ">LogicalBinaryLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to perform Logical AND - Logical NOT - Logical OR operations. </td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_LOGICAL_AND </li> +<li> +ANEURALNETWORKS_LOGICAL_NOT </li> +<li> +ANEURALNETWORKS_LOGICAL_OR </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BOOLEAN </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BOOLEAN </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BOOLEAN </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_lstm_layer.xhtml" title="This layer represents a LSTM operation. 
">LstmLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to perform a single time step in a Long Short-Term Memory (LSTM) operation. </td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_LSTM </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_map_layer.xhtml" title="This layer represents a memory copy operation. ">MapLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to perform map operation on tensor. </td><td rowspan="3"><ul> +<li> +N/A </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_maximum_layer.xhtml" title="This layer represents a maximum operation. ">MaximumLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to perform an elementwise maximum of two tensors. 
</td><td rowspan="3"><ul> +<li> +N/A </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +<tr> +<td>SIGNED32 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>SIGNED32 </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>SIGNED32 </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_mean_layer.xhtml" title="This layer represents a mean operation. ">MeanLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to perform reduce mean operation. 
</td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_MEAN </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_mem_copy_layer.xhtml" title="This layer represents a memory copy operation. ">MemCopyLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to perform memory copy operation. 
</td><td rowspan="3"><ul> +<li> +N/A </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +<tr> +<td>BOOLEAN </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_mem_import_layer.xhtml" title="This layer represents a memory import operation. ">MemImportLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to perform memory import operation. </td><td rowspan="3"><ul> +<li> +N/A </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_merge_layer.xhtml" title="This layer represents a merge operation. ">MergeLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to concatenate tensors along a given axis. 
</td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_CONCATENATION </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_minimum_layer.xhtml" title="This layer represents a minimum operation. ">MinimumLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to perform an elementwise minimum of two tensors. 
</td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_MINIMUM </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +<tr> +<td>SIGNED32 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>SIGNED32 </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_multiplication_layer.xhtml" title="This layer represents a multiplication operation. ">MultiplicationLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to perform an elementwise multiplication of two tensors. 
</td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_MUL </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +<tr> +<td>SIGNED32 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +<tr> +<td>SIGNED32 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +<tr> +<td>SIGNED32 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_normalization_layer.xhtml" title="This layer represents a normalization operation. ">NormalizationLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to compute normalization operation. 
</td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_LOCAL_RESPONSE_NORMALIZATION </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +NHWC </li> +<li> +NCHW </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +NHWC </li> +<li> +NCHW </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="1"><a class="el" href="classarmnn_1_1_output_layer.xhtml" title="A layer user-provided data can be bound to (e.g. inputs, outputs). ">OutputLayer</a> </td><td rowspan="1" style="width:200px;">A special layer providing access to a user supplied buffer into which the output of a network can be written. </td><td rowspan="1"><ul> +<li> +N/A </li> +</ul> +</td><td>All </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_pad_layer.xhtml" title="This layer represents a pad operation. ">PadLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to pad a tensor. 
</td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_PAD </li> +<li> +ANEURALNETWORKS_PAD_V2 </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +NHWC </li> +<li> +NCHW </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +NHWC </li> +<li> +NCHW </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_permute_layer.xhtml" title="This layer represents a permutation operation. ">PermuteLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to transpose an ND tensor. </td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_TRANSPOSE </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +NHWC </li> +<li> +NCHW </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +NHWC </li> +<li> +NCHW </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_pooling2d_layer.xhtml" title="This layer represents a pooling 2d operation. 
">Pooling2dLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to perform 2D pooling with the specified pooling operation. </td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_AVERAGE_POOL_2D </li> +<li> +ANEURALNETWORKS_L2_POOL_2D </li> +<li> +ANEURALNETWORKS_MAX_POOL_2D </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +NHWC </li> +<li> +NCHW </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +NHWC </li> +<li> +NCHW </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_pooling3d_layer.xhtml" title="This layer represents a pooling 3d operation. ">Pooling3dLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to perform 3D pooling with the specified pooling operation. 
</td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_AVERAGE_POOL_3D </li> +<li> +ANEURALNETWORKS_L2_POOL_3D </li> +<li> +ANEURALNETWORKS_MAX_POOL_3D </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +NDHWC </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +N/A </li> +</ul> +</td><td></td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +NDHWC </li> +</ul> +</td><td></td></tr> +<tr> +<td rowspan="1"><a class="el" href="classarmnn_1_1_pre_compiled_layer.xhtml">PreCompiledLayer</a> </td><td rowspan="1" style="width:200px;">Opaque layer provided by a backend which provides an executable representation of a subgraph from the original network. </td><td rowspan="1"><ul> +<li> +N/A </li> +</ul> +</td><td>N/A </td><td>N/A </td><td>N/A </td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_prelu_layer.xhtml">PreluLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to compute the activation layer with the PRELU activation function. 
</td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_PRELU </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_q_lstm_layer.xhtml" title="This layer represents a QLstm operation. ">QLstmLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to perform quantized LSTM (Long Short-Term Memory) operation. 
</td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_QUANTIZED_LSTM </li> +<li> +ANEURALNETWORKS_QUANTIZED_16BIT_LSTM </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>SIGNED32 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>SIGNED32 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_quantize_layer.xhtml">QuantizeLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to perform quantization operation. 
</td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_QUANTIZE </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMM16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMM16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_quantized_lstm_layer.xhtml" title="This layer represents a QuantizedLstm operation. ">QuantizedLstmLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to perform quantized LSTM (Long Short-Term Memory) operation. 
</td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_QUANTIZED_LSTM </li> +<li> +ANEURALNETWORKS_QUANTIZED_16BIT_LSTM </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>SIGNED32 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>SIGNED32 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_rank_layer.xhtml">RankLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to perform a rank operation. </td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_RANK </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_reduce_layer.xhtml" title="This layer represents a reduction operation. 
">ReduceLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to perform reduce with the following operations - ARG_IDX_MAX: Index of the max value - ARG_IDX_MIN: Index of the min value - MEAN_SUM: Mean of sum - PROD: Product - SUM_SQUARE: Sum of squares - SUM: Sum - MIN: Min - MAX: Max </td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_REDUCE_MAX </li> +<li> +ANEURALNETWORKS_REDUCE_MIN </li> +<li> +ANEURALNETWORKS_REDUCE_SUM </li> +<li> +ANEURALNETWORKS_REDUCE_PROD </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +<tr> +<td>SIGNED32 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>SIGNED32 </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>SIGNED32 </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_reshape_layer.xhtml" title="This layer represents a reshape operation. ">ReshapeLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to reshape a tensor. 
</td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_RESHAPE </li> +<li> +ANEURALNETWORKS_SQUEEZE </li> +<li> +ANEURALNETWORKS_EXPAND_DIMS </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +<tr> +<td>SIGNED32 </td></tr> +<tr> +<td>BOOLEAN </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_resize_layer.xhtml" title="This layer represents a resize operation. ">ResizeLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to perform resize of a tensor using one of the interpolation methods: - Bilinear - Nearest Neighbor. 
</td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_RESIZE_BILINEAR </li> +<li> +ANEURALNETWORKS_RESIZE_NEAREST_NEIGHBOR </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +NHWC </li> +<li> +NCHW </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +NHWC </li> +<li> +NCHW </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_rsqrt_layer.xhtml">RsqrtLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to perform Rsqrt operation. 
</td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_RSQRT </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +<tr> +<td>SIGNED32 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>SIGNED32 </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_shape_layer.xhtml">ShapeLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to return the shape of the input tensor. </td><td rowspan="3"><ul> +<li> +N/A </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_slice_layer.xhtml">SliceLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to perform tensor slicing. 
</td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_SLICE </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_softmax_layer.xhtml" title="This layer represents a softmax operation. ">SoftmaxLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to perform softmax, log-softmax operation over the specified axis. </td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_LOG_SOFTMAX </li> +<li> +ANEURALNETWORKS_SOFTMAX </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a 
class="el" href="classarmnn_1_1_space_to_batch_nd_layer.xhtml" title="This layer represents a SpaceToBatchNd operation. ">SpaceToBatchNdLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to divide spatial dimensions of the tensor into a grid of blocks and interleaves these blocks with the batch dimension. </td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_SPACE_TO_BATCH_ND </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +NHWC </li> +<li> +NCHW </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +NHWC </li> +<li> +NCHW </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_space_to_depth_layer.xhtml" title="This layer represents a SpaceToDepth operation. ">SpaceToDepthLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to rearrange blocks of spatial data into depth. 
</td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_SPACE_TO_DEPTH </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +NHWC </li> +<li> +NCHW </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +NHWC </li> +<li> +NCHW </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_splitter_layer.xhtml" title="This layer represents a split operation. ">SplitterLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to split a tensor along a given axis. </td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_SPLIT </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_stack_layer.xhtml" title="This layer represents a stack operation. 
">StackLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to stack tensors along an axis. </td><td rowspan="3"><ul> +<li> +N/A </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="1"><a class="el" href="classarmnn_1_1_stand_in_layer.xhtml" title="This layer represents an unknown operation in the input graph. ">StandInLayer</a> </td><td rowspan="1" style="width:200px;">A layer to represent "unknown" or "unsupported" operations in the input graph. It has a configurable number of input and output slots and an optional name. </td><td rowspan="1"><ul> +<li> +N/A </li> +</ul> +</td><td>N/A </td><td>N/A </td><td>N/A </td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_strided_slice_layer.xhtml" title="This layer represents a strided slice operation. ">StridedSliceLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to extract a strided slice of a tensor. 
</td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_STRIDED_SLICE </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_subtraction_layer.xhtml" title="This layer represents a subtraction operation. ">SubtractionLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to perform an elementwise subtract of 2 tensors. </td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_SUB </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +<tr> +<td>SIGNED32 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +<tr> +<td>SIGNED32 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +<tr> +<td>SIGNED32 </td></tr> +<tr> +<td>FLOAT16 </td></tr> 
+<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_transpose_convolution2d_layer.xhtml" title="This layer represents a 2D transpose convolution operation. ">TransposeConvolution2dLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to perform 2D transpose convolution (deconvolution) operation. </td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_TRANSPOSE_CONV_2D </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +NHWC </li> +<li> +NCHW </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>SIGNED32 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QUANTIZEDSYMM8PERAXIS </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +NHWC </li> +<li> +NCHW </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>SIGNED32 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QUANTIZEDSYMM8PERAXIS </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_transpose_layer.xhtml" title="This layer represents a transpose operation. ">TransposeLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to transpose a tensor. 
</td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_TRANSPOSE </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>BFLOAT16 </td></tr> +<tr> +<td>FLOAT16 </td></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +<tr> +<td>QASYMMU8 </td></tr> +<tr> +<td>QSYMMS16 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3">UnidirectionalSequenceLstmLayer </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to perform unidirectional sequence LSTM operation. </td><td rowspan="3"><ul> +<li> +ANEURALNETWORKS_UNIDIRECTIONAL_SEQUENCE_LSTM </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th>Input Types </th></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +<table class="doxtable"> +<tr> +<th>Weight Types </th></tr> +<tr> +<td>FLOAT32 </td></tr> +<tr> +<td>QASYMMS8 </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th>Input Types </th></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +<table class="doxtable"> +<tr> +<th>Weight Types </th></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th>Input Types </th></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +<table class="doxtable"> +<tr> +<th>Weight Types </th></tr> +<tr> +<td>FLOAT32 </td></tr> +</table> +</td></tr> +<tr> +<td rowspan="3"><a class="el" href="classarmnn_1_1_unmap_layer.xhtml" title="This layer represents a memory copy operation. 
">UnmapLayer</a> </td><td rowspan="3" style="width:200px;"><a class="el" href="classarmnn_1_1_layer.xhtml">Layer</a> to perform unmap operation on tensor. </td><td rowspan="3"><ul> +<li> +N/A </li> +</ul> +</td><td>CpuRef </td><td><ul> +<li> +All </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td>CpuAcc </td><td><ul> +<li> +NHWC </li> +<li> +NCHW </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +<tr> +<td>GpuAcc </td><td><ul> +<li> +NHWC </li> +<li> +NCHW </li> +</ul> +</td><td><table class="doxtable"> +<tr> +<th></th></tr> +<tr> +<td>All </td></tr> +</table> +</td></tr> +</table> +</div></div><!-- contents --> +</div><!-- doc-content --> +<!-- start footer part --> +<div id="nav-path" class="navpath"><!-- id is needed for treeview function! --> + <ul> + <li class="footer">Generated on Fri Jun 17 2022 13:20:29 for ArmNN by + <a href="http://www.doxygen.org/index.html"> + <img class="footer" src="doxygen.png" alt="doxygen"/></a> 1.8.13 </li> + </ul> +</div> +</body> +</html> |