Diffstat (limited to '21.02/_r_e_a_d_m_e_8md_source.xhtml')
-rw-r--r-- 21.02/_r_e_a_d_m_e_8md_source.xhtml | 4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/21.02/_r_e_a_d_m_e_8md_source.xhtml b/21.02/_r_e_a_d_m_e_8md_source.xhtml
index 009ba42910..0f1377c19b 100644
--- a/21.02/_r_e_a_d_m_e_8md_source.xhtml
+++ b/21.02/_r_e_a_d_m_e_8md_source.xhtml
@@ -98,13 +98,13 @@ $(document).ready(function(){initNavTree('_r_e_a_d_m_e_8md.xhtml','');});
<div class="title">README.md</div> </div>
</div><!--header-->
<div class="contents">
-<a href="_r_e_a_d_m_e_8md.xhtml">Go to the documentation of this file.</a><div class="fragment"><div class="line"><a name="l00001"></a><span class="lineno"> 1</span>&#160;# Introduction</div><div class="line"><a name="l00002"></a><span class="lineno"> 2</span>&#160;</div><div class="line"><a name="l00003"></a><span class="lineno"> 3</span>&#160;* [Software tools overview](#software-tools-overview)</div><div class="line"><a name="l00004"></a><span class="lineno"> 4</span>&#160;* [Where to find more information](#where-to-find-more-information)</div><div class="line"><a name="l00005"></a><span class="lineno"> 5</span>&#160;* [Contributions](#contributions)</div><div class="line"><a name="l00006"></a><span class="lineno"> 6</span>&#160;* [Disclaimer](#disclaimer)</div><div class="line"><a name="l00007"></a><span class="lineno"> 7</span>&#160;* [License](#license)</div><div class="line"><a name="l00008"></a><span class="lineno"> 8</span>&#160;* [Third-Party](#third-party)</div><div class="line"><a name="l00009"></a><span class="lineno"> 9</span>&#160;</div><div class="line"><a name="l00010"></a><span class="lineno"> 10</span>&#160;Arm NN is a key component of the [machine learning platform](https://mlplatform.org/), which is part of the </div><div class="line"><a name="l00011"></a><span class="lineno"> 11</span>&#160;[Linaro Machine Intelligence Initiative](https://www.linaro.org/news/linaro-announces-launch-of-machine-intelligence-initiative/). </div><div class="line"><a name="l00012"></a><span class="lineno"> 12</span>&#160;</div><div class="line"><a name="l00013"></a><span class="lineno"> 13</span>&#160;The Arm NN SDK is a set of open-source software and tools that enables machine learning workloads on power-efficient </div><div class="line"><a name="l00014"></a><span class="lineno"> 14</span>&#160;devices. It provides a bridge between existing neural network frameworks and power-efficient Cortex-A CPUs, </div><div class="line"><a name="l00015"></a><span class="lineno"> 15</span>&#160;Arm Mali GPUs and Arm Ethos NPUs.</div><div class="line"><a name="l00016"></a><span class="lineno"> 16</span>&#160;</div><div class="line"><a name="l00017"></a><span class="lineno"> 17</span>&#160;&lt;img align=&quot;center&quot; width=&quot;400&quot; src=&quot;https://developer.arm.com/-/media/Arm Developer Community/Images/Block Diagrams/Arm-NN/Arm-NN-Frameworks-Diagram.png&quot;/&gt;</div><div class="line"><a name="l00018"></a><span class="lineno"> 18</span>&#160;</div><div class="line"><a name="l00019"></a><span class="lineno"> 19</span>&#160;Arm NN SDK utilizes the Compute Library to target programmable cores, such as Cortex-A CPUs and Mali GPUs, </div><div class="line"><a name="l00020"></a><span class="lineno"> 20</span>&#160;as efficiently as possible. To target Ethos NPUs the NPU-Driver is utilized. We also welcome new contributors to provide </div><div class="line"><a name="l00021"></a><span class="lineno"> 21</span>&#160;their [own driver and backend](src/backends/README.md). Note, Arm NN does not provide support for Cortex-M CPUs.</div><div class="line"><a name="l00022"></a><span class="lineno"> 22</span>&#160;</div><div class="line"><a name="l00023"></a><span class="lineno"> 23</span>&#160;The latest release supports models created with **Caffe**, **TensorFlow**, **TensorFlow Lite** (TfLite) and **ONNX**. 
</div><div class="line"><a name="l00024"></a><span class="lineno"> 24</span>&#160;Arm NN analysis a given model and replaces the operations within it with implementations particularly designed for the </div><div class="line"><a name="l00025"></a><span class="lineno"> 25</span>&#160;hardware you want to execute it on. This results in a great boost of execution speed. How much faster your neural </div><div class="line"><a name="l00026"></a><span class="lineno"> 26</span>&#160;network can be executed depends on the operations it contains and the available hardware. Below you can see the speedup </div><div class="line"><a name="l00027"></a><span class="lineno"> 27</span>&#160;we&#39;ve been experiencing in our experiments with a few common networks.</div><div class="line"><a name="l00028"></a><span class="lineno"> 28</span>&#160;</div><div class="line"><a name="l00029"></a><span class="lineno"> 29</span>&#160;&lt;img align=&quot;center&quot; width=&quot;700&quot; src=&quot;https://developer.arm.com/-/media/developer/Other Images/Arm_NN_performance_relative_to_other_NN_frameworks_diagram.png&quot;/&gt;</div><div class="line"><a name="l00030"></a><span class="lineno"> 30</span>&#160;</div><div class="line"><a name="l00031"></a><span class="lineno"> 31</span>&#160;Arm NN is written using portable C++14 and the build system uses [CMake](https://cmake.org/), therefore it is possible </div><div class="line"><a name="l00032"></a><span class="lineno"> 32</span>&#160;to build for a wide variety of target platforms, from a wide variety of host environments.</div><div class="line"><a name="l00033"></a><span class="lineno"> 33</span>&#160;</div><div class="line"><a name="l00034"></a><span class="lineno"> 34</span>&#160;</div><div class="line"><a name="l00035"></a><span class="lineno"> 35</span>&#160;## Getting started: Software tools overview</div><div class="line"><a name="l00036"></a><span class="lineno"> 36</span>&#160;Depending on what kind of framework (Tensorflow, Caffe, ONNX) you&#39;ve been using to create your model there are multiple </div><div class="line"><a name="l00037"></a><span class="lineno"> 37</span>&#160;software tools available within Arm NN that can serve your needs.</div><div class="line"><a name="l00038"></a><span class="lineno"> 38</span>&#160;</div><div class="line"><a name="l00039"></a><span class="lineno"> 39</span>&#160;Generally, there is a **parser** available **for each supported framework**. Each parser allows you to run models from </div><div class="line"><a name="l00040"></a><span class="lineno"> 40</span>&#160;one framework e.g. the TfLite-Parser lets you run TfLite models. You can integrate these parsers into your own </div><div class="line"><a name="l00041"></a><span class="lineno"> 41</span>&#160;application to load, optimize and execute your model. We also provide **python bindings** for our parsers and the Arm NN core.</div><div class="line"><a name="l00042"></a><span class="lineno"> 42</span>&#160;We call the result **PyArmNN**. Therefore your application can be conveniently written in either C++ using the &quot;original&quot;</div><div class="line"><a name="l00043"></a><span class="lineno"> 43</span>&#160;Arm NN library or in Python using PyArmNN. You can find tutorials on how to setup and use our parsers in our doxygen</div><div class="line"><a name="l00044"></a><span class="lineno"> 44</span>&#160;documentation. 
The latest version can be found in the [wiki section](https://github.com/ARM-software/armnn/wiki/Documentation) </div><div class="line"><a name="l00045"></a><span class="lineno"> 45</span>&#160;of this repository.</div><div class="line"><a name="l00046"></a><span class="lineno"> 46</span>&#160;</div><div class="line"><a name="l00047"></a><span class="lineno"> 47</span>&#160;Admittedly, building Arm NN and its parsers from source is not always easy to accomplish. We are trying to increase our</div><div class="line"><a name="l00048"></a><span class="lineno"> 48</span>&#160;usability by providing Arm NN as a **Debian package**. Our debian package is the most easy way to install the Arm NN Core,</div><div class="line"><a name="l00049"></a><span class="lineno"> 49</span>&#160;the TfLite Parser and PyArmNN (More support is about to come): [Installation via Apt Repository](InstallationViaAptRepository.md)</div><div class="line"><a name="l00050"></a><span class="lineno"> 50</span>&#160;</div><div class="line"><a name="l00051"></a><span class="lineno"> 51</span>&#160;The newest member in Arm NNs software toolkit is the **TfLite Delegate**. The delegate can be integrated in TfLite.</div><div class="line"><a name="l00052"></a><span class="lineno"> 52</span>&#160;TfLite will then delegate operations, that can be accelerated with Arm NN, to Arm NN. Every other operation will still be</div><div class="line"><a name="l00053"></a><span class="lineno"> 53</span>&#160;executed with the usual TfLite runtime. This is our **recommended way to accelerate TfLite models**. As with our parsers</div><div class="line"><a name="l00054"></a><span class="lineno"> 54</span>&#160;there are tutorials in our doxygen documentation that can be found in the [wiki section](https://github.com/ARM-software/armnn/wiki/Documentation).</div><div class="line"><a name="l00055"></a><span class="lineno"> 55</span>&#160;</div><div class="line"><a name="l00056"></a><span class="lineno"> 56</span>&#160;If you would like to use **Arm NN on Android** you can follow this guide which explains [how to build Arm NN using the AndroidNDK]().</div><div class="line"><a name="l00057"></a><span class="lineno"> 57</span>&#160;But you might also want to take a look at another repository which implements a hardware abstraction layer (HAL) for </div><div class="line"><a name="l00058"></a><span class="lineno"> 58</span>&#160;Android. The repository is called [Android-NN-Driver](https://github.com/ARM-software/android-nn-driver) and when </div><div class="line"><a name="l00059"></a><span class="lineno"> 59</span>&#160;integrated into Android it will automatically run neural networks with Arm NN.</div><div class="line"><a name="l00060"></a><span class="lineno"> 60</span>&#160;</div><div class="line"><a name="l00061"></a><span class="lineno"> 61</span>&#160;</div><div class="line"><a name="l00062"></a><span class="lineno"> 62</span>&#160;## Where to find more information</div><div class="line"><a name="l00063"></a><span class="lineno"> 63</span>&#160;The section above introduces the most important tools that Arm NN provides.</div><div class="line"><a name="l00064"></a><span class="lineno"> 64</span>&#160;You can find a complete list in our **doxygen documentation**. 
The </div><div class="line"><a name="l00065"></a><span class="lineno"> 65</span>&#160;latest version can be found in the [wiki section](https://github.com/ARM-software/armnn/wiki/Documentation) of our github </div><div class="line"><a name="l00066"></a><span class="lineno"> 66</span>&#160;repository.</div><div class="line"><a name="l00067"></a><span class="lineno"> 67</span>&#160;</div><div class="line"><a name="l00068"></a><span class="lineno"> 68</span>&#160;For FAQs and troubleshooting advice, see [FAQ.md](docs/FAQ.md) </div><div class="line"><a name="l00069"></a><span class="lineno"> 69</span>&#160;or take a look at previous [github issues](https://github.com/ARM-software/armnn/issues).</div><div class="line"><a name="l00070"></a><span class="lineno"> 70</span>&#160;</div><div class="line"><a name="l00071"></a><span class="lineno"> 71</span>&#160;</div><div class="line"><a name="l00072"></a><span class="lineno"> 72</span>&#160;## Note</div><div class="line"><a name="l00073"></a><span class="lineno"> 73</span>&#160;1. The following tools are **deprecated** in Arm NN 21.02 and will be removed in 21.05:</div><div class="line"><a name="l00074"></a><span class="lineno"> 74</span>&#160; * TensorflowParser</div><div class="line"><a name="l00075"></a><span class="lineno"> 75</span>&#160; * CaffeParser</div><div class="line"><a name="l00076"></a><span class="lineno"> 76</span>&#160; * Quantizer</div><div class="line"><a name="l00077"></a><span class="lineno"> 77</span>&#160;</div><div class="line"><a name="l00078"></a><span class="lineno"> 78</span>&#160;2. We are currently in the process of removing [boost](https://www.boost.org/) as a dependency to Arm NN. This process </div><div class="line"><a name="l00079"></a><span class="lineno"> 79</span>&#160;is finished for everything apart from our unit tests. This means you don&#39;t need boost to build and use Arm NN but </div><div class="line"><a name="l00080"></a><span class="lineno"> 80</span>&#160;you need it to execute our unit tests. Boost will soon be removed from Arm NN entirely.</div><div class="line"><a name="l00081"></a><span class="lineno"> 81</span>&#160;</div><div class="line"><a name="l00082"></a><span class="lineno"> 82</span>&#160;</div><div class="line"><a name="l00083"></a><span class="lineno"> 83</span>&#160;## Contributions</div><div class="line"><a name="l00084"></a><span class="lineno"> 84</span>&#160;The Arm NN project welcomes contributions. 
For more details on contributing to Arm NN see the [Contributing page](https://mlplatform.org/contributing/) </div><div class="line"><a name="l00085"></a><span class="lineno"> 85</span>&#160;on the [MLPlatform.org](https://mlplatform.org/) website, or see the [Contributor Guide](ContributorGuide.md).</div><div class="line"><a name="l00086"></a><span class="lineno"> 86</span>&#160;</div><div class="line"><a name="l00087"></a><span class="lineno"> 87</span>&#160;Particularly if you&#39;d like to implement your own backend next to our CPU, GPU and NPU backends there are guides for </div><div class="line"><a name="l00088"></a><span class="lineno"> 88</span>&#160;backend development: </div><div class="line"><a name="l00089"></a><span class="lineno"> 89</span>&#160;[Backend development guide](src/backends/README.md), [Dynamic backend development guide](src/dynamic/README.md)</div><div class="line"><a name="l00090"></a><span class="lineno"> 90</span>&#160;</div><div class="line"><a name="l00091"></a><span class="lineno"> 91</span>&#160;</div><div class="line"><a name="l00092"></a><span class="lineno"> 92</span>&#160;## Disclaimer</div><div class="line"><a name="l00093"></a><span class="lineno"> 93</span>&#160;The armnn/tests directory contains tests used during Arm NN development. Many of them depend on third-party IP, model </div><div class="line"><a name="l00094"></a><span class="lineno"> 94</span>&#160;protobufs and image files not distributed with Arm NN. The dependencies of some of the tests are available freely on </div><div class="line"><a name="l00095"></a><span class="lineno"> 95</span>&#160;the Internet, for those who wish to experiment, but they won&#39;t run out of the box.</div><div class="line"><a name="l00096"></a><span class="lineno"> 96</span>&#160;</div><div class="line"><a name="l00097"></a><span class="lineno"> 97</span>&#160;</div><div class="line"><a name="l00098"></a><span class="lineno"> 98</span>&#160;## License</div><div class="line"><a name="l00099"></a><span class="lineno"> 99</span>&#160;Arm NN is provided under the [MIT](https://spdx.org/licenses/MIT.html) license.</div><div class="line"><a name="l00100"></a><span class="lineno"> 100</span>&#160;See [LICENSE](LICENSE) for more information. 
Contributions to this project are accepted under the same license.</div><div class="line"><a name="l00101"></a><span class="lineno"> 101</span>&#160;</div><div class="line"><a name="l00102"></a><span class="lineno"> 102</span>&#160;Individual files contain the following tag instead of the full license text.</div><div class="line"><a name="l00103"></a><span class="lineno"> 103</span>&#160;</div><div class="line"><a name="l00104"></a><span class="lineno"> 104</span>&#160; SPDX-License-Identifier: MIT</div><div class="line"><a name="l00105"></a><span class="lineno"> 105</span>&#160;</div><div class="line"><a name="l00106"></a><span class="lineno"> 106</span>&#160;This enables machine processing of license information based on the SPDX License Identifiers that are available here: http://spdx.org/licenses/</div><div class="line"><a name="l00107"></a><span class="lineno"> 107</span>&#160;</div><div class="line"><a name="l00108"></a><span class="lineno"> 108</span>&#160;</div><div class="line"><a name="l00109"></a><span class="lineno"> 109</span>&#160;## Third-party</div><div class="line"><a name="l00110"></a><span class="lineno"> 110</span>&#160;Third party tools used by Arm NN:</div><div class="line"><a name="l00111"></a><span class="lineno"> 111</span>&#160;</div><div class="line"><a name="l00112"></a><span class="lineno"> 112</span>&#160;| Tool | License (SPDX ID) | Description | Version | Provenience</div><div class="line"><a name="l00113"></a><span class="lineno"> 113</span>&#160;|----------------|-------------------|------------------------------------------------------------------|-------------|-------------------</div><div class="line"><a name="l00114"></a><span class="lineno"> 114</span>&#160;| cxxopts | MIT | A lightweight C++ option parser library | SHA 12e496da3d486b87fa9df43edea65232ed852510 | https://github.com/jarro2783/cxxopts</div><div class="line"><a name="l00115"></a><span class="lineno"> 115</span>&#160;| fmt | MIT | {fmt} is an open-source formatting library providing a fast and safe alternative to C stdio and C++ iostreams. | 7.0.1 | https://github.com/fmtlib/fmt</div><div class="line"><a name="l00116"></a><span class="lineno"> 116</span>&#160;| ghc | MIT | A header-only single-file std::filesystem compatible helper library | 1.3.2 | https://github.com/gulrak/filesystem</div><div class="line"><a name="l00117"></a><span class="lineno"> 117</span>&#160;| half | MIT | IEEE 754 conformant 16-bit half-precision floating point library | 1.12.0 | http://half.sourceforge.net</div><div class="line"><a name="l00118"></a><span class="lineno"> 118</span>&#160;| mapbox/variant | BSD | A header-only alternative to &#39;boost::variant&#39; | 1.1.3 | https://github.com/mapbox/variant</div><div class="line"><a name="l00119"></a><span class="lineno"> 119</span>&#160;| stb | MIT | Image loader, resize and writer | 2.16 | https://github.com/nothings/stb</div></div><!-- fragment --></div><!-- contents -->
+<a href="_r_e_a_d_m_e_8md.xhtml">Go to the documentation of this file.</a><div class="fragment"><div class="line"><a name="l00001"></a><span class="lineno"> 1</span>&#160;# Introduction</div><div class="line"><a name="l00002"></a><span class="lineno"> 2</span>&#160;</div><div class="line"><a name="l00003"></a><span class="lineno"> 3</span>&#160;* [Software tools overview](#software-tools-overview)</div><div class="line"><a name="l00004"></a><span class="lineno"> 4</span>&#160;* [Where to find more information](#where-to-find-more-information)</div><div class="line"><a name="l00005"></a><span class="lineno"> 5</span>&#160;* [Contributions](#contributions)</div><div class="line"><a name="l00006"></a><span class="lineno"> 6</span>&#160;* [Disclaimer](#disclaimer)</div><div class="line"><a name="l00007"></a><span class="lineno"> 7</span>&#160;* [License](#license)</div><div class="line"><a name="l00008"></a><span class="lineno"> 8</span>&#160;* [Third-Party](#third-party)</div><div class="line"><a name="l00009"></a><span class="lineno"> 9</span>&#160;</div><div class="line"><a name="l00010"></a><span class="lineno"> 10</span>&#160;Arm NN is a key component of the [machine learning platform](https://mlplatform.org/), which is part of the</div><div class="line"><a name="l00011"></a><span class="lineno"> 11</span>&#160;[Linaro Machine Intelligence Initiative](https://www.linaro.org/news/linaro-announces-launch-of-machine-intelligence-initiative/).</div><div class="line"><a name="l00012"></a><span class="lineno"> 12</span>&#160;</div><div class="line"><a name="l00013"></a><span class="lineno"> 13</span>&#160;The Arm NN SDK is a set of open-source software and tools that enables machine learning workloads on power-efficient</div><div class="line"><a name="l00014"></a><span class="lineno"> 14</span>&#160;devices. It provides a bridge between existing neural network frameworks and power-efficient Cortex-A CPUs,</div><div class="line"><a name="l00015"></a><span class="lineno"> 15</span>&#160;Arm Mali GPUs and Arm Ethos NPUs.</div><div class="line"><a name="l00016"></a><span class="lineno"> 16</span>&#160;</div><div class="line"><a name="l00017"></a><span class="lineno"> 17</span>&#160;&lt;img align=&quot;center&quot; width=&quot;400&quot; src=&quot;https://developer.arm.com/-/media/Arm Developer Community/Images/Block Diagrams/Arm-NN/Arm-NN-Frameworks-Diagram.png&quot;/&gt;</div><div class="line"><a name="l00018"></a><span class="lineno"> 18</span>&#160;</div><div class="line"><a name="l00019"></a><span class="lineno"> 19</span>&#160;Arm NN SDK utilizes the Compute Library to target programmable cores, such as Cortex-A CPUs and Mali GPUs,</div><div class="line"><a name="l00020"></a><span class="lineno"> 20</span>&#160;as efficiently as possible. To target Ethos NPUs the NPU-Driver is utilized. We also welcome new contributors to provide</div><div class="line"><a name="l00021"></a><span class="lineno"> 21</span>&#160;their [own driver and backend](src/backends/README.md). 
Note that Arm NN does not provide support for Cortex-M CPUs.</div><div class="line"><a name="l00022"></a><span class="lineno"> 22</span>&#160;</div><div class="line"><a name="l00023"></a><span class="lineno"> 23</span>&#160;The latest release supports models created with **Caffe**, **TensorFlow**, **TensorFlow Lite** (TfLite) and **ONNX**.</div><div class="line"><a name="l00024"></a><span class="lineno"> 24</span>&#160;Arm NN analyses a given model and replaces the operations within it with implementations particularly designed for the</div><div class="line"><a name="l00025"></a><span class="lineno"> 25</span>&#160;hardware you want to execute it on. This results in a great boost of execution speed. How much faster your neural</div><div class="line"><a name="l00026"></a><span class="lineno"> 26</span>&#160;network can be executed depends on the operations it contains and the available hardware. Below you can see the speedup</div><div class="line"><a name="l00027"></a><span class="lineno"> 27</span>&#160;we&#39;ve been experiencing in our experiments with a few common networks.</div><div class="line"><a name="l00028"></a><span class="lineno"> 28</span>&#160;</div><div class="line"><a name="l00029"></a><span class="lineno"> 29</span>&#160;&lt;img align=&quot;center&quot; width=&quot;700&quot; src=&quot;https://developer.arm.com/-/media/developer/Other Images/Arm_NN_performance_relative_to_other_NN_frameworks_diagram.png&quot;/&gt;</div><div class="line"><a name="l00030"></a><span class="lineno"> 30</span>&#160;</div><div class="line"><a name="l00031"></a><span class="lineno"> 31</span>&#160;Arm NN is written using portable C++14 and the build system uses [CMake](https://cmake.org/), therefore it is possible</div><div class="line"><a name="l00032"></a><span class="lineno"> 32</span>&#160;to build for a wide variety of target platforms, from a wide variety of host environments.</div><div class="line"><a name="l00033"></a><span class="lineno"> 33</span>&#160;</div><div class="line"><a name="l00034"></a><span class="lineno"> 34</span>&#160;</div><div class="line"><a name="l00035"></a><span class="lineno"> 35</span>&#160;## Getting started: Software tools overview</div><div class="line"><a name="l00036"></a><span class="lineno"> 36</span>&#160;Depending on what kind of framework (TensorFlow, Caffe, ONNX) you&#39;ve been using to create your model, there are multiple</div><div class="line"><a name="l00037"></a><span class="lineno"> 37</span>&#160;software tools available within Arm NN that can serve your needs.</div><div class="line"><a name="l00038"></a><span class="lineno"> 38</span>&#160;</div><div class="line"><a name="l00039"></a><span class="lineno"> 39</span>&#160;Generally, there is a **parser** available **for each supported framework**. Each parser allows you to run models from</div><div class="line"><a name="l00040"></a><span class="lineno"> 40</span>&#160;one framework, e.g. the TfLite-Parser lets you run TfLite models. You can integrate these parsers into your own</div><div class="line"><a name="l00041"></a><span class="lineno"> 41</span>&#160;application to load, optimize and execute your model. We also provide **Python bindings** for our parsers and the Arm NN core.</div><div class="line"><a name="l00042"></a><span class="lineno"> 42</span>&#160;We call the result **PyArmNN**. 
Therefore, your application can be conveniently written in either C++ using the &quot;original&quot;</div><div class="line"><a name="l00043"></a><span class="lineno"> 43</span>&#160;Arm NN library or in Python using PyArmNN. You can find tutorials on how to set up and use our parsers in our doxygen</div><div class="line"><a name="l00044"></a><span class="lineno"> 44</span>&#160;documentation. The latest version can be found in the [wiki section](https://github.com/ARM-software/armnn/wiki/Documentation)</div><div class="line"><a name="l00045"></a><span class="lineno"> 45</span>&#160;of this repository.</div><div class="line"><a name="l00046"></a><span class="lineno"> 46</span>&#160;</div><div class="line"><a name="l00047"></a><span class="lineno"> 47</span>&#160;Admittedly, building Arm NN and its parsers from source is not always easy to accomplish. We are trying to improve</div><div class="line"><a name="l00048"></a><span class="lineno"> 48</span>&#160;usability by providing Arm NN as a **Debian package**. Our Debian package is the easiest way to install the Arm NN Core,</div><div class="line"><a name="l00049"></a><span class="lineno"> 49</span>&#160;the TfLite Parser and PyArmNN (more support is on the way): [Installation via Apt Repository](InstallationViaAptRepository.md)</div><div class="line"><a name="l00050"></a><span class="lineno"> 50</span>&#160;</div><div class="line"><a name="l00051"></a><span class="lineno"> 51</span>&#160;The newest member of Arm NN&#39;s software toolkit is the **TfLite Delegate**. The delegate can be integrated into TfLite.</div><div class="line"><a name="l00052"></a><span class="lineno"> 52</span>&#160;TfLite will then delegate operations that can be accelerated with Arm NN to Arm NN. Every other operation will still be</div><div class="line"><a name="l00053"></a><span class="lineno"> 53</span>&#160;executed with the usual TfLite runtime. This is our **recommended way to accelerate TfLite models**. As with our parsers,</div><div class="line"><a name="l00054"></a><span class="lineno"> 54</span>&#160;there are tutorials in our doxygen documentation that can be found in the [wiki section](https://github.com/ARM-software/armnn/wiki/Documentation).</div><div class="line"><a name="l00055"></a><span class="lineno"> 55</span>&#160;</div><div class="line"><a name="l00056"></a><span class="lineno"> 56</span>&#160;If you would like to use **Arm NN on Android**, you can follow this guide, which explains [how to build Arm NN using the AndroidNDK]().</div><div class="line"><a name="l00057"></a><span class="lineno"> 57</span>&#160;But you might also want to take a look at another repository which implements a hardware abstraction layer (HAL) for</div><div class="line"><a name="l00058"></a><span class="lineno"> 58</span>&#160;Android. 
The repository is called [Android-NN-Driver](https://github.com/ARM-software/android-nn-driver) and, when</div><div class="line"><a name="l00059"></a><span class="lineno"> 59</span>&#160;integrated into Android, it will automatically run neural networks with Arm NN.</div><div class="line"><a name="l00060"></a><span class="lineno"> 60</span>&#160;</div><div class="line"><a name="l00061"></a><span class="lineno"> 61</span>&#160;</div><div class="line"><a name="l00062"></a><span class="lineno"> 62</span>&#160;## Where to find more information</div><div class="line"><a name="l00063"></a><span class="lineno"> 63</span>&#160;The section above introduces the most important tools that Arm NN provides.</div><div class="line"><a name="l00064"></a><span class="lineno"> 64</span>&#160;You can find a complete list in our **doxygen documentation**. The</div><div class="line"><a name="l00065"></a><span class="lineno"> 65</span>&#160;latest version can be found in the [wiki section](https://github.com/ARM-software/armnn/wiki/Documentation) of our GitHub</div><div class="line"><a name="l00066"></a><span class="lineno"> 66</span>&#160;repository.</div><div class="line"><a name="l00067"></a><span class="lineno"> 67</span>&#160;</div><div class="line"><a name="l00068"></a><span class="lineno"> 68</span>&#160;For FAQs and troubleshooting advice, see [FAQ.md](docs/FAQ.md)</div><div class="line"><a name="l00069"></a><span class="lineno"> 69</span>&#160;or take a look at previous [GitHub issues](https://github.com/ARM-software/armnn/issues).</div><div class="line"><a name="l00070"></a><span class="lineno"> 70</span>&#160;</div><div class="line"><a name="l00071"></a><span class="lineno"> 71</span>&#160;</div><div class="line"><a name="l00072"></a><span class="lineno"> 72</span>&#160;## Note</div><div class="line"><a name="l00073"></a><span class="lineno"> 73</span>&#160;1. The following tools are **deprecated** in Arm NN 21.02 and will be removed in 21.05:</div><div class="line"><a name="l00074"></a><span class="lineno"> 74</span>&#160; * TensorflowParser</div><div class="line"><a name="l00075"></a><span class="lineno"> 75</span>&#160; * CaffeParser</div><div class="line"><a name="l00076"></a><span class="lineno"> 76</span>&#160; * Quantizer</div><div class="line"><a name="l00077"></a><span class="lineno"> 77</span>&#160;</div><div class="line"><a name="l00078"></a><span class="lineno"> 78</span>&#160;2. Ubuntu Linux 16.04 LTS will no longer be supported after April 30, 2021.</div><div class="line"><a name="l00079"></a><span class="lineno"> 79</span>&#160; At that time, Ubuntu 16.04 LTS will no longer receive security patches or other software updates.</div><div class="line"><a name="l00080"></a><span class="lineno"> 80</span>&#160; Consequently, from the 21.08 release at the end of August 2021, Arm NN will no longer be officially</div><div class="line"><a name="l00081"></a><span class="lineno"> 81</span>&#160; supported on Ubuntu 16.04 LTS but will instead be supported on Ubuntu 18.04 LTS.</div><div class="line"><a name="l00082"></a><span class="lineno"> 82</span>&#160;</div><div class="line"><a name="l00083"></a><span class="lineno"> 83</span>&#160;3. We are currently in the process of removing [boost](https://www.boost.org/) as a dependency of Arm NN. This process</div><div class="line"><a name="l00084"></a><span class="lineno"> 84</span>&#160; is finished for everything apart from our unit tests. 
This means you don&#39;t need boost to build and use Arm NN, but</div><div class="line"><a name="l00085"></a><span class="lineno"> 85</span>&#160; you do need it to execute our unit tests. Boost will soon be removed from Arm NN entirely.</div><div class="line"><a name="l00086"></a><span class="lineno"> 86</span>&#160;</div><div class="line"><a name="l00087"></a><span class="lineno"> 87</span>&#160;</div><div class="line"><a name="l00088"></a><span class="lineno"> 88</span>&#160;## Contributions</div><div class="line"><a name="l00089"></a><span class="lineno"> 89</span>&#160;The Arm NN project welcomes contributions. For more details on contributing to Arm NN, see the [Contributing page](https://mlplatform.org/contributing/)</div><div class="line"><a name="l00090"></a><span class="lineno"> 90</span>&#160;on the [MLPlatform.org](https://mlplatform.org/) website, or see the [Contributor Guide](ContributorGuide.md).</div><div class="line"><a name="l00091"></a><span class="lineno"> 91</span>&#160;</div><div class="line"><a name="l00092"></a><span class="lineno"> 92</span>&#160;In particular, if you&#39;d like to implement your own backend alongside our CPU, GPU and NPU backends, there are guides for</div><div class="line"><a name="l00093"></a><span class="lineno"> 93</span>&#160;backend development:</div><div class="line"><a name="l00094"></a><span class="lineno"> 94</span>&#160;[Backend development guide](src/backends/README.md), [Dynamic backend development guide](src/dynamic/README.md)</div><div class="line"><a name="l00095"></a><span class="lineno"> 95</span>&#160;</div><div class="line"><a name="l00096"></a><span class="lineno"> 96</span>&#160;</div><div class="line"><a name="l00097"></a><span class="lineno"> 97</span>&#160;## Disclaimer</div><div class="line"><a name="l00098"></a><span class="lineno"> 98</span>&#160;The armnn/tests directory contains tests used during Arm NN development. Many of them depend on third-party IP, model</div><div class="line"><a name="l00099"></a><span class="lineno"> 99</span>&#160;protobufs and image files not distributed with Arm NN. The dependencies of some of the tests are available freely on</div><div class="line"><a name="l00100"></a><span class="lineno"> 100</span>&#160;the Internet, for those who wish to experiment, but they won&#39;t run out of the box.</div><div class="line"><a name="l00101"></a><span class="lineno"> 101</span>&#160;</div><div class="line"><a name="l00102"></a><span class="lineno"> 102</span>&#160;</div><div class="line"><a name="l00103"></a><span class="lineno"> 103</span>&#160;## License</div><div class="line"><a name="l00104"></a><span class="lineno"> 104</span>&#160;Arm NN is provided under the [MIT](https://spdx.org/licenses/MIT.html) license.</div><div class="line"><a name="l00105"></a><span class="lineno"> 105</span>&#160;See [LICENSE](LICENSE) for more information. 
Contributions to this project are accepted under the same license.</div><div class="line"><a name="l00106"></a><span class="lineno"> 106</span>&#160;</div><div class="line"><a name="l00107"></a><span class="lineno"> 107</span>&#160;Individual files contain the following tag instead of the full license text.</div><div class="line"><a name="l00108"></a><span class="lineno"> 108</span>&#160;</div><div class="line"><a name="l00109"></a><span class="lineno"> 109</span>&#160; SPDX-License-Identifier: MIT</div><div class="line"><a name="l00110"></a><span class="lineno"> 110</span>&#160;</div><div class="line"><a name="l00111"></a><span class="lineno"> 111</span>&#160;This enables machine processing of license information based on the SPDX License Identifiers that are available here: http://spdx.org/licenses/</div><div class="line"><a name="l00112"></a><span class="lineno"> 112</span>&#160;</div><div class="line"><a name="l00113"></a><span class="lineno"> 113</span>&#160;</div><div class="line"><a name="l00114"></a><span class="lineno"> 114</span>&#160;## Third-party</div><div class="line"><a name="l00115"></a><span class="lineno"> 115</span>&#160;Third-party tools used by Arm NN:</div><div class="line"><a name="l00116"></a><span class="lineno"> 116</span>&#160;</div><div class="line"><a name="l00117"></a><span class="lineno"> 117</span>&#160;| Tool | License (SPDX ID) | Description | Version | Provenance</div><div class="line"><a name="l00118"></a><span class="lineno"> 118</span>&#160;|----------------|-------------------|------------------------------------------------------------------|-------------|-------------------</div><div class="line"><a name="l00119"></a><span class="lineno"> 119</span>&#160;| cxxopts | MIT | A lightweight C++ option parser library | SHA 12e496da3d486b87fa9df43edea65232ed852510 | https://github.com/jarro2783/cxxopts</div><div class="line"><a name="l00120"></a><span class="lineno"> 120</span>&#160;| fmt | MIT | {fmt} is an open-source formatting library providing a fast and safe alternative to C stdio and C++ iostreams. | 7.0.1 | https://github.com/fmtlib/fmt</div><div class="line"><a name="l00121"></a><span class="lineno"> 121</span>&#160;| ghc | MIT | A header-only single-file std::filesystem compatible helper library | 1.3.2 | https://github.com/gulrak/filesystem</div><div class="line"><a name="l00122"></a><span class="lineno"> 122</span>&#160;| half | MIT | IEEE 754 conformant 16-bit half-precision floating point library | 1.12.0 | http://half.sourceforge.net</div><div class="line"><a name="l00123"></a><span class="lineno"> 123</span>&#160;| mapbox/variant | BSD | A header-only alternative to &#39;boost::variant&#39; | 1.1.3 | https://github.com/mapbox/variant</div><div class="line"><a name="l00124"></a><span class="lineno"> 124</span>&#160;| stb | MIT | Image loader, resizer and writer | 2.16 | https://github.com/nothings/stb</div></div><!-- fragment --></div><!-- contents -->
</div><!-- doc-content -->
<!-- start footer part -->
<div id="nav-path" class="navpath"><!-- id is needed for treeview function! -->
<ul>
<li class="navelem"><a class="el" href="_r_e_a_d_m_e_8md.xhtml">README.md</a></li>
- <li class="footer">Generated on Thu Feb 25 2021 17:27:54 for ArmNN by
+ <li class="footer">Generated on Fri Mar 19 2021 15:26:05 for ArmNN by
<a href="http://www.doxygen.org/index.html">
<img class="footer" src="doxygen.png" alt="doxygen"/></a> 1.8.13 </li>
</ul>
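
The README text shown in the diff above describes the parser-plus-PyArmNN workflow (load, optimize, execute) only in prose. Below is a minimal sketch of that flow in Python using the PyArmNN bindings and the TfLite parser; the model path, input shape, dtype and backend list are placeholders, and the exact API surface may differ between Arm NN releases.

```python
import numpy as np
import pyarmnn as ann

# Parse the TfLite model into an Arm NN network ('model.tflite' is a placeholder).
parser = ann.ITfLiteParser()
network = parser.CreateNetworkFromBinaryFile('model.tflite')

# Create a runtime and optimise the network for the preferred backends,
# falling back to the reference backend for unsupported operations.
runtime = ann.IRuntime(ann.CreationOptions())
preferred_backends = [ann.BackendId('CpuAcc'), ann.BackendId('CpuRef')]
opt_network, _ = ann.Optimize(network, preferred_backends,
                              runtime.GetDeviceSpec(), ann.OptimizerOptions())
net_id, _ = runtime.LoadNetwork(opt_network)

# Bind the first input and output tensors of subgraph 0.
graph_id = 0
input_name = parser.GetSubgraphInputTensorNames(graph_id)[0]
input_binding = parser.GetNetworkInputBindingInfo(graph_id, input_name)
output_name = parser.GetSubgraphOutputTensorNames(graph_id)[0]
output_binding = parser.GetNetworkOutputBindingInfo(graph_id, output_name)

# Placeholder input; a real application would load and pre-process data
# matching the model's actual input shape and dtype.
input_data = np.zeros((1, 224, 224, 3), dtype=np.float32)
input_tensors = ann.make_input_tensors([input_binding], [input_data])
output_tensors = ann.make_output_tensors([output_binding])

# Run inference and convert the outputs back to NumPy arrays.
runtime.EnqueueWorkload(net_id, input_tensors, output_tensors)
results = ann.workload_tensors_to_ndarray(output_tensors)
```

The backend list is a preference order: 'CpuAcc' targets the NEON-accelerated CPU backend, while 'CpuRef' is the portable reference backend that catches anything the accelerated backend cannot handle.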
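The README also recommends the TfLite Delegate as the preferred way to accelerate TfLite models, but does not show how it is wired into the TfLite runtime. The sketch below assumes the delegate has been built as a shared library; the library path is a placeholder, and the option keys ("backends", "logging-severity") follow the Arm NN delegate documentation and may vary between releases.

```python
import numpy as np
import tflite_runtime.interpreter as tflite

# Load the Arm NN delegate as an external TfLite delegate.
# '/path/to/libarmnnDelegate.so' is a placeholder for the library built from armnn.
armnn_delegate = tflite.load_delegate(
    library='/path/to/libarmnnDelegate.so',
    options={'backends': 'CpuAcc,GpuAcc,CpuRef',   # backend preference order
             'logging-severity': 'info'})

# Operators the delegate can accelerate run through Arm NN; everything else
# stays on the stock TfLite runtime.
interpreter = tflite.Interpreter(model_path='model.tflite',
                                 experimental_delegates=[armnn_delegate])
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Placeholder input matching the model's declared shape and dtype.
input_data = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
result = interpreter.get_tensor(output_details[0]['index'])
```

Because the delegate partitions the graph at runtime, the same application code keeps working even for models containing operators Arm NN does not support; those simply fall back to TfLite.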