author    Kevin May <kevin.may@arm.com>    2021-04-28 16:16:22 +0100
committer Kevin May <kevin.may@arm.com>    2021-04-29 12:38:01 +0000
commit    eb03e0fa0f6e8d133efe7d54831ad70da9431874 (patch)
tree      1cc0b63450ba58a570974c7d3feba7b53cf3f8eb /README.md
parent    a04a9d7c11f28c7e932435535e80223782f369f2 (diff)
download  armnn-eb03e0fa0f6e8d133efe7d54831ad70da9431874.tar.gz
IVGCVSW-5744 Remove Tensorflow, Caffe and Quantizer from documentation
* Remove from .md files and Doxygen
* Remove from armnn/docker build
* Remove Tensorflow model format from ExecuteNetworkParams
* Remove Tensorflow model format from ImageTensorGenerator

Signed-off-by: Kevin May <kevin.may@arm.com>
Change-Id: Id6ed4a7d90366c396e8e0395d0ce43a3bcddcee6
Diffstat (limited to 'README.md')
-rw-r--r--  README.md  4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/README.md b/README.md
index cfa981309a..4d65b8f2ce 100644
--- a/README.md
+++ b/README.md
@@ -20,7 +20,7 @@ Arm NN SDK utilizes the Compute Library to target programmable cores, such as Co
as efficiently as possible. To target Ethos NPUs the NPU-Driver is utilized. We also welcome new contributors to provide
their [own driver and backend](src/backends/README.md). Note, Arm NN does not provide support for Cortex-M CPUs.
-The latest release supports models created with **Caffe**, **TensorFlow**, **TensorFlow Lite** (TfLite) and **ONNX**.
+The latest release supports models created with **TensorFlow Lite** (TfLite) and **ONNX**.
Arm NN analyses a given model and replaces the operations within it with implementations particularly designed for the
hardware you want to execute it on. This results in a great boost of execution speed. How much faster your neural
network can be executed depends on the operations it contains and the available hardware. Below you can see the speedup
@@ -33,7 +33,7 @@ to build for a wide variety of target platforms, from a wide variety of host env
## Getting started: Software tools overview
-Depending on what kind of framework (Tensorflow, Caffe, ONNX) you've been using to create your model there are multiple
+Depending on what kind of framework (Tensorflow Lite, ONNX) you've been using to create your model there are multiple
software tools available within Arm NN that can serve your needs.
Generally, there is a **parser** available **for each supported framework**. Each parser allows you to run models from
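To illustrate the parser-based workflow the README describes (this sketch is not part of the commit above), loading a TensorFlow Lite model through armnnTfLiteParser might look roughly like the following. The model path and backend choice are placeholders, and error handling is omitted.

```cpp
// Minimal sketch: parse a .tflite model, optimise it for a backend, and load
// it into the Arm NN runtime. "model.tflite" and CpuAcc are example choices.
#include <armnn/ArmNN.hpp>
#include <armnnTfLiteParser/ITfLiteParser.hpp>

#include <utility>

int main()
{
    // Parse the TensorFlow Lite file into an Arm NN network graph
    armnnTfLiteParser::ITfLiteParserPtr parser = armnnTfLiteParser::ITfLiteParser::Create();
    armnn::INetworkPtr network = parser->CreateNetworkFromBinaryFile("model.tflite");

    // Create a runtime and optimise the network for the chosen backend
    armnn::IRuntime::CreationOptions options;
    armnn::IRuntimePtr runtime = armnn::IRuntime::Create(options);
    armnn::IOptimizedNetworkPtr optNet = armnn::Optimize(*network,
                                                         {armnn::Compute::CpuAcc},
                                                         runtime->GetDeviceSpec());

    // Load the optimised network; input/output tensors would then be bound
    // and EnqueueWorkload called to run inference.
    armnn::NetworkId networkId;
    runtime->LoadNetwork(networkId, std::move(optNet));
    return 0;
}
```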