From eb03e0fa0f6e8d133efe7d54831ad70da9431874 Mon Sep 17 00:00:00 2001
From: Kevin May
Date: Wed, 28 Apr 2021 16:16:22 +0100
Subject: IVGCVSW-5744 Remove Tensorflow, Caffe and Quantizer from documentation

* Remove from .md files and Doxygen
* Remove from armnn/docker build
* Remove Tensorflow model format from ExecuteNetworkParams
* Remove Tensorflow model format from ImageTensorGenerator

Signed-off-by: Kevin May
Change-Id: Id6ed4a7d90366c396e8e0395d0ce43a3bcddcee6
---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

(limited to 'README.md')

diff --git a/README.md b/README.md
index cfa981309a..4d65b8f2ce 100644
--- a/README.md
+++ b/README.md
@@ -20,7 +20,7 @@ Arm NN SDK utilizes the Compute Library to target programmable cores, such as Co
 as efficiently as possible. To target Ethos NPUs the NPU-Driver is utilized. We also welcome new contributors to provide
 their [own driver and backend](src/backends/README.md). Note, Arm NN does not provide support for Cortex-M CPUs.
 
-The latest release supports models created with **Caffe**, **TensorFlow**, **TensorFlow Lite** (TfLite) and **ONNX**.
+The latest release supports models created with **TensorFlow Lite** (TfLite) and **ONNX**.
 Arm NN analysis a given model and replaces the operations within it with implementations particularly designed for the hardware
 you want to execute it on. This results in a great boost of execution speed. How much faster your neural network can be
 executed depends on the operations it contains and the available hardware. Below you can see the speedup
@@ -33,7 +33,7 @@ to build for a wide variety of target platforms, from a wide variety of host env
 
 ## Getting started: Software tools overview
 
-Depending on what kind of framework (Tensorflow, Caffe, ONNX) you've been using to create your model there are multiple
+Depending on what kind of framework (Tensorflow Lite, ONNX) you've been using to create your model there are multiple
 software tools available within Arm NN that can serve your needs.
 
 Generally, there is a **parser** available **for each supported framework**. Each parser allows you to run models from
--
cgit v1.2.1