authorNina Drozd <nina.drozd@arm.com>2019-05-21 11:17:10 +0100
committerNina Drozd <nina.drozd@arm.com>2019-05-21 12:29:23 +0100
commit997dd8c4fc5a3fef5dd1d6bce829d4d241e4becb (patch)
tree3f19266106df4db306a45828d353b18724adc79c
parent825af454a1df237dc3b5e4996ed85c71daa72284 (diff)
downloadarmnn-997dd8c4fc5a3fef5dd1d6bce829d4d241e4becb.tar.gz
IVGCVSW-3088 Update Readme for 19.05
* Added Readme file for ArmnnQuantizer
* Added section about ArmnnQuantizer in armnn Readme file
* Updated ModelAccuracyTool Readme file with default values for --compute

Signed-off-by: Nina Drozd <nina.drozd@arm.com>
Change-Id: I5fcead522b70086dcf63dfc6c77910a7d33d83f0
-rw-r--r--README.md5
-rw-r--r--src/armnnQuantizer/README.md18
-rw-r--r--tests/ModelAccuracyTool-Armnn/README.md2
3 files changed, 23 insertions, 2 deletions
diff --git a/README.md b/README.md
index c55c504986..a23d1e96a2 100644
--- a/README.md
+++ b/README.md
@@ -28,7 +28,10 @@ The 'armnn/samples' directory contains SimpleSample.cpp. A very basic example of
The 'ExecuteNetwork' program, in armnn/tests/ExecuteNetwork, has no additional dependencies beyond those required by Arm NN and the model parsers. It takes any model and any input tensor, and simply prints out the output tensor. Run with no arguments to see command-line help.
-The 'ArmnnConverter' program, in armnn/src/ArmnnConverter, has no additional dependencies beyond those required by Arm NN and the model parsers. It takes a model in TensorFlow format and produces a serialized model in Arm NN format. Run with no arguments to see command-line help. Note that this program can only convert models for which all operations are supported by the serialization tool (src/armnnSerializer).
+The 'ArmnnConverter' program, in armnn/src/armnnConverter, has no additional dependencies beyond those required by Arm NN and the model parsers. It takes a model in TensorFlow format and produces a serialized model in Arm NN format. Run with no arguments to see command-line help. Note that this program can only convert models for which all operations are supported by the serialization tool (src/armnnSerializer).
+
+The 'ArmnnQuantizer' program, in armnn/src/armnnQuantizer, has no additional dependencies beyond those required by Arm NN and the model parsers. It takes a 32-bit float network and converts it into a quantized asymmetric 8-bit or quantized symmetric 16-bit network.
+Static quantization is supported by default, but dynamic quantization can be enabled if a CSV file of raw input tensors is specified. Run with no arguments to see command-line help.
Note that Arm NN needs to be built against a particular version of Arm's Compute Library. The get_compute_library.sh script in the scripts subdirectory will clone the Compute Library from the review.mlplatform.org repository into a directory alongside armnn named 'clframework' and check out the correct revision.
diff --git a/src/armnnQuantizer/README.md b/src/armnnQuantizer/README.md
new file mode 100644
index 0000000000..cad382078c
--- /dev/null
+++ b/src/armnnQuantizer/README.md
@@ -0,0 +1,18 @@
+# The ArmnnQuantizer
+
+The `ArmnnQuantizer` is a program for loading a 32-bit float network into ArmNN and converting it into a quantized asymmetric 8-bit or quantized symmetric 16-bit network.
+It supports static quantization by default; dynamic quantization is enabled if a CSV file of raw input tensors is provided. Run the program with no arguments to see command-line help.
+
+
+| Cmd: | | |
+| ---|---|---|
+| -h | --help | Display help messages |
+| -f | --infile | Input file containing a 32-bit float ArmNN input graph |
+| -s | --scheme | Quantization scheme, "QAsymm8" or "QSymm16". Default value: QAsymm8 |
+| -c | --csvfile | CSV file containing paths for raw input tensors for dynamic quantization. If unset, static quantization is used |
+| -p | --preserve-data-type | Preserve the input and output data types. If unset, input and output data types are not preserved |
+| -d | --outdir | Directory that output file will be written to |
+| -o | --outfile | ArmNN output file name |
+
+Example usage: <br>
+<code>./ArmnnQuantizer -f /path/to/armnn/input/graph/ -s "QSymm16" -c /path/to/csv/file -p 1 -d /path/to/output -o outputFileName</code> \ No newline at end of file
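The example above exercises dynamic quantization (via `-c`). For comparison, a static-quantization run simply omits the `--csvfile` option; the sketch below is illustrative only, with placeholder paths and the default QAsymm8 scheme spelled out explicitly:

```shell
# Static quantization to QAsymm8 (the default when --csvfile is omitted).
# All paths below are placeholders; adjust to your build and model locations.
./ArmnnQuantizer -f /path/to/armnn/input/graph -s "QAsymm8" -d /path/to/output -o outputFileName
```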
diff --git a/tests/ModelAccuracyTool-Armnn/README.md b/tests/ModelAccuracyTool-Armnn/README.md
index 2a8c977968..d6334caa5d 100644
--- a/tests/ModelAccuracyTool-Armnn/README.md
+++ b/tests/ModelAccuracyTool-Armnn/README.md
@@ -11,7 +11,7 @@ images to this format.
| ---|---|---|
| -h | --help | Display help messages |
| -m | --model-path | Path to armnn format model file |
-| -c | --compute | Which device to run layers on by default. Possible choices: CpuRef, CpuAcc, GpuAcc |
+| -c | --compute | Which device to run layers on by default. Possible choices: CpuRef, CpuAcc, GpuAcc. Default: CpuAcc, CpuRef |
| -d | --data-dir | Path to directory containing the ImageNet test data |
| -i | --input-name | Identifier of the input tensors in the network separated by comma |
| -o | --output-name | Identifier of the output tensors in the network separated by comma |