Diffstat (limited to 'README.md')
-rw-r--r--  README.md  | 239
1 file changed, 196 insertions(+), 43 deletions(-)
diff --git a/README.md b/README.md
index 8d89e19..f22e4ff 100644
--- a/README.md
+++ b/README.md
@@ -1,7 +1,6 @@
-TOSA Reference Model
-=============
+# TOSA Reference Model
-# Introduction
+## Introduction
The *Tensor Operator Set Architecture (TOSA) Specification
<https://git.mlplatform.org/tosa/specification.git/>* is a set of operators
@@ -19,7 +18,7 @@ nodes in NumPy format. By default, the model validates and evalutes
the network subgraph, and writes out the resulting output tensors in
NumPy format.
-# Installation Requirements
+## Installation Requirements
The *TOSA Reference Model* and testing suite requires the following
tools:
@@ -30,6 +29,7 @@ tools:
with C++17 support
The model includes the following git submodules:
+
* TOSA Serialization Library
* JSON for Modern C++ - 3.8.0
* Eigen 3.3.7
@@ -39,8 +39,9 @@ C++17 and has been primarily tested on Ubuntu x86_64 18.04 LTS Linux
systems.
The testing infrastructure requires:
+
* Python 3.6 or later
-* TensorFlow 2.3 or later
+* FlatBuffers 2.0 or later
* NumPy 1.15 or later
Check out the required git submodules with:
@@ -49,7 +50,23 @@ Check out the required git submodules with:
git submodule update --init --recursive
```
-# Compilation
+### Versioning
+
+The *TOSA Reference Model* repository has branches (major.minor) and tags
+(major.minor.patch) that correspond to each TOSA version. The `main` branch is
+used as the active development branch for the next version.
+
+Check out a specific version before compiling the model or installing the
+test infrastructure by using:
+
+```bash
+git checkout --recurse-submodules VERSION
+```
+
+where `VERSION` is, for example, `v0.23` or `v0.23.0`.
+
+
+## Compilation
The *TOSA Reference Model* build can be prepared by creating makefiles using CMake:
@@ -72,7 +89,7 @@ if the build environment changes (e.g., new dependencies or source
files). Code changes that do not affect these build rules can be
rebuilt simply using `make`.
-# Usage
+## Usage
The inputs to the *TOSA Reference Model* consist of a FlatBuffers file
containing the serialized subgraph, a JSON test descriptor that describes
@@ -140,7 +157,7 @@ FlatBuffers schema file from the TOSA Serialization library must be
specified using -Coperator_fbs=. When using the binary FlatBuffers
format (.tosa), the schema is not necessary.
-## Examples
+### Examples
The TOSA Reference Model distribution contains several example
networks with inputs and reference outputs generated by
@@ -153,14 +170,14 @@ may cause small differences in output for floating-point tests and
differences in quantized scaling between TensorFlow Lite and the TOSA
Specification may cause differences in quantized integer tests.
-# Debugging
+## Debugging
The debugging facility can be enabled by setting a debug scope and
debug level on the command line. For most purposes, the following
flags will work: `-dALL -lHIGH`. Debug output can be directed to a
file using the `-o` switch.
-# TOSA Unit Test Infrastructure
+## TOSA Unit Test Infrastructure
The TOSA Unit Test infrastructure builds and runs self-contained tests
for implementations of the *Tensor Operator Set Architecture (TOSA)
@@ -168,32 +185,37 @@ Specification*. These tools directly generate TOSA operators for
verification of the TOSA reference model against existing frameworks
or other operator implementations.
-The test builder tool generates tests with random arguments and
-reference inputs for each TOSA operator. Currently, the test builder
-focuses on generating a wide range of legal arguments to each
-operator, but it also has limited support for generating tests with
-illegal arguments in order to make sure such usages are properly
-detected.
+The test builder tool by default generates positive tests with random
+arguments and reference inputs for each TOSA operator. Positive tests
+are expected to run without error and usually produce a result (some
+control flow operators may not produce a result).
+The test builder can also generate negative tests for all the ERROR_IF
+conditions within the TOSA Specification by using the `--test-type`
+option. Negative tests may contain invalid arguments or inputs and
+are expected to run and fail without producing a result. Other errors
+or unpredictable results are handled in a system-dependent way and
+are not tested by the test builder tool.
The unit tests are typically structured as a combination of input
placeholder nodes, const nodes, and attributes feeding into a single
TOSA operator. The unit tests use a Python copy of the FlatBuffers
-schema written by ``flatc`` to verif/tosa.
+schema written by `flatc` to verify tosa.
Each test has a JSON file which provides machine-readable metadata for
-the test, including the .tosa flatbuffer file, names, shapes, and
+the test, including the TOSA flatbuffer file, names, shapes, and
NumPy filenames for each input and output tensor. There is also a
boolean value for whether a failure is expected because the test is
expected to trigger an invalid set of operands or attributes.
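As an illustration, a descriptor for a single-input test might look like the following (the key names shown are assumptions for illustration; the exact schema is defined by the test builder):

```json
{
    "tosa_file": "test.tosa",
    "ifm_name": ["placeholder_0"],
    "ifm_file": ["placeholder_0.npy"],
    "ofm_name": ["result_0"],
    "ofm_file": ["result_0.npy"],
    "expected_failure": false
}
```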
-The test runner tool executes the unit tests on the TOSA Reference
-Model to generate reference output tensor values (for legal tests).
-The test runner is a modular tool which can be exended to run the same
-tests on additional tools or frameworks. The reference output NumPy
-files are generated by this step and can be programatically compared
-with output of other tools. to validate those tools.
+The test runner tool can execute the unit tests on the TOSA Reference
+Model to generate reference output tensor values (for positive tests).
+The test runner is a modular tool which can be extended to run the same
+tests on additional tools or frameworks - such a tool or framework is
+called a System Under Test (SUT).
+The reference output NumPy files generated by this step can be
+programmatically compared with the output of SUTs to validate them.
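As a minimal sketch of that comparison step (the filenames here are hypothetical; the bundled `tosa_verif_result_check` utility automates this):

```python
import numpy as np

# Hypothetical reference result and SUT result files.
reference = np.array([[0.1, 0.2], [0.3, 0.4]], dtype=np.float32)
np.save("ref_result_0.npy", reference)
np.save("sut_result_0.npy", reference.copy())

# Compare the SUT output against the reference output within a tolerance,
# since floating-point results may differ slightly between implementations.
ref = np.load("ref_result_0.npy")
sut = np.load("sut_result_0.npy")
print("PASS" if np.allclose(ref, sut, rtol=1e-5) else "FAIL")  # prints: PASS
```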
-## Installation
+### Installation
The test infrastructure needs installing before being used. It is recommended
to create a [python virtual environment](https://docs.python.org/3/library/venv.html)
@@ -207,52 +229,183 @@ pip install .
When installing without a python virtual environment, use the pip
option `--user` to install it for the current user only.
-## Usage
### Unit Test Builder
-The test builder is invoked by ``tosa_verif_build_tests``. The
-builder generates test outputs in ``./vtest/<operator_name>/`` by
+The test builder is invoked by `tosa_verif_build_tests`. The
+builder generates test outputs in `./vtest/<operator_name>/` by
default. To restrict test generation to a particular regular expression
-wildcard, use the ``--filter `` argument. The tool can be run with no
+wildcard, use the `--filter` argument. The tool can be run with no
arguments to generate all tests.
Inputs and certain attributes are created using a random number
generator, while others are exhaustive (within reasonable bounds)
where the combinatorics allow exhaustive tests. The test generation
is deterministic for a given random seed, but additional tests can be
-generated using ``--seed``. As many corner-case error are often
+generated using `--seed`. As many corner-case errors are often
uncovered using creative tensor shapes, the random seed parameter will
help get coverage of additional shapes.
+By default, only positive tests will be produced; use the
+argument `--test-type both` to build both positive and negative tests.
+
Additional parameters on some operators can be found in the command
line help.
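Putting those options together, a hypothetical invocation that generates both positive and negative tests for a single operator with a non-default random seed might be:

```bash
tosa_verif_build_tests --filter add --seed 42 --test-type both
```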
### Unit Test Runner
The unit test running script takes self-contained unit tests from the
-builder and runs them on the reference model. Shell wildcards can be
-used to run more than one test at a time and tests can be run in
-parallel using the ``-j`` switch. For example, to run all of the
-add operator tests:
+builder and runs them on the reference model or on a System Under
+Test.
+
+#### Selecting tests
+
+The `--test` or `-t` option is used to specify a directory containing
+a test. Shell wildcards can be used to run more than one test at a time.
+Tests will be run sequentially by default, but you may control how
+many tests are run in parallel using the `--jobs` or `-j` switch.
+
+For example, to run all of the TOSA add operator tests on the reference
+model, eight at a time:
``` bash
-tosa_verif_run_ref -t vtest/add/add* -j 8
+tosa_verif_run_tests -t vtest/add/add* -j 8
```
-The test runner is quiet by default, so running a large number of
-tests without any obvious errors will show no output while the tests
-are running. The ``-v`` switch will show the command being run in the
+The default location that is used for the reference model is
+`reference_model/build/reference_model/tosa_reference_model`; use the
+`--ref-model-path` option if the reference model is located elsewhere.
+
+You can also supply a list of tests in a file, one per line, using the
+`--test-list` or `-T` option.
+
+Finally, you can choose the type of test to run - positive, negative, or both
+(default) - using the `--test-type` option. To run only the positive tests:
+
+```bash
+tosa_verif_run_tests --test-type positive -t vtest/*/*
+```
+
+#### Verbosity
+
+The test runner is reasonably quiet by default, so running a large number of
+tests without any obvious errors will show only one line of output per test
+completion. The `-v` switch will show the commands being run in the
background.
+#### Debugging
+
To enable debugging on the reference model, shortcut commands have
-been provided: ``--ref-debug=high`` and ``--ref-intermediates`` to
+been provided: `--ref-debug=high` and `--ref-intermediates` to
turn on debugging and dump intermediate tensor values.
+### Systems Under Test
+
Additional Systems Under Test (SUTs), such as reference
-implementations of operators, full frameworks, etc, can be defined by
-extending the TosaTestRunner class. The SUTs can then be enabled by
-using the ``--sut-module`` flag.
+implementations of operators, full frameworks, and hardware implementations
+can be tested by the test runner.
+
+To do this you need to define an SUT module by extending the
+`TosaTestRunner` class found in `verif/runner/tosa_test_runner.py`, and
+then supplying this to the TOSA Test Runner.
+
+#### SUT inputs and outputs
+
+Each test includes a `desc.json` file containing input and output filename
+information, which is read and supplied to the `TosaTestRunner` class.
+
+A TOSA System Under Test will need to be able to read the following input files:
+
+* TOSA FlatBuffers (either JSON or binary format) - use the TOSA
+ Serialization Library (<https://git.mlplatform.org/tosa/serialization_lib.git>)
+ to do this.
+* Tensors from python numpy array files - see the
+ [numpy documentation](https://numpy.org/doc/stable/reference/generated/numpy.load.html)
+ for more information. Use the TOSA Serialization Library to help
+ (see the link above).
+
+Use the `TosaTestRunner` class to convert these test artifacts
+into another format before giving them to your SUT.
+
+For positive tests, your SUT should usually produce at least one results
+file containing the resulting tensor in numpy format. The expected
+filenames are supplied in the `desc.json` information.
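For example, the numpy side of that contract can be sketched as follows (the filenames are hypothetical; real names come from `desc.json`):

```python
import numpy as np

# Hypothetical input file, as the test builder would provide it.
np.save("placeholder_0.npy", np.arange(6, dtype=np.float32).reshape(2, 3))

# An SUT loads each input tensor named in desc.json...
tensor = np.load("placeholder_0.npy")

# ...runs the TOSA subgraph on it (stubbed here as an identity operation)...
result = tensor

# ...and writes the result tensor back out in numpy format.
np.save("result_0.npy", result)
print(np.load("result_0.npy").shape)  # prints: (2, 3)
```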
-# License
+#### TosaTestRunner class
+
+Your python class extending the `TosaTestRunner` class must contain:
+
+* `__init__(self,...)` function that calls the superclass `__init__()` function.
+* `runTestGraph(self)` function that invokes your SUT and then translates the
+ return code into a `TosaTestRunner.TosaGraphResult`. It returns this result
+ and an optional error message.
+
+Examples of implementations can be found:
+
+* `verif/runner/tosa_refmodel_sut_run.py` - the reference model
+* `verif/tests/tosa_mock_sut_run.py` - mock version for testing
+
+There is a helper function `run_sh_command` in `verif/runner/run_command.py`
+that provides a robust way of calling shell commands from python; it can
+be used to invoke your SUT.
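A minimal sketch of such a module is shown below. The base class here is a stand-in stub so the sketch is self-contained; a real SUT module would instead import `TosaTestRunner` from `verif/runner/tosa_test_runner.py`, and the result enum values and constructor signature shown are illustrative assumptions, not the library's exact API.

```python
import enum

class TosaTestRunner:  # stand-in stub for the real base class
    class TosaGraphResult(enum.Enum):  # illustrative result values
        TOSA_VALID = 0
        TOSA_ERROR = 1

    def __init__(self, args, runnerArgs, testDir):
        self.args = args
        self.runnerArgs = runnerArgs
        self.testDir = testDir

class MySutRunner(TosaTestRunner):
    """Hypothetical SUT module wrapping an external executable."""

    def __init__(self, args, runnerArgs, testDir):
        super().__init__(args, runnerArgs, testDir)

    def runTestGraph(self):
        # Invoke the SUT here (e.g. via a shell-command helper) and
        # translate its return code into a TosaGraphResult.
        returncode = 0  # pretend the SUT ran successfully
        if returncode == 0:
            return TosaTestRunner.TosaGraphResult.TOSA_VALID, None
        return TosaTestRunner.TosaGraphResult.TOSA_ERROR, "SUT failed"

runner = MySutRunner(None, [], "vtest/add/add_example")
graph_result, message = runner.runTestGraph()
print(graph_result.name)  # prints: TOSA_VALID
```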
+
+#### Testing with the Unit Test Runner
+
+The SUT can then be supplied to the test runner by using the `--sut-module`
+flag. The following invokes the reference model as the SUT (the default
+behaviour when the flag is not supplied):
+
+```bash
+tosa_verif_run_tests --sut-module runner.tosa_refmodel_sut_run -t TEST
+```
+
+You can also pass arguments to your SUT. For example, this
+will pass an argument called `ARGUMENT` with a value of `VALUE`
+to the `mysut.tosa_mysut_sut_run` module:
+
+```bash
+tosa_verif_run_tests --sut-module mysut.tosa_mysut_sut_run \
+ --sut-module-args mysut.tosa_mysut_sut_run:ARGUMENT=VALUE \
+ -t TEST
+```
+
+You can repeat this switch multiple times to pass multiple different arguments.
+
+For an example of how to read these arguments in your SUT module, please see the
+`tosa_mock_sut_run.py` file.
+
+
+## Other tools
+
+Included in this repository are some support utilities used by the test runner:
+
+* `json2numpy` - converts a JSON file to a numpy array file, or the reverse.
+* `json2fbbin` - converts from JSON flatbuffer format to flatbuffer
+ binary format or the reverse operation. This is dependent on the FlatBuffers
+ command `flatc` - see the section on the FlatBuffers compiler below.
+* `tosa_verif_result_check` - compares two result files.
+
+Please see the respective `--help` of each utility for more information on using
+them standalone.
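Conceptually, the JSON-to-numpy conversion is a small round trip like the following (the JSON layout shown is a simplifying assumption, not the exact format `json2numpy` uses):

```python
import json
import numpy as np

# Write a tensor out as JSON (hypothetical layout: dtype plus nested lists).
with open("tensor.json", "w") as f:
    json.dump({"type": "float32", "data": [[1.0, 2.0], [3.0, 4.0]]}, f)

# Convert the JSON back into a numpy array file.
with open("tensor.json") as f:
    desc = json.load(f)
np.save("tensor.npy", np.array(desc["data"], dtype=desc["type"]))

arr = np.load("tensor.npy")
print(arr.dtype, arr.shape)  # prints: float32 (2, 2)
```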
+
+### FlatBuffers compiler
+
+The FlatBuffers compiler tool (`flatc`) is only needed if you want to use
+`json2fbbin` to convert the TOSA flatbuffer binary test files (`.tosa`) to JSON
+or from JSON to binary.
+It is best to use the FlatBuffers version that comes with the reference model.
+After following the reference model compilation instructions, you can build
+the FlatBuffers tool using:
+
+``` bash
+# After compiling the reference model (in the build directory)
+cd thirdparty/serialization_lib/third_party/flatbuffers
+make
+```
+
+
+## License
The *TOSA Reference Model* and TOSA Unit Tests are licensed under Apache-2.0.
+
+Copyright (c) 2020-2022 Arm Limited.
+