If the input model for rewriting is quantized:
- Record de-quantized TFRecords
- Enable writing de-quantized calibration data for the training
- Re-generate augmented training data, if needed
- Use quantization-aware training (QAT) to train the replacement models
- Check whether the replacement model is quantized:
If the source model is quantized, make sure the rewrite's output model
is quantized too. Currently only int8 is supported, so an error is
raised if any other data type is present in the output.
Resolves: MLIA-907, MLIA-908, MLIA-927
Signed-off-by: Benjamin Klimczak <benjamin.klimczak@arm.com>
Change-Id: Icb4070a9e6f1fdb5ce36120d73823986e89ac955
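The output-model check described above could be sketched roughly as follows. This is a minimal illustration, not MLIA's actual implementation: the helper name `check_output_quantization` and the string dtype representation are assumptions, standing in for inspecting the rewritten TFLite model's tensor details.

```python
# Only int8 output quantization is currently supported (hypothetical set).
SUPPORTED_QUANT_DTYPES = {"int8"}


def check_output_quantization(input_quantized: bool, output_dtypes: set) -> None:
    """If the source model was quantized, require the rewrite's output
    model to use only int8 quantization; raise on any other dtype."""
    if not input_quantized:
        return  # nothing to enforce for float models
    unsupported = output_dtypes - SUPPORTED_QUANT_DTYPES
    if unsupported:
        raise ValueError(
            f"Unsupported quantization dtype(s) in output model: "
            f"{sorted(unsupported)}"
        )


check_output_quantization(True, {"int8"})  # passes silently
try:
    check_output_quantization(True, {"int8", "uint8"})
except ValueError as err:
    print(err)
```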
|
- Fix imports
- Update variable names
- Refactor helper functions
- Add license headers
- Add docstrings
- Use f-strings rather than % notation
- Create type annotations in rewrite module
- Migrate from tqdm to rich progress bar
- Use the logging module in the rewrite module: all print statements
are replaced with logging calls
Resolves: MLIA-831, MLIA-842, MLIA-844, MLIA-846
Signed-off-by: Benjamin Klimczak <benjamin.klimczak@arm.com>
Change-Id: Idee37538d72b9f01128a894281a8d10155f7c17c
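The print-to-logging migration above amounts to routing messages through a module-level logger instead of calling print() directly. A minimal sketch; the logger name "mlia.nn.rewrite" and the helper are assumptions, not taken from the source:

```python
import logging

# Assumed module-level logger name for the rewrite module.
logger = logging.getLogger("mlia.nn.rewrite")


def log_training_step(step: int, total: int) -> None:
    """Report training progress via the logging module."""
    # Old style: print(f"Training step {step}/{total}")
    logger.info("Training step %d/%d", step, total)
```

Note that logger calls conventionally keep %-style deferred formatting (arguments are only interpolated if the record is actually emitted), so the "f-strings rather than % notation" change applies to ordinary string building, not to the logging calls themselves.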
|
- Add required files for rewriting of TensorFlow Lite graphs
- Adapt rewrite dependency paths and project name
- Add license headers
Change-Id: I19c5f63215fe2af2fa7d7d44af08144c6c5f911c
Signed-off-by: Benjamin Klimczak <benjamin.klimczak@arm.com>