From 1156317a8a8df1bc0c25a54db8475d84495f1a79 Mon Sep 17 00:00:00 2001
From: Fredrik Svedberg
Date: Wed, 6 Jul 2022 14:54:12 +0200
Subject: MLBEDSW-6703 Add SHAPE operator to supported operators

Added SHAPE operator to the supported operators report.
Updated the constraints for QUANTIZE and SHAPE operator.
Also fixed RESHAPE consuming statically optimised shape.

Signed-off-by: Fredrik Svedberg
Change-Id: I1d964d602d3f361a0f16dae8133197280dd84c48
---
 SUPPORTED_OPS.md | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

(limited to 'SUPPORTED_OPS.md')

diff --git a/SUPPORTED_OPS.md b/SUPPORTED_OPS.md
index 83429b7a..d16d5f8e 100644
--- a/SUPPORTED_OPS.md
+++ b/SUPPORTED_OPS.md
@@ -1,7 +1,7 @@
 # Supported Ops
 
 This file was automatically generated by Vela using the `--supported-ops-report` parameter.
-Vela version: `3.4.0rc3.dev1+g5e0ae55`
+Vela version: `3.4.1.dev3+g5c30971e`
 
 This file complies with
 [**Gitiles Markdown syntax**](https://github.com/google/gitiles/blob/master/Documentation/markdown.md)
@@ -42,6 +42,7 @@ Please check the supported operator list for your chosen runtime for further inf
 | RELU_N1_TO_1 | [Generic](#tflite-generic-constraints) |
 | RESHAPE | [Generic](#tflite-generic-constraints), [Specific](#tflite-reshape-constraints) |
 | RESIZE_BILINEAR | [Generic](#tflite-generic-constraints), [Specific](#tflite-resize_bilinear-constraints) |
+| SHAPE | [Generic](#tflite-generic-constraints) |
 | SLICE | [Generic](#tflite-generic-constraints) |
 | SOFTMAX | [Generic](#tflite-generic-constraints), [Specific](#tflite-softmax-constraints) |
 | SPLIT | [Generic](#tflite-generic-constraints) |
@@ -61,14 +62,14 @@ This is a list of constraints most NPU operators must satisfy in order to be sch
 - Input(s) and Output tensors must not be dynamic - [Quantize]
 - Input(s) and Output tensors must have a defined shape
 - Output tensors cannot be scalar - [Quantize]
-- Scalar Input tensors are only valid for op type: ADD, EXPAND_DIMS, MAXIMUM, MEAN, MINIMUM, MUL, SPLIT, SPLIT_V, SUB - [Quantize]
+- Scalar Input tensors are only valid for op type: ADD, EXPAND_DIMS, MAXIMUM, MEAN, MINIMUM, MUL, QUANTIZE, SPLIT, SPLIT_V, SUB
 - Input(s) and Output tensors must not be greater than 4D
 - Input(s), Output and Weight tensors must have quantization parameters - [Shape]
-- Input(s), Output and Weight tensors with quantization scales must be finite - [Shape]
-- Input and Output tensors must have quantization scales that fit within float32 precision - [Shape]
+- Input(s), Output and Weight tensors with quantization scales must be finite
+- Input and Output tensors must have quantization scales that fit within float32 precision
 - Constant tensors should not have NoneType-values
 - Tensors must be of type: int16, int32, int8, uint8
-- Tensors which are int32 are only valid when op type is: ADD, MUL, SUB
+- Tensors which are int32 are only valid when op type is: ADD, MUL, SHAPE, SUB
 - Tensor dimensions must be in the range [1, 65535]
 - Per-axis quantization is only supported for the following op types: CONV_2D, DEPTHWISE_CONV_2D, TRANSPOSE_CONV
 - The fused activation function (if present) must be one of type: LOGISTIC, RELU, RELU6, RELU_N1_TO_1, TANH
--
cgit v1.2.1