Diffstat (limited to 'SUPPORTED_OPS.md')
-rw-r--r--  SUPPORTED_OPS.md | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/SUPPORTED_OPS.md b/SUPPORTED_OPS.md
index 83429b7a..d16d5f8e 100644
--- a/SUPPORTED_OPS.md
+++ b/SUPPORTED_OPS.md
@@ -1,7 +1,7 @@
# Supported Ops

This file was automatically generated by Vela using the `--supported-ops-report` parameter.
-Vela version: `3.4.0rc3.dev1+g5e0ae55`
+Vela version: `3.4.1.dev3+g5c30971e`

This file complies with
[**Gitiles Markdown syntax**](https://github.com/google/gitiles/blob/master/Documentation/markdown.md)
@@ -42,6 +42,7 @@ Please check the supported operator list for your chosen runtime for further inf
| RELU_N1_TO_1 | [Generic](#tflite-generic-constraints) |
| RESHAPE | [Generic](#tflite-generic-constraints), [Specific](#tflite-reshape-constraints) |
| RESIZE_BILINEAR | [Generic](#tflite-generic-constraints), [Specific](#tflite-resize_bilinear-constraints) |
+| SHAPE | [Generic](#tflite-generic-constraints) |
| SLICE | [Generic](#tflite-generic-constraints) |
| SOFTMAX | [Generic](#tflite-generic-constraints), [Specific](#tflite-softmax-constraints) |
| SPLIT | [Generic](#tflite-generic-constraints) |
@@ -61,14 +62,14 @@ This is a list of constraints most NPU operators must satisfy in order to be sch
- Input(s) and Output tensors must not be dynamic - [Quantize]
- Input(s) and Output tensors must have a defined shape
- Output tensors cannot be scalar - [Quantize]
-- Scalar Input tensors are only valid for op type: ADD, EXPAND_DIMS, MAXIMUM, MEAN, MINIMUM, MUL, SPLIT, SPLIT_V, SUB - [Quantize]
+- Scalar Input tensors are only valid for op type: ADD, EXPAND_DIMS, MAXIMUM, MEAN, MINIMUM, MUL, QUANTIZE, SPLIT, SPLIT_V, SUB
- Input(s) and Output tensors must not be greater than 4D
- Input(s), Output and Weight tensors must have quantization parameters - [Shape]
-- Input(s), Output and Weight tensors with quantization scales must be finite - [Shape]
-- Input and Output tensors must have quantization scales that fit within float32 precision - [Shape]
+- Input(s), Output and Weight tensors with quantization scales must be finite
+- Input and Output tensors must have quantization scales that fit within float32 precision
- Constant tensors should not have NoneType-values
- Tensors must be of type: int16, int32, int8, uint8
-- Tensors which are int32 are only valid when op type is: ADD, MUL, SUB
+- Tensors which are int32 are only valid when op type is: ADD, MUL, SHAPE, SUB
- Tensor dimensions must be in the range [1, 65535]
- Per-axis quantization is only supported for the following op types: CONV_2D, DEPTHWISE_CONV_2D, TRANSPOSE_CONV
- The fused activation function (if present) must be one of type: LOGISTIC, RELU, RELU6, RELU_N1_TO_1, TANH
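
For reference, the header of the diffed file states that it is generated by Vela's `--supported-ops-report` parameter. Below is a minimal sketch of regenerating the report programmatically; it assumes the `vela` console script from the ethos-u-vela package is on the PATH and that the report is written as `SUPPORTED_OPS.md` in the current directory (assumptions, not taken from this diff).

```python
# Minimal sketch: regenerate the supported-ops report via the Vela CLI.
# Assumptions (not stated in this diff): the `vela` console script from the
# ethos-u-vela package is on PATH, and the report is written as
# SUPPORTED_OPS.md in the current working directory.
import subprocess

subprocess.run(["vela", "--supported-ops-report"], check=True)
```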