From c1ad80b3a581dd39b39a112d6c2026f6560207a4 Mon Sep 17 00:00:00 2001
From: Johan Alfven
Date: Fri, 31 Mar 2023 10:19:23 +0200
Subject: MLBEDSW-7437: Add 64-bit output support for ArgMax

- Added 64-bit support for ArgMax
- Updated constraints for ArgMax and regenerated SUPPORTED_OPS.md

Change-Id: I4ef7d2e6fccab0088b87757f6afe40a006c77bbd
Signed-off-by: Johan Alfven
---
 SUPPORTED_OPS.md | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

(limited to 'SUPPORTED_OPS.md')

diff --git a/SUPPORTED_OPS.md b/SUPPORTED_OPS.md
index ba5b7919..08c63e7c 100644
--- a/SUPPORTED_OPS.md
+++ b/SUPPORTED_OPS.md
@@ -1,7 +1,7 @@
 # Supported Ops
 
 This file was automatically generated by Vela using the `--supported-ops-report` parameter.
-Vela version: `3.7.1.dev10+g521c494`
+Vela version: `3.7.1.dev15+g2b5f66e`
 
 This file complies with
 [**Gitiles Markdown syntax**](https://github.com/google/gitiles/blob/master/Documentation/markdown.md)
@@ -71,7 +71,7 @@ This is a list of constraints most NPU operators must satisfy in order to be sch
 - Input(s), Output and Weight tensors with quantization scales must be finite
 - Input and Output tensors must have quantization scales that fit within float32 precision
 - Constant tensors should not have NoneType-values
-- Tensors must be of type: int16, int32, int8, uint8
+- Tensors must be of type: int16, int32, int8, uint8 - [ARG_MAX]
 - Tensors which are int32 are only valid when op type is: ADD, ARG_MAX, MUL, SHAPE, SUB
 - Tensor dimensions must be in the range [1, 65535]
 - Per-axis quantization is only supported for the following op types: CONV_2D, DEPTHWISE_CONV_2D, TRANSPOSE_CONV
@@ -101,6 +101,7 @@ This is a list of constraints that the ADD operator must satisfy in order to be
 This is a list of constraints that the ARG_MAX operator must satisfy in order to be scheduled on the NPU.
 - IFM must be int8 or uint8
+- OFM must be int32 or int64
 - Operation must be performed along the depth axis
 - IFM depth must be no greater than 127
--
cgit v1.2.1
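
The ARG_MAX constraints touched by this patch can be illustrated with a small standalone checker. This is a hypothetical sketch, not Vela's actual validation code: the function name `argmax_supported` and its signature are invented for illustration, and it assumes NHWC layout so that "depth" is the last axis.

```python
# Hypothetical sketch (not Vela's real API): a standalone check that
# mirrors the ARG_MAX constraints listed in the patched SUPPORTED_OPS.md.

def argmax_supported(ifm_dtype, ofm_dtype, axis, ifm_shape):
    """Return True if an ARG_MAX op satisfies the listed constraints.

    ifm_shape is assumed to be NHWC, so the depth axis is the last one.
    """
    # IFM must be int8 or uint8
    if ifm_dtype not in ("int8", "uint8"):
        return False
    # OFM must be int32 or int64 (int64 is what this patch newly allows)
    if ofm_dtype not in ("int32", "int64"):
        return False
    # Operation must be performed along the depth axis (last axis in NHWC)
    if axis not in (-1, len(ifm_shape) - 1):
        return False
    # IFM depth must be no greater than 127
    if ifm_shape[-1] > 127:
        return False
    return True
```

For example, an int8 IFM of shape (1, 8, 8, 64) with an int64 OFM along axis -1 passes all four checks, while the same op with depth 128 is rejected by the last one.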