From 813f23049d73177edfc1f1cff71147c39f4b695e Mon Sep 17 00:00:00 2001
From: Sadik Armagan
Date: Tue, 19 May 2020 14:10:30 +0100
Subject: IVGCVSW-4453 Add Support for ANEURALNETWORKS_QLSTM to HAL 1.3 Driver

* Add QLSTM support for Android NN Driver
* Add overrideOutputInfo parameter to SetupAndTrackLayerOutputSlot
* Add optional condition to GetInputScalar
* Refactor Quantized 16 Bit LSTM impl

Change-Id: Ie8fa98ad5ee4a62174ef91ca80f1df62b7fde937
Signed-off-by: Keith Davis
Signed-off-by: Sadik Armagan
---
 NnapiSupport.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

(limited to 'NnapiSupport.txt')

diff --git a/NnapiSupport.txt b/NnapiSupport.txt
index d5e077bf..e3d7c692 100644
--- a/NnapiSupport.txt
+++ b/NnapiSupport.txt
@@ -54,6 +54,7 @@ PAD_V2 (FLOAT32, FLOAT16, QUANT8_ASYMM)
 PRELU (FLOAT32, QUANT8_ASYMM)
 QUANTIZE (FLOAT32 (input only), QUANT8_ASYMM (output only))
 QUANTIZED_16BIT_LSTM (QUANT8_ASYMM)
+QUANTIZED_LSTM (QUANT8_ASYMM)
 RELU (FLOAT32, QUANT8_ASYMM)
 RELU1 (FLOAT32, QUANT8_ASYMM)
 RELU6 (FLOAT32, QUANT8_ASYMM)
@@ -74,7 +75,6 @@ TRANSPOSE_CONV_2D (FLOAT32, QUANT8_ASYMM)
 Where operations are not supported by the ArmNN Android NN Driver, the driver
 indicates this to the framework appropriately and the framework implements those
 operations using a CPU implementation.
-
 NOTE: By convention, only those tensor types have been listed above, which are
 fully supported across all ArmNN backends. FLOAT16 input tensors are partially
 supported on most HAL 1.2 operators on the GpuAcc and CpuRef backends, however not on CpuAcc.
--
cgit v1.2.1