author    Sang-Hoon Park <sang-hoon.park@arm.com>  2020-05-12 11:13:30 +0100
committer Sang-Hoon Park <sang-hoon.park@arm.com>  2020-05-12 16:25:57 +0000
commit    a7431aeef244c85f621b70b946d25229e42d1708 (patch)
tree      62f74403008cad9cb812202865d016addf711a18 /arm_compute/runtime/NEON/functions/NEQLSTMLayer.h
parent    1f567afcdfb2919fab417f0060155deda7132df8 (diff)
COMPMID-3439: Fix peephole and projection in CLQLSTMLayer
The following changes are essential to make it work:
- QSYMM16 is added as a supported data type in CLGEMMLowpOutputStage.
- An internal TensorCopyKernel is added, similar to the one in NEQLSTMLayer.

The following are fixes for related issues:
- Projection is modified to remove the copy of projection_bias from NEQLSTMLayer.
- Fix wrong argument passed to validate_mm().
- validate_mm() now returns on error.

Change-Id: Icbd04e9fdb8821eb41dd3e0a6a0980965b779714
Signed-off-by: Sang-Hoon Park <sang-hoon.park@arm.com>
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/3177
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Michele Di Giorgio <michele.digiorgio@arm.com>
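For context, the "internal TensorCopyKernel" mentioned above performs a plain element-wise copy between tensors whose backing buffers may use different strides (for example, staging a projection result into an accumulation tensor). Below is a minimal, standalone C++ sketch of that idea; it is not the Compute Library kernel or its API, and the function name, row-major layout, and padded-stride assumption are all illustrative only.

```cpp
// Conceptual sketch of a 2D tensor copy with differing row strides.
// NOT the Compute Library TensorCopyKernel; names/layout are assumptions.
#include <cstddef>
#include <iostream>
#include <vector>

// Copy a rows x cols region from src to dst, where each buffer may pad
// its rows to a stride larger than cols.
void copy_tensor_2d(const float *src, std::size_t src_stride,
                    float *dst, std::size_t dst_stride,
                    std::size_t rows, std::size_t cols)
{
    for(std::size_t r = 0; r < rows; ++r)
    {
        const float *src_row = src + r * src_stride;
        float       *dst_row = dst + r * dst_stride;
        for(std::size_t c = 0; c < cols; ++c)
        {
            dst_row[c] = src_row[c];
        }
    }
}

int main()
{
    // 2x3 logical tensor stored with a padded row stride of 4 in the source.
    std::vector<float> src = { 1, 2, 3, 0, 4, 5, 6, 0 };
    std::vector<float> dst(2 * 3, 0.0f);

    copy_tensor_2d(src.data(), 4, dst.data(), 3, 2, 3);

    for(float v : dst)
    {
        std::cout << v << ' '; // prints: 1 2 3 4 5 6
    }
    std::cout << '\n';
}
```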
Diffstat (limited to 'arm_compute/runtime/NEON/functions/NEQLSTMLayer.h')
-rw-r--r--  arm_compute/runtime/NEON/functions/NEQLSTMLayer.h | 1 -
1 file changed, 0 insertions(+), 1 deletion(-)
diff --git a/arm_compute/runtime/NEON/functions/NEQLSTMLayer.h b/arm_compute/runtime/NEON/functions/NEQLSTMLayer.h
index 4dde85e895..d1cc962940 100644
--- a/arm_compute/runtime/NEON/functions/NEQLSTMLayer.h
+++ b/arm_compute/runtime/NEON/functions/NEQLSTMLayer.h
@@ -426,7 +426,6 @@ private:
Tensor _mm_projection_res{ nullptr };
Tensor _projection_outstage_res{ nullptr };
Tensor _projection_out_res{ nullptr };
- Tensor _projection_eff_bias_adjusted{ nullptr };
Tensor _projection_accumulate_res{ nullptr };
Tensor _ones{ nullptr };
std::array<Tensor, _layer_norm_count> _layer_norm_output{ {} };