From afde732eb016f18c781923cf1e6c9edf68f586f7 Mon Sep 17 00:00:00 2001
From: Pablo Tello
Date: Tue, 25 Jul 2017 09:19:46 +0100
Subject: COMPMID-421: Added FP16 support in the Neon Locally Connected Layer.

Change-Id: I4b52a209a5ce1a7e69494008538ed242b14b5593
Reviewed-on: http://mpd-gerrit.cambridge.arm.com/81520
Tested-by: Kaizen
Reviewed-by: Anthony Barbier
---
 arm_compute/runtime/NEON/functions/NELocallyConnectedLayer.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

(limited to 'arm_compute/runtime/NEON/functions/NELocallyConnectedLayer.h')

diff --git a/arm_compute/runtime/NEON/functions/NELocallyConnectedLayer.h b/arm_compute/runtime/NEON/functions/NELocallyConnectedLayer.h
index 1b2b2ee3cf..efb2ff6c8b 100644
--- a/arm_compute/runtime/NEON/functions/NELocallyConnectedLayer.h
+++ b/arm_compute/runtime/NEON/functions/NELocallyConnectedLayer.h
@@ -53,7 +53,7 @@ public:
      *
      * @param[in]  input   Source tensor. 3 lower dimensions represent a single input [width, height, IFM],
      *                     while every optional dimension from 4 and above represent a batch of inputs.
-     *                     Data types supported: F32.
+     *                     Data types supported: F16, F32.
      * @param[in]  weights Weights tensor. Weights are 5D tensor with dimensions [kernel_x, kernel_y, IFM, OFM, num_patches]. Data type supported:Same as @p input.
      * @param[in]  biases  Biases tensor. Shared biases supported. Biases are 2D tensor with dimensions [OFM, num_patches]. Data type supported:Same as @p input.
      * @param[out] output  Destination tensor. 3 lower dimensions represent a single output [width, height, OFM], while the rest represent batch of outputs.
--
cgit v1.2.1
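
For context, a minimal sketch of how the documented configure() call might be used with the newly supported F16 data type. The tensor shapes, the PadStrideInfo (stride/padding) argument, and the requirement to build the library with FP16 support enabled are assumptions for illustration; they are not shown in the patch above.

#include "arm_compute/core/Types.h"
#include "arm_compute/runtime/NEON/functions/NELocallyConnectedLayer.h"
#include "arm_compute/runtime/Tensor.h"

using namespace arm_compute;

int main()
{
    // Hypothetical shapes: 8x8 input with 3 channels (IFM), 3x3 kernels,
    // 16 output feature maps (OFM), stride 1, no padding -> 6x6 output,
    // i.e. 36 patches. Shapes follow the documentation quoted in the patch.
    Tensor input{}, weights{}, biases{}, output{};
    input.allocator()->init(TensorInfo(TensorShape(8U, 8U, 3U), 1, DataType::F16));
    weights.allocator()->init(TensorInfo(TensorShape(3U, 3U, 3U, 16U, 36U), 1, DataType::F16));
    biases.allocator()->init(TensorInfo(TensorShape(16U, 36U), 1, DataType::F16));
    output.allocator()->init(TensorInfo(TensorShape(6U, 6U, 16U), 1, DataType::F16));

    // Configure the function. The final conv_info argument is assumed here,
    // following the other Neon convolution-style functions in the library.
    NELocallyConnectedLayer lc{};
    lc.configure(&input, &weights, &biases, &output, PadStrideInfo(1, 1, 0, 0));

    // Allocate backing memory and run. A real application would fill the
    // input, weights and biases tensors with data before calling run().
    input.allocator()->allocate();
    weights.allocator()->allocate();
    biases.allocator()->allocate();
    output.allocator()->allocate();

    lc.run();
    return 0;
}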