author | Gian Marco Iodice <gianmarco.iodice@arm.com> | 2017-07-03 12:33:49 +0100
---|---|---
committer | Anthony Barbier <anthony.barbier@arm.com> | 2018-09-17 14:15:39 +0100
commit | 368da83fdd7406d629e8cca64f3eb0af05437419 (patch) |
tree | fadac4142651cb0f86b997c06cbabb1bec622aae /arm_compute/runtime/CL/functions/CLFullyConnectedLayer.h |
parent | adffa30de9292c96bf29ff0697ac573270046612 (diff) |
download | ComputeLibrary-368da83fdd7406d629e8cca64f3eb0af05437419.tar.gz |
COMPMID-420, COMPMID-414 - Port CLConvolutionLayer and CLFullyConnectedLayer to use 8 bit fixed point
Change-Id: I1cb1b4d7711ad7b569ee691e13a5df1b3430292b
Reviewed-on: http://mpd-gerrit.cambridge.arm.com/79565
Tested-by: Kaizen <jeremy.johnson+kaizengerrit@arm.com>
Reviewed-by: Georgios Pinitas <georgios.pinitas@arm.com>
Diffstat (limited to 'arm_compute/runtime/CL/functions/CLFullyConnectedLayer.h')
-rw-r--r-- | arm_compute/runtime/CL/functions/CLFullyConnectedLayer.h | 4 |
1 file changed, 2 insertions, 2 deletions
diff --git a/arm_compute/runtime/CL/functions/CLFullyConnectedLayer.h b/arm_compute/runtime/CL/functions/CLFullyConnectedLayer.h
index 826f445bd8..807ff693bc 100644
--- a/arm_compute/runtime/CL/functions/CLFullyConnectedLayer.h
+++ b/arm_compute/runtime/CL/functions/CLFullyConnectedLayer.h
@@ -50,7 +50,7 @@ public:
     CLFullyConnectedLayerReshapeWeights();
     /** Set the input and output tensors.
      *
-     * @param[in]  input               Weights tensor. The weights must be 2 dimensional. Data types supported: QS8/F32.
+     * @param[in]  input               Weights tensor. The weights must be 2 dimensional. Data types supported: QS8/F16/F32.
      * @param[out] output              Destination tensor. Data type supported: Same as @p input.
      * @param[in]  transpose_weights   True if the weights must be transposed. Data types supported: Same as @p weights.
      * @param[in]  is_batched_fc_layer True if it is a batched fully connected layer
@@ -85,7 +85,7 @@ public:
     CLFullyConnectedLayer();
     /** Set the input and output tensors.
      *
-     * @param[in]  input   Source tensor. Data type supported: F16/F32.
+     * @param[in]  input   Source tensor. Data type supported: QS8/F16/F32.
      * @param[in]  weights Weights tensor. The weights must be 2 dimensional. Data type supported: Same as @p input
      * @param[in]  biases  Bias tensor. It can be nullptr. Data type supported: Same as @p input.
      * @param[out] output  Destination tensor. Data type supported: Same as @p input.