path: root/tests/validation/reference/QLSTMLayerNormalization.cpp
author    Michele Di Giorgio <michele.digiorgio@arm.com>  2021-01-12 13:49:07 +0000
committer Georgios Pinitas <georgios.pinitas@arm.com>  2021-01-13 16:18:37 +0000
commit    7e5a86535cd8702e8cb06be5277c289be37ead9c (patch)
tree      2f9e40c8caf0fa8027f31967e7e0528ea63164e2 /tests/validation/reference/QLSTMLayerNormalization.cpp
parent    d30405ac6eb38205676dfaa6e875b264caef431d (diff)
download  ComputeLibrary-7e5a86535cd8702e8cb06be5277c289be37ead9c.tar.gz
Add tolerance for quantized activations computed in float
Some of the activation functions require complex mathematical operations and are implemented by dequantizing to float, performing the activation in the float domain, and requantizing back. In such cases, the results may differ slightly between the reference and the optimized code: when running validation through Valgrind, the results differ by 1, so an absolute tolerance of 1 is added to the tests.

Resolves: COMPMID-4067

Change-Id: Ic2eca5616371b0a324a246d40b515ddc9f576e61
Signed-off-by: Michele Di Giorgio <michele.digiorgio@arm.com>
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/4841
Reviewed-by: Giorgio Arena <giorgio.arena@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Diffstat (limited to 'tests/validation/reference/QLSTMLayerNormalization.cpp')
0 files changed, 0 insertions, 0 deletions