author     Eric Kunze <eric.kunze@arm.com>  2023-05-12 17:50:19 -0700
committer  Eric Kunze <eric.kunze@arm.com>  2023-05-22 20:43:44 -0700
commit     277a4f17f13ac882075e109c339cdeda03f8eedd (patch)
tree       86c121c64f8cea5f8acf1947b426ec73f2ca70b9 /chapters
parent     b351aea93d369cfc082f2db0df6ac181d3821908 (diff)
download   specification-277a4f17f13ac882075e109c339cdeda03f8eedd.tar.gz
Update descriptions for activation functions.
Provide the mathematical formulas for sigmoid and tanh.
Define the operation function for sigmoid and tanh for
floating-point numbers.
Signed-off-by: Eric Kunze <eric.kunze@arm.com>
Change-Id: Ib949d2e8e06309e5c5292aa0192746ad0f9b1f11
Diffstat (limited to 'chapters')
-rw-r--r--  chapters/activation_funcs.adoc  49
-rw-r--r--  chapters/introduction.adoc      12
2 files changed, 51 insertions(+), 10 deletions(-)
diff --git a/chapters/activation_funcs.adoc b/chapters/activation_funcs.adoc
index 3bbeb30..46fa19d 100644
--- a/chapters/activation_funcs.adoc
+++ b/chapters/activation_funcs.adoc
@@ -30,17 +30,24 @@ for_each(index in shape) {
 ==== SIGMOID
 
-Sigmoid function: output = 1 / (1 + exp(-input))
+Applies the sigmoid logistic function to each element of the input tensor.
 
-For quantized integer data types, the TABLE operator should be used instead with
-the following definition.
+// sigmoid(x) = \frac{1}{1 + e^{-x}}
 
-The sigmoid table has 513 entries each of 16-bit precision and covering the input range -16.0 to +16.0 in steps of 1/16.
+.Calculation for the sigmoid function
+image::sigmoid.svg["Sigmoid definition"]
+
+For quantized integer data types, the TABLE operator should be used instead.
+Each implementation may choose an appropriate TABLE given the scale and zero point of the input data.
+Eight or sixteen bit precision tables may be used based on the input tensor to the sigmoid function.
+Below we give an example table generation for 16-bit sigmoid.
+This sigmoid table has 513 entries each of 16-bit precision and covering the input range -16.0 to +16.0 in steps of 1/16.
+
+.Code for generating 16-bit sigmoid table
 [source,c++]
 ----
 int16_t sigmoid_reference(int16_t x) { // input x range is -256 to +256 inclusive
-    F64 v = (double)x / (double)16;
+    fp64_t v = (fp64_t)x / (fp64_t)16;
     v = 1.0/(1.0 + exp(-v));
     return round_to_nearest_int(32768.0 * v);
 }
@@ -50,19 +57,34 @@ generate_lookup_table(&sigmoid_table, &sigmoid_reference);
 ----
 
 include::{generated}/operators/SIGMOID.adoc[]
 
+[source,c++]
+----
+for_each(index in shape) {
+    in_out_t value1 = tensor_read<in_out_t>(input, shape, index);
+    in_out_t value = sigmoid<in_out_t>(value1);
+    tensor_write<in_out_t>(output, shape, index, value);
+}
+----
+
 ==== TANH
 
 Parameterized hyperbolic tangent.
+
+// tanh(x) = \frac{1 - e^{-2x}}{1 + e^{-2x}}
 
-For quantized integer data types, the TABLE operator should be used instead with
-the following definition.
+.Calculation for the hyperbolic tangent function
+image::tanh.svg["Hyperbolic tangent definition"]
 
-The tanh_table has 513 entries each of 16-bit precision and covering the input range -8.0 to +8.0 in steps of 1/32. The table is specified by:
+For quantized integer data types, the TABLE operator should be used instead.
+Each implementation may choose an appropriate TABLE given the scale and zero point of the input data.
+Eight or sixteen bit precision tables may be used based on the input tensor to the tanh function.
+Below we give an example table generation for 16-bit hyperbolic tangent.
+This tanh_table has 513 entries each of 16-bit precision and covering the input range -8.0 to +8.0 in steps of 1/32.
+
+.Calculation of an example 16-bit tanh table
 [source,c++]
 ----
 int16_t tanh_reference(int16_t x) { // input x range is -256 to +256 inclusive
-    F64 v = (double)x/(double)32;
+    fp64_t v = (fp64_t)x/(fp64_t)32;
     v = exp(-2.0*v);
     v = (1.0-v)/(1.0+v);
     return round_to_nearest_int(32768.0 * v);
@@ -72,3 +94,12 @@ generate_lookup_table(&tanh_table, &tanh_reference);
 ----
 
 include::{generated}/operators/TANH.adoc[]
+
+[source,c++]
+----
+for_each(index in shape) {
+    in_out_t value1 = tensor_read<in_out_t>(input, shape, index);
+    in_out_t value = tanh<in_out_t>(value1);
+    tensor_write<in_out_t>(output, shape, index, value);
+}
+----
diff --git a/chapters/introduction.adoc b/chapters/introduction.adoc
index c0b0874..9cfccb7 100644
--- a/chapters/introduction.adoc
+++ b/chapters/introduction.adoc
@@ -480,7 +480,17 @@ Signed zero must be supported.
 |fp32_t
 | -infinity
 | +infinity
-| 16-bit single-precision floating-point defined by <<Other publications>>[1]. +
+| 32-bit single-precision floating-point defined by <<Other publications>>[1]. +
+Normal values must be supported. +
+Denormal values must either be supported or flushed to zero. +
+Positive and negative infinity must be supported. +
+At least one NaN encoding must be supported. +
+Signed zero must be supported.
+
+|fp64_t
+| -infinity
+| +infinity
+| 64-bit double-precision floating-point defined by <<Other publications>>[1]. +
 Normal values must be supported. +
 Denormal values must either be supported or flushed to zero. +
 Positive and negative infinity must be supported. +