author     Ryan OShea <ryan.oshea3@arm.com>    2022-11-07 16:20:48 +0000
committer  ryan.oshea3 <ryan.oshea3@arm.com>   2022-11-16 15:22:50 +0000
commit     31441595009182c985dacbedc70c41ee6664d070 (patch)
tree       248a85295aeff4022c9b395fc97748b0a0aa6b35 /docs/02_operator_list.dox
parent     bd18eab07a8f30492de1e462b1815189014cb8d5 (diff)
download   armnn-31441595009182c985dacbedc70c41ee6664d070.tar.gz
IVGCVSW-7214 Disable BF16-Turbo-Mode and remove conversion layers
- Remove Bf16ToFp32 Conversion Layer
- Remove Fp32ToBf16 Conversion Layer
- Remove Bf16 Conversion tests
* Throw exception if m_ReduceFp32ToBf16 optimizer option is set to true
* Provide comments to enable fast math in order to use bf16
* Update docs to inform users to enable fast math for bf16
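For context, BFloat16 keeps the FP32 sign and 8-bit exponent and truncates the mantissa to 7 bits, which is the conversion the removed layers performed. A minimal sketch of that bit-level conversion in plain Python (illustrative only; function names are not from the Arm NN code base, and this uses simple truncation where a real implementation may round to nearest even):

```python
import struct

def fp32_to_bf16_bits(value):
    """Encode a Python float (as IEEE-754 FP32) into BFloat16 by
    keeping the top 16 bits: sign, 8-bit exponent, 7-bit mantissa."""
    (bits,) = struct.unpack("<I", struct.pack("<f", value))
    return bits >> 16  # truncate the low 16 mantissa bits

def bf16_bits_to_fp32(bits16):
    """Widen a BFloat16 bit pattern back to FP32 by zero-padding
    the mantissa; this direction is exact."""
    (value,) = struct.unpack("<f", struct.pack("<I", bits16 << 16))
    return value

# Round-tripping is lossy: values needing more than 7 mantissa
# bits (e.g. 3.14159) come back slightly changed, while values
# like 1.0 survive exactly.
print(bf16_bits_to_fp32(fp32_to_bf16_bits(1.0)))  # 1.0
```

This precision loss is why BF16 execution is now gated behind fast math: it trades accuracy for speed rather than being a bit-exact substitute for FP32.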
ExecuteNetwork changes:
* Require bf16_turbo_mode to also have fast_math_enabled set to true
- Remove setting m_ReduceFp32ToBf16 optimizer option
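The ExecuteNetwork-side requirement above amounts to a simple option guard; a minimal sketch of that validation logic (hypothetical helper in Python for illustration, not the actual ExecuteNetwork code, which is C++):

```python
def check_bf16_options(bf16_turbo_mode: bool, fast_math_enabled: bool) -> None:
    """Reject bf16_turbo_mode unless fast math is also enabled,
    mirroring the requirement described in the commit message."""
    if bf16_turbo_mode and not fast_math_enabled:
        raise ValueError(
            "bf16_turbo_mode requires fast_math_enabled to be set to true")

# Valid combinations pass silently; bf16 turbo without fast math raises.
check_bf16_options(bf16_turbo_mode=True, fast_math_enabled=True)
check_bf16_options(bf16_turbo_mode=False, fast_math_enabled=False)
```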
Signed-off-by: Ryan OShea <ryan.oshea3@arm.com>
Change-Id: Ibaa6da9d29c96a1ce32ff5196b0847fde9f04a1c
Diffstat (limited to 'docs/02_operator_list.dox')
-rw-r--r-- | docs/02_operator_list.dox | 84 |
1 file changed, 0 insertions, 84 deletions
diff --git a/docs/02_operator_list.dox b/docs/02_operator_list.dox
index 3a902c8883..d9a3d2c83b 100644
--- a/docs/02_operator_list.dox
+++ b/docs/02_operator_list.dox
@@ -655,48 +655,6 @@ where N = batches, C = channels, H = height, W = width
     <tr><td>All
   </table>
 <tr>
-  <td rowspan="3">ConvertBf16ToFp32Layer
-  <td rowspan="3" style="width:200px;"> Layer to convert BFloat16 tensor to Float32 tensor.
-  <td rowspan="3">
-      <ul>
-       <li>N/A
-      </ul>
-  <td>CpuRef
-  <td>
-      <ul>
-       <li>All
-      </ul>
-  <td>
-    <table>
-    <tr><th>
-    <tr><td>BFLOAT16
-    <tr><td>FLOAT32
-    </table>
-<tr>
-  <td>CpuAcc
-  <td>
-      <ul>
-       <li>All
-      </ul>
-  <td>
-    <table>
-    <tr><th>
-    <tr><td>BFLOAT16
-    <tr><td>FLOAT32
-    </table>
-<tr>
-  <td>GpuAcc
-  <td>
-      <ul>
-       <li>All
-      </ul>
-  <td>
-    <table>
-    <tr><th>
-    <tr><td>BFLOAT16
-    <tr><td>FLOAT32
-    </table>
-<tr>
   <td rowspan="3">ConvertFp16ToFp32Layer
   <td rowspan="3" style="width:200px;"> Layer to convert Float16 tensor to Float32 tensor.
   <td rowspan="3">
@@ -739,48 +697,6 @@ where N = batches, C = channels, H = height, W = width
     <tr><td>FLOAT32
   </table>
 <tr>
-  <td rowspan="3">ConvertFp32ToBf16Layer
-  <td rowspan="3" style="width:200px;"> Layer to convert Float32 tensor to BFloat16 tensor.
-  <td rowspan="3">
-      <ul>
-       <li>N/A
-      </ul>
-  <td>CpuRef
-  <td>
-      <ul>
-       <li>All
-      </ul>
-  <td>
-    <table>
-    <tr><th>
-    <tr><td>BFLOAT16
-    <tr><td>FLOAT32
-    </table>
-<tr>
-  <td>CpuAcc
-  <td>
-      <ul>
-       <li>All
-      </ul>
-  <td>
-    <table>
-    <tr><th>
-    <tr><td>BFLOAT16
-    <tr><td>FLOAT32
-    </table>
-<tr>
-  <td>GpuAcc
-  <td>
-      <ul>
-       <li>All
-      </ul>
-  <td>
-    <table>
-    <tr><th>
-    <tr><td>BFLOAT16
-    <tr><td>FLOAT32
-    </table>
-<tr>
   <td rowspan="3">ConvertFp32ToFp16Layer
   <td rowspan="3" style="width:200px;"> Layer to convert Float32 tensor to Float16 tensor.
   <td rowspan="3">