///
/// Copyright (c) 2017-2024 Arm Limited.
///
/// SPDX-License-Identifier: MIT
///
/// Permission is hereby granted, free of charge, to any person obtaining a copy
/// of this software and associated documentation files (the "Software"), to
/// deal in the Software without restriction, including without limitation the
/// rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
/// sell copies of the Software, and to permit persons to whom the Software is
/// furnished to do so, subject to the following conditions:
///
/// The above copyright notice and this permission notice shall be included in all
/// copies or substantial portions of the Software.
///
/// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
/// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
/// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
/// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
/// SOFTWARE.
///
namespace arm_compute
{
/** @page how_to_build How to Build and Run Examples

@tableofcontents

@section S1_1_build_options Build options

scons 2.3 or above is required to build the library.
To see the available build options, simply run ```scons -h```
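
For example, a minimal invocation combining some of the most common options (the same options appear in the full build commands throughout this guide) could look like:

	scons -j8 os=linux arch=armv8a neon=1 opencl=0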

@section S1_2_linux Building for Linux

@subsection S1_2_1_library How to build the library?

For Linux, the library was successfully built and tested using the following Linaro GCC toolchain:

 - gcc-linaro-6.3.1-2017.05-x86_64_arm-linux-gnueabihf
 - gcc-linaro-6.3.1-2017.05-x86_64_aarch64-linux-gnu

To cross-compile the library in debug mode, with Arm® Neon™ only support, for Linux 32bit:

	scons Werror=1 -j8 debug=1 neon=1 opencl=0 os=linux arch=armv7a

To cross-compile the library in asserts mode, with OpenCL only support, for Linux 64bit:

	scons Werror=1 -j8 debug=0 asserts=1 neon=0 opencl=1 embed_kernels=1 os=linux arch=armv8a

You can also compile the library natively on an Arm device by using <b>build=native</b>:

	scons Werror=1 -j8 debug=0 neon=1 opencl=0 os=linux arch=armv8a build=native
	scons Werror=1 -j8 debug=0 neon=1 opencl=0 os=linux arch=armv7a build=native

@note g++ for Arm is mono-arch, therefore if you want to compile for Linux 32bit on a Linux 64bit platform you will have to use a cross compiler.

For example, on a 64bit Debian-based system you would have to install <b>g++-arm-linux-gnueabihf</b>

	apt-get install g++-arm-linux-gnueabihf

Then run

	scons Werror=1 -j8 debug=0 neon=1 opencl=0 os=linux arch=armv7a build=cross_compile

or simply remove the build parameter as build=cross_compile is the default value:

	scons Werror=1 -j8 debug=0 neon=1 opencl=0 os=linux arch=armv7a

@subsection S1_2_2_examples How to manually build the examples?

The examples get automatically built by scons as part of the build process of the library described above. This section just describes how you can build and link your own application against our library.

@note The following command lines assume the arm_compute libraries are present in the current directory or in the system library path. If this is not the case you can specify the location of the pre-built libraries with the compiler option -L. When building the OpenCL example the commands below assume that the CL headers are located in the include folder where the command is executed.

To cross compile an Arm® Neon™ example for Linux 32bit:

	arm-linux-gnueabihf-g++ examples/neon_cnn.cpp utils/Utils.cpp -I. -Iinclude -std=c++14 -mfpu=neon -L. -larm_compute -o neon_cnn

To cross compile an Arm® Neon™ example for Linux 64bit:

	aarch64-linux-gnu-g++ examples/neon_cnn.cpp utils/Utils.cpp -I. -Iinclude -std=c++14 -L. -larm_compute -o neon_cnn

(notice the only difference with the 32 bit command is that we don't need the -mfpu option and the compiler's name is different)

To cross compile an OpenCL example for Linux 32bit:

	arm-linux-gnueabihf-g++ examples/cl_sgemm.cpp utils/Utils.cpp -I. -Iinclude -std=c++14 -mfpu=neon -L. -larm_compute -o cl_sgemm -DARM_COMPUTE_CL

To cross compile an OpenCL example for Linux 64bit:

	aarch64-linux-gnu-g++ examples/cl_sgemm.cpp utils/Utils.cpp -I. -Iinclude -std=c++14 -L. -larm_compute -o cl_sgemm -DARM_COMPUTE_CL

(notice the only difference with the 32 bit command is that we don't need the -mfpu option and the compiler's name is different)

To cross compile the examples with the Graph API, such as graph_lenet.cpp, you need to link the examples against arm_compute_graph.so too.

i.e. to cross compile the "graph_lenet" example for Linux 32bit:

	arm-linux-gnueabihf-g++ examples/graph_lenet.cpp utils/Utils.cpp utils/GraphUtils.cpp utils/CommonGraphOptions.cpp -I. -Iinclude -std=c++14 -mfpu=neon -L. -larm_compute_graph -larm_compute -Wl,--allow-shlib-undefined -o graph_lenet

i.e. to cross compile the "graph_lenet" example for Linux 64bit:

	aarch64-linux-gnu-g++ examples/graph_lenet.cpp utils/Utils.cpp utils/GraphUtils.cpp utils/CommonGraphOptions.cpp -I. -Iinclude -std=c++14 -L. -larm_compute_graph -larm_compute -Wl,--allow-shlib-undefined -o graph_lenet

(notice the only difference with the 32 bit command is that we don't need the -mfpu option and the compiler's name is different)

@note If compiling using static libraries, this order must be followed when linking: arm_compute_graph_static, arm_compute
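
For illustration, a cross-compile sketch of the "graph_lenet" example linked against the static libraries could look like the command below. The archive names (-larm_compute_graph-static, -larm_compute-static) follow the naming used in the Android commands later in this guide and may differ in your build; depending on the configuration you may also need additional system libraries such as -lpthread:

	aarch64-linux-gnu-g++ examples/graph_lenet.cpp utils/Utils.cpp utils/GraphUtils.cpp utils/CommonGraphOptions.cpp -I. -Iinclude -std=c++14 -L. -larm_compute_graph-static -larm_compute-static -lpthread -o graph_lenet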

To compile natively (i.e. directly on an Arm device) for Arm® Neon™ for Linux 32bit:

	g++ examples/neon_cnn.cpp utils/Utils.cpp -I. -Iinclude -std=c++14 -mfpu=neon -larm_compute -o neon_cnn

To compile natively (i.e. directly on an Arm device) for Arm® Neon™ for Linux 64bit:

	g++ examples/neon_cnn.cpp utils/Utils.cpp -I. -Iinclude -std=c++14 -larm_compute -o neon_cnn

(notice the only difference with the 32 bit command is that we don't need the -mfpu option)

To compile natively (i.e. directly on an Arm device) for OpenCL for Linux 32bit or Linux 64bit:

	g++ examples/cl_sgemm.cpp utils/Utils.cpp -I. -Iinclude -std=c++14 -larm_compute -o cl_sgemm -DARM_COMPUTE_CL

To natively compile the examples with the Graph API, such as graph_lenet.cpp, you need to link the examples against arm_compute_graph.so too.

i.e. to natively compile the "graph_lenet" example for Linux 32bit:

	g++ examples/graph_lenet.cpp utils/Utils.cpp utils/GraphUtils.cpp utils/CommonGraphOptions.cpp -I. -Iinclude -std=c++14 -mfpu=neon -L. -larm_compute_graph -larm_compute -Wl,--allow-shlib-undefined -o graph_lenet

i.e. to natively compile the "graph_lenet" example for Linux 64bit:

	g++ examples/graph_lenet.cpp utils/Utils.cpp utils/GraphUtils.cpp utils/CommonGraphOptions.cpp -I. -Iinclude -std=c++14 -L. -larm_compute_graph -larm_compute -Wl,--allow-shlib-undefined -o graph_lenet

(notice the only difference with the 32 bit command is that we don't need the -mfpu option)

@note If compiling using static libraries, this order must be followed when linking: arm_compute_graph_static, arm_compute

@note These two commands assume libarm_compute.so is available in your library path; if not, add the path to it using -L (e.g. -Llib/linux-armv8a-neon-cl-asserts/)
@note You might also need to add the path to the OpenCL library to your LD_LIBRARY_PATH if Compute Library was built with OpenCL enabled.
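
For instance, assuming the libraries were built into the build folder and libOpenCL.so lives in a hypothetical /path/to/opencl directory, both paths could be exported as:

	export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:build:/path/to/opencl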

To run the built executable simply run:

	LD_LIBRARY_PATH=build ./neon_cnn

or

	LD_LIBRARY_PATH=build ./cl_sgemm

@note Examples accept different types of arguments; to find out what they are, run the example with \a --help as an argument. If no arguments are specified, then random values will be used to execute the graph.

For example:

	LD_LIBRARY_PATH=. ./graph_lenet --help

Below is a list of the common parameters among the graph examples :
@snippet utils/CommonGraphOptions.h Common graph examples parameters

@subsection S1_2_3_sve Build for SVE or SVE2

In order to build for SVE or SVE2 you need a compiler that supports them. You can find more information at the following links:
    -# GCC: https://developer.arm.com/tools-and-software/open-source-software/developer-tools/gnu-toolchain/sve-support
    -# LLVM: https://developer.arm.com/tools-and-software/open-source-software/developer-tools/llvm-toolchain/sve-support

@note You then need to indicate the toolchain using the scons "toolchain_prefix" parameter.

An example build command with SVE is:

        scons arch=armv8.2-a-sve os=linux build_dir=arm64 -j55 standalone=0 opencl=0 openmp=0 validation_tests=1 neon=1 cppthreads=1 toolchain_prefix=aarch64-none-linux-gnu-

@subsection S1_2_4_sme Build for SME2

In order to build for SME2 you need to use a compiler that supports SVE2 and enable SVE2 in the build as well.

@note You then need to indicate the toolchain using the scons "toolchain_prefix" parameter.

An example build command with SME2 is:

        scons arch=armv8.6-a-sve2-sme2 os=linux build_dir=arm64 -j55 standalone=0 opencl=0 openmp=0 validation_tests=1 neon=1 cppthreads=1 toolchain_prefix=aarch64-none-linux-gnu-

@subsection S1_2_5_clang_build_linux Building with LLVM+Clang Natively on Linux

The library can be built with LLVM+Clang by specifying CC and CXX environment variables appropriately as below. The **minimum** supported clang version is 11, as LLVM 11 introduces SVE/SVE2 VLA intrinsics: https://developer.arm.com/Tools%20and%20Software/LLVM%20Toolchain#Supported-Devices.

	CC=clang CXX=clang++ <build command>

Or, if the environment has multiple clang versions:

	CC=clang-16 CXX=clang++-16

Example commands for different build tools are shown below.

(experimental) CMake:

	mkdir build
	cd build
	CC=clang CXX=clang++ cmake .. -DCMAKE_BUILD_TYPE=Release -DARM_COMPUTE_OPENMP=1 -DARM_COMPUTE_WERROR=0 -DARM_COMPUTE_BUILD_EXAMPLES=1 -DARM_COMPUTE_BUILD_TESTING=1 -DCMAKE_INSTALL_LIBDIR=.
	CC=clang CXX=clang++ cmake --build . -j32

(experimental) Bazel:

	CC=clang CXX=clang++ bazel build //...

Scons:

	CC=clang CXX=clang++ scons -j32 Werror=1 debug=0 neon=1 openmp=1 cppthreads=1 os=linux arch=armv8a multi_isa=1 build=native validation_tests=1

The supported configurations are limited to those supported by our CMake, Bazel and Multi ISA Scons builds. For more details on CMake and Bazel builds, please see @ref S1_8_experimental_builds

@section S1_3_android Building for Android

For Android, the library was successfully built and tested using Google's standalone toolchains:
 - clang++ from NDK r20b for armv8a
 - clang++ from NDK r20b for armv8.2-a with FP16 support

(From 23.02, NDK >= r20b is highly recommended) For NDK r18 or older, here is a guide to <a href="https://developer.android.com/ndk/guides/standalone_toolchain.html">create your Android standalone toolchains from the NDK</a>:
- Download the NDK r18b from here: https://developer.android.com/ndk/downloads/index.html to directory $NDK
- Make sure you have Python 2.7 installed on your machine.
- Generate the 32-bit and/or 64-bit toolchains by running the following commands, installing them into your toolchain directory $MY_TOOLCHAINS:

	$NDK/build/tools/make_standalone_toolchain.py --arch arm64 --install-dir $MY_TOOLCHAINS/aarch64-linux-android-ndk-r18b --stl libc++ --api 21

	$NDK/build/tools/make_standalone_toolchain.py --arch arm --install-dir $MY_TOOLCHAINS/arm-linux-android-ndk-r18b --stl libc++ --api 21

For NDK r19 or newer, you can directly <a href="https://developer.android.com/ndk/downloads">Download</a> the NDK package for your development platform, without the need to launch the make_standalone_toolchain.py script. You can find all the prebuilt binaries inside $NDK/toolchains/llvm/prebuilt/$OS_ARCH/bin/.

@parblock
@attention The build script will look for a binary named "aarch64-linux-android-clang++", while the prebuilt binaries will have their API version as a suffix to their filename (e.g. "aarch64-linux-android21-clang++"). You can instruct scons to use the correct version by using a combination of the toolchain_prefix and the "CC" and "CXX" environment variables.
@attention For this particular example, you can specify:

	CC=clang CXX=clang++ scons toolchain_prefix=aarch64-linux-android21-

@attention or:

	CC=aarch64-linux-android21-clang CXX=aarch64-linux-android21-clang++ scons toolchain_prefix=""

@endparblock

@parblock
@attention We used to use gnustl, but as of NDK r17 it is deprecated, so we switched to libc++.
@endparblock

@note Make sure to add the toolchains to your PATH:

	export PATH=$PATH:$MY_TOOLCHAINS/aarch64-linux-android-ndk-r18b/bin:$MY_TOOLCHAINS/arm-linux-android-ndk-r18b/bin

@subsection S1_3_1_library How to build the library?

To cross-compile the library in debug mode, with Arm® Neon™ only support, for Android 32bit:

	CXX=clang++ CC=clang scons Werror=1 -j8 debug=1 neon=1 opencl=0 os=android arch=armv7a

To cross-compile the library in asserts mode, with OpenCL only support, for Android 64bit:

	CXX=clang++ CC=clang scons Werror=1 -j8 debug=0 asserts=1 neon=0 opencl=1 embed_kernels=1 os=android arch=armv8a

@subsection S1_3_2_examples How to manually build the examples?

The examples get automatically built by scons as part of the build process of the library described above. This section just describes how you can build and link your own application against our library.

@note The following command lines assume the arm_compute libraries are present in the current directory or in the system library path. If this is not the case you can specify the location of the pre-built libraries with the compiler option -L. When building the OpenCL example the commands below assume that the CL headers are located in the include folder where the command is executed.

Once you've got your Android standalone toolchain built and added to your path, you can do the following:

To cross compile an Arm® Neon™ example:

	#32 bit:
	arm-linux-androideabi-clang++ examples/neon_cnn.cpp utils/Utils.cpp -I. -Iinclude -std=c++14 -larm_compute-static -L. -o neon_cnn_arm -static-libstdc++ -pie
	#64 bit:
	aarch64-linux-android-clang++ examples/neon_cnn.cpp utils/Utils.cpp -I. -Iinclude -std=c++14 -larm_compute-static -L. -o neon_cnn_aarch64 -static-libstdc++ -pie

To cross compile an OpenCL example:

	#32 bit:
	arm-linux-androideabi-clang++ examples/cl_sgemm.cpp utils/Utils.cpp -I. -Iinclude -std=c++14 -larm_compute-static -L. -o cl_sgemm_arm -static-libstdc++ -pie -DARM_COMPUTE_CL
	#64 bit:
	aarch64-linux-android-clang++ examples/cl_sgemm.cpp utils/Utils.cpp -I. -Iinclude -std=c++14 -larm_compute-static -L. -o cl_sgemm_aarch64 -static-libstdc++ -pie -DARM_COMPUTE_CL

To cross compile the examples with the Graph API, such as graph_lenet.cpp, you need to link the library arm_compute_graph also.

	#32 bit:
	arm-linux-androideabi-clang++ examples/graph_lenet.cpp utils/Utils.cpp utils/GraphUtils.cpp utils/CommonGraphOptions.cpp -I. -Iinclude -std=c++14 -Wl,--whole-archive -larm_compute_graph-static -Wl,--no-whole-archive -larm_compute-static -L. -o graph_lenet_arm -static-libstdc++ -pie -DARM_COMPUTE_CL
	#64 bit:
	aarch64-linux-android-clang++ examples/graph_lenet.cpp utils/Utils.cpp utils/GraphUtils.cpp utils/CommonGraphOptions.cpp -I. -Iinclude -std=c++14 -Wl,--whole-archive -larm_compute_graph-static -Wl,--no-whole-archive -larm_compute-static -L. -o graph_lenet_aarch64 -static-libstdc++ -pie -DARM_COMPUTE_CL

@note Due to some issues in older versions of the Arm® Mali™ OpenCL DDK (<= r13p0), we recommend linking arm_compute statically on Android.
@note When linked statically, the arm_compute_graph library currently needs the --whole-archive linker flag in order to work properly.

Then all you need to do is upload the executable and the shared library to the device using ADB:

	adb push neon_cnn_arm /data/local/tmp/
	adb push cl_sgemm_arm /data/local/tmp/
	adb push gc_absdiff_arm /data/local/tmp/
	adb shell chmod 777 -R /data/local/tmp/

And finally to run the example:

	adb shell /data/local/tmp/neon_cnn_arm
	adb shell /data/local/tmp/cl_sgemm_arm
	adb shell /data/local/tmp/gc_absdiff_arm

For 64bit:

	adb push neon_cnn_aarch64 /data/local/tmp/
	adb push cl_sgemm_aarch64 /data/local/tmp/
	adb push gc_absdiff_aarch64 /data/local/tmp/
	adb shell chmod 777 -R /data/local/tmp/

And finally to run the example:

	adb shell /data/local/tmp/neon_cnn_aarch64
	adb shell /data/local/tmp/cl_sgemm_aarch64
	adb shell /data/local/tmp/gc_absdiff_aarch64

@note Examples accept different types of arguments; to find out what they are, run the example with \a --help as an argument. If no arguments are specified, then random values will be used to execute the graph.

For example:
	adb shell /data/local/tmp/graph_lenet --help

In this case, the first argument of LeNet (like all the graph examples) is the target (i.e. 0 to run on Neon™, 1 to run on OpenCL if available, 2 to run on OpenCL using the CLTuner), the second argument is the path to the folder containing the npy files for the weights, and the third argument is the number of batches to run.
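
Following that argument order, a hypothetical invocation running on OpenCL with the weights in /data/local/tmp/lenet and 8 batches would be (the data path is a placeholder; point it at wherever you pushed the npy files):

	adb shell /data/local/tmp/graph_lenet 1 /data/local/tmp/lenet 8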

@section S1_4_macos Building for macOS

The library was successfully natively built for Apple Silicon under macOS 11.1 using clang v12.0.0.

To natively compile the library with accelerated CPU support:

	scons Werror=1 -j8 neon=1 opencl=0 os=macos arch=armv8a build=native

@note Initial support disables feature discovery through HWCAPS and thread scheduling affinity controls

@section S1_5_bare_metal Building for bare metal

For bare metal, the library was successfully built using Linaro's latest (gcc-linaro-6.3.1-2017.05) bare metal toolchains:
 - arm-eabi for armv7a
 - aarch64-elf for armv8a

Download linaro for <a href="https://releases.linaro.org/components/toolchain/binaries/6.3-2017.05/arm-eabi/">armv7a</a> and <a href="https://releases.linaro.org/components/toolchain/binaries/6.3-2017.05/aarch64-elf/">armv8a</a>.

@note Make sure to add the toolchains to your PATH:

	export PATH=$PATH:$MY_TOOLCHAINS/gcc-linaro-6.3.1-2017.05-x86_64_aarch64-elf/bin:$MY_TOOLCHAINS/gcc-linaro-6.3.1-2017.05-x86_64_arm-eabi/bin

@subsection S1_5_1_library How to build the library?

To cross-compile the library with Arm® Neon™ support for baremetal armv8a:

	scons Werror=1 -j8 debug=0 neon=1 opencl=0 os=bare_metal arch=armv8a build=cross_compile cppthreads=0 openmp=0 standalone=1

@subsection S1_5_2_examples How to manually build the examples?

Examples are disabled when building for bare metal. If you want to build the examples you need to provide a custom bootcode depending on the target architecture and link against the compute library. More information about bare metal bootcode can be found <a href="http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dai0527a/index.html">here</a>.

@section S1_6_windows_host Building on a Windows® host system (cross-compile)

Using `scons` directly from the Windows® command line is known to cause
problems. The reason seems to be that if `scons` is set up for cross-compilation
it gets confused about Windows® style paths (using backslashes). Thus it is
recommended to follow one of the options outlined below.

@subsection S1_6_1_ubuntu_on_windows Bash on Ubuntu on Windows® (cross-compile)

The best and easiest option is to use
<a href="https://msdn.microsoft.com/en-gb/commandline/wsl/about">Ubuntu on Windows®</a>.
This feature is still marked as *beta* and thus might not be available.
However, if it is available, building the library is as simple as opening a *Bash on
Ubuntu on Windows®* shell and following the general guidelines given above.

@subsection S1_6_2_cygwin Cygwin (cross-compile)

If the Windows® subsystem for Linux is not available, <a href="https://www.cygwin.com/">Cygwin</a>
can be used to install and run `scons`; the minimum Cygwin version must be 3.0.7 or later. In addition
to the default packages installed by Cygwin, `scons` has to be selected in the installer. (`git` might
also be useful but is not strictly required if you already have the source
code of the library.) Linaro provides pre-built versions of
<a href="http://releases.linaro.org/components/toolchain/binaries/">GCC cross-compilers</a>
that can be used from the Cygwin terminal. When building for Android the
compiler is included in the Android standalone toolchain. After everything has
been set up in the Cygwin terminal the general guide on building the library
can be followed.

@subsection S1_6_3_WoA Windows® on Arm™ (native build)

@note Native builds on Windows® are experimental and some features of the library that interact with the OS are missing.

It's possible to build Compute Library natively on a Windows® system running on Arm™.

Windows® on Arm™ (WoA) systems provide compatibility by emulating x86 binaries on aarch64. Unfortunately, Visual Studio 2022 does not work on aarch64 systems because it is an x86_64 application and these binaries cannot be executed on WoA yet.

Because we cannot use Visual Studio to build Compute Library, we have to set up a native standalone toolchain to compile C++ code for arm64 on Windows®.

Native arm64 toolchain installation for WoA:
- LLVM+Clang-12 which can be downloaded from: https://github.com/llvm/llvm-project/releases/download/llvmorg-12.0.0/LLVM-12.0.0-woa64.exe
- Arm64 VC Runtime which can be downloaded from  https://aka.ms/vs/17/release/vc_redist.arm64.exe

- While full VS22 cannot be installed on WoA, we can install some components
    -# Desktop development with C++ and all Arm64 components for Visual Studio, refer to:  https://developer.arm.com/documentation/102528/0100/Install-Visual-Studio
    -# VS22 build tools: https://visualstudio.microsoft.com/downloads/#build-tools-for-visual-studio-2022

There are some additional tools we need to install to build Compute Library:

- git https://git-scm.com/download/win
- python 3 https://www.python.org/downloads/windows/
- scons can be installed with pip install scons
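
For example, once Python 3 is on the PATH, scons can be installed from a command prompt with:

	pip install scons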

In order to use clang to build Windows® binaries natively, we have to initialize the environment variables from VS22 correctly so that the compiler can find the arm64 C++ libraries. This can be done by pressing the Windows + R keys and running the command:

    cmd /k "C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Auxiliary\Build\vcvarsx86_arm64.bat"

To build Compute Library type:

     scons opencl=0 neon=1 os=windows examples=0 validation_tests=1 benchmark_examples=0 build=native arch=armv8a Werror=0 exceptions=1 standalone=1

@section S1_7_cl_requirements OpenCL DDK Requirements

@subsection S1_7_1_cl_hard_requirements Hard Requirements

Compute Library requires OpenCL 1.1 and above, with support for non-uniform work-group sizes, which is officially supported in the Arm® Mali™ OpenCL DDK r8p0 and above as an extension (the respective extension flag is \a -cl-arm-non-uniform-work-group-size).

Enabling 16-bit floating point calculations requires the \a cl_khr_fp16 extension to be supported. All Arm® Mali™ GPUs with compute capabilities have native support for half-precision floating point.

@subsection S1_7_2_cl_performance_requirements Performance improvements

Integer dot product built-in function extensions (and therefore optimized kernels) are available with Arm® Mali™ OpenCL DDK r22p0 and above for the following GPUs: G71, G76. The relevant extensions are \a cl_arm_integer_dot_product_int8, \a cl_arm_integer_dot_product_accumulate_int8 and \a cl_arm_integer_dot_product_accumulate_int16.

OpenCL kernel-level debugging can be simplified with the use of printf; this requires the \a cl_arm_printf extension to be supported.

SVM allocations are supported for all the underlying allocations in Compute Library. To enable this, OpenCL 2.0 or above is required.

@section S1_8_experimental_builds Experimental Bazel and CMake builds

In addition to the scons build, the repository includes experimental Bazel and CMake builds.
These builds currently support a limited range of options. Both are similar to the scons multi_isa build: they compile all libraries with Neon™ support, as well as the SVE and SVE2 libraries. The build is CPU only and does not include OpenCL support. Only the Linux environment is targeted for now. Both were successfully built with gcc/g++ version 10.2.

@subsection S1_8_1_bazel_build Bazel build

@subsubsection S1_8_1_1_file_structure File structure

File structure for all files included in the Bazel build:

	.
	├──  .bazelrc
	├──  BUILD
	├──  WORKSPACE
	├── arm_compute
	│   └── BUILD
	├── examples
	│   └── BUILD
	├── include
	│   └── BUILD
	├── scripts
	│   ├── print_version_file.py
	│   └── BUILD
	├── src
	│   └── BUILD
	├── support
	│   └── BUILD
	├── tests
	│   ├── BUILD
	│   └── framework
	│       └── BUILD
	└── utils
		└── BUILD

@subsubsection S1_8_1_2_build_options Build options

Available build options:

	- debug: Enable ['-O0','-g','-gdwarf-2'] compilation flags
	- Werror: Enable -Werror compilation flag
	- logging: Enable logging
	- cppthreads: Enable C++11 threads backend
	- openmp: Enable OpenMP backend

@subsubsection S1_8_1_3_example_builds Example builds

Build everything (libraries, examples, tests):

	bazel build //...

Build libraries:

	bazel build //:all

Build arm_compute only:

	bazel build //:arm_compute

Build examples:

	bazel build //examples:all

Build resnet50 example:

	bazel build //examples:graph_resnet50

Build validation and benchmarking:

	bazel build //tests:all

@subsection S1_8_2_cmake_build CMake build

@subsubsection S1_8_2_1_file_structure File structure

File structure for all files included in the CMake build:

	.
	├──  CMakeLists.txt
	├── cmake
	│   ├── Options.cmake
	│   ├── Version.cmake
	│   └── toolchains
	│       └── aarch64_linux_toolchain.cmake
	├── examples
	│   └── CMakeLists.txt
	├── src
	│   └── CMakeLists.txt
	└── tests
		├── CMakeLists.txt
		├── benchmark
		│   └── CMakeLists.txt
		└── validation
			└── CMakeLists.txt

@subsubsection S1_8_2_2_build_options Build options

Available build options:

	- CMAKE_BUILD_TYPE: "Release" (default) enables ['-O3', '-DNDEBUG'] compilation flags, "Debug" enables ['-O0','-g','-gdwarf-2', '-DARM_COMPUTE_ASSERTS_ENABLED']
	- ARM_COMPUTE_WERROR: Enable -Werror compilation flag
	- ARM_COMPUTE_EXCEPTIONS: If disabled, ARM_COMPUTE_EXCEPTIONS_DISABLED is enabled
	- ARM_COMPUTE_LOGGING: Enable logging
	- ARM_COMPUTE_BUILD_EXAMPLES: Build examples
	- ARM_COMPUTE_BUILD_TESTING: Build tests
	- ARM_COMPUTE_CPPTHREADS: Enable C++11 threads backend
	- ARM_COMPUTE_OPENMP: Enable OpenMP backend

@subsubsection S1_8_2_3_example_builds Example builds

To build libraries, examples and tests:

	mkdir build
	cd build
	cmake .. -DCMAKE_BUILD_TYPE=Release -DARM_COMPUTE_OPENMP=1 -DARM_COMPUTE_WERROR=0 -DARM_COMPUTE_BUILD_EXAMPLES=1 -DARM_COMPUTE_BUILD_TESTING=1 -DCMAKE_INSTALL_LIBDIR=.
	cmake --build . -j32

@section S1_9_fixed_format Building with support for fixed format kernels

@subsection S1_9_1_intro_to_fixed_format_kernels What are fixed format kernels?

The GEMM kernels used for convolutions and fully-connected layers in Compute Library employ memory layouts optimized for each kernel implementation. This then requires the supplied weights to be re-ordered into a buffer ready for consumption by the GEMM kernel. Where Compute Library is being called from a framework or library which implements operator caching, the re-ordering of the input weights into an intermediate buffer may no longer be desirable. When using a cached operator, the caller may wish to re-write the weights tensor, and re-run the operator using the updated weights. With the default GEMM kernels in Compute Library, the GEMM will be executed with the old weights, leading to incorrect results.

To address this, Compute Library provides a set of GEMM kernels which use a common blocked memory format. These kernels consume the input weights directly from the weights buffer and do not execute an intermediate pre-transpose step. With this approach, it is the responsibility of the user (in this case the calling framework) to ensure that the weights are re-ordered into the required memory format. @ref NEGEMM::has_opt_impl is a static function that queries whether a fixed-format kernel exists and, if so, returns the expected weights format. The supported weight formats are enumerated in @ref arm_compute::WeightFormat.

@subsection S1_9_2_building_fixed_format Building with fixed format kernels

Fixed format kernels are only available for the CPU backend. To build Compute Library with fixed format kernels, set fixed_format_kernels=1:

        scons Werror=1 debug=0 neon=1 opencl=0 embed_kernels=0 os=linux multi_isa=1 build=native cppthreads=1 openmp=0 fixed_format_kernels=1

@section S1_10_doxygen Building the Doxygen Documentation

This documentation has been generated using the following shell command:

        $ ./scripts/generate_documentation.sh

This requires Doxygen to be installed and available on your system.

*/

} // namespace arm_compute