///
/// Copyright (c) 2017-2020 ARM Limited.
///
/// SPDX-License-Identifier: MIT
///
/// Permission is hereby granted, free of charge, to any person obtaining a copy
/// of this software and associated documentation files (the "Software"), to
/// deal in the Software without restriction, including without limitation the
/// rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
/// sell copies of the Software, and to permit persons to whom the Software is
/// furnished to do so, subject to the following conditions:
///
/// The above copyright notice and this permission notice shall be included in all
/// copies or substantial portions of the Software.
///
/// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
/// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
/// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
/// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
/// SOFTWARE.
///
namespace arm_compute
{
/** @mainpage Introduction

@tableofcontents

The Computer Vision and Machine Learning library is a set of functions optimised for both ARM CPUs and GPUs using SIMD technologies.

Several builds of the library are available using various configurations:
 - OS: Linux, Android or bare metal.
 - Architecture: armv7a (32bit) or arm64-v8a (64bit)
 - Technology: NEON / OpenCL / GLES_COMPUTE / NEON and OpenCL and GLES_COMPUTE
 - Debug / Asserts / Release: Use a build with asserts enabled to debug your application and enable extra validation. Once you are sure your application works as expected you can switch to a release build of the library for maximum performance.

@section S0_1_contact Contact / Support

Please email developer@arm.com

In order to facilitate the work of the support team, please provide the build information of the library you are using. To get the version of the library you are using, simply run:

    $ strings android-armv7a-cl-asserts/libarm_compute.so | grep arm_compute_version
    arm_compute_version=v16.12 Build options: {'embed_kernels': '1', 'opencl': '1', 'arch': 'armv7a', 'neon': '0', 'asserts': '1', 'debug': '0', 'os': 'android', 'Werror': '1'} Git hash=f51a545d4ea12a9059fe4e598a092f1fd06dc858

@section S0_2_prebuilt_binaries Pre-built binaries

For each release we provide some pre-built binaries of the library [here](https://github.com/ARM-software/ComputeLibrary/releases).

These binaries have been built using the following toolchains:
 - Linux armv7a: gcc-linaro-4.9-2016.02-x86_64_arm-linux-gnueabihf
 - Linux arm64-v8a: gcc-linaro-4.9-2016.02-x86_64_aarch64-linux-gnu
 - Android armv7a: clang++ / libc++ NDK r17c
 - Android arm64-v8a: clang++ / libc++ NDK r17c

@warning Make sure to use a compatible toolchain to build your application, or you will get std::bad_alloc errors at runtime.

@section S1_file_organisation File organisation

This archive contains:
 - The arm_compute header and source files
 - The latest Khronos OpenCL 1.2 C headers from the <a href="https://www.khronos.org/registry/cl/">Khronos OpenCL registry</a>
 - The latest Khronos cl2.hpp from the <a href="https://www.khronos.org/registry/cl/">Khronos OpenCL registry</a> (API version 2.1 when this document was written)
 - The latest Khronos OpenGL ES 3.1 C headers from the <a href="https://www.khronos.org/registry/gles/">Khronos OpenGL ES registry</a>
 - The latest Khronos EGL 1.5 C headers from the <a href="https://www.khronos.org/registry/gles/">Khronos EGL registry</a>
 - The sources for a stub version of libOpenCL.so, libGLESv1_CM.so, libGLESv2.so and libEGL.so to help you build your application.
 - An examples folder containing a few examples to compile and link against the library (a sample build command is sketched after this list).
 - A @ref utils folder containing headers with some boilerplate code used by the examples.
 - This documentation.

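As a rough sketch only, an example can typically be built straight from this archive by compiling the example source together with utils/Utils.cpp and linking against both libarm_compute and libarm_compute_core. The cross-compiler name, the chosen example, the C++ standard flag and the paths below are assumptions and must be adapted to your target, toolchain and build options:

    # Hypothetical command line for an arm64-v8a Linux NEON build; adjust compiler, paths and flags to your setup.
    $ aarch64-linux-gnu-g++ examples/neon_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -L. -larm_compute -larm_compute_core -o neon_convolution
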
You should have the following file organisation:

    .
    ├── arm_compute --> All the arm_compute headers
    │   ├── graph.h --> Includes all the Graph headers at once.
    │   ├── core
    │   │   ├── CL
    │   │   │   ├── CLCoreRuntimeContext.h --> Manages all core OpenCL objects needed for kernel execution (cl_context, cl_kernel, cl_command_queue, etc.)
    │   │   │   ├── CLKernelLibrary.h --> Manages all the OpenCL kernels compilation and caching, provides accessors for the OpenCL Context.
    │   │   │   ├── CLKernels.h --> Includes all the OpenCL kernels at once
    │   │   │   ├── CL specialisation of all the generic objects interfaces (ICLTensor, ICLArray, etc.)
    │   │   │   ├── kernels --> Folder containing all the OpenCL kernels
    │   │   │   │   └── CL*Kernel.h
    │   │   │   └── OpenCL.h --> Wrapper to configure the Khronos OpenCL C++ header
    │   │   ├── CPP
    │   │   │   ├── CPPKernels.h --> Includes all the CPP kernels at once
    │   │   │   └── kernels --> Folder containing all the CPP kernels
    │   │   │       └── CPP*Kernel.h
    │   │   ├── GLES_COMPUTE
    │   │   │   ├── GCCoreRuntimeContext.h --> Manages all core GLES objects needed for kernel execution.
    │   │   │   ├── GCKernelLibrary.h --> Manages all the GLES kernels compilation and caching, provides accessors for the GLES Context.
    │   │   │   ├── GCKernels.h --> Includes all the GLES kernels at once
    │   │   │   ├── GLES specialisation of all the generic objects interfaces (IGCTensor, etc.)
    │   │   │   ├── kernels --> Folder containing all the GLES kernels
    │   │   │   │   └── GC*Kernel.h
    │   │   │   └── OpenGLES.h --> Wrapper to configure the Khronos EGL and OpenGL ES C header
    │   │   ├── NEON
    │   │   │   ├── kernels --> Folder containing all the NEON kernels
    │   │   │   │   ├── assembly --> Headers for assembly optimised NEON kernels.
    │   │   │   │   ├── convolution --> Headers for convolution assembly optimised NEON kernels.
    │   │   │   │   │   ├── common --> Headers for code which is common to several convolution implementations.
    │   │   │   │   │   ├── depthwise --> Headers for the depthwise convolution assembly implementation
    │   │   │   │   │   └── winograd --> Headers for the Winograd convolution assembly implementation
    │   │   │   │   ├── detail --> Common code for several intrinsics implementations.
    │   │   │   │   └── NE*Kernel.h
    │   │   │   ├── wrapper --> NEON wrapper used to simplify code
    │   │   │   │   ├── intrinsics --> NEON intrinsics wrappers
    │   │   │   │   ├── scalar --> Scalar operations
    │   │   │   │   ├── traits.h --> Traits defined on NEON vectors
    │   │   │   │   └── wrapper.h --> Includes all wrapper headers at once
    │   │   │   └── NEKernels.h --> Includes all the NEON kernels at once
    │   │   ├── All common basic types (Types.h, Window, Coordinates, Iterator, etc.)
    │   │   ├── All generic objects interfaces (ITensor, IArray, etc.)
    │   │   └── Objects metadata classes (TensorInfo, MultiImageInfo)
    │   ├── graph
    │   │   ├── algorithms
    │   │   │   └── Generic algorithms used by the graph backend (e.g. order of traversal)
    │   │   ├── backends --> The backend specific code
    │   │   │   ├── CL --> OpenCL specific operations
    │   │   │   ├── GLES --> OpenGLES Compute Shaders specific operations
    │   │   │   └── NEON --> NEON specific operations
    │   │   ├── detail
    │   │   │   └── Collection of internal utilities.
    │   │   ├── frontend
    │   │   │   └── Code related to the stream frontend interface.
    │   │   ├── mutators
    │   │   │   └── Used to modify / optimise the Graph intermediate representation (operator fusion, in-place operations, etc.)
    │   │   ├── nodes
    │   │   │   └── The various nodes supported by the graph API
    │   │   ├── printers
    │   │   │   └── Debug printers
    │   │   └── Graph objects (INode, ITensorAccessor, Graph, etc.)
    │   └── runtime
    │       ├── common
    │       │   └── Common utility code used by all backends
    │       ├── CL
    │       │   ├── CL objects & allocators (CLArray, CLTensor, etc.)
    │       │   ├── functions --> Folder containing all the OpenCL functions
    │       │   │   └── CL*.h
    │       │   ├── CLScheduler.h --> Interface to enqueue OpenCL kernels and get/set the OpenCL CommandQueue and ICLTuner.
    │       │   ├── CLFunctions.h --> Includes all the OpenCL functions at once
    │       │   ├── ICLTuner.h --> Interface used to tune the local work-group size of OpenCL kernels
    │       │   └── tuners
    │       │       └── Local workgroup size tuners for specific architectures / GPUs
    │       ├── CPP
    │       │   ├── CPPKernels.h --> Includes all the CPP functions at once.
    │       │   ├── CPPScheduler.h --> Basic pool of threads to execute CPP/NEON code on several cores in parallel
    │       │   └── functions --> Folder containing all the CPP functions
    │       │       └── CPP*.h
    │       ├── GLES_COMPUTE
    │       │   ├── GLES objects & allocators (GCArray, GCTensor, etc.)
    │       │   ├── functions --> Folder containing all the GLES functions
    │       │   │   └── GC*.h
    │       │   ├── GCScheduler.h --> Interface to enqueue GLES kernels and get/set the GLES CommandQueue.
    │       │   └── GCFunctions.h --> Includes all the GLES functions at once
    │       ├── NEON
    │       │   ├── functions --> Folder containing all the NEON functions
    │       │   │   └── NE*.h
    │       │   └── NEFunctions.h --> Includes all the NEON functions at once
    │       ├── OMP
    │       │   └── OMPScheduler.h --> OpenMP scheduler (Alternative to the CPPScheduler)
    │       ├── Memory & weights manager files (LifetimeManager, PoolManager, etc.)
    │       └── Basic implementations of the generic object interfaces (Array, Tensor, etc.)
    ├── data --> Contains test images and reference data dumps used by validation tests
    ├── docs --> Contains Doxyfile and Doxygen sources used to generate the HTML pages in the documentation folder.
    ├── documentation
    │   ├── index.xhtml
    │   └── ...
    ├── documentation.xhtml --> documentation/index.xhtml
    ├── examples
    │   ├── cl_*.cpp --> OpenCL examples
    │   ├── gc_*.cpp --> GLES compute shaders examples
    │   ├── graph_*.cpp --> Graph examples
    │   ├── neoncl_*.cpp --> NEON / OpenCL interoperability examples
    │   └── neon_*.cpp --> NEON examples
    ├── include
    │   ├── CL
    │   │   └── Khronos OpenCL C headers and C++ wrapper
    │   ├── half --> FP16 library available from http://half.sourceforge.net
    │   ├── libnpy --> Library to load / write npy buffers, available from https://github.com/llohse/libnpy
    │   ├── linux --> Headers only needed for Linux builds
    │   │   └── Khronos EGL and OpenGLES headers
    │   └── stb
    │       └── stb_image.h --> Single header library to load image files, available from https://github.com/nothings/stb
    ├── scripts
    │   ├── caffe_data_extractor.py --> Basic script to export weights from Caffe to npy files
    │   └── tensorflow_data_extractor.py --> Basic script to export weights from TensorFlow to npy files
    ├── src
    │   ├── core
    │   │   ├── ... (Same structure as headers)
    │   │   ├── CL
    │   │   │   └── cl_kernels --> All the OpenCL kernels
    │   │   └── GLES_COMPUTE
    │   │       └── cs_shaders --> All the OpenGL ES Compute Shaders
    │   ├── graph
    │   │   └── ... (Same structure as headers)
    │   └── runtime
    │       └── ... (Same structure as headers)
    ├── support
    │   └── Various headers to work around toolchains / platform issues.
    ├── tests
    │   ├── All test related files shared between validation and benchmark
    │   ├── benchmark --> Sources for benchmarking
    │   │   ├── Benchmark specific files
    │   │   ├── fixtures
    │   │   │   └── Backend agnostic fixtures to initialise and run the functions to test.
    │   │   ├── CL --> OpenCL benchmarking tests
    │   │   ├── GLES_COMPUTE --> GLES benchmarking tests
    │   │   └── NEON --> NEON benchmarking tests
    │   ├── benchmark_examples --> Sources needed to wrap examples to run through our benchmarking framework.
    │   ├── CL --> OpenCL accessors
    │   ├── GLES_COMPUTE --> GLES accessors
    │   ├── NEON --> NEON accessors
    │   ├── datasets
    │   │   └── Datasets for all the validation / benchmark tests, layer configurations for various networks, etc.
    │   ├── framework
    │   │   └── Boilerplate code for both validation and benchmark test suites (Command line parsers, instruments, output loggers, etc.)
    │   ├── instruments --> User defined instruments that can be registered to the framework.
    │   ├── validate_examples --> Sources needed to wrap examples to run through our validation framework.
    │   └── validation --> Sources for validation
    │       ├── Validation specific files
    │       ├── fixtures
    │       │   └── Backend agnostic fixtures to initialise and run the functions to test.
    │       ├── reference
    │       │   └── Reference implementation used to validate the results of the various backends.
    │       ├── CL --> OpenCL validation tests
    │       ├── GLES_COMPUTE --> GLES validation tests
    │       ├── CPP --> C++ reference implementations
    │       └── NEON --> NEON validation tests
    └── utils --> Boilerplate code used by examples
        └── Various utilities to print types, load / store assets, etc.

@section S2_versions_changelog Release versions and changelog

@subsection S2_1_versions Release versions

All releases are numbered vYY.MM where YY are the last two digits of the year, and MM the month number.
If there is more than one release in a month then an extra sequential number is appended at the end:

    v17.03 (First release of March 2017)
    v17.03.1 (Second release of March 2017)
    v17.04 (First release of April 2017)

@note We're aiming at releasing one major public release with new features per quarter. All releases in between will only contain bug fixes.

@subsection S2_2_changelog Changelog

v20.02 Public major release
 - Various bug fixes.
 - Various optimisations.
 - Added new data type QASYMM8_SIGNED support for:
   - @ref CLDepthwiseConvolutionLayer
   - @ref CLDepthwiseConvolutionLayer3x3
   - @ref CLGEMMConvolutionLayer
   - @ref CLGEMMLowpMatrixMultiplyCore
   - @ref CLGEMMLowpMatrixMultiplyReshapedOnlyRHSKernel
   - @ref CLGEMMLowpMatrixMultiplyNativeKernel
   - @ref NEActivationLayer
   - @ref NEComparisonOperationKernel
   - @ref NEConvolutionLayer
   - @ref NEDepthwiseConvolutionLayer
   - @ref NEDepthwiseConvolutionLayer3x3Kernel
   - @ref NEDirectConvolutionLayerOutputStageKernel
   - @ref NEElementwiseComparison
   - @ref NEElementwiseMax
   - @ref NEElementwiseMin
   - @ref NEElementwiseSquaredDiff
   - @ref NEFullyConnectedLayer
   - @ref NEGEMMMatrixVectorMultiplyKernel
   - @ref NEPixelWiseMultiplication
   - @ref NEPoolingLayer
   - @ref NEPReluLayer
 - Added support for QSYMM8_PER_CHANNEL in:
   - @ref NEDepthwiseConvolutionLayer3x3Kernel
 - Added support for split sizes in:
   - @ref CLSplit
   - @ref NESplit
 - New OpenCL kernels / functions:
   - @ref CLFill
   - @ref CLGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPointKernel / @ref CLGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPoint
 - New NEON kernels / functions:
   - @ref NEFill
   - @ref NEGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPointKernel / @ref NEGEMMLowpQuantizeDownInt32ToInt8ScaleByFixedPoint
 - Deprecated functions / interfaces:
   - @ref CLDepthwiseConvolutionLayer3x3
   - @ref NEDepthwiseConvolutionLayerOptimized
   - @ref PoolingLayerInfo constructors without Data Layout.
 - Added support for quantization with multiplier greater than 1 on NEON and CL.
 - Added support for quantized inputs of type QASYMM8_SIGNED and QASYMM8 to @ref CLQuantizationLayer.
 - Added the ability to build bootcode for bare metal.
 - Added support for generating synthetic QASYMM8 graphs.
 - Added support for F16 datatype in VGG16.
 - Removed pre-built binaries for GLES.

v19.11.1 Public maintenance release
 - Fix offset calculation in NEReductionOperationKernel.
 - Fix data layout in NEScaleKernel for NHWC.
 - Retain configuration step data layout to avoid side-effects.
 - Perform sqrt in double domain for L2 pooling.
 - Fix output shape calculation for Reduce Mean.
 - Restrict cases where optimized NEPadLayer runs.

v19.11 Public major release
 - Various bug fixes.
 - Various optimisations.
 - Updated recommended NDK version to r17c.
 - Deprecated OpenCL kernels / functions:
   - CLDepthwiseConvolutionLayerReshapeWeightsGenericKernel
   - CLDepthwiseIm2ColKernel
   - CLDepthwiseSeparableConvolutionLayer
   - CLDepthwiseVectorToTensorKernel
   - CLDirectConvolutionLayerOutputStageKernel
 - Deprecated NEON kernels / functions:
   - NEDepthwiseWeightsReshapeKernel
   - NEDepthwiseIm2ColKernel
   - NEDepthwiseSeparableConvolutionLayer
   - NEDepthwiseVectorToTensorKernel
   - NEDepthwiseConvolutionLayer3x3
 - New OpenCL kernels / functions:
   - @ref CLInstanceNormalizationLayerKernel / @ref CLInstanceNormalizationLayer
   - @ref CLDepthwiseConvolutionLayerNativeKernel to replace the old generic depthwise convolution (see Deprecated OpenCL kernels / functions)
   - @ref CLLogSoftmaxLayer
 - New NEON kernels / functions:
   - @ref NEBoundingBoxTransformKernel / @ref NEBoundingBoxTransform
   - @ref NEComputeAllAnchorsKernel / @ref NEComputeAllAnchors
   - @ref NEDetectionPostProcessLayer
   - @ref NEGenerateProposalsLayer
   - @ref NEInstanceNormalizationLayerKernel / @ref NEInstanceNormalizationLayer
   - @ref NELogSoftmaxLayer
   - @ref NEROIAlignLayerKernel / @ref NEROIAlignLayer
 - Added QASYMM8 support for:
   - @ref CLGenerateProposalsLayer
   - @ref CLROIAlignLayer
   - @ref CPPBoxWithNonMaximaSuppressionLimit
 - Added QASYMM16 support for:
   - @ref CLBoundingBoxTransform
 - Added FP16 support for:
   - @ref CLGEMMMatrixMultiplyReshapedKernel
 - Added new data type QASYMM8_PER_CHANNEL support for:
   - @ref CLDequantizationLayer
   - @ref NEDequantizationLayer
 - Added new data type QSYMM8_PER_CHANNEL support for:
   - @ref CLConvolutionLayer
   - @ref NEConvolutionLayer
   - @ref CLDepthwiseConvolutionLayer
   - @ref NEDepthwiseConvolutionLayer
 - Added FP16 mixed-precision support for:
   - @ref CLGEMMMatrixMultiplyReshapedKernel
   - @ref CLPoolingLayerKernel
 - Added FP32 and FP16 ELU activation for:
   - @ref CLActivationLayer
   - @ref NEActivationLayer
 - Added asymmetric padding support for:
   - @ref CLDirectDeconvolutionLayer
   - @ref CLGEMMDeconvolutionLayer
   - @ref NEDeconvolutionLayer
 - Added SYMMETRIC and REFLECT modes for @ref CLPadLayerKernel / @ref CLPadLayer.
 - Replaced the calls to @ref NECopyKernel and @ref NEMemsetKernel with @ref NEPadLayer in @ref NEGenerateProposalsLayer.
 - Replaced the calls to @ref CLCopyKernel and @ref CLMemsetKernel with @ref CLPadLayer in @ref CLGenerateProposalsLayer.
 - Improved performance for CL Inception V3 - FP16.
 - Improved accuracy for CL Inception V3 - FP16 by enabling FP32 accumulator (mixed-precision).
 - Improved NEON performance by enabling fusing batch normalization with convolution and depth-wise convolution layer.
 - Improved NEON performance for MobileNet-SSD by improving the output detection performance.
 - Optimized @ref CLPadLayer.
 - Optimized CL generic depthwise convolution layer by introducing @ref CLDepthwiseConvolutionLayerNativeKernel.
 - Reduced memory consumption by implementing weights sharing.

v19.08.1 Public maintenance release
 - Fix offset calculation in NEReductionOperationKernel.
 - Fix data layout in NEScaleKernel for NHWC.
 - Retain configuration step data layout to avoid side-effects.
 - Perform sqrt in double domain for L2 pooling.
 - Fix output shape calculation for Reduce Mean.
 - Fix broadcast CLPixelwiseMultiplication with 5D tensors.

v19.08 Public major release
 - Various bug fixes.
 - Various optimisations.
 - Deprecated NEON functions:
   - NEDepthConcatenateLayer
   - NEWidthConcatenateLayer
 - Deprecated OpenCL kernels / functions:
   - CLDepthConcatenateLayer
   - CLGEMMInterleave4x4Kernel / CLGEMMInterleave4x4
   - CLGEMMTranspose1xWKernel / CLGEMMTranspose1xW
   - CLWidthConcatenateLayer
 - New NEON kernels / functions:
   - @ref NEAbsLayer
   - @ref NECast
   - @ref NEElementwisePower
   - @ref NELogLayer
   - @ref NELSTMLayerQuantized
   - @ref NENegLayer
   - @ref NEPReluLayer
   - @ref NESinLayer
   - @ref NEBatchConcatenateLayerKernel
   - @ref NEDepthToSpaceLayerKernel / @ref NEDepthToSpaceLayer
   - @ref NEDepthwiseConvolutionLayerNativeKernel
   - @ref NEGEMMLowpQuantizeDownInt32ToInt16ScaleByFixedPointKernel
   - @ref NEMeanStdDevNormalizationKernel / @ref NEMeanStdDevNormalizationLayer
   - @ref NESpaceToDepthLayerKernel / @ref NESpaceToDepthLayer
 - New OpenCL kernels / functions:
   - @ref CLAbsLayer
   - @ref CLElementwisePower
   - @ref CLLogLayer
   - @ref CLLSTMLayerQuantized
   - @ref CLNegLayer
   - @ref CLPReluLayer
   - @ref CLSinLayer
   - @ref CLBatchConcatenateLayerKernel
   - @ref CLDepthToSpaceLayerKernel / @ref CLDepthToSpaceLayer
   - @ref CLGEMMLowpMatrixMultiplyNativeKernel
   - @ref CLGEMMLowpQuantizeDownInt32ToInt16ScaleByFixedPointKernel
   - @ref CLGEMMMatrixMultiplyNativeKernel
   - @ref CLMeanStdDevNormalizationKernel / @ref CLMeanStdDevNormalizationLayer
   - @ref CLSpaceToDepthLayerKernel / @ref CLSpaceToDepthLayer
 - New examples:
   - neon_opticalflow
   - cl_cache
   - neon_permute
 - Added support for FP16 in @ref NEDeconvolutionLayer
 - Added support for FP16 in @ref CLDeconvolutionLayer
 - Added support for REDUCE_MIN and REDUCE_MAX in @ref ReductionOperation
 - Enable the fusion of batch normalization with convolution and depthwise convolution layer for FP32 in the graph API (OpenCL only)
 - Added support for fusing activation function and broadcast addition with the matrix multiplication for FP32 (OpenCL only)
 - Re-factored the depthwise convolution layer kernel on NEON for generic cases
 - Added an optimized depthwise convolution layer kernel for 5x5 filters (NEON only)
 - Added support to enable OpenCL kernel cache. Added example showing how to load the prebuilt OpenCL kernels from a binary cache file
 - Altered @ref QuantizationInfo interface to support per-channel quantization.
 - The @ref CLDepthwiseConvolutionLayer3x3 will be included by @ref CLDepthwiseConvolutionLayer to accommodate future optimizations.
 - The @ref NEDepthwiseConvolutionLayerOptimized will be included by @ref NEDepthwiseConvolutionLayer to accommodate future optimizations.
 - Removed inner_border_right and inner_border_top parameters from @ref CLDeconvolutionLayer interface
 - Removed inner_border_right and inner_border_top parameters from @ref NEDeconvolutionLayer interface
 - Optimized the NEON assembly kernel for GEMMLowp. The new implementation fuses the output stage and quantization with the matrix multiplication kernel

v19.05 Public major release
 - Various bug fixes.
 - Various optimisations.
 - New NEON kernels / functions:
   - @ref NEBatchToSpaceLayerKernel / @ref NEBatchToSpaceLayer
   - @ref NEComplexPixelWiseMultiplicationKernel / @ref NEComplexPixelWiseMultiplication
   - @ref NECropKernel / @ref NECropResize
   - @ref NEDepthwiseConvolutionAssemblyDispatch
   - @ref NEFFTDigitReverseKernel
   - @ref NEFFTRadixStageKernel
   - @ref NEFFTScaleKernel
   - @ref NEGEMMLowpOffsetContributionOutputStageKernel
   - @ref NEHeightConcatenateLayerKernel
   - @ref NESpaceToBatchLayerKernel / @ref NESpaceToBatchLayer
   - @ref NEFFT1D
   - @ref NEFFT2D
   - @ref NEFFTConvolutionLayer
 - New OpenCL kernels / functions:
   - @ref CLComplexPixelWiseMultiplicationKernel / @ref CLComplexPixelWiseMultiplication
   - @ref CLCropKernel / @ref CLCropResize
   - @ref CLDeconvolutionReshapeOutputKernel
   - @ref CLFFTDigitReverseKernel
   - @ref CLFFTRadixStageKernel
   - @ref CLFFTScaleKernel
   - @ref CLGEMMLowpMatrixMultiplyReshapedOnlyRHSKernel
   - @ref CLGEMMMatrixMultiplyReshapedOnlyRHSKernel
   - @ref CLHeightConcatenateLayerKernel
   - @ref CLDirectDeconvolutionLayer
   - @ref CLFFT1D
   - @ref CLFFT2D
   - @ref CLFFTConvolutionLayer
   - @ref CLGEMMDeconvolutionLayer
 - New OpenGLES kernels / functions:
   - @ref GCConcatenateLayer
 - Deprecated functions / interfaces:
   - GCDepthConcatenateLayer
   - NEWidthConcatenateLayer
   - NEDepthConcatenateLayer
   - CLWidthConcatenateLayer
   - CLDepthConcatenateLayer
   - CLGEMMInterleave4x4
   - CLGEMMTranspose1xW
 - Support different quantization info in CLConcatLayer.
 - Added checks for cases where different input/output quantization info is not supported.
 - Tensors have different quantization information.
 - Add FP16 support checks.
 - Fix output quantization of CLDepthwiseConv3x3 when activation is fused.
 - New graph examples:
   - graph_convolution
   - graph_fully_connected
   - graph_depthwise_convolution
   - Deepspeech v0.4.1
 - Add support for QASYMM8 in NEArithmeticSubtractionKernel.
 - Add support for QASYMM8 in NEPixelWiseMultiplicationKernel.
 - Add support for QASYMM8 in NEDeconvolution.
 - Add support for DequantizationLayer for NEON/CL.
 - Add support for dilation in CLDepthwiseConvolution.
 - Fuse offset contribution with the output stage when we use NEGEMMLowpMatrixMultiplyCore.
 - Optimize CLDeconvolution.
 - Add StackLayer to the graph API.
 - Add support for "reflect" padding mode in NEPad.
 - Winograd 7x7 NHWC on OpenCL.
 - Rework CL ML layers to run exclusively on CL.
 - Support different quantization info in PoolingLayer.
 - Implement and test import memory interfaces.
 - Added new tests and removed old ones.
 - Various clang-tidy fixes.

v19.02 Public major release
 - Various bug fixes.
 - Various optimisations.
 - New NEON kernels / functions:
   - @ref NETileKernel / @ref NETile
   - @ref NEFuseBatchNormalizationKernel / @ref NEFuseBatchNormalization
   - @ref NEElementwiseOperationKernel
   - @ref NEElementwiseMax
   - @ref NEElementwiseMin
   - @ref NEElementwiseSquaredDiff
   - @ref NESelectKernel / @ref NESelect
   - @ref NESplit
   - @ref NESlice
   - @ref NEUnstack
   - @ref NEStridedSliceKernel / @ref NEStridedSlice
   - @ref NEElementwiseUnaryKernel
   - @ref NERsqrtLayer
   - @ref NEExpLayer
   - @ref NEReverseKernel / @ref NEReverse
   - @ref NEArgMinMaxLayer
   - @ref NEStackLayerKernel / @ref NEStackLayer
   - @ref NERangeKernel / @ref NERange
   - @ref NEPadLayer
   - @ref NEMemsetKernel
   - @ref NEGatherKernel / @ref NEGather
   - @ref NEElementwiseComparison
   - @ref NEElementwiseComparisonStatic
   - @ref NEComparisonOperationKernel
   - @ref NEElementwiseDivision
 - New OpenCL kernels / functions:
   - @ref CLSelectKernel / @ref CLSelect
   - @ref CLTileKernel / @ref CLTile
   - @ref CLComparisonKernel / @ref CLComparison
   - @ref CLArgMinMaxLayer
   - @ref CLElementwiseMax
   - @ref CLElementwiseMin
   - @ref CLElementwiseSquaredDiff
   - @ref CLStackLayerKernel / @ref CLStackLayer
   - @ref CLReverse / @ref CLReverseKernel
   - @ref CLRsqrtLayer
   - @ref CLExpLayer
   - @ref CLElementWiseUnaryLayerKernel
   - @ref CLGEMMReshapeLHSMatrixKernel
   - @ref CLGEMMReshapeRHSMatrixKernel
   - @ref CLGEMMMatrixMultiplyReshapedKernel
   - @ref CLRangeKernel / @ref CLRange
   - @ref CLUnstack
   - @ref CLGatherKernel / @ref CLGather
   - @ref CLGEMMLowpMatrixMultiplyReshapedKernel
 - New CPP kernels / functions:
   - @ref CPPDetectionOutputLayer
   - @ref CPPTopKV / @ref CPPTopKVKernel
 - Added new examples:
   - graph_ssd_mobilenet.cpp
   - graph_mobilenet_v2.cpp
   - graph_resnet12.cpp
   - graph_srcnn955.cpp
   - graph_vgg_vdsr.cpp
   - graph_inception_resnet_v1.cpp
 - Add 4D tensors support to:
   - @ref NESoftmaxLayer
 - Fused activation in @ref CLWinogradConvolutionLayer
 - Extended @ref NEPermute to support more cases
 - Added NEON/SVE GEMM Hybrid kernels
 - Added u8 and s8 hybrid assembly kernels
 - Introduced GEMM strategy name in NEGEMMAssemblyWrapper
 - Improved @ref CLTuner
 - Fused the bias addition within @ref CLGEMM
 - Added support for QASYMM8 LOGISTIC activation in @ref NEActivationLayer
 - Added NHWC data layout support to:
   - @ref NEScale for F16
   - @ref CLNormalizationLayer IN_MAP_2D for FP32/FP16
   - @ref NEL2NormalizeLayer for FP32/FP16
   - @ref NENormalizationLayer IN_MAP_2D for FP32/FP16
   - @ref CLROIAlignLayer
   - @ref CLGenerateProposalsLayer
 - Added QASYMM8 support to the following kernels:
   - @ref NEArithmeticAdditionKernel
   - @ref NEScale
 - Added new tests and improved validation and benchmarking suites.
 - Deprecated functions / interfaces:
   - Usage of inner_border_right and inner_border_top has been deprecated in @ref CLDeconvolutionLayer and @ref NEDeconvolutionLayer

v18.11 Public major release
 - Various bug fixes.
 - Various optimisations.
 - New NEON kernels / functions:
   - @ref NEChannelShuffleLayer / @ref NEChannelShuffleLayerKernel
   - @ref NEReduceMean
   - @ref NEReorgLayer / @ref NEReorgLayerKernel
   - @ref NEPriorBoxLayer / @ref NEPriorBoxLayerKernel
   - @ref NEUpsampleLayer / @ref NEUpsampleLayerKernel
   - @ref NEYOLOLayer / @ref NEYOLOLayerKernel
 - New OpenCL kernels / functions:
   - @ref CLBatchToSpaceLayer / @ref CLBatchToSpaceLayerKernel
   - @ref CLBoundingBoxTransform / @ref CLBoundingBoxTransformKernel
   - @ref CLComputeAllAnchorsKernel
   - @ref CLGenerateProposalsLayer
   - @ref CLNormalizePlanarYUVLayer / @ref CLNormalizePlanarYUVLayerKernel
   - @ref CLReorgLayer / @ref CLReorgLayerKernel
   - @ref CLSpaceToBatchLayer / @ref CLSpaceToBatchLayerKernel
   - @ref CLPadLayer
   - @ref CLReduceMean
   - @ref CLPriorBoxLayer / @ref CLPriorBoxLayerKernel
   - @ref CLROIAlignLayer / @ref CLROIAlignLayerKernel
   - @ref CLSlice
   - @ref CLSplit
   - @ref CLStridedSlice / @ref CLStridedSliceKernel
   - @ref CLUpsampleLayer / @ref CLUpsampleLayerKernel
   - @ref CLYOLOLayer / @ref CLYOLOLayerKernel
 - New CPP kernels / functions:
   - @ref CPPBoxWithNonMaximaSuppressionLimit / @ref CPPBoxWithNonMaximaSuppressionLimitKernel
 - Added the validate method in:
   - @ref NEDepthConvertLayer
   - @ref NEFloor / @ref CLFloor
   - @ref NEGEMMMatrixAdditionKernel
   - @ref NEReshapeLayer / @ref CLReshapeLayer
   - @ref CLScale
 - Added new examples:
   - graph_shufflenet.cpp
   - graph_yolov3.cpp
 - Added documentation on how to add a new function or kernel.
 - Improved Doxygen documentation by adding a list of the existing functions.
 - Add 4D tensors support to:
   - CLWidthConcatenateLayer
   - @ref CLFlattenLayer
   - @ref CLSoftmaxLayer
 - Add dot product support for @ref CLDepthwiseConvolutionLayer3x3NHWCKernel non-unit stride
 - Add SVE support
 - Fused batch normalization into convolution layer weights in @ref CLFuseBatchNormalization
 - Fused activation in @ref CLDepthwiseConvolutionLayer3x3NCHWKernel, @ref CLDepthwiseConvolutionLayer3x3NHWCKernel and @ref NEGEMMConvolutionLayer
 - Added NHWC data layout support to:
   - @ref CLChannelShuffleLayer
   - @ref CLDeconvolutionLayer
   - @ref CLL2NormalizeLayer
 - Added QASYMM8 support to the following kernels:
   - @ref CLScaleKernel
   - @ref NEDepthwiseConvolutionLayer3x3Kernel
   - @ref CLPixelWiseMultiplicationKernel
 - Added FP16 support to the following kernels:
   - @ref CLDepthwiseConvolutionLayer3x3NHWCKernel
   - @ref NEDepthwiseConvolutionLayer3x3Kernel
   - @ref CLNormalizePlanarYUVLayerKernel
   - @ref CLWinogradConvolutionLayer (5x5 kernel)
 - More tests added to both validation and benchmarking suites.

v18.08 Public major release
 - Various bug fixes.
 - Various optimisations.
 - Updated recommended NDK version to r17b.
 - Removed support for QS8/QS16 data types.
 - Added support for grouped convolution in @ref CLConvolutionLayer.
 - Added NHWC data layout support to:
   - NEDepthConcatenateLayer / CLDepthConcatenateLayer
   - @ref NEWinogradConvolutionLayer / @ref CLWinogradConvolutionLayer
   - @ref CLDepthwiseConvolutionLayer
   - @ref CLDirectConvolutionLayer
   - @ref CLConvolutionLayer
   - @ref CLScale
   - @ref CLIm2ColKernel
 - New NEON kernels / functions:
   - @ref NERNNLayer
 - New OpenCL kernels / functions:
   - @ref CLArithmeticDivision
 - Introduced prepare() stage support in the graph API for GLES.
 - Added support for memory reuse when trying to allocate smaller CLTensors.
 - Enabled NHWC execution on graph examples.
 - Added JPEG accessor for validation purposes.
 - Added validate methods to some kernels / functions.

v18.05 Public major release
 - Various bug fixes.
 - Various optimisations.
 - Major redesign in the interface for the NEON kernels implemented in assembly.
 - Removed arm_compute::NEGEMMLowpAArch64A53Kernel / arm_compute::NEGEMMLowpAArch64Kernel / arm_compute::NEGEMMLowpAArch64V8P4Kernel / arm_compute::NEGEMMInterleavedBlockedKernel / arm_compute::NEGEMMLowpAssemblyMatrixMultiplyCore / arm_compute::NEHGEMMAArch64FP16Kernel
 - Added NEGEMMAssemblyWrapper and AssemblyKernelGlue which are used to execute assembly kernels in NEON functions.
 - Minor changes to the CPUInfo type to make it compatible with the new assembly GEMM interface.
 - Moved NEON assembly kernels to the folder src/core/NEON/kernels/arm_gemm.
 - Improved Doxygen documentation.
 - Improved memory management for layer transitions.
 - Added support for NHWC data layout in tensors.
 - Added NHWC data layout support to:
   - @ref NEGEMMConvolutionLayer
   - @ref NEDirectConvolutionLayer
   - @ref NEPoolingLayer / @ref CLPoolingLayer
   - @ref NEBatchNormalizationLayer / @ref CLBatchNormalizationLayer
   - @ref NEDepthwiseConvolutionLayer
   - @ref NEScale
   - @ref NEIm2Col
 - Added support for dilated convolutions in @ref NEConvolutionLayer and @ref CLConvolutionLayer.
 - New OpenCL kernels / functions:
   - @ref CLChannelShuffleLayer / @ref CLChannelShuffleLayerKernel
   - @ref CLConvertFullyConnectedWeightsKernel / @ref CLConvertFullyConnectedWeights
   - @ref CLCopy / @ref CLCopyKernel
   - @ref CLLSTMLayer
   - @ref CLRNNLayer
   - CLWidthConcatenateLayer / @ref CLWidthConcatenateLayerKernel
   - @ref CLWinogradFilterTransformKernel / @ref CLWinogradInputTransformKernel / @ref CLWinogradConvolutionLayer
   - @ref CLWinogradInputTransformKernel / @ref CLWinogradInputTransform
 - New NEON kernels / functions:
   - @ref NEConvertFullyConnectedWeightsKernel / @ref NEConvertFullyConnectedWeights.
 - Created the validate method in @ref CLDepthwiseConvolutionLayer.
 - Beta and gamma are no longer mandatory arguments in @ref NEBatchNormalizationLayer and @ref CLBatchNormalizationLayer.
 - Added depth multiplier support in @ref NEDepthwiseConvolutionLayer and @ref CLDepthwiseConvolutionLayer.
 - Added broadcast multiply support in @ref NEPixelWiseMultiplication / @ref NEPixelWiseMultiplicationKernel.
 - Ported the mobilenet example to NHWC data layout.
 - Enabled Winograd method in @ref CLConvolutionLayer.
 - Renamed NEWinogradLayer to @ref NEWinogradConvolutionLayer.
 - Updated @ref NEWinogradConvolutionLayer to use highly optimised assembly kernels in src/core/NEON/kernels/arm_gemm.
 - Added memory manager support in GLES functions.
 - Major refactoring of the graph API.
 - Added GLES backend in the graph API.
 - Added support for the memory manager in the graph API.
 - Enabled Winograd Convolution method in the graph API.
 - Added support for grouped convolutions in the graph API.
 - Replaced NEDeconvolutionLayerUpsampleKernel with @ref NEScaleKernel in @ref NEDeconvolutionLayer.
 - Added fast maths flag in @ref CLConvolutionLayer.
 - Added new tests and benchmarks in validation and benchmark frameworks
 - Merge Activation layer with Convolution Layer (NEON, CL, GLES)
 - Added support for OpenCL 2.0 SVM
 - Added support for importing memory in OpenCL tensors.
 - Added the prepare() method to perform any one-off pre-processing before running the function.
 - Added new examples:
   - graph_inception_v4.cpp
   - graph_resnext50.cpp
 - Added memory measurement instrument for CL.

v18.03 Public maintenance release
 - Various bug fixes.
 - Fixed bug in @ref NEActivationLayer
 - Fix in @ref CLTuner when using batches.
 - Updated recommended NDK version to r16b (And fixed warnings).
 - Fixed bug in validation code.
 - Added Inception v4 graph example.
 - Renamed NEWinogradLayer.cpp to @ref NEWinogradConvolutionLayer

v18.02 Public major release
 - Various NEON / OpenCL / GLES optimisations.
 - Various bug fixes.
 - Changed default number of threads on big.LITTLE systems.
 - Refactored examples and added:
   - graph_mobilenet_qassym8
   - graph_resnet
   - graph_squeezenet_v1_1
 - Renamed @ref CLConvolutionLayer into @ref CLGEMMConvolutionLayer and created a new @ref CLConvolutionLayer to select the fastest convolution method.
 - Renamed @ref NEConvolutionLayer into @ref NEGEMMConvolutionLayer and created a new @ref NEConvolutionLayer to select the fastest convolution method.
 - Added in-place support to:
   - @ref CLActivationLayer
   - @ref CLBatchNormalizationLayer
 - Added QASYMM8 support to:
   - @ref CLActivationLayer
   - @ref CLDepthwiseConvolutionLayer
   - @ref NEDepthwiseConvolutionLayer
   - @ref NESoftmaxLayer
 - Added FP16 support to:
   - @ref CLDepthwiseConvolutionLayer3x3
   - @ref CLDepthwiseConvolutionLayer
 - Added broadcasting support to @ref NEArithmeticAddition / @ref CLArithmeticAddition / @ref CLPixelWiseMultiplication
 - Added fused batched normalization and activation to @ref CLBatchNormalizationLayer and @ref NEBatchNormalizationLayer
 - Added support for non-square pooling to @ref NEPoolingLayer and @ref CLPoolingLayer
 - New OpenCL kernels / functions:
   - CLDirectConvolutionLayerOutputStageKernel
 - New NEON kernels / functions:
   - Added name() method to all kernels.
   - Added support for Winograd 5x5.
   - @ref NEPermuteKernel / @ref NEPermute
   - @ref NEWinogradLayerTransformInputKernel / NEWinogradLayer
   - @ref NEWinogradLayerTransformOutputKernel / NEWinogradLayer
   - @ref NEWinogradLayerTransformWeightsKernel / NEWinogradLayer
   - Renamed NEWinogradLayerKernel into NEWinogradLayerBatchedGEMMKernel
 - New GLES kernels / functions:
   - @ref GCTensorShiftKernel / @ref GCTensorShift

v18.01 Public maintenance release
 - Various bug fixes
 - Added some of the missing validate() methods
 - Added @ref CLDeconvolutionLayerUpsampleKernel / @ref CLDeconvolutionLayer @ref CLDeconvolutionLayerUpsample
 - Added @ref CLPermuteKernel / @ref CLPermute
 - Added method to clean the programs cache in the CL Kernel library.
 - Added @ref GCArithmeticAdditionKernel / @ref GCArithmeticAddition
 - Added @ref GCDepthwiseConvolutionLayer3x3Kernel / @ref GCDepthwiseConvolutionLayer3x3
 - Added @ref GCNormalizePlanarYUVLayerKernel / @ref GCNormalizePlanarYUVLayer
 - Added @ref GCScaleKernel / @ref GCScale
 - Added @ref GCWeightsReshapeKernel / @ref GCConvolutionLayer
 - Added FP16 support to the following GLES compute kernels:
   - @ref GCCol2ImKernel
   - @ref GCGEMMInterleave4x4Kernel
   - @ref GCGEMMTranspose1xWKernel
   - @ref GCIm2ColKernel
 - Refactored NEON Winograd (NEWinogradLayerKernel)
 - Added @ref NEDirectConvolutionLayerOutputStageKernel
 - Added QASYMM8 support to the following NEON kernels:
   - @ref NEDepthwiseConvolutionLayer3x3Kernel
   - @ref NEFillBorderKernel
   - @ref NEPoolingLayerKernel
 - Added new examples:
   - graph_cl_mobilenet_qasymm8.cpp
   - graph_inception_v3.cpp
   - gc_dc.cpp
 - More tests added to both validation and benchmarking suites.

v17.12 Public major release
 - Most machine learning functions on OpenCL support the new data type QASYMM8
 - Introduced logging interface
 - Introduced OpenCL timer
 - Reworked GEMMLowp interface
 - Added new NEON assembly kernels for GEMMLowp, SGEMM and HGEMM
 - Added validation method for most Machine Learning kernels / functions
 - Added new graph examples such as googlenet, mobilenet, squeezenet, vgg16 and vgg19
 - Added sgemm example for OpenCL
 - Added absolute difference example for GLES compute
 - Added new tests and benchmarks in validation and benchmark frameworks
 - Added new kernels / functions for GLES compute

 - New OpenGL ES kernels / functions
   - @ref GCAbsoluteDifferenceKernel / @ref GCAbsoluteDifference
   - @ref GCActivationLayerKernel / @ref GCActivationLayer
   - @ref GCBatchNormalizationLayerKernel / @ref GCBatchNormalizationLayer
   - @ref GCCol2ImKernel
   - @ref GCDepthConcatenateLayerKernel / GCDepthConcatenateLayer
   - @ref GCDirectConvolutionLayerKernel / @ref GCDirectConvolutionLayer
   - @ref GCDropoutLayerKernel / @ref GCDropoutLayer
   - @ref GCFillBorderKernel / @ref GCFillBorder
   - @ref GCGEMMInterleave4x4Kernel / @ref GCGEMMInterleave4x4
   - @ref GCGEMMMatrixAccumulateBiasesKernel / @ref GCGEMMMatrixAdditionKernel / @ref GCGEMMMatrixMultiplyKernel / @ref GCGEMM
   - @ref GCGEMMTranspose1xWKernel / @ref GCGEMMTranspose1xW
   - @ref GCIm2ColKernel
   - @ref GCNormalizationLayerKernel / @ref GCNormalizationLayer
   - @ref GCPixelWiseMultiplicationKernel / @ref GCPixelWiseMultiplication
   - @ref GCPoolingLayerKernel / @ref GCPoolingLayer
   - @ref GCLogits1DMaxKernel / @ref GCLogits1DShiftExpSumKernel / @ref GCLogits1DNormKernel / @ref GCSoftmaxLayer
   - @ref GCTransposeKernel / @ref GCTranspose

 - New NEON kernels / functions
   - arm_compute::NEGEMMLowpAArch64A53Kernel / arm_compute::NEGEMMLowpAArch64Kernel / arm_compute::NEGEMMLowpAArch64V8P4Kernel / arm_compute::NEGEMMInterleavedBlockedKernel / arm_compute::NEGEMMLowpAssemblyMatrixMultiplyCore
   - arm_compute::NEHGEMMAArch64FP16Kernel
   - @ref NEDepthwiseConvolutionLayer3x3Kernel / NEDepthwiseIm2ColKernel / @ref NEGEMMMatrixVectorMultiplyKernel / NEDepthwiseVectorToTensorKernel / @ref NEDepthwiseConvolutionLayer
   - @ref NEGEMMLowpOffsetContributionKernel / @ref NEGEMMLowpMatrixAReductionKernel / @ref NEGEMMLowpMatrixBReductionKernel / @ref NEGEMMLowpMatrixMultiplyCore
   - @ref NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPointKernel / @ref NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPoint
   - @ref NEGEMMLowpQuantizeDownInt32ToUint8ScaleKernel / @ref NEGEMMLowpQuantizeDownInt32ToUint8Scale
   - NEWinogradLayer / NEWinogradLayerKernel

 - New OpenCL kernels / functions
   - @ref CLGEMMLowpOffsetContributionKernel / @ref CLGEMMLowpMatrixAReductionKernel / @ref CLGEMMLowpMatrixBReductionKernel / @ref CLGEMMLowpMatrixMultiplyCore
   - @ref CLGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPointKernel / @ref CLGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPoint
   - @ref CLGEMMLowpQuantizeDownInt32ToUint8ScaleKernel / @ref CLGEMMLowpQuantizeDownInt32ToUint8Scale

 - New graph nodes for NEON and OpenCL
   - graph::BranchLayer
   - graph::DepthConvertLayer
   - graph::DepthwiseConvolutionLayer
   - graph::DequantizationLayer
   - graph::FlattenLayer
   - graph::QuantizationLayer
   - graph::ReshapeLayer

v17.10 Public maintenance release
 - Bug fixes:
   - Check the maximum local workgroup size supported by OpenCL devices
   - Minor documentation updates (Fixed instructions to build the examples)
 - Introduced a graph::GraphContext
 - Added a few new Graph nodes, support for branches and grouping.
 - Automatically enable cl_printf in debug builds
 - Fixed bare metal builds for armv7a
 - Added AlexNet and cartoon effect examples
 - Fixed library builds: libraries are no longer built as supersets of each other. (This means an application using the Runtime part of the library now needs to link against both libarm_compute_core and libarm_compute.)

v17.09 Public major release
 - Experimental Graph support: initial implementation of a simple stream API to easily chain machine learning layers.
 - Memory Manager (@ref BlobLifetimeManager, @ref BlobMemoryPool, @ref ILifetimeManager, @ref IMemoryGroup, @ref IMemoryManager, @ref IMemoryPool, @ref IPoolManager, @ref MemoryManagerOnDemand, @ref PoolManager)
 - New validation and benchmark frameworks (Boost and Google frameworks replaced by homemade framework).
 - Most machine learning functions support both fixed point 8 and 16 bit (QS8, QS16) for both NEON and OpenCL.
 - New NEON kernels / functions:
   - arm_compute::NEGEMMAssemblyBaseKernel arm_compute::NEGEMMAArch64Kernel
   - @ref NEDequantizationLayerKernel / @ref NEDequantizationLayer
   - @ref NEFloorKernel / @ref NEFloor
   - @ref NEL2NormalizeLayerKernel / @ref NEL2NormalizeLayer
   - @ref NEQuantizationLayerKernel @ref NEMinMaxLayerKernel / @ref NEQuantizationLayer
   - @ref NEROIPoolingLayerKernel / @ref NEROIPoolingLayer
   - @ref NEReductionOperationKernel / @ref NEReductionOperation
   - @ref NEReshapeLayerKernel / @ref NEReshapeLayer

 - New OpenCL kernels / functions:
   - @ref CLDepthwiseConvolutionLayer3x3NCHWKernel @ref CLDepthwiseConvolutionLayer3x3NHWCKernel CLDepthwiseIm2ColKernel CLDepthwiseVectorToTensorKernel CLDepthwiseWeightsReshapeKernel / @ref CLDepthwiseConvolutionLayer3x3 @ref CLDepthwiseConvolutionLayer CLDepthwiseSeparableConvolutionLayer
   - @ref CLDequantizationLayerKernel / @ref CLDequantizationLayer
   - @ref CLDirectConvolutionLayerKernel / @ref CLDirectConvolutionLayer
   - @ref CLFlattenLayer
   - @ref CLFloorKernel / @ref CLFloor
   - CLGEMMTranspose1xW
   - @ref CLGEMMMatrixVectorMultiplyKernel
   - @ref CLL2NormalizeLayerKernel / @ref CLL2NormalizeLayer
   - @ref CLQuantizationLayerKernel @ref CLMinMaxLayerKernel / @ref CLQuantizationLayer
   - @ref CLROIPoolingLayerKernel / @ref CLROIPoolingLayer
   - @ref CLReductionOperationKernel / @ref CLReductionOperation
   - @ref CLReshapeLayerKernel / @ref CLReshapeLayer

v17.06 Public major release
 - Various bug fixes
 - Added support for fixed point 8 bit (QS8) to the various NEON machine learning kernels.
 - Added unit tests and benchmarks (AlexNet, LeNet)
 - Added support for sub tensors.
 - Added infrastructure to provide GPU specific optimisation for some OpenCL kernels.
 - Added @ref OMPScheduler (OpenMP) scheduler for NEON
 - Added @ref SingleThreadScheduler scheduler for NEON (For bare metal)
 - Users can specify their own scheduler by implementing the @ref IScheduler interface.
 - New OpenCL kernels / functions:
   - @ref CLBatchNormalizationLayerKernel / @ref CLBatchNormalizationLayer
   - @ref CLDepthConcatenateLayerKernel / CLDepthConcatenateLayer
   - @ref CLHOGOrientationBinningKernel @ref CLHOGBlockNormalizationKernel, @ref CLHOGDetectorKernel / @ref CLHOGDescriptor @ref CLHOGDetector @ref CLHOGGradient @ref CLHOGMultiDetection
   - @ref CLLocallyConnectedMatrixMultiplyKernel / @ref CLLocallyConnectedLayer
   - @ref CLWeightsReshapeKernel / @ref CLConvolutionLayerReshapeWeights
 - New C++ kernels:
   - @ref CPPDetectionWindowNonMaximaSuppressionKernel
 - New NEON kernels / functions:
   - @ref NEBatchNormalizationLayerKernel / @ref NEBatchNormalizationLayer
   - @ref NEDepthConcatenateLayerKernel / NEDepthConcatenateLayer
   - @ref NEDirectConvolutionLayerKernel / @ref NEDirectConvolutionLayer
   - @ref NELocallyConnectedMatrixMultiplyKernel / @ref NELocallyConnectedLayer
   - @ref NEWeightsReshapeKernel / @ref NEConvolutionLayerReshapeWeights

v17.05 Public bug fixes release
 - Various bug fixes
 - Remaining functions ported to use accurate padding.
 - Library does not link against OpenCL anymore (it uses dlopen / dlsym at runtime instead to determine whether or not OpenCL is available).
 - Added "free" method to allocator.
 - Minimum version of g++ required for armv7 Linux changed from 4.8 to 4.9

v17.04 Public bug fixes release

 The following functions have been ported to use the new accurate padding:
 - @ref CLColorConvertKernel
 - @ref CLEdgeNonMaxSuppressionKernel
 - @ref CLEdgeTraceKernel
 - @ref CLGaussianPyramidHorKernel
 - @ref CLGaussianPyramidVertKernel
 - @ref CLGradientKernel
 - @ref NEChannelCombineKernel
 - @ref NEFillArrayKernel
 - @ref NEGaussianPyramidHorKernel
 - @ref NEGaussianPyramidVertKernel
 - NEHarrisScoreFP16Kernel
 - @ref NEHarrisScoreKernel
 - @ref NEHOGDetectorKernel
 - @ref NELogits1DMaxKernel
 - NELogits1DShiftExpSumKernel
 - NELogits1DNormKernel
 - @ref NENonMaximaSuppression3x3FP16Kernel
 - @ref NENonMaximaSuppression3x3Kernel

v17.03.1 First Major public release of the sources
 - Renamed the library to arm_compute
 - New CPP target introduced for C++ kernels shared between NEON and CL functions.
 - New padding calculation interface introduced and ported most kernels / functions to use it.
 - New OpenCL kernels / functions:
    - @ref CLGEMMLowpMatrixMultiplyKernel / CLGEMMLowp
 - New NEON kernels / functions:
    - @ref NENormalizationLayerKernel / @ref NENormalizationLayer
    - @ref NETransposeKernel / @ref NETranspose
    - @ref NELogits1DMaxKernel, NELogits1DShiftExpSumKernel, NELogits1DNormKernel / @ref NESoftmaxLayer
    - @ref NEIm2ColKernel, @ref NECol2ImKernel, NEConvolutionLayerWeightsReshapeKernel / @ref NEConvolutionLayer
    - @ref NEGEMMMatrixAccumulateBiasesKernel / @ref NEFullyConnectedLayer
    - @ref NEGEMMLowpMatrixMultiplyKernel / NEGEMMLowp

v17.03 Sources preview
 - New OpenCL kernels / functions:
    - @ref CLGradientKernel, @ref CLEdgeNonMaxSuppressionKernel, @ref CLEdgeTraceKernel / @ref CLCannyEdge
    - GEMM refactoring + FP16 support: CLGEMMInterleave4x4Kernel, CLGEMMTranspose1xWKernel, @ref CLGEMMMatrixMultiplyKernel, CLGEMMMatrixAdditionKernel / @ref CLGEMM
    - @ref CLGEMMMatrixAccumulateBiasesKernel / @ref CLFullyConnectedLayer
    - @ref CLTransposeKernel / @ref CLTranspose
    - @ref CLLKTrackerInitKernel, @ref CLLKTrackerStage0Kernel, @ref CLLKTrackerStage1Kernel, @ref CLLKTrackerFinalizeKernel / @ref CLOpticalFlow
    - @ref CLNormalizationLayerKernel / @ref CLNormalizationLayer
    - @ref CLLaplacianPyramid, @ref CLLaplacianReconstruct
 - New NEON kernels / functions:
    - @ref NEActivationLayerKernel / @ref NEActivationLayer
    - GEMM refactoring + FP16 support (Requires armv8.2 CPU): @ref NEGEMMInterleave4x4Kernel, @ref NEGEMMTranspose1xWKernel, @ref NEGEMMMatrixMultiplyKernel, @ref NEGEMMMatrixAdditionKernel / @ref NEGEMM
    - @ref NEPoolingLayerKernel / @ref NEPoolingLayer

v17.02.1 Sources preview
 - New OpenCL kernels / functions:
    - @ref CLLogits1DMaxKernel, @ref CLLogits1DShiftExpSumKernel, @ref CLLogits1DNormKernel / @ref CLSoftmaxLayer
    - @ref CLPoolingLayerKernel / @ref CLPoolingLayer
    - @ref CLIm2ColKernel, @ref CLCol2ImKernel, CLConvolutionLayerWeightsReshapeKernel / @ref CLConvolutionLayer
    - @ref CLRemapKernel / @ref CLRemap
    - @ref CLGaussianPyramidHorKernel, @ref CLGaussianPyramidVertKernel / @ref CLGaussianPyramid, @ref CLGaussianPyramidHalf, @ref CLGaussianPyramidOrb
    - @ref CLMinMaxKernel, @ref CLMinMaxLocationKernel / @ref CLMinMaxLocation
    - @ref CLNonLinearFilterKernel / @ref CLNonLinearFilter
 - New NEON FP16 kernels (Requires armv8.2 CPU)
    - @ref NEAccumulateWeightedFP16Kernel
    - @ref NEBox3x3FP16Kernel
    - @ref NENonMaximaSuppression3x3FP16Kernel

v17.02 Sources preview
 - New OpenCL kernels / functions:
    - @ref CLActivationLayerKernel / @ref CLActivationLayer
    - @ref CLChannelCombineKernel / @ref CLChannelCombine
    - @ref CLDerivativeKernel / @ref CLChannelExtract
    - @ref CLFastCornersKernel / @ref CLFastCorners
    - @ref CLMeanStdDevKernel / @ref CLMeanStdDev
 - New NEON kernels / functions:
    - HOG / SVM: @ref NEHOGOrientationBinningKernel, @ref NEHOGBlockNormalizationKernel, @ref NEHOGDetectorKernel, NEHOGNonMaximaSuppressionKernel / @ref NEHOGDescriptor, @ref NEHOGDetector, @ref NEHOGGradient, @ref NEHOGMultiDetection
    - @ref NENonLinearFilterKernel / @ref NENonLinearFilter
 - Introduced a CLScheduler to manage the default context and command queue used by the runtime library and create synchronisation events.
 - Switched all the kernels / functions to use tensors instead of images.
 - Updated documentation to include instructions to build the library from sources.

v16.12 Binary preview release
 - Original release

@section S3_how_to_build How to build the library and the examples

@subsection S3_1_build_options Build options

scons 2.3 or above is required to build the library.
To see the build options available simply run ```scons -h```:

    debug: Debug (yes|no)
        default: False
        actual: False

    asserts: Enable asserts (this flag is forced to 1 for debug=1) (yes|no)
        default: False
        actual: False

    arch: Target Architecture (armv7a|arm64-v8a|arm64-v8.2-a|x86_32|x86_64)
        default: armv7a
        actual: armv7a

    os: Target OS (linux|android|bare_metal)
        default: linux
        actual: linux

    build: Build type (native|cross_compile|embed_only)
        default: cross_compile
        actual: cross_compile

    examples: Build example programs (yes|no)
        default: True
        actual: True

    Werror: Enable/disable the -Werror compilation flag (yes|no)
        default: True
        actual: True

    opencl: Enable OpenCL support (yes|no)
        default: True
        actual: True

    neon: Enable Neon support (yes|no)
        default: False
        actual: False

    gles_compute: Enable OpenGL ES Compute Shader support (yes|no)
        default: False
        actual: False

    embed_kernels: Embed OpenCL kernels and OpenGL ES compute shader in library binary (yes|no)
        default: True
        actual: True

    set_soname: Set the library's soname and shlibversion (requires SCons 2.4 or above) (yes|no)
        default: False
        actual: False

    openmp: Enable OpenMP backend (yes|no)
        default: False
        actual: False

    cppthreads: Enable C++11 threads backend (yes|no)
        default: True
        actual: True

    build_dir: Specify sub-folder for the build ( /path/to/build_dir )
        default: .
        actual: .

    extra_cxx_flags: Extra CXX flags to be appended to the build command
        default:
        actual:

    pmu: Enable PMU counters (yes|no)
        default: False
        actual: False

    mali: Enable Mali hardware counters (yes|no)
        default: False
        actual: False

    validation_tests: Build validation test programs (yes|no)
        default: False
        actual: False

    benchmark_tests: Build benchmark test programs (yes|no)
        default: False
        actual: False

@b debug / @b asserts (see the example invocations below):
 - With debug=1 asserts are enabled, and the library is built with symbols and no optimisations enabled.
 - With debug=0 and asserts=1: optimisations are enabled and symbols are removed, however all the asserts are still present (this is about 20% slower than the release build).
 - With debug=0 and asserts=0: all optimisations are enabled and no validation is performed; if the application misuses the library it is likely to result in a crash. (Only use this mode once you are sure your application works as expected.)
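
A minimal sketch of the corresponding scons invocations (the backend and platform options shown are illustrative; combine them with the options described in the rest of this section):

    #debug=1: debug build, asserts forced on
    scons debug=1 neon=1 opencl=0 os=linux arch=arm64-v8a
    #debug=0 asserts=1: optimised build which keeps the validation checks
    scons debug=0 asserts=1 neon=1 opencl=0 os=linux arch=arm64-v8a
    #debug=0 asserts=0: fully optimised release build
    scons debug=0 asserts=0 neon=1 opencl=0 os=linux arch=arm64-v8a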

@b arch: The x86_32 and x86_64 targets can only be used with neon=0 and opencl=1.

@b os: Choose the operating system you are targeting: Linux, Android or bare metal.
@note bare metal can only be used for NEON (not OpenCL); only static libraries get built and NEON's multi-threading support is disabled.

@b build: you can either build directly on your device (native) or cross compile from your desktop machine (cross_compile). In both cases make sure the compiler is available in your path.

@note If you want to natively compile for 32bit on a 64bit ARM device running a 64bit OS then you will have to use cross-compile too.

There is also an 'embed_only' option which will generate all the .embed files for the OpenCL kernels and / or OpenGLES compute shaders. This might be useful if you are using a different build system to compile the library.
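
For instance, an embed-only build that just generates the .embed files could look like the line below (the option values shown are illustrative):

    scons build=embed_only opencl=1 gles_compute=1 embed_kernels=1 os=linux arch=arm64-v8a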

@b Werror: If you are compiling using the same toolchains as the ones used in this guide then there shouldn't be any warnings and therefore you should be able to keep Werror=1. If the library fails to build with a different compiler version because of warnings interpreted as errors then, provided you are sure the warnings are not important, you might want to try to build with Werror=0 (but please do report the issue either on Github or by an email to developer@arm.com so that it can be addressed).
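
A sketch of such a fallback build (all other options as in the examples later in this guide):

    scons Werror=0 debug=0 neon=1 opencl=0 os=linux arch=arm64-v8a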

@b opencl / @b neon / @b gles_compute: Choose which SIMD technology you want to target. (NEON for ARM Cortex-A CPUs or OpenCL / GLES_COMPUTE for ARM Mali GPUs)

@b embed_kernels: For OpenCL / GLES_COMPUTE only: set embed_kernels=1 if you want the OpenCL / GLES_COMPUTE kernels to be built into the library's binaries instead of being read from separate ".cl" / ".cs" files. If embed_kernels is set to 0 then the application can set the path to the folder containing the OpenCL / GLES_COMPUTE kernel files by calling CLKernelLibrary::init() / GCKernelLibrary::init(). By default the path is set to "./cl_kernels" / "./cs_shaders".

@b set_soname: Do you want to build the versioned version of the library?

If enabled the library will contain a SONAME and SHLIBVERSION and some symlinks will automatically be created between the objects.
Example:
    libarm_compute_core.so -> libarm_compute_core.so.1.0.0
    libarm_compute_core.so.1 -> libarm_compute_core.so.1.0.0
    libarm_compute_core.so.1.0.0

@note This option is disabled by default as it requires SCons version 2.4 or above.

@b extra_cxx_flags: Custom CXX flags which will be appended to the end of the build command.

@b build_dir: Build the library in a subfolder of the "build" folder (allows several configurations to be built in parallel).
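
Both options can be combined with any of the builds shown later in this guide; a purely illustrative invocation:

    scons debug=0 neon=1 opencl=0 os=linux arch=arm64-v8a build_dir=neon_release extra_cxx_flags="-fdiagnostics-color=always"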

@b examples: Whether to build the example programs.

@b validation_tests: Enable the build of the validation suite.

@b benchmark_tests: Enable the build of the benchmark tests.

@b pmu: Enable the PMU cycle counter to measure execution time in benchmark tests. (Your device needs to support it)

@b mali: Enable the collection of Mali hardware counters to measure execution time in benchmark tests. (Your device needs to have a Mali driver that supports it)

@b openmp: Build in the OpenMP scheduler for NEON.

@note Only works when building with g++ not clang++

@b cppthreads: Build in the C++11 scheduler for NEON.

@sa Scheduler::set
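
For reference, a minimal sketch of selecting one of these schedulers at run time (assuming the corresponding backend was enabled at build time):

@code{.cpp}
#include "arm_compute/runtime/Scheduler.h"

using namespace arm_compute;

// Pick the OpenMP backend built with openmp=1
// (use Type::CPP for the C++11 backend, Type::ST for the single-threaded one)
Scheduler::set(Scheduler::Type::OMP);

// Optionally limit the number of threads used by the active scheduler
Scheduler::get().set_num_threads(4);
@endcode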

@subsection S3_2_linux Building for Linux

@subsubsection S3_2_1_library How to build the library ?

For Linux, the library was successfully built and tested using the following Linaro GCC toolchains:

 - gcc-linaro-4.9-2016.02-x86_64_arm-linux-gnueabihf
 - gcc-linaro-4.9-2016.02-x86_64_aarch64-linux-gnu

To cross-compile the library in debug mode, with NEON only support, for Linux 32bit:

    scons Werror=1 -j8 debug=1 neon=1 opencl=0 os=linux arch=armv7a

To cross-compile the library in asserts mode, with OpenCL only support, for Linux 64bit:

    scons Werror=1 -j8 debug=0 asserts=1 neon=0 opencl=1 embed_kernels=1 os=linux arch=arm64-v8a

To cross-compile the library in asserts mode, with GLES_COMPUTE only support, for Linux 64bit:

    scons Werror=1 -j8 debug=0 asserts=1 neon=0 opencl=0 gles_compute=1 embed_kernels=1 os=linux arch=arm64-v8a

You can also compile the library natively on an ARM device by using <b>build=native</b>:

    scons Werror=1 -j8 debug=0 neon=1 opencl=0 os=linux arch=arm64-v8a build=native
    scons Werror=1 -j8 debug=0 neon=1 opencl=0 os=linux arch=armv7a build=native

@note g++ for ARM is mono-arch, therefore if you want to compile for Linux 32bit on a Linux 64bit platform you will have to use a cross compiler.

For example on a 64bit Debian based system you would have to install <b>g++-arm-linux-gnueabihf</b>

    apt-get install g++-arm-linux-gnueabihf

Then run

    scons Werror=1 -j8 debug=0 neon=1 opencl=0 os=linux arch=armv7a build=cross_compile

or simply remove the build parameter as build=cross_compile is the default value:

    scons Werror=1 -j8 debug=0 neon=1 opencl=0 os=linux arch=armv7a

@attention To cross compile with opencl=1 you need to make sure to have a version of libOpenCL matching your target architecture.

@subsubsection S3_2_2_examples How to manually build the examples ?

The examples get automatically built by scons as part of the build process of the library described above. This section just describes how you can build and link your own application against our library.

@note The following command lines assume the arm_compute binaries are present in the current directory or in the system library path. If this is not the case you can specify the location of the pre-built library with the compiler option -L. When building the OpenCL example the commands below assume that the CL headers are located in the include folder where the command is executed.

To cross compile a NEON example for Linux 32bit:

    arm-linux-gnueabihf-g++ examples/neon_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -mfpu=neon -L. -larm_compute -larm_compute_core -o neon_convolution

To cross compile a NEON example for Linux 64bit:

    aarch64-linux-gnu-g++ examples/neon_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -L. -larm_compute -larm_compute_core -o neon_convolution

(notice the only difference with the 32 bit command is that we don't need the -mfpu option and the compiler's name is different)

To cross compile an OpenCL example for Linux 32bit:

    arm-linux-gnueabihf-g++ examples/cl_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -mfpu=neon -L. -larm_compute -larm_compute_core -o cl_convolution -DARM_COMPUTE_CL

To cross compile an OpenCL example for Linux 64bit:

    aarch64-linux-gnu-g++ examples/cl_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -L. -larm_compute -larm_compute_core -o cl_convolution -DARM_COMPUTE_CL

To cross compile a GLES example for Linux 32bit:

    arm-linux-gnueabihf-g++ examples/gc_absdiff.cpp utils/Utils.cpp -I. -Iinclude/ -L. -larm_compute -larm_compute_core -std=c++11 -mfpu=neon -DARM_COMPUTE_GC -Iinclude/linux/ -o gc_absdiff

To cross compile a GLES example for Linux 64bit:

    aarch64-linux-gnu-g++ examples/gc_absdiff.cpp utils/Utils.cpp -I. -Iinclude/ -L. -larm_compute -larm_compute_core -std=c++11 -DARM_COMPUTE_GC -Iinclude/linux/ -o gc_absdiff

(notice the only difference with the 32 bit command is that we don't need the -mfpu option and the compiler's name is different)

To cross compile the examples with the Graph API, such as graph_lenet.cpp, you need to link the examples against arm_compute_graph.so too.

@note The compute library must currently be built with both neon and opencl enabled - neon=1 and opencl=1

i.e. to cross compile the "graph_lenet" example for Linux 32bit:

    arm-linux-gnueabihf-g++ examples/graph_lenet.cpp utils/Utils.cpp utils/GraphUtils.cpp utils/CommonGraphOptions.cpp -I. -Iinclude -std=c++11 -mfpu=neon -L. -larm_compute_graph -larm_compute -larm_compute_core -Wl,--allow-shlib-undefined -o graph_lenet

i.e. to cross compile the "graph_lenet" example for Linux 64bit:

    aarch64-linux-gnu-g++ examples/graph_lenet.cpp utils/Utils.cpp utils/GraphUtils.cpp utils/CommonGraphOptions.cpp -I. -Iinclude -std=c++11 -L. -larm_compute_graph -larm_compute -larm_compute_core -Wl,--allow-shlib-undefined -o graph_lenet

(notice the only difference with the 32 bit command is that we don't need the -mfpu option and the compiler's name is different)

@note If compiling using static libraries, this order must be followed when linking: arm_compute_graph_static, arm_compute, arm_compute_core
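
For illustration only, a static link of the same graph example could look like the command below (the -static suffixed archives are assumed to match the names produced by the scons build, and arm_compute_graph needs the --whole-archive flags as noted in the Android section):

    arm-linux-gnueabihf-g++ examples/graph_lenet.cpp utils/Utils.cpp utils/GraphUtils.cpp utils/CommonGraphOptions.cpp -I. -Iinclude -std=c++11 -mfpu=neon -L. -Wl,--whole-archive -larm_compute_graph-static -Wl,--no-whole-archive -larm_compute-static -larm_compute_core-static -lpthread -o graph_lenet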

To compile natively (i.e. directly on an ARM device) for NEON for Linux 32bit:

    g++ examples/neon_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -mfpu=neon -larm_compute -larm_compute_core -o neon_convolution

To compile natively (i.e. directly on an ARM device) for NEON for Linux 64bit:

    g++ examples/neon_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -larm_compute -larm_compute_core -o neon_convolution

(notice the only difference with the 32 bit command is that we don't need the -mfpu option)

To compile natively (i.e. directly on an ARM device) for OpenCL for Linux 32bit or Linux 64bit:

    g++ examples/cl_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -larm_compute -larm_compute_core -o cl_convolution -DARM_COMPUTE_CL

To compile natively (i.e. directly on an ARM device) for GLES for Linux 32bit or Linux 64bit:

    g++ examples/gc_absdiff.cpp utils/Utils.cpp -I. -Iinclude/ -L. -larm_compute -larm_compute_core -std=c++11 -DARM_COMPUTE_GC -Iinclude/linux/ -o gc_absdiff

To natively compile the examples with the Graph API, such as graph_lenet.cpp, you need to link the examples against arm_compute_graph.so too.
@note The compute library must currently be built with both neon and opencl enabled - neon=1 and opencl=1

i.e. to natively compile the "graph_lenet" example for Linux 32bit:

    g++ examples/graph_lenet.cpp utils/Utils.cpp utils/GraphUtils.cpp utils/CommonGraphOptions.cpp -I. -Iinclude -std=c++11 -mfpu=neon -L. -larm_compute_graph -larm_compute -larm_compute_core -Wl,--allow-shlib-undefined -o graph_lenet

i.e. to natively compile the "graph_lenet" example for Linux 64bit:

    g++ examples/graph_lenet.cpp utils/Utils.cpp utils/GraphUtils.cpp utils/CommonGraphOptions.cpp -I. -Iinclude -std=c++11 -L. -larm_compute_graph -larm_compute -larm_compute_core -Wl,--allow-shlib-undefined -o graph_lenet

(notice the only difference with the 32 bit command is that we don't need the -mfpu option)

@note If compiling using static libraries, this order must be followed when linking: arm_compute_graph_static, arm_compute, arm_compute_core

@note These two commands assume libarm_compute.so is available in your library path; if not, add the path to it using -L
@note You might need to export the path to the OpenCL library as well in your LD_LIBRARY_PATH if Compute Library was built with OpenCL enabled.

To run the built executable simply run:

    LD_LIBRARY_PATH=build ./neon_convolution

or

    LD_LIBRARY_PATH=build ./cl_convolution

@note Examples accept different types of arguments; to find out what they are, run the example with \a --help as an argument. If no arguments are specified then random values will be used to execute the graph.

For example:

    LD_LIBRARY_PATH=. ./graph_lenet --help

Below is a list of the common parameters among the graph examples:
@snippet utils/CommonGraphOptions.h Common graph examples parameters
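
For instance, a typical invocation combining a few of these common options might look like the line below (the exact option set depends on the library version, so check --help first):

    LD_LIBRARY_PATH=. ./graph_lenet --target=NEON --threads=4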

@subsection S3_3_android Building for Android

For Android, the library was successfully built and tested using Google's standalone toolchains:
 - clang++ from NDK r17c for armv7a
 - clang++ from NDK r17c for arm64-v8a
 - clang++ from NDK r18-beta1 for arm64-v8.2-a with FP16 support

Here is a guide to <a href="https://developer.android.com/ndk/guides/standalone_toolchain.html">create your Android standalone toolchains from the NDK</a>

- Download the NDK r17c from here: https://developer.android.com/ndk/downloads/index.html
- Make sure you have Python 2.7 installed on your machine.
- Generate the 32 and/or 64 toolchains by running the following commands:

    $NDK/build/tools/make_standalone_toolchain.py --arch arm64 --install-dir $MY_TOOLCHAINS/aarch64-linux-android-ndk-r17c --stl libc++ --api 21
    $NDK/build/tools/make_standalone_toolchain.py --arch arm --install-dir $MY_TOOLCHAINS/arm-linux-android-ndk-r17c --stl libc++ --api 21

@attention We used to use gnustl but as of NDK r17 it is deprecated so we switched to libc++

@note Make sure to add the toolchains to your PATH:

    export PATH=$PATH:$MY_TOOLCHAINS/aarch64-linux-android-ndk-r17c/bin:$MY_TOOLCHAINS/arm-linux-android-ndk-r17c/bin

@subsubsection S3_3_1_library How to build the library ?

To cross-compile the library in debug mode, with NEON only support, for Android 32bit:

    CXX=clang++ CC=clang scons Werror=1 -j8 debug=1 neon=1 opencl=0 os=android arch=armv7a

To cross-compile the library in asserts mode, with OpenCL only support, for Android 64bit:

    CXX=clang++ CC=clang scons Werror=1 -j8 debug=0 asserts=1 neon=0 opencl=1 embed_kernels=1 os=android arch=arm64-v8a

To cross-compile the library in asserts mode, with GLES_COMPUTE only support, for Android 64bit:

    CXX=clang++ CC=clang scons Werror=1 -j8 debug=0 asserts=1 neon=0 opencl=0 gles_compute=1 embed_kernels=1 os=android arch=arm64-v8a

@subsubsection S3_3_2_examples How to manually build the examples ?

The examples get automatically built by scons as part of the build process of the library described above. This section just describes how you can build and link your own application against our library.

@note The following command lines assume the arm_compute binaries are present in the current directory or in the system library path. If this is not the case you can specify the location of the pre-built library with the compiler option -L. When building the OpenCL example the commands below assume that the CL headers are located in the include folder where the command is executed.

Once you've got your Android standalone toolchain built and added to your path you can do the following:

To cross compile a NEON example:

    #32 bit:
    arm-linux-androideabi-clang++ examples/neon_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -larm_compute-static -larm_compute_core-static -L. -o neon_convolution_arm -static-libstdc++ -pie
    #64 bit:
    aarch64-linux-android-clang++ examples/neon_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -larm_compute-static -larm_compute_core-static -L. -o neon_convolution_aarch64 -static-libstdc++ -pie

To cross compile an OpenCL example:

    #32 bit:
    arm-linux-androideabi-clang++ examples/cl_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -larm_compute-static -larm_compute_core-static -L. -o cl_convolution_arm -static-libstdc++ -pie -DARM_COMPUTE_CL
    #64 bit:
    aarch64-linux-android-clang++ examples/cl_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -larm_compute-static -larm_compute_core-static -L. -o cl_convolution_aarch64 -static-libstdc++ -pie -DARM_COMPUTE_CL

To cross compile a GLES example:

    #32 bit:
    arm-linux-androideabi-clang++ examples/gc_absdiff.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -larm_compute-static -larm_compute_core-static -L. -o gc_absdiff_arm -static-libstdc++ -pie -DARM_COMPUTE_GC
    #64 bit:
    aarch64-linux-android-clang++ examples/gc_absdiff.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -larm_compute-static -larm_compute_core-static -L. -o gc_absdiff_aarch64 -static-libstdc++ -pie -DARM_COMPUTE_GC

To cross compile the examples with the Graph API, such as graph_lenet.cpp, you also need to link against the arm_compute_graph library.
(notice the compute library has to be built with both neon and opencl enabled - neon=1 and opencl=1)

    #32 bit:
    arm-linux-androideabi-clang++ examples/graph_lenet.cpp utils/Utils.cpp utils/GraphUtils.cpp utils/CommonGraphOptions.cpp -I. -Iinclude -std=c++11 -Wl,--whole-archive -larm_compute_graph-static -Wl,--no-whole-archive -larm_compute-static -larm_compute_core-static -L. -o graph_lenet_arm -static-libstdc++ -pie -DARM_COMPUTE_CL
    #64 bit:
    aarch64-linux-android-clang++ examples/graph_lenet.cpp utils/Utils.cpp utils/GraphUtils.cpp utils/CommonGraphOptions.cpp -I. -Iinclude -std=c++11 -Wl,--whole-archive -larm_compute_graph-static -Wl,--no-whole-archive -larm_compute-static -larm_compute_core-static -L. -o graph_lenet_aarch64 -static-libstdc++ -pie -DARM_COMPUTE_CL

@note Due to some issues in older versions of the Mali OpenCL DDK (<= r13p0), we recommend linking arm_compute statically on Android.
@note When linked statically the arm_compute_graph library currently needs the --whole-archive linker flag in order to work properly

Then all you need to do is upload the executables and the shared libraries to the device using ADB:

    adb push neon_convolution_arm /data/local/tmp/
    adb push cl_convolution_arm /data/local/tmp/
    adb push gc_absdiff_arm /data/local/tmp/
    adb shell chmod 777 -R /data/local/tmp/

And finally to run the example:

    adb shell /data/local/tmp/neon_convolution_arm
    adb shell /data/local/tmp/cl_convolution_arm
    adb shell /data/local/tmp/gc_absdiff_arm

For 64bit:

    adb push neon_convolution_aarch64 /data/local/tmp/
    adb push cl_convolution_aarch64 /data/local/tmp/
    adb push gc_absdiff_aarch64 /data/local/tmp/
    adb shell chmod 777 -R /data/local/tmp/

And finally to run the example:

    adb shell /data/local/tmp/neon_convolution_aarch64
    adb shell /data/local/tmp/cl_convolution_aarch64
    adb shell /data/local/tmp/gc_absdiff_aarch64

@note Examples accept different types of arguments; to find out what they are, run the example with \a --help as an argument. If no arguments are specified then random values will be used to execute the graph.

For example:
    adb shell /data/local/tmp/graph_lenet --help

In this case the first argument of LeNet (like all the graph examples) is the target (i.e 0 to run on NEON, 1 to run on OpenCL if available, 2 to run on OpenCL using the CLTuner), the second argument is the path to the folder containing the npy files for the weights and finally the third argument is the number of batches to run.

@subsection S3_4_bare_metal Building for bare metal

For bare metal, the library was successfully built using Linaro's latest (gcc-linaro-6.3.1-2017.05) bare metal toolchains:
 - arm-eabi for armv7a
 - aarch64-elf for arm64-v8a

Download the Linaro toolchain for <a href="https://releases.linaro.org/components/toolchain/binaries/6.3-2017.05/arm-eabi/">armv7a</a> and <a href="https://releases.linaro.org/components/toolchain/binaries/6.3-2017.05/aarch64-elf/">arm64-v8a</a>.

@note Make sure to add the toolchains to your PATH: export PATH=$PATH:$MY_TOOLCHAINS/gcc-linaro-6.3.1-2017.05-x86_64_aarch64-elf/bin:$MY_TOOLCHAINS/gcc-linaro-6.3.1-2017.05-x86_64_arm-eabi/bin

@subsubsection S3_4_1_library How to build the library ?

To cross-compile the library with NEON support for bare metal arm64-v8a:

    scons Werror=1 -j8 debug=0 neon=1 opencl=0 os=bare_metal arch=arm64-v8a build=cross_compile cppthreads=0 openmp=0 standalone=1

@subsubsection S3_4_2_examples How to manually build the examples ?

Examples are disabled when building for bare metal. If you want to build the examples you need to provide a custom bootcode depending on the target architecture and link against the compute library. More information about bare metal bootcode can be found <a href="http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dai0527a/index.html">here</a>.

@subsection S3_5_windows_host Building on a Windows host system

Using `scons` directly from the Windows command line is known to cause
problems. The reason seems to be that if `scons` is set up for cross-compilation
it gets confused about Windows style paths (using backslashes). Thus it is
recommended to follow one of the options outlined below.

@subsubsection S3_5_1_ubuntu_on_windows Bash on Ubuntu on Windows

The best and easiest option is to use
<a href="https://msdn.microsoft.com/en-gb/commandline/wsl/about">Ubuntu on Windows</a>.
This feature is still marked as *beta* and thus might not be available.
However, if it is, building the library is as simple as opening a *Bash on
Ubuntu on Windows* shell and following the general guidelines given above.

@subsubsection S3_5_2_cygwin Cygwin

If the Windows subsystem for Linux is not available, <a href="https://www.cygwin.com/">Cygwin</a>
can be used to install and run `scons`; the minimum Cygwin version must be 3.0.7 or later. In addition
to the default packages installed by Cygwin, `scons` has to be selected in the installer. (`git` might
also be useful but is not strictly required if you already have got the source
code of the library.) Linaro provides pre-built versions of
<a href="http://releases.linaro.org/components/toolchain/binaries/">GCC cross-compilers</a>
that can be used from the Cygwin terminal. When building for Android the
compiler is included in the Android standalone toolchain. After everything has
been set up in the Cygwin terminal the general guide on building the library
can be followed.

@subsection S3_6_cl_stub_library The OpenCL stub library

In the opencl-1.2-stubs folder you will find the sources to build a stub OpenCL library which can then be used to link your application or arm_compute against.

If you prefer, you can retrieve the OpenCL library from your device and link against that one instead, but often this library will have dependencies on a range of system libraries, forcing you to link your application against those too even though it is not using them.

@warning The OpenCL library provided here is a stub and *not* a real implementation. You can use it to resolve OpenCL's symbols in arm_compute while building the example but you must make sure the real libOpenCL.so is in your PATH when running the example or it will not work.

To cross-compile the stub OpenCL library simply run:

    <target-prefix>-gcc -o libOpenCL.so -Iinclude opencl-1.2-stubs/opencl_stubs.c -fPIC -shared

For example:

    #Linux 32bit
    arm-linux-gnueabihf-gcc -o libOpenCL.so -Iinclude opencl-1.2-stubs/opencl_stubs.c -fPIC -shared
    #Linux 64bit
    aarch64-linux-gnu-gcc -o libOpenCL.so -Iinclude opencl-1.2-stubs/opencl_stubs.c -fPIC -shared
    #Android 32bit
    arm-linux-androideabi-clang -o libOpenCL.so -Iinclude opencl-1.2-stubs/opencl_stubs.c -fPIC -shared
    #Android 64bit
    aarch64-linux-android-clang -o libOpenCL.so -Iinclude opencl-1.2-stubs/opencl_stubs.c -fPIC -shared

@subsection S3_7_gles_stub_library The Linux OpenGLES and EGL stub libraries

In the opengles-3.1-stubs folder you will find the sources to build stub EGL and OpenGLES libraries which can then be used to link your Linux application or arm_compute against.

@note The stub libraries are only needed on Linux. For Android, the NDK toolchains already provide the meta-EGL and meta-GLES libraries.

To cross-compile the stub OpenGLES and EGL libraries simply run:

    <target-prefix>-gcc -o libEGL.so -Iinclude/linux opengles-3.1-stubs/EGL.c -fPIC -shared
    <target-prefix>-gcc -o libGLESv2.so -Iinclude/linux opengles-3.1-stubs/GLESv2.c -fPIC -shared

    #Linux 32bit
    arm-linux-gnueabihf-gcc -o libEGL.so -Iinclude/linux opengles-3.1-stubs/EGL.c -fPIC -shared
    arm-linux-gnueabihf-gcc -o libGLESv2.so -Iinclude/linux opengles-3.1-stubs/GLESv2.c -fPIC -shared

    #Linux 64bit
    aarch64-linux-gnu-gcc -o libEGL.so -Iinclude/linux opengles-3.1-stubs/EGL.c -fPIC -shared
    aarch64-linux-gnu-gcc -o libGLESv2.so -Iinclude/linux opengles-3.1-stubs/GLESv2.c -fPIC -shared

@subsection S3_8_cl_requirements OpenCL DDK Requirements

@subsubsection S3_8_1_cl_hard_requirements Hard Requirements

Compute Library requires OpenCL 1.1 and above with support for non-uniform workgroup sizes, which is officially supported in the Mali OpenCL DDK r8p0 and above as an extension (the respective extension flag is \a -cl-arm-non-uniform-work-group-size).

Enabling 16-bit floating point calculations requires the \a cl_khr_fp16 extension to be supported. All Mali GPUs with compute capabilities have native support for half precision floating points.

Use of the @ref CLMeanStdDev function requires 64-bit atomics support, thus the \a cl_khr_int64_base_atomics extension should be supported in order to use it.
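
As an illustrative (not library-specific) way of checking whether a device reports a given extension, using the OpenCL C++ wrapper headers shipped in the include folder:

@code{.cpp}
#include <CL/cl2.hpp>
#include <string>

bool device_reports_extension(const cl::Device &device, const std::string &extension)
{
    // CL_DEVICE_EXTENSIONS returns a space-separated list of extension names
    const std::string extensions = device.getInfo<CL_DEVICE_EXTENSIONS>();
    return extensions.find(extension) != std::string::npos;
}

// Usage sketch:
// bool fp16_ok = device_reports_extension(cl::Device::getDefault(), "cl_khr_fp16");
@endcode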

@subsubsection S3_8_2_cl_performance_requirements Performance improvements

Integer dot product built-in function extensions (and therefore optimized kernels) are available with Mali OpenCL DDK r22p0 and above for the following GPUs: G71, G76. The relevant extensions are \a cl_arm_integer_dot_product_int8, \a cl_arm_integer_dot_product_accumulate_int8 and \a cl_arm_integer_dot_product_accumulate_int16.

OpenCL kernel level debugging can be simplified with the use of printf; this requires the \a cl_arm_printf extension to be supported.

SVM allocations are supported for all the underlying allocations in Compute Library. OpenCL 2.0 and above is required to enable this.

@subsection S3_9_cl_tuner OpenCL Tuner

The OpenCL tuner, a.k.a. CLTuner, is a module of Arm Compute Library that can improve the performance of the OpenCL kernels by tuning the Local-Workgroup-Size (LWS).
The optimal LWS for each unique OpenCL kernel configuration is stored in a table. This table can be either imported or exported from/to a file.
The OpenCL tuner runs the same OpenCL kernel for a range of local workgroup sizes and keeps the local workgroup size of the fastest run to use in subsequent calls to the kernel. It supports three modes of tuning with different trade-offs between the time taken to tune and the kernel execution time achieved using the best LWS found. In the Exhaustive mode, it searches all the supported values of LWS. This mode takes the longest time to tune and is the most likely to find the optimal LWS. Normal mode searches a subset of LWS values to yield a good approximation of the optimal LWS. It takes less time to tune than Exhaustive mode. Rapid mode takes the shortest time to tune and finds an LWS value that is at least as good or better than the default LWS value. The mode affects only the search for the optimal LWS and has no effect when the LWS value is imported from a file.
In order for the performance numbers to be meaningful you must disable the GPU power management and set it to a fixed frequency for the entire duration of the tuning phase.
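
The tuning mode is selected on the CLTuner object itself; a minimal sketch, assuming the CLTunerMode enum and CLTuner::set_tuner_mode() available in recent releases:

@code{.cpp}
CLTuner tuner;

// Trade tuning time for LWS quality: EXHAUSTIVE, NORMAL or RAPID
tuner.set_tuner_mode(CLTunerMode::RAPID);

CLScheduler::get().default_init(&tuner);
@endcode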

If you wish to know more about LWS and its important role in improving the GPU cache utilization, we suggest having a look at the presentation "Even Faster CNNs: Exploring the New Class of Winograd Algorithms" available at the following link:

https://www.embedded-vision.com/platinum-members/arm/embedded-vision-training/videos/pages/may-2018-embedded-vision-summit-iodice

Tuning a network from scratch can be long and can considerably affect the execution time of the first run of your network. It is recommended for this reason to store the CLTuner's results in a file to amortize this time when you either re-use the same network or the functions with the same configurations. The tuning is performed only once for each OpenCL kernel.

CLTuner looks for the optimal LWS for each unique OpenCL kernel configuration. Since a function (i.e. Convolution Layer, Pooling Layer, Fully Connected Layer ...) can be called multiple times but with different parameters, we associate an "id" (called "config_id") to each kernel to distinguish the unique configurations.

    #Example: 2 unique Matrix Multiply configurations
@code{.cpp}
    TensorShape a0 = TensorShape(32,32);
    TensorShape b0 = TensorShape(32,32);
    TensorShape c0 = TensorShape(32,32);
    TensorShape a1 = TensorShape(64,64);
    TensorShape b1 = TensorShape(64,64);
    TensorShape c1 = TensorShape(64,64);

    CLTensor a0_tensor;
    CLTensor b0_tensor;
    CLTensor c0_tensor;
    CLTensor a1_tensor;
    CLTensor b1_tensor;
    CLTensor c1_tensor;

    a0_tensor.allocator()->init(TensorInfo(a0, 1, DataType::F32));
    b0_tensor.allocator()->init(TensorInfo(b0, 1, DataType::F32));
    c0_tensor.allocator()->init(TensorInfo(c0, 1, DataType::F32));
    a1_tensor.allocator()->init(TensorInfo(a1, 1, DataType::F32));
    b1_tensor.allocator()->init(TensorInfo(b1, 1, DataType::F32));
    c1_tensor.allocator()->init(TensorInfo(c1, 1, DataType::F32));

    CLGEMM gemm0;
    CLGEMM gemm1;

    // Configuration 0: 32x32 shapes, so its kernels get their own config_id and tuning entry
    gemm0.configure(&a0_tensor, &b0_tensor, nullptr, &c0_tensor, 1.0f, 0.0f);

    // Configuration 1: 64x64 shapes, hence a different config_id and a separate tuning entry
    gemm1.configure(&a1_tensor, &b1_tensor, nullptr, &c1_tensor, 1.0f, 0.0f);
@endcode

@subsubsection S3_9_1_cl_tuner_how_to How to use it

All the graph examples in ACL's "examples" folder and the arm_compute_benchmark accept an argument to enable the OpenCL tuner and an argument to export/import the LWS values to/from a file.

    #Enable CL tuner
    ./graph_mobilenet --enable-tuner --target=CL
    ./arm_compute_benchmark --enable-tuner

    #Export/Import to/from a file
    ./graph_mobilenet --enable-tuner --target=CL --tuner-file=acl_tuner.csv
    ./arm_compute_benchmark --enable-tuner --tuner-file=acl_tuner.csv

If you are importing the CLTuner's results from a file, the new tuned LWS values will be appended to it.

Whether you are benchmarking the graph examples or the test cases in the arm_compute_benchmark, remember to:

    -# Disable the power management
    -# Keep the GPU frequency constant
    -# Run the network multiple times (e.g. 10).

If you are not using the graph API or the benchmark infrastructure you will need to manually pass a CLTuner object to CLScheduler before configuring any function.

@code{.cpp}
CLTuner tuner;

// Setup Scheduler
CLScheduler::get().default_init(&tuner);
@endcode

After the first run, the CLTuner's results can be exported to a file using the method "save_to_file()".
- tuner.save_to_file("results.csv");

This file can also be imported using the method "load_from_file("results.csv")".
- tuner.load_from_file("results.csv");
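
Putting the pieces together, a minimal tuning workflow could look like the sketch below (the file name and the points at which the table is loaded / saved are illustrative):

@code{.cpp}
CLTuner tuner;

// Re-use the results of a previous tuning session if the file exists
// tuner.load_from_file("acl_tuner.csv");

CLScheduler::get().default_init(&tuner);

// ... configure and run the functions to be tuned ...

// Persist the (possibly updated) LWS table for the next run
tuner.save_to_file("acl_tuner.csv");
@endcode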
*/
} // namespace arm_compute