arm_compute v19.11
diff --git a/docs/00_introduction.dox b/docs/00_introduction.dox
index ca9e7e3..301e975 100644
--- a/docs/00_introduction.dox
+++ b/docs/00_introduction.dox
@@ -1,5 +1,5 @@
///
-/// Copyright (c) 2017-2018 ARM Limited.
+/// Copyright (c) 2017-2019 ARM Limited.
///
/// SPDX-License-Identifier: MIT
///
@@ -51,8 +51,8 @@
These binaries have been built using the following toolchains:
- Linux armv7a: gcc-linaro-4.9-2016.02-x86_64_arm-linux-gnueabihf
- Linux arm64-v8a: gcc-linaro-4.9-2016.02-x86_64_aarch64-linux-gnu
- - Android armv7a: clang++ / libc++ NDK r17b
- - Android am64-v8a: clang++ / libc++ NDK r17b
+ - Android armv7a: clang++ / libc++ NDK r17c
+ - Android arm64-v8a: clang++ / libc++ NDK r17c
@warning Make sure to use a compatible toolchain to build your application or you will get some std::bad_alloc errors at runtime.
@@ -236,6 +236,72 @@
@subsection S2_2_changelog Changelog
+v19.11 Public major release
+ - Various bug fixes.
+ - Various optimisations.
+ - Updated recommended NDK version to r17c.
+ - Deprecated OpenCL kernels / functions:
+ - CLDepthwiseConvolutionLayerReshapeWeightsGenericKernel
+ - CLDepthwiseIm2ColKernel
+ - CLDepthwiseSeparableConvolutionLayer
+ - CLDepthwiseVectorToTensorKernel
+ - CLDirectConvolutionLayerOutputStageKernel
+ - Deprecated NEON kernels / functions:
+ - NEDepthwiseWeightsReshapeKernel
+ - NEDepthwiseIm2ColKernel
+ - NEDepthwiseSeparableConvolutionLayer
+ - NEDepthwiseVectorToTensorKernel
+ - NEDepthwiseConvolutionLayer3x3
+ - New OpenCL kernels / functions:
+ - @ref CLInstanceNormalizationLayerKernel / @ref CLInstanceNormalizationLayer
+ - @ref CLDepthwiseConvolutionLayerNativeKernel to replace the old generic depthwise convolution (see Deprecated
+ OpenCL kernels / functions)
+ - @ref CLLogSoftmaxLayer
+ - New NEON kernels / functions:
+ - @ref NEBoundingBoxTransformKernel / @ref NEBoundingBoxTransform
+ - @ref NEComputeAllAnchorsKernel / @ref NEComputeAllAnchors
+ - @ref NEDetectionPostProcessLayer
+ - @ref NEGenerateProposalsLayer
+ - @ref NEInstanceNormalizationLayerKernel / @ref NEInstanceNormalizationLayer
+ - @ref NELogSoftmaxLayer
+ - @ref NEROIAlignLayerKernel / @ref NEROIAlignLayer
+ - Added QASYMM8 support for:
+ - @ref CLGenerateProposalsLayer
+ - @ref CLROIAlignLayer
+ - @ref CPPBoxWithNonMaximaSuppressionLimit
+ - Added QASYMM16 support for:
+ - @ref CLBoundingBoxTransform
+ - Added FP16 support for:
+ - @ref CLGEMMMatrixMultiplyReshapedKernel
+ - Added new data type QASYMM8_PER_CHANNEL support for:
+ - @ref CLDequantizationLayer
+ - @ref NEDequantizationLayer
+ - Added new data type QSYMM8_PER_CHANNEL support for:
+ - @ref CLConvolutionLayer
+ - @ref NEConvolutionLayer
+ - @ref CLDepthwiseConvolutionLayer
+ - @ref NEDepthwiseConvolutionLayer
+ - Added FP16 mixed-precision support for:
+ - @ref CLGEMMMatrixMultiplyReshapedKernel
+ - @ref CLPoolingLayerKernel
+ - Added FP32 and FP16 ELU activation for:
+ - @ref CLActivationLayer
+ - @ref NEActivationLayer
+ - Added asymmetric padding support for:
+ - @ref CLDirectDeconvolutionLayer
+ - @ref CLGEMMDeconvolutionLayer
+ - @ref NEDeconvolutionLayer
+ - Added SYMMETRIC and REFLECT modes for @ref CLPadLayerKernel / @ref CLPadLayer.
+ - Replaced the calls to @ref NECopyKernel and @ref NEMemsetKernel with @ref NEPadLayer in @ref NEGenerateProposalsLayer.
+ - Replaced the calls to @ref CLCopyKernel and @ref CLMemsetKernel with @ref CLPadLayer in @ref CLGenerateProposalsLayer.
+ - Improved performance for CL Inception V3 - FP16.
+ - Improved accuracy for CL Inception V3 - FP16 by enabling FP32 accumulator (mixed-precision).
+ - Improved NEON performance by enabling the fusion of batch normalization with convolution and depth-wise convolution layers.
+ - Improved NEON performance for MobileNet-SSD by optimizing the output detection stage.
+ - Optimized @ref CLPadLayer.
+ - Optimized CL generic depthwise convolution layer by introducing @ref CLDepthwiseConvolutionLayerNativeKernel.
+ - Reduced memory consumption by implementing weights sharing.
+
v19.08 Public major release
- Various bug fixes.
- Various optimisations.
@@ -290,7 +356,8 @@
- Added an optimized depthwise convolution layer kernel for 5x5 filters (NEON only)
- Added support to enable OpenCL kernel cache. Added example showing how to load the prebuilt OpenCL kernels from a binary cache file
- Altered @ref QuantizationInfo interface to support per-channel quantization.
- - The @ref NEDepthwiseConvolutionLayer3x3 will be replaced by @ref NEDepthwiseConvolutionLayerOptimized to accommodate for future optimizations.
+ - The @ref CLDepthwiseConvolutionLayer3x3 will be included in @ref CLDepthwiseConvolutionLayer to accommodate future optimizations.
+ - The @ref NEDepthwiseConvolutionLayerOptimized will be included in @ref NEDepthwiseConvolutionLayer to accommodate future optimizations.
- Removed inner_border_right and inner_border_top parameters from @ref CLDeconvolutionLayer interface
- Removed inner_border_right and inner_border_top parameters from @ref NEDeconvolutionLayer interface
- Optimized the NEON assembly kernel for GEMMLowp. The new implementation fuses the output stage and quantization with the matrix multiplication kernel
@@ -624,7 +691,7 @@
- Added fused batched normalization and activation to @ref CLBatchNormalizationLayer and @ref NEBatchNormalizationLayer
- Added support for non-square pooling to @ref NEPoolingLayer and @ref CLPoolingLayer
- New OpenCL kernels / functions:
- - @ref CLDirectConvolutionLayerOutputStageKernel
+ - CLDirectConvolutionLayerOutputStageKernel
- New NEON kernels / functions
- Added name() method to all kernels.
- Added support for Winograd 5x5.
@@ -699,7 +766,7 @@
- New NEON kernels / functions
- arm_compute::NEGEMMLowpAArch64A53Kernel / arm_compute::NEGEMMLowpAArch64Kernel / arm_compute::NEGEMMLowpAArch64V8P4Kernel / arm_compute::NEGEMMInterleavedBlockedKernel / arm_compute::NEGEMMLowpAssemblyMatrixMultiplyCore
- arm_compute::NEHGEMMAArch64FP16Kernel
- - @ref NEDepthwiseConvolutionLayer3x3Kernel / @ref NEDepthwiseIm2ColKernel / @ref NEGEMMMatrixVectorMultiplyKernel / @ref NEDepthwiseVectorToTensorKernel / @ref NEDepthwiseConvolutionLayer
+ - @ref NEDepthwiseConvolutionLayer3x3Kernel / NEDepthwiseIm2ColKernel / @ref NEGEMMMatrixVectorMultiplyKernel / NEDepthwiseVectorToTensorKernel / @ref NEDepthwiseConvolutionLayer
- @ref NEGEMMLowpOffsetContributionKernel / @ref NEGEMMLowpMatrixAReductionKernel / @ref NEGEMMLowpMatrixBReductionKernel / @ref NEGEMMLowpMatrixMultiplyCore
- @ref NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPointKernel / @ref NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPoint
- @ref NEGEMMLowpQuantizeDownInt32ToUint8ScaleKernel / @ref NEGEMMLowpQuantizeDownInt32ToUint8Scale
@@ -746,7 +813,7 @@
- @ref NEReshapeLayerKernel / @ref NEReshapeLayer
- New OpenCL kernels / functions:
- - @ref CLDepthwiseConvolutionLayer3x3NCHWKernel @ref CLDepthwiseConvolutionLayer3x3NHWCKernel @ref CLDepthwiseIm2ColKernel @ref CLDepthwiseVectorToTensorKernel CLDepthwiseWeightsReshapeKernel / @ref CLDepthwiseConvolutionLayer3x3 @ref CLDepthwiseConvolutionLayer @ref CLDepthwiseSeparableConvolutionLayer
+ - @ref CLDepthwiseConvolutionLayer3x3NCHWKernel @ref CLDepthwiseConvolutionLayer3x3NHWCKernel CLDepthwiseIm2ColKernel CLDepthwiseVectorToTensorKernel CLDepthwiseWeightsReshapeKernel / @ref CLDepthwiseConvolutionLayer3x3 @ref CLDepthwiseConvolutionLayer CLDepthwiseSeparableConvolutionLayer
- @ref CLDequantizationLayerKernel / @ref CLDequantizationLayer
- @ref CLDirectConvolutionLayerKernel / @ref CLDirectConvolutionLayer
- @ref CLFlattenLayer
@@ -829,7 +896,7 @@
v17.03 Sources preview
- New OpenCL kernels / functions:
- @ref CLGradientKernel, @ref CLEdgeNonMaxSuppressionKernel, @ref CLEdgeTraceKernel / @ref CLCannyEdge
- - GEMM refactoring + FP16 support: CLGEMMInterleave4x4Kernel, CLGEMMTranspose1xWKernel, @ref CLGEMMMatrixMultiplyKernel, @ref CLGEMMMatrixAdditionKernel / @ref CLGEMM
+ - GEMM refactoring + FP16 support: CLGEMMInterleave4x4Kernel, CLGEMMTranspose1xWKernel, @ref CLGEMMMatrixMultiplyKernel, CLGEMMMatrixAdditionKernel / @ref CLGEMM
- @ref CLGEMMMatrixAccumulateBiasesKernel / @ref CLFullyConnectedLayer
- @ref CLTransposeKernel / @ref CLTranspose
- @ref CLLKTrackerInitKernel, @ref CLLKTrackerStage0Kernel, @ref CLLKTrackerStage1Kernel, @ref CLLKTrackerFinalizeKernel / @ref CLOpticalFlow
diff --git a/docs/01_library.dox b/docs/01_library.dox
index 85af8a0..28ad5f9 100644
--- a/docs/01_library.dox
+++ b/docs/01_library.dox
@@ -1,5 +1,5 @@
///
-/// Copyright (c) 2017-2018 ARM Limited.
+/// Copyright (c) 2017-2019 ARM Limited.
///
/// SPDX-License-Identifier: MIT
///
@@ -28,7 +28,7 @@
@tableofcontents
-@section S4_1 Core vs Runtime libraries
+@section S4_1_1 Core vs Runtime libraries
The Core library is a low level collection of algorithms implementations, it is designed to be embedded in existing projects and applications:
@@ -43,6 +43,12 @@
For maximum performance, it is expected that the users would re-implement an equivalent to the runtime library which suits better their needs (With a more clever multi-threading strategy, load-balancing between NEON and OpenCL, etc.)
+@section S4_1_2 Thread-safety
+
+Although the library supports multi-threading during workload dispatch, thus parallelizing the execution of the workload across multiple threads, the current runtime module implementation is not thread-safe in the sense of executing different functions from separate threads.
+This is due to the fact that the provided scheduling mechanism wasn't designed with thread-safety in mind.
+As with the rest of the runtime library, a custom scheduling mechanism can be re-implemented to account for thread-safety if needed and be injected as the library's default scheduler.
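+
+A minimal sketch of injecting such a scheduler, assuming a hypothetical user-defined ThreadSafeScheduler class that implements @ref IScheduler (it is not part of the library):
+@code{.cpp}
+// ThreadSafeScheduler is an illustrative user-defined IScheduler implementation
+auto scheduler = std::make_shared<ThreadSafeScheduler>();
+Scheduler::set(scheduler); // Inject it as the library's default scheduler
+@endcode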
+
@section S4_2_windows_kernels_mt_functions Windows, kernels, multi-threading and functions
@subsection S4_2_1_windows Windows
@@ -349,10 +355,6 @@
Requesting backing memory for a specific group can be done using @ref IMemoryGroup::acquire and releasing the memory back using @ref IMemoryGroup::release.
-@note Two types of memory groups are currently implemented:
-- @ref MemoryGroup that manages @ref Tensor objects
-- @ref CLMemoryGroup that manages @ref CLTensor objects.
-
@subsubsection S4_7_1_2_memory_pool MemoryPool
@ref IMemoryPool defines a pool of memory that can be used to provide backing memory to a memory group.
@@ -421,6 +423,7 @@
memory_group.release(); // Release memory so that it can be reused
@endcode
@note Execution of a pipeline can be done in a multi-threading environment as memory acquisition/release are thread safe.
+@note If you are handling sensitive data and need to zero out the memory buffers before freeing them, make sure to also zero out the intermediate buffers. You can access the buffers through the memory group's mappings.
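+
+A minimal sketch of zeroing a tensor's backing memory before it is freed, assuming the buffer is CPU-accessible (CL buffers would need to be mapped first):
+@code{.cpp}
+// Zero the backing buffer of a tensor before releasing it (requires <cstring>)
+std::memset(tensor.buffer(), 0, tensor.info()->total_size());
+@endcode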
@subsection S4_7_3_memory_manager_function_support Function support
@@ -487,5 +490,41 @@
But, when the @ref CLTuner is disabled ( Target = 1 for the graph examples), the @ref graph::Graph will try to reload the file containing the tuning parameters, then for each executed kernel the Compute Library will use the fine tuned LWS if it was present in the file or use a default LWS value if it's not.
+@section S4_10_weights_manager Weights Manager
+
+@ref IWeightsManager is a weights managing interface that can be used to reduce the memory requirements of a given pipeline by reusing transformed weights across multiple function executions.
+@ref IWeightsManager is responsible for managing weight tensors alongside their transformations.
+@ref ITransformWeights provides an interface for running the desired transform function. This interface is used by the weights manager.
+
+@subsection S4_10_1_working_with_weights_manager Working with the Weights Manager
+Following is a simple example that uses the weights manager:
+
+Initially, a weights manager must be set up:
+@code{.cpp}
+auto wm = std::make_shared<IWeightsManager>(); // Create a weights manager
+@endcode
+
+Once done, weights can be managed, configured and run:
+@code{.cpp}
+wm->manage(weights); // Manage the weights
+wm->acquire(weights, &_reshape_weights_managed_function); // Acquire the address of the transformed weights based on the transform function
+wm->run(weights, &_reshape_weights_managed_function); // Run the transpose function
+@endcode
+
+@section S5_0_experimental Experimental Features
+
+@subsection S5_1_run_time_context Run-time Context
+
+Some of the Compute Library components are modelled as singletons, which limits support for some use-cases and prevents a more client-controlled API.
+To address this, we are introducing an aggregate service interface, @ref IRuntimeContext, which encapsulates the services that the singletons were providing and gives client code better control over them.
+The run-time context encapsulates mechanisms such as scheduling, memory management and kernel caching.
+Consequently, this allows better control of these services across pipelines when Compute Library is integrated into higher-level frameworks.
+
+This feature introduces some changes to our API.
+All the kernels/functions will now accept a runtime context object, which allows them to use the aforementioned services.
+Moreover, all the objects will need to be created through the context in order to have access to these services.
+Note that these changes apply only to the runtime components, as the core ones do not need access to such services. The only exception is the kernel caching mechanism, which will need to be passed down at the kernel level.
+
+Finally, we will progressively adapt our code-base to use the new mechanism, but will continue supporting the legacy mechanism to allow a smooth transition. Changes will apply to all three of our backends: NEON, OpenCL and OpenGL ES.
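+
+As an illustration, creating a function through a run-time context might look as follows (a sketch of this experimental API, assuming @ref NEActivationLayer accepts a context pointer):
+@code{.cpp}
+RuntimeContext ctx;          // Aggregates scheduling, memory management and other services
+NEActivationLayer act(&ctx); // The function will use the services provided by the context
+@endcode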
*/
} // namespace arm_compute
diff --git a/docs/02_tests.dox b/docs/02_tests.dox
index 2363ae0..81edcf2 100644
--- a/docs/02_tests.dox
+++ b/docs/02_tests.dox
@@ -1,5 +1,5 @@
///
-/// Copyright (c) 2017-2018 ARM Limited.
+/// Copyright (c) 2017-2019 ARM Limited.
///
/// SPDX-License-Identifier: MIT
///
diff --git a/docs/03_scripts.dox b/docs/03_scripts.dox
index 6b71590..e6c19e5 100644
--- a/docs/03_scripts.dox
+++ b/docs/03_scripts.dox
@@ -1,5 +1,5 @@
///
-/// Copyright (c) 2017-2018 ARM Limited.
+/// Copyright (c) 2017-2019 ARM Limited.
///
/// SPDX-License-Identifier: MIT
///
@@ -170,4 +170,4 @@
This can be useful when parallelizing the validation process is needed.
*/
-}
\ No newline at end of file
+}
diff --git a/docs/04_adding_operator.dox b/docs/04_adding_operator.dox
index 5f03785..66ae9a6 100644
--- a/docs/04_adding_operator.dox
+++ b/docs/04_adding_operator.dox
@@ -1,5 +1,5 @@
///
-/// Copyright (c) 2018 ARM Limited.
+/// Copyright (c) 2018-2019 ARM Limited.
///
/// SPDX-License-Identifier: MIT
///
diff --git a/docs/05_contribution_guidelines.dox b/docs/05_contribution_guidelines.dox
new file mode 100644
index 0000000..7c919eb
--- /dev/null
+++ b/docs/05_contribution_guidelines.dox
@@ -0,0 +1,406 @@
+///
+/// Copyright (c) 2019 ARM Limited.
+///
+/// SPDX-License-Identifier: MIT
+///
+/// Permission is hereby granted, free of charge, to any person obtaining a copy
+/// of this software and associated documentation files (the "Software"), to
+/// deal in the Software without restriction, including without limitation the
+/// rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+/// sell copies of the Software, and to permit persons to whom the Software is
+/// furnished to do so, subject to the following conditions:
+///
+/// The above copyright notice and this permission notice shall be included in all
+/// copies or substantial portions of the Software.
+///
+/// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+/// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+/// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+/// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+/// SOFTWARE.
+///
+namespace arm_compute
+{
+/**
+@page contribution_guidelines Contribution guidelines
+
+@tableofcontents
+
+If you want to contribute to Arm Compute Library, be sure to review the following guidelines.
+
+The development is structured in the following way:
+- Release repository: https://github.com/arm-software/ComputeLibrary
+- Development repository: https://review.mlplatform.org/#/admin/projects/ml/ComputeLibrary
+- Please report issues here: https://github.com/ARM-software/ComputeLibrary/issues
+
+@section S5_1_coding_standards Coding standards and guidelines
+
+Best practices (as suggested by clang-tidy):
+
+- No uninitialised values
+
+Helps to prevent undefined behaviour and allows variables to be declared const if they are not changed after initialisation. See http://clang.llvm.org/extra/clang-tidy/checks/cppcoreguidelines-pro-type-member-init.html
+
+@code{.cpp}
+const float32x4_t foo = vdupq_n_f32(0.f);
+const float32x4_t bar = foo;
+
+const int32x4x2_t i_foo = {{
+    vcvtq_s32_f32(foo),
+    vcvtq_s32_f32(foo)
+}};
+const int32x4x2_t i_bar = i_foo;
+@endcode
+
+- No C-style casts (in C++ source code)
+
+Only use static_cast, dynamic_cast, and (if required) reinterpret_cast and const_cast. See http://en.cppreference.com/w/cpp/language/explicit_cast for more information on when to use which type of cast. C-style casts do not differentiate between the different cast types and thus make it easy to violate type safety. Also, due to the prefix notation it is less clear which part of an expression is going to be cast. See http://clang.llvm.org/extra/clang-tidy/checks/cppcoreguidelines-pro-type-cstyle-cast.html
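+
+For example:
+
+@code{.cpp}
+float f = 3.1f;
+int   a = (int)f;              // BAD: C-style cast
+int   b = static_cast<int>(f); // GOOD: the intent is explicit and searchable
+@endcode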
+
+- No implicit casts to bool
+
+Helps to increase readability and might help to catch bugs during refactoring. See http://clang.llvm.org/extra/clang-tidy/checks/readability-implicit-bool-cast.html
+
+@code{.cpp}
+extern int *ptr;
+if(ptr){} // Bad
+if(ptr != nullptr) {} // Good
+
+extern int foo;
+if(foo) {} // Bad
+if(foo != 0) {} // Good
+@endcode
+
+- Use nullptr instead of NULL or 0
+
+The nullptr literal is type-checked and is therefore safer to use. See http://clang.llvm.org/extra/clang-tidy/checks/modernize-use-nullptr.html
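+
+For example:
+
+@code{.cpp}
+int *bad  = NULL;    // BAD
+int *good = nullptr; // GOOD: type-checked null pointer literal
+@endcode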
+
+- No need to explicitly initialise std::string with an empty string
+
+The default constructor of std::string creates an empty string. In general it is therefore not necessary to specify it explicitly. See http://clang.llvm.org/extra/clang-tidy/checks/readability-redundant-string-init.html
+
+@code{.cpp}
+// Instead of
+std::string foo("");
+std::string bar = "";
+
+// The following has the same effect
+std::string foo;
+std::string bar;
+@endcode
+
+- Braces for all control blocks and loops (which have a body)
+
+To increase readability and protect against refactoring errors, the body of control blocks and loops must be wrapped in braces. See http://clang.llvm.org/extra/clang-tidy/checks/readability-braces-around-statements.html
+
+For now, loops with an empty body do not have to add empty braces. This exception might be revoked in the future. In any case, situations in which this exception applies should be rare.
+
+@code{.cpp}
+Iterator it;
+while(it.next()); // No need for braces here
+
+// Make more use of it
+@endcode
+
+- Only one declaration per line
+
+Increases readability and thus prevents errors.
+
+@code{.cpp}
+int a, b; // BAD
+int c, *d; // EVEN WORSE
+
+int e = 0; // GOOD
+int *p = nullptr; // GOOD
+@endcode
+
+- Pass primitive types (and those that are cheap to copy or move) by value
+
+For primitive types it is more efficient to pass them by value instead of by const reference because:
+
+ - the data type might be smaller than the "reference type"
+ - pass by value avoids aliasing and thus allows for better optimisations
+ - pass by value is likely to avoid one level of indirection (references are often implemented as automatically dereferenced pointers)
+
+This advice also applies to non-primitive types that have cheap copy or move operations and the function needs a local copy of the argument anyway.
+
+More information:
+
+ - http://stackoverflow.com/a/14013189
+ - http://stackoverflow.com/a/270435
+ - http://web.archive.org/web/20140113221447/http://cpp-next.com/archive/2009/08/want-speed-pass-by-value/
+
+@code{.cpp}
+void foo(int i, long l, float32x4_t f); // Pass-by-value for builtin types
+void bar(const float32x4x4_t &f); // As this is a struct pass-by-const-reference is probably better
+void foobar(const MyLargeCustomTypeClass &m); // Definitely better as const-reference except if a copy has to be made anyway.
+@endcode
+
+- Don't use unions
+
+Unions cannot be used to convert values between different types because (in C++) it is undefined behaviour to read from a member other than the last one that has been assigned to. This limits the use of unions to a few corner cases and therefore the general advice is not to use unions. See http://releases.llvm.org/3.8.0/tools/clang/tools/extra/docs/clang-tidy/checks/cppcoreguidelines-pro-type-union-access.html
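+
+For example, the following type-punning pattern is undefined behaviour in C++:
+
+@code{.cpp}
+union Pun
+{
+    float    f;
+    uint32_t u;
+};
+
+Pun p;
+p.f = 1.0f;
+uint32_t bits = p.u; // BAD: reads a member other than the last one assigned
+@endcode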
+
+- Use pre-increment/pre-decrement whenever possible
+
+In contrast to the pre-increment, the post-increment has to make a copy of the incremented object. This might not be a problem for primitive types like int, but for class-like objects that overload the operators, such as iterators, it can have a huge impact on performance. See http://stackoverflow.com/a/9205011
+
+To be consistent across the different cases the general advice is to use the pre-increment operator unless post-increment is explicitly required. The same rules apply for the decrement operator.
+
+@code{.cpp}
+for(size_t i = 0; i < 9; i++); // BAD
+for(size_t i = 0; i < 9; ++i); // GOOD
+@endcode
+
+- Don't use uint in C/C++
+
+The C and C++ standards don't define a uint type. Though some compilers seem to support it by default, it would require including the header sys/types.h. Instead we use the slightly more verbose unsigned int type.
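+
+For example:
+
+@code{.cpp}
+uint         a = 0; // BAD: not a standard C++ type
+unsigned int b = 0; // GOOD
+@endcode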
+
+- Don't use unsigned int in function's signature
+
+Unsigned integers are good for representing bitfields and modular arithmetic. The fact that unsigned arithmetic doesn't model the behavior of a simple integer, but is instead defined by the standard to model modular arithmetic (wrapping around on overflow/underflow), means that a significant class of bugs cannot be diagnosed by the compiler. Mixing signedness of integer types is responsible for an equally large class of problems.
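+
+For example, unsigned arithmetic can silently wrap around, hiding bugs that the compiler cannot diagnose:
+
+@code{.cpp}
+void process(unsigned int size)
+{
+    for(unsigned int i = 0; i < size - 1; ++i) // BAD: when size == 0, size - 1 wraps around to a huge value
+    {
+        // ...
+    }
+}
+@endcode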
+
+- No "Yoda-style" comparisons
+
+As compilers are now able to warn about accidental assignments when it is likely that the intention was to compare values, it is no longer required to place literals on the left-hand side of the comparison operator. Sticking to the natural order increases readability and thus prevents logical errors (which cannot be spotted by the compiler). In the rare case that the desired result is to assign a value and check it, the expression has to be surrounded by parentheses.
+
+@code{.cpp}
+if(nullptr == ptr || false == cond) // BAD
+{
+ //...
+}
+
+if(ptr == nullptr || cond == false) // GOOD
+{
+ //...
+}
+
+if(ptr = nullptr || cond = false) // Most likely a mistake. Will cause a compiler warning
+{
+ //...
+}
+
+if((ptr = nullptr) || (cond = false)) // Trust me, I know what I'm doing. No warning.
+{
+ //...
+}
+@endcode
+
+@subsection S5_1_1_rules Rules
+
+ - Use spaces for indentation and alignment. No tabs! Indentation should be done with 4 spaces.
+ - Unix line returns in all the files.
+ - Pointers and reference symbols attached to the variable name, not the type (i.e. char \&foo, and not char& foo).
+ - No trailing spaces or tabs at the end of lines.
+ - No spaces or tabs on empty lines.
+ - Put { and } on a new line and increase the indentation level for code inside the scope (except for namespaces).
+ - Single space before and after comparison operators ==, <, >, !=.
+ - No space around parentheses.
+ - No space before, one space after ; (unless it is at the end of a line).
+
+@code{.cpp}
+for(int i = 0; i < width * height; ++i)
+{
+ void *d = foo(ptr, i, &addr);
+ static_cast<uint8_t *>(data)[i] = static_cast<uint8_t *>(d)[0];
+}
+@endcode
+
+ - Put a comment after \#else, \#endif, and namespace closing brace indicating the related name
+
+@code{.cpp}
+namespace mali
+{
+#ifdef MALI_DEBUG
+ ...
+#else // MALI_DEBUG
+ ...
+#endif // MALI_DEBUG
+} // namespace mali
+@endcode
+
+- CamelCase for class names only and lower case words separated with _ (snake_case) for all the functions / methods / variables / arguments / attributes.
+
+@code{.cpp}
+class ClassName
+{
+ public:
+ void my_function();
+ int my_attribute() const; // Accessor = attribute name minus '_', const if it's a simple type
+ private:
+ int _my_attribute; // '_' in front of name
+};
+@endcode
+
+- Use quotes instead of angular brackets to include local headers. Use angular brackets for system headers.
+- Also include the module header first, then local headers, and lastly system headers. All groups should be separated by a blank line and sorted lexicographically within each group.
+- Where applicable the C++ version of system headers has to be included, e.g. cstddef instead of stddef.h.
+- See http://llvm.org/docs/CodingStandards.html#include-style
+
+@code{.cpp}
+#include "MyClass.h"
+
+#include "arm_cv/core/Helpers.h"
+#include "arm_cv/core/Types.h"
+
+#include <cstddef>
+#include <numeric>
+@endcode
+
+- Only use "auto" when the type can be explicitly deduced from the assignment.
+
+@code{.cpp}
+auto a = static_cast<float*>(bar); // OK: there is an explicit cast
+auto b = std::make_unique<Image>(foo); // OK: we can see it's going to be an std::unique_ptr<Image>
+auto c = img.ptr(); // NO: Can't tell what the type is without knowing the API.
+auto d = vdup_n_u8(0); // NO: It's not obvious what type this function returns.
+@endcode
+
+- OpenCL:
+ - Use __ in front of the memory type qualifiers and kernel: __kernel, __constant, __private, __global, __local.
+ - Indicate how the global workgroup size / offset / local workgroup size are being calculated.
+
+ - Doxygen:
+
+ - No '*' in front of argument names
+ - [in], [out] or [in,out] *in front* of arguments
+ - Skip a line between the description and params and between params and @return (If there is a return)
+ - Align parameter names and descriptions (using spaces), with a single space between the widest column and the next one.
+ - Use an upper case at the beginning of the description
+
+@snippet arm_compute/runtime/NEON/functions/NEActivationLayer.h NEActivationLayer snippet
+
+@subsection S5_1_2_how_to_check_the_rules How to check the rules
+
+astyle (http://astyle.sourceforge.net/) and clang-format (https://clang.llvm.org/docs/ClangFormat.html) can check and help you apply some of these rules.
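+
+For example, clang-format can be run in place on a single file (the path below is illustrative):
+
+    clang-format -i src/core/NEON/kernels/NEFloorKernel.cpp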
+
+@subsection S5_1_3_library_size_guidelines Library size: best practices and guidelines
+
+@subsubsection S5_1_3_1_template_suggestions Template suggestions
+
+When writing a new patch we should also keep in mind the effect it will have on the final library size. We can try some of the following things:
+
+ - Place non-dependent template code in a different non-templated class/method
+
+@code{.cpp}
+template<typename T>
+class Foo
+{
+public:
+ enum { v1, v2 };
+ // ...
+};
+@endcode
+
+ can be converted to:
+
+@code{.cpp}
+struct Foo_base
+{
+ enum { v1, v2 };
+ // ...
+};
+
+template<typename T>
+class Foo : public Foo_base
+{
+public:
+ // ...
+};
+@endcode
+
+ - In some cases it's preferable to use runtime switches instead of template parameters
+
+ - Sometimes we can rewrite the code without templates and without any (significant) performance loss. Let's say that we've written a function where the templated argument is only used for casting:
+
+@code{.cpp}
+template <typename T>
+void NETemplatedKernel::run(const Window &window)
+{
+...
+ *(reinterpret_cast<T *>(out.ptr())) = *(reinterpret_cast<const T *>(in.ptr()));
+...
+}
+@endcode
+
+The above snippet can be transformed to:
+
+@code{.cpp}
+void NENonTemplatedKernel::run(const Window &window)
+{
+...
+std::memcpy(out.ptr(), in.ptr(), element_size);
+...
+}
+@endcode
+
+@subsection S5_1_4_secure_coding_practices Secure coding practices
+
+@subsubsection S5_1_4_1_general_coding_practices General Coding Practices
+
+- **Use tested and approved managed code** rather than creating new unmanaged code for common tasks.
+- **Utilize locking to prevent multiple simultaneous requests** or use a synchronization mechanism to prevent race conditions.
+- **Protect shared variables and resources** from inappropriate concurrent access.
+- **Explicitly initialize all your variables and other data stores**, either during declaration or just before the first usage.
+- **In cases where the application must run with elevated privileges, raise privileges as late as possible, and drop them as soon as possible**.
+- **Avoid calculation errors** by understanding your programming language's underlying representation and how it interacts with numeric calculation. Pay close attention to byte size discrepancies, precision, signed/unsigned distinctions, truncation, conversion and casting between types, "not-a-number" calculations, and how your language handles numbers that are too large or too small for its underlying representation.
+- **Restrict users from generating new code** or altering existing code.
+
+
+@subsubsection S5_1_4_2_secure_coding_best_practices Secure Coding Best Practices
+
+- **Validate input**. Validate input from all untrusted data sources. Proper input validation can eliminate the vast majority of software vulnerabilities. Be suspicious of most external data sources, including command line arguments, network interfaces, environmental variables, and user controlled files.
+- **Heed compiler warnings**. Compile code using the default compiler flags that exist in the SConstruct file.
+- Use **static analysis tools** to detect and eliminate additional security flaws.
+- **Keep it simple**. Keep the design as simple and small as possible. Complex designs increase the likelihood that errors will be made in their implementation, configuration, and use. Additionally, the effort required to achieve an appropriate level of assurance increases dramatically as security mechanisms become more complex.
+- **Default deny**. Base access decisions on permission rather than exclusion. This means that, by default, access is denied and the protection scheme identifies conditions under which access is permitted.
+- **Adhere to the principle of least privilege**. Every process should execute with the least set of privileges necessary to complete the job. Any elevated permission should only be accessed for the least amount of time required to complete the privileged task. This approach reduces the opportunities an attacker has to execute arbitrary code with elevated privileges.
+- **Sanitize data sent to other systems**. Sanitize all data passed to complex subsystems such as command shells, relational databases, and commercial off-the-shelf (COTS) components. Attackers may be able to invoke unused functionality in these components through the use of various injection attacks. This is not necessarily an input validation problem because the complex subsystem being invoked does not understand the context in which the call is made. Because the calling process understands the context, it is responsible for sanitizing the data before invoking the subsystem.
+- **Practice defense in depth**. Manage risk with multiple defensive strategies, so that if one layer of defense turns out to be inadequate, another layer of defense can prevent a security flaw from becoming an exploitable vulnerability and/or limit the consequences of a successful exploit. For example, combining secure programming techniques with secure runtime environments should reduce the likelihood that vulnerabilities remaining in the code at deployment time can be exploited in the operational environment.
+
+@section S5_2_how_to_submit_a_patch How to submit a patch
+
+To be able to submit a patch to our development repository you need to have a GitHub account. With that, you will be able to sign in to Gerrit where your patch will be reviewed.
+
+Next step is to clone the Compute Library repository:
+
+ git clone "ssh://<your-github-id>@review.mlplatform.org:29418/ml/ComputeLibrary"
+
+If you have cloned from GitHub or through HTTP, make sure you add a new git remote using SSH:
+
+ git remote add acl-gerrit "ssh://<your-github-id>@review.mlplatform.org:29418/ml/ComputeLibrary"
+
+After that, you will need to upload an SSH key to https://review.mlplatform.org/#/settings/ssh-keys
+
+Then, make sure to install the commit-msg Git hook in order to add a Change-Id to the commit message of your patch:
+
+    cd "ComputeLibrary" && mkdir -p .git/hooks && curl -Lo `git rev-parse --git-dir`/hooks/commit-msg https://review.mlplatform.org/tools/hooks/commit-msg; chmod +x `git rev-parse --git-dir`/hooks/commit-msg
+
+When your patch is ready, remember to sign off your contribution by adding a line with your name and e-mail address to every git commit message:
+
+ Signed-off-by: John Doe <john.doe@example.org>
+
+You must use your real name; no pseudonyms or anonymous contributions are accepted.
+
+You can add this to your patch with:
+
+ git commit -s --amend
+
+You are now ready to submit your patch for review:
+
+ git push acl-gerrit HEAD:refs/for/master
+
+@section S5_3_code_review Patch acceptance and code review
+
+Once a patch is uploaded for review, a pre-commit test runs on a Jenkins server for continuous integration. In order to be merged, a patch needs to:
+
+- get a "+1 Verified" from the pre-commit job
+- get a "+1 Comments-Addressed", in case of comments from reviewers the committer has to address them all. A comment is considered addressed when the first line of the reply contains the word "Done"
+- get a "+2" from a reviewer, that means the patch has the final approval
+
+At the moment, the Jenkins server is not publicly accessible and for security reasons patches submitted by non-whitelisted committers do not trigger the pre-commit tests. For this reason, one of the maintainers has to manually trigger the job.
+
+If the pre-commit test fails, the Jenkins job will post a comment on Gerrit with details about the failure, so that the committer can reproduce the error and fix the issue, if any (sometimes there are infrastructure issues, for example a test platform disconnecting, in which case the job needs to be retriggered).
+
+*/
+} // namespace arm_compute
diff --git a/docs/05_functions_list.dox b/docs/06_functions_list.dox
similarity index 92%
rename from docs/05_functions_list.dox
rename to docs/06_functions_list.dox
index 999b573..0008087 100644
--- a/docs/05_functions_list.dox
+++ b/docs/06_functions_list.dox
@@ -29,14 +29,18 @@
@tableofcontents
-@section S5_1 NEON functions
+@section S6_1 NEON functions
- @ref IFunction
- @ref INESimpleFunction
- @ref NEAbsoluteDifference
- @ref NEArithmeticAddition
- @ref NEArithmeticSubtraction
+ - @ref NEBoundingBoxTransform
- @ref NEBox3x3
+ - @ref NECast
+ - @ref NEComplexPixelWiseMultiplication
+ - @ref NEComputeAllAnchors
- @ref NEConvolution3x3
- @ref NEConvolutionRectangle
- @ref NEDilate
@@ -54,7 +58,10 @@
- @ref NENonLinearFilter
- @ref NENonMaximaSuppression3x3
- @ref NEPixelWiseMultiplication
+ - @ref NEPReluLayer
- @ref NERemap
+ - @ref NEROIAlignLayer
+ - @ref NERoundLayer
- @ref NERsqrtLayer
- @ref NEScharr3x3
- @ref NESelect
@@ -89,6 +96,7 @@
- @ref NEGEMMTranspose1xW
- @ref NEHOGDetector
- @ref NEMagnitude
+ - @ref NEMeanStdDevNormalizationLayer
- @ref NEPermute
- @ref NEPhase
- @ref NEPriorBoxLayer
@@ -114,10 +122,10 @@
- @ref NEDeconvolutionLayer
- @ref NEDepthwiseConvolutionAssemblyDispatch
- @ref NEDepthwiseConvolutionLayer
- - @ref NEDepthwiseConvolutionLayer3x3
- - @ref NEDepthwiseSeparableConvolutionLayer
+ - @ref NEDepthwiseConvolutionLayerOptimized
- @ref NEDequantizationLayer
- @ref NEDerivative
+ - @ref NEDetectionPostProcessLayer
- @ref NEDirectConvolutionLayer
- @ref NEEqualizeHistogram
- @ref NEFastCorners
@@ -134,20 +142,22 @@
- @ref NEGEMM
- @ref NEGEMMAssemblyDispatch
- @ref NEGEMMConvolutionLayer
- - @ref NEGEMMInterleavedWrapper
- @ref NEGEMMLowpAssemblyMatrixMultiplyCore
- @ref NEGEMMLowpMatrixMultiplyCore
+ - @ref NEGenerateProposalsLayer
- @ref NEHarrisCorners
- @ref NEHistogram
- @ref NEHOGDescriptor
- @ref NEHOGGradient
- @ref NEHOGMultiDetection
- @ref NEIm2Col
+ - @ref NEInstanceNormalizationLayer
- @ref NEL2NormalizeLayer
- @ref NELaplacianPyramid
- @ref NELaplacianReconstruct
- @ref NELocallyConnectedLayer
- @ref NELSTMLayer
+ - @ref NELSTMLayerQuantized
- @ref NEMeanStdDev
- @ref NEMinMaxLocation
- @ref NENormalizationLayer
@@ -164,15 +174,16 @@
- @ref NESimpleAssemblyFunction
- @ref NESobel5x5
- @ref NESobel7x7
- - @ref NESoftmaxLayer
+ - @ref NESoftmaxLayerGeneric <IS_LOG>
- @ref NESpaceToBatchLayer
+ - @ref NESpaceToDepthLayer
- @ref NESplit
- @ref NEStackLayer
- @ref NEUnstack
- @ref NEUpsampleLayer
- @ref NEWinogradConvolutionLayer
-@section S5_2 OpenCL functions
+@section S6_2 OpenCL functions
- @ref IFunction
- @ref CLBatchNormalizationLayer
@@ -186,9 +197,9 @@
- @ref CLCropResize
- @ref CLDeconvolutionLayer
- @ref CLDeconvolutionLayerUpsample
+ - @ref CLDepthToSpaceLayer
- @ref CLDepthwiseConvolutionLayer
- @ref CLDepthwiseConvolutionLayer3x3
- - @ref CLDepthwiseSeparableConvolutionLayer
- @ref CLDequantizationLayer
- @ref CLDirectConvolutionLayer
- @ref CLDirectDeconvolutionLayer
@@ -220,6 +231,7 @@
- @ref CLLaplacianReconstruct
- @ref CLLocallyConnectedLayer
- @ref CLLSTMLayer
+ - @ref CLLSTMLayerQuantized
- @ref CLMeanStdDev
- @ref CLMinMaxLocation
- @ref CLNormalizationLayer
@@ -232,8 +244,9 @@
- @ref CLRNNLayer
- @ref CLSobel5x5
- @ref CLSobel7x7
- - @ref CLSoftmaxLayer
+ - @ref CLSoftmaxLayerGeneric <IS_LOG>
- @ref CLSpaceToBatchLayer
+ - @ref CLSpaceToDepthLayer
- @ref CLSplit
- @ref CLStackLayer
- @ref CLUnstack
@@ -285,6 +298,7 @@
- @ref CLGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPoint
- @ref CLGEMMLowpQuantizeDownInt32ToUint8ScaleByFloat
- @ref CLMagnitude
+ - @ref CLMeanStdDevNormalizationLayer
- @ref CLMedian3x3
- @ref CLNonLinearFilter
- @ref CLNonMaximaSuppression3x3
@@ -292,6 +306,7 @@
- @ref CLPhase
- @ref CLPixelWiseMultiplication
- @ref CLPoolingLayer
+ - @ref CLPReluLayer
- @ref CLPriorBoxLayer
- @ref CLRange
- @ref CLRemap
@@ -316,7 +331,7 @@
- @ref CLWinogradInputTransform
- @ref CLYOLOLayer
-@section S5_3 GLES Compute functions
+@section S6_3 GLES Compute functions
- @ref IFunction
- @ref GCBatchNormalizationLayer
@@ -345,10 +360,11 @@
- @ref GCTensorShift
- @ref GCTranspose
-@section S5_4 CPP functions
+@section S6_4 CPP functions
- @ref IFunction
- @ref CPPDetectionOutputLayer
+ - @ref CPPDetectionPostProcessLayer
- @ref ICPPSimpleFunction
- @ref CPPBoxWithNonMaximaSuppressionLimit
- @ref CPPPermute
diff --git a/docs/07_errata.dox b/docs/07_errata.dox
new file mode 100644
index 0000000..16541c5
--- /dev/null
+++ b/docs/07_errata.dox
@@ -0,0 +1,49 @@
+///
+/// Copyright (c) 2019 ARM Limited.
+///
+/// SPDX-License-Identifier: MIT
+///
+/// Permission is hereby granted, free of charge, to any person obtaining a copy
+/// of this software and associated documentation files (the "Software"), to
+/// deal in the Software without restriction, including without limitation the
+/// rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+/// sell copies of the Software, and to permit persons to whom the Software is
+/// furnished to do so, subject to the following conditions:
+///
+/// The above copyright notice and this permission notice shall be included in all
+/// copies or substantial portions of the Software.
+///
+/// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+/// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+/// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+/// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+/// SOFTWARE.
+///
+namespace arm_compute
+{
+/**
+@page errata Errata
+
+@tableofcontents
+
+@section S7_1_errata Errata
+
+- Under certain conditions, benchmark examples can hang when OpenCL profiling queues are enabled.
+ - Versions Affected: >= v19.11
+ - OSs Affected: Linux
+ - Conditions:
+ - Mali DDK r1p0 - r8p0, and
+ - Linux kernel >= 4.4
+
+- On Android with arm64-v8a/arm64-v8.2-a architecture, NEON validation tests can fail when compiled using Android NDK
+ >= r18b in debug mode (https://github.com/android/ndk/issues/1135).
+ - Versions Affected: >= v19.11
+ - OSs Affected: Android
+ - Conditions:
+ - arm64-v8a/arm64-v8.2-a architecture, and
+ - Compiled using Android NDK >= r18b in debug mode.
+
+*/
+} // namespace arm_compute
diff --git a/docs/Doxyfile b/docs/Doxyfile
index e9027c8..64bd966 100644
--- a/docs/Doxyfile
+++ b/docs/Doxyfile
@@ -38,7 +38,7 @@
# could be handy for archiving the generated documentation or if some version
# control system is used.
-PROJECT_NUMBER = 19.08
+PROJECT_NUMBER = 19.11
# Using the PROJECT_BRIEF tag one can provide an optional one line description
# for a project that appears at the top of each page and should give viewer a
@@ -769,11 +769,13 @@
# Note: If this tag is empty the current directory is searched.
INPUT = ./docs/00_introduction.dox \
- ./docs/05_functions_list.dox \
./docs/01_library.dox \
- ./docs/04_adding_operator.dox \
./docs/02_tests.dox \
./docs/03_scripts.dox \
+ ./docs/04_adding_operator.dox \
+ ./docs/05_contribution_guidelines.dox \
+ ./docs/06_functions_list.dox \
+ ./docs/07_errata.dox \
./arm_compute/ \
./src/ \
./examples/ \
@@ -996,7 +998,7 @@
# Fortran comments will always remain visible.
# The default value is: YES.
-STRIP_CODE_COMMENTS = YES
+STRIP_CODE_COMMENTS = NO
# If the REFERENCED_BY_RELATION tag is set to YES then for each documented
# function all documented functions referencing it will be listed.