IVGCVSW-3894 Add support for LOG_SOFTMAX to the HAL 1.2 driver

Signed-off-by: Aron Virginas-Tar <Aron.Virginas-Tar@arm.com>
Change-Id: I59645b339f3b176e5d0852769acb95f5657101d3
README.md

ArmNN Android Neural Networks driver

This directory contains the ArmNN driver for the Android Neural Networks API, implementing the android.hardware.neuralnetworks@1.0, android.hardware.neuralnetworks@1.1 and android.hardware.neuralnetworks@1.2 HALs.

For more information about supported operations and configurations, see NnapiSupport.txt.

Integration guide

Prerequisites

  1. Android source tree for Android P FSK-R3 or later, in the directory <ANDROID_ROOT>
  2. Android source tree for Android Q FSK-2 or later, in the directory <ANDROID_ROOT>
  3. Mali OpenCL driver integrated into the Android source tree

Procedure

  1. Place this source directory at <ANDROID_ROOT>/vendor/arm/android-nn-driver
  2. Run setup.sh
  3. Update the Android build environment to add the ArmNN driver. This ensures that the driver service is built and copied to the system/vendor/bin/hw directory in the Android image. To update the build environment, append the ArmNN driver service to the PRODUCT_PACKAGES variable in the device-specific makefile, located in the <ANDROID_ROOT>/device/<manufacturer>/<product> directory. This file is normally called device.mk:

For Android P or Q, using NN API version 1.0, the following should be added to device.mk:
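As a sketch, assuming the 1.0 driver service uses the name matching the android.hardware.neuralnetworks@1.0-service-armnn.rc file shipped in this repository:

```make
# device.mk: build and install the ArmNN NN API 1.0 driver service
PRODUCT_PACKAGES += android.hardware.neuralnetworks@1.0-service-armnn
```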

For Android P or Q, a newer version of the NN API (1.1) is available, so the following should be added to device.mk instead:
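A corresponding sketch for the 1.1 service, again assuming the service name matches the repository's android.hardware.neuralnetworks@1.1-service-armnn.rc file:

```make
# device.mk: build and install the ArmNN NN API 1.1 driver service
PRODUCT_PACKAGES += android.hardware.neuralnetworks@1.1-service-armnn
```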

For Android Q, a newer version of the NN API (1.2) is available, so the following should be added to device.mk instead:
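And likewise for the 1.2 service, assuming the name matches the repository's android.hardware.neuralnetworks@1.2-service-armnn.rc file:

```make
# device.mk: build and install the ArmNN NN API 1.2 driver service
PRODUCT_PACKAGES += android.hardware.neuralnetworks@1.2-service-armnn
```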

Similarly, the CL, Neon or reference backend can be enabled/disabled by setting ARMNN_COMPUTE_CL_ENABLE, ARMNN_COMPUTE_NEON_ENABLE or ARMNN_REF_ENABLE in device.mk:
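As an illustrative sketch, assuming these build variables act as simple 0/1 switches in device.mk:

```make
# device.mk: enable the CL and Neon backends, disable the reference backend
ARMNN_COMPUTE_CL_ENABLE := 1
ARMNN_COMPUTE_NEON_ENABLE := 1
ARMNN_REF_ENABLE := 0
```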

For Android P and Android Q, the vendor manifest.xml requires the Neural Networks HAL information. For Android P, use HAL version 1.1 as below. For Android Q, substitute 1.2 where necessary.

<hal format="hidl">
    <name>android.hardware.neuralnetworks</name>
    <transport>hwbinder</transport>
    <version>1.1</version>
    <interface>
        <name>IDevice</name>
        <instance>armnn</instance>
    </interface>
    <fqname>@1.1::IDevice/armnn</fqname>
</hal>
  4. Build Android as normal, i.e. run make in <ANDROID_ROOT>
  5. To confirm that the ArmNN driver has been built, check for the driver service executable at the locations given below.

Android P

For example, if the ArmNN driver has been built with the NN API 1.0, check for the following file:
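As an illustration, assuming the default Android output tree layout (with <product> being the device's product name, an assumption here), the 1.0 service executable would appear at a path such as:

```
<ANDROID_ROOT>/out/target/product/<product>/system/vendor/bin/hw/android.hardware.neuralnetworks@1.0-service-armnn
```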

Android Q has a different path:
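Assuming the same output tree layout, and that on Android Q the vendor partition is no longer staged under system/, the expected location would be along the lines of:

```
<ANDROID_ROOT>/out/target/product/<product>/vendor/bin/hw/android.hardware.neuralnetworks@1.0-service-armnn
```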

Testing

  1. Run the ArmNN driver service executable in the background. The following examples assume that the 1.0 version of the driver is being used:
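For example, assuming the executable has been installed at the system/vendor/bin/hw image path mentioned in the integration steps, the service can be started from a host shell via adb:

```sh
# Start the 1.0 ArmNN driver service in the background on the device
adb shell /system/vendor/bin/hw/android.hardware.neuralnetworks@1.0-service-armnn &
```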
  2. Run some code that exercises the Android Neural Networks API, for example Android's NeuralNetworksTest unit tests (note this is an optional component that must be built).
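A hypothetical invocation, assuming the NeuralNetworksTest binary has been built and resides at the usual native-test location on the device (both the path and binary name are assumptions):

```sh
# Run the Android NNAPI unit tests on the device and capture the output
adb shell /data/nativetest/NeuralNetworksTest_static/NeuralNetworksTest_static > NeuralNetworksTest.log
```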
  3. To confirm that the ArmNN driver is being used to service the Android Neural Networks API requests, check for messages in logcat with the ArmnnDriver tag.

Using the GPU tuner

The GPU tuner is a feature of the Compute Library that finds optimum values for GPU acceleration tuning parameters. There are three levels of tuning: exhaustive, normal and rapid. Exhaustive means that all local workgroup size (lws) values are tested. Normal means that a reduced number of lws values are tested, but generally enough to achieve performance close to the exhaustive approach. Rapid means that only three lws values are tested for each kernel. The recommended way of using the tuner with ArmNN is to generate the tuning data during development of the Android image for a device, and then use it in read-only mode during normal operation:

  1. Run the ArmNN driver service executable in tuning mode. The path to the tuning data must be writable by the service. The following examples assume that the 1.0 version of the driver is being used:
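A sketch of starting the service in tuning mode, assuming the driver accepts --cl-tuned-parameters-file and --cl-tuned-parameters-mode options (see DriverOptions.cpp) and that /data/armnn is a hypothetical service-writable location:

```sh
# Start the 1.0 driver service in tuning mode; the tuning data file is
# created/updated at the given (hypothetical) writable path
adb shell /system/vendor/bin/hw/android.hardware.neuralnetworks@1.0-service-armnn \
    --cl-tuned-parameters-file /data/armnn/tuned_params \
    --cl-tuned-parameters-mode UpdateTunedParameters &
```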
  2. Run a representative set of Android NNAPI testing loads. In this mode of operation, each NNAPI workload will be slow the first time it is executed, as the tuning parameters are being selected. Subsequent executions will use the tuning data which has been generated.
  3. Stop the service.
  4. Deploy the tuned parameters file to a location readable by the ArmNN driver service (for example, to a location within /vendor/etc).
  5. During normal operation, pass the location of the tuning data to the driver service (this would normally be done by passing arguments via Android init in the service .rc definition):
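For example, assuming the same --cl-tuned-parameters-file option and that reading the file without an explicit mode uses the tuned parameters read-only (an assumption here; /vendor/etc/armnn_tuned_params is a hypothetical deployed path):

```sh
# Start the service pointing at the deployed, read-only tuning data
adb shell /system/vendor/bin/hw/android.hardware.neuralnetworks@1.0-service-armnn \
    --cl-tuned-parameters-file /vendor/etc/armnn_tuned_params &
```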

License

The android-nn-driver is provided under the MIT license. See LICENSE for more information. Contributions to this project are accepted under the same license.

Individual files contain the following tag instead of the full license text.

SPDX-License-Identifier: MIT

This enables machine processing of license information based on the SPDX License Identifiers that are available here: http://spdx.org/licenses/