Copyright 2017 The Android Open Source Project

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
------------------------------------------------------------------

This directory contains files for the Android MLTS (Machine Learning
Test Suite). MLTS makes it possible to evaluate NNAPI acceleration latency
and accuracy on an Android device, using a few selected ML models and datasets.

Descriptions and licensing of the models and datasets used can be found in
the platform/test/mlts/models/README.txt file.

Usage:
* Connect a target device to your workstation and make sure it is
reachable through adb. If more than one device is connected, export the
ANDROID_SERIAL environment variable with the target device's serial number.
* cd into the Android top-level source directory
> source build/envsetup.sh
> lunch aosp_arm-userdebug # Or aosp_arm64-userdebug if available.
> ./test/mlts/benchmark/build_and_run_benchmark.sh
* At the end of a benchmark run, the results will be
presented as an HTML page, passed to xdg-open.

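When more than one device is attached, the serial to export can be read from
the output of `adb devices`. A minimal sketch, with the `adb devices` output
captured as a string so it is self-contained (the serial numbers are made up):

```shell
# Sample `adb devices` output; on a real workstation you would run
# `adb devices` instead of using this canned string.
sample='List of devices attached
0123456789ABCDEF  device
FEDCBA9876543210  device'

# Pick the first serial reported as "device" and export it so that adb
# (and therefore the benchmark scripts) target that device.
serial=$(printf '%s\n' "$sample" | awk 'NR > 1 && $2 == "device" { print $1; exit }')
export ANDROID_SERIAL="$serial"
echo "$ANDROID_SERIAL"
```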
# Crash test

The MLTS suite contains a series of tests to validate the behaviour of the drivers under stress or
in corner-case conditions.

To run the tests, use the specific targets available in the build_and_run_benchmark.sh script.
By default, every test is run on each available accelerator in isolation. It is possible to filter
the accelerators to test against by invoking the build_and_run_benchmark.sh script with the
-f (--filter-driver) option and specifying a regular expression to filter the accelerator names with.
It is also possible to run additional tests without a specified target accelerator, letting NNAPI
partition the model and assign the best available accelerator(s), by using the
-r (--include-nnapi-reference) option.

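The -f argument is an extended regular expression matched against accelerator
names. As a self-contained illustration with made-up driver names (the actual
names depend on the vendor drivers present on the device):

```shell
# Made-up accelerator names, standing in for what a device might report.
drivers='qti-dsp
qti-gpu
google-edgetpu
nnapi-reference'

# A filter such as 'qti.*' would restrict the run to the first two entries.
printf '%s\n' "$drivers" | grep -E 'qti.*'
```

An invocation would then look something like
`./test/mlts/benchmark/build_and_run_benchmark.sh -f 'qti.*'` together with the
desired crash-test target; check the script itself for the exact target names.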
Currently available tests are:

* parallel-inference-stress: tests the behaviour of drivers with different amounts of inference
executed in parallel. The tests run in a separate process so crashes can be detected and
reported as test failures.
* parallel-inference-stress-in-process: same as parallel-inference-stress, but the tests run
in the same process as the test harness, so in case of a crash the testing app will crash too

* client-early-termination-stress: tests the resilience of device drivers to failing clients.
It spawns separate processes, each running a set of parallel threads compiling different models.
The processes are then forcibly terminated. The test validates that the targeted driver does not
crash or hang.

* multi-process-inference-stress: extends parallel-inference-stress by running inference
on a single model in multiple processes and threads, with different probabilities of
early client-process termination

* multi-process-model-load-stress: extends parallel-inference-stress by loading a single model
in multiple processes and threads, with different probabilities of
early client-process termination

* memory-mapped-model-load-stress: runs a series of parallel model compilations with
memory-mapped TFLite models

* model-load-random-stress: tests compiling a large set of randomly generated models

* inference-random-stress: tests running a large set of randomly generated models

* performance-degradation-stress: verifies that accelerator inference speed does not degrade
beyond a certain threshold when running a concurrent workload

# Testing an NNAPI Support Library implementation

All tests documented above can be run using an NNAPI Support Library implementation.
To do so you need to:

- Copy all the shared objects that are part of the library into the `sl_prebuilt`
  folder.
- Use the `-s` or `--use-nnapi-sl` option when running `build_and_run_benchmark.sh`.

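The two steps above can be sketched as follows. The source directory and
library names are made up, and a scratch directory stands in for the Android
checkout so the sketch is runnable anywhere:

```shell
# Scratch directory standing in for the Android source tree; in a real run
# you would work from the top of the checkout instead.
tree=$(mktemp -d)
mkdir -p "$tree/sl_release" "$tree/test/mlts/benchmark/sl_prebuilt"
touch "$tree/sl_release/libnnapi_sl_driver.so" "$tree/sl_release/libnnapi_support.so"

# Step 1: copy every shared object of the SL implementation into sl_prebuilt/.
cp "$tree"/sl_release/*.so "$tree/test/mlts/benchmark/sl_prebuilt/"
files=$(ls "$tree/test/mlts/benchmark/sl_prebuilt")
printf '%s\n' "$files"

# Step 2 (not executed here): run the benchmark against the SL binaries:
#   ./test/mlts/benchmark/build_and_run_benchmark.sh --use-nnapi-sl <test-target>
rm -rf "$tree"
```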
By default the system will use sl_prebuilt/Android.bp.template to map
every library under sl_prebuilt to a native library to include in the APK.
The file is already configured for the Qualcomm NNAPI SL binaries.
If you have libraries different from the ones defined in sl_prebuilt/Android.bp.template,
you should:

- Configure a sl_prebuilt/Android.bp with the list of binaries you added.
  You can use the sl_prebuilt/Android.bp.template file as a template.

- Set the SL_LIBS variable in Android.mk to the list of driver libraries.
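
For reference, a prebuilt shared-library entry in sl_prebuilt/Android.bp might
look roughly like the sketch below. The module and file names are placeholders,
and the authoritative attribute list is whatever sl_prebuilt/Android.bp.template
uses:

```
// Hypothetical entry for one SL shared object; mirror Android.bp.template
// for the exact attributes expected by the benchmark build.
cc_prebuilt_library_shared {
    name: "libmy_nnapi_sl_driver",
    srcs: ["libmy_nnapi_sl_driver.so"],
    check_elf_files: false,
}
```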