commit | e696c3fb6ff6c99e20cca6aeb703da98af1b05d5 | |
---|---|---|
author | Frank Barchard <fbarchard@google.com> | Wed Apr 07 13:19:30 2021 -0700 |
committer | XNNPACK Team <xnnpack-github-robot@google.com> | Wed Apr 07 13:20:25 2021 -0700 |
tree | b74b2105be2cffce2d5bef5ce3c045f0a335702a | |
parent | 60fc61373f21f0ad3164cc719de464f4b787dc04 | |
QS8: move loads to the end of the loop, 1 every 2 NEON instructions.

The intended win is on A72 and A73, which prefer more distance between the loads and the initial MUL instructions that need them. This saves 1 cycle on A72. The spacing and order were then tuned to have no impact on A75, achieving free loads on A75.

end2end benchmark was:

A75
QS8MobileNetV1/T:1/real_time  40098 us  39739 us  175  cpufreq=2.8032G
QS8MobileNetV2/T:1/real_time  29054 us  28789 us  241  cpufreq=2.8032G
A72
QS8MobileNetV1/T:1/real_time  52568 us  52567 us  135  cpufreq=2.516G
QS8MobileNetV2/T:1/real_time  38372 us  38372 us  183  cpufreq=2.516G
A73
QS8MobileNetV1/T:1/real_time  55208 us  54962 us  127  cpufreq=2.4576G
QS8MobileNetV2/T:1/real_time  39366 us  39091 us  177  cpufreq=2.4576G

end2end now:

A75
QS8MobileNetV1/T:1/real_time  39793 us  39468 us  176  cpufreq=2.8032G
QS8MobileNetV2/T:1/real_time  29028 us  28760 us  241  cpufreq=2.8032G
A72
QS8MobileNetV1/T:1/real_time  52717 us  52716 us  134  cpufreq=2.516G
QS8MobileNetV2/T:1/real_time  38521 us  38521 us  182  cpufreq=2.516G
A73
QS8MobileNetV1/T:1/real_time  55212 us  54955 us  127  cpufreq=2.4576G
QS8MobileNetV2/T:1/real_time  39147 us  38897 us  179  cpufreq=2.4576G

PiperOrigin-RevId: 367284511
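To make the scheduling idea concrete, here is a minimal C/NEON-intrinsics sketch of the technique, not the actual XNNPACK kernel: the real QS8 kernels are hand-written AArch64 assembly, where instruction order is controlled exactly, whereas a C compiler is free to reschedule intrinsics. The function name and loop shape are hypothetical; the point is that next-iteration loads are issued after the multiplies that consume the current data, giving each load a full loop body of distance before its first use.

```c
#include <arm_neon.h>
#include <stddef.h>
#include <stdint.h>

// Hypothetical sketch of the load-scheduling idea from this commit.
// Requires k to be a positive multiple of 8.
int32x4_t qs8_dot_software_pipelined(const int8_t* a, const int8_t* b, size_t k) {
  int32x4_t acc = vdupq_n_s32(0);
  // Prologue: issue the first loads well ahead of the MULs that consume them.
  int8x8_t va = vld1_s8(a);
  int8x8_t vb = vld1_s8(b);
  for (size_t i = 8; i < k; i += 8) {
    // The MUL consumes data loaded in the previous iteration...
    const int16x8_t prod = vmull_s8(va, vb);
    // ...while the loads for the next iteration are placed late in the loop
    // body, interleaved with the remaining arithmetic (roughly one load per
    // two NEON instructions), so the load-to-use distance is maximized.
    va = vld1_s8(a + i);
    acc = vpadalq_s16(acc, prod);
    vb = vld1_s8(b + i);
  }
  // Epilogue: consume the final loads.
  acc = vpadalq_s16(acc, vmull_s8(va, vb));
  return acc;
}
```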
XNNPACK is a highly optimized library of floating-point neural network inference operators for ARM, WebAssembly, and x86 platforms. XNNPACK is not intended for direct use by deep learning practitioners and researchers; instead it provides low-level performance primitives for accelerating high-level machine learning frameworks, such as TensorFlow Lite, TensorFlow.js, PyTorch, and MediaPipe.
XNNPACK implements a wide range of neural network operators.
All operators in XNNPACK support NHWC layout, and additionally allow a custom stride along the Channel dimension. Thus, operators can consume a subset of channels in the input tensor and produce a subset of channels in the output tensor, providing zero-cost Channel Split and Channel Concatenation operations.
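As an illustration of why a channel stride makes split and concatenation free, consider this plain-C sketch (hypothetical function and parameter names, not the XNNPACK API itself): an NHWC operator that knows the full tensor's channel count can process a slice of channels with nothing but pointer arithmetic.

```c
#include <stddef.h>

// Hypothetical sketch, not the XNNPACK API: an NHWC operator parameterized by
// an explicit channel stride can read and write a slice of channels in place.
// `input_stride`/`output_stride` are the *total* channel counts of the
// underlying tensors; `channels` is the width of the slice this call touches.
void relu_nhwc_channel_slice(
    const float* input, float* output,
    size_t pixels,        /* batch * height * width      */
    size_t channels,      /* channels in this slice      */
    size_t input_stride,  /* channels in the full input  */
    size_t output_stride) /* channels in the full output */
{
  for (size_t p = 0; p < pixels; p++) {
    const float* x = input + p * input_stride;
    float* y = output + p * output_stride;
    for (size_t c = 0; c < channels; c++) {
      y[c] = x[c] > 0.0f ? x[c] : 0.0f;
    }
  }
}
```

To "split off" channels starting at offset c0, a caller simply passes input + c0 and output + c0 as the base pointers; concatenation works the same way, with each producer writing its slice into a shared output tensor. No data is copied in either case.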
The table below presents single-threaded performance of the XNNPACK library on three generations of MobileNet models and three generations of Pixel phones.
Model | Pixel, ms | Pixel 2, ms | Pixel 3a, ms |
---|---|---|---|
MobileNet v1 1.0X | 82 | 86 | 88 |
MobileNet v2 1.0X | 49 | 53 | 55 |
MobileNet v3 Large | 39 | 42 | 44 |
MobileNet v3 Small | 12 | 14 | 14 |
The following table presents multi-threaded (using as many threads as there are big cores) performance of the XNNPACK library on three generations of MobileNet models and three generations of Pixel phones.
Model | Pixel, ms | Pixel 2, ms | Pixel 3a, ms |
---|---|---|---|
MobileNet v1 1.0X | 43 | 27 | 46 |
MobileNet v2 1.0X | 26 | 18 | 28 |
MobileNet v3 Large | 22 | 16 | 24 |
MobileNet v3 Small | 7 | 6 | 8 |
Benchmarked on March 27, 2020 with `end2end_bench --benchmark_min_time=5` on an Android/ARM64 build with Android NDK r21 (`bazel build -c opt --config android_arm64 :end2end_bench`) and neural network models with randomized weights and inputs.
The table below presents multi-threaded performance of the XNNPACK library on three generations of MobileNet models and four generations of Raspberry Pi boards.
Model | RPi Zero W (BCM2835), ms | RPi 2 (BCM2836), ms | RPi 3+ (BCM2837B0), ms | RPi 4 (BCM2711), ms |
---|---|---|---|---|
MobileNet v1 1.0X | 4004 | 337 | 116 | 72 |
MobileNet v2 1.0X | 2011 | 195 | 83 | 41 |
MobileNet v3 Large | 1694 | 163 | 70 | 38 |
MobileNet v3 Small | 482 | 52 | 23 | 13 |
Benchmarked on May 22, 2020 with `end2end-bench --benchmark_min_time=5` on a Raspbian Buster build with CMake (`./scripts/build-local.sh`) and neural network models with randomized weights and inputs.
XNNPACK is based on the QNNPACK library. However, the codebase has diverged significantly over time, and the XNNPACK API is no longer compatible with QNNPACK.