commit | 6724218299028c69f71f3905b48f83bbcfb929d1 |
---|---|
author | Frank Barchard <fbarchard@google.com> | Thu Jun 11 11:12:50 2020 -0700 |
committer | XNNPACK Team <xnnpack-github-robot@google.com> | Thu Jun 11 11:13:24 2020 -0700 |
tree | f70560402d4ea96b3e7df934800d165b90c2a474 |
parent | 177fe2d777008951f79d18f673b64e13cf430a52 |
Avoid x18 register

x18 is reserved by the OS. Instead of loading cn_stride from the stack at function entry, defer the load until clamping with min/max and use x0, which is free at that point.

MobileNetV2 end2end CPU:

CPU | Before | After |
---|---|---|
a53 | 96055 | 96306 |
a55 | 111923 | 111956 |
a57 | 83247 | 83229 |
a72 | 61360 | 62700 |
a73 | 53267 | 53094 |
a75 | 39247 | 39136 |
a76 | 21868 | 21880 |
a77 | 20754 | 20812 |
kryo | 47640 | 47671 |
m1 | 40545 | 40814 |
m2 | 44468 | 44656 |
m3 | 19333 | 19358 |
m4 | 16660 | 17206 |

PiperOrigin-RevId: 315937599
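The idea behind the change can be sketched roughly as follows. This is an illustrative aarch64 fragment, not the actual XNNPACK microkernel: the stack offset, vector registers, and instruction ordering are assumptions made for the example.

```asm
// Before: cn_stride loaded at function entry into x18, which is the
// platform-reserved register on some OSes (e.g. Apple, Windows):
//   ldr  x18, [sp, 8]           // cn_stride -- avoid: x18 reserved

// After: defer the load to the clamping stage, where the first
// argument register x0 has already been consumed and is free.
fmin v16.4s, v16.4s, v6.4s      // clamp against max
fmax v16.4s, v16.4s, v7.4s      // clamp against min
ldr  x0, [sp, 8]                // cn_stride, loaded into the free x0
```

Loading the stride just before it is needed costs nothing extra on out-of-order cores, as the benchmark table above suggests, while keeping the kernel within the platform ABI.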
XNNPACK is a highly optimized library of floating-point neural network inference operators for ARM, WebAssembly, and x86 platforms. XNNPACK is not intended for direct use by deep learning practitioners and researchers; instead it provides low-level performance primitives for accelerating high-level machine learning frameworks, such as TensorFlow Lite, TensorFlow.js, PyTorch, and MediaPipe.
XNNPACK implements a comprehensive set of neural network operators.
All operators in XNNPACK support NHWC layout, but additionally allow a custom stride along the Channel dimension. Thus, operators can consume a subset of channels in the input tensor, and produce a subset of channels in the output tensor, providing zero-cost Channel Split and Channel Concatenation operations.
The table below presents single-threaded performance of the XNNPACK library on three generations of MobileNet models and three generations of Pixel phones.
Model | Pixel, ms | Pixel 2, ms | Pixel 3a, ms |
---|---|---|---|
MobileNet v1 1.0X | 82 | 86 | 88 |
MobileNet v2 1.0X | 49 | 53 | 55 |
MobileNet v3 Large | 39 | 42 | 44 |
MobileNet v3 Small | 12 | 14 | 14 |
The following table presents multi-threaded (using as many threads as there are big cores) performance of the XNNPACK library on three generations of MobileNet models and three generations of Pixel phones.
Model | Pixel, ms | Pixel 2, ms | Pixel 3a, ms |
---|---|---|---|
MobileNet v1 1.0X | 43 | 27 | 46 |
MobileNet v2 1.0X | 26 | 18 | 28 |
MobileNet v3 Large | 22 | 16 | 24 |
MobileNet v3 Small | 7 | 6 | 8 |
Benchmarked on March 27, 2020 with `end2end_bench --benchmark_min_time=5` on an Android/ARM64 build with Android NDK r21 (`bazel build -c opt --config android_arm64 :end2end_bench`) and neural network models with randomized weights and inputs.
The table below presents multi-threaded performance of the XNNPACK library on three generations of MobileNet models and four generations of Raspberry Pi boards.
Model | RPi Zero W (BCM2835), ms | RPi 2 (BCM2836), ms | RPi 3+ (BCM2837B0), ms | RPi 4 (BCM2711), ms |
---|---|---|---|---|
MobileNet v1 1.0X | 4004 | 337 | 116 | 72 |
MobileNet v2 1.0X | 2011 | 195 | 83 | 41 |
MobileNet v3 Large | 1694 | 163 | 70 | 38 |
MobileNet v3 Small | 482 | 52 | 23 | 13 |
Benchmarked on May 22, 2020 with `end2end-bench --benchmark_min_time=5` on a Raspbian Buster build with CMake (`./scripts/build-local.sh`) and neural network models with randomized weights and inputs.
XNNPACK is based on the QNNPACK library. However, the codebase has diverged significantly over time, and the XNNPACK API is no longer compatible with QNNPACK.