QS8: move loads to the end of the loop, one load every 2 NEON instructions.

The intended win is on Cortex-A72 and Cortex-A73, which prefer more distance
between the loads and the initial MUL instructions that consume them.  This
saves 1 cycle on A72.  The spacing and order were then tuned to have no impact
on A75, making the loads effectively free on A75.
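
For illustration, here is a minimal sketch of the scheduling idea using NEON intrinsics; `dot_qs8` is a hypothetical toy kernel, not the XNNPACK microkernel (the real kernels are hand-written assembly precisely so load placement is not left to the compiler):

```c
#include <arm_neon.h>
#include <stddef.h>
#include <stdint.h>

// Toy int8 dot-product loop. The loads feeding the *next* iteration are
// issued late in the loop body, spaced roughly one load per two NEON
// instructions, so that on cores like A72/A73 the data is ready before
// the first MULs of the next iteration consume it.
// Assumes k is a multiple of 8 and k >= 8.
int32x4_t dot_qs8(const int8_t* a, const int8_t* w, size_t k) {
  int32x4_t acc = vdupq_n_s32(0);
  int8x8_t va = vld1_s8(a);               // preload for the first iteration
  int8x8_t vw = vld1_s8(w);
  for (size_t i = 8; i < k; i += 8) {
    int16x8_t prod = vmull_s8(va, vw);    // MUL consumes preloaded data
    va = vld1_s8(a + i);                  // load for the next iteration...
    acc = vpadalq_s16(acc, prod);         // ...spaced between NEON ALU ops
    vw = vld1_s8(w + i);
  }
  acc = vpadalq_s16(acc, vmull_s8(va, vw));  // final preloaded chunk
  return acc;  // caller reduces lanes, e.g. with vaddvq_s32(acc)
}
```

In intrinsics form the compiler may reschedule these statements; the actual change was made in assembly microkernels, where instruction order is explicit.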

end2end benchmark before:
A75
QS8MobileNetV1/T:1/real_time      40098 us        39739 us          175 cpufreq=2.8032G
QS8MobileNetV2/T:1/real_time      29054 us        28789 us          241 cpufreq=2.8032G
A72
QS8MobileNetV1/T:1/real_time      52568 us        52567 us          135 cpufreq=2.516G
QS8MobileNetV2/T:1/real_time      38372 us        38372 us          183 cpufreq=2.516G
A73
QS8MobileNetV1/T:1/real_time      55208 us        54962 us          127 cpufreq=2.4576G
QS8MobileNetV2/T:1/real_time      39366 us        39091 us          177 cpufreq=2.4576G

end2end benchmark after:
A75
QS8MobileNetV1/T:1/real_time      39793 us        39468 us          176 cpufreq=2.8032G
QS8MobileNetV2/T:1/real_time      29028 us        28760 us          241 cpufreq=2.8032G
A72
QS8MobileNetV1/T:1/real_time      52717 us        52716 us          134 cpufreq=2.516G
QS8MobileNetV2/T:1/real_time      38521 us        38521 us          182 cpufreq=2.516G
A73
QS8MobileNetV1/T:1/real_time      55212 us        54955 us          127 cpufreq=2.4576G
QS8MobileNetV2/T:1/real_time      39147 us        38897 us          179 cpufreq=2.4576G

PiperOrigin-RevId: 367284511
README.md

XNNPACK

XNNPACK is a highly optimized library of floating-point neural network inference operators for ARM, WebAssembly, and x86 platforms. XNNPACK is not intended for direct use by deep learning practitioners and researchers; instead it provides low-level performance primitives for accelerating high-level machine learning frameworks, such as TensorFlow Lite, TensorFlow.js, PyTorch, and MediaPipe.

Supported Architectures

  • ARM64 on Android, Linux, macOS, and iOS (including watchOS and tvOS)
  • ARMv7 (with NEON) on Android, Linux, and iOS (including watchOS)
  • x86 and x86-64 (up to AVX512) on Windows, Linux, macOS, Android, and iOS simulator
  • WebAssembly MVP
  • WebAssembly SIMD (experimental)

Operator Coverage

XNNPACK implements the following neural network operators:

  • 2D Convolution (including grouped and depthwise)
  • 2D Deconvolution (AKA Transposed Convolution)
  • 2D Average Pooling
  • 2D Max Pooling
  • 2D ArgMax Pooling (Max Pooling + indices)
  • 2D Unpooling
  • 2D Bilinear Resize
  • 2D Depth-to-Space (AKA Pixel Shuffle)
  • Add (including broadcasting, two inputs only)
  • Subtract (including broadcasting)
  • Divide (including broadcasting)
  • Maximum (including broadcasting)
  • Minimum (including broadcasting)
  • Multiply (including broadcasting)
  • Squared Difference (including broadcasting)
  • Global Average Pooling
  • Channel Shuffle
  • Fully Connected
  • Abs (absolute value)
  • Bankers' Rounding (rounding to nearest, ties to even)
  • Ceiling (rounding to integer above)
  • Clamp (includes ReLU and ReLU6)
  • Copy
  • ELU
  • Floor (rounding to integer below)
  • HardSwish
  • Leaky ReLU
  • Negate
  • Sigmoid
  • Softmax
  • Square
  • Truncation (rounding to integer towards zero)
  • PReLU

All operators in XNNPACK support NHWC layout, but additionally allow a custom stride along the Channel dimension. Thus, operators can consume a subset of channels in the input tensor and produce a subset of channels in the output tensor, providing zero-cost Channel Split and Channel Concatenation operations.
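
As a hedged sketch of how the channel stride enables a zero-copy Channel Split (operator-API signatures recalled from this era of the library and possibly differing between versions; buffer sizes are arbitrary): a Clamp operator below processes only the first 4 channels of an 8-channel interleaved tensor by setting `channels` to 4 while keeping the strides at 8.

```c
#include <stddef.h>
#include <xnnpack.h>

int main(void) {
  if (xnn_initialize(/*allocator=*/NULL) != xnn_status_success) return 1;

  // Illustrative sizes: 8 interleaved channels per pixel, split 4 + 4.
  enum { kPixels = 640 * 480, kAllChannels = 8, kSplitChannels = 4 };
  static float input[kPixels * kAllChannels];
  static float output[kPixels * kAllChannels];

  // Clamp (here acting as ReLU6) over channels 0..3 of every pixel:
  // channels = 4, but input/output strides cover all 8 channels.
  xnn_operator_t clamp_lo = NULL;
  xnn_create_clamp_nc_f32(
      /*channels=*/kSplitChannels,
      /*input_stride=*/kAllChannels,
      /*output_stride=*/kAllChannels,
      /*output_min=*/0.0f,
      /*output_max=*/6.0f,
      /*flags=*/0,
      &clamp_lo);
  xnn_setup_clamp_nc_f32(clamp_lo, /*batch_size=*/kPixels,
                         input, output, /*threadpool=*/NULL);
  xnn_run_operator(clamp_lo, /*threadpool=*/NULL);

  // A second operator pointed at input + kSplitChannels (and likewise at
  // the output) would cover channels 4..7, with no copy or repacking.
  xnn_delete_operator(clamp_lo);
  return 0;
}
```

Concatenation works the same way in reverse: two producers write into disjoint channel ranges of a single output buffer.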

Performance

Mobile phones

The table below presents single-threaded performance of the XNNPACK library on three generations of MobileNet models and three generations of Pixel phones.

| Model | Pixel, ms | Pixel 2, ms | Pixel 3a, ms |
|---|---|---|---|
| MobileNet v1 1.0X | 82 | 86 | 88 |
| MobileNet v2 1.0X | 49 | 53 | 55 |
| MobileNet v3 Large | 39 | 42 | 44 |
| MobileNet v3 Small | 12 | 14 | 14 |

The following table presents multi-threaded (using as many threads as there are big cores) performance of the XNNPACK library on three generations of MobileNet models and three generations of Pixel phones.

| Model | Pixel, ms | Pixel 2, ms | Pixel 3a, ms |
|---|---|---|---|
| MobileNet v1 1.0X | 43 | 27 | 46 |
| MobileNet v2 1.0X | 26 | 18 | 28 |
| MobileNet v3 Large | 22 | 16 | 24 |
| MobileNet v3 Small | 7 | 6 | 8 |

Benchmarked on March 27, 2020 with `end2end_bench --benchmark_min_time=5` on an Android/ARM64 build with Android NDK r21 (`bazel build -c opt --config android_arm64 :end2end_bench`) and neural network models with randomized weights and inputs.

Raspberry Pi

The table below presents multi-threaded performance of the XNNPACK library on three generations of MobileNet models and four generations of Raspberry Pi boards.

| Model | RPi Zero W (BCM2835), ms | RPi 2 (BCM2836), ms | RPi 3+ (BCM2837B0), ms | RPi 4 (BCM2711), ms |
|---|---|---|---|---|
| MobileNet v1 1.0X | 4004 | 337 | 116 | 72 |
| MobileNet v2 1.0X | 2011 | 195 | 83 | 41 |
| MobileNet v3 Large | 1694 | 163 | 70 | 38 |
| MobileNet v3 Small | 482 | 52 | 23 | 13 |

Benchmarked on May 22, 2020 with `end2end-bench --benchmark_min_time=5` on a Raspbian Buster build with CMake (`./scripts/build-local.sh`) and neural network models with randomized weights and inputs.

Acknowledgements

XNNPACK is based on the QNNPACK library. However, the codebase has diverged significantly over time, and the XNNPACK API is no longer compatible with QNNPACK.