Remove unused x21, and switch x20 to x8 (caller-saved) to avoid a push.
Add missing prefetches in the outer loop, as done in the GEMM kernels.

PiperOrigin-RevId: 273422307

XNNPACK

XNNPACK is a highly optimized library of floating-point neural network inference operators for ARM, WebAssembly, and x86 (SSE2 level) platforms. XNNPACK is not intended for direct use by deep learning practitioners or researchers; instead, it provides low-level performance primitives for accelerating high-level machine learning frameworks, such as MediaPipe, TensorFlow Lite, and TensorFlow.js.

Supported Architectures

  • ARM64 on Android and Linux
  • ARM on Android
  • WebAssembly MVP
  • WebAssembly SIMD (experimental)
  • x86 and x86-64 (up to SSE2 only) on Android, Linux, and macOS

Operator Coverage

XNNPACK implements the following neural network operators (a usage sketch follows the list):

  • 2D Convolution (including grouped and depthwise)
  • 2D Deconvolution (AKA Transposed Convolution)
  • 2D Average Pooling
  • 2D Max Pooling
  • 2D ArgMax Pooling (Max Pooling + indices)
  • 2D Unpooling
  • Add (tensors of the same shape)
  • Global Average Pooling
  • Channel Shuffle
  • Fully Connected
  • Clamp (includes ReLU and ReLU6)
  • HardSwish
  • PReLU
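
All of these operators follow the same create/setup/run/delete lifecycle. The sketch below runs a Clamp operator configured as ReLU6 over a small N×C tensor. It is a minimal illustration assuming the f32 NC Clamp entry points declared in xnnpack.h (xnn_create_clamp_nc_f32, xnn_setup_clamp_nc_f32, xnn_run_operator, xnn_delete_operator); exact signatures should be verified against the header for your version.

```c
#include <stdio.h>
#include <xnnpack.h>

int main(void) {
  // One-time library initialization, required before creating operators.
  // (Later versions of the API take an allocator argument here.)
  if (xnn_initialize() != xnn_status_success) {
    fprintf(stderr, "failed to initialize XNNPACK\n");
    return 1;
  }

  enum { batch_size = 4, channels = 16 };
  static float input[batch_size * channels];
  static float output[batch_size * channels];
  for (int i = 0; i < batch_size * channels; i++) {
    input[i] = (float) i - 32.0f;  // values below 0 and above 6 get clamped
  }

  // Create a Clamp operator configured as ReLU6, i.e. min(max(x, 0), 6).
  xnn_operator_t clamp_op = NULL;
  if (xnn_create_clamp_nc_f32(
          channels, /*input_stride=*/channels, /*output_stride=*/channels,
          /*output_min=*/0.0f, /*output_max=*/6.0f, /*flags=*/0,
          &clamp_op) != xnn_status_success) {
    fprintf(stderr, "failed to create Clamp operator\n");
    return 1;
  }

  // Bind concrete pointers and the batch size, then run.
  // A NULL threadpool runs the operator on the calling thread.
  xnn_setup_clamp_nc_f32(clamp_op, batch_size, input, output,
                         /*threadpool=*/NULL);
  xnn_run_operator(clamp_op, /*threadpool=*/NULL);
  xnn_delete_operator(clamp_op);
  return 0;
}
```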

All operators in XNNPACK support NHWC layout, but additionally allow a custom stride along the Channel dimension. Thus, an operator can consume a subset of channels in the input tensor and produce a subset of channels in the output tensor, providing zero-cost Channel Split and Channel Concatenation operations.
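
As an illustration of the stride mechanism, the sketch below (same assumed Clamp entry points as above, to be checked against xnnpack.h) processes a 16-channel tensor as two 8-channel halves: the operator is created with channels = 8 but pixel strides of 16, and the second half is addressed by offsetting the base pointers, which is exactly the zero-cost Channel Split; writing both halves into one output buffer is the Concatenation.

```c
#include <stdio.h>
#include <xnnpack.h>

int main(void) {
  if (xnn_initialize() != xnn_status_success) {
    fprintf(stderr, "failed to initialize XNNPACK\n");
    return 1;
  }

  // A 16-channel tensor, to be processed as two 8-channel halves.
  enum { batch = 4, total_channels = 16, half = total_channels / 2 };
  static float data[batch * total_channels];
  static float result[batch * total_channels];

  // channels = 8, but the stride between consecutive pixels stays 16,
  // so the operator touches only an 8-channel slice of each pixel.
  xnn_operator_t half_clamp = NULL;
  xnn_create_clamp_nc_f32(
      half, /*input_stride=*/total_channels, /*output_stride=*/total_channels,
      /*output_min=*/0.0f, /*output_max=*/6.0f, /*flags=*/0, &half_clamp);

  // First half of the split: channels [0, 8).
  xnn_setup_clamp_nc_f32(half_clamp, batch, data, result,
                         /*threadpool=*/NULL);
  xnn_run_operator(half_clamp, /*threadpool=*/NULL);

  // Second half: channels [8, 16), reached by offsetting the base
  // pointers; no data is copied or repacked at any point.
  xnn_setup_clamp_nc_f32(half_clamp, batch, data + half, result + half,
                         /*threadpool=*/NULL);
  xnn_run_operator(half_clamp, /*threadpool=*/NULL);

  xnn_delete_operator(half_clamp);
  return 0;
}
```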

Publications

Acknowledgements

XNNPACK is based on the QNNPACK library. However, unlike QNNPACK, XNNPACK focuses entirely on floating-point operators, and its API is no longer compatible with QNNPACK.