A53 GEMM / IGEMM kernel prefetches: adjust by 1 cache line

The x5 register used for weights advances by 64 bytes (one cache line) in most kernels, and by 96 bytes in 4x12.
The prefetch offsets don't account for this and skip a cache line, so adjust the offsets by 1 cache line.
The 4x12 kernel consumes 192 bytes (3 cache lines) per iteration, so issue 3 prefetches.
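
As a minimal sketch of the idea (plain C with a hypothetical kernel loop, not the actual AArch64 assembly), the point is that the prefetch offset must stay a whole number of cache lines ahead of the advancing weights pointer, or successive iterations skip lines:

```c
#include <stddef.h>

#define CACHE_LINE_BYTES 64
#define LINES_AHEAD 4 /* hypothetical prefetch distance, for illustration */

/* Sketch of a micro-kernel main loop: the weights pointer w advances one
 * 64-byte cache line (16 floats) per iteration, as x5 does in most of the
 * A53 kernels.  Because the prefetch offset is a whole number of cache
 * lines, each iteration prefetches the next untouched line; an offset that
 * ignores the pointer's advance would skip a line between iterations. */
void kernel_loop_sketch(const float *w, size_t k) {
  const size_t line_floats = CACHE_LINE_BYTES / sizeof(float); /* 16 */
  for (size_t i = 0; i < k; i++) {
    __builtin_prefetch(w + LINES_AHEAD * line_floats, /*rw=*/0, /*locality=*/0);
    /* ... FMA work consuming one cache line of weights ... */
    w += line_floats;
  }
}
```

By the same logic, a kernel like 4x12 that consumes 3 cache lines per iteration needs 3 prefetches per iteration to keep every upcoming line covered.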

End-to-end results were:
f32_gemm_4x12__aarch64_neonfma_cortex_a53/mobilenet_v2/real_time     109863 us
(4x12 is newly submitted)
f32_gemm_6x8__aarch64_neonfma_cortex_a53/mobilenet_v2/real_time       96928 us
f32_gemm_4x8__aarch64_neonfma_cortex_a53/mobilenet_v2/real_time      106907 us

Now:
f32_gemm_6x8__aarch64_neonfma_cortex_a53/mobilenet_v2/real_time       95999 us
f32_gemm_4x12__aarch64_neonfma_cortex_a53/mobilenet_v2/real_time     102843 us
f32_gemm_4x8__aarch64_neonfma_cortex_a53/mobilenet_v2/real_time      104823 us

PiperOrigin-RevId: 289984651

XNNPACK

XNNPACK is a highly optimized library of floating-point neural network inference operators for ARM, WebAssembly, and x86 (SSE2 level) platforms. XNNPACK is not intended for direct use by deep learning practitioners and researchers; instead it provides low-level performance primitives for accelerating high-level machine learning frameworks, such as MediaPipe, TensorFlow Lite, and TensorFlow.js.

Supported Architectures

  • ARM64 on Android and Linux
  • ARMv7 (with NEON) on Android and Linux
  • WebAssembly MVP
  • WebAssembly SIMD (experimental)
  • x86 and x86-64 (up to AVX512) on Android, Linux, and macOS

Operator Coverage

XNNPACK implements the following neural network operators:

  • 2D Convolution (including grouped and depthwise)
  • 2D Deconvolution (AKA Transposed Convolution)
  • 2D Average Pooling
  • 2D Max Pooling
  • 2D ArgMax Pooling (Max Pooling + indices)
  • 2D Unpooling
  • 2D Bilinear Resize
  • Add (including broadcasting, two inputs only)
  • Subtract (including broadcasting)
  • Divide (including broadcasting)
  • Maximum (including broadcasting)
  • Minimum (including broadcasting)
  • Multiply (including broadcasting)
  • Global Average Pooling
  • Channel Shuffle
  • Fully Connected
  • Clamp (includes ReLU and ReLU6)
  • HardSwish
  • Sigmoid
  • PReLU

All operators in XNNPACK support NHWC layout, but additionally allow a custom stride along the Channel dimension. Thus, operators can consume a subset of channels in the input tensor and produce a subset of channels in the output tensor, providing zero-cost Channel Split and Channel Concatenation operations.
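
As a hedged sketch of the idea (a hypothetical elementwise helper in C, not the actual XNNPACK API), decoupling the channel count from the channel stride is what makes slicing free:

```c
#include <stddef.h>

/* Hypothetical elementwise operator, for illustration only: 'channels' is
 * how many channels it processes per pixel, 'stride' is the distance in
 * elements between consecutive pixels of the NHWC tensor. */
static void scale_op(const float *in, float *out, size_t pixels,
                     size_t channels, size_t stride) {
  for (size_t p = 0; p < pixels; p++) {
    for (size_t c = 0; c < channels; c++) {
      out[p * stride + c] = 2.0f * in[p * stride + c];
    }
  }
}

/* Zero-cost "Channel Split": process only channels [32, 64) of a 64-channel
 * NHWC tensor by offsetting the base pointers and keeping stride = 64; no
 * slice is ever materialized. */
void split_example(const float *nhwc_in, float *nhwc_out, size_t pixels) {
  scale_op(nhwc_in + 32, nhwc_out + 32, pixels, /*channels=*/32, /*stride=*/64);
}
```

Channel Concatenation is the mirror image: two operators write disjoint channel ranges of the same output tensor through offset base pointers.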

Performance

Mobile phones

The table below presents single-threaded performance of the XNNPACK library on three generations of MobileNet models and three generations of Pixel phones.

Model                Pixel, ms  Pixel 2, ms  Pixel 3a, ms
MobileNet v1 1.0X    81         89           88
MobileNet v2 1.0X    48         55           54
MobileNet v3 Large   40         44           44
MobileNet v3 Small   12         14           14

The following table presents multi-threaded (using as many threads as there are big cores) performance of the XNNPACK library on three generations of MobileNet models and three generations of Pixel phones.

Model                Pixel, ms  Pixel 2, ms  Pixel 3a, ms
MobileNet v1 1.0X    45         27           46
MobileNet v2 1.0X    28         18           28
MobileNet v3 Large   23         16           24
MobileNet v3 Small   7          6            8

Benchmarked on January 9, 2020 with end2end_bench --benchmark_min_time=5 on an Android/ARM64 build (bazel build -c opt --config android_arm64 :end2end_bench) and neural network models with randomized weights and inputs.

Raspberry Pi

The table below presents multi-threaded performance of the XNNPACK library on three generations of MobileNet models and three generations of Raspberry Pi boards.

Model                RPi 2 (BCM2836), ms  RPi 3+ (BCM2837B0), ms  RPi 4 (BCM2711), ms
MobileNet v1 1.0X    380                  115                     76
MobileNet v2 1.0X    217                  80                      45
MobileNet v3 Large   180                  67                      41
MobileNet v3 Small   57                   23                      15

Benchmarked on January 9, 2020 with end2end-bench --benchmark_min_time=5 on a Raspbian Buster build with CMake (./scripts/build-local.sh) and neural network models with randomized weights and inputs.

Acknowledgements

XNNPACK is based on the QNNPACK library. Unlike QNNPACK, XNNPACK focuses entirely on floating-point operators, and its API is no longer compatible with QNNPACK.