commit | 9f7d55501df8798ac10d932d38a7d2f4c81a54fc |
---|---|
author | Frank Barchard <fbarchard@google.com> | Thu Dec 12 10:58:10 2019 -0800 |
committer | XNNPACK Team <xnnpack-github-robot@google.com> | Thu Dec 12 10:58:47 2019 -0800 |
tree | 809d9fa687af21a27819c029055d91723e422098 |
parent | 73ccfb4e007b67efde34348ced025eb75f9431f2 |
Prefetch version of the aarch32 a75 GEMM kernel

This kernel is the same as 4x8-aarch32-neon-cortex-a75.S but with PLD instructions added, modelled after 4x8-aarch64-neonfma-cortex-a53.S, which prefetches A and W every 64 bytes at offsets of 128 and 384 bytes, timed to the size of the kernel.

With these PLD instructions, performance improves on several CPUs, such as Kryo, A57, A72, A73, M1, and M2. On some CPUs, however, PLD does not help, or can slightly hurt performance, so two versions of the kernel are needed to make comparisons.

PiperOrigin-RevId: 285225721
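The prefetch pattern the commit describes can be sketched in C. This is a minimal illustration, not the actual assembly kernel: the function name and the 1x4 tile are made up, while the 64-byte prefetch stride and the 128/384-byte offsets for A and W follow the commit message. On AArch32, compilers lower `__builtin_prefetch` to the PLD instruction the kernel uses directly.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative 1x4 GEMM micro-kernel inner loop with software prefetch.
 * Computes c[0..3] = sum over i of a[i] * w[4*i + 0..3].
 * Prefetch hints are issued every 16 floats (64 bytes), reaching
 * 128 bytes ahead in A and 384 bytes ahead in W, as in the commit. */
static void gemm_1x4_prefetch(size_t k, const float *a, const float *w,
                              float *c) {
  float acc0 = 0.0f, acc1 = 0.0f, acc2 = 0.0f, acc3 = 0.0f;
  for (size_t i = 0; i < k; i++) {
    if ((i % 16) == 0) {                  /* every 64 bytes of A */
      __builtin_prefetch(a + i + 32);     /* 32 floats = 128 bytes ahead */
      __builtin_prefetch(w + 4 * i + 96); /* 96 floats = 384 bytes ahead */
    }
    const float va = a[i];
    acc0 += va * w[4 * i + 0];
    acc1 += va * w[4 * i + 1];
    acc2 += va * w[4 * i + 2];
    acc3 += va * w[4 * i + 3];
  }
  c[0] = acc0;
  c[1] = acc1;
  c[2] = acc2;
  c[3] = acc3;
}
```

Prefetching is safe even when the hinted address runs past the end of the buffer: PLD (and `__builtin_prefetch`) never faults, which is why the loop does not need bounds checks on the prefetch addresses. Whether the hints help depends on the cache hierarchy of the target core, hence the two kernel variants.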
XNNPACK is a highly optimized library of floating-point neural network inference operators for ARM, WebAssembly, and x86 (SSE2 level) platforms. XNNPACK is not intended for direct use by deep learning practitioners and researchers; instead it provides low-level performance primitives for accelerating high-level machine learning frameworks, such as MediaPipe, TensorFlow Lite, and TensorFlow.js.
XNNPACK implements the following neural network operators:
All operators in XNNPACK support NHWC layout, but additionally allow a custom stride along the channel dimension. Thus, operators can consume a subset of channels in the input tensor and produce a subset of channels in the output tensor, providing zero-cost Channel Split and Channel Concatenation operations.
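The channel-stride idea above can be sketched with plain pointer arithmetic. The function below is illustrative and not XNNPACK's actual API: because the per-pixel stride is passed separately from the channel count, an operator can process a sub-range of channels in a wider tensor just by offsetting the base pointer, with no copy.

```c
#include <assert.h>
#include <stddef.h>

/* Scale `channels` values per pixel in an NHWC tensor, stepping by
 * `input_stride` / `output_stride` floats between pixels.  When the
 * stride exceeds `channels`, the operator touches only a slice of
 * each pixel's channels, leaving the rest of the tensor untouched. */
static void scale_nhwc(size_t pixels, size_t channels,
                       size_t input_stride, size_t output_stride,
                       const float *input, float *output, float s) {
  for (size_t p = 0; p < pixels; p++) {
    for (size_t ch = 0; ch < channels; ch++) {
      output[p * output_stride + ch] = s * input[p * input_stride + ch];
    }
  }
}
```

For example, to split an 8-channel tensor into two 4-channel halves, call the operator twice with `channels = 4` and `input_stride = 8`, passing `input` for the first half and `input + 4` for the second: the "split" is just a pointer offset.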
The table below presents single-threaded performance of the XNNPACK library on two generations of MobileNet models and three generations of Pixel phones.
Model | Pixel, ms | Pixel 2, ms | Pixel 3a, ms |
---|---|---|---|
MobileNet v1 1.0X | 81 | 93 | 88 |
MobileNet v2 1.0X | 48 | 58 | 54 |
Benchmarked on October 9, 2019 with `end2end_bench --benchmark_min_time=5` on an Android/ARM64 build (`bazel build -c opt --config android_arm64 :end2end_bench`) and neural network models with randomized weights and inputs.
XNNPACK is based on the QNNPACK library. Unlike QNNPACK, XNNPACK focuses entirely on floating-point operators, and its API is no longer compatible with QNNPACK.