commit | c659140ee533febad5143b340e8cc550eba87fe0 | [log] [tgz] |
---|---|---|
author | Frank Barchard <fbarchard@google.com> | Wed Dec 11 12:54:12 2019 -0800 |
committer | XNNPACK Team <xnnpack-github-robot@google.com> | Wed Dec 11 12:54:46 2019 -0800 |
tree | d8a6531eea5f5d2f1ce459ab4bde202e8a613125 | |
parent | 1391604bed7a322b15de390fa380f40168b27638 [diff] |
a73 kernel: move SUBS before clamp and add NOP before branch

Was (Dec 1, with a CMP):
MobileNetV2/T:1/real_time  53047 us
Then (SUBS issued sooner, but without the NOP):
MobileNetV2/T:1/real_time  55467 us
Now:
MobileNetV2/T:1/real_time  51793 us

PiperOrigin-RevId: 285042520
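To make the change concrete, here is a hypothetical AArch64 loop tail (illustrative only, not the actual XNNPACK microkernel; registers and the clamp sequence are made up): SUBS folds the counter decrement into the flag-setting instruction and is issued before the clamp, so the flags are long since resolved when the loop branch executes, and a NOP pads the branch, presumably to improve branch placement in the Cortex-A73 front end, which the timings above bear out.

```
        // Before: decrement and CMP sit right at the branch, so the
        // branch waits on freshly computed flags.
0:      fmax    v16.4s, v16.4s, v30.4s    // clamp to output_min
        fmin    v16.4s, v16.4s, v31.4s    // clamp to output_max
        st1     {v16.4s}, [x6], #16
        sub     x0, x0, #16
        cmp     x0, #0
        b.ne    0b

        // After: SUBS decrements and sets flags before the clamp,
        // and a NOP pads the branch for the Cortex-A73 front end.
0:      subs    x0, x0, #16               // flags resolved early
        fmax    v16.4s, v16.4s, v30.4s    // clamp to output_min
        fmin    v16.4s, v16.4s, v31.4s    // clamp to output_max
        st1     {v16.4s}, [x6], #16
        nop
        b.ne    0b
```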
XNNPACK is a highly optimized library of floating-point neural network inference operators for ARM, WebAssembly, and x86 (SSE2 level) platforms. XNNPACK is not intended for direct use by deep learning practitioners and researchers; instead it provides low-level performance primitives for accelerating high-level machine learning frameworks, such as MediaPipe, TensorFlow Lite, and TensorFlow.js.
XNNPACK implements the following neural network operators:
All operators in XNNPACK support NHWC layout, but additionally allow a custom stride along the Channel dimension. Thus, operators can consume a subset of channels in the input tensor and produce a subset of channels in the output tensor, providing zero-cost Channel Split and Channel Concatenation operations.
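For illustration, here is a minimal plain-C sketch of the idea, assuming a hypothetical element-wise kernel (this is not the XNNPACK API; the function names and the 48-channel layout are made up): because the per-pixel stride is decoupled from the number of channels the kernel operates on, two operator invocations can each work on half of one shared NHWC buffer, which is exactly a Channel Split with no copying. Channel Concatenation is the mirror case, with two operators writing disjoint channel ranges of one output buffer.

```c
#include <stddef.h>

/* Hypothetical element-wise kernel: clamps `channels` values per pixel
 * of an NHWC tensor, stepping between pixels by `input_stride` /
 * `output_stride` elements. When the stride exceeds `channels`, the
 * skipped channels belong to other slices of the same buffer and are
 * left untouched. */
static void clamp_subset_f32(size_t pixels, size_t channels,
                             size_t input_stride, size_t output_stride,
                             const float* input, float* output,
                             float output_min, float output_max) {
  for (size_t p = 0; p < pixels; p++) {
    for (size_t c = 0; c < channels; c++) {
      float v = input[c];
      output[c] = v < output_min ? output_min
                : v > output_max ? output_max : v;
    }
    input += input_stride;
    output += output_stride;
  }
}

/* Zero-cost Channel Split: treat channels [0, 16) and [16, 48) of one
 * 48-channel NHWC tensor as two independent operands; no data is copied. */
void split_example(size_t pixels, const float* in, float* out) {
  clamp_subset_f32(pixels, 16, 48, 48, in,      out,      0.0f, 6.0f);
  clamp_subset_f32(pixels, 32, 48, 48, in + 16, out + 16, -1.0f, 1.0f);
}
```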
The table below presents single-threaded performance of the XNNPACK library on two generations of MobileNet models and three generations of Pixel phones.
Model | Pixel, ms | Pixel 2, ms | Pixel 3a, ms |
---|---|---|---|
MobileNet v1 1.0X | 81 | 93 | 88 |
MobileNet v2 1.0X | 48 | 58 | 54 |
Benchmarked on October 9, 2019 with `end2end_bench --benchmark_min_time=5` on an Android/ARM64 build (`bazel build -c opt --config android_arm64 :end2end_bench`) and neural network models with randomized weights and inputs.
XNNPACK is based on the QNNPACK library. Unlike QNNPACK, XNNPACK focuses entirely on floating-point operators, and its API is no longer compatible with QNNPACK.