commit | 07efec483d36edbecd2f2b675162e17c0eb34241 | |
---|---|---|
author | Frank Barchard <fbarchard@google.com> | Thu Dec 12 14:19:21 2019 -0800 |
committer | XNNPACK Team <xnnpack-github-robot@google.com> | Thu Dec 12 14:19:56 2019 -0800 |
tree | 5f23a339d065b5e2f80d24ef35240dae18703691 | |
parent | 03b51ee1adc80d5b4d48f5a0b0818e03a0429fb5 | |
Run generator for A73 kernel NOP

On Moto Z3 (a73):

Was:
MobileNetV1/T:1/real_time       93347 us    92246 us    14   Freq=2.4576G
MobileNetV2/T:1/real_time       55494 us    54744 us    25   Freq=2.4576G
MobileNetV3Large/T:1/real_time  44674 us    44088 us    31   Freq=2.4576G
MobileNetV3Small/T:1/real_time  14249 us    14054 us    98   Freq=2.4576G

Now:
MobileNetV1/T:1/real_time       85537 us    84500 us    16   Freq=2.4576G
MobileNetV2/T:1/real_time       51934 us    51212 us    27   Freq=2.4576G
MobileNetV3Large/T:1/real_time  42431 us    41871 us    33   Freq=2.4576G
MobileNetV3Small/T:1/real_time  13850 us    13663 us   101   Freq=2.4576G

PiperOrigin-RevId: 285269825
XNNPACK is a highly optimized library of floating-point neural network inference operators for ARM, WebAssembly, and x86 (SSE2 level) platforms. XNNPACK is not intended for direct use by deep learning practitioners and researchers; instead it provides low-level performance primitives for accelerating high-level machine learning frameworks, such as MediaPipe, TensorFlow Lite, and TensorFlow.js.
XNNPACK implements the following neural network operators:
All operators in XNNPACK support the NHWC layout, but additionally allow a custom stride along the Channel dimension. Thus, operators can consume a subset of channels in the input tensor, and produce a subset of channels in the output tensor, providing zero-cost Channel Split and Channel Concatenation operations.
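To make the channel-stride idea concrete, here is a minimal plain-C sketch (it does not use the XNNPACK API; the `nhwc_view` struct and `at` helper are hypothetical) showing how two views with a shared channel stride can expose disjoint channel ranges of one NHWC buffer, i.e. a zero-cost Channel Split:

```c
#include <stdio.h>
#include <stddef.h>

/* Hypothetical NHWC tensor view whose channel stride may exceed its
 * channel count, so a view can expose only a subset of the channels
 * of a larger shared buffer. */
struct nhwc_view {
  const float* data;     /* first channel visible through this view          */
  size_t height, width;
  size_t channels;       /* channels visible through this view               */
  size_t channel_stride; /* channels in the underlying buffer (>= channels)  */
};

/* Element (y, x, c): the stride, not the visible channel count, determines
 * the distance between consecutive pixels in memory. */
static float at(const struct nhwc_view* v, size_t y, size_t x, size_t c) {
  return v->data[(y * v->width + x) * v->channel_stride + c];
}

int main(void) {
  enum { H = 2, W = 2, C = 8 };
  float buffer[H * W * C];
  for (size_t i = 0; i < H * W * C; i++) {
    buffer[i] = (float) i;
  }

  /* "Channel Split" without copying: two views over the same buffer,
   * one covering channels [0, 5), the other channels [5, 8). Writing
   * operator outputs through such views likewise gives a zero-cost
   * Channel Concatenation. */
  struct nhwc_view lo = { buffer,     H, W, 5, C };
  struct nhwc_view hi = { buffer + 5, H, W, 3, C };

  printf("lo(1,1,0) = %.0f\n", at(&lo, 1, 1, 0)); /* buffer[24] = 24 */
  printf("hi(1,1,0) = %.0f\n", at(&hi, 1, 1, 0)); /* buffer[29] = 29 */
  return 0;
}
```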
The table below presents single-threaded performance of the XNNPACK library on two generations of MobileNet models and three generations of Pixel phones.
Model | Pixel, ms | Pixel 2, ms | Pixel 3a, ms |
---|---|---|---|
MobileNet v1 1.0X | 81 | 93 | 88 |
MobileNet v2 1.0X | 48 | 58 | 54 |
Benchmarked on October 9, 2019 with `end2end_bench --benchmark_min_time=5` on an Android/ARM64 build (`bazel build -c opt --config android_arm64 :end2end_bench`) and neural network models with randomized weights and inputs.
The table below presents multi-threaded performance of the XNNPACK library on three generations of MobileNet models and three generations of Raspberry Pi boards.
Model | RPi 2 (BCM2836), ms | RPi 3+ (BCM2837B0), ms | RPi 4 (BCM2711), ms |
---|---|---|---|
MobileNet v1 1.0X | 342 | 122 | 79 |
MobileNet v2 1.0X | 199 | 82 | 47 |
MobileNet v3 Large | 166 | 71 | 42 |
MobileNet v3 Small | 53 | 24 | 15 |
Benchmarked on December 12, 2019 with `end2end_bench --benchmark_min_time=5` on a Raspbian Buster build with CMake (`./scripts/build-local.sh`) and neural network models with randomized weights and inputs.
XNNPACK is based on the QNNPACK library. However, unlike QNNPACK, XNNPACK focuses entirely on floating-point operators, and its API is no longer compatible with QNNPACK.