commit f9fc9ec431ba2461bfdeb9198e0fef02a2b442e3
author:    Zhi An Ng <zhin@google.com>  Tue Feb 01 13:19:31 2022 -0800
committer: XNNPACK Team <xnnpack-github-robot@google.com>  Tue Feb 01 13:20:33 2022 -0800
tree:      7c79e4e250e45384ab6132abebb138446a5c9e8d
parent:    58cdcf250e9f91840195fe45c565576c9d18b2ff
Integrate JIT-generated GEMM microkernels into create_convolution2d_nhwc.

Introduce a new field, generator, into struct gemm_parameters, which contains JIT code generators for gemm, igemm, gemm1, and igemm1. When set, convolution operator creation will try to generate the microkernel using the JIT. (Right now only gemm is supported; the rest will follow in future patches.)

The xnn_ukernel_gemm and xnn_ukernel_igemm structs also have new fields, struct xnn_code_buffer general_code_buffer and mr1_code_buffer, where the generated code is kept; the code is released when the operator is deleted.

The generator field is only set in the e2e benchmarks, where we update the F32 E2E benchmarks to support testing JIT-generated microkernels.

PiperOrigin-RevId: 425700057
XNNPACK is a highly optimized library of floating-point neural network inference operators for ARM, WebAssembly, and x86 platforms. XNNPACK is not intended for direct use by deep learning practitioners and researchers; instead it provides low-level performance primitives for accelerating high-level machine learning frameworks, such as TensorFlow Lite, TensorFlow.js, PyTorch, and MediaPipe.
XNNPACK implements the following neural network operators:
All operators in XNNPACK support NHWC layout, but additionally allow a custom stride along the Channel dimension. Thus, operators can consume a subset of channels in the input tensor, and produce a subset of channels in the output tensor, providing zero-cost Channel Split and Channel Concatenation operations.
The table below presents single-threaded performance of the XNNPACK library on three generations of MobileNet models and three generations of Pixel phones.
Model | Pixel, ms | Pixel 2, ms | Pixel 3a, ms |
---|---|---|---|
FP32 MobileNet v1 1.0X | 82 | 86 | 88 |
FP32 MobileNet v2 1.0X | 49 | 53 | 55 |
FP32 MobileNet v3 Large | 39 | 42 | 44 |
FP32 MobileNet v3 Small | 12 | 14 | 14 |
The following table presents multi-threaded (using as many threads as there are big cores) performance of the XNNPACK library on three generations of MobileNet models and three generations of Pixel phones.
Model | Pixel, ms | Pixel 2, ms | Pixel 3a, ms |
---|---|---|---|
FP32 MobileNet v1 1.0X | 43 | 27 | 46 |
FP32 MobileNet v2 1.0X | 26 | 18 | 28 |
FP32 MobileNet v3 Large | 22 | 16 | 24 |
FP32 MobileNet v3 Small | 7 | 6 | 8 |
Benchmarked on March 27, 2020 with `end2end_bench --benchmark_min_time=5` on an Android/ARM64 build with Android NDK r21 (`bazel build -c opt --config android_arm64 :end2end_bench`) and neural network models with randomized weights and inputs.
The table below presents multi-threaded performance of the XNNPACK library on three generations of MobileNet models and three generations of Raspberry Pi boards.
Model | RPi Zero W (BCM2835), ms | RPi 2 (BCM2836), ms | RPi 3+ (BCM2837B0), ms | RPi 4 (BCM2711), ms | RPi 4 (BCM2711, ARM64), ms |
---|---|---|---|---|---|
FP32 MobileNet v1 1.0X | 3937 | 299 | 114 | 72 | 76 |
FP32 MobileNet v2 1.0X | 1987 | 187 | 79 | 41 | 44 |
FP32 MobileNet v3 Large | 1658 | 158 | 67 | 38 | 41 |
FP32 MobileNet v3 Small | 487 | 50 | 23 | 13 | 14 |
INT8 MobileNet v1 1.0X | 2598 | 169 | 61 | 29 | 24 |
INT8 MobileNet v2 1.0X | 1487 | 109 | 40 | 20 | 17 |
Benchmarked on Oct 15, 2021 with `end2end-bench --benchmark_min_time=5` on a Raspbian Buster build with CMake (`./scripts/build-local.sh`) and neural network models with randomized weights and inputs. INT8 inference was evaluated with a per-channel quantization schema.
XNNPACK is based on the QNNPACK library. Over time, however, its codebase has diverged significantly, and the XNNPACK API is no longer compatible with QNNPACK.