| commit | bb4c18b433764db0a3bcc0baa0d7bc41f2df97ec | |
|---|---|---|
| author | Frank Barchard <fbarchard@google.com> | Mon Sep 30 11:05:52 2019 -0700 |
| committer | XNNPACK Team <xnnpack-github-robot@google.com> | Mon Sep 30 11:27:55 2019 -0700 |
| tree | e24c8f7d460366ae390913739423c781fe10cf5e | |
| parent | 80fc932f1d9400051f0b4306fadd9ee3369fc727 | |
Report Freq in additional benchmarks

PiperOrigin-RevId: 272021416
XNNPACK is a highly optimized library of floating-point neural network inference operators for ARM, WebAssembly, and x86 (SSE2 level) platforms. XNNPACK is not intended for direct use by deep learning practitioners or researchers; instead it provides low-level performance primitives for accelerating high-level machine learning frameworks, such as MediaPipe, TensorFlow Lite, and TensorFlow.js.
XNNPACK implements the following neural network operators:
All operators in XNNPACK support NHWC layout, but additionally allow a custom stride along the Channel dimension. Thus, operators can consume a subset of channels in the input tensor and produce a subset of channels in the output tensor, providing zero-cost Channel Split and Channel Concatenation operations. A conceptual sketch of this indexing follows below.
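As a rough illustration of the idea (this is not the actual XNNPACK API; the `nhwc_index` helper and all names below are hypothetical), a channel stride larger than the number of channels an operator actually processes lets it read just a slice of a wider NHWC tensor without copying:

```c
#include <stddef.h>
#include <stdio.h>

/* Hypothetical helper: index into an NHWC tensor whose pixels are laid out
 * channel_stride floats apart. When channel_stride > channels processed,
 * the same memory can be viewed as a slice of a wider tensor, so a
 * "Channel Split" is just a base-pointer offset plus a stride. */
static inline size_t nhwc_index(size_t h, size_t w, size_t c,
                                size_t width, size_t channel_stride) {
  return (h * width + w) * channel_stride + c;
}

int main(void) {
  enum { H = 2, W = 2, C_TOTAL = 8, C_SLICE = 3, C_OFFSET = 2 };
  float tensor[H * W * C_TOTAL];
  for (size_t i = 0; i < H * W * C_TOTAL; i++) tensor[i] = (float) i;

  /* View of channels [C_OFFSET, C_OFFSET + C_SLICE): same memory, only a
   * base offset and a channel stride equal to the full channel count. */
  const float* slice = tensor + C_OFFSET;
  for (size_t h = 0; h < H; h++)
    for (size_t w = 0; w < W; w++)
      for (size_t c = 0; c < C_SLICE; c++)
        printf("slice[%zu][%zu][%zu] = %.0f\n",
               h, w, c, slice[nhwc_index(h, w, c, W, C_TOTAL)]);
  return 0;
}
```

The same trick in the output direction (writing with a larger channel stride) lets several operators write adjacent channel ranges of one tensor, which is what makes Channel Concatenation free.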
XNNPACK is based on the QNNPACK library. However, unlike QNNPACK, XNNPACK focuses entirely on floating-point operators, and its API is no longer compatible with QNNPACK.