| commit | 12f1dea2655f61396dfb5f85be7228fc7ad22545 | |
|---|---|---|
| author | Marat Dukhan <maratek@google.com> | Fri Oct 04 14:44:59 2019 -0700 |
| committer | XNNPACK Team <xnnpack-github-robot@google.com> | Fri Oct 04 14:45:23 2019 -0700 |
| tree | a5dc4d20695c60841d75cd904a076089a03d5d04 | |
| parent | c068bb620f309a50c1b75b50c66441b9fe4ec359 | |
Increase static memory in Emscripten benchmarks to 128MB

Fix builds of end-to-end benchmark on Emscripten

PiperOrigin-RevId: 272961641
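For context on the change itself: Emscripten fixes the size of a module's linear memory at link time, so benchmarks that allocate large tensors can run out of heap unless the limit is raised. Below is a hedged sketch of the kind of linker flag such a change involves; the object name `end2end_bench.o` is a placeholder, and XNNPACK's actual build files may spell the option differently (in Emscripten releases of this era the setting was named `TOTAL_MEMORY`):

```sh
# 128 MB = 134217728 bytes; sets the Emscripten module's linear memory size.
# end2end_bench.o is a placeholder object name, not the actual build target.
emcc end2end_bench.o -s TOTAL_MEMORY=134217728 -o end2end_bench.js
```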
XNNPACK is a highly optimized library of floating-point neural network inference operators for ARM, WebAssembly, and x86 (SSE2 level) platforms. XNNPACK is not intended for direct use by deep learning practitioners and researchers; instead, it provides low-level performance primitives for accelerating high-level machine learning frameworks, such as MediaPipe, TensorFlow Lite, and TensorFlow.js.
XNNPACK implements the following neural network operators:
All operators in XNNPACK support NHWC layout, but additionally allow a custom stride along the Channel dimension. Thus, an operator can consume a subset of channels in the input tensor, and produce a subset of channels in the output tensor, providing zero-cost Channel Split and Channel Concatenation operations, as the sketch below illustrates.
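To make the channel-stride idea concrete, here is a minimal self-contained C sketch (not XNNPACK's actual API; the function and parameter names are illustrative): an elementwise operation reads `channels` values per pixel but advances between pixels by a larger `channel_stride`, so it can operate in place on a channel slice of an NHWC tensor.

```c
#include <stddef.h>
#include <stdio.h>

/* Apply an elementwise op (here: scale by 2) to a channel slice of an
 * NHWC tensor. `channels` is the number of channels the op consumes per
 * pixel; `channel_stride` is the distance between consecutive pixels,
 * in elements. When channel_stride > channels, the op touches only a
 * subset of each pixel's channels -- a zero-cost Channel Split. */
static void scale_channel_slice(
    float* base,           /* points at the first channel of the slice */
    size_t pixels,         /* batch * height * width */
    size_t channels,       /* channels in the slice */
    size_t channel_stride) /* total channels in the underlying tensor */
{
  for (size_t p = 0; p < pixels; p++) {
    for (size_t c = 0; c < channels; c++) {
      base[p * channel_stride + c] *= 2.0f;
    }
  }
}

int main(void) {
  /* A 1x1x2 NHWC image with 4 channels: two pixels, channels 0..3. */
  float tensor[2 * 4] = {0, 1, 2, 3, 4, 5, 6, 7};
  /* Process only channels 1..2 of each pixel, in place. */
  scale_channel_slice(tensor + 1, /*pixels=*/2, /*channels=*/2,
                      /*channel_stride=*/4);
  for (int i = 0; i < 8; i++) printf("%g ", tensor[i]);
  printf("\n"); /* prints: 0 2 4 3 4 10 12 7 */
  return 0;
}
```

Channel Concatenation is the same trick in reverse: two operators write disjoint channel slices of one output tensor, using the full tensor's channel count as their stride, so no copy is ever made.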
XNNPACK is based on the QNNPACK library. However, unlike QNNPACK, XNNPACK focuses entirely on floating-point operators, and its API is no longer compatible with QNNPACK's.