commit | d62f3cca25e5ef28a9ec8e98098393188b71baae | |
---|---|---|
author | Marat Dukhan <maratek@google.com> | Tue Oct 01 12:37:52 2019 -0700 |
committer | XNNPACK Team <xnnpack-github-robot@google.com> | Tue Oct 01 12:38:13 2019 -0700 |
tree | 5882827708ccbfa9354c2ce2361b73ab8d65e669 | |
parent | 22f38e4ee00858c28209aece109cd81276ff1296 | |
Avoid using cpuinfo_get_max_cache_size() function

This function is missing in upstream cpuinfo, and causes build failures in OSS XNNPACK.

PiperOrigin-RevId: 272270763
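For context, a minimal sketch of how the per-level cache accessors that upstream cpuinfo does provide can stand in for `cpuinfo_get_max_cache_size()`. The helper name and the 2 MB fallback are illustrative assumptions, not the actual replacement made in this commit:

```c
#include <stddef.h>
#include <cpuinfo.h>

/* Hypothetical helper: derive the last-level cache size from cpuinfo's
 * per-level accessors instead of cpuinfo_get_max_cache_size(), which is
 * absent in upstream cpuinfo. */
static size_t last_level_cache_size(void) {
  if (!cpuinfo_initialize()) {
    return 2 * 1024 * 1024;  /* assumed default when cpuinfo is unavailable */
  }
  const struct cpuinfo_cache* cache = cpuinfo_get_l3_cache(0);
  if (cache == NULL) {
    cache = cpuinfo_get_l2_cache(0);  /* no L3 cache: fall back to L2 */
  }
  return cache != NULL ? cache->size : 2 * 1024 * 1024;
}
```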
XNNPACK is a highly optimized library of floating-point neural network inference operators for ARM, WebAssembly, and x86 (SSE2 level) platforms. XNNPACK is not intended for direct use by deep learning practitioners and researchers; instead it provides low-level performance primitives for accelerating high-level machine learning frameworks, such as MediaPipe, TensorFlow Lite, and TensorFlow.js.
XNNPACK implements the following neural network operators:
All operators in XNNPACK support NHWC layout, but additionally allow a custom stride along the Channel dimension. Thus, operators can consume a subset of channels in the input tensor and produce a subset of channels in the output tensor, providing zero-cost Channel Split and Channel Concatenation operations, as the sketch below illustrates.
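The following sketch illustrates the idea (it is not XNNPACK's actual API): when consecutive pixels are addressed through a stride larger than the operator's channel count, the operator can work in place on a channel slice of a larger NHWC buffer. The function names and parameters here are hypothetical:

```c
#include <stddef.h>

/* Hypothetical elementwise operator over an NHWC tensor: `channels` values
 * are processed per pixel, but consecutive pixels are `input_stride` /
 * `output_stride` floats apart, so either tensor may be a channel slice
 * of a larger buffer. */
static void scale_nhwc_f32(size_t pixels, size_t channels,
                           size_t input_stride, size_t output_stride,
                           const float* input, float* output, float scale) {
  for (size_t i = 0; i < pixels; i++) {
    for (size_t c = 0; c < channels; c++) {
      output[c] = input[c] * scale;
    }
    input += input_stride;
    output += output_stride;
  }
}

/* Zero-cost Channel Split: process channels [16, 48) of a 64-channel NHWC
 * tensor without copying, by offsetting the base pointer and keeping the
 * full tensor's channel count as the input stride. */
void example(const float* image /* [pixels][64] */, float* out, size_t pixels) {
  scale_nhwc_f32(pixels, /*channels=*/32,
                 /*input_stride=*/64, /*output_stride=*/32,
                 image + 16, out, 0.5f);
}
```

Channel Concatenation works the same way in reverse: each producer writes its slice into a shared output buffer through an output stride equal to the combined channel count, so no copy is ever made.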
XNNPACK is based on the QNNPACK library. However, unlike QNNPACK, XNNPACK focuses entirely on floating-point operators, and its API is no longer compatible with QNNPACK.