commit a0ba3ac1c02d9e9043eaa97d2da903adbc1d8b08
author: Benoit Jacob <benoitjacob@google.com>  Mon Apr 08 12:00:37 2019 -0400
committer: Benoit Jacob <benoitjacob@google.com>  Fri Mar 06 13:59:10 2020 -0500
tree b91e8b935de0e237f3393698e1b49e94a2780720
Internal change

PiperOrigin-RevId: 242471618
ruy is a matrix multiplication library. Its aim is to be the only matrix multiplication library that TensorFlow Lite would need, and to be more efficient in that context.
ruy supports both floating-point (like Eigen) and quantized (like gemmlowp).
ruy is very new, immature code. It has quite good test coverage, but the code is in flux, lacks comments, needs more cleanup, and there are no design docs at the moment.
We hope to improve on all that and integrate ruy into TensorFlow Lite, at first as a non-default path for ARM A64 only, over the next few weeks [April 2019].
ruy is designed to achieve maximal performance not just on very large sizes, as is the focus of many established libraries, but on whatever are the actual sizes and shapes of matrices most critical in current TensorFlow Lite applications. This often means quite small sizes, e.g. 100x100 or even 50x50, and all sorts of rectangular shapes.
ruy is currently only optimized for ARM A64; other architectures have only slow reference code at the moment.
ruy is currently optimized only for the following combination of storage orders: LHS = row-major, RHS = column-major, destination = column-major. All other combinations of storage orders fall back to slow reference code at the moment.
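To make the fast-path layout concrete, here is a minimal reference sketch (plain C++, not ruy's API) of a multiplication in exactly that combination of storage orders. Note that with a row-major LHS and a column-major RHS, the innermost loop walks both operands contiguously in memory, which is what makes this combination the natural one to optimize.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Reference matmul in the storage orders ruy optimizes for:
// LHS row-major (M x K), RHS column-major (K x N),
// destination column-major (M x N).
// Illustrative only; this is not ruy code.
void MatMul(const std::vector<float>& lhs,  // row-major, size M*K
            const std::vector<float>& rhs,  // column-major, size K*N
            std::vector<float>& dst,        // column-major, size M*N
            std::size_t M, std::size_t K, std::size_t N) {
  for (std::size_t j = 0; j < N; ++j) {
    for (std::size_t i = 0; i < M; ++i) {
      float acc = 0.0f;
      for (std::size_t k = 0; k < K; ++k) {
        // Row-major LHS: element (i, k) lives at i*K + k.
        // Column-major RHS: element (k, j) lives at j*K + k.
        // Both accesses are contiguous in k.
        acc += lhs[i * K + k] * rhs[j * K + k];
      }
      // Column-major destination: element (i, j) lives at j*M + i.
      dst[j * M + i] = acc;
    }
  }
}
```

For example, with M = K = N = 2, LHS {1, 2, 3, 4} (row-major, i.e. [[1, 2], [3, 4]]) times RHS {5, 7, 6, 8} (column-major, i.e. [[5, 6], [7, 8]]) yields the column-major destination {19, 43, 22, 50}.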
With these caveats out of the way, here are benchmark results: