internal change

PiperOrigin-RevId: 242471618
42 files changed
tree: b91e8b935de0e237f3393698e1b49e94a2780720
  1. allocator.cc
  2. allocator.h
  3. benchmark.cc
  4. block_map.cc
  5. block_map.h
  6. blocking_counter.cc
  7. blocking_counter.h
  8. check_macros.h
  9. common.h
  10. context.cc
  11. context.h
  12. detect_dotprod.cc
  13. detect_dotprod.h
  14. dispatch.h
  15. example.cc
  16. impl.h
  17. kernel.cc
  18. kernel.h
  19. matrix.h
  20. opt_set.h
  21. pack.cc
  22. pack.h
  23. path.h
  24. pmu.cc
  25. pmu.h
  26. README.md
  27. ruy.h
  28. ruy_test.bzl
  29. ruy_test_ext.bzl
  30. size_util.h
  31. spec.h
  32. test.h
  33. test_fast.cc
  34. test_slow.cc
  35. test_special_specs.cc
  36. thread_pool.cc
  37. thread_pool.h
  38. time.h
  39. trace.cc
  40. trace.h
  41. tune.cc
  42. tune.h
README.md

ruy is not BLAS

ruy is a matrix multiplication library. Its aim is to be the only matrix multiplication library that TensorFlow Lite needs, and to be more efficient than existing libraries in that context.

ruy supports both floating-point (like Eigen) and quantized (like gemmlowp) matrix multiplication.
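
As a rough illustration of what "quantized" means here, the sketch below follows the gemmlowp-style affine quantization scheme, in which a real value is represented as scale * (integer_value - zero_point). This is plain reference code with hypothetical names, not ruy's actual API:

```cpp
#include <cmath>
#include <cstdint>

// Hypothetical reference sketch of a gemmlowp-style affine-quantized matmul.
// A real value is represented as scale * (integer_value - zero_point).
// All matrices are row-major here for simplicity. Not ruy's actual API.
void QuantizedMatMulReference(const std::uint8_t* lhs, std::int32_t lhs_zero_point,
                              const std::uint8_t* rhs, std::int32_t rhs_zero_point,
                              float lhs_scale, float rhs_scale,
                              std::uint8_t* dst, std::int32_t dst_zero_point,
                              float dst_scale, int rows, int depth, int cols) {
  for (int r = 0; r < rows; ++r) {
    for (int c = 0; c < cols; ++c) {
      std::int32_t acc = 0;  // accumulate in 32 bits to avoid overflow
      for (int d = 0; d < depth; ++d) {
        acc += (static_cast<std::int32_t>(lhs[r * depth + d]) - lhs_zero_point) *
               (static_cast<std::int32_t>(rhs[d * cols + c]) - rhs_zero_point);
      }
      // Rescale the int32 accumulator into the destination's quantized domain.
      const float real_value = lhs_scale * rhs_scale * static_cast<float>(acc);
      long q = std::lround(real_value / dst_scale) + dst_zero_point;
      if (q < 0) q = 0;  // clamp to the uint8 representable range
      if (q > 255) q = 255;
      dst[r * cols + c] = static_cast<std::uint8_t>(q);
    }
  }
}
```

The floating-point path would simply be the same triple loop over float data; the quantized path adds the zero-point offsets and the final rescaling step.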

Status

ruy is new and still immature. It has quite good test coverage, but the code is in flux, lacks comments, needs more cleanup, and there are no design docs yet.

Over the next few weeks [April 2019], we hope to improve on all of that and to integrate ruy into TensorFlow Lite, at first as a non-default path for ARM A64 only.

Efficiency

ruy is designed to achieve maximal performance not just on very large matrices, the focus of many established libraries, but on the sizes and shapes of matrices that actually matter most in current TensorFlow Lite applications. These are often quite small, e.g. 100x100 or even 50x50, and come in all sorts of rectangular shapes.
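
To put those sizes in perspective: multiplying an MxK matrix by a KxN matrix takes about 2*M*K*N scalar operations, so a 50x50 product is only about 250,000 operations, and fixed per-call costs such as packing and thread dispatch can rival the arithmetic itself. The throughput figure in this back-of-the-envelope sketch is an assumption for illustration, not a measurement of any particular CPU:

```cpp
#include <cstdio>

// Back-of-the-envelope: why small matmuls are overhead-bound.
// The 10 Gop/s sustained-throughput figure is an illustrative assumption.
int main() {
  const double ops = 2.0 * 50 * 50 * 50;  // ~2*M*K*N ops for a 50x50x50 matmul
  const double ops_per_sec = 10e9;        // assumed sustained throughput
  std::printf("%.0f ops -> %.1f microseconds of pure arithmetic\n",
              ops, ops / ops_per_sec * 1e6);  // ~25 us at the assumed rate
  return 0;
}
```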

ruy is currently optimized only for ARM A64; other architectures fall back to slow reference code for now.

ruy is currently optimized only for the following combination of storage orders: LHS = row-major, RHS = column-major, destination = column-major. All other combinations of storage orders fall back to slow reference code at the moment.
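
Concretely, for that one combination the computation looks like the following minimal float sketch. This is plain illustration code, not ruy's internal kernel (the actual entry point lives in ruy.h):

```cpp
// Reference sketch of the one storage-order combination that is currently
// optimized: row-major LHS, column-major RHS, column-major destination.
void ReferenceMulF32(const float* lhs, const float* rhs, float* dst,
                     int rows, int depth, int cols) {
  for (int c = 0; c < cols; ++c) {
    for (int r = 0; r < rows; ++r) {
      float acc = 0.0f;
      for (int d = 0; d < depth; ++d) {
        acc += lhs[r * depth + d]   // row-major LHS: a row is contiguous
             * rhs[c * depth + d];  // column-major RHS: a column is contiguous
      }
      dst[c * rows + r] = acc;      // column-major destination
    }
  }
}
```

This combination is convenient because the innermost loop reads both operands contiguously in memory, which is what SIMD kernels want.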

With these caveats out of the way, here are benchmark results: