Simplify ruy tests by removing the complicated logic that determined quantized multipliers and clamp bounds. We now unconditionally do what we used to do only when QUICK_BENCHMARK=1 was passed. That flag was needed in practice to get quick results: the old logic was very slow because it had to rely on a reference implementation of matmul (otherwise it would have been very confusing when matmul regressed).

This required a couple of tweaks, especially to the float tolerance. We are confident that this is a reasonable relaxation of tolerance values that were previously unnecessarily tight.
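For context, the removed logic derived per-test quantization parameters from reference results. A minimal sketch of the kind of computation involved — converting a real-valued multiplier into a fixed-point int32 mantissa plus a base-2 exponent, in the gemmlowp style — is below. The function name and shape are ours for illustration; this is not ruy's actual test code.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Gemmlowp-style quantization of a real multiplier into a fixed-point
// representation: real_multiplier ~= fixedpoint_multiplier * 2^(exponent - 31).
// Hypothetical sketch, not ruy's code.
void QuantizeMultiplier(double real_multiplier,
                        std::int32_t* fixedpoint_multiplier, int* exponent) {
  if (real_multiplier == 0.0) {
    *fixedpoint_multiplier = 0;
    *exponent = 0;
    return;
  }
  // frexp splits real_multiplier into q * 2^exponent with q in [0.5, 1).
  const double q = std::frexp(real_multiplier, exponent);
  std::int64_t q_fixed =
      static_cast<std::int64_t>(std::round(q * (1ll << 31)));
  assert(q_fixed <= (1ll << 31));
  if (q_fixed == (1ll << 31)) {
    // q rounded up to exactly 1.0: halve the mantissa, bump the exponent.
    q_fixed /= 2;
    ++(*exponent);
  }
  *fixedpoint_multiplier = static_cast<std::int32_t>(q_fixed);
}
```

For example, 0.75 maps to mantissa 1610612736 (0.75 * 2^31) with exponent 0, and 0.375 to the same mantissa with exponent -1.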

PiperOrigin-RevId: 289879473
2 files changed
tree: 4f632f4f216c87c54ae928dfc8ac25020f97e36d
README.md

The ruy matrix multiplication library

This is not an officially supported Google product.

ruy is a matrix multiplication library. Its focus is to cover the matrix multiplication needs of neural network inference engines. Its initial user has been TensorFlow Lite, where it is used by default on the ARM CPU architecture.

ruy supports both floating-point and 8-bit-integer-quantized matrices.
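To make "8-bit-integer-quantized" concrete, here is a reference sketch of such a matmul as it appears in neural network inference: int8 inputs offset by zero points, int32 accumulation, a rescale by a real multiplier, then clamping into the int8 destination range. This is for illustration under those common conventions; it is not ruy's API or kernels.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstdint>
#include <vector>

// Reference 8-bit quantized matmul (hypothetical sketch, not ruy's API):
// LHS is rows x depth row-major, RHS is depth x cols column-major, and the
// column-major destination holds clamped int8 values.
std::vector<std::int8_t> ReferenceQuantizedMatMul(
    const std::vector<std::int8_t>& lhs, const std::vector<std::int8_t>& rhs,
    int rows, int depth, int cols, std::int32_t lhs_zero_point,
    std::int32_t rhs_zero_point, double multiplier,
    std::int32_t dst_zero_point) {
  std::vector<std::int8_t> dst(rows * cols);
  for (int c = 0; c < cols; ++c) {
    for (int r = 0; r < rows; ++r) {
      // Accumulate in int32 to avoid overflow of int8 products.
      std::int32_t acc = 0;
      for (int d = 0; d < depth; ++d) {
        acc += (lhs[r * depth + d] - lhs_zero_point) *
               (rhs[c * depth + d] - rhs_zero_point);
      }
      // Rescale, add the destination zero point, clamp to the int8 range.
      std::int32_t v = dst_zero_point +
          static_cast<std::int32_t>(std::lround(acc * multiplier));
      v = std::max<std::int32_t>(-128, std::min<std::int32_t>(127, v));
      dst[c * rows + r] = static_cast<std::int8_t>(v);
    }
  }
  return dst;
}
```

With zero points of 0 and a multiplier of 1.0, a 1x2 times 2x1 product of {2, 3} and {4, 5} yields {23}; a large multiplier saturates the result at 127.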

Efficiency

ruy is designed to achieve maximal performance not just on very large sizes, as is the focus of many established libraries, but on the actual sizes and shapes of the matrices that matter most in current TensorFlow Lite applications. This often means quite small sizes, e.g. 100x100 or even 50x50, and all sorts of rectangular shapes.

ruy is currently only optimized for the ARM architectures (both 64-bit and 32-bit code). Optimization for the Intel x86 architecture is in progress.

ruy is currently optimized only for the following combination of storage orders: LHS = row-major, RHS = column-major, destination = column-major. All other combinations of storage orders fall back to slow reference code at the moment.
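The appeal of this particular combination can be seen in a plain reference loop (a sketch of our own, not ruy's kernels): with a row-major LHS and a column-major RHS, each destination element is a dot product of two contiguous runs of memory, which is the memory-access pattern that vectorizes well.

```cpp
#include <cassert>
#include <vector>

// Reference matmul in ruy's fast-path storage orders (illustrative sketch):
// LHS rows x depth row-major, RHS depth x cols column-major,
// destination rows x cols column-major.
void MatMulRowColCol(const std::vector<float>& lhs,
                     const std::vector<float>& rhs,
                     std::vector<float>& dst,
                     int rows, int depth, int cols) {
  for (int c = 0; c < cols; ++c) {
    for (int r = 0; r < rows; ++r) {
      const float* lhs_row = lhs.data() + r * depth;  // contiguous row
      const float* rhs_col = rhs.data() + c * depth;  // contiguous column
      float acc = 0.0f;
      for (int d = 0; d < depth; ++d) acc += lhs_row[d] * rhs_col[d];
      dst[c * rows + r] = acc;
    }
  }
}
```

For any other combination, at least one of the two inner pointers would stride through memory, which is why those cases currently fall back to reference code.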