Implement cache index randomization for large allocations.

Extract szad size quantization into {extent,run}_quantize(), and
quantize szad run sizes to the union of valid small region run sizes
and large run sizes.
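
A minimal sketch of the quantization step, assuming a hypothetical
sorted table holding the union of valid run sizes; the real sets are
derived from jemalloc's size classes and are not reproduced here:

    #include <stddef.h>

    /*
     * Hypothetical union of valid small region run sizes and large
     * run sizes, expressed in pages and kept sorted.
     */
    static const size_t valid_run_pages[] = {
        1, 2, 3, 4, 5, 6, 8, 10, 12, 16
    };
    #define NVALID (sizeof(valid_run_pages) / sizeof(valid_run_pages[0]))

    /* Round npages up to the nearest quantized run size. */
    static size_t
    run_quantize_sketch(size_t npages)
    {
        size_t i;

        for (i = 0; i < NVALID; i++) {
            if (valid_run_pages[i] >= npages)
                return (valid_run_pages[i]);
        }
        return (npages); /* Beyond the table; pass through. */
    }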

Refactor iteration in arena_run_first_fit() to use
run_quantize{,_first,_next}(), and add support for padded large runs.
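
A sketch of the refactored first-fit loop; the types and the
avail_run_search()/run_addr() helpers are hypothetical stand-ins for
the arena's available-run tree operations:

    #include <stddef.h>
    #include <stdint.h>

    typedef struct arena_s arena_t;
    typedef struct run_s run_t;

    /* Hypothetical helpers, declared only for the sketch. */
    size_t run_quantize_first(size_t size); /* Smallest fit >= size. */
    size_t run_quantize_next(size_t qsize); /* Next larger quantized size. */
    run_t *avail_run_search(arena_t *arena, size_t qsize);
    void *run_addr(run_t *run);

    /*
     * Visit each quantized size class that can satisfy the request
     * and keep the lowest-addressed available run.
     */
    static run_t *
    arena_run_first_fit_sketch(arena_t *arena, size_t size, size_t max_size)
    {
        run_t *fit = NULL;
        size_t qsize;

        for (qsize = run_quantize_first(size); qsize <= max_size;
            qsize = run_quantize_next(qsize)) {
            run_t *run = avail_run_search(arena, qsize);
            if (run != NULL && (fit == NULL ||
                (uintptr_t)run_addr(run) < (uintptr_t)run_addr(fit)))
                fit = run;
        }
        return (fit);
    }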

For large allocations that have no specified alignment constraints,
compute a pseudo-random, cache-line-aligned offset from the beginning
of the first backing page.  Under typical configurations with 4-KiB
pages and 64-byte cache lines this results in a uniform distribution
of large allocation base addresses across 64 distinct page boundary
offsets.
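
A sketch of the offset computation under the 4-KiB page / 64-byte
cache line assumption; the PRNG itself (per-arena state in the real
code) is not shown, and the names here are illustrative:

    #include <stdint.h>

    #define LG_PAGE       12 /* 4-KiB pages. */
    #define LG_CACHELINE  6  /* 64-byte cache lines. */

    /*
     * Map a pseudo-random value to one of the 64 cache-line-aligned
     * offsets within a page: 0, 64, 128, ..., 4032.
     */
    static uintptr_t
    large_random_offset(uint64_t prng_value)
    {
        uint64_t r = prng_value &
            ((UINT64_C(1) << (LG_PAGE - LG_CACHELINE)) - 1);

        return ((uintptr_t)r << LG_CACHELINE);
    }

Padding each large run (the padded runs mentioned above) leaves room
for the randomized base offset without overflowing the run.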

Add the --disable-cache-oblivious option, primarily intended for
performance testing.
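
For illustration, the option can be reduced to a compile-time constant
that the optimizer folds away, in the spirit of jemalloc's config_*
booleans; the macro name is assumed here:

    #include <stdbool.h>
    #include <stdint.h>

    #ifdef JEMALLOC_CACHE_OBLIVIOUS
    static const bool config_cache_oblivious = true;
    #else
    static const bool config_cache_oblivious = false;
    #endif

    /*
     * Base offset for a new large run; reuses large_random_offset()
     * from the sketch above.  With --disable-cache-oblivious the
     * expression folds to a constant 0.
     */
    static uintptr_t
    large_offset(uint64_t prng_value)
    {
        return (config_cache_oblivious ?
            large_random_offset(prng_value) : 0);
    }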

This resolves #13.
diff --git a/ChangeLog b/ChangeLog
index 33139f9..b6fa366 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -101,6 +101,9 @@
     run fragmentation, smaller runs reduce external fragmentation for small size
     classes, and packed (less uniformly aligned) metadata layout improves CPU
     cache set distribution.
+  - Randomly distribute large allocation base pointer alignment relative to page
+    boundaries in order to more uniformly utilize CPU cache sets.  This can be
+    disabled via the --disable-cache-oblivious configure option.
   - Micro-optimize the fast paths for the public API functions.
   - Refactor thread-specific data to reside in a single structure.  This assures
     that only a single TLS read is necessary per call into the public API.