On 32-bit systems, force huge allocations onto arena 0.
The new version of jemalloc moves huge allocations from a single
global cache to per-arena caches. Because each arena then retains its
own cached huge regions, an application that repeatedly makes huge
allocations from newly created threads can exhaust virtual address
space on 32-bit systems.
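
A minimal sketch of the kind of pattern that triggers this, assuming
the allocation size is above the huge threshold and that jemalloc may
bind each new thread to a different arena (the size and iteration
count are illustrative, not from the report):

    /* Each short-lived thread makes one huge allocation. The freed
     * region is retained by whichever arena the thread was bound to,
     * so retained virtual address space grows with the number of
     * distinct arena caches touched. */
    #include <pthread.h>
    #include <stdlib.h>

    #define HUGE_SIZE (8u << 20) /* 8 MiB, assumed huge-class */

    static void *worker(void *arg) {
        (void)arg;
        void *p = malloc(HUGE_SIZE);
        free(p); /* cached by this thread's arena, not shared globally */
        return NULL;
    }

    int main(void) {
        for (int i = 0; i < 100000; i++) {
            pthread_t t;
            if (pthread_create(&t, NULL, worker, NULL) != 0)
                break; /* on 32-bit, address space may already be gone */
            pthread_join(t, NULL);
        }
        return 0;
    }
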
Bug: 22172059
Change-Id: Ic919812f076761f1a4f5ae8313061118f1e19c64
diff --git a/src/huge.c b/src/huge.c
index 6e6824d..e397798 100644
--- a/src/huge.c
+++ b/src/huge.c
@@ -66,7 +66,17 @@
* it is possible to make correct junk/zero fill decisions below.
*/
is_zeroed = zero;
+ /* ANDROID change */
+#if !defined(__LP64__)
+ /* On 32-bit systems, using a per-arena cache can exhaust
+ * virtual address space. Force all huge allocations to
+ * always take place in the first arena.
+ */
+ arena = a0get();
+#else
arena = arena_choose(tsd, arena);
+#endif
+ /* End ANDROID change */
if (unlikely(arena == NULL) || (ret = arena_chunk_alloc_huge(arena,
usize, alignment, &is_zeroed)) == NULL) {
idalloctm(tsd, node, tcache, true);
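
Note: routing all huge allocations through a0get() makes them share
arena 0's single cache, effectively restoring the old single-cache
behavior for huge allocations on 32-bit targets. For scale, a
back-of-the-envelope sketch of why per-arena caching hurts there
(arena count, cache depth, and region size are illustrative
assumptions, not figures from the bug report):

    #include <stdio.h>

    int main(void) {
        const unsigned long long mib = 1ULL << 20;
        const unsigned n_arenas = 16;             /* assumed, e.g. 4 * ncpus */
        const unsigned cached = 4;                /* huge regions retained per cache (assumed) */
        const unsigned long long huge = 32 * mib; /* assumed huge allocation size */

        /* Per-arena caches: every arena can pin its own set of regions. */
        printf("per-arena retention: %llu MiB\n",
            (n_arenas * cached * huge) / mib);          /* 2048 MiB */
        /* One shared cache (all huge allocations on arena 0). */
        printf("arena-0 retention:   %llu MiB\n",
            ((unsigned long long)cached * huge) / mib); /* 128 MiB */
        return 0;
    }

With roughly 2-3 GiB of usable address space in a 32-bit process, the
per-arena figure alone can exhaust what mmap() can return, while a
64-bit process has address space to spare, which is why the #else
branch keeps arena_choose().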