mm: prevent MIGRATE_ISOLATE pages from entering other free lists
During CMA allocations, test_pages_isolated() is sometimes
observed to fail. In low-memory situations this leads to
repeated CMA allocation failures.

When testing for isolation, test_pages_isolated() finds that
some pages have been allocated which were sitting on a pcp
list at the moment start_isolate_page_range() marked their
pageblock MIGRATE_ISOLATE. Although the pageblock is marked
isolated, these pages are moved to non-isolate freelists when
the pcp lists are drained. As a result they can be allocated
from other paths while the contiguous allocation is in
progress; in some cases they are even allocated by the
migrate_pages() call of the very alloc_contig_range() that
isolated them.

In free_pcppages_bulk(), when a page is returned to the buddy
allocator, the target free list is chosen from page_private.
For a CMA page that was already on a pcp list when
start_isolate_page_range() ran, page_private still says
MIGRATE_CMA even though the pageblock migratetype is now
MIGRATE_ISOLATE. The page is therefore placed on the CMA
freelist instead of the isolate freelist, and remains
allocatable.

If CMA pages were never allowed onto pcp lists, this issue
would not arise. free_hot_cold_page() already keeps CMA pages
off the pcp lists, but they can still get there through
rmqueue_bulk(). Blocking that path as well would underutilize
CMA pages for non-contiguous allocations, since rmqueue_bulk()
is the common path for order-0 allocations.

Instead, fix free_pcppages_bulk(): when the pageblock is
marked MIGRATE_ISOLATE, free the page to the isolate freelist,
and account only non-isolated pages in NR_FREE_PAGES and
NR_FREE_CMA_PAGES.

CRs-Fixed: 720761
Change-Id: I01b12e6455c75519f1dce5d31467123cb1eb54b7
Signed-off-by: Vinayak Menon <vinmenon@codeaurora.org>
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d4c0534..7eefa94 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -657,6 +657,8 @@
int migratetype = 0;
int batch_free = 0;
int to_free = count;
+ int free = 0;
+ int cma_free = 0;
int mt = 0;

spin_lock(&zone->lock);
@@ -687,17 +689,23 @@
do {
page = list_entry(list->prev, struct page, lru);
mt = get_pageblock_migratetype(page);
+ if (likely(mt != MIGRATE_ISOLATE))
+ mt = page_private(page);
+
/* must delete as __free_one_page list manipulates */
list_del(&page->lru);
/* MIGRATE_MOVABLE list may include MIGRATE_RESERVEs */
- __free_one_page(page, zone, 0, page_private(page));
- trace_mm_page_pcpu_drain(page, 0, page_private(page));
- if (is_migrate_cma(mt))
- __mod_zone_page_state(zone,
- NR_FREE_CMA_PAGES, 1);
+ __free_one_page(page, zone, 0, mt);
+ trace_mm_page_pcpu_drain(page, 0, mt);
+ if (likely(mt != MIGRATE_ISOLATE)) {
+ free++;
+ if (is_migrate_cma(mt))
+ cma_free++;
+ }
} while (--to_free && --batch_free && !list_empty(list));
}
- __mod_zone_page_state(zone, NR_FREE_PAGES, count);
+ __mod_zone_page_state(zone, NR_FREE_PAGES, free);
+ __mod_zone_page_state(zone, NR_FREE_CMA_PAGES, cma_free);

spin_unlock(&zone->lock);
}