= Transparent Hugepage Support =

== Objective ==

Performance critical computing applications dealing with large memory
working sets are already running on top of libhugetlbfs and in turn
hugetlbfs. Transparent Hugepage Support is an alternative means of
using huge pages for the backing of virtual memory, one that supports
the automatic promotion and demotion of page sizes and that doesn't
have the shortcomings of hugetlbfs.

Currently it only works for anonymous memory mappings and tmpfs/shmem,
but in the future it can expand to other filesystems.

The reason applications run faster is because of two factors. The
first factor is almost completely irrelevant and not of significant
interest because it also has the downside of requiring larger
clear-page and copy-page operations in page faults, which is a
potentially negative effect. The first factor consists in taking a
single page fault for each 2M virtual region touched by userland (so
reducing the enter/exit kernel frequency by a 512 times factor). This
only matters the first time the memory is accessed for the lifetime of
a memory mapping. The second, long lasting and much more important
factor affects all subsequent accesses to the memory for the whole
runtime of the application. The second factor consists of two
components: 1) the TLB miss will run faster (especially with
virtualization using nested pagetables but almost always also on bare
metal without virtualization) and 2) a single TLB entry will be
mapping a much larger amount of virtual memory, in turn reducing the
number of TLB misses. With virtualization and nested pagetables the
TLB can map entries of larger size only if both KVM and the Linux
guest are using hugepages, but a significant speedup already happens
if only one of the two is using hugepages, just because of the fact
the TLB miss is going to run faster.

== Design ==

- "graceful fallback": mm components which don't have transparent
  hugepage knowledge fall back to breaking a huge pmd mapping into a
  table of ptes and, if necessary, splitting a transparent hugepage.
  Therefore these components can continue working on the regular pages
  or regular pte mappings.

- if a hugepage allocation fails because of memory fragmentation,
  regular pages should be gracefully allocated instead and mixed in
  the same vma without any failure or significant delay and without
  userland noticing

- if some task quits and more hugepages become available (either
  immediately in the buddy or through the VM), guest physical memory
  backed by regular pages should be relocated on hugepages
  automatically (with khugepaged)

- it doesn't require memory reservation and in turn it uses hugepages
  whenever possible (the only possible reservation here is kernelcore=
  to prevent unmovable pages from fragmenting all the memory, but such
  a tweak is not specific to transparent hugepage support and it's a
  generic feature that applies to all dynamic high order allocations
  in the kernel)

Transparent Hugepage Support maximizes the usefulness of free memory
if compared to the reservation approach of hugetlbfs by allowing all
unused memory to be used as cache or other movable (or even unmovable)
entities. It doesn't require reservation to prevent hugepage
allocation failures from being noticeable from userland. It allows
paging and all other advanced VM features to be available on the
hugepages. It requires no modifications for applications to take
advantage of it.

Applications however can be further optimized to take advantage of
this feature, like for example they've been optimized before to avoid
a flood of mmap system calls for every malloc(4k). Optimizing userland
is by far not mandatory and khugepaged already can take care of long
lived page allocations even for hugepage unaware applications that
deal with large amounts of memory.

In certain cases when hugepages are enabled system wide, applications
may end up allocating more memory resources. An application may mmap a
large region but only touch 1 byte of it, in which case a 2M page
might be allocated instead of a 4k page for no good reason. This is
why it's possible to disable hugepages system-wide and to only have
them inside MADV_HUGEPAGE madvise regions.

Embedded systems should enable hugepages only inside madvise regions
to eliminate any risk of wasting any precious byte of memory and to
only run faster.

Applications that get a lot of benefit from hugepages and that don't
risk losing memory by using hugepages should use
madvise(MADV_HUGEPAGE) on their critical mmapped regions.
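
For example, a minimal userland sketch (the mapping size and error
handling are illustrative):

	#include <stdio.h>
	#include <sys/mman.h>

	#define LEN	(16UL * 2 * 1024 * 1024)	/* 16 x 2M */

	int main(void)
	{
		void *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		if (p == MAP_FAILED)
			return 1;
		/* hint that this region benefits from hugepages */
		if (madvise(p, LEN, MADV_HUGEPAGE))
			perror("madvise");
		return 0;
	}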

== sysfs ==

Transparent Hugepage Support for anonymous memory can be entirely disabled
(mostly for debugging purposes) or only enabled inside MADV_HUGEPAGE
regions (to avoid the risk of consuming more memory resources) or enabled
system wide. This can be achieved with one of:

echo always >/sys/kernel/mm/transparent_hugepage/enabled
echo madvise >/sys/kernel/mm/transparent_hugepage/enabled
echo never >/sys/kernel/mm/transparent_hugepage/enabled
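
Reading the file back shows the available values, with the currently
selected one in brackets, e.g.:

cat /sys/kernel/mm/transparent_hugepage/enabled
always [madvise] never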

It's also possible to limit defrag efforts in the VM so that anonymous
hugepages, in case they're not immediately available, are only
generated for madvise regions, or to never try to defrag memory and
simply fall back to regular pages unless hugepages are immediately
available. Clearly if we spend CPU time to defrag memory, we would
expect to gain even more by the fact we use hugepages later instead of
regular pages. This isn't always guaranteed, but it may be more likely
in case the allocation is for a MADV_HUGEPAGE region.

echo always >/sys/kernel/mm/transparent_hugepage/defrag
echo defer >/sys/kernel/mm/transparent_hugepage/defrag
echo madvise >/sys/kernel/mm/transparent_hugepage/defrag
echo never >/sys/kernel/mm/transparent_hugepage/defrag

"always" means that an application requesting THP will stall on allocation
failure and directly reclaim pages and compact memory in an effort to
allocate a THP immediately. This may be desirable for virtual machines
that benefit heavily from THP use and are willing to delay the VM start
to utilise them.

"defer" means that an application will wake kswapd in the background
to reclaim pages and wake kcompactd to compact memory so that THP is
available in the near future. It's the responsibility of khugepaged
to then install the THP pages later.

"madvise" will enter direct reclaim like "always" but only for regions
that have used madvise(MADV_HUGEPAGE). This is the default behaviour.

"never" should be self-explanatory.

By default the kernel tries to use the huge zero page on read page
faults to anonymous mappings. It's possible to disable the huge zero
page by writing 0 or enable it back by writing 1:

echo 0 >/sys/kernel/mm/transparent_hugepage/use_zero_page
echo 1 >/sys/kernel/mm/transparent_hugepage/use_zero_page

khugepaged will be automatically started when
transparent_hugepage/enabled is set to "always" or "madvise", and it'll
be automatically shutdown if it's set to "never".

khugepaged usually runs at low frequency so while one may not want to
invoke defrag algorithms synchronously during the page faults, it
should be worth invoking defrag at least in khugepaged. However it's
also possible to disable defrag in khugepaged by writing 0 or enable
defrag in khugepaged by writing 1:

echo 0 >/sys/kernel/mm/transparent_hugepage/khugepaged/defrag
echo 1 >/sys/kernel/mm/transparent_hugepage/khugepaged/defrag

You can also control how many pages khugepaged should scan at each
pass:

/sys/kernel/mm/transparent_hugepage/khugepaged/pages_to_scan

and how many milliseconds to wait in khugepaged between each pass (you
can set this to 0 to run khugepaged at 100% utilization of one core):

/sys/kernel/mm/transparent_hugepage/khugepaged/scan_sleep_millisecs

and how many milliseconds to wait in khugepaged if there's a hugepage
allocation failure, to throttle the next allocation attempt:

/sys/kernel/mm/transparent_hugepage/khugepaged/alloc_sleep_millisecs
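
For example (the values are purely illustrative, not recommendations):

echo 4096 >/sys/kernel/mm/transparent_hugepage/khugepaged/pages_to_scan
echo 10000 >/sys/kernel/mm/transparent_hugepage/khugepaged/scan_sleep_millisecs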

The khugepaged progress can be seen in the number of pages collapsed:

/sys/kernel/mm/transparent_hugepage/khugepaged/pages_collapsed

and in the number of completed scan passes:

/sys/kernel/mm/transparent_hugepage/khugepaged/full_scans

max_ptes_none specifies how many extra small pages (that are
not already mapped) can be allocated when collapsing a group
of small pages into one large page.

/sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_none

A higher value can cause programs to use additional memory, since
otherwise-unused small pages get folded into huge pages. A lower value
reduces the THP performance gain, since fewer regions qualify for
collapse. The effect of max_ptes_none on CPU time is negligible, so it
can be ignored in that respect.

max_ptes_swap specifies how many pages can be brought in from
swap when collapsing a group of pages into a transparent huge page.

/sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_swap

A higher value can cause excessive swap IO and waste
memory. A lower value can prevent THPs from being
collapsed, resulting in fewer pages being collapsed into
THPs, and lower memory access performance.

== Boot parameter ==

You can change the sysfs boot time defaults of Transparent Hugepage
Support by passing the parameter "transparent_hugepage=always" or
"transparent_hugepage=madvise" or "transparent_hugepage=never"
(without "") to the kernel command line.
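
For example, to get the "madvise" behaviour from boot, add to the
kernel command line:

transparent_hugepage=madvise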

== Hugepages in tmpfs/shmem ==

You can control hugepage allocation policy in tmpfs with the mount
option "huge=". It can have the following values:

  - "always":
    Attempt to allocate huge pages every time we need a new page;

  - "never":
    Do not allocate huge pages;

  - "within_size":
    Only allocate huge page if it will be fully within i_size.
    Also respect fadvise()/madvise() hints;

  - "advise":
    Only allocate huge pages if requested with fadvise()/madvise();

The default policy is "never".

"mount -o remount,huge= /mountpoint" works fine after mount: remounting
huge=never will not attempt to break up huge pages at all, just stop more
from being allocated.
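
For example, to mount a fresh tmpfs instance with hugepages enabled
(the mount point is illustrative):

mount -t tmpfs -o huge=always tmpfs /mnt/mytmpfs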

There's also a sysfs knob to control hugepage allocation policy for the
internal shmem mount: /sys/kernel/mm/transparent_hugepage/shmem_enabled.
The mount is used for SysV SHM, memfds, shared anonymous mmaps (of
/dev/zero or MAP_ANONYMOUS), GPU drivers' DRM objects, Ashmem.

In addition to the policies listed above, shmem_enabled allows two
further values:

  - "deny":
    For use in emergencies, to force the huge option off from
    all mounts;
  - "force":
    Force the huge option on for all - very useful for testing;
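
The policy is set by writing one of the values to the knob, e.g.:

echo advise >/sys/kernel/mm/transparent_hugepage/shmem_enabled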

== Need of application restart ==

The transparent_hugepage/enabled values and tmpfs mount option only affect
future behavior. So to make them effective you need to restart any
application that could have been using hugepages. This also applies to the
regions registered in khugepaged.

== Monitoring usage ==

The number of anonymous transparent huge pages currently used by the
system is available by reading the AnonHugePages field in /proc/meminfo.
To identify what applications are using anonymous transparent huge pages,
it is necessary to read /proc/PID/smaps and count the AnonHugePages fields
for each mapping.

The number of file transparent huge pages mapped to userspace is available
by reading the ShmemPmdMapped and ShmemHugePages fields in /proc/meminfo.
To identify what applications are mapping file transparent huge pages, it
is necessary to read /proc/PID/smaps and count the FileHugeMapped fields
for each mapping.

Note that reading the smaps file is expensive and reading it
frequently will incur overhead.
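
For example (PID stands for the process id of interest):

grep AnonHugePages /proc/meminfo
grep AnonHugePages /proc/PID/smaps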

There are a number of counters in /proc/vmstat that may be used to
monitor how successfully the system is providing huge pages for use.
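
For a quick look at all the thp_* counters at once one can run, for
example:

grep thp_ /proc/vmstat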

thp_fault_alloc is incremented every time a huge page is successfully
	allocated to handle a page fault. This applies to both the
	first time a page is faulted and for COW faults.

thp_collapse_alloc is incremented by khugepaged when it has found
	a range of pages to collapse into one huge page and has
	successfully allocated a new huge page to store the data.

thp_fault_fallback is incremented if a page fault fails to allocate
	a huge page and instead falls back to using small pages.

thp_collapse_alloc_failed is incremented if khugepaged found a range
	of pages that should be collapsed into one huge page but failed
	the allocation.

thp_file_alloc is incremented every time a file huge page is successfully
	allocated.

thp_file_mapped is incremented every time a file huge page is mapped into
	user address space.

thp_split_page is incremented every time a huge page is split into base
	pages. This can happen for a variety of reasons but a common
	reason is that a huge page is old and is being reclaimed.
	This action implies splitting all PMDs the page is mapped with.

thp_split_page_failed is incremented if the kernel fails to split a
	huge page. This can happen if the page was pinned by somebody.

thp_deferred_split_page is incremented when a huge page is put onto split
	queue. This happens when a huge page is partially unmapped and
	splitting it would free up some memory. Pages on split queue are
	going to be split under memory pressure.

thp_split_pmd is incremented every time a PMD is split into a table of
	PTEs. This can happen, for instance, when an application calls
	mprotect() or munmap() on part of a huge page. It doesn't split
	the huge page, only the page table entry.

thp_zero_page_alloc is incremented every time a huge zero page is
	successfully allocated. It includes allocations which were
	dropped due to a race with other allocations. Note, it doesn't
	count every map of the huge zero page, only its allocation.

thp_zero_page_alloc_failed is incremented if the kernel fails to allocate
	a huge zero page and falls back to using small pages.

As the system ages, allocating huge pages may be expensive as the
system uses memory compaction to copy data around memory to free a
huge page for use. There are some counters in /proc/vmstat to help
monitor this overhead.

compact_stall is incremented every time a process stalls to run
	memory compaction so that a huge page is free for use.

compact_success is incremented if the system compacted memory and
	freed a huge page for use.

compact_fail is incremented if the system tries to compact memory
	but failed.

compact_pages_moved is incremented each time a page is moved. If
	this value is increasing rapidly, it implies that the system
	is copying a lot of data to satisfy the huge page allocation.
	It is possible that the cost of copying exceeds any savings
	from reduced TLB misses.

compact_pagemigrate_failed is incremented when the underlying mechanism
	for moving a page failed.

compact_blocks_moved is incremented each time memory compaction examines
	a huge page aligned range of pages.

It is possible to establish how long the stalls were using the function
tracer to record how long was spent in __alloc_pages_nodemask and
using the mm_page_alloc tracepoint to identify which allocations were
for huge pages.

== get_user_pages and follow_page ==

get_user_pages and follow_page, if run on a hugepage, will return the
head or tail pages as usual (exactly as they would do on
hugetlbfs). Most gup users will only care about the actual physical
address of the page and its temporary pinning to release after the I/O
is complete, so they won't ever notice the fact the page is huge. But
if any driver is going to mangle the page structure of the tail
page (like for checking page->mapping or other bits that are relevant
for the head page and not the tail page), it should be updated to
check the head page instead. Taking a reference on any head/tail page
would prevent the page from being split by anyone.

NOTE: these aren't new constraints to the GUP API, and they match the
same constraints that apply to hugetlbfs too, so any driver capable
of handling GUP on hugetlbfs will also work fine on transparent
hugepage backed mappings.

In case you can't handle compound pages if they're returned by
follow_page, the FOLL_SPLIT bit can be specified as a parameter to
follow_page, so that it will split the hugepages before returning
them. Migration for example passes FOLL_SPLIT as a parameter to
follow_page because it's not hugepage aware and in fact it can't work
at all on hugetlbfs (but it instead works fine on transparent
hugepages thanks to FOLL_SPLIT). Migration simply can't deal with
hugepages being returned (as it's not only checking the pfn of the
page and pinning it during the copy but it pretends to migrate the
memory in regular page sizes and with regular pte/pmd mappings).
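
A minimal kernel-internal sketch of that pattern (assumes mmap_sem is
held for read; error handling omitted):

	/* this caller can't handle compound pages: ask follow_page to split */
	struct page *page = follow_page(vma, addr, FOLL_GET | FOLL_SPLIT);

	if (page) {
		/* page is a base page even if the VMA was THP backed */
		put_page(page);
	}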

== Optimizing the applications ==

To be guaranteed that the kernel will map a 2M page immediately in any
memory region, the mmap region has to be hugepage naturally
aligned. posix_memalign() can provide that guarantee.
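
A minimal sketch (the buffer size is illustrative; a real application
would also check errno and may combine this with MADV_HUGEPAGE):

	#include <stdlib.h>
	#include <string.h>

	#define HPAGE_SIZE	(2UL * 1024 * 1024)

	int main(void)
	{
		void *buf;

		/* 2M alignment lets the first fault map a hugepage */
		if (posix_memalign(&buf, HPAGE_SIZE, 4 * HPAGE_SIZE))
			return 1;
		memset(buf, 0, 4 * HPAGE_SIZE);	/* touch to fault in */
		free(buf);
		return 0;
	}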

== Hugetlbfs ==

You can use hugetlbfs on a kernel that has transparent hugepage
support enabled just fine as always. No difference can be noted in
hugetlbfs other than there will be less overall fragmentation. All
usual features belonging to hugetlbfs are preserved and
unaffected. libhugetlbfs will also work fine as usual.

== Graceful fallback ==

Code walking pagetables but unaware of huge pmds can simply call
split_huge_pmd(vma, pmd, addr) where the pmd is the one returned by
pmd_offset. It's trivial to make the code transparent hugepage aware
by just grepping for "pmd_offset" and adding split_huge_pmd where
missing after pmd_offset returns the pmd. Thanks to the graceful
fallback design, with a one liner change, you can avoid writing
hundreds if not thousands of lines of complex code to make your code
hugepage aware.

If you're not walking pagetables but you run into a physical hugepage
that you can't handle natively in your code, you can split it by
calling split_huge_page(page). This is what the Linux VM does before
it tries to swapout the hugepage for example. split_huge_page() can
fail if the page is pinned and you must handle this correctly.

Example to make mremap.c transparent hugepage aware with a one liner
change:

diff --git a/mm/mremap.c b/mm/mremap.c
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -41,6 +41,7 @@ static pmd_t *get_old_pmd(struct mm_stru
 		return NULL;

 	pmd = pmd_offset(pud, addr);
+	split_huge_pmd(vma, pmd, addr);
 	if (pmd_none_or_clear_bad(pmd))
 		return NULL;

== Locking in hugepage aware code ==

We want as much code as possible hugepage aware, as calling
split_huge_page() or split_huge_pmd() has a cost.

To make pagetable walks huge pmd aware, all you need to do is to call
pmd_trans_huge() on the pmd returned by pmd_offset. You must hold the
mmap_sem in read (or write) mode to be sure a huge pmd cannot be
created from under you by khugepaged (khugepaged collapse_huge_page
takes the mmap_sem in write mode in addition to the anon_vma lock). If
pmd_trans_huge returns false, you just fall back to the old code
paths. If instead pmd_trans_huge returns true, you have to take the
page table lock (pmd_lock()) and re-run pmd_trans_huge. Taking the
page table lock will prevent the huge pmd from being converted into a
regular pmd from under you (split_huge_pmd can run in parallel to the
pagetable walk). If the second pmd_trans_huge returns false, you
should just drop the page table lock and fall back to the old code as
before. Otherwise you can proceed to process the huge pmd and the
hugepage natively. Once finished you can drop the page table lock.
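
A sketch of that locking pattern (kernel-internal, simplified; assumes
mmap_sem is already held):

	pmd_t *pmd = pmd_offset(pud, addr);

	if (pmd_trans_huge(*pmd)) {
		spinlock_t *ptl = pmd_lock(mm, pmd);

		if (pmd_trans_huge(*pmd)) {
			/* process the huge pmd/hugepage natively here */
			spin_unlock(ptl);
			return;
		}
		/* raced with a split: fall through to the pte paths */
		spin_unlock(ptl);
	}
	/* ... regular pte handling ... */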

== Refcounts and transparent huge pages ==

Refcounting on THP is mostly consistent with refcounting on other compound
pages:

  - get_page()/put_page() and GUP operate on the head page's ->_refcount.

  - ->_refcount in tail pages is always zero: get_page_unless_zero() never
    succeeds on tail pages.

  - map/unmap of the pages with PTE entry increment/decrement ->_mapcount
    on the relevant sub-page of the compound page.

  - map/unmap of the whole compound page is accounted in compound_mapcount
    (stored in the first tail page). For file huge pages, we also increment
    ->_mapcount of all sub-pages in order to have race-free detection of
    the last unmap of subpages.

PageDoubleMap() indicates that the page is *possibly* mapped with PTEs.

For anonymous pages PageDoubleMap() also indicates ->_mapcount in all
subpages is offset up by one. This additional reference is required to
get race-free detection of unmap of subpages when we have them mapped with
both PMDs and PTEs.

This optimization is required to lower the overhead of per-subpage
mapcount tracking. The alternative is to alter ->_mapcount in all
subpages on each map/unmap of the whole compound page.

For anonymous pages, we set PG_double_map when a PMD of the page gets
split for the first time, but the page still has a PMD mapping. The
additional references go away with the last compound_mapcount.

File pages get PG_double_map set on the first map of the page with PTE,
and it goes away when the page gets evicted from the page cache.

split_huge_page internally has to distribute the refcounts in the head
page to the tail pages before clearing all PG_head/tail bits from the page
structures. It can be done easily for refcounts taken by page table
entries. But we don't have enough information on how to distribute any
additional pins (i.e. from get_user_pages). split_huge_page() fails any
requests to split a pinned huge page: it expects the page count to be
equal to the sum of the mapcounts of all sub-pages plus one (the
split_huge_page caller must have a reference for the head page).

split_huge_page uses migration entries to stabilize page->_refcount and
page->_mapcount of anonymous pages. File pages just get unmapped.

We are safe against physical memory scanners too: the only legitimate way
a scanner can get a reference to a page is get_page_unless_zero().

All tail pages have zero ->_refcount until atomic_add(). This prevents the
scanner from getting a reference to the tail page up to that point. After
the atomic_add() we don't care about the ->_refcount value. We already
know how many references should be uncharged from the head page.

For the head page get_page_unless_zero() will succeed and we don't mind.
It's clear where the reference should go after the split: it will stay on
the head page.

Note that split_huge_pmd() doesn't have any limitation on refcounting:
a pmd can be split at any point and never fails.

== Partial unmap and deferred_split_huge_page() ==

Unmapping part of a THP (with munmap() or some other way) is not going
to free the memory immediately. Instead, we detect that a subpage of
the THP is not in use in page_remove_rmap() and queue the THP for
splitting if memory pressure comes. Splitting will free up the unused
subpages.

Splitting the page right away is not an option due to the locking
context in the place where we can detect partial unmap. It might also
be counterproductive since in many cases the partial unmap happens
during exit(2) if a THP crosses a VMA boundary.

Function deferred_split_huge_page() is used to queue a page for
splitting. The splitting itself will happen when we get memory pressure
via the shrinker interface.