= Transparent Hugepage Support =

== Objective ==

Performance critical computing applications dealing with large memory
working sets are already running on top of libhugetlbfs and in turn
hugetlbfs. Transparent Hugepage Support is an alternative means of
backing virtual memory with huge pages, one that supports the
automatic promotion and demotion of page sizes and avoids the
shortcomings of hugetlbfs.

Currently it only works for anonymous memory mappings, but in the
future it can be extended to the pagecache layer, starting with tmpfs.

The reason applications run faster is because of two factors. The
first factor is almost completely irrelevant and not of significant
interest because it also has the downside of requiring larger
clear-page and copy-page operations in page faults, which is a
potentially negative effect. The first factor consists in taking a
single page fault for each 2M virtual region touched by userland
(thus reducing the enter/exit kernel frequency by a factor of 512).
This only matters the first time the memory is accessed in the
lifetime of a memory mapping. The second, long lasting and much more
important factor affects all subsequent accesses to the memory for
the whole runtime of the application. The second factor consists of
two components: 1) the TLB miss will run faster (especially with
virtualization using nested pagetables, but almost always also on
bare metal without virtualization) and 2) a single TLB entry will be
mapping a much larger amount of virtual memory, in turn reducing the
number of TLB misses. With virtualization and nested pagetables the
TLB entries can map a larger size only if both KVM and the Linux
guest are using hugepages, but a significant speedup already happens
if only one of the two is using hugepages, just because the TLB miss
is going to run faster.

== Design ==

- "graceful fallback": mm components which don't have transparent
  hugepage knowledge fall back to breaking a transparent hugepage and
  working on the regular pages and their respective regular pmd/pte
  mappings

- if a hugepage allocation fails because of memory fragmentation,
  regular pages should be gracefully allocated instead and mixed in
  the same vma without any failure or significant delay and without
  userland noticing

- if some task quits and more hugepages become available (either
  immediately in the buddy or through the VM), guest physical memory
  backed by regular pages should be relocated to hugepages
  automatically (with khugepaged)

- it doesn't require memory reservation and in turn it uses hugepages
  whenever possible (the only possible reservation here is kernelcore=
  to avoid unmovable pages fragmenting all the memory, but such a
  tweak is not specific to transparent hugepage support and it's a
  generic feature that applies to all dynamic high order allocations
  in the kernel)

- this initial support only offers the feature in the anonymous memory
  regions but it'd be ideal to move it to tmpfs and the pagecache
  later

Transparent Hugepage Support maximizes the usefulness of free memory
compared to the reservation approach of hugetlbfs by allowing all
unused memory to be used as cache or for other movable (or even
unmovable) entities. It doesn't require reservation to prevent
hugepage allocation failures from being noticeable from userland. It
allows paging and all other advanced VM features to be available on
the hugepages. It requires no modifications for applications to take
advantage of it.

Applications however can be further optimized to take advantage of
this feature, like for example they've been optimized before to avoid
a flood of mmap system calls for every malloc(4k). Optimizing userland
is by far not mandatory and khugepaged already can take care of long
lived page allocations even for hugepage unaware applications that
deal with large amounts of memory.

In certain cases when hugepages are enabled system wide, applications
may end up allocating more memory resources. An application may mmap a
large region but only touch 1 byte of it; in that case a 2M page might
be allocated instead of a 4k page for no good reason. This is why it's
possible to disable hugepages system-wide and to only have them inside
MADV_HUGEPAGE madvise regions.

Embedded systems should enable hugepages only inside madvise regions
to eliminate any risk of wasting precious bytes of memory and to only
run faster.

Applications that get a lot of benefit from hugepages and that don't
risk losing memory by using them should use madvise(MADV_HUGEPAGE) on
their critical mmapped regions.
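
For example, a minimal userland sketch (assuming the x86-64 2M
hugepage size; the mapping length is arbitrary) could look like:

#include <string.h>
#include <sys/mman.h>

#define HPAGE_SIZE (2UL * 1024 * 1024)  /* assumed 2M hugepage size */

int main(void)
{
        size_t len = 64 * HPAGE_SIZE;
        void *p;

        /* anonymous mapping; mmap returns page aligned, not
           necessarily hugepage aligned, memory */
        p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
                return 1;

        /* ask the kernel to back this region with hugepages even when
           the system-wide policy is "madvise" */
        if (madvise(p, len, MADV_HUGEPAGE))
                return 1;

        memset(p, 1, len);      /* touch the memory */
        return 0;
}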

== sysfs ==

Transparent Hugepage Support can be entirely disabled (mostly for
debugging purposes) or only enabled inside MADV_HUGEPAGE regions (to
avoid the risk of consuming more memory resources) or enabled system
wide. This can be achieved with one of:

echo always >/sys/kernel/mm/transparent_hugepage/enabled
echo madvise >/sys/kernel/mm/transparent_hugepage/enabled
echo never >/sys/kernel/mm/transparent_hugepage/enabled

It's also possible to limit the VM's defrag efforts to generate
hugepages (in case they're not immediately free) to madvise regions
only, or to never try to defrag memory and simply fall back to regular
pages unless hugepages are immediately available. Clearly if we spend
CPU time to defrag memory, we would expect to gain even more later by
using hugepages instead of regular pages. This isn't always
guaranteed, but it is more likely when the allocation is for a
MADV_HUGEPAGE region.

echo always >/sys/kernel/mm/transparent_hugepage/defrag
echo madvise >/sys/kernel/mm/transparent_hugepage/defrag
echo never >/sys/kernel/mm/transparent_hugepage/defrag

khugepaged will be automatically started when
transparent_hugepage/enabled is set to "always" or "madvise", and it'll
be automatically shut down if it's set to "never".

khugepaged usually runs at low frequency so, while one may not want to
invoke defrag algorithms synchronously during the page faults, it
should be worth invoking defrag at least in khugepaged. However it's
also possible to disable defrag in khugepaged by writing 0 or to
enable defrag in khugepaged by writing 1:

echo 0 >/sys/kernel/mm/transparent_hugepage/khugepaged/defrag
echo 1 >/sys/kernel/mm/transparent_hugepage/khugepaged/defrag

You can also control how many pages khugepaged should scan at each
pass:

/sys/kernel/mm/transparent_hugepage/khugepaged/pages_to_scan

and how many milliseconds to wait in khugepaged between each pass (you
can set this to 0 to run khugepaged at 100% utilization of one core):

/sys/kernel/mm/transparent_hugepage/khugepaged/scan_sleep_millisecs

and how many milliseconds to wait in khugepaged if there's a hugepage
allocation failure, to throttle the next allocation attempt:

/sys/kernel/mm/transparent_hugepage/khugepaged/alloc_sleep_millisecs

The khugepaged progress can be seen in the number of pages collapsed:

/sys/kernel/mm/transparent_hugepage/khugepaged/pages_collapsed

and in the number of full scans completed:

/sys/kernel/mm/transparent_hugepage/khugepaged/full_scans

== Boot parameter ==

You can change the sysfs boot time defaults of Transparent Hugepage
Support by passing the parameter "transparent_hugepage=always" or
"transparent_hugepage=madvise" or "transparent_hugepage=never"
(without "") to the kernel command line.

== Need for application restart ==

The transparent_hugepage/enabled values only affect future
behavior. So to make them effective you need to restart any
application that could have been using hugepages. This also applies to
the regions registered in khugepaged.

== Monitoring usage ==

The number of transparent huge pages currently used by the system is
available by reading the AnonHugePages field in /proc/meminfo. To
identify what applications are using transparent huge pages, it is
necessary to read /proc/PID/smaps and count the AnonHugePages fields
for each mapping. Note that reading the smaps file is expensive and
reading it frequently will incur overhead.
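
As a small illustration (a sketch, not part of the kernel sources),
the system wide value can be read programmatically like this:

#include <stdio.h>
#include <string.h>

/* print the AnonHugePages line of /proc/meminfo (value is in kB);
   error handling kept minimal */
int main(void)
{
        FILE *f = fopen("/proc/meminfo", "r");
        char line[256];

        if (!f)
                return 1;
        while (fgets(line, sizeof(line), f))
                if (!strncmp(line, "AnonHugePages:", 14))
                        fputs(line, stdout);
        fclose(f);
        return 0;
}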

There are a number of counters in /proc/vmstat that may be used to
monitor how successfully the system is providing huge pages for use.

thp_fault_alloc is incremented every time a huge page is successfully
        allocated to handle a page fault. This applies both to the
        first time a page is faulted and to COW faults.

thp_collapse_alloc is incremented by khugepaged when it has found
        a range of pages to collapse into one huge page and has
        successfully allocated a new huge page to store the data.

thp_fault_fallback is incremented if a page fault fails to allocate
        a huge page and instead falls back to using small pages.

thp_collapse_alloc_failed is incremented if khugepaged found a range
        of pages that should be collapsed into one huge page but failed
        the allocation.

thp_split is incremented every time a huge page is split into base
        pages. This can happen for a variety of reasons but a common
        reason is that a huge page is old and is being reclaimed.

As the system ages, allocating huge pages may be expensive as the
system uses memory compaction to copy data around memory to free a
huge page for use. There are some counters in /proc/vmstat to help
monitor this overhead.

compact_stall is incremented every time a process stalls to run
        memory compaction so that a huge page is free for use.

compact_success is incremented if the system compacted memory and
        freed a huge page for use.

compact_fail is incremented if the system tries to compact memory
        but fails.

compact_pages_moved is incremented each time a page is moved. If
        this value is increasing rapidly, it implies that the system
        is copying a lot of data to satisfy the huge page allocation.
        It is possible that the cost of copying exceeds any savings
        from reduced TLB misses.

compact_pagemigrate_failed is incremented when the underlying mechanism
        for moving a page failed.

compact_blocks_moved is incremented each time memory compaction examines
        a huge page aligned range of pages.

It is possible to establish how long the stalls were using the function
tracer to record how much time was spent in __alloc_pages_nodemask and
using the mm_page_alloc tracepoint to identify which allocations were
for huge pages.

== get_user_pages and follow_page ==

get_user_pages and follow_page, if run on a hugepage, will return the
head or tail pages as usual (exactly as they would do on
hugetlbfs). Most gup users will only care about the actual physical
address of the page and its temporary pinning, to be released after
the I/O is complete, so they won't ever notice the fact that the page
is huge. But if any driver is going to mangle the page structure of a
tail page (like checking page->mapping or other bits that are relevant
for the head page and not the tail page), it should be updated to
check the head page instead (while serializing properly against
split_huge_page() to avoid the head and tail pages disappearing from
under it; see the futex code for an example of that, hugetlbfs also
needed special handling in the futex code for similar reasons).

NOTE: these aren't new constraints to the GUP API, and they match the
same constraints that apply to hugetlbfs too, so any driver capable
of handling GUP on hugetlbfs will also work fine on transparent
hugepage backed mappings.

In case you can't handle compound pages if they're returned by
follow_page, the FOLL_SPLIT bit can be specified as a parameter to
follow_page, so that it will split the hugepages before returning
them. Migration for example passes FOLL_SPLIT as a parameter to
follow_page because it's not hugepage aware and in fact it can't work
at all on hugetlbfs (but it instead works fine on transparent
hugepages thanks to FOLL_SPLIT). Migration simply can't deal with
hugepages being returned (as it's not only checking the pfn of the
page and pinning it during the copy, but it pretends to migrate the
memory in regular page sizes and with regular pte/pmd mappings).
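
As an illustration only (the helper below is hypothetical and assumes
the caller holds the mmap_sem in read mode), an in-kernel user that
cannot deal with compound pages could do:

#include <linux/err.h>
#include <linux/mm.h>

/* hypothetical helper: return a pinned regular sized page backing
   "addr"; FOLL_SPLIT makes follow_page split a transparent hugepage
   before returning it, so a compound page is never handed back; the
   caller drops the pin with put_page() when done */
static struct page *get_small_page(struct vm_area_struct *vma,
                                   unsigned long addr)
{
        struct page *page;

        page = follow_page(vma, addr, FOLL_GET | FOLL_SPLIT);
        if (IS_ERR_OR_NULL(page))
                return NULL;
        return page;
}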

== Optimizing the applications ==

To be guaranteed that the kernel will map a 2M page immediately in any
memory region, the mmap region has to be hugepage naturally
aligned. posix_memalign() can provide that guarantee.
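
For example (a sketch; the helper name and the assumed 2M hugepage
size are for illustration only):

#include <stdlib.h>
#include <sys/mman.h>

#define HPAGE_SIZE (2UL * 1024 * 1024)  /* assumed 2M hugepage size */

/* allocate len bytes aligned to the hugepage size so the kernel can
   map hugepages starting from the very first fault */
static void *alloc_hugepage_aligned(size_t len)
{
        void *p;

        if (posix_memalign(&p, HPAGE_SIZE, len))
                return NULL;
        /* optionally also mark the region for hugepage backing */
        madvise(p, len, MADV_HUGEPAGE);
        return p;
}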

== Hugetlbfs ==

You can use hugetlbfs on a kernel that has transparent hugepage
support enabled just fine as always. No difference can be noted in
hugetlbfs other than there will be less overall fragmentation. All
usual features belonging to hugetlbfs are preserved and
unaffected. libhugetlbfs will also work fine as usual.

== Graceful fallback ==

Code walking pagetables but unaware about huge pmds can simply call
split_huge_page_pmd(vma, addr, pmd) where the pmd is the one returned
by pmd_offset. It's trivial to make the code transparent hugepage
aware by just grepping for "pmd_offset" and adding split_huge_page_pmd
where missing after pmd_offset returns the pmd. Thanks to the graceful
fallback design, with a one-liner change, you can avoid writing
hundreds if not thousands of lines of complex code to make your code
hugepage aware.

If you're not walking pagetables but you run into a physical hugepage
that you can't handle natively in your code, you can split it by
calling split_huge_page(page). This is what the Linux VM does before
it tries to swap out the hugepage for example.

Example to make mremap.c transparent hugepage aware with a one-liner
change:

diff --git a/mm/mremap.c b/mm/mremap.c
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -41,6 +41,7 @@ static pmd_t *get_old_pmd(struct mm_stru
                return NULL;

        pmd = pmd_offset(pud, addr);
+       split_huge_page_pmd(vma, addr, pmd);
        if (pmd_none_or_clear_bad(pmd))
                return NULL;

== Locking in hugepage aware code ==

We want as much code as possible hugepage aware, as calling
split_huge_page() or split_huge_page_pmd() has a cost.

To make pagetable walks huge pmd aware, all you need to do is to call
pmd_trans_huge() on the pmd returned by pmd_offset. You must hold the
mmap_sem in read (or write) mode to be sure a huge pmd cannot be
created from under you by khugepaged (khugepaged collapse_huge_page
takes the mmap_sem in write mode in addition to the anon_vma lock). If
pmd_trans_huge returns false, you just fall back to the old code
paths. If instead pmd_trans_huge returns true, you have to take the
mm->page_table_lock and re-run pmd_trans_huge. Taking the
page_table_lock will prevent the huge pmd from being converted into a
regular pmd from under you (split_huge_page can run in parallel to the
pagetable walk). If the second pmd_trans_huge returns false, you
should just drop the page_table_lock and fall back to the old code as
before. Otherwise you should run pmd_trans_splitting on the pmd. In
case pmd_trans_splitting returns true, it means split_huge_page is
already in the middle of splitting the page. So if pmd_trans_splitting
returns true it's enough to drop the page_table_lock, call
wait_split_huge_page and then fall back to the old code paths. You are
guaranteed that by the time wait_split_huge_page returns, the pmd is
no longer huge. If pmd_trans_splitting returns false, you can proceed
to process the huge pmd and the hugepage natively. Once finished you
can drop the page_table_lock.
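
The rules above roughly translate into a walk like the following (a
sketch only: the function name and the work done on the huge pmd are
hypothetical, the real declarations live in linux/mm.h and
linux/huge_mm.h):

/* caller holds mmap_sem in read (or write) mode */
static void walk_one_pmd(struct mm_struct *mm, struct vm_area_struct *vma,
                         pmd_t *pmd, unsigned long addr)
{
        if (pmd_trans_huge(*pmd)) {
                spin_lock(&mm->page_table_lock);
                if (pmd_trans_huge(*pmd)) {
                        if (pmd_trans_splitting(*pmd)) {
                                /* split_huge_page is in flight: wait for
                                   it and take the regular pte paths below */
                                spin_unlock(&mm->page_table_lock);
                                wait_split_huge_page(vma->anon_vma, pmd);
                        } else {
                                /* ... process the huge pmd natively ... */
                                spin_unlock(&mm->page_table_lock);
                                return;
                        }
                } else {
                        /* the huge pmd was split under us: fall through */
                        spin_unlock(&mm->page_table_lock);
                }
        }
        /* ... regular pte-level processing ... */
}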

== compound_lock, get_user_pages and put_page ==

split_huge_page internally has to distribute the refcounts in the head
page to the tail pages before clearing all PG_head/tail bits from the
page structures. It can do that easily for refcounts taken by huge pmd
mappings. But the GUP API as created by hugetlbfs (that returns head
and tail pages if running get_user_pages on an address backed by any
hugepage), requires the refcount to be accounted on the tail pages and
not only in the head pages, if we want to be able to run
split_huge_page while there are gup pins established on any tail
page. Failure to be able to run split_huge_page if there's any gup pin
on any tail page would mean having to split all hugepages upfront in
get_user_pages, which is unacceptable as too many gup users are
performance critical and they must work natively on hugepages like
they work natively on hugetlbfs already (hugetlbfs is simpler because
hugetlbfs pages cannot be split, so there is no requirement to account
the pins on the tail pages for hugetlbfs). If we didn't account the
gup refcounts on the tail pages during gup, we wouldn't know which
tail page is pinned by gup and which is not while we run
split_huge_page. But we still have to add the gup pin to the head page
too, to know when we can free the compound page in case it's never
split during its lifetime. That requires changing not just get_page,
but put_page as well so that when put_page runs on a tail page (and
only on a tail page) it will find its respective head page, and then
it will decrease the head page refcount in addition to the tail page
refcount. To obtain a head page reliably and to decrease its refcount
without race conditions, put_page has to serialize against
__split_huge_page_refcount using a special per-page lock called
compound_lock.