= Userfaultfd =

== Objective ==

Userfaults allow the implementation of on-demand paging from userland
and more generally they allow userland to take control of various
memory page faults, something otherwise only the kernel code could do.

For example, userfaults allow a proper and more optimal implementation
of the PROT_NONE+SIGSEGV trick.

== Design ==

Userfaults are delivered and resolved through the userfaultfd syscall.

The userfaultfd (aside from registering and unregistering virtual
memory ranges) provides two primary functionalities:

1) read/POLLIN protocol to notify a userland thread of the faults
   happening

2) various UFFDIO_* ioctls that can manage the virtual memory regions
   registered in the userfaultfd that allow userland to efficiently
   resolve the userfaults it receives via 1) or to manage the virtual
   memory in the background

The real advantage of userfaults compared to regular virtual memory
management with mremap/mprotect is that userfaults, in all their
operations, never involve heavyweight structures like vmas (in fact
the userfaultfd runtime load never takes the mmap_sem for writing).

Vmas are not suitable for page- (or hugepage-) granular fault tracking
when dealing with virtual address spaces that could span
Terabytes. Too many vmas would be needed for that.

The userfaultfd, once opened by invoking the syscall, can also be
passed using unix domain sockets to a manager process, so the same
manager process could handle the userfaults of a multitude of
different processes without them being aware of what is going on
(unless of course they later try to use the userfaultfd themselves
on the same region the manager is already tracking, which is a corner
case that would currently return -EBUSY).
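
A minimal sketch of such a hand-off over a unix domain socket with
SCM_RIGHTS could look as follows (the already connected socket "sock"
is an assumption of the example; error handling omitted):

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Pass the userfaultfd "uffd" to the manager over "sock". */
    static int send_uffd(int sock, int uffd)
    {
        char dummy = 0;
        struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
        union {
            char buf[CMSG_SPACE(sizeof(int))];
            struct cmsghdr align; /* force cmsghdr alignment */
        } control = { { 0 } };
        struct msghdr msg = {
            .msg_iov = &iov,
            .msg_iovlen = 1,
            .msg_control = control.buf,
            .msg_controllen = sizeof(control.buf),
        };
        struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &uffd, sizeof(int));
        return sendmsg(sock, &msg, 0);
    }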

== API ==

When first opened the userfaultfd must be enabled by invoking the
UFFDIO_API ioctl specifying a uffdio_api.api value set to UFFD_API (or
a later API version) which will specify the read/POLLIN protocol
userland intends to speak on the UFFD. The UFFDIO_API ioctl, if
successful (i.e. if the requested uffdio_api.api is spoken also by the
running kernel), will return into uffdio_api.features and
uffdio_api.ioctls two 64bit bitmasks of respectively the activated
features of the read(2) protocol and the generic ioctls available.
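
As an illustration, the handshake could be implemented along these
lines (a minimal sketch, assuming the kernel headers export
__NR_userfaultfd; error reporting trimmed):

    #include <fcntl.h>
    #include <linux/userfaultfd.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static int uffd_open(void)
    {
        /* Not wrapped by glibc: invoke the raw syscall. */
        int uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
        struct uffdio_api api = { .api = UFFD_API };

        if (uffd < 0)
            return -1;
        /* Negotiate the protocol version; on success the kernel
           fills in api.features and api.ioctls. */
        if (ioctl(uffd, UFFDIO_API, &api) < 0) {
            close(uffd);
            return -1;
        }
        return uffd;
    }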

Once the userfaultfd has been enabled the UFFDIO_REGISTER ioctl should
be invoked (if present in the returned uffdio_api.ioctls bitmask) to
register a memory range in the userfaultfd by setting the
uffdio_register structure accordingly. The uffdio_register.mode
bitmask will specify to the kernel which kind of faults to track for
the range (UFFDIO_REGISTER_MODE_MISSING would track missing
pages). The UFFDIO_REGISTER ioctl will return the
uffdio_register.ioctls bitmask of ioctls that are suitable to resolve
userfaults on the range registered. Not all ioctls will necessarily be
supported for all memory types depending on the underlying virtual
memory backend (anonymous memory vs tmpfs vs real filebacked
mappings).
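
A sketch of such a registration, assuming "addr" and "len" describe
an already mmap()ed, page aligned anonymous range:

    #include <linux/userfaultfd.h>
    #include <sys/ioctl.h>

    static int uffd_register_missing(int uffd, void *addr,
                                     unsigned long len)
    {
        struct uffdio_register reg = {
            .range = {
                .start = (unsigned long) addr,
                .len = len,
            },
            .mode = UFFDIO_REGISTER_MODE_MISSING,
        };

        if (ioctl(uffd, UFFDIO_REGISTER, &reg) < 0)
            return -1;
        /* reg.ioctls now lists the ioctls (e.g. UFFDIO_COPY)
           usable to resolve userfaults on this range. */
        return 0;
    }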

Userland can use the uffdio_register.ioctls to manage the virtual
address space in the background (to add or potentially also remove
memory from the userfaultfd registered range). This means a userfault
could be triggering just before userland maps in the background the
user-faulted page.
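
For instance, a background population of part of the registered range
with zero pages might look like this sketch (the "start"/"len"
parameters are assumptions of the example and must be page aligned):

    #include <linux/userfaultfd.h>
    #include <sys/ioctl.h>

    /* Fill a hole with zero pages before any fault hits it. */
    static int uffd_zeropage(int uffd, unsigned long start,
                             unsigned long len)
    {
        struct uffdio_zeropage zp = {
            .range = { .start = start, .len = len },
            .mode = 0,
        };

        return ioctl(uffd, UFFDIO_ZEROPAGE, &zp);
    }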

The primary ioctl to resolve userfaults is UFFDIO_COPY. It atomically
copies a page into the userfault registered range and wakes up the
blocked userfaults (unless uffdio_copy.mode &
UFFDIO_COPY_MODE_DONTWAKE is set). Other ioctls work similarly to
UFFDIO_COPY. They're atomic in the sense of guaranteeing that nothing
can see a half copied page, since the fault will keep userfaulting
until the copy has finished.
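
Putting the protocol together, a minimal sketch of a thread resolving
one userfault with UFFDIO_COPY could be (the preconstructed source
"page" of "page_size" bytes is an assumption of the example):

    #include <linux/userfaultfd.h>
    #include <poll.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    static int uffd_handle_one(int uffd, void *page,
                               unsigned long page_size)
    {
        struct pollfd pollfd = { .fd = uffd, .events = POLLIN };
        struct uffd_msg msg;
        struct uffdio_copy copy;

        /* Wait for the read/POLLIN protocol to signal a fault. */
        if (poll(&pollfd, 1, -1) < 0)
            return -1;
        if (read(uffd, &msg, sizeof(msg)) != sizeof(msg))
            return -1; /* e.g. EAGAIN: fault already resolved */
        if (msg.event != UFFD_EVENT_PAGEFAULT)
            return -1;

        /* Atomically map "page" at the faulting address and wake
           the blocked fault(s). */
        copy.dst = msg.arg.pagefault.address & ~(page_size - 1);
        copy.src = (unsigned long) page;
        copy.len = page_size;
        copy.mode = 0; /* no UFFDIO_COPY_MODE_DONTWAKE: wake now */
        return ioctl(uffd, UFFDIO_COPY, &copy);
    }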

== QEMU/KVM ==

QEMU/KVM is using the userfaultfd syscall to implement postcopy live
migration. Postcopy live migration is one form of memory
externalization consisting of a virtual machine running with part or
all of its memory residing on a different node in the cloud. The
userfaultfd abstraction is generic enough that not a single line of
KVM kernel code had to be modified in order to add postcopy live
migration to QEMU.

Guest async page faults, FOLL_NOWAIT and all other GUP features work
just fine in combination with userfaults. Userfaults trigger async
page faults in the guest scheduler so those guest processes that
aren't waiting for userfaults (i.e. network bound) can keep running in
the guest vcpus.

It is generally beneficial to run one pass of precopy live migration
just before starting postcopy live migration, in order to avoid
generating userfaults for readonly guest regions.

The implementation of postcopy live migration currently uses one
single bidirectional socket but in the future two different sockets
will be used (to reduce the latency of the userfaults to the minimum
possible without having to decrease /proc/sys/net/ipv4/tcp_wmem).

The QEMU in the source node writes all pages that it knows are missing
in the destination node into the socket, and the migration thread of
the QEMU running in the destination node runs UFFDIO_COPY|ZEROPAGE
ioctls on the userfaultfd in order to map the received pages into the
guest (UFFDIO_ZEROPAGE is used if the source page was a zero page).

A different postcopy thread in the destination node listens with
poll() to the userfaultfd in parallel. When a POLLIN event is
generated after a userfault triggers, the postcopy thread read()s from
the userfaultfd and receives the fault address (or -EAGAIN in case the
userfault was already resolved and woken by a UFFDIO_COPY|ZEROPAGE run
by the parallel QEMU migration thread).

After the QEMU postcopy thread (running in the destination node) gets
the userfault address, it writes the information about the missing
page into the socket. The QEMU source node receives the information
and roughly "seeks" to that page address and continues sending all
remaining missing pages from that new page offset. Soon after that
(just the time to flush the tcp_wmem queue through the network) the
migration thread in the QEMU running in the destination node will
receive the page that triggered the userfault and it'll map it as
usual with UFFDIO_COPY|ZEROPAGE (without actually knowing if it was
spontaneously sent by the source or if it was an urgent page
requested through a userfault).

By the time the userfaults start, the QEMU in the destination node
doesn't need to keep any per-page state bitmap relative to the live
migration around, and a single per-page bitmap has to be maintained in
the QEMU running in the source node to know which pages are still
missing in the destination node. The bitmap in the source node is
checked to find which missing pages to send in round robin, and we
seek over it when receiving incoming userfaults. After sending each
page the bitmap is of course updated accordingly. It's also useful to
avoid sending the same page twice (in case the userfault is read by
the postcopy thread just before UFFDIO_COPY|ZEROPAGE runs in the
migration thread).
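
As a purely hypothetical sketch (this is not QEMU's actual code), the
round-robin scan over the source node bitmap could be structured like
this, with the caller clearing a page's bit once it has been sent:

    #include <limits.h>

    #define BITS_PER_LONG (sizeof(long) * CHAR_BIT)

    /*
     * "missing" has one bit per guest page; "cur" is where the
     * last incoming userfault made us seek to.  Returns the next
     * page to send, or "npages" once nothing is left.
     */
    static unsigned long next_missing(const unsigned long *missing,
                                      unsigned long npages,
                                      unsigned long cur)
    {
        unsigned long i;

        for (i = 0; i < npages; i++) {
            unsigned long page = (cur + i) % npages;

            if (missing[page / BITS_PER_LONG] &
                (1UL << (page % BITS_PER_LONG)))
                return page; /* next page to send */
        }
        return npages; /* postcopy is complete */
    }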