MOTIVATION

Cleancache is a new optional feature provided by the VFS layer that
potentially dramatically increases page cache effectiveness for
many workloads in many environments at a negligible cost.

Cleancache can be thought of as a page-granularity victim cache for clean
pages that the kernel's pageframe replacement algorithm (PFRA) would like
to keep around, but can't since there isn't enough memory. So when the
PFRA "evicts" a page, it first attempts to use cleancache code to
put the data contained in that page into "transcendent memory", memory
that is not directly accessible or addressable by the kernel and is
of unknown and possibly time-varying size.

Later, when a cleancache-enabled filesystem wishes to access a page
in a file on disk, it first checks cleancache to see if it already
contains it; if it does, the page of data is copied into the kernel
and a disk access is avoided.

Transcendent memory "drivers" for cleancache are currently implemented
in Xen (using hypervisor memory) and zcache (using in-kernel compressed
memory) and other implementations are in development.

FAQs are included below.

IMPLEMENTATION OVERVIEW

A cleancache "backend" that provides transcendent memory registers itself
to the kernel's cleancache "frontend" by calling cleancache_register_ops,
passing a pointer to a cleancache_ops structure with funcs set appropriately.
Note that cleancache_register_ops returns the previous settings so that
chaining can be performed if desired. The functions provided must conform to
certain semantics as follows:

Most important, cleancache is "ephemeral". Pages which are copied into
cleancache have an indefinite lifetime which is completely unknowable
by the kernel and so may or may not still be in cleancache at any later time.
Thus, as its name implies, cleancache is not suitable for dirty pages.
Cleancache has complete discretion over what pages to preserve and what
pages to discard and when.

When a cleancache-enabled filesystem is mounted, it should call "init_fs"
to obtain a pool id which, if positive, must be saved in the filesystem's
superblock; a negative return value indicates failure. A "put_page" will
copy a (presumably about-to-be-evicted) page into cleancache and associate
it with the pool id, a file key, and a page index into the file. (The
combination of a pool id, a file key, and an index is sometimes called a
"handle".) A "get_page" will copy the page, if found, from cleancache into
kernel memory. An "invalidate_page" will ensure the page is no longer
present in cleancache; an "invalidate_inode" will invalidate all pages
associated with the specified file; and, when a filesystem is unmounted,
an "invalidate_fs" will invalidate all pages in all files specified by the
given pool id and also surrender the pool id.

An "init_shared_fs", like init_fs, obtains a pool id but tells cleancache
to treat the pool as shared, using a 128-bit UUID as a key. On systems
that may run multiple kernels (such as hard partitioned or virtualized
systems) that may share a clustered filesystem, and where cleancache
may be shared among those kernels, calls to init_shared_fs that specify the
same UUID will receive the same pool id, thus allowing the pages to
be shared. Note that any security requirements must be imposed outside
of the kernel (e.g. by "tools" that control cleancache). Or a
cleancache implementation can simply disable init_shared_fs by always
returning a negative value.
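
For concreteness, a minimal sketch of a backend registering itself is
shown below. It assumes the cleancache_ops layout described above (see
include/linux/cleancache.h for the authoritative definitions, which may
differ in detail); the my_* functions are hypothetical stubs, not part of
any real backend:

  #include <linux/cleancache.h>
  #include <linux/module.h>

  /* Hypothetical backend; real pool allocation and page storage omitted. */
  static int my_init_fs(size_t pagesize)
  {
      return 0;    /* pool id (this stub always uses pool 0); negative = failure */
  }
  static int my_init_shared_fs(char *uuid, size_t pagesize)
  {
      return -1;   /* refuse shared pools, as permitted above */
  }
  static int my_get_page(int pool, struct cleancache_filekey key,
                         pgoff_t index, struct page *page)
  {
      return -1;   /* -1 = miss; 0 = hit and page data copied in */
  }
  static void my_put_page(int pool, struct cleancache_filekey key,
                          pgoff_t index, struct page *page)
  {
  }
  static void my_invalidate_page(int pool, struct cleancache_filekey key,
                                 pgoff_t index)
  {
  }
  static void my_invalidate_inode(int pool, struct cleancache_filekey key)
  {
  }
  static void my_invalidate_fs(int pool)
  {
  }

  static struct cleancache_ops my_cleancache_ops = {
      .init_fs = my_init_fs,
      .init_shared_fs = my_init_shared_fs,
      .get_page = my_get_page,
      .put_page = my_put_page,
      .invalidate_page = my_invalidate_page,
      .invalidate_inode = my_invalidate_inode,
      .invalidate_fs = my_invalidate_fs,
  };

  static int __init my_backend_init(void)
  {
      /* Returns the previously registered ops so calls can be chained. */
      struct cleancache_ops old_ops =
          cleancache_register_ops(&my_cleancache_ops);

      (void)old_ops;    /* a chaining backend would save and call these */
      return 0;
  }
  module_init(my_backend_init);
  MODULE_LICENSE("GPL");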

If a get_page is successful on a non-shared pool, the page is invalidated
(thus making cleancache an "exclusive" cache). On a shared pool, the page
is NOT invalidated on a successful get_page so that it remains accessible to
other sharers. The kernel is responsible for ensuring coherency between
cleancache (shared or not), the page cache, and the filesystem, using
cleancache invalidate operations as required.

Note that cleancache must enforce put-put-get coherency and get-get
coherency. For the former, if two puts are made to the same handle but
with different data, say AAA by the first put and BBB by the second, a
subsequent get can never return the stale data (AAA). For get-get coherency,
if a get for a given handle fails, subsequent gets for that handle will
never succeed unless preceded by a successful put with that handle.
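
As one illustration of the put-put-get rule, a hypothetical backend's put
handler might look like the following sketch; tmem_store() and tmem_delete()
are made-up names standing in for the backend's own storage helpers, not
kernel APIs:

  /* Hedged sketch, not real backend code: if the new data for an existing
   * handle cannot be stored, any old copy must be dropped so that a later
   * get cannot return stale data. */
  static void my_put_page(int pool, struct cleancache_filekey key,
                          pgoff_t index, struct page *page)
  {
      if (tmem_store(pool, key, index, page) != 0)
          tmem_delete(pool, key, index);    /* drop any stale copy */
  }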

Last, cleancache provides no SMP serialization guarantees; if two
different Linux threads are simultaneously putting and invalidating a page
with the same handle, the results are indeterminate. Callers must
lock the page to ensure serial behavior.
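
For illustration only, a caller-side sketch of that locking rule follows;
read_page_via_cleancache() is a hypothetical helper, and the in-kernel hook
sites are in practice invoked from VFS paths that already hold the page lock:

  #include <linux/cleancache.h>
  #include <linux/pagemap.h>

  /* Illustrative sketch: hold the page lock across the cleancache call. */
  static int read_page_via_cleancache(struct page *page)
  {
      int hit;

      lock_page(page);
      hit = (cleancache_get_page(page) == 0);    /* 0 means hit */
      if (hit)
          SetPageUptodate(page);
      unlock_page(page);
      return hit;
  }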

CLEANCACHE PERFORMANCE METRICS

If the kernel is properly configured, cleancache can be monitored via
debugfs in the /sys/kernel/debug/mm/cleancache directory. The effectiveness
of cleancache can be measured (across all filesystems) with:

succ_gets - number of gets that were successful
failed_gets - number of gets that failed
puts - number of puts attempted (all "succeed")
invalidates - number of invalidates attempted

A backend implementation may provide additional metrics.

FAQ

1) Where's the value? (Andrew Morton)

Cleancache provides a significant performance benefit to many workloads
in many environments with negligible overhead by improving the
effectiveness of the pagecache. Clean pagecache pages are
saved in transcendent memory (RAM that is otherwise not directly
addressable by the kernel); fetching those pages later avoids "refaults"
and thus disk reads.

Cleancache (and its sister code "frontswap") provides interfaces for
this transcendent memory (aka "tmem"), which conceptually lies between
fast kernel-directly-addressable RAM and slower DMA/asynchronous devices.
Disallowing direct kernel or userland reads/writes to tmem
is ideal when data is transformed to a different form and size (such
as with compression) or secretly moved (as might be useful for write-
balancing for some RAM-like devices). Evicted page-cache pages (and
swap pages) are a great use for this kind of slower-than-RAM-but-much-
faster-than-disk transcendent memory, and the cleancache (and frontswap)
"page-object-oriented" specification provides a nice way to read and
write -- and indirectly "name" -- the pages.

In the virtual case, the whole point of virtualization is to statistically
multiplex physical resources across the varying demands of multiple
virtual machines. This is really hard to do with RAM and efforts to
do it well with no kernel change have essentially failed (except in some
well-publicized special-case workloads). Cleancache -- and frontswap --
with a fairly small impact on the kernel, provide a huge amount
of flexibility for more dynamic, flexible RAM multiplexing.
Specifically, the Xen Transcendent Memory backend allows otherwise
"fallow" hypervisor-owned RAM to not only be "time-shared" between multiple
virtual machines, but the pages can be compressed and deduplicated to
optimize RAM utilization. And when guest OS's are induced to surrender
underutilized RAM (e.g. with "self-ballooning"), page cache pages
are the first to go, and cleancache allows those pages to be
saved and reclaimed if overall host system memory conditions allow.

And the identical interface used for cleancache can be used in
physical systems as well. The zcache driver acts as a memory-hungry
device that stores pages of data in a compressed state. And
the proposed "RAMster" driver shares RAM across multiple physical
systems.

2) Why does cleancache have its sticky fingers so deep inside the
   filesystems and VFS? (Andrew Morton and Christoph Hellwig)

The core hooks for cleancache in VFS are in most cases a single line
and the minimum set is placed precisely where needed to maintain
coherency (via cleancache_invalidate operations) between cleancache,
the page cache, and disk. All hooks compile into nothingness if
cleancache is config'ed off and turn into a function-pointer-
compare-to-NULL if config'ed on but no backend claims the ops
functions, or to a compare-struct-element-to-negative if a
backend claims the ops functions but a filesystem doesn't enable
cleancache.
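
The following sketch, loosely modeled on the frontend code in
mm/cleancache.c, illustrates those checks; it is not the verbatim kernel
code and details may differ:

  /* Hedged sketch of the checks described above; illustrative only. */
  void __cleancache_put_page(struct page *page)
  {
      int pool_id;
      struct cleancache_filekey key = { .u.key = { 0 } };

      if (cleancache_ops.put_page == NULL)    /* no backend registered */
          return;
      pool_id = page->mapping->host->i_sb->cleancache_poolid;
      if (pool_id < 0)                        /* filesystem did not opt in */
          return;
      if (cleancache_get_key(page->mapping->host, &key) >= 0)
          (*cleancache_ops.put_page)(pool_id, key, page->index, page);
  }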

Some filesystems are built entirely on top of VFS and the hooks
in VFS are sufficient, so they don't require an "init_fs" hook; the
initial implementation of cleancache didn't provide this hook.
But for some filesystems (such as btrfs), the VFS hooks are
incomplete and one or more hooks in fs-specific code are required.
And for some other filesystems, such as tmpfs, cleancache may
be counterproductive. So it seemed prudent to require a filesystem
to "opt in" to use cleancache, which requires adding a hook in
each filesystem. Some filesystems are not supported by cleancache
simply because they haven't been tested. The existing set should
be sufficient to validate the concept, the opt-in approach means
that untested filesystems are not affected, and the hooks in the
existing filesystems should make it very easy to add more
filesystems in the future.

The total impact of the hooks to existing fs and mm files is only
about 40 lines added (not counting comments and blank lines).

3) Why not make cleancache asynchronous and batched so it can
   more easily interface with real devices with DMA instead
   of copying each individual page? (Minchan Kim)

The one-page-at-a-time copy semantics simplifies the implementation
on both the frontend and backend and also allows the backend to
do fancy things on-the-fly like page compression and
page deduplication. And since the data is "gone" (copied into/out
of the pageframe) before the cleancache get/put call returns,
many race conditions and potential coherency issues
are avoided. While the interface seems odd for a "real device"
or for real kernel-addressable RAM, it makes perfect sense for
transcendent memory.

4) Why is non-shared cleancache "exclusive"? And where is the
   page "invalidated" after a "get"? (Minchan Kim)

The main reason is to free up space in transcendent memory and
to avoid unnecessary cleancache_invalidate calls. If you want inclusive
semantics, the page can be "put" immediately following the "get". If
put-after-get for inclusive becomes common, the interface could
be easily extended to add a "get_no_invalidate" call.

The invalidate is done by the cleancache backend implementation.

5) What's the performance impact?

Performance analysis has been presented at OLS'09 and LCA'10.
Briefly, performance gains can be significant on most workloads,
especially when memory pressure is high (e.g. when RAM is
overcommitted in a virtual workload); and because the hooks are
invoked primarily in place of or in addition to a disk read/write,
overhead is negligible even in worst case workloads. Basically
cleancache replaces I/O with memory-copy-CPU-overhead; on older
single-core systems with slow memory-copy speeds, cleancache
has little value, but in newer multicore machines, especially
consolidated/virtualized machines, it has great value.

6) How do I add cleancache support for filesystem X? (Boaz Harrash)

Filesystems that are well-behaved and conform to certain
restrictions can utilize cleancache simply by making a call to
cleancache_init_fs at mount time. Unusual, misbehaving, or
poorly layered filesystems must either add additional hooks
and/or undergo extensive additional testing... or should just
not enable the optional cleancache.

Some points for a filesystem to consider (a brief opt-in sketch
follows the list):

- The FS should be block-device-based (e.g. a ram-based FS such
  as tmpfs should not enable cleancache)
- To ensure coherency/correctness, the FS must ensure that all
  file removal or truncation operations either go through VFS or
  add hooks to do the equivalent cleancache "invalidate" operations
- To ensure coherency/correctness, either inode numbers must
  be unique across the lifetime of the on-disk file OR the
  FS must provide an "encode_fh" function.
- The FS must call the VFS superblock alloc and deactivate routines
  or add hooks to do the equivalent cleancache calls done there.
- To maximize performance, all pages fetched from the FS should
  go through the do_mpage_readpage routine or the FS should add
  hooks to do the equivalent (cf. btrfs)
- Currently, the FS blocksize must be the same as PAGESIZE. This
  is not an architectural restriction, but no backends currently
  support anything different.
- A clustered FS should invoke the "init_shared_fs" cleancache
  hook to get best performance for some backends.
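
A minimal sketch of the opt-in, using the cleancache_init_fs call
mentioned above; myfs_fill_super() is a hypothetical filesystem's
fill_super routine and all error handling is omitted:

  #include <linux/cleancache.h>

  static int myfs_fill_super(struct super_block *sb, void *data, int silent)
  {
      /* ... normal superblock setup ... */

      /*
       * Opt in to cleancache; in this era the frontend records the
       * returned pool id in the superblock (sb->cleancache_poolid),
       * where a negative value means "not enabled".
       */
      cleancache_init_fs(sb);

      return 0;
  }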

7) Why not use the KVA of the inode as the key? (Christoph Hellwig)

If cleancache were to use the inode virtual address instead of
inode/filehandle, the pool id could be eliminated. But this
won't work because cleancache retains pagecache data pages
persistently even when the inode has been pruned from the
inode unused list, and only invalidates the data page if the file
gets removed/truncated. So if cleancache used the inode kva,
there would be potential coherency issues if/when the inode
kva is reused for a different file. Alternately, if cleancache
invalidated the pages when the inode kva was freed, much of the value
of cleancache would be lost because the cache of pages in cleancache
is potentially much larger than the kernel pagecache and is most
useful if the pages survive inode cache removal.

8) Why is a global variable required?

The cleancache_enabled flag is checked in all of the frequently-used
cleancache hooks. The alternative is a function call to check a static
variable. Since cleancache is enabled dynamically at runtime, systems
that don't enable cleancache would suffer thousands (possibly
tens-of-thousands) of unnecessary function calls per second. So the
global variable allows cleancache to be enabled by default at compile
time, but have insignificant performance impact when cleancache remains
disabled at runtime.
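
The pattern is roughly the following hedged sketch of the inline wrappers
in include/linux/cleancache.h; treat the exact names and details as
illustrative:

  extern int cleancache_enabled;    /* set nonzero when a backend registers */

  static inline void cleancache_put_page(struct page *page)
  {
      /* Cheap global test; no function call when cleancache is off. */
      if (cleancache_enabled)
          __cleancache_put_page(page);
  }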

9) Does cleancache work with KVM?

The memory model of KVM is sufficiently different that a cleancache
backend may have less value for KVM. This remains to be tested,
especially in an overcommitted system.

10) Does cleancache work in userspace? It sounds useful for
   memory-hungry caches like web browsers. (Jamie Lokier)

No plans yet, though we agree it sounds useful, at least for
apps that bypass the page cache (e.g. O_DIRECT).

Last updated: Dan Magenheimer, April 13 2011