MOTIVATION

Cleancache is a new optional feature provided by the VFS layer that
potentially dramatically increases page cache effectiveness for
many workloads in many environments at a negligible cost.

Cleancache can be thought of as a page-granularity victim cache for clean
pages that the kernel's pageframe replacement algorithm (PFRA) would like
to keep around, but can't since there isn't enough memory.  So when the
PFRA "evicts" a page, it first attempts to use cleancache code to
put the data contained in that page into "transcendent memory", memory
that is not directly accessible or addressable by the kernel and is
of unknown and possibly time-varying size.

Later, when a cleancache-enabled filesystem wishes to access a page
in a file on disk, it first checks cleancache to see if it already
contains it; if it does, the page of data is copied into the kernel
and a disk access is avoided.

Transcendent memory "drivers" for cleancache are currently implemented
in Xen (using hypervisor memory) and zcache (using in-kernel compressed
memory) and other implementations are in development.

FAQs are included below.

IMPLEMENTATION OVERVIEW

A cleancache "backend" that provides transcendent memory registers itself
to the kernel's cleancache "frontend" by calling cleancache_register_ops,
passing a pointer to a cleancache_ops structure with the function
pointers set appropriately.
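
As a rough sketch, registration might look like the following (the
callback signatures mirror include/linux/cleancache.h at the time of
this writing and have varied across kernel versions; the my_* helpers
are hypothetical stand-ins for real backend logic):

    #include <linux/cleancache.h>

    /* Hypothetical backend callbacks; the my_* helpers are stand-ins. */
    static int my_init_fs(size_t pagesize)
    {
            /* create a private pool; return its pool id, or < 0 on failure */
            return my_create_pool(pagesize);
    }

    static void my_put_page(int pool_id, struct cleancache_filekey key,
                            pgoff_t index, struct page *page)
    {
            /* store a copy of the page; the backend may drop it at any time */
            my_store_copy(pool_id, key, index, page);
    }

    static int my_get_page(int pool_id, struct cleancache_filekey key,
                           pgoff_t index, struct page *page)
    {
            /* copy cached data into the kernel's page; 0 on hit, < 0 on miss */
            return my_lookup_and_copy(pool_id, key, index, page);
    }

    static struct cleancache_ops my_cleancache_ops = {
            .init_fs    = my_init_fs,
            .put_page   = my_put_page,
            .get_page   = my_get_page,
            /* .init_shared_fs and the three invalidate_* ops omitted */
    };

    static int __init my_backend_init(void)
    {
            return cleancache_register_ops(&my_cleancache_ops);
    }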

The functions provided must conform to certain semantics as follows:

Most important, cleancache is "ephemeral".  Pages which are copied into
cleancache have an indefinite lifetime which is completely unknowable
by the kernel and so may or may not still be in cleancache at any later time.
Thus, as its name implies, cleancache is not suitable for dirty pages.
Cleancache has complete discretion over what pages to preserve and what
pages to discard and when.

When a cleancache-enabled filesystem is mounted, it should call "init_fs"
to obtain a pool id which must be saved in the filesystem's superblock;
a negative return value indicates failure.  A "put_page" will copy a
(presumably about-to-be-evicted) page into cleancache and associate it with
the pool id, a file key, and a page index into the file.  (The combination
of a pool id, a file key, and an index is sometimes called a "handle".)
A "get_page" will copy the page, if found, from cleancache into kernel memory.
An "invalidate_page" will ensure the page is no longer present in cleancache;
an "invalidate_inode" will invalidate all pages associated with the specified
file; and, when a filesystem is unmounted, an "invalidate_fs" will invalidate
all pages in all files specified by the given pool id and also surrender
the pool id.
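
Concretely, the file key half of a handle is the kernel's struct
cleancache_filekey (simplified below from include/linux/cleancache.h);
the cleancache_handle struct is illustrative only, not a kernel type,
and simply names the triple that the per-page ops operate on:

    /* Simplified from include/linux/cleancache.h */
    #define CLEANCACHE_KEY_MAX 6

    struct cleancache_filekey {
            union {
                    ino_t ino;                    /* typical case: inode number */
                    __u32 fh[CLEANCACHE_KEY_MAX]; /* or an encode_fh-style handle */
                    u32 key[CLEANCACHE_KEY_MAX];
            } u;
    };

    /* Illustrative only -- not a kernel type.  A "handle" is the triple
     * that put_page, get_page and invalidate_page operate on: */
    struct cleancache_handle {
            int pool_id;                    /* from init_fs at mount time */
            struct cleancache_filekey key;  /* names the file within the pool */
            pgoff_t index;                  /* page offset within the file */
    };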

An "init_shared_fs", like init_fs, obtains a pool id but tells cleancache
to treat the pool as shared using a 128-bit UUID as a key.  On systems
that may run multiple kernels (such as hard partitioned or virtualized
systems) that may share a clustered filesystem, and where cleancache
may be shared among those kernels, calls to init_shared_fs that specify the
same UUID will receive the same pool id, thus allowing the pages to
be shared.  Note that any security requirements must be imposed outside
of the kernel (e.g. by "tools" that control cleancache).  Or a
cleancache implementation can simply disable sharing by having
init_shared_fs always return a negative value.
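
For example, a backend that does not want to support sharing can reject
every shared pool (a hedged sketch; the exact init_shared_fs signature
has varied across kernel versions):

    /* Refuse all shared pools; non-shared pools are unaffected. */
    static int my_init_shared_fs(const char *uuid, size_t pagesize)
    {
            return -1;      /* negative pool id == sharing not supported */
    }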

If a get_page is successful on a non-shared pool, the page is invalidated
(thus making cleancache an "exclusive" cache).  On a shared pool, the page
is NOT invalidated on a successful get_page so that it remains accessible to
other sharers.  The kernel is responsible for ensuring coherency between
cleancache (shared or not), the page cache, and the filesystem, using
cleancache invalidate operations as required.

Note that cleancache must enforce put-put-get coherency and get-get
coherency.  For the former, if two puts are made to the same handle but
with different data, say AAA by the first put and BBB by the second, a
subsequent get can never return the stale data (AAA).  For get-get coherency,
if a get for a given handle fails, subsequent gets for that handle will
never succeed unless preceded by a successful put with that handle.
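
The two rules can be modeled in a few lines of plain userspace C; this
toy single-handle cache is purely illustrative and is not the kernel API:

    #include <assert.h>
    #include <string.h>

    static char slot[4];    /* backend storage for a single handle */
    static int present;     /* is that handle currently cached? */

    static void put(const char *data)   /* a backend may also drop the put */
    {
            strcpy(slot, data);
            present = 1;
    }

    static const char *get(void)        /* exclusive: a hit invalidates */
    {
            if (!present)
                    return NULL;
            present = 0;
            return slot;
    }

    int main(void)
    {
            /* put-put-get: a later get may miss, but must never see "AAA" */
            put("AAA");
            put("BBB");
            const char *v = get();
            assert(v == NULL || strcmp(v, "BBB") == 0);

            /* get-get: after a miss, gets keep missing until another put */
            assert(get() == NULL && get() == NULL);
            put("AAA");
            assert(get() != NULL);
            return 0;
    }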

Last, cleancache provides no SMP serialization guarantees; if two
different Linux threads are simultaneously putting and invalidating a page
with the same handle, the results are indeterminate.  Callers must
lock the page to ensure serial behavior.
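
Concretely, the convention is that the caller holds the page lock across
the hook (a hedged fragment; the kernel's hooks take the locked struct
page directly):

    lock_page(page);                /* caller-provided serialization */
    cleancache_put_page(page);      /* data is copied before this returns */
    unlock_page(page);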

CLEANCACHE PERFORMANCE METRICS

If properly configured, monitoring of cleancache is done via debugfs in
the /sys/kernel/debug/cleancache directory.  The effectiveness of cleancache
can be measured (across all filesystems) with:

succ_gets	- number of gets that were successful
failed_gets	- number of gets that failed
puts		- number of puts attempted (all "succeed")
invalidates	- number of invalidates attempted

A backend implementation may provide additional metrics.
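
As an example, a small (hypothetical) userspace helper can turn the first
two counters into a hit ratio; it assumes debugfs is mounted at
/sys/kernel/debug and typically requires root:

    #include <stdio.h>

    static long read_counter(const char *name)
    {
            char path[128];
            long val;
            FILE *f;

            snprintf(path, sizeof(path),
                     "/sys/kernel/debug/cleancache/%s", name);
            f = fopen(path, "r");
            if (!f)
                    return -1;
            if (fscanf(f, "%ld", &val) != 1)
                    val = -1;
            fclose(f);
            return val;
    }

    int main(void)
    {
            long hits = read_counter("succ_gets");
            long misses = read_counter("failed_gets");

            if (hits < 0 || misses < 0 || hits + misses == 0)
                    return 1;
            printf("cleancache hit ratio: %.1f%%\n",
                   100.0 * hits / (hits + misses));
            return 0;
    }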

FAQ

1) Where's the value? (Andrew Morton)

Cleancache provides a significant performance benefit to many workloads
in many environments with negligible overhead by improving the
effectiveness of the pagecache.  Clean pagecache pages are
saved in transcendent memory (RAM that is otherwise not directly
addressable by the kernel); fetching those pages later avoids "refaults"
and thus disk reads.

Cleancache (and its sister code "frontswap") provide interfaces for
this transcendent memory (aka "tmem"), which conceptually lies between
fast kernel-directly-addressable RAM and slower DMA/asynchronous devices.
Disallowing direct kernel or userland reads/writes to tmem
is ideal when data is transformed to a different form and size (such
as with compression) or secretly moved (as might be useful for write-
balancing for some RAM-like devices).  Evicted page-cache pages (and
swap pages) are a great use for this kind of slower-than-RAM-but-much-
faster-than-disk transcendent memory, and the cleancache (and frontswap)
"page-object-oriented" specification provides a nice way to read and
write -- and indirectly "name" -- the pages.

In the virtual case, the whole point of virtualization is to statistically
multiplex physical resources across the varying demands of multiple
virtual machines.  This is really hard to do with RAM and efforts to
do it well with no kernel changes have essentially failed (except in some
well-publicized special-case workloads).  Cleancache -- and frontswap --
with a fairly small impact on the kernel, provide a huge amount
of flexibility for more dynamic, flexible RAM multiplexing.
Specifically, the Xen Transcendent Memory backend allows otherwise
"fallow" hypervisor-owned RAM to not only be "time-shared" between multiple
virtual machines, but the pages can be compressed and deduplicated to
optimize RAM utilization.  And when guest OSes are induced to surrender
underutilized RAM (e.g. with "self-ballooning"), page cache pages
are the first to go, and cleancache allows those pages to be
saved and reclaimed if overall host system memory conditions allow.

And the identical interface used for cleancache can be used in
physical systems as well.  The zcache driver acts as a memory-hungry
device that stores pages of data in a compressed state.  And
the proposed "RAMster" driver shares RAM across multiple physical
systems.

2) Why does cleancache have its sticky fingers so deep inside the
   filesystems and VFS? (Andrew Morton and Christoph Hellwig)

The core hooks for cleancache in VFS are in most cases a single line
and the minimum set is placed precisely where needed to maintain
coherency (via cleancache_invalidate operations) between cleancache,
the page cache, and disk.  All hooks compile into nothingness if
cleancache is config'ed off and turn into a function-pointer-
compare-to-NULL if config'ed on but no backend claims the ops
functions, or to a compare-struct-element-to-negative if a
backend claims the ops functions but a filesystem doesn't enable
cleancache.
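
A hedged sketch of that pattern, modeled loosely on the
cleancache_get_page wrapper in include/linux/cleancache.h (whether the
first guard is a cleancache_enabled flag or a cleancache_ops pointer
check has varied across kernel versions):

    static inline int cleancache_get_page(struct page *page)
    {
            int ret = -1;

    #ifdef CONFIG_CLEANCACHE     /* config'ed off: the body compiles away */
            if (cleancache_ops &&   /* pointer compare: backend registered? */
                page->mapping->host->i_sb->cleancache_poolid >= 0)
                    /* the >= 0 test is the compare-struct-element-to-negative:
                     * a filesystem that didn't opt in has a negative pool id */
                    ret = __cleancache_get_page(page);
    #endif
            return ret;
    }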

Some filesystems are built entirely on top of VFS and the hooks
in VFS are sufficient, so they don't require an "init_fs" hook; the
initial implementation of cleancache didn't provide this hook.
But for some filesystems (such as btrfs), the VFS hooks are
incomplete and one or more hooks in fs-specific code are required.
And for some other filesystems, such as tmpfs, cleancache may
be counterproductive.  So it seemed prudent to require a filesystem
to "opt in" to use cleancache, which requires adding a hook in
each filesystem.  Some filesystems are unsupported by cleancache
only because they haven't been tested.  The existing set should
be sufficient to validate the concept, the opt-in approach means
that untested filesystems are not affected, and the hooks in the
existing filesystems should make it very easy to add more
filesystems in the future.

The total impact of the hooks to existing fs and mm files is only
about 40 lines added (not counting comments and blank lines).

3) Why not make cleancache asynchronous and batched so it can
   more easily interface with real devices with DMA instead
   of copying each individual page? (Minchan Kim)

The one-page-at-a-time copy semantics simplifies the implementation
on both the frontend and backend and also allows the backend to
do fancy things on-the-fly like page compression and
page deduplication.  And since the data is "gone" (copied into/out
of the pageframe) before the cleancache get/put call returns,
many race conditions and potential coherency issues
are avoided.  While the interface seems odd for a "real device"
or for real kernel-addressable RAM, it makes perfect sense for
transcendent memory.

4) Why is non-shared cleancache "exclusive"?  And where is the
   page "invalidated" after a "get"? (Minchan Kim)

The main reason is to free up space in transcendent memory and
to avoid unnecessary cleancache_invalidate calls.  If you want inclusive,
the page can be "put" immediately following the "get".  If
put-after-get for inclusive becomes common, the interface could
be easily extended to add a "get_no_invalidate" call.

The invalidate is done by the cleancache backend implementation.
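
For example, a caller that wants inclusive behavior could do the
following (a hedged fragment; the page is assumed to be locked, as
required above):

    /* A successful (exclusive) get removed the cached copy, so
     * immediately put it back to emulate an inclusive cache. */
    if (cleancache_get_page(page) == 0)
            cleancache_put_page(page);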

5) What's the performance impact?

Performance analysis has been presented at OLS'09 and LCA'10.
Briefly, performance gains can be significant on most workloads,
especially when memory pressure is high (e.g. when RAM is
overcommitted in a virtual workload); and because the hooks are
invoked primarily in place of or in addition to a disk read/write,
overhead is negligible even in worst case workloads.  Basically
cleancache replaces I/O with memory-copy-CPU-overhead; on older
single-core systems with slow memory-copy speeds, cleancache
has little value, but in newer multicore machines, especially
consolidated/virtualized machines, it has great value.

6) How do I add cleancache support for filesystem X? (Boaz Harrosh)

Filesystems that are well-behaved and conform to certain
restrictions can utilize cleancache simply by making a call to
cleancache_init_fs at mount time.  Unusual, misbehaving, or
poorly layered filesystems must either add additional hooks
and/or undergo extensive additional testing... or should just
not enable the optional cleancache.  (A sketch of the mount-time
call follows the list below.)

Some points for a filesystem to consider:

- The FS should be block-device-based (e.g. a ram-based FS such
  as tmpfs should not enable cleancache)
- To ensure coherency/correctness, the FS must ensure that all
  file removal or truncation operations either go through VFS or
  add hooks to do the equivalent cleancache "invalidate" operations
- To ensure coherency/correctness, either inode numbers must
  be unique across the lifetime of the on-disk file OR the
  FS must provide an "encode_fh" function.
- The FS must call the VFS superblock alloc and deactivate routines
  or add hooks to do the equivalent cleancache calls done there.
- To maximize performance, all pages fetched from the FS should
  go through the do_mpage_readpage routine or the FS should add
  hooks to do the equivalent (cf. btrfs)
- Currently, the FS blocksize must be the same as PAGE_SIZE.  This
  is not an architectural restriction, but no backends currently
  support anything different.
- A clustered FS should invoke the "init_shared_fs" cleancache
  hook to get best performance for some backends.
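
Putting the opt-in together, a minimal mount-time sketch follows
(my_fill_super is a hypothetical filesystem's fill_super; error handling
is elided, and some kernel versions pass the UUID to init_shared_fs
explicitly rather than taking it from the superblock):

    #include <linux/cleancache.h>

    static int my_fill_super(struct super_block *sb, void *data, int silent)
    {
            /* ... normal superblock setup ... */

            /* opt in: obtain a pool id, recorded in sb->cleancache_poolid */
            cleancache_init_fs(sb);

            /* a clustered filesystem sharing pages across kernels would
             * call cleancache_init_shared_fs(sb) here instead */
            return 0;
    }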

7) Why not use the KVA of the inode as the key? (Christoph Hellwig)

If cleancache were to use the inode virtual address instead of
inode/filehandle, the pool id could be eliminated.  But, this
won't work because cleancache retains pagecache data pages
persistently even when the inode has been pruned from the
inode unused list, and only invalidates the data page if the file
gets removed/truncated.  So if cleancache used the inode kva,
there would be potential coherency issues if/when the inode
kva is reused for a different file.  Alternately, if cleancache
invalidated the pages when the inode kva was freed, much of the value
of cleancache would be lost because the cache of pages in cleancache
is potentially much larger than the kernel pagecache and is most
useful if the pages survive inode cache removal.
8) Why is a global variable required?

The cleancache_enabled flag is checked in all of the frequently-used
cleancache hooks.  The alternative is a function call to check a static
variable.  Since cleancache is enabled dynamically at runtime, systems
that don't enable cleancache would suffer thousands (possibly
tens-of-thousands) of unnecessary function calls per second.  So the
global variable allows cleancache to be enabled by default at compile
time, but have insignificant performance impact when cleancache remains
disabled at runtime.

9) Does cleancache work with KVM?

The memory model of KVM is sufficiently different that a cleancache
backend may have less value for KVM.  This remains to be tested,
especially in an overcommitted system.

10) Does cleancache work in userspace?  It sounds useful for
   memory hungry caches like web browsers.  (Jamie Lokier)

No plans yet, though we agree it sounds useful, at least for
apps that bypass the page cache (e.g. O_DIRECT).

Last updated: Dan Magenheimer, April 13 2011