Say you've got a big slow raid 6, and an X-25E or three. Wouldn't it be
nice if you could use them as cache... Hence bcache.

Wiki and git repositories are at:
  http://bcache.evilpiepirate.org
  http://evilpiepirate.org/git/linux-bcache.git
  http://evilpiepirate.org/git/bcache-tools.git

It's designed around the performance characteristics of SSDs - it only allocates
in erase block sized buckets, and it uses a hybrid btree/log to track cached
extents (which can be anywhere from a single sector to the bucket size). It's
designed to avoid random writes at all costs; it fills up an erase block
sequentially, then issues a discard before reusing it.

Both writethrough and writeback caching are supported. Writeback defaults to
off, but can be switched on and off arbitrarily at runtime. Bcache goes to
great lengths to protect your data - it reliably handles unclean shutdown. (It
doesn't even have a notion of a clean shutdown; bcache simply doesn't return
writes as completed until they're on stable storage).
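
For example, writeback can be switched on or off at runtime through the
per-device sysfs interface described below (a minimal sketch, assuming the
device has already shown up as /dev/bcache0):

  # assumption: cache_mode lives under /sys/block/bcacheN/bcache
  # (see SYSFS - BACKING DEVICE below)
  echo writeback > /sys/block/bcache0/bcache/cache_mode
  echo writethrough > /sys/block/bcache0/bcache/cache_mode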

Writeback caching can use most of the cache for buffering writes - writing
dirty data to the backing device is always done sequentially, scanning from the
start to the end of the index.

Since random IO is what SSDs excel at, there generally won't be much benefit
to caching large sequential IO. Bcache detects sequential IO and skips it;
it also keeps a rolling average of the IO sizes per task, and as long as the
average is above the cutoff it will skip all IO from that task - instead of
caching the first 512k after every seek. Backups and large file copies should
thus entirely bypass the cache.
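
The cutoff for this heuristic is exposed per backing device; a quick way to
check it (the path assumes the sysfs layout described below; tuning is covered
under TROUBLESHOOTING PERFORMANCE):

  # defaults to 4M; writing 0 disables the sequential bypass entirely
  cat /sys/block/bcache0/bcache/sequential_cutoff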

In the event of a data IO error on the flash it will try to recover by reading
from disk or invalidating cache entries. For unrecoverable errors (metadata
or dirty data), caching is automatically disabled; if dirty data was present
in the cache it first disables writeback caching and waits for all dirty data
to be flushed.

Getting started:
You'll need make-bcache from the bcache-tools repository. Both the cache device
and backing device must be formatted before use.
  make-bcache -B /dev/sdb
  make-bcache -C /dev/sdc

make-bcache has the ability to format multiple devices at the same time - if
you format your backing devices and cache device at the same time, you won't
have to manually attach:
  make-bcache -B /dev/sda /dev/sdb -C /dev/sdc
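
Since bcache allocates in erase block sized buckets, it can be worth telling
make-bcache your SSD's erase block and sector sizes when formatting the cache
device. A sketch, assuming bcache-tools' -b (bucket size) and -w (block size)
options and a 2M erase block:

  make-bcache -b 2M -w 4k -C /dev/sdc    # -b/-w values are illustrative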

To make bcache devices known to the kernel, echo them to /sys/fs/bcache/register:

  echo /dev/sdb > /sys/fs/bcache/register
  echo /dev/sdc > /sys/fs/bcache/register

To register your bcache devices automatically, you could add something like
this to an init script:

  echo /dev/sd* > /sys/fs/bcache/register_quiet

It'll look for bcache superblocks and ignore everything that doesn't have one.

Registering the backing device makes the bcache device show up in /dev; you can
now format it and use it as normal. But the first time using a new bcache
device, it'll be running in passthrough mode until you attach it to a cache.
See the section on attaching.

The devices show up at /dev/bcacheN, and can be controlled via sysfs from
/sys/block/bcacheN/bcache:

  mkfs.ext4 /dev/bcache0
  mount /dev/bcache0 /mnt

Cache devices are managed as sets; multiple caches per set isn't supported yet
but will allow for mirroring of metadata and dirty data in the future. Your new
cache set shows up as /sys/fs/bcache/<UUID>.

ATTACHING:

After your cache device and backing device are registered, the backing device
must be attached to your cache set to enable caching. Attaching a backing
device to a cache set is done thusly, with the UUID of the cache set in
/sys/fs/bcache:

  echo <UUID> > /sys/block/bcache0/bcache/attach

This only has to be done once. The next time you reboot, just reregister all
your bcache devices. If a backing device has data in a cache somewhere, the
/dev/bcache# device won't be created until the cache shows up - particularly
important if you have writeback caching turned on.
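
If there's only one cache set registered, its UUID can be picked up straight
from sysfs (a sketch; it assumes a single cache set and that the backing
device showed up as bcache0):

  # cache set UUIDs are the hyphenated directory names under /sys/fs/bcache
  cset=$(basename /sys/fs/bcache/*-*)
  echo "$cset" > /sys/block/bcache0/bcache/attach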

If you're booting up and your cache device is gone and never coming back, you
can force run the backing device:

  echo 1 > /sys/block/sdb/bcache/running

(You need to use /sys/block/sdb (or whatever your backing device is called), not
/sys/block/bcache0, because bcache0 doesn't exist yet. If you're using a
partition, the bcache directory would be at /sys/block/sdb/sdb2/bcache)

The backing device will still use that cache set if it shows up in the future,
but all the cached data will be invalidated. If there was dirty data in the
cache, don't expect the filesystem to be recoverable - you will have massive
filesystem corruption, though ext4's fsck does work miracles.

ERROR HANDLING:

Bcache tries to transparently handle IO errors to/from the cache device without
affecting normal operation; if it sees too many errors (the threshold is
configurable, and defaults to 0) it shuts down the cache device and switches all
the backing devices to passthrough mode.

 - For reads from the cache, if they error we just retry the read from the
   backing device.

 - For writethrough writes, if the write to the cache errors we just switch to
   invalidating the data at that lba in the cache (i.e. the same thing we do for
   a write that bypasses the cache).

 - For writeback writes, we currently pass that error back up to the
   filesystem/userspace. This could be improved - we could retry it as a write
   that skips the cache so we don't have to error the write.

 - When we detach, we first try to flush any dirty data (if we were running in
   writeback mode). It currently doesn't do anything intelligent if it fails to
   read some of the dirty data, though.
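
The relevant knobs are described in the sysfs sections below; a sketch for
inspecting the error count and loosening the threshold (the paths assume one
cache set and one cache device):

  # decaying error count for the cache device, via the cache set's cache0 symlink
  cat /sys/fs/bcache/<cache set>/cache0/io_errors
  # allow a few decayed errors before the cache is disabled
  echo 8 > /sys/fs/bcache/<cache set>/io_error_limit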

TROUBLESHOOTING PERFORMANCE:

Bcache has a bunch of config options and tunables. The defaults are intended to
be reasonable for typical desktop and server workloads, but they're not what you
want for getting the best possible numbers when benchmarking.

 - Bad write performance

   If write performance is not what you expected, you probably wanted to be
   running in writeback mode, which isn't the default (not due to a lack of
   maturity, but simply because in writeback mode you'll lose data if something
   happens to your SSD)

   # echo writeback > /sys/block/bcache0/bcache/cache_mode

 - Bad performance, or traffic not going to the SSD that you'd expect

   By default, bcache doesn't cache everything. It tries to skip sequential IO -
   because you really want to be caching the random IO, and if you copy a 10
   gigabyte file you probably don't want that pushing 10 gigabytes of randomly
   accessed data out of your cache.

   But if you want to benchmark reads from the cache, and you start out with fio
   writing an 8 gigabyte test file, you'll want to disable that.

   # echo 0 > /sys/block/bcache0/bcache/sequential_cutoff

   To set it back to the default (4 MB), do

   # echo 4M > /sys/block/bcache0/bcache/sequential_cutoff

 - Traffic's still going to the spindle/still getting cache misses

   In the real world, SSDs don't always keep up with disks - particularly with
   slower SSDs, many disks being cached by one SSD, or mostly sequential IO. So
   you want to avoid being bottlenecked by the SSD and having it slow everything
   down.

   To avoid that, bcache tracks latency to the cache device, and gradually
   throttles traffic if the latency exceeds a threshold (it does this by
   cranking down the sequential bypass).

   You can disable this if you need to by setting the thresholds to 0:

   # echo 0 > /sys/fs/bcache/<cache set>/congested_read_threshold_us
   # echo 0 > /sys/fs/bcache/<cache set>/congested_write_threshold_us

   The default is 2000 us (2 milliseconds) for reads, and 20000 for writes.

 - Still getting cache misses, of the same data

   One last issue that sometimes trips people up is actually an old bug, due to
   the way cache coherency is handled for cache misses. If a btree node is full,
   a cache miss won't be able to insert a key for the new data and the data
   won't be written to the cache.

   In practice this isn't an issue because as soon as a write comes along it'll
   cause the btree node to be split, and you need almost no write traffic for
   this to not show up enough to be noticeable (especially since bcache's btree
   nodes are huge and index large regions of the device). But when you're
   benchmarking, if you're trying to warm the cache by reading a bunch of data
   and there's no other traffic - that can be a problem.

   Solution: warm the cache by doing writes, or use the testing branch (there's
   a fix for the issue there). A minimal write warm-up is sketched below.
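
   One way to warm it with writes might be a small random write job against a
   filesystem mounted from /dev/bcache0 (the file name and fio parameters here
   are illustrative assumptions, not part of bcache):

   # fio --name=warmup --filename=/mnt/warmup --size=4G --rw=randwrite --bs=4k --direct=1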

SYSFS - BACKING DEVICE:

attach
  Echo the UUID of a cache set to this file to enable caching.

cache_mode
  Can be one of either writethrough, writeback, writearound or none.

clear_stats
  Writing to this file resets the running total stats (not the day/hour/5 minute
  decaying versions).

detach
  Write to this file to detach from a cache set. If there is dirty data in the
  cache, it will be flushed first.

dirty_data
  Amount of dirty data for this backing device in the cache. Continuously
  updated unlike the cache set's version, but may be slightly off.

label
  Name of underlying device.

readahead
  Size of readahead that should be performed. Defaults to 0. If set to e.g.
  1M, it will round cache miss reads up to that size, but without overlapping
  existing cache entries.

running
  1 if bcache is running (i.e. whether the /dev/bcache device exists, whether
  it's in passthrough mode or caching).

sequential_cutoff
  A sequential IO will bypass the cache once it passes this threshold; the
  most recent 128 IOs are tracked so sequential IO can be detected even when
  it isn't all done at once.

sequential_merge
  If nonzero, bcache keeps a list of the last 128 requests submitted to compare
  against all new requests to determine which new requests are sequential
  continuations of previous requests for the purpose of determining sequential
  cutoff. This is necessary if the sequential cutoff value is greater than the
  maximum acceptable sequential size for any single request.

state
  The backing device can be in one of four different states:

  no cache: Has never been attached to a cache set.

  clean: Part of a cache set, and there is no cached dirty data.

  dirty: Part of a cache set, and there is cached dirty data.

  inconsistent: The backing device was forcibly run by the user when there was
  dirty data cached but the cache set was unavailable; whatever data was on the
  backing device has likely been corrupted.

stop
  Write to this file to shut down the bcache device and close the backing
  device.

writeback_delay
  When dirty data is written to the cache and it previously did not contain
  any, bcache waits some number of seconds before initiating writeback.
  Defaults to 30.

writeback_percent
  If nonzero, bcache tries to keep around this percentage of the cache dirty by
  throttling background writeback and using a PD controller to smoothly adjust
  the rate.

writeback_rate
  Rate in sectors per second - if writeback_percent is nonzero, background
  writeback is throttled to this rate. Continuously adjusted by bcache but may
  also be set by the user.

writeback_running
  If off, writeback of dirty data will not take place at all. Dirty data will
  still be added to the cache until it is mostly full; only meant for
  benchmarking. Defaults to on.
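
As an example of how the writeback knobs fit together (assuming the backing
device is bcache0):

  # try to keep about 10% of the cache dirty; bcache's PD controller then
  # adjusts the rate on its own
  echo 10 > /sys/block/bcache0/bcache/writeback_percent
  # watch the rate it settles on (in sectors per second)
  cat /sys/block/bcache0/bcache/writeback_rate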

SYSFS - BACKING DEVICE STATS:

There are directories with these numbers for a running total, as well as
versions that decay over the past day, hour and 5 minutes; they're also
aggregated in the cache set directory.
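
For instance, the running totals for one backing device can be read directly
(the stats_total directory name is an assumption about how the running-total
directory is named):

  cat /sys/block/bcache0/bcache/stats_total/cache_hits
  cat /sys/block/bcache0/bcache/stats_total/cache_misses
  cat /sys/block/bcache0/bcache/stats_total/bypassed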

bypassed
  Amount of IO (both reads and writes) that has bypassed the cache.

cache_hits
cache_misses
cache_hit_ratio
  Hits and misses are counted per individual IO as bcache sees them; a
  partial hit is counted as a miss.

cache_bypass_hits
cache_bypass_misses
  Hits and misses for IO that is intended to skip the cache are still counted,
  but broken out here.

cache_miss_collisions
  Counts instances where data was going to be inserted into the cache from a
  cache miss, but raced with a write and data was already present (usually 0
  since the synchronization for cache misses was rewritten).

cache_readaheads
  Count of times readahead occurred.

SYSFS - CACHE SET:

average_key_size
  Average data per key in the btree.

bdev<0..n>
  Symlink to each of the attached backing devices.

block_size
  Block size of the cache devices.

btree_cache_size
  Amount of memory currently used by the btree cache.

bucket_size
  Size of buckets.

cache<0..n>
  Symlink to each of the cache devices comprising this cache set.

cache_available_percent
  Percentage of cache device which doesn't contain dirty data, and could
  potentially be used for writeback. This doesn't mean this space isn't used
  for clean cached data; the unused statistic (in priority_stats) is typically
  much lower.

clear_stats
  Clears the statistics associated with this cache.

dirty_data
  Amount of dirty data in the cache (updated when garbage collection runs).

flash_vol_create
  Echoing a size to this file (in human readable units, k/M/G) creates a thinly
  provisioned volume backed by the cache set.
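
  For example, to carve out a 100G thinly provisioned volume (the size here is
  arbitrary):

    echo 100G > /sys/fs/bcache/<cache set>/flash_vol_create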

io_error_halflife
io_error_limit
  These determine how many errors we accept before disabling the cache.
  Each error is decayed by the half life (in # ios). If the decaying count
  reaches io_error_limit, dirty data is written out and the cache is disabled.

journal_delay_ms
  Journal writes will delay for up to this many milliseconds, unless a cache
  flush happens sooner. Defaults to 100.

root_usage_percent
  Percentage of the root btree node in use. If this gets too high the node
  will split, increasing the tree depth.

stop
  Write to this file to shut down the cache set - waits until all attached
  backing devices have been shut down.

tree_depth
  Depth of the btree (a single node btree has depth 0).

unregister
  Detaches all backing devices and closes the cache devices; if dirty data is
  present it will disable writeback caching and wait for it to be flushed.

SYSFS - CACHE SET INTERNAL:

This directory also exposes timings for a number of internal operations, with
separate files for average duration, average frequency, last occurrence and max
duration: garbage collection, btree read, btree node sorts and btree splits.

active_journal_entries
  Number of journal entries that are newer than the index.

btree_nodes
  Total nodes in the btree.

btree_used_percent
  Average fraction of btree in use.

bset_tree_stats
  Statistics about the auxiliary search trees.

btree_cache_max_chain
  Longest chain in the btree node cache's hash table.

cache_read_races
  Counts instances where while data was being read from the cache, the bucket
  was reused and invalidated - i.e. where the pointer was stale after the read
  completed. When this occurs the data is reread from the backing device.

trigger_gc
  Writing to this file forces garbage collection to run.
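
  For example (the internal/ subdirectory is an assumption about where the
  cache set's internal files live):

    # force a garbage collection pass on the cache set
    echo 1 > /sys/fs/bcache/<cache set>/internal/trigger_gc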

SYSFS - CACHE DEVICE:

block_size
  Minimum granularity of writes - should match hardware sector size.

btree_written
  Sum of all btree writes, in (kilo/mega/giga) bytes.

bucket_size
  Size of buckets.

cache_replacement_policy
  One of either lru, fifo or random.

discard
  Boolean; if on, a discard/TRIM will be issued to each bucket before it is
  reused. Defaults to off, since SATA TRIM is an unqueued command (and thus
  slow).

freelist_percent
  Size of the freelist as a percentage of nbuckets. Can be written to in order
  to increase the number of buckets kept on the freelist, which lets you
  artificially reduce the size of the cache at runtime. Mostly for testing
  purposes (i.e. testing how different size caches affect your hit rate), but
  since buckets are discarded when they move on to the freelist, it will also
  make the SSD's garbage collection easier by effectively giving it more
  reserved space.
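
  For example, to artificially shrink the cache for testing (the value and the
  path through the cache set's cache0 symlink are illustrative assumptions):

    # reserve 10% of the buckets on the freelist
    echo 10 > /sys/fs/bcache/<cache set>/cache0/freelist_percent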

io_errors
  Number of errors that have occurred, decayed by io_error_halflife.

metadata_written
  Sum of all non-data writes (btree writes and all other metadata).

nbuckets
  Total buckets in this cache.

priority_stats
  Statistics about how recently data in the cache has been accessed.
  This can reveal your working set size. Unused is the percentage of
  the cache that doesn't contain any data. Metadata is bcache's
  metadata overhead. Average is the average priority of cache buckets.
  Next is a list of quantiles with the priority threshold of each.

written
  Sum of all data that has been written to the cache; comparison with
  btree_written gives the amount of write inflation in bcache.