Introduction
============

This document describes a collection of device-mapper targets that
between them implement thin-provisioning and snapshots.

The main highlight of this implementation, compared to the previous
implementation of snapshots, is that it allows many virtual devices to
be stored on the same data volume. This simplifies administration and
allows the sharing of data between volumes, thus reducing disk usage.

Another significant feature is support for an arbitrary depth of
recursive snapshots (snapshots of snapshots of snapshots ...). The
previous implementation of snapshots did this by chaining together
lookup tables, and so performance was O(depth). This new
implementation uses a single data structure to avoid this degradation
with depth. Fragmentation may still be an issue, however, in some
scenarios.

Metadata is stored on a separate device from data, giving the
administrator some freedom, for example to:

- Improve metadata resilience by storing metadata on a mirrored volume
  but data on a non-mirrored one.

- Improve performance by storing the metadata on SSD.
Status
======

These targets are very much still in the EXPERIMENTAL state. Please
do not yet rely on them in production. But do experiment and offer us
feedback. Different use cases will have different performance
characteristics, for example due to fragmentation of the data volume.

If you find this software is not performing as expected please mail
dm-devel@redhat.com with details and we'll try our best to improve
things for you.

Userspace tools for checking and repairing the metadata are under
development.

Cookbook
========

This section describes some quick recipes for using thin provisioning.
They use the dmsetup program to control the device-mapper driver
directly. End users will be advised to use a higher-level volume
manager such as LVM2 once support has been added.

Pool device
-----------

The pool device ties together the metadata volume and the data volume.
It maps I/O linearly to the data volume and updates the metadata via
two mechanisms:

- Function calls from the thin targets

- Device-mapper 'messages' from userspace which control the creation of new
  virtual devices amongst other things.

Setting up a fresh pool device
------------------------------

Setting up a pool device requires a valid metadata device and a
data device. If you do not have an existing metadata device you can
make one by zeroing the first 4k to indicate empty metadata.

    dd if=/dev/zero of=$metadata_dev bs=4096 count=1

The amount of metadata you need will vary according to how many blocks
are shared between thin devices (i.e. through snapshots). If you have
less sharing than average you'll need a larger-than-average metadata device.

As a guide, we suggest you calculate the number of bytes to use in the
metadata device as 48 * $data_dev_size / $data_block_size but round it up
to 2MB if the answer is smaller. If you're creating large numbers of
snapshots which are recording large amounts of change, you may find you
need to increase this.

The largest size supported is 16GB: if the device is larger, a warning
will be issued and the excess space will not be used.
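
As a rough illustration only, that calculation could be scripted like this
(a sketch: it assumes both sizes are measured in 512-byte sectors and that
the blockdev utility is available):

    # Suggested metadata size in bytes: 48 bytes per data block,
    # rounded up to the 2MB minimum mentioned above.
    data_dev_size=$(blockdev --getsz $data_dev)   # size in 512-byte sectors
    data_block_size=128                           # 64KB blocks
    meta_bytes=$((48 * data_dev_size / data_block_size))
    [ $meta_bytes -lt $((2 * 1024 * 1024)) ] && meta_bytes=$((2 * 1024 * 1024))
    echo "suggested metadata device size: $meta_bytes bytes"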

Reloading a pool table
----------------------

You may reload a pool's table; indeed, this is how the pool is resized
if it runs out of space. (N.B. While specifying a different metadata
device when reloading is not forbidden at the moment, things will go
wrong if it does not route I/O to exactly the same on-disk location as
previously.)
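
For example, after growing the underlying data device, the pool might be
resized with a reload sequence like the following (a sketch: the new length
of 41943040 sectors is only illustrative):

    dmsetup suspend pool
    dmsetup reload pool --table "0 41943040 thin-pool $metadata_dev $data_dev \
                                 $data_block_size $low_water_mark"
    dmsetup resume pool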

Using an existing pool device
-----------------------------

    dmsetup create pool \
        --table "0 20971520 thin-pool $metadata_dev $data_dev \
                 $data_block_size $low_water_mark"

$data_block_size gives the smallest unit of disk space that can be
allocated at a time, expressed in units of 512-byte sectors.
$data_block_size must be between 128 (64KB) and 2097152 (1GB) and a
multiple of 128 (64KB). $data_block_size cannot be changed after the
thin-pool is created. People primarily interested in thin provisioning
may want to use a value such as 1024 (512KB). People doing lots of
snapshotting may want a smaller value such as 128 (64KB). If you are
not zeroing newly-allocated data, a larger $data_block_size in the
region of 256000 (128MB) is suggested.

$low_water_mark is expressed in blocks of size $data_block_size. If
free space on the data device drops below this level then a dm event
will be triggered, which a userspace daemon should catch, allowing it
to extend the pool device. Only one such event will be sent.
Resuming a device with a new table itself triggers an event, so the
userspace daemon can use this to detect a situation where a new table
already exceeds the threshold.

A low water mark for the metadata device is maintained in the kernel and
will trigger a dm event if free space on the metadata device drops below
it.
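
A userspace daemon might watch for these events with dmsetup; a minimal
sketch (it assumes the pool device is simply named 'pool' and that the
daemon tracks the last event number it has seen):

    # Block until the pool's event counter exceeds the last event number we
    # saw, then inspect the status line to decide whether to extend the pool.
    dmsetup wait pool $last_event_nr
    dmsetup status pool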

Updating on-disk metadata
-------------------------

On-disk metadata is committed every time a FLUSH or FUA bio is written.
If no such requests are made then commits will occur every second. This
means the thin-provisioning target behaves like a physical disk that has
a volatile write cache. If power is lost you may lose some recent
writes. The metadata should always be consistent in spite of any crash.

If data space is exhausted the pool will either error or queue IO
according to the configuration (see: error_if_no_space). If metadata
space is exhausted or a metadata operation fails, the pool will error IO
until the pool is taken offline and repair is performed to 1) fix any
potential inconsistencies and 2) clear the flag that imposes repair.
Once the pool's metadata device is repaired it may be resized, which
will allow the pool to return to normal operation. Note that if a pool
is flagged as needing repair, the pool's data and metadata devices
cannot be resized until repair is performed. It should also be noted
that when the pool's metadata space is exhausted the current metadata
transaction is aborted. Given that the pool will cache IO whose
completion may have already been acknowledged to upper IO layers
(e.g. filesystem), it is strongly suggested that consistency checks
(e.g. fsck) be performed on those layers when repair of the pool is
required.
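
As an illustration only, an offline repair might follow a sequence like this
(a sketch: it assumes the thin_check and thin_repair tools from the
thin-provisioning-tools package are installed, and $spare_meta_dev is a
hypothetical spare device used to hold the repaired metadata):

    dmsetup remove pool                         # take the pool offline
    thin_check $metadata_dev                    # report any inconsistencies
    thin_repair -i $metadata_dev -o $spare_meta_dev
    # Recreate the pool using the repaired metadata device.
    dmsetup create pool \
        --table "0 20971520 thin-pool $spare_meta_dev $data_dev \
                 $data_block_size $low_water_mark"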

Thin provisioning
-----------------

i) Creating a new thinly-provisioned volume.

  To create a new thinly-provisioned volume you must send a message to an
  active pool device, /dev/mapper/pool in this example.

    dmsetup message /dev/mapper/pool 0 "create_thin 0"

  Here '0' is an identifier for the volume, a 24-bit number. It's up
  to the caller to allocate and manage these identifiers. If the
  identifier is already in use, the message will fail with -EEXIST.

ii) Using a thinly-provisioned volume.

  Thinly-provisioned volumes are activated using the 'thin' target:

    dmsetup create thin --table "0 2097152 thin /dev/mapper/pool 0"

  The last parameter is the identifier for the thinp device.
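
  Once activated, the thin device behaves like any other block device; for
  example (a sketch, assuming you want an ext4 filesystem on it and already
  have a mount point at /mnt/thin):

    mkfs.ext4 /dev/mapper/thin
    mount /dev/mapper/thin /mnt/thin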

Internal snapshots
------------------

i) Creating an internal snapshot.

  Snapshots are created with another message to the pool.

  N.B. If the origin device that you wish to snapshot is active, you
  must suspend it before creating the snapshot to avoid corruption.
  This is NOT enforced at the moment, so please be careful!

    dmsetup suspend /dev/mapper/thin
    dmsetup message /dev/mapper/pool 0 "create_snap 1 0"
    dmsetup resume /dev/mapper/thin

  Here '1' is the identifier for the volume, a 24-bit number. '0' is the
  identifier for the origin device.

ii) Using an internal snapshot.

  Once created, the user doesn't have to worry about any connection
  between the origin and the snapshot. Indeed the snapshot is no
  different from any other thinly-provisioned device and can be
  snapshotted itself via the same method. It's perfectly legal to
  have only one of them active, and there's no ordering requirement on
  activating or removing them both. (This differs from conventional
  device-mapper snapshots.)

  Activate it exactly the same way as any other thinly-provisioned volume:

    dmsetup create snap --table "0 2097152 thin /dev/mapper/pool 1"
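
  Because the snapshot is itself an ordinary thin device, it can in turn be
  snapshotted; a sketch (here '2' is just another unused identifier, and the
  snapshot is suspended first as described above):

    dmsetup suspend /dev/mapper/snap
    dmsetup message /dev/mapper/pool 0 "create_snap 2 1"
    dmsetup resume /dev/mapper/snap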

External snapshots
------------------

You can use an external _read only_ device as an origin for a
thinly-provisioned volume. Any read to an unprovisioned area of the
thin device will be passed through to the origin. Writes trigger
the allocation of new blocks as usual.

One use case for this is VM hosts that want to run guests on
thinly-provisioned volumes but have the base image on another device
(possibly shared between many VMs).

You must not write to the origin device if you use this technique!
Of course, you may write to the thin device and take internal snapshots
of the thin volume.

i) Creating a snapshot of an external device

  This is the same as creating a thin device.
  You don't mention the origin at this stage.

    dmsetup message /dev/mapper/pool 0 "create_thin 0"

ii) Using a snapshot of an external device.

  Append an extra parameter to the thin target specifying the origin:

    dmsetup create snap --table "0 2097152 thin /dev/mapper/pool 0 /dev/image"

  N.B. All descendants (internal snapshots) of this snapshot require the
  same extra origin parameter.
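
  For instance, an internal snapshot of this snapshot would be created as
  usual, but activated with the same origin appended (a sketch, with '1' as
  the new identifier and 'snap2' as an arbitrary device name):

    dmsetup message /dev/mapper/pool 0 "create_snap 1 0"
    dmsetup create snap2 --table "0 2097152 thin /dev/mapper/pool 1 /dev/image"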

Deactivation
------------

All devices using a pool must be deactivated before the pool itself
can be.

    dmsetup remove thin
    dmsetup remove snap
    dmsetup remove pool

Reference
=========

'thin-pool' target
------------------

i) Constructor

    thin-pool <metadata dev> <data dev> <data block size (sectors)> \
              <low water mark (blocks)> [<number of feature args> [<arg>]*]

    Optional feature arguments:

      skip_block_zeroing: Skip the zeroing of newly-provisioned blocks.

      ignore_discard: Disable discard support.

      no_discard_passdown: Don't pass discards down to the underlying
                           data device, but just remove the mapping.

      read_only: Don't allow any changes to be made to the pool
                 metadata.

      error_if_no_space: Error IOs, instead of queueing, if no space.

    Data block size must be between 64KB (128 sectors) and 1GB
    (2097152 sectors) inclusive.
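
    For example, a pool with block zeroing skipped and errors on exhaustion
    might be created like this (a sketch: 128-sector blocks and a low water
    mark of 32768 blocks are only illustrative values):

      dmsetup create pool \
          --table "0 20971520 thin-pool $metadata_dev $data_dev 128 32768 \
                   2 skip_block_zeroing error_if_no_space"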

ii) Status

    <transaction id> <used metadata blocks>/<total metadata blocks>
    <used data blocks>/<total data blocks> <held metadata root>
    [no_]discard_passdown ro|rw

    transaction id:
        A 64-bit number used by userspace to help synchronise with metadata
        from volume managers.

    used data blocks / total data blocks:
        If the number of free blocks drops below the pool's low water mark a
        dm event will be sent to userspace. This event is edge-triggered and
        it will occur only once after each resume so volume manager writers
        should register for the event and then check the target's status.

    held metadata root:
        The location, in blocks, of the metadata root that has been
        'held' for userspace read access. '-' indicates there is no
        held root.

    discard_passdown|no_discard_passdown
        Whether or not discards are actually being passed down to the
        underlying device. Even if this is enabled when the table is
        loaded, it can get disabled if the underlying device doesn't
        support it.

    ro|rw
        If the pool encounters certain types of device failures it will
        drop into a read-only metadata mode in which no changes to
        the pool metadata (like allocating new blocks) are permitted.

        In serious cases where even a read-only mode is deemed unsafe
        no further I/O will be permitted and the status will just
        contain the string 'Fail'. The userspace recovery tools
        should then be used.

    error_if_no_space|queue_if_no_space
        If the pool runs out of data or metadata space, the pool will
        either queue or error the IO destined to the data device. The
        default is to queue the IO until more space is added or the
        'no_space_timeout' expires. The 'no_space_timeout' dm-thin-pool
        module parameter can be used to change this timeout -- it
        defaults to 60 seconds but may be disabled using a value of 0.
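
        For example, the timeout could be raised to 120 seconds at runtime
        (a sketch, assuming the dm_thin_pool module is loaded and sysfs is
        mounted in the usual place):

            echo 120 > /sys/module/dm_thin_pool/parameters/no_space_timeout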

iii) Messages

    create_thin <dev id>

        Create a new thinly-provisioned device.
        <dev id> is an arbitrary unique 24-bit identifier chosen by
        the caller.

    create_snap <dev id> <origin id>

        Create a new snapshot of another thinly-provisioned device.
        <dev id> is an arbitrary unique 24-bit identifier chosen by
        the caller.
        <origin id> is the identifier of the thinly-provisioned device
        of which the new device will be a snapshot.

    delete <dev id>

        Deletes a thin device. Irreversible.

    set_transaction_id <current id> <new id>

        Userland volume managers, such as LVM, need a way to
        synchronise their external metadata with the internal metadata of the
        pool target. The thin-pool target offers to store an
        arbitrary 64-bit transaction id and return it on the target's
        status line. To avoid races you must provide what you think
        the current transaction id is when you change it with this
        compare-and-swap message.
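
        For example, moving the stored transaction id from 0 to 1 might look
        like this (a sketch; the message fails if the current id is not 0):

            dmsetup message /dev/mapper/pool 0 "set_transaction_id 0 1"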

    reserve_metadata_snap

        Reserve a copy of the data mapping btree for use by userland.
        This allows userland to inspect the mappings as they were when
        this message was executed. Use the pool's status command to
        get the root block associated with the metadata snapshot.

    release_metadata_snap

        Release a previously reserved copy of the data mapping btree.
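
        A typical round trip might look like the following sketch (the held
        metadata root appears in the pool's status line while the snapshot
        is reserved):

            dmsetup message /dev/mapper/pool 0 "reserve_metadata_snap"
            dmsetup status /dev/mapper/pool
            dmsetup message /dev/mapper/pool 0 "release_metadata_snap"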

'thin' target
-------------

i) Constructor

    thin <pool dev> <dev id> [<external origin dev>]

    pool dev:
        the thin-pool device, e.g. /dev/mapper/my_pool or 253:0

    dev id:
        the internal device identifier of the device to be
        activated.

    external origin dev:
        an optional block device outside the pool to be treated as a
        read-only snapshot origin: reads to unprovisioned areas of the
        thin target will be mapped to this device.

The pool doesn't store any size against the thin devices. If you
load a thin target that is smaller than you've been using previously,
then you'll have no access to blocks mapped beyond the end. If you
load a target that is bigger than before, then extra blocks will be
provisioned as and when needed.
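
For instance, a thin device could be grown from 2097152 to 4194304 sectors
simply by reloading its table with the larger length (a sketch, assuming the
device was created as 'thin' with dev id 0):

    dmsetup suspend thin
    dmsetup reload thin --table "0 4194304 thin /dev/mapper/pool 0"
    dmsetup resume thin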

If you wish to reduce the size of your thin device and potentially
regain some space then send the 'trim' message to the pool.

ii) Status

    <nr mapped sectors> <highest mapped sector>

    If the pool has encountered device errors and failed, the status
    will just contain the string 'Fail'. The userspace recovery
    tools should then be used.