Tools that manage md devices can be found at
   http://www.<country>.kernel.org/pub/linux/utils/raid/....


Boot time assembly of RAID arrays
---------------------------------

You can have md devices assembled at boot time with the following
kernel command lines:

for old raid arrays without persistent superblocks:
  md=<md device no.>,<raid level>,<chunk size factor>,<fault level>,dev0,dev1,...,devn

for raid arrays with persistent superblocks:
  md=<md device no.>,dev0,dev1,...,devn
or, to assemble a partitionable array:
  md=d<md device no.>,dev0,dev1,...,devn

md device no. = the number of the md device ...
              0 means md0,
              1 md1,
              2 md2,
              3 md3,
              4 md4

raid level = -1 linear mode
              0 striped mode
              other modes are only supported with persistent superblocks

chunk size factor = (raid-0 and raid-1 only)
              Set the chunk size as 4k << n.

fault level = totally ignored

dev0-devn: e.g. /dev/hda1,/dev/hdc1,/dev/sda1,/dev/sdb1

A possible loadlin line (Harald Hoyer <HarryH@Royal.Net>) looks like this:

e:\loadlin\loadlin e:\zimage root=/dev/md0 md=0,0,4,0,/dev/hdb2,/dev/hdc3 ro

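An equivalent kernel command line for booting from a two-disk array
with persistent superblocks would be (a sketch only; the device names
are illustrative):

root=/dev/md0 md=0,/dev/sda1,/dev/sdb1 ro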

Boot time autodetection of RAID arrays
--------------------------------------

When md is compiled into the kernel (not as a module), partitions of
type 0xfd are scanned and automatically assembled into RAID arrays.
This autodetection may be suppressed with the kernel parameter
"raid=noautodetect".  As of kernel 2.6.9, only drives with a type 0
superblock can be autodetected and run at boot time.

The kernel parameter "raid=partitionable" (or "raid=part") means
that all auto-detected arrays are assembled as partitionable.

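For example, autodetection could be switched off from the boot loader
by appending the parameter to the kernel line (a sketch; paths and
device names are illustrative):

   kernel /boot/vmlinuz root=/dev/sda1 raid=noautodetect ro
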
Boot time assembly of degraded/dirty arrays
-------------------------------------------

If a raid5 or raid6 array is both dirty and degraded, it could have
undetectable data corruption.  This is because the fact that it is
'dirty' means that the parity cannot be trusted, and the fact that it
is degraded means that some data blocks are missing and cannot reliably
be reconstructed (due to no parity).

For this reason, md will normally refuse to start such an array.  This
requires the sysadmin to take action to explicitly start the array
despite possible corruption.  This is normally done with
   mdadm --assemble --force ....

This option is not really available if the array has the root
filesystem on it.  In order to support booting from such an
array, md supports a module parameter "start_dirty_degraded" which,
when set to 1, bypasses the checks and allows dirty degraded
arrays to be started.

So, to boot with a root filesystem of a dirty degraded raid[56], use

   md-mod.start_dirty_degraded=1

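A complete kernel command line for such a system might therefore look
like this (a sketch; the root device name is illustrative):

   root=/dev/md0 md-mod.start_dirty_degraded=1 ro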

Superblock formats
------------------

The md driver can support a variety of different superblock formats.
Currently, it supports superblock formats "0.90.0" and the "md-1" format
introduced in the 2.5 development series.

The kernel will autodetect which format superblock is being used.

Superblock format '0' is treated differently from the others for legacy
reasons - it is the original superblock format.

General Rules - apply for all superblock formats
------------------------------------------------

An array is 'created' by writing appropriate superblocks to all
devices.

It is 'assembled' by associating each of these devices with a
particular md virtual device.  Once it is completely assembled, it can
be accessed.

An array should be created by a user-space tool.  This will write
superblocks to all devices.  It will usually mark the array as
'unclean', or with some devices missing, so that the kernel md driver
can create appropriate redundancy (copying in raid1, parity
calculation in raid4/5).

When an array is assembled, it is first initialized with the
SET_ARRAY_INFO ioctl.  This contains, in particular, a major and minor
version number.  The major version number selects which superblock
format is to be used.  The minor number might be used to tune handling
of the format, such as suggesting where on each device to look for the
superblock.

Then each device is added using the ADD_NEW_DISK ioctl.  This
provides, in particular, a major and minor number identifying the
device to add.

The array is started with the RUN_ARRAY ioctl.

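These ioctls are normally issued by a user-space tool such as mdadm
rather than by hand.  As a rough illustration of the sequence above
(device names are illustrative), assembling a previously created
array is simply:

   mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1
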
Once started, new devices can be added.  They should have an
appropriate superblock written to them, and then be passed in with
ADD_NEW_DISK.

Devices that have failed or are not yet active can be detached from an
array using HOT_REMOVE_DISK.

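With mdadm, adding a new device to a running array and removing a
failed one might look like this (device names are illustrative; a
device must be failed or spare before it can be removed):

   mdadm /dev/md0 --add /dev/sdd1
   mdadm /dev/md0 --fail /dev/sdb1
   mdadm /dev/md0 --remove /dev/sdb1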

Specific Rules that apply to format-0 superblock arrays, and
arrays with no superblock (non-persistent).
-------------------------------------------------------------

An array can be 'created' by describing the array (level, chunksize
etc) in a SET_ARRAY_INFO ioctl.  This must have major_version==0 and
raid_disks != 0.

Then uninitialized devices can be added with ADD_NEW_DISK.  The
structure passed to ADD_NEW_DISK must specify the state of the device
and its role in the array.

Once started with RUN_ARRAY, uninitialized spares can be added with
HOT_ADD_DISK.

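With mdadm, creating a format-0 array and, separately, an array with
no superblock might look like this (a sketch only; the levels, device
names and device counts are illustrative):

   mdadm --create /dev/md0 --metadata=0.90 --level=5 --raid-devices=3 \
         /dev/sda1 /dev/sdb1 /dev/sdc1
   mdadm --build /dev/md1 --level=linear --raid-devices=2 \
         /dev/sdd1 /dev/sde1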


MD devices in sysfs
-------------------
md devices appear in sysfs (/sys) as regular block devices,
e.g.
   /sys/block/md0

Each 'md' device will contain a subdirectory called 'md' which
contains further md-specific information about the device.

All md devices contain:
  level
     a text file indicating the 'raid level'.  This may be a standard
     numerical level prefixed by "RAID-" - e.g. "RAID-5", or some
     other name such as "linear" or "multipath".
     If no raid level has been set yet (array is still being
     assembled), this file will be empty.

  raid_disks
     a text file with a simple number indicating the number of devices
     in a fully functional array.  If this is not yet known, the file
     will be empty.  If an array is being resized, this will contain
     the larger of the old and new sizes.
     Some raid levels (RAID1) allow this value to be set while the
     array is active.  This will reconfigure the array.  Otherwise
     it can only be set while assembling an array.
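     For example, a RAID1 array could be grown from two to three
     devices while active by writing to this file (a sketch; md0 is
     assumed to be a RAID1 array with a device available to use):
        echo 3 > /sys/block/md0/md/raid_disks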

  chunk_size
     This is the size in bytes for 'chunks' and is only relevant to
     raid levels that involve striping (0,4,5,6,10).  The address space
     of the array is conceptually divided into chunks and consecutive
     chunks are striped onto neighbouring devices.
     The size should be at least PAGE_SIZE (4k) and should be a power
     of 2.  This can only be set while assembling an array.

  component_size
     For arrays with data redundancy (i.e. not raid0, linear, faulty,
     multipath), all components must be the same size - or at least
     there must be a size that they all provide space for.  This is a
     key part of the geometry of the array.  It is measured in sectors
     and can be read from here.  Writing to this value may resize
     the array if the personality supports it (raid1, raid5, raid6),
     and if the component drives are large enough.
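     For example, each component could be limited to its first 1 GiB
     (2097152 512-byte sectors) by writing (a sketch; md0 assumed to
     exist and its personality assumed to support resizing):
        echo 2097152 > /sys/block/md0/md/component_size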

  metadata_version
     This indicates the format that is being used to record metadata
     about the array.  It can be 0.90 (traditional format), 1.0, 1.1,
     1.2 (newer format in varying locations) or "none" indicating that
     the kernel isn't managing metadata at all.

  level
     The raid 'level' for this array.  The name will often (but not
     always) be the same as the name of the module that implements the
     level.  To be auto-loaded the module must have an alias
     md-$LEVEL  e.g. md-raid5
     This can be written only while the array is being assembled, not
     after it is started.

As component devices are added to an md array, they appear in the 'md'
directory as new directories named
   dev-XXX
where XXX is a name that the kernel knows for the device, e.g. hdb1.
Each directory contains:

  block
     a symlink to the block device in /sys/block, e.g.
     /sys/block/md0/md/dev-hdb1/block -> ../../../../block/hdb/hdb1

  super
     A file containing an image of the superblock read from, or
     written to, that device.

  state
     A file recording the current state of the device in the array
     which can be a comma separated list of
        faulty   - device has been kicked from active use due to
                   a detected fault
        in_sync  - device is a fully in-sync member of the array
        spare    - device is working, but not a full member.
                   This includes spares that are in the process
                   of being recovered onto
     This list may grow in future.
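     For example (the member device name is illustrative):
        cat /sys/block/md0/md/dev-sdb1/state
     would report 'in_sync' for a healthy, fully synced member.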

  errors
     An approximate count of read errors that have been detected on
     this device but have not caused the device to be evicted from
     the array (either because they were corrected or because they
     happened while the array was read-only).  When using version-1
     metadata, this value persists across restarts of the array.

     This value can be written while assembling an array thus
     providing an ongoing count for arrays with metadata managed by
     userspace.
| 238 | |
NeilBrown | 014236d | 2006-01-06 00:20:55 -0800 | [diff] [blame^] | 239 | slot |
| 240 | This gives the role that the device has in the array. It will |
| 241 | either be 'none' if the device is not active in the array |
| 242 | (i.e. is a spare or has failed) or an integer less than the |
| 243 | 'raid_disks' number for the array indicating which possition |
| 244 | it currently fills. This can only be set while assembling an |
| 245 | array. A device for which this is set is assumed to be working. |
| 246 | |

An active md device will also contain an entry for each active device
in the array.  These are named

   rdNN

where 'NN' is the position in the array, starting from 0.
So for a 3 drive array there will be rd0, rd1, rd2.
These are symbolic links to the appropriate 'dev-XXX' entry.
Thus, for example,
   cat /sys/block/md*/md/rd*/state
will show 'in_sync' on every line.



Active md devices for levels that support data redundancy (1,4,5,6)
also have

  sync_action
     a text file that can be used to monitor and control the rebuild
     process.  It contains one word which can be one of:
       resync        - redundancy is being recalculated after unclean
                       shutdown or creation
       recover       - a hot spare is being built to replace a
                       failed/missing device
       idle          - nothing is happening
       check         - A full check of redundancy was requested and is
                       happening.  This reads all blocks and checks
                       them.  A repair may also happen for some raid
                       levels.
       repair        - A full check and repair is happening.  This is
                       similar to 'resync', but was requested by the
                       user, and the write-intent bitmap is NOT used to
                       optimise the process.

     This file is writable, and each of the strings that could be
     read is meaningful for writing.

     'idle' will stop an active resync/recovery etc.  There is no
     guarantee that another resync/recovery may not be automatically
     started again, though some event will be needed to trigger
     this.
     'resync' or 'recovery' can be used to restart the
     corresponding operation if it was stopped with 'idle'.
     'check' and 'repair' will start the appropriate process
     providing the current state is 'idle'.
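     For example, a manual consistency check could be requested, and
     later cancelled, like this (md0 assumed to exist and be idle):
        echo check > /sys/block/md0/md/sync_action
        echo idle > /sys/block/md0/md/sync_action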

  mismatch_cnt
     When performing 'check' and 'repair', and possibly when
     performing 'resync', md will count the number of errors that are
     found.  The count in 'mismatch_cnt' is the number of sectors
     that were re-written, or (for 'check') would have been
     re-written.  As most raid levels work in units of pages rather
     than sectors, this may be larger than the number of actual errors
     by a factor of the number of sectors in a page.

Each active md device may also have attributes specific to the
personality module that manages it.
These are specific to the implementation of the module and could
change substantially if the implementation changes.

These currently include

  stripe_cache_size  (currently raid5 only)
     number of entries in the stripe cache.  This is writable, but
     there are upper and lower limits (32768, 16).  Default is 128.
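     For example, the cache could be enlarged on a busy raid5 array
     like this (md0 assumed to be a raid5 array):
        echo 256 > /sys/block/md0/md/stripe_cache_size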
  stripe_cache_active  (currently raid5 only)
     number of active entries in the stripe cache