#
# Block device driver configuration
#

menu "Multi-device support (RAID and LVM)"

config MD
	bool "Multiple devices driver support (RAID and LVM)"
	help
	  Support multiple physical spindles through a single logical device.
	  Required for RAID and logical volume management.

config BLK_DEV_MD
	tristate "RAID support"
	depends on MD
	---help---
	  This driver lets you combine several hard disk partitions into one
	  logical block device. This can be used to simply append one
	  partition to another one or to combine several redundant hard disks
	  into a RAID1/4/5 device so as to provide protection against hard
	  disk failures. This is called "Software RAID" since the combining of
	  the partitions is done by the kernel. "Hardware RAID" means that the
	  combining is done by a dedicated controller; if you have such a
	  controller, you do not need to say Y here.
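
	  For example, once this driver is active, the state of all
	  assembled md arrays can be inspected through the /proc interface:

	    cat /proc/mdstat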

	  More information about Software RAID on Linux is contained in the
	  Software RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also learn
	  where to get the supporting user space utilities raidtools.

	  If unsure, say N.

config MD_LINEAR
	tristate "Linear (append) mode"
	depends on BLK_DEV_MD
	---help---
	  If you say Y here, then your multiple devices driver will be able to
	  use the so-called linear mode, i.e. it will combine the hard disk
	  partitions by simply appending one to the other.
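
	  As an illustration (the device names below are only examples),
	  such an array is typically created with the mdadm utility:

	    mdadm --create /dev/md0 --level=linear --raid-devices=2 \
	          /dev/sda1 /dev/sdb1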

	  To compile this as a module, choose M here: the module
	  will be called linear.

	  If unsure, say Y.

config MD_RAID0
	tristate "RAID-0 (striping) mode"
	depends on BLK_DEV_MD
	---help---
	  If you say Y here, then your multiple devices driver will be able to
	  use the so-called raid0 mode, i.e. it will combine the hard disk
	  partitions into one logical device in such a fashion as to fill them
	  up evenly, one chunk here and one chunk there. This will increase
	  the throughput rate if the partitions reside on distinct disks.
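
	  For example (device names and chunk size are illustrative only), a
	  two-disk stripe set with 64 KiB chunks can be created with mdadm:

	    mdadm --create /dev/md0 --level=0 --chunk=64 --raid-devices=2 \
	          /dev/sda1 /dev/sdb1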

	  Information about Software RAID on Linux is contained in the
	  Software-RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also
	  learn where to get the supporting user space utilities raidtools.

	  To compile this as a module, choose M here: the module
	  will be called raid0.

	  If unsure, say Y.

config MD_RAID1
	tristate "RAID-1 (mirroring) mode"
	depends on BLK_DEV_MD
	---help---
	  A RAID-1 set consists of several disk drives which are exact copies
	  of each other. In the event of a mirror failure, the RAID driver
	  will continue to use the operational mirrors in the set, providing
	  an error free MD (multiple device) to the higher levels of the
	  kernel. In a set with N drives, the available space is the capacity
	  of a single drive, and the set protects against a failure of (N - 1)
	  drives.
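
	  For example, a mirror over three 100 GB drives offers 100 GB of
	  usable space and keeps working even if any two of the drives fail.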

	  Information about Software RAID on Linux is contained in the
	  Software-RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also
	  learn where to get the supporting user space utilities raidtools.

	  If you want to use such a RAID-1 set, say Y. To compile this code
	  as a module, choose M here: the module will be called raid1.

	  If unsure, say Y.

config MD_RAID10
	tristate "RAID-10 (mirrored striping) mode (EXPERIMENTAL)"
	depends on BLK_DEV_MD && EXPERIMENTAL
	---help---
	  RAID-10 provides a combination of striping (RAID-0) and
	  mirroring (RAID-1) with easier configuration and more flexible
	  layout.
	  Unlike RAID-0, but like RAID-1, RAID-10 requires all devices to
	  be the same size (or, at least, only as much space as is available
	  on the smallest device will be used).
	  RAID-10 provides a variety of layouts that provide different levels
	  of redundancy and performance.
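
	  For instance (device names and layout are examples only), a
	  four-disk array using the "near 2" layout could be created with:

	    mdadm --create /dev/md0 --level=10 --layout=n2 --raid-devices=4 \
	          /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1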

	  RAID-10 requires mdadm-1.7.0 or later, available at:

	  ftp://ftp.kernel.org/pub/linux/utils/raid/mdadm/

	  If unsure, say Y.

config MD_RAID5
	tristate "RAID-4/RAID-5 mode"
	depends on BLK_DEV_MD
	---help---
	  A RAID-5 set of N drives with a capacity of C MB per drive provides
	  the capacity of C * (N - 1) MB, and protects against a failure
	  of a single drive. For a given sector (row) number, (N - 1) drives
	  contain data sectors, and one drive contains the parity protection.
	  For a RAID-4 set, the parity blocks are present on a single drive,
	  while a RAID-5 set distributes the parity across the drives in one
	  of the available parity distribution methods.
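
	  As a concrete example, four drives of 100 GB each arranged as
	  RAID-5 provide 3 * 100 GB = 300 GB of usable capacity and survive
	  the loss of any single drive.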

	  Information about Software RAID on Linux is contained in the
	  Software-RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also
	  learn where to get the supporting user space utilities raidtools.

	  If you want to use such a RAID-4/RAID-5 set, say Y. To
	  compile this code as a module, choose M here: the module
	  will be called raid5.

	  If unsure, say Y.

config MD_RAID5_RESHAPE
	bool "Support adding drives to a RAID-5 array (experimental)"
	depends on MD_RAID5 && EXPERIMENTAL
	---help---
	  A RAID-5 set can be expanded by adding extra drives. This
	  requires "restriping" the array, which means (almost) every
	  block must be written to a different place.

	  This option allows such restriping to be done while the array
	  is online. However, it is still EXPERIMENTAL code. It should
	  work, but please be sure that you have backups.

	  You will need mdadm version 2.4.1 or later to use this
	  feature safely. During the early stage of the reshape there is
	  a critical section where live data is being over-written. A
	  crash during this time needs extra care for recovery. The
	  newer mdadm takes a copy of the data in the critical section
	  and will restore it, if necessary, after a crash.

	  The mdadm usage is e.g.
	       mdadm --grow /dev/md1 --raid-disks=6
	  to grow '/dev/md1' to use 6 disks.
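
	  If the backup copy of the critical section should be kept in a
	  particular place, a sufficiently new mdadm also accepts a
	  --backup-file option (the path here is only an illustration):
	       mdadm --grow /dev/md1 --raid-disks=6 \
	             --backup-file=/root/md1-reshape-backup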

	  Note: The array can only be expanded, not contracted.
	  There should be enough spares already present to make the new
	  array workable.

config MD_RAID6
	tristate "RAID-6 mode"
	depends on BLK_DEV_MD
	---help---
	  A RAID-6 set of N drives with a capacity of C MB per drive
	  provides the capacity of C * (N - 2) MB, and protects
	  against a failure of any two drives. For a given sector
	  (row) number, (N - 2) drives contain data sectors, and two
	  drives contain two independent redundancy syndromes. Like
	  RAID-5, RAID-6 distributes the syndromes across the drives
	  in one of the available parity distribution methods.
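
	  Six 100 GB drives arranged as RAID-6, for instance, give
	  4 * 100 GB = 400 GB of usable space while tolerating the loss
	  of any two of them.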

	  RAID-6 requires mdadm-1.5.0 or later, available at:

	  ftp://ftp.kernel.org/pub/linux/utils/raid/mdadm/

	  If you want to use such a RAID-6 set, say Y. To compile
	  this code as a module, choose M here: the module will be
	  called raid6.

	  If unsure, say Y.

config MD_MULTIPATH
	tristate "Multipath I/O support"
	depends on BLK_DEV_MD
	help
	  Multipath-IO is the ability of certain devices to address the same
	  physical disk over multiple 'IO paths'. The code ensures that such
	  paths can be defined and handled at runtime, and ensures that a
	  transparent failover to the backup path(s) happens if an IO error
	  arrives on the primary path.

	  If unsure, say N.

config MD_FAULTY
	tristate "Faulty test module for MD"
	depends on BLK_DEV_MD
	help
	  The "faulty" module allows for a block device that occasionally
	  returns read or write errors. It is useful for testing.

	  If unsure, say N.

config BLK_DEV_DM
	tristate "Device mapper support"
	depends on MD
	---help---
	  Device-mapper is a low level volume manager. It works by allowing
	  people to specify mappings for ranges of logical sectors. Various
	  mapping types are available; in addition, people may write their own
	  modules containing custom mappings if they wish.
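
	  As a small illustration (device name, size and target below are
	  arbitrary examples), a mapping can be loaded with the dmsetup tool
	  from a table of "start length target args" lines, counted in
	  512-byte sectors:

	    echo "0 1024000 linear /dev/sda1 0" | dmsetup create mydev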

	  Higher level volume managers such as LVM2 use this driver.

	  To compile this as a module, choose M here: the module will be
	  called dm-mod.

	  If unsure, say N.

config DM_CRYPT
	tristate "Crypt target support"
	depends on BLK_DEV_DM && EXPERIMENTAL
	select CRYPTO
	---help---
	  This device-mapper target allows you to create a device that
	  transparently encrypts the data on it. You'll need to activate
	  the ciphers you're going to use in the cryptoapi configuration.
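
	  For example, with the user space cryptsetup tool (the name, cipher
	  and key size here are only illustrative), a plain encrypted
	  mapping over an existing partition might be set up like this:

	    cryptsetup -c aes -s 256 create cryptvol /dev/sda2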

	  Information on how to use dm-crypt can be found on

	  <http://www.saout.de/misc/dm-crypt/>

	  To compile this code as a module, choose M here: the module will
	  be called dm-crypt.

	  If unsure, say N.

config DM_SNAPSHOT
	tristate "Snapshot target (EXPERIMENTAL)"
	depends on BLK_DEV_DM && EXPERIMENTAL
	---help---
	  Allow volume managers to take writeable snapshots of a device.
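
	  With LVM2, for instance, this is what backs a snapshot logical
	  volume (the volume group and names below are made up):

	    lvcreate --size 512M --snapshot --name dbsnap /dev/vg0/data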

config DM_MIRROR
	tristate "Mirror target (EXPERIMENTAL)"
	depends on BLK_DEV_DM && EXPERIMENTAL
	---help---
	  Allow volume managers to mirror logical volumes, also
	  needed for live data migration tools such as 'pvmove'.
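
	  With this target enabled, LVM2 can, for example, move the data of
	  one physical volume to another while it stays in use (device
	  names are illustrative):

	    pvmove /dev/sda1 /dev/sdb1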

config DM_ZERO
	tristate "Zero target (EXPERIMENTAL)"
	depends on BLK_DEV_DM && EXPERIMENTAL
	---help---
	  A target that discards writes, and returns all zeroes for
	  reads. Useful in some recovery situations.
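
	  For example, a 1 GiB all-zero device (2097152 sectors of 512
	  bytes; the name is arbitrary) can be created with:

	    echo "0 2097152 zero" | dmsetup create zerodev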

config DM_MULTIPATH
	tristate "Multipath target (EXPERIMENTAL)"
	depends on BLK_DEV_DM && EXPERIMENTAL
	---help---
	  Allow volume managers to support multipath hardware.

config DM_MULTIPATH_EMC
	tristate "EMC CX/AX multipath support (EXPERIMENTAL)"
	depends on DM_MULTIPATH && BLK_DEV_DM && EXPERIMENTAL
	---help---
	  Multipath support for EMC CX/AX series hardware.

endmenu