#
# Block device driver configuration
#

menuconfig MD
	bool "Multiple devices driver support (RAID and LVM)"
	depends on BLOCK
	help
	  Support multiple physical spindles through a single logical device.
	  Required for RAID and logical volume management.

if MD

config BLK_DEV_MD
	tristate "RAID support"
	---help---
	  This driver lets you combine several hard disk partitions into one
	  logical block device. This can be used to simply append one
	  partition to another one or to combine several redundant hard disks
	  into a RAID1/4/5 device so as to provide protection against hard
	  disk failures. This is called "Software RAID" since the combining of
	  the partitions is done by the kernel. "Hardware RAID" means that the
	  combining is done by a dedicated controller; if you have such a
	  controller, you do not need to say Y here.

	  More information about Software RAID on Linux is contained in the
	  Software RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also learn
	  where to get the supporting user space utilities raidtools.

	  If unsure, say N.

config MD_AUTODETECT
	bool "Autodetect RAID arrays during kernel boot"
	depends on BLK_DEV_MD=y
	default y
	---help---
	  If you say Y here, then the kernel will try to autodetect RAID
	  arrays as part of its boot process.

	  If you don't use RAID and say Y, this autodetection can add a
	  several-second delay to the boot time due to the various
	  synchronisation steps that are part of this autodetection.

	  If unsure, say Y.

config MD_LINEAR
	tristate "Linear (append) mode"
	depends on BLK_DEV_MD
	---help---
	  If you say Y here, then your multiple devices driver will be able to
	  use the so-called linear mode, i.e. it will combine the hard disk
	  partitions by simply appending one to the other.

	  To compile this as a module, choose M here: the module
	  will be called linear.

	  If unsure, say Y.

config MD_RAID0
	tristate "RAID-0 (striping) mode"
	depends on BLK_DEV_MD
	---help---
	  If you say Y here, then your multiple devices driver will be able to
	  use the so-called raid0 mode, i.e. it will combine the hard disk
	  partitions into one logical device in such a fashion as to fill them
	  up evenly, one chunk here and one chunk there. This will increase
	  the throughput rate if the partitions reside on distinct disks.

	  Information about Software RAID on Linux is contained in the
	  Software-RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also
	  learn where to get the supporting user space utilities raidtools.

	  To compile this as a module, choose M here: the module
	  will be called raid0.

	  If unsure, say Y.

config MD_RAID1
	tristate "RAID-1 (mirroring) mode"
	depends on BLK_DEV_MD
	---help---
	  A RAID-1 set consists of several disk drives which are exact copies
	  of each other. In the event of a mirror failure, the RAID driver
	  will continue to use the operational mirrors in the set, providing
	  an error free MD (multiple device) to the higher levels of the
	  kernel. In a set with N drives, the available space is the capacity
	  of a single drive, and the set protects against a failure of (N - 1)
	  drives.
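
	  As a worked example with illustrative numbers: four 1 TB drives in
	  a RAID-1 set still provide roughly 1 TB of usable space, and the
	  set keeps working as long as at least one of the four drives
	  survives.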

	  Information about Software RAID on Linux is contained in the
	  Software-RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also
	  learn where to get the supporting user space utilities raidtools.

	  If you want to use such a RAID-1 set, say Y. To compile this code
	  as a module, choose M here: the module will be called raid1.

	  If unsure, say Y.

config MD_RAID10
	tristate "RAID-10 (mirrored striping) mode (EXPERIMENTAL)"
	depends on BLK_DEV_MD && EXPERIMENTAL
	---help---
	  RAID-10 provides a combination of striping (RAID-0) and
	  mirroring (RAID-1) with easier configuration and more flexible
	  layout.
	  Unlike RAID-0, but like RAID-1, RAID-10 requires all devices to
	  be the same size (or at least, only as much space as is available
	  on the smallest device will be used).
	  RAID-10 provides a variety of layouts that provide different levels
	  of redundancy and performance.

	  RAID-10 requires mdadm-1.7.0 or later, available at:

	  ftp://ftp.kernel.org/pub/linux/utils/raid/mdadm/
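
	  As a purely illustrative example (the device names and layout here
	  are assumptions, not a recommendation), a four-drive array in the
	  "near 2" layout can be created from user space with something like:

	    mdadm --create /dev/md0 --level=10 --layout=n2 --raid-devices=4 \
	      /dev/sd[bcde]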

	  If unsure, say Y.

config MD_RAID456
	tristate "RAID-4/RAID-5/RAID-6 mode"
	depends on BLK_DEV_MD
	select RAID6_PQ
	select ASYNC_MEMCPY
	select ASYNC_XOR
	select ASYNC_PQ
	select ASYNC_RAID6_RECOV
	---help---
	  A RAID-5 set of N drives with a capacity of C MB per drive provides
	  the capacity of C * (N - 1) MB, and protects against a failure
	  of a single drive. For a given sector (row) number, (N - 1) drives
	  contain data sectors, and one drive contains the parity protection.
	  For a RAID-4 set, the parity blocks are present on a single drive,
	  while a RAID-5 set distributes the parity across the drives in one
	  of the available parity distribution methods.

	  A RAID-6 set of N drives with a capacity of C MB per drive
	  provides the capacity of C * (N - 2) MB, and protects
	  against a failure of any two drives. For a given sector
	  (row) number, (N - 2) drives contain data sectors, and two
	  drives contain two independent redundancy syndromes. Like
	  RAID-5, RAID-6 distributes the syndromes across the drives
	  in one of the available parity distribution methods.
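
	  As a worked example with illustrative numbers: six 1 TB drives give
	  roughly 5 TB of usable space as a RAID-5 set (surviving one drive
	  failure) and roughly 4 TB as a RAID-6 set (surviving any two drive
	  failures).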

	  Information about Software RAID on Linux is contained in the
	  Software-RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also
	  learn where to get the supporting user space utilities raidtools.

	  If you want to use such a RAID-4/RAID-5/RAID-6 set, say Y. To
	  compile this code as a module, choose M here: the module
	  will be called raid456.

	  If unsure, say Y.

config MULTICORE_RAID456
	bool "RAID-4/RAID-5/RAID-6 Multicore processing (EXPERIMENTAL)"
	depends on MD_RAID456
	depends on SMP
	depends on EXPERIMENTAL
	---help---
	  Enable the raid456 module to dispatch per-stripe raid operations to a
	  thread pool.

	  If unsure, say N.

config ASYNC_RAID6_TEST
	tristate "Self test for hardware accelerated raid6 recovery"
	depends on RAID6_PQ
	select ASYNC_RAID6_RECOV
	---help---
	  This is a one-shot self test that permutes through the
	  recovery of all the possible two-disk failure scenarios for an
	  N-disk array. Recovery is performed with the asynchronous
	  raid6 recovery routines, and will optionally use an offload
	  engine if one is available.

	  If unsure, say N.

config MD_MULTIPATH
	tristate "Multipath I/O support"
	depends on BLK_DEV_MD
	help
	  Multipath-IO is the ability of certain devices to address the same
	  physical disk over multiple 'IO paths'. The code ensures that such
	  paths can be defined and handled at runtime, and ensures that a
	  transparent failover to the backup path(s) happens if an I/O error
	  arrives on the primary path.

	  If unsure, say N.

config MD_FAULTY
	tristate "Faulty test module for MD"
	depends on BLK_DEV_MD
	help
| The "faulty" module allows for a block device that occasionally returns |
| read or write errors. It is useful for testing. |
| |
| In unsure, say N. |

config BLK_DEV_DM
	tristate "Device mapper support"
	---help---
	  Device-mapper is a low-level volume manager. It works by allowing
	  people to specify mappings for ranges of logical sectors; various
	  mapping types are available, and people may also write their own
	  modules containing custom mappings if they wish.
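
	  As a purely illustrative sketch (the device name and sector count
	  are assumptions), a linear mapping of the first 204800 sectors of
	  /dev/sdb could be set up from user space with something like:

	    dmsetup create example --table "0 204800 linear /dev/sdb 0"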

	  Higher level volume managers such as LVM2 use this driver.

	  To compile this as a module, choose M here: the module will be
	  called dm-mod.

	  If unsure, say N.

config DM_DEBUG
	bool "Device mapper debugging support"
	depends on BLK_DEV_DM
	---help---
	  Enable this for messages that may help debug device-mapper problems.

	  If unsure, say N.

config DM_CRYPT
	tristate "Crypt target support"
	depends on BLK_DEV_DM
	select CRYPTO
	select CRYPTO_CBC
	---help---
	  This device-mapper target allows you to create a device that
	  transparently encrypts the data on it. You'll need to activate
	  the ciphers you're going to use in the cryptoapi configuration.

	  Information on how to use dm-crypt can be found on

	  <http://www.saout.de/misc/dm-crypt/>
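
	  As a purely illustrative sketch (the device and mapping names are
	  assumptions), user space tools such as cryptsetup build on this
	  target, e.g.:

	    cryptsetup luksFormat /dev/sdb2
	    cryptsetup luksOpen /dev/sdb2 secret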

	  To compile this code as a module, choose M here: the module will
	  be called dm-crypt.

	  If unsure, say N.

config DM_SNAPSHOT
	tristate "Snapshot target"
	depends on BLK_DEV_DM
	---help---
	  Allow volume managers to take writable snapshots of a device.

config DM_MIRROR
	tristate "Mirror target"
	depends on BLK_DEV_DM
	---help---
	  Allow volume managers to mirror logical volumes, also
	  needed for live data migration tools such as 'pvmove'.

config DM_LOG_USERSPACE
	tristate "Mirror userspace logging (EXPERIMENTAL)"
	depends on DM_MIRROR && EXPERIMENTAL && NET
	select CONNECTOR
	---help---
	  The userspace logging module provides a mechanism for
	  relaying the dm-dirty-log API to userspace. Log designs
	  which are more suited to userspace implementation (e.g.
	  shared storage logs) or experimental logs can be implemented
	  by leveraging this framework.

config DM_ZERO
	tristate "Zero target"
	depends on BLK_DEV_DM
	---help---
	  A target that discards writes, and returns all zeroes for
	  reads. Useful in some recovery situations.

config DM_MULTIPATH
	tristate "Multipath target"
	depends on BLK_DEV_DM
	# nasty syntax but means make DM_MULTIPATH independent
	# of SCSI_DH if the latter isn't defined but if
	# it is, DM_MULTIPATH must depend on it. We get a build
	# error if SCSI_DH=m and DM_MULTIPATH=y
	depends on SCSI_DH || !SCSI_DH
	---help---
	  Allow volume managers to support multipath hardware.

config DM_MULTIPATH_QL
	tristate "I/O Path Selector based on the number of in-flight I/Os"
	depends on DM_MULTIPATH
	---help---
	  This path selector is a dynamic load balancer which selects
	  the path with the least number of in-flight I/Os.

	  If unsure, say N.

config DM_MULTIPATH_ST
	tristate "I/O Path Selector based on the service time"
	depends on DM_MULTIPATH
	---help---
	  This path selector is a dynamic load balancer which selects
	  the path expected to complete the incoming I/O in the shortest
	  time.

	  If unsure, say N.

config DM_DELAY
	tristate "I/O delaying target (EXPERIMENTAL)"
	depends on BLK_DEV_DM && EXPERIMENTAL
	---help---
	  A target that delays reads and/or writes and can send
	  them to different devices. Useful for testing.

	  If unsure, say N.

config DM_UEVENT
	bool "DM uevents (EXPERIMENTAL)"
	depends on BLK_DEV_DM && EXPERIMENTAL
	---help---
	  Generate udev events for DM events.

endif # MD