Block IO Controller
===================
Overview
========
cgroup subsys "blkio" implements the block IO controller. There is a need
for various kinds of IO control policies (like proportional BW, max BW)
both at leaf nodes as well as at intermediate nodes in a storage hierarchy.
The plan is to use the same cgroup based management interface for the blkio
controller and to switch IO policies in the background based on user options.

Currently two IO control policies are implemented. The first one is
proportional weight time based division of disk policy. It is implemented
in CFQ, hence this policy takes effect only on leaf nodes when CFQ is being
used. The second one is a throttling policy which can be used to specify
upper IO rate limits on devices. This policy is implemented in the generic
block layer and can be used on leaf nodes as well as on higher level
logical devices like device mapper.

HOWTO
=====
Proportional Weight division of bandwidth
-----------------------------------------
You can do a very simple test by running two dd threads in two different
cgroups. Here is what you can do.

- Enable Block IO controller
        CONFIG_BLK_CGROUP=y

- Enable group scheduling in CFQ
        CONFIG_CFQ_GROUP_IOSCHED=y

- Compile and boot into the kernel and mount the IO controller (blkio); see
  cgroups.txt, Why are cgroups needed?.

        mount -t tmpfs cgroup_root /sys/fs/cgroup
        mkdir /sys/fs/cgroup/blkio
        mount -t cgroup -o blkio none /sys/fs/cgroup/blkio

- Create two cgroups
        mkdir -p /sys/fs/cgroup/blkio/test1/ /sys/fs/cgroup/blkio/test2

- Set weights of group test1 and test2
        echo 1000 > /sys/fs/cgroup/blkio/test1/blkio.weight
        echo 500 > /sys/fs/cgroup/blkio/test2/blkio.weight
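With those weights, test1 should get roughly two thirds of the disk time and
test2 one third. A quick sanity check of the expected split (plain
arithmetic; nothing here touches the cgroup files):

```shell
# Expected disk-time shares: each group gets weight / total-weight.
awk 'BEGIN { w1 = 1000; w2 = 500
             printf "test1=%.0f%% test2=%.0f%%\n",
                    100 * w1 / (w1 + w2), 100 * w2 / (w1 + w2) }'
# prints: test1=67% test2=33%
```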

- Create two files of the same size (say 512MB each) on the same disk
  (file1, file2) and launch two dd threads in different cgroups to read
  those files.

        sync
        echo 3 > /proc/sys/vm/drop_caches

        dd if=/mnt/sdb/zerofile1 of=/dev/null &
        echo $! > /sys/fs/cgroup/blkio/test1/tasks
        cat /sys/fs/cgroup/blkio/test1/tasks

        dd if=/mnt/sdb/zerofile2 of=/dev/null &
        echo $! > /sys/fs/cgroup/blkio/test2/tasks
        cat /sys/fs/cgroup/blkio/test2/tasks

- At the macro level, the first dd should finish first. To get more precise
  data, keep looking (with the help of a script) at the blkio.time and
  blkio.sectors files of both the test1 and test2 groups. These tell how
  much disk time (in milliseconds) each group got and how many sectors each
  group dispatched to the disk. We provide fairness in terms of disk time,
  so ideally blkio.time of the cgroups should be in proportion to the
  weights.
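The proportionality check reduces to a one-liner. The two disk-time values
below are hypothetical sample readings, not real output; with weights 1000
and 500 the ratio should hover around 2.0:

```shell
t1=6000   # hypothetical disk time (ms) accumulated by test1
t2=3100   # hypothetical disk time (ms) accumulated by test2
awk -v a="$t1" -v b="$t2" 'BEGIN { printf "ratio %.2f\n", a / b }'
# prints: ratio 1.94 -- close to the 2.0 implied by the 1000:500 weights
```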

Throttling/Upper Limit policy
-----------------------------
- Enable Block IO controller
        CONFIG_BLK_CGROUP=y

- Enable throttling in block layer
        CONFIG_BLK_DEV_THROTTLING=y

- Mount blkio controller (see cgroups.txt, Why are cgroups needed?)
        mount -t cgroup -o blkio none /sys/fs/cgroup/blkio

- Specify a bandwidth rate on a particular device for the root group. The
  format for the policy is "<major>:<minor> <bytes_per_second>".

        echo "8:16 1048576" > /sys/fs/cgroup/blkio/blkio.throttle.read_bps_device

  The above will put a limit of 1MB/second on reads happening for the root
  group on the device having major/minor number 8:16.
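The 8:16 pair is the device's major:minor number. `stat` reports these in
hex, so a small helper (the function name is ours, not part of any kernel
interface) can produce the decimal form the rule expects:

```shell
# devnum: print "major:minor" in decimal for a device node.
# stat -c %t and %T print the numbers in hex, hence the printf conversion.
devnum() {
    printf '%d:%d\n' "0x$(stat -c %t "$1")" "0x$(stat -c %T "$1")"
}

devnum /dev/null   # 1:3 on Linux; for a disk such as /dev/sdb expect e.g. 8:16
```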

- Run dd to read a file and see if the rate is throttled to 1MB/s or not.

        # dd if=/mnt/common/zerofile of=/dev/null bs=4K count=1024 iflag=direct
        1024+0 records in
        1024+0 records out
        4194304 bytes (4.2 MB) copied, 4.0001 s, 1.0 MB/s
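The ~4 seconds reported by dd is exactly what the limit predicts. A sketch
of the arithmetic, with the sizes taken from the dd command above:

```shell
bytes=$(( 4096 * 1024 ))   # bs=4K * count=1024 = 4 MiB total
limit=1048576              # 1 MiB/s cap from the throttle rule
echo "expected: ~$(( bytes / limit ))s"
# prints: expected: ~4s
```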

  Limits for writes can be set using the blkio.throttle.write_bps_device
  file.

Hierarchical Cgroups
====================
- Currently only CFQ supports hierarchical groups. For throttling, the
  cgroup interface does allow creation of hierarchical cgroups, but
  internally it treats them as a flat hierarchy.

  If somebody creates a hierarchy like the following:

                  root
                 /    \
             test1    test2
               |
             test3

  CFQ will handle the hierarchy correctly, but throttling will practically
  treat all groups as being at the same level. For details on CFQ hierarchy
  support, refer to Documentation/block/cfq-iosched.txt. Throttling will
  treat the hierarchy as if it looks like the following.

                  pivot
               /  /    \    \
           root test1 test2 test3

  Nesting cgroups, while allowed, isn't officially supported and blkio
  generates a warning when cgroups nest. Once throttling implements
  hierarchy support, hierarchy will be supported and the warning will
  be removed.

Various user visible config options
===================================
CONFIG_BLK_CGROUP
        - Block IO controller.

CONFIG_DEBUG_BLK_CGROUP
        - Debug help. Right now some additional stats files show up in the
          cgroup if this option is enabled.

CONFIG_CFQ_GROUP_IOSCHED
        - Enables group scheduling in CFQ. CFQ supports hierarchical
          groups; see the Hierarchical Cgroups section above.

CONFIG_BLK_DEV_THROTTLING
        - Enable block device throttling support in the block layer.

Details of cgroup files
=======================
Proportional weight policy files
--------------------------------
- blkio.weight
        - Specifies the per-cgroup weight. This is the default weight of
          the group on all devices until and unless overridden by a
          per-device rule (see blkio.weight_device).
          The currently allowed range of weights is from 10 to 1000.

- blkio.weight_device
        - One can specify per-cgroup per-device rules using this interface.
          These rules override the default value of the group weight as
          specified by blkio.weight.

          Following is the format.

          # echo dev_maj:dev_minor weight > blkio.weight_device
          Configure weight=300 on /dev/sdb (8:16) in this cgroup
          # echo 8:16 300 > blkio.weight_device
          # cat blkio.weight_device
          dev     weight
          8:16    300

          Configure weight=500 on /dev/sda (8:0) in this cgroup
          # echo 8:0 500 > blkio.weight_device
          # cat blkio.weight_device
          dev     weight
          8:0     500
          8:16    300

          Remove the specific weight for /dev/sda in this cgroup
          # echo 8:0 0 > blkio.weight_device
          # cat blkio.weight_device
          dev     weight
          8:16    300
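The kernel rejects malformed rules; a pre-check in the same spirit can catch
mistakes before writing. This helper is purely illustrative (bash syntax,
name made up here), encoding the 10..1000 range and the weight-0 "remove
rule" convention shown above:

```shell
# valid_rule MAJ:MIN WEIGHT -- true for weights 10..1000, or 0 (clears the rule)
valid_rule() {
    [[ $1 =~ ^[0-9]+:[0-9]+$ ]] || return 1
    (( $2 == 0 || ($2 >= 10 && $2 <= 1000) ))
}

valid_rule 8:16 300 && echo accepted   # within the documented range
valid_rule 8:16 5   || echo rejected   # below the minimum weight of 10
```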

- blkio.leaf_weight[_device]
        - Equivalents of blkio.weight[_device] for the purpose of deciding
          how much weight tasks in the given cgroup have while competing
          with the cgroup's child cgroups. For details, please refer to
          Documentation/block/cfq-iosched.txt.

- blkio.time
        - Disk time allocated to the cgroup per device, in milliseconds.
          The first two fields specify the major and minor number of the
          device and the third field specifies the disk time allocated to
          the group in milliseconds.

- blkio.sectors
        - Number of sectors transferred to/from disk by the group. The
          first two fields specify the major and minor number of the
          device and the third field specifies the number of sectors
          transferred by the group to/from the device.

- blkio.io_service_bytes
        - Number of bytes transferred to/from the disk by the group. These
          are further divided by the type of operation - read or write,
          sync or async. The first two fields specify the major and minor
          number of the device, the third field specifies the operation
          type and the fourth field specifies the number of bytes.
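Most stats files share this "major:minor operation value" layout, so a
single awk pass can pull out one operation type. The lines fed in below are
made-up sample values, not real output; against a live cgroup you would
read the file itself (e.g. /sys/fs/cgroup/blkio/test1/blkio.io_service_bytes):

```shell
# Sum the Read bytes per device from io_service_bytes-style output.
awk '$2 == "Read" { read[$1] += $3 }
     END { for (d in read) print d, read[d] }' <<'EOF'
8:16 Read 1048576
8:16 Write 524288
8:16 Sync 786432
8:16 Async 786432
8:16 Total 1572864
EOF
# prints: 8:16 1048576
```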

- blkio.io_serviced
        - Number of IOs completed to/from the disk by the group. These
          are further divided by the type of operation - read or write,
          sync or async. First two fields specify the major and minor
          number of the device, third field specifies the operation type
          and the fourth field specifies the number of IOs.

- blkio.io_service_time
        - Total amount of time between request dispatch and request
          completion for the IOs done by this cgroup. This is in
          nanoseconds to make it meaningful for flash devices too. For
          devices with a queue depth of 1, this time represents the actual
          service time. When queue_depth > 1, that is no longer true as
          requests may be served out of order. This may cause the service
          time for a given IO to include the service time of multiple IOs
          when served out of order, which may result in total
          io_service_time > actual time elapsed. This time is further
          divided by the type of operation - read or write, sync or async.
          First two fields specify the major and minor number of the
          device, third field specifies the operation type and the fourth
          field specifies the io_service_time in ns.

- blkio.io_wait_time
        - Total amount of time the IOs for this cgroup spent waiting in
          the scheduler queues for service. This can be greater than the
          total time elapsed since it is cumulative io_wait_time for all
          IOs. It is not a measure of total time the cgroup spent waiting
          but rather a measure of the wait_time for its individual IOs.
          For devices with queue_depth > 1 this metric does not include
          the time spent waiting for service once the IO is dispatched to
          the device but till it actually gets serviced (there might be a
          time lag here due to re-ordering of requests by the device).
          This is in nanoseconds to make it meaningful for flash devices
          too. This time is further divided by the type of operation -
          read or write, sync or async. First two fields specify the major
          and minor number of the device, third field specifies the
          operation type and the fourth field specifies the io_wait_time
          in ns.

- blkio.io_merged
        - Total number of bios/requests merged into requests belonging to
          this cgroup. This is further divided by the type of operation -
          read or write, sync or async.

- blkio.io_queued
        - Total number of requests queued up at any given instant for this
          cgroup. This is further divided by the type of operation - read
          or write, sync or async.

- blkio.avg_queue_size
        - Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
          The average queue size for this cgroup over the entire time of
          this cgroup's existence. Queue size samples are taken each time
          one of the queues of this cgroup gets a timeslice.

- blkio.group_wait_time
        - Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
          This is the amount of time the cgroup had to wait since it
          became busy (i.e., went from 0 to 1 request queued) to get a
          timeslice for one of its queues. This is different from the
          io_wait_time which is the cumulative total of the amount of time
          spent by each IO in that cgroup waiting in the scheduler queue.
          This is in nanoseconds. If this is read when the cgroup is in a
          waiting (for timeslice) state, the stat will only report the
          group_wait_time accumulated till the last time it got a
          timeslice and will not include the current delta.

- blkio.empty_time
        - Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
          This is the amount of time a cgroup spends without any pending
          requests when not being served, i.e., it does not include any
          time spent idling for one of the queues of the cgroup. This is
          in nanoseconds. If this is read when the cgroup is in an empty
          state, the stat will only report the empty_time accumulated till
          the last time it had a pending request and will not include the
          current delta.

- blkio.idle_time
        - Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
          This is the amount of time spent by the IO scheduler idling for
          a given cgroup in anticipation of a better request than the
          existing ones from other queues/cgroups. This is in nanoseconds.
          If this is read when the cgroup is in an idling state, the stat
          will only report the idle_time accumulated till the last idle
          period and will not include the current delta.

- blkio.dequeue
        - Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y. This
          gives statistics about how many times a group was dequeued from
          the service tree of the device. The first two fields specify the
          major and minor number of the device and the third field
          specifies the number of times a group was dequeued from a
          particular device.

- blkio.*_recursive
        - Recursive versions of various stats. These files show the same
          information as their non-recursive counterparts but include
          stats from all the descendant cgroups.

Throttling/Upper limit policy files
-----------------------------------
- blkio.throttle.read_bps_device
        - Specifies an upper limit on the READ rate from the device. The
          IO rate is specified in bytes per second. Rules are per device.
          Following is the format.

          echo "<major>:<minor> <rate_bytes_per_second>" > /cgrp/blkio.throttle.read_bps_device

- blkio.throttle.write_bps_device
        - Specifies an upper limit on the WRITE rate to the device. The
          IO rate is specified in bytes per second. Rules are per device.
          Following is the format.

          echo "<major>:<minor> <rate_bytes_per_second>" > /cgrp/blkio.throttle.write_bps_device

- blkio.throttle.read_iops_device
        - Specifies an upper limit on the READ rate from the device. The
          IO rate is specified in IOs per second. Rules are per device.
          Following is the format.

          echo "<major>:<minor> <rate_io_per_second>" > /cgrp/blkio.throttle.read_iops_device

- blkio.throttle.write_iops_device
        - Specifies an upper limit on the WRITE rate to the device. The
          IO rate is specified in IOs per second. Rules are per device.
          Following is the format.

          echo "<major>:<minor> <rate_io_per_second>" > /cgrp/blkio.throttle.write_iops_device

Note: If both BW and IOPS rules are specified for a device, then IO is
      subject to both constraints.
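In other words, the stricter rule wins at any instant. For fixed-size
requests the effective byte ceiling is the minimum of the two limits; the
request size below is an assumption for illustration:

```shell
bps=2097152                # byte-rate rule: 2 MiB/s
iops=256                   # IOPS rule: 256 io/s
bsz=4096                   # assumed size of each IO, in bytes
iops_bytes=$(( iops * bsz ))
echo $(( bps < iops_bytes ? bps : iops_bytes ))
# prints: 1048576 -- here the IOPS rule is the binding constraint
```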
Vivek Goyal | 2786c4e5 | 2010-09-15 17:06:38 -0400 | [diff] [blame] | 327 | |
| 328 | - blkio.throttle.io_serviced |
| 329 | - Number of IOs (bio) completed to/from the disk by the group (as |
| 330 | seen by throttling policy). These are further divided by the type |
| 331 | of operation - read or write, sync or async. First two fields specify |
| 332 | the major and minor number of the device, third field specifies the |
| 333 | operation type and the fourth field specifies the number of IOs. |
| 334 | |
| 335 | blkio.io_serviced does accounting as seen by CFQ and counts are in |
| 336 | number of requests (struct request). On the other hand, |
| 337 | blkio.throttle.io_serviced counts number of IO in terms of number |
| 338 | of bios as seen by throttling policy. These bios can later be |
| 339 | merged by elevator and total number of requests completed can be |
| 340 | lesser. |
| 341 | |
| 342 | - blkio.throttle.io_service_bytes |
| 343 | - Number of bytes transferred to/from the disk by the group. These |
| 344 | are further divided by the type of operation - read or write, sync |
| 345 | or async. First two fields specify the major and minor number of the |
| 346 | device, third field specifies the operation type and the fourth field |
| 347 | specifies the number of bytes. |
| 348 | |
| 349 | These numbers should roughly be same as blkio.io_service_bytes as |
| 350 | updated by CFQ. The difference between two is that |
| 351 | blkio.io_service_bytes will not be updated if CFQ is not operating |
| 352 | on request queue. |
| 353 | |
Common files among various policies
-----------------------------------
- blkio.reset_stats
        - Writing an int to this file will result in resetting all the
          stats for that cgroup.

CFQ sysfs tunable
=================
/sys/block/<disk>/queue/iosched/slice_idle
------------------------------------------
On faster hardware CFQ can be slow, especially with sequential workloads.
This happens because CFQ idles on a single queue, and a single queue might
not drive deep enough request queue depths to keep the storage busy. In
such scenarios one can try setting slice_idle=0; that switches CFQ to IOPS
(IO operations per second) mode on NCQ-supporting hardware.

That means CFQ will not idle between cfq queues of a cfq group and hence be
able to drive higher queue depths and achieve better throughput. It also
means that cfq provides fairness among groups in terms of IOPS and not in
terms of disk time.

/sys/block/<disk>/queue/iosched/group_idle
------------------------------------------
If one disables idling on individual cfq queues and cfq service trees by
setting slice_idle=0, group_idle kicks in. That means CFQ will still idle
on the group in an attempt to provide fairness among groups.

By default group_idle is the same as slice_idle and does not do anything if
slice_idle is enabled.

One can experience an overall throughput drop if one has created multiple
groups and put applications in groups which are not driving enough IO to
keep the disk busy. In that case set group_idle=0, and CFQ will not idle on
individual groups and throughput should improve.
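Putting the two tunables together for a hypothetical disk sdb (the device
name is an example; these paths exist only while CFQ is the active
scheduler for that queue), a sketch might be:

```shell
# IOPS mode: stop idling on individual cfq queues ...
echo 0 > /sys/block/sdb/queue/iosched/slice_idle

# ... while group_idle (still at its default) preserves fairness among groups.
cat /sys/block/sdb/queue/iosched/group_idle

# If the groups cannot keep the disk busy anyway, trade group fairness
# for throughput by disabling group idling as well.
echo 0 > /sys/block/sdb/queue/iosched/group_idle
```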

What works
==========
- Currently only sync IO queues are supported. All buffered writes are
  still system wide and not per group. Hence we will not see service
  differentiation between groups for buffered writes.