Block IO Controller
===================
Overview
========
cgroup subsys "blkio" implements the block IO controller. There seems to be
a need for various kinds of IO control policies (like proportional BW, max BW)
both at leaf nodes as well as at intermediate nodes in a storage hierarchy.
The plan is to use the same cgroup based management interface for the blkio
controller and, based on user options, switch IO policies in the background.

Currently two IO control policies are implemented. The first one is a
proportional weight, time based division of disk policy; it is implemented
in CFQ, hence it takes effect only on leaf nodes when CFQ is being used. The
second one is a throttling policy which can be used to specify upper IO rate
limits on devices. This policy is implemented in the generic block layer and
can be used on leaf nodes as well as on higher level logical devices such as
device mapper.

HOWTO
=====
Proportional Weight division of bandwidth
-----------------------------------------
You can do a very simple test by running two dd threads in two different
cgroups. Here is what you can do.

- Enable Block IO controller
        CONFIG_BLK_CGROUP=y

- Enable group scheduling in CFQ
        CONFIG_CFQ_GROUP_IOSCHED=y

- Compile and boot into the kernel and mount the IO controller (blkio).

        mount -t cgroup -o blkio none /cgroup

- Create two cgroups
        mkdir -p /cgroup/test1/ /cgroup/test2

- Set weights of group test1 and test2
        echo 1000 > /cgroup/test1/blkio.weight
        echo 500 > /cgroup/test2/blkio.weight

- Create two files of the same size (say 512MB each) on the same disk
  (file1, file2) and launch two dd threads in different cgroups to read
  those files.

        sync
        echo 3 > /proc/sys/vm/drop_caches

        dd if=/mnt/sdb/zerofile1 of=/dev/null &
        echo $! > /cgroup/test1/tasks
        cat /cgroup/test1/tasks

        dd if=/mnt/sdb/zerofile2 of=/dev/null &
        echo $! > /cgroup/test2/tasks
        cat /cgroup/test2/tasks
- At the macro level, the first dd should finish first. To get more precise
  data, keep looking (with the help of a script; a minimal watch loop is
  sketched below) at the blkio.time and blkio.sectors files of both the
  test1 and test2 groups. This will tell how much disk time (in milliseconds)
  each group got and how many sectors each group dispatched to the disk. We
  provide fairness in terms of disk time, so ideally blkio.time of the
  cgroups should be in proportion to the weight.
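
A minimal sketch of such a watch loop (assuming the mount point /cgroup and
the group names used above) could be:

        # Print disk time and sectors of both groups once per second.
        while true; do
                for g in test1 test2; do
                        echo "$g:"
                        cat /cgroup/$g/blkio.time
                        cat /cgroup/$g/blkio.sectors
                done
                sleep 1
        done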

Throttling/Upper Limit policy
-----------------------------
- Enable Block IO controller
        CONFIG_BLK_CGROUP=y

- Enable throttling in block layer
        CONFIG_BLK_DEV_THROTTLING=y

- Mount blkio controller
        mount -t cgroup -o blkio none /cgroup/blkio

- Specify a bandwidth rate on a particular device for the root group. The
  format for the policy is "<major>:<minor> <bytes_per_second>".

        echo "8:16 1048576" > /cgroup/blkio/blkio.throttle.read_bps_device

  The above will put a limit of 1MB/second on reads happening for the root
  group on the device having major/minor number 8:16.

- Run dd to read a file and see if the rate is throttled to 1MB/s or not.

        # dd if=/mnt/common/zerofile of=/dev/null bs=4K count=1024 iflag=direct
        1024+0 records in
        1024+0 records out
        4194304 bytes (4.2 MB) copied, 4.0001 s, 1.0 MB/s

  Limits for writes can be put using the blkio.throttle.write_bps_device
  file, as shown below.
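
A sketch of a write limit (2MB/s on the same 8:16 device; the rate is chosen
arbitrarily for illustration):

        echo "8:16 2097152" > /cgroup/blkio/blkio.throttle.write_bps_device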

Various user visible config options
===================================
CONFIG_BLK_CGROUP
        - Block IO controller.

CONFIG_DEBUG_BLK_CGROUP
        - Debug help. Right now some additional stats files show up in the
          cgroup if this option is enabled.

CONFIG_CFQ_GROUP_IOSCHED
        - Enables group scheduling in CFQ. Currently only 1 level of group
          creation is allowed.

CONFIG_BLK_DEV_THROTTLING
        - Enable block device throttling support in block layer.

Details of cgroup files
=======================
Proportional weight policy files
--------------------------------
- blkio.weight
        - Specifies per cgroup weight. This is the default weight of the
          group on all the devices until and unless overridden by a per
          device rule (see blkio.weight_device).
          Currently allowed range of weights is from 100 to 1000.

- blkio.weight_device
        - One can specify per cgroup per device rules using this interface.
          These rules override the default value of group weight as specified
          by blkio.weight.

          Following is the format.

          # echo dev_maj:dev_minor weight > /path/to/cgroup/blkio.weight_device
          Configure weight=300 on /dev/sdb (8:16) in this cgroup
          # echo 8:16 300 > blkio.weight_device
          # cat blkio.weight_device
          dev     weight
          8:16    300

          Configure weight=500 on /dev/sda (8:0) in this cgroup
          # echo 8:0 500 > blkio.weight_device
          # cat blkio.weight_device
          dev     weight
          8:0     500
          8:16    300

          Remove specific weight for /dev/sda in this cgroup
          # echo 8:0 0 > blkio.weight_device
          # cat blkio.weight_device
          dev     weight
          8:16    300

- blkio.time
        - Disk time allocated to cgroup per device in milliseconds. First
          two fields specify the major and minor number of the device and
          third field specifies the disk time allocated to the group in
          milliseconds.

- blkio.sectors
        - Number of sectors transferred to/from disk by the group. First
          two fields specify the major and minor number of the device and
          third field specifies the number of sectors transferred by the
          group to/from the device.

- blkio.io_service_bytes
        - Number of bytes transferred to/from the disk by the group. These
          are further divided by the type of operation - read or write, sync
          or async. First two fields specify the major and minor number of
          the device, third field specifies the operation type and the
          fourth field specifies the number of bytes. (An illustrative read
          of one of these multi-field stat files is shown after this list.)

- blkio.io_serviced
        - Number of IOs completed to/from the disk by the group. These
          are further divided by the type of operation - read or write, sync
          or async. First two fields specify the major and minor number of
          the device, third field specifies the operation type and the
          fourth field specifies the number of IOs.

- blkio.io_service_time
        - Total amount of time between request dispatch and request
          completion for the IOs done by this cgroup. This is in nanoseconds
          to make it meaningful for flash devices too. For devices with a
          queue depth of 1, this time represents the actual service time.
          When queue_depth > 1, that is no longer true as requests may be
          served out of order. This may cause the service time for a given
          IO to include the service time of multiple IOs when served out of
          order, which may result in total io_service_time > actual time
          elapsed. This time is further divided by the type of operation -
          read or write, sync or async. First two fields specify the major
          and minor number of the device, third field specifies the
          operation type and the fourth field specifies the io_service_time
          in ns.

- blkio.io_wait_time
        - Total amount of time the IOs for this cgroup spent waiting in the
          scheduler queues for service. This can be greater than the total
          time elapsed since it is the cumulative io_wait_time for all IOs.
          It is not a measure of total time the cgroup spent waiting but
          rather a measure of the wait_time of its individual IOs. For
          devices with queue_depth > 1 this metric does not include the time
          spent waiting for service once the IO is dispatched to the device
          but till it actually gets serviced (there might be a time lag here
          due to re-ordering of requests by the device). This is in
          nanoseconds to make it meaningful for flash devices too. This time
          is further divided by the type of operation - read or write, sync
          or async. First two fields specify the major and minor number of
          the device, third field specifies the operation type and the
          fourth field specifies the io_wait_time in ns.

- blkio.io_merged
        - Total number of bios/requests merged into requests belonging to
          this cgroup. This is further divided by the type of operation -
          read or write, sync or async.

- blkio.io_queued
        - Total number of requests queued up at any given instant for this
          cgroup. This is further divided by the type of operation - read or
          write, sync or async.

- blkio.avg_queue_size
        - Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
          The average queue size for this cgroup over the entire time of
          this cgroup's existence. Queue size samples are taken each time
          one of the queues of this cgroup gets a timeslice.

- blkio.group_wait_time
        - Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
          This is the amount of time the cgroup had to wait since it became
          busy (i.e., went from 0 to 1 request queued) to get a timeslice
          for one of its queues. This is different from the io_wait_time,
          which is the cumulative total of the amount of time spent by each
          IO in that cgroup waiting in the scheduler queue. This is in
          nanoseconds. If this is read when the cgroup is in a waiting (for
          timeslice) state, the stat will only report the group_wait_time
          accumulated till the last time it got a timeslice and will not
          include the current delta.

- blkio.empty_time
        - Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
          This is the amount of time a cgroup spends without any pending
          requests when not being served, i.e., it does not include any time
          spent idling for one of the queues of the cgroup. This is in
          nanoseconds. If this is read when the cgroup is in an empty state,
          the stat will only report the empty_time accumulated till the last
          time it had a pending request and will not include the current
          delta.

- blkio.idle_time
        - Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
          This is the amount of time spent by the IO scheduler idling for a
          given cgroup in anticipation of a better request than the existing
          ones from other queues/cgroups. This is in nanoseconds. If this is
          read when the cgroup is in an idling state, the stat will only
          report the idle_time accumulated till the last idle period and
          will not include the current delta.

- blkio.dequeue
        - Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y. This
          gives the statistics about how many times a group was dequeued
          from the service tree of the device. First two fields specify the
          major and minor number of the device and third field specifies the
          number of times a group was dequeued from a particular device.
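
As referenced above, the multi-field stat files share a line-oriented
layout. A hypothetical read (the device number and values are purely
illustrative) might look like:

        # cat blkio.io_service_bytes
        8:16 Read 1310720
        8:16 Write 0
        8:16 Sync 1310720
        8:16 Async 0
        8:16 Total 1310720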

Throttling/Upper limit policy files
-----------------------------------
- blkio.throttle.read_bps_device
        - Specifies upper limit on READ rate from the device. IO rate is
          specified in bytes per second. Rules are per device. Following is
          the format.

          echo "<major>:<minor> <rate_bytes_per_second>" > /cgrp/blkio.throttle.read_bps_device

- blkio.throttle.write_bps_device
        - Specifies upper limit on WRITE rate to the device. IO rate is
          specified in bytes per second. Rules are per device. Following is
          the format.

          echo "<major>:<minor> <rate_bytes_per_second>" > /cgrp/blkio.throttle.write_bps_device

- blkio.throttle.read_iops_device
        - Specifies upper limit on READ rate from the device. IO rate is
          specified in IOs per second. Rules are per device. Following is
          the format.

          echo "<major>:<minor> <rate_io_per_second>" > /cgrp/blkio.throttle.read_iops_device

- blkio.throttle.write_iops_device
        - Specifies upper limit on WRITE rate to the device. IO rate is
          specified in IOs per second. Rules are per device. Following is
          the format.

          echo "<major>:<minor> <rate_io_per_second>" > /cgrp/blkio.throttle.write_iops_device

Note: If both BW and IOPS rules are specified for a device, then IO is
      subjected to both the constraints.
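
For instance, to cap reads on a hypothetical 8:16 device at both 1MB/s and
100 IOs/s (values chosen arbitrarily), IO then has to satisfy both rules:

        echo "8:16 1048576" > /cgrp/blkio.throttle.read_bps_device
        echo "8:16 100" > /cgrp/blkio.throttle.read_iops_device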

- blkio.throttle.io_serviced
        - Number of IOs (bios) completed to/from the disk by the group (as
          seen by the throttling policy). These are further divided by the
          type of operation - read or write, sync or async. First two fields
          specify the major and minor number of the device, third field
          specifies the operation type and the fourth field specifies the
          number of IOs.

          blkio.io_serviced does accounting as seen by CFQ and counts are in
          number of requests (struct request). On the other hand,
          blkio.throttle.io_serviced counts the number of IOs in terms of
          the number of bios as seen by the throttling policy. These bios
          can later be merged by the elevator and the total number of
          requests completed can be smaller.

- blkio.throttle.io_service_bytes
        - Number of bytes transferred to/from the disk by the group. These
          are further divided by the type of operation - read or write, sync
          or async. First two fields specify the major and minor number of
          the device, third field specifies the operation type and the
          fourth field specifies the number of bytes.

          These numbers should roughly be the same as blkio.io_service_bytes
          as updated by CFQ. The difference between the two is that
          blkio.io_service_bytes will not be updated if CFQ is not operating
          on the request queue.
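
As a quick way to see this difference, one might compare the two files side
by side (paths assume the mount point used earlier in this document):

        cat /cgroup/blkio/blkio.throttle.io_service_bytes
        cat /cgroup/blkio/blkio.io_service_bytes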

Common files among various policies
-----------------------------------
- blkio.reset_stats
        - Writing an int to this file will result in resetting all the stats
          for that cgroup.
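
For example (assuming the /cgroup/blkio mount point used above; per the
description, any integer works):

        echo 1 > /cgroup/blkio/blkio.reset_stats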

CFQ sysfs tunable
=================
/sys/block/<disk>/queue/iosched/group_isolation
-----------------------------------------------

If group_isolation=1, it provides stronger isolation between groups at the
expense of throughput. By default group_isolation is 0. In general that
means that if group_isolation=0, expect fairness for sequential workloads
only. Set group_isolation=1 to see fairness for random IO workloads also.

Generally CFQ will put a random seeky workload in the sync-noidle category.
CFQ will disable idling on these queues and it does collective idling on a
group of such queues. Generally these are slow moving queues and if there is
a sync-noidle service tree in each group, that group gets exclusive access
to the disk for a certain period. That means it will bring the throughput
down if the group does not have enough IO to drive deeper queue depths and
utilize disk capacity to the fullest in the slice allocated to it. But the
flip side is that even a random reader should get better latencies and
overall throughput if there are lots of sequential readers/sync-idle
workloads running in the system.

If group_isolation=0, then CFQ automatically moves all the random seeky
queues to the root group. That means there will be no service
differentiation for that kind of workload. This leads to better throughput
as we do collective idling on the root sync-noidle tree.

By default one should run with group_isolation=0. If that is not sufficient
and one wants stronger isolation between groups, then set group_isolation=1,
but this will come at the cost of reduced throughput.
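
For example (sdb is a placeholder for the disk in question):

        echo 1 > /sys/block/sdb/queue/iosched/group_isolation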

/sys/block/<disk>/queue/iosched/slice_idle
------------------------------------------
On faster hardware CFQ can be slow, especially with sequential workloads.
This happens because CFQ idles on a single queue and a single queue might
not drive deep enough request queue depths to keep the storage busy. In such
scenarios one can try setting slice_idle=0; that switches CFQ to IOPS
(IO operations per second) mode on NCQ supporting hardware.

That means CFQ will not idle between cfq queues of a cfq group and hence be
able to drive higher queue depths and achieve better throughput. That also
means that cfq provides fairness among groups in terms of IOPS and not in
terms of disk time.
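
For example (again using sdb as a placeholder disk):

        echo 0 > /sys/block/sdb/queue/iosched/slice_idle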

/sys/block/<disk>/queue/iosched/group_idle
------------------------------------------
If one disables idling on individual cfq queues and cfq service trees by
setting slice_idle=0, group_idle kicks in. That means CFQ will still idle
on the group in an attempt to provide fairness among groups.

By default group_idle is the same as slice_idle and does not do anything if
slice_idle is enabled.

One can experience an overall throughput drop if one has created multiple
groups and put applications in those groups which are not driving enough
IO to keep the disk busy. In that case set group_idle=0, and CFQ will not
idle on individual groups and throughput should improve.
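
For example (sdb again being a placeholder):

        echo 0 > /sys/block/sdb/queue/iosched/group_idle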

What works
==========
- Currently only sync IO queues are supported. All the buffered writes are
  still system wide and not per group. Hence we will not see service
  differentiation for buffered writes between groups.