				Block IO Controller
				===================
Overview
========
cgroup subsys "blkio" implements the block IO controller. There seems to be
a need for various kinds of IO control policies (like proportional BW, max BW)
both at leaf nodes as well as at intermediate nodes in a storage hierarchy.
The plan is to use the same cgroup based management interface for the blkio
controller and, based on user options, switch IO policies in the background.

Currently two IO control policies are implemented. The first one is
proportional weight time based division of disk policy. It is implemented in
CFQ. Hence this policy takes effect only on leaf nodes when CFQ is being used.
The second one is throttling policy which can be used to specify upper IO rate
limits on devices. This policy is implemented in the generic block layer and
can be used on leaf nodes as well as on higher level logical devices like
device mapper.

HOWTO
=====
Proportional Weight division of bandwidth
-----------------------------------------
You can do a very simple test of running two dd threads in two different
cgroups. Here is what you can do.

- Enable Block IO controller
	CONFIG_BLK_CGROUP=y

- Enable group scheduling in CFQ
	CONFIG_CFQ_GROUP_IOSCHED=y

- Compile and boot into kernel and mount IO controller (blkio); see
  cgroups.txt, Why are cgroups needed?.

	mount -t tmpfs cgroup_root /sys/fs/cgroup
	mkdir /sys/fs/cgroup/blkio
	mount -t cgroup -o blkio none /sys/fs/cgroup/blkio

- Create two cgroups
	mkdir -p /sys/fs/cgroup/blkio/test1/ /sys/fs/cgroup/blkio/test2

- Set weights of group test1 and test2
	echo 1000 > /sys/fs/cgroup/blkio/test1/blkio.weight
	echo 500 > /sys/fs/cgroup/blkio/test2/blkio.weight

- Create two files of the same size (say 512MB each) on the same disk
  (file1, file2) and launch two dd threads in different cgroups to read
  those files.

	sync
	echo 3 > /proc/sys/vm/drop_caches

	dd if=/mnt/sdb/zerofile1 of=/dev/null &
	echo $! > /sys/fs/cgroup/blkio/test1/tasks
	cat /sys/fs/cgroup/blkio/test1/tasks

	dd if=/mnt/sdb/zerofile2 of=/dev/null &
	echo $! > /sys/fs/cgroup/blkio/test2/tasks
	cat /sys/fs/cgroup/blkio/test2/tasks

- At the macro level, the first dd should finish first. To get more precise
  data, keep looking (with the help of a script, such as the sketch below)
  at the blkio.time and blkio.sectors files of both the test1 and test2
  groups. This will tell how much disk time (in milliseconds) each group got
  and how many sectors each group dispatched to the disk. We provide fairness
  in terms of disk time, so ideally blkio.time of the cgroups should be in
  proportion to the weight.
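
  A minimal sampling loop, as a sketch (it assumes the cgroup mount point
  used above; adjust the paths if yours differs):

	while sleep 1; do
		# Print filename-prefixed disk time (ms) and sectors
		# dispatched for each group.
		grep . /sys/fs/cgroup/blkio/test1/blkio.time \
		       /sys/fs/cgroup/blkio/test2/blkio.time
		grep . /sys/fs/cgroup/blkio/test1/blkio.sectors \
		       /sys/fs/cgroup/blkio/test2/blkio.sectors
		echo ---
	done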

Throttling/Upper Limit policy
-----------------------------
- Enable Block IO controller
	CONFIG_BLK_CGROUP=y

- Enable throttling in block layer
	CONFIG_BLK_DEV_THROTTLING=y

- Mount blkio controller (see cgroups.txt, Why are cgroups needed?)
	mount -t cgroup -o blkio none /sys/fs/cgroup/blkio

- Specify a bandwidth rate on a particular device for the root group. The
  format for the policy is "<major>:<minor>  <bytes_per_second>".

	echo "8:16  1048576" > /sys/fs/cgroup/blkio/blkio.throttle.read_bps_device

  The above will put a limit of 1MB/second on reads happening for the root
  group on the device having major/minor number 8:16.

- Run dd to read a file and see if the rate is throttled to 1MB/s or not.

	# dd if=/mnt/common/zerofile of=/dev/null bs=4K count=1024 iflag=direct
	1024+0 records in
	1024+0 records out
	4194304 bytes (4.2 MB) copied, 4.0001 s, 1.0 MB/s

  Limits for writes can be set using the blkio.throttle.write_bps_device
  file, as shown below.
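
  For example, to cap writes on the same 8:16 device at 1MB/second as well
  (an illustrative value):

	echo "8:16  1048576" > /sys/fs/cgroup/blkio/blkio.throttle.write_bps_device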

Hierarchical Cgroups
====================
- Currently only CFQ supports hierarchical groups. For throttling, the
  cgroup interface does allow creation of hierarchical cgroups but
  internally it treats them as a flat hierarchy.

  If somebody creates a hierarchy like the following,

			root
			/  \
		     test1   test2
		       |
		     test3

  CFQ will handle the hierarchy correctly, but throttling will practically
  treat all groups as being at the same level. For details on CFQ hierarchy
  support, refer to Documentation/block/cfq-iosched.txt. Throttling will
  treat the hierarchy as if it looks like the following.

		        pivot
		     /  /   \  \
		root  test1 test2  test3

  Nesting cgroups, while allowed, isn't officially supported and blkio
  generates a warning when cgroups nest. Once throttling implements
  hierarchy support, hierarchy will be supported and the warning will
  be removed. A sketch of creating such a hierarchy follows.
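
  For reference, the example hierarchy above can be created with plain
  mkdir calls under the blkio mount point (assuming the mount point used
  earlier):

	mkdir -p /sys/fs/cgroup/blkio/test1/test3
	mkdir /sys/fs/cgroup/blkio/test2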

Various user visible config options
===================================
CONFIG_BLK_CGROUP
	- Block IO controller.

CONFIG_DEBUG_BLK_CGROUP
	- Debug help. Right now some additional stats files show up in cgroup
	  if this option is enabled.

CONFIG_CFQ_GROUP_IOSCHED
	- Enables group scheduling in CFQ. Currently only 1 level of group
	  creation is allowed.

CONFIG_BLK_DEV_THROTTLING
	- Enable block device throttling support in block layer.

Details of cgroup files
=======================
Proportional weight policy files
--------------------------------
- blkio.weight
	- Specifies per cgroup weight. This is the default weight of the
	  group on all the devices until and unless overridden by a per
	  device rule (see blkio.weight_device).
	  Currently the allowed range of weights is from 10 to 1000.

- blkio.weight_device
	- One can specify per cgroup per device rules using this interface.
	  These rules override the default value of group weight as specified
	  by blkio.weight.

	  Following is the format.

	  # echo dev_maj:dev_minor weight > blkio.weight_device
	  Configure weight=300 on /dev/sdb (8:16) in this cgroup
	  # echo 8:16 300 > blkio.weight_device
	  # cat blkio.weight_device
	  dev     weight
	  8:16    300

	  Configure weight=500 on /dev/sda (8:0) in this cgroup
	  # echo 8:0 500 > blkio.weight_device
	  # cat blkio.weight_device
	  dev     weight
	  8:0     500
	  8:16    300

	  Remove specific weight for /dev/sda in this cgroup
	  # echo 8:0 0 > blkio.weight_device
	  # cat blkio.weight_device
	  dev     weight
	  8:16    300

- blkio.leaf_weight[_device]
	- Equivalents of blkio.weight[_device] for the purpose of
	  deciding how much weight tasks in the given cgroup have while
	  competing with the cgroup's child cgroups. For details,
	  please refer to Documentation/block/cfq-iosched.txt.

- blkio.time
	- Disk time allocated to cgroup per device in milliseconds. First
	  two fields specify the major and minor number of the device and
	  third field specifies the disk time allocated to group in
	  milliseconds.

- blkio.sectors
	- Number of sectors transferred to/from disk by the group. First
	  two fields specify the major and minor number of the device and
	  third field specifies the number of sectors transferred by the
	  group to/from the device.

- blkio.io_service_bytes
	- Number of bytes transferred to/from the disk by the group. These
	  are further divided by the type of operation - read or write, sync
	  or async. First two fields specify the major and minor number of the
	  device, third field specifies the operation type and the fourth field
	  specifies the number of bytes.
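
	  One line is kept per device and operation type; illustrative output
	  (made-up values) might look like:

	  # cat blkio.io_service_bytes
	  8:16 Read 1310720
	  8:16 Write 0
	  8:16 Sync 1310720
	  8:16 Async 0
	  8:16 Total 1310720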

- blkio.io_serviced
	- Number of IOs completed to/from the disk by the group. These
	  are further divided by the type of operation - read or write, sync
	  or async. First two fields specify the major and minor number of the
	  device, third field specifies the operation type and the fourth field
	  specifies the number of IOs.

- blkio.io_service_time
	- Total amount of time between request dispatch and request completion
	  for the IOs done by this cgroup. This is in nanoseconds to make it
	  meaningful for flash devices too. For devices with a queue depth of 1,
	  this time represents the actual service time. When queue_depth > 1,
	  that is no longer true as requests may be served out of order. This
	  may cause the service time for a given IO to include the service time
	  of multiple IOs when served out of order, which may result in total
	  io_service_time > actual time elapsed. This time is further divided by
	  the type of operation - read or write, sync or async. First two fields
	  specify the major and minor number of the device, third field
	  specifies the operation type and the fourth field specifies the
	  io_service_time in ns.

- blkio.io_wait_time
	- Total amount of time the IOs for this cgroup spent waiting in the
	  scheduler queues for service. This can be greater than the total time
	  elapsed since it is cumulative io_wait_time for all IOs. It is not a
	  measure of total time the cgroup spent waiting but rather a measure of
	  the wait_time of its individual IOs. For devices with queue_depth > 1,
	  this metric does not include the time spent waiting for service once
	  the IO is dispatched to the device but till it actually gets serviced
	  (there might be a time lag here due to re-ordering of requests by the
	  device). This is in nanoseconds to make it meaningful for flash
	  devices too. This time is further divided by the type of operation -
	  read or write, sync or async. First two fields specify the major and
	  minor number of the device, third field specifies the operation type
	  and the fourth field specifies the io_wait_time in ns.

- blkio.io_merged
	- Total number of bios/requests merged into requests belonging to this
	  cgroup. This is further divided by the type of operation - read or
	  write, sync or async.

- blkio.io_queued
	- Total number of requests queued up at any given instant for this
	  cgroup. This is further divided by the type of operation - read or
	  write, sync or async.

- blkio.avg_queue_size
	- Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
	  The average queue size for this cgroup over the entire time of this
	  cgroup's existence. Queue size samples are taken each time one of the
	  queues of this cgroup gets a timeslice.

- blkio.group_wait_time
	- Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
	  This is the amount of time the cgroup had to wait since it became busy
	  (i.e., went from 0 to 1 request queued) to get a timeslice for one of
	  its queues. This is different from the io_wait_time, which is the
	  cumulative total of the amount of time spent by each IO in that cgroup
	  waiting in the scheduler queue. This is in nanoseconds. If this is
	  read when the cgroup is in a waiting (for timeslice) state, the stat
	  will only report the group_wait_time accumulated till the last time it
	  got a timeslice and will not include the current delta.

- blkio.empty_time
	- Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
	  This is the amount of time a cgroup spends without any pending
	  requests when not being served, i.e., it does not include any time
	  spent idling for one of the queues of the cgroup. This is in
	  nanoseconds. If this is read when the cgroup is in an empty state,
	  the stat will only report the empty_time accumulated till the last
	  time it had a pending request and will not include the current delta.

- blkio.idle_time
	- Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
	  This is the amount of time spent by the IO scheduler idling for a
	  given cgroup in anticipation of a better request than the existing
	  ones from other queues/cgroups. This is in nanoseconds. If this is
	  read when the cgroup is in an idling state, the stat will only report
	  the idle_time accumulated till the last idle period and will not
	  include the current delta.

- blkio.dequeue
	- Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y. This
	  gives the statistics about how many times a group was dequeued
	  from the service tree of the device. First two fields specify the
	  major and minor number of the device and the third field specifies
	  the number of times a group was dequeued from a particular device.

- blkio.*_recursive
	- Recursive version of various stats. These files show the
	  same information as their non-recursive counterparts but
	  include stats from all the descendant cgroups.

Throttling/Upper limit policy files
-----------------------------------
- blkio.throttle.read_bps_device
	- Specifies upper limit on READ rate from the device. IO rate is
	  specified in bytes per second. Rules are per device. Following is
	  the format.

	  echo "<major>:<minor>  <rate_bytes_per_second>" > /cgrp/blkio.throttle.read_bps_device

- blkio.throttle.write_bps_device
	- Specifies upper limit on WRITE rate to the device. IO rate is
	  specified in bytes per second. Rules are per device. Following is
	  the format.

	  echo "<major>:<minor>  <rate_bytes_per_second>" > /cgrp/blkio.throttle.write_bps_device

- blkio.throttle.read_iops_device
	- Specifies upper limit on READ rate from the device. IO rate is
	  specified in IOs per second. Rules are per device. Following is
	  the format.

	  echo "<major>:<minor>  <rate_io_per_second>" > /cgrp/blkio.throttle.read_iops_device

- blkio.throttle.write_iops_device
	- Specifies upper limit on WRITE rate to the device. IO rate is
	  specified in IOs per second. Rules are per device. Following is
	  the format.

	  echo "<major>:<minor>  <rate_io_per_second>" > /cgrp/blkio.throttle.write_iops_device

Note: If both BW and IOPS rules are specified for a device, then IO is
      subjected to both the constraints.
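
      For example, to subject reads on device 8:16 to both a 1MB/second
      bandwidth limit and a 100 IOs/second limit (illustrative values):

	echo "8:16  1048576" > /cgrp/blkio.throttle.read_bps_device
	echo "8:16  100" > /cgrp/blkio.throttle.read_iops_device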

- blkio.throttle.io_serviced
	- Number of IOs (bio) completed to/from the disk by the group (as
	  seen by the throttling policy). These are further divided by the type
	  of operation - read or write, sync or async. First two fields specify
	  the major and minor number of the device, third field specifies the
	  operation type and the fourth field specifies the number of IOs.

	  blkio.io_serviced does accounting as seen by CFQ and counts are in
	  number of requests (struct request). On the other hand,
	  blkio.throttle.io_serviced counts the number of IOs in terms of the
	  number of bios as seen by the throttling policy. These bios can later
	  be merged by the elevator, so the total number of requests completed
	  can be smaller.

- blkio.throttle.io_service_bytes
	- Number of bytes transferred to/from the disk by the group. These
	  are further divided by the type of operation - read or write, sync
	  or async. First two fields specify the major and minor number of the
	  device, third field specifies the operation type and the fourth field
	  specifies the number of bytes.

	  These numbers should be roughly the same as blkio.io_service_bytes
	  as updated by CFQ. The difference between the two is that
	  blkio.io_service_bytes will not be updated if CFQ is not operating
	  on the request queue.

Common files among various policies
-----------------------------------
- blkio.reset_stats
	- Writing an int to this file will result in resetting all the stats
	  for that cgroup.
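
	  For example (any integer should do; a hypothetical invocation):

	  # echo 1 > /sys/fs/cgroup/blkio/test1/blkio.reset_stats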

CFQ sysfs tunable
=================
/sys/block/<disk>/queue/iosched/slice_idle
------------------------------------------
On faster hardware CFQ can be slow, especially with sequential workloads.
This happens because CFQ idles on a single queue, and a single queue might
not drive deep enough request queue depths to keep the storage busy. In such
scenarios one can try setting slice_idle=0, which switches CFQ to IOPS
(IO operations per second) mode on NCQ supporting hardware.

That means CFQ will not idle between cfq queues of a cfq group and hence be
able to drive higher queue depths and achieve better throughput. That also
means that cfq provides fairness among groups in terms of IOPS and not in
terms of disk time.
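
For example, assuming the disk is sdb:

	echo 0 > /sys/block/sdb/queue/iosched/slice_idle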

/sys/block/<disk>/queue/iosched/group_idle
------------------------------------------
If one disables idling on individual cfq queues and cfq service trees by
setting slice_idle=0, group_idle kicks in. That means CFQ will still idle
on the group in an attempt to provide fairness among groups.

By default group_idle is the same as slice_idle and does not do anything if
slice_idle is enabled.

One can experience an overall throughput drop if one has created multiple
groups and put applications in those groups which are not driving enough
IO to keep the disk busy. In that case set group_idle=0, and CFQ will not
idle on individual groups and throughput should improve.
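
In that scenario, again assuming the disk is sdb:

	echo 0 > /sys/block/sdb/queue/iosched/group_idle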

What works
==========
- Currently only sync IO queues are supported. All the buffered writes are
  still system wide and not per group. Hence we will not see service
  differentiation for buffered writes between groups.