CFQ (Complete Fairness Queueing)
================================

The main aim of the CFQ scheduler is to provide fair allocation of disk
I/O bandwidth among all processes that request I/O operations.

CFQ maintains a per-process queue for processes that issue synchronous I/O
requests. Asynchronous requests from all processes are batched together
according to the issuing process's I/O priority.

CFQ ioscheduler tunables
========================

slice_idle
----------
This specifies how long CFQ should idle for the next request on certain cfq
queues (for sequential workloads) and service trees (for random workloads)
before the queue is expired and CFQ selects the next queue to dispatch from.

By default slice_idle is a non-zero value, which means that by default CFQ
idles on queues/service trees. This can be very helpful on highly seeky
media like single-spindle SATA/SAS disks, where idling can cut down the
overall number of seeks and improve throughput.

Setting slice_idle to 0 removes all idling at the queue/service-tree level
and should give improved overall throughput on faster storage devices like
multiple SATA/SAS disks in a hardware RAID configuration. The downside is
that the isolation provided from WRITES is reduced and the notion of IO
priority becomes weaker.

So depending on storage and workload, it might be useful to set
slice_idle=0. In general I think for SATA/SAS disks and software RAID of
SATA/SAS disks, keeping slice_idle enabled should be useful. For any
configuration where there are multiple spindles behind a single LUN
(host-based hardware RAID controller or storage arrays), setting
slice_idle=0 might result in better throughput and acceptable latencies.

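The guidance above can be condensed into a toy heuristic. This is an
illustrative sketch only, not kernel code; the function name and parameters
are invented for the example, and 8 is assumed to be the stock slice_idle
default in milliseconds.

```python
def suggested_slice_idle(rotational, multi_spindle_lun):
    """Toy heuristic condensing the tuning advice above (illustrative only).

    rotational        -- True for spinning SATA/SAS media
    multi_spindle_lun -- True when many spindles sit behind one LUN
                         (hardware RAID controller or storage array)
    """
    if not rotational or multi_spindle_lun:
        return 0   # let the device's internal parallelism do the work
    return 8       # assumed default: idle up to 8ms on seeky single spindles
```
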
back_seek_max
-------------
This specifies, in Kbytes, the maximum "distance" for backward seeking.
The distance is the amount of space from the current head location to the
sectors that lie backward from it.

This parameter allows the scheduler to anticipate requests in the
"backward" direction and consider them as the "next" request if they are
within this distance of the current head location.

back_seek_penalty
-----------------
This parameter is used to compute the cost of backward seeking. If the
backward distance of a request is just 1/back_seek_penalty of the distance
to a "front" request, then the seek costs of the two requests are
considered equivalent.

The scheduler will then not bias toward one or the other request (otherwise
the scheduler biases toward the front request). The default value of
back_seek_penalty is 2.

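The interaction of the two back-seek tunables can be modelled roughly as
follows. This is an illustrative sketch, not kernel code; the function name
is invented, and the 16384 KB default for back_seek_max is an assumption.

```python
def prefers_backward(back_kb, front_kb, back_seek_max=16384,
                     back_seek_penalty=2):
    """Rough model of CFQ's backward-seek decision (illustrative only).

    back_kb  -- distance to the candidate request behind the head
    front_kb -- distance to the candidate request in front of the head
    """
    if back_kb > back_seek_max:
        return False            # behind-the-head candidate is out of range
    # A backward seek is charged back_seek_penalty times its distance,
    # so at back_kb == front_kb / back_seek_penalty the costs break even.
    return back_kb * back_seek_penalty < front_kb
```
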
fifo_expire_async
-----------------
This parameter sets the timeout for asynchronous requests. The default
value is 248ms.

fifo_expire_sync
----------------
This parameter sets the timeout for synchronous requests. The default
value is 124ms. To favor synchronous requests over asynchronous ones,
decrease this value relative to fifo_expire_async.

group_idle
----------
This parameter forces idling at the CFQ group level instead of the CFQ
queue level. It was introduced after a bottleneck was observed on higher
end storage, due to idling on a sequential queue and allowing dispatch
from only a single queue. The idea behind this parameter is that it can be
run with slice_idle=0 and group_idle=8, so that idling does not happen on
individual queues in the group but happens overall on the group, and thus
still keeps the IO controller working.

Not idling on individual queues in the group allows dispatching requests
from multiple queues in the group at the same time and achieves higher
throughput on higher end storage.

Default value for this parameter is 8ms.

low_latency
-----------
This parameter is used to enable/disable the low latency mode of the CFQ
scheduler. If enabled, CFQ tries to recompute the slice time for each
process based on the target_latency set for the system. This favors
fairness over throughput. Disabling low latency (setting it to 0) ignores
target latency, allowing each process in the system to get a full time
slice.

By default low latency mode is enabled.

target_latency
--------------
This parameter is used to calculate the time slice for a process if CFQ's
latency mode is enabled. It ensures that sync requests have an estimated
latency. If the sequential workload is high (e.g. sequential reads), then
to meet the latency constraint throughput may decrease, because each
process has less time to issue I/O requests before the cfq queue is
switched.

Though this can be overcome by disabling low latency mode, doing so may
increase the read latency for some applications. This parameter allows
tuning target_latency through the sysfs interface, which can provide a
balance between throughput and read latency.

Default value for target_latency is 300ms.

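A rough way to picture the trade-off: with low_latency enabled, the slice
each competing queue gets is shrunk so that one full round of queues fits
within target_latency. This is a simplified sketch of that idea; the
kernel's actual scaling is more involved, and the function below is
invented purely for illustration.

```python
def scaled_slice(base_slice_ms, nr_competing, target_latency_ms=300):
    """Shrink a queue's slice so one round of all competing queues fits
    within target_latency (simplified illustration, not kernel code)."""
    if nr_competing <= 0:
        return base_slice_ms
    return min(base_slice_ms, max(1, target_latency_ms // nr_competing))
```

With the 300ms default, two competing queues keep their full 100ms slices,
while six competing queues are squeezed down to 50ms each.
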
slice_async
-----------
This parameter is the same as slice_sync but for the asynchronous queue.
The default value is 40ms.

slice_async_rq
--------------
This parameter limits the number of asynchronous requests dispatched to
the device request queue within a queue's slice time. The maximum number
of requests that are allowed to be dispatched also depends upon the IO
priority. Default value for this is 2.

slice_sync
----------
When a queue is selected for execution, the queue's IO requests are only
executed for a certain amount of time (time_slice) before switching to
another queue. This parameter is used to calculate the time slice of the
synchronous queue.

time_slice is computed using the following equation:
time_slice = slice_sync + (slice_sync/5 * (4 - prio)). To increase the
time_slice of the synchronous queue, increase the value of slice_sync.
Default value is 100ms.

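The equation above can be checked directly. This is only a worked example
of the formula as stated in the text; the function name is invented.

```python
def time_slice(prio, slice_sync=100):
    """time_slice = slice_sync + (slice_sync/5) * (4 - prio), per the text.

    Lower prio values mean higher priority and therefore longer slices.
    """
    return slice_sync + (slice_sync // 5) * (4 - prio)
```

With the 100ms default, the highest priority (prio 0) gets a 180ms slice,
the default class (prio 4) gets exactly slice_sync, and the lowest
priority (prio 7) gets 40ms.
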
quantum
-------
This specifies the maximum number of requests dispatched to the device
queue. Within a queue's time slice, a request will not be dispatched if
the number of requests in the device exceeds this parameter. This
parameter is used for synchronous requests.

In the case of storage with several disks, this setting can limit the
parallel processing of requests. Therefore, increasing the value can
improve performance, although this can cause the latency of some I/O to
increase due to the larger number of in-flight requests.

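The gating behaviour amounts to a simple check, sketched below. This is
illustrative only; the helper name is invented, and 8 is assumed to be the
stock quantum default.

```python
def may_dispatch(in_flight, quantum=8):
    """A synchronous queue may send another request to the device only
    while the device holds fewer than `quantum` of its requests
    (simplified gate, not kernel code)."""
    return in_flight < quantum
```
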
CFQ Group scheduling
====================

CFQ supports blkio cgroup and has "blkio." prefixed files in each
blkio cgroup directory. It is weight-based and there are four knobs
for configuration - weight[_device] and leaf_weight[_device].
Internal cgroup nodes (the ones with children) can also have tasks in
them, so the former two configure how much proportion the cgroup as a
whole is entitled to at its parent's level while the latter two
configure how much proportion the tasks in the cgroup have compared to
its direct children.

Another way to think about it is assuming that each internal node has
an implicit leaf child node which hosts all the tasks whose weight is
configured by leaf_weight[_device]. Let's assume a blkio hierarchy
composed of five cgroups - root, A, B, AA and AB - with the following
weights, where the names represent the hierarchy.

        weight  leaf_weight
 root :  125     125
 A    :  500     750
 B    :  250     500
 AA   :  500     500
 AB   : 1000     500

root never has a parent, making its weight meaningless. For backward
compatibility, weight is always kept in sync with leaf_weight. B, AA
and AB have no children and thus their tasks have no children cgroups
to compete with. They always get 100% of what the cgroup won at the
parent level. Considering only the weights which matter, the hierarchy
looks like the following.

           root
         /  |   \
        A   B    leaf
       500  250  125
     /  |  \
   AA   AB   leaf
  500  1000  750

If all cgroups have active IOs and are competing with each other, disk
time will be distributed like the following.

Distribution below root. The total active weight at this level is
A:500 + B:250 + root-leaf:125 = 875.

 root-leaf :  125 /  875      =~ 14%
 A         :  500 /  875      =~ 57%
 B(-leaf)  :  250 /  875      =~ 28%

A has children and further distributes its 57% among the children and
the implicit leaf node. The total active weight at this level is
AA:500 + AB:1000 + A-leaf:750 = 2250.

 A-leaf    : ( 750 / 2250) * A =~ 19%
 AA(-leaf) : ( 500 / 2250) * A =~ 12%
 AB(-leaf) : (1000 / 2250) * A =~ 25%

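The worked percentages above can be reproduced with a few lines of
arithmetic (variable names simply follow the example hierarchy):

```python
# Distribution below root (weights from the example table above).
root_total = 500 + 250 + 125          # A + B + root-leaf = 875
a_share    = 500 / root_total         # ~57% of the disk for subtree A
b_share    = 250 / root_total         # ~28%
root_leaf  = 125 / root_total         # ~14%

# Subtree A further splits its share among AA, AB and its implicit leaf.
a_total = 500 + 1000 + 750            # AA + AB + A-leaf = 2250
a_leaf  = ( 750 / a_total) * a_share  # ~19%
aa      = ( 500 / a_total) * a_share  # ~12%
ab      = (1000 / a_total) * a_share  # ~25%
```
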
CFQ IOPS Mode for group scheduling
==================================

The basic CFQ design is to provide priority-based time slices: a higher
priority process gets a bigger time slice and a lower priority process
gets a smaller one. Measuring time becomes harder if the storage is fast
and supports NCQ, where it is better to dispatch multiple requests from
multiple cfq queues into the device's request queue at a time. In such a
scenario, it is not possible to accurately measure the time consumed by a
single queue.

What is possible, though, is to measure the number of requests dispatched
from a single queue while also allowing dispatch from multiple cfq queues
at the same time. This effectively becomes fairness in terms of IOPS
(IO operations per second).

If one sets slice_idle=0 and the storage supports NCQ, CFQ internally
switches to IOPS mode and starts providing fairness in terms of the number
of requests dispatched. Note that this mode switch takes effect only for
group scheduling. For non-cgroup users nothing should change.

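Request-counting fairness can be pictured with a toy round-robin
dispatcher. This is an invented illustration of the accounting idea, not
the kernel's actual algorithm.

```python
from collections import Counter

def iops_round_robin(pending, budget):
    """Toy model of IOPS-mode fairness: hand out dispatch slots one
    request per queue per pass, so fairness is accounted in requests
    dispatched rather than time consumed (illustration only).

    pending -- mapping of queue name to number of queued requests
    budget  -- total dispatch slots available
    """
    pending = dict(pending)
    dispatched = Counter()
    while budget > 0 and any(pending.values()):
        for q in pending:
            if pending[q] > 0 and budget > 0:
                pending[q] -= 1
                dispatched[q] += 1
                budget -= 1
    return dispatched
```
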
CFQ IO scheduler Idling Theory
==============================

Idling on a queue is primarily about waiting for the next request to
arrive on the same queue after the completion of a request. In the
meantime CFQ will not dispatch requests from other cfq queues even if
requests are pending there.

The rationale behind idling is that it can cut down on the number of seeks
on rotational media. For example, if a process is doing dependent
sequential reads (the next read is issued only after completion of the
previous one), then not dispatching requests from other queues should
help, as we do not move the disk head and keep dispatching sequential IO
from one queue.

CFQ has the following service trees, and the various queues are put on
these trees.

        sync-idle       sync-noidle     async

All cfq queues doing synchronous sequential IO go on the sync-idle tree.
On this tree we idle on each queue individually.

All synchronous non-sequential queues go on the sync-noidle tree. Also,
any synchronous write request which is not marked with REQ_IDLE goes on
this service tree. On this tree we do not idle on individual queues;
instead we idle on the whole group of queues, i.e. the tree. So if there
are 4 queues waiting for IO to dispatch, we will idle only once the last
queue has dispatched its IO and there is no more IO on this service tree.

All async writes go on the async service tree. There is no idling on async
queues.

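The placement rules above can be summarised as a small classifier. This is
a simplification with invented names; the real decision in the kernel
considers more state than shown here.

```python
def service_tree(is_sync, is_sequential, req_idle=True):
    """Simplified queue placement per the rules above (not kernel code).

    req_idle -- whether the request carries the REQ_IDLE hint
    """
    if not is_sync:
        return "async"           # async writes: never idled on
    if is_sequential and req_idle:
        return "sync-idle"       # idle per queue
    return "sync-noidle"         # idle on the tree as a whole
```
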
CFQ has some optimizations for SSDs: if it detects non-rotational media
which can support a higher queue depth (multiple requests in flight at a
time), then it cuts down on idling of individual queues; all the queues
move to the sync-noidle tree and only tree idle remains. This tree idling
provides isolation from the buffered write queues on the async tree.

FAQ
===
Q1. Why idle at all on queues not marked with REQ_IDLE?

A1. We only do tree idle (all queues on the sync-noidle tree) on queues
    not marked with REQ_IDLE. This helps in providing isolation from all
    the sync-idle queues. Otherwise, in the presence of many sequential
    readers, other synchronous IO might not get its fair share of the
    disk.

    For example, say there are 10 sequential readers doing IO and they get
    100ms each. If a !REQ_IDLE request comes in, it will be scheduled
    roughly after 1 second. If after completion of the !REQ_IDLE request
    we do not idle, and after a couple of milliseconds another !REQ_IDLE
    request comes in, again it will be scheduled after 1 second. Repeat
    this and notice how a workload can lose its disk share and suffer due
    to multiple sequential readers.

    fsync can generate dependent IO where a bunch of data is written in
    the context of fsync, and later some journaling data is written.
    Journaling data comes in only after fsync has finished its IO (at
    least for ext4 that seemed to be the case). Now if one decides not to
    idle on the fsync thread due to !REQ_IDLE, then the next journaling
    write will not get scheduled for another second. A process doing small
    fsyncs will suffer badly in the presence of multiple sequential
    readers.

    Hence doing tree idling on threads using the !REQ_IDLE flag on
    requests provides isolation from multiple sequential readers while at
    the same time not idling on individual threads.

Q2. When to specify REQ_IDLE?
A2. I would think whenever one is doing synchronous writes and expecting
    more writes to be dispatched from the same context soon, one should be
    able to specify REQ_IDLE on the writes, and that should work well for
    most cases.