Deadline IO scheduler tunables
==============================

This little file attempts to document how the deadline io scheduler works.
In particular, it will clarify the meaning of the exposed tunables that may be
of interest to power users.

Selecting IO schedulers
-----------------------
Refer to Documentation/block/switching-sched.txt for information on
selecting an io scheduler on a per-device basis.
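
For example, with this scheduler available, it can be selected for a single
device (sda is used here purely as an illustration) roughly like so:

	cat /sys/block/sda/queue/scheduler
	echo deadline > /sys/block/sda/queue/scheduler

Once deadline is the active scheduler for a device, its tunables show up
under /sys/block/sda/queue/iosched/ as ordinary sysfs files; the examples in
the sections below assume that layout.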


********************************************************************************


read_expire (in ms)
-----------

The goal of the deadline io scheduler is to attempt to guarantee a start
service time for a request. As we focus mainly on read latencies, this is
tunable. When a read request first enters the io scheduler, it is assigned
a deadline that is the current time + the read_expire value in units of
milliseconds.
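
As a sketch (the device name and value are illustrative, not defaults), the
current setting can be read back and tightened to 100 milliseconds with:

	cat /sys/block/sda/queue/iosched/read_expire
	echo 100 > /sys/block/sda/queue/iosched/read_expire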


write_expire (in ms)
-----------

Similar to read_expire mentioned above, but for writes.
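
It is set the same way, e.g. relaxing write deadlines to 10 seconds
(an illustrative value, not a recommendation):

	echo 10000 > /sys/block/sda/queue/iosched/write_expire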


fifo_batch (number of requests)
----------

Requests are grouped into ``batches'' of a particular data direction (read or
write) which are serviced in increasing sector order. To limit extra seeking,
deadline expiries are only checked between batches. fifo_batch controls the
maximum number of requests per batch.

This parameter tunes the balance between per-request latency and aggregate
throughput. When low latency is the primary concern, smaller is better (where
a value of 1 yields first-come first-served behaviour). Increasing fifo_batch
generally improves throughput, at the cost of latency variation.
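
As an illustration (both values are arbitrary, not recommendations), a
latency-sensitive machine might run with a batch of 1 while a
throughput-oriented one might use a larger batch:

	echo 1 > /sys/block/sda/queue/iosched/fifo_batch
	echo 32 > /sys/block/sda/queue/iosched/fifo_batch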


writes_starved (number of dispatches)
--------------

When we have to move requests from the io scheduler queue to the block
device dispatch queue, we always give a preference to reads. However, we
don't want to starve writes indefinitely either. So writes_starved controls
how many times we give preference to reads over writes. When that has been
done writes_starved number of times, we dispatch some writes based on the
same criteria as reads.
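
For instance, to let reads win four rounds of dispatching before pending
writes get a turn (the value is purely illustrative):

	echo 4 > /sys/block/sda/queue/iosched/writes_starved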


front_merges (bool)
------------

Sometimes it happens that a request enters the io scheduler that is contiguous
with a request that is already on the queue. Either it fits in the back of that
request, or it fits at the front. That is called either a back merge candidate
or a front merge candidate. Due to the way files are typically laid out,
back merges are much more common than front merges. For some work loads, you
may even know that it is a waste of time to spend any time attempting to
front merge requests. Setting front_merges to 0 disables this functionality.
Front merges may still occur due to the cached last_merge hint, but since
that comes at basically 0 cost we leave that on. We simply disable the
rbtree front sector lookup when the io scheduler merge function is called.
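
For example, to turn off front merge attempts on sda:

	echo 0 > /sys/block/sda/queue/iosched/front_merges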


Nov 11 2002, Jens Axboe <jens.axboe@oracle.com>