Deadline IO scheduler tunables
==============================

This little file attempts to document how the deadline io scheduler works.
In particular, it will clarify the meaning of the exposed tunables that may be
of interest to power users.

Selecting IO schedulers
-----------------------
Refer to Documentation/block/switching-sched.txt for information on
selecting an io scheduler on a per-device basis.
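
As a quick illustration, the active scheduler can be queried and switched at
run time through sysfs. The device name sda below is only an example, and the
list of schedulers shown depends on what is built into your kernel:

    # cat /sys/block/sda/queue/scheduler
    noop deadline [cfq]
    # echo deadline > /sys/block/sda/queue/scheduler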


********************************************************************************


read_expire (in ms)
-----------

The goal of the deadline io scheduler is to attempt to guarantee a start
service time for a request. As we focus mainly on read latencies, this
deadline is tunable. When a read request first enters the io scheduler, it is
assigned a deadline that is the current time + the read_expire value in units
of milliseconds.
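
For example, assuming the deadline scheduler is active on a device named sda
(both the device name and the values shown are only illustrative), the tunable
lives in the iosched directory in sysfs:

    # cat /sys/block/sda/queue/iosched/read_expire
    500
    # echo 250 > /sys/block/sda/queue/iosched/read_expire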


write_expire (in ms)
------------

Similar to read_expire mentioned above, but for writes.


fifo_batch (number of requests)
----------

Requests are grouped into ``batches'' of a particular data direction (read or
write) which are serviced in increasing sector order. To limit extra seeking,
deadline expiries are only checked between batches. fifo_batch controls the
maximum number of requests per batch.

This parameter tunes the balance between per-request latency and aggregate
throughput. When low latency is the primary concern, smaller is better (where
a value of 1 yields first-come first-served behaviour). Increasing fifo_batch
generally improves throughput, at the cost of latency variation.
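
As a sketch of how this is typically tuned (device name and values are
illustrative), a latency-sensitive setup might shrink the batch size towards
1, while a throughput-oriented one might raise it:

    # cat /sys/block/sda/queue/iosched/fifo_batch
    16
    # echo 1 > /sys/block/sda/queue/iosched/fifo_batch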


writes_starved (number of dispatches)
--------------

When we have to move requests from the io scheduler queue to the block
device dispatch queue, we always give a preference to reads. However, we
don't want to starve writes indefinitely either. So writes_starved controls
how many times we give preference to reads over writes. When that has been
done writes_starved number of times, we dispatch some writes based on the
same criteria as reads.
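
For example (device name and the value shown are illustrative), to let reads
be preferred for more dispatch rounds before writes are serviced:

    # cat /sys/block/sda/queue/iosched/writes_starved
    2
    # echo 4 > /sys/block/sda/queue/iosched/writes_starved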


front_merges (bool)
------------

Sometimes it happens that a request enters the io scheduler that is contiguous
with a request that is already on the queue. Either it fits in the back of that
request, or it fits at the front. That is called either a back merge candidate
or a front merge candidate. Due to the way files are typically laid out,
back merges are much more common than front merges. For some workloads, you
may even know that it is a waste of time to attempt to front merge requests.
Setting front_merges to 0 disables this functionality. Front merges may still
occur due to the cached last_merge hint, but since that comes at basically
zero cost we leave it on. We simply disable the rbtree front sector lookup
when the io scheduler merge function is called.
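
For example (device name illustrative), front merge attempts can be switched
off and back on again at run time:

    # echo 0 > /sys/block/sda/queue/iosched/front_merges
    # echo 1 > /sys/block/sda/queue/iosched/front_merges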


Nov 11 2002, Jens Axboe <jens.axboe@oracle.com>