Guidance for writing policies
=============================

Try to keep transactionality out of it. The core is careful to
avoid asking about anything that is migrating. This is a pain, but
makes it easier to write the policies.

Mappings are loaded into the policy at construction time.

Every bio that is mapped by the target is referred to the policy.
The policy can return a simple HIT or MISS or issue a migration.
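
To make the division of labour concrete, the decision a policy hands
back for each bio can be pictured roughly as below. This is a
simplified, illustrative sketch; the names are made up and are not the
kernel's dm-cache-policy interface.

    /* Simplified model of the per-bio decision a policy hands back to
     * the core target. Illustrative only. */

    #include <stdint.h>

    typedef uint64_t oblock_t;   /* block number on the origin device */
    typedef uint32_t cblock_t;   /* block number on the cache device  */

    enum policy_op {
        POLICY_HIT,      /* already cached: remap the bio to cblock    */
        POLICY_MISS,     /* not cached: let the bio go to the origin   */
        POLICY_MIGRATE   /* promote/demote: the core performs the copy */
    };

    struct policy_result {
        enum policy_op op;
        cblock_t cblock;         /* valid for HIT and MIGRATE          */
        oblock_t old_oblock;     /* origin block being evicted, if any */
    };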

Currently there's no way for the policy to issue background work,
e.g. to start writing back dirty blocks that are going to be evicted
soon.

Because we map bios, rather than requests, it's easy for the policy
to get fooled by many small bios. For this reason the core target
issues periodic ticks to the policy. It's suggested that the policy
doesn't update states (e.g. hit counts) for a block more than once
per tick. The core ticks by watching bios complete, and so tries
to see when the io scheduler has let the ios run.
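
A minimal sketch of the suggested once-per-tick rule follows; the
field and function names here are illustrative, not taken from the
kernel source.

    /* The suggested "count a block at most once per tick" guard. */

    #include <stdint.h>

    struct cache_entry {
        uint32_t hit_count;
        uint32_t last_tick;    /* tick in which this entry was last counted */
    };

    static uint32_t current_tick;  /* advanced by the core as bios complete */

    static void account_hit(struct cache_entry *e)
    {
        if (e->last_tick == current_tick)
            return;            /* many small bios within one tick count once */

        e->last_tick = current_tick;
        e->hit_count++;
    }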


Overview of supplied cache replacement policies
===============================================

multiqueue (mq)
---------------

This policy has been deprecated in favor of the smq policy (see below).

The multiqueue policy has three sets of 16 queues: one set for entries
waiting for the cache and another two for those in the cache (a set for
clean entries and a set for dirty entries).

Cache entries in the queues are aged based on logical time. Entry into
the cache is based on variable thresholds and queue selection is based
on hit count on entry. The policy aims to take different cache miss
costs into account and to adjust to varying load patterns automatically.
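
A rough picture of the structure just described, three sets of sixteen
queues with entries aged by a logical clock, might look like the
following. This is illustrative only; these are not the kernel's
actual definitions.

    /* Three sets of 16 queues, entries aged by a logical clock. */

    #include <stdint.h>

    #define NR_QUEUE_LEVELS 16

    struct entry {
        struct entry *next;
        uint64_t oblock;      /* origin block                 */
        unsigned hit_count;   /* used to pick a queue level   */
        unsigned tick;        /* logical time of the last hit */
    };

    struct queue {
        struct entry *levels[NR_QUEUE_LEVELS];
    };

    struct mq_policy {
        struct queue pre_cache;    /* entries waiting to enter the cache */
        struct queue cache_clean;  /* cached, clean entries              */
        struct queue cache_dirty;  /* cached, dirty entries              */
    };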

Message and constructor argument pairs are:
    'sequential_threshold <#nr_sequential_ios>'
    'random_threshold <#nr_random_ios>'
    'read_promote_adjustment <value>'
    'write_promote_adjustment <value>'
    'discard_promote_adjustment <value>'

The sequential threshold indicates the number of contiguous I/Os
required before a stream is treated as sequential. Once a stream is
considered sequential it will bypass the cache. The random threshold
is the number of intervening non-contiguous I/Os that must be seen
before the stream is treated as random again.

The sequential and random thresholds default to 512 and 4 respectively.

Large, sequential I/Os are probably better left on the origin device
since spindles tend to have good sequential I/O bandwidth. The
io_tracker counts contiguous I/Os to try to spot when the I/O is in one
of these sequential modes. But there are use-cases for wanting to
promote sequential blocks to the cache (e.g. fast application startup).
If the sequential threshold is set to 0 the sequential I/O detection is
disabled and sequential I/O will no longer implicitly bypass the cache.
Setting the random threshold to 0 does _not_ disable the random I/O
stream detection.
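
A rough model of this detection logic, using the default thresholds of
512 and 4, is sketched below. It is illustrative only, not the
kernel's io_tracker implementation.

    /* Contiguity counting with the default thresholds: 512 contiguous
     * I/Os mark a stream sequential, 4 non-contiguous ones reset it. */

    #include <stdbool.h>
    #include <stdint.h>

    struct io_tracker {
        uint64_t next_sector;    /* where a contiguous stream would continue */
        unsigned nr_seq;         /* contiguous I/Os seen so far              */
        unsigned nr_rand;        /* intervening non-contiguous I/Os          */
        unsigned seq_threshold;  /* default 512; 0 disables the detection    */
        unsigned rand_threshold; /* default 4                                */
    };

    static bool stream_is_sequential(struct io_tracker *t)
    {
        return t->seq_threshold && t->nr_seq >= t->seq_threshold;
    }

    static void io_seen(struct io_tracker *t, uint64_t sector,
                        unsigned nr_sectors)
    {
        if (sector == t->next_sector) {
            t->nr_seq++;
            t->nr_rand = 0;
            t->next_sector = sector + nr_sectors;
        } else if (++t->nr_rand >= t->rand_threshold) {
            /* too many intervening random I/Os: treat as random again */
            t->nr_seq = 0;
            t->nr_rand = 0;
            t->next_sector = sector + nr_sectors;
        }
    }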

Internally the mq policy determines a promotion threshold. If the hit
count of a block not in the cache goes above this threshold it gets
promoted to the cache. The read, write and discard promote adjustment
tunables allow you to tweak the promotion threshold by adding a small
value based on the io type. They default to 4, 8 and 1 respectively.
If you're trying to quickly warm a new cache device you may wish to
reduce these to encourage promotion. Remember to switch them back to
their defaults after the cache fills though.
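
Putting the above together, the promotion test amounts to something
like the following sketch. This is a simplification; the exact
arithmetic in the kernel may differ.

    /* The mq promotion test: a block outside the cache is promoted once
     * its hit count reaches the internal threshold plus a per-io-type
     * adjustment (defaults: read 4, write 8, discard 1). */

    #include <stdbool.h>

    enum io_type { IO_READ, IO_WRITE, IO_DISCARD };

    struct mq_tunables {
        unsigned read_promote_adjustment;     /* default 4 */
        unsigned write_promote_adjustment;    /* default 8 */
        unsigned discard_promote_adjustment;  /* default 1 */
    };

    static unsigned adjustment(const struct mq_tunables *t, enum io_type type)
    {
        switch (type) {
        case IO_WRITE:   return t->write_promote_adjustment;
        case IO_DISCARD: return t->discard_promote_adjustment;
        default:         return t->read_promote_adjustment;
        }
    }

    static bool should_promote(const struct mq_tunables *t, enum io_type type,
                               unsigned hit_count, unsigned promote_threshold)
    {
        /* Lowering an adjustment lowers the bar, encouraging promotion
         * (useful when warming a fresh cache device). */
        return hit_count >= promote_threshold + adjustment(t, type);
    }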

Stochastic multiqueue (smq)
---------------------------

This policy is the default.

The stochastic multi-queue (smq) policy addresses some of the problems
with the multiqueue (mq) policy.

The smq policy (vs mq) offers the promise of less memory utilization,
improved performance and increased adaptability in the face of changing
workloads. SMQ also does not have any cumbersome tuning knobs.

Users may switch from "mq" to "smq" simply by appropriately reloading a
DM table that is using the cache target. Doing so will cause all of the
mq policy's hints to be dropped. Also, performance of the cache may
degrade slightly until smq recalculates the origin device's hotspots
that should be cached.

Memory usage:
The mq policy uses a lot of memory; 88 bytes per cache block on a 64
bit machine.
SMQ uses 28-bit indexes to implement its data structures rather than
pointers. It avoids storing an explicit hit count for each block. It
has a 'hotspot' queue, rather than a pre-cache, which uses a quarter of
the entries (each hotspot block covers a larger area than a single
cache block).
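
An index-based entry of the kind described might be laid out roughly
as below. The exact field layout is a guess, but it shows why 28-bit
indexes are so much smaller than pointers.

    /* Entries refer to one another by 28-bit array index instead of by
     * pointer, so the list links plus metadata fit in two 32-bit words
     * (2^28 indexes is plenty of cache blocks). Field layout is a guess. */

    #include <stdint.h>

    struct smq_entry {
        unsigned next  : 28;   /* index of the next entry in its list  */
        unsigned level : 4;    /* multiqueue level the entry sits in   */
        unsigned prev  : 28;   /* index of the previous entry          */
        unsigned dirty : 1;
        unsigned spare : 3;
        uint64_t oblock;       /* origin block this cache entry maps   */
    };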

All this means smq uses ~25 bytes per cache block. Still a lot of
memory, but a substantial improvement nonetheless.

Level balancing:
MQ places entries in different levels of the multiqueue structures
based on their hit count (~ln(hit count)). This means the bottom
levels generally have the most entries, and the top ones have very
few. Having unbalanced levels like this reduces the efficacy of the
multiqueue.

SMQ does not maintain a hit count; instead it swaps hit entries with
the least recently used entry from the level above. The overall
ordering is a side effect of this stochastic process. With this
scheme we can decide how many entries occupy each multiqueue level,
resulting in better promotion/demotion decisions.
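
The swap can be pictured like this; it is illustrative only, since the
kernel links entries with indexes rather than pointers and does the
real list surgery.

    /* On a hit, swap the hit entry with the least-recently-used entry of
     * the level above rather than bumping a counter; level populations
     * then stay fixed by construction. */

    #define NR_CACHE_LEVELS 64

    struct q_entry {
        unsigned level;
    };

    struct level_queue {
        struct q_entry *lru[NR_CACHE_LEVELS];   /* LRU entry of each level */
    };

    static void requeue_on_hit(struct level_queue *q, struct q_entry *e)
    {
        unsigned above = e->level + 1;
        struct q_entry *victim;

        if (above >= NR_CACHE_LEVELS)
            return;                      /* already at the top level */

        victim = q->lru[above];
        if (!victim)
            return;

        victim->level = e->level;        /* demote the least recently used */
        e->level = above;                /* promote the entry that was hit */
        /* relinking within each level's LRU list elided */
    }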

Adaptability:
The MQ policy maintains a hit count for each cache block. For a
different block to get promoted to the cache its hit count has to
exceed the lowest currently in the cache. This means it can take a
long time for the cache to adapt between varying IO patterns.
Periodically degrading the hit counts could help with this, but I
haven't found a nice general solution.

SMQ doesn't maintain hit counts, so a lot of this problem just goes
away. In addition it tracks performance of the hotspot queue, which
is used to decide which blocks to promote. If the hotspot queue is
performing badly then it starts moving entries more quickly between
levels. This lets it adapt to new IO patterns very quickly.

Performance:
Testing SMQ shows substantially better performance than MQ.

cleaner
-------

The cleaner writes back all dirty blocks in a cache to decommission it.

Examples
========

The syntax for a table is:
    cache <metadata dev> <cache dev> <origin dev> <block size>
    <#feature_args> [<feature arg>]*
    <policy> <#policy_args> [<policy arg>]*

The syntax to send a message using the dmsetup command is:
    dmsetup message <mapped device> 0 sequential_threshold 1024
    dmsetup message <mapped device> 0 random_threshold 8

Using dmsetup:
    dmsetup create blah --table "0 268435456 cache /dev/sdb /dev/sdc \
        /dev/sdd 512 0 mq 4 sequential_threshold 1024 random_threshold 8"
    creates a 128GB mapped device named 'blah' with the
    sequential threshold set to 1024 and the random_threshold set to 8.