/*
   lru_cache.c

   This file is part of DRBD by Philipp Reisner and Lars Ellenberg.

   Copyright (C) 2003-2008, LINBIT Information Technologies GmbH.
   Copyright (C) 2003-2008, Philipp Reisner <philipp.reisner@linbit.com>.
   Copyright (C) 2003-2008, Lars Ellenberg <lars.ellenberg@linbit.com>.

   drbd is free software; you can redistribute it and/or modify
   it under the terms of the GNU General Public License as published by
   the Free Software Foundation; either version 2, or (at your option)
   any later version.

   drbd is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
   GNU General Public License for more details.

   You should have received a copy of the GNU General Public License
   along with drbd; see the file COPYING.  If not, write to
   the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA.

 */

#ifndef LRU_CACHE_H
#define LRU_CACHE_H

#include <linux/list.h>
#include <linux/slab.h>
#include <linux/bitops.h>
#include <linux/string.h> /* for memset */
#include <linux/seq_file.h>

/*
This header file (and its .c file; see there for kernel-doc of the functions)
defines a helper framework to easily keep track of index:label associations,
and of changes to an "active set" of objects, as well as pending transactions
to persistently record those changes.

We use an LRU policy if it is necessary to "cool down" a region currently in
the active set before we can "heat" a previously unused region.

Because of this latter property, it is called "lru_cache".
As it actually Tracks Objects in an Active SeT, we could also call it
toast (incidentally, that is what may happen to the data on the
backend storage upon the next resync, if we don't get it right).

What for?

We replicate IO (more or less synchronously) to local and remote disk.

For crash recovery after replication node failure,
we need to resync all regions that have been the target of in-flight WRITE IO
(in-use, or "hot", regions), as we don't know whether or not those WRITEs
have made it to stable storage.

To avoid a "full resync", we need to persistently track these regions.

This is known as a "write intent log", and can be implemented as an on-disk
(coarse or fine grained) bitmap, or other meta data.

To avoid the overhead of frequent extra writes to this meta data area,
usually the condition is softened to regions that _may_ have been the target
of in-flight WRITE IO, e.g. by only lazily clearing the on-disk write-intent
bitmap, trading frequency of meta data transactions against the amount of
(possibly unnecessary) resync traffic.

If we set a hard limit on the area that may be "hot" at any given time, we
limit the amount of resync traffic needed for crash recovery.

For recovery after replication link failure,
we need to resync all blocks that have been changed on the other replica
in the meantime, or, if both replicas have been changed independently [*],
all blocks that have been changed on either replica in the meantime.
[*] usually as a result of a cluster split-brain and insufficient protection,
but there are valid use cases to do this on purpose.

Tracking those blocks can be implemented as a "dirty bitmap".
Having it fine-grained reduces the amount of resync traffic.
It should also be persistent, to allow for reboots (or crashes)
while the replication link is down.

There are various possible implementations for persistently storing
write intent log information, three of which are mentioned here.

"Chunk dirtying"
The on-disk "dirty bitmap" may be re-used as a "write-intent" bitmap as well.
To reduce the frequency of bitmap updates for write-intent log purposes,
one could dirty "chunks" (of some size) of the (fine grained) on-disk bitmap
at a time, while keeping the in-memory "dirty" bitmap as clean as possible,
flushing it to disk again when a previously "hot" (and dirtied on disk as a
full chunk) area "cools down" again (no IO in flight anymore, and none
expected in the near future either).
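
As a rough illustration of the chunk idea (the name and the chunk size below
are made up for this sketch, they are not DRBD constants): if one on-disk
"chunk" covers 2^12 bits of the fine grained bitmap, all bits of a chunk are
dirtied on disk together, and a single in-memory dirty bit maps to its chunk
with nothing more than a shift:

	enum { EXAMPLE_CHUNK_SHIFT = 12 };

	static inline unsigned long example_bit_to_chunk(unsigned long bit_nr)
	{
		return bit_nr >> EXAMPLE_CHUNK_SHIFT;
	}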

"Explicit (coarse) write intent bitmap"
Another implementation could choose a (probably coarse) explicit bitmap for
write-intent log purposes, in addition to the fine grained dirty bitmap.

"Activity log"
Yet another implementation may keep track of the hot regions by starting
with an empty set, and writing down a journal of region numbers that have
become "hot", or have "cooled down" again.

To be able to use a ring buffer for this journal of changes to the active
set, we not only record the actual changes to that set, but also record the
unchanged members of the set, in a round-robin fashion. To do so, we use a
fixed (but configurable) number of slots, which we can identify by index, and
associate region numbers (labels) with these indices.
For each transaction recording a change to the active set, we record the
change itself (index: -old_label, +new_label), and which index is associated
with which label (index: current_label) within a certain sliding window that
is moved further over the available indices with each such transaction.

Thus, for crash recovery, if the ring buffer is sufficiently large, we can
accurately reconstruct the active set.

"Sufficiently large" depends only on the maximum number of active objects
and the size of the sliding window recording "index: current_label"
associations within each transaction.

This is what we call the "activity log".

Currently we need one activity log transaction per single label change, which
does not give much benefit over the "dirty chunks of bitmap" approach, other
than potentially fewer seeks.

We plan to change the transaction format to support multiple changes per
transaction, which would then reduce several (disjoint, "random") updates to
the bitmap into one transaction to the activity log ring buffer.
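
To make the "sliding window" idea more concrete, a single journal transaction
could, roughly, carry a sequence number, the change itself, and a few context
slots. The sketch below only illustrates the concept; it is not DRBD's actual
on-disk transaction format, and all names and sizes are made up:

	struct example_al_transaction {
		u32 tr_number;
		u32 updated_index;
		u32 old_label;
		u32 new_label;
		u32 context_start_index;
		u32 context_labels[EXAMPLE_AL_CONTEXT_SIZE];
	};

where tr_number is a monotonically increasing sequence number (to find the
most recent transaction in the ring buffer), the updated_* members record the
change itself (index: -old_label, +new_label), and context_labels[] records
the labels currently associated with the indices context_start_index up to
context_start_index + EXAMPLE_AL_CONTEXT_SIZE - 1, i.e. the sliding window.
Replaying a full cycle of such transactions re-associates every index with
its last known label, and thus reconstructs the active set.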
*/

/* this defines an element in a tracked set
 * .colision is for hash table lookup.
 * When we process a new IO request, we know its sector, and thus can deduce
 * the region number (label) easily.  To do the label -> object lookup without
 * a full list walk, we use a simple hash table.
 *
 * .list is on one of three lists:
 *  in_use: currently in use (refcnt > 0, lc_number != LC_FREE)
 *     lru: unused but ready to be reused or recycled
 *          (refcnt == 0, lc_number != LC_FREE),
 *    free: unused but ready to be recycled
 *          (refcnt == 0, lc_number == LC_FREE),
 *
 * an element is said to be "in the active set"
 * if it is on either "in_use" or "lru", i.e. lc_number != LC_FREE.
 *
 * DRBD currently (May 2009) only uses 61 elements on the resync lru_cache
 * (total memory usage 2 pages), and up to 3833 elements on the act_log
 * lru_cache, totalling ~215 kB on a 64-bit architecture, ~53 pages.
 *
 * We usually do not actually free these objects again, but only "recycle"
 * them, as the change "index: -old_label, +LC_FREE" would need a transaction
 * as well.  This also means that using a kmem_cache to allocate the objects
 * from wastes some resources, but it avoids high-order page allocations
 * in kmalloc.
 */
struct lc_element {
	struct hlist_node colision;
	struct list_head list;		 /* LRU list or free list */
	unsigned refcnt;
	/* back "pointer" into the cache's lc_element[] array
	 * (lc_element[lc_index] == this element),
	 * for paranoia, and for "lc_index_of" */
	unsigned lc_index;
	/* if we want to track a larger set of objects,
	 * it needs to become an arch independent u64 */
	unsigned lc_number;
	/* special label when on free list */
#define LC_FREE (~0U)

	/* for pending changes */
	unsigned lc_new_number;
};
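
/*
 * A minimal sketch of how a tracked object might embed a struct lc_element
 * (the struct name and its extra member are made up for illustration; for
 * real users see the DRBD sources).  The enclosing object is recovered from
 * an element pointer with lc_entry(), i.e. container_of(), defined below:
 *
 *	struct example_extent {
 *		struct lc_element lce;
 *		unsigned long flags;
 *	};
 *
 *	struct lc_element *e = lc_find(lc, enr);
 *	struct example_extent *ext =
 *		e ? lc_entry(e, struct example_extent, lce) : NULL;
 */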

struct lru_cache {
	/* the least recently used item is kept at lru->prev */
	struct list_head lru;
	struct list_head free;
	struct list_head in_use;
	struct list_head to_be_changed;

	/* the pre-created kmem cache to allocate the objects from */
	struct kmem_cache *lc_cache;

	/* size of tracked objects, used to memset(,0,) them in lc_reset */
	size_t element_size;
	/* offset of struct lc_element member in the tracked object */
	size_t element_off;

	/* number of elements (indices) */
	unsigned int nr_elements;
	/* Arbitrary limit on the maximum number of tracked objects.  The
	 * practical limit is probably much lower, due to allocation failures.
	 * For typical use cases, nr_elements should be a few thousand at most.
	 * This also limits the maximum value of lc_element.lc_index, allowing
	 * the 8 high bits of .lc_index to be overloaded with flags
	 * in the future. */
#define LC_MAX_ACTIVE	(1<<24)

	/* allow accumulating a few (index:label) changes,
	 * but no more than max_pending_changes */
	unsigned int max_pending_changes;
	/* number of elements currently on to_be_changed list */
	unsigned int pending_changes;

	/* statistics */
	unsigned used; /* number of elements currently on in_use list */
	unsigned long hits, misses, starving, locked, changed;

	/* see below: flag-bits for lru_cache */
	unsigned long flags;

	void *lc_private;
	const char *name;

	/* the two arrays below hold nr_elements entries each */
	struct hlist_head *lc_slot;
	struct lc_element **lc_element;
};

/* flag-bits for lru_cache */
enum {
	/* debugging aid, to catch concurrent access early.
	 * The user needs to guarantee exclusive access by proper locking! */
	__LC_PARANOIA,

	/* annotate that the set is "dirty", possibly accumulating further
	 * changes, until a transaction is finally triggered */
	__LC_DIRTY,

	/* Locked, no further changes allowed.
	 * Also used to serialize changing transactions. */
	__LC_LOCKED,

	/* If we need to change the set, but currently there is neither a free
	 * nor an unused element available, we are "starving", and must not
	 * give out further references, to guarantee that eventually some
	 * refcnt will drop to zero and we will be able to make progress
	 * again, changing the set and writing the transaction.
	 * If the statistics say we are frequently starving,
	 * nr_elements is too small. */
	__LC_STARVING,
};
#define LC_PARANOIA (1<<__LC_PARANOIA)
#define LC_DIRTY (1<<__LC_DIRTY)
#define LC_LOCKED (1<<__LC_LOCKED)
#define LC_STARVING (1<<__LC_STARVING)
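
/*
 * lc_get() (declared below) returns NULL if the requested label is not
 * present and the set cannot be changed right now (locked, starving, or too
 * many accumulated pending changes).  One possible caller reaction, sketched
 * with a made-up helper example_wait_for_progress() standing in for whatever
 * wait/locking scheme the caller uses; this header does not prescribe a
 * retry policy:
 *
 *	struct lc_element *e;
 *
 *	while (!(e = lc_get(lc, enr)))
 *		example_wait_for_progress(lc);
 */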

extern struct lru_cache *lc_create(const char *name, struct kmem_cache *cache,
		unsigned max_pending_changes,
		unsigned e_count, size_t e_size, size_t e_off);
extern void lc_reset(struct lru_cache *lc);
extern void lc_destroy(struct lru_cache *lc);
extern void lc_set(struct lru_cache *lc, unsigned int enr, int index);
extern void lc_del(struct lru_cache *lc, struct lc_element *element);

extern struct lc_element *lc_get_cumulative(struct lru_cache *lc, unsigned int enr);
extern struct lc_element *lc_try_get(struct lru_cache *lc, unsigned int enr);
extern struct lc_element *lc_find(struct lru_cache *lc, unsigned int enr);
extern struct lc_element *lc_get(struct lru_cache *lc, unsigned int enr);
extern unsigned int lc_put(struct lru_cache *lc, struct lc_element *e);
extern void lc_committed(struct lru_cache *lc);
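
/*
 * A sketch of the basic calling pattern, using the made-up struct
 * example_extent from above.  The element count (61), the number of allowed
 * pending changes (32) and the slab cache setup are arbitrary for this
 * illustration, and all error handling is omitted:
 *
 *	struct kmem_cache *cache = kmem_cache_create("example_extents",
 *			sizeof(struct example_extent), 0, 0, NULL);
 *	struct lru_cache *lc = lc_create("example", cache, 32, 61,
 *			sizeof(struct example_extent),
 *			offsetof(struct example_extent, lce));
 *	struct lc_element *e;
 *
 *	e = lc_get(lc, enr);
 *	if (e) {
 *		... submit IO to the region labelled enr ...
 *		lc_put(lc, e);
 *	}
 *
 *	lc_destroy(lc);
 *
 * lc_get() takes (or creates) a reference on the element labelled enr and
 * may thereby change the active set; lc_put() drops that reference again.
 */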

struct seq_file;
extern size_t lc_seq_printf_stats(struct seq_file *seq, struct lru_cache *lc);

extern void lc_seq_dump_details(struct seq_file *seq, struct lru_cache *lc, char *utext,
				void (*detail) (struct seq_file *, struct lc_element *));

/**
 * lc_try_lock_for_transaction - can be used to stop lc_get() from changing the tracked set
 * @lc: the lru cache to operate on
 *
 * Allows (expects) the set to be "dirty".  Note that the reference counts and
 * order on the active and lru lists may still change.  Used to serialize
 * changing transactions.  Returns true if we acquired the lock.
 */
static inline int lc_try_lock_for_transaction(struct lru_cache *lc)
{
	return !test_and_set_bit(__LC_LOCKED, &lc->flags);
}

/**
 * lc_try_lock - variant to stop lc_get() from changing the tracked set
 * @lc: the lru cache to operate on
 *
 * Note that the reference counts and order on the active and lru lists may
 * still change.  Only works on a "clean" set.  Returns true if we acquired the
 * lock, which means there are no pending changes, and any further attempt to
 * change the set will not succeed until the next lc_unlock().
 */
extern int lc_try_lock(struct lru_cache *lc);

/**
 * lc_unlock - unlock @lc, allow lc_get() to change the set again
 * @lc: the lru cache to operate on
 */
static inline void lc_unlock(struct lru_cache *lc)
{
	clear_bit(__LC_DIRTY, &lc->flags);
	clear_bit_unlock(__LC_LOCKED, &lc->flags);
}
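
/*
 * A sketch of how pending changes might be serialized and completed, based
 * only on what is documented in this header; the actual write-out of the
 * transaction to stable storage is entirely up to the user:
 *
 *	if (lc_try_lock_for_transaction(lc)) {
 *		... write the accumulated (index: label) changes, i.e. the
 *		    elements on the to_be_changed list, to stable storage ...
 *		lc_committed(lc);
 *		lc_unlock(lc);
 *	}
 */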

extern bool lc_is_used(struct lru_cache *lc, unsigned int enr);

#define lc_entry(ptr, type, member) \
	container_of(ptr, type, member)

extern struct lc_element *lc_element_by_index(struct lru_cache *lc, unsigned i);
extern unsigned int lc_index_of(struct lru_cache *lc, struct lc_element *e);

#endif