================
CIRCULAR BUFFERS
================

By: David Howells <dhowells@redhat.com>
    Paul E. McKenney <paulmck@linux.vnet.ibm.com>


Linux provides a number of features that can be used to implement circular
buffering.  There are two sets of such features:

 (1) Convenience functions for determining information about power-of-2 sized
     buffers.

 (2) Memory barriers for when the producer and the consumer of objects in the
     buffer don't want to share a lock.

To use these facilities, as discussed below, there needs to be just one
producer and just one consumer.  It is possible to handle multiple producers by
serialising them, and to handle multiple consumers by serialising them.


Contents:

 (*) What is a circular buffer?

 (*) Measuring power-of-2 buffers.

 (*) Using memory barriers with circular buffers.
     - The producer.
     - The consumer.


==========================
WHAT IS A CIRCULAR BUFFER?
==========================

First of all, what is a circular buffer?  A circular buffer is a buffer of
fixed, finite size into which there are two indices:

 (1) A 'head' index - the point at which the producer inserts items into the
     buffer.

 (2) A 'tail' index - the point at which the consumer finds the next item in
     the buffer.

Typically when the tail pointer is equal to the head pointer, the buffer is
empty; and the buffer is full when the head pointer is one less than the tail
pointer (modulo the buffer size).

The head index is incremented when items are added, and the tail index when
items are removed.  The tail index should never overtake the head index, and
both indices should be wrapped to 0 when they reach the end of the buffer, thus
allowing an infinite amount of data to flow through the buffer.

Typically, items will all be of the same unit size, but this isn't strictly
required to use the techniques below.  The indices can be increased by more
than 1 if multiple items or variable-sized items are to be included in the
buffer, provided that neither index overtakes the other.  The implementer must
be careful, however, as a region more than one unit in size may wrap the end of
the buffer and be broken into two segments.
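
As an illustration, a buffer of fixed-size items indexed in this way might be
declared along the following lines.  This is only a sketch - the structure and
field names (BUF_SIZE, struct my_ring, entries) are hypothetical and are not
defined by the kernel:

        #define BUF_SIZE 16             /* a power of 2; see the next section */

        struct item {
                unsigned long data;     /* whatever one unit of data holds */
        };

        struct my_ring {
                struct item     entries[BUF_SIZE];
                unsigned long   head;   /* incremented by the producer */
                unsigned long   tail;   /* incremented by the consumer */
        };

Both indices start at 0, always lie in the range 0 to BUF_SIZE-1, and wrap back
to 0 when they reach the end of the entries[] array.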


============================
MEASURING POWER-OF-2 BUFFERS
============================

Calculation of the occupancy or the remaining capacity of an arbitrarily sized
circular buffer would normally be a slow operation, requiring the use of a
modulus (divide) instruction.  However, if the buffer is of a power-of-2 size,
then a much quicker bitwise-AND instruction can be used instead.
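
As a quick illustration of the difference, the occupancy of a buffer whose
indices are kept in the range 0 to size-1 could be computed either way.  The
helper names below are made up for this sketch; they are not kernel functions:

        /* Arbitrarily sized buffer: needs a (slow) modulus/divide. */
        static unsigned long occupancy_any(unsigned long head, unsigned long tail,
                                           unsigned long size)
        {
                return (head - tail + size) % size;
        }

        /* Power-of-2 sized buffer: a single bitwise AND suffices. */
        static unsigned long occupancy_pow2(unsigned long head, unsigned long tail,
                                            unsigned long size)
        {
                return (head - tail) & (size - 1);
        }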

Linux provides a set of macros for handling power-of-2 circular buffers.  These
can be made use of by:

        #include <linux/circ_buf.h>

The macros are:

 (*) Measure the remaining capacity of a buffer:

        CIRC_SPACE(head_index, tail_index, buffer_size);

     This returns the amount of space left in the buffer[1] into which items
     can be inserted.


 (*) Measure the maximum consecutive immediate space in a buffer:

        CIRC_SPACE_TO_END(head_index, tail_index, buffer_size);

     This returns the amount of consecutive space left in the buffer[1] into
     which items can be immediately inserted without having to wrap back to the
     beginning of the buffer.


 (*) Measure the occupancy of a buffer:

        CIRC_CNT(head_index, tail_index, buffer_size);

     This returns the number of items currently occupying a buffer[2].


 (*) Measure the non-wrapping occupancy of a buffer:

        CIRC_CNT_TO_END(head_index, tail_index, buffer_size);

     This returns the number of consecutive items[2] that can be extracted from
     the buffer without having to wrap back to the beginning of the buffer.


Each of these macros will nominally return a value between 0 and buffer_size-1,
however:

 [1] CIRC_SPACE*() are intended to be used in the producer.  To the producer
     they will return a lower bound as the producer controls the head index,
     but the consumer may still be depleting the buffer on another CPU and
     moving the tail index.

     To the consumer it will show an upper bound as the producer may be busy
     depleting the space.

 [2] CIRC_CNT*() are intended to be used in the consumer.  To the consumer they
     will return a lower bound as the consumer controls the tail index, but the
     producer may still be filling the buffer on another CPU and moving the
     head index.

     To the producer it will show an upper bound as the consumer may be busy
     emptying the buffer.

 [3] To a third party, the order in which the writes to the indices by the
     producer and consumer become visible cannot be guaranteed as they are
     independent and may be made on different CPUs - so the result in such a
     situation will merely be a guess, and may even be negative.
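
As an example of why the *_TO_END variants are useful, a producer copying a
variable-sized block of bytes into the buffer may find that the block straddles
the end of the buffer and has to be split into two copies.  The following is a
sketch of how that might look; struct byte_ring and produce_bytes() are
hypothetical names invented for this example, not kernel interfaces:

        struct byte_ring {
                char            *data;         /* power-of-2 sized storage */
                unsigned long   size;
                unsigned long   head;          /* written by the producer */
                unsigned long   tail;          /* written by the consumer */
        };

        /* Copy 'len' bytes into the buffer, wrapping round the end if need be.
         * Returns false if there is not enough space.  The caller is assumed
         * to hold the producer lock.
         */
        static bool produce_bytes(struct byte_ring *buf, const char *src, size_t len)
        {
                unsigned long head = buf->head;
                unsigned long tail = READ_ONCE(buf->tail);
                size_t first;

                if (CIRC_SPACE(head, tail, buf->size) < len)
                        return false;

                /* Copy what fits between the head and the end of the buffer... */
                first = min_t(size_t, len, CIRC_SPACE_TO_END(head, tail, buf->size));
                memcpy(buf->data + head, src, first);

                /* ...and wrap round to the beginning for the remainder. */
                memcpy(buf->data, src + first, len - first);

                /* Make the new data visible to the consumer. */
                smp_store_release(&buf->head, (head + len) & (buf->size - 1));
                return true;
        }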


===========================================
USING MEMORY BARRIERS WITH CIRCULAR BUFFERS
===========================================

By using memory barriers in conjunction with circular buffers, you can avoid
the need to:

 (1) use a single lock to govern access to both ends of the buffer, thus
     allowing the buffer to be filled and emptied at the same time; and

 (2) use atomic counter operations.

There are two sides to this: the producer that fills the buffer, and the
consumer that empties it.  Only one thing should be filling a buffer at any one
time, and only one thing should be emptying a buffer at any one time, but the
two sides can operate simultaneously.


THE PRODUCER
------------

The producer will look something like this:

        spin_lock(&producer_lock);

        unsigned long head = buffer->head;
        /* The spin_unlock() and next spin_lock() provide needed ordering. */
        unsigned long tail = READ_ONCE(buffer->tail);

        if (CIRC_SPACE(head, tail, buffer->size) >= 1) {
                /* insert one item into the buffer */
                struct item *item = buffer[head];

                produce_item(item);

                smp_store_release(&buffer->head,
                                  (head + 1) & (buffer->size - 1));

                /* wake_up() will make sure that the head is committed before
                 * waking anyone up */
                wake_up(consumer);
        }

        spin_unlock(&producer_lock);

This will instruct the CPU that the contents of the new item must be written
before the head index makes it available to the consumer, and then instruct the
CPU that the revised head index must be written before the consumer is woken.

Note that wake_up() does not guarantee any sort of barrier unless something
is actually awakened.  We therefore cannot rely on it for ordering.  However,
there is always one element of the array left empty.  Therefore, the
producer must produce two elements before it could possibly corrupt the
element currently being read by the consumer.  The unlock-lock pair between
consecutive invocations of the consumer thus provides the necessary
ordering between the read of the index indicating that the consumer has
vacated a given element and the write by the producer to that same element.
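
Note also that only the final update of the head index needs to be ordered
against the writes that filled the items, so a producer that inserts several
items under a single hold of the lock can publish the whole batch with one
smp_store_release().  A sketch, using the same names as above except for
have_more_items(), which is a stand-in for whatever decides that there is more
data to queue:

        spin_lock(&producer_lock);

        {
                unsigned long head = buffer->head;
                /* The spin_unlock() and next spin_lock() provide needed ordering. */
                unsigned long tail = READ_ONCE(buffer->tail);
                unsigned long n = 0;

                /* Fill as many items as there is space for... */
                while (CIRC_SPACE(head + n, tail, buffer->size) >= 1 &&
                       have_more_items()) {
                        struct item *item = buffer[(head + n) & (buffer->size - 1)];

                        produce_item(item);
                        n++;
                }

                /* ...then publish them all to the consumer at once. */
                if (n) {
                        smp_store_release(&buffer->head,
                                          (head + n) & (buffer->size - 1));
                        wake_up(consumer);
                }
        }

        spin_unlock(&producer_lock);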


THE CONSUMER
------------

The consumer will look something like this:

        spin_lock(&consumer_lock);

        /* Read index before reading contents at that index. */
        unsigned long head = smp_load_acquire(&buffer->head);
        unsigned long tail = buffer->tail;

        if (CIRC_CNT(head, tail, buffer->size) >= 1) {

                /* extract one item from the buffer */
                struct item *item = buffer[tail];

                consume_item(item);

                /* Finish reading descriptor before incrementing tail. */
                smp_store_release(&buffer->tail,
                                  (tail + 1) & (buffer->size - 1));
        }

        spin_unlock(&consumer_lock);

This will instruct the CPU to make sure the index is up to date before reading
the new item, and then it will make sure the CPU has finished reading the item
before it writes the new tail index, which makes that slot of the buffer
available to the producer again.

Note the use of READ_ONCE() and smp_load_acquire() to read the
opposition index.  This prevents the compiler from discarding and
reloading its cached value - which some compilers will do across
smp_read_barrier_depends().  This isn't strictly needed if you can
be sure that the opposition index will _only_ be used the once.
The smp_load_acquire() additionally forces the CPU to order against
subsequent memory references.  Similarly, smp_store_release() is used
in both algorithms to write the thread's index.  This documents the
fact that we are writing to something that can be read concurrently,
prevents the compiler from tearing the store, and enforces ordering
against previous accesses.
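
Reading the opposition index only once also suggests a natural way for the
consumer to drain a batch of items: take one smp_load_acquire() snapshot of the
head index, consume everything up to it, and only then release the tail index.
A sketch of that, using the same names as the example above:

        spin_lock(&consumer_lock);

        {
                /* Read index once, before reading contents at those indices. */
                unsigned long head = smp_load_acquire(&buffer->head);
                unsigned long tail = buffer->tail;

                while (CIRC_CNT(head, tail, buffer->size) >= 1) {
                        struct item *item = buffer[tail];

                        consume_item(item);
                        tail = (tail + 1) & (buffer->size - 1);
                }

                /* Finish reading all the items before handing their slots
                 * back to the producer.
                 */
                smp_store_release(&buffer->tail, tail);
        }

        spin_unlock(&consumer_lock);

Deferring the tail update to the end only delays when the producer sees the
freed space; it does not affect correctness, as the release store still orders
all of the item reads before the producer is allowed to overwrite those slots.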


===============
FURTHER READING
===============

See also Documentation/memory-barriers.txt for a description of Linux's memory
barrier facilities.