RCU and Unloadable Modules

[Originally published in LWN Jan. 14, 2007: http://lwn.net/Articles/217484/]

RCU (read-copy update) is a synchronization mechanism that can be thought
of as a replacement for reader-writer locking (among other things), but with
very low-overhead readers that are immune to deadlock, priority inversion,
and unbounded latency. RCU read-side critical sections are delimited
by rcu_read_lock() and rcu_read_unlock(), which, in non-CONFIG_PREEMPT
kernels, generate no code whatsoever.
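
For example, a reader accessing an RCU-protected global pointer might be
structured along the following lines (the pointer gp, the type pstruct,
and do_something_with() are purely illustrative):

	struct pstruct *p;

	rcu_read_lock();
	p = rcu_dereference(gp);	/* fetch RCU-protected pointer */
	if (p != NULL)
		do_something_with(p);	/* must not block in the critical section */
	rcu_read_unlock();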

This means that RCU writers are unaware of the presence of concurrent
readers, so that RCU updates to shared data must be undertaken quite
carefully, leaving an old version of the data structure in place until all
pre-existing readers have finished. These old versions are needed because
such readers might hold a reference to them. RCU updates can therefore be
rather expensive, and RCU is thus best suited for read-mostly situations.

How can an RCU writer possibly determine when all readers are finished,
given that readers might well leave absolutely no trace of their
presence? There is a synchronize_rcu() primitive that blocks until all
pre-existing readers have completed. An updater wishing to delete an
element p from a linked list might do the following, while holding an
appropriate lock, of course:

	list_del_rcu(p);
	synchronize_rcu();
	kfree(p);

But the above code cannot be used in IRQ context -- the call_rcu()
primitive must be used instead. This primitive takes a pointer to an
rcu_head struct placed within the RCU-protected data structure and
another pointer to a function that may be invoked later to free that
structure. Code to delete an element p from the linked list from IRQ
context might then be as follows:

	list_del_rcu(p);
	call_rcu(&p->rcu, p_callback);

Since call_rcu() never blocks, this code can safely be used from within
IRQ context. The function p_callback() might be defined as follows:

	static void p_callback(struct rcu_head *rp)
	{
		struct pstruct *p = container_of(rp, struct pstruct, rcu);

		kfree(p);
	}
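
For reference, the examples above assume a structure along the following
lines, with the rcu_head embedded in the RCU-protected structure itself so
that container_of() can recover the enclosing pstruct (the field names are
illustrative):

	struct pstruct {
		struct list_head list;	/* linkage in the RCU-protected list */
		struct rcu_head rcu;	/* passed to call_rcu() */
		int data;		/* payload, purely illustrative */
	};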


Unloading Modules That Use call_rcu()

But what if p_callback is defined in an unloadable module?

If we unload the module while some RCU callbacks are pending,
the CPUs executing these callbacks are going to be severely
disappointed when they are later invoked, as fancifully depicted at
http://lwn.net/images/ns/kernel/rcu-drop.jpg.

We could try placing a synchronize_rcu() in the module-exit code path,
but this is not sufficient. Although synchronize_rcu() does wait for a
grace period to elapse, it does not wait for the callbacks to complete.

One might be tempted to try several back-to-back synchronize_rcu()
calls, but this is still not guaranteed to work. If there is a very
heavy RCU-callback load, then some of the callbacks might be deferred
in order to allow other processing to proceed. Such deferral is required
in realtime kernels in order to avoid excessive scheduling latencies.


rcu_barrier()

We instead need the rcu_barrier() primitive. This primitive is similar
to synchronize_rcu(), but instead of waiting solely for a grace
period to elapse, it also waits for all outstanding RCU callbacks to
complete. Pseudo-code using rcu_barrier() is as follows:

1. Prevent any new RCU callbacks from being posted.
2. Execute rcu_barrier().
3. Allow the module to be unloaded.

The rcutorture module makes use of rcu_barrier() in its exit function
as follows:

 1 static void
 2 rcu_torture_cleanup(void)
 3 {
 4   int i;
 5
 6   fullstop = 1;
 7   if (shuffler_task != NULL) {
 8     VERBOSE_PRINTK_STRING("Stopping rcu_torture_shuffle task");
 9     kthread_stop(shuffler_task);
10   }
11   shuffler_task = NULL;
12
13   if (writer_task != NULL) {
14     VERBOSE_PRINTK_STRING("Stopping rcu_torture_writer task");
15     kthread_stop(writer_task);
16   }
17   writer_task = NULL;
18
19   if (reader_tasks != NULL) {
20     for (i = 0; i < nrealreaders; i++) {
21       if (reader_tasks[i] != NULL) {
22         VERBOSE_PRINTK_STRING(
23           "Stopping rcu_torture_reader task");
24         kthread_stop(reader_tasks[i]);
25       }
26       reader_tasks[i] = NULL;
27     }
28     kfree(reader_tasks);
29     reader_tasks = NULL;
30   }
31   rcu_torture_current = NULL;
32
33   if (fakewriter_tasks != NULL) {
34     for (i = 0; i < nfakewriters; i++) {
35       if (fakewriter_tasks[i] != NULL) {
36         VERBOSE_PRINTK_STRING(
37           "Stopping rcu_torture_fakewriter task");
38         kthread_stop(fakewriter_tasks[i]);
39       }
40       fakewriter_tasks[i] = NULL;
41     }
42     kfree(fakewriter_tasks);
43     fakewriter_tasks = NULL;
44   }
45
46   if (stats_task != NULL) {
47     VERBOSE_PRINTK_STRING("Stopping rcu_torture_stats task");
48     kthread_stop(stats_task);
49   }
50   stats_task = NULL;
51
52   /* Wait for all RCU callbacks to fire. */
53   rcu_barrier();
54
55   rcu_torture_stats_print(); /* -After- the stats thread is stopped! */
56
57   if (cur_ops->cleanup != NULL)
58     cur_ops->cleanup();
59   if (atomic_read(&n_rcu_torture_error))
60     rcu_torture_print_module_parms("End of test: FAILURE");
61   else
62     rcu_torture_print_module_parms("End of test: SUCCESS");
63 }

Line 6 sets a global variable that prevents any RCU callbacks from
re-posting themselves. This will not be necessary in most cases, since
RCU callbacks rarely include calls to call_rcu(). However, the rcutorture
module is an exception to this rule, and therefore needs to set this
global variable.

Lines 7-50 stop all the kernel tasks associated with the rcutorture
module. Therefore, once execution reaches line 53, no more rcutorture
RCU callbacks will be posted. The rcu_barrier() call on line 53 waits
for any pre-existing callbacks to complete.

Then lines 55-62 print status and do operation-specific cleanup, and
then return, permitting the module-unload operation to be completed.
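
Distilled to its essentials, the exit function of a hypothetical module
that posts RCU callbacks might follow the same pattern, as in the sketch
below (my_stop_posting_callbacks() and my_free_resources() stand in for
whatever stops the module's call_rcu() activity and releases its data):

	static void __exit my_module_exit(void)
	{
		my_stop_posting_callbacks();	/* 1. no new RCU callbacks */
		rcu_barrier();			/* 2. wait for pending callbacks */
		my_free_resources();		/* 3. now safe to tear down */
	}
	module_exit(my_module_exit);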

Quick Quiz #1: Is there any other situation where rcu_barrier() might
	be required?

Your module might have additional complications. For example, if your
module invokes call_rcu() from timers, you will need to first cancel all
the timers, and only then invoke rcu_barrier() to wait for any remaining
RCU callbacks to complete.
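
For instance, a hypothetical module whose timer handler posts RCU callbacks
might do something like the following in its exit path (my_timer is an
illustrative name):

	del_timer_sync(&my_timer);	/* handler is not running and cannot restart */
	rcu_barrier();			/* wait for callbacks the handler already posted */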

Of course, if your module uses call_rcu_bh(), you will need to invoke
rcu_barrier_bh() before unloading. Similarly, if your module uses
call_rcu_sched(), you will need to invoke rcu_barrier_sched() before
unloading. If your module uses call_rcu(), call_rcu_bh(), -and-
call_rcu_sched(), then you will need to invoke each of rcu_barrier(),
rcu_barrier_bh(), and rcu_barrier_sched().
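
In that last case, the module-exit path would need a fragment along these
lines (the relative order of the three barriers does not matter, since each
waits only for callbacks of its own flavor):

	rcu_barrier();		/* wait for call_rcu() callbacks */
	rcu_barrier_bh();	/* wait for call_rcu_bh() callbacks */
	rcu_barrier_sched();	/* wait for call_rcu_sched() callbacks */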


Implementing rcu_barrier()

Dipankar Sarma's implementation of rcu_barrier() makes use of the fact
that RCU callbacks are never reordered once queued on one of the per-CPU
queues. His implementation queues an RCU callback on each of the per-CPU
callback queues, and then waits until they have all started executing, at
which point, all earlier RCU callbacks are guaranteed to have completed.

The original code for rcu_barrier() was as follows:

 1 void rcu_barrier(void)
 2 {
 3   BUG_ON(in_interrupt());
 4   /* Take cpucontrol mutex to protect against CPU hotplug */
 5   mutex_lock(&rcu_barrier_mutex);
 6   init_completion(&rcu_barrier_completion);
 7   atomic_set(&rcu_barrier_cpu_count, 0);
 8   on_each_cpu(rcu_barrier_func, NULL, 0, 1);
 9   wait_for_completion(&rcu_barrier_completion);
10   mutex_unlock(&rcu_barrier_mutex);
11 }

Line 3 verifies that the caller is in process context, and lines 5 and 10
use rcu_barrier_mutex to ensure that only one rcu_barrier() is using the
global completion and counters at a time, which are initialized on lines
6 and 7. Line 8 causes each CPU to invoke rcu_barrier_func(), which is
shown below. Note that the final "1" in on_each_cpu()'s argument list
ensures that all the calls to rcu_barrier_func() will have completed
before on_each_cpu() returns. Line 9 then waits for the completion.

This code was rewritten in 2008 to support rcu_barrier_bh() and
rcu_barrier_sched() in addition to the original rcu_barrier().

The rcu_barrier_func() runs on each CPU, where it invokes call_rcu()
to post an RCU callback, as follows:

 1 static void rcu_barrier_func(void *notused)
 2 {
 3   int cpu = smp_processor_id();
 4   struct rcu_data *rdp = &per_cpu(rcu_data, cpu);
 5   struct rcu_head *head;
 6
 7   head = &rdp->barrier;
 8   atomic_inc(&rcu_barrier_cpu_count);
 9   call_rcu(head, rcu_barrier_callback);
10 }

Lines 3 and 4 locate RCU's internal per-CPU rcu_data structure,
which contains the struct rcu_head that is needed for the later call to
call_rcu(). Line 7 picks up a pointer to this struct rcu_head, and line
8 increments a global counter. This counter will later be decremented
by the callback. Line 9 then registers the rcu_barrier_callback() on
the current CPU's queue.

The rcu_barrier_callback() function simply atomically decrements the
rcu_barrier_cpu_count variable and finalizes the completion when it
reaches zero, as follows:

 1 static void rcu_barrier_callback(struct rcu_head *notused)
 2 {
 3   if (atomic_dec_and_test(&rcu_barrier_cpu_count))
 4     complete(&rcu_barrier_completion);
 5 }

Quick Quiz #2: What happens if CPU 0's rcu_barrier_func() executes
	immediately (thus incrementing rcu_barrier_cpu_count to the
	value one), but the other CPUs' rcu_barrier_func() invocations
	are delayed for a full grace period? Couldn't this result in
	rcu_barrier() returning prematurely?


rcu_barrier() Summary

The rcu_barrier() primitive has seen relatively little use, since most
code using RCU is in the core kernel rather than in modules. However, if
you are using RCU from an unloadable module, you need to use rcu_barrier()
so that your module may be safely unloaded.


Answers to Quick Quizzes

Quick Quiz #1: Is there any other situation where rcu_barrier() might
	be required?

Answer: Interestingly enough, rcu_barrier() was not originally
	implemented for module unloading. Nikita Danilov was using
	RCU in a filesystem, which resulted in a similar situation at
	filesystem-unmount time. Dipankar Sarma coded up rcu_barrier()
	in response, so that Nikita could invoke it during the
	filesystem-unmount process.

	Much later, yours truly hit the RCU module-unload problem when
	implementing rcutorture, and found that rcu_barrier() solves
	this problem as well.

Quick Quiz #2: What happens if CPU 0's rcu_barrier_func() executes
	immediately (thus incrementing rcu_barrier_cpu_count to the
	value one), but the other CPUs' rcu_barrier_func() invocations
	are delayed for a full grace period? Couldn't this result in
	rcu_barrier() returning prematurely?

Answer: This cannot happen. The reason is that on_each_cpu() has its last
	argument, the wait flag, set to "1". This flag is passed through
	to smp_call_function() and further to smp_call_function_on_cpu(),
	causing the latter to spin until the cross-CPU invocation of
	rcu_barrier_func() has completed. This by itself would prevent
	a grace period from completing on non-CONFIG_PREEMPT kernels,
	since each CPU must undergo a context switch (or other quiescent
	state) before the grace period can complete. However, this is
	of no use in CONFIG_PREEMPT kernels.

	Therefore, on_each_cpu() disables preemption across its call
	to smp_call_function() and also across the local call to
	rcu_barrier_func(). This prevents the local CPU from context
	switching, again preventing grace periods from completing. This
	means that all CPUs have executed rcu_barrier_func() before
	the first rcu_barrier_callback() can possibly execute, in turn
	preventing rcu_barrier_cpu_count from prematurely reaching zero.

	Currently, -rt implementations of RCU keep but a single global
	queue for RCU callbacks, and thus do not suffer from this
	problem. However, when the -rt RCU eventually does have per-CPU
	callback queues, things will have to change. One simple change
	is to add an rcu_read_lock() before line 8 of rcu_barrier()
	and an rcu_read_unlock() after line 8 of this same function. If
	you can think of a better change, please let me know!
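
	In other words, that suggested change would make the middle of
	rcu_barrier() look something like the following sketch:

		rcu_read_lock();
		on_each_cpu(rcu_barrier_func, NULL, 0, 1);
		rcu_read_unlock();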