RCU and Unloadable Modules

[Originally published in LWN Jan. 14, 2007: http://lwn.net/Articles/217484/]

RCU (read-copy update) is a synchronization mechanism that can be thought
of as a replacement for reader-writer locking (among other things), but with
very low-overhead readers that are immune to deadlock, priority inversion,
and unbounded latency. RCU read-side critical sections are delimited
by rcu_read_lock() and rcu_read_unlock(), which, in non-CONFIG_PREEMPT
kernels, generate no code whatsoever.
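
For example, a reader traversing an RCU-protected linked list might look
like the following sketch, where the list head "mylist", the element type
"struct pstruct" with its "list" member, and do_something_with() are all
made-up names used purely for illustration:

        struct pstruct *p;

        rcu_read_lock();                  /* begin read-side critical section */
        list_for_each_entry_rcu(p, &mylist, list)
                do_something_with(p);     /* must not block in this section */
        rcu_read_unlock();                /* end read-side critical section */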

This means that RCU writers are unaware of the presence of concurrent
readers, so that RCU updates to shared data must be undertaken quite
carefully, leaving an old version of the data structure in place until all
pre-existing readers have finished. These old versions are needed because
such readers might hold a reference to them. RCU updates can therefore be
rather expensive, and RCU is thus best suited for read-mostly situations.

How can an RCU writer possibly determine when all readers are finished,
given that readers might well leave absolutely no trace of their
presence? There is a synchronize_rcu() primitive that blocks until all
pre-existing readers have completed. An updater wishing to delete an
element p from a linked list might do the following, while holding an
appropriate lock, of course:

        list_del_rcu(p);
        synchronize_rcu();
        kfree(p);

But the above code cannot be used in IRQ context -- the call_rcu()
primitive must be used instead. This primitive takes a pointer to an
rcu_head struct placed within the RCU-protected data structure and
another pointer to a function that may be invoked later to free that
structure. Code to delete an element p from the linked list from IRQ
context might then be as follows:

        list_del_rcu(p);
        call_rcu(&p->rcu, p_callback);

Since call_rcu() never blocks, this code can safely be used from within
IRQ context. The function p_callback() might be defined as follows:

        static void p_callback(struct rcu_head *rp)
        {
                struct pstruct *p = container_of(rp, struct pstruct, rcu);

                kfree(p);
        }


Unloading Modules That Use call_rcu()

But what if p_callback is defined in an unloadable module?

If we unload the module while some RCU callbacks are pending,
the CPUs executing these callbacks are going to be severely
disappointed when they are later invoked, as fancifully depicted at
http://lwn.net/images/ns/kernel/rcu-drop.jpg.

We could try placing a synchronize_rcu() in the module-exit code path,
but this is not sufficient. Although synchronize_rcu() does wait for a
grace period to elapse, it does not wait for the callbacks to complete.

One might be tempted to try several back-to-back synchronize_rcu()
calls, but this is still not guaranteed to work. If there is a very
heavy RCU-callback load, then some of the callbacks might be deferred
in order to allow other processing to proceed. Such deferral is required
in realtime kernels in order to avoid excessive scheduling latencies.


rcu_barrier()

We instead need the rcu_barrier() primitive. This primitive is similar
to synchronize_rcu(), but instead of waiting solely for a grace
period to elapse, it also waits for all outstanding RCU callbacks to
complete. Pseudo-code using rcu_barrier() is as follows:

1. Prevent any new RCU callbacks from being posted.
2. Execute rcu_barrier().
3. Allow the module to be unloaded.
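
For example, the exit function of a hypothetical module that posts
callbacks with call_rcu() might implement this pattern roughly as in the
following sketch, in which the mymodule_* names and the
stop_posting_callbacks() helper are made up for illustration:

        static void __exit mymodule_exit(void)
        {
                stop_posting_callbacks(); /* 1: no more call_rcu() invocations */
                rcu_barrier();            /* 2: wait for outstanding callbacks */
                /* 3: it is now safe for the module to be unloaded */
        }
        module_exit(mymodule_exit);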

The rcutorture module makes use of rcu_barrier() in its exit function
as follows:

 1 static void
 2 rcu_torture_cleanup(void)
 3 {
 4   int i;
 5
 6   fullstop = 1;
 7   if (shuffler_task != NULL) {
 8     VERBOSE_PRINTK_STRING("Stopping rcu_torture_shuffle task");
 9     kthread_stop(shuffler_task);
10   }
11   shuffler_task = NULL;
12
13   if (writer_task != NULL) {
14     VERBOSE_PRINTK_STRING("Stopping rcu_torture_writer task");
15     kthread_stop(writer_task);
16   }
17   writer_task = NULL;
18
19   if (reader_tasks != NULL) {
20     for (i = 0; i < nrealreaders; i++) {
21       if (reader_tasks[i] != NULL) {
22         VERBOSE_PRINTK_STRING(
23           "Stopping rcu_torture_reader task");
24         kthread_stop(reader_tasks[i]);
25       }
26       reader_tasks[i] = NULL;
27     }
28     kfree(reader_tasks);
29     reader_tasks = NULL;
30   }
31   rcu_torture_current = NULL;
32
33   if (fakewriter_tasks != NULL) {
34     for (i = 0; i < nfakewriters; i++) {
35       if (fakewriter_tasks[i] != NULL) {
36         VERBOSE_PRINTK_STRING(
37           "Stopping rcu_torture_fakewriter task");
38         kthread_stop(fakewriter_tasks[i]);
39       }
40       fakewriter_tasks[i] = NULL;
41     }
42     kfree(fakewriter_tasks);
43     fakewriter_tasks = NULL;
44   }
45
46   if (stats_task != NULL) {
47     VERBOSE_PRINTK_STRING("Stopping rcu_torture_stats task");
48     kthread_stop(stats_task);
49   }
50   stats_task = NULL;
51
52   /* Wait for all RCU callbacks to fire. */
53   rcu_barrier();
54
55   rcu_torture_stats_print(); /* -After- the stats thread is stopped! */
56
57   if (cur_ops->cleanup != NULL)
58     cur_ops->cleanup();
59   if (atomic_read(&n_rcu_torture_error))
60     rcu_torture_print_module_parms("End of test: FAILURE");
61   else
62     rcu_torture_print_module_parms("End of test: SUCCESS");
63 }

Line 6 sets a global variable that prevents any RCU callbacks from
re-posting themselves. This will not be necessary in most cases, since
RCU callbacks rarely include calls to call_rcu(). However, the rcutorture
module is an exception to this rule, and therefore needs to set this
global variable.

Lines 7-50 stop all the kernel tasks associated with the rcutorture
module. Therefore, once execution reaches line 53, no more rcutorture
RCU callbacks will be posted. The rcu_barrier() call on line 53 waits
for any pre-existing callbacks to complete.

Then lines 55-62 print status and do operation-specific cleanup, and
then return, permitting the module-unload operation to be completed.

Quick Quiz #1: Is there any other situation where rcu_barrier() might
        be required?

Your module might have additional complications. For example, if your
module invokes call_rcu() from timers, you will need to first cancel all
the timers, and only then invoke rcu_barrier() to wait for any remaining
RCU callbacks to complete.
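
A minimal sketch of that ordering might look like the following, where
my_timer is a hypothetical timer whose handler posts callbacks with
call_rcu():

        del_timer_sync(&my_timer); /* ensure the timer handler has finished */
        rcu_barrier();             /* then wait for the callbacks it posted */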

Of course, if your module uses call_rcu_bh(), you will need to invoke
rcu_barrier_bh() before unloading. Similarly, if your module uses
call_rcu_sched(), you will need to invoke rcu_barrier_sched() before
unloading. If your module uses call_rcu(), call_rcu_bh(), -and-
call_rcu_sched(), then you will need to invoke each of rcu_barrier(),
rcu_barrier_bh(), and rcu_barrier_sched().
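
In that last case, the module-exit path would need to end with something
like the following sketch before allowing the unload to proceed:

        rcu_barrier();       /* wait for call_rcu() callbacks */
        rcu_barrier_bh();    /* wait for call_rcu_bh() callbacks */
        rcu_barrier_sched(); /* wait for call_rcu_sched() callbacks */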


Implementing rcu_barrier()

Dipankar Sarma's implementation of rcu_barrier() makes use of the fact
that RCU callbacks are never reordered once queued on one of the per-CPU
queues. His implementation queues an RCU callback on each of the per-CPU
callback queues, and then waits until they have all started executing, at
which point, all earlier RCU callbacks are guaranteed to have completed.

The original code for rcu_barrier() was as follows:

 1 void rcu_barrier(void)
 2 {
 3   BUG_ON(in_interrupt());
 4   /* Take cpucontrol mutex to protect against CPU hotplug */
 5   mutex_lock(&rcu_barrier_mutex);
 6   init_completion(&rcu_barrier_completion);
 7   atomic_set(&rcu_barrier_cpu_count, 0);
 8   on_each_cpu(rcu_barrier_func, NULL, 0, 1);
 9   wait_for_completion(&rcu_barrier_completion);
10   mutex_unlock(&rcu_barrier_mutex);
11 }

Line 3 verifies that the caller is in process context, and lines 5 and 10
use rcu_barrier_mutex to ensure that only one rcu_barrier() is using the
global completion and counters at a time, which are initialized on lines
6 and 7. Line 8 causes each CPU to invoke rcu_barrier_func(), which is
shown below. Note that the final "1" in on_each_cpu()'s argument list
ensures that all the calls to rcu_barrier_func() will have completed
before on_each_cpu() returns. Line 9 then waits for the completion.

This code was rewritten in 2008 to support rcu_barrier_bh() and
rcu_barrier_sched() in addition to the original rcu_barrier().

The rcu_barrier_func() runs on each CPU, where it invokes call_rcu()
to post an RCU callback, as follows:

 1 static void rcu_barrier_func(void *notused)
 2 {
 3   int cpu = smp_processor_id();
 4   struct rcu_data *rdp = &per_cpu(rcu_data, cpu);
 5   struct rcu_head *head;
 6
 7   head = &rdp->barrier;
 8   atomic_inc(&rcu_barrier_cpu_count);
 9   call_rcu(head, rcu_barrier_callback);
10 }

Lines 3 and 4 locate RCU's internal per-CPU rcu_data structure,
which contains the struct rcu_head that is needed for the later call to
call_rcu(). Line 7 picks up a pointer to this struct rcu_head, and line
8 increments a global counter. This counter will later be decremented
by the callback. Line 9 then registers the rcu_barrier_callback() on
the current CPU's queue.

The rcu_barrier_callback() function simply atomically decrements the
rcu_barrier_cpu_count variable and finalizes the completion when it
reaches zero, as follows:

 1 static void rcu_barrier_callback(struct rcu_head *notused)
 2 {
 3   if (atomic_dec_and_test(&rcu_barrier_cpu_count))
 4     complete(&rcu_barrier_completion);
 5 }

Quick Quiz #2: What happens if CPU 0's rcu_barrier_func() executes
        immediately (thus incrementing rcu_barrier_cpu_count to the
        value one), but the other CPUs' rcu_barrier_func() invocations
        are delayed for a full grace period? Couldn't this result in
        rcu_barrier() returning prematurely?


rcu_barrier() Summary

The rcu_barrier() primitive has seen relatively little use, since most
code using RCU is in the core kernel rather than in modules. However, if
you are using RCU from an unloadable module, you need to use rcu_barrier()
so that your module may be safely unloaded.


Answers to Quick Quizzes

Quick Quiz #1: Is there any other situation where rcu_barrier() might
        be required?

Answer: Interestingly enough, rcu_barrier() was not originally
        implemented for module unloading. Nikita Danilov was using
        RCU in a filesystem, which resulted in a similar situation at
        filesystem-unmount time. Dipankar Sarma coded up rcu_barrier()
        in response, so that Nikita could invoke it during the
        filesystem-unmount process.

        Much later, yours truly hit the RCU module-unload problem when
        implementing rcutorture, and found that rcu_barrier() solves
        this problem as well.

Quick Quiz #2: What happens if CPU 0's rcu_barrier_func() executes
        immediately (thus incrementing rcu_barrier_cpu_count to the
        value one), but the other CPUs' rcu_barrier_func() invocations
        are delayed for a full grace period? Couldn't this result in
        rcu_barrier() returning prematurely?

Answer: This cannot happen. The reason is that on_each_cpu() has its last
        argument, the wait flag, set to "1". This flag is passed through
        to smp_call_function() and further to smp_call_function_on_cpu(),
        causing this latter to spin until the cross-CPU invocation of
        rcu_barrier_func() has completed. This by itself would prevent
        a grace period from completing on non-CONFIG_PREEMPT kernels,
        since each CPU must undergo a context switch (or other quiescent
        state) before the grace period can complete. However, this is
        of no use in CONFIG_PREEMPT kernels.

        Therefore, on_each_cpu() disables preemption across its call
        to smp_call_function() and also across the local call to
        rcu_barrier_func(). This prevents the local CPU from context
        switching, again preventing grace periods from completing. This
        means that all CPUs have executed rcu_barrier_func() before
        the first rcu_barrier_callback() can possibly execute, in turn
        preventing rcu_barrier_cpu_count from prematurely reaching zero.

        Currently, -rt implementations of RCU keep but a single global
        queue for RCU callbacks, and thus do not suffer from this
        problem. However, when the -rt RCU eventually does have per-CPU
        callback queues, things will have to change. One simple change
        is to add an rcu_read_lock() before line 8 of rcu_barrier()
        and an rcu_read_unlock() after line 8 of this same function. If
        you can think of a better change, please let me know!