RCU and Unloadable Modules

[Originally published in LWN Jan. 14, 2007: http://lwn.net/Articles/217484/]

RCU (read-copy update) is a synchronization mechanism that can be thought
of as a replacement for reader-writer locking (among other things), but with
very low-overhead readers that are immune to deadlock, priority inversion,
and unbounded latency. RCU read-side critical sections are delimited
by rcu_read_lock() and rcu_read_unlock(), which, in non-CONFIG_PREEMPT
kernels, generate no code whatsoever.
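
For example, a reader traversing an RCU-protected pointer might look
like the following sketch, where the global pointer gp, the struct foo
with its field a, and do_something_with() are purely illustrative names
rather than anything taken from the kernel sources:

	struct foo *p;

	rcu_read_lock();                 /* Enter read-side critical section. */
	p = rcu_dereference(gp);         /* Fetch the RCU-protected pointer. */
	if (p != NULL)
		do_something_with(p->a); /* Use the element; no blocking here. */
	rcu_read_unlock();               /* Exit read-side critical section. */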

This means that RCU writers are unaware of the presence of concurrent
readers, so that RCU updates to shared data must be undertaken quite
carefully, leaving an old version of the data structure in place until all
pre-existing readers have finished. These old versions are needed because
such readers might hold a reference to them. RCU updates can therefore be
rather expensive, and RCU is thus best suited for read-mostly situations.

How can an RCU writer possibly determine when all readers are finished,
given that readers might well leave absolutely no trace of their
presence? There is a synchronize_rcu() primitive that blocks until all
pre-existing readers have completed. An updater wishing to delete an
element p from a linked list might do the following, while holding an
appropriate lock, of course:

	list_del_rcu(p);
	synchronize_rcu();
	kfree(p);
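
For concreteness, a fuller update-side sketch might look as follows,
assuming that updaters are serialized by a hypothetical mutex named
list_mutex and that each element embeds a struct list_head named list.
A mutex (rather than a spinlock) is used here because synchronize_rcu()
can block:

	mutex_lock(&list_mutex);   /* Exclude other updaters. */
	list_del_rcu(&p->list);    /* Unlink p; pre-existing readers may still see it. */
	synchronize_rcu();         /* Wait for all pre-existing readers to finish. */
	kfree(p);                  /* No reader can now hold a reference to p. */
	mutex_unlock(&list_mutex);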

But the above code cannot be used in IRQ context -- the call_rcu()
primitive must be used instead. This primitive takes a pointer to an
rcu_head struct placed within the RCU-protected data structure and
another pointer to a function that may be invoked later to free that
structure. Code to delete an element p from the linked list from IRQ
context might then be as follows:

	list_del_rcu(p);
	call_rcu(&p->rcu, p_callback);

Since call_rcu() never blocks, this code can safely be used from within
IRQ context. The function p_callback() might be defined as follows:

	static void p_callback(struct rcu_head *rp)
	{
		struct pstruct *p = container_of(rp, struct pstruct, rcu);

		kfree(p);
	}
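
This works because the struct rcu_head is embedded directly within the
RCU-protected structure, allowing container_of() to map from the rcu_head
back to the enclosing element. A sketch of such a structure might be as
follows, where the list and data fields are illustrative; only the rcu
field is assumed by the examples above:

	struct pstruct {
		struct list_head list;  /* Links this element into the RCU-protected list. */
		int data;               /* The element's payload. */
		struct rcu_head rcu;    /* Passed to call_rcu() for deferred freeing. */
	};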


Unloading Modules That Use call_rcu()

But what if p_callback is defined in an unloadable module?

If we unload the module while some RCU callbacks are pending,
the CPUs executing these callbacks are going to be severely
disappointed when they are later invoked, as fancifully depicted at
http://lwn.net/images/ns/kernel/rcu-drop.jpg.

We could try placing a synchronize_rcu() in the module-exit code path,
but this is not sufficient. Although synchronize_rcu() does wait for a
grace period to elapse, it does not wait for the callbacks to complete.

One might be tempted to try several back-to-back synchronize_rcu()
calls, but this is still not guaranteed to work. If there is a very
heavy RCU-callback load, then some of the callbacks might be deferred
in order to allow other processing to proceed. Such deferral is required
in realtime kernels in order to avoid excessive scheduling latencies.


rcu_barrier()

We instead need the rcu_barrier() primitive. This primitive is similar
to synchronize_rcu(), but instead of waiting solely for a grace
period to elapse, it also waits for all outstanding RCU callbacks to
complete. Pseudo-code using rcu_barrier() is as follows:

1. Prevent any new RCU callbacks from being posted.
2. Execute rcu_barrier().
3. Allow the module to be unloaded.

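In code, a module-exit function following this recipe might look like
the following sketch, where my_stop_new_callbacks() stands in for
whatever mechanism your module uses to guarantee that no further
call_rcu() invocations will occur:

	static void __exit my_module_exit(void)
	{
		my_stop_new_callbacks();  /* 1. Prevent posting of new RCU callbacks. */
		rcu_barrier();            /* 2. Wait for all outstanding callbacks to complete. */
		/* 3. Return, allowing the module to be safely unloaded. */
	}
	module_exit(my_module_exit);
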
Quick Quiz #1: Why is there no srcu_barrier()?

The rcutorture module makes use of rcu_barrier() in its exit function
as follows:

 1 static void
 2 rcu_torture_cleanup(void)
 3 {
 4   int i;
 5
 6   fullstop = 1;
 7   if (shuffler_task != NULL) {
 8     VERBOSE_PRINTK_STRING("Stopping rcu_torture_shuffle task");
 9     kthread_stop(shuffler_task);
10   }
11   shuffler_task = NULL;
12
13   if (writer_task != NULL) {
14     VERBOSE_PRINTK_STRING("Stopping rcu_torture_writer task");
15     kthread_stop(writer_task);
16   }
17   writer_task = NULL;
18
19   if (reader_tasks != NULL) {
20     for (i = 0; i < nrealreaders; i++) {
21       if (reader_tasks[i] != NULL) {
22         VERBOSE_PRINTK_STRING(
23           "Stopping rcu_torture_reader task");
24         kthread_stop(reader_tasks[i]);
25       }
26       reader_tasks[i] = NULL;
27     }
28     kfree(reader_tasks);
29     reader_tasks = NULL;
30   }
31   rcu_torture_current = NULL;
32
33   if (fakewriter_tasks != NULL) {
34     for (i = 0; i < nfakewriters; i++) {
35       if (fakewriter_tasks[i] != NULL) {
36         VERBOSE_PRINTK_STRING(
37           "Stopping rcu_torture_fakewriter task");
38         kthread_stop(fakewriter_tasks[i]);
39       }
40       fakewriter_tasks[i] = NULL;
41     }
42     kfree(fakewriter_tasks);
43     fakewriter_tasks = NULL;
44   }
45
46   if (stats_task != NULL) {
47     VERBOSE_PRINTK_STRING("Stopping rcu_torture_stats task");
48     kthread_stop(stats_task);
49   }
50   stats_task = NULL;
51
52   /* Wait for all RCU callbacks to fire. */
53   rcu_barrier();
54
55   rcu_torture_stats_print(); /* -After- the stats thread is stopped! */
56
57   if (cur_ops->cleanup != NULL)
58     cur_ops->cleanup();
59   if (atomic_read(&n_rcu_torture_error))
60     rcu_torture_print_module_parms("End of test: FAILURE");
61   else
62     rcu_torture_print_module_parms("End of test: SUCCESS");
63 }

Line 6 sets a global variable that prevents any RCU callbacks from
re-posting themselves. This will not be necessary in most cases, since
RCU callbacks rarely include calls to call_rcu(). However, the rcutorture
module is an exception to this rule, and therefore needs to set this
global variable.

Lines 7-50 stop all the kernel tasks associated with the rcutorture
module. Therefore, once execution reaches line 53, no more rcutorture
RCU callbacks will be posted. The rcu_barrier() call on line 53 waits
for any pre-existing callbacks to complete.

Then lines 55-62 print status and do operation-specific cleanup, and
then return, permitting the module-unload operation to be completed.

Quick Quiz #2: Is there any other situation where rcu_barrier() might
        be required?

Your module might have additional complications. For example, if your
module invokes call_rcu() from timers, you will need to first cancel all
the timers, and only then invoke rcu_barrier() to wait for any remaining
RCU callbacks to complete.
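
A sketch of such an exit path follows, assuming a hypothetical timer
my_timer whose handler may invoke call_rcu():

	static void __exit my_timer_module_exit(void)
	{
		del_timer_sync(&my_timer);  /* Timer handler no longer runs, so no new callbacks. */
		rcu_barrier();              /* Wait for callbacks that the handler already posted. */
	}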

Of course, if your module uses call_rcu_bh(), you will need to invoke
rcu_barrier_bh() before unloading. Similarly, if your module uses
call_rcu_sched(), you will need to invoke rcu_barrier_sched() before
unloading. If your module uses call_rcu(), call_rcu_bh(), -and-
call_rcu_sched(), then you will need to invoke each of rcu_barrier(),
rcu_barrier_bh(), and rcu_barrier_sched().
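
A module using all three of these families would therefore end its exit
path with something like:

	rcu_barrier();        /* Wait for call_rcu() callbacks to complete. */
	rcu_barrier_bh();     /* Wait for call_rcu_bh() callbacks to complete. */
	rcu_barrier_sched();  /* Wait for call_rcu_sched() callbacks to complete. */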


Implementing rcu_barrier()

Dipankar Sarma's implementation of rcu_barrier() makes use of the fact
that RCU callbacks are never reordered once queued on one of the per-CPU
queues. His implementation queues an RCU callback on each of the per-CPU
callback queues, and then waits until they have all started executing, at
which point, all earlier RCU callbacks are guaranteed to have completed.

The original code for rcu_barrier() was as follows:

 1 void rcu_barrier(void)
 2 {
 3   BUG_ON(in_interrupt());
 4   /* Take cpucontrol mutex to protect against CPU hotplug */
 5   mutex_lock(&rcu_barrier_mutex);
 6   init_completion(&rcu_barrier_completion);
 7   atomic_set(&rcu_barrier_cpu_count, 0);
 8   on_each_cpu(rcu_barrier_func, NULL, 0, 1);
 9   wait_for_completion(&rcu_barrier_completion);
10   mutex_unlock(&rcu_barrier_mutex);
11 }

Line 3 verifies that the caller is in process context, and lines 5 and 10
use rcu_barrier_mutex to ensure that only one rcu_barrier() is using the
global completion and counters at a time, which are initialized on lines
6 and 7. Line 8 causes each CPU to invoke rcu_barrier_func(), which is
shown below. Note that the final "1" in on_each_cpu()'s argument list
ensures that all the calls to rcu_barrier_func() will have completed
before on_each_cpu() returns. Line 9 then waits for the completion.

This code was rewritten in 2008 to support rcu_barrier_bh() and
rcu_barrier_sched() in addition to the original rcu_barrier().

The rcu_barrier_func() runs on each CPU, where it invokes call_rcu()
to post an RCU callback, as follows:

 1 static void rcu_barrier_func(void *notused)
 2 {
 3   int cpu = smp_processor_id();
 4   struct rcu_data *rdp = &per_cpu(rcu_data, cpu);
 5   struct rcu_head *head;
 6
 7   head = &rdp->barrier;
 8   atomic_inc(&rcu_barrier_cpu_count);
 9   call_rcu(head, rcu_barrier_callback);
10 }

Lines 3 and 4 locate RCU's internal per-CPU rcu_data structure,
which contains the struct rcu_head that is needed for the later call to
call_rcu(). Line 7 picks up a pointer to this struct rcu_head, and line
8 increments a global counter. This counter will later be decremented
by the callback. Line 9 then registers the rcu_barrier_callback() on
the current CPU's queue.

The rcu_barrier_callback() function simply atomically decrements the
rcu_barrier_cpu_count variable and finalizes the completion when it
reaches zero, as follows:

 1 static void rcu_barrier_callback(struct rcu_head *notused)
 2 {
 3   if (atomic_dec_and_test(&rcu_barrier_cpu_count))
 4     complete(&rcu_barrier_completion);
 5 }

Quick Quiz #3: What happens if CPU 0's rcu_barrier_func() executes
        immediately (thus incrementing rcu_barrier_cpu_count to the
        value one), but the other CPUs' rcu_barrier_func() invocations
        are delayed for a full grace period? Couldn't this result in
        rcu_barrier() returning prematurely?


rcu_barrier() Summary

The rcu_barrier() primitive has seen relatively little use, since most
code using RCU is in the core kernel rather than in modules. However, if
you are using RCU from an unloadable module, you need to use rcu_barrier()
so that your module may be safely unloaded.


Answers to Quick Quizzes

Quick Quiz #1: Why is there no srcu_barrier()?

Answer: Since there is no call_srcu(), there can be no outstanding SRCU
        callbacks. Therefore, there is no need to wait for them.

Quick Quiz #2: Is there any other situation where rcu_barrier() might
        be required?

Answer: Interestingly enough, rcu_barrier() was not originally
        implemented for module unloading. Nikita Danilov was using
        RCU in a filesystem, which resulted in a similar situation at
        filesystem-unmount time. Dipankar Sarma coded up rcu_barrier()
        in response, so that Nikita could invoke it during the
        filesystem-unmount process.

        Much later, yours truly hit the RCU module-unload problem when
        implementing rcutorture, and found that rcu_barrier() solves
        this problem as well.

Quick Quiz #3: What happens if CPU 0's rcu_barrier_func() executes
        immediately (thus incrementing rcu_barrier_cpu_count to the
        value one), but the other CPUs' rcu_barrier_func() invocations
        are delayed for a full grace period? Couldn't this result in
        rcu_barrier() returning prematurely?

Answer: This cannot happen. The reason is that on_each_cpu() has its last
        argument, the wait flag, set to "1". This flag is passed through
        to smp_call_function() and further to smp_call_function_on_cpu(),
        causing the latter to spin until the cross-CPU invocation of
        rcu_barrier_func() has completed. This by itself would prevent
        a grace period from completing on non-CONFIG_PREEMPT kernels,
        since each CPU must undergo a context switch (or other quiescent
        state) before the grace period can complete. However, this is
        of no use in CONFIG_PREEMPT kernels.

        Therefore, on_each_cpu() disables preemption across its call
        to smp_call_function() and also across the local call to
        rcu_barrier_func(). This prevents the local CPU from context
        switching, again preventing grace periods from completing. This
        means that all CPUs have executed rcu_barrier_func() before
        the first rcu_barrier_callback() can possibly execute, in turn
        preventing rcu_barrier_cpu_count from prematurely reaching zero.

        Currently, -rt implementations of RCU keep but a single global
        queue for RCU callbacks, and thus do not suffer from this
        problem. However, when the -rt RCU eventually does have per-CPU
        callback queues, things will have to change. One simple change
        is to add an rcu_read_lock() before line 8 of rcu_barrier()
        and an rcu_read_unlock() after line 8 of this same function. If
        you can think of a better change, please let me know!