Please note that the "What is RCU?" LWN series is an excellent place
to start learning about RCU:

1.	What is RCU, Fundamentally?  http://lwn.net/Articles/262464/
2.	What is RCU? Part 2: Usage   http://lwn.net/Articles/263130/
3.	RCU part 3: the RCU API      http://lwn.net/Articles/264090/
4.	The RCU API, 2010 Edition    http://lwn.net/Articles/418853/
	2010 Big API Table           http://lwn.net/Articles/419086/
5.	The RCU API, 2014 Edition    http://lwn.net/Articles/609904/
	2014 Big API Table           http://lwn.net/Articles/609973/


What is RCU?

RCU is a synchronization mechanism that was added to the Linux kernel
during the 2.5 development effort and that is optimized for read-mostly
situations.  Although RCU is actually quite simple once you understand it,
getting there can sometimes be a challenge.  Part of the problem is that
most of the past descriptions of RCU have been written with the mistaken
assumption that there is "one true way" to describe RCU.  Instead,
the experience has been that different people must take different paths
to arrive at an understanding of RCU.  This document provides several
different paths, as follows:

1.	RCU OVERVIEW
2.	WHAT IS RCU'S CORE API?
3.	WHAT ARE SOME EXAMPLE USES OF CORE RCU API?
4.	WHAT IF MY UPDATING THREAD CANNOT BLOCK?
5.	WHAT ARE SOME SIMPLE IMPLEMENTATIONS OF RCU?
6.	ANALOGY WITH READER-WRITER LOCKING
7.	FULL LIST OF RCU APIs
8.	ANSWERS TO QUICK QUIZZES

People who prefer starting with a conceptual overview should focus on
Section 1, though most readers will profit by reading this section at
some point.  People who prefer to start with an API that they can then
experiment with should focus on Section 2.  People who prefer to start
with example uses should focus on Sections 3 and 4.  People who need to
understand the RCU implementation should focus on Section 5, then dive
into the kernel source code.  People who reason best by analogy should
focus on Section 6.  Section 7 serves as an index to the docbook API
documentation, and Section 8 is the traditional answer key.

So, start with the section that makes the most sense to you and your
preferred method of learning.  If you need to know everything about
everything, feel free to read the whole thing -- but if you are really
that type of person, you have perused the source code and will therefore
never need this document anyway.  ;-)


1.  RCU OVERVIEW

The basic idea behind RCU is to split updates into "removal" and
"reclamation" phases.  The removal phase removes references to data items
within a data structure (possibly by replacing them with references to
new versions of these data items), and can run concurrently with readers.
The reason that it is safe to run the removal phase concurrently with
readers is that the semantics of modern CPUs guarantee that readers will
see either the old or the new version of the data structure rather than
a partially updated reference.  The reclamation phase does the work of
reclaiming (e.g., freeing) the data items removed from the data structure
during the removal phase.  Because reclaiming data items can disrupt any
readers concurrently referencing those data items, the reclamation phase
must not start until readers no longer hold references to those data
items.

Splitting the update into removal and reclamation phases permits the
updater to perform the removal phase immediately, and to defer the
reclamation phase until all readers active during the removal phase have
completed, either by blocking until they finish or by registering a
callback that is invoked after they finish.  Only readers that are active
during the removal phase need be considered, because any reader starting
after the removal phase will be unable to gain a reference to the removed
data items, and therefore cannot be disrupted by the reclamation phase.

So the typical RCU update sequence goes something like the following
(a code sketch appears after the list):

a.	Remove pointers to a data structure, so that subsequent
	readers cannot gain a reference to it.

b.	Wait for all previous readers to complete their RCU read-side
	critical sections.

c.	At this point, there cannot be any readers who hold references
	to the data structure, so it now may safely be reclaimed
	(e.g., kfree()d).

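In Linux-kernel terms, this sequence might look as follows.  This is
only a minimal sketch: the global pointer gp, the lock my_lock, and the
choice of a single global pointer are all hypothetical, and the
primitives used here are described in Sections 2 and 3.

	spin_lock(&my_lock);
	p = rcu_dereference_protected(gp, lockdep_is_held(&my_lock));
	rcu_assign_pointer(gp, NULL);	/* Step (a): unpublish. */
	spin_unlock(&my_lock);
	synchronize_rcu();	/* Step (b): wait for pre-existing readers. */
	kfree(p);		/* Step (c): reclaim. */
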
Step (b) above is the key idea underlying RCU's deferred destruction.
The ability to wait until all readers are done allows RCU readers to
use much lighter-weight synchronization, in some cases, absolutely no
synchronization at all.  In contrast, in more conventional lock-based
schemes, readers must use heavy-weight synchronization in order to
prevent an updater from deleting the data structure out from under them.
This is because lock-based updaters typically update data items in place,
and must therefore exclude readers.  In contrast, RCU-based updaters
typically take advantage of the fact that writes to single aligned
pointers are atomic on modern CPUs, allowing atomic insertion, removal,
and replacement of data items in a linked structure without disrupting
readers.  Concurrent RCU readers can then continue accessing the old
versions, and can dispense with the atomic operations, memory barriers,
and communications cache misses that are so expensive on present-day
SMP computer systems, even in the absence of lock contention.

In the three-step procedure shown above, the updater is performing both
the removal and the reclamation step, but it is often helpful for an
entirely different thread to do the reclamation, as is in fact the case
in the Linux kernel's directory-entry cache (dcache).  Even if the same
thread performs both the update step (step (a) above) and the reclamation
step (step (c) above), it is often helpful to think of them separately.
For example, RCU readers and updaters need not communicate at all,
but RCU provides implicit low-overhead communication between readers
and reclaimers, namely, in step (b) above.

So how the heck can a reclaimer tell when a reader is done, given
that readers are not doing any sort of synchronization operations???
Read on to learn about how RCU's API makes this easy.


2.  WHAT IS RCU'S CORE API?

The core RCU API is quite small:

a.	rcu_read_lock()
b.	rcu_read_unlock()
c.	synchronize_rcu() / call_rcu()
d.	rcu_assign_pointer()
e.	rcu_dereference()

There are many other members of the RCU API, but the rest can be
expressed in terms of these five, though most implementations instead
express synchronize_rcu() in terms of the call_rcu() callback API.

The five core RCU APIs are described below; the others will be enumerated
later.  See the kernel docbook documentation for more info, or look
directly at the function header comments.

rcu_read_lock()

	void rcu_read_lock(void);

	Used by a reader to inform the reclaimer that the reader is
	entering an RCU read-side critical section.  It is illegal
	to block while in an RCU read-side critical section, though
	kernels built with CONFIG_PREEMPT_RCU can preempt RCU
	read-side critical sections.  Any RCU-protected data structure
	accessed during an RCU read-side critical section is guaranteed to
	remain unreclaimed for the full duration of that critical section.
	Reference counts may be used in conjunction with RCU to maintain
	longer-term references to data structures.

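	For example, a reader might acquire such a longer-term reference
	as follows.  This is a minimal sketch: the global pointer gbl_foo
	and the atomic_t refcnt field are hypothetical, and the release
	side is omitted.

		rcu_read_lock();
		p = rcu_dereference(gbl_foo);
		if (p != NULL && !atomic_inc_not_zero(&p->refcnt))
			p = NULL;	/* Being freed; act as if not found. */
		rcu_read_unlock();

	A non-NULL p is then pinned by the reference count rather than
	by RCU, and so may safely be used after rcu_read_unlock().
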
rcu_read_unlock()

	void rcu_read_unlock(void);

	Used by a reader to inform the reclaimer that the reader is
	exiting an RCU read-side critical section.  Note that RCU
	read-side critical sections may be nested and/or overlapping.

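	For example, the following nesting is legal, and the enclosing
	read-side critical section does not end until the outermost
	rcu_read_unlock():

		rcu_read_lock();	/* Critical section begins. */
		rcu_read_lock();	/* Nesting is legal. */
		rcu_read_unlock();	/* Does -not- end the section... */
		rcu_read_unlock();	/* ...but this does. */
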
synchronize_rcu()

	void synchronize_rcu(void);

	Marks the end of updater code and the beginning of reclaimer
	code.  It does this by blocking until all pre-existing RCU
	read-side critical sections on all CPUs have completed.
	Note that synchronize_rcu() will -not- necessarily wait for
	any subsequent RCU read-side critical sections to complete.
	For example, consider the following sequence of events:

	         CPU 0                  CPU 1                 CPU 2
	     -----------------    -------------------------    ---------------
	 1.  rcu_read_lock()
	 2.                       enters synchronize_rcu()
	 3.                                                    rcu_read_lock()
	 4.  rcu_read_unlock()
	 5.                       exits synchronize_rcu()
	 6.                                                    rcu_read_unlock()

	To reiterate, synchronize_rcu() waits only for ongoing RCU
	read-side critical sections to complete, not necessarily for
	any that begin after synchronize_rcu() is invoked.

	Of course, synchronize_rcu() does not necessarily return
	-immediately- after the last pre-existing RCU read-side critical
	section completes.  For one thing, there might well be scheduling
	delays.  For another thing, many RCU implementations process
	requests in batches in order to improve efficiencies, which can
	further delay synchronize_rcu().

	Since synchronize_rcu() is the API that must figure out when
	readers are done, its implementation is key to RCU.  For RCU
	to be useful in all but the most read-intensive situations,
	synchronize_rcu()'s overhead must also be quite small.

	The call_rcu() API is a callback form of synchronize_rcu(),
	and is described in more detail in a later section.  Instead of
	blocking, it registers a function and argument which are invoked
	after all ongoing RCU read-side critical sections have completed.
	This callback variant is particularly useful in situations where
	it is illegal to block or where update-side performance is
	critically important.

	However, the call_rcu() API should not be used lightly, as use
	of the synchronize_rcu() API generally results in simpler code.
	In addition, the synchronize_rcu() API has the nice property
	of automatically limiting update rate should grace periods
	be delayed.  This property results in system resilience in the
	face of denial-of-service attacks.  Code using call_rcu() should
	limit update rate in order to gain this same sort of resilience.
	See checklist.txt for some approaches to limiting the update rate.

rcu_assign_pointer()

	typeof(p) rcu_assign_pointer(p, typeof(p) v);

	Yes, rcu_assign_pointer() -is- implemented as a macro, though it
	would be cool to be able to declare a function in this manner.
	(Compiler experts will no doubt disagree.)

	The updater uses this function to assign a new value to an
	RCU-protected pointer, in order to safely communicate the change
	in value from the updater to the reader.  This function returns
	the new value, and also executes any memory-barrier instructions
	required for a given CPU architecture.

	Perhaps just as important, it serves to document (1) which
	pointers are protected by RCU and (2) the point at which a
	given structure becomes accessible to other CPUs.  That said,
	rcu_assign_pointer() is most frequently used indirectly, via
	the _rcu list-manipulation primitives such as list_add_rcu().

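	For example, an updater might allocate, initialize, and only
	then publish a new structure.  This is a minimal sketch: gbl_foo
	and its field a are hypothetical, and update-side locking is
	omitted.

		struct foo *p;

		p = kmalloc(sizeof(*p), GFP_KERNEL);
		if (p == NULL)
			return -ENOMEM;
		p->a = 42;	/* Initialize -before- publication. */
		rcu_assign_pointer(gbl_foo, p);

	After the rcu_assign_pointer(), readers can see only the fully
	initialized structure.  The _rcu list-manipulation primitives
	carry out this same assignment internally, which is why updaters
	only rarely invoke rcu_assign_pointer() directly.
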
rcu_dereference()

	typeof(p) rcu_dereference(p);

	Like rcu_assign_pointer(), rcu_dereference() must be implemented
	as a macro.

	The reader uses rcu_dereference() to fetch an RCU-protected
	pointer, which returns a value that may then be safely
	dereferenced.  Note that rcu_dereference() does not actually
	dereference the pointer; instead, it protects the pointer for
	later dereferencing.  It also executes any needed memory-barrier
	instructions for a given CPU architecture.  Currently, only Alpha
	needs memory barriers within rcu_dereference() -- on other CPUs,
	it compiles to nothing, not even a compiler directive.

	Common coding practice uses rcu_dereference() to copy an
	RCU-protected pointer to a local variable, then dereferences
	this local variable, for example as follows:

		p = rcu_dereference(head.next);
		return p->data;

	However, in this case, one could just as easily combine these
	into one statement:

		return rcu_dereference(head.next)->data;

	If you are going to be fetching multiple fields from the
	RCU-protected structure, using the local variable is of
	course preferred.  Repeated rcu_dereference() calls look
	ugly, do not guarantee that the same pointer will be returned
	if an update happened while in the critical section, and incur
	unnecessary overhead on Alpha CPUs.

	Note that the value returned by rcu_dereference() is valid
	only within the enclosing RCU read-side critical section.
	For example, the following is -not- legal:

		rcu_read_lock();
		p = rcu_dereference(head.next);
		rcu_read_unlock();
		x = p->address;	/* BUG!!! */
		rcu_read_lock();
		y = p->data;	/* BUG!!! */
		rcu_read_unlock();

	Holding a reference from one RCU read-side critical section
	to another is just as illegal as holding a reference from
	one lock-based critical section to another!  Similarly,
	using a reference outside of the critical section in which
	it was acquired is just as illegal as doing so with normal
	locking.

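	One legal way to write the above is to re-fetch the pointer in
	the second critical section, for example as follows:

		rcu_read_lock();
		p = rcu_dereference(head.next);
		x = p->address;
		rcu_read_unlock();
		rcu_read_lock();
		p = rcu_dereference(head.next);	/* Must re-fetch. */
		y = p->data;
		rcu_read_unlock();

	Note that x and y might then come from two different structures,
	given that an update might have replaced the structure in the
	meantime.  If x and y must come from the same structure, both
	fields must be fetched within a single critical section, or a
	longer-term reference (such as a reference count) must be taken.
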
	As with rcu_assign_pointer(), an important function of
	rcu_dereference() is to document which pointers are protected by
	RCU, in particular, flagging a pointer that is subject to changing
	at any time, including immediately after the rcu_dereference().
	And, again like rcu_assign_pointer(), rcu_dereference() is
	typically used indirectly, via the _rcu list-manipulation
	primitives, such as list_for_each_entry_rcu().

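	For example, a reader might search an RCU-protected list as
	follows.  This is a minimal sketch: struct foo, foo_list, and
	the key and data fields are all hypothetical.

		struct foo {
			struct list_head list;
			int key;
			int data;
		};
		LIST_HEAD(foo_list);

		int foo_find(int key, int *result)
		{
			struct foo *p;

			rcu_read_lock();
			list_for_each_entry_rcu(p, &foo_list, list) {
				if (p->key == key) {
					*result = p->data; /* Copy out first. */
					rcu_read_unlock();
					return 1;
				}
			}
			rcu_read_unlock();
			return 0;
		}
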
The following diagram shows how each API communicates among the
reader, updater, and reclaimer.


    rcu_assign_pointer()
                            +--------+
    +---------------------->| reader |---------+
    |                       +--------+         |
    |                           |              |
    |                           |              | Protect:
    |                           |              | rcu_read_lock()
    |                           |              | rcu_read_unlock()
    |        rcu_dereference()  |              |
    +---------+                 |              |
    | updater |<----------------+              |
    +---------+                                V
    |                                    +-----------+
    +----------------------------------->| reclaimer |
                                         +-----------+
                                           Defer:
                                           synchronize_rcu() & call_rcu()


The RCU infrastructure observes the time sequence of rcu_read_lock(),
rcu_read_unlock(), synchronize_rcu(), and call_rcu() invocations in
order to determine when (1) synchronize_rcu() invocations may return
to their callers and (2) call_rcu() callbacks may be invoked.  Efficient
implementations of the RCU infrastructure make heavy use of batching in
order to amortize their overhead over many uses of the corresponding APIs.

There are no fewer than three RCU mechanisms in the Linux kernel; the
diagram above shows the first one, which is by far the most commonly used.
The rcu_dereference() and rcu_assign_pointer() primitives are used for
all three mechanisms, but different defer and protect primitives are
used as follows:

	Defer			Protect

a.	synchronize_rcu()	rcu_read_lock() / rcu_read_unlock()
	call_rcu()		rcu_dereference()

b.	synchronize_rcu_bh()	rcu_read_lock_bh() / rcu_read_unlock_bh()
	call_rcu_bh()		rcu_dereference_bh()

c.	synchronize_sched()	rcu_read_lock_sched() / rcu_read_unlock_sched()
	call_rcu_sched()	preempt_disable() / preempt_enable()
				local_irq_save() / local_irq_restore()
				hardirq enter / hardirq exit
				NMI enter / NMI exit
				rcu_dereference_sched()

These three mechanisms are used as follows:

a.	RCU applied to normal data structures.

b.	RCU applied to networking data structures that may be subjected
	to remote denial-of-service attacks.

c.	RCU applied to scheduler and interrupt/NMI-handler tasks.

Again, most uses will be of (a).  The (b) and (c) cases are important
for specialized uses, but are relatively uncommon.

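For example, a reader in case (b) might use the RCU-bh variant as
follows.  This is a minimal sketch: gbl_stats and its pkts field are
hypothetical.  The corresponding updater would then use call_rcu_bh()
or synchronize_rcu_bh() to defer reclamation.

	rcu_read_lock_bh();
	p = rcu_dereference_bh(gbl_stats);
	if (p != NULL)
		pkts = p->pkts;	/* Read under RCU-bh protection. */
	rcu_read_unlock_bh();
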

3.  WHAT ARE SOME EXAMPLE USES OF CORE RCU API?

This section shows a simple use of the core RCU API to protect a
global pointer to a dynamically allocated structure.  More-typical
uses of RCU may be found in listRCU.txt, arrayRCU.txt, and NMI-RCU.txt.

	struct foo {
		int a;
		char b;
		long c;
	};
	DEFINE_SPINLOCK(foo_mutex);

	struct foo __rcu *gbl_foo;

	/*
	 * Create a new struct foo that is the same as the one currently
	 * pointed to by gbl_foo, except that field "a" is replaced
	 * with "new_a".  Points gbl_foo to the new structure, and
	 * frees up the old structure after a grace period.
	 *
	 * Uses rcu_assign_pointer() to ensure that concurrent readers
	 * see the initialized version of the new structure.
	 *
	 * Uses synchronize_rcu() to ensure that any readers that might
	 * have references to the old structure complete before freeing
	 * the old structure.
	 */
	void foo_update_a(int new_a)
	{
		struct foo *new_fp;
		struct foo *old_fp;

		new_fp = kmalloc(sizeof(*new_fp), GFP_KERNEL);
		spin_lock(&foo_mutex);
		old_fp = rcu_dereference_protected(gbl_foo, lockdep_is_held(&foo_mutex));
		*new_fp = *old_fp;
		new_fp->a = new_a;
		rcu_assign_pointer(gbl_foo, new_fp);
		spin_unlock(&foo_mutex);
		synchronize_rcu();
		kfree(old_fp);
	}

	/*
	 * Return the value of field "a" of the current gbl_foo
	 * structure.  Use rcu_read_lock() and rcu_read_unlock()
	 * to ensure that the structure does not get deleted out
	 * from under us, and use rcu_dereference() to ensure that
	 * we see the initialized version of the structure (important
	 * for DEC Alpha and for people reading the code).
	 */
	int foo_get_a(void)
	{
		int retval;

		rcu_read_lock();
		retval = rcu_dereference(gbl_foo)->a;
		rcu_read_unlock();
		return retval;
	}

420
421o Use rcu_read_lock() and rcu_read_unlock() to guard RCU
422 read-side critical sections.
423
424o Within an RCU read-side critical section, use rcu_dereference()
425 to dereference RCU-protected pointers.
426
427o Use some solid scheme (such as locks or semaphores) to
428 keep concurrent updates from interfering with each other.
429
430o Use rcu_assign_pointer() to update an RCU-protected pointer.
431 This primitive protects concurrent readers from the updater,
432 -not- concurrent updates from each other! You therefore still
433 need to use locking (or something similar) to keep concurrent
434 rcu_assign_pointer() primitives from interfering with each other.
435
436o Use synchronize_rcu() -after- removing a data element from an
437 RCU-protected data structure, but -before- reclaiming/freeing
438 the data element, in order to wait for the completion of all
439 RCU read-side critical sections that might be referencing that
440 data item.
441
442See checklist.txt for additional rules to follow when using RCU.
Paul E. McKenneyd19720a2006-02-01 03:06:42 -0800443And again, more-typical uses of RCU may be found in listRCU.txt,
444arrayRCU.txt, and NMI-RCU.txt.


4.  WHAT IF MY UPDATING THREAD CANNOT BLOCK?

In the example above, foo_update_a() blocks until a grace period elapses.
This is quite simple, but in some cases one cannot afford to wait so
long -- there might be other high-priority work to be done.

In such cases, one uses call_rcu() rather than synchronize_rcu().
The call_rcu() API is as follows:

	void call_rcu(struct rcu_head *head,
		      void (*func)(struct rcu_head *head));

This function invokes func(head) after a grace period has elapsed.
This invocation might happen from either softirq or process context,
so the function is not permitted to block.  The foo struct needs to
have an rcu_head structure added, perhaps as follows:

	struct foo {
		int a;
		char b;
		long c;
		struct rcu_head rcu;
	};

The foo_update_a() function might then be written as follows:

	/*
	 * Create a new struct foo that is the same as the one currently
	 * pointed to by gbl_foo, except that field "a" is replaced
	 * with "new_a".  Points gbl_foo to the new structure, and
	 * frees up the old structure after a grace period.
	 *
	 * Uses rcu_assign_pointer() to ensure that concurrent readers
	 * see the initialized version of the new structure.
	 *
	 * Uses call_rcu() to ensure that any readers that might have
	 * references to the old structure complete before freeing the
	 * old structure.
	 */
	void foo_update_a(int new_a)
	{
		struct foo *new_fp;
		struct foo *old_fp;

		new_fp = kmalloc(sizeof(*new_fp), GFP_KERNEL);
		spin_lock(&foo_mutex);
		old_fp = rcu_dereference_protected(gbl_foo, lockdep_is_held(&foo_mutex));
		*new_fp = *old_fp;
		new_fp->a = new_a;
		rcu_assign_pointer(gbl_foo, new_fp);
		spin_unlock(&foo_mutex);
		call_rcu(&old_fp->rcu, foo_reclaim);
	}

The foo_reclaim() function might appear as follows:

	void foo_reclaim(struct rcu_head *rp)
	{
		struct foo *fp = container_of(rp, struct foo, rcu);

		foo_cleanup(fp->a);

		kfree(fp);
	}

The container_of() primitive is a macro that, given a pointer into a
struct, the type of the struct, and the pointed-to field within the
struct, returns a pointer to the beginning of the struct.

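In other words, foo_reclaim()'s use of container_of() works out to
roughly the following pointer arithmetic (the real macro also performs
type checking):

	struct foo *fp = (struct foo *)((char *)rp - offsetof(struct foo, rcu));
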
The use of call_rcu() permits the caller of foo_update_a() to
immediately regain control, without needing to worry further about the
old version of the newly updated element.  It also clearly shows the
RCU distinction between updater, namely foo_update_a(), and reclaimer,
namely foo_reclaim().

The summary of advice is the same as for the previous section, except
that we are now using call_rcu() rather than synchronize_rcu():

o	Use call_rcu() -after- removing a data element from an
	RCU-protected data structure in order to register a callback
	function that will be invoked after the completion of all RCU
	read-side critical sections that might be referencing that
	data item.

If the callback for call_rcu() is not doing anything more than calling
kfree() on the structure, you can use kfree_rcu() instead of call_rcu()
to avoid having to write your own callback:

	kfree_rcu(old_fp, rcu);

Again, see checklist.txt for additional rules governing the use of RCU.


5.  WHAT ARE SOME SIMPLE IMPLEMENTATIONS OF RCU?

One of the nice things about RCU is that it has extremely simple "toy"
implementations that are a good first step towards understanding the
production-quality implementations in the Linux kernel.  This section
presents two such "toy" implementations of RCU, one that is implemented
in terms of familiar locking primitives, and another that more closely
resembles "classic" RCU.  Both are way too simple for real-world use,
lacking both functionality and performance.  However, they are useful
in getting a feel for how RCU works.  See kernel/rcupdate.c for a
production-quality implementation, and see:

	http://www.rdrop.com/users/paulmck/RCU

for papers describing the Linux kernel RCU implementation.  The OLS'01
and OLS'02 papers are a good introduction, and the dissertation provides
more details on the current implementation as of early 2004.


5A.  "TOY" IMPLEMENTATION #1: LOCKING

This section presents a "toy" RCU implementation that is based on
familiar locking primitives.  Its overhead makes it a non-starter for
real-life use, as does its lack of scalability.  It is also unsuitable
for realtime use, since it allows scheduling latency to "bleed" from
one read-side critical section to another.  It also assumes recursive
reader-writer locks:  If you try this with non-recursive locks, and
you allow nested rcu_read_lock() calls, you can deadlock.

However, it is probably the easiest implementation to relate to, so is
a good starting point.

It is extremely simple:

	static DEFINE_RWLOCK(rcu_gp_mutex);

	void rcu_read_lock(void)
	{
		read_lock(&rcu_gp_mutex);
	}

	void rcu_read_unlock(void)
	{
		read_unlock(&rcu_gp_mutex);
	}

	void synchronize_rcu(void)
	{
		write_lock(&rcu_gp_mutex);
		write_unlock(&rcu_gp_mutex);
	}

[You can ignore rcu_assign_pointer() and rcu_dereference() without missing
much.  But here are simplified versions anyway.  And whatever you do,
don't forget about them when submitting patches making use of RCU!]

	#define rcu_assign_pointer(p, v) \
	({ \
		smp_store_release(&(p), (v)); \
	})

	#define rcu_dereference(p) \
	({ \
		typeof(p) _________p1 = p; \
		smp_read_barrier_depends(); \
		(_________p1); \
	})


The rcu_read_lock() and rcu_read_unlock() primitives read-acquire
and release a global reader-writer lock.  The synchronize_rcu()
primitive write-acquires this same lock, then immediately releases
it.  This means that once synchronize_rcu() exits, all RCU read-side
critical sections that were in progress before synchronize_rcu() was
called are guaranteed to have completed -- there is no way that
synchronize_rcu() would have been able to write-acquire the lock
otherwise.

It is possible to nest rcu_read_lock(), since reader-writer locks may
be recursively acquired.  Note also that rcu_read_lock() is immune
from deadlock (an important property of RCU).  The reason for this is
that the only thing that can block rcu_read_lock() is a synchronize_rcu().
But synchronize_rcu() does not acquire any locks while holding rcu_gp_mutex,
so there can be no deadlock cycle.

Quick Quiz #1:	Why is this argument naive?  How could a deadlock
		occur when using this algorithm in a real-world Linux
		kernel?  How could this deadlock be avoided?


5B.  "TOY" EXAMPLE #2: CLASSIC RCU

This section presents a "toy" RCU implementation that is based on
"classic RCU".  It is also short on performance (but only for updates) and
on features such as hotplug CPU and the ability to run in CONFIG_PREEMPT
kernels.  The definitions of rcu_dereference() and rcu_assign_pointer()
are the same as those shown in the preceding section, so they are omitted.

	void rcu_read_lock(void) { }

	void rcu_read_unlock(void) { }

	void synchronize_rcu(void)
	{
		int cpu;

		for_each_possible_cpu(cpu)
			run_on(cpu);
	}

Note that rcu_read_lock() and rcu_read_unlock() do absolutely nothing.
This is the great strength of classic RCU in a non-preemptive kernel:
read-side overhead is precisely zero, at least on non-Alpha CPUs.
And there is absolutely no way that rcu_read_lock() can possibly
participate in a deadlock cycle!

The implementation of synchronize_rcu() simply schedules itself on each
CPU in turn.  The run_on() primitive can be implemented straightforwardly
in terms of the sched_setaffinity() primitive.  Of course, a somewhat less
"toy" implementation would restore the affinity upon completion rather
than just leaving all tasks running on the last CPU, but when I said
"toy", I meant -toy-!

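For example, run_on() might be sketched as follows, assuming the
in-kernel sched_setaffinity() and ignoring the affinity-restoration
issue noted above:

	static void run_on(int cpu)
	{
		cpumask_var_t mask;

		if (!alloc_cpumask_var(&mask, GFP_KERNEL))
			return;	/* Real code would handle this error. */
		cpumask_clear(mask);
		cpumask_set_cpu(cpu, mask);
		sched_setaffinity(current->pid, mask);	/* Migrate to cpu. */
		free_cpumask_var(mask);
	}
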
So how the heck is this supposed to work???

Remember that it is illegal to block while in an RCU read-side critical
section.  Therefore, if a given CPU executes a context switch, we know
that it must have completed all preceding RCU read-side critical sections.
Once -all- CPUs have executed a context switch, then -all- preceding
RCU read-side critical sections will have completed.

So, suppose that we remove a data item from its structure and then invoke
synchronize_rcu().  Once synchronize_rcu() returns, we are guaranteed
that there are no RCU read-side critical sections holding a reference
to that data item, so we can safely reclaim it.

Quick Quiz #2:	Give an example where Classic RCU's read-side
		overhead is -negative-.

Quick Quiz #3:	If it is illegal to block in an RCU read-side
		critical section, what the heck do you do in
		PREEMPT_RT, where normal spinlocks can block???


6.  ANALOGY WITH READER-WRITER LOCKING

Although RCU can be used in many different ways, a very common use of
RCU is analogous to reader-writer locking.  The following unified
diff shows how closely related RCU and reader-writer locking can be.

	@@ -5,5 +5,5 @@ struct el {
	 	int data;
	 	/* Other data fields */
	 };
	-rwlock_t listmutex;
	+spinlock_t listmutex;
	 struct el head;

	@@ -13,15 +14,15 @@
		struct list_head *lp;
		struct el *p;

	-	read_lock(&listmutex);
	-	list_for_each_entry(p, head, lp) {
	+	rcu_read_lock();
	+	list_for_each_entry_rcu(p, head, lp) {
			if (p->key == key) {
				*result = p->data;
	-			read_unlock(&listmutex);
	+			rcu_read_unlock();
				return 1;
			}
		}
	-	read_unlock(&listmutex);
	+	rcu_read_unlock();
		return 0;
	}

	@@ -29,15 +30,16 @@
	{
		struct el *p;

	-	write_lock(&listmutex);
	+	spin_lock(&listmutex);
		list_for_each_entry(p, head, lp) {
			if (p->key == key) {
	-			list_del(&p->list);
	-			write_unlock(&listmutex);
	+			list_del_rcu(&p->list);
	+			spin_unlock(&listmutex);
	+			synchronize_rcu();
				kfree(p);
				return 1;
			}
		}
	-	write_unlock(&listmutex);
	+	spin_unlock(&listmutex);
		return 0;
	}

Or, for those who prefer a side-by-side listing:

 1 struct el {                          1 struct el {
 2   struct list_head list;             2   struct list_head list;
 3   long key;                          3   long key;
 4   spinlock_t mutex;                  4   spinlock_t mutex;
 5   int data;                          5   int data;
 6   /* Other data fields */            6   /* Other data fields */
 7 };                                   7 };
 8 rwlock_t listmutex;                  8 spinlock_t listmutex;
 9 struct el head;                      9 struct el head;

 1 int search(long key, int *result)    1 int search(long key, int *result)
 2 {                                    2 {
 3   struct list_head *lp;              3   struct list_head *lp;
 4   struct el *p;                      4   struct el *p;
 5                                      5
 6   read_lock(&listmutex);             6   rcu_read_lock();
 7   list_for_each_entry(p, head, lp) { 7   list_for_each_entry_rcu(p, head, lp) {
 8     if (p->key == key) {             8     if (p->key == key) {
 9       *result = p->data;             9       *result = p->data;
10       read_unlock(&listmutex);      10       rcu_read_unlock();
11       return 1;                     11       return 1;
12     }                               12     }
13   }                                 13   }
14   read_unlock(&listmutex);          14   rcu_read_unlock();
15   return 0;                         15   return 0;
16 }                                   16 }

 1 int delete(long key)                 1 int delete(long key)
 2 {                                    2 {
 3   struct el *p;                      3   struct el *p;
 4                                      4
 5   write_lock(&listmutex);            5   spin_lock(&listmutex);
 6   list_for_each_entry(p, head, lp) { 6   list_for_each_entry(p, head, lp) {
 7     if (p->key == key) {             7     if (p->key == key) {
 8       list_del(&p->list);            8       list_del_rcu(&p->list);
 9       write_unlock(&listmutex);      9       spin_unlock(&listmutex);
                                       10       synchronize_rcu();
10       kfree(p);                     11       kfree(p);
11       return 1;                     12       return 1;
12     }                               13     }
13   }                                 14   }
14   write_unlock(&listmutex);         15   spin_unlock(&listmutex);
15   return 0;                         16   return 0;
16 }                                   17 }

Either way, the differences are quite small.  Read-side locking moves
to rcu_read_lock() and rcu_read_unlock(), update-side locking moves from
a reader-writer lock to a simple spinlock, and a synchronize_rcu()
precedes the kfree().

However, there is one potential catch: the read-side and update-side
critical sections can now run concurrently.  In many cases, this will
not be a problem, but it is necessary to check carefully regardless.
For example, if multiple independent list updates must be seen as
a single atomic update, converting to RCU will require special care.

Also, the presence of synchronize_rcu() means that the RCU version of
delete() can now block.  If this is a problem, there is a callback-based
mechanism that never blocks, namely call_rcu() or kfree_rcu(), that can
be used in place of synchronize_rcu().


7.  FULL LIST OF RCU APIs

The RCU APIs are documented in docbook-format header comments in the
Linux-kernel source code, but it helps to have a full list of the
APIs, since there does not appear to be a way to categorize them
in docbook.  Here is the list, by category.

RCU list traversal:

	list_entry_rcu
	list_first_entry_rcu
	list_next_rcu
	list_for_each_entry_rcu
	list_for_each_entry_continue_rcu
	hlist_first_rcu
	hlist_next_rcu
	hlist_pprev_rcu
	hlist_for_each_entry_rcu
	hlist_for_each_entry_rcu_bh
	hlist_for_each_entry_continue_rcu
	hlist_for_each_entry_continue_rcu_bh
	hlist_nulls_first_rcu
	hlist_nulls_for_each_entry_rcu
	hlist_bl_first_rcu
	hlist_bl_for_each_entry_rcu

RCU pointer/list update:

	rcu_assign_pointer
	list_add_rcu
	list_add_tail_rcu
	list_del_rcu
	list_replace_rcu
	hlist_add_behind_rcu
	hlist_add_before_rcu
	hlist_add_head_rcu
	hlist_del_rcu
	hlist_del_init_rcu
	hlist_replace_rcu
	list_splice_init_rcu
	hlist_nulls_del_init_rcu
	hlist_nulls_del_rcu
	hlist_nulls_add_head_rcu
	hlist_bl_add_head_rcu
	hlist_bl_del_init_rcu
	hlist_bl_del_rcu
	hlist_bl_set_first_rcu

RCU:	Critical sections	Grace period		Barrier

	rcu_read_lock		synchronize_net		rcu_barrier
	rcu_read_unlock		synchronize_rcu
	rcu_dereference		synchronize_rcu_expedited
	rcu_read_lock_held	call_rcu
	rcu_dereference_check	kfree_rcu
	rcu_dereference_protected

bh:	Critical sections	Grace period		Barrier

	rcu_read_lock_bh	call_rcu_bh		rcu_barrier_bh
	rcu_read_unlock_bh	synchronize_rcu_bh
	rcu_dereference_bh	synchronize_rcu_bh_expedited
	rcu_dereference_bh_check
	rcu_dereference_bh_protected
	rcu_read_lock_bh_held

sched:	Critical sections	Grace period		Barrier

	rcu_read_lock_sched	synchronize_sched	rcu_barrier_sched
	rcu_read_unlock_sched	call_rcu_sched
	[preempt_disable]	synchronize_sched_expedited
	[and friends]
	rcu_read_lock_sched_notrace
	rcu_read_unlock_sched_notrace
	rcu_dereference_sched
	rcu_dereference_sched_check
	rcu_dereference_sched_protected
	rcu_read_lock_sched_held


SRCU:	Critical sections	Grace period		Barrier

	srcu_read_lock		synchronize_srcu	srcu_barrier
	srcu_read_unlock	call_srcu
	srcu_dereference	synchronize_srcu_expedited
	srcu_dereference_check
	srcu_read_lock_held

SRCU:	Initialization/cleanup
	DEFINE_SRCU
	DEFINE_STATIC_SRCU
	init_srcu_struct
	cleanup_srcu_struct

All:  lockdep-checked RCU-protected pointer access

	rcu_access_pointer
	rcu_dereference_raw
	RCU_LOCKDEP_WARN
	rcu_sleep_check
	RCU_NONIDLE

See the comment headers in the source code (or the docbook generated
from them) for more information.

However, given that there are no fewer than four families of RCU APIs
in the Linux kernel, how do you choose which one to use?  The following
list can be helpful:

a.	Will readers need to block?  If so, you need SRCU.

b.	What about the -rt patchset?  If readers would need to block
	in a non-rt kernel, you need SRCU.  If readers would block
	in a -rt kernel, but not in a non-rt kernel, SRCU is not
	necessary.  (The -rt patchset turns spinlocks into sleeplocks,
	hence this distinction.)

c.	Do you need to treat NMI handlers, hardirq handlers,
	and code segments with preemption disabled (whether
	via preempt_disable(), local_irq_save(), local_bh_disable(),
	or some other mechanism) as if they were explicit RCU readers?
	If so, RCU-sched is the only choice that will work for you.

d.	Do you need RCU grace periods to complete even in the face
	of softirq monopolization of one or more of the CPUs?  For
	example, is your code subject to network-based denial-of-service
	attacks?  If so, you need RCU-bh.

e.	Is your workload too update-intensive for normal use of
	RCU, but inappropriate for other synchronization mechanisms?
	If so, consider SLAB_TYPESAFE_BY_RCU (which was originally
	named SLAB_DESTROY_BY_RCU).  But please be careful!

f.	Do you need read-side critical sections that are respected
	even though they are in the middle of the idle loop, during
	user-mode execution, or on an offlined CPU?  If so, SRCU is the
	only choice that will work for you.

g.	Otherwise, use RCU.

Of course, this all assumes that you have determined that RCU is in fact
the right tool for your job.


8.  ANSWERS TO QUICK QUIZZES

Quick Quiz #1:	Why is this argument naive?  How could a deadlock
		occur when using this algorithm in a real-world Linux
		kernel?  [Referring to the lock-based "toy" RCU
		algorithm.]

Answer:		Consider the following sequence of events:

		1.	CPU 0 acquires some unrelated lock, call it
			"problematic_lock", disabling irq via
			spin_lock_irqsave().

		2.	CPU 1 enters synchronize_rcu(), write-acquiring
			rcu_gp_mutex.

		3.	CPU 0 enters rcu_read_lock(), but must wait
			because CPU 1 holds rcu_gp_mutex.

		4.	CPU 1 is interrupted, and the irq handler
			attempts to acquire problematic_lock.

		The system is now deadlocked.

		One way to avoid this deadlock is to use an approach like
		that of CONFIG_PREEMPT_RT, where all normal spinlocks
		become blocking locks, and all irq handlers execute in
		the context of special tasks.  In this case, in step 4
		above, the irq handler would block, allowing CPU 1 to
		release rcu_gp_mutex, avoiding the deadlock.

		Even in the absence of deadlock, this RCU implementation
		allows latency to "bleed" from readers to other
		readers through synchronize_rcu().  To see this,
		consider task A in an RCU read-side critical section
		(thus read-holding rcu_gp_mutex), task B blocked
		attempting to write-acquire rcu_gp_mutex, and
		task C blocked in rcu_read_lock() attempting to
		read-acquire rcu_gp_mutex.  Task A's RCU read-side
		latency is holding up task C, albeit indirectly via
		task B.

		Realtime RCU implementations therefore use a counter-based
		approach where tasks in RCU read-side critical sections
		cannot be blocked by tasks executing synchronize_rcu().

Quick Quiz #2:	Give an example where Classic RCU's read-side
		overhead is -negative-.

Answer:		Imagine a single-CPU system with a non-CONFIG_PREEMPT
		kernel where a routing table is used by process-context
		code, but can be updated by irq-context code (for example,
		by an "ICMP REDIRECT" packet).  The usual way of handling
		this would be to have the process-context code disable
		interrupts while searching the routing table.  Use of
		RCU allows such interrupt-disabling to be dispensed with.
		Thus, without RCU, you pay the cost of disabling interrupts,
		and with RCU you don't.

		One can argue that the overhead of RCU in this
		case is negative with respect to the single-CPU
		interrupt-disabling approach.  Others might argue that
		the overhead of RCU is merely zero, and that replacing
		the positive overhead of the interrupt-disabling scheme
		with the zero-overhead RCU scheme does not constitute
		negative overhead.

		In real life, of course, things are more complex.  But
		even the theoretical possibility of negative overhead for
		a synchronization primitive is a bit unexpected.  ;-)

Quick Quiz #3:	If it is illegal to block in an RCU read-side
		critical section, what the heck do you do in
		PREEMPT_RT, where normal spinlocks can block???

Answer:		Just as PREEMPT_RT permits preemption of spinlock
		critical sections, it permits preemption of RCU
		read-side critical sections.  It also permits
		spinlocks blocking while in RCU read-side critical
		sections.

		Why the apparent inconsistency?  Because it is
		possible to use priority boosting to keep the RCU
		grace periods short if need be (for example, if running
		short of memory).  In contrast, if blocking waiting
		for (say) network reception, there is no way to know
		what should be boosted.  Especially given that the
		process we need to boost might well be a human being
		who just went out for a pizza or something.  And although
		a computer-operated cattle prod might arouse serious
		interest, it might also provoke serious objections.
		Besides, how does the computer know what pizza parlor
		the human being went to???


ACKNOWLEDGEMENTS

My thanks to the people who helped make this human-readable, including
Jon Walpole, Josh Triplett, Serge Hallyn, Suzanne Wood, and Alan Stern.


For more information, see http://www.rdrop.com/users/paulmck/RCU.