<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
        "http://www.w3.org/TR/html4/loose.dtd">
        <html>
        <head><title>A Tour Through TREE_RCU's Grace-Period Memory Ordering</title>
        <meta HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=iso-8859-1">
        </head><body>

           <p>August 8, 2017</p>
           <p>This article was contributed by Paul E.&nbsp;McKenney</p>

<h3>Introduction</h3>

<p>This document gives a rough visual overview of how Tree RCU's
grace-period memory ordering guarantee is provided.

<ol>
<li>	<a href="#What Is Tree RCU's Grace Period Memory Ordering Guarantee?">
	What Is Tree RCU's Grace Period Memory Ordering Guarantee?</a>
<li>	<a href="#Tree RCU Grace Period Memory Ordering Building Blocks">
	Tree RCU Grace Period Memory Ordering Building Blocks</a>
<li>	<a href="#Tree RCU Grace Period Memory Ordering Components">
	Tree RCU Grace Period Memory Ordering Components</a>
<li>	<a href="#Putting It All Together">Putting It All Together</a>
</ol>

<h3><a name="What Is Tree RCU's Grace Period Memory Ordering Guarantee?">
What Is Tree RCU's Grace Period Memory Ordering Guarantee?</a></h3>

<p>RCU grace periods provide extremely strong memory-ordering guarantees
for non-idle non-offline code.
Any code that happens after the end of a given RCU grace period is guaranteed
to see the effects of all accesses prior to the beginning of that grace
period that are within RCU read-side critical sections.
Similarly, any code that happens before the beginning of a given RCU grace
period is guaranteed to see the effects of all accesses following the end
of that grace period that are within RCU read-side critical sections.

<p>This guarantee is particularly pervasive for <tt>synchronize_sched()</tt>,
for which RCU-sched read-side critical sections include any region
of code for which preemption is disabled.
Given that each individual machine instruction can be thought of as
an extremely small region of preemption-disabled code, one can think of
<tt>synchronize_sched()</tt> as <tt>smp_mb()</tt> on steroids.

<p>RCU updaters use this guarantee by splitting their updates into
two phases, one of which is executed before the grace period and
the other of which is executed after the grace period.
In the most common use case, phase one removes an element from
a linked RCU-protected data structure, and phase two frees that element.
For this to work, any readers that have witnessed state prior to the
phase-one update (in the common case, removal) must not witness state
following the phase-two update (in the common case, freeing).
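<p>For example, here is a minimal sketch of the common remove-then-free
pattern; <tt>struct foo</tt>, its enclosing list, and the pointer
<tt>fp</tt> are hypothetical, but the RCU API members are the real ones:

<pre>
struct foo {
  struct list_head list;
  int data;
};

/* Phase one: remove, so that no new reader can find the element. */
list_del_rcu(&amp;fp-&gt;list);

/* Wait for a grace period: all pre-existing readers complete. */
synchronize_rcu();

/* Phase two: free, now that no reader can hold a reference. */
kfree(fp);
</pre>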

<p>The RCU implementation provides this guarantee using a network
of lock-based critical sections, memory barriers, and per-CPU
processing, as is described in the following sections.

<h3><a name="Tree RCU Grace Period Memory Ordering Building Blocks">
Tree RCU Grace Period Memory Ordering Building Blocks</a></h3>

<p>The workhorse for RCU's grace-period memory ordering is the
critical section for the <tt>rcu_node</tt> structure's
<tt>-&gt;lock</tt>.
These critical sections use helper functions for lock acquisition, including
<tt>raw_spin_lock_rcu_node()</tt>,
<tt>raw_spin_lock_irq_rcu_node()</tt>, and
<tt>raw_spin_lock_irqsave_rcu_node()</tt>.
Their lock-release counterparts are
<tt>raw_spin_unlock_rcu_node()</tt>,
<tt>raw_spin_unlock_irq_rcu_node()</tt>, and
<tt>raw_spin_unlock_irqrestore_rcu_node()</tt>,
respectively.
For completeness, a
<tt>raw_spin_trylock_rcu_node()</tt>
is also provided.
The key point is that the lock-acquisition functions, including
<tt>raw_spin_trylock_rcu_node()</tt>, all invoke
<tt>smp_mb__after_unlock_lock()</tt> immediately after successful
acquisition of the lock.
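<p>To a first approximation, each lock-acquisition helper is therefore
a sketch along the following lines, though the actual kernel macros
differ in detail:

<pre>
#define raw_spin_lock_rcu_node(rnp) \
do { \
  raw_spin_lock(&amp;(rnp)-&gt;lock); \
  smp_mb__after_unlock_lock(); /* Provide full ordering. */ \
} while (0)
</pre>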

<p>Therefore, for any given <tt>rcu_node</tt> structure, any access
happening before one of the above lock-release functions will be seen
by all CPUs as happening before any access happening after a later
one of the above lock-acquisition functions.
Furthermore, any access happening before one of the
above lock-release functions on any given CPU will be seen by all
CPUs as happening before any access happening after a later one
of the above lock-acquisition functions executing on that same CPU,
even if the lock-release and lock-acquisition functions are operating
on different <tt>rcu_node</tt> structures.
Tree RCU uses these two ordering guarantees to form an ordering
network among all CPUs that were in any way involved in the grace
period, including any CPUs that came online or went offline during
the grace period in question.

<p>The following litmus test exhibits the ordering effects of these
lock-acquisition and lock-release functions:

<pre>
int x, y, z;

void task0(void)
{
  raw_spin_lock_rcu_node(rnp);
  WRITE_ONCE(x, 1);
  r1 = READ_ONCE(y);
  raw_spin_unlock_rcu_node(rnp);
}

void task1(void)
{
  raw_spin_lock_rcu_node(rnp);
  WRITE_ONCE(y, 1);
  r2 = READ_ONCE(z);
  raw_spin_unlock_rcu_node(rnp);
}

void task2(void)
{
  WRITE_ONCE(z, 1);
  smp_mb();
  r3 = READ_ONCE(x);
}

WARN_ON(r1 == 0 &amp;&amp; r2 == 0 &amp;&amp; r3 == 0);
</pre>

<p>The <tt>WARN_ON()</tt> is evaluated at &ldquo;the end of time&rdquo;,
after all changes have propagated throughout the system.
Without the <tt>smp_mb__after_unlock_lock()</tt> provided by the
acquisition functions, this <tt>WARN_ON()</tt> could trigger, for example
on PowerPC.
The <tt>smp_mb__after_unlock_lock()</tt> invocations prevent this
<tt>WARN_ON()</tt> from triggering.

<p>This approach must be extended to include idle CPUs, which need
RCU's grace-period memory ordering guarantee to extend to any
RCU read-side critical sections preceding and following the current
idle sojourn.
This case is handled by calls to the strongly ordered
<tt>atomic_add_return()</tt> read-modify-write atomic operation that
is invoked within <tt>rcu_dynticks_eqs_enter()</tt> at idle-entry
time and within <tt>rcu_dynticks_eqs_exit()</tt> at idle-exit time.
The grace-period kthread invokes <tt>rcu_dynticks_snap()</tt> and
<tt>rcu_dynticks_in_eqs_since()</tt> (both of which invoke
an <tt>atomic_add_return()</tt> of zero) to detect idle CPUs.
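<p>A heavily simplified, illustrative-only sketch of these transitions
appears below; the variable name and exact counter encoding are
assumptions rather than the kernel's actual code (which also makes the
counter per-CPU), but the key point stands: each transition is a fully
ordered value-returning atomic operation:

<pre>
/* Even value: CPU is idle; odd value: CPU is non-idle. */
atomic_t dynticks = ATOMIC_INIT(1);

void example_idle_enter(void)   /* Cf. rcu_dynticks_eqs_enter(). */
{
  int seq = atomic_add_return(1, &amp;dynticks);

  WARN_ON_ONCE(seq &amp; 0x1);      /* Counter must now be even. */
}

void example_idle_exit(void)    /* Cf. rcu_dynticks_eqs_exit(). */
{
  int seq = atomic_add_return(1, &amp;dynticks);

  WARN_ON_ONCE(!(seq &amp; 0x1));   /* Counter must now be odd. */
}
</pre>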

<table>
<tr><th>&nbsp;</th></tr>
<tr><th align="left">Quick Quiz:</th></tr>
<tr><td>
	But what about CPUs that remain offline for the entire
	grace period?
</td></tr>
<tr><th align="left">Answer:</th></tr>
<tr><td bgcolor="#ffffff"><font color="ffffff">
	Such CPUs will be offline at the beginning of the grace period,
	so the grace period won't expect quiescent states from them.
	Races between grace-period start and CPU-hotplug operations
	are mediated by the CPU's leaf <tt>rcu_node</tt> structure's
	<tt>-&gt;lock</tt> as described above.
</font></td></tr>
<tr><td>&nbsp;</td></tr>
</table>

<p>The approach must be extended to handle one final case, that
of waking a task blocked in <tt>synchronize_rcu()</tt>.
This task might be affined to a CPU that is not yet aware that
the grace period has ended, and thus might not yet be subject to
the grace period's memory ordering.
Therefore, there is an <tt>smp_mb()</tt> after the return from
<tt>wait_for_completion()</tt> in the <tt>synchronize_rcu()</tt>
code path.
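<p>In rough sketch form, the tail of that wait path is as follows,
with <tt>done</tt> standing in for the completion actually used by
<tt>wait_rcu_gp()</tt>:

<pre>
wait_for_completion(&amp;done);
smp_mb(); /* Order subsequent accesses after the grace period,
           * even if this CPU does not yet know that it ended. */
</pre>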

<table>
<tr><th>&nbsp;</th></tr>
<tr><th align="left">Quick Quiz:</th></tr>
<tr><td>
	What?  Where???
	I don't see any <tt>smp_mb()</tt> after the return from
	<tt>wait_for_completion()</tt>!!!
</td></tr>
<tr><th align="left">Answer:</th></tr>
<tr><td bgcolor="#ffffff"><font color="ffffff">
	That would be because I spotted the need for that
	<tt>smp_mb()</tt> during the creation of this documentation,
	and it is therefore unlikely to hit mainline before v4.14.
	Kudos to Lance Roy, Will Deacon, Peter Zijlstra, and
	Jonathan Cameron for asking questions that sensitized me
	to the rather elaborate sequence of events that demonstrate
	the need for this memory barrier.
</font></td></tr>
<tr><td>&nbsp;</td></tr>
</table>

<p>Tree RCU's grace-period memory-ordering guarantees rely most
heavily on the <tt>rcu_node</tt> structure's <tt>-&gt;lock</tt>
field, so much so that it is necessary to abbreviate this pattern
in the diagrams in the next section.
For example, consider the <tt>rcu_prepare_for_idle()</tt> function
shown below, which is one of several functions that enforce ordering
of newly arrived RCU callbacks against future grace periods:

<pre>
 1 static void rcu_prepare_for_idle(void)
 2 {
 3   bool needwake;
 4   struct rcu_data *rdp;
 5   struct rcu_dynticks *rdtp = this_cpu_ptr(&amp;rcu_dynticks);
 6   struct rcu_node *rnp;
 7   struct rcu_state *rsp;
 8   int tne;
 9
10   if (IS_ENABLED(CONFIG_RCU_NOCB_CPU_ALL) ||
11       rcu_is_nocb_cpu(smp_processor_id()))
12     return;
13   tne = READ_ONCE(tick_nohz_active);
14   if (tne != rdtp-&gt;tick_nohz_enabled_snap) {
15     if (rcu_cpu_has_callbacks(NULL))
16       invoke_rcu_core();
17     rdtp-&gt;tick_nohz_enabled_snap = tne;
18     return;
19   }
20   if (!tne)
21     return;
22   if (rdtp-&gt;all_lazy &amp;&amp;
23       rdtp-&gt;nonlazy_posted != rdtp-&gt;nonlazy_posted_snap) {
24     rdtp-&gt;all_lazy = false;
25     rdtp-&gt;nonlazy_posted_snap = rdtp-&gt;nonlazy_posted;
26     invoke_rcu_core();
27     return;
28   }
29   if (rdtp-&gt;last_accelerate == jiffies)
30     return;
31   rdtp-&gt;last_accelerate = jiffies;
32   for_each_rcu_flavor(rsp) {
33     rdp = this_cpu_ptr(rsp-&gt;rda);
34     if (rcu_segcblist_pend_cbs(&amp;rdp-&gt;cblist))
35       continue;
36     rnp = rdp-&gt;mynode;
37     raw_spin_lock_rcu_node(rnp);
38     needwake = rcu_accelerate_cbs(rsp, rnp, rdp);
39     raw_spin_unlock_rcu_node(rnp);
40     if (needwake)
41       rcu_gp_kthread_wake(rsp);
42   }
43 }
</pre>

<p>But the only part of <tt>rcu_prepare_for_idle()</tt> that really
matters for this discussion is lines&nbsp;37&ndash;39.
We will therefore abbreviate this function as follows:

</p><p><img src="rcu_node-lock.svg" alt="rcu_node-lock.svg">

<p>The box represents the <tt>rcu_node</tt> structure's <tt>-&gt;lock</tt>
critical section, with the double line on top representing the additional
<tt>smp_mb__after_unlock_lock()</tt>.

<h3><a name="Tree RCU Grace Period Memory Ordering Components">
Tree RCU Grace Period Memory Ordering Components</a></h3>

<p>Tree RCU's grace-period memory-ordering guarantee is provided by
a number of RCU components:

<ol>
<li>	<a href="#Callback Registry">Callback Registry</a>
<li>	<a href="#Grace-Period Initialization">Grace-Period Initialization</a>
<li>	<a href="#Self-Reported Quiescent States">
	Self-Reported Quiescent States</a>
<li>	<a href="#Dynamic Tick Interface">Dynamic Tick Interface</a>
<li>	<a href="#CPU-Hotplug Interface">CPU-Hotplug Interface</a>
<li>	<a href="#Forcing Quiescent States">Forcing Quiescent States</a>
<li>	<a href="#Grace-Period Cleanup">Grace-Period Cleanup</a>
<li>	<a href="#Callback Invocation">Callback Invocation</a>
</ol>

<p>Each of the following sections looks at the corresponding component
in detail.

<h4><a name="Callback Registry">Callback Registry</a></h4>

<p>If RCU's grace-period guarantee is to mean anything at all, any
access that happens before a given invocation of <tt>call_rcu()</tt>
must also happen before the corresponding grace period.
The implementation of this portion of RCU's grace period guarantee
is shown in the following figure:

</p><p><img src="TreeRCU-callback-registry.svg" alt="TreeRCU-callback-registry.svg">

<p>Because <tt>call_rcu()</tt> normally acts only on CPU-local state,
it provides no ordering guarantees, either for itself or for
phase one of the update (which again will usually be removal of
an element from an RCU-protected data structure).
It simply enqueues the <tt>rcu_head</tt> structure on a per-CPU list,
which cannot become associated with a grace period until a later
call to <tt>rcu_accelerate_cbs()</tt>, as shown in the diagram above.
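<p>For reference, a typical (hypothetical) invocation looks like this,
with phase one having already removed <tt>fp</tt> from the enclosing
data structure:

<pre>
struct foo {
  struct rcu_head rh;
  int data;
};

static void foo_reclaim(struct rcu_head *rhp)  /* Phase-two callback. */
{
  kfree(container_of(rhp, struct foo, rh));
}

call_rcu(&amp;fp-&gt;rh, foo_reclaim);  /* Enqueue on this CPU's list. */
</pre>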

<p>One set of code paths shown on the left invokes
<tt>rcu_accelerate_cbs()</tt> via
<tt>note_gp_changes()</tt>, either directly from <tt>call_rcu()</tt> (if
the current CPU is inundated with queued <tt>rcu_head</tt> structures)
or more likely from an <tt>RCU_SOFTIRQ</tt> handler.
Another code path in the middle is taken only in kernels built with
<tt>CONFIG_RCU_FAST_NO_HZ=y</tt>, which invokes
<tt>rcu_accelerate_cbs()</tt> via <tt>rcu_prepare_for_idle()</tt>.
The final code path on the right is taken only in kernels built with
<tt>CONFIG_HOTPLUG_CPU=y</tt>, which invokes
<tt>rcu_accelerate_cbs()</tt> via
<tt>rcu_advance_cbs()</tt>, <tt>rcu_migrate_callbacks()</tt>,
<tt>rcutree_migrate_callbacks()</tt>, and <tt>takedown_cpu()</tt>,
which in turn is invoked on a surviving CPU after the outgoing
CPU has been completely offlined.

<p>There are a few other code paths within grace-period processing
that opportunistically invoke <tt>rcu_accelerate_cbs()</tt>.
However, either way, all of the CPU's recently queued <tt>rcu_head</tt>
structures are associated with a future grace-period number under
the protection of the CPU's leaf <tt>rcu_node</tt> structure's
<tt>-&gt;lock</tt>.
In all cases, there is full ordering against any prior critical section
for that same <tt>rcu_node</tt> structure's <tt>-&gt;lock</tt>, and
also full ordering against any of the current task's or CPU's prior critical
sections for any <tt>rcu_node</tt> structure's <tt>-&gt;lock</tt>.

<p>The next section will show how this ordering ensures that any
accesses prior to the <tt>call_rcu()</tt> (particularly including phase
one of the update)
happen before the start of the corresponding grace period.

<table>
<tr><th>&nbsp;</th></tr>
<tr><th align="left">Quick Quiz:</th></tr>
<tr><td>
	But what about <tt>synchronize_rcu()</tt>?
</td></tr>
<tr><th align="left">Answer:</th></tr>
<tr><td bgcolor="#ffffff"><font color="ffffff">
	The <tt>synchronize_rcu()</tt> passes <tt>call_rcu()</tt>
	to <tt>wait_rcu_gp()</tt>, which invokes it.
	So either way, it eventually comes down to <tt>call_rcu()</tt>.
</font></td></tr>
<tr><td>&nbsp;</td></tr>
</table>

<h4><a name="Grace-Period Initialization">Grace-Period Initialization</a></h4>

<p>Grace-period initialization is carried out by
the grace-period kernel thread, which makes several passes over the
<tt>rcu_node</tt> tree within the <tt>rcu_gp_init()</tt> function.
This means that showing the full flow of ordering through the
grace-period computation will require duplicating this tree.
If you find this confusing, please note that the state of the
<tt>rcu_node</tt> tree changes over time, just like Heraclitus's river.
However, to keep the <tt>rcu_node</tt> river tractable, the
grace-period kernel thread's traversals are presented in multiple
parts, starting in this section with the various phases of
grace-period initialization.

<p>The first ordering-related grace-period initialization action is to
advance the <tt>rcu_state</tt> structure's <tt>-&gt;gp_seq</tt>
grace-period-number counter, as shown below:

</p><p><img src="TreeRCU-gp-init-1.svg" alt="TreeRCU-gp-init-1.svg" width="75%">

<p>The actual increment is carried out using <tt>smp_store_release()</tt>,
which helps reject false-positive RCU CPU stall detection.
Note that only the root <tt>rcu_node</tt> structure is touched.
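<p>Glossing over the state information carried in the low-order bits
of <tt>-&gt;gp_seq</tt>, a sketch of this publication step might look
as follows; the matching acquire load shown for the sampling side is
illustrative rather than a specific kernel code path:

<pre>
/* Grace-period kthread: publish the new grace-period number. */
smp_store_release(&amp;rsp-&gt;gp_seq, rsp-&gt;gp_seq + 1);

/* A sampler, for example stall detection, might pair with: */
unsigned long gs = smp_load_acquire(&amp;rsp-&gt;gp_seq);
</pre>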

<p>The first pass through the <tt>rcu_node</tt> tree updates bitmasks
based on CPUs having come online or gone offline since the start of
the previous grace period.
In the common case where the number of online CPUs for this <tt>rcu_node</tt>
structure has not transitioned to or from zero,
this pass will scan only the leaf <tt>rcu_node</tt> structures.
However, if the number of online CPUs for a given leaf <tt>rcu_node</tt>
structure has transitioned from zero,
<tt>rcu_init_new_rnp()</tt> will be invoked for the first incoming CPU.
Similarly, if the number of online CPUs for a given leaf <tt>rcu_node</tt>
structure has transitioned to zero,
<tt>rcu_cleanup_dead_rnp()</tt> will be invoked for the last outgoing CPU.
The diagram below shows the path of ordering if the leftmost
<tt>rcu_node</tt> structure onlines its first CPU and if the next
<tt>rcu_node</tt> structure has no online CPUs
(or, alternatively if the leftmost <tt>rcu_node</tt> structure offlines
its last CPU and if the next <tt>rcu_node</tt> structure has no online CPUs).

</p><p><img src="TreeRCU-gp-init-2.svg" alt="TreeRCU-gp-init-2.svg" width="75%">

<p>The final <tt>rcu_gp_init()</tt> pass through the <tt>rcu_node</tt>
tree traverses breadth-first, setting each <tt>rcu_node</tt> structure's
<tt>-&gt;gp_seq</tt> field to the newly advanced value from the
<tt>rcu_state</tt> structure, as shown in the following diagram.

</p><p><img src="TreeRCU-gp-init-3.svg" alt="TreeRCU-gp-init-3.svg" width="75%">

<p>This change will also cause each CPU's next call to
<tt>__note_gp_changes()</tt>
to notice that a new grace period has started, as described in the next
section.
But because the grace-period kthread started the grace period at the
root (with the advancing of the <tt>rcu_state</tt> structure's
<tt>-&gt;gp_seq</tt> field) before setting each leaf <tt>rcu_node</tt>
structure's <tt>-&gt;gp_seq</tt> field, each CPU's observation of
the start of the grace period will happen after the actual start
of the grace period.

<table>
<tr><th>&nbsp;</th></tr>
<tr><th align="left">Quick Quiz:</th></tr>
<tr><td>
	But what about the CPU that started the grace period?
	Why wouldn't it see the start of the grace period right when
	it started that grace period?
</td></tr>
<tr><th align="left">Answer:</th></tr>
<tr><td bgcolor="#ffffff"><font color="ffffff">
	In some deep philosophical and overly anthropomorphized
	sense, yes, the CPU starting the grace period is immediately
	aware of having done so.
	However, if we instead assume that RCU is not self-aware,
	then even the CPU starting the grace period does not really
	become aware of the start of this grace period until its
	first call to <tt>__note_gp_changes()</tt>.
	On the other hand, this CPU potentially gets early notification
	because it invokes <tt>__note_gp_changes()</tt> during its
	last <tt>rcu_gp_init()</tt> pass through its leaf
	<tt>rcu_node</tt> structure.
</font></td></tr>
<tr><td>&nbsp;</td></tr>
</table>

<h4><a name="Self-Reported Quiescent States">
Self-Reported Quiescent States</a></h4>

<p>When all entities that might block the grace period have reported
quiescent states (or as described in a later section, had quiescent
states reported on their behalf), the grace period can end.
Online non-idle CPUs report their own quiescent states, as shown
in the following diagram:

</p><p><img src="TreeRCU-qs.svg" alt="TreeRCU-qs.svg" width="75%">

<p>This is for the last CPU to report a quiescent state, which signals
the end of the grace period.
Earlier quiescent states would push up the <tt>rcu_node</tt> tree
only until they encountered an <tt>rcu_node</tt> structure that
is waiting for additional quiescent states.
However, ordering is nevertheless preserved because some later quiescent
state will acquire that <tt>rcu_node</tt> structure's <tt>-&gt;lock</tt>.

<p>Any number of events can lead up to a CPU invoking
<tt>note_gp_changes()</tt> (or alternatively, directly invoking
<tt>__note_gp_changes()</tt>), at which point that CPU will notice
the start of a new grace period while holding its leaf
<tt>rcu_node</tt> lock.
Therefore, all execution shown in this diagram happens after the
start of the grace period.
In addition, this CPU will consider any RCU read-side critical
section that started before the invocation of <tt>__note_gp_changes()</tt>
to have started before the grace period, and thus a critical
section that the grace period must wait on.

<table>
<tr><th>&nbsp;</th></tr>
<tr><th align="left">Quick Quiz:</th></tr>
<tr><td>
	But an RCU read-side critical section might have started
	after the beginning of the grace period
	(the advancing of <tt>-&gt;gp_seq</tt> from earlier), so why should
	the grace period wait on such a critical section?
</td></tr>
<tr><th align="left">Answer:</th></tr>
<tr><td bgcolor="#ffffff"><font color="ffffff">
	It is indeed not necessary for the grace period to wait on such
	a critical section.
	However, it is permissible to wait on it.
	And it is furthermore important to wait on it, as this
	lazy approach is far more scalable than a &ldquo;big bang&rdquo;
	all-at-once grace-period start could possibly be.
</font></td></tr>
<tr><td>&nbsp;</td></tr>
</table>

<p>If the CPU does a context switch, a quiescent state will be
noted by <tt>rcu_note_context_switch()</tt> on the left.
On the other hand, if the CPU takes a scheduler-clock interrupt
while executing in usermode, a quiescent state will be noted by
<tt>rcu_sched_clock_irq()</tt> on the right.
Either way, the passage through a quiescent state will be noted
in a per-CPU variable.

<p>The next time an <tt>RCU_SOFTIRQ</tt> handler executes on
this CPU (for example, after the next scheduler-clock
interrupt), <tt>rcu_core()</tt> will invoke
<tt>rcu_check_quiescent_state()</tt>, which will notice the
recorded quiescent state, and invoke
<tt>rcu_report_qs_rdp()</tt>.
If <tt>rcu_report_qs_rdp()</tt> verifies that the quiescent state
really does apply to the current grace period, it invokes
<tt>rcu_report_qs_rnp()</tt>, which traverses up the <tt>rcu_node</tt>
tree as shown at the bottom of the diagram, clearing bits from
each <tt>rcu_node</tt> structure's <tt>-&gt;qsmask</tt> field,
and propagating up the tree when the result is zero.

<p>Note that traversal passes upwards out of a given <tt>rcu_node</tt>
structure only if the current CPU is reporting the last quiescent
state for the subtree headed by that <tt>rcu_node</tt> structure.
A key point is that if a CPU's traversal stops at a given <tt>rcu_node</tt>
structure, then there will be a later traversal by another CPU
(or perhaps the same one) that proceeds upwards
from that point, and the <tt>rcu_node</tt> <tt>-&gt;lock</tt>
guarantees that the first CPU's quiescent state happens before the
remainder of the second CPU's traversal.
Applying this line of thought repeatedly shows that all CPUs'
quiescent states happen before the last CPU traverses through
the root <tt>rcu_node</tt> structure, the &ldquo;last CPU&rdquo;
being the one that clears the last bit in the root <tt>rcu_node</tt>
structure's <tt>-&gt;qsmask</tt> field.
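<p>A heavily simplified sketch of this upward propagation appears
below; the actual <tt>rcu_report_qs_rnp()</tt> takes additional
parameters and handles additional cases:

<pre>
static void example_report_qs_rnp(unsigned long mask,
                                  struct rcu_node *rnp)
{
  unsigned long flags;

  for (;;) {
    raw_spin_lock_irqsave_rcu_node(rnp, flags);
    rnp-&gt;qsmask &amp;= ~mask;  /* Record this quiescent state. */
    if (rnp-&gt;qsmask != 0) {
      /* Others still owe quiescent states: stop here. */
      raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
      return;
    }
    if (rnp-&gt;parent == NULL) {
      /* Cleared the last bit at the root: GP may end (not shown). */
      raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
      return;
    }
    mask = rnp-&gt;grpmask;   /* This rcu_node's bit in its parent. */
    raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
    rnp = rnp-&gt;parent;
  }
}
</pre>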

<h4><a name="Dynamic Tick Interface">Dynamic Tick Interface</a></h4>

<p>Due to energy-efficiency considerations, RCU is forbidden from
disturbing idle CPUs.
CPUs are therefore required to notify RCU when entering or leaving idle
state, which they do via fully ordered value-returning atomic operations
on a per-CPU variable.
The ordering effects are as shown below:

</p><p><img src="TreeRCU-dyntick.svg" alt="TreeRCU-dyntick.svg" width="50%">

<p>The RCU grace-period kernel thread samples the per-CPU idleness
variable while holding the corresponding CPU's leaf <tt>rcu_node</tt>
structure's <tt>-&gt;lock</tt>.
This means that any RCU read-side critical sections that precede the
idle period (the oval near the top of the diagram above) will happen
before the end of the current grace period.
Similarly, the beginning of the current grace period will happen before
any RCU read-side critical sections that follow the
idle period (the oval near the bottom of the diagram above).
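<p>In sketch form, the grace-period kthread's check is as follows;
<tt>example_report_qs_for()</tt> is a hypothetical stand-in for the
actual reporting path, and <tt>rnp</tt>, <tt>rdtp</tt>, and
<tt>cpu</tt> are set up by the caller (not shown):

<pre>
int snap;

raw_spin_lock_irq_rcu_node(rnp);
snap = rcu_dynticks_snap(rdtp);   /* Fully ordered sample. */
if (rcu_dynticks_in_eqs(snap))
  example_report_qs_for(cpu);     /* CPU is idle: report for it. */
raw_spin_unlock_irq_rcu_node(rnp);
</pre>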

<p>Plumbing this into the full grace-period execution is described
<a href="#Forcing Quiescent States">below</a>.

<h4><a name="CPU-Hotplug Interface">CPU-Hotplug Interface</a></h4>

<p>RCU is also forbidden from disturbing offline CPUs, which might well
be powered off and removed from the system completely.
CPUs are therefore required to notify RCU of their comings and goings
as part of the corresponding CPU hotplug operations.
The ordering effects are shown below:

</p><p><img src="TreeRCU-hotplug.svg" alt="TreeRCU-hotplug.svg" width="50%">

<p>Because CPU hotplug operations are much less frequent than idle transitions,
they are heavier weight, and thus acquire the CPU's leaf <tt>rcu_node</tt>
structure's <tt>-&gt;lock</tt> and update this structure's
<tt>-&gt;qsmaskinitnext</tt>.
The RCU grace-period kernel thread samples this mask to detect CPUs
having gone offline since the beginning of this grace period.
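<p>For example, the incoming-CPU path updates the mask roughly as
follows (cf. <tt>rcu_cpu_starting()</tt>, heavily simplified):

<pre>
unsigned long flags;
unsigned long mask = rdp-&gt;grpmask;    /* This CPU's bit. */

raw_spin_lock_irqsave_rcu_node(rnp, flags);
rnp-&gt;qsmaskinitnext |= mask;          /* Mark the CPU as online. */
raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
</pre>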

<p>Plumbing this into the full grace-period execution is described
<a href="#Forcing Quiescent States">below</a>.

<h4><a name="Forcing Quiescent States">Forcing Quiescent States</a></h4>

<p>As noted above, idle and offline CPUs cannot report their own
quiescent states, and therefore the grace-period kernel thread
must do the reporting on their behalf.
This process is called &ldquo;forcing quiescent states&rdquo;; it is
repeated every few jiffies, and its ordering effects are shown below:

</p><p><img src="TreeRCU-gp-fqs.svg" alt="TreeRCU-gp-fqs.svg" width="100%">

<p>Each pass of quiescent state forcing is guaranteed to traverse the
leaf <tt>rcu_node</tt> structures, and if there are no new quiescent
states due to recently idled and/or offlined CPUs, then only the
leaves are traversed.
However, if there is a newly offlined CPU as illustrated on the left
or a newly idled CPU as illustrated on the right, the corresponding
quiescent state will be driven up towards the root.
As with self-reported quiescent states, the upwards driving stops
once it reaches an <tt>rcu_node</tt> structure that has quiescent
states outstanding from other CPUs.

<table>
<tr><th>&nbsp;</th></tr>
<tr><th align="left">Quick Quiz:</th></tr>
<tr><td>
	The leftmost drive to root stopped before it reached
	the root <tt>rcu_node</tt> structure, which means that
	there are still CPUs subordinate to that structure on
	which the current grace period is waiting.
	Given that, how is it possible that the rightmost drive
	to root ended the grace period?
</td></tr>
<tr><th align="left">Answer:</th></tr>
<tr><td bgcolor="#ffffff"><font color="ffffff">
	Good analysis!
	It is in fact impossible in the absence of bugs in RCU.
	But this diagram is complex enough as it is, so simplicity
	overrode accuracy.
	You can think of it as poetic license, or you can think of
	it as misdirection that is resolved in the
	<a href="#Putting It All Together">stitched-together diagram</a>.
</font></td></tr>
<tr><td>&nbsp;</td></tr>
</table>

<h4><a name="Grace-Period Cleanup">Grace-Period Cleanup</a></h4>

<p>Grace-period cleanup first scans the <tt>rcu_node</tt> tree
breadth-first advancing all the <tt>-&gt;gp_seq</tt> fields, then it
advances the <tt>rcu_state</tt> structure's <tt>-&gt;gp_seq</tt> field.
The ordering effects are shown below:

</p><p><img src="TreeRCU-gp-cleanup.svg" alt="TreeRCU-gp-cleanup.svg" width="75%">
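<p>In simplified sketch form, the counter updates just described look
something like the following; the real <tt>rcu_gp_cleanup()</tt> does
considerably more work under each <tt>-&gt;lock</tt>:

<pre>
/* new_gp_seq was computed from rsp-&gt;gp_seq beforehand (not shown). */
rcu_for_each_node_breadth_first(rsp, rnp) {
  raw_spin_lock_irq_rcu_node(rnp);
  WRITE_ONCE(rnp-&gt;gp_seq, new_gp_seq);  /* Advance this node. */
  raw_spin_unlock_irq_rcu_node(rnp);
}
rcu_seq_end(&amp;rsp-&gt;gp_seq);  /* Finally, advance rcu_state's counter. */
</pre>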

<p>As indicated by the oval at the bottom of the diagram, once
grace-period cleanup is complete, the next grace period can begin.

<table>
<tr><th>&nbsp;</th></tr>
<tr><th align="left">Quick Quiz:</th></tr>
<tr><td>
	But when precisely does the grace period end?
</td></tr>
<tr><th align="left">Answer:</th></tr>
<tr><td bgcolor="#ffffff"><font color="ffffff">
	There is no useful single point at which the grace period
	can be said to end.
	The earliest reasonable candidate is as soon as the last
	CPU has reported its quiescent state, but it may be some
	milliseconds before RCU becomes aware of this.
	The latest reasonable candidate is once the <tt>rcu_state</tt>
	structure's <tt>-&gt;gp_seq</tt> field has been updated,
	but it is quite possible that some CPUs have already completed
	phase two of their updates by that time.
	In short, if you are going to work with RCU, you need to
	learn to embrace uncertainty.
</font></td></tr>
<tr><td>&nbsp;</td></tr>
</table>


<h4><a name="Callback Invocation">Callback Invocation</a></h4>

<p>Once a given CPU's leaf <tt>rcu_node</tt> structure's
<tt>-&gt;gp_seq</tt> field has been updated, that CPU can begin
invoking its RCU callbacks that were waiting for this grace period
to end.
to end.
These callbacks are identified by <tt>rcu_advance_cbs()</tt>,
which is usually invoked by <tt>__note_gp_changes()</tt>.
As shown in the diagram below, this invocation can be triggered by
the scheduling-clock interrupt (<tt>rcu_sched_clock_irq()</tt> on
the left) or by idle entry (<tt>rcu_cleanup_after_idle()</tt> on
the right, but only for kernels built with
<tt>CONFIG_RCU_FAST_NO_HZ=y</tt>).
Either way, <tt>RCU_SOFTIRQ</tt> is raised, which results in
<tt>rcu_do_batch()</tt> invoking the callbacks, which in turn
allows those callbacks to carry out (either directly or indirectly
via wakeup) the needed phase-two processing for each update.

</p><p><img src="TreeRCU-callback-invocation.svg" alt="TreeRCU-callback-invocation.svg" width="60%">
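<p>In heavily simplified form, callback invocation then proceeds
roughly as follows; the real <tt>rcu_do_batch()</tt> also applies
batch limits and updates counts, and <tt>rdp</tt> comes from the
surrounding context:

<pre>
struct rcu_cblist rcl = RCU_CBLIST_INITIALIZER(rcl);
struct rcu_head *rhp;

rcu_segcblist_extract_done_cbs(&amp;rdp-&gt;cblist, &amp;rcl);
while ((rhp = rcu_cblist_dequeue(&amp;rcl)) != NULL)
  rhp-&gt;func(rhp);  /* Phase-two processing for one update. */
</pre>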

<p>Please note that callback invocation can also be prompted by any
number of corner-case code paths, for example, when a CPU notes that
it has excessive numbers of callbacks queued.
In all cases, the CPU acquires its leaf <tt>rcu_node</tt> structure's
<tt>-&gt;lock</tt> before invoking callbacks, which preserves the
required ordering against the newly completed grace period.

<p>However, if the callback function communicates with other CPUs,
for example, by doing a wakeup, then it is that function's responsibility
to maintain ordering.
For example, if the callback function wakes up a task that runs on
some other CPU, proper ordering must be in place in both the callback
function and the task being awakened.
To see why this is important, consider the top half of the
<a href="#Grace-Period Cleanup">grace-period cleanup</a> diagram.
The callback might be running on a CPU corresponding to the leftmost
leaf <tt>rcu_node</tt> structure, and awaken a task that is to run on
a CPU corresponding to the rightmost leaf <tt>rcu_node</tt> structure,
and the grace-period kernel thread might not yet have reached the
rightmost leaf.
In this case, the grace period's memory ordering might not yet have
reached that CPU, so again the callback function and the awakened
task must supply proper ordering.

<h3><a name="Putting It All Together">Putting It All Together</a></h3>

<p>A stitched-together diagram is
<a href="Tree-RCU-Diagram.html">here</a>.

<h3><a name="Legal Statement">
Legal Statement</a></h3>

<p>This work represents the view of the author and does not necessarily
represent the view of IBM.

</p><p>Linux is a registered trademark of Linus Torvalds.

</p><p>Other company, product, and service names may be trademarks or
service marks of others.

</body></html>