Commit 423d091d authored by Linus Torvalds

Merge branch 'core-rcu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

* 'core-rcu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (64 commits)
  cpu: Export cpu_up()
  rcu: Apply ACCESS_ONCE() to rcu_boost() return value
  Revert "rcu: Permit rt_mutex_unlock() with irqs disabled"
  docs: Additional LWN links to RCU API
  rcu: Augment rcu_batch_end tracing for idle and callback state
  rcu: Add rcutorture tests for srcu_read_lock_raw()
  rcu: Make rcutorture test for hotpluggability before offlining CPUs
  driver-core/cpu: Expose hotpluggability to the rest of the kernel
  rcu: Remove redundant rcu_cpu_stall_suppress declaration
  rcu: Adaptive dyntick-idle preparation
  rcu: Keep invoking callbacks if CPU otherwise idle
  rcu: Irq nesting is always 0 on rcu_enter_idle_common
  rcu: Don't check irq nesting from rcu idle entry/exit
  rcu: Permit dyntick-idle with callbacks pending
  rcu: Document same-context read-side constraints
  rcu: Identify dyntick-idle CPUs on first force_quiescent_state() pass
  rcu: Remove dynticks false positives and RCU failures
  rcu: Reduce latency of rcu_prepare_for_idle()
  rcu: Eliminate RCU_FAST_NO_HZ grace-period hang
  rcu: Avoid needlessly IPIing CPUs at GP end
  ...
parents 1483b382 919b8345
...@@ -328,6 +328,12 @@ over a rather long period of time, but improvements are always welcome!
RCU rather than SRCU, because RCU is almost always faster and
easier to use than is SRCU.
If you need to enter your read-side critical section in a
hardirq or exception handler, and then exit that same read-side
critical section in the task that was interrupted, then you need
to use srcu_read_lock_raw() and srcu_read_unlock_raw(), which avoid
the lockdep checking that would otherwise make this practice illegal.
Also unlike other forms of RCU, explicit initialization
and cleanup is required via init_srcu_struct() and
cleanup_srcu_struct().  These are passed a "struct srcu_struct"
......
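For readers new to these primitives, here is a minimal sketch of the pattern described in the checklist item above (hypothetical names; it assumes the srcu_struct has already been set up with init_srcu_struct()): the read-side critical section is entered in a hardirq handler and exited later in the interrupted task, which the lockdep-checked srcu_read_lock() would reject.

	#include <linux/interrupt.h>
	#include <linux/srcu.h>

	static struct srcu_struct my_srcu;	/* init_srcu_struct(&my_srcu) at setup time */
	static int my_srcu_idx;			/* cookie handed from the irq to the task */

	static irqreturn_t my_irq_handler(int irq, void *dev_id)
	{
		my_srcu_idx = srcu_read_lock_raw(&my_srcu);	/* no lockdep splat */
		/* ... snapshot state protected by my_srcu ... */
		return IRQ_HANDLED;
	}

	static void my_task_level_work(void)
	{
		/* ... use the SRCU-protected state ... */
		srcu_read_unlock_raw(&my_srcu, my_srcu_idx);
	}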
...@@ -38,11 +38,11 @@ o How can the updater tell when a grace period has completed
Preemptible variants of RCU (CONFIG_TREE_PREEMPT_RCU) get the
same effect, but require that the readers manipulate CPU-local
counters.  These counters allow limited types of blocking within
RCU read-side critical sections.  SRCU also uses CPU-local
counters, and permits general blocking within RCU read-side
critical sections.  These variants of RCU detect grace periods
by sampling these counters.
o If I am running on a uniprocessor kernel, which can only do one
thing at a time, why should I wait for a grace period?
......
...@@ -101,6 +101,11 @@ o A CPU-bound real-time task in a CONFIG_PREEMPT_RT kernel that
CONFIG_TREE_PREEMPT_RCU case, you might see stall-warning
messages.
o A hardware or software issue shuts off the scheduler-clock
interrupt on a CPU that is not in dyntick-idle mode. This
problem really has happened, and seems to be most likely to
result in RCU CPU stall warnings for CONFIG_NO_HZ=n kernels.
o A bug in the RCU implementation.
o A hardware failure.  This is quite unlikely, but has occurred
...@@ -109,12 +114,11 @@ o A hardware failure.  This is quite unlikely, but has occurred
This resulted in a series of RCU CPU stall warnings, eventually
leading to the realization that the CPU had failed.
The RCU, RCU-sched, and RCU-bh implementations have CPU stall warnings.
SRCU does not have its own CPU stall warnings, but its calls to
synchronize_sched() will result in RCU-sched detecting RCU-sched-related
CPU stalls.  Please note that RCU only detects CPU stalls when there is
a grace period in progress.  No grace period, no CPU stall warnings.
To diagnose the cause of the stall, inspect the stack traces.
The offending function will usually be near the top of the stack.
......
...@@ -61,11 +61,24 @@ nreaders This is the number of RCU reading threads supported.
To properly exercise RCU implementations with preemptible
read-side critical sections.
onoff_interval
The number of seconds between each attempt to execute a
randomly selected CPU-hotplug operation. Defaults to
zero, which disables CPU hotplugging. In HOTPLUG_CPU=n
kernels, rcutorture will silently refuse to do any
CPU-hotplug operations regardless of what value is
specified for onoff_interval.
shuffle_interval
The number of seconds to keep the test threads affinitied
to a particular subset of the CPUs, defaults to 3 seconds.
Used in conjunction with test_no_idle_hz.
shutdown_secs The number of seconds to run the test before terminating
the test and powering off the system. The default is
zero, which disables test termination and system shutdown.
This capability is useful for automated testing.
stat_interval The number of seconds between output of torture
statistics (via printk()).  Regardless of the interval,
statistics are printed when the module is unloaded.
......
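As a rough illustration of how parameters such as onoff_interval and shutdown_secs are plumbed in, kernel module parameters of this kind are normally declared with module_param(); the snippet below is a sketch following that convention, not the actual rcutorture source.

	#include <linux/module.h>

	static int onoff_interval;	/* 0 = no CPU-hotplug testing */
	static int shutdown_secs;	/* 0 = never shut the system down */

	module_param(onoff_interval, int, 0444);
	MODULE_PARM_DESC(onoff_interval, "Time between CPU hotplug operations (s), 0 to disable");
	module_param(shutdown_secs, int, 0444);
	MODULE_PARM_DESC(shutdown_secs, "Shutdown time (s), 0 to disable");

Such parameters would then be set at load time, for example with "modprobe rcutorture onoff_interval=30 shutdown_secs=1800".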
...@@ -105,14 +105,10 @@ o "dt" is the current value of the dyntick counter that is incremented
or one greater than the interrupt-nesting depth otherwise.
The number after the second "/" is the NMI nesting depth.
This field is displayed only for CONFIG_NO_HZ kernels.
o "df" is the number of times that some other CPU has forced a
quiescent state on behalf of this CPU due to this CPU being in
dynticks-idle state.
This field is displayed only for CONFIG_NO_HZ kernels.
o "of" is the number of times that some other CPU has forced a
quiescent state on behalf of this CPU due to this CPU being
offline.  In a perfect world, this might never happen, but it
......
...@@ -4,6 +4,7 @@ to start learning about RCU:
1. What is RCU, Fundamentally?  http://lwn.net/Articles/262464/
2. What is RCU? Part 2: Usage   http://lwn.net/Articles/263130/
3. RCU part 3: the RCU API      http://lwn.net/Articles/264090/
4. The RCU API, 2010 Edition    http://lwn.net/Articles/418853/
What is RCU?
...@@ -834,6 +835,8 @@ SRCU: Critical sections Grace period Barrier
srcu_read_lock        synchronize_srcu            N/A
srcu_read_unlock      synchronize_srcu_expedited
srcu_read_lock_raw
srcu_read_unlock_raw
srcu_dereference
SRCU: Initialization/cleanup
...@@ -855,27 +858,33 @@ list can be helpful:
a. Will readers need to block?  If so, you need SRCU.
b. Is it necessary to start a read-side critical section in a
hardirq handler or exception handler, and then to complete
this read-side critical section in the task that was
interrupted?  If so, you need SRCU's srcu_read_lock_raw() and
srcu_read_unlock_raw() primitives.
c. What about the -rt patchset?  If readers would need to block
in a non-rt kernel, you need SRCU.  If readers would block
in a -rt kernel, but not in a non-rt kernel, SRCU is not
necessary.
d. Do you need to treat NMI handlers, hardirq handlers,
and code segments with preemption disabled (whether
via preempt_disable(), local_irq_save(), local_bh_disable(),
or some other mechanism) as if they were explicit RCU readers?
If so, you need RCU-sched.
e. Do you need RCU grace periods to complete even in the face
of softirq monopolization of one or more of the CPUs?  For
example, is your code subject to network-based denial-of-service
attacks?  If so, you need RCU-bh.
f. Is your workload too update-intensive for normal use of
RCU, but inappropriate for other synchronization mechanisms?
If so, consider SLAB_DESTROY_BY_RCU.  But please be careful!
g. Otherwise, use RCU.
Of course, this all assumes that you have determined that RCU is in fact
the right tool for your job.
......
...@@ -84,6 +84,93 @@ compiler optimizes the section accessing atomic_t variables.
*** YOU HAVE BEEN WARNED! ***
Properly aligned pointers, longs, ints, and chars (and unsigned
equivalents) may be atomically loaded from and stored to in the same
sense as described for atomic_read() and atomic_set(). The ACCESS_ONCE()
macro should be used to prevent the compiler from using optimizations
that might otherwise optimize accesses out of existence on the one hand,
or that might create unsolicited accesses on the other.
For example, consider the following code:

	while (a > 0)
		do_something();

If the compiler can prove that do_something() does not store to the
variable a, then the compiler is within its rights to transform this into
the following:

	tmp = a;
	if (tmp > 0)
		for (;;)
			do_something();

If you don't want the compiler to do this (and you probably don't), then
you should use something like the following:

	while (ACCESS_ONCE(a) > 0)
		do_something();
Alternatively, you could place a barrier() call in the loop.
For another example, consider the following code:
tmp_a = a;
do_something_with(tmp_a);
do_something_else_with(tmp_a);
If the compiler can prove that do_something_with() does not store to the
variable a, then the compiler is within its rights to manufacture an
additional load as follows:
tmp_a = a;
do_something_with(tmp_a);
tmp_a = a;
do_something_else_with(tmp_a);
This could fatally confuse your code if it expected the same value
to be passed to do_something_with() and do_something_else_with().
The compiler would be likely to manufacture this additional load if
do_something_with() was an inline function that made very heavy use
of registers: reloading from variable a could save a flush to the
stack and later reload. To prevent the compiler from attacking your
code in this manner, write the following:
tmp_a = ACCESS_ONCE(a);
do_something_with(tmp_a);
do_something_else_with(tmp_a);
For a final example, consider the following code, assuming that the
variable a is set at boot time before the second CPU is brought online
and never changed later, so that memory barriers are not needed:
if (a)
b = 9;
else
b = 42;
The compiler is within its rights to manufacture an additional store
by transforming the above code into the following:
b = 42;
if (a)
b = 9;
This could come as a fatal surprise to other code running concurrently
that expected b to never have the value 42 if a was zero. To prevent
the compiler from doing this, write something like:
if (a)
ACCESS_ONCE(b) = 9;
else
ACCESS_ONCE(b) = 42;
Don't even -think- about doing this without proper use of memory barriers,
locks, or atomic operations if variable a can change at runtime!
*** WARNING: ACCESS_ONCE() DOES NOT IMPLY A BARRIER! ***
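For reference, ACCESS_ONCE() is conventionally implemented as a volatile cast (essentially the <linux/compiler.h> definition of this era); a minimal sketch, with do_something() standing in for arbitrary work provided elsewhere:

	#define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))

	void do_something(void);	/* provided elsewhere */

	static int a;

	static void wait_while_positive(void)
	{
		/* The volatile access forces a fresh load of "a" on each pass. */
		while (ACCESS_ONCE(a) > 0)
			do_something();
	}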
Now, we move onto the atomic operation interfaces typically implemented with
the help of assembly code.
......
...@@ -221,3 +221,66 @@ when the chain is validated for the first time, is then put into a hash
table, which can then be checked in a lockfree manner.  If the
locking chain occurs again later on, the hash table tells us that we
don't have to validate the chain again.
Troubleshooting:
----------------
The validator tracks a maximum of MAX_LOCKDEP_KEYS number of lock classes.
Exceeding this number will trigger the following lockdep warning:
(DEBUG_LOCKS_WARN_ON(id >= MAX_LOCKDEP_KEYS))
By default, MAX_LOCKDEP_KEYS is currently set to 8191, and typical
desktop systems have less than 1,000 lock classes, so this warning
normally results from lock-class leakage or failure to properly
initialize locks. These two problems are illustrated below:
1. Repeated module loading and unloading while running the validator
will result in lock-class leakage. The issue here is that each
load of the module will create a new set of lock classes for
that module's locks, but module unloading does not remove old
classes (see below discussion of reuse of lock classes for why).
Therefore, if that module is loaded and unloaded repeatedly,
the number of lock classes will eventually reach the maximum.
2. Using structures such as arrays that have large numbers of
locks that are not explicitly initialized. For example,
a hash table with 8192 buckets where each bucket has its own
spinlock_t will consume 8192 lock classes -unless- each spinlock
is explicitly initialized at runtime, for example, using the
run-time spin_lock_init() as opposed to compile-time initializers
such as __SPIN_LOCK_UNLOCKED(). Failure to properly initialize
the per-bucket spinlocks would guarantee lock-class overflow.
In contrast, a loop that called spin_lock_init() on each lock
would place all 8192 locks into a single lock class.
The moral of this story is that you should always explicitly
initialize your locks.
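As an illustration of point 2, a minimal sketch (hypothetical structure and function names) of the run-time initialization that keeps all of the per-bucket locks in a single lock class:

	#include <linux/init.h>
	#include <linux/list.h>
	#include <linux/spinlock.h>

	#define MY_HASH_BUCKETS 8192

	struct my_bucket {
		spinlock_t lock;
		struct hlist_head chain;
	};

	static struct my_bucket my_hash[MY_HASH_BUCKETS];

	static void __init my_hash_init(void)
	{
		int i;

		for (i = 0; i < MY_HASH_BUCKETS; i++) {
			/* One spin_lock_init() call site, hence one lock class. */
			spin_lock_init(&my_hash[i].lock);
			INIT_HLIST_HEAD(&my_hash[i].chain);
		}
	}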
One might argue that the validator should be modified to allow
lock classes to be reused. However, if you are tempted to make this
argument, first review the code and think through the changes that would
be required, keeping in mind that the lock classes to be removed are
likely to be linked into the lock-dependency graph. This turns out to
be harder to do than to say.
Of course, if you do run out of lock classes, the next thing to do is
to find the offending lock classes. First, the following command gives
you the number of lock classes currently in use along with the maximum:
grep "lock-classes" /proc/lockdep_stats
This command produces the following output on a modest system:
lock-classes: 748 [max: 8191]
If the number allocated (748 above) increases continually over time,
then there is likely a leak. The following command can be used to
identify the leaking lock classes:
grep "BD" /proc/lockdep
Run the command and save the output, then compare against the output from
a later run of this command to identify the leakers. This same output
can also help you find situations where runtime lock initialization has
been omitted.
...@@ -183,7 +183,8 @@ void cpu_idle(void) ...@@ -183,7 +183,8 @@ void cpu_idle(void)
/* endless idle loop with no priority at all */ /* endless idle loop with no priority at all */
while (1) { while (1) {
tick_nohz_stop_sched_tick(1); tick_nohz_idle_enter();
rcu_idle_enter();
leds_event(led_idle_start); leds_event(led_idle_start);
while (!need_resched()) { while (!need_resched()) {
#ifdef CONFIG_HOTPLUG_CPU #ifdef CONFIG_HOTPLUG_CPU
...@@ -213,7 +214,8 @@ void cpu_idle(void) ...@@ -213,7 +214,8 @@ void cpu_idle(void)
} }
} }
leds_event(led_idle_end); leds_event(led_idle_end);
tick_nohz_restart_sched_tick(); rcu_idle_exit();
tick_nohz_idle_exit();
preempt_enable_no_resched(); preempt_enable_no_resched();
schedule(); schedule();
preempt_disable(); preempt_disable();
......
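The remaining architecture hunks below repeat the same conversion. In outline (a sketch of the common pattern rather than any one architecture's file, with arch_idle() as a hypothetical stand-in for the low-power wait), the idle loop switches from the single tick_nohz_stop_sched_tick(1)/tick_nohz_restart_sched_tick() pair to the split API, with RCU's idle entry and exit now made explicit:

	void cpu_idle(void)
	{
		while (1) {
			tick_nohz_idle_enter();		/* was tick_nohz_stop_sched_tick(1) */
			rcu_idle_enter();		/* RCU is now told about idle explicitly */
			while (!need_resched())
				arch_idle();		/* hypothetical low-power wait */
			rcu_idle_exit();
			tick_nohz_idle_exit();		/* was tick_nohz_restart_sched_tick() */
			preempt_enable_no_resched();
			schedule();
			preempt_disable();
		}
	}

Splitting the calls appears to decouple RCU's extended quiescent state from stopping the tick, in line with the "Adaptive dyntick-idle preparation" commits listed above.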
...@@ -34,10 +34,12 @@ void cpu_idle(void) ...@@ -34,10 +34,12 @@ void cpu_idle(void)
{ {
/* endless idle loop with no priority at all */ /* endless idle loop with no priority at all */
while (1) { while (1) {
tick_nohz_stop_sched_tick(1); tick_nohz_idle_enter();
rcu_idle_enter();
while (!need_resched()) while (!need_resched())
cpu_idle_sleep(); cpu_idle_sleep();
tick_nohz_restart_sched_tick(); rcu_idle_exit();
tick_nohz_idle_exit();
preempt_enable_no_resched(); preempt_enable_no_resched();
schedule(); schedule();
preempt_disable(); preempt_disable();
......
...@@ -88,10 +88,12 @@ void cpu_idle(void) ...@@ -88,10 +88,12 @@ void cpu_idle(void)
#endif #endif
if (!idle) if (!idle)
idle = default_idle; idle = default_idle;
tick_nohz_stop_sched_tick(1); tick_nohz_idle_enter();
rcu_idle_enter();
while (!need_resched()) while (!need_resched())
idle(); idle();
tick_nohz_restart_sched_tick(); rcu_idle_exit();
tick_nohz_idle_exit();
preempt_enable_no_resched(); preempt_enable_no_resched();
schedule(); schedule();
preempt_disable(); preempt_disable();
......
...@@ -103,10 +103,12 @@ void cpu_idle(void) ...@@ -103,10 +103,12 @@ void cpu_idle(void)
if (!idle) if (!idle)
idle = default_idle; idle = default_idle;
tick_nohz_stop_sched_tick(1); tick_nohz_idle_enter();
rcu_idle_enter();
while (!need_resched()) while (!need_resched())
idle(); idle();
tick_nohz_restart_sched_tick(); rcu_idle_exit();
tick_nohz_idle_exit();
preempt_enable_no_resched(); preempt_enable_no_resched();
schedule(); schedule();
......
...@@ -56,7 +56,8 @@ void __noreturn cpu_idle(void) ...@@ -56,7 +56,8 @@ void __noreturn cpu_idle(void)
/* endless idle loop with no priority at all */ /* endless idle loop with no priority at all */
while (1) { while (1) {
tick_nohz_stop_sched_tick(1); tick_nohz_idle_enter();
rcu_idle_enter();
while (!need_resched() && cpu_online(cpu)) { while (!need_resched() && cpu_online(cpu)) {
#ifdef CONFIG_MIPS_MT_SMTC #ifdef CONFIG_MIPS_MT_SMTC
extern void smtc_idle_loop_hook(void); extern void smtc_idle_loop_hook(void);
...@@ -77,7 +78,8 @@ void __noreturn cpu_idle(void) ...@@ -77,7 +78,8 @@ void __noreturn cpu_idle(void)
system_state == SYSTEM_BOOTING)) system_state == SYSTEM_BOOTING))
play_dead(); play_dead();
#endif #endif
tick_nohz_restart_sched_tick(); rcu_idle_exit();
tick_nohz_idle_exit();
preempt_enable_no_resched(); preempt_enable_no_resched();
schedule(); schedule();
preempt_disable(); preempt_disable();
......
...@@ -51,7 +51,8 @@ void cpu_idle(void) ...@@ -51,7 +51,8 @@ void cpu_idle(void)
/* endless idle loop with no priority at all */ /* endless idle loop with no priority at all */
while (1) { while (1) {
tick_nohz_stop_sched_tick(1); tick_nohz_idle_enter();
rcu_idle_enter();
while (!need_resched()) { while (!need_resched()) {
check_pgt_cache(); check_pgt_cache();
...@@ -69,7 +70,8 @@ void cpu_idle(void) ...@@ -69,7 +70,8 @@ void cpu_idle(void)
set_thread_flag(TIF_POLLING_NRFLAG); set_thread_flag(TIF_POLLING_NRFLAG);
} }
tick_nohz_restart_sched_tick(); rcu_idle_exit();
tick_nohz_idle_exit();
preempt_enable_no_resched(); preempt_enable_no_resched();
schedule(); schedule();
preempt_disable(); preempt_disable();
......
...@@ -46,6 +46,12 @@ static int __init powersave_off(char *arg) ...@@ -46,6 +46,12 @@ static int __init powersave_off(char *arg)
} }
__setup("powersave=off", powersave_off); __setup("powersave=off", powersave_off);
#if defined(CONFIG_PPC_PSERIES) && defined(CONFIG_TRACEPOINTS)
static const bool idle_uses_rcu = 1;
#else
static const bool idle_uses_rcu;
#endif
/* /*
* The body of the idle task. * The body of the idle task.
*/ */
...@@ -56,7 +62,10 @@ void cpu_idle(void) ...@@ -56,7 +62,10 @@ void cpu_idle(void)
set_thread_flag(TIF_POLLING_NRFLAG); set_thread_flag(TIF_POLLING_NRFLAG);
while (1) { while (1) {
tick_nohz_stop_sched_tick(1); tick_nohz_idle_enter();
if (!idle_uses_rcu)
rcu_idle_enter();
while (!need_resched() && !cpu_should_die()) { while (!need_resched() && !cpu_should_die()) {
ppc64_runlatch_off(); ppc64_runlatch_off();
...@@ -93,7 +102,9 @@ void cpu_idle(void) ...@@ -93,7 +102,9 @@ void cpu_idle(void)
HMT_medium(); HMT_medium();
ppc64_runlatch_on(); ppc64_runlatch_on();
tick_nohz_restart_sched_tick(); if (!idle_uses_rcu)
rcu_idle_exit();
tick_nohz_idle_exit();
preempt_enable_no_resched(); preempt_enable_no_resched();
if (cpu_should_die()) if (cpu_should_die())
cpu_die(); cpu_die();
......
...@@ -563,7 +563,8 @@ static void yield_shared_processor(void) ...@@ -563,7 +563,8 @@ static void yield_shared_processor(void)
static void iseries_shared_idle(void) static void iseries_shared_idle(void)
{ {
while (1) { while (1) {
tick_nohz_stop_sched_tick(1); tick_nohz_idle_enter();
rcu_idle_enter();
while (!need_resched() && !hvlpevent_is_pending()) { while (!need_resched() && !hvlpevent_is_pending()) {
local_irq_disable(); local_irq_disable();
ppc64_runlatch_off(); ppc64_runlatch_off();
...@@ -577,7 +578,8 @@ static void iseries_shared_idle(void) ...@@ -577,7 +578,8 @@ static void iseries_shared_idle(void)
} }
ppc64_runlatch_on(); ppc64_runlatch_on();
tick_nohz_restart_sched_tick(); rcu_idle_exit();
tick_nohz_idle_exit();
if (hvlpevent_is_pending()) if (hvlpevent_is_pending())
process_iSeries_events(); process_iSeries_events();
...@@ -593,7 +595,8 @@ static void iseries_dedicated_idle(void) ...@@ -593,7 +595,8 @@ static void iseries_dedicated_idle(void)
set_thread_flag(TIF_POLLING_NRFLAG); set_thread_flag(TIF_POLLING_NRFLAG);
while (1) { while (1) {
tick_nohz_stop_sched_tick(1); tick_nohz_idle_enter();
rcu_idle_enter();
if (!need_resched()) { if (!need_resched()) {
while (!need_resched()) { while (!need_resched()) {
ppc64_runlatch_off(); ppc64_runlatch_off();
...@@ -610,7 +613,8 @@ static void iseries_dedicated_idle(void) ...@@ -610,7 +613,8 @@ static void iseries_dedicated_idle(void)
} }
ppc64_runlatch_on(); ppc64_runlatch_on();
tick_nohz_restart_sched_tick(); rcu_idle_exit();
tick_nohz_idle_exit();
preempt_enable_no_resched(); preempt_enable_no_resched();
schedule(); schedule();
preempt_disable(); preempt_disable();
......
...@@ -555,6 +555,8 @@ void __trace_hcall_entry(unsigned long opcode, unsigned long *args) ...@@ -555,6 +555,8 @@ void __trace_hcall_entry(unsigned long opcode, unsigned long *args)
(*depth)++; (*depth)++;
trace_hcall_entry(opcode, args); trace_hcall_entry(opcode, args);
if (opcode == H_CEDE)
rcu_idle_enter();
(*depth)--; (*depth)--;
out: out:
...@@ -575,6 +577,8 @@ void __trace_hcall_exit(long opcode, unsigned long retval, ...@@ -575,6 +577,8 @@ void __trace_hcall_exit(long opcode, unsigned long retval,
goto out; goto out;
(*depth)++; (*depth)++;
if (opcode == H_CEDE)
rcu_idle_exit();
trace_hcall_exit(opcode, retval, retbuf); trace_hcall_exit(opcode, retval, retbuf);
(*depth)--; (*depth)--;
......
...@@ -91,10 +91,12 @@ static void default_idle(void) ...@@ -91,10 +91,12 @@ static void default_idle(void)
void cpu_idle(void) void cpu_idle(void)
{ {
for (;;) { for (;;) {
tick_nohz_stop_sched_tick(1); tick_nohz_idle_enter();
rcu_idle_enter();
while (!need_resched()) while (!need_resched())
default_idle(); default_idle();
tick_nohz_restart_sched_tick(); rcu_idle_exit();
tick_nohz_idle_exit();
preempt_enable_no_resched(); preempt_enable_no_resched();
schedule(); schedule();
preempt_disable(); preempt_disable();
......
...@@ -89,7 +89,8 @@ void cpu_idle(void) ...@@ -89,7 +89,8 @@ void cpu_idle(void)
/* endless idle loop with no priority at all */ /* endless idle loop with no priority at all */
while (1) { while (1) {
tick_nohz_stop_sched_tick(1); tick_nohz_idle_enter();
rcu_idle_enter();
while (!need_resched()) { while (!need_resched()) {
check_pgt_cache(); check_pgt_cache();
...@@ -111,7 +112,8 @@ void cpu_idle(void) ...@@ -111,7 +112,8 @@ void cpu_idle(void)
start_critical_timings(); start_critical_timings();
} }
tick_nohz_restart_sched_tick(); rcu_idle_exit();
tick_nohz_idle_exit();
preempt_enable_no_resched(); preempt_enable_no_resched();
schedule(); schedule();
preempt_disable(); preempt_disable();
......
...@@ -95,12 +95,14 @@ void cpu_idle(void) ...@@ -95,12 +95,14 @@ void cpu_idle(void)
set_thread_flag(TIF_POLLING_NRFLAG); set_thread_flag(TIF_POLLING_NRFLAG);
while(1) { while(1) {
tick_nohz_stop_sched_tick(1); tick_nohz_idle_enter();
rcu_idle_enter();
while (!need_resched() && !cpu_is_offline(cpu)) while (!need_resched() && !cpu_is_offline(cpu))
sparc64_yield(cpu); sparc64_yield(cpu);
tick_nohz_restart_sched_tick(); rcu_idle_exit();
tick_nohz_idle_exit();
preempt_enable_no_resched(); preempt_enable_no_resched();
......
...@@ -84,7 +84,7 @@ static void prom_sync_me(void) ...@@ -84,7 +84,7 @@ static void prom_sync_me(void)
prom_printf("PROM SYNC COMMAND...\n"); prom_printf("PROM SYNC COMMAND...\n");
show_free_areas(0); show_free_areas(0);
if(current->pid != 0) { if (!is_idle_task(current)) {
local_irq_enable(); local_irq_enable();
sys_sync(); sys_sync();
local_irq_disable(); local_irq_disable();
......
...@@ -85,7 +85,8 @@ void cpu_idle(void) ...@@ -85,7 +85,8 @@ void cpu_idle(void)
/* endless idle loop with no priority at all */ /* endless idle loop with no priority at all */
while (1) { while (1) {
tick_nohz_stop_sched_tick(1); tick_nohz_idle_enter();
rcu_idle_enter();
while (!need_resched()) { while (!need_resched()) {
if (cpu_is_offline(cpu)) if (cpu_is_offline(cpu))
BUG(); /* no HOTPLUG_CPU */ BUG(); /* no HOTPLUG_CPU */
...@@ -105,7 +106,8 @@ void cpu_idle(void) ...@@ -105,7 +106,8 @@ void cpu_idle(void)
local_irq_enable(); local_irq_enable();
current_thread_info()->status |= TS_POLLING; current_thread_info()->status |= TS_POLLING;
} }
tick_nohz_restart_sched_tick(); rcu_idle_exit();
tick_nohz_idle_exit();
preempt_enable_no_resched(); preempt_enable_no_resched();
schedule(); schedule();
preempt_disable(); preempt_disable();
......
...@@ -54,7 +54,7 @@ static noinline void force_sig_info_fault(const char *type, int si_signo, ...@@ -54,7 +54,7 @@ static noinline void force_sig_info_fault(const char *type, int si_signo,
if (unlikely(tsk->pid < 2)) { if (unlikely(tsk->pid < 2)) {
panic("Signal %d (code %d) at %#lx sent to %s!", panic("Signal %d (code %d) at %#lx sent to %s!",
si_signo, si_code & 0xffff, address, si_signo, si_code & 0xffff, address,
tsk->pid ? "init" : "the idle task"); is_idle_task(tsk) ? "the idle task" : "init");
} }
info.si_signo = si_signo; info.si_signo = si_signo;
...@@ -515,7 +515,7 @@ static int handle_page_fault(struct pt_regs *regs, ...@@ -515,7 +515,7 @@ static int handle_page_fault(struct pt_regs *regs,
if (unlikely(tsk->pid < 2)) { if (unlikely(tsk->pid < 2)) {
panic("Kernel page fault running %s!", panic("Kernel page fault running %s!",
tsk->pid ? "init" : "the idle task"); is_idle_task(tsk) ? "the idle task" : "init");
} }
/* /*
......
...@@ -246,10 +246,12 @@ void default_idle(void) ...@@ -246,10 +246,12 @@ void default_idle(void)
if (need_resched()) if (need_resched())
schedule(); schedule();
tick_nohz_stop_sched_tick(1); tick_nohz_idle_enter();
rcu_idle_enter();
nsecs = disable_timer(); nsecs = disable_timer();
idle_sleep(nsecs); idle_sleep(nsecs);
tick_nohz_restart_sched_tick(); rcu_idle_exit();
tick_nohz_idle_exit();
} }
} }
......
...@@ -55,7 +55,8 @@ void cpu_idle(void) ...@@ -55,7 +55,8 @@ void cpu_idle(void)
{ {
/* endless idle loop with no priority at all */ /* endless idle loop with no priority at all */
while (1) { while (1) {
tick_nohz_stop_sched_tick(1); tick_nohz_idle_enter();
rcu_idle_enter();
while (!need_resched()) { while (!need_resched()) {
local_irq_disable(); local_irq_disable();
stop_critical_timings(); stop_critical_timings();
...@@ -63,7 +64,8 @@ void cpu_idle(void) ...@@ -63,7 +64,8 @@ void cpu_idle(void)
local_irq_enable(); local_irq_enable();
start_critical_timings(); start_critical_timings();
} }
tick_nohz_restart_sched_tick(); rcu_idle_exit();
tick_nohz_idle_exit();
preempt_enable_no_resched(); preempt_enable_no_resched();
schedule(); schedule();
preempt_disable(); preempt_disable();
......
...@@ -876,8 +876,8 @@ void __irq_entry smp_apic_timer_interrupt(struct pt_regs *regs) ...@@ -876,8 +876,8 @@ void __irq_entry smp_apic_timer_interrupt(struct pt_regs *regs)
* Besides, if we don't timer interrupts ignore the global * Besides, if we don't timer interrupts ignore the global
* interrupt lock, which is the WrongThing (tm) to do. * interrupt lock, which is the WrongThing (tm) to do.
*/ */
exit_idle();
irq_enter(); irq_enter();
exit_idle();
local_apic_timer_interrupt(); local_apic_timer_interrupt();
irq_exit(); irq_exit();
...@@ -1809,8 +1809,8 @@ void smp_spurious_interrupt(struct pt_regs *regs) ...@@ -1809,8 +1809,8 @@ void smp_spurious_interrupt(struct pt_regs *regs)
{ {
u32 v; u32 v;
exit_idle();
irq_enter(); irq_enter();
exit_idle();
/* /*
* Check if this really is a spurious interrupt and ACK it * Check if this really is a spurious interrupt and ACK it
* if it is a vectored one. Just in case... * if it is a vectored one. Just in case...
...@@ -1846,8 +1846,8 @@ void smp_error_interrupt(struct pt_regs *regs) ...@@ -1846,8 +1846,8 @@ void smp_error_interrupt(struct pt_regs *regs)
"Illegal register address", /* APIC Error Bit 7 */ "Illegal register address", /* APIC Error Bit 7 */
}; };
exit_idle();
irq_enter(); irq_enter();
exit_idle();
/* First tickle the hardware, only then report what went on. -- REW */ /* First tickle the hardware, only then report what went on. -- REW */
v0 = apic_read(APIC_ESR); v0 = apic_read(APIC_ESR);
apic_write(APIC_ESR, 0); apic_write(APIC_ESR, 0);
......
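The x86 hunks here and below consistently move exit_idle() to after irq_enter(). A plausible reading, sketched with a hypothetical handler: irq_enter() calls rcu_irq_enter(), so running it first ensures that the RCU-using idle notifiers invoked from exit_idle() remain legal even when the interrupt arrived while the CPU was in RCU's dyntick-idle state.

	/* Hypothetical x86 interrupt entry, showing only the new ordering. */
	void smp_example_interrupt(struct pt_regs *regs)
	{
		irq_enter();	/* rcu_irq_enter(): RCU starts watching this CPU again */
		exit_idle();	/* idle-exit notifiers may now use RCU read-side primitives */

		/* ... handle the interrupt ... */

		irq_exit();
	}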
...@@ -2421,8 +2421,8 @@ asmlinkage void smp_irq_move_cleanup_interrupt(void) ...@@ -2421,8 +2421,8 @@ asmlinkage void smp_irq_move_cleanup_interrupt(void)
unsigned vector, me; unsigned vector, me;
ack_APIC_irq(); ack_APIC_irq();
exit_idle();
irq_enter(); irq_enter();
exit_idle();
me = smp_processor_id(); me = smp_processor_id();
for (vector = FIRST_EXTERNAL_VECTOR; vector < NR_VECTORS; vector++) { for (vector = FIRST_EXTERNAL_VECTOR; vector < NR_VECTORS; vector++) {
......
...@@ -397,8 +397,8 @@ static void (*smp_thermal_vector)(void) = unexpected_thermal_interrupt; ...@@ -397,8 +397,8 @@ static void (*smp_thermal_vector)(void) = unexpected_thermal_interrupt;
asmlinkage void smp_thermal_interrupt(struct pt_regs *regs) asmlinkage void smp_thermal_interrupt(struct pt_regs *regs)
{ {
exit_idle();
irq_enter(); irq_enter();
exit_idle();
inc_irq_stat(irq_thermal_count); inc_irq_stat(irq_thermal_count);
smp_thermal_vector(); smp_thermal_vector();
irq_exit(); irq_exit();
......
...@@ -19,8 +19,8 @@ void (*mce_threshold_vector)(void) = default_threshold_interrupt; ...@@ -19,8 +19,8 @@ void (*mce_threshold_vector)(void) = default_threshold_interrupt;
asmlinkage void smp_threshold_interrupt(void) asmlinkage void smp_threshold_interrupt(void)
{ {
exit_idle();
irq_enter(); irq_enter();
exit_idle();
inc_irq_stat(irq_threshold_count); inc_irq_stat(irq_threshold_count);
mce_threshold_vector(); mce_threshold_vector();
irq_exit(); irq_exit();
......
...@@ -181,8 +181,8 @@ unsigned int __irq_entry do_IRQ(struct pt_regs *regs) ...@@ -181,8 +181,8 @@ unsigned int __irq_entry do_IRQ(struct pt_regs *regs)
unsigned vector = ~regs->orig_ax; unsigned vector = ~regs->orig_ax;
unsigned irq; unsigned irq;
exit_idle();
irq_enter(); irq_enter();
exit_idle();
irq = __this_cpu_read(vector_irq[vector]); irq = __this_cpu_read(vector_irq[vector]);
...@@ -209,10 +209,10 @@ void smp_x86_platform_ipi(struct pt_regs *regs) ...@@ -209,10 +209,10 @@ void smp_x86_platform_ipi(struct pt_regs *regs)
ack_APIC_irq(); ack_APIC_irq();
exit_idle();
irq_enter(); irq_enter();
exit_idle();
inc_irq_stat(x86_platform_ipis); inc_irq_stat(x86_platform_ipis);
if (x86_platform_ipi_callback) if (x86_platform_ipi_callback)
......
...@@ -99,7 +99,8 @@ void cpu_idle(void) ...@@ -99,7 +99,8 @@ void cpu_idle(void)
/* endless idle loop with no priority at all */ /* endless idle loop with no priority at all */
while (1) { while (1) {
tick_nohz_stop_sched_tick(1); tick_nohz_idle_enter();
rcu_idle_enter();
while (!need_resched()) { while (!need_resched()) {
check_pgt_cache(); check_pgt_cache();
...@@ -116,7 +117,8 @@ void cpu_idle(void) ...@@ -116,7 +117,8 @@ void cpu_idle(void)
pm_idle(); pm_idle();
start_critical_timings(); start_critical_timings();
} }
tick_nohz_restart_sched_tick(); rcu_idle_exit();
tick_nohz_idle_exit();
preempt_enable_no_resched(); preempt_enable_no_resched();
schedule(); schedule();
preempt_disable(); preempt_disable();
......
...@@ -122,7 +122,7 @@ void cpu_idle(void) ...@@ -122,7 +122,7 @@ void cpu_idle(void)
/* endless idle loop with no priority at all */ /* endless idle loop with no priority at all */
while (1) { while (1) {
tick_nohz_stop_sched_tick(1); tick_nohz_idle_enter();
while (!need_resched()) { while (!need_resched()) {
rmb(); rmb();
...@@ -139,8 +139,14 @@ void cpu_idle(void) ...@@ -139,8 +139,14 @@ void cpu_idle(void)
enter_idle(); enter_idle();
/* Don't trace irqs off for idle */ /* Don't trace irqs off for idle */
stop_critical_timings(); stop_critical_timings();
/* enter_idle() needs rcu for notifiers */
rcu_idle_enter();
if (cpuidle_idle_call()) if (cpuidle_idle_call())
pm_idle(); pm_idle();
rcu_idle_exit();
start_critical_timings(); start_critical_timings();
/* In many cases the interrupt that ended idle /* In many cases the interrupt that ended idle
...@@ -149,7 +155,7 @@ void cpu_idle(void) ...@@ -149,7 +155,7 @@ void cpu_idle(void)
__exit_idle(); __exit_idle();
} }
tick_nohz_restart_sched_tick(); tick_nohz_idle_exit();
preempt_enable_no_resched(); preempt_enable_no_resched();
schedule(); schedule();
preempt_disable(); preempt_disable();
......
...@@ -247,6 +247,13 @@ struct sys_device *get_cpu_sysdev(unsigned cpu) ...@@ -247,6 +247,13 @@ struct sys_device *get_cpu_sysdev(unsigned cpu)
} }
EXPORT_SYMBOL_GPL(get_cpu_sysdev); EXPORT_SYMBOL_GPL(get_cpu_sysdev);
bool cpu_is_hotpluggable(unsigned cpu)
{
struct sys_device *dev = get_cpu_sysdev(cpu);
return dev && container_of(dev, struct cpu, sysdev)->hotpluggable;
}
EXPORT_SYMBOL_GPL(cpu_is_hotpluggable);
int __init cpu_dev_init(void) int __init cpu_dev_init(void)
{ {
int err; int err;
......
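The new helper lets callers such as rcutorture (see the "Make rcutorture test for hotpluggability before offlining CPUs" commit above) check a CPU before attempting to offline it. A hedged usage sketch, with maybe_offline_cpu() as a hypothetical caller:

	#include <linux/cpu.h>
	#include <linux/kernel.h>

	static void maybe_offline_cpu(unsigned int cpu)
	{
		if (!cpu_is_hotpluggable(cpu))
			return;		/* platform says this CPU must stay online */
		if (cpu_down(cpu))	/* nonzero return means the offline failed */
			pr_warn("offline of CPU %u failed\n", cpu);
	}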
...@@ -27,6 +27,7 @@ struct cpu { ...@@ -27,6 +27,7 @@ struct cpu {
extern int register_cpu(struct cpu *cpu, int num); extern int register_cpu(struct cpu *cpu, int num);
extern struct sys_device *get_cpu_sysdev(unsigned cpu); extern struct sys_device *get_cpu_sysdev(unsigned cpu);
extern bool cpu_is_hotpluggable(unsigned cpu);
extern int cpu_add_sysdev_attr(struct sysdev_attribute *attr); extern int cpu_add_sysdev_attr(struct sysdev_attribute *attr);
extern void cpu_remove_sysdev_attr(struct sysdev_attribute *attr); extern void cpu_remove_sysdev_attr(struct sysdev_attribute *attr);
......
...@@ -139,20 +139,7 @@ static inline void account_system_vtime(struct task_struct *tsk) ...@@ -139,20 +139,7 @@ static inline void account_system_vtime(struct task_struct *tsk)
extern void account_system_vtime(struct task_struct *tsk); extern void account_system_vtime(struct task_struct *tsk);
#endif #endif
#if defined(CONFIG_NO_HZ)
#if defined(CONFIG_TINY_RCU) || defined(CONFIG_TINY_PREEMPT_RCU) #if defined(CONFIG_TINY_RCU) || defined(CONFIG_TINY_PREEMPT_RCU)
extern void rcu_enter_nohz(void);
extern void rcu_exit_nohz(void);
static inline void rcu_irq_enter(void)
{
rcu_exit_nohz();
}
static inline void rcu_irq_exit(void)
{
rcu_enter_nohz();
}
static inline void rcu_nmi_enter(void) static inline void rcu_nmi_enter(void)
{ {
...@@ -163,17 +150,9 @@ static inline void rcu_nmi_exit(void) ...@@ -163,17 +150,9 @@ static inline void rcu_nmi_exit(void)
} }
#else #else
extern void rcu_irq_enter(void);
extern void rcu_irq_exit(void);
extern void rcu_nmi_enter(void); extern void rcu_nmi_enter(void);
extern void rcu_nmi_exit(void); extern void rcu_nmi_exit(void);
#endif #endif
#else
# define rcu_irq_enter() do { } while (0)
# define rcu_irq_exit() do { } while (0)
# define rcu_nmi_enter() do { } while (0)
# define rcu_nmi_exit() do { } while (0)
#endif /* #if defined(CONFIG_NO_HZ) */
/* /*
* It is safe to do non-atomic ops on ->hardirq_context, * It is safe to do non-atomic ops on ->hardirq_context,
......
...@@ -51,6 +51,8 @@ extern int rcutorture_runnable; /* for sysctl */ ...@@ -51,6 +51,8 @@ extern int rcutorture_runnable; /* for sysctl */
#if defined(CONFIG_TREE_RCU) || defined(CONFIG_TREE_PREEMPT_RCU) #if defined(CONFIG_TREE_RCU) || defined(CONFIG_TREE_PREEMPT_RCU)
extern void rcutorture_record_test_transition(void); extern void rcutorture_record_test_transition(void);
extern void rcutorture_record_progress(unsigned long vernum); extern void rcutorture_record_progress(unsigned long vernum);
extern void do_trace_rcu_torture_read(char *rcutorturename,
struct rcu_head *rhp);
#else #else
static inline void rcutorture_record_test_transition(void) static inline void rcutorture_record_test_transition(void)
{ {
...@@ -58,6 +60,12 @@ static inline void rcutorture_record_test_transition(void) ...@@ -58,6 +60,12 @@ static inline void rcutorture_record_test_transition(void)
static inline void rcutorture_record_progress(unsigned long vernum) static inline void rcutorture_record_progress(unsigned long vernum)
{ {
} }
#ifdef CONFIG_RCU_TRACE
extern void do_trace_rcu_torture_read(char *rcutorturename,
struct rcu_head *rhp);
#else
#define do_trace_rcu_torture_read(rcutorturename, rhp) do { } while (0)
#endif
#endif #endif
#define UINT_CMP_GE(a, b) (UINT_MAX / 2 >= (a) - (b)) #define UINT_CMP_GE(a, b) (UINT_MAX / 2 >= (a) - (b))
...@@ -177,23 +185,10 @@ extern void rcu_sched_qs(int cpu); ...@@ -177,23 +185,10 @@ extern void rcu_sched_qs(int cpu);
extern void rcu_bh_qs(int cpu); extern void rcu_bh_qs(int cpu);
extern void rcu_check_callbacks(int cpu, int user); extern void rcu_check_callbacks(int cpu, int user);
struct notifier_block; struct notifier_block;
extern void rcu_idle_enter(void);
#ifdef CONFIG_NO_HZ extern void rcu_idle_exit(void);
extern void rcu_irq_enter(void);
extern void rcu_enter_nohz(void); extern void rcu_irq_exit(void);
extern void rcu_exit_nohz(void);
#else /* #ifdef CONFIG_NO_HZ */
static inline void rcu_enter_nohz(void)
{
}
static inline void rcu_exit_nohz(void)
{
}
#endif /* #else #ifdef CONFIG_NO_HZ */
/* /*
* Infrastructure to implement the synchronize_() primitives in * Infrastructure to implement the synchronize_() primitives in
...@@ -233,22 +228,30 @@ static inline void destroy_rcu_head_on_stack(struct rcu_head *head) ...@@ -233,22 +228,30 @@ static inline void destroy_rcu_head_on_stack(struct rcu_head *head)
#ifdef CONFIG_DEBUG_LOCK_ALLOC #ifdef CONFIG_DEBUG_LOCK_ALLOC
extern struct lockdep_map rcu_lock_map; #ifdef CONFIG_PROVE_RCU
# define rcu_read_acquire() \ extern int rcu_is_cpu_idle(void);
lock_acquire(&rcu_lock_map, 0, 0, 2, 1, NULL, _THIS_IP_) #else /* !CONFIG_PROVE_RCU */
# define rcu_read_release() lock_release(&rcu_lock_map, 1, _THIS_IP_) static inline int rcu_is_cpu_idle(void)
{
return 0;
}
#endif /* else !CONFIG_PROVE_RCU */
extern struct lockdep_map rcu_bh_lock_map; static inline void rcu_lock_acquire(struct lockdep_map *map)
# define rcu_read_acquire_bh() \ {
lock_acquire(&rcu_bh_lock_map, 0, 0, 2, 1, NULL, _THIS_IP_) WARN_ON_ONCE(rcu_is_cpu_idle());
# define rcu_read_release_bh() lock_release(&rcu_bh_lock_map, 1, _THIS_IP_) lock_acquire(map, 0, 0, 2, 1, NULL, _THIS_IP_);
}
extern struct lockdep_map rcu_sched_lock_map; static inline void rcu_lock_release(struct lockdep_map *map)
# define rcu_read_acquire_sched() \ {
lock_acquire(&rcu_sched_lock_map, 0, 0, 2, 1, NULL, _THIS_IP_) WARN_ON_ONCE(rcu_is_cpu_idle());
# define rcu_read_release_sched() \ lock_release(map, 1, _THIS_IP_);
lock_release(&rcu_sched_lock_map, 1, _THIS_IP_) }
extern struct lockdep_map rcu_lock_map;
extern struct lockdep_map rcu_bh_lock_map;
extern struct lockdep_map rcu_sched_lock_map;
extern int debug_lockdep_rcu_enabled(void); extern int debug_lockdep_rcu_enabled(void);
/** /**
...@@ -262,11 +265,18 @@ extern int debug_lockdep_rcu_enabled(void); ...@@ -262,11 +265,18 @@ extern int debug_lockdep_rcu_enabled(void);
* *
* Checks debug_lockdep_rcu_enabled() to prevent false positives during boot * Checks debug_lockdep_rcu_enabled() to prevent false positives during boot
* and while lockdep is disabled. * and while lockdep is disabled.
*
* Note that rcu_read_lock() and the matching rcu_read_unlock() must
* occur in the same context, for example, it is illegal to invoke
* rcu_read_unlock() in process context if the matching rcu_read_lock()
* was invoked from within an irq handler.
*/ */
static inline int rcu_read_lock_held(void) static inline int rcu_read_lock_held(void)
{ {
if (!debug_lockdep_rcu_enabled()) if (!debug_lockdep_rcu_enabled())
return 1; return 1;
if (rcu_is_cpu_idle())
return 0;
return lock_is_held(&rcu_lock_map); return lock_is_held(&rcu_lock_map);
} }
...@@ -290,6 +300,19 @@ extern int rcu_read_lock_bh_held(void); ...@@ -290,6 +300,19 @@ extern int rcu_read_lock_bh_held(void);
* *
* Check debug_lockdep_rcu_enabled() to prevent false positives during boot * Check debug_lockdep_rcu_enabled() to prevent false positives during boot
* and while lockdep is disabled. * and while lockdep is disabled.
*
* Note that if the CPU is in the idle loop from an RCU point of
* view (ie: that we are in the section between rcu_idle_enter() and
* rcu_idle_exit()) then rcu_read_lock_held() returns false even if the CPU
* did an rcu_read_lock(). The reason for this is that RCU ignores CPUs
* that are in such a section, considering these as in extended quiescent
* state, so such a CPU is effectively never in an RCU read-side critical
* section regardless of what RCU primitives it invokes. This state of
* affairs is required --- we need to keep an RCU-free window in idle
* where the CPU may possibly enter into low power mode. This way we can
* notice an extended quiescent state to other CPUs that started a grace
* period. Otherwise we would delay any grace period as long as we run in
* the idle task.
*/ */
#ifdef CONFIG_PREEMPT_COUNT #ifdef CONFIG_PREEMPT_COUNT
static inline int rcu_read_lock_sched_held(void) static inline int rcu_read_lock_sched_held(void)
...@@ -298,6 +321,8 @@ static inline int rcu_read_lock_sched_held(void) ...@@ -298,6 +321,8 @@ static inline int rcu_read_lock_sched_held(void)
if (!debug_lockdep_rcu_enabled()) if (!debug_lockdep_rcu_enabled())
return 1; return 1;
if (rcu_is_cpu_idle())
return 0;
if (debug_locks) if (debug_locks)
lockdep_opinion = lock_is_held(&rcu_sched_lock_map); lockdep_opinion = lock_is_held(&rcu_sched_lock_map);
return lockdep_opinion || preempt_count() != 0 || irqs_disabled(); return lockdep_opinion || preempt_count() != 0 || irqs_disabled();
...@@ -311,12 +336,8 @@ static inline int rcu_read_lock_sched_held(void) ...@@ -311,12 +336,8 @@ static inline int rcu_read_lock_sched_held(void)
#else /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */ #else /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */
# define rcu_read_acquire() do { } while (0) # define rcu_lock_acquire(a) do { } while (0)
# define rcu_read_release() do { } while (0) # define rcu_lock_release(a) do { } while (0)
# define rcu_read_acquire_bh() do { } while (0)
# define rcu_read_release_bh() do { } while (0)
# define rcu_read_acquire_sched() do { } while (0)
# define rcu_read_release_sched() do { } while (0)
static inline int rcu_read_lock_held(void) static inline int rcu_read_lock_held(void)
{ {
...@@ -637,7 +658,7 @@ static inline void rcu_read_lock(void) ...@@ -637,7 +658,7 @@ static inline void rcu_read_lock(void)
{ {
__rcu_read_lock(); __rcu_read_lock();
__acquire(RCU); __acquire(RCU);
rcu_read_acquire(); rcu_lock_acquire(&rcu_lock_map);
} }
/* /*
...@@ -657,7 +678,7 @@ static inline void rcu_read_lock(void) ...@@ -657,7 +678,7 @@ static inline void rcu_read_lock(void)
*/ */
static inline void rcu_read_unlock(void) static inline void rcu_read_unlock(void)
{ {
rcu_read_release(); rcu_lock_release(&rcu_lock_map);
__release(RCU); __release(RCU);
__rcu_read_unlock(); __rcu_read_unlock();
} }
...@@ -673,12 +694,17 @@ static inline void rcu_read_unlock(void) ...@@ -673,12 +694,17 @@ static inline void rcu_read_unlock(void)
* critical sections in interrupt context can use just rcu_read_lock(), * critical sections in interrupt context can use just rcu_read_lock(),
* though this should at least be commented to avoid confusing people * though this should at least be commented to avoid confusing people
* reading the code. * reading the code.
*
* Note that rcu_read_lock_bh() and the matching rcu_read_unlock_bh()
* must occur in the same context, for example, it is illegal to invoke
* rcu_read_unlock_bh() from one task if the matching rcu_read_lock_bh()
* was invoked from some other task.
*/ */
static inline void rcu_read_lock_bh(void) static inline void rcu_read_lock_bh(void)
{ {
local_bh_disable(); local_bh_disable();
__acquire(RCU_BH); __acquire(RCU_BH);
rcu_read_acquire_bh(); rcu_lock_acquire(&rcu_bh_lock_map);
} }
/* /*
...@@ -688,7 +714,7 @@ static inline void rcu_read_lock_bh(void) ...@@ -688,7 +714,7 @@ static inline void rcu_read_lock_bh(void)
*/ */
static inline void rcu_read_unlock_bh(void) static inline void rcu_read_unlock_bh(void)
{ {
rcu_read_release_bh(); rcu_lock_release(&rcu_bh_lock_map);
__release(RCU_BH); __release(RCU_BH);
local_bh_enable(); local_bh_enable();
} }
...@@ -700,12 +726,17 @@ static inline void rcu_read_unlock_bh(void) ...@@ -700,12 +726,17 @@ static inline void rcu_read_unlock_bh(void)
* are being done using call_rcu_sched() or synchronize_rcu_sched(). * are being done using call_rcu_sched() or synchronize_rcu_sched().
* Read-side critical sections can also be introduced by anything that * Read-side critical sections can also be introduced by anything that
* disables preemption, including local_irq_disable() and friends. * disables preemption, including local_irq_disable() and friends.
*
* Note that rcu_read_lock_sched() and the matching rcu_read_unlock_sched()
* must occur in the same context, for example, it is illegal to invoke
* rcu_read_unlock_sched() from process context if the matching
* rcu_read_lock_sched() was invoked from an NMI handler.
*/ */
static inline void rcu_read_lock_sched(void) static inline void rcu_read_lock_sched(void)
{ {
preempt_disable(); preempt_disable();
__acquire(RCU_SCHED); __acquire(RCU_SCHED);
rcu_read_acquire_sched(); rcu_lock_acquire(&rcu_sched_lock_map);
} }
/* Used by lockdep and tracing: cannot be traced, cannot call lockdep. */ /* Used by lockdep and tracing: cannot be traced, cannot call lockdep. */
...@@ -722,7 +753,7 @@ static inline notrace void rcu_read_lock_sched_notrace(void) ...@@ -722,7 +753,7 @@ static inline notrace void rcu_read_lock_sched_notrace(void)
*/ */
static inline void rcu_read_unlock_sched(void) static inline void rcu_read_unlock_sched(void)
{ {
rcu_read_release_sched(); rcu_lock_release(&rcu_sched_lock_map);
__release(RCU_SCHED); __release(RCU_SCHED);
preempt_enable(); preempt_enable();
} }
......
...@@ -2070,6 +2070,14 @@ extern int sched_setscheduler(struct task_struct *, int, ...@@ -2070,6 +2070,14 @@ extern int sched_setscheduler(struct task_struct *, int,
extern int sched_setscheduler_nocheck(struct task_struct *, int, extern int sched_setscheduler_nocheck(struct task_struct *, int,
const struct sched_param *); const struct sched_param *);
extern struct task_struct *idle_task(int cpu); extern struct task_struct *idle_task(int cpu);
/**
* is_idle_task - is the specified task an idle task?
* @p: the task in question.
*/
static inline bool is_idle_task(struct task_struct *p)
{
return p->pid == 0;
}
extern struct task_struct *curr_task(int cpu); extern struct task_struct *curr_task(int cpu);
extern void set_curr_task(int cpu, struct task_struct *p); extern void set_curr_task(int cpu, struct task_struct *p);
......
...@@ -28,6 +28,7 @@ ...@@ -28,6 +28,7 @@
#define _LINUX_SRCU_H #define _LINUX_SRCU_H
#include <linux/mutex.h> #include <linux/mutex.h>
#include <linux/rcupdate.h>
struct srcu_struct_array { struct srcu_struct_array {
int c[2]; int c[2];
...@@ -60,18 +61,10 @@ int __init_srcu_struct(struct srcu_struct *sp, const char *name, ...@@ -60,18 +61,10 @@ int __init_srcu_struct(struct srcu_struct *sp, const char *name,
__init_srcu_struct((sp), #sp, &__srcu_key); \ __init_srcu_struct((sp), #sp, &__srcu_key); \
}) })
# define srcu_read_acquire(sp) \
lock_acquire(&(sp)->dep_map, 0, 0, 2, 1, NULL, _THIS_IP_)
# define srcu_read_release(sp) \
lock_release(&(sp)->dep_map, 1, _THIS_IP_)
#else /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */ #else /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */
int init_srcu_struct(struct srcu_struct *sp); int init_srcu_struct(struct srcu_struct *sp);
# define srcu_read_acquire(sp) do { } while (0)
# define srcu_read_release(sp) do { } while (0)
#endif /* #else #ifdef CONFIG_DEBUG_LOCK_ALLOC */ #endif /* #else #ifdef CONFIG_DEBUG_LOCK_ALLOC */
void cleanup_srcu_struct(struct srcu_struct *sp); void cleanup_srcu_struct(struct srcu_struct *sp);
...@@ -90,12 +83,32 @@ long srcu_batches_completed(struct srcu_struct *sp); ...@@ -90,12 +83,32 @@ long srcu_batches_completed(struct srcu_struct *sp);
* read-side critical section. In absence of CONFIG_DEBUG_LOCK_ALLOC, * read-side critical section. In absence of CONFIG_DEBUG_LOCK_ALLOC,
* this assumes we are in an SRCU read-side critical section unless it can * this assumes we are in an SRCU read-side critical section unless it can
* prove otherwise. * prove otherwise.
*
* Checks debug_lockdep_rcu_enabled() to prevent false positives during boot
* and while lockdep is disabled.
*
* Note that if the CPU is in the idle loop from an RCU point of view
* (ie: that we are in the section between rcu_idle_enter() and
* rcu_idle_exit()) then srcu_read_lock_held() returns false even if
* the CPU did an srcu_read_lock(). The reason for this is that RCU
* ignores CPUs that are in such a section, considering these as in
* extended quiescent state, so such a CPU is effectively never in an
* RCU read-side critical section regardless of what RCU primitives it
* invokes. This state of affairs is required --- we need to keep an
* RCU-free window in idle where the CPU may possibly enter into low
* power mode. This way we can notice an extended quiescent state to
* other CPUs that started a grace period. Otherwise we would delay any
* grace period as long as we run in the idle task.
*/ */
static inline int srcu_read_lock_held(struct srcu_struct *sp) static inline int srcu_read_lock_held(struct srcu_struct *sp)
{ {
if (debug_locks) if (rcu_is_cpu_idle())
return lock_is_held(&sp->dep_map); return 0;
if (!debug_lockdep_rcu_enabled())
return 1; return 1;
return lock_is_held(&sp->dep_map);
} }
#else /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */ #else /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */
...@@ -145,12 +158,17 @@ static inline int srcu_read_lock_held(struct srcu_struct *sp) ...@@ -145,12 +158,17 @@ static inline int srcu_read_lock_held(struct srcu_struct *sp)
* one way to indirectly wait on an SRCU grace period is to acquire * one way to indirectly wait on an SRCU grace period is to acquire
* a mutex that is held elsewhere while calling synchronize_srcu() or * a mutex that is held elsewhere while calling synchronize_srcu() or
* synchronize_srcu_expedited(). * synchronize_srcu_expedited().
*
* Note that srcu_read_lock() and the matching srcu_read_unlock() must
* occur in the same context, for example, it is illegal to invoke
* srcu_read_unlock() in an irq handler if the matching srcu_read_lock()
* was invoked in process context.
*/ */
static inline int srcu_read_lock(struct srcu_struct *sp) __acquires(sp) static inline int srcu_read_lock(struct srcu_struct *sp) __acquires(sp)
{ {
int retval = __srcu_read_lock(sp); int retval = __srcu_read_lock(sp);
srcu_read_acquire(sp); rcu_lock_acquire(&(sp)->dep_map);
return retval; return retval;
} }
...@@ -164,8 +182,51 @@ static inline int srcu_read_lock(struct srcu_struct *sp) __acquires(sp) ...@@ -164,8 +182,51 @@ static inline int srcu_read_lock(struct srcu_struct *sp) __acquires(sp)
static inline void srcu_read_unlock(struct srcu_struct *sp, int idx) static inline void srcu_read_unlock(struct srcu_struct *sp, int idx)
__releases(sp) __releases(sp)
{ {
srcu_read_release(sp); rcu_lock_release(&(sp)->dep_map);
__srcu_read_unlock(sp, idx);
}
/**
* srcu_read_lock_raw - register a new reader for an SRCU-protected structure.
* @sp: srcu_struct in which to register the new reader.
*
* Enter an SRCU read-side critical section. Similar to srcu_read_lock(),
* but avoids the RCU-lockdep checking. This means that it is legal to
* use srcu_read_lock_raw() in one context, for example, in an exception
* handler, and then have the matching srcu_read_unlock_raw() in another
* context, for example in the task that took the exception.
*
* However, the entire SRCU read-side critical section must reside within a
* single task. For example, beware of using srcu_read_lock_raw() in
* a device interrupt handler and srcu_read_unlock_raw() in the interrupted
* task: This will not work if interrupts are threaded.
*/
static inline int srcu_read_lock_raw(struct srcu_struct *sp)
{
unsigned long flags;
int ret;
local_irq_save(flags);
ret = __srcu_read_lock(sp);
local_irq_restore(flags);
return ret;
}
/**
* srcu_read_unlock_raw - unregister reader from an SRCU-protected structure.
* @sp: srcu_struct in which to unregister the old reader.
* @idx: return value from corresponding srcu_read_lock_raw().
*
* Exit an SRCU read-side critical section without lockdep-RCU checking.
* See srcu_read_lock_raw() for more details.
*/
static inline void srcu_read_unlock_raw(struct srcu_struct *sp, int idx)
{
unsigned long flags;
local_irq_save(flags);
__srcu_read_unlock(sp, idx); __srcu_read_unlock(sp, idx);
local_irq_restore(flags);
} }
#endif #endif
...@@ -7,6 +7,7 @@ ...@@ -7,6 +7,7 @@
#define _LINUX_TICK_H #define _LINUX_TICK_H
#include <linux/clockchips.h> #include <linux/clockchips.h>
#include <linux/irqflags.h>
#ifdef CONFIG_GENERIC_CLOCKEVENTS #ifdef CONFIG_GENERIC_CLOCKEVENTS
...@@ -121,14 +122,16 @@ static inline int tick_oneshot_mode_active(void) { return 0; } ...@@ -121,14 +122,16 @@ static inline int tick_oneshot_mode_active(void) { return 0; }
#endif /* !CONFIG_GENERIC_CLOCKEVENTS */ #endif /* !CONFIG_GENERIC_CLOCKEVENTS */
# ifdef CONFIG_NO_HZ # ifdef CONFIG_NO_HZ
extern void tick_nohz_stop_sched_tick(int inidle); extern void tick_nohz_idle_enter(void);
extern void tick_nohz_restart_sched_tick(void); extern void tick_nohz_idle_exit(void);
extern void tick_nohz_irq_exit(void);
extern ktime_t tick_nohz_get_sleep_length(void); extern ktime_t tick_nohz_get_sleep_length(void);
extern u64 get_cpu_idle_time_us(int cpu, u64 *last_update_time); extern u64 get_cpu_idle_time_us(int cpu, u64 *last_update_time);
extern u64 get_cpu_iowait_time_us(int cpu, u64 *last_update_time); extern u64 get_cpu_iowait_time_us(int cpu, u64 *last_update_time);
# else # else
static inline void tick_nohz_stop_sched_tick(int inidle) { } static inline void tick_nohz_idle_enter(void) { }
static inline void tick_nohz_restart_sched_tick(void) { } static inline void tick_nohz_idle_exit(void) { }
static inline ktime_t tick_nohz_get_sleep_length(void) static inline ktime_t tick_nohz_get_sleep_length(void)
{ {
ktime_t len = { .tv64 = NSEC_PER_SEC/HZ }; ktime_t len = { .tv64 = NSEC_PER_SEC/HZ };
......
...@@ -241,24 +241,73 @@ TRACE_EVENT(rcu_fqs, ...@@ -241,24 +241,73 @@ TRACE_EVENT(rcu_fqs,
/* /*
* Tracepoint for dyntick-idle entry/exit events. These take a string * Tracepoint for dyntick-idle entry/exit events. These take a string
* as argument: "Start" for entering dyntick-idle mode and "End" for * as argument: "Start" for entering dyntick-idle mode, "End" for
* leaving it. * leaving it, "--=" for events moving towards idle, and "++=" for events
* moving away from idle. "Error on entry: not idle task" and "Error on
* exit: not idle task" indicate that a non-idle task is erroneously
* toying with the idle loop.
*
* These events also take a pair of numbers, which indicate the nesting
* depth before and after the event of interest. Note that task-related
* events use the upper bits of each number, while interrupt-related
* events use the lower bits.
*/ */
TRACE_EVENT(rcu_dyntick, TRACE_EVENT(rcu_dyntick,
TP_PROTO(char *polarity), TP_PROTO(char *polarity, long long oldnesting, long long newnesting),
TP_ARGS(polarity), TP_ARGS(polarity, oldnesting, newnesting),
TP_STRUCT__entry( TP_STRUCT__entry(
__field(char *, polarity) __field(char *, polarity)
__field(long long, oldnesting)
__field(long long, newnesting)
), ),
TP_fast_assign( TP_fast_assign(
__entry->polarity = polarity; __entry->polarity = polarity;
__entry->oldnesting = oldnesting;
__entry->newnesting = newnesting;
),
TP_printk("%s %llx %llx", __entry->polarity,
__entry->oldnesting, __entry->newnesting)
);
/*
* Tracepoint for RCU preparation for idle, the goal being to get RCU
* processing done so that the current CPU can shut off its scheduling
* clock and enter dyntick-idle mode. One way to accomplish this is
* to drain all RCU callbacks from this CPU, and the other is to have
* done everything RCU requires for the current grace period. In this
* latter case, the CPU will be awakened at the end of the current grace
* period in order to process the remainder of its callbacks.
*
* These tracepoints take a string as argument:
*
* "No callbacks": Nothing to do, no callbacks on this CPU.
* "In holdoff": Nothing to do, holding off after unsuccessful attempt.
* "Begin holdoff": Attempt failed, don't retry until next jiffy.
* "Dyntick with callbacks": Entering dyntick-idle despite callbacks.
* "More callbacks": Still more callbacks, try again to clear them out.
* "Callbacks drained": All callbacks processed, off to dyntick idle!
* "Timer": Timer fired to cause CPU to continue processing callbacks.
*/
TRACE_EVENT(rcu_prep_idle,
TP_PROTO(char *reason),
TP_ARGS(reason),
TP_STRUCT__entry(
__field(char *, reason)
),
TP_fast_assign(
__entry->reason = reason;
), ),
TP_printk("%s", __entry->polarity) TP_printk("%s", __entry->reason)
); );
/* /*
...@@ -412,27 +461,71 @@ TRACE_EVENT(rcu_invoke_kfree_callback, ...@@ -412,27 +461,71 @@ TRACE_EVENT(rcu_invoke_kfree_callback,
/* /*
* Tracepoint for exiting rcu_do_batch after RCU callbacks have been * Tracepoint for exiting rcu_do_batch after RCU callbacks have been
* invoked. The first argument is the name of the RCU flavor and * invoked. The first argument is the name of the RCU flavor,
* the second argument is number of callbacks actually invoked. * the second argument is number of callbacks actually invoked,
* the third argument (cb) is whether or not any of the callbacks that
* were ready to invoke at the beginning of this batch are still
* queued, the fourth argument (nr) is the return value of need_resched(),
* the fifth argument (iit) is 1 if the current task is the idle task,
* and the sixth argument (risk) is the return value from
* rcu_is_callbacks_kthread().
*/ */
TRACE_EVENT(rcu_batch_end, TRACE_EVENT(rcu_batch_end,
TP_PROTO(char *rcuname, int callbacks_invoked), TP_PROTO(char *rcuname, int callbacks_invoked,
bool cb, bool nr, bool iit, bool risk),
TP_ARGS(rcuname, callbacks_invoked), TP_ARGS(rcuname, callbacks_invoked, cb, nr, iit, risk),
TP_STRUCT__entry( TP_STRUCT__entry(
__field(char *, rcuname) __field(char *, rcuname)
__field(int, callbacks_invoked) __field(int, callbacks_invoked)
__field(bool, cb)
__field(bool, nr)
__field(bool, iit)
__field(bool, risk)
), ),
TP_fast_assign( TP_fast_assign(
__entry->rcuname = rcuname; __entry->rcuname = rcuname;
__entry->callbacks_invoked = callbacks_invoked; __entry->callbacks_invoked = callbacks_invoked;
__entry->cb = cb;
__entry->nr = nr;
__entry->iit = iit;
__entry->risk = risk;
),
TP_printk("%s CBs-invoked=%d idle=%c%c%c%c",
__entry->rcuname, __entry->callbacks_invoked,
__entry->cb ? 'C' : '.',
__entry->nr ? 'S' : '.',
__entry->iit ? 'I' : '.',
__entry->risk ? 'R' : '.')
);
/*
* Tracepoint for rcutorture readers. The first argument is the name
* of the RCU flavor from rcutorture's viewpoint and the second argument
* is the callback address.
*/
TRACE_EVENT(rcu_torture_read,
TP_PROTO(char *rcutorturename, struct rcu_head *rhp),
TP_ARGS(rcutorturename, rhp),
TP_STRUCT__entry(
__field(char *, rcutorturename)
__field(struct rcu_head *, rhp)
),
TP_fast_assign(
__entry->rcutorturename = rcutorturename;
__entry->rhp = rhp;
), ),
TP_printk("%s CBs-invoked=%d", TP_printk("%s torture read %p",
__entry->rcuname, __entry->callbacks_invoked) __entry->rcutorturename, __entry->rhp)
); );
#else /* #ifdef CONFIG_RCU_TRACE */ #else /* #ifdef CONFIG_RCU_TRACE */
...@@ -443,13 +536,16 @@ TRACE_EVENT(rcu_batch_end, ...@@ -443,13 +536,16 @@ TRACE_EVENT(rcu_batch_end,
#define trace_rcu_unlock_preempted_task(rcuname, gpnum, pid) do { } while (0) #define trace_rcu_unlock_preempted_task(rcuname, gpnum, pid) do { } while (0)
#define trace_rcu_quiescent_state_report(rcuname, gpnum, mask, qsmask, level, grplo, grphi, gp_tasks) do { } while (0) #define trace_rcu_quiescent_state_report(rcuname, gpnum, mask, qsmask, level, grplo, grphi, gp_tasks) do { } while (0)
#define trace_rcu_fqs(rcuname, gpnum, cpu, qsevent) do { } while (0) #define trace_rcu_fqs(rcuname, gpnum, cpu, qsevent) do { } while (0)
#define trace_rcu_dyntick(polarity) do { } while (0) #define trace_rcu_dyntick(polarity, oldnesting, newnesting) do { } while (0)
#define trace_rcu_prep_idle(reason) do { } while (0)
#define trace_rcu_callback(rcuname, rhp, qlen) do { } while (0) #define trace_rcu_callback(rcuname, rhp, qlen) do { } while (0)
#define trace_rcu_kfree_callback(rcuname, rhp, offset, qlen) do { } while (0) #define trace_rcu_kfree_callback(rcuname, rhp, offset, qlen) do { } while (0)
#define trace_rcu_batch_start(rcuname, qlen, blimit) do { } while (0) #define trace_rcu_batch_start(rcuname, qlen, blimit) do { } while (0)
#define trace_rcu_invoke_callback(rcuname, rhp) do { } while (0) #define trace_rcu_invoke_callback(rcuname, rhp) do { } while (0)
#define trace_rcu_invoke_kfree_callback(rcuname, rhp, offset) do { } while (0) #define trace_rcu_invoke_kfree_callback(rcuname, rhp, offset) do { } while (0)
#define trace_rcu_batch_end(rcuname, callbacks_invoked) do { } while (0) #define trace_rcu_batch_end(rcuname, callbacks_invoked, cb, nr, iit, risk) \
do { } while (0)
#define trace_rcu_torture_read(rcutorturename, rhp) do { } while (0)
#endif /* #else #ifdef CONFIG_RCU_TRACE */ #endif /* #else #ifdef CONFIG_RCU_TRACE */
......
...@@ -469,14 +469,14 @@ config RCU_FANOUT_EXACT ...@@ -469,14 +469,14 @@ config RCU_FANOUT_EXACT
config RCU_FAST_NO_HZ config RCU_FAST_NO_HZ
bool "Accelerate last non-dyntick-idle CPU's grace periods" bool "Accelerate last non-dyntick-idle CPU's grace periods"
depends on TREE_RCU && NO_HZ && SMP depends on NO_HZ && SMP
default n default n
help help
This option causes RCU to attempt to accelerate grace periods This option causes RCU to attempt to accelerate grace periods
in order to allow the final CPU to enter dynticks-idle state in order to allow CPUs to enter dynticks-idle state more
more quickly. On the other hand, this option increases the quickly. On the other hand, this option increases the overhead
overhead of the dynticks-idle checking, particularly on systems of the dynticks-idle checking, particularly on systems with
with large numbers of CPUs. large numbers of CPUs.
Say Y if energy efficiency is critically important, particularly Say Y if energy efficiency is critically important, particularly
if you have relatively few CPUs. if you have relatively few CPUs.
......
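With the TREE_RCU dependency dropped, one plausible .config fragment enabling the option (architecture-specific settings omitted; shown only as an illustration) would be:

	CONFIG_SMP=y
	CONFIG_NO_HZ=y
	CONFIG_RCU_FAST_NO_HZ=y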
...@@ -380,6 +380,7 @@ int __cpuinit cpu_up(unsigned int cpu) ...@@ -380,6 +380,7 @@ int __cpuinit cpu_up(unsigned int cpu)
cpu_maps_update_done(); cpu_maps_update_done();
return err; return err;
} }
EXPORT_SYMBOL_GPL(cpu_up);
#ifdef CONFIG_PM_SLEEP_SMP #ifdef CONFIG_PM_SLEEP_SMP
static cpumask_var_t frozen_cpus; static cpumask_var_t frozen_cpus;
......
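Because cpu_up() is now exported (GPL-only), a module can bring a CPU online itself. A hypothetical, minimal sketch, with the target CPU and module name invented for illustration:

#include <linux/init.h>
#include <linux/module.h>
#include <linux/cpu.h>

static int __init example_online_init(void)
{
	unsigned int cpu = 1;		/* hypothetical target CPU */
	int ret = 0;

	if (cpu_possible(cpu) && !cpu_online(cpu)) {
		ret = cpu_up(cpu);	/* now callable from module code */
		if (ret)
			printk(KERN_WARNING "example: cpu_up(%u) failed: %d\n",
			       cpu, ret);
	}
	return ret;
}
module_init(example_online_init);

MODULE_LICENSE("GPL");			/* required to use the GPL-only export */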
...@@ -636,7 +636,7 @@ char kdb_task_state_char (const struct task_struct *p) ...@@ -636,7 +636,7 @@ char kdb_task_state_char (const struct task_struct *p)
(p->exit_state & EXIT_ZOMBIE) ? 'Z' : (p->exit_state & EXIT_ZOMBIE) ? 'Z' :
(p->exit_state & EXIT_DEAD) ? 'E' : (p->exit_state & EXIT_DEAD) ? 'E' :
(p->state & TASK_INTERRUPTIBLE) ? 'S' : '?'; (p->state & TASK_INTERRUPTIBLE) ? 'S' : '?';
if (p->pid == 0) { if (is_idle_task(p)) {
/* Idle task. Is it really idle, apart from the kdb /* Idle task. Is it really idle, apart from the kdb
* interrupt? */ * interrupt? */
if (!kdb_task_has_cpu(p) || kgdb_info[cpu].irq_depth == 1) { if (!kdb_task_has_cpu(p) || kgdb_info[cpu].irq_depth == 1) {
......
...@@ -5366,7 +5366,7 @@ static enum hrtimer_restart perf_swevent_hrtimer(struct hrtimer *hrtimer) ...@@ -5366,7 +5366,7 @@ static enum hrtimer_restart perf_swevent_hrtimer(struct hrtimer *hrtimer)
regs = get_irq_regs(); regs = get_irq_regs();
if (regs && !perf_exclude_event(event, regs)) { if (regs && !perf_exclude_event(event, regs)) {
if (!(event->attr.exclude_idle && current->pid == 0)) if (!(event->attr.exclude_idle && is_idle_task(current)))
if (perf_event_overflow(event, &data, regs)) if (perf_event_overflow(event, &data, regs))
ret = HRTIMER_NORESTART; ret = HRTIMER_NORESTART;
} }
......
...@@ -4181,6 +4181,28 @@ void lockdep_rcu_suspicious(const char *file, const int line, const char *s) ...@@ -4181,6 +4181,28 @@ void lockdep_rcu_suspicious(const char *file, const int line, const char *s)
printk("%s:%d %s!\n", file, line, s); printk("%s:%d %s!\n", file, line, s);
printk("\nother info that might help us debug this:\n\n"); printk("\nother info that might help us debug this:\n\n");
printk("\nrcu_scheduler_active = %d, debug_locks = %d\n", rcu_scheduler_active, debug_locks); printk("\nrcu_scheduler_active = %d, debug_locks = %d\n", rcu_scheduler_active, debug_locks);
/*
 * If a CPU is in the RCU-free window in idle (i.e., in the section
 * between rcu_idle_enter() and rcu_idle_exit()), then RCU
 * considers that CPU to be in an "extended quiescent state",
 * which means that RCU will be completely ignoring that CPU.
 * Therefore, rcu_read_lock() and friends have absolutely no
 * effect on a CPU running in that state. In other words, even if
 * such an RCU-idle CPU has called rcu_read_lock(), RCU might well
 * delete data structures out from under it. RCU really has no
 * choice here: we need to keep an RCU-free window in idle where
 * the CPU may possibly enter into low-power mode. This allows
 * CPUs that have started a grace period to notice the extended
 * quiescent state; otherwise we would delay every grace period
 * for as long as we run in the idle task.
*
* So complain bitterly if someone does call rcu_read_lock(),
* rcu_read_lock_bh() and so on from extended quiescent states.
*/
if (rcu_is_cpu_idle())
printk("RCU used illegally from extended quiescent state!\n");
lockdep_print_held_locks(curr); lockdep_print_held_locks(curr);
printk("\nstack backtrace:\n"); printk("\nstack backtrace:\n");
dump_stack(); dump_stack();
......
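To make the new check concrete, a deliberately broken sketch of the pattern it is meant to flag: an RCU read-side critical section attempted inside the RCU-idle window. This is an anti-example, not code to copy.

#include <linux/rcupdate.h>

static void example_broken_idle_path(void)
{
	rcu_idle_enter();	/* CPU enters the extended quiescent state */
	rcu_read_lock();	/* illegal here; triggers the splat above   */
	/* ... any RCU-protected access here may race with reclamation ... */
	rcu_read_unlock();
	rcu_idle_exit();
}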
...@@ -29,6 +29,13 @@ ...@@ -29,6 +29,13 @@
#define RCU_TRACE(stmt) #define RCU_TRACE(stmt)
#endif /* #else #ifdef CONFIG_RCU_TRACE */ #endif /* #else #ifdef CONFIG_RCU_TRACE */
/*
* Process-level increment to ->dynticks_nesting field. This allows for
* architectures that use half-interrupts and half-exceptions from
* process context.
*/
#define DYNTICK_TASK_NESTING (LLONG_MAX / 2 - 1)
/* /*
* debug_rcu_head_queue()/debug_rcu_head_unqueue() are used internally * debug_rcu_head_queue()/debug_rcu_head_unqueue() are used internally
* by call_rcu() and rcu callback execution, and are therefore not part of the * by call_rcu() and rcu callback execution, and are therefore not part of the
......
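A hedged walkthrough of how this nesting counter is intended to move, matching the rcutiny implementation later in this series; the transitions are annotation only, since these calls are normally made from the idle loop and the irq entry/exit paths:

	/* In an ordinary task: rcu_dynticks_nesting == DYNTICK_TASK_NESTING.    */
	/* Idle loop calls rcu_idle_enter():    nesting -> 0 (RCU-idle).         */
	/* Hardirq from idle, rcu_irq_enter():  nesting -> 1 (RCU usable).       */
	/* Irq return path, rcu_irq_exit():     nesting -> 0 (RCU-idle again).   */
	/* Wakeup path, rcu_idle_exit():        nesting -> DYNTICK_TASK_NESTING. */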
...@@ -93,6 +93,8 @@ int rcu_read_lock_bh_held(void) ...@@ -93,6 +93,8 @@ int rcu_read_lock_bh_held(void)
{ {
if (!debug_lockdep_rcu_enabled()) if (!debug_lockdep_rcu_enabled())
return 1; return 1;
if (rcu_is_cpu_idle())
return 0;
return in_softirq() || irqs_disabled(); return in_softirq() || irqs_disabled();
} }
EXPORT_SYMBOL_GPL(rcu_read_lock_bh_held); EXPORT_SYMBOL_GPL(rcu_read_lock_bh_held);
...@@ -316,3 +318,13 @@ struct debug_obj_descr rcuhead_debug_descr = { ...@@ -316,3 +318,13 @@ struct debug_obj_descr rcuhead_debug_descr = {
}; };
EXPORT_SYMBOL_GPL(rcuhead_debug_descr); EXPORT_SYMBOL_GPL(rcuhead_debug_descr);
#endif /* #ifdef CONFIG_DEBUG_OBJECTS_RCU_HEAD */ #endif /* #ifdef CONFIG_DEBUG_OBJECTS_RCU_HEAD */
#if defined(CONFIG_TREE_RCU) || defined(CONFIG_TREE_PREEMPT_RCU) || defined(CONFIG_RCU_TRACE)
void do_trace_rcu_torture_read(char *rcutorturename, struct rcu_head *rhp)
{
trace_rcu_torture_read(rcutorturename, rhp);
}
EXPORT_SYMBOL_GPL(do_trace_rcu_torture_read);
#else
#define do_trace_rcu_torture_read(rcutorturename, rhp) do { } while (0)
#endif
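A hedged sketch of how a torture-test reader in module code might use this wrapper; the structure, field, and function names are hypothetical stand-ins, and the wrapper's declaration is assumed to be visible via the RCU headers:

struct example_torture {
	struct rcu_head et_rcu;		/* hypothetical structure */
	int et_data;
};

static void example_torture_reader(struct example_torture __rcu **gpp)
{
	struct example_torture *p;

	rcu_read_lock();
	p = rcu_dereference(*gpp);
	if (p)
		do_trace_rcu_torture_read("rcu", &p->et_rcu);
	rcu_read_unlock();
}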
...@@ -53,31 +53,137 @@ static void __call_rcu(struct rcu_head *head, ...@@ -53,31 +53,137 @@ static void __call_rcu(struct rcu_head *head,
#include "rcutiny_plugin.h" #include "rcutiny_plugin.h"
#ifdef CONFIG_NO_HZ static long long rcu_dynticks_nesting = DYNTICK_TASK_NESTING;
static long rcu_dynticks_nesting = 1; /* Common code for rcu_idle_enter() and rcu_irq_exit(), see kernel/rcutree.c. */
static void rcu_idle_enter_common(long long oldval)
{
if (rcu_dynticks_nesting) {
RCU_TRACE(trace_rcu_dyntick("--=",
oldval, rcu_dynticks_nesting));
return;
}
RCU_TRACE(trace_rcu_dyntick("Start", oldval, rcu_dynticks_nesting));
if (!is_idle_task(current)) {
struct task_struct *idle = idle_task(smp_processor_id());
RCU_TRACE(trace_rcu_dyntick("Error on entry: not idle task",
oldval, rcu_dynticks_nesting));
ftrace_dump(DUMP_ALL);
WARN_ONCE(1, "Current pid: %d comm: %s / Idle pid: %d comm: %s",
current->pid, current->comm,
idle->pid, idle->comm); /* must be idle task! */
}
rcu_sched_qs(0); /* implies rcu_bh_qsctr_inc(0) */
}
/* /*
* Enter dynticks-idle mode, which is an extended quiescent state * Enter idle, which is an extended quiescent state if we have fully
* if we have fully entered that mode (i.e., if the new value of * entered that mode (i.e., if the new value of dynticks_nesting is zero).
* dynticks_nesting is zero).
*/ */
void rcu_enter_nohz(void) void rcu_idle_enter(void)
{ {
if (--rcu_dynticks_nesting == 0) unsigned long flags;
rcu_sched_qs(0); /* implies rcu_bh_qsctr_inc(0) */ long long oldval;
local_irq_save(flags);
oldval = rcu_dynticks_nesting;
rcu_dynticks_nesting = 0;
rcu_idle_enter_common(oldval);
local_irq_restore(flags);
} }
/* /*
* Exit dynticks-idle mode, so that we are no longer in an extended * Exit an interrupt handler towards idle.
* quiescent state.
*/ */
void rcu_exit_nohz(void) void rcu_irq_exit(void)
{ {
unsigned long flags;
long long oldval;
local_irq_save(flags);
oldval = rcu_dynticks_nesting;
rcu_dynticks_nesting--;
WARN_ON_ONCE(rcu_dynticks_nesting < 0);
rcu_idle_enter_common(oldval);
local_irq_restore(flags);
}
/* Common code for rcu_idle_exit() and rcu_irq_enter(), see kernel/rcutree.c. */
static void rcu_idle_exit_common(long long oldval)
{
if (oldval) {
RCU_TRACE(trace_rcu_dyntick("++=",
oldval, rcu_dynticks_nesting));
return;
}
RCU_TRACE(trace_rcu_dyntick("End", oldval, rcu_dynticks_nesting));
if (!is_idle_task(current)) {
struct task_struct *idle = idle_task(smp_processor_id());
RCU_TRACE(trace_rcu_dyntick("Error on exit: not idle task",
oldval, rcu_dynticks_nesting));
ftrace_dump(DUMP_ALL);
WARN_ONCE(1, "Current pid: %d comm: %s / Idle pid: %d comm: %s",
current->pid, current->comm,
idle->pid, idle->comm); /* must be idle task! */
}
}
/*
* Exit idle, so that we are no longer in an extended quiescent state.
*/
void rcu_idle_exit(void)
{
unsigned long flags;
long long oldval;
local_irq_save(flags);
oldval = rcu_dynticks_nesting;
WARN_ON_ONCE(oldval != 0);
rcu_dynticks_nesting = DYNTICK_TASK_NESTING;
rcu_idle_exit_common(oldval);
local_irq_restore(flags);
}
/*
* Enter an interrupt handler, moving away from idle.
*/
void rcu_irq_enter(void)
{
unsigned long flags;
long long oldval;
local_irq_save(flags);
oldval = rcu_dynticks_nesting;
rcu_dynticks_nesting++; rcu_dynticks_nesting++;
WARN_ON_ONCE(rcu_dynticks_nesting == 0);
rcu_idle_exit_common(oldval);
local_irq_restore(flags);
} }
#endif /* #ifdef CONFIG_NO_HZ */ #ifdef CONFIG_PROVE_RCU
/*
* Test whether RCU thinks that the current CPU is idle.
*/
int rcu_is_cpu_idle(void)
{
return !rcu_dynticks_nesting;
}
EXPORT_SYMBOL(rcu_is_cpu_idle);
#endif /* #ifdef CONFIG_PROVE_RCU */
/*
* Test whether the current CPU was interrupted from idle. Nested
 * interrupts don't count; we must be running at the first interrupt
* level.
*/
int rcu_is_cpu_rrupt_from_idle(void)
{
return rcu_dynticks_nesting <= 0;
}
/* /*
* Helper function for rcu_sched_qs() and rcu_bh_qs(). * Helper function for rcu_sched_qs() and rcu_bh_qs().
...@@ -126,14 +232,13 @@ void rcu_bh_qs(int cpu) ...@@ -126,14 +232,13 @@ void rcu_bh_qs(int cpu)
/* /*
* Check to see if the scheduling-clock interrupt came from an extended * Check to see if the scheduling-clock interrupt came from an extended
* quiescent state, and, if so, tell RCU about it. * quiescent state, and, if so, tell RCU about it. This function must
* be called from hardirq context. It is normally called from the
* scheduling-clock interrupt.
*/ */
void rcu_check_callbacks(int cpu, int user) void rcu_check_callbacks(int cpu, int user)
{ {
if (user || if (user || rcu_is_cpu_rrupt_from_idle())
(idle_cpu(cpu) &&
!in_softirq() &&
hardirq_count() <= (1 << HARDIRQ_SHIFT)))
rcu_sched_qs(cpu); rcu_sched_qs(cpu);
else if (!in_softirq()) else if (!in_softirq())
rcu_bh_qs(cpu); rcu_bh_qs(cpu);
...@@ -154,7 +259,11 @@ static void __rcu_process_callbacks(struct rcu_ctrlblk *rcp) ...@@ -154,7 +259,11 @@ static void __rcu_process_callbacks(struct rcu_ctrlblk *rcp)
/* If no RCU callbacks ready to invoke, just return. */ /* If no RCU callbacks ready to invoke, just return. */
if (&rcp->rcucblist == rcp->donetail) { if (&rcp->rcucblist == rcp->donetail) {
RCU_TRACE(trace_rcu_batch_start(rcp->name, 0, -1)); RCU_TRACE(trace_rcu_batch_start(rcp->name, 0, -1));
RCU_TRACE(trace_rcu_batch_end(rcp->name, 0)); RCU_TRACE(trace_rcu_batch_end(rcp->name, 0,
ACCESS_ONCE(rcp->rcucblist),
need_resched(),
is_idle_task(current),
rcu_is_callbacks_kthread()));
return; return;
} }
...@@ -183,7 +292,9 @@ static void __rcu_process_callbacks(struct rcu_ctrlblk *rcp) ...@@ -183,7 +292,9 @@ static void __rcu_process_callbacks(struct rcu_ctrlblk *rcp)
RCU_TRACE(cb_count++); RCU_TRACE(cb_count++);
} }
RCU_TRACE(rcu_trace_sub_qlen(rcp, cb_count)); RCU_TRACE(rcu_trace_sub_qlen(rcp, cb_count));
RCU_TRACE(trace_rcu_batch_end(rcp->name, cb_count)); RCU_TRACE(trace_rcu_batch_end(rcp->name, cb_count, 0, need_resched(),
is_idle_task(current),
rcu_is_callbacks_kthread()));
} }
static void rcu_process_callbacks(struct softirq_action *unused) static void rcu_process_callbacks(struct softirq_action *unused)
......
...@@ -312,8 +312,8 @@ static int rcu_boost(void) ...@@ -312,8 +312,8 @@ static int rcu_boost(void)
rt_mutex_lock(&mtx); rt_mutex_lock(&mtx);
rt_mutex_unlock(&mtx); /* Keep lockdep happy. */ rt_mutex_unlock(&mtx); /* Keep lockdep happy. */
return rcu_preempt_ctrlblk.boost_tasks != NULL || return ACCESS_ONCE(rcu_preempt_ctrlblk.boost_tasks) != NULL ||
rcu_preempt_ctrlblk.exp_tasks != NULL; ACCESS_ONCE(rcu_preempt_ctrlblk.exp_tasks) != NULL;
} }
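For context: ACCESS_ONCE() in kernels of this era is, as best understood here, the volatile-cast idiom from <linux/compiler.h>, which keeps the compiler from re-fetching or caching the flagged loads across the rt_mutex_unlock() above:

#define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))	/* assumed definition */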
/* /*
...@@ -885,6 +885,19 @@ static void invoke_rcu_callbacks(void) ...@@ -885,6 +885,19 @@ static void invoke_rcu_callbacks(void)
wake_up(&rcu_kthread_wq); wake_up(&rcu_kthread_wq);
} }
#ifdef CONFIG_RCU_TRACE
/*
* Is the current CPU running the RCU-callbacks kthread?
* Caller must have preemption disabled.
*/
static bool rcu_is_callbacks_kthread(void)
{
return rcu_kthread_task == current;
}
#endif /* #ifdef CONFIG_RCU_TRACE */
/* /*
* This kthread invokes RCU callbacks whose grace periods have * This kthread invokes RCU callbacks whose grace periods have
* elapsed. It is awakened as needed, and takes the place of the * elapsed. It is awakened as needed, and takes the place of the
...@@ -938,6 +951,18 @@ void invoke_rcu_callbacks(void) ...@@ -938,6 +951,18 @@ void invoke_rcu_callbacks(void)
raise_softirq(RCU_SOFTIRQ); raise_softirq(RCU_SOFTIRQ);
} }
#ifdef CONFIG_RCU_TRACE
/*
* There is no callback kthread, so this thread is never it.
*/
static bool rcu_is_callbacks_kthread(void)
{
return false;
}
#endif /* #ifdef CONFIG_RCU_TRACE */
void rcu_init(void) void rcu_init(void)
{ {
open_softirq(RCU_SOFTIRQ, rcu_process_callbacks); open_softirq(RCU_SOFTIRQ, rcu_process_callbacks);
......
...@@ -84,9 +84,10 @@ ...@@ -84,9 +84,10 @@
* Dynticks per-CPU state. * Dynticks per-CPU state.
*/ */
struct rcu_dynticks { struct rcu_dynticks {
int dynticks_nesting; /* Track irq/process nesting level. */ long long dynticks_nesting; /* Track irq/process nesting level. */
/* Process level is worth LLONG_MAX/2. */
int dynticks_nmi_nesting; /* Track NMI nesting level. */ int dynticks_nmi_nesting; /* Track NMI nesting level. */
atomic_t dynticks; /* Even value for dynticks-idle, else odd. */ atomic_t dynticks; /* Even value for idle, else odd. */
}; };
/* RCU's kthread states for tracing. */ /* RCU's kthread states for tracing. */
...@@ -274,16 +275,12 @@ struct rcu_data { ...@@ -274,16 +275,12 @@ struct rcu_data {
/* did other CPU force QS recently? */ /* did other CPU force QS recently? */
long blimit; /* Upper limit on a processed batch */ long blimit; /* Upper limit on a processed batch */
#ifdef CONFIG_NO_HZ
/* 3) dynticks interface. */ /* 3) dynticks interface. */
struct rcu_dynticks *dynticks; /* Shared per-CPU dynticks state. */ struct rcu_dynticks *dynticks; /* Shared per-CPU dynticks state. */
int dynticks_snap; /* Per-GP tracking for dynticks. */ int dynticks_snap; /* Per-GP tracking for dynticks. */
#endif /* #ifdef CONFIG_NO_HZ */
/* 4) reasons this CPU needed to be kicked by force_quiescent_state */ /* 4) reasons this CPU needed to be kicked by force_quiescent_state */
#ifdef CONFIG_NO_HZ
unsigned long dynticks_fqs; /* Kicked due to dynticks idle. */ unsigned long dynticks_fqs; /* Kicked due to dynticks idle. */
#endif /* #ifdef CONFIG_NO_HZ */
unsigned long offline_fqs; /* Kicked due to being offline. */ unsigned long offline_fqs; /* Kicked due to being offline. */
unsigned long resched_ipi; /* Sent a resched IPI. */ unsigned long resched_ipi; /* Sent a resched IPI. */
...@@ -302,16 +299,12 @@ struct rcu_data { ...@@ -302,16 +299,12 @@ struct rcu_data {
struct rcu_state *rsp; struct rcu_state *rsp;
}; };
/* Values for signaled field in struct rcu_state. */ /* Values for fqs_state field in struct rcu_state. */
#define RCU_GP_IDLE 0 /* No grace period in progress. */ #define RCU_GP_IDLE 0 /* No grace period in progress. */
#define RCU_GP_INIT 1 /* Grace period being initialized. */ #define RCU_GP_INIT 1 /* Grace period being initialized. */
#define RCU_SAVE_DYNTICK 2 /* Need to scan dyntick state. */ #define RCU_SAVE_DYNTICK 2 /* Need to scan dyntick state. */
#define RCU_FORCE_QS 3 /* Need to force quiescent state. */ #define RCU_FORCE_QS 3 /* Need to force quiescent state. */
#ifdef CONFIG_NO_HZ
#define RCU_SIGNAL_INIT RCU_SAVE_DYNTICK #define RCU_SIGNAL_INIT RCU_SAVE_DYNTICK
#else /* #ifdef CONFIG_NO_HZ */
#define RCU_SIGNAL_INIT RCU_FORCE_QS
#endif /* #else #ifdef CONFIG_NO_HZ */
#define RCU_JIFFIES_TILL_FORCE_QS 3 /* for rsp->jiffies_force_qs */ #define RCU_JIFFIES_TILL_FORCE_QS 3 /* for rsp->jiffies_force_qs */
...@@ -361,7 +354,7 @@ struct rcu_state { ...@@ -361,7 +354,7 @@ struct rcu_state {
/* The following fields are guarded by the root rcu_node's lock. */ /* The following fields are guarded by the root rcu_node's lock. */
u8 signaled ____cacheline_internodealigned_in_smp; u8 fqs_state ____cacheline_internodealigned_in_smp;
/* Force QS state. */ /* Force QS state. */
u8 fqs_active; /* force_quiescent_state() */ u8 fqs_active; /* force_quiescent_state() */
/* is running. */ /* is running. */
...@@ -451,7 +444,8 @@ static void rcu_preempt_check_callbacks(int cpu); ...@@ -451,7 +444,8 @@ static void rcu_preempt_check_callbacks(int cpu);
static void rcu_preempt_process_callbacks(void); static void rcu_preempt_process_callbacks(void);
void call_rcu(struct rcu_head *head, void (*func)(struct rcu_head *rcu)); void call_rcu(struct rcu_head *head, void (*func)(struct rcu_head *rcu));
#if defined(CONFIG_HOTPLUG_CPU) || defined(CONFIG_TREE_PREEMPT_RCU) #if defined(CONFIG_HOTPLUG_CPU) || defined(CONFIG_TREE_PREEMPT_RCU)
static void rcu_report_exp_rnp(struct rcu_state *rsp, struct rcu_node *rnp); static void rcu_report_exp_rnp(struct rcu_state *rsp, struct rcu_node *rnp,
bool wake);
#endif /* #if defined(CONFIG_HOTPLUG_CPU) || defined(CONFIG_TREE_PREEMPT_RCU) */ #endif /* #if defined(CONFIG_HOTPLUG_CPU) || defined(CONFIG_TREE_PREEMPT_RCU) */
static int rcu_preempt_pending(int cpu); static int rcu_preempt_pending(int cpu);
static int rcu_preempt_needs_cpu(int cpu); static int rcu_preempt_needs_cpu(int cpu);
...@@ -461,6 +455,7 @@ static void __init __rcu_init_preempt(void); ...@@ -461,6 +455,7 @@ static void __init __rcu_init_preempt(void);
static void rcu_initiate_boost(struct rcu_node *rnp, unsigned long flags); static void rcu_initiate_boost(struct rcu_node *rnp, unsigned long flags);
static void rcu_preempt_boost_start_gp(struct rcu_node *rnp); static void rcu_preempt_boost_start_gp(struct rcu_node *rnp);
static void invoke_rcu_callbacks_kthread(void); static void invoke_rcu_callbacks_kthread(void);
static bool rcu_is_callbacks_kthread(void);
#ifdef CONFIG_RCU_BOOST #ifdef CONFIG_RCU_BOOST
static void rcu_preempt_do_callbacks(void); static void rcu_preempt_do_callbacks(void);
static void rcu_boost_kthread_setaffinity(struct rcu_node *rnp, static void rcu_boost_kthread_setaffinity(struct rcu_node *rnp,
...@@ -473,5 +468,8 @@ static void rcu_yield(void (*f)(unsigned long), unsigned long arg); ...@@ -473,5 +468,8 @@ static void rcu_yield(void (*f)(unsigned long), unsigned long arg);
#endif /* #ifdef CONFIG_RCU_BOOST */ #endif /* #ifdef CONFIG_RCU_BOOST */
static void rcu_cpu_kthread_setrt(int cpu, int to_rt); static void rcu_cpu_kthread_setrt(int cpu, int to_rt);
static void __cpuinit rcu_prepare_kthreads(int cpu); static void __cpuinit rcu_prepare_kthreads(int cpu);
static void rcu_prepare_for_idle_init(int cpu);
static void rcu_cleanup_after_idle(int cpu);
static void rcu_prepare_for_idle(int cpu);
#endif /* #ifndef RCU_TREE_NONCORE */ #endif /* #ifndef RCU_TREE_NONCORE */
...@@ -67,13 +67,11 @@ static void print_one_rcu_data(struct seq_file *m, struct rcu_data *rdp) ...@@ -67,13 +67,11 @@ static void print_one_rcu_data(struct seq_file *m, struct rcu_data *rdp)
rdp->completed, rdp->gpnum, rdp->completed, rdp->gpnum,
rdp->passed_quiesce, rdp->passed_quiesce_gpnum, rdp->passed_quiesce, rdp->passed_quiesce_gpnum,
rdp->qs_pending); rdp->qs_pending);
#ifdef CONFIG_NO_HZ seq_printf(m, " dt=%d/%llx/%d df=%lu",
seq_printf(m, " dt=%d/%d/%d df=%lu",
atomic_read(&rdp->dynticks->dynticks), atomic_read(&rdp->dynticks->dynticks),
rdp->dynticks->dynticks_nesting, rdp->dynticks->dynticks_nesting,
rdp->dynticks->dynticks_nmi_nesting, rdp->dynticks->dynticks_nmi_nesting,
rdp->dynticks_fqs); rdp->dynticks_fqs);
#endif /* #ifdef CONFIG_NO_HZ */
seq_printf(m, " of=%lu ri=%lu", rdp->offline_fqs, rdp->resched_ipi); seq_printf(m, " of=%lu ri=%lu", rdp->offline_fqs, rdp->resched_ipi);
seq_printf(m, " ql=%ld qs=%c%c%c%c", seq_printf(m, " ql=%ld qs=%c%c%c%c",
rdp->qlen, rdp->qlen,
...@@ -141,13 +139,11 @@ static void print_one_rcu_data_csv(struct seq_file *m, struct rcu_data *rdp) ...@@ -141,13 +139,11 @@ static void print_one_rcu_data_csv(struct seq_file *m, struct rcu_data *rdp)
rdp->completed, rdp->gpnum, rdp->completed, rdp->gpnum,
rdp->passed_quiesce, rdp->passed_quiesce_gpnum, rdp->passed_quiesce, rdp->passed_quiesce_gpnum,
rdp->qs_pending); rdp->qs_pending);
#ifdef CONFIG_NO_HZ seq_printf(m, ",%d,%llx,%d,%lu",
seq_printf(m, ",%d,%d,%d,%lu",
atomic_read(&rdp->dynticks->dynticks), atomic_read(&rdp->dynticks->dynticks),
rdp->dynticks->dynticks_nesting, rdp->dynticks->dynticks_nesting,
rdp->dynticks->dynticks_nmi_nesting, rdp->dynticks->dynticks_nmi_nesting,
rdp->dynticks_fqs); rdp->dynticks_fqs);
#endif /* #ifdef CONFIG_NO_HZ */
seq_printf(m, ",%lu,%lu", rdp->offline_fqs, rdp->resched_ipi); seq_printf(m, ",%lu,%lu", rdp->offline_fqs, rdp->resched_ipi);
seq_printf(m, ",%ld,\"%c%c%c%c\"", rdp->qlen, seq_printf(m, ",%ld,\"%c%c%c%c\"", rdp->qlen,
".N"[rdp->nxttail[RCU_NEXT_READY_TAIL] != ".N"[rdp->nxttail[RCU_NEXT_READY_TAIL] !=
...@@ -171,9 +167,7 @@ static void print_one_rcu_data_csv(struct seq_file *m, struct rcu_data *rdp) ...@@ -171,9 +167,7 @@ static void print_one_rcu_data_csv(struct seq_file *m, struct rcu_data *rdp)
static int show_rcudata_csv(struct seq_file *m, void *unused) static int show_rcudata_csv(struct seq_file *m, void *unused)
{ {
seq_puts(m, "\"CPU\",\"Online?\",\"c\",\"g\",\"pq\",\"pgp\",\"pq\","); seq_puts(m, "\"CPU\",\"Online?\",\"c\",\"g\",\"pq\",\"pgp\",\"pq\",");
#ifdef CONFIG_NO_HZ
seq_puts(m, "\"dt\",\"dt nesting\",\"dt NMI nesting\",\"df\","); seq_puts(m, "\"dt\",\"dt nesting\",\"dt NMI nesting\",\"df\",");
#endif /* #ifdef CONFIG_NO_HZ */
seq_puts(m, "\"of\",\"ri\",\"ql\",\"qs\""); seq_puts(m, "\"of\",\"ri\",\"ql\",\"qs\"");
#ifdef CONFIG_RCU_BOOST #ifdef CONFIG_RCU_BOOST
seq_puts(m, "\"kt\",\"ktl\""); seq_puts(m, "\"kt\",\"ktl\"");
...@@ -278,7 +272,7 @@ static void print_one_rcu_state(struct seq_file *m, struct rcu_state *rsp) ...@@ -278,7 +272,7 @@ static void print_one_rcu_state(struct seq_file *m, struct rcu_state *rsp)
gpnum = rsp->gpnum; gpnum = rsp->gpnum;
seq_printf(m, "c=%lu g=%lu s=%d jfq=%ld j=%x " seq_printf(m, "c=%lu g=%lu s=%d jfq=%ld j=%x "
"nfqs=%lu/nfqsng=%lu(%lu) fqlh=%lu\n", "nfqs=%lu/nfqsng=%lu(%lu) fqlh=%lu\n",
rsp->completed, gpnum, rsp->signaled, rsp->completed, gpnum, rsp->fqs_state,
(long)(rsp->jiffies_force_qs - jiffies), (long)(rsp->jiffies_force_qs - jiffies),
(int)(jiffies & 0xffff), (int)(jiffies & 0xffff),
rsp->n_force_qs, rsp->n_force_qs_ngp, rsp->n_force_qs, rsp->n_force_qs_ngp,
......
...@@ -579,7 +579,6 @@ __rt_mutex_slowlock(struct rt_mutex *lock, int state, ...@@ -579,7 +579,6 @@ __rt_mutex_slowlock(struct rt_mutex *lock, int state,
struct rt_mutex_waiter *waiter) struct rt_mutex_waiter *waiter)
{ {
int ret = 0; int ret = 0;
int was_disabled;
for (;;) { for (;;) {
/* Try to acquire the lock: */ /* Try to acquire the lock: */
...@@ -602,17 +601,10 @@ __rt_mutex_slowlock(struct rt_mutex *lock, int state, ...@@ -602,17 +601,10 @@ __rt_mutex_slowlock(struct rt_mutex *lock, int state,
raw_spin_unlock(&lock->wait_lock); raw_spin_unlock(&lock->wait_lock);
was_disabled = irqs_disabled();
if (was_disabled)
local_irq_enable();
debug_rt_mutex_print_deadlock(waiter); debug_rt_mutex_print_deadlock(waiter);
schedule_rt_mutex(lock); schedule_rt_mutex(lock);
if (was_disabled)
local_irq_disable();
raw_spin_lock(&lock->wait_lock); raw_spin_lock(&lock->wait_lock);
set_current_state(state); set_current_state(state);
} }
......
...@@ -347,12 +347,12 @@ void irq_exit(void) ...@@ -347,12 +347,12 @@ void irq_exit(void)
if (!in_interrupt() && local_softirq_pending()) if (!in_interrupt() && local_softirq_pending())
invoke_softirq(); invoke_softirq();
rcu_irq_exit();
#ifdef CONFIG_NO_HZ #ifdef CONFIG_NO_HZ
/* Make sure that timer wheel updates are propagated */ /* Make sure that timer wheel updates are propagated */
if (idle_cpu(smp_processor_id()) && !in_interrupt() && !need_resched()) if (idle_cpu(smp_processor_id()) && !in_interrupt() && !need_resched())
tick_nohz_stop_sched_tick(0); tick_nohz_irq_exit();
#endif #endif
rcu_irq_exit();
preempt_enable_no_resched(); preempt_enable_no_resched();
} }
......
...@@ -275,42 +275,17 @@ u64 get_cpu_iowait_time_us(int cpu, u64 *last_update_time) ...@@ -275,42 +275,17 @@ u64 get_cpu_iowait_time_us(int cpu, u64 *last_update_time)
} }
EXPORT_SYMBOL_GPL(get_cpu_iowait_time_us); EXPORT_SYMBOL_GPL(get_cpu_iowait_time_us);
/** static void tick_nohz_stop_sched_tick(struct tick_sched *ts)
* tick_nohz_stop_sched_tick - stop the idle tick from the idle task
*
* When the next event is more than a tick into the future, stop the idle tick
* Called either from the idle loop or from irq_exit() when an idle period was
* just interrupted by an interrupt which did not cause a reschedule.
*/
void tick_nohz_stop_sched_tick(int inidle)
{ {
unsigned long seq, last_jiffies, next_jiffies, delta_jiffies, flags; unsigned long seq, last_jiffies, next_jiffies, delta_jiffies;
struct tick_sched *ts;
ktime_t last_update, expires, now; ktime_t last_update, expires, now;
struct clock_event_device *dev = __get_cpu_var(tick_cpu_device).evtdev; struct clock_event_device *dev = __get_cpu_var(tick_cpu_device).evtdev;
u64 time_delta; u64 time_delta;
int cpu; int cpu;
local_irq_save(flags);
cpu = smp_processor_id(); cpu = smp_processor_id();
ts = &per_cpu(tick_cpu_sched, cpu); ts = &per_cpu(tick_cpu_sched, cpu);
/*
* Call to tick_nohz_start_idle stops the last_update_time from being
* updated. Thus, it must not be called in the event we are called from
* irq_exit() with the prior state different than idle.
*/
if (!inidle && !ts->inidle)
goto end;
/*
* Set ts->inidle unconditionally. Even if the system did not
* switch to NOHZ mode the cpu frequency governers rely on the
* update of the idle time accounting in tick_nohz_start_idle().
*/
ts->inidle = 1;
now = tick_nohz_start_idle(cpu, ts); now = tick_nohz_start_idle(cpu, ts);
/* /*
...@@ -326,10 +301,10 @@ void tick_nohz_stop_sched_tick(int inidle) ...@@ -326,10 +301,10 @@ void tick_nohz_stop_sched_tick(int inidle)
} }
if (unlikely(ts->nohz_mode == NOHZ_MODE_INACTIVE)) if (unlikely(ts->nohz_mode == NOHZ_MODE_INACTIVE))
goto end; return;
if (need_resched()) if (need_resched())
goto end; return;
if (unlikely(local_softirq_pending() && cpu_online(cpu))) { if (unlikely(local_softirq_pending() && cpu_online(cpu))) {
static int ratelimit; static int ratelimit;
...@@ -339,7 +314,7 @@ void tick_nohz_stop_sched_tick(int inidle) ...@@ -339,7 +314,7 @@ void tick_nohz_stop_sched_tick(int inidle)
(unsigned int) local_softirq_pending()); (unsigned int) local_softirq_pending());
ratelimit++; ratelimit++;
} }
goto end; return;
} }
ts->idle_calls++; ts->idle_calls++;
...@@ -434,7 +409,6 @@ void tick_nohz_stop_sched_tick(int inidle) ...@@ -434,7 +409,6 @@ void tick_nohz_stop_sched_tick(int inidle)
ts->idle_tick = hrtimer_get_expires(&ts->sched_timer); ts->idle_tick = hrtimer_get_expires(&ts->sched_timer);
ts->tick_stopped = 1; ts->tick_stopped = 1;
ts->idle_jiffies = last_jiffies; ts->idle_jiffies = last_jiffies;
rcu_enter_nohz();
} }
ts->idle_sleeps++; ts->idle_sleeps++;
...@@ -472,8 +446,56 @@ void tick_nohz_stop_sched_tick(int inidle) ...@@ -472,8 +446,56 @@ void tick_nohz_stop_sched_tick(int inidle)
ts->next_jiffies = next_jiffies; ts->next_jiffies = next_jiffies;
ts->last_jiffies = last_jiffies; ts->last_jiffies = last_jiffies;
ts->sleep_length = ktime_sub(dev->next_event, now); ts->sleep_length = ktime_sub(dev->next_event, now);
end: }
local_irq_restore(flags);
/**
* tick_nohz_idle_enter - stop the idle tick from the idle task
*
 * When the next event is more than a tick into the future, stop the idle tick.
 * Called when we start the idle loop.
 *
 * The arch is responsible for calling:
*
* - rcu_idle_enter() after its last use of RCU before the CPU is put
* to sleep.
* - rcu_idle_exit() before the first use of RCU after the CPU is woken up.
*/
void tick_nohz_idle_enter(void)
{
struct tick_sched *ts;
WARN_ON_ONCE(irqs_disabled());
local_irq_disable();
ts = &__get_cpu_var(tick_cpu_sched);
/*
 * Set ts->inidle unconditionally. Even if the system did not
 * switch to nohz mode the cpu frequency governors rely on the
* update of the idle time accounting in tick_nohz_start_idle().
*/
ts->inidle = 1;
tick_nohz_stop_sched_tick(ts);
local_irq_enable();
}
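A hedged sketch of the calling convention spelled out in the comment above, roughly as an architecture's idle loop might arrange it; the loop shape and the cpu_relax() stand-in for the real low-power wait are illustrative only:

#include <linux/tick.h>
#include <linux/rcupdate.h>
#include <linux/sched.h>

static void example_cpu_idle(void)
{
	while (1) {
		tick_nohz_idle_enter();		/* may stop the scheduling tick */
		while (!need_resched()) {
			rcu_idle_enter();	/* last RCU use before sleeping */
			cpu_relax();		/* stand-in for the low-power wait */
			rcu_idle_exit();	/* before first RCU use after wakeup */
		}
		tick_nohz_idle_exit();		/* restart the tick */
		schedule();			/* hand back to the scheduler */
	}
}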
/**
* tick_nohz_irq_exit - update next tick event from interrupt exit
*
* When an interrupt fires while we are idle and it doesn't cause
* a reschedule, it may still add, modify or delete a timer, enqueue
* an RCU callback, etc...
* So we need to re-calculate and reprogram the next tick event.
*/
void tick_nohz_irq_exit(void)
{
struct tick_sched *ts = &__get_cpu_var(tick_cpu_sched);
if (!ts->inidle)
return;
tick_nohz_stop_sched_tick(ts);
} }
/** /**
...@@ -515,11 +537,13 @@ static void tick_nohz_restart(struct tick_sched *ts, ktime_t now) ...@@ -515,11 +537,13 @@ static void tick_nohz_restart(struct tick_sched *ts, ktime_t now)
} }
/** /**
* tick_nohz_restart_sched_tick - restart the idle tick from the idle task * tick_nohz_idle_exit - restart the idle tick from the idle task
* *
* Restart the idle tick when the CPU is woken up from idle * Restart the idle tick when the CPU is woken up from idle
 * This also exits the RCU extended quiescent state. The CPU
* can use RCU again after this function is called.
*/ */
void tick_nohz_restart_sched_tick(void) void tick_nohz_idle_exit(void)
{ {
int cpu = smp_processor_id(); int cpu = smp_processor_id();
struct tick_sched *ts = &per_cpu(tick_cpu_sched, cpu); struct tick_sched *ts = &per_cpu(tick_cpu_sched, cpu);
...@@ -529,6 +553,7 @@ void tick_nohz_restart_sched_tick(void) ...@@ -529,6 +553,7 @@ void tick_nohz_restart_sched_tick(void)
ktime_t now; ktime_t now;
local_irq_disable(); local_irq_disable();
if (ts->idle_active || (ts->inidle && ts->tick_stopped)) if (ts->idle_active || (ts->inidle && ts->tick_stopped))
now = ktime_get(); now = ktime_get();
...@@ -543,8 +568,6 @@ void tick_nohz_restart_sched_tick(void) ...@@ -543,8 +568,6 @@ void tick_nohz_restart_sched_tick(void)
ts->inidle = 0; ts->inidle = 0;
rcu_exit_nohz();
/* Update jiffies first */ /* Update jiffies first */
select_nohz_load_balancer(0); select_nohz_load_balancer(0);
tick_do_update_jiffies64(now); tick_do_update_jiffies64(now);
......
...@@ -4775,6 +4775,7 @@ void ftrace_dump(enum ftrace_dump_mode oops_dump_mode) ...@@ -4775,6 +4775,7 @@ void ftrace_dump(enum ftrace_dump_mode oops_dump_mode)
{ {
__ftrace_dump(true, oops_dump_mode); __ftrace_dump(true, oops_dump_mode);
} }
EXPORT_SYMBOL_GPL(ftrace_dump);
__init static int tracer_alloc_buffers(void) __init static int tracer_alloc_buffers(void)
{ {
......