Commit 085c7897 authored by Ingo Molnar

Merge branch 'for-mingo' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu into core/rcu

Pull RCU changes from Paul E. McKenney:

  - Initialization/Kconfig updates: hide most Kconfig options from unsuspecting users.
    There's now a single high level configuration option:

      *
      * RCU Subsystem
      *
      Make expert-level adjustments to RCU configuration (RCU_EXPERT) [N/y/?] (NEW)

    If answered in the negative, this leaves us with a single interactive
    configuration option:

      Offload RCU callback processing from boot-selected CPUs (RCU_NOCB_CPU) [N/y/?] (NEW)

    All the rest of the RCU options are configured automatically.

  - Remove all uses of RCU-protected array indexes: replace the
    rcu_[access|dereference]_index_check() APIs with READ_ONCE() and rcu_lockdep_assert().

  - RCU CPU-hotplug cleanups.

  - Updates to Tiny RCU: a race fix and further code shrinkage.

  - RCU torture-testing updates: fixes, speedups, cleanups and
    documentation updates.

  - Miscellaneous fixes.

  - Documentation updates.
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parents c46a024e 0868aa22
@@ -10,7 +10,19 @@ also be used to protect arrays.  Three situations are as follows:

 3. Resizeable Arrays

-Each of these situations are discussed below.
+Each of these three situations involves an RCU-protected pointer to an
+array that is separately indexed.  It might be tempting to consider use
+of RCU to instead protect the index into an array, however, this use
+case is -not- supported.  The problem with RCU-protected indexes into
+arrays is that compilers can play way too many optimization games with
+integers, which means that the rules governing handling of these indexes
+are far more trouble than they are worth.  If RCU-protected indexes into
+arrays prove to be particularly valuable (which they have not thus far),
+explicit cooperation from the compiler will be required to permit them
+to be safely used.
+
+That aside, each of the three RCU-protected pointer situations is
+described in the following sections.

 Situation 1: Hash Tables

@@ -36,9 +48,9 @@ Quick Quiz:  Why is it so important that updates be rare when

 Situation 3: Resizeable Arrays

 Use of RCU for resizeable arrays is demonstrated by the grow_ary()
-function used by the System V IPC code.  The array is used to map from
-semaphore, message-queue, and shared-memory IDs to the data structure
-that represents the corresponding IPC construct.  The grow_ary()
+function formerly used by the System V IPC code.  The array is used
+to map from semaphore, message-queue, and shared-memory IDs to the data
+structure that represents the corresponding IPC construct.  The grow_ary()
 function does not acquire any locks; instead its caller must hold the
 ids->sem semaphore.
...
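As context for the supported pattern above: a minimal sketch of an RCU-protected pointer to a separately indexed array (struct foo, gp, and read_element() are hypothetical names, not part of this commit):

	struct foo {
		int size;
		int data[];			/* Sized at allocation time. */
	};

	struct foo __rcu *gp;			/* RCU protects the pointer... */

	int read_element(int i)
	{
		struct foo *p;
		int ret = -1;

		rcu_read_lock();
		p = rcu_dereference(gp);	/* ...so dereference the pointer, */
		if (p && i < p->size)
			ret = p->data[i];	/* ...and index with a plain int. */
		rcu_read_unlock();
		return ret;
	}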
@@ -47,11 +47,6 @@ checking of rcu_dereference() primitives:
 		Use explicit check expression "c" along with
 		srcu_read_lock_held().  This is useful in code that
 		is invoked by both SRCU readers and updaters.
-	rcu_dereference_index_check(p, c):
-		Use explicit check expression "c", but the caller
-		must supply one of the rcu_read_lock_held() functions.
-		This is useful in code that uses RCU-protected arrays
-		that is invoked by both RCU readers and updaters.
 	rcu_dereference_raw(p):
 		Don't check.  (Use sparingly, if at all.)
 	rcu_dereference_protected(p, c):
@@ -64,11 +59,6 @@ checking of rcu_dereference() primitives:
 		but retain the compiler constraints that prevent duplicating
 		or coalescing.  This is useful when testing the
 		value of the pointer itself, for example, against NULL.
-	rcu_access_index(idx):
-		Return the value of the index and omit all barriers, but
-		retain the compiler constraints that prevent duplicating
-		or coalescing.  This is useful when testing the
-		value of the index itself, for example, against -1.

 The rcu_dereference_check() check expression can be any boolean
 expression, but would normally include a lockdep expression.  However,
...
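To make the check expression concrete, a hedged sketch (gp and my_lock are hypothetical) of code reachable both from RCU readers and from an updater holding a lock:

	/* Legal under rcu_read_lock() or with my_lock held. */
	p = rcu_dereference_check(gp, lockdep_is_held(&my_lock));

The supplied expression is OR'd with rcu_read_lock_held() internally, so either form of protection satisfies lockdep.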
@@ -25,17 +25,6 @@ o	You must use one of the rcu_dereference() family of primitives
 	for an example where the compiler can in fact deduce the exact
 	value of the pointer, and thus cause misordering.

-o	Do not use single-element RCU-protected arrays.  The compiler
-	is within its right to assume that the value of an index into
-	such an array must necessarily evaluate to zero.  The compiler
-	could then substitute the constant zero for the computation, so
-	that the array index no longer depended on the value returned
-	by rcu_dereference().  If the array index no longer depends
-	on rcu_dereference(), then both the compiler and the CPU
-	are within their rights to order the array access before the
-	rcu_dereference(), which can cause the array access to return
-	garbage.
-
 o	Avoid cancellation when using the "+" and "-" infix arithmetic
 	operators.  For example, for a given variable "x", avoid
 	"(x-x)".  There are similar arithmetic pitfalls from other
@@ -76,14 +65,15 @@ o	Do not use the results from the boolean "&&" and "||" when
 	dereferencing.  For example, the following (rather improbable)
 	code is buggy:

-		int a[2];
-		int index;
-		int force_zero_index = 1;
+		int *p;
+		int *q;

 		...

-		r1 = rcu_dereference(i1);
-		r2 = a[r1 && force_zero_index];  /* BUGGY!!! */
+		p = rcu_dereference(gp);
+		q = &global_q;
+		q += p != &oom_p1 && p != &oom_p2;
+		r1 = *q;  /* BUGGY!!! */

 	The reason this is buggy is that "&&" and "||" are often compiled
 	using branches.  While weak-memory machines such as ARM or PowerPC
@@ -94,14 +84,15 @@ o	Do not use the results from relational operators ("==", "!=",
 	">", ">=", "<", or "<=") when dereferencing.  For example,
 	the following (quite strange) code is buggy:

-		int a[2];
-		int index;
-		int flip_index = 0;
+		int *p;
+		int *q;

 		...

-		r1 = rcu_dereference(i1);
-		r2 = a[r1 != flip_index];  /* BUGGY!!! */
+		p = rcu_dereference(gp);
+		q = &global_q;
+		q += p > &oom_p;
+		r1 = *q;  /* BUGGY!!! */

 	As before, the reason this is buggy is that relational operators
 	are often compiled using branches.  And as before, although
@@ -193,6 +184,11 @@ o	Be very careful about comparing pointers obtained from
 	pointer.  Note that the volatile cast in rcu_dereference()
 	will normally prevent the compiler from knowing too much.

+	However, please note that if the compiler knows that the
+	pointer takes on only one of two values, a not-equal
+	comparison will provide exactly the information that the
+	compiler needs to deduce the value of the pointer.
+
 o	Disable any value-speculation optimizations that your compiler
 	might provide, especially if you are making use of feedback-based
 	optimizations that take data collected from prior runs.  Such
...
@@ -256,7 +256,9 @@ rcu_dereference()
 	If you are going to be fetching multiple fields from the
 	RCU-protected structure, using the local variable is of
 	course preferred.  Repeated rcu_dereference() calls look
-	ugly and incur unnecessary overhead on Alpha CPUs.
+	ugly, do not guarantee that the same pointer will be returned
+	if an update happened while in the critical section, and incur
+	unnecessary overhead on Alpha CPUs.

 	Note that the value returned by rcu_dereference() is valid
 	only within the enclosing RCU read-side critical section.
@@ -879,9 +881,7 @@ SRCU:	Initialization/cleanup

 All: lockdep-checked RCU-protected pointer access

-	rcu_access_index
 	rcu_access_pointer
-	rcu_dereference_index_check
 	rcu_dereference_raw
 	rcu_lockdep_assert
 	rcu_sleep_check
...
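A minimal sketch of the preferred single-fetch pattern described above (gp and the field names are hypothetical):

	rcu_read_lock();
	p = rcu_dereference(gp);		/* Fetch the pointer exactly once... */
	if (p != NULL)
		do_something(p->a, p->b);	/* ...then work through the local. */
	rcu_read_unlock();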
@@ -2992,11 +2992,34 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
 			Set maximum number of finished RCU callbacks to
 			process in one batch.

+	rcutree.dump_tree=	[KNL]
+			Dump the structure of the rcu_node combining tree
+			out at early boot.  This is used for diagnostic
+			purposes, to verify correct tree setup.
+
+	rcutree.gp_cleanup_delay=	[KNL]
+			Set the number of jiffies to delay each step of
+			RCU grace-period cleanup.  This only has effect
+			when CONFIG_RCU_TORTURE_TEST_SLOW_CLEANUP is set.
+
 	rcutree.gp_init_delay=	[KNL]
 			Set the number of jiffies to delay each step of
-			RCU grace-period initialization.  This only has
-			effect when CONFIG_RCU_TORTURE_TEST_SLOW_INIT is
-			set.
+			RCU grace-period initialization.  This only has
+			effect when CONFIG_RCU_TORTURE_TEST_SLOW_INIT
+			is set.
+
+	rcutree.gp_preinit_delay=	[KNL]
+			Set the number of jiffies to delay each step of
+			RCU grace-period pre-initialization, that is,
+			the propagation of recent CPU-hotplug changes up
+			the rcu_node combining tree.  This only has effect
+			when CONFIG_RCU_TORTURE_TEST_SLOW_PREINIT is set.
+
+	rcutree.rcu_fanout_exact= [KNL]
+			Disable autobalancing of the rcu_node combining
+			tree.  This is used by rcutorture, and might
+			possibly be useful for architectures having high
+			cache-to-cache transfer latencies.
+
 	rcutree.rcu_fanout_leaf= [KNL]
 			Increase the number of CPUs assigned to each

@@ -3101,7 +3124,11 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
 			test, hence the "fake".

 	rcutorture.nreaders= [KNL]
-			Set number of RCU readers.
+			Set number of RCU readers.  The value -1 selects
+			N-1, where N is the number of CPUs.  A value
+			"n" less than -1 selects N-n-2, where N is again
+			the number of CPUs.  For example, -2 selects N
+			(the number of CPUs), -3 selects N+1, and so on.

 	rcutorture.object_debug= [KNL]
 			Enable debug-object double-call_rcu() testing.
...
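For illustration only, a hypothetical boot line combining parameters documented above (values chosen arbitrarily; module parameters take effect this way only for built-in code):

	rcutree.gp_init_delay=3 rcutree.rcu_fanout_exact=1 rcutorture.nreaders=-2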
@@ -617,16 +617,16 @@ case what's actually required is:
 However, stores are not speculated.  This means that ordering -is- provided
 for load-store control dependencies, as in the following example:

-	q = ACCESS_ONCE(a);
+	q = READ_ONCE_CTRL(a);
 	if (q) {
 		ACCESS_ONCE(b) = p;
 	}

-Control dependencies pair normally with other types of barriers.
-That said, please note that ACCESS_ONCE() is not optional!  Without the
-ACCESS_ONCE(), might combine the load from 'a' with other loads from
-'a', and the store to 'b' with other stores to 'b', with possible highly
-counterintuitive effects on ordering.
+Control dependencies pair normally with other types of barriers.  That
+said, please note that READ_ONCE_CTRL() is not optional!  Without the
+READ_ONCE_CTRL(), the compiler might combine the load from 'a' with
+other loads from 'a', and the store to 'b' with other stores to 'b',
+with possible highly counterintuitive effects on ordering.

 Worse yet, if the compiler is able to prove (say) that the value of
 variable 'a' is always non-zero, it would be well within its rights
@@ -636,12 +636,15 @@ as follows:

 	q = a;
 	b = p;  /* BUG: Compiler and CPU can both reorder!!! */

-So don't leave out the ACCESS_ONCE().
+Finally, the READ_ONCE_CTRL() includes an smp_read_barrier_depends()
+that DEC Alpha needs in order to respect control dependencies.
+
+So don't leave out the READ_ONCE_CTRL().

 It is tempting to try to enforce ordering on identical stores on both
 branches of the "if" statement as follows:

-	q = ACCESS_ONCE(a);
+	q = READ_ONCE_CTRL(a);
 	if (q) {
 		barrier();
 		ACCESS_ONCE(b) = p;
@@ -655,7 +658,7 @@ branches of the "if" statement as follows:
 Unfortunately, current compilers will transform this as follows at high
 optimization levels:

-	q = ACCESS_ONCE(a);
+	q = READ_ONCE_CTRL(a);
 	barrier();
 	ACCESS_ONCE(b) = p;  /* BUG: No ordering vs. load from a!!! */
 	if (q) {
@@ -685,7 +688,7 @@ memory barriers, for example, smp_store_release():
 In contrast, without explicit memory barriers, two-legged-if control
 ordering is guaranteed only when the stores differ, for example:

-	q = ACCESS_ONCE(a);
+	q = READ_ONCE_CTRL(a);
 	if (q) {
 		ACCESS_ONCE(b) = p;
 		do_something();
@@ -694,14 +697,14 @@ ordering is guaranteed only when the stores differ, for example:
 		do_something_else();
 	}

-The initial ACCESS_ONCE() is still required to prevent the compiler from
-proving the value of 'a'.
+The initial READ_ONCE_CTRL() is still required to prevent the compiler
+from proving the value of 'a'.

 In addition, you need to be careful what you do with the local variable 'q',
 otherwise the compiler might be able to guess the value and again remove
 the needed conditional.  For example:

-	q = ACCESS_ONCE(a);
+	q = READ_ONCE_CTRL(a);
 	if (q % MAX) {
 		ACCESS_ONCE(b) = p;
 		do_something();
@@ -714,7 +717,7 @@ If MAX is defined to be 1, then the compiler knows that (q % MAX) is
 equal to zero, in which case the compiler is within its rights to
 transform the above code into the following:

-	q = ACCESS_ONCE(a);
+	q = READ_ONCE_CTRL(a);
 	ACCESS_ONCE(b) = p;
 	do_something_else();

@@ -725,7 +728,7 @@ is gone, and the barrier won't bring it back.  Therefore, if you are
 relying on this ordering, you should make sure that MAX is greater than
 one, perhaps as follows:

-	q = ACCESS_ONCE(a);
+	q = READ_ONCE_CTRL(a);
 	BUILD_BUG_ON(MAX <= 1); /* Order load from a with store to b. */
 	if (q % MAX) {
 		ACCESS_ONCE(b) = p;
@@ -742,14 +745,15 @@ of the 'if' statement.
 You must also be careful not to rely too much on boolean short-circuit
 evaluation.  Consider this example:

-	q = ACCESS_ONCE(a);
+	q = READ_ONCE_CTRL(a);
 	if (a || 1 > 0)
 		ACCESS_ONCE(b) = 1;

-Because the second condition is always true, the compiler can transform
-this example as follows, defeating control dependency:
+Because the first condition cannot fault and the second condition is
+always true, the compiler can transform this example as follows,
+defeating control dependency:

-	q = ACCESS_ONCE(a);
+	q = READ_ONCE_CTRL(a);
 	ACCESS_ONCE(b) = 1;

 This example underscores the need to ensure that the compiler cannot
@@ -762,8 +766,8 @@ demonstrated by two related examples, with the initial values of
 x and y both being zero:

 	CPU 0			  CPU 1
-	=====================	  =====================
-	r1 = ACCESS_ONCE(x);	  r2 = ACCESS_ONCE(y);
+	=======================	  =======================
+	r1 = READ_ONCE_CTRL(x);	  r2 = READ_ONCE_CTRL(y);
 	if (r1 > 0)		  if (r2 > 0)
 	  ACCESS_ONCE(y) = 1;	  ACCESS_ONCE(x) = 1;

@@ -783,7 +787,8 @@ But because control dependencies do -not- provide transitivity, the above
 assertion can fail after the combined three-CPU example completes.  If you
 need the three-CPU example to provide ordering, you will need smp_mb()
 between the loads and stores in the CPU 0 and CPU 1 code fragments,
-that is, just before or just after the "if" statements.
+that is, just before or just after the "if" statements.  Furthermore,
+the original two-CPU example is very fragile and should be avoided.

 These two examples are the LB and WWC litmus tests from this paper:
 http://www.cl.cam.ac.uk/users/pes20/ppc-supplemental/test6.pdf and this
@@ -791,6 +796,12 @@ site: https://www.cl.cam.ac.uk/~pes20/ppcmem/index.html.

 In summary:

+  (*) Control dependencies must be headed by READ_ONCE_CTRL().
+      Or, as a much less preferable alternative, the control
+      dependency may be headed by a READ_ONCE() or an ACCESS_ONCE()
+      read and must have smp_read_barrier_depends() between this
+      read and the control-dependent write.
+
   (*) Control dependencies can order prior loads against later stores.
       However, they do -not- guarantee any other sort of ordering:
       Not prior loads against later loads, nor prior stores against
@@ -1784,10 +1795,9 @@ for each construct.  These operations all imply certain barriers:
      Memory operations issued before the ACQUIRE may be completed after
      the ACQUIRE operation has completed.  An smp_mb__before_spinlock(),
-     combined with a following ACQUIRE, orders prior loads against
-     subsequent loads and stores and also orders prior stores against
-     subsequent stores.  Note that this is weaker than smp_mb()!  The
-     smp_mb__before_spinlock() primitive is free on many architectures.
+     combined with a following ACQUIRE, orders prior stores against
+     subsequent loads and stores.  Note that this is weaker than smp_mb()!
+     The smp_mb__before_spinlock() primitive is free on many architectures.

 (2) RELEASE operation implication:
...
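As a worked illustration of the load-store control dependency discussed above, a sketch with one producer and one consumer (data, flag, and ack are hypothetical):

	/* Producer */
	WRITE_ONCE(data, 1);
	smp_store_release(&flag, 1);	/* Order the data store before flag. */

	/* Consumer */
	q = READ_ONCE_CTRL(flag);	/* Head of the control dependency. */
	if (q)
		WRITE_ONCE(ack, 1);	/* Ordered after the load from flag. */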
@@ -89,5 +89,6 @@ do { \
 #define smp_mb__before_atomic()     smp_mb()
 #define smp_mb__after_atomic()      smp_mb()
+#define smp_mb__before_spinlock()   smp_mb()

 #endif /* _ASM_POWERPC_BARRIER_H */
@@ -53,9 +53,12 @@

 static DEFINE_MUTEX(mce_chrdev_read_mutex);

 #define rcu_dereference_check_mce(p) \
-	rcu_dereference_index_check((p), \
-			      rcu_read_lock_sched_held() || \
-			      lockdep_is_held(&mce_chrdev_read_mutex))
+({ \
+	rcu_lockdep_assert(rcu_read_lock_sched_held() || \
+			   lockdep_is_held(&mce_chrdev_read_mutex), \
+			   "suspicious rcu_dereference_check_mce() usage"); \
+	smp_load_acquire(&(p)); \
+})

 #define CREATE_TRACE_POINTS
 #include <trace/events/mce.h>
@@ -1887,7 +1890,7 @@ static ssize_t mce_chrdev_read(struct file *filp, char __user *ubuf,
 static unsigned int mce_chrdev_poll(struct file *file, poll_table *wait)
 {
 	poll_wait(file, &mce_chrdev_wait, wait);
-	if (rcu_access_index(mcelog.next))
+	if (READ_ONCE(mcelog.next))
 		return POLLIN | POLLRDNORM;
 	if (!mce_apei_read_done && apei_check_mce())
 		return POLLIN | POLLRDNORM;
@@ -1932,8 +1935,8 @@ void register_mce_write_callback(ssize_t (*fn)(struct file *filp,
 }
 EXPORT_SYMBOL_GPL(register_mce_write_callback);

-ssize_t mce_chrdev_write(struct file *filp, const char __user *ubuf,
-			 size_t usize, loff_t *off)
+static ssize_t mce_chrdev_write(struct file *filp, const char __user *ubuf,
+				size_t usize, loff_t *off)
 {
 	if (mce_write)
 		return mce_write(filp, ubuf, usize, off);
...
@@ -252,6 +252,22 @@ static __always_inline void __write_once_size(volatile void *p, void *res, int s
 #define WRITE_ONCE(x, val) \
 	({ typeof(x) __val = (val); __write_once_size(&(x), &__val, sizeof(__val)); __val; })

+/**
+ * READ_ONCE_CTRL - Read a value heading a control dependency
+ * @x: The value to be read, heading the control dependency
+ *
+ * Control dependencies are tricky.  See Documentation/memory-barriers.txt
+ * for important information on how to use them.  Note that in many cases,
+ * use of smp_load_acquire() will be much simpler.  Control dependencies
+ * should be avoided except on the hottest of hotpaths.
+ */
+#define READ_ONCE_CTRL(x) \
+({ \
+	typeof(x) __val = READ_ONCE(x); \
+	smp_read_barrier_depends(); /* Enforce control dependency. */ \
+	__val; \
+})
+
 #endif /* __KERNEL__ */

 #endif /* __ASSEMBLY__ */
...
@@ -29,8 +29,8 @@
  */
 static inline void INIT_LIST_HEAD_RCU(struct list_head *list)
 {
-	ACCESS_ONCE(list->next) = list;
-	ACCESS_ONCE(list->prev) = list;
+	WRITE_ONCE(list->next, list);
+	WRITE_ONCE(list->prev, list);
 }

 /*
@@ -288,7 +288,7 @@ static inline void list_splice_init_rcu(struct list_head *list,
 #define list_first_or_null_rcu(ptr, type, member) \
 ({ \
 	struct list_head *__ptr = (ptr); \
-	struct list_head *__next = ACCESS_ONCE(__ptr->next); \
+	struct list_head *__next = READ_ONCE(__ptr->next); \
 	likely(__ptr != __next) ? list_entry_rcu(__next, type, member) : NULL; \
 })

@@ -549,8 +549,8 @@ static inline void hlist_add_behind_rcu(struct hlist_node *n,
  */
 #define hlist_for_each_entry_from_rcu(pos, member) \
 	for (; pos; \
-	     pos = hlist_entry_safe(rcu_dereference((pos)->member.next),\
-			typeof(*(pos)), member))
+	     pos = hlist_entry_safe(rcu_dereference_raw(hlist_next_rcu( \
+			&(pos)->member)), typeof(*(pos)), member))

 #endif	/* __KERNEL__ */
 #endif
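A brief usage sketch for list_first_or_null_rcu() as defined above (struct foo, head, and do_something() are illustrative):

	struct foo *first;

	rcu_read_lock();
	first = list_first_or_null_rcu(&head, struct foo, list);
	if (first)
		do_something(first);
	rcu_read_unlock();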
@@ -292,10 +292,6 @@ void rcu_sched_qs(void);
 void rcu_bh_qs(void);
 void rcu_check_callbacks(int user);
 struct notifier_block;
-void rcu_idle_enter(void);
-void rcu_idle_exit(void);
-void rcu_irq_enter(void);
-void rcu_irq_exit(void);
 int rcu_cpu_notify(struct notifier_block *self,
 		   unsigned long action, void *hcpu);

@@ -364,8 +360,8 @@ extern struct srcu_struct tasks_rcu_exit_srcu;
 #define rcu_note_voluntary_context_switch(t) \
 	do { \
 		rcu_all_qs(); \
-		if (ACCESS_ONCE((t)->rcu_tasks_holdout)) \
-			ACCESS_ONCE((t)->rcu_tasks_holdout) = false; \
+		if (READ_ONCE((t)->rcu_tasks_holdout)) \
+			WRITE_ONCE((t)->rcu_tasks_holdout, false); \
 	} while (0)
 #else /* #ifdef CONFIG_TASKS_RCU */
 #define TASKS_RCU(x) do { } while (0)
@@ -609,7 +605,7 @@ static inline void rcu_preempt_sleep_check(void)
 #define __rcu_access_pointer(p, space) \
 ({ \
-	typeof(*p) *_________p1 = (typeof(*p) *__force)ACCESS_ONCE(p); \
+	typeof(*p) *_________p1 = (typeof(*p) *__force)READ_ONCE(p); \
 	rcu_dereference_sparse(p, space); \
 	((typeof(*p) __force __kernel *)(_________p1)); \
 })
@@ -628,21 +624,6 @@ static inline void rcu_preempt_sleep_check(void)
 	((typeof(*p) __force __kernel *)(p)); \
 })

-#define __rcu_access_index(p, space) \
-({ \
-	typeof(p) _________p1 = ACCESS_ONCE(p); \
-	rcu_dereference_sparse(p, space); \
-	(_________p1); \
-})
-
-#define __rcu_dereference_index_check(p, c) \
-({ \
-	/* Dependency order vs. p above. */ \
-	typeof(p) _________p1 = lockless_dereference(p); \
-	rcu_lockdep_assert(c, \
-			   "suspicious rcu_dereference_index_check() usage"); \
-	(_________p1); \
-})
-
 /**
  * RCU_INITIALIZER() - statically initialize an RCU-protected global variable
  * @v: The value to statically initialize with.
@@ -659,7 +640,7 @@ static inline void rcu_preempt_sleep_check(void)
  */
 #define lockless_dereference(p) \
 ({ \
-	typeof(p) _________p1 = ACCESS_ONCE(p); \
+	typeof(p) _________p1 = READ_ONCE(p); \
 	smp_read_barrier_depends(); /* Dependency order vs. p above. */ \
 	(_________p1); \
 })
@@ -702,7 +683,7 @@ static inline void rcu_preempt_sleep_check(void)
  * @p: The pointer to read
  *
  * Return the value of the specified RCU-protected pointer, but omit the
- * smp_read_barrier_depends() and keep the ACCESS_ONCE().  This is useful
+ * smp_read_barrier_depends() and keep the READ_ONCE().  This is useful
  * when the value of this pointer is accessed, but the pointer is not
  * dereferenced, for example, when testing an RCU-protected pointer against
  * NULL.  Although rcu_access_pointer() may also be used in cases where
@@ -786,48 +767,13 @@ static inline void rcu_preempt_sleep_check(void)
  */
 #define rcu_dereference_raw_notrace(p) __rcu_dereference_check((p), 1, __rcu)

-/**
- * rcu_access_index() - fetch RCU index with no dereferencing
- * @p: The index to read
- *
- * Return the value of the specified RCU-protected index, but omit the
- * smp_read_barrier_depends() and keep the ACCESS_ONCE().  This is useful
- * when the value of this index is accessed, but the index is not
- * dereferenced, for example, when testing an RCU-protected index against
- * -1.  Although rcu_access_index() may also be used in cases where
- * update-side locks prevent the value of the index from changing, you
- * should instead use rcu_dereference_index_protected() for this use case.
- */
-#define rcu_access_index(p) __rcu_access_index((p), __rcu)
-
-/**
- * rcu_dereference_index_check() - rcu_dereference for indices with debug checking
- * @p: The pointer to read, prior to dereferencing
- * @c: The conditions under which the dereference will take place
- *
- * Similar to rcu_dereference_check(), but omits the sparse checking.
- * This allows rcu_dereference_index_check() to be used on integers,
- * which can then be used as array indices.  Attempting to use
- * rcu_dereference_check() on an integer will give compiler warnings
- * because the sparse address-space mechanism relies on dereferencing
- * the RCU-protected pointer.  Dereferencing integers is not something
- * that even gcc will put up with.
- *
- * Note that this function does not implicitly check for RCU read-side
- * critical sections.  If this function gains lots of uses, it might
- * make sense to provide versions for each flavor of RCU, but it does
- * not make sense as of early 2010.
- */
-#define rcu_dereference_index_check(p, c) \
-	__rcu_dereference_index_check((p), (c))
-
 /**
  * rcu_dereference_protected() - fetch RCU pointer when updates prevented
  * @p: The pointer to read, prior to dereferencing
  * @c: The conditions under which the dereference will take place
  *
  * Return the value of the specified RCU-protected pointer, but omit
- * both the smp_read_barrier_depends() and the ACCESS_ONCE().  This
+ * both the smp_read_barrier_depends() and the READ_ONCE().  This
  * is useful in cases where update-side locks prevent the value of the
  * pointer from changing.  Please note that this primitive does -not-
  * prevent the compiler from repeating this reference or combining it
@@ -1153,13 +1099,13 @@ static inline notrace void rcu_read_unlock_sched_notrace(void)
 #define kfree_rcu(ptr, rcu_head) \
 	__kfree_rcu(&((ptr)->rcu_head), offsetof(typeof(*(ptr)), rcu_head))

-#if defined(CONFIG_TINY_RCU) || defined(CONFIG_RCU_NOCB_CPU_ALL)
+#ifdef CONFIG_TINY_RCU
 static inline int rcu_needs_cpu(unsigned long *delta_jiffies)
 {
 	*delta_jiffies = ULONG_MAX;
 	return 0;
 }
-#endif /* #if defined(CONFIG_TINY_RCU) || defined(CONFIG_RCU_NOCB_CPU_ALL) */
+#endif /* #ifdef CONFIG_TINY_RCU */

 #if defined(CONFIG_RCU_NOCB_CPU_ALL)
 static inline bool rcu_is_nocb_cpu(int cpu) { return true; }
...
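Following up on lockless_dereference() above, a hedged usage sketch (gp is hypothetical): it provides the dependency ordering of rcu_dereference() without the RCU-reader checking, for pointers whose lifetime is guaranteed by other means:

	struct foo *p;

	p = lockless_dereference(gp);	/* Dependency-orders the load of gp */
	if (p)				/* against the later *p accesses.   */
		x = p->a;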
@@ -159,6 +159,22 @@ static inline void rcu_cpu_stall_reset(void)
 {
 }

+static inline void rcu_idle_enter(void)
+{
+}
+
+static inline void rcu_idle_exit(void)
+{
+}
+
+static inline void rcu_irq_enter(void)
+{
+}
+
+static inline void rcu_irq_exit(void)
+{
+}
+
 static inline void exit_rcu(void)
 {
 }
...
@@ -31,9 +31,7 @@
 #define __LINUX_RCUTREE_H

 void rcu_note_context_switch(void);
-#ifndef CONFIG_RCU_NOCB_CPU_ALL
 int rcu_needs_cpu(unsigned long *delta_jiffies);
-#endif /* #ifndef CONFIG_RCU_NOCB_CPU_ALL */
 void rcu_cpu_stall_reset(void);

 /*
@@ -93,6 +91,11 @@ void rcu_force_quiescent_state(void);
 void rcu_bh_force_quiescent_state(void);
 void rcu_sched_force_quiescent_state(void);

+void rcu_idle_enter(void);
+void rcu_idle_exit(void);
+void rcu_irq_enter(void);
+void rcu_irq_exit(void);
+
 void exit_rcu(void);

 void rcu_scheduler_starting(void);
...
@@ -120,7 +120,7 @@ do { \
 /*
  * Despite its name it doesn't necessarily have to be a full barrier.
  * It should only guarantee that a STORE before the critical section
- * can not be reordered with a LOAD inside this section.
+ * can not be reordered with LOADs and STOREs inside this section.
  * spin_lock() is the one-way barrier, this LOAD can not escape out
  * of the region. So the default implementation simply ensures that
  * a STORE can not move into the critical section, smp_wmb() should
...
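A sketch of the guarantee the comment above describes, under the assumption of a single prior store (mylock and x are hypothetical):

	WRITE_ONCE(x, 1);		/* This STORE... */
	smp_mb__before_spinlock();
	spin_lock(&mylock);
	/* ...cannot be reordered with LOADs or STOREs in here. */
	spin_unlock(&mylock);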
@@ -465,13 +465,9 @@ endmenu # "CPU/Task time and stats accounting"

 menu "RCU Subsystem"

-choice
-	prompt "RCU Implementation"
-	default TREE_RCU
-
 config TREE_RCU
-	bool "Tree-based hierarchical RCU"
-	depends on !PREEMPT && SMP
+	bool
+	default y if !PREEMPT && SMP
 	help
 	  This option selects the RCU implementation that is
 	  designed for very large SMP systems with hundreds or
@@ -479,8 +475,8 @@ config TREE_RCU
 	  smaller systems.

 config PREEMPT_RCU
-	bool "Preemptible tree-based hierarchical RCU"
-	depends on PREEMPT
+	bool
+	default y if PREEMPT
 	help
 	  This option selects the RCU implementation that is
 	  designed for very large SMP systems with hundreds or
@@ -491,15 +487,28 @@ config PREEMPT_RCU
 	  Select this option if you are unsure.

 config TINY_RCU
-	bool "UP-only small-memory-footprint RCU"
-	depends on !PREEMPT && !SMP
+	bool
+	default y if !PREEMPT && !SMP
 	help
 	  This option selects the RCU implementation that is
 	  designed for UP systems from which real-time response
 	  is not required.  This option greatly reduces the
 	  memory footprint of RCU.

-endchoice
+config RCU_EXPERT
+	bool "Make expert-level adjustments to RCU configuration"
+	default n
+	help
+	  This option needs to be enabled if you wish to make
+	  expert-level adjustments to RCU configuration.  By default,
+	  no such adjustments can be made, which has the often-beneficial
+	  side-effect of preventing "make oldconfig" from asking you all
+	  sorts of detailed questions about how you would like numerous
+	  obscure RCU options to be set up.
+
+	  Say Y if you need to make expert-level adjustments to RCU.
+
+	  Say N if you are unsure.

 config SRCU
 	bool
@@ -509,7 +518,7 @@ config SRCU
 	  sections.

 config TASKS_RCU
-	bool "Task_based RCU implementation using voluntary context switch"
+	bool
 	default n
 	select SRCU
 	help
@@ -517,8 +526,6 @@ config TASKS_RCU
 	  only voluntary context switch (not preemption!), idle, and
 	  user-mode execution as quiescent states.

-	  If unsure, say N.
-
 config RCU_STALL_COMMON
 	def_bool ( TREE_RCU || PREEMPT_RCU || RCU_TRACE )
 	help
@@ -531,9 +538,7 @@ config CONTEXT_TRACKING
 	bool

 config RCU_USER_QS
-	bool "Consider userspace as in RCU extended quiescent state"
-	depends on HAVE_CONTEXT_TRACKING && SMP
-	select CONTEXT_TRACKING
+	bool
 	help
 	  This option sets hooks on kernel / userspace boundaries and
 	  puts RCU in extended quiescent state when the CPU runs in
@@ -541,12 +546,6 @@ config RCU_USER_QS
 	  excluded from the global RCU state machine and thus doesn't
 	  try to keep the timer tick on for RCU.

-	  Unless you want to hack and help the development of the full
-	  dynticks mode, you shouldn't enable this option.  It also
-	  adds unnecessary overhead.
-
-	  If unsure say N
-
 config CONTEXT_TRACKING_FORCE
 	bool "Force context tracking"
 	depends on CONTEXT_TRACKING
@@ -578,7 +577,7 @@ config RCU_FANOUT
 	int "Tree-based hierarchical RCU fanout value"
 	range 2 64 if 64BIT
 	range 2 32 if !64BIT
-	depends on TREE_RCU || PREEMPT_RCU
+	depends on (TREE_RCU || PREEMPT_RCU) && RCU_EXPERT
 	default 64 if 64BIT
 	default 32 if !64BIT
 	help
@@ -596,9 +595,9 @@ config RCU_FANOUT

 config RCU_FANOUT_LEAF
 	int "Tree-based hierarchical RCU leaf-level fanout value"
-	range 2 RCU_FANOUT if 64BIT
-	range 2 RCU_FANOUT if !64BIT
-	depends on TREE_RCU || PREEMPT_RCU
+	range 2 64 if 64BIT
+	range 2 32 if !64BIT
+	depends on (TREE_RCU || PREEMPT_RCU) && RCU_EXPERT
 	default 16
 	help
 	  This option controls the leaf-level fanout of hierarchical
@@ -621,23 +620,9 @@ config RCU_FANOUT_LEAF

 	  Take the default if unsure.

-config RCU_FANOUT_EXACT
-	bool "Disable tree-based hierarchical RCU auto-balancing"
-	depends on TREE_RCU || PREEMPT_RCU
-	default n
-	help
-	  This option forces use of the exact RCU_FANOUT value specified,
-	  regardless of imbalances in the hierarchy.  This is useful for
-	  testing RCU itself, and might one day be useful on systems with
-	  strong NUMA behavior.
-
-	  Without RCU_FANOUT_EXACT, the code will balance the hierarchy.
-
-	  Say N if unsure.
-
 config RCU_FAST_NO_HZ
 	bool "Accelerate last non-dyntick-idle CPU's grace periods"
-	depends on NO_HZ_COMMON && SMP
+	depends on NO_HZ_COMMON && SMP && RCU_EXPERT
 	default n
 	help
 	  This option permits CPUs to enter dynticks-idle state even if
@@ -663,7 +648,7 @@ config TREE_RCU_TRACE

 config RCU_BOOST
 	bool "Enable RCU priority boosting"
-	depends on RT_MUTEXES && PREEMPT_RCU
+	depends on RT_MUTEXES && PREEMPT_RCU && RCU_EXPERT
 	default n
 	help
 	  This option boosts the priority of preempted RCU readers that
@@ -680,6 +665,7 @@ config RCU_KTHREAD_PRIO
 	range 0 99 if !RCU_BOOST
 	default 1 if RCU_BOOST
 	default 0 if !RCU_BOOST
+	depends on RCU_EXPERT
 	help
 	  This option specifies the SCHED_FIFO priority value that will be
 	  assigned to the rcuc/n and rcub/n threads and is also the value
...
@@ -398,7 +398,6 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen)
 	err = __stop_machine(take_cpu_down, &tcd_param, cpumask_of(cpu));
 	if (err) {
 		/* CPU didn't die: tell everyone.  Can't complain. */
-		smpboot_unpark_threads(cpu);
 		cpu_notify_nofail(CPU_DOWN_FAILED | mod, hcpu);
 		goto out_release;
 	}
@@ -463,6 +462,7 @@ static int smpboot_thread_call(struct notifier_block *nfb,
 	switch (action & ~CPU_TASKS_FROZEN) {

+	case CPU_DOWN_FAILED:
 	case CPU_ONLINE:
 		smpboot_unpark_threads(cpu);
 		break;
@@ -479,7 +479,7 @@ static struct notifier_block smpboot_thread_notifier = {
 	.priority = CPU_PRI_SMPBOOT,
 };

-void __cpuinit smpboot_thread_init(void)
+void smpboot_thread_init(void)
 {
 	register_cpu_notifier(&smpboot_thread_notifier);
 }
...
@@ -141,7 +141,7 @@ int perf_output_begin(struct perf_output_handle *handle,
 	perf_output_get_handle(handle);

 	do {
-		tail = ACCESS_ONCE(rb->user_page->data_tail);
+		tail = READ_ONCE_CTRL(rb->user_page->data_tail);
 		offset = head = local_read(&rb->head);
 		if (!rb->overwrite &&
 		    unlikely(CIRC_SPACE(head, tail, perf_data_size(rb)) < size))
...
@@ -122,12 +122,12 @@ static int torture_lock_busted_write_lock(void)

 static void torture_lock_busted_write_delay(struct torture_random_state *trsp)
 {
-	const unsigned long longdelay_us = 100;
+	const unsigned long longdelay_ms = 100;

 	/* We want a long delay occasionally to force massive contention.  */
 	if (!(torture_random(trsp) %
-	      (cxt.nrealwriters_stress * 2000 * longdelay_us)))
-		mdelay(longdelay_us);
+	      (cxt.nrealwriters_stress * 2000 * longdelay_ms)))
+		mdelay(longdelay_ms);
 #ifdef CONFIG_PREEMPT
 	if (!(torture_random(trsp) % (cxt.nrealwriters_stress * 20000)))
 		preempt_schedule();  /* Allow test to be preempted. */
@@ -160,14 +160,14 @@ static int torture_spin_lock_write_lock(void) __acquires(torture_spinlock)
 static void torture_spin_lock_write_delay(struct torture_random_state *trsp)
 {
 	const unsigned long shortdelay_us = 2;
-	const unsigned long longdelay_us = 100;
+	const unsigned long longdelay_ms = 100;

 	/* We want a short delay mostly to emulate likely code, and
 	 * we want a long delay occasionally to force massive contention.
 	 */
 	if (!(torture_random(trsp) %
-	      (cxt.nrealwriters_stress * 2000 * longdelay_us)))
-		mdelay(longdelay_us);
+	      (cxt.nrealwriters_stress * 2000 * longdelay_ms)))
+		mdelay(longdelay_ms);
 	if (!(torture_random(trsp) %
 	      (cxt.nrealwriters_stress * 2 * shortdelay_us)))
 		udelay(shortdelay_us);
@@ -309,7 +309,7 @@ static int torture_rwlock_read_lock_irq(void) __acquires(torture_rwlock)
 static void torture_rwlock_read_unlock_irq(void)
 __releases(torture_rwlock)
 {
-	write_unlock_irqrestore(&torture_rwlock, cxt.cur_ops->flags);
+	read_unlock_irqrestore(&torture_rwlock, cxt.cur_ops->flags);
 }

 static struct lock_torture_ops rw_lock_irq_ops = {
...
...@@ -241,6 +241,7 @@ rcu_torture_free(struct rcu_torture *p) ...@@ -241,6 +241,7 @@ rcu_torture_free(struct rcu_torture *p)
struct rcu_torture_ops { struct rcu_torture_ops {
int ttype; int ttype;
void (*init)(void); void (*init)(void);
void (*cleanup)(void);
int (*readlock)(void); int (*readlock)(void);
void (*read_delay)(struct torture_random_state *rrsp); void (*read_delay)(struct torture_random_state *rrsp);
void (*readunlock)(int idx); void (*readunlock)(int idx);
...@@ -477,10 +478,12 @@ static struct rcu_torture_ops rcu_busted_ops = { ...@@ -477,10 +478,12 @@ static struct rcu_torture_ops rcu_busted_ops = {
*/ */
DEFINE_STATIC_SRCU(srcu_ctl); DEFINE_STATIC_SRCU(srcu_ctl);
static struct srcu_struct srcu_ctld;
static struct srcu_struct *srcu_ctlp = &srcu_ctl;
static int srcu_torture_read_lock(void) __acquires(&srcu_ctl) static int srcu_torture_read_lock(void) __acquires(srcu_ctlp)
{ {
return srcu_read_lock(&srcu_ctl); return srcu_read_lock(srcu_ctlp);
} }
static void srcu_read_delay(struct torture_random_state *rrsp) static void srcu_read_delay(struct torture_random_state *rrsp)
...@@ -499,49 +502,49 @@ static void srcu_read_delay(struct torture_random_state *rrsp) ...@@ -499,49 +502,49 @@ static void srcu_read_delay(struct torture_random_state *rrsp)
rcu_read_delay(rrsp); rcu_read_delay(rrsp);
} }
static void srcu_torture_read_unlock(int idx) __releases(&srcu_ctl) static void srcu_torture_read_unlock(int idx) __releases(srcu_ctlp)
{ {
srcu_read_unlock(&srcu_ctl, idx); srcu_read_unlock(srcu_ctlp, idx);
} }
static unsigned long srcu_torture_completed(void) static unsigned long srcu_torture_completed(void)
{ {
return srcu_batches_completed(&srcu_ctl); return srcu_batches_completed(srcu_ctlp);
} }
static void srcu_torture_deferred_free(struct rcu_torture *rp) static void srcu_torture_deferred_free(struct rcu_torture *rp)
{ {
call_srcu(&srcu_ctl, &rp->rtort_rcu, rcu_torture_cb); call_srcu(srcu_ctlp, &rp->rtort_rcu, rcu_torture_cb);
} }
static void srcu_torture_synchronize(void) static void srcu_torture_synchronize(void)
{ {
synchronize_srcu(&srcu_ctl); synchronize_srcu(srcu_ctlp);
} }
static void srcu_torture_call(struct rcu_head *head, static void srcu_torture_call(struct rcu_head *head,
void (*func)(struct rcu_head *head)) void (*func)(struct rcu_head *head))
{ {
call_srcu(&srcu_ctl, head, func); call_srcu(srcu_ctlp, head, func);
} }
static void srcu_torture_barrier(void) static void srcu_torture_barrier(void)
{ {
srcu_barrier(&srcu_ctl); srcu_barrier(srcu_ctlp);
} }
static void srcu_torture_stats(void) static void srcu_torture_stats(void)
{ {
int cpu; int cpu;
int idx = srcu_ctl.completed & 0x1; int idx = srcu_ctlp->completed & 0x1;
pr_alert("%s%s per-CPU(idx=%d):", pr_alert("%s%s per-CPU(idx=%d):",
torture_type, TORTURE_FLAG, idx); torture_type, TORTURE_FLAG, idx);
for_each_possible_cpu(cpu) { for_each_possible_cpu(cpu) {
long c0, c1; long c0, c1;
c0 = (long)per_cpu_ptr(srcu_ctl.per_cpu_ref, cpu)->c[!idx]; c0 = (long)per_cpu_ptr(srcu_ctlp->per_cpu_ref, cpu)->c[!idx];
c1 = (long)per_cpu_ptr(srcu_ctl.per_cpu_ref, cpu)->c[idx]; c1 = (long)per_cpu_ptr(srcu_ctlp->per_cpu_ref, cpu)->c[idx];
pr_cont(" %d(%ld,%ld)", cpu, c0, c1); pr_cont(" %d(%ld,%ld)", cpu, c0, c1);
} }
pr_cont("\n"); pr_cont("\n");
...@@ -549,7 +552,7 @@ static void srcu_torture_stats(void) ...@@ -549,7 +552,7 @@ static void srcu_torture_stats(void)
static void srcu_torture_synchronize_expedited(void) static void srcu_torture_synchronize_expedited(void)
{ {
synchronize_srcu_expedited(&srcu_ctl); synchronize_srcu_expedited(srcu_ctlp);
} }
static struct rcu_torture_ops srcu_ops = { static struct rcu_torture_ops srcu_ops = {
...@@ -569,6 +572,38 @@ static struct rcu_torture_ops srcu_ops = { ...@@ -569,6 +572,38 @@ static struct rcu_torture_ops srcu_ops = {
.name = "srcu" .name = "srcu"
}; };
static void srcu_torture_init(void)
{
rcu_sync_torture_init();
WARN_ON(init_srcu_struct(&srcu_ctld));
srcu_ctlp = &srcu_ctld;
}
static void srcu_torture_cleanup(void)
{
cleanup_srcu_struct(&srcu_ctld);
srcu_ctlp = &srcu_ctl; /* In case of a later rcutorture run. */
}
/* As above, but dynamically allocated. */
static struct rcu_torture_ops srcud_ops = {
.ttype = SRCU_FLAVOR,
.init = srcu_torture_init,
.cleanup = srcu_torture_cleanup,
.readlock = srcu_torture_read_lock,
.read_delay = srcu_read_delay,
.readunlock = srcu_torture_read_unlock,
.started = NULL,
.completed = srcu_torture_completed,
.deferred_free = srcu_torture_deferred_free,
.sync = srcu_torture_synchronize,
.exp_sync = srcu_torture_synchronize_expedited,
.call = srcu_torture_call,
.cb_barrier = srcu_torture_barrier,
.stats = srcu_torture_stats,
.name = "srcud"
};
/* /*
* Definitions for sched torture testing. * Definitions for sched torture testing.
*/ */
...@@ -672,8 +707,8 @@ static void rcu_torture_boost_cb(struct rcu_head *head) ...@@ -672,8 +707,8 @@ static void rcu_torture_boost_cb(struct rcu_head *head)
struct rcu_boost_inflight *rbip = struct rcu_boost_inflight *rbip =
container_of(head, struct rcu_boost_inflight, rcu); container_of(head, struct rcu_boost_inflight, rcu);
smp_mb(); /* Ensure RCU-core accesses precede clearing ->inflight */ /* Ensure RCU-core accesses precede clearing ->inflight */
rbip->inflight = 0; smp_store_release(&rbip->inflight, 0);
} }
static int rcu_torture_boost(void *arg) static int rcu_torture_boost(void *arg)
...@@ -710,9 +745,9 @@ static int rcu_torture_boost(void *arg) ...@@ -710,9 +745,9 @@ static int rcu_torture_boost(void *arg)
call_rcu_time = jiffies; call_rcu_time = jiffies;
while (ULONG_CMP_LT(jiffies, endtime)) { while (ULONG_CMP_LT(jiffies, endtime)) {
/* If we don't have a callback in flight, post one. */ /* If we don't have a callback in flight, post one. */
if (!rbi.inflight) { if (!smp_load_acquire(&rbi.inflight)) {
smp_mb(); /* RCU core before ->inflight = 1. */ /* RCU core before ->inflight = 1. */
rbi.inflight = 1; smp_store_release(&rbi.inflight, 1);
call_rcu(&rbi.rcu, rcu_torture_boost_cb); call_rcu(&rbi.rcu, rcu_torture_boost_cb);
if (jiffies - call_rcu_time > if (jiffies - call_rcu_time >
test_boost_duration * HZ - HZ / 2) { test_boost_duration * HZ - HZ / 2) {
...@@ -751,11 +786,10 @@ checkwait: stutter_wait("rcu_torture_boost"); ...@@ -751,11 +786,10 @@ checkwait: stutter_wait("rcu_torture_boost");
} while (!torture_must_stop()); } while (!torture_must_stop());
/* Clean up and exit. */ /* Clean up and exit. */
while (!kthread_should_stop() || rbi.inflight) { while (!kthread_should_stop() || smp_load_acquire(&rbi.inflight)) {
torture_shutdown_absorb("rcu_torture_boost"); torture_shutdown_absorb("rcu_torture_boost");
schedule_timeout_uninterruptible(1); schedule_timeout_uninterruptible(1);
} }
smp_mb(); /* order accesses to ->inflight before stack-frame death. */
destroy_rcu_head_on_stack(&rbi.rcu); destroy_rcu_head_on_stack(&rbi.rcu);
torture_kthread_stopping("rcu_torture_boost"); torture_kthread_stopping("rcu_torture_boost");
return 0; return 0;
...@@ -1054,7 +1088,7 @@ static void rcu_torture_timer(unsigned long unused) ...@@ -1054,7 +1088,7 @@ static void rcu_torture_timer(unsigned long unused)
p = rcu_dereference_check(rcu_torture_current, p = rcu_dereference_check(rcu_torture_current,
rcu_read_lock_bh_held() || rcu_read_lock_bh_held() ||
rcu_read_lock_sched_held() || rcu_read_lock_sched_held() ||
srcu_read_lock_held(&srcu_ctl)); srcu_read_lock_held(srcu_ctlp));
if (p == NULL) { if (p == NULL) {
/* Leave because rcu_torture_writer is not yet underway */ /* Leave because rcu_torture_writer is not yet underway */
cur_ops->readunlock(idx); cur_ops->readunlock(idx);
...@@ -1128,7 +1162,7 @@ rcu_torture_reader(void *arg) ...@@ -1128,7 +1162,7 @@ rcu_torture_reader(void *arg)
p = rcu_dereference_check(rcu_torture_current, p = rcu_dereference_check(rcu_torture_current,
rcu_read_lock_bh_held() || rcu_read_lock_bh_held() ||
rcu_read_lock_sched_held() || rcu_read_lock_sched_held() ||
srcu_read_lock_held(&srcu_ctl)); srcu_read_lock_held(srcu_ctlp));
if (p == NULL) { if (p == NULL) {
/* Wait for rcu_torture_writer to get underway */ /* Wait for rcu_torture_writer to get underway */
cur_ops->readunlock(idx); cur_ops->readunlock(idx);
...@@ -1413,12 +1447,15 @@ static int rcu_torture_barrier_cbs(void *arg) ...@@ -1413,12 +1447,15 @@ static int rcu_torture_barrier_cbs(void *arg)
do { do {
wait_event(barrier_cbs_wq[myid], wait_event(barrier_cbs_wq[myid],
(newphase = (newphase =
ACCESS_ONCE(barrier_phase)) != lastphase || smp_load_acquire(&barrier_phase)) != lastphase ||
torture_must_stop()); torture_must_stop());
lastphase = newphase; lastphase = newphase;
smp_mb(); /* ensure barrier_phase load before ->call(). */
if (torture_must_stop()) if (torture_must_stop())
break; break;
/*
* The above smp_load_acquire() ensures barrier_phase load
* is ordered before the following ->call().
*/
cur_ops->call(&rcu, rcu_torture_barrier_cbf); cur_ops->call(&rcu, rcu_torture_barrier_cbf);
if (atomic_dec_and_test(&barrier_cbs_count)) if (atomic_dec_and_test(&barrier_cbs_count))
wake_up(&barrier_wq); wake_up(&barrier_wq);
...@@ -1439,8 +1476,8 @@ static int rcu_torture_barrier(void *arg) ...@@ -1439,8 +1476,8 @@ static int rcu_torture_barrier(void *arg)
do { do {
atomic_set(&barrier_cbs_invoked, 0); atomic_set(&barrier_cbs_invoked, 0);
atomic_set(&barrier_cbs_count, n_barrier_cbs); atomic_set(&barrier_cbs_count, n_barrier_cbs);
smp_mb(); /* Ensure barrier_phase after prior assignments. */ /* Ensure barrier_phase ordered after prior assignments. */
barrier_phase = !barrier_phase; smp_store_release(&barrier_phase, !barrier_phase);
for (i = 0; i < n_barrier_cbs; i++) for (i = 0; i < n_barrier_cbs; i++)
wake_up(&barrier_cbs_wq[i]); wake_up(&barrier_cbs_wq[i]);
wait_event(barrier_wq, wait_event(barrier_wq,
...@@ -1588,10 +1625,14 @@ rcu_torture_cleanup(void) ...@@ -1588,10 +1625,14 @@ rcu_torture_cleanup(void)
rcutorture_booster_cleanup(i); rcutorture_booster_cleanup(i);
} }
/* Wait for all RCU callbacks to fire. */ /*
* Wait for all RCU callbacks to fire, then do flavor-specific
* cleanup operations.
*/
if (cur_ops->cb_barrier != NULL) if (cur_ops->cb_barrier != NULL)
cur_ops->cb_barrier(); cur_ops->cb_barrier();
if (cur_ops->cleanup != NULL)
cur_ops->cleanup();
rcu_torture_stats_print(); /* -After- the stats thread is stopped! */ rcu_torture_stats_print(); /* -After- the stats thread is stopped! */
...@@ -1668,8 +1709,8 @@ rcu_torture_init(void) ...@@ -1668,8 +1709,8 @@ rcu_torture_init(void)
int cpu; int cpu;
int firsterr = 0; int firsterr = 0;
static struct rcu_torture_ops *torture_ops[] = { static struct rcu_torture_ops *torture_ops[] = {
&rcu_ops, &rcu_bh_ops, &rcu_busted_ops, &srcu_ops, &sched_ops, &rcu_ops, &rcu_bh_ops, &rcu_busted_ops, &srcu_ops, &srcud_ops,
RCUTORTURE_TASKS_OPS &sched_ops, RCUTORTURE_TASKS_OPS
}; };
if (!torture_init_begin(torture_type, verbose, &torture_runnable)) if (!torture_init_begin(torture_type, verbose, &torture_runnable))
...@@ -1701,7 +1742,7 @@ rcu_torture_init(void) ...@@ -1701,7 +1742,7 @@ rcu_torture_init(void)
if (nreaders >= 0) { if (nreaders >= 0) {
nrealreaders = nreaders; nrealreaders = nreaders;
} else { } else {
nrealreaders = num_online_cpus() - 1; nrealreaders = num_online_cpus() - 2 - nreaders;
if (nrealreaders <= 0) if (nrealreaders <= 0)
nrealreaders = 1; nrealreaders = 1;
} }
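The new reader-count formula keeps the historical default while letting negative values oversubscribe: with the default nreaders=-1 this still yields num_online_cpus() - 1 readers, nreaders=-2 yields num_online_cpus(), and each further decrement adds one more reader, subject to the floor of one reader enforced just below.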
......
...@@ -151,7 +151,7 @@ static unsigned long srcu_readers_seq_idx(struct srcu_struct *sp, int idx) ...@@ -151,7 +151,7 @@ static unsigned long srcu_readers_seq_idx(struct srcu_struct *sp, int idx)
unsigned long t; unsigned long t;
for_each_possible_cpu(cpu) { for_each_possible_cpu(cpu) {
t = ACCESS_ONCE(per_cpu_ptr(sp->per_cpu_ref, cpu)->seq[idx]); t = READ_ONCE(per_cpu_ptr(sp->per_cpu_ref, cpu)->seq[idx]);
sum += t; sum += t;
} }
return sum; return sum;
...@@ -168,7 +168,7 @@ static unsigned long srcu_readers_active_idx(struct srcu_struct *sp, int idx) ...@@ -168,7 +168,7 @@ static unsigned long srcu_readers_active_idx(struct srcu_struct *sp, int idx)
unsigned long t; unsigned long t;
for_each_possible_cpu(cpu) { for_each_possible_cpu(cpu) {
t = ACCESS_ONCE(per_cpu_ptr(sp->per_cpu_ref, cpu)->c[idx]); t = READ_ONCE(per_cpu_ptr(sp->per_cpu_ref, cpu)->c[idx]);
sum += t; sum += t;
} }
return sum; return sum;
...@@ -265,8 +265,8 @@ static int srcu_readers_active(struct srcu_struct *sp) ...@@ -265,8 +265,8 @@ static int srcu_readers_active(struct srcu_struct *sp)
unsigned long sum = 0; unsigned long sum = 0;
for_each_possible_cpu(cpu) { for_each_possible_cpu(cpu) {
sum += ACCESS_ONCE(per_cpu_ptr(sp->per_cpu_ref, cpu)->c[0]); sum += READ_ONCE(per_cpu_ptr(sp->per_cpu_ref, cpu)->c[0]);
sum += ACCESS_ONCE(per_cpu_ptr(sp->per_cpu_ref, cpu)->c[1]); sum += READ_ONCE(per_cpu_ptr(sp->per_cpu_ref, cpu)->c[1]);
} }
return sum; return sum;
} }
...@@ -296,7 +296,7 @@ int __srcu_read_lock(struct srcu_struct *sp) ...@@ -296,7 +296,7 @@ int __srcu_read_lock(struct srcu_struct *sp)
{ {
int idx; int idx;
idx = ACCESS_ONCE(sp->completed) & 0x1; idx = READ_ONCE(sp->completed) & 0x1;
preempt_disable(); preempt_disable();
__this_cpu_inc(sp->per_cpu_ref->c[idx]); __this_cpu_inc(sp->per_cpu_ref->c[idx]);
smp_mb(); /* B */ /* Avoid leaking the critical section. */ smp_mb(); /* B */ /* Avoid leaking the critical section. */
......
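Throughout this series, ACCESS_ONCE() is replaced by READ_ONCE() and WRITE_ONCE(). The older macro relied on a volatile cast that was unreliable for non-scalar types and that conflated loads with stores; the new macros make the access direction explicit. A hedged standalone sketch of the reader/writer idiom the srcu.c hunks above use (the flat counter array here is illustrative, not srcu.c's actual per-CPU layout):

#include <linux/compiler.h>

/* Reader: sum counters that other CPUs update concurrently.
 * READ_ONCE() forces one untorn load per counter and prevents the
 * compiler from refetching or caching the value across iterations. */
static unsigned long sum_counters(unsigned long *c, int n)
{
	unsigned long sum = 0;
	int i;

	for (i = 0; i < n; i++)
		sum += READ_ONCE(c[i]);
	return sum;
}

/* Writer: WRITE_ONCE() guarantees a single, untorn store. */
static void bump_counter(unsigned long *c, int i)
{
	WRITE_ONCE(c[i], READ_ONCE(c[i]) + 1);
}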
...@@ -49,39 +49,6 @@ static void __call_rcu(struct rcu_head *head, ...@@ -49,39 +49,6 @@ static void __call_rcu(struct rcu_head *head,
#include "tiny_plugin.h" #include "tiny_plugin.h"
/*
* Enter idle, which is an extended quiescent state if we have fully
* entered that mode.
*/
void rcu_idle_enter(void)
{
}
EXPORT_SYMBOL_GPL(rcu_idle_enter);
/*
* Exit an interrupt handler towards idle.
*/
void rcu_irq_exit(void)
{
}
EXPORT_SYMBOL_GPL(rcu_irq_exit);
/*
* Exit idle, so that we are no longer in an extended quiescent state.
*/
void rcu_idle_exit(void)
{
}
EXPORT_SYMBOL_GPL(rcu_idle_exit);
/*
* Enter an interrupt handler, moving away from idle.
*/
void rcu_irq_enter(void)
{
}
EXPORT_SYMBOL_GPL(rcu_irq_enter);
#if defined(CONFIG_DEBUG_LOCK_ALLOC) || defined(CONFIG_RCU_TRACE) #if defined(CONFIG_DEBUG_LOCK_ALLOC) || defined(CONFIG_RCU_TRACE)
/* /*
...@@ -170,6 +137,11 @@ static void __rcu_process_callbacks(struct rcu_ctrlblk *rcp) ...@@ -170,6 +137,11 @@ static void __rcu_process_callbacks(struct rcu_ctrlblk *rcp)
/* Move the ready-to-invoke callbacks to a local list. */ /* Move the ready-to-invoke callbacks to a local list. */
local_irq_save(flags); local_irq_save(flags);
if (rcp->donetail == &rcp->rcucblist) {
/* No callbacks ready, so just leave. */
local_irq_restore(flags);
return;
}
RCU_TRACE(trace_rcu_batch_start(rcp->name, 0, rcp->qlen, -1)); RCU_TRACE(trace_rcu_batch_start(rcp->name, 0, rcp->qlen, -1));
list = rcp->rcucblist; list = rcp->rcucblist;
rcp->rcucblist = *rcp->donetail; rcp->rcucblist = *rcp->donetail;
......
...@@ -144,16 +144,17 @@ static void check_cpu_stall(struct rcu_ctrlblk *rcp) ...@@ -144,16 +144,17 @@ static void check_cpu_stall(struct rcu_ctrlblk *rcp)
return; return;
rcp->ticks_this_gp++; rcp->ticks_this_gp++;
j = jiffies; j = jiffies;
js = ACCESS_ONCE(rcp->jiffies_stall); js = READ_ONCE(rcp->jiffies_stall);
if (rcp->rcucblist && ULONG_CMP_GE(j, js)) { if (rcp->rcucblist && ULONG_CMP_GE(j, js)) {
pr_err("INFO: %s stall on CPU (%lu ticks this GP) idle=%llx (t=%lu jiffies q=%ld)\n", pr_err("INFO: %s stall on CPU (%lu ticks this GP) idle=%llx (t=%lu jiffies q=%ld)\n",
rcp->name, rcp->ticks_this_gp, DYNTICK_TASK_EXIT_IDLE, rcp->name, rcp->ticks_this_gp, DYNTICK_TASK_EXIT_IDLE,
jiffies - rcp->gp_start, rcp->qlen); jiffies - rcp->gp_start, rcp->qlen);
dump_stack(); dump_stack();
ACCESS_ONCE(rcp->jiffies_stall) = jiffies + WRITE_ONCE(rcp->jiffies_stall,
3 * rcu_jiffies_till_stall_check() + 3; jiffies + 3 * rcu_jiffies_till_stall_check() + 3);
} else if (ULONG_CMP_GE(j, js)) { } else if (ULONG_CMP_GE(j, js)) {
ACCESS_ONCE(rcp->jiffies_stall) = jiffies + rcu_jiffies_till_stall_check(); WRITE_ONCE(rcp->jiffies_stall,
jiffies + rcu_jiffies_till_stall_check());
} }
} }
...@@ -161,7 +162,8 @@ static void reset_cpu_stall_ticks(struct rcu_ctrlblk *rcp) ...@@ -161,7 +162,8 @@ static void reset_cpu_stall_ticks(struct rcu_ctrlblk *rcp)
{ {
rcp->ticks_this_gp = 0; rcp->ticks_this_gp = 0;
rcp->gp_start = jiffies; rcp->gp_start = jiffies;
ACCESS_ONCE(rcp->jiffies_stall) = jiffies + rcu_jiffies_till_stall_check(); WRITE_ONCE(rcp->jiffies_stall,
jiffies + rcu_jiffies_till_stall_check());
} }
static void check_cpu_stalls(void) static void check_cpu_stalls(void)
......
...@@ -35,11 +35,33 @@ ...@@ -35,11 +35,33 @@
* In practice, this did work well going from three levels to four. * In practice, this did work well going from three levels to four.
* Of course, your mileage may vary. * Of course, your mileage may vary.
*/ */
#define MAX_RCU_LVLS 4 #define MAX_RCU_LVLS 4
#define RCU_FANOUT_1 (CONFIG_RCU_FANOUT_LEAF)
#define RCU_FANOUT_2 (RCU_FANOUT_1 * CONFIG_RCU_FANOUT) #ifdef CONFIG_RCU_FANOUT
#define RCU_FANOUT_3 (RCU_FANOUT_2 * CONFIG_RCU_FANOUT) #define RCU_FANOUT CONFIG_RCU_FANOUT
#define RCU_FANOUT_4 (RCU_FANOUT_3 * CONFIG_RCU_FANOUT) #else /* #ifdef CONFIG_RCU_FANOUT */
# ifdef CONFIG_64BIT
# define RCU_FANOUT 64
# else
# define RCU_FANOUT 32
# endif
#endif /* #else #ifdef CONFIG_RCU_FANOUT */
#ifdef CONFIG_RCU_FANOUT_LEAF
#define RCU_FANOUT_LEAF CONFIG_RCU_FANOUT_LEAF
#else /* #ifdef CONFIG_RCU_FANOUT_LEAF */
# ifdef CONFIG_64BIT
# define RCU_FANOUT_LEAF 64
# else
# define RCU_FANOUT_LEAF 32
# endif
#endif /* #else #ifdef CONFIG_RCU_FANOUT_LEAF */
#define RCU_FANOUT_1 (RCU_FANOUT_LEAF)
#define RCU_FANOUT_2 (RCU_FANOUT_1 * RCU_FANOUT)
#define RCU_FANOUT_3 (RCU_FANOUT_2 * RCU_FANOUT)
#define RCU_FANOUT_4 (RCU_FANOUT_3 * RCU_FANOUT)
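A worked example of the geometry these macros define, assuming the 64-bit defaults above (RCU_FANOUT_LEAF = RCU_FANOUT = 64):

	RCU_FANOUT_1 = 64             -> one rcu_node level covers up to 64 CPUs
	RCU_FANOUT_2 = 64 * 64        -> two levels cover up to 4096 CPUs
	RCU_FANOUT_3 = 64 * 64 * 64   -> three levels cover up to 262144 CPUs
	RCU_FANOUT_4 = 64^4           -> four levels cover up to 16777216 CPUs

So, consistent with the #if chain that follows, NR_CPUS <= 64 selects a single-level combining tree (RCU_NUM_LVLS = 1), and each additional level multiplies the reachable CPU count by the fanout.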
#if NR_CPUS <= RCU_FANOUT_1 #if NR_CPUS <= RCU_FANOUT_1
# define RCU_NUM_LVLS 1 # define RCU_NUM_LVLS 1
...@@ -170,7 +192,6 @@ struct rcu_node { ...@@ -170,7 +192,6 @@ struct rcu_node {
/* if there is no such task. If there */ /* if there is no such task. If there */
/* is no current expedited grace period, */ /* is no current expedited grace period, */
/* then there cannot be any such task. */ /* then there cannot be any such task. */
#ifdef CONFIG_RCU_BOOST
struct list_head *boost_tasks; struct list_head *boost_tasks;
/* Pointer to first task that needs to be */ /* Pointer to first task that needs to be */
/* priority boosted, or NULL if no priority */ /* priority boosted, or NULL if no priority */
...@@ -208,7 +229,6 @@ struct rcu_node { ...@@ -208,7 +229,6 @@ struct rcu_node {
unsigned long n_balk_nos; unsigned long n_balk_nos;
/* Refused to boost: not sure why, though. */ /* Refused to boost: not sure why, though. */
/* This can happen due to race conditions. */ /* This can happen due to race conditions. */
#endif /* #ifdef CONFIG_RCU_BOOST */
#ifdef CONFIG_RCU_NOCB_CPU #ifdef CONFIG_RCU_NOCB_CPU
wait_queue_head_t nocb_gp_wq[2]; wait_queue_head_t nocb_gp_wq[2];
/* Place for rcu_nocb_kthread() to wait GP. */ /* Place for rcu_nocb_kthread() to wait GP. */
...@@ -519,14 +539,11 @@ extern struct list_head rcu_struct_flavors; ...@@ -519,14 +539,11 @@ extern struct list_head rcu_struct_flavors;
* RCU implementation internal declarations: * RCU implementation internal declarations:
*/ */
extern struct rcu_state rcu_sched_state; extern struct rcu_state rcu_sched_state;
DECLARE_PER_CPU(struct rcu_data, rcu_sched_data);
extern struct rcu_state rcu_bh_state; extern struct rcu_state rcu_bh_state;
DECLARE_PER_CPU(struct rcu_data, rcu_bh_data);
#ifdef CONFIG_PREEMPT_RCU #ifdef CONFIG_PREEMPT_RCU
extern struct rcu_state rcu_preempt_state; extern struct rcu_state rcu_preempt_state;
DECLARE_PER_CPU(struct rcu_data, rcu_preempt_data);
#endif /* #ifdef CONFIG_PREEMPT_RCU */ #endif /* #ifdef CONFIG_PREEMPT_RCU */
#ifdef CONFIG_RCU_BOOST #ifdef CONFIG_RCU_BOOST
......
...@@ -277,7 +277,7 @@ static void print_one_rcu_state(struct seq_file *m, struct rcu_state *rsp) ...@@ -277,7 +277,7 @@ static void print_one_rcu_state(struct seq_file *m, struct rcu_state *rsp)
seq_printf(m, "nfqs=%lu/nfqsng=%lu(%lu) fqlh=%lu oqlen=%ld/%ld\n", seq_printf(m, "nfqs=%lu/nfqsng=%lu(%lu) fqlh=%lu oqlen=%ld/%ld\n",
rsp->n_force_qs, rsp->n_force_qs_ngp, rsp->n_force_qs, rsp->n_force_qs_ngp,
rsp->n_force_qs - rsp->n_force_qs_ngp, rsp->n_force_qs - rsp->n_force_qs_ngp,
ACCESS_ONCE(rsp->n_force_qs_lh), rsp->qlen_lazy, rsp->qlen); READ_ONCE(rsp->n_force_qs_lh), rsp->qlen_lazy, rsp->qlen);
for (rnp = &rsp->node[0]; rnp - &rsp->node[0] < rcu_num_nodes; rnp++) { for (rnp = &rsp->node[0]; rnp - &rsp->node[0] < rcu_num_nodes; rnp++) {
if (rnp->level != level) { if (rnp->level != level) {
seq_puts(m, "\n"); seq_puts(m, "\n");
...@@ -323,8 +323,8 @@ static void show_one_rcugp(struct seq_file *m, struct rcu_state *rsp) ...@@ -323,8 +323,8 @@ static void show_one_rcugp(struct seq_file *m, struct rcu_state *rsp)
struct rcu_node *rnp = &rsp->node[0]; struct rcu_node *rnp = &rsp->node[0];
raw_spin_lock_irqsave(&rnp->lock, flags); raw_spin_lock_irqsave(&rnp->lock, flags);
completed = ACCESS_ONCE(rsp->completed); completed = READ_ONCE(rsp->completed);
gpnum = ACCESS_ONCE(rsp->gpnum); gpnum = READ_ONCE(rsp->gpnum);
if (completed == gpnum) if (completed == gpnum)
gpage = 0; gpage = 0;
else else
......
...@@ -150,14 +150,14 @@ void __rcu_read_unlock(void) ...@@ -150,14 +150,14 @@ void __rcu_read_unlock(void)
barrier(); /* critical section before exit code. */ barrier(); /* critical section before exit code. */
t->rcu_read_lock_nesting = INT_MIN; t->rcu_read_lock_nesting = INT_MIN;
barrier(); /* assign before ->rcu_read_unlock_special load */ barrier(); /* assign before ->rcu_read_unlock_special load */
if (unlikely(ACCESS_ONCE(t->rcu_read_unlock_special.s))) if (unlikely(READ_ONCE(t->rcu_read_unlock_special.s)))
rcu_read_unlock_special(t); rcu_read_unlock_special(t);
barrier(); /* ->rcu_read_unlock_special load before assign */ barrier(); /* ->rcu_read_unlock_special load before assign */
t->rcu_read_lock_nesting = 0; t->rcu_read_lock_nesting = 0;
} }
#ifdef CONFIG_PROVE_LOCKING #ifdef CONFIG_PROVE_LOCKING
{ {
int rrln = ACCESS_ONCE(t->rcu_read_lock_nesting); int rrln = READ_ONCE(t->rcu_read_lock_nesting);
WARN_ON_ONCE(rrln < 0 && rrln > INT_MIN / 2); WARN_ON_ONCE(rrln < 0 && rrln > INT_MIN / 2);
} }
...@@ -389,17 +389,17 @@ module_param(rcu_cpu_stall_timeout, int, 0644); ...@@ -389,17 +389,17 @@ module_param(rcu_cpu_stall_timeout, int, 0644);
int rcu_jiffies_till_stall_check(void) int rcu_jiffies_till_stall_check(void)
{ {
int till_stall_check = ACCESS_ONCE(rcu_cpu_stall_timeout); int till_stall_check = READ_ONCE(rcu_cpu_stall_timeout);
/* /*
* Limit check must be consistent with the Kconfig limits * Limit check must be consistent with the Kconfig limits
* for CONFIG_RCU_CPU_STALL_TIMEOUT. * for CONFIG_RCU_CPU_STALL_TIMEOUT.
*/ */
if (till_stall_check < 3) { if (till_stall_check < 3) {
ACCESS_ONCE(rcu_cpu_stall_timeout) = 3; WRITE_ONCE(rcu_cpu_stall_timeout, 3);
till_stall_check = 3; till_stall_check = 3;
} else if (till_stall_check > 300) { } else if (till_stall_check > 300) {
ACCESS_ONCE(rcu_cpu_stall_timeout) = 300; WRITE_ONCE(rcu_cpu_stall_timeout, 300);
till_stall_check = 300; till_stall_check = 300;
} }
return till_stall_check * HZ + RCU_STALL_DELAY_DELTA; return till_stall_check * HZ + RCU_STALL_DELAY_DELTA;
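Worked example: with HZ=1000 and the usual in-range default of rcu_cpu_stall_timeout=21, this returns 21 * 1000 + RCU_STALL_DELAY_DELTA jiffies. An out-of-range module-parameter value such as 500 is clamped to 300 seconds (and written back via WRITE_ONCE() so concurrent readers see a sane value), giving 300 * 1000 + RCU_STALL_DELAY_DELTA.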
...@@ -550,12 +550,12 @@ static void check_holdout_task(struct task_struct *t, ...@@ -550,12 +550,12 @@ static void check_holdout_task(struct task_struct *t,
{ {
int cpu; int cpu;
if (!ACCESS_ONCE(t->rcu_tasks_holdout) || if (!READ_ONCE(t->rcu_tasks_holdout) ||
t->rcu_tasks_nvcsw != ACCESS_ONCE(t->nvcsw) || t->rcu_tasks_nvcsw != READ_ONCE(t->nvcsw) ||
!ACCESS_ONCE(t->on_rq) || !READ_ONCE(t->on_rq) ||
(IS_ENABLED(CONFIG_NO_HZ_FULL) && (IS_ENABLED(CONFIG_NO_HZ_FULL) &&
!is_idle_task(t) && t->rcu_tasks_idle_cpu >= 0)) { !is_idle_task(t) && t->rcu_tasks_idle_cpu >= 0)) {
ACCESS_ONCE(t->rcu_tasks_holdout) = false; WRITE_ONCE(t->rcu_tasks_holdout, false);
list_del_init(&t->rcu_tasks_holdout_list); list_del_init(&t->rcu_tasks_holdout_list);
put_task_struct(t); put_task_struct(t);
return; return;
...@@ -639,11 +639,11 @@ static int __noreturn rcu_tasks_kthread(void *arg) ...@@ -639,11 +639,11 @@ static int __noreturn rcu_tasks_kthread(void *arg)
*/ */
rcu_read_lock(); rcu_read_lock();
for_each_process_thread(g, t) { for_each_process_thread(g, t) {
if (t != current && ACCESS_ONCE(t->on_rq) && if (t != current && READ_ONCE(t->on_rq) &&
!is_idle_task(t)) { !is_idle_task(t)) {
get_task_struct(t); get_task_struct(t);
t->rcu_tasks_nvcsw = ACCESS_ONCE(t->nvcsw); t->rcu_tasks_nvcsw = READ_ONCE(t->nvcsw);
ACCESS_ONCE(t->rcu_tasks_holdout) = true; WRITE_ONCE(t->rcu_tasks_holdout, true);
list_add(&t->rcu_tasks_holdout_list, list_add(&t->rcu_tasks_holdout_list,
&rcu_tasks_holdouts); &rcu_tasks_holdouts);
} }
...@@ -672,7 +672,7 @@ static int __noreturn rcu_tasks_kthread(void *arg) ...@@ -672,7 +672,7 @@ static int __noreturn rcu_tasks_kthread(void *arg)
struct task_struct *t1; struct task_struct *t1;
schedule_timeout_interruptible(HZ); schedule_timeout_interruptible(HZ);
rtst = ACCESS_ONCE(rcu_task_stall_timeout); rtst = READ_ONCE(rcu_task_stall_timeout);
needreport = rtst > 0 && needreport = rtst > 0 &&
time_after(jiffies, lastreport + rtst); time_after(jiffies, lastreport + rtst);
if (needreport) if (needreport)
...@@ -728,7 +728,7 @@ static void rcu_spawn_tasks_kthread(void) ...@@ -728,7 +728,7 @@ static void rcu_spawn_tasks_kthread(void)
static struct task_struct *rcu_tasks_kthread_ptr; static struct task_struct *rcu_tasks_kthread_ptr;
struct task_struct *t; struct task_struct *t;
if (ACCESS_ONCE(rcu_tasks_kthread_ptr)) { if (READ_ONCE(rcu_tasks_kthread_ptr)) {
smp_mb(); /* Ensure caller sees full kthread. */ smp_mb(); /* Ensure caller sees full kthread. */
return; return;
} }
...@@ -740,7 +740,7 @@ static void rcu_spawn_tasks_kthread(void) ...@@ -740,7 +740,7 @@ static void rcu_spawn_tasks_kthread(void)
t = kthread_run(rcu_tasks_kthread, NULL, "rcu_tasks_kthread"); t = kthread_run(rcu_tasks_kthread, NULL, "rcu_tasks_kthread");
BUG_ON(IS_ERR(t)); BUG_ON(IS_ERR(t));
smp_mb(); /* Ensure others see full kthread. */ smp_mb(); /* Ensure others see full kthread. */
ACCESS_ONCE(rcu_tasks_kthread_ptr) = t; WRITE_ONCE(rcu_tasks_kthread_ptr, t);
mutex_unlock(&rcu_tasks_kthread_mutex); mutex_unlock(&rcu_tasks_kthread_mutex);
} }
......
...@@ -409,7 +409,7 @@ static void (*torture_shutdown_hook)(void); ...@@ -409,7 +409,7 @@ static void (*torture_shutdown_hook)(void);
*/ */
void torture_shutdown_absorb(const char *title) void torture_shutdown_absorb(const char *title)
{ {
while (ACCESS_ONCE(fullstop) == FULLSTOP_SHUTDOWN) { while (READ_ONCE(fullstop) == FULLSTOP_SHUTDOWN) {
pr_notice("torture thread %s parking due to system shutdown\n", pr_notice("torture thread %s parking due to system shutdown\n",
title); title);
schedule_timeout_uninterruptible(MAX_SCHEDULE_TIMEOUT); schedule_timeout_uninterruptible(MAX_SCHEDULE_TIMEOUT);
...@@ -480,9 +480,9 @@ static int torture_shutdown_notify(struct notifier_block *unused1, ...@@ -480,9 +480,9 @@ static int torture_shutdown_notify(struct notifier_block *unused1,
unsigned long unused2, void *unused3) unsigned long unused2, void *unused3)
{ {
mutex_lock(&fullstop_mutex); mutex_lock(&fullstop_mutex);
if (ACCESS_ONCE(fullstop) == FULLSTOP_DONTSTOP) { if (READ_ONCE(fullstop) == FULLSTOP_DONTSTOP) {
VERBOSE_TOROUT_STRING("Unscheduled system shutdown detected"); VERBOSE_TOROUT_STRING("Unscheduled system shutdown detected");
ACCESS_ONCE(fullstop) = FULLSTOP_SHUTDOWN; WRITE_ONCE(fullstop, FULLSTOP_SHUTDOWN);
} else { } else {
pr_warn("Concurrent rmmod and shutdown illegal!\n"); pr_warn("Concurrent rmmod and shutdown illegal!\n");
} }
...@@ -523,13 +523,13 @@ static int stutter; ...@@ -523,13 +523,13 @@ static int stutter;
*/ */
void stutter_wait(const char *title) void stutter_wait(const char *title)
{ {
while (ACCESS_ONCE(stutter_pause_test) || while (READ_ONCE(stutter_pause_test) ||
(torture_runnable && !ACCESS_ONCE(*torture_runnable))) { (torture_runnable && !READ_ONCE(*torture_runnable))) {
if (stutter_pause_test) if (stutter_pause_test)
if (ACCESS_ONCE(stutter_pause_test) == 1) if (READ_ONCE(stutter_pause_test) == 1)
schedule_timeout_interruptible(1); schedule_timeout_interruptible(1);
else else
while (ACCESS_ONCE(stutter_pause_test)) while (READ_ONCE(stutter_pause_test))
cond_resched(); cond_resched();
else else
schedule_timeout_interruptible(round_jiffies_relative(HZ)); schedule_timeout_interruptible(round_jiffies_relative(HZ));
...@@ -549,14 +549,14 @@ static int torture_stutter(void *arg) ...@@ -549,14 +549,14 @@ static int torture_stutter(void *arg)
if (!torture_must_stop()) { if (!torture_must_stop()) {
if (stutter > 1) { if (stutter > 1) {
schedule_timeout_interruptible(stutter - 1); schedule_timeout_interruptible(stutter - 1);
ACCESS_ONCE(stutter_pause_test) = 2; WRITE_ONCE(stutter_pause_test, 2);
} }
schedule_timeout_interruptible(1); schedule_timeout_interruptible(1);
ACCESS_ONCE(stutter_pause_test) = 1; WRITE_ONCE(stutter_pause_test, 1);
} }
if (!torture_must_stop()) if (!torture_must_stop())
schedule_timeout_interruptible(stutter); schedule_timeout_interruptible(stutter);
ACCESS_ONCE(stutter_pause_test) = 0; WRITE_ONCE(stutter_pause_test, 0);
torture_shutdown_absorb("torture_stutter"); torture_shutdown_absorb("torture_stutter");
} while (!torture_must_stop()); } while (!torture_must_stop());
torture_kthread_stopping("torture_stutter"); torture_kthread_stopping("torture_stutter");
...@@ -642,13 +642,13 @@ EXPORT_SYMBOL_GPL(torture_init_end); ...@@ -642,13 +642,13 @@ EXPORT_SYMBOL_GPL(torture_init_end);
bool torture_cleanup_begin(void) bool torture_cleanup_begin(void)
{ {
mutex_lock(&fullstop_mutex); mutex_lock(&fullstop_mutex);
if (ACCESS_ONCE(fullstop) == FULLSTOP_SHUTDOWN) { if (READ_ONCE(fullstop) == FULLSTOP_SHUTDOWN) {
pr_warn("Concurrent rmmod and shutdown illegal!\n"); pr_warn("Concurrent rmmod and shutdown illegal!\n");
mutex_unlock(&fullstop_mutex); mutex_unlock(&fullstop_mutex);
schedule_timeout_uninterruptible(10); schedule_timeout_uninterruptible(10);
return true; return true;
} }
ACCESS_ONCE(fullstop) = FULLSTOP_RMMOD; WRITE_ONCE(fullstop, FULLSTOP_RMMOD);
mutex_unlock(&fullstop_mutex); mutex_unlock(&fullstop_mutex);
torture_shutdown_cleanup(); torture_shutdown_cleanup();
torture_shuffle_cleanup(); torture_shuffle_cleanup();
...@@ -681,7 +681,7 @@ EXPORT_SYMBOL_GPL(torture_must_stop); ...@@ -681,7 +681,7 @@ EXPORT_SYMBOL_GPL(torture_must_stop);
*/ */
bool torture_must_stop_irq(void) bool torture_must_stop_irq(void)
{ {
return ACCESS_ONCE(fullstop) != FULLSTOP_DONTSTOP; return READ_ONCE(fullstop) != FULLSTOP_DONTSTOP;
} }
EXPORT_SYMBOL_GPL(torture_must_stop_irq); EXPORT_SYMBOL_GPL(torture_must_stop_irq);
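For reference, the fullstop state machine that these READ_ONCE()/WRITE_ONCE() accesses guard: fullstop begins as FULLSTOP_DONTSTOP, the shutdown notifier advances it to FULLSTOP_SHUTDOWN (parking torture threads in torture_shutdown_absorb()), and torture_cleanup_begin() advances it to FULLSTOP_RMMOD. The DONTSTOP/SHUTDOWN checks in those two paths are what detect and reject concurrent rmmod-versus-shutdown races.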
......
...@@ -1233,6 +1233,7 @@ config RCU_TORTURE_TEST ...@@ -1233,6 +1233,7 @@ config RCU_TORTURE_TEST
depends on DEBUG_KERNEL depends on DEBUG_KERNEL
select TORTURE_TEST select TORTURE_TEST
select SRCU select SRCU
select TASKS_RCU
default n default n
help help
This option provides a kernel module that runs torture tests This option provides a kernel module that runs torture tests
...@@ -1261,12 +1262,38 @@ config RCU_TORTURE_TEST_RUNNABLE ...@@ -1261,12 +1262,38 @@ config RCU_TORTURE_TEST_RUNNABLE
Say N here if you want the RCU torture tests to start only Say N here if you want the RCU torture tests to start only
after being manually enabled via /proc. after being manually enabled via /proc.
config RCU_TORTURE_TEST_SLOW_PREINIT
bool "Slow down RCU grace-period pre-initialization to expose races"
depends on RCU_TORTURE_TEST
help
This option delays grace-period pre-initialization (the
propagation of CPU-hotplug changes up the rcu_node combining
tree) for a few jiffies between initializing each pair of
consecutive rcu_node structures. This helps to expose races
involving grace-period pre-initialization, in other words, it
makes your kernel less stable. It can also greatly increase
grace-period latency, especially on systems with large numbers
of CPUs. This is useful when torture-testing RCU, but in
almost no other circumstance.
Say Y here if you want your system to crash and hang more often.
Say N if you want a sane system.
config RCU_TORTURE_TEST_SLOW_PREINIT_DELAY
int "How much to slow down RCU grace-period pre-initialization"
range 0 5
default 3
depends on RCU_TORTURE_TEST_SLOW_PREINIT
help
This option specifies the number of jiffies to wait between
each rcu_node structure pre-initialization step.
config RCU_TORTURE_TEST_SLOW_INIT config RCU_TORTURE_TEST_SLOW_INIT
bool "Slow down RCU grace-period initialization to expose races" bool "Slow down RCU grace-period initialization to expose races"
depends on RCU_TORTURE_TEST depends on RCU_TORTURE_TEST
help help
This option makes grace-period initialization block for a This option delays grace-period initialization for a few
few jiffies between initializing each pair of consecutive jiffies between initializing each pair of consecutive
rcu_node structures. This helps to expose races involving rcu_node structures. This helps to expose races involving
grace-period initialization, in other words, it makes your grace-period initialization, in other words, it makes your
kernel less stable. It can also greatly increase grace-period kernel less stable. It can also greatly increase grace-period
...@@ -1286,6 +1313,30 @@ config RCU_TORTURE_TEST_SLOW_INIT_DELAY ...@@ -1286,6 +1313,30 @@ config RCU_TORTURE_TEST_SLOW_INIT_DELAY
This option specifies the number of jiffies to wait between This option specifies the number of jiffies to wait between
each rcu_node structure initialization. each rcu_node structure initialization.
config RCU_TORTURE_TEST_SLOW_CLEANUP
bool "Slow down RCU grace-period cleanup to expose races"
depends on RCU_TORTURE_TEST
help
This option delays grace-period cleanup for a few jiffies
between cleaning up each pair of consecutive rcu_node
structures. This helps to expose races involving grace-period
cleanup, in other words, it makes your kernel less stable.
It can also greatly increase grace-period latency, especially
on systems with large numbers of CPUs. This is useful when
torture-testing RCU, but in almost no other circumstance.
Say Y here if you want your system to crash and hang more often.
Say N if you want a sane system.
config RCU_TORTURE_TEST_SLOW_CLEANUP_DELAY
int "How much to slow down RCU grace-period cleanup"
range 0 5
default 3
depends on RCU_TORTURE_TEST_SLOW_CLEANUP
help
This option specifies the number of jiffies to wait between
each rcu_node structure cleanup operation.
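The three SLOW_* option pairs (PREINIT, INIT, CLEANUP) all follow the same pattern: a bool gate plus a 0-5 jiffy delay inserted between consecutive rcu_node structures during the corresponding grace-period phase. A minimal sketch of such a delay gate, using hypothetical helper and macro names rather than the kernel's actual implementation:

#ifdef CONFIG_RCU_TORTURE_TEST_SLOW_CLEANUP
#define RCU_GP_CLEANUP_DELAY CONFIG_RCU_TORTURE_TEST_SLOW_CLEANUP_DELAY
#else
#define RCU_GP_CLEANUP_DELAY 0
#endif

/* Hypothetical helper: called between per-rcu_node cleanup steps. */
static void rcu_gp_cleanup_slow(void)
{
	if (RCU_GP_CLEANUP_DELAY > 0)
		schedule_timeout_uninterruptible(RCU_GP_CLEANUP_DELAY);
}

Stretching each phase this way widens the race windows that grace-period initialization and cleanup must tolerate, which is exactly what the help text means by making the kernel less stable on purpose.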
config RCU_CPU_STALL_TIMEOUT config RCU_CPU_STALL_TIMEOUT
int "RCU CPU stall timeout in seconds" int "RCU CPU stall timeout in seconds"
depends on RCU_STALL_COMMON depends on RCU_STALL_COMMON
...@@ -1322,6 +1373,17 @@ config RCU_TRACE ...@@ -1322,6 +1373,17 @@ config RCU_TRACE
Say Y here if you want to enable RCU tracing Say Y here if you want to enable RCU tracing
Say N if you are unsure. Say N if you are unsure.
config RCU_EQS_DEBUG
bool "Use this when adding any sort of NO_HZ support to your arch"
depends on DEBUG_KERNEL
help
This option provides consistency checks in RCU's handling of
NO_HZ. These checks have proven quite helpful in detecting
bugs in arch-specific NO_HZ code.
Say N here if you need ultimate kernel/user switch latencies.
Say Y if you are unsure.
endmenu # "RCU Debugging" endmenu # "RCU Debugging"
config DEBUG_BLOCK_EXT_DEVT config DEBUG_BLOCK_EXT_DEVT
......
...@@ -66,7 +66,7 @@ make $buildloc $TORTURE_DEFCONFIG > $builddir/Make.defconfig.out 2>&1 ...@@ -66,7 +66,7 @@ make $buildloc $TORTURE_DEFCONFIG > $builddir/Make.defconfig.out 2>&1
mv $builddir/.config $builddir/.config.sav mv $builddir/.config $builddir/.config.sav
sh $T/upd.sh < $builddir/.config.sav > $builddir/.config sh $T/upd.sh < $builddir/.config.sav > $builddir/.config
cp $builddir/.config $builddir/.config.new cp $builddir/.config $builddir/.config.new
yes '' | make $buildloc oldconfig > $builddir/Make.modconfig.out 2>&1 yes '' | make $buildloc oldconfig > $builddir/Make.oldconfig.out 2> $builddir/Make.oldconfig.err
# verify new config matches specification. # verify new config matches specification.
configcheck.sh $builddir/.config $c configcheck.sh $builddir/.config $c
......
...@@ -43,6 +43,10 @@ do ...@@ -43,6 +43,10 @@ do
if test -f "$i/console.log" if test -f "$i/console.log"
then then
configcheck.sh $i/.config $i/ConfigFragment configcheck.sh $i/.config $i/ConfigFragment
if test -r $i/Make.oldconfig.err
then
cat $i/Make.oldconfig.err
fi
parse-build.sh $i/Make.out $configfile parse-build.sh $i/Make.out $configfile
parse-torture.sh $i/console.log $configfile parse-torture.sh $i/console.log $configfile
parse-console.sh $i/console.log $configfile parse-console.sh $i/console.log $configfile
......
...@@ -55,7 +55,7 @@ usage () { ...@@ -55,7 +55,7 @@ usage () {
echo " --bootargs kernel-boot-arguments" echo " --bootargs kernel-boot-arguments"
echo " --bootimage relative-path-to-kernel-boot-image" echo " --bootimage relative-path-to-kernel-boot-image"
echo " --buildonly" echo " --buildonly"
echo " --configs \"config-file list\"" echo " --configs \"config-file list w/ repeat factor (3*TINY01)\""
echo " --cpus N" echo " --cpus N"
echo " --datestamp string" echo " --datestamp string"
echo " --defconfig string" echo " --defconfig string"
...@@ -178,13 +178,26 @@ fi ...@@ -178,13 +178,26 @@ fi
touch $T/cfgcpu touch $T/cfgcpu
for CF in $configs for CF in $configs
do do
if test -f "$CONFIGFRAG/$CF" case $CF in
[0-9]\**|[0-9][0-9]\**|[0-9][0-9][0-9]\**)
config_reps=`echo $CF | sed -e 's/\*.*$//'`
CF1=`echo $CF | sed -e 's/^[^*]*\*//'`
;;
*)
config_reps=1
CF1=$CF
;;
esac
if test -f "$CONFIGFRAG/$CF1"
then then
cpu_count=`configNR_CPUS.sh $CONFIGFRAG/$CF` cpu_count=`configNR_CPUS.sh $CONFIGFRAG/$CF1`
cpu_count=`configfrag_boot_cpus "$TORTURE_BOOTARGS" "$CONFIGFRAG/$CF" "$cpu_count"` cpu_count=`configfrag_boot_cpus "$TORTURE_BOOTARGS" "$CONFIGFRAG/$CF1" "$cpu_count"`
echo $CF $cpu_count >> $T/cfgcpu for ((cur_rep=0;cur_rep<$config_reps;cur_rep++))
do
echo $CF1 $cpu_count >> $T/cfgcpu
done
else else
echo "The --configs file $CF does not exist, terminating." echo "The --configs file $CF1 does not exist, terminating."
exit 1 exit 1
fi fi
done done
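With this change, entries in the --configs list may carry a numeric repeat prefix: an invocation along the lines of kvm.sh --configs "3*TINY01 TREE04" queues three TINY01 runs plus one TREE04 run, since the case statement above splits the NN* prefix into config_reps and the loop appends CF1 to $T/cfgcpu that many times.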
......
CONFIG_RCU_TORTURE_TEST=y CONFIG_RCU_TORTURE_TEST=y
CONFIG_PRINTK_TIME=y CONFIG_PRINTK_TIME=y
CONFIG_RCU_TORTURE_TEST_SLOW_CLEANUP=y
CONFIG_RCU_TORTURE_TEST_SLOW_INIT=y CONFIG_RCU_TORTURE_TEST_SLOW_INIT=y
CONFIG_RCU_TORTURE_TEST_SLOW_PREINIT=y
...@@ -5,3 +5,4 @@ CONFIG_HOTPLUG_CPU=y ...@@ -5,3 +5,4 @@ CONFIG_HOTPLUG_CPU=y
CONFIG_PREEMPT_NONE=y CONFIG_PREEMPT_NONE=y
CONFIG_PREEMPT_VOLUNTARY=n CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=n CONFIG_PREEMPT=n
CONFIG_RCU_EXPERT=y
...@@ -5,3 +5,4 @@ CONFIG_HOTPLUG_CPU=y ...@@ -5,3 +5,4 @@ CONFIG_HOTPLUG_CPU=y
CONFIG_PREEMPT_NONE=n CONFIG_PREEMPT_NONE=n
CONFIG_PREEMPT_VOLUNTARY=n CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=y CONFIG_PREEMPT=y
#CHECK#CONFIG_RCU_EXPERT=n
rcutorture.torture_type=srcu rcutorture.torture_type=srcud
...@@ -5,5 +5,6 @@ CONFIG_PREEMPT_NONE=n ...@@ -5,5 +5,6 @@ CONFIG_PREEMPT_NONE=n
CONFIG_PREEMPT_VOLUNTARY=n CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=y CONFIG_PREEMPT=y
CONFIG_DEBUG_LOCK_ALLOC=y CONFIG_DEBUG_LOCK_ALLOC=y
CONFIG_PROVE_RCU=y CONFIG_PROVE_LOCKING=n
CONFIG_TASKS_RCU=y #CHECK#CONFIG_PROVE_RCU=n
CONFIG_RCU_EXPERT=y
...@@ -2,4 +2,3 @@ CONFIG_SMP=n ...@@ -2,4 +2,3 @@ CONFIG_SMP=n
CONFIG_PREEMPT_NONE=y CONFIG_PREEMPT_NONE=y
CONFIG_PREEMPT_VOLUNTARY=n CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=n CONFIG_PREEMPT=n
CONFIG_TASKS_RCU=y
...@@ -6,8 +6,8 @@ CONFIG_HIBERNATION=n ...@@ -6,8 +6,8 @@ CONFIG_HIBERNATION=n
CONFIG_PREEMPT_NONE=n CONFIG_PREEMPT_NONE=n
CONFIG_PREEMPT_VOLUNTARY=n CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=y CONFIG_PREEMPT=y
CONFIG_TASKS_RCU=y
CONFIG_HZ_PERIODIC=n CONFIG_HZ_PERIODIC=n
CONFIG_NO_HZ_IDLE=n CONFIG_NO_HZ_IDLE=n
CONFIG_NO_HZ_FULL=y CONFIG_NO_HZ_FULL=y
CONFIG_NO_HZ_FULL_ALL=y CONFIG_NO_HZ_FULL_ALL=y
#CHECK#CONFIG_RCU_EXPERT=n
...@@ -8,7 +8,7 @@ CONFIG_NO_HZ_IDLE=n ...@@ -8,7 +8,7 @@ CONFIG_NO_HZ_IDLE=n
CONFIG_NO_HZ_FULL=n CONFIG_NO_HZ_FULL=n
CONFIG_RCU_TRACE=y CONFIG_RCU_TRACE=y
CONFIG_PROVE_LOCKING=y CONFIG_PROVE_LOCKING=y
CONFIG_PROVE_RCU=y #CHECK#CONFIG_PROVE_RCU=y
CONFIG_DEBUG_LOCK_ALLOC=y CONFIG_DEBUG_LOCK_ALLOC=y
CONFIG_DEBUG_OBJECTS_RCU_HEAD=n CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
CONFIG_PREEMPT_COUNT=y CONFIG_PREEMPT_COUNT=y
rcupdate.rcu_self_test=1 rcupdate.rcu_self_test=1
rcupdate.rcu_self_test_bh=1 rcupdate.rcu_self_test_bh=1
rcutorture.torture_type=rcu_bh
...@@ -16,3 +16,4 @@ CONFIG_DEBUG_LOCK_ALLOC=n ...@@ -16,3 +16,4 @@ CONFIG_DEBUG_LOCK_ALLOC=n
CONFIG_RCU_CPU_STALL_INFO=n CONFIG_RCU_CPU_STALL_INFO=n
CONFIG_RCU_BOOST=n CONFIG_RCU_BOOST=n
CONFIG_DEBUG_OBJECTS_RCU_HEAD=n CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
CONFIG_RCU_EXPERT=y
...@@ -14,10 +14,10 @@ CONFIG_SUSPEND=n ...@@ -14,10 +14,10 @@ CONFIG_SUSPEND=n
CONFIG_HIBERNATION=n CONFIG_HIBERNATION=n
CONFIG_RCU_FANOUT=3 CONFIG_RCU_FANOUT=3
CONFIG_RCU_FANOUT_LEAF=3 CONFIG_RCU_FANOUT_LEAF=3
CONFIG_RCU_FANOUT_EXACT=n
CONFIG_RCU_NOCB_CPU=n CONFIG_RCU_NOCB_CPU=n
CONFIG_DEBUG_LOCK_ALLOC=y CONFIG_DEBUG_LOCK_ALLOC=y
CONFIG_PROVE_LOCKING=n CONFIG_PROVE_LOCKING=n
CONFIG_RCU_CPU_STALL_INFO=n CONFIG_RCU_CPU_STALL_INFO=n
CONFIG_RCU_BOOST=n CONFIG_RCU_BOOST=n
CONFIG_DEBUG_OBJECTS_RCU_HEAD=n CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
CONFIG_RCU_EXPERT=y
...@@ -14,7 +14,6 @@ CONFIG_SUSPEND=n ...@@ -14,7 +14,6 @@ CONFIG_SUSPEND=n
CONFIG_HIBERNATION=n CONFIG_HIBERNATION=n
CONFIG_RCU_FANOUT=3 CONFIG_RCU_FANOUT=3
CONFIG_RCU_FANOUT_LEAF=3 CONFIG_RCU_FANOUT_LEAF=3
CONFIG_RCU_FANOUT_EXACT=n
CONFIG_RCU_NOCB_CPU=n CONFIG_RCU_NOCB_CPU=n
CONFIG_DEBUG_LOCK_ALLOC=y CONFIG_DEBUG_LOCK_ALLOC=y
CONFIG_PROVE_LOCKING=n CONFIG_PROVE_LOCKING=n
......
CONFIG_SMP=y CONFIG_SMP=y
CONFIG_NR_CPUS=8 CONFIG_NR_CPUS=16
CONFIG_PREEMPT_NONE=n CONFIG_PREEMPT_NONE=n
CONFIG_PREEMPT_VOLUNTARY=n CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=y CONFIG_PREEMPT=y
...@@ -9,12 +9,12 @@ CONFIG_NO_HZ_IDLE=n ...@@ -9,12 +9,12 @@ CONFIG_NO_HZ_IDLE=n
CONFIG_NO_HZ_FULL=n CONFIG_NO_HZ_FULL=n
CONFIG_RCU_TRACE=y CONFIG_RCU_TRACE=y
CONFIG_HOTPLUG_CPU=y CONFIG_HOTPLUG_CPU=y
CONFIG_RCU_FANOUT=4 CONFIG_RCU_FANOUT=2
CONFIG_RCU_FANOUT_LEAF=4 CONFIG_RCU_FANOUT_LEAF=2
CONFIG_RCU_FANOUT_EXACT=n
CONFIG_RCU_NOCB_CPU=n CONFIG_RCU_NOCB_CPU=n
CONFIG_DEBUG_LOCK_ALLOC=n CONFIG_DEBUG_LOCK_ALLOC=n
CONFIG_RCU_CPU_STALL_INFO=n CONFIG_RCU_CPU_STALL_INFO=n
CONFIG_RCU_BOOST=y CONFIG_RCU_BOOST=y
CONFIG_RCU_KTHREAD_PRIO=2 CONFIG_RCU_KTHREAD_PRIO=2
CONFIG_DEBUG_OBJECTS_RCU_HEAD=n CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
CONFIG_RCU_EXPERT=y
rcutorture.onoff_interval=1 rcutorture.onoff_holdoff=30
...@@ -13,10 +13,10 @@ CONFIG_RCU_TRACE=y ...@@ -13,10 +13,10 @@ CONFIG_RCU_TRACE=y
CONFIG_HOTPLUG_CPU=n CONFIG_HOTPLUG_CPU=n
CONFIG_SUSPEND=n CONFIG_SUSPEND=n
CONFIG_HIBERNATION=n CONFIG_HIBERNATION=n
CONFIG_RCU_FANOUT=2 CONFIG_RCU_FANOUT=4
CONFIG_RCU_FANOUT_LEAF=2 CONFIG_RCU_FANOUT_LEAF=4
CONFIG_RCU_FANOUT_EXACT=n
CONFIG_RCU_NOCB_CPU=n CONFIG_RCU_NOCB_CPU=n
CONFIG_DEBUG_LOCK_ALLOC=n CONFIG_DEBUG_LOCK_ALLOC=n
CONFIG_RCU_CPU_STALL_INFO=y CONFIG_RCU_CPU_STALL_INFO=n
CONFIG_DEBUG_OBJECTS_RCU_HEAD=n CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
CONFIG_RCU_EXPERT=y
...@@ -12,11 +12,11 @@ CONFIG_RCU_TRACE=n ...@@ -12,11 +12,11 @@ CONFIG_RCU_TRACE=n
CONFIG_HOTPLUG_CPU=y CONFIG_HOTPLUG_CPU=y
CONFIG_RCU_FANOUT=6 CONFIG_RCU_FANOUT=6
CONFIG_RCU_FANOUT_LEAF=6 CONFIG_RCU_FANOUT_LEAF=6
CONFIG_RCU_FANOUT_EXACT=n
CONFIG_RCU_NOCB_CPU=y CONFIG_RCU_NOCB_CPU=y
CONFIG_RCU_NOCB_CPU_NONE=y CONFIG_RCU_NOCB_CPU_NONE=y
CONFIG_DEBUG_LOCK_ALLOC=y CONFIG_DEBUG_LOCK_ALLOC=y
CONFIG_PROVE_LOCKING=y CONFIG_PROVE_LOCKING=y
CONFIG_PROVE_RCU=y #CHECK#CONFIG_PROVE_RCU=y
CONFIG_RCU_CPU_STALL_INFO=n CONFIG_RCU_CPU_STALL_INFO=n
CONFIG_DEBUG_OBJECTS_RCU_HEAD=n CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
CONFIG_RCU_EXPERT=y
...@@ -14,10 +14,10 @@ CONFIG_SUSPEND=n ...@@ -14,10 +14,10 @@ CONFIG_SUSPEND=n
CONFIG_HIBERNATION=n CONFIG_HIBERNATION=n
CONFIG_RCU_FANOUT=6 CONFIG_RCU_FANOUT=6
CONFIG_RCU_FANOUT_LEAF=6 CONFIG_RCU_FANOUT_LEAF=6
CONFIG_RCU_FANOUT_EXACT=y
CONFIG_RCU_NOCB_CPU=n CONFIG_RCU_NOCB_CPU=n
CONFIG_DEBUG_LOCK_ALLOC=y CONFIG_DEBUG_LOCK_ALLOC=y
CONFIG_PROVE_LOCKING=y CONFIG_PROVE_LOCKING=y
CONFIG_PROVE_RCU=y #CHECK#CONFIG_PROVE_RCU=y
CONFIG_RCU_CPU_STALL_INFO=n CONFIG_RCU_CPU_STALL_INFO=n
CONFIG_DEBUG_OBJECTS_RCU_HEAD=y CONFIG_DEBUG_OBJECTS_RCU_HEAD=y
CONFIG_RCU_EXPERT=y
rcupdate.rcu_self_test=1 rcupdate.rcu_self_test=1
rcupdate.rcu_self_test_bh=1 rcupdate.rcu_self_test_bh=1
rcupdate.rcu_self_test_sched=1 rcupdate.rcu_self_test_sched=1
rcutree.rcu_fanout_exact=1
...@@ -15,8 +15,8 @@ CONFIG_RCU_TRACE=y ...@@ -15,8 +15,8 @@ CONFIG_RCU_TRACE=y
CONFIG_HOTPLUG_CPU=y CONFIG_HOTPLUG_CPU=y
CONFIG_RCU_FANOUT=2 CONFIG_RCU_FANOUT=2
CONFIG_RCU_FANOUT_LEAF=2 CONFIG_RCU_FANOUT_LEAF=2
CONFIG_RCU_FANOUT_EXACT=n
CONFIG_RCU_NOCB_CPU=n CONFIG_RCU_NOCB_CPU=n
CONFIG_DEBUG_LOCK_ALLOC=n CONFIG_DEBUG_LOCK_ALLOC=n
CONFIG_RCU_CPU_STALL_INFO=y CONFIG_RCU_CPU_STALL_INFO=n
CONFIG_DEBUG_OBJECTS_RCU_HEAD=n CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
CONFIG_RCU_EXPERT=y
CONFIG_SMP=y CONFIG_SMP=y
CONFIG_NR_CPUS=16 CONFIG_NR_CPUS=8
CONFIG_PREEMPT_NONE=n CONFIG_PREEMPT_NONE=n
CONFIG_PREEMPT_VOLUNTARY=n CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=y CONFIG_PREEMPT=y
...@@ -13,13 +13,13 @@ CONFIG_HOTPLUG_CPU=n ...@@ -13,13 +13,13 @@ CONFIG_HOTPLUG_CPU=n
CONFIG_SUSPEND=n CONFIG_SUSPEND=n
CONFIG_HIBERNATION=n CONFIG_HIBERNATION=n
CONFIG_RCU_FANOUT=3 CONFIG_RCU_FANOUT=3
CONFIG_RCU_FANOUT_EXACT=y
CONFIG_RCU_FANOUT_LEAF=2 CONFIG_RCU_FANOUT_LEAF=2
CONFIG_RCU_NOCB_CPU=y CONFIG_RCU_NOCB_CPU=y
CONFIG_RCU_NOCB_CPU_ALL=y CONFIG_RCU_NOCB_CPU_ALL=y
CONFIG_DEBUG_LOCK_ALLOC=n CONFIG_DEBUG_LOCK_ALLOC=n
CONFIG_PROVE_LOCKING=y CONFIG_PROVE_LOCKING=y
CONFIG_PROVE_RCU=y #CHECK#CONFIG_PROVE_RCU=y
CONFIG_RCU_CPU_STALL_INFO=n CONFIG_RCU_CPU_STALL_INFO=n
CONFIG_RCU_BOOST=n CONFIG_RCU_BOOST=n
CONFIG_DEBUG_OBJECTS_RCU_HEAD=n CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
CONFIG_RCU_EXPERT=y
...@@ -13,7 +13,6 @@ CONFIG_HOTPLUG_CPU=n ...@@ -13,7 +13,6 @@ CONFIG_HOTPLUG_CPU=n
CONFIG_SUSPEND=n CONFIG_SUSPEND=n
CONFIG_HIBERNATION=n CONFIG_HIBERNATION=n
CONFIG_RCU_FANOUT=3 CONFIG_RCU_FANOUT=3
CONFIG_RCU_FANOUT_EXACT=y
CONFIG_RCU_FANOUT_LEAF=2 CONFIG_RCU_FANOUT_LEAF=2
CONFIG_RCU_NOCB_CPU=y CONFIG_RCU_NOCB_CPU=y
CONFIG_RCU_NOCB_CPU_ALL=y CONFIG_RCU_NOCB_CPU_ALL=y
......
rcutorture.torture_type=sched rcutorture.torture_type=sched
rcupdate.rcu_self_test=1 rcupdate.rcu_self_test=1
rcupdate.rcu_self_test_sched=1 rcupdate.rcu_self_test_sched=1
rcutree.rcu_fanout_exact=1
...@@ -16,3 +16,4 @@ CONFIG_DEBUG_LOCK_ALLOC=n ...@@ -16,3 +16,4 @@ CONFIG_DEBUG_LOCK_ALLOC=n
CONFIG_RCU_CPU_STALL_INFO=n CONFIG_RCU_CPU_STALL_INFO=n
CONFIG_RCU_BOOST=n CONFIG_RCU_BOOST=n
CONFIG_DEBUG_OBJECTS_RCU_HEAD=n CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
#CHECK#CONFIG_RCU_EXPERT=n
...@@ -12,13 +12,12 @@ CONFIG_NO_HZ_IDLE -- Do those not otherwise specified. (Groups of two.) ...@@ -12,13 +12,12 @@ CONFIG_NO_HZ_IDLE -- Do those not otherwise specified. (Groups of two.)
CONFIG_NO_HZ_FULL -- Do two, one with CONFIG_NO_HZ_FULL_SYSIDLE. CONFIG_NO_HZ_FULL -- Do two, one with CONFIG_NO_HZ_FULL_SYSIDLE.
CONFIG_NO_HZ_FULL_SYSIDLE -- Do one. CONFIG_NO_HZ_FULL_SYSIDLE -- Do one.
CONFIG_PREEMPT -- Do half. (First three and #8.) CONFIG_PREEMPT -- Do half. (First three and #8.)
CONFIG_PROVE_LOCKING -- Do all but two, covering CONFIG_PROVE_RCU and not. CONFIG_PROVE_LOCKING -- Do several, covering CONFIG_DEBUG_LOCK_ALLOC=y and not.
CONFIG_PROVE_RCU -- Do all but one under CONFIG_PROVE_LOCKING. CONFIG_PROVE_RCU -- Hardwired to CONFIG_PROVE_LOCKING.
CONFIG_RCU_BOOST -- one of PREEMPT_RCU. CONFIG_RCU_BOOST -- one of PREEMPT_RCU.
CONFIG_RCU_KTHREAD_PRIO -- set to 2 for _BOOST testing. CONFIG_RCU_KTHREAD_PRIO -- set to 2 for _BOOST testing.
CONFIG_RCU_CPU_STALL_INFO -- Do one. CONFIG_RCU_CPU_STALL_INFO -- Now default, avoid at least twice.
CONFIG_RCU_FANOUT -- Cover hierarchy as currently, but overlap with others. CONFIG_RCU_FANOUT -- Cover hierarchy, but overlap with others.
CONFIG_RCU_FANOUT_EXACT -- Do one.
CONFIG_RCU_FANOUT_LEAF -- Do one non-default. CONFIG_RCU_FANOUT_LEAF -- Do one non-default.
CONFIG_RCU_FAST_NO_HZ -- Do one, but not with CONFIG_RCU_NOCB_CPU_ALL. CONFIG_RCU_FAST_NO_HZ -- Do one, but not with CONFIG_RCU_NOCB_CPU_ALL.
CONFIG_RCU_NOCB_CPU -- Do three, see below. CONFIG_RCU_NOCB_CPU -- Do three, see below.
...@@ -27,28 +26,19 @@ CONFIG_RCU_NOCB_CPU_NONE -- Do one. ...@@ -27,28 +26,19 @@ CONFIG_RCU_NOCB_CPU_NONE -- Do one.
CONFIG_RCU_NOCB_CPU_ZERO -- Do one. CONFIG_RCU_NOCB_CPU_ZERO -- Do one.
CONFIG_RCU_TRACE -- Do half. CONFIG_RCU_TRACE -- Do half.
CONFIG_SMP -- Need one !SMP for PREEMPT_RCU. CONFIG_SMP -- Need one !SMP for PREEMPT_RCU.
!RCU_EXPERT -- Do a few, but these have to be vanilla configurations.
RCU-bh: Do one with PREEMPT and one with !PREEMPT. RCU-bh: Do one with PREEMPT and one with !PREEMPT.
RCU-sched: Do one with PREEMPT but not BOOST. RCU-sched: Do one with PREEMPT but not BOOST.
Hierarchy: Boot parameters:
TREE01. CONFIG_NR_CPUS=8, CONFIG_RCU_FANOUT=8, CONFIG_RCU_FANOUT_EXACT=n. nohz_full -- do at least one.
TREE02. CONFIG_NR_CPUS=8, CONFIG_RCU_FANOUT=3, CONFIG_RCU_FANOUT_EXACT=n, maxcpus -- do at least one.
CONFIG_RCU_FANOUT_LEAF=3. rcupdate.rcu_self_test_bh -- Do at least one each, offloaded and not.
TREE03. CONFIG_NR_CPUS=8, CONFIG_RCU_FANOUT=4, CONFIG_RCU_FANOUT_EXACT=n, rcupdate.rcu_self_test_sched -- Do at least one each, offloaded and not.
CONFIG_RCU_FANOUT_LEAF=4. rcupdate.rcu_self_test -- Do at least one each, offloaded and not.
TREE04. CONFIG_NR_CPUS=8, CONFIG_RCU_FANOUT=2, CONFIG_RCU_FANOUT_EXACT=n, rcutree.rcu_fanout_exact -- Do at least one.
CONFIG_RCU_FANOUT_LEAF=2.
TREE05. CONFIG_NR_CPUS=8, CONFIG_RCU_FANOUT=6, CONFIG_RCU_FANOUT_EXACT=n
CONFIG_RCU_FANOUT_LEAF=6.
TREE06. CONFIG_NR_CPUS=8, CONFIG_RCU_FANOUT=6, CONFIG_RCU_FANOUT_EXACT=y
CONFIG_RCU_FANOUT_LEAF=6.
TREE07. CONFIG_NR_CPUS=16, CONFIG_RCU_FANOUT=2, CONFIG_RCU_FANOUT_EXACT=n,
CONFIG_RCU_FANOUT_LEAF=2.
TREE08. CONFIG_NR_CPUS=16, CONFIG_RCU_FANOUT=3, CONFIG_RCU_FANOUT_EXACT=y,
CONFIG_RCU_FANOUT_LEAF=2.
TREE09. CONFIG_NR_CPUS=1.
Kconfig Parameters Ignored: Kconfig Parameters Ignored:
......