Commit 01c9db82 authored by Ingo Molnar

Merge branch 'rcu/next' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu into core/rcu

Pull RCU updates from Paul E. McKenney:

  * Update RCU documentation.

  * Miscellaneous fixes.

  * Maintainership changes.

  * Torture-test updates.

  * Callback-offloading changes.
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parents 1795cd9b 187497fa
@@ -2451,8 +2451,8 @@ lot of {Linux} into your technology!!!"
 ,month="February"
 ,year="2010"
 ,note="Available:
-\url{http://kerneltrap.com/mailarchive/linux-netdev/2010/2/26/6270589}
-[Viewed March 20, 2011]"
+\url{http://thread.gmane.org/gmane.linux.network/153338}
+[Viewed June 9, 2014]"
 ,annotation={
 	Use a pair of list_head structures to support RCU-protected
 	resizable hash tables.
......
 Reference-count design for elements of lists/arrays protected by RCU.
+
+Please note that the percpu-ref feature is likely your first
+stop if you need to combine reference counts and RCU.  Please see
+include/linux/percpu-refcount.h for more information.  However, in
+those unusual cases where percpu-ref would consume too much memory,
+please read on.
+
+------------------------------------------------------------------------
+
 Reference counting on elements of lists which are protected by traditional
 reader/writer spinlocks or semaphores are straightforward:
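For context, the non-percpu-ref design this annotation points to boils down to the following pattern. This is a minimal sketch, assuming a hypothetical element type and key; none of the names below come from the patch:

	struct el {
		struct list_head list;
		atomic_t refcnt;		/* element reference count */
		struct rcu_head rcu;
		long key;
	};

	static LIST_HEAD(el_list);		/* RCU-protected list of elements */

	/* Find an element and take a counted reference to it. */
	static struct el *el_get(long key)
	{
		struct el *p;

		rcu_read_lock();
		list_for_each_entry_rcu(p, &el_list, list) {
			/* atomic_inc_not_zero() fails if a deleter won the race. */
			if (p->key == key && atomic_inc_not_zero(&p->refcnt)) {
				rcu_read_unlock();
				return p;	/* caller now holds a reference */
			}
		}
		rcu_read_unlock();
		return NULL;
	}

	/* Drop a reference; free via RCU once the last one is gone. */
	static void el_put(struct el *p)
	{
		if (atomic_dec_and_test(&p->refcnt))
			kfree_rcu(p, rcu);
	}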
......
@@ -2790,6 +2790,12 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
 			leaf rcu_node structure.  Useful for very large
 			systems.
 
+	rcutree.jiffies_till_sched_qs= [KNL]
+			Set required age in jiffies for a
+			given grace period before RCU starts
+			soliciting quiescent-state help from
+			rcu_note_context_switch().
+
 	rcutree.jiffies_till_first_fqs= [KNL]
 			Set delay from grace-period initialization to
 			first attempt to force quiescent states.
@@ -2801,6 +2807,13 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
 			quiescent states.  Units are jiffies, minimum
 			value is one, and maximum value is HZ.
 
+	rcutree.rcu_nocb_leader_stride= [KNL]
+			Set the number of NOCB kthread groups, which
+			defaults to the square root of the number of
+			CPUs.  Larger numbers reduce the wakeup overhead
+			on the per-CPU grace-period kthreads, but increase
+			that same overhead on each group's leader.
+
 	rcutree.qhimark= [KNL]
 			Set threshold of queued RCU callbacks beyond which
 			batch limiting is disabled.
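As a usage illustration, both new parameters go on the kernel command line like the other rcutree options; the values here are hypothetical, not defaults taken from the patch:

	rcutree.jiffies_till_sched_qs=40 rcutree.rcu_nocb_leader_stride=8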
......
@@ -757,10 +757,14 @@ SMP BARRIER PAIRING
 When dealing with CPU-CPU interactions, certain types of memory barrier should
 always be paired.  A lack of appropriate pairing is almost certainly an error.
 
-A write barrier should always be paired with a data dependency barrier or read
-barrier, though a general barrier would also be viable.  Similarly a read
-barrier or a data dependency barrier should always be paired with at least an
-write barrier, though, again, a general barrier is viable:
+General barriers pair with each other, though they also pair with
+most other types of barriers, albeit without transitivity.  An acquire
+barrier pairs with a release barrier, but both may also pair with other
+barriers, including of course general barriers.  A write barrier pairs
+with a data dependency barrier, an acquire barrier, a release barrier,
+a read barrier, or a general barrier.  Similarly a read barrier or a
+data dependency barrier pairs with a write barrier, an acquire barrier,
+a release barrier, or a general barrier:
 
 	CPU 1		      CPU 2
 	===============	      ===============
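A minimal sketch of the most common pairing named above, a write barrier on the producer matched by a read barrier on the consumer (variable names are illustrative, not from the patch):

	int data, flag;

	/* CPU 1 (producer) */
	void producer(void)
	{
		ACCESS_ONCE(data) = 42;		/* publish the payload... */
		smp_wmb();			/* ...and order it before the flag */
		ACCESS_ONCE(flag) = 1;
	}

	/* CPU 2 (consumer) */
	void consumer(void)
	{
		if (ACCESS_ONCE(flag)) {
			smp_rmb();		/* pairs with the smp_wmb() above */
			BUG_ON(ACCESS_ONCE(data) != 42);	/* must see 42 */
		}
	}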
@@ -1893,6 +1897,21 @@ between the STORE to indicate the event and the STORE to set TASK_RUNNING:
 	<general barrier>		STORE current->state
 	LOAD event_indicated
 
+To repeat, this write memory barrier is present if and only if something
+is actually awakened.  To see this, consider the following sequence of
+events, where X and Y are both initially zero:
+
+	CPU 1				CPU 2
+	===============================	===============================
+	X = 1;				STORE event_indicated
+	smp_mb();			wake_up();
+	Y = 1;				wait_event(wq, Y == 1);
+	wake_up();			load from Y sees 1, no memory barrier
+					load from X might see 0
+
+In contrast, if a wakeup does occur, CPU 2's load from X would be guaranteed
+to see 1.
+
 The available waker functions include:
 
 	complete();
......
@@ -70,6 +70,8 @@ Descriptions of section entries:
 	P: Person (obsolete)
 	M: Mail patches to: FullName <address@domain>
+	R: Designated reviewer: FullName <address@domain>
+	   These reviewers should be CCed on patches.
 	L: Mailing list that is relevant to this area
 	W: Web-page with status/info
 	Q: Patchwork web based patch tracking system site
@@ -7426,10 +7428,14 @@ L:	linux-kernel@vger.kernel.org
 S:	Supported
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git
 F:	Documentation/RCU/torture.txt
-F:	kernel/rcu/torture.c
+F:	kernel/rcu/rcutorture.c
 
 RCUTORTURE TEST FRAMEWORK
 M:	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
+M:	Josh Triplett <josh@joshtriplett.org>
+R:	Steven Rostedt <rostedt@goodmis.org>
+R:	Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
+R:	Lai Jiangshan <laijs@cn.fujitsu.com>
 L:	linux-kernel@vger.kernel.org
 S:	Supported
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git
@@ -7452,8 +7458,11 @@ S:	Supported
 F:	net/rds/
 
 READ-COPY UPDATE (RCU)
-M:	Dipankar Sarma <dipankar@in.ibm.com>
 M:	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
+M:	Josh Triplett <josh@joshtriplett.org>
+R:	Steven Rostedt <rostedt@goodmis.org>
+R:	Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
+R:	Lai Jiangshan <laijs@cn.fujitsu.com>
 L:	linux-kernel@vger.kernel.org
 W:	http://www.rdrop.com/users/paulmck/RCU/
 S:	Supported
@@ -7463,7 +7472,7 @@ X:	Documentation/RCU/torture.txt
 F:	include/linux/rcu*
 X:	include/linux/srcu.h
 F:	kernel/rcu/
-X:	kernel/rcu/torture.c
+X:	kernel/torture.c
 
 REAL TIME CLOCK (RTC) SUBSYSTEM
 M:	Alessandro Zummo <a.zummo@towertech.it>
@@ -8236,6 +8245,9 @@ F:	mm/sl?b*
 SLEEPABLE READ-COPY UPDATE (SRCU)
 M:	Lai Jiangshan <laijs@cn.fujitsu.com>
 M:	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
+M:	Josh Triplett <josh@joshtriplett.org>
+R:	Steven Rostedt <rostedt@goodmis.org>
+R:	Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
 L:	linux-kernel@vger.kernel.org
 W:	http://www.rdrop.com/users/paulmck/RCU/
 S:	Supported
......
@@ -102,12 +102,6 @@ extern struct group_info init_groups;
 #define INIT_IDS
 #endif
 
-#ifdef CONFIG_RCU_BOOST
-#define INIT_TASK_RCU_BOOST()						\
-	.rcu_boost_mutex = NULL,
-#else
-#define INIT_TASK_RCU_BOOST()
-#endif
 #ifdef CONFIG_TREE_PREEMPT_RCU
 #define INIT_TASK_RCU_TREE_PREEMPT()					\
 	.rcu_blocked_node = NULL,
@@ -119,8 +113,7 @@ extern struct group_info init_groups;
 	.rcu_read_lock_nesting = 0,					\
 	.rcu_read_unlock_special = 0,					\
 	.rcu_node_entry = LIST_HEAD_INIT(tsk.rcu_node_entry),		\
-	INIT_TASK_RCU_TREE_PREEMPT() \
-	INIT_TASK_RCU_BOOST()
+	INIT_TASK_RCU_TREE_PREEMPT()
 #else
 #define INIT_TASK_RCU_PREEMPT(tsk)
 #endif
......
@@ -44,7 +44,6 @@
 #include <linux/debugobjects.h>
 #include <linux/bug.h>
 #include <linux/compiler.h>
-#include <linux/percpu.h>
 #include <asm/barrier.h>
 
 extern int rcu_expedited; /* for sysctl */
@@ -299,41 +298,6 @@ static inline void rcu_user_hooks_switch(struct task_struct *prev,
 bool __rcu_is_watching(void);
 #endif /* #if defined(CONFIG_DEBUG_LOCK_ALLOC) || defined(CONFIG_RCU_TRACE) || defined(CONFIG_SMP) */
 
-/*
- * Hooks for cond_resched() and friends to avoid RCU CPU stall warnings.
- */
-#define RCU_COND_RESCHED_LIM 256	/* ms vs. 100s of ms. */
-DECLARE_PER_CPU(int, rcu_cond_resched_count);
-void rcu_resched(void);
-
-/*
- * Is it time to report RCU quiescent states?
- *
- * Note unsynchronized access to rcu_cond_resched_count.  Yes, we might
- * increment some random CPU's count, and possibly also load the result from
- * yet another CPU's count.  We might even clobber some other CPU's attempt
- * to zero its counter.  This is all OK because the goal is not precision,
- * but rather reasonable amortization of rcu_note_context_switch() overhead
- * and extremely high probability of avoiding RCU CPU stall warnings.
- * Note that this function has to be preempted in just the wrong place,
- * many thousands of times in a row, for anything bad to happen.
- */
-static inline bool rcu_should_resched(void)
-{
-	return raw_cpu_inc_return(rcu_cond_resched_count) >=
-	       RCU_COND_RESCHED_LIM;
-}
-
-/*
- * Report quiscent states to RCU if it is time to do so.
- */
-static inline void rcu_cond_resched(void)
-{
-	if (unlikely(rcu_should_resched()))
-		rcu_resched();
-}
-
 /*
  * Infrastructure to implement the synchronize_() primitives in
  * TREE_RCU and rcu_barrier_() primitives in TINY_RCU.
@@ -358,9 +322,19 @@ void wait_rcu_gp(call_rcu_func_t crf);
  * initialization.
  */
 #ifdef CONFIG_DEBUG_OBJECTS_RCU_HEAD
+void init_rcu_head(struct rcu_head *head);
+void destroy_rcu_head(struct rcu_head *head);
 void init_rcu_head_on_stack(struct rcu_head *head);
 void destroy_rcu_head_on_stack(struct rcu_head *head);
 #else /* !CONFIG_DEBUG_OBJECTS_RCU_HEAD */
+static inline void init_rcu_head(struct rcu_head *head)
+{
+}
+
+static inline void destroy_rcu_head(struct rcu_head *head)
+{
+}
+
 static inline void init_rcu_head_on_stack(struct rcu_head *head)
 {
 }
@@ -852,15 +826,14 @@ static inline void rcu_preempt_sleep_check(void)
  * read-side critical section that would block in a !PREEMPT kernel.
  * But if you want the full story, read on!
  *
- * In non-preemptible RCU implementations (TREE_RCU and TINY_RCU), it
- * is illegal to block while in an RCU read-side critical section.  In
- * preemptible RCU implementations (TREE_PREEMPT_RCU and TINY_PREEMPT_RCU)
- * in CONFIG_PREEMPT kernel builds, RCU read-side critical sections may
- * be preempted, but explicit blocking is illegal.  Finally, in preemptible
- * RCU implementations in real-time (with -rt patchset) kernel builds,
- * RCU read-side critical sections may be preempted and they may also
- * block, but only when acquiring spinlocks that are subject to priority
- * inheritance.
+ * In non-preemptible RCU implementations (TREE_RCU and TINY_RCU),
+ * it is illegal to block while in an RCU read-side critical section.
+ * In preemptible RCU implementations (TREE_PREEMPT_RCU) in CONFIG_PREEMPT
+ * kernel builds, RCU read-side critical sections may be preempted,
+ * but explicit blocking is illegal.  Finally, in preemptible RCU
+ * implementations in real-time (with -rt patchset) kernel builds, RCU
+ * read-side critical sections may be preempted and they may also block, but
+ * only when acquiring spinlocks that are subject to priority inheritance.
  */
 static inline void rcu_read_lock(void)
 {
@@ -884,6 +857,34 @@ static inline void rcu_read_lock(void)
 /**
  * rcu_read_unlock() - marks the end of an RCU read-side critical section.
  *
+ * In most situations, rcu_read_unlock() is immune from deadlock.
+ * However, in kernels built with CONFIG_RCU_BOOST, rcu_read_unlock()
+ * is responsible for deboosting, which it does via rt_mutex_unlock().
+ * Unfortunately, this function acquires the scheduler's runqueue and
+ * priority-inheritance spinlocks.  This means that deadlock could result
+ * if the caller of rcu_read_unlock() already holds one of these locks or
+ * any lock that is ever acquired while holding them.
+ *
+ * That said, RCU readers are never priority boosted unless they were
+ * preempted.  Therefore, one way to avoid deadlock is to make sure
+ * that preemption never happens within any RCU read-side critical
+ * section whose outermost rcu_read_unlock() is called with one of
+ * rt_mutex_unlock()'s locks held.  Such preemption can be avoided in
+ * a number of ways, for example, by invoking preempt_disable() before
+ * the critical section's outermost rcu_read_lock().
+ *
+ * Given that the set of locks acquired by rt_mutex_unlock() might change
+ * at any time, a somewhat more future-proofed approach is to make sure
+ * that preemption never happens within any RCU read-side critical
+ * section whose outermost rcu_read_unlock() is called with irqs disabled.
+ * This approach relies on the fact that rt_mutex_unlock() currently only
+ * acquires irq-disabled locks.
+ *
+ * The second of these two approaches is best in most situations;
+ * however, the first approach can also be useful, at least to those
+ * developers willing to keep abreast of the set of locks acquired by
+ * rt_mutex_unlock().
+ *
  * See rcu_read_lock() for more information.
  */
 static inline void rcu_read_unlock(void)
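A hedged illustration of the first avoidance strategy described in the new comment; the pointer, type, and helper below are hypothetical:

	/*
	 * Disabling preemption guarantees this reader is never priority
	 * boosted, so the outermost rcu_read_unlock() never calls
	 * rt_mutex_unlock(), even if the caller holds one of its locks.
	 */
	preempt_disable();
	rcu_read_lock();
	p = rcu_dereference(global_foo);	/* hypothetical pointer */
	if (p)
		do_something_quick(p);		/* must not block */
	rcu_read_unlock();	/* no deboost: reader was never preempted */
	preempt_enable();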
......
@@ -1270,9 +1270,6 @@ struct task_struct {
 #ifdef CONFIG_TREE_PREEMPT_RCU
 	struct rcu_node *rcu_blocked_node;
 #endif /* #ifdef CONFIG_TREE_PREEMPT_RCU */
-#ifdef CONFIG_RCU_BOOST
-	struct rt_mutex *rcu_boost_mutex;
-#endif /* #ifdef CONFIG_RCU_BOOST */
 
 #if defined(CONFIG_SCHEDSTATS) || defined(CONFIG_TASK_DELAY_ACCT)
 	struct sched_info sched_info;
@@ -2009,9 +2006,6 @@ static inline void rcu_copy_process(struct task_struct *p)
 #ifdef CONFIG_TREE_PREEMPT_RCU
 	p->rcu_blocked_node = NULL;
 #endif /* #ifdef CONFIG_TREE_PREEMPT_RCU */
-#ifdef CONFIG_RCU_BOOST
-	p->rcu_boost_mutex = NULL;
-#endif /* #ifdef CONFIG_RCU_BOOST */
 	INIT_LIST_HEAD(&p->rcu_node_entry);
 }
......
@@ -12,6 +12,7 @@
 #include <linux/hrtimer.h>
 #include <linux/context_tracking_state.h>
 #include <linux/cpumask.h>
+#include <linux/sched.h>
 
 #ifdef CONFIG_GENERIC_CLOCKEVENTS
@@ -162,6 +163,7 @@ static inline u64 get_cpu_iowait_time_us(int cpu, u64 *unused) { return -1; }
 #ifdef CONFIG_NO_HZ_FULL
 extern bool tick_nohz_full_running;
 extern cpumask_var_t tick_nohz_full_mask;
+extern cpumask_var_t housekeeping_mask;
 
 static inline bool tick_nohz_full_enabled(void)
 {
@@ -194,6 +196,24 @@ static inline void tick_nohz_full_kick_all(void) { }
 static inline void __tick_nohz_task_switch(struct task_struct *tsk) { }
 #endif
 
+static inline bool is_housekeeping_cpu(int cpu)
+{
+#ifdef CONFIG_NO_HZ_FULL
+	if (tick_nohz_full_enabled())
+		return cpumask_test_cpu(cpu, housekeeping_mask);
+#endif
+	return true;
+}
+
+static inline void housekeeping_affine(struct task_struct *t)
+{
+#ifdef CONFIG_NO_HZ_FULL
+	if (tick_nohz_full_enabled())
+		set_cpus_allowed_ptr(t, housekeeping_mask);
+#endif
+}
+
 static inline void tick_nohz_full_check(void)
 {
 	if (tick_nohz_full_enabled())
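A small usage sketch for the two new helpers, assuming a hypothetical kernel thread that must stay off nohz_full CPUs (this mirrors how RCU's grace-period kthreads use housekeeping_affine(), but the thread and its work function are invented for illustration):

	static int housekeeping_thread_fn(void *unused)
	{
		housekeeping_affine(current);	/* no-op unless nohz_full is on */
		while (!kthread_should_stop()) {
			do_background_work();	/* hypothetical */
			schedule_timeout_interruptible(HZ);
		}
		return 0;
	}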
......
@@ -505,7 +505,7 @@ config PREEMPT_RCU
 	def_bool TREE_PREEMPT_RCU
 	help
 	  This option enables preemptible-RCU code that is common between
-	  the TREE_PREEMPT_RCU and TINY_PREEMPT_RCU implementations.
+	  TREE_PREEMPT_RCU and, in the old days, TINY_PREEMPT_RCU.
 
 config RCU_STALL_COMMON
 	def_bool ( TREE_RCU || TREE_PREEMPT_RCU || RCU_TRACE )
@@ -737,7 +737,7 @@ choice
 
 config RCU_NOCB_CPU_NONE
 	bool "No build_forced no-CBs CPUs"
-	depends on RCU_NOCB_CPU && !NO_HZ_FULL
+	depends on RCU_NOCB_CPU && !NO_HZ_FULL_ALL
 	help
 	  This option does not force any of the CPUs to be no-CBs CPUs.
 	  Only CPUs designated by the rcu_nocbs= boot parameter will be
@@ -751,7 +751,7 @@ config RCU_NOCB_CPU_NONE
 
 config RCU_NOCB_CPU_ZERO
 	bool "CPU 0 is a build_forced no-CBs CPU"
-	depends on RCU_NOCB_CPU && !NO_HZ_FULL
+	depends on RCU_NOCB_CPU && !NO_HZ_FULL_ALL
 	help
 	  This option forces CPU 0 to be a no-CBs CPU, so that its RCU
 	  callbacks are invoked by a per-CPU kthread whose name begins
......
@@ -99,6 +99,10 @@ static inline void debug_rcu_head_unqueue(struct rcu_head *head)
 
 void kfree(const void *);
 
+/*
+ * Reclaim the specified callback, either by invoking it (non-lazy case)
+ * or freeing it directly (lazy case).  Return true if lazy, false otherwise.
+ */
 static inline bool __rcu_reclaim(const char *rn, struct rcu_head *head)
 {
 	unsigned long offset = (unsigned long)head->func;
@@ -108,12 +112,12 @@ static inline bool __rcu_reclaim(const char *rn, struct rcu_head *head)
 		RCU_TRACE(trace_rcu_invoke_kfree_callback(rn, head, offset));
 		kfree((void *)head - offset);
 		rcu_lock_release(&rcu_callback_map);
-		return 1;
+		return true;
 	} else {
 		RCU_TRACE(trace_rcu_invoke_callback(rn, head));
 		head->func(head);
 		rcu_lock_release(&rcu_callback_map);
-		return 0;
+		return false;
 	}
 }
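The lazy case above works because kfree_rcu() stores the offset of the rcu_head within its enclosing structure in head->func instead of a real callback pointer; __is_kfree_rcu_offset() then distinguishes small offsets from kernel text addresses. A sketch of the caller side, with a hypothetical structure:

	struct foo {
		int data;
		struct rcu_head rh;
	};

	static void release_foo(struct foo *p)
	{
		/*
		 * The "callback" recorded here is really
		 * offsetof(struct foo, rh), which __rcu_reclaim()
		 * turns into kfree((void *)head - offset).
		 */
		kfree_rcu(p, rh);
	}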
......
@@ -298,9 +298,9 @@ int __srcu_read_lock(struct srcu_struct *sp)
 
 	idx = ACCESS_ONCE(sp->completed) & 0x1;
 	preempt_disable();
-	ACCESS_ONCE(this_cpu_ptr(sp->per_cpu_ref)->c[idx]) += 1;
+	__this_cpu_inc(sp->per_cpu_ref->c[idx]);
 	smp_mb(); /* B */  /* Avoid leaking the critical section. */
-	ACCESS_ONCE(this_cpu_ptr(sp->per_cpu_ref)->seq[idx]) += 1;
+	__this_cpu_inc(sp->per_cpu_ref->seq[idx]);
 	preempt_enable();
 	return idx;
 }
......
@@ -172,6 +172,14 @@ struct rcu_node {
 				/*  queued on this rcu_node structure that */
 				/*  are blocking the current grace period, */
 				/*  there can be no such task. */
+	struct completion boost_completion;
+				/* Used to ensure that the rt_mutex used */
+				/*  to carry out the boosting is fully */
+				/*  released with no future boostee accesses */
+				/*  before that rt_mutex is re-initialized. */
+	struct rt_mutex boost_mtx;
+				/* Used only for the priority-boosting */
+				/*  side effect, not as a lock. */
 	unsigned long boost_time;
 				/* When to start boosting (jiffies). */
 	struct task_struct *boost_kthread_task;
@@ -307,6 +315,9 @@ struct rcu_data {
 	/* 4) reasons this CPU needed to be kicked by force_quiescent_state */
 	unsigned long dynticks_fqs;	/* Kicked due to dynticks idle. */
 	unsigned long offline_fqs;	/* Kicked due to being offline. */
+	unsigned long cond_resched_completed;
+					/* Grace period that needs help */
+					/*  from cond_resched(). */
 
 	/* 5) __rcu_pending() statistics. */
 	unsigned long n_rcu_pending;	/* rcu_pending() calls since boot. */
@@ -331,11 +342,29 @@ struct rcu_data {
 	struct rcu_head **nocb_tail;
 	atomic_long_t nocb_q_count;	/* # CBs waiting for kthread */
 	atomic_long_t nocb_q_count_lazy; /*  (approximate). */
+	struct rcu_head *nocb_follower_head; /* CBs ready to invoke. */
+	struct rcu_head **nocb_follower_tail;
+	atomic_long_t nocb_follower_count; /* # CBs ready to invoke. */
+	atomic_long_t nocb_follower_count_lazy; /*  (approximate). */
 	int nocb_p_count;		/* # CBs being invoked by kthread */
 	int nocb_p_count_lazy;		/*  (approximate). */
 	wait_queue_head_t nocb_wq;	/* For nocb kthreads to sleep on. */
 	struct task_struct *nocb_kthread;
 	bool nocb_defer_wakeup;		/* Defer wakeup of nocb_kthread. */
+
+	/* The following fields are used by the leader, hence own cacheline. */
+	struct rcu_head *nocb_gp_head ____cacheline_internodealigned_in_smp;
+					/* CBs waiting for GP. */
+	struct rcu_head **nocb_gp_tail;
+	long nocb_gp_count;
+	long nocb_gp_count_lazy;
+	bool nocb_leader_wake;		/* Is the nocb leader thread awake? */
+	struct rcu_data *nocb_next_follower;
+					/* Next follower in wakeup chain. */
+
+	/* The following fields are used by the follower, hence new cacheline. */
+	struct rcu_data *nocb_leader ____cacheline_internodealigned_in_smp;
+					/* Leader CPU takes GP-end wakeups. */
 #endif /* #ifdef CONFIG_RCU_NOCB_CPU */
 
 	/* 8) RCU CPU stall data. */
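To make the leader/follower split concrete, here is a hedged sketch of the wakeup-chain walk these fields support, simplified from the grace-period-end path (counts and memory ordering omitted; the function name is invented):

	static void nocb_leader_distribute(struct rcu_data *my_rdp)
	{
		struct rcu_data *rdp;

		/* The leader walks itself and then each of its followers. */
		for (rdp = my_rdp; rdp; rdp = rdp->nocb_next_follower) {
			if (!rdp->nocb_gp_head)
				continue;	/* nothing ready for this CPU */
			/* Splice the newly-ready callbacks onto the follower's list. */
			*rdp->nocb_follower_tail = rdp->nocb_gp_head;
			rdp->nocb_follower_tail = rdp->nocb_gp_tail;
			rdp->nocb_gp_head = NULL;
			if (rdp != my_rdp)	/* the leader runs its own list */
				wake_up(&rdp->nocb_wq);
		}
	}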
@@ -392,6 +421,7 @@ struct rcu_state {
 	struct rcu_node *level[RCU_NUM_LVLS];	/* Hierarchy levels. */
 	u32 levelcnt[MAX_RCU_LVLS + 1];		/* # nodes in each level. */
 	u8 levelspread[RCU_NUM_LVLS];		/* kids/node in each level. */
+	u8 flavor_mask;				/* bit in flavor mask. */
 	struct rcu_data __percpu *rda;		/* pointer of percpu rcu_data. */
 	void (*call)(struct rcu_head *head,	/* call_rcu() flavor. */
 		     void (*func)(struct rcu_head *head));
@@ -563,7 +593,7 @@ static bool rcu_nocb_need_deferred_wakeup(struct rcu_data *rdp);
 static void do_nocb_deferred_wakeup(struct rcu_data *rdp);
 static void rcu_boot_init_nocb_percpu_data(struct rcu_data *rdp);
 static void rcu_spawn_nocb_kthreads(struct rcu_state *rsp);
-static void rcu_kick_nohz_cpu(int cpu);
+static void __maybe_unused rcu_kick_nohz_cpu(int cpu);
 static bool init_nocb_callback_list(struct rcu_data *rdp);
 static void rcu_sysidle_enter(struct rcu_dynticks *rdtp, int irq);
 static void rcu_sysidle_exit(struct rcu_dynticks *rdtp, int irq);
@@ -583,8 +613,14 @@ static bool rcu_nohz_full_cpu(struct rcu_state *rsp);
 /* Sum up queue lengths for tracing. */
 static inline void rcu_nocb_q_lengths(struct rcu_data *rdp, long *ql, long *qll)
 {
-	*ql = atomic_long_read(&rdp->nocb_q_count) + rdp->nocb_p_count;
-	*qll = atomic_long_read(&rdp->nocb_q_count_lazy) + rdp->nocb_p_count_lazy;
+	*ql = atomic_long_read(&rdp->nocb_q_count) +
+	      rdp->nocb_p_count +
+	      atomic_long_read(&rdp->nocb_follower_count) +
+	      rdp->nocb_p_count + rdp->nocb_gp_count;
+	*qll = atomic_long_read(&rdp->nocb_q_count_lazy) +
+	       rdp->nocb_p_count_lazy +
+	       atomic_long_read(&rdp->nocb_follower_count_lazy) +
+	       rdp->nocb_p_count_lazy + rdp->nocb_gp_count_lazy;
 }
 #else /* #ifdef CONFIG_RCU_NOCB_CPU */
 static inline void rcu_nocb_q_lengths(struct rcu_data *rdp, long *ql, long *qll)
......
@@ -90,9 +90,6 @@ void __rcu_read_unlock(void)
 	} else {
 		barrier();  /* critical section before exit code. */
 		t->rcu_read_lock_nesting = INT_MIN;
-#ifdef CONFIG_PROVE_RCU_DELAY
-		udelay(10); /* Make preemption more probable. */
-#endif /* #ifdef CONFIG_PROVE_RCU_DELAY */
 		barrier();  /* assign before ->rcu_read_unlock_special load */
 		if (unlikely(ACCESS_ONCE(t->rcu_read_unlock_special)))
 			rcu_read_unlock_special(t);
@@ -200,12 +197,12 @@ void wait_rcu_gp(call_rcu_func_t crf)
 EXPORT_SYMBOL_GPL(wait_rcu_gp);
 
 #ifdef CONFIG_DEBUG_OBJECTS_RCU_HEAD
-static inline void debug_init_rcu_head(struct rcu_head *head)
+void init_rcu_head(struct rcu_head *head)
 {
 	debug_object_init(head, &rcuhead_debug_descr);
 }
 
-static inline void debug_rcu_head_free(struct rcu_head *head)
+void destroy_rcu_head(struct rcu_head *head)
 {
 	debug_object_free(head, &rcuhead_debug_descr);
 }
@@ -350,21 +347,3 @@ static int __init check_cpu_stall_init(void)
 early_initcall(check_cpu_stall_init);
 
 #endif /* #ifdef CONFIG_RCU_STALL_COMMON */
-
-/*
- * Hooks for cond_resched() and friends to avoid RCU CPU stall warnings.
- */
-
-DEFINE_PER_CPU(int, rcu_cond_resched_count);
-
-/*
- * Report a set of RCU quiescent states, for use by cond_resched()
- * and friends.  Out of line due to being called infrequently.
- */
-void rcu_resched(void)
-{
-	preempt_disable();
-	__this_cpu_write(rcu_cond_resched_count, 0);
-	rcu_note_context_switch(smp_processor_id());
-	preempt_enable();
-}
@@ -4147,7 +4147,6 @@ static void __cond_resched(void)
 
 int __sched _cond_resched(void)
 {
-	rcu_cond_resched();
 	if (should_resched()) {
 		__cond_resched();
 		return 1;
@@ -4166,18 +4165,15 @@ EXPORT_SYMBOL(_cond_resched);
  */
 int __cond_resched_lock(spinlock_t *lock)
 {
-	bool need_rcu_resched = rcu_should_resched();
 	int resched = should_resched();
 	int ret = 0;
 
 	lockdep_assert_held(lock);
 
-	if (spin_needbreak(lock) || resched || need_rcu_resched) {
+	if (spin_needbreak(lock) || resched) {
 		spin_unlock(lock);
 		if (resched)
 			__cond_resched();
-		else if (unlikely(need_rcu_resched))
-			rcu_resched();
 		else
 			cpu_relax();
 		ret = 1;
@@ -4191,7 +4187,6 @@ int __sched __cond_resched_softirq(void)
 {
 	BUG_ON(!in_softirq());
 
-	rcu_cond_resched();  /* BH disabled OK, just recording QSes. */
 	if (should_resched()) {
 		local_bh_enable();
 		__cond_resched();
......
@@ -1263,6 +1263,10 @@ struct sighand_struct *__lock_task_sighand(struct task_struct *tsk,
 	struct sighand_struct *sighand;
 
 	for (;;) {
+		/*
+		 * Disable interrupts early to avoid deadlocks.
+		 * See rcu_read_unlock() comment header for details.
+		 */
 		local_irq_save(*flags);
 		rcu_read_lock();
 		sighand = rcu_dereference(tsk->sighand);
......
@@ -154,6 +154,7 @@ static void tick_sched_handle(struct tick_sched *ts, struct pt_regs *regs)
 
 #ifdef CONFIG_NO_HZ_FULL
 cpumask_var_t tick_nohz_full_mask;
+cpumask_var_t housekeeping_mask;
 bool tick_nohz_full_running;
 
 static bool can_stop_full_tick(void)
@@ -281,6 +282,7 @@ static int __init tick_nohz_full_setup(char *str)
 	int cpu;
 
 	alloc_bootmem_cpumask_var(&tick_nohz_full_mask);
+	alloc_bootmem_cpumask_var(&housekeeping_mask);
 	if (cpulist_parse(str, tick_nohz_full_mask) < 0) {
 		pr_warning("NOHZ: Incorrect nohz_full cpumask\n");
 		return 1;
@@ -291,6 +293,8 @@ static int __init tick_nohz_full_setup(char *str)
 		pr_warning("NO_HZ: Clearing %d from nohz_full range for timekeeping\n", cpu);
 		cpumask_clear_cpu(cpu, tick_nohz_full_mask);
 	}
+	cpumask_andnot(housekeeping_mask,
+		       cpu_possible_mask, tick_nohz_full_mask);
 	tick_nohz_full_running = true;
 
 	return 1;
@@ -332,9 +336,15 @@ static int tick_nohz_init_all(void)
 		pr_err("NO_HZ: Can't allocate full dynticks cpumask\n");
 		return err;
 	}
+	if (!alloc_cpumask_var(&housekeeping_mask, GFP_KERNEL)) {
+		pr_err("NO_HZ: Can't allocate not-full dynticks cpumask\n");
+		return err;
+	}
 	err = 0;
 	cpumask_setall(tick_nohz_full_mask);
 	cpumask_clear_cpu(smp_processor_id(), tick_nohz_full_mask);
+	cpumask_clear(housekeeping_mask);
+	cpumask_set_cpu(smp_processor_id(), housekeeping_mask);
 	tick_nohz_full_running = true;
 #endif
 	return err;
......
@@ -708,7 +708,7 @@ int _torture_create_kthread(int (*fn)(void *arg), void *arg, char *s, char *m,
 	int ret = 0;
 
 	VERBOSE_TOROUT_STRING(m);
-	*tp = kthread_run(fn, arg, s);
+	*tp = kthread_run(fn, arg, "%s", s);
 	if (IS_ERR(*tp)) {
 		ret = PTR_ERR(*tp);
 		VERBOSE_TOROUT_ERRSTRING(f);
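This one-argument-looking change matters because kthread_run()'s name argument is a printf-style format string. A hedged illustration of the failure mode:

	/*
	 * If s contained a '%', the old call would parse it as a format
	 * specifier and read a nonexistent vararg:
	 */
	*tp = kthread_run(fn, arg, s);		/* BAD: s parsed as a format */

	/* The fixed call uses s purely as data: */
	*tp = kthread_run(fn, arg, "%s", s);	/* OK: s printed verbatim */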
......
@@ -1131,20 +1131,6 @@ config PROVE_RCU_REPEATEDLY
 
 	 Say N if you are unsure.
 
-config PROVE_RCU_DELAY
-	bool "RCU debugging: preemptible RCU race provocation"
-	depends on DEBUG_KERNEL && PREEMPT_RCU
-	default n
-	help
-	 There is a class of races that involve an unlikely preemption
-	 of __rcu_read_unlock() just after ->rcu_read_lock_nesting has
-	 been set to INT_MIN.  This feature inserts a delay at that
-	 point to increase the probability of these races.
-
-	 Say Y to increase probability of preemption of __rcu_read_unlock().
-
-	 Say N if you are unsure.
-
 config SPARSE_RCU_POINTER
 	bool "RCU debugging: sparse-based checks for pointer usage"
 	default n
......
@@ -21,6 +21,7 @@ my $lk_path = "./";
 my $email = 1;
 my $email_usename = 1;
 my $email_maintainer = 1;
+my $email_reviewer = 1;
 my $email_list = 1;
 my $email_subscriber_list = 0;
 my $email_git_penguin_chiefs = 0;
@@ -202,6 +203,7 @@ if (!GetOptions(
 		'remove-duplicates!' => \$email_remove_duplicates,
 		'mailmap!' => \$email_use_mailmap,
 		'm!' => \$email_maintainer,
+		'r!' => \$email_reviewer,
 		'n!' => \$email_usename,
 		'l!' => \$email_list,
 		's!' => \$email_subscriber_list,
@@ -260,7 +262,8 @@ if ($sections) {
 }
 
 if ($email &&
-    ($email_maintainer + $email_list + $email_subscriber_list +
+    ($email_maintainer + $email_reviewer +
+     $email_list + $email_subscriber_list +
      $email_git + $email_git_penguin_chiefs + $email_git_blame) == 0) {
     die "$P: Please select at least 1 email option\n";
 }
@@ -750,6 +753,7 @@ MAINTAINER field selection options:
     --hg-since => hg history to use (default: $email_hg_since)
     --interactive => display a menu (mostly useful if used with the --git option)
     --m => include maintainer(s) if any
+    --r => include reviewer(s) if any
     --n => include name 'Full Name <addr\@domain.tld>'
     --l => include list(s) if any
     --s => include subscriber only list(s) if any
@@ -1064,6 +1068,22 @@ sub add_categories {
 		my $role = get_maintainer_role($i);
 		push_email_addresses($pvalue, $role);
 	    }
+	} elsif ($ptype eq "R") {
+	    my ($name, $address) = parse_email($pvalue);
+	    if ($name eq "") {
+		if ($i > 0) {
+		    my $tv = $typevalue[$i - 1];
+		    if ($tv =~ m/^(\C):\s*(.*)/) {
+			if ($1 eq "P") {
+			    $name = $2;
+			    $pvalue = format_email($name, $address, $email_usename);
+			}
+		    }
+		}
+	    }
+	    if ($email_reviewer) {
+		push_email_addresses($pvalue, 'reviewer');
+	    }
 	} elsif ($ptype eq "T") {
 	    push(@scm, $pvalue);
 	} elsif ($ptype eq "W") {
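With this change, a run over an RCU file would also CC the new reviewers. The invocation below is real (-f already exists, and the diff adds --r, enabled by default), but the output shape is illustrative rather than captured from an actual run:

	$ ./scripts/get_maintainer.pl -f kernel/rcu/tree.c
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com> (supporter:READ-COPY UPDATE (RCU))
	Steven Rostedt <rostedt@goodmis.org> (reviewer:READ-COPY UPDATE (RCU))
	linux-kernel@vger.kernel.org (open list)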
......
@@ -54,10 +54,16 @@ do
 		if test -f "$i/qemu-cmd"
 		then
 			print_bug qemu failed
+			echo "	$i"
+		elif test -f "$i/buildonly"
+		then
+			echo Build-only run, no boot/test
+			configcheck.sh $i/.config $i/ConfigFragment
+			parse-build.sh $i/Make.out $configfile
 		else
 			print_bug Build failed
-		fi
 			echo "	$i"
 		fi
+		fi
 	done
 done
@@ -42,6 +42,7 @@ grace=120
 
 T=/tmp/kvm-test-1-run.sh.$$
 trap 'rm -rf $T' 0
+touch $T
 
 . $KVM/bin/functions.sh
 . $KVPATH/ver_functions.sh
@@ -131,7 +132,10 @@ boot_args=$6
 
 cd $KVM
 kstarttime=`awk 'BEGIN { print systime() }' < /dev/null`
-echo ' ---' `date`: Starting kernel
+if test -z "$TORTURE_BUILDONLY"
+then
+	echo ' ---' `date`: Starting kernel
+fi
 
 # Generate -smp qemu argument.
 qemu_args="-nographic $qemu_args"
@@ -157,12 +161,13 @@ boot_args="`configfrag_boot_params "$boot_args" "$config_template"`"
 # Generate kernel-version-specific boot parameters
 boot_args="`per_version_boot_params "$boot_args" $builddir/.config $seconds`"
 
+echo $QEMU $qemu_args -m 512 -kernel $builddir/$BOOT_IMAGE -append \"$qemu_append $boot_args\" > $resdir/qemu-cmd
 if test -n "$TORTURE_BUILDONLY"
 then
 	echo Build-only run specified, boot/test omitted.
+	touch $resdir/buildonly
 	exit 0
 fi
-echo $QEMU $qemu_args -m 512 -kernel $builddir/$BOOT_IMAGE -append \"$qemu_append $boot_args\" > $resdir/qemu-cmd
 ( $QEMU $qemu_args -m 512 -kernel $builddir/$BOOT_IMAGE -append "$qemu_append $boot_args"; echo $? > $resdir/qemu-retval ) &
 qemu_pid=$!
 commandcompleted=0
......
@@ -340,12 +340,18 @@ function dump(first, pastlast)
 	for (j = 1; j < jn; j++) {
 		builddir=KVM "/b" j
 		print "rm -f " builddir ".ready"
-		print "echo ----", cfr[j], cpusr[j] ovf ": Starting kernel. `date`";
-		print "echo ----", cfr[j], cpusr[j] ovf ": Starting kernel. `date` >> " rd "/log";
+		print "if test -z \"$TORTURE_BUILDONLY\""
+		print "then"
+		print "\techo ----", cfr[j], cpusr[j] ovf ": Starting kernel. `date`";
+		print "\techo ----", cfr[j], cpusr[j] ovf ": Starting kernel. `date` >> " rd "/log";
+		print "fi"
 	}
 	print "wait"
-	print "echo ---- All kernel runs complete. `date`";
-	print "echo ---- All kernel runs complete. `date` >> " rd "/log";
+	print "if test -z \"$TORTURE_BUILDONLY\""
+	print "then"
+	print "\techo ---- All kernel runs complete. `date`";
+	print "\techo ---- All kernel runs complete. `date` >> " rd "/log";
+	print "fi"
 	for (j = 1; j < jn; j++) {
 		builddir=KVM "/b" j
 		print "echo ----", cfr[j], cpusr[j] ovf ": Build/run results:";
@@ -385,10 +391,7 @@ echo
 echo
 echo " --- `date` Test summary:"
 echo Results directory: $resdir/$ds
-if test -z "$TORTURE_BUILDONLY"
-then
-	kvm-recheck.sh $resdir/$ds
-fi
+kvm-recheck.sh $resdir/$ds
 ___EOF___
 
 if test "$dryrun" = script
@@ -403,7 +406,7 @@ then
 	sed -e 's/:.*$//' -e 's/^echo //'
 	exit 0
 else
-	# Not a dryru, so run the script.
+	# Not a dryrun, so run the script.
 	sh $T/script
 fi
......
@@ -15,7 +15,6 @@ CONFIG_RCU_FANOUT_EXACT=n
 CONFIG_RCU_NOCB_CPU=y
 CONFIG_RCU_NOCB_CPU_ZERO=y
 CONFIG_DEBUG_LOCK_ALLOC=n
-CONFIG_PROVE_RCU_DELAY=n
 CONFIG_RCU_CPU_STALL_INFO=n
 CONFIG_RCU_CPU_STALL_VERBOSE=n
 CONFIG_RCU_BOOST=n
......
@@ -18,7 +18,6 @@ CONFIG_RCU_FANOUT_EXACT=n
 CONFIG_RCU_NOCB_CPU=n
 CONFIG_DEBUG_LOCK_ALLOC=y
 CONFIG_PROVE_LOCKING=n
-CONFIG_PROVE_RCU_DELAY=n
 CONFIG_RCU_CPU_STALL_INFO=n
 CONFIG_RCU_CPU_STALL_VERBOSE=y
 CONFIG_RCU_BOOST=n
......
@@ -18,7 +18,6 @@ CONFIG_RCU_FANOUT_EXACT=n
 CONFIG_RCU_NOCB_CPU=n
 CONFIG_DEBUG_LOCK_ALLOC=y
 CONFIG_PROVE_LOCKING=n
-CONFIG_PROVE_RCU_DELAY=n
 CONFIG_RCU_CPU_STALL_INFO=n
 CONFIG_RCU_CPU_STALL_VERBOSE=y
 CONFIG_RCU_BOOST=n
......
@@ -14,7 +14,6 @@ CONFIG_RCU_FANOUT_LEAF=4
 CONFIG_RCU_FANOUT_EXACT=n
 CONFIG_RCU_NOCB_CPU=n
 CONFIG_DEBUG_LOCK_ALLOC=n
-CONFIG_PROVE_RCU_DELAY=n
 CONFIG_RCU_CPU_STALL_INFO=n
 CONFIG_RCU_CPU_STALL_VERBOSE=n
 CONFIG_RCU_BOOST=y
......
@@ -18,7 +18,6 @@ CONFIG_RCU_FANOUT_LEAF=2
 CONFIG_RCU_FANOUT_EXACT=n
 CONFIG_RCU_NOCB_CPU=n
 CONFIG_DEBUG_LOCK_ALLOC=n
-CONFIG_PROVE_RCU_DELAY=n
 CONFIG_RCU_CPU_STALL_INFO=y
 CONFIG_RCU_CPU_STALL_VERBOSE=y
 CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
@@ -18,7 +18,6 @@ CONFIG_RCU_NOCB_CPU_NONE=y
 CONFIG_DEBUG_LOCK_ALLOC=y
 CONFIG_PROVE_LOCKING=y
 CONFIG_PROVE_RCU=y
-CONFIG_PROVE_RCU_DELAY=y
 CONFIG_RCU_CPU_STALL_INFO=n
 CONFIG_RCU_CPU_STALL_VERBOSE=n
 CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
@@ -19,7 +19,6 @@ CONFIG_RCU_NOCB_CPU=n
 CONFIG_DEBUG_LOCK_ALLOC=y
 CONFIG_PROVE_LOCKING=y
 CONFIG_PROVE_RCU=y
-CONFIG_PROVE_RCU_DELAY=n
 CONFIG_RCU_CPU_STALL_INFO=n
 CONFIG_RCU_CPU_STALL_VERBOSE=n
 CONFIG_DEBUG_OBJECTS_RCU_HEAD=y
@@ -17,7 +17,6 @@ CONFIG_RCU_FANOUT_LEAF=2
 CONFIG_RCU_FANOUT_EXACT=n
 CONFIG_RCU_NOCB_CPU=n
 CONFIG_DEBUG_LOCK_ALLOC=n
-CONFIG_PROVE_RCU_DELAY=n
 CONFIG_RCU_CPU_STALL_INFO=y
 CONFIG_RCU_CPU_STALL_VERBOSE=n
 CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
@@ -18,7 +18,6 @@ CONFIG_RCU_FANOUT_LEAF=2
 CONFIG_RCU_NOCB_CPU=y
 CONFIG_RCU_NOCB_CPU_ALL=y
 CONFIG_DEBUG_LOCK_ALLOC=n
-CONFIG_PROVE_RCU_DELAY=n
 CONFIG_RCU_CPU_STALL_INFO=n
 CONFIG_RCU_CPU_STALL_VERBOSE=n
 CONFIG_RCU_BOOST=n
......
@@ -18,7 +18,6 @@ CONFIG_RCU_FANOUT_LEAF=2
 CONFIG_RCU_NOCB_CPU=y
 CONFIG_RCU_NOCB_CPU_ALL=y
 CONFIG_DEBUG_LOCK_ALLOC=n
-CONFIG_PROVE_RCU_DELAY=n
 CONFIG_RCU_CPU_STALL_INFO=n
 CONFIG_RCU_CPU_STALL_VERBOSE=n
 CONFIG_RCU_BOOST=n
......
@@ -13,7 +13,6 @@ CONFIG_SUSPEND=n
 CONFIG_HIBERNATION=n
 CONFIG_RCU_NOCB_CPU=n
 CONFIG_DEBUG_LOCK_ALLOC=n
-CONFIG_PROVE_RCU_DELAY=n
 CONFIG_RCU_CPU_STALL_INFO=n
 CONFIG_RCU_CPU_STALL_VERBOSE=n
 CONFIG_RCU_BOOST=n
......
@@ -13,7 +13,6 @@ CONFIG_PREEMPT_VOLUNTARY=n
 CONFIG_PREEMPT=y
 #CHECK#CONFIG_TREE_PREEMPT_RCU=y
 CONFIG_DEBUG_KERNEL=y
-CONFIG_PROVE_RCU_DELAY=y
 CONFIG_DEBUG_OBJECTS=y
 CONFIG_DEBUG_OBJECTS_RCU_HEAD=y
 CONFIG_RT_MUTEXES=y
......
@@ -13,7 +13,6 @@ CONFIG_PREEMPT_VOLUNTARY=n
 CONFIG_PREEMPT=y
 #CHECK#CONFIG_TREE_PREEMPT_RCU=y
 CONFIG_DEBUG_KERNEL=y
-CONFIG_PROVE_RCU_DELAY=y
 CONFIG_DEBUG_OBJECTS=y
 CONFIG_DEBUG_OBJECTS_RCU_HEAD=y
 CONFIG_RT_MUTEXES=y
......
@@ -13,7 +13,6 @@ CONFIG_PREEMPT_VOLUNTARY=n
 CONFIG_PREEMPT=y
 #CHECK#CONFIG_TREE_PREEMPT_RCU=y
 CONFIG_DEBUG_KERNEL=y
-CONFIG_PROVE_RCU_DELAY=y
 CONFIG_DEBUG_OBJECTS=y
 CONFIG_DEBUG_OBJECTS_RCU_HEAD=y
 CONFIG_RT_MUTEXES=y
......
@@ -13,7 +13,6 @@ CONFIG_PREEMPT_VOLUNTARY=n
 CONFIG_PREEMPT=y
 #CHECK#CONFIG_TREE_PREEMPT_RCU=y
 CONFIG_DEBUG_KERNEL=y
-CONFIG_PROVE_RCU_DELAY=y
 CONFIG_DEBUG_OBJECTS=y
 CONFIG_DEBUG_OBJECTS_RCU_HEAD=y
 CONFIG_RT_MUTEXES=y
......
@@ -14,7 +14,6 @@ CONFIG_NO_HZ_FULL_SYSIDLE -- Do one.
 CONFIG_PREEMPT -- Do half.  (First three and #8.)
 CONFIG_PROVE_LOCKING -- Do all but two, covering CONFIG_PROVE_RCU and not.
 CONFIG_PROVE_RCU -- Do all but one under CONFIG_PROVE_LOCKING.
-CONFIG_PROVE_RCU_DELAY -- Do one.
 CONFIG_RCU_BOOST -- one of TREE_PREEMPT_RCU.
 CONFIG_RCU_BOOST_PRIO -- set to 2 for _BOOST testing.
 CONFIG_RCU_CPU_STALL_INFO -- do one with and without _VERBOSE.
......