- 07 Oct, 2015 12 commits
-
Paul E. McKenney authored
doc.2015.10.06a: Documentation updates.
percpu-rwsem.2015.10.06a: Optimization of per-CPU reader-writer semaphores.
torture.2015.10.06a: Torture-test updates.
-
Paul E. McKenney authored
exp.2015.10.07a: Reduce OS jitter of RCU-sched expedited grace periods.
fixes.2015.10.06a: Miscellaneous fixes.
-
Paul E. McKenney authored
Earlier versions of synchronize_sched_expedited() could prematurely end grace periods because a CPU marked cpu_is_offline() can still be executing RCU read-side critical sections, both while that CPU is making its last pass through the scheduler on its way into the idle loop and while it is in the process of coming online. This commit therefore eliminates this window by adding additional interaction with the CPU-hotplug operations. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
-
Paul E. McKenney authored
This commit redirects synchronize_rcu_expedited()'s wait to synchronize_sched_expedited_wait(), thus enabling RCU CPU stall warnings. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
-
Paul E. McKenney authored
This commit adds task-print ability to the expedited RCU CPU stall warning messages in preparation for adding stall warnings to synchronize_rcu_expedited(). Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
-
Paul E. McKenney authored
This commit makes the RCU CPU stall warning message print online/offline indications immediately after the CPU number. An "O" indicates that the CPU is globally offline, an "o" that RCU believes the CPU to be offline for the current grace period, and an "N" that RCU believes the CPU will be offline for the next grace period; a "." in any of these three positions indicates the corresponding online state. So for CPU 10, you would normally see "10-...:", indicating that everything believes the CPU to be online. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
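As a rough illustration (plain, runnable C rather than the kernel's code; the function name and boolean flags are invented for this example), the indicator prefix described above can be thought of as being formatted like this:

    #include <stdbool.h>
    #include <stdio.h>

    /* Print a CPU's stall-warning prefix: the CPU number followed by the
     * three online/offline indicator characters described above.
     */
    static void print_stall_prefix(int cpu, bool global_offline,
                                   bool gp_offline, bool next_gp_offline)
    {
            printf("%d-%c%c%c:", cpu,
                   global_offline  ? 'O' : '.',
                   gp_offline      ? 'o' : '.',
                   next_gp_offline ? 'N' : '.');
    }

    int main(void)
    {
            print_stall_prefix(10, false, false, false); /* prints "10-...:" */
            printf("\n");
            return 0;
    }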
-
Paul E. McKenney authored
Now that sync_sched_exp_select_cpus() and sync_rcu_exp_select_cpus() are identical aside from the argument to smp_call_function_single(), this commit consolidates them into a single function that takes that handler as an argument. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
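A minimal sketch of the consolidation pattern in plain, runnable C (all names here are invented; the real kernel code passes the handler on to smp_call_function_single() rather than calling it directly):

    #include <stdio.h>

    typedef void (*qs_handler_t)(void *info);

    /* Per-flavor handlers: the only part that differed between the two
     * original select_cpus() functions.
     */
    static void sched_exp_handler(void *info) { puts("sched QS"); }
    static void rcu_exp_handler(void *info) { puts("preemptible QS"); }

    /* The consolidated function: common CPU-selection logic plus the
     * flavor-specific handler passed in as an argument.
     */
    static void sync_exp_select_cpus(qs_handler_t func)
    {
            /* ... common logic that picks the CPUs needing attention ... */
            func(NULL); /* stand-in for smp_call_function_single() */
    }

    int main(void)
    {
            sync_exp_select_cpus(sched_exp_handler);
            sync_exp_select_cpus(rcu_exp_handler);
            return 0;
    }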
-
Paul E. McKenney authored
This commit brings sync_sched_exp_select_cpus() into alignment with sync_rcu_exp_select_cpus(), as a first step towards consolidating them into one function. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
-
Paul E. McKenney authored
Now that synchronize_sched_expedited() no longer uses it, there are no users of try_get_online_cpus() in mainline. This commit therefore removes it. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de>
-
Paul E. McKenney authored
Now that synchronize_sched_expedited() uses IPIs, a hook in rcu_sched_qs(), and the ->expmask field in the rcu_node combining tree, it is no longer necessary to exclude CPU hotplug. Any races with CPU hotplug will be detected when attempting to send the IPI. This commit therefore removes the code excluding CPU hotplug operations. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
-
Paul E. McKenney authored
This reverts commit af859bea (rcu: Silence lockdep false positive for expedited grace periods). Because synchronize_rcu_expedited() no longer invokes synchronize_sched_expedited(), ->exp_funnel_mutex acquisition is no longer nested, so the false positive no longer happens. This commit therefore removes the extra lockdep data structures, as they are no longer needed.
-
Paul E. McKenney authored
This commit switches synchronize_sched_expedited() from stop_one_cpu_nowait() to smp_call_function_single(), thus moving from an IPI and a pair of context switches to an IPI and a single pass through the scheduler. Of course, if the scheduler actually does decide to switch to a different task, there will still be a pair of context switches, but there would likely have been a pair of context switches anyway, just a bit later. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
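A hedged, kernel-style sketch of the change's shape ("exp_handler" is a hypothetical name, not necessarily the handler in the actual commit):

    /* Old: wake the stop-machine kthread on the target CPU, costing an
     * IPI plus a pair of context switches:
     *
     *     stop_one_cpu_nowait(cpu, handler, rsp, &per_cpu(exp_stop_work, cpu));
     *
     * New: a single IPI whose handler runs in interrupt context, so the
     * common case is one pass through the scheduler and no extra context
     * switches at all.
     */
    smp_call_function_single(cpu, exp_handler, rsp, 0);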
-
- 06 Oct, 2015 28 commits
-
Paul E. McKenney authored
The locktorture module has a list of torture types, and specifying a type not on this list is supposed to cleanly fail the module load. Unfortunately, the "fail" happens without the "cleanly". This commit therefore adds the needed clean-up after an incorrect torture_type. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Reviewed-by: Josh Triplett <josh@joshtriplett.org>
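A hedged sketch of the needed clean-up ("cur_ops", "firsterr", and the "unwind" label are illustrative names, not necessarily those in the actual patch):

    /* On an unrecognized torture_type, report the error and take the
     * same unwind path as any other initialization failure, so that the
     * module load fails cleanly.
     */
    if (cur_ops == NULL) {
            pr_alert("locktorture: invalid torture type: \"%s\"\n",
                     torture_type);
            firsterr = -EINVAL;
            goto unwind; /* runs the normal clean-up before failing */
    }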
-
Paul E. McKenney authored
This commit allows --bootarg instead of --bootargs, --config instead of --configs, and --qemu-arg instead of --qemu-args, for those cases where a native English speaker might otherwise auto-correct the argument into an incorrect form. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Reviewed-by: Josh Triplett <josh@joshtriplett.org>
-
Paul E. McKenney authored
The torturing_tasks() function is used only in kernels built with CONFIG_PROVE_RCU=y, so the second definition can result in unused-function compiler warnings. This commit adds __maybe_unused to suppress these warnings. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Reviewed-by: Josh Triplett <josh@joshtriplett.org>
-
Paul E. McKenney authored
The rcutorture module has a list of torture types, and specifying a type not on this list is supposed to cleanly fail the module load. Unfortunately, the "fail" happens without the "cleanly". This commit therefore adds the needed clean-up after an incorrect torture_type. Reported-by: David Miller <davem@davemloft.net> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Acked-by: David Miller <davem@davemloft.net> Reviewed-by: Josh Triplett <josh@joshtriplett.org>
-
Oleg Nesterov authored
1. Rename __rcu_sync_is_idle() to rcu_sync_lockdep_assert() and change it to use rcu_lockdep_assert().

2. Change rcu_sync_is_idle() to return rsp->gp_state == GP_IDLE unconditionally; this way we can remove the same check from rcu_sync_lockdep_assert() and clearly isolate the debugging code.

Note: rcu_sync_enter()->wait_event(gp_state == GP_PASSED) needs another CONFIG_PROVE_RCU check, the same as is done in ->sync(); but this needs some simple preparations in the core RCU code to avoid code duplication. Signed-off-by: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Reviewed-by: Josh Triplett <josh@joshtriplett.org>
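A hedged sketch of the resulting split (structure and field names follow the surrounding commits' descriptions and may differ from the final code):

    /* Debugging code, now cleanly isolated: complain if the caller does
     * not hold the RCU read-side lock of the corresponding flavor.
     */
    static inline void rcu_sync_lockdep_assert(struct rcu_sync *rsp)
    {
            rcu_lockdep_assert(gp_ops[rsp->gp_type].held(),
                               "suspicious rcu_sync_is_idle() usage");
    }

    /* Fast path: an unconditional state check. */
    static inline bool rcu_sync_is_idle(struct rcu_sync *rsp)
    {
            rcu_sync_lockdep_assert(rsp);
            return rsp->gp_state == GP_IDLE;
    }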
-
Oleg Nesterov authored
Based on Peter Zijlstra's earlier patch. Change percpu_down_read() to use __down_read(); this way we can do rwsem_acquire_read() unconditionally at the start, making this code more symmetric and clean. Originally-From: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Reviewed-by: Josh Triplett <josh@joshtriplett.org>
-
Oleg Nesterov authored
Update the comments broken by the previous change. Signed-off-by: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Reviewed-by: Josh Triplett <josh@joshtriplett.org>
-
Oleg Nesterov authored
Currently, down_write()/up_write() call synchronize_sched_expedited() twice, which is evil. Change this code to rely on the rcu-sync primitives instead. This avoids the _expedited "big hammer", and can be faster in the contended case or even when a single thread does down_write()/up_write() in a loop. Of course, a single down_write() will take more time, but on the other hand it will be much more friendly to the whole system. To simplify review, this patch does not update the comments; that is done by the next change. Signed-off-by: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Reviewed-by: Josh Triplett <josh@joshtriplett.org>
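A hedged sketch of the writer-side pattern this implies (kernel-style; the "rss" field name is illustrative):

    void percpu_down_write(struct percpu_rw_semaphore *sem)
    {
            /* Force readers onto their slow path; rcu_sync decides
             * internally whether a grace period is actually required.
             */
            rcu_sync_enter(&sem->rss);
            /* ... wait out existing fast-path readers, acquire rw_sem ... */
    }

    void percpu_up_write(struct percpu_rw_semaphore *sem)
    {
            /* ... release rw_sem ... */
            /* Re-enable reader fast paths, again without an unconditional
             * expedited grace period.
             */
            rcu_sync_exit(&sem->rss);
    }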
-
Oleg Nesterov authored
This is the temporary ugly hack which will be reverted later. We only need it to ensure that the next patch will not break "change sb_writers to use percpu_rw_semaphore" patches routed via the VFS tree. The alloc_super()->destroy_super() error path assumes that it is safe to call percpu_free_rwsem() after kzalloc() without percpu_init_rwsem(), so let's not disappoint it. Signed-off-by: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Reviewed-by: Josh Triplett <josh@joshtriplett.org>
-
Oleg Nesterov authored
This commit allows rcu_sync structures to be safely deallocated. The trick is to add a new ->wait field to the gp_ops array. This field is a pointer to the rcu_barrier() function corresponding to the flavor of RCU in question. This allows a new rcu_sync_dtor() to wait for any outstanding callbacks before freeing the rcu_sync structure. Signed-off-by: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Reviewed-by: Josh Triplett <josh@joshtriplett.org>
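A hedged sketch of rcu_sync_dtor() based on the description above ("cb_state" and "CB_IDLE" are illustrative names, and locking is omitted):

    void rcu_sync_dtor(struct rcu_sync *rsp)
    {
            /* If a state flip is still pending, wait for its callback by
             * invoking the flavor's barrier function via the new ->wait
             * field, e.g. rcu_barrier() for vanilla RCU.  Only then is it
             * safe to free the rcu_sync structure.
             */
            if (rsp->cb_state != CB_IDLE)
                    gp_ops[rsp->gp_type].wait();
    }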
-
Oleg Nesterov authored
This commit validates that the caller of rcu_sync_is_idle() holds the corresponding type of RCU read-side lock, but only in kernels built with CONFIG_PROVE_RCU=y. This validation is carried out via a new rcu_sync_ops->held() method that is checked within rcu_sync_is_idle(). Note that although this does add code to the fast path, it only does so in kernels built with CONFIG_PROVE_RCU=y. Suggested-by: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> Signed-off-by: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Reviewed-by: Josh Triplett <josh@joshtriplett.org>
-
Oleg Nesterov authored
This commit adds the new struct rcu_sync_ops which holds sync/call methods, and turns the function pointers in rcu_sync_struct into an array of struct rcu_sync_ops. This simplifies the "init" helpers by collapsing a switch statement and explicit multiple definitions into a simple assignment and a helper macro, respectively. Signed-off-by: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Reviewed-by: Josh Triplett <josh@joshtriplett.org>
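A hedged sketch of the new structure and array (the enum constant names follow the description and may differ from the actual patch, which also uses a helper macro to fill in the entries):

    struct rcu_sync_ops {
            void (*sync)(void);
            void (*call)(struct rcu_head *, void (*)(struct rcu_head *));
    };

    /* One entry per RCU flavor, replacing the "init" helpers' switch
     * statement with a simple table lookup.
     */
    static const struct rcu_sync_ops gp_ops[] = {
            [RCU_SYNC]       = { .sync = synchronize_rcu,    .call = call_rcu },
            [RCU_SCHED_SYNC] = { .sync = synchronize_sched,  .call = call_rcu_sched },
            [RCU_BH_SYNC]    = { .sync = synchronize_rcu_bh, .call = call_rcu_bh },
    };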
-
Oleg Nesterov authored
The rcu_sync infrastructure can be thought of as infrastructure to be used to implement reader-writer primitives having extremely lightweight readers during times when there are no writers. The first use is in the percpu_rwsem used by the VFS subsystem. This infrastructure is functionally equivalent to:

    struct rcu_sync_struct {
            atomic_t counter;
    };

    /* Check possibility of fast-path read-side operations. */
    static inline bool rcu_sync_is_idle(struct rcu_sync_struct *rss)
    {
            return atomic_read(&rss->counter) == 0;
    }

    /* Tell readers to use slowpaths. */
    static inline void rcu_sync_enter(struct rcu_sync_struct *rss)
    {
            atomic_inc(&rss->counter);
            synchronize_sched();
    }

    /* Allow readers to once again use fastpaths. */
    static inline void rcu_sync_exit(struct rcu_sync_struct *rss)
    {
            synchronize_sched();
            atomic_dec(&rss->counter);
    }

The main difference is that it records the state and only calls synchronize_sched() if required. At least some of the calls to synchronize_sched() will be optimized away when rcu_sync_enter() and rcu_sync_exit() are invoked repeatedly in quick succession. Signed-off-by: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Reviewed-by: Josh Triplett <josh@joshtriplett.org>
-
Paul E. McKenney authored
This commit moves cond_resched_rcu_qs() into stutter_wait(), saving a line and also avoiding RCU CPU stall warnings from all torture loops containing a stutter_wait(). Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Reviewed-by: Josh Triplett <josh@joshtriplett.org>
-
Paul E. McKenney authored
This commit adds percpu_rwsem tests based on the earlier rwsem tests. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Davidlohr Bueso <dave@stgolabs.net> Reviewed-by: Josh Triplett <josh@joshtriplett.org>
-
Paul E. McKenney authored
This commit exports percpu_down_read(), percpu_down_write(), __percpu_init_rwsem(), percpu_up_read(), and percpu_up_write() to allow locktorture to test them when built as a module. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Reviewed-by: Josh Triplett <josh@joshtriplett.org>
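Presumably via the kernel's usual export mechanism, sketched here (whether the actual commit used EXPORT_SYMBOL() or EXPORT_SYMBOL_GPL() is an assumption):

    EXPORT_SYMBOL_GPL(__percpu_init_rwsem);
    EXPORT_SYMBOL_GPL(percpu_down_read);
    EXPORT_SYMBOL_GPL(percpu_up_read);
    EXPORT_SYMBOL_GPL(percpu_down_write);
    EXPORT_SYMBOL_GPL(percpu_up_write);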
-
Davidlohr Bueso authored
Real-time mutexes are one of the few general primitives that we do not have in locktorture. Address this -- a few considerations:

o To spice things up, enable competing thread(s) to become rt, such that we can stress different priority-boosting paths in the rtmutex code. Introduce a ->task_boost callback, used only by the rtmutex torturer. Tasks will boost/deboost around every 50k (arbitrarily chosen) lock/unlock operations.

o Hold times are similar to what we have for other locks: only occasionally having longer hold times (per ~200k ops). So we roughly do two full rt boost+deboost cycles with short hold times.

Signed-off-by: Davidlohr Bueso <dbueso@suse.de> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Reviewed-by: Josh Triplett <josh@joshtriplett.org>
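A hedged sketch of the boost/deboost callback described above (heavily simplified; the body is illustrative rather than the actual locktorture code, which also randomizes when the toggle happens):

    /* Called from the torture loop; roughly every 50k lock/unlock
     * operations, toggle the task between SCHED_FIFO (boosted) and
     * SCHED_NORMAL (deboosted) to exercise rtmutex priority boosting.
     */
    static void torture_rtmutex_boost(void)
    {
            struct sched_param sp = { };

            if (!rt_task(current)) {
                    sp.sched_priority = MAX_RT_PRIO - 1;
                    sched_setscheduler_nocheck(current, SCHED_FIFO, &sp);
            } else {
                    sp.sched_priority = 0;
                    sched_setscheduler_nocheck(current, SCHED_NORMAL, &sp);
            }
    }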
-
Paul E. McKenney authored
The recently added lockless_dereference() macro is not yet covered in the Documentation/ directory, so this commit fixes that. Reported-by: Dmitry Vyukov <dvyukov@google.com> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Reviewed-by: Josh Triplett <josh@joshtriplett.org>
-
Paul E. McKenney authored
This commit rids the documentation of long-obsolete torture_type options such as rcu_sync and adds new ones such as tasks. Also add verbiage noting that rcutorture now concurrently tests the asynchronous, synchronous, expedited synchronous, and polling grace-period primitives. Reported-by: David Miller <davem@davemloft.net> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Acked-by: David Miller <davem@davemloft.net> Reviewed-by: Josh Triplett <josh@joshtriplett.org>
-
Jason A. Donenfeld authored
As there is a lot of misinformation and outdated information on the Internet about nearly all topics related to the kernel, I thought it would be best if I based my RCU code on the guidelines of the examples in the Documentation/ tree of the latest kernel. One thing that stuck out when reading the whatisRCU.txt document was, "interesting how we don't need any function to dereference rcu protected pointers when doing updates if a lock is held. I wonder how static analyzers will work with that." Then, a few weeks later, upon discovering sparse's __rcu support, I ran it over my code, and lo and behold, things weren't done right. Examining other RCU usages in the kernel reveals consistent use of rcu_dereference_protected(), passing in lockdep_is_held() as the conditional. So, this patch adds that idiom to the documentation, so that others who come after me won't endure the same exercise. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Reviewed-by: Josh Triplett <josh@joshtriplett.org>
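The idiom itself, as a kernel-style sketch ("gp", "gp_lock", and do_something_with() are illustrative names):

    /* Update-side access: no rcu_dereference() needed because the
     * update-side lock is held, and lockdep verifies that it is.
     */
    spin_lock(&gp_lock);
    p = rcu_dereference_protected(gp, lockdep_is_held(&gp_lock));
    if (p != NULL)
            do_something_with(p);
    spin_unlock(&gp_lock);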
-
Paul E. McKenney authored
The Linux kernel outputs copious text during boot, and a slow serial console can result in stall warnings, particularly when messages are printed with interrupts disabled. This commit adds this to the list of causes of RCU CPU stall warning messages. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Reviewed-by: Josh Triplett <josh@joshtriplett.org>
-
Paul E. McKenney authored
Documentation/memory-barriers.txt calls out RCU as one of the sets of primitives associated with ACQUIRE and RELEASE. There really is an association in that rcu_assign_pointer() includes a RELEASE operation, but a quick read can convince people that rcu_read_lock() and rcu_read_unlock() have ACQUIRE and RELEASE semantics, which they do not. This commit therefore removes RCU from this list in order to avoid this confusion. Reported-by: Boqun Feng <boqun.feng@gmail.com> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Reviewed-by: Josh Triplett <josh@joshtriplett.org>
-
Paul E. McKenney authored
The various RCU list-deletion macros (list_del_rcu(), hlist_del_init_rcu(), hlist_del_rcu(), hlist_bl_del_init_rcu(), hlist_bl_del_rcu(), hlist_nulls_del_init_rcu(), and hlist_nulls_del_rcu()) do plain stores into the ->next pointer of the preceding list element. Unfortunately, the compiler is within its rights to (for example) use byte-at-a-time writes to update the pointer, which would fatally confuse concurrent readers. This patch therefore adds the needed WRITE_ONCE() macros. KernelThreadSanitizer (KTSAN) reported the __hlist_del() issue, which is a problem when __hlist_del() is invoked by hlist_del_rcu(). Reported-by: Dmitry Vyukov <dvyukov@google.com> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Reviewed-by: Josh Triplett <josh@joshtriplett.org>
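Using __hlist_del() (the case KTSAN reported) as the example, the fix has roughly this shape; the actual commit applies the same treatment to all of the macros listed above:

    static inline void __hlist_del(struct hlist_node *n)
    {
            struct hlist_node *next = n->next;
            struct hlist_node **pprev = n->pprev;

            WRITE_ONCE(*pprev, next);       /* was: *pprev = next; */
            if (next)
                    next->pprev = pprev;
    }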
-
Paul E. McKenney authored
The old rcu_lockdep_assert() was retained to ease handling of incoming patches, but any use will result in deprecated warnings. However, its replacement, RCU_LOCKDEP_WARN(), is now upstream. It is therefore time to remove rcu_lockdep_assert(), which this commit does. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Reviewed-by: Josh Triplett <josh@joshtriplett.org>
-
Patrick Marlier authored
The current list_entry_rcu() implementation copies the pointer to a stack variable, then invokes rcu_dereference_raw() on it. This results in an additional store-load pair. Now, most compilers will emit normal store and load instructions, which might seem to be of negligible overhead, but this results in a load-hit-store situation that can cause surprisingly long pipeline stalls, even on modern microprocessors. The problem is that the store sits in the store buffer for some time, delaying the load of the same location that immediately follows it. This commit therefore switches to the lockless_dereference() primitive, which does not expect the __rcu annotations (which are in any case not present in the list_head structure) and which, like rcu_dereference_raw(), does not check for an enclosing RCU read-side critical section. Most importantly, it does not copy the pointer, thus avoiding the load-hit-store overhead. Signed-off-by: Patrick Marlier <patrick.marlier@gmail.com> [ paulmck: Switched to lockless_dereference() to suppress sparse warnings. ] Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Reviewed-by: Josh Triplett <josh@joshtriplett.org>
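A hedged before/after sketch of the macro (close to, though not guaranteed identical to, the actual patch):

    /* Before: copy the pointer to the stack, then dereference it. */
    #define list_entry_rcu(ptr, type, member) \
    ({ \
            typeof(*ptr) __rcu *__ptr = (typeof(*ptr) __rcu __force *)ptr; \
            container_of((typeof(ptr))rcu_dereference_raw(__ptr), type, member); \
    })

    /* After: no intermediate store, hence no load-hit-store stall. */
    #define list_entry_rcu(ptr, type, member) \
            container_of(lockless_dereference(ptr), type, member)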
-
Paul E. McKenney authored
This commit adds an rcu_pointer_handoff() that is intended to mark situations where a structure's protection transitions from RCU to some other mechanism (locking, reference counting, whatever). These markings should allow external tools to more easily spot bugs involving leaking pointers out of RCU read-side critical sections. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
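A hedged usage sketch in the spirit of the rcupdate.h documentation (the reference-counting details and helper names are illustrative):

    rcu_read_lock();
    p = rcu_dereference(gp);
    long_lived = is_long_lived(p);  /* Do we need to keep p around? */
    if (long_lived) {
            if (!atomic_inc_not_zero(&p->refcnt))
                    long_lived = false;  /* Object is being deleted. */
            else
                    p = rcu_pointer_handoff(p);  /* RCU -> refcount. */
    }
    rcu_read_unlock();  /* After this, p is valid only if long_lived. */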
-
Paul E. McKenney authored
This commit corrects the comment for the values of the ->gp_state field, which previously incorrectly said that these were for the ->gp_flags field. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Reviewed-by: Josh Triplett <josh@joshtriplett.org>
-
Petr Mladek authored
Commit 4cdfc175 ("rcu: Move quiescent-state forcing into kthread") started the process of folding the old ->fqs_state into ->gp_state, but did not complete it. This situation does not cause any malfunction, but can result in extremely confusing trace output. This commit completes the task of eliminating ->fqs_state in favor of ->gp_state. The old ->fqs_state was also used to decide when to collect dyntick-idle snapshots. For this purpose, we add a boolean variable to the kthread, which is true on the first call to rcu_gp_fqs() for a given grace period and false on subsequent calls. Signed-off-by: Petr Mladek <pmladek@suse.com> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Reviewed-by: Josh Triplett <josh@joshtriplett.org>
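A hedged sketch of the replacement flag inside the grace-period kthread's main loop (heavily abridged; the variable name follows the description above):

    /* Inside the grace-period kthread, once per grace period: */
    bool first_gp_fqs = true;  /* First forcing scan of this grace period? */

    for (;;) {
            /* ... wait until quiescent states need forcing ... */
            rcu_gp_fqs(rsp, first_gp_fqs);  /* snapshot dyntick-idle iff true */
            first_gp_fqs = false;
            /* ... exit the loop when the grace period completes ... */
    }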
-