- 01 Dec, 2018 16 commits
-
-
Paul E. McKenney authored
This commit splits rcu_torture_fwd_prog_nr() and rcu_torture_fwd_prog_cr() functions out of rcu_torture_fwd_prog() in order to reduce indentation pain and because rcu_torture_fwd_prog() was getting a bit too long. In addition, this will enable easier conditional execution of the rcu_torture_fwd_prog_cr() function, which can give false-positive failures in some NO_HZ_FULL configurations due to overloading the housekeeping CPUs. Signed-off-by:
Paul E. McKenney <paulmck@linux.vnet.ibm.com>
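
A minimal sketch of the resulting structure, assuming the split works roughly as the commit message describes (the function names come from the commit message; the bodies, the loop, and the use of torture_must_stop() are illustrative assumptions, not the actual rcutorture code):

    /* Sketch only: single grace-period forward-progress pass. */
    static void rcu_torture_fwd_prog_nr(void)
    {
            /* ... wait for a grace period and check that it completes in time ... */
    }

    /* Sketch only: continuous call_rcu() flood pass. */
    static void rcu_torture_fwd_prog_cr(void)
    {
            /* ... post a multi-second callback flood and verify invocation ... */
    }

    /* Top-level kthread now just sequences the two passes. */
    static int rcu_torture_fwd_prog(void *args)
    {
            do {
                    rcu_torture_fwd_prog_nr();
                    /* Straightforward to make conditional for NO_HZ_FULL configurations: */
                    rcu_torture_fwd_prog_cr();
            } while (!torture_must_stop());
            return 0;
    }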
-
Paul E. McKenney authored
Now that the forward-progress code does a full-bore continuous callback flood lasting multiple seconds, there is little point in also posting a mere 60,000 callbacks every second or so. This commit therefore removes the old cbflood testing. Over time, it may be desirable to concurrently do full-bore continuous callback floods on all CPUs simultaneously, but one dragon at a time. Signed-off-by:
Paul E. McKenney <paulmck@linux.vnet.ibm.com>
-
Paul E. McKenney authored
Currently, the torture scripts rely on the initrd/init script to bring any extra CPUs online, for example, in the case where the kernel and qemu have different ideas about how many CPUs are present. This works, but is an unnecessary dependency on initrd, which needs to vary depending on the distro. This commit therefore causes torture_onoff() to check for additional CPUs, attempting to bring any found online. Errors are ignored, just as they are by the initrd/init script. Signed-off-by:
Paul E. McKenney <paulmck@linux.vnet.ibm.com>
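
A sketch of the extra-CPU check described above, assuming it simply walks the possible CPUs and attempts to online each offline one (the helper name and error handling are illustrative, not the actual torture_onoff() code):

    /* Illustrative: online any CPUs that qemu provides but that boot left offline. */
    static void torture_online_all_cpus(void)
    {
            int cpu;
            int ret;

            for_each_possible_cpu(cpu) {
                    if (cpu_online(cpu))
                            continue;
                    ret = cpu_up(cpu);
                    if (ret)
                            pr_info("Failed to online CPU %d: %d (ignored)\n", cpu, ret);
            }
    }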
-
Paul E. McKenney authored
This commit adds a call_rcu() flooding loop to the forward-progress test. This emulates tight userspace loops that force call_rcu() invocations, for example, the infamous loop containing close(open()) that instigated the addition of blimit. If RCU does not make sufficient forward progress in invoking the resulting flood of callbacks, rcutorture emits a warning. Signed-off-by:
Paul E. McKenney <paulmck@linux.vnet.ibm.com>
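
A simplified sketch of such a call_rcu() flood (the structure names, flood duration, and warning condition are illustrative assumptions; the real test tracks forward progress in considerably more detail):

    struct fwd_flood_cb {
            struct rcu_head rh;
    };

    static atomic_long_t n_flood_cbs_invoked;

    static void fwd_flood_cb_func(struct rcu_head *rhp)
    {
            atomic_long_inc(&n_flood_cbs_invoked);
            kfree(container_of(rhp, struct fwd_flood_cb, rh));
    }

    /* Flood call_rcu() for a few seconds, then check that callbacks were invoked. */
    static void rcu_torture_fwd_prog_cr_sketch(void)
    {
            unsigned long stopat = jiffies + 8 * HZ;  /* Illustrative duration. */
            struct fwd_flood_cb *fcp;

            atomic_long_set(&n_flood_cbs_invoked, 0);
            while (time_before(jiffies, stopat)) {
                    fcp = kmalloc(sizeof(*fcp), GFP_KERNEL);
                    if (!fcp)
                            break;
                    call_rcu(&fcp->rh, fwd_flood_cb_func);
                    cond_resched();
            }
            /* Insufficient forward progress: no callbacks invoked during the flood. */
            WARN_ON_ONCE(!atomic_long_read(&n_flood_cbs_invoked));
    }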
-
Paul E. McKenney authored
Merge branches 'bug.2018.11.12a', 'consolidate.2018.12.01a', 'doc.2018.11.12a', 'fixes.2018.11.12a', 'initrd.2018.11.08b', 'sil.2018.11.12a' and 'srcu.2018.11.27a' into HEAD

bug.2018.11.12a: Get rid of BUG_ON() and friends
consolidate.2018.12.01a: Continued RCU flavor-consolidation cleanup
doc.2018.11.12a: Documentation updates
fixes.2018.11.12a: Miscellaneous fixes
initrd.2018.11.08b: Automate creation of rcutorture initrd
sil.2018.11.12a: Remove more spin_unlock_wait() calls
-
Paul E. McKenney authored
Now that synchronize_rcu() waits for preempt-disable regions of code as well as RCU read-side critical sections, synchronize_sched() can be replaced by synchronize_rcu(). This commit therefore makes this change, even though it is but a comment. Signed-off-by:
Paul E. McKenney <paulmck@linux.ibm.com>
-
Paul E. McKenney authored
Now that synchronize_rcu() waits for preempt-disable regions of code as well as RCU read-side critical sections, synchronize_sched() can be replaced by synchronize_rcu(). This commit therefore makes this change, even though it is but a comment. Signed-off-by:
Paul E. McKenney <paulmck@linux.ibm.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: <linux-kernel@vger.kernel.org>
-
Paul E. McKenney authored
Now that all RCU flavors have been consolidated, rcu_barrier_bh() is but a synonym for rcu_barrier(). This commit therefore replaces the former with the latter. Signed-off-by:
Paul E. McKenney <paulmck@linux.ibm.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: <linux-decnet-user@lists.sourceforge.net> Cc: <netdev@vger.kernel.org>
-
Paul E. McKenney authored
Now that call_rcu()'s callback is not invoked until after all preempt-disable regions of code have completed (in addition to explicitly marked RCU read-side critical sections), call_rcu() can be used in place of call_rcu_sched(). This commit therefore makes that change. Signed-off-by:
Paul E. McKenney <paulmck@linux.ibm.com> Cc: John Fastabend <john.fastabend@gmail.com> Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: "David S. Miller" <davem@davemloft.net> Cc: <netdev@vger.kernel.org>
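
Illustrative shape of the substitution (the rcu_head field and callback names are placeholders, not the actual sockmap code):

    /* Before: callback deferred until all preempt-disable regions complete. */
    call_rcu_sched(&obj->rcu, free_obj_rcu);

    /* After: call_rcu() now provides that same guarantee. */
    call_rcu(&obj->rcu, free_obj_rcu);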
-
Paul E. McKenney authored
Now that synchronize_rcu() waits for preempt-disable regions of code as well as RCU read-side critical sections, synchronize_sched() can be replaced by synchronize_rcu(). This commit therefore makes this change, even though it is but a comment. Signed-off-by:
Paul E. McKenney <paulmck@linux.ibm.com>
-
Paul E. McKenney authored
Now that synchronize_rcu() waits for preempt-disable regions of code as well as RCU read-side critical sections, synchronize_sched() can be replaced by synchronize_rcu(). This commit therefore makes this change, even though it is but a comment. Signed-off-by:
Paul E. McKenney <paulmck@linux.ibm.com> Cc: Jens Axboe <axboe@kernel.dk> Cc: Dennis Zhou <dennis@kernel.org> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: "Dennis Zhou (Facebook)" <dennisszhou@gmail.com> Acked-by:
Tejun Heo <tj@kernel.org>
-
Paul E. McKenney authored
Now that call_rcu()'s callback is not invoked until after bh-disable and preempt-disable regions of code have completed (in addition to explicitly marked RCU read-side critical sections), call_rcu() can be used in place of call_rcu_bh() and call_rcu_sched(). This commit therefore removes these two API members from the callback_head structure's header comment. Signed-off-by:
Paul E. McKenney <paulmck@linux.ibm.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Pekka Enberg <penberg@kernel.org> Cc: Masahiro Yamada <yamada.masahiro@socionext.com> Cc: Alexey Dobriyan <adobriyan@gmail.com>
-
Paul E. McKenney authored
Now that synchronize_rcu() waits for preempt-disable regions of code as well as RCU read-side critical sections, synchronize_sched() can be replaced by synchronize_rcu(). This commit therefore makes this change, even though it is but a comment. Signed-off-by:
Paul E. McKenney <paulmck@linux.ibm.com> Cc: Dennis Zhou <dennis@kernel.org> Cc: Christoph Lameter <cl@linux.com> Acked-by:
Tejun Heo <tj@kernel.org>
-
Paul E. McKenney authored
Now that call_rcu()'s callback is not invoked until after all bh-disable regions of code have completed (in addition to explicitly marked RCU read-side critical sections), call_rcu() can be used in place of call_rcu_bh(). Similarly, rcu_barrier() can be used in place of rcu_barrier_bh(). This commit therefore makes these changes. Signed-off-by:
Paul E. McKenney <paulmck@linux.ibm.com> Cc: Roopa Prabhu <roopa@cumulusnetworks.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: <bridge@lists.linux-foundation.org> Cc: <netdev@vger.kernel.org> Acked-by:
Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
-
Paul E. McKenney authored
Now that call_rcu()'s callback is not invoked until after all bh-disable regions of code have completed (in addition to explicitly marked RCU read-side critical sections), call_rcu() can be used in place of call_rcu_bh(). Similarly, synchronize_rcu() can be used in place of synchronize_rcu_bh(). This commit therefore makes these changes. Signed-off-by:
Paul E. McKenney <paulmck@linux.ibm.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Eric Dumazet <edumazet@google.com> Cc: <netdev@vger.kernel.org>
-
Paul E. McKenney authored
Now that call_rcu()'s callback is not invoked until after bh-disable regions of code have completed (in addition to explicitly marked RCU read-side critical sections), call_rcu() can be used in place of call_rcu_bh(). Similarly, rcu_barrier() can be used in place of rcu_barrier_bh(). This commit therefore makes these changes. Signed-off-by:
Paul E. McKenney <paulmck@linux.ibm.com> Cc: Jamal Hadi Salim <jhs@mojatatu.com> Cc: Cong Wang <xiyou.wangcong@gmail.com> Cc: Jiri Pirko <jiri@resnulli.us> Cc: "David S. Miller" <davem@davemloft.net> Cc: <netdev@vger.kernel.org>
-
- 27 Nov, 2018 21 commits
-
-
Paul E. McKenney authored
In RCU, the distinction between "rsp", "rnp", and "rdp" has served well for a great many years, but in SRCU, "sp" vs. "sdp" has proven confusing. This commit therefore renames SRCU's "sp" pointers to "ssp", so that there is "ssp" for srcu_struct pointer, "snp" for srcu_node pointer, and "sdp" for srcu_data pointer. Signed-off-by:
Paul E. McKenney <paulmck@linux.ibm.com>
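
For example, an SRCU function prototype before and after the rename (shown to illustrate the pattern, not the full set of renamed functions):

    /* Before: "sp" was easily confused with the "sdp" srcu_data pointer. */
    int srcu_read_lock(struct srcu_struct *sp);

    /* After: "ssp" for srcu_struct, "snp" for srcu_node, "sdp" for srcu_data. */
    int srcu_read_lock(struct srcu_struct *ssp);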
-
Dennis Krein authored
The srcu_gp_start() function is called with the srcu_struct structure's ->lock held, but not with the srcu_data structure's ->lock. This is problematic because this function accesses and updates the srcu_data structure's ->srcu_cblist, which is protected by that lock. Failing to hold this lock can result in corruption of the SRCU callback lists, which in turn can result in arbitrarily bad results. This commit therefore makes srcu_gp_start() acquire the srcu_data structure's ->lock across the calls to rcu_segcblist_advance() and rcu_segcblist_accelerate(), thus preventing this corruption. Reported-by:
Bart Van Assche <bvanassche@acm.org> Reported-by:
Christoph Hellwig <hch@infradead.org> Reported-by:
Sebastian Kuzminsky <seb.kuzminsky@gmail.com> Signed-off-by:
Dennis Krein <Dennis.Krein@netapp.com> Signed-off-by:
Paul E. McKenney <paulmck@linux.ibm.com> Tested-by:
Dennis Krein <Dennis.Krein@netapp.com> Cc: <stable@vger.kernel.org> # 4.16.x
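
A simplified sketch of the fix, assuming the post-rename "ssp" naming from the commit above (the real srcu_gp_start() does additional grace-period bookkeeping before and after this section):

    static void srcu_gp_start(struct srcu_struct *ssp)
    {
            struct srcu_data *sdp = this_cpu_ptr(ssp->sda);

            /* ->srcu_cblist is protected by the srcu_data lock, so acquire it. */
            spin_lock_rcu_node(sdp);  /* Interrupts already disabled. */
            rcu_segcblist_advance(&sdp->srcu_cblist,
                                  rcu_seq_current(&ssp->srcu_gp_seq));
            (void)rcu_segcblist_accelerate(&sdp->srcu_cblist,
                                           rcu_seq_snap(&ssp->srcu_gp_seq));
            spin_unlock_rcu_node(sdp);
            /* ... remainder of grace-period start elided ... */
    }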
-
Paul E. McKenney authored
Now that call_rcu()'s callback is not invoked until after all preempt-disable regions of code have completed (in addition to explicitly marked RCU read-side critical sections), call_rcu() can be used in place of call_rcu_sched(). This commit therefore makes that change. Signed-off-by:
Paul E. McKenney <paulmck@linux.ibm.com>
-
Paul E. McKenney authored
Now that synchronize_rcu() waits for preempt-disable regions of code as well as RCU read-side critical sections, synchronize_sched() can be replaced by synchronize_rcu(). This commit therefore makes this change. Signed-off-by:
Paul E. McKenney <paulmck@linux.ibm.com> Cc: Christoph Lameter <cl@linux.com> Cc: Pekka Enberg <penberg@kernel.org> Cc: David Rientjes <rientjes@google.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: <linux-mm@kvack.org>
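
The general shape of this family of changes, using a hypothetical caller rather than the specific slab code touched here:

    /* Before: wait for all preempt-disable regions of code. */
    synchronize_sched();

    /* After: consolidated synchronize_rcu() provides the same guarantee. */
    synchronize_rcu();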
-
Paul E. McKenney authored
Now that call_rcu()'s callback is not invoked until after all preempt-disable regions of code have completed (in addition to explicitly marked RCU read-side critical sections), call_rcu() can be used in place of call_rcu_sched(). This commit therefore makes that change. Signed-off-by:
Paul E. McKenney <paulmck@linux.ibm.com> Cc: Ming Lei <ming.lei@redhat.com> Cc: Bart Van Assche <bvanassche@acm.org> Cc: Jens Axboe <axboe@kernel.dk> Acked-by:
Tejun Heo <tj@kernel.org>
-
Paul E. McKenney authored
Now that synchronize_rcu() waits for preempt-disable regions of code as well as RCU read-side critical sections, synchronize_sched() can be replaced by synchronize_rcu(). This commit therefore makes this change. Signed-off-by:
Paul E. McKenney <paulmck@linux.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Arnaldo Carvalho de Melo <acme@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org>
-
Paul E. McKenney authored
Now that call_rcu()'s callback is not invoked until after all preempt-disable regions of code have completed (in addition to explicitly marked RCU read-side critical sections), call_rcu() can be used in place of call_rcu_sched(). This commit therefore makes that change. Signed-off-by:
Paul E. McKenney <paulmck@linux.ibm.com> Cc: Lai Jiangshan <jiangshanlai@gmail.com> Acked-by:
Tejun Heo <tj@kernel.org>
-
Paul E. McKenney authored
Now that synchronize_rcu() waits for preempt-disable regions of code as well as RCU read-side critical sections, synchronize_sched() can be replaced by synchronize_rcu(). Similarly, call_rcu_sched() can be replaced by call_rcu(). This commit therefore makes these changes. Signed-off-by:
Paul E. McKenney <paulmck@linux.ibm.com> Acked-by:
Jessica Yu <jeyu@kernel.org>
-
Paul E. McKenney authored
Now that synchronize_rcu() waits for preempt-disable regions of code as well as RCU read-side critical sections, synchronize_sched() can be replaced by synchronize_rcu(). This commit therefore makes this change. Signed-off-by:
Paul E. McKenney <paulmck@linux.ibm.com> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org>
-
Paul E. McKenney authored
Now that synchronize_rcu() waits for preempt-disable regions of code as well as RCU read-side critical sections, synchronize_sched() can be replaced by synchronize_rcu(). This commit therefore makes this change. Signed-off-by:
Paul E. McKenney <paulmck@linux.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Will Deacon <will.deacon@arm.com>
-
Paul E. McKenney authored
Now that synchronize_rcu() waits for preempt-disable regions of code as well as RCU read-side critical sections, synchronize_sched() can be replaced by synchronize_rcu(). This commit therefore makes this change. Signed-off-by:
Paul E. McKenney <paulmck@linux.ibm.com> Cc: "Naveen N. Rao" <naveen.n.rao@linux.ibm.com> Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com> Cc: "David S. Miller" <davem@davemloft.net> Acked-by:
Masami Hiramatsu <mhiramat@kernel.org>
-
Paul E. McKenney authored
Now that all RCU flavors have been consolidated, rcu_barrier_sched() is but a synonym for rcu_barrier(). This commit therefore replaces the former with the latter. Signed-off-by:
Paul E. McKenney <paulmck@linux.ibm.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: "Steven Rostedt (VMware)" <rostedt@goodmis.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: <linux-mm@kvack.org>
-
Paul E. McKenney authored
Now that synchronize_rcu() waits for preempt-disable regions of code as well as RCU read-side critical sections, synchronize_sched() can be replaced by synchronize_rcu(). Similarly, call_rcu_sched() can be replaced by call_rcu(). This commit therefore makes these changes. Signed-off-by:
Paul E. McKenney <paulmck@linux.ibm.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: <linux-kernel@vger.kernel.org> Acked-by:
Steven Rostedt (VMware) <rostedt@goodmis.org>
-
Paul E. McKenney authored
Now that synchronize_rcu() waits for preempt-disable regions of code as well as RCU read-side critical sections, synchronize_sched() can be replaced by synchronize_rcu(). This commit therefore makes this change. Signed-off-by:
Paul E. McKenney <paulmck@linux.ibm.com> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Cc: <linux-fsdevel@vger.kernel.org>
-
Paul E. McKenney authored
Now that synchronize_rcu() waits for preempt-disable regions of code as well as RCU read-side critical sections, synchronize_sched() can be replaced by synchronize_rcu(). This commit therefore makes this change. Signed-off-by:
Paul E. McKenney <paulmck@linux.ibm.com>
-
Paul E. McKenney authored
Now that synchronize_rcu() waits for preempt-disable regions of code as well as RCU read-side critical sections, synchronize_sched() can be replaced by synchronize_rcu(). This commit therefore makes this change. Signed-off-by:
Paul E. McKenney <paulmck@linux.ibm.com> Cc: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com> Cc: Len Brown <lenb@kernel.org> Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net> Cc: Viresh Kumar <viresh.kumar@linaro.org> Cc: <linux-pm@vger.kernel.org>
-
Paul E. McKenney authored
Now that synchronize_rcu() waits for bh-disable regions of code as well as RCU read-side critical sections, synchronize_rcu_bh() can be replaced by synchronize_rcu(). This commit therefore makes this change. Signed-off-by:
Paul E. McKenney <paulmck@linux.ibm.com> Cc: "Michael S. Tsirkin" <mst@redhat.com> Cc: Jason Wang <jasowang@redhat.com> Cc: <kvm@vger.kernel.org> Cc: <virtualization@lists.linux-foundation.org> Cc: <netdev@vger.kernel.org>
-
Paul E. McKenney authored
Now that synchronize_rcu() waits for preempt-disable regions of code as well as RCU read-side critical sections, synchronize_sched() can be replaced by synchronize_rcu(). This commit therefore makes this change. Signed-off-by:
Paul E. McKenney <paulmck@linux.ibm.com> Cc: Realtek linux nic maintainers <nic_swsd@realtek.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: <netdev@vger.kernel.org>
-
Paul E. McKenney authored
Now that synchronize_rcu() waits for preempt-disable regions of code as well as RCU read-side critical sections, synchronize_sched() can be replaced by synchronize_rcu(). This commit therefore makes this change. Signed-off-by:
Paul E. McKenney <paulmck@linux.ibm.com> Cc: Francois Romieu <romieu@fr.zoreil.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: <netdev@vger.kernel.org>
-
Paul E. McKenney authored
Now that synchronize_rcu() waits for preempt-disable regions of code as well as RCU read-side critical sections, synchronize_sched() can be replaced by synchronize_rcu(). This commit therefore makes this change. Signed-off-by:
Paul E. McKenney <paulmck@linux.ibm.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: <openipmi-developer@lists.sourceforge.net> Acked-by:
Corey Minyard <cminyard@mvista.com>
-
Paul E. McKenney authored
Now that synchronize_rcu() waits for bh-disable regions of code as well as RCU read-side critical sections, the synchronize_rcu_bh() in pcrypt_cpumask_change_notify() can be replaced by synchronize_rcu(). This commit therefore makes this change. Signed-off-by:
Paul E. McKenney <paulmck@linux.ibm.com> Cc: Steffen Klassert <steffen.klassert@secunet.com> Cc: <linux-crypto@vger.kernel.org> Acked-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
- 12 Nov, 2018 3 commits
-
-
Lance Roy authored
lockdep_assert_held() is better suited to checking locking requirements, since it only checks if the current thread holds the lock regardless of whether someone else does. This is also a step towards possibly removing spin_is_locked(). Signed-off-by:
Lance Roy <ldr709@gmail.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Eric Auger <eric.auger@redhat.com> Cc: linux-arm-kernel@lists.infradead.org Cc: <kvmarm@lists.cs.columbia.edu> Signed-off-by:
Paul E. McKenney <paulmck@linux.ibm.com> Acked-by:
Christoffer Dall <christoffer.dall@arm.com>
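
Illustrative shape of the change (the lock name is a placeholder, not the actual KVM/arm call site):

    /* Before: only verifies that *some* context holds the lock. */
    WARN_ON(!spin_is_locked(&obj->lock));

    /* After: verifies that the current thread holds it, and only when lockdep is enabled. */
    lockdep_assert_held(&obj->lock);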
-
Lance Roy authored
lockdep_assert_held() is better suited to checking locking requirements, since it only checks if the current thread holds the lock regardless of whether someone else does. This is also a step towards possibly removing spin_is_locked(). Signed-off-by:
Lance Roy <ldr709@gmail.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com> Cc: Yang Shi <yang.shi@linux.alibaba.com> Cc: Matthew Wilcox <mawilcox@microsoft.com> Cc: Mel Gorman <mgorman@techsingularity.net> Acked-by:
Vlastimil Babka <vbabka@suse.cz> Cc: Jan Kara <jack@suse.cz> Cc: Shakeel Butt <shakeelb@google.com> Cc: <linux-mm@kvack.org> Signed-off-by:
Paul E. McKenney <paulmck@linux.ibm.com>
-
Lance Roy authored
lockdep_assert_held() is better suited to checking locking requirements, since it only checks if the current thread holds the lock regardless of whether someone else does. This is also a step towards possibly removing spin_is_locked(). Signed-off-by:
Lance Roy <ldr709@gmail.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by:
Paul E. McKenney <paulmck@linux.ibm.com>
-