Commit 067610eb authored by Linus Torvalds

Merge tag 'rcu.release.v6.12' of git://git.kernel.org/pub/scm/linux/kernel/git/rcu/linux

Pull RCU updates from Neeraj Upadhyay:
 "Context tracking:
   - rename context tracking state related symbols and remove references
     to "dynticks" in various context tracking state variables and
     related helpers
   - force context_tracking_enabled_this_cpu() to be inlined to avoid
     leaving a noinstr section

  CSD lock:
   - enhance CSD-lock diagnostic reports
   - add an API to provide an indication of ongoing CSD-lock stall

  nocb:
   - update and simplify RCU nocb code to handle (de-)offloading of
     callbacks only for offline CPUs
   - fix RT throttling hrtimer being armed from offline CPU

  rcutorture:
   - remove redundant rcu_torture_ops get_gp_completed fields
   - add SRCU ->same_gp_state and ->get_comp_state functions
   - add generic test for NUM_ACTIVE_*RCU_POLL* for testing RCU and SRCU
     polled grace periods
   - add CFcommon.arch for arch-specific Kconfig options
   - print number of update types in rcu_torture_write_types()
   - add rcutree.nohz_full_patience_delay testing to the TREE07 scenario
   - add a stall_cpu_repeat module parameter to test repeated CPU stalls
   - add argument to limit number of CPUs a guest OS can use in
     torture.sh

  rcustall:
   - abbreviate RCU CPU stall warnings during CSD-lock stalls
   - allow dump_cpu_task() to be called without disabling preemption
   - defer printing stall-warning backtrace when holding rcu_node lock

  srcu:
   - make SRCU gp seq wrap-around faster
   - add KCSAN checks for concurrent updates to ->srcu_n_exp_nodelay and
     ->reschedule_count which are used in heuristics governing
     auto-expediting of normal SRCU grace periods and
     grace-period-state-machine delays
   - mark idle SRCU-barrier callbacks to help identify stuck
     SRCU-barrier callbacks

  rcu tasks:
   - remove RCU Tasks Rude asynchronous APIs as they are no longer used
   - stop testing RCU Tasks Rude asynchronous APIs
   - fix access to non-existent percpu regions
   - check processor-ID assumptions during chosen CPU calculation for
     callback enqueuing
   - update description of rtp->tasks_gp_seq grace-period sequence
     number
   - add rcu_barrier_cb_is_done() to identify whether a given
     rcu_barrier callback is stuck
   - mark idle Tasks-RCU-barrier callbacks
   - add *torture_stats_print() functions to print detailed diagnostics
     for Tasks-RCU variants
   - capture start time of rcu_barrier_tasks*() operation to help
     distinguish a hung barrier operation from a long series of barrier
     operations

  refscale:
   - add a TINY scenario to support tests of Tiny RCU and Tiny
     SRCU
   - optimize process_durations() operation

  rcuscale:
   - dump stacks of stalled rcu_scale_writer() instances and
     grace-period statistics when rcu_scale_writer() stalls
   - mark idle RCU-barrier callbacks to identify stuck RCU-barrier
     callbacks
   - print detailed grace-period and barrier diagnostics on
     rcu_scale_writer() hangs for Tasks-RCU variants
   - warn if async module parameter is specified for RCU implementations
     that do not have async primitives such as RCU Tasks Rude
   - make all writer tasks report upon hang
   - tolerate repeated GFP_KERNEL failure in rcu_scale_writer()
   - use special allocator for rcu_scale_writer()
   - NULL out top-level pointers to heap memory to avoid double-free
     bugs on modprobe failures
   - maintain per-task instead of per-CPU callbacks count to avoid any
     issues with migration of either tasks or callbacks
   - constify struct ref_scale_ops

  Fixes:
   - use system_unbound_wq for kfree_rcu work to avoid disturbing
     isolated CPUs

  Misc:
   - warn on unexpected rcu_state.srs_done_tail state
   - better define "atomic" for list_replace_rcu() and
     hlist_replace_rcu() routines
   - annotate struct kvfree_rcu_bulk_data with __counted_by()"

* tag 'rcu.release.v6.12' of git://git.kernel.org/pub/scm/linux/kernel/git/rcu/linux: (90 commits)
  rcu: Defer printing stall-warning backtrace when holding rcu_node lock
  rcu/nocb: Remove superfluous memory barrier after bypass enqueue
  rcu/nocb: Conditionally wake up rcuo if not already waiting on GP
  rcu/nocb: Fix RT throttling hrtimer armed from offline CPU
  rcu/nocb: Simplify (de-)offloading state machine
  context_tracking: Tag context_tracking_enabled_this_cpu() __always_inline
  context_tracking, rcu: Rename rcu_dyntick trace event into rcu_watching
  rcu: Update stray documentation references to rcu_dynticks_eqs_{enter, exit}()
  rcu: Rename rcu_momentary_dyntick_idle() into rcu_momentary_eqs()
  rcu: Rename rcu_implicit_dynticks_qs() into rcu_watching_snap_recheck()
  rcu: Rename dyntick_save_progress_counter() into rcu_watching_snap_save()
  rcu: Rename struct rcu_data .exp_dynticks_snap into .exp_watching_snap
  rcu: Rename struct rcu_data .dynticks_snap into .watching_snap
  rcu: Rename rcu_dynticks_zero_in_eqs() into rcu_watching_zero_in_eqs()
  rcu: Rename rcu_dynticks_in_eqs_since() into rcu_watching_snap_stopped_since()
  rcu: Rename rcu_dynticks_in_eqs() into rcu_watching_snap_in_eqs()
  rcu: Rename rcu_dynticks_eqs_online() into rcu_watching_online()
  context_tracking, rcu: Rename rcu_dynticks_curr_cpu_in_eqs() into rcu_is_watching_curr_cpu()
  context_tracking, rcu: Rename rcu_dynticks_task*() into rcu_task*()
  refscale: Constify struct ref_scale_ops
  ...
parents 85a77db9 355debb8
@@ -921,10 +921,10 @@ This portion of the ``rcu_data`` structure is declared as follows:
 ::
-	1   int watching_snap;
+	1   int watching_snap;
 	2   unsigned long dynticks_fqs;
-The ``->dynticks_snap`` field is used to take a snapshot of the
+The ``->watching_snap`` field is used to take a snapshot of the
 corresponding CPU's dyntick-idle state when forcing quiescent states,
 and is therefore accessed from other CPUs. Finally, the
 ``->dynticks_fqs`` field is used to count the number of times this CPU
@@ -935,8 +935,8 @@ This portion of the rcu_data structure is declared as follows:
 ::
-	1   long dynticks_nesting;
-	2   long dynticks_nmi_nesting;
+	1   long nesting;
+	2   long nmi_nesting;
 	3   atomic_t dynticks;
 	4   bool rcu_need_heavy_qs;
 	5   bool rcu_urgent_qs;
@@ -945,14 +945,14 @@ These fields in the rcu_data structure maintain the per-CPU dyntick-idle
 state for the corresponding CPU. The fields may be accessed only from
 the corresponding CPU (and from tracing) unless otherwise stated.
-The ``->dynticks_nesting`` field counts the nesting depth of process
+The ``->nesting`` field counts the nesting depth of process
 execution, so that in normal circumstances this counter has value zero
 or one. NMIs, irqs, and tracers are counted by the
-``->dynticks_nmi_nesting`` field. Because NMIs cannot be masked, changes
+``->nmi_nesting`` field. Because NMIs cannot be masked, changes
 to this variable have to be undertaken carefully using an algorithm
 provided by Andy Lutomirski. The initial transition from idle adds one,
 and nested transitions add two, so that a nesting level of five is
-represented by a ``->dynticks_nmi_nesting`` value of nine. This counter
+represented by a ``->nmi_nesting`` value of nine. This counter
 can therefore be thought of as counting the number of reasons why this
 CPU cannot be permitted to enter dyntick-idle mode, aside from
 process-level transitions.
@@ -960,12 +960,12 @@ process-level transitions.
 However, it turns out that when running in non-idle kernel context, the
 Linux kernel is fully capable of entering interrupt handlers that never
 exit and perhaps also vice versa. Therefore, whenever the
-``->dynticks_nesting`` field is incremented up from zero, the
-``->dynticks_nmi_nesting`` field is set to a large positive number, and
-whenever the ``->dynticks_nesting`` field is decremented down to zero,
-the ``->dynticks_nmi_nesting`` field is set to zero. Assuming that
+``->nesting`` field is incremented up from zero, the
+``->nmi_nesting`` field is set to a large positive number, and
+whenever the ``->nesting`` field is decremented down to zero,
+the ``->nmi_nesting`` field is set to zero. Assuming that
 the number of misnested interrupts is not sufficient to overflow the
-counter, this approach corrects the ``->dynticks_nmi_nesting`` field
+counter, this approach corrects the ``->nmi_nesting`` field
 every time the corresponding CPU enters the idle loop from process
 context.
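As an aside, the encoding described in the hunk above can be summarized arithmetically; the helper below is purely illustrative (it is not part of the kernel or of this patch):

	/* Hypothetical sketch: ->nmi_nesting value for a given nesting depth,
	 * per the rule above (the initial transition adds one, each nested
	 * transition adds two).
	 */
	static long demo_nmi_nesting_value(long nesting_level)
	{
		return nesting_level <= 0 ? 0 : 1 + 2 * (nesting_level - 1);
	}
	/* demo_nmi_nesting_value(5) == 9, matching the "nesting level of five" example. */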
@@ -992,8 +992,8 @@ code.
 +-----------------------------------------------------------------------+
 | **Quick Quiz**: |
 +-----------------------------------------------------------------------+
-| Why not simply combine the ``->dynticks_nesting`` and |
-| ``->dynticks_nmi_nesting`` counters into a single counter that just |
+| Why not simply combine the ``->nesting`` and |
+| ``->nmi_nesting`` counters into a single counter that just |
 | counts the number of reasons that the corresponding CPU is non-idle? |
 +-----------------------------------------------------------------------+
 | **Answer**: |
......
@@ -147,10 +147,10 @@ RCU read-side critical sections preceding and following the current
 idle sojourn.
 This case is handled by calls to the strongly ordered
 ``atomic_add_return()`` read-modify-write atomic operation that
-is invoked within ``rcu_dynticks_eqs_enter()`` at idle-entry
-time and within ``rcu_dynticks_eqs_exit()`` at idle-exit time.
-The grace-period kthread invokes first ``ct_dynticks_cpu_acquire()``
-(preceded by a full memory barrier) and ``rcu_dynticks_in_eqs_since()``
+is invoked within ``ct_kernel_exit_state()`` at idle-entry
+time and within ``ct_kernel_enter_state()`` at idle-exit time.
+The grace-period kthread invokes first ``ct_rcu_watching_cpu_acquire()``
+(preceded by a full memory barrier) and ``rcu_watching_snap_stopped_since()``
 (both of which rely on acquire semantics) to detect idle CPUs.
 +-----------------------------------------------------------------------+
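For orientation only, the ordering scheme described above can be sketched with hypothetical helpers (the demo_* names are assumptions; the real code lives in the context-tracking and grace-period paths): the idle-entry/exit side uses a fully ordered atomic_add_return(), and the grace-period kthread re-reads the counter with acquire semantics and compares it against an earlier snapshot.

	/* Illustrative only, not the kernel's actual implementation. */
	static atomic_t demo_ct_state;

	static void demo_idle_transition(void)			/* idle entry or exit */
	{
		(void)atomic_add_return(1, &demo_ct_state);	/* fully ordered RMW */
	}

	static bool demo_changed_since(int old_snap)		/* grace-period kthread side */
	{
		return atomic_read_acquire(&demo_ct_state) != old_snap;
	}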
......
@@ -528,7 +528,7 @@
 font-style="normal"
 y="-8652.5312"
 x="2466.7822"
-xml:space="preserve">dyntick_save_progress_counter()</text>
+xml:space="preserve">rcu_watching_snap_save()</text>
 <text
 style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier"
 id="text202-7-2-7-2-0"
@@ -537,7 +537,7 @@
 font-style="normal"
 y="-8368.1475"
 x="2463.3262"
-xml:space="preserve">rcu_implicit_dynticks_qs()</text>
+xml:space="preserve">rcu_watching_snap_recheck()</text>
 </g>
 <g
 id="g4504"
@@ -607,7 +607,7 @@
 font-weight="bold"
 font-size="192"
 id="text202-7-5-3-27-6"
-style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier">rcu_dynticks_eqs_enter()</text>
+style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier">ct_kernel_exit_state()</text>
 <text
 xml:space="preserve"
 x="3745.7725"
@@ -638,7 +638,7 @@
 font-weight="bold"
 font-size="192"
 id="text202-7-5-3-27-6-1"
-style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier">rcu_dynticks_eqs_exit()</text>
+style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier">ct_kernel_enter_state()</text>
 <text
 xml:space="preserve"
 x="3745.7725"
......
@@ -844,7 +844,7 @@
 font-style="normal"
 y="1547.8876"
 x="4417.6396"
-xml:space="preserve">dyntick_save_progress_counter()</text>
+xml:space="preserve">rcu_watching_snap_save()</text>
 <g
 style="fill:none;stroke-width:0.025in"
 transform="translate(6501.9719,-10685.904)"
@@ -899,7 +899,7 @@
 font-style="normal"
 y="1858.8729"
 x="4414.1836"
-xml:space="preserve">rcu_implicit_dynticks_qs()</text>
+xml:space="preserve">rcu_watching_snap_recheck()</text>
 <text
 xml:space="preserve"
 x="14659.87"
@@ -977,7 +977,7 @@
 font-weight="bold"
 font-size="192"
 id="text202-7-5-3-27-6"
-style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier">rcu_dynticks_eqs_enter()</text>
+style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier">ct_kernel_exit_state()</text>
 <text
 xml:space="preserve"
 x="3745.7725"
@@ -1008,7 +1008,7 @@
 font-weight="bold"
 font-size="192"
 id="text202-7-5-3-27-6-1"
-style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier">rcu_dynticks_eqs_exit()</text>
+style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier">ct_kernel_enter_state()</text>
 <text
 xml:space="preserve"
 x="3745.7725"
......
@@ -2974,7 +2974,7 @@
 font-style="normal"
 y="38114.047"
 x="-334.33856"
-xml:space="preserve">dyntick_save_progress_counter()</text>
+xml:space="preserve">rcu_watching_snap_save()</text>
 <g
 style="fill:none;stroke-width:0.025in"
 transform="translate(1749.9916,25880.249)"
@@ -3029,7 +3029,7 @@
 font-style="normal"
 y="38425.035"
 x="-337.79462"
-xml:space="preserve">rcu_implicit_dynticks_qs()</text>
+xml:space="preserve">rcu_watching_snap_recheck()</text>
 <text
 xml:space="preserve"
 x="9907.8887"
@@ -3107,7 +3107,7 @@
 font-weight="bold"
 font-size="192"
 id="text202-7-5-3-27-6"
-style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier">rcu_dynticks_eqs_enter()</text>
+style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier">ct_kernel_exit_state()</text>
 <text
 xml:space="preserve"
 x="3745.7725"
@@ -3138,7 +3138,7 @@
 font-weight="bold"
 font-size="192"
 id="text202-7-5-3-27-6-1"
-style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier">rcu_dynticks_eqs_exit()</text>
+style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier">ct_kernel_enter_state()</text>
 <text
 xml:space="preserve"
 x="3745.7725"
......
@@ -516,7 +516,7 @@
 font-style="normal"
 y="-8652.5312"
 x="2466.7822"
-xml:space="preserve">dyntick_save_progress_counter()</text>
+xml:space="preserve">rcu_watching_snap_save()</text>
 <text
 style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier"
 id="text202-7-2-7-2-0"
@@ -525,7 +525,7 @@
 font-style="normal"
 y="-8368.1475"
 x="2463.3262"
-xml:space="preserve">rcu_implicit_dynticks_qs()</text>
+xml:space="preserve">rcu_watching_snap_recheck()</text>
 <text
 sodipodi:linespacing="125%"
 style="font-size:192px;font-style:normal;font-weight:bold;line-height:125%;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier"
......
@@ -2649,8 +2649,7 @@ those that are idle from RCU's perspective) and then Tasks Rude RCU can
 be removed from the kernel.
 The tasks-rude-RCU API is also reader-marking-free and thus quite compact,
-consisting of call_rcu_tasks_rude(), synchronize_rcu_tasks_rude(),
-and rcu_barrier_tasks_rude().
+consisting solely of synchronize_rcu_tasks_rude().
 Tasks Trace RCU
 ~~~~~~~~~~~~~~~
......
@@ -194,14 +194,13 @@ over a rather long period of time, but improvements are always welcome!
 	when publicizing a pointer to a structure that can
 	be traversed by an RCU read-side critical section.
-5.	If any of call_rcu(), call_srcu(), call_rcu_tasks(),
-	call_rcu_tasks_rude(), or call_rcu_tasks_trace() is used,
-	the callback function may be invoked from softirq context,
-	and in any case with bottom halves disabled. In particular,
-	this callback function cannot block. If you need the callback
-	to block, run that code in a workqueue handler scheduled from
-	the callback. The queue_rcu_work() function does this for you
-	in the case of call_rcu().
+5.	If any of call_rcu(), call_srcu(), call_rcu_tasks(), or
+	call_rcu_tasks_trace() is used, the callback function may be
+	invoked from softirq context, and in any case with bottom halves
+	disabled. In particular, this callback function cannot block.
+	If you need the callback to block, run that code in a workqueue
+	handler scheduled from the callback. The queue_rcu_work()
+	function does this for you in the case of call_rcu().
 6.	Since synchronize_rcu() can block, it cannot be called
 	from any sort of irq context. The same rule applies
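One illustrative way to follow the advice in item 5 above, using queue_rcu_work() so the blocking work runs from process context after a grace period (the demo_* names and the choice of system_wq are assumptions made for this sketch, not part of the patch):

	#include <linux/workqueue.h>
	#include <linux/slab.h>

	struct demo_obj {
		struct rcu_work rwork;
		/* ... payload ... */
	};

	/* Runs from a workqueue after a grace period; sleeping is allowed here. */
	static void demo_reclaim_workfn(struct work_struct *work)
	{
		struct demo_obj *obj = container_of(to_rcu_work(work), struct demo_obj, rwork);

		kfree(obj);
	}

	static void demo_retire(struct demo_obj *obj)
	{
		INIT_RCU_WORK(&obj->rwork, demo_reclaim_workfn);
		queue_rcu_work(system_wq, &obj->rwork);
	}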
@@ -254,10 +253,10 @@ over a rather long period of time, but improvements are always welcome!
 	corresponding readers must use rcu_read_lock_trace()
 	and rcu_read_unlock_trace().
-c.	If an updater uses call_rcu_tasks_rude() or
-	synchronize_rcu_tasks_rude(), then the corresponding
-	readers must use anything that disables preemption,
-	for example, preempt_disable() and preempt_enable().
+c.	If an updater uses synchronize_rcu_tasks_rude(),
+	then the corresponding readers must use anything that
+	disables preemption, for example, preempt_disable()
+	and preempt_enable().
 Mixing things up will result in confusion and broken kernels, and
 has even resulted in an exploitable security issue. Therefore,
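An illustrative reader/updater pairing for rule (c) above; the demo_* functions are hypothetical:

	/* Illustrative only: a Tasks Rude RCU "reader" is any region with
	 * preemption disabled; the updater waits for all such regions.
	 */
	static void demo_rude_reader(void)
	{
		preempt_disable();
		/* ... access the state being protected ... */
		preempt_enable();
	}

	static void demo_rude_updater(void)
	{
		/* ... make the old state unreachable ... */
		synchronize_rcu_tasks_rude();
		/* ... no preemption-disabled reader can still be using it ... */
	}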
@@ -326,11 +325,9 @@ over a rather long period of time, but improvements are always welcome!
 	d.	Periodically invoke rcu_barrier(), permitting a limited
 		number of updates per grace period.
-	The same cautions apply to call_srcu(), call_rcu_tasks(),
-	call_rcu_tasks_rude(), and call_rcu_tasks_trace(). This is
-	why there is an srcu_barrier(), rcu_barrier_tasks(),
-	rcu_barrier_tasks_rude(), and rcu_barrier_tasks_rude(),
-	respectively.
+	The same cautions apply to call_srcu(), call_rcu_tasks(), and
+	call_rcu_tasks_trace(). This is why there is an srcu_barrier(),
+	rcu_barrier_tasks(), and rcu_barrier_tasks_trace(), respectively.
 	Note that although these primitives do take action to avoid
 	memory exhaustion when any given CPU has too many callbacks,
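A sketch of option (d) above; the callback budget and demo_* names are assumptions for illustration:

	/* Illustrative only: bound the number of outstanding callbacks by
	 * occasionally waiting for everything queued so far to be invoked.
	 */
	#define DEMO_CB_BUDGET	1000

	static void demo_queue_free(struct rcu_head *rhp, rcu_callback_t func)
	{
		static atomic_t demo_queued = ATOMIC_INIT(0);

		call_rcu(rhp, func);
		if (atomic_inc_return(&demo_queued) % DEMO_CB_BUDGET == 0)
			rcu_barrier();	/* may block; process context only */
	}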
@@ -383,17 +380,17 @@ over a rather long period of time, but improvements are always welcome!
 	must use whatever locking or other synchronization is required
 	to safely access and/or modify that data structure.
-	Do not assume that RCU callbacks will be executed on
-	the same CPU that executed the corresponding call_rcu(),
-	call_srcu(), call_rcu_tasks(), call_rcu_tasks_rude(), or
-	call_rcu_tasks_trace(). For example, if a given CPU goes offline
-	while having an RCU callback pending, then that RCU callback
-	will execute on some surviving CPU. (If this was not the case,
-	a self-spawning RCU callback would prevent the victim CPU from
-	ever going offline.) Furthermore, CPUs designated by rcu_nocbs=
-	might well *always* have their RCU callbacks executed on some
-	other CPUs, in fact, for some real-time workloads, this is the
-	whole point of using the rcu_nocbs= kernel boot parameter.
+	Do not assume that RCU callbacks will be executed on the same
+	CPU that executed the corresponding call_rcu(), call_srcu(),
+	call_rcu_tasks(), or call_rcu_tasks_trace(). For example, if
+	a given CPU goes offline while having an RCU callback pending,
+	then that RCU callback will execute on some surviving CPU.
+	(If this was not the case, a self-spawning RCU callback would
+	prevent the victim CPU from ever going offline.) Furthermore,
+	CPUs designated by rcu_nocbs= might well *always* have their
+	RCU callbacks executed on some other CPUs, in fact, for some
+	real-time workloads, this is the whole point of using the
+	rcu_nocbs= kernel boot parameter.
 	In addition, do not assume that callbacks queued in a given order
 	will be invoked in that order, even if they all are queued on the
@@ -507,9 +504,9 @@ over a rather long period of time, but improvements are always welcome!
 	These debugging aids can help you find problems that are
 	otherwise extremely difficult to spot.
-17.	If you pass a callback function defined within a module to one of
-	call_rcu(), call_srcu(), call_rcu_tasks(), call_rcu_tasks_rude(),
-	or call_rcu_tasks_trace(), then it is necessary to wait for all
+17.	If you pass a callback function defined within a module
+	to one of call_rcu(), call_srcu(), call_rcu_tasks(), or
+	call_rcu_tasks_trace(), then it is necessary to wait for all
 	pending callbacks to be invoked before unloading that module.
 	Note that it is absolutely *not* sufficient to wait for a grace
 	period! For example, synchronize_rcu() implementation is *not*
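The usual illustrative shape of that rule in a module's exit path, shown here for call_rcu() and rcu_barrier(); the demo names are assumptions:

	/* Illustrative only: flush this module's pending RCU callbacks before
	 * the module text containing their functions is unloaded.
	 */
	static void __exit demo_module_exit(void)
	{
		/* First stop queueing new callbacks ... */
		rcu_barrier();	/* ... then wait for the ones already queued. */
		/* Now module data and text may safely go away. */
	}
	module_exit(demo_module_exit);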
@@ -522,7 +519,6 @@ over a rather long period of time, but improvements are always welcome!
 	- call_rcu() -> rcu_barrier()
 	- call_srcu() -> srcu_barrier()
 	- call_rcu_tasks() -> rcu_barrier_tasks()
-	- call_rcu_tasks_rude() -> rcu_barrier_tasks_rude()
 	- call_rcu_tasks_trace() -> rcu_barrier_tasks_trace()
 	However, these barrier functions are absolutely *not* guaranteed
@@ -539,7 +535,6 @@ over a rather long period of time, but improvements are always welcome!
 	- Either synchronize_srcu() or synchronize_srcu_expedited(),
 	  together with and srcu_barrier()
 	- synchronize_rcu_tasks() and rcu_barrier_tasks()
-	- synchronize_tasks_rude() and rcu_barrier_tasks_rude()
 	- synchronize_tasks_trace() and rcu_barrier_tasks_trace()
 	If necessary, you can use something like workqueues to execute
......
@@ -1103,7 +1103,7 @@ RCU-Tasks-Rude::
 	Critical sections	Grace period			Barrier
-	N/A			call_rcu_tasks_rude		rcu_barrier_tasks_rude
+	N/A							N/A
 				synchronize_rcu_tasks_rude
......
@@ -4969,6 +4969,10 @@
 			Set maximum number of finished RCU callbacks to
 			process in one batch.
+	rcutree.csd_lock_suppress_rcu_stall= [KNL]
+			Do only a one-line RCU CPU stall warning when
+			there is an ongoing too-long CSD-lock wait.
 	rcutree.do_rcu_barrier= [KNL]
 			Request a call to rcu_barrier(). This is
 			throttled so that userspace tests can safely
@@ -5416,7 +5420,13 @@
 			Time to wait (s) after boot before inducing stall.
 	rcutorture.stall_cpu_irqsoff= [KNL]
-			Disable interrupts while stalling if set.
+			Disable interrupts while stalling if set, but only
+			on the first stall in the set.
+	rcutorture.stall_cpu_repeat= [KNL]
+			Number of times to repeat the stall sequence,
+			so that rcutorture.stall_cpu_repeat=3 will result
+			in four stall sequences.
 	rcutorture.stall_gp_kthread= [KNL]
 			Duration (s) of forced sleep within RCU
@@ -5604,14 +5614,6 @@
 			of zero will disable batching. Batching is
 			always disabled for synchronize_rcu_tasks().
-	rcupdate.rcu_tasks_rude_lazy_ms= [KNL]
-			Set timeout in milliseconds RCU Tasks
-			Rude asynchronous callback batching for
-			call_rcu_tasks_rude(). A negative value
-			will take the default. A value of zero will
-			disable batching. Batching is always disabled
-			for synchronize_rcu_tasks_rude().
 	rcupdate.rcu_tasks_trace_lazy_ms= [KNL]
 			Set timeout in milliseconds RCU Tasks
 			Trace asynchronous callback batching for
......
@@ -862,7 +862,7 @@ config HAVE_CONTEXT_TRACKING_USER_OFFSTACK
 	  Architecture neither relies on exception_enter()/exception_exit()
 	  nor on schedule_user(). Also preempt_schedule_notrace() and
 	  preempt_schedule_irq() can't be called in a preemptible section
-	  while context tracking is CONTEXT_USER. This feature reflects a sane
+	  while context tracking is CT_STATE_USER. This feature reflects a sane
 	  entry implementation where the following requirements are met on
 	  critical entry code, ie: before user_exit() or after user_enter():
......
@@ -103,7 +103,7 @@ static void noinstr exit_to_kernel_mode(struct pt_regs *regs)
 static __always_inline void __enter_from_user_mode(void)
 {
 	lockdep_hardirqs_off(CALLER_ADDR0);
-	CT_WARN_ON(ct_state() != CONTEXT_USER);
+	CT_WARN_ON(ct_state() != CT_STATE_USER);
 	user_exit_irqoff();
 	trace_hardirqs_off_finish();
 	mte_disable_tco_entry(current);
......
@@ -177,7 +177,7 @@ static inline void interrupt_enter_prepare(struct pt_regs *regs)
 	if (user_mode(regs)) {
 		kuap_lock();
-		CT_WARN_ON(ct_state() != CONTEXT_USER);
+		CT_WARN_ON(ct_state() != CT_STATE_USER);
 		user_exit_irqoff();
 		account_cpu_user_entry();
@@ -189,8 +189,8 @@ static inline void interrupt_enter_prepare(struct pt_regs *regs)
 		 * so avoid recursion.
 		 */
 		if (TRAP(regs) != INTERRUPT_PROGRAM)
-			CT_WARN_ON(ct_state() != CONTEXT_KERNEL &&
-				   ct_state() != CONTEXT_IDLE);
+			CT_WARN_ON(ct_state() != CT_STATE_KERNEL &&
+				   ct_state() != CT_STATE_IDLE);
 		INT_SOFT_MASK_BUG_ON(regs, is_implicit_soft_masked(regs));
 		INT_SOFT_MASK_BUG_ON(regs, arch_irq_disabled_regs(regs) &&
 				     search_kernel_restart_table(regs->nip));
......
@@ -266,7 +266,7 @@ notrace unsigned long syscall_exit_prepare(unsigned long r3,
 	unsigned long ret = 0;
 	bool is_not_scv = !IS_ENABLED(CONFIG_PPC_BOOK3S_64) || !scv;
-	CT_WARN_ON(ct_state() == CONTEXT_USER);
+	CT_WARN_ON(ct_state() == CT_STATE_USER);
 	kuap_assert_locked();
@@ -344,7 +344,7 @@ notrace unsigned long interrupt_exit_user_prepare(struct pt_regs *regs)
 	BUG_ON(regs_is_unrecoverable(regs));
 	BUG_ON(arch_irq_disabled_regs(regs));
-	CT_WARN_ON(ct_state() == CONTEXT_USER);
+	CT_WARN_ON(ct_state() == CT_STATE_USER);
 	/*
 	 * We don't need to restore AMR on the way back to userspace for KUAP.
@@ -386,7 +386,7 @@ notrace unsigned long interrupt_exit_kernel_prepare(struct pt_regs *regs)
 	if (!IS_ENABLED(CONFIG_PPC_BOOK3E_64) &&
 	    TRAP(regs) != INTERRUPT_PROGRAM &&
 	    TRAP(regs) != INTERRUPT_PERFMON)
-		CT_WARN_ON(ct_state() == CONTEXT_USER);
+		CT_WARN_ON(ct_state() == CT_STATE_USER);
 	kuap = kuap_get_and_assert_locked();
......
@@ -27,7 +27,7 @@ notrace long system_call_exception(struct pt_regs *regs, unsigned long r0)
 	trace_hardirqs_off(); /* finish reconciling */
-	CT_WARN_ON(ct_state() == CONTEXT_KERNEL);
+	CT_WARN_ON(ct_state() == CT_STATE_KERNEL);
 	user_exit_irqoff();
 	BUG_ON(regs_is_unrecoverable(regs));
......
@@ -150,7 +150,7 @@ early_param("ia32_emulation", ia32_emulation_override_cmdline);
 #endif
 /*
- * Invoke a 32-bit syscall. Called with IRQs on in CONTEXT_KERNEL.
+ * Invoke a 32-bit syscall. Called with IRQs on in CT_STATE_KERNEL.
  */
 static __always_inline void do_syscall_32_irqs_on(struct pt_regs *regs, int nr)
 {
......
@@ -26,26 +26,26 @@ extern void user_exit_callable(void);
 static inline void user_enter(void)
 {
 	if (context_tracking_enabled())
-		ct_user_enter(CONTEXT_USER);
+		ct_user_enter(CT_STATE_USER);
 }
 static inline void user_exit(void)
 {
 	if (context_tracking_enabled())
-		ct_user_exit(CONTEXT_USER);
+		ct_user_exit(CT_STATE_USER);
 }
 /* Called with interrupts disabled. */
 static __always_inline void user_enter_irqoff(void)
 {
 	if (context_tracking_enabled())
-		__ct_user_enter(CONTEXT_USER);
+		__ct_user_enter(CT_STATE_USER);
 }
 static __always_inline void user_exit_irqoff(void)
 {
 	if (context_tracking_enabled())
-		__ct_user_exit(CONTEXT_USER);
+		__ct_user_exit(CT_STATE_USER);
 }
 static inline enum ctx_state exception_enter(void)
@@ -57,7 +57,7 @@ static inline enum ctx_state exception_enter(void)
 		return 0;
 	prev_ctx = __ct_state();
-	if (prev_ctx != CONTEXT_KERNEL)
+	if (prev_ctx != CT_STATE_KERNEL)
 		ct_user_exit(prev_ctx);
 	return prev_ctx;
@@ -67,7 +67,7 @@ static inline void exception_exit(enum ctx_state prev_ctx)
 {
 	if (!IS_ENABLED(CONFIG_HAVE_CONTEXT_TRACKING_USER_OFFSTACK) &&
 	    context_tracking_enabled()) {
-		if (prev_ctx != CONTEXT_KERNEL)
+		if (prev_ctx != CT_STATE_KERNEL)
 			ct_user_enter(prev_ctx);
 	}
 }
@@ -75,7 +75,7 @@ static inline void exception_exit(enum ctx_state prev_ctx)
 static __always_inline bool context_tracking_guest_enter(void)
 {
 	if (context_tracking_enabled())
-		__ct_user_enter(CONTEXT_GUEST);
+		__ct_user_enter(CT_STATE_GUEST);
 	return context_tracking_enabled_this_cpu();
 }
@@ -83,7 +83,7 @@ static __always_inline bool context_tracking_guest_enter(void)
 static __always_inline bool context_tracking_guest_exit(void)
 {
 	if (context_tracking_enabled())
-		__ct_user_exit(CONTEXT_GUEST);
+		__ct_user_exit(CT_STATE_GUEST);
 	return context_tracking_enabled_this_cpu();
 }
@@ -115,13 +115,17 @@ extern void ct_idle_enter(void);
 extern void ct_idle_exit(void);
 /*
- * Is the current CPU in an extended quiescent state?
+ * Is RCU watching the current CPU (IOW, it is not in an extended quiescent state)?
+ *
+ * Note that this returns the actual boolean data (watching / not watching),
+ * whereas ct_rcu_watching() returns the RCU_WATCHING subvariable of
+ * context_tracking.state.
  *
  * No ordering, as we are sampling CPU-local information.
  */
-static __always_inline bool rcu_dynticks_curr_cpu_in_eqs(void)
+static __always_inline bool rcu_is_watching_curr_cpu(void)
 {
-	return !(raw_atomic_read(this_cpu_ptr(&context_tracking.state)) & RCU_DYNTICKS_IDX);
+	return raw_atomic_read(this_cpu_ptr(&context_tracking.state)) & CT_RCU_WATCHING;
 }
 /*
@@ -142,9 +146,9 @@ static __always_inline bool warn_rcu_enter(void)
 	 * lots of the actual reporting also relies on RCU.
 	 */
 	preempt_disable_notrace();
-	if (rcu_dynticks_curr_cpu_in_eqs()) {
+	if (!rcu_is_watching_curr_cpu()) {
 		ret = true;
-		ct_state_inc(RCU_DYNTICKS_IDX);
+		ct_state_inc(CT_RCU_WATCHING);
 	}
 	return ret;
@@ -153,7 +157,7 @@ static __always_inline bool warn_rcu_enter(void)
 static __always_inline void warn_rcu_exit(bool rcu)
 {
 	if (rcu)
-		ct_state_inc(RCU_DYNTICKS_IDX);
+		ct_state_inc(CT_RCU_WATCHING);
 	preempt_enable_notrace();
 }
......
@@ -7,22 +7,22 @@
 #include <linux/context_tracking_irq.h>
 /* Offset to allow distinguishing irq vs. task-based idle entry/exit. */
-#define DYNTICK_IRQ_NONIDLE	((LONG_MAX / 2) + 1)
+#define CT_NESTING_IRQ_NONIDLE	((LONG_MAX / 2) + 1)
 enum ctx_state {
-	CONTEXT_DISABLED	= -1,	/* returned by ct_state() if unknown */
-	CONTEXT_KERNEL		= 0,
-	CONTEXT_IDLE		= 1,
-	CONTEXT_USER		= 2,
-	CONTEXT_GUEST		= 3,
-	CONTEXT_MAX		= 4,
+	CT_STATE_DISABLED	= -1,	/* returned by ct_state() if unknown */
+	CT_STATE_KERNEL		= 0,
+	CT_STATE_IDLE		= 1,
+	CT_STATE_USER		= 2,
+	CT_STATE_GUEST		= 3,
+	CT_STATE_MAX		= 4,
 };
-/* Even value for idle, else odd. */
-#define RCU_DYNTICKS_IDX	CONTEXT_MAX
-#define CT_STATE_MASK		(CONTEXT_MAX - 1)
-#define CT_DYNTICKS_MASK	(~CT_STATE_MASK)
+/* Odd value for watching, else even. */
+#define CT_RCU_WATCHING		CT_STATE_MAX
+#define CT_STATE_MASK		(CT_STATE_MAX - 1)
+#define CT_RCU_WATCHING_MASK	(~CT_STATE_MASK)
 struct context_tracking {
 #ifdef CONFIG_CONTEXT_TRACKING_USER
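Under the definitions above, the low CT_STATE_MASK bits of context_tracking.state carry the CT_STATE_* value and the remaining bits, stepped by CT_RCU_WATCHING, carry the RCU-watching counter; a hypothetical decoding helper (not part of the patch) makes the split explicit:

	#include <linux/context_tracking_state.h>

	/* Hypothetical helper: split a raw state word into its two subfields. */
	static void demo_decode_ct_state(int raw, int *ctx, int *watching)
	{
		*ctx = raw & CT_STATE_MASK;		/* CT_STATE_KERNEL, CT_STATE_USER, ... */
		*watching = raw & CT_RCU_WATCHING_MASK;	/* RCU-watching counter bits */
	}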
@@ -39,8 +39,8 @@ struct context_tracking {
 	atomic_t state;
 #endif
 #ifdef CONFIG_CONTEXT_TRACKING_IDLE
-	long dynticks_nesting;		/* Track process nesting level. */
-	long dynticks_nmi_nesting;	/* Track irq/NMI nesting level. */
+	long nesting;			/* Track process nesting level. */
+	long nmi_nesting;		/* Track irq/NMI nesting level. */
 #endif
 };
@@ -56,47 +56,47 @@ static __always_inline int __ct_state(void)
 #endif
 #ifdef CONFIG_CONTEXT_TRACKING_IDLE
-static __always_inline int ct_dynticks(void)
+static __always_inline int ct_rcu_watching(void)
 {
-	return atomic_read(this_cpu_ptr(&context_tracking.state)) & CT_DYNTICKS_MASK;
+	return atomic_read(this_cpu_ptr(&context_tracking.state)) & CT_RCU_WATCHING_MASK;
 }
-static __always_inline int ct_dynticks_cpu(int cpu)
+static __always_inline int ct_rcu_watching_cpu(int cpu)
 {
 	struct context_tracking *ct = per_cpu_ptr(&context_tracking, cpu);
-	return atomic_read(&ct->state) & CT_DYNTICKS_MASK;
+	return atomic_read(&ct->state) & CT_RCU_WATCHING_MASK;
 }
-static __always_inline int ct_dynticks_cpu_acquire(int cpu)
+static __always_inline int ct_rcu_watching_cpu_acquire(int cpu)
 {
 	struct context_tracking *ct = per_cpu_ptr(&context_tracking, cpu);
-	return atomic_read_acquire(&ct->state) & CT_DYNTICKS_MASK;
+	return atomic_read_acquire(&ct->state) & CT_RCU_WATCHING_MASK;
 }
-static __always_inline long ct_dynticks_nesting(void)
+static __always_inline long ct_nesting(void)
 {
-	return __this_cpu_read(context_tracking.dynticks_nesting);
+	return __this_cpu_read(context_tracking.nesting);
 }
-static __always_inline long ct_dynticks_nesting_cpu(int cpu)
+static __always_inline long ct_nesting_cpu(int cpu)
 {
 	struct context_tracking *ct = per_cpu_ptr(&context_tracking, cpu);
-	return ct->dynticks_nesting;
+	return ct->nesting;
 }
-static __always_inline long ct_dynticks_nmi_nesting(void)
+static __always_inline long ct_nmi_nesting(void)
 {
-	return __this_cpu_read(context_tracking.dynticks_nmi_nesting);
+	return __this_cpu_read(context_tracking.nmi_nesting);
 }
-static __always_inline long ct_dynticks_nmi_nesting_cpu(int cpu)
+static __always_inline long ct_nmi_nesting_cpu(int cpu)
 {
 	struct context_tracking *ct = per_cpu_ptr(&context_tracking, cpu);
-	return ct->dynticks_nmi_nesting;
+	return ct->nmi_nesting;
 }
 #endif /* #ifdef CONFIG_CONTEXT_TRACKING_IDLE */
@@ -113,7 +113,7 @@ static __always_inline bool context_tracking_enabled_cpu(int cpu)
 	return context_tracking_enabled() && per_cpu(context_tracking.active, cpu);
 }
-static inline bool context_tracking_enabled_this_cpu(void)
+static __always_inline bool context_tracking_enabled_this_cpu(void)
 {
 	return context_tracking_enabled() && __this_cpu_read(context_tracking.active);
 }
@@ -123,14 +123,14 @@ static inline bool context_tracking_enabled_this_cpu(void)
  *
  * Returns the current cpu's context tracking state if context tracking
  * is enabled. If context tracking is disabled, returns
- * CONTEXT_DISABLED. This should be used primarily for debugging.
+ * CT_STATE_DISABLED. This should be used primarily for debugging.
  */
 static __always_inline int ct_state(void)
 {
 	int ret;
 	if (!context_tracking_enabled())
-		return CONTEXT_DISABLED;
+		return CT_STATE_DISABLED;
 	preempt_disable();
 	ret = __ct_state();
......
@@ -108,7 +108,7 @@ static __always_inline void enter_from_user_mode(struct pt_regs *regs)
 	arch_enter_from_user_mode(regs);
 	lockdep_hardirqs_off(CALLER_ADDR0);
-	CT_WARN_ON(__ct_state() != CONTEXT_USER);
+	CT_WARN_ON(__ct_state() != CT_STATE_USER);
 	user_exit_irqoff();
 	instrumentation_begin();
......
@@ -185,11 +185,7 @@ struct rcu_cblist {
  * ----------------------------------------------------------------------------
  */
 #define SEGCBLIST_ENABLED	BIT(0)
-#define SEGCBLIST_RCU_CORE	BIT(1)
-#define SEGCBLIST_LOCKING	BIT(2)
-#define SEGCBLIST_KTHREAD_CB	BIT(3)
-#define SEGCBLIST_KTHREAD_GP	BIT(4)
-#define SEGCBLIST_OFFLOADED	BIT(5)
+#define SEGCBLIST_OFFLOADED	BIT(1)
 struct rcu_segcblist {
 	struct rcu_head *head;
......
@@ -191,7 +191,10 @@ static inline void hlist_del_init_rcu(struct hlist_node *n)
  * @old : the element to be replaced
  * @new : the new element to insert
  *
- * The @old entry will be replaced with the @new entry atomically.
+ * The @old entry will be replaced with the @new entry atomically from
+ * the perspective of concurrent readers. It is the caller's responsibility
+ * to synchronize with concurrent updaters, if any.
+ *
  * Note: @old should not be empty.
  */
 static inline void list_replace_rcu(struct list_head *old,
@@ -519,7 +522,9 @@ static inline void hlist_del_rcu(struct hlist_node *n)
  * @old : the element to be replaced
  * @new : the new element to insert
  *
- * The @old entry will be replaced with the @new entry atomically.
+ * The @old entry will be replaced with the @new entry atomically from
+ * the perspective of concurrent readers. It is the caller's responsibility
+ * to synchronize with concurrent updaters, if any.
  */
 static inline void hlist_replace_rcu(struct hlist_node *old,
 				     struct hlist_node *new)
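A typical usage consistent with the reworded comments: the replacement is atomic only with respect to readers, so the updater still serializes against other updaters with its own lock (all demo_* names here are illustrative assumptions):

	#include <linux/rculist.h>
	#include <linux/slab.h>
	#include <linux/spinlock.h>

	struct demo_item {
		struct list_head node;
		int value;
		struct rcu_head rcu;
	};

	static DEFINE_SPINLOCK(demo_lock);

	static void demo_replace(struct demo_item *old, struct demo_item *new)
	{
		spin_lock(&demo_lock);			/* updater-vs-updater exclusion */
		list_replace_rcu(&old->node, &new->node);
		spin_unlock(&demo_lock);
		kfree_rcu(old, rcu);			/* free after pre-existing readers finish */
	}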
......
@@ -34,10 +34,12 @@
 #define ULONG_CMP_GE(a, b)	(ULONG_MAX / 2 >= (a) - (b))
 #define ULONG_CMP_LT(a, b)	(ULONG_MAX / 2 < (a) - (b))
+#define RCU_SEQ_CTR_SHIFT	2
+#define RCU_SEQ_STATE_MASK	((1 << RCU_SEQ_CTR_SHIFT) - 1)
 /* Exported common interfaces */
 void call_rcu(struct rcu_head *head, rcu_callback_t func);
 void rcu_barrier_tasks(void);
-void rcu_barrier_tasks_rude(void);
 void synchronize_rcu(void);
 struct rcu_gp_oldstate;
@@ -144,11 +146,18 @@ void rcu_init_nohz(void);
 int rcu_nocb_cpu_offload(int cpu);
 int rcu_nocb_cpu_deoffload(int cpu);
 void rcu_nocb_flush_deferred_wakeup(void);
+#define RCU_NOCB_LOCKDEP_WARN(c, s)	RCU_LOCKDEP_WARN(c, s)
 #else /* #ifdef CONFIG_RCU_NOCB_CPU */
 static inline void rcu_init_nohz(void) { }
 static inline int rcu_nocb_cpu_offload(int cpu) { return -EINVAL; }
 static inline int rcu_nocb_cpu_deoffload(int cpu) { return 0; }
 static inline void rcu_nocb_flush_deferred_wakeup(void) { }
+#define RCU_NOCB_LOCKDEP_WARN(c, s)
 #endif /* #else #ifdef CONFIG_RCU_NOCB_CPU */
 /*
@@ -165,6 +174,7 @@ static inline void rcu_nocb_flush_deferred_wakeup(void) { }
 } while (0)
 void call_rcu_tasks(struct rcu_head *head, rcu_callback_t func);
 void synchronize_rcu_tasks(void);
+void rcu_tasks_torture_stats_print(char *tt, char *tf);
 # else
 # define rcu_tasks_classic_qs(t, preempt) do { } while (0)
 # define call_rcu_tasks call_rcu
@@ -191,6 +201,7 @@ void rcu_tasks_trace_qs_blkd(struct task_struct *t);
 		rcu_tasks_trace_qs_blkd(t); \
 	} \
 } while (0)
+void rcu_tasks_trace_torture_stats_print(char *tt, char *tf);
 # else
 # define rcu_tasks_trace_qs(t) do { } while (0)
 # endif
@@ -202,8 +213,8 @@ do { \
 } while (0)
 # ifdef CONFIG_TASKS_RUDE_RCU
-void call_rcu_tasks_rude(struct rcu_head *head, rcu_callback_t func);
 void synchronize_rcu_tasks_rude(void);
+void rcu_tasks_rude_torture_stats_print(char *tt, char *tf);
 # endif
 #define rcu_note_voluntary_context_switch(t) rcu_tasks_qs(t, false)
......
@@ -158,7 +158,7 @@ void rcu_scheduler_starting(void);
 static inline void rcu_end_inkernel_boot(void) { }
 static inline bool rcu_inkernel_boot_has_ended(void) { return true; }
 static inline bool rcu_is_watching(void) { return true; }
-static inline void rcu_momentary_dyntick_idle(void) { }
+static inline void rcu_momentary_eqs(void) { }
 static inline void kfree_rcu_scheduler_running(void) { }
 static inline bool rcu_gp_might_be_stalled(void) { return false; }
......
@@ -37,7 +37,7 @@ void synchronize_rcu_expedited(void);
 void kvfree_call_rcu(struct rcu_head *head, void *ptr);
 void rcu_barrier(void);
-void rcu_momentary_dyntick_idle(void);
+void rcu_momentary_eqs(void);
 void kfree_rcu_scheduler_running(void);
 bool rcu_gp_might_be_stalled(void);
......
@@ -294,4 +294,10 @@ int smpcfd_prepare_cpu(unsigned int cpu);
 int smpcfd_dead_cpu(unsigned int cpu);
 int smpcfd_dying_cpu(unsigned int cpu);
+#ifdef CONFIG_CSD_LOCK_WAIT_DEBUG
+bool csd_lock_is_stuck(void);
+#else
+static inline bool csd_lock_is_stuck(void) { return false; }
+#endif
 #endif /* __LINUX_SMP_H */
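One way such an indication might be consumed; the in-tree user added by this series is the RCU CPU stall-warning code, and the caller below is purely illustrative:

	#include <linux/smp.h>
	#include <linux/printk.h>

	/* Illustrative only: shorten diagnostics when a CSD-lock stall is the
	 * likely root cause.
	 */
	static void demo_report_stall(void)
	{
		if (csd_lock_is_stuck())
			pr_info("stall likely due to an ongoing CSD-lock wait\n");
		else
			pr_info("dumping full stall diagnostics\n");
	}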
@@ -129,10 +129,23 @@ struct srcu_struct {
 #define SRCU_STATE_SCAN1	1
 #define SRCU_STATE_SCAN2	2
+/*
+ * Values for initializing gp sequence fields. Higher values allow wrap arounds to
+ * occur earlier.
+ * The second value with state is useful in the case of static initialization of
+ * srcu_usage where srcu_gp_seq_needed is expected to have some state value in its
+ * lower bits (or else it will appear to be already initialized within
+ * the call check_init_srcu_struct()).
+ */
+#define SRCU_GP_SEQ_INITIAL_VAL			((0UL - 100UL) << RCU_SEQ_CTR_SHIFT)
+#define SRCU_GP_SEQ_INITIAL_VAL_WITH_STATE	(SRCU_GP_SEQ_INITIAL_VAL - 1)
 #define __SRCU_USAGE_INIT(name)						\
 {									\
 	.lock = __SPIN_LOCK_UNLOCKED(name.lock),			\
-	.srcu_gp_seq_needed = -1UL,					\
+	.srcu_gp_seq = SRCU_GP_SEQ_INITIAL_VAL,				\
+	.srcu_gp_seq_needed = SRCU_GP_SEQ_INITIAL_VAL_WITH_STATE,	\
+	.srcu_gp_seq_needed_exp = SRCU_GP_SEQ_INITIAL_VAL,		\
 	.work = __DELAYED_WORK_INITIALIZER(name.work, NULL, 0),		\
 }
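For a sense of scale (a back-of-the-envelope check, not taken from the patch): with RCU_SEQ_CTR_SHIFT equal to 2, SRCU_GP_SEQ_INITIAL_VAL is (0UL - 100UL) << 2, that is, 400 below zero modulo ULONG_MAX + 1, and each grace period advances the sequence by 1 << RCU_SEQ_CTR_SHIFT = 4, so the counter now wraps after roughly 100 grace periods rather than after about 2^62 of them.

	/* Back-of-the-envelope only: grace periods until the sequence wraps. */
	#define DEMO_GPS_UNTIL_WRAP	((0UL - SRCU_GP_SEQ_INITIAL_VAL) >> RCU_SEQ_CTR_SHIFT)	/* == 100 */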
......
@@ -466,40 +466,40 @@ TRACE_EVENT(rcu_stall_warning,
 /*
  * Tracepoint for dyntick-idle entry/exit events. These take 2 strings
  * as argument:
- * polarity: "Start", "End", "StillNonIdle" for entering, exiting or still not
- * being in dyntick-idle mode.
+ * polarity: "Start", "End", "StillWatching" for entering, exiting or still not
+ * being in EQS mode.
  * context: "USER" or "IDLE" or "IRQ".
- * NMIs nested in IRQs are inferred with dynticks_nesting > 1 in IRQ context.
+ * NMIs nested in IRQs are inferred with nesting > 1 in IRQ context.
  *
  * These events also take a pair of numbers, which indicate the nesting
  * depth before and after the event of interest, and a third number that is
- * the ->dynticks counter. Note that task-related and interrupt-related
+ * the RCU_WATCHING counter. Note that task-related and interrupt-related
  * events use two separate counters, and that the "++=" and "--=" events
  * for irq/NMI will change the counter by two, otherwise by one.
  */
-TRACE_EVENT_RCU(rcu_dyntick,
-	TP_PROTO(const char *polarity, long oldnesting, long newnesting, int dynticks),
-	TP_ARGS(polarity, oldnesting, newnesting, dynticks),
+TRACE_EVENT_RCU(rcu_watching,
+	TP_PROTO(const char *polarity, long oldnesting, long newnesting, int counter),
+	TP_ARGS(polarity, oldnesting, newnesting, counter),
 	TP_STRUCT__entry(
 		__field(const char *, polarity)
 		__field(long, oldnesting)
 		__field(long, newnesting)
-		__field(int, dynticks)
+		__field(int, counter)
 	),
 	TP_fast_assign(
 		__entry->polarity = polarity;
 		__entry->oldnesting = oldnesting;
 		__entry->newnesting = newnesting;
-		__entry->dynticks = dynticks;
+		__entry->counter = counter;
 	),
 	TP_printk("%s %lx %lx %#3x", __entry->polarity,
 		  __entry->oldnesting, __entry->newnesting,
-		  __entry->dynticks & 0xfff)
+		  __entry->counter & 0xfff)
 );
 /*
......
@@ -182,7 +182,7 @@ static void syscall_exit_to_user_mode_prepare(struct pt_regs *regs)
unsigned long work = READ_ONCE(current_thread_info()->syscall_work);
unsigned long nr = syscall_get_nr(current, regs);
- CT_WARN_ON(ct_state() != CONTEXT_KERNEL);
+ CT_WARN_ON(ct_state() != CT_STATE_KERNEL);
if (IS_ENABLED(CONFIG_PROVE_LOCKING)) {
if (WARN(irqs_disabled(), "syscall %lu left IRQs disabled", nr))
...
@@ -54,9 +54,6 @@
* grace-period sequence number.
*/
#define RCU_SEQ_CTR_SHIFT 2
#define RCU_SEQ_STATE_MASK ((1 << RCU_SEQ_CTR_SHIFT) - 1)
/* Low-order bit definition for polled grace-period APIs. */
#define RCU_GET_STATE_COMPLETED 0x1
@@ -255,6 +252,11 @@ static inline void debug_rcu_head_callback(struct rcu_head *rhp)
kmem_dump_obj(rhp);
}
static inline bool rcu_barrier_cb_is_done(struct rcu_head *rhp)
{
return rhp->next == rhp;
}
extern int rcu_cpu_stall_suppress_at_boot;
static inline bool rcu_stall_is_suppressed_at_boot(void)
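The rcu_barrier_cb_is_done() helper added above depends on a simple marking convention, also used by srcu_barrier_cb() later in this series: once a barrier callback has been invoked, its ->next pointer is made to point at itself. A minimal userspace sketch of that idiom follows; the struct and helper names (barrier_head, mark_done) are made up for illustration and are not kernel APIs.

/* Self-pointing ->next as a "this callback already ran" marker.
 * Illustrative only; names here are invented for the sketch.
 */
#include <stdbool.h>
#include <stdio.h>

struct barrier_head {
	struct barrier_head *next;	/* list linkage while queued */
};

static void mark_done(struct barrier_head *bh)
{
	bh->next = bh;			/* same trick as srcu_barrier_cb() */
}

static bool barrier_cb_is_done(struct barrier_head *bh)
{
	return bh->next == bh;		/* mirrors rcu_barrier_cb_is_done() */
}

int main(void)
{
	struct barrier_head bh = { .next = NULL };

	printf("before invocation: done=%d\n", barrier_cb_is_done(&bh));
	mark_done(&bh);
	printf("after invocation:  done=%d\n", barrier_cb_is_done(&bh));
	return 0;
}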
@@ -606,7 +608,7 @@ void srcutorture_get_gp_data(struct srcu_struct *sp, int *flags,
#endif
#ifdef CONFIG_TINY_RCU
- static inline bool rcu_dynticks_zero_in_eqs(int cpu, int *vp) { return false; }
+ static inline bool rcu_watching_zero_in_eqs(int cpu, int *vp) { return false; }
static inline unsigned long rcu_get_gp_seq(void) { return 0; }
static inline unsigned long rcu_exp_batches_completed(void) { return 0; }
static inline unsigned long
@@ -619,7 +621,7 @@ static inline void rcu_fwd_progress_check(unsigned long j) { }
static inline void rcu_gp_slow_register(atomic_t *rgssp) { }
static inline void rcu_gp_slow_unregister(atomic_t *rgssp) { }
#else /* #ifdef CONFIG_TINY_RCU */
- bool rcu_dynticks_zero_in_eqs(int cpu, int *vp);
+ bool rcu_watching_zero_in_eqs(int cpu, int *vp);
unsigned long rcu_get_gp_seq(void);
unsigned long rcu_exp_batches_completed(void);
unsigned long srcu_batches_completed(struct srcu_struct *sp);
...
@@ -260,17 +260,6 @@ void rcu_segcblist_disable(struct rcu_segcblist *rsclp)
rcu_segcblist_clear_flags(rsclp, SEGCBLIST_ENABLED);
}
/*
* Mark the specified rcu_segcblist structure as offloaded (or not)
*/
void rcu_segcblist_offload(struct rcu_segcblist *rsclp, bool offload)
{
if (offload)
rcu_segcblist_set_flags(rsclp, SEGCBLIST_LOCKING | SEGCBLIST_OFFLOADED);
else
rcu_segcblist_clear_flags(rsclp, SEGCBLIST_OFFLOADED);
}
/*
* Does the specified rcu_segcblist structure contain callbacks that
* are ready to be invoked?
...
@@ -89,16 +89,7 @@ static inline bool rcu_segcblist_is_enabled(struct rcu_segcblist *rsclp)
static inline bool rcu_segcblist_is_offloaded(struct rcu_segcblist *rsclp)
{
if (IS_ENABLED(CONFIG_RCU_NOCB_CPU) &&
- rcu_segcblist_test_flags(rsclp, SEGCBLIST_LOCKING))
+ rcu_segcblist_test_flags(rsclp, SEGCBLIST_OFFLOADED))
return true;
return false;
}
static inline bool rcu_segcblist_completely_offloaded(struct rcu_segcblist *rsclp)
{
if (IS_ENABLED(CONFIG_RCU_NOCB_CPU) &&
!rcu_segcblist_test_flags(rsclp, SEGCBLIST_RCU_CORE))
return true;
return false;
...
@@ -28,6 +28,7 @@
#include <linux/rcupdate_trace.h>
#include <linux/reboot.h>
#include <linux/sched.h>
#include <linux/seq_buf.h>
#include <linux/spinlock.h>
#include <linux/smp.h>
#include <linux/stat.h>
@@ -134,7 +135,7 @@ struct ref_scale_ops {
const char *name;
};
- static struct ref_scale_ops *cur_ops;
+ static const struct ref_scale_ops *cur_ops;
static void un_delay(const int udl, const int ndl)
{
@@ -170,7 +171,7 @@ static bool rcu_sync_scale_init(void)
return true;
}
- static struct ref_scale_ops rcu_ops = {
+ static const struct ref_scale_ops rcu_ops = {
.init = rcu_sync_scale_init,
.readsection = ref_rcu_read_section,
.delaysection = ref_rcu_delay_section,
@@ -204,7 +205,7 @@ static void srcu_ref_scale_delay_section(const int nloops, const int udl, const
}
}
- static struct ref_scale_ops srcu_ops = {
+ static const struct ref_scale_ops srcu_ops = {
.init = rcu_sync_scale_init,
.readsection = srcu_ref_scale_read_section,
.delaysection = srcu_ref_scale_delay_section,
@@ -231,7 +232,7 @@ static void rcu_tasks_ref_scale_delay_section(const int nloops, const int udl, c
un_delay(udl, ndl);
}
- static struct ref_scale_ops rcu_tasks_ops = {
+ static const struct ref_scale_ops rcu_tasks_ops = {
.init = rcu_sync_scale_init,
.readsection = rcu_tasks_ref_scale_read_section,
.delaysection = rcu_tasks_ref_scale_delay_section,
@@ -270,7 +271,7 @@ static void rcu_trace_ref_scale_delay_section(const int nloops, const int udl, c
}
}
- static struct ref_scale_ops rcu_trace_ops = {
+ static const struct ref_scale_ops rcu_trace_ops = {
.init = rcu_sync_scale_init,
.readsection = rcu_trace_ref_scale_read_section,
.delaysection = rcu_trace_ref_scale_delay_section,
@@ -309,7 +310,7 @@ static void ref_refcnt_delay_section(const int nloops, const int udl, const int
}
}
- static struct ref_scale_ops refcnt_ops = {
+ static const struct ref_scale_ops refcnt_ops = {
.init = rcu_sync_scale_init,
.readsection = ref_refcnt_section,
.delaysection = ref_refcnt_delay_section,
@@ -346,7 +347,7 @@ static void ref_rwlock_delay_section(const int nloops, const int udl, const int
}
}
- static struct ref_scale_ops rwlock_ops = {
+ static const struct ref_scale_ops rwlock_ops = {
.init = ref_rwlock_init,
.readsection = ref_rwlock_section,
.delaysection = ref_rwlock_delay_section,
@@ -383,7 +384,7 @@ static void ref_rwsem_delay_section(const int nloops, const int udl, const int n
}
}
- static struct ref_scale_ops rwsem_ops = {
+ static const struct ref_scale_ops rwsem_ops = {
.init = ref_rwsem_init,
.readsection = ref_rwsem_section,
.delaysection = ref_rwsem_delay_section,
@@ -418,7 +419,7 @@ static void ref_lock_delay_section(const int nloops, const int udl, const int nd
preempt_enable();
}
- static struct ref_scale_ops lock_ops = {
+ static const struct ref_scale_ops lock_ops = {
.readsection = ref_lock_section,
.delaysection = ref_lock_delay_section,
.name = "lock"
@@ -453,7 +454,7 @@ static void ref_lock_irq_delay_section(const int nloops, const int udl, const in
preempt_enable();
}
- static struct ref_scale_ops lock_irq_ops = {
+ static const struct ref_scale_ops lock_irq_ops = {
.readsection = ref_lock_irq_section,
.delaysection = ref_lock_irq_delay_section,
.name = "lock-irq"
@@ -489,7 +490,7 @@ static void ref_acqrel_delay_section(const int nloops, const int udl, const int
preempt_enable();
}
- static struct ref_scale_ops acqrel_ops = {
+ static const struct ref_scale_ops acqrel_ops = {
.readsection = ref_acqrel_section,
.delaysection = ref_acqrel_delay_section,
.name = "acqrel"
@@ -523,7 +524,7 @@ static void ref_clock_delay_section(const int nloops, const int udl, const int n
stopopts = x;
}
- static struct ref_scale_ops clock_ops = {
+ static const struct ref_scale_ops clock_ops = {
.readsection = ref_clock_section,
.delaysection = ref_clock_delay_section,
.name = "clock"
@@ -555,7 +556,7 @@ static void ref_jiffies_delay_section(const int nloops, const int udl, const int
stopopts = x;
}
- static struct ref_scale_ops jiffies_ops = {
+ static const struct ref_scale_ops jiffies_ops = {
.readsection = ref_jiffies_section,
.delaysection = ref_jiffies_delay_section,
.name = "jiffies"
@@ -705,9 +706,9 @@ static void refscale_typesafe_ctor(void *rtsp_in)
preempt_enable();
}
- static struct ref_scale_ops typesafe_ref_ops;
+ static const struct ref_scale_ops typesafe_ref_ops;
- static struct ref_scale_ops typesafe_lock_ops;
+ static const struct ref_scale_ops typesafe_lock_ops;
- static struct ref_scale_ops typesafe_seqlock_ops;
+ static const struct ref_scale_ops typesafe_seqlock_ops;
// Initialize for a typesafe test.
static bool typesafe_init(void)
@@ -768,7 +769,7 @@ static void typesafe_cleanup(void)
}
// The typesafe_init() function distinguishes these structures by address.
- static struct ref_scale_ops typesafe_ref_ops = {
+ static const struct ref_scale_ops typesafe_ref_ops = {
.init = typesafe_init,
.cleanup = typesafe_cleanup,
.readsection = typesafe_read_section,
@@ -776,7 +777,7 @@ static struct ref_scale_ops typesafe_ref_ops = {
.name = "typesafe_ref"
};
- static struct ref_scale_ops typesafe_lock_ops = {
+ static const struct ref_scale_ops typesafe_lock_ops = {
.init = typesafe_init,
.cleanup = typesafe_cleanup,
.readsection = typesafe_read_section,
@@ -784,7 +785,7 @@ static struct ref_scale_ops typesafe_lock_ops = {
.name = "typesafe_lock"
};
- static struct ref_scale_ops typesafe_seqlock_ops = {
+ static const struct ref_scale_ops typesafe_seqlock_ops = {
.init = typesafe_init,
.cleanup = typesafe_cleanup,
.readsection = typesafe_read_section,
@@ -891,32 +892,34 @@ static u64 process_durations(int n)
{
int i;
struct reader_task *rt;
- char buf1[64];
+ struct seq_buf s;
char *buf;
u64 sum = 0;
buf = kmalloc(800 + 64, GFP_KERNEL);
if (!buf)
return 0;
- buf[0] = 0;
+ seq_buf_init(&s, buf, 800 + 64);
- sprintf(buf, "Experiment #%d (Format: <THREAD-NUM>:<Total loop time in ns>)",
+ seq_buf_printf(&s, "Experiment #%d (Format: <THREAD-NUM>:<Total loop time in ns>)",
exp_idx);
for (i = 0; i < n && !torture_must_stop(); i++) {
rt = &(reader_tasks[i]);
- sprintf(buf1, "%d: %llu\t", i, rt->last_duration_ns);
if (i % 5 == 0)
- strcat(buf, "\n");
+ seq_buf_putc(&s, '\n');
- if (strlen(buf) >= 800) {
- pr_alert("%s", buf);
+ if (seq_buf_used(&s) >= 800) {
- buf[0] = 0;
+ pr_alert("%s", seq_buf_str(&s));
+ seq_buf_clear(&s);
}
- strcat(buf, buf1);
+ seq_buf_printf(&s, "%d: %llu\t", i, rt->last_duration_ns);
sum += rt->last_duration_ns;
}
- pr_alert("%s\n", buf);
+ pr_alert("%s\n", seq_buf_str(&s));
kfree(buf);
return sum;
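The process_durations() rework above swaps open-coded sprintf()/strcat()/strlen() bookkeeping for the kernel's seq_buf API, which tracks how much of the buffer is used and never overruns it. A rough userspace analogue of the same bounded-append-and-flush pattern is sketched below; it uses plain vsnprintf() rather than the real seq_buf implementation, and the 800-byte flush threshold simply mirrors the value used above.

/* Userspace analogue of the bounded-append pattern, not the kernel
 * seq_buf API: append at a tracked offset, flush when nearly full.
 */
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

#define BUF_SZ (800 + 64)
#define FLUSH_AT 800

static char buf[BUF_SZ];
static size_t used;

static void buf_printf(const char *fmt, ...)
{
	va_list ap;
	int n;

	va_start(ap, fmt);
	n = vsnprintf(buf + used, BUF_SZ - used, fmt, ap);
	va_end(ap);
	if (n > 0)
		used += (size_t)n < BUF_SZ - used ? (size_t)n : BUF_SZ - used - 1;
}

static void buf_flush_if_full(void)
{
	if (used >= FLUSH_AT) {
		printf("%s\n", buf);	/* stand-in for pr_alert() */
		used = 0;
		buf[0] = '\0';
	}
}

int main(void)
{
	int i;

	buf_printf("Experiment #%d (Format: <THREAD-NUM>:<Total loop time in ns>)", 1);
	for (i = 0; i < 100; i++) {
		if (i % 5 == 0)
			buf_printf("\n");
		buf_flush_if_full();
		buf_printf("%d: %llu\t", i, 123456789ULL);
	}
	printf("%s\n", buf);
	return 0;
}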
@@ -1023,7 +1026,7 @@ static int main_func(void *arg)
}
static void
- ref_scale_print_module_parms(struct ref_scale_ops *cur_ops, const char *tag)
+ ref_scale_print_module_parms(const struct ref_scale_ops *cur_ops, const char *tag)
{
pr_alert("%s" SCALE_FLAG
"--- %s: verbose=%d verbose_batched=%d shutdown=%d holdoff=%d lookup_instances=%ld loops=%ld nreaders=%d nruns=%d readdelay=%d\n", scale_type, tag,
@@ -1078,7 +1081,7 @@ ref_scale_init(void)
{
long i;
int firsterr = 0;
- static struct ref_scale_ops *scale_ops[] = {
+ static const struct ref_scale_ops *scale_ops[] = {
&rcu_ops, &srcu_ops, RCU_TRACE_OPS RCU_TASKS_OPS &refcnt_ops, &rwlock_ops,
&rwsem_ops, &lock_ops, &lock_irq_ops, &acqrel_ops, &clock_ops, &jiffies_ops,
&typesafe_ref_ops, &typesafe_lock_ops, &typesafe_seqlock_ops,
...
@@ -137,6 +137,7 @@ static void init_srcu_struct_data(struct srcu_struct *ssp)
sdp->srcu_cblist_invoking = false;
sdp->srcu_gp_seq_needed = ssp->srcu_sup->srcu_gp_seq;
sdp->srcu_gp_seq_needed_exp = ssp->srcu_sup->srcu_gp_seq;
sdp->srcu_barrier_head.next = &sdp->srcu_barrier_head;
sdp->mynode = NULL;
sdp->cpu = cpu;
INIT_WORK(&sdp->work, srcu_invoke_callbacks);
@@ -247,7 +248,7 @@ static int init_srcu_struct_fields(struct srcu_struct *ssp, bool is_static)
mutex_init(&ssp->srcu_sup->srcu_cb_mutex);
mutex_init(&ssp->srcu_sup->srcu_gp_mutex);
ssp->srcu_idx = 0;
- ssp->srcu_sup->srcu_gp_seq = 0;
+ ssp->srcu_sup->srcu_gp_seq = SRCU_GP_SEQ_INITIAL_VAL;
ssp->srcu_sup->srcu_barrier_seq = 0;
mutex_init(&ssp->srcu_sup->srcu_barrier_mutex);
atomic_set(&ssp->srcu_sup->srcu_barrier_cpu_cnt, 0);
@@ -258,7 +259,7 @@ static int init_srcu_struct_fields(struct srcu_struct *ssp, bool is_static)
if (!ssp->sda)
goto err_free_sup;
init_srcu_struct_data(ssp);
- ssp->srcu_sup->srcu_gp_seq_needed_exp = 0;
+ ssp->srcu_sup->srcu_gp_seq_needed_exp = SRCU_GP_SEQ_INITIAL_VAL;
ssp->srcu_sup->srcu_last_gp_end = ktime_get_mono_fast_ns();
if (READ_ONCE(ssp->srcu_sup->srcu_size_state) == SRCU_SIZE_SMALL && SRCU_SIZING_IS_INIT()) {
if (!init_srcu_struct_nodes(ssp, GFP_ATOMIC))
@@ -266,7 +267,8 @@ static int init_srcu_struct_fields(struct srcu_struct *ssp, bool is_static)
WRITE_ONCE(ssp->srcu_sup->srcu_size_state, SRCU_SIZE_BIG);
}
ssp->srcu_sup->srcu_ssp = ssp;
- smp_store_release(&ssp->srcu_sup->srcu_gp_seq_needed, 0); /* Init done. */
+ smp_store_release(&ssp->srcu_sup->srcu_gp_seq_needed,
SRCU_GP_SEQ_INITIAL_VAL); /* Init done. */
return 0;
err_free_sda:
@@ -628,6 +630,7 @@ static unsigned long srcu_get_delay(struct srcu_struct *ssp)
if (time_after(j, gpstart))
jbase += j - gpstart;
if (!jbase) {
ASSERT_EXCLUSIVE_WRITER(sup->srcu_n_exp_nodelay);
WRITE_ONCE(sup->srcu_n_exp_nodelay, READ_ONCE(sup->srcu_n_exp_nodelay) + 1);
if (READ_ONCE(sup->srcu_n_exp_nodelay) > srcu_max_nodelay_phase)
jbase = 1;
@@ -1560,6 +1563,7 @@ static void srcu_barrier_cb(struct rcu_head *rhp)
struct srcu_data *sdp;
struct srcu_struct *ssp;
rhp->next = rhp; // Mark the callback as having been invoked.
sdp = container_of(rhp, struct srcu_data, srcu_barrier_head);
ssp = sdp->ssp;
if (atomic_dec_and_test(&ssp->srcu_sup->srcu_barrier_cpu_cnt))
@@ -1818,6 +1822,7 @@ static void process_srcu(struct work_struct *work)
} else {
j = jiffies;
if (READ_ONCE(sup->reschedule_jiffies) == j) {
ASSERT_EXCLUSIVE_WRITER(sup->reschedule_count);
WRITE_ONCE(sup->reschedule_count, READ_ONCE(sup->reschedule_count) + 1);
if (READ_ONCE(sup->reschedule_count) > srcu_max_nodelay)
curdelay = 1;
...
@@ -206,7 +206,7 @@ struct rcu_data {
long blimit; /* Upper limit on a processed batch */
/* 3) dynticks interface. */
- int dynticks_snap; /* Per-GP tracking for dynticks. */
+ int watching_snap; /* Per-GP tracking for dynticks. */
bool rcu_need_heavy_qs; /* GP old, so heavy quiescent state! */
bool rcu_urgent_qs; /* GP old need light quiescent state. */
bool rcu_forced_tick; /* Forced tick to provide QS. */
@@ -215,7 +215,7 @@ struct rcu_data {
/* 4) rcu_barrier(), OOM callbacks, and expediting. */
unsigned long barrier_seq_snap; /* Snap of rcu_state.barrier_sequence. */
struct rcu_head barrier_head;
- int exp_dynticks_snap; /* Double-check need for IPI. */
+ int exp_watching_snap; /* Double-check need for IPI. */
/* 5) Callback offloading. */
#ifdef CONFIG_RCU_NOCB_CPU
@@ -411,7 +411,6 @@ struct rcu_state {
arch_spinlock_t ofl_lock ____cacheline_internodealigned_in_smp;
/* Synchronize offline with */
/* GP pre-initialization. */
int nocb_is_setup; /* nocb is setup from boot */
/* synchronize_rcu() part. */
struct llist_head srs_next; /* request a GP users. */
@@ -420,6 +419,11 @@ struct rcu_state {
struct sr_wait_node srs_wait_nodes[SR_NORMAL_GP_WAIT_HEAD_MAX];
struct work_struct srs_cleanup_work;
atomic_t srs_cleanups_pending; /* srs inflight worker cleanups. */
#ifdef CONFIG_RCU_NOCB_CPU
struct mutex nocb_mutex; /* Guards (de-)offloading */
int nocb_is_setup; /* nocb is setup from boot */
#endif
};
/* Values for rcu_state structure's gp_flags field. */
...
@@ -377,11 +377,11 @@ static void __sync_rcu_exp_select_node_cpus(struct rcu_exp_work *rewp)
* post grace period updater's accesses is enforced by the
* below acquire semantic.
*/
- snap = ct_dynticks_cpu_acquire(cpu);
+ snap = ct_rcu_watching_cpu_acquire(cpu);
- if (rcu_dynticks_in_eqs(snap))
+ if (rcu_watching_snap_in_eqs(snap))
mask_ofl_test |= mask;
else
- rdp->exp_dynticks_snap = snap;
+ rdp->exp_watching_snap = snap;
}
}
mask_ofl_ipi = rnp->expmask & ~mask_ofl_test;
@@ -401,7 +401,7 @@ static void __sync_rcu_exp_select_node_cpus(struct rcu_exp_work *rewp)
unsigned long mask = rdp->grpmask;
retry_ipi:
- if (rcu_dynticks_in_eqs_since(rdp, rdp->exp_dynticks_snap)) {
+ if (rcu_watching_snap_stopped_since(rdp, rdp->exp_watching_snap)) {
mask_ofl_test |= mask;
continue;
}
@@ -544,61 +544,21 @@ static bool synchronize_rcu_expedited_wait_once(long tlimit)
}
/*
- * Wait for the expedited grace period to elapse, issuing any needed
+ * Print out an expedited RCU CPU stall warning message.
* RCU CPU stall warnings along the way.
*/
- static void synchronize_rcu_expedited_wait(void)
+ static void synchronize_rcu_expedited_stall(unsigned long jiffies_start, unsigned long j)
{
int cpu;
unsigned long j;
unsigned long jiffies_stall;
unsigned long jiffies_start;
unsigned long mask;
int ndetected;
struct rcu_data *rdp;
struct rcu_node *rnp;
struct rcu_node *rnp_root = rcu_get_root();
unsigned long flags;
- trace_rcu_exp_grace_period(rcu_state.name, rcu_exp_gp_seq_endval(), TPS("startwait"));
+ if (READ_ONCE(csd_lock_suppress_rcu_stall) && csd_lock_is_stuck()) {
- jiffies_stall = rcu_exp_jiffies_till_stall_check();
+ pr_err("INFO: %s detected expedited stalls, but suppressed full report due to a stuck CSD-lock.\n", rcu_state.name);
jiffies_start = jiffies;
if (tick_nohz_full_enabled() && rcu_inkernel_boot_has_ended()) {
if (synchronize_rcu_expedited_wait_once(1))
return;
rcu_for_each_leaf_node(rnp) {
raw_spin_lock_irqsave_rcu_node(rnp, flags);
mask = READ_ONCE(rnp->expmask);
for_each_leaf_node_cpu_mask(rnp, cpu, mask) {
rdp = per_cpu_ptr(&rcu_data, cpu);
if (rdp->rcu_forced_tick_exp)
continue;
rdp->rcu_forced_tick_exp = true;
if (cpu_online(cpu))
tick_dep_set_cpu(cpu, TICK_DEP_BIT_RCU_EXP);
}
raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
}
- j = READ_ONCE(jiffies_till_first_fqs);
+ pr_err("INFO: %s detected expedited stalls on CPUs/tasks: {", rcu_state.name);
if (synchronize_rcu_expedited_wait_once(j + HZ))
return;
}
for (;;) {
unsigned long j;
if (synchronize_rcu_expedited_wait_once(jiffies_stall))
return;
if (rcu_stall_is_suppressed())
continue;
nbcon_cpu_emergency_enter();
j = jiffies;
rcu_stall_notifier_call_chain(RCU_STALL_NOTIFY_EXP, (void *)(j - jiffies_start));
trace_rcu_stall_warning(rcu_state.name, TPS("ExpeditedStall"));
pr_err("INFO: %s detected expedited stalls on CPUs/tasks: {",
rcu_state.name);
ndetected = 0;
rcu_for_each_leaf_node(rnp) {
ndetected += rcu_print_task_exp_stall(rnp);
@@ -618,8 +578,7 @@ static void synchronize_rcu_expedited_wait(void)
}
}
pr_cont(" } %lu jiffies s: %lu root: %#lx/%c\n",
- j - jiffies_start, rcu_state.expedited_sequence,
+ j - jiffies_start, rcu_state.expedited_sequence, data_race(rnp_root->expmask),
data_race(rnp_root->expmask),
".T"[!!data_race(rnp_root->exp_tasks)]);
if (ndetected) {
pr_err("blocking rcu_node structures (internal RCU debug):");
@@ -629,8 +588,7 @@ static void synchronize_rcu_expedited_wait(void)
if (sync_rcu_exp_done_unlocked(rnp))
continue;
pr_cont(" l=%u:%d-%d:%#lx/%c",
- rnp->level, rnp->grplo, rnp->grphi,
+ rnp->level, rnp->grplo, rnp->grphi, data_race(rnp->expmask),
data_race(rnp->expmask),
".T"[!!data_race(rnp->exp_tasks)]);
}
pr_cont("\n");
@@ -640,12 +598,65 @@
mask = leaf_node_cpu_bit(rnp, cpu);
if (!(READ_ONCE(rnp->expmask) & mask))
continue;
preempt_disable(); // For smp_processor_id() in dump_cpu_task().
dump_cpu_task(cpu);
preempt_enable();
}
rcu_exp_print_detail_task_stall_rnp(rnp);
}
}
/*
* Wait for the expedited grace period to elapse, issuing any needed
* RCU CPU stall warnings along the way.
*/
static void synchronize_rcu_expedited_wait(void)
{
int cpu;
unsigned long j;
unsigned long jiffies_stall;
unsigned long jiffies_start;
unsigned long mask;
struct rcu_data *rdp;
struct rcu_node *rnp;
unsigned long flags;
trace_rcu_exp_grace_period(rcu_state.name, rcu_exp_gp_seq_endval(), TPS("startwait"));
jiffies_stall = rcu_exp_jiffies_till_stall_check();
jiffies_start = jiffies;
if (tick_nohz_full_enabled() && rcu_inkernel_boot_has_ended()) {
if (synchronize_rcu_expedited_wait_once(1))
return;
rcu_for_each_leaf_node(rnp) {
raw_spin_lock_irqsave_rcu_node(rnp, flags);
mask = READ_ONCE(rnp->expmask);
for_each_leaf_node_cpu_mask(rnp, cpu, mask) {
rdp = per_cpu_ptr(&rcu_data, cpu);
if (rdp->rcu_forced_tick_exp)
continue;
rdp->rcu_forced_tick_exp = true;
if (cpu_online(cpu))
tick_dep_set_cpu(cpu, TICK_DEP_BIT_RCU_EXP);
}
raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
}
j = READ_ONCE(jiffies_till_first_fqs);
if (synchronize_rcu_expedited_wait_once(j + HZ))
return;
}
for (;;) {
unsigned long j;
if (synchronize_rcu_expedited_wait_once(jiffies_stall))
return;
if (rcu_stall_is_suppressed())
continue;
nbcon_cpu_emergency_enter();
j = jiffies;
rcu_stall_notifier_call_chain(RCU_STALL_NOTIFY_EXP, (void *)(j - jiffies_start));
trace_rcu_stall_warning(rcu_state.name, TPS("ExpeditedStall"));
synchronize_rcu_expedited_stall(jiffies_start, j);
jiffies_stall = 3 * rcu_exp_jiffies_till_stall_check() + 3;
nbcon_cpu_emergency_exit();
...
@@ -24,10 +24,11 @@ static bool rcu_rdp_is_offloaded(struct rcu_data *rdp)
* timers have their own means of synchronization against the
* offloaded state updaters.
*/
- RCU_LOCKDEP_WARN(
+ RCU_NOCB_LOCKDEP_WARN(
!(lockdep_is_held(&rcu_state.barrier_mutex) ||
(IS_ENABLED(CONFIG_HOTPLUG_CPU) && lockdep_is_cpus_held()) ||
- rcu_lockdep_is_held_nocb(rdp) ||
+ lockdep_is_held(&rdp->nocb_lock) ||
lockdep_is_held(&rcu_state.nocb_mutex) ||
(!(IS_ENABLED(CONFIG_PREEMPT_COUNT) && preemptible()) &&
rdp == this_cpu_ptr(&rcu_data)) ||
rcu_current_is_nocb_kthread(rdp)),
@@ -869,7 +870,7 @@ static void rcu_qs(void)
/*
* Register an urgently needed quiescent state. If there is an
- * emergency, invoke rcu_momentary_dyntick_idle() to do a heavy-weight
+ * emergency, invoke rcu_momentary_eqs() to do a heavy-weight
* dyntick-idle quiescent state visible to other CPUs, which will in
* some cases serve for expedited as well as normal grace periods.
* Either way, register a lightweight quiescent state.
@@ -889,7 +890,7 @@ void rcu_all_qs(void)
this_cpu_write(rcu_data.rcu_urgent_qs, false);
if (unlikely(raw_cpu_read(rcu_data.rcu_need_heavy_qs))) {
local_irq_save(flags);
- rcu_momentary_dyntick_idle();
+ rcu_momentary_eqs();
local_irq_restore(flags);
}
rcu_qs();
@@ -909,7 +910,7 @@ void rcu_note_context_switch(bool preempt)
goto out;
this_cpu_write(rcu_data.rcu_urgent_qs, false);
if (unlikely(raw_cpu_read(rcu_data.rcu_need_heavy_qs)))
- rcu_momentary_dyntick_idle();
+ rcu_momentary_eqs();
out:
rcu_tasks_qs(current, preempt);
trace_rcu_utilization(TPS("End context switch"));
...
@@ -10,6 +10,7 @@
#include <linux/console.h>
#include <linux/kvm_para.h>
#include <linux/rcu_notifier.h>
#include <linux/smp.h>
//////////////////////////////////////////////////////////////////////////////
//
@@ -371,6 +372,7 @@ static void rcu_dump_cpu_stacks(void)
struct rcu_node *rnp;
rcu_for_each_leaf_node(rnp) {
printk_deferred_enter();
raw_spin_lock_irqsave_rcu_node(rnp, flags);
for_each_leaf_node_possible_cpu(rnp, cpu)
if (rnp->qsmask & leaf_node_cpu_bit(rnp, cpu)) {
@@ -380,6 +382,7 @@ static void rcu_dump_cpu_stacks(void)
dump_cpu_task(cpu);
}
raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
printk_deferred_exit();
}
}
@@ -502,7 +505,7 @@ static void print_cpu_stall_info(int cpu)
}
delta = rcu_seq_ctr(rdp->mynode->gp_seq - rdp->rcu_iw_gp_seq);
falsepositive = rcu_is_gp_kthread_starving(NULL) &&
- rcu_dynticks_in_eqs(ct_dynticks_cpu(cpu));
+ rcu_watching_snap_in_eqs(ct_rcu_watching_cpu(cpu));
rcuc_starved = rcu_is_rcuc_kthread_starving(rdp, &j);
if (rcuc_starved)
// Print signed value, as negative values indicate a probable bug.
@@ -516,8 +519,8 @@ static void print_cpu_stall_info(int cpu)
rdp->rcu_iw_pending ? (int)min(delta, 9UL) + '0' :
"!."[!delta],
ticks_value, ticks_title,
- ct_dynticks_cpu(cpu) & 0xffff,
+ ct_rcu_watching_cpu(cpu) & 0xffff,
- ct_dynticks_nesting_cpu(cpu), ct_dynticks_nmi_nesting_cpu(cpu),
+ ct_nesting_cpu(cpu), ct_nmi_nesting_cpu(cpu),
rdp->softirq_snap, kstat_softirqs_cpu(RCU_SOFTIRQ, cpu),
data_race(rcu_state.n_force_qs) - rcu_state.n_force_qs_gpstart,
rcuc_starved ? buf : "",
@@ -728,6 +731,9 @@ static void print_cpu_stall(unsigned long gps)
set_preempt_need_resched();
}
static bool csd_lock_suppress_rcu_stall;
module_param(csd_lock_suppress_rcu_stall, bool, 0644);
static void check_cpu_stall(struct rcu_data *rdp)
{
bool self_detected;
@@ -800,7 +806,9 @@ static void check_cpu_stall(struct rcu_data *rdp)
return;
rcu_stall_notifier_call_chain(RCU_STALL_NOTIFY_NORM, (void *)j - gps);
- if (self_detected) {
+ if (READ_ONCE(csd_lock_suppress_rcu_stall) && csd_lock_is_stuck()) {
pr_err("INFO: %s detected stall, but suppressed full report due to a stuck CSD-lock.\n", rcu_state.name);
} else if (self_detected) {
/* We haven't checked in, so go dump stack. */
print_cpu_stall(gps);
} else {
...
@@ -5762,7 +5762,7 @@ static inline void schedule_debug(struct task_struct *prev, bool preempt)
preempt_count_set(PREEMPT_DISABLED);
}
rcu_sleep_check();
- SCHED_WARN_ON(ct_state() == CONTEXT_USER);
+ SCHED_WARN_ON(ct_state() == CT_STATE_USER);
profile_hit(SCHED_PROFILING, __builtin_return_address(0));
@@ -6658,7 +6658,7 @@ asmlinkage __visible void __sched schedule_user(void)
* we find a better solution.
*
* NB: There are buggy callers of this function. Ideally we
- * should warn if prev_state != CONTEXT_USER, but that will trigger
+ * should warn if prev_state != CT_STATE_USER, but that will trigger
* too frequently to make sense yet.
*/
enum ctx_state prev_state = exception_enter();
@@ -9752,7 +9752,7 @@ struct cgroup_subsys cpu_cgrp_subsys = {
void dump_cpu_task(int cpu)
{
- if (cpu == smp_processor_id() && in_hardirq()) {
+ if (in_hardirq() && cpu == smp_processor_id()) {
struct pt_regs *regs;
regs = get_irq_regs();
...
@@ -208,12 +208,25 @@ static int csd_lock_wait_getcpu(call_single_data_t *csd)
return -1;
}
static atomic_t n_csd_lock_stuck;
/**
* csd_lock_is_stuck - Has a CSD-lock acquisition been stuck too long?
*
* Returns @true if a CSD-lock acquisition is stuck and has been stuck
* long enough for a "non-responsive CSD lock" message to be printed.
*/
bool csd_lock_is_stuck(void)
{
return !!atomic_read(&n_csd_lock_stuck);
}
/*
* Complain if too much time spent waiting. Note that only
* the CSD_TYPE_SYNC/ASYNC types provide the destination CPU,
* so waiting on other types gets much less information.
*/
- static bool csd_lock_wait_toolong(call_single_data_t *csd, u64 ts0, u64 *ts1, int *bug_id)
+ static bool csd_lock_wait_toolong(call_single_data_t *csd, u64 ts0, u64 *ts1, int *bug_id, unsigned long *nmessages)
{
int cpu = -1;
int cpux;
@@ -229,14 +242,25 @@ static bool csd_lock_wait_toolong(call_single_data_t *csd, u64 ts0, u64 *ts1, in
cpu = csd_lock_wait_getcpu(csd);
pr_alert("csd: CSD lock (#%d) got unstuck on CPU#%02d, CPU#%02d released the lock.\n",
*bug_id, raw_smp_processor_id(), cpu);
atomic_dec(&n_csd_lock_stuck);
return true;
}
ts2 = sched_clock();
/* How long since we last checked for a stuck CSD lock.*/
ts_delta = ts2 - *ts1;
- if (likely(ts_delta <= csd_lock_timeout_ns || csd_lock_timeout_ns == 0))
+ if (likely(ts_delta <= csd_lock_timeout_ns * (*nmessages + 1) *
(!*nmessages ? 1 : (ilog2(num_online_cpus()) / 2 + 1)) ||
csd_lock_timeout_ns == 0))
return false;
if (ts0 > ts2) {
/* Our own sched_clock went backward; don't blame another CPU. */
ts_delta = ts0 - ts2;
pr_alert("sched_clock on CPU %d went backward by %llu ns\n", raw_smp_processor_id(), ts_delta);
*ts1 = ts2;
return false;
}
firsttime = !*bug_id;
if (firsttime)
@@ -249,9 +273,12 @@ static bool csd_lock_wait_toolong(call_single_data_t *csd, u64 ts0, u64 *ts1, in
cpu_cur_csd = smp_load_acquire(&per_cpu(cur_csd, cpux)); /* Before func and info. */
/* How long since this CSD lock was stuck. */
ts_delta = ts2 - ts0;
- pr_alert("csd: %s non-responsive CSD lock (#%d) on CPU#%d, waiting %llu ns for CPU#%02d %pS(%ps).\n",
+ pr_alert("csd: %s non-responsive CSD lock (#%d) on CPU#%d, waiting %lld ns for CPU#%02d %pS(%ps).\n",
- firsttime ? "Detected" : "Continued", *bug_id, raw_smp_processor_id(), ts_delta,
+ firsttime ? "Detected" : "Continued", *bug_id, raw_smp_processor_id(), (s64)ts_delta,
cpu, csd->func, csd->info);
(*nmessages)++;
if (firsttime)
atomic_inc(&n_csd_lock_stuck);
/*
* If the CSD lock is still stuck after 5 minutes, it is unlikely
* to become unstuck. Use a signed comparison to avoid triggering
@@ -290,12 +317,13 @@ static bool csd_lock_wait_toolong(call_single_data_t *csd, u64 ts0, u64 *ts1, in
*/
static void __csd_lock_wait(call_single_data_t *csd)
{
unsigned long nmessages = 0;
int bug_id = 0;
u64 ts0, ts1;
ts1 = ts0 = sched_clock();
for (;;) {
- if (csd_lock_wait_toolong(csd, ts0, &ts1, &bug_id))
+ if (csd_lock_wait_toolong(csd, ts0, &ts1, &bug_id, &nmessages))
break;
cpu_relax();
}
...
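The reworked timeout test in csd_lock_wait_toolong() stretches the interval before the next "non-responsive CSD lock" complaint as more complaints are printed and as the number of online CPUs grows. The standalone sketch below just evaluates that expression for a few message counts; the 5-second csd_lock_timeout default and the 64-CPU count are assumptions chosen for illustration.

/* Userspace sketch of the complaint-interval arithmetic above:
 *   limit = csd_lock_timeout_ns * (nmessages + 1) *
 *           (nmessages ? ilog2(ncpus) / 2 + 1 : 1)
 * Timeout and CPU count below are assumed values, not kernel defaults
 * read from a running system.
 */
#include <stdio.h>

static int ilog2_u(unsigned int v)	/* floor(log2(v)), v > 0 */
{
	int r = -1;

	while (v) {
		v >>= 1;
		r++;
	}
	return r;
}

int main(void)
{
	const unsigned long long timeout_ns = 5000ULL * 1000 * 1000; /* assumed 5 s */
	const unsigned int ncpus = 64;                               /* assumed */
	unsigned long nmessages;

	for (nmessages = 0; nmessages < 5; nmessages++) {
		unsigned long long limit = timeout_ns * (nmessages + 1) *
			(nmessages ? (unsigned long long)(ilog2_u(ncpus) / 2 + 1) : 1);
		printf("complaint %lu allowed once the wait exceeds %llu s\n",
		       nmessages, limit / 1000000000ULL);
	}
	return 0;
}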
@@ -251,7 +251,7 @@ static int multi_cpu_stop(void *data)
*/
touch_nmi_watchdog();
}
- rcu_momentary_dyntick_idle();
+ rcu_momentary_eqs();
} while (curstate != MULTI_STOP_EXIT);
local_irq_restore(flags);
...
@@ -1541,7 +1541,7 @@ static int run_osnoise(void)
* This will eventually cause unwarranted noise as PREEMPT_RCU
* will force preemption as the means of ending the current
* grace period. We avoid this problem by calling
- * rcu_momentary_dyntick_idle(), which performs a zero duration
+ * rcu_momentary_eqs(), which performs a zero duration
* EQS allowing PREEMPT_RCU to end the current grace period.
* This call shouldn't be wrapped inside an RCU critical
* section.
@@ -1553,7 +1553,7 @@ static int run_osnoise(void)
if (!disable_irq)
local_irq_disable();
- rcu_momentary_dyntick_idle();
+ rcu_momentary_eqs();
if (!disable_irq)
local_irq_enable();
...
@@ -1614,6 +1614,7 @@ config SCF_TORTURE_TEST
config CSD_LOCK_WAIT_DEBUG
bool "Debugging for csd_lock_wait(), called from smp_call_function*()"
depends on DEBUG_KERNEL
depends on SMP
depends on 64BIT
default n
help
...
@@ -21,12 +21,10 @@ fi
bpftrace -e 'kprobe:kvfree_call_rcu,
kprobe:call_rcu,
kprobe:call_rcu_tasks,
kprobe:call_rcu_tasks_rude,
kprobe:call_rcu_tasks_trace,
kprobe:call_srcu,
kprobe:rcu_barrier,
kprobe:rcu_barrier_tasks,
kprobe:rcu_barrier_tasks_rude,
kprobe:rcu_barrier_tasks_trace,
kprobe:srcu_barrier,
kprobe:synchronize_rcu,
...
@@ -68,6 +68,8 @@ config_override_param "--gdb options" KcList "$TORTURE_KCONFIG_GDB_ARG"
config_override_param "--kasan options" KcList "$TORTURE_KCONFIG_KASAN_ARG"
config_override_param "--kcsan options" KcList "$TORTURE_KCONFIG_KCSAN_ARG"
config_override_param "--kconfig argument" KcList "$TORTURE_KCONFIG_ARG"
config_override_param "$config_dir/CFcommon.$(uname -m)" KcList \
"`cat $config_dir/CFcommon.$(uname -m) 2> /dev/null`"
cp $T/KcList $resdir/ConfigFragment
base_resdir=`echo $resdir | sed -e 's/\.[0-9]\+$//'`
...
@@ -19,10 +19,10 @@ PATH=${RCUTORTURE}/bin:$PATH; export PATH
TORTURE_ALLOTED_CPUS="`identify_qemu_vcpus`"
MAKE_ALLOTED_CPUS=$((TORTURE_ALLOTED_CPUS*2))
- HALF_ALLOTED_CPUS=$((TORTURE_ALLOTED_CPUS/2))
+ SCALE_ALLOTED_CPUS=$((TORTURE_ALLOTED_CPUS/2))
- if test "$HALF_ALLOTED_CPUS" -lt 1
+ if test "$SCALE_ALLOTED_CPUS" -lt 1
then
- HALF_ALLOTED_CPUS=1
+ SCALE_ALLOTED_CPUS=1
fi
VERBOSE_BATCH_CPUS=$((TORTURE_ALLOTED_CPUS/16))
if test "$VERBOSE_BATCH_CPUS" -lt 2
@@ -90,6 +90,7 @@ usage () {
echo " --do-scftorture / --do-no-scftorture / --no-scftorture"
echo " --do-srcu-lockdep / --do-no-srcu-lockdep / --no-srcu-lockdep"
echo " --duration [ <minutes> | <hours>h | <days>d ]"
echo " --guest-cpu-limit N"
echo " --kcsan-kmake-arg kernel-make-arguments"
exit 1
}
@@ -203,6 +204,21 @@ do
duration_base=$(($ts*mult))
shift
;;
--guest-cpu-limit|--guest-cpu-lim)
checkarg --guest-cpu-limit "(number)" "$#" "$2" '^[0-9]*$' '^--'
if (("$2" <= "$TORTURE_ALLOTED_CPUS" / 2))
then
SCALE_ALLOTED_CPUS="$2"
VERBOSE_BATCH_CPUS="$((SCALE_ALLOTED_CPUS/8))"
if (("$VERBOSE_BATCH_CPUS" < 2))
then
VERBOSE_BATCH_CPUS=0
fi
else
echo "Ignoring value of $2 for --guest-cpu-limit which is greater than (("$TORTURE_ALLOTED_CPUS" / 2))."
fi
shift
;;
--kcsan-kmake-arg|--kcsan-kmake-args)
checkarg --kcsan-kmake-arg "(kernel make arguments)" $# "$2" '.*' '^error$'
kcsan_kmake_args="`echo "$kcsan_kmake_args $2" | sed -e 's/^ *//' -e 's/ *$//'`"
@@ -425,9 +441,9 @@ fi
if test "$do_scftorture" = "yes"
then
# Scale memory based on the number of CPUs.
- scfmem=$((3+HALF_ALLOTED_CPUS/16))
+ scfmem=$((3+SCALE_ALLOTED_CPUS/16))
- torture_bootargs="scftorture.nthreads=$HALF_ALLOTED_CPUS torture.disable_onoff_at_boot csdlock_debug=1"
+ torture_bootargs="scftorture.nthreads=$SCALE_ALLOTED_CPUS torture.disable_onoff_at_boot csdlock_debug=1"
- torture_set "scftorture" tools/testing/selftests/rcutorture/bin/kvm.sh --torture scf --allcpus --duration "$duration_scftorture" --configs "$configs_scftorture" --kconfig "CONFIG_NR_CPUS=$HALF_ALLOTED_CPUS" --memory ${scfmem}G --trust-make
+ torture_set "scftorture" tools/testing/selftests/rcutorture/bin/kvm.sh --torture scf --allcpus --duration "$duration_scftorture" --configs "$configs_scftorture" --kconfig "CONFIG_NR_CPUS=$SCALE_ALLOTED_CPUS" --memory ${scfmem}G --trust-make
fi
if test "$do_rt" = "yes"
@@ -471,8 +487,8 @@ for prim in $primlist
do
if test -n "$firsttime"
then
- torture_bootargs="refscale.scale_type="$prim" refscale.nreaders=$HALF_ALLOTED_CPUS refscale.loops=10000 refscale.holdoff=20 torture.disable_onoff_at_boot"
+ torture_bootargs="refscale.scale_type="$prim" refscale.nreaders=$SCALE_ALLOTED_CPUS refscale.loops=10000 refscale.holdoff=20 torture.disable_onoff_at_boot"
- torture_set "refscale-$prim" tools/testing/selftests/rcutorture/bin/kvm.sh --torture refscale --allcpus --duration 5 --kconfig "CONFIG_TASKS_TRACE_RCU=y CONFIG_NR_CPUS=$HALF_ALLOTED_CPUS" --bootargs "refscale.verbose_batched=$VERBOSE_BATCH_CPUS torture.verbose_sleep_frequency=8 torture.verbose_sleep_duration=$VERBOSE_BATCH_CPUS" --trust-make
+ torture_set "refscale-$prim" tools/testing/selftests/rcutorture/bin/kvm.sh --torture refscale --allcpus --duration 5 --kconfig "CONFIG_TASKS_TRACE_RCU=y CONFIG_NR_CPUS=$SCALE_ALLOTED_CPUS" --bootargs "refscale.verbose_batched=$VERBOSE_BATCH_CPUS torture.verbose_sleep_frequency=8 torture.verbose_sleep_duration=$VERBOSE_BATCH_CPUS" --trust-make
mv $T/last-resdir-nodebug $T/first-resdir-nodebug || :
if test -f "$T/last-resdir-kasan"
then
@@ -520,8 +536,8 @@ for prim in $primlist
do
if test -n "$firsttime"
then
- torture_bootargs="rcuscale.scale_type="$prim" rcuscale.nwriters=$HALF_ALLOTED_CPUS rcuscale.holdoff=20 torture.disable_onoff_at_boot"
+ torture_bootargs="rcuscale.scale_type="$prim" rcuscale.nwriters=$SCALE_ALLOTED_CPUS rcuscale.holdoff=20 torture.disable_onoff_at_boot"
- torture_set "rcuscale-$prim" tools/testing/selftests/rcutorture/bin/kvm.sh --torture rcuscale --allcpus --duration 5 --kconfig "CONFIG_TASKS_TRACE_RCU=y CONFIG_NR_CPUS=$HALF_ALLOTED_CPUS" --trust-make
+ torture_set "rcuscale-$prim" tools/testing/selftests/rcutorture/bin/kvm.sh --torture rcuscale --allcpus --duration 5 --kconfig "CONFIG_TASKS_TRACE_RCU=y CONFIG_NR_CPUS=$SCALE_ALLOTED_CPUS" --trust-make
mv $T/last-resdir-nodebug $T/first-resdir-nodebug || :
if test -f "$T/last-resdir-kasan"
then
@@ -559,7 +575,7 @@ do_kcsan="$do_kcsan_save"
if test "$do_kvfree" = "yes"
then
torture_bootargs="rcuscale.kfree_rcu_test=1 rcuscale.kfree_nthreads=16 rcuscale.holdoff=20 rcuscale.kfree_loops=10000 torture.disable_onoff_at_boot"
- torture_set "rcuscale-kvfree" tools/testing/selftests/rcutorture/bin/kvm.sh --torture rcuscale --allcpus --duration $duration_rcutorture --kconfig "CONFIG_NR_CPUS=$HALF_ALLOTED_CPUS" --memory 2G --trust-make
+ torture_set "rcuscale-kvfree" tools/testing/selftests/rcutorture/bin/kvm.sh --torture rcuscale --allcpus --duration $duration_rcutorture --kconfig "CONFIG_NR_CPUS=$SCALE_ALLOTED_CPUS" --memory 2G --trust-make
fi
if test "$do_clocksourcewd" = "yes"
...
CONFIG_RCU_TORTURE_TEST=y
CONFIG_PRINTK_TIME=y
CONFIG_HYPERVISOR_GUEST=y
CONFIG_PARAVIRT=y
CONFIG_KVM_GUEST=y
CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=n
CONFIG_KCSAN_REPORT_VALUE_CHANGE_ONLY=n
CONFIG_HYPERVISOR_GUEST=y
CONFIG_KVM_GUEST=y
CONFIG_HYPERVISOR_GUEST=y
CONFIG_KVM_GUEST=y
@@ -2,3 +2,4 @@ nohz_full=2-9
rcutorture.stall_cpu=14
rcutorture.stall_cpu_holdoff=90
rcutorture.fwd_progress=0
rcutree.nohz_full_patience_delay=1000
CONFIG_SMP=n
CONFIG_PREEMPT_NONE=y
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=n
CONFIG_PREEMPT_DYNAMIC=n
#CHECK#CONFIG_PREEMPT_RCU=n
CONFIG_HZ_PERIODIC=n
CONFIG_NO_HZ_IDLE=y
CONFIG_NO_HZ_FULL=n
CONFIG_HOTPLUG_CPU=n
CONFIG_SUSPEND=n
CONFIG_HIBERNATION=n
CONFIG_RCU_NOCB_CPU=n
CONFIG_DEBUG_LOCK_ALLOC=n
CONFIG_PROVE_LOCKING=n
CONFIG_RCU_BOOST=n
CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
CONFIG_RCU_EXPERT=y
CONFIG_KPROBES=n
CONFIG_FTRACE=n