    rcu: Avoid triggering strict-GP irq-work when RCU is idle · 621189a1
    Zqiang authored
    Kernels built with PREEMPT_RCU=y and RCU_STRICT_GRACE_PERIOD=y trigger
    irq-work from rcu_read_unlock(), and the resulting irq-work handler
    invokes rcu_preempt_deferred_qs_handler().  The point of this triggering
    is to force grace periods to end quickly in order to give tools like KASAN
    a better chance of detecting RCU usage bugs such as leaking RCU-protected
    pointers out of an RCU read-side critical section.
    
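    As a rough sketch (simplified from rcu_read_unlock_special() in
    kernel/rcu/tree_plugin.h, and not necessarily the exact upstream code),
    the irq-work in question is queued as follows, with "expboost" computed
    from the conditions discussed below:

    	if (IS_ENABLED(CONFIG_IRQ_WORK) && irqs_were_disabled &&
    	    expboost && !rdp->defer_qs_iw_pending && cpu_online(rdp->cpu)) {
    		// Get the scheduler to re-evaluate and call hooks.
    		// If !IRQ_WORK, the FQS scan will eventually send an IPI.
    		init_irq_work(&rdp->defer_qs_iw, rcu_preempt_deferred_qs_handler);
    		rdp->defer_qs_iw_pending = true;
    		irq_work_queue_on(&rdp->defer_qs_iw, rdp->cpu);
    	}
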
    However, this irq-work triggering is unconditional.  This works, but
    there is no point in doing this irq-work unless the current grace period
    is waiting on the running CPU or task, which is not the common case.
    After all, in the common case there are many rcu_read_unlock() calls
    per CPU per grace period.
    
    This commit therefore triggers the irq-work only when the current grace
    period is waiting on the running CPU or task.
    
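    Concretely, the change amounts to qualifying the strict-GP term in the
    "expboost" condition of rcu_read_unlock_special().  The following is a
    sketch rather than the exact diff; rdp->grpmask, rnp->qsmask, and
    t->rcu_blocked_node are the tree_plugin.h fields recording whether the
    current grace period is still waiting on this CPU or on this blocked
    task:

    	expboost = (t->rcu_blocked_node && READ_ONCE(t->rcu_blocked_node->exp_tasks)) ||
    		   (rdp->grpmask & READ_ONCE(rnp->expmask)) ||
    		   // Before: IS_ENABLED(CONFIG_RCU_STRICT_GRACE_PERIOD) by
    		   // itself, which made the irq-work unconditional.
    		   (IS_ENABLED(CONFIG_RCU_STRICT_GRACE_PERIOD) &&
    		    // After: require that this grace period is waiting on
    		    // the running CPU or on the current (blocked) task.
    		    ((rdp->grpmask & READ_ONCE(rnp->qsmask)) || t->rcu_blocked_node)) ||
    		   (IS_ENABLED(CONFIG_RCU_BOOST) && irqs_were_disabled &&
    		    t->rcu_blocked_node);
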
    This change was tested as follows on a four-CPU system:
    
    	echo rcu_preempt_deferred_qs_handler > /sys/kernel/debug/tracing/set_ftrace_filter
    	echo 1 > /sys/kernel/debug/tracing/function_profile_enabled
    	insmod rcutorture.ko
    	sleep 20
    	rmmod rcutorture.ko
    	echo 0 > /sys/kernel/debug/tracing/function_profile_enabled
    	echo > /sys/kernel/debug/tracing/set_ftrace_filter
    
    This procedure produces per-CPU results in the following set of files:
    
    	/sys/kernel/debug/tracing/trace_stat/function*
    
    Sample output from one of these files is as follows (the function
    profiler truncates names to 30 characters, hence the missing trailing
    "r"):
    
      Function                               Hit    Time            Avg             s^2
      --------                               ---    ----            ---             ---
      rcu_preempt_deferred_qs_handle      838746    182650.3 us     0.217 us        0.004 us
    
    The baseline sum of the "Hit" values (the number of calls to this
    function) was 3,319,015.  With this commit, that sum was 1,140,359,
    for a 2.9x reduction.  The worst-case variance across the CPUs was less
    than 25%, so this large effect size is statistically significant.
    
    The raw data is available in the Link: URL.
    
    Link: https://lore.kernel.org/all/20220808022626.12825-1-qiang1.zhang@intel.com/
    Signed-off-by: Zqiang <qiang1.zhang@intel.com>
    Signed-off-by: Paul E. McKenney <paulmck@kernel.org>