    mm: sched: numa: Implement slow start for working set sampling · 4b96a29b
    Peter Zijlstra authored
    Add a 1 second delay before starting to scan the working set of
    a task and starting to balance it amongst nodes.
    
    [ Note that before the constant per-task WSS sampling rate patch the
      initial scan would have happened much later still; in effect, that
      patch caused this regression. ]
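    
    As an illustration only (this is not the patch itself), the scan-delay
    gate in the periodic scanner could look roughly like the sketch below.
    The helper name task_numa_scan() and the mm_scan_start argument are
    hypothetical names chosen for clarity, and the backing variable name is
    assumed to follow the /proc path; jiffies, time_before() and
    msecs_to_jiffies() are the usual kernel time primitives:
    
       #include <linux/jiffies.h>
    
       /* Exposed as /proc/sys/kernel/numa_balancing_scan_delay_ms (assumed name). */
       unsigned int sysctl_numa_balancing_scan_delay = 1000;  /* 1 second default */
    
       /*
        * Hypothetical scanner entry point: skip working set sampling until
        * the address space has existed for at least the configured delay,
        * so short-running tasks never pay the sampling cost.
        */
       static void task_numa_scan(unsigned long mm_scan_start)
       {
               unsigned long earliest = mm_scan_start +
                       msecs_to_jiffies(sysctl_numa_balancing_scan_delay);
    
               if (time_before(jiffies, earliest))
                       return;         /* still in the slow-start window */
    
               /* ... sample the working set and update NUMA placement ... */
       }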
    
    The theory is that short-running tasks benefit very little from NUMA
    placement: they come and go, and they are better off sticking to the
    node they were started on. As tasks mature and rebalance to other
    CPUs and nodes, their NUMA placement has to change with them and
    starts to matter more and more.
    
    In practice this change fixes an observable kbuild regression:
    
       # [ a perf stat --null --repeat 10 test of ten bzImage builds to /dev/shm ]
    
       !NUMA:
       45.291088843 seconds time elapsed                                          ( +-  0.40% )
       45.154231752 seconds time elapsed                                          ( +-  0.36% )
    
       +NUMA, no slow start:
       46.172308123 seconds time elapsed                                          ( +-  0.30% )
       46.343168745 seconds time elapsed                                          ( +-  0.25% )
    
       +NUMA, 1 sec slow start:
       45.224189155 seconds time elapsed                                          ( +-  0.25% )
       45.160866532 seconds time elapsed                                          ( +-  0.17% )
    
    and it also fixes an observable perf bench (hackbench) regression:
    
       # perf stat --null --repeat 10 perf bench sched messaging
    
       -NUMA:                  0.246225691 seconds time elapsed                   ( +-  1.31% )
       +NUMA no slow start:    0.252620063 seconds time elapsed                   ( +-  1.13% )
       +NUMA 1sec delay:       0.248076230 seconds time elapsed                   ( +-  1.35% )
    
    The implementation is simple and straightforward; most of the patch
    deals with adding the /proc/sys/kernel/numa_balancing_scan_delay_ms
    tunable knob.
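    
    For reference, here is a hedged sketch of how such a knob is commonly
    wired up through a ctl_table entry; the table name below and its exact
    placement are assumptions, and the backing variable name is inferred
    from the /proc path:
    
       #include <linux/sysctl.h>
    
       extern unsigned int sysctl_numa_balancing_scan_delay;
    
       /* Sketch: expose the scan delay, in milliseconds, under /proc/sys/kernel/. */
       static struct ctl_table numa_balancing_table[] = {
               {
                       .procname       = "numa_balancing_scan_delay_ms",
                       .data           = &sysctl_numa_balancing_scan_delay,
                       .maxlen         = sizeof(unsigned int),
                       .mode           = 0644,
                       .proc_handler   = proc_dointvec,
               },
               { }     /* sentinel */
       };
    
    At run time the delay can then be adjusted by writing a millisecond
    value to /proc/sys/kernel/numa_balancing_scan_delay_ms; the 1000 ms
    default corresponds to the 1 second slow start described above.
    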
    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Cc: Andrea Arcangeli <aarcange@redhat.com>
    Cc: Rik van Riel <riel@redhat.com>
    [ Wrote the changelog, ran measurements, tuned the default. ]
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: Mel Gorman <mgorman@suse.de>
    Reviewed-by: Rik van Riel <riel@redhat.com>