    sched/numa: Adjust imb_numa_nr to a better approximation of memory channels · 026b98a9
    Mel Gorman authored
    For a single LLC per node, a NUMA imbalance is allowed until 25% of
    the CPUs sharing a node could be active. One intent of the cut-off is
    to avoid an imbalance across memory channels, but there is no
    topological information available on active memory channels.
    Furthermore, there can be differences between nodes depending on the
    number of populated DIMMs.
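    As a rough illustration of the previous policy (a simplified sketch;
    the function name and signature only approximate the scheduler code),
    the cut-off tolerated an imbalance while fewer than a quarter of the
    destination node's CPUs were busy:
    
        #include <stdio.h>
        #include <stdbool.h>
    
        /* Sketch only: an imbalance is tolerated below the 25% cut-off. */
        static bool allow_numa_imbalance(unsigned int dst_running,
                                         unsigned int node_cpus)
        {
                return dst_running < (node_cpus >> 2);  /* 25% of the node */
        }
    
        int main(void)
        {
                /* 128-CPU node: 31 running tasks still allow an imbalance... */
                printf("%d\n", allow_numa_imbalance(31, 128));  /* prints 1 */
                /* ...but at 32 (25%) normal load balancing applies */
                printf("%d\n", allow_numa_imbalance(32, 128));  /* prints 0 */
                return 0;
        }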
    
    A cut-off of 25% was arbitrary but generally worked. It does have a
    severe corner case though: a parallel workload using 25% of all
    available CPUs can over-saturate memory channels. This can happen
    when the initial forking of tasks gets pulled more to one node after
    early wakeups (e.g. a barrier synchronisation), an imbalance that is
    not quickly corrected by the load balancer. The load balancer may
    fail to act quickly as the parallel tasks are considered poor
    migration candidates due to locality or cache hotness.
    
    On a range of modern Intel CPUs, 12.5% appears to be a better cut-off
    assuming all memory channels are populated, and it is used as the new
    cut-off point. A minimum of 1 is specified so that a communicating
    pair can remain local even on CPUs with low core counts. Modern AMD
    CPUs have multiple LLCs per node and are not affected.
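    A minimal sketch of the adjusted calculation (simplified, with
    illustrative names; not the patch verbatim):
    
        #include <stdio.h>
    
        /*
         * Sketch: derive the allowed NUMA imbalance from a node's CPU
         * count and its number of LLCs.
         */
        static unsigned int imb_numa_nr(unsigned int node_cpus,
                                        unsigned int nr_llcs)
        {
                unsigned int imb;
    
                if (nr_llcs == 1)
                        imb = node_cpus >> 3;   /* single LLC: 12.5% cut-off */
                else
                        imb = nr_llcs;          /* multiple LLCs: unaffected */
    
                return imb > 1 ? imb : 1;       /* minimum of 1 */
        }
    
        int main(void)
        {
                /* 64-CPU node, one LLC: an imbalance of up to 8 is allowed */
                printf("%u\n", imb_numa_nr(64, 1));
                /* 4-CPU node: the minimum of 1 keeps a communicating pair local */
                printf("%u\n", imb_numa_nr(4, 1));
                return 0;
        }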
    Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Tested-by: K Prateek Nayak <kprateek.nayak@amd.com>
    Link: https://lore.kernel.org/r/20220520103519.1863-5-mgorman@techsingularity.net