Commit 5c34f497 authored by Peter Zijlstra, committed by Greg Kroah-Hartman

sched/topology: Fix overlapping sched_group_mask

commit 73bb059f upstream.

The point of sched_group_mask is to select those CPUs from
sched_group_cpus that can actually arrive at this balance domain.

The current code gets it wrong, as can be readily demonstrated with a
topology like:

  node   0   1   2   3
    0:  10  20  30  20
    1:  20  10  20  30
    2:  30  20  10  20
    3:  20  30  20  10
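
Each distinct distance in this table becomes one NUMA domain level, and a
node's domain at a given level spans every node within that distance. A
standalone user-space sketch (illustration only, mirroring what
sched_init_numa() derives from the distance table; not kernel code):

  #include <stdio.h>

  int main(void)
  {
          /* the distance table from the changelog above */
          static const int dist[4][4] = {
                  { 10, 20, 30, 20 },
                  { 20, 10, 20, 30 },
                  { 30, 20, 10, 20 },
                  { 20, 30, 20, 10 },
          };
          static const int level[] = { 10, 20, 30 }; /* distinct distances */
          int n, l, m;

          for (n = 0; n < 4; n++) {
                  for (l = 0; l < 3; l++) {
                          printf("node %d, distance <= %d spans:", n, level[l]);
                          for (m = 0; m < 4; m++)
                                  if (dist[n][m] <= level[l])
                                          printf(" %d", m);
                          printf("\n");
                  }
          }
          return 0;
  }

For node 1 the distance <= 20 level spans nodes 0-2, which is exactly the
"domain 0: span 0-2" NUMA domain in the dumps below (the distance <= 10
level collapses to the node itself and is covered by the non-NUMA levels).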

Where (for example) domain 1 on CPU1 ends up with a mask that includes
CPU0:

  [] CPU1 attaching sched-domain:
  []  domain 0: span 0-2 level NUMA
  []   groups: 1 (mask: 1), 2, 0
  []   domain 1: span 0-3 level NUMA
  []    groups: 0-2 (mask: 0-2) (cpu_capacity: 3072), 0,2-3 (cpu_capacity: 3072)

This causes sched_balance_cpu() to compute the wrong CPU, and consequently
should_we_balance() terminates early, resulting in missed load-balance
opportunities.
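
For reference, the mask is consumed roughly like this in kernels of this
vintage (a paraphrased sketch, not the verbatim source; the helper the
changelog calls sched_balance_cpu() appears as group_balance_cpu() in the
tree):

  /* kernel/sched/core.c (sketch) */
  int group_balance_cpu(struct sched_group *sg)
  {
          return cpumask_first_and(sched_group_cpus(sg), sched_group_mask(sg));
  }

  /* kernel/sched/fair.c (sketch of the relevant part) */
  static int should_we_balance(struct lb_env *env)
  {
          struct sched_group *sg = env->sd->groups;
          int cpu, balance_cpu = -1;

          /* prefer an idle CPU that is allowed to balance this group */
          for_each_cpu_and(cpu, sched_group_cpus(sg), env->cpus) {
                  if (!cpumask_test_cpu(cpu, sched_group_mask(sg)) ||
                      !idle_cpu(cpu))
                          continue;
                  balance_cpu = cpu;
                  break;
          }

          if (balance_cpu == -1)
                  balance_cpu = group_balance_cpu(sg);

          /* only the chosen CPU may balance on behalf of the whole group */
          return balance_cpu == env->dst_cpu;
  }

With CPU0 wrongly present in the mask of CPU1's group, cpumask_first_and()
can pick CPU0 as the balance CPU, so CPU1 concludes it is not responsible
and gives up.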

The fixed topology looks like:

  [] CPU1 attaching sched-domain:
  []  domain 0: span 0-2 level NUMA
  []   groups: 1 (mask: 1), 2, 0
  []   domain 1: span 0-3 level NUMA
  []    groups: 0-2 (mask: 1) (cpu_capacity: 3072), 0,2-3 (cpu_capacity: 3072)

(Note: this relies on OVERLAP domains always having children; that is true
 because the regular topology domains are still present at this point --
 this runs before degenerate trimming.)
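
To see why the corrected mask is just "1": the new test only admits CPU i
if i's child domain spans exactly this group's span. With the distance
table above, the distance <= 20 spans are {0,1,3} for CPU0, {0,1,2} for
CPU1, {1,2,3} for CPU2 and {0,2,3} for CPU3, so for the group 0-2 only
CPU1 qualifies. A user-space sketch of that predicate (illustration only,
bitmasks instead of cpumasks, hypothetical helper name):

  /* child_span[i]: span of CPU i's child (distance <= 20) domain */
  static const unsigned child_span[4] = {
          0xb, /* 0,1,3 */
          0x7, /* 0,1,2 */
          0xe, /* 1,2,3 */
          0xd, /* 0,2,3 */
  };

  unsigned build_mask(unsigned sg_span)
  {
          unsigned i, mask = 0;

          for (i = 0; i < 4; i++)
                  if (child_span[i] == sg_span) /* cpumask_equal() in the patch */
                          mask |= 1u << i;
          return mask;
  }

build_mask(0x7) returns 0x2, i.e. only CPU1, matching "(mask: 1)" above.
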
Debugged-by: Lauro Ramos Venancio <lvenanci@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Fixes: e3589f6c ("sched: Allow for overlapping sched_domain spans")
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
parent 4e3c1188
@@ -6123,6 +6123,9 @@ enum s_alloc {
  * Build an iteration mask that can exclude certain CPUs from the upwards
  * domain traversal.
  *
+ * Only CPUs that can arrive at this group should be considered to continue
+ * balancing.
+ *
  * Asymmetric node setups can result in situations where the domain tree is of
  * unequal depth, make sure to skip domains that already cover the entire
  * range.
@@ -6141,11 +6144,24 @@ static void build_group_mask(struct sched_domain *sd, struct sched_group *sg)
 
 	for_each_cpu(i, span) {
 		sibling = *per_cpu_ptr(sdd->sd, i);
-		if (!cpumask_test_cpu(i, sched_domain_span(sibling)))
+
+		/*
+		 * Can happen in the asymmetric case, where these siblings are
+		 * unused. The mask will not be empty because those CPUs that
+		 * do have the top domain _should_ span the domain.
+		 */
+		if (!sibling->child)
+			continue;
+
+		/* If we would not end up here, we can't continue from here */
+		if (!cpumask_equal(sg_span, sched_domain_span(sibling->child)))
 			continue;
 
 		cpumask_set_cpu(i, sched_group_mask(sg));
 	}
+
+	/* We must not have empty masks here */
+	WARN_ON_ONCE(cpumask_empty(sched_group_mask(sg)));
 }
 
 /*
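
To inspect the domain/group layout on a given machine, build with
CONFIG_SCHED_DEBUG=y and boot with the sched_debug kernel command-line
parameter; the "CPU1 attaching sched-domain:" dumps quoted above are then
printed to the kernel log as each CPU attaches its domains.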