Commit 44dcb04f authored by Srikar Dronamraju, committed by Ingo Molnar

sched/numa: Consider 'imbalance_pct' when comparing loads in numa_has_capacity()

This is consistent with all other load-balancing instances, where we
absorb unfairness up to env->imbalance_pct. Absorbing unfairness up to
env->imbalance_pct allows us to pull tasks to, and retain them on,
their preferred nodes.
Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Rik van Riel <riel@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1434455762-30857-3-git-send-email-srikar@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent 2a1ed24c
@@ -1415,8 +1415,9 @@ static bool numa_has_capacity(struct task_numa_env *env)
	 * --------------------- vs ---------------------
	 * src->compute_capacity    dst->compute_capacity
	 */
-	if (src->load * dst->compute_capacity >
-	    dst->load * src->compute_capacity)
+	if (src->load * dst->compute_capacity * env->imbalance_pct >
+
+	    dst->load * src->compute_capacity * 100)
		return true;

	return false;