Commit 6cf82d55 authored by Vincent Guittot, committed by Peter Zijlstra

sched/cfs: fix spurious active migration

The load balance can fail to find a suitable task during the periodic check
because the imbalance is smaller than half of the load of the waiting
tasks. This increases the number of failed load balance attempts, which can
eventually trigger an active migration. Such an active migration is useless
because the currently running task is not a better choice than the waiting
ones. In fact, the current task was probably not running but waiting for
the CPU during one of the previous attempts, and it had already not been
selected then.

When load balance fails too many times to migrate a task, we should relax
the constraint on the maximum load of the tasks that can be migrated,
similarly to what is done with cache hotness.

Before the rework, load balance used to set the imbalance to the average
load_per_task in order to mitigate such situations. This increased the
likelihood of migrating a task, but also of selecting a larger task than
needed while more appropriate ones were in the list.
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/1575036287-6052-1-git-send-email-vincent.guittot@linaro.org
parent 7ed735c3
...
@@ -7328,7 +7328,14 @@ static int detach_tasks(struct lb_env *env)
 			    load < 16 && !env->sd->nr_balance_failed)
 				goto next;
 
-			if (load/2 > env->imbalance)
+			/*
+			 * Make sure that we don't migrate too much load.
+			 * Nevertheless, let relax the constraint if
+			 * scheduler fails to find a good waiting task to
+			 * migrate.
+			 */
+			if (load/2 > env->imbalance &&
+			    env->sd->nr_balance_failed <= env->sd->cache_nice_tries)
 				goto next;
 
 			env->imbalance -= load;
...
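To see how the relaxed check behaves, here is a minimal user-space sketch, not kernel code: the struct, the helper name skip_task, and all numeric values are made up for illustration. It shows that a task whose load/2 exceeds the imbalance is skipped only while nr_balance_failed is still within cache_nice_tries, and becomes eligible for migration once the balancer has failed often enough.

/* Standalone illustration of the relaxed condition (assumed values). */
#include <stdio.h>

struct env {
	long imbalance;
	int nr_balance_failed;
	int cache_nice_tries;
};

/* Mirrors the patched condition: skip ("goto next") only while the
 * failure budget is not yet exhausted. */
static int skip_task(long load, const struct env *e)
{
	return load / 2 > e->imbalance &&
	       e->nr_balance_failed <= e->cache_nice_tries;
}

int main(void)
{
	struct env e = { .imbalance = 400, .cache_nice_tries = 2 };
	long load = 1024;	/* load/2 = 512 > imbalance: "too big" */

	for (e.nr_balance_failed = 0; e.nr_balance_failed <= 3;
	     e.nr_balance_failed++)
		printf("nr_balance_failed=%d -> %s\n", e.nr_balance_failed,
		       skip_task(load, &e) ? "skip" : "migrate");
	return 0;
}

With these values the task is skipped for nr_balance_failed 0 through 2 and migrated at 3, whereas before the patch it would have been skipped unconditionally, eventually forcing a useless active migration.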