Commit 7dd49125 authored by Peter Zijlstra, committed by Ingo Molnar

sched/fair: Fix effective_load() to consistently use smoothed load

Starting with the following commit:

  fde7d22e ("sched/fair: Fix overly small weight for interactive group entities")

calc_tg_weight() no longer computes the value that effective_load() expects.

The difference is in the 'correction' term. In order to ensure \Sum
rw_j >= rw_i we cannot use tg->load_avg directly, since that might be
lagging a correction on the current cfs_rq->avg.load_avg value.
Therefore we use tg->load_avg - cfs_rq->tg_load_avg_contrib +
cfs_rq->avg.load_avg.
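
In code, the consistent aggregate corresponds to roughly the following
(a minimal sketch using the kernel's struct task_group / struct cfs_rq
fields of this era; the helper name tg_load_sum() is hypothetical, not
a kernel symbol):

	/* Hypothetical helper; sketches the corrected \Sum rw_j term. */
	static long tg_load_sum(struct task_group *tg, struct cfs_rq *cfs_rq)
	{
		/* Global sum of per-cfs_rq contributions; may be stale. */
		long W = atomic_long_read(&tg->load_avg);

		/*
		 * Replace this cfs_rq's stale contribution with its current
		 * smoothed load, so that \Sum rw_j >= rw_i holds.
		 */
		W -= cfs_rq->tg_load_avg_contrib;
		W += cfs_rq->avg.load_avg;

		return W;
	}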

Now, per the referenced commit, calc_tg_weight() no longer uses
cfs_rq->avg.load_avg, the value @w is later derived from, but uses
cfs_rq->load.weight instead.

So stop using calc_tg_weight() and do it explicitly.
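
For contrast, after fde7d22e the helper computed roughly the following
(reconstructed from that commit's description; note the unsmoothed
cfs_rq->load.weight in the final step, whereas @w is built from the
smoothed cfs_rq->avg.load_avg):

	static long calc_tg_weight(struct task_group *tg, struct cfs_rq *cfs_rq)
	{
		long tg_weight;

		tg_weight = atomic_long_read(&tg->load_avg);
		tg_weight -= cfs_rq->tg_load_avg_contrib;
		tg_weight += cfs_rq->load.weight;	/* instantaneous, not smoothed */

		return tg_weight;
	}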

The effect of this bug is that wake_affine() makes randomly poor
choices in cgroup-intensive workloads.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: <stable@vger.kernel.org> # v4.3+
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: fde7d22e ("sched/fair: Fix overly small weight for interactive group entities")
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent 4c2e07c6
@@ -735,8 +735,6 @@ void post_init_entity_util_avg(struct sched_entity *se)
 	}
 }
 
-static inline unsigned long cfs_rq_runnable_load_avg(struct cfs_rq *cfs_rq);
-static inline unsigned long cfs_rq_load_avg(struct cfs_rq *cfs_rq);
 #else
 void init_entity_runnable_average(struct sched_entity *se)
 {
@@ -4946,19 +4944,24 @@ static long effective_load(struct task_group *tg, int cpu, long wl, long wg)
 		return wl;
 
 	for_each_sched_entity(se) {
-		long w, W;
+		struct cfs_rq *cfs_rq = se->my_q;
+		long W, w = cfs_rq_load_avg(cfs_rq);
 
-		tg = se->my_q->tg;
+		tg = cfs_rq->tg;
 
 		/*
 		 * W = @wg + \Sum rw_j
 		 */
-		W = wg + calc_tg_weight(tg, se->my_q);
+		W = wg + atomic_long_read(&tg->load_avg);
+
+		/* Ensure \Sum rw_j >= rw_i */
+		W -= cfs_rq->tg_load_avg_contrib;
+		W += w;
 
 		/*
 		 * w = rw_i + @wl
 		 */
-		w = cfs_rq_load_avg(se->my_q) + wl;
+		w += wl;
 
 		/*
 		 * wl = S * s'_i; see (2)