Commit 8fe5c5a9 authored by Quentin Perret, committed by Ingo Molnar

sched/fair: Fix util_avg of new tasks for asymmetric systems

When a new task wakes up for the first time, its initial utilization
is set to half of the spare capacity of its CPU. The current
implementation of post_init_entity_util_avg() uses SCHED_CAPACITY_SCALE
directly as a capacity reference. As a result, on a big.LITTLE system, a
new task waking up on an idle little CPU will be given ~512 of util_avg,
even if the CPU's capacity is significantly less than that.

Fix this by computing the spare capacity with arch_scale_cpu_capacity().
Signed-off-by: Quentin Perret <quentin.perret@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: dietmar.eggemann@arm.com
Cc: morten.rasmussen@arm.com
Cc: patrick.bellasi@arm.com
Link: http://lkml.kernel.org/r/20180612112215.25448-1-quentin.perret@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent be45bf53
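To make the effect of the change concrete, here is a minimal userspace sketch (not kernel code) of the before/after computation for a first task waking on an idle CPU. The little-CPU capacity of 446 is a hypothetical example value; real values come from arch_scale_cpu_capacity():

/*
 * Minimal userspace sketch (not kernel code) of the initial util_avg
 * computation before and after this patch. 446 is a hypothetical
 * little-CPU capacity; real values come from arch_scale_cpu_capacity().
 */
#include <stdio.h>

#define SCHED_CAPACITY_SCALE	1024L

/* Half of the CPU's spare capacity, as in post_init_entity_util_avg(). */
static long initial_util(long cpu_scale, long cfs_rq_util_avg)
{
	return (cpu_scale - cfs_rq_util_avg) / 2;
}

int main(void)
{
	long little_cap = 446;	/* hypothetical big.LITTLE little-CPU capacity */

	/* Before the fix: the reference is always SCHED_CAPACITY_SCALE. */
	printf("old: %ld\n", initial_util(SCHED_CAPACITY_SCALE, 0));	/* 512 */

	/* After the fix: the reference is the CPU's actual capacity. */
	printf("new: %ld\n", initial_util(little_cap, 0));		/* 223 */

	return 0;
}

With the old code the new task's util_avg (512) exceeds the little CPU's entire capacity; with the fix it is half of the capacity that is actually there.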
@@ -735,11 +735,12 @@ static void attach_entity_cfs_rq(struct sched_entity *se);
  * To solve this problem, we also cap the util_avg of successive tasks to
  * only 1/2 of the left utilization budget:
  *
- *   util_avg_cap = (1024 - cfs_rq->avg.util_avg) / 2^n
+ *   util_avg_cap = (cpu_scale - cfs_rq->avg.util_avg) / 2^n
  *
- * where n denotes the nth task.
+ * where n denotes the nth task and cpu_scale the CPU capacity.
  *
- * For example, a simplest series from the beginning would be like:
+ * For example, for a CPU with 1024 of capacity, a simplest series from
+ * the beginning would be like:
  *
  *  task  util_avg: 512, 256, 128,  64,  32,   16,    8, ...
  * cfs_rq util_avg: 512, 768, 896, 960, 992, 1008, 1016, ...
@@ -751,7 +752,8 @@ void post_init_entity_util_avg(struct sched_entity *se)
 {
 	struct cfs_rq *cfs_rq = cfs_rq_of(se);
 	struct sched_avg *sa = &se->avg;
-	long cap = (long)(SCHED_CAPACITY_SCALE - cfs_rq->avg.util_avg) / 2;
+	long cpu_scale = arch_scale_cpu_capacity(NULL, cpu_of(rq_of(cfs_rq)));
+	long cap = (long)(cpu_scale - cfs_rq->avg.util_avg) / 2;
 
 	if (cap > 0) {
 		if (cfs_rq->avg.util_avg != 0) {
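For completeness, the capping series from the comment above can be replayed for a smaller CPU. A sketch, again assuming a hypothetical capacity of 446: each successive new task receives half of the remaining utilization budget, so the running cfs_rq util_avg converges toward the CPU's actual capacity instead of toward 1024.

#include <stdio.h>

int main(void)
{
	long cpu_scale = 446;	/* hypothetical little-CPU capacity */
	long cfs_rq_util = 0;	/* cfs_rq->avg.util_avg, starting from idle */

	/* util_avg_cap = (cpu_scale - cfs_rq->avg.util_avg) / 2^n:
	 * the nth new task is capped to half of the remaining budget. */
	for (int n = 1; n <= 7; n++) {
		long cap = (cpu_scale - cfs_rq_util) / 2;

		cfs_rq_util += cap;
		printf("task %d: util_avg %3ld, cfs_rq util_avg %3ld\n",
		       n, cap, cfs_rq_util);
	}
	return 0;	/* task util_avg series: 223, 111, 56, 28, 14, 7, 3 */
}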