Commit 9390675a authored by Vincent Guittot, committed by Ingo Molnar

Revert "sched: Fix sleep time double accounting in enqueue entity"

This reverts commit 282cf499.

With the current implementation, the load average statistics of a sched entity
change according to other activity on the CPU, even if this activity occurs
between the running windows of the sched entity and has no influence on the
running duration of the task.

When a task wakes up on the same CPU, we currently update last_runnable_update
with the return value of __synchronize_entity_decay() without updating
runnable_avg_sum and runnable_avg_period accordingly. In fact, we have to sync
the load_contrib of the se with the rq's blocked_load_contrib before removing
it from the latter (with __synchronize_entity_decay()), but we must keep
last_runnable_update unchanged so that runnable_avg_sum/period are updated
over the whole sleep window during the next update_entity_load_avg().
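
To see why skipping the window matters, here is a rough standalone sketch,
not kernel code: a floating-point simplification of the
runnable_avg_sum/runnable_avg_period accounting. The kernel uses 32-bit
fixed-point series in __update_entity_runnable_avg(); the starting values
and sleep length below are made up for illustration.

/* build: cc sketch.c -o sketch -lm */
#include <stdio.h>
#include <math.h>

/* Kernel decay rate: contributions halve every 32 periods (y^32 = 1/2). */
static double decay(double val, unsigned int periods)
{
	return val * pow(0.5, periods / 32.0);
}

int main(void)
{
	double sum = 1000.0, period = 1000.0;	/* task was fully runnable */
	unsigned int slept = 64;		/* full periods spent sleeping */
	double y = pow(0.5, 1.0 / 32.0);

	/*
	 * Proper accounting: the next __update_entity_runnable_avg() sees
	 * the whole sleep window; old contributions decay, and the idle
	 * periods accrue into the period (but not into the runnable sum).
	 */
	double idle = (1.0 - pow(y, slept)) / (1.0 - y); /* geometric sum */
	double load_ok = decay(sum, slept) / (decay(period, slept) + idle);

	/*
	 * With last_runnable_update pushed forward by decays << 20 ns
	 * (one decay period is 1024us = 1 << 20 ns), the sleep window is
	 * skipped: sum and period are untouched, so the task still looks
	 * fully loaded after sleeping, and how much is skipped depends on
	 * how many full periods other CPU activity happened to complete.
	 */
	double load_skipped = sum / period;

	printf("sleep window accounted: %.2f\n", load_ok);      /* ~0.88 */
	printf("sleep window skipped : %.2f\n", load_skipped);  /* 1.00 */
	return 0;
}

The diff below simply restores the pre-282cf499 behaviour of discarding the
return value of __synchronize_entity_decay().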
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Reviewed-by: Ben Segall <bsegall@google.com>
Cc: pjt@google.com
Cc: alex.shi@linaro.org
Link: http://lkml.kernel.org/r/1390376734-6800-1-git-send-email-vincent.guittot@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent 15c81026
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2356,13 +2356,7 @@ static inline void enqueue_entity_load_avg(struct cfs_rq *cfs_rq,
 		}
 		wakeup = 0;
 	} else {
-		/*
-		 * Task re-woke on same cpu (or else migrate_task_rq_fair()
-		 * would have made count negative); we must be careful to avoid
-		 * double-accounting blocked time after synchronizing decays.
-		 */
-		se->avg.last_runnable_update += __synchronize_entity_decay(se)
-							<< 20;
+		__synchronize_entity_decay(se);
 	}
 
 	/* migrated tasks did not contribute to our blocked load */