    sched/fair: Fix and optimize the fork() path · e210bffd
    Peter Zijlstra authored
    The task_fork_fair() callback already calls __set_task_cpu() and takes
    rq->lock.
    
    If we move the sched_class::task_fork callback in sched_fork() under
    the existing p->pi_lock, right after its set_task_cpu() call, we can
    avoid doing two such calls and omit the IRQ disabling on the rq->lock.
    
    Change to __set_task_cpu() to skip the migration bits; this is a new
    task, not a migration. Similarly, make wake_up_new_task() use
    __set_task_cpu() for the same reason: the task hasn't actually
    migrated, since it has never run.
    
    This cures the problem of calling migrate_task_rq_fair(), which does
    remove_entity_load_avg() on tasks that have never been added to
    the load avg to begin with.
    
    This bug would result in transiently messed up load_avg values, averaged
    out after a few dozen milliseconds. This is probably the reason why
    this bug was not found for such a long time.
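    The fix can be illustrated with a toy model (NOT the real kernel code;
    the struct, the migrate_calls counter, and the simplified function
    bodies below are invented for illustration). The point it captures is
    real: set_task_cpu() treats a CPU change as a migration and fires the
    scheduling class's migrate hook (migrate_task_rq_fair() for CFS),
    while __set_task_cpu() just records the CPU, so a freshly forked task
    never triggers load-average teardown for state it was never part of.

    ```c
    #include <assert.h>
    #include <stdio.h>

    /* Toy model of the two helpers. All names mirror the kernel ones,
     * but the bodies are drastically simplified sketches. */

    struct task {
        int cpu;
        int migrate_calls;   /* how often the class migrate hook fired */
    };

    /* Models migrate_task_rq_fair(): on a real migration this removes
     * the entity's contribution from the old rq's load average. */
    static void migrate_task_rq(struct task *p)
    {
        p->migrate_calls++;
    }

    /* Models __set_task_cpu(): record the CPU, skip migration bits. */
    static void __set_task_cpu(struct task *p, int cpu)
    {
        p->cpu = cpu;
    }

    /* Models set_task_cpu(): a CPU change counts as a migration. */
    static void set_task_cpu(struct task *p, int cpu)
    {
        if (cpu != p->cpu)
            migrate_task_rq(p);
        __set_task_cpu(p, cpu);
    }

    int main(void)
    {
        /* Old path: the fresh task "migrates" to its first CPU, so the
         * migrate hook fires against load-avg state that never existed. */
        struct task old_path = { .cpu = 0, .migrate_calls = 0 };
        set_task_cpu(&old_path, 1);
        assert(old_path.migrate_calls == 1);  /* spurious hook */

        /* New path: __set_task_cpu() places the task without
         * pretending it migrated. */
        struct task new_path = { .cpu = 0, .migrate_calls = 0 };
        __set_task_cpu(&new_path, 1);
        assert(new_path.migrate_calls == 0);

        printf("new path: cpu=%d, migrate hooks fired=%d\n",
               new_path.cpu, new_path.migrate_calls);
        return 0;
    }
    ```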
    Reported-by: Vincent Guittot <vincent.guittot@linaro.org>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Mike Galbraith <efault@gmx.de>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: linux-kernel@vger.kernel.org
    Signed-off-by: Ingo Molnar <mingo@kernel.org>