    sched/fair: Fix PELT integrity for new tasks · 7dc603c9
    Peter Zijlstra authored
    Vincent and Yuyang found another few scenarios in which entity
    tracking goes wobbly.
    
    The scenarios all stem from the fact that new tasks are not
    immediately attached and thereby differ from the normal situation -- a
    task is always attached to a cfs_rq's load average (such that it
    includes its blocked contribution) and is explicitly
    detached/attached on migration to another cfs_rq.
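
    For reference, that invariant can be shown with a minimal,
    self-contained sketch; the *_sketch names below are hypothetical,
    and the real fair.c helpers additionally maintain util_avg/util_sum
    and synchronise PELT time:

      struct sched_avg_sketch {
              unsigned long long last_update_time;
              unsigned long load_avg;
      };

      struct cfs_rq_sketch { struct sched_avg_sketch avg; };
      struct se_sketch     { struct sched_avg_sketch avg; };

      /* Attach: fold the entity's load into the cfs_rq average. */
      static void attach_load(struct cfs_rq_sketch *cfs_rq, struct se_sketch *se)
      {
              se->avg.last_update_time = cfs_rq->avg.last_update_time;
              cfs_rq->avg.load_avg += se->avg.load_avg;
      }

      /* Detach: take the contribution back out; only valid if attached. */
      static void detach_load(struct cfs_rq_sketch *cfs_rq, struct se_sketch *se)
      {
              cfs_rq->avg.load_avg -= se->avg.load_avg;
      }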
    
    Scenario 1: switch to fair class
    
      p->sched_class = fair_class;
      if (queued)
        enqueue_task(p);
          ...
            enqueue_entity()
              enqueue_entity_load_avg()
                migrated = !sa->last_update_time (true)
                if (migrated)
                  attach_entity_load_avg()
      check_class_changed()
        switched_from() (!fair)
        switched_to()   (fair)
          switched_to_fair()
            attach_entity_load_avg()
    
    If @p is a new task that hasn't been fair before, it will have
    !last_update_time and, per the above, end up in
    attach_entity_load_avg() _twice_.
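
    The double attach keys off that "migrated" test; a simplified sketch
    of it, reusing the hypothetical helpers from the sketch above (the
    real enqueue_entity_load_avg() also folds in the pending PELT sums):

      /* A zero last_update_time is read as "just migrated here". */
      static void enqueue_load_sketch(struct cfs_rq_sketch *cfs_rq,
                                      struct se_sketch *se)
      {
              int migrated = !se->avg.last_update_time;

              if (migrated)
                      attach_load(cfs_rq, se); /* new tasks take this path too */
      }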
    
    Scenario 2: change between cgroups
    
      sched_move_group(p)
        if (queued)
          dequeue_task()
        task_move_group_fair()
          detach_task_cfs_rq()
            detach_entity_load_avg()
          set_task_rq()
          attach_task_cfs_rq()
            attach_entity_load_avg()
        if (queued)
          enqueue_task();
            ...
              enqueue_entity()
                enqueue_entity_load_avg()
                  migrated = !sa->last_update_time (true)
                  if (migrated)
                    attach_entity_load_avg()
    
    As in scenario 1, if @p is a new task, it will have
    !last_update_time and we'll end up in attach_entity_load_avg()
    _twice_.
    
    Furthermore, notice how we do a detach_entity_load_avg() on something
    that wasn't attached to begin with.
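
    With the hypothetical helpers from the first sketch, that spurious
    detach can be walked through concretely; the real kernel clamps the
    subtraction (sub_positive()), but the accounting is still off:

      struct cfs_rq_sketch rq = { .avg = { .last_update_time = 1000 } };
      struct se_sketch se     = { .avg = { .load_avg = 512 } }; /* never attached */

      detach_load(&rq, &se); /* removes 512 that was never added in */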
    
    As stated above, the problem is that the new task isn't yet attached
    to the load tracking and thereby violates the assumed invariant.
    
    This patch remedies that by ensuring a new task is indeed properly
    attached to the load tracking on creation, through
    post_init_entity_util_avg().
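
    A sketch of the shape of that fix, again in terms of the hypothetical
    helpers (the actual change lives in post_init_entity_util_avg() in
    kernel/sched/fair.c, after the new entity's util_avg is seeded):

      static void post_init_entity_util_avg_sketch(struct cfs_rq_sketch *cfs_rq,
                                                   struct se_sketch *se)
      {
              /* ... seed the new entity's initial averages here ... */

              /* Attach exactly once, at fork, before any enqueue can run. */
              attach_load(cfs_rq, se);
      }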
    
    Of course, this isn't entirely straightforward, since the task is
    hashed (and hence visible to others) before we call
    wake_up_new_task() and can thus be poked at. We avoid this by adding
    TASK_NEW and teaching cpu_cgroup_can_attach() to refuse such tasks.
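
    Roughly, the refusal is a state check in the ->can_attach() path; a
    sketch, with the taskset iteration and pi_lock serialisation elided:

      /* Inside cpu_cgroup_can_attach(), for each task in the set: */
      if (task->state == TASK_NEW)
              return -EINVAL; /* not attached to PELT yet; move must wait */
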
    Reported-by: Yuyang Du <yuyang.du@intel.com>
    Reported-by: Vincent Guittot <vincent.guittot@linaro.org>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Mike Galbraith <efault@gmx.de>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: linux-kernel@vger.kernel.org
    Signed-off-by: Ingo Molnar <mingo@kernel.org>