1. 11 Nov, 2010 1 commit
  2. 10 Nov, 2010 2 commits
    • sched: Fix runnable condition for stoptask · 2d467090
      Peter Zijlstra authored
      Heiko reported that the TASK_RUNNING check is not sufficient for
      CONFIG_PREEMPT=y since we can get preempted with !TASK_RUNNING.
      
      He suggested adding a ->se.on_rq test to the existing TASK_RUNNING
      one, however TASK_RUNNING will always have ->se.on_rq, so we might as
      well reduce that to a single test.
      
      [ stop tasks should never get preempted, but it's good to handle
        this case correctly should it ever happen ]
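      The reasoning above can be sketched with a minimal userspace model
      (the struct, TASK_RUNNING constant and on_rq flag here are stand-ins
      illustrating why the single ->se.on_rq test subsumes the old
      TASK_RUNNING one; this is not the kernel code itself):

      ```c
      #include <assert.h>
      #include <stdbool.h>

      #define TASK_RUNNING       0
      #define TASK_INTERRUPTIBLE 1

      /* Minimal stand-in for the fields the check relies on. */
      struct task {
          long state;  /* TASK_RUNNING, TASK_INTERRUPTIBLE, ... */
          bool on_rq;  /* set while the task is enqueued on a runqueue */
      };

      /*
       * A TASK_RUNNING task always has on_rq set, so testing on_rq alone
       * also covers a task preempted while !TASK_RUNNING (still queued),
       * which the old state == TASK_RUNNING test would wrongly skip.
       */
      static bool stop_task_runnable(const struct task *stop)
      {
          return stop && stop->on_rq;
      }

      int main(void)
      {
          struct task running   = { TASK_RUNNING, true };
          struct task preempted = { TASK_INTERRUPTIBLE, true };  /* preempted, still on rq */
          struct task sleeping  = { TASK_INTERRUPTIBLE, false };

          assert(stop_task_runnable(&running));
          assert(stop_task_runnable(&preempted)); /* old check missed this case */
          assert(!stop_task_runnable(&sleeping));
          return 0;
      }
      ```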
      Reported-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      2d467090
    • sched: Use group weight, idle cpu metrics to fix imbalances during idle · aae6d3dd
      Suresh Siddha authored
      Currently we consider a sched domain to be well balanced when the imbalance
      is less than the domain's imbalance_pct. As the number of cores and threads
      increases, current values of imbalance_pct (for example 25% for a
      NUMA domain) are not enough to detect imbalances like:
      
      a) On a WSM-EP system (two sockets, each having 6 cores and 12 logical threads),
      24 cpu-hogging tasks get scheduled as 13 on one socket and 11 on the other
      socket, leaving an idle HT cpu.
      
      b) On a hypothetical 2 socket NHM-EX system (each socket having 8 cores and
      16 logical threads), 16 cpu-hogging tasks can get scheduled as 9 on one
      socket and 7 on the other socket, leaving one core in a socket idle
      while the other socket has a core with both of its HT siblings busy.
      
      While this issue can be fixed by decreasing the domain's imbalance_pct
      (by making it a function of number of logical cpus in the domain), it
      can potentially cause more task migrations across sched groups in an
      overloaded case.
      
      Fix this by using imbalance_pct only during newly_idle and busy
      load balancing. During idle load balancing, instead check whether there
      is an imbalance in the number of idle cpus between the busiest group and this
      sched_group, or whether the busiest group has more tasks than its group
      weight, which an idle cpu in this group can pull.
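      The idle-balance criterion above can be sketched as a simplified
      userspace model (the struct and field names nr_idle, sum_nr_running
      and group_weight are stand-ins approximating the per-group statistics
      the patch compares, not the exact kernel structures):

      ```c
      #include <assert.h>
      #include <stdbool.h>

      /* Simplified per-sched_group statistics. */
      struct group_stats {
          unsigned int nr_idle;        /* idle cpus in the group */
          unsigned int sum_nr_running; /* runnable tasks in the group */
          unsigned int group_weight;   /* number of cpus in the group */
      };

      /*
       * During idle load balancing, declare an imbalance when the busiest
       * group has fewer idle cpus than this group, or when it is running
       * more tasks than it has cpus (so an idle cpu here can pull one),
       * rather than relying on imbalance_pct alone.
       */
      static bool idle_balance_needed(const struct group_stats *busiest,
                                      const struct group_stats *local)
      {
          if (busiest->nr_idle < local->nr_idle)
              return true;
          if (busiest->sum_nr_running > busiest->group_weight)
              return true;
          return false;
      }

      int main(void)
      {
          /* Example a): 12-thread sockets running 13 vs 11 hog tasks. */
          struct group_stats busiest = { 0, 13, 12 };
          struct group_stats local   = { 1, 11, 12 };
          assert(idle_balance_needed(&busiest, &local));

          /* Balanced case: equal idle counts, no group overloaded. */
          struct group_stats b2 = { 1, 11, 12 };
          struct group_stats l2 = { 1, 11, 12 };
          assert(!idle_balance_needed(&b2, &l2));
          return 0;
      }
      ```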
      Reported-by: Nikhil Rao <ncrao@google.com>
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1284760952.2676.11.camel@sbsiddha-MOBL3.sc.intel.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      aae6d3dd
  3. 09 Nov, 2010 5 commits
  4. 08 Nov, 2010 17 commits
  5. 06 Nov, 2010 7 commits
  6. 05 Nov, 2010 8 commits