1. 23 Apr, 2016 8 commits
    • sched/fair: Optimize !CONFIG_NO_HZ_COMMON CPU load updates · 9fd81dd5
      Frederic Weisbecker authored
      Some code in the CPU load update path only concerns NO_HZ configs,
      but it is built in all configurations. When NO_HZ isn't built, that
      code is harmless but needlessly consumes CPU and memory resources:
      
      1) one useless field in struct rq
      2) a jiffies record on every tick that is never used (cpu_load_update_periodic)
      3) decay_load_missed() is called twice on every tick, only to return
         immediately with no action taken, making it dead code
      
      For pure optimization purposes, let's build the NO_HZ related code
      conditionally.
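      A minimal sketch of the resulting shape, assuming the field and
      function names mentioned above (an illustration, not the exact
      upstream diff):

      	struct rq {
      		/* ... */
      	#ifdef CONFIG_NO_HZ_COMMON
      		/* Only the NO_HZ decay path ever reads this. */
      		unsigned long last_load_update_tick;
      	#endif
      	};

      	static void cpu_load_update_periodic(struct rq *this_rq,
      					     unsigned long load)
      	{
      	#ifdef CONFIG_NO_HZ_COMMON
      		/* Jiffies snapshot, consumed only by NO_HZ decay. */
      		this_rq->last_load_update_tick = READ_ONCE(jiffies);
      	#endif
      		/* One pending update on a periodic tick (assumed helper). */
      		cpu_load_update(this_rq, load, 1);
      	}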
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Byungchul Park <byungchul.park@lge.com>
      Cc: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Luiz Capitulino <lcapitulino@redhat.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1461080211-16271-1-git-send-email-fweisbec@gmail.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • sched/fair: Correctly handle nohz ticks CPU load accounting · 1f41906a
      Frederic Weisbecker authored
      Ticks can happen while the CPU is in dynticks-idle or dynticks-singletask
      mode. In fact "nohz" or "dynticks" only mean that we exit periodic mode
      and try to minimize ticks as much as possible. The nohz subsystem uses
      confusing terminology with the internal state "ts->tick_stopped", which
      is also exposed through its public interface via tick_nohz_tick_stopped().
      This is a misnomer, as the tick is reduced on a best-effort basis rather
      than stopped. In the best case the tick can indeed be stopped entirely,
      but there is no guarantee of that. If a timer needs to fire one second
      later, a tick will fire while the CPU is in nohz mode; this is a very
      common scenario.
      
      This confusion happens to be a problem for CPU load updates:
      cpu_load_update_active() doesn't handle nohz ticks correctly because it
      assumes that ticks are completely stopped in nohz mode and that
      cpu_load_update_active() can't be called in dynticks mode. When a nohz
      tick does happen, the whole previous tickless load is ignored and the
      function just records the load for the current tick, ignoring the
      potentially long idle period behind it.
      
      In order to solve this, we could account the current load for the
      previous nohz time, but there is a risk that we would account the load
      of a task that got freshly enqueued for the whole nohz period.

      So instead, let's record the dynticks load on nohz frame entry so we
      know what to record in case of nohz ticks, then use this record to
      account the tickless load on nohz ticks and on nohz frame end.
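      A hedged sketch of that scheme (the helper and field names here are
      assumptions for illustration, not necessarily the exact upstream ones):

      	/* On nohz entry, snapshot the load so that ticks firing during
      	 * the tickless frame, and the frame's end, can account the
      	 * tickless period from that snapshot. */
      	void cpu_load_update_nohz_start(void)
      	{
      		struct rq *this_rq = this_rq();

      		this_rq->cpu_load[0] = weighted_cpuload(cpu_of(this_rq));
      	}

      	/* Called both from a tick firing in nohz mode and on nohz exit. */
      	static void cpu_load_update_nohz(struct rq *this_rq,
      					 unsigned long curr_jiffies,
      					 unsigned long load)
      	{
      		unsigned long pending_updates;

      		pending_updates = curr_jiffies - this_rq->last_load_update_tick;
      		if (pending_updates) {
      			this_rq->last_load_update_tick = curr_jiffies;
      			cpu_load_update(this_rq, load, pending_updates);
      		}
      	}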
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Byungchul Park <byungchul.park@lge.com>
      Cc: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Luiz Capitulino <lcapitulino@redhat.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1460555812-25375-3-git-send-email-fweisbec@gmail.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • sched/fair: Gather CPU load functions under a more conventional namespace · cee1afce
      Frederic Weisbecker authored
      The CPU load update related functions currently have a weak naming
      convention, starting with update_cpu_load_*(), which isn't ideal as
      "update" is a very generic concept.
      
      Since two of these functions are already public (and a third is to
      come), that's enough to introduce a more conventional naming scheme.
      So let's do the following rename instead:
      
      	update_cpu_load_*() -> cpu_load_update_*()
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Byungchul Park <byungchul.park@lge.com>
      Cc: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Luiz Capitulino <lcapitulino@redhat.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1460555812-25375-2-git-send-email-fweisbec@gmail.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • sched/fair: Call cpufreq hook in additional paths · a2c6c91f
      Steve Muckle authored
      The cpufreq hook should be called any time the root CFS rq utilization
      changes. This can occur when a task is switched to or from the fair
      class, or when a task moves between groups or CPUs, but these paths
      currently do not call the cpufreq hook.
      
      Fix this by adding the hook to attach_entity_load_avg() and
      detach_entity_load_avg().
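      A hedged sketch of where the hook lands (the cpufreq-notify helper
      name below is a placeholder for illustration; the real patch also
      threads an update_freq argument through update_cfs_rq_load_avg() to
      avoid a duplicate call):

      	static void attach_entity_load_avg(struct cfs_rq *cfs_rq,
      					   struct sched_entity *se)
      	{
      		se->avg.last_update_time = cfs_rq->avg.last_update_time;
      		cfs_rq->avg.load_avg += se->avg.load_avg;
      		cfs_rq->avg.util_avg += se->avg.util_avg;

      		/* Root cfs_rq utilization just changed: kick cpufreq. */
      		cpufreq_notify_util(rq_of(cfs_rq));	/* placeholder */
      	}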
      Suggested-by: Vincent Guittot <vincent.guittot@linaro.org>
      Signed-off-by: Steve Muckle <smuckle@linaro.org>
      [ Added the .update_freq argument to update_cfs_rq_load_avg() to avoid a double cpufreq call. ]
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Byungchul Park <byungchul.park@lge.com>
      Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
      Cc: Juri Lelli <Juri.Lelli@arm.com>
      Cc: Michael Turquette <mturquette@baylibre.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Morten Rasmussen <morten.rasmussen@arm.com>
      Cc: Patrick Bellasi <patrick.bellasi@arm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rafael J. Wysocki <rafael@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1458858367-2831-1-git-send-email-smuckle@linaro.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • sched/fair: Do not call cpufreq hook unless util changed · 41e0d37f
      Steve Muckle authored
      There's no reason to call the cpufreq hook if the root cfs_rq
      utilization has not been modified.
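      A hedged sketch of the guard (variable and helper names are assumed
      for illustration):

      	/* In update_cfs_rq_load_avg(): only notify cpufreq when the
      	 * root utilization actually changed, i.e. the running average
      	 * decayed or blocked utilization was removed. */
      	if (update_freq && (decayed || removed_util))
      		cpufreq_notify_util(rq_of(cfs_rq));	/* placeholder */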
      Signed-off-by: Steve Muckle <smuckle@linaro.org>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
      Cc: Juri Lelli <Juri.Lelli@arm.com>
      Cc: Michael Turquette <mturquette@baylibre.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Morten Rasmussen <morten.rasmussen@arm.com>
      Cc: Patrick Bellasi <patrick.bellasi@arm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rafael J. Wysocki <rafael@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vincent Guittot <vincent.guittot@linaro.org>
      Link: http://lkml.kernel.org/r/1458606068-7476-2-git-send-email-smuckle@linaro.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • sched/fair: Move cpufreq hook to update_cfs_rq_load_avg() · 21e96f88
      Steve Muckle authored
      The cpufreq hook should be called whenever the root cfs_rq
      utilization changes, so update_cfs_rq_load_avg() is a better
      place for it. The current location is not invoked in the
      enqueue_entity() or update_blocked_averages() paths.
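      A hedged sketch of the new placement (the notify helper is a
      placeholder; details may differ from the actual patch):

      	static inline int update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
      	{
      		int decayed = 0;

      		/* ... decay the averages, fold in removed load/util ... */

      		/* Every path that changes root utilization funnels
      		 * through here, so this is the one spot to notify. */
      		if (cfs_rq == &rq_of(cfs_rq)->cfs)
      			cpufreq_notify_util(rq_of(cfs_rq));	/* placeholder */

      		return decayed;
      	}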
      Suggested-by: Vincent Guittot <vincent.guittot@linaro.org>
      Signed-off-by: Steve Muckle <smuckle@linaro.org>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
      Cc: Juri Lelli <Juri.Lelli@arm.com>
      Cc: Michael Turquette <mturquette@baylibre.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Morten Rasmussen <morten.rasmussen@arm.com>
      Cc: Patrick Bellasi <patrick.bellasi@arm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rafael J. Wysocki <rafael@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1458606068-7476-1-git-send-email-smuckle@linaro.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • sched/fair: Fix asym packing to select correct CPU · 1f621e02
      Srikar Dronamraju authored
      When asymmetric packing is set in the sched_domain and the target CPU
      is busy, update_sd_pick_busiest() may not select the busiest runqueue.
      When the target CPU is busy, find_busiest_group() will ignore the asym
      packing checks and may continue to load balance using the currently
      selected not-the-busiest runqueue as the source runqueue. Selecting the
      busiest runqueue as the source when the target CPU is busy should
      result in much better load balance.

      Also, when the target CPU is not busy and asymmetric packing is set in
      the sd, select a higher-numbered CPU as the source CPU for load
      balancing.

      While doing this change, move the check for whether the target CPU is
      busy into check_asym_packing().
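      A hedged sketch of the reworked check, based on the description above
      (details may differ from the actual patch):

      	static int check_asym_packing(struct lb_env *env,
      				      struct sd_lb_stats *sds)
      	{
      		int busiest_cpu;

      		if (!(env->sd->flags & SD_ASYM_PACKING))
      			return 0;

      		/* The busy-target check now lives here: asym packing
      		 * only pulls towards an idle destination CPU. */
      		if (env->idle == CPU_NOT_IDLE)
      			return 0;

      		if (!sds->busiest)
      			return 0;

      		/* Only pull from a higher-numbered source CPU towards
      		 * a lower-numbered destination. */
      		busiest_cpu = group_first_cpu(sds->busiest);
      		if (env->dst_cpu > busiest_cpu)
      			return 0;

      		/* Imbalance computation simplified for the sketch. */
      		env->imbalance = sds->busiest_stat.avg_load;
      		return 1;
      	}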
      
      The extent of the performance benefit from this change decreases with
      increasing load; however, there is a benefit in undercommit as well as
      overcommit conditions.
      
      1. ebizzy records per second (32 threads) on a 64-CPU POWER7 box (5 iterations)
      4.6.0-rc2
      	Testcase:         Min         Max         Avg      StdDev
      	  ebizzy:  5223767.00 10368236.00  7946971.00  1753094.76
      
      4.6.0-rc2+asym-changes
      	Testcase:         Min         Max         Avg      StdDev     %Change
      	  ebizzy:  8617191.00 13872356.00 11383980.00  1783400.89     +24.78%
      
      2. ebizzy records per second (64 threads) on a 64-CPU POWER7 box (5 iterations)
      4.6.0-rc2
      	Testcase:         Min         Max         Avg      StdDev
      	  ebizzy:  6497666.00 18399783.00 10818093.20  4051452.08
      
      4.6.0-rc2+asym-changes
      	Testcase:         Min         Max         Avg      StdDev     %Change
      	  ebizzy:  7567365.00 19456937.00 11674063.60  4295407.48      +4.40%
      
      3. ebizzy records per second (128 threads) on a 64-CPU POWER7 box (5 iterations)
      4.6.0-rc2
      	Testcase:         Min         Max         Avg      StdDev
      	  ebizzy: 37073983.00 40341911.00 38776241.80  1259766.82
      
      4.6.0-rc2+asym-changes
      	Testcase:         Min         Max         Avg      StdDev     %Change
      	  ebizzy: 38030399.00 41333378.00 39827404.40  1255001.86      +2.54%
      Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Gautham R Shenoy <ego@linux.vnet.ibm.com>
      Cc: Michael Neuling <mikey@neuling.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
      Link: http://lkml.kernel.org/r/1459948660-16073-1-git-send-email-srikar@linux.vnet.ibm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • 84eaae15
      Ingo Molnar authored
  2. 19 Apr, 2016 1 commit
  3. 18 Apr, 2016 1 commit
  4. 17 Apr, 2016 5 commits
  5. 16 Apr, 2016 7 commits
  6. 15 Apr, 2016 17 commits
  7. 14 Apr, 2016 1 commit
    • dm cache metadata: fix READ_LOCK macros and cleanup WRITE_LOCK macros · 9567366f
      Mike Snitzer authored
      The READ_LOCK macro was incorrectly returning -EINVAL if
      dm_bm_is_read_only() was true -- it will always be true once the cache
      metadata transitions to read-only via dm_cache_metadata_set_read_only().

      Wrap the READ_LOCK and WRITE_LOCK multi-statement macros in
      do {} while (0). Also, all accesses of the 'cmd' argument passed to
      these related macros are now enclosed in parentheses.
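      A hedged sketch of the resulting macro shape (field and helper names
      are assumptions for illustration, not the exact upstream bodies):

      	/* Reads stay legal on read-only metadata, so READ_LOCK only
      	 * fails on fail_io; the multi-statement body is wrapped in
      	 * do { } while (0) and every use of cmd is parenthesized. */
      	#define READ_LOCK(cmd)						\
      		do {							\
      			if (unlikely((cmd)->fail_io))			\
      				return -EINVAL;				\
      			down_read(&(cmd)->root_lock);			\
      		} while (0)

      	#define WRITE_LOCK(cmd)						\
      		do {							\
      			if (unlikely((cmd)->fail_io ||			\
      				     dm_bm_is_read_only((cmd)->bm)))	\
      				return -EINVAL;				\
      			down_write(&(cmd)->root_lock);			\
      		} while (0)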
      
      A follow-up patch can be developed to eliminate the use of macros in
      favor of pure C code. That is avoided for now, given that this fix
      needs to apply to stable@.
      Reported-by: Ben Hutchings <ben@decadent.org.uk>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Fixes: d14fcf3d ("dm cache: make sure every metadata function checks fail_io")
      Cc: stable@vger.kernel.org