Commit 0a9b23ce authored by Dietmar Eggemann, committed by Ingo Molnar

sched/fair: Remove stale power aware scheduling comments

Commit 8e7fbcbc ("sched: Remove stale power aware scheduling remnants
and dysfunctional knobs") deleted the power aware scheduling support.

This patch also removes the remaining power aware scheduling comments
from the code.
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Morten Rasmussen <morten.rasmussen@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1461958364-675-2-git-send-email-dietmar.eggemann@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent b52fad2d
@@ -7027,9 +7027,8 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
 	 * We're trying to get all the cpus to the average_load, so we don't
 	 * want to push ourselves above the average load, nor do we wish to
 	 * reduce the max loaded cpu below the average load. At the same time,
-	 * we also don't want to reduce the group load below the group capacity
-	 * (so that we can implement power-savings policies etc). Thus we look
-	 * for the minimum possible imbalance.
+	 * we also don't want to reduce the group load below the group
+	 * capacity. Thus we look for the minimum possible imbalance.
 	 */
 	max_pull = min(busiest->avg_load - sds->avg_load, load_above_capacity);
 
@@ -7053,10 +7052,7 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
 
 /**
  * find_busiest_group - Returns the busiest group within the sched_domain
- * if there is an imbalance. If there isn't an imbalance, and
- * the user has opted for power-savings, it returns a group whose
- * CPUs can be put to idle by rebalancing those tasks elsewhere, if
- * such a group exists.
+ * if there is an imbalance.
  *
  * Also calculates the amount of weighted load which should be moved
  * to restore balance.
@@ -7064,9 +7060,6 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
  * @env: The load balancing environment.
  *
  * Return: - The busiest group if imbalance exists.
- *         - If no imbalance and user has opted for power-savings balance,
- *           return the least loaded group whose CPUs can be
- *           put to idle by rebalancing its tasks onto our group.
  */
 static struct sched_group *find_busiest_group(struct lb_env *env)
 {
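For readers skimming the first hunk, the clamping logic the retained comment
describes boils down to taking the minimum of two bounds: the busiest group's
excess over the domain-wide average load, and the load above the group's
capacity. Below is a minimal stand-alone C sketch of that min() step; the
struct and variable names are simplified stand-ins, not the kernel's actual
struct lb_env / struct sd_lb_stats types.

#include <stdio.h>

/* Simplified stand-in for the relevant sd_lb_stats fields. */
struct group_stats {
	unsigned long avg_load;		/* group's average load */
};

static unsigned long min_ul(unsigned long a, unsigned long b)
{
	return a < b ? a : b;
}

int main(void)
{
	struct group_stats busiest = { .avg_load = 1800 };
	unsigned long sds_avg_load = 1000;	/* domain-wide average load */
	unsigned long load_above_capacity = 500;

	/*
	 * Pull no more than the busiest group's excess over the domain
	 * average, and no more than the load above the group's capacity:
	 * the minimum of the two bounds is the most load we may move.
	 */
	unsigned long max_pull = min_ul(busiest.avg_load - sds_avg_load,
					load_above_capacity);

	printf("max_pull = %lu\n", max_pull);	/* prints 500 here */
	return 0;
}

With the example numbers, the busiest group is 800 above the average but only
500 above its capacity, so the smaller bound (500) wins: pulling more would
drag the group below its capacity, which the comment explicitly rules out.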