Commit 14ff4dbd authored by Ingo Molnar

sched/balancing: Rename rebalance_domains() => sched_balance_domains()

Standardize scheduler load-balancing function names on the
sched_balance_() prefix.
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Shrikanth Hegde <sshegde@linux.ibm.com>
Link: https://lore.kernel.org/r/20240308111819.1101550-5-mingo@kernel.org
parent 983be062
@@ -34,7 +34,7 @@ out of balance are tasks moved between groups.
 In kernel/sched/core.c, sched_balance_trigger() is run periodically on each CPU
 through sched_tick(). It raises a softirq after the next regularly scheduled
 rebalancing event for the current runqueue has arrived. The actual load
-balancing workhorse, sched_balance_softirq()->rebalance_domains(), is then run
+balancing workhorse, sched_balance_softirq()->sched_balance_domains(), is then run
 in softirq context (SCHED_SOFTIRQ).

 The latter function takes two arguments: the runqueue of current CPU and whether
...
@@ -36,7 +36,7 @@ shared by CPUs. The intersection of the CPU masks of any two groups is not necessarily empty; if it is
 In kernel/sched/core.c, sched_balance_trigger() runs periodically on each CPU via
 sched_tick(). After the next regularly scheduled rebalancing event for the current
 runqueue has arrived, it raises a softirq. The real load-balancing work is done by
-sched_balance_softirq()->rebalance_domains(), which runs in softirq context
+sched_balance_softirq()->sched_balance_domains(), which runs in softirq context
 (SCHED_SOFTIRQ).
 The latter function takes two arguments: the runqueue of the current CPU and whether
 it was idle at the time of the sched_tick() call. The function will, starting from
...
@@ -42,7 +42,7 @@
  * can take this difference into account during load balance. A per cpu
  * structure is preferred because each CPU updates its own cpu_capacity field
  * during the load balance except for idle cores. One idle core is selected
- * to run the rebalance_domains for all idle cores and the cpu_capacity can be
+ * to run the sched_balance_domains for all idle cores and the cpu_capacity can be
  * updated during this sequence.
  */
...
@@ -11685,7 +11685,7 @@ static inline bool update_newidle_cost(struct sched_domain *sd, u64 cost)
  *
  * Balancing parameters are set up in init_sched_domains.
  */
-static void rebalance_domains(struct rq *rq, enum cpu_idle_type idle)
+static void sched_balance_domains(struct rq *rq, enum cpu_idle_type idle)
 {
 	int continue_balancing = 1;
 	int cpu = rq->cpu;
@@ -12161,7 +12161,7 @@ static void _nohz_idle_balance(struct rq *this_rq, unsigned int flags)
 		rq_unlock_irqrestore(rq, &rf);

 		if (flags & NOHZ_BALANCE_KICK)
-			rebalance_domains(rq, CPU_IDLE);
+			sched_balance_domains(rq, CPU_IDLE);
 	}

 	if (time_after(next_balance, rq->next_balance)) {
@@ -12422,7 +12422,7 @@ static __latent_entropy void sched_balance_softirq(struct softirq_action *h)
 	/*
 	 * If this CPU has a pending NOHZ_BALANCE_KICK, then do the
 	 * balancing on behalf of the other idle CPUs whose ticks are
-	 * stopped. Do nohz_idle_balance *before* rebalance_domains to
+	 * stopped. Do nohz_idle_balance *before* sched_balance_domains to
 	 * give the idle CPUs a chance to load balance. Else we may
 	 * load balance only within the local sched_domain hierarchy
 	 * and abort nohz_idle_balance altogether if we pull some load.
@@ -12432,7 +12432,7 @@ static __latent_entropy void sched_balance_softirq(struct softirq_action *h)

 	/* normal load balance */
 	update_blocked_averages(this_rq->cpu);
-	rebalance_domains(this_rq, idle);
+	sched_balance_domains(this_rq, idle);
 }

 /*
...
@@ -2904,7 +2904,7 @@ extern void cfs_bandwidth_usage_dec(void);
 #define NOHZ_NEWILB_KICK_BIT	2
 #define NOHZ_NEXT_KICK_BIT	3

-/* Run rebalance_domains() */
+/* Run sched_balance_domains() */
 #define NOHZ_BALANCE_KICK	BIT(NOHZ_BALANCE_KICK_BIT)
 /* Update blocked load */
 #define NOHZ_STATS_KICK		BIT(NOHZ_STATS_KICK_BIT)
...