Commit 148086bb authored by Linus Torvalds

Merge branch 'sched-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip

* 'sched-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  sched: Fix rebalance interval calculation
  sched, doc: Beef up load balancing description
  sched: Leave sched_setscheduler() earlier if possible, do not disturb SCHED_FIFO tasks
parents 4da7e90e 3436ae12
Each CPU has a "base" scheduling domain (struct sched_domain). These are Each CPU has a "base" scheduling domain (struct sched_domain). The domain
accessed via cpu_sched_domain(i) and this_sched_domain() macros. The domain
hierarchy is built from these base domains via the ->parent pointer. ->parent hierarchy is built from these base domains via the ->parent pointer. ->parent
MUST be NULL terminated, and domain structures should be per-CPU as they MUST be NULL terminated, and domain structures should be per-CPU as they are
are locklessly updated. locklessly updated.
Each scheduling domain spans a number of CPUs (stored in the ->span field). Each scheduling domain spans a number of CPUs (stored in the ->span field).
A domain's span MUST be a superset of it child's span (this restriction could A domain's span MUST be a superset of it child's span (this restriction could
...@@ -26,11 +25,26 @@ is treated as one entity. The load of a group is defined as the sum of the ...@@ -26,11 +25,26 @@ is treated as one entity. The load of a group is defined as the sum of the
load of each of its member CPUs, and only when the load of a group becomes load of each of its member CPUs, and only when the load of a group becomes
out of balance are tasks moved between groups. out of balance are tasks moved between groups.
In kernel/sched.c, rebalance_tick is run periodically on each CPU. This In kernel/sched.c, trigger_load_balance() is run periodically on each CPU
function takes its CPU's base sched domain and checks to see if has reached through scheduler_tick(). It raises a softirq after the next regularly scheduled
its rebalance interval. If so, then it will run load_balance on that domain. rebalancing event for the current runqueue has arrived. The actual load
rebalance_tick then checks the parent sched_domain (if it exists), and the balancing workhorse, run_rebalance_domains()->rebalance_domains(), is then run
parent of the parent and so forth. in softirq context (SCHED_SOFTIRQ).
The latter function takes two arguments: the current CPU and whether it was idle
at the time the scheduler_tick() happened and iterates over all sched domains
our CPU is on, starting from its base domain and going up the ->parent chain.
While doing that, it checks to see if the current domain has exhausted its
rebalance interval. If so, it runs load_balance() on that domain. It then checks
the parent sched_domain (if it exists), and the parent of the parent and so
forth.
Initially, load_balance() finds the busiest group in the current sched domain.
If it succeeds, it looks for the busiest runqueue of all the CPUs' runqueues in
that group. If it manages to find such a runqueue, it locks both our initial
CPU's runqueue and the newly found busiest one and starts moving tasks from it
to our runqueue. The exact number of tasks amounts to an imbalance previously
computed while iterating over this sched domain's groups.
*** Implementing sched domains *** *** Implementing sched domains ***
The "base" domain will "span" the first level of the hierarchy. In the case The "base" domain will "span" the first level of the hierarchy. In the case
......
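The added documentation above describes how rebalance_domains() climbs a CPU's sched_domain hierarchy via ->parent and runs load_balance() on each domain whose rebalance interval has elapsed. The fragment below is a minimal user-space sketch of that walk, not kernel code: the struct layout, field names, example intervals and the load_balance() stub are simplified stand-ins chosen only to illustrate the description.

/*
 * Illustrative sketch only (plain user-space C, not kernel code):
 * walk a CPU's sched_domain chain from the base domain up the
 * ->parent pointers and "balance" every domain whose rebalance
 * interval has elapsed, as described in the text above.
 */
#include <stdio.h>
#include <stddef.h>

struct sched_domain {
	struct sched_domain *parent;    /* NULL terminates the chain */
	unsigned long last_balance;     /* "jiffies" of the last balance */
	unsigned long balance_interval; /* rebalance interval in "jiffies" */
	const char *name;               /* illustrative label only */
};

/* Stand-in for load_balance(): the real function finds the busiest
 * group, then the busiest runqueue in it, and pulls tasks over. */
static void load_balance(int cpu, struct sched_domain *sd)
{
	printf("cpu%d: balancing domain %s\n", cpu, sd->name);
}

/* Stand-in for rebalance_domains(): iterate base domain -> parent -> ... */
static void rebalance_domains(int cpu, struct sched_domain *base,
			      unsigned long now)
{
	struct sched_domain *sd;

	for (sd = base; sd; sd = sd->parent) {
		if (now - sd->last_balance >= sd->balance_interval) {
			load_balance(cpu, sd);
			sd->last_balance = now;
		}
	}
}

int main(void)
{
	/* A made-up two-level hierarchy: cores inside one package. */
	struct sched_domain pkg  = { NULL, 0, 64, "package" };
	struct sched_domain core = { &pkg, 0,  8, "core" };

	rebalance_domains(0, &core, 100); /* both intervals have expired */
	return 0;
}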
@@ -5011,6 +5011,17 @@ static int __sched_setscheduler(struct task_struct *p, int policy,
 		return -EINVAL;
 	}
 
+	/*
+	 * If not changing anything there's no need to proceed further:
+	 */
+	if (unlikely(policy == p->policy && (!rt_policy(policy) ||
+			param->sched_priority == p->rt_priority))) {
+
+		__task_rq_unlock(rq);
+		raw_spin_unlock_irqrestore(&p->pi_lock, flags);
+		return 0;
+	}
+
 #ifdef CONFIG_RT_GROUP_SCHED
 	if (user) {
 		/*
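For context on the change above: the new early return means a task that calls sched_setscheduler() with the policy and priority it already has is left alone instead of being dequeued and requeued, which matters for running SCHED_FIFO tasks. Below is a small user-space illustration of that call pattern using the POSIX sched_setscheduler(2) interface; the priority value 50 is an arbitrary example, and the program needs permission to set a realtime policy (e.g. run as root).

/* User-space view of the case the patch optimizes: re-asserting the
 * policy/priority the task already has. Priority 50 is arbitrary. */
#include <sched.h>
#include <stdio.h>

int main(void)
{
	struct sched_param sp = { .sched_priority = 50 };

	/* First call actually switches the task to SCHED_FIFO. */
	if (sched_setscheduler(0, SCHED_FIFO, &sp)) {
		perror("sched_setscheduler");
		return 1;
	}

	/* Second call changes nothing; with the patch above the kernel
	 * returns 0 early instead of requeueing the task, so a running
	 * SCHED_FIFO task is not disturbed. */
	if (sched_setscheduler(0, SCHED_FIFO, &sp)) {
		perror("sched_setscheduler");
		return 1;
	}

	puts("policy re-asserted without disturbing the task");
	return 0;
}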
@@ -22,6 +22,7 @@
 #include <linux/latencytop.h>
 #include <linux/sched.h>
+#include <linux/cpumask.h>
 
 /*
  * Targeted preemption latency for CPU-bound tasks:
@@ -3850,8 +3851,8 @@ static void rebalance_domains(int cpu, enum cpu_idle_type idle)
 		interval = msecs_to_jiffies(interval);
 		if (unlikely(!interval))
 			interval = 1;
-		if (interval > HZ*NR_CPUS/10)
-			interval = HZ*NR_CPUS/10;
+		if (interval > HZ*num_online_cpus()/10)
+			interval = HZ*num_online_cpus()/10;
 
 		need_serialize = sd->flags & SD_SERIALIZE;
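The hunk above caps the rebalance interval by the number of online CPUs rather than the compile-time NR_CPUS. The arithmetic below works through the difference using assumed example values for HZ, NR_CPUS and the online-CPU count; none of these numbers come from the commit itself.

/* Worked example for the clamp changed above. HZ, NR_CPUS and the
 * online-CPU count are assumed example values, not taken from any
 * particular kernel configuration. */
#include <stdio.h>

int main(void)
{
	unsigned long hz = 1000;        /* assumed CONFIG_HZ */
	unsigned long nr_cpus = 4096;   /* assumed compile-time NR_CPUS */
	unsigned long online = 8;       /* assumed num_online_cpus() */
	unsigned long interval = 5000;  /* candidate interval, in jiffies */

	unsigned long old_cap = hz * nr_cpus / 10;  /* 409600 jiffies */
	unsigned long new_cap = hz * online / 10;   /*    800 jiffies */

	printf("old cap: %lu jiffies (~%lu s)\n", old_cap, old_cap / hz);
	printf("new cap: %lu jiffies (~%lu s)\n", new_cap, new_cap / hz);

	/* With the old bound a 5000-jiffy interval passed unclamped;
	 * with the fix it is clamped to the much smaller new cap. */
	if (interval > new_cap)
		interval = new_cap;
	printf("clamped interval: %lu jiffies\n", interval);

	return 0;
}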