Commit 875ee1e1 authored by Andrew Morton, committed by Linus Torvalds

[PATCH] CPU scheduler balancing fix

From: Nick Piggin <piggin@cyberone.com.au>

The patch changes the imbalance required to trigger balancing from 50% to
25%, as the comments intend.  It also fixes a case where balancing would not
be done when the imbalance was >= 25% but the difference was only one task.
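
For concreteness, a minimal userspace sketch of the two trigger conditions
(the load figures below are made-up values, not kernel data).  With a ~37%
imbalance the old check skips balancing while the new one acts:

#include <stdio.h>

int main(void)
{
	unsigned long max_load = 8, nr_running = 5;	/* ~37% imbalance */

	/* Old check: in effect demands a ~50% imbalance. */
	unsigned long old_imb = (max_load - nr_running) / 2;
	int old_balances = old_imb >= (max_load + 3) / 4;

	/* New check: triggers at ~25%, as the comment intends. */
	unsigned long new_imb = max_load - nr_running;
	int new_balances = new_imb * 4 >= max_load;

	printf("old: %s, new: %s\n",
	       old_balances ? "balance" : "skip",
	       new_balances ? "balance" : "skip");
	return 0;
}

This prints "old: skip, new: balance".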

The downside of the second change is that one task may bounce from one CPU
to another under some loads.  It will bounce at most once every 200ms, so it
shouldn't be a big problem.

(Benchmarking results are basically a wash - SDET improves by maybe 0.5%.)
parent 2b7e8ff7
@@ -995,10 +995,10 @@ static inline runqueue_t *find_busiest_queue(runqueue_t *this_rq, int this_cpu,
 	if (likely(!busiest))
 		goto out;
 
-	*imbalance = (max_load - nr_running) / 2;
+	*imbalance = max_load - nr_running;
 
 	/* It needs an at least ~25% imbalance to trigger balancing. */
-	if (!idle && (*imbalance < (max_load + 3)/4)) {
+	if (!idle && ((*imbalance)*4 < max_load)) {
 		busiest = NULL;
 		goto out;
 	}
@@ -1008,7 +1008,7 @@ static inline runqueue_t *find_busiest_queue(runqueue_t *this_rq, int this_cpu,
 	 * Make sure nothing changed since we checked the
 	 * runqueue length.
 	 */
-	if (busiest->nr_running <= nr_running + 1) {
+	if (busiest->nr_running <= nr_running) {
 		spin_unlock(&busiest->lock);
 		busiest = NULL;
 	}
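
The hunk above relaxes the recheck done after taking the remote queue's lock
to match: a one-task lead is now enough to keep going.  A runnable sketch of
that recheck-under-lock pattern (pthreads and the toy_rq type are simplified
stand-ins for the kernel's locking and runqueue_t, not the real API):

#include <pthread.h>
#include <stdio.h>

/* Toy runqueue: a simplified stand-in for the kernel's runqueue_t. */
struct toy_rq {
	pthread_mutex_t lock;
	int nr_running;
};

/*
 * The remote queue was sampled without its lock held, so its length is
 * revalidated once the lock is taken.  The new condition accepts a
 * one-task lead; the old one demanded two.
 */
struct toy_rq *revalidate(struct toy_rq *busiest, int nr_running)
{
	pthread_mutex_lock(&busiest->lock);
	if (busiest->nr_running <= nr_running) {	/* was: nr_running + 1 */
		pthread_mutex_unlock(&busiest->lock);
		return NULL;
	}
	return busiest;	/* still busiest: returned locked, caller unlocks */
}

int main(void)
{
	struct toy_rq rq = { PTHREAD_MUTEX_INITIALIZER, 5 };
	struct toy_rq *busiest = revalidate(&rq, 4);

	printf("still busiest: %s\n", busiest ? "yes" : "no");
	if (busiest)
		pthread_mutex_unlock(&busiest->lock);
	return 0;
}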
@@ -1057,6 +1057,12 @@ static void load_balance(runqueue_t *this_rq, int idle, cpumask_t cpumask)
 		goto out;
 
 	now = sched_clock();
+	/*
+	 * We only want to steal a number of tasks equal to 1/2 the imbalance,
+	 * otherwise we'll just shift the imbalance to the new queue:
+	 */
+	imbalance /= 2;
+
 	/*
 	 * We first consider expired tasks. Those will likely not be
 	 * executed in the near future, and they are most likely to
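
Finally, a minimal sketch of why only half the imbalance is stolen (the
queue lengths are made-up values): moving all of it would just mirror the
skew onto the other CPU.

#include <stdio.h>

int main(void)
{
	int busiest_load = 8, this_load = 4;
	int imbalance = busiest_load - this_load;	/* 4 */

	/*
	 * Stealing all 4 tasks turns 8/4 into 4/8 -- the imbalance merely
	 * moves.  Stealing imbalance/2 = 2 tasks turns 8/4 into 6/6.
	 */
	imbalance /= 2;
	busiest_load -= imbalance;
	this_load += imbalance;

	printf("after balancing: %d vs %d\n", busiest_load, this_load);
	return 0;
}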