Commit 632eb5ca authored by Andrew Morton, committed by Linus Torvalds

[PATCH] sched: less locking in balancing

From: Nick Piggin <nickpiggin@yahoo.com.au>

Analysis and basic idea from Suresh Siddha <suresh.b.siddha@intel.com>

"This small change in load_balance() brings the performance back upto base
scheduler(infact I see a ~1.5% performance improvement now).  Basically
this fix removes the unnecessary double_lock.."

Workload is SpecJBB on 16-way Altix.
parent 44069c37
@@ -1685,12 +1685,20 @@ static int load_balance(int this_cpu, runqueue_t *this_rq,
 		goto out_balanced;
 	}
 
-	/* Attempt to move tasks */
-	double_lock_balance(this_rq, busiest);
-
-	nr_moved = move_tasks(this_rq, this_cpu, busiest, imbalance, sd, idle);
+	nr_moved = 0;
+	if (busiest->nr_running > 1) {
+		/*
+		 * Attempt to move tasks. If find_busiest_group has found
+		 * an imbalance but busiest->nr_running <= 1, the group is
+		 * still unbalanced. nr_moved simply stays zero, so it is
+		 * correctly treated as an imbalance.
+		 */
+		double_lock_balance(this_rq, busiest);
+		nr_moved = move_tasks(this_rq, this_cpu, busiest,
+					imbalance, sd, idle);
+		spin_unlock(&busiest->lock);
+	}
 	spin_unlock(&this_rq->lock);
-	spin_unlock(&busiest->lock);
 
 	if (!nr_moved) {
 		sd->nr_balance_failed++;
...
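For context, double_lock_balance() exists because load_balance() already holds
this_rq->lock and must also take busiest->lock; taking two runqueue locks in an
arbitrary order could deadlock two CPUs balancing against each other. Below is a
minimal user-space sketch of that lock-ordering idea using pthreads. The struct,
field names, and the main() driver are illustrative assumptions for this sketch,
not the kernel's code.

/*
 * User-space sketch (illustration only, not kernel code) of the
 * lock-ordering idea behind double_lock_balance(): the caller already
 * holds this_rq->lock; if busiest->lock cannot be taken immediately,
 * drop and retake both locks in a fixed (address) order so that two
 * CPUs balancing against each other cannot deadlock.
 */
#include <pthread.h>

struct rq {
	pthread_mutex_t lock;
	int nr_running;
};

/* Caller holds this_rq->lock, as load_balance() does. */
static void double_lock_balance(struct rq *this_rq, struct rq *busiest)
{
	if (pthread_mutex_trylock(&busiest->lock) != 0) {
		if (busiest < this_rq) {
			/* Wrong order: drop ours, retake both in order. */
			pthread_mutex_unlock(&this_rq->lock);
			pthread_mutex_lock(&busiest->lock);
			pthread_mutex_lock(&this_rq->lock);
		} else {
			pthread_mutex_lock(&busiest->lock);
		}
	}
}

int main(void)
{
	struct rq a = { PTHREAD_MUTEX_INITIALIZER, 0 };
	struct rq b = { PTHREAD_MUTEX_INITIALIZER, 2 };

	pthread_mutex_lock(&a.lock);	/* this_rq->lock */
	if (b.nr_running > 1) {		/* the check this patch adds */
		double_lock_balance(&a, &b);
		/* move_tasks() would run here */
		pthread_mutex_unlock(&b.lock);
	}
	pthread_mutex_unlock(&a.lock);
	return 0;
}

The point of the patch is that when busiest->nr_running <= 1 there is nothing
move_tasks() could pull anyway, so this whole dance, including the possible
drop-and-retake of this_rq->lock, is skipped entirely.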