Commit 2a7a183f authored by Robert Love, committed by James Simmons

[PATCH] fix preempt_count overflow with brlocks

Now that brlocks loop over NR_CPUS, on SMP every br_lock/br_unlock
results in the acquire/release of 32 locks.  This incs/decs the
preempt_count by 32.

Since we now have only 7 bits for actually storing the lock depth, we
cannot nest more than 3 brlocks deep.  I doubt we ever acquire three
brlocks concurrently, but it is still a concern.

Attached patch disables/enables preemption explicitly once and only
once for each lock/unlock.  This is also an optimization as it
removes 31 incs, decs, and conditionals. :)

Problem reported by Andrew Morton.
parent 26113ebe
@@ -24,8 +24,9 @@ void __br_write_lock (enum brlock_indices idx)
 {
 	int i;
 
+	preempt_disable();
 	for (i = 0; i < NR_CPUS; i++)
-		write_lock(&__brlock_array[i][idx]);
+		_raw_write_lock(&__brlock_array[i][idx]);
 }
 
 void __br_write_unlock (enum brlock_indices idx)
@@ -33,7 +34,8 @@ void __br_write_unlock (enum brlock_indices idx)
 	int i;
 
 	for (i = 0; i < NR_CPUS; i++)
-		write_unlock(&__brlock_array[i][idx]);
+		_raw_write_unlock(&__brlock_array[i][idx]);
+	preempt_enable();
 }
 
 #else /* ! __BRLOCK_USE_ATOMICS */
@@ -48,11 +50,12 @@ void __br_write_lock (enum brlock_indices idx)
 {
 	int i;
 
+	preempt_disable();
 again:
-	spin_lock(&__br_write_locks[idx].lock);
+	_raw_spin_lock(&__br_write_locks[idx].lock);
 	for (i = 0; i < NR_CPUS; i++)
 		if (__brlock_array[i][idx] != 0) {
-			spin_unlock(&__br_write_locks[idx].lock);
+			_raw_spin_unlock(&__br_write_locks[idx].lock);
 			barrier();
 			cpu_relax();
 			goto again;