Commit 59aabfc7 authored by Waiman Long, committed by Ingo Molnar

locking/rwsem: Reduce spinlock contention in wakeup after up_read()/up_write()

In up_write()/up_read(), rwsem_wake() will be called whenever it
detects that some writers/readers are waiting. The rwsem_wake()
function will take the wait_lock and call __rwsem_do_wake() to do the
real wakeup.  For a heavily contended rwsem, doing a spin_lock() on
wait_lock will cause further contention on the already heavily
contended rwsem cacheline, delaying completion of the
up_read()/up_write() operations.

This patch makes taking the wait_lock and calling __rwsem_do_wake()
optional when at least one spinning writer is present. The spinning
writer will be able to take the rwsem itself and call rwsem_wake()
later, when it calls up_write(). With a spinning writer present,
rwsem_wake() will now try to acquire the wait_lock using a trylock;
if that fails, it will just quit.
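
For orientation, here is a condensed view of the new fast path in
rwsem_wake(); this merely restates the third hunk of the diff below,
which also carries the full memory-ordering comment:

	if (rwsem_has_spinner(sem)) {
		smp_rmb();	/* read spinner state before wait_lock */
		if (!raw_spin_trylock_irqsave(&sem->wait_lock, flags))
			return sem;	/* spinner will retry the lock later */
		goto locked;
	}
	raw_spin_lock_irqsave(&sem->wait_lock, flags);
locked:
	/* ... wakeup proceeds as before ... */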
Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Waiman Long <Waiman.Long@hp.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Davidlohr Bueso <dave@stgolabs.net>
Acked-by: Jason Low <jason.low2@hp.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Douglas Hatch <doug.hatch@hp.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Scott J Norton <scott.norton@hp.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1430428337-16802-2-git-send-email-Waiman.Long@hp.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent 3e0283a5
--- a/include/linux/osq_lock.h
+++ b/include/linux/osq_lock.h
@@ -32,4 +32,9 @@ static inline void osq_lock_init(struct optimistic_spin_queue *lock)
 extern bool osq_lock(struct optimistic_spin_queue *lock);
 extern void osq_unlock(struct optimistic_spin_queue *lock);
 
+static inline bool osq_is_locked(struct optimistic_spin_queue *lock)
+{
+	return atomic_read(&lock->tail) != OSQ_UNLOCKED_VAL;
+}
+
 #endif
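
A note on the helper above: OSQ_UNLOCKED_VAL is 0, and the OSQ tail
stores (CPU number + 1) of the most recently queued optimistic
spinner, so a nonzero tail means at least one task is actively
spinning. A standalone userspace sketch of the same idea, using C11
atomics (an illustrative analog only, not the kernel's types):

#include <stdatomic.h>
#include <stdbool.h>

#define OSQ_UNLOCKED_VAL 0	/* empty queue: no CPU is spinning */

struct optimistic_spin_queue {
	atomic_int tail;	/* (cpu + 1) of last queued spinner, or 0 */
};

static inline bool osq_is_locked(struct optimistic_spin_queue *lock)
{
	/* any nonzero tail means at least one spinner is queued */
	return atomic_load(&lock->tail) != OSQ_UNLOCKED_VAL;
}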
--- a/kernel/locking/rwsem-xadd.c
+++ b/kernel/locking/rwsem-xadd.c
@@ -409,11 +409,24 @@ static bool rwsem_optimistic_spin(struct rw_semaphore *sem)
 	return taken;
 }
 
+/*
+ * Return true if the rwsem has active spinner
+ */
+static inline bool rwsem_has_spinner(struct rw_semaphore *sem)
+{
+	return osq_is_locked(&sem->osq);
+}
+
 #else
 static bool rwsem_optimistic_spin(struct rw_semaphore *sem)
 {
 	return false;
 }
+
+static inline bool rwsem_has_spinner(struct rw_semaphore *sem)
+{
+	return false;
+}
 #endif
 
 /*
@@ -496,7 +509,38 @@ struct rw_semaphore *rwsem_wake(struct rw_semaphore *sem)
 {
 	unsigned long flags;
 
+	/*
+	 * If a spinner is present, it is not necessary to do the wakeup.
+	 * Try to do wakeup only if the trylock succeeds to minimize
+	 * spinlock contention which may introduce too much delay in the
+	 * unlock operation.
+	 *
+	 *    spinning writer           up_write/up_read caller
+	 *    ---------------           -----------------------
+	 * [S]   osq_unlock()           [L]   osq
+	 *       MB                           RMB
+	 * [RmW] rwsem_try_write_lock() [RmW] spin_trylock(wait_lock)
+	 *
+	 * Here, it is important to make sure that there won't be a missed
+	 * wakeup while the rwsem is free and the only spinning writer goes
+	 * to sleep without taking the rwsem. Even when the spinning writer
+	 * is just going to break out of the waiting loop, it will still do
+	 * a trylock in rwsem_down_write_failed() before sleeping. IOW, if
+	 * rwsem_has_spinner() is true, it will guarantee at least one
+	 * trylock attempt on the rwsem later on.
+	 */
+	if (rwsem_has_spinner(sem)) {
+		/*
+		 * The smp_rmb() here is to make sure that the spinner
+		 * state is consulted before reading the wait_lock.
+		 */
+		smp_rmb();
+		if (!raw_spin_trylock_irqsave(&sem->wait_lock, flags))
+			return sem;
+		goto locked;
+	}
 	raw_spin_lock_irqsave(&sem->wait_lock, flags);
+locked:
 
 	/* do nothing if list empty */
 	if (!list_empty(&sem->wait_list))
...
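
To make the barrier pairing in the comment above concrete, here is a
standalone userspace analog of the wakeup fast path using C11 atomics
and pthreads. Everything here (the names, the flag, the pthread mutex)
is a stand-in for illustration, not the kernel implementation; in the
kernel, smp_rmb() pairs with the full barrier in the spinner's
osq_unlock()/trylock sequence:

#include <stdatomic.h>
#include <stdbool.h>
#include <pthread.h>

static atomic_bool spinner_present;	/* stand-in for osq_is_locked() */
static pthread_mutex_t wait_lock = PTHREAD_MUTEX_INITIALIZER;

static void do_wake(void)
{
	/* stand-in for __rwsem_do_wake(): wake the queued waiters */
}

/* Stand-in for rwsem_wake(): skip the wakeup when a spinner is present
 * and the wait_lock cannot be taken cheaply. */
static void wake_path(void)
{
	if (atomic_load(&spinner_present)) {
		/* The seq_cst load above plays the role of the kernel's
		 * smp_rmb(): the spinner state is read before we touch
		 * the wait_lock. */
		if (pthread_mutex_trylock(&wait_lock) != 0)
			return;	/* bail: the spinner will retry the lock
				 * and issue the wakeup from up_write() */
	} else {
		pthread_mutex_lock(&wait_lock);
	}
	do_wake();
	pthread_mutex_unlock(&wait_lock);
}

The safety argument carries over: a task that clears spinner_present
must still attempt a trylock on the semaphore afterwards, which
guarantees that waiters are noticed even when wake_path() bails out.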