Commit aa68744f authored by Waiman Long, committed by Ingo Molnar

locking/qspinlock: Avoid redundant read of next pointer

With the optimistic prefetch of the next node's cacheline, the next pointer
may already have been properly initialized by the time the contended path
is reached. As a result, re-reading node->next there may be redundant. This
patch skips the read when the next pointer value is not NULL.
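For context, below is a simplified, non-verbatim sketch of the predecessor-wait path in queued_spin_lock_slowpath() as of the parent change, showing the optimistic load and prefetch that can leave "next" non-NULL before the contended path runs (identifiers follow the kernel's qspinlock code; exact surrounding code is abbreviated):

	if (old & _Q_TAIL_MASK) {
		prev = decode_tail(old);
		WRITE_ONCE(prev->next, node);

		pv_wait_node(node, prev);
		arch_mcs_spin_lock_contended(&node->locked);

		/*
		 * While waiting for the MCS lock, the next pointer may have
		 * been set by another lock waiter. Optimistically load it and
		 * prefetch its cacheline for writing to reduce latency in the
		 * upcoming MCS unlock operation.
		 */
		next = READ_ONCE(node->next);
		if (next)
			prefetchw(next);
	}

If "next" was observed here, the contended path below can release the MCS lock without issuing another read of node->next.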
Signed-off-by: Waiman Long <Waiman.Long@hpe.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Douglas Hatch <doug.hatch@hpe.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Scott J Norton <scott.norton@hpe.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1447114167-47185-4-git-send-email-Waiman.Long@hpe.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent 81b55986
@@ -396,6 +396,7 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	 * p,*,* -> n,*,*
 	 */
 	old = xchg_tail(lock, tail);
+	next = NULL;

 	/*
 	 * if there was a previous node; link it and wait until reaching the
@@ -463,10 +464,12 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	}

 	/*
-	 * contended path; wait for next, release.
+	 * contended path; wait for next if not observed yet, release.
 	 */
-	while (!(next = READ_ONCE(node->next)))
-		cpu_relax();
+	if (!next) {
+		while (!(next = READ_ONCE(node->next)))
+			cpu_relax();
+	}

 	arch_mcs_spin_unlock_contended(&next->locked);
 	pv_kick_node(lock, next);