Commit 93edc8bd authored by Waiman Long, committed by Ingo Molnar

locking/pvqspinlock: Kick the PV CPU unconditionally when _Q_SLOW_VAL

If _Q_SLOW_VAL has been set, the vCPU state must have been vcpu_hashed.
The extra check at the end of __pv_queued_spin_unlock() is unnecessary
and can be removed.
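
For illustration, the ordering argument behind this change can be modelled in plain userspace C. This is only a sketch under assumptions (the hashed_cpu variable stands in for the lock/node hash entry, a plain store stands in for the kernel's cmpxchg to _Q_SLOW_VAL, and the constants and CPU number are illustrative); it is not the kernel implementation:

/* Illustrative userspace model only -- not the kernel implementation. */
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define _Q_LOCKED_VAL   1       /* illustrative values */
#define _Q_SLOW_VAL     3

static atomic_uchar locked = _Q_LOCKED_VAL;     /* the lock byte */
static atomic_int hashed_cpu = -1;              /* stands in for the lock/node hash entry */

/* Waiter side, modelling the hashing step in pv_wait_head(). */
static void *waiter(void *arg)
{
        (void)arg;
        /* "pv_hash()": publish which vCPU will need kicking */
        atomic_store_explicit(&hashed_cpu, 7, memory_order_relaxed);
        /* full barrier, then mark the lock slow (the kernel uses a cmpxchg here) */
        atomic_thread_fence(memory_order_seq_cst);
        atomic_store_explicit(&locked, _Q_SLOW_VAL, memory_order_relaxed);
        return NULL;
}

/* Unlocker side, modelling the tail of __pv_queued_spin_unlock(). */
static void *unlocker(void *arg)
{
        (void)arg;
        /* the fast-path release fails once the lock byte is _Q_SLOW_VAL */
        while (atomic_load_explicit(&locked, memory_order_relaxed) != _Q_SLOW_VAL)
                ;
        /* "smp_rmb()": order the hash lookup after observing _Q_SLOW_VAL */
        atomic_thread_fence(memory_order_acquire);
        int cpu = atomic_load_explicit(&hashed_cpu, memory_order_relaxed);
        /* seeing _Q_SLOW_VAL guarantees the entry is visible: kick unconditionally */
        assert(cpu != -1);
        printf("kick vCPU %d\n", cpu);
        atomic_store_explicit(&locked, 0, memory_order_relaxed);
        return NULL;
}

int main(void)
{
        pthread_t w, u;

        pthread_create(&w, NULL, waiter, NULL);
        pthread_create(&u, NULL, unlocker, NULL);
        pthread_join(w, NULL);
        pthread_join(u, NULL);
        return 0;
}

Because the waiter's full barrier sits between publishing the hash entry and writing _Q_SLOW_VAL, any unlocker that reads _Q_SLOW_VAL and then executes the paired read barrier must also observe that entry (and, in the kernel, a node state of vcpu_hashed), so the READ_ONCE(node->state) check removed below could never fail and the kick can be issued unconditionally.
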
Signed-off-by: Waiman Long <Waiman.Long@hpe.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Davidlohr Bueso <dave@stgolabs.net>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Douglas Hatch <doug.hatch@hp.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Scott J Norton <scott.norton@hp.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1441996658-62854-3-git-send-email-Waiman.Long@hpe.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent c55a6ffa
@@ -267,7 +267,6 @@ static void pv_wait_head(struct qspinlock *lock, struct mcs_spinlock *node)
 		}
 
 		if (!lp) { /* ONCE */
-			WRITE_ONCE(pn->state, vcpu_hashed);
 			lp = pv_hash(lock, pn);
 
 			/*
@@ -275,11 +274,9 @@ static void pv_wait_head(struct qspinlock *lock, struct mcs_spinlock *node)
 			 * when we observe _Q_SLOW_VAL in __pv_queued_spin_unlock()
 			 * we'll be sure to be able to observe our hash entry.
 			 *
-			 *   [S] pn->state
 			 *   [S] <hash>                 [Rmw] l->locked == _Q_SLOW_VAL
 			 *       MB                           RMB
 			 *   [RmW] l->locked = _Q_SLOW_VAL  [L] <unhash>
-			 *   [L] pn->state
 			 *
 			 * Matches the smp_rmb() in __pv_queued_spin_unlock().
 			 */
@@ -364,7 +361,6 @@ __visible void __pv_queued_spin_unlock(struct qspinlock *lock)
 	 * vCPU is harmless other than the additional latency in completing
 	 * the unlock.
 	 */
-	if (READ_ONCE(node->state) == vcpu_hashed)
-		pv_kick(node->cpu);
+	pv_kick(node->cpu);
 }
 /*