Commit 0bce9c46 authored by Will Deacon, committed by Thomas Gleixner

mutex: Place lock in contended state after fastpath_lock failure

ARM recently moved to asm-generic/mutex-xchg.h for its mutex
implementation after the previous implementation was found to be missing
some crucial memory barriers. However, this has revealed some problems
running hackbench on SMP platforms due to the way in which the
MUTEX_SPIN_ON_OWNER code operates.

The symptoms are that a bunch of hackbench tasks are left waiting on an
unlocked mutex and therefore never get woken up to claim it. This boils
down to the following sequence of events:

        Task A        Task B        Task C        Lock value
0                                                     1
1       lock()                                        0
2                     lock()                          0
3                     spin(A)                         0
4       unlock()                                      1
5                                   lock()            0
6                     cmpxchg(1,0)                    0
7                     contended()                    -1
8       lock()                                        0
9       spin(C)                                       0
10                                  unlock()          1
11      cmpxchg(1,0)                                  0
12      unlock()                                      1

At this point, the lock is unlocked, but Task B is in an uninterruptible
sleep with nobody to wake it up.
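The root cause lies in __mutex_fastpath_lock() from asm-generic/mutex-xchg.h: a failed lock attempt does an unconditional xchg to 0, silently overwriting the -1 "contended" value left behind by a sleeping waiter, so the xchg-based unlock fastpath never sees a negative count and never enters the wakeup slow path. Below is a simplified C11 model of the two fastpaths involved (an illustration of the semantics described above, not the kernel source; the function names and slow-path stubs are placeholders):

    #include <stdatomic.h>

    /* Lock values: 1 = unlocked, 0 = locked, -1 = locked with waiters. */
    static atomic_int count = 1;

    /* Placeholder slow paths: the real ones queue the task / wake waiters.
     * Only the points at which they are (not) reached matter here. */
    static void lock_slowpath(void)   { /* marks count contended, sleeps */ }
    static void unlock_slowpath(void) { /* wakes up one sleeping waiter  */ }

    static void model_mutex_lock(void)
    {
            /* Pre-fix fastpath: a failed attempt overwrites the counter
             * with 0, erasing a -1 left there by a sleeping waiter
             * (step 8 in the trace above). */
            if (atomic_exchange(&count, 0) != 1)
                    lock_slowpath();
    }

    static void model_mutex_unlock(void)
    {
            /* The unlock fastpath only takes the wakeup slow path when the
             * old value was not 0 (i.e. negative), so once the -1 has been
             * lost the sleeping waiter is never woken (steps 10 and 12). */
            if (atomic_exchange(&count, 1) != 0)
                    unlock_slowpath();
    }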

This patch fixes the problem by putting the lock into the contended
state if we fail to acquire it on the fastpath, ensuring that any
blocked waiters are woken up when the mutex is released.
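In the simplified model above, the fixed lock fastpath retries with -1 so the contended state is preserved (again only an illustration; the actual change is the hunk below). If the second xchg returns 1, the mutex was released in the meantime and has in fact been acquired, so fail_fn() must not be called; marking it -1 in that case is harmless, since the eventual unlock merely takes the slow path and finds no waiters:

    static void model_mutex_lock_fixed(void)
    {
            if (atomic_exchange(&count, 0) != 1)
                    /* First attempt failed: mark the lock contended so a
                     * later unlock sees a negative count and takes the
                     * wakeup slow path. */
                    if (atomic_exchange(&count, -1) != 1)
                            lock_slowpath();
    }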
Signed-off-by: Will Deacon <will.deacon@arm.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Chris Mason <chris.mason@fusionio.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: <stable@vger.kernel.org>
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/n/tip-6e9lrw2avczr0617fzl5vqb8@git.kernel.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
parent 3bf671af
--- a/include/asm-generic/mutex-xchg.h
+++ b/include/asm-generic/mutex-xchg.h
@@ -26,6 +26,12 @@ static inline void
 __mutex_fastpath_lock(atomic_t *count, void (*fail_fn)(atomic_t *))
 {
         if (unlikely(atomic_xchg(count, 0) != 1))
+                /*
+                 * We failed to acquire the lock, so mark it contended
+                 * to ensure that any waiting tasks are woken up by the
+                 * unlock slow path.
+                 */
+                if (likely(atomic_xchg(count, -1) != 1))
                 fail_fn(count);
 }
@@ -43,6 +49,7 @@ static inline int
 __mutex_fastpath_lock_retval(atomic_t *count, int (*fail_fn)(atomic_t *))
 {
         if (unlikely(atomic_xchg(count, 0) != 1))
+                if (likely(atomic_xchg(count, -1) != 1))
                 return fail_fn(count);
         return 0;
 }