Commit d2212b4d authored by Will Deacon, committed by Linus Torvalds

lockref: allow relaxed cmpxchg64 variant for lockless updates

The 64-bit cmpxchg operation on the lockref is ordered by virtue of
hazarding between the cmpxchg operation and the reference count
manipulation. On weakly ordered memory architectures (such as ARM), it
can be of great benefit to omit the barrier instructions where they are
not needed.

This patch moves the lockless lockref code over to a cmpxchg64_relaxed
operation, which doesn't provide barrier semantics. If the operation
isn't defined, we simply #define it as the usual 64-bit cmpxchg macro.
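As a rough illustration of the pattern (a user-space sketch using C11 atomics rather than the kernel's cmpxchg64_relaxed(), with a made-up "fake_lockref" layout; not the actual lockref code):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

struct fake_lockref {
	_Atomic uint64_t lock_count;	/* low 32 bits: "lock", high 32 bits: count */
};

static bool fake_lockref_get(struct fake_lockref *ref)
{
	uint64_t old = atomic_load_explicit(&ref->lock_count,
					    memory_order_relaxed);

	/* Only attempt the lockless path while the lock half looks unlocked. */
	while ((uint32_t)old == 0) {
		uint64_t new = old + ((uint64_t)1 << 32);	/* bump the count half */

		if (atomic_compare_exchange_weak_explicit(&ref->lock_count,
							  &old, new,
							  memory_order_relaxed,
							  memory_order_relaxed))
			return true;	/* count bumped without taking the lock */
		/* CAS failed: 'old' now holds the reloaded value; retry. */
	}
	return false;	/* lock is held; caller falls back to the spinlock path */
}

The point is that the compare-and-swap only needs to be atomic here, not to act as a memory barrier: the ordering the lockref relies on comes from the data dependency ("hazarding") between the cmpxchg and the reference count manipulation, so the barriers implied by a full cmpxchg64() are redundant on weakly ordered CPUs.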

Cc: Waiman Long <Waiman.Long@hp.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 4b972806
@@ -3,6 +3,14 @@
 
 #ifdef CONFIG_CMPXCHG_LOCKREF
 
+/*
+ * Allow weakly-ordered memory architectures to provide barrier-less
+ * cmpxchg semantics for lockref updates.
+ */
+#ifndef cmpxchg64_relaxed
+# define cmpxchg64_relaxed cmpxchg64
+#endif
+
 /*
  * Note that the "cmpxchg()" reloads the "old" value for the
  * failure case.
@@ -14,8 +22,9 @@
 	while (likely(arch_spin_value_unlocked(old.lock.rlock.raw_lock))) {	\
 		struct lockref new = old, prev = old;				\
 		CODE								\
-		old.lock_count = cmpxchg64(&lockref->lock_count,		\
-					   old.lock_count, new.lock_count);	\
+		old.lock_count = cmpxchg64_relaxed(&lockref->lock_count,	\
+						   old.lock_count,		\
+						   new.lock_count);		\
 		if (likely(old.lock_count == prev.lock_count)) {		\
 			SUCCESS;						\
 		}								\
...