Commit a9668cd6 authored by Peter Zijlstra, committed by Ingo Molnar

locking: Remove smp_mb__before_spinlock()

Now that there are no users of smp_mb__before_spinlock() left, remove
it entirely.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent d89e588c
--- a/Documentation/memory-barriers.txt
+++ b/Documentation/memory-barriers.txt
@@ -1981,10 +1981,7 @@ for each construct.  These operations all imply certain barriers:
      ACQUIRE operation has completed.

      Memory operations issued before the ACQUIRE may be completed after
-     the ACQUIRE operation has completed.  An smp_mb__before_spinlock(),
-     combined with a following ACQUIRE, orders prior stores against
-     subsequent loads and stores.  Note that this is weaker than smp_mb()!
-     The smp_mb__before_spinlock() primitive is free on many architectures.
+     the ACQUIRE operation has completed.

  (2) RELEASE operation implication:
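The paragraph removed above described what the primitive added on top of ACQUIRE: ordering of prior stores against the accesses inside the critical section. Below is a hedged sketch of the conversion in-tree users went through (not part of this patch; 'lock', 'flag', 'data' and the function names are made up), with the replacement smp_mb__after_spinlock() coming from the parent commit d89e588c.

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(lock);
static int flag, data;

/* Pre-commit idiom: full ordering supplied *before* taking the lock. */
static int old_idiom(void)
{
	int r;

	WRITE_ONCE(flag, 1);
	smp_mb__before_spinlock();	/* removed by this commit */
	spin_lock(&lock);
	r = READ_ONCE(data);		/* ordered after the store to flag */
	spin_unlock(&lock);
	return r;
}

/* Current idiom: full ordering supplied *inside* the critical section. */
static int new_idiom(void)
{
	int r;

	WRITE_ONCE(flag, 1);
	spin_lock(&lock);
	smp_mb__after_spinlock();	/* introduced by parent commit d89e588c */
	r = READ_ONCE(data);		/* ordered after the store to flag */
	spin_unlock(&lock);
	return r;
}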
--- a/Documentation/translations/ko_KR/memory-barriers.txt
+++ b/Documentation/translations/ko_KR/memory-barriers.txt
@@ -1956,10 +1956,7 @@ MMIO write barrier
      ACQUIRE operation has completed.

      Memory operations issued before the ACQUIRE may be completed after
-     the ACQUIRE operation has completed.  A block of code in which an ACQUIRE
-     runs after an smp_mb__before_spinlock() orders stores before the block
-     against loads and stores after the block.  Note that this is weaker than
-     smp_mb()!  On many architectures smp_mb__before_spinlock() does nothing.
+     the ACQUIRE operation has completed.

  (2) RELEASE operation implication:
--- a/arch/arm64/include/asm/spinlock.h
+++ b/arch/arm64/include/asm/spinlock.h
@@ -358,15 +358,6 @@ static inline int arch_read_trylock(arch_rwlock_t *rw)
 #define arch_read_relax(lock)	cpu_relax()
 #define arch_write_relax(lock)	cpu_relax()

-/*
- * Accesses appearing in program order before a spin_lock() operation
- * can be reordered with accesses inside the critical section, by virtue
- * of arch_spin_lock being constructed using acquire semantics.
- *
- * In cases where this is problematic (e.g. try_to_wake_up), an
- * smp_mb__before_spinlock() can restore the required ordering.
- */
-#define smp_mb__before_spinlock()	smp_mb()
 /* See include/linux/spinlock.h */
 #define smp_mb__after_spinlock()	smp_mb()
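The comment deleted here points out that ACQUIRE is only a one-way barrier, so a store issued before spin_lock() can be reordered past a load inside the critical section. A hedged two-CPU sketch of that hazard, the store-buffering shape behind try_to_wake_up() ('lock', 'x', 'y', 'r0', 'r1' are made-up names):

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(lock);
static int x, y;
static int r0, r1;

static void cpu0(void)
{
	WRITE_ONCE(x, 1);
	spin_lock(&lock);	/* ACQUIRE only: does not order the store above */
	r0 = READ_ONCE(y);	/* may be satisfied before the store to x is visible */
	spin_unlock(&lock);
}

static void cpu1(void)
{
	WRITE_ONCE(y, 1);
	smp_mb();
	r1 = READ_ONCE(x);
}

/*
 * Without a full barrier between CPU0's store and load (the old
 * smp_mb__before_spinlock() before the lock, or smp_mb__after_spinlock()
 * inside it), the outcome r0 == 0 && r1 == 0 is permitted.
 */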
--- a/arch/powerpc/include/asm/barrier.h
+++ b/arch/powerpc/include/asm/barrier.h
@@ -74,13 +74,6 @@ do {									\
 	___p1;								\
 })

-/*
- * This must resolve to hwsync on SMP for the context switch path.
- * See _switch, and core scheduler context switch memory ordering
- * comments.
- */
-#define smp_mb__before_spinlock()   smp_mb()
-
 #include <asm-generic/barrier.h>

 #endif /* _ASM_POWERPC_BARRIER_H */
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -109,27 +109,24 @@ static int userfaultfd_wake_function(wait_queue_entry_t *wq, unsigned mode,
 		goto out;
 	WRITE_ONCE(uwq->waken, true);
 	/*
-	 * The implicit smp_mb__before_spinlock in try_to_wake_up()
-	 * renders uwq->waken visible to other CPUs before the task is
-	 * waken.
+	 * The Program-Order guarantees provided by the scheduler
+	 * ensure uwq->waken is visible before the task is woken.
 	 */
 	ret = wake_up_state(wq->private, mode);
-	if (ret)
+	if (ret) {
 		/*
 		 * Wake only once, autoremove behavior.
 		 *
-		 * After the effect of list_del_init is visible to the
-		 * other CPUs, the waitqueue may disappear from under
-		 * us, see the !list_empty_careful() in
-		 * handle_userfault(). try_to_wake_up() has an
-		 * implicit smp_mb__before_spinlock, and the
-		 * wq->private is read before calling the extern
-		 * function "wake_up_state" (which in turns calls
-		 * try_to_wake_up). While the spin_lock;spin_unlock;
-		 * wouldn't be enough, the smp_mb__before_spinlock is
-		 * enough to avoid an explicit smp_mb() here.
+		 * After the effect of list_del_init is visible to the other
+		 * CPUs, the waitqueue may disappear from under us, see the
+		 * !list_empty_careful() in handle_userfault().
+		 *
+		 * try_to_wake_up() has an implicit smp_mb(), and the
+		 * wq->private is read before calling the extern function
+		 * "wake_up_state" (which in turns calls try_to_wake_up).
 		 */
 		list_del_init(&wq->entry);
+	}
 out:
 	return ret;
 }
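The rewritten comment relies on the full barrier that try_to_wake_up() itself implies. A hedged sketch of the generic waker/sleeper pairing at work here ('cond' and the function names are made up; the waker side mirrors the WRITE_ONCE(uwq->waken, true) followed by wake_up_state() above):

#include <linux/sched.h>

static int cond;

/* Waker: store the condition, then wake; try_to_wake_up() implies smp_mb(). */
static void waker(struct task_struct *sleeper_task)
{
	WRITE_ONCE(cond, 1);
	wake_up_process(sleeper_task);	/* calls try_to_wake_up() */
}

/* Sleeper: the usual prepare-to-wait loop; set_current_state() is a full barrier. */
static void sleeper(void)
{
	for (;;) {
		set_current_state(TASK_INTERRUPTIBLE);
		if (READ_ONCE(cond))
			break;
		schedule();
	}
	__set_current_state(TASK_RUNNING);
}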
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -117,19 +117,6 @@ do {								\
 #endif /*arch_spin_is_contended*/
 #endif

-/*
- * Despite its name it doesn't necessarily has to be a full barrier.
- * It should only guarantee that a STORE before the critical section
- * can not be reordered with LOADs and STOREs inside this section.
- * spin_lock() is the one-way barrier, this LOAD can not escape out
- * of the region. So the default implementation simply ensures that
- * a STORE can not move into the critical section, smp_wmb() should
- * serialize it with another STORE done by spin_lock().
- */
-#ifndef smp_mb__before_spinlock
-#define smp_mb__before_spinlock()	smp_wmb()
-#endif
-
 /*
  * This barrier must provide two things:
  *
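For context, the documented intent of the removed generic default was only that a STORE issued before the lock could not be reordered into the critical section; prior LOADs were never covered, which is why memory-barriers.txt called the primitive weaker than smp_mb(). A hedged sketch of that intent against the pre-commit definition (made-up names; the smp_mb__before_spinlock() call naturally no longer builds after this patch):

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(lock);
static int a, b, c, d;

static void documented_intent(void)
{
	int r0, r1;

	r0 = READ_ONCE(a);		/* prior LOAD: not ordered by the primitive */
	WRITE_ONCE(b, 1);		/* prior STORE: ordered against the section */

	smp_mb__before_spinlock();	/* generic default expanded to smp_wmb() */
	spin_lock(&lock);

	WRITE_ONCE(c, r0);		/* ... against this STORE ... */
	r1 = READ_ONCE(d);		/* ... and this LOAD */

	spin_unlock(&lock);
	(void)r1;
}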