Commit 701f3b31 authored by Linus Torvalds

Merge branch 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull locking updates from Ingo Molnar:
 "The main changes in the locking subsystem in this cycle were:

   - Add the Linux Kernel Memory Consistency Model (LKMM) subsystem,
     which is an array of tools in tools/memory-model/ that formally
     describe the Linux memory coherency model (a.k.a.
     Documentation/memory-barriers.txt), and also produce 'litmus tests'
     in the form of kernel code which can be directly executed and tested.

     Here's a high level background article about an earlier version of
     this work on LWN.net:

        https://lwn.net/Articles/718628/

     The design principles:

      "There is reason to believe that Documentation/memory-barriers.txt
       could use some help, and a major purpose of this patch is to
       provide that help in the form of a design-time tool that can
       produce all valid executions of a small fragment of concurrent
       Linux-kernel code, which is called a "litmus test". This tool's
       functionality is roughly similar to a full state-space search.
       Please note that this is a design-time tool, not useful for
       regression testing. However, we hope that the underlying
       Linux-kernel memory model will be incorporated into other tools
       capable of analyzing large bodies of code for regression-testing
       purposes."

     [...]

      "A second tool is klitmus7, which converts litmus tests to
       loadable kernel modules for direct testing. As with herd7, the
       klitmus7 code is freely available from

         http://diy.inria.fr/sources/index.html

       (and via "git" at https://github.com/herd/herdtools7)"

     [...]

     Credits go to:

      "This patch was the result of a most excellent collaboration
       founded by Jade Alglave and also including Alan Stern, Andrea
       Parri, and Luc Maranget."

     ... and to the gents listed in the MAINTAINERS entry:

        LINUX KERNEL MEMORY CONSISTENCY MODEL (LKMM)
        M:      Alan Stern <stern@rowland.harvard.edu>
        M:      Andrea Parri <parri.andrea@gmail.com>
        M:      Will Deacon <will.deacon@arm.com>
        M:      Peter Zijlstra <peterz@infradead.org>
        M:      Boqun Feng <boqun.feng@gmail.com>
        M:      Nicholas Piggin <npiggin@gmail.com>
        M:      David Howells <dhowells@redhat.com>
        M:      Jade Alglave <j.alglave@ucl.ac.uk>
        M:      Luc Maranget <luc.maranget@inria.fr>
        M:      "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

     The LKMM project has already found several bugs in Linux locking
     primitives and has improved the understanding and documentation of
     the Linux memory model all around.

   - Add KASAN instrumentation to atomic APIs (Dmitry Vyukov)

   - Add RWSEM API debugging and reorganize the lock debugging Kconfig
     (Waiman Long)

   - ... misc cleanups and other smaller changes"
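
For readers who have not seen one, a litmus test in the herd7 input format
looks roughly like the following message-passing example (a sketch in the
style of the tests under tools/memory-model/litmus-tests/; the exact file
name is illustrative):

        C MP+pooncerelease+poacquireonce

        {}

        P0(int *x, int *y)
        {
                WRITE_ONCE(*x, 1);
                smp_store_release(y, 1);
        }

        P1(int *x, int *y)
        {
                int r0;
                int r1;

                r0 = smp_load_acquire(y);
                r1 = READ_ONCE(*x);
        }

        exists (1:r0=1 /\ 1:r1=0)

Fed to herd7 with the LKMM definitions, this enumerates all valid executions
and reports whether the "exists" clause (P1 seeing the flag but not the data)
can ever be satisfied; klitmus7 can turn the same file into a loadable kernel
module that runs the test on real hardware.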

* 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (31 commits)
  locking/Kconfig: Restructure the lock debugging menu
  locking/Kconfig: Add LOCK_DEBUGGING_SUPPORT to make it more readable
  locking/rwsem: Add DEBUG_RWSEMS to look for lock/unlock mismatches
  lockdep: Make the lock debug output more useful
  locking/rtmutex: Handle non enqueued waiters gracefully in remove_waiter()
  locking/atomic, asm-generic, x86: Add comments for atomic instrumentation
  locking/atomic, asm-generic: Add KASAN instrumentation to atomic operations
  locking/atomic/x86: Switch atomic.h to use atomic-instrumented.h
  locking/atomic, asm-generic: Add asm-generic/atomic-instrumented.h
  locking/xchg/alpha: Remove superfluous memory barriers from the _local() variants
  tools/memory-model: Finish the removal of rb-dep, smp_read_barrier_depends(), and lockless_dereference()
  tools/memory-model: Add documentation of new litmus test
  tools/memory-model: Remove mention of docker/gentoo image
  locking/memory-barriers: De-emphasize smp_read_barrier_depends() some more
  locking/lockdep: Show unadorned pointers
  mutex: Drop linkage.h from mutex.h
  tools/memory-model: Remove rb-dep, smp_read_barrier_depends, and lockless_dereference
  tools/memory-model: Convert underscores to hyphens
  tools/memory-model: Add a S lock-based external-view litmus test
  tools/memory-model: Add required herd7 version to README file
  ...
parents 8747a291 19193bca
@@ -27,7 +27,8 @@ lock-class.
 State
 -----
-The validator tracks lock-class usage history into 4n + 1 separate state bits:
+The validator tracks lock-class usage history into 4 * nSTATEs + 1 separate
+state bits:
 - 'ever held in STATE context'
 - 'ever held as readlock in STATE context'
@@ -37,7 +38,6 @@ The validator tracks lock-class usage history into 4n + 1 separate state bits:
 Where STATE can be either one of (kernel/locking/lockdep_states.h)
 - hardirq
 - softirq
-- reclaim_fs
 - 'ever used' [ == !unused ]
@@ -169,6 +169,53 @@ Note: When changing code to use the _nested() primitives, be careful and
 check really thoroughly that the hierarchy is correctly mapped; otherwise
 you can get false positives or false negatives.
 
+Annotations
+-----------
+
+Two constructs can be used to annotate and check where and if certain locks
+must be held: lockdep_assert_held*(&lock) and lockdep_*pin_lock(&lock).
+
+As the name suggests, lockdep_assert_held* family of macros assert that a
+particular lock is held at a certain time (and generate a WARN() otherwise).
+This annotation is largely used all over the kernel, e.g. kernel/sched/
+core.c
+
+  void update_rq_clock(struct rq *rq)
+  {
+        s64 delta;
+
+        lockdep_assert_held(&rq->lock);
+        [...]
+  }
+
+where holding rq->lock is required to safely update a rq's clock.
+
+The other family of macros is lockdep_*pin_lock(), which is admittedly only
+used for rq->lock ATM. Despite their limited adoption these annotations
+generate a WARN() if the lock of interest is "accidentally" unlocked. This turns
+out to be especially helpful to debug code with callbacks, where an upper
+layer assumes a lock remains taken, but a lower layer thinks it can maybe drop
+and reacquire the lock ("unwittingly" introducing races). lockdep_pin_lock()
+returns a 'struct pin_cookie' that is then used by lockdep_unpin_lock() to check
+that nobody tampered with the lock, e.g. kernel/sched/sched.h
+
+  static inline void rq_pin_lock(struct rq *rq, struct rq_flags *rf)
+  {
+        rf->cookie = lockdep_pin_lock(&rq->lock);
+        [...]
+  }
+
+  static inline void rq_unpin_lock(struct rq *rq, struct rq_flags *rf)
+  {
+        [...]
+        lockdep_unpin_lock(&rq->lock, rf->cookie);
+  }
+
+While comments about locking requirements might provide useful information,
+the runtime checks performed by annotations are invaluable when debugging
+locking problems and they carry the same level of details when inspecting
+code. Always prefer annotations when in doubt!
+
 Proof of 100% correctness:
 --------------------------
......
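
To make the pinning annotations above concrete, here is a minimal,
hypothetical sketch (not part of this patch; buggy_balance_callback() is a
made-up name, and the behaviour described assumes lockdep's usual warning
when a pinned lock is released) of the failure mode the cookie catches:

        /* Sketch only: rq and rq->lock are the scheduler structures shown above. */
        static void buggy_balance_callback(struct rq *rq)
        {
                /*
                 * The caller did rq_pin_lock(); unlocking a pinned lock makes
                 * lockdep WARN at this unlock, instead of letting the caller
                 * silently race against whatever ran while the lock was dropped.
                 */
                raw_spin_unlock(&rq->lock);
                cpu_relax();            /* stand-in for the "slow work" */
                raw_spin_lock(&rq->lock);
        }
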
@@ -14,7 +14,11 @@ DISCLAIMER
 This document is not a specification; it is intentionally (for the sake of
 brevity) and unintentionally (due to being human) incomplete. This document is
 meant as a guide to using the various memory barriers provided by Linux, but
-in case of any doubt (and there are many) please ask.
+in case of any doubt (and there are many) please ask. Some doubts may be
+resolved by referring to the formal memory consistency model and related
+documentation at tools/memory-model/. Nevertheless, even this memory
+model should be viewed as the collective opinion of its maintainers rather
+than as an infallible oracle.
 
 To repeat, this document is not a specification of what Linux expects from
 hardware.
@@ -48,7 +52,7 @@ CONTENTS
 - Varieties of memory barrier.
 - What may not be assumed about memory barriers?
-- Data dependency barriers.
+- Data dependency barriers (historical).
 - Control dependencies.
 - SMP barrier pairing.
 - Examples of memory barrier sequences.
@@ -399,7 +403,7 @@ Memory barriers come in four basic varieties:
 where two loads are performed such that the second depends on the result
 of the first (eg: the first load retrieves the address to which the second
 load will be directed), a data dependency barrier would be required to
-make sure that the target of the second load is updated before the address
+make sure that the target of the second load is updated after the address
 obtained by the first load is accessed.
 
 A data dependency barrier is a partial ordering on interdependent loads
@@ -550,8 +554,15 @@ There are certain things that the Linux kernel memory barriers do not guarantee:
 	Documentation/DMA-API.txt
 
-DATA DEPENDENCY BARRIERS
-------------------------
+DATA DEPENDENCY BARRIERS (HISTORICAL)
+-------------------------------------
+
+As of v4.15 of the Linux kernel, an smp_read_barrier_depends() was
+added to READ_ONCE(), which means that about the only people who
+need to pay attention to this section are those working on DEC Alpha
+architecture-specific code and those working on READ_ONCE() itself.
+For those who need it, and for those who are interested in the history,
+here is the story of data-dependency barriers.
 
 The usage requirements of data dependency barriers are a little subtle, and
 it's not always obvious that they're needed. To illustrate, consider the
@@ -2839,8 +2850,9 @@ as that committed on CPU 1.
 To intervene, we need to interpolate a data dependency barrier or a read
-barrier between the loads. This will force the cache to commit its coherency
-queue before processing any further requests:
+barrier between the loads (which as of v4.15 is supplied unconditionally
+by the READ_ONCE() macro). This will force the cache to commit its
+coherency queue before processing any further requests:
 
 	CPU 1		CPU 2		COMMENT
 	===============	===============	=======================================
@@ -2869,8 +2881,8 @@ Other CPUs may also have split caches, but must coordinate between the various
 cachelets for normal memory accesses. The semantics of the Alpha removes the
 need for hardware coordination in the absence of memory barriers, which
 permitted Alpha to sport higher CPU clock rates back in the day. However,
-please note that smp_read_barrier_depends() should not be used except in
-Alpha arch-specific code and within the READ_ONCE() macro.
+please note that (again, as of v4.15) smp_read_barrier_depends() should not
+be used except in Alpha arch-specific code and within the READ_ONCE() macro.
 
 CACHE COHERENCY VS DMA
@@ -3035,7 +3047,9 @@ the data dependency barrier really becomes necessary as this synchronises both
 caches with the memory coherence system, thus making it seem like pointer
 changes vs new data occur in the right order.
 
-The Alpha defines the Linux kernel's memory barrier model.
+The Alpha defines the Linux kernel's memory model, although as of v4.15
+the Linux kernel's addition of smp_read_barrier_depends() to READ_ONCE()
+greatly reduced Alpha's impact on the memory model.
 
 See the subsection on "Cache Coherency" above.
......
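
The practical consequence of the (HISTORICAL) marking above: ordinary
pointer-publishing code no longer needs an explicit data dependency barrier
on the reader side, because READ_ONCE() (and therefore rcu_dereference())
now supplies it even on Alpha. A minimal sketch, with made-up names
(struct foo, gp, publish(), consume()):

        struct foo {
                int data;
        };

        static struct foo *gp;                  /* made-up global pointer */

        static void publish(struct foo *p)
        {
                p->data = 42;
                smp_store_release(&gp, p);      /* pairs with the dependent load */
        }

        static int consume(void)
        {
                struct foo *q = READ_ONCE(gp);  /* no smp_read_barrier_depends() needed */

                return q ? q->data : -1;        /* address dependency orders the loads */
        }
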
@@ -8162,6 +8162,24 @@ M:	Kees Cook <keescook@chromium.org>
 S:      Maintained
 F:      drivers/misc/lkdtm*
 
+LINUX KERNEL MEMORY CONSISTENCY MODEL (LKMM)
+M:      Alan Stern <stern@rowland.harvard.edu>
+M:      Andrea Parri <parri.andrea@gmail.com>
+M:      Will Deacon <will.deacon@arm.com>
+M:      Peter Zijlstra <peterz@infradead.org>
+M:      Boqun Feng <boqun.feng@gmail.com>
+M:      Nicholas Piggin <npiggin@gmail.com>
+M:      David Howells <dhowells@redhat.com>
+M:      Jade Alglave <j.alglave@ucl.ac.uk>
+M:      Luc Maranget <luc.maranget@inria.fr>
+M:      "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
+R:      Akira Yokosawa <akiyks@gmail.com>
+L:      linux-kernel@vger.kernel.org
+S:      Supported
+T:      git git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git
+F:      tools/memory-model/
+F:      Documentation/memory-barriers.txt
+
 LINUX SECURITY MODULE (LSM) FRAMEWORK
 M:      Chris Wright <chrisw@sous-sol.org>
 L:      linux-security-module@vger.kernel.org
......
@@ -38,19 +38,31 @@
 #define ____cmpxchg(type, args...) __cmpxchg ##type(args)
 #include <asm/xchg.h>
 
+/*
+ * The leading and the trailing memory barriers guarantee that these
+ * operations are fully ordered.
+ */
 #define xchg(ptr, x) \
 ({ \
+	__typeof__(*(ptr)) __ret; \
 	__typeof__(*(ptr)) _x_ = (x); \
-	(__typeof__(*(ptr))) __xchg((ptr), (unsigned long)_x_, \
-		sizeof(*(ptr))); \
+	smp_mb(); \
+	__ret = (__typeof__(*(ptr))) \
+		__xchg((ptr), (unsigned long)_x_, sizeof(*(ptr))); \
+	smp_mb(); \
+	__ret; \
 })
 
 #define cmpxchg(ptr, o, n) \
 ({ \
+	__typeof__(*(ptr)) __ret; \
 	__typeof__(*(ptr)) _o_ = (o); \
 	__typeof__(*(ptr)) _n_ = (n); \
-	(__typeof__(*(ptr))) __cmpxchg((ptr), (unsigned long)_o_, \
-		(unsigned long)_n_, sizeof(*(ptr)));\
+	smp_mb(); \
+	__ret = (__typeof__(*(ptr))) __cmpxchg((ptr), \
+		(unsigned long)_o_, (unsigned long)_n_, sizeof(*(ptr)));\
+	smp_mb(); \
+	__ret; \
 })
 
 #define cmpxchg64(ptr, o, n) \
......
...@@ -12,10 +12,6 @@ ...@@ -12,10 +12,6 @@
* Atomic exchange. * Atomic exchange.
* Since it can be used to implement critical sections * Since it can be used to implement critical sections
* it must clobber "memory" (also for interrupts in UP). * it must clobber "memory" (also for interrupts in UP).
*
* The leading and the trailing memory barriers guarantee that these
* operations are fully ordered.
*
*/ */
static inline unsigned long static inline unsigned long
...@@ -23,7 +19,6 @@ ____xchg(_u8, volatile char *m, unsigned long val) ...@@ -23,7 +19,6 @@ ____xchg(_u8, volatile char *m, unsigned long val)
{ {
unsigned long ret, tmp, addr64; unsigned long ret, tmp, addr64;
smp_mb();
__asm__ __volatile__( __asm__ __volatile__(
" andnot %4,7,%3\n" " andnot %4,7,%3\n"
" insbl %1,%4,%1\n" " insbl %1,%4,%1\n"
...@@ -38,7 +33,6 @@ ____xchg(_u8, volatile char *m, unsigned long val) ...@@ -38,7 +33,6 @@ ____xchg(_u8, volatile char *m, unsigned long val)
".previous" ".previous"
: "=&r" (ret), "=&r" (val), "=&r" (tmp), "=&r" (addr64) : "=&r" (ret), "=&r" (val), "=&r" (tmp), "=&r" (addr64)
: "r" ((long)m), "1" (val) : "memory"); : "r" ((long)m), "1" (val) : "memory");
smp_mb();
return ret; return ret;
} }
...@@ -48,7 +42,6 @@ ____xchg(_u16, volatile short *m, unsigned long val) ...@@ -48,7 +42,6 @@ ____xchg(_u16, volatile short *m, unsigned long val)
{ {
unsigned long ret, tmp, addr64; unsigned long ret, tmp, addr64;
smp_mb();
__asm__ __volatile__( __asm__ __volatile__(
" andnot %4,7,%3\n" " andnot %4,7,%3\n"
" inswl %1,%4,%1\n" " inswl %1,%4,%1\n"
...@@ -63,7 +56,6 @@ ____xchg(_u16, volatile short *m, unsigned long val) ...@@ -63,7 +56,6 @@ ____xchg(_u16, volatile short *m, unsigned long val)
".previous" ".previous"
: "=&r" (ret), "=&r" (val), "=&r" (tmp), "=&r" (addr64) : "=&r" (ret), "=&r" (val), "=&r" (tmp), "=&r" (addr64)
: "r" ((long)m), "1" (val) : "memory"); : "r" ((long)m), "1" (val) : "memory");
smp_mb();
return ret; return ret;
} }
...@@ -73,7 +65,6 @@ ____xchg(_u32, volatile int *m, unsigned long val) ...@@ -73,7 +65,6 @@ ____xchg(_u32, volatile int *m, unsigned long val)
{ {
unsigned long dummy; unsigned long dummy;
smp_mb();
__asm__ __volatile__( __asm__ __volatile__(
"1: ldl_l %0,%4\n" "1: ldl_l %0,%4\n"
" bis $31,%3,%1\n" " bis $31,%3,%1\n"
...@@ -84,7 +75,6 @@ ____xchg(_u32, volatile int *m, unsigned long val) ...@@ -84,7 +75,6 @@ ____xchg(_u32, volatile int *m, unsigned long val)
".previous" ".previous"
: "=&r" (val), "=&r" (dummy), "=m" (*m) : "=&r" (val), "=&r" (dummy), "=m" (*m)
: "rI" (val), "m" (*m) : "memory"); : "rI" (val), "m" (*m) : "memory");
smp_mb();
return val; return val;
} }
...@@ -94,7 +84,6 @@ ____xchg(_u64, volatile long *m, unsigned long val) ...@@ -94,7 +84,6 @@ ____xchg(_u64, volatile long *m, unsigned long val)
{ {
unsigned long dummy; unsigned long dummy;
smp_mb();
__asm__ __volatile__( __asm__ __volatile__(
"1: ldq_l %0,%4\n" "1: ldq_l %0,%4\n"
" bis $31,%3,%1\n" " bis $31,%3,%1\n"
...@@ -105,7 +94,6 @@ ____xchg(_u64, volatile long *m, unsigned long val) ...@@ -105,7 +94,6 @@ ____xchg(_u64, volatile long *m, unsigned long val)
".previous" ".previous"
: "=&r" (val), "=&r" (dummy), "=m" (*m) : "=&r" (val), "=&r" (dummy), "=m" (*m)
: "rI" (val), "m" (*m) : "memory"); : "rI" (val), "m" (*m) : "memory");
smp_mb();
return val; return val;
} }
...@@ -135,13 +123,6 @@ ____xchg(, volatile void *ptr, unsigned long x, int size) ...@@ -135,13 +123,6 @@ ____xchg(, volatile void *ptr, unsigned long x, int size)
* Atomic compare and exchange. Compare OLD with MEM, if identical, * Atomic compare and exchange. Compare OLD with MEM, if identical,
* store NEW in MEM. Return the initial value in MEM. Success is * store NEW in MEM. Return the initial value in MEM. Success is
* indicated by comparing RETURN with OLD. * indicated by comparing RETURN with OLD.
*
* The leading and the trailing memory barriers guarantee that these
* operations are fully ordered.
*
* The trailing memory barrier is placed in SMP unconditionally, in
* order to guarantee that dependency ordering is preserved when a
* dependency is headed by an unsuccessful operation.
*/ */
static inline unsigned long static inline unsigned long
...@@ -149,7 +130,6 @@ ____cmpxchg(_u8, volatile char *m, unsigned char old, unsigned char new) ...@@ -149,7 +130,6 @@ ____cmpxchg(_u8, volatile char *m, unsigned char old, unsigned char new)
{ {
unsigned long prev, tmp, cmp, addr64; unsigned long prev, tmp, cmp, addr64;
smp_mb();
__asm__ __volatile__( __asm__ __volatile__(
" andnot %5,7,%4\n" " andnot %5,7,%4\n"
" insbl %1,%5,%1\n" " insbl %1,%5,%1\n"
...@@ -167,7 +147,6 @@ ____cmpxchg(_u8, volatile char *m, unsigned char old, unsigned char new) ...@@ -167,7 +147,6 @@ ____cmpxchg(_u8, volatile char *m, unsigned char old, unsigned char new)
".previous" ".previous"
: "=&r" (prev), "=&r" (new), "=&r" (tmp), "=&r" (cmp), "=&r" (addr64) : "=&r" (prev), "=&r" (new), "=&r" (tmp), "=&r" (cmp), "=&r" (addr64)
: "r" ((long)m), "Ir" (old), "1" (new) : "memory"); : "r" ((long)m), "Ir" (old), "1" (new) : "memory");
smp_mb();
return prev; return prev;
} }
...@@ -177,7 +156,6 @@ ____cmpxchg(_u16, volatile short *m, unsigned short old, unsigned short new) ...@@ -177,7 +156,6 @@ ____cmpxchg(_u16, volatile short *m, unsigned short old, unsigned short new)
{ {
unsigned long prev, tmp, cmp, addr64; unsigned long prev, tmp, cmp, addr64;
smp_mb();
__asm__ __volatile__( __asm__ __volatile__(
" andnot %5,7,%4\n" " andnot %5,7,%4\n"
" inswl %1,%5,%1\n" " inswl %1,%5,%1\n"
...@@ -195,7 +173,6 @@ ____cmpxchg(_u16, volatile short *m, unsigned short old, unsigned short new) ...@@ -195,7 +173,6 @@ ____cmpxchg(_u16, volatile short *m, unsigned short old, unsigned short new)
".previous" ".previous"
: "=&r" (prev), "=&r" (new), "=&r" (tmp), "=&r" (cmp), "=&r" (addr64) : "=&r" (prev), "=&r" (new), "=&r" (tmp), "=&r" (cmp), "=&r" (addr64)
: "r" ((long)m), "Ir" (old), "1" (new) : "memory"); : "r" ((long)m), "Ir" (old), "1" (new) : "memory");
smp_mb();
return prev; return prev;
} }
...@@ -205,7 +182,6 @@ ____cmpxchg(_u32, volatile int *m, int old, int new) ...@@ -205,7 +182,6 @@ ____cmpxchg(_u32, volatile int *m, int old, int new)
{ {
unsigned long prev, cmp; unsigned long prev, cmp;
smp_mb();
__asm__ __volatile__( __asm__ __volatile__(
"1: ldl_l %0,%5\n" "1: ldl_l %0,%5\n"
" cmpeq %0,%3,%1\n" " cmpeq %0,%3,%1\n"
...@@ -219,7 +195,6 @@ ____cmpxchg(_u32, volatile int *m, int old, int new) ...@@ -219,7 +195,6 @@ ____cmpxchg(_u32, volatile int *m, int old, int new)
".previous" ".previous"
: "=&r"(prev), "=&r"(cmp), "=m"(*m) : "=&r"(prev), "=&r"(cmp), "=m"(*m)
: "r"((long) old), "r"(new), "m"(*m) : "memory"); : "r"((long) old), "r"(new), "m"(*m) : "memory");
smp_mb();
return prev; return prev;
} }
...@@ -229,7 +204,6 @@ ____cmpxchg(_u64, volatile long *m, unsigned long old, unsigned long new) ...@@ -229,7 +204,6 @@ ____cmpxchg(_u64, volatile long *m, unsigned long old, unsigned long new)
{ {
unsigned long prev, cmp; unsigned long prev, cmp;
smp_mb();
__asm__ __volatile__( __asm__ __volatile__(
"1: ldq_l %0,%5\n" "1: ldq_l %0,%5\n"
" cmpeq %0,%3,%1\n" " cmpeq %0,%3,%1\n"
...@@ -243,7 +217,6 @@ ____cmpxchg(_u64, volatile long *m, unsigned long old, unsigned long new) ...@@ -243,7 +217,6 @@ ____cmpxchg(_u64, volatile long *m, unsigned long old, unsigned long new)
".previous" ".previous"
: "=&r"(prev), "=&r"(cmp), "=m"(*m) : "=&r"(prev), "=&r"(cmp), "=m"(*m)
: "r"((long) old), "r"(new), "m"(*m) : "memory"); : "r"((long) old), "r"(new), "m"(*m) : "memory");
smp_mb();
return prev; return prev;
} }
......
...@@ -17,36 +17,40 @@ ...@@ -17,36 +17,40 @@
#define ATOMIC_INIT(i) { (i) } #define ATOMIC_INIT(i) { (i) }
/** /**
* atomic_read - read atomic variable * arch_atomic_read - read atomic variable
* @v: pointer of type atomic_t * @v: pointer of type atomic_t
* *
* Atomically reads the value of @v. * Atomically reads the value of @v.
*/ */
static __always_inline int atomic_read(const atomic_t *v) static __always_inline int arch_atomic_read(const atomic_t *v)
{ {
/*
* Note for KASAN: we deliberately don't use READ_ONCE_NOCHECK() here,
* it's non-inlined function that increases binary size and stack usage.
*/
return READ_ONCE((v)->counter); return READ_ONCE((v)->counter);
} }
/** /**
* atomic_set - set atomic variable * arch_atomic_set - set atomic variable
* @v: pointer of type atomic_t * @v: pointer of type atomic_t
* @i: required value * @i: required value
* *
* Atomically sets the value of @v to @i. * Atomically sets the value of @v to @i.
*/ */
static __always_inline void atomic_set(atomic_t *v, int i) static __always_inline void arch_atomic_set(atomic_t *v, int i)
{ {
WRITE_ONCE(v->counter, i); WRITE_ONCE(v->counter, i);
} }
/** /**
* atomic_add - add integer to atomic variable * arch_atomic_add - add integer to atomic variable
* @i: integer value to add * @i: integer value to add
* @v: pointer of type atomic_t * @v: pointer of type atomic_t
* *
* Atomically adds @i to @v. * Atomically adds @i to @v.
*/ */
static __always_inline void atomic_add(int i, atomic_t *v) static __always_inline void arch_atomic_add(int i, atomic_t *v)
{ {
asm volatile(LOCK_PREFIX "addl %1,%0" asm volatile(LOCK_PREFIX "addl %1,%0"
: "+m" (v->counter) : "+m" (v->counter)
...@@ -54,13 +58,13 @@ static __always_inline void atomic_add(int i, atomic_t *v) ...@@ -54,13 +58,13 @@ static __always_inline void atomic_add(int i, atomic_t *v)
} }
/** /**
* atomic_sub - subtract integer from atomic variable * arch_atomic_sub - subtract integer from atomic variable
* @i: integer value to subtract * @i: integer value to subtract
* @v: pointer of type atomic_t * @v: pointer of type atomic_t
* *
* Atomically subtracts @i from @v. * Atomically subtracts @i from @v.
*/ */
static __always_inline void atomic_sub(int i, atomic_t *v) static __always_inline void arch_atomic_sub(int i, atomic_t *v)
{ {
asm volatile(LOCK_PREFIX "subl %1,%0" asm volatile(LOCK_PREFIX "subl %1,%0"
: "+m" (v->counter) : "+m" (v->counter)
...@@ -68,7 +72,7 @@ static __always_inline void atomic_sub(int i, atomic_t *v) ...@@ -68,7 +72,7 @@ static __always_inline void atomic_sub(int i, atomic_t *v)
} }
/** /**
* atomic_sub_and_test - subtract value from variable and test result * arch_atomic_sub_and_test - subtract value from variable and test result
* @i: integer value to subtract * @i: integer value to subtract
* @v: pointer of type atomic_t * @v: pointer of type atomic_t
* *
...@@ -76,63 +80,63 @@ static __always_inline void atomic_sub(int i, atomic_t *v) ...@@ -76,63 +80,63 @@ static __always_inline void atomic_sub(int i, atomic_t *v)
* true if the result is zero, or false for all * true if the result is zero, or false for all
* other cases. * other cases.
*/ */
static __always_inline bool atomic_sub_and_test(int i, atomic_t *v) static __always_inline bool arch_atomic_sub_and_test(int i, atomic_t *v)
{ {
GEN_BINARY_RMWcc(LOCK_PREFIX "subl", v->counter, "er", i, "%0", e); GEN_BINARY_RMWcc(LOCK_PREFIX "subl", v->counter, "er", i, "%0", e);
} }
/** /**
* atomic_inc - increment atomic variable * arch_atomic_inc - increment atomic variable
* @v: pointer of type atomic_t * @v: pointer of type atomic_t
* *
* Atomically increments @v by 1. * Atomically increments @v by 1.
*/ */
static __always_inline void atomic_inc(atomic_t *v) static __always_inline void arch_atomic_inc(atomic_t *v)
{ {
asm volatile(LOCK_PREFIX "incl %0" asm volatile(LOCK_PREFIX "incl %0"
: "+m" (v->counter)); : "+m" (v->counter));
} }
/** /**
* atomic_dec - decrement atomic variable * arch_atomic_dec - decrement atomic variable
* @v: pointer of type atomic_t * @v: pointer of type atomic_t
* *
* Atomically decrements @v by 1. * Atomically decrements @v by 1.
*/ */
static __always_inline void atomic_dec(atomic_t *v) static __always_inline void arch_atomic_dec(atomic_t *v)
{ {
asm volatile(LOCK_PREFIX "decl %0" asm volatile(LOCK_PREFIX "decl %0"
: "+m" (v->counter)); : "+m" (v->counter));
} }
/** /**
* atomic_dec_and_test - decrement and test * arch_atomic_dec_and_test - decrement and test
* @v: pointer of type atomic_t * @v: pointer of type atomic_t
* *
* Atomically decrements @v by 1 and * Atomically decrements @v by 1 and
* returns true if the result is 0, or false for all other * returns true if the result is 0, or false for all other
* cases. * cases.
*/ */
static __always_inline bool atomic_dec_and_test(atomic_t *v) static __always_inline bool arch_atomic_dec_and_test(atomic_t *v)
{ {
GEN_UNARY_RMWcc(LOCK_PREFIX "decl", v->counter, "%0", e); GEN_UNARY_RMWcc(LOCK_PREFIX "decl", v->counter, "%0", e);
} }
/** /**
* atomic_inc_and_test - increment and test * arch_atomic_inc_and_test - increment and test
* @v: pointer of type atomic_t * @v: pointer of type atomic_t
* *
* Atomically increments @v by 1 * Atomically increments @v by 1
* and returns true if the result is zero, or false for all * and returns true if the result is zero, or false for all
* other cases. * other cases.
*/ */
static __always_inline bool atomic_inc_and_test(atomic_t *v) static __always_inline bool arch_atomic_inc_and_test(atomic_t *v)
{ {
GEN_UNARY_RMWcc(LOCK_PREFIX "incl", v->counter, "%0", e); GEN_UNARY_RMWcc(LOCK_PREFIX "incl", v->counter, "%0", e);
} }
/** /**
* atomic_add_negative - add and test if negative * arch_atomic_add_negative - add and test if negative
* @i: integer value to add * @i: integer value to add
* @v: pointer of type atomic_t * @v: pointer of type atomic_t
* *
...@@ -140,65 +144,65 @@ static __always_inline bool atomic_inc_and_test(atomic_t *v) ...@@ -140,65 +144,65 @@ static __always_inline bool atomic_inc_and_test(atomic_t *v)
* if the result is negative, or false when * if the result is negative, or false when
* result is greater than or equal to zero. * result is greater than or equal to zero.
*/ */
static __always_inline bool atomic_add_negative(int i, atomic_t *v) static __always_inline bool arch_atomic_add_negative(int i, atomic_t *v)
{ {
GEN_BINARY_RMWcc(LOCK_PREFIX "addl", v->counter, "er", i, "%0", s); GEN_BINARY_RMWcc(LOCK_PREFIX "addl", v->counter, "er", i, "%0", s);
} }
/** /**
* atomic_add_return - add integer and return * arch_atomic_add_return - add integer and return
* @i: integer value to add * @i: integer value to add
* @v: pointer of type atomic_t * @v: pointer of type atomic_t
* *
* Atomically adds @i to @v and returns @i + @v * Atomically adds @i to @v and returns @i + @v
*/ */
static __always_inline int atomic_add_return(int i, atomic_t *v) static __always_inline int arch_atomic_add_return(int i, atomic_t *v)
{ {
return i + xadd(&v->counter, i); return i + xadd(&v->counter, i);
} }
/** /**
* atomic_sub_return - subtract integer and return * arch_atomic_sub_return - subtract integer and return
* @v: pointer of type atomic_t * @v: pointer of type atomic_t
* @i: integer value to subtract * @i: integer value to subtract
* *
* Atomically subtracts @i from @v and returns @v - @i * Atomically subtracts @i from @v and returns @v - @i
*/ */
static __always_inline int atomic_sub_return(int i, atomic_t *v) static __always_inline int arch_atomic_sub_return(int i, atomic_t *v)
{ {
return atomic_add_return(-i, v); return arch_atomic_add_return(-i, v);
} }
#define atomic_inc_return(v) (atomic_add_return(1, v)) #define arch_atomic_inc_return(v) (arch_atomic_add_return(1, v))
#define atomic_dec_return(v) (atomic_sub_return(1, v)) #define arch_atomic_dec_return(v) (arch_atomic_sub_return(1, v))
static __always_inline int atomic_fetch_add(int i, atomic_t *v) static __always_inline int arch_atomic_fetch_add(int i, atomic_t *v)
{ {
return xadd(&v->counter, i); return xadd(&v->counter, i);
} }
static __always_inline int atomic_fetch_sub(int i, atomic_t *v) static __always_inline int arch_atomic_fetch_sub(int i, atomic_t *v)
{ {
return xadd(&v->counter, -i); return xadd(&v->counter, -i);
} }
static __always_inline int atomic_cmpxchg(atomic_t *v, int old, int new) static __always_inline int arch_atomic_cmpxchg(atomic_t *v, int old, int new)
{ {
return cmpxchg(&v->counter, old, new); return arch_cmpxchg(&v->counter, old, new);
} }
#define atomic_try_cmpxchg atomic_try_cmpxchg #define arch_atomic_try_cmpxchg arch_atomic_try_cmpxchg
static __always_inline bool atomic_try_cmpxchg(atomic_t *v, int *old, int new) static __always_inline bool arch_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
{ {
return try_cmpxchg(&v->counter, old, new); return try_cmpxchg(&v->counter, old, new);
} }
static inline int atomic_xchg(atomic_t *v, int new) static inline int arch_atomic_xchg(atomic_t *v, int new)
{ {
return xchg(&v->counter, new); return xchg(&v->counter, new);
} }
static inline void atomic_and(int i, atomic_t *v) static inline void arch_atomic_and(int i, atomic_t *v)
{ {
asm volatile(LOCK_PREFIX "andl %1,%0" asm volatile(LOCK_PREFIX "andl %1,%0"
: "+m" (v->counter) : "+m" (v->counter)
...@@ -206,16 +210,16 @@ static inline void atomic_and(int i, atomic_t *v) ...@@ -206,16 +210,16 @@ static inline void atomic_and(int i, atomic_t *v)
: "memory"); : "memory");
} }
static inline int atomic_fetch_and(int i, atomic_t *v) static inline int arch_atomic_fetch_and(int i, atomic_t *v)
{ {
int val = atomic_read(v); int val = arch_atomic_read(v);
do { } while (!atomic_try_cmpxchg(v, &val, val & i)); do { } while (!arch_atomic_try_cmpxchg(v, &val, val & i));
return val; return val;
} }
static inline void atomic_or(int i, atomic_t *v) static inline void arch_atomic_or(int i, atomic_t *v)
{ {
asm volatile(LOCK_PREFIX "orl %1,%0" asm volatile(LOCK_PREFIX "orl %1,%0"
: "+m" (v->counter) : "+m" (v->counter)
...@@ -223,16 +227,16 @@ static inline void atomic_or(int i, atomic_t *v) ...@@ -223,16 +227,16 @@ static inline void atomic_or(int i, atomic_t *v)
: "memory"); : "memory");
} }
static inline int atomic_fetch_or(int i, atomic_t *v) static inline int arch_atomic_fetch_or(int i, atomic_t *v)
{ {
int val = atomic_read(v); int val = arch_atomic_read(v);
do { } while (!atomic_try_cmpxchg(v, &val, val | i)); do { } while (!arch_atomic_try_cmpxchg(v, &val, val | i));
return val; return val;
} }
static inline void atomic_xor(int i, atomic_t *v) static inline void arch_atomic_xor(int i, atomic_t *v)
{ {
asm volatile(LOCK_PREFIX "xorl %1,%0" asm volatile(LOCK_PREFIX "xorl %1,%0"
: "+m" (v->counter) : "+m" (v->counter)
...@@ -240,17 +244,17 @@ static inline void atomic_xor(int i, atomic_t *v) ...@@ -240,17 +244,17 @@ static inline void atomic_xor(int i, atomic_t *v)
: "memory"); : "memory");
} }
static inline int atomic_fetch_xor(int i, atomic_t *v) static inline int arch_atomic_fetch_xor(int i, atomic_t *v)
{ {
int val = atomic_read(v); int val = arch_atomic_read(v);
do { } while (!atomic_try_cmpxchg(v, &val, val ^ i)); do { } while (!arch_atomic_try_cmpxchg(v, &val, val ^ i));
return val; return val;
} }
/** /**
* __atomic_add_unless - add unless the number is already a given value * __arch_atomic_add_unless - add unless the number is already a given value
* @v: pointer of type atomic_t * @v: pointer of type atomic_t
* @a: the amount to add to v... * @a: the amount to add to v...
* @u: ...unless v is equal to u. * @u: ...unless v is equal to u.
...@@ -258,14 +262,14 @@ static inline int atomic_fetch_xor(int i, atomic_t *v) ...@@ -258,14 +262,14 @@ static inline int atomic_fetch_xor(int i, atomic_t *v)
* Atomically adds @a to @v, so long as @v was not already @u. * Atomically adds @a to @v, so long as @v was not already @u.
* Returns the old value of @v. * Returns the old value of @v.
*/ */
static __always_inline int __atomic_add_unless(atomic_t *v, int a, int u) static __always_inline int __arch_atomic_add_unless(atomic_t *v, int a, int u)
{ {
int c = atomic_read(v); int c = arch_atomic_read(v);
do { do {
if (unlikely(c == u)) if (unlikely(c == u))
break; break;
} while (!atomic_try_cmpxchg(v, &c, c + a)); } while (!arch_atomic_try_cmpxchg(v, &c, c + a));
return c; return c;
} }
...@@ -276,4 +280,6 @@ static __always_inline int __atomic_add_unless(atomic_t *v, int a, int u) ...@@ -276,4 +280,6 @@ static __always_inline int __atomic_add_unless(atomic_t *v, int a, int u)
# include <asm/atomic64_64.h> # include <asm/atomic64_64.h>
#endif #endif
#include <asm-generic/atomic-instrumented.h>
#endif /* _ASM_X86_ATOMIC_H */ #endif /* _ASM_X86_ATOMIC_H */
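
The arch_atomic_*() renames in this file exist so that a generic layer can
bolt the KASAN checks on top; include/asm-generic/atomic-instrumented.h
(added by this series) provides the unprefixed API along roughly these lines
(an abridged sketch, not the full header):

        #include <linux/kasan-checks.h>

        static __always_inline int atomic_read(const atomic_t *v)
        {
                kasan_check_read(v, sizeof(*v));
                return arch_atomic_read(v);
        }

        static __always_inline void atomic_set(atomic_t *v, int i)
        {
                kasan_check_write(v, sizeof(*v));
                arch_atomic_set(v, i);
        }

        static __always_inline int atomic_add_return(int i, atomic_t *v)
        {
                kasan_check_write(v, sizeof(*v));
                return arch_atomic_add_return(i, v);
        }
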
...@@ -11,37 +11,37 @@ ...@@ -11,37 +11,37 @@
#define ATOMIC64_INIT(i) { (i) } #define ATOMIC64_INIT(i) { (i) }
/** /**
* atomic64_read - read atomic64 variable * arch_atomic64_read - read atomic64 variable
* @v: pointer of type atomic64_t * @v: pointer of type atomic64_t
* *
* Atomically reads the value of @v. * Atomically reads the value of @v.
* Doesn't imply a read memory barrier. * Doesn't imply a read memory barrier.
*/ */
static inline long atomic64_read(const atomic64_t *v) static inline long arch_atomic64_read(const atomic64_t *v)
{ {
return READ_ONCE((v)->counter); return READ_ONCE((v)->counter);
} }
/** /**
* atomic64_set - set atomic64 variable * arch_atomic64_set - set atomic64 variable
* @v: pointer to type atomic64_t * @v: pointer to type atomic64_t
* @i: required value * @i: required value
* *
* Atomically sets the value of @v to @i. * Atomically sets the value of @v to @i.
*/ */
static inline void atomic64_set(atomic64_t *v, long i) static inline void arch_atomic64_set(atomic64_t *v, long i)
{ {
WRITE_ONCE(v->counter, i); WRITE_ONCE(v->counter, i);
} }
/** /**
* atomic64_add - add integer to atomic64 variable * arch_atomic64_add - add integer to atomic64 variable
* @i: integer value to add * @i: integer value to add
* @v: pointer to type atomic64_t * @v: pointer to type atomic64_t
* *
* Atomically adds @i to @v. * Atomically adds @i to @v.
*/ */
static __always_inline void atomic64_add(long i, atomic64_t *v) static __always_inline void arch_atomic64_add(long i, atomic64_t *v)
{ {
asm volatile(LOCK_PREFIX "addq %1,%0" asm volatile(LOCK_PREFIX "addq %1,%0"
: "=m" (v->counter) : "=m" (v->counter)
...@@ -49,13 +49,13 @@ static __always_inline void atomic64_add(long i, atomic64_t *v) ...@@ -49,13 +49,13 @@ static __always_inline void atomic64_add(long i, atomic64_t *v)
} }
/** /**
* atomic64_sub - subtract the atomic64 variable * arch_atomic64_sub - subtract the atomic64 variable
* @i: integer value to subtract * @i: integer value to subtract
* @v: pointer to type atomic64_t * @v: pointer to type atomic64_t
* *
* Atomically subtracts @i from @v. * Atomically subtracts @i from @v.
*/ */
static inline void atomic64_sub(long i, atomic64_t *v) static inline void arch_atomic64_sub(long i, atomic64_t *v)
{ {
asm volatile(LOCK_PREFIX "subq %1,%0" asm volatile(LOCK_PREFIX "subq %1,%0"
: "=m" (v->counter) : "=m" (v->counter)
...@@ -63,7 +63,7 @@ static inline void atomic64_sub(long i, atomic64_t *v) ...@@ -63,7 +63,7 @@ static inline void atomic64_sub(long i, atomic64_t *v)
} }
/** /**
* atomic64_sub_and_test - subtract value from variable and test result * arch_atomic64_sub_and_test - subtract value from variable and test result
* @i: integer value to subtract * @i: integer value to subtract
* @v: pointer to type atomic64_t * @v: pointer to type atomic64_t
* *
...@@ -71,18 +71,18 @@ static inline void atomic64_sub(long i, atomic64_t *v) ...@@ -71,18 +71,18 @@ static inline void atomic64_sub(long i, atomic64_t *v)
* true if the result is zero, or false for all * true if the result is zero, or false for all
* other cases. * other cases.
*/ */
static inline bool atomic64_sub_and_test(long i, atomic64_t *v) static inline bool arch_atomic64_sub_and_test(long i, atomic64_t *v)
{ {
GEN_BINARY_RMWcc(LOCK_PREFIX "subq", v->counter, "er", i, "%0", e); GEN_BINARY_RMWcc(LOCK_PREFIX "subq", v->counter, "er", i, "%0", e);
} }
/** /**
* atomic64_inc - increment atomic64 variable * arch_atomic64_inc - increment atomic64 variable
* @v: pointer to type atomic64_t * @v: pointer to type atomic64_t
* *
* Atomically increments @v by 1. * Atomically increments @v by 1.
*/ */
static __always_inline void atomic64_inc(atomic64_t *v) static __always_inline void arch_atomic64_inc(atomic64_t *v)
{ {
asm volatile(LOCK_PREFIX "incq %0" asm volatile(LOCK_PREFIX "incq %0"
: "=m" (v->counter) : "=m" (v->counter)
...@@ -90,12 +90,12 @@ static __always_inline void atomic64_inc(atomic64_t *v) ...@@ -90,12 +90,12 @@ static __always_inline void atomic64_inc(atomic64_t *v)
} }
/** /**
* atomic64_dec - decrement atomic64 variable * arch_atomic64_dec - decrement atomic64 variable
* @v: pointer to type atomic64_t * @v: pointer to type atomic64_t
* *
* Atomically decrements @v by 1. * Atomically decrements @v by 1.
*/ */
static __always_inline void atomic64_dec(atomic64_t *v) static __always_inline void arch_atomic64_dec(atomic64_t *v)
{ {
asm volatile(LOCK_PREFIX "decq %0" asm volatile(LOCK_PREFIX "decq %0"
: "=m" (v->counter) : "=m" (v->counter)
...@@ -103,33 +103,33 @@ static __always_inline void atomic64_dec(atomic64_t *v) ...@@ -103,33 +103,33 @@ static __always_inline void atomic64_dec(atomic64_t *v)
} }
/** /**
* atomic64_dec_and_test - decrement and test * arch_atomic64_dec_and_test - decrement and test
* @v: pointer to type atomic64_t * @v: pointer to type atomic64_t
* *
* Atomically decrements @v by 1 and * Atomically decrements @v by 1 and
* returns true if the result is 0, or false for all other * returns true if the result is 0, or false for all other
* cases. * cases.
*/ */
static inline bool atomic64_dec_and_test(atomic64_t *v) static inline bool arch_atomic64_dec_and_test(atomic64_t *v)
{ {
GEN_UNARY_RMWcc(LOCK_PREFIX "decq", v->counter, "%0", e); GEN_UNARY_RMWcc(LOCK_PREFIX "decq", v->counter, "%0", e);
} }
/** /**
* atomic64_inc_and_test - increment and test * arch_atomic64_inc_and_test - increment and test
* @v: pointer to type atomic64_t * @v: pointer to type atomic64_t
* *
* Atomically increments @v by 1 * Atomically increments @v by 1
* and returns true if the result is zero, or false for all * and returns true if the result is zero, or false for all
* other cases. * other cases.
*/ */
static inline bool atomic64_inc_and_test(atomic64_t *v) static inline bool arch_atomic64_inc_and_test(atomic64_t *v)
{ {
GEN_UNARY_RMWcc(LOCK_PREFIX "incq", v->counter, "%0", e); GEN_UNARY_RMWcc(LOCK_PREFIX "incq", v->counter, "%0", e);
} }
/** /**
* atomic64_add_negative - add and test if negative * arch_atomic64_add_negative - add and test if negative
* @i: integer value to add * @i: integer value to add
* @v: pointer to type atomic64_t * @v: pointer to type atomic64_t
* *
...@@ -137,59 +137,59 @@ static inline bool atomic64_inc_and_test(atomic64_t *v) ...@@ -137,59 +137,59 @@ static inline bool atomic64_inc_and_test(atomic64_t *v)
* if the result is negative, or false when * if the result is negative, or false when
* result is greater than or equal to zero. * result is greater than or equal to zero.
*/ */
static inline bool atomic64_add_negative(long i, atomic64_t *v) static inline bool arch_atomic64_add_negative(long i, atomic64_t *v)
{ {
GEN_BINARY_RMWcc(LOCK_PREFIX "addq", v->counter, "er", i, "%0", s); GEN_BINARY_RMWcc(LOCK_PREFIX "addq", v->counter, "er", i, "%0", s);
} }
/** /**
* atomic64_add_return - add and return * arch_atomic64_add_return - add and return
* @i: integer value to add * @i: integer value to add
* @v: pointer to type atomic64_t * @v: pointer to type atomic64_t
* *
* Atomically adds @i to @v and returns @i + @v * Atomically adds @i to @v and returns @i + @v
*/ */
static __always_inline long atomic64_add_return(long i, atomic64_t *v) static __always_inline long arch_atomic64_add_return(long i, atomic64_t *v)
{ {
return i + xadd(&v->counter, i); return i + xadd(&v->counter, i);
} }
static inline long atomic64_sub_return(long i, atomic64_t *v) static inline long arch_atomic64_sub_return(long i, atomic64_t *v)
{ {
return atomic64_add_return(-i, v); return arch_atomic64_add_return(-i, v);
} }
static inline long atomic64_fetch_add(long i, atomic64_t *v) static inline long arch_atomic64_fetch_add(long i, atomic64_t *v)
{ {
return xadd(&v->counter, i); return xadd(&v->counter, i);
} }
static inline long atomic64_fetch_sub(long i, atomic64_t *v) static inline long arch_atomic64_fetch_sub(long i, atomic64_t *v)
{ {
return xadd(&v->counter, -i); return xadd(&v->counter, -i);
} }
#define atomic64_inc_return(v) (atomic64_add_return(1, (v))) #define arch_atomic64_inc_return(v) (arch_atomic64_add_return(1, (v)))
#define atomic64_dec_return(v) (atomic64_sub_return(1, (v))) #define arch_atomic64_dec_return(v) (arch_atomic64_sub_return(1, (v)))
static inline long atomic64_cmpxchg(atomic64_t *v, long old, long new) static inline long arch_atomic64_cmpxchg(atomic64_t *v, long old, long new)
{ {
return cmpxchg(&v->counter, old, new); return arch_cmpxchg(&v->counter, old, new);
} }
#define atomic64_try_cmpxchg atomic64_try_cmpxchg #define arch_atomic64_try_cmpxchg arch_atomic64_try_cmpxchg
static __always_inline bool atomic64_try_cmpxchg(atomic64_t *v, s64 *old, long new) static __always_inline bool arch_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, long new)
{ {
return try_cmpxchg(&v->counter, old, new); return try_cmpxchg(&v->counter, old, new);
} }
static inline long atomic64_xchg(atomic64_t *v, long new) static inline long arch_atomic64_xchg(atomic64_t *v, long new)
{ {
return xchg(&v->counter, new); return xchg(&v->counter, new);
} }
/** /**
* atomic64_add_unless - add unless the number is a given value * arch_atomic64_add_unless - add unless the number is a given value
* @v: pointer of type atomic64_t * @v: pointer of type atomic64_t
* @a: the amount to add to v... * @a: the amount to add to v...
* @u: ...unless v is equal to u. * @u: ...unless v is equal to u.
...@@ -197,37 +197,37 @@ static inline long atomic64_xchg(atomic64_t *v, long new) ...@@ -197,37 +197,37 @@ static inline long atomic64_xchg(atomic64_t *v, long new)
* Atomically adds @a to @v, so long as it was not @u. * Atomically adds @a to @v, so long as it was not @u.
* Returns the old value of @v. * Returns the old value of @v.
*/ */
static inline bool atomic64_add_unless(atomic64_t *v, long a, long u) static inline bool arch_atomic64_add_unless(atomic64_t *v, long a, long u)
{ {
s64 c = atomic64_read(v); s64 c = arch_atomic64_read(v);
do { do {
if (unlikely(c == u)) if (unlikely(c == u))
return false; return false;
} while (!atomic64_try_cmpxchg(v, &c, c + a)); } while (!arch_atomic64_try_cmpxchg(v, &c, c + a));
return true; return true;
} }
#define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0) #define arch_atomic64_inc_not_zero(v) arch_atomic64_add_unless((v), 1, 0)
/* /*
* atomic64_dec_if_positive - decrement by 1 if old value positive * arch_atomic64_dec_if_positive - decrement by 1 if old value positive
* @v: pointer of type atomic_t * @v: pointer of type atomic_t
* *
* The function returns the old value of *v minus 1, even if * The function returns the old value of *v minus 1, even if
* the atomic variable, v, was not decremented. * the atomic variable, v, was not decremented.
*/ */
static inline long atomic64_dec_if_positive(atomic64_t *v) static inline long arch_atomic64_dec_if_positive(atomic64_t *v)
{ {
s64 dec, c = atomic64_read(v); s64 dec, c = arch_atomic64_read(v);
do { do {
dec = c - 1; dec = c - 1;
if (unlikely(dec < 0)) if (unlikely(dec < 0))
break; break;
} while (!atomic64_try_cmpxchg(v, &c, dec)); } while (!arch_atomic64_try_cmpxchg(v, &c, dec));
return dec; return dec;
} }
static inline void atomic64_and(long i, atomic64_t *v) static inline void arch_atomic64_and(long i, atomic64_t *v)
{ {
asm volatile(LOCK_PREFIX "andq %1,%0" asm volatile(LOCK_PREFIX "andq %1,%0"
: "+m" (v->counter) : "+m" (v->counter)
...@@ -235,16 +235,16 @@ static inline void atomic64_and(long i, atomic64_t *v) ...@@ -235,16 +235,16 @@ static inline void atomic64_and(long i, atomic64_t *v)
: "memory"); : "memory");
} }
static inline long atomic64_fetch_and(long i, atomic64_t *v) static inline long arch_atomic64_fetch_and(long i, atomic64_t *v)
{ {
s64 val = atomic64_read(v); s64 val = arch_atomic64_read(v);
do { do {
} while (!atomic64_try_cmpxchg(v, &val, val & i)); } while (!arch_atomic64_try_cmpxchg(v, &val, val & i));
return val; return val;
} }
static inline void atomic64_or(long i, atomic64_t *v) static inline void arch_atomic64_or(long i, atomic64_t *v)
{ {
asm volatile(LOCK_PREFIX "orq %1,%0" asm volatile(LOCK_PREFIX "orq %1,%0"
: "+m" (v->counter) : "+m" (v->counter)
...@@ -252,16 +252,16 @@ static inline void atomic64_or(long i, atomic64_t *v) ...@@ -252,16 +252,16 @@ static inline void atomic64_or(long i, atomic64_t *v)
: "memory"); : "memory");
} }
static inline long atomic64_fetch_or(long i, atomic64_t *v) static inline long arch_atomic64_fetch_or(long i, atomic64_t *v)
{ {
s64 val = atomic64_read(v); s64 val = arch_atomic64_read(v);
do { do {
} while (!atomic64_try_cmpxchg(v, &val, val | i)); } while (!arch_atomic64_try_cmpxchg(v, &val, val | i));
return val; return val;
} }
static inline void atomic64_xor(long i, atomic64_t *v) static inline void arch_atomic64_xor(long i, atomic64_t *v)
{ {
asm volatile(LOCK_PREFIX "xorq %1,%0" asm volatile(LOCK_PREFIX "xorq %1,%0"
: "+m" (v->counter) : "+m" (v->counter)
...@@ -269,12 +269,12 @@ static inline void atomic64_xor(long i, atomic64_t *v) ...@@ -269,12 +269,12 @@ static inline void atomic64_xor(long i, atomic64_t *v)
: "memory"); : "memory");
} }
static inline long atomic64_fetch_xor(long i, atomic64_t *v) static inline long arch_atomic64_fetch_xor(long i, atomic64_t *v)
{ {
s64 val = atomic64_read(v); s64 val = arch_atomic64_read(v);
do { do {
} while (!atomic64_try_cmpxchg(v, &val, val ^ i)); } while (!arch_atomic64_try_cmpxchg(v, &val, val ^ i));
return val; return val;
} }
......
@@ -145,13 +145,13 @@ extern void __add_wrong_size(void)
 # include <asm/cmpxchg_64.h>
 #endif
 
-#define cmpxchg(ptr, old, new) \
+#define arch_cmpxchg(ptr, old, new) \
 	__cmpxchg(ptr, old, new, sizeof(*(ptr)))
 
-#define sync_cmpxchg(ptr, old, new) \
+#define arch_sync_cmpxchg(ptr, old, new) \
 	__sync_cmpxchg(ptr, old, new, sizeof(*(ptr)))
 
-#define cmpxchg_local(ptr, old, new) \
+#define arch_cmpxchg_local(ptr, old, new) \
 	__cmpxchg_local(ptr, old, new, sizeof(*(ptr)))
@@ -250,10 +250,10 @@ extern void __add_wrong_size(void)
 	__ret; \
 })
 
-#define cmpxchg_double(p1, p2, o1, o2, n1, n2) \
+#define arch_cmpxchg_double(p1, p2, o1, o2, n1, n2) \
 	__cmpxchg_double(LOCK_PREFIX, p1, p2, o1, o2, n1, n2)
 
-#define cmpxchg_double_local(p1, p2, o1, o2, n1, n2) \
+#define arch_cmpxchg_double_local(p1, p2, o1, o2, n1, n2) \
 	__cmpxchg_double(, p1, p2, o1, o2, n1, n2)
 
 #endif /* ASM_X86_CMPXCHG_H */
@@ -36,10 +36,10 @@ static inline void set_64bit(volatile u64 *ptr, u64 value)
 }
 
 #ifdef CONFIG_X86_CMPXCHG64
-#define cmpxchg64(ptr, o, n) \
+#define arch_cmpxchg64(ptr, o, n) \
 	((__typeof__(*(ptr)))__cmpxchg64((ptr), (unsigned long long)(o), \
 		(unsigned long long)(n)))
-#define cmpxchg64_local(ptr, o, n) \
+#define arch_cmpxchg64_local(ptr, o, n) \
 	((__typeof__(*(ptr)))__cmpxchg64_local((ptr), (unsigned long long)(o), \
 		(unsigned long long)(n)))
 #endif
@@ -76,7 +76,7 @@ static inline u64 __cmpxchg64_local(volatile u64 *ptr, u64 old, u64 new)
  * to simulate the cmpxchg8b on the 80386 and 80486 CPU.
  */
 
-#define cmpxchg64(ptr, o, n) \
+#define arch_cmpxchg64(ptr, o, n) \
 ({ \
 	__typeof__(*(ptr)) __ret; \
 	__typeof__(*(ptr)) __old = (o); \
@@ -93,7 +93,7 @@ static inline u64 __cmpxchg64_local(volatile u64 *ptr, u64 old, u64 new)
 	__ret; })
 
-#define cmpxchg64_local(ptr, o, n) \
+#define arch_cmpxchg64_local(ptr, o, n) \
 ({ \
 	__typeof__(*(ptr)) __ret; \
 	__typeof__(*(ptr)) __old = (o); \
......
@@ -7,13 +7,13 @@ static inline void set_64bit(volatile u64 *ptr, u64 val)
 	*ptr = val;
 }
 
-#define cmpxchg64(ptr, o, n) \
+#define arch_cmpxchg64(ptr, o, n) \
 ({ \
 	BUILD_BUG_ON(sizeof(*(ptr)) != 8); \
 	cmpxchg((ptr), (o), (n)); \
 })
 
-#define cmpxchg64_local(ptr, o, n) \
+#define arch_cmpxchg64_local(ptr, o, n) \
 ({ \
 	BUILD_BUG_ON(sizeof(*(ptr)) != 8); \
 	cmpxchg_local((ptr), (o), (n)); \
......
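
Likewise, the arch_cmpxchg*() spellings above let the instrumented wrapper
check the target memory before delegating to the architecture implementation;
schematically (a sketch simplified from the new asm-generic/atomic-instrumented.h):

        #define cmpxchg(ptr, old, new) \
        ({ \
        	__typeof__(ptr) ____ptr = (ptr); \
        	kasan_check_write(____ptr, sizeof(*____ptr)); \
        	arch_cmpxchg(____ptr, (old), (new)); \
        })
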
@@ -14,7 +14,6 @@
 #include <asm/current.h>
 #include <linux/list.h>
 #include <linux/spinlock_types.h>
-#include <linux/linkage.h>
 #include <linux/lockdep.h>
 #include <linux/atomic.h>
 #include <asm/processor.h>
......
@@ -556,9 +556,9 @@ static void print_lock(struct held_lock *hlock)
 		return;
 	}
 
+	printk(KERN_CONT "%p", hlock->instance);
 	print_lock_name(lock_classes + class_idx - 1);
-	printk(KERN_CONT ", at: [<%p>] %pS\n",
-		(void *)hlock->acquire_ip, (void *)hlock->acquire_ip);
+	printk(KERN_CONT ", at: %pS\n", (void *)hlock->acquire_ip);
 }
 
 static void lockdep_print_held_locks(struct task_struct *curr)
@@ -808,7 +808,7 @@ register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
 	if (verbose(class)) {
 		graph_unlock();
 
-		printk("\nnew class %p: %s", class->key, class->name);
+		printk("\nnew class %px: %s", class->key, class->name);
 		if (class->name_version > 1)
 			printk(KERN_CONT "#%d", class->name_version);
 		printk(KERN_CONT "\n");
@@ -1407,7 +1407,7 @@ static void print_lock_class_header(struct lock_class *class, int depth)
 	}
 	printk("%*s }\n", depth, "");
 
-	printk("%*s ... key at: [<%p>] %pS\n",
+	printk("%*s ... key at: [<%px>] %pS\n",
 		depth, "", class->key, class->key);
 }
@@ -2340,7 +2340,7 @@ static inline int lookup_chain_cache_add(struct task_struct *curr,
 	if (very_verbose(class)) {
 		printk("\nhash chain already cached, key: "
-			"%016Lx tail class: [%p] %s\n",
+			"%016Lx tail class: [%px] %s\n",
 			(unsigned long long)chain_key,
 			class->key, class->name);
 	}
@@ -2349,7 +2349,7 @@ static inline int lookup_chain_cache_add(struct task_struct *curr,
 	}
 
 	if (very_verbose(class)) {
-		printk("\nnew hash chain, key: %016Lx tail class: [%p] %s\n",
+		printk("\nnew hash chain, key: %016Lx tail class: [%px] %s\n",
 			(unsigned long long)chain_key, class->key, class->name);
 	}
@@ -2676,16 +2676,16 @@ check_usage_backwards(struct task_struct *curr, struct held_lock *this,
 void print_irqtrace_events(struct task_struct *curr)
 {
 	printk("irq event stamp: %u\n", curr->irq_events);
-	printk("hardirqs last enabled at (%u): [<%p>] %pS\n",
+	printk("hardirqs last enabled at (%u): [<%px>] %pS\n",
 		curr->hardirq_enable_event, (void *)curr->hardirq_enable_ip,
 		(void *)curr->hardirq_enable_ip);
-	printk("hardirqs last disabled at (%u): [<%p>] %pS\n",
+	printk("hardirqs last disabled at (%u): [<%px>] %pS\n",
 		curr->hardirq_disable_event, (void *)curr->hardirq_disable_ip,
 		(void *)curr->hardirq_disable_ip);
-	printk("softirqs last enabled at (%u): [<%p>] %pS\n",
+	printk("softirqs last enabled at (%u): [<%px>] %pS\n",
 		curr->softirq_enable_event, (void *)curr->softirq_enable_ip,
 		(void *)curr->softirq_enable_ip);
-	printk("softirqs last disabled at (%u): [<%p>] %pS\n",
+	printk("softirqs last disabled at (%u): [<%px>] %pS\n",
 		curr->softirq_disable_event, (void *)curr->softirq_disable_ip,
 		(void *)curr->softirq_disable_ip);
 }
@@ -3207,7 +3207,7 @@ static void __lockdep_init_map(struct lockdep_map *lock, const char *name,
 	 * Sanity check, the lock-class key must be persistent:
 	 */
 	if (!static_obj(key)) {
-		printk("BUG: key %p not in .data!\n", key);
+		printk("BUG: key %px not in .data!\n", key);
 		/*
 		 * What it says above ^^^^^, I suggest you read it.
 		 */
@@ -3322,7 +3322,7 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass,
 	}
 
 	atomic_inc((atomic_t *)&class->ops);
 	if (very_verbose(class)) {
-		printk("\nacquire class [%p] %s", class->key, class->name);
+		printk("\nacquire class [%px] %s", class->key, class->name);
 		if (class->name_version > 1)
 			printk(KERN_CONT "#%d", class->name_version);
 		printk(KERN_CONT "\n");
@@ -4376,7 +4376,7 @@ print_freed_lock_bug(struct task_struct *curr, const void *mem_from,
 	pr_warn("WARNING: held lock freed!\n");
 	print_kernel_ident();
 	pr_warn("-------------------------\n");
-	pr_warn("%s/%d is freeing memory %p-%p, with a lock still held there!\n",
+	pr_warn("%s/%d is freeing memory %px-%px, with a lock still held there!\n",
 		curr->comm, task_pid_nr(curr), mem_from, mem_to-1);
 	print_lock(hlock);
 	lockdep_print_held_locks(curr);
......
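The %p to %px conversions above rely on the distinction printk() makes between the two formats: plain %p hashes pointer values before printing them, while %px emits the raw address, which is what these lockdep debug messages need in order to be useful. A minimal sketch of the difference (illustrative only, not part of this diff):

  #include <linux/printk.h>

  static void show_pointer_formats(const void *key)
  {
          printk("hashed:   %p\n", key);   /* prints a hashed value, e.g. 000000005f8a2c1e */
          printk("unhashed: %px\n", key);  /* prints the actual kernel address */
  }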
@@ -1268,7 +1268,6 @@ rt_mutex_slowlock(struct rt_mutex *lock, int state,
 
 	if (unlikely(ret)) {
 		__set_current_state(TASK_RUNNING);
-		if (rt_mutex_has_waiters(lock))
-			remove_waiter(lock, &waiter);
+		remove_waiter(lock, &waiter);
 		rt_mutex_handle_deadlock(ret, chwalk, &waiter);
 	}
...
@@ -52,12 +52,13 @@ static inline int rt_mutex_has_waiters(struct rt_mutex *lock)
 static inline struct rt_mutex_waiter *
 rt_mutex_top_waiter(struct rt_mutex *lock)
 {
-	struct rt_mutex_waiter *w;
-
-	w = rb_entry(lock->waiters.rb_leftmost,
-		     struct rt_mutex_waiter, tree_entry);
-	BUG_ON(w->lock != lock);
-
+	struct rb_node *leftmost = rb_first_cached(&lock->waiters);
+	struct rt_mutex_waiter *w = NULL;
+
+	if (leftmost) {
+		w = rb_entry(leftmost, struct rt_mutex_waiter, tree_entry);
+		BUG_ON(w->lock != lock);
+	}
 	return w;
 }
...
@@ -117,6 +117,7 @@ EXPORT_SYMBOL(down_write_trylock);
 void up_read(struct rw_semaphore *sem)
 {
 	rwsem_release(&sem->dep_map, 1, _RET_IP_);
+	DEBUG_RWSEMS_WARN_ON(sem->owner != RWSEM_READER_OWNED);
 
 	__up_read(sem);
 }
@@ -129,6 +130,7 @@ EXPORT_SYMBOL(up_read);
 void up_write(struct rw_semaphore *sem)
 {
 	rwsem_release(&sem->dep_map, 1, _RET_IP_);
+	DEBUG_RWSEMS_WARN_ON(sem->owner != current);
 
 	rwsem_clear_owner(sem);
 	__up_write(sem);
@@ -142,6 +144,7 @@ EXPORT_SYMBOL(up_write);
 void downgrade_write(struct rw_semaphore *sem)
 {
 	lock_downgrade(&sem->dep_map, _RET_IP_);
+	DEBUG_RWSEMS_WARN_ON(sem->owner != current);
 
 	rwsem_set_reader_owned(sem);
 	__downgrade_write(sem);
@@ -211,6 +214,7 @@ EXPORT_SYMBOL(down_write_killable_nested);
 
 void up_read_non_owner(struct rw_semaphore *sem)
 {
+	DEBUG_RWSEMS_WARN_ON(sem->owner != RWSEM_READER_OWNED);
 	__up_read(sem);
 }
...
@@ -16,6 +16,12 @@
  */
 #define RWSEM_READER_OWNED	((struct task_struct *)1UL)
 
+#ifdef CONFIG_DEBUG_RWSEMS
+# define DEBUG_RWSEMS_WARN_ON(c)	DEBUG_LOCKS_WARN_ON(c)
+#else
+# define DEBUG_RWSEMS_WARN_ON(c)
+#endif
+
 #ifdef CONFIG_RWSEM_SPIN_ON_OWNER
 /*
  * All writes to owner are protected by WRITE_ONCE() to make sure that
@@ -41,7 +47,7 @@ static inline void rwsem_set_reader_owned(struct rw_semaphore *sem)
 	 * do a write to the rwsem cacheline when it is really necessary
 	 * to minimize cacheline contention.
 	 */
-	if (sem->owner != RWSEM_READER_OWNED)
+	if (READ_ONCE(sem->owner) != RWSEM_READER_OWNED)
 		WRITE_ONCE(sem->owner, RWSEM_READER_OWNED);
 }
...
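The DEBUG_RWSEMS_WARN_ON() checks added above compare the rwsem's owner field, which is set when the lock is taken for read or write, against what the unlock primitive expects. A hypothetical example of the kind of lock/unlock mismatch they catch (not taken from this diff):

  #include <linux/rwsem.h>

  static DECLARE_RWSEM(example_sem);

  static void broken_unlock(void)
  {
          down_read(&example_sem);
          /*
           * Bug: releasing a reader-held rwsem with up_write().  With
           * CONFIG_DEBUG_RWSEMS=y, the new DEBUG_RWSEMS_WARN_ON(sem->owner !=
           * current) check in up_write() reports this mismatch.
           */
          up_write(&example_sem);
  }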
@@ -1034,69 +1034,20 @@ config DEBUG_PREEMPT
 
 menu "Lock Debugging (spinlocks, mutexes, etc...)"
 
-config DEBUG_RT_MUTEXES
-	bool "RT Mutex debugging, deadlock detection"
-	depends on DEBUG_KERNEL && RT_MUTEXES
-	help
-	  This allows rt mutex semantics violations and rt mutex related
-	  deadlocks (lockups) to be detected and reported automatically.
-
-config DEBUG_SPINLOCK
-	bool "Spinlock and rw-lock debugging: basic checks"
-	depends on DEBUG_KERNEL
-	select UNINLINE_SPIN_UNLOCK
-	help
-	  Say Y here and build SMP to catch missing spinlock initialization
-	  and certain other kinds of spinlock errors commonly made.  This is
-	  best used in conjunction with the NMI watchdog so that spinlock
-	  deadlocks are also debuggable.
-
-config DEBUG_MUTEXES
-	bool "Mutex debugging: basic checks"
-	depends on DEBUG_KERNEL
-	help
-	  This feature allows mutex semantics violations to be detected and
-	  reported.
-
-config DEBUG_WW_MUTEX_SLOWPATH
-	bool "Wait/wound mutex debugging: Slowpath testing"
-	depends on DEBUG_KERNEL && TRACE_IRQFLAGS_SUPPORT && STACKTRACE_SUPPORT && LOCKDEP_SUPPORT
-	select DEBUG_LOCK_ALLOC
-	select DEBUG_SPINLOCK
-	select DEBUG_MUTEXES
-	help
-	  This feature enables slowpath testing for w/w mutex users by
-	  injecting additional -EDEADLK wound/backoff cases. Together with
-	  the full mutex checks enabled with (CONFIG_PROVE_LOCKING) this
-	  will test all possible w/w mutex interface abuse with the
-	  exception of simply not acquiring all the required locks.
-	  Note that this feature can introduce significant overhead, so
-	  it really should not be enabled in a production or distro kernel,
-	  even a debug kernel.  If you are a driver writer, enable it.  If
-	  you are a distro, do not.
-
-config DEBUG_LOCK_ALLOC
-	bool "Lock debugging: detect incorrect freeing of live locks"
-	depends on DEBUG_KERNEL && TRACE_IRQFLAGS_SUPPORT && STACKTRACE_SUPPORT && LOCKDEP_SUPPORT
-	select DEBUG_SPINLOCK
-	select DEBUG_MUTEXES
-	select DEBUG_RT_MUTEXES if RT_MUTEXES
-	select LOCKDEP
-	help
-	  This feature will check whether any held lock (spinlock, rwlock,
-	  mutex or rwsem) is incorrectly freed by the kernel, via any of the
-	  memory-freeing routines (kfree(), kmem_cache_free(), free_pages(),
-	  vfree(), etc.), whether a live lock is incorrectly reinitialized via
-	  spin_lock_init()/mutex_init()/etc., or whether there is any lock
-	  held during task exit.
+config LOCK_DEBUGGING_SUPPORT
+	bool
+	depends on TRACE_IRQFLAGS_SUPPORT && STACKTRACE_SUPPORT && LOCKDEP_SUPPORT
+	default y
 
 config PROVE_LOCKING
 	bool "Lock debugging: prove locking correctness"
-	depends on DEBUG_KERNEL && TRACE_IRQFLAGS_SUPPORT && STACKTRACE_SUPPORT && LOCKDEP_SUPPORT
+	depends on DEBUG_KERNEL && LOCK_DEBUGGING_SUPPORT
 	select LOCKDEP
 	select DEBUG_SPINLOCK
 	select DEBUG_MUTEXES
 	select DEBUG_RT_MUTEXES if RT_MUTEXES
+	select DEBUG_RWSEMS if RWSEM_SPIN_ON_OWNER
+	select DEBUG_WW_MUTEX_SLOWPATH
 	select DEBUG_LOCK_ALLOC
 	select TRACE_IRQFLAGS
 	default n
@@ -1134,20 +1085,9 @@ config PROVE_LOCKING
 
 	  For more details, see Documentation/locking/lockdep-design.txt.
 
-config LOCKDEP
-	bool
-	depends on DEBUG_KERNEL && TRACE_IRQFLAGS_SUPPORT && STACKTRACE_SUPPORT && LOCKDEP_SUPPORT
-	select STACKTRACE
-	select FRAME_POINTER if !MIPS && !PPC && !ARM_UNWIND && !S390 && !MICROBLAZE && !ARC && !SCORE && !X86
-	select KALLSYMS
-	select KALLSYMS_ALL
-
-config LOCKDEP_SMALL
-	bool
-
 config LOCK_STAT
 	bool "Lock usage statistics"
-	depends on DEBUG_KERNEL && TRACE_IRQFLAGS_SUPPORT && STACKTRACE_SUPPORT && LOCKDEP_SUPPORT
+	depends on DEBUG_KERNEL && LOCK_DEBUGGING_SUPPORT
 	select LOCKDEP
 	select DEBUG_SPINLOCK
 	select DEBUG_MUTEXES
@@ -1167,6 +1107,80 @@ config LOCK_STAT
 	  CONFIG_LOCK_STAT defines "contended" and "acquired" lock events.
 	  (CONFIG_LOCKDEP defines "acquire" and "release" events.)
 
+config DEBUG_RT_MUTEXES
+	bool "RT Mutex debugging, deadlock detection"
+	depends on DEBUG_KERNEL && RT_MUTEXES
+	help
+	  This allows rt mutex semantics violations and rt mutex related
+	  deadlocks (lockups) to be detected and reported automatically.
+
+config DEBUG_SPINLOCK
+	bool "Spinlock and rw-lock debugging: basic checks"
+	depends on DEBUG_KERNEL
+	select UNINLINE_SPIN_UNLOCK
+	help
+	  Say Y here and build SMP to catch missing spinlock initialization
+	  and certain other kinds of spinlock errors commonly made.  This is
+	  best used in conjunction with the NMI watchdog so that spinlock
+	  deadlocks are also debuggable.
+
+config DEBUG_MUTEXES
+	bool "Mutex debugging: basic checks"
+	depends on DEBUG_KERNEL
+	help
+	  This feature allows mutex semantics violations to be detected and
+	  reported.
+
+config DEBUG_WW_MUTEX_SLOWPATH
+	bool "Wait/wound mutex debugging: Slowpath testing"
+	depends on DEBUG_KERNEL && LOCK_DEBUGGING_SUPPORT
+	select DEBUG_LOCK_ALLOC
+	select DEBUG_SPINLOCK
+	select DEBUG_MUTEXES
+	help
+	  This feature enables slowpath testing for w/w mutex users by
+	  injecting additional -EDEADLK wound/backoff cases. Together with
+	  the full mutex checks enabled with (CONFIG_PROVE_LOCKING) this
+	  will test all possible w/w mutex interface abuse with the
+	  exception of simply not acquiring all the required locks.
+	  Note that this feature can introduce significant overhead, so
+	  it really should not be enabled in a production or distro kernel,
+	  even a debug kernel.  If you are a driver writer, enable it.  If
+	  you are a distro, do not.
+
+config DEBUG_RWSEMS
+	bool "RW Semaphore debugging: basic checks"
+	depends on DEBUG_KERNEL && RWSEM_SPIN_ON_OWNER
+	help
+	  This debugging feature allows mismatched rw semaphore locks and unlocks
+	  to be detected and reported.
+
+config DEBUG_LOCK_ALLOC
+	bool "Lock debugging: detect incorrect freeing of live locks"
+	depends on DEBUG_KERNEL && LOCK_DEBUGGING_SUPPORT
+	select DEBUG_SPINLOCK
+	select DEBUG_MUTEXES
+	select DEBUG_RT_MUTEXES if RT_MUTEXES
+	select LOCKDEP
+	help
+	  This feature will check whether any held lock (spinlock, rwlock,
+	  mutex or rwsem) is incorrectly freed by the kernel, via any of the
+	  memory-freeing routines (kfree(), kmem_cache_free(), free_pages(),
+	  vfree(), etc.), whether a live lock is incorrectly reinitialized via
+	  spin_lock_init()/mutex_init()/etc., or whether there is any lock
+	  held during task exit.
+
+config LOCKDEP
+	bool
+	depends on DEBUG_KERNEL && LOCK_DEBUGGING_SUPPORT
+	select STACKTRACE
+	select FRAME_POINTER if !MIPS && !PPC && !ARM_UNWIND && !S390 && !MICROBLAZE && !ARC && !SCORE && !X86
+	select KALLSYMS
+	select KALLSYMS_ALL
+
+config LOCKDEP_SMALL
+	bool
+
 config DEBUG_LOCKDEP
 	bool "Lock dependency engine debugging"
 	depends on DEBUG_KERNEL && LOCKDEP
...
Prior Operation Subsequent Operation
--------------- ---------------------------
C Self R W RWM Self R W DR DW RMW SV
-- ---- - - --- ---- - - -- -- --- --
Store, e.g., WRITE_ONCE() Y Y
Load, e.g., READ_ONCE() Y Y Y Y
Unsuccessful RMW operation Y Y Y Y
rcu_dereference() Y Y Y Y
Successful *_acquire() R Y Y Y Y Y Y
Successful *_release() C Y Y Y W Y
smp_rmb() Y R Y Y R
smp_wmb() Y W Y Y W
smp_mb() & synchronize_rcu() CP Y Y Y Y Y Y Y Y
Successful full non-void RMW CP Y Y Y Y Y Y Y Y Y Y Y
smp_mb__before_atomic() CP Y Y Y a a a a Y
smp_mb__after_atomic() CP a a Y Y Y Y Y
Key: C: Ordering is cumulative
P: Ordering propagates
R: Read, for example, READ_ONCE(), or read portion of RMW
W: Write, for example, WRITE_ONCE(), or write portion of RMW
Y: Provides ordering
a: Provides ordering given intervening RMW atomic operation
DR: Dependent read (address dependency)
DW: Dependent write (address, data, or control dependency)
RMW: Atomic read-modify-write operation
SV: Same-variable access
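As one worked instance of the smp_wmb() and smp_rmb() rows: a write barrier orders a prior store against subsequent stores, and a read barrier orders a prior load against subsequent loads, which together give the classic message-passing pattern. The sketch below is illustrative only and does not come from this commit:

  #include <linux/compiler.h>    /* READ_ONCE(), WRITE_ONCE() */
  #include <asm/barrier.h>       /* smp_wmb(), smp_rmb() */

  static int data, flag;

  static void writer(void)
  {
          WRITE_ONCE(data, 42);
          smp_wmb();              /* orders the store to data before the store to flag */
          WRITE_ONCE(flag, 1);
  }

  static int reader(void)
  {
          if (READ_ONCE(flag)) {
                  smp_rmb();      /* orders the load of flag before the load of data */
                  return READ_ONCE(data);  /* guaranteed to observe 42 */
          }
          return -1;              /* flag not yet set */
  }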
This document provides background reading for memory models and related
tools. These documents are aimed at kernel hackers who are interested
in memory models.
Hardware manuals and models
===========================
o SPARC International Inc. (Ed.). 1994. "The SPARC Architecture
Reference Manual Version 9". SPARC International Inc.
o Compaq Computer Corporation (Ed.). 2002. "Alpha Architecture
Reference Manual". Compaq Computer Corporation.
o Intel Corporation (Ed.). 2002. "A Formal Specification of Intel
Itanium Processor Family Memory Ordering". Intel Corporation.
o Intel Corporation (Ed.). 2002. "Intel 64 and IA-32 Architectures
Software Developer’s Manual". Intel Corporation.
o Peter Sewell, Susmit Sarkar, Scott Owens, Francesco Zappa Nardelli,
and Magnus O. Myreen. 2010. "x86-TSO: A Rigorous and Usable
Programmer's Model for x86 Multiprocessors". Commun. ACM 53, 7
(July, 2010), 89-97. http://doi.acm.org/10.1145/1785414.1785443
o IBM Corporation (Ed.). 2009. "Power ISA Version 2.06". IBM
Corporation.
o ARM Ltd. (Ed.). 2009. "ARM Barrier Litmus Tests and Cookbook".
ARM Ltd.
o Susmit Sarkar, Peter Sewell, Jade Alglave, Luc Maranget, and
Derek Williams. 2011. "Understanding POWER Multiprocessors". In
Proceedings of the 32Nd ACM SIGPLAN Conference on Programming
Language Design and Implementation (PLDI ’11). ACM, New York,
NY, USA, 175–186.
o Susmit Sarkar, Kayvan Memarian, Scott Owens, Mark Batty,
Peter Sewell, Luc Maranget, Jade Alglave, and Derek Williams.
2012. "Synchronising C/C++ and POWER". In Proceedings of the 33rd
ACM SIGPLAN Conference on Programming Language Design and
Implementation (PLDI '12). ACM, New York, NY, USA, 311-322.
o ARM Ltd. (Ed.). 2014. "ARM Architecture Reference Manual (ARMv8,
for ARMv8-A architecture profile)". ARM Ltd.
o Imagination Technologies, LTD. 2015. "MIPS(R) Architecture
For Programmers, Volume II-A: The MIPS64(R) Instruction,
Set Reference Manual". Imagination Technologies,
LTD. https://imgtec.com/?do-download=4302.
o Shaked Flur, Kathryn E. Gray, Christopher Pulte, Susmit
Sarkar, Ali Sezgin, Luc Maranget, Will Deacon, and Peter
Sewell. 2016. "Modelling the ARMv8 Architecture, Operationally:
Concurrency and ISA". In Proceedings of the 43rd Annual ACM
SIGPLAN-SIGACT Symposium on Principles of Programming Languages
(POPL ’16). ACM, New York, NY, USA, 608–621.
o Shaked Flur, Susmit Sarkar, Christopher Pulte, Kyndylan Nienhuis,
Luc Maranget, Kathryn E. Gray, Ali Sezgin, Mark Batty, and Peter
Sewell. 2017. "Mixed-size Concurrency: ARM, POWER, C/C++11,
and SC". In Proceedings of the 44th ACM SIGPLAN Symposium on
Principles of Programming Languages (POPL 2017). ACM, New York,
NY, USA, 429–442.
Linux-kernel memory model
=========================
o Andrea Parri, Alan Stern, Luc Maranget, Paul E. McKenney,
and Jade Alglave. 2017. "A formal model of
Linux-kernel memory ordering - companion webpage".
http://moscova.inria.fr/~maranget/cats7/linux/. (2017). [Online;
accessed 30-January-2017].
o Jade Alglave, Luc Maranget, Paul E. McKenney, Andrea Parri, and
Alan Stern. 2017. "A formal kernel memory-ordering model (part 1)"
Linux Weekly News. https://lwn.net/Articles/718628/
o Jade Alglave, Luc Maranget, Paul E. McKenney, Andrea Parri, and
Alan Stern. 2017. "A formal kernel memory-ordering model (part 2)"
Linux Weekly News. https://lwn.net/Articles/720550/
Memory-model tooling
====================
o Daniel Jackson. 2002. "Alloy: A Lightweight Object Modelling
Notation". ACM Trans. Softw. Eng. Methodol. 11, 2 (April 2002),
256–290. http://doi.acm.org/10.1145/505145.505149
o Jade Alglave, Luc Maranget, and Michael Tautschnig. 2014. "Herding
Cats: Modelling, Simulation, Testing, and Data Mining for Weak
Memory". ACM Trans. Program. Lang. Syst. 36, 2, Article 7 (July
2014), 7:1–7:74 pages.
o Jade Alglave, Patrick Cousot, and Luc Maranget. 2016. "Syntax and
semantics of the weak consistency model specification language
cat". CoRR abs/1608.07531 (2016). http://arxiv.org/abs/1608.07531
Memory-model comparisons
========================
o Paul E. McKenney, Ulrich Weigand, Andrea Parri, and Boqun
Feng. 2016. "Linux-Kernel Memory Model". (6 June 2016).
http://open-std.org/JTC1/SC22/WG21/docs/papers/2016/p0124r2.html.
=====================================
LINUX KERNEL MEMORY CONSISTENCY MODEL
=====================================
============
INTRODUCTION
============
This directory contains the memory consistency model (memory model, for
short) of the Linux kernel, written in the "cat" language and executable
by the externally provided "herd7" simulator, which exhaustively explores
the state space of small litmus tests.
In addition, the "klitmus7" tool (also externally provided) may be used
to convert a litmus test to a Linux kernel module, which in turn allows
that litmus test to be exercised within the Linux kernel.
============
REQUIREMENTS
============
Version 7.48 of the "herd7" and "klitmus7" tools must be downloaded
separately:
https://github.com/herd/herdtools7
See "herdtools7/INSTALL.md" for installation instructions.
==================
BASIC USAGE: HERD7
==================
The memory model is used, in conjunction with "herd7", to exhaustively
explore the state space of small litmus tests.
For example, to run SB+mbonceonces.litmus against the memory model:
$ herd7 -conf linux-kernel.cfg litmus-tests/SB+mbonceonces.litmus
Here is the corresponding output:
Test SB+mbonceonces Allowed
States 3
0:r0=0; 1:r0=1;
0:r0=1; 1:r0=0;
0:r0=1; 1:r0=1;
No
Witnesses
Positive: 0 Negative: 3
Condition exists (0:r0=0 /\ 1:r0=0)
Observation SB+mbonceonces Never 0 3
Time SB+mbonceonces 0.01
Hash=d66d99523e2cac6b06e66f4c995ebb48
The "Positive: 0 Negative: 3" and the "Never 0 3" each indicate that
this litmus test's "exists" clause can not be satisfied.
See "herd7 -help" or "herdtools7/doc/" for more information.
=====================
BASIC USAGE: KLITMUS7
=====================
The "klitmus7" tool converts a litmus test into a Linux kernel module,
which may then be loaded and run.
For example, to run SB+mbonceonces.litmus against hardware:
$ mkdir mymodules
$ klitmus7 -o mymodules litmus-tests/SB+mbonceonces.litmus
$ cd mymodules ; make
$ sudo sh run.sh
The corresponding output includes:
Test SB+mbonceonces Allowed
Histogram (3 states)
644580 :>0:r0=1; 1:r0=0;
644328 :>0:r0=0; 1:r0=1;
711092 :>0:r0=1; 1:r0=1;
No
Witnesses
Positive: 0, Negative: 2000000
Condition exists (0:r0=0 /\ 1:r0=0) is NOT validated
Hash=d66d99523e2cac6b06e66f4c995ebb48
Observation SB+mbonceonces Never 0 2000000
Time SB+mbonceonces 0.16
The "Positive: 0 Negative: 2000000" and the "Never 0 2000000" indicate
that during two million trials, the state specified in this litmus
test's "exists" clause was not reached.
And, as with "herd7", please see "klitmus7 -help" or "herdtools7/doc/"
for more information.
====================
DESCRIPTION OF FILES
====================
Documentation/cheatsheet.txt
Quick-reference guide to the Linux-kernel memory model.
Documentation/explanation.txt
Describes the memory model in detail.
Documentation/recipes.txt
Lists common memory-ordering patterns.
Documentation/references.txt
Provides background reading.
linux-kernel.bell
Categorizes the relevant instructions, including memory
references, memory barriers, atomic read-modify-write operations,
lock acquisition/release, and RCU operations.
More formally, this file (1) lists the subtypes of the various
event types used by the memory model and (2) performs RCU
read-side critical section nesting analysis.
linux-kernel.cat
Specifies what reorderings are forbidden by memory references,
memory barriers, atomic read-modify-write operations, and RCU.
More formally, this file specifies what executions are forbidden
by the memory model. Allowed executions are those which
satisfy the model's "coherence", "atomic", "happens-before",
"propagation", and "rcu" axioms, which are defined in the file.
linux-kernel.cfg
Convenience file that gathers the common-case herd7 command-line
arguments.
linux-kernel.def
Maps from C-like syntax to herd7's internal litmus-test
instruction-set architecture.
litmus-tests
Directory containing a few representative litmus tests, which
are listed in litmus-tests/README. Many more litmus tests
are available at https://github.com/paulmckrcu/litmus.
lock.cat
Provides a front-end analysis of lock acquisition and release,
for example, associating a lock acquisition with the preceding
and following releases and checking for self-deadlock.
More formally, this file defines a performance-enhanced scheme
for generation of the possible reads-from and coherence order
relations on the locking primitives.
README
This file.
===========
LIMITATIONS
===========
The Linux-kernel memory model has the following limitations:
1. Compiler optimizations are not modeled. Of course, the use
of READ_ONCE() and WRITE_ONCE() limits the compiler's ability
to optimize, but there is Linux-kernel code that uses bare C
memory accesses. Handling this code is on the to-do list.
For more information, see Documentation/explanation.txt (in
particular, the "THE PROGRAM ORDER RELATION: po AND po-loc"
and "A WARNING" sections). A short sketch contrasting bare and
marked accesses follows this list.
2. Multiple access sizes for a single variable are not supported,
and neither are misaligned or partially overlapping accesses.
3. Exceptions and interrupts are not modeled. In some cases,
this limitation can be overcome by modeling the interrupt or
exception with an additional process.
4. I/O such as MMIO or DMA is not supported.
5. Self-modifying code (such as that found in the kernel's
alternatives mechanism, function tracer, Berkeley Packet Filter
JIT compiler, and module loader) is not supported.
6. Complete modeling of all variants of atomic read-modify-write
operations, locking primitives, and RCU is not provided.
For example, call_rcu() and rcu_barrier() are not supported.
However, a substantial amount of support is provided for these
operations, as shown in the linux-kernel.def file.
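The following sketch contrasts a bare C access, which the model does not handle, with the marked accesses that it does reason about (illustrative only):

  #include <linux/compiler.h>    /* READ_ONCE(), WRITE_ONCE() */

  static int x, y;

  static void unmarked_accesses(void)     /* outside the model's scope */
  {
          x = 1;
          y = x + 1;
  }

  static void marked_accesses(void)       /* what the LKMM reasons about */
  {
          WRITE_ONCE(x, 1);
          WRITE_ONCE(y, READ_ONCE(x) + 1);
  }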
The "herd7" tool has some additional limitations of its own, apart from
the memory model:
1. Non-trivial data structures such as arrays or structures are
not supported. However, pointers are supported, allowing trivial
linked lists to be constructed.
2. Dynamic memory allocation is not supported, although this can
be worked around in some cases by supplying multiple statically
allocated variables.
Some of these limitations may be overcome in the future, but others are
more likely to be addressed by incorporating the Linux-kernel memory model
into other tools.
// SPDX-License-Identifier: GPL-2.0+
(*
* Copyright (C) 2015 Jade Alglave <j.alglave@ucl.ac.uk>,
* Copyright (C) 2016 Luc Maranget <luc.maranget@inria.fr> for Inria
* Copyright (C) 2017 Alan Stern <stern@rowland.harvard.edu>,
* Andrea Parri <parri.andrea@gmail.com>
*
* An earlier version of this file appears in the companion webpage for
* "Frightening small children and disconcerting grown-ups: Concurrency
* in the Linux kernel" by Alglave, Maranget, McKenney, Parri, and Stern,
* which is to appear in ASPLOS 2018.
*)
"Linux-kernel memory consistency model"
enum Accesses = 'once (*READ_ONCE,WRITE_ONCE,ACCESS_ONCE*) ||
		'release (*smp_store_release*) ||
		'acquire (*smp_load_acquire*) ||
		'noreturn (* R of non-return RMW *)
instructions R[{'once,'acquire,'noreturn}]
instructions W[{'once,'release}]
instructions RMW[{'once,'acquire,'release}]

enum Barriers = 'wmb (*smp_wmb*) ||
		'rmb (*smp_rmb*) ||
		'mb (*smp_mb*) ||
		'rcu-lock (*rcu_read_lock*) ||
		'rcu-unlock (*rcu_read_unlock*) ||
		'sync-rcu (*synchronize_rcu*) ||
		'before-atomic (*smp_mb__before_atomic*) ||
		'after-atomic (*smp_mb__after_atomic*) ||
		'after-spinlock (*smp_mb__after_spinlock*)
instructions F[Barriers]

(* Compute matching pairs of nested Rcu-lock and Rcu-unlock *)
let matched = let rec
	    unmatched-locks = Rcu-lock \ domain(matched)
	and unmatched-unlocks = Rcu-unlock \ range(matched)
	and unmatched = unmatched-locks | unmatched-unlocks
	and unmatched-po = [unmatched] ; po ; [unmatched]
	and unmatched-locks-to-unlocks =
		[unmatched-locks] ; po ; [unmatched-unlocks]
	and matched = matched | (unmatched-locks-to-unlocks \
		(unmatched-po ; unmatched-po))
	in matched

(* Validate nesting *)
flag ~empty Rcu-lock \ domain(matched) as unbalanced-rcu-locking
flag ~empty Rcu-unlock \ range(matched) as unbalanced-rcu-locking

(* Outermost level of nesting only *)
let crit = matched \ (po^-1 ; matched ; po^-1)
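In kernel terms, the matched relation above pairs rcu_read_lock() calls with their rcu_read_unlock() counterparts, and crit keeps only the outermost pairs, so nested critical sections such as in the illustrative sketch below (not from this commit) collapse into a single read-side critical section:

  #include <linux/rcupdate.h>
  #include <linux/compiler.h>

  static int shared;

  static int nested_reader(void)
  {
          int val;

          rcu_read_lock();        /* outermost lock */
          rcu_read_lock();        /*   nested lock */
          val = READ_ONCE(shared);
          rcu_read_unlock();      /*   nested unlock */
          rcu_read_unlock();      /* outermost unlock: only the outer pair forms "crit" */
          return val;
  }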