Commit 3478588b authored by Linus Torvalds

Merge branch 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull locking updates from Ingo Molnar:
 "The biggest part of this tree is the new auto-generated atomics API
  wrappers by Mark Rutland.

  The primary motivation was to allow instrumentation without uglifying
  the primary source code.

  The linecount increase comes from adding the auto-generated files to
  the Git space as well:

    include/asm-generic/atomic-instrumented.h     | 1689 ++++++++++++++++--
    include/asm-generic/atomic-long.h             | 1174 ++++++++++---
    include/linux/atomic-fallback.h               | 2295 +++++++++++++++++++++++++
    include/linux/atomic.h                        | 1241 +------------

  I preferred this approach, so that the full call stack of the (already
  complex) locking APIs is still fully visible in 'git grep'.

  But if this is excessive we could certainly hide them.

  There's a separate build-time mechanism to determine whether the
  headers are out of date (they should never be stale if we do our job
  right).

  Anyway, nothing from this should be visible to regular kernel
  developers.

  Other changes:

   - Add support for dynamic keys, which removes a source of false
     positives in the workqueue code, among other things (Bart Van
     Assche)

   - Updates to tools/memory-model (Andrea Parri, Paul E. McKenney)

   - qspinlock, wake_q and lockdep micro-optimizations (Waiman Long)

   - misc other updates and enhancements"
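The wrappers mentioned above follow a simple layering: a generic atomic_*() wrapper instruments the access and then defers to the architecture's arch_atomic_*() implementation. Below is a minimal sketch of that pattern, not the verbatim generated code; kasan_check_write() stands in for the instrumentation hook, and the usual kernel headers are assumed.

#include <linux/kasan-checks.h>

static __always_inline void atomic_inc(atomic_t *v)
{
	/* Instrument the access, then defer to the architecture op. */
	kasan_check_write(v, sizeof(*v));
	arch_atomic_inc(v);
}

static __always_inline bool atomic_dec_and_test(atomic_t *v)
{
	kasan_check_write(v, sizeof(*v));
	return arch_atomic_dec_and_test(v);
}

The arm64 hunks further down rename the architecture implementations to arch_atomic_*() precisely so that generic wrappers of this shape can sit on top of them.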

* 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (48 commits)
  locking/lockdep: Shrink struct lock_class_key
  locking/lockdep: Add module_param to enable consistency checks
  lockdep/lib/tests: Test dynamic key registration
  lockdep/lib/tests: Fix run_tests.sh
  kernel/workqueue: Use dynamic lockdep keys for workqueues
  locking/lockdep: Add support for dynamic keys
  locking/lockdep: Verify whether lock objects are small enough to be used as class keys
  locking/lockdep: Check data structure consistency
  locking/lockdep: Reuse lock chains that have been freed
  locking/lockdep: Fix a comment in add_chain_cache()
  locking/lockdep: Introduce lockdep_next_lockchain() and lock_chain_count()
  locking/lockdep: Reuse list entries that are no longer in use
  locking/lockdep: Free lock classes that are no longer in use
  locking/lockdep: Update two outdated comments
  locking/lockdep: Make it easy to detect whether or not inside a selftest
  locking/lockdep: Split lockdep_free_key_range() and lockdep_reset_lock()
  locking/lockdep: Initialize the locks_before and locks_after lists earlier
  locking/lockdep: Make zap_class() remove all matching lock order entries
  locking/lockdep: Reorder struct lock_class members
  locking/lockdep: Avoid that add_chain_cache() adds an invalid chain to the cache
  ...
parents c8f5ed6e 28d49e28
@@ -54,6 +54,13 @@ must propagate to all other CPUs before the release operation
 (A-cumulative property). This is implemented using
 :c:func:`smp_store_release`.
+An ACQUIRE memory ordering guarantees that all post loads and
+stores (all po-later instructions) on the same CPU are
+completed after the acquire operation. It also guarantees that all
+po-later stores on the same CPU must propagate to all other CPUs
+after the acquire operation executes. This is implemented using
+:c:func:`smp_acquire__after_ctrl_dep`.
 A control dependency (on success) for refcounters guarantees that
 if a reference for an object was successfully obtained (reference
 counter increment or addition happened, function returned true),
@@ -119,13 +126,24 @@ Memory ordering guarantees changes:
 result of obtaining pointer to the object!
-case 5) - decrement-based RMW ops that return a value
------------------------------------------------------
+case 5) - generic dec/sub decrement-based RMW ops that return a value
+---------------------------------------------------------------------
 Function changes:
  * :c:func:`atomic_dec_and_test` --> :c:func:`refcount_dec_and_test`
  * :c:func:`atomic_sub_and_test` --> :c:func:`refcount_sub_and_test`
+Memory ordering guarantees changes:
+ * fully ordered --> RELEASE ordering + ACQUIRE ordering on success
+case 6) other decrement-based RMW ops that return a value
+---------------------------------------------------------
+Function changes:
  * no atomic counterpart --> :c:func:`refcount_dec_if_one`
  * ``atomic_add_unless(&var, -1, 1)`` --> ``refcount_dec_not_one(&var)``
@@ -136,7 +154,7 @@ Memory ordering guarantees changes:
 .. note:: :c:func:`atomic_add_unless` only provides full order on success.
-case 6) - lock-based RMW
+case 7) - lock-based RMW
 ------------------------
 Function changes:
......
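As a worked illustration of the ACQUIRE-on-success guarantee documented above, here is a hedged sketch of the usual free-on-last-reference pattern; struct obj and obj_put() are hypothetical names, not part of this series.

struct obj {
	refcount_t	ref;
	void		*payload;
};

static void obj_put(struct obj *o)
{
	/*
	 * RELEASE ordering: stores this CPU made to the object are visible
	 * before the counter can reach zero.  ACQUIRE ordering on success:
	 * the teardown below cannot be speculated ahead of the decrement
	 * that made us the last owner.
	 */
	if (refcount_dec_and_test(&o->ref)) {
		kfree(o->payload);
		kfree(o);
	}
}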
@@ -6,7 +6,8 @@
 # 2) Generate timeconst.h
 # 3) Generate asm-offsets.h (may need bounds.h and timeconst.h)
 # 4) Check for missing system calls
-# 5) Generate constants.py (may need bounds.h)
+# 5) check atomics headers are up-to-date
+# 6) Generate constants.py (may need bounds.h)
 #####
 # 1) Generate bounds.h
@@ -59,7 +60,20 @@ missing-syscalls: scripts/checksyscalls.sh $(offsets-file) FORCE
 	$(call cmd,syscalls)
 #####
-# 5) Generate constants for Python GDB integration
+# 5) Check atomic headers are up-to-date
+#
+always += old-atomics
+targets += old-atomics
+quiet_cmd_atomics = CALL $<
+      cmd_atomics = $(CONFIG_SHELL) $<
+old-atomics: scripts/atomic/check-atomics.sh FORCE
+	$(call cmd,atomics)
+#####
+# 6) Generate constants for Python GDB integration
 #
 extra-$(CONFIG_GDB_SCRIPTS) += build_constants_py
......
@@ -2608,6 +2608,7 @@ L:	linux-kernel@vger.kernel.org
 S:	Maintained
 F:	arch/*/include/asm/atomic*.h
 F:	include/*/atomic*.h
+F:	scripts/atomic/
 ATTO EXPRESSSAS SAS/SATA RAID SCSI DRIVER
 M:	Bradley Grove <linuxdrivers@attotech.com>
......
...@@ -39,7 +39,7 @@ ...@@ -39,7 +39,7 @@
#define ATOMIC_OP(op, asm_op) \ #define ATOMIC_OP(op, asm_op) \
__LL_SC_INLINE void \ __LL_SC_INLINE void \
__LL_SC_PREFIX(atomic_##op(int i, atomic_t *v)) \ __LL_SC_PREFIX(arch_atomic_##op(int i, atomic_t *v)) \
{ \ { \
unsigned long tmp; \ unsigned long tmp; \
int result; \ int result; \
...@@ -53,11 +53,11 @@ __LL_SC_PREFIX(atomic_##op(int i, atomic_t *v)) \ ...@@ -53,11 +53,11 @@ __LL_SC_PREFIX(atomic_##op(int i, atomic_t *v)) \
: "=&r" (result), "=&r" (tmp), "+Q" (v->counter) \ : "=&r" (result), "=&r" (tmp), "+Q" (v->counter) \
: "Ir" (i)); \ : "Ir" (i)); \
} \ } \
__LL_SC_EXPORT(atomic_##op); __LL_SC_EXPORT(arch_atomic_##op);
#define ATOMIC_OP_RETURN(name, mb, acq, rel, cl, op, asm_op) \ #define ATOMIC_OP_RETURN(name, mb, acq, rel, cl, op, asm_op) \
__LL_SC_INLINE int \ __LL_SC_INLINE int \
__LL_SC_PREFIX(atomic_##op##_return##name(int i, atomic_t *v)) \ __LL_SC_PREFIX(arch_atomic_##op##_return##name(int i, atomic_t *v)) \
{ \ { \
unsigned long tmp; \ unsigned long tmp; \
int result; \ int result; \
...@@ -75,11 +75,11 @@ __LL_SC_PREFIX(atomic_##op##_return##name(int i, atomic_t *v)) \ ...@@ -75,11 +75,11 @@ __LL_SC_PREFIX(atomic_##op##_return##name(int i, atomic_t *v)) \
\ \
return result; \ return result; \
} \ } \
__LL_SC_EXPORT(atomic_##op##_return##name); __LL_SC_EXPORT(arch_atomic_##op##_return##name);
#define ATOMIC_FETCH_OP(name, mb, acq, rel, cl, op, asm_op) \ #define ATOMIC_FETCH_OP(name, mb, acq, rel, cl, op, asm_op) \
__LL_SC_INLINE int \ __LL_SC_INLINE int \
__LL_SC_PREFIX(atomic_fetch_##op##name(int i, atomic_t *v)) \ __LL_SC_PREFIX(arch_atomic_fetch_##op##name(int i, atomic_t *v)) \
{ \ { \
unsigned long tmp; \ unsigned long tmp; \
int val, result; \ int val, result; \
...@@ -97,7 +97,7 @@ __LL_SC_PREFIX(atomic_fetch_##op##name(int i, atomic_t *v)) \ ...@@ -97,7 +97,7 @@ __LL_SC_PREFIX(atomic_fetch_##op##name(int i, atomic_t *v)) \
\ \
return result; \ return result; \
} \ } \
__LL_SC_EXPORT(atomic_fetch_##op##name); __LL_SC_EXPORT(arch_atomic_fetch_##op##name);
#define ATOMIC_OPS(...) \ #define ATOMIC_OPS(...) \
ATOMIC_OP(__VA_ARGS__) \ ATOMIC_OP(__VA_ARGS__) \
...@@ -133,7 +133,7 @@ ATOMIC_OPS(xor, eor) ...@@ -133,7 +133,7 @@ ATOMIC_OPS(xor, eor)
#define ATOMIC64_OP(op, asm_op) \ #define ATOMIC64_OP(op, asm_op) \
__LL_SC_INLINE void \ __LL_SC_INLINE void \
__LL_SC_PREFIX(atomic64_##op(long i, atomic64_t *v)) \ __LL_SC_PREFIX(arch_atomic64_##op(long i, atomic64_t *v)) \
{ \ { \
long result; \ long result; \
unsigned long tmp; \ unsigned long tmp; \
...@@ -147,11 +147,11 @@ __LL_SC_PREFIX(atomic64_##op(long i, atomic64_t *v)) \ ...@@ -147,11 +147,11 @@ __LL_SC_PREFIX(atomic64_##op(long i, atomic64_t *v)) \
: "=&r" (result), "=&r" (tmp), "+Q" (v->counter) \ : "=&r" (result), "=&r" (tmp), "+Q" (v->counter) \
: "Ir" (i)); \ : "Ir" (i)); \
} \ } \
__LL_SC_EXPORT(atomic64_##op); __LL_SC_EXPORT(arch_atomic64_##op);
#define ATOMIC64_OP_RETURN(name, mb, acq, rel, cl, op, asm_op) \ #define ATOMIC64_OP_RETURN(name, mb, acq, rel, cl, op, asm_op) \
__LL_SC_INLINE long \ __LL_SC_INLINE long \
__LL_SC_PREFIX(atomic64_##op##_return##name(long i, atomic64_t *v)) \ __LL_SC_PREFIX(arch_atomic64_##op##_return##name(long i, atomic64_t *v))\
{ \ { \
long result; \ long result; \
unsigned long tmp; \ unsigned long tmp; \
...@@ -169,11 +169,11 @@ __LL_SC_PREFIX(atomic64_##op##_return##name(long i, atomic64_t *v)) \ ...@@ -169,11 +169,11 @@ __LL_SC_PREFIX(atomic64_##op##_return##name(long i, atomic64_t *v)) \
\ \
return result; \ return result; \
} \ } \
__LL_SC_EXPORT(atomic64_##op##_return##name); __LL_SC_EXPORT(arch_atomic64_##op##_return##name);
#define ATOMIC64_FETCH_OP(name, mb, acq, rel, cl, op, asm_op) \ #define ATOMIC64_FETCH_OP(name, mb, acq, rel, cl, op, asm_op) \
__LL_SC_INLINE long \ __LL_SC_INLINE long \
__LL_SC_PREFIX(atomic64_fetch_##op##name(long i, atomic64_t *v)) \ __LL_SC_PREFIX(arch_atomic64_fetch_##op##name(long i, atomic64_t *v)) \
{ \ { \
long result, val; \ long result, val; \
unsigned long tmp; \ unsigned long tmp; \
...@@ -191,7 +191,7 @@ __LL_SC_PREFIX(atomic64_fetch_##op##name(long i, atomic64_t *v)) \ ...@@ -191,7 +191,7 @@ __LL_SC_PREFIX(atomic64_fetch_##op##name(long i, atomic64_t *v)) \
\ \
return result; \ return result; \
} \ } \
__LL_SC_EXPORT(atomic64_fetch_##op##name); __LL_SC_EXPORT(arch_atomic64_fetch_##op##name);
#define ATOMIC64_OPS(...) \ #define ATOMIC64_OPS(...) \
ATOMIC64_OP(__VA_ARGS__) \ ATOMIC64_OP(__VA_ARGS__) \
...@@ -226,7 +226,7 @@ ATOMIC64_OPS(xor, eor) ...@@ -226,7 +226,7 @@ ATOMIC64_OPS(xor, eor)
#undef ATOMIC64_OP #undef ATOMIC64_OP
__LL_SC_INLINE long __LL_SC_INLINE long
__LL_SC_PREFIX(atomic64_dec_if_positive(atomic64_t *v)) __LL_SC_PREFIX(arch_atomic64_dec_if_positive(atomic64_t *v))
{ {
long result; long result;
unsigned long tmp; unsigned long tmp;
...@@ -246,7 +246,7 @@ __LL_SC_PREFIX(atomic64_dec_if_positive(atomic64_t *v)) ...@@ -246,7 +246,7 @@ __LL_SC_PREFIX(atomic64_dec_if_positive(atomic64_t *v))
return result; return result;
} }
__LL_SC_EXPORT(atomic64_dec_if_positive); __LL_SC_EXPORT(arch_atomic64_dec_if_positive);
#define __CMPXCHG_CASE(w, sfx, name, sz, mb, acq, rel, cl) \ #define __CMPXCHG_CASE(w, sfx, name, sz, mb, acq, rel, cl) \
__LL_SC_INLINE u##sz \ __LL_SC_INLINE u##sz \
......
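The arch_ prefix added above is what lets the generic layers from the pull request description (atomic-instrumented.h and atomic-fallback.h) wrap the arm64 primitives. As a rough, simplified sketch of the shape an ordering fallback takes when an architecture only supplies the _relaxed form (the exact generated code and fence helper names may differ):

#ifndef arch_atomic_fetch_add_acquire
static inline int arch_atomic_fetch_add_acquire(int i, atomic_t *v)
{
	int ret = arch_atomic_fetch_add_relaxed(i, v);

	/* Upgrade the relaxed op with an acquire fence after it. */
	__atomic_acquire_fence();
	return ret;
}
#define arch_atomic_fetch_add_acquire arch_atomic_fetch_add_acquire
#endif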
...@@ -25,9 +25,9 @@ ...@@ -25,9 +25,9 @@
#error "please don't include this file directly" #error "please don't include this file directly"
#endif #endif
#define __LL_SC_ATOMIC(op) __LL_SC_CALL(atomic_##op) #define __LL_SC_ATOMIC(op) __LL_SC_CALL(arch_atomic_##op)
#define ATOMIC_OP(op, asm_op) \ #define ATOMIC_OP(op, asm_op) \
static inline void atomic_##op(int i, atomic_t *v) \ static inline void arch_atomic_##op(int i, atomic_t *v) \
{ \ { \
register int w0 asm ("w0") = i; \ register int w0 asm ("w0") = i; \
register atomic_t *x1 asm ("x1") = v; \ register atomic_t *x1 asm ("x1") = v; \
...@@ -47,7 +47,7 @@ ATOMIC_OP(add, stadd) ...@@ -47,7 +47,7 @@ ATOMIC_OP(add, stadd)
#undef ATOMIC_OP #undef ATOMIC_OP
#define ATOMIC_FETCH_OP(name, mb, op, asm_op, cl...) \ #define ATOMIC_FETCH_OP(name, mb, op, asm_op, cl...) \
static inline int atomic_fetch_##op##name(int i, atomic_t *v) \ static inline int arch_atomic_fetch_##op##name(int i, atomic_t *v) \
{ \ { \
register int w0 asm ("w0") = i; \ register int w0 asm ("w0") = i; \
register atomic_t *x1 asm ("x1") = v; \ register atomic_t *x1 asm ("x1") = v; \
...@@ -79,7 +79,7 @@ ATOMIC_FETCH_OPS(add, ldadd) ...@@ -79,7 +79,7 @@ ATOMIC_FETCH_OPS(add, ldadd)
#undef ATOMIC_FETCH_OPS #undef ATOMIC_FETCH_OPS
#define ATOMIC_OP_ADD_RETURN(name, mb, cl...) \ #define ATOMIC_OP_ADD_RETURN(name, mb, cl...) \
static inline int atomic_add_return##name(int i, atomic_t *v) \ static inline int arch_atomic_add_return##name(int i, atomic_t *v) \
{ \ { \
register int w0 asm ("w0") = i; \ register int w0 asm ("w0") = i; \
register atomic_t *x1 asm ("x1") = v; \ register atomic_t *x1 asm ("x1") = v; \
...@@ -105,7 +105,7 @@ ATOMIC_OP_ADD_RETURN( , al, "memory") ...@@ -105,7 +105,7 @@ ATOMIC_OP_ADD_RETURN( , al, "memory")
#undef ATOMIC_OP_ADD_RETURN #undef ATOMIC_OP_ADD_RETURN
static inline void atomic_and(int i, atomic_t *v) static inline void arch_atomic_and(int i, atomic_t *v)
{ {
register int w0 asm ("w0") = i; register int w0 asm ("w0") = i;
register atomic_t *x1 asm ("x1") = v; register atomic_t *x1 asm ("x1") = v;
...@@ -123,7 +123,7 @@ static inline void atomic_and(int i, atomic_t *v) ...@@ -123,7 +123,7 @@ static inline void atomic_and(int i, atomic_t *v)
} }
#define ATOMIC_FETCH_OP_AND(name, mb, cl...) \ #define ATOMIC_FETCH_OP_AND(name, mb, cl...) \
static inline int atomic_fetch_and##name(int i, atomic_t *v) \ static inline int arch_atomic_fetch_and##name(int i, atomic_t *v) \
{ \ { \
register int w0 asm ("w0") = i; \ register int w0 asm ("w0") = i; \
register atomic_t *x1 asm ("x1") = v; \ register atomic_t *x1 asm ("x1") = v; \
...@@ -149,7 +149,7 @@ ATOMIC_FETCH_OP_AND( , al, "memory") ...@@ -149,7 +149,7 @@ ATOMIC_FETCH_OP_AND( , al, "memory")
#undef ATOMIC_FETCH_OP_AND #undef ATOMIC_FETCH_OP_AND
static inline void atomic_sub(int i, atomic_t *v) static inline void arch_atomic_sub(int i, atomic_t *v)
{ {
register int w0 asm ("w0") = i; register int w0 asm ("w0") = i;
register atomic_t *x1 asm ("x1") = v; register atomic_t *x1 asm ("x1") = v;
...@@ -167,7 +167,7 @@ static inline void atomic_sub(int i, atomic_t *v) ...@@ -167,7 +167,7 @@ static inline void atomic_sub(int i, atomic_t *v)
} }
#define ATOMIC_OP_SUB_RETURN(name, mb, cl...) \ #define ATOMIC_OP_SUB_RETURN(name, mb, cl...) \
static inline int atomic_sub_return##name(int i, atomic_t *v) \ static inline int arch_atomic_sub_return##name(int i, atomic_t *v) \
{ \ { \
register int w0 asm ("w0") = i; \ register int w0 asm ("w0") = i; \
register atomic_t *x1 asm ("x1") = v; \ register atomic_t *x1 asm ("x1") = v; \
...@@ -195,7 +195,7 @@ ATOMIC_OP_SUB_RETURN( , al, "memory") ...@@ -195,7 +195,7 @@ ATOMIC_OP_SUB_RETURN( , al, "memory")
#undef ATOMIC_OP_SUB_RETURN #undef ATOMIC_OP_SUB_RETURN
#define ATOMIC_FETCH_OP_SUB(name, mb, cl...) \ #define ATOMIC_FETCH_OP_SUB(name, mb, cl...) \
static inline int atomic_fetch_sub##name(int i, atomic_t *v) \ static inline int arch_atomic_fetch_sub##name(int i, atomic_t *v) \
{ \ { \
register int w0 asm ("w0") = i; \ register int w0 asm ("w0") = i; \
register atomic_t *x1 asm ("x1") = v; \ register atomic_t *x1 asm ("x1") = v; \
...@@ -222,9 +222,9 @@ ATOMIC_FETCH_OP_SUB( , al, "memory") ...@@ -222,9 +222,9 @@ ATOMIC_FETCH_OP_SUB( , al, "memory")
#undef ATOMIC_FETCH_OP_SUB #undef ATOMIC_FETCH_OP_SUB
#undef __LL_SC_ATOMIC #undef __LL_SC_ATOMIC
#define __LL_SC_ATOMIC64(op) __LL_SC_CALL(atomic64_##op) #define __LL_SC_ATOMIC64(op) __LL_SC_CALL(arch_atomic64_##op)
#define ATOMIC64_OP(op, asm_op) \ #define ATOMIC64_OP(op, asm_op) \
static inline void atomic64_##op(long i, atomic64_t *v) \ static inline void arch_atomic64_##op(long i, atomic64_t *v) \
{ \ { \
register long x0 asm ("x0") = i; \ register long x0 asm ("x0") = i; \
register atomic64_t *x1 asm ("x1") = v; \ register atomic64_t *x1 asm ("x1") = v; \
...@@ -244,7 +244,7 @@ ATOMIC64_OP(add, stadd) ...@@ -244,7 +244,7 @@ ATOMIC64_OP(add, stadd)
#undef ATOMIC64_OP #undef ATOMIC64_OP
#define ATOMIC64_FETCH_OP(name, mb, op, asm_op, cl...) \ #define ATOMIC64_FETCH_OP(name, mb, op, asm_op, cl...) \
static inline long atomic64_fetch_##op##name(long i, atomic64_t *v) \ static inline long arch_atomic64_fetch_##op##name(long i, atomic64_t *v)\
{ \ { \
register long x0 asm ("x0") = i; \ register long x0 asm ("x0") = i; \
register atomic64_t *x1 asm ("x1") = v; \ register atomic64_t *x1 asm ("x1") = v; \
...@@ -276,7 +276,7 @@ ATOMIC64_FETCH_OPS(add, ldadd) ...@@ -276,7 +276,7 @@ ATOMIC64_FETCH_OPS(add, ldadd)
#undef ATOMIC64_FETCH_OPS #undef ATOMIC64_FETCH_OPS
#define ATOMIC64_OP_ADD_RETURN(name, mb, cl...) \ #define ATOMIC64_OP_ADD_RETURN(name, mb, cl...) \
static inline long atomic64_add_return##name(long i, atomic64_t *v) \ static inline long arch_atomic64_add_return##name(long i, atomic64_t *v)\
{ \ { \
register long x0 asm ("x0") = i; \ register long x0 asm ("x0") = i; \
register atomic64_t *x1 asm ("x1") = v; \ register atomic64_t *x1 asm ("x1") = v; \
...@@ -302,7 +302,7 @@ ATOMIC64_OP_ADD_RETURN( , al, "memory") ...@@ -302,7 +302,7 @@ ATOMIC64_OP_ADD_RETURN( , al, "memory")
#undef ATOMIC64_OP_ADD_RETURN #undef ATOMIC64_OP_ADD_RETURN
static inline void atomic64_and(long i, atomic64_t *v) static inline void arch_atomic64_and(long i, atomic64_t *v)
{ {
register long x0 asm ("x0") = i; register long x0 asm ("x0") = i;
register atomic64_t *x1 asm ("x1") = v; register atomic64_t *x1 asm ("x1") = v;
...@@ -320,7 +320,7 @@ static inline void atomic64_and(long i, atomic64_t *v) ...@@ -320,7 +320,7 @@ static inline void atomic64_and(long i, atomic64_t *v)
} }
#define ATOMIC64_FETCH_OP_AND(name, mb, cl...) \ #define ATOMIC64_FETCH_OP_AND(name, mb, cl...) \
static inline long atomic64_fetch_and##name(long i, atomic64_t *v) \ static inline long arch_atomic64_fetch_and##name(long i, atomic64_t *v) \
{ \ { \
register long x0 asm ("x0") = i; \ register long x0 asm ("x0") = i; \
register atomic64_t *x1 asm ("x1") = v; \ register atomic64_t *x1 asm ("x1") = v; \
...@@ -346,7 +346,7 @@ ATOMIC64_FETCH_OP_AND( , al, "memory") ...@@ -346,7 +346,7 @@ ATOMIC64_FETCH_OP_AND( , al, "memory")
#undef ATOMIC64_FETCH_OP_AND #undef ATOMIC64_FETCH_OP_AND
static inline void atomic64_sub(long i, atomic64_t *v) static inline void arch_atomic64_sub(long i, atomic64_t *v)
{ {
register long x0 asm ("x0") = i; register long x0 asm ("x0") = i;
register atomic64_t *x1 asm ("x1") = v; register atomic64_t *x1 asm ("x1") = v;
...@@ -364,7 +364,7 @@ static inline void atomic64_sub(long i, atomic64_t *v) ...@@ -364,7 +364,7 @@ static inline void atomic64_sub(long i, atomic64_t *v)
} }
#define ATOMIC64_OP_SUB_RETURN(name, mb, cl...) \ #define ATOMIC64_OP_SUB_RETURN(name, mb, cl...) \
static inline long atomic64_sub_return##name(long i, atomic64_t *v) \ static inline long arch_atomic64_sub_return##name(long i, atomic64_t *v)\
{ \ { \
register long x0 asm ("x0") = i; \ register long x0 asm ("x0") = i; \
register atomic64_t *x1 asm ("x1") = v; \ register atomic64_t *x1 asm ("x1") = v; \
...@@ -392,7 +392,7 @@ ATOMIC64_OP_SUB_RETURN( , al, "memory") ...@@ -392,7 +392,7 @@ ATOMIC64_OP_SUB_RETURN( , al, "memory")
#undef ATOMIC64_OP_SUB_RETURN #undef ATOMIC64_OP_SUB_RETURN
#define ATOMIC64_FETCH_OP_SUB(name, mb, cl...) \ #define ATOMIC64_FETCH_OP_SUB(name, mb, cl...) \
static inline long atomic64_fetch_sub##name(long i, atomic64_t *v) \ static inline long arch_atomic64_fetch_sub##name(long i, atomic64_t *v) \
{ \ { \
register long x0 asm ("x0") = i; \ register long x0 asm ("x0") = i; \
register atomic64_t *x1 asm ("x1") = v; \ register atomic64_t *x1 asm ("x1") = v; \
...@@ -418,7 +418,7 @@ ATOMIC64_FETCH_OP_SUB( , al, "memory") ...@@ -418,7 +418,7 @@ ATOMIC64_FETCH_OP_SUB( , al, "memory")
#undef ATOMIC64_FETCH_OP_SUB #undef ATOMIC64_FETCH_OP_SUB
static inline long atomic64_dec_if_positive(atomic64_t *v) static inline long arch_atomic64_dec_if_positive(atomic64_t *v)
{ {
register long x0 asm ("x0") = (long)v; register long x0 asm ("x0") = (long)v;
......
...@@ -110,10 +110,10 @@ __XCHG_GEN(_mb) ...@@ -110,10 +110,10 @@ __XCHG_GEN(_mb)
}) })
/* xchg */ /* xchg */
#define xchg_relaxed(...) __xchg_wrapper( , __VA_ARGS__) #define arch_xchg_relaxed(...) __xchg_wrapper( , __VA_ARGS__)
#define xchg_acquire(...) __xchg_wrapper(_acq, __VA_ARGS__) #define arch_xchg_acquire(...) __xchg_wrapper(_acq, __VA_ARGS__)
#define xchg_release(...) __xchg_wrapper(_rel, __VA_ARGS__) #define arch_xchg_release(...) __xchg_wrapper(_rel, __VA_ARGS__)
#define xchg(...) __xchg_wrapper( _mb, __VA_ARGS__) #define arch_xchg(...) __xchg_wrapper( _mb, __VA_ARGS__)
#define __CMPXCHG_GEN(sfx) \ #define __CMPXCHG_GEN(sfx) \
static inline unsigned long __cmpxchg##sfx(volatile void *ptr, \ static inline unsigned long __cmpxchg##sfx(volatile void *ptr, \
...@@ -154,18 +154,18 @@ __CMPXCHG_GEN(_mb) ...@@ -154,18 +154,18 @@ __CMPXCHG_GEN(_mb)
}) })
/* cmpxchg */ /* cmpxchg */
#define cmpxchg_relaxed(...) __cmpxchg_wrapper( , __VA_ARGS__) #define arch_cmpxchg_relaxed(...) __cmpxchg_wrapper( , __VA_ARGS__)
#define cmpxchg_acquire(...) __cmpxchg_wrapper(_acq, __VA_ARGS__) #define arch_cmpxchg_acquire(...) __cmpxchg_wrapper(_acq, __VA_ARGS__)
#define cmpxchg_release(...) __cmpxchg_wrapper(_rel, __VA_ARGS__) #define arch_cmpxchg_release(...) __cmpxchg_wrapper(_rel, __VA_ARGS__)
#define cmpxchg(...) __cmpxchg_wrapper( _mb, __VA_ARGS__) #define arch_cmpxchg(...) __cmpxchg_wrapper( _mb, __VA_ARGS__)
#define cmpxchg_local cmpxchg_relaxed #define arch_cmpxchg_local arch_cmpxchg_relaxed
/* cmpxchg64 */ /* cmpxchg64 */
#define cmpxchg64_relaxed cmpxchg_relaxed #define arch_cmpxchg64_relaxed arch_cmpxchg_relaxed
#define cmpxchg64_acquire cmpxchg_acquire #define arch_cmpxchg64_acquire arch_cmpxchg_acquire
#define cmpxchg64_release cmpxchg_release #define arch_cmpxchg64_release arch_cmpxchg_release
#define cmpxchg64 cmpxchg #define arch_cmpxchg64 arch_cmpxchg
#define cmpxchg64_local cmpxchg_local #define arch_cmpxchg64_local arch_cmpxchg_local
/* cmpxchg_double */ /* cmpxchg_double */
#define system_has_cmpxchg_double() 1 #define system_has_cmpxchg_double() 1
...@@ -177,9 +177,9 @@ __CMPXCHG_GEN(_mb) ...@@ -177,9 +177,9 @@ __CMPXCHG_GEN(_mb)
VM_BUG_ON((unsigned long *)(ptr2) - (unsigned long *)(ptr1) != 1); \ VM_BUG_ON((unsigned long *)(ptr2) - (unsigned long *)(ptr1) != 1); \
}) })
#define cmpxchg_double(ptr1, ptr2, o1, o2, n1, n2) \ #define arch_cmpxchg_double(ptr1, ptr2, o1, o2, n1, n2) \
({\ ({ \
int __ret;\ int __ret; \
__cmpxchg_double_check(ptr1, ptr2); \ __cmpxchg_double_check(ptr1, ptr2); \
__ret = !__cmpxchg_double_mb((unsigned long)(o1), (unsigned long)(o2), \ __ret = !__cmpxchg_double_mb((unsigned long)(o1), (unsigned long)(o2), \
(unsigned long)(n1), (unsigned long)(n2), \ (unsigned long)(n1), (unsigned long)(n2), \
...@@ -187,9 +187,9 @@ __CMPXCHG_GEN(_mb) ...@@ -187,9 +187,9 @@ __CMPXCHG_GEN(_mb)
__ret; \ __ret; \
}) })
#define cmpxchg_double_local(ptr1, ptr2, o1, o2, n1, n2) \ #define arch_cmpxchg_double_local(ptr1, ptr2, o1, o2, n1, n2) \
({\ ({ \
int __ret;\ int __ret; \
__cmpxchg_double_check(ptr1, ptr2); \ __cmpxchg_double_check(ptr1, ptr2); \
__ret = !__cmpxchg_double((unsigned long)(o1), (unsigned long)(o2), \ __ret = !__cmpxchg_double((unsigned long)(o1), (unsigned long)(o2), \
(unsigned long)(n1), (unsigned long)(n2), \ (unsigned long)(n1), (unsigned long)(n2), \
......
@@ -22,6 +22,6 @@
 #define sync_test_and_clear_bit(nr, p)	test_and_clear_bit(nr, p)
 #define sync_test_and_change_bit(nr, p)	test_and_change_bit(nr, p)
 #define sync_test_bit(nr, addr)		test_bit(nr, addr)
-#define sync_cmpxchg			cmpxchg
+#define arch_sync_cmpxchg		arch_cmpxchg
 #endif
@@ -67,16 +67,30 @@ static __always_inline void refcount_dec(refcount_t *r)
 static __always_inline __must_check
 bool refcount_sub_and_test(unsigned int i, refcount_t *r)
 {
-	return GEN_BINARY_SUFFIXED_RMWcc(LOCK_PREFIX "subl",
+	bool ret = GEN_BINARY_SUFFIXED_RMWcc(LOCK_PREFIX "subl",
 					 REFCOUNT_CHECK_LT_ZERO,
 					 r->refs.counter, e, "er", i, "cx");
+	if (ret) {
+		smp_acquire__after_ctrl_dep();
+		return true;
+	}
+	return false;
 }
 static __always_inline __must_check bool refcount_dec_and_test(refcount_t *r)
 {
-	return GEN_UNARY_SUFFIXED_RMWcc(LOCK_PREFIX "decl",
+	bool ret = GEN_UNARY_SUFFIXED_RMWcc(LOCK_PREFIX "decl",
 					REFCOUNT_CHECK_LT_ZERO,
 					r->refs.counter, e, "cx");
+	if (ret) {
+		smp_acquire__after_ctrl_dep();
+		return true;
+	}
+	return false;
 }
 static __always_inline __must_check
......
...@@ -1058,7 +1058,7 @@ static int flock_lock_inode(struct inode *inode, struct file_lock *request) ...@@ -1058,7 +1058,7 @@ static int flock_lock_inode(struct inode *inode, struct file_lock *request)
return -ENOMEM; return -ENOMEM;
} }
percpu_down_read_preempt_disable(&file_rwsem); percpu_down_read(&file_rwsem);
spin_lock(&ctx->flc_lock); spin_lock(&ctx->flc_lock);
if (request->fl_flags & FL_ACCESS) if (request->fl_flags & FL_ACCESS)
goto find_conflict; goto find_conflict;
...@@ -1100,7 +1100,7 @@ static int flock_lock_inode(struct inode *inode, struct file_lock *request) ...@@ -1100,7 +1100,7 @@ static int flock_lock_inode(struct inode *inode, struct file_lock *request)
out: out:
spin_unlock(&ctx->flc_lock); spin_unlock(&ctx->flc_lock);
percpu_up_read_preempt_enable(&file_rwsem); percpu_up_read(&file_rwsem);
if (new_fl) if (new_fl)
locks_free_lock(new_fl); locks_free_lock(new_fl);
locks_dispose_list(&dispose); locks_dispose_list(&dispose);
...@@ -1138,7 +1138,7 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request, ...@@ -1138,7 +1138,7 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
new_fl2 = locks_alloc_lock(); new_fl2 = locks_alloc_lock();
} }
percpu_down_read_preempt_disable(&file_rwsem); percpu_down_read(&file_rwsem);
spin_lock(&ctx->flc_lock); spin_lock(&ctx->flc_lock);
/* /*
* New lock request. Walk all POSIX locks and look for conflicts. If * New lock request. Walk all POSIX locks and look for conflicts. If
...@@ -1312,7 +1312,7 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request, ...@@ -1312,7 +1312,7 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
} }
out: out:
spin_unlock(&ctx->flc_lock); spin_unlock(&ctx->flc_lock);
percpu_up_read_preempt_enable(&file_rwsem); percpu_up_read(&file_rwsem);
/* /*
* Free any unused locks. * Free any unused locks.
*/ */
...@@ -1584,7 +1584,7 @@ int __break_lease(struct inode *inode, unsigned int mode, unsigned int type) ...@@ -1584,7 +1584,7 @@ int __break_lease(struct inode *inode, unsigned int mode, unsigned int type)
return error; return error;
} }
percpu_down_read_preempt_disable(&file_rwsem); percpu_down_read(&file_rwsem);
spin_lock(&ctx->flc_lock); spin_lock(&ctx->flc_lock);
time_out_leases(inode, &dispose); time_out_leases(inode, &dispose);
...@@ -1636,13 +1636,13 @@ int __break_lease(struct inode *inode, unsigned int mode, unsigned int type) ...@@ -1636,13 +1636,13 @@ int __break_lease(struct inode *inode, unsigned int mode, unsigned int type)
locks_insert_block(fl, new_fl, leases_conflict); locks_insert_block(fl, new_fl, leases_conflict);
trace_break_lease_block(inode, new_fl); trace_break_lease_block(inode, new_fl);
spin_unlock(&ctx->flc_lock); spin_unlock(&ctx->flc_lock);
percpu_up_read_preempt_enable(&file_rwsem); percpu_up_read(&file_rwsem);
locks_dispose_list(&dispose); locks_dispose_list(&dispose);
error = wait_event_interruptible_timeout(new_fl->fl_wait, error = wait_event_interruptible_timeout(new_fl->fl_wait,
!new_fl->fl_blocker, break_time); !new_fl->fl_blocker, break_time);
percpu_down_read_preempt_disable(&file_rwsem); percpu_down_read(&file_rwsem);
spin_lock(&ctx->flc_lock); spin_lock(&ctx->flc_lock);
trace_break_lease_unblock(inode, new_fl); trace_break_lease_unblock(inode, new_fl);
locks_delete_block(new_fl); locks_delete_block(new_fl);
...@@ -1659,7 +1659,7 @@ int __break_lease(struct inode *inode, unsigned int mode, unsigned int type) ...@@ -1659,7 +1659,7 @@ int __break_lease(struct inode *inode, unsigned int mode, unsigned int type)
} }
out: out:
spin_unlock(&ctx->flc_lock); spin_unlock(&ctx->flc_lock);
percpu_up_read_preempt_enable(&file_rwsem); percpu_up_read(&file_rwsem);
locks_dispose_list(&dispose); locks_dispose_list(&dispose);
locks_free_lock(new_fl); locks_free_lock(new_fl);
return error; return error;
...@@ -1729,7 +1729,7 @@ int fcntl_getlease(struct file *filp) ...@@ -1729,7 +1729,7 @@ int fcntl_getlease(struct file *filp)
ctx = smp_load_acquire(&inode->i_flctx); ctx = smp_load_acquire(&inode->i_flctx);
if (ctx && !list_empty_careful(&ctx->flc_lease)) { if (ctx && !list_empty_careful(&ctx->flc_lease)) {
percpu_down_read_preempt_disable(&file_rwsem); percpu_down_read(&file_rwsem);
spin_lock(&ctx->flc_lock); spin_lock(&ctx->flc_lock);
time_out_leases(inode, &dispose); time_out_leases(inode, &dispose);
list_for_each_entry(fl, &ctx->flc_lease, fl_list) { list_for_each_entry(fl, &ctx->flc_lease, fl_list) {
...@@ -1739,7 +1739,7 @@ int fcntl_getlease(struct file *filp) ...@@ -1739,7 +1739,7 @@ int fcntl_getlease(struct file *filp)
break; break;
} }
spin_unlock(&ctx->flc_lock); spin_unlock(&ctx->flc_lock);
percpu_up_read_preempt_enable(&file_rwsem); percpu_up_read(&file_rwsem);
locks_dispose_list(&dispose); locks_dispose_list(&dispose);
} }
...@@ -1813,7 +1813,7 @@ generic_add_lease(struct file *filp, long arg, struct file_lock **flp, void **pr ...@@ -1813,7 +1813,7 @@ generic_add_lease(struct file *filp, long arg, struct file_lock **flp, void **pr
return -EINVAL; return -EINVAL;
} }
percpu_down_read_preempt_disable(&file_rwsem); percpu_down_read(&file_rwsem);
spin_lock(&ctx->flc_lock); spin_lock(&ctx->flc_lock);
time_out_leases(inode, &dispose); time_out_leases(inode, &dispose);
error = check_conflicting_open(dentry, arg, lease->fl_flags); error = check_conflicting_open(dentry, arg, lease->fl_flags);
...@@ -1884,7 +1884,7 @@ generic_add_lease(struct file *filp, long arg, struct file_lock **flp, void **pr ...@@ -1884,7 +1884,7 @@ generic_add_lease(struct file *filp, long arg, struct file_lock **flp, void **pr
lease->fl_lmops->lm_setup(lease, priv); lease->fl_lmops->lm_setup(lease, priv);
out: out:
spin_unlock(&ctx->flc_lock); spin_unlock(&ctx->flc_lock);
percpu_up_read_preempt_enable(&file_rwsem); percpu_up_read(&file_rwsem);
locks_dispose_list(&dispose); locks_dispose_list(&dispose);
if (is_deleg) if (is_deleg)
inode_unlock(inode); inode_unlock(inode);
...@@ -1907,7 +1907,7 @@ static int generic_delete_lease(struct file *filp, void *owner) ...@@ -1907,7 +1907,7 @@ static int generic_delete_lease(struct file *filp, void *owner)
return error; return error;
} }
percpu_down_read_preempt_disable(&file_rwsem); percpu_down_read(&file_rwsem);
spin_lock(&ctx->flc_lock); spin_lock(&ctx->flc_lock);
list_for_each_entry(fl, &ctx->flc_lease, fl_list) { list_for_each_entry(fl, &ctx->flc_lease, fl_list) {
if (fl->fl_file == filp && if (fl->fl_file == filp &&
...@@ -1920,7 +1920,7 @@ static int generic_delete_lease(struct file *filp, void *owner) ...@@ -1920,7 +1920,7 @@ static int generic_delete_lease(struct file *filp, void *owner)
if (victim) if (victim)
error = fl->fl_lmops->lm_change(victim, F_UNLCK, &dispose); error = fl->fl_lmops->lm_change(victim, F_UNLCK, &dispose);
spin_unlock(&ctx->flc_lock); spin_unlock(&ctx->flc_lock);
percpu_up_read_preempt_enable(&file_rwsem); percpu_up_read(&file_rwsem);
locks_dispose_list(&dispose); locks_dispose_list(&dispose);
return error; return error;
} }
...@@ -2643,13 +2643,13 @@ locks_remove_lease(struct file *filp, struct file_lock_context *ctx) ...@@ -2643,13 +2643,13 @@ locks_remove_lease(struct file *filp, struct file_lock_context *ctx)
if (list_empty(&ctx->flc_lease)) if (list_empty(&ctx->flc_lease))
return; return;
percpu_down_read_preempt_disable(&file_rwsem); percpu_down_read(&file_rwsem);
spin_lock(&ctx->flc_lock); spin_lock(&ctx->flc_lock);
list_for_each_entry_safe(fl, tmp, &ctx->flc_lease, fl_list) list_for_each_entry_safe(fl, tmp, &ctx->flc_lease, fl_list)
if (filp == fl->fl_file) if (filp == fl->fl_file)
lease_modify(fl, F_UNLCK, &dispose); lease_modify(fl, F_UNLCK, &dispose);
spin_unlock(&ctx->flc_lock); spin_unlock(&ctx->flc_lock);
percpu_up_read_preempt_enable(&file_rwsem); percpu_up_read(&file_rwsem);
locks_dispose_list(&dispose); locks_dispose_list(&dispose);
} }
......
@@ -46,16 +46,22 @@ extern int lock_stat;
 #define NR_LOCKDEP_CACHING_CLASSES	2
 /*
- * Lock-classes are keyed via unique addresses, by embedding the
- * lockclass-key into the kernel (or module) .data section. (For
- * static locks we use the lock address itself as the key.)
+ * A lockdep key is associated with each lock object. For static locks we use
+ * the lock address itself as the key. Dynamically allocated lock objects can
+ * have a statically or dynamically allocated key. Dynamically allocated lock
+ * keys must be registered before being used and must be unregistered before
+ * the key memory is freed.
  */
 struct lockdep_subclass_key {
 	char __one_byte;
 } __attribute__ ((__packed__));
+/* hash_entry is used to keep track of dynamically allocated keys. */
 struct lock_class_key {
+	union {
+		struct hlist_node		hash_entry;
 	struct lockdep_subclass_key	subkeys[MAX_LOCKDEP_SUBCLASSES];
+	};
 };
 extern struct lock_class_key __lockdep_no_validate__;
@@ -63,7 +69,8 @@ extern struct lock_class_key __lockdep_no_validate__;
 #define LOCKSTAT_POINTS		4
 /*
- * The lock-class itself:
+ * The lock-class itself. The order of the structure members matters.
+ * reinit_class() zeroes the key member and all subsequent members.
  */
 struct lock_class {
 	/*
@@ -72,10 +79,19 @@ struct lock_class {
 	struct hlist_node		hash_entry;
 	/*
-	 * global list of all lock-classes:
+	 * Entry in all_lock_classes when in use. Entry in free_lock_classes
+	 * when not in use. Instances that are being freed are on one of the
+	 * zapped_classes lists.
 	 */
 	struct list_head		lock_entry;
+	/*
+	 * These fields represent a directed graph of lock dependencies,
+	 * to every node we attach a list of "forward" and a list of
+	 * "backward" graph nodes.
+	 */
+	struct list_head		locks_after, locks_before;
 	struct lockdep_subclass_key	*key;
 	unsigned int			subclass;
 	unsigned int			dep_gen_id;
@@ -86,13 +102,6 @@ struct lock_class {
 	unsigned long			usage_mask;
 	struct stack_trace		usage_traces[XXX_LOCK_USAGE_STATES];
-	/*
-	 * These fields represent a directed graph of lock dependencies,
-	 * to every node we attach a list of "forward" and a list of
-	 * "backward" graph nodes.
-	 */
-	struct list_head		locks_after, locks_before;
 	/*
 	 * Generation counter, when doing certain classes of graph walking,
 	 * to ensure that we check one node only once:
@@ -104,7 +113,7 @@ struct lock_class {
 	unsigned long			contention_point[LOCKSTAT_POINTS];
 	unsigned long			contending_point[LOCKSTAT_POINTS];
 #endif
-};
+} __no_randomize_layout;
 #ifdef CONFIG_LOCK_STAT
 struct lock_time {
@@ -178,6 +187,7 @@ static inline void lockdep_copy_map(struct lockdep_map *to,
 struct lock_list {
 	struct list_head		entry;
 	struct lock_class		*class;
+	struct lock_class		*links_to;
 	struct stack_trace		trace;
 	int				distance;
@@ -264,10 +274,14 @@ extern void lockdep_reset(void);
 extern void lockdep_reset_lock(struct lockdep_map *lock);
 extern void lockdep_free_key_range(void *start, unsigned long size);
 extern asmlinkage void lockdep_sys_exit(void);
+extern void lockdep_set_selftest_task(struct task_struct *task);
 extern void lockdep_off(void);
 extern void lockdep_on(void);
+extern void lockdep_register_key(struct lock_class_key *key);
+extern void lockdep_unregister_key(struct lock_class_key *key);
 /*
  * These methods are used by specific locking variants (spinlocks,
  * rwlocks, mutexes and rwsems) to pass init/acquire/release events
@@ -394,6 +408,10 @@ static inline void lockdep_on(void)
 {
 }
+static inline void lockdep_set_selftest_task(struct task_struct *task)
+{
+}
 # define lock_acquire(l, s, t, r, c, n, i)	do { } while (0)
 # define lock_release(l, n, i)			do { } while (0)
 # define lock_downgrade(l, i)			do { } while (0)
@@ -425,6 +443,14 @@ static inline void lockdep_on(void)
  */
 struct lock_class_key { };
+static inline void lockdep_register_key(struct lock_class_key *key)
+{
+}
+static inline void lockdep_unregister_key(struct lock_class_key *key)
+{
+}
 /*
  * The lockdep_map takes no space if lockdep is disabled:
  */
......
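A hedged sketch of how the dynamic-key API declared above is intended to be used for a dynamically allocated lock object; struct foo and its helpers are illustrative names, and the workqueue conversion in this series is the real in-tree user.

struct foo {
	spinlock_t		lock;
	struct lock_class_key	key;	/* must outlive the lock */
};

static struct foo *foo_create(void)
{
	struct foo *f = kzalloc(sizeof(*f), GFP_KERNEL);

	if (!f)
		return NULL;
	lockdep_register_key(&f->key);		/* register before first use */
	spin_lock_init(&f->lock);
	lockdep_set_class(&f->lock, &f->key);
	return f;
}

static void foo_destroy(struct foo *f)
{
	lockdep_unregister_key(&f->key);	/* before the key memory is freed */
	kfree(f);
}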
@@ -29,7 +29,7 @@ static struct percpu_rw_semaphore name = { \
 extern int __percpu_down_read(struct percpu_rw_semaphore *, int);
 extern void __percpu_up_read(struct percpu_rw_semaphore *);
-static inline void percpu_down_read_preempt_disable(struct percpu_rw_semaphore *sem)
+static inline void percpu_down_read(struct percpu_rw_semaphore *sem)
 {
 	might_sleep();
@@ -47,16 +47,10 @@ static inline void percpu_down_read_preempt_disable(struct percpu_rw_semaphore *
 	__this_cpu_inc(*sem->read_count);
 	if (unlikely(!rcu_sync_is_idle(&sem->rss)))
 		__percpu_down_read(sem, false); /* Unconditional memory barrier */
-	barrier();
 	/*
-	 * The barrier() prevents the compiler from
+	 * The preempt_enable() prevents the compiler from
 	 * bleeding the critical section out.
 	 */
-}
-static inline void percpu_down_read(struct percpu_rw_semaphore *sem)
-{
-	percpu_down_read_preempt_disable(sem);
 	preempt_enable();
 }
@@ -83,13 +77,9 @@ static inline int percpu_down_read_trylock(struct percpu_rw_semaphore *sem)
 	return ret;
 }
-static inline void percpu_up_read_preempt_enable(struct percpu_rw_semaphore *sem)
+static inline void percpu_up_read(struct percpu_rw_semaphore *sem)
 {
-	/*
-	 * The barrier() prevents the compiler from
-	 * bleeding the critical section out.
-	 */
-	barrier();
+	preempt_disable();
 	/*
 	 * Same as in percpu_down_read().
 	 */
@@ -102,12 +92,6 @@ static inline void percpu_up_read_preempt_enable(struct percpu_rw_semaphore *sem
 	rwsem_release(&sem->rw_sem.dep_map, 1, _RET_IP_);
 }
-static inline void percpu_up_read(struct percpu_rw_semaphore *sem)
-{
-	preempt_disable();
-	percpu_up_read_preempt_enable(sem);
-}
 extern void percpu_down_write(struct percpu_rw_semaphore *);
 extern void percpu_up_write(struct percpu_rw_semaphore *);
......
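For context, the reader side after this change looks as follows; the fs/locks.c hunks above are the in-tree conversion, and the semaphore name here is illustrative. Readers are preemptible again, so the *_preempt_disable()/*_preempt_enable() pairing disappears from callers.

DEFINE_STATIC_PERCPU_RWSEM(demo_rwsem);		/* illustrative */

static void demo_reader(void)
{
	percpu_down_read(&demo_rwsem);
	/* ... read-side section, may block ... */
	percpu_up_read(&demo_rwsem);
}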
@@ -51,8 +51,8 @@ static inline void wake_q_init(struct wake_q_head *head)
 	head->lastp = &head->first;
 }
-extern void wake_q_add(struct wake_q_head *head,
-		       struct task_struct *task);
+extern void wake_q_add(struct wake_q_head *head, struct task_struct *task);
+extern void wake_q_add_safe(struct wake_q_head *head, struct task_struct *task);
 extern void wake_up_q(struct wake_q_head *head);
 #endif /* _LINUX_SCHED_WAKE_Q_H */
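A hedged sketch of the difference between the two helpers declared above: wake_q_add() takes its own reference on the task, while wake_q_add_safe() consumes a reference the caller already holds, which is why the futex and rwsem hunks below can drop their explicit put_task_struct(). The demo_* function names are illustrative.

/* Old pattern: the queue takes its own reference. */
static void demo_queue_old(struct wake_q_head *wake_q, struct task_struct *p)
{
	wake_q_add(wake_q, p);
	put_task_struct(p);	/* drop the reference the caller held */
}

/* New pattern: hand the caller's reference over to the queue. */
static void demo_queue_new(struct wake_q_head *wake_q, struct task_struct *p)
{
	wake_q_add_safe(wake_q, p);
}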
@@ -390,43 +390,23 @@ extern struct workqueue_struct *system_freezable_wq;
 extern struct workqueue_struct *system_power_efficient_wq;
 extern struct workqueue_struct *system_freezable_power_efficient_wq;
-extern struct workqueue_struct *
-__alloc_workqueue_key(const char *fmt, unsigned int flags, int max_active,
-	struct lock_class_key *key, const char *lock_name, ...) __printf(1, 6);
 /**
  * alloc_workqueue - allocate a workqueue
  * @fmt: printf format for the name of the workqueue
  * @flags: WQ_* flags
  * @max_active: max in-flight work items, 0 for default
- * @args...: args for @fmt
+ * remaining args: args for @fmt
  *
  * Allocate a workqueue with the specified parameters. For detailed
  * information on WQ_* flags, please refer to
  * Documentation/core-api/workqueue.rst.
  *
- * The __lock_name macro dance is to guarantee that single lock_class_key
- * doesn't end up with different namesm, which isn't allowed by lockdep.
- *
 * RETURNS:
 * Pointer to the allocated workqueue on success, %NULL on failure.
 */
-#ifdef CONFIG_LOCKDEP
-#define alloc_workqueue(fmt, flags, max_active, args...)		\
-({									\
-	static struct lock_class_key __key;				\
-	const char *__lock_name;					\
-									\
-	__lock_name = "(wq_completion)"#fmt#args;			\
-									\
-	__alloc_workqueue_key((fmt), (flags), (max_active),		\
-			      &__key, __lock_name, ##args);		\
-})
-#else
-#define alloc_workqueue(fmt, flags, max_active, args...)		\
-	__alloc_workqueue_key((fmt), (flags), (max_active),		\
-			      NULL, NULL, ##args)
-#endif
+struct workqueue_struct *alloc_workqueue(const char *fmt,
+					 unsigned int flags,
+					 int max_active, ...);
 /**
  * alloc_ordered_workqueue - allocate an ordered workqueue
......
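Callers are unchanged by the macro-to-function conversion above; a minimal sketch follows, with an illustrative workqueue name and flag choice. The per-callsite static lock_class_key is gone because the workqueue code now registers a dynamic lockdep key per workqueue.

static struct workqueue_struct *demo_wq;	/* illustrative */

static int __init demo_init(void)
{
	demo_wq = alloc_workqueue("demo_wq", WQ_UNBOUND | WQ_FREEZABLE, 0);
	if (!demo_wq)
		return -ENOMEM;
	return 0;
}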
...@@ -313,6 +313,15 @@ void cpus_write_unlock(void) ...@@ -313,6 +313,15 @@ void cpus_write_unlock(void)
void lockdep_assert_cpus_held(void) void lockdep_assert_cpus_held(void)
{ {
/*
* We can't have hotplug operations before userspace starts running,
* and some init codepaths will knowingly not take the hotplug lock.
* This is all valid, so mute lockdep until it makes sense to report
* unheld locks.
*/
if (system_state < SYSTEM_RUNNING)
return;
percpu_rwsem_assert_held(&cpu_hotplug_lock); percpu_rwsem_assert_held(&cpu_hotplug_lock);
} }
......
...@@ -68,6 +68,7 @@ ...@@ -68,6 +68,7 @@
#include <linux/freezer.h> #include <linux/freezer.h>
#include <linux/memblock.h> #include <linux/memblock.h>
#include <linux/fault-inject.h> #include <linux/fault-inject.h>
#include <linux/refcount.h>
#include <asm/futex.h> #include <asm/futex.h>
...@@ -212,7 +213,7 @@ struct futex_pi_state { ...@@ -212,7 +213,7 @@ struct futex_pi_state {
struct rt_mutex pi_mutex; struct rt_mutex pi_mutex;
struct task_struct *owner; struct task_struct *owner;
atomic_t refcount; refcount_t refcount;
union futex_key key; union futex_key key;
} __randomize_layout; } __randomize_layout;
...@@ -321,12 +322,8 @@ static int __init fail_futex_debugfs(void) ...@@ -321,12 +322,8 @@ static int __init fail_futex_debugfs(void)
if (IS_ERR(dir)) if (IS_ERR(dir))
return PTR_ERR(dir); return PTR_ERR(dir);
if (!debugfs_create_bool("ignore-private", mode, dir, debugfs_create_bool("ignore-private", mode, dir,
&fail_futex.ignore_private)) { &fail_futex.ignore_private);
debugfs_remove_recursive(dir);
return -ENOMEM;
}
return 0; return 0;
} }
...@@ -803,7 +800,7 @@ static int refill_pi_state_cache(void) ...@@ -803,7 +800,7 @@ static int refill_pi_state_cache(void)
INIT_LIST_HEAD(&pi_state->list); INIT_LIST_HEAD(&pi_state->list);
/* pi_mutex gets initialized later */ /* pi_mutex gets initialized later */
pi_state->owner = NULL; pi_state->owner = NULL;
atomic_set(&pi_state->refcount, 1); refcount_set(&pi_state->refcount, 1);
pi_state->key = FUTEX_KEY_INIT; pi_state->key = FUTEX_KEY_INIT;
current->pi_state_cache = pi_state; current->pi_state_cache = pi_state;
...@@ -823,7 +820,7 @@ static struct futex_pi_state *alloc_pi_state(void) ...@@ -823,7 +820,7 @@ static struct futex_pi_state *alloc_pi_state(void)
static void get_pi_state(struct futex_pi_state *pi_state) static void get_pi_state(struct futex_pi_state *pi_state)
{ {
WARN_ON_ONCE(!atomic_inc_not_zero(&pi_state->refcount)); WARN_ON_ONCE(!refcount_inc_not_zero(&pi_state->refcount));
} }
/* /*
...@@ -835,7 +832,7 @@ static void put_pi_state(struct futex_pi_state *pi_state) ...@@ -835,7 +832,7 @@ static void put_pi_state(struct futex_pi_state *pi_state)
if (!pi_state) if (!pi_state)
return; return;
if (!atomic_dec_and_test(&pi_state->refcount)) if (!refcount_dec_and_test(&pi_state->refcount))
return; return;
/* /*
...@@ -865,7 +862,7 @@ static void put_pi_state(struct futex_pi_state *pi_state) ...@@ -865,7 +862,7 @@ static void put_pi_state(struct futex_pi_state *pi_state)
* refcount is at 0 - put it back to 1. * refcount is at 0 - put it back to 1.
*/ */
pi_state->owner = NULL; pi_state->owner = NULL;
atomic_set(&pi_state->refcount, 1); refcount_set(&pi_state->refcount, 1);
current->pi_state_cache = pi_state; current->pi_state_cache = pi_state;
} }
} }
...@@ -908,7 +905,7 @@ void exit_pi_state_list(struct task_struct *curr) ...@@ -908,7 +905,7 @@ void exit_pi_state_list(struct task_struct *curr)
* In that case; drop the locks to let put_pi_state() make * In that case; drop the locks to let put_pi_state() make
* progress and retry the loop. * progress and retry the loop.
*/ */
if (!atomic_inc_not_zero(&pi_state->refcount)) { if (!refcount_inc_not_zero(&pi_state->refcount)) {
raw_spin_unlock_irq(&curr->pi_lock); raw_spin_unlock_irq(&curr->pi_lock);
cpu_relax(); cpu_relax();
raw_spin_lock_irq(&curr->pi_lock); raw_spin_lock_irq(&curr->pi_lock);
...@@ -1064,7 +1061,7 @@ static int attach_to_pi_state(u32 __user *uaddr, u32 uval, ...@@ -1064,7 +1061,7 @@ static int attach_to_pi_state(u32 __user *uaddr, u32 uval,
* and futex_wait_requeue_pi() as it cannot go to 0 and consequently * and futex_wait_requeue_pi() as it cannot go to 0 and consequently
* free pi_state before we can take a reference ourselves. * free pi_state before we can take a reference ourselves.
*/ */
WARN_ON(!atomic_read(&pi_state->refcount)); WARN_ON(!refcount_read(&pi_state->refcount));
/* /*
* Now that we have a pi_state, we can acquire wait_lock * Now that we have a pi_state, we can acquire wait_lock
...@@ -1467,8 +1464,7 @@ static void mark_wake_futex(struct wake_q_head *wake_q, struct futex_q *q) ...@@ -1467,8 +1464,7 @@ static void mark_wake_futex(struct wake_q_head *wake_q, struct futex_q *q)
* Queue the task for later wakeup for after we've released * Queue the task for later wakeup for after we've released
* the hb->lock. wake_q_add() grabs reference to p. * the hb->lock. wake_q_add() grabs reference to p.
*/ */
wake_q_add(wake_q, p); wake_q_add_safe(wake_q, p);
put_task_struct(p);
} }
/* /*
......
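The exit_pi_state_list() hunk above depends on refcount_inc_not_zero() refusing to resurrect an object whose count has already hit zero. A hedged sketch of that lookup-under-lock pattern, with hypothetical names standing in for the futex structures:

static DEFINE_SPINLOCK(demo_lock);

struct demo_state {
	struct list_head	node;
	refcount_t		refcount;
};

static struct demo_state *demo_get_live(struct list_head *head)
{
	struct demo_state *s;

	spin_lock(&demo_lock);
	list_for_each_entry(s, head, node) {
		/* Fails once the object is already being torn down. */
		if (refcount_inc_not_zero(&s->refcount)) {
			spin_unlock(&demo_lock);
			return s;	/* caller now owns a reference */
		}
	}
	spin_unlock(&demo_lock);
	return NULL;
}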
...@@ -22,6 +22,10 @@ enum lock_usage_bit { ...@@ -22,6 +22,10 @@ enum lock_usage_bit {
LOCK_USAGE_STATES LOCK_USAGE_STATES
}; };
#define LOCK_USAGE_READ_MASK 1
#define LOCK_USAGE_DIR_MASK 2
#define LOCK_USAGE_STATE_MASK (~(LOCK_USAGE_READ_MASK | LOCK_USAGE_DIR_MASK))
/* /*
* Usage-state bitmasks: * Usage-state bitmasks:
*/ */
...@@ -96,7 +100,8 @@ struct lock_class *lock_chain_get_class(struct lock_chain *chain, int i); ...@@ -96,7 +100,8 @@ struct lock_class *lock_chain_get_class(struct lock_chain *chain, int i);
extern unsigned long nr_lock_classes; extern unsigned long nr_lock_classes;
extern unsigned long nr_list_entries; extern unsigned long nr_list_entries;
extern unsigned long nr_lock_chains; long lockdep_next_lockchain(long i);
unsigned long lock_chain_count(void);
extern int nr_chain_hlocks; extern int nr_chain_hlocks;
extern unsigned long nr_stack_trace_entries; extern unsigned long nr_stack_trace_entries;
......
...@@ -104,18 +104,18 @@ static const struct seq_operations lockdep_ops = { ...@@ -104,18 +104,18 @@ static const struct seq_operations lockdep_ops = {
#ifdef CONFIG_PROVE_LOCKING #ifdef CONFIG_PROVE_LOCKING
static void *lc_start(struct seq_file *m, loff_t *pos) static void *lc_start(struct seq_file *m, loff_t *pos)
{ {
if (*pos < 0)
return NULL;
if (*pos == 0) if (*pos == 0)
return SEQ_START_TOKEN; return SEQ_START_TOKEN;
if (*pos - 1 < nr_lock_chains)
return lock_chains + (*pos - 1); return lock_chains + (*pos - 1);
return NULL;
} }
static void *lc_next(struct seq_file *m, void *v, loff_t *pos) static void *lc_next(struct seq_file *m, void *v, loff_t *pos)
{ {
(*pos)++; *pos = lockdep_next_lockchain(*pos - 1) + 1;
return lc_start(m, pos); return lc_start(m, pos);
} }
...@@ -268,7 +268,7 @@ static int lockdep_stats_show(struct seq_file *m, void *v) ...@@ -268,7 +268,7 @@ static int lockdep_stats_show(struct seq_file *m, void *v)
#ifdef CONFIG_PROVE_LOCKING #ifdef CONFIG_PROVE_LOCKING
seq_printf(m, " dependency chains: %11lu [max: %lu]\n", seq_printf(m, " dependency chains: %11lu [max: %lu]\n",
nr_lock_chains, MAX_LOCKDEP_CHAINS); lock_chain_count(), MAX_LOCKDEP_CHAINS);
seq_printf(m, " dependency chain hlocks: %11d [max: %lu]\n", seq_printf(m, " dependency chain hlocks: %11d [max: %lu]\n",
nr_chain_hlocks, MAX_LOCKDEP_CHAIN_HLOCKS); nr_chain_hlocks, MAX_LOCKDEP_CHAIN_HLOCKS);
#endif #endif
......
...@@ -124,9 +124,6 @@ static inline __pure u32 encode_tail(int cpu, int idx) ...@@ -124,9 +124,6 @@ static inline __pure u32 encode_tail(int cpu, int idx)
{ {
u32 tail; u32 tail;
#ifdef CONFIG_DEBUG_SPINLOCK
BUG_ON(idx > 3);
#endif
tail = (cpu + 1) << _Q_TAIL_CPU_OFFSET; tail = (cpu + 1) << _Q_TAIL_CPU_OFFSET;
tail |= idx << _Q_TAIL_IDX_OFFSET; /* assume < 4 */ tail |= idx << _Q_TAIL_IDX_OFFSET; /* assume < 4 */
...@@ -412,12 +409,28 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val) ...@@ -412,12 +409,28 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
idx = node->count++; idx = node->count++;
tail = encode_tail(smp_processor_id(), idx); tail = encode_tail(smp_processor_id(), idx);
/*
* 4 nodes are allocated based on the assumption that there will
* not be nested NMIs taking spinlocks. That may not be true in
* some architectures even though the chance of needing more than
* 4 nodes will still be extremely unlikely. When that happens,
* we fall back to spinning on the lock directly without using
* any MCS node. This is not the most elegant solution, but is
* simple enough.
*/
if (unlikely(idx >= MAX_NODES)) {
qstat_inc(qstat_lock_no_node, true);
while (!queued_spin_trylock(lock))
cpu_relax();
goto release;
}
node = grab_mcs_node(node, idx); node = grab_mcs_node(node, idx);
/* /*
* Keep counts of non-zero index values: * Keep counts of non-zero index values:
*/ */
qstat_inc(qstat_lock_idx1 + idx - 1, idx); qstat_inc(qstat_lock_use_node2 + idx - 1, idx);
/* /*
* Ensure that we increment the head node->count before initialising * Ensure that we increment the head node->count before initialising
......
...@@ -30,6 +30,13 @@ ...@@ -30,6 +30,13 @@
* pv_wait_node - # of vCPU wait's at a non-head queue node * pv_wait_node - # of vCPU wait's at a non-head queue node
* lock_pending - # of locking operations via pending code * lock_pending - # of locking operations via pending code
* lock_slowpath - # of locking operations via MCS lock queue * lock_slowpath - # of locking operations via MCS lock queue
* lock_use_node2 - # of locking operations that use 2nd per-CPU node
* lock_use_node3 - # of locking operations that use 3rd per-CPU node
* lock_use_node4 - # of locking operations that use 4th per-CPU node
* lock_no_node - # of locking operations without using per-CPU node
*
* Subtracting lock_use_node[234] from lock_slowpath will give you
* lock_use_node1.
* *
* Writing to the "reset_counters" file will reset all the above counter * Writing to the "reset_counters" file will reset all the above counter
* values. * values.
...@@ -55,9 +62,10 @@ enum qlock_stats { ...@@ -55,9 +62,10 @@ enum qlock_stats {
qstat_pv_wait_node, qstat_pv_wait_node,
qstat_lock_pending, qstat_lock_pending,
qstat_lock_slowpath, qstat_lock_slowpath,
qstat_lock_idx1, qstat_lock_use_node2,
qstat_lock_idx2, qstat_lock_use_node3,
qstat_lock_idx3, qstat_lock_use_node4,
qstat_lock_no_node,
qstat_num, /* Total number of statistical counters */ qstat_num, /* Total number of statistical counters */
qstat_reset_cnts = qstat_num, qstat_reset_cnts = qstat_num,
}; };
...@@ -85,9 +93,10 @@ static const char * const qstat_names[qstat_num + 1] = { ...@@ -85,9 +93,10 @@ static const char * const qstat_names[qstat_num + 1] = {
[qstat_pv_wait_node] = "pv_wait_node", [qstat_pv_wait_node] = "pv_wait_node",
[qstat_lock_pending] = "lock_pending", [qstat_lock_pending] = "lock_pending",
[qstat_lock_slowpath] = "lock_slowpath", [qstat_lock_slowpath] = "lock_slowpath",
[qstat_lock_idx1] = "lock_index1", [qstat_lock_use_node2] = "lock_use_node2",
[qstat_lock_idx2] = "lock_index2", [qstat_lock_use_node3] = "lock_use_node3",
[qstat_lock_idx3] = "lock_index3", [qstat_lock_use_node4] = "lock_use_node4",
[qstat_lock_no_node] = "lock_no_node",
[qstat_reset_cnts] = "reset_counters", [qstat_reset_cnts] = "reset_counters",
}; };
......
...@@ -211,9 +211,7 @@ static void __rwsem_mark_wake(struct rw_semaphore *sem, ...@@ -211,9 +211,7 @@ static void __rwsem_mark_wake(struct rw_semaphore *sem,
* Ensure issuing the wakeup (either by us or someone else) * Ensure issuing the wakeup (either by us or someone else)
* after setting the reader waiter to nil. * after setting the reader waiter to nil.
*/ */
wake_q_add(wake_q, tsk); wake_q_add_safe(wake_q, tsk);
/* wake_q_add() already take the task ref */
put_task_struct(tsk);
} }
adjustment = woken * RWSEM_ACTIVE_READ_BIAS - adjustment; adjustment = woken * RWSEM_ACTIVE_READ_BIAS - adjustment;
......
...@@ -396,19 +396,7 @@ static bool set_nr_if_polling(struct task_struct *p) ...@@ -396,19 +396,7 @@ static bool set_nr_if_polling(struct task_struct *p)
#endif #endif
#endif #endif
/** static bool __wake_q_add(struct wake_q_head *head, struct task_struct *task)
* wake_q_add() - queue a wakeup for 'later' waking.
* @head: the wake_q_head to add @task to
* @task: the task to queue for 'later' wakeup
*
* Queue a task for later wakeup, most likely by the wake_up_q() call in the
* same context, _HOWEVER_ this is not guaranteed, the wakeup can come
* instantly.
*
* This function must be used as-if it were wake_up_process(); IOW the task
* must be ready to be woken at this location.
*/
void wake_q_add(struct wake_q_head *head, struct task_struct *task)
{ {
struct wake_q_node *node = &task->wake_q; struct wake_q_node *node = &task->wake_q;
...@@ -421,16 +409,56 @@ void wake_q_add(struct wake_q_head *head, struct task_struct *task) ...@@ -421,16 +409,56 @@ void wake_q_add(struct wake_q_head *head, struct task_struct *task)
* state, even in the failed case, an explicit smp_mb() must be used. * state, even in the failed case, an explicit smp_mb() must be used.
*/ */
smp_mb__before_atomic(); smp_mb__before_atomic();
if (cmpxchg_relaxed(&node->next, NULL, WAKE_Q_TAIL)) if (unlikely(cmpxchg_relaxed(&node->next, NULL, WAKE_Q_TAIL)))
return; return false;
get_task_struct(task);
/* /*
* The head is context local, there can be no concurrency. * The head is context local, there can be no concurrency.
*/ */
*head->lastp = node; *head->lastp = node;
head->lastp = &node->next; head->lastp = &node->next;
return true;
}
/**
* wake_q_add() - queue a wakeup for 'later' waking.
* @head: the wake_q_head to add @task to
* @task: the task to queue for 'later' wakeup
*
* Queue a task for later wakeup, most likely by the wake_up_q() call in the
* same context, _HOWEVER_ this is not guaranteed, the wakeup can come
* instantly.
*
* This function must be used as-if it were wake_up_process(); IOW the task
* must be ready to be woken at this location.
*/
void wake_q_add(struct wake_q_head *head, struct task_struct *task)
{
if (__wake_q_add(head, task))
get_task_struct(task);
}
/**
* wake_q_add_safe() - safely queue a wakeup for 'later' waking.
* @head: the wake_q_head to add @task to
* @task: the task to queue for 'later' wakeup
*
* Queue a task for later wakeup, most likely by the wake_up_q() call in the
* same context, _HOWEVER_ this is not guaranteed, the wakeup can come
* instantly.
*
* This function must be used as-if it were wake_up_process(); IOW the task
* must be ready to be woken at this location.
*
* This function is essentially a task-safe equivalent to wake_q_add(). Callers
* that already hold a reference to @task can call the 'safe' version and trust
* wake_q to do the right thing depending on whether or not the @task is already
* queued for wakeup.
*/
void wake_q_add_safe(struct wake_q_head *head, struct task_struct *task)
{
if (!__wake_q_add(head, task))
put_task_struct(task);
} }
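For reference, a minimal sketch of the caller-side difference (the __rwsem_mark_wake() hunk above is the in-tree example of this conversion; tsk and wake_q are assumed to exist in the caller):

	/*
	 * Before: pin the task, let wake_q_add() take its own reference,
	 * then drop the caller's pin again.
	 */
	get_task_struct(tsk);
	wake_q_add(wake_q, tsk);
	put_task_struct(tsk);

	/*
	 * After: a caller that already holds a reference hands it over;
	 * wake_q_add_safe() consumes exactly that reference whether the
	 * task gets queued here or was already queued by someone else.
	 */
	get_task_struct(tsk);
	wake_q_add_safe(wake_q, tsk);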
void wake_up_q(struct wake_q_head *head) void wake_up_q(struct wake_q_head *head)
...@@ -5866,14 +5894,11 @@ void __init sched_init_smp(void) ...@@ -5866,14 +5894,11 @@ void __init sched_init_smp(void)
/* /*
* There's no userspace yet to cause hotplug operations; hence all the * There's no userspace yet to cause hotplug operations; hence all the
* CPU masks are stable and all blatant races in the below code cannot * CPU masks are stable and all blatant races in the below code cannot
* happen. The hotplug lock is nevertheless taken to satisfy lockdep, * happen.
* but there won't be any contention on it.
*/ */
cpus_read_lock();
mutex_lock(&sched_domains_mutex); mutex_lock(&sched_domains_mutex);
sched_init_domains(cpu_active_mask); sched_init_domains(cpu_active_mask);
mutex_unlock(&sched_domains_mutex); mutex_unlock(&sched_domains_mutex);
cpus_read_unlock();
/* Move init over to a non-isolated CPU */ /* Move init over to a non-isolated CPU */
if (set_cpus_allowed_ptr(current, housekeeping_cpumask(HK_FLAG_DOMAIN)) < 0) if (set_cpus_allowed_ptr(current, housekeeping_cpumask(HK_FLAG_DOMAIN)) < 0)
......
...@@ -259,6 +259,8 @@ struct workqueue_struct { ...@@ -259,6 +259,8 @@ struct workqueue_struct {
struct wq_device *wq_dev; /* I: for sysfs interface */ struct wq_device *wq_dev; /* I: for sysfs interface */
#endif #endif
#ifdef CONFIG_LOCKDEP #ifdef CONFIG_LOCKDEP
char *lock_name;
struct lock_class_key key;
struct lockdep_map lockdep_map; struct lockdep_map lockdep_map;
#endif #endif
char name[WQ_NAME_LEN]; /* I: workqueue name */ char name[WQ_NAME_LEN]; /* I: workqueue name */
...@@ -3337,11 +3339,49 @@ static int init_worker_pool(struct worker_pool *pool) ...@@ -3337,11 +3339,49 @@ static int init_worker_pool(struct worker_pool *pool)
return 0; return 0;
} }
#ifdef CONFIG_LOCKDEP
static void wq_init_lockdep(struct workqueue_struct *wq)
{
char *lock_name;
lockdep_register_key(&wq->key);
lock_name = kasprintf(GFP_KERNEL, "%s%s", "(wq_completion)", wq->name);
if (!lock_name)
lock_name = wq->name;
wq->lock_name = lock_name;
lockdep_init_map(&wq->lockdep_map, lock_name, &wq->key, 0);
}
static void wq_unregister_lockdep(struct workqueue_struct *wq)
{
lockdep_unregister_key(&wq->key);
}
static void wq_free_lockdep(struct workqueue_struct *wq)
{
if (wq->lock_name != wq->name)
kfree(wq->lock_name);
}
#else
static void wq_init_lockdep(struct workqueue_struct *wq)
{
}
static void wq_unregister_lockdep(struct workqueue_struct *wq)
{
}
static void wq_free_lockdep(struct workqueue_struct *wq)
{
}
#endif
static void rcu_free_wq(struct rcu_head *rcu) static void rcu_free_wq(struct rcu_head *rcu)
{ {
struct workqueue_struct *wq = struct workqueue_struct *wq =
container_of(rcu, struct workqueue_struct, rcu); container_of(rcu, struct workqueue_struct, rcu);
wq_free_lockdep(wq);
if (!(wq->flags & WQ_UNBOUND)) if (!(wq->flags & WQ_UNBOUND))
free_percpu(wq->cpu_pwqs); free_percpu(wq->cpu_pwqs);
else else
...@@ -3532,8 +3572,10 @@ static void pwq_unbound_release_workfn(struct work_struct *work) ...@@ -3532,8 +3572,10 @@ static void pwq_unbound_release_workfn(struct work_struct *work)
* If we're the last pwq going away, @wq is already dead and no one * If we're the last pwq going away, @wq is already dead and no one
* is gonna access it anymore. Schedule RCU free. * is gonna access it anymore. Schedule RCU free.
*/ */
if (is_last) if (is_last) {
wq_unregister_lockdep(wq);
call_rcu(&wq->rcu, rcu_free_wq); call_rcu(&wq->rcu, rcu_free_wq);
}
} }
/** /**
...@@ -4067,11 +4109,9 @@ static int init_rescuer(struct workqueue_struct *wq) ...@@ -4067,11 +4109,9 @@ static int init_rescuer(struct workqueue_struct *wq)
return 0; return 0;
} }
struct workqueue_struct *__alloc_workqueue_key(const char *fmt, struct workqueue_struct *alloc_workqueue(const char *fmt,
unsigned int flags, unsigned int flags,
int max_active, int max_active, ...)
struct lock_class_key *key,
const char *lock_name, ...)
{ {
size_t tbl_size = 0; size_t tbl_size = 0;
va_list args; va_list args;
...@@ -4106,7 +4146,7 @@ struct workqueue_struct *__alloc_workqueue_key(const char *fmt, ...@@ -4106,7 +4146,7 @@ struct workqueue_struct *__alloc_workqueue_key(const char *fmt,
goto err_free_wq; goto err_free_wq;
} }
va_start(args, lock_name); va_start(args, max_active);
vsnprintf(wq->name, sizeof(wq->name), fmt, args); vsnprintf(wq->name, sizeof(wq->name), fmt, args);
va_end(args); va_end(args);
...@@ -4123,7 +4163,7 @@ struct workqueue_struct *__alloc_workqueue_key(const char *fmt, ...@@ -4123,7 +4163,7 @@ struct workqueue_struct *__alloc_workqueue_key(const char *fmt,
INIT_LIST_HEAD(&wq->flusher_overflow); INIT_LIST_HEAD(&wq->flusher_overflow);
INIT_LIST_HEAD(&wq->maydays); INIT_LIST_HEAD(&wq->maydays);
lockdep_init_map(&wq->lockdep_map, lock_name, key, 0); wq_init_lockdep(wq);
INIT_LIST_HEAD(&wq->list); INIT_LIST_HEAD(&wq->list);
if (alloc_and_link_pwqs(wq) < 0) if (alloc_and_link_pwqs(wq) < 0)
...@@ -4161,7 +4201,7 @@ struct workqueue_struct *__alloc_workqueue_key(const char *fmt, ...@@ -4161,7 +4201,7 @@ struct workqueue_struct *__alloc_workqueue_key(const char *fmt,
destroy_workqueue(wq); destroy_workqueue(wq);
return NULL; return NULL;
} }
EXPORT_SYMBOL_GPL(__alloc_workqueue_key); EXPORT_SYMBOL_GPL(alloc_workqueue);
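Callers are unaffected by this change: alloc_workqueue() was previously a macro expanding to __alloc_workqueue_key() with a static lock_class_key per call site, and is now an exported function whose lockdep key is registered dynamically per workqueue. A hedged sketch of the caller-visible behaviour (example_create_wq() and its arguments are illustrative):

static struct workqueue_struct *example_create_wq(int id)
{
	/*
	 * Same call as before the patch.  Behind it, wq_init_lockdep()
	 * now registers a lock_class_key owned by this workqueue, so two
	 * workqueues created from this one call site get distinct lockdep
	 * classes, and the key is released again via
	 * wq_unregister_lockdep()/wq_free_lockdep() on destruction.
	 */
	return alloc_workqueue("example_wq_%d",
			       WQ_UNBOUND | WQ_MEM_RECLAIM, 0, id);
}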
/** /**
* destroy_workqueue - safely terminate a workqueue * destroy_workqueue - safely terminate a workqueue
...@@ -4214,6 +4254,7 @@ void destroy_workqueue(struct workqueue_struct *wq) ...@@ -4214,6 +4254,7 @@ void destroy_workqueue(struct workqueue_struct *wq)
kthread_stop(wq->rescuer->task); kthread_stop(wq->rescuer->task);
if (!(wq->flags & WQ_UNBOUND)) { if (!(wq->flags & WQ_UNBOUND)) {
wq_unregister_lockdep(wq);
/* /*
* The base ref is never dropped on per-cpu pwqs. Directly * The base ref is never dropped on per-cpu pwqs. Directly
* schedule RCU free. * schedule RCU free.
......
...@@ -1989,6 +1989,7 @@ void locking_selftest(void) ...@@ -1989,6 +1989,7 @@ void locking_selftest(void)
init_shared_classes(); init_shared_classes();
debug_locks_silent = !debug_locks_verbose; debug_locks_silent = !debug_locks_verbose;
lockdep_set_selftest_task(current);
DO_TESTCASE_6R("A-A deadlock", AA); DO_TESTCASE_6R("A-A deadlock", AA);
DO_TESTCASE_6R("A-B-B-A deadlock", ABBA); DO_TESTCASE_6R("A-B-B-A deadlock", ABBA);
...@@ -2097,5 +2098,6 @@ void locking_selftest(void) ...@@ -2097,5 +2098,6 @@ void locking_selftest(void)
printk("---------------------------------\n"); printk("---------------------------------\n");
debug_locks = 1; debug_locks = 1;
} }
lockdep_set_selftest_task(NULL);
debug_locks_silent = 0; debug_locks_silent = 0;
} }
...@@ -33,6 +33,9 @@ ...@@ -33,6 +33,9 @@
* Note that the allocator is responsible for ordering things between free() * Note that the allocator is responsible for ordering things between free()
* and alloc(). * and alloc().
* *
* The decrements dec_and_test() and sub_and_test() also provide acquire
* ordering on success.
*
*/ */
#include <linux/mutex.h> #include <linux/mutex.h>
...@@ -164,8 +167,8 @@ EXPORT_SYMBOL(refcount_inc_checked); ...@@ -164,8 +167,8 @@ EXPORT_SYMBOL(refcount_inc_checked);
* at UINT_MAX. * at UINT_MAX.
* *
* Provides release memory ordering, such that prior loads and stores are done * Provides release memory ordering, such that prior loads and stores are done
* before, and provides a control dependency such that free() must come after. * before, and provides an acquire ordering on success such that free()
* See the comment on top. * must come after.
* *
* Use of this function is not recommended for the normal reference counting * Use of this function is not recommended for the normal reference counting
* use case in which references are taken and released one at a time. In these * use case in which references are taken and released one at a time. In these
...@@ -190,7 +193,12 @@ bool refcount_sub_and_test_checked(unsigned int i, refcount_t *r) ...@@ -190,7 +193,12 @@ bool refcount_sub_and_test_checked(unsigned int i, refcount_t *r)
} while (!atomic_try_cmpxchg_release(&r->refs, &val, new)); } while (!atomic_try_cmpxchg_release(&r->refs, &val, new));
return !new; if (!new) {
smp_acquire__after_ctrl_dep();
return true;
}
return false;
} }
EXPORT_SYMBOL(refcount_sub_and_test_checked); EXPORT_SYMBOL(refcount_sub_and_test_checked);
...@@ -202,8 +210,8 @@ EXPORT_SYMBOL(refcount_sub_and_test_checked); ...@@ -202,8 +210,8 @@ EXPORT_SYMBOL(refcount_sub_and_test_checked);
* decrement when saturated at UINT_MAX. * decrement when saturated at UINT_MAX.
* *
* Provides release memory ordering, such that prior loads and stores are done * Provides release memory ordering, such that prior loads and stores are done
* before, and provides a control dependency such that free() must come after. * before, and provides an acquire ordering on success such that free()
* See the comment on top. * must come after.
* *
* Return: true if the resulting refcount is 0, false otherwise * Return: true if the resulting refcount is 0, false otherwise
*/ */
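The ordering described here protects the usual put-then-free idiom; a minimal sketch (struct example_obj and example_put() are hypothetical):

struct example_obj {
	refcount_t ref;
	void *payload;
};

static void example_put(struct example_obj *obj)
{
	/*
	 * Every decrement is a release, so this CPU's prior stores to
	 * *obj are ordered before the count drops.  The acquire taken on
	 * the final (zero) decrement pairs with the other holders'
	 * releases, so all of their accesses to *obj are complete and
	 * visible before kfree() runs.
	 */
	if (refcount_dec_and_test(&obj->ref))
		kfree(obj);
}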
......
#!/bin/sh
# SPDX-License-Identifier: GPL-2.0
# helpers for dealing with atomics.tbl
#meta_in(meta, match)
meta_in()
{
case "$1" in
[$2]) return 0;;
esac
return 1
}
#meta_has_ret(meta)
meta_has_ret()
{
meta_in "$1" "bBiIfFlR"
}
#meta_has_acquire(meta)
meta_has_acquire()
{
meta_in "$1" "BFIlR"
}
#meta_has_release(meta)
meta_has_release()
{
meta_in "$1" "BFIRs"
}
#meta_has_relaxed(meta)
meta_has_relaxed()
{
meta_in "$1" "BFIR"
}
#find_fallback_template(pfx, name, sfx, order)
find_fallback_template()
{
local pfx="$1"; shift
local name="$1"; shift
local sfx="$1"; shift
local order="$1"; shift
local base=""
local file=""
# We may have fallbacks for a specific case (e.g. read_acquire()), or
# an entire class, e.g. *inc*().
#
# Start at the most specific, and fall back to the most general. Once
# we find a specific fallback, don't bother looking for more.
for base in "${pfx}${name}${sfx}${order}" "${name}"; do
file="${ATOMICDIR}/fallbacks/${base}"
if [ -f "${file}" ]; then
printf "${file}"
break
fi
done
}
#gen_ret_type(meta, int)
gen_ret_type() {
local meta="$1"; shift
local int="$1"; shift
case "${meta}" in
[sv]) printf "void";;
[bB]) printf "bool";;
[aiIfFlR]) printf "${int}";;
esac
}
#gen_ret_stmt(meta)
gen_ret_stmt()
{
if meta_has_ret "${meta}"; then
printf "return ";
fi
}
# gen_param_name(arg)
gen_param_name()
{
# strip off the leading 'c' for 'cv'
local name="${1#c}"
printf "${name#*:}"
}
# gen_param_type(arg, int, atomic)
gen_param_type()
{
local type="${1%%:*}"; shift
local int="$1"; shift
local atomic="$1"; shift
case "${type}" in
i) type="${int} ";;
p) type="${int} *";;
v) type="${atomic}_t *";;
cv) type="const ${atomic}_t *";;
esac
printf "${type}"
}
#gen_param(arg, int, atomic)
gen_param()
{
local arg="$1"; shift
local int="$1"; shift
local atomic="$1"; shift
local name="$(gen_param_name "${arg}")"
local type="$(gen_param_type "${arg}" "${int}" "${atomic}")"
printf "${type}${name}"
}
#gen_params(int, atomic, arg...)
gen_params()
{
local int="$1"; shift
local atomic="$1"; shift
while [ "$#" -gt 0 ]; do
gen_param "$1" "${int}" "${atomic}"
[ "$#" -gt 1 ] && printf ", "
shift;
done
}
#gen_args(arg...)
gen_args()
{
while [ "$#" -gt 0 ]; do
printf "$(gen_param_name "$1")"
[ "$#" -gt 1 ] && printf ", "
shift;
done
}
#gen_proto_order_variants(meta, pfx, name, sfx, ...)
gen_proto_order_variants()
{
local meta="$1"; shift
local pfx="$1"; shift
local name="$1"; shift
local sfx="$1"; shift
gen_proto_order_variant "${meta}" "${pfx}" "${name}" "${sfx}" "" "$@"
if meta_has_acquire "${meta}"; then
gen_proto_order_variant "${meta}" "${pfx}" "${name}" "${sfx}" "_acquire" "$@"
fi
if meta_has_release "${meta}"; then
gen_proto_order_variant "${meta}" "${pfx}" "${name}" "${sfx}" "_release" "$@"
fi
if meta_has_relaxed "${meta}"; then
gen_proto_order_variant "${meta}" "${pfx}" "${name}" "${sfx}" "_relaxed" "$@"
fi
}
#gen_proto_variants(meta, name, ...)
gen_proto_variants()
{
local meta="$1"; shift
local name="$1"; shift
local pfx=""
local sfx=""
meta_in "${meta}" "fF" && pfx="fetch_"
meta_in "${meta}" "R" && sfx="_return"
gen_proto_order_variants "${meta}" "${pfx}" "${name}" "${sfx}" "$@"
}
#gen_proto(meta, ...)
gen_proto() {
local meta="$1"; shift
for m in $(echo "${meta}" | grep -o .); do
gen_proto_variants "${m}" "$@"
done
}
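To make the type[:name] notation handled by these helpers concrete (the table itself follows below): for a line such as "cmpxchg I v i:old i:new", gen_params() maps v to an ${atomic}_t * argument and i:old/i:new to plain ${int} arguments named after the colon, while gen_args() emits just the names. For the int/atomic_t flavour that corresponds to something along these lines (sketch, not actual generated output):

/*
 * gen_params() -> "atomic_t *v, int old, int new"
 * gen_args()   -> "v, old, new"
 */
static inline int
atomic_cmpxchg(atomic_t *v, int old, int new)
{
	return arch_atomic_cmpxchg(v, old, new);	/* body depends on the header */
}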
# name meta args...
#
# Where meta contains a string of variants to generate.
# Upper-case implies _{acquire,release,relaxed} variants.
# Valid meta values are:
# * B/b - bool: returns bool
# * v - void: returns void
# * I/i - int: returns base type
# * R - return: returns base type (has _return variants)
# * F/f - fetch: returns base type (has fetch_ variants)
# * l - load: returns base type (has _acquire order variant)
# * s - store: returns void (has _release order variant)
#
# Where args contains list of type[:name], where type is:
# * cv - const pointer to atomic base type (atomic_t/atomic64_t/atomic_long_t)
# * v - pointer to atomic base type (atomic_t/atomic64_t/atomic_long_t)
# * i - base type (int/s64/long)
# * p - pointer to base type (int/s64/long)
#
read l cv
set s v i
add vRF i v
sub vRF i v
inc vRF v
dec vRF v
and vF i v
andnot vF i v
or vF i v
xor vF i v
xchg I v i
cmpxchg I v i:old i:new
try_cmpxchg B v p:old i:new
sub_and_test b i v
dec_and_test b v
inc_and_test b v
add_negative b i v
add_unless fb v i:a i:u
inc_not_zero b v
inc_unless_negative b v
dec_unless_positive b v
dec_if_positive i v
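As a concrete reading of the meta letters: the single line "add vRF i v" above asks for the whole atomic_add() family. A sketch of the resulting prototypes for the int/atomic_t flavour (the generated headers define these as static inline wrappers and also emit atomic64_/atomic_long_ counterparts):

/* v: void-returning basic op */
void atomic_add(int i, atomic_t *v);

/* R: _return variants; upper case => all four orderings */
int atomic_add_return(int i, atomic_t *v);
int atomic_add_return_acquire(int i, atomic_t *v);
int atomic_add_return_release(int i, atomic_t *v);
int atomic_add_return_relaxed(int i, atomic_t *v);

/* F: fetch_ variants, likewise in all four orderings */
int atomic_fetch_add(int i, atomic_t *v);
int atomic_fetch_add_acquire(int i, atomic_t *v);
int atomic_fetch_add_release(int i, atomic_t *v);
int atomic_fetch_add_relaxed(int i, atomic_t *v);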
#!/bin/sh
# SPDX-License-Identifier: GPL-2.0
#
# Check if atomic headers are up-to-date
ATOMICDIR=$(dirname $0)
ATOMICTBL=${ATOMICDIR}/atomics.tbl
LINUXDIR=${ATOMICDIR}/../..
echo '' | sha1sum - > /dev/null 2>&1
if [ $? -ne 0 ]; then
printf "sha1sum not available, skipping atomic header checks.\n"
exit 0
fi
cat <<EOF |
asm-generic/atomic-instrumented.h
asm-generic/atomic-long.h
linux/atomic-fallback.h
EOF
while read header; do
OLDSUM="$(tail -n 1 ${LINUXDIR}/include/${header})"
OLDSUM="${OLDSUM#// }"
NEWSUM="$(head -n -1 ${LINUXDIR}/include/${header} | sha1sum)"
NEWSUM="${NEWSUM%% *}"
if [ "${OLDSUM}" != "${NEWSUM}" ]; then
printf "warning: generated include/${header} has been modified.\n"
fi
done
exit 0
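Reading the checks above backwards: OLDSUM strips a leading "// " from the last line of a generated header, and NEWSUM hashes everything except that last line, so each generated header is expected to end with a trailer of roughly this shape (placeholder, not a real checksum):

/* ...end of the generated header body... */
// <sha1sum of every line above this one>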
cat <<EOF
static inline ${ret}
${atomic}_${pfx}${name}${sfx}_acquire(${params})
{
${ret} ret = ${atomic}_${pfx}${name}${sfx}_relaxed(${args});
__atomic_acquire_fence();
return ret;
}
EOF
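Substituting one concrete case into this template, say ret=int, atomic=atomic, pfx empty, name=add, sfx=_return, the expected expansion is (sketch; the generated header additionally guards each definition so it only applies when the architecture did not provide its own):

static inline int
atomic_add_return_acquire(int i, atomic_t *v)
{
	int ret = atomic_add_return_relaxed(i, v);
	__atomic_acquire_fence();
	return ret;
}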
cat <<EOF
static inline ${ret}
${atomic}_${pfx}andnot${sfx}${order}(${int} i, ${atomic}_t *v)
{
${retstmt}${atomic}_${pfx}and${sfx}${order}(~i, v);
}
EOF
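For example, the fetch/relaxed instance of this template comes out as (sketch):

static inline int
atomic_fetch_andnot_relaxed(int i, atomic_t *v)
{
	return atomic_fetch_and_relaxed(~i, v);
}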