  26 Mar, 2021 1 commit
      powerpc/spinlock: Unserialize spin_is_locked · 66f60522
      Davidlohr Bueso authored
      c6f5d02b (locking/spinlocks/arm64: Remove smp_mb() from
      arch_spin_is_locked()) made it pretty official that the call
      semantics do not imply any sort of barriers, and any user that
      gets creative must explicitly do any serialization.
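
      For powerpc this presumably boils down to dropping the full
      barrier from the locked-state check, along the lines of the
      sketch below (modelled on the simple-spinlock flavour; exact
      file and helper names may vary):

         static inline int arch_spin_is_locked(arch_spinlock_t *lock)
         {
         	/* No smp_mb() here: spin_is_locked() promises no ordering,
         	 * so a caller that needs serialization must provide it. */
         	return !arch_spin_value_unlocked(*lock);
         }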
      
      Such creativity, however, is nowadays pretty limited:
      
      1. spin_unlock_wait() has been removed from the kernel in favor
      of a lock/unlock combo (see the lock/unlock sketch after this
      list). Furthermore, queued spinlocks have, for a number of years
      now, no longer relied on _Q_LOCKED_VAL for the call, but on any
      non-zero value to indicate a locked state. There were cases where
      the delayed locked store could break mutual exclusion under
      crossed locking, with sysv ipc and netfilter being the most
      extreme examples.
      
      2. The audit Andrea did verified that the remaining
      spin_is_locked() callers no longer rely on such semantics. Most
      just use it to assert that a lock is held, for debugging
      purposes. The only user that gets cute is the NOLOCK qdisc, as
      of:
      
         96009c7d (sched: replace __QDISC_STATE_RUNNING bit with a spin lock)
      
      ... which ironically went in the day after c6f5d02b. That change
      replaced test_bit() with spin_is_locked() to decide whether to
      take the busylock heuristic path, reducing contention on the main
      qdisc lock. So any races against spin_is_locked() on archs that
      use LL/SC for spin_lock() will be benign and will not break
      mutual exclusion; furthermore, the seqlock and the busylock have
      the same scope. A simplified sketch of that heuristic follows.
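
      Here is that pattern, trimmed from qdisc_is_running() and
      __dev_xmit_skb(); treat the details as illustrative rather than
      exact upstream code:

         static inline bool qdisc_is_running(struct Qdisc *qdisc)
         {
         	if (qdisc->flags & TCQ_F_NOLOCK)
         		/* Contention hint only: a racy read is fine. */
         		return spin_is_locked(&qdisc->seqlock);
         	return (raw_read_seqcount(&qdisc->running) & 1) ? true : false;
         }

         /* In __dev_xmit_skb(): take the busylock only when the qdisc
          * already appears busy, to reduce contention on the main lock. */
         contended = qdisc_is_running(q);
         if (unlikely(contended))
         	spin_lock(&q->busylock);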
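
      As for the lock/unlock combo mentioned in point 1, the
      replacement idiom is simply to acquire and immediately release
      the lock, which serializes against the current holder with
      well-defined semantics; a minimal sketch:

         /* Old pattern, since removed from the kernel: */
         spin_unlock_wait(&lock);

         /* Replacement: */
         spin_lock(&lock);
         spin_unlock(&lock);
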
      Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Link: https://lore.kernel.org/r/20210309015950.27688-3-dave@stgolabs.net