Commit c1d7cd22 authored by Will Deacon

arm64: spinlock: fix ll/sc unlock on big-endian systems

When unlocking a spinlock, we perform a read-modify-write on the owner
ticket in order to increment it and store it back with release
semantics.

In the LL/SC case, we load the 16-bit ticket using a 32-bit load and
therefore store back the wrong halfword on a big-endian system,
corrupting the lock after the first unlock and killing the system dead.

This patch fixes the unlock code to use 16-bit accessors consistently.
Signed-off-by: Will Deacon <will.deacon@arm.com>
parent 4150e50b
@@ -110,7 +110,7 @@ static inline void arch_spin_unlock(arch_spinlock_t *lock)
 	asm volatile(ARM64_LSE_ATOMIC_INSN(
 	/* LL/SC */
-"	ldr	%w1, %0\n"
+"	ldrh	%w1, %0\n"
 "	add	%w1, %w1, #1\n"
 "	stlrh	%w1, %0",
 	/* LSE atomics */
...
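For illustration only, below is a minimal user-space C sketch of the failure mode described in the commit message. The struct layout and the field names (owner in the lower-addressed halfword, next above it) are assumptions made for the demo and are not taken from the kernel's arch_spinlock_t definition; the point is simply that a 32-bit load followed by a 16-bit store of the low half writes back the wrong halfword on a big-endian host.

/* be_unlock_demo.c - illustrative sketch only; layout and names are
 * assumptions for the demo, not the kernel's arch_spinlock_t. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

struct ticket_lock {
	uint16_t owner;	/* halfword at the lower address */
	uint16_t next;	/* halfword at the higher address */
};

int main(void)
{
	struct ticket_lock lock = { .owner = 0x0001, .next = 0x0002 };
	uint32_t word;
	uint16_t half;

	/* Buggy unlock pattern: 32-bit load (like "ldr"), increment, then a
	 * 16-bit store of the low half (like "stlrh") back to 'owner'. */
	memcpy(&word, &lock, sizeof(word));	/* reads owner AND next */
	word += 1;
	half = (uint16_t)word;			/* low halfword of the register */
	memcpy(&lock.owner, &half, sizeof(half));

	/* Little-endian host: the low halfword is 'owner', so this prints
	 * owner=0x0002 next=0x0002 and the unlock works by accident.
	 * Big-endian host: the low halfword is 'next', so 'owner' is clobbered
	 * with next+1 (owner=0x0003 next=0x0002) and the lock is corrupted. */
	printf("owner=%#06x next=%#06x\n", lock.owner, lock.next);
	return 0;
}

The patched kernel code sidesteps the problem entirely: with ldrh the register holds only the 16-bit owner ticket, so the increment and the stlrh store operate on the same halfword regardless of byte order.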