Commit 7c8746a9 authored by Will Deacon, committed by Russell King

ARM: 7955/1: spinlock: ensure we have a compiler barrier before sev

When unlocking a spinlock, we require the following, strictly ordered
sequence of events:

	<barrier>	/* dmb */
	<unlock>
	<barrier>	/* dsb */
	<sev>
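
For context, the unlock path that produces this sequence looks roughly as
follows (a sketch of arch_spin_unlock() as it stood around this kernel
version; dsb_sev() is responsible for the final <barrier> + <sev>):

	static inline void arch_spin_unlock(arch_spinlock_t *lock)
	{
		smp_mb();			/* <barrier> (dmb) */
		lock->tickets.owner++;		/* <unlock> */
		dsb_sev();			/* <barrier> (dsb) + <sev> */
	}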

Whilst the code does indeed reflect this in terms of the architecture,
the final <barrier> + <sev> have been contracted into a single inline
asm without a "memory" clobber, so the compiler is at liberty to
reorder the unlock to the end of the above sequence. In such a case,
a waiting CPU may be woken up before the lock has been unlocked, leading
to extremely poor performance.
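
A minimal reduction of the issue (hypothetical code, not taken from the
kernel; 'lock' stands in for the real lock word): without the clobber the
compiler may sink the store that releases the lock below the asm, whereas
adding "memory" pins it in place:

	static unsigned int lock;

	static void broken_unlock(void)
	{
		lock = 0;					/* <unlock> */
		__asm__ __volatile__ ("dsb ishst\n\tsev");	/* no "memory" clobber */
		/* the compiler may legally move the store to lock below the asm */
	}

	static void fixed_unlock(void)
	{
		lock = 0;					/* <unlock> */
		__asm__ __volatile__ ("dsb ishst\n\tsev" : : : "memory");
		/* the clobber keeps the store ordered before the dsb + sev */
	}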

This patch reworks the dsb_sev() function to make use of the dsb()
macro and ensure ordering against the unlock.
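
The compiler ordering falls out of the "memory" clobber in dsb(); on ARMv7
the macro is roughly the following (assuming the definition in the arch
barrier header around this time):

	#define dsb(option) __asm__ __volatile__ ("dsb " #option : : : "memory")

so dsb(ishst) acts as both the data barrier and a compiler barrier, keeping
the store that releases the lock ahead of the subsequent sev.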

Cc: <stable@vger.kernel.org>
Reported-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
parent bae0ca2b
@@ -37,18 +37,9 @@
 
 static inline void dsb_sev(void)
 {
-#if __LINUX_ARM_ARCH__ >= 7
-	__asm__ __volatile__ (
-		"dsb ishst\n"
-		SEV
-	);
-#else
-	__asm__ __volatile__ (
-		"mcr p15, 0, %0, c7, c10, 4\n"
-		SEV
-		: : "r" (0)
-	);
-#endif
+
+	dsb(ishst);
+	__asm__(SEV);
 }
 
 /*