Commit 1aa7a573 authored by Linus Torvalds, committed by Thomas Gleixner

x86/nospec: Simplify alternative_msr_write()

The macro is not type safe and I did look for why that "g" constraint for
the asm doesn't work: it's because the asm is more fundamentally wrong.

It does

        movl %[val], %%eax

but "val" isn't a 32-bit value, so then gcc will pass it in a register, 
and generate code like

        movl %rsi, %eax

and gas will complain about a nonsensical 'mov' instruction (it's moving a 
64-bit register to a 32-bit one).
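
As a rough user-space sketch of the same failure (the function and names here are made up, not part of this patch), the register variant does not even assemble:

        /* Toy example: "val" is 64-bit, so gcc hands the asm a 64-bit
         * register and the template expands to e.g. "movl %rdi, %eax",
         * which gas rejects as a nonsensical 'mov'. */
        void write_low_half(unsigned long long val)
        {
                asm volatile("movl %[val], %%eax"
                             : : [val] "r" (val)
                             : "eax");
        }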

Passing it through memory will just hide the real bug - gcc still thinks
the memory location is 64-bit, but the "movl" will only load the low 32
bits, and it all happens to work only because x86 is little-endian, so the
low half of the value is what sits at the start of that memory location.
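
Again purely as a user-space sketch (nothing below is kernel code), this is what the memory variant quietly does:

        #include <stdio.h>

        int main(void)
        {
                unsigned long long val = 0xdeadbeef00000072ULL;
                unsigned int lo;

                /* gcc still treats the slot as 64-bit, but the movl reads
                 * only the 32 bits at its start -- the low half, because
                 * x86 is little-endian. */
                asm ("movl %[val], %[lo]"
                     : [lo] "=r" (lo)
                     : [val] "m" (val));

                printf("0x%x\n", lo);   /* prints 0x72, the 0xdeadbeef half is gone */
                return 0;
        }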

Convert it to a type-safe inline function with a little trick that hands
the feature into the ALTERNATIVE() macro.
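
The trick is the %c operand modifier: for a constant ("i") operand it prints the bare value, without the '$' immediate punctuation, so the feature number can be spliced into the assembler-level ALTERNATIVE construct. A stand-alone sketch of just that modifier (values made up for illustration):

        #include <stdio.h>

        int main(void)
        {
                int out;

                /* %c[m] expands to plain "42", so the template becomes
                 * "movl $42, <reg>" with the '$' written by hand; a plain
                 * %[m] would already carry the '$' prefix. */
                asm ("movl $%c[m], %[out]"
                     : [out] "=r" (out)
                     : [m] "i" (42));

                printf("%d\n", out);    /* prints 42 */
                return 0;
        }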
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
parent c65732e4
@@ -241,15 +241,16 @@ static inline void vmexit_fill_RSB(void)
 #endif
 }
 
-#define alternative_msr_write(_msr, _val, _feature)             \
-        asm volatile(ALTERNATIVE("",                            \
-                                 "movl %[msr], %%ecx\n\t"       \
-                                 "movl %[val], %%eax\n\t"       \
-                                 "movl $0, %%edx\n\t"           \
-                                 "wrmsr",                       \
-                                 _feature)                      \
-                     : : [msr] "i" (_msr), [val] "i" (_val)     \
-                     : "eax", "ecx", "edx", "memory")
+static __always_inline
+void alternative_msr_write(unsigned int msr, u64 val, unsigned int feature)
+{
+        asm volatile(ALTERNATIVE("", "wrmsr", %c[feature])
+                : : "c" (msr),
+                    "a" (val),
+                    "d" (val >> 32),
+                    [feature] "i" (feature)
+                : "memory");
+}
 
 static inline void indirect_branch_prediction_barrier(void)
 {