Commit 77f48ec2 authored by Nadav Amit, committed by Ingo Molnar

x86/alternatives: Macrofy lock prefixes to work around GCC inlining bugs

As described in:

  77b0bf55: ("kbuild/Makefile: Prepare for using macros in inline assembly code to work around asm() related GCC inlining bugs")

GCC's inlining heuristics are broken with common asm() patterns used in
kernel code, resulting in the effective disabling of inlining.

The workaround is to define an assembly macro and call it from the inline
assembly block, i.e. to macrofy the affected block.

As a result, GCC's inlining heuristics treat the inline assembly block as a
single instruction when estimating its size.

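To illustrate the pattern (a minimal sketch, not code from this patch;
the lock_inc() helpers are hypothetical): GCC estimates the cost of an
asm() statement roughly from its line count, so the multi-line form
below inflates the apparent size of every function that inlines it,
while the macrofied form counts as a single line:

  /* Before: the asm() body spans several lines, so GCC's inlining
     heuristics overestimate the size of any function containing it. */
  static inline void lock_inc(unsigned int *v)
  {
          asm volatile(".pushsection .smp_locks,\"a\"\n"
                       ".balign 4\n"
                       ".long 671f - .\n"
                       ".popsection\n"
                       "671:\n"
                       "\tlock; incl %0"
                       : "+m" (*v));
  }

  /* After: the body lives in an assembler macro defined once for the
     whole build, and the asm() statement is one line that invokes it. */
  static inline void lock_inc_macrofied(unsigned int *v)
  {
          asm volatile("LOCK_PREFIX incl %0" : "+m" (*v));
  }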
This patch handles the LOCK prefix, allowing more aggressive inlining:

      text     data     bss      dec     hex  filename
  18140140 10225284 2957312 31322736 1ddf270  ./vmlinux before
  18146889 10225380 2957312 31329581 1de0d2d  ./vmlinux after (+6845)

This is the reduction in non-inlined functions:

  Before: 40286
  After:  40218 (-68)
Tested-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20181003213100.189959-6-namit@vmware.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent 9e1725b4
arch/x86/include/asm/alternative-asm.h
@@ -7,16 +7,24 @@
 #include <asm/asm.h>
 
 #ifdef CONFIG_SMP
-.macro LOCK_PREFIX
-672:	lock
+.macro LOCK_PREFIX_HERE
 	.pushsection .smp_locks,"a"
 	.balign 4
-	.long 672b - .
+	.long 671f - .		# offset
 	.popsection
-.endm
+671:
+.endm
+
+.macro LOCK_PREFIX insn:vararg
+	LOCK_PREFIX_HERE
+	lock \insn
+.endm
 #else
-.macro LOCK_PREFIX
+.macro LOCK_PREFIX_HERE
 .endm
+
+.macro LOCK_PREFIX insn:vararg
+.endm
 #endif
 
 /*
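A hypothetical call site (the addl operand is illustrative) shows what
the new vararg macro produces; the .smp_locks entry records the address
of the LOCK prefix so the runtime can patch it out on UP machines:

  static inline void lock_add(unsigned int *v, unsigned int i)
  {
          /* Emits the single line "LOCK_PREFIX addl %1,%0" into the
           * .s file; the assembler macro above expands it to:
           *
           *      .pushsection .smp_locks,"a"
           *      .balign 4
           *      .long 671f - .          # offset
           *      .popsection
           * 671:
           *      lock addl %1,%0
           */
          asm volatile(LOCK_PREFIX "addl %1,%0" : "+m" (*v) : "ir" (i));
  }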
arch/x86/include/asm/alternative.h
@@ -31,15 +31,8 @@
  */
 
 #ifdef CONFIG_SMP
-#define LOCK_PREFIX_HERE \
-		".pushsection .smp_locks,\"a\"\n"	\
-		".balign 4\n"				\
-		".long 671f - .\n" /* offset */		\
-		".popsection\n"				\
-		"671:"
-
-#define LOCK_PREFIX LOCK_PREFIX_HERE "\n\tlock; "
-
+#define LOCK_PREFIX_HERE "LOCK_PREFIX_HERE\n\t"
+#define LOCK_PREFIX "LOCK_PREFIX "
 #else /* ! CONFIG_SMP */
 #define LOCK_PREFIX_HERE ""
 #define LOCK_PREFIX ""
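Note the knock-on requirement: since the C-side LOCK_PREFIX now expands
to the literal text "LOCK_PREFIX " rather than to the bookkeeping
itself, the assembler macro must be visible in every translation unit.
The kbuild preparation referenced above arranges for macros.S to be fed
to the assembler for every compiled C file, so it must pull in the
assembler-side definitions, which is what the final hunk does: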
arch/x86/kernel/macros.S
@@ -8,3 +8,4 @@
 #include <linux/compiler.h>
 #include <asm/refcount.h>
+#include <asm/alternative-asm.h>