Commit 7829fb09 authored by Daniel Borkmann, committed by Herbert Xu

lib: make memzero_explicit more robust against dead store elimination

In commit 0b053c95 ("lib: memzero_explicit: use barrier instead
of OPTIMIZER_HIDE_VAR"), we made memzero_explicit() more robust in
case LTO would decide to inline memzero_explicit() and eventually
find out it could be eliminated as a dead store.
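
For reference, the two variants in play, sketched in C (the
pre-0b053c95 form is paraphrased from that commit's title, not
quoted verbatim):

  /* before 0b053c95: hide the pointer from the optimizer */
  memset(s, 0, count);
  OPTIMIZER_HIDE_VAR(s);

  /* since 0b053c95: compiler barrier after the memset() */
  memset(s, 0, count);
  barrier();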

While using barrier() works well for the case of gcc, recent efforts
from the LLVMLinux people suggest using llvm as an alternative to gcc,
and there, Stephan found in a simple stand-alone user space example
that llvm could nevertheless optimize and thus eliminate the memset().
A similar issue has been observed in the referenced llvm bug report,
which is regarded as not-a-bug.
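
A minimal stand-alone reproduction along these lines (a sketch, not
Stephan's exact test case) shows the problem: with only a plain
barrier(), clang is free to treat the memset() as a dead store:

  #include <string.h>

  #define barrier() __asm__ __volatile__("": : :"memory")

  static inline void memzero_explicit(void *s, size_t count)
  {
    memset(s, 0, count);
    barrier();    /* no operand ties @s to the asm */
  }

  int main(void)
  {
    char buff[20];
    memzero_explicit(buff, sizeof(buff));
    return 0;
  }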

Based on some experiments, icc is a bit special on its own: while it
doesn't seem to eliminate the memset() itself, it could do so with its
own memset() implementation, which would then lead to similar findings
as with llvm.

The fix in this patch now works for all three compilers (also tested
with more aggressive optimization levels). Arguably, in the current
kernel tree it's more of a theoretical issue, but imho, it's better
to be pedantic about it.

It's clearly visible with gcc/llvm though, using the code below: had
we used only barrier() here, llvm would have omitted the clearing; not
so with the barrier_data() variant.
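
To build the snippet stand-alone, assume these two lines precede it
(the barrier_data() definition mirrors the one added to
compiler-gcc.h by the patch below):

  #include <string.h>

  #define barrier_data(ptr) __asm__ __volatile__("": :"r"(ptr) :"memory")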

  static inline void memzero_explicit(void *s, size_t count)
  {
    memset(s, 0, count);
    barrier_data(s);
  }

  int main(void)
  {
    char buff[20];
    memzero_explicit(buff, sizeof(buff));
    return 0;
  }

  $ gcc -O2 test.c
  $ gdb a.out
  (gdb) disassemble main
  Dump of assembler code for function main:
   0x0000000000400400  <+0>: lea   -0x28(%rsp),%rax
   0x0000000000400405  <+5>: movq  $0x0,-0x28(%rsp)
   0x000000000040040e <+14>: movq  $0x0,-0x20(%rsp)
   0x0000000000400417 <+23>: movl  $0x0,-0x18(%rsp)
   0x000000000040041f <+31>: xor   %eax,%eax
   0x0000000000400421 <+33>: retq
  End of assembler dump.

  $ clang -O2 test.c
  $ gdb a.out
  (gdb) disassemble main
  Dump of assembler code for function main:
   0x00000000004004f0  <+0>: xorps  %xmm0,%xmm0
   0x00000000004004f3  <+3>: movaps %xmm0,-0x18(%rsp)
   0x00000000004004f8  <+8>: movl   $0x0,-0x8(%rsp)
   0x0000000000400500 <+16>: lea    -0x18(%rsp),%rax
   0x0000000000400505 <+21>: xor    %eax,%eax
   0x0000000000400507 <+23>: retq
  End of assembler dump.

As gcc, clang, and also icc define __GNUC__, it's sufficient to define
barrier_data() in compiler-gcc.h only for it to be picked up. For a
fallback or otherwise unsupported compiler, we define it as a plain
barrier(). The same goes for ecc, which does not support gcc inline
asm.
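
Sketched out, the resulting pick-up order looks as follows (a digest
paraphrasing the hunks below, not the literal file contents):

  /* compiler-gcc.h -- gcc, clang and icc all define __GNUC__: */
  #define barrier_data(ptr) __asm__ __volatile__("": :"r"(ptr) :"memory")

  /* compiler-intel.h -- ecc has no gcc inline asm, fall back: */
  #undef  barrier_data
  #define barrier_data(ptr) barrier()

  /* compiler.h -- fallback for any otherwise unsupported compiler: */
  #ifndef barrier_data
  # define barrier_data(ptr) barrier()
  #endif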

Reference: https://llvm.org/bugs/show_bug.cgi?id=15495
Reported-by: Stephan Mueller <smueller@chronox.de>
Tested-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: Theodore Ts'o <tytso@mit.edu>
Cc: Stephan Mueller <smueller@chronox.de>
Cc: Hannes Frederic Sowa <hannes@stressinduktion.org>
Cc: mancha security <mancha1@zoho.com>
Cc: Mark Charlebois <charlebm@gmail.com>
Cc: Behan Webster <behanw@converseincode.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
parent 8c98ebd7
--- a/include/linux/compiler-gcc.h
+++ b/include/linux/compiler-gcc.h
@@ -9,10 +9,24 @@
 		     + __GNUC_MINOR__ * 100	\
 		     + __GNUC_PATCHLEVEL__)
 
 /* Optimization barrier */
 
 /* The "volatile" is due to gcc bugs */
 #define barrier() __asm__ __volatile__("": : :"memory")
+/*
+ * This version is i.e. to prevent dead stores elimination on @ptr
+ * where gcc and llvm may behave differently when otherwise using
+ * normal barrier(): while gcc behavior gets along with a normal
+ * barrier(), llvm needs an explicit input variable to be assumed
+ * clobbered. The issue is as follows: while the inline asm might
+ * access any memory it wants, the compiler could have fit all of
+ * @ptr into memory registers instead, and since @ptr never escaped
+ * from that, it proofed that the inline asm wasn't touching any of
+ * it. This version works well with both compilers, i.e. we're telling
+ * the compiler that the inline asm absolutely may see the contents
+ * of @ptr. See also: https://llvm.org/bugs/show_bug.cgi?id=15495
+ */
+#define barrier_data(ptr) __asm__ __volatile__("": :"r"(ptr) :"memory")
 
 /*
  * This macro obfuscates arithmetic on a variable address so that gcc
--- a/include/linux/compiler-intel.h
+++ b/include/linux/compiler-intel.h
@@ -13,9 +13,12 @@
 /* Intel ECC compiler doesn't support gcc specific asm stmts.
  * It uses intrinsics to do the equivalent things.
  */
+#undef barrier_data
 #undef RELOC_HIDE
 #undef OPTIMIZER_HIDE_VAR
 
+#define barrier_data(ptr) barrier()
+
 #define RELOC_HIDE(ptr, off)				\
   ({ unsigned long __ptr;				\
      __ptr = (unsigned long) (ptr);			\
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -169,6 +169,10 @@ void ftrace_likely_update(struct ftrace_branch_data *f, int val, int expect);
 # define barrier() __memory_barrier()
 #endif
 
+#ifndef barrier_data
+# define barrier_data(ptr) barrier()
+#endif
+
 /* Unreachable code */
 #ifndef unreachable
 # define unreachable() do { } while (1)
--- a/lib/string.c
+++ b/lib/string.c
@@ -607,7 +607,7 @@ EXPORT_SYMBOL(memset);
 void memzero_explicit(void *s, size_t count)
 {
 	memset(s, 0, count);
-	barrier();
+	barrier_data(s);
 }
 EXPORT_SYMBOL(memzero_explicit);
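
A typical in-kernel caller (hypothetical, shown only to illustrate the
intended use) wipes sensitive stack data before it goes out of scope:

  #include <linux/string.h>
  #include <linux/types.h>

  static void hypothetical_crypto_op(void)
  {
    u8 key[32];

    /* ... derive and use key ... */

    /* The barrier_data(s) inside memzero_explicit() ensures the
     * compiler cannot elide this wipe as a dead store. */
    memzero_explicit(key, sizeof(key));
  }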