1. 12 Mar, 2018 15 commits
  2. 09 Mar, 2018 1 commit
    • x86/kprobes: Fix kernel crash when probing .entry_trampoline code · c07a8f8b
      Francis Deslauriers authored
      Disable the kprobe probing of the entry trampoline:
      
      .entry_trampoline is a code area that is used to ensure page table
      isolation between userspace and kernelspace.
      
      At the beginning of the trampoline's execution, the kernel's CR3
      value is loaded into the CR3 register, which enables translation of
      kernel virtual addresses to physical addresses. Before this happens,
      most kernel addresses cannot be translated, because the running
      process's CR3 is still in use.
      
      If a kprobe is placed on the trampoline code before that CR3 switch
      happens, the kernel crashes, because the pages needed for int3
      handling are not accessible.
      
      To fix this, add the .entry_trampoline section to the kprobe blacklist
      to prohibit the probing of code before all the kernel pages are
      accessible.
      Signed-off-by: Francis Deslauriers <francis.deslauriers@efficios.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: mathieu.desnoyers@efficios.com
      Cc: mhiramat@kernel.org
      Link: http://lkml.kernel.org/r/1520565492-4637-2-git-send-email-francis.deslauriers@efficios.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  3. 08 Mar, 2018 10 commits
  4. 07 Mar, 2018 7 commits
  5. 01 Mar, 2018 1 commit
    • x86/cpu_entry_area: Sync cpu_entry_area to initial_page_table · 945fd17a
      Thomas Gleixner authored
      The separation of the cpu_entry_area from the fixmap missed the fact that
      on 32bit non-PAE kernels the cpu_entry_area mapping might not be covered in
      initial_page_table by the previous synchronizations.
      
      This results in suspend/resume failures because the 32bit kernel uses
      the initial page table for resume. The absence of the cpu_entry_area
      mapping results in a triple fault, i.e. an instant reboot.
      
      With PAE enabled this works by chance because the PGD entry which covers
      the fixmap and other parts incidentally provides the cpu_entry_area
      mapping as well.
      
      Synchronize the initial page table after setting up the cpu entry
      area. Instead of adding yet another copy of the same code, move it to a
      function and invoke it from the various places.
      
      It needs to be investigated if the existing calls in setup_arch() and
      setup_per_cpu_areas() can be replaced by the later invocation from
      setup_cpu_entry_areas(), but that's beyond the scope of this fix.
      
      Fixes: 92a0f81d ("x86/cpu_entry_area: Move it out of the fixmap")
      Reported-by: Woody Suwalski <terraluna977@gmail.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Tested-by: Woody Suwalski <terraluna977@gmail.com>
      Cc: William Grant <william.grant@canonical.com>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/alpine.DEB.2.21.1802282137290.1392@nanos.tec.linutronix.de
  6. 28 Feb, 2018 3 commits
  7. 26 Feb, 2018 3 commits
    • Merge branch 'x86/boot' into x86/mm, to unify branches · 1ea4fe84
      Ingo Molnar authored
      Both x86/mm and x86/boot contain 5-level paging related patches;
      unify them into a single tree to work against.
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/boot/e820: Implement a range manipulation operator · ef61f8a3
      Jan H. Schönherr authored
      Add a more versatile memmap= operator, which -- in addition to all the
      things that were possible before -- allows you to:
      
      - redeclare existing ranges -- before, you were limited to adding ranges;
      - drop any range -- like a mem= for any location;
      - use any e820 memory type -- not just some predefined ones.
      
      The syntax is:
      
        memmap=<size>%<offset>-<oldtype>+<newtype>
      
      Size and offset work as usual. The "-<oldtype>" and "+<newtype>" parts
      are optional, and their presence determines the behavior: the command
      operates on the specified range of memory, limited to type <oldtype>
      (if specified). This memory is then configured to show up as <newtype>.
      If <newtype> is not specified, the memory is removed from the e820 map.
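As an illustration, a few hypothetical command-line uses of the operator (addresses and sizes are made up; the type numbers follow the usual e820 convention, e.g. 1 = usable RAM, 2 = reserved):

```
memmap=16M%0x10000000-1+2   # re-type 16M of type-1 (usable) RAM at 256M as type 2 (reserved)
memmap=16M%0x10000000+2     # same, but regardless of the range's current type
memmap=16M%0x10000000       # drop the 16M range at 256M from the e820 map entirely
```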
      Signed-off-by: Jan H. Schönherr <jschoenh@amazon.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Andy Shevchenko <andy.shevchenko@gmail.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20180202231020.15608-1-jschoenh@amazon.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/mm: Consider effective protection attributes in W+X check · 672c0ae0
      Jan Beulich authored
      Using just the leaf page table entry flags would cause a false warning
      in case _PAGE_RW is clear or _PAGE_NX is set in a higher level entry.
      Pass through both the current entry's flags and the accumulated
      effective value (the latter as pgprotval_t instead of pgprot_t, as it
      is not an actual entry's value).
      
      This in particular eliminates the false W+X warning when running under
      Xen, as commit:
      
        2cc42bac ("x86-64/Xen: eliminate W+X mappings")
      
      had to make the necessary adjustment in L2 rather than L1 (the reason
      is explained there). That is, _PAGE_RW is clear in L1, but _PAGE_NX is
      set in L2.
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Reviewed-by: Juergen Gross <jgross@suse.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/5A8FDE8902000078001AABBB@prv-mh.provo.novell.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>