  1. 23 Apr, 2020 1 commit
  2. 01 Apr, 2020 1 commit
      arm64: Kconfig: ptrauth: Add binutils version check to fix mismatch · 15cd0e67
      Amit Daniel Kachhap authored
      The recent addition of ARM64_PTR_AUTH exposed a mismatch issue with
      binutils. GCC versions 9.1 and later insert a .note.gnu.property section
      note, but this can only be handled properly by binutils versions greater
      than 2.33.1. If older binutils are used, the following warnings are
      generated:
      
      aarch64-linux-ld: warning: arch/arm64/kernel/vdso/vgettimeofday.o: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0000000
      aarch64-linux-objdump: warning: arch/arm64/lib/csum.o: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0000000
      aarch64-linux-nm: warning: .tmp_vmlinux1: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0000000
      
      This patch enables ARM64_PTR_AUTH only when the gcc and binutils versions
      are compatible with each other. Older gcc versions that do not insert this
      section continue to work as before.
      
      This scenario may not occur with clang, as a recent commit 3b446c7d
      ("arm64: Kconfig: verify binutils support for ARM64_PTR_AUTH") masks out
      binutils versions older than 2.34.
      Reported-by: kbuild test robot <lkp@intel.com>
      Suggested-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com>
      Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      [catalin.marinas@arm.com: slight adjustment to the comment]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  3. 20 Mar, 2020 1 commit
  4. 18 Mar, 2020 3 commits
      arm64: compile the kernel with ptrauth return address signing · 74afda40
      Kristina Martsenko authored
      Compile all functions with two ptrauth instructions: PACIASP in the
      prologue to sign the return address, and AUTIASP in the epilogue to
      authenticate the return address (from the stack). If authentication
      fails, the return will cause an instruction abort to be taken, followed
      by an oops and killing the task.
      
      This should help protect the kernel against attacks using
      return-oriented programming. As ptrauth protects the return address, it
      can also serve as a replacement for CONFIG_STACKPROTECTOR, although note
      that it does not protect other parts of the stack.
      
      The new instructions are in the HINT encoding space, so on a system
      without ptrauth they execute as NOPs.
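      
      As a rough illustration (not taken from this patch; the function name is
      hypothetical and instruction scheduling varies by compiler), a function
      built with return address signing is expected to look something like:
      
      | <foo>:
      |         paciasp                        // sign x30 (LR), using SP as the modifier
      |         stp     x29, x30, [sp, #-16]!
      |         mov     x29, sp
      |         bl      0 <bar>
      |         ldp     x29, x30, [sp], #16
      |         autiasp                        // authenticate x30; a forged LR faults on ret
      |         ret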
      
      CONFIG_ARM64_PTR_AUTH now not only enables ptrauth for userspace and KVM
      guests, but also automatically builds the kernel with ptrauth
      instructions if the compiler supports it. If there is no compiler
      support, we do not warn that the kernel was built without ptrauth
      instructions.
      
      GCC 7 and 8 support the -msign-return-address option, while GCC 9
      deprecates that option and replaces it with -mbranch-protection. Support
      both options.
      
      Clang uses an external assembler, so this patch makes sure that the
      correct parameters (-march=armv8.3-a) are passed down to help it recognize
      the ptrauth instructions.
      
      The ftrace function tracer works properly with ptrauth only when the
      patchable-function-entry feature is present; this is ensured by the
      Kconfig dependency.
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
      Reviewed-by: Kees Cook <keescook@chromium.org>
      Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com> # not co-dev parts
      Co-developed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
      Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
      Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
      [Amit: Cover leaf function, comments, Ftrace Kconfig]
      Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      arm64: mask PAC bits of __builtin_return_address · 689eae42
      Amit Daniel Kachhap authored
      Functions like vmap() record how much memory has been allocated by their
      callers, and callers are identified using __builtin_return_address(). Once
      the kernel is using pointer-auth the return address will be signed. This
      means it will not match any kernel symbol, and will vary between threads
      even for the same caller.
      
      The output of /proc/vmallocinfo in this case may look like,
      0x(____ptrval____)-0x(____ptrval____)   20480 0x86e28000100e7c60 pages=4 vmalloc N0=4
      0x(____ptrval____)-0x(____ptrval____)   20480 0x86e28000100e7c60 pages=4 vmalloc N0=4
      0x(____ptrval____)-0x(____ptrval____)   20480 0xc5c78000100e7c60 pages=4 vmalloc N0=4
      
      The above three 64-bit values should all resolve to the same symbol name,
      not be three different signed LR values.
      
      Use the pre-processor to add logic that clears the PAC bits for callers of
      __builtin_return_address(). This patch adds a new file, asm/compiler.h,
      which is transitively included via include/compiler_types.h on the
      compiler command line, so it is guaranteed to be loaded and users of this
      macro will not pick up a wrong version.
      
      Helper macros ptrauth_kernel_pac_mask/ptrauth_clear_pac are created for
      this purpose and added to this file. The existing macro
      ptrauth_user_pac_mask is moved from asm/pointer_auth.h.
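      
      As a minimal user-space sketch of the idea (not the actual kernel macros:
      it assumes a 48-bit VA layout with the PAC in bits 63:56 and 54:48, and
      that kernel pointers have bit 55 set), clearing the PAC amounts to
      restoring the sign extension over the PAC bits:
      
      | #include <stdint.h>
      |
      | /* Assumed layout: PAC occupies bits 63:56 and 54:48 of a kernel pointer. */
      | #define PAC_KERNEL_MASK ((0xffULL << 56) | (0x7fULL << 48))
      |
      | /* Kernel addresses are sign-extended (bit 55 set), so stripping the PAC
      |  * means setting the PAC bits back to all-ones rather than to zero. */
      | static inline uint64_t clear_kernel_pac(uint64_t ptr)
      | {
      |         return ptr | PAC_KERNEL_MASK;
      | }
      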
      Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Reviewed-by: James Morse <james.morse@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      arm64: enable ptrauth earlier · 6982934e
      Kristina Martsenko authored
      When the kernel is compiled with pointer auth instructions, the boot CPU
      needs to start using address auth very early, so change the cpucap to
      account for this.
      
      Pointer auth must be enabled before we call C functions, because it is
      not possible to enter a function with pointer auth disabled and exit it
      with pointer auth enabled. Note, mismatches between architected and
      IMPDEF algorithms will still be caught by the cpufeature framework (the
      separate *_ARCH and *_IMP_DEF cpucaps).
      
      Note the change in behavior: if the boot CPU has address auth and a
      late CPU does not, then the late CPU is parked by the cpufeature
      framework. This is possible because the kernel will only contain NOP-space
      instructions for PAC, so such a mismatched late CPU would silently ignore
      those instructions in C functions. Also, if the boot CPU does not have
      address auth and a late CPU does, the late CPU will still boot, but with
      the ptrauth feature disabled.
      
      Leave generic authentication as a "system scope" cpucap for now, since
      initially the kernel will only use address authentication.
      Reviewed-by: Kees Cook <keescook@chromium.org>
      Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com>
      Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
      [Amit: Re-worked ptrauth setup logic, comments]
      Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  5. 06 Mar, 2020 1 commit
  6. 04 Mar, 2020 1 commit
      arm64/mm: Enable memory hot remove · bbd6ec60
      Anshuman Khandual authored
      The arch code for hot-remove must tear down portions of the linear map and
      vmemmap corresponding to memory being removed. In both cases the page
      tables mapping these regions must be freed, and when sparse vmemmap is in
      use the memory backing the vmemmap must also be freed.
      
      This patch adds unmap_hotplug_range() and free_empty_tables() helpers
      which can be used to tear down either region, and calls them from
      vmemmap_free() and ___remove_pgd_mapping(). The free_mapped argument
      determines whether the backing memory will be freed.
      
      It makes two distinct passes over the kernel page table. In the first
      pass, unmap_hotplug_range() unmaps each mapped leaf entry, invalidates the
      applicable TLB entries, and frees the backing memory if required (for
      vmemmap). In the second pass, free_empty_tables() looks for empty page
      table sections whose page table page can be unmapped, its TLB entries
      invalidated, and the page freed.
      
      While freeing intermediate-level page table pages, bail out if any of
      their entries are still valid. This can happen for a partially filled
      kernel page table, either from a previously failed memory hot add attempt
      or when removing an address range which does not span the entire page
      table page range.
      
      The vmemmap region may share levels of table with the vmalloc region.
      There can be conflicts between hot remove freeing page table pages and a
      concurrent vmalloc() walking the kernel page table. This conflict cannot
      simply be solved by taking the init_mm ptl because of the existing locking
      scheme in vmalloc(). So free_empty_tables() implements a floor-and-ceiling
      method, borrowed from the user page table tear-down in free_pgd_range(),
      which skips freeing a page table page if the intermediate address range is
      not aligned or the floor-ceiling span might not own the entire page table
      page.
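      
      A compilable outline of the two-pass structure described above (the helper
      names follow this commit message, but the bodies here are stubs and the
      signatures are simplified; the real helpers walk the pgd/p4d/pud/pmd/pte
      levels and take additional page-table-specific arguments):
      
      | #include <stdbool.h>
      |
      | /* Pass 1: unmap each mapped leaf entry, invalidate its TLB entries, and
      |  * free the backing memory when free_mapped is true (the vmemmap case). */
      | static void unmap_hotplug_range(unsigned long start, unsigned long end,
      |                                 bool free_mapped) { /* ... */ }
      |
      | /* Pass 2: free page table pages whose entries are now all invalid, but
      |  * only within [floor, ceiling] so that table pages shared with other
      |  * regions (e.g. vmalloc) are never freed. */
      | static void free_empty_tables(unsigned long start, unsigned long end,
      |                               unsigned long floor, unsigned long ceiling)
      | { /* ... */ }
      |
      | static void teardown(unsigned long start, unsigned long end,
      |                      bool free_mapped, unsigned long floor,
      |                      unsigned long ceiling)
      | {
      |         unmap_hotplug_range(start, end, free_mapped);
      |         free_empty_tables(start, end, floor, ceiling);
      | }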
      
      Boot memory on arm64 cannot be removed. Hence this registers a new memory
      hotplug notifier which prevents boot memory offlining and its removal.
      
      While here, update arch_add_memory() to handle __add_pages() failures by
      simply unmapping the recently added kernel linear mapping. Now enable
      memory hot remove on arm64 platforms by default with
      ARCH_ENABLE_MEMORY_HOTREMOVE.
      
      This implementation is overall inspired by the kernel page table tear-down
      procedure on the x86 architecture and the user page table tear-down method.
      
      [Mike and Catalin added P4D page table level support]
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  7. 27 Feb, 2020 1 commit
  8. 20 Feb, 2020 1 commit
      arm64: Remove TIF_NOHZ · 320a4fc2
      Frederic Weisbecker authored
      The syscall slow path is spuriously invoked when context tracking is
      activated, even though the entry code already calls context tracking from
      the fast path.
      
      Remove that overhead and the unused flag itself while at it.
      Acked-by: Will Deacon <will@kernel.org>
      Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
  9. 17 Feb, 2020 2 commits
  10. 14 Feb, 2020 1 commit
      context-tracking: Introduce CONFIG_HAVE_TIF_NOHZ · 490f561b
      Frederic Weisbecker authored
      A few archs (x86, arm, arm64) no longer rely on TIF_NOHZ to call into
      context tracking on user entry/exit, but instead use static keys (or
      nothing at all) to optimize those calls. Ideally every arch should migrate
      to that behaviour in the long run.
      
      Introduce a config option to let those archs remove their TIF_NOHZ
      definitions.
      Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paul Burton <paulburton@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: David S. Miller <davem@davemloft.net>
  11. 04 Feb, 2020 2 commits
  12. 22 Jan, 2020 2 commits
  13. 21 Jan, 2020 1 commit
  14. 16 Jan, 2020 3 commits
  15. 15 Jan, 2020 3 commits
  16. 08 Jan, 2020 1 commit
  17. 07 Jan, 2020 1 commit
  18. 13 Dec, 2019 1 commit
  19. 12 Dec, 2019 1 commit
      bpf, x86, arm64: Enable jit by default when not built as always-on · 81c22041
      Daniel Borkmann authored
      After the Spectre 2 fix via 290af866 ("bpf: introduce BPF_JIT_ALWAYS_ON
      config"), most major distros use the BPF_JIT_ALWAYS_ON configuration these
      days, which compiles out the BPF interpreter entirely and always enables
      the JIT. Also, given the recent fix in e1608f3f ("bpf: Avoid setting bpf
      insns pages read-only when prog is jited"), we additionally avoid
      fragmenting the direct map for the BPF insns pages sitting in the general
      data heap, since they are not used during execution. The latter is only
      needed when programs run through the interpreter.
      
      Since both the x86 and arm64 JITs have seen a lot of exposure over the
      years and are generally the most up to date and maintained, there is more
      downside in !BPF_JIT_ALWAYS_ON configurations to having the interpreter
      enabled by default rather than the JIT. Add an ARCH_WANT_DEFAULT_BPF_JIT
      config which archs can use to set bpf_jit_{enable,kallsyms} to 1. Back in
      the day the bpf_jit_kallsyms knob was set to 0 by default since major
      distros still had /proc/kallsyms addresses exposed to unprivileged user
      space, which is no longer the case. Hence both knobs are set via
      BPF_JIT_DEFAULT_ON, which is set to 'y' in the case of BPF_JIT_ALWAYS_ON
      or ARCH_WANT_DEFAULT_BPF_JIT.
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Will Deacon <will@kernel.org>
      Acked-by: Martin KaFai Lau <kafai@fb.com>
      Link: https://lore.kernel.org/bpf/f78ad24795c2966efcc2ee19025fa3459f622185.1575903816.git.daniel@iogearbox.net
  20. 08 Dec, 2019 1 commit
  21. 25 Nov, 2019 1 commit
  22. 17 Nov, 2019 1 commit
      int128: move __uint128_t compiler test to Kconfig · c12d3362
      Ard Biesheuvel authored
      In order to use 128-bit integer arithmetic in C code, the architecture
      needs to have declared support for it by setting ARCH_SUPPORTS_INT128,
      and it requires a version of the toolchain that supports this at build
      time. This is why all existing tests for ARCH_SUPPORTS_INT128 also test
      whether __SIZEOF_INT128__ is defined, since this is only the case for
      compilers that can support 128-bit integers.
      
      Let's fold this additional test into the Kconfig declaration of
      ARCH_SUPPORTS_INT128 so that we can also use the symbol in Makefiles,
      e.g., to decide whether a certain object needs to be included in the
      first place.
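      
      For example, code that wants to use 128-bit arithmetic typically guards it
      on that macro (a generic illustration, not taken from this patch):
      
      | #if defined(__SIZEOF_INT128__)
      | /* Only compilers that support __int128 on the target define this macro. */
      | static inline unsigned __int128 mul_64x64(unsigned long long a,
      |                                           unsigned long long b)
      | {
      |         return (unsigned __int128)a * b;
      | }
      | #endif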
      
      Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  23. 14 Nov, 2019 1 commit
  24. 11 Nov, 2019 2 commits
  25. 06 Nov, 2019 1 commit
      arm64: implement ftrace with regs · 3b23e499
      Torsten Duwe authored
      This patch implements FTRACE_WITH_REGS for arm64, which allows a traced
      function's arguments (and some other registers) to be captured into a
      struct pt_regs, allowing these to be inspected and/or modified. This is
      a building block for live-patching, where a function's arguments may be
      forwarded to another function. This is also necessary to enable ftrace
      and in-kernel pointer authentication at the same time, as it allows the
      LR value to be captured and adjusted prior to signing.
      
      Using GCC's -fpatchable-function-entry=N option, we can have the
      compiler insert a configurable number of NOPs between the function entry
      point and the usual prologue. This also ensures functions are AAPCS
      compliant (e.g. disabling inter-procedural register allocation).
      
      For example, with -fpatchable-function-entry=2, GCC 8.1.0 compiles the
      following:
      
      | unsigned long bar(void);
      |
      | unsigned long foo(void)
      | {
      |         return bar() + 1;
      | }
      
      ... to:
      
      | <foo>:
      |         nop
      |         nop
      |         stp     x29, x30, [sp, #-16]!
      |         mov     x29, sp
      |         bl      0 <bar>
      |         add     x0, x0, #0x1
      |         ldp     x29, x30, [sp], #16
      |         ret
      
      This patch builds the kernel with -fpatchable-function-entry=2,
      prefixing each function with two NOPs. To trace a function, we replace
      these NOPs with a sequence that saves the LR into a GPR, then calls an
      ftrace entry assembly function which saves this and other relevant
      registers:
      
      | mov	x9, x30
      | bl	<ftrace-entry>
      
      Since patchable functions are AAPCS compliant (and the kernel does not
      use x18 as a platform register), x9-x18 can be safely clobbered in the
      patched sequence and the ftrace entry code.
      
      There are now two ftrace entry functions, ftrace_regs_entry (which saves
      all GPRs), and ftrace_entry (which saves the bare minimum). A PLT is
      allocated for each within modules.
      Signed-off-by: Torsten Duwe <duwe@suse.de>
      [Mark: rework asm, comments, PLTs, initialization, commit message]
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Torsten Duwe <duwe@suse.de>
      Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Tested-by: Torsten Duwe <duwe@suse.de>
      Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Julien Thierry <jthierry@redhat.com>
      Cc: Will Deacon <will@kernel.org>
  26. 26 Oct, 2019 1 commit
  27. 25 Oct, 2019 1 commit
  28. 14 Oct, 2019 1 commit
  29. 08 Oct, 2019 1 commit
  30. 07 Oct, 2019 1 commit