1. 30 Nov, 2020 8 commits
    • arm64: ptrace: prepare for EL1 irq/rcu tracking · 1ec2f2c0
      Mark Rutland authored
      Exceptions from EL1 may be taken when RCU isn't watching (e.g. in idle
      sequences), or when lockdep's hardirq state is transiently out-of-sync with
      the hardware state (e.g. in the middle of local_irq_enable()). To
      correctly handle these cases, we'll need to save/restore this state
      across some exceptions taken from EL1.
      
      A series of subsequent patches will update EL1 exception handlers to
      handle this. In preparation for this, and to avoid dependencies between
      those patches, this patch adds two new fields to struct pt_regs so that
      exception handlers can track this state.
      
      Note that this is placed in pt_regs as some entry/exit sequences such as
      el1_irq are invoked from assembly, which makes it very difficult to add
      a separate structure as with the irqentry_state used by x86. We can
      separate this once more of the exception logic is moved to C. While the
      fields only need to be bool, they are both made u64 to keep pt_regs
      16-byte aligned.
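
      As an illustration of the sizing argument (a stand-alone sketch, not the
      kernel's pt_regs; the field names are hypothetical):

          #include <stdint.h>

          /*
           * Two bool-valued flags are widened to u64 so the structure's size
           * stays a multiple of 16 bytes, mirroring the reasoning above.
           */
          struct pt_regs_like {
                  uint64_t regs[4];          /* stand-in for existing register state */
                  uint64_t lockdep_hardirqs; /* did lockdep consider IRQs enabled? */
                  uint64_t exit_rcu;         /* does RCU need to be re-entered on return? */
          };

          _Static_assert(sizeof(struct pt_regs_like) % 16 == 0,
                         "size must stay a multiple of 16 bytes");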
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20201130115950.22492-9-mark.rutland@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
      1ec2f2c0
    • arm64: entry: fix non-NMI user<->kernel transitions · 23529049
      Mark Rutland authored
      When built with PROVE_LOCKING, NO_HZ_FULL, and CONTEXT_TRACKING_FORCE,
      the kernel will WARN() at boot time that interrupts are enabled when we call
      context_tracking_user_enter(), despite the DAIF flags indicating that
      IRQs are masked.
      
      The problem is that we're not tracking IRQ flag changes accurately, and
      so lockdep believes interrupts are enabled when they are not (and
      vice-versa). We can shuffle things so as to make this more accurate. For
      kernel->user transitions there are a number of constraints we need to
      consider:
      
      1) When we call __context_tracking_user_enter() HW IRQs must be disabled
         and lockdep must be up-to-date with this.
      
      2) Userspace should be treated as having IRQs enabled from the PoV of
         both lockdep and tracing.
      
      3) As context_tracking_user_enter() stops RCU from watching, we cannot
         use RCU after calling it.
      
      4) IRQ flag tracing and lockdep have state that must be manipulated
         before RCU is disabled.
      
      ... with similar constraints applying for user->kernel transitions, with
      the ordering reversed.
      
      The generic entry code has enter_from_user_mode() and
      exit_to_user_mode() helpers to handle this. We can't use those directly,
      so we add arm64 copies for now (without the instrumentation markers
      which aren't used on arm64). These replace the existing user_exit() and
      user_exit_irqoff() calls spread throughout handlers, and the exception
      unmasking is left as-is.
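
      As a rough sketch of how those constraints order the kernel->user path
      (hand-written here for illustration, not the verbatim patch; the
      user->kernel helper mirrors it in reverse):

          static void noinstr exit_to_user_mode(void)
          {
                  trace_hardirqs_on_prepare();               /* (2)/(4): update tracing while RCU is usable */
                  lockdep_hardirqs_on_prepare(CALLER_ADDR0); /* (4): lockdep bookkeeping before RCU stops */
                  user_enter_irqoff();                       /* (3): RCU stops watching from here on */
                  lockdep_hardirqs_on(CALLER_ADDR0);         /* (2): userspace is treated as IRQs-enabled */
          }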
      
      Note that:
      
      * The accounting for debug exceptions from userspace now happens in
        el0_dbg() and ret_to_user(), so this is removed from
        debug_exception_enter() and debug_exception_exit(). As
        user_exit_irqoff() wakes RCU, the userspace-specific check is removed.
      
      * The accounting for syscalls now happens in el0_svc(),
        el0_svc_compat(), and ret_to_user(), so this is removed from
        el0_svc_common(). This does not adversely affect the workaround for
        erratum 1463225, as this does not depend on any of the state tracking.
      
      * In ret_to_user() we mask interrupts with local_daif_mask(), and so we
        need to inform lockdep and tracing. Here a trace_hardirqs_off() is
        sufficient and safe as we have not yet exited kernel context and RCU
        is usable.
      
      * As PROVE_LOCKING selects TRACE_IRQFLAGS, the ifdeferry in entry.S only
        needs to check for the latter.
      
      * EL0 SError handling will be dealt with in a subsequent patch, as this
        needs to be treated as an NMI.
      
      Prior to this patch, booting an appropriately-configured kernel would
      result in splats as below:
      
      | DEBUG_LOCKS_WARN_ON(lockdep_hardirqs_enabled())
      | WARNING: CPU: 2 PID: 1 at kernel/locking/lockdep.c:5280 check_flags.part.54+0x1dc/0x1f0
      | Modules linked in:
      | CPU: 2 PID: 1 Comm: init Not tainted 5.10.0-rc3 #3
      | Hardware name: linux,dummy-virt (DT)
      | pstate: 804003c5 (Nzcv DAIF +PAN -UAO -TCO BTYPE=--)
      | pc : check_flags.part.54+0x1dc/0x1f0
      | lr : check_flags.part.54+0x1dc/0x1f0
      | sp : ffff80001003bd80
      | x29: ffff80001003bd80 x28: ffff66ce801e0000
      | x27: 00000000ffffffff x26: 00000000000003c0
      | x25: 0000000000000000 x24: ffffc31842527258
      | x23: ffffc31842491368 x22: ffffc3184282d000
      | x21: 0000000000000000 x20: 0000000000000001
      | x19: ffffc318432ce000 x18: 0080000000000000
      | x17: 0000000000000000 x16: ffffc31840f18a78
      | x15: 0000000000000001 x14: ffffc3184285c810
      | x13: 0000000000000001 x12: 0000000000000000
      | x11: ffffc318415857a0 x10: ffffc318406614c0
      | x9 : ffffc318415857a0 x8 : ffffc31841f1d000
      | x7 : 647261685f706564 x6 : ffffc3183ff7c66c
      | x5 : ffff66ce801e0000 x4 : 0000000000000000
      | x3 : ffffc3183fe00000 x2 : ffffc31841500000
      | x1 : e956dc24146b3500 x0 : 0000000000000000
      | Call trace:
      |  check_flags.part.54+0x1dc/0x1f0
      |  lock_is_held_type+0x10c/0x188
      |  rcu_read_lock_sched_held+0x70/0x98
      |  __context_tracking_enter+0x310/0x350
      |  context_tracking_enter.part.3+0x5c/0xc8
      |  context_tracking_user_enter+0x6c/0x80
      |  finish_ret_to_user+0x2c/0x13c
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20201130115950.22492-8-mark.rutland@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
      23529049
    • arm64: entry: move el1 irq/nmi logic to C · 105fc335
      Mark Rutland authored
      In preparation for reworking the EL1 irq/nmi entry code, move the
      existing logic to C. We no longer need the asm_nmi_enter() and
      asm_nmi_exit() wrappers, so these are removed. The new C functions are
      marked noinstr, which prevents compiler instrumentation and runtime
      probing.
      
      In subsequent patches we'll want the new C helpers to be called in all
      cases, so we don't bother wrapping the calls with ifdeferry. Even when
      the new C functions are stubs, the trivial calls are unlikely to have a
      measurable impact on the IRQ or NMI paths anyway.
      
      Prototypes are added to <asm/exception.h> as otherwise (in some
      configurations) GCC will complain about the lack of a forward
      declaration. We already do this for existing functions, e.g.
      enter_from_user_mode().
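
      As a rough sketch (the function names are illustrative, not taken from
      the source), the prototypes and one of the C helpers could look like:

          /* in <asm/exception.h>, providing the forward declarations */
          asmlinkage void enter_el1_irq_or_nmi(struct pt_regs *regs);
          asmlinkage void exit_el1_irq_or_nmi(struct pt_regs *regs);

          /* in entry-common.c, called unconditionally from the asm entry code */
          asmlinkage void noinstr enter_el1_irq_or_nmi(struct pt_regs *regs)
          {
                  if (IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI) && !interrupts_enabled(regs))
                          nmi_enter();
          }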
      
      Other than the move to C and the noinstr markings described above, there
      should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20201130115950.22492-7-mark.rutland@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
      105fc335
    • arm64: entry: prepare ret_to_user for function call · 3cb5ed4d
      Mark Rutland authored
      In a subsequent patch ret_to_user will need to make a C function call
      (in some configurations) which may clobber x0-x18 at the start of the
      finish_ret_to_user block, before enable_step_tsk consumes the flags
      loaded into x1.
      
      In preparation for this, let's load the flags into x19, which is
      preserved across C function calls. This avoids a redundant reload of the
      flags and ensures we operate on a consistent snapshot regardless.
      
      There should be no functional change as a result of this patch. At this
      point of the entry/exit paths we only need to preserve x28 (tsk) and the
      sp, and x19 is free for this use.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20201130115950.22492-6-mark.rutland@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
      3cb5ed4d
    • arm64: entry: move enter_from_user_mode to entry-common.c · 2f911d49
      Mark Rutland authored
      In later patches we'll want to extend enter_from_user_mode() and add a
      corresponding exit_to_user_mode(). As these will be common for all
      entries/exits from userspace, it'd be better for these to live in
      entry-common.c with the rest of the entry logic.
      
      This patch moves enter_from_user_mode() into entry-common.c. As with
      other functions in entry-common.c it is marked as noinstr (which
      prevents all instrumentation, tracing, and kprobes) but there are no
      other functional changes.
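
      For orientation, the function being moved is small; roughly (a sketch,
      not the verbatim source):

          static void noinstr enter_from_user_mode(void)
          {
                  CT_WARN_ON(ct_state() != CONTEXT_USER); /* sanity-check context tracking */
                  user_exit_irqoff();                     /* record the transition out of userspace */
          }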
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20201130115950.22492-5-mark.rutland@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
      2f911d49
    • arm64: entry: mark entry code as noinstr · da192676
      Mark Rutland authored
      Functions in entry-common.c are marked as notrace and NOKPROBE_SYMBOL(),
      but they're still subject to other instrumentation which may rely on
      lockdep/rcu/context-tracking being up-to-date, and may cause nested
      exceptions (e.g. for WARN/BUG or KASAN's use of BRK) which will corrupt
      exception registers which have not yet been read.
      
      Prevent this by marking all functions in entry-common.c as noinstr to
      prevent compiler instrumentation. This also blacklists the functions for
      tracing and kprobes, so we don't need to handle that separately.
      Functions elsewhere will be dealt with in subsequent patches.
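
      Illustrative shape of the conversion, using el0_svc as an example handler
      (not a verbatim hunk):

          /*
           * Previously: marked notrace and listed via NOKPROBE_SYMBOL(el0_svc).
           * Now: noinstr covers both exclusions and additionally prevents
           * compiler instrumentation (e.g. KASAN, KCOV) within the function.
           */
          static void noinstr el0_svc(struct pt_regs *regs)
          {
                  /* ... handle the EL0 SVC exception ... */
          }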
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20201130115950.22492-4-mark.rutland@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
      da192676
    • arm64: mark idle code as noinstr · 114e0a68
      Mark Rutland authored
      Core code disables RCU when calling arch_cpu_idle(), so it's not safe
      for arch_cpu_idle() or its callees to be instrumented, as the
      instrumentation callbacks may attempt to use RCU or other features which
      are unsafe to use in this context.
      
      Mark them noinstr to prevent issues.
      
      The use of local_irq_enable() in arch_cpu_idle() is similarly
      problematic, and the "sched/idle: Fix arch_cpu_idle() vs tracing" patch
      queued in the tip tree addresses that case.
      Reported-by: Marco Elver <elver@google.com>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20201130115950.22492-3-mark.rutland@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
      114e0a68
    • arm64: syscall: exit userspace before unmasking exceptions · ca1314d7
      Mark Rutland authored
      In el0_svc_common() we unmask exceptions before we call user_exit(), and
      so there's a window where an IRQ or debug exception can be taken while
      RCU is not watching. In do_debug_exception() we account for this via
      debug_exception_{enter,exit}(), but in the el1_irq asm we do not, and we
      call trace functions which rely on RCU before we have a guarantee that
      RCU is watching.
      
      Let's avoid this by having el0_svc_common() exit userspace before
      unmasking exceptions, matching what we do for all other EL0 entry paths.
      We can use user_exit_irqoff() to avoid the pointless save/restore of IRQ
      flags while we're sure exceptions are masked in DAIF.
      
      The workaround for Cortex-A76 erratum 1463225 may trigger a debug
      exception before this point, but the debug code invoked in this case is
      safe even when RCU is not watching.
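
      A minimal sketch of the resulting ordering in el0_svc_common() (not a
      verbatim hunk):

          user_exit_irqoff();               /* leave userspace context while DAIF still masks exceptions */
          local_daif_restore(DAIF_PROCCTX); /* only now unmask IRQs/debug, with RCU watching */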
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20201130115950.22492-2-mark.rutland@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
      ca1314d7
  2. 23 Nov, 2020 4 commits
  3. 13 Nov, 2020 5 commits
  4. 10 Nov, 2020 4 commits
  5. 05 Nov, 2020 1 commit
  6. 03 Nov, 2020 2 commits
  7. 30 Oct, 2020 2 commits
  8. 29 Oct, 2020 2 commits
  9. 28 Oct, 2020 12 commits
    • arm64: mte: Document that user PSTATE.TCO is ignored by kernel uaccess · ef5dd6a0
      Catalin Marinas authored
      On exception entry, the kernel explicitly resets the PSTATE.TCO (tag
      check override) so that any kernel memory accesses will be checked (the
      bit is restored on exception return). This has the side-effect that the
      uaccess routines will not honour the PSTATE.TCO that may have been set
      by the user prior to a syscall.
      
      There is no issue in practice since PSTATE.TCO is expected to be used
      only for brief periods in specific routines (e.g. garbage collection).
      To control the tag checking mode of the uaccess routines, the user will
      have to invoke a corresponding prctl() call.
      
      Document the kernel behaviour w.r.t. PSTATE.TCO accordingly.
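
      As a hedged usage example (assuming the PR_MTE_* constants are available
      from <linux/prctl.h>), a thread selects the tag-checking mode that also
      governs kernel uaccess performed on its behalf, regardless of its
      PSTATE.TCO setting:

          #include <sys/prctl.h>
          #include <linux/prctl.h>

          /* Enable tagged addresses with synchronous tag checking and allow
           * tags 1-15 to be generated for allocations. */
          static int enable_sync_tag_checking(void)
          {
                  return prctl(PR_SET_TAGGED_ADDR_CTRL,
                               PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_SYNC |
                               (0xfffe << PR_MTE_TAG_SHIFT),
                               0, 0, 0);
          }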
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Fixes: df9d7a22 ("arm64: mte: Add Memory Tagging Extension documentation")
      Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Szabolcs Nagy <szabolcs.nagy@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
      ef5dd6a0
    • module: use hidden visibility for weak symbol references · 13150bc5
      Ard Biesheuvel authored
      Geert reports that commit be288182 ("arm64/build: Assert for
      unwanted sections") results in build errors on arm64 for configurations
      that have CONFIG_MODULES disabled.
      
      The commit in question added ASSERT()s to the arm64 linker script to
      ensure that linker generated sections such as .got.plt etc are empty,
      but as it turns out, there are corner cases where the linker does emit
      content into those sections. More specifically, weak references to
      function symbols (which can remain unsatisfied, and can therefore not
      be emitted as relative references) will be emitted as GOT and PLT
      entries when linking the kernel in PIE mode (which is the case when
      CONFIG_RELOCATABLE is enabled, which is on by default).
      
      What happens is that code such as
      
      	struct device *(*fn)(struct device *dev);
      	struct device *iommu_device;
      
      	fn = symbol_get(mdev_get_iommu_device);
      	if (fn) {
      		iommu_device = fn(dev);
      
      essentially gets converted into the following when CONFIG_MODULES is off:
      
      	struct device *iommu_device;
      
      	if (&mdev_get_iommu_device) {
      		iommu_device = mdev_get_iommu_device(dev);
      
      where mdev_get_iommu_device is emitted as a weak symbol reference into
      the object file. The first reference is decorated with an ordinary
      ABS64 data relocation (which yields 0x0 if the reference remains
      unsatisfied). However, the indirect call is turned into a direct call
      covered by a R_AARCH64_CALL26 relocation, which is converted into a
      call via a PLT entry taking the target address from the associated
      GOT entry.
      
      Given that such GOT and PLT entries are unnecessary for fully linked
      binaries such as the kernel, let's give these weak symbol references
      hidden visibility, so that the linker knows that the weak reference
      via R_AARCH64_CALL26 can simply remain unsatisfied.
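
      The shape of the change is roughly the following sketch (shown against
      the !CONFIG_MODULES definition of symbol_get(); not a verbatim hunk):

          /* Hidden visibility tells the linker the weak reference may simply
           * remain unsatisfied (resolving to NULL), so no GOT/PLT entries are
           * emitted for it even when the kernel is linked as a PIE. */
          #define symbol_get(x) \
                  ({ extern typeof(x) x __attribute__((weak, visibility("hidden"))); &(x); })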
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
      Reviewed-by: Fangrui Song <maskray@google.com>
      Acked-by: Jessica Yu <jeyu@kernel.org>
      Cc: Jessica Yu <jeyu@kernel.org>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Nick Desaulniers <ndesaulniers@google.com>
      Link: https://lore.kernel.org/r/20201027151132.14066-1-ardb@kernel.org
      Signed-off-by: Will Deacon <will@kernel.org>
      13150bc5
    • arm64: efi: increase EFI PE/COFF header padding to 64 KB · a2d50c1c
      Ard Biesheuvel authored
      Commit 76085aff ("efi/libstub/arm64: align PE/COFF sections to segment
      alignment") increased the PE/COFF section alignment to match the minimum
      segment alignment of the kernel image, which ensures that the kernel does
      not need to be moved around in memory by the EFI stub if it was built as
      relocatable.
      
      However, the first PE/COFF section starts at _stext, which is only 4 KB
      aligned, and so the section layout is inconsistent. Existing EFI loaders
      seem to care little about this, but it is better to clean this up.
      
      So let's pad the header to 64 KB to match the PE/COFF section alignment.
      
      Fixes: 76085aff ("efi/libstub/arm64: align PE/COFF sections to segment alignment")
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Link: https://lore.kernel.org/r/20201027073209.2897-2-ardb@kernel.org
      Signed-off-by: Will Deacon <will@kernel.org>
      a2d50c1c
    • arm64: vmlinux.lds: account for spurious empty .igot.plt sections · 5f692a81
      Ard Biesheuvel authored
      Now that we started making the linker warn about orphan sections
      (input sections that are not explicitly consumed by an output section),
      some configurations produce the following warning:
      
        aarch64-linux-gnu-ld: warning: orphan section `.igot.plt' from
               `arch/arm64/kernel/head.o' being placed in section `.igot.plt'
      
      It could be any file that triggers this - head.o is simply the first
      input file in the link - and the resulting .igot.plt section never
      actually appears in vmlinux as it turns out to be empty.
      
      So let's add .igot.plt to our collection of input sections to disregard,
      provided that they are empty.
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Cc: Jessica Yu <jeyu@kernel.org>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Nick Desaulniers <ndesaulniers@google.com>
      Link: https://lore.kernel.org/r/20201028133332.5571-1-ardb@kernel.org
      Signed-off-by: Will Deacon <will@kernel.org>
      5f692a81
    • kselftest/arm64: Fix check_user_mem test · 493b35db
      Vincenzo Frascino authored
      The check_user_mem test reports the error below because the test
      plan is not declared correctly:
      
        # Planned tests != run tests (0 != 4)
      
      Fix the test by adding the correct test plan declaration.
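
      For reference, the fix boils down to declaring the plan early in main(),
      before any result is reported; a sketch:

          /* kselftest.h provides ksft_set_plan(); the count matches the four
           * results this test emits. */
          ksft_set_plan(4);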
      
      Fixes: 4dafc08d ("kselftest/arm64: Check mte tagged user address in kernel")
      Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
      Cc: Shuah Khan <shuah@kernel.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Gabor Kertesz <gabor.kertesz@arm.com>
      Cc: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Link: https://lore.kernel.org/r/20201026121248.2340-7-vincenzo.frascino@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
      493b35db
    • kselftest/arm64: Fix check_ksm_options test · cbb268af
      Vincenzo Frascino authored
      The check_ksm_options test reports the error below because the test
      plan is not declared correctly:
      
        # Planned tests != run tests (0 != 4)
      
      Fix the test by adding the correct test plan declaration.
      
      Fixes: f981d8fa ("kselftest/arm64: Verify KSM page merge for MTE pages")
      Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
      Cc: Shuah Khan <shuah@kernel.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Gabor Kertesz <gabor.kertesz@arm.com>
      Cc: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Link: https://lore.kernel.org/r/20201026121248.2340-6-vincenzo.frascino@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
      cbb268af
    • kselftest/arm64: Fix check_mmap_options test · 7419390a
      Vincenzo Frascino authored
      The check_mmap_options test reports the error below because the test
      plan is not declared correctly:
      
        # Planned tests != run tests (0 != 22)
      
      Fix the test by adding the correct test plan declaration.
      
      Fixes: 53ec81d2 ("kselftest/arm64: Verify all different mmap MTE options")
      Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
      Cc: Shuah Khan <shuah@kernel.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Gabor Kertesz <gabor.kertesz@arm.com>
      Cc: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Link: https://lore.kernel.org/r/20201026121248.2340-5-vincenzo.frascino@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
      7419390a
    • kselftest/arm64: Fix check_child_memory test · 386cf789
      Vincenzo Frascino authored
      The check_child_memory test reports the error below because the test
      plan is not declared correctly:
      
        # Planned tests != run tests (0 != 12)
      
      Fix the test by adding the correct test plan declaration.
      
      Fixes: dfe537cf ("kselftest/arm64: Check forked child mte memory accessibility")
      Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
      Cc: Shuah Khan <shuah@kernel.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Gabor Kertesz <gabor.kertesz@arm.com>
      Cc: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Link: https://lore.kernel.org/r/20201026121248.2340-4-vincenzo.frascino@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
      386cf789
    • kselftest/arm64: Fix check_tags_inclusion test · 041fa41f
      Vincenzo Frascino authored
      The check_tags_inclusion test reports the error below because the test
      plan is not declared correctly:
      
        # Planned tests != run tests (0 != 4)
      
      Fix the test by adding the correct test plan declaration.
      
      Fixes: f3b2a26c ("kselftest/arm64: Verify mte tag inclusion via prctl")
      Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
      Cc: Shuah Khan <shuah@kernel.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Gabor Kertesz <gabor.kertesz@arm.com>
      Cc: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Link: https://lore.kernel.org/r/20201026121248.2340-3-vincenzo.frascino@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
      041fa41f
    • kselftest/arm64: Fix check_buffer_fill test · 5bc7c115
      Vincenzo Frascino authored
      The check_buffer_fill test reports the error below because the test
      plan is not declared correctly:
      
        # Planned tests != run tests (0 != 20)
      
      Fix the test by adding the correct test plan declaration.
      
      Fixes: e9b60476 ("kselftest/arm64: Add utilities and a test to validate mte memory")
      Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
      Cc: Shuah Khan <shuah@kernel.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Gabor Kertesz <gabor.kertesz@arm.com>
      Cc: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Link: https://lore.kernel.org/r/20201026121248.2340-2-vincenzo.frascino@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
      5bc7c115
    • arm64: avoid -Woverride-init warning · 332576e6
      Arnd Bergmann authored
      The icache_policy_str[] definition causes a warning when extra
      warning flags are enabled:
      
      arch/arm64/kernel/cpuinfo.c:38:26: warning: initialized field overwritten [-Woverride-init]
         38 |  [ICACHE_POLICY_VIPT]  = "VIPT",
            |                          ^~~~~~
      arch/arm64/kernel/cpuinfo.c:38:26: note: (near initialization for 'icache_policy_str[2]')
      arch/arm64/kernel/cpuinfo.c:39:26: warning: initialized field overwritten [-Woverride-init]
         39 |  [ICACHE_POLICY_PIPT]  = "PIPT",
            |                          ^~~~~~
      arch/arm64/kernel/cpuinfo.c:39:26: note: (near initialization for 'icache_policy_str[3]')
      arch/arm64/kernel/cpuinfo.c:40:27: warning: initialized field overwritten [-Woverride-init]
         40 |  [ICACHE_POLICY_VPIPT]  = "VPIPT",
            |                           ^~~~~~~
      arch/arm64/kernel/cpuinfo.c:40:27: note: (near initialization for 'icache_policy_str[0]')
      
      There is no real need for the default initializer here, as printing a
      NULL string is harmless. Rewrite the logic to have an explicit
      reserved value for the only one that uses the default value.
      
      This partially reverts the commit that removed ICACHE_POLICY_AIVIVT.
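
      Illustrative shape of the rewrite (constant names are assumptions, not a
      verbatim hunk): each index gets exactly one designated initializer, with
      an explicit entry for the reserved encoding instead of a
      "[0 ... N] = default" range that later entries would override.

          static const char *const icache_policy_str[] = {
                  [ICACHE_POLICY_RESERVED] = "RESERVED/UNKNOWN",
                  [ICACHE_POLICY_VIPT]     = "VIPT",
                  [ICACHE_POLICY_PIPT]     = "PIPT",
                  [ICACHE_POLICY_VPIPT]    = "VPIPT",
          };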
      
      Fixes: 155433cb ("arm64: cache: Remove support for ASID-tagged VIVT I-caches")
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Link: https://lore.kernel.org/r/20201026193807.3816388-1-arnd@kernel.org
      Signed-off-by: Will Deacon <will@kernel.org>
      332576e6
    • KVM: arm64: ARM_SMCCC_ARCH_WORKAROUND_1 doesn't return SMCCC_RET_NOT_REQUIRED · 1de111b5
      Stephen Boyd authored
      According to the SMCCC spec[1] (7.5.2 Discovery), the
      ARM_SMCCC_ARCH_WORKAROUND_1 function id only returns 0, 1, and
      SMCCC_RET_NOT_SUPPORTED.
      
       0 is "workaround required and safe to call this function"
       1 is "workaround not required but safe to call this function"
       SMCCC_RET_NOT_SUPPORTED is "might be vulnerable or might not be, who knows, I give up!"
      
      SMCCC_RET_NOT_SUPPORTED might as well mean "workaround required, except
      calling this function may not work because it isn't implemented in some
      cases". Wonderful. We map this SMC call to
      
       0 is SPECTRE_MITIGATED
       1 is SPECTRE_UNAFFECTED
       SMCCC_RET_NOT_SUPPORTED is SPECTRE_VULNERABLE
      
      For KVM hypercalls (hvc), we've implemented this function id to return
      SMCCC_RET_NOT_SUPPORTED, 0, and SMCCC_RET_NOT_REQUIRED. One of those
      isn't supposed to be there. Per the code we call
      arm64_get_spectre_v2_state() to figure out what to return for this
      feature discovery call.
      
       0 is SPECTRE_MITIGATED
       SMCCC_RET_NOT_REQUIRED is SPECTRE_UNAFFECTED
       SMCCC_RET_NOT_SUPPORTED is SPECTRE_VULNERABLE
      
      Let's clean this up so that KVM tells the guest this mapping:
      
       0 is SPECTRE_MITIGATED
       1 is SPECTRE_UNAFFECTED
       SMCCC_RET_NOT_SUPPORTED is SPECTRE_VULNERABLE
      
      Note: SMCCC_RET_NOT_AFFECTED is 1 but isn't part of the SMCCC spec
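
      A sketch of the mapping KVM now reports for this function id (the
      variable name is illustrative; not a verbatim hunk):

          switch (arm64_get_spectre_v2_state()) {
          case SPECTRE_MITIGATED:
                  val = 0;                       /* workaround required and implemented */
                  break;
          case SPECTRE_UNAFFECTED:
                  val = 1;                       /* workaround not required, safe to call */
                  break;
          default:                               /* SPECTRE_VULNERABLE */
                  val = SMCCC_RET_NOT_SUPPORTED;
                  break;
          }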
      
      Fixes: c118bbb5 ("arm64: KVM: Propagate full Spectre v2 workaround state to KVM guests")
      Signed-off-by: Stephen Boyd <swboyd@chromium.org>
      Acked-by: Marc Zyngier <maz@kernel.org>
      Acked-by: Will Deacon <will@kernel.org>
      Cc: Andre Przywara <andre.przywara@arm.com>
      Cc: Steven Price <steven.price@arm.com>
      Cc: Marc Zyngier <maz@kernel.org>
      Cc: stable@vger.kernel.org
      Link: https://developer.arm.com/documentation/den0028/latest [1]
      Link: https://lore.kernel.org/r/20201023154751.1973872-1-swboyd@chromium.org
      Signed-off-by: Will Deacon <will@kernel.org>
      1de111b5