  1. 21 Jan, 2020 1 commit
  2. 17 Jan, 2020 1 commit
    • arm64: entry: cleanup sp_el0 manipulation · 3e393417
      Mark Rutland authored
      The kernel stashes the current task struct in sp_el0 so that this can be
      acquired consistently/cheaply when required. When we take an exception
      from EL0 we have to:
      
      1) stash the original sp_el0 value
      2) find the current task
      3) update sp_el0 with the current task pointer
      
      Currently steps #1 and #2 occur in one place, and step #3 a while later.
      As the value of sp_el0 is immaterial between these points, let's move
      them together to make the code clearer and minimize ifdeffery. This
      necessitates moving the comment for MDSCR_EL1.SS.
      
      There should be no functional change as a result of this patch.
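      
      For context, sp_el0 is what makes reading the current task cheap on
      arm64; the reader side is essentially the following (a simplified
      sketch of get_current() from arch/arm64/include/asm/current.h):
      
        struct task_struct;
        
        static inline struct task_struct *get_current(void)
        {
                unsigned long sp_el0;
        
                /* sp_el0 holds the current task pointer while in the kernel */
                asm ("mrs %0, sp_el0" : "=r" (sp_el0));
        
                return (struct task_struct *)sp_el0;
        }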
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Will Deacon <will@kernel.org>
  3. 13 Jan, 2020 1 commit
    • arm64: kernel: Correct annotation of end of el0_sync · 73d6890f
      Mark Brown authored
      Commit 582f9583 ("arm64: entry: convert el0_sync to C") caused
      the ENDPROC() annotating the end of el0_sync to be placed after the code
      for el0_sync_compat. This replaced the previous annotation where it was
      located after all the cases that are now converted to C, including after
      the currently unannotated el0_irq_compat and el0_error_compat. Move the
      annotation to the end of the function and add separate annotations for
      the _compat ones.
      
      Fixes: 582f9583 ("arm64: entry: convert el0_sync to C")
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Signed-off-by: Will Deacon <will@kernel.org>
  4. 06 Dec, 2019 1 commit
  5. 28 Oct, 2019 2 commits
    • arm64: entry: convert el0_sync to C · 582f9583
      Mark Rutland authored
      This is largely a 1-1 conversion of asm to C, with a couple of caveats.
      
      The el0_sync{_compat} switches explicitly handle all the EL0 debug
      cases, so el0_dbg doesn't have to try to bail out for unexpected EL1
      debug ESR values. This also means that an unexpected vector catch from
      AArch32 is routed to el0_inv.
      
      We *could* merge the native and compat switches, which would make the
      diffstat negative, but I've tried to stay as close to the existing
      assembly as possible for the moment.
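      
      The shape of the resulting C handler is roughly the following (an
      abridged sketch of the el0_sync_handler() this change adds; the real
      switch covers every expected EL0 EC value, with a separate
      el0_sync_compat_handler() for AArch32):
      
        asmlinkage void notrace el0_sync_handler(struct pt_regs *regs)
        {
                unsigned long esr = read_sysreg(esr_el1);
        
                switch (ESR_ELx_EC(esr)) {
                case ESR_ELx_EC_SVC64:
                        el0_svc(regs);
                        break;
                case ESR_ELx_EC_DABT_LOW:
                        el0_da(regs, esr);
                        break;
                /* ... all other expected ECs, including the debug cases ... */
                default:
                        /* unexpected ECs, e.g. an AArch32 vector catch */
                        el0_inv(regs, esr);
                }
        }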
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      [split out of a bigger series, added nokprobes. removed irq trace
       calls as the C helpers do this. renamed el0_dbg's use of FAR]
      Signed-off-by: James Morse <james.morse@arm.com>
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: entry: convert el1_sync to C · ed3768db
      Mark Rutland authored
      This patch converts the EL1 sync entry assembly logic to C code.
      
      Doing this will allow us to make changes in a slightly more
      readable way. A case in point is supporting kernel-first RAS:
      do_sea() should be called on the CPU that took the fault.
      
      Largely the assembly code is converted to C in a relatively
      straightforward manner.
      
      Since all sync sites share a common asm entry point, the ASM_BUG()
      instances are no longer required for effective backtraces back to
      assembly, and we don't need similar BUG() entries.
      
      The ESR_ELx.EC codes for all (supported) debug exceptions are now
      checked in the el1_sync_handler's switch statement, which renders the
      check in el1_dbg redundant. This both simplifies the el1_dbg handler,
      and makes the EL1 exception handling more robust to
      currently-unallocated ESR_ELx.EC encodings.
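      
      Concretely, the debug ECs end up matched explicitly in the switch
      (an abridged sketch of el1_sync_handler(); unrecognised encodings
      fall through to el1_inv()):
      
        asmlinkage void notrace el1_sync_handler(struct pt_regs *regs)
        {
                unsigned long esr = read_sysreg(esr_el1);
        
                switch (ESR_ELx_EC(esr)) {
                case ESR_ELx_EC_DABT_CUR:
                case ESR_ELx_EC_IABT_CUR:
                        el1_abort(regs, esr);
                        break;
                case ESR_ELx_EC_BREAKPT_CUR:
                case ESR_ELx_EC_SOFTSTP_CUR:
                case ESR_ELx_EC_WATCHPT_CUR:
                case ESR_ELx_EC_BRK64:
                        /* only known debug ECs reach el1_dbg() now */
                        el1_dbg(regs, esr);
                        break;
                default:
                        el1_inv(regs, esr);
                }
        }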
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      [split out of a bigger series, added nokprobes, moved prototypes]
      Signed-off-by: James Morse <james.morse@arm.com>
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  6. 16 Oct, 2019 2 commits
  7. 15 Oct, 2019 1 commit
    • arm64: Relax ICC_PMR_EL1 accesses when ICC_CTLR_EL1.PMHE is clear · f2266504
      Marc Zyngier authored
      The GICv3 architecture specification is incredibly misleading when it
      comes to PMR and the requirement for a DSB. It turns out that this DSB
      is only required if the CPU interface sends an Upstream Control
      message to the redistributor in order to update the RD's view of PMR.
      
      This message is only sent when ICC_CTLR_EL1.PMHE is set, which isn't
      the case in Linux. It can still be set from EL3, so some special care
      is required. But the upshot is that in the (hopefully large) majority
      of the cases, we can drop the DSB altogether.
      
      This relies on a new static key being set if the boot CPU has PMHE
      set. The drawback is that this static key has to be exported to
      modules.
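      
      In code, the pattern this enables looks roughly like the following
      (a sketch based on the gic_pmr_sync static key this change
      introduces; the dsb(sy) was previously unconditional):
      
        /* Set once at boot if ICC_CTLR_EL1.PMHE is set; exported for modules. */
        DEFINE_STATIC_KEY_FALSE(gic_pmr_sync);
        EXPORT_SYMBOL(gic_pmr_sync);
        
        /* Only pay for the barrier when the RD actually consumes PMR updates. */
        #define pmr_sync()                                              \
                do {                                                    \
                        if (static_branch_unlikely(&gic_pmr_sync))      \
                                dsb(sy);                                \
                } while (0)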
      
      Cc: Will Deacon <will@kernel.org>
      Cc: James Morse <james.morse@arm.com>
      Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
      Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  8. 08 Oct, 2019 1 commit
  9. 04 Oct, 2019 1 commit
    • arm64: Fix incorrect irqflag restore for priority masking for compat · f46f27a5
      James Morse authored
      Commit bd82d4bd ("arm64: Fix incorrect irqflag restore for priority
      masking") added a macro to the entry.S call paths that leave the
      PSTATE.I bit set. This tells the pNMI (pseudo-NMI) masking logic that interrupts
      are masked by the CPU, not by the PMR. This value is read back by
      local_daif_save().
      
      Commit bd82d4bd added this call to el0_svc, as el0_svc_handler
      is called with interrupts masked. el0_svc_compat was missed, but it
      should be covered in the same way, as both of these paths end up in
      el0_svc_common(), which expects to unmask interrupts.
      
      Fixes: bd82d4bd ("arm64: Fix incorrect irqflag restore for priority masking")
      Signed-off-by: James Morse <james.morse@arm.com>
      Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
      Signed-off-by: Will Deacon <will@kernel.org>
  10. 21 Aug, 2019 1 commit
    • arm64: entry: Move ct_user_exit before any other exception · 2671828c
      James Morse authored
      When taking an SError or Debug exception from EL0, we run the C
      handler for these exceptions before updating the context tracking
      code and unmasking lower priority interrupts.
      
      When booting with nohz_full lockdep tells us we got this wrong:
      | =============================
      | WARNING: suspicious RCU usage
      | 5.3.0-rc2-00010-gb4b5e9dcb11b-dirty #11271 Not tainted
      | -----------------------------
      | include/linux/rcupdate.h:643 rcu_read_unlock() used illegally wh!
      |
      | other info that might help us debug this:
      |
      |
      | RCU used illegally from idle CPU!
      | rcu_scheduler_active = 2, debug_locks = 1
      | RCU used illegally from extended quiescent state!
      | 1 lock held by a.out/432:
      |  #0: 00000000c7a79515 (rcu_read_lock){....}, at: brk_handler+0x00
      |
      | stack backtrace:
      | CPU: 1 PID: 432 Comm: a.out Not tainted 5.3.0-rc2-00010-gb4b5e9d1
      | Hardware name: ARM LTD ARM Juno Development Platform/ARM Juno De8
      | Call trace:
      |  dump_backtrace+0x0/0x140
      |  show_stack+0x14/0x20
      |  dump_stack+0xbc/0x104
      |  lockdep_rcu_suspicious+0xf8/0x108
      |  brk_handler+0x164/0x1b0
      |  do_debug_exception+0x11c/0x278
      |  el0_dbg+0x14/0x20
      
      Moving the ct_user_exit calls to before do_debug_exception() means
      they also run before trace_hardirqs_off() has been updated. Add a new
      ct_user_exit_irqoff macro to avoid the context-tracking code using
      irqsave/restore before we've updated trace_hardirqs_off(). To be
      consistent, do this everywhere.
      
      The C helper is called enter_from_user_mode() to match x86 in the hope
      we can merge them into kernel/context_tracking.c later.
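      
      The C helper is small; it plausibly amounts to the following sketch
      (CT_WARN_ON(), ct_state() and user_exit_irqoff() are the generic
      context-tracking APIs):
      
        asmlinkage void notrace enter_from_user_mode(void)
        {
                /* Complain if we were already tracked as in-kernel. */
                CT_WARN_ON(ct_state() != CONTEXT_USER);
                user_exit_irqoff();
        }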
      
      Cc: Masami Hiramatsu <mhiramat@kernel.org>
      Fixes: 6c81fe79 ("arm64: enable context tracking")
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
  11. 22 Jul, 2019 1 commit
    • arm64: entry: SP Alignment Fault doesn't write to FAR_EL1 · 40ca0ce5
      James Morse authored
      Comparing the arm-arm's pseudocode for AArch64.PCAlignmentFault() with
      AArch64.SPAlignmentFault() shows that SP faults don't copy the faulty-SP
      to FAR_EL1, but this is where we read from, and the address we provide
      to user-space with the BUS_ADRALN signal.
      
      For user-space this value will be UNKNOWN due to the previous ERET to
      user-space. If the last value is preserved, on systems with KASLR or KPTI
      this will be the user-space link-register left in FAR_EL1 by tramp_exit().
      Fix this to retrieve the original sp_el0 value, and pass this to
      do_sp_pc_fault().
      
      SP alignment faults from EL1 will cause us to take the fault again when
      trying to store the pt_regs. This eventually takes us to the overflow
      stack. Remove the ESR_ELx_EC_SP_ALIGN check as we will never make it
      this far.
      
      Fixes: 60ffc30d ("arm64: Exception handling")
      Signed-off-by: James Morse <james.morse@arm.com>
      [will: change label name and fleshed out comment]
      Signed-off-by: Will Deacon <will@kernel.org>
  12. 21 Jun, 2019 3 commits
    • arm64: Fix incorrect irqflag restore for priority masking · bd82d4bd
      Julien Thierry authored
      When using IRQ priority masking to disable interrupts, in order to deal
      with the PSR.I state, local_irq_save() would convert the I bit into a
      PMR value (GIC_PRIO_IRQOFF). This resulted in local_irq_restore()
      potentially modifying the value of PMR in an undesired location due to the
      state of PSR.I upon flag saving [1].
      
      In an attempt to solve this issue in a less hackish manner, introduce
      a bit (GIC_PRIO_PSR_I_SET) in the PMR values that can represent
      whether PSR.I is being used to disable interrupts, in which case it
      takes precedence over the status of interrupt masking via PMR.
      
      GIC_PRIO_PSR_I_SET is chosen such that (<pmr_value> |
      GIC_PRIO_PSR_I_SET) does not mask more interrupts than <pmr_value> as
      some sections (e.g. arch_cpu_idle(), the interrupt acknowledge path)
      require PMR not to mask interrupts that could be signaled to the
      CPU when using only PSR.I.
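      
      As a worked example of that property (the exact constants are
      illustrative of the scheme rather than authoritative):
      
        #define GIC_PRIO_IRQON          0xe0    /* PMR leaves IRQs unmasked */
        #define GIC_PRIO_IRQOFF         0x60    /* PMR masks normal IRQs */
        #define GIC_PRIO_PSR_I_SET      (1 << 4)
        
        /*
         * ORing in the flag only raises the PMR threshold:
         * 0x60 | 0x10 == 0x70, and a higher PMR masks no more
         * interrupts than 0x60 did, so e.g. the acknowledge path
         * still sees anything that PSR.I alone would have allowed.
         */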
      
      [1] https://www.spinics.net/lists/arm-kernel/msg716956.html
      
      Fixes: 4a503217 ("arm64: irqflags: Use ICC_PMR_EL1 for interrupt masking")
      Cc: <stable@vger.kernel.org> # 5.1.x-
      Reported-by: Zenghui Yu <yuzenghui@huawei.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Wei Li <liwei391@huawei.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Christoffer Dall <christoffer.dall@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Julien Thierry <julien.thierry@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: Fix interrupt tracing in the presence of NMIs · 17ce302f
      Julien Thierry authored
      In the presence of any form of instrumentation, nmi_enter() should be
      done before calling any traceable code and any instrumentation code.
      
      Currently, nmi_enter() is done in handle_domain_nmi(), which is much
      too late, as instrumentation code might get called before it. Move the
      nmi_enter/exit() calls to the arch IRQ vector handler.
      
      On arm64, it is not possible to know if the IRQ vector handler was
      called because of an NMI before acknowledging the interrupt. However, it
      is possible to know whether normal interrupts could be taken in the
      interrupted context (i.e. if taking an NMI in that context could
      introduce a potential race condition).
      
      When interrupting a context with IRQs disabled, call nmi_enter() as soon
      as possible. In contexts with IRQs enabled, defer this to the interrupt
      controller, which is in a better position to know if an interrupt taken
      is an NMI.
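      
      The resulting logic in the arch handler is roughly the following (an
      illustrative sketch, not the literal patch; the function name is made
      up, while interrupts_enabled() is arm64's existing pt_regs helper):
      
        static void el1_irq_handler(struct pt_regs *regs)
        {
                /*
                 * If the interrupted context had IRQs masked, anything
                 * firing now can only be an NMI: enter NMI context
                 * before any traceable code runs.
                 */
                bool nmi = !interrupts_enabled(regs);
        
                if (nmi)
                        nmi_enter();
        
                /* IRQs-enabled case: the irqchip decides once it has
                 * acknowledged the interrupt. */
                handle_arch_irq(regs);
        
                if (nmi)
                        nmi_exit();
        }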
      
      Fixes: bc3c03cc ("arm64: Enable the support of pseudo-NMIs")
      Cc: <stable@vger.kernel.org> # 5.1.x-
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jason Cooper <jason@lakedaemon.net>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Julien Thierry <julien.thierry@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: Do not enable IRQs for ct_user_exit · 9034f625
      Julien Thierry authored
      For el0_dbg and el0_error, DAIF bits get explicitly cleared before
      calling ct_user_exit.
      
      When context tracking is disabled, DAIF gets set (almost) immediately
      after. When context tracking is enabled, among the first things done
      is disabling IRQs.
      
      What is actually needed is:
      - PSR.D = 0 so the system can be debugged (this should already be the case)
      - PSR.A = 0 so async errors can be handled during context tracking
      
      Do not clear PSR.I in those two locations.
      Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Julien Thierry <julien.thierry@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  13. 19 Jun, 2019 1 commit
  14. 23 May, 2019 1 commit
    • arm64: Handle erratum 1418040 as a superset of erratum 1188873 · a5325089
      Marc Zyngier authored
      We already mitigate erratum 1188873 affecting Cortex-A76 and
      Neoverse-N1 r0p0 to r2p0. It turns out that revisions r0p0 to
      r3p1 of the same cores are affected by erratum 1418040, which
      has the same workaround as 1188873.
      
      Let's expand the range of affected revisions to match 1418040,
      and repaint all occurrences of 1188873 to 1418040. Whilst we're
      there, do a bit of reformatting in silicon-errata.txt and drop
      a now unnecessary dependency on ARM_ARCH_TIMER_OOL_WORKAROUND.
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  15. 30 Apr, 2019 1 commit
  16. 26 Feb, 2019 1 commit
    • arm64: Rename get_thread_info() · 4caf8758
      Julien Thierry authored
      The assembly macro get_thread_info() actually returns a task_struct and is
      analogous to the current/get_current macro/function.
      
      While it could be argued that thread_info sits at the start of
      task_struct and the intention could have been to return a thread_info,
      instances of loads from/stores to the address obtained from
      get_thread_info() use offsets that are generated with
      offsetof(struct task_struct, [...]).
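      
      For example, the offsets in question come from the asm-offsets
      machinery, along these lines (a sketch of one such entry):
      
        /* arch/arm64/kernel/asm-offsets.c (sketch) */
        DEFINE(TSK_TI_FLAGS, offsetof(struct task_struct, thread_info.flags));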
      
      Rename get_thread_info() to state it returns a task_struct.
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Julien Thierry <julien.thierry@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  17. 06 Feb, 2019 3 commits
  18. 04 Feb, 2019 1 commit
  19. 03 Jan, 2019 1 commit
  20. 11 Dec, 2018 1 commit
    • arm64: preempt: Fix big-endian when checking preempt count in assembly · 7faa313f
      Will Deacon authored
      Commit 39624469 ("arm64: preempt: Provide our own implementation of
      asm/preempt.h") extended the preempt count field in struct thread_info
      to 64 bits, so that it consists of a 32-bit count plus a 32-bit flag
      indicating whether or not the current task needs rescheduling.
      
      Whilst the asm-offsets definition of TSK_TI_PREEMPT was updated to point
      to this new field, the assembly usage was left untouched meaning that a
      32-bit load from TSK_TI_PREEMPT on a big-endian machine actually returns
      the reschedule flag instead of the count.
      
      Whilst we could fix this by pointing TSK_TI_PREEMPT at the count field,
      we're actually better off reworking the two assembly users so that they
      operate on the whole 64-bit value in favour of inspecting the thread
      flags separately in order to determine whether a reschedule is needed.
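      
      The layout that bites here looks roughly like this (a sketch of the
      thread_info field added by commit 39624469); a 32-bit load at the
      field's offset reads need_resched, not count, on big-endian:
      
        union {
                u64     preempt_count;  /* 0 => preemptible */
                struct {
        #ifdef CONFIG_CPU_BIG_ENDIAN
                        u32     need_resched;
                        u32     count;
        #else
                        u32     count;
                        u32     need_resched;
        #endif
                } preempt;
        };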
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reported-by: "kernelci.org bot" <bot@kernelci.org>
      Tested-by: Kevin Hilman <khilman@baylibre.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  21. 06 Dec, 2018 2 commits
    • arm64: entry: Remove confusing comment · 8cb3451b
      Will Deacon authored
      The comment about SYS_MEMBARRIER_SYNC_CORE relying on ERET being
      context-synchronizing is confusing and misplaced with kpti. Given that
      this is already documented under Documentation/ (see arch-support.txt
      for membarrier), remove the comment altogether.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: entry: Place an SB sequence following an ERET instruction · 679db708
      Will Deacon authored
      Some CPUs can speculate past an ERET instruction and potentially perform
      speculative accesses to memory before processing the exception return.
      Since the register state is often controlled by a lower privilege level
      at the point of an ERET, this could potentially be used as part of a
      side-channel attack.
      
      This patch emits an SB sequence after each ERET so that speculation is
      held up on exception return.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  22. 01 Oct, 2018 2 commits
  23. 14 Sep, 2018 1 commit
  24. 26 Jul, 2018 1 commit
  25. 12 Jul, 2018 7 commits
  26. 11 Jul, 2018 1 commit