- 21 Jan, 2020 1 commit
-
-
Julien Thierry authored
kernel_ventry will create alternative entries to potentially replace 0 instructions with 0 instructions for EL1 vectors. While this does not cause an issue, it pointlessly takes up some bytes in the alternatives section. Do not generate such entries.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Julien Thierry <jthierry@redhat.com>
Signed-off-by: Will Deacon <will@kernel.org>
-
- 17 Jan, 2020 1 commit
-
-
Mark Rutland authored
The kernel stashes the current task struct in sp_el0 so that this can be acquired consistently/cheaply when required. When we take an exception from EL0, we have to:
1) stash the original sp_el0 value
2) find the current task
3) update sp_el0 with the current task pointer
Currently steps #1 and #2 occur in one place, and step #3 a while later. As the value of sp_el0 is immaterial between these points, let's move them together to make the code clearer and minimize ifdeffery. This necessitates moving the comment for MDSCR_EL1.SS. There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
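For reference, a minimal sketch of how the kernel reads the current task back out of sp_el0 (along the lines of arch/arm64/include/asm/current.h; simplified, not part of this patch):

    /* Sketch: while in the kernel, sp_el0 holds the current task_struct pointer. */
    static __always_inline struct task_struct *get_current(void)
    {
        unsigned long sp_el0;

        /* sp_el0 is repurposed as a scratch register holding 'current' */
        asm ("mrs %0, sp_el0" : "=r" (sp_el0));

        return (struct task_struct *)sp_el0;
    }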
-
- 13 Jan, 2020 1 commit
-
-
Mark Brown authored
Commit 582f9583 ("arm64: entry: convert el0_sync to C") caused the ENDPROC() annotating the end of el0_sync to be placed after the code for el0_sync_compat. This replaced the previous annotation where it was located after all the cases that are now converted to C, including after the currently unannotated el0_irq_compat and el0_error_compat. Move the annotation to the end of the function and add separate annotations for the _compat ones.
Fixes: 582f9583 ("arm64: entry: convert el0_sync to C")
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
-
- 06 Dec, 2019 1 commit
-
-
Heyi Guo authored
Stack overflow checking can be done by testing sp & (1 << THREAD_SHIFT) only when the stacks are aligned to (2 << THREAD_SHIFT) with a size of (1 << THREAD_SHIFT), and this is the case when CONFIG_VMAP_STACK is set. Fix the code comment to avoid confusion.
Cc: Will Deacon <will@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Heyi Guo <guoheyi@huawei.com>
[catalin.marinas@arm.com: Updated comment following Mark's suggestion]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
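A small illustrative sketch of why the single-bit test works (hypothetical values, not kernel code): with stacks of size (1 << THREAD_SHIFT) aligned to (2 << THREAD_SHIFT), bit THREAD_SHIFT of sp is 0 while sp is on the stack and flips to 1 as soon as sp underflows into the region just below it.

    #include <stdio.h>

    #define THREAD_SHIFT  14UL
    #define THREAD_SIZE   (1UL << THREAD_SHIFT)

    static int sp_overflowed(unsigned long sp)
    {
        return (sp & THREAD_SIZE) != 0;   /* the single-bit test described above */
    }

    int main(void)
    {
        unsigned long base = 0x40000000UL;   /* example base, aligned to 2 * THREAD_SIZE */

        printf("%d\n", sp_overflowed(base + THREAD_SIZE - 16));  /* on the stack: 0 */
        printf("%d\n", sp_overflowed(base - 16));                /* overflowed:   1 */
        return 0;
    }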
-
- 28 Oct, 2019 2 commits
-
-
Mark Rutland authored
This is largely a 1-1 conversion of asm to C, with a couple of caveats. The el0_sync{_compat} switches explicitly handle all the EL0 debug cases, so el0_dbg doesn't have to try to bail out for unexpected EL1 debug ESR values. This also means that an unexpected vector catch from AArch32 is routed to el0_inv. We *could* merge the native and compat switches, which would make the diffstat negative, but I've tried to stay as close to the existing assembly as possible for the moment.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
[split out of a bigger series, added nokprobes. removed irq trace calls as the C helpers do this. renamed el0_dbg's use of FAR]
Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Rutland authored
This patch converts the EL1 sync entry assembly logic to C code. Doing this will allow us to make changes in a slightly more readable way. A case in point is supporting kernel-first RAS. do_sea() should be called on the CPU that took the fault. Largely the assembly code is converted to C in a relatively straightforward manner. Since all sync sites share a common asm entry point, the ASM_BUG() instances are no longer required for effective backtraces back to assembly, and we don't need similar BUG() entries. The ESR_ELx.EC codes for all (supported) debug exceptions are now checked in the el1_sync_handler's switch statement, which renders the check in el1_dbg redundant. This both simplifies the el1_dbg handler, and makes the EL1 exception handling more robust to currently-unallocated ESR_ELx.EC encodings.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
[split out of a bigger series, added nokprobes, moved prototypes]
Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
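A simplified sketch of the shape of such a C handler; the specific ESR_ELx_EC_* cases and helper names below are illustrative, not the full patch:

    asmlinkage void notrace el1_sync_handler(struct pt_regs *regs)
    {
        unsigned long esr = read_sysreg(esr_el1);

        switch (ESR_ELx_EC(esr)) {
        case ESR_ELx_EC_DABT_CUR:      /* data abort from EL1 */
        case ESR_ELx_EC_IABT_CUR:      /* instruction abort from EL1 */
            el1_abort(regs, esr);
            break;
        case ESR_ELx_EC_BREAKPT_CUR:   /* known EL1 debug exceptions ... */
        case ESR_ELx_EC_BRK64:
            el1_dbg(regs, esr);
            break;
        default:
            /* anything unexpected, including unallocated EC values, is fatal */
            el1_inv(regs, esr);
        }
    }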
-
- 16 Oct, 2019 2 commits
-
-
Will Deacon authored
Sign-extending TTBR1 addresses when converting to an untagged address breaks the documented POSIX semantics for mlock() in some obscure error cases where we end up returning -EINVAL instead of -ENOMEM as a direct result of rewriting the upper address bits. Rework the untagged_addr() macro to preserve the upper address bits for TTBR1 addresses and only clear the tag bits for user addresses. This matches the behaviour of the 'clear_address_tag' assembly macro, so rename that and align the implementations at the same time so that they use the same instruction sequences for the tag manipulation.
Link: https://lore.kernel.org/stable/20191014162651.GF19200@arrakis.emea.arm.com/
Reported-by: Jan Stancek <jstancek@redhat.com>
Tested-by: Jan Stancek <jstancek@redhat.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Reviewed-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Will Deacon <will@kernel.org>
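A rough sketch of the idea (not the exact kernel macro): sign-extend from bit 55 and AND with the original address. For TTBR0 (user) addresses bit 55 is 0, so the AND clears the tag bits; for TTBR1 (kernel) addresses bit 55 is 1, so the upper bits are preserved.

    #include <stdint.h>

    static inline uint64_t untagged_addr_sketch(uint64_t addr)
    {
        /* shift the tag out, then arithmetic-shift back to sign-extend from bit 55 */
        int64_t sext = ((int64_t)(addr << 8)) >> 8;

        return addr & (uint64_t)sext;
    }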
-
Julien Thierry authored
Preempting from IRQ-return means that the task has its PSTATE saved on the stack, which will get restored when the task is resumed and does the actual IRQ return. However, enabling some CPU features requires modifying the PSTATE. This means that, if a task was scheduled out during an IRQ-return before all CPU features are enabled, the task might restore a PSTATE that does not include the feature enablement changes once scheduled back in.

* Task 1:

  PAN == 0 ---|                          |---------------
              |                          |<- return from IRQ, PSTATE.PAN = 0
              | <- IRQ                   |
              +--------+ <- preempt()  +--
                       ^
                       |
                       reschedule Task 1, PSTATE.PAN == 1

* Init:
  --------------------+------------------------
                      ^
                      |
                      enable_cpu_features
                      set PSTATE.PAN on all CPUs

Worse than this, since PSTATE is untouched when task switching is done, a task missing the new bits in PSTATE might affect another task, if both do direct calls to schedule() (outside of IRQ/exception contexts). Fix this by preventing preemption on IRQ-return until features are enabled on all CPUs. This way the only PSTATE values that are saved on the stack are from synchronous exceptions. These are expected to be fatal this early; the exception is BRK for WARN_ON(), but as this uses do_debug_exception(), which keeps IRQs masked, it shouldn't call schedule().
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
[james: Replaced a really cool hack, with an even simpler static key in C. expanded commit message with Julien's cover-letter ascii art]
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
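A sketch of the approach only; the helper and static-key names below are illustrative placeholders, not necessarily those used by the patch:

    asmlinkage void __sched arm64_preempt_schedule_irq(void)
    {
        lockdep_assert_irqs_disabled();

        /*
         * Preempting here leaves a copy of PSTATE on the stack that is
         * restored on the eventual IRQ return. Don't allow that until every
         * CPU feature that rewrites PSTATE bits has been enabled everywhere.
         */
        if (static_branch_likely(&cpufeatures_ready))   /* hypothetical key */
            preempt_schedule_irq();
    }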
-
- 15 Oct, 2019 1 commit
-
-
Marc Zyngier authored
The GICv3 architecture specification is incredibly misleading when it comes to PMR and the requirement for a DSB. It turns out that this DSB is only required if the CPU interface sends an Upstream Control message to the redistributor in order to update the RD's view of PMR. This message is only sent when ICC_CTLR_EL1.PMHE is set, which isn't the case in Linux. It can still be set from EL3, so some special care is required. But the upshot is that in the (hopefully large) majority of the cases, we can drop the DSB altogether. This relies on a new static key being set if the boot CPU has PMHE set. The drawback is that this static key has to be exported to modules.
Cc: Will Deacon <will@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
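A sketch of the resulting helper; treat the key name and placement as recalled details rather than a definitive quote of the patch. The DSB is only issued when the boot CPU reported ICC_CTLR_EL1.PMHE as set:

    static inline void pmr_sync(void)
    {
    #ifdef CONFIG_ARM64_PSEUDO_NMI
        extern struct static_key_false gic_pmr_sync;

        if (static_branch_unlikely(&gic_pmr_sync))
            dsb(sy);   /* propagate the PMR write to the redistributor */
    #endif
    }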
-
- 08 Oct, 2019 1 commit
-
-
Marc Zyngier authored
As a PRFM instruction racing against a TTBR update can have undesirable effects on TX2, NOP-out such PRFM on cores that are affected by the TX2-219 erratum.
Cc: <stable@vger.kernel.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
-
- 04 Oct, 2019 1 commit
-
-
James Morse authored
Commit bd82d4bd ("arm64: Fix incorrect irqflag restore for priority masking") added a macro to the entry.S call paths that leave the PSTATE.I bit set. This tells the pNMI masking logic that interrupts are masked by the CPU, not by the PMR. This value is read back by local_daif_save(). Commit bd82d4bd added this call to el0_svc, as el0_svc_handler is called with interrupts masked. el0_svc_compat was missed, but should be covered in the same way as both of these paths end up in el0_svc_common(), which expects to unmask interrupts.
Fixes: bd82d4bd ("arm64: Fix incorrect irqflag restore for priority masking")
Signed-off-by: James Morse <james.morse@arm.com>
Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Signed-off-by: Will Deacon <will@kernel.org>
-
- 21 Aug, 2019 1 commit
-
-
James Morse authored
When taking an SError or Debug exception from EL0, we run the C handler for these exceptions before updating the context tracking code and unmasking lower priority interrupts. When booting with nohz_full lockdep tells us we got this wrong:
| =============================
| WARNING: suspicious RCU usage
| 5.3.0-rc2-00010-gb4b5e9dcb11b-dirty #11271 Not tainted
| -----------------------------
| include/linux/rcupdate.h:643 rcu_read_unlock() used illegally wh!
|
| other info that might help us debug this:
|
|
| RCU used illegally from idle CPU!
| rcu_scheduler_active = 2, debug_locks = 1
| RCU used illegally from extended quiescent state!
| 1 lock held by a.out/432:
| #0: 00000000c7a79515 (rcu_read_lock){....}, at: brk_handler+0x00
|
| stack backtrace:
| CPU: 1 PID: 432 Comm: a.out Not tainted 5.3.0-rc2-00010-gb4b5e9d1
| Hardware name: ARM LTD ARM Juno Development Platform/ARM Juno De8
| Call trace:
|  dump_backtrace+0x0/0x140
|  show_stack+0x14/0x20
|  dump_stack+0xbc/0x104
|  lockdep_rcu_suspicious+0xf8/0x108
|  brk_handler+0x164/0x1b0
|  do_debug_exception+0x11c/0x278
|  el0_dbg+0x14/0x20
Moving the ct_user_exit calls to be before do_debug_exception() means they are also before trace_hardirqs_off() has been updated. Add a new ct_user_exit_irqoff macro to avoid the context-tracking code using irqsave/restore before we've updated trace_hardirqs_off(). To be consistent, do this everywhere. The C helper is called enter_from_user_mode() to match x86 in the hope we can merge them into kernel/context_tracking.c later.
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Fixes: 6c81fe79 ("arm64: enable context tracking")
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
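A sketch of the C helper described above, assuming the context-tracking API names used elsewhere in the kernel (user_exit_irqoff() and friends):

    asmlinkage void notrace enter_from_user_mode(void)
    {
        CT_WARN_ON(ct_state() != CONTEXT_USER);
        user_exit_irqoff();   /* irqoff variant: called while IRQs are still masked */
    }
    NOKPROBE_SYMBOL(enter_from_user_mode);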
-
- 22 Jul, 2019 1 commit
-
-
James Morse authored
Comparing the arm-arm's pseudocode for AArch64.PCAlignmentFault() with AArch64.SPAlignmentFault() shows that SP faults don't copy the faulty-SP to FAR_EL1, but this is where we read from, and the address we provide to user-space with the BUS_ADRALN signal. For user-space this value will be UNKNOWN due to the previous ERET to user-space. If the last value is preserved, on systems with KASLR or KPTI this will be the user-space link-register left in FAR_EL1 by tramp_exit(). Fix this to retrieve the original sp_el0 value, and pass this to do_sp_pc_fault(). SP alignment faults from EL1 will cause us to take the fault again when trying to store the pt_regs. This eventually takes us to the overflow stack. Remove the ESR_ELx_EC_SP_ALIGN check as we will never make it this far.
Fixes: 60ffc30d ("arm64: Exception handling")
Signed-off-by: James Morse <james.morse@arm.com>
[will: change label name and fleshed out comment]
Signed-off-by: Will Deacon <will@kernel.org>
-
- 21 Jun, 2019 3 commits
-
-
Julien Thierry authored
When using IRQ priority masking to disable interrupts, in order to deal with the PSR.I state, local_irq_save() would convert the I bit into a PMR value (GIC_PRIO_IRQOFF). This resulted in local_irq_restore() potentially modifying the value of PMR in an undesired location due to the state of PSR.I upon flag saving [1]. In an attempt to solve this issue in a less hackish manner, introduce a bit (GIC_PRIO_PSR_I_SET) for the PMR values that can represent whether PSR.I is being used to disable interrupts, in which case it takes precedence over the status of interrupt masking via PMR. GIC_PRIO_PSR_I_SET is chosen such that (<pmr_value> | GIC_PRIO_PSR_I_SET) does not mask more interrupts than <pmr_value>, as some sections (e.g. arch_cpu_idle(), interrupt acknowledge path) require PMR not to mask interrupts that could be signaled to the CPU when using only PSR.I.
[1] https://www.spinics.net/lists/arm-kernel/msg716956.html
Fixes: 4a503217 ("arm64: irqflags: Use ICC_PMR_EL1 for interrupt masking")
Cc: <stable@vger.kernel.org> # 5.1.x-
Reported-by: Zenghui Yu <yuzenghui@huawei.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Wei Li <liwei391@huawei.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
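An illustrative sketch of the flag's use only; the values and helper below are examples, and the real definitions live in arch/arm64/include/asm/{ptrace,irqflags}.h:

    #define GIC_PRIO_IRQON      0xe0        /* example PMR value: IRQs allowed */
    #define GIC_PRIO_IRQOFF     0x60        /* example PMR value: IRQs masked  */
    #define GIC_PRIO_PSR_I_SET  (1 << 4)    /* marker: PSR.I is doing the masking */

    static inline unsigned long flags_save_sketch(unsigned long pmr, int psr_i_set)
    {
        /*
         * Record in the saved flags whether PSR.I (rather than PMR) was
         * masking interrupts, so a later restore doesn't apply a PMR value
         * taken from the wrong masking mechanism. ORing the marker in can
         * only raise the PMR value, so it never masks more interrupts than
         * 'pmr' already did.
         */
        return psr_i_set ? (pmr | GIC_PRIO_PSR_I_SET) : pmr;
    }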
-
Julien Thierry authored
In the presence of any form of instrumentation, nmi_enter() should be done before calling any traceable code and any instrumentation code. Currently, nmi_enter() is done in handle_domain_nmi(), which is much too late as instrumentation code might get called before. Move the nmi_enter/exit() calls to the arch IRQ vector handler. On arm64, it is not possible to know if the IRQ vector handler was called because of an NMI before acknowledging the interrupt. However, it is possible to know whether normal interrupts could be taken in the interrupted context (i.e. if taking an NMI in that context could introduce a potential race condition). When interrupting a context with IRQs disabled, call nmi_enter() as soon as possible. In contexts with IRQs enabled, defer this to the interrupt controller, which is in a better position to know if an interrupt taken is an NMI.
Fixes: bc3c03cc ("arm64: Enable the support of pseudo-NMIs")
Cc: <stable@vger.kernel.org> # 5.1.x-
Cc: Will Deacon <will.deacon@arm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Jason Cooper <jason@lakedaemon.net>
Cc: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
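A sketch of the idea only (handler and helper names simplified): when the interrupted context had IRQs masked, the incoming interrupt can only be an NMI, so enter NMI context before any traceable code runs.

    static void el1_irq_entry_sketch(struct pt_regs *regs)
    {
        bool treat_as_nmi = !interrupts_enabled(regs);   /* IRQs were masked */

        if (treat_as_nmi)
            nmi_enter();

        handle_arch_irq(regs);   /* irqchip decides NMI vs IRQ when IRQs were enabled */

        if (treat_as_nmi)
            nmi_exit();
    }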
-
Julien Thierry authored
For el0_dbg and el0_error, DAIF bits get explicitly cleared before calling ct_user_exit. When context tracking is disabled, DAIF gets set (almost) immediately after. When context tracking is enabled, among the first things done is disabling IRQs. What is actually needed is:
- PSR.D = 0 so the system can be debugged (should already be the case)
- PSR.A = 0 so async errors can be handled during context tracking
Do not clear PSR.I in those two locations.
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 19 Jun, 2019 1 commit
-
-
Thomas Gleixner authored
Based on 1 normalized pattern(s): this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details you should have received a copy of the gnu general public license along with this program if not see http www gnu org licenses extracted by the scancode license scanner the SPDX license identifier GPL-2.0-only has been chosen to replace the boilerplate/reference in 503 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Alexios Zavras <alexios.zavras@intel.com>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Enrico Weigelt <info@metux.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190602204653.811534538@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 23 May, 2019 1 commit
-
-
Marc Zyngier authored
We already mitigate erratum 1188873 affecting Cortex-A76 and Neoverse-N1 r0p0 to r2p0. It turns out that revisions r0p0 to r3p1 of the same cores are affected by erratum 1418040, which has the same workaround as 1188873. Let's expand the range of affected revisions to match 1418040, and repaint all occurrences of 1188873 to 1418040. Whilst we're there, do a bit of reformatting in silicon-errata.txt and drop a now unnecessary dependency on ARM_ARCH_TIMER_OOL_WORKAROUND.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 30 Apr, 2019 1 commit
-
-
Marc Zyngier authored
We currently deal with ARM64_ERRATUM_1188873 by always trapping EL0 accesses for both instruction sets. Although nothing wrong comes out of that, people trying to squeeze the last drop of performance from buggy HW find this over the top. Oh well. Let's change the mitigation by flipping the counter enable bit on return to userspace. Non-broken HW gets an extra branch on the fast path, which is hopefully not the end of the world. The arch timer workaround is also removed.
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 26 Feb, 2019 1 commit
-
-
Julien Thierry authored
The assembly macro get_thread_info() actually returns a task_struct and is analogous to the current/get_current macro/function. While it could be argued that thread_info sits at the start of task_struct and the intention could have been to return a thread_info, instances of loads from/stores to the address obtained from get_thread_info() use offsets that are generated with offsetof(struct task_struct, [...]). Rename get_thread_info() to state it returns a task_struct.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 06 Feb, 2019 3 commits
-
-
Julien Thierry authored
When an NMI is raised while interrupts were disabled, the IRQ tracing already is in the correct state (i.e. hardirqs_off) and should be left as such when returning to the interrupted context. Check whether PMR was masking interrupts when the NMI was raised and skip IRQ tracing if necessary.
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Julien Thierry authored
Handling of an NMI should not set any TIF flags. For NMIs received from EL0 the current exit path is safe to use. However, an NMI received at EL1 could have interrupted some task context that has set the TIF_NEED_RESCHED flag. Preempting a task should not happen as a result of an NMI. Skip preemption after handling an NMI from EL1.
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Julien Thierry authored
In order to replace PSR.I interrupt disabling/enabling with ICC_PMR_EL1 interrupt masking, ICC_PMR_EL1 needs to be saved/restored when taking/returning from an exception. This mimics the way hardware saves and restores the PSR.I bit in spsr_el1 for exceptions and ERET. Add PMR to the registers to save in the pt_regs struct upon kernel entry, and restore it before ERET. Also, initialize it to a sane value when creating new tasks.
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 04 Feb, 2019 1 commit
-
-
Valentin Schneider authored
Since the enabling and disabling of IRQs within preempt_schedule_irq() is contained in a need_resched() loop, we don't need the outer arch code loop.
Reported-by: Julien Thierry <julien.thierry@arm.com>
Reported-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Julien Grall <julien.grall@arm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
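An abridged sketch of preempt_schedule_irq() from kernel/sched/core.c (details trimmed), showing the internal need_resched() loop that makes an outer arch-level retry loop redundant:

    asmlinkage __visible void __sched preempt_schedule_irq(void)
    {
        BUG_ON(preempt_count() || !irqs_disabled());

        do {
            preempt_disable();
            local_irq_enable();
            __schedule(true);                 /* the actual preemption point */
            local_irq_disable();
            sched_preempt_enable_no_resched();
        } while (need_resched());             /* loops until nothing is pending */
    }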
-
- 03 Jan, 2019 1 commit
-
-
Mark Rutland authored
In commit:

  3b714275 ("arm64: convert native/compat syscall entry to C")

... we moved the syscall invocation code from assembly to C, but left behind a number of register aliases which are now unused. Let's remove them before they confuse someone.
Cc: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 11 Dec, 2018 1 commit
-
-
Will Deacon authored
Commit 39624469 ("arm64: preempt: Provide our own implementation of asm/preempt.h") extended the preempt count field in struct thread_info to 64 bits, so that it consists of a 32-bit count plus a 32-bit flag indicating whether or not the current task needs rescheduling. Whilst the asm-offsets definition of TSK_TI_PREEMPT was updated to point to this new field, the assembly usage was left untouched meaning that a 32-bit load from TSK_TI_PREEMPT on a big-endian machine actually returns the reschedule flag instead of the count. Whilst we could fix this by pointing TSK_TI_PREEMPT at the count field, we're actually better off reworking the two assembly users so that they operate on the whole 64-bit value in favour of inspecting the thread flags separately in order to determine whether a reschedule is needed.
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reported-by: "kernelci.org bot" <bot@kernelci.org>
Tested-by: Kevin Hilman <khilman@baylibre.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
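A trimmed sketch of the layout in question, as carried in arm64's thread_info: a 32-bit load of the 64-bit field picks up 'count' on little-endian but 'need_resched' on big-endian, which is the bug being fixed.

    union {
        u64 preempt_count;   /* what the fixed assembly now loads, as one 64-bit value */
        struct {
    #ifdef CONFIG_CPU_BIG_ENDIAN
            u32 need_resched;   /* offset 0 on BE: a 32-bit load sees the flag  */
            u32 count;
    #else
            u32 count;          /* offset 0 on LE: a 32-bit load sees the count */
            u32 need_resched;
    #endif
        } preempt;
    };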
-
- 06 Dec, 2018 2 commits
-
-
Will Deacon authored
The comment about SYS_MEMBARRIER_SYNC_CORE relying on ERET being context-synchronizing is confusing and misplaced with kpti. Given that this is already documented under Documentation/ (see arch-support.txt for membarrier), remove the comment altogether.
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Will Deacon authored
Some CPUs can speculate past an ERET instruction and potentially perform speculative accesses to memory before processing the exception return. Since the register state is often controlled by a lower privilege level at the point of an ERET, this could potentially be used as part of a side-channel attack. This patch emits an SB sequence after each ERET so that speculation is held up on exception return.
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 01 Oct, 2018 2 commits
-
-
Marc Zyngier authored
It recently came to light that userspace can execute WFI, and that the arm64 kernel doesn't trap this event. This sounds rather benign, but the kernel should decide when it wants to wait for an interrupt, and not userspace. Let's trap WFI and immediately return after having skipped the instruction. This effectively makes WFI a rather expensive NOP.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Marc Zyngier authored
Instead of directly generating an UNDEF when trapping a CP15 access, let's add a new entry point to that effect (which only generates an UNDEF for now).
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 14 Sep, 2018 1 commit
-
-
Will Deacon authored
Rather than panic() when taking an undefined instruction exception from EL1, allow a hook to be registered in case we want to emulate the instruction, like we will for the SSBS PSTATE manipulation instructions.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
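A sketch of how such a hook is registered, modelled on the existing struct undef_hook API; the instruction mask/value fields and handler below are placeholders, not the SSBS patch itself:

    static int my_emulate_insn(struct pt_regs *regs, u32 instr)
    {
        /* emulate the instruction, then step past it */
        arm64_skip_faulting_instruction(regs, AARCH64_INSN_SIZE);
        return 0;
    }

    static struct undef_hook my_el1_hook = {
        .instr_mask  = 0xffffffff,        /* placeholder encoding match */
        .instr_val   = 0x00000000,
        .pstate_mask = PSR_MODE_MASK,
        .pstate_val  = PSR_MODE_EL1h,     /* only match exceptions taken from EL1 */
        .fn          = my_emulate_insn,
    };

    /* register_undef_hook(&my_el1_hook); */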
-
- 26 Jul, 2018 1 commit
-
-
Laura Abbott authored
This adds support for the STACKLEAK gcc plugin to arm64 by implementing stackleak_check_alloca(), based heavily on the x86 version, and adding the two helpers used by the stackleak common code: current_top_of_stack() and on_thread_stack(). The stack erasure calls are made at syscall returns. Additionally, this disables the plugin in hypervisor and EFI stub code, which are out of scope for the protection.
Acked-by: Alexander Popov <alex.popov@linux.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 12 Jul, 2018 7 commits
-
-
Mark Rutland authored
We can zero GPRs x0 - x29 upon entry from EL0 to make it harder for userspace to control values consumed by speculative gadgets. We don't blat x30, since this is stashed much later, and we'll blat it before invoking C code.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Mark Rutland authored
Now that all of the syscall logic works on the saved pt_regs, apply_ssbd can safely corrupt x0-x3 in the entry paths, and we no longer need to restore them. So let's remove the logic doing so. With that logic gone, we can fold the branch target into the macro, so that callers need not deal with this. GAS provides \@, which gives a unique value per macro invocation, which we can use to create a unique label.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Mark Rutland authored
Now that syscalls are invoked with pt_regs, we no longer need to ensure that the argument registers are live in the entry assembly, and it's fine to not restore them after context_tracking_user_exit() has corrupted them.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Mark Rutland authored
Now that the syscall invocation logic is in C, we can migrate the rest of the syscall entry logic over, so that the entry assembly needn't look at the register values at all. The SVE reset across syscall logic now unconditionally clears TIF_SVE, but sve_user_disable() will only write back to CPACR_EL1 when SVE is actually enabled.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Mark Rutland authored
Currently syscall tracing is a tricky assembly state machine, which can be rather difficult to follow, and even harder to modify. Before we start fiddling with it for pt_regs syscalls, let's convert it to C. This is not intended to have any functional change.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Mark Rutland authored
As a first step towards invoking syscalls with a pt_regs argument, convert the raw syscall invocation logic to C. We end up with a bit more register shuffling, but the unified invocation logic means we can unify the tracing paths, too. Previously, assembly had to open-code calls to ni_sys() when the system call number was out-of-bounds for the relevant syscall table. This case is now handled by invoke_syscall(), and the assembly no longer needs to handle this case explicitly. This allows the tracing paths to be simplified and unified, as we no longer need the __ni_sys_trace path and the __sys_trace_return label. This only converts the invocation of the syscall. The rest of the syscall triage and tracing is left in assembly for now, and will be converted in subsequent patches.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
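A simplified sketch of the shape of the C invocation logic; the helper names here are recalled and should be treated as illustrative rather than a quote of the patch:

    static void invoke_syscall(struct pt_regs *regs, unsigned int scno,
                               unsigned int sc_nr,
                               const syscall_fn_t syscall_table[])
    {
        long ret;

        if (scno < sc_nr) {
            syscall_fn_t fn;

            fn = syscall_table[array_index_nospec(scno, sc_nr)];
            ret = fn(regs->regs[0], regs->regs[1], regs->regs[2],
                     regs->regs[3], regs->regs[4], regs->regs[5]);
        } else {
            /* out-of-bounds syscall numbers, previously open-coded in asm */
            ret = do_ni_syscall(regs);
        }

        regs->regs[0] = ret;
    }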
-
Mark Rutland authored
The arm64 sigreturn* syscall handlers are non-standard. Rather than taking a number of user parameters in registers as per the AAPCS, they expect the pt_regs as their sole argument. To make this work, we override the syscall definitions to invoke wrappers written in assembly, which mov the SP into x0, and branch to their respective C functions. On other architectures (such as x86), the sigreturn* functions take no argument and instead use current_pt_regs() to acquire the user registers. This requires less boilerplate code, and allows for other features such as interposing C code in this path. This patch takes the same approach for arm64.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tentatively-reviewed-by: Dave Martin <dave.martin@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
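A sketch of the resulting shape of such a handler, with the body elided into a hypothetical helper; only the current_pt_regs() pattern is the point being illustrated:

    SYSCALL_DEFINE0(rt_sigreturn)
    {
        struct pt_regs *regs = current_pt_regs();   /* no assembly wrapper needed */

        return do_rt_sigreturn_sketch(regs);        /* hypothetical body, elided */
    }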
-
- 11 Jul, 2018 1 commit
-
-
Will Deacon authored
Implement calls to rseq_signal_deliver, rseq_handle_notify_resume and rseq_syscall so that we can select HAVE_RSEQ on arm64.
Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-