- 24 Jun, 2021 10 commits
-
Will Deacon authored
Refactoring of our instruction decoding routines and addition of some missing encodings.

* for-next/insn:
  arm64: insn: avoid circular include dependency
  arm64: insn: move AARCH64_INSN_SIZE into <asm/insn.h>
  arm64: insn: decouple patching from insn code
  arm64: insn: Add load/store decoding helpers
  arm64: insn: Add some opcodes to instruction decoder
  arm64: insn: Add barrier encodings
  arm64: insn: Add SVE instruction class
  arm64: Move instruction encoder/decoder under lib/
  arm64: Move aarch32 condition check functions
  arm64: Move patching utilities out of instruction encoding/decoding
-
Will Deacon authored
The never-ending entry.S refactoring continues, putting us in a much better place wrt compiler instrumentation whilst moving more of the code into C.

* for-next/entry:
  arm64: idle: don't instrument idle code with KCOV
  arm64: entry: don't instrument entry code with KCOV
  arm64: entry: make NMI entry/exit functions static
  arm64: entry: split SDEI entry
  arm64: entry: split bad stack entry
  arm64: entry: fold el1_inv() into el1h_64_sync_handler()
  arm64: entry: handle all vectors with C
  arm64: entry: template the entry asm functions
  arm64: entry: improve bad_mode()
  arm64: entry: move bad_mode() to entry-common.c
  arm64: entry: consolidate EL1 exception returns
  arm64: entry: organise entry vectors consistently
  arm64: entry: organise entry handlers consistently
  arm64: entry: convert IRQ+FIQ handlers to C
  arm64: entry: add a call_on_irq_stack helper
  arm64: entry: move NMI preempt logic to C
  arm64: entry: move arm64_preempt_schedule_irq to entry-common.c
  arm64: entry: convert SError handlers to C
  arm64: entry: unmask IRQ+FIQ after EL0 handling
  arm64: remove redundant local_daif_mask() in bad_mode()
-
Will Deacon authored
Update booting requirements for the FEAT_HCX feature, added to v8.7 of the architecture.

* for-next/docs:
  arm64: Document requirement for access to FEAT_HCX
-
Will Deacon authored
Fix resume from idle when pNMI is being used.

* for-next/cpuidle:
  arm64: suspend: Use cpuidle context helpers in cpu_suspend()
  PSCI: Use cpuidle context helpers in psci_cpu_suspend_enter()
  arm64: Convert cpu_do_idle() to using cpuidle context helpers
  arm64: Add cpuidle context save/restore helpers
-
Will Deacon authored
Additional CPU sanity checks for MTE and preparatory changes for systems where not all of the CPUs support 32-bit EL0.

* for-next/cpufeature:
  arm64: Restrict undef hook for cpufeature registers
  arm64: Kill 32-bit applications scheduled on 64-bit-only CPUs
  KVM: arm64: Kill 32-bit vCPUs on systems with mismatched EL0 support
  arm64: Allow mismatched 32-bit EL0 support
  arm64: cpuinfo: Split AArch32 registers out into a separate struct
  arm64: Check if GMID_EL1.BS is the same on all CPUs
  arm64: Change the cpuinfo_arm64 member type for some sysregs to u64
-
Will Deacon authored
Update our kernel string routines to the latest Cortex Strings implementation.

* for-next/cortex-strings:
  arm64: update string routine copyrights and URLs
  arm64: Rewrite __arch_clear_user()
  arm64: Better optimised memchr()
  arm64: Import latest memcpy()/memmove() implementation
  arm64: Add assembly annotations for weak-PI-alias madness
  arm64: Import latest version of Cortex Strings' strncmp
  arm64: Import updated version of Cortex Strings' strlen
  arm64: Import latest version of Cortex Strings' strcmp
  arm64: Import latest version of Cortex Strings' memcmp
-
Will Deacon authored
Big cleanup of our cache maintenance routines, which were confusingly named and inconsistent in their implementations.

* for-next/caches:
  arm64: Rename arm64-internal cache maintenance functions
  arm64: Fix cache maintenance function comments
  arm64: sync_icache_aliases to take end parameter instead of size
  arm64: __clean_dcache_area_pou to take end parameter instead of size
  arm64: __clean_dcache_area_pop to take end parameter instead of size
  arm64: __clean_dcache_area_poc to take end parameter instead of size
  arm64: __flush_dcache_area to take end parameter instead of size
  arm64: dcache_by_line_op to take end parameter instead of size
  arm64: __inval_dcache_area to take end parameter instead of size
  arm64: Fix comments to refer to correct function __flush_icache_range
  arm64: Move documentation of dcache_by_line_op
  arm64: assembler: remove user_alt
  arm64: Downgrade flush_icache_range to invalidate
  arm64: Do not enable uaccess for invalidate_icache_range
  arm64: Do not enable uaccess for flush_icache_range
  arm64: Apply errata to swsusp_arch_suspend_exit
  arm64: assembler: add conditional cache fixups
  arm64: assembler: replace `kaddr` with `addr`
-
Will Deacon authored
Tweak linker flags so that GDB can understand vmlinux when using RELR relocations.

* for-next/build:
  Makefile: fix GDB warning with CONFIG_RELR
-
Will Deacon authored
Boot path cleanups to enable early initialisation of per-cpu operations needed by KCSAN.

* for-next/boot:
  arm64: scs: Drop unused 'tmp' argument to scs_{load, save} asm macros
  arm64: smp: initialize cpu offset earlier
  arm64: smp: unify task and sp setup
  arm64: smp: remove stack from secondary_data
  arm64: smp: remove pointless secondary_data maintenance
  arm64: assembler: add set_this_cpu_offset
-
Will Deacon authored
Relax frame record alignment requirements to facilitate 8-byte alignment with KASAN and Clang.

* for-next/stacktrace:
  arm64: stacktrace: Relax frame record alignment requirement to 8 bytes
  arm64: Change the on_*stack functions to take a size argument
  arm64: Implement stack trace termination record
-
- 22 Jun, 2021 1 commit
-
Raphael Gault authored
This commit modifies the mask of the mrs_hook declared in arch/arm64/kernel/cpufeature.c, which emulates only feature-register accesses. This is necessary because the hook's mask was too broad and therefore matched any mrs instruction, even those unrelated to the emulated registers, which made the PMU emulation inefficient.

Signed-off-by: Raphael Gault <raphael.gault@arm.com>
Signed-off-by: Rob Herring <robh@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210517180256.2881891-1-robh@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
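For context, arm64's undef-hook mechanism matches trapped instructions with a mask/value pair, so narrowing the mask is what keeps unrelated MRS instructions off the emulation path. A minimal sketch of that idea follows; the constants and omitted fields are illustrative, not the exact upstream diff:

  /*
   * Sketch only: narrow instr_mask so the hook matches just the MRS
   * encodings for the emulated ID registers, rather than every MRS
   * (e.g. PMU register reads).  Constants are illustrative.
   */
  static struct undef_hook mrs_hook = {
      .instr_mask = 0xffff0000,   /* previously a broader 0xfff00000 */
      .instr_val  = 0xd5380000,   /* MRS reading op0=3, op1=0, CRn=0 */
      /* pstate matching fields omitted for brevity */
      .fn         = emulate_mrs,
  };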
-
- 21 Jun, 2021 1 commit
-
Mark Rutland authored
Nathan reports that when building with CONFIG_LTO_CLANG_THIN=y, the build fails due to BUILD_BUG_ON() not being defined before its use in <asm/insn.h>. The problem is that with LTO, we patch READ_ONCE(), and <asm/rwonce.h> includes <asm/insn.h>, creating a circular include chain:

  <linux/build_bug.h>
    <linux/compiler.h>
      <asm/rwonce.h>
        <asm/alternative-macros.h>
          <asm/insn.h>
            <linux/build_bug.h>

... and so when <asm/insn.h> includes <linux/build_bug.h>, none of the BUILD_BUG* definitions have happened yet. To avoid this, let's move AARCH64_INSN_SIZE into a header without any dependencies, such that it can always be safely included. At the same time, avoid including <asm/alternative.h> in <asm/insn.h>, which should no longer be necessary (and doesn't make sense when insn.h is consumed by userspace).

Reported-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210621080830.GA37068@C02TD0UTHF1T.local
Fixes: 3e00e39d ("arm64: insn: move AARCH64_INSN_SIZE into <asm/insn.h>")
Signed-off-by: Will Deacon <will@kernel.org>
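A minimal sketch of the shape of the fix (the new header's name below is an assumption for illustration): AARCH64_INSN_SIZE moves into a header with no includes of its own, so anything can pull it in without re-creating the cycle.

  /* arch/arm64/include/asm/insn-def.h (assumed name), sketch only: */
  #ifndef __ASM_INSN_DEF_H
  #define __ASM_INSN_DEF_H

  /* A64 instructions are always 32 bits wide. */
  #define AARCH64_INSN_SIZE   4

  #endif /* __ASM_INSN_DEF_H */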
-
- 17 Jun, 2021 4 commits
-
Marc Zyngier authored
Use cpuidle context helpers to switch to using DAIF.IF instead of PMR to mask interrupts, ensuring that we suspend with interrupts being able to reach the CPU interface.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Sudeep Holla <sudeep.holla@arm.com>
Link: https://lore.kernel.org/r/20210615111227.2454465-5-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
-
Marc Zyngier authored
The PSCI CPU suspend code isn't aware of the PMR vs DAIF game, resulting in a system that locks up if entering CPU suspend with GICv3 pNMI enabled. To save the day, teach the suspend code about our new cpuidle context helpers, which will do everything that's required, just like the usual WFI cpuidle code. This fixes my Altra system, which would otherwise lock up at boot time when booted with irqchip.gicv3_pseudo_nmi=1.

Tested-by: Valentin Schneider <valentin.schneider@arm.com>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Sudeep Holla <sudeep.holla@arm.com>
Link: https://lore.kernel.org/r/20210615111227.2454465-4-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
-
Marc Zyngier authored
Now that we have helpers that are aware of the pseudo-NMI feature, introduce them to cpu_do_idle(). This allows for some nice cleanup. No functional change intended.

Tested-by: Valentin Schneider <valentin.schneider@arm.com>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210615111227.2454465-3-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
-
Marc Zyngier authored
As we need to start doing some additional work on all idle paths, let's introduce a set of macros that will perform the work related to GICv3 pseudo-NMI idle entry/exit. Stubs are introduced for 32-bit ARM for compatibility. As these helpers are currently unused, there is no functional change.

Tested-by: Valentin Schneider <valentin.schneider@arm.com>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210615111227.2454465-2-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
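As a rough illustration of how the later patches in this branch use such helpers (the helper and struct names below are assumptions inferred from the patch titles, not verbatim upstream code), an idle path brackets the wait with a save/restore of the interrupt-masking context so that pseudo-NMIs can still reach the CPU interface:

  /* Sketch, assuming helpers with names like these exist: */
  void noinstr cpu_do_idle(void)
  {
      struct arm_cpuidle_irq_context context;

      /* switch from PMR to DAIF masking when pNMI is in use */
      arm_cpuidle_save_irq_context(&context);

      dsb(sy);    /* drain the write buffer before waiting */
      wfi();

      arm_cpuidle_restore_irq_context(&context);
  }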
-
- 11 Jun, 2021 6 commits
-
Will Deacon authored
Scheduling a 32-bit application on a 64-bit-only CPU is a bad idea. Ensure that 32-bit applications always take the slow path when returning to userspace on a system with mismatched support at EL0, so that we can avoid trying to run on a 64-bit-only CPU and force a SIGKILL instead.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210608180313.11502-5-will@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
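The gist of the check is to compare the current CPU against the set of 32-bit-capable cores before letting a compat task back out to userspace. Below is a hedged sketch of that idea; the function name and its exact placement in the slow path are assumptions for illustration:

  /* Sketch: called from the return-to-userspace slow path (assumed placement). */
  static void check_32bit_el0(void)
  {
      if (!is_compat_task())
          return;

      if (!cpumask_test_cpu(smp_processor_id(), system_32bit_el0_cpumask())) {
          pr_crit("CPU%u lacks 32-bit EL0 support, killing %s (%d)\n",
                  smp_processor_id(), current->comm, task_pid_nr(current));
          force_sig(SIGKILL);
      }
  }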
-
Will Deacon authored
If a vCPU is caught running 32-bit code on a system with mismatched support at EL0, then we should kill it.

Acked-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210608180313.11502-4-will@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
-
Will Deacon authored
When confronted with a mixture of CPUs, some of which support 32-bit applications and others which don't, we quite sensibly treat the system as 64-bit only for userspace and prevent execve() of 32-bit binaries. Unfortunately, some crazy folks have decided to build systems like this with the intention of running 32-bit applications, so relax our sanitisation logic to continue to advertise 32-bit support to userspace on these systems and track the real 32-bit capable cores in a cpumask instead. For now, the default behaviour remains but will be tied to a command-line option in a later patch.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210608180313.11502-3-will@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
-
Will Deacon authored
In preparation for late initialisation of the "sanitised" AArch32 register state, move the AArch32 registers out of 'struct cpuinfo' and into their own struct definition.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210608180313.11502-2-will@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
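The resulting split looks roughly like the sketch below. Only a few representative fields are shown, and the struct and member names are assumptions for illustration; the real structure carries the full set of sanitised AArch32 ID registers:

  /* Sketch of the split (names assumed): */
  struct cpuinfo_32bit {
      u32 reg_id_dfr0;
      u32 reg_id_isar0;
      u32 reg_id_mmfr0;
      u32 reg_id_pfr0;
      /* ... remaining sanitised AArch32 ID registers ... */
  };

  struct cpuinfo_arm64 {
      /* ... AArch64 ID registers, cache info, etc. ... */
      struct cpuinfo_32bit aarch32;
  };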
-
Mark Rutland authored
For historical reasons, we define AARCH64_INSN_SIZE in <asm/alternative-macros.h>, but it would make more sense to do so in <asm/insn.h>. Let's move it into <asm/insn.h>, and add the necessary include directives for this.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210609102301.17332-3-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Mark Rutland authored
Currently, <asm/insn.h> includes <asm/patching.h>. We intend that <asm/insn.h> will be usable from userspace, so it doesn't make sense to include headers for kernel-only features such as the patching routines, and we'd intended to restrict <asm/insn.h> to instruction encoding details. Let's decouple the patching code from <asm/insn.h>, and explicitly include <asm/patching.h> where it is needed. Since <asm/patching.h> isn't included from assembly, we can drop the __ASSEMBLY__ guards. At the same time, sort the kprobes includes so that it's easier to see what is and isn't included.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210609102301.17332-2-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-
- 08 Jun, 2021 1 commit
-
Nick Desaulniers authored
GDB produces the following warning when loading a kernel built with CONFIG_RELR:

  BFD: /android0/linux-next/vmlinux: unknown type [0x13] section `.relr.dyn'

It can also prevent debugging of symbols that use such relocations. Peter suggests: "[That flag] means that lld will use dynamic tags and section type numbers in the OS-specific range rather than the generic range. The kernel itself doesn't care about these numbers; it determines the location of the RELR section using symbols defined by a linker script."

Link: https://github.com/ClangBuiltLinux/linux/issues/1057
Suggested-by: Peter Collingbourne <pcc@google.com>
Reviewed-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
Link: https://lore.kernel.org/r/20210522012626.2811297-1-ndesaulniers@google.com
Signed-off-by: Will Deacon <will@kernel.org>
-
- 07 Jun, 2021 17 commits
-
Mark Rutland authored
The low-level idle code in arch_cpu_idle() and its callees runs at a time when portions of the kernel environment aren't available. For example, RCU may not be watching, and lockdep state may be out-of-sync with the hardware. Due to this, it is not sound to instrument this code. We generally avoid instrumentation by marking the entry functions as `noinstr`, but currently this doesn't inhibit KCOV instrumentation. Prevent this by factoring these functions into a new idle.c so that we can disable KCOV for the entire compilation unit, as is done for the core idle code in kernel/sched/idle.c. We'd like to keep instrumentation of the rest of process.c, and of the existing code in cpuidle.c, so a new compilation unit is preferable. The arch_cpu_idle_dead() function in process.c is a CPU hotplug function that is safe to instrument, so it is left as-is in process.c.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210607094624.34689-21-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Mark Rutland authored
The code in entry-common.c runs at exception entry and return boundaries, where portions of the kernel environment aren't available. For example, RCU may not be watching, and lockdep state may be out-of-sync with the hardware. Due to this, it is not sound to instrument this code. We generally avoid instrumentation by marking the entry functions as `noinstr`, but currently this doesn't inhibit KCOV instrumentation. Prevent this by disabling KCOV for the entire compilation unit.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210607094624.34689-20-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Mark Rutland authored
Now that we only call arm64_enter_nmi() and arm64_exit_nmi() from within entry-common.c, let's make these static to ensure this remains the case.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210607094624.34689-19-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Mark Rutland authored
We'd like to keep all the entry sequencing in entry-common.c, as this will allow us to ensure it is consistent, and free from any unsound instrumentation. Currently __sdei_handler() performs the NMI entry/exit sequences in sdei.c. Let's split the low-level entry sequence from the event handling, moving the former to entry-common.c and keeping the latter in sdei.c. The event handling function is renamed to do_sdei_event(), matching the do_${FOO}() pattern used for other exception handlers.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210607094624.34689-18-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
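The resulting division of labour looks roughly like the sketch below (the signatures are assumptions, not the verbatim upstream code): entry-common.c owns the NMI bracketing, while sdei.c keeps the event handling in do_sdei_event():

  /* entry-common.c (sketch): low-level entry/exit sequencing only. */
  asmlinkage noinstr unsigned long
  __sdei_handler(struct pt_regs *regs, struct sdei_registered_event *arg)
  {
      unsigned long ret;

      arm64_enter_nmi(regs);
      ret = do_sdei_event(regs, arg);   /* event handling stays in sdei.c */
      arm64_exit_nmi(regs);

      return ret;
  }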
-
Mark Rutland authored
We'd like to keep all the entry sequencing in entry-common.c, as this will allow us to ensure it is consistent, and free from any unsound instrumentation. Currently handle_bad_stack() performs the NMI entry sequence in traps.c. Let's split the low-level entry sequence from the reporting, moving the former to entry-common.c and keeping the latter in traps.c. To make it clear that the reporting function never returns, it is renamed to panic_bad_stack().

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210607094624.34689-17-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Mark Rutland authored
An unexpected synchronous exception from EL1h could happen at any time, and for robustness we should treat this as an NMI, making minimal assumptions about the context the exception was taken from. Currently el1_inv() assumes we can use enter_from_kernel_mode(), and also assumes that we should inherit the original DAIF value. Neither of these is desirable when we take an unexpected exception. Further, after el1_inv() calls __panic_unhandled(), the remainder of the function is unreachable, and therefore superfluous. Let's address this and simplify things by having el1h_64_sync_handler() call __panic_unhandled() directly, without any of the redundant logic.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Reported-by: Joey Gouly <joey.gouly@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210607094624.34689-16-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Mark Rutland authored
We have 16 architectural exception vectors, and depending on kernel configuration we handle 8 or 12 of these with C code, with the remaining 8 or 4 handled as special cases in the entry assembly. It would be nicer if the entry assembly were uniform for all exceptions, and we deferred any specific handling of the exceptions to C code. This way the entry assembly can be more easily templated without ifdeffery or special cases, and it's easier to modify the handling of these cases in future (e.g. to dump additional registers or other context).

This patch reworks the entry code so that we always have a C handler for every architectural exception vector, with the entry assembly being completely uniform. We now have to handle exceptions from EL1t and EL1h, and also have to handle exceptions from AArch32 even when the kernel is built without CONFIG_COMPAT. To make this clear and to simplify templating, we rename the top-level exception handlers with a consistent naming scheme:

  asm: <el+sp>_<regsize>_<type>
  c:   <el+sp>_<regsize>_<type>_handler

... where:

  <el+sp> is `el1t`, `el1h`, or `el0t`
  <regsize> is `64` or `32`
  <type> is `sync`, `irq`, `fiq`, or `error`

... e.g.

  asm: el1h_64_sync
  c:   el1h_64_sync_handler

... with lower-level handlers simply using "el1" and "compat" as today.

For unexpected exceptions, this information is passed to __panic_unhandled(), so it can report the specific vector an unexpected exception was taken from, e.g.

  | Unhandled 64-bit el1t sync exception

For vectors we never expect to enter legitimately, the C code is generated using a macro to avoid code duplication. The exceptions are handled via __panic_unhandled(), replacing bad_mode() (which is removed). The `kernel_ventry` and `entry_handler` assembly macros are updated to handle the new naming scheme. In theory it should be possible to generate the entry functions at the same time as the vectors using a single table, but this will require reworking the linker script to split the two into separate sections, so for now we have separate tables.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210607094624.34689-15-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
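The generated handlers for never-legitimate vectors end up looking roughly like the sketch below; this is an illustration of the macro approach described above, not the exact upstream text:

  /* Sketch: stamp out a panicking C handler per unexpected vector. */
  #define UNHANDLED(el, regsize, vector)                                        \
  asmlinkage void noinstr el##_##regsize##_##vector##_handler(struct pt_regs *regs) \
  {                                                                             \
      const char *desc = #regsize "-bit " #el " " #vector;                      \
      __panic_unhandled(regs, desc, read_sysreg(esr_el1));                      \
  }

  UNHANDLED(el1t, 64, sync)
  UNHANDLED(el1t, 64, irq)
  /* ... and so on for each vector we never expect to take ... */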
-
Mark Rutland authored
Now that the majority of the exception triage logic has been converted to C, the entry assembly functions all have a uniform structure. Let's generate them all with an assembly macro to reduce the amount of code and to ensure they all remain in sync if we make changes in future. There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210607094624.34689-14-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Mark Rutland authored
Our use of bad_mode() has a few rough edges:

 * AArch64 doesn't use the term "mode", and refers to "Execution states", "Exception levels", and "Selected stack pointer".

 * We log the exception type (SYNC/IRQ/FIQ/SError), but not the actual "mode" (though this can be decoded from the SPSR value).

 * We use bad_mode() as a second-level handler for unexpected synchronous exceptions, where the "mode" is legitimate, but the specific exception is not.

 * We dump the ESR value, but call this "code", and so it's not clear to all readers that this is the ESR.

... and all of this can be somewhat opaque to those who aren't extremely familiar with the code. Let's make this a bit clearer by having bad_mode() log "Unhandled ${TYPE} exception" rather than "Bad mode in ${TYPE} handler", using "ESR" rather than "code", and having the final panic() log "Unhandled exception" rather than "Bad mode". In future we'd like to log the specific architectural vector rather than just the type of exception, so we also split the core of bad_mode() out into a helper called __panic_unhandled(), which takes the vector as a string argument.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210607094624.34689-13-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Mark Rutland authored
In subsequent patches we'll rework the way bad_mode() is called by exception entry code. In preparation for this, let's move bad_mode() itself into entry-common.c. Let's also mark it as noinstr (e.g. to prevent it being kprobed), and make the `handler` array a local variable, as it is only used by bad_mode() and will be removed entirely in a subsequent patch. There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210607094624.34689-12-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Mark Rutland authored
Following the example of ret_to_user, let's consolidate all the EL1 return paths with a ret_to_kernel helper, rather than each entry point having its own copy of the return code. There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210607094624.34689-11-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Mark Rutland authored
In subsequent patches we'll rename the entry handlers based on their original EL, register width, and exception class. To do so, we need to make all three arguments to the `kernel_ventry` macro mandatory, and to distinguish EL1h from EL1t. In preparation for this, let's make the current set of arguments mandatory, and move the `regsize` column before the branch label suffix, making the vectors easier to read column-wise. There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210607094624.34689-10-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Mark Rutland authored
In entry.S we have two comments which distinguish EL0 and EL1 exception handlers, but the code isn't actually laid out to match, and there are a few other inconsistencies that would be good to clear up. This patch organizes the entry handlers consistently:

 * The handlers are laid out in order of the vectors, to make them easier to navigate.

 * The inconsistently-applied alignment is removed.

 * The handlers are consistently marked with SYM_CODE_START_LOCAL() rather than SYM_CODE_START_LOCAL_NOALIGN(), giving them the same default alignment as other assembly code snippets.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210607094624.34689-9-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Mark Rutland authored
For various reasons we'd like to convert the bulk of arm64's exception triage logic to C. As a step towards that, this patch converts the EL1 and EL0 IRQ+FIQ triage logic to C. Separate C functions are added for the native and compat cases so that in subsequent patches we can handle native/compat differences in C. Since the triage functions can now call arm64_apply_bp_hardening() directly, the do_el0_irq_bp_hardening() wrapper function is removed. Since the user_exit_irqoff macro is now unused, it is removed. The user_enter_irqoff macro is still used by the ret_to_user code, and cannot be removed at this time.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210607094624.34689-8-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Mark Rutland authored
When handling IRQ/FIQ exceptions the entry assembly may transition from a task's stack to a CPU's IRQ stack (and IRQ shadow call stack). In subsequent patches we want to migrate the IRQ/FIQ triage logic to C, and as we want to perform some actions on the task stack (e.g. EL1 preemption), we need to switch stacks within the C handler. So that we can do so, this patch adds a helper to call a function on a CPU's IRQ stack (and shadow stack as appropriate). Subsequent patches will make use of the new helper function.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210607094624.34689-7-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
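A sketch of how C-level triage code can use such a helper (the wrapper name below is an assumption, and the upstream signature may differ): only switch stacks when still on the task stack; if we're already on the IRQ stack, for example a pseudo-NMI taken from IRQ context, call the handler directly:

  /* Sketch: dispatch an IRQ/FIQ handler on the CPU's IRQ stack. */
  static void do_interrupt_handler(struct pt_regs *regs,
                                   void (*handler)(struct pt_regs *))
  {
      if (on_thread_stack())
          call_on_irq_stack(regs, handler);
      else
          handler(regs);  /* already off the task stack, e.g. nested pNMI */
  }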
-
Mark Rutland authored
Currently portions of our preempt logic are written in C while other parts are written in assembly. Let's clean this up a little bit by moving the NMI preempt checks to C. For now, the preempt count (and need_resched) checking is left in assembly, and will be converted with the body of the IRQ handler in subsequent patches. Other than the increased lockdep coverage there should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210607094624.34689-6-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Mark Rutland authored
Subsequent patches will pull more of the IRQ entry handling into C. To keep this in one place, let's move arm64_preempt_schedule_irq() into entry-common.c along with the other entry management functions. We no longer need to include <linux/lockdep.h> in process.c, so the include directive is removed. There should be no functional change as a result of this patch.

Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210607094624.34689-5-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-