- 24 Jun, 2021 7 commits
-
Will Deacon authored
Fix resume from idle when pNMI is being used.

* for-next/cpuidle:
  arm64: suspend: Use cpuidle context helpers in cpu_suspend()
  PSCI: Use cpuidle context helpers in psci_cpu_suspend_enter()
  arm64: Convert cpu_do_idle() to using cpuidle context helpers
  arm64: Add cpuidle context save/restore helpers
-
Will Deacon authored
Additional CPU sanity checks for MTE and preparatory changes for systems where not all of the CPUs support 32-bit EL0.

* for-next/cpufeature:
  arm64: Restrict undef hook for cpufeature registers
  arm64: Kill 32-bit applications scheduled on 64-bit-only CPUs
  KVM: arm64: Kill 32-bit vCPUs on systems with mismatched EL0 support
  arm64: Allow mismatched 32-bit EL0 support
  arm64: cpuinfo: Split AArch32 registers out into a separate struct
  arm64: Check if GMID_EL1.BS is the same on all CPUs
  arm64: Change the cpuinfo_arm64 member type for some sysregs to u64
-
Will Deacon authored
Update our kernel string routines to the latest Cortex Strings implementation.

* for-next/cortex-strings:
  arm64: update string routine copyrights and URLs
  arm64: Rewrite __arch_clear_user()
  arm64: Better optimised memchr()
  arm64: Import latest memcpy()/memmove() implementation
  arm64: Add assembly annotations for weak-PI-alias madness
  arm64: Import latest version of Cortex Strings' strncmp
  arm64: Import updated version of Cortex Strings' strlen
  arm64: Import latest version of Cortex Strings' strcmp
  arm64: Import latest version of Cortex Strings' memcmp
-
Will Deacon authored
Big cleanup of our cache maintenance routines, which were confusingly named and inconsistent in their implementations.

* for-next/caches:
  arm64: Rename arm64-internal cache maintenance functions
  arm64: Fix cache maintenance function comments
  arm64: sync_icache_aliases to take end parameter instead of size
  arm64: __clean_dcache_area_pou to take end parameter instead of size
  arm64: __clean_dcache_area_pop to take end parameter instead of size
  arm64: __clean_dcache_area_poc to take end parameter instead of size
  arm64: __flush_dcache_area to take end parameter instead of size
  arm64: dcache_by_line_op to take end parameter instead of size
  arm64: __inval_dcache_area to take end parameter instead of size
  arm64: Fix comments to refer to correct function __flush_icache_range
  arm64: Move documentation of dcache_by_line_op
  arm64: assembler: remove user_alt
  arm64: Downgrade flush_icache_range to invalidate
  arm64: Do not enable uaccess for invalidate_icache_range
  arm64: Do not enable uaccess for flush_icache_range
  arm64: Apply errata to swsusp_arch_suspend_exit
  arm64: assembler: add conditional cache fixups
  arm64: assembler: replace `kaddr` with `addr`
-
Will Deacon authored
Tweak linker flags so that GDB can understand vmlinux when using RELR relocations.

* for-next/build:
  Makefile: fix GDB warning with CONFIG_RELR
-
Will Deacon authored
Boot path cleanups to enable early initialisation of per-cpu operations needed by KCSAN.

* for-next/boot:
  arm64: scs: Drop unused 'tmp' argument to scs_{load, save} asm macros
  arm64: smp: initialize cpu offset earlier
  arm64: smp: unify task and sp setup
  arm64: smp: remove stack from secondary_data
  arm64: smp: remove pointless secondary_data maintenance
  arm64: assembler: add set_this_cpu_offset
-
Will Deacon authored
Relax frame record alignment requirements to facilitate 8-byte alignment with KASAN and Clang.

* for-next/stacktrace:
  arm64: stacktrace: Relax frame record alignment requirement to 8 bytes
  arm64: Change the on_*stack functions to take a size argument
  arm64: Implement stack trace termination record
-
- 22 Jun, 2021 1 commit
-
Raphael Gault authored
This commit modifies the mask of the mrs_hook declared in arch/arm64/kernel/cpufeature.c, which emulates only feature register accesses. This is necessary because the hook's mask was too broad and matched any mrs instruction, even those unrelated to the emulated registers, which made the PMU emulation inefficient.
Signed-off-by: Raphael Gault <raphael.gault@arm.com>
Signed-off-by: Rob Herring <robh@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210517180256.2881891-1-robh@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
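As a hedged illustration of what such a narrowing looks like (struct undef_hook and its fields are real kernel definitions; the constants are a plausible reconstruction rather than a quote of the patch), the hook's mask/value pair is tightened so it only matches MRS reads of the ID register space (op0==3, op1==0, CRn==0) instead of every MRS instruction:

    /* plausible reconstruction of the narrowed hook */
    static struct undef_hook mrs_hook = {
        .instr_mask = 0xffff0000,   /* was 0xfff00000: matched any MRS */
        .instr_val  = 0xd5380000,   /* MRS with op0==3, op1==0, CRn==0 */
        /* pstate match fields unchanged: hook still fires for EL0 only */
        .fn         = emulate_mrs,
    };

With the wider mask, every trapped MRS took the emulation path before being rejected; with the narrower one, unrelated MRS undefs never enter the hook at all.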
-
- 17 Jun, 2021 4 commits
-
Marc Zyngier authored
Use cpuidle context helpers to switch to using DAIF.IF instead of PMR to mask interrupts, ensuring that we suspend with interrupts being able to reach the CPU interface.
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Sudeep Holla <sudeep.holla@arm.com>
Link: https://lore.kernel.org/r/20210615111227.2454465-5-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
-
Marc Zyngier authored
The PSCI CPU suspend code isn't aware of the PMR vs DAIF game, resulting in a system that locks up if entering CPU suspend with GICv3 pNMI enabled. To save the day, teach the suspend code about our new cpuidle context helpers, which will do everything that's required, just like the usual WFI cpuidle code. This fixes my Altra system, which would otherwise lock up at boot time when booted with irqchip.gicv3_pseudo_nmi=1.
Tested-by: Valentin Schneider <valentin.schneider@arm.com>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Sudeep Holla <sudeep.holla@arm.com>
Link: https://lore.kernel.org/r/20210615111227.2454465-4-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
-
Marc Zyngier authored
Now that we have helpers that are aware of the pseudo-NMI feature, introduce them to cpu_do_idle(). This allows for some nice cleanup. No functional change intended.
Tested-by: Valentin Schneider <valentin.schneider@arm.com>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210615111227.2454465-3-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
-
Marc Zyngier authored
As we need to start doing some additional work on all idle paths, let's introduce a set of macros that will perform the work related to the GICv3 pseudo-NMI idle entry/exit. Stubs are introduced to 32bit ARM for compatibility. As these helpers are currently unused, there is no functional change.
Tested-by: Valentin Schneider <valentin.schneider@arm.com>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210615111227.2454465-2-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
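To make the shape of these helpers concrete, here is a minimal sketch of how an idle path ends up using them (the helper and context names follow this series, but the body is a simplified illustration, not the actual implementation): with pseudo-NMI in use, the save step flips interrupt masking from PMR over to DAIF so that a pending pNMI can still reach the CPU interface and wake the core, and the restore step undoes it.

    /* minimal sketch, assuming the arm_cpuidle_*_irq_context helpers */
    void noinstr cpu_do_idle(void)
    {
        struct arm_cpuidle_irq_context context;

        arm_cpuidle_save_irq_context(&context);    /* pNMI: PMR -> DAIF */
        dsb(sy);                                   /* drain memory traffic */
        wfi();                                     /* wait for interrupt */
        arm_cpuidle_restore_irq_context(&context); /* back to PMR masking */
    }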
-
- 11 Jun, 2021 4 commits
-
Will Deacon authored
Scheduling a 32-bit application on a 64-bit-only CPU is a bad idea. Ensure that 32-bit applications always take the slow-path when returning to userspace on a system with mismatched support at EL0, so that we can avoid trying to run on a 64-bit-only CPU and force a SIGKILL instead.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210608180313.11502-5-will@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
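A hedged sketch of that slow-path check (the cpumask accessor comes from this series; the exact call site and warning message are illustrative assumptions):

    /* sketch: run on the return-to-user slow path */
    if (is_compat_task() &&
        !cpumask_test_cpu(smp_processor_id(), system_32bit_el0_cpumask())) {
        pr_warn_ratelimited("CPU%d: killing 32-bit task %d\n",
                            smp_processor_id(), task_pid_nr(current));
        force_sig(SIGKILL);   /* cannot safely execute AArch32 here */
    }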
-
Will Deacon authored
If a vCPU is caught running 32-bit code on a system with mismatched support at EL0, then we should kill it.
Acked-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210608180313.11502-4-will@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
-
Will Deacon authored
When confronted with a mixture of CPUs, some of which support 32-bit applications and others which don't, we quite sensibly treat the system as 64-bit only for userspace and prevent execve() of 32-bit binaries. Unfortunately, some crazy folks have decided to build systems like this with the intention of running 32-bit applications, so relax our sanitisation logic to continue to advertise 32-bit support to userspace on these systems and track the real 32-bit capable cores in a cpumask instead. For now, the default behaviour remains but will be tied to a command-line option in a later patch.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210608180313.11502-3-will@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
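Schematically, the capable cores are collected as they come online (a hedged sketch: id_aa64pfr0_32bit_el0() is an existing cpufeature helper, while the wrapper function here is hypothetical):

    /* sketch: remember which cores can actually run AArch32 at EL0 */
    static cpumask_var_t cpu_32bit_el0_mask;

    static void update_32bit_el0_mask(unsigned int cpu, u64 pfr0)
    {
        if (id_aa64pfr0_32bit_el0(pfr0))  /* AArch32 EL0 supported here */
            cpumask_set_cpu(cpu, cpu_32bit_el0_mask);
    }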
-
Will Deacon authored
In preparation for late initialisation of the "sanitised" AArch32 register state, move the AArch32 registers out of 'struct cpuinfo' and into their own struct definition.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210608180313.11502-2-will@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
-
- 08 Jun, 2021 1 commit
-
Nick Desaulniers authored
GDB produces the following warning when loading a kernel built with CONFIG_RELR:

  BFD: /android0/linux-next/vmlinux: unknown type [0x13] section `.relr.dyn'

It can also prevent debugging symbols using such relocations. Peter suggests:

  [That flag] means that lld will use dynamic tags and section type numbers in the OS-specific range rather than the generic range. The kernel itself doesn't care about these numbers; it determines the location of the RELR section using symbols defined by a linker script.

Link: https://github.com/ClangBuiltLinux/linux/issues/1057
Suggested-by: Peter Collingbourne <pcc@google.com>
Reviewed-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
Link: https://lore.kernel.org/r/20210522012626.2811297-1-ndesaulniers@google.com
Signed-off-by: Will Deacon <will@kernel.org>
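Concretely, a plausible reconstruction of the fix (hedged: written from memory of the upstream change, not quoted from the diff) is to stop asking lld for the Android-flavoured RELR tags in the top-level Makefile:

    # before: lld emitted OS-specific section type/dynamic tag numbers
    #   LDFLAGS_vmlinux += --pack-dyn-relocs=relr --use-android-relr-tags
    # after: generic SHT_RELR/DT_RELR numbering, which BFD/GDB understand
    LDFLAGS_vmlinux += --pack-dyn-relocs=relr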
-
- 02 Jun, 2021 1 commit
-
Mark Rutland authored
To make future archaeology easier, let's have the string routine comment blocks encode the specific upstream commit ID they were imported from. These are the same commit IDs as listed in the commits importing the code, expanded to 16 characters. Note that the routines have different commit IDs, each representing the latest upstream commit which changed the particular routine. At the same time, let's consistently include 2021 in the copyright dates.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210602151358.35571-1-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-
- 01 Jun, 2021 8 commits
-
Robin Murphy authored
Now that we're always using STTR variants rather than abstracting two different addressing modes, the user_ldst macro here is frankly more obfuscating than helpful. Rewrite __arch_clear_user() with regular USER() annotations so that it's clearer what's going on, and take the opportunity to minimise the branchiness in the most common paths, while also allowing the exception fixup to return an accurate result. Apparently some folks examine large reads from /dev/zero closely enough to notice the loop being hot, so align it per the other critical loops (presumably around a typical instruction fetch granularity).
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/1cbd78b12c076a8ad4656a345811cfb9425df0b3.1622128527.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Robin Murphy authored
Although we implement our own assembly version of memchr(), it turns out to be barely any better than what GCC can generate for the generic C version (and would go wrong if the size_t argument were ever large enough to be interpreted as negative). Unfortunately we can't import the tuned implementation from the Arm optimized-routines library, since that has some Advanced SIMD parts which are not really viable for general kernel library code. What we can do, however, is pep things up with some relatively straightforward word-at-a-time logic for larger calls. Adding some timing to optimized-routines' memchr() test for a simple benchmark, overall this version comes in around half as fast as the SIMD code, but still nearly 4x faster than our existing implementation.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/58471b42f9287e039dafa9e5e7035077152438fd.1622128527.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
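For readers unfamiliar with the technique, here is a minimal C sketch of the word-at-a-time idea (a generic illustration, not the kernel's tuned assembly): XOR each 8-byte word with the target byte repeated into every lane, then apply the classic has-zero-byte test to spot a matching lane.

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    void *memchr_wordwise(const void *s, int c, size_t n)
    {
        const unsigned char *p = s;
        const uint64_t rep = 0x0101010101010101ULL * (unsigned char)c;

        while (n >= 8) {
            uint64_t v;

            memcpy(&v, p, 8);  /* portable unaligned load */
            v ^= rep;          /* a matching byte becomes 0x00 */
            if ((v - 0x0101010101010101ULL) & ~v & 0x8080808080808080ULL)
                break;         /* some lane matched: finish bytewise */
            p += 8;
            n -= 8;
        }
        while (n--) {
            if (*p == (unsigned char)c)
                return (void *)p;
            p++;
        }
        return NULL;
    }

The `(v - 0x01...) & ~v & 0x80...` form is exact (no false positives), so the bytewise tail only runs to pinpoint the matching byte or drain the last few bytes.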
-
Robin Murphy authored
Import the latest implementation of memcpy(), based on the upstream code of string/aarch64/memcpy.S at commit afd6244 from https://github.com/ARM-software/optimized-routines, and subsuming memmove() in the process. Note that for simplicity Arm have chosen to contribute this code to Linux under GPLv2 rather than the original MIT license. Note also that the needs of the usercopy routines vs. regular memcpy() have now diverged so far that we abandon the shared template idea and the damage which that incurred to the tuning of LDP/STP loops. We'll be back to tackle those routines separately in future.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/3c953af43506581b2422f61952261e76949ba711.1622128527.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Robin Murphy authored
Add yet another set of assembly symbol annotations, this time for the borderline-absurd situation of a function aliasing to a weak symbol which itself also wants a position-independent alias.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/75545b3c4129b20b887474bb58a9cf302bf2132b.1622128527.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Sam Tebbs authored
Import the latest version of the former Cortex Strings - now Arm Optimized Routines - strncmp function, based on the upstream code of string/aarch64/strncmp.S at commit e823e3a from https://github.com/ARM-software/optimized-routines. Note that for simplicity Arm have chosen to contribute this code to Linux under GPLv2 rather than the original MIT license.
Signed-off-by: Sam Tebbs <sam.tebbs@arm.com>
[ rm: update attribution and commit message ]
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/26110bee02ad360596c9a7536af7eaaf6890d0e8.1622128527.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Sam Tebbs authored
Import an updated version of the former Cortex Strings - now Arm Optimized Routines - strlen function. The latest version introduces Advanced SIMD usage which rules it out for our purposes, but we can still pick an intermediate improvement from the previous version, namely string/aarch64/strlen.S at commit 98e4d6a from https://github.com/ARM-software/optimized-routines. Note that for simplicity Arm have chosen to contribute this code to Linux under GPLv2 rather than the original MIT license.
Signed-off-by: Sam Tebbs <sam.tebbs@arm.com>
[ rm: update attribution and commit message ]
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/32e3489398a24b23ae6e996935ac4818f8fd9dfd.1622128527.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Sam Tebbs authored
Import the latest version of the former Cortex Strings - now Arm Optimized Routines - strcmp function, based on the upstream code of string/aarch64/strcmp.S at commit afd6244 from https://github.com/ARM-software/optimized-routines. Note that for simplicity Arm have chosen to contribute this code to Linux under GPLv2 rather than the original MIT license.
Signed-off-by: Sam Tebbs <sam.tebbs@arm.com>
[ rm: update attribution and commit message ]
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/0fe90c90b96b569fbdfd46e47bd1298abb02079e.1622128527.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Sam Tebbs authored
Import the latest version of the former Cortex Strings - now Arm Optimized Routines - memcmp function, based on the upstream code of string/aarch64/memcmp.S at commit e823e3a from https://github.com/ARM-software/optimized-routines. Note that for simplicity Arm have chosen to contribute this code to Linux under GPLv2 rather than the original MIT license.
Signed-off-by: Sam Tebbs <sam.tebbs@arm.com>
[ rm: update attribution and commit message ]
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/2889de2d41054f3f508fb3addad784a3606ef383.1622128527.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-
- 27 May, 2021 1 commit
-
Will Deacon authored
The scs_load and scs_save asm macros don't make use of the mandatory 'tmp' register argument, so drop it and fix up the callers.
Cc: Sami Tolvanen <samitolvanen@google.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Sami Tolvanen <samitolvanen@google.com>
Link: https://lore.kernel.org/r/20210527105529.21967-1-will@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
-
- 26 May, 2021 10 commits
-
Mark Rutland authored
Now that we have a consistent place to initialize CPU context registers early in the boot path, let's also initialize the per-cpu offset here. This makes the primary and secondary boot paths more consistent, and allows for the use of per-cpu operations earlier, which will be necessary for instrumentation with KCSAN. Note that smp_prepare_boot_cpu() still needs to re-initialize CPU0's offset, as immediately prior to this the per-cpu areas may be reallocated, and hence the boot-time offset may be stale. A comment is added to make this clear.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Suzuki Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210520115031.18509-7-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
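For context, a minimal sketch of what installing the per-cpu offset amounts to on arm64 (assuming the kernel runs at EL1; the real helper also handles the VHE/EL2 case via alternatives):

    /* sketch: the kernel keeps its per-cpu offset in TPIDR_EL1 */
    static inline void set_my_cpu_offset(unsigned long off)
    {
        asm volatile("msr tpidr_el1, %0" :: "r" (off));
    }

Doing this write early in the common boot path is what lets per-cpu accessors (and hence KCSAN's instrumentation) work before the generic SMP bring-up code runs.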
-
Mark Rutland authored
Once we enable the MMU, we have to initialize:

* SP_EL0 to point at the active task
* SP to point at the active task's stack
* SCS_SP to point at the active task's shadow stack

For all tasks (including init_task), this information can be derived from the task's task_struct. Let's unify __primary_switched and __secondary_switched to consistently acquire this information from the relevant task_struct. At the same time, let's fold this together with initializing a task's final frame. There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Suzuki Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210520115031.18509-6-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
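A hedged C-level sketch of the derivation the unified assembly performs (the helpers named here exist in the kernel, but the fragment itself is illustrative):

    /* sketch: everything needed to enter a task comes from task_struct */
    unsigned long sp     = (unsigned long)task_stack_page(tsk) + THREAD_SIZE;
    unsigned long sp_el0 = (unsigned long)tsk;   /* the 'current' pointer */
    #ifdef CONFIG_SHADOW_CALL_STACK
    void *scs_sp         = task_scs_sp(tsk);     /* shadow call stack */
    #endif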
-
Mark Rutland authored
When we boot a secondary CPU, we pass it a task and a stack to use. As the stack is always the task's stack, which can be derived from the task, let's have the secondary CPU derive this itself and avoid passing redundant information. There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Suzuki Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210520115031.18509-5-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Mark Rutland authored
All reads and writes of secondary_data occur with the MMU on, using coherent attributes, so there's no need to perform any cache maintenance for this. There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Suzuki Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210520115031.18509-4-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Mark Rutland authored
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Suzuki Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210520115031.18509-3-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Will Deacon authored
Merge in stack unwinding work to minimise conflicts in head.S.

* for-next/stacktrace:
  arm64: stacktrace: Relax frame record alignment requirement to 8 bytes
  arm64: Change the on_*stack functions to take a size argument
  arm64: Implement stack trace termination record
-
Catalin Marinas authored
The GMID_EL1.BS field determines the number of tags accessed by the LDGM/STGM instructions (EL1 and up), used by the kernel for copying or zeroing page tags. Taint the kernel if GMID_EL1.BS differs between CPUs, but only if CONFIG_ARM64_MTE is enabled.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Suzuki K Poulose <Suzuki.Poulose@arm.com>
Link: https://lore.kernel.org/r/20210526193621.21559-3-catalin.marinas@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
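A hedged sketch of the shape of such a check (simplified: the real code goes through the cpufeature framework rather than open-coding this, and boot_cpu_gmid is a hypothetical variable holding the boot CPU's value):

    /* sketch: taint on GMID_EL1 mismatch, only when MTE is configured in */
    if (IS_ENABLED(CONFIG_ARM64_MTE) &&
        read_cpuid(GMID_EL1) != boot_cpu_gmid) {
        pr_warn("CPU%d: GMID_EL1.BS differs from boot CPU\n",
                smp_processor_id());
        add_taint(TAINT_CPU_OUT_OF_SPEC, LOCKDEP_STILL_OK);
    }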
-
Catalin Marinas authored
The architecture has been updated and the CTR_EL0, CNTFRQ_EL0, DCZID_EL0, MIDR_EL1 and REVIDR_EL1 registers are all 64-bit, even if most of them have RES0 top 32 bits. Change their type to u64 in struct cpuinfo_arm64.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Suzuki K Poulose <Suzuki.Poulose@arm.com>
Link: https://lore.kernel.org/r/20210526193621.21559-2-catalin.marinas@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
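Illustratively, the affected members simply widen (field names as in struct cpuinfo_arm64; a trimmed excerpt, not the full definition):

    struct cpuinfo_arm64 {
        /* ... */
        u64 reg_ctr;      /* CTR_EL0:    previously u32 */
        u64 reg_cntfrq;   /* CNTFRQ_EL0: previously u32 */
        u64 reg_dczid;    /* DCZID_EL0:  previously u32 */
        u64 reg_midr;     /* MIDR_EL1:   previously u32 */
        u64 reg_revidr;   /* REVIDR_EL1: previously u32 */
        /* ... */
    };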
-
Peter Collingbourne authored
The AAPCS places no requirements on the alignment of the frame record. In theory it could be placed anywhere, although it seems sensible to require it to be aligned to 8 bytes. With an upcoming enhancement to tag-based KASAN, Clang will begin creating frame records located at an address that is only aligned to 8 bytes. Accommodate such frame records in the stack unwinding code. As pointed out by Mark Rutland, the userspace stack unwinding code has the same problem, so fix it there as well.
Signed-off-by: Peter Collingbourne <pcc@google.com>
Link: https://linux-review.googlesource.com/id/Ia22c375230e67ca055e9e4bb639383567f7ad268
Acked-by: Andrey Konovalov <andreyknvl@gmail.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20210526174927.2477847-2-pcc@google.com
Signed-off-by: Will Deacon <will@kernel.org>
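In unwinder terms the change is small; a hedged sketch of the relevant check (not the full unwind_frame()):

    /* sketch: frame pointer validity check in the unwinder */
    if (fp & 0x7)          /* was: fp & 0xf, i.e. 16-byte alignment */
        return -EINVAL;    /* reject misaligned frame records */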
-
Peter Collingbourne authored
unwind_frame() was previously implicitly checking that the frame record is in bounds of the stack by enforcing that FP is both aligned to 16 and in bounds of the stack. Once the FP alignment requirement is relaxed to 8 this will not be sufficient because it does not account for the case where FP points to 8 bytes before the end of the stack. Make the check explicit by changing the on_*stack functions to take a size argument and adjusting the callers to pass the appropriate sizes.
Signed-off-by: Peter Collingbourne <pcc@google.com>
Link: https://linux-review.googlesource.com/id/Ib7a3eb3eea41b0687ffaba045ceb2012d077d8b4
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20210526174927.2477847-1-pcc@google.com
Signed-off-by: Will Deacon <will@kernel.org>
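A minimal sketch of the new shape (simplified signature; the kernel's versions also classify the stack type and fill in a struct stack_info):

    /* the whole object [sp, sp + size) must lie within the stack bounds */
    static inline bool on_stack(unsigned long sp, unsigned long size,
                                unsigned long low, unsigned long high)
    {
        return low <= sp &&
               sp + size >= sp &&    /* guard against wraparound */
               sp + size <= high;
    }

Passing the frame record's size makes the record-at-the-very-end case fail cleanly instead of relying on the old 16-byte alignment to exclude it.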
-
- 25 May, 2021 3 commits
-
Fuad Tabba authored
Although naming across the codebase isn't that consistent, it tends to follow certain patterns. Moreover, the term "flush" isn't defined in the Arm Architecture Reference Manual, and might be interpreted to mean clean, invalidate, or both for a cache.

Rename arm64-internal functions to make the naming internally consistent, as well as making it consistent with the Arm ARM, by specifying whether it applies to the instruction, data, or both caches, and whether the operation is a clean, invalidate, or both. Also specify which point the operation applies to, i.e., to the point of unification (PoU), coherency (PoC), or persistence (PoP).

This commit applies the following sed transformation to all files under arch/arm64:

  "s/\b__flush_cache_range\b/caches_clean_inval_pou_macro/g;"\
  "s/\b__flush_icache_range\b/caches_clean_inval_pou/g;"\
  "s/\binvalidate_icache_range\b/icache_inval_pou/g;"\
  "s/\b__flush_dcache_area\b/dcache_clean_inval_poc/g;"\
  "s/\b__inval_dcache_area\b/dcache_inval_poc/g;"\
  "s/__clean_dcache_area_poc\b/dcache_clean_poc/g;"\
  "s/\b__clean_dcache_area_pop\b/dcache_clean_pop/g;"\
  "s/\b__clean_dcache_area_pou\b/dcache_clean_pou/g;"\
  "s/\b__flush_cache_user_range\b/caches_clean_inval_user_pou/g;"\
  "s/\b__flush_icache_all\b/icache_inval_all_pou/g;"

Note that __clean_dcache_area_poc is deliberately missing a word boundary check at the beginning in order to match the efistub symbols in image-vars.h. Also note that, despite its name, __flush_icache_range operates on both instruction and data caches. The name change here reflects that.

No functional change intended.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-19-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Fuad Tabba authored
Fix and expand comments for the cache maintenance functions in cacheflush.h. Add comments to functions that weren't described before, explaining what the functions do using Arm Architecture Reference Manual terminology. No functional change intended.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-18-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Fuad Tabba authored
To be consistent with other functions with similar names and functionality in cacheflush.h, cache.S, and cachetlb.rst, change sync_icache_aliases to specify the range in terms of start and end, as opposed to start and size. No functional change intended.
Reported-by: Will Deacon <will@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-17-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
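At a call site, the convention change for this family of patches looks like this (a hedged sketch; the kaddr/len variables are illustrative):

    /* before: range expressed as start + size */
    sync_icache_aliases(kaddr, len);

    /* after: range expressed as [start, end) */
    sync_icache_aliases(kaddr, kaddr + len);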
-