- 26 Aug, 2021 2 commits
-
-
Catalin Marinas authored
Merge branches 'for-next/mte', 'for-next/misc' and 'for-next/kselftest', remote-tracking branch 'arm64/for-next/perf' into for-next/core

* arm64/for-next/perf:
  arm64/perf: Replace '0xf' instances with ID_AA64DFR0_PMUVER_IMP_DEF

* for-next/mte:
  : Miscellaneous MTE improvements.
  arm64/cpufeature: Optionally disable MTE via command-line
  arm64: kasan: mte: remove redundant mte_report_once logic
  arm64: kasan: mte: use a constant kernel GCR_EL1 value
  arm64: avoid double ISB on kernel entry
  arm64: mte: optimize GCR_EL1 modification on kernel entry/exit
  Documentation: document the preferred tag checking mode feature
  arm64: mte: introduce a per-CPU tag checking mode preference
  arm64: move preemption disablement to prctl handlers
  arm64: mte: change ASYNC and SYNC TCF settings into bitfields
  arm64: mte: rename gcr_user_excl to mte_ctrl
  arm64: mte: avoid TFSRE0_EL1 related operations unless in async mode

* for-next/misc:
  : Miscellaneous updates.
  arm64: Do not trap PMSNEVFR_EL1
  arm64: mm: fix comment typo of pud_offset_phys()
  arm64: signal32: Drop pointless call to sigdelsetmask()
  arm64/sve: Better handle failure to allocate SVE register storage
  arm64: Document the requirement for SCR_EL3.HCE
  arm64: head: avoid over-mapping in map_memory
  arm64/sve: Add a comment documenting the binutils needed for SVE asm
  arm64/sve: Add some comments for sve_save/load_state()
  arm64: replace in_irq() with in_hardirq()
  arm64: mm: Fix TLBI vs ASID rollover
  arm64: entry: Add SYM_CODE annotation for __bad_stack
  arm64: fix typo in a comment
  arm64: move the (z)install rules to arch/arm64/Makefile
  arm64/sve: Make fpsimd_bind_task_to_cpu() static
  arm64: unnecessary end 'return;' in void functions
  arm64/sme: Document boot requirements for SME
  arm64: use __func__ to get function name in pr_err
  arm64: SSBS/DIT: print SSBS and DIT bit when printing PSTATE
  arm64: cpufeature: Use defined macro instead of magic numbers
  arm64/kexec: Test page size support with new TGRAN range values

* for-next/kselftest:
  : Kselftest additions for arm64.
  kselftest/arm64: signal: Add a TODO list for signal handling tests
  kselftest/arm64: signal: Add test case for SVE register state in signals
  kselftest/arm64: signal: Verify that signals can't change the SVE vector length
  kselftest/arm64: signal: Check SVE signal frame shows expected vector length
  kselftest/arm64: signal: Support signal frames with SVE register data
  kselftest/arm64: signal: Add SVE to the set of features we can check for
  kselftest/arm64: pac: Fix skipping of tests on systems without PAC
  kselftest/arm64: mte: Fix misleading output when skipping tests
  kselftest/arm64: Add a TODO list for floating point tests
  kselftest/arm64: Add tests for SVE vector configuration
  kselftest/arm64: Validate vector lengths are set in sve-probe-vls
  kselftest/arm64: Provide a helper binary and "library" for SVE RDVL
  kselftest/arm64: Ignore check_gcr_el1_cswitch binary
-
Alexandru Elisei authored
Commit 31c00d2a ("arm64: Disable fine grained traps on boot") zeroed the fine grained trap registers to prevent unwanted register traps from occuring. However, for the PMSNEVFR_EL1 register, the corresponding HDFG{R,W}TR_EL2.nPMSNEVFR_EL1 fields must be 1 to disable trapping. Set both fields to 1 if FEAT_SPEv1p2 is detected to disable read and write traps. Fixes: 31c00d2a ("arm64: Disable fine grained traps on boot") Cc: <stable@vger.kernel.org> # 5.13.x Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Reviewed-by: Mark Brown <broonie@kernel.org> Acked-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210824154523.906270-1-alexandru.elisei@arm.comSigned-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 25 Aug, 2021 2 commits
-
-
Xujun Leng authored
Fix a typo in the comment of macro pud_offset_phys(). Signed-off-by: Xujun Leng <lengxujun2007@126.com> Link: https://lore.kernel.org/r/20210825150526.12582-1-lengxujun2007@126.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Will Deacon authored
Commit 77097ae5 ("most of set_current_blocked() callers want SIGKILL/SIGSTOP removed from set") extended set_current_blocked() to remove SIGKILL and SIGSTOP from the new signal set and updated all callers accordingly. Unfortunately, this collided with the merge of the arm64 architecture, which duly removes these signals when restoring the compat sigframe, as this was what was previously done by arch/arm/. Remove the redundant call to sigdelsetmask() from compat_restore_sigframe(). Reported-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210825093911.24493-1-will@kernel.orgSigned-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 24 Aug, 2021 5 commits
-
-
Mark Brown authored
Currently we "handle" failure to allocate the SVE register storage by doing a BUG_ON() and hoping for the best. This is obviously not great and the memory allocation failure will already be loud enough without the BUG_ON(). As the comment says it is a corner case but let's try to do a bit better, remove the BUG_ON() and add code to handle the failure in the callers. For the ptrace and signal code we can return -ENOMEM gracefully however we have no real error reporting path available to us for the SVE access trap so instead generate a SIGKILL if the allocation fails there. This at least means that we won't try to soldier on and end up trying to access the nonexistant state and while it's obviously not ideal for userspace SIGKILL doesn't allow any handling so minimises the ABI impact, making it easier to improve the interface later if we come up with a better idea. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20210824153417.18371-1-broonie@kernel.orgSigned-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Marc Zyngier authored
It is amazing that we never documented this absolutely basic requirement: if you boot the kernel at EL2, you'd better enable the HVC instruction from EL3. Really, just do it. Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20210812190213.2601506-6-maz@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Rutland authored
The `compute_indices` and `populate_entries` macros operate on inclusive bounds, and thus the `map_memory` macro which uses them also operates on inclusive bounds. We pass `_end` and `_idmap_text_end` to `map_memory`, but these are exclusive bounds, and if one of these is sufficiently aligned (as a result of kernel configuration, physical placement, and KASLR), then: * In `compute_indices`, the computed `iend` will be in the page/block *after* the final byte of the intended mapping. * In `populate_entries`, an unnecessary entry will be created at the end of each level of table. At the leaf level, this entry will map up to SWAPPER_BLOCK_SIZE bytes of physical addresses that we did not intend to map. As we may map up to SWAPPER_BLOCK_SIZE bytes more than intended, we may violate the boot protocol and map physical addresses past the 2MiB-aligned end address we are permitted to map. As we map these with Normal memory attributes, this may result in further problems depending on what these physical addresses correspond to. The final entry at each level may require an additional table at that level. As EARLY_ENTRIES() calculates an inclusive bound, we allocate enough memory for this. Avoid the extraneous mapping by having map_memory convert the exclusive end address to an inclusive end address by subtracting one, and do likewise in EARLY_ENTRIES() when calculating the number of required tables. For clarity, comments are updated to more clearly document which boundaries the macros operate on. For consistency with the other macros, the comments in map_memory are also updated to describe `vstart` and `vend` as virtual addresses. Fixes: 0370b31e ("arm64: Extend early page table code to allow for larger kernels") Cc: <stable@vger.kernel.org> # 4.16.x Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Anshuman Khandual <anshuman.khandual@arm.com> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Steve Capper <steve.capper@arm.com> Cc: Will Deacon <will@kernel.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210823101253.55567-1-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
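A minimal worked illustration of the off-by-one (numbers chosen for illustration only; the real code is assembly in arch/arm64/kernel/head.S):

    /*
     * Index computation with 2 MiB (SWAPPER_BLOCK_SIZE) leaf entries.
     * If an exclusive end address is exactly block aligned, using it
     * directly yields one index too many.
     */
    #define BLOCK_SHIFT	21			/* 2 MiB blocks */

    unsigned long index_of(unsigned long addr)
    {
    	return addr >> BLOCK_SHIFT;
    }

    /*
     * vend = 0x40400000 is an exclusive, block-aligned end address:
     *   index_of(vend)     == 0x202  -> entry *after* the intended mapping
     *   index_of(vend - 1) == 0x201  -> last entry actually required
     * Converting the exclusive bound to an inclusive one (vend - 1) before
     * computing indices avoids creating the extra entry.
     */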
-
Mark Brown authored
At some point it would be nice to avoid the need to manually encode SVE instructions; add a note of the binutils version required so we don't have to look it up. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20210816125024.8112-1-broonie@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Brown authored
The use of macros for the actual function bodies means legibility is always going to be a bit of a challenge, especially while we can't rely on SVE support in the toolchain, but this helps a little. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20210812201143.35578-1-broonie@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 23 Aug, 2021 6 commits
-
-
Mark Brown authored
Note down a few gaps in our coverage. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20210819134245.13935-7-broonie@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Brown authored
Currently this doesn't actually verify that the register contents do the right thing; it just verifies that an SVE context of the appropriate size appears. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20210819134245.13935-6-broonie@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Brown authored
We do not support changing the SVE vector length as part of signal return; verify that this is the case if the system supports multiple vector lengths. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20210819134245.13935-5-broonie@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Brown authored
As a basic check that the SVE signal frame is being set up correctly, verify that the vector length in the signal frame is the vector length that the process has. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20210819134245.13935-4-broonie@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Brown authored
A signal frame with SVE may validly either be a bare struct sve_context or a struct sve_context followed by vector length dependent register data. Support either in the generic helpers for the signal tests, and while we're at it validate the SVE vector length reported. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20210819134245.13935-3-broonie@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
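For context, locating the SVE record uses the standard arm64 sigcontext record layout; the sketch below is a hedged illustration of that walk (bounds checking omitted, not the test code itself):

    #include <ucontext.h>
    #include <asm/sigcontext.h>	/* struct _aarch64_ctx, struct sve_context, SVE_MAGIC */

    /*
     * Walk the __reserved area of the sigcontext looking for an SVE record
     * and return the vector length it reports, or -1 if none is present.
     */
    static int signal_frame_sve_vl(ucontext_t *uc)
    {
    	struct _aarch64_ctx *head =
    		(struct _aarch64_ctx *)uc->uc_mcontext.__reserved;

    	while (head->magic) {
    		if (head->magic == SVE_MAGIC)
    			return ((struct sve_context *)head)->vl;
    		head = (struct _aarch64_ctx *)((char *)head + head->size);
    	}

    	return -1;
    }

A bare struct sve_context (head->size == sizeof(struct sve_context)) simply means no register data follows, which is what the generic helpers now have to tolerate.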
-
Mark Brown authored
Allow testcases for SVE signal handling to flag the dependency and be skipped on systems without SVE support. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20210819134245.13935-2-broonie@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 20 Aug, 2021 3 commits
-
-
Changbin Du authored
Replace the obsolete and ambiguous macro in_irq() with the new macro in_hardirq(). Signed-off-by: Changbin Du <changbin.du@gmail.com> Link: https://lore.kernel.org/r/20210814005405.2658-1-changbin.du@gmail.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Brown authored
The PAC tests check to see if the system supports the relevant PAC features, but instead of skipping the tests when they can't be executed they fail them, which makes things look like they're not working when they are. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20210819165723.43903-1-broonie@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
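A hedged sketch of the skip-rather-than-fail pattern using the kselftest helpers; the hwcap checked, message text and include path are illustrative assumptions, not the test's actual code:

    #include <sys/auxv.h>
    #include <asm/hwcap.h>		/* HWCAP_PACA */
    #include "../../kselftest.h"	/* ksft_test_result_skip() */

    /* Skip, rather than fail, when pointer authentication is unavailable. */
    static int pauth_supported(void)
    {
    	if (!(getauxval(AT_HWCAP) & HWCAP_PACA)) {
    		ksft_test_result_skip("PAUTH not supported on this system\n");
    		return 0;
    	}
    	return 1;
    }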
-
Mark Brown authored
When skipping the tests due to a lack of system support for MTE, we currently print a message saying FAIL, which makes it look like the test failed even though the test did actually report KSFT_SKIP, creating some confusion. Change the error message to say SKIP instead so things are clearer. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20210819172902.56211-1-broonie@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 11 Aug, 2021 1 commit
-
-
Anshuman Khandual authored
ID_AA64DFR0_PMUVER_IMP_DEF, indicating an "implementation defined" PMU, never actually gets used, although there are '0xf' instances scattered all around. Use the symbolic name instead of the raw hex constant. Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Link: https://lore.kernel.org/r/1628652427-24695-2-git-send-email-anshuman.khandual@arm.com Signed-off-by: Will Deacon <will@kernel.org>
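Illustratively, the change is a mechanical substitution along these lines (a hedged sketch; the surrounding PMU driver code is paraphrased, not quoted):

    /* Before: bail out on an IMPLEMENTATION DEFINED or absent PMU. */
    if (pmuver == 0xf || pmuver == 0)
    	return;

    /* After: the same check, using the symbolic name from <asm/sysreg.h>. */
    if (pmuver == ID_AA64DFR0_PMUVER_IMP_DEF || pmuver == 0)
    	return;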
-
- 06 Aug, 2021 2 commits
-
-
Will Deacon authored
When switching to an 'mm_struct' for the first time following an ASID rollover, a new ASID may be allocated and assigned to 'mm->context.id'. This reassignment can happen concurrently with other operations on the mm, such as unmapping pages and subsequently issuing TLB invalidation. Consequently, we need to ensure that (a) accesses to 'mm->context.id' are atomic and (b) all page-table updates made prior to a TLBI using the old ASID are guaranteed to be visible to CPUs running with the new ASID. This was found by inspection after reviewing the VMID changes from Shameer but it looks like a real (yet hard to hit) bug. Cc: <stable@vger.kernel.org> Cc: Marc Zyngier <maz@kernel.org> Cc: Jade Alglave <jade.alglave@arm.com> Cc: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com> Signed-off-by: Will Deacon <will@kernel.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20210806113109.2475-2-will@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Brown authored
When converting arm64 to modern assembler annotations __bad_stack was left as a raw local label without annotations. While this will have little if any practical impact at present, it may cause issues in the future if we start using the annotations for things like reliable stack trace. Add SYM_CODE annotations to fix this. Signed-off-by: Mark Brown <broonie@kernel.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210804181710.19059-1-broonie@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 03 Aug, 2021 7 commits
-
-
Mark Brown authored
Write down some ideas for additional coverage for floating point in case someone feels inspired to look into them. Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Dave Martin <Dave.Martin@arm.com> Link: https://lore.kernel.org/r/20210803140450.46624-5-broonie@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Brown authored
We provide interfaces for configuring the SVE vector length seen by processes using prctl, and also via /proc for configuring the default values. Provide tests that exercise all these interfaces and verify that they take effect as expected, though at present no test fully enumerates all the possible vector lengths. A subset of this is already tested via sve-probe-vls, but the /proc interfaces are not currently covered at all. In preparation for the forthcoming support for SME, the Scalable Matrix Extension, which has separately but similarly configured vector lengths for which we expect to offer similar userspace interfaces, all the actual files and prctls used are parameterised, and we don't validate that the architectural minimum vector length is the minimum we see. Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Dave Martin <Dave.Martin@arm.com> Link: https://lore.kernel.org/r/20210803140450.46624-4-broonie@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
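The userspace-visible interfaces being exercised here are the SVE prctl() calls (PR_SVE_GET_VL/PR_SVE_SET_VL) and the /proc/sys/abi default; a minimal hedged example of querying and setting the per-thread vector length (simplified error handling, the requested length of 32 bytes is arbitrary) is:

    #include <stdio.h>
    #include <sys/prctl.h>
    #include <linux/prctl.h>	/* PR_SVE_SET_VL, PR_SVE_GET_VL, PR_SVE_VL_LEN_MASK */

    int main(void)
    {
    	int vl = prctl(PR_SVE_GET_VL);

    	if (vl < 0) {
    		perror("PR_SVE_GET_VL");
    		return 1;
    	}
    	printf("current SVE VL: %d bytes\n", vl & PR_SVE_VL_LEN_MASK);

    	/* Request a 32-byte vector length for this thread and its children. */
    	if (prctl(PR_SVE_SET_VL, 32) < 0)
    		perror("PR_SVE_SET_VL");

    	return 0;
    }

The system-wide default that new processes inherit is configured separately via /proc/sys/abi/sve_default_vector_length, which is the /proc interface the commit refers to.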
-
Mark Brown authored
Currently sve-probe-vls does not verify that the vector lengths reported by the prctl() interface are actually what is reported by the architecture; use the rdvl_sve() helper to validate this. Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Dave Martin <Dave.Martin@arm.com> Link: https://lore.kernel.org/r/20210803140450.46624-3-broonie@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Brown authored
SVE provides an instruction, RDVL, which reports the currently configured vector length. In order to validate that our vector length configuration interfaces are working correctly without having to build the C code for our test programs with SVE enabled, provide a trivial assembly library with a C callable function that executes RDVL. Since these interfaces also control behaviour on exec*(), provide a trivial wrapper program which reports the currently configured vector length on stdout; tests can use this to verify that behaviour on exec*() is as expected. In preparation for providing similar helper functionality for SME, the Scalable Matrix Extension, which allows separately configured vector lengths to be read back, both the assembler function and wrapper binary have SVE included in their name. Signed-off-by: Mark Brown <broonie@kernel.org> Reviewed-by: Dave Martin <Dave.Martin@arm.com> Link: https://lore.kernel.org/r/20210803140450.46624-2-broonie@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
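A hedged sketch of such a helper, written here as C with inline assembly rather than the separate assembly file the patch actually adds; note this form needs a toolchain that accepts SVE instructions (e.g. -march=armv8-a+sve), which is exactly the dependency the in-tree assembly version is there to avoid:

    /* Read the currently configured SVE vector length, in bytes. */
    static inline unsigned long rdvl_sve(void)
    {
    	unsigned long vl;

    	/* RDVL Xd, #1 places the vector length in bytes into Xd. */
    	asm volatile("rdvl %0, #1" : "=r" (vl));
    	return vl;
    }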
-
Jason Wang authored
The word 'the' is duplicated after 'If' in the comment "If the the TLB range ops are supported..."; remove the extra one. Signed-off-by: Jason Wang <wangborong@cdjrlc.com> Link: https://lore.kernel.org/r/20210803142020.124230-1-wangborong@cdjrlc.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Masahiro Yamada authored
Currently, the (z)install targets in arch/arm64/Makefile descend into arch/arm64/boot/Makefile to invoke the shell script, but there is no good reason to do so. arch/arm64/Makefile can run the shell script directly. Signed-off-by: Masahiro Yamada <masahiroy@kernel.org> Link: https://lore.kernel.org/r/20210729140527.443116-1-masahiroy@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Yee Lee authored
MTE support needs to be optionally disabled at runtime to work around hardware issues, for firmware development, and for evaluating its impact on system resources and performance. This patch makes two changes: (1) it moves initialisation of the tag-allocation bits (ATA/ATA0) to cpu_enable_mte(), as these are not cached in the TLB; (2) it allows the shadow value of ID_AA64PFR1_EL1.MTE to be overridden by passing "arm64.nomte" on the command line. When the feature is turned off this way, ATA and TCF will not be set and the related functionality is suppressed. Suggested-by: Catalin Marinas <catalin.marinas@arm.com> Suggested-by: Marc Zyngier <maz@kernel.org> Suggested-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Yee Lee <yee.lee@mediatek.com> Link: https://lore.kernel.org/r/20210803070824.7586-2-yee.lee@mediatek.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 02 Aug, 2021 6 commits
-
-
Mark Rutland authored
We have special logic to suppress MTE tag check fault reporting, based on global `mte_report_once` and `reported` variables. These can be used to suppress calling kasan_report() when taking a tag check fault, but do not prevent taking the fault in the first place, nor do they affect the way we disable tag checks upon taking a fault. The core KASAN code already defaults to reporting a single fault, and has a `multi_shot` control to permit reporting multiple faults. The only place we transiently alter `mte_report_once` is in lib/test_kasan.c, where we also alter the `multi_shot` state at the same time. Thus `mte_report_once` and `reported` are redundant, and can be removed. When a tag check fault is taken, tag checking will be disabled by `do_tag_recovery` and must be explicitly re-enabled if desired. The test code does this by calling kasan_enable_tagging_sync(). This patch removes the redundant mte_report_once() logic and associated variables. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Alexander Potapenko <glider@google.com> Cc: Andrey Konovalov <andreyknvl@gmail.com> Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Will Deacon <will@kernel.org> Cc: Vincenzo Frascino <vincenzo.frascino@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com> Tested-by: Andrey Konovalov <andreyknvl@gmail.com> Link: https://lore.kernel.org/r/20210714143843.56537-4-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Rutland authored
When KASAN_HW_TAGS is selected, KASAN is enabled at boot time, and the hardware supports MTE, we'll initialize `kernel_gcr_excl` with a value dependent on KASAN_TAG_MAX. While the resulting value is a constant which depends on KASAN_TAG_MAX, we have to perform some runtime work to generate the value, and have to read the value from memory during the exception entry path. It would be better if we could generate this as a constant at compile-time, and use it as such directly. Early in boot within __cpu_setup(), we initialize GCR_EL1 to a safe value, and later override this with the value required by KASAN. If CONFIG_KASAN_HW_TAGS is not selected, or if KASAN is disabled at boot time, the kernel will not use IRG instructions, and so the initial value of GCR_EL1 does not matter to the kernel. Thus, we can instead have __cpu_setup() initialize GCR_EL1 to a value consistent with KASAN_TAG_MAX, and avoid the need to re-initialize it during hotplug and resume from suspend. This patch makes arm64 use a compile-time constant KERNEL_GCR_EL1 value, which is compatible with KASAN_HW_TAGS when this is selected. This removes the need to re-initialize GCR_EL1 dynamically, and acts as an optimization to the entry assembly, which no longer needs to load this value from memory. The redundant initialization hooks are removed. In order to do this, KASAN_TAG_MAX needs to be visible outside of the core KASAN code. To do this, I've moved the KASAN_TAG_* values into <linux/kasan-tags.h>. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Alexander Potapenko <glider@google.com> Cc: Andrey Konovalov <andreyknvl@gmail.com> Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Peter Collingbourne <pcc@google.com> Cc: Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: Will Deacon <will@kernel.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com> Tested-by: Andrey Konovalov <andreyknvl@gmail.com> Link: https://lore.kernel.org/r/20210714143843.56537-3-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Brown authored
This function is not referenced outside fpsimd.c so can be made static, making it that little bit easier to follow what is called from where. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20210730165846.18558-1-broonie@kernel.org Reviewed-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Brown authored
We added check_gcr_el1_cswitch but did not ignore the generated binary; add it. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20210728173539.6231-1-broonie@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Jason Wang authored
A 'return;' at the end of a void function is useless and verbose. It can be removed safely. Signed-off-by: Jason Wang <wangborong@cdjrlc.com> Link: https://lore.kernel.org/r/20210726123940.63232-1-wangborong@cdjrlc.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Brown authored
Document our requirements for initialisation of the Scalable Matrix Extension (SME) at kernel start. While we do have the ability to handle mismatched vector lengths, we will reject any late CPUs that can't support the minimum set we determine at boot, so for clarity we document a requirement that all CPUs make the same vector length available. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20210720204220.22951-1-broonie@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 30 Jul, 2021 4 commits
-
-
Jason Wang authored
Prefer using '"%s...", __func__' to get current function's name in a debug message. Signed-off-by: Jason Wang <wangborong@cdjrlc.com> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210726122907.51529-1-wangborong@cdjrlc.comSigned-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Lingyan Huang authored
The current code to print PSTATE when generating backtraces does not include the SSBS and DIT bits, so add this information. Cc: Vladimir Murzin <vladimir.murzin@arm.com> Cc: Will Deacon <will@kernel.org> Reviewed-by: Vladimir Murzin <vladimir.murzin@arm.com> Signed-off-by: Lingyan Huang <huanglingyan2@huawei.com> Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/1626920436-54816-1-git-send-email-zhangshaokun@hisilicon.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Shaokun Zhang authored
Use a defined macro to simplify the code and make it more readable. Cc: Marc Zyngier <maz@kernel.org> Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com> Link: https://lore.kernel.org/r/1626415089-57584-1-git-send-email-zhangshaokun@hisilicon.com Acked-by: Will Deacon <will@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Anshuman Khandual authored
The commit 26f55386 ("arm64/mm: Fix __enable_mmu() for new TGRAN range values") had already switched into testing ID_AA64MMFR0_TGRAN range values. This just changes system_supports_[4|16|64]kb_granule() helpers to perform similar range tests as well. While here, it standardizes page size specific supported min and max TGRAN values. Cc: Will Deacon <will@kernel.org> Cc: James Morse <james.morse@arm.com> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Link: https://lore.kernel.org/r/1626237975-1909-1-git-send-email-anshuman.khandual@arm.comSigned-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 28 Jul, 2021 2 commits
-
-
Peter Collingbourne authored
Although an ISB is required in order to make the MTE-related system register update to GCR_EL1 effective, and the same is true for PAC-related updates to SCTLR_EL1 or APIAKey{Hi,Lo}_EL1, we issue two ISBs on machines that support both features while we only need to issue one. To avoid the unnecessary additional ISB, remove the ISBs from the PAC and MTE-specific alternative blocks and add a couple of additional blocks that cause us to only execute one ISB if both features are supported. Signed-off-by: Peter Collingbourne <pcc@google.com> Link: https://linux-review.googlesource.com/id/Idee7e8114d5ae5a0b171d06220a0eb4bb015a51c Link: https://lore.kernel.org/r/20210727205439.2557419-1-pcc@google.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Peter Collingbourne authored
Accessing GCR_EL1 and issuing an ISB can be expensive on some microarchitectures. Although we must write to GCR_EL1, we can restructure the code to avoid reading from it because the new value can be derived entirely from the exclusion mask, which is already in a GPR. Do so. Signed-off-by: Peter Collingbourne <pcc@google.com> Link: https://linux-review.googlesource.com/id/I560a190a74176ca4cc5191dad08f77f6b1577c75 Link: https://lore.kernel.org/r/20210714013638.3995315-1-pcc@google.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-