1. 28 Feb, 2023 2 commits
    • arm64: kaslr: don't pretend KASLR is enabled if offset < MIN_KIMG_ALIGN · 010338d7
      Ard Biesheuvel authored
      Our virtual KASLR displacement is a randomly chosen multiple of
      2 MiB plus an offset that is equal to the physical placement modulo 2
      MiB. This arrangement ensures that we can always use 2 MiB block
      mappings (or contiguous PTE mappings for 16k or 64k pages) to map the
      kernel.
      
      This means that a KASLR offset of less than 2 MiB is simply the product
      of this physical displacement, and no randomization has actually taken
      place. Currently, we use 'kaslr_offset() > 0' to decide whether or not
      randomization has occurred, and so we misidentify this case.
      
      If the kernel image placement is not randomized, modules are allocated
      from a dedicated region below the kernel mapping, which is only used for
      modules and not for other vmalloc() or vmap() calls.
      
      When randomization is enabled, the kernel image is vmap()'ed randomly
      inside the vmalloc region, and modules are allocated in the vicinity of
      this mapping to ensure that relative references are always in range.
      However, unlike the dedicated module region below the vmalloc region,
      this region is not reserved exclusively for modules, and so ordinary
      vmalloc() calls may end up overlapping with it. This should rarely
      happen, given that vmalloc allocates bottom up, although it cannot be
      ruled out entirely.
      
      The misidentified case results in a placement of the kernel image within
      2 MiB of its default address. However, the logic that randomizes the
      module region is still invoked, and this could result in the module
      region overlapping with the start of the vmalloc region, instead of
      using the dedicated region below it. If this happens, a single large
      vmalloc() or vmap() call will use up the entire region, and leave no
      space for loading modules after that.
      
      Since commit 82046702 ("efi/libstub/arm64: Replace 'preferred'
      offset with alignment check"), this is much more likely to occur on
      systems that boot via EFI but lack an implementation of the EFI RNG
      protocol, as in that case, the EFI stub will decide to leave the image
      where it found it, and the EFI firmware uses 64k alignment only.
      
      Fix this, by correctly identifying the case where the virtual
      displacement is a result of the physical displacement only.
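      
      The essence of the fix is to compare the offset against MIN_KIMG_ALIGN
      rather than zero. A minimal sketch of the corrected predicate (its
      exact home in the arm64 headers is assumed here):
      
        /*
         * Offsets below MIN_KIMG_ALIGN (2 MiB) are just the physical
         * placement modulo 2 MiB, not randomization.
         */
        static inline bool kaslr_enabled(void)
        {
                return kaslr_offset() >= MIN_KIMG_ALIGN;
        }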
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Reviewed-by: Mark Brown <broonie@kernel.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Link: https://lore.kernel.org/r/20230223204101.1500373-1-ardb@kernel.org
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: ftrace: forbid CALL_OPS with CC_OPTIMIZE_FOR_SIZE · b3f11af9
      Mark Rutland authored
      Florian reports that when building with CONFIG_CC_OPTIMIZE_FOR_SIZE=y,
      he sees "Misaligned patch-site" warnings at boot, e.g.
      
      | Misaligned patch-site bcm2836_arm_irqchip_handle_irq+0x0/0x88
      | WARNING: CPU: 0 PID: 0 at arch/arm64/kernel/ftrace.c:120 ftrace_call_adjust+0x4c/0x70
      
      This is because GCC will silently ignore `-falign-functions=N` when
      passed `-Os`, resulting in functions not being aligned as we expect.
      This is a known issue, and to account for this we modified the kernel to
      avoid `-Os` generally. Unfortunately we forgot to account for
      CONFIG_CC_OPTIMIZE_FOR_SIZE.
      
      Forbid the use of CALL_OPS with CONFIG_CC_OPTIMIZE_FOR_SIZE=y to prevent
      this issue. All existing ftrace features will work as before, though
      without the performance benefit of CALL_OPS.
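      
      In Kconfig terms the fix amounts to gating the select on
      !CC_OPTIMIZE_FOR_SIZE, along the lines of this sketch (the
      pre-existing CFI_CLANG exclusion is assumed):
      
        select HAVE_DYNAMIC_FTRACE_WITH_CALL_OPS \
                if (DYNAMIC_FTRACE_WITH_ARGS && !CFI_CLANG && \
                    !CC_OPTIMIZE_FOR_SIZE)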
      Reported-by: Florian Fainelli <f.fainelli@gmail.com>
      Link: http://lore.kernel.org/linux-arm-kernel/2d9284c3-3805-402b-5423-520ced56d047@gmail.com
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Marc Zyngier <maz@kernel.org>
      Cc: Stefan Wahren <stefan.wahren@i2se.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Will Deacon <will@kernel.org>
      Tested-by: Florian Fainelli <f.fainelli@gmail.com>
      Link: https://lore.kernel.org/r/20230227115819.365630-1-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  2. 24 Feb, 2023 2 commits
  3. 22 Feb, 2023 3 commits
  4. 20 Feb, 2023 1 commit
    • arm64: fix .idmap.text assertion for large kernels · d5417081
      Mark Rutland authored
      When building a kernel with many debug options enabled (which happens in
      test configurations used by myself and syzbot), the kernel can become
      large enough that portions of .text can be more than 128M away from
      .idmap.text (which is placed inside the .rodata section). Where idmap
      code branches into .text, the linker will place veneers in the
      .idmap.text section to make those branches possible.
      
      Unfortunately, as Ard reports, GNU LD has been observed to add 4K of
      padding when adding such veneers, e.g.
      
      | .idmap.text    0xffffffc01e48e5c0      0x32c arch/arm64/mm/proc.o
      |                0xffffffc01e48e5c0                idmap_cpu_replace_ttbr1
      |                0xffffffc01e48e600                idmap_kpti_install_ng_mappings
      |                0xffffffc01e48e800                __cpu_setup
      | *fill*         0xffffffc01e48e8ec        0x4
      | .idmap.text.stub
      |                0xffffffc01e48e8f0       0x18 linker stubs
      |                0xffffffc01e48f8f0                __idmap_text_end = .
      |                0xffffffc01e48f000                . = ALIGN (0x1000)
      | *fill*         0xffffffc01e48f8f0      0x710
      |                0xffffffc01e490000                idmap_pg_dir = .
      
      This makes the __idmap_text_start .. __idmap_text_end region bigger than
      the 4K we require it to fit within, and triggers an assertion in arm64's
      vmlinux.lds.S, which breaks the build:
      
      | LD      .tmp_vmlinux.kallsyms1
      | aarch64-linux-gnu-ld: ID map text too big or misaligned
      | make[1]: *** [scripts/Makefile.vmlinux:35: vmlinux] Error 1
      | make: *** [Makefile:1264: vmlinux] Error 2
      
      Avoid this by using an `ADRP+ADD+BLR` sequence for branches out of
      .idmap.text, which avoids the need for veneers. These branches are only
      executed once per boot, and only when the MMU is on, so there should be
      no noticeable performance penalty in replacing `BL` with `ADRP+ADD+BLR`.
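      
      For illustration, a branch out of .idmap.text changes roughly as
      follows (target_func and the choice of scratch register here are
      hypothetical):
      
        /* before: direct branch, limited to a +/-128M range */
        bl      target_func
      
        /* after: ADRP+ADD materialise the address with a +/-4G range,
         * so the linker never needs to insert a veneer */
        adrp    x8, target_func
        add     x8, x8, :lo12:target_func
        blr     x8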
      
      At the same time, remove the "x" and "w" attributes when placing code in
      .idmap.text, as these are not necessary, and this will prevent the
      linker from assuming that it is safe to place PLTs into .idmap.text,
      causing it to warn if and when there are out-of-range branches within
      .idmap.text, e.g.
      
      |   LD      .tmp_vmlinux.kallsyms1
      | arch/arm64/kernel/head.o: in function `primary_entry':
      | (.idmap.text+0x1c): relocation truncated to fit: R_AARCH64_CALL26 against symbol `dcache_clean_poc' defined in .text section in arch/arm64/mm/cache.o
      | arch/arm64/kernel/head.o: in function `init_el2':
      | (.idmap.text+0x88): relocation truncated to fit: R_AARCH64_CALL26 against symbol `dcache_clean_poc' defined in .text section in arch/arm64/mm/cache.o
      | make[1]: *** [scripts/Makefile.vmlinux:34: vmlinux] Error 1
      | make: *** [Makefile:1252: vmlinux] Error 2
      
      Thus, if future changes add out-of-range branches in .idmap.text, it
      should be easy enough to identify those from the resulting linker
      errors.
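      
      The attribute change itself is a one-liner per site; a sketch of the
      before/after section directive (the exact flags in each file may
      vary):
      
        /* before: "awx" lets the linker place PLTs/veneers here */
        .pushsection ".idmap.text", "awx"
      
        /* after: without "w" and "x" the linker refuses to add veneers,
         * so out-of-range branches fail the link loudly */
        .pushsection ".idmap.text", "a"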
      
      Reported-by: syzbot+f8ac312e31226e23302b@syzkaller.appspotmail.com
      Link: https://lore.kernel.org/linux-arm-kernel/00000000000028ea4105f4e2ef54@google.com/
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Ard Biesheuvel <ardb@kernel.org>
      Cc: Will Deacon <will@kernel.org>
      Tested-by: Ard Biesheuvel <ardb@kernel.org>
      Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
      Link: https://lore.kernel.org/r/20230220162317.1581208-1-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  5. 10 Feb, 2023 3 commits
    • Merge branch 'for-next/signal' into for-next/core · ad4a4d3a
      Catalin Marinas authored
      * for-next/signal:
        : Signal handling cleanups
        arm64/signal: Only read new data when parsing the ZT context
        arm64/signal: Only read new data when parsing the ZA context
        arm64/signal: Only read new data when parsing the SVE context
        arm64/signal: Avoid rereading context frame sizes
        arm64/signal: Make interface for restore_fpsimd_context() consistent
        arm64/signal: Remove redundant size validation from parse_user_sigframe()
        arm64/signal: Don't redundantly verify FPSIMD magic
    • Merge branch 'for-next/sysreg-hwcaps' into for-next/core · 96004636
      Catalin Marinas authored
      * for-next/sysreg-hwcaps:
        : Make use of sysreg helpers for hwcaps
        arm64/cpufeature: Use helper macros to specify hwcaps
        arm64/cpufeature: Always use symbolic name for feature value in hwcaps
        arm64/sysreg: Initial unsigned annotations for ID registers
        arm64/sysreg: Initial annotation of signed ID registers
        arm64/sysreg: Allow enumerations to be declared as signed or unsigned
    • Merge branches 'for-next/sysreg', 'for-next/sme', 'for-next/kselftest',... · 156010ed
      Catalin Marinas authored
      Merge branches 'for-next/sysreg', 'for-next/sme', 'for-next/kselftest', 'for-next/misc', 'for-next/sme2', 'for-next/tpidr2', 'for-next/scs', 'for-next/compat-hwcap', 'for-next/ftrace', 'for-next/efi-boot-mmu-on', 'for-next/ptrauth' and 'for-next/pseudo-nmi', remote-tracking branch 'arm64/for-next/perf' into for-next/core
      
      * arm64/for-next/perf:
        perf: arm_spe: Print the version of SPE detected
        perf: arm_spe: Add support for SPEv1.2 inverted event filtering
        perf: Add perf_event_attr::config3
        drivers/perf: fsl_imx8_ddr_perf: Remove set-but-not-used variable
        perf: arm_spe: Support new SPEv1.2/v8.7 'not taken' event
        perf: arm_spe: Use new PMSIDR_EL1 register enums
        perf: arm_spe: Drop BIT() and use FIELD_GET/PREP accessors
        arm64/sysreg: Convert SPE registers to automatic generation
        arm64: Drop SYS_ from SPE register defines
        perf: arm_spe: Use feature numbering for PMSEVFR_EL1 defines
        perf/marvell: Add ACPI support to TAD uncore driver
        perf/marvell: Add ACPI support to DDR uncore driver
        perf/arm-cmn: Reset DTM_PMU_CONFIG at probe
        drivers/perf: hisi: Extract initialization of "cpa_pmu->pmu"
        drivers/perf: hisi: Simplify the parameters of hisi_pmu_init()
        drivers/perf: hisi: Advertise the PERF_PMU_CAP_NO_EXCLUDE capability
      
      * for-next/sysreg:
        : arm64 sysreg and cpufeature fixes/updates
        KVM: arm64: Use symbolic definition for ISR_EL1.A
        arm64/sysreg: Add definition of ISR_EL1
        arm64/sysreg: Add definition for ICC_NMIAR1_EL1
        arm64/cpufeature: Remove 4 bit assumption in ARM64_FEATURE_MASK()
        arm64/sysreg: Fix errors in 32 bit enumeration values
        arm64/cpufeature: Fix field sign for DIT hwcap detection
      
      * for-next/sme:
        : SME-related updates
        arm64/sme: Optimise SME exit on syscall entry
        arm64/sme: Don't use streaming mode to probe the maximum SME VL
        arm64/ptrace: Use system_supports_tpidr2() to check for TPIDR2 support
      
      * for-next/kselftest: (23 commits)
        : arm64 kselftest fixes and improvements
        kselftest/arm64: Don't require FA64 for streaming SVE+ZA tests
        kselftest/arm64: Copy whole EXTRA context
        kselftest/arm64: Fix enumeration of systems without 128 bit SME for SSVE+ZA
        kselftest/arm64: Fix enumeration of systems without 128 bit SME
        kselftest/arm64: Don't require FA64 for streaming SVE tests
        kselftest/arm64: Limit the maximum VL we try to set via ptrace
        kselftest/arm64: Correct buffer size for SME ZA storage
        kselftest/arm64: Remove the local NUM_VL definition
        kselftest/arm64: Verify simultaneous SSVE and ZA context generation
        kselftest/arm64: Verify that SSVE signal context has SVE_SIG_FLAG_SM set
        kselftest/arm64: Remove spurious comment from MTE test Makefile
        kselftest/arm64: Support build of MTE tests with clang
        kselftest/arm64: Initialise current at build time in signal tests
        kselftest/arm64: Don't pass headers to the compiler as source
        kselftest/arm64: Remove redundant _start labels from FP tests
        kselftest/arm64: Fix .pushsection for strings in FP tests
        kselftest/arm64: Run BTI selftests on systems without BTI
        kselftest/arm64: Fix test numbering when skipping tests
        kselftest/arm64: Skip non-power of 2 SVE vector lengths in fp-stress
        kselftest/arm64: Only enumerate power of two VLs in syscall-abi
        ...
      
      * for-next/misc:
        : Miscellaneous arm64 updates
        arm64/mm: Intercept pfn changes in set_pte_at()
        Documentation: arm64: correct spelling
        arm64: traps: attempt to dump all instructions
        arm64: Apply dynamic shadow call stack patching in two passes
        arm64: el2_setup.h: fix spelling typo in comments
        arm64: Kconfig: fix spelling
        arm64: cpufeature: Use kstrtobool() instead of strtobool()
        arm64: Avoid repeated AA64MMFR1_EL1 register read on pagefault path
        arm64: make ARCH_FORCE_MAX_ORDER selectable
      
      * for-next/sme2: (23 commits)
        : Support for arm64 SME 2 and 2.1
        arm64/sme: Fix __finalise_el2 SMEver check
        kselftest/arm64: Remove redundant _start labels from zt-test
        kselftest/arm64: Add coverage of SME 2 and 2.1 hwcaps
        kselftest/arm64: Add coverage of the ZT ptrace regset
        kselftest/arm64: Add SME2 coverage to syscall-abi
        kselftest/arm64: Add test coverage for ZT register signal frames
        kselftest/arm64: Teach the generic signal context validation about ZT
        kselftest/arm64: Enumerate SME2 in the signal test utility code
        kselftest/arm64: Cover ZT in the FP stress test
        kselftest/arm64: Add a stress test program for ZT0
        arm64/sme: Add hwcaps for SME 2 and 2.1 features
        arm64/sme: Implement ZT0 ptrace support
        arm64/sme: Implement signal handling for ZT
        arm64/sme: Implement context switching for ZT0
        arm64/sme: Provide storage for ZT0
        arm64/sme: Add basic enumeration for SME2
        arm64/sme: Enable host kernel to access ZT0
        arm64/sme: Manually encode ZT0 load and store instructions
        arm64/esr: Document ISS for ZT0 being disabled
        arm64/sme: Document SME 2 and SME 2.1 ABI
        ...
      
      * for-next/tpidr2:
        : Include TPIDR2 in the signal context
        kselftest/arm64: Add test case for TPIDR2 signal frame records
        kselftest/arm64: Add TPIDR2 to the set of known signal context records
        arm64/signal: Include TPIDR2 in the signal context
        arm64/sme: Document ABI for TPIDR2 signal information
      
      * for-next/scs:
        : arm64: harden shadow call stack pointer handling
        arm64: Stash shadow stack pointer in the task struct on interrupt
        arm64: Always load shadow stack pointer directly from the task struct
      
      * for-next/compat-hwcap:
        : arm64: Expose compat ARMv8 AArch32 features (HWCAPs)
        arm64: Add compat hwcap SSBS
        arm64: Add compat hwcap SB
        arm64: Add compat hwcap I8MM
        arm64: Add compat hwcap ASIMDBF16
        arm64: Add compat hwcap ASIMDFHM
        arm64: Add compat hwcap ASIMDDP
        arm64: Add compat hwcap FPHP and ASIMDHP
      
      * for-next/ftrace:
        : Add arm64 support for DYNAMIC_FTRACE_WITH_CALL_OPS
        arm64: avoid executing padding bytes during kexec / hibernation
        arm64: Implement HAVE_DYNAMIC_FTRACE_WITH_CALL_OPS
        arm64: ftrace: Update stale comment
        arm64: patching: Add aarch64_insn_write_literal_u64()
        arm64: insn: Add helpers for BTI
        arm64: Extend support for CONFIG_FUNCTION_ALIGNMENT
        ACPI: Don't build ACPICA with '-Os'
        Compiler attributes: GCC cold function alignment workarounds
        ftrace: Add DYNAMIC_FTRACE_WITH_CALL_OPS
      
      * for-next/efi-boot-mmu-on:
        : Permit arm64 EFI boot with MMU and caches on
        arm64: kprobes: Drop ID map text from kprobes blacklist
        arm64: head: Switch endianness before populating the ID map
        efi: arm64: enter with MMU and caches enabled
        arm64: head: Clean the ID map and the HYP text to the PoC if needed
        arm64: head: avoid cache invalidation when entering with the MMU on
        arm64: head: record the MMU state at primary entry
        arm64: kernel: move identity map out of .text mapping
        arm64: head: Move all finalise_el2 calls to after __enable_mmu
      
      * for-next/ptrauth:
        : arm64 pointer authentication cleanup
        arm64: pauth: don't sign leaf functions
        arm64: unify asm-arch manipulation
      
      * for-next/pseudo-nmi:
        : Pseudo-NMI code generation optimisations
        arm64: irqflags: use alternative branches for pseudo-NMI logic
        arm64: add ARM64_HAS_GIC_PRIO_RELAXED_SYNC cpucap
        arm64: make ARM64_HAS_GIC_PRIO_MASKING depend on ARM64_HAS_GIC_CPUIF_SYSREGS
        arm64: rename ARM64_HAS_IRQ_PRIO_MASKING to ARM64_HAS_GIC_PRIO_MASKING
        arm64: rename ARM64_HAS_SYSREG_GIC_CPUIF to ARM64_HAS_GIC_CPUIF_SYSREGS
  6. 07 Feb, 2023 6 commits
  7. 06 Feb, 2023 1 commit
  8. 03 Feb, 2023 1 commit
  9. 01 Feb, 2023 17 commits
  10. 31 Jan, 2023 4 commits
    • arm64: irqflags: use alternative branches for pseudo-NMI logic · a5f61cc6
      Mark Rutland authored
      Due to the way we use alternatives in the irqflags code, even when
      CONFIG_ARM64_PSEUDO_NMI=n, we generate unused alternative code for
      pseudo-NMI management. This patch reworks the irqflags code to remove
      the redundant code when CONFIG_ARM64_PSEUDO_NMI=n, which benefits the
      more common case, and will permit further rework of our DAIF management
      (e.g. in preparation for ARMv8.8-A's NMI feature).
      
      Prior to this patch a defconfig kernel has hundreds of redundant
      instructions to access ICC_PMR_EL1 (which should only need to be
      manipulated in setup code), which this patch removes:
      
      | [mark@lakrids:~/src/linux]% usekorg 12.1.0 aarch64-linux-objdump -d vmlinux-before-defconfig | grep icc_pmr_el1 | wc -l
      | 885
      | [mark@lakrids:~/src/linux]% usekorg 12.1.0 aarch64-linux-objdump -d vmlinux-after-defconfig | grep icc_pmr_el1 | wc -l
      | 5
      
      Those instructions alone account for more than 3KiB of kernel text, and
      will be associated with additional alt_instr entries, padding and
      branches, etc.
      
      These redundant instructions exist because we use alternative sequences
      to choose between DAIF / PMR management in irqflags.h, and even when
      CONFIG_ARM64_PSEUDO_NMI=n, those alternative sequences will generate the
      code for PMR management, along with alt_instr entries. We use
      alternatives here as this was necessary to ensure that we never
      encounter a mismatched local_irq_save() ... local_irq_restore() sequence
      in the middle of patching, which was possible to see if we used static
      keys to choose between DAIF and PMR management.
      
      Since commit:
      
        21fb26bf ("arm64: alternatives: add alternative_has_feature_*()")
      
      ... we have a mechanism to use alternatives similarly to static keys,
      allowing us to write the bulk of the logic in C code while also being
      able to rely on all sites being patched in one go, and avoiding a
      mismatched local_irq_save() ... local_irq_restore() sequence
      during patching.
      
      This patch rewrites arm64's local_irq_*() functions to use alternative
      branches. This allows for the pseudo-NMI code to be entirely elided when
      CONFIG_ARM64_PSEUDO_NMI=n, making a defconfig Image 64KiB smaller, and
      not affecting the size of an Image with CONFIG_ARM64_PSEUDO_NMI=y:
      
      | [mark@lakrids:~/src/linux]% ls -al vmlinux-*
      | -rwxr-xr-x 1 mark mark 137473432 Jan 18 11:11 vmlinux-after-defconfig
      | -rwxr-xr-x 1 mark mark 137918776 Jan 18 11:15 vmlinux-after-pnmi
      | -rwxr-xr-x 1 mark mark 137380152 Jan 18 11:03 vmlinux-before-defconfig
      | -rwxr-xr-x 1 mark mark 137523704 Jan 18 11:08 vmlinux-before-pnmi
      | [mark@lakrids:~/src/linux]% ls -al Image-*
      | -rw-r--r-- 1 mark mark 38646272 Jan 18 11:11 Image-after-defconfig
      | -rw-r--r-- 1 mark mark 38777344 Jan 18 11:14 Image-after-pnmi
      | -rw-r--r-- 1 mark mark 38711808 Jan 18 11:03 Image-before-defconfig
      | -rw-r--r-- 1 mark mark 38777344 Jan 18 11:08 Image-before-pnmi
      
      Some sensitive code depends on being run with interrupts enabled or with
      interrupts disabled, and so when enabling or disabling interrupts we
      must ensure that the compiler does not move such code around the actual
      enable/disable. Before this patch, that was ensured by the combined asm
      volatile blocks having memory clobbers (and any sensitive code either
      being asm volatile, or touching memory). This patch consistently uses
      explicit barrier() operations before and after the enable/disable, which
      allows us to use the usual sysreg accessors (which are asm volatile) to
      manipulate the interrupt masks. The use of pmr_sync() is pulled within
      this critical section for consistency.
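      
      Putting this together, the reworked enable path looks roughly like
      the sketch below (helper names are abbreviated from the patch and
      should be treated as assumptions):
      
        static __always_inline void __daif_local_irq_enable(void)
        {
                barrier();      /* keep sensitive code on its side */
                asm volatile("msr daifclr, #3");
                barrier();
        }
      
        static __always_inline void __pmr_local_irq_enable(void)
        {
                barrier();
                write_sysreg_s(GIC_PRIO_IRQON, SYS_ICC_PMR_EL1);
                pmr_sync();     /* pulled inside the barriers */
                barrier();
        }
      
        static inline void arch_local_irq_enable(void)
        {
                /* alternative branch, patched in one go like a static key */
                if (system_uses_irq_prio_masking())
                        __pmr_local_irq_enable();
                else
                        __daif_local_irq_enable();
        }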
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Marc Zyngier <maz@kernel.org>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20230130145429.903791-6-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: add ARM64_HAS_GIC_PRIO_RELAXED_SYNC cpucap · 8bf0a804
      Mark Rutland authored
      When Priority Mask Hint Enable (PMHE) == 0b1, the GIC may use the PMR
      value to determine whether to signal an IRQ to a PE, and consequently
      after a change to the PMR value, a DSB SY may be required to ensure that
      interrupts are signalled to a CPU in finite time. When PMHE == 0b0,
      interrupts are always signalled to the relevant PE, and all masking
      occurs locally, without requiring a DSB SY.
      
      Since commit:
      
        f2266504 ("arm64: Relax ICC_PMR_EL1 accesses when ICC_CTLR_EL1.PMHE is clear")
      
      ... we handle this dynamically: in most cases a static key is used to
      determine whether to issue a DSB SY, but the entry code must read from
      ICC_CTLR_EL1 as static keys aren't accessible from plain assembly.
      
      It would be much nicer to use an alternative instruction sequence for
      the DSB, as this would avoid the need to read from ICC_CTLR_EL1 in the
      entry code, and for most other code this will result in simpler code
      generation with fewer instructions and fewer branches.
      
      This patch adds a new ARM64_HAS_GIC_PRIO_RELAXED_SYNC cpucap which is
      only set when ICC_CTLR_EL1.PMHE == 0b0 (and GIC priority masking is in
      use). This allows us to replace the existing users of the
      `gic_pmr_sync` static key with alternative sequences which default to a
      DSB SY and are relaxed to a NOP when PMHE is not in use.
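      
      Concretely, each such site becomes an alternative that defaults to
      the barrier and is patched out on relaxed-sync systems; a hedged
      sketch:
      
        /* default (PMHE == 1): ensure the PMR write is observed by the
         * GIC in finite time */
        alternative_if_not ARM64_HAS_GIC_PRIO_RELAXED_SYNC
        dsb     sy
        alternative_else_nop_endif
        /* cpucap set (PMHE == 0): the DSB above is relaxed to a NOP */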
      
      The entry assembly management of the PMR is slightly restructured to use
      a branch (rather than multiple NOPs) when priority masking is not in
      use. This is more in keeping with other alternatives in the entry
      assembly, and permits the use of a separate alternatives for the
      PMHE-dependent DSB SY (and removal of the conditional branch this
      currently requires). For consistency I've adjusted both the save and
      restore paths.
      
      According to bloat-o-meter, when building defconfig +
      CONFIG_ARM64_PSEUDO_NMI=y this shrinks the kernel text by ~4KiB:
      
      | add/remove: 4/2 grow/shrink: 42/310 up/down: 332/-5032 (-4700)
      
      The resulting vmlinux is ~66KiB smaller, though the resulting Image size
      is unchanged due to padding and alignment:
      
      | [mark@lakrids:~/src/linux]% ls -al vmlinux-*
      | -rwxr-xr-x 1 mark mark 137508344 Jan 17 14:11 vmlinux-after
      | -rwxr-xr-x 1 mark mark 137575440 Jan 17 13:49 vmlinux-before
      | [mark@lakrids:~/src/linux]% ls -al Image-*
      | -rw-r--r-- 1 mark mark 38777344 Jan 17 14:11 Image-after
      | -rw-r--r-- 1 mark mark 38777344 Jan 17 13:49 Image-before
      
      Prior to this patch we did not verify the state of ICC_CTLR_EL1.PMHE on
      secondary CPUs. As of this patch this is verified by the cpufeature code
      when using GIC priority masking (i.e. when using pseudo-NMIs).
      
      Note that since commit:
      
        7e3a57fa ("arm64: Document ICC_CTLR_EL3.PMHE setting requirements")
      
      ... Documentation/arm64/booting.rst specifies:
      
      |      - ICC_CTLR_EL3.PMHE (bit 6) must be set to the same value across
      |        all CPUs the kernel is executing on, and must stay constant
      |        for the lifetime of the kernel.
      
      ... so that should not adversely affect any compliant systems, and as
      we'll only check for the absence of PMHE when using pseudo-NMIs, this
      will only fire when such a mismatch would adversely affect the system.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Marc Zyngier <maz@kernel.org>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20230130145429.903791-5-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: make ARM64_HAS_GIC_PRIO_MASKING depend on ARM64_HAS_GIC_CPUIF_SYSREGS · 4b43f1cd
      Mark Rutland authored
      Currently the arm64_cpu_capabilities structure for
      ARM64_HAS_GIC_PRIO_MASKING open-codes the same CPU field definitions as
      the arm64_cpu_capabilities structure for ARM64_HAS_GIC_CPUIF_SYSREGS, so
      that can_use_gic_priorities() can use has_useable_gicv3_cpuif().
      
      This duplication isn't ideal for the legibility of the code, and sets a
      bad example for any ARM64_HAS_GIC_* definitions added by subsequent
      patches.
      
      Instead, have ARM64_HAS_GIC_PRIO_MASKING check for the
      ARM64_HAS_GIC_CPUIF_SYSREGS cpucap, and add a comment explaining why
      this is safe. Subsequent patches will use the same pattern where one
      cpucap depends upon another.
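      
      The resulting matches callback reduces to something like the
      following sketch (the exact function and symbol names are assumed):
      
        static bool can_use_gic_priorities(const struct arm64_cpu_capabilities *entry,
                                           int scope)
        {
                /*
                 * ARM64_HAS_GIC_CPUIF_SYSREGS is detected on the boot CPU
                 * before this cpucap, so depending on it here is safe.
                 */
                if (!cpus_have_cap(ARM64_HAS_GIC_CPUIF_SYSREGS))
                        return false;
      
                return enable_pseudo_nmi;
        }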
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Marc Zyngier <maz@kernel.org>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20230130145429.903791-4-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: rename ARM64_HAS_IRQ_PRIO_MASKING to ARM64_HAS_GIC_PRIO_MASKING · c888b7bd
      Mark Rutland authored
      Subsequent patches will add more GIC-related cpucaps. When we do so, it
      would be nice to give them a consistent HAS_GIC_* prefix.
      
      In preparation for doing so, this patch renames the existing
      ARM64_HAS_IRQ_PRIO_MASKING cap to ARM64_HAS_GIC_PRIO_MASKING.
      
      The cpucaps file was hand-modified; all other changes were scripted
      with:
      
        find . -type f -name '*.[chS]' -print0 | \
          xargs -0 sed -i 's/ARM64_HAS_IRQ_PRIO_MASKING/ARM64_HAS_GIC_PRIO_MASKING/'
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Marc Zyngier <maz@kernel.org>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20230130145429.903791-3-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>