Commit 6734e20e authored by Linus Torvalds

Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux

Pull arm64 updates from Will Deacon:
 "There's quite a lot of code here, but much of it is due to the
  addition of a new PMU driver as well as some arm64-specific selftests
  which is an area where we've traditionally been lagging a bit.

  In terms of exciting features, this includes support for the Memory
  Tagging Extension which narrowly missed 5.9, hopefully allowing
  userspace to run with use-after-free detection in production on CPUs
  that support it. Work is ongoing to integrate the feature with KASAN
  for 5.11.

  Another change that I'm excited about (assuming they get the hardware
  right) is preparing the ASID allocator for sharing the CPU page-table
  with the SMMU. Those changes will also come in via Joerg with the
  IOMMU pull.

  We do stray outside of our usual directories in a few places, mostly
  due to core changes required by MTE. Although much of this has been
  Acked, there were a couple of places where we unfortunately didn't get
  any review feedback.

   Other than that, we ran into a handful of minor conflicts in -next,
   but nothing that should pose any issues.

  Summary:

   - Userspace support for the Memory Tagging Extension introduced by
     Armv8.5. Kernel support (via KASAN) is likely to follow in 5.11.

   - Selftests for MTE, Pointer Authentication and FPSIMD/SVE context
     switching.

   - Fix and subsequent rewrite of our Spectre mitigations, including
     the addition of support for PR_SPEC_DISABLE_NOEXEC.

   - Support for the Armv8.3 Pointer Authentication enhancements.

   - Support for ASID pinning, which is required when sharing
     page-tables with the SMMU.

   - MM updates, including treating flush_tlb_fix_spurious_fault() as a
     no-op.

   - Perf/PMU driver updates, including addition of the ARM CMN PMU
     driver and also support to handle CPU PMU IRQs as NMIs.

   - Allow prefetchable PCI BARs to be exposed to userspace using normal
     non-cacheable mappings.

   - Implementation of ARCH_STACKWALK for unwinding.

   - Improve reporting of unexpected kernel traps due to BPF JIT
     failure.

   - Improve robustness of user-visible HWCAP strings and their
     corresponding numerical constants.

   - Removal of TEXT_OFFSET.

   - Removal of some unused functions, parameters and prototypes.

   - Removal of MPIDR-based topology detection in favour of firmware
     description.

   - Cleanups to handling of SVE and FPSIMD register state in
     preparation for potential future optimisation of handling across
     syscalls.

   - Cleanups to the SDEI driver in preparation for support in KVM.

   - Miscellaneous cleanups and refactoring work"

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (148 commits)
  Revert "arm64: initialize per-cpu offsets earlier"
  arm64: random: Remove no longer needed prototypes
  arm64: initialize per-cpu offsets earlier
  kselftest/arm64: Check mte tagged user address in kernel
  kselftest/arm64: Verify KSM page merge for MTE pages
  kselftest/arm64: Verify all different mmap MTE options
  kselftest/arm64: Check forked child mte memory accessibility
  kselftest/arm64: Verify mte tag inclusion via prctl
  kselftest/arm64: Add utilities and a test to validate mte memory
  perf: arm-cmn: Fix conversion specifiers for node type
  perf: arm-cmn: Fix unsigned comparison to less than zero
  arm64: dbm: Invalidate local TLB when setting TCR_EL1.HD
  arm64: mm: Make flush_tlb_fix_spurious_fault() a no-op
  arm64: Add support for PR_SPEC_DISABLE_NOEXEC prctl() option
  arm64: Pull in task_stack_page() to Spectre-v4 mitigation code
  KVM: arm64: Allow patching EL2 vectors even with KASLR is not enabled
  arm64: Get rid of arm64_ssbd_state
  KVM: arm64: Convert ARCH_WORKAROUND_2 to arm64_get_spectre_v4_state()
  KVM: arm64: Get rid of kvm_arm_have_ssbd()
  KVM: arm64: Simplify handling of ARCH_WORKAROUND_2
  ...
parents d04a248f d13027bb
=============================
Arm Coherent Mesh Network PMU
=============================
CMN-600 is a configurable mesh interconnect consisting of a rectangular
grid of crosspoints (XPs), with each crosspoint supporting up to two
device ports to which various AMBA CHI agents are attached.
CMN implements a distributed PMU design as part of its debug and trace
functionality. This consists of a local monitor (DTM) at every XP, which
counts up to 4 event signals from the connected device nodes and/or the
XP itself. Overflow from these local counters is accumulated in up to 8
global counters implemented by the main controller (DTC), which provides
overall PMU control and interrupts for global counter overflow.
PMU events
----------
The PMU driver registers a single PMU device for the whole interconnect,
see /sys/bus/event_source/devices/arm_cmn. Multi-chip systems may link
more than one CMN together via external CCIX links - in this situation,
each mesh counts its own events entirely independently, and additional
PMU devices will be named arm_cmn_{1..n}.
Most events are specified in a format based directly on the TRM
definitions - "type" selects the respective node type, and "eventid" the
event number. Some events require an additional occupancy ID, which is
specified by "occupid".
* Since RN-D nodes do not have any distinct events from RN-I nodes, they
are treated as the same type (0xa), and the common event templates are
named "rnid_*".
* The cycle counter is treated as a synthetic event belonging to the DTC
node ("type" == 0x3, "eventid" is ignored).
* XP events also encode the port and channel in the "eventid" field, to
match the underlying pmu_event0_id encoding for the pmu_event_sel
register. The event templates are named with prefixes to cover all
permutations.
By default each event provides an aggregate count over all nodes of the
given type. To target a specific node, "bynodeid" must be set to 1 and
"nodeid" to the appropriate value derived from the CMN configuration
(as defined in the "Node ID Mapping" section of the TRM).
Watchpoints
-----------
The PMU can also count watchpoint events to monitor specific flit
traffic. Watchpoints are treated as a synthetic event type, and like PMU
events can be global or targeted with a particular XP's "nodeid" value.
Since the watchpoint direction is otherwise implicit in the underlying
register selection, separate events are provided for flit uploads and
downloads.
The flit match value and mask are passed in config1 and config2 ("val"
and "mask" respectively). "wp_dev_sel", "wp_chn_sel", "wp_grp" and
"wp_exclusive" are specified per the TRM definitions for dtm_wp_config0.
Where a watchpoint needs to match fields from both match groups on the
REQ or SNP channel, it can be specified as two events - one for each
group - with the same nonzero "combine" value. The count for such a
pair of combined events will be attributed to the primary match.
Watchpoint events with a "combine" value of 0 are considered independent
and will count individually.
...@@ -12,6 +12,7 @@ Performance monitor support ...@@ -12,6 +12,7 @@ Performance monitor support
qcom_l2_pmu qcom_l2_pmu
qcom_l3_pmu qcom_l3_pmu
arm-ccn arm-ccn
arm-cmn
xgene-pmu xgene-pmu
arm_dsu_pmu arm_dsu_pmu
thunderx2-pmu thunderx2-pmu
...@@ -175,6 +175,8 @@ infrastructure: ...@@ -175,6 +175,8 @@ infrastructure:
+------------------------------+---------+---------+ +------------------------------+---------+---------+
| Name | bits | visible | | Name | bits | visible |
+------------------------------+---------+---------+ +------------------------------+---------+---------+
| MTE | [11-8] | y |
+------------------------------+---------+---------+
| SSBS | [7-4] | y | | SSBS | [7-4] | y |
+------------------------------+---------+---------+ +------------------------------+---------+---------+
| BT | [3-0] | y | | BT | [3-0] | y |
......
...@@ -240,6 +240,10 @@ HWCAP2_BTI ...@@ -240,6 +240,10 @@ HWCAP2_BTI
Functionality implied by ID_AA64PFR0_EL1.BT == 0b0001. Functionality implied by ID_AA64PFR0_EL1.BT == 0b0001.
HWCAP2_MTE
Functionality implied by ID_AA64PFR1_EL1.MTE == 0b0010, as described
by Documentation/arm64/memory-tagging-extension.rst.
4. Unused AT_HWCAP bits 4. Unused AT_HWCAP bits
----------------------- -----------------------
......
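As with the other hwcaps documented above, userspace discovers HWCAP2_MTE through the auxiliary vector. A minimal detection sketch using getauxval(); the fallback HWCAP2_MTE value is an assumption matching the arm64 uapi <asm/hwcap.h>, for toolchains whose headers predate MTE:

#include <stdio.h>
#include <sys/auxv.h>

#ifndef HWCAP2_MTE
#define HWCAP2_MTE	(1 << 18)	/* assumed value from arm64 uapi <asm/hwcap.h> */
#endif

int main(void)
{
	/* AT_HWCAP2 carries the HWCAP2_* bits described above. */
	if (getauxval(AT_HWCAP2) & HWCAP2_MTE)
		printf("MTE supported by the kernel and CPU\n");
	else
		printf("MTE not available\n");
	return 0;
}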
...@@ -14,6 +14,7 @@ ARM64 Architecture ...@@ -14,6 +14,7 @@ ARM64 Architecture
hugetlbpage hugetlbpage
legacy_instructions legacy_instructions
memory memory
memory-tagging-extension
perf perf
pointer-authentication pointer-authentication
silicon-errata silicon-errata
......
This diff is collapsed.
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
# Copyright 2020 Arm Ltd.
%YAML 1.2
---
$id: http://devicetree.org/schemas/perf/arm,cmn.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Arm CMN (Coherent Mesh Network) Performance Monitors
maintainers:
- Robin Murphy <robin.murphy@arm.com>
properties:
compatible:
const: arm,cmn-600
reg:
items:
- description: Physical address of the base (PERIPHBASE) and
size (up to 64MB) of the configuration address space.
interrupts:
minItems: 1
maxItems: 4
items:
- description: Overflow interrupt for DTC0
- description: Overflow interrupt for DTC1
- description: Overflow interrupt for DTC2
- description: Overflow interrupt for DTC3
description: One interrupt for each DTC domain implemented must
be specified, in order. DTC0 is always present.
arm,root-node:
$ref: /schemas/types.yaml#/definitions/uint32
description: Offset from PERIPHBASE of the configuration
discovery node (see TRM definition of ROOTNODEBASE).
required:
- compatible
- reg
- interrupts
- arm,root-node
additionalProperties: false
examples:
- |
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/interrupt-controller/irq.h>
pmu@50000000 {
compatible = "arm,cmn-600";
reg = <0x50000000 0x4000000>;
/* 4x2 mesh with one DTC, and CFG node at 0,1,1,0 */
interrupts = <GIC_SPI 46 IRQ_TYPE_LEVEL_HIGH>;
arm,root-node = <0x104000>;
};
...
...@@ -54,9 +54,9 @@ these functions (see arch/arm{,64}/include/asm/virt.h): ...@@ -54,9 +54,9 @@ these functions (see arch/arm{,64}/include/asm/virt.h):
x3 = x1's value when entering the next payload (arm64) x3 = x1's value when entering the next payload (arm64)
x4 = x2's value when entering the next payload (arm64) x4 = x2's value when entering the next payload (arm64)
Mask all exceptions, disable the MMU, move the arguments into place Mask all exceptions, disable the MMU, clear I+D bits, move the arguments
(arm64 only), and jump to the restart address while at HYP/EL2. This into place (arm64 only), and jump to the restart address while at HYP/EL2.
hypercall is not expected to return to its caller. This hypercall is not expected to return to its caller.
Any other value of r0/x0 triggers a hypervisor-specific handling, Any other value of r0/x0 triggers a hypervisor-specific handling,
which is not documented here. which is not documented here.
......
...@@ -29,6 +29,7 @@ config ARM64 ...@@ -29,6 +29,7 @@ config ARM64
select ARCH_HAS_SETUP_DMA_OPS select ARCH_HAS_SETUP_DMA_OPS
select ARCH_HAS_SET_DIRECT_MAP select ARCH_HAS_SET_DIRECT_MAP
select ARCH_HAS_SET_MEMORY select ARCH_HAS_SET_MEMORY
select ARCH_STACKWALK
select ARCH_HAS_STRICT_KERNEL_RWX select ARCH_HAS_STRICT_KERNEL_RWX
select ARCH_HAS_STRICT_MODULE_RWX select ARCH_HAS_STRICT_MODULE_RWX
select ARCH_HAS_SYNC_DMA_FOR_DEVICE select ARCH_HAS_SYNC_DMA_FOR_DEVICE
...@@ -211,12 +212,18 @@ config ARM64_PAGE_SHIFT ...@@ -211,12 +212,18 @@ config ARM64_PAGE_SHIFT
default 14 if ARM64_16K_PAGES default 14 if ARM64_16K_PAGES
default 12 default 12
config ARM64_CONT_SHIFT config ARM64_CONT_PTE_SHIFT
int int
default 5 if ARM64_64K_PAGES default 5 if ARM64_64K_PAGES
default 7 if ARM64_16K_PAGES default 7 if ARM64_16K_PAGES
default 4 default 4
config ARM64_CONT_PMD_SHIFT
int
default 5 if ARM64_64K_PAGES
default 5 if ARM64_16K_PAGES
default 4
config ARCH_MMAP_RND_BITS_MIN config ARCH_MMAP_RND_BITS_MIN
default 14 if ARM64_64K_PAGES default 14 if ARM64_64K_PAGES
default 16 if ARM64_16K_PAGES default 16 if ARM64_16K_PAGES
...@@ -1165,32 +1172,6 @@ config UNMAP_KERNEL_AT_EL0 ...@@ -1165,32 +1172,6 @@ config UNMAP_KERNEL_AT_EL0
If unsure, say Y. If unsure, say Y.
config HARDEN_BRANCH_PREDICTOR
bool "Harden the branch predictor against aliasing attacks" if EXPERT
default y
help
Speculation attacks against some high-performance processors rely on
being able to manipulate the branch predictor for a victim context by
executing aliasing branches in the attacker context. Such attacks
can be partially mitigated against by clearing internal branch
predictor state and limiting the prediction logic in some situations.
This config option will take CPU-specific actions to harden the
branch predictor against aliasing attacks and may rely on specific
instruction sequences or control bits being set by the system
firmware.
If unsure, say Y.
config ARM64_SSBD
bool "Speculative Store Bypass Disable" if EXPERT
default y
help
This enables mitigation of the bypassing of previous stores
by speculative loads.
If unsure, say Y.
config RODATA_FULL_DEFAULT_ENABLED config RODATA_FULL_DEFAULT_ENABLED
bool "Apply r/o permissions of VM areas also to their linear aliases" bool "Apply r/o permissions of VM areas also to their linear aliases"
default y default y
...@@ -1664,6 +1645,39 @@ config ARCH_RANDOM ...@@ -1664,6 +1645,39 @@ config ARCH_RANDOM
provides a high bandwidth, cryptographically secure provides a high bandwidth, cryptographically secure
hardware random number generator. hardware random number generator.
config ARM64_AS_HAS_MTE
# Initial support for MTE went in binutils 2.32.0, checked with
# ".arch armv8.5-a+memtag" below. However, this was incomplete
# as a late addition to the final architecture spec (LDGM/STGM)
# is only supported in the newer 2.32.x and 2.33 binutils
# versions, hence the extra "stgm" instruction check below.
def_bool $(as-instr,.arch armv8.5-a+memtag\nstgm xzr$(comma)[x0])
config ARM64_MTE
bool "Memory Tagging Extension support"
default y
depends on ARM64_AS_HAS_MTE && ARM64_TAGGED_ADDR_ABI
select ARCH_USES_HIGH_VMA_FLAGS
help
Memory Tagging (part of the ARMv8.5 Extensions) provides
architectural support for run-time, always-on detection of
various classes of memory error to aid with software debugging
to eliminate vulnerabilities arising from memory-unsafe
languages.
This option enables the support for the Memory Tagging
Extension at EL0 (i.e. for userspace).
Selecting this option allows the feature to be detected at
runtime. Any secondary CPU not implementing this feature will
not be allowed a late bring-up.
Userspace binaries that want to use this feature must
explicitly opt in. The mechanism for the userspace is
described in:
Documentation/arm64/memory-tagging-extension.rst.
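As the help text says, userspace must opt in explicitly. A minimal sketch of that opt-in via prctl(), assuming the PR_SET_TAGGED_ADDR_CTRL/PR_MTE_* definitions from <linux/prctl.h>; the fallback values below are assumptions for older headers:

#include <stdio.h>
#include <sys/prctl.h>

#ifndef PR_SET_TAGGED_ADDR_CTRL
#define PR_SET_TAGGED_ADDR_CTRL	55
#define PR_TAGGED_ADDR_ENABLE	(1UL << 0)
#endif
#ifndef PR_MTE_TCF_SYNC
#define PR_MTE_TCF_SYNC		(1UL << 1)	/* synchronous tag check faults */
#define PR_MTE_TAG_SHIFT	3		/* shift for the tag generation mask */
#endif

int main(void)
{
	/*
	 * Enable the tagged address ABI and synchronous tag check faults,
	 * and allow tags 1-15 to be generated (mask 0xfffe).
	 */
	unsigned long ctrl = PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_SYNC |
			     (0xfffe << PR_MTE_TAG_SHIFT);

	if (prctl(PR_SET_TAGGED_ADDR_CTRL, ctrl, 0, 0, 0)) {
		perror("PR_SET_TAGGED_ADDR_CTRL");
		return 1;
	}
	return 0;
}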
endmenu endmenu
config ARM64_SVE config ARM64_SVE
...@@ -1876,6 +1890,10 @@ config ARCH_ENABLE_HUGEPAGE_MIGRATION ...@@ -1876,6 +1890,10 @@ config ARCH_ENABLE_HUGEPAGE_MIGRATION
def_bool y def_bool y
depends on HUGETLB_PAGE && MIGRATION depends on HUGETLB_PAGE && MIGRATION
config ARCH_ENABLE_THP_MIGRATION
def_bool y
depends on TRANSPARENT_HUGEPAGE
menu "Power management options" menu "Power management options"
source "kernel/power/Kconfig" source "kernel/power/Kconfig"
......
...@@ -11,7 +11,6 @@ ...@@ -11,7 +11,6 @@
# Copyright (C) 1995-2001 by Russell King # Copyright (C) 1995-2001 by Russell King
LDFLAGS_vmlinux :=--no-undefined -X LDFLAGS_vmlinux :=--no-undefined -X
CPPFLAGS_vmlinux.lds = -DTEXT_OFFSET=$(TEXT_OFFSET)
ifeq ($(CONFIG_RELOCATABLE), y) ifeq ($(CONFIG_RELOCATABLE), y)
# Pass --no-apply-dynamic-relocs to restore pre-binutils-2.27 behaviour # Pass --no-apply-dynamic-relocs to restore pre-binutils-2.27 behaviour
...@@ -132,9 +131,6 @@ endif ...@@ -132,9 +131,6 @@ endif
# Default value # Default value
head-y := arch/arm64/kernel/head.o head-y := arch/arm64/kernel/head.o
# The byte offset of the kernel image in RAM from the start of RAM.
TEXT_OFFSET := 0x0
ifeq ($(CONFIG_KASAN_SW_TAGS), y) ifeq ($(CONFIG_KASAN_SW_TAGS), y)
KASAN_SHADOW_SCALE_SHIFT := 4 KASAN_SHADOW_SCALE_SHIFT := 4
else else
...@@ -145,8 +141,6 @@ KBUILD_CFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT) ...@@ -145,8 +141,6 @@ KBUILD_CFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
KBUILD_CPPFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT) KBUILD_CPPFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
KBUILD_AFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT) KBUILD_AFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
export TEXT_OFFSET
core-y += arch/arm64/ core-y += arch/arm64/
libs-y := arch/arm64/lib/ $(libs-y) libs-y := arch/arm64/lib/ $(libs-y)
libs-$(CONFIG_EFI_STUB) += $(objtree)/drivers/firmware/efi/libstub/lib.a libs-$(CONFIG_EFI_STUB) += $(objtree)/drivers/firmware/efi/libstub/lib.a
......
...@@ -79,10 +79,5 @@ arch_get_random_seed_long_early(unsigned long *v) ...@@ -79,10 +79,5 @@ arch_get_random_seed_long_early(unsigned long *v)
} }
#define arch_get_random_seed_long_early arch_get_random_seed_long_early #define arch_get_random_seed_long_early arch_get_random_seed_long_early
#else
static inline bool __arm64_rndr(unsigned long *v) { return false; }
static inline bool __init __early_cpu_has_rndr(void) { return false; }
#endif /* CONFIG_ARCH_RANDOM */ #endif /* CONFIG_ARCH_RANDOM */
#endif /* _ASM_ARCHRANDOM_H */ #endif /* _ASM_ARCHRANDOM_H */
...@@ -13,8 +13,7 @@ ...@@ -13,8 +13,7 @@
#define MAX_FDT_SIZE SZ_2M #define MAX_FDT_SIZE SZ_2M
/* /*
* arm64 requires the kernel image to placed * arm64 requires the kernel image to placed at a 2 MB aligned base address
* TEXT_OFFSET bytes beyond a 2 MB aligned base
*/ */
#define MIN_KIMG_ALIGN SZ_2M #define MIN_KIMG_ALIGN SZ_2M
......
...@@ -21,7 +21,7 @@ ...@@ -21,7 +21,7 @@
* mechanism for doing so, tests whether it is possible to boot * mechanism for doing so, tests whether it is possible to boot
* the given CPU. * the given CPU.
* @cpu_boot: Boots a cpu into the kernel. * @cpu_boot: Boots a cpu into the kernel.
* @cpu_postboot: Optionally, perform any post-boot cleanup or necesary * @cpu_postboot: Optionally, perform any post-boot cleanup or necessary
* synchronisation. Called from the cpu being booted. * synchronisation. Called from the cpu being booted.
* @cpu_can_disable: Determines whether a CPU can be disabled based on * @cpu_can_disable: Determines whether a CPU can be disabled based on
* mechanism-specific information. * mechanism-specific information.
......
...@@ -31,13 +31,13 @@ ...@@ -31,13 +31,13 @@
#define ARM64_HAS_DCPOP 21 #define ARM64_HAS_DCPOP 21
#define ARM64_SVE 22 #define ARM64_SVE 22
#define ARM64_UNMAP_KERNEL_AT_EL0 23 #define ARM64_UNMAP_KERNEL_AT_EL0 23
#define ARM64_HARDEN_BRANCH_PREDICTOR 24 #define ARM64_SPECTRE_V2 24
#define ARM64_HAS_RAS_EXTN 25 #define ARM64_HAS_RAS_EXTN 25
#define ARM64_WORKAROUND_843419 26 #define ARM64_WORKAROUND_843419 26
#define ARM64_HAS_CACHE_IDC 27 #define ARM64_HAS_CACHE_IDC 27
#define ARM64_HAS_CACHE_DIC 28 #define ARM64_HAS_CACHE_DIC 28
#define ARM64_HW_DBM 29 #define ARM64_HW_DBM 29
#define ARM64_SSBD 30 #define ARM64_SPECTRE_V4 30
#define ARM64_MISMATCHED_CACHE_TYPE 31 #define ARM64_MISMATCHED_CACHE_TYPE 31
#define ARM64_HAS_STAGE2_FWB 32 #define ARM64_HAS_STAGE2_FWB 32
#define ARM64_HAS_CRC32 33 #define ARM64_HAS_CRC32 33
...@@ -64,7 +64,8 @@ ...@@ -64,7 +64,8 @@
#define ARM64_BTI 54 #define ARM64_BTI 54
#define ARM64_HAS_ARMv8_4_TTL 55 #define ARM64_HAS_ARMv8_4_TTL 55
#define ARM64_HAS_TLB_RANGE 56 #define ARM64_HAS_TLB_RANGE 56
#define ARM64_MTE 57
#define ARM64_NCAPS 57 #define ARM64_NCAPS 58
#endif /* __ASM_CPUCAPS_H */ #endif /* __ASM_CPUCAPS_H */
...@@ -358,7 +358,7 @@ static inline int cpucap_default_scope(const struct arm64_cpu_capabilities *cap) ...@@ -358,7 +358,7 @@ static inline int cpucap_default_scope(const struct arm64_cpu_capabilities *cap)
} }
/* /*
* Generic helper for handling capabilties with multiple (match,enable) pairs * Generic helper for handling capabilities with multiple (match,enable) pairs
* of call backs, sharing the same capability bit. * of call backs, sharing the same capability bit.
* Iterate over each entry to see if at least one matches. * Iterate over each entry to see if at least one matches.
*/ */
...@@ -681,6 +681,12 @@ static __always_inline bool system_uses_irq_prio_masking(void) ...@@ -681,6 +681,12 @@ static __always_inline bool system_uses_irq_prio_masking(void)
cpus_have_const_cap(ARM64_HAS_IRQ_PRIO_MASKING); cpus_have_const_cap(ARM64_HAS_IRQ_PRIO_MASKING);
} }
static inline bool system_supports_mte(void)
{
return IS_ENABLED(CONFIG_ARM64_MTE) &&
cpus_have_const_cap(ARM64_MTE);
}
static inline bool system_has_prio_mask_debugging(void) static inline bool system_has_prio_mask_debugging(void)
{ {
return IS_ENABLED(CONFIG_ARM64_DEBUG_PRIORITY_MASKING) && return IS_ENABLED(CONFIG_ARM64_DEBUG_PRIORITY_MASKING) &&
...@@ -698,30 +704,6 @@ static inline bool system_supports_tlb_range(void) ...@@ -698,30 +704,6 @@ static inline bool system_supports_tlb_range(void)
cpus_have_const_cap(ARM64_HAS_TLB_RANGE); cpus_have_const_cap(ARM64_HAS_TLB_RANGE);
} }
#define ARM64_BP_HARDEN_UNKNOWN -1
#define ARM64_BP_HARDEN_WA_NEEDED 0
#define ARM64_BP_HARDEN_NOT_REQUIRED 1
int get_spectre_v2_workaround_state(void);
#define ARM64_SSBD_UNKNOWN -1
#define ARM64_SSBD_FORCE_DISABLE 0
#define ARM64_SSBD_KERNEL 1
#define ARM64_SSBD_FORCE_ENABLE 2
#define ARM64_SSBD_MITIGATED 3
static inline int arm64_get_ssbd_state(void)
{
#ifdef CONFIG_ARM64_SSBD
extern int ssbd_state;
return ssbd_state;
#else
return ARM64_SSBD_UNKNOWN;
#endif
}
void arm64_set_ssbd_mitigation(bool state);
extern int do_emulate_mrs(struct pt_regs *regs, u32 sys_reg, u32 rt); extern int do_emulate_mrs(struct pt_regs *regs, u32 sys_reg, u32 rt);
static inline u32 id_aa64mmfr0_parange_to_phys_shift(int parange) static inline u32 id_aa64mmfr0_parange_to_phys_shift(int parange)
......
...@@ -35,7 +35,9 @@ ...@@ -35,7 +35,9 @@
#define ESR_ELx_EC_SYS64 (0x18) #define ESR_ELx_EC_SYS64 (0x18)
#define ESR_ELx_EC_SVE (0x19) #define ESR_ELx_EC_SVE (0x19)
#define ESR_ELx_EC_ERET (0x1a) /* EL2 only */ #define ESR_ELx_EC_ERET (0x1a) /* EL2 only */
/* Unallocated EC: 0x1b - 0x1E */ /* Unallocated EC: 0x1B */
#define ESR_ELx_EC_FPAC (0x1C) /* EL1 and above */
/* Unallocated EC: 0x1D - 0x1E */
#define ESR_ELx_EC_IMP_DEF (0x1f) /* EL3 only */ #define ESR_ELx_EC_IMP_DEF (0x1f) /* EL3 only */
#define ESR_ELx_EC_IABT_LOW (0x20) #define ESR_ELx_EC_IABT_LOW (0x20)
#define ESR_ELx_EC_IABT_CUR (0x21) #define ESR_ELx_EC_IABT_CUR (0x21)
......
...@@ -47,4 +47,5 @@ void bad_el0_sync(struct pt_regs *regs, int reason, unsigned int esr); ...@@ -47,4 +47,5 @@ void bad_el0_sync(struct pt_regs *regs, int reason, unsigned int esr);
void do_cp15instr(unsigned int esr, struct pt_regs *regs); void do_cp15instr(unsigned int esr, struct pt_regs *regs);
void do_el0_svc(struct pt_regs *regs); void do_el0_svc(struct pt_regs *regs);
void do_el0_svc_compat(struct pt_regs *regs); void do_el0_svc_compat(struct pt_regs *regs);
void do_ptrauth_fault(struct pt_regs *regs, unsigned int esr);
#endif /* __ASM_EXCEPTION_H */ #endif /* __ASM_EXCEPTION_H */
...@@ -22,6 +22,15 @@ struct exception_table_entry ...@@ -22,6 +22,15 @@ struct exception_table_entry
#define ARCH_HAS_RELATIVE_EXTABLE #define ARCH_HAS_RELATIVE_EXTABLE
static inline bool in_bpf_jit(struct pt_regs *regs)
{
if (!IS_ENABLED(CONFIG_BPF_JIT))
return false;
return regs->pc >= BPF_JIT_REGION_START &&
regs->pc < BPF_JIT_REGION_END;
}
#ifdef CONFIG_BPF_JIT #ifdef CONFIG_BPF_JIT
int arm64_bpf_fixup_exception(const struct exception_table_entry *ex, int arm64_bpf_fixup_exception(const struct exception_table_entry *ex,
struct pt_regs *regs); struct pt_regs *regs);
......
...@@ -69,6 +69,9 @@ static inline void *sve_pffr(struct thread_struct *thread) ...@@ -69,6 +69,9 @@ static inline void *sve_pffr(struct thread_struct *thread)
extern void sve_save_state(void *state, u32 *pfpsr); extern void sve_save_state(void *state, u32 *pfpsr);
extern void sve_load_state(void const *state, u32 const *pfpsr, extern void sve_load_state(void const *state, u32 const *pfpsr,
unsigned long vq_minus_1); unsigned long vq_minus_1);
extern void sve_flush_live(void);
extern void sve_load_from_fpsimd_state(struct user_fpsimd_state const *state,
unsigned long vq_minus_1);
extern unsigned int sve_get_vl(void); extern unsigned int sve_get_vl(void);
struct arm64_cpu_capabilities; struct arm64_cpu_capabilities;
......
...@@ -164,25 +164,59 @@ ...@@ -164,25 +164,59 @@
| ((\np) << 5) | ((\np) << 5)
.endm .endm
/* PFALSE P\np.B */
.macro _sve_pfalse np
_sve_check_preg \np
.inst 0x2518e400 \
| (\np)
.endm
.macro __for from:req, to:req .macro __for from:req, to:req
.if (\from) == (\to) .if (\from) == (\to)
_for__body \from _for__body %\from
.else .else
__for \from, (\from) + ((\to) - (\from)) / 2 __for %\from, %((\from) + ((\to) - (\from)) / 2)
__for (\from) + ((\to) - (\from)) / 2 + 1, \to __for %((\from) + ((\to) - (\from)) / 2 + 1), %\to
.endif .endif
.endm .endm
.macro _for var:req, from:req, to:req, insn:vararg .macro _for var:req, from:req, to:req, insn:vararg
.macro _for__body \var:req .macro _for__body \var:req
.noaltmacro
\insn \insn
.altmacro
.endm .endm
.altmacro
__for \from, \to __for \from, \to
.noaltmacro
.purgem _for__body .purgem _for__body
.endm .endm
/* Update ZCR_EL1.LEN with the new VQ */
.macro sve_load_vq xvqminus1, xtmp, xtmp2
mrs_s \xtmp, SYS_ZCR_EL1
bic \xtmp2, \xtmp, ZCR_ELx_LEN_MASK
orr \xtmp2, \xtmp2, \xvqminus1
cmp \xtmp2, \xtmp
b.eq 921f
msr_s SYS_ZCR_EL1, \xtmp2 //self-synchronising
921:
.endm
/* Preserve the first 128-bits of Znz and zero the rest. */
.macro _sve_flush_z nz
_sve_check_zreg \nz
mov v\nz\().16b, v\nz\().16b
.endm
.macro sve_flush
_for n, 0, 31, _sve_flush_z \n
_for n, 0, 15, _sve_pfalse \n
_sve_wrffr 0
.endm
.macro sve_save nxbase, xpfpsr, nxtmp .macro sve_save nxbase, xpfpsr, nxtmp
_for n, 0, 31, _sve_str_v \n, \nxbase, \n - 34 _for n, 0, 31, _sve_str_v \n, \nxbase, \n - 34
_for n, 0, 15, _sve_str_p \n, \nxbase, \n - 16 _for n, 0, 15, _sve_str_p \n, \nxbase, \n - 16
...@@ -197,13 +231,7 @@ ...@@ -197,13 +231,7 @@
.endm .endm
.macro sve_load nxbase, xpfpsr, xvqminus1, nxtmp, xtmp2 .macro sve_load nxbase, xpfpsr, xvqminus1, nxtmp, xtmp2
mrs_s x\nxtmp, SYS_ZCR_EL1 sve_load_vq \xvqminus1, x\nxtmp, \xtmp2
bic \xtmp2, x\nxtmp, ZCR_ELx_LEN_MASK
orr \xtmp2, \xtmp2, \xvqminus1
cmp \xtmp2, x\nxtmp
b.eq 921f
msr_s SYS_ZCR_EL1, \xtmp2 // self-synchronising
921:
_for n, 0, 31, _sve_ldr_v \n, \nxbase, \n - 34 _for n, 0, 31, _sve_ldr_v \n, \nxbase, \n - 34
_sve_ldr_p 0, \nxbase _sve_ldr_p 0, \nxbase
_sve_wrffr 0 _sve_wrffr 0
......
...@@ -8,18 +8,27 @@ ...@@ -8,18 +8,27 @@
#include <uapi/asm/hwcap.h> #include <uapi/asm/hwcap.h>
#include <asm/cpufeature.h> #include <asm/cpufeature.h>
#define COMPAT_HWCAP_SWP (1 << 0)
#define COMPAT_HWCAP_HALF (1 << 1) #define COMPAT_HWCAP_HALF (1 << 1)
#define COMPAT_HWCAP_THUMB (1 << 2) #define COMPAT_HWCAP_THUMB (1 << 2)
#define COMPAT_HWCAP_26BIT (1 << 3)
#define COMPAT_HWCAP_FAST_MULT (1 << 4) #define COMPAT_HWCAP_FAST_MULT (1 << 4)
#define COMPAT_HWCAP_FPA (1 << 5)
#define COMPAT_HWCAP_VFP (1 << 6) #define COMPAT_HWCAP_VFP (1 << 6)
#define COMPAT_HWCAP_EDSP (1 << 7) #define COMPAT_HWCAP_EDSP (1 << 7)
#define COMPAT_HWCAP_JAVA (1 << 8)
#define COMPAT_HWCAP_IWMMXT (1 << 9)
#define COMPAT_HWCAP_CRUNCH (1 << 10)
#define COMPAT_HWCAP_THUMBEE (1 << 11)
#define COMPAT_HWCAP_NEON (1 << 12) #define COMPAT_HWCAP_NEON (1 << 12)
#define COMPAT_HWCAP_VFPv3 (1 << 13) #define COMPAT_HWCAP_VFPv3 (1 << 13)
#define COMPAT_HWCAP_VFPV3D16 (1 << 14)
#define COMPAT_HWCAP_TLS (1 << 15) #define COMPAT_HWCAP_TLS (1 << 15)
#define COMPAT_HWCAP_VFPv4 (1 << 16) #define COMPAT_HWCAP_VFPv4 (1 << 16)
#define COMPAT_HWCAP_IDIVA (1 << 17) #define COMPAT_HWCAP_IDIVA (1 << 17)
#define COMPAT_HWCAP_IDIVT (1 << 18) #define COMPAT_HWCAP_IDIVT (1 << 18)
#define COMPAT_HWCAP_IDIV (COMPAT_HWCAP_IDIVA|COMPAT_HWCAP_IDIVT) #define COMPAT_HWCAP_IDIV (COMPAT_HWCAP_IDIVA|COMPAT_HWCAP_IDIVT)
#define COMPAT_HWCAP_VFPD32 (1 << 19)
#define COMPAT_HWCAP_LPAE (1 << 20) #define COMPAT_HWCAP_LPAE (1 << 20)
#define COMPAT_HWCAP_EVTSTRM (1 << 21) #define COMPAT_HWCAP_EVTSTRM (1 << 21)
...@@ -95,7 +104,7 @@ ...@@ -95,7 +104,7 @@
#define KERNEL_HWCAP_DGH __khwcap2_feature(DGH) #define KERNEL_HWCAP_DGH __khwcap2_feature(DGH)
#define KERNEL_HWCAP_RNG __khwcap2_feature(RNG) #define KERNEL_HWCAP_RNG __khwcap2_feature(RNG)
#define KERNEL_HWCAP_BTI __khwcap2_feature(BTI) #define KERNEL_HWCAP_BTI __khwcap2_feature(BTI)
/* reserved for KERNEL_HWCAP_MTE __khwcap2_feature(MTE) */ #define KERNEL_HWCAP_MTE __khwcap2_feature(MTE)
/* /*
* This yields a mask that user programs can use to figure out what * This yields a mask that user programs can use to figure out what
......
...@@ -359,9 +359,13 @@ __AARCH64_INSN_FUNCS(brk, 0xFFE0001F, 0xD4200000) ...@@ -359,9 +359,13 @@ __AARCH64_INSN_FUNCS(brk, 0xFFE0001F, 0xD4200000)
__AARCH64_INSN_FUNCS(exception, 0xFF000000, 0xD4000000) __AARCH64_INSN_FUNCS(exception, 0xFF000000, 0xD4000000)
__AARCH64_INSN_FUNCS(hint, 0xFFFFF01F, 0xD503201F) __AARCH64_INSN_FUNCS(hint, 0xFFFFF01F, 0xD503201F)
__AARCH64_INSN_FUNCS(br, 0xFFFFFC1F, 0xD61F0000) __AARCH64_INSN_FUNCS(br, 0xFFFFFC1F, 0xD61F0000)
__AARCH64_INSN_FUNCS(br_auth, 0xFEFFF800, 0xD61F0800)
__AARCH64_INSN_FUNCS(blr, 0xFFFFFC1F, 0xD63F0000) __AARCH64_INSN_FUNCS(blr, 0xFFFFFC1F, 0xD63F0000)
__AARCH64_INSN_FUNCS(blr_auth, 0xFEFFF800, 0xD63F0800)
__AARCH64_INSN_FUNCS(ret, 0xFFFFFC1F, 0xD65F0000) __AARCH64_INSN_FUNCS(ret, 0xFFFFFC1F, 0xD65F0000)
__AARCH64_INSN_FUNCS(ret_auth, 0xFFFFFBFF, 0xD65F0BFF)
__AARCH64_INSN_FUNCS(eret, 0xFFFFFFFF, 0xD69F03E0) __AARCH64_INSN_FUNCS(eret, 0xFFFFFFFF, 0xD69F03E0)
__AARCH64_INSN_FUNCS(eret_auth, 0xFFFFFBFF, 0xD69F0BFF)
__AARCH64_INSN_FUNCS(mrs, 0xFFF00000, 0xD5300000) __AARCH64_INSN_FUNCS(mrs, 0xFFF00000, 0xD5300000)
__AARCH64_INSN_FUNCS(msr_imm, 0xFFF8F01F, 0xD500401F) __AARCH64_INSN_FUNCS(msr_imm, 0xFFF8F01F, 0xD500401F)
__AARCH64_INSN_FUNCS(msr_reg, 0xFFF00000, 0xD5100000) __AARCH64_INSN_FUNCS(msr_reg, 0xFFF00000, 0xD5100000)
......
...@@ -86,7 +86,7 @@ ...@@ -86,7 +86,7 @@
+ EARLY_PGDS((vstart), (vend)) /* each PGDIR needs a next level page table */ \ + EARLY_PGDS((vstart), (vend)) /* each PGDIR needs a next level page table */ \
+ EARLY_PUDS((vstart), (vend)) /* each PUD needs a next level page table */ \ + EARLY_PUDS((vstart), (vend)) /* each PUD needs a next level page table */ \
+ EARLY_PMDS((vstart), (vend))) /* each PMD needs a next level page table */ + EARLY_PMDS((vstart), (vend))) /* each PMD needs a next level page table */
#define INIT_DIR_SIZE (PAGE_SIZE * EARLY_PAGES(KIMAGE_VADDR + TEXT_OFFSET, _end)) #define INIT_DIR_SIZE (PAGE_SIZE * EARLY_PAGES(KIMAGE_VADDR, _end))
#define IDMAP_DIR_SIZE (IDMAP_PGTABLE_LEVELS * PAGE_SIZE) #define IDMAP_DIR_SIZE (IDMAP_PGTABLE_LEVELS * PAGE_SIZE)
#ifdef CONFIG_ARM64_SW_TTBR0_PAN #ifdef CONFIG_ARM64_SW_TTBR0_PAN
......
...@@ -12,6 +12,7 @@ ...@@ -12,6 +12,7 @@
#include <asm/types.h> #include <asm/types.h>
/* Hyp Configuration Register (HCR) bits */ /* Hyp Configuration Register (HCR) bits */
#define HCR_ATA (UL(1) << 56)
#define HCR_FWB (UL(1) << 46) #define HCR_FWB (UL(1) << 46)
#define HCR_API (UL(1) << 41) #define HCR_API (UL(1) << 41)
#define HCR_APK (UL(1) << 40) #define HCR_APK (UL(1) << 40)
...@@ -66,7 +67,7 @@ ...@@ -66,7 +67,7 @@
* TWI: Trap WFI * TWI: Trap WFI
* TIDCP: Trap L2CTLR/L2ECTLR * TIDCP: Trap L2CTLR/L2ECTLR
* BSU_IS: Upgrade barriers to the inner shareable domain * BSU_IS: Upgrade barriers to the inner shareable domain
* FB: Force broadcast of all maintainance operations * FB: Force broadcast of all maintenance operations
* AMO: Override CPSR.A and enable signaling with VA * AMO: Override CPSR.A and enable signaling with VA
* IMO: Override CPSR.I and enable signaling with VI * IMO: Override CPSR.I and enable signaling with VI
* FMO: Override CPSR.F and enable signaling with VF * FMO: Override CPSR.F and enable signaling with VF
...@@ -78,7 +79,7 @@ ...@@ -78,7 +79,7 @@
HCR_AMO | HCR_SWIO | HCR_TIDCP | HCR_RW | HCR_TLOR | \ HCR_AMO | HCR_SWIO | HCR_TIDCP | HCR_RW | HCR_TLOR | \
HCR_FMO | HCR_IMO | HCR_PTW ) HCR_FMO | HCR_IMO | HCR_PTW )
#define HCR_VIRT_EXCP_MASK (HCR_VSE | HCR_VI | HCR_VF) #define HCR_VIRT_EXCP_MASK (HCR_VSE | HCR_VI | HCR_VF)
#define HCR_HOST_NVHE_FLAGS (HCR_RW | HCR_API | HCR_APK) #define HCR_HOST_NVHE_FLAGS (HCR_RW | HCR_API | HCR_APK | HCR_ATA)
#define HCR_HOST_VHE_FLAGS (HCR_RW | HCR_TGE | HCR_E2H) #define HCR_HOST_VHE_FLAGS (HCR_RW | HCR_TGE | HCR_E2H)
/* TCR_EL2 Registers bits */ /* TCR_EL2 Registers bits */
......
...@@ -9,9 +9,6 @@ ...@@ -9,9 +9,6 @@
#include <asm/virt.h> #include <asm/virt.h>
#define VCPU_WORKAROUND_2_FLAG_SHIFT 0
#define VCPU_WORKAROUND_2_FLAG (_AC(1, UL) << VCPU_WORKAROUND_2_FLAG_SHIFT)
#define ARM_EXIT_WITH_SERROR_BIT 31 #define ARM_EXIT_WITH_SERROR_BIT 31
#define ARM_EXCEPTION_CODE(x) ((x) & ~(1U << ARM_EXIT_WITH_SERROR_BIT)) #define ARM_EXCEPTION_CODE(x) ((x) & ~(1U << ARM_EXIT_WITH_SERROR_BIT))
#define ARM_EXCEPTION_IS_TRAP(x) (ARM_EXCEPTION_CODE((x)) == ARM_EXCEPTION_TRAP) #define ARM_EXCEPTION_IS_TRAP(x) (ARM_EXCEPTION_CODE((x)) == ARM_EXCEPTION_TRAP)
...@@ -102,11 +99,9 @@ DECLARE_KVM_HYP_SYM(__kvm_hyp_vector); ...@@ -102,11 +99,9 @@ DECLARE_KVM_HYP_SYM(__kvm_hyp_vector);
#define __kvm_hyp_init CHOOSE_NVHE_SYM(__kvm_hyp_init) #define __kvm_hyp_init CHOOSE_NVHE_SYM(__kvm_hyp_init)
#define __kvm_hyp_vector CHOOSE_HYP_SYM(__kvm_hyp_vector) #define __kvm_hyp_vector CHOOSE_HYP_SYM(__kvm_hyp_vector)
#ifdef CONFIG_KVM_INDIRECT_VECTORS
extern atomic_t arm64_el2_vector_last_slot; extern atomic_t arm64_el2_vector_last_slot;
DECLARE_KVM_HYP_SYM(__bp_harden_hyp_vecs); DECLARE_KVM_HYP_SYM(__bp_harden_hyp_vecs);
#define __bp_harden_hyp_vecs CHOOSE_HYP_SYM(__bp_harden_hyp_vecs) #define __bp_harden_hyp_vecs CHOOSE_HYP_SYM(__bp_harden_hyp_vecs)
#endif
extern void __kvm_flush_vm_context(void); extern void __kvm_flush_vm_context(void);
extern void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu, phys_addr_t ipa, extern void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu, phys_addr_t ipa,
......
...@@ -391,20 +391,6 @@ static inline unsigned long kvm_vcpu_get_mpidr_aff(struct kvm_vcpu *vcpu) ...@@ -391,20 +391,6 @@ static inline unsigned long kvm_vcpu_get_mpidr_aff(struct kvm_vcpu *vcpu)
return vcpu_read_sys_reg(vcpu, MPIDR_EL1) & MPIDR_HWID_BITMASK; return vcpu_read_sys_reg(vcpu, MPIDR_EL1) & MPIDR_HWID_BITMASK;
} }
static inline bool kvm_arm_get_vcpu_workaround_2_flag(struct kvm_vcpu *vcpu)
{
return vcpu->arch.workaround_flags & VCPU_WORKAROUND_2_FLAG;
}
static inline void kvm_arm_set_vcpu_workaround_2_flag(struct kvm_vcpu *vcpu,
bool flag)
{
if (flag)
vcpu->arch.workaround_flags |= VCPU_WORKAROUND_2_FLAG;
else
vcpu->arch.workaround_flags &= ~VCPU_WORKAROUND_2_FLAG;
}
static inline void kvm_vcpu_set_be(struct kvm_vcpu *vcpu) static inline void kvm_vcpu_set_be(struct kvm_vcpu *vcpu)
{ {
if (vcpu_mode_is_32bit(vcpu)) { if (vcpu_mode_is_32bit(vcpu)) {
......
...@@ -631,46 +631,6 @@ static inline void kvm_set_pmu_events(u32 set, struct perf_event_attr *attr) {} ...@@ -631,46 +631,6 @@ static inline void kvm_set_pmu_events(u32 set, struct perf_event_attr *attr) {}
static inline void kvm_clr_pmu_events(u32 clr) {} static inline void kvm_clr_pmu_events(u32 clr) {}
#endif #endif
#define KVM_BP_HARDEN_UNKNOWN -1
#define KVM_BP_HARDEN_WA_NEEDED 0
#define KVM_BP_HARDEN_NOT_REQUIRED 1
static inline int kvm_arm_harden_branch_predictor(void)
{
switch (get_spectre_v2_workaround_state()) {
case ARM64_BP_HARDEN_WA_NEEDED:
return KVM_BP_HARDEN_WA_NEEDED;
case ARM64_BP_HARDEN_NOT_REQUIRED:
return KVM_BP_HARDEN_NOT_REQUIRED;
case ARM64_BP_HARDEN_UNKNOWN:
default:
return KVM_BP_HARDEN_UNKNOWN;
}
}
#define KVM_SSBD_UNKNOWN -1
#define KVM_SSBD_FORCE_DISABLE 0
#define KVM_SSBD_KERNEL 1
#define KVM_SSBD_FORCE_ENABLE 2
#define KVM_SSBD_MITIGATED 3
static inline int kvm_arm_have_ssbd(void)
{
switch (arm64_get_ssbd_state()) {
case ARM64_SSBD_FORCE_DISABLE:
return KVM_SSBD_FORCE_DISABLE;
case ARM64_SSBD_KERNEL:
return KVM_SSBD_KERNEL;
case ARM64_SSBD_FORCE_ENABLE:
return KVM_SSBD_FORCE_ENABLE;
case ARM64_SSBD_MITIGATED:
return KVM_SSBD_MITIGATED;
case ARM64_SSBD_UNKNOWN:
default:
return KVM_SSBD_UNKNOWN;
}
}
void kvm_vcpu_load_sysregs_vhe(struct kvm_vcpu *vcpu); void kvm_vcpu_load_sysregs_vhe(struct kvm_vcpu *vcpu);
void kvm_vcpu_put_sysregs_vhe(struct kvm_vcpu *vcpu); void kvm_vcpu_put_sysregs_vhe(struct kvm_vcpu *vcpu);
......
...@@ -9,6 +9,7 @@ ...@@ -9,6 +9,7 @@
#include <asm/page.h> #include <asm/page.h>
#include <asm/memory.h> #include <asm/memory.h>
#include <asm/mmu.h>
#include <asm/cpufeature.h> #include <asm/cpufeature.h>
/* /*
...@@ -430,19 +431,17 @@ static inline int kvm_write_guest_lock(struct kvm *kvm, gpa_t gpa, ...@@ -430,19 +431,17 @@ static inline int kvm_write_guest_lock(struct kvm *kvm, gpa_t gpa,
return ret; return ret;
} }
#ifdef CONFIG_KVM_INDIRECT_VECTORS
/* /*
* EL2 vectors can be mapped and rerouted in a number of ways, * EL2 vectors can be mapped and rerouted in a number of ways,
* depending on the kernel configuration and CPU present: * depending on the kernel configuration and CPU present:
* *
* - If the CPU has the ARM64_HARDEN_BRANCH_PREDICTOR cap, the * - If the CPU is affected by Spectre-v2, the hardening sequence is
* hardening sequence is placed in one of the vector slots, which is * placed in one of the vector slots, which is executed before jumping
* executed before jumping to the real vectors. * to the real vectors.
* *
* - If the CPU has both the ARM64_HARDEN_EL2_VECTORS cap and the * - If the CPU also has the ARM64_HARDEN_EL2_VECTORS cap, the slot
* ARM64_HARDEN_BRANCH_PREDICTOR cap, the slot containing the * containing the hardening sequence is mapped next to the idmap page,
* hardening sequence is mapped next to the idmap page, and executed * and executed before jumping to the real vectors.
* before jumping to the real vectors.
* *
* - If the CPU only has the ARM64_HARDEN_EL2_VECTORS cap, then an * - If the CPU only has the ARM64_HARDEN_EL2_VECTORS cap, then an
* empty slot is selected, mapped next to the idmap page, and * empty slot is selected, mapped next to the idmap page, and
...@@ -452,19 +451,16 @@ static inline int kvm_write_guest_lock(struct kvm *kvm, gpa_t gpa, ...@@ -452,19 +451,16 @@ static inline int kvm_write_guest_lock(struct kvm *kvm, gpa_t gpa,
* VHE, as we don't have hypervisor-specific mappings. If the system * VHE, as we don't have hypervisor-specific mappings. If the system
* is VHE and yet selects this capability, it will be ignored. * is VHE and yet selects this capability, it will be ignored.
*/ */
#include <asm/mmu.h>
extern void *__kvm_bp_vect_base; extern void *__kvm_bp_vect_base;
extern int __kvm_harden_el2_vector_slot; extern int __kvm_harden_el2_vector_slot;
/* This is called on both VHE and !VHE systems */
static inline void *kvm_get_hyp_vector(void) static inline void *kvm_get_hyp_vector(void)
{ {
struct bp_hardening_data *data = arm64_get_bp_hardening_data(); struct bp_hardening_data *data = arm64_get_bp_hardening_data();
void *vect = kern_hyp_va(kvm_ksym_ref(__kvm_hyp_vector)); void *vect = kern_hyp_va(kvm_ksym_ref(__kvm_hyp_vector));
int slot = -1; int slot = -1;
if (cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR) && data->fn) { if (cpus_have_const_cap(ARM64_SPECTRE_V2) && data->fn) {
vect = kern_hyp_va(kvm_ksym_ref(__bp_harden_hyp_vecs)); vect = kern_hyp_va(kvm_ksym_ref(__bp_harden_hyp_vecs));
slot = data->hyp_vectors_slot; slot = data->hyp_vectors_slot;
} }
...@@ -481,76 +477,6 @@ static inline void *kvm_get_hyp_vector(void) ...@@ -481,76 +477,6 @@ static inline void *kvm_get_hyp_vector(void)
return vect; return vect;
} }
/* This is only called on a !VHE system */
static inline int kvm_map_vectors(void)
{
/*
* HBP = ARM64_HARDEN_BRANCH_PREDICTOR
* HEL2 = ARM64_HARDEN_EL2_VECTORS
*
* !HBP + !HEL2 -> use direct vectors
* HBP + !HEL2 -> use hardened vectors in place
* !HBP + HEL2 -> allocate one vector slot and use exec mapping
* HBP + HEL2 -> use hardened vertors and use exec mapping
*/
if (cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR)) {
__kvm_bp_vect_base = kvm_ksym_ref(__bp_harden_hyp_vecs);
__kvm_bp_vect_base = kern_hyp_va(__kvm_bp_vect_base);
}
if (cpus_have_const_cap(ARM64_HARDEN_EL2_VECTORS)) {
phys_addr_t vect_pa = __pa_symbol(__bp_harden_hyp_vecs);
unsigned long size = __BP_HARDEN_HYP_VECS_SZ;
/*
* Always allocate a spare vector slot, as we don't
* know yet which CPUs have a BP hardening slot that
* we can reuse.
*/
__kvm_harden_el2_vector_slot = atomic_inc_return(&arm64_el2_vector_last_slot);
BUG_ON(__kvm_harden_el2_vector_slot >= BP_HARDEN_EL2_SLOTS);
return create_hyp_exec_mappings(vect_pa, size,
&__kvm_bp_vect_base);
}
return 0;
}
#else
static inline void *kvm_get_hyp_vector(void)
{
return kern_hyp_va(kvm_ksym_ref(__kvm_hyp_vector));
}
static inline int kvm_map_vectors(void)
{
return 0;
}
#endif
#ifdef CONFIG_ARM64_SSBD
DECLARE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
static inline int hyp_map_aux_data(void)
{
int cpu, err;
for_each_possible_cpu(cpu) {
u64 *ptr;
ptr = per_cpu_ptr(&arm64_ssbd_callback_required, cpu);
err = create_hyp_mappings(ptr, ptr + 1, PAGE_HYP);
if (err)
return err;
}
return 0;
}
#else
static inline int hyp_map_aux_data(void)
{
return 0;
}
#endif
#define kvm_phys_to_vttbr(addr) phys_to_ttbr(addr) #define kvm_phys_to_vttbr(addr) phys_to_ttbr(addr)
/* /*
......
...@@ -126,13 +126,18 @@ ...@@ -126,13 +126,18 @@
/* /*
* Memory types available. * Memory types available.
*
* IMPORTANT: MT_NORMAL must be index 0 since vm_get_page_prot() may 'or' in
* the MT_NORMAL_TAGGED memory type for PROT_MTE mappings. Note
* that protection_map[] only contains MT_NORMAL attributes.
*/ */
#define MT_DEVICE_nGnRnE 0 #define MT_NORMAL 0
#define MT_DEVICE_nGnRE 1 #define MT_NORMAL_TAGGED 1
#define MT_DEVICE_GRE 2 #define MT_NORMAL_NC 2
#define MT_NORMAL_NC 3 #define MT_NORMAL_WT 3
#define MT_NORMAL 4 #define MT_DEVICE_nGnRnE 4
#define MT_NORMAL_WT 5 #define MT_DEVICE_nGnRE 5
#define MT_DEVICE_GRE 6
/* /*
* Memory types for Stage-2 translation * Memory types for Stage-2 translation
...@@ -169,7 +174,7 @@ extern s64 memstart_addr; ...@@ -169,7 +174,7 @@ extern s64 memstart_addr;
/* PHYS_OFFSET - the physical address of the start of memory. */ /* PHYS_OFFSET - the physical address of the start of memory. */
#define PHYS_OFFSET ({ VM_BUG_ON(memstart_addr & 1); memstart_addr; }) #define PHYS_OFFSET ({ VM_BUG_ON(memstart_addr & 1); memstart_addr; })
/* the virtual base of the kernel image (minus TEXT_OFFSET) */ /* the virtual base of the kernel image */
extern u64 kimage_vaddr; extern u64 kimage_vaddr;
/* the offset between the kernel virtual and physical mappings */ /* the offset between the kernel virtual and physical mappings */
......
...@@ -9,16 +9,53 @@ ...@@ -9,16 +9,53 @@
static inline unsigned long arch_calc_vm_prot_bits(unsigned long prot, static inline unsigned long arch_calc_vm_prot_bits(unsigned long prot,
unsigned long pkey __always_unused) unsigned long pkey __always_unused)
{ {
unsigned long ret = 0;
if (system_supports_bti() && (prot & PROT_BTI)) if (system_supports_bti() && (prot & PROT_BTI))
return VM_ARM64_BTI; ret |= VM_ARM64_BTI;
return 0; if (system_supports_mte() && (prot & PROT_MTE))
ret |= VM_MTE;
return ret;
} }
#define arch_calc_vm_prot_bits(prot, pkey) arch_calc_vm_prot_bits(prot, pkey) #define arch_calc_vm_prot_bits(prot, pkey) arch_calc_vm_prot_bits(prot, pkey)
static inline unsigned long arch_calc_vm_flag_bits(unsigned long flags)
{
/*
* Only allow MTE on anonymous mappings as these are guaranteed to be
* backed by tags-capable memory. The vm_flags may be overridden by a
* filesystem supporting MTE (RAM-based).
*/
if (system_supports_mte() && (flags & MAP_ANONYMOUS))
return VM_MTE_ALLOWED;
return 0;
}
#define arch_calc_vm_flag_bits(flags) arch_calc_vm_flag_bits(flags)
static inline pgprot_t arch_vm_get_page_prot(unsigned long vm_flags) static inline pgprot_t arch_vm_get_page_prot(unsigned long vm_flags)
{ {
return (vm_flags & VM_ARM64_BTI) ? __pgprot(PTE_GP) : __pgprot(0); pteval_t prot = 0;
if (vm_flags & VM_ARM64_BTI)
prot |= PTE_GP;
/*
* There are two conditions required for returning a Normal Tagged
* memory type: (1) the user requested it via PROT_MTE passed to
* mmap() or mprotect() and (2) the corresponding vma supports MTE. We
* register (1) as VM_MTE in the vma->vm_flags and (2) as
* VM_MTE_ALLOWED. Note that the latter can only be set during the
* mmap() call since mprotect() does not accept MAP_* flags.
* Checking for VM_MTE only is sufficient since arch_validate_flags()
* does not permit (VM_MTE & !VM_MTE_ALLOWED).
*/
if (vm_flags & VM_MTE)
prot |= PTE_ATTRINDX(MT_NORMAL_TAGGED);
return __pgprot(prot);
} }
#define arch_vm_get_page_prot(vm_flags) arch_vm_get_page_prot(vm_flags) #define arch_vm_get_page_prot(vm_flags) arch_vm_get_page_prot(vm_flags)
...@@ -30,8 +67,21 @@ static inline bool arch_validate_prot(unsigned long prot, ...@@ -30,8 +67,21 @@ static inline bool arch_validate_prot(unsigned long prot,
if (system_supports_bti()) if (system_supports_bti())
supported |= PROT_BTI; supported |= PROT_BTI;
if (system_supports_mte())
supported |= PROT_MTE;
return (prot & ~supported) == 0; return (prot & ~supported) == 0;
} }
#define arch_validate_prot(prot, addr) arch_validate_prot(prot, addr) #define arch_validate_prot(prot, addr) arch_validate_prot(prot, addr)
static inline bool arch_validate_flags(unsigned long vm_flags)
{
if (!system_supports_mte())
return true;
/* only allow VM_MTE if VM_MTE_ALLOWED has been set previously */
return !(vm_flags & VM_MTE) || (vm_flags & VM_MTE_ALLOWED);
}
#define arch_validate_flags(vm_flags) arch_validate_flags(vm_flags)
#endif /* ! __ASM_MMAN_H__ */ #endif /* ! __ASM_MMAN_H__ */
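To illustrate the VM_MTE/VM_MTE_ALLOWED checks above from the userspace side, a sketch that requests a tags-capable anonymous mapping with PROT_MTE (the fallback value is an assumption matching the arm64 uapi <asm/mman.h>). Per arch_calc_vm_flag_bits(), only MAP_ANONYMOUS mappings receive VM_MTE_ALLOWED, so PROT_MTE on a file-backed mapping would be rejected by arch_validate_flags():

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#ifndef PROT_MTE
#define PROT_MTE	0x20	/* assumed value from arm64 uapi <asm/mman.h> */
#endif

int main(void)
{
	/* Anonymous mappings are guaranteed to be backed by tags-capable memory. */
	char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_MTE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED) {
		perror("mmap");	/* e.g. EINVAL if MTE is not supported */
		return 1;
	}

	/* Loads and stores here are tag-checked per the prctl() configuration. */
	memset(p, 0, 4096);
	munmap(p, 4096);
	return 0;
}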
...@@ -17,11 +17,14 @@ ...@@ -17,11 +17,14 @@
#ifndef __ASSEMBLY__ #ifndef __ASSEMBLY__
#include <linux/refcount.h>
typedef struct { typedef struct {
atomic64_t id; atomic64_t id;
#ifdef CONFIG_COMPAT #ifdef CONFIG_COMPAT
void *sigpage; void *sigpage;
#endif #endif
refcount_t pinned;
void *vdso; void *vdso;
unsigned long flags; unsigned long flags;
} mm_context_t; } mm_context_t;
...@@ -45,7 +48,6 @@ struct bp_hardening_data { ...@@ -45,7 +48,6 @@ struct bp_hardening_data {
bp_hardening_cb_t fn; bp_hardening_cb_t fn;
}; };
#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
DECLARE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data); DECLARE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data);
static inline struct bp_hardening_data *arm64_get_bp_hardening_data(void) static inline struct bp_hardening_data *arm64_get_bp_hardening_data(void)
...@@ -57,21 +59,13 @@ static inline void arm64_apply_bp_hardening(void) ...@@ -57,21 +59,13 @@ static inline void arm64_apply_bp_hardening(void)
{ {
struct bp_hardening_data *d; struct bp_hardening_data *d;
if (!cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR)) if (!cpus_have_const_cap(ARM64_SPECTRE_V2))
return; return;
d = arm64_get_bp_hardening_data(); d = arm64_get_bp_hardening_data();
if (d->fn) if (d->fn)
d->fn(); d->fn();
} }
#else
static inline struct bp_hardening_data *arm64_get_bp_hardening_data(void)
{
return NULL;
}
static inline void arm64_apply_bp_hardening(void) { }
#endif /* CONFIG_HARDEN_BRANCH_PREDICTOR */
extern void arm64_memblock_init(void); extern void arm64_memblock_init(void);
extern void paging_init(void); extern void paging_init(void);
......
...@@ -177,7 +177,13 @@ static inline void cpu_replace_ttbr1(pgd_t *pgdp) ...@@ -177,7 +177,13 @@ static inline void cpu_replace_ttbr1(pgd_t *pgdp)
#define destroy_context(mm) do { } while(0) #define destroy_context(mm) do { } while(0)
void check_and_switch_context(struct mm_struct *mm); void check_and_switch_context(struct mm_struct *mm);
#define init_new_context(tsk,mm) ({ atomic64_set(&(mm)->context.id, 0); 0; }) static inline int
init_new_context(struct task_struct *tsk, struct mm_struct *mm)
{
atomic64_set(&mm->context.id, 0);
refcount_set(&mm->context.pinned, 0);
return 0;
}
#ifdef CONFIG_ARM64_SW_TTBR0_PAN #ifdef CONFIG_ARM64_SW_TTBR0_PAN
static inline void update_saved_ttbr0(struct task_struct *tsk, static inline void update_saved_ttbr0(struct task_struct *tsk,
...@@ -248,6 +254,9 @@ switch_mm(struct mm_struct *prev, struct mm_struct *next, ...@@ -248,6 +254,9 @@ switch_mm(struct mm_struct *prev, struct mm_struct *next,
void verify_cpu_asid_bits(void); void verify_cpu_asid_bits(void);
void post_ttbr_update_workaround(void); void post_ttbr_update_workaround(void);
unsigned long arm64_mm_context_get(struct mm_struct *mm);
void arm64_mm_context_put(struct mm_struct *mm);
#endif /* !__ASSEMBLY__ */ #endif /* !__ASSEMBLY__ */
#endif /* !__ASM_MMU_CONTEXT_H */ #endif /* !__ASM_MMU_CONTEXT_H */
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2020 ARM Ltd.
*/
#ifndef __ASM_MTE_H
#define __ASM_MTE_H
#define MTE_GRANULE_SIZE UL(16)
#define MTE_GRANULE_MASK (~(MTE_GRANULE_SIZE - 1))
#define MTE_TAG_SHIFT 56
#define MTE_TAG_SIZE 4
#ifndef __ASSEMBLY__
#include <linux/page-flags.h>
#include <asm/pgtable-types.h>
void mte_clear_page_tags(void *addr);
unsigned long mte_copy_tags_from_user(void *to, const void __user *from,
unsigned long n);
unsigned long mte_copy_tags_to_user(void __user *to, void *from,
unsigned long n);
int mte_save_tags(struct page *page);
void mte_save_page_tags(const void *page_addr, void *tag_storage);
bool mte_restore_tags(swp_entry_t entry, struct page *page);
void mte_restore_page_tags(void *page_addr, const void *tag_storage);
void mte_invalidate_tags(int type, pgoff_t offset);
void mte_invalidate_tags_area(int type);
void *mte_allocate_tag_storage(void);
void mte_free_tag_storage(char *storage);
#ifdef CONFIG_ARM64_MTE
/* track which pages have valid allocation tags */
#define PG_mte_tagged PG_arch_2
void mte_sync_tags(pte_t *ptep, pte_t pte);
void mte_copy_page_tags(void *kto, const void *kfrom);
void flush_mte_state(void);
void mte_thread_switch(struct task_struct *next);
void mte_suspend_exit(void);
long set_mte_ctrl(struct task_struct *task, unsigned long arg);
long get_mte_ctrl(struct task_struct *task);
int mte_ptrace_copy_tags(struct task_struct *child, long request,
unsigned long addr, unsigned long data);
#else
/* unused if !CONFIG_ARM64_MTE, silence the compiler */
#define PG_mte_tagged 0
static inline void mte_sync_tags(pte_t *ptep, pte_t pte)
{
}
static inline void mte_copy_page_tags(void *kto, const void *kfrom)
{
}
static inline void flush_mte_state(void)
{
}
static inline void mte_thread_switch(struct task_struct *next)
{
}
static inline void mte_suspend_exit(void)
{
}
static inline long set_mte_ctrl(struct task_struct *task, unsigned long arg)
{
return 0;
}
static inline long get_mte_ctrl(struct task_struct *task)
{
return 0;
}
static inline int mte_ptrace_copy_tags(struct task_struct *child,
long request, unsigned long addr,
unsigned long data)
{
return -EIO;
}
#endif
#endif /* __ASSEMBLY__ */
#endif /* __ASM_MTE_H */
...@@ -25,6 +25,9 @@ const struct cpumask *cpumask_of_node(int node); ...@@ -25,6 +25,9 @@ const struct cpumask *cpumask_of_node(int node);
/* Returns a pointer to the cpumask of CPUs on Node 'node'. */ /* Returns a pointer to the cpumask of CPUs on Node 'node'. */
static inline const struct cpumask *cpumask_of_node(int node) static inline const struct cpumask *cpumask_of_node(int node)
{ {
if (node == NUMA_NO_NODE)
return cpu_all_mask;
return node_to_cpumask_map[node]; return node_to_cpumask_map[node];
} }
#endif #endif
......
...@@ -11,13 +11,8 @@ ...@@ -11,13 +11,8 @@
#include <linux/const.h> #include <linux/const.h>
/* PAGE_SHIFT determines the page size */ /* PAGE_SHIFT determines the page size */
/* CONT_SHIFT determines the number of pages which can be tracked together */
#define PAGE_SHIFT CONFIG_ARM64_PAGE_SHIFT #define PAGE_SHIFT CONFIG_ARM64_PAGE_SHIFT
#define CONT_SHIFT CONFIG_ARM64_CONT_SHIFT
#define PAGE_SIZE (_AC(1, UL) << PAGE_SHIFT) #define PAGE_SIZE (_AC(1, UL) << PAGE_SHIFT)
#define PAGE_MASK (~(PAGE_SIZE-1)) #define PAGE_MASK (~(PAGE_SIZE-1))
#define CONT_SIZE (_AC(1, UL) << (CONT_SHIFT + PAGE_SHIFT))
#define CONT_MASK (~(CONT_SIZE-1))
#endif /* __ASM_PAGE_DEF_H */ #endif /* __ASM_PAGE_DEF_H */
...@@ -15,18 +15,25 @@ ...@@ -15,18 +15,25 @@
#include <linux/personality.h> /* for READ_IMPLIES_EXEC */ #include <linux/personality.h> /* for READ_IMPLIES_EXEC */
#include <asm/pgtable-types.h> #include <asm/pgtable-types.h>
extern void __cpu_clear_user_page(void *p, unsigned long user); struct page;
extern void __cpu_copy_user_page(void *to, const void *from, struct vm_area_struct;
unsigned long user);
extern void copy_page(void *to, const void *from); extern void copy_page(void *to, const void *from);
extern void clear_page(void *to); extern void clear_page(void *to);
void copy_user_highpage(struct page *to, struct page *from,
unsigned long vaddr, struct vm_area_struct *vma);
#define __HAVE_ARCH_COPY_USER_HIGHPAGE
void copy_highpage(struct page *to, struct page *from);
#define __HAVE_ARCH_COPY_HIGHPAGE
#define __alloc_zeroed_user_highpage(movableflags, vma, vaddr) \ #define __alloc_zeroed_user_highpage(movableflags, vma, vaddr) \
alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vaddr) alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vaddr)
#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE #define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE
#define clear_user_page(addr,vaddr,pg) __cpu_clear_user_page(addr, vaddr) #define clear_user_page(page, vaddr, pg) clear_page(page)
#define copy_user_page(to,from,vaddr,pg) __cpu_copy_user_page(to, from, vaddr) #define copy_user_page(to, from, vaddr, pg) copy_page(to, from)
typedef struct page *pgtable_t; typedef struct page *pgtable_t;
...@@ -36,7 +43,7 @@ extern int pfn_valid(unsigned long); ...@@ -36,7 +43,7 @@ extern int pfn_valid(unsigned long);
#endif /* !__ASSEMBLY__ */ #endif /* !__ASSEMBLY__ */
#define VM_DATA_DEFAULT_FLAGS VM_DATA_FLAGS_TSK_EXEC #define VM_DATA_DEFAULT_FLAGS (VM_DATA_FLAGS_TSK_EXEC | VM_MTE_ALLOWED)
#include <asm-generic/getorder.h> #include <asm-generic/getorder.h>
......
...@@ -17,6 +17,7 @@ ...@@ -17,6 +17,7 @@
#define pcibios_assign_all_busses() \ #define pcibios_assign_all_busses() \
(pci_has_flag(PCI_REASSIGN_ALL_BUS)) (pci_has_flag(PCI_REASSIGN_ALL_BUS))
#define arch_can_pci_mmap_wc() 1
#define ARCH_GENERIC_PCI_MMAP_RESOURCE 1 #define ARCH_GENERIC_PCI_MMAP_RESOURCE 1
extern int isa_dma_bridge_buggy; extern int isa_dma_bridge_buggy;
......
...@@ -236,6 +236,9 @@ ...@@ -236,6 +236,9 @@
#define ARMV8_PMU_USERENR_CR (1 << 2) /* Cycle counter can be read at EL0 */ #define ARMV8_PMU_USERENR_CR (1 << 2) /* Cycle counter can be read at EL0 */
#define ARMV8_PMU_USERENR_ER (1 << 3) /* Event counter can be read at EL0 */ #define ARMV8_PMU_USERENR_ER (1 << 3) /* Event counter can be read at EL0 */
/* PMMIR_EL1.SLOTS mask */
#define ARMV8_PMU_SLOTS_MASK 0xff
#ifdef CONFIG_PERF_EVENTS #ifdef CONFIG_PERF_EVENTS
struct pt_regs; struct pt_regs;
extern unsigned long perf_instruction_pointer(struct pt_regs *regs); extern unsigned long perf_instruction_pointer(struct pt_regs *regs);
......
...@@ -81,25 +81,15 @@ ...@@ -81,25 +81,15 @@
/* /*
* Contiguous page definitions. * Contiguous page definitions.
*/ */
#ifdef CONFIG_ARM64_64K_PAGES #define CONT_PTE_SHIFT (CONFIG_ARM64_CONT_PTE_SHIFT + PAGE_SHIFT)
#define CONT_PTE_SHIFT (5 + PAGE_SHIFT)
#define CONT_PMD_SHIFT (5 + PMD_SHIFT)
#elif defined(CONFIG_ARM64_16K_PAGES)
#define CONT_PTE_SHIFT (7 + PAGE_SHIFT)
#define CONT_PMD_SHIFT (5 + PMD_SHIFT)
#else
#define CONT_PTE_SHIFT (4 + PAGE_SHIFT)
#define CONT_PMD_SHIFT (4 + PMD_SHIFT)
#endif
#define CONT_PTES (1 << (CONT_PTE_SHIFT - PAGE_SHIFT)) #define CONT_PTES (1 << (CONT_PTE_SHIFT - PAGE_SHIFT))
#define CONT_PTE_SIZE (CONT_PTES * PAGE_SIZE) #define CONT_PTE_SIZE (CONT_PTES * PAGE_SIZE)
#define CONT_PTE_MASK (~(CONT_PTE_SIZE - 1)) #define CONT_PTE_MASK (~(CONT_PTE_SIZE - 1))
#define CONT_PMD_SHIFT (CONFIG_ARM64_CONT_PMD_SHIFT + PMD_SHIFT)
#define CONT_PMDS (1 << (CONT_PMD_SHIFT - PMD_SHIFT)) #define CONT_PMDS (1 << (CONT_PMD_SHIFT - PMD_SHIFT))
#define CONT_PMD_SIZE (CONT_PMDS * PMD_SIZE) #define CONT_PMD_SIZE (CONT_PMDS * PMD_SIZE)
#define CONT_PMD_MASK (~(CONT_PMD_SIZE - 1)) #define CONT_PMD_MASK (~(CONT_PMD_SIZE - 1))
/* the numerical offset of the PTE within a range of CONT_PTES */
#define CONT_RANGE_OFFSET(addr) (((addr)>>PAGE_SHIFT)&(CONT_PTES-1))
/* /*
* Hardware page table definitions. * Hardware page table definitions.
......
...@@ -19,6 +19,13 @@ ...@@ -19,6 +19,13 @@
#define PTE_DEVMAP (_AT(pteval_t, 1) << 57) #define PTE_DEVMAP (_AT(pteval_t, 1) << 57)
#define PTE_PROT_NONE (_AT(pteval_t, 1) << 58) /* only when !PTE_VALID */ #define PTE_PROT_NONE (_AT(pteval_t, 1) << 58) /* only when !PTE_VALID */
/*
* This bit indicates that the entry is present i.e. pmd_page()
* still points to a valid huge page in memory even if the pmd
* has been invalidated.
*/
#define PMD_PRESENT_INVALID (_AT(pteval_t, 1) << 59) /* only when !PMD_SECT_VALID */
#ifndef __ASSEMBLY__ #ifndef __ASSEMBLY__
#include <asm/cpufeature.h> #include <asm/cpufeature.h>
...@@ -50,6 +57,7 @@ extern bool arm64_use_ng_mappings; ...@@ -50,6 +57,7 @@ extern bool arm64_use_ng_mappings;
#define PROT_NORMAL_NC (PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_WRITE | PTE_ATTRINDX(MT_NORMAL_NC)) #define PROT_NORMAL_NC (PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_WRITE | PTE_ATTRINDX(MT_NORMAL_NC))
#define PROT_NORMAL_WT (PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_WRITE | PTE_ATTRINDX(MT_NORMAL_WT)) #define PROT_NORMAL_WT (PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_WRITE | PTE_ATTRINDX(MT_NORMAL_WT))
#define PROT_NORMAL (PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_WRITE | PTE_ATTRINDX(MT_NORMAL)) #define PROT_NORMAL (PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_WRITE | PTE_ATTRINDX(MT_NORMAL))
#define PROT_NORMAL_TAGGED (PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_WRITE | PTE_ATTRINDX(MT_NORMAL_TAGGED))
#define PROT_SECT_DEVICE_nGnRE (PROT_SECT_DEFAULT | PMD_SECT_PXN | PMD_SECT_UXN | PMD_ATTRINDX(MT_DEVICE_nGnRE)) #define PROT_SECT_DEVICE_nGnRE (PROT_SECT_DEFAULT | PMD_SECT_PXN | PMD_SECT_UXN | PMD_ATTRINDX(MT_DEVICE_nGnRE))
#define PROT_SECT_NORMAL (PROT_SECT_DEFAULT | PMD_SECT_PXN | PMD_SECT_UXN | PMD_ATTRINDX(MT_NORMAL)) #define PROT_SECT_NORMAL (PROT_SECT_DEFAULT | PMD_SECT_PXN | PMD_SECT_UXN | PMD_ATTRINDX(MT_NORMAL))
...@@ -59,6 +67,7 @@ extern bool arm64_use_ng_mappings; ...@@ -59,6 +67,7 @@ extern bool arm64_use_ng_mappings;
#define _HYP_PAGE_DEFAULT _PAGE_DEFAULT #define _HYP_PAGE_DEFAULT _PAGE_DEFAULT
#define PAGE_KERNEL __pgprot(PROT_NORMAL) #define PAGE_KERNEL __pgprot(PROT_NORMAL)
#define PAGE_KERNEL_TAGGED __pgprot(PROT_NORMAL_TAGGED)
#define PAGE_KERNEL_RO __pgprot((PROT_NORMAL & ~PTE_WRITE) | PTE_RDONLY) #define PAGE_KERNEL_RO __pgprot((PROT_NORMAL & ~PTE_WRITE) | PTE_RDONLY)
#define PAGE_KERNEL_ROX __pgprot((PROT_NORMAL & ~(PTE_WRITE | PTE_PXN)) | PTE_RDONLY) #define PAGE_KERNEL_ROX __pgprot((PROT_NORMAL & ~(PTE_WRITE | PTE_PXN)) | PTE_RDONLY)
#define PAGE_KERNEL_EXEC __pgprot(PROT_NORMAL & ~PTE_PXN) #define PAGE_KERNEL_EXEC __pgprot(PROT_NORMAL & ~PTE_PXN)
......
...@@ -9,6 +9,7 @@ ...@@ -9,6 +9,7 @@
#include <asm/proc-fns.h> #include <asm/proc-fns.h>
#include <asm/memory.h> #include <asm/memory.h>
#include <asm/mte.h>
#include <asm/pgtable-hwdef.h> #include <asm/pgtable-hwdef.h>
#include <asm/pgtable-prot.h> #include <asm/pgtable-prot.h>
#include <asm/tlbflush.h> #include <asm/tlbflush.h>
...@@ -35,11 +36,6 @@ ...@@ -35,11 +36,6 @@
extern struct page *vmemmap; extern struct page *vmemmap;
extern void __pte_error(const char *file, int line, unsigned long val);
extern void __pmd_error(const char *file, int line, unsigned long val);
extern void __pud_error(const char *file, int line, unsigned long val);
extern void __pgd_error(const char *file, int line, unsigned long val);
#ifdef CONFIG_TRANSPARENT_HUGEPAGE #ifdef CONFIG_TRANSPARENT_HUGEPAGE
#define __HAVE_ARCH_FLUSH_PMD_TLB_RANGE #define __HAVE_ARCH_FLUSH_PMD_TLB_RANGE
...@@ -50,6 +46,14 @@ extern void __pgd_error(const char *file, int line, unsigned long val); ...@@ -50,6 +46,14 @@ extern void __pgd_error(const char *file, int line, unsigned long val);
__flush_tlb_range(vma, addr, end, PUD_SIZE, false, 1) __flush_tlb_range(vma, addr, end, PUD_SIZE, false, 1)
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */ #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
/*
* Outside of a few very special situations (e.g. hibernation), we always
* use broadcast TLB invalidation instructions, therefore a spurious page
* fault on one CPU which has been handled concurrently by another CPU
* does not need to perform additional invalidation.
*/
#define flush_tlb_fix_spurious_fault(vma, address) do { } while (0)
/* /*
* ZERO_PAGE is a global shared page that is always zero: used * ZERO_PAGE is a global shared page that is always zero: used
* for zero-mapped memory areas etc.. * for zero-mapped memory areas etc..
...@@ -57,7 +61,8 @@ extern void __pgd_error(const char *file, int line, unsigned long val); ...@@ -57,7 +61,8 @@ extern void __pgd_error(const char *file, int line, unsigned long val);
extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)]; extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
#define ZERO_PAGE(vaddr) phys_to_page(__pa_symbol(empty_zero_page)) #define ZERO_PAGE(vaddr) phys_to_page(__pa_symbol(empty_zero_page))
#define pte_ERROR(pte) __pte_error(__FILE__, __LINE__, pte_val(pte)) #define pte_ERROR(e) \
pr_err("%s:%d: bad pte %016llx.\n", __FILE__, __LINE__, pte_val(e))
/* /*
* Macros to convert between a physical address and its placement in a * Macros to convert between a physical address and its placement in a
...@@ -90,6 +95,8 @@ extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)]; ...@@ -90,6 +95,8 @@ extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
#define pte_user_exec(pte) (!(pte_val(pte) & PTE_UXN)) #define pte_user_exec(pte) (!(pte_val(pte) & PTE_UXN))
#define pte_cont(pte) (!!(pte_val(pte) & PTE_CONT)) #define pte_cont(pte) (!!(pte_val(pte) & PTE_CONT))
#define pte_devmap(pte) (!!(pte_val(pte) & PTE_DEVMAP)) #define pte_devmap(pte) (!!(pte_val(pte) & PTE_DEVMAP))
#define pte_tagged(pte) ((pte_val(pte) & PTE_ATTRINDX_MASK) == \
PTE_ATTRINDX(MT_NORMAL_TAGGED))
#define pte_cont_addr_end(addr, end) \ #define pte_cont_addr_end(addr, end) \
({ unsigned long __boundary = ((addr) + CONT_PTE_SIZE) & CONT_PTE_MASK; \ ({ unsigned long __boundary = ((addr) + CONT_PTE_SIZE) & CONT_PTE_MASK; \
...@@ -145,6 +152,18 @@ static inline pte_t set_pte_bit(pte_t pte, pgprot_t prot) ...@@ -145,6 +152,18 @@ static inline pte_t set_pte_bit(pte_t pte, pgprot_t prot)
return pte; return pte;
} }
static inline pmd_t clear_pmd_bit(pmd_t pmd, pgprot_t prot)
{
pmd_val(pmd) &= ~pgprot_val(prot);
return pmd;
}
static inline pmd_t set_pmd_bit(pmd_t pmd, pgprot_t prot)
{
pmd_val(pmd) |= pgprot_val(prot);
return pmd;
}
static inline pte_t pte_wrprotect(pte_t pte) static inline pte_t pte_wrprotect(pte_t pte)
{ {
pte = clear_pte_bit(pte, __pgprot(PTE_WRITE)); pte = clear_pte_bit(pte, __pgprot(PTE_WRITE));
...@@ -284,6 +303,10 @@ static inline void set_pte_at(struct mm_struct *mm, unsigned long addr, ...@@ -284,6 +303,10 @@ static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
if (pte_present(pte) && pte_user_exec(pte) && !pte_special(pte)) if (pte_present(pte) && pte_user_exec(pte) && !pte_special(pte))
__sync_icache_dcache(pte); __sync_icache_dcache(pte);
if (system_supports_mte() &&
pte_present(pte) && pte_tagged(pte) && !pte_special(pte))
mte_sync_tags(ptep, pte);
__check_racy_pte_update(mm, ptep, pte); __check_racy_pte_update(mm, ptep, pte);
set_pte(ptep, pte); set_pte(ptep, pte);
...@@ -363,15 +386,24 @@ static inline int pmd_protnone(pmd_t pmd) ...@@ -363,15 +386,24 @@ static inline int pmd_protnone(pmd_t pmd)
} }
#endif #endif
#define pmd_present_invalid(pmd) (!!(pmd_val(pmd) & PMD_PRESENT_INVALID))
static inline int pmd_present(pmd_t pmd)
{
return pte_present(pmd_pte(pmd)) || pmd_present_invalid(pmd);
}
/* /*
* THP definitions. * THP definitions.
*/ */
#ifdef CONFIG_TRANSPARENT_HUGEPAGE #ifdef CONFIG_TRANSPARENT_HUGEPAGE
#define pmd_trans_huge(pmd) (pmd_val(pmd) && !(pmd_val(pmd) & PMD_TABLE_BIT)) static inline int pmd_trans_huge(pmd_t pmd)
{
return pmd_val(pmd) && pmd_present(pmd) && !(pmd_val(pmd) & PMD_TABLE_BIT);
}
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */ #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
#define pmd_present(pmd) pte_present(pmd_pte(pmd))
#define pmd_dirty(pmd) pte_dirty(pmd_pte(pmd)) #define pmd_dirty(pmd) pte_dirty(pmd_pte(pmd))
#define pmd_young(pmd) pte_young(pmd_pte(pmd)) #define pmd_young(pmd) pte_young(pmd_pte(pmd))
#define pmd_valid(pmd) pte_valid(pmd_pte(pmd)) #define pmd_valid(pmd) pte_valid(pmd_pte(pmd))
...@@ -381,7 +413,14 @@ static inline int pmd_protnone(pmd_t pmd) ...@@ -381,7 +413,14 @@ static inline int pmd_protnone(pmd_t pmd)
#define pmd_mkclean(pmd) pte_pmd(pte_mkclean(pmd_pte(pmd))) #define pmd_mkclean(pmd) pte_pmd(pte_mkclean(pmd_pte(pmd)))
#define pmd_mkdirty(pmd) pte_pmd(pte_mkdirty(pmd_pte(pmd))) #define pmd_mkdirty(pmd) pte_pmd(pte_mkdirty(pmd_pte(pmd)))
#define pmd_mkyoung(pmd) pte_pmd(pte_mkyoung(pmd_pte(pmd))) #define pmd_mkyoung(pmd) pte_pmd(pte_mkyoung(pmd_pte(pmd)))
#define pmd_mkinvalid(pmd) (__pmd(pmd_val(pmd) & ~PMD_SECT_VALID))
static inline pmd_t pmd_mkinvalid(pmd_t pmd)
{
pmd = set_pmd_bit(pmd, __pgprot(PMD_PRESENT_INVALID));
pmd = clear_pmd_bit(pmd, __pgprot(PMD_SECT_VALID));
return pmd;
}
#define pmd_thp_or_huge(pmd) (pmd_huge(pmd) || pmd_trans_huge(pmd)) #define pmd_thp_or_huge(pmd) (pmd_huge(pmd) || pmd_trans_huge(pmd))
...@@ -541,7 +580,8 @@ static inline unsigned long pmd_page_vaddr(pmd_t pmd) ...@@ -541,7 +580,8 @@ static inline unsigned long pmd_page_vaddr(pmd_t pmd)
#if CONFIG_PGTABLE_LEVELS > 2 #if CONFIG_PGTABLE_LEVELS > 2
#define pmd_ERROR(pmd) __pmd_error(__FILE__, __LINE__, pmd_val(pmd)) #define pmd_ERROR(e) \
pr_err("%s:%d: bad pmd %016llx.\n", __FILE__, __LINE__, pmd_val(e))
#define pud_none(pud) (!pud_val(pud)) #define pud_none(pud) (!pud_val(pud))
#define pud_bad(pud) (!(pud_val(pud) & PUD_TABLE_BIT)) #define pud_bad(pud) (!(pud_val(pud) & PUD_TABLE_BIT))
...@@ -608,7 +648,8 @@ static inline unsigned long pud_page_vaddr(pud_t pud) ...@@ -608,7 +648,8 @@ static inline unsigned long pud_page_vaddr(pud_t pud)
#if CONFIG_PGTABLE_LEVELS > 3 #if CONFIG_PGTABLE_LEVELS > 3
#define pud_ERROR(pud) __pud_error(__FILE__, __LINE__, pud_val(pud)) #define pud_ERROR(e) \
pr_err("%s:%d: bad pud %016llx.\n", __FILE__, __LINE__, pud_val(e))
#define p4d_none(p4d) (!p4d_val(p4d)) #define p4d_none(p4d) (!p4d_val(p4d))
#define p4d_bad(p4d) (!(p4d_val(p4d) & 2)) #define p4d_bad(p4d) (!(p4d_val(p4d) & 2))
...@@ -667,15 +708,21 @@ static inline unsigned long p4d_page_vaddr(p4d_t p4d) ...@@ -667,15 +708,21 @@ static inline unsigned long p4d_page_vaddr(p4d_t p4d)
#endif /* CONFIG_PGTABLE_LEVELS > 3 */ #endif /* CONFIG_PGTABLE_LEVELS > 3 */
#define pgd_ERROR(pgd) __pgd_error(__FILE__, __LINE__, pgd_val(pgd)) #define pgd_ERROR(e) \
pr_err("%s:%d: bad pgd %016llx.\n", __FILE__, __LINE__, pgd_val(e))
#define pgd_set_fixmap(addr) ((pgd_t *)set_fixmap_offset(FIX_PGD, addr)) #define pgd_set_fixmap(addr) ((pgd_t *)set_fixmap_offset(FIX_PGD, addr))
#define pgd_clear_fixmap() clear_fixmap(FIX_PGD) #define pgd_clear_fixmap() clear_fixmap(FIX_PGD)
static inline pte_t pte_modify(pte_t pte, pgprot_t newprot) static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
{ {
/*
* Normal and Normal-Tagged are two different memory types and indices
* in MAIR_EL1. The mask below has to include PTE_ATTRINDX_MASK.
*/
const pteval_t mask = PTE_USER | PTE_PXN | PTE_UXN | PTE_RDONLY | const pteval_t mask = PTE_USER | PTE_PXN | PTE_UXN | PTE_RDONLY |
PTE_PROT_NONE | PTE_VALID | PTE_WRITE | PTE_GP; PTE_PROT_NONE | PTE_VALID | PTE_WRITE | PTE_GP |
PTE_ATTRINDX_MASK;
/* preserve the hardware dirty information */ /* preserve the hardware dirty information */
if (pte_hw_dirty(pte)) if (pte_hw_dirty(pte))
pte = pte_mkdirty(pte); pte = pte_mkdirty(pte);
...@@ -847,6 +894,11 @@ static inline pmd_t pmdp_establish(struct vm_area_struct *vma, ...@@ -847,6 +894,11 @@ static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) }) #define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
#define __swp_entry_to_pte(swp) ((pte_t) { (swp).val }) #define __swp_entry_to_pte(swp) ((pte_t) { (swp).val })
#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
#define __pmd_to_swp_entry(pmd) ((swp_entry_t) { pmd_val(pmd) })
#define __swp_entry_to_pmd(swp) __pmd((swp).val)
#endif /* CONFIG_ARCH_ENABLE_THP_MIGRATION */
/* /*
* Ensure that there are not more swap files than can be encoded in the kernel * Ensure that there are not more swap files than can be encoded in the kernel
* PTEs. * PTEs.
...@@ -855,6 +907,38 @@ static inline pmd_t pmdp_establish(struct vm_area_struct *vma, ...@@ -855,6 +907,38 @@ static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
extern int kern_addr_valid(unsigned long addr); extern int kern_addr_valid(unsigned long addr);
#ifdef CONFIG_ARM64_MTE
#define __HAVE_ARCH_PREPARE_TO_SWAP
static inline int arch_prepare_to_swap(struct page *page)
{
if (system_supports_mte())
return mte_save_tags(page);
return 0;
}
#define __HAVE_ARCH_SWAP_INVALIDATE
static inline void arch_swap_invalidate_page(int type, pgoff_t offset)
{
if (system_supports_mte())
mte_invalidate_tags(type, offset);
}
static inline void arch_swap_invalidate_area(int type)
{
if (system_supports_mte())
mte_invalidate_tags_area(type);
}
#define __HAVE_ARCH_SWAP_RESTORE
static inline void arch_swap_restore(swp_entry_t entry, struct page *page)
{
if (system_supports_mte() && mte_restore_tags(entry, page))
set_bit(PG_mte_tagged, &page->flags);
}
#endif /* CONFIG_ARM64_MTE */
/* /*
* On AArch64, the cache coherency is handled via the set_pte_at() function. * On AArch64, the cache coherency is handled via the set_pte_at() function.
*/ */
......
...@@ -38,6 +38,7 @@ ...@@ -38,6 +38,7 @@
#include <asm/pgtable-hwdef.h> #include <asm/pgtable-hwdef.h>
#include <asm/pointer_auth.h> #include <asm/pointer_auth.h>
#include <asm/ptrace.h> #include <asm/ptrace.h>
#include <asm/spectre.h>
#include <asm/types.h> #include <asm/types.h>
/* /*
...@@ -151,6 +152,10 @@ struct thread_struct { ...@@ -151,6 +152,10 @@ struct thread_struct {
struct ptrauth_keys_user keys_user; struct ptrauth_keys_user keys_user;
struct ptrauth_keys_kernel keys_kernel; struct ptrauth_keys_kernel keys_kernel;
#endif #endif
#ifdef CONFIG_ARM64_MTE
u64 sctlr_tcf0;
u64 gcr_user_incl;
#endif
}; };
static inline void arch_thread_struct_whitelist(unsigned long *offset, static inline void arch_thread_struct_whitelist(unsigned long *offset,
...@@ -197,40 +202,15 @@ static inline void start_thread_common(struct pt_regs *regs, unsigned long pc) ...@@ -197,40 +202,15 @@ static inline void start_thread_common(struct pt_regs *regs, unsigned long pc)
regs->pmr_save = GIC_PRIO_IRQON; regs->pmr_save = GIC_PRIO_IRQON;
} }
static inline void set_ssbs_bit(struct pt_regs *regs)
{
regs->pstate |= PSR_SSBS_BIT;
}
static inline void set_compat_ssbs_bit(struct pt_regs *regs)
{
regs->pstate |= PSR_AA32_SSBS_BIT;
}
static inline void start_thread(struct pt_regs *regs, unsigned long pc, static inline void start_thread(struct pt_regs *regs, unsigned long pc,
unsigned long sp) unsigned long sp)
{ {
start_thread_common(regs, pc); start_thread_common(regs, pc);
regs->pstate = PSR_MODE_EL0t; regs->pstate = PSR_MODE_EL0t;
spectre_v4_enable_task_mitigation(current);
if (arm64_get_ssbd_state() != ARM64_SSBD_FORCE_ENABLE)
set_ssbs_bit(regs);
regs->sp = sp; regs->sp = sp;
} }
static inline bool is_ttbr0_addr(unsigned long addr)
{
/* entry assembly clears tags for TTBR0 addrs */
return addr < TASK_SIZE;
}
static inline bool is_ttbr1_addr(unsigned long addr)
{
/* TTBR1 addresses may have a tag if KASAN_SW_TAGS is in use */
return arch_kasan_reset_tag(addr) >= PAGE_OFFSET;
}
#ifdef CONFIG_COMPAT #ifdef CONFIG_COMPAT
static inline void compat_start_thread(struct pt_regs *regs, unsigned long pc, static inline void compat_start_thread(struct pt_regs *regs, unsigned long pc,
unsigned long sp) unsigned long sp)
...@@ -244,13 +224,23 @@ static inline void compat_start_thread(struct pt_regs *regs, unsigned long pc, ...@@ -244,13 +224,23 @@ static inline void compat_start_thread(struct pt_regs *regs, unsigned long pc,
regs->pstate |= PSR_AA32_E_BIT; regs->pstate |= PSR_AA32_E_BIT;
#endif #endif
if (arm64_get_ssbd_state() != ARM64_SSBD_FORCE_ENABLE) spectre_v4_enable_task_mitigation(current);
set_compat_ssbs_bit(regs);
regs->compat_sp = sp; regs->compat_sp = sp;
} }
#endif #endif
static inline bool is_ttbr0_addr(unsigned long addr)
{
/* entry assembly clears tags for TTBR0 addrs */
return addr < TASK_SIZE;
}
static inline bool is_ttbr1_addr(unsigned long addr)
{
/* TTBR1 addresses may have a tag if KASAN_SW_TAGS is in use */
return arch_kasan_reset_tag(addr) >= PAGE_OFFSET;
}
/* Forward declaration, a strange C thing */ /* Forward declaration, a strange C thing */
struct task_struct; struct task_struct;
...@@ -315,10 +305,10 @@ extern void __init minsigstksz_setup(void); ...@@ -315,10 +305,10 @@ extern void __init minsigstksz_setup(void);
#ifdef CONFIG_ARM64_TAGGED_ADDR_ABI #ifdef CONFIG_ARM64_TAGGED_ADDR_ABI
/* PR_{SET,GET}_TAGGED_ADDR_CTRL prctl */ /* PR_{SET,GET}_TAGGED_ADDR_CTRL prctl */
long set_tagged_addr_ctrl(unsigned long arg); long set_tagged_addr_ctrl(struct task_struct *task, unsigned long arg);
long get_tagged_addr_ctrl(void); long get_tagged_addr_ctrl(struct task_struct *task);
#define SET_TAGGED_ADDR_CTRL(arg) set_tagged_addr_ctrl(arg) #define SET_TAGGED_ADDR_CTRL(arg) set_tagged_addr_ctrl(current, arg)
#define GET_TAGGED_ADDR_CTRL() get_tagged_addr_ctrl() #define GET_TAGGED_ADDR_CTRL() get_tagged_addr_ctrl(current)
#endif #endif
/* /*
......
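Editor's note: with set_tagged_addr_ctrl()/get_tagged_addr_ctrl() now taking a task argument, the same control can also be driven on behalf of another task, while the userspace entry point remains prctl(). A hedged userspace sketch of enabling the tagged-address ABI together with synchronous MTE tag checking; the PR_MTE_* and PR_SET_TAGGED_ADDR_CTRL values are not part of this hunk and are duplicated here as assumptions.

```c
#include <stdio.h>
#include <sys/prctl.h>

/* Assumed values; normally provided by <linux/prctl.h> from this series. */
#ifndef PR_SET_TAGGED_ADDR_CTRL
#define PR_SET_TAGGED_ADDR_CTRL	55
#define PR_TAGGED_ADDR_ENABLE	(1UL << 0)
#endif
#ifndef PR_MTE_TCF_SHIFT
#define PR_MTE_TCF_SHIFT	1
#define PR_MTE_TCF_SYNC		(1UL << PR_MTE_TCF_SHIFT)
#define PR_MTE_TAG_SHIFT	3
#endif

int main(void)
{
	/*
	 * Enable the tagged-address ABI, request synchronous tag check
	 * faults and allow tags 1-15 to be generated by IRG.
	 */
	unsigned long ctrl = PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_SYNC |
			     (0xfffeUL << PR_MTE_TAG_SHIFT);

	if (prctl(PR_SET_TAGGED_ADDR_CTRL, ctrl, 0, 0, 0)) {
		perror("PR_SET_TAGGED_ADDR_CTRL");
		return 1;
	}
	return 0;
}
```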
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Interface for managing mitigations for Spectre vulnerabilities.
*
* Copyright (C) 2020 Google LLC
* Author: Will Deacon <will@kernel.org>
*/
#ifndef __ASM_SPECTRE_H
#define __ASM_SPECTRE_H
#include <asm/cpufeature.h>
/* Watch out, ordering is important here. */
enum mitigation_state {
SPECTRE_UNAFFECTED,
SPECTRE_MITIGATED,
SPECTRE_VULNERABLE,
};
struct task_struct;
enum mitigation_state arm64_get_spectre_v2_state(void);
bool has_spectre_v2(const struct arm64_cpu_capabilities *cap, int scope);
void spectre_v2_enable_mitigation(const struct arm64_cpu_capabilities *__unused);
enum mitigation_state arm64_get_spectre_v4_state(void);
bool has_spectre_v4(const struct arm64_cpu_capabilities *cap, int scope);
void spectre_v4_enable_mitigation(const struct arm64_cpu_capabilities *__unused);
void spectre_v4_enable_task_mitigation(struct task_struct *tsk);
#endif /* __ASM_SPECTRE_H */
...@@ -63,7 +63,7 @@ struct stackframe { ...@@ -63,7 +63,7 @@ struct stackframe {
extern int unwind_frame(struct task_struct *tsk, struct stackframe *frame); extern int unwind_frame(struct task_struct *tsk, struct stackframe *frame);
extern void walk_stackframe(struct task_struct *tsk, struct stackframe *frame, extern void walk_stackframe(struct task_struct *tsk, struct stackframe *frame,
int (*fn)(struct stackframe *, void *), void *data); bool (*fn)(void *, unsigned long), void *data);
extern void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk, extern void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
const char *loglvl); const char *loglvl);
......
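Editor's note: walk_stackframe() now takes an arch_stackwalk-style consume-entry callback that returns false to stop the unwind. A minimal kernel-context sketch of a callback matching the new prototype; the caller setup is elided and the function name is illustrative.

```c
#include <linux/printk.h>
#include <linux/sched.h>
#include <asm/stacktrace.h>

/* Illustrative consume-entry callback for the new walk_stackframe() prototype. */
static bool dump_entry(void *data, unsigned long pc)
{
	pr_info("%s: %pS\n", (const char *)data, (void *)pc);
	return true;	/* returning false stops the unwind */
}
```

A caller would initialise a struct stackframe (e.g. with start_backtrace()) and then pass dump_entry and a context pointer as the last two arguments of walk_stackframe().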
...@@ -91,10 +91,12 @@ ...@@ -91,10 +91,12 @@
#define PSTATE_PAN pstate_field(0, 4) #define PSTATE_PAN pstate_field(0, 4)
#define PSTATE_UAO pstate_field(0, 3) #define PSTATE_UAO pstate_field(0, 3)
#define PSTATE_SSBS pstate_field(3, 1) #define PSTATE_SSBS pstate_field(3, 1)
#define PSTATE_TCO pstate_field(3, 4)
#define SET_PSTATE_PAN(x) __emit_inst(0xd500401f | PSTATE_PAN | ((!!x) << PSTATE_Imm_shift)) #define SET_PSTATE_PAN(x) __emit_inst(0xd500401f | PSTATE_PAN | ((!!x) << PSTATE_Imm_shift))
#define SET_PSTATE_UAO(x) __emit_inst(0xd500401f | PSTATE_UAO | ((!!x) << PSTATE_Imm_shift)) #define SET_PSTATE_UAO(x) __emit_inst(0xd500401f | PSTATE_UAO | ((!!x) << PSTATE_Imm_shift))
#define SET_PSTATE_SSBS(x) __emit_inst(0xd500401f | PSTATE_SSBS | ((!!x) << PSTATE_Imm_shift)) #define SET_PSTATE_SSBS(x) __emit_inst(0xd500401f | PSTATE_SSBS | ((!!x) << PSTATE_Imm_shift))
#define SET_PSTATE_TCO(x) __emit_inst(0xd500401f | PSTATE_TCO | ((!!x) << PSTATE_Imm_shift))
#define __SYS_BARRIER_INSN(CRm, op2, Rt) \ #define __SYS_BARRIER_INSN(CRm, op2, Rt) \
__emit_inst(0xd5000000 | sys_insn(0, 3, 3, (CRm), (op2)) | ((Rt) & 0x1f)) __emit_inst(0xd5000000 | sys_insn(0, 3, 3, (CRm), (op2)) | ((Rt) & 0x1f))
...@@ -181,6 +183,8 @@ ...@@ -181,6 +183,8 @@
#define SYS_SCTLR_EL1 sys_reg(3, 0, 1, 0, 0) #define SYS_SCTLR_EL1 sys_reg(3, 0, 1, 0, 0)
#define SYS_ACTLR_EL1 sys_reg(3, 0, 1, 0, 1) #define SYS_ACTLR_EL1 sys_reg(3, 0, 1, 0, 1)
#define SYS_CPACR_EL1 sys_reg(3, 0, 1, 0, 2) #define SYS_CPACR_EL1 sys_reg(3, 0, 1, 0, 2)
#define SYS_RGSR_EL1 sys_reg(3, 0, 1, 0, 5)
#define SYS_GCR_EL1 sys_reg(3, 0, 1, 0, 6)
#define SYS_ZCR_EL1 sys_reg(3, 0, 1, 2, 0) #define SYS_ZCR_EL1 sys_reg(3, 0, 1, 2, 0)
...@@ -218,6 +222,8 @@ ...@@ -218,6 +222,8 @@
#define SYS_ERXADDR_EL1 sys_reg(3, 0, 5, 4, 3) #define SYS_ERXADDR_EL1 sys_reg(3, 0, 5, 4, 3)
#define SYS_ERXMISC0_EL1 sys_reg(3, 0, 5, 5, 0) #define SYS_ERXMISC0_EL1 sys_reg(3, 0, 5, 5, 0)
#define SYS_ERXMISC1_EL1 sys_reg(3, 0, 5, 5, 1) #define SYS_ERXMISC1_EL1 sys_reg(3, 0, 5, 5, 1)
#define SYS_TFSR_EL1 sys_reg(3, 0, 5, 6, 0)
#define SYS_TFSRE0_EL1 sys_reg(3, 0, 5, 6, 1)
#define SYS_FAR_EL1 sys_reg(3, 0, 6, 0, 0) #define SYS_FAR_EL1 sys_reg(3, 0, 6, 0, 0)
#define SYS_PAR_EL1 sys_reg(3, 0, 7, 4, 0) #define SYS_PAR_EL1 sys_reg(3, 0, 7, 4, 0)
...@@ -321,6 +327,8 @@ ...@@ -321,6 +327,8 @@
#define SYS_PMINTENSET_EL1 sys_reg(3, 0, 9, 14, 1) #define SYS_PMINTENSET_EL1 sys_reg(3, 0, 9, 14, 1)
#define SYS_PMINTENCLR_EL1 sys_reg(3, 0, 9, 14, 2) #define SYS_PMINTENCLR_EL1 sys_reg(3, 0, 9, 14, 2)
#define SYS_PMMIR_EL1 sys_reg(3, 0, 9, 14, 6)
#define SYS_MAIR_EL1 sys_reg(3, 0, 10, 2, 0) #define SYS_MAIR_EL1 sys_reg(3, 0, 10, 2, 0)
#define SYS_AMAIR_EL1 sys_reg(3, 0, 10, 3, 0) #define SYS_AMAIR_EL1 sys_reg(3, 0, 10, 3, 0)
...@@ -368,6 +376,7 @@ ...@@ -368,6 +376,7 @@
#define SYS_CCSIDR_EL1 sys_reg(3, 1, 0, 0, 0) #define SYS_CCSIDR_EL1 sys_reg(3, 1, 0, 0, 0)
#define SYS_CLIDR_EL1 sys_reg(3, 1, 0, 0, 1) #define SYS_CLIDR_EL1 sys_reg(3, 1, 0, 0, 1)
#define SYS_GMID_EL1 sys_reg(3, 1, 0, 0, 4)
#define SYS_AIDR_EL1 sys_reg(3, 1, 0, 0, 7) #define SYS_AIDR_EL1 sys_reg(3, 1, 0, 0, 7)
#define SYS_CSSELR_EL1 sys_reg(3, 2, 0, 0, 0) #define SYS_CSSELR_EL1 sys_reg(3, 2, 0, 0, 0)
...@@ -460,6 +469,7 @@ ...@@ -460,6 +469,7 @@
#define SYS_ESR_EL2 sys_reg(3, 4, 5, 2, 0) #define SYS_ESR_EL2 sys_reg(3, 4, 5, 2, 0)
#define SYS_VSESR_EL2 sys_reg(3, 4, 5, 2, 3) #define SYS_VSESR_EL2 sys_reg(3, 4, 5, 2, 3)
#define SYS_FPEXC32_EL2 sys_reg(3, 4, 5, 3, 0) #define SYS_FPEXC32_EL2 sys_reg(3, 4, 5, 3, 0)
#define SYS_TFSR_EL2 sys_reg(3, 4, 5, 6, 0)
#define SYS_FAR_EL2 sys_reg(3, 4, 6, 0, 0) #define SYS_FAR_EL2 sys_reg(3, 4, 6, 0, 0)
#define SYS_VDISR_EL2 sys_reg(3, 4, 12, 1, 1) #define SYS_VDISR_EL2 sys_reg(3, 4, 12, 1, 1)
...@@ -516,6 +526,7 @@ ...@@ -516,6 +526,7 @@
#define SYS_AFSR0_EL12 sys_reg(3, 5, 5, 1, 0) #define SYS_AFSR0_EL12 sys_reg(3, 5, 5, 1, 0)
#define SYS_AFSR1_EL12 sys_reg(3, 5, 5, 1, 1) #define SYS_AFSR1_EL12 sys_reg(3, 5, 5, 1, 1)
#define SYS_ESR_EL12 sys_reg(3, 5, 5, 2, 0) #define SYS_ESR_EL12 sys_reg(3, 5, 5, 2, 0)
#define SYS_TFSR_EL12 sys_reg(3, 5, 5, 6, 0)
#define SYS_FAR_EL12 sys_reg(3, 5, 6, 0, 0) #define SYS_FAR_EL12 sys_reg(3, 5, 6, 0, 0)
#define SYS_MAIR_EL12 sys_reg(3, 5, 10, 2, 0) #define SYS_MAIR_EL12 sys_reg(3, 5, 10, 2, 0)
#define SYS_AMAIR_EL12 sys_reg(3, 5, 10, 3, 0) #define SYS_AMAIR_EL12 sys_reg(3, 5, 10, 3, 0)
...@@ -531,6 +542,15 @@ ...@@ -531,6 +542,15 @@
/* Common SCTLR_ELx flags. */ /* Common SCTLR_ELx flags. */
#define SCTLR_ELx_DSSBS (BIT(44)) #define SCTLR_ELx_DSSBS (BIT(44))
#define SCTLR_ELx_ATA (BIT(43))
#define SCTLR_ELx_TCF_SHIFT 40
#define SCTLR_ELx_TCF_NONE (UL(0x0) << SCTLR_ELx_TCF_SHIFT)
#define SCTLR_ELx_TCF_SYNC (UL(0x1) << SCTLR_ELx_TCF_SHIFT)
#define SCTLR_ELx_TCF_ASYNC (UL(0x2) << SCTLR_ELx_TCF_SHIFT)
#define SCTLR_ELx_TCF_MASK (UL(0x3) << SCTLR_ELx_TCF_SHIFT)
#define SCTLR_ELx_ITFSB (BIT(37))
#define SCTLR_ELx_ENIA (BIT(31)) #define SCTLR_ELx_ENIA (BIT(31))
#define SCTLR_ELx_ENIB (BIT(30)) #define SCTLR_ELx_ENIB (BIT(30))
#define SCTLR_ELx_ENDA (BIT(27)) #define SCTLR_ELx_ENDA (BIT(27))
...@@ -559,6 +579,14 @@ ...@@ -559,6 +579,14 @@
#endif #endif
/* SCTLR_EL1 specific flags. */ /* SCTLR_EL1 specific flags. */
#define SCTLR_EL1_ATA0 (BIT(42))
#define SCTLR_EL1_TCF0_SHIFT 38
#define SCTLR_EL1_TCF0_NONE (UL(0x0) << SCTLR_EL1_TCF0_SHIFT)
#define SCTLR_EL1_TCF0_SYNC (UL(0x1) << SCTLR_EL1_TCF0_SHIFT)
#define SCTLR_EL1_TCF0_ASYNC (UL(0x2) << SCTLR_EL1_TCF0_SHIFT)
#define SCTLR_EL1_TCF0_MASK (UL(0x3) << SCTLR_EL1_TCF0_SHIFT)
#define SCTLR_EL1_BT1 (BIT(36)) #define SCTLR_EL1_BT1 (BIT(36))
#define SCTLR_EL1_BT0 (BIT(35)) #define SCTLR_EL1_BT0 (BIT(35))
#define SCTLR_EL1_UCI (BIT(26)) #define SCTLR_EL1_UCI (BIT(26))
...@@ -587,6 +615,7 @@ ...@@ -587,6 +615,7 @@
SCTLR_EL1_SA0 | SCTLR_EL1_SED | SCTLR_ELx_I |\ SCTLR_EL1_SA0 | SCTLR_EL1_SED | SCTLR_ELx_I |\
SCTLR_EL1_DZE | SCTLR_EL1_UCT |\ SCTLR_EL1_DZE | SCTLR_EL1_UCT |\
SCTLR_EL1_NTWE | SCTLR_ELx_IESB | SCTLR_EL1_SPAN |\ SCTLR_EL1_NTWE | SCTLR_ELx_IESB | SCTLR_EL1_SPAN |\
SCTLR_ELx_ITFSB| SCTLR_ELx_ATA | SCTLR_EL1_ATA0 |\
ENDIAN_SET_EL1 | SCTLR_EL1_UCI | SCTLR_EL1_RES1) ENDIAN_SET_EL1 | SCTLR_EL1_UCI | SCTLR_EL1_RES1)
/* MAIR_ELx memory attributes (used by Linux) */ /* MAIR_ELx memory attributes (used by Linux) */
...@@ -595,6 +624,7 @@ ...@@ -595,6 +624,7 @@
#define MAIR_ATTR_DEVICE_GRE UL(0x0c) #define MAIR_ATTR_DEVICE_GRE UL(0x0c)
#define MAIR_ATTR_NORMAL_NC UL(0x44) #define MAIR_ATTR_NORMAL_NC UL(0x44)
#define MAIR_ATTR_NORMAL_WT UL(0xbb) #define MAIR_ATTR_NORMAL_WT UL(0xbb)
#define MAIR_ATTR_NORMAL_TAGGED UL(0xf0)
#define MAIR_ATTR_NORMAL UL(0xff) #define MAIR_ATTR_NORMAL UL(0xff)
#define MAIR_ATTR_MASK UL(0xff) #define MAIR_ATTR_MASK UL(0xff)
...@@ -636,14 +666,22 @@ ...@@ -636,14 +666,22 @@
#define ID_AA64ISAR1_APA_SHIFT 4 #define ID_AA64ISAR1_APA_SHIFT 4
#define ID_AA64ISAR1_DPB_SHIFT 0 #define ID_AA64ISAR1_DPB_SHIFT 0
#define ID_AA64ISAR1_APA_NI 0x0 #define ID_AA64ISAR1_APA_NI 0x0
#define ID_AA64ISAR1_APA_ARCHITECTED 0x1 #define ID_AA64ISAR1_APA_ARCHITECTED 0x1
#define ID_AA64ISAR1_APA_ARCH_EPAC 0x2
#define ID_AA64ISAR1_APA_ARCH_EPAC2 0x3
#define ID_AA64ISAR1_APA_ARCH_EPAC2_FPAC 0x4
#define ID_AA64ISAR1_APA_ARCH_EPAC2_FPAC_CMB 0x5
#define ID_AA64ISAR1_API_NI 0x0 #define ID_AA64ISAR1_API_NI 0x0
#define ID_AA64ISAR1_API_IMP_DEF 0x1 #define ID_AA64ISAR1_API_IMP_DEF 0x1
#define ID_AA64ISAR1_API_IMP_DEF_EPAC 0x2
#define ID_AA64ISAR1_API_IMP_DEF_EPAC2 0x3
#define ID_AA64ISAR1_API_IMP_DEF_EPAC2_FPAC 0x4
#define ID_AA64ISAR1_API_IMP_DEF_EPAC2_FPAC_CMB 0x5
#define ID_AA64ISAR1_GPA_NI 0x0 #define ID_AA64ISAR1_GPA_NI 0x0
#define ID_AA64ISAR1_GPA_ARCHITECTED 0x1 #define ID_AA64ISAR1_GPA_ARCHITECTED 0x1
#define ID_AA64ISAR1_GPI_NI 0x0 #define ID_AA64ISAR1_GPI_NI 0x0
#define ID_AA64ISAR1_GPI_IMP_DEF 0x1 #define ID_AA64ISAR1_GPI_IMP_DEF 0x1
/* id_aa64pfr0 */ /* id_aa64pfr0 */
#define ID_AA64PFR0_CSV3_SHIFT 60 #define ID_AA64PFR0_CSV3_SHIFT 60
...@@ -686,6 +724,10 @@ ...@@ -686,6 +724,10 @@
#define ID_AA64PFR1_SSBS_PSTATE_INSNS 2 #define ID_AA64PFR1_SSBS_PSTATE_INSNS 2
#define ID_AA64PFR1_BT_BTI 0x1 #define ID_AA64PFR1_BT_BTI 0x1
#define ID_AA64PFR1_MTE_NI 0x0
#define ID_AA64PFR1_MTE_EL0 0x1
#define ID_AA64PFR1_MTE 0x2
/* id_aa64zfr0 */ /* id_aa64zfr0 */
#define ID_AA64ZFR0_F64MM_SHIFT 56 #define ID_AA64ZFR0_F64MM_SHIFT 56
#define ID_AA64ZFR0_F32MM_SHIFT 52 #define ID_AA64ZFR0_F32MM_SHIFT 52
...@@ -920,6 +962,28 @@ ...@@ -920,6 +962,28 @@
#define CPACR_EL1_ZEN_EL0EN (BIT(17)) /* enable EL0 access, if EL1EN set */ #define CPACR_EL1_ZEN_EL0EN (BIT(17)) /* enable EL0 access, if EL1EN set */
#define CPACR_EL1_ZEN (CPACR_EL1_ZEN_EL1EN | CPACR_EL1_ZEN_EL0EN) #define CPACR_EL1_ZEN (CPACR_EL1_ZEN_EL1EN | CPACR_EL1_ZEN_EL0EN)
/* TCR EL1 Bit Definitions */
#define SYS_TCR_EL1_TCMA1 (BIT(58))
#define SYS_TCR_EL1_TCMA0 (BIT(57))
/* GCR_EL1 Definitions */
#define SYS_GCR_EL1_RRND (BIT(16))
#define SYS_GCR_EL1_EXCL_MASK 0xffffUL
/* RGSR_EL1 Definitions */
#define SYS_RGSR_EL1_TAG_MASK 0xfUL
#define SYS_RGSR_EL1_SEED_SHIFT 8
#define SYS_RGSR_EL1_SEED_MASK 0xffffUL
/* GMID_EL1 field definitions */
#define SYS_GMID_EL1_BS_SHIFT 0
#define SYS_GMID_EL1_BS_SIZE 4
/* TFSR{,E0}_EL1 bit definitions */
#define SYS_TFSR_EL1_TF0_SHIFT 0
#define SYS_TFSR_EL1_TF1_SHIFT 1
#define SYS_TFSR_EL1_TF0 (UL(1) << SYS_TFSR_EL1_TF0_SHIFT)
#define SYS_TFSR_EL1_TF1 (UL(1) << SYS_TFSR_EL1_TF1_SHIFT)
/* Safe value for MPIDR_EL1: Bit31:RES1, Bit30:U:0, Bit24:MT:0 */ /* Safe value for MPIDR_EL1: Bit31:RES1, Bit30:U:0, Bit24:MT:0 */
#define SYS_MPIDR_SAFE_VAL (BIT(31)) #define SYS_MPIDR_SAFE_VAL (BIT(31))
...@@ -1024,6 +1088,13 @@ ...@@ -1024,6 +1088,13 @@
write_sysreg(__scs_new, sysreg); \ write_sysreg(__scs_new, sysreg); \
} while (0) } while (0)
#define sysreg_clear_set_s(sysreg, clear, set) do { \
u64 __scs_val = read_sysreg_s(sysreg); \
u64 __scs_new = (__scs_val & ~(u64)(clear)) | (set); \
if (__scs_new != __scs_val) \
write_sysreg_s(__scs_new, sysreg); \
} while (0)
#endif #endif
#endif /* __ASM_SYSREG_H */ #endif /* __ASM_SYSREG_H */
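Editor's note: the new sysreg_clear_set_s() mirrors sysreg_clear_set() for registers only reachable through their sys_reg() encoding, and skips the write-back when nothing changes. A minimal kernel-context sketch of the intended usage pattern, reusing the SYS_GCR_EL1 definitions added above; the helper name is illustrative.

```c
#include <linux/types.h>
#include <asm/sysreg.h>

/* Illustrative only: narrow the GCR_EL1 exclude mask without a redundant MSR. */
static void update_gcr_el1_excl(u64 excl)
{
	/*
	 * Read GCR_EL1, clear the Exclude field, OR in the new value and
	 * write the register back only if its contents actually change.
	 */
	sysreg_clear_set_s(SYS_GCR_EL1, SYS_GCR_EL1_EXCL_MASK,
			   excl & SYS_GCR_EL1_EXCL_MASK);
}
```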
...@@ -67,6 +67,7 @@ void arch_release_task_struct(struct task_struct *tsk); ...@@ -67,6 +67,7 @@ void arch_release_task_struct(struct task_struct *tsk);
#define TIF_FOREIGN_FPSTATE 3 /* CPU's FP state is not current's */ #define TIF_FOREIGN_FPSTATE 3 /* CPU's FP state is not current's */
#define TIF_UPROBE 4 /* uprobe breakpoint or singlestep */ #define TIF_UPROBE 4 /* uprobe breakpoint or singlestep */
#define TIF_FSCHECK 5 /* Check FS is USER_DS on return */ #define TIF_FSCHECK 5 /* Check FS is USER_DS on return */
#define TIF_MTE_ASYNC_FAULT 6 /* MTE Asynchronous Tag Check Fault */
#define TIF_SYSCALL_TRACE 8 /* syscall trace active */ #define TIF_SYSCALL_TRACE 8 /* syscall trace active */
#define TIF_SYSCALL_AUDIT 9 /* syscall auditing */ #define TIF_SYSCALL_AUDIT 9 /* syscall auditing */
#define TIF_SYSCALL_TRACEPOINT 10 /* syscall tracepoint for ftrace */ #define TIF_SYSCALL_TRACEPOINT 10 /* syscall tracepoint for ftrace */
...@@ -96,10 +97,11 @@ void arch_release_task_struct(struct task_struct *tsk); ...@@ -96,10 +97,11 @@ void arch_release_task_struct(struct task_struct *tsk);
#define _TIF_SINGLESTEP (1 << TIF_SINGLESTEP) #define _TIF_SINGLESTEP (1 << TIF_SINGLESTEP)
#define _TIF_32BIT (1 << TIF_32BIT) #define _TIF_32BIT (1 << TIF_32BIT)
#define _TIF_SVE (1 << TIF_SVE) #define _TIF_SVE (1 << TIF_SVE)
#define _TIF_MTE_ASYNC_FAULT (1 << TIF_MTE_ASYNC_FAULT)
#define _TIF_WORK_MASK (_TIF_NEED_RESCHED | _TIF_SIGPENDING | \ #define _TIF_WORK_MASK (_TIF_NEED_RESCHED | _TIF_SIGPENDING | \
_TIF_NOTIFY_RESUME | _TIF_FOREIGN_FPSTATE | \ _TIF_NOTIFY_RESUME | _TIF_FOREIGN_FPSTATE | \
_TIF_UPROBE | _TIF_FSCHECK) _TIF_UPROBE | _TIF_FSCHECK | _TIF_MTE_ASYNC_FAULT)
#define _TIF_SYSCALL_WORK (_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | \ #define _TIF_SYSCALL_WORK (_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | \
_TIF_SYSCALL_TRACEPOINT | _TIF_SECCOMP | \ _TIF_SYSCALL_TRACEPOINT | _TIF_SECCOMP | \
......
...@@ -24,7 +24,7 @@ struct undef_hook { ...@@ -24,7 +24,7 @@ struct undef_hook {
void register_undef_hook(struct undef_hook *hook); void register_undef_hook(struct undef_hook *hook);
void unregister_undef_hook(struct undef_hook *hook); void unregister_undef_hook(struct undef_hook *hook);
void force_signal_inject(int signal, int code, unsigned long address); void force_signal_inject(int signal, int code, unsigned long address, unsigned int err);
void arm64_notify_segfault(unsigned long addr); void arm64_notify_segfault(unsigned long addr);
void arm64_force_sig_fault(int signo, int code, void __user *addr, const char *str); void arm64_force_sig_fault(int signo, int code, void __user *addr, const char *str);
void arm64_force_sig_mceerr(int code, void __user *addr, short lsb, const char *str); void arm64_force_sig_mceerr(int code, void __user *addr, short lsb, const char *str);
......
...@@ -74,6 +74,6 @@ ...@@ -74,6 +74,6 @@
#define HWCAP2_DGH (1 << 15) #define HWCAP2_DGH (1 << 15)
#define HWCAP2_RNG (1 << 16) #define HWCAP2_RNG (1 << 16)
#define HWCAP2_BTI (1 << 17) #define HWCAP2_BTI (1 << 17)
/* reserved for HWCAP2_MTE (1 << 18) */ #define HWCAP2_MTE (1 << 18)
#endif /* _UAPI__ASM_HWCAP_H */ #endif /* _UAPI__ASM_HWCAP_H */
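Editor's note: with HWCAP2_MTE now a real user-visible bit, userspace can probe for MTE before enabling tag checks. A minimal userspace sketch using the standard auxiliary-vector interface; only the HWCAP2_MTE value is taken from the header above.

```c
#include <stdio.h>
#include <sys/auxv.h>

#ifndef HWCAP2_MTE
#define HWCAP2_MTE	(1 << 18)	/* value from the uapi header above */
#endif

int main(void)
{
	/* AT_HWCAP2 carries the second hardware-capability word on arm64. */
	if (getauxval(AT_HWCAP2) & HWCAP2_MTE)
		puts("mte: supported");
	else
		puts("mte: not supported");
	return 0;
}
```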
...@@ -242,6 +242,15 @@ struct kvm_vcpu_events { ...@@ -242,6 +242,15 @@ struct kvm_vcpu_events {
#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_NOT_AVAIL 0 #define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_NOT_AVAIL 0
#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_AVAIL 1 #define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_AVAIL 1
#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_NOT_REQUIRED 2 #define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_NOT_REQUIRED 2
/*
* Only two states can be presented by the host kernel:
* - NOT_REQUIRED: the guest doesn't need to do anything
* - NOT_AVAIL: the guest isn't mitigated (it can still use SSBS if available)
*
* All the other values are deprecated. The host still accepts all
* values (they are ABI), but will narrow them to the above two.
*/
#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2 KVM_REG_ARM_FW_REG(2) #define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2 KVM_REG_ARM_FW_REG(2)
#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_AVAIL 0 #define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_AVAIL 0
#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNKNOWN 1 #define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNKNOWN 1
......
...@@ -5,5 +5,6 @@ ...@@ -5,5 +5,6 @@
#include <asm-generic/mman.h> #include <asm-generic/mman.h>
#define PROT_BTI 0x10 /* BTI guarded page */ #define PROT_BTI 0x10 /* BTI guarded page */
#define PROT_MTE 0x20 /* Normal Tagged mapping */
#endif /* ! _UAPI__ASM_MMAN_H */ #endif /* ! _UAPI__ASM_MMAN_H */
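Editor's note: PROT_MTE requests the Normal Tagged memory type for a mapping, which is what the MTE selftests in this series exercise. A minimal userspace sketch, assuming an anonymous mapping; the tag-check mode itself is configured separately via prctl().

```c
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#ifndef PROT_MTE
#define PROT_MTE	0x20	/* value from the uapi header above */
#endif

int main(void)
{
	size_t len = sysconf(_SC_PAGESIZE);

	/* Anonymous mapping with allocation tags enabled for every 16-byte granule. */
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE | PROT_MTE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap(PROT_MTE)");	/* e.g. kernel or CPU without MTE */
		return 1;
	}

	munmap(p, len);
	return 0;
}
```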
...@@ -51,6 +51,7 @@ ...@@ -51,6 +51,7 @@
#define PSR_PAN_BIT 0x00400000 #define PSR_PAN_BIT 0x00400000
#define PSR_UAO_BIT 0x00800000 #define PSR_UAO_BIT 0x00800000
#define PSR_DIT_BIT 0x01000000 #define PSR_DIT_BIT 0x01000000
#define PSR_TCO_BIT 0x02000000
#define PSR_V_BIT 0x10000000 #define PSR_V_BIT 0x10000000
#define PSR_C_BIT 0x20000000 #define PSR_C_BIT 0x20000000
#define PSR_Z_BIT 0x40000000 #define PSR_Z_BIT 0x40000000
...@@ -75,6 +76,9 @@ ...@@ -75,6 +76,9 @@
/* syscall emulation path in ptrace */ /* syscall emulation path in ptrace */
#define PTRACE_SYSEMU 31 #define PTRACE_SYSEMU 31
#define PTRACE_SYSEMU_SINGLESTEP 32 #define PTRACE_SYSEMU_SINGLESTEP 32
/* MTE allocation tag access */
#define PTRACE_PEEKMTETAGS 33
#define PTRACE_POKEMTETAGS 34
#ifndef __ASSEMBLY__ #ifndef __ASSEMBLY__
......
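Editor's note: the two new ptrace requests transfer MTE allocation tags (one byte per 16-byte granule) between a tracer and a stopped tracee. A hedged sketch of reading tags, assuming the struct iovec calling convention described in the MTE documentation added by this series; the helper name is illustrative.

```c
#include <stdint.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/uio.h>

#ifndef PTRACE_PEEKMTETAGS
#define PTRACE_PEEKMTETAGS	33	/* value from the uapi header above */
#endif

/* Read up to 'len' allocation tags starting at 'addr' in a stopped tracee. */
static ssize_t peek_mte_tags(pid_t pid, void *addr, uint8_t *tags, size_t len)
{
	struct iovec iov = { .iov_base = tags, .iov_len = len };

	/* On success the kernel is expected to update iov_len to the tags copied. */
	if (ptrace(PTRACE_PEEKMTETAGS, pid, addr, &iov))
		return -1;
	return iov.iov_len;
}
```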
...@@ -3,8 +3,6 @@ ...@@ -3,8 +3,6 @@
# Makefile for the linux kernel. # Makefile for the linux kernel.
# #
CPPFLAGS_vmlinux.lds := -DTEXT_OFFSET=$(TEXT_OFFSET)
AFLAGS_head.o := -DTEXT_OFFSET=$(TEXT_OFFSET)
CFLAGS_armv8_deprecated.o := -I$(src) CFLAGS_armv8_deprecated.o := -I$(src)
CFLAGS_REMOVE_ftrace.o = $(CC_FLAGS_FTRACE) CFLAGS_REMOVE_ftrace.o = $(CC_FLAGS_FTRACE)
...@@ -19,7 +17,7 @@ obj-y := debug-monitors.o entry.o irq.o fpsimd.o \ ...@@ -19,7 +17,7 @@ obj-y := debug-monitors.o entry.o irq.o fpsimd.o \
return_address.o cpuinfo.o cpu_errata.o \ return_address.o cpuinfo.o cpu_errata.o \
cpufeature.o alternative.o cacheinfo.o \ cpufeature.o alternative.o cacheinfo.o \
smp.o smp_spin_table.o topology.o smccc-call.o \ smp.o smp_spin_table.o topology.o smccc-call.o \
syscall.o syscall.o proton-pack.o
targets += efi-entry.o targets += efi-entry.o
...@@ -59,9 +57,9 @@ arm64-reloc-test-y := reloc_test_core.o reloc_test_syms.o ...@@ -59,9 +57,9 @@ arm64-reloc-test-y := reloc_test_core.o reloc_test_syms.o
obj-$(CONFIG_CRASH_DUMP) += crash_dump.o obj-$(CONFIG_CRASH_DUMP) += crash_dump.o
obj-$(CONFIG_CRASH_CORE) += crash_core.o obj-$(CONFIG_CRASH_CORE) += crash_core.o
obj-$(CONFIG_ARM_SDE_INTERFACE) += sdei.o obj-$(CONFIG_ARM_SDE_INTERFACE) += sdei.o
obj-$(CONFIG_ARM64_SSBD) += ssbd.o
obj-$(CONFIG_ARM64_PTR_AUTH) += pointer_auth.o obj-$(CONFIG_ARM64_PTR_AUTH) += pointer_auth.o
obj-$(CONFIG_SHADOW_CALL_STACK) += scs.o obj-$(CONFIG_SHADOW_CALL_STACK) += scs.o
obj-$(CONFIG_ARM64_MTE) += mte.o
obj-y += vdso/ probes/ obj-y += vdso/ probes/
obj-$(CONFIG_COMPAT_VDSO) += vdso32/ obj-$(CONFIG_COMPAT_VDSO) += vdso32/
......
...@@ -35,6 +35,10 @@ SYM_CODE_START(__cpu_soft_restart) ...@@ -35,6 +35,10 @@ SYM_CODE_START(__cpu_soft_restart)
mov_q x13, SCTLR_ELx_FLAGS mov_q x13, SCTLR_ELx_FLAGS
bic x12, x12, x13 bic x12, x12, x13
pre_disable_mmu_workaround pre_disable_mmu_workaround
/*
* either disable EL1&0 translation regime or disable EL2&0 translation
* regime if HCR_EL2.E2H == 1
*/
msr sctlr_el1, x12 msr sctlr_el1, x12
isb isb
......
...@@ -75,6 +75,7 @@ ...@@ -75,6 +75,7 @@
#include <asm/cpu_ops.h> #include <asm/cpu_ops.h>
#include <asm/fpsimd.h> #include <asm/fpsimd.h>
#include <asm/mmu_context.h> #include <asm/mmu_context.h>
#include <asm/mte.h>
#include <asm/processor.h> #include <asm/processor.h>
#include <asm/sysreg.h> #include <asm/sysreg.h>
#include <asm/traps.h> #include <asm/traps.h>
...@@ -197,9 +198,9 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = { ...@@ -197,9 +198,9 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0), ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0), ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH), ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_API_SHIFT, 4, 0), FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_API_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH), ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_APA_SHIFT, 4, 0), FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_APA_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0), ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
ARM64_FTR_END, ARM64_FTR_END,
}; };
...@@ -227,7 +228,9 @@ static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = { ...@@ -227,7 +228,9 @@ static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
static const struct arm64_ftr_bits ftr_id_aa64pfr1[] = { static const struct arm64_ftr_bits ftr_id_aa64pfr1[] = {
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_MPAMFRAC_SHIFT, 4, 0), ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_MPAMFRAC_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_RASFRAC_SHIFT, 4, 0), ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_RASFRAC_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_MTE),
FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_MTE_SHIFT, 4, ID_AA64PFR1_MTE_NI),
ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_SSBS_SHIFT, 4, ID_AA64PFR1_SSBS_PSTATE_NI), ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR1_SSBS_SHIFT, 4, ID_AA64PFR1_SSBS_PSTATE_NI),
ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_BTI), ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_BTI),
FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_BT_SHIFT, 4, 0), FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_BT_SHIFT, 4, 0),
ARM64_FTR_END, ARM64_FTR_END,
...@@ -487,7 +490,7 @@ static const struct arm64_ftr_bits ftr_id_pfr1[] = { ...@@ -487,7 +490,7 @@ static const struct arm64_ftr_bits ftr_id_pfr1[] = {
}; };
static const struct arm64_ftr_bits ftr_id_pfr2[] = { static const struct arm64_ftr_bits ftr_id_pfr2[] = {
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR2_SSBS_SHIFT, 4, 0), ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_PFR2_SSBS_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_PFR2_CSV3_SHIFT, 4, 0), ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_PFR2_CSV3_SHIFT, 4, 0),
ARM64_FTR_END, ARM64_FTR_END,
}; };
...@@ -1111,6 +1114,7 @@ u64 read_sanitised_ftr_reg(u32 id) ...@@ -1111,6 +1114,7 @@ u64 read_sanitised_ftr_reg(u32 id)
return 0; return 0;
return regp->sys_val; return regp->sys_val;
} }
EXPORT_SYMBOL_GPL(read_sanitised_ftr_reg);
#define read_sysreg_case(r) \ #define read_sysreg_case(r) \
case r: return read_sysreg_s(r) case r: return read_sysreg_s(r)
...@@ -1443,6 +1447,7 @@ static inline void __cpu_enable_hw_dbm(void) ...@@ -1443,6 +1447,7 @@ static inline void __cpu_enable_hw_dbm(void)
write_sysreg(tcr, tcr_el1); write_sysreg(tcr, tcr_el1);
isb(); isb();
local_flush_tlb_all();
} }
static bool cpu_has_broken_dbm(void) static bool cpu_has_broken_dbm(void)
...@@ -1583,48 +1588,6 @@ static void cpu_has_fwb(const struct arm64_cpu_capabilities *__unused) ...@@ -1583,48 +1588,6 @@ static void cpu_has_fwb(const struct arm64_cpu_capabilities *__unused)
WARN_ON(val & (7 << 27 | 7 << 21)); WARN_ON(val & (7 << 27 | 7 << 21));
} }
#ifdef CONFIG_ARM64_SSBD
static int ssbs_emulation_handler(struct pt_regs *regs, u32 instr)
{
if (user_mode(regs))
return 1;
if (instr & BIT(PSTATE_Imm_shift))
regs->pstate |= PSR_SSBS_BIT;
else
regs->pstate &= ~PSR_SSBS_BIT;
arm64_skip_faulting_instruction(regs, 4);
return 0;
}
static struct undef_hook ssbs_emulation_hook = {
.instr_mask = ~(1U << PSTATE_Imm_shift),
.instr_val = 0xd500401f | PSTATE_SSBS,
.fn = ssbs_emulation_handler,
};
static void cpu_enable_ssbs(const struct arm64_cpu_capabilities *__unused)
{
static bool undef_hook_registered = false;
static DEFINE_RAW_SPINLOCK(hook_lock);
raw_spin_lock(&hook_lock);
if (!undef_hook_registered) {
register_undef_hook(&ssbs_emulation_hook);
undef_hook_registered = true;
}
raw_spin_unlock(&hook_lock);
if (arm64_get_ssbd_state() == ARM64_SSBD_FORCE_DISABLE) {
sysreg_clear_set(sctlr_el1, 0, SCTLR_ELx_DSSBS);
arm64_set_ssbd_mitigation(false);
} else {
arm64_set_ssbd_mitigation(true);
}
}
#endif /* CONFIG_ARM64_SSBD */
#ifdef CONFIG_ARM64_PAN #ifdef CONFIG_ARM64_PAN
static void cpu_enable_pan(const struct arm64_cpu_capabilities *__unused) static void cpu_enable_pan(const struct arm64_cpu_capabilities *__unused)
{ {
...@@ -1648,11 +1611,37 @@ static void cpu_clear_disr(const struct arm64_cpu_capabilities *__unused) ...@@ -1648,11 +1611,37 @@ static void cpu_clear_disr(const struct arm64_cpu_capabilities *__unused)
#endif /* CONFIG_ARM64_RAS_EXTN */ #endif /* CONFIG_ARM64_RAS_EXTN */
#ifdef CONFIG_ARM64_PTR_AUTH #ifdef CONFIG_ARM64_PTR_AUTH
static bool has_address_auth(const struct arm64_cpu_capabilities *entry, static bool has_address_auth_cpucap(const struct arm64_cpu_capabilities *entry, int scope)
int __unused)
{ {
return __system_matches_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) || int boot_val, sec_val;
__system_matches_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF);
/* We don't expect to be called with SCOPE_SYSTEM */
WARN_ON(scope == SCOPE_SYSTEM);
/*
* The ptr-auth feature levels are not intercompatible with lower
* levels. Hence we must match ptr-auth feature level of the secondary
* CPUs with that of the boot CPU. The level of boot cpu is fetched
* from the sanitised register whereas direct register read is done for
* the secondary CPUs.
* The sanitised feature state is guaranteed to match that of the
* boot CPU as a mismatched secondary CPU is parked before it gets
* a chance to update the state, with the capability.
*/
boot_val = cpuid_feature_extract_field(read_sanitised_ftr_reg(entry->sys_reg),
entry->field_pos, entry->sign);
if (scope & SCOPE_BOOT_CPU)
return boot_val >= entry->min_field_value;
/* Now check for the secondary CPUs with SCOPE_LOCAL_CPU scope */
sec_val = cpuid_feature_extract_field(__read_sysreg_by_encoding(entry->sys_reg),
entry->field_pos, entry->sign);
return sec_val == boot_val;
}
static bool has_address_auth_metacap(const struct arm64_cpu_capabilities *entry,
int scope)
{
return has_address_auth_cpucap(cpu_hwcaps_ptrs[ARM64_HAS_ADDRESS_AUTH_ARCH], scope) ||
has_address_auth_cpucap(cpu_hwcaps_ptrs[ARM64_HAS_ADDRESS_AUTH_IMP_DEF], scope);
} }
static bool has_generic_auth(const struct arm64_cpu_capabilities *entry, static bool has_generic_auth(const struct arm64_cpu_capabilities *entry,
...@@ -1702,6 +1691,22 @@ static void bti_enable(const struct arm64_cpu_capabilities *__unused) ...@@ -1702,6 +1691,22 @@ static void bti_enable(const struct arm64_cpu_capabilities *__unused)
} }
#endif /* CONFIG_ARM64_BTI */ #endif /* CONFIG_ARM64_BTI */
#ifdef CONFIG_ARM64_MTE
static void cpu_enable_mte(struct arm64_cpu_capabilities const *cap)
{
static bool cleared_zero_page = false;
/*
* Clear the tags in the zero page. This needs to be done via the
* linear map which has the Tagged attribute.
*/
if (!cleared_zero_page) {
cleared_zero_page = true;
mte_clear_page_tags(lm_alias(empty_zero_page));
}
}
#endif /* CONFIG_ARM64_MTE */
/* Internal helper functions to match cpu capability type */ /* Internal helper functions to match cpu capability type */
static bool static bool
cpucap_late_cpu_optional(const struct arm64_cpu_capabilities *cap) cpucap_late_cpu_optional(const struct arm64_cpu_capabilities *cap)
...@@ -1976,19 +1981,16 @@ static const struct arm64_cpu_capabilities arm64_features[] = { ...@@ -1976,19 +1981,16 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.field_pos = ID_AA64ISAR0_CRC32_SHIFT, .field_pos = ID_AA64ISAR0_CRC32_SHIFT,
.min_field_value = 1, .min_field_value = 1,
}, },
#ifdef CONFIG_ARM64_SSBD
{ {
.desc = "Speculative Store Bypassing Safe (SSBS)", .desc = "Speculative Store Bypassing Safe (SSBS)",
.capability = ARM64_SSBS, .capability = ARM64_SSBS,
.type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE, .type = ARM64_CPUCAP_SYSTEM_FEATURE,
.matches = has_cpuid_feature, .matches = has_cpuid_feature,
.sys_reg = SYS_ID_AA64PFR1_EL1, .sys_reg = SYS_ID_AA64PFR1_EL1,
.field_pos = ID_AA64PFR1_SSBS_SHIFT, .field_pos = ID_AA64PFR1_SSBS_SHIFT,
.sign = FTR_UNSIGNED, .sign = FTR_UNSIGNED,
.min_field_value = ID_AA64PFR1_SSBS_PSTATE_ONLY, .min_field_value = ID_AA64PFR1_SSBS_PSTATE_ONLY,
.cpu_enable = cpu_enable_ssbs,
}, },
#endif
#ifdef CONFIG_ARM64_CNP #ifdef CONFIG_ARM64_CNP
{ {
.desc = "Common not Private translations", .desc = "Common not Private translations",
...@@ -2021,7 +2023,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = { ...@@ -2021,7 +2023,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.sign = FTR_UNSIGNED, .sign = FTR_UNSIGNED,
.field_pos = ID_AA64ISAR1_APA_SHIFT, .field_pos = ID_AA64ISAR1_APA_SHIFT,
.min_field_value = ID_AA64ISAR1_APA_ARCHITECTED, .min_field_value = ID_AA64ISAR1_APA_ARCHITECTED,
.matches = has_cpuid_feature, .matches = has_address_auth_cpucap,
}, },
{ {
.desc = "Address authentication (IMP DEF algorithm)", .desc = "Address authentication (IMP DEF algorithm)",
...@@ -2031,12 +2033,12 @@ static const struct arm64_cpu_capabilities arm64_features[] = { ...@@ -2031,12 +2033,12 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.sign = FTR_UNSIGNED, .sign = FTR_UNSIGNED,
.field_pos = ID_AA64ISAR1_API_SHIFT, .field_pos = ID_AA64ISAR1_API_SHIFT,
.min_field_value = ID_AA64ISAR1_API_IMP_DEF, .min_field_value = ID_AA64ISAR1_API_IMP_DEF,
.matches = has_cpuid_feature, .matches = has_address_auth_cpucap,
}, },
{ {
.capability = ARM64_HAS_ADDRESS_AUTH, .capability = ARM64_HAS_ADDRESS_AUTH,
.type = ARM64_CPUCAP_BOOT_CPU_FEATURE, .type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
.matches = has_address_auth, .matches = has_address_auth_metacap,
}, },
{ {
.desc = "Generic authentication (architected algorithm)", .desc = "Generic authentication (architected algorithm)",
...@@ -2121,6 +2123,19 @@ static const struct arm64_cpu_capabilities arm64_features[] = { ...@@ -2121,6 +2123,19 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.sign = FTR_UNSIGNED, .sign = FTR_UNSIGNED,
}, },
#endif #endif
#ifdef CONFIG_ARM64_MTE
{
.desc = "Memory Tagging Extension",
.capability = ARM64_MTE,
.type = ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE,
.matches = has_cpuid_feature,
.sys_reg = SYS_ID_AA64PFR1_EL1,
.field_pos = ID_AA64PFR1_MTE_SHIFT,
.min_field_value = ID_AA64PFR1_MTE,
.sign = FTR_UNSIGNED,
.cpu_enable = cpu_enable_mte,
},
#endif /* CONFIG_ARM64_MTE */
{}, {},
}; };
...@@ -2237,6 +2252,9 @@ static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = { ...@@ -2237,6 +2252,9 @@ static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = {
HWCAP_MULTI_CAP(ptr_auth_hwcap_addr_matches, CAP_HWCAP, KERNEL_HWCAP_PACA), HWCAP_MULTI_CAP(ptr_auth_hwcap_addr_matches, CAP_HWCAP, KERNEL_HWCAP_PACA),
HWCAP_MULTI_CAP(ptr_auth_hwcap_gen_matches, CAP_HWCAP, KERNEL_HWCAP_PACG), HWCAP_MULTI_CAP(ptr_auth_hwcap_gen_matches, CAP_HWCAP, KERNEL_HWCAP_PACG),
#endif #endif
#ifdef CONFIG_ARM64_MTE
HWCAP_CAP(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_MTE_SHIFT, FTR_UNSIGNED, ID_AA64PFR1_MTE, CAP_HWCAP, KERNEL_HWCAP_MTE),
#endif /* CONFIG_ARM64_MTE */
{}, {},
}; };
......
...@@ -43,94 +43,93 @@ static const char *icache_policy_str[] = { ...@@ -43,94 +43,93 @@ static const char *icache_policy_str[] = {
unsigned long __icache_flags; unsigned long __icache_flags;
static const char *const hwcap_str[] = { static const char *const hwcap_str[] = {
"fp", [KERNEL_HWCAP_FP] = "fp",
"asimd", [KERNEL_HWCAP_ASIMD] = "asimd",
"evtstrm", [KERNEL_HWCAP_EVTSTRM] = "evtstrm",
"aes", [KERNEL_HWCAP_AES] = "aes",
"pmull", [KERNEL_HWCAP_PMULL] = "pmull",
"sha1", [KERNEL_HWCAP_SHA1] = "sha1",
"sha2", [KERNEL_HWCAP_SHA2] = "sha2",
"crc32", [KERNEL_HWCAP_CRC32] = "crc32",
"atomics", [KERNEL_HWCAP_ATOMICS] = "atomics",
"fphp", [KERNEL_HWCAP_FPHP] = "fphp",
"asimdhp", [KERNEL_HWCAP_ASIMDHP] = "asimdhp",
"cpuid", [KERNEL_HWCAP_CPUID] = "cpuid",
"asimdrdm", [KERNEL_HWCAP_ASIMDRDM] = "asimdrdm",
"jscvt", [KERNEL_HWCAP_JSCVT] = "jscvt",
"fcma", [KERNEL_HWCAP_FCMA] = "fcma",
"lrcpc", [KERNEL_HWCAP_LRCPC] = "lrcpc",
"dcpop", [KERNEL_HWCAP_DCPOP] = "dcpop",
"sha3", [KERNEL_HWCAP_SHA3] = "sha3",
"sm3", [KERNEL_HWCAP_SM3] = "sm3",
"sm4", [KERNEL_HWCAP_SM4] = "sm4",
"asimddp", [KERNEL_HWCAP_ASIMDDP] = "asimddp",
"sha512", [KERNEL_HWCAP_SHA512] = "sha512",
"sve", [KERNEL_HWCAP_SVE] = "sve",
"asimdfhm", [KERNEL_HWCAP_ASIMDFHM] = "asimdfhm",
"dit", [KERNEL_HWCAP_DIT] = "dit",
"uscat", [KERNEL_HWCAP_USCAT] = "uscat",
"ilrcpc", [KERNEL_HWCAP_ILRCPC] = "ilrcpc",
"flagm", [KERNEL_HWCAP_FLAGM] = "flagm",
"ssbs", [KERNEL_HWCAP_SSBS] = "ssbs",
"sb", [KERNEL_HWCAP_SB] = "sb",
"paca", [KERNEL_HWCAP_PACA] = "paca",
"pacg", [KERNEL_HWCAP_PACG] = "pacg",
"dcpodp", [KERNEL_HWCAP_DCPODP] = "dcpodp",
"sve2", [KERNEL_HWCAP_SVE2] = "sve2",
"sveaes", [KERNEL_HWCAP_SVEAES] = "sveaes",
"svepmull", [KERNEL_HWCAP_SVEPMULL] = "svepmull",
"svebitperm", [KERNEL_HWCAP_SVEBITPERM] = "svebitperm",
"svesha3", [KERNEL_HWCAP_SVESHA3] = "svesha3",
"svesm4", [KERNEL_HWCAP_SVESM4] = "svesm4",
"flagm2", [KERNEL_HWCAP_FLAGM2] = "flagm2",
"frint", [KERNEL_HWCAP_FRINT] = "frint",
"svei8mm", [KERNEL_HWCAP_SVEI8MM] = "svei8mm",
"svef32mm", [KERNEL_HWCAP_SVEF32MM] = "svef32mm",
"svef64mm", [KERNEL_HWCAP_SVEF64MM] = "svef64mm",
"svebf16", [KERNEL_HWCAP_SVEBF16] = "svebf16",
"i8mm", [KERNEL_HWCAP_I8MM] = "i8mm",
"bf16", [KERNEL_HWCAP_BF16] = "bf16",
"dgh", [KERNEL_HWCAP_DGH] = "dgh",
"rng", [KERNEL_HWCAP_RNG] = "rng",
"bti", [KERNEL_HWCAP_BTI] = "bti",
/* reserved for "mte" */ [KERNEL_HWCAP_MTE] = "mte",
NULL
}; };
#ifdef CONFIG_COMPAT #ifdef CONFIG_COMPAT
#define COMPAT_KERNEL_HWCAP(x) const_ilog2(COMPAT_HWCAP_ ## x)
static const char *const compat_hwcap_str[] = { static const char *const compat_hwcap_str[] = {
"swp", [COMPAT_KERNEL_HWCAP(SWP)] = "swp",
"half", [COMPAT_KERNEL_HWCAP(HALF)] = "half",
"thumb", [COMPAT_KERNEL_HWCAP(THUMB)] = "thumb",
"26bit", [COMPAT_KERNEL_HWCAP(26BIT)] = NULL, /* Not possible on arm64 */
"fastmult", [COMPAT_KERNEL_HWCAP(FAST_MULT)] = "fastmult",
"fpa", [COMPAT_KERNEL_HWCAP(FPA)] = NULL, /* Not possible on arm64 */
"vfp", [COMPAT_KERNEL_HWCAP(VFP)] = "vfp",
"edsp", [COMPAT_KERNEL_HWCAP(EDSP)] = "edsp",
"java", [COMPAT_KERNEL_HWCAP(JAVA)] = NULL, /* Not possible on arm64 */
"iwmmxt", [COMPAT_KERNEL_HWCAP(IWMMXT)] = NULL, /* Not possible on arm64 */
"crunch", [COMPAT_KERNEL_HWCAP(CRUNCH)] = NULL, /* Not possible on arm64 */
"thumbee", [COMPAT_KERNEL_HWCAP(THUMBEE)] = NULL, /* Not possible on arm64 */
"neon", [COMPAT_KERNEL_HWCAP(NEON)] = "neon",
"vfpv3", [COMPAT_KERNEL_HWCAP(VFPv3)] = "vfpv3",
"vfpv3d16", [COMPAT_KERNEL_HWCAP(VFPV3D16)] = NULL, /* Not possible on arm64 */
"tls", [COMPAT_KERNEL_HWCAP(TLS)] = "tls",
"vfpv4", [COMPAT_KERNEL_HWCAP(VFPv4)] = "vfpv4",
"idiva", [COMPAT_KERNEL_HWCAP(IDIVA)] = "idiva",
"idivt", [COMPAT_KERNEL_HWCAP(IDIVT)] = "idivt",
"vfpd32", [COMPAT_KERNEL_HWCAP(VFPD32)] = NULL, /* Not possible on arm64 */
"lpae", [COMPAT_KERNEL_HWCAP(LPAE)] = "lpae",
"evtstrm", [COMPAT_KERNEL_HWCAP(EVTSTRM)] = "evtstrm",
NULL
}; };
#define COMPAT_KERNEL_HWCAP2(x) const_ilog2(COMPAT_HWCAP2_ ## x)
static const char *const compat_hwcap2_str[] = { static const char *const compat_hwcap2_str[] = {
"aes", [COMPAT_KERNEL_HWCAP2(AES)] = "aes",
"pmull", [COMPAT_KERNEL_HWCAP2(PMULL)] = "pmull",
"sha1", [COMPAT_KERNEL_HWCAP2(SHA1)] = "sha1",
"sha2", [COMPAT_KERNEL_HWCAP2(SHA2)] = "sha2",
"crc32", [COMPAT_KERNEL_HWCAP2(CRC32)] = "crc32",
NULL
}; };
#endif /* CONFIG_COMPAT */ #endif /* CONFIG_COMPAT */
...@@ -166,16 +165,25 @@ static int c_show(struct seq_file *m, void *v) ...@@ -166,16 +165,25 @@ static int c_show(struct seq_file *m, void *v)
seq_puts(m, "Features\t:"); seq_puts(m, "Features\t:");
if (compat) { if (compat) {
#ifdef CONFIG_COMPAT #ifdef CONFIG_COMPAT
for (j = 0; compat_hwcap_str[j]; j++) for (j = 0; j < ARRAY_SIZE(compat_hwcap_str); j++) {
if (compat_elf_hwcap & (1 << j)) if (compat_elf_hwcap & (1 << j)) {
/*
* Warn once if any feature should not
* have been present on arm64 platform.
*/
if (WARN_ON_ONCE(!compat_hwcap_str[j]))
continue;
seq_printf(m, " %s", compat_hwcap_str[j]); seq_printf(m, " %s", compat_hwcap_str[j]);
}
}
for (j = 0; compat_hwcap2_str[j]; j++) for (j = 0; j < ARRAY_SIZE(compat_hwcap2_str); j++)
if (compat_elf_hwcap2 & (1 << j)) if (compat_elf_hwcap2 & (1 << j))
seq_printf(m, " %s", compat_hwcap2_str[j]); seq_printf(m, " %s", compat_hwcap2_str[j]);
#endif /* CONFIG_COMPAT */ #endif /* CONFIG_COMPAT */
} else { } else {
for (j = 0; hwcap_str[j]; j++) for (j = 0; j < ARRAY_SIZE(hwcap_str); j++)
if (cpu_have_feature(j)) if (cpu_have_feature(j))
seq_printf(m, " %s", hwcap_str[j]); seq_printf(m, " %s", hwcap_str[j]);
} }
......
...@@ -384,7 +384,7 @@ void __init debug_traps_init(void) ...@@ -384,7 +384,7 @@ void __init debug_traps_init(void)
hook_debug_fault_code(DBG_ESR_EVT_HWSS, single_step_handler, SIGTRAP, hook_debug_fault_code(DBG_ESR_EVT_HWSS, single_step_handler, SIGTRAP,
TRAP_TRACE, "single-step handler"); TRAP_TRACE, "single-step handler");
hook_debug_fault_code(DBG_ESR_EVT_BRK, brk_handler, SIGTRAP, hook_debug_fault_code(DBG_ESR_EVT_BRK, brk_handler, SIGTRAP,
TRAP_BRKPT, "ptrace BRK handler"); TRAP_BRKPT, "BRK handler");
} }
/* Re-enable single step for syscall restarting. */ /* Re-enable single step for syscall restarting. */
......
...@@ -66,6 +66,13 @@ static void notrace el1_dbg(struct pt_regs *regs, unsigned long esr) ...@@ -66,6 +66,13 @@ static void notrace el1_dbg(struct pt_regs *regs, unsigned long esr)
} }
NOKPROBE_SYMBOL(el1_dbg); NOKPROBE_SYMBOL(el1_dbg);
static void notrace el1_fpac(struct pt_regs *regs, unsigned long esr)
{
local_daif_inherit(regs);
do_ptrauth_fault(regs, esr);
}
NOKPROBE_SYMBOL(el1_fpac);
asmlinkage void notrace el1_sync_handler(struct pt_regs *regs) asmlinkage void notrace el1_sync_handler(struct pt_regs *regs)
{ {
unsigned long esr = read_sysreg(esr_el1); unsigned long esr = read_sysreg(esr_el1);
...@@ -92,6 +99,9 @@ asmlinkage void notrace el1_sync_handler(struct pt_regs *regs) ...@@ -92,6 +99,9 @@ asmlinkage void notrace el1_sync_handler(struct pt_regs *regs)
case ESR_ELx_EC_BRK64: case ESR_ELx_EC_BRK64:
el1_dbg(regs, esr); el1_dbg(regs, esr);
break; break;
case ESR_ELx_EC_FPAC:
el1_fpac(regs, esr);
break;
default: default:
el1_inv(regs, esr); el1_inv(regs, esr);
} }
...@@ -227,6 +237,14 @@ static void notrace el0_svc(struct pt_regs *regs) ...@@ -227,6 +237,14 @@ static void notrace el0_svc(struct pt_regs *regs)
} }
NOKPROBE_SYMBOL(el0_svc); NOKPROBE_SYMBOL(el0_svc);
static void notrace el0_fpac(struct pt_regs *regs, unsigned long esr)
{
user_exit_irqoff();
local_daif_restore(DAIF_PROCCTX);
do_ptrauth_fault(regs, esr);
}
NOKPROBE_SYMBOL(el0_fpac);
asmlinkage void notrace el0_sync_handler(struct pt_regs *regs) asmlinkage void notrace el0_sync_handler(struct pt_regs *regs)
{ {
unsigned long esr = read_sysreg(esr_el1); unsigned long esr = read_sysreg(esr_el1);
...@@ -272,6 +290,9 @@ asmlinkage void notrace el0_sync_handler(struct pt_regs *regs) ...@@ -272,6 +290,9 @@ asmlinkage void notrace el0_sync_handler(struct pt_regs *regs)
case ESR_ELx_EC_BRK64: case ESR_ELx_EC_BRK64:
el0_dbg(regs, esr); el0_dbg(regs, esr);
break; break;
case ESR_ELx_EC_FPAC:
el0_fpac(regs, esr);
break;
default: default:
el0_inv(regs, esr); el0_inv(regs, esr);
} }
......
...@@ -32,6 +32,7 @@ SYM_FUNC_START(fpsimd_load_state) ...@@ -32,6 +32,7 @@ SYM_FUNC_START(fpsimd_load_state)
SYM_FUNC_END(fpsimd_load_state) SYM_FUNC_END(fpsimd_load_state)
#ifdef CONFIG_ARM64_SVE #ifdef CONFIG_ARM64_SVE
SYM_FUNC_START(sve_save_state) SYM_FUNC_START(sve_save_state)
sve_save 0, x1, 2 sve_save 0, x1, 2
ret ret
...@@ -46,4 +47,28 @@ SYM_FUNC_START(sve_get_vl) ...@@ -46,4 +47,28 @@ SYM_FUNC_START(sve_get_vl)
_sve_rdvl 0, 1 _sve_rdvl 0, 1
ret ret
SYM_FUNC_END(sve_get_vl) SYM_FUNC_END(sve_get_vl)
/*
* Load SVE state from FPSIMD state.
*
* x0 = pointer to struct fpsimd_state
* x1 = VQ - 1
*
* Each SVE vector will be loaded with the first 128-bits taken from FPSIMD
* and the rest zeroed. All the other SVE registers will be zeroed.
*/
SYM_FUNC_START(sve_load_from_fpsimd_state)
sve_load_vq x1, x2, x3
fpsimd_restore x0, 8
_for n, 0, 15, _sve_pfalse \n
_sve_wrffr 0
ret
SYM_FUNC_END(sve_load_from_fpsimd_state)
/* Zero all SVE registers but the first 128-bits of each vector */
SYM_FUNC_START(sve_flush_live)
sve_flush
ret
SYM_FUNC_END(sve_flush_live)
#endif /* CONFIG_ARM64_SVE */ #endif /* CONFIG_ARM64_SVE */
@@ -132,9 +132,8 @@ alternative_else_nop_endif
 	 * them if required.
 	 */
 	.macro	apply_ssbd, state, tmp1, tmp2
-#ifdef CONFIG_ARM64_SSBD
-alternative_cb	arm64_enable_wa2_handling
-	b	.L__asm_ssbd_skip\@
+alternative_cb	spectre_v4_patch_fw_mitigation_enable
+	b	.L__asm_ssbd_skip\@		// Patched to NOP
 alternative_cb_end
 	ldr_this_cpu	\tmp2, arm64_ssbd_callback_required, \tmp1
 	cbz	\tmp2,	.L__asm_ssbd_skip\@
@@ -142,10 +141,35 @@ alternative_cb_end
 	tbnz	\tmp2, #TIF_SSBD, .L__asm_ssbd_skip\@
 	mov	w0, #ARM_SMCCC_ARCH_WORKAROUND_2
 	mov	w1, #\state
-alternative_cb	arm64_update_smccc_conduit
+alternative_cb	spectre_v4_patch_fw_mitigation_conduit
 	nop					// Patched to SMC/HVC #0
 alternative_cb_end
 .L__asm_ssbd_skip\@:
+	.endm
+
+	/* Check for MTE asynchronous tag check faults */
+	.macro check_mte_async_tcf, flgs, tmp
+#ifdef CONFIG_ARM64_MTE
+alternative_if_not ARM64_MTE
+	b	1f
+alternative_else_nop_endif
+	mrs_s	\tmp, SYS_TFSRE0_EL1
+	tbz	\tmp, #SYS_TFSR_EL1_TF0_SHIFT, 1f
+	/* Asynchronous TCF occurred for TTBR0 access, set the TI flag */
+	orr	\flgs, \flgs, #_TIF_MTE_ASYNC_FAULT
+	str	\flgs, [tsk, #TSK_TI_FLAGS]
+	msr_s	SYS_TFSRE0_EL1, xzr
+1:
+#endif
+	.endm
+
+	/* Clear the MTE asynchronous tag check faults */
+	.macro clear_mte_async_tcf
+#ifdef CONFIG_ARM64_MTE
+alternative_if ARM64_MTE
+	dsb	ish
+	msr_s	SYS_TFSRE0_EL1, xzr
+alternative_else_nop_endif
 #endif
 	.endm
...@@ -182,6 +206,8 @@ alternative_cb_end ...@@ -182,6 +206,8 @@ alternative_cb_end
ldr x19, [tsk, #TSK_TI_FLAGS] ldr x19, [tsk, #TSK_TI_FLAGS]
disable_step_tsk x19, x20 disable_step_tsk x19, x20
/* Check for asynchronous tag check faults in user space */
check_mte_async_tcf x19, x22
apply_ssbd 1, x22, x23 apply_ssbd 1, x22, x23
ptrauth_keys_install_kernel tsk, x20, x22, x23 ptrauth_keys_install_kernel tsk, x20, x22, x23
...@@ -233,6 +259,13 @@ alternative_if ARM64_HAS_IRQ_PRIO_MASKING ...@@ -233,6 +259,13 @@ alternative_if ARM64_HAS_IRQ_PRIO_MASKING
str x20, [sp, #S_PMR_SAVE] str x20, [sp, #S_PMR_SAVE]
alternative_else_nop_endif alternative_else_nop_endif
/* Re-enable tag checking (TCO set on exception entry) */
#ifdef CONFIG_ARM64_MTE
alternative_if ARM64_MTE
SET_PSTATE_TCO(0)
alternative_else_nop_endif
#endif
/* /*
* Registers that may be useful after this macro is invoked: * Registers that may be useful after this macro is invoked:
* *
@@ -697,11 +730,9 @@ el0_irq_naked:
 	bl	trace_hardirqs_off
 #endif
-#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
 	tbz	x22, #55, 1f
 	bl	do_el0_irq_bp_hardening
 1:
-#endif
 	irq_handler
 #ifdef CONFIG_TRACE_IRQFLAGS
...@@ -744,6 +775,8 @@ SYM_CODE_START_LOCAL(ret_to_user) ...@@ -744,6 +775,8 @@ SYM_CODE_START_LOCAL(ret_to_user)
and x2, x1, #_TIF_WORK_MASK and x2, x1, #_TIF_WORK_MASK
cbnz x2, work_pending cbnz x2, work_pending
finish_ret_to_user: finish_ret_to_user:
/* Ignore asynchronous tag check faults in the uaccess routines */
clear_mte_async_tcf
enable_step_tsk x1, x2 enable_step_tsk x1, x2
#ifdef CONFIG_GCC_PLUGIN_STACKLEAK #ifdef CONFIG_GCC_PLUGIN_STACKLEAK
bl stackleak_erase bl stackleak_erase
......
...@@ -32,9 +32,11 @@ ...@@ -32,9 +32,11 @@
#include <linux/swab.h> #include <linux/swab.h>
#include <asm/esr.h> #include <asm/esr.h>
#include <asm/exception.h>
#include <asm/fpsimd.h> #include <asm/fpsimd.h>
#include <asm/cpufeature.h> #include <asm/cpufeature.h>
#include <asm/cputype.h> #include <asm/cputype.h>
#include <asm/neon.h>
#include <asm/processor.h> #include <asm/processor.h>
#include <asm/simd.h> #include <asm/simd.h>
#include <asm/sigcontext.h> #include <asm/sigcontext.h>
@@ -312,7 +314,7 @@ static void fpsimd_save(void)
 			 * re-enter user with corrupt state.
			 * There's no way to recover, so kill it:
 			 */
-			force_signal_inject(SIGKILL, SI_KERNEL, 0);
+			force_signal_inject(SIGKILL, SI_KERNEL, 0, 0);
 			return;
 		}
@@ -928,7 +930,7 @@ void fpsimd_release_task(struct task_struct *dead_task)
 * the SVE access trap will be disabled the next time this task
 * reaches ret_to_user.
 *
- * TIF_SVE should be clear on entry: otherwise, task_fpsimd_load()
+ * TIF_SVE should be clear on entry: otherwise, fpsimd_restore_current_state()
 * would have disabled the SVE access trap for userspace during
 * ret_to_user, making an SVE access trap impossible in that case.
 */
@@ -936,7 +938,7 @@ void do_sve_acc(unsigned int esr, struct pt_regs *regs)
 {
 	/* Even if we chose not to use SVE, the hardware could still trap: */
 	if (unlikely(!system_supports_sve()) || WARN_ON(is_compat_task())) {
-		force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc);
+		force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc, 0);
 		return;
 	}
......
@@ -36,14 +36,10 @@
 #include "efi-header.S"
-#define __PHYS_OFFSET	(KERNEL_START - TEXT_OFFSET)
+#define __PHYS_OFFSET	KERNEL_START
-#if (TEXT_OFFSET & 0xfff) != 0
-#error TEXT_OFFSET must be at least 4KB aligned
-#elif (PAGE_OFFSET & 0x1fffff) != 0
+#if (PAGE_OFFSET & 0x1fffff) != 0
 #error PAGE_OFFSET must be at least 2MB aligned
-#elif TEXT_OFFSET > 0x1fffff
-#error TEXT_OFFSET must be less than 2MB
 #endif
 /*
@@ -55,7 +51,7 @@
 *   x0 = physical address to the FDT blob.
 *
 * This code is mostly position independent so you call this at
- * __pa(PAGE_OFFSET + TEXT_OFFSET).
+ * __pa(PAGE_OFFSET).
 *
 * Note that the callee-saved registers are used for storing variables
 * that are useful before the MMU is enabled. The allocations are described
@@ -77,7 +73,7 @@ _head:
 	b	primary_entry			// branch to kernel start, magic
 	.long	0				// reserved
 #endif
-	le64sym	_kernel_offset_le		// Image load offset from start of RAM, little-endian
+	.quad	0				// Image load offset from start of RAM, little-endian
 	le64sym	_kernel_size_le			// Effective size of kernel image, little-endian
 	le64sym	_kernel_flags_le		// Informative flags, little-endian
 	.quad	0				// reserved
@@ -382,7 +378,7 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
 	 * Map the kernel image (starting with PHYS_OFFSET).
 	 */
 	adrp	x0, init_pg_dir
-	mov_q	x5, KIMAGE_VADDR + TEXT_OFFSET	// compile time __va(_text)
+	mov_q	x5, KIMAGE_VADDR		// compile time __va(_text)
 	add	x5, x5, x23			// add KASLR displacement
 	mov	x4, PTRS_PER_PGD
 	adrp	x6, _end			// runtime __pa(_end)
@@ -474,7 +470,7 @@ SYM_FUNC_END(__primary_switched)
 	.pushsection ".rodata", "a"
 SYM_DATA_START(kimage_vaddr)
-	.quad		_text - TEXT_OFFSET
+	.quad		_text
 SYM_DATA_END(kimage_vaddr)
 EXPORT_SYMBOL(kimage_vaddr)
 	.popsection
......
...@@ -21,7 +21,6 @@ ...@@ -21,7 +21,6 @@
#include <linux/sched.h> #include <linux/sched.h>
#include <linux/suspend.h> #include <linux/suspend.h>
#include <linux/utsname.h> #include <linux/utsname.h>
#include <linux/version.h>
#include <asm/barrier.h> #include <asm/barrier.h>
#include <asm/cacheflush.h> #include <asm/cacheflush.h>
...@@ -31,6 +30,7 @@ ...@@ -31,6 +30,7 @@
#include <asm/kexec.h> #include <asm/kexec.h>
#include <asm/memory.h> #include <asm/memory.h>
#include <asm/mmu_context.h> #include <asm/mmu_context.h>
#include <asm/mte.h>
#include <asm/pgalloc.h> #include <asm/pgalloc.h>
#include <asm/pgtable-hwdef.h> #include <asm/pgtable-hwdef.h>
#include <asm/sections.h> #include <asm/sections.h>
@@ -285,6 +285,117 @@ static int create_safe_exec_page(void *src_start, size_t length,
 #define dcache_clean_range(start, end)	__flush_dcache_area(start, (end - start))
#ifdef CONFIG_ARM64_MTE
static DEFINE_XARRAY(mte_pages);
static int save_tags(struct page *page, unsigned long pfn)
{
void *tag_storage, *ret;
tag_storage = mte_allocate_tag_storage();
if (!tag_storage)
return -ENOMEM;
mte_save_page_tags(page_address(page), tag_storage);
ret = xa_store(&mte_pages, pfn, tag_storage, GFP_KERNEL);
if (WARN(xa_is_err(ret), "Failed to store MTE tags")) {
mte_free_tag_storage(tag_storage);
return xa_err(ret);
} else if (WARN(ret, "swsusp: %s: Duplicate entry", __func__)) {
mte_free_tag_storage(ret);
}
return 0;
}
static void swsusp_mte_free_storage(void)
{
XA_STATE(xa_state, &mte_pages, 0);
void *tags;
xa_lock(&mte_pages);
xas_for_each(&xa_state, tags, ULONG_MAX) {
mte_free_tag_storage(tags);
}
xa_unlock(&mte_pages);
xa_destroy(&mte_pages);
}
static int swsusp_mte_save_tags(void)
{
struct zone *zone;
unsigned long pfn, max_zone_pfn;
int ret = 0;
int n = 0;
if (!system_supports_mte())
return 0;
for_each_populated_zone(zone) {
max_zone_pfn = zone_end_pfn(zone);
for (pfn = zone->zone_start_pfn; pfn < max_zone_pfn; pfn++) {
struct page *page = pfn_to_online_page(pfn);
if (!page)
continue;
if (!test_bit(PG_mte_tagged, &page->flags))
continue;
ret = save_tags(page, pfn);
if (ret) {
swsusp_mte_free_storage();
goto out;
}
n++;
}
}
pr_info("Saved %d MTE pages\n", n);
out:
return ret;
}
static void swsusp_mte_restore_tags(void)
{
XA_STATE(xa_state, &mte_pages, 0);
int n = 0;
void *tags;
xa_lock(&mte_pages);
xas_for_each(&xa_state, tags, ULONG_MAX) {
unsigned long pfn = xa_state.xa_index;
struct page *page = pfn_to_online_page(pfn);
mte_restore_page_tags(page_address(page), tags);
mte_free_tag_storage(tags);
n++;
}
xa_unlock(&mte_pages);
pr_info("Restored %d MTE pages\n", n);
xa_destroy(&mte_pages);
}
#else /* CONFIG_ARM64_MTE */
static int swsusp_mte_save_tags(void)
{
return 0;
}
static void swsusp_mte_restore_tags(void)
{
}
#endif /* CONFIG_ARM64_MTE */
int swsusp_arch_suspend(void) int swsusp_arch_suspend(void)
{ {
int ret = 0; int ret = 0;
...@@ -302,6 +413,10 @@ int swsusp_arch_suspend(void) ...@@ -302,6 +413,10 @@ int swsusp_arch_suspend(void)
/* make the crash dump kernel image visible/saveable */ /* make the crash dump kernel image visible/saveable */
crash_prepare_suspend(); crash_prepare_suspend();
ret = swsusp_mte_save_tags();
if (ret)
return ret;
sleep_cpu = smp_processor_id(); sleep_cpu = smp_processor_id();
ret = swsusp_save(); ret = swsusp_save();
} else { } else {
...@@ -315,6 +430,8 @@ int swsusp_arch_suspend(void) ...@@ -315,6 +430,8 @@ int swsusp_arch_suspend(void)
dcache_clean_range(__hyp_text_start, __hyp_text_end); dcache_clean_range(__hyp_text_start, __hyp_text_end);
} }
swsusp_mte_restore_tags();
/* make the crash dump kernel image protected again */ /* make the crash dump kernel image protected again */
crash_post_resume(); crash_post_resume();
@@ -332,11 +449,7 @@ int swsusp_arch_suspend(void)
 		 * mitigation off behind our back, let's set the state
 		 * to what we expect it to be.
 		 */
-		switch (arm64_get_ssbd_state()) {
-		case ARM64_SSBD_FORCE_ENABLE:
-		case ARM64_SSBD_KERNEL:
-			arm64_set_ssbd_mitigation(true);
-		}
+		spectre_v4_enable_mitigation(NULL);
 	}
local_daif_restore(flags); local_daif_restore(flags);
......
@@ -64,12 +64,10 @@ __efistub__ctype = _ctype;
 #define KVM_NVHE_ALIAS(sym) __kvm_nvhe_##sym = sym;
 /* Alternative callbacks for init-time patching of nVHE hyp code. */
-KVM_NVHE_ALIAS(arm64_enable_wa2_handling);
 KVM_NVHE_ALIAS(kvm_patch_vector_branch);
 KVM_NVHE_ALIAS(kvm_update_va_mask);
 /* Global kernel state accessed by nVHE hyp code. */
-KVM_NVHE_ALIAS(arm64_ssbd_callback_required);
 KVM_NVHE_ALIAS(kvm_host_data);
 KVM_NVHE_ALIAS(kvm_vgic_global_state);
......
@@ -62,7 +62,6 @@
 */
 #define HEAD_SYMBOLS						\
 	DEFINE_IMAGE_LE64(_kernel_size_le, _end - _text);	\
-	DEFINE_IMAGE_LE64(_kernel_offset_le, TEXT_OFFSET);	\
 	DEFINE_IMAGE_LE64(_kernel_flags_le, __HEAD_FLAGS);
 #endif /* __ARM64_KERNEL_IMAGE_H */
@@ -60,16 +60,10 @@ bool __kprobes aarch64_insn_is_steppable_hint(u32 insn)
 	case AARCH64_INSN_HINT_XPACLRI:
 	case AARCH64_INSN_HINT_PACIA_1716:
 	case AARCH64_INSN_HINT_PACIB_1716:
-	case AARCH64_INSN_HINT_AUTIA_1716:
-	case AARCH64_INSN_HINT_AUTIB_1716:
 	case AARCH64_INSN_HINT_PACIAZ:
 	case AARCH64_INSN_HINT_PACIASP:
 	case AARCH64_INSN_HINT_PACIBZ:
 	case AARCH64_INSN_HINT_PACIBSP:
-	case AARCH64_INSN_HINT_AUTIAZ:
-	case AARCH64_INSN_HINT_AUTIASP:
-	case AARCH64_INSN_HINT_AUTIBZ:
-	case AARCH64_INSN_HINT_AUTIBSP:
 	case AARCH64_INSN_HINT_BTI:
 	case AARCH64_INSN_HINT_BTIC:
 	case AARCH64_INSN_HINT_BTIJ:
@@ -176,7 +170,7 @@ bool __kprobes aarch64_insn_uses_literal(u32 insn)
 bool __kprobes aarch64_insn_is_branch(u32 insn)
 {
-	/* b, bl, cb*, tb*, b.cond, br, blr */
+	/* b, bl, cb*, tb*, ret*, b.cond, br*, blr* */
 	return aarch64_insn_is_b(insn) ||
 	       aarch64_insn_is_bl(insn) ||
@@ -185,8 +179,11 @@ bool __kprobes aarch64_insn_is_branch(u32 insn)
 	       aarch64_insn_is_tbz(insn) ||
 	       aarch64_insn_is_tbnz(insn) ||
 	       aarch64_insn_is_ret(insn) ||
+	       aarch64_insn_is_ret_auth(insn) ||
 	       aarch64_insn_is_br(insn) ||
+	       aarch64_insn_is_br_auth(insn) ||
 	       aarch64_insn_is_blr(insn) ||
+	       aarch64_insn_is_blr_auth(insn) ||
 	       aarch64_insn_is_bcond(insn);
 }
......
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (C) 2020 ARM Ltd.
*/
#include <linux/bitops.h>
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/prctl.h>
#include <linux/sched.h>
#include <linux/sched/mm.h>
#include <linux/string.h>
#include <linux/swap.h>
#include <linux/swapops.h>
#include <linux/thread_info.h>
#include <linux/uio.h>
#include <asm/cpufeature.h>
#include <asm/mte.h>
#include <asm/ptrace.h>
#include <asm/sysreg.h>
static void mte_sync_page_tags(struct page *page, pte_t *ptep, bool check_swap)
{
pte_t old_pte = READ_ONCE(*ptep);
if (check_swap && is_swap_pte(old_pte)) {
swp_entry_t entry = pte_to_swp_entry(old_pte);
if (!non_swap_entry(entry) && mte_restore_tags(entry, page))
return;
}
mte_clear_page_tags(page_address(page));
}
void mte_sync_tags(pte_t *ptep, pte_t pte)
{
struct page *page = pte_page(pte);
long i, nr_pages = compound_nr(page);
bool check_swap = nr_pages == 1;
/* if PG_mte_tagged is set, tags have already been initialised */
for (i = 0; i < nr_pages; i++, page++) {
if (!test_and_set_bit(PG_mte_tagged, &page->flags))
mte_sync_page_tags(page, ptep, check_swap);
}
}
int memcmp_pages(struct page *page1, struct page *page2)
{
char *addr1, *addr2;
int ret;
addr1 = page_address(page1);
addr2 = page_address(page2);
ret = memcmp(addr1, addr2, PAGE_SIZE);
if (!system_supports_mte() || ret)
return ret;
/*
* If the page content is identical but at least one of the pages is
* tagged, return non-zero to avoid KSM merging. If only one of the
* pages is tagged, set_pte_at() may zero or change the tags of the
* other page via mte_sync_tags().
*/
if (test_bit(PG_mte_tagged, &page1->flags) ||
test_bit(PG_mte_tagged, &page2->flags))
return addr1 != addr2;
return ret;
}
static void update_sctlr_el1_tcf0(u64 tcf0)
{
/* ISB required for the kernel uaccess routines */
sysreg_clear_set(sctlr_el1, SCTLR_EL1_TCF0_MASK, tcf0);
isb();
}
static void set_sctlr_el1_tcf0(u64 tcf0)
{
/*
* mte_thread_switch() checks current->thread.sctlr_tcf0 as an
* optimisation. Disable preemption so that it does not see
* the variable update before the SCTLR_EL1.TCF0 one.
*/
preempt_disable();
current->thread.sctlr_tcf0 = tcf0;
update_sctlr_el1_tcf0(tcf0);
preempt_enable();
}
static void update_gcr_el1_excl(u64 incl)
{
u64 excl = ~incl & SYS_GCR_EL1_EXCL_MASK;
/*
* Note that 'incl' is an include mask (controlled by the user via
* prctl()) while GCR_EL1 accepts an exclude mask.
* No need for ISB since this only affects EL0 currently, implicit
* with ERET.
*/
sysreg_clear_set_s(SYS_GCR_EL1, SYS_GCR_EL1_EXCL_MASK, excl);
}
static void set_gcr_el1_excl(u64 incl)
{
current->thread.gcr_user_incl = incl;
update_gcr_el1_excl(incl);
}
void flush_mte_state(void)
{
if (!system_supports_mte())
return;
/* clear any pending asynchronous tag fault */
dsb(ish);
write_sysreg_s(0, SYS_TFSRE0_EL1);
clear_thread_flag(TIF_MTE_ASYNC_FAULT);
/* disable tag checking */
set_sctlr_el1_tcf0(SCTLR_EL1_TCF0_NONE);
/* reset tag generation mask */
set_gcr_el1_excl(0);
}
void mte_thread_switch(struct task_struct *next)
{
if (!system_supports_mte())
return;
/* avoid expensive SCTLR_EL1 accesses if no change */
if (current->thread.sctlr_tcf0 != next->thread.sctlr_tcf0)
update_sctlr_el1_tcf0(next->thread.sctlr_tcf0);
update_gcr_el1_excl(next->thread.gcr_user_incl);
}
void mte_suspend_exit(void)
{
if (!system_supports_mte())
return;
update_gcr_el1_excl(current->thread.gcr_user_incl);
}
long set_mte_ctrl(struct task_struct *task, unsigned long arg)
{
u64 tcf0;
u64 gcr_incl = (arg & PR_MTE_TAG_MASK) >> PR_MTE_TAG_SHIFT;
if (!system_supports_mte())
return 0;
switch (arg & PR_MTE_TCF_MASK) {
case PR_MTE_TCF_NONE:
tcf0 = SCTLR_EL1_TCF0_NONE;
break;
case PR_MTE_TCF_SYNC:
tcf0 = SCTLR_EL1_TCF0_SYNC;
break;
case PR_MTE_TCF_ASYNC:
tcf0 = SCTLR_EL1_TCF0_ASYNC;
break;
default:
return -EINVAL;
}
if (task != current) {
task->thread.sctlr_tcf0 = tcf0;
task->thread.gcr_user_incl = gcr_incl;
} else {
set_sctlr_el1_tcf0(tcf0);
set_gcr_el1_excl(gcr_incl);
}
return 0;
}
long get_mte_ctrl(struct task_struct *task)
{
unsigned long ret;
if (!system_supports_mte())
return 0;
ret = task->thread.gcr_user_incl << PR_MTE_TAG_SHIFT;
switch (task->thread.sctlr_tcf0) {
case SCTLR_EL1_TCF0_NONE:
return PR_MTE_TCF_NONE;
case SCTLR_EL1_TCF0_SYNC:
ret |= PR_MTE_TCF_SYNC;
break;
case SCTLR_EL1_TCF0_ASYNC:
ret |= PR_MTE_TCF_ASYNC;
break;
}
return ret;
}
/*
* Access MTE tags in another process' address space as given in mm. Update
* the number of tags copied. Return 0 if any tags copied, error otherwise.
* Inspired by __access_remote_vm().
*/
static int __access_remote_tags(struct mm_struct *mm, unsigned long addr,
struct iovec *kiov, unsigned int gup_flags)
{
struct vm_area_struct *vma;
void __user *buf = kiov->iov_base;
size_t len = kiov->iov_len;
int ret;
int write = gup_flags & FOLL_WRITE;
if (!access_ok(buf, len))
return -EFAULT;
if (mmap_read_lock_killable(mm))
return -EIO;
while (len) {
unsigned long tags, offset;
void *maddr;
struct page *page = NULL;
ret = get_user_pages_remote(mm, addr, 1, gup_flags, &page,
&vma, NULL);
if (ret <= 0)
break;
/*
* Only copy tags if the page has been mapped as PROT_MTE
* (PG_mte_tagged set). Otherwise the tags are not valid and
* not accessible to user. Moreover, an mprotect(PROT_MTE)
* would cause the existing tags to be cleared if the page
* was never mapped with PROT_MTE.
*/
if (!test_bit(PG_mte_tagged, &page->flags)) {
ret = -EOPNOTSUPP;
put_page(page);
break;
}
/* limit access to the end of the page */
offset = offset_in_page(addr);
tags = min(len, (PAGE_SIZE - offset) / MTE_GRANULE_SIZE);
maddr = page_address(page);
if (write) {
tags = mte_copy_tags_from_user(maddr + offset, buf, tags);
set_page_dirty_lock(page);
} else {
tags = mte_copy_tags_to_user(buf, maddr + offset, tags);
}
put_page(page);
/* error accessing the tracer's buffer */
if (!tags)
break;
len -= tags;
buf += tags;
addr += tags * MTE_GRANULE_SIZE;
}
mmap_read_unlock(mm);
/* return an error if no tags copied */
kiov->iov_len = buf - kiov->iov_base;
if (!kiov->iov_len) {
/* check for error accessing the tracee's address space */
if (ret <= 0)
return -EIO;
else
return -EFAULT;
}
return 0;
}
/*
* Copy MTE tags in another process' address space at 'addr' to/from tracer's
* iovec buffer. Return 0 on success. Inspired by ptrace_access_vm().
*/
static int access_remote_tags(struct task_struct *tsk, unsigned long addr,
struct iovec *kiov, unsigned int gup_flags)
{
struct mm_struct *mm;
int ret;
mm = get_task_mm(tsk);
if (!mm)
return -EPERM;
if (!tsk->ptrace || (current != tsk->parent) ||
((get_dumpable(mm) != SUID_DUMP_USER) &&
!ptracer_capable(tsk, mm->user_ns))) {
mmput(mm);
return -EPERM;
}
ret = __access_remote_tags(mm, addr, kiov, gup_flags);
mmput(mm);
return ret;
}
int mte_ptrace_copy_tags(struct task_struct *child, long request,
unsigned long addr, unsigned long data)
{
int ret;
struct iovec kiov;
struct iovec __user *uiov = (void __user *)data;
unsigned int gup_flags = FOLL_FORCE;
if (!system_supports_mte())
return -EIO;
if (get_user(kiov.iov_base, &uiov->iov_base) ||
get_user(kiov.iov_len, &uiov->iov_len))
return -EFAULT;
if (request == PTRACE_POKEMTETAGS)
gup_flags |= FOLL_WRITE;
/* align addr to the MTE tag granule */
addr &= MTE_GRANULE_MASK;
ret = access_remote_tags(child, addr, &kiov, gup_flags);
if (!ret)
ret = put_user(kiov.iov_len, &uiov->iov_len);
return ret;
}
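For context only: set_mte_ctrl()/get_mte_ctrl() above sit behind the PR_SET_TAGGED_ADDR_CTRL/PR_GET_TAGGED_ADDR_CTRL prctl() extensions added elsewhere in this series. A minimal userspace sketch (not part of this diff; the helper name enable_sync_mte() is hypothetical and the fallback #defines are only for headers that predate the series, mirroring the uapi values it adds) might look like:

#include <stdio.h>
#include <sys/prctl.h>

/* Fallback definitions for toolchains without the new uapi headers (assumed values). */
#ifndef PR_TAGGED_ADDR_ENABLE
# define PR_SET_TAGGED_ADDR_CTRL	55
# define PR_GET_TAGGED_ADDR_CTRL	56
# define PR_TAGGED_ADDR_ENABLE		(1UL << 0)
#endif
#ifndef PR_MTE_TCF_SYNC
# define PR_MTE_TCF_SYNC	(1UL << 1)	/* synchronous tag check faults */
# define PR_MTE_TAG_SHIFT	3		/* start of the GCR_EL1 "include" mask */
#endif

static int enable_sync_mte(void)
{
	/*
	 * Enable the tagged address ABI, request synchronous tag check
	 * faults and allow any of the 16 tag values to be generated by IRG.
	 */
	unsigned long ctrl = PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_SYNC |
			     (0xffffUL << PR_MTE_TAG_SHIFT);

	if (prctl(PR_SET_TAGGED_ADDR_CTRL, ctrl, 0, 0, 0)) {
		perror("prctl(PR_SET_TAGGED_ADDR_CTRL)");
		return -1;
	}
	return 0;
}

Tag checking only becomes visible once memory is mapped with PROT_MTE and tagged; the kselftests added by this series exercise those combinations.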
@@ -137,11 +137,11 @@ void perf_callchain_user(struct perf_callchain_entry_ctx *entry,
 * whist unwinding the stackframe and is like a subroutine return so we use
 * the PC.
 */
-static int callchain_trace(struct stackframe *frame, void *data)
+static bool callchain_trace(void *data, unsigned long pc)
 {
 	struct perf_callchain_entry_ctx *entry = data;
-	perf_callchain_store(entry, frame->pc);
-	return 0;
+	perf_callchain_store(entry, pc);
+	return true;
 }
 void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
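This hunk reflects the ARCH_STACKWALK conversion: unwinder consumers now supply a bool (*)(void *cookie, unsigned long pc) callback and return true to keep walking or false to stop. A hedged kernel-side sketch of the same callback convention, assuming the generic arch_stack_walk() entry point that ARCH_STACKWALK provides (record_pc() and dump_current_stack() are hypothetical names):

#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/stacktrace.h>

struct pc_buf {
	unsigned long pcs[16];
	unsigned int nr;
};

/* Record each PC; returning false ends the walk once the buffer is full. */
static bool record_pc(void *cookie, unsigned long pc)
{
	struct pc_buf *buf = cookie;

	buf->pcs[buf->nr++] = pc;
	return buf->nr < ARRAY_SIZE(buf->pcs);
}

static void dump_current_stack(void)
{
	struct pc_buf buf = { .nr = 0 };

	arch_stack_walk(record_pc, &buf, current, NULL);
}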
......
@@ -16,7 +16,7 @@ u64 perf_reg_value(struct pt_regs *regs, int idx)
 	/*
 	 * Our handling of compat tasks (PERF_SAMPLE_REGS_ABI_32) is weird, but
-	 * we're stuck with it for ABI compatability reasons.
+	 * we're stuck with it for ABI compatibility reasons.
 	 *
 	 * For a 32-bit consumer inspecting a 32-bit task, then it will look at
 	 * the first 16 registers (see arch/arm/include/uapi/asm/perf_regs.h).
......
@@ -29,7 +29,8 @@ static bool __kprobes aarch64_insn_is_steppable(u32 insn)
 	    aarch64_insn_is_msr_imm(insn) ||
 	    aarch64_insn_is_msr_reg(insn) ||
 	    aarch64_insn_is_exception(insn) ||
-	    aarch64_insn_is_eret(insn))
+	    aarch64_insn_is_eret(insn) ||
+	    aarch64_insn_is_eret_auth(insn))
 		return false;
 	/*
@@ -42,8 +43,10 @@ static bool __kprobes aarch64_insn_is_steppable(u32 insn)
 		     != AARCH64_INSN_SPCLREG_DAIF;
 	/*
-	 * The HINT instruction is is problematic when single-stepping,
-	 * except for the NOP case.
+	 * The HINT instruction is steppable only if it is in whitelist
+	 * and the rest of other such instructions are blocked for
+	 * single stepping as they may cause exception or other
+	 * unintended behaviour.
 	 */
 	if (aarch64_insn_is_hint(insn))
 		return aarch64_insn_is_steppable_hint(insn);
......
...@@ -21,6 +21,7 @@ ...@@ -21,6 +21,7 @@
#include <linux/lockdep.h> #include <linux/lockdep.h>
#include <linux/mman.h> #include <linux/mman.h>
#include <linux/mm.h> #include <linux/mm.h>
#include <linux/nospec.h>
#include <linux/stddef.h> #include <linux/stddef.h>
#include <linux/sysctl.h> #include <linux/sysctl.h>
#include <linux/unistd.h> #include <linux/unistd.h>
...@@ -52,6 +53,7 @@ ...@@ -52,6 +53,7 @@
#include <asm/exec.h> #include <asm/exec.h>
#include <asm/fpsimd.h> #include <asm/fpsimd.h>
#include <asm/mmu_context.h> #include <asm/mmu_context.h>
#include <asm/mte.h>
#include <asm/processor.h> #include <asm/processor.h>
#include <asm/pointer_auth.h> #include <asm/pointer_auth.h>
#include <asm/stacktrace.h> #include <asm/stacktrace.h>
@@ -239,7 +241,7 @@ static void print_pstate(struct pt_regs *regs)
 		const char *btype_str = btypes[(pstate & PSR_BTYPE_MASK) >>
 					       PSR_BTYPE_SHIFT];
-		printk("pstate: %08llx (%c%c%c%c %c%c%c%c %cPAN %cUAO BTYPE=%s)\n",
+		printk("pstate: %08llx (%c%c%c%c %c%c%c%c %cPAN %cUAO %cTCO BTYPE=%s)\n",
 		       pstate,
 		       pstate & PSR_N_BIT ? 'N' : 'n',
 		       pstate & PSR_Z_BIT ? 'Z' : 'z',
@@ -251,6 +253,7 @@ static void print_pstate(struct pt_regs *regs)
 		       pstate & PSR_F_BIT ? 'F' : 'f',
 		       pstate & PSR_PAN_BIT ? '+' : '-',
 		       pstate & PSR_UAO_BIT ? '+' : '-',
+		       pstate & PSR_TCO_BIT ? '+' : '-',
 		       btype_str);
 	}
 }
...@@ -336,6 +339,7 @@ void flush_thread(void) ...@@ -336,6 +339,7 @@ void flush_thread(void)
tls_thread_flush(); tls_thread_flush();
flush_ptrace_hw_breakpoint(current); flush_ptrace_hw_breakpoint(current);
flush_tagged_addr_state(); flush_tagged_addr_state();
flush_mte_state();
} }
void release_thread(struct task_struct *dead_task) void release_thread(struct task_struct *dead_task)
...@@ -368,6 +372,9 @@ int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src) ...@@ -368,6 +372,9 @@ int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
dst->thread.sve_state = NULL; dst->thread.sve_state = NULL;
clear_tsk_thread_flag(dst, TIF_SVE); clear_tsk_thread_flag(dst, TIF_SVE);
/* clear any pending asynchronous tag fault raised by the parent */
clear_tsk_thread_flag(dst, TIF_MTE_ASYNC_FAULT);
return 0; return 0;
} }
@@ -421,8 +428,7 @@ int copy_thread(unsigned long clone_flags, unsigned long stack_start,
 		    cpus_have_const_cap(ARM64_HAS_UAO))
 			childregs->pstate |= PSR_UAO_BIT;
-		if (arm64_get_ssbd_state() == ARM64_SSBD_FORCE_DISABLE)
-			set_ssbs_bit(childregs);
+		spectre_v4_enable_task_mitigation(p);
 		if (system_uses_irq_prio_masking())
 			childregs->pmr_save = GIC_PRIO_IRQON;
@@ -472,8 +478,6 @@ void uao_thread_switch(struct task_struct *next)
 */
 static void ssbs_thread_switch(struct task_struct *next)
 {
-	struct pt_regs *regs = task_pt_regs(next);
 	/*
	 * Nothing to do for kernel threads, but 'regs' may be junk
	 * (e.g. idle task) so check the flags and bail early.
@@ -485,18 +489,10 @@ static void ssbs_thread_switch(struct task_struct *next)
	 * If all CPUs implement the SSBS extension, then we just need to
	 * context-switch the PSTATE field.
	 */
-	if (cpu_have_feature(cpu_feature(SSBS)))
+	if (cpus_have_const_cap(ARM64_SSBS))
 		return;
-	/* If the mitigation is enabled, then we leave SSBS clear. */
-	if ((arm64_get_ssbd_state() == ARM64_SSBD_FORCE_ENABLE) ||
-	    test_tsk_thread_flag(next, TIF_SSBD))
-		return;
-	if (compat_user_mode(regs))
-		set_compat_ssbs_bit(regs);
-	else if (user_mode(regs))
-		set_ssbs_bit(regs);
+	spectre_v4_enable_task_mitigation(next);
 }
/* /*
...@@ -571,6 +567,13 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev, ...@@ -571,6 +567,13 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
*/ */
dsb(ish); dsb(ish);
/*
* MTE thread switching must happen after the DSB above to ensure that
* any asynchronous tag check faults have been logged in the TFSR*_EL1
* registers.
*/
mte_thread_switch(next);
/* the actual thread switch */ /* the actual thread switch */
last = cpu_switch_to(prev, next); last = cpu_switch_to(prev, next);
...@@ -620,6 +623,11 @@ void arch_setup_new_exec(void) ...@@ -620,6 +623,11 @@ void arch_setup_new_exec(void)
current->mm->context.flags = is_compat_task() ? MMCF_AARCH32 : 0; current->mm->context.flags = is_compat_task() ? MMCF_AARCH32 : 0;
ptrauth_thread_init_user(current); ptrauth_thread_init_user(current);
if (task_spec_ssb_noexec(current)) {
arch_prctl_spec_ctrl_set(current, PR_SPEC_STORE_BYPASS,
PR_SPEC_ENABLE);
}
} }
 #ifdef CONFIG_ARM64_TAGGED_ADDR_ABI
@@ -628,11 +636,18 @@ void arch_setup_new_exec(void)
 */
 static unsigned int tagged_addr_disabled;
-long set_tagged_addr_ctrl(unsigned long arg)
+long set_tagged_addr_ctrl(struct task_struct *task, unsigned long arg)
 {
-	if (is_compat_task())
+	unsigned long valid_mask = PR_TAGGED_ADDR_ENABLE;
+	struct thread_info *ti = task_thread_info(task);
+
+	if (is_compat_thread(ti))
 		return -EINVAL;
-	if (arg & ~PR_TAGGED_ADDR_ENABLE)
+
+	if (system_supports_mte())
+		valid_mask |= PR_MTE_TCF_MASK | PR_MTE_TAG_MASK;
+
+	if (arg & ~valid_mask)
 		return -EINVAL;
 	/*
@@ -642,20 +657,28 @@ long set_tagged_addr_ctrl(unsigned long arg)
 	if (arg & PR_TAGGED_ADDR_ENABLE && tagged_addr_disabled)
 		return -EINVAL;
-	update_thread_flag(TIF_TAGGED_ADDR, arg & PR_TAGGED_ADDR_ENABLE);
+	if (set_mte_ctrl(task, arg) != 0)
+		return -EINVAL;
+
+	update_ti_thread_flag(ti, TIF_TAGGED_ADDR, arg & PR_TAGGED_ADDR_ENABLE);
 	return 0;
 }
-long get_tagged_addr_ctrl(void)
+long get_tagged_addr_ctrl(struct task_struct *task)
 {
-	if (is_compat_task())
+	long ret = 0;
+	struct thread_info *ti = task_thread_info(task);
+
+	if (is_compat_thread(ti))
 		return -EINVAL;
-	if (test_thread_flag(TIF_TAGGED_ADDR))
-		return PR_TAGGED_ADDR_ENABLE;
+	if (test_ti_thread_flag(ti, TIF_TAGGED_ADDR))
+		ret = PR_TAGGED_ADDR_ENABLE;
-	return 0;
+	ret |= get_mte_ctrl(task);
+
+	return ret;
 }
 /*
......
...@@ -34,6 +34,7 @@ ...@@ -34,6 +34,7 @@
#include <asm/cpufeature.h> #include <asm/cpufeature.h>
#include <asm/debug-monitors.h> #include <asm/debug-monitors.h>
#include <asm/fpsimd.h> #include <asm/fpsimd.h>
#include <asm/mte.h>
#include <asm/pointer_auth.h> #include <asm/pointer_auth.h>
#include <asm/stacktrace.h> #include <asm/stacktrace.h>
#include <asm/syscall.h> #include <asm/syscall.h>
...@@ -1032,6 +1033,35 @@ static int pac_generic_keys_set(struct task_struct *target, ...@@ -1032,6 +1033,35 @@ static int pac_generic_keys_set(struct task_struct *target,
#endif /* CONFIG_CHECKPOINT_RESTORE */ #endif /* CONFIG_CHECKPOINT_RESTORE */
#endif /* CONFIG_ARM64_PTR_AUTH */ #endif /* CONFIG_ARM64_PTR_AUTH */
#ifdef CONFIG_ARM64_TAGGED_ADDR_ABI
static int tagged_addr_ctrl_get(struct task_struct *target,
const struct user_regset *regset,
struct membuf to)
{
long ctrl = get_tagged_addr_ctrl(target);
if (IS_ERR_VALUE(ctrl))
return ctrl;
return membuf_write(&to, &ctrl, sizeof(ctrl));
}
static int tagged_addr_ctrl_set(struct task_struct *target, const struct
user_regset *regset, unsigned int pos,
unsigned int count, const void *kbuf, const
void __user *ubuf)
{
int ret;
long ctrl;
ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &ctrl, 0, -1);
if (ret)
return ret;
return set_tagged_addr_ctrl(target, ctrl);
}
#endif
enum aarch64_regset { enum aarch64_regset {
REGSET_GPR, REGSET_GPR,
REGSET_FPR, REGSET_FPR,
...@@ -1051,6 +1081,9 @@ enum aarch64_regset { ...@@ -1051,6 +1081,9 @@ enum aarch64_regset {
REGSET_PACG_KEYS, REGSET_PACG_KEYS,
#endif #endif
#endif #endif
#ifdef CONFIG_ARM64_TAGGED_ADDR_ABI
REGSET_TAGGED_ADDR_CTRL,
#endif
}; };
static const struct user_regset aarch64_regsets[] = { static const struct user_regset aarch64_regsets[] = {
...@@ -1148,6 +1181,16 @@ static const struct user_regset aarch64_regsets[] = { ...@@ -1148,6 +1181,16 @@ static const struct user_regset aarch64_regsets[] = {
}, },
#endif #endif
#endif #endif
#ifdef CONFIG_ARM64_TAGGED_ADDR_ABI
[REGSET_TAGGED_ADDR_CTRL] = {
.core_note_type = NT_ARM_TAGGED_ADDR_CTRL,
.n = 1,
.size = sizeof(long),
.align = sizeof(long),
.regset_get = tagged_addr_ctrl_get,
.set = tagged_addr_ctrl_set,
},
#endif
}; };
static const struct user_regset_view user_aarch64_view = { static const struct user_regset_view user_aarch64_view = {
...@@ -1691,6 +1734,12 @@ const struct user_regset_view *task_user_regset_view(struct task_struct *task) ...@@ -1691,6 +1734,12 @@ const struct user_regset_view *task_user_regset_view(struct task_struct *task)
long arch_ptrace(struct task_struct *child, long request, long arch_ptrace(struct task_struct *child, long request,
unsigned long addr, unsigned long data) unsigned long addr, unsigned long data)
{ {
switch (request) {
case PTRACE_PEEKMTETAGS:
case PTRACE_POKEMTETAGS:
return mte_ptrace_copy_tags(child, request, addr, data);
}
return ptrace_request(child, request, addr, data); return ptrace_request(child, request, addr, data);
} }
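For context only: the PTRACE_PEEKMTETAGS/POKEMTETAGS requests handled above are driven from userspace with an iovec describing the tag buffer. A hedged sketch of the tracer side (peek_tags() is a hypothetical helper; the fallback request number mirrors the arm64 uapi value added by this series):

#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/uio.h>

#ifndef PTRACE_PEEKMTETAGS
# define PTRACE_PEEKMTETAGS	33	/* arm64-specific request (assumed fallback) */
#endif

/* Read one tag per 16-byte granule starting at 'addr' in the tracee. */
static long peek_tags(pid_t pid, void *addr, unsigned char *tags, size_t n)
{
	struct iovec iov = { .iov_base = tags, .iov_len = n };

	/* On success iov_len is updated to the number of tags actually copied. */
	if (ptrace(PTRACE_PEEKMTETAGS, pid, addr, &iov) != 0)
		return -1;
	return iov.iov_len;
}

As the mte.c code earlier in this diff shows, the request fails with EOPNOTSUPP if the target page was never mapped with PROT_MTE, so a tracer should expect partial or failed reads on untagged mappings.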
@@ -1793,7 +1842,7 @@ void syscall_trace_exit(struct pt_regs *regs)
 * We also reserve IL for the kernel; SS is handled dynamically.
 */
 #define SPSR_EL1_AARCH64_RES0_BITS \
-	(GENMASK_ULL(63, 32) | GENMASK_ULL(27, 25) | GENMASK_ULL(23, 22) | \
+	(GENMASK_ULL(63, 32) | GENMASK_ULL(27, 26) | GENMASK_ULL(23, 22) | \
 	 GENMASK_ULL(20, 13) | GENMASK_ULL(5, 5))
 #define SPSR_EL1_AARCH32_RES0_BITS \
	(GENMASK_ULL(63, 32) | GENMASK_ULL(22, 22) | GENMASK_ULL(20, 20))
......
@@ -36,18 +36,6 @@ SYM_CODE_START(arm64_relocate_new_kernel)
 	mov	x14, xzr			/* x14 = entry ptr */
 	mov	x13, xzr			/* x13 = copy dest */
-	/* Clear the sctlr_el2 flags. */
-	mrs	x0, CurrentEL
-	cmp	x0, #CurrentEL_EL2
-	b.ne	1f
-	mrs	x0, sctlr_el2
-	mov_q	x1, SCTLR_ELx_FLAGS
-	bic	x0, x0, x1
-	pre_disable_mmu_workaround
-	msr	sctlr_el2, x0
-	isb
-1:
 	/* Check if the new image needs relocation. */
 	tbnz	x16, IND_DONE_BIT, .Ldone
......
@@ -18,16 +18,16 @@ struct return_address_data {
 	void *addr;
 };
-static int save_return_addr(struct stackframe *frame, void *d)
+static bool save_return_addr(void *d, unsigned long pc)
 {
 	struct return_address_data *data = d;
 	if (!data->level) {
-		data->addr = (void *)frame->pc;
-		return 1;
+		data->addr = (void *)pc;
+		return false;
 	} else {
 		--data->level;
-		return 0;
+		return true;
 	}
 }
 NOKPROBE_SYMBOL(save_return_addr);
......
@@ -244,7 +244,8 @@ static int preserve_sve_context(struct sve_context __user *ctx)
 	if (vq) {
 		/*
		 * This assumes that the SVE state has already been saved to
-		 * the task struct by calling preserve_fpsimd_context().
+		 * the task struct by calling the function
+		 * fpsimd_signal_preserve_current_state().
		 */
 		err |= __copy_to_user((char __user *)ctx + SVE_SIG_REGS_OFFSET,
 				      current->thread.sve_state,
...@@ -748,6 +749,9 @@ static void setup_return(struct pt_regs *regs, struct k_sigaction *ka, ...@@ -748,6 +749,9 @@ static void setup_return(struct pt_regs *regs, struct k_sigaction *ka,
regs->pstate |= PSR_BTYPE_C; regs->pstate |= PSR_BTYPE_C;
} }
/* TCO (Tag Check Override) always cleared for signal handlers */
regs->pstate &= ~PSR_TCO_BIT;
if (ka->sa.sa_flags & SA_RESTORER) if (ka->sa.sa_flags & SA_RESTORER)
sigtramp = ka->sa.sa_restorer; sigtramp = ka->sa.sa_restorer;
else else
...@@ -932,6 +936,12 @@ asmlinkage void do_notify_resume(struct pt_regs *regs, ...@@ -932,6 +936,12 @@ asmlinkage void do_notify_resume(struct pt_regs *regs,
if (thread_flags & _TIF_UPROBE) if (thread_flags & _TIF_UPROBE)
uprobe_notify_resume(regs); uprobe_notify_resume(regs);
if (thread_flags & _TIF_MTE_ASYNC_FAULT) {
clear_thread_flag(TIF_MTE_ASYNC_FAULT);
send_sig_fault(SIGSEGV, SEGV_MTEAERR,
(void __user *)NULL, current);
}
if (thread_flags & _TIF_SIGPENDING) if (thread_flags & _TIF_SIGPENDING)
do_signal(regs); do_signal(regs);
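The asynchronous tag check fault is thus reported to userspace as SIGSEGV with si_code SEGV_MTEAERR and a NULL fault address (synchronous faults are reported as SEGV_MTESERR from the fault handler elsewhere in this series). A hedged userspace sketch of telling the two apart in a handler; the fallback si_code values are assumptions mirroring the uapi additions, and the handler intentionally uses only async-signal-safe calls:

#include <signal.h>
#include <string.h>
#include <unistd.h>

#ifndef SEGV_MTEAERR
# define SEGV_MTEAERR	8	/* asynchronous tag check fault (assumed fallback) */
# define SEGV_MTESERR	9	/* synchronous tag check fault (assumed fallback) */
#endif

static void segv_handler(int sig, siginfo_t *info, void *ucontext)
{
	const char *msg = (info->si_code == SEGV_MTEAERR) ?
		"asynchronous MTE tag check fault (no faulting address)\n" :
		"synchronous or other SIGSEGV\n";

	write(STDERR_FILENO, msg, strlen(msg));
	_exit(1);
}

static void install_handler(void)
{
	struct sigaction sa = {
		.sa_sigaction = segv_handler,
		.sa_flags = SA_SIGINFO,
	};

	sigaction(SIGSEGV, &sa, NULL);
}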
......
@@ -83,9 +83,9 @@ static int smp_spin_table_cpu_prepare(unsigned int cpu)
 	/*
	 * We write the release address as LE regardless of the native
-	 * endianess of the kernel. Therefore, any boot-loaders that
+	 * endianness of the kernel. Therefore, any boot-loaders that
	 * read this address need to convert this address to the
-	 * boot-loader's endianess before jumping. This is mandated by
+	 * boot-loader's endianness before jumping. This is mandated by
	 * the boot protocol.
	 */
 	writeq_relaxed(__pa_symbol(secondary_holding_pen), release_addr);
......
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2018 ARM Ltd, All Rights Reserved.
*/
#include <linux/compat.h>
#include <linux/errno.h>
#include <linux/prctl.h>
#include <linux/sched.h>
#include <linux/sched/task_stack.h>
#include <linux/thread_info.h>
#include <asm/cpufeature.h>
static void ssbd_ssbs_enable(struct task_struct *task)
{
u64 val = is_compat_thread(task_thread_info(task)) ?
PSR_AA32_SSBS_BIT : PSR_SSBS_BIT;
task_pt_regs(task)->pstate |= val;
}
static void ssbd_ssbs_disable(struct task_struct *task)
{
u64 val = is_compat_thread(task_thread_info(task)) ?
PSR_AA32_SSBS_BIT : PSR_SSBS_BIT;
task_pt_regs(task)->pstate &= ~val;
}
/*
* prctl interface for SSBD
*/
static int ssbd_prctl_set(struct task_struct *task, unsigned long ctrl)
{
int state = arm64_get_ssbd_state();
/* Unsupported */
if (state == ARM64_SSBD_UNKNOWN)
return -ENODEV;
/* Treat the unaffected/mitigated state separately */
if (state == ARM64_SSBD_MITIGATED) {
switch (ctrl) {
case PR_SPEC_ENABLE:
return -EPERM;
case PR_SPEC_DISABLE:
case PR_SPEC_FORCE_DISABLE:
return 0;
}
}
/*
* Things are a bit backward here: the arm64 internal API
* *enables the mitigation* when the userspace API *disables
* speculation*. So much fun.
*/
switch (ctrl) {
case PR_SPEC_ENABLE:
/* If speculation is force disabled, enable is not allowed */
if (state == ARM64_SSBD_FORCE_ENABLE ||
task_spec_ssb_force_disable(task))
return -EPERM;
task_clear_spec_ssb_disable(task);
clear_tsk_thread_flag(task, TIF_SSBD);
ssbd_ssbs_enable(task);
break;
case PR_SPEC_DISABLE:
if (state == ARM64_SSBD_FORCE_DISABLE)
return -EPERM;
task_set_spec_ssb_disable(task);
set_tsk_thread_flag(task, TIF_SSBD);
ssbd_ssbs_disable(task);
break;
case PR_SPEC_FORCE_DISABLE:
if (state == ARM64_SSBD_FORCE_DISABLE)
return -EPERM;
task_set_spec_ssb_disable(task);
task_set_spec_ssb_force_disable(task);
set_tsk_thread_flag(task, TIF_SSBD);
ssbd_ssbs_disable(task);
break;
default:
return -ERANGE;
}
return 0;
}
int arch_prctl_spec_ctrl_set(struct task_struct *task, unsigned long which,
unsigned long ctrl)
{
switch (which) {
case PR_SPEC_STORE_BYPASS:
return ssbd_prctl_set(task, ctrl);
default:
return -ENODEV;
}
}
static int ssbd_prctl_get(struct task_struct *task)
{
switch (arm64_get_ssbd_state()) {
case ARM64_SSBD_UNKNOWN:
return -ENODEV;
case ARM64_SSBD_FORCE_ENABLE:
return PR_SPEC_DISABLE;
case ARM64_SSBD_KERNEL:
if (task_spec_ssb_force_disable(task))
return PR_SPEC_PRCTL | PR_SPEC_FORCE_DISABLE;
if (task_spec_ssb_disable(task))
return PR_SPEC_PRCTL | PR_SPEC_DISABLE;
return PR_SPEC_PRCTL | PR_SPEC_ENABLE;
case ARM64_SSBD_FORCE_DISABLE:
return PR_SPEC_ENABLE;
default:
return PR_SPEC_NOT_AFFECTED;
}
}
int arch_prctl_spec_ctrl_get(struct task_struct *task, unsigned long which)
{
switch (which) {
case PR_SPEC_STORE_BYPASS:
return ssbd_prctl_get(task);
default:
return -ENODEV;
}
}
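For context only: this file implements the existing PR_SPEC_STORE_BYPASS plumbing that the Spectre rework in this series reorganises; the user-facing prctl() interface is preserved. A hedged userspace sketch of opting the current task out of speculative store bypass (disable_ssb() is a hypothetical helper name):

#include <stdio.h>
#include <sys/prctl.h>

static int disable_ssb(void)
{
	/*
	 * Ask for the store-bypass mitigation to be enabled for this task;
	 * in prctl terms this *disables* speculation (see ssbd_prctl_set()).
	 */
	if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS,
		  PR_SPEC_DISABLE, 0, 0)) {
		perror("prctl(PR_SET_SPECULATION_CTRL)");
		return -1;
	}
	return 0;
}

The current state can be queried with prctl(PR_GET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS, 0, 0, 0), which maps onto ssbd_prctl_get() above.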
...@@ -10,6 +10,7 @@ ...@@ -10,6 +10,7 @@
#include <asm/daifflags.h> #include <asm/daifflags.h>
#include <asm/debug-monitors.h> #include <asm/debug-monitors.h>
#include <asm/exec.h> #include <asm/exec.h>
#include <asm/mte.h>
#include <asm/memory.h> #include <asm/memory.h>
#include <asm/mmu_context.h> #include <asm/mmu_context.h>
#include <asm/smp_plat.h> #include <asm/smp_plat.h>
@@ -72,8 +73,10 @@ void notrace __cpu_suspend_exit(void)
	 * have turned the mitigation on. If the user has forcefully
	 * disabled it, make sure their wishes are obeyed.
	 */
-	if (arm64_get_ssbd_state() == ARM64_SSBD_FORCE_DISABLE)
-		arm64_set_ssbd_mitigation(false);
+	spectre_v4_enable_mitigation(NULL);
+
+	/* Restore additional MTE-specific configuration */
+	mte_suspend_exit();
 }
/* /*
......
...@@ -123,6 +123,16 @@ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr, ...@@ -123,6 +123,16 @@ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
local_daif_restore(DAIF_PROCCTX); local_daif_restore(DAIF_PROCCTX);
user_exit(); user_exit();
if (system_supports_mte() && (flags & _TIF_MTE_ASYNC_FAULT)) {
/*
* Process the asynchronous tag check fault before the actual
* syscall. do_notify_resume() will send a signal to userspace
* before the syscall is restarted.
*/
regs->regs[0] = -ERESTARTNOINTR;
return;
}
if (has_syscall_work(flags)) { if (has_syscall_work(flags)) {
/* /*
* The de-facto standard way to skip a system call using ptrace * The de-facto standard way to skip a system call using ptrace
......
@@ -36,21 +36,23 @@ void store_cpu_topology(unsigned int cpuid)
 	if (mpidr & MPIDR_UP_BITMASK)
 		return;
-	/* Create cpu topology mapping based on MPIDR. */
-	if (mpidr & MPIDR_MT_BITMASK) {
-		/* Multiprocessor system : Multi-threads per core */
-		cpuid_topo->thread_id  = MPIDR_AFFINITY_LEVEL(mpidr, 0);
-		cpuid_topo->core_id    = MPIDR_AFFINITY_LEVEL(mpidr, 1);
-		cpuid_topo->package_id = MPIDR_AFFINITY_LEVEL(mpidr, 2) |
-					 MPIDR_AFFINITY_LEVEL(mpidr, 3) << 8;
-	} else {
-		/* Multiprocessor system : Single-thread per core */
-		cpuid_topo->thread_id  = -1;
-		cpuid_topo->core_id    = MPIDR_AFFINITY_LEVEL(mpidr, 0);
-		cpuid_topo->package_id = MPIDR_AFFINITY_LEVEL(mpidr, 1) |
-					 MPIDR_AFFINITY_LEVEL(mpidr, 2) << 8 |
-					 MPIDR_AFFINITY_LEVEL(mpidr, 3) << 16;
-	}
+	/*
+	 * This would be the place to create cpu topology based on MPIDR.
+	 *
+	 * However, it cannot be trusted to depict the actual topology; some
+	 * pieces of the architecture enforce an artificial cap on Aff0 values
+	 * (e.g. GICv3's ICC_SGI1R_EL1 limits it to 15), leading to an
+	 * artificial cycling of Aff1, Aff2 and Aff3 values. IOW, these end up
+	 * having absolutely no relationship to the actual underlying system
+	 * topology, and cannot be reasonably used as core / package ID.
+	 *
+	 * If the MT bit is set, Aff0 *could* be used to define a thread ID, but
+	 * we still wouldn't be able to obtain a sane core ID. This means we
+	 * need to entirely ignore MPIDR for any topology deduction.
+	 */
+	cpuid_topo->thread_id  = -1;
+	cpuid_topo->core_id    = cpuid;
+	cpuid_topo->package_id = cpu_to_node(cpuid);
 	pr_debug("CPU%u: cluster %d core %d thread %d mpidr %#016llx\n",
 		 cpuid, cpuid_topo->package_id, cpuid_topo->core_id,
......
@@ -105,7 +105,7 @@ SECTIONS
 		*(.eh_frame)
 	}
-	. = KIMAGE_VADDR + TEXT_OFFSET;
+	. = KIMAGE_VADDR;
 	.head.text : {
 		_text = .;
@@ -274,4 +274,4 @@ ASSERT((__entry_tramp_text_end - __entry_tramp_text_start) == PAGE_SIZE,
 /*
 * If padding is applied before .head.text, virt<->phys conversions will fail.
 */
-ASSERT(_text == (KIMAGE_VADDR + TEXT_OFFSET), "HEAD is misaligned")
+ASSERT(_text == KIMAGE_VADDR, "HEAD is misaligned")
@@ -57,9 +57,6 @@ config KVM_ARM_PMU
	  Adds support for a virtual Performance Monitoring Unit (PMU) in
	  virtual machines.
-config KVM_INDIRECT_VECTORS
-	def_bool HARDEN_BRANCH_PREDICTOR || RANDOMIZE_BASE
-
 endif # KVM
 endif # VIRTUALIZATION
@@ -10,5 +10,4 @@ subdir-ccflags-y := -I$(incdir)				\
		    -DDISABLE_BRANCH_PROFILING		\
		    $(DISABLE_STACKLEAK_PLUGIN)
-obj-$(CONFIG_KVM) += vhe/ nvhe/
-obj-$(CONFIG_KVM_INDIRECT_VECTORS) += smccc_wa.o
+obj-$(CONFIG_KVM) += vhe/ nvhe/ smccc_wa.o