Commit 143a6252 authored by Linus Torvalds

Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux

Pull arm64 updates from Catalin Marinas:

 - Initial support for the ARMv9 Scalable Matrix Extension (SME).

   SME takes the approach used for vectors in SVE and extends this to
   provide architectural support for matrix operations. There is no KVM
   support yet; SME is disabled in guests.

 - Support for crashkernel reservations above ZONE_DMA via the
   'crashkernel=X,high' command line option.

 - btrfs search_ioctl() fix for live-lock with sub-page faults.

 - arm64 perf updates: support for the Hisilicon "CPA" PMU for
   monitoring coherent I/O traffic, support for Arm's CMN-650 and
   CMN-700 interconnect PMUs, minor driver fixes, kerneldoc cleanup.

 - Kselftest updates for SME, BTI, MTE.

 - Automatic generation of the system register macros from a 'sysreg'
   file describing the register bitfields.

 - Update the type of the function argument holding the ESR_ELx register
   value to unsigned long to match the architecture register size
   (originally 32-bit but extended since ARMv8.0).

 - stacktrace cleanups.

 - ftrace cleanups.

 - Miscellaneous updates, most notably: arm64-specific huge_ptep_get(),
   avoid executable mappings in kexec/hibernate code, drop TLB flushing
   from get_clear_flush() (and rename it to get_clear_contig()),
   ARCH_NR_GPIO bumped to 2048 for ARCH_APPLE.

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (145 commits)
  arm64/sysreg: Generate definitions for FAR_ELx
  arm64/sysreg: Generate definitions for DACR32_EL2
  arm64/sysreg: Generate definitions for CSSELR_EL1
  arm64/sysreg: Generate definitions for CPACR_ELx
  arm64/sysreg: Generate definitions for CONTEXTIDR_ELx
  arm64/sysreg: Generate definitions for CLIDR_EL1
  arm64/sve: Move sve_free() into SVE code section
  arm64: Kconfig.platforms: Add comments
  arm64: Kconfig: Fix indentation and add comments
  arm64: mm: avoid writable executable mappings in kexec/hibernate code
  arm64: lds: move special code sections out of kernel exec segment
  arm64/hugetlb: Implement arm64 specific huge_ptep_get()
  arm64/hugetlb: Use ptep_get() to get the pte value of a huge page
  arm64: kdump: Do not allocate crash low memory if not needed
  arm64/sve: Generate ZCR definitions
  arm64/sme: Generate defintions for SVCR
  arm64/sme: Generate SMPRI_EL1 definitions
  arm64/sme: Automatically generate SMPRIMAP_EL2 definitions
  arm64/sme: Automatically generate SMIDR_EL1 defines
  arm64/sme: Automatically generate defines for SMCR
  ...
parents d6edf951 0616ea3f
@@ -813,7 +813,7 @@
 	Documentation/admin-guide/kdump/kdump.rst for an example.
 crashkernel=size[KMG],high
-	[KNL, X86-64] range could be above 4G. Allow kernel
+	[KNL, X86-64, ARM64] range could be above 4G. Allow kernel
 	to allocate physical memory region from top, so could
 	be above 4G if system have more than 4G ram installed.
 	Otherwise memory region will be allocated below 4G, if
@@ -826,14 +826,20 @@
 	that require some amount of low memory, e.g. swiotlb
 	requires at least 64M+32K low memory, also enough extra
 	low memory is needed to make sure DMA buffers for 32-bit
-	devices won't run out. Kernel would try to allocate at
-	least 256M below 4G automatically.
-	This one let user to specify own low range under 4G
+	devices won't run out. Kernel would try to allocate
+	at least 256M below 4G automatically.
+	This one lets the user specify own low range under 4G
 	for second kernel instead.
 	0: to disable low allocation.
 	It will be ignored when crashkernel=X,high is not used
 	or memory reserved is below 4G.
+	[KNL, ARM64] range in low memory.
+	This one lets the user specify a low range in the
+	DMA zone for the crash dump kernel.
+	It will be ignored when crashkernel=X,high is not used
+	or memory reserved is located in the DMA zones.
 cryptomgr.notests
 	[KNL] Disable crypto self-tests
......
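As a purely illustrative aside (the sizes below are hypothetical, not taken from the patch): with the ARM64 support added above, a crash kernel reservation would typically combine a high reservation with an explicit low reservation on the kernel command line, and crashkernel=0,low can be used to disable the low allocation as the text describes:

    crashkernel=2G,high crashkernel=256M,low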
@@ -350,6 +350,16 @@ Before jumping into the kernel, the following conditions must be met:
     - SMCR_EL2.FA64 (bit 31) must be initialised to 0b1.
+  For CPUs with the Memory Tagging Extension feature (FEAT_MTE2):
+  - If EL3 is present:
+    - SCR_EL3.ATA (bit 26) must be initialised to 0b1.
+  - If the kernel is entered at EL1 and EL2 is present:
+    - HCR_EL2.ATA (bit 56) must be initialised to 0b1.
 The requirements described above for CPU mode, caches, MMUs, architected
 timers, coherency and system registers apply to all CPUs. All CPUs must
 enter the kernel in the same exception level. Where the values documented
......
@@ -264,6 +264,39 @@ HWCAP2_MTE3
     Functionality implied by ID_AA64PFR1_EL1.MTE == 0b0011, as described
     by Documentation/arm64/memory-tagging-extension.rst.
+HWCAP2_SME
+    Functionality implied by ID_AA64PFR1_EL1.SME == 0b0001, as described
+    by Documentation/arm64/sme.rst.
+HWCAP2_SME_I16I64
+    Functionality implied by ID_AA64SMFR0_EL1.I16I64 == 0b1111.
+HWCAP2_SME_F64F64
+    Functionality implied by ID_AA64SMFR0_EL1.F64F64 == 0b1.
+HWCAP2_SME_I8I32
+    Functionality implied by ID_AA64SMFR0_EL1.I8I32 == 0b1111.
+HWCAP2_SME_F16F32
+    Functionality implied by ID_AA64SMFR0_EL1.F16F32 == 0b1.
+HWCAP2_SME_B16F32
+    Functionality implied by ID_AA64SMFR0_EL1.B16F32 == 0b1.
+HWCAP2_SME_F32F32
+    Functionality implied by ID_AA64SMFR0_EL1.F32F32 == 0b1.
+HWCAP2_SME_FA64
+    Functionality implied by ID_AA64SMFR0_EL1.FA64 == 0b1.
 4. Unused AT_HWCAP bits
 -----------------------
......
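The new HWCAP2_SME* bits above are exposed to userspace through the AT_HWCAP2 auxiliary vector entry. The following is a minimal sketch only (not part of the patch), assuming glibc's getauxval() and kernel uapi headers recent enough to define the new macros:

  #include <stdio.h>
  #include <sys/auxv.h>
  #include <asm/hwcap.h>          /* HWCAP2_SME* come from the kernel uapi */

  int main(void)
  {
          unsigned long hwcap2 = getauxval(AT_HWCAP2);

  #ifdef HWCAP2_SME
          if (hwcap2 & HWCAP2_SME) {
                  printf("SME reported in AT_HWCAP2\n");
                  if (hwcap2 & HWCAP2_SME_FA64)
                          printf("  FA64: full A64 ISA available in streaming mode\n");
          } else {
                  printf("SME not reported\n");
          }
  #else
          (void)hwcap2;
          printf("headers too old to define HWCAP2_SME\n");
  #endif
          return 0;
  }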
@@ -21,6 +21,7 @@ ARM64 Architecture
     perf
     pointer-authentication
     silicon-errata
+    sme
     sve
     tagged-address-abi
     tagged-pointers
......
@@ -7,7 +7,9 @@ Author: Dave Martin <Dave.Martin@arm.com>
 Date: 4 August 2017
 This document outlines briefly the interface provided to userspace by Linux in
-order to support use of the ARM Scalable Vector Extension (SVE).
+order to support use of the ARM Scalable Vector Extension (SVE), including
+interactions with Streaming SVE mode added by the Scalable Matrix Extension
+(SME).
 This is an outline of the most important features and issues only and not
 intended to be exhaustive.
@@ -23,6 +25,10 @@ model features for SVE is included in Appendix A.
 * SVE registers Z0..Z31, P0..P15 and FFR and the current vector length VL, are
   tracked per-thread.
+* In streaming mode FFR is not accessible unless HWCAP2_SME_FA64 is present
+  in the system, when it is not supported and these interfaces are used to
+  access streaming mode FFR is read and written as zero.
 * The presence of SVE is reported to userspace via HWCAP_SVE in the aux vector
   AT_HWCAP entry. Presence of this flag implies the presence of the SVE
   instructions and registers, and the Linux-specific system interfaces
@@ -53,10 +59,19 @@ model features for SVE is included in Appendix A.
   which userspace can read using an MRS instruction. See elf_hwcaps.txt and
   cpu-feature-registers.txt for details.
+* On hardware that supports the SME extensions, HWCAP2_SME will also be
+  reported in the AT_HWCAP2 aux vector entry. Among other things SME adds
+  streaming mode which provides a subset of the SVE feature set using a
+  separate SME vector length and the same Z/V registers. See sme.rst
+  for more details.
 * Debuggers should restrict themselves to interacting with the target via the
   NT_ARM_SVE regset. The recommended way of detecting support for this regset
   is to connect to a target process first and then attempt a
-  ptrace(PTRACE_GETREGSET, pid, NT_ARM_SVE, &iov).
+  ptrace(PTRACE_GETREGSET, pid, NT_ARM_SVE, &iov). Note that when SME is
+  present and streaming SVE mode is in use the FPSIMD subset of registers
+  will be read via NT_ARM_SVE and NT_ARM_SVE writes will exit streaming mode
+  in the target.
 * Whenever SVE scalable register values (Zn, Pn, FFR) are exchanged in memory
   between userspace and the kernel, the register value is encoded in memory in
@@ -126,6 +141,11 @@ the SVE instruction set architecture.
   are only present in fpsimd_context. For convenience, the content of V0..V31
   is duplicated between sve_context and fpsimd_context.
+* The record contains a flag field which includes a flag SVE_SIG_FLAG_SM which
+  if set indicates that the thread is in streaming mode and the vector length
+  and register data (if present) describe the streaming SVE data and vector
+  length.
 * The signal frame record for SVE always contains basic metadata, in particular
   the thread's vector length (in sve_context.vl).
@@ -170,6 +190,11 @@ When returning from a signal handler:
   the signal frame does not match the current vector length, the signal return
   attempt is treated as illegal, resulting in a forced SIGSEGV.
+* It is permitted to enter or leave streaming mode by setting or clearing
+  the SVE_SIG_FLAG_SM flag but applications should take care to ensure that
+  when doing so sve_context.vl and any register data are appropriate for the
+  vector length in the new mode.
 6. prctl extensions
 --------------------
@@ -265,8 +290,14 @@ prctl(PR_SVE_GET_VL)
 7. ptrace extensions
 ---------------------
-* A new regset NT_ARM_SVE is defined for use with PTRACE_GETREGSET and
-  PTRACE_SETREGSET.
+* New regsets NT_ARM_SVE and NT_ARM_SSVE are defined for use with
+  PTRACE_GETREGSET and PTRACE_SETREGSET. NT_ARM_SSVE describes the
+  streaming mode SVE registers and NT_ARM_SVE describes the
+  non-streaming mode SVE registers.
+  In this description a register set is referred to as being "live" when
+  the target is in the appropriate streaming or non-streaming mode and is
+  using data beyond the subset shared with the FPSIMD Vn registers.
 Refer to [2] for definitions.
@@ -297,7 +328,7 @@ The regset data starts with struct user_sve_header, containing:
     flags
-	either
+	at most one of
 	    SVE_PT_REGS_FPSIMD
@@ -331,6 +362,10 @@ The regset data starts with struct user_sve_header, containing:
       SVE_PT_VL_ONEXEC (SETREGSET only).
+  If neither FPSIMD nor SVE flags are provided then no register
+  payload is available, this is only possible when SME is implemented.
 * The effects of changing the vector length and/or flags are equivalent to
   those documented for PR_SVE_SET_VL.
@@ -346,6 +381,13 @@ The regset data starts with struct user_sve_header, containing:
   case only the vector length and flags are changed (along with any
   consequences of those changes).
+* In systems supporting SME when in streaming mode a GETREGSET for
+  NT_REG_SVE will return only the user_sve_header with no register data,
+  similarly a GETREGSET for NT_REG_SSVE will not return any register data
+  when not in streaming mode.
+* A GETREGSET for NT_ARM_SSVE will never return SVE_PT_REGS_FPSIMD.
 * For SETREGSET, if an SVE_PT_REGS_SVE payload is present and the
   requested VL is not supported, the effect will be the same as if the
   payload were omitted, except that an EIO error is reported. No
@@ -355,17 +397,25 @@ The regset data starts with struct user_sve_header, containing:
   unspecified. It is up to the caller to translate the payload layout
   for the actual VL and retry.
+* Where SME is implemented it is not possible to GETREGSET the register
+  state for normal SVE when in streaming mode, nor the streaming mode
+  register state when in normal mode, regardless of the implementation defined
+  behaviour of the hardware for sharing data between the two modes.
+* Any SETREGSET of NT_ARM_SVE will exit streaming mode if the target was in
+  streaming mode and any SETREGSET of NT_ARM_SSVE will enter streaming mode
+  if the target was not in streaming mode.
 * The effect of writing a partial, incomplete payload is unspecified.
 8. ELF coredump extensions
 ---------------------------
-* A NT_ARM_SVE note will be added to each coredump for each thread of the
-  dumped process. The contents will be equivalent to the data that would have
-  been read if a PTRACE_GETREGSET of NT_ARM_SVE were executed for each thread
-  when the coredump was generated.
+* NT_ARM_SVE and NT_ARM_SSVE notes will be added to each coredump for
+  each thread of the dumped process. The contents will be equivalent to the
+  data that would have been read if a PTRACE_GETREGSET of the corresponding
+  type were executed for each thread when the coredump was generated.
 9. System runtime configuration
 --------------------------------
......
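To make the regset description above concrete, the following is a hedged sketch (ours, not part of the patch) of the detection idiom the document recommends: issue a PTRACE_GETREGSET of NT_ARM_SVE or NT_ARM_SSVE for just the struct user_sve_header and inspect the reported vector length and flags. NT_ARM_SSVE may need a fallback definition with older userspace headers, and the tracee is assumed to already be ptrace-attached and stopped:

  #include <stdio.h>
  #include <sys/types.h>
  #include <sys/ptrace.h>
  #include <sys/uio.h>
  #include <linux/elf.h>          /* NT_ARM_SVE */
  #include <asm/ptrace.h>         /* struct user_sve_header */

  #ifndef NT_ARM_SSVE
  #define NT_ARM_SSVE 0x40b       /* streaming SVE regset added by this series */
  #endif

  /* Read just the header of an SVE-style regset from a stopped tracee. */
  static int read_vec_header(pid_t pid, int regset, struct user_sve_header *hdr)
  {
          struct iovec iov = { .iov_base = hdr, .iov_len = sizeof(*hdr) };

          if (ptrace(PTRACE_GETREGSET, pid, regset, &iov) != 0)
                  return -1;      /* regset unsupported, or other ptrace error */

          printf("vl=%u max_vl=%u flags=%#x size=%u\n",
                 (unsigned)hdr->vl, (unsigned)hdr->max_vl,
                 (unsigned)hdr->flags, (unsigned)hdr->size);
          return 0;
  }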
@@ -14,6 +14,8 @@ properties:
   compatible:
     enum:
       - arm,cmn-600
+      - arm,cmn-650
+      - arm,cmn-700
       - arm,ci-700
   reg:
......
@@ -5713,6 +5713,8 @@ affect the device's behavior. Current defined flags::
   #define KVM_RUN_X86_SMM (1 << 0)
   /* x86, set if bus lock detected in VM */
   #define KVM_RUN_BUS_LOCK (1 << 1)
+  /* arm64, set for KVM_EXIT_DEBUG */
+  #define KVM_DEBUG_ARCH_HSR_HIGH_VALID (1 << 0)
 ::
......
@@ -24,6 +24,13 @@ config KEXEC_ELF
 config HAVE_IMA_KEXEC
 	bool
+config ARCH_HAS_SUBPAGE_FAULTS
+	bool
+	help
+	  Select if the architecture can check permissions at sub-page
+	  granularity (e.g. arm64 MTE). The probe_user_*() functions
+	  must be implemented.
 config HOTPLUG_SMT
 	bool
......
@@ -262,31 +262,31 @@ config ARM64_CONT_PMD_SHIFT
 	default 4
 config ARCH_MMAP_RND_BITS_MIN
 	default 14 if ARM64_64K_PAGES
 	default 16 if ARM64_16K_PAGES
 	default 18
 # max bits determined by the following formula:
 #  VA_BITS - PAGE_SHIFT - 3
 config ARCH_MMAP_RND_BITS_MAX
 	default 19 if ARM64_VA_BITS=36
 	default 24 if ARM64_VA_BITS=39
 	default 27 if ARM64_VA_BITS=42
 	default 30 if ARM64_VA_BITS=47
 	default 29 if ARM64_VA_BITS=48 && ARM64_64K_PAGES
 	default 31 if ARM64_VA_BITS=48 && ARM64_16K_PAGES
 	default 33 if ARM64_VA_BITS=48
 	default 14 if ARM64_64K_PAGES
 	default 16 if ARM64_16K_PAGES
 	default 18
 config ARCH_MMAP_RND_COMPAT_BITS_MIN
 	default 7 if ARM64_64K_PAGES
 	default 9 if ARM64_16K_PAGES
 	default 11
 config ARCH_MMAP_RND_COMPAT_BITS_MAX
 	default 16
 config NO_IOPORT_MAP
 	def_bool y if !PCI
@@ -313,7 +313,7 @@ config GENERIC_HWEIGHT
 	def_bool y
 config GENERIC_CSUM
 	def_bool y
 config GENERIC_CALIBRATE_DELAY
 	def_bool y
@@ -1046,8 +1046,7 @@ config SOCIONEXT_SYNQUACER_PREITS
 	  If unsure, say Y.
-endmenu
+endmenu # "ARM errata workarounds via the alternatives framework"
 choice
 	prompt "Page size"
@@ -1575,9 +1574,9 @@ config SETEND_EMULATION
 	  be unexpected results in the applications.
 	  If unsure, say Y
-endif
-endif
+endif # ARMV8_DEPRECATED
+endif # COMPAT
 menu "ARMv8.1 architectural features"
@@ -1602,15 +1601,15 @@ config ARM64_PAN
 	bool "Enable support for Privileged Access Never (PAN)"
 	default y
 	help
 	  Privileged Access Never (PAN; part of the ARMv8.1 Extensions)
 	  prevents the kernel or hypervisor from accessing user-space (EL0)
 	  memory directly.
 	  Choosing this option will cause any unprotected (not using
 	  copy_to_user et al) memory access to fail with a permission fault.
 	  The feature is detected at runtime, and will remain as a 'nop'
 	  instruction if the cpu does not implement the feature.
 config AS_HAS_LDAPR
 	def_bool $(as-instr,.arch_extension rcpc)
@@ -1638,15 +1637,15 @@ config ARM64_USE_LSE_ATOMICS
 	  built with binutils >= 2.25 in order for the new instructions
 	  to be used.
-endmenu
+endmenu # "ARMv8.1 architectural features"
 menu "ARMv8.2 architectural features"
 config AS_HAS_ARMV8_2
 	def_bool $(cc-option,-Wa$(comma)-march=armv8.2-a)
 config AS_HAS_SHA3
 	def_bool $(as-instr,.arch armv8.2-a+sha3)
 config ARM64_PMEM
 	bool "Enable support for persistent memory"
@@ -1690,7 +1689,7 @@ config ARM64_CNP
 	  at runtime, and does not affect PEs that do not implement
 	  this feature.
-endmenu
+endmenu # "ARMv8.2 architectural features"
 menu "ARMv8.3 architectural features"
@@ -1753,7 +1752,7 @@ config AS_HAS_PAC
 config AS_HAS_CFI_NEGATE_RA_STATE
 	def_bool $(as-instr,.cfi_startproc\n.cfi_negate_ra_state\n.cfi_endproc\n)
-endmenu
+endmenu # "ARMv8.3 architectural features"
 menu "ARMv8.4 architectural features"
@@ -1794,7 +1793,7 @@ config ARM64_TLB_RANGE
 	  The feature introduces new assembly instructions, and they were
 	  support when binutils >= 2.30.
-endmenu
+endmenu # "ARMv8.4 architectural features"
 menu "ARMv8.5 architectural features"
@@ -1880,6 +1879,7 @@ config ARM64_MTE
 	depends on AS_HAS_LSE_ATOMICS
 	# Required for tag checking in the uaccess routines
 	depends on ARM64_PAN
+	select ARCH_HAS_SUBPAGE_FAULTS
 	select ARCH_USES_HIGH_VMA_FLAGS
 	help
 	  Memory Tagging (part of the ARMv8.5 Extensions) provides
@@ -1901,7 +1901,7 @@ config ARM64_MTE
 	  Documentation/arm64/memory-tagging-extension.rst.
-endmenu
+endmenu # "ARMv8.5 architectural features"
 menu "ARMv8.7 architectural features"
@@ -1910,12 +1910,12 @@ config ARM64_EPAN
 	default y
 	depends on ARM64_PAN
 	help
 	  Enhanced Privileged Access Never (EPAN) allows Privileged
 	  Access Never to be used with Execute-only mappings.
 	  The feature is detected at runtime, and will remain disabled
 	  if the cpu does not implement the feature.
-endmenu
+endmenu # "ARMv8.7 architectural features"
 config ARM64_SVE
 	bool "ARM Scalable Vector Extension support"
@@ -1948,6 +1948,17 @@ config ARM64_SVE
 	  booting the kernel. If unsure and you are not observing these
 	  symptoms, you should assume that it is safe to say Y.
+config ARM64_SME
+	bool "ARM Scalable Matrix Extension support"
+	default y
+	depends on ARM64_SVE
+	help
+	  The Scalable Matrix Extension (SME) is an extension to the AArch64
+	  execution state which utilises a substantial subset of the SVE
+	  instruction set, together with the addition of new architectural
+	  register state capable of holding two dimensional matrix tiles to
+	  enable various matrix operations.
 config ARM64_MODULE_PLTS
 	bool "Use PLTs to allow module memory to spill over into vmalloc area"
 	depends on MODULES
@@ -1991,7 +2002,7 @@ config ARM64_DEBUG_PRIORITY_MASKING
 	  the validity of ICC_PMR_EL1 when calling concerned functions.
 	  If unsure, say N
-endif
+endif # ARM64_PSEUDO_NMI
 config RELOCATABLE
 	bool "Build a relocatable kernel image" if EXPERT
@@ -2050,7 +2061,19 @@ config STACKPROTECTOR_PER_TASK
 	def_bool y
 	depends on STACKPROTECTOR && CC_HAVE_STACKPROTECTOR_SYSREG
-endmenu
+# The GPIO number here must be sorted by descending number. In case of
+# a multiplatform kernel, we just want the highest value required by the
+# selected platforms.
+config ARCH_NR_GPIO
+	int
+	default 2048 if ARCH_APPLE
+	default 0
+	help
+	  Maximum number of GPIOs in the system.
+	  If unsure, leave the default value.
+endmenu # "Kernel Features"
 menu "Boot options"
@@ -2114,7 +2137,7 @@ config EFI
 	help
 	  This option provides support for runtime services provided
 	  by UEFI firmware (such as non-volatile variables, realtime
 	  clock, and platform reset). A UEFI stub is also provided to
 	  allow the kernel to be booted as an EFI application. This
 	  is only useful on systems that have UEFI firmware.
@@ -2129,7 +2152,7 @@ config DMI
 	  However, even with this option, the resultant kernel should
 	  continue to boot on existing non-UEFI platforms.
-endmenu
+endmenu # "Boot options"
 config SYSVIPC_COMPAT
 	def_bool y
@@ -2150,7 +2173,7 @@ config ARCH_HIBERNATION_HEADER
 config ARCH_SUSPEND_POSSIBLE
 	def_bool y
-endmenu
+endmenu # "Power management options"
 menu "CPU Power Management"
@@ -2158,7 +2181,7 @@ source "drivers/cpuidle/Kconfig"
 source "drivers/cpufreq/Kconfig"
-endmenu
+endmenu # "CPU Power Management"
 source "drivers/acpi/Kconfig"
@@ -2166,4 +2189,4 @@ source "arch/arm64/kvm/Kconfig"
 if CRYPTO
 source "arch/arm64/crypto/Kconfig"
-endif
+endif # CRYPTO
@@ -325,4 +325,4 @@ config ARCH_ZYNQMP
 	help
 	  This enables support for Xilinx ZynqMP Family
-endmenu
+endmenu # "Platform selection"
@@ -7,3 +7,4 @@ generic-y += parport.h
 generic-y += user.h
 generated-y += cpucaps.h
+generated-y += sysreg-defs.h
@@ -142,7 +142,7 @@ static inline bool __init __early_cpu_has_rndr(void)
 {
 	/* Open code as we run prior to the first call to cpufeature. */
 	unsigned long ftr = read_sysreg_s(SYS_ID_AA64ISAR0_EL1);
-	return (ftr >> ID_AA64ISAR0_RNDR_SHIFT) & 0xf;
+	return (ftr >> ID_AA64ISAR0_EL1_RNDR_SHIFT) & 0xf;
 }
 static inline bool __init __must_check
......
@@ -58,11 +58,15 @@ struct cpuinfo_arm64 {
 	u64 reg_id_aa64pfr0;
 	u64 reg_id_aa64pfr1;
 	u64 reg_id_aa64zfr0;
+	u64 reg_id_aa64smfr0;
 	struct cpuinfo_32bit aarch32;
 	/* pseudo-ZCR for recording maximum ZCR_EL1 LEN value: */
 	u64 reg_zcr;
+	/* pseudo-SMCR for recording maximum SMCR_EL1 LEN value: */
+	u64 reg_smcr;
 };
 DECLARE_PER_CPU(struct cpuinfo_arm64, cpu_data);
......
@@ -622,6 +622,13 @@ static inline bool id_aa64pfr0_sve(u64 pfr0)
 	return val > 0;
 }
+static inline bool id_aa64pfr1_sme(u64 pfr1)
+{
+	u32 val = cpuid_feature_extract_unsigned_field(pfr1, ID_AA64PFR1_SME_SHIFT);
+	return val > 0;
+}
 static inline bool id_aa64pfr1_mte(u64 pfr1)
 {
 	u32 val = cpuid_feature_extract_unsigned_field(pfr1, ID_AA64PFR1_MTE_SHIFT);
@@ -759,6 +766,23 @@ static __always_inline bool system_supports_sve(void)
 		cpus_have_const_cap(ARM64_SVE);
 }
+static __always_inline bool system_supports_sme(void)
+{
+	return IS_ENABLED(CONFIG_ARM64_SME) &&
+		cpus_have_const_cap(ARM64_SME);
+}
+static __always_inline bool system_supports_fa64(void)
+{
+	return IS_ENABLED(CONFIG_ARM64_SME) &&
+		cpus_have_const_cap(ARM64_SME_FA64);
+}
+static __always_inline bool system_supports_tpidr2(void)
+{
+	return system_supports_sme();
+}
 static __always_inline bool system_supports_cnp(void)
 {
 	return IS_ENABLED(CONFIG_ARM64_CNP) &&
......
@@ -36,7 +36,7 @@
 #define MIDR_VARIANT(midr) \
 	(((midr) & MIDR_VARIANT_MASK) >> MIDR_VARIANT_SHIFT)
 #define MIDR_IMPLEMENTOR_SHIFT 24
-#define MIDR_IMPLEMENTOR_MASK (0xff << MIDR_IMPLEMENTOR_SHIFT)
+#define MIDR_IMPLEMENTOR_MASK (0xffU << MIDR_IMPLEMENTOR_SHIFT)
 #define MIDR_IMPLEMENTOR(midr) \
 	(((midr) & MIDR_IMPLEMENTOR_MASK) >> MIDR_IMPLEMENTOR_SHIFT)
......
@@ -64,7 +64,7 @@ struct task_struct;
 struct step_hook {
 	struct list_head node;
-	int (*fn)(struct pt_regs *regs, unsigned int esr);
+	int (*fn)(struct pt_regs *regs, unsigned long esr);
 };
 void register_user_step_hook(struct step_hook *hook);
@@ -75,7 +75,7 @@ void unregister_kernel_step_hook(struct step_hook *hook);
 struct break_hook {
 	struct list_head node;
-	int (*fn)(struct pt_regs *regs, unsigned int esr);
+	int (*fn)(struct pt_regs *regs, unsigned long esr);
 	u16 imm;
 	u16 mask; /* These bits are ignored when comparing with imm */
 };
......
@@ -143,6 +143,50 @@
 .Lskip_sve_\@:
 .endm
+/* SME register access and priority mapping */
+.macro __init_el2_nvhe_sme
+	mrs	x1, id_aa64pfr1_el1
+	ubfx	x1, x1, #ID_AA64PFR1_SME_SHIFT, #4
+	cbz	x1, .Lskip_sme_\@
+	bic	x0, x0, #CPTR_EL2_TSM		// Also disable SME traps
+	msr	cptr_el2, x0			// Disable copro. traps to EL2
+	isb
+	mrs	x1, sctlr_el2
+	orr	x1, x1, #SCTLR_ELx_ENTP2	// Disable TPIDR2 traps
+	msr	sctlr_el2, x1
+	isb
+	mov	x1, #0				// SMCR controls
+	mrs_s	x2, SYS_ID_AA64SMFR0_EL1
+	ubfx	x2, x2, #ID_AA64SMFR0_FA64_SHIFT, #1	// Full FP in SM?
+	cbz	x2, .Lskip_sme_fa64_\@
+	orr	x1, x1, SMCR_ELx_FA64_MASK
+.Lskip_sme_fa64_\@:
+	orr	x1, x1, #SMCR_ELx_LEN_MASK	// Enable full SME vector
+	msr_s	SYS_SMCR_EL2, x1		// length for EL1.
+	mrs_s	x1, SYS_SMIDR_EL1		// Priority mapping supported?
+	ubfx	x1, x1, #SMIDR_EL1_SMPS_SHIFT, #1
+	cbz	x1, .Lskip_sme_\@
+	msr_s	SYS_SMPRIMAP_EL2, xzr		// Make all priorities equal
+	mrs	x1, id_aa64mmfr1_el1		// HCRX_EL2 present?
+	ubfx	x1, x1, #ID_AA64MMFR1_HCX_SHIFT, #4
+	cbz	x1, .Lskip_sme_\@
+	mrs_s	x1, SYS_HCRX_EL2
+	orr	x1, x1, #HCRX_EL2_SMPME_MASK	// Enable priority mapping
+	msr_s	SYS_HCRX_EL2, x1
+.Lskip_sme_\@:
+.endm
 /* Disable any fine grained traps */
 .macro __init_el2_fgt
 	mrs	x1, id_aa64mmfr0_el1
@@ -153,15 +197,26 @@
 	mrs	x1, id_aa64dfr0_el1
 	ubfx	x1, x1, #ID_AA64DFR0_PMSVER_SHIFT, #4
 	cmp	x1, #3
-	b.lt	.Lset_fgt_\@
+	b.lt	.Lset_debug_fgt_\@
 	/* Disable PMSNEVFR_EL1 read and write traps */
 	orr	x0, x0, #(1 << 62)
-.Lset_fgt_\@:
+.Lset_debug_fgt_\@:
 	msr_s	SYS_HDFGRTR_EL2, x0
 	msr_s	SYS_HDFGWTR_EL2, x0
-	msr_s	SYS_HFGRTR_EL2, xzr
-	msr_s	SYS_HFGWTR_EL2, xzr
+	mov	x0, xzr
+	mrs	x1, id_aa64pfr1_el1
+	ubfx	x1, x1, #ID_AA64PFR1_SME_SHIFT, #4
+	cbz	x1, .Lset_fgt_\@
+	/* Disable nVHE traps of TPIDR2 and SMPRI */
+	orr	x0, x0, #HFGxTR_EL2_nSMPRI_EL1_MASK
+	orr	x0, x0, #HFGxTR_EL2_nTPIDR2_EL0_MASK
+.Lset_fgt_\@:
+	msr_s	SYS_HFGRTR_EL2, x0
+	msr_s	SYS_HFGWTR_EL2, x0
 	msr_s	SYS_HFGITR_EL2, xzr
 	mrs	x1, id_aa64pfr0_el1		// AMU traps UNDEF without AMU
@@ -196,6 +251,7 @@
 	__init_el2_nvhe_idregs
 	__init_el2_nvhe_cptr
 	__init_el2_nvhe_sve
+	__init_el2_nvhe_sme
 	__init_el2_fgt
 	__init_el2_nvhe_prepare_eret
 .endm
......
@@ -37,7 +37,8 @@
 #define ESR_ELx_EC_ERET		(0x1a)	/* EL2 only */
 /* Unallocated EC: 0x1B */
 #define ESR_ELx_EC_FPAC		(0x1C)	/* EL1 and above */
-/* Unallocated EC: 0x1D - 0x1E */
+#define ESR_ELx_EC_SME		(0x1D)
+/* Unallocated EC: 0x1E */
 #define ESR_ELx_EC_IMP_DEF	(0x1f)	/* EL3 only */
 #define ESR_ELx_EC_IABT_LOW	(0x20)
 #define ESR_ELx_EC_IABT_CUR	(0x21)
@@ -75,6 +76,7 @@
 #define ESR_ELx_IL_SHIFT	(25)
 #define ESR_ELx_IL		(UL(1) << ESR_ELx_IL_SHIFT)
 #define ESR_ELx_ISS_MASK	(ESR_ELx_IL - 1)
+#define ESR_ELx_ISS(esr)	((esr) & ESR_ELx_ISS_MASK)
 /* ISS field definitions shared by different classes */
 #define ESR_ELx_WNR_SHIFT	(6)
@@ -136,7 +138,7 @@
 #define ESR_ELx_WFx_ISS_TI	(UL(1) << 0)
 #define ESR_ELx_WFx_ISS_WFI	(UL(0) << 0)
 #define ESR_ELx_WFx_ISS_WFE	(UL(1) << 0)
-#define ESR_ELx_xVC_IMM_MASK	((1UL << 16) - 1)
+#define ESR_ELx_xVC_IMM_MASK	((UL(1) << 16) - 1)
 #define DISR_EL1_IDS		(UL(1) << 24)
 /*
@@ -327,17 +329,26 @@
 #define ESR_ELx_CP15_32_ISS_SYS_CNTFRQ	(ESR_ELx_CP15_32_ISS_SYS_VAL(0, 0, 14, 0) |\
					 ESR_ELx_CP15_32_ISS_DIR_READ)
+/*
+ * ISS values for SME traps
+ */
+#define ESR_ELx_SME_ISS_SME_DISABLED	0
+#define ESR_ELx_SME_ISS_ILL		1
+#define ESR_ELx_SME_ISS_SM_DISABLED	2
+#define ESR_ELx_SME_ISS_ZA_DISABLED	3
 #ifndef __ASSEMBLY__
 #include <asm/types.h>
-static inline bool esr_is_data_abort(u32 esr)
+static inline bool esr_is_data_abort(unsigned long esr)
 {
-	const u32 ec = ESR_ELx_EC(esr);
+	const unsigned long ec = ESR_ELx_EC(esr);
 	return ec == ESR_ELx_EC_DABT_LOW || ec == ESR_ELx_EC_DABT_CUR;
 }
-const char *esr_get_class_string(u32 esr);
+const char *esr_get_class_string(unsigned long esr);
 #endif /* __ASSEMBLY */
 #endif /* __ASM_ESR_H */
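For orientation only: the ESR_ELx_ISS() helper and the ESR_ELx_SME_ISS_* values added above let a handler for the new EC value ESR_ELx_EC_SME tell why an SME trap fired. The sketch below (ours, not the kernel's actual do_sme_acc()) assumes the exception class has already been checked:

  /* Illustrative decode of an SME trap syndrome; not kernel code. */
  static const char *sme_trap_reason(unsigned long esr)
  {
          switch (ESR_ELx_ISS(esr)) {
          case ESR_ELx_SME_ISS_SME_DISABLED:
                  return "SME trapped by CPACR_EL1/CPTR_EL2 controls";
          case ESR_ELx_SME_ISS_ILL:
                  return "instruction illegal in the current SME state";
          case ESR_ELx_SME_ISS_SM_DISABLED:
                  return "instruction requires PSTATE.SM to be set";
          case ESR_ELx_SME_ISS_ZA_DISABLED:
                  return "instruction requires PSTATE.ZA to be set";
          default:
                  return "unknown SME trap syndrome";
          }
  }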
@@ -19,9 +19,9 @@
 #define __exception_irq_entry	__kprobes
 #endif
-static inline u32 disr_to_esr(u64 disr)
+static inline unsigned long disr_to_esr(u64 disr)
 {
-	unsigned int esr = ESR_ELx_EC_SERROR << ESR_ELx_EC_SHIFT;
+	unsigned long esr = ESR_ELx_EC_SERROR << ESR_ELx_EC_SHIFT;
 	if ((disr & DISR_EL1_IDS) == 0)
 		esr |= (disr & DISR_EL1_ESR_MASK);
@@ -57,23 +57,24 @@ asmlinkage void call_on_irq_stack(struct pt_regs *regs,
			void (*func)(struct pt_regs *));
 asmlinkage void asm_exit_to_user_mode(struct pt_regs *regs);
-void do_mem_abort(unsigned long far, unsigned int esr, struct pt_regs *regs);
+void do_mem_abort(unsigned long far, unsigned long esr, struct pt_regs *regs);
 void do_undefinstr(struct pt_regs *regs);
 void do_bti(struct pt_regs *regs);
-void do_debug_exception(unsigned long addr_if_watchpoint, unsigned int esr,
+void do_debug_exception(unsigned long addr_if_watchpoint, unsigned long esr,
			struct pt_regs *regs);
-void do_fpsimd_acc(unsigned int esr, struct pt_regs *regs);
-void do_sve_acc(unsigned int esr, struct pt_regs *regs);
-void do_fpsimd_exc(unsigned int esr, struct pt_regs *regs);
-void do_sysinstr(unsigned int esr, struct pt_regs *regs);
-void do_sp_pc_abort(unsigned long addr, unsigned int esr, struct pt_regs *regs);
-void bad_el0_sync(struct pt_regs *regs, int reason, unsigned int esr);
-void do_cp15instr(unsigned int esr, struct pt_regs *regs);
+void do_fpsimd_acc(unsigned long esr, struct pt_regs *regs);
+void do_sve_acc(unsigned long esr, struct pt_regs *regs);
+void do_sme_acc(unsigned long esr, struct pt_regs *regs);
+void do_fpsimd_exc(unsigned long esr, struct pt_regs *regs);
+void do_sysinstr(unsigned long esr, struct pt_regs *regs);
+void do_sp_pc_abort(unsigned long addr, unsigned long esr, struct pt_regs *regs);
+void bad_el0_sync(struct pt_regs *regs, int reason, unsigned long esr);
+void do_cp15instr(unsigned long esr, struct pt_regs *regs);
 void do_el0_svc(struct pt_regs *regs);
 void do_el0_svc_compat(struct pt_regs *regs);
-void do_ptrauth_fault(struct pt_regs *regs, unsigned int esr);
-void do_serror(struct pt_regs *regs, unsigned int esr);
+void do_ptrauth_fault(struct pt_regs *regs, unsigned long esr);
+void do_serror(struct pt_regs *regs, unsigned long esr);
 void do_notify_resume(struct pt_regs *regs, unsigned long thread_flags);
-void panic_bad_stack(struct pt_regs *regs, unsigned int esr, unsigned long far);
+void panic_bad_stack(struct pt_regs *regs, unsigned long esr, unsigned long far);
 #endif	/* __ASM_EXCEPTION_H */
@@ -32,6 +32,18 @@
 #define VFP_STATE_SIZE		((32 * 8) + 4)
 #endif
+/*
+ * When we defined the maximum SVE vector length we defined the ABI so
+ * that the maximum vector length included all the reserved for future
+ * expansion bits in ZCR rather than those just currently defined by
+ * the architecture. While SME follows a similar pattern the fact that
+ * it includes a square matrix means that any allocations that attempt
+ * to cover the maximum potential vector length (such as happen with
+ * the regset used for ptrace) end up being extremely large. Define
+ * the much lower actual limit for use in such situations.
+ */
+#define SME_VQ_MAX	16
 struct task_struct;
 extern void fpsimd_save_state(struct user_fpsimd_state *state);
@@ -46,11 +58,23 @@ extern void fpsimd_restore_current_state(void);
 extern void fpsimd_update_current_state(struct user_fpsimd_state const *state);
 extern void fpsimd_bind_state_to_cpu(struct user_fpsimd_state *state,
-				     void *sve_state, unsigned int sve_vl);
+				     void *sve_state, unsigned int sve_vl,
+				     void *za_state, unsigned int sme_vl,
+				     u64 *svcr);
 extern void fpsimd_flush_task_state(struct task_struct *target);
 extern void fpsimd_save_and_flush_cpu_state(void);
+static inline bool thread_sm_enabled(struct thread_struct *thread)
+{
+	return system_supports_sme() && (thread->svcr & SVCR_SM_MASK);
+}
+static inline bool thread_za_enabled(struct thread_struct *thread)
+{
+	return system_supports_sme() && (thread->svcr & SVCR_ZA_MASK);
+}
 /* Maximum VL that SVE/SME VL-agnostic software can transparently support */
 #define VL_ARCH_MAX 0x100
@@ -62,7 +86,14 @@ static inline size_t sve_ffr_offset(int vl)
 static inline void *sve_pffr(struct thread_struct *thread)
 {
-	return (char *)thread->sve_state + sve_ffr_offset(thread_get_sve_vl(thread));
+	unsigned int vl;
+	if (system_supports_sme() && thread_sm_enabled(thread))
+		vl = thread_get_sme_vl(thread);
+	else
+		vl = thread_get_sve_vl(thread);
+	return (char *)thread->sve_state + sve_ffr_offset(vl);
 }
 extern void sve_save_state(void *state, u32 *pfpsr, int save_ffr);
@@ -71,11 +102,17 @@ extern void sve_load_state(void const *state, u32 const *pfpsr,
 extern void sve_flush_live(bool flush_ffr, unsigned long vq_minus_1);
 extern unsigned int sve_get_vl(void);
 extern void sve_set_vq(unsigned long vq_minus_1);
+extern void sme_set_vq(unsigned long vq_minus_1);
+extern void za_save_state(void *state);
+extern void za_load_state(void const *state);
 struct arm64_cpu_capabilities;
 extern void sve_kernel_enable(const struct arm64_cpu_capabilities *__unused);
+extern void sme_kernel_enable(const struct arm64_cpu_capabilities *__unused);
+extern void fa64_kernel_enable(const struct arm64_cpu_capabilities *__unused);
 extern u64 read_zcr_features(void);
+extern u64 read_smcr_features(void);
 /*
  * Helpers to translate bit indices in sve_vq_map to VQ values (and
@@ -119,6 +156,7 @@ struct vl_info {
 extern void sve_alloc(struct task_struct *task);
 extern void fpsimd_release_task(struct task_struct *task);
 extern void fpsimd_sync_to_sve(struct task_struct *task);
+extern void fpsimd_force_sync_to_sve(struct task_struct *task);
 extern void sve_sync_to_fpsimd(struct task_struct *task);
 extern void sve_sync_from_fpsimd_zeropad(struct task_struct *task);
@@ -170,6 +208,12 @@ static inline void write_vl(enum vec_type type, u64 val)
		tmp = read_sysreg_s(SYS_ZCR_EL1) & ~ZCR_ELx_LEN_MASK;
		write_sysreg_s(tmp | val, SYS_ZCR_EL1);
		break;
+#endif
+#ifdef CONFIG_ARM64_SME
+	case ARM64_VEC_SME:
+		tmp = read_sysreg_s(SYS_SMCR_EL1) & ~SMCR_ELx_LEN_MASK;
+		write_sysreg_s(tmp | val, SYS_SMCR_EL1);
+		break;
 #endif
	default:
		WARN_ON_ONCE(1);
@@ -208,6 +252,8 @@ static inline bool sve_vq_available(unsigned int vq)
	return vq_available(ARM64_VEC_SVE, vq);
 }
+size_t sve_state_size(struct task_struct const *task);
 #else /* ! CONFIG_ARM64_SVE */
 static inline void sve_alloc(struct task_struct *task) { }
@@ -247,8 +293,93 @@ static inline void vec_update_vq_map(enum vec_type t) { }
 static inline int vec_verify_vq_map(enum vec_type t) { return 0; }
 static inline void sve_setup(void) { }
+static inline size_t sve_state_size(struct task_struct const *task)
+{
+	return 0;
+}
 #endif /* ! CONFIG_ARM64_SVE */
+#ifdef CONFIG_ARM64_SME
+static inline void sme_user_disable(void)
+{
+	sysreg_clear_set(cpacr_el1, CPACR_EL1_SMEN_EL0EN, 0);
+}
+static inline void sme_user_enable(void)
+{
+	sysreg_clear_set(cpacr_el1, 0, CPACR_EL1_SMEN_EL0EN);
+}
+static inline void sme_smstart_sm(void)
+{
+	asm volatile(__msr_s(SYS_SVCR_SMSTART_SM_EL0, "xzr"));
+}
+static inline void sme_smstop_sm(void)
+{
+	asm volatile(__msr_s(SYS_SVCR_SMSTOP_SM_EL0, "xzr"));
+}
+static inline void sme_smstop(void)
+{
+	asm volatile(__msr_s(SYS_SVCR_SMSTOP_SMZA_EL0, "xzr"));
+}
+extern void __init sme_setup(void);
+static inline int sme_max_vl(void)
+{
+	return vec_max_vl(ARM64_VEC_SME);
+}
+static inline int sme_max_virtualisable_vl(void)
+{
+	return vec_max_virtualisable_vl(ARM64_VEC_SME);
+}
+extern void sme_alloc(struct task_struct *task);
+extern unsigned int sme_get_vl(void);
+extern int sme_set_current_vl(unsigned long arg);
+extern int sme_get_current_vl(void);
+/*
+ * Return how many bytes of memory are required to store the full SME
+ * specific state (currently just ZA) for task, given task's currently
+ * configured vector length.
+ */
+static inline size_t za_state_size(struct task_struct const *task)
+{
+	unsigned int vl = task_get_sme_vl(task);
+	return ZA_SIG_REGS_SIZE(sve_vq_from_vl(vl));
+}
+#else
+static inline void sme_user_disable(void) { BUILD_BUG(); }
+static inline void sme_user_enable(void) { BUILD_BUG(); }
+static inline void sme_smstart_sm(void) { }
+static inline void sme_smstop_sm(void) { }
+static inline void sme_smstop(void) { }
+static inline void sme_alloc(struct task_struct *task) { }
+static inline void sme_setup(void) { }
+static inline unsigned int sme_get_vl(void) { return 0; }
+static inline int sme_max_vl(void) { return 0; }
+static inline int sme_max_virtualisable_vl(void) { return 0; }
+static inline int sme_set_current_vl(unsigned long arg) { return -EINVAL; }
+static inline int sme_get_current_vl(void) { return -EINVAL; }
+static inline size_t za_state_size(struct task_struct const *task)
+{
+	return 0;
+}
+#endif /* ! CONFIG_ARM64_SME */
 /* For use by EFI runtime services calls only */
 extern void __efi_fpsimd_begin(void);
 extern void __efi_fpsimd_end(void);
......
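To put numbers on the SME_VQ_MAX comment above (our arithmetic, assuming the existing uapi ceiling of SVE_VQ_MAX = 512 for the reserved LEN encoding): ZA is a square tile of (vq * 16) x (vq * 16) bytes, so sizing a ptrace regset for the reserved maximum would mean (512 * 16)^2 bytes = 64 MiB per thread, while capping at SME_VQ_MAX = 16 (a 2048-bit streaming vector length) bounds it to (16 * 16)^2 bytes = 64 KiB.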
@@ -93,6 +93,12 @@
 .endif
 .endm
+.macro _sme_check_wv v
+.if (\v) < 12 || (\v) > 15
+	.error "Bad vector select register \v."
+.endif
+.endm
 /* SVE instruction encodings for non-SVE-capable assemblers */
 /* (pre binutils 2.28, all kernel capable clang versions support SVE) */
@@ -174,6 +180,54 @@
		| (\np)
 .endm
+/* SME instruction encodings for non-SME-capable assemblers */
+/* (pre binutils 2.38/LLVM 13) */
+/* RDSVL X\nx, #\imm */
+.macro _sme_rdsvl nx, imm
+	_check_general_reg \nx
+	_check_num (\imm), -0x20, 0x1f
+	.inst	0x04bf5800			\
+		| (\nx)				\
+		| (((\imm) & 0x3f) << 5)
+.endm
+/*
+ * STR (vector from ZA array):
+ *	STR ZA[\nw, #\offset], [X\nxbase, #\offset, MUL VL]
+ */
+.macro _sme_str_zav nw, nxbase, offset=0
+	_sme_check_wv \nw
+	_check_general_reg \nxbase
+	_check_num (\offset), -0x100, 0xff
+	.inst	0xe1200000			\
+		| (((\nw) & 3) << 13)		\
+		| ((\nxbase) << 5)		\
+		| ((\offset) & 7)
+.endm
+/*
+ * LDR (vector to ZA array):
+ *	LDR ZA[\nw, #\offset], [X\nxbase, #\offset, MUL VL]
+ */
+.macro _sme_ldr_zav nw, nxbase, offset=0
+	_sme_check_wv \nw
+	_check_general_reg \nxbase
+	_check_num (\offset), -0x100, 0xff
+	.inst	0xe1000000			\
+		| (((\nw) & 3) << 13)		\
+		| ((\nxbase) << 5)		\
+		| ((\offset) & 7)
+.endm
+/*
+ * Zero the entire ZA array
+ *	ZERO ZA
+ */
+.macro zero_za
+	.inst 0xc00800ff
+.endm
 .macro __for from:req, to:req
 .if (\from) == (\to)
	_for__body %\from
@@ -208,6 +262,17 @@
 921:
 .endm
+/* Update SMCR_EL1.LEN with the new VQ */
+.macro sme_load_vq xvqminus1, xtmp, xtmp2
+	mrs_s	\xtmp, SYS_SMCR_EL1
+	bic	\xtmp2, \xtmp, SMCR_ELx_LEN_MASK
+	orr	\xtmp2, \xtmp2, \xvqminus1
+	cmp	\xtmp2, \xtmp
+	b.eq	921f
+	msr_s	SYS_SMCR_EL1, \xtmp2	//self-synchronising
+921:
+.endm
 /* Preserve the first 128-bits of Znz and zero the rest. */
 .macro _sve_flush_z nz
	_sve_check_zreg \nz
@@ -254,3 +319,25 @@
		ldr	w\nxtmp, [\xpfpsr, #4]
		msr	fpcr, x\nxtmp
 .endm
+.macro sme_save_za nxbase, xvl, nw
+	mov	w\nw, #0
+423:
+	_sme_str_zav \nw, \nxbase
+	add	x\nxbase, x\nxbase, \xvl
+	add	x\nw, x\nw, #1
+	cmp	\xvl, x\nw
+	bne	423b
+.endm
+.macro sme_load_za nxbase, xvl, nw
+	mov	w\nw, #0
+423:
+	_sme_ldr_zav \nw, \nxbase
+	add	x\nxbase, x\nxbase, \xvl
+	add	x\nw, x\nw, #1
+	cmp	\xvl, x\nw
+	bne	423b
+.endm
@@ -80,8 +80,15 @@ static inline unsigned long ftrace_call_adjust(unsigned long addr)
 #ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
 struct dyn_ftrace;
+struct ftrace_ops;
+struct ftrace_regs;
 int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec);
 #define ftrace_init_nop ftrace_init_nop
+void ftrace_graph_func(unsigned long ip, unsigned long parent_ip,
+		       struct ftrace_ops *op, struct ftrace_regs *fregs);
+#define ftrace_graph_func ftrace_graph_func
 #endif
 #define ftrace_return_address(n) return_address(n)
......
@@ -44,6 +44,8 @@ extern void huge_ptep_clear_flush(struct vm_area_struct *vma,
 #define __HAVE_ARCH_HUGE_PTE_CLEAR
 extern void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
			   pte_t *ptep, unsigned long sz);
+#define __HAVE_ARCH_HUGE_PTEP_GET
+extern pte_t huge_ptep_get(pte_t *ptep);
 extern void set_huge_swap_pte_at(struct mm_struct *mm, unsigned long addr,
				 pte_t *ptep, pte_t pte, unsigned long sz);
 #define set_huge_swap_pte_at set_huge_swap_pte_at
......
@@ -109,6 +109,14 @@
 #define KERNEL_HWCAP_AFP	__khwcap2_feature(AFP)
 #define KERNEL_HWCAP_RPRES	__khwcap2_feature(RPRES)
 #define KERNEL_HWCAP_MTE3	__khwcap2_feature(MTE3)
+#define KERNEL_HWCAP_SME	__khwcap2_feature(SME)
+#define KERNEL_HWCAP_SME_I16I64	__khwcap2_feature(SME_I16I64)
+#define KERNEL_HWCAP_SME_F64F64	__khwcap2_feature(SME_F64F64)
+#define KERNEL_HWCAP_SME_I8I32	__khwcap2_feature(SME_I8I32)
+#define KERNEL_HWCAP_SME_F16F32	__khwcap2_feature(SME_F16F32)
+#define KERNEL_HWCAP_SME_B16F32	__khwcap2_feature(SME_B16F32)
+#define KERNEL_HWCAP_SME_F32F32	__khwcap2_feature(SME_F32F32)
+#define KERNEL_HWCAP_SME_FA64	__khwcap2_feature(SME_FA64)
 /*
  * This yields a mask that user programs can use to figure out what
......
@@ -279,6 +279,7 @@
 #define CPTR_EL2_TCPAC	(1U << 31)
 #define CPTR_EL2_TAM	(1 << 30)
 #define CPTR_EL2_TTA	(1 << 20)
+#define CPTR_EL2_TSM	(1 << 12)
 #define CPTR_EL2_TFP	(1 << CPTR_EL2_TFP_SHIFT)
 #define CPTR_EL2_TZ	(1 << 8)
 #define CPTR_NVHE_EL2_RES1	0x000032ff /* known RES1 bits in CPTR_EL2 (nVHE) */
......
@@ -236,14 +236,14 @@ static inline bool vcpu_mode_priv(const struct kvm_vcpu *vcpu)
	return mode != PSR_MODE_EL0t;
 }
-static __always_inline u32 kvm_vcpu_get_esr(const struct kvm_vcpu *vcpu)
+static __always_inline u64 kvm_vcpu_get_esr(const struct kvm_vcpu *vcpu)
 {
	return vcpu->arch.fault.esr_el2;
 }
 static __always_inline int kvm_vcpu_get_condition(const struct kvm_vcpu *vcpu)
 {
-	u32 esr = kvm_vcpu_get_esr(vcpu);
+	u64 esr = kvm_vcpu_get_esr(vcpu);
	if (esr & ESR_ELx_CV)
		return (esr & ESR_ELx_COND_MASK) >> ESR_ELx_COND_SHIFT;
@@ -374,7 +374,7 @@ static __always_inline bool kvm_vcpu_abt_issea(const struct kvm_vcpu *vcpu)
 static __always_inline int kvm_vcpu_sys_get_rt(struct kvm_vcpu *vcpu)
 {
-	u32 esr = kvm_vcpu_get_esr(vcpu);
+	u64 esr = kvm_vcpu_get_esr(vcpu);
	return ESR_ELx_SYS64_ISS_RT(esr);
 }
......
...@@ -153,7 +153,7 @@ struct kvm_arch { ...@@ -153,7 +153,7 @@ struct kvm_arch {
}; };
struct kvm_vcpu_fault_info { struct kvm_vcpu_fault_info {
u32 esr_el2; /* Hyp Syndrom Register */ u64 esr_el2; /* Hyp Syndrom Register */
u64 far_el2; /* Hyp Fault Address Register */ u64 far_el2; /* Hyp Fault Address Register */
u64 hpfar_el2; /* Hyp IPA Fault Address Register */ u64 hpfar_el2; /* Hyp IPA Fault Address Register */
u64 disr_el1; /* Deferred [SError] Status Register */ u64 disr_el1; /* Deferred [SError] Status Register */
...@@ -295,8 +295,11 @@ struct vcpu_reset_state { ...@@ -295,8 +295,11 @@ struct vcpu_reset_state {
struct kvm_vcpu_arch { struct kvm_vcpu_arch {
struct kvm_cpu_context ctxt; struct kvm_cpu_context ctxt;
/* Guest floating point state */
void *sve_state; void *sve_state;
unsigned int sve_max_vl; unsigned int sve_max_vl;
u64 svcr;
/* Stage 2 paging state used by the hardware on next switch */ /* Stage 2 paging state used by the hardware on next switch */
struct kvm_s2_mmu *hw_mmu; struct kvm_s2_mmu *hw_mmu;
...@@ -451,6 +454,7 @@ struct kvm_vcpu_arch { ...@@ -451,6 +454,7 @@ struct kvm_vcpu_arch {
#define KVM_ARM64_DEBUG_STATE_SAVE_TRBE (1 << 13) /* Save TRBE context if active */ #define KVM_ARM64_DEBUG_STATE_SAVE_TRBE (1 << 13) /* Save TRBE context if active */
#define KVM_ARM64_FP_FOREIGN_FPSTATE (1 << 14) #define KVM_ARM64_FP_FOREIGN_FPSTATE (1 << 14)
#define KVM_ARM64_ON_UNSUPPORTED_CPU (1 << 15) /* Physical CPU not in supported_cpus */ #define KVM_ARM64_ON_UNSUPPORTED_CPU (1 << 15) /* Physical CPU not in supported_cpus */
#define KVM_ARM64_HOST_SME_ENABLED (1 << 16) /* SME enabled for EL0 */
#define KVM_GUESTDBG_VALID_MASK (KVM_GUESTDBG_ENABLE | \ #define KVM_GUESTDBG_VALID_MASK (KVM_GUESTDBG_ENABLE | \
KVM_GUESTDBG_USE_SW_BP | \ KVM_GUESTDBG_USE_SW_BP | \
......
...@@ -14,7 +14,7 @@ ...@@ -14,7 +14,7 @@
* Was this synchronous external abort a RAS notification? * Was this synchronous external abort a RAS notification?
* Returns '0' for errors handled by some RAS subsystem, or -ENOENT. * Returns '0' for errors handled by some RAS subsystem, or -ENOENT.
*/ */
static inline int kvm_handle_guest_sea(phys_addr_t addr, unsigned int esr) static inline int kvm_handle_guest_sea(phys_addr_t addr, u64 esr)
{ {
/* apei_claim_sea(NULL) expects to mask interrupts itself */ /* apei_claim_sea(NULL) expects to mask interrupts itself */
lockdep_assert_irqs_enabled(); lockdep_assert_irqs_enabled();
......
...@@ -47,6 +47,7 @@ long set_mte_ctrl(struct task_struct *task, unsigned long arg); ...@@ -47,6 +47,7 @@ long set_mte_ctrl(struct task_struct *task, unsigned long arg);
long get_mte_ctrl(struct task_struct *task); long get_mte_ctrl(struct task_struct *task);
int mte_ptrace_copy_tags(struct task_struct *child, long request, int mte_ptrace_copy_tags(struct task_struct *child, long request,
unsigned long addr, unsigned long data); unsigned long addr, unsigned long data);
size_t mte_probe_user_range(const char __user *uaddr, size_t size);
#else /* CONFIG_ARM64_MTE */ #else /* CONFIG_ARM64_MTE */
......
...@@ -49,7 +49,7 @@ ...@@ -49,7 +49,7 @@
#define PMD_SHIFT ARM64_HW_PGTABLE_LEVEL_SHIFT(2) #define PMD_SHIFT ARM64_HW_PGTABLE_LEVEL_SHIFT(2)
#define PMD_SIZE (_AC(1, UL) << PMD_SHIFT) #define PMD_SIZE (_AC(1, UL) << PMD_SHIFT)
#define PMD_MASK (~(PMD_SIZE-1)) #define PMD_MASK (~(PMD_SIZE-1))
#define PTRS_PER_PMD PTRS_PER_PTE #define PTRS_PER_PMD (1 << (PAGE_SHIFT - 3))
#endif #endif
/* /*
...@@ -59,7 +59,7 @@ ...@@ -59,7 +59,7 @@
#define PUD_SHIFT ARM64_HW_PGTABLE_LEVEL_SHIFT(1) #define PUD_SHIFT ARM64_HW_PGTABLE_LEVEL_SHIFT(1)
#define PUD_SIZE (_AC(1, UL) << PUD_SHIFT) #define PUD_SIZE (_AC(1, UL) << PUD_SHIFT)
#define PUD_MASK (~(PUD_SIZE-1)) #define PUD_MASK (~(PUD_SIZE-1))
#define PTRS_PER_PUD PTRS_PER_PTE #define PTRS_PER_PUD (1 << (PAGE_SHIFT - 3))
#endif #endif
/* /*
......
...@@ -1001,7 +1001,8 @@ static inline void update_mmu_cache(struct vm_area_struct *vma, ...@@ -1001,7 +1001,8 @@ static inline void update_mmu_cache(struct vm_area_struct *vma,
*/ */
static inline bool arch_faults_on_old_pte(void) static inline bool arch_faults_on_old_pte(void)
{ {
WARN_ON(preemptible()); /* The register read below requires a stable CPU to make any sense */
cant_migrate();
return !cpu_has_hw_af(); return !cpu_has_hw_af();
} }
......
...@@ -118,6 +118,7 @@ struct debug_info { ...@@ -118,6 +118,7 @@ struct debug_info {
enum vec_type { enum vec_type {
ARM64_VEC_SVE = 0, ARM64_VEC_SVE = 0,
ARM64_VEC_SME,
ARM64_VEC_MAX, ARM64_VEC_MAX,
}; };
...@@ -153,6 +154,7 @@ struct thread_struct { ...@@ -153,6 +154,7 @@ struct thread_struct {
unsigned int fpsimd_cpu; unsigned int fpsimd_cpu;
void *sve_state; /* SVE registers, if any */ void *sve_state; /* SVE registers, if any */
void *za_state; /* ZA register, if any */
unsigned int vl[ARM64_VEC_MAX]; /* vector length */ unsigned int vl[ARM64_VEC_MAX]; /* vector length */
unsigned int vl_onexec[ARM64_VEC_MAX]; /* vl after next exec */ unsigned int vl_onexec[ARM64_VEC_MAX]; /* vl after next exec */
unsigned long fault_address; /* fault info */ unsigned long fault_address; /* fault info */
...@@ -168,6 +170,8 @@ struct thread_struct { ...@@ -168,6 +170,8 @@ struct thread_struct {
u64 mte_ctrl; u64 mte_ctrl;
#endif #endif
u64 sctlr_user; u64 sctlr_user;
u64 svcr;
u64 tpidr2_el0;
}; };
static inline unsigned int thread_get_vl(struct thread_struct *thread, static inline unsigned int thread_get_vl(struct thread_struct *thread,
...@@ -181,6 +185,19 @@ static inline unsigned int thread_get_sve_vl(struct thread_struct *thread) ...@@ -181,6 +185,19 @@ static inline unsigned int thread_get_sve_vl(struct thread_struct *thread)
return thread_get_vl(thread, ARM64_VEC_SVE); return thread_get_vl(thread, ARM64_VEC_SVE);
} }
static inline unsigned int thread_get_sme_vl(struct thread_struct *thread)
{
return thread_get_vl(thread, ARM64_VEC_SME);
}
static inline unsigned int thread_get_cur_vl(struct thread_struct *thread)
{
if (system_supports_sme() && (thread->svcr & SVCR_SM_MASK))
return thread_get_sme_vl(thread);
else
return thread_get_sve_vl(thread);
}
unsigned int task_get_vl(const struct task_struct *task, enum vec_type type); unsigned int task_get_vl(const struct task_struct *task, enum vec_type type);
void task_set_vl(struct task_struct *task, enum vec_type type, void task_set_vl(struct task_struct *task, enum vec_type type,
unsigned long vl); unsigned long vl);
...@@ -194,6 +211,11 @@ static inline unsigned int task_get_sve_vl(const struct task_struct *task) ...@@ -194,6 +211,11 @@ static inline unsigned int task_get_sve_vl(const struct task_struct *task)
return task_get_vl(task, ARM64_VEC_SVE); return task_get_vl(task, ARM64_VEC_SVE);
} }
static inline unsigned int task_get_sme_vl(const struct task_struct *task)
{
return task_get_vl(task, ARM64_VEC_SME);
}
static inline void task_set_sve_vl(struct task_struct *task, unsigned long vl) static inline void task_set_sve_vl(struct task_struct *task, unsigned long vl)
{ {
task_set_vl(task, ARM64_VEC_SVE, vl); task_set_vl(task, ARM64_VEC_SVE, vl);
...@@ -354,9 +376,11 @@ extern void __init minsigstksz_setup(void); ...@@ -354,9 +376,11 @@ extern void __init minsigstksz_setup(void);
*/ */
#include <asm/fpsimd.h> #include <asm/fpsimd.h>
/* Userspace interface for PR_SVE_{SET,GET}_VL prctl()s: */ /* Userspace interface for PR_S[MV]E_{SET,GET}_VL prctl()s: */
#define SVE_SET_VL(arg) sve_set_current_vl(arg) #define SVE_SET_VL(arg) sve_set_current_vl(arg)
#define SVE_GET_VL() sve_get_current_vl() #define SVE_GET_VL() sve_get_current_vl()
#define SME_SET_VL(arg) sme_set_current_vl(arg)
#define SME_GET_VL() sme_get_current_vl()
/* PR_PAC_RESET_KEYS prctl */ /* PR_PAC_RESET_KEYS prctl */
#define PAC_RESET_KEYS(tsk, arg) ptrauth_prctl_reset_keys(tsk, arg) #define PAC_RESET_KEYS(tsk, arg) ptrauth_prctl_reset_keys(tsk, arg)
......
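The SME_SET_VL()/SME_GET_VL() hooks above plumb the new SME vector length controls into prctl() alongside the existing SVE ones. A minimal userspace sketch, assuming a <linux/prctl.h> that already defines the PR_SME_SET_VL, PR_SME_GET_VL and PR_SME_VL_LEN_MASK constants from this series (illustrative only, not part of the patch):

#include <stdio.h>
#include <sys/prctl.h>	/* pulls in <linux/prctl.h> with the PR_SME_* constants */

int main(void)
{
	/* Request a 256-bit (32-byte) streaming vector length. */
	int ret = prctl(PR_SME_SET_VL, 32);

	if (ret < 0) {
		perror("PR_SME_SET_VL");
		return 1;
	}

	ret = prctl(PR_SME_GET_VL);
	printf("SME vector length: %d bytes\n", ret & PR_SME_VL_LEN_MASK);
	return 0;
}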
...@@ -31,38 +31,6 @@ struct stack_info { ...@@ -31,38 +31,6 @@ struct stack_info {
enum stack_type type; enum stack_type type;
}; };
/*
* A snapshot of a frame record or fp/lr register values, along with some
* accounting information necessary for robust unwinding.
*
* @fp: The fp value in the frame record (or the real fp)
* @pc: The lr value in the frame record (or the real lr)
*
* @stacks_done: Stacks which have been entirely unwound, for which it is no
* longer valid to unwind to.
*
* @prev_fp: The fp that pointed to this frame record, or a synthetic value
* of 0. This is used to ensure that within a stack, each
* subsequent frame record is at an increasing address.
* @prev_type: The type of stack this frame record was on, or a synthetic
* value of STACK_TYPE_UNKNOWN. This is used to detect a
* transition from one stack to another.
*
* @kr_cur: When KRETPROBES is selected, holds the kretprobe instance
* associated with the most recently encountered replacement lr
* value.
*/
struct stackframe {
unsigned long fp;
unsigned long pc;
DECLARE_BITMAP(stacks_done, __NR_STACK_TYPES);
unsigned long prev_fp;
enum stack_type prev_type;
#ifdef CONFIG_KRETPROBES
struct llist_node *kr_cur;
#endif
};
extern void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk, extern void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
const char *loglvl); const char *loglvl);
......
...@@ -23,9 +23,9 @@ void die(const char *msg, struct pt_regs *regs, int err); ...@@ -23,9 +23,9 @@ void die(const char *msg, struct pt_regs *regs, int err);
struct siginfo; struct siginfo;
void arm64_notify_die(const char *str, struct pt_regs *regs, void arm64_notify_die(const char *str, struct pt_regs *regs,
int signo, int sicode, unsigned long far, int signo, int sicode, unsigned long far,
int err); unsigned long err);
void hook_debug_fault_code(int nr, int (*fn)(unsigned long, unsigned int, void hook_debug_fault_code(int nr, int (*fn)(unsigned long, unsigned long,
struct pt_regs *), struct pt_regs *),
int sig, int code, const char *name); int sig, int code, const char *name);
......
...@@ -82,6 +82,8 @@ int arch_dup_task_struct(struct task_struct *dst, ...@@ -82,6 +82,8 @@ int arch_dup_task_struct(struct task_struct *dst,
#define TIF_SVE_VL_INHERIT 24 /* Inherit SVE vl_onexec across exec */ #define TIF_SVE_VL_INHERIT 24 /* Inherit SVE vl_onexec across exec */
#define TIF_SSBD 25 /* Wants SSB mitigation */ #define TIF_SSBD 25 /* Wants SSB mitigation */
#define TIF_TAGGED_ADDR 26 /* Allow tagged user addresses */ #define TIF_TAGGED_ADDR 26 /* Allow tagged user addresses */
#define TIF_SME 27 /* SME in use */
#define TIF_SME_VL_INHERIT 28 /* Inherit SME vl_onexec across exec */
#define _TIF_SIGPENDING (1 << TIF_SIGPENDING) #define _TIF_SIGPENDING (1 << TIF_SIGPENDING)
#define _TIF_NEED_RESCHED (1 << TIF_NEED_RESCHED) #define _TIF_NEED_RESCHED (1 << TIF_NEED_RESCHED)
......
...@@ -24,7 +24,7 @@ struct undef_hook { ...@@ -24,7 +24,7 @@ struct undef_hook {
void register_undef_hook(struct undef_hook *hook); void register_undef_hook(struct undef_hook *hook);
void unregister_undef_hook(struct undef_hook *hook); void unregister_undef_hook(struct undef_hook *hook);
void force_signal_inject(int signal, int code, unsigned long address, unsigned int err); void force_signal_inject(int signal, int code, unsigned long address, unsigned long err);
void arm64_notify_segfault(unsigned long addr); void arm64_notify_segfault(unsigned long addr);
void arm64_force_sig_fault(int signo, int code, unsigned long far, const char *str); void arm64_force_sig_fault(int signo, int code, unsigned long far, const char *str);
void arm64_force_sig_mceerr(int code, unsigned long far, short lsb, const char *str); void arm64_force_sig_mceerr(int code, unsigned long far, short lsb, const char *str);
...@@ -57,7 +57,7 @@ static inline int in_entry_text(unsigned long ptr) ...@@ -57,7 +57,7 @@ static inline int in_entry_text(unsigned long ptr)
* errors share the same encoding as an all-zeros encoding from a CPU that * errors share the same encoding as an all-zeros encoding from a CPU that
* doesn't support RAS. * doesn't support RAS.
*/ */
static inline bool arm64_is_ras_serror(u32 esr) static inline bool arm64_is_ras_serror(unsigned long esr)
{ {
WARN_ON(preemptible()); WARN_ON(preemptible());
...@@ -77,9 +77,9 @@ static inline bool arm64_is_ras_serror(u32 esr) ...@@ -77,9 +77,9 @@ static inline bool arm64_is_ras_serror(u32 esr)
* We treat them as Uncontainable. * We treat them as Uncontainable.
* Non-RAS SError's are reported as Uncontained/Uncategorized. * Non-RAS SError's are reported as Uncontained/Uncategorized.
*/ */
static inline u32 arm64_ras_serror_get_severity(u32 esr) static inline unsigned long arm64_ras_serror_get_severity(unsigned long esr)
{ {
u32 aet = esr & ESR_ELx_AET; unsigned long aet = esr & ESR_ELx_AET;
if (!arm64_is_ras_serror(esr)) { if (!arm64_is_ras_serror(esr)) {
/* Not a RAS error, we can't interpret the ESR. */ /* Not a RAS error, we can't interpret the ESR. */
...@@ -98,6 +98,6 @@ static inline u32 arm64_ras_serror_get_severity(u32 esr) ...@@ -98,6 +98,6 @@ static inline u32 arm64_ras_serror_get_severity(u32 esr)
return aet; return aet;
} }
bool arm64_is_fatal_ras_serror(struct pt_regs *regs, unsigned int esr); bool arm64_is_fatal_ras_serror(struct pt_regs *regs, unsigned long esr);
void __noreturn arm64_serror_panic(struct pt_regs *regs, u32 esr); void __noreturn arm64_serror_panic(struct pt_regs *regs, unsigned long esr);
#endif #endif
...@@ -460,4 +460,19 @@ static inline int __copy_from_user_flushcache(void *dst, const void __user *src, ...@@ -460,4 +460,19 @@ static inline int __copy_from_user_flushcache(void *dst, const void __user *src,
} }
#endif #endif
#ifdef CONFIG_ARCH_HAS_SUBPAGE_FAULTS
/*
* Return 0 on success, the number of bytes not probed otherwise.
*/
static inline size_t probe_subpage_writeable(const char __user *uaddr,
size_t size)
{
if (!system_supports_mte())
return 0;
return mte_probe_user_range(uaddr, size);
}
#endif /* CONFIG_ARCH_HAS_SUBPAGE_FAULTS */
#endif /* __ASM_UACCESS_H */ #endif /* __ASM_UACCESS_H */
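probe_subpage_writeable() above is the arm64 hook behind CONFIG_ARCH_HAS_SUBPAGE_FAULTS, used to resolve the btrfs search_ioctl() live-lock with MTE sub-page faults. A rough sketch of how a generic caller consumes it, modelled on fault_in_subpage_writeable() in mm/gup.c (illustrative, not the exact upstream code):

#include <linux/pagemap.h>	/* fault_in_writeable() */
#include <linux/uaccess.h>	/* probe_subpage_writeable() */

static size_t sketch_fault_in_subpage_writeable(char __user *uaddr, size_t size)
{
	size_t faulted_in;

	/* Fault the range in at page granularity first. */
	faulted_in = size - fault_in_writeable(uaddr, size);

	/*
	 * Then probe at sub-page (MTE tag granule) granularity; the probe
	 * returns the number of trailing bytes that are not accessible.
	 */
	if (faulted_in)
		faulted_in -= probe_subpage_writeable(uaddr, faulted_in);

	return size - faulted_in;	/* bytes NOT faulted in */
}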
...@@ -79,5 +79,13 @@ ...@@ -79,5 +79,13 @@
#define HWCAP2_AFP (1 << 20) #define HWCAP2_AFP (1 << 20)
#define HWCAP2_RPRES (1 << 21) #define HWCAP2_RPRES (1 << 21)
#define HWCAP2_MTE3 (1 << 22) #define HWCAP2_MTE3 (1 << 22)
#define HWCAP2_SME (1 << 23)
#define HWCAP2_SME_I16I64 (1 << 24)
#define HWCAP2_SME_F64F64 (1 << 25)
#define HWCAP2_SME_I8I32 (1 << 26)
#define HWCAP2_SME_F16F32 (1 << 27)
#define HWCAP2_SME_B16F32 (1 << 28)
#define HWCAP2_SME_F32F32 (1 << 29)
#define HWCAP2_SME_FA64 (1 << 30)
#endif /* _UAPI__ASM_HWCAP_H */ #endif /* _UAPI__ASM_HWCAP_H */
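The HWCAP2_SME* bits above reach userspace via AT_HWCAP2 (and as the matching "sme*" strings added to /proc/cpuinfo later in this diff). A small detection sketch, assuming UAPI headers that already carry these definitions:

#include <stdio.h>
#include <sys/auxv.h>	/* getauxval(), AT_HWCAP2 */
#include <asm/hwcap.h>	/* HWCAP2_SME* from this series */

int main(void)
{
	unsigned long hwcap2 = getauxval(AT_HWCAP2);

	if (!(hwcap2 & HWCAP2_SME)) {
		puts("SME not reported");
		return 0;
	}

	printf("SME present, FA64 %s, F64F64 %s\n",
	       (hwcap2 & HWCAP2_SME_FA64) ? "yes" : "no",
	       (hwcap2 & HWCAP2_SME_F64F64) ? "yes" : "no");
	return 0;
}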
...@@ -139,8 +139,10 @@ struct kvm_guest_debug_arch { ...@@ -139,8 +139,10 @@ struct kvm_guest_debug_arch {
__u64 dbg_wvr[KVM_ARM_MAX_DBG_REGS]; __u64 dbg_wvr[KVM_ARM_MAX_DBG_REGS];
}; };
#define KVM_DEBUG_ARCH_HSR_HIGH_VALID (1 << 0)
struct kvm_debug_exit_arch { struct kvm_debug_exit_arch {
__u32 hsr; __u32 hsr;
__u32 hsr_high; /* ESR_EL2[61:32] */
__u64 far; /* used for watchpoints */ __u64 far; /* used for watchpoints */
}; };
......
...@@ -109,7 +109,7 @@ struct user_hwdebug_state { ...@@ -109,7 +109,7 @@ struct user_hwdebug_state {
} dbg_regs[16]; } dbg_regs[16];
}; };
/* SVE/FP/SIMD state (NT_ARM_SVE) */ /* SVE/FP/SIMD state (NT_ARM_SVE & NT_ARM_SSVE) */
struct user_sve_header { struct user_sve_header {
__u32 size; /* total meaningful regset content in bytes */ __u32 size; /* total meaningful regset content in bytes */
...@@ -220,6 +220,7 @@ struct user_sve_header { ...@@ -220,6 +220,7 @@ struct user_sve_header {
(SVE_PT_SVE_PREG_OFFSET(vq, __SVE_NUM_PREGS) - \ (SVE_PT_SVE_PREG_OFFSET(vq, __SVE_NUM_PREGS) - \
SVE_PT_SVE_PREGS_OFFSET(vq)) SVE_PT_SVE_PREGS_OFFSET(vq))
/* For streaming mode SVE (SSVE) FFR must be read and written as zero */
#define SVE_PT_SVE_FFR_OFFSET(vq) \ #define SVE_PT_SVE_FFR_OFFSET(vq) \
(SVE_PT_REGS_OFFSET + __SVE_FFR_OFFSET(vq)) (SVE_PT_REGS_OFFSET + __SVE_FFR_OFFSET(vq))
...@@ -240,10 +241,12 @@ struct user_sve_header { ...@@ -240,10 +241,12 @@ struct user_sve_header {
- SVE_PT_SVE_OFFSET + (__SVE_VQ_BYTES - 1)) \ - SVE_PT_SVE_OFFSET + (__SVE_VQ_BYTES - 1)) \
/ __SVE_VQ_BYTES * __SVE_VQ_BYTES) / __SVE_VQ_BYTES * __SVE_VQ_BYTES)
#define SVE_PT_SIZE(vq, flags) \ #define SVE_PT_SIZE(vq, flags) \
(((flags) & SVE_PT_REGS_MASK) == SVE_PT_REGS_SVE ? \ (((flags) & SVE_PT_REGS_MASK) == SVE_PT_REGS_SVE ? \
SVE_PT_SVE_OFFSET + SVE_PT_SVE_SIZE(vq, flags) \ SVE_PT_SVE_OFFSET + SVE_PT_SVE_SIZE(vq, flags) \
: SVE_PT_FPSIMD_OFFSET + SVE_PT_FPSIMD_SIZE(vq, flags)) : ((((flags) & SVE_PT_REGS_MASK) == SVE_PT_REGS_FPSIMD ? \
SVE_PT_FPSIMD_OFFSET + SVE_PT_FPSIMD_SIZE(vq, flags) \
: SVE_PT_REGS_OFFSET)))
/* pointer authentication masks (NT_ARM_PAC_MASK) */ /* pointer authentication masks (NT_ARM_PAC_MASK) */
...@@ -265,6 +268,62 @@ struct user_pac_generic_keys { ...@@ -265,6 +268,62 @@ struct user_pac_generic_keys {
__uint128_t apgakey; __uint128_t apgakey;
}; };
/* ZA state (NT_ARM_ZA) */
struct user_za_header {
__u32 size; /* total meaningful regset content in bytes */
__u32 max_size; /* maximum possible size for this thread */
__u16 vl; /* current vector length */
__u16 max_vl; /* maximum possible vector length */
__u16 flags;
__u16 __reserved;
};
/*
* Common ZA_PT_* flags:
* These must be kept in sync with prctl interface in <linux/prctl.h>
*/
#define ZA_PT_VL_INHERIT ((1 << 17) /* PR_SME_VL_INHERIT */ >> 16)
#define ZA_PT_VL_ONEXEC ((1 << 18) /* PR_SME_SET_VL_ONEXEC */ >> 16)
/*
* The remainder of the ZA state follows struct user_za_header. The
* total size of the ZA state (including header) depends on the
* metadata in the header: ZA_PT_SIZE(vq, flags) gives the total size
* of the state in bytes, including the header.
*
* Refer to <asm/sigcontext.h> for details of how to pass the correct
* "vq" argument to these macros.
*/
/* Offset from the start of struct user_za_header to the register data */
#define ZA_PT_ZA_OFFSET \
((sizeof(struct user_za_header) + (__SVE_VQ_BYTES - 1)) \
/ __SVE_VQ_BYTES * __SVE_VQ_BYTES)
/*
* The payload starts at offset ZA_PT_ZA_OFFSET, and is of size
* ZA_PT_ZA_SIZE(vq, flags).
*
* The ZA array is stored as a sequence of horizontal vectors ZAV of SVL/8
* bytes each, starting from vector 0.
*
* Additional data might be appended in the future.
*
* The ZA matrix is represented in memory in an endianness-invariant layout
* which differs from the layout used for the FPSIMD V-registers on big-endian
* systems: see sigcontext.h for more explanation.
*/
#define ZA_PT_ZAV_OFFSET(vq, n) \
(ZA_PT_ZA_OFFSET + ((vq * __SVE_VQ_BYTES) * n))
#define ZA_PT_ZA_SIZE(vq) ((vq * __SVE_VQ_BYTES) * (vq * __SVE_VQ_BYTES))
#define ZA_PT_SIZE(vq) \
(ZA_PT_ZA_OFFSET + ZA_PT_ZA_SIZE(vq))
#endif /* __ASSEMBLY__ */ #endif /* __ASSEMBLY__ */
#endif /* _UAPI__ASM_PTRACE_H */ #endif /* _UAPI__ASM_PTRACE_H */
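The NT_ARM_ZA regset described above is a user_za_header followed, when ZA is live, by SVL/8 horizontal vectors of SVL/8 bytes each. A hedged userspace sketch of sizing and indexing such a buffer with the ZA_PT_*() macros (the vector length is only an example; assumes this series' UAPI headers):

#include <stdio.h>
#include <stdlib.h>
#include <asm/ptrace.h>		/* struct user_za_header, ZA_PT_*() macros */
#include <asm/sigcontext.h>	/* sve_vq_from_vl() */

int main(void)
{
	unsigned int vl = 64;			/* example: 512-bit streaming VL */
	unsigned int vq = sve_vq_from_vl(vl);	/* VL in 128-bit quadwords */
	size_t size = ZA_PT_SIZE(vq);		/* header + VL x VL byte matrix */
	void *buf = calloc(1, size);

	if (!buf)
		return 1;

	printf("NT_ARM_ZA buffer: %zu bytes, row 3 starts at offset %zu\n",
	       size, (size_t)ZA_PT_ZAV_OFFSET(vq, 3));

	free(buf);
	return 0;
}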
...@@ -132,6 +132,17 @@ struct extra_context { ...@@ -132,6 +132,17 @@ struct extra_context {
#define SVE_MAGIC 0x53564501 #define SVE_MAGIC 0x53564501
struct sve_context { struct sve_context {
struct _aarch64_ctx head;
__u16 vl;
__u16 flags;
__u16 __reserved[2];
};
#define SVE_SIG_FLAG_SM 0x1 /* Context describes streaming mode */
#define ZA_MAGIC 0x54366345
struct za_context {
struct _aarch64_ctx head; struct _aarch64_ctx head;
__u16 vl; __u16 vl;
__u16 __reserved[3]; __u16 __reserved[3];
...@@ -186,9 +197,16 @@ struct sve_context { ...@@ -186,9 +197,16 @@ struct sve_context {
* sve_context.vl must equal the thread's current vector length when * sve_context.vl must equal the thread's current vector length when
* doing a sigreturn. * doing a sigreturn.
* *
* On systems with support for SME the SVE register state may reflect either
* streaming or non-streaming mode. In streaming mode the streaming mode
* vector length will be used and the flag SVE_SIG_FLAG_SM will be set in
* the flags field. It is permitted to enter or leave streaming mode in
* a signal return, applications should take care to ensure that any difference
* in vector length between the two modes is handled, including any resizing
* and movement of context blocks.
* *
* Note: for all these macros, the "vq" argument denotes the SVE * Note: for all these macros, the "vq" argument denotes the vector length
* vector length in quadwords (i.e., units of 128 bits). * in quadwords (i.e., units of 128 bits).
* *
* The correct way to obtain vq is to use sve_vq_from_vl(vl). The * The correct way to obtain vq is to use sve_vq_from_vl(vl). The
* result is valid if and only if sve_vl_valid(vl) is true. This is * result is valid if and only if sve_vl_valid(vl) is true. This is
...@@ -249,4 +267,37 @@ struct sve_context { ...@@ -249,4 +267,37 @@ struct sve_context {
#define SVE_SIG_CONTEXT_SIZE(vq) \ #define SVE_SIG_CONTEXT_SIZE(vq) \
(SVE_SIG_REGS_OFFSET + SVE_SIG_REGS_SIZE(vq)) (SVE_SIG_REGS_OFFSET + SVE_SIG_REGS_SIZE(vq))
/*
* If the ZA register is enabled for the thread at signal delivery then,
* za_context.head.size >= ZA_SIG_CONTEXT_SIZE(sve_vq_from_vl(za_context.vl))
* and the register data may be accessed using the ZA_SIG_*() macros.
*
* If za_context.head.size < ZA_SIG_CONTEXT_SIZE(sve_vq_from_vl(za_context.vl))
* then ZA was not enabled for the thread and no register data was included,
* in which case the ZA_SIG_*() macros should not be used except for this check.
*
* The same convention applies when returning from a signal: a caller
* will need to remove or resize the za_context block if it wants to
* enable the ZA register when it was previously non-live or vice-versa.
* This may require the caller to allocate fresh memory and/or move other
* context blocks in the signal frame.
*
* Changing the vector length during signal return is not permitted:
* za_context.vl must equal the thread's current SME vector length when
* doing a sigreturn.
*/
#define ZA_SIG_REGS_OFFSET \
((sizeof(struct za_context) + (__SVE_VQ_BYTES - 1)) \
/ __SVE_VQ_BYTES * __SVE_VQ_BYTES)
#define ZA_SIG_REGS_SIZE(vq) ((vq * __SVE_VQ_BYTES) * (vq * __SVE_VQ_BYTES))
#define ZA_SIG_ZAV_OFFSET(vq, n) (ZA_SIG_REGS_OFFSET + \
(SVE_SIG_ZREG_SIZE(vq) * n))
#define ZA_SIG_CONTEXT_SIZE(vq) \
(ZA_SIG_REGS_OFFSET + ZA_SIG_REGS_SIZE(vq))
#endif /* _UAPI__ASM_SIGCONTEXT_H */ #endif /* _UAPI__ASM_SIGCONTEXT_H */
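Tying the za_context rules above together: a signal handler can walk the records in the mcontext __reserved area for ZA_MAGIC and use head.size to tell whether ZA was live at delivery. A hedged sketch follows; the walk is illustrative only (it ignores EXTRA_MAGIC spill-over and is not async-signal-safe), with the record layout taken from the header above:

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <ucontext.h>
#include <asm/sigcontext.h>	/* struct _aarch64_ctx, struct za_context, ZA_MAGIC */

static void handler(int sig, siginfo_t *info, void *ucontext)
{
	ucontext_t *uc = ucontext;
	struct _aarch64_ctx *head =
		(struct _aarch64_ctx *)uc->uc_mcontext.__reserved;

	(void)sig;
	(void)info;

	/* Records are laid out back to back; a zero magic terminates them. */
	while (head->magic) {
		if (head->magic == ZA_MAGIC) {
			struct za_context *za = (struct za_context *)head;
			int live = head->size >=
				ZA_SIG_CONTEXT_SIZE(sve_vq_from_vl(za->vl));

			fprintf(stderr, "za_context: vl=%u, ZA %s\n",
				(unsigned int)za->vl,
				live ? "live" : "not enabled");
		}
		head = (struct _aarch64_ctx *)((char *)head + head->size);
	}
}

int main(void)
{
	struct sigaction sa;

	memset(&sa, 0, sizeof(sa));
	sa.sa_sigaction = handler;
	sa.sa_flags = SA_SIGINFO;
	sigaction(SIGUSR1, &sa, NULL);
	raise(SIGUSR1);
	return 0;
}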
...@@ -217,7 +217,7 @@ static const struct arm64_cpu_capabilities arm64_repeat_tlbi_list[] = { ...@@ -217,7 +217,7 @@ static const struct arm64_cpu_capabilities arm64_repeat_tlbi_list[] = {
#endif #endif
#ifdef CONFIG_CAVIUM_ERRATUM_23154 #ifdef CONFIG_CAVIUM_ERRATUM_23154
const struct midr_range cavium_erratum_23154_cpus[] = { static const struct midr_range cavium_erratum_23154_cpus[] = {
MIDR_ALL_VERSIONS(MIDR_THUNDERX), MIDR_ALL_VERSIONS(MIDR_THUNDERX),
MIDR_ALL_VERSIONS(MIDR_THUNDERX_81XX), MIDR_ALL_VERSIONS(MIDR_THUNDERX_81XX),
MIDR_ALL_VERSIONS(MIDR_THUNDERX_83XX), MIDR_ALL_VERSIONS(MIDR_THUNDERX_83XX),
......
...@@ -98,6 +98,14 @@ static const char *const hwcap_str[] = { ...@@ -98,6 +98,14 @@ static const char *const hwcap_str[] = {
[KERNEL_HWCAP_AFP] = "afp", [KERNEL_HWCAP_AFP] = "afp",
[KERNEL_HWCAP_RPRES] = "rpres", [KERNEL_HWCAP_RPRES] = "rpres",
[KERNEL_HWCAP_MTE3] = "mte3", [KERNEL_HWCAP_MTE3] = "mte3",
[KERNEL_HWCAP_SME] = "sme",
[KERNEL_HWCAP_SME_I16I64] = "smei16i64",
[KERNEL_HWCAP_SME_F64F64] = "smef64f64",
[KERNEL_HWCAP_SME_I8I32] = "smei8i32",
[KERNEL_HWCAP_SME_F16F32] = "smef16f32",
[KERNEL_HWCAP_SME_B16F32] = "smeb16f32",
[KERNEL_HWCAP_SME_F32F32] = "smef32f32",
[KERNEL_HWCAP_SME_FA64] = "smefa64",
}; };
#ifdef CONFIG_COMPAT #ifdef CONFIG_COMPAT
...@@ -401,6 +409,7 @@ static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info) ...@@ -401,6 +409,7 @@ static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info)
info->reg_id_aa64pfr0 = read_cpuid(ID_AA64PFR0_EL1); info->reg_id_aa64pfr0 = read_cpuid(ID_AA64PFR0_EL1);
info->reg_id_aa64pfr1 = read_cpuid(ID_AA64PFR1_EL1); info->reg_id_aa64pfr1 = read_cpuid(ID_AA64PFR1_EL1);
info->reg_id_aa64zfr0 = read_cpuid(ID_AA64ZFR0_EL1); info->reg_id_aa64zfr0 = read_cpuid(ID_AA64ZFR0_EL1);
info->reg_id_aa64smfr0 = read_cpuid(ID_AA64SMFR0_EL1);
if (id_aa64pfr1_mte(info->reg_id_aa64pfr1)) if (id_aa64pfr1_mte(info->reg_id_aa64pfr1))
info->reg_gmid = read_cpuid(GMID_EL1); info->reg_gmid = read_cpuid(GMID_EL1);
...@@ -412,6 +421,10 @@ static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info) ...@@ -412,6 +421,10 @@ static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info)
id_aa64pfr0_sve(info->reg_id_aa64pfr0)) id_aa64pfr0_sve(info->reg_id_aa64pfr0))
info->reg_zcr = read_zcr_features(); info->reg_zcr = read_zcr_features();
if (IS_ENABLED(CONFIG_ARM64_SME) &&
id_aa64pfr1_sme(info->reg_id_aa64pfr1))
info->reg_smcr = read_smcr_features();
cpuinfo_detect_icache_policy(info); cpuinfo_detect_icache_policy(info);
} }
......
...@@ -202,7 +202,7 @@ void unregister_kernel_step_hook(struct step_hook *hook) ...@@ -202,7 +202,7 @@ void unregister_kernel_step_hook(struct step_hook *hook)
* So we call all the registered handlers, until the right handler is * So we call all the registered handlers, until the right handler is
* found which returns zero. * found which returns zero.
*/ */
static int call_step_hook(struct pt_regs *regs, unsigned int esr) static int call_step_hook(struct pt_regs *regs, unsigned long esr)
{ {
struct step_hook *hook; struct step_hook *hook;
struct list_head *list; struct list_head *list;
...@@ -238,7 +238,7 @@ static void send_user_sigtrap(int si_code) ...@@ -238,7 +238,7 @@ static void send_user_sigtrap(int si_code)
"User debug trap"); "User debug trap");
} }
static int single_step_handler(unsigned long unused, unsigned int esr, static int single_step_handler(unsigned long unused, unsigned long esr,
struct pt_regs *regs) struct pt_regs *regs)
{ {
bool handler_found = false; bool handler_found = false;
...@@ -299,11 +299,11 @@ void unregister_kernel_break_hook(struct break_hook *hook) ...@@ -299,11 +299,11 @@ void unregister_kernel_break_hook(struct break_hook *hook)
unregister_debug_hook(&hook->node); unregister_debug_hook(&hook->node);
} }
static int call_break_hook(struct pt_regs *regs, unsigned int esr) static int call_break_hook(struct pt_regs *regs, unsigned long esr)
{ {
struct break_hook *hook; struct break_hook *hook;
struct list_head *list; struct list_head *list;
int (*fn)(struct pt_regs *regs, unsigned int esr) = NULL; int (*fn)(struct pt_regs *regs, unsigned long esr) = NULL;
list = user_mode(regs) ? &user_break_hook : &kernel_break_hook; list = user_mode(regs) ? &user_break_hook : &kernel_break_hook;
...@@ -312,7 +312,7 @@ static int call_break_hook(struct pt_regs *regs, unsigned int esr) ...@@ -312,7 +312,7 @@ static int call_break_hook(struct pt_regs *regs, unsigned int esr)
* entirely not preemptible, and we can use rcu list safely here. * entirely not preemptible, and we can use rcu list safely here.
*/ */
list_for_each_entry_rcu(hook, list, node) { list_for_each_entry_rcu(hook, list, node) {
unsigned int comment = esr & ESR_ELx_BRK64_ISS_COMMENT_MASK; unsigned long comment = esr & ESR_ELx_BRK64_ISS_COMMENT_MASK;
if ((comment & ~hook->mask) == hook->imm) if ((comment & ~hook->mask) == hook->imm)
fn = hook->fn; fn = hook->fn;
...@@ -322,7 +322,7 @@ static int call_break_hook(struct pt_regs *regs, unsigned int esr) ...@@ -322,7 +322,7 @@ static int call_break_hook(struct pt_regs *regs, unsigned int esr)
} }
NOKPROBE_SYMBOL(call_break_hook); NOKPROBE_SYMBOL(call_break_hook);
static int brk_handler(unsigned long unused, unsigned int esr, static int brk_handler(unsigned long unused, unsigned long esr,
struct pt_regs *regs) struct pt_regs *regs)
{ {
if (call_break_hook(regs, esr) == DBG_HOOK_HANDLED) if (call_break_hook(regs, esr) == DBG_HOOK_HANDLED)
......
...@@ -282,13 +282,13 @@ extern void (*handle_arch_irq)(struct pt_regs *); ...@@ -282,13 +282,13 @@ extern void (*handle_arch_irq)(struct pt_regs *);
extern void (*handle_arch_fiq)(struct pt_regs *); extern void (*handle_arch_fiq)(struct pt_regs *);
static void noinstr __panic_unhandled(struct pt_regs *regs, const char *vector, static void noinstr __panic_unhandled(struct pt_regs *regs, const char *vector,
unsigned int esr) unsigned long esr)
{ {
arm64_enter_nmi(regs); arm64_enter_nmi(regs);
console_verbose(); console_verbose();
pr_crit("Unhandled %s exception on CPU%d, ESR 0x%08x -- %s\n", pr_crit("Unhandled %s exception on CPU%d, ESR 0x%016lx -- %s\n",
vector, smp_processor_id(), esr, vector, smp_processor_id(), esr,
esr_get_class_string(esr)); esr_get_class_string(esr));
...@@ -537,6 +537,14 @@ static void noinstr el0_sve_acc(struct pt_regs *regs, unsigned long esr) ...@@ -537,6 +537,14 @@ static void noinstr el0_sve_acc(struct pt_regs *regs, unsigned long esr)
exit_to_user_mode(regs); exit_to_user_mode(regs);
} }
static void noinstr el0_sme_acc(struct pt_regs *regs, unsigned long esr)
{
enter_from_user_mode(regs);
local_daif_restore(DAIF_PROCCTX);
do_sme_acc(esr, regs);
exit_to_user_mode(regs);
}
static void noinstr el0_fpsimd_exc(struct pt_regs *regs, unsigned long esr) static void noinstr el0_fpsimd_exc(struct pt_regs *regs, unsigned long esr)
{ {
enter_from_user_mode(regs); enter_from_user_mode(regs);
...@@ -645,6 +653,9 @@ asmlinkage void noinstr el0t_64_sync_handler(struct pt_regs *regs) ...@@ -645,6 +653,9 @@ asmlinkage void noinstr el0t_64_sync_handler(struct pt_regs *regs)
case ESR_ELx_EC_SVE: case ESR_ELx_EC_SVE:
el0_sve_acc(regs, esr); el0_sve_acc(regs, esr);
break; break;
case ESR_ELx_EC_SME:
el0_sme_acc(regs, esr);
break;
case ESR_ELx_EC_FP_EXC64: case ESR_ELx_EC_FP_EXC64:
el0_fpsimd_exc(regs, esr); el0_fpsimd_exc(regs, esr);
break; break;
...@@ -818,7 +829,7 @@ UNHANDLED(el0t, 32, error) ...@@ -818,7 +829,7 @@ UNHANDLED(el0t, 32, error)
#ifdef CONFIG_VMAP_STACK #ifdef CONFIG_VMAP_STACK
asmlinkage void noinstr handle_bad_stack(struct pt_regs *regs) asmlinkage void noinstr handle_bad_stack(struct pt_regs *regs)
{ {
unsigned int esr = read_sysreg(esr_el1); unsigned long esr = read_sysreg(esr_el1);
unsigned long far = read_sysreg(far_el1); unsigned long far = read_sysreg(far_el1);
arm64_enter_nmi(regs); arm64_enter_nmi(regs);
......
...@@ -86,3 +86,39 @@ SYM_FUNC_START(sve_flush_live) ...@@ -86,3 +86,39 @@ SYM_FUNC_START(sve_flush_live)
SYM_FUNC_END(sve_flush_live) SYM_FUNC_END(sve_flush_live)
#endif /* CONFIG_ARM64_SVE */ #endif /* CONFIG_ARM64_SVE */
#ifdef CONFIG_ARM64_SME
SYM_FUNC_START(sme_get_vl)
_sme_rdsvl 0, 1
ret
SYM_FUNC_END(sme_get_vl)
SYM_FUNC_START(sme_set_vq)
sme_load_vq x0, x1, x2
ret
SYM_FUNC_END(sme_set_vq)
/*
* Save the SME state
*
* x0 - pointer to buffer for state
*/
SYM_FUNC_START(za_save_state)
_sme_rdsvl 1, 1 // x1 = VL/8
sme_save_za 0, x1, 12
ret
SYM_FUNC_END(za_save_state)
/*
* Load the SME state
*
* x0 - pointer to buffer for state
*/
SYM_FUNC_START(za_load_state)
_sme_rdsvl 1, 1 // x1 = VL/8
sme_load_za 0, x1, 12
ret
SYM_FUNC_END(za_load_state)
#endif /* CONFIG_ARM64_SME */
...@@ -97,12 +97,6 @@ SYM_CODE_START(ftrace_common) ...@@ -97,12 +97,6 @@ SYM_CODE_START(ftrace_common)
SYM_INNER_LABEL(ftrace_call, SYM_L_GLOBAL) SYM_INNER_LABEL(ftrace_call, SYM_L_GLOBAL)
bl ftrace_stub bl ftrace_stub
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
SYM_INNER_LABEL(ftrace_graph_call, SYM_L_GLOBAL) // ftrace_graph_caller();
nop // If enabled, this will be replaced
// "b ftrace_graph_caller"
#endif
/* /*
* At the callsite x0-x8 and x19-x30 were live. Any C code will have preserved * At the callsite x0-x8 and x19-x30 were live. Any C code will have preserved
* x19-x29 per the AAPCS, and we created frame records upon entry, so we need * x19-x29 per the AAPCS, and we created frame records upon entry, so we need
...@@ -127,17 +121,6 @@ ftrace_common_return: ...@@ -127,17 +121,6 @@ ftrace_common_return:
ret x9 ret x9
SYM_CODE_END(ftrace_common) SYM_CODE_END(ftrace_common)
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
SYM_CODE_START(ftrace_graph_caller)
ldr x0, [sp, #S_PC]
sub x0, x0, #AARCH64_INSN_SIZE // ip (callsite's BL insn)
add x1, sp, #S_LR // parent_ip (callsite's LR)
ldr x2, [sp, #PT_REGS_SIZE] // parent fp (callsite's FP)
bl prepare_ftrace_return
b ftrace_common_return
SYM_CODE_END(ftrace_graph_caller)
#endif
#else /* CONFIG_DYNAMIC_FTRACE_WITH_REGS */ #else /* CONFIG_DYNAMIC_FTRACE_WITH_REGS */
/* /*
......
...@@ -268,6 +268,22 @@ void prepare_ftrace_return(unsigned long self_addr, unsigned long *parent, ...@@ -268,6 +268,22 @@ void prepare_ftrace_return(unsigned long self_addr, unsigned long *parent,
} }
#ifdef CONFIG_DYNAMIC_FTRACE #ifdef CONFIG_DYNAMIC_FTRACE
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
void ftrace_graph_func(unsigned long ip, unsigned long parent_ip,
struct ftrace_ops *op, struct ftrace_regs *fregs)
{
/*
* When DYNAMIC_FTRACE_WITH_REGS is selected, `fregs` can never be NULL
* and arch_ftrace_get_regs(fregs) will always give a non-NULL pt_regs
* in which we can safely modify the LR.
*/
struct pt_regs *regs = arch_ftrace_get_regs(fregs);
unsigned long *parent = (unsigned long *)&procedure_link_pointer(regs);
prepare_ftrace_return(ip, parent, frame_pointer(regs));
}
#else
/* /*
* Turn on/off the call to ftrace_graph_caller() in ftrace_caller() * Turn on/off the call to ftrace_graph_caller() in ftrace_caller()
* depending on @enable. * depending on @enable.
...@@ -297,5 +313,6 @@ int ftrace_disable_ftrace_graph_caller(void) ...@@ -297,5 +313,6 @@ int ftrace_disable_ftrace_graph_caller(void)
{ {
return ftrace_modify_graph_caller(false); return ftrace_modify_graph_caller(false);
} }
#endif /* CONFIG_DYNAMIC_FTRACE_WITH_REGS */
#endif /* CONFIG_DYNAMIC_FTRACE */ #endif /* CONFIG_DYNAMIC_FTRACE */
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */ #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
...@@ -617,7 +617,7 @@ NOKPROBE_SYMBOL(toggle_bp_registers); ...@@ -617,7 +617,7 @@ NOKPROBE_SYMBOL(toggle_bp_registers);
/* /*
* Debug exception handlers. * Debug exception handlers.
*/ */
static int breakpoint_handler(unsigned long unused, unsigned int esr, static int breakpoint_handler(unsigned long unused, unsigned long esr,
struct pt_regs *regs) struct pt_regs *regs)
{ {
int i, step = 0, *kernel_step; int i, step = 0, *kernel_step;
...@@ -751,7 +751,7 @@ static int watchpoint_report(struct perf_event *wp, unsigned long addr, ...@@ -751,7 +751,7 @@ static int watchpoint_report(struct perf_event *wp, unsigned long addr,
return step; return step;
} }
static int watchpoint_handler(unsigned long addr, unsigned int esr, static int watchpoint_handler(unsigned long addr, unsigned long esr,
struct pt_regs *regs) struct pt_regs *regs)
{ {
int i, step = 0, *kernel_step, access, closest_match = 0; int i, step = 0, *kernel_step, access, closest_match = 0;
......
...@@ -232,14 +232,14 @@ int kgdb_arch_handle_exception(int exception_vector, int signo, ...@@ -232,14 +232,14 @@ int kgdb_arch_handle_exception(int exception_vector, int signo,
return err; return err;
} }
static int kgdb_brk_fn(struct pt_regs *regs, unsigned int esr) static int kgdb_brk_fn(struct pt_regs *regs, unsigned long esr)
{ {
kgdb_handle_exception(1, SIGTRAP, 0, regs); kgdb_handle_exception(1, SIGTRAP, 0, regs);
return DBG_HOOK_HANDLED; return DBG_HOOK_HANDLED;
} }
NOKPROBE_SYMBOL(kgdb_brk_fn) NOKPROBE_SYMBOL(kgdb_brk_fn)
static int kgdb_compiled_brk_fn(struct pt_regs *regs, unsigned int esr) static int kgdb_compiled_brk_fn(struct pt_regs *regs, unsigned long esr)
{ {
compiled_break = 1; compiled_break = 1;
kgdb_handle_exception(1, SIGTRAP, 0, regs); kgdb_handle_exception(1, SIGTRAP, 0, regs);
...@@ -248,7 +248,7 @@ static int kgdb_compiled_brk_fn(struct pt_regs *regs, unsigned int esr) ...@@ -248,7 +248,7 @@ static int kgdb_compiled_brk_fn(struct pt_regs *regs, unsigned int esr)
} }
NOKPROBE_SYMBOL(kgdb_compiled_brk_fn); NOKPROBE_SYMBOL(kgdb_compiled_brk_fn);
static int kgdb_step_brk_fn(struct pt_regs *regs, unsigned int esr) static int kgdb_step_brk_fn(struct pt_regs *regs, unsigned long esr)
{ {
if (!kgdb_single_step) if (!kgdb_single_step)
return DBG_HOOK_ERROR; return DBG_HOOK_ERROR;
......
...@@ -329,8 +329,13 @@ bool crash_is_nosave(unsigned long pfn) ...@@ -329,8 +329,13 @@ bool crash_is_nosave(unsigned long pfn)
/* in reserved memory? */ /* in reserved memory? */
addr = __pfn_to_phys(pfn); addr = __pfn_to_phys(pfn);
if ((addr < crashk_res.start) || (crashk_res.end < addr)) if ((addr < crashk_res.start) || (crashk_res.end < addr)) {
return false; if (!crashk_low_res.end)
return false;
if ((addr < crashk_low_res.start) || (crashk_low_res.end < addr))
return false;
}
if (!kexec_crash_image) if (!kexec_crash_image)
return true; return true;
......
...@@ -65,10 +65,18 @@ static int prepare_elf_headers(void **addr, unsigned long *sz) ...@@ -65,10 +65,18 @@ static int prepare_elf_headers(void **addr, unsigned long *sz)
/* Exclude crashkernel region */ /* Exclude crashkernel region */
ret = crash_exclude_mem_range(cmem, crashk_res.start, crashk_res.end); ret = crash_exclude_mem_range(cmem, crashk_res.start, crashk_res.end);
if (ret)
goto out;
if (crashk_low_res.end) {
ret = crash_exclude_mem_range(cmem, crashk_low_res.start, crashk_low_res.end);
if (ret)
goto out;
}
if (!ret) ret = crash_prepare_elf64_headers(cmem, true, addr, sz);
ret = crash_prepare_elf64_headers(cmem, true, addr, sz);
out:
kfree(cmem); kfree(cmem);
return ret; return ret;
} }
......
...@@ -15,6 +15,7 @@ ...@@ -15,6 +15,7 @@
#include <linux/swapops.h> #include <linux/swapops.h>
#include <linux/thread_info.h> #include <linux/thread_info.h>
#include <linux/types.h> #include <linux/types.h>
#include <linux/uaccess.h>
#include <linux/uio.h> #include <linux/uio.h>
#include <asm/barrier.h> #include <asm/barrier.h>
...@@ -109,7 +110,8 @@ int memcmp_pages(struct page *page1, struct page *page2) ...@@ -109,7 +110,8 @@ int memcmp_pages(struct page *page1, struct page *page2)
static inline void __mte_enable_kernel(const char *mode, unsigned long tcf) static inline void __mte_enable_kernel(const char *mode, unsigned long tcf)
{ {
/* Enable MTE Sync Mode for EL1. */ /* Enable MTE Sync Mode for EL1. */
sysreg_clear_set(sctlr_el1, SCTLR_ELx_TCF_MASK, tcf); sysreg_clear_set(sctlr_el1, SCTLR_EL1_TCF_MASK,
SYS_FIELD_PREP(SCTLR_EL1, TCF, tcf));
isb(); isb();
pr_info_once("MTE: enabled in %s mode at EL1\n", mode); pr_info_once("MTE: enabled in %s mode at EL1\n", mode);
...@@ -125,12 +127,12 @@ void mte_enable_kernel_sync(void) ...@@ -125,12 +127,12 @@ void mte_enable_kernel_sync(void)
WARN_ONCE(system_uses_mte_async_or_asymm_mode(), WARN_ONCE(system_uses_mte_async_or_asymm_mode(),
"MTE async mode enabled system wide!"); "MTE async mode enabled system wide!");
__mte_enable_kernel("synchronous", SCTLR_ELx_TCF_SYNC); __mte_enable_kernel("synchronous", SCTLR_EL1_TCF_SYNC);
} }
void mte_enable_kernel_async(void) void mte_enable_kernel_async(void)
{ {
__mte_enable_kernel("asynchronous", SCTLR_ELx_TCF_ASYNC); __mte_enable_kernel("asynchronous", SCTLR_EL1_TCF_ASYNC);
/* /*
* MTE async mode is set system wide by the first PE that * MTE async mode is set system wide by the first PE that
...@@ -147,7 +149,7 @@ void mte_enable_kernel_async(void) ...@@ -147,7 +149,7 @@ void mte_enable_kernel_async(void)
void mte_enable_kernel_asymm(void) void mte_enable_kernel_asymm(void)
{ {
if (cpus_have_cap(ARM64_MTE_ASYMM)) { if (cpus_have_cap(ARM64_MTE_ASYMM)) {
__mte_enable_kernel("asymmetric", SCTLR_ELx_TCF_ASYMM); __mte_enable_kernel("asymmetric", SCTLR_EL1_TCF_ASYMM);
/* /*
* MTE asymm mode behaves as async mode for store * MTE asymm mode behaves as async mode for store
...@@ -219,11 +221,11 @@ static void mte_update_sctlr_user(struct task_struct *task) ...@@ -219,11 +221,11 @@ static void mte_update_sctlr_user(struct task_struct *task)
* default order. * default order.
*/ */
if (resolved_mte_tcf & MTE_CTRL_TCF_ASYMM) if (resolved_mte_tcf & MTE_CTRL_TCF_ASYMM)
sctlr |= SCTLR_EL1_TCF0_ASYMM; sctlr |= SYS_FIELD_PREP_ENUM(SCTLR_EL1, TCF0, ASYMM);
else if (resolved_mte_tcf & MTE_CTRL_TCF_ASYNC) else if (resolved_mte_tcf & MTE_CTRL_TCF_ASYNC)
sctlr |= SCTLR_EL1_TCF0_ASYNC; sctlr |= SYS_FIELD_PREP_ENUM(SCTLR_EL1, TCF0, ASYNC);
else if (resolved_mte_tcf & MTE_CTRL_TCF_SYNC) else if (resolved_mte_tcf & MTE_CTRL_TCF_SYNC)
sctlr |= SCTLR_EL1_TCF0_SYNC; sctlr |= SYS_FIELD_PREP_ENUM(SCTLR_EL1, TCF0, SYNC);
task->thread.sctlr_user = sctlr; task->thread.sctlr_user = sctlr;
} }
...@@ -546,3 +548,32 @@ static int register_mte_tcf_preferred_sysctl(void) ...@@ -546,3 +548,32 @@ static int register_mte_tcf_preferred_sysctl(void)
return 0; return 0;
} }
subsys_initcall(register_mte_tcf_preferred_sysctl); subsys_initcall(register_mte_tcf_preferred_sysctl);
/*
* Return 0 on success, the number of bytes not probed otherwise.
*/
size_t mte_probe_user_range(const char __user *uaddr, size_t size)
{
const char __user *end = uaddr + size;
int err = 0;
char val;
__raw_get_user(val, uaddr, err);
if (err)
return size;
uaddr = PTR_ALIGN(uaddr, MTE_GRANULE_SIZE);
while (uaddr < end) {
/*
* A read is sufficient for MTE; the caller should have probed
* for the PTE write permission if required.
*/
__raw_get_user(val, uaddr, err);
if (err)
return end - uaddr;
uaddr += MTE_GRANULE_SIZE;
}
(void)val;
return 0;
}
...@@ -335,7 +335,7 @@ static void __kprobes kprobe_handler(struct pt_regs *regs) ...@@ -335,7 +335,7 @@ static void __kprobes kprobe_handler(struct pt_regs *regs)
} }
static int __kprobes static int __kprobes
kprobe_breakpoint_ss_handler(struct pt_regs *regs, unsigned int esr) kprobe_breakpoint_ss_handler(struct pt_regs *regs, unsigned long esr)
{ {
struct kprobe_ctlblk *kcb = get_kprobe_ctlblk(); struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
unsigned long addr = instruction_pointer(regs); unsigned long addr = instruction_pointer(regs);
...@@ -359,7 +359,7 @@ static struct break_hook kprobes_break_ss_hook = { ...@@ -359,7 +359,7 @@ static struct break_hook kprobes_break_ss_hook = {
}; };
static int __kprobes static int __kprobes
kprobe_breakpoint_handler(struct pt_regs *regs, unsigned int esr) kprobe_breakpoint_handler(struct pt_regs *regs, unsigned long esr)
{ {
kprobe_handler(regs); kprobe_handler(regs);
return DBG_HOOK_HANDLED; return DBG_HOOK_HANDLED;
......
...@@ -166,7 +166,7 @@ int arch_uprobe_exception_notify(struct notifier_block *self, ...@@ -166,7 +166,7 @@ int arch_uprobe_exception_notify(struct notifier_block *self,
} }
static int uprobe_breakpoint_handler(struct pt_regs *regs, static int uprobe_breakpoint_handler(struct pt_regs *regs,
unsigned int esr) unsigned long esr)
{ {
if (uprobe_pre_sstep_notifier(regs)) if (uprobe_pre_sstep_notifier(regs))
return DBG_HOOK_HANDLED; return DBG_HOOK_HANDLED;
...@@ -175,7 +175,7 @@ static int uprobe_breakpoint_handler(struct pt_regs *regs, ...@@ -175,7 +175,7 @@ static int uprobe_breakpoint_handler(struct pt_regs *regs,
} }
static int uprobe_single_step_handler(struct pt_regs *regs, static int uprobe_single_step_handler(struct pt_regs *regs,
unsigned int esr) unsigned long esr)
{ {
struct uprobe_task *utask = current->utask; struct uprobe_task *utask = current->utask;
......
...@@ -250,6 +250,8 @@ void show_regs(struct pt_regs *regs) ...@@ -250,6 +250,8 @@ void show_regs(struct pt_regs *regs)
static void tls_thread_flush(void) static void tls_thread_flush(void)
{ {
write_sysreg(0, tpidr_el0); write_sysreg(0, tpidr_el0);
if (system_supports_tpidr2())
write_sysreg_s(0, SYS_TPIDR2_EL0);
if (is_compat_task()) { if (is_compat_task()) {
current->thread.uw.tp_value = 0; current->thread.uw.tp_value = 0;
...@@ -298,16 +300,42 @@ int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src) ...@@ -298,16 +300,42 @@ int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
/* /*
* Detach src's sve_state (if any) from dst so that it does not * Detach src's sve_state (if any) from dst so that it does not
* get erroneously used or freed prematurely. dst's sve_state * get erroneously used or freed prematurely. dst's copies
* will be allocated on demand later on if dst uses SVE. * will be allocated on demand later on if dst uses SVE.
* For consistency, also clear TIF_SVE here: this could be done * For consistency, also clear TIF_SVE here: this could be done
* later in copy_process(), but to avoid tripping up future * later in copy_process(), but to avoid tripping up future
* maintainers it is best not to leave TIF_SVE and sve_state in * maintainers it is best not to leave TIF flags and buffers in
* an inconsistent state, even temporarily. * an inconsistent state, even temporarily.
*/ */
dst->thread.sve_state = NULL; dst->thread.sve_state = NULL;
clear_tsk_thread_flag(dst, TIF_SVE); clear_tsk_thread_flag(dst, TIF_SVE);
/*
* In the unlikely event that we create a new thread with ZA
* enabled, we should retain the ZA state, so duplicate it here.
* This may be freed shortly if we exec() or if CLONE_SETTLS is
* set, but it's simpler to do it here. To avoid confusing the rest
* of the code, ensure that we have a sve_state allocated
* whenever za_state is allocated.
*/
if (thread_za_enabled(&src->thread)) {
dst->thread.sve_state = kzalloc(sve_state_size(src),
GFP_KERNEL);
if (!dst->thread.sve_state)
return -ENOMEM;
dst->thread.za_state = kmemdup(src->thread.za_state,
za_state_size(src),
GFP_KERNEL);
if (!dst->thread.za_state) {
kfree(dst->thread.sve_state);
dst->thread.sve_state = NULL;
return -ENOMEM;
}
} else {
dst->thread.za_state = NULL;
clear_tsk_thread_flag(dst, TIF_SME);
}
/* clear any pending asynchronous tag fault raised by the parent */ /* clear any pending asynchronous tag fault raised by the parent */
clear_tsk_thread_flag(dst, TIF_MTE_ASYNC_FAULT); clear_tsk_thread_flag(dst, TIF_MTE_ASYNC_FAULT);
...@@ -343,6 +371,8 @@ int copy_thread(unsigned long clone_flags, unsigned long stack_start, ...@@ -343,6 +371,8 @@ int copy_thread(unsigned long clone_flags, unsigned long stack_start,
* out-of-sync with the saved value. * out-of-sync with the saved value.
*/ */
*task_user_tls(p) = read_sysreg(tpidr_el0); *task_user_tls(p) = read_sysreg(tpidr_el0);
if (system_supports_tpidr2())
p->thread.tpidr2_el0 = read_sysreg_s(SYS_TPIDR2_EL0);
if (stack_start) { if (stack_start) {
if (is_compat_thread(task_thread_info(p))) if (is_compat_thread(task_thread_info(p)))
...@@ -353,10 +383,12 @@ int copy_thread(unsigned long clone_flags, unsigned long stack_start, ...@@ -353,10 +383,12 @@ int copy_thread(unsigned long clone_flags, unsigned long stack_start,
/* /*
* If a TLS pointer was passed to clone, use it for the new * If a TLS pointer was passed to clone, use it for the new
* thread. * thread. We also reset TPIDR2 if it's in use.
*/ */
if (clone_flags & CLONE_SETTLS) if (clone_flags & CLONE_SETTLS) {
p->thread.uw.tp_value = tls; p->thread.uw.tp_value = tls;
p->thread.tpidr2_el0 = 0;
}
} else { } else {
/* /*
* A kthread has no context to ERET to, so ensure any buggy * A kthread has no context to ERET to, so ensure any buggy
...@@ -387,6 +419,8 @@ int copy_thread(unsigned long clone_flags, unsigned long stack_start, ...@@ -387,6 +419,8 @@ int copy_thread(unsigned long clone_flags, unsigned long stack_start,
void tls_preserve_current_state(void) void tls_preserve_current_state(void)
{ {
*task_user_tls(current) = read_sysreg(tpidr_el0); *task_user_tls(current) = read_sysreg(tpidr_el0);
if (system_supports_tpidr2() && !is_compat_task())
current->thread.tpidr2_el0 = read_sysreg_s(SYS_TPIDR2_EL0);
} }
static void tls_thread_switch(struct task_struct *next) static void tls_thread_switch(struct task_struct *next)
...@@ -399,6 +433,8 @@ static void tls_thread_switch(struct task_struct *next) ...@@ -399,6 +433,8 @@ static void tls_thread_switch(struct task_struct *next)
write_sysreg(0, tpidrro_el0); write_sysreg(0, tpidrro_el0);
write_sysreg(*task_user_tls(next), tpidr_el0); write_sysreg(*task_user_tls(next), tpidr_el0);
if (system_supports_tpidr2())
write_sysreg_s(next->thread.tpidr2_el0, SYS_TPIDR2_EL0);
} }
/* /*
......
...@@ -225,6 +225,8 @@ static void __init request_standard_resources(void) ...@@ -225,6 +225,8 @@ static void __init request_standard_resources(void)
kernel_code.end = __pa_symbol(__init_begin - 1); kernel_code.end = __pa_symbol(__init_begin - 1);
kernel_data.start = __pa_symbol(_sdata); kernel_data.start = __pa_symbol(_sdata);
kernel_data.end = __pa_symbol(_end - 1); kernel_data.end = __pa_symbol(_end - 1);
insert_resource(&iomem_resource, &kernel_code);
insert_resource(&iomem_resource, &kernel_data);
num_standard_resources = memblock.memory.cnt; num_standard_resources = memblock.memory.cnt;
res_size = num_standard_resources * sizeof(*standard_resources); res_size = num_standard_resources * sizeof(*standard_resources);
...@@ -246,20 +248,7 @@ static void __init request_standard_resources(void) ...@@ -246,20 +248,7 @@ static void __init request_standard_resources(void)
res->end = __pfn_to_phys(memblock_region_memory_end_pfn(region)) - 1; res->end = __pfn_to_phys(memblock_region_memory_end_pfn(region)) - 1;
} }
request_resource(&iomem_resource, res); insert_resource(&iomem_resource, res);
if (kernel_code.start >= res->start &&
kernel_code.end <= res->end)
request_resource(res, &kernel_code);
if (kernel_data.start >= res->start &&
kernel_data.end <= res->end)
request_resource(res, &kernel_data);
#ifdef CONFIG_KEXEC_CORE
/* Userspace will find "Crash kernel" region in /proc/iomem. */
if (crashk_res.end && crashk_res.start >= res->start &&
crashk_res.end <= res->end)
request_resource(res, &crashk_res);
#endif
} }
} }
......
...@@ -56,6 +56,7 @@ struct rt_sigframe_user_layout { ...@@ -56,6 +56,7 @@ struct rt_sigframe_user_layout {
unsigned long fpsimd_offset; unsigned long fpsimd_offset;
unsigned long esr_offset; unsigned long esr_offset;
unsigned long sve_offset; unsigned long sve_offset;
unsigned long za_offset;
unsigned long extra_offset; unsigned long extra_offset;
unsigned long end_offset; unsigned long end_offset;
}; };
...@@ -218,6 +219,7 @@ static int restore_fpsimd_context(struct fpsimd_context __user *ctx) ...@@ -218,6 +219,7 @@ static int restore_fpsimd_context(struct fpsimd_context __user *ctx)
struct user_ctxs { struct user_ctxs {
struct fpsimd_context __user *fpsimd; struct fpsimd_context __user *fpsimd;
struct sve_context __user *sve; struct sve_context __user *sve;
struct za_context __user *za;
}; };
#ifdef CONFIG_ARM64_SVE #ifdef CONFIG_ARM64_SVE
...@@ -226,11 +228,17 @@ static int preserve_sve_context(struct sve_context __user *ctx) ...@@ -226,11 +228,17 @@ static int preserve_sve_context(struct sve_context __user *ctx)
{ {
int err = 0; int err = 0;
u16 reserved[ARRAY_SIZE(ctx->__reserved)]; u16 reserved[ARRAY_SIZE(ctx->__reserved)];
u16 flags = 0;
unsigned int vl = task_get_sve_vl(current); unsigned int vl = task_get_sve_vl(current);
unsigned int vq = 0; unsigned int vq = 0;
if (test_thread_flag(TIF_SVE)) if (thread_sm_enabled(&current->thread)) {
vl = task_get_sme_vl(current);
vq = sve_vq_from_vl(vl); vq = sve_vq_from_vl(vl);
flags |= SVE_SIG_FLAG_SM;
} else if (test_thread_flag(TIF_SVE)) {
vq = sve_vq_from_vl(vl);
}
memset(reserved, 0, sizeof(reserved)); memset(reserved, 0, sizeof(reserved));
...@@ -238,6 +246,7 @@ static int preserve_sve_context(struct sve_context __user *ctx) ...@@ -238,6 +246,7 @@ static int preserve_sve_context(struct sve_context __user *ctx)
__put_user_error(round_up(SVE_SIG_CONTEXT_SIZE(vq), 16), __put_user_error(round_up(SVE_SIG_CONTEXT_SIZE(vq), 16),
&ctx->head.size, err); &ctx->head.size, err);
__put_user_error(vl, &ctx->vl, err); __put_user_error(vl, &ctx->vl, err);
__put_user_error(flags, &ctx->flags, err);
BUILD_BUG_ON(sizeof(ctx->__reserved) != sizeof(reserved)); BUILD_BUG_ON(sizeof(ctx->__reserved) != sizeof(reserved));
err |= __copy_to_user(&ctx->__reserved, reserved, sizeof(reserved)); err |= __copy_to_user(&ctx->__reserved, reserved, sizeof(reserved));
...@@ -258,18 +267,28 @@ static int preserve_sve_context(struct sve_context __user *ctx) ...@@ -258,18 +267,28 @@ static int preserve_sve_context(struct sve_context __user *ctx)
static int restore_sve_fpsimd_context(struct user_ctxs *user) static int restore_sve_fpsimd_context(struct user_ctxs *user)
{ {
int err; int err;
unsigned int vq; unsigned int vl, vq;
struct user_fpsimd_state fpsimd; struct user_fpsimd_state fpsimd;
struct sve_context sve; struct sve_context sve;
if (__copy_from_user(&sve, user->sve, sizeof(sve))) if (__copy_from_user(&sve, user->sve, sizeof(sve)))
return -EFAULT; return -EFAULT;
if (sve.vl != task_get_sve_vl(current)) if (sve.flags & SVE_SIG_FLAG_SM) {
if (!system_supports_sme())
return -EINVAL;
vl = task_get_sme_vl(current);
} else {
vl = task_get_sve_vl(current);
}
if (sve.vl != vl)
return -EINVAL; return -EINVAL;
if (sve.head.size <= sizeof(*user->sve)) { if (sve.head.size <= sizeof(*user->sve)) {
clear_thread_flag(TIF_SVE); clear_thread_flag(TIF_SVE);
current->thread.svcr &= ~SVCR_SM_MASK;
goto fpsimd_only; goto fpsimd_only;
} }
...@@ -301,7 +320,10 @@ static int restore_sve_fpsimd_context(struct user_ctxs *user) ...@@ -301,7 +320,10 @@ static int restore_sve_fpsimd_context(struct user_ctxs *user)
if (err) if (err)
return -EFAULT; return -EFAULT;
set_thread_flag(TIF_SVE); if (sve.flags & SVE_SIG_FLAG_SM)
current->thread.svcr |= SVCR_SM_MASK;
else
set_thread_flag(TIF_SVE);
fpsimd_only: fpsimd_only:
/* copy the FP and status/control registers */ /* copy the FP and status/control registers */
...@@ -326,6 +348,101 @@ extern int restore_sve_fpsimd_context(struct user_ctxs *user); ...@@ -326,6 +348,101 @@ extern int restore_sve_fpsimd_context(struct user_ctxs *user);
#endif /* ! CONFIG_ARM64_SVE */ #endif /* ! CONFIG_ARM64_SVE */
#ifdef CONFIG_ARM64_SME
static int preserve_za_context(struct za_context __user *ctx)
{
int err = 0;
u16 reserved[ARRAY_SIZE(ctx->__reserved)];
unsigned int vl = task_get_sme_vl(current);
unsigned int vq;
if (thread_za_enabled(&current->thread))
vq = sve_vq_from_vl(vl);
else
vq = 0;
memset(reserved, 0, sizeof(reserved));
__put_user_error(ZA_MAGIC, &ctx->head.magic, err);
__put_user_error(round_up(ZA_SIG_CONTEXT_SIZE(vq), 16),
&ctx->head.size, err);
__put_user_error(vl, &ctx->vl, err);
BUILD_BUG_ON(sizeof(ctx->__reserved) != sizeof(reserved));
err |= __copy_to_user(&ctx->__reserved, reserved, sizeof(reserved));
if (vq) {
/*
* This assumes that the ZA state has already been saved to
* the task struct by calling the function
* fpsimd_signal_preserve_current_state().
*/
err |= __copy_to_user((char __user *)ctx + ZA_SIG_REGS_OFFSET,
current->thread.za_state,
ZA_SIG_REGS_SIZE(vq));
}
return err ? -EFAULT : 0;
}
static int restore_za_context(struct user_ctxs __user *user)
{
int err;
unsigned int vq;
struct za_context za;
if (__copy_from_user(&za, user->za, sizeof(za)))
return -EFAULT;
if (za.vl != task_get_sme_vl(current))
return -EINVAL;
if (za.head.size <= sizeof(*user->za)) {
current->thread.svcr &= ~SVCR_ZA_MASK;
return 0;
}
vq = sve_vq_from_vl(za.vl);
if (za.head.size < ZA_SIG_CONTEXT_SIZE(vq))
return -EINVAL;
/*
* Careful: we are about to __copy_from_user() directly into
* thread.za_state with preemption enabled, so protection is
* needed to prevent a racing context switch from writing stale
* registers back over the new data.
*/
fpsimd_flush_task_state(current);
/* From now, fpsimd_thread_switch() won't touch thread.sve_state */
sme_alloc(current);
if (!current->thread.za_state) {
current->thread.svcr &= ~SVCR_ZA_MASK;
clear_thread_flag(TIF_SME);
return -ENOMEM;
}
err = __copy_from_user(current->thread.za_state,
(char __user const *)user->za +
ZA_SIG_REGS_OFFSET,
ZA_SIG_REGS_SIZE(vq));
if (err)
return -EFAULT;
set_thread_flag(TIF_SME);
current->thread.svcr |= SVCR_ZA_MASK;
return 0;
}
#else /* ! CONFIG_ARM64_SME */
/* Turn any non-optimised out attempts to use these into a link error: */
extern int preserve_za_context(void __user *ctx);
extern int restore_za_context(struct user_ctxs *user);
#endif /* ! CONFIG_ARM64_SME */
static int parse_user_sigframe(struct user_ctxs *user, static int parse_user_sigframe(struct user_ctxs *user,
struct rt_sigframe __user *sf) struct rt_sigframe __user *sf)
...@@ -340,6 +457,7 @@ static int parse_user_sigframe(struct user_ctxs *user, ...@@ -340,6 +457,7 @@ static int parse_user_sigframe(struct user_ctxs *user,
user->fpsimd = NULL; user->fpsimd = NULL;
user->sve = NULL; user->sve = NULL;
user->za = NULL;
if (!IS_ALIGNED((unsigned long)base, 16)) if (!IS_ALIGNED((unsigned long)base, 16))
goto invalid; goto invalid;
...@@ -393,7 +511,7 @@ static int parse_user_sigframe(struct user_ctxs *user, ...@@ -393,7 +511,7 @@ static int parse_user_sigframe(struct user_ctxs *user,
break; break;
case SVE_MAGIC: case SVE_MAGIC:
if (!system_supports_sve()) if (!system_supports_sve() && !system_supports_sme())
goto invalid; goto invalid;
if (user->sve) if (user->sve)
...@@ -405,6 +523,19 @@ static int parse_user_sigframe(struct user_ctxs *user, ...@@ -405,6 +523,19 @@ static int parse_user_sigframe(struct user_ctxs *user,
user->sve = (struct sve_context __user *)head; user->sve = (struct sve_context __user *)head;
break; break;
case ZA_MAGIC:
if (!system_supports_sme())
goto invalid;
if (user->za)
goto invalid;
if (size < sizeof(*user->za))
goto invalid;
user->za = (struct za_context __user *)head;
break;
case EXTRA_MAGIC: case EXTRA_MAGIC:
if (have_extra_context) if (have_extra_context)
goto invalid; goto invalid;
...@@ -528,6 +659,9 @@ static int restore_sigframe(struct pt_regs *regs, ...@@ -528,6 +659,9 @@ static int restore_sigframe(struct pt_regs *regs,
} }
} }
if (err == 0 && system_supports_sme() && user.za)
err = restore_za_context(&user);
return err; return err;
} }
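For reference, the record list parsed above has a direct userspace mirror: a SA_SIGINFO handler can walk the __reserved area of the mcontext to locate the ZA record. The sketch below is a hedged illustration; it assumes glibc's aarch64 mcontext_t exposes the __reserved byte array and that <asm/sigcontext.h> provides struct _aarch64_ctx, struct za_context and ZA_MAGIC, and it deliberately does not follow an extra_context record.

#include <stddef.h>
#include <ucontext.h>
#include <asm/sigcontext.h>	/* struct _aarch64_ctx, struct za_context, ZA_MAGIC */

/* Minimal sketch: locate the za_context record inside a signal frame.
 * Call from a SA_SIGINFO handler with the ucontext_t pointer it receives. */
static struct za_context *find_za_record(ucontext_t *uc)
{
	struct _aarch64_ctx *head =
		(struct _aarch64_ctx *)uc->uc_mcontext.__reserved;

	/* Records are laid out back to back and terminated by a zero
	 * magic/size pair, mirroring parse_user_sigframe() above. */
	while (head->magic && head->size) {
		if (head->magic == ZA_MAGIC)
			return (struct za_context *)head;
		head = (struct _aarch64_ctx *)((char *)head + head->size);
	}
	return NULL;	/* not found: no SME, or the record lives in extra_context */
}

A real consumer would also validate head->size against sizeof(struct za_context) and ZA_SIG_CONTEXT_SIZE() before touching the register data, exactly as restore_za_context() does.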
...@@ -594,11 +728,12 @@ static int setup_sigframe_layout(struct rt_sigframe_user_layout *user, ...@@ -594,11 +728,12 @@ static int setup_sigframe_layout(struct rt_sigframe_user_layout *user,
if (system_supports_sve()) { if (system_supports_sve()) {
unsigned int vq = 0; unsigned int vq = 0;
if (add_all || test_thread_flag(TIF_SVE)) { if (add_all || test_thread_flag(TIF_SVE) ||
int vl = sve_max_vl(); thread_sm_enabled(&current->thread)) {
int vl = max(sve_max_vl(), sme_max_vl());
if (!add_all) if (!add_all)
vl = task_get_sve_vl(current); vl = thread_get_cur_vl(&current->thread);
vq = sve_vq_from_vl(vl); vq = sve_vq_from_vl(vl);
} }
...@@ -609,6 +744,24 @@ static int setup_sigframe_layout(struct rt_sigframe_user_layout *user, ...@@ -609,6 +744,24 @@ static int setup_sigframe_layout(struct rt_sigframe_user_layout *user,
return err; return err;
} }
if (system_supports_sme()) {
unsigned int vl;
unsigned int vq = 0;
if (add_all)
vl = sme_max_vl();
else
vl = task_get_sme_vl(current);
if (thread_za_enabled(&current->thread))
vq = sve_vq_from_vl(vl);
err = sigframe_alloc(user, &user->za_offset,
ZA_SIG_CONTEXT_SIZE(vq));
if (err)
return err;
}
return sigframe_alloc_end(user); return sigframe_alloc_end(user);
} }
...@@ -649,13 +802,21 @@ static int setup_sigframe(struct rt_sigframe_user_layout *user, ...@@ -649,13 +802,21 @@ static int setup_sigframe(struct rt_sigframe_user_layout *user,
__put_user_error(current->thread.fault_code, &esr_ctx->esr, err); __put_user_error(current->thread.fault_code, &esr_ctx->esr, err);
} }
/* Scalable Vector Extension state, if present */ /* Scalable Vector Extension state (including streaming), if present */
if (system_supports_sve() && err == 0 && user->sve_offset) { if ((system_supports_sve() || system_supports_sme()) &&
err == 0 && user->sve_offset) {
struct sve_context __user *sve_ctx = struct sve_context __user *sve_ctx =
apply_user_offset(user, user->sve_offset); apply_user_offset(user, user->sve_offset);
err |= preserve_sve_context(sve_ctx); err |= preserve_sve_context(sve_ctx);
} }
/* ZA state if present */
if (system_supports_sme() && err == 0 && user->za_offset) {
struct za_context __user *za_ctx =
apply_user_offset(user, user->za_offset);
err |= preserve_za_context(za_ctx);
}
if (err == 0 && user->extra_offset) { if (err == 0 && user->extra_offset) {
char __user *sfp = (char __user *)user->sigframe; char __user *sfp = (char __user *)user->sigframe;
char __user *userp = char __user *userp =
...@@ -759,6 +920,13 @@ static void setup_return(struct pt_regs *regs, struct k_sigaction *ka, ...@@ -759,6 +920,13 @@ static void setup_return(struct pt_regs *regs, struct k_sigaction *ka,
/* TCO (Tag Check Override) always cleared for signal handlers */ /* TCO (Tag Check Override) always cleared for signal handlers */
regs->pstate &= ~PSR_TCO_BIT; regs->pstate &= ~PSR_TCO_BIT;
/* Signal handlers are invoked with ZA and streaming mode disabled */
if (system_supports_sme()) {
current->thread.svcr &= ~(SVCR_ZA_MASK |
SVCR_SM_MASK);
sme_smstop();
}
if (ka->sa.sa_flags & SA_RESTORER) if (ka->sa.sa_flags & SA_RESTORER)
sigtramp = ka->sa.sa_restorer; sigtramp = ka->sa.sa_restorer;
else else
......
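The setup_return() hunk above guarantees that handlers start with both PSTATE.SM and PSTATE.ZA cleared. The sketch below is one way to observe that from userspace; it is a hedged illustration only: the HWCAP2_SME fallback value, the S3_3_C4_C2_2 encoding for SVCR and the SM/ZA bit positions come from the wider SME support and the architecture, not from this hunk, so treat them as assumptions.

#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/auxv.h>

#ifndef HWCAP2_SME
#define HWCAP2_SME	(1UL << 23)	/* assumed value; prefer the uapi header's definition */
#endif

static volatile uint64_t svcr_in_handler;

/* Read SVCR via its explicit encoding so no +sme compiler support is needed. */
static inline uint64_t read_svcr(void)
{
	uint64_t v;

	asm volatile("mrs %0, S3_3_C4_C2_2" : "=r" (v));
	return v;
}

static void handler(int sig)
{
	/* Per the setup_return() change above, SM (bit 0) and ZA (bit 1)
	 * are expected to read as 0 here. */
	(void)sig;
	svcr_in_handler = read_svcr();
}

int main(void)
{
	if (!(getauxval(AT_HWCAP2) & HWCAP2_SME))
		return 0;	/* no SME: SVCR is not accessible from EL0 */

	signal(SIGUSR1, handler);
	raise(SIGUSR1);
	printf("SVCR observed in handler: %#llx\n",
	       (unsigned long long)svcr_in_handler);
	return 0;
}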
...@@ -113,6 +113,6 @@ long compat_arm_syscall(struct pt_regs *regs, int scno) ...@@ -113,6 +113,6 @@ long compat_arm_syscall(struct pt_regs *regs, int scno)
addr = instruction_pointer(regs) - (compat_thumb_mode(regs) ? 2 : 4); addr = instruction_pointer(regs) - (compat_thumb_mode(regs) ? 2 : 4);
arm64_notify_die("Oops - bad compat syscall(2)", regs, arm64_notify_die("Oops - bad compat syscall(2)", regs,
SIGILL, ILL_ILLTRP, addr, scno); SIGILL, ILL_ILLTRP, addr, 0);
return 0; return 0;
} }
...@@ -783,6 +783,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) ...@@ -783,6 +783,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
ret = 1; ret = 1;
run->exit_reason = KVM_EXIT_UNKNOWN; run->exit_reason = KVM_EXIT_UNKNOWN;
run->flags = 0;
while (ret > 0) { while (ret > 0) {
/* /*
* Check conditions before entering the guest * Check conditions before entering the guest
......
...@@ -266,7 +266,7 @@ static inline bool handle_tx2_tvm(struct kvm_vcpu *vcpu) ...@@ -266,7 +266,7 @@ static inline bool handle_tx2_tvm(struct kvm_vcpu *vcpu)
return true; return true;
} }
static inline bool esr_is_ptrauth_trap(u32 esr) static inline bool esr_is_ptrauth_trap(u64 esr)
{ {
switch (esr_sys64_to_sysreg(esr)) { switch (esr_sys64_to_sysreg(esr)) {
case SYS_APIAKEYLO_EL1: case SYS_APIAKEYLO_EL1:
......
...@@ -33,7 +33,7 @@ u64 id_aa64mmfr2_el1_sys_val; ...@@ -33,7 +33,7 @@ u64 id_aa64mmfr2_el1_sys_val;
*/ */
static void inject_undef64(struct kvm_vcpu *vcpu) static void inject_undef64(struct kvm_vcpu *vcpu)
{ {
u32 esr = (ESR_ELx_EC_UNKNOWN << ESR_ELx_EC_SHIFT); u64 esr = (ESR_ELx_EC_UNKNOWN << ESR_ELx_EC_SHIFT);
*vcpu_pc(vcpu) = read_sysreg_el2(SYS_ELR); *vcpu_pc(vcpu) = read_sysreg_el2(SYS_ELR);
*vcpu_cpsr(vcpu) = read_sysreg_el2(SYS_SPSR); *vcpu_cpsr(vcpu) = read_sysreg_el2(SYS_SPSR);
......
...@@ -16,8 +16,8 @@ ...@@ -16,8 +16,8 @@
void copy_highpage(struct page *to, struct page *from) void copy_highpage(struct page *to, struct page *from)
{ {
struct page *kto = page_address(to); void *kto = page_address(to);
struct page *kfrom = page_address(from); void *kfrom = page_address(from);
copy_page(kto, kfrom); copy_page(kto, kfrom);
......