Commit 342403bc authored by Will Deacon

Merge branches 'for-next/acpi', 'for-next/bpf', 'for-next/cpufeature', 'for-next/docs', 'for-next/kconfig', 'for-next/misc', 'for-next/perf', 'for-next/ptr-auth', 'for-next/sdei', 'for-next/smccc' and 'for-next/vdso' into for-next/core

ACPI and IORT updates
(Lorenzo Pieralisi)
* for-next/acpi:
  ACPI/IORT: Remove the unused __get_pci_rid()
  ACPI/IORT: Fix PMCG node single ID mapping handling
  ACPI: IORT: Add comments for not calling acpi_put_table()
  ACPI: GTDT: Put GTDT table after parsing
  ACPI: IORT: Add extra message "applying workaround" for off-by-1 issue
  ACPI/IORT: work around num_ids ambiguity
  Revert "ACPI/IORT: Fix 'Number of IDs' handling in iort_id_map()"
  ACPI/IORT: take _DMA methods into account for named components

BPF JIT optimisations for immediate value generation
(Luke Nelson)
* for-next/bpf:
  bpf, arm64: Optimize ADD,SUB,JMP BPF_K using arm64 add/sub immediates
  bpf, arm64: Optimize AND,OR,XOR,JSET BPF_K using arm64 logical immediates
  arm64: insn: Fix two bugs in encoding 32-bit logical immediates
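  As a rough sketch (not the kernel code itself): the insn.c hunk later in
  this diff rejects exactly these unencodable logical immediates before
  attempting a bitmask encoding:

      /* Sketch only: mirrors the checks added in aarch64_encode_immediate(). */
      #include <stdbool.h>
      #include <stdint.h>

      static bool logic_imm_encodable(uint64_t imm, unsigned int esz)
      {
              /* esz is the operation width in bits: 32 or 64 */
              uint64_t mask = (esz == 64) ? ~0ULL : ((1ULL << esz) - 1);

              /*
               * All-zeroes and all-ones have no bitmask encoding, and a
               * 32-bit operation cannot take an immediate wider than 32 bits.
               */
              return imm != 0 && imm != mask && !(imm & ~mask);
      }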

Addition of new CPU ID register fields and removal of some benign sanity checks
(Anshuman Khandual and others)
* for-next/cpufeature: (27 commits)
  KVM: arm64: Check advertised Stage-2 page size capability
  arm64/cpufeature: Add get_arm64_ftr_reg_nowarn()
  arm64/cpuinfo: Add ID_MMFR4_EL1 into the cpuinfo_arm64 context
  arm64/cpufeature: Add remaining feature bits in ID_AA64PFR1 register
  arm64/cpufeature: Add remaining feature bits in ID_AA64PFR0 register
  arm64/cpufeature: Add remaining feature bits in ID_AA64ISAR0 register
  arm64/cpufeature: Add remaining feature bits in ID_MMFR4 register
  arm64/cpufeature: Add remaining feature bits in ID_PFR0 register
  arm64/cpufeature: Introduce ID_MMFR5 CPU register
  arm64/cpufeature: Introduce ID_DFR1 CPU register
  arm64/cpufeature: Introduce ID_PFR2 CPU register
  arm64/cpufeature: Make doublelock a signed feature in ID_AA64DFR0
  arm64/cpufeature: Drop TraceFilt feature exposure from ID_DFR0 register
  arm64/cpufeature: Add explicit ftr_id_isar0[] for ID_ISAR0 register
  arm64/cpufeature: Drop open encodings while extracting parange
  arm64/cpufeature: Validate hypervisor capabilities during CPU hotplug
  arm64: cpufeature: Group indexed system register definitions by name
  arm64: cpufeature: Extend comment to describe absence of field info
  arm64: drop duplicate definitions of ID_AA64MMFR0_TGRAN constants
  arm64: cpufeature: Add an overview comment for the cpufeature framework
  ...
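  The consumer pattern for the new register fields is unchanged; a minimal
  sketch, using only helpers that appear in the diffs below
  (read_sanitised_ftr_reg(), cpuid_feature_extract_unsigned_field()), of how
  the 32-bit EL1 capability check works:

      /* Sketch of the check behind ARM64_HAS_32BIT_EL1, per the new
       * ID_AA64PFR0_EL1_32BIT_64BIT definition added in this series. */
      static bool sketch_has_32bit_el1(void)
      {
              u64 pfr0 = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);

              return cpuid_feature_extract_unsigned_field(pfr0,
                              ID_AA64PFR0_EL1_SHIFT) == ID_AA64PFR0_EL1_32BIT_64BIT;
      }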

Minor documentation tweaks for silicon errata and booting requirements
(Rob Herring and Will Deacon)
* for-next/docs:
  arm64: silicon-errata.rst: Sort the Cortex-A55 entries
  arm64: docs: Mandate that the I-cache doesn't hold stale kernel text

Minor Kconfig cleanups
(Geert Uytterhoeven)
* for-next/kconfig:
  arm64: cpufeature: Add "or" to mitigations for multiple errata
  arm64: Sort vendor-specific errata

Miscellaneous updates
(Ard Biesheuvel and others)
* for-next/misc:
  arm64: mm: Add asid_gen_match() helper
  arm64: stacktrace: Factor out some common code into on_stack()
  arm64: Call debug_traps_init() from trap_init() to help early kgdb
  arm64: cacheflush: Fix KGDB trap detection
  arm64/cpuinfo: Move device_initcall() near cpuinfo_regs_init()
  arm64: kexec_file: print appropriate variable
  arm: mm: use __pfn_to_section() to get mem_section
  arm64: Reorder the macro arguments in the copy routines
  efi/libstub/arm64: align PE/COFF sections to segment alignment
  KVM: arm64: Drop PTE_S2_MEMATTR_MASK
  arm64/kernel: Fix range on invalidating dcache for boot page tables
  arm64: set TEXT_OFFSET to 0x0 in preparation for removing it entirely
  arm64: lib: Consistently enable crc32 extension
  arm64/mm: Use phys_to_page() to access pgtable memory
  arm64: smp: Make cpus_stuck_in_kernel static
  arm64: entry: remove unneeded semicolon in el1_sync_handler()
  arm64/kernel: vmlinux.lds: drop redundant discard/keep macros
  arm64: drop GZFLAGS definition and export
  arm64: kexec_file: Avoid temp buffer for RNG seed
  arm64: rename stext to primary_entry

Perf PMU driver updates
(Tang Bin and others)
* for-next/perf:
  pmu/smmuv3: Clear IRQ affinity hint on device removal
  drivers/perf: hisi: Permit modular builds of HiSilicon uncore drivers
  drivers/perf: hisi: Fix typo in events attribute array
  drivers/perf: arm_spe_pmu: Avoid duplicate printouts
  drivers/perf: arm_dsu_pmu: Avoid duplicate printouts

Pointer authentication updates and support for vmcoreinfo
(Amit Daniel Kachhap and Mark Rutland)
* for-next/ptr-auth:
  Documentation/vmcoreinfo: Add documentation for 'KERNELPACMASK'
  arm64/crash_core: Export KERNELPACMASK in vmcoreinfo
  arm64: simplify ptrauth initialization
  arm64: remove ptrauth_keys_install_kernel sync arg

SDEI cleanup and non-critical fixes
(James Morse and others)
* for-next/sdei:
  firmware: arm_sdei: Document the motivation behind these set_fs() calls
  firmware: arm_sdei: remove unused interfaces
  firmware: arm_sdei: Put the SDEI table after using it
  firmware: arm_sdei: Drop check for /firmware/ node and always register driver

SMCCC updates and refactoring
(Sudeep Holla)
* for-next/smccc:
  firmware: smccc: Fix missing prototype warning for arm_smccc_version_init
  firmware: smccc: Add function to fetch SMCCC version
  firmware: smccc: Refactor SMCCC specific bits into separate file
  firmware: smccc: Drop smccc_version enum and use ARM_SMCCC_VERSION_1_x instead
  firmware: smccc: Add the definition for SMCCCv1.2 version/error codes
  firmware: smccc: Update link to latest SMCCC specification
  firmware: smccc: Add HAVE_ARM_SMCCC_DISCOVERY to identify SMCCC v1.1 and above
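  After this refactoring, callers probe for SMCCC v1.1+ by asking for a valid
  conduit instead of comparing raw version numbers; a minimal sketch of the
  pattern (it mirrors the paravirt.c hunk further down in this diff):

      #include <linux/arm-smccc.h>

      static bool sketch_have_smccc_1_1(void)
      {
              /* SMCCC_CONDUIT_NONE means no v1.1+ conduit was discovered */
              return arm_smccc_1_1_get_conduit() != SMCCC_CONDUIT_NONE;
      }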

vDSO cleanup and non-critical fixes
(Mark Rutland and Vincenzo Frascino)
* for-next/vdso:
  arm64: vdso: Add --eh-frame-hdr to ldflags
  arm64: vdso: use consistent 'map' nomenclature
  arm64: vdso: use consistent 'abi' nomenclature
  arm64: vdso: simplify arch_vdso_type ifdeffery
  arm64: vdso: remove aarch32_vdso_pages[]
  arm64: vdso: Add '-Bsymbolic' to ldflags
@@ -393,6 +393,12 @@ KERNELOFFSET
 The kernel randomization offset. Used to compute the page offset. If
 KASLR is disabled, this value is zero.
 
+KERNELPACMASK
+-------------
+The mask to extract the Pointer Authentication Code from a kernel virtual
+address.
+
 arm
 ===
...
@@ -173,7 +173,8 @@ Before jumping into the kernel, the following conditions must be met:
 - Caches, MMUs
 
   The MMU must be off.
-  Instruction cache may be on or off.
+  The instruction cache may be on or off, and must not hold any stale
+  entries corresponding to the loaded kernel image.
 
   The address range corresponding to the loaded kernel image must be
   cleaned to the PoC. In the presence of a system cache or other
   coherent masters with caches enabled, this will typically require
...
@@ -64,6 +64,10 @@ stable kernels.
 +----------------+-----------------+-----------------+-----------------------------+
 | ARM            | Cortex-A53      | #843419         | ARM64_ERRATUM_843419        |
 +----------------+-----------------+-----------------+-----------------------------+
+| ARM            | Cortex-A55      | #1024718        | ARM64_ERRATUM_1024718       |
++----------------+-----------------+-----------------+-----------------------------+
+| ARM            | Cortex-A55      | #1530923        | ARM64_ERRATUM_1530923       |
++----------------+-----------------+-----------------+-----------------------------+
 | ARM            | Cortex-A57      | #832075         | ARM64_ERRATUM_832075        |
 +----------------+-----------------+-----------------+-----------------------------+
 | ARM            | Cortex-A57      | #852523         | N/A                         |
@@ -78,8 +82,6 @@ stable kernels.
 +----------------+-----------------+-----------------+-----------------------------+
 | ARM            | Cortex-A73      | #858921         | ARM64_ERRATUM_858921        |
 +----------------+-----------------+-----------------+-----------------------------+
-| ARM            | Cortex-A55      | #1024718        | ARM64_ERRATUM_1024718       |
-+----------------+-----------------+-----------------+-----------------------------+
 | ARM            | Cortex-A76      | #1188873,1418040| ARM64_ERRATUM_1418040       |
 +----------------+-----------------+-----------------+-----------------------------+
 | ARM            | Cortex-A76      | #1165522        | ARM64_ERRATUM_1165522       |
@@ -88,8 +90,6 @@ stable kernels.
 +----------------+-----------------+-----------------+-----------------------------+
 | ARM            | Cortex-A76      | #1463225        | ARM64_ERRATUM_1463225       |
 +----------------+-----------------+-----------------+-----------------------------+
-| ARM            | Cortex-A55      | #1530923        | ARM64_ERRATUM_1530923       |
-+----------------+-----------------+-----------------+-----------------------------+
 | ARM            | Neoverse-N1     | #1188873,1418040| ARM64_ERRATUM_1418040       |
 +----------------+-----------------+-----------------+-----------------------------+
 | ARM            | Neoverse-N1     | #1349291        | N/A                         |
...
@@ -15474,6 +15474,15 @@ M: Nicolas Pitre <nico@fluxnic.net>
 S:      Odd Fixes
 F:      drivers/net/ethernet/smsc/smc91x.*
 
+SECURE MONITOR CALL(SMC) CALLING CONVENTION (SMCCC)
+M:      Mark Rutland <mark.rutland@arm.com>
+M:      Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
+M:      Sudeep Holla <sudeep.holla@arm.com>
+L:      linux-arm-kernel@lists.infradead.org
+S:      Maintained
+F:      drivers/firmware/smccc/
+F:      include/linux/arm-smccc.h
+
 SMIA AND SMIA++ IMAGE SENSOR DRIVER
 M:      Sakari Ailus <sakari.ailus@linux.intel.com>
 L:      linux-media@vger.kernel.org
...
@@ -553,6 +553,9 @@ config ARM64_ERRATUM_1530923
 
           If unsure, say Y.
 
+config ARM64_WORKAROUND_REPEAT_TLBI
+        bool
+
 config ARM64_ERRATUM_1286807
         bool "Cortex-A76: Modification of the translation table for a virtual address might lead to read-after-read ordering violation"
         default y
@@ -694,6 +697,35 @@ config CAVIUM_TX2_ERRATUM_219
 
           If unsure, say Y.
 
+config FUJITSU_ERRATUM_010001
+        bool "Fujitsu-A64FX erratum E#010001: Undefined fault may occur wrongly"
+        default y
+        help
+          This option adds a workaround for Fujitsu-A64FX erratum E#010001.
+          On some variants of the Fujitsu-A64FX cores ver(1.0, 1.1), memory
+          accesses may cause undefined fault (Data abort, DFSC=0b111111).
+          This fault occurs under a specific hardware condition when a
+          load/store instruction performs an address translation using:
+          case-1  TTBR0_EL1 with TCR_EL1.NFD0 == 1.
+          case-2  TTBR0_EL2 with TCR_EL2.NFD0 == 1.
+          case-3  TTBR1_EL1 with TCR_EL1.NFD1 == 1.
+          case-4  TTBR1_EL2 with TCR_EL2.NFD1 == 1.
+
+          The workaround is to ensure these bits are clear in TCR_ELx.
+          The workaround only affects the Fujitsu-A64FX.
+
+          If unsure, say Y.
+
+config HISILICON_ERRATUM_161600802
+        bool "Hip07 161600802: Erroneous redistributor VLPI base"
+        default y
+        help
+          The HiSilicon Hip07 SoC uses the wrong redistributor base
+          when issued ITS commands such as VMOVP and VMAPP, and requires
+          a 128kB offset to be applied to the target address in this commands.
+
+          If unsure, say Y.
+
 config QCOM_FALKOR_ERRATUM_1003
         bool "Falkor E1003: Incorrect translation due to ASID change"
         default y
@@ -705,9 +737,6 @@ config QCOM_FALKOR_ERRATUM_1003
           is unchanged. Work around the erratum by invalidating the walk cache
           entries for the trampoline before entering the kernel proper.
 
-config ARM64_WORKAROUND_REPEAT_TLBI
-        bool
-
 config QCOM_FALKOR_ERRATUM_1009
         bool "Falkor E1009: Prematurely complete a DSB after a TLBI"
         default y
@@ -729,25 +758,6 @@ config QCOM_QDF2400_ERRATUM_0065
 
           If unsure, say Y.
 
-config SOCIONEXT_SYNQUACER_PREITS
-        bool "Socionext Synquacer: Workaround for GICv3 pre-ITS"
-        default y
-        help
-          Socionext Synquacer SoCs implement a separate h/w block to generate
-          MSI doorbell writes with non-zero values for the device ID.
-
-          If unsure, say Y.
-
-config HISILICON_ERRATUM_161600802
-        bool "Hip07 161600802: Erroneous redistributor VLPI base"
-        default y
-        help
-          The HiSilicon Hip07 SoC uses the wrong redistributor base
-          when issued ITS commands such as VMOVP and VMAPP, and requires
-          a 128kB offset to be applied to the target address in this commands.
-
-          If unsure, say Y.
-
 config QCOM_FALKOR_ERRATUM_E1041
         bool "Falkor E1041: Speculative instruction fetches might cause errant memory access"
         default y
@@ -758,22 +768,12 @@ config QCOM_FALKOR_ERRATUM_E1041
 
           If unsure, say Y.
 
-config FUJITSU_ERRATUM_010001
-        bool "Fujitsu-A64FX erratum E#010001: Undefined fault may occur wrongly"
+config SOCIONEXT_SYNQUACER_PREITS
+        bool "Socionext Synquacer: Workaround for GICv3 pre-ITS"
         default y
         help
-          This option adds a workaround for Fujitsu-A64FX erratum E#010001.
-          On some variants of the Fujitsu-A64FX cores ver(1.0, 1.1), memory
-          accesses may cause undefined fault (Data abort, DFSC=0b111111).
-          This fault occurs under a specific hardware condition when a
-          load/store instruction performs an address translation using:
-          case-1  TTBR0_EL1 with TCR_EL1.NFD0 == 1.
-          case-2  TTBR0_EL2 with TCR_EL2.NFD0 == 1.
-          case-3  TTBR1_EL1 with TCR_EL1.NFD1 == 1.
-          case-4  TTBR1_EL2 with TCR_EL2.NFD1 == 1.
-          The workaround is to ensure these bits are clear in TCR_ELx.
-          The workaround only affects the Fujitsu-A64FX.
+          Socionext Synquacer SoCs implement a separate h/w block to generate
+          MSI doorbell writes with non-zero values for the device ID.
 
           If unsure, say Y.
...
@@ -12,7 +12,6 @@
 
 LDFLAGS_vmlinux :=--no-undefined -X
 CPPFLAGS_vmlinux.lds = -DTEXT_OFFSET=$(TEXT_OFFSET)
-GZFLAGS         :=-9
 
 ifeq ($(CONFIG_RELOCATABLE), y)
 # Pass --no-apply-dynamic-relocs to restore pre-binutils-2.27 behaviour
@@ -118,7 +117,7 @@ TEXT_OFFSET := $(shell awk "BEGIN {srand(); printf \"0x%06x\n\", \
                  int(2 * 1024 * 1024 / (2 ^ $(CONFIG_ARM64_PAGE_SHIFT)) * \
                  rand()) * (2 ^ $(CONFIG_ARM64_PAGE_SHIFT))}")
 else
-TEXT_OFFSET := 0x00080000
+TEXT_OFFSET := 0x0
 endif
 
 ifeq ($(CONFIG_KASAN_SW_TAGS), y)
@@ -131,7 +130,7 @@ KBUILD_CFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
 KBUILD_CPPFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
 KBUILD_AFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
 
-export  TEXT_OFFSET GZFLAGS
+export  TEXT_OFFSET
 
 core-y          += arch/arm64/
 libs-y          := arch/arm64/lib/ $(libs-y)
...
@@ -39,25 +39,58 @@ alternative_if ARM64_HAS_GENERIC_AUTH
 alternative_else_nop_endif
         .endm
 
-        .macro ptrauth_keys_install_kernel tsk, sync, tmp1, tmp2, tmp3
-alternative_if ARM64_HAS_ADDRESS_AUTH
+        .macro __ptrauth_keys_install_kernel_nosync tsk, tmp1, tmp2, tmp3
         mov     \tmp1, #THREAD_KEYS_KERNEL
         add     \tmp1, \tsk, \tmp1
         ldp     \tmp2, \tmp3, [\tmp1, #PTRAUTH_KERNEL_KEY_APIA]
         msr_s   SYS_APIAKEYLO_EL1, \tmp2
         msr_s   SYS_APIAKEYHI_EL1, \tmp3
-        .if     \sync == 1
+        .endm
+
+        .macro ptrauth_keys_install_kernel_nosync tsk, tmp1, tmp2, tmp3
+alternative_if ARM64_HAS_ADDRESS_AUTH
+        __ptrauth_keys_install_kernel_nosync \tsk, \tmp1, \tmp2, \tmp3
+alternative_else_nop_endif
+        .endm
+
+        .macro ptrauth_keys_install_kernel tsk, tmp1, tmp2, tmp3
+alternative_if ARM64_HAS_ADDRESS_AUTH
+        __ptrauth_keys_install_kernel_nosync \tsk, \tmp1, \tmp2, \tmp3
         isb
-        .endif
 alternative_else_nop_endif
         .endm
 
+        .macro __ptrauth_keys_init_cpu tsk, tmp1, tmp2, tmp3
+        mrs     \tmp1, id_aa64isar1_el1
+        ubfx    \tmp1, \tmp1, #ID_AA64ISAR1_APA_SHIFT, #8
+        cbz     \tmp1, .Lno_addr_auth\@
+        mov_q   \tmp1, (SCTLR_ELx_ENIA | SCTLR_ELx_ENIB | \
+                        SCTLR_ELx_ENDA | SCTLR_ELx_ENDB)
+        mrs     \tmp2, sctlr_el1
+        orr     \tmp2, \tmp2, \tmp1
+        msr     sctlr_el1, \tmp2
+        __ptrauth_keys_install_kernel_nosync \tsk, \tmp1, \tmp2, \tmp3
+        isb
+.Lno_addr_auth\@:
+        .endm
+
+        .macro ptrauth_keys_init_cpu tsk, tmp1, tmp2, tmp3
+alternative_if_not ARM64_HAS_ADDRESS_AUTH
+        b       .Lno_addr_auth\@
+alternative_else_nop_endif
+        __ptrauth_keys_init_cpu \tsk, \tmp1, \tmp2, \tmp3
+.Lno_addr_auth\@:
+        .endm
+
 #else /* CONFIG_ARM64_PTR_AUTH */
 
         .macro ptrauth_keys_install_user tsk, tmp1, tmp2, tmp3
         .endm
 
-        .macro ptrauth_keys_install_kernel tsk, sync, tmp1, tmp2, tmp3
+        .macro ptrauth_keys_install_kernel_nosync tsk, tmp1, tmp2, tmp3
+        .endm
+
+        .macro ptrauth_keys_install_kernel tsk, tmp1, tmp2, tmp3
         .endm
 
 #endif /* CONFIG_ARM64_PTR_AUTH */
...
@@ -79,7 +79,7 @@ static inline void flush_icache_range(unsigned long start, unsigned long end)
          * IPI all online CPUs so that they undergo a context synchronization
          * event and are forced to refetch the new instructions.
          */
-#ifdef CONFIG_KGDB
+
         /*
          * KGDB performs cache maintenance with interrupts disabled, so we
          * will deadlock trying to IPI the secondary CPUs. In theory, we can
@@ -89,9 +89,9 @@ static inline void flush_icache_range(unsigned long start, unsigned long end)
          * the patching operation, so we don't need extra IPIs here anyway.
          * In which case, add a KGDB-specific bodge and return early.
          */
-        if (kgdb_connected && irqs_disabled())
+        if (in_dbg_master())
                 return;
-#endif
+
         kick_all_cpus_sync();
 }
...
@@ -2,8 +2,6 @@
 #ifndef __ASM_COMPILER_H
 #define __ASM_COMPILER_H
 
-#if defined(CONFIG_ARM64_PTR_AUTH)
-
 /*
  * The EL0/EL1 pointer bits used by a pointer authentication code.
  * This is dependent on TBI0/TBI1 being enabled, or bits 63:56 would also apply.
@@ -19,6 +17,4 @@
 #define __builtin_return_address(val)                                   \
         (void *)(ptrauth_clear_pac((unsigned long)__builtin_return_address(val)))
 
-#endif /* CONFIG_ARM64_PTR_AUTH */
-
 #endif /* __ASM_COMPILER_H */
@@ -33,6 +33,7 @@ struct cpuinfo_arm64 {
         u64             reg_id_aa64zfr0;
 
         u32             reg_id_dfr0;
+        u32             reg_id_dfr1;
         u32             reg_id_isar0;
         u32             reg_id_isar1;
         u32             reg_id_isar2;
@@ -44,8 +45,11 @@ struct cpuinfo_arm64 {
         u32             reg_id_mmfr1;
         u32             reg_id_mmfr2;
         u32             reg_id_mmfr3;
+        u32             reg_id_mmfr4;
+        u32             reg_id_mmfr5;
         u32             reg_id_pfr0;
         u32             reg_id_pfr1;
+        u32             reg_id_pfr2;
 
         u32             reg_mvfr0;
         u32             reg_mvfr1;
...
@@ -61,7 +61,8 @@
 #define ARM64_HAS_AMU_EXTN                      51
 #define ARM64_HAS_ADDRESS_AUTH                  52
 #define ARM64_HAS_GENERIC_AUTH                  53
+#define ARM64_HAS_32BIT_EL1                     54
 
-#define ARM64_NCAPS                             54
+#define ARM64_NCAPS                             55
 
 #endif /* __ASM_CPUCAPS_H */
@@ -551,6 +551,13 @@ static inline bool id_aa64mmfr0_mixed_endian_el0(u64 mmfr0)
                 cpuid_feature_extract_unsigned_field(mmfr0, ID_AA64MMFR0_BIGENDEL0_SHIFT) == 0x1;
 }
 
+static inline bool id_aa64pfr0_32bit_el1(u64 pfr0)
+{
+        u32 val = cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_EL1_SHIFT);
+
+        return val == ID_AA64PFR0_EL1_32BIT_64BIT;
+}
+
 static inline bool id_aa64pfr0_32bit_el0(u64 pfr0)
 {
         u32 val = cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_EL0_SHIFT);
@@ -745,6 +752,24 @@ static inline bool cpu_has_hw_af(void)
 extern bool cpu_has_amu_feat(int cpu);
 #endif
 
+static inline unsigned int get_vmid_bits(u64 mmfr1)
+{
+        int vmid_bits;
+
+        vmid_bits = cpuid_feature_extract_unsigned_field(mmfr1,
+                                                ID_AA64MMFR1_VMIDBITS_SHIFT);
+        if (vmid_bits == ID_AA64MMFR1_VMIDBITS_16)
+                return 16;
+
+        /*
+         * Return the default here even if any reserved
+         * value is fetched from the system register.
+         */
+        return 8;
+}
+
+u32 get_kvm_ipa_limit(void);
+
 #endif /* __ASSEMBLY__ */
 
 #endif
@@ -125,5 +125,7 @@ static inline int reinstall_suspended_bps(struct pt_regs *regs)
 
 int aarch32_break_handler(struct pt_regs *regs);
 
+void debug_traps_init(void);
+
 #endif  /* __ASSEMBLY */
 #endif  /* __ASM_DEBUG_MONITORS_H */
@@ -670,7 +670,7 @@ static inline int kvm_arm_have_ssbd(void)
 void kvm_vcpu_load_sysregs(struct kvm_vcpu *vcpu);
 void kvm_vcpu_put_sysregs(struct kvm_vcpu *vcpu);
 
-void kvm_set_ipa_limit(void);
+int kvm_set_ipa_limit(void);
 
 #define __KVM_HAVE_ARCH_VM_ALLOC
 struct kvm *kvm_arch_alloc_vm(void);
...
@@ -416,7 +416,7 @@ static inline unsigned int kvm_get_vmid_bits(void)
 {
         int reg = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
 
-        return (cpuid_feature_extract_unsigned_field(reg, ID_AA64MMFR1_VMIDBITS_SHIFT) == 2) ? 16 : 8;
+        return get_vmid_bits(reg);
 }
 
 /*
...
@@ -190,7 +190,6 @@
  * Memory Attribute override for Stage-2 (MemAttr[3:0])
  */
 #define PTE_S2_MEMATTR(t)       (_AT(pteval_t, (t)) << 2)
-#define PTE_S2_MEMATTR_MASK     (_AT(pteval_t, 0xf) << 2)
 
 /*
  * EL2/HYP PTE/PMD definitions
...
@@ -457,6 +457,7 @@ extern pgd_t init_pg_dir[PTRS_PER_PGD];
 extern pgd_t init_pg_end[];
 extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
 extern pgd_t idmap_pg_dir[PTRS_PER_PGD];
+extern pgd_t idmap_pg_end[];
 extern pgd_t tramp_pg_dir[PTRS_PER_PGD];
 
 extern void set_swapper_pgd(pgd_t *pgdp, pgd_t pgd);
@@ -508,7 +509,7 @@ static inline void pte_unmap(pte_t *pte) { }
 #define pte_set_fixmap_offset(pmd, addr)        pte_set_fixmap(pte_offset_phys(pmd, addr))
 #define pte_clear_fixmap()              clear_fixmap(FIX_PTE)
 
-#define pmd_page(pmd)           pfn_to_page(__phys_to_pfn(__pmd_to_phys(pmd)))
+#define pmd_page(pmd)                   phys_to_page(__pmd_to_phys(pmd))
 
 /* use ONLY for statically allocated translation tables */
 #define pte_offset_kimg(dir,addr)       ((pte_t *)__phys_to_kimg(pte_offset_phys((dir), (addr))))
@@ -566,7 +567,7 @@ static inline phys_addr_t pud_page_paddr(pud_t pud)
 #define pmd_set_fixmap_offset(pud, addr)        pmd_set_fixmap(pmd_offset_phys(pud, addr))
 #define pmd_clear_fixmap()              clear_fixmap(FIX_PMD)
 
-#define pud_page(pud)           pfn_to_page(__phys_to_pfn(__pud_to_phys(pud)))
+#define pud_page(pud)                   phys_to_page(__pud_to_phys(pud))
 
 /* use ONLY for statically allocated translation tables */
 #define pmd_offset_kimg(dir,addr)       ((pmd_t *)__phys_to_kimg(pmd_offset_phys((dir), (addr))))
@@ -624,7 +625,7 @@ static inline phys_addr_t pgd_page_paddr(pgd_t pgd)
 #define pud_set_fixmap_offset(pgd, addr)        pud_set_fixmap(pud_offset_phys(pgd, addr))
 #define pud_clear_fixmap()              clear_fixmap(FIX_PUD)
 
-#define pgd_page(pgd)           pfn_to_page(__phys_to_pfn(__pgd_to_phys(pgd)))
+#define pgd_page(pgd)                   phys_to_page(__pgd_to_phys(pgd))
 
 /* use ONLY for statically allocated translation tables */
 #define pud_offset_kimg(dir,addr)       ((pud_t *)__phys_to_kimg(pud_offset_phys((dir), (addr))))
...
@@ -23,14 +23,6 @@
 #define CPU_STUCK_REASON_52_BIT_VA      (UL(1) << CPU_STUCK_REASON_SHIFT)
 #define CPU_STUCK_REASON_NO_GRAN        (UL(2) << CPU_STUCK_REASON_SHIFT)
 
-/* Possible options for __cpu_setup */
-/* Option to setup primary cpu */
-#define ARM64_CPU_BOOT_PRIMARY          (1)
-/* Option to setup secondary cpus */
-#define ARM64_CPU_BOOT_SECONDARY        (2)
-/* Option to setup cpus for different cpu run time services */
-#define ARM64_CPU_RUNTIME               (3)
-
 #ifndef __ASSEMBLY__
 
 #include <asm/percpu.h>
@@ -96,9 +88,6 @@ asmlinkage void secondary_start_kernel(void);
 struct secondary_data {
         void *stack;
         struct task_struct *task;
-#ifdef CONFIG_ARM64_PTR_AUTH
-        struct ptrauth_keys_kernel ptrauth_key;
-#endif
         long status;
 };
...
@@ -68,12 +68,10 @@ extern void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk);
 
 DECLARE_PER_CPU(unsigned long *, irq_stack_ptr);
 
-static inline bool on_irq_stack(unsigned long sp,
+static inline bool on_stack(unsigned long sp, unsigned long low,
+                            unsigned long high, enum stack_type type,
                             struct stack_info *info)
 {
-        unsigned long low = (unsigned long)raw_cpu_read(irq_stack_ptr);
-        unsigned long high = low + IRQ_STACK_SIZE;
-
         if (!low)
                 return false;
 
@@ -83,12 +81,20 @@ static inline bool on_irq_stack(unsigned long sp,
         if (info) {
                 info->low = low;
                 info->high = high;
-                info->type = STACK_TYPE_IRQ;
+                info->type = type;
         }
 
         return true;
 }
 
+static inline bool on_irq_stack(unsigned long sp,
+                                struct stack_info *info)
+{
+        unsigned long low = (unsigned long)raw_cpu_read(irq_stack_ptr);
+        unsigned long high = low + IRQ_STACK_SIZE;
+
+        return on_stack(sp, low, high, STACK_TYPE_IRQ, info);
+}
+
 static inline bool on_task_stack(const struct task_struct *tsk,
                                  unsigned long sp,
                                  struct stack_info *info)
@@ -96,16 +102,7 @@ static inline bool on_task_stack(const struct task_struct *tsk,
         unsigned long low = (unsigned long)task_stack_page(tsk);
         unsigned long high = low + THREAD_SIZE;
 
-        if (sp < low || sp >= high)
-                return false;
-
-        if (info) {
-                info->low = low;
-                info->high = high;
-                info->type = STACK_TYPE_TASK;
-        }
-
-        return true;
+        return on_stack(sp, low, high, STACK_TYPE_TASK, info);
 }
 
 #ifdef CONFIG_VMAP_STACK
@@ -117,16 +114,7 @@ static inline bool on_overflow_stack(unsigned long sp,
         unsigned long low = (unsigned long)raw_cpu_ptr(overflow_stack);
         unsigned long high = low + OVERFLOW_STACK_SIZE;
 
-        if (sp < low || sp >= high)
-                return false;
-
-        if (info) {
-                info->low = low;
-                info->high = high;
-                info->type = STACK_TYPE_OVERFLOW;
-        }
-
-        return true;
+        return on_stack(sp, low, high, STACK_TYPE_OVERFLOW, info);
 }
 #else
 static inline bool on_overflow_stack(unsigned long sp,
...
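  With the common on_stack() helper above, a checker for any further stack
  type reduces to computing its bounds; a hypothetical sketch (the EXAMPLE_*
  names and example_stack_ptr are illustrative, not kernel symbols), matching
  the shape of the SDEI conversions later in this diff:

      static inline bool on_example_stack(unsigned long sp,
                                          struct stack_info *info)
      {
              /* assumed per-cpu base pointer and fixed stack size */
              unsigned long low = (unsigned long)raw_cpu_read(example_stack_ptr);
              unsigned long high = low + EXAMPLE_STACK_SIZE;

              return on_stack(sp, low, high, STACK_TYPE_EXAMPLE, info);
      }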
@@ -105,6 +105,10 @@
 #define SYS_DC_CSW                      sys_insn(1, 0, 7, 10, 2)
 #define SYS_DC_CISW                     sys_insn(1, 0, 7, 14, 2)
 
+/*
+ * System registers, organised loosely by encoding but grouped together
+ * where the architected name contains an index. e.g. ID_MMFR<n>_EL1.
+ */
 #define SYS_OSDTRRX_EL1                 sys_reg(2, 0, 0, 0, 2)
 #define SYS_MDCCINT_EL1                 sys_reg(2, 0, 0, 2, 0)
 #define SYS_MDSCR_EL1                   sys_reg(2, 0, 0, 2, 2)
@@ -134,12 +138,16 @@
 
 #define SYS_ID_PFR0_EL1                 sys_reg(3, 0, 0, 1, 0)
 #define SYS_ID_PFR1_EL1                 sys_reg(3, 0, 0, 1, 1)
+#define SYS_ID_PFR2_EL1                 sys_reg(3, 0, 0, 3, 4)
 #define SYS_ID_DFR0_EL1                 sys_reg(3, 0, 0, 1, 2)
+#define SYS_ID_DFR1_EL1                 sys_reg(3, 0, 0, 3, 5)
 #define SYS_ID_AFR0_EL1                 sys_reg(3, 0, 0, 1, 3)
 #define SYS_ID_MMFR0_EL1                sys_reg(3, 0, 0, 1, 4)
 #define SYS_ID_MMFR1_EL1                sys_reg(3, 0, 0, 1, 5)
 #define SYS_ID_MMFR2_EL1                sys_reg(3, 0, 0, 1, 6)
 #define SYS_ID_MMFR3_EL1                sys_reg(3, 0, 0, 1, 7)
+#define SYS_ID_MMFR4_EL1                sys_reg(3, 0, 0, 2, 6)
+#define SYS_ID_MMFR5_EL1                sys_reg(3, 0, 0, 3, 6)
 
 #define SYS_ID_ISAR0_EL1                sys_reg(3, 0, 0, 2, 0)
 #define SYS_ID_ISAR1_EL1                sys_reg(3, 0, 0, 2, 1)
@@ -147,7 +155,6 @@
 #define SYS_ID_ISAR3_EL1                sys_reg(3, 0, 0, 2, 3)
 #define SYS_ID_ISAR4_EL1                sys_reg(3, 0, 0, 2, 4)
 #define SYS_ID_ISAR5_EL1                sys_reg(3, 0, 0, 2, 5)
-#define SYS_ID_MMFR4_EL1                sys_reg(3, 0, 0, 2, 6)
 #define SYS_ID_ISAR6_EL1                sys_reg(3, 0, 0, 2, 7)
 
 #define SYS_MVFR0_EL1                   sys_reg(3, 0, 0, 3, 0)
@@ -594,6 +601,7 @@
 
 /* id_aa64isar0 */
 #define ID_AA64ISAR0_RNDR_SHIFT         60
+#define ID_AA64ISAR0_TLB_SHIFT          56
 #define ID_AA64ISAR0_TS_SHIFT           52
 #define ID_AA64ISAR0_FHM_SHIFT          48
 #define ID_AA64ISAR0_DP_SHIFT           44
@@ -637,6 +645,8 @@
 #define ID_AA64PFR0_CSV2_SHIFT          56
 #define ID_AA64PFR0_DIT_SHIFT           48
 #define ID_AA64PFR0_AMU_SHIFT           44
+#define ID_AA64PFR0_MPAM_SHIFT          40
+#define ID_AA64PFR0_SEL2_SHIFT          36
 #define ID_AA64PFR0_SVE_SHIFT           32
 #define ID_AA64PFR0_RAS_SHIFT           28
 #define ID_AA64PFR0_GIC_SHIFT           24
@@ -655,11 +665,16 @@
 #define ID_AA64PFR0_ASIMD_NI            0xf
 #define ID_AA64PFR0_ASIMD_SUPPORTED     0x0
 #define ID_AA64PFR0_EL1_64BIT_ONLY      0x1
+#define ID_AA64PFR0_EL1_32BIT_64BIT     0x2
 #define ID_AA64PFR0_EL0_64BIT_ONLY      0x1
 #define ID_AA64PFR0_EL0_32BIT_64BIT     0x2
 
 /* id_aa64pfr1 */
+#define ID_AA64PFR1_MPAMFRAC_SHIFT      16
+#define ID_AA64PFR1_RASFRAC_SHIFT       12
+#define ID_AA64PFR1_MTE_SHIFT           8
 #define ID_AA64PFR1_SSBS_SHIFT          4
+#define ID_AA64PFR1_BT_SHIFT            0
 
 #define ID_AA64PFR1_SSBS_PSTATE_NI      0
 #define ID_AA64PFR1_SSBS_PSTATE_ONLY    1
@@ -688,6 +703,9 @@
 #define ID_AA64ZFR0_SVEVER_SVE2         0x1
 
 /* id_aa64mmfr0 */
+#define ID_AA64MMFR0_TGRAN4_2_SHIFT     40
+#define ID_AA64MMFR0_TGRAN64_2_SHIFT    36
+#define ID_AA64MMFR0_TGRAN16_2_SHIFT    32
 #define ID_AA64MMFR0_TGRAN4_SHIFT       28
 #define ID_AA64MMFR0_TGRAN64_SHIFT      24
 #define ID_AA64MMFR0_TGRAN16_SHIFT      20
@@ -752,6 +770,25 @@
 
 #define ID_DFR0_PERFMON_8_1             0x4
 
+#define ID_ISAR4_SWP_FRAC_SHIFT         28
+#define ID_ISAR4_PSR_M_SHIFT            24
+#define ID_ISAR4_SYNCH_PRIM_FRAC_SHIFT  20
+#define ID_ISAR4_BARRIER_SHIFT          16
+#define ID_ISAR4_SMC_SHIFT              12
+#define ID_ISAR4_WRITEBACK_SHIFT        8
+#define ID_ISAR4_WITHSHIFTS_SHIFT       4
+#define ID_ISAR4_UNPRIV_SHIFT           0
+
+#define ID_DFR1_MTPMU_SHIFT             0
+
+#define ID_ISAR0_DIVIDE_SHIFT           24
+#define ID_ISAR0_DEBUG_SHIFT            20
+#define ID_ISAR0_COPROC_SHIFT           16
+#define ID_ISAR0_CMPBRANCH_SHIFT        12
+#define ID_ISAR0_BITFIELD_SHIFT         8
+#define ID_ISAR0_BITCOUNT_SHIFT         4
+#define ID_ISAR0_SWAP_SHIFT             0
+
 #define ID_ISAR5_RDM_SHIFT              24
 #define ID_ISAR5_CRC32_SHIFT            16
 #define ID_ISAR5_SHA2_SHIFT             12
@@ -767,6 +804,22 @@
 #define ID_ISAR6_DP_SHIFT               4
 #define ID_ISAR6_JSCVT_SHIFT            0
 
+#define ID_MMFR4_EVT_SHIFT              28
+#define ID_MMFR4_CCIDX_SHIFT            24
+#define ID_MMFR4_LSM_SHIFT              20
+#define ID_MMFR4_HPDS_SHIFT             16
+#define ID_MMFR4_CNP_SHIFT              12
+#define ID_MMFR4_XNX_SHIFT              8
+#define ID_MMFR4_SPECSEI_SHIFT          0
+
+#define ID_MMFR5_ETS_SHIFT              0
+
+#define ID_PFR0_DIT_SHIFT               24
+#define ID_PFR0_CSV2_SHIFT              16
+
+#define ID_PFR2_SSBS_SHIFT              4
+#define ID_PFR2_CSV3_SHIFT              0
+
 #define MVFR0_FPROUND_SHIFT             28
 #define MVFR0_FPSHVEC_SHIFT             24
 #define MVFR0_FPSQRT_SHIFT              20
@@ -785,17 +838,14 @@
 #define MVFR1_FPDNAN_SHIFT              4
 #define MVFR1_FPFTZ_SHIFT               0
 
-#define ID_AA64MMFR0_TGRAN4_SHIFT       28
-#define ID_AA64MMFR0_TGRAN64_SHIFT      24
-#define ID_AA64MMFR0_TGRAN16_SHIFT      20
-
-#define ID_AA64MMFR0_TGRAN4_NI          0xf
-#define ID_AA64MMFR0_TGRAN4_SUPPORTED   0x0
-#define ID_AA64MMFR0_TGRAN64_NI         0xf
-#define ID_AA64MMFR0_TGRAN64_SUPPORTED  0x0
-#define ID_AA64MMFR0_TGRAN16_NI         0x0
-#define ID_AA64MMFR0_TGRAN16_SUPPORTED  0x1
+#define ID_PFR1_GIC_SHIFT               28
+#define ID_PFR1_VIRT_FRAC_SHIFT         24
+#define ID_PFR1_SEC_FRAC_SHIFT          20
+#define ID_PFR1_GENTIMER_SHIFT          16
+#define ID_PFR1_VIRTUALIZATION_SHIFT    12
+#define ID_PFR1_MPROGMOD_SHIFT          8
+#define ID_PFR1_SECURITY_SHIFT          4
+#define ID_PFR1_PROGMOD_SHIFT           0
 
 #if defined(CONFIG_ARM64_4K_PAGES)
 #define ID_AA64MMFR0_TGRAN_SHIFT        ID_AA64MMFR0_TGRAN4_SHIFT
...
@@ -92,9 +92,6 @@ int main(void)
   BLANK();
   DEFINE(CPU_BOOT_STACK,        offsetof(struct secondary_data, stack));
   DEFINE(CPU_BOOT_TASK,         offsetof(struct secondary_data, task));
-#ifdef CONFIG_ARM64_PTR_AUTH
-  DEFINE(CPU_BOOT_PTRAUTH_KEY,  offsetof(struct secondary_data, ptrauth_key));
-#endif
   BLANK();
 #ifdef CONFIG_KVM_ARM_HOST
   DEFINE(VCPU_CONTEXT,          offsetof(struct kvm_vcpu, arch.ctxt));
...
@@ -774,7 +774,7 @@ static const struct midr_range erratum_speculative_at_vhe_list[] = {
 const struct arm64_cpu_capabilities arm64_errata[] = {
 #ifdef CONFIG_ARM64_WORKAROUND_CLEAN_CACHE
         {
-                .desc = "ARM errata 826319, 827319, 824069, 819472",
+                .desc = "ARM errata 826319, 827319, 824069, or 819472",
                 .capability = ARM64_WORKAROUND_CLEAN_CACHE,
                 ERRATA_MIDR_RANGE_LIST(workaround_clean_cache),
                 .cpu_enable = cpu_enable_cache_maint_trap,
@@ -856,7 +856,7 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 #endif
 #ifdef CONFIG_ARM64_WORKAROUND_REPEAT_TLBI
         {
-                .desc = "Qualcomm erratum 1009, ARM erratum 1286807",
+                .desc = "Qualcomm erratum 1009, or ARM erratum 1286807",
                 .capability = ARM64_WORKAROUND_REPEAT_TLBI,
                 .type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
                 .matches = cpucap_multi_entry_cap_matches,
@@ -899,7 +899,7 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 #endif
 #ifdef CONFIG_ARM64_WORKAROUND_SPECULATIVE_AT_VHE
         {
-                .desc = "ARM errata 1165522, 1530923",
+                .desc = "ARM errata 1165522 or 1530923",
                 .capability = ARM64_WORKAROUND_SPECULATIVE_AT_VHE,
                 ERRATA_MIDR_RANGE_LIST(erratum_speculative_at_vhe_list),
         },
...
This diff is collapsed.
@@ -311,6 +311,8 @@ static int __init cpuinfo_regs_init(void)
         }
         return 0;
 }
+device_initcall(cpuinfo_regs_init);
+
 static void cpuinfo_detect_icache_policy(struct cpuinfo_arm64 *info)
 {
         unsigned int cpu = smp_processor_id();
@@ -362,6 +364,7 @@ static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info)
         /* Update the 32bit ID registers only if AArch32 is implemented */
         if (id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0)) {
                 info->reg_id_dfr0 = read_cpuid(ID_DFR0_EL1);
+                info->reg_id_dfr1 = read_cpuid(ID_DFR1_EL1);
                 info->reg_id_isar0 = read_cpuid(ID_ISAR0_EL1);
                 info->reg_id_isar1 = read_cpuid(ID_ISAR1_EL1);
                 info->reg_id_isar2 = read_cpuid(ID_ISAR2_EL1);
@@ -373,8 +376,11 @@ static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info)
                 info->reg_id_mmfr1 = read_cpuid(ID_MMFR1_EL1);
                 info->reg_id_mmfr2 = read_cpuid(ID_MMFR2_EL1);
                 info->reg_id_mmfr3 = read_cpuid(ID_MMFR3_EL1);
+                info->reg_id_mmfr4 = read_cpuid(ID_MMFR4_EL1);
+                info->reg_id_mmfr5 = read_cpuid(ID_MMFR5_EL1);
                 info->reg_id_pfr0 = read_cpuid(ID_PFR0_EL1);
                 info->reg_id_pfr1 = read_cpuid(ID_PFR1_EL1);
+                info->reg_id_pfr2 = read_cpuid(ID_PFR2_EL1);
 
                 info->reg_mvfr0 = read_cpuid(MVFR0_EL1);
                 info->reg_mvfr1 = read_cpuid(MVFR1_EL1);
@@ -403,5 +409,3 @@ void __init cpuinfo_store_boot_cpu(void)
         boot_cpu_data = *info;
         init_cpu_features(&boot_cpu_data);
 }
-
-device_initcall(cpuinfo_regs_init);
@@ -5,6 +5,7 @@
  */
 
 #include <linux/crash_core.h>
+#include <asm/cpufeature.h>
 #include <asm/memory.h>
 
 void arch_crash_save_vmcoreinfo(void)
@@ -16,4 +17,7 @@ void arch_crash_save_vmcoreinfo(void)
         vmcoreinfo_append_str("NUMBER(PHYS_OFFSET)=0x%llx\n",
                                                 PHYS_OFFSET);
         vmcoreinfo_append_str("KERNELOFFSET=%lx\n", kaslr_offset());
+        vmcoreinfo_append_str("NUMBER(KERNELPACMASK)=0x%llx\n",
+                                                system_supports_address_auth() ?
+                                                ptrauth_kernel_pac_mask() : 0);
 }
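  A vmcoreinfo consumer such as a crash tool can use the exported mask to
  strip the PAC from saved kernel pointers; a hedged sketch (the helper name
  is illustrative, not part of any ABI). Since kernel virtual addresses have
  their top bits set, the masked-out bits are restored to ones rather than
  cleared:

      #include <stdint.h>

      static uint64_t strip_kernel_pac(uint64_t ptr, uint64_t kernelpacmask)
      {
              /* kernelpacmask as read from NUMBER(KERNELPACMASK) */
              return ptr | kernelpacmask;
      }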
@@ -376,15 +376,13 @@ int aarch32_break_handler(struct pt_regs *regs)
 }
 NOKPROBE_SYMBOL(aarch32_break_handler);
 
-static int __init debug_traps_init(void)
+void __init debug_traps_init(void)
 {
         hook_debug_fault_code(DBG_ESR_EVT_HWSS, single_step_handler, SIGTRAP,
                               TRAP_TRACE, "single-step handler");
         hook_debug_fault_code(DBG_ESR_EVT_BRK, brk_handler, SIGTRAP,
                               TRAP_BRKPT, "ptrace BRK handler");
-        return 0;
 }
-arch_initcall(debug_traps_init);
 
 /* Re-enable single step for syscall restarting. */
 void user_rewind_single_step(struct task_struct *task)
...
@@ -19,7 +19,7 @@ SYM_CODE_START(efi_enter_kernel)
          * point stored in x0. Save those values in registers which are
          * callee preserved.
          */
-        ldr     w2, =stext_offset
+        ldr     w2, =primary_entry_offset
         add     x19, x0, x2             // relocated Image entrypoint
         mov     x20, x1                 // DTB address
...
@@ -32,7 +32,7 @@ optional_header:
 extra_header_fields:
         .quad   0                               // ImageBase
-        .long   SZ_4K                           // SectionAlignment
+        .long   SEGMENT_ALIGN                   // SectionAlignment
         .long   PECOFF_FILE_ALIGNMENT           // FileAlignment
         .short  0                               // MajorOperatingSystemVersion
         .short  0                               // MinorOperatingSystemVersion
...
@@ -94,7 +94,7 @@ asmlinkage void notrace el1_sync_handler(struct pt_regs *regs)
                 break;
         default:
                 el1_inv(regs, esr);
-        };
+        }
 }
 NOKPROBE_SYMBOL(el1_sync_handler);
...
@@ -178,7 +178,7 @@ alternative_cb_end
 
         apply_ssbd 1, x22, x23
 
-        ptrauth_keys_install_kernel tsk, 1, x20, x22, x23
+        ptrauth_keys_install_kernel tsk, x20, x22, x23
         .else
         add     x21, sp, #S_FRAME_SIZE
         get_current_task tsk
@@ -900,7 +900,7 @@ SYM_FUNC_START(cpu_switch_to)
         ldr     lr, [x8]
         mov     sp, x9
         msr     sp_el0, x1
-        ptrauth_keys_install_kernel x1, 1, x8, x9, x10
+        ptrauth_keys_install_kernel x1, x8, x9, x10
         ret
 SYM_FUNC_END(cpu_switch_to)
 NOKPROBE(cpu_switch_to)
...
@@ -13,6 +13,7 @@
 #include <linux/init.h>
 #include <linux/irqchip/arm-gic-v3.h>
 
+#include <asm/asm_pointer_auth.h>
 #include <asm/assembler.h>
 #include <asm/boot.h>
 #include <asm/ptrace.h>
@@ -70,9 +71,9 @@ _head:
          * its opcode forms the magic "MZ" signature required by UEFI.
          */
         add     x13, x18, #0x16
-        b       stext
+        b       primary_entry
 #else
-        b       stext                           // branch to kernel start, magic
+        b       primary_entry                   // branch to kernel start, magic
         .long   0                               // reserved
 #endif
         le64sym _kernel_offset_le               // Image load offset from start of RAM, little-endian
@@ -98,14 +99,13 @@ pe_header:
          * primary lowlevel boot path:
          *
          *  Register   Scope                      Purpose
-         *  x21        stext() .. start_kernel()  FDT pointer passed at boot in x0
-         *  x23        stext() .. start_kernel()  physical misalignment/KASLR offset
+         *  x21        primary_entry() .. start_kernel()  FDT pointer passed at boot in x0
+         *  x23        primary_entry() .. start_kernel()  physical misalignment/KASLR offset
          *  x28        __create_page_tables()     callee preserved temp register
          *  x19/x20    __primary_switch()         callee preserved temp registers
-         *  x24        __primary_switch() .. relocate_kernel()
-         *             current RELR displacement
+         *  x24        __primary_switch() .. relocate_kernel()  current RELR displacement
          */
-SYM_CODE_START(stext)
+SYM_CODE_START(primary_entry)
         bl      preserve_boot_args
         bl      el2_setup                       // Drop to EL1, w0=cpu_boot_mode
         adrp    x23, __PHYS_OFFSET
@@ -118,10 +118,9 @@ SYM_CODE_START(stext)
          * On return, the CPU will be ready for the MMU to be turned on and
          * the TCR will have been set.
          */
-        mov     x0, #ARM64_CPU_BOOT_PRIMARY
         bl      __cpu_setup                     // initialise processor
         b       __primary_switch
-SYM_CODE_END(stext)
+SYM_CODE_END(primary_entry)
 
 /*
  * Preserve the arguments passed by the bootloader in x0 .. x3
@@ -394,13 +393,19 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
 
         /*
          * Since the page tables have been populated with non-cacheable
-         * accesses (MMU disabled), invalidate the idmap and swapper page
-         * tables again to remove any speculatively loaded cache lines.
+         * accesses (MMU disabled), invalidate those tables again to
+         * remove any speculatively loaded cache lines.
          */
+        dmb     sy
+
         adrp    x0, idmap_pg_dir
+        adrp    x1, idmap_pg_end
+        sub     x1, x1, x0
+        bl      __inval_dcache_area
+
+        adrp    x0, init_pg_dir
         adrp    x1, init_pg_end
         sub     x1, x1, x0
-        dmb     sy
         bl      __inval_dcache_area
 
         ret     x28
@@ -417,6 +422,10 @@ SYM_FUNC_START_LOCAL(__primary_switched)
         adr_l   x5, init_task
         msr     sp_el0, x5                      // Save thread_info
 
+#ifdef CONFIG_ARM64_PTR_AUTH
+        __ptrauth_keys_init_cpu x5, x6, x7, x8
+#endif
+
         adr_l   x8, vectors                     // load VBAR_EL1 with virtual
         msr     vbar_el1, x8                    // vector table address
         isb
@@ -717,7 +726,6 @@ SYM_FUNC_START_LOCAL(secondary_startup)
          * Common entry point for secondary CPUs.
          */
         bl      __cpu_secondary_check52bitva
-        mov     x0, #ARM64_CPU_BOOT_SECONDARY
         bl      __cpu_setup                     // initialise processor
         adrp    x1, swapper_pg_dir
         bl      __enable_mmu
@@ -739,6 +747,11 @@ SYM_FUNC_START_LOCAL(__secondary_switched)
         msr     sp_el0, x2
         mov     x29, #0
         mov     x30, #0
+
+#ifdef CONFIG_ARM64_PTR_AUTH
+        ptrauth_keys_init_cpu x2, x3, x4, x5
+#endif
+
         b       secondary_start_kernel
 SYM_FUNC_END(__secondary_switched)
...
@@ -13,7 +13,7 @@
 #ifdef CONFIG_EFI
 
 __efistub_kernel_size           = _edata - _text;
-__efistub_stext_offset          = stext - _text;
+__efistub_primary_entry_offset  = primary_entry - _text;
 
 /*
...
@@ -1535,16 +1535,10 @@ static u32 aarch64_encode_immediate(u64 imm,
                                          u32 insn)
 {
         unsigned int immr, imms, n, ones, ror, esz, tmp;
-        u64 mask = ~0UL;
-
-        /* Can't encode full zeroes or full ones */
-        if (!imm || !~imm)
-                return AARCH64_BREAK_FAULT;
+        u64 mask;
 
         switch (variant) {
         case AARCH64_INSN_VARIANT_32BIT:
-                if (upper_32_bits(imm))
-                        return AARCH64_BREAK_FAULT;
                 esz = 32;
                 break;
         case AARCH64_INSN_VARIANT_64BIT:
@@ -1556,6 +1550,12 @@ static u32 aarch64_encode_immediate(u64 imm,
                 return AARCH64_BREAK_FAULT;
         }
 
+        mask = GENMASK(esz - 1, 0);
+
+        /* Can't encode full zeroes, full ones, or value wider than the mask */
+        if (!imm || imm == mask || imm & ~mask)
+                return AARCH64_BREAK_FAULT;
+
         /*
          * Inverse of Replicate(). Try to spot a repeating pattern
          * with a pow2 stride.
...
@@ -138,12 +138,12 @@ static int setup_dtb(struct kimage *image,
 
         /* add rng-seed */
         if (rng_is_initialized()) {
-                u8 rng_seed[RNG_SEED_SIZE];
-                get_random_bytes(rng_seed, RNG_SEED_SIZE);
-                ret = fdt_setprop(dtb, off, FDT_PROP_RNG_SEED, rng_seed,
-                                RNG_SEED_SIZE);
+                void *rng_seed;
+                ret = fdt_setprop_placeholder(dtb, off, FDT_PROP_RNG_SEED,
+                                RNG_SEED_SIZE, &rng_seed);
                 if (ret)
                         goto out;
+                get_random_bytes(rng_seed, RNG_SEED_SIZE);
         } else {
                 pr_notice("RNG is not initialised: omitting \"%s\" property\n",
                                 FDT_PROP_RNG_SEED);
@@ -284,7 +284,7 @@ int load_other_segments(struct kimage *image,
                 image->arch.elf_headers_sz = headers_sz;
 
                 pr_debug("Loaded elf core header at 0x%lx bufsz=0x%lx memsz=0x%lx\n",
-                         image->arch.elf_headers_mem, headers_sz, headers_sz);
+                         image->arch.elf_headers_mem, kbuf.bufsz, kbuf.memsz);
         }
 
         /* load initrd */
@@ -305,7 +305,7 @@ int load_other_segments(struct kimage *image,
                 initrd_load_addr = kbuf.mem;
 
                 pr_debug("Loaded initrd at 0x%lx bufsz=0x%lx memsz=0x%lx\n",
-                                initrd_load_addr, initrd_len, initrd_len);
+                                initrd_load_addr, kbuf.bufsz, kbuf.memsz);
         }
 
         /* load dtb */
@@ -332,7 +332,7 @@ int load_other_segments(struct kimage *image,
         image->arch.dtb_mem = kbuf.mem;
 
         pr_debug("Loaded dtb at 0x%lx bufsz=0x%lx memsz=0x%lx\n",
-                 kbuf.mem, dtb_len, dtb_len);
+                 kbuf.mem, kbuf.bufsz, kbuf.memsz);
 
         return 0;
...
@@ -120,7 +120,7 @@ static bool has_pv_steal_clock(void)
	struct arm_smccc_res res;
	/* To detect the presence of PV time support we require SMCCC 1.1+ */
-	if (psci_ops.smccc_version < SMCCC_VERSION_1_1)
+	if (arm_smccc_1_1_get_conduit() == SMCCC_CONDUIT_NONE)
		return false;
	arm_smccc_1_1_invoke(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
......
@@ -95,19 +95,7 @@ static bool on_sdei_normal_stack(unsigned long sp, struct stack_info *info)
	unsigned long low = (unsigned long)raw_cpu_read(sdei_stack_normal_ptr);
	unsigned long high = low + SDEI_STACK_SIZE;
-	if (!low)
-		return false;
-	if (sp < low || sp >= high)
-		return false;
-	if (info) {
-		info->low = low;
-		info->high = high;
-		info->type = STACK_TYPE_SDEI_NORMAL;
-	}
-	return true;
+	return on_stack(sp, low, high, STACK_TYPE_SDEI_NORMAL, info);
}
static bool on_sdei_critical_stack(unsigned long sp, struct stack_info *info)
@@ -115,19 +103,7 @@ static bool on_sdei_critical_stack(unsigned long sp, struct stack_info *info)
	unsigned long low = (unsigned long)raw_cpu_read(sdei_stack_critical_ptr);
	unsigned long high = low + SDEI_STACK_SIZE;
-	if (!low)
-		return false;
-	if (sp < low || sp >= high)
-		return false;
-	if (info) {
-		info->low = low;
-		info->high = high;
-		info->type = STACK_TYPE_SDEI_CRITICAL;
-	}
-	return true;
+	return on_stack(sp, low, high, STACK_TYPE_SDEI_CRITICAL, info);
}
bool _on_sdei_stack(unsigned long sp, struct stack_info *info)
......
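Both helpers above now delegate to a common range check. A plausible shape of on_stack(), reconstructed for reference (the kernel's definition lives in asm/stacktrace.h and may differ in detail):

	static inline bool on_stack(unsigned long sp, unsigned long low,
				    unsigned long high, enum stack_type type,
				    struct stack_info *info)
	{
		if (!low || sp < low || sp >= high)
			return false;
		if (info) {
			info->low = low;
			info->high = high;
			info->type = type;
		}
		return true;
	}

The !low test preserves the old behaviour for CPUs whose SDEI stacks have not been allocated yet.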
@@ -100,7 +100,6 @@ ENDPROC(__cpu_suspend_enter)
	.pushsection ".idmap.text", "awx"
ENTRY(cpu_resume)
	bl	el2_setup		// if in EL2 drop to EL1 cleanly
-	mov	x0, #ARM64_CPU_RUNTIME
	bl	__cpu_setup
	/* enable the MMU early - so we can access sleep_save_stash by va */
	adrp	x1, swapper_pg_dir
......
@@ -65,7 +65,7 @@ EXPORT_PER_CPU_SYMBOL(cpu_number);
 */
struct secondary_data secondary_data;
/* Number of CPUs which aren't online, but looping in kernel text. */
-int cpus_stuck_in_kernel;
+static int cpus_stuck_in_kernel;
enum ipi_msg_type {
	IPI_RESCHEDULE,
@@ -114,10 +114,6 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
	 */
	secondary_data.task = idle;
	secondary_data.stack = task_stack_page(idle) + THREAD_SIZE;
-#if defined(CONFIG_ARM64_PTR_AUTH)
-	secondary_data.ptrauth_key.apia.lo = idle->thread.keys_kernel.apia.lo;
-	secondary_data.ptrauth_key.apia.hi = idle->thread.keys_kernel.apia.hi;
-#endif
	update_cpu_boot_status(CPU_MMU_OFF);
	__flush_dcache_area(&secondary_data, sizeof(secondary_data));
@@ -140,10 +136,6 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
		pr_crit("CPU%u: failed to come online\n", cpu);
		secondary_data.task = NULL;
		secondary_data.stack = NULL;
-#if defined(CONFIG_ARM64_PTR_AUTH)
-		secondary_data.ptrauth_key.apia.lo = 0;
-		secondary_data.ptrauth_key.apia.hi = 0;
-#endif
		__flush_dcache_area(&secondary_data, sizeof(secondary_data));
		status = READ_ONCE(secondary_data.status);
		if (status == CPU_MMU_OFF)
......
@@ -1047,11 +1047,11 @@ int __init early_brk64(unsigned long addr, unsigned int esr,
	return bug_handler(regs, esr) != DBG_HOOK_HANDLED;
}
+/* This registration must happen early, before debug_traps_init(). */
void __init trap_init(void)
{
	register_kernel_break_hook(&bug_break_hook);
#ifdef CONFIG_KASAN_SW_TAGS
	register_kernel_break_hook(&kasan_break_hook);
#endif
+	debug_traps_init();
}
@@ -33,20 +33,14 @@ extern char vdso_start[], vdso_end[];
extern char vdso32_start[], vdso32_end[];
#endif /* CONFIG_COMPAT_VDSO */
-/* vdso_lookup arch_index */
-enum arch_vdso_type {
-	ARM64_VDSO = 0,
+enum vdso_abi {
+	VDSO_ABI_AA64,
#ifdef CONFIG_COMPAT_VDSO
-	ARM64_VDSO32 = 1,
+	VDSO_ABI_AA32,
#endif /* CONFIG_COMPAT_VDSO */
};
-#ifdef CONFIG_COMPAT_VDSO
-#define VDSO_TYPES	(ARM64_VDSO32 + 1)
-#else
-#define VDSO_TYPES	(ARM64_VDSO + 1)
-#endif /* CONFIG_COMPAT_VDSO */
-struct __vdso_abi {
+struct vdso_abi_info {
	const char *name;
	const char *vdso_code_start;
	const char *vdso_code_end;
@@ -57,14 +51,14 @@ struct __vdso_abi {
	struct vm_special_mapping *cm;
};
-static struct __vdso_abi vdso_lookup[VDSO_TYPES] __ro_after_init = {
-	{
+static struct vdso_abi_info vdso_info[] __ro_after_init = {
+	[VDSO_ABI_AA64] = {
		.name = "vdso",
		.vdso_code_start = vdso_start,
		.vdso_code_end = vdso_end,
	},
#ifdef CONFIG_COMPAT_VDSO
-	{
+	[VDSO_ABI_AA32] = {
		.name = "vdso32",
		.vdso_code_start = vdso32_start,
		.vdso_code_end = vdso32_end,
@@ -81,13 +75,13 @@ static union {
} vdso_data_store __page_aligned_data;
struct vdso_data *vdso_data = vdso_data_store.data;
-static int __vdso_remap(enum arch_vdso_type arch_index,
+static int __vdso_remap(enum vdso_abi abi,
			const struct vm_special_mapping *sm,
			struct vm_area_struct *new_vma)
{
	unsigned long new_size = new_vma->vm_end - new_vma->vm_start;
-	unsigned long vdso_size = vdso_lookup[arch_index].vdso_code_end -
-				  vdso_lookup[arch_index].vdso_code_start;
+	unsigned long vdso_size = vdso_info[abi].vdso_code_end -
+				  vdso_info[abi].vdso_code_start;
	if (vdso_size != new_size)
		return -EINVAL;
@@ -97,24 +91,24 @@ static int __vdso_remap(enum arch_vdso_type arch_index,
	return 0;
}
-static int __vdso_init(enum arch_vdso_type arch_index)
+static int __vdso_init(enum vdso_abi abi)
{
	int i;
	struct page **vdso_pagelist;
	unsigned long pfn;
-	if (memcmp(vdso_lookup[arch_index].vdso_code_start, "\177ELF", 4)) {
+	if (memcmp(vdso_info[abi].vdso_code_start, "\177ELF", 4)) {
		pr_err("vDSO is not a valid ELF object!\n");
		return -EINVAL;
	}
-	vdso_lookup[arch_index].vdso_pages = (
-		vdso_lookup[arch_index].vdso_code_end -
-		vdso_lookup[arch_index].vdso_code_start) >>
+	vdso_info[abi].vdso_pages = (
+		vdso_info[abi].vdso_code_end -
+		vdso_info[abi].vdso_code_start) >>
		PAGE_SHIFT;
	/* Allocate the vDSO pagelist, plus a page for the data. */
-	vdso_pagelist = kcalloc(vdso_lookup[arch_index].vdso_pages + 1,
+	vdso_pagelist = kcalloc(vdso_info[abi].vdso_pages + 1,
				sizeof(struct page *),
				GFP_KERNEL);
	if (vdso_pagelist == NULL)
@@ -125,18 +119,18 @@ static int __vdso_init(enum arch_vdso_type arch_index)
	/* Grab the vDSO code pages. */
-	pfn = sym_to_pfn(vdso_lookup[arch_index].vdso_code_start);
+	pfn = sym_to_pfn(vdso_info[abi].vdso_code_start);
-	for (i = 0; i < vdso_lookup[arch_index].vdso_pages; i++)
+	for (i = 0; i < vdso_info[abi].vdso_pages; i++)
		vdso_pagelist[i + 1] = pfn_to_page(pfn + i);
-	vdso_lookup[arch_index].dm->pages = &vdso_pagelist[0];
-	vdso_lookup[arch_index].cm->pages = &vdso_pagelist[1];
+	vdso_info[abi].dm->pages = &vdso_pagelist[0];
+	vdso_info[abi].cm->pages = &vdso_pagelist[1];
	return 0;
}
-static int __setup_additional_pages(enum arch_vdso_type arch_index,
+static int __setup_additional_pages(enum vdso_abi abi,
				    struct mm_struct *mm,
				    struct linux_binprm *bprm,
				    int uses_interp)
@@ -144,7 +138,7 @@ static int __setup_additional_pages(enum arch_vdso_type arch_index,
	unsigned long vdso_base, vdso_text_len, vdso_mapping_len;
	void *ret;
-	vdso_text_len = vdso_lookup[arch_index].vdso_pages << PAGE_SHIFT;
+	vdso_text_len = vdso_info[abi].vdso_pages << PAGE_SHIFT;
	/* Be sure to map the data page */
	vdso_mapping_len = vdso_text_len + PAGE_SIZE;
@@ -156,7 +150,7 @@ static int __setup_additional_pages(enum arch_vdso_type arch_index,
	ret = _install_special_mapping(mm, vdso_base, PAGE_SIZE,
				       VM_READ|VM_MAYREAD,
-				       vdso_lookup[arch_index].dm);
+				       vdso_info[abi].dm);
	if (IS_ERR(ret))
		goto up_fail;
@@ -165,7 +159,7 @@ static int __setup_additional_pages(enum arch_vdso_type arch_index,
	ret = _install_special_mapping(mm, vdso_base, vdso_text_len,
				       VM_READ|VM_EXEC|
				       VM_MAYREAD|VM_MAYWRITE|VM_MAYEXEC,
-				       vdso_lookup[arch_index].cm);
+				       vdso_info[abi].cm);
	if (IS_ERR(ret))
		goto up_fail;
@@ -184,46 +178,42 @@ static int __setup_additional_pages(enum arch_vdso_type arch_index,
static int aarch32_vdso_mremap(const struct vm_special_mapping *sm,
			       struct vm_area_struct *new_vma)
{
-	return __vdso_remap(ARM64_VDSO32, sm, new_vma);
+	return __vdso_remap(VDSO_ABI_AA32, sm, new_vma);
}
#endif /* CONFIG_COMPAT_VDSO */
-/*
- * aarch32_vdso_pages:
- * 0 - kuser helpers
- * 1 - sigreturn code
- * or (CONFIG_COMPAT_VDSO):
- * 0 - kuser helpers
- * 1 - vdso data
- * 2 - vdso code
- */
-#define C_VECTORS	0
+enum aarch32_map {
+	AA32_MAP_VECTORS, /* kuser helpers */
#ifdef CONFIG_COMPAT_VDSO
-#define C_VVAR		1
-#define C_VDSO		2
-#define C_PAGES		(C_VDSO + 1)
+	AA32_MAP_VVAR,
+	AA32_MAP_VDSO,
#else
-#define C_SIGPAGE	1
-#define C_PAGES		(C_SIGPAGE + 1)
-#endif /* CONFIG_COMPAT_VDSO */
-static struct page *aarch32_vdso_pages[C_PAGES] __ro_after_init;
-static struct vm_special_mapping aarch32_vdso_spec[C_PAGES] = {
-	{
+	AA32_MAP_SIGPAGE
+#endif
+};
+static struct page *aarch32_vectors_page __ro_after_init;
+#ifndef CONFIG_COMPAT_VDSO
+static struct page *aarch32_sig_page __ro_after_init;
+#endif
+static struct vm_special_mapping aarch32_vdso_maps[] = {
+	[AA32_MAP_VECTORS] = {
		.name	= "[vectors]", /* ABI */
-		.pages	= &aarch32_vdso_pages[C_VECTORS],
+		.pages	= &aarch32_vectors_page,
	},
#ifdef CONFIG_COMPAT_VDSO
-	{
+	[AA32_MAP_VVAR] = {
		.name = "[vvar]",
	},
-	{
+	[AA32_MAP_VDSO] = {
		.name = "[vdso]",
		.mremap = aarch32_vdso_mremap,
	},
#else
-	{
+	[AA32_MAP_SIGPAGE] = {
		.name = "[sigpage]", /* ABI */
-		.pages	= &aarch32_vdso_pages[C_SIGPAGE],
+		.pages	= &aarch32_sig_page,
	},
#endif /* CONFIG_COMPAT_VDSO */
};
@@ -243,8 +233,8 @@ static int aarch32_alloc_kuser_vdso_page(void)
	memcpy((void *)(vdso_page + 0x1000 - kuser_sz), __kuser_helper_start,
	       kuser_sz);
-	aarch32_vdso_pages[C_VECTORS] = virt_to_page(vdso_page);
-	flush_dcache_page(aarch32_vdso_pages[C_VECTORS]);
+	aarch32_vectors_page = virt_to_page(vdso_page);
+	flush_dcache_page(aarch32_vectors_page);
	return 0;
}
@@ -253,10 +243,10 @@ static int __aarch32_alloc_vdso_pages(void)
{
	int ret;
-	vdso_lookup[ARM64_VDSO32].dm = &aarch32_vdso_spec[C_VVAR];
-	vdso_lookup[ARM64_VDSO32].cm = &aarch32_vdso_spec[C_VDSO];
+	vdso_info[VDSO_ABI_AA32].dm = &aarch32_vdso_maps[AA32_MAP_VVAR];
+	vdso_info[VDSO_ABI_AA32].cm = &aarch32_vdso_maps[AA32_MAP_VDSO];
-	ret = __vdso_init(ARM64_VDSO32);
+	ret = __vdso_init(VDSO_ABI_AA32);
	if (ret)
		return ret;
@@ -275,8 +265,8 @@ static int __aarch32_alloc_vdso_pages(void)
		return -ENOMEM;
	memcpy((void *)sigpage, __aarch32_sigret_code_start, sigret_sz);
-	aarch32_vdso_pages[C_SIGPAGE] = virt_to_page(sigpage);
-	flush_dcache_page(aarch32_vdso_pages[C_SIGPAGE]);
+	aarch32_sig_page = virt_to_page(sigpage);
+	flush_dcache_page(aarch32_sig_page);
	ret = aarch32_alloc_kuser_vdso_page();
	if (ret)
@@ -306,7 +296,7 @@ static int aarch32_kuser_helpers_setup(struct mm_struct *mm)
	ret = _install_special_mapping(mm, AARCH32_VECTORS_BASE, PAGE_SIZE,
				       VM_READ | VM_EXEC |
				       VM_MAYREAD | VM_MAYEXEC,
-				       &aarch32_vdso_spec[C_VECTORS]);
+				       &aarch32_vdso_maps[AA32_MAP_VECTORS]);
	return PTR_ERR_OR_ZERO(ret);
}
@@ -330,7 +320,7 @@ static int aarch32_sigreturn_setup(struct mm_struct *mm)
	ret = _install_special_mapping(mm, addr, PAGE_SIZE,
				       VM_READ | VM_EXEC | VM_MAYREAD |
				       VM_MAYWRITE | VM_MAYEXEC,
-				       &aarch32_vdso_spec[C_SIGPAGE]);
+				       &aarch32_vdso_maps[AA32_MAP_SIGPAGE]);
	if (IS_ERR(ret))
		goto out;
@@ -354,7 +344,7 @@ int aarch32_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
		goto out;
#ifdef CONFIG_COMPAT_VDSO
-	ret = __setup_additional_pages(ARM64_VDSO32,
+	ret = __setup_additional_pages(VDSO_ABI_AA32,
				       mm,
				       bprm,
				       uses_interp);
@@ -371,22 +361,19 @@ int aarch32_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
static int vdso_mremap(const struct vm_special_mapping *sm,
		       struct vm_area_struct *new_vma)
{
-	return __vdso_remap(ARM64_VDSO, sm, new_vma);
+	return __vdso_remap(VDSO_ABI_AA64, sm, new_vma);
}
-/*
- * aarch64_vdso_pages:
- * 0 - vvar
- * 1 - vdso
- */
-#define A_VVAR		0
-#define A_VDSO		1
-#define A_PAGES		(A_VDSO + 1)
-static struct vm_special_mapping vdso_spec[A_PAGES] __ro_after_init = {
-	{
+enum aarch64_map {
+	AA64_MAP_VVAR,
+	AA64_MAP_VDSO,
+};
+static struct vm_special_mapping aarch64_vdso_maps[] __ro_after_init = {
+	[AA64_MAP_VVAR] = {
		.name	= "[vvar]",
	},
-	{
+	[AA64_MAP_VDSO] = {
		.name	= "[vdso]",
		.mremap = vdso_mremap,
	},
@@ -394,10 +381,10 @@ static struct vm_special_mapping vdso_spec[A_PAGES] __ro_after_init = {
static int __init vdso_init(void)
{
-	vdso_lookup[ARM64_VDSO].dm = &vdso_spec[A_VVAR];
-	vdso_lookup[ARM64_VDSO].cm = &vdso_spec[A_VDSO];
+	vdso_info[VDSO_ABI_AA64].dm = &aarch64_vdso_maps[AA64_MAP_VVAR];
+	vdso_info[VDSO_ABI_AA64].cm = &aarch64_vdso_maps[AA64_MAP_VDSO];
-	return __vdso_init(ARM64_VDSO);
+	return __vdso_init(VDSO_ABI_AA64);
}
arch_initcall(vdso_init);
@@ -410,7 +397,7 @@ int arch_setup_additional_pages(struct linux_binprm *bprm,
	if (down_write_killable(&mm->mmap_sem))
		return -EINTR;
-	ret = __setup_additional_pages(ARM64_VDSO,
+	ret = __setup_additional_pages(VDSO_ABI_AA64,
				       mm,
				       bprm,
				       uses_interp);
......
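Across vdso.c the tables are now enum-indexed with designated initializers, which keeps each entry bound to its slot even when #ifdef CONFIG_COMPAT_VDSO compiles members out. A condensed illustration of the pattern (config handling omitted):

	enum vdso_abi { VDSO_ABI_AA64, VDSO_ABI_AA32 };

	static struct vdso_abi_info vdso_info[] = {
		[VDSO_ABI_AA64] = { .name = "vdso" },
		[VDSO_ABI_AA32] = { .name = "vdso32" },
	};

Positional initializers, by contrast, silently shift when an earlier entry disappears.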
@@ -17,14 +17,16 @@ obj-vdso := vgettimeofday.o note.o sigreturn.o
targets := $(obj-vdso) vdso.so vdso.so.dbg
obj-vdso := $(addprefix $(obj)/, $(obj-vdso))
+# -Bsymbolic has been added for consistency with arm, the compat vDSO and
+# potential future proofing if we end up with internal calls to the exported
+# routines, as x86 does (see 6f121e548f83 ("x86, vdso: Reimplement vdso.so
+# preparation in build-time C")).
ldflags-y := -shared -nostdlib -soname=linux-vdso.so.1 --hash-style=sysv \
-	     --build-id -n -T
+	     -Bsymbolic --eh-frame-hdr --build-id -n -T
ccflags-y := -fno-common -fno-builtin -fno-stack-protector -ffixed-x18
ccflags-y += -DDISABLE_BRANCH_PROFILING
-VDSO_LDFLAGS := -Bsymbolic
CFLAGS_REMOVE_vgettimeofday.o = $(CC_FLAGS_FTRACE) -Os
KBUILD_CFLAGS += $(DISABLE_LTO)
KASAN_SANITIZE := n
......
@@ -17,10 +17,6 @@
#include "image.h"
-/* .exit.text needed in case of alternative patching */
-#define ARM_EXIT_KEEP(x)	x
-#define ARM_EXIT_DISCARD(x)
OUTPUT_ARCH(aarch64)
ENTRY(_text)
@@ -72,8 +68,8 @@ jiffies = jiffies_64;
/*
 * The size of the PE/COFF section that covers the kernel image, which
- * runs from stext to _edata, must be a round multiple of the PE/COFF
- * FileAlignment, which we set to its minimum value of 0x200. 'stext'
+ * runs from _stext to _edata, must be a round multiple of the PE/COFF
+ * FileAlignment, which we set to its minimum value of 0x200. '_stext'
 * itself is 4 KB aligned, so padding out _edata to a 0x200 aligned
 * boundary should be sufficient.
 */
@@ -95,8 +91,6 @@ SECTIONS
	 * order of matching.
	 */
	/DISCARD/ : {
-		ARM_EXIT_DISCARD(EXIT_TEXT)
-		ARM_EXIT_DISCARD(EXIT_DATA)
		EXIT_CALL
		*(.discard)
		*(.discard.*)
@@ -139,6 +133,7 @@ SECTIONS
	idmap_pg_dir = .;
	. += IDMAP_DIR_SIZE;
+	idmap_pg_end = .;
#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
	tramp_pg_dir = .;
@@ -161,7 +156,7 @@ SECTIONS
	__exittext_begin = .;
	.exit.text : {
-		ARM_EXIT_KEEP(EXIT_TEXT)
+		EXIT_TEXT
	}
	__exittext_end = .;
@@ -175,7 +170,7 @@ SECTIONS
		*(.altinstr_replacement)
	}
-	. = ALIGN(PAGE_SIZE);
+	. = ALIGN(SEGMENT_ALIGN);
	__inittext_end = .;
	__initdata_begin = .;
@@ -188,7 +183,7 @@ SECTIONS
		*(.init.rodata.* .init.bss)	/* from the EFI stub */
	}
	.exit.data : {
-		ARM_EXIT_KEEP(EXIT_DATA)
+		EXIT_DATA
	}
	PERCPU_SECTION(L1_CACHE_BYTES)
@@ -246,6 +241,7 @@ SECTIONS
	. += INIT_DIR_SIZE;
	init_pg_end = .;
+	. = ALIGN(SEGMENT_ALIGN);
	__pecoff_data_size = ABSOLUTE(. - __initdata_begin);
	_end = .;
......
@@ -46,14 +46,6 @@ static const struct kvm_regs default_regs_reset32 = {
			PSR_AA32_I_BIT | PSR_AA32_F_BIT),
};
-static bool cpu_has_32bit_el1(void)
-{
-	u64 pfr0;
-	pfr0 = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
-	return !!(pfr0 & 0x20);
-}
/**
 * kvm_arch_vm_ioctl_check_extension
 *
@@ -66,7 +58,7 @@ int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext)
	switch (ext) {
	case KVM_CAP_ARM_EL1_32BIT:
-		r = cpu_has_32bit_el1();
+		r = cpus_have_const_cap(ARM64_HAS_32BIT_EL1);
		break;
	case KVM_CAP_GUEST_DEBUG_HW_BPS:
		r = get_num_brps();
@@ -288,7 +280,7 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
	switch (vcpu->arch.target) {
	default:
		if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features)) {
-			if (!cpu_has_32bit_el1())
+			if (!cpus_have_const_cap(ARM64_HAS_32BIT_EL1))
				goto out;
			cpu_reset = &default_regs_reset32;
		} else {
@@ -340,11 +332,50 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
	return ret;
}
-void kvm_set_ipa_limit(void)
+u32 get_kvm_ipa_limit(void)
+{
+	return kvm_ipa_limit;
+}
+int kvm_set_ipa_limit(void)
{
-	unsigned int ipa_max, pa_max, va_max, parange;
+	unsigned int ipa_max, pa_max, va_max, parange, tgran_2;
+	u64 mmfr0;
+	mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
+	parange = cpuid_feature_extract_unsigned_field(mmfr0,
+				ID_AA64MMFR0_PARANGE_SHIFT);
+	/*
+	 * Check with ARMv8.5-GTG that our PAGE_SIZE is supported at
+	 * Stage-2. If not, things will stop very quickly.
+	 */
+	switch (PAGE_SIZE) {
+	default:
+	case SZ_4K:
+		tgran_2 = ID_AA64MMFR0_TGRAN4_2_SHIFT;
+		break;
+	case SZ_16K:
+		tgran_2 = ID_AA64MMFR0_TGRAN16_2_SHIFT;
+		break;
+	case SZ_64K:
+		tgran_2 = ID_AA64MMFR0_TGRAN64_2_SHIFT;
+		break;
+	}
+	switch (cpuid_feature_extract_unsigned_field(mmfr0, tgran_2)) {
+	default:
+	case 1:
+		kvm_err("PAGE_SIZE not supported at Stage-2, giving up\n");
+		return -EINVAL;
+	case 0:
+		kvm_debug("PAGE_SIZE supported at Stage-2 (default)\n");
+		break;
+	case 2:
+		kvm_debug("PAGE_SIZE supported at Stage-2 (advertised)\n");
+		break;
+	}
-	parange = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1) & 0x7;
	pa_max = id_aa64mmfr0_parange_to_phys_shift(parange);
	/* Clamp the IPA limit to the PA size supported by the kernel */
@@ -378,6 +409,8 @@ void kvm_set_ipa_limit(void)
		 "KVM IPA limit (%d bit) is smaller than default size\n", ipa_max);
	kvm_ipa_limit = ipa_max;
	kvm_info("IPA Size Limit: %dbits\n", kvm_ipa_limit);
+	return 0;
}
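ARMv8.5-GTG adds the TGRANx_2 fields consulted above; they encode Stage-2 support for each granule as 0 (follow the corresponding Stage-1 field), 1 (not supported) or 2 (supported). A standalone sketch of the decode (field layout assumed from sysreg.h; reserved values treated as unsupported, exactly as the switch above does):

	/* Returns true when the granule selected by tgran_2_shift may be
	 * used at Stage-2. */
	static bool stage2_granule_ok(u64 mmfr0, unsigned int tgran_2_shift)
	{
		unsigned int v = (mmfr0 >> tgran_2_shift) & 0xf;
		return v == 0 || v == 2;
	}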
/*
@@ -390,7 +423,7 @@ void kvm_set_ipa_limit(void)
 */
int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type)
{
-	u64 vtcr = VTCR_EL2_FLAGS;
+	u64 vtcr = VTCR_EL2_FLAGS, mmfr0;
	u32 parange, phys_shift;
	u8 lvls;
@@ -406,7 +439,9 @@ int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type)
		phys_shift = KVM_PHYS_SHIFT;
	}
-	parange = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1) & 7;
+	mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
+	parange = cpuid_feature_extract_unsigned_field(mmfr0,
+				ID_AA64MMFR0_PARANGE_SHIFT);
	if (parange > ID_AA64MMFR0_PARANGE_MAX)
		parange = ID_AA64MMFR0_PARANGE_MAX;
	vtcr |= parange << VTCR_EL2_PS_SHIFT;
......
@@ -1456,9 +1456,9 @@ static const struct sys_reg_desc sys_reg_descs[] = {
	ID_SANITISED(MVFR1_EL1),
	ID_SANITISED(MVFR2_EL1),
	ID_UNALLOCATED(3,3),
-	ID_UNALLOCATED(3,4),
-	ID_UNALLOCATED(3,5),
-	ID_UNALLOCATED(3,6),
+	ID_SANITISED(ID_PFR2_EL1),
+	ID_HIDDEN(ID_DFR1_EL1),
+	ID_SANITISED(ID_MMFR5_EL1),
	ID_UNALLOCATED(3,7),
	/* AArch64 ID registers */
......
@@ -20,36 +20,36 @@
 * Returns:
 *	x0 - bytes not copied
 */
-	.macro ldrb1 ptr, regB, val
-	uao_user_alternative 9998f, ldrb, ldtrb, \ptr, \regB, \val
+	.macro ldrb1 reg, ptr, val
+	uao_user_alternative 9998f, ldrb, ldtrb, \reg, \ptr, \val
	.endm
-	.macro strb1 ptr, regB, val
-	strb \ptr, [\regB], \val
+	.macro strb1 reg, ptr, val
+	strb \reg, [\ptr], \val
	.endm
-	.macro ldrh1 ptr, regB, val
-	uao_user_alternative 9998f, ldrh, ldtrh, \ptr, \regB, \val
+	.macro ldrh1 reg, ptr, val
+	uao_user_alternative 9998f, ldrh, ldtrh, \reg, \ptr, \val
	.endm
-	.macro strh1 ptr, regB, val
-	strh \ptr, [\regB], \val
+	.macro strh1 reg, ptr, val
+	strh \reg, [\ptr], \val
	.endm
-	.macro ldr1 ptr, regB, val
-	uao_user_alternative 9998f, ldr, ldtr, \ptr, \regB, \val
+	.macro ldr1 reg, ptr, val
+	uao_user_alternative 9998f, ldr, ldtr, \reg, \ptr, \val
	.endm
-	.macro str1 ptr, regB, val
-	str \ptr, [\regB], \val
+	.macro str1 reg, ptr, val
+	str \reg, [\ptr], \val
	.endm
-	.macro ldp1 ptr, regB, regC, val
-	uao_ldp 9998f, \ptr, \regB, \regC, \val
+	.macro ldp1 reg1, reg2, ptr, val
+	uao_ldp 9998f, \reg1, \reg2, \ptr, \val
	.endm
-	.macro stp1 ptr, regB, regC, val
-	stp \ptr, \regB, [\regC], \val
+	.macro stp1 reg1, reg2, ptr, val
+	stp \reg1, \reg2, [\ptr], \val
	.endm
end	.req	x5
......
@@ -21,36 +21,36 @@
 * Returns:
 *	x0 - bytes not copied
 */
-	.macro ldrb1 ptr, regB, val
-	uao_user_alternative 9998f, ldrb, ldtrb, \ptr, \regB, \val
+	.macro ldrb1 reg, ptr, val
+	uao_user_alternative 9998f, ldrb, ldtrb, \reg, \ptr, \val
	.endm
-	.macro strb1 ptr, regB, val
-	uao_user_alternative 9998f, strb, sttrb, \ptr, \regB, \val
+	.macro strb1 reg, ptr, val
+	uao_user_alternative 9998f, strb, sttrb, \reg, \ptr, \val
	.endm
-	.macro ldrh1 ptr, regB, val
-	uao_user_alternative 9998f, ldrh, ldtrh, \ptr, \regB, \val
+	.macro ldrh1 reg, ptr, val
+	uao_user_alternative 9998f, ldrh, ldtrh, \reg, \ptr, \val
	.endm
-	.macro strh1 ptr, regB, val
-	uao_user_alternative 9998f, strh, sttrh, \ptr, \regB, \val
+	.macro strh1 reg, ptr, val
+	uao_user_alternative 9998f, strh, sttrh, \reg, \ptr, \val
	.endm
-	.macro ldr1 ptr, regB, val
-	uao_user_alternative 9998f, ldr, ldtr, \ptr, \regB, \val
+	.macro ldr1 reg, ptr, val
+	uao_user_alternative 9998f, ldr, ldtr, \reg, \ptr, \val
	.endm
-	.macro str1 ptr, regB, val
-	uao_user_alternative 9998f, str, sttr, \ptr, \regB, \val
+	.macro str1 reg, ptr, val
+	uao_user_alternative 9998f, str, sttr, \reg, \ptr, \val
	.endm
-	.macro ldp1 ptr, regB, regC, val
-	uao_ldp 9998f, \ptr, \regB, \regC, \val
+	.macro ldp1 reg1, reg2, ptr, val
+	uao_ldp 9998f, \reg1, \reg2, \ptr, \val
	.endm
-	.macro stp1 ptr, regB, regC, val
-	uao_stp 9998f, \ptr, \regB, \regC, \val
+	.macro stp1 reg1, reg2, ptr, val
+	uao_stp 9998f, \reg1, \reg2, \ptr, \val
	.endm
end	.req	x5
......
@@ -19,36 +19,36 @@
 * Returns:
 *	x0 - bytes not copied
 */
-	.macro ldrb1 ptr, regB, val
-	ldrb \ptr, [\regB], \val
+	.macro ldrb1 reg, ptr, val
+	ldrb \reg, [\ptr], \val
	.endm
-	.macro strb1 ptr, regB, val
-	uao_user_alternative 9998f, strb, sttrb, \ptr, \regB, \val
+	.macro strb1 reg, ptr, val
+	uao_user_alternative 9998f, strb, sttrb, \reg, \ptr, \val
	.endm
-	.macro ldrh1 ptr, regB, val
-	ldrh \ptr, [\regB], \val
+	.macro ldrh1 reg, ptr, val
+	ldrh \reg, [\ptr], \val
	.endm
-	.macro strh1 ptr, regB, val
-	uao_user_alternative 9998f, strh, sttrh, \ptr, \regB, \val
+	.macro strh1 reg, ptr, val
+	uao_user_alternative 9998f, strh, sttrh, \reg, \ptr, \val
	.endm
-	.macro ldr1 ptr, regB, val
-	ldr \ptr, [\regB], \val
+	.macro ldr1 reg, ptr, val
+	ldr \reg, [\ptr], \val
	.endm
-	.macro str1 ptr, regB, val
-	uao_user_alternative 9998f, str, sttr, \ptr, \regB, \val
+	.macro str1 reg, ptr, val
+	uao_user_alternative 9998f, str, sttr, \reg, \ptr, \val
	.endm
-	.macro ldp1 ptr, regB, regC, val
-	ldp \ptr, \regB, [\regC], \val
+	.macro ldp1 reg1, reg2, ptr, val
+	ldp \reg1, \reg2, [\ptr], \val
	.endm
-	.macro stp1 ptr, regB, regC, val
-	uao_stp 9998f, \ptr, \regB, \regC, \val
+	.macro stp1 reg1, reg2, ptr, val
+	uao_stp 9998f, \reg1, \reg2, \ptr, \val
	.endm
end	.req	x5
......
@@ -9,7 +9,7 @@
#include <asm/alternative.h>
#include <asm/assembler.h>
-	.cpu		generic+crc
+	.arch		armv8-a+crc
	.macro		__crc32, c
	cmp		x2, #16
......
@@ -24,36 +24,36 @@
 * Returns:
 *	x0 - dest
 */
-	.macro ldrb1 ptr, regB, val
-	ldrb \ptr, [\regB], \val
+	.macro ldrb1 reg, ptr, val
+	ldrb \reg, [\ptr], \val
	.endm
-	.macro strb1 ptr, regB, val
-	strb \ptr, [\regB], \val
+	.macro strb1 reg, ptr, val
+	strb \reg, [\ptr], \val
	.endm
-	.macro ldrh1 ptr, regB, val
-	ldrh \ptr, [\regB], \val
+	.macro ldrh1 reg, ptr, val
+	ldrh \reg, [\ptr], \val
	.endm
-	.macro strh1 ptr, regB, val
-	strh \ptr, [\regB], \val
+	.macro strh1 reg, ptr, val
+	strh \reg, [\ptr], \val
	.endm
-	.macro ldr1 ptr, regB, val
-	ldr \ptr, [\regB], \val
+	.macro ldr1 reg, ptr, val
+	ldr \reg, [\ptr], \val
	.endm
-	.macro str1 ptr, regB, val
-	str \ptr, [\regB], \val
+	.macro str1 reg, ptr, val
+	str \reg, [\ptr], \val
	.endm
-	.macro ldp1 ptr, regB, regC, val
-	ldp \ptr, \regB, [\regC], \val
+	.macro ldp1 reg1, reg2, ptr, val
+	ldp \reg1, \reg2, [\ptr], \val
	.endm
-	.macro stp1 ptr, regB, regC, val
-	stp \ptr, \regB, [\regC], \val
+	.macro stp1 reg1, reg2, ptr, val
+	stp \reg1, \reg2, [\ptr], \val
	.endm
	.weak memcpy
......
@@ -92,6 +92,9 @@ static void set_reserved_asid_bits(void)
	bitmap_clear(asid_map, 0, NUM_USER_ASIDS);
}
+#define asid_gen_match(asid) \
+	(!(((asid) ^ atomic64_read(&asid_generation)) >> asid_bits))
static void flush_context(void)
{
	int i;
@@ -220,8 +223,7 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
	 * because atomic RmWs are totally ordered for a given location.
	 */
	old_active_asid = atomic64_read(&per_cpu(active_asids, cpu));
-	if (old_active_asid &&
-	    !((asid ^ atomic64_read(&asid_generation)) >> asid_bits) &&
+	if (old_active_asid && asid_gen_match(asid) &&
	    atomic64_cmpxchg_relaxed(&per_cpu(active_asids, cpu),
				     old_active_asid, asid))
		goto switch_mm_fastpath;
@@ -229,7 +231,7 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
	raw_spin_lock_irqsave(&cpu_asid_lock, flags);
	/* Check that our ASID belongs to the current generation. */
	asid = atomic64_read(&mm->context.id);
-	if ((asid ^ atomic64_read(&asid_generation)) >> asid_bits) {
+	if (!asid_gen_match(asid)) {
		asid = new_context(mm);
		atomic64_set(&mm->context.id, asid);
	}
......
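A worked example of the generation test the new macro wraps: XOR cancels the bits that match, and shifting out the low asid_bits leaves zero exactly when the generation halves agree.

	#include <stdbool.h>
	#include <stdint.h>

	static bool gen_match(uint64_t id, uint64_t generation,
			      unsigned int asid_bits)
	{
		return !((id ^ generation) >> asid_bits);
	}

	/* With asid_bits = 16:
	 * gen_match(0x0003002a, 0x00030000, 16) -> true  (generation 3)
	 * gen_match(0x0002002a, 0x00030000, 16) -> false (stale generation 2)
	 */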
@@ -272,7 +272,7 @@ int pfn_valid(unsigned long pfn)
	if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
		return 0;
-	if (!valid_section(__nr_to_section(pfn_to_section_nr(pfn))))
+	if (!valid_section(__pfn_to_section(pfn)))
		return 0;
#endif
	return memblock_is_map_memory(addr);
......
@@ -139,7 +139,7 @@ alternative_if ARM64_HAS_RAS_EXTN
	msr_s	SYS_DISR_EL1, xzr
alternative_else_nop_endif
-	ptrauth_keys_install_kernel x14, 0, x1, x2, x3
+	ptrauth_keys_install_kernel_nosync x14, x1, x2, x3
	isb
	ret
SYM_FUNC_END(cpu_do_resume)
@@ -386,8 +386,6 @@ SYM_FUNC_END(idmap_kpti_install_ng_mappings)
 *
 *	Initialise the processor for turning the MMU on.
 *
- * Input:
- *	x0 with a flag ARM64_CPU_BOOT_PRIMARY/ARM64_CPU_BOOT_SECONDARY/ARM64_CPU_RUNTIME.
 * Output:
 *	Return in x0 the value of the SCTLR_EL1 register.
 */
@@ -446,51 +444,9 @@ SYM_FUNC_START(__cpu_setup)
1:
#endif	/* CONFIG_ARM64_HW_AFDBM */
	msr	tcr_el1, x10
-	mov	x1, x0
	/*
	 * Prepare SCTLR
	 */
	mov_q	x0, SCTLR_EL1_SET
-#ifdef CONFIG_ARM64_PTR_AUTH
-	/* No ptrauth setup for run time cpus */
-	cmp	x1, #ARM64_CPU_RUNTIME
-	b.eq	3f
-	/* Check if the CPU supports ptrauth */
-	mrs	x2, id_aa64isar1_el1
-	ubfx	x2, x2, #ID_AA64ISAR1_APA_SHIFT, #8
-	cbz	x2, 3f
-	/*
-	 * The primary cpu keys are reset here and can be
-	 * re-initialised with some proper values later.
-	 */
-	msr_s	SYS_APIAKEYLO_EL1, xzr
-	msr_s	SYS_APIAKEYHI_EL1, xzr
-	/* Just enable ptrauth for primary cpu */
-	cmp	x1, #ARM64_CPU_BOOT_PRIMARY
-	b.eq	2f
-	/* if !system_supports_address_auth() then skip enable */
-alternative_if_not ARM64_HAS_ADDRESS_AUTH
-	b	3f
-alternative_else_nop_endif
-	/* Install ptrauth key for secondary cpus */
-	adr_l	x2, secondary_data
-	ldr	x3, [x2, #CPU_BOOT_TASK]	// get secondary_data.task
-	cbz	x3, 2f				// check for slow booting cpus
-	ldp	x3, x4, [x2, #CPU_BOOT_PTRAUTH_KEY]
-	msr_s	SYS_APIAKEYLO_EL1, x3
-	msr_s	SYS_APIAKEYHI_EL1, x4
-2:	/* Enable ptrauth instructions */
-	ldr	x2, =SCTLR_ELx_ENIA | SCTLR_ELx_ENIB | \
-		     SCTLR_ELx_ENDA | SCTLR_ELx_ENDB
-	orr	x0, x0, x2
-3:
-#endif
	ret					// return to head.S
SYM_FUNC_END(__cpu_setup)
@@ -100,6 +100,14 @@
/* Rd = Rn OP imm12 */
#define A64_ADD_I(sf, Rd, Rn, imm12) A64_ADDSUB_IMM(sf, Rd, Rn, imm12, ADD)
#define A64_SUB_I(sf, Rd, Rn, imm12) A64_ADDSUB_IMM(sf, Rd, Rn, imm12, SUB)
+#define A64_ADDS_I(sf, Rd, Rn, imm12) \
+	A64_ADDSUB_IMM(sf, Rd, Rn, imm12, ADD_SETFLAGS)
+#define A64_SUBS_I(sf, Rd, Rn, imm12) \
+	A64_ADDSUB_IMM(sf, Rd, Rn, imm12, SUB_SETFLAGS)
+/* Rn + imm12; set condition flags */
+#define A64_CMN_I(sf, Rn, imm12) A64_ADDS_I(sf, A64_ZR, Rn, imm12)
+/* Rn - imm12; set condition flags */
+#define A64_CMP_I(sf, Rn, imm12) A64_SUBS_I(sf, A64_ZR, Rn, imm12)
/* Rd = Rn */
#define A64_MOV(sf, Rd, Rn) A64_ADD_I(sf, Rd, Rn, 0)
@@ -189,4 +197,18 @@
/* Rn & Rm; set condition flags */
#define A64_TST(sf, Rn, Rm) A64_ANDS(sf, A64_ZR, Rn, Rm)
+/* Logical (immediate) */
+#define A64_LOGIC_IMM(sf, Rd, Rn, imm, type) ({ \
+	u64 imm64 = (sf) ? (u64)imm : (u64)(u32)imm; \
+	aarch64_insn_gen_logical_immediate(AARCH64_INSN_LOGIC_##type, \
+		A64_VARIANT(sf), Rn, Rd, imm64); \
+})
+/* Rd = Rn OP imm */
+#define A64_AND_I(sf, Rd, Rn, imm) A64_LOGIC_IMM(sf, Rd, Rn, imm, AND)
+#define A64_ORR_I(sf, Rd, Rn, imm) A64_LOGIC_IMM(sf, Rd, Rn, imm, ORR)
+#define A64_EOR_I(sf, Rd, Rn, imm) A64_LOGIC_IMM(sf, Rd, Rn, imm, EOR)
+#define A64_ANDS_I(sf, Rd, Rn, imm) A64_LOGIC_IMM(sf, Rd, Rn, imm, AND_SETFLAGS)
+/* Rn & imm; set condition flags */
+#define A64_TST_I(sf, Rn, imm) A64_ANDS_I(sf, A64_ZR, Rn, imm)
#endif /* _BPF_JIT_H */
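Note why A64_LOGIC_IMM widens a 32-bit immediate as unsigned: sign-extending a negative s32 would hand the encoder a 64-bit pattern with the whole upper half set, which can never be a valid 32-bit logical immediate. A standalone sketch of the widening step:

	#include <stdint.h>

	static uint64_t widen_imm(int sf, int32_t imm)
	{
		return sf ? (uint64_t)imm : (uint64_t)(uint32_t)imm;
	}

	/* widen_imm(0, -2) == 0x00000000fffffffe, a valid 32-bit pattern;
	 * sign extension would yield 0xfffffffffffffffe, which the new
	 * imm & ~mask check in aarch64_encode_immediate() rejects. */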
@@ -167,6 +167,12 @@ static inline int epilogue_offset(const struct jit_ctx *ctx)
	return to - from;
}
+static bool is_addsub_imm(u32 imm)
+{
+	/* Either imm12 or shifted imm12. */
+	return !(imm & ~0xfff) || !(imm & ~0xfff000);
+}
/* Stack must be multiples of 16B */
#define STACK_ALIGN(sz) (((sz) + 15) & ~15)
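The helper mirrors the A64 ADD/SUB (immediate) encoding, which accepts a 12-bit value optionally shifted left by 12. Example outcomes:

	is_addsub_imm(0xabc);		/* true: plain imm12 */
	is_addsub_imm(0xabc000);	/* true: imm12, LSL #12 */
	is_addsub_imm(0xabc001);	/* false: JIT falls back to mov + register op */

The negated checks in the cases below catch constants such as -4095, which can be emitted as the opposite operation with a positive immediate.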
@@ -356,6 +362,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
	const bool isdw = BPF_SIZE(code) == BPF_DW;
	u8 jmp_cond, reg;
	s32 jmp_offset;
+	u32 a64_insn;
#define check_imm(bits, imm) do {				\
	if ((((imm) > 0) && ((imm) >> (bits))) ||		\
@@ -478,28 +485,55 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
	/* dst = dst OP imm */
	case BPF_ALU | BPF_ADD | BPF_K:
	case BPF_ALU64 | BPF_ADD | BPF_K:
+		if (is_addsub_imm(imm)) {
+			emit(A64_ADD_I(is64, dst, dst, imm), ctx);
+		} else if (is_addsub_imm(-imm)) {
+			emit(A64_SUB_I(is64, dst, dst, -imm), ctx);
+		} else {
			emit_a64_mov_i(is64, tmp, imm, ctx);
			emit(A64_ADD(is64, dst, dst, tmp), ctx);
+		}
		break;
	case BPF_ALU | BPF_SUB | BPF_K:
	case BPF_ALU64 | BPF_SUB | BPF_K:
+		if (is_addsub_imm(imm)) {
+			emit(A64_SUB_I(is64, dst, dst, imm), ctx);
+		} else if (is_addsub_imm(-imm)) {
+			emit(A64_ADD_I(is64, dst, dst, -imm), ctx);
+		} else {
			emit_a64_mov_i(is64, tmp, imm, ctx);
			emit(A64_SUB(is64, dst, dst, tmp), ctx);
+		}
		break;
	case BPF_ALU | BPF_AND | BPF_K:
	case BPF_ALU64 | BPF_AND | BPF_K:
+		a64_insn = A64_AND_I(is64, dst, dst, imm);
+		if (a64_insn != AARCH64_BREAK_FAULT) {
+			emit(a64_insn, ctx);
+		} else {
			emit_a64_mov_i(is64, tmp, imm, ctx);
			emit(A64_AND(is64, dst, dst, tmp), ctx);
+		}
		break;
	case BPF_ALU | BPF_OR | BPF_K:
	case BPF_ALU64 | BPF_OR | BPF_K:
+		a64_insn = A64_ORR_I(is64, dst, dst, imm);
+		if (a64_insn != AARCH64_BREAK_FAULT) {
+			emit(a64_insn, ctx);
+		} else {
			emit_a64_mov_i(is64, tmp, imm, ctx);
			emit(A64_ORR(is64, dst, dst, tmp), ctx);
+		}
		break;
	case BPF_ALU | BPF_XOR | BPF_K:
	case BPF_ALU64 | BPF_XOR | BPF_K:
+		a64_insn = A64_EOR_I(is64, dst, dst, imm);
+		if (a64_insn != AARCH64_BREAK_FAULT) {
+			emit(a64_insn, ctx);
+		} else {
			emit_a64_mov_i(is64, tmp, imm, ctx);
			emit(A64_EOR(is64, dst, dst, tmp), ctx);
+		}
		break;
	case BPF_ALU | BPF_MUL | BPF_K:
	case BPF_ALU64 | BPF_MUL | BPF_K:
@@ -623,13 +657,24 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
	case BPF_JMP32 | BPF_JSLT | BPF_K:
	case BPF_JMP32 | BPF_JSGE | BPF_K:
	case BPF_JMP32 | BPF_JSLE | BPF_K:
+		if (is_addsub_imm(imm)) {
+			emit(A64_CMP_I(is64, dst, imm), ctx);
+		} else if (is_addsub_imm(-imm)) {
+			emit(A64_CMN_I(is64, dst, -imm), ctx);
+		} else {
			emit_a64_mov_i(is64, tmp, imm, ctx);
			emit(A64_CMP(is64, dst, tmp), ctx);
+		}
		goto emit_cond_jmp;
	case BPF_JMP | BPF_JSET | BPF_K:
	case BPF_JMP32 | BPF_JSET | BPF_K:
+		a64_insn = A64_TST_I(is64, dst, imm);
+		if (a64_insn != AARCH64_BREAK_FAULT) {
+			emit(a64_insn, ctx);
+		} else {
			emit_a64_mov_i(is64, tmp, imm, ctx);
			emit(A64_TST(is64, dst, tmp), ctx);
+		}
		goto emit_cond_jmp;
	/* function call */
	case BPF_JMP | BPF_CALL:
......
@@ -295,15 +295,13 @@ config TURRIS_MOX_RWTM
	  other manufacturing data and also utilize the Entropy Bit Generator
	  for hardware random number generation.
-config HAVE_ARM_SMCCC
-	bool
-source "drivers/firmware/psci/Kconfig"
source "drivers/firmware/broadcom/Kconfig"
source "drivers/firmware/google/Kconfig"
source "drivers/firmware/efi/Kconfig"
source "drivers/firmware/imx/Kconfig"
source "drivers/firmware/meson/Kconfig"
+source "drivers/firmware/psci/Kconfig"
+source "drivers/firmware/smccc/Kconfig"
source "drivers/firmware/tegra/Kconfig"
source "drivers/firmware/xilinx/Kconfig"
......
@@ -23,12 +23,13 @@ obj-$(CONFIG_TRUSTED_FOUNDATIONS) += trusted_foundations.o
obj-$(CONFIG_TURRIS_MOX_RWTM)	+= turris-mox-rwtm.o
obj-$(CONFIG_ARM_SCMI_PROTOCOL)	+= arm_scmi/
-obj-y				+= psci/
obj-y				+= broadcom/
obj-y				+= meson/
obj-$(CONFIG_GOOGLE_FIRMWARE)	+= google/
obj-$(CONFIG_EFI)		+= efi/
obj-$(CONFIG_UEFI_CPER)		+= efi/
obj-y				+= imx/
+obj-y				+= psci/
+obj-y				+= smccc/
obj-y				+= tegra/
obj-y				+= xilinx/
@@ -429,7 +429,6 @@ int sdei_event_enable(u32 event_num)
	return err;
}
-EXPORT_SYMBOL(sdei_event_enable);
static int sdei_api_event_disable(u32 event_num)
{
@@ -471,7 +470,6 @@ int sdei_event_disable(u32 event_num)
	return err;
}
-EXPORT_SYMBOL(sdei_event_disable);
static int sdei_api_event_unregister(u32 event_num)
{
@@ -533,7 +531,6 @@ int sdei_event_unregister(u32 event_num)
	return err;
}
-EXPORT_SYMBOL(sdei_event_unregister);
/*
 * unregister events, but don't destroy them as they are re-registered by
@@ -643,7 +640,6 @@ int sdei_event_register(u32 event_num, sdei_event_callback *cb, void *arg)
	return err;
}
-EXPORT_SYMBOL(sdei_event_register);
static int sdei_reregister_event_llocked(struct sdei_event *event)
{
@@ -1079,26 +1075,9 @@ static struct platform_driver sdei_driver = {
	.probe		= sdei_probe,
};
-static bool __init sdei_present_dt(void)
-{
-	struct device_node *np, *fw_np;
-	fw_np = of_find_node_by_name(NULL, "firmware");
-	if (!fw_np)
-		return false;
-	np = of_find_matching_node(fw_np, sdei_of_match);
-	if (!np)
-		return false;
-	of_node_put(np);
-	return true;
-}
static bool __init sdei_present_acpi(void)
{
	acpi_status status;
-	struct platform_device *pdev;
	struct acpi_table_header *sdei_table_header;
	if (acpi_disabled)
@@ -1113,20 +1092,26 @@ static bool __init sdei_present_acpi(void)
	if (ACPI_FAILURE(status))
		return false;
-	pdev = platform_device_register_simple(sdei_driver.driver.name, 0, NULL,
-					       0);
-	if (IS_ERR(pdev))
-		return false;
+	acpi_put_table(sdei_table_header);
	return true;
}
static int __init sdei_init(void)
{
-	if (sdei_present_dt() || sdei_present_acpi())
-		platform_driver_register(&sdei_driver);
-	return 0;
+	int ret = platform_driver_register(&sdei_driver);
+	if (!ret && sdei_present_acpi()) {
+		struct platform_device *pdev;
+		pdev = platform_device_register_simple(sdei_driver.driver.name,
+						       0, NULL, 0);
+		if (IS_ERR(pdev))
+			pr_info("Failed to register ACPI:SDEI platform device %ld\n",
+				PTR_ERR(pdev));
+	}
+	return ret;
}
/*
@@ -1143,6 +1128,14 @@ int sdei_event_handler(struct pt_regs *regs,
	mm_segment_t orig_addr_limit;
	u32 event_num = arg->event_num;
+	/*
+	 * Save/restore 'fs'.
+	 * The architecture's entry code saves/restores 'fs' when taking an
+	 * exception from the kernel. This ensures addr_limit isn't inherited
+	 * if you interrupted something that allowed the uaccess routines to
+	 * access kernel memory.
+	 * Do the same here because this doesn't come via the same entry code.
+	 */
	orig_addr_limit = get_fs();
	set_fs(USER_DS);
......
@@ -46,25 +46,14 @@
 * require cooperation with a Trusted OS driver.
 */
static int resident_cpu = -1;
+struct psci_operations psci_ops;
+static enum arm_smccc_conduit psci_conduit = SMCCC_CONDUIT_NONE;
bool psci_tos_resident_on(int cpu)
{
	return cpu == resident_cpu;
}
-struct psci_operations psci_ops = {
-	.conduit = SMCCC_CONDUIT_NONE,
-	.smccc_version = SMCCC_VERSION_1_0,
-};
-enum arm_smccc_conduit arm_smccc_1_1_get_conduit(void)
-{
-	if (psci_ops.smccc_version < SMCCC_VERSION_1_1)
-		return SMCCC_CONDUIT_NONE;
-	return psci_ops.conduit;
-}
typedef unsigned long (psci_fn)(unsigned long, unsigned long,
				unsigned long, unsigned long);
static psci_fn *invoke_psci_fn;
@@ -242,7 +231,7 @@ static void set_conduit(enum arm_smccc_conduit conduit)
		WARN(1, "Unexpected PSCI conduit %d\n", conduit);
	}
-	psci_ops.conduit = conduit;
+	psci_conduit = conduit;
}
static int get_set_conduit_method(struct device_node *np)
@@ -411,8 +400,8 @@ static void __init psci_init_smccc(void)
	if (feature != PSCI_RET_NOT_SUPPORTED) {
		u32 ret;
		ret = invoke_psci_fn(ARM_SMCCC_VERSION_FUNC_ID, 0, 0, 0);
-		if (ret == ARM_SMCCC_VERSION_1_1) {
-			psci_ops.smccc_version = SMCCC_VERSION_1_1;
+		if (ret >= ARM_SMCCC_VERSION_1_1) {
+			arm_smccc_version_init(ret, psci_conduit);
			ver = ret;
		}
	}
......
# SPDX-License-Identifier: GPL-2.0-only
config HAVE_ARM_SMCCC
bool
help
Include support for the Secure Monitor Call (SMC) and Hypervisor
Call (HVC) instructions on Armv7 and above architectures.
config HAVE_ARM_SMCCC_DISCOVERY
bool
depends on ARM_PSCI_FW
default y
help
SMCCC v1.0 lacked discoverability and hence PSCI v1.0 was updated
to add an SMCCC discovery mechanism through the PSCI firmware
implementation of PSCI_FEATURES(SMCCC_VERSION), which returns
success on firmware compliant with SMCCC v1.1 and above.
# SPDX-License-Identifier: GPL-2.0
#
obj-$(CONFIG_HAVE_ARM_SMCCC_DISCOVERY) += smccc.o
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (C) 2020 Arm Limited
*/
#define pr_fmt(fmt) "smccc: " fmt
#include <linux/init.h>
#include <linux/arm-smccc.h>
static u32 smccc_version = ARM_SMCCC_VERSION_1_0;
static enum arm_smccc_conduit smccc_conduit = SMCCC_CONDUIT_NONE;
void __init arm_smccc_version_init(u32 version, enum arm_smccc_conduit conduit)
{
smccc_version = version;
smccc_conduit = conduit;
}
enum arm_smccc_conduit arm_smccc_1_1_get_conduit(void)
{
if (smccc_version < ARM_SMCCC_VERSION_1_1)
return SMCCC_CONDUIT_NONE;
return smccc_conduit;
}
u32 arm_smccc_get_version(void)
{
return smccc_version;
}
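A sketch of how callers are expected to combine the two accessors (mirroring the has_pv_steal_clock() change earlier in this merge; SMCCC_RET_SUCCESS is from include/linux/arm-smccc.h):

	static bool smccc_func_available(u32 fn_id)
	{
		struct arm_smccc_res res;

		if (arm_smccc_1_1_get_conduit() == SMCCC_CONDUIT_NONE)
			return false;	/* a valid conduit implies SMCCC >= 1.1 */

		arm_smccc_1_1_invoke(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
				     fn_id, &res);
		return res.a0 == SMCCC_RET_SUCCESS;
	}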
@@ -79,13 +79,6 @@ config FSL_IMX8_DDR_PMU
	  can give information about memory throughput and other related
	  events.
-config HISI_PMU
-	bool "HiSilicon SoC PMU"
-	depends on ARM64 && ACPI
-	help
-	  Support for HiSilicon SoC uncore performance monitoring
-	  unit (PMU), such as: L3C, HHA and DDRC.
config QCOM_L2_PMU
	bool "Qualcomm Technologies L2-cache PMU"
	depends on ARCH_QCOM && ARM64 && ACPI
@@ -129,4 +122,6 @@ config ARM_SPE_PMU
	  Extension, which provides periodic sampling of operations in
	  the CPU pipeline and reports this via the perf AUX interface.
+source "drivers/perf/hisilicon/Kconfig"
endmenu
@@ -690,10 +690,8 @@ static int dsu_pmu_device_probe(struct platform_device *pdev)
	}
	irq = platform_get_irq(pdev, 0);
-	if (irq < 0) {
-		dev_warn(&pdev->dev, "Failed to find IRQ\n");
+	if (irq < 0)
		return -EINVAL;
-	}
	name = devm_kasprintf(&pdev->dev, GFP_KERNEL, "%s_%d",
			      PMUNAME, atomic_inc_return(&pmu_idx));
......
@@ -814,7 +814,7 @@ static int smmu_pmu_probe(struct platform_device *pdev)
	if (err) {
		dev_err(dev, "Error %d registering hotplug, PMU @%pa\n",
			err, &res_0->start);
-		return err;
+		goto out_clear_affinity;
	}
	err = perf_pmu_register(&smmu_pmu->pmu, name, -1);
@@ -833,6 +833,8 @@ static int smmu_pmu_probe(struct platform_device *pdev)
out_unregister:
	cpuhp_state_remove_instance_nocalls(cpuhp_state_num, &smmu_pmu->node);
+out_clear_affinity:
+	irq_set_affinity_hint(smmu_pmu->irq, NULL);
	return err;
}
@@ -842,6 +844,7 @@ static int smmu_pmu_remove(struct platform_device *pdev)
	perf_pmu_unregister(&smmu_pmu->pmu);
	cpuhp_state_remove_instance_nocalls(cpuhp_state_num, &smmu_pmu->node);
+	irq_set_affinity_hint(smmu_pmu->irq, NULL);
	return 0;
}
......
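The fix above restores the standard probe unwind discipline: resources are released in reverse order of acquisition, and each failure jumps to the label that undoes everything taken so far. A generic, self-contained sketch of the shape (all helper names are placeholders, not kernel APIs):

	int set_irq_affinity(void);
	int add_hotplug_instance(void);
	int register_pmu(void);
	void remove_hotplug_instance(void);
	void clear_irq_affinity(void);

	int probe(void)
	{
		int err;

		err = set_irq_affinity();	/* acquired first ... */
		if (err)
			return err;
		err = add_hotplug_instance();
		if (err)
			goto out_clear_affinity;
		err = register_pmu();
		if (err)
			goto out_remove_instance;
		return 0;

	out_remove_instance:
		remove_hotplug_instance();
	out_clear_affinity:
		clear_irq_affinity();		/* ... released last */
		return err;
	}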
@@ -1133,10 +1133,8 @@ static int arm_spe_pmu_irq_probe(struct arm_spe_pmu *spe_pmu)
	struct platform_device *pdev = spe_pmu->pdev;
	int irq = platform_get_irq(pdev, 0);
-	if (irq < 0) {
-		dev_err(&pdev->dev, "failed to get IRQ (%d)\n", irq);
+	if (irq < 0)
		return -ENXIO;
-	}
	if (!irq_is_percpu(irq)) {
		dev_err(&pdev->dev, "expected PPI but got SPI (%d)\n", irq);
......
+# SPDX-License-Identifier: GPL-2.0-only
+config HISI_PMU
+	tristate "HiSilicon SoC PMU drivers"
+	depends on ARM64 && ACPI
+	help
+	  Support for HiSilicon SoC L3 Cache performance monitor, Hydra Home
+	  Agent performance monitor and DDR Controller performance monitor.
...
 # SPDX-License-Identifier: GPL-2.0-only
-obj-$(CONFIG_HISI_PMU) += hisi_uncore_pmu.o hisi_uncore_l3c_pmu.o hisi_uncore_hha_pmu.o hisi_uncore_ddrc_pmu.o
+obj-$(CONFIG_HISI_PMU) += hisi_uncore_pmu.o hisi_uncore_l3c_pmu.o \
+			  hisi_uncore_hha_pmu.o hisi_uncore_ddrc_pmu.o
@@ -394,8 +394,9 @@ static int hisi_ddrc_pmu_probe(struct platform_device *pdev)
 	ret = perf_pmu_register(&ddrc_pmu->pmu, name, -1);
 	if (ret) {
 		dev_err(ddrc_pmu->dev, "DDRC PMU register failed!\n");
-		cpuhp_state_remove_instance(CPUHP_AP_PERF_ARM_HISI_DDRC_ONLINE,
-					    &ddrc_pmu->node);
+		cpuhp_state_remove_instance_nocalls(
+			CPUHP_AP_PERF_ARM_HISI_DDRC_ONLINE, &ddrc_pmu->node);
+		irq_set_affinity_hint(ddrc_pmu->irq, NULL);
 	}
 
 	return ret;
@@ -406,8 +407,9 @@ static int hisi_ddrc_pmu_remove(struct platform_device *pdev)
 	struct hisi_pmu *ddrc_pmu = platform_get_drvdata(pdev);
 
 	perf_pmu_unregister(&ddrc_pmu->pmu);
-	cpuhp_state_remove_instance(CPUHP_AP_PERF_ARM_HISI_DDRC_ONLINE,
-				    &ddrc_pmu->node);
+	cpuhp_state_remove_instance_nocalls(CPUHP_AP_PERF_ARM_HISI_DDRC_ONLINE,
+					    &ddrc_pmu->node);
+	irq_set_affinity_hint(ddrc_pmu->irq, NULL);
 
 	return 0;
 }
...
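The same two-part cleanup repeats in the HHA and L3C hunks below. The switch to cpuhp_state_remove_instance_nocalls() detaches the instance without running the hotplug teardown callback, so unwinding a failed probe (or removing the driver) does not pointlessly migrate the perf context through hisi_uncore_pmu_offline_cpu(); the irq_set_affinity_hint(..., NULL) calls balance the hint the PMU set when it chose its home CPU.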
@@ -283,7 +283,7 @@ static struct attribute *hisi_hha_pmu_events_attr[] = {
 	HISI_PMU_EVENT_ATTR(rx_wbip,		0x05),
 	HISI_PMU_EVENT_ATTR(rx_wtistash,	0x11),
 	HISI_PMU_EVENT_ATTR(rd_ddr_64b,		0x1c),
-	HISI_PMU_EVENT_ATTR(wr_dr_64b,		0x1d),
+	HISI_PMU_EVENT_ATTR(wr_ddr_64b,		0x1d),
 	HISI_PMU_EVENT_ATTR(rd_ddr_128b,	0x1e),
 	HISI_PMU_EVENT_ATTR(wr_ddr_128b,	0x1f),
 	HISI_PMU_EVENT_ATTR(spill_num,		0x20),
@@ -406,8 +406,9 @@ static int hisi_hha_pmu_probe(struct platform_device *pdev)
 	ret = perf_pmu_register(&hha_pmu->pmu, name, -1);
 	if (ret) {
 		dev_err(hha_pmu->dev, "HHA PMU register failed!\n");
-		cpuhp_state_remove_instance(CPUHP_AP_PERF_ARM_HISI_HHA_ONLINE,
-					    &hha_pmu->node);
+		cpuhp_state_remove_instance_nocalls(
+			CPUHP_AP_PERF_ARM_HISI_HHA_ONLINE, &hha_pmu->node);
+		irq_set_affinity_hint(hha_pmu->irq, NULL);
 	}
 
 	return ret;
@@ -418,8 +419,9 @@ static int hisi_hha_pmu_remove(struct platform_device *pdev)
 	struct hisi_pmu *hha_pmu = platform_get_drvdata(pdev);
 
 	perf_pmu_unregister(&hha_pmu->pmu);
-	cpuhp_state_remove_instance(CPUHP_AP_PERF_ARM_HISI_HHA_ONLINE,
-				    &hha_pmu->node);
+	cpuhp_state_remove_instance_nocalls(CPUHP_AP_PERF_ARM_HISI_HHA_ONLINE,
+					    &hha_pmu->node);
+	irq_set_affinity_hint(hha_pmu->irq, NULL);
 
 	return 0;
 }
...
@@ -396,8 +396,9 @@ static int hisi_l3c_pmu_probe(struct platform_device *pdev)
 	ret = perf_pmu_register(&l3c_pmu->pmu, name, -1);
 	if (ret) {
 		dev_err(l3c_pmu->dev, "L3C PMU register failed!\n");
-		cpuhp_state_remove_instance(CPUHP_AP_PERF_ARM_HISI_L3_ONLINE,
-					    &l3c_pmu->node);
+		cpuhp_state_remove_instance_nocalls(
+			CPUHP_AP_PERF_ARM_HISI_L3_ONLINE, &l3c_pmu->node);
+		irq_set_affinity_hint(l3c_pmu->irq, NULL);
 	}
 
 	return ret;
@@ -408,8 +409,9 @@ static int hisi_l3c_pmu_remove(struct platform_device *pdev)
 	struct hisi_pmu *l3c_pmu = platform_get_drvdata(pdev);
 
 	perf_pmu_unregister(&l3c_pmu->pmu);
-	cpuhp_state_remove_instance(CPUHP_AP_PERF_ARM_HISI_L3_ONLINE,
-				    &l3c_pmu->node);
+	cpuhp_state_remove_instance_nocalls(CPUHP_AP_PERF_ARM_HISI_L3_ONLINE,
+					    &l3c_pmu->node);
+	irq_set_affinity_hint(l3c_pmu->irq, NULL);
 
 	return 0;
 }
...
@@ -35,6 +35,7 @@ ssize_t hisi_format_sysfs_show(struct device *dev,
 
 	return sprintf(buf, "%s\n", (char *)eattr->var);
 }
+EXPORT_SYMBOL_GPL(hisi_format_sysfs_show);
 
 /*
  * PMU event attributes
@@ -48,6 +49,7 @@ ssize_t hisi_event_sysfs_show(struct device *dev,
 
 	return sprintf(page, "config=0x%lx\n", (unsigned long)eattr->var);
 }
+EXPORT_SYMBOL_GPL(hisi_event_sysfs_show);
 
 /*
  * sysfs cpumask attributes. For uncore PMU, we only have a single CPU to show
@@ -59,6 +61,7 @@ ssize_t hisi_cpumask_sysfs_show(struct device *dev,
 
 	return sprintf(buf, "%d\n", hisi_pmu->on_cpu);
 }
+EXPORT_SYMBOL_GPL(hisi_cpumask_sysfs_show);
 
 static bool hisi_validate_event_group(struct perf_event *event)
 {
@@ -97,6 +100,7 @@ int hisi_uncore_pmu_counter_valid(struct hisi_pmu *hisi_pmu, int idx)
 {
 	return idx >= 0 && idx < hisi_pmu->num_counters;
 }
+EXPORT_SYMBOL_GPL(hisi_uncore_pmu_counter_valid);
 
 int hisi_uncore_pmu_get_event_idx(struct perf_event *event)
 {
@@ -113,6 +117,7 @@ int hisi_uncore_pmu_get_event_idx(struct perf_event *event)
 
 	return idx;
 }
+EXPORT_SYMBOL_GPL(hisi_uncore_pmu_get_event_idx);
 
 static void hisi_uncore_pmu_clear_event_idx(struct hisi_pmu *hisi_pmu, int idx)
 {
@@ -173,6 +178,7 @@ int hisi_uncore_pmu_event_init(struct perf_event *event)
 
 	return 0;
 }
+EXPORT_SYMBOL_GPL(hisi_uncore_pmu_event_init);
 
 /*
  * Set the counter to count the event that we're interested in,
@@ -220,6 +226,7 @@ void hisi_uncore_pmu_set_event_period(struct perf_event *event)
 	/* Write start value to the hardware event counter */
 	hisi_pmu->ops->write_counter(hisi_pmu, hwc, val);
 }
+EXPORT_SYMBOL_GPL(hisi_uncore_pmu_set_event_period);
 
 void hisi_uncore_pmu_event_update(struct perf_event *event)
 {
@@ -240,6 +247,7 @@ void hisi_uncore_pmu_event_update(struct perf_event *event)
 		    HISI_MAX_PERIOD(hisi_pmu->counter_bits);
 	local64_add(delta, &event->count);
 }
+EXPORT_SYMBOL_GPL(hisi_uncore_pmu_event_update);
 
 void hisi_uncore_pmu_start(struct perf_event *event, int flags)
 {
@@ -262,6 +270,7 @@ void hisi_uncore_pmu_start(struct perf_event *event, int flags)
 	hisi_uncore_pmu_enable_event(event);
 	perf_event_update_userpage(event);
 }
+EXPORT_SYMBOL_GPL(hisi_uncore_pmu_start);
 
 void hisi_uncore_pmu_stop(struct perf_event *event, int flags)
 {
@@ -278,6 +287,7 @@ void hisi_uncore_pmu_stop(struct perf_event *event, int flags)
 	hisi_uncore_pmu_event_update(event);
 	hwc->state |= PERF_HES_UPTODATE;
 }
+EXPORT_SYMBOL_GPL(hisi_uncore_pmu_stop);
 
 int hisi_uncore_pmu_add(struct perf_event *event, int flags)
 {
@@ -300,6 +310,7 @@ int hisi_uncore_pmu_add(struct perf_event *event, int flags)
 
 	return 0;
 }
+EXPORT_SYMBOL_GPL(hisi_uncore_pmu_add);
 
 void hisi_uncore_pmu_del(struct perf_event *event, int flags)
 {
@@ -311,12 +322,14 @@ void hisi_uncore_pmu_del(struct perf_event *event, int flags)
 	perf_event_update_userpage(event);
 	hisi_pmu->pmu_events.hw_events[hwc->idx] = NULL;
 }
+EXPORT_SYMBOL_GPL(hisi_uncore_pmu_del);
 
 void hisi_uncore_pmu_read(struct perf_event *event)
 {
 	/* Read hardware counter and update the perf counter statistics */
 	hisi_uncore_pmu_event_update(event);
 }
+EXPORT_SYMBOL_GPL(hisi_uncore_pmu_read);
 
 void hisi_uncore_pmu_enable(struct pmu *pmu)
 {
@@ -329,6 +342,7 @@ void hisi_uncore_pmu_enable(struct pmu *pmu)
 	hisi_pmu->ops->start_counters(hisi_pmu);
 }
+EXPORT_SYMBOL_GPL(hisi_uncore_pmu_enable);
 
 void hisi_uncore_pmu_disable(struct pmu *pmu)
 {
@@ -336,6 +350,7 @@ void hisi_uncore_pmu_disable(struct pmu *pmu)
 	hisi_pmu->ops->stop_counters(hisi_pmu);
 }
+EXPORT_SYMBOL_GPL(hisi_uncore_pmu_disable);
 
 /*
@@ -414,10 +429,11 @@ int hisi_uncore_pmu_online_cpu(unsigned int cpu, struct hlist_node *node)
 	hisi_pmu->on_cpu = cpu;
 
 	/* Overflow interrupt also should use the same CPU */
-	WARN_ON(irq_set_affinity(hisi_pmu->irq, cpumask_of(cpu)));
+	WARN_ON(irq_set_affinity_hint(hisi_pmu->irq, cpumask_of(cpu)));
 
 	return 0;
 }
+EXPORT_SYMBOL_GPL(hisi_uncore_pmu_online_cpu);
 
 int hisi_uncore_pmu_offline_cpu(unsigned int cpu, struct hlist_node *node)
 {
@@ -446,7 +462,10 @@ int hisi_uncore_pmu_offline_cpu(unsigned int cpu, struct hlist_node *node)
 	perf_pmu_migrate_context(&hisi_pmu->pmu, cpu, target);
 	/* Use this CPU for event counting */
 	hisi_pmu->on_cpu = target;
-	WARN_ON(irq_set_affinity(hisi_pmu->irq, cpumask_of(target)));
+	WARN_ON(irq_set_affinity_hint(hisi_pmu->irq, cpumask_of(target)));
 
 	return 0;
 }
+EXPORT_SYMBOL_GPL(hisi_uncore_pmu_offline_cpu);
+
+MODULE_LICENSE("GPL v2");
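Together with the new tristate Kconfig above, these EXPORT_SYMBOL_GPL() annotations and the MODULE_LICENSE() line are what permit modular builds: when built as modules, the per-block drivers (L3C, HHA, DDRC) resolve the hisi_uncore_pmu_* helpers from this core object at load time, and without license metadata the core module would taint the kernel and could not itself bind to GPL-only exports.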
@@ -5,12 +5,15 @@
 #ifndef __LINUX_ARM_SMCCC_H
 #define __LINUX_ARM_SMCCC_H
 
+#include <linux/init.h>
 #include <uapi/linux/const.h>
 
 /*
  * This file provides common defines for ARM SMC Calling Convention as
  * specified in
- * http://infocenter.arm.com/help/topic/com.arm.doc.den0028a/index.html
+ * https://developer.arm.com/docs/den0028/latest
+ *
+ * This code is up-to-date with version DEN 0028 C
  */
 
 #define ARM_SMCCC_STD_CALL	_AC(0,U)
@@ -56,6 +59,7 @@
 
 #define ARM_SMCCC_VERSION_1_0		0x10000
 #define ARM_SMCCC_VERSION_1_1		0x10001
+#define ARM_SMCCC_VERSION_1_2		0x10002
 
 #define ARM_SMCCC_VERSION_FUNC_ID					\
 	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,				\
@@ -97,6 +101,19 @@ enum arm_smccc_conduit {
  */
 enum arm_smccc_conduit arm_smccc_1_1_get_conduit(void);
 
+/**
+ * arm_smccc_get_version()
+ *
+ * Returns the version to be used for SMCCCv1.1 or later.
+ *
+ * When SMCCCv1.1 or above is not present, returns SMCCCv1.0, but this
+ * does not imply the presence of firmware or a valid conduit. Callers
+ * handling SMCCCv1.0 must determine the conduit by other means.
+ */
+u32 arm_smccc_get_version(void);
+
+void __init arm_smccc_version_init(u32 version, enum arm_smccc_conduit conduit);
+
 /**
  * struct arm_smccc_res - Result from SMC/HVC call
  * @a0-a3 result values from registers 0 to 3
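A minimal sketch of the intended use of the new accessor, assuming a hypothetical caller that wants v1.2-only behaviour:

static bool smccc_has_v1_2(void)
{
	/* Version constants compare numerically: 0x10002 > 0x10001. */
	return arm_smccc_get_version() >= ARM_SMCCC_VERSION_1_2;
}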
@@ -314,10 +331,14 @@ asmlinkage void __arm_smccc_hvc(unsigned long a0, unsigned long a1,
  */
 #define arm_smccc_1_1_hvc(...)	__arm_smccc_1_1(SMCCC_HVC_INST, __VA_ARGS__)
 
-/* Return codes defined in ARM DEN 0070A */
+/*
+ * Return codes defined in ARM DEN 0070A
+ * ARM DEN 0070A is now merged/consolidated into ARM DEN 0028 C
+ */
 #define SMCCC_RET_SUCCESS			0
 #define SMCCC_RET_NOT_SUPPORTED			-1
 #define SMCCC_RET_NOT_REQUIRED			-2
+#define SMCCC_RET_INVALID_PARAMETER		-3
 
 /*
  * Like arm_smccc_1_1* but always returns SMCCC_RET_NOT_SUPPORTED.
...
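A sketch of how a caller might fold the SMCCC_RET_* values above into Linux error codes; the errno chosen per value is a local convention, not something the spec mandates:

static int smccc_ret_to_errno(long ret)
{
	switch (ret) {
	case SMCCC_RET_SUCCESS:
		return 0;
	case SMCCC_RET_NOT_SUPPORTED:
	case SMCCC_RET_NOT_REQUIRED:
		return -EOPNOTSUPP;
	case SMCCC_RET_INVALID_PARAMETER:
		return -EINVAL;
	default:
		return -EIO;
	}
}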
@@ -21,11 +21,6 @@ bool psci_power_state_is_valid(u32 state);
 int psci_set_osi_mode(void);
 bool psci_has_osi_support(void);
 
-enum smccc_version {
-	SMCCC_VERSION_1_0,
-	SMCCC_VERSION_1_1,
-};
-
 struct psci_operations {
 	u32 (*get_version)(void);
 	int (*cpu_suspend)(u32 state, unsigned long entry_point);
@@ -35,8 +30,6 @@ struct psci_operations {
 	int (*affinity_info)(unsigned long target_affinity,
 			unsigned long lowest_affinity_level);
 	int (*migrate_info_type)(void);
-	enum arm_smccc_conduit conduit;
-	enum smccc_version smccc_version;
 };
 
 extern struct psci_operations psci_ops;
...
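With the conduit and version caching removed from psci_ops, the common SMCCC layer becomes the single owner of that state. A plausible hand-off (illustrative values, not verbatim from this series) once PSCI has probed the firmware:

	arm_smccc_version_init(ARM_SMCCC_VERSION_1_1, SMCCC_CONDUIT_SMC);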
@@ -1387,9 +1387,7 @@ static inline void hyp_cpu_pm_exit(void)
 
 static int init_common_resources(void)
 {
-	kvm_set_ipa_limit();
-
-	return 0;
+	return kvm_set_ipa_limit();
 }
 
 static int init_subsystems(void)
...