Commit c620f7bd authored by Linus Torvalds

Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux

Pull arm64 updates from Will Deacon:
 "Mostly just incremental improvements here:

   - Introduce AT_HWCAP2 for advertising CPU features to userspace (see
     the usage sketch after this summary)

   - Expose SVE2 availability to userspace

   - Support for "data cache clean to point of deep persistence" (DC PODP)

   - Honour "mitigations=off" on the cmdline and advertise status via
     sysfs

   - CPU timer erratum workaround (Neoverse-N1 #1188873)

   - Introduce perf PMU driver for the SMMUv3 performance counters

   - Add config option to disable the kuser helpers page for AArch32 tasks

   - Futex modifications to ensure liveness under contention

   - Rework debug exception handling to separate kernel and user
     handlers

   - Non-critical fixes and cleanup"

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (92 commits)
  Documentation: Add ARM64 to kernel-parameters.rst
  arm64/speculation: Support 'mitigations=' cmdline option
  arm64: ssbs: Don't treat CPUs with SSBS as unaffected by SSB
  arm64: enable generic CPU vulnerabilites support
  arm64: add sysfs vulnerability show for speculative store bypass
  arm64: Fix size of __early_cpu_boot_status
  clocksource/arm_arch_timer: Use arch_timer_read_counter to access stable counters
  clocksource/arm_arch_timer: Remove use of workaround static key
  clocksource/arm_arch_timer: Drop use of static key in arch_timer_reg_read_stable
  clocksource/arm_arch_timer: Direcly assign set_next_event workaround
  arm64: Use arch_timer_read_counter instead of arch_counter_get_cntvct
  watchdog/sbsa: Use arch_timer_read_counter instead of arch_counter_get_cntvct
  ARM: vdso: Remove dependency with the arch_timer driver internals
  arm64: Apply ARM64_ERRATUM_1188873 to Neoverse-N1
  arm64: Add part number for Neoverse N1
  arm64: Make ARM64_ERRATUM_1188873 depend on COMPAT
  arm64: Restrict ARM64_ERRATUM_1188873 mitigation to AArch32
  arm64: mm: Remove pte_unmap_nested()
  arm64: Fix compiler warning from pte_unmap() with -Wunused-but-set-variable
  arm64: compat: Reduce address limit for 64K pages
  ...
parents dd4e5d61 b33f9088
...@@ -88,6 +88,7 @@ parameter is applicable:: ...@@ -88,6 +88,7 @@ parameter is applicable::
APIC APIC support is enabled. APIC APIC support is enabled.
APM Advanced Power Management support is enabled. APM Advanced Power Management support is enabled.
ARM ARM architecture is enabled. ARM ARM architecture is enabled.
ARM64 ARM64 architecture is enabled.
AX25 Appropriate AX.25 support is enabled. AX25 Appropriate AX.25 support is enabled.
CLK Common clock infrastructure is enabled. CLK Common clock infrastructure is enabled.
CMA Contiguous Memory Area support is enabled. CMA Contiguous Memory Area support is enabled.
......
...@@ -2548,8 +2548,8 @@ ...@@ -2548,8 +2548,8 @@
http://repo.or.cz/w/linux-2.6/mini2440.git http://repo.or.cz/w/linux-2.6/mini2440.git
mitigations= mitigations=
[X86,PPC,S390] Control optional mitigations for CPU [X86,PPC,S390,ARM64] Control optional mitigations for
vulnerabilities. This is a set of curated, CPU vulnerabilities. This is a set of curated,
arch-independent options, each of which is an arch-independent options, each of which is an
aggregation of existing arch-specific options. aggregation of existing arch-specific options.
...@@ -2558,11 +2558,13 @@ ...@@ -2558,11 +2558,13 @@
improves system performance, but it may also improves system performance, but it may also
expose users to several CPU vulnerabilities. expose users to several CPU vulnerabilities.
Equivalent to: nopti [X86,PPC] Equivalent to: nopti [X86,PPC]
kpti=0 [ARM64]
nospectre_v1 [PPC] nospectre_v1 [PPC]
nobp=0 [S390] nobp=0 [S390]
nospectre_v2 [X86,PPC,S390] nospectre_v2 [X86,PPC,S390,ARM64]
spectre_v2_user=off [X86] spectre_v2_user=off [X86]
spec_store_bypass_disable=off [X86,PPC] spec_store_bypass_disable=off [X86,PPC]
ssbd=force-off [ARM64]
l1tf=off [X86] l1tf=off [X86]
auto (default) auto (default)
...@@ -2908,10 +2910,10 @@ ...@@ -2908,10 +2910,10 @@
check bypass). With this option data leaks are possible check bypass). With this option data leaks are possible
in the system. in the system.
nospectre_v2 [X86,PPC_FSL_BOOK3E] Disable all mitigations for the Spectre variant 2 nospectre_v2 [X86,PPC_FSL_BOOK3E,ARM64] Disable all mitigations for
(indirect branch prediction) vulnerability. System may the Spectre variant 2 (indirect branch prediction)
allow data leaks with this option, which is equivalent vulnerability. System may allow data leaks with this
to spectre_v2=off. option.
nospec_store_bypass_disable nospec_store_bypass_disable
[HW] Disable all mitigations for the Speculative Store Bypass vulnerability [HW] Disable all mitigations for the Speculative Store Bypass vulnerability
......
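
For checking the resulting mitigation state at runtime, the usual route is the
sysfs vulnerabilities directory that this series wires up for arm64. A sketch
only; the exact strings printed depend on the kernel and CPU:

  #include <stdio.h>

  int main(void)
  {
  	static const char *files[] = { "spectre_v2", "spec_store_bypass" };
  	char path[128], buf[128];
  	unsigned int i;

  	for (i = 0; i < sizeof(files) / sizeof(files[0]); i++) {
  		FILE *f;

  		snprintf(path, sizeof(path),
  			 "/sys/devices/system/cpu/vulnerabilities/%s", files[i]);
  		f = fopen(path, "r");
  		if (f && fgets(buf, sizeof(buf), f))
  			printf("%s: %s", files[i], buf);
  		if (f)
  			fclose(f);
  	}
  	return 0;
  }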
...@@ -209,6 +209,22 @@ infrastructure: ...@@ -209,6 +209,22 @@ infrastructure:
| AT | [35-32] | y | | AT | [35-32] | y |
x--------------------------------------------------x x--------------------------------------------------x
6) ID_AA64ZFR0_EL1 - SVE feature ID register 0
x--------------------------------------------------x
| Name | bits | visible |
|--------------------------------------------------|
| SM4 | [43-40] | y |
|--------------------------------------------------|
| SHA3 | [35-32] | y |
|--------------------------------------------------|
| BitPerm | [19-16] | y |
|--------------------------------------------------|
| AES | [7-4] | y |
|--------------------------------------------------|
| SVEVer | [3-0] | y |
x--------------------------------------------------x
Appendix I: Example Appendix I: Example
--------------------------- ---------------------------
......
...@@ -13,9 +13,9 @@ architected discovery mechanism available to userspace code at EL0. The ...@@ -13,9 +13,9 @@ architected discovery mechanism available to userspace code at EL0. The
kernel exposes the presence of these features to userspace through a set kernel exposes the presence of these features to userspace through a set
of flags called hwcaps, exposed in the auxilliary vector. of flags called hwcaps, exposed in the auxilliary vector.
Userspace software can test for features by acquiring the AT_HWCAP entry Userspace software can test for features by acquiring the AT_HWCAP or
of the auxilliary vector, and testing whether the relevant flags are AT_HWCAP2 entry of the auxiliary vector, and testing whether the relevant
set, e.g. flags are set, e.g.
bool floating_point_is_present(void) bool floating_point_is_present(void)
{ {
...@@ -135,6 +135,10 @@ HWCAP_DCPOP ...@@ -135,6 +135,10 @@ HWCAP_DCPOP
Functionality implied by ID_AA64ISAR1_EL1.DPB == 0b0001. Functionality implied by ID_AA64ISAR1_EL1.DPB == 0b0001.
HWCAP2_DCPODP
Functionality implied by ID_AA64ISAR1_EL1.DPB == 0b0010.
HWCAP_SHA3 HWCAP_SHA3
Functionality implied by ID_AA64ISAR0_EL1.SHA3 == 0b0001. Functionality implied by ID_AA64ISAR0_EL1.SHA3 == 0b0001.
...@@ -159,6 +163,30 @@ HWCAP_SVE ...@@ -159,6 +163,30 @@ HWCAP_SVE
Functionality implied by ID_AA64PFR0_EL1.SVE == 0b0001. Functionality implied by ID_AA64PFR0_EL1.SVE == 0b0001.
HWCAP2_SVE2
Functionality implied by ID_AA64ZFR0_EL1.SVEVer == 0b0001.
HWCAP2_SVEAES
Functionality implied by ID_AA64ZFR0_EL1.AES == 0b0001.
HWCAP2_SVEPMULL
Functionality implied by ID_AA64ZFR0_EL1.AES == 0b0010.
HWCAP2_SVEBITPERM
Functionality implied by ID_AA64ZFR0_EL1.BitPerm == 0b0001.
HWCAP2_SVESHA3
Functionality implied by ID_AA64ZFR0_EL1.SHA3 == 0b0001.
HWCAP2_SVESM4
Functionality implied by ID_AA64ZFR0_EL1.SM4 == 0b0001.
HWCAP_ASIMDFHM HWCAP_ASIMDFHM
Functionality implied by ID_AA64ISAR0_EL1.FHM == 0b0001. Functionality implied by ID_AA64ISAR0_EL1.FHM == 0b0001.
...@@ -194,3 +222,10 @@ HWCAP_PACG ...@@ -194,3 +222,10 @@ HWCAP_PACG
Functionality implied by ID_AA64ISAR1_EL1.GPA == 0b0001 or Functionality implied by ID_AA64ISAR1_EL1.GPA == 0b0001 or
ID_AA64ISAR1_EL1.GPI == 0b0001, as described by ID_AA64ISAR1_EL1.GPI == 0b0001, as described by
Documentation/arm64/pointer-authentication.txt. Documentation/arm64/pointer-authentication.txt.
4. Unused AT_HWCAP bits
-----------------------
For interoperation with userspace, the kernel guarantees that bits 62
and 63 of AT_HWCAP will always be returned as 0.
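
The new HWCAP2_DCPODP flag above gates userspace use of DC CVADP. A hedged
sketch of a persistent-memory flush helper follows; the raw "sys" encoding is
the same one the assembler.h hunk later in this diff uses for dc cvadp, so it
assembles even when the toolchain lacks the mnemonic, and the fallback to
DC CVAC is only an illustrative choice:

  #include <sys/auxv.h>

  #ifndef HWCAP2_DCPODP
  #define HWCAP2_DCPODP	(1 << 0)	/* value from the new arm64 uapi hwcap.h */
  #endif

  /* Clean one (cache-line-aligned) address to the point of deep persistence.
   * "sys 3, c7, c13, 1" is the encoding of "dc cvadp"; without the hwcap we
   * fall back to an ordinary clean to the point of coherency. */
  static void clean_line_to_podp(void *addr)
  {
  	if (getauxval(AT_HWCAP2) & HWCAP2_DCPODP)
  		asm volatile("sys 3, c7, c13, 1, %0" : : "r" (addr) : "memory");
  	else
  		asm volatile("dc cvac, %0" : : "r" (addr) : "memory");
  }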
...@@ -61,6 +61,7 @@ stable kernels. ...@@ -61,6 +61,7 @@ stable kernels.
| ARM | Cortex-A76 | #1188873 | ARM64_ERRATUM_1188873 | | ARM | Cortex-A76 | #1188873 | ARM64_ERRATUM_1188873 |
| ARM | Cortex-A76 | #1165522 | ARM64_ERRATUM_1165522 | | ARM | Cortex-A76 | #1165522 | ARM64_ERRATUM_1165522 |
| ARM | Cortex-A76 | #1286807 | ARM64_ERRATUM_1286807 | | ARM | Cortex-A76 | #1286807 | ARM64_ERRATUM_1286807 |
| ARM | Neoverse-N1 | #1188873 | ARM64_ERRATUM_1188873 |
| ARM | MMU-500 | #841119,#826419 | N/A | | ARM | MMU-500 | #841119,#826419 | N/A |
| | | | | | | | | |
| Cavium | ThunderX ITS | #22375, #24313 | CAVIUM_ERRATUM_22375 | | Cavium | ThunderX ITS | #22375, #24313 | CAVIUM_ERRATUM_22375 |
...@@ -77,6 +78,7 @@ stable kernels. ...@@ -77,6 +78,7 @@ stable kernels.
| Hisilicon | Hip0{5,6,7} | #161010101 | HISILICON_ERRATUM_161010101 | | Hisilicon | Hip0{5,6,7} | #161010101 | HISILICON_ERRATUM_161010101 |
| Hisilicon | Hip0{6,7} | #161010701 | N/A | | Hisilicon | Hip0{6,7} | #161010701 | N/A |
| Hisilicon | Hip07 | #161600802 | HISILICON_ERRATUM_161600802 | | Hisilicon | Hip07 | #161600802 | HISILICON_ERRATUM_161600802 |
| Hisilicon | Hip08 SMMU PMCG | #162001800 | N/A |
| | | | | | | | | |
| Qualcomm Tech. | Kryo/Falkor v1 | E1003 | QCOM_FALKOR_ERRATUM_1003 | | Qualcomm Tech. | Kryo/Falkor v1 | E1003 | QCOM_FALKOR_ERRATUM_1003 |
| Qualcomm Tech. | Falkor v1 | E1009 | QCOM_FALKOR_ERRATUM_1009 | | Qualcomm Tech. | Falkor v1 | E1009 | QCOM_FALKOR_ERRATUM_1009 |
......
...@@ -34,6 +34,23 @@ model features for SVE is included in Appendix A. ...@@ -34,6 +34,23 @@ model features for SVE is included in Appendix A.
following sections: software that needs to verify that those interfaces are following sections: software that needs to verify that those interfaces are
present must check for HWCAP_SVE instead. present must check for HWCAP_SVE instead.
* On hardware that supports the SVE2 extensions, HWCAP2_SVE2 will also
be reported in the AT_HWCAP2 aux vector entry. In addition to this,
optional extensions to SVE2 may be reported by the presence of:
HWCAP2_SVE2
HWCAP2_SVEAES
HWCAP2_SVEPMULL
HWCAP2_SVEBITPERM
HWCAP2_SVESHA3
HWCAP2_SVESM4
This list may be extended over time as the SVE architecture evolves.
These extensions are also reported via the CPU ID register ID_AA64ZFR0_EL1,
which userspace can read using an MRS instruction. See elf_hwcaps.txt and
cpu-feature-registers.txt for details.
* Debuggers should restrict themselves to interacting with the target via the * Debuggers should restrict themselves to interacting with the target via the
NT_ARM_SVE regset. The recommended way of detecting support for this regset NT_ARM_SVE regset. The recommended way of detecting support for this regset
is to connect to a target process first and then attempt a is to connect to a target process first and then attempt a
......
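
The sve.txt paragraph above notes that the same information is available via
an MRS read of ID_AA64ZFR0_EL1, which the kernel traps and emulates for EL0
(only the fields marked visible in the cpu-feature-registers.txt table earlier
in this diff are returned). A minimal sketch, assuming the CPUID hwcap is
present and using the raw S3_0_C0_C4_4 name for assemblers that do not know
the register:

  #include <stdio.h>
  #include <sys/auxv.h>

  #ifndef HWCAP_CPUID
  #define HWCAP_CPUID	(1 << 11)	/* MRS emulation available */
  #endif

  int main(void)
  {
  	unsigned long zfr0;

  	if (!(getauxval(AT_HWCAP) & HWCAP_CPUID))
  		return 1;

  	/* ID_AA64ZFR0_EL1 == S3_0_C0_C4_4 (op0=3, op1=0, CRn=0, CRm=4, op2=4) */
  	asm volatile("mrs %0, S3_0_C0_C4_4" : "=r" (zfr0));

  	/* Field positions as in the ID_AA64ZFR0_EL1 table above. */
  	printf("SVEVer=%lu AES=%lu BitPerm=%lu SHA3=%lu SM4=%lu\n",
  	       zfr0 & 0xf, (zfr0 >> 4) & 0xf, (zfr0 >> 16) & 0xf,
  	       (zfr0 >> 32) & 0xf, (zfr0 >> 40) & 0xf);
  	return 0;
  }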
...@@ -218,5 +218,4 @@ All other architectures should build just fine too - but they won't have ...@@ -218,5 +218,4 @@ All other architectures should build just fine too - but they won't have
the new syscalls yet. the new syscalls yet.
Architectures need to implement the new futex_atomic_cmpxchg_inatomic() Architectures need to implement the new futex_atomic_cmpxchg_inatomic()
inline function before writing up the syscalls (that function returns inline function before writing up the syscalls.
-ENOSYS right now).
...@@ -11,6 +11,10 @@ ...@@ -11,6 +11,10 @@
#include <clocksource/arm_arch_timer.h> #include <clocksource/arm_arch_timer.h>
#ifdef CONFIG_ARM_ARCH_TIMER #ifdef CONFIG_ARM_ARCH_TIMER
/* 32bit ARM doesn't know anything about timer errata... */
#define has_erratum_handler(h) (false)
#define erratum_handler(h) (arch_timer_##h)
int arch_timer_arch_init(void); int arch_timer_arch_init(void);
/* /*
...@@ -79,7 +83,7 @@ static inline u32 arch_timer_get_cntfrq(void) ...@@ -79,7 +83,7 @@ static inline u32 arch_timer_get_cntfrq(void)
return val; return val;
} }
static inline u64 arch_counter_get_cntpct(void) static inline u64 __arch_counter_get_cntpct(void)
{ {
u64 cval; u64 cval;
...@@ -88,7 +92,12 @@ static inline u64 arch_counter_get_cntpct(void) ...@@ -88,7 +92,12 @@ static inline u64 arch_counter_get_cntpct(void)
return cval; return cval;
} }
static inline u64 arch_counter_get_cntvct(void) static inline u64 __arch_counter_get_cntpct_stable(void)
{
return __arch_counter_get_cntpct();
}
static inline u64 __arch_counter_get_cntvct(void)
{ {
u64 cval; u64 cval;
...@@ -97,6 +106,11 @@ static inline u64 arch_counter_get_cntvct(void) ...@@ -97,6 +106,11 @@ static inline u64 arch_counter_get_cntvct(void)
return cval; return cval;
} }
static inline u64 __arch_counter_get_cntvct_stable(void)
{
return __arch_counter_get_cntvct();
}
static inline u32 arch_timer_get_cntkctl(void) static inline u32 arch_timer_get_cntkctl(void)
{ {
u32 cntkctl; u32 cntkctl;
......
...@@ -68,6 +68,8 @@ ...@@ -68,6 +68,8 @@
#define BPIALL __ACCESS_CP15(c7, 0, c5, 6) #define BPIALL __ACCESS_CP15(c7, 0, c5, 6)
#define ICIALLU __ACCESS_CP15(c7, 0, c5, 0) #define ICIALLU __ACCESS_CP15(c7, 0, c5, 0)
#define CNTVCT __ACCESS_CP15_64(1, c14)
extern unsigned long cr_alignment; /* defined in entry-armv.S */ extern unsigned long cr_alignment; /* defined in entry-armv.S */
static inline unsigned long get_cr(void) static inline unsigned long get_cr(void)
......
...@@ -32,14 +32,14 @@ ...@@ -32,14 +32,14 @@
#define stage2_pgd_present(kvm, pgd) pgd_present(pgd) #define stage2_pgd_present(kvm, pgd) pgd_present(pgd)
#define stage2_pgd_populate(kvm, pgd, pud) pgd_populate(NULL, pgd, pud) #define stage2_pgd_populate(kvm, pgd, pud) pgd_populate(NULL, pgd, pud)
#define stage2_pud_offset(kvm, pgd, address) pud_offset(pgd, address) #define stage2_pud_offset(kvm, pgd, address) pud_offset(pgd, address)
#define stage2_pud_free(kvm, pud) pud_free(NULL, pud) #define stage2_pud_free(kvm, pud) do { } while (0)
#define stage2_pud_none(kvm, pud) pud_none(pud) #define stage2_pud_none(kvm, pud) pud_none(pud)
#define stage2_pud_clear(kvm, pud) pud_clear(pud) #define stage2_pud_clear(kvm, pud) pud_clear(pud)
#define stage2_pud_present(kvm, pud) pud_present(pud) #define stage2_pud_present(kvm, pud) pud_present(pud)
#define stage2_pud_populate(kvm, pud, pmd) pud_populate(NULL, pud, pmd) #define stage2_pud_populate(kvm, pud, pmd) pud_populate(NULL, pud, pmd)
#define stage2_pmd_offset(kvm, pud, address) pmd_offset(pud, address) #define stage2_pmd_offset(kvm, pud, address) pmd_offset(pud, address)
#define stage2_pmd_free(kvm, pmd) pmd_free(NULL, pmd) #define stage2_pmd_free(kvm, pmd) free_page((unsigned long)pmd)
#define stage2_pud_huge(kvm, pud) pud_huge(pud) #define stage2_pud_huge(kvm, pud) pud_huge(pud)
......
...@@ -18,9 +18,9 @@ ...@@ -18,9 +18,9 @@
#include <linux/compiler.h> #include <linux/compiler.h>
#include <linux/hrtimer.h> #include <linux/hrtimer.h>
#include <linux/time.h> #include <linux/time.h>
#include <asm/arch_timer.h>
#include <asm/barrier.h> #include <asm/barrier.h>
#include <asm/bug.h> #include <asm/bug.h>
#include <asm/cp15.h>
#include <asm/page.h> #include <asm/page.h>
#include <asm/unistd.h> #include <asm/unistd.h>
#include <asm/vdso_datapage.h> #include <asm/vdso_datapage.h>
...@@ -123,7 +123,8 @@ static notrace u64 get_ns(struct vdso_data *vdata) ...@@ -123,7 +123,8 @@ static notrace u64 get_ns(struct vdso_data *vdata)
u64 cycle_now; u64 cycle_now;
u64 nsec; u64 nsec;
cycle_now = arch_counter_get_cntvct(); isb();
cycle_now = read_sysreg(CNTVCT);
cycle_delta = (cycle_now - vdata->cs_cycle_last) & vdata->cs_mask; cycle_delta = (cycle_now - vdata->cs_cycle_last) & vdata->cs_mask;
......
...@@ -90,6 +90,7 @@ config ARM64 ...@@ -90,6 +90,7 @@ config ARM64
select GENERIC_CLOCKEVENTS select GENERIC_CLOCKEVENTS
select GENERIC_CLOCKEVENTS_BROADCAST select GENERIC_CLOCKEVENTS_BROADCAST
select GENERIC_CPU_AUTOPROBE select GENERIC_CPU_AUTOPROBE
select GENERIC_CPU_VULNERABILITIES
select GENERIC_EARLY_IOREMAP select GENERIC_EARLY_IOREMAP
select GENERIC_IDLE_POLL_SETUP select GENERIC_IDLE_POLL_SETUP
select GENERIC_IRQ_MULTI_HANDLER select GENERIC_IRQ_MULTI_HANDLER
...@@ -148,6 +149,7 @@ config ARM64 ...@@ -148,6 +149,7 @@ config ARM64
select HAVE_PERF_REGS select HAVE_PERF_REGS
select HAVE_PERF_USER_STACK_DUMP select HAVE_PERF_USER_STACK_DUMP
select HAVE_REGS_AND_STACK_ACCESS_API select HAVE_REGS_AND_STACK_ACCESS_API
select HAVE_FUNCTION_ARG_ACCESS_API
select HAVE_RCU_TABLE_FREE select HAVE_RCU_TABLE_FREE
select HAVE_RSEQ select HAVE_RSEQ
select HAVE_STACKPROTECTOR select HAVE_STACKPROTECTOR
...@@ -293,7 +295,7 @@ menu "Kernel Features" ...@@ -293,7 +295,7 @@ menu "Kernel Features"
menu "ARM errata workarounds via the alternatives framework" menu "ARM errata workarounds via the alternatives framework"
config ARM64_WORKAROUND_CLEAN_CACHE config ARM64_WORKAROUND_CLEAN_CACHE
def_bool n bool
config ARM64_ERRATUM_826319 config ARM64_ERRATUM_826319
bool "Cortex-A53: 826319: System might deadlock if a write cannot complete until read data is accepted" bool "Cortex-A53: 826319: System might deadlock if a write cannot complete until read data is accepted"
...@@ -460,26 +462,28 @@ config ARM64_ERRATUM_1024718 ...@@ -460,26 +462,28 @@ config ARM64_ERRATUM_1024718
bool "Cortex-A55: 1024718: Update of DBM/AP bits without break before make might result in incorrect update" bool "Cortex-A55: 1024718: Update of DBM/AP bits without break before make might result in incorrect update"
default y default y
help help
This option adds work around for Arm Cortex-A55 Erratum 1024718. This option adds a workaround for ARM Cortex-A55 Erratum 1024718.
Affected Cortex-A55 cores (r0p0, r0p1, r1p0) could cause incorrect Affected Cortex-A55 cores (r0p0, r0p1, r1p0) could cause incorrect
update of the hardware dirty bit when the DBM/AP bits are updated update of the hardware dirty bit when the DBM/AP bits are updated
without a break-before-make. The work around is to disable the usage without a break-before-make. The workaround is to disable the usage
of hardware DBM locally on the affected cores. CPUs not affected by of hardware DBM locally on the affected cores. CPUs not affected by
erratum will continue to use the feature. this erratum will continue to use the feature.
If unsure, say Y. If unsure, say Y.
config ARM64_ERRATUM_1188873 config ARM64_ERRATUM_1188873
bool "Cortex-A76: MRC read following MRRC read of specific Generic Timer in AArch32 might give incorrect result" bool "Cortex-A76/Neoverse-N1: MRC read following MRRC read of specific Generic Timer in AArch32 might give incorrect result"
default y default y
depends on COMPAT
select ARM_ARCH_TIMER_OOL_WORKAROUND select ARM_ARCH_TIMER_OOL_WORKAROUND
help help
This option adds work arounds for ARM Cortex-A76 erratum 1188873 This option adds a workaround for ARM Cortex-A76/Neoverse-N1
erratum 1188873.
Affected Cortex-A76 cores (r0p0, r1p0, r2p0) could cause Affected Cortex-A76/Neoverse-N1 cores (r0p0, r1p0, r2p0) could
register corruption when accessing the timer registers from cause register corruption when accessing the timer registers
AArch32 userspace. from AArch32 userspace.
If unsure, say Y. If unsure, say Y.
...@@ -487,7 +491,7 @@ config ARM64_ERRATUM_1165522 ...@@ -487,7 +491,7 @@ config ARM64_ERRATUM_1165522
bool "Cortex-A76: Speculative AT instruction using out-of-context translation regime could cause subsequent request to generate an incorrect translation" bool "Cortex-A76: Speculative AT instruction using out-of-context translation regime could cause subsequent request to generate an incorrect translation"
default y default y
help help
This option adds work arounds for ARM Cortex-A76 erratum 1165522 This option adds a workaround for ARM Cortex-A76 erratum 1165522.
Affected Cortex-A76 cores (r0p0, r1p0, r2p0) could end-up with Affected Cortex-A76 cores (r0p0, r1p0, r2p0) could end-up with
corrupted TLBs by speculating an AT instruction during a guest corrupted TLBs by speculating an AT instruction during a guest
...@@ -500,7 +504,7 @@ config ARM64_ERRATUM_1286807 ...@@ -500,7 +504,7 @@ config ARM64_ERRATUM_1286807
default y default y
select ARM64_WORKAROUND_REPEAT_TLBI select ARM64_WORKAROUND_REPEAT_TLBI
help help
This option adds workaround for ARM Cortex-A76 erratum 1286807 This option adds a workaround for ARM Cortex-A76 erratum 1286807.
On the affected Cortex-A76 cores (r0p0 to r3p0), if a virtual On the affected Cortex-A76 cores (r0p0 to r3p0), if a virtual
address for a cacheable mapping of a location is being address for a cacheable mapping of a location is being
...@@ -517,10 +521,10 @@ config CAVIUM_ERRATUM_22375 ...@@ -517,10 +521,10 @@ config CAVIUM_ERRATUM_22375
bool "Cavium erratum 22375, 24313" bool "Cavium erratum 22375, 24313"
default y default y
help help
Enable workaround for erratum 22375, 24313. Enable workaround for errata 22375 and 24313.
This implements two gicv3-its errata workarounds for ThunderX. Both This implements two gicv3-its errata workarounds for ThunderX. Both
with small impact affecting only ITS table allocation. with a small impact affecting only ITS table allocation.
erratum 22375: only alloc 8MB table size erratum 22375: only alloc 8MB table size
erratum 24313: ignore memory access type erratum 24313: ignore memory access type
...@@ -584,9 +588,6 @@ config QCOM_FALKOR_ERRATUM_1003 ...@@ -584,9 +588,6 @@ config QCOM_FALKOR_ERRATUM_1003
config ARM64_WORKAROUND_REPEAT_TLBI config ARM64_WORKAROUND_REPEAT_TLBI
bool bool
help
Enable the repeat TLBI workaround for Falkor erratum 1009 and
Cortex-A76 erratum 1286807.
config QCOM_FALKOR_ERRATUM_1009 config QCOM_FALKOR_ERRATUM_1009
bool "Falkor E1009: Prematurely complete a DSB after a TLBI" bool "Falkor E1009: Prematurely complete a DSB after a TLBI"
...@@ -622,7 +623,7 @@ config HISILICON_ERRATUM_161600802 ...@@ -622,7 +623,7 @@ config HISILICON_ERRATUM_161600802
bool "Hip07 161600802: Erroneous redistributor VLPI base" bool "Hip07 161600802: Erroneous redistributor VLPI base"
default y default y
help help
The HiSilicon Hip07 SoC usees the wrong redistributor base The HiSilicon Hip07 SoC uses the wrong redistributor base
when issued ITS commands such as VMOVP and VMAPP, and requires when issued ITS commands such as VMOVP and VMAPP, and requires
a 128kB offset to be applied to the target address in this commands. a 128kB offset to be applied to the target address in this commands.
...@@ -642,7 +643,7 @@ config FUJITSU_ERRATUM_010001 ...@@ -642,7 +643,7 @@ config FUJITSU_ERRATUM_010001
bool "Fujitsu-A64FX erratum E#010001: Undefined fault may occur wrongly" bool "Fujitsu-A64FX erratum E#010001: Undefined fault may occur wrongly"
default y default y
help help
This option adds workaround for Fujitsu-A64FX erratum E#010001. This option adds a workaround for Fujitsu-A64FX erratum E#010001.
On some variants of the Fujitsu-A64FX cores ver(1.0, 1.1), memory On some variants of the Fujitsu-A64FX cores ver(1.0, 1.1), memory
accesses may cause undefined fault (Data abort, DFSC=0b111111). accesses may cause undefined fault (Data abort, DFSC=0b111111).
This fault occurs under a specific hardware condition when a This fault occurs under a specific hardware condition when a
...@@ -653,7 +654,7 @@ config FUJITSU_ERRATUM_010001 ...@@ -653,7 +654,7 @@ config FUJITSU_ERRATUM_010001
case-4 TTBR1_EL2 with TCR_EL2.NFD1 == 1. case-4 TTBR1_EL2 with TCR_EL2.NFD1 == 1.
The workaround is to ensure these bits are clear in TCR_ELx. The workaround is to ensure these bits are clear in TCR_ELx.
The workaround only affect the Fujitsu-A64FX. The workaround only affects the Fujitsu-A64FX.
If unsure, say Y. If unsure, say Y.
...@@ -885,6 +886,9 @@ config ARCH_WANT_HUGE_PMD_SHARE ...@@ -885,6 +886,9 @@ config ARCH_WANT_HUGE_PMD_SHARE
config ARCH_HAS_CACHE_LINE_SIZE config ARCH_HAS_CACHE_LINE_SIZE
def_bool y def_bool y
config ARCH_ENABLE_SPLIT_PMD_PTLOCK
def_bool y if PGTABLE_LEVELS > 2
config SECCOMP config SECCOMP
bool "Enable seccomp to safely compute untrusted bytecode" bool "Enable seccomp to safely compute untrusted bytecode"
---help--- ---help---
...@@ -1074,9 +1078,65 @@ config RODATA_FULL_DEFAULT_ENABLED ...@@ -1074,9 +1078,65 @@ config RODATA_FULL_DEFAULT_ENABLED
This requires the linear region to be mapped down to pages, This requires the linear region to be mapped down to pages,
which may adversely affect performance in some cases. which may adversely affect performance in some cases.
config ARM64_SW_TTBR0_PAN
bool "Emulate Privileged Access Never using TTBR0_EL1 switching"
help
Enabling this option prevents the kernel from accessing
user-space memory directly by pointing TTBR0_EL1 to a reserved
zeroed area and reserved ASID. The user access routines
restore the valid TTBR0_EL1 temporarily.
menuconfig COMPAT
bool "Kernel support for 32-bit EL0"
depends on ARM64_4K_PAGES || EXPERT
select COMPAT_BINFMT_ELF if BINFMT_ELF
select HAVE_UID16
select OLD_SIGSUSPEND3
select COMPAT_OLD_SIGACTION
help
This option enables support for a 32-bit EL0 running under a 64-bit
kernel at EL1. AArch32-specific components such as system calls,
the user helper functions, VFP support and the ptrace interface are
handled appropriately by the kernel.
If you use a page size other than 4KB (i.e, 16KB or 64KB), please be aware
that you will only be able to execute AArch32 binaries that were compiled
with page size aligned segments.
If you want to execute 32-bit userspace applications, say Y.
if COMPAT
config KUSER_HELPERS
bool "Enable kuser helpers page for 32 bit applications"
default y
help
Warning: disabling this option may break 32-bit user programs.
Provide kuser helpers to compat tasks. The kernel provides
helper code to userspace in read only form at a fixed location
to allow userspace to be independent of the CPU type fitted to
the system. This permits binaries to be run on ARMv4 through
to ARMv8 without modification.
See Documentation/arm/kernel_user_helpers.txt for details.
However, the fixed address nature of these helpers can be used
by ROP (return orientated programming) authors when creating
exploits.
If all of the binaries and libraries which run on your platform
are built specifically for your platform, and make no use of
these helpers, then you can turn this option off to hinder
such exploits. However, in that case, if a binary or library
relying on those helpers is run, it will not function correctly.
Say N here only if you are absolutely certain that you do not
need these helpers; otherwise, the safe option is to say Y.
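
For the 32-bit side, the helpers this option controls are reached at fixed
addresses, as described in Documentation/arm/kernel_user_helpers.txt. A hedged
sketch of what an AArch32 binary relying on them looks like (which is why
turning the option off can break existing compat userspace):

  /* AArch32 user code (build with a 32-bit ARM toolchain). The address and
   * calling convention are taken from Documentation/arm/kernel_user_helpers.txt;
   * with KUSER_HELPERS=n the page is not mapped and this call faults. */
  typedef int (__kuser_cmpxchg_t)(int oldval, int newval, volatile int *ptr);
  #define __kuser_cmpxchg (*(__kuser_cmpxchg_t *)0xffff0fc0)

  static int atomic_increment(volatile int *counter)
  {
  	int old, new;

  	do {
  		old = *counter;
  		new = old + 1;
  	} while (__kuser_cmpxchg(old, new, counter));	/* 0 means success */

  	return new;
  }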
menuconfig ARMV8_DEPRECATED menuconfig ARMV8_DEPRECATED
bool "Emulate deprecated/obsolete ARMv8 instructions" bool "Emulate deprecated/obsolete ARMv8 instructions"
depends on COMPAT
depends on SYSCTL depends on SYSCTL
help help
Legacy software support may require certain instructions Legacy software support may require certain instructions
...@@ -1142,13 +1202,7 @@ config SETEND_EMULATION ...@@ -1142,13 +1202,7 @@ config SETEND_EMULATION
If unsure, say Y If unsure, say Y
endif endif
config ARM64_SW_TTBR0_PAN endif
bool "Emulate Privileged Access Never using TTBR0_EL1 switching"
help
Enabling this option prevents the kernel from accessing
user-space memory directly by pointing TTBR0_EL1 to a reserved
zeroed area and reserved ASID. The user access routines
restore the valid TTBR0_EL1 temporarily.
menu "ARMv8.1 architectural features" menu "ARMv8.1 architectural features"
...@@ -1314,6 +1368,9 @@ config ARM64_SVE ...@@ -1314,6 +1368,9 @@ config ARM64_SVE
To enable use of this extension on CPUs that implement it, say Y. To enable use of this extension on CPUs that implement it, say Y.
On CPUs that support the SVE2 extensions, this option will enable
those too.
Note that for architectural reasons, firmware _must_ implement SVE Note that for architectural reasons, firmware _must_ implement SVE
support when running on SVE capable hardware. The required support support when running on SVE capable hardware. The required support
is present in: is present in:
...@@ -1347,7 +1404,7 @@ config ARM64_PSEUDO_NMI ...@@ -1347,7 +1404,7 @@ config ARM64_PSEUDO_NMI
help help
Adds support for mimicking Non-Maskable Interrupts through the use of Adds support for mimicking Non-Maskable Interrupts through the use of
GIC interrupt priority. This support requires version 3 or later of GIC interrupt priority. This support requires version 3 or later of
Arm GIC. ARM GIC.
This high priority configuration for interrupts needs to be This high priority configuration for interrupts needs to be
explicitly enabled by setting the kernel parameter explicitly enabled by setting the kernel parameter
...@@ -1471,25 +1528,6 @@ config DMI ...@@ -1471,25 +1528,6 @@ config DMI
endmenu endmenu
config COMPAT
bool "Kernel support for 32-bit EL0"
depends on ARM64_4K_PAGES || EXPERT
select COMPAT_BINFMT_ELF if BINFMT_ELF
select HAVE_UID16
select OLD_SIGSUSPEND3
select COMPAT_OLD_SIGACTION
help
This option enables support for a 32-bit EL0 running under a 64-bit
kernel at EL1. AArch32-specific components such as system calls,
the user helper functions, VFP support and the ptrace interface are
handled appropriately by the kernel.
If you use a page size other than 4KB (i.e, 16KB or 64KB), please be aware
that you will only be able to execute AArch32 binaries that were compiled
with page size aligned segments.
If you want to execute 32-bit userspace applications, say Y.
config SYSVIPC_COMPAT config SYSVIPC_COMPAT
def_bool y def_bool y
depends on COMPAT && SYSVIPC depends on COMPAT && SYSVIPC
......
// SPDX-License-Identifier: GPL-2.0 /* SPDX-License-Identifier: GPL-2.0 */
/* /*
* Copyright (C) 2018 MediaTek Inc. * Copyright (C) 2018 MediaTek Inc.
* Author: Zhiyong Tao <zhiyong.tao@mediatek.com> * Author: Zhiyong Tao <zhiyong.tao@mediatek.com>
......
...@@ -372,7 +372,7 @@ static struct aead_alg ccm_aes_alg = { ...@@ -372,7 +372,7 @@ static struct aead_alg ccm_aes_alg = {
static int __init aes_mod_init(void) static int __init aes_mod_init(void)
{ {
if (!(elf_hwcap & HWCAP_AES)) if (!cpu_have_named_feature(AES))
return -ENODEV; return -ENODEV;
return crypto_register_aead(&ccm_aes_alg); return crypto_register_aead(&ccm_aes_alg);
} }
......
...@@ -440,7 +440,7 @@ static int __init aes_init(void) ...@@ -440,7 +440,7 @@ static int __init aes_init(void)
int err; int err;
int i; int i;
if (!(elf_hwcap & HWCAP_ASIMD)) if (!cpu_have_named_feature(ASIMD))
return -ENODEV; return -ENODEV;
err = crypto_register_skciphers(aes_algs, ARRAY_SIZE(aes_algs)); err = crypto_register_skciphers(aes_algs, ARRAY_SIZE(aes_algs));
......
...@@ -173,7 +173,7 @@ static struct skcipher_alg algs[] = { ...@@ -173,7 +173,7 @@ static struct skcipher_alg algs[] = {
static int __init chacha_simd_mod_init(void) static int __init chacha_simd_mod_init(void)
{ {
if (!(elf_hwcap & HWCAP_ASIMD)) if (!cpu_have_named_feature(ASIMD))
return -ENODEV; return -ENODEV;
return crypto_register_skciphers(algs, ARRAY_SIZE(algs)); return crypto_register_skciphers(algs, ARRAY_SIZE(algs));
......
...@@ -101,7 +101,7 @@ static struct shash_alg crc_t10dif_alg[] = {{ ...@@ -101,7 +101,7 @@ static struct shash_alg crc_t10dif_alg[] = {{
static int __init crc_t10dif_mod_init(void) static int __init crc_t10dif_mod_init(void)
{ {
if (elf_hwcap & HWCAP_PMULL) if (cpu_have_named_feature(PMULL))
return crypto_register_shashes(crc_t10dif_alg, return crypto_register_shashes(crc_t10dif_alg,
ARRAY_SIZE(crc_t10dif_alg)); ARRAY_SIZE(crc_t10dif_alg));
else else
...@@ -111,7 +111,7 @@ static int __init crc_t10dif_mod_init(void) ...@@ -111,7 +111,7 @@ static int __init crc_t10dif_mod_init(void)
static void __exit crc_t10dif_mod_exit(void) static void __exit crc_t10dif_mod_exit(void)
{ {
if (elf_hwcap & HWCAP_PMULL) if (cpu_have_named_feature(PMULL))
crypto_unregister_shashes(crc_t10dif_alg, crypto_unregister_shashes(crc_t10dif_alg,
ARRAY_SIZE(crc_t10dif_alg)); ARRAY_SIZE(crc_t10dif_alg));
else else
......
...@@ -704,10 +704,10 @@ static int __init ghash_ce_mod_init(void) ...@@ -704,10 +704,10 @@ static int __init ghash_ce_mod_init(void)
{ {
int ret; int ret;
if (!(elf_hwcap & HWCAP_ASIMD)) if (!cpu_have_named_feature(ASIMD))
return -ENODEV; return -ENODEV;
if (elf_hwcap & HWCAP_PMULL) if (cpu_have_named_feature(PMULL))
ret = crypto_register_shashes(ghash_alg, ret = crypto_register_shashes(ghash_alg,
ARRAY_SIZE(ghash_alg)); ARRAY_SIZE(ghash_alg));
else else
...@@ -717,7 +717,7 @@ static int __init ghash_ce_mod_init(void) ...@@ -717,7 +717,7 @@ static int __init ghash_ce_mod_init(void)
if (ret) if (ret)
return ret; return ret;
if (elf_hwcap & HWCAP_PMULL) { if (cpu_have_named_feature(PMULL)) {
ret = crypto_register_aead(&gcm_aes_alg); ret = crypto_register_aead(&gcm_aes_alg);
if (ret) if (ret)
crypto_unregister_shashes(ghash_alg, crypto_unregister_shashes(ghash_alg,
...@@ -728,7 +728,7 @@ static int __init ghash_ce_mod_init(void) ...@@ -728,7 +728,7 @@ static int __init ghash_ce_mod_init(void)
static void __exit ghash_ce_mod_exit(void) static void __exit ghash_ce_mod_exit(void)
{ {
if (elf_hwcap & HWCAP_PMULL) if (cpu_have_named_feature(PMULL))
crypto_unregister_shashes(ghash_alg, ARRAY_SIZE(ghash_alg)); crypto_unregister_shashes(ghash_alg, ARRAY_SIZE(ghash_alg));
else else
crypto_unregister_shash(ghash_alg); crypto_unregister_shash(ghash_alg);
......
...@@ -56,7 +56,7 @@ static struct shash_alg nhpoly1305_alg = { ...@@ -56,7 +56,7 @@ static struct shash_alg nhpoly1305_alg = {
static int __init nhpoly1305_mod_init(void) static int __init nhpoly1305_mod_init(void)
{ {
if (!(elf_hwcap & HWCAP_ASIMD)) if (!cpu_have_named_feature(ASIMD))
return -ENODEV; return -ENODEV;
return crypto_register_shash(&nhpoly1305_alg); return crypto_register_shash(&nhpoly1305_alg);
......
...@@ -173,7 +173,7 @@ static int __init sha256_mod_init(void) ...@@ -173,7 +173,7 @@ static int __init sha256_mod_init(void)
if (ret) if (ret)
return ret; return ret;
if (elf_hwcap & HWCAP_ASIMD) { if (cpu_have_named_feature(ASIMD)) {
ret = crypto_register_shashes(neon_algs, ARRAY_SIZE(neon_algs)); ret = crypto_register_shashes(neon_algs, ARRAY_SIZE(neon_algs));
if (ret) if (ret)
crypto_unregister_shashes(algs, ARRAY_SIZE(algs)); crypto_unregister_shashes(algs, ARRAY_SIZE(algs));
...@@ -183,7 +183,7 @@ static int __init sha256_mod_init(void) ...@@ -183,7 +183,7 @@ static int __init sha256_mod_init(void)
static void __exit sha256_mod_fini(void) static void __exit sha256_mod_fini(void)
{ {
if (elf_hwcap & HWCAP_ASIMD) if (cpu_have_named_feature(ASIMD))
crypto_unregister_shashes(neon_algs, ARRAY_SIZE(neon_algs)); crypto_unregister_shashes(neon_algs, ARRAY_SIZE(neon_algs));
crypto_unregister_shashes(algs, ARRAY_SIZE(algs)); crypto_unregister_shashes(algs, ARRAY_SIZE(algs));
} }
......
...@@ -31,11 +31,23 @@ ...@@ -31,11 +31,23 @@
#include <clocksource/arm_arch_timer.h> #include <clocksource/arm_arch_timer.h>
#if IS_ENABLED(CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND) #if IS_ENABLED(CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND)
extern struct static_key_false arch_timer_read_ool_enabled; #define has_erratum_handler(h) \
#define needs_unstable_timer_counter_workaround() \ ({ \
static_branch_unlikely(&arch_timer_read_ool_enabled) const struct arch_timer_erratum_workaround *__wa; \
__wa = __this_cpu_read(timer_unstable_counter_workaround); \
(__wa && __wa->h); \
})
#define erratum_handler(h) \
({ \
const struct arch_timer_erratum_workaround *__wa; \
__wa = __this_cpu_read(timer_unstable_counter_workaround); \
(__wa && __wa->h) ? __wa->h : arch_timer_##h; \
})
#else #else
#define needs_unstable_timer_counter_workaround() false #define has_erratum_handler(h) false
#define erratum_handler(h) (arch_timer_##h)
#endif #endif
enum arch_timer_erratum_match_type { enum arch_timer_erratum_match_type {
...@@ -61,23 +73,37 @@ struct arch_timer_erratum_workaround { ...@@ -61,23 +73,37 @@ struct arch_timer_erratum_workaround {
DECLARE_PER_CPU(const struct arch_timer_erratum_workaround *, DECLARE_PER_CPU(const struct arch_timer_erratum_workaround *,
timer_unstable_counter_workaround); timer_unstable_counter_workaround);
/* inline sysreg accessors that make erratum_handler() work */
static inline notrace u32 arch_timer_read_cntp_tval_el0(void)
{
return read_sysreg(cntp_tval_el0);
}
static inline notrace u32 arch_timer_read_cntv_tval_el0(void)
{
return read_sysreg(cntv_tval_el0);
}
static inline notrace u64 arch_timer_read_cntpct_el0(void)
{
return read_sysreg(cntpct_el0);
}
static inline notrace u64 arch_timer_read_cntvct_el0(void)
{
return read_sysreg(cntvct_el0);
}
#define arch_timer_reg_read_stable(reg) \ #define arch_timer_reg_read_stable(reg) \
({ \ ({ \
u64 _val; \ u64 _val; \
if (needs_unstable_timer_counter_workaround()) { \ \
const struct arch_timer_erratum_workaround *wa; \
preempt_disable_notrace(); \ preempt_disable_notrace(); \
wa = __this_cpu_read(timer_unstable_counter_workaround); \ _val = erratum_handler(read_ ## reg)(); \
if (wa && wa->read_##reg) \
_val = wa->read_##reg(); \
else \
_val = read_sysreg(reg); \
preempt_enable_notrace(); \ preempt_enable_notrace(); \
} else { \ \
_val = read_sysreg(reg); \
} \
_val; \ _val; \
}) })
/* /*
* These register accessors are marked inline so the compiler can * These register accessors are marked inline so the compiler can
...@@ -148,18 +174,67 @@ static inline void arch_timer_set_cntkctl(u32 cntkctl) ...@@ -148,18 +174,67 @@ static inline void arch_timer_set_cntkctl(u32 cntkctl)
isb(); isb();
} }
static inline u64 arch_counter_get_cntpct(void) /*
* Ensure that reads of the counter are treated the same as memory reads
* for the purposes of ordering by subsequent memory barriers.
*
* This insanity brought to you by speculative system register reads,
* out-of-order memory accesses, sequence locks and Thomas Gleixner.
*
* http://lists.infradead.org/pipermail/linux-arm-kernel/2019-February/631195.html
*/
#define arch_counter_enforce_ordering(val) do { \
u64 tmp, _val = (val); \
\
asm volatile( \
" eor %0, %1, %1\n" \
" add %0, sp, %0\n" \
" ldr xzr, [%0]" \
: "=r" (tmp) : "r" (_val)); \
} while (0)
static inline u64 __arch_counter_get_cntpct_stable(void)
{
u64 cnt;
isb();
cnt = arch_timer_reg_read_stable(cntpct_el0);
arch_counter_enforce_ordering(cnt);
return cnt;
}
static inline u64 __arch_counter_get_cntpct(void)
{ {
u64 cnt;
isb(); isb();
return arch_timer_reg_read_stable(cntpct_el0); cnt = read_sysreg(cntpct_el0);
arch_counter_enforce_ordering(cnt);
return cnt;
} }
static inline u64 arch_counter_get_cntvct(void) static inline u64 __arch_counter_get_cntvct_stable(void)
{ {
u64 cnt;
isb(); isb();
return arch_timer_reg_read_stable(cntvct_el0); cnt = arch_timer_reg_read_stable(cntvct_el0);
arch_counter_enforce_ordering(cnt);
return cnt;
} }
static inline u64 __arch_counter_get_cntvct(void)
{
u64 cnt;
isb();
cnt = read_sysreg(cntvct_el0);
arch_counter_enforce_ordering(cnt);
return cnt;
}
#undef arch_counter_enforce_ordering
static inline int arch_timer_arch_init(void) static inline int arch_timer_arch_init(void)
{ {
return 0; return 0;
......
...@@ -407,10 +407,14 @@ alternative_endif ...@@ -407,10 +407,14 @@ alternative_endif
.ifc \op, cvap .ifc \op, cvap
sys 3, c7, c12, 1, \kaddr // dc cvap sys 3, c7, c12, 1, \kaddr // dc cvap
.else .else
.ifc \op, cvadp
sys 3, c7, c13, 1, \kaddr // dc cvadp
.else
dc \op, \kaddr dc \op, \kaddr
.endif .endif
.endif .endif
.endif .endif
.endif
add \kaddr, \kaddr, \tmp1 add \kaddr, \kaddr, \tmp1
cmp \kaddr, \size cmp \kaddr, \size
b.lo 9998b b.lo 9998b
...@@ -442,8 +446,8 @@ USER(\label, ic ivau, \tmp2) // invalidate I line PoU ...@@ -442,8 +446,8 @@ USER(\label, ic ivau, \tmp2) // invalidate I line PoU
* reset_pmuserenr_el0 - reset PMUSERENR_EL0 if PMUv3 present * reset_pmuserenr_el0 - reset PMUSERENR_EL0 if PMUv3 present
*/ */
.macro reset_pmuserenr_el0, tmpreg .macro reset_pmuserenr_el0, tmpreg
mrs \tmpreg, id_aa64dfr0_el1 // Check ID_AA64DFR0_EL1 PMUVer mrs \tmpreg, id_aa64dfr0_el1
sbfx \tmpreg, \tmpreg, #8, #4 sbfx \tmpreg, \tmpreg, #ID_AA64DFR0_PMUVER_SHIFT, #4
cmp \tmpreg, #1 // Skip if no PMU present cmp \tmpreg, #1 // Skip if no PMU present
b.lt 9000f b.lt 9000f
msr pmuserenr_el0, xzr // Disable PMU access from EL0 msr pmuserenr_el0, xzr // Disable PMU access from EL0
......
...@@ -20,6 +20,8 @@ ...@@ -20,6 +20,8 @@
#ifndef __ASSEMBLY__ #ifndef __ASSEMBLY__
#include <linux/kasan-checks.h>
#define __nops(n) ".rept " #n "\nnop\n.endr\n" #define __nops(n) ".rept " #n "\nnop\n.endr\n"
#define nops(n) asm volatile(__nops(n)) #define nops(n) asm volatile(__nops(n))
...@@ -72,31 +74,33 @@ static inline unsigned long array_index_mask_nospec(unsigned long idx, ...@@ -72,31 +74,33 @@ static inline unsigned long array_index_mask_nospec(unsigned long idx,
#define __smp_store_release(p, v) \ #define __smp_store_release(p, v) \
do { \ do { \
typeof(p) __p = (p); \
union { typeof(*p) __val; char __c[1]; } __u = \ union { typeof(*p) __val; char __c[1]; } __u = \
{ .__val = (__force typeof(*p)) (v) }; \ { .__val = (__force typeof(*p)) (v) }; \
compiletime_assert_atomic_type(*p); \ compiletime_assert_atomic_type(*p); \
kasan_check_write(__p, sizeof(*p)); \
switch (sizeof(*p)) { \ switch (sizeof(*p)) { \
case 1: \ case 1: \
asm volatile ("stlrb %w1, %0" \ asm volatile ("stlrb %w1, %0" \
: "=Q" (*p) \ : "=Q" (*__p) \
: "r" (*(__u8 *)__u.__c) \ : "r" (*(__u8 *)__u.__c) \
: "memory"); \ : "memory"); \
break; \ break; \
case 2: \ case 2: \
asm volatile ("stlrh %w1, %0" \ asm volatile ("stlrh %w1, %0" \
: "=Q" (*p) \ : "=Q" (*__p) \
: "r" (*(__u16 *)__u.__c) \ : "r" (*(__u16 *)__u.__c) \
: "memory"); \ : "memory"); \
break; \ break; \
case 4: \ case 4: \
asm volatile ("stlr %w1, %0" \ asm volatile ("stlr %w1, %0" \
: "=Q" (*p) \ : "=Q" (*__p) \
: "r" (*(__u32 *)__u.__c) \ : "r" (*(__u32 *)__u.__c) \
: "memory"); \ : "memory"); \
break; \ break; \
case 8: \ case 8: \
asm volatile ("stlr %1, %0" \ asm volatile ("stlr %1, %0" \
: "=Q" (*p) \ : "=Q" (*__p) \
: "r" (*(__u64 *)__u.__c) \ : "r" (*(__u64 *)__u.__c) \
: "memory"); \ : "memory"); \
break; \ break; \
...@@ -106,27 +110,29 @@ do { \ ...@@ -106,27 +110,29 @@ do { \
#define __smp_load_acquire(p) \ #define __smp_load_acquire(p) \
({ \ ({ \
union { typeof(*p) __val; char __c[1]; } __u; \ union { typeof(*p) __val; char __c[1]; } __u; \
typeof(p) __p = (p); \
compiletime_assert_atomic_type(*p); \ compiletime_assert_atomic_type(*p); \
kasan_check_read(__p, sizeof(*p)); \
switch (sizeof(*p)) { \ switch (sizeof(*p)) { \
case 1: \ case 1: \
asm volatile ("ldarb %w0, %1" \ asm volatile ("ldarb %w0, %1" \
: "=r" (*(__u8 *)__u.__c) \ : "=r" (*(__u8 *)__u.__c) \
: "Q" (*p) : "memory"); \ : "Q" (*__p) : "memory"); \
break; \ break; \
case 2: \ case 2: \
asm volatile ("ldarh %w0, %1" \ asm volatile ("ldarh %w0, %1" \
: "=r" (*(__u16 *)__u.__c) \ : "=r" (*(__u16 *)__u.__c) \
: "Q" (*p) : "memory"); \ : "Q" (*__p) : "memory"); \
break; \ break; \
case 4: \ case 4: \
asm volatile ("ldar %w0, %1" \ asm volatile ("ldar %w0, %1" \
: "=r" (*(__u32 *)__u.__c) \ : "=r" (*(__u32 *)__u.__c) \
: "Q" (*p) : "memory"); \ : "Q" (*__p) : "memory"); \
break; \ break; \
case 8: \ case 8: \
asm volatile ("ldar %0, %1" \ asm volatile ("ldar %0, %1" \
: "=r" (*(__u64 *)__u.__c) \ : "=r" (*(__u64 *)__u.__c) \
: "Q" (*p) : "memory"); \ : "Q" (*__p) : "memory"); \
break; \ break; \
} \ } \
__u.__val; \ __u.__val; \
......
...@@ -11,6 +11,8 @@ ...@@ -11,6 +11,8 @@
/* /*
* #imm16 values used for BRK instruction generation * #imm16 values used for BRK instruction generation
* 0x004: for installing kprobes
* 0x005: for installing uprobes
* Allowed values for kgdb are 0x400 - 0x7ff * Allowed values for kgdb are 0x400 - 0x7ff
* 0x100: for triggering a fault on purpose (reserved) * 0x100: for triggering a fault on purpose (reserved)
* 0x400: for dynamic BRK instruction * 0x400: for dynamic BRK instruction
...@@ -18,10 +20,13 @@ ...@@ -18,10 +20,13 @@
* 0x800: kernel-mode BUG() and WARN() traps * 0x800: kernel-mode BUG() and WARN() traps
* 0x9xx: tag-based KASAN trap (allowed values 0x900 - 0x9ff) * 0x9xx: tag-based KASAN trap (allowed values 0x900 - 0x9ff)
*/ */
#define KPROBES_BRK_IMM 0x004
#define UPROBES_BRK_IMM 0x005
#define FAULT_BRK_IMM 0x100 #define FAULT_BRK_IMM 0x100
#define KGDB_DYN_DBG_BRK_IMM 0x400 #define KGDB_DYN_DBG_BRK_IMM 0x400
#define KGDB_COMPILED_DBG_BRK_IMM 0x401 #define KGDB_COMPILED_DBG_BRK_IMM 0x401
#define BUG_BRK_IMM 0x800 #define BUG_BRK_IMM 0x800
#define KASAN_BRK_IMM 0x900 #define KASAN_BRK_IMM 0x900
#define KASAN_BRK_MASK 0x0ff
#endif #endif
...@@ -61,7 +61,8 @@ ...@@ -61,7 +61,8 @@
#define ARM64_HAS_GENERIC_AUTH_ARCH 40 #define ARM64_HAS_GENERIC_AUTH_ARCH 40
#define ARM64_HAS_GENERIC_AUTH_IMP_DEF 41 #define ARM64_HAS_GENERIC_AUTH_IMP_DEF 41
#define ARM64_HAS_IRQ_PRIO_MASKING 42 #define ARM64_HAS_IRQ_PRIO_MASKING 42
#define ARM64_HAS_DCPODP 43
#define ARM64_NCAPS 43 #define ARM64_NCAPS 44
#endif /* __ASM_CPUCAPS_H */ #endif /* __ASM_CPUCAPS_H */
...@@ -14,15 +14,8 @@ ...@@ -14,15 +14,8 @@
#include <asm/hwcap.h> #include <asm/hwcap.h>
#include <asm/sysreg.h> #include <asm/sysreg.h>
/* #define MAX_CPU_FEATURES 64
* In the arm64 world (as in the ARM world), elf_hwcap is used both internally #define cpu_feature(x) KERNEL_HWCAP_ ## x
* in the kernel and for user space to keep track of which optional features
* are supported by the current system. So let's map feature 'x' to HWCAP_x.
* Note that HWCAP_x constants are bit fields so we need to take the log.
*/
#define MAX_CPU_FEATURES (8 * sizeof(elf_hwcap))
#define cpu_feature(x) ilog2(HWCAP_ ## x)
#ifndef __ASSEMBLY__ #ifndef __ASSEMBLY__
...@@ -399,11 +392,13 @@ extern DECLARE_BITMAP(boot_capabilities, ARM64_NPATCHABLE); ...@@ -399,11 +392,13 @@ extern DECLARE_BITMAP(boot_capabilities, ARM64_NPATCHABLE);
for_each_set_bit(cap, cpu_hwcaps, ARM64_NCAPS) for_each_set_bit(cap, cpu_hwcaps, ARM64_NCAPS)
bool this_cpu_has_cap(unsigned int cap); bool this_cpu_has_cap(unsigned int cap);
void cpu_set_feature(unsigned int num);
bool cpu_have_feature(unsigned int num);
unsigned long cpu_get_elf_hwcap(void);
unsigned long cpu_get_elf_hwcap2(void);
static inline bool cpu_have_feature(unsigned int num) #define cpu_set_named_feature(name) cpu_set_feature(cpu_feature(name))
{ #define cpu_have_named_feature(name) cpu_have_feature(cpu_feature(name))
return elf_hwcap & (1UL << num);
}
/* System capability check for constant caps */ /* System capability check for constant caps */
static inline bool __cpus_have_const_cap(int num) static inline bool __cpus_have_const_cap(int num)
...@@ -638,11 +633,7 @@ static inline int arm64_get_ssbd_state(void) ...@@ -638,11 +633,7 @@ static inline int arm64_get_ssbd_state(void)
#endif #endif
} }
#ifdef CONFIG_ARM64_SSBD
void arm64_set_ssbd_mitigation(bool state); void arm64_set_ssbd_mitigation(bool state);
#else
static inline void arm64_set_ssbd_mitigation(bool state) {}
#endif
extern int do_emulate_mrs(struct pt_regs *regs, u32 sys_reg, u32 rt); extern int do_emulate_mrs(struct pt_regs *regs, u32 sys_reg, u32 rt);
......
...@@ -89,6 +89,7 @@ ...@@ -89,6 +89,7 @@
#define ARM_CPU_PART_CORTEX_A35 0xD04 #define ARM_CPU_PART_CORTEX_A35 0xD04
#define ARM_CPU_PART_CORTEX_A55 0xD05 #define ARM_CPU_PART_CORTEX_A55 0xD05
#define ARM_CPU_PART_CORTEX_A76 0xD0B #define ARM_CPU_PART_CORTEX_A76 0xD0B
#define ARM_CPU_PART_NEOVERSE_N1 0xD0C
#define APM_CPU_PART_POTENZA 0x000 #define APM_CPU_PART_POTENZA 0x000
...@@ -118,6 +119,7 @@ ...@@ -118,6 +119,7 @@
#define MIDR_CORTEX_A35 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A35) #define MIDR_CORTEX_A35 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A35)
#define MIDR_CORTEX_A55 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A55) #define MIDR_CORTEX_A55 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A55)
#define MIDR_CORTEX_A76 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A76) #define MIDR_CORTEX_A76 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A76)
#define MIDR_NEOVERSE_N1 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N1)
#define MIDR_THUNDERX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX) #define MIDR_THUNDERX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX)
#define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX) #define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX)
#define MIDR_THUNDERX_83XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_83XX) #define MIDR_THUNDERX_83XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_83XX)
......
...@@ -65,12 +65,9 @@ ...@@ -65,12 +65,9 @@
#define CACHE_FLUSH_IS_SAFE 1 #define CACHE_FLUSH_IS_SAFE 1
/* kprobes BRK opcodes with ESR encoding */ /* kprobes BRK opcodes with ESR encoding */
#define BRK64_ESR_MASK 0xFFFF #define BRK64_OPCODE_KPROBES (AARCH64_BREAK_MON | (KPROBES_BRK_IMM << 5))
#define BRK64_ESR_KPROBES 0x0004
#define BRK64_OPCODE_KPROBES (AARCH64_BREAK_MON | (BRK64_ESR_KPROBES << 5))
/* uprobes BRK opcodes with ESR encoding */ /* uprobes BRK opcodes with ESR encoding */
#define BRK64_ESR_UPROBES 0x0005 #define BRK64_OPCODE_UPROBES (AARCH64_BREAK_MON | (UPROBES_BRK_IMM << 5))
#define BRK64_OPCODE_UPROBES (AARCH64_BREAK_MON | (BRK64_ESR_UPROBES << 5))
/* AArch32 */ /* AArch32 */
#define DBG_ESR_EVT_BKPT 0x4 #define DBG_ESR_EVT_BKPT 0x4
...@@ -94,18 +91,24 @@ struct step_hook { ...@@ -94,18 +91,24 @@ struct step_hook {
int (*fn)(struct pt_regs *regs, unsigned int esr); int (*fn)(struct pt_regs *regs, unsigned int esr);
}; };
void register_step_hook(struct step_hook *hook); void register_user_step_hook(struct step_hook *hook);
void unregister_step_hook(struct step_hook *hook); void unregister_user_step_hook(struct step_hook *hook);
void register_kernel_step_hook(struct step_hook *hook);
void unregister_kernel_step_hook(struct step_hook *hook);
struct break_hook { struct break_hook {
struct list_head node; struct list_head node;
u32 esr_val;
u32 esr_mask;
int (*fn)(struct pt_regs *regs, unsigned int esr); int (*fn)(struct pt_regs *regs, unsigned int esr);
u16 imm;
u16 mask; /* These bits are ignored when comparing with imm */
}; };
void register_break_hook(struct break_hook *hook); void register_user_break_hook(struct break_hook *hook);
void unregister_break_hook(struct break_hook *hook); void unregister_user_break_hook(struct break_hook *hook);
void register_kernel_break_hook(struct break_hook *hook);
void unregister_kernel_break_hook(struct break_hook *hook);
u8 debug_monitors_arch(void); u8 debug_monitors_arch(void);
......
...@@ -214,10 +214,10 @@ typedef compat_elf_greg_t compat_elf_gregset_t[COMPAT_ELF_NGREG]; ...@@ -214,10 +214,10 @@ typedef compat_elf_greg_t compat_elf_gregset_t[COMPAT_ELF_NGREG];
set_thread_flag(TIF_32BIT); \ set_thread_flag(TIF_32BIT); \
}) })
#define COMPAT_ARCH_DLINFO #define COMPAT_ARCH_DLINFO
extern int aarch32_setup_vectors_page(struct linux_binprm *bprm, extern int aarch32_setup_additional_pages(struct linux_binprm *bprm,
int uses_interp); int uses_interp);
#define compat_arch_setup_additional_pages \ #define compat_arch_setup_additional_pages \
aarch32_setup_vectors_page aarch32_setup_additional_pages
#endif /* CONFIG_COMPAT */ #endif /* CONFIG_COMPAT */
......
...@@ -156,9 +156,7 @@ ...@@ -156,9 +156,7 @@
ESR_ELx_WFx_ISS_WFI) ESR_ELx_WFx_ISS_WFI)
/* BRK instruction trap from AArch64 state */ /* BRK instruction trap from AArch64 state */
#define ESR_ELx_VAL_BRK64(imm) \ #define ESR_ELx_BRK64_ISS_COMMENT_MASK 0xffff
((ESR_ELx_EC_BRK64 << ESR_ELx_EC_SHIFT) | ESR_ELx_IL | \
((imm) & 0xffff))
/* ISS field definitions for System instruction traps */ /* ISS field definitions for System instruction traps */
#define ESR_ELx_SYS64_ISS_RES0_SHIFT 22 #define ESR_ELx_SYS64_ISS_RES0_SHIFT 22
...@@ -198,9 +196,10 @@ ...@@ -198,9 +196,10 @@
/* /*
* User space cache operations have the following sysreg encoding * User space cache operations have the following sysreg encoding
* in System instructions. * in System instructions.
* op0=1, op1=3, op2=1, crn=7, crm={ 5, 10, 11, 12, 14 }, WRITE (L=0) * op0=1, op1=3, op2=1, crn=7, crm={ 5, 10, 11, 12, 13, 14 }, WRITE (L=0)
*/ */
#define ESR_ELx_SYS64_ISS_CRM_DC_CIVAC 14 #define ESR_ELx_SYS64_ISS_CRM_DC_CIVAC 14
#define ESR_ELx_SYS64_ISS_CRM_DC_CVADP 13
#define ESR_ELx_SYS64_ISS_CRM_DC_CVAP 12 #define ESR_ELx_SYS64_ISS_CRM_DC_CVAP 12
#define ESR_ELx_SYS64_ISS_CRM_DC_CVAU 11 #define ESR_ELx_SYS64_ISS_CRM_DC_CVAU 11
#define ESR_ELx_SYS64_ISS_CRM_DC_CVAC 10 #define ESR_ELx_SYS64_ISS_CRM_DC_CVAC 10
......
...@@ -23,26 +23,34 @@ ...@@ -23,26 +23,34 @@
#include <asm/errno.h> #include <asm/errno.h>
#define FUTEX_MAX_LOOPS 128 /* What's the largest number you can think of? */
#define __futex_atomic_op(insn, ret, oldval, uaddr, tmp, oparg) \ #define __futex_atomic_op(insn, ret, oldval, uaddr, tmp, oparg) \
do { \ do { \
unsigned int loops = FUTEX_MAX_LOOPS; \
\
uaccess_enable(); \ uaccess_enable(); \
asm volatile( \ asm volatile( \
" prfm pstl1strm, %2\n" \ " prfm pstl1strm, %2\n" \
"1: ldxr %w1, %2\n" \ "1: ldxr %w1, %2\n" \
insn "\n" \ insn "\n" \
"2: stlxr %w0, %w3, %2\n" \ "2: stlxr %w0, %w3, %2\n" \
" cbnz %w0, 1b\n" \ " cbz %w0, 3f\n" \
" dmb ish\n" \ " sub %w4, %w4, %w0\n" \
" cbnz %w4, 1b\n" \
" mov %w0, %w7\n" \
"3:\n" \ "3:\n" \
" dmb ish\n" \
" .pushsection .fixup,\"ax\"\n" \ " .pushsection .fixup,\"ax\"\n" \
" .align 2\n" \ " .align 2\n" \
"4: mov %w0, %w5\n" \ "4: mov %w0, %w6\n" \
" b 3b\n" \ " b 3b\n" \
" .popsection\n" \ " .popsection\n" \
_ASM_EXTABLE(1b, 4b) \ _ASM_EXTABLE(1b, 4b) \
_ASM_EXTABLE(2b, 4b) \ _ASM_EXTABLE(2b, 4b) \
: "=&r" (ret), "=&r" (oldval), "+Q" (*uaddr), "=&r" (tmp) \ : "=&r" (ret), "=&r" (oldval), "+Q" (*uaddr), "=&r" (tmp), \
: "r" (oparg), "Ir" (-EFAULT) \ "+r" (loops) \
: "r" (oparg), "Ir" (-EFAULT), "Ir" (-EAGAIN) \
: "memory"); \ : "memory"); \
uaccess_disable(); \ uaccess_disable(); \
} while (0) } while (0)
...@@ -57,23 +65,23 @@ arch_futex_atomic_op_inuser(int op, int oparg, int *oval, u32 __user *_uaddr) ...@@ -57,23 +65,23 @@ arch_futex_atomic_op_inuser(int op, int oparg, int *oval, u32 __user *_uaddr)
switch (op) { switch (op) {
case FUTEX_OP_SET: case FUTEX_OP_SET:
__futex_atomic_op("mov %w3, %w4", __futex_atomic_op("mov %w3, %w5",
ret, oldval, uaddr, tmp, oparg); ret, oldval, uaddr, tmp, oparg);
break; break;
case FUTEX_OP_ADD: case FUTEX_OP_ADD:
__futex_atomic_op("add %w3, %w1, %w4", __futex_atomic_op("add %w3, %w1, %w5",
ret, oldval, uaddr, tmp, oparg); ret, oldval, uaddr, tmp, oparg);
break; break;
case FUTEX_OP_OR: case FUTEX_OP_OR:
__futex_atomic_op("orr %w3, %w1, %w4", __futex_atomic_op("orr %w3, %w1, %w5",
ret, oldval, uaddr, tmp, oparg); ret, oldval, uaddr, tmp, oparg);
break; break;
case FUTEX_OP_ANDN: case FUTEX_OP_ANDN:
__futex_atomic_op("and %w3, %w1, %w4", __futex_atomic_op("and %w3, %w1, %w5",
ret, oldval, uaddr, tmp, ~oparg); ret, oldval, uaddr, tmp, ~oparg);
break; break;
case FUTEX_OP_XOR: case FUTEX_OP_XOR:
__futex_atomic_op("eor %w3, %w1, %w4", __futex_atomic_op("eor %w3, %w1, %w5",
ret, oldval, uaddr, tmp, oparg); ret, oldval, uaddr, tmp, oparg);
break; break;
default: default:
...@@ -93,6 +101,7 @@ futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *_uaddr, ...@@ -93,6 +101,7 @@ futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *_uaddr,
u32 oldval, u32 newval) u32 oldval, u32 newval)
{ {
int ret = 0; int ret = 0;
unsigned int loops = FUTEX_MAX_LOOPS;
u32 val, tmp; u32 val, tmp;
u32 __user *uaddr; u32 __user *uaddr;
...@@ -104,24 +113,30 @@ futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *_uaddr, ...@@ -104,24 +113,30 @@ futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *_uaddr,
asm volatile("// futex_atomic_cmpxchg_inatomic\n" asm volatile("// futex_atomic_cmpxchg_inatomic\n"
" prfm pstl1strm, %2\n" " prfm pstl1strm, %2\n"
"1: ldxr %w1, %2\n" "1: ldxr %w1, %2\n"
" sub %w3, %w1, %w4\n" " sub %w3, %w1, %w5\n"
" cbnz %w3, 3f\n" " cbnz %w3, 4f\n"
"2: stlxr %w3, %w5, %2\n" "2: stlxr %w3, %w6, %2\n"
" cbnz %w3, 1b\n" " cbz %w3, 3f\n"
" dmb ish\n" " sub %w4, %w4, %w3\n"
" cbnz %w4, 1b\n"
" mov %w0, %w8\n"
"3:\n" "3:\n"
" dmb ish\n"
"4:\n"
" .pushsection .fixup,\"ax\"\n" " .pushsection .fixup,\"ax\"\n"
"4: mov %w0, %w6\n" "5: mov %w0, %w7\n"
" b 3b\n" " b 4b\n"
" .popsection\n" " .popsection\n"
_ASM_EXTABLE(1b, 4b) _ASM_EXTABLE(1b, 5b)
_ASM_EXTABLE(2b, 4b) _ASM_EXTABLE(2b, 5b)
: "+r" (ret), "=&r" (val), "+Q" (*uaddr), "=&r" (tmp) : "+r" (ret), "=&r" (val), "+Q" (*uaddr), "=&r" (tmp), "+r" (loops)
: "r" (oldval), "r" (newval), "Ir" (-EFAULT) : "r" (oldval), "r" (newval), "Ir" (-EFAULT), "Ir" (-EAGAIN)
: "memory"); : "memory");
uaccess_disable(); uaccess_disable();
if (!ret)
*uval = val; *uval = val;
return ret; return ret;
} }
......
...@@ -17,6 +17,7 @@ ...@@ -17,6 +17,7 @@
#define __ASM_HWCAP_H #define __ASM_HWCAP_H
#include <uapi/asm/hwcap.h> #include <uapi/asm/hwcap.h>
#include <asm/cpufeature.h>
#define COMPAT_HWCAP_HALF (1 << 1) #define COMPAT_HWCAP_HALF (1 << 1)
#define COMPAT_HWCAP_THUMB (1 << 2) #define COMPAT_HWCAP_THUMB (1 << 2)
...@@ -40,11 +41,67 @@ ...@@ -40,11 +41,67 @@
#define COMPAT_HWCAP2_CRC32 (1 << 4) #define COMPAT_HWCAP2_CRC32 (1 << 4)
#ifndef __ASSEMBLY__ #ifndef __ASSEMBLY__
#include <linux/log2.h>
/*
* For userspace we represent hwcaps as a collection of HWCAP{,2}_x bitfields
* as described in uapi/asm/hwcap.h. For the kernel we represent hwcaps as
* natural numbers (in a single range of size MAX_CPU_FEATURES) defined here
* with prefix KERNEL_HWCAP_ mapped to their HWCAP{,2}_x counterpart.
*
* Hwcaps should be set and tested within the kernel via the
* cpu_{set,have}_named_feature(feature) where feature is the unique suffix
* of KERNEL_HWCAP_{feature}.
*/
#define __khwcap_feature(x) const_ilog2(HWCAP_ ## x)
#define KERNEL_HWCAP_FP __khwcap_feature(FP)
#define KERNEL_HWCAP_ASIMD __khwcap_feature(ASIMD)
#define KERNEL_HWCAP_EVTSTRM __khwcap_feature(EVTSTRM)
#define KERNEL_HWCAP_AES __khwcap_feature(AES)
#define KERNEL_HWCAP_PMULL __khwcap_feature(PMULL)
#define KERNEL_HWCAP_SHA1 __khwcap_feature(SHA1)
#define KERNEL_HWCAP_SHA2 __khwcap_feature(SHA2)
#define KERNEL_HWCAP_CRC32 __khwcap_feature(CRC32)
#define KERNEL_HWCAP_ATOMICS __khwcap_feature(ATOMICS)
#define KERNEL_HWCAP_FPHP __khwcap_feature(FPHP)
#define KERNEL_HWCAP_ASIMDHP __khwcap_feature(ASIMDHP)
#define KERNEL_HWCAP_CPUID __khwcap_feature(CPUID)
#define KERNEL_HWCAP_ASIMDRDM __khwcap_feature(ASIMDRDM)
#define KERNEL_HWCAP_JSCVT __khwcap_feature(JSCVT)
#define KERNEL_HWCAP_FCMA __khwcap_feature(FCMA)
#define KERNEL_HWCAP_LRCPC __khwcap_feature(LRCPC)
#define KERNEL_HWCAP_DCPOP __khwcap_feature(DCPOP)
#define KERNEL_HWCAP_SHA3 __khwcap_feature(SHA3)
#define KERNEL_HWCAP_SM3 __khwcap_feature(SM3)
#define KERNEL_HWCAP_SM4 __khwcap_feature(SM4)
#define KERNEL_HWCAP_ASIMDDP __khwcap_feature(ASIMDDP)
#define KERNEL_HWCAP_SHA512 __khwcap_feature(SHA512)
#define KERNEL_HWCAP_SVE __khwcap_feature(SVE)
#define KERNEL_HWCAP_ASIMDFHM __khwcap_feature(ASIMDFHM)
#define KERNEL_HWCAP_DIT __khwcap_feature(DIT)
#define KERNEL_HWCAP_USCAT __khwcap_feature(USCAT)
#define KERNEL_HWCAP_ILRCPC __khwcap_feature(ILRCPC)
#define KERNEL_HWCAP_FLAGM __khwcap_feature(FLAGM)
#define KERNEL_HWCAP_SSBS __khwcap_feature(SSBS)
#define KERNEL_HWCAP_SB __khwcap_feature(SB)
#define KERNEL_HWCAP_PACA __khwcap_feature(PACA)
#define KERNEL_HWCAP_PACG __khwcap_feature(PACG)
#define __khwcap2_feature(x) (const_ilog2(HWCAP2_ ## x) + 32)
#define KERNEL_HWCAP_DCPODP __khwcap2_feature(DCPODP)
#define KERNEL_HWCAP_SVE2 __khwcap2_feature(SVE2)
#define KERNEL_HWCAP_SVEAES __khwcap2_feature(SVEAES)
#define KERNEL_HWCAP_SVEPMULL __khwcap2_feature(SVEPMULL)
#define KERNEL_HWCAP_SVEBITPERM __khwcap2_feature(SVEBITPERM)
#define KERNEL_HWCAP_SVESHA3 __khwcap2_feature(SVESHA3)
#define KERNEL_HWCAP_SVESM4 __khwcap2_feature(SVESM4)
/* /*
* This yields a mask that user programs can use to figure out what * This yields a mask that user programs can use to figure out what
* instruction set this cpu supports. * instruction set this cpu supports.
*/ */
#define ELF_HWCAP (elf_hwcap) #define ELF_HWCAP cpu_get_elf_hwcap()
#define ELF_HWCAP2 cpu_get_elf_hwcap2()
#ifdef CONFIG_COMPAT #ifdef CONFIG_COMPAT
#define COMPAT_ELF_HWCAP (compat_elf_hwcap) #define COMPAT_ELF_HWCAP (compat_elf_hwcap)
...@@ -60,6 +117,5 @@ enum { ...@@ -60,6 +117,5 @@ enum {
#endif #endif
}; };
extern unsigned long elf_hwcap;
#endif #endif
#endif #endif
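Per the comment block above, in-kernel hwcap tests now go through the KERNEL_HWCAP_* bit numbers (const_ilog2 of the user-visible flag, plus 32 for the HWCAP2 range) via cpu_{set,have}_named_feature() rather than poking elf_hwcap directly. A minimal sketch of the consumer side, assuming a hypothetical driver that wants an SVE2 fast path:

#include <linux/types.h>
#include <asm/cpufeature.h>
#include <asm/hwcap.h>

/*
 * Hypothetical helper: cpu_have_named_feature(SVE2) tests the
 * KERNEL_HWCAP_SVE2 bit, i.e. const_ilog2(HWCAP2_SVE2) + 32.
 */
static bool my_driver_can_use_sve2(void)
{
	return cpu_have_named_feature(SVE2);
}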
...@@ -43,7 +43,7 @@ static inline void arch_local_irq_enable(void) ...@@ -43,7 +43,7 @@ static inline void arch_local_irq_enable(void)
asm volatile(ALTERNATIVE( asm volatile(ALTERNATIVE(
"msr daifclr, #2 // arch_local_irq_enable\n" "msr daifclr, #2 // arch_local_irq_enable\n"
"nop", "nop",
"msr_s " __stringify(SYS_ICC_PMR_EL1) ",%0\n" __msr_s(SYS_ICC_PMR_EL1, "%0")
"dsb sy", "dsb sy",
ARM64_HAS_IRQ_PRIO_MASKING) ARM64_HAS_IRQ_PRIO_MASKING)
: :
...@@ -55,7 +55,7 @@ static inline void arch_local_irq_disable(void) ...@@ -55,7 +55,7 @@ static inline void arch_local_irq_disable(void)
{ {
asm volatile(ALTERNATIVE( asm volatile(ALTERNATIVE(
"msr daifset, #2 // arch_local_irq_disable", "msr daifset, #2 // arch_local_irq_disable",
"msr_s " __stringify(SYS_ICC_PMR_EL1) ", %0", __msr_s(SYS_ICC_PMR_EL1, "%0"),
ARM64_HAS_IRQ_PRIO_MASKING) ARM64_HAS_IRQ_PRIO_MASKING)
: :
: "r" ((unsigned long) GIC_PRIO_IRQOFF) : "r" ((unsigned long) GIC_PRIO_IRQOFF)
...@@ -86,7 +86,7 @@ static inline unsigned long arch_local_save_flags(void) ...@@ -86,7 +86,7 @@ static inline unsigned long arch_local_save_flags(void)
"mov %0, %1\n" "mov %0, %1\n"
"nop\n" "nop\n"
"nop", "nop",
"mrs_s %0, " __stringify(SYS_ICC_PMR_EL1) "\n" __mrs_s("%0", SYS_ICC_PMR_EL1)
"ands %1, %1, " __stringify(PSR_I_BIT) "\n" "ands %1, %1, " __stringify(PSR_I_BIT) "\n"
"csel %0, %0, %2, eq", "csel %0, %0, %2, eq",
ARM64_HAS_IRQ_PRIO_MASKING) ARM64_HAS_IRQ_PRIO_MASKING)
...@@ -116,7 +116,7 @@ static inline void arch_local_irq_restore(unsigned long flags) ...@@ -116,7 +116,7 @@ static inline void arch_local_irq_restore(unsigned long flags)
asm volatile(ALTERNATIVE( asm volatile(ALTERNATIVE(
"msr daif, %0\n" "msr daif, %0\n"
"nop", "nop",
"msr_s " __stringify(SYS_ICC_PMR_EL1) ", %0\n" __msr_s(SYS_ICC_PMR_EL1, "%0")
"dsb sy", "dsb sy",
ARM64_HAS_IRQ_PRIO_MASKING) ARM64_HAS_IRQ_PRIO_MASKING)
: "+r" (flags) : "+r" (flags)
......
...@@ -54,8 +54,6 @@ void arch_remove_kprobe(struct kprobe *); ...@@ -54,8 +54,6 @@ void arch_remove_kprobe(struct kprobe *);
int kprobe_fault_handler(struct pt_regs *regs, unsigned int fsr); int kprobe_fault_handler(struct pt_regs *regs, unsigned int fsr);
int kprobe_exceptions_notify(struct notifier_block *self, int kprobe_exceptions_notify(struct notifier_block *self,
unsigned long val, void *data); unsigned long val, void *data);
int kprobe_breakpoint_handler(struct pt_regs *regs, unsigned int esr);
int kprobe_single_step_handler(struct pt_regs *regs, unsigned int esr);
void kretprobe_trampoline(void); void kretprobe_trampoline(void);
void __kprobes *trampoline_probe_handler(struct pt_regs *regs); void __kprobes *trampoline_probe_handler(struct pt_regs *regs);
......
...@@ -30,7 +30,7 @@ ...@@ -30,7 +30,7 @@
({ \ ({ \
u64 reg; \ u64 reg; \
asm volatile(ALTERNATIVE("mrs %0, " __stringify(r##nvh),\ asm volatile(ALTERNATIVE("mrs %0, " __stringify(r##nvh),\
"mrs_s %0, " __stringify(r##vh),\ __mrs_s("%0", r##vh), \
ARM64_HAS_VIRT_HOST_EXTN) \ ARM64_HAS_VIRT_HOST_EXTN) \
: "=r" (reg)); \ : "=r" (reg)); \
reg; \ reg; \
...@@ -40,7 +40,7 @@ ...@@ -40,7 +40,7 @@
do { \ do { \
u64 __val = (u64)(v); \ u64 __val = (u64)(v); \
asm volatile(ALTERNATIVE("msr " __stringify(r##nvh) ", %x0",\ asm volatile(ALTERNATIVE("msr " __stringify(r##nvh) ", %x0",\
"msr_s " __stringify(r##vh) ", %x0",\ __msr_s(r##vh, "%x0"), \
ARM64_HAS_VIRT_HOST_EXTN) \ ARM64_HAS_VIRT_HOST_EXTN) \
: : "rZ" (__val)); \ : : "rZ" (__val)); \
} while (0) } while (0)
......
...@@ -302,7 +302,7 @@ static inline void *phys_to_virt(phys_addr_t x) ...@@ -302,7 +302,7 @@ static inline void *phys_to_virt(phys_addr_t x)
*/ */
#define ARCH_PFN_OFFSET ((unsigned long)PHYS_PFN_OFFSET) #define ARCH_PFN_OFFSET ((unsigned long)PHYS_PFN_OFFSET)
#ifndef CONFIG_SPARSEMEM_VMEMMAP #if !defined(CONFIG_SPARSEMEM_VMEMMAP) || defined(CONFIG_DEBUG_VIRTUAL)
#define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT) #define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
#define _virt_addr_valid(kaddr) pfn_valid(__pa(kaddr) >> PAGE_SHIFT) #define _virt_addr_valid(kaddr) pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
#else #else
......
...@@ -33,12 +33,22 @@ ...@@ -33,12 +33,22 @@
static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr) static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
{ {
return (pmd_t *)__get_free_page(PGALLOC_GFP); struct page *page;
page = alloc_page(PGALLOC_GFP);
if (!page)
return NULL;
if (!pgtable_pmd_page_ctor(page)) {
__free_page(page);
return NULL;
}
return page_address(page);
} }
static inline void pmd_free(struct mm_struct *mm, pmd_t *pmdp) static inline void pmd_free(struct mm_struct *mm, pmd_t *pmdp)
{ {
BUG_ON((unsigned long)pmdp & (PAGE_SIZE-1)); BUG_ON((unsigned long)pmdp & (PAGE_SIZE-1));
pgtable_pmd_page_dtor(virt_to_page(pmdp));
free_page((unsigned long)pmdp); free_page((unsigned long)pmdp);
} }
......
...@@ -478,6 +478,8 @@ static inline phys_addr_t pmd_page_paddr(pmd_t pmd) ...@@ -478,6 +478,8 @@ static inline phys_addr_t pmd_page_paddr(pmd_t pmd)
return __pmd_to_phys(pmd); return __pmd_to_phys(pmd);
} }
static inline void pte_unmap(pte_t *pte) { }
/* Find an entry in the third-level page table. */ /* Find an entry in the third-level page table. */
#define pte_index(addr) (((addr) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1)) #define pte_index(addr) (((addr) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
...@@ -485,9 +487,6 @@ static inline phys_addr_t pmd_page_paddr(pmd_t pmd) ...@@ -485,9 +487,6 @@ static inline phys_addr_t pmd_page_paddr(pmd_t pmd)
#define pte_offset_kernel(dir,addr) ((pte_t *)__va(pte_offset_phys((dir), (addr)))) #define pte_offset_kernel(dir,addr) ((pte_t *)__va(pte_offset_phys((dir), (addr))))
#define pte_offset_map(dir,addr) pte_offset_kernel((dir), (addr)) #define pte_offset_map(dir,addr) pte_offset_kernel((dir), (addr))
#define pte_offset_map_nested(dir,addr) pte_offset_kernel((dir), (addr))
#define pte_unmap(pte) do { } while (0)
#define pte_unmap_nested(pte) do { } while (0)
#define pte_set_fixmap(addr) ((pte_t *)set_fixmap_offset(FIX_PTE, addr)) #define pte_set_fixmap(addr) ((pte_t *)set_fixmap_offset(FIX_PTE, addr))
#define pte_set_fixmap_offset(pmd, addr) pte_set_fixmap(pte_offset_phys(pmd, addr)) #define pte_set_fixmap_offset(pmd, addr) pte_set_fixmap(pte_offset_phys(pmd, addr))
......
// SPDX-License-Identifier: GPL-2.0 /* SPDX-License-Identifier: GPL-2.0 */
#ifndef __ASM_POINTER_AUTH_H #ifndef __ASM_POINTER_AUTH_H
#define __ASM_POINTER_AUTH_H #define __ASM_POINTER_AUTH_H
......
...@@ -57,7 +57,15 @@ ...@@ -57,7 +57,15 @@
#define TASK_SIZE_64 (UL(1) << vabits_user) #define TASK_SIZE_64 (UL(1) << vabits_user)
#ifdef CONFIG_COMPAT #ifdef CONFIG_COMPAT
#if defined(CONFIG_ARM64_64K_PAGES) && defined(CONFIG_KUSER_HELPERS)
/*
* With CONFIG_ARM64_64K_PAGES enabled, the last page is occupied
* by the compat vectors page.
*/
#define TASK_SIZE_32 UL(0x100000000) #define TASK_SIZE_32 UL(0x100000000)
#else
#define TASK_SIZE_32 (UL(0x100000000) - PAGE_SIZE)
#endif /* CONFIG_ARM64_64K_PAGES */
#define TASK_SIZE (test_thread_flag(TIF_32BIT) ? \ #define TASK_SIZE (test_thread_flag(TIF_32BIT) ? \
TASK_SIZE_32 : TASK_SIZE_64) TASK_SIZE_32 : TASK_SIZE_64)
#define TASK_SIZE_OF(tsk) (test_tsk_thread_flag(tsk, TIF_32BIT) ? \ #define TASK_SIZE_OF(tsk) (test_tsk_thread_flag(tsk, TIF_32BIT) ? \
......
...@@ -305,6 +305,28 @@ static inline unsigned long regs_return_value(struct pt_regs *regs) ...@@ -305,6 +305,28 @@ static inline unsigned long regs_return_value(struct pt_regs *regs)
return regs->regs[0]; return regs->regs[0];
} }
/**
* regs_get_kernel_argument() - get Nth function argument in kernel
* @regs: pt_regs of that context
 * @n: function argument number (starting from 0)
 *
 * regs_get_kernel_argument() returns the @n-th argument of the function call.
*
* Note that this chooses the most likely register mapping. In very rare
* cases this may not return correct data, for example, if one of the
* function parameters is 16 bytes or bigger. In such cases, we cannot
 * access the parameter correctly and the register assignment of
* subsequent parameters will be shifted.
*/
static inline unsigned long regs_get_kernel_argument(struct pt_regs *regs,
unsigned int n)
{
#define NR_REG_ARGUMENTS 8
if (n < NR_REG_ARGUMENTS)
return pt_regs_read_reg(regs, n);
return 0;
}
/* We must avoid circular header include via sched.h */ /* We must avoid circular header include via sched.h */
struct task_struct; struct task_struct;
int valid_user_regs(struct user_pt_regs *regs, struct task_struct *task); int valid_user_regs(struct user_pt_regs *regs, struct task_struct *task);
......
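regs_get_kernel_argument(), documented above, simply indexes the AArch64 argument registers x0-x7; arguments past the eighth (or arguments shifted by a large by-value parameter) are not recoverable and read as 0. A hedged sketch of a caller, e.g. a kprobes pre-handler dumping the probed function's first argument (the handler itself is invented for illustration):

#include <linux/kprobes.h>
#include <linux/printk.h>
#include <linux/ptrace.h>

/* Hypothetical pre-handler: report the probed function's first argument. */
static int my_pre_handler(struct kprobe *p, struct pt_regs *regs)
{
	unsigned long arg0 = regs_get_kernel_argument(regs, 0);	/* x0 */

	pr_info("%s: arg0=%#lx\n", p->symbol_name, arg0);
	return 0;
}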
// SPDX-License-Identifier: GPL-2.0 /* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2017 Arm Ltd. // Copyright (C) 2017 Arm Ltd.
#ifndef __ASM_SDEI_H #ifndef __ASM_SDEI_H
#define __ASM_SDEI_H #define __ASM_SDEI_H
......
...@@ -20,8 +20,6 @@ ...@@ -20,8 +20,6 @@
#ifdef CONFIG_COMPAT #ifdef CONFIG_COMPAT
#include <linux/compat.h> #include <linux/compat.h>
#define AARCH32_KERN_SIGRET_CODE_OFFSET 0x500
int compat_setup_frame(int usig, struct ksignal *ksig, sigset_t *set, int compat_setup_frame(int usig, struct ksignal *ksig, sigset_t *set,
struct pt_regs *regs); struct pt_regs *regs);
int compat_setup_rt_frame(int usig, struct ksignal *ksig, sigset_t *set, int compat_setup_rt_frame(int usig, struct ksignal *ksig, sigset_t *set,
......
...@@ -119,7 +119,7 @@ static inline pud_t *stage2_pud_offset(struct kvm *kvm, ...@@ -119,7 +119,7 @@ static inline pud_t *stage2_pud_offset(struct kvm *kvm,
static inline void stage2_pud_free(struct kvm *kvm, pud_t *pud) static inline void stage2_pud_free(struct kvm *kvm, pud_t *pud)
{ {
if (kvm_stage2_has_pud(kvm)) if (kvm_stage2_has_pud(kvm))
pud_free(NULL, pud); free_page((unsigned long)pud);
} }
static inline bool stage2_pud_table_empty(struct kvm *kvm, pud_t *pudp) static inline bool stage2_pud_table_empty(struct kvm *kvm, pud_t *pudp)
...@@ -192,7 +192,7 @@ static inline pmd_t *stage2_pmd_offset(struct kvm *kvm, ...@@ -192,7 +192,7 @@ static inline pmd_t *stage2_pmd_offset(struct kvm *kvm,
static inline void stage2_pmd_free(struct kvm *kvm, pmd_t *pmd) static inline void stage2_pmd_free(struct kvm *kvm, pmd_t *pmd)
{ {
if (kvm_stage2_has_pmd(kvm)) if (kvm_stage2_has_pmd(kvm))
pmd_free(NULL, pmd); free_page((unsigned long)pmd);
} }
static inline bool stage2_pud_huge(struct kvm *kvm, pud_t pud) static inline bool stage2_pud_huge(struct kvm *kvm, pud_t pud)
......
...@@ -606,6 +606,20 @@ ...@@ -606,6 +606,20 @@
#define ID_AA64PFR1_SSBS_PSTATE_ONLY 1 #define ID_AA64PFR1_SSBS_PSTATE_ONLY 1
#define ID_AA64PFR1_SSBS_PSTATE_INSNS 2 #define ID_AA64PFR1_SSBS_PSTATE_INSNS 2
/* id_aa64zfr0 */
#define ID_AA64ZFR0_SM4_SHIFT 40
#define ID_AA64ZFR0_SHA3_SHIFT 32
#define ID_AA64ZFR0_BITPERM_SHIFT 16
#define ID_AA64ZFR0_AES_SHIFT 4
#define ID_AA64ZFR0_SVEVER_SHIFT 0
#define ID_AA64ZFR0_SM4 0x1
#define ID_AA64ZFR0_SHA3 0x1
#define ID_AA64ZFR0_BITPERM 0x1
#define ID_AA64ZFR0_AES 0x1
#define ID_AA64ZFR0_AES_PMULL 0x2
#define ID_AA64ZFR0_SVEVER_SVE2 0x1
/* id_aa64mmfr0 */ /* id_aa64mmfr0 */
#define ID_AA64MMFR0_TGRAN4_SHIFT 28 #define ID_AA64MMFR0_TGRAN4_SHIFT 28
#define ID_AA64MMFR0_TGRAN64_SHIFT 24 #define ID_AA64MMFR0_TGRAN64_SHIFT 24
...@@ -746,20 +760,39 @@ ...@@ -746,20 +760,39 @@
#include <linux/build_bug.h> #include <linux/build_bug.h>
#include <linux/types.h> #include <linux/types.h>
asm( #define __DEFINE_MRS_MSR_S_REGNUM \
" .irp num,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30\n" " .irp num,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30\n" \
" .equ .L__reg_num_x\\num, \\num\n" " .equ .L__reg_num_x\\num, \\num\n" \
" .endr\n" " .endr\n" \
" .equ .L__reg_num_xzr, 31\n" " .equ .L__reg_num_xzr, 31\n"
"\n"
" .macro mrs_s, rt, sreg\n" #define DEFINE_MRS_S \
__emit_inst(0xd5200000|(\\sreg)|(.L__reg_num_\\rt)) __DEFINE_MRS_MSR_S_REGNUM \
" .macro mrs_s, rt, sreg\n" \
__emit_inst(0xd5200000|(\\sreg)|(.L__reg_num_\\rt)) \
" .endm\n" " .endm\n"
"\n"
" .macro msr_s, sreg, rt\n" #define DEFINE_MSR_S \
__emit_inst(0xd5000000|(\\sreg)|(.L__reg_num_\\rt)) __DEFINE_MRS_MSR_S_REGNUM \
" .macro msr_s, sreg, rt\n" \
__emit_inst(0xd5000000|(\\sreg)|(.L__reg_num_\\rt)) \
" .endm\n" " .endm\n"
);
#define UNDEFINE_MRS_S \
" .purgem mrs_s\n"
#define UNDEFINE_MSR_S \
" .purgem msr_s\n"
#define __mrs_s(v, r) \
DEFINE_MRS_S \
" mrs_s " v ", " __stringify(r) "\n" \
UNDEFINE_MRS_S
#define __msr_s(r, v) \
DEFINE_MSR_S \
" msr_s " __stringify(r) ", " v "\n" \
UNDEFINE_MSR_S
/* /*
* Unlike read_cpuid, calls to read_sysreg are never expected to be * Unlike read_cpuid, calls to read_sysreg are never expected to be
...@@ -787,13 +820,13 @@ asm( ...@@ -787,13 +820,13 @@ asm(
*/ */
#define read_sysreg_s(r) ({ \ #define read_sysreg_s(r) ({ \
u64 __val; \ u64 __val; \
asm volatile("mrs_s %0, " __stringify(r) : "=r" (__val)); \ asm volatile(__mrs_s("%0", r) : "=r" (__val)); \
__val; \ __val; \
}) })
#define write_sysreg_s(v, r) do { \ #define write_sysreg_s(v, r) do { \
u64 __val = (u64)(v); \ u64 __val = (u64)(v); \
asm volatile("msr_s " __stringify(r) ", %x0" : : "rZ" (__val)); \ asm volatile(__msr_s(r, "%x0") : : "rZ" (__val)); \
} while (0) } while (0)
/* /*
......
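The global asm() block that used to define mrs_s/msr_s is replaced above by __mrs_s()/__msr_s(), which define the assembler macros, emit one use, and purge them again; that is what lets the irqflags.h, kvm_hyp.h and read/write_sysreg_s() hunks compose them inside ALTERNATIVE() strings. Callers keep the existing interface; a small sketch of typical use, with the helper name invented for illustration:

#include <linux/types.h>
#include <asm/ptrace.h>
#include <asm/sysreg.h>

/* Hypothetical helper: save the GIC priority mask, then mask IRQs. */
static u64 icc_pmr_save_and_mask(void)
{
	u64 pmr = read_sysreg_s(SYS_ICC_PMR_EL1);	  /* expands via __mrs_s */

	write_sysreg_s(GIC_PRIO_IRQOFF, SYS_ICC_PMR_EL1); /* expands via __msr_s */
	return pmr;
}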
...@@ -41,7 +41,6 @@ void hook_debug_fault_code(int nr, int (*fn)(unsigned long, unsigned int, ...@@ -41,7 +41,6 @@ void hook_debug_fault_code(int nr, int (*fn)(unsigned long, unsigned int,
int sig, int code, const char *name); int sig, int code, const char *name);
struct mm_struct; struct mm_struct;
extern void show_pte(unsigned long addr);
extern void __show_regs(struct pt_regs *); extern void __show_regs(struct pt_regs *);
extern void (*arm_pm_restart)(enum reboot_mode reboot_mode, const char *cmd); extern void (*arm_pm_restart)(enum reboot_mode reboot_mode, const char *cmd);
......
...@@ -63,7 +63,10 @@ static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte, ...@@ -63,7 +63,10 @@ static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte,
static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp, static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp,
unsigned long addr) unsigned long addr)
{ {
tlb_remove_table(tlb, virt_to_page(pmdp)); struct page *page = virt_to_page(pmdp);
pgtable_pmd_page_dtor(page);
tlb_remove_table(tlb, page);
} }
#endif #endif
......
...@@ -38,6 +38,7 @@ struct vdso_data { ...@@ -38,6 +38,7 @@ struct vdso_data {
__u32 tz_minuteswest; /* Whacky timezone stuff */ __u32 tz_minuteswest; /* Whacky timezone stuff */
__u32 tz_dsttime; __u32 tz_dsttime;
__u32 use_syscall; __u32 use_syscall;
__u32 hrtimer_res;
}; };
#endif /* !__ASSEMBLY__ */ #endif /* !__ASSEMBLY__ */
......
// SPDX-License-Identifier: GPL-2.0 /* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2017 Arm Ltd. // Copyright (C) 2017 Arm Ltd.
#ifndef __ASM_VMAP_STACK_H #ifndef __ASM_VMAP_STACK_H
#define __ASM_VMAP_STACK_H #define __ASM_VMAP_STACK_H
......
...@@ -18,7 +18,7 @@ ...@@ -18,7 +18,7 @@
#define _UAPI__ASM_HWCAP_H #define _UAPI__ASM_HWCAP_H
/* /*
* HWCAP flags - for elf_hwcap (in kernel) and AT_HWCAP * HWCAP flags - for AT_HWCAP
*/ */
#define HWCAP_FP (1 << 0) #define HWCAP_FP (1 << 0)
#define HWCAP_ASIMD (1 << 1) #define HWCAP_ASIMD (1 << 1)
...@@ -53,4 +53,15 @@ ...@@ -53,4 +53,15 @@
#define HWCAP_PACA (1 << 30) #define HWCAP_PACA (1 << 30)
#define HWCAP_PACG (1UL << 31) #define HWCAP_PACG (1UL << 31)
/*
* HWCAP2 flags - for AT_HWCAP2
*/
#define HWCAP2_DCPODP (1 << 0)
#define HWCAP2_SVE2 (1 << 1)
#define HWCAP2_SVEAES (1 << 2)
#define HWCAP2_SVEPMULL (1 << 3)
#define HWCAP2_SVEBITPERM (1 << 4)
#define HWCAP2_SVESHA3 (1 << 5)
#define HWCAP2_SVESM4 (1 << 6)
#endif /* _UAPI__ASM_HWCAP_H */ #endif /* _UAPI__ASM_HWCAP_H */
...@@ -7,9 +7,9 @@ CPPFLAGS_vmlinux.lds := -DTEXT_OFFSET=$(TEXT_OFFSET) ...@@ -7,9 +7,9 @@ CPPFLAGS_vmlinux.lds := -DTEXT_OFFSET=$(TEXT_OFFSET)
AFLAGS_head.o := -DTEXT_OFFSET=$(TEXT_OFFSET) AFLAGS_head.o := -DTEXT_OFFSET=$(TEXT_OFFSET)
CFLAGS_armv8_deprecated.o := -I$(src) CFLAGS_armv8_deprecated.o := -I$(src)
CFLAGS_REMOVE_ftrace.o = -pg CFLAGS_REMOVE_ftrace.o = $(CC_FLAGS_FTRACE)
CFLAGS_REMOVE_insn.o = -pg CFLAGS_REMOVE_insn.o = $(CC_FLAGS_FTRACE)
CFLAGS_REMOVE_return_address.o = -pg CFLAGS_REMOVE_return_address.o = $(CC_FLAGS_FTRACE)
# Object file lists. # Object file lists.
obj-y := debug-monitors.o entry.o irq.o fpsimd.o \ obj-y := debug-monitors.o entry.o irq.o fpsimd.o \
...@@ -27,8 +27,9 @@ OBJCOPYFLAGS := --prefix-symbols=__efistub_ ...@@ -27,8 +27,9 @@ OBJCOPYFLAGS := --prefix-symbols=__efistub_
$(obj)/%.stub.o: $(obj)/%.o FORCE $(obj)/%.stub.o: $(obj)/%.o FORCE
$(call if_changed,objcopy) $(call if_changed,objcopy)
obj-$(CONFIG_COMPAT) += sys32.o kuser32.o signal32.o \ obj-$(CONFIG_COMPAT) += sys32.o signal32.o \
sys_compat.o sigreturn32.o sys_compat.o
obj-$(CONFIG_KUSER_HELPERS) += kuser32.o
obj-$(CONFIG_FUNCTION_TRACER) += ftrace.o entry-ftrace.o obj-$(CONFIG_FUNCTION_TRACER) += ftrace.o entry-ftrace.o
obj-$(CONFIG_MODULES) += module.o obj-$(CONFIG_MODULES) += module.o
obj-$(CONFIG_ARM64_MODULE_PLTS) += module-plts.o obj-$(CONFIG_ARM64_MODULE_PLTS) += module-plts.o
......
...@@ -94,7 +94,7 @@ int main(void) ...@@ -94,7 +94,7 @@ int main(void)
DEFINE(CLOCK_REALTIME, CLOCK_REALTIME); DEFINE(CLOCK_REALTIME, CLOCK_REALTIME);
DEFINE(CLOCK_MONOTONIC, CLOCK_MONOTONIC); DEFINE(CLOCK_MONOTONIC, CLOCK_MONOTONIC);
DEFINE(CLOCK_MONOTONIC_RAW, CLOCK_MONOTONIC_RAW); DEFINE(CLOCK_MONOTONIC_RAW, CLOCK_MONOTONIC_RAW);
DEFINE(CLOCK_REALTIME_RES, MONOTONIC_RES_NSEC); DEFINE(CLOCK_REALTIME_RES, offsetof(struct vdso_data, hrtimer_res));
DEFINE(CLOCK_REALTIME_COARSE, CLOCK_REALTIME_COARSE); DEFINE(CLOCK_REALTIME_COARSE, CLOCK_REALTIME_COARSE);
DEFINE(CLOCK_MONOTONIC_COARSE,CLOCK_MONOTONIC_COARSE); DEFINE(CLOCK_MONOTONIC_COARSE,CLOCK_MONOTONIC_COARSE);
DEFINE(CLOCK_COARSE_RES, LOW_RES_NSEC); DEFINE(CLOCK_COARSE_RES, LOW_RES_NSEC);
......
...@@ -85,6 +85,7 @@ static const char *__init cpu_read_enable_method(int cpu) ...@@ -85,6 +85,7 @@ static const char *__init cpu_read_enable_method(int cpu)
pr_err("%pOF: missing enable-method property\n", pr_err("%pOF: missing enable-method property\n",
dn); dn);
} }
of_node_put(dn);
} else { } else {
enable_method = acpi_get_enable_method(cpu); enable_method = acpi_get_enable_method(cpu);
if (!enable_method) { if (!enable_method) {
......
...@@ -85,6 +85,13 @@ static const char *const hwcap_str[] = { ...@@ -85,6 +85,13 @@ static const char *const hwcap_str[] = {
"sb", "sb",
"paca", "paca",
"pacg", "pacg",
"dcpodp",
"sve2",
"sveaes",
"svepmull",
"svebitperm",
"svesha3",
"svesm4",
NULL NULL
}; };
...@@ -167,7 +174,7 @@ static int c_show(struct seq_file *m, void *v) ...@@ -167,7 +174,7 @@ static int c_show(struct seq_file *m, void *v)
#endif /* CONFIG_COMPAT */ #endif /* CONFIG_COMPAT */
} else { } else {
for (j = 0; hwcap_str[j]; j++) for (j = 0; hwcap_str[j]; j++)
if (elf_hwcap & (1 << j)) if (cpu_have_feature(j))
seq_printf(m, " %s", hwcap_str[j]); seq_printf(m, " %s", hwcap_str[j]);
} }
seq_puts(m, "\n"); seq_puts(m, "\n");
......
...@@ -135,6 +135,7 @@ NOKPROBE_SYMBOL(disable_debug_monitors); ...@@ -135,6 +135,7 @@ NOKPROBE_SYMBOL(disable_debug_monitors);
*/ */
static int clear_os_lock(unsigned int cpu) static int clear_os_lock(unsigned int cpu)
{ {
write_sysreg(0, osdlr_el1);
write_sysreg(0, oslar_el1); write_sysreg(0, oslar_el1);
isb(); isb();
return 0; return 0;
...@@ -163,25 +164,46 @@ static void clear_regs_spsr_ss(struct pt_regs *regs) ...@@ -163,25 +164,46 @@ static void clear_regs_spsr_ss(struct pt_regs *regs)
} }
NOKPROBE_SYMBOL(clear_regs_spsr_ss); NOKPROBE_SYMBOL(clear_regs_spsr_ss);
/* EL1 Single Step Handler hooks */ static DEFINE_SPINLOCK(debug_hook_lock);
static LIST_HEAD(step_hook); static LIST_HEAD(user_step_hook);
static DEFINE_SPINLOCK(step_hook_lock); static LIST_HEAD(kernel_step_hook);
void register_step_hook(struct step_hook *hook) static void register_debug_hook(struct list_head *node, struct list_head *list)
{ {
spin_lock(&step_hook_lock); spin_lock(&debug_hook_lock);
list_add_rcu(&hook->node, &step_hook); list_add_rcu(node, list);
spin_unlock(&step_hook_lock); spin_unlock(&debug_hook_lock);
} }
void unregister_step_hook(struct step_hook *hook) static void unregister_debug_hook(struct list_head *node)
{ {
spin_lock(&step_hook_lock); spin_lock(&debug_hook_lock);
list_del_rcu(&hook->node); list_del_rcu(node);
spin_unlock(&step_hook_lock); spin_unlock(&debug_hook_lock);
synchronize_rcu(); synchronize_rcu();
} }
void register_user_step_hook(struct step_hook *hook)
{
register_debug_hook(&hook->node, &user_step_hook);
}
void unregister_user_step_hook(struct step_hook *hook)
{
unregister_debug_hook(&hook->node);
}
void register_kernel_step_hook(struct step_hook *hook)
{
register_debug_hook(&hook->node, &kernel_step_hook);
}
void unregister_kernel_step_hook(struct step_hook *hook)
{
unregister_debug_hook(&hook->node);
}
/* /*
* Call registered single step handlers * Call registered single step handlers
* There is no Syndrome info to check for determining the handler. * There is no Syndrome info to check for determining the handler.
...@@ -191,11 +213,14 @@ void unregister_step_hook(struct step_hook *hook) ...@@ -191,11 +213,14 @@ void unregister_step_hook(struct step_hook *hook)
static int call_step_hook(struct pt_regs *regs, unsigned int esr) static int call_step_hook(struct pt_regs *regs, unsigned int esr)
{ {
struct step_hook *hook; struct step_hook *hook;
struct list_head *list;
int retval = DBG_HOOK_ERROR; int retval = DBG_HOOK_ERROR;
list = user_mode(regs) ? &user_step_hook : &kernel_step_hook;
rcu_read_lock(); rcu_read_lock();
list_for_each_entry_rcu(hook, &step_hook, node) { list_for_each_entry_rcu(hook, list, node) {
retval = hook->fn(regs, esr); retval = hook->fn(regs, esr);
if (retval == DBG_HOOK_HANDLED) if (retval == DBG_HOOK_HANDLED)
break; break;
...@@ -222,7 +247,7 @@ static void send_user_sigtrap(int si_code) ...@@ -222,7 +247,7 @@ static void send_user_sigtrap(int si_code)
"User debug trap"); "User debug trap");
} }
static int single_step_handler(unsigned long addr, unsigned int esr, static int single_step_handler(unsigned long unused, unsigned int esr,
struct pt_regs *regs) struct pt_regs *regs)
{ {
bool handler_found = false; bool handler_found = false;
...@@ -234,10 +259,6 @@ static int single_step_handler(unsigned long addr, unsigned int esr, ...@@ -234,10 +259,6 @@ static int single_step_handler(unsigned long addr, unsigned int esr,
if (!reinstall_suspended_bps(regs)) if (!reinstall_suspended_bps(regs))
return 0; return 0;
#ifdef CONFIG_KPROBES
if (kprobe_single_step_handler(regs, esr) == DBG_HOOK_HANDLED)
handler_found = true;
#endif
if (!handler_found && call_step_hook(regs, esr) == DBG_HOOK_HANDLED) if (!handler_found && call_step_hook(regs, esr) == DBG_HOOK_HANDLED)
handler_found = true; handler_found = true;
...@@ -264,61 +285,59 @@ static int single_step_handler(unsigned long addr, unsigned int esr, ...@@ -264,61 +285,59 @@ static int single_step_handler(unsigned long addr, unsigned int esr,
} }
NOKPROBE_SYMBOL(single_step_handler); NOKPROBE_SYMBOL(single_step_handler);
/* static LIST_HEAD(user_break_hook);
* Breakpoint handler is re-entrant as another breakpoint can static LIST_HEAD(kernel_break_hook);
 * hit within the breakpoint handler, especially in kprobes.
* Use reader/writer locks instead of plain spinlock.
*/
static LIST_HEAD(break_hook);
static DEFINE_SPINLOCK(break_hook_lock);
void register_break_hook(struct break_hook *hook) void register_user_break_hook(struct break_hook *hook)
{ {
spin_lock(&break_hook_lock); register_debug_hook(&hook->node, &user_break_hook);
list_add_rcu(&hook->node, &break_hook);
spin_unlock(&break_hook_lock);
} }
void unregister_break_hook(struct break_hook *hook) void unregister_user_break_hook(struct break_hook *hook)
{ {
spin_lock(&break_hook_lock); unregister_debug_hook(&hook->node);
list_del_rcu(&hook->node); }
spin_unlock(&break_hook_lock);
synchronize_rcu(); void register_kernel_break_hook(struct break_hook *hook)
{
register_debug_hook(&hook->node, &kernel_break_hook);
}
void unregister_kernel_break_hook(struct break_hook *hook)
{
unregister_debug_hook(&hook->node);
} }
static int call_break_hook(struct pt_regs *regs, unsigned int esr) static int call_break_hook(struct pt_regs *regs, unsigned int esr)
{ {
struct break_hook *hook; struct break_hook *hook;
struct list_head *list;
int (*fn)(struct pt_regs *regs, unsigned int esr) = NULL; int (*fn)(struct pt_regs *regs, unsigned int esr) = NULL;
list = user_mode(regs) ? &user_break_hook : &kernel_break_hook;
rcu_read_lock(); rcu_read_lock();
list_for_each_entry_rcu(hook, &break_hook, node) list_for_each_entry_rcu(hook, list, node) {
if ((esr & hook->esr_mask) == hook->esr_val) unsigned int comment = esr & ESR_ELx_BRK64_ISS_COMMENT_MASK;
if ((comment & ~hook->mask) == hook->imm)
fn = hook->fn; fn = hook->fn;
}
rcu_read_unlock(); rcu_read_unlock();
return fn ? fn(regs, esr) : DBG_HOOK_ERROR; return fn ? fn(regs, esr) : DBG_HOOK_ERROR;
} }
NOKPROBE_SYMBOL(call_break_hook); NOKPROBE_SYMBOL(call_break_hook);
static int brk_handler(unsigned long addr, unsigned int esr, static int brk_handler(unsigned long unused, unsigned int esr,
struct pt_regs *regs) struct pt_regs *regs)
{ {
bool handler_found = false; if (call_break_hook(regs, esr) == DBG_HOOK_HANDLED)
return 0;
#ifdef CONFIG_KPROBES
if ((esr & BRK64_ESR_MASK) == BRK64_ESR_KPROBES) {
if (kprobe_breakpoint_handler(regs, esr) == DBG_HOOK_HANDLED)
handler_found = true;
}
#endif
if (!handler_found && call_break_hook(regs, esr) == DBG_HOOK_HANDLED)
handler_found = true;
if (!handler_found && user_mode(regs)) { if (user_mode(regs)) {
send_user_sigtrap(TRAP_BRKPT); send_user_sigtrap(TRAP_BRKPT);
} else if (!handler_found) { } else {
pr_warn("Unexpected kernel BRK exception at EL1\n"); pr_warn("Unexpected kernel BRK exception at EL1\n");
return -EFAULT; return -EFAULT;
} }
......
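With the split above, a BRK hook names the list it wants (user or kernel) and matches on the ESR comment immediate via hook->imm/hook->mask instead of a raw esr_val/esr_mask pair; call_break_hook() picks the list with user_mode(regs). The kgdb, kprobes and uprobes hunks later in this diff show the real conversions; a minimal kernel-side registration under the new API would look roughly like this (the immediate and the handler are invented):

#include <linux/init.h>
#include <linux/ptrace.h>
#include <asm/debug-monitors.h>

/* Hypothetical handler for a private BRK comment immediate. */
static int my_brk_handler(struct pt_regs *regs, unsigned int esr)
{
	/* claim the exception so no SIGTRAP/die() follows */
	return DBG_HOOK_HANDLED;
}

static struct break_hook my_break_hook = {
	.fn	= my_brk_handler,
	.imm	= 0x8ff,		/* invented immediate */
};

static int __init my_brk_hook_init(void)
{
	register_kernel_break_hook(&my_break_hook);
	return 0;
}
late_initcall(my_brk_hook_init);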
...@@ -336,6 +336,21 @@ alternative_if ARM64_WORKAROUND_845719 ...@@ -336,6 +336,21 @@ alternative_if ARM64_WORKAROUND_845719
alternative_else_nop_endif alternative_else_nop_endif
#endif #endif
3: 3:
#ifdef CONFIG_ARM64_ERRATUM_1188873
alternative_if_not ARM64_WORKAROUND_1188873
b 4f
alternative_else_nop_endif
/*
* if (x22.mode32 == cntkctl_el1.el0vcten)
* cntkctl_el1.el0vcten = ~cntkctl_el1.el0vcten
*/
mrs x1, cntkctl_el1
eon x0, x1, x22, lsr #3
tbz x0, #1, 4f
eor x1, x1, #2 // ARCH_TIMER_USR_VCT_ACCESS_EN
msr cntkctl_el1, x1
4:
#endif
apply_ssbd 0, x0, x1 apply_ssbd 0, x0, x1
.endif .endif
...@@ -362,11 +377,11 @@ alternative_else_nop_endif ...@@ -362,11 +377,11 @@ alternative_else_nop_endif
.if \el == 0 .if \el == 0
alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0 alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0
#ifdef CONFIG_UNMAP_KERNEL_AT_EL0 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
bne 4f bne 5f
msr far_el1, x30 msr far_el1, x30
tramp_alias x30, tramp_exit_native tramp_alias x30, tramp_exit_native
br x30 br x30
4: 5:
tramp_alias x30, tramp_exit_compat tramp_alias x30, tramp_exit_compat
br x30 br x30
#endif #endif
......
...@@ -1258,14 +1258,14 @@ static inline void fpsimd_hotplug_init(void) { } ...@@ -1258,14 +1258,14 @@ static inline void fpsimd_hotplug_init(void) { }
*/ */
static int __init fpsimd_init(void) static int __init fpsimd_init(void)
{ {
if (elf_hwcap & HWCAP_FP) { if (cpu_have_named_feature(FP)) {
fpsimd_pm_init(); fpsimd_pm_init();
fpsimd_hotplug_init(); fpsimd_hotplug_init();
} else { } else {
pr_notice("Floating-point is not implemented\n"); pr_notice("Floating-point is not implemented\n");
} }
if (!(elf_hwcap & HWCAP_ASIMD)) if (!cpu_have_named_feature(ASIMD))
pr_notice("Advanced SIMD is not implemented\n"); pr_notice("Advanced SIMD is not implemented\n");
return sve_sysctl_init(); return sve_sysctl_init();
......
...@@ -505,7 +505,7 @@ ENTRY(el2_setup) ...@@ -505,7 +505,7 @@ ENTRY(el2_setup)
* kernel is intended to run at EL2. * kernel is intended to run at EL2.
*/ */
mrs x2, id_aa64mmfr1_el1 mrs x2, id_aa64mmfr1_el1
ubfx x2, x2, #8, #4 ubfx x2, x2, #ID_AA64MMFR1_VHE_SHIFT, #4
#else #else
mov x2, xzr mov x2, xzr
#endif #endif
...@@ -538,7 +538,7 @@ set_hcr: ...@@ -538,7 +538,7 @@ set_hcr:
#ifdef CONFIG_ARM_GIC_V3 #ifdef CONFIG_ARM_GIC_V3
/* GICv3 system register access */ /* GICv3 system register access */
mrs x0, id_aa64pfr0_el1 mrs x0, id_aa64pfr0_el1
ubfx x0, x0, #24, #4 ubfx x0, x0, #ID_AA64PFR0_GIC_SHIFT, #4
cbz x0, 3f cbz x0, 3f
mrs_s x0, SYS_ICC_SRE_EL2 mrs_s x0, SYS_ICC_SRE_EL2
...@@ -564,8 +564,8 @@ set_hcr: ...@@ -564,8 +564,8 @@ set_hcr:
#endif #endif
/* EL2 debug */ /* EL2 debug */
mrs x1, id_aa64dfr0_el1 // Check ID_AA64DFR0_EL1 PMUVer mrs x1, id_aa64dfr0_el1
sbfx x0, x1, #8, #4 sbfx x0, x1, #ID_AA64DFR0_PMUVER_SHIFT, #4
cmp x0, #1 cmp x0, #1
b.lt 4f // Skip if no PMU present b.lt 4f // Skip if no PMU present
mrs x0, pmcr_el0 // Disable debug access traps mrs x0, pmcr_el0 // Disable debug access traps
...@@ -574,7 +574,7 @@ set_hcr: ...@@ -574,7 +574,7 @@ set_hcr:
csel x3, xzr, x0, lt // all PMU counters from EL1 csel x3, xzr, x0, lt // all PMU counters from EL1
/* Statistical profiling */ /* Statistical profiling */
ubfx x0, x1, #32, #4 // Check ID_AA64DFR0_EL1 PMSVer ubfx x0, x1, #ID_AA64DFR0_PMSVER_SHIFT, #4
cbz x0, 7f // Skip if SPE not present cbz x0, 7f // Skip if SPE not present
cbnz x2, 6f // VHE? cbnz x2, 6f // VHE?
mrs_s x4, SYS_PMBIDR_EL1 // If SPE available at EL2, mrs_s x4, SYS_PMBIDR_EL1 // If SPE available at EL2,
...@@ -684,7 +684,7 @@ ENTRY(__boot_cpu_mode) ...@@ -684,7 +684,7 @@ ENTRY(__boot_cpu_mode)
* with MMU turned off. * with MMU turned off.
*/ */
ENTRY(__early_cpu_boot_status) ENTRY(__early_cpu_boot_status)
.long 0 .quad 0
.popsection .popsection
......
...@@ -244,9 +244,6 @@ int kgdb_arch_handle_exception(int exception_vector, int signo, ...@@ -244,9 +244,6 @@ int kgdb_arch_handle_exception(int exception_vector, int signo,
static int kgdb_brk_fn(struct pt_regs *regs, unsigned int esr) static int kgdb_brk_fn(struct pt_regs *regs, unsigned int esr)
{ {
if (user_mode(regs))
return DBG_HOOK_ERROR;
kgdb_handle_exception(1, SIGTRAP, 0, regs); kgdb_handle_exception(1, SIGTRAP, 0, regs);
return DBG_HOOK_HANDLED; return DBG_HOOK_HANDLED;
} }
...@@ -254,9 +251,6 @@ NOKPROBE_SYMBOL(kgdb_brk_fn) ...@@ -254,9 +251,6 @@ NOKPROBE_SYMBOL(kgdb_brk_fn)
static int kgdb_compiled_brk_fn(struct pt_regs *regs, unsigned int esr) static int kgdb_compiled_brk_fn(struct pt_regs *regs, unsigned int esr)
{ {
if (user_mode(regs))
return DBG_HOOK_ERROR;
compiled_break = 1; compiled_break = 1;
kgdb_handle_exception(1, SIGTRAP, 0, regs); kgdb_handle_exception(1, SIGTRAP, 0, regs);
...@@ -266,7 +260,7 @@ NOKPROBE_SYMBOL(kgdb_compiled_brk_fn); ...@@ -266,7 +260,7 @@ NOKPROBE_SYMBOL(kgdb_compiled_brk_fn);
static int kgdb_step_brk_fn(struct pt_regs *regs, unsigned int esr) static int kgdb_step_brk_fn(struct pt_regs *regs, unsigned int esr)
{ {
if (user_mode(regs) || !kgdb_single_step) if (!kgdb_single_step)
return DBG_HOOK_ERROR; return DBG_HOOK_ERROR;
kgdb_handle_exception(1, SIGTRAP, 0, regs); kgdb_handle_exception(1, SIGTRAP, 0, regs);
...@@ -275,15 +269,13 @@ static int kgdb_step_brk_fn(struct pt_regs *regs, unsigned int esr) ...@@ -275,15 +269,13 @@ static int kgdb_step_brk_fn(struct pt_regs *regs, unsigned int esr)
NOKPROBE_SYMBOL(kgdb_step_brk_fn); NOKPROBE_SYMBOL(kgdb_step_brk_fn);
static struct break_hook kgdb_brkpt_hook = { static struct break_hook kgdb_brkpt_hook = {
.esr_mask = 0xffffffff, .fn = kgdb_brk_fn,
.esr_val = (u32)ESR_ELx_VAL_BRK64(KGDB_DYN_DBG_BRK_IMM), .imm = KGDB_DYN_DBG_BRK_IMM,
.fn = kgdb_brk_fn
}; };
static struct break_hook kgdb_compiled_brkpt_hook = { static struct break_hook kgdb_compiled_brkpt_hook = {
.esr_mask = 0xffffffff, .fn = kgdb_compiled_brk_fn,
.esr_val = (u32)ESR_ELx_VAL_BRK64(KGDB_COMPILED_DBG_BRK_IMM), .imm = KGDB_COMPILED_DBG_BRK_IMM,
.fn = kgdb_compiled_brk_fn
}; };
static struct step_hook kgdb_step_hook = { static struct step_hook kgdb_step_hook = {
...@@ -332,9 +324,9 @@ int kgdb_arch_init(void) ...@@ -332,9 +324,9 @@ int kgdb_arch_init(void)
if (ret != 0) if (ret != 0)
return ret; return ret;
register_break_hook(&kgdb_brkpt_hook); register_kernel_break_hook(&kgdb_brkpt_hook);
register_break_hook(&kgdb_compiled_brkpt_hook); register_kernel_break_hook(&kgdb_compiled_brkpt_hook);
register_step_hook(&kgdb_step_hook); register_kernel_step_hook(&kgdb_step_hook);
return 0; return 0;
} }
...@@ -345,9 +337,9 @@ int kgdb_arch_init(void) ...@@ -345,9 +337,9 @@ int kgdb_arch_init(void)
*/ */
void kgdb_arch_exit(void) void kgdb_arch_exit(void)
{ {
unregister_break_hook(&kgdb_brkpt_hook); unregister_kernel_break_hook(&kgdb_brkpt_hook);
unregister_break_hook(&kgdb_compiled_brkpt_hook); unregister_kernel_break_hook(&kgdb_compiled_brkpt_hook);
unregister_step_hook(&kgdb_step_hook); unregister_kernel_step_hook(&kgdb_step_hook);
unregister_die_notifier(&kgdb_notifier); unregister_die_notifier(&kgdb_notifier);
} }
......
/* SPDX-License-Identifier: GPL-2.0 */
/* /*
* Low-level user helpers placed in the vectors page for AArch32. * AArch32 user helpers.
* Based on the kuser helpers in arch/arm/kernel/entry-armv.S. * Based on the kuser helpers in arch/arm/kernel/entry-armv.S.
* *
* Copyright (C) 2005-2011 Nicolas Pitre <nico@fluxnic.net> * Copyright (C) 2005-2011 Nicolas Pitre <nico@fluxnic.net>
* Copyright (C) 2012 ARM Ltd. * Copyright (C) 2012-2018 ARM Ltd.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
* *
* You should have received a copy of the GNU General Public License * The kuser helpers below are mapped at a fixed address by
* along with this program. If not, see <http://www.gnu.org/licenses/>. * aarch32_setup_additional_pages() and are provided for compatibility
* * reasons with 32 bit (aarch32) applications that need them.
*
* AArch32 user helpers.
*
* Each segment is 32-byte aligned and will be moved to the top of the high
* vector page. New segments (if ever needed) must be added in front of
* existing ones. This mechanism should be used only for things that are
* really small and justified, and not be abused freely.
* *
* See Documentation/arm/kernel_user_helpers.txt for formal definitions. * See Documentation/arm/kernel_user_helpers.txt for formal definitions.
*/ */
...@@ -77,42 +62,3 @@ __kuser_helper_version: // 0xffff0ffc ...@@ -77,42 +62,3 @@ __kuser_helper_version: // 0xffff0ffc
.word ((__kuser_helper_end - __kuser_helper_start) >> 5) .word ((__kuser_helper_end - __kuser_helper_start) >> 5)
.globl __kuser_helper_end .globl __kuser_helper_end
__kuser_helper_end: __kuser_helper_end:
/*
* AArch32 sigreturn code
*
* For ARM syscalls, the syscall number has to be loaded into r7.
* We do not support an OABI userspace.
*
* For Thumb syscalls, we also pass the syscall number via r7. We therefore
* need two 16-bit instructions.
*/
.globl __aarch32_sigret_code_start
__aarch32_sigret_code_start:
/*
* ARM Code
*/
.byte __NR_compat_sigreturn, 0x70, 0xa0, 0xe3 // mov r7, #__NR_compat_sigreturn
.byte __NR_compat_sigreturn, 0x00, 0x00, 0xef // svc #__NR_compat_sigreturn
/*
* Thumb code
*/
.byte __NR_compat_sigreturn, 0x27 // mov r7, #__NR_compat_sigreturn
.byte __NR_compat_sigreturn, 0xdf // svc #__NR_compat_sigreturn
/*
* ARM code
*/
.byte __NR_compat_rt_sigreturn, 0x70, 0xa0, 0xe3 // mov r7, #__NR_compat_rt_sigreturn
.byte __NR_compat_rt_sigreturn, 0x00, 0x00, 0xef // svc #__NR_compat_rt_sigreturn
/*
* Thumb code
*/
.byte __NR_compat_rt_sigreturn, 0x27 // mov r7, #__NR_compat_rt_sigreturn
.byte __NR_compat_rt_sigreturn, 0xdf // svc #__NR_compat_rt_sigreturn
.globl __aarch32_sigret_code_end
__aarch32_sigret_code_end:
...@@ -431,7 +431,7 @@ static inline u64 armv8pmu_read_hw_counter(struct perf_event *event) ...@@ -431,7 +431,7 @@ static inline u64 armv8pmu_read_hw_counter(struct perf_event *event)
return val; return val;
} }
static inline u64 armv8pmu_read_counter(struct perf_event *event) static u64 armv8pmu_read_counter(struct perf_event *event)
{ {
struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu); struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
struct hw_perf_event *hwc = &event->hw; struct hw_perf_event *hwc = &event->hw;
...@@ -468,7 +468,7 @@ static inline void armv8pmu_write_hw_counter(struct perf_event *event, ...@@ -468,7 +468,7 @@ static inline void armv8pmu_write_hw_counter(struct perf_event *event,
} }
} }
static inline void armv8pmu_write_counter(struct perf_event *event, u64 value) static void armv8pmu_write_counter(struct perf_event *event, u64 value)
{ {
struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu); struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
struct hw_perf_event *hwc = &event->hw; struct hw_perf_event *hwc = &event->hw;
......
...@@ -439,15 +439,12 @@ kprobe_ss_hit(struct kprobe_ctlblk *kcb, unsigned long addr) ...@@ -439,15 +439,12 @@ kprobe_ss_hit(struct kprobe_ctlblk *kcb, unsigned long addr)
return DBG_HOOK_ERROR; return DBG_HOOK_ERROR;
} }
int __kprobes static int __kprobes
kprobe_single_step_handler(struct pt_regs *regs, unsigned int esr) kprobe_single_step_handler(struct pt_regs *regs, unsigned int esr)
{ {
struct kprobe_ctlblk *kcb = get_kprobe_ctlblk(); struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
int retval; int retval;
if (user_mode(regs))
return DBG_HOOK_ERROR;
/* return error if this is not our step */ /* return error if this is not our step */
retval = kprobe_ss_hit(kcb, instruction_pointer(regs)); retval = kprobe_ss_hit(kcb, instruction_pointer(regs));
...@@ -461,16 +458,22 @@ kprobe_single_step_handler(struct pt_regs *regs, unsigned int esr) ...@@ -461,16 +458,22 @@ kprobe_single_step_handler(struct pt_regs *regs, unsigned int esr)
return retval; return retval;
} }
int __kprobes static struct step_hook kprobes_step_hook = {
.fn = kprobe_single_step_handler,
};
static int __kprobes
kprobe_breakpoint_handler(struct pt_regs *regs, unsigned int esr) kprobe_breakpoint_handler(struct pt_regs *regs, unsigned int esr)
{ {
if (user_mode(regs))
return DBG_HOOK_ERROR;
kprobe_handler(regs); kprobe_handler(regs);
return DBG_HOOK_HANDLED; return DBG_HOOK_HANDLED;
} }
static struct break_hook kprobes_break_hook = {
.imm = KPROBES_BRK_IMM,
.fn = kprobe_breakpoint_handler,
};
/* /*
* Provide a blacklist of symbols identifying ranges which cannot be kprobed. * Provide a blacklist of symbols identifying ranges which cannot be kprobed.
* This blacklist is exposed to userspace via debugfs (kprobes/blacklist). * This blacklist is exposed to userspace via debugfs (kprobes/blacklist).
...@@ -599,5 +602,8 @@ int __kprobes arch_trampoline_kprobe(struct kprobe *p) ...@@ -599,5 +602,8 @@ int __kprobes arch_trampoline_kprobe(struct kprobe *p)
int __init arch_init_kprobes(void) int __init arch_init_kprobes(void)
{ {
register_kernel_break_hook(&kprobes_break_hook);
register_kernel_step_hook(&kprobes_step_hook);
return 0; return 0;
} }
...@@ -171,7 +171,7 @@ int arch_uprobe_exception_notify(struct notifier_block *self, ...@@ -171,7 +171,7 @@ int arch_uprobe_exception_notify(struct notifier_block *self,
static int uprobe_breakpoint_handler(struct pt_regs *regs, static int uprobe_breakpoint_handler(struct pt_regs *regs,
unsigned int esr) unsigned int esr)
{ {
if (user_mode(regs) && uprobe_pre_sstep_notifier(regs)) if (uprobe_pre_sstep_notifier(regs))
return DBG_HOOK_HANDLED; return DBG_HOOK_HANDLED;
return DBG_HOOK_ERROR; return DBG_HOOK_ERROR;
...@@ -182,21 +182,16 @@ static int uprobe_single_step_handler(struct pt_regs *regs, ...@@ -182,21 +182,16 @@ static int uprobe_single_step_handler(struct pt_regs *regs,
{ {
struct uprobe_task *utask = current->utask; struct uprobe_task *utask = current->utask;
if (user_mode(regs)) { WARN_ON(utask && (instruction_pointer(regs) != utask->xol_vaddr + 4));
WARN_ON(utask &&
(instruction_pointer(regs) != utask->xol_vaddr + 4));
if (uprobe_post_sstep_notifier(regs)) if (uprobe_post_sstep_notifier(regs))
return DBG_HOOK_HANDLED; return DBG_HOOK_HANDLED;
}
return DBG_HOOK_ERROR; return DBG_HOOK_ERROR;
} }
/* uprobe breakpoint handler hook */ /* uprobe breakpoint handler hook */
static struct break_hook uprobes_break_hook = { static struct break_hook uprobes_break_hook = {
.esr_mask = BRK64_ESR_MASK, .imm = UPROBES_BRK_IMM,
.esr_val = BRK64_ESR_UPROBES,
.fn = uprobe_breakpoint_handler, .fn = uprobe_breakpoint_handler,
}; };
...@@ -207,8 +202,8 @@ static struct step_hook uprobes_step_hook = { ...@@ -207,8 +202,8 @@ static struct step_hook uprobes_step_hook = {
static int __init arch_init_uprobes(void) static int __init arch_init_uprobes(void)
{ {
register_break_hook(&uprobes_break_hook); register_user_break_hook(&uprobes_break_hook);
register_step_hook(&uprobes_step_hook); register_user_step_hook(&uprobes_step_hook);
return 0; return 0;
} }
......
...@@ -403,8 +403,7 @@ static void compat_setup_return(struct pt_regs *regs, struct k_sigaction *ka, ...@@ -403,8 +403,7 @@ static void compat_setup_return(struct pt_regs *regs, struct k_sigaction *ka,
if (ka->sa.sa_flags & SA_SIGINFO) if (ka->sa.sa_flags & SA_SIGINFO)
idx += 3; idx += 3;
retcode = AARCH32_VECTORS_BASE + retcode = (unsigned long)current->mm->context.vdso +
AARCH32_KERN_SIGRET_CODE_OFFSET +
(idx << 2) + thumb; (idx << 2) + thumb;
} }
......
/* SPDX-License-Identifier: GPL-2.0 */
/*
* AArch32 sigreturn code.
* Based on the kuser helpers in arch/arm/kernel/entry-armv.S.
*
* Copyright (C) 2005-2011 Nicolas Pitre <nico@fluxnic.net>
* Copyright (C) 2012-2018 ARM Ltd.
*
* For ARM syscalls, the syscall number has to be loaded into r7.
* We do not support an OABI userspace.
*
* For Thumb syscalls, we also pass the syscall number via r7. We therefore
* need two 16-bit instructions.
*/
#include <asm/unistd.h>
.globl __aarch32_sigret_code_start
__aarch32_sigret_code_start:
/*
* ARM Code
*/
.byte __NR_compat_sigreturn, 0x70, 0xa0, 0xe3 // mov r7, #__NR_compat_sigreturn
.byte __NR_compat_sigreturn, 0x00, 0x00, 0xef // svc #__NR_compat_sigreturn
/*
* Thumb code
*/
.byte __NR_compat_sigreturn, 0x27 // mov r7, #__NR_compat_sigreturn
.byte __NR_compat_sigreturn, 0xdf // svc #__NR_compat_sigreturn
/*
* ARM code
*/
.byte __NR_compat_rt_sigreturn, 0x70, 0xa0, 0xe3 // mov r7, #__NR_compat_rt_sigreturn
.byte __NR_compat_rt_sigreturn, 0x00, 0x00, 0xef // svc #__NR_compat_rt_sigreturn
/*
* Thumb code
*/
.byte __NR_compat_rt_sigreturn, 0x27 // mov r7, #__NR_compat_rt_sigreturn
.byte __NR_compat_rt_sigreturn, 0xdf // svc #__NR_compat_rt_sigreturn
.globl __aarch32_sigret_code_end
__aarch32_sigret_code_end:
...@@ -31,7 +31,7 @@ ...@@ -31,7 +31,7 @@
SYSCALL_DEFINE6(mmap, unsigned long, addr, unsigned long, len, SYSCALL_DEFINE6(mmap, unsigned long, addr, unsigned long, len,
unsigned long, prot, unsigned long, flags, unsigned long, prot, unsigned long, flags,
unsigned long, fd, off_t, off) unsigned long, fd, unsigned long, off)
{ {
if (offset_in_page(off) != 0) if (offset_in_page(off) != 0)
return -EINVAL; return -EINVAL;
......
...@@ -462,6 +462,9 @@ static void user_cache_maint_handler(unsigned int esr, struct pt_regs *regs) ...@@ -462,6 +462,9 @@ static void user_cache_maint_handler(unsigned int esr, struct pt_regs *regs)
case ESR_ELx_SYS64_ISS_CRM_DC_CVAC: /* DC CVAC, gets promoted */ case ESR_ELx_SYS64_ISS_CRM_DC_CVAC: /* DC CVAC, gets promoted */
__user_cache_maint("dc civac", address, ret); __user_cache_maint("dc civac", address, ret);
break; break;
case ESR_ELx_SYS64_ISS_CRM_DC_CVADP: /* DC CVADP */
__user_cache_maint("sys 3, c7, c13, 1", address, ret);
break;
case ESR_ELx_SYS64_ISS_CRM_DC_CVAP: /* DC CVAP */ case ESR_ELx_SYS64_ISS_CRM_DC_CVAP: /* DC CVAP */
__user_cache_maint("sys 3, c7, c12, 1", address, ret); __user_cache_maint("sys 3, c7, c12, 1", address, ret);
break; break;
...@@ -496,7 +499,7 @@ static void cntvct_read_handler(unsigned int esr, struct pt_regs *regs) ...@@ -496,7 +499,7 @@ static void cntvct_read_handler(unsigned int esr, struct pt_regs *regs)
{ {
int rt = ESR_ELx_SYS64_ISS_RT(esr); int rt = ESR_ELx_SYS64_ISS_RT(esr);
pt_regs_write_reg(regs, rt, arch_counter_get_cntvct()); pt_regs_write_reg(regs, rt, arch_timer_read_counter());
arm64_skip_faulting_instruction(regs, AARCH64_INSN_SIZE); arm64_skip_faulting_instruction(regs, AARCH64_INSN_SIZE);
} }
...@@ -668,7 +671,7 @@ static void compat_cntvct_read_handler(unsigned int esr, struct pt_regs *regs) ...@@ -668,7 +671,7 @@ static void compat_cntvct_read_handler(unsigned int esr, struct pt_regs *regs)
{ {
int rt = (esr & ESR_ELx_CP15_64_ISS_RT_MASK) >> ESR_ELx_CP15_64_ISS_RT_SHIFT; int rt = (esr & ESR_ELx_CP15_64_ISS_RT_MASK) >> ESR_ELx_CP15_64_ISS_RT_SHIFT;
int rt2 = (esr & ESR_ELx_CP15_64_ISS_RT2_MASK) >> ESR_ELx_CP15_64_ISS_RT2_SHIFT; int rt2 = (esr & ESR_ELx_CP15_64_ISS_RT2_MASK) >> ESR_ELx_CP15_64_ISS_RT2_SHIFT;
u64 val = arch_counter_get_cntvct(); u64 val = arch_timer_read_counter();
pt_regs_write_reg(regs, rt, lower_32_bits(val)); pt_regs_write_reg(regs, rt, lower_32_bits(val));
pt_regs_write_reg(regs, rt2, upper_32_bits(val)); pt_regs_write_reg(regs, rt2, upper_32_bits(val));
...@@ -950,9 +953,6 @@ int is_valid_bugaddr(unsigned long addr) ...@@ -950,9 +953,6 @@ int is_valid_bugaddr(unsigned long addr)
static int bug_handler(struct pt_regs *regs, unsigned int esr) static int bug_handler(struct pt_regs *regs, unsigned int esr)
{ {
if (user_mode(regs))
return DBG_HOOK_ERROR;
switch (report_bug(regs->pc, regs)) { switch (report_bug(regs->pc, regs)) {
case BUG_TRAP_TYPE_BUG: case BUG_TRAP_TYPE_BUG:
die("Oops - BUG", regs, 0); die("Oops - BUG", regs, 0);
...@@ -972,9 +972,8 @@ static int bug_handler(struct pt_regs *regs, unsigned int esr) ...@@ -972,9 +972,8 @@ static int bug_handler(struct pt_regs *regs, unsigned int esr)
} }
static struct break_hook bug_break_hook = { static struct break_hook bug_break_hook = {
.esr_val = 0xf2000000 | BUG_BRK_IMM,
.esr_mask = 0xffffffff,
.fn = bug_handler, .fn = bug_handler,
.imm = BUG_BRK_IMM,
}; };
#ifdef CONFIG_KASAN_SW_TAGS #ifdef CONFIG_KASAN_SW_TAGS
...@@ -992,9 +991,6 @@ static int kasan_handler(struct pt_regs *regs, unsigned int esr) ...@@ -992,9 +991,6 @@ static int kasan_handler(struct pt_regs *regs, unsigned int esr)
u64 addr = regs->regs[0]; u64 addr = regs->regs[0];
u64 pc = regs->pc; u64 pc = regs->pc;
if (user_mode(regs))
return DBG_HOOK_ERROR;
kasan_report(addr, size, write, pc); kasan_report(addr, size, write, pc);
/* /*
...@@ -1019,13 +1015,10 @@ static int kasan_handler(struct pt_regs *regs, unsigned int esr) ...@@ -1019,13 +1015,10 @@ static int kasan_handler(struct pt_regs *regs, unsigned int esr)
return DBG_HOOK_HANDLED; return DBG_HOOK_HANDLED;
} }
#define KASAN_ESR_VAL (0xf2000000 | KASAN_BRK_IMM)
#define KASAN_ESR_MASK 0xffffff00
static struct break_hook kasan_break_hook = { static struct break_hook kasan_break_hook = {
.esr_val = KASAN_ESR_VAL,
.esr_mask = KASAN_ESR_MASK,
.fn = kasan_handler, .fn = kasan_handler,
.imm = KASAN_BRK_IMM,
.mask = KASAN_BRK_MASK,
}; };
#endif #endif
...@@ -1037,7 +1030,9 @@ int __init early_brk64(unsigned long addr, unsigned int esr, ...@@ -1037,7 +1030,9 @@ int __init early_brk64(unsigned long addr, unsigned int esr,
struct pt_regs *regs) struct pt_regs *regs)
{ {
#ifdef CONFIG_KASAN_SW_TAGS #ifdef CONFIG_KASAN_SW_TAGS
if ((esr & KASAN_ESR_MASK) == KASAN_ESR_VAL) unsigned int comment = esr & ESR_ELx_BRK64_ISS_COMMENT_MASK;
if ((comment & ~KASAN_BRK_MASK) == KASAN_BRK_IMM)
return kasan_handler(regs, esr) != DBG_HOOK_HANDLED; return kasan_handler(regs, esr) != DBG_HOOK_HANDLED;
#endif #endif
return bug_handler(regs, esr) != DBG_HOOK_HANDLED; return bug_handler(regs, esr) != DBG_HOOK_HANDLED;
...@@ -1046,8 +1041,8 @@ int __init early_brk64(unsigned long addr, unsigned int esr, ...@@ -1046,8 +1041,8 @@ int __init early_brk64(unsigned long addr, unsigned int esr,
/* This registration must happen early, before debug_traps_init(). */ /* This registration must happen early, before debug_traps_init(). */
void __init trap_init(void) void __init trap_init(void)
{ {
register_break_hook(&bug_break_hook); register_kernel_break_hook(&bug_break_hook);
#ifdef CONFIG_KASAN_SW_TAGS #ifdef CONFIG_KASAN_SW_TAGS
register_break_hook(&kasan_break_hook); register_kernel_break_hook(&kasan_break_hook);
#endif #endif
} }
/* /*
* VDSO implementation for AArch64 and vector page setup for AArch32. * VDSO implementations.
* *
* Copyright (C) 2012 ARM Limited * Copyright (C) 2012 ARM Limited
* *
...@@ -53,61 +53,129 @@ struct vdso_data *vdso_data = &vdso_data_store.data; ...@@ -53,61 +53,129 @@ struct vdso_data *vdso_data = &vdso_data_store.data;
/* /*
* Create and map the vectors page for AArch32 tasks. * Create and map the vectors page for AArch32 tasks.
*/ */
static struct page *vectors_page[1] __ro_after_init; #define C_VECTORS 0
#define C_SIGPAGE 1
#define C_PAGES (C_SIGPAGE + 1)
static struct page *aarch32_vdso_pages[C_PAGES] __ro_after_init;
static const struct vm_special_mapping aarch32_vdso_spec[C_PAGES] = {
{
.name = "[vectors]", /* ABI */
.pages = &aarch32_vdso_pages[C_VECTORS],
},
{
.name = "[sigpage]", /* ABI */
.pages = &aarch32_vdso_pages[C_SIGPAGE],
},
};
static int __init alloc_vectors_page(void) static int aarch32_alloc_kuser_vdso_page(void)
{ {
extern char __kuser_helper_start[], __kuser_helper_end[]; extern char __kuser_helper_start[], __kuser_helper_end[];
extern char __aarch32_sigret_code_start[], __aarch32_sigret_code_end[];
int kuser_sz = __kuser_helper_end - __kuser_helper_start; int kuser_sz = __kuser_helper_end - __kuser_helper_start;
int sigret_sz = __aarch32_sigret_code_end - __aarch32_sigret_code_start; unsigned long vdso_page;
unsigned long vpage;
vpage = get_zeroed_page(GFP_ATOMIC); if (!IS_ENABLED(CONFIG_KUSER_HELPERS))
return 0;
if (!vpage) vdso_page = get_zeroed_page(GFP_ATOMIC);
if (!vdso_page)
return -ENOMEM; return -ENOMEM;
/* kuser helpers */ memcpy((void *)(vdso_page + 0x1000 - kuser_sz), __kuser_helper_start,
memcpy((void *)vpage + 0x1000 - kuser_sz, __kuser_helper_start,
kuser_sz); kuser_sz);
aarch32_vdso_pages[C_VECTORS] = virt_to_page(vdso_page);
flush_dcache_page(aarch32_vdso_pages[C_VECTORS]);
return 0;
}
/* sigreturn code */ static int __init aarch32_alloc_vdso_pages(void)
memcpy((void *)vpage + AARCH32_KERN_SIGRET_CODE_OFFSET, {
__aarch32_sigret_code_start, sigret_sz); extern char __aarch32_sigret_code_start[], __aarch32_sigret_code_end[];
int sigret_sz = __aarch32_sigret_code_end - __aarch32_sigret_code_start;
unsigned long sigpage;
int ret;
sigpage = get_zeroed_page(GFP_ATOMIC);
if (!sigpage)
return -ENOMEM;
flush_icache_range(vpage, vpage + PAGE_SIZE); memcpy((void *)sigpage, __aarch32_sigret_code_start, sigret_sz);
vectors_page[0] = virt_to_page(vpage); aarch32_vdso_pages[C_SIGPAGE] = virt_to_page(sigpage);
flush_dcache_page(aarch32_vdso_pages[C_SIGPAGE]);
return 0; ret = aarch32_alloc_kuser_vdso_page();
if (ret)
free_page(sigpage);
return ret;
} }
arch_initcall(alloc_vectors_page); arch_initcall(aarch32_alloc_vdso_pages);
int aarch32_setup_vectors_page(struct linux_binprm *bprm, int uses_interp) static int aarch32_kuser_helpers_setup(struct mm_struct *mm)
{ {
struct mm_struct *mm = current->mm; void *ret;
unsigned long addr = AARCH32_VECTORS_BASE;
static const struct vm_special_mapping spec = { if (!IS_ENABLED(CONFIG_KUSER_HELPERS))
.name = "[vectors]", return 0;
.pages = vectors_page,
/*
* Avoid VM_MAYWRITE for compatibility with arch/arm/, where it's
* not safe to CoW the page containing the CPU exception vectors.
*/
ret = _install_special_mapping(mm, AARCH32_VECTORS_BASE, PAGE_SIZE,
VM_READ | VM_EXEC |
VM_MAYREAD | VM_MAYEXEC,
&aarch32_vdso_spec[C_VECTORS]);
return PTR_ERR_OR_ZERO(ret);
}
}; static int aarch32_sigreturn_setup(struct mm_struct *mm)
{
unsigned long addr;
void *ret; void *ret;
if (down_write_killable(&mm->mmap_sem)) addr = get_unmapped_area(NULL, 0, PAGE_SIZE, 0, 0);
return -EINTR; if (IS_ERR_VALUE(addr)) {
current->mm->context.vdso = (void *)addr; ret = ERR_PTR(addr);
goto out;
}
/* Map vectors page at the high address. */ /*
* VM_MAYWRITE is required to allow gdb to Copy-on-Write and
* set breakpoints.
*/
ret = _install_special_mapping(mm, addr, PAGE_SIZE, ret = _install_special_mapping(mm, addr, PAGE_SIZE,
VM_READ|VM_EXEC|VM_MAYREAD|VM_MAYEXEC, VM_READ | VM_EXEC | VM_MAYREAD |
&spec); VM_MAYWRITE | VM_MAYEXEC,
&aarch32_vdso_spec[C_SIGPAGE]);
if (IS_ERR(ret))
goto out;
up_write(&mm->mmap_sem); mm->context.vdso = (void *)addr;
out:
return PTR_ERR_OR_ZERO(ret); return PTR_ERR_OR_ZERO(ret);
} }
int aarch32_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
{
struct mm_struct *mm = current->mm;
int ret;
if (down_write_killable(&mm->mmap_sem))
return -EINTR;
ret = aarch32_kuser_helpers_setup(mm);
if (ret)
goto out;
ret = aarch32_sigreturn_setup(mm);
out:
up_write(&mm->mmap_sem);
return ret;
}
#endif /* CONFIG_COMPAT */ #endif /* CONFIG_COMPAT */
static int vdso_mremap(const struct vm_special_mapping *sm, static int vdso_mremap(const struct vm_special_mapping *sm,
...@@ -146,8 +214,6 @@ static int __init vdso_init(void) ...@@ -146,8 +214,6 @@ static int __init vdso_init(void)
} }
vdso_pages = (vdso_end - vdso_start) >> PAGE_SHIFT; vdso_pages = (vdso_end - vdso_start) >> PAGE_SHIFT;
pr_info("vdso: %ld pages (%ld code @ %p, %ld data @ %p)\n",
vdso_pages + 1, vdso_pages, vdso_start, 1L, vdso_data);
/* Allocate the vDSO pagelist, plus a page for the data. */ /* Allocate the vDSO pagelist, plus a page for the data. */
vdso_pagelist = kcalloc(vdso_pages + 1, sizeof(struct page *), vdso_pagelist = kcalloc(vdso_pages + 1, sizeof(struct page *),
...@@ -232,6 +298,9 @@ void update_vsyscall(struct timekeeper *tk) ...@@ -232,6 +298,9 @@ void update_vsyscall(struct timekeeper *tk)
vdso_data->wtm_clock_sec = tk->wall_to_monotonic.tv_sec; vdso_data->wtm_clock_sec = tk->wall_to_monotonic.tv_sec;
vdso_data->wtm_clock_nsec = tk->wall_to_monotonic.tv_nsec; vdso_data->wtm_clock_nsec = tk->wall_to_monotonic.tv_nsec;
/* Read without the seqlock held by clock_getres() */
WRITE_ONCE(vdso_data->hrtimer_res, hrtimer_resolution);
if (!use_syscall) { if (!use_syscall) {
/* tkr_mono.cycle_last == tkr_raw.cycle_last */ /* tkr_mono.cycle_last == tkr_raw.cycle_last */
vdso_data->cs_cycle_last = tk->tkr_mono.cycle_last; vdso_data->cs_cycle_last = tk->tkr_mono.cycle_last;
......
...@@ -12,17 +12,12 @@ obj-vdso := gettimeofday.o note.o sigreturn.o ...@@ -12,17 +12,12 @@ obj-vdso := gettimeofday.o note.o sigreturn.o
targets := $(obj-vdso) vdso.so vdso.so.dbg targets := $(obj-vdso) vdso.so vdso.so.dbg
obj-vdso := $(addprefix $(obj)/, $(obj-vdso)) obj-vdso := $(addprefix $(obj)/, $(obj-vdso))
ccflags-y := -shared -fno-common -fno-builtin ldflags-y := -shared -nostdlib -soname=linux-vdso.so.1 \
ccflags-y += -nostdlib -Wl,-soname=linux-vdso.so.1 \ $(call ld-option, --hash-style=sysv) -n -T
$(call cc-ldoption, -Wl$(comma)--hash-style=sysv)
# Disable gcov profiling for VDSO code # Disable gcov profiling for VDSO code
GCOV_PROFILE := n GCOV_PROFILE := n
# Workaround for bare-metal (ELF) toolchains that neglect to pass -shared
# down to collect2, resulting in silent corruption of the vDSO image.
ccflags-y += -Wl,-shared
obj-y += vdso.o obj-y += vdso.o
extra-y += vdso.lds extra-y += vdso.lds
CPPFLAGS_vdso.lds += -P -C -U$(ARCH) CPPFLAGS_vdso.lds += -P -C -U$(ARCH)
...@@ -31,8 +26,8 @@ CPPFLAGS_vdso.lds += -P -C -U$(ARCH) ...@@ -31,8 +26,8 @@ CPPFLAGS_vdso.lds += -P -C -U$(ARCH)
$(obj)/vdso.o : $(obj)/vdso.so $(obj)/vdso.o : $(obj)/vdso.so
# Link rule for the .so file, .lds has to be first # Link rule for the .so file, .lds has to be first
$(obj)/vdso.so.dbg: $(src)/vdso.lds $(obj-vdso) $(obj)/vdso.so.dbg: $(obj)/vdso.lds $(obj-vdso) FORCE
$(call if_changed,vdsold) $(call if_changed,ld)
# Strip rule for the .so file # Strip rule for the .so file
$(obj)/%.so: OBJCOPYFLAGS := -S $(obj)/%.so: OBJCOPYFLAGS := -S
...@@ -42,9 +37,7 @@ $(obj)/%.so: $(obj)/%.so.dbg FORCE ...@@ -42,9 +37,7 @@ $(obj)/%.so: $(obj)/%.so.dbg FORCE
# Generate VDSO offsets using helper script # Generate VDSO offsets using helper script
gen-vdsosym := $(srctree)/$(src)/gen_vdso_offsets.sh gen-vdsosym := $(srctree)/$(src)/gen_vdso_offsets.sh
quiet_cmd_vdsosym = VDSOSYM $@ quiet_cmd_vdsosym = VDSOSYM $@
define cmd_vdsosym cmd_vdsosym = $(NM) $< | $(gen-vdsosym) | LC_ALL=C sort > $@
$(NM) $< | $(gen-vdsosym) | LC_ALL=C sort > $@
endef
include/generated/vdso-offsets.h: $(obj)/vdso.so.dbg FORCE include/generated/vdso-offsets.h: $(obj)/vdso.so.dbg FORCE
$(call if_changed,vdsosym) $(call if_changed,vdsosym)
...@@ -54,8 +47,6 @@ $(obj-vdso): %.o: %.S FORCE ...@@ -54,8 +47,6 @@ $(obj-vdso): %.o: %.S FORCE
$(call if_changed_dep,vdsoas) $(call if_changed_dep,vdsoas)
# Actual build commands # Actual build commands
quiet_cmd_vdsold = VDSOL $@
cmd_vdsold = $(CC) $(c_flags) -Wl,-n -Wl,-T $^ -o $@
quiet_cmd_vdsoas = VDSOA $@ quiet_cmd_vdsoas = VDSOA $@
cmd_vdsoas = $(CC) $(a_flags) -c -o $@ $< cmd_vdsoas = $(CC) $(a_flags) -c -o $@ $<
......
...@@ -73,6 +73,13 @@ x_tmp .req x8 ...@@ -73,6 +73,13 @@ x_tmp .req x8
movn x_tmp, #0xff00, lsl #48 movn x_tmp, #0xff00, lsl #48
and \res, x_tmp, \res and \res, x_tmp, \res
mul \res, \res, \mult mul \res, \res, \mult
/*
* Fake address dependency from the value computed from the counter
* register to subsequent data page accesses so that the sequence
* locking also orders the read of the counter.
*/
and x_tmp, \res, xzr
add vdso_data, vdso_data, x_tmp
.endm .endm
/* /*
...@@ -147,12 +154,12 @@ ENTRY(__kernel_gettimeofday) ...@@ -147,12 +154,12 @@ ENTRY(__kernel_gettimeofday)
/* w11 = cs_mono_mult, w12 = cs_shift */ /* w11 = cs_mono_mult, w12 = cs_shift */
ldp w11, w12, [vdso_data, #VDSO_CS_MONO_MULT] ldp w11, w12, [vdso_data, #VDSO_CS_MONO_MULT]
ldp x13, x14, [vdso_data, #VDSO_XTIME_CLK_SEC] ldp x13, x14, [vdso_data, #VDSO_XTIME_CLK_SEC]
seqcnt_check fail=1b
get_nsec_per_sec res=x9 get_nsec_per_sec res=x9
lsl x9, x9, x12 lsl x9, x9, x12
get_clock_shifted_nsec res=x15, cycle_last=x10, mult=x11 get_clock_shifted_nsec res=x15, cycle_last=x10, mult=x11
seqcnt_check fail=1b
get_ts_realtime res_sec=x10, res_nsec=x11, \ get_ts_realtime res_sec=x10, res_nsec=x11, \
clock_nsec=x15, xtime_sec=x13, xtime_nsec=x14, nsec_to_sec=x9 clock_nsec=x15, xtime_sec=x13, xtime_nsec=x14, nsec_to_sec=x9
...@@ -211,13 +218,13 @@ realtime: ...@@ -211,13 +218,13 @@ realtime:
/* w11 = cs_mono_mult, w12 = cs_shift */ /* w11 = cs_mono_mult, w12 = cs_shift */
ldp w11, w12, [vdso_data, #VDSO_CS_MONO_MULT] ldp w11, w12, [vdso_data, #VDSO_CS_MONO_MULT]
ldp x13, x14, [vdso_data, #VDSO_XTIME_CLK_SEC] ldp x13, x14, [vdso_data, #VDSO_XTIME_CLK_SEC]
seqcnt_check fail=realtime
/* All computations are done with left-shifted nsecs. */ /* All computations are done with left-shifted nsecs. */
get_nsec_per_sec res=x9 get_nsec_per_sec res=x9
lsl x9, x9, x12 lsl x9, x9, x12
get_clock_shifted_nsec res=x15, cycle_last=x10, mult=x11 get_clock_shifted_nsec res=x15, cycle_last=x10, mult=x11
seqcnt_check fail=realtime
get_ts_realtime res_sec=x10, res_nsec=x11, \ get_ts_realtime res_sec=x10, res_nsec=x11, \
clock_nsec=x15, xtime_sec=x13, xtime_nsec=x14, nsec_to_sec=x9 clock_nsec=x15, xtime_sec=x13, xtime_nsec=x14, nsec_to_sec=x9
clock_gettime_return, shift=1 clock_gettime_return, shift=1
...@@ -231,7 +238,6 @@ monotonic: ...@@ -231,7 +238,6 @@ monotonic:
ldp w11, w12, [vdso_data, #VDSO_CS_MONO_MULT] ldp w11, w12, [vdso_data, #VDSO_CS_MONO_MULT]
ldp x13, x14, [vdso_data, #VDSO_XTIME_CLK_SEC] ldp x13, x14, [vdso_data, #VDSO_XTIME_CLK_SEC]
ldp x3, x4, [vdso_data, #VDSO_WTM_CLK_SEC] ldp x3, x4, [vdso_data, #VDSO_WTM_CLK_SEC]
seqcnt_check fail=monotonic
/* All computations are done with left-shifted nsecs. */ /* All computations are done with left-shifted nsecs. */
lsl x4, x4, x12 lsl x4, x4, x12
...@@ -239,6 +245,7 @@ monotonic: ...@@ -239,6 +245,7 @@ monotonic:
lsl x9, x9, x12 lsl x9, x9, x12
get_clock_shifted_nsec res=x15, cycle_last=x10, mult=x11 get_clock_shifted_nsec res=x15, cycle_last=x10, mult=x11
seqcnt_check fail=monotonic
get_ts_realtime res_sec=x10, res_nsec=x11, \ get_ts_realtime res_sec=x10, res_nsec=x11, \
clock_nsec=x15, xtime_sec=x13, xtime_nsec=x14, nsec_to_sec=x9 clock_nsec=x15, xtime_sec=x13, xtime_nsec=x14, nsec_to_sec=x9
...@@ -253,13 +260,13 @@ monotonic_raw: ...@@ -253,13 +260,13 @@ monotonic_raw:
/* w11 = cs_raw_mult, w12 = cs_shift */ /* w11 = cs_raw_mult, w12 = cs_shift */
ldp w12, w11, [vdso_data, #VDSO_CS_SHIFT] ldp w12, w11, [vdso_data, #VDSO_CS_SHIFT]
ldp x13, x14, [vdso_data, #VDSO_RAW_TIME_SEC] ldp x13, x14, [vdso_data, #VDSO_RAW_TIME_SEC]
seqcnt_check fail=monotonic_raw
/* All computations are done with left-shifted nsecs. */ /* All computations are done with left-shifted nsecs. */
get_nsec_per_sec res=x9 get_nsec_per_sec res=x9
lsl x9, x9, x12 lsl x9, x9, x12
get_clock_shifted_nsec res=x15, cycle_last=x10, mult=x11 get_clock_shifted_nsec res=x15, cycle_last=x10, mult=x11
seqcnt_check fail=monotonic_raw
get_ts_clock_raw res_sec=x10, res_nsec=x11, \ get_ts_clock_raw res_sec=x10, res_nsec=x11, \
clock_nsec=x15, nsec_to_sec=x9 clock_nsec=x15, nsec_to_sec=x9
...@@ -301,13 +308,14 @@ ENTRY(__kernel_clock_getres) ...@@ -301,13 +308,14 @@ ENTRY(__kernel_clock_getres)
ccmp w0, #CLOCK_MONOTONIC_RAW, #0x4, ne ccmp w0, #CLOCK_MONOTONIC_RAW, #0x4, ne
b.ne 1f b.ne 1f
ldr x2, 5f adr vdso_data, _vdso_data
ldr w2, [vdso_data, #CLOCK_REALTIME_RES]
b 2f b 2f
1: 1:
cmp w0, #CLOCK_REALTIME_COARSE cmp w0, #CLOCK_REALTIME_COARSE
ccmp w0, #CLOCK_MONOTONIC_COARSE, #0x4, ne ccmp w0, #CLOCK_MONOTONIC_COARSE, #0x4, ne
b.ne 4f b.ne 4f
ldr x2, 6f ldr x2, 5f
2: 2:
cbz x1, 3f cbz x1, 3f
stp xzr, x2, [x1] stp xzr, x2, [x1]
...@@ -321,8 +329,6 @@ ENTRY(__kernel_clock_getres) ...@@ -321,8 +329,6 @@ ENTRY(__kernel_clock_getres)
svc #0 svc #0
ret ret
5: 5:
.quad CLOCK_REALTIME_RES
6:
.quad CLOCK_COARSE_RES .quad CLOCK_COARSE_RES
.cfi_endproc .cfi_endproc
ENDPROC(__kernel_clock_getres) ENDPROC(__kernel_clock_getres)
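The seqcnt_check relocations above, together with the fake address dependency added to get_clock_shifted_nsec, keep the counter read inside the sequence-counter critical section. A portable-C analogue of that read side is sketched below; the structure layout, field names and the read_counter() stub are invented for illustration and are not the kernel's vDSO code.

#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical data-page layout, for illustration only. */
struct vdso_data_like {
	_Atomic uint32_t seq;
	_Atomic uint64_t cycle_last;
	_Atomic uint32_t mult;
	_Atomic uint32_t shift;
};

/* Stand-in for the architected counter read (cntvct_el0 on arm64). */
static uint64_t read_counter(void)
{
	static _Atomic uint64_t fake;
	return atomic_fetch_add(&fake, 1);
}

static uint64_t sample_shifted_ns(struct vdso_data_like *vd)
{
	uint32_t seq, mult, shift;
	uint64_t cyc, last;

	do {
		seq   = atomic_load_explicit(&vd->seq, memory_order_acquire);
		last  = atomic_load_explicit(&vd->cycle_last, memory_order_relaxed);
		mult  = atomic_load_explicit(&vd->mult, memory_order_relaxed);
		shift = atomic_load_explicit(&vd->shift, memory_order_relaxed);
		cyc   = read_counter();
		/*
		 * The counter and data reads must complete before the
		 * sequence counter is re-checked.  The assembly above gets
		 * the counter ordered by faking an address dependency
		 * ("and x_tmp, \res, xzr" is always zero, so the data page
		 * address is unchanged but a dependency now exists); in
		 * portable C an acquire fence plays the same role.
		 */
		atomic_thread_fence(memory_order_acquire);
	} while (atomic_load_explicit(&vd->seq, memory_order_relaxed) != seq);

	return ((cyc - last) * mult) >> shift;
}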
...@@ -24,7 +24,7 @@ CFLAGS_atomic_ll_sc.o := -ffixed-x1 -ffixed-x2 \ ...@@ -24,7 +24,7 @@ CFLAGS_atomic_ll_sc.o := -ffixed-x1 -ffixed-x2 \
-fcall-saved-x10 -fcall-saved-x11 -fcall-saved-x12 \ -fcall-saved-x10 -fcall-saved-x11 -fcall-saved-x12 \
-fcall-saved-x13 -fcall-saved-x14 -fcall-saved-x15 \ -fcall-saved-x13 -fcall-saved-x14 -fcall-saved-x15 \
-fcall-saved-x18 -fomit-frame-pointer -fcall-saved-x18 -fomit-frame-pointer
CFLAGS_REMOVE_atomic_ll_sc.o := -pg CFLAGS_REMOVE_atomic_ll_sc.o := $(CC_FLAGS_FTRACE)
GCOV_PROFILE_atomic_ll_sc.o := n GCOV_PROFILE_atomic_ll_sc.o := n
KASAN_SANITIZE_atomic_ll_sc.o := n KASAN_SANITIZE_atomic_ll_sc.o := n
KCOV_INSTRUMENT_atomic_ll_sc.o := n KCOV_INSTRUMENT_atomic_ll_sc.o := n
......
...@@ -148,7 +148,7 @@ static inline bool is_ttbr1_addr(unsigned long addr) ...@@ -148,7 +148,7 @@ static inline bool is_ttbr1_addr(unsigned long addr)
/* /*
* Dump out the page tables associated with 'addr' in the currently active mm. * Dump out the page tables associated with 'addr' in the currently active mm.
*/ */
void show_pte(unsigned long addr) static void show_pte(unsigned long addr)
{ {
struct mm_struct *mm; struct mm_struct *mm;
pgd_t *pgdp; pgd_t *pgdp;
@@ -810,13 +810,12 @@ void __init hook_debug_fault_code(int nr,
 	debug_fault_info[nr].name = name;
 }

-asmlinkage int __exception do_debug_exception(unsigned long addr_if_watchpoint,
-					      unsigned int esr,
-					      struct pt_regs *regs)
+asmlinkage void __exception do_debug_exception(unsigned long addr_if_watchpoint,
+					       unsigned int esr,
+					       struct pt_regs *regs)
 {
 	const struct fault_info *inf = esr_to_debug_fault_info(esr);
 	unsigned long pc = instruction_pointer(regs);
-	int rv;

 	/*
 	 * Tell lockdep we disabled irqs in entry.S. Do nothing if they were
@@ -828,17 +827,12 @@ asmlinkage int __exception do_debug_exception(unsigned long addr_if_watchpoint,
 	if (user_mode(regs) && !is_ttbr0_addr(pc))
 		arm64_apply_bp_hardening();

-	if (!inf->fn(addr_if_watchpoint, esr, regs)) {
-		rv = 1;
-	} else {
+	if (inf->fn(addr_if_watchpoint, esr, regs)) {
 		arm64_notify_die(inf->name, regs,
 				 inf->sig, inf->code, (void __user *)pc, esr);
-		rv = 0;
 	}

 	if (interrupts_enabled(regs))
 		trace_hardirqs_on();
-
-	return rv;
 }
 NOKPROBE_SYMBOL(do_debug_exception);
...@@ -377,7 +377,7 @@ void __init arm64_memblock_init(void) ...@@ -377,7 +377,7 @@ void __init arm64_memblock_init(void)
base + size > memblock_start_of_DRAM() + base + size > memblock_start_of_DRAM() +
linear_region_size, linear_region_size,
"initrd not fully accessible via the linear mapping -- please check your bootloader ...\n")) { "initrd not fully accessible via the linear mapping -- please check your bootloader ...\n")) {
initrd_start = 0; phys_initrd_size = 0;
} else { } else {
memblock_remove(base, size); /* clear MEMBLOCK_ flags */ memblock_remove(base, size); /* clear MEMBLOCK_ flags */
memblock_add(base, size); memblock_add(base, size);
...@@ -440,6 +440,7 @@ void __init bootmem_init(void) ...@@ -440,6 +440,7 @@ void __init bootmem_init(void)
early_memtest(min << PAGE_SHIFT, max << PAGE_SHIFT); early_memtest(min << PAGE_SHIFT, max << PAGE_SHIFT);
max_pfn = max_low_pfn = max; max_pfn = max_low_pfn = max;
min_low_pfn = min;
arm64_numa_init(); arm64_numa_init();
/* /*
...@@ -535,7 +536,7 @@ void __init mem_init(void) ...@@ -535,7 +536,7 @@ void __init mem_init(void)
else else
swiotlb_force = SWIOTLB_NO_FORCE; swiotlb_force = SWIOTLB_NO_FORCE;
set_max_mapnr(pfn_to_page(max_pfn) - mem_map); set_max_mapnr(max_pfn - PHYS_PFN_OFFSET);
#ifndef CONFIG_SPARSEMEM_VMEMMAP #ifndef CONFIG_SPARSEMEM_VMEMMAP
free_unused_memmap(); free_unused_memmap();
......
...@@ -97,7 +97,7 @@ pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn, ...@@ -97,7 +97,7 @@ pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
} }
EXPORT_SYMBOL(phys_mem_access_prot); EXPORT_SYMBOL(phys_mem_access_prot);
static phys_addr_t __init early_pgtable_alloc(void) static phys_addr_t __init early_pgtable_alloc(int shift)
{ {
phys_addr_t phys; phys_addr_t phys;
void *ptr; void *ptr;
...@@ -174,7 +174,7 @@ static void init_pte(pmd_t *pmdp, unsigned long addr, unsigned long end, ...@@ -174,7 +174,7 @@ static void init_pte(pmd_t *pmdp, unsigned long addr, unsigned long end,
static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr, static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
unsigned long end, phys_addr_t phys, unsigned long end, phys_addr_t phys,
pgprot_t prot, pgprot_t prot,
phys_addr_t (*pgtable_alloc)(void), phys_addr_t (*pgtable_alloc)(int),
int flags) int flags)
{ {
unsigned long next; unsigned long next;
...@@ -184,7 +184,7 @@ static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr, ...@@ -184,7 +184,7 @@ static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
if (pmd_none(pmd)) { if (pmd_none(pmd)) {
phys_addr_t pte_phys; phys_addr_t pte_phys;
BUG_ON(!pgtable_alloc); BUG_ON(!pgtable_alloc);
pte_phys = pgtable_alloc(); pte_phys = pgtable_alloc(PAGE_SHIFT);
__pmd_populate(pmdp, pte_phys, PMD_TYPE_TABLE); __pmd_populate(pmdp, pte_phys, PMD_TYPE_TABLE);
pmd = READ_ONCE(*pmdp); pmd = READ_ONCE(*pmdp);
} }
...@@ -208,7 +208,7 @@ static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr, ...@@ -208,7 +208,7 @@ static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
static void init_pmd(pud_t *pudp, unsigned long addr, unsigned long end, static void init_pmd(pud_t *pudp, unsigned long addr, unsigned long end,
phys_addr_t phys, pgprot_t prot, phys_addr_t phys, pgprot_t prot,
phys_addr_t (*pgtable_alloc)(void), int flags) phys_addr_t (*pgtable_alloc)(int), int flags)
{ {
unsigned long next; unsigned long next;
pmd_t *pmdp; pmd_t *pmdp;
...@@ -246,7 +246,7 @@ static void init_pmd(pud_t *pudp, unsigned long addr, unsigned long end, ...@@ -246,7 +246,7 @@ static void init_pmd(pud_t *pudp, unsigned long addr, unsigned long end,
static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr, static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
unsigned long end, phys_addr_t phys, unsigned long end, phys_addr_t phys,
pgprot_t prot, pgprot_t prot,
phys_addr_t (*pgtable_alloc)(void), int flags) phys_addr_t (*pgtable_alloc)(int), int flags)
{ {
unsigned long next; unsigned long next;
pud_t pud = READ_ONCE(*pudp); pud_t pud = READ_ONCE(*pudp);
...@@ -258,7 +258,7 @@ static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr, ...@@ -258,7 +258,7 @@ static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
if (pud_none(pud)) { if (pud_none(pud)) {
phys_addr_t pmd_phys; phys_addr_t pmd_phys;
BUG_ON(!pgtable_alloc); BUG_ON(!pgtable_alloc);
pmd_phys = pgtable_alloc(); pmd_phys = pgtable_alloc(PMD_SHIFT);
__pud_populate(pudp, pmd_phys, PUD_TYPE_TABLE); __pud_populate(pudp, pmd_phys, PUD_TYPE_TABLE);
pud = READ_ONCE(*pudp); pud = READ_ONCE(*pudp);
} }
...@@ -294,7 +294,7 @@ static inline bool use_1G_block(unsigned long addr, unsigned long next, ...@@ -294,7 +294,7 @@ static inline bool use_1G_block(unsigned long addr, unsigned long next,
static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end, static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
phys_addr_t phys, pgprot_t prot, phys_addr_t phys, pgprot_t prot,
phys_addr_t (*pgtable_alloc)(void), phys_addr_t (*pgtable_alloc)(int),
int flags) int flags)
{ {
unsigned long next; unsigned long next;
...@@ -304,7 +304,7 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end, ...@@ -304,7 +304,7 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
if (pgd_none(pgd)) { if (pgd_none(pgd)) {
phys_addr_t pud_phys; phys_addr_t pud_phys;
BUG_ON(!pgtable_alloc); BUG_ON(!pgtable_alloc);
pud_phys = pgtable_alloc(); pud_phys = pgtable_alloc(PUD_SHIFT);
__pgd_populate(pgdp, pud_phys, PUD_TYPE_TABLE); __pgd_populate(pgdp, pud_phys, PUD_TYPE_TABLE);
pgd = READ_ONCE(*pgdp); pgd = READ_ONCE(*pgdp);
} }
...@@ -345,7 +345,7 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end, ...@@ -345,7 +345,7 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys, static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
unsigned long virt, phys_addr_t size, unsigned long virt, phys_addr_t size,
pgprot_t prot, pgprot_t prot,
phys_addr_t (*pgtable_alloc)(void), phys_addr_t (*pgtable_alloc)(int),
int flags) int flags)
{ {
unsigned long addr, length, end, next; unsigned long addr, length, end, next;
...@@ -371,17 +371,36 @@ static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys, ...@@ -371,17 +371,36 @@ static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
} while (pgdp++, addr = next, addr != end); } while (pgdp++, addr = next, addr != end);
} }
static phys_addr_t pgd_pgtable_alloc(void) static phys_addr_t __pgd_pgtable_alloc(int shift)
{ {
void *ptr = (void *)__get_free_page(PGALLOC_GFP); void *ptr = (void *)__get_free_page(PGALLOC_GFP);
if (!ptr || !pgtable_page_ctor(virt_to_page(ptr))) BUG_ON(!ptr);
BUG();
/* Ensure the zeroed page is visible to the page table walker */ /* Ensure the zeroed page is visible to the page table walker */
dsb(ishst); dsb(ishst);
return __pa(ptr); return __pa(ptr);
} }
static phys_addr_t pgd_pgtable_alloc(int shift)
{
phys_addr_t pa = __pgd_pgtable_alloc(shift);
/*
* Call proper page table ctor in case later we need to
* call core mm functions like apply_to_page_range() on
* this pre-allocated page table.
*
* We don't select ARCH_ENABLE_SPLIT_PMD_PTLOCK if pmd is
* folded, and if so pgtable_pmd_page_ctor() becomes nop.
*/
if (shift == PAGE_SHIFT)
BUG_ON(!pgtable_page_ctor(phys_to_page(pa)));
else if (shift == PMD_SHIFT)
BUG_ON(!pgtable_pmd_page_ctor(phys_to_page(pa)));
return pa;
}
/* /*
* This function can only be used to modify existing table entries, * This function can only be used to modify existing table entries,
* without allocating new levels of table. Note that this permits the * without allocating new levels of table. Note that this permits the
...@@ -583,7 +602,7 @@ static int __init map_entry_trampoline(void) ...@@ -583,7 +602,7 @@ static int __init map_entry_trampoline(void)
/* Map only the text into the trampoline page table */ /* Map only the text into the trampoline page table */
memset(tramp_pg_dir, 0, PGD_SIZE); memset(tramp_pg_dir, 0, PGD_SIZE);
__create_pgd_mapping(tramp_pg_dir, pa_start, TRAMP_VALIAS, PAGE_SIZE, __create_pgd_mapping(tramp_pg_dir, pa_start, TRAMP_VALIAS, PAGE_SIZE,
prot, pgd_pgtable_alloc, 0); prot, __pgd_pgtable_alloc, 0);
/* Map both the text and data into the kernel page table */ /* Map both the text and data into the kernel page table */
__set_fixmap(FIX_ENTRY_TRAMP_TEXT, pa_start, prot); __set_fixmap(FIX_ENTRY_TRAMP_TEXT, pa_start, prot);
...@@ -1055,7 +1074,7 @@ int arch_add_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap, ...@@ -1055,7 +1074,7 @@ int arch_add_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap,
flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS; flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
__create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start), __create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
size, PAGE_KERNEL, pgd_pgtable_alloc, flags); size, PAGE_KERNEL, __pgd_pgtable_alloc, flags);
return __add_pages(nid, start >> PAGE_SHIFT, size >> PAGE_SHIFT, return __add_pages(nid, start >> PAGE_SHIFT, size >> PAGE_SHIFT,
altmap, want_memblock); altmap, want_memblock);
......
...@@ -200,7 +200,7 @@ void __init setup_per_cpu_areas(void) ...@@ -200,7 +200,7 @@ void __init setup_per_cpu_areas(void)
#endif #endif
/** /**
* numa_add_memblk - Set node id to memblk * numa_add_memblk() - Set node id to memblk
* @nid: NUMA node ID of the new memblk * @nid: NUMA node ID of the new memblk
* @start: Start address of the new memblk * @start: Start address of the new memblk
* @end: End address of the new memblk * @end: End address of the new memblk
...@@ -223,7 +223,7 @@ int __init numa_add_memblk(int nid, u64 start, u64 end) ...@@ -223,7 +223,7 @@ int __init numa_add_memblk(int nid, u64 start, u64 end)
return ret; return ret;
} }
/** /*
* Initialize NODE_DATA for a node on the local memory * Initialize NODE_DATA for a node on the local memory
*/ */
static void __init setup_node_data(int nid, u64 start_pfn, u64 end_pfn) static void __init setup_node_data(int nid, u64 start_pfn, u64 end_pfn)
...@@ -257,7 +257,7 @@ static void __init setup_node_data(int nid, u64 start_pfn, u64 end_pfn) ...@@ -257,7 +257,7 @@ static void __init setup_node_data(int nid, u64 start_pfn, u64 end_pfn)
NODE_DATA(nid)->node_spanned_pages = end_pfn - start_pfn; NODE_DATA(nid)->node_spanned_pages = end_pfn - start_pfn;
} }
/** /*
* numa_free_distance * numa_free_distance
* *
* The current table is freed. * The current table is freed.
...@@ -277,10 +277,8 @@ void __init numa_free_distance(void) ...@@ -277,10 +277,8 @@ void __init numa_free_distance(void)
numa_distance = NULL; numa_distance = NULL;
} }
/** /*
*
* Create a new NUMA distance table. * Create a new NUMA distance table.
*
*/ */
static int __init numa_alloc_distance(void) static int __init numa_alloc_distance(void)
{ {
...@@ -311,7 +309,7 @@ static int __init numa_alloc_distance(void) ...@@ -311,7 +309,7 @@ static int __init numa_alloc_distance(void)
} }
/** /**
* numa_set_distance - Set inter node NUMA distance from node to node. * numa_set_distance() - Set inter node NUMA distance from node to node.
* @from: the 'from' node to set distance * @from: the 'from' node to set distance
* @to: the 'to' node to set distance * @to: the 'to' node to set distance
* @distance: NUMA distance * @distance: NUMA distance
...@@ -321,7 +319,6 @@ static int __init numa_alloc_distance(void) ...@@ -321,7 +319,6 @@ static int __init numa_alloc_distance(void)
* *
* If @from or @to is higher than the highest known node or lower than zero * If @from or @to is higher than the highest known node or lower than zero
* or @distance doesn't make sense, the call is ignored. * or @distance doesn't make sense, the call is ignored.
*
*/ */
void __init numa_set_distance(int from, int to, int distance) void __init numa_set_distance(int from, int to, int distance)
{ {
...@@ -347,7 +344,7 @@ void __init numa_set_distance(int from, int to, int distance) ...@@ -347,7 +344,7 @@ void __init numa_set_distance(int from, int to, int distance)
numa_distance[from * numa_distance_cnt + to] = distance; numa_distance[from * numa_distance_cnt + to] = distance;
} }
/** /*
* Return NUMA distance @from to @to * Return NUMA distance @from to @to
*/ */
int __node_distance(int from, int to) int __node_distance(int from, int to)
...@@ -422,13 +419,15 @@ static int __init numa_init(int (*init_func)(void)) ...@@ -422,13 +419,15 @@ static int __init numa_init(int (*init_func)(void))
} }
/** /**
* dummy_numa_init - Fallback dummy NUMA init * dummy_numa_init() - Fallback dummy NUMA init
* *
* Used if there's no underlying NUMA architecture, NUMA initialization * Used if there's no underlying NUMA architecture, NUMA initialization
* fails, or NUMA is disabled on the command line. * fails, or NUMA is disabled on the command line.
* *
* Must online at least one node (node 0) and add memory blocks that cover all * Must online at least one node (node 0) and add memory blocks that cover all
* allowed memory. It is unlikely that this function fails. * allowed memory. It is unlikely that this function fails.
*
* Return: 0 on success, -errno on failure.
*/ */
static int __init dummy_numa_init(void) static int __init dummy_numa_init(void)
{ {
...@@ -454,7 +453,7 @@ static int __init dummy_numa_init(void) ...@@ -454,7 +453,7 @@ static int __init dummy_numa_init(void)
} }
/** /**
* arm64_numa_init - Initialize NUMA * arm64_numa_init() - Initialize NUMA
* *
* Try each configured NUMA initialization method until one succeeds. The * Try each configured NUMA initialization method until one succeeds. The
* last fallback is dummy single node config encomapssing whole memory. * last fallback is dummy single node config encomapssing whole memory.
......
...@@ -65,24 +65,25 @@ ENTRY(cpu_do_suspend) ...@@ -65,24 +65,25 @@ ENTRY(cpu_do_suspend)
mrs x2, tpidr_el0 mrs x2, tpidr_el0
mrs x3, tpidrro_el0 mrs x3, tpidrro_el0
mrs x4, contextidr_el1 mrs x4, contextidr_el1
mrs x5, cpacr_el1 mrs x5, osdlr_el1
mrs x6, tcr_el1 mrs x6, cpacr_el1
mrs x7, vbar_el1 mrs x7, tcr_el1
mrs x8, mdscr_el1 mrs x8, vbar_el1
mrs x9, oslsr_el1 mrs x9, mdscr_el1
mrs x10, sctlr_el1 mrs x10, oslsr_el1
mrs x11, sctlr_el1
alternative_if_not ARM64_HAS_VIRT_HOST_EXTN alternative_if_not ARM64_HAS_VIRT_HOST_EXTN
mrs x11, tpidr_el1 mrs x12, tpidr_el1
alternative_else alternative_else
mrs x11, tpidr_el2 mrs x12, tpidr_el2
alternative_endif alternative_endif
mrs x12, sp_el0 mrs x13, sp_el0
stp x2, x3, [x0] stp x2, x3, [x0]
stp x4, xzr, [x0, #16] stp x4, x5, [x0, #16]
stp x5, x6, [x0, #32] stp x6, x7, [x0, #32]
stp x7, x8, [x0, #48] stp x8, x9, [x0, #48]
stp x9, x10, [x0, #64] stp x10, x11, [x0, #64]
stp x11, x12, [x0, #80] stp x12, x13, [x0, #80]
ret ret
ENDPROC(cpu_do_suspend) ENDPROC(cpu_do_suspend)
...@@ -105,8 +106,8 @@ ENTRY(cpu_do_resume) ...@@ -105,8 +106,8 @@ ENTRY(cpu_do_resume)
msr cpacr_el1, x6 msr cpacr_el1, x6
/* Don't change t0sz here, mask those bits when restoring */ /* Don't change t0sz here, mask those bits when restoring */
mrs x5, tcr_el1 mrs x7, tcr_el1
bfi x8, x5, TCR_T0SZ_OFFSET, TCR_TxSZ_WIDTH bfi x8, x7, TCR_T0SZ_OFFSET, TCR_TxSZ_WIDTH
msr tcr_el1, x8 msr tcr_el1, x8
msr vbar_el1, x9 msr vbar_el1, x9
...@@ -130,6 +131,7 @@ alternative_endif ...@@ -130,6 +131,7 @@ alternative_endif
/* /*
* Restore oslsr_el1 by writing oslar_el1 * Restore oslsr_el1 by writing oslar_el1
*/ */
msr osdlr_el1, x5
ubfx x11, x11, #1, #1 ubfx x11, x11, #1, #1
msr oslar_el1, x11 msr oslar_el1, x11
reset_pmuserenr_el0 x0 // Disable PMU access from EL0 reset_pmuserenr_el0 x0 // Disable PMU access from EL0
......
...@@ -356,7 +356,8 @@ static struct acpi_iort_node *iort_node_get_id(struct acpi_iort_node *node, ...@@ -356,7 +356,8 @@ static struct acpi_iort_node *iort_node_get_id(struct acpi_iort_node *node,
if (map->flags & ACPI_IORT_ID_SINGLE_MAPPING) { if (map->flags & ACPI_IORT_ID_SINGLE_MAPPING) {
if (node->type == ACPI_IORT_NODE_NAMED_COMPONENT || if (node->type == ACPI_IORT_NODE_NAMED_COMPONENT ||
node->type == ACPI_IORT_NODE_PCI_ROOT_COMPLEX || node->type == ACPI_IORT_NODE_PCI_ROOT_COMPLEX ||
node->type == ACPI_IORT_NODE_SMMU_V3) { node->type == ACPI_IORT_NODE_SMMU_V3 ||
node->type == ACPI_IORT_NODE_PMCG) {
*id_out = map->output_base; *id_out = map->output_base;
return parent; return parent;
} }
...@@ -394,6 +395,8 @@ static int iort_get_id_mapping_index(struct acpi_iort_node *node) ...@@ -394,6 +395,8 @@ static int iort_get_id_mapping_index(struct acpi_iort_node *node)
} }
return smmu->id_mapping_index; return smmu->id_mapping_index;
case ACPI_IORT_NODE_PMCG:
return 0;
default: default:
return -EINVAL; return -EINVAL;
} }
...@@ -1218,32 +1221,47 @@ static void __init arm_smmu_v3_init_resources(struct resource *res, ...@@ -1218,32 +1221,47 @@ static void __init arm_smmu_v3_init_resources(struct resource *res,
} }
} }
static bool __init arm_smmu_v3_is_coherent(struct acpi_iort_node *node) static void __init arm_smmu_v3_dma_configure(struct device *dev,
struct acpi_iort_node *node)
{ {
struct acpi_iort_smmu_v3 *smmu; struct acpi_iort_smmu_v3 *smmu;
enum dev_dma_attr attr;
/* Retrieve SMMUv3 specific data */ /* Retrieve SMMUv3 specific data */
smmu = (struct acpi_iort_smmu_v3 *)node->node_data; smmu = (struct acpi_iort_smmu_v3 *)node->node_data;
return smmu->flags & ACPI_IORT_SMMU_V3_COHACC_OVERRIDE; attr = (smmu->flags & ACPI_IORT_SMMU_V3_COHACC_OVERRIDE) ?
DEV_DMA_COHERENT : DEV_DMA_NON_COHERENT;
/* We expect the dma masks to be equivalent for all SMMUv3 set-ups */
dev->dma_mask = &dev->coherent_dma_mask;
/* Configure DMA for the page table walker */
acpi_dma_configure(dev, attr);
} }
#if defined(CONFIG_ACPI_NUMA) #if defined(CONFIG_ACPI_NUMA)
/* /*
* set numa proximity domain for smmuv3 device * set numa proximity domain for smmuv3 device
*/ */
static void __init arm_smmu_v3_set_proximity(struct device *dev, static int __init arm_smmu_v3_set_proximity(struct device *dev,
struct acpi_iort_node *node) struct acpi_iort_node *node)
{ {
struct acpi_iort_smmu_v3 *smmu; struct acpi_iort_smmu_v3 *smmu;
smmu = (struct acpi_iort_smmu_v3 *)node->node_data; smmu = (struct acpi_iort_smmu_v3 *)node->node_data;
if (smmu->flags & ACPI_IORT_SMMU_V3_PXM_VALID) { if (smmu->flags & ACPI_IORT_SMMU_V3_PXM_VALID) {
set_dev_node(dev, acpi_map_pxm_to_node(smmu->pxm)); int node = acpi_map_pxm_to_node(smmu->pxm);
if (node != NUMA_NO_NODE && !node_online(node))
return -EINVAL;
set_dev_node(dev, node);
pr_info("SMMU-v3[%llx] Mapped to Proximity domain %d\n", pr_info("SMMU-v3[%llx] Mapped to Proximity domain %d\n",
smmu->base_address, smmu->base_address,
smmu->pxm); smmu->pxm);
} }
return 0;
} }
#else #else
#define arm_smmu_v3_set_proximity NULL #define arm_smmu_v3_set_proximity NULL
...@@ -1301,30 +1319,96 @@ static void __init arm_smmu_init_resources(struct resource *res, ...@@ -1301,30 +1319,96 @@ static void __init arm_smmu_init_resources(struct resource *res,
} }
} }
static bool __init arm_smmu_is_coherent(struct acpi_iort_node *node) static void __init arm_smmu_dma_configure(struct device *dev,
struct acpi_iort_node *node)
{ {
struct acpi_iort_smmu *smmu; struct acpi_iort_smmu *smmu;
enum dev_dma_attr attr;
/* Retrieve SMMU specific data */ /* Retrieve SMMU specific data */
smmu = (struct acpi_iort_smmu *)node->node_data; smmu = (struct acpi_iort_smmu *)node->node_data;
return smmu->flags & ACPI_IORT_SMMU_COHERENT_WALK; attr = (smmu->flags & ACPI_IORT_SMMU_COHERENT_WALK) ?
DEV_DMA_COHERENT : DEV_DMA_NON_COHERENT;
/* We expect the dma masks to be equivalent for SMMU set-ups */
dev->dma_mask = &dev->coherent_dma_mask;
/* Configure DMA for the page table walker */
acpi_dma_configure(dev, attr);
}
static int __init arm_smmu_v3_pmcg_count_resources(struct acpi_iort_node *node)
{
struct acpi_iort_pmcg *pmcg;
/* Retrieve PMCG specific data */
pmcg = (struct acpi_iort_pmcg *)node->node_data;
/*
* There are always 2 memory resources.
* If the overflow_gsiv is present then add that for a total of 3.
*/
return pmcg->overflow_gsiv ? 3 : 2;
}
static void __init arm_smmu_v3_pmcg_init_resources(struct resource *res,
struct acpi_iort_node *node)
{
struct acpi_iort_pmcg *pmcg;
/* Retrieve PMCG specific data */
pmcg = (struct acpi_iort_pmcg *)node->node_data;
res[0].start = pmcg->page0_base_address;
res[0].end = pmcg->page0_base_address + SZ_4K - 1;
res[0].flags = IORESOURCE_MEM;
res[1].start = pmcg->page1_base_address;
res[1].end = pmcg->page1_base_address + SZ_4K - 1;
res[1].flags = IORESOURCE_MEM;
if (pmcg->overflow_gsiv)
acpi_iort_register_irq(pmcg->overflow_gsiv, "overflow",
ACPI_EDGE_SENSITIVE, &res[2]);
}
static struct acpi_platform_list pmcg_plat_info[] __initdata = {
/* HiSilicon Hip08 Platform */
{"HISI ", "HIP08 ", 0, ACPI_SIG_IORT, greater_than_or_equal,
"Erratum #162001800", IORT_SMMU_V3_PMCG_HISI_HIP08},
{ }
};
static int __init arm_smmu_v3_pmcg_add_platdata(struct platform_device *pdev)
{
u32 model;
int idx;
idx = acpi_match_platform_list(pmcg_plat_info);
if (idx >= 0)
model = pmcg_plat_info[idx].data;
else
model = IORT_SMMU_V3_PMCG_GENERIC;
return platform_device_add_data(pdev, &model, sizeof(model));
} }
struct iort_dev_config { struct iort_dev_config {
const char *name; const char *name;
int (*dev_init)(struct acpi_iort_node *node); int (*dev_init)(struct acpi_iort_node *node);
bool (*dev_is_coherent)(struct acpi_iort_node *node); void (*dev_dma_configure)(struct device *dev,
struct acpi_iort_node *node);
int (*dev_count_resources)(struct acpi_iort_node *node); int (*dev_count_resources)(struct acpi_iort_node *node);
void (*dev_init_resources)(struct resource *res, void (*dev_init_resources)(struct resource *res,
struct acpi_iort_node *node); struct acpi_iort_node *node);
void (*dev_set_proximity)(struct device *dev, int (*dev_set_proximity)(struct device *dev,
struct acpi_iort_node *node); struct acpi_iort_node *node);
int (*dev_add_platdata)(struct platform_device *pdev);
}; };
static const struct iort_dev_config iort_arm_smmu_v3_cfg __initconst = { static const struct iort_dev_config iort_arm_smmu_v3_cfg __initconst = {
.name = "arm-smmu-v3", .name = "arm-smmu-v3",
.dev_is_coherent = arm_smmu_v3_is_coherent, .dev_dma_configure = arm_smmu_v3_dma_configure,
.dev_count_resources = arm_smmu_v3_count_resources, .dev_count_resources = arm_smmu_v3_count_resources,
.dev_init_resources = arm_smmu_v3_init_resources, .dev_init_resources = arm_smmu_v3_init_resources,
.dev_set_proximity = arm_smmu_v3_set_proximity, .dev_set_proximity = arm_smmu_v3_set_proximity,
...@@ -1332,9 +1416,16 @@ static const struct iort_dev_config iort_arm_smmu_v3_cfg __initconst = { ...@@ -1332,9 +1416,16 @@ static const struct iort_dev_config iort_arm_smmu_v3_cfg __initconst = {
static const struct iort_dev_config iort_arm_smmu_cfg __initconst = { static const struct iort_dev_config iort_arm_smmu_cfg __initconst = {
.name = "arm-smmu", .name = "arm-smmu",
.dev_is_coherent = arm_smmu_is_coherent, .dev_dma_configure = arm_smmu_dma_configure,
.dev_count_resources = arm_smmu_count_resources, .dev_count_resources = arm_smmu_count_resources,
.dev_init_resources = arm_smmu_init_resources .dev_init_resources = arm_smmu_init_resources,
};
static const struct iort_dev_config iort_arm_smmu_v3_pmcg_cfg __initconst = {
.name = "arm-smmu-v3-pmcg",
.dev_count_resources = arm_smmu_v3_pmcg_count_resources,
.dev_init_resources = arm_smmu_v3_pmcg_init_resources,
.dev_add_platdata = arm_smmu_v3_pmcg_add_platdata,
}; };
static __init const struct iort_dev_config *iort_get_dev_cfg( static __init const struct iort_dev_config *iort_get_dev_cfg(
...@@ -1345,6 +1436,8 @@ static __init const struct iort_dev_config *iort_get_dev_cfg( ...@@ -1345,6 +1436,8 @@ static __init const struct iort_dev_config *iort_get_dev_cfg(
return &iort_arm_smmu_v3_cfg; return &iort_arm_smmu_v3_cfg;
case ACPI_IORT_NODE_SMMU: case ACPI_IORT_NODE_SMMU:
return &iort_arm_smmu_cfg; return &iort_arm_smmu_cfg;
case ACPI_IORT_NODE_PMCG:
return &iort_arm_smmu_v3_pmcg_cfg;
default: default:
return NULL; return NULL;
} }
...@@ -1362,15 +1455,17 @@ static int __init iort_add_platform_device(struct acpi_iort_node *node, ...@@ -1362,15 +1455,17 @@ static int __init iort_add_platform_device(struct acpi_iort_node *node,
struct fwnode_handle *fwnode; struct fwnode_handle *fwnode;
struct platform_device *pdev; struct platform_device *pdev;
struct resource *r; struct resource *r;
enum dev_dma_attr attr;
int ret, count; int ret, count;
pdev = platform_device_alloc(ops->name, PLATFORM_DEVID_AUTO); pdev = platform_device_alloc(ops->name, PLATFORM_DEVID_AUTO);
if (!pdev) if (!pdev)
return -ENOMEM; return -ENOMEM;
if (ops->dev_set_proximity) if (ops->dev_set_proximity) {
ops->dev_set_proximity(&pdev->dev, node); ret = ops->dev_set_proximity(&pdev->dev, node);
if (ret)
goto dev_put;
}
count = ops->dev_count_resources(node); count = ops->dev_count_resources(node);
...@@ -1393,19 +1488,19 @@ static int __init iort_add_platform_device(struct acpi_iort_node *node, ...@@ -1393,19 +1488,19 @@ static int __init iort_add_platform_device(struct acpi_iort_node *node,
goto dev_put; goto dev_put;
/* /*
* Add a copy of IORT node pointer to platform_data to * Platform devices based on PMCG nodes uses platform_data to
* be used to retrieve IORT data information. * pass the hardware model info to the driver. For others, add
* a copy of IORT node pointer to platform_data to be used to
* retrieve IORT data information.
*/ */
if (ops->dev_add_platdata)
ret = ops->dev_add_platdata(pdev);
else
ret = platform_device_add_data(pdev, &node, sizeof(node)); ret = platform_device_add_data(pdev, &node, sizeof(node));
if (ret) if (ret)
goto dev_put; goto dev_put;
/*
* We expect the dma masks to be equivalent for
* all SMMUs set-ups
*/
pdev->dev.dma_mask = &pdev->dev.coherent_dma_mask;
fwnode = iort_get_fwnode(node); fwnode = iort_get_fwnode(node);
if (!fwnode) { if (!fwnode) {
...@@ -1415,11 +1510,8 @@ static int __init iort_add_platform_device(struct acpi_iort_node *node, ...@@ -1415,11 +1510,8 @@ static int __init iort_add_platform_device(struct acpi_iort_node *node,
pdev->dev.fwnode = fwnode; pdev->dev.fwnode = fwnode;
attr = ops->dev_is_coherent && ops->dev_is_coherent(node) ? if (ops->dev_dma_configure)
DEV_DMA_COHERENT : DEV_DMA_NON_COHERENT; ops->dev_dma_configure(&pdev->dev, node);
/* Configure DMA for the page table walker */
acpi_dma_configure(&pdev->dev, attr);
iort_set_device_domain(&pdev->dev, node); iort_set_device_domain(&pdev->dev, node);
......
...@@ -149,6 +149,26 @@ u32 arch_timer_reg_read(int access, enum arch_timer_reg reg, ...@@ -149,6 +149,26 @@ u32 arch_timer_reg_read(int access, enum arch_timer_reg reg,
return val; return val;
} }
static u64 arch_counter_get_cntpct_stable(void)
{
return __arch_counter_get_cntpct_stable();
}
static u64 arch_counter_get_cntpct(void)
{
return __arch_counter_get_cntpct();
}
static u64 arch_counter_get_cntvct_stable(void)
{
return __arch_counter_get_cntvct_stable();
}
static u64 arch_counter_get_cntvct(void)
{
return __arch_counter_get_cntvct();
}
/* /*
* Default to cp15 based access because arm64 uses this function for * Default to cp15 based access because arm64 uses this function for
* sched_clock() before DT is probed and the cp15 method is guaranteed * sched_clock() before DT is probed and the cp15 method is guaranteed
...@@ -316,13 +336,6 @@ static u64 notrace arm64_858921_read_cntvct_el0(void) ...@@ -316,13 +336,6 @@ static u64 notrace arm64_858921_read_cntvct_el0(void)
} }
#endif #endif
#ifdef CONFIG_ARM64_ERRATUM_1188873
static u64 notrace arm64_1188873_read_cntvct_el0(void)
{
return read_sysreg(cntvct_el0);
}
#endif
#ifdef CONFIG_SUN50I_ERRATUM_UNKNOWN1 #ifdef CONFIG_SUN50I_ERRATUM_UNKNOWN1
/* /*
* The low bits of the counter registers are indeterminate while bit 10 or * The low bits of the counter registers are indeterminate while bit 10 or
...@@ -369,8 +382,7 @@ static u32 notrace sun50i_a64_read_cntv_tval_el0(void) ...@@ -369,8 +382,7 @@ static u32 notrace sun50i_a64_read_cntv_tval_el0(void)
DEFINE_PER_CPU(const struct arch_timer_erratum_workaround *, timer_unstable_counter_workaround); DEFINE_PER_CPU(const struct arch_timer_erratum_workaround *, timer_unstable_counter_workaround);
EXPORT_SYMBOL_GPL(timer_unstable_counter_workaround); EXPORT_SYMBOL_GPL(timer_unstable_counter_workaround);
DEFINE_STATIC_KEY_FALSE(arch_timer_read_ool_enabled); static atomic_t timer_unstable_counter_workaround_in_use = ATOMIC_INIT(0);
EXPORT_SYMBOL_GPL(arch_timer_read_ool_enabled);
static void erratum_set_next_event_tval_generic(const int access, unsigned long evt, static void erratum_set_next_event_tval_generic(const int access, unsigned long evt,
struct clock_event_device *clk) struct clock_event_device *clk)
...@@ -454,14 +466,6 @@ static const struct arch_timer_erratum_workaround ool_workarounds[] = { ...@@ -454,14 +466,6 @@ static const struct arch_timer_erratum_workaround ool_workarounds[] = {
.read_cntvct_el0 = arm64_858921_read_cntvct_el0, .read_cntvct_el0 = arm64_858921_read_cntvct_el0,
}, },
#endif #endif
#ifdef CONFIG_ARM64_ERRATUM_1188873
{
.match_type = ate_match_local_cap_id,
.id = (void *)ARM64_WORKAROUND_1188873,
.desc = "ARM erratum 1188873",
.read_cntvct_el0 = arm64_1188873_read_cntvct_el0,
},
#endif
#ifdef CONFIG_SUN50I_ERRATUM_UNKNOWN1 #ifdef CONFIG_SUN50I_ERRATUM_UNKNOWN1
{ {
.match_type = ate_match_dt, .match_type = ate_match_dt,
...@@ -549,11 +553,8 @@ void arch_timer_enable_workaround(const struct arch_timer_erratum_workaround *wa ...@@ -549,11 +553,8 @@ void arch_timer_enable_workaround(const struct arch_timer_erratum_workaround *wa
per_cpu(timer_unstable_counter_workaround, i) = wa; per_cpu(timer_unstable_counter_workaround, i) = wa;
} }
-	/*
-	 * Use the locked version, as we're called from the CPU
-	 * hotplug framework. Otherwise, we end-up in deadlock-land.
-	 */
-	static_branch_enable_cpuslocked(&arch_timer_read_ool_enabled);
+	if (wa->read_cntvct_el0 || wa->read_cntpct_el0)
+		atomic_set(&timer_unstable_counter_workaround_in_use, 1);
/* /*
* Don't use the vdso fastpath if errata require using the * Don't use the vdso fastpath if errata require using the
...@@ -570,7 +571,7 @@ void arch_timer_enable_workaround(const struct arch_timer_erratum_workaround *wa ...@@ -570,7 +571,7 @@ void arch_timer_enable_workaround(const struct arch_timer_erratum_workaround *wa
static void arch_timer_check_ool_workaround(enum arch_timer_erratum_match_type type, static void arch_timer_check_ool_workaround(enum arch_timer_erratum_match_type type,
void *arg) void *arg)
{ {
const struct arch_timer_erratum_workaround *wa; const struct arch_timer_erratum_workaround *wa, *__wa;
ate_match_fn_t match_fn = NULL; ate_match_fn_t match_fn = NULL;
bool local = false; bool local = false;
...@@ -594,8 +595,6 @@ static void arch_timer_check_ool_workaround(enum arch_timer_erratum_match_type t ...@@ -594,8 +595,6 @@ static void arch_timer_check_ool_workaround(enum arch_timer_erratum_match_type t
if (!wa) if (!wa)
return; return;
if (needs_unstable_timer_counter_workaround()) {
const struct arch_timer_erratum_workaround *__wa;
__wa = __this_cpu_read(timer_unstable_counter_workaround); __wa = __this_cpu_read(timer_unstable_counter_workaround);
if (__wa && wa != __wa) if (__wa && wa != __wa)
pr_warn("Can't enable workaround for %s (clashes with %s\n)", pr_warn("Can't enable workaround for %s (clashes with %s\n)",
...@@ -603,44 +602,25 @@ static void arch_timer_check_ool_workaround(enum arch_timer_erratum_match_type t ...@@ -603,44 +602,25 @@ static void arch_timer_check_ool_workaround(enum arch_timer_erratum_match_type t
if (__wa) if (__wa)
return; return;
}
arch_timer_enable_workaround(wa, local); arch_timer_enable_workaround(wa, local);
pr_info("Enabling %s workaround for %s\n", pr_info("Enabling %s workaround for %s\n",
local ? "local" : "global", wa->desc); local ? "local" : "global", wa->desc);
} }
#define erratum_handler(fn, r, ...) \
({ \
bool __val; \
if (needs_unstable_timer_counter_workaround()) { \
const struct arch_timer_erratum_workaround *__wa; \
__wa = __this_cpu_read(timer_unstable_counter_workaround); \
if (__wa && __wa->fn) { \
r = __wa->fn(__VA_ARGS__); \
__val = true; \
} else { \
__val = false; \
} \
} else { \
__val = false; \
} \
__val; \
})
 static bool arch_timer_this_cpu_has_cntvct_wa(void)
 {
-	const struct arch_timer_erratum_workaround *wa;
+	return has_erratum_handler(read_cntvct_el0);
+}

-	wa = __this_cpu_read(timer_unstable_counter_workaround);
-	return wa && wa->read_cntvct_el0;
+static bool arch_timer_counter_has_wa(void)
+{
+	return atomic_read(&timer_unstable_counter_workaround_in_use);
 }
 #else
 #define arch_timer_check_ool_workaround(t,a)		do { } while(0)
-#define erratum_set_next_event_tval_virt(...)		({BUG(); 0;})
-#define erratum_set_next_event_tval_phys(...)		({BUG(); 0;})
-#define erratum_handler(fn, r, ...)			({false;})
 #define arch_timer_this_cpu_has_cntvct_wa()		({false;})
+#define arch_timer_counter_has_wa()			({false;})
 #endif /* CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND */
static __always_inline irqreturn_t timer_handler(const int access, static __always_inline irqreturn_t timer_handler(const int access,
...@@ -733,11 +713,6 @@ static __always_inline void set_next_event(const int access, unsigned long evt, ...@@ -733,11 +713,6 @@ static __always_inline void set_next_event(const int access, unsigned long evt,
static int arch_timer_set_next_event_virt(unsigned long evt, static int arch_timer_set_next_event_virt(unsigned long evt,
struct clock_event_device *clk) struct clock_event_device *clk)
{ {
int ret;
if (erratum_handler(set_next_event_virt, ret, evt, clk))
return ret;
set_next_event(ARCH_TIMER_VIRT_ACCESS, evt, clk); set_next_event(ARCH_TIMER_VIRT_ACCESS, evt, clk);
return 0; return 0;
} }
...@@ -745,11 +720,6 @@ static int arch_timer_set_next_event_virt(unsigned long evt, ...@@ -745,11 +720,6 @@ static int arch_timer_set_next_event_virt(unsigned long evt,
static int arch_timer_set_next_event_phys(unsigned long evt, static int arch_timer_set_next_event_phys(unsigned long evt,
struct clock_event_device *clk) struct clock_event_device *clk)
{ {
int ret;
if (erratum_handler(set_next_event_phys, ret, evt, clk))
return ret;
set_next_event(ARCH_TIMER_PHYS_ACCESS, evt, clk); set_next_event(ARCH_TIMER_PHYS_ACCESS, evt, clk);
return 0; return 0;
} }
...@@ -774,6 +744,10 @@ static void __arch_timer_setup(unsigned type, ...@@ -774,6 +744,10 @@ static void __arch_timer_setup(unsigned type,
clk->features = CLOCK_EVT_FEAT_ONESHOT; clk->features = CLOCK_EVT_FEAT_ONESHOT;
if (type == ARCH_TIMER_TYPE_CP15) { if (type == ARCH_TIMER_TYPE_CP15) {
typeof(clk->set_next_event) sne;
arch_timer_check_ool_workaround(ate_match_local_cap_id, NULL);
if (arch_timer_c3stop) if (arch_timer_c3stop)
clk->features |= CLOCK_EVT_FEAT_C3STOP; clk->features |= CLOCK_EVT_FEAT_C3STOP;
clk->name = "arch_sys_timer"; clk->name = "arch_sys_timer";
...@@ -784,20 +758,20 @@ static void __arch_timer_setup(unsigned type, ...@@ -784,20 +758,20 @@ static void __arch_timer_setup(unsigned type,
case ARCH_TIMER_VIRT_PPI: case ARCH_TIMER_VIRT_PPI:
clk->set_state_shutdown = arch_timer_shutdown_virt; clk->set_state_shutdown = arch_timer_shutdown_virt;
clk->set_state_oneshot_stopped = arch_timer_shutdown_virt; clk->set_state_oneshot_stopped = arch_timer_shutdown_virt;
clk->set_next_event = arch_timer_set_next_event_virt; sne = erratum_handler(set_next_event_virt);
break; break;
case ARCH_TIMER_PHYS_SECURE_PPI: case ARCH_TIMER_PHYS_SECURE_PPI:
case ARCH_TIMER_PHYS_NONSECURE_PPI: case ARCH_TIMER_PHYS_NONSECURE_PPI:
case ARCH_TIMER_HYP_PPI: case ARCH_TIMER_HYP_PPI:
clk->set_state_shutdown = arch_timer_shutdown_phys; clk->set_state_shutdown = arch_timer_shutdown_phys;
clk->set_state_oneshot_stopped = arch_timer_shutdown_phys; clk->set_state_oneshot_stopped = arch_timer_shutdown_phys;
clk->set_next_event = arch_timer_set_next_event_phys; sne = erratum_handler(set_next_event_phys);
break; break;
default: default:
BUG(); BUG();
} }
arch_timer_check_ool_workaround(ate_match_local_cap_id, NULL); clk->set_next_event = sne;
} else { } else {
clk->features |= CLOCK_EVT_FEAT_DYNIRQ; clk->features |= CLOCK_EVT_FEAT_DYNIRQ;
clk->name = "arch_mem_timer"; clk->name = "arch_mem_timer";
...@@ -830,7 +804,11 @@ static void arch_timer_evtstrm_enable(int divider) ...@@ -830,7 +804,11 @@ static void arch_timer_evtstrm_enable(int divider)
cntkctl |= (divider << ARCH_TIMER_EVT_TRIGGER_SHIFT) cntkctl |= (divider << ARCH_TIMER_EVT_TRIGGER_SHIFT)
| ARCH_TIMER_VIRT_EVT_EN; | ARCH_TIMER_VIRT_EVT_EN;
arch_timer_set_cntkctl(cntkctl); arch_timer_set_cntkctl(cntkctl);
#ifdef CONFIG_ARM64
cpu_set_named_feature(EVTSTRM);
#else
elf_hwcap |= HWCAP_EVTSTRM; elf_hwcap |= HWCAP_EVTSTRM;
#endif
#ifdef CONFIG_COMPAT #ifdef CONFIG_COMPAT
compat_elf_hwcap |= COMPAT_HWCAP_EVTSTRM; compat_elf_hwcap |= COMPAT_HWCAP_EVTSTRM;
#endif #endif
...@@ -995,12 +973,22 @@ static void __init arch_counter_register(unsigned type) ...@@ -995,12 +973,22 @@ static void __init arch_counter_register(unsigned type)
/* Register the CP15 based counter if we have one */ /* Register the CP15 based counter if we have one */
if (type & ARCH_TIMER_TYPE_CP15) { if (type & ARCH_TIMER_TYPE_CP15) {
u64 (*rd)(void);
if ((IS_ENABLED(CONFIG_ARM64) && !is_hyp_mode_available()) || if ((IS_ENABLED(CONFIG_ARM64) && !is_hyp_mode_available()) ||
arch_timer_uses_ppi == ARCH_TIMER_VIRT_PPI) arch_timer_uses_ppi == ARCH_TIMER_VIRT_PPI) {
arch_timer_read_counter = arch_counter_get_cntvct; if (arch_timer_counter_has_wa())
rd = arch_counter_get_cntvct_stable;
else else
arch_timer_read_counter = arch_counter_get_cntpct; rd = arch_counter_get_cntvct;
} else {
if (arch_timer_counter_has_wa())
rd = arch_counter_get_cntpct_stable;
else
rd = arch_counter_get_cntpct;
}
arch_timer_read_counter = rd;
clocksource_counter.archdata.vdso_direct = vdso_default; clocksource_counter.archdata.vdso_direct = vdso_default;
} else { } else {
arch_timer_read_counter = arch_counter_get_cntvct_mem; arch_timer_read_counter = arch_counter_get_cntvct_mem;
...@@ -1052,7 +1040,11 @@ static int arch_timer_cpu_pm_notify(struct notifier_block *self, ...@@ -1052,7 +1040,11 @@ static int arch_timer_cpu_pm_notify(struct notifier_block *self,
} else if (action == CPU_PM_ENTER_FAILED || action == CPU_PM_EXIT) { } else if (action == CPU_PM_ENTER_FAILED || action == CPU_PM_EXIT) {
arch_timer_set_cntkctl(__this_cpu_read(saved_cntkctl)); arch_timer_set_cntkctl(__this_cpu_read(saved_cntkctl));
#ifdef CONFIG_ARM64
if (cpu_have_named_feature(EVTSTRM))
#else
if (elf_hwcap & HWCAP_EVTSTRM) if (elf_hwcap & HWCAP_EVTSTRM)
#endif
cpumask_set_cpu(smp_processor_id(), &evtstrm_available); cpumask_set_cpu(smp_processor_id(), &evtstrm_available);
} }
return NOTIFY_OK; return NOTIFY_OK;
......
...@@ -165,6 +165,7 @@ static int invoke_sdei_fn(unsigned long function_id, unsigned long arg0, ...@@ -165,6 +165,7 @@ static int invoke_sdei_fn(unsigned long function_id, unsigned long arg0,
return err; return err;
} }
NOKPROBE_SYMBOL(invoke_sdei_fn);
static struct sdei_event *sdei_event_find(u32 event_num) static struct sdei_event *sdei_event_find(u32 event_num)
{ {
...@@ -879,6 +880,7 @@ static void sdei_smccc_smc(unsigned long function_id, ...@@ -879,6 +880,7 @@ static void sdei_smccc_smc(unsigned long function_id,
{ {
arm_smccc_smc(function_id, arg0, arg1, arg2, arg3, arg4, 0, 0, res); arm_smccc_smc(function_id, arg0, arg1, arg2, arg3, arg4, 0, 0, res);
} }
NOKPROBE_SYMBOL(sdei_smccc_smc);
static void sdei_smccc_hvc(unsigned long function_id, static void sdei_smccc_hvc(unsigned long function_id,
unsigned long arg0, unsigned long arg1, unsigned long arg0, unsigned long arg1,
...@@ -887,6 +889,7 @@ static void sdei_smccc_hvc(unsigned long function_id, ...@@ -887,6 +889,7 @@ static void sdei_smccc_hvc(unsigned long function_id,
{ {
arm_smccc_hvc(function_id, arg0, arg1, arg2, arg3, arg4, 0, 0, res); arm_smccc_hvc(function_id, arg0, arg1, arg2, arg3, arg4, 0, 0, res);
} }
NOKPROBE_SYMBOL(sdei_smccc_hvc);
int sdei_register_ghes(struct ghes *ghes, sdei_event_callback *normal_cb, int sdei_register_ghes(struct ghes *ghes, sdei_event_callback *normal_cb,
sdei_event_callback *critical_cb) sdei_event_callback *critical_cb)
......
...@@ -16,9 +16,9 @@ cflags-$(CONFIG_X86) += -m$(BITS) -D__KERNEL__ -O2 \ ...@@ -16,9 +16,9 @@ cflags-$(CONFIG_X86) += -m$(BITS) -D__KERNEL__ -O2 \
# arm64 uses the full KBUILD_CFLAGS so it's necessary to explicitly # arm64 uses the full KBUILD_CFLAGS so it's necessary to explicitly
# disable the stackleak plugin # disable the stackleak plugin
cflags-$(CONFIG_ARM64) := $(subst -pg,,$(KBUILD_CFLAGS)) -fpie \ cflags-$(CONFIG_ARM64) := $(subst $(CC_FLAGS_FTRACE),,$(KBUILD_CFLAGS)) \
$(DISABLE_STACKLEAK_PLUGIN) -fpie $(DISABLE_STACKLEAK_PLUGIN)
cflags-$(CONFIG_ARM) := $(subst -pg,,$(KBUILD_CFLAGS)) \ cflags-$(CONFIG_ARM) := $(subst $(CC_FLAGS_FTRACE),,$(KBUILD_CFLAGS)) \
-fno-builtin -fpic \ -fno-builtin -fpic \
$(call cc-option,-mno-single-pic-base) $(call cc-option,-mno-single-pic-base)
......
...@@ -52,6 +52,15 @@ config ARM_PMU_ACPI ...@@ -52,6 +52,15 @@ config ARM_PMU_ACPI
depends on ARM_PMU && ACPI depends on ARM_PMU && ACPI
def_bool y def_bool y
config ARM_SMMU_V3_PMU
tristate "ARM SMMUv3 Performance Monitors Extension"
depends on ARM64 && ACPI && ARM_SMMU_V3
help
Provides support for the ARM SMMUv3 Performance Monitor Counter
Groups (PMCG), which provide monitoring of transactions passing
through the SMMU and allow the resulting information to be filtered
based on the Stream ID of the corresponding master.
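Counters exposed by a system PMU like this are normally consumed through the perf_event_open() syscall, using the dynamic PMU type published under /sys/bus/event_source/devices/. The sketch below is illustrative only: the PMCG instance name and the event encoding (config=0x1 standing in for a notional "transaction" event) are assumptions that depend on the platform and on the driver's exported event and format attributes.

#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Read the dynamic PMU type from sysfs; the instance name is platform
 * specific (something like "smmuv3_pmcg_100010" is assumed here). */
static int pmu_type(const char *pmu)
{
	char path[256];
	int type = -1;
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/bus/event_source/devices/%s/type", pmu);
	f = fopen(path, "r");
	if (!f)
		return -1;
	if (fscanf(f, "%d", &type) != 1)
		type = -1;
	fclose(f);
	return type;
}

int main(int argc, char **argv)
{
	struct perf_event_attr attr;
	long long count;
	int fd, type;

	type = pmu_type(argc > 1 ? argv[1] : "smmuv3_pmcg_100010");
	if (type < 0) {
		perror("pmu type");
		return 1;
	}

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = type;
	attr.config = 0x1;	/* hypothetical event id for illustration */

	/* System-wide, uncore-style event: pid = -1, one CPU. */
	fd = syscall(__NR_perf_event_open, &attr, -1, 0, -1, 0);
	if (fd < 0) {
		perror("perf_event_open");
		return 1;
	}

	sleep(1);
	if (read(fd, &count, sizeof(count)) == sizeof(count))
		printf("count: %lld\n", count);
	close(fd);
	return 0;
}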
config ARM_DSU_PMU config ARM_DSU_PMU
tristate "ARM DynamIQ Shared Unit (DSU) PMU" tristate "ARM DynamIQ Shared Unit (DSU) PMU"
depends on ARM64 depends on ARM64
......
...@@ -4,6 +4,7 @@ obj-$(CONFIG_ARM_CCN) += arm-ccn.o ...@@ -4,6 +4,7 @@ obj-$(CONFIG_ARM_CCN) += arm-ccn.o
obj-$(CONFIG_ARM_DSU_PMU) += arm_dsu_pmu.o obj-$(CONFIG_ARM_DSU_PMU) += arm_dsu_pmu.o
obj-$(CONFIG_ARM_PMU) += arm_pmu.o arm_pmu_platform.o obj-$(CONFIG_ARM_PMU) += arm_pmu.o arm_pmu_platform.o
obj-$(CONFIG_ARM_PMU_ACPI) += arm_pmu_acpi.o obj-$(CONFIG_ARM_PMU_ACPI) += arm_pmu_acpi.o
obj-$(CONFIG_ARM_SMMU_V3_PMU) += arm_smmuv3_pmu.o
obj-$(CONFIG_HISI_PMU) += hisilicon/ obj-$(CONFIG_HISI_PMU) += hisilicon/
obj-$(CONFIG_QCOM_L2_PMU) += qcom_l2_pmu.o obj-$(CONFIG_QCOM_L2_PMU) += qcom_l2_pmu.o
obj-$(CONFIG_QCOM_L3_PMU) += qcom_l3_pmu.o obj-$(CONFIG_QCOM_L3_PMU) += qcom_l3_pmu.o
......
@@ -1684,21 +1684,24 @@ static int cci_pmu_probe(struct platform_device *pdev)
 	raw_spin_lock_init(&cci_pmu->hw_events.pmu_lock);
 	mutex_init(&cci_pmu->reserve_mutex);
 	atomic_set(&cci_pmu->active_events, 0);
-	cci_pmu->cpu = get_cpu();
-
-	ret = cci_pmu_init(cci_pmu, pdev);
-	if (ret) {
-		put_cpu();
-		return ret;
-	}

+	cci_pmu->cpu = raw_smp_processor_id();
+	g_cci_pmu = cci_pmu;
 	cpuhp_setup_state_nocalls(CPUHP_AP_PERF_ARM_CCI_ONLINE,
 				  "perf/arm/cci:online", NULL,
 				  cci_pmu_offline_cpu);
-	put_cpu();
-	g_cci_pmu = cci_pmu;
+
+	ret = cci_pmu_init(cci_pmu, pdev);
+	if (ret)
+		goto error_pmu_init;
+
 	pr_info("ARM %s PMU driver probed", cci_pmu->model->name);
 	return 0;
+
+error_pmu_init:
+	cpuhp_remove_state(CPUHP_AP_PERF_ARM_CCI_ONLINE);
+	g_cci_pmu = NULL;
+	return ret;
 }
static int cci_pmu_remove(struct platform_device *pdev) static int cci_pmu_remove(struct platform_device *pdev)
......
...@@ -167,7 +167,7 @@ struct arm_ccn_dt { ...@@ -167,7 +167,7 @@ struct arm_ccn_dt {
struct hrtimer hrtimer; struct hrtimer hrtimer;
cpumask_t cpu; unsigned int cpu;
struct hlist_node node; struct hlist_node node;
struct pmu pmu; struct pmu pmu;
...@@ -559,7 +559,7 @@ static ssize_t arm_ccn_pmu_cpumask_show(struct device *dev, ...@@ -559,7 +559,7 @@ static ssize_t arm_ccn_pmu_cpumask_show(struct device *dev,
{ {
struct arm_ccn *ccn = pmu_to_arm_ccn(dev_get_drvdata(dev)); struct arm_ccn *ccn = pmu_to_arm_ccn(dev_get_drvdata(dev));
return cpumap_print_to_pagebuf(true, buf, &ccn->dt.cpu); return cpumap_print_to_pagebuf(true, buf, cpumask_of(ccn->dt.cpu));
} }
static struct device_attribute arm_ccn_pmu_cpumask_attr = static struct device_attribute arm_ccn_pmu_cpumask_attr =
...@@ -759,7 +759,7 @@ static int arm_ccn_pmu_event_init(struct perf_event *event) ...@@ -759,7 +759,7 @@ static int arm_ccn_pmu_event_init(struct perf_event *event)
* mitigate this, we enforce CPU assignment to one, selected * mitigate this, we enforce CPU assignment to one, selected
* processor (the one described in the "cpumask" attribute). * processor (the one described in the "cpumask" attribute).
*/ */
event->cpu = cpumask_first(&ccn->dt.cpu); event->cpu = ccn->dt.cpu;
node_xp = CCN_CONFIG_NODE(event->attr.config); node_xp = CCN_CONFIG_NODE(event->attr.config);
type = CCN_CONFIG_TYPE(event->attr.config); type = CCN_CONFIG_TYPE(event->attr.config);
...@@ -1215,15 +1215,15 @@ static int arm_ccn_pmu_offline_cpu(unsigned int cpu, struct hlist_node *node) ...@@ -1215,15 +1215,15 @@ static int arm_ccn_pmu_offline_cpu(unsigned int cpu, struct hlist_node *node)
struct arm_ccn *ccn = container_of(dt, struct arm_ccn, dt); struct arm_ccn *ccn = container_of(dt, struct arm_ccn, dt);
unsigned int target; unsigned int target;
if (!cpumask_test_and_clear_cpu(cpu, &dt->cpu)) if (cpu != dt->cpu)
return 0; return 0;
target = cpumask_any_but(cpu_online_mask, cpu); target = cpumask_any_but(cpu_online_mask, cpu);
if (target >= nr_cpu_ids) if (target >= nr_cpu_ids)
return 0; return 0;
perf_pmu_migrate_context(&dt->pmu, cpu, target); perf_pmu_migrate_context(&dt->pmu, cpu, target);
cpumask_set_cpu(target, &dt->cpu); dt->cpu = target;
if (ccn->irq) if (ccn->irq)
WARN_ON(irq_set_affinity_hint(ccn->irq, &dt->cpu) != 0); WARN_ON(irq_set_affinity_hint(ccn->irq, cpumask_of(dt->cpu)));
return 0; return 0;
} }
...@@ -1299,29 +1299,30 @@ static int arm_ccn_pmu_init(struct arm_ccn *ccn) ...@@ -1299,29 +1299,30 @@ static int arm_ccn_pmu_init(struct arm_ccn *ccn)
} }
/* Pick one CPU which we will use to collect data from CCN... */ /* Pick one CPU which we will use to collect data from CCN... */
cpumask_set_cpu(get_cpu(), &ccn->dt.cpu); ccn->dt.cpu = raw_smp_processor_id();
/* Also make sure that the overflow interrupt is handled by this CPU */ /* Also make sure that the overflow interrupt is handled by this CPU */
if (ccn->irq) { if (ccn->irq) {
err = irq_set_affinity_hint(ccn->irq, &ccn->dt.cpu); err = irq_set_affinity_hint(ccn->irq, cpumask_of(ccn->dt.cpu));
if (err) { if (err) {
dev_err(ccn->dev, "Failed to set interrupt affinity!\n"); dev_err(ccn->dev, "Failed to set interrupt affinity!\n");
goto error_set_affinity; goto error_set_affinity;
} }
} }
cpuhp_state_add_instance_nocalls(CPUHP_AP_PERF_ARM_CCN_ONLINE,
&ccn->dt.node);
err = perf_pmu_register(&ccn->dt.pmu, name, -1); err = perf_pmu_register(&ccn->dt.pmu, name, -1);
if (err) if (err)
goto error_pmu_register; goto error_pmu_register;
cpuhp_state_add_instance_nocalls(CPUHP_AP_PERF_ARM_CCN_ONLINE,
&ccn->dt.node);
put_cpu();
return 0; return 0;
error_pmu_register: error_pmu_register:
cpuhp_state_remove_instance_nocalls(CPUHP_AP_PERF_ARM_CCN_ONLINE,
&ccn->dt.node);
error_set_affinity: error_set_affinity:
put_cpu();
error_choose_name: error_choose_name:
ida_simple_remove(&arm_ccn_pmu_ida, ccn->dt.id); ida_simple_remove(&arm_ccn_pmu_ida, ccn->dt.id);
for (i = 0; i < ccn->num_xps; i++) for (i = 0; i < ccn->num_xps; i++)
......
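The CCN hunks above replace the driver's cpumask_t with a single designated CPU id, rebuilding a mask with cpumask_of() only where one is still required (the sysfs "cpumask" attribute and the IRQ affinity hint). A minimal sketch of that single-CPU affinity pattern follows; the structure and function names are illustrative, not taken from the driver:

	/* Sketch: one designated CPU, mask rebuilt on demand with cpumask_of(). */
	#include <linux/cpumask.h>
	#include <linux/interrupt.h>

	struct example_pmu {
		unsigned int cpu;	/* CPU that services events and the IRQ */
		int irq;
	};

	static int example_pmu_offline_cpu(struct example_pmu *p, unsigned int cpu)
	{
		unsigned int target;

		if (cpu != p->cpu)
			return 0;	/* not our CPU, nothing to do */

		target = cpumask_any_but(cpu_online_mask, cpu);
		if (target >= nr_cpu_ids)
			return 0;	/* no other CPU left online */

		p->cpu = target;
		/* cpumask_of() yields a const single-CPU mask for the hint */
		return irq_set_affinity_hint(p->irq, cpumask_of(p->cpu));
	}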
...@@ -161,7 +161,7 @@ static unsigned int sbsa_gwdt_get_timeleft(struct watchdog_device *wdd) ...@@ -161,7 +161,7 @@ static unsigned int sbsa_gwdt_get_timeleft(struct watchdog_device *wdd)
timeleft += readl(gwdt->control_base + SBSA_GWDT_WOR); timeleft += readl(gwdt->control_base + SBSA_GWDT_WOR);
timeleft += lo_hi_readq(gwdt->control_base + SBSA_GWDT_WCV) - timeleft += lo_hi_readq(gwdt->control_base + SBSA_GWDT_WCV) -
arch_counter_get_cntvct(); arch_timer_read_counter();
do_div(timeleft, gwdt->clk); do_div(timeleft, gwdt->clk);
......
...@@ -23,7 +23,9 @@ ...@@ -23,7 +23,9 @@
* *
* Return: * Return:
* 0 - On success * 0 - On success
* <0 - On error * -EFAULT - User access resulted in a page fault
* -EAGAIN - Atomic operation was unable to complete due to contention
* -ENOSYS - Operation not supported
*/ */
static inline int static inline int
arch_futex_atomic_op_inuser(int op, u32 oparg, int *oval, u32 __user *uaddr) arch_futex_atomic_op_inuser(int op, u32 oparg, int *oval, u32 __user *uaddr)
...@@ -85,7 +87,9 @@ arch_futex_atomic_op_inuser(int op, u32 oparg, int *oval, u32 __user *uaddr) ...@@ -85,7 +87,9 @@ arch_futex_atomic_op_inuser(int op, u32 oparg, int *oval, u32 __user *uaddr)
* *
* Return: * Return:
* 0 - On success * 0 - On success
* <0 - On error * -EFAULT - User access resulted in a page fault
* -EAGAIN - Atomic operation was unable to complete due to contention
* -ENOSYS - Function not implemented (only if !HAVE_FUTEX_CMPXCHG)
*/ */
static inline int static inline int
futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr, futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
......
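The expanded kerneldoc above spells out the error codes arch code may now return: -EFAULT means "fault the page in and retry", -EAGAIN means "back off and retry". A simplified caller-side sketch of that contract is shown below; it assumes the file-local fault_in_user_writeable() helper that kernel/futex.c uses, and omits the hash-bucket locking the real callers drop before faulting or rescheduling:

	/* Simplified sketch of acting on the documented return codes. */
	static int example_user_cmpxchg(u32 *cur, u32 __user *uaddr, u32 old, u32 new)
	{
		int ret;

		for (;;) {
			ret = futex_atomic_cmpxchg_inatomic(cur, uaddr, old, new);
			if (!ret)
				return 0;		/* *cur holds the previous value */
			if (ret == -EFAULT) {
				if (fault_in_user_writeable(uaddr))
					return -EFAULT;	/* genuinely not writable */
				continue;		/* page is present now, retry */
			}
			if (ret == -EAGAIN) {
				cond_resched();		/* contention: yield, then retry */
				continue;
			}
			return ret;			/* e.g. -ENOSYS */
		}
	}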
...@@ -26,6 +26,14 @@ ...@@ -26,6 +26,14 @@
#define IORT_IRQ_MASK(irq) (irq & 0xffffffffULL) #define IORT_IRQ_MASK(irq) (irq & 0xffffffffULL)
#define IORT_IRQ_TRIGGER_MASK(irq) ((irq >> 32) & 0xffffffffULL) #define IORT_IRQ_TRIGGER_MASK(irq) ((irq >> 32) & 0xffffffffULL)
/*
* PMCG model identifiers for use in the SMMU PMU driver. Note that
* these identifiers are purely a software construct and have no
* meaning in hardware or in the IORT specification.
*/
#define IORT_SMMU_V3_PMCG_GENERIC 0x00000000 /* Generic SMMUv3 PMCG */
#define IORT_SMMU_V3_PMCG_HISI_HIP08 0x00000001 /* HiSilicon HIP08 PMCG */
int iort_register_domain_token(int trans_id, phys_addr_t base, int iort_register_domain_token(int trans_id, phys_addr_t base,
struct fwnode_handle *fw_node); struct fwnode_handle *fw_node);
void iort_deregister_domain_token(int trans_id); void iort_deregister_domain_token(int trans_id);
......
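These model identifiers give the SMMUv3 PMCG driver a firmware-independent way to recognise a specific implementation and enable its quirks (for example, HiSilicon HIP08 erratum handling). A hypothetical sketch of keying a quirk off the model; the structure and flag names here are illustrative, not the driver's:

	/* Hypothetical quirk selection keyed on the IORT-reported PMCG model. */
	#include <linux/acpi_iort.h>
	#include <linux/types.h>

	struct example_pmcg_opts {
		bool hip08_quirk;	/* illustrative flag, not a real driver field */
	};

	static void example_apply_pmcg_quirks(struct example_pmcg_opts *opts, u32 model)
	{
		switch (model) {
		case IORT_SMMU_V3_PMCG_HISI_HIP08:
			opts->hip08_quirk = true;
			break;
		case IORT_SMMU_V3_PMCG_GENERIC:
		default:
			break;		/* no implementation-specific behaviour */
		}
	}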
...@@ -1311,13 +1311,15 @@ static int lookup_pi_state(u32 __user *uaddr, u32 uval, ...@@ -1311,13 +1311,15 @@ static int lookup_pi_state(u32 __user *uaddr, u32 uval,
static int lock_pi_update_atomic(u32 __user *uaddr, u32 uval, u32 newval) static int lock_pi_update_atomic(u32 __user *uaddr, u32 uval, u32 newval)
{ {
int err;
u32 uninitialized_var(curval); u32 uninitialized_var(curval);
if (unlikely(should_fail_futex(true))) if (unlikely(should_fail_futex(true)))
return -EFAULT; return -EFAULT;
if (unlikely(cmpxchg_futex_value_locked(&curval, uaddr, uval, newval))) err = cmpxchg_futex_value_locked(&curval, uaddr, uval, newval);
return -EFAULT; if (unlikely(err))
return err;
/* If user space value changed, let the caller retry */ /* If user space value changed, let the caller retry */
return curval != uval ? -EAGAIN : 0; return curval != uval ? -EAGAIN : 0;
...@@ -1502,10 +1504,8 @@ static int wake_futex_pi(u32 __user *uaddr, u32 uval, struct futex_pi_state *pi_ ...@@ -1502,10 +1504,8 @@ static int wake_futex_pi(u32 __user *uaddr, u32 uval, struct futex_pi_state *pi_
if (unlikely(should_fail_futex(true))) if (unlikely(should_fail_futex(true)))
ret = -EFAULT; ret = -EFAULT;
if (cmpxchg_futex_value_locked(&curval, uaddr, uval, newval)) { ret = cmpxchg_futex_value_locked(&curval, uaddr, uval, newval);
ret = -EFAULT; if (!ret && (curval != uval)) {
} else if (curval != uval) {
/* /*
* If a unconditional UNLOCK_PI operation (user space did not * If a unconditional UNLOCK_PI operation (user space did not
* try the TID->0 transition) raced with a waiter setting the * try the TID->0 transition) raced with a waiter setting the
...@@ -1700,32 +1700,32 @@ futex_wake_op(u32 __user *uaddr1, unsigned int flags, u32 __user *uaddr2, ...@@ -1700,32 +1700,32 @@ futex_wake_op(u32 __user *uaddr1, unsigned int flags, u32 __user *uaddr2,
double_lock_hb(hb1, hb2); double_lock_hb(hb1, hb2);
op_ret = futex_atomic_op_inuser(op, uaddr2); op_ret = futex_atomic_op_inuser(op, uaddr2);
if (unlikely(op_ret < 0)) { if (unlikely(op_ret < 0)) {
double_unlock_hb(hb1, hb2); double_unlock_hb(hb1, hb2);
#ifndef CONFIG_MMU if (!IS_ENABLED(CONFIG_MMU) ||
unlikely(op_ret != -EFAULT && op_ret != -EAGAIN)) {
/* /*
* we don't get EFAULT from MMU faults if we don't have an MMU, * we don't get EFAULT from MMU faults if we don't have
* but we might get them from range checking * an MMU, but we might get them from range checking
*/ */
ret = op_ret; ret = op_ret;
goto out_put_keys; goto out_put_keys;
#endif
if (unlikely(op_ret != -EFAULT)) {
ret = op_ret;
goto out_put_keys;
} }
if (op_ret == -EFAULT) {
ret = fault_in_user_writeable(uaddr2); ret = fault_in_user_writeable(uaddr2);
if (ret) if (ret)
goto out_put_keys; goto out_put_keys;
}
if (!(flags & FLAGS_SHARED)) if (!(flags & FLAGS_SHARED)) {
cond_resched();
goto retry_private; goto retry_private;
}
put_futex_key(&key2); put_futex_key(&key2);
put_futex_key(&key1); put_futex_key(&key1);
cond_resched();
goto retry; goto retry;
} }
...@@ -2350,7 +2350,7 @@ static int fixup_pi_state_owner(u32 __user *uaddr, struct futex_q *q, ...@@ -2350,7 +2350,7 @@ static int fixup_pi_state_owner(u32 __user *uaddr, struct futex_q *q,
u32 uval, uninitialized_var(curval), newval; u32 uval, uninitialized_var(curval), newval;
struct task_struct *oldowner, *newowner; struct task_struct *oldowner, *newowner;
u32 newtid; u32 newtid;
int ret; int ret, err = 0;
lockdep_assert_held(q->lock_ptr); lockdep_assert_held(q->lock_ptr);
...@@ -2421,14 +2421,17 @@ static int fixup_pi_state_owner(u32 __user *uaddr, struct futex_q *q, ...@@ -2421,14 +2421,17 @@ static int fixup_pi_state_owner(u32 __user *uaddr, struct futex_q *q,
if (!pi_state->owner) if (!pi_state->owner)
newtid |= FUTEX_OWNER_DIED; newtid |= FUTEX_OWNER_DIED;
if (get_futex_value_locked(&uval, uaddr)) err = get_futex_value_locked(&uval, uaddr);
goto handle_fault; if (err)
goto handle_err;
for (;;) { for (;;) {
newval = (uval & FUTEX_OWNER_DIED) | newtid; newval = (uval & FUTEX_OWNER_DIED) | newtid;
if (cmpxchg_futex_value_locked(&curval, uaddr, uval, newval)) err = cmpxchg_futex_value_locked(&curval, uaddr, uval, newval);
goto handle_fault; if (err)
goto handle_err;
if (curval == uval) if (curval == uval)
break; break;
uval = curval; uval = curval;
...@@ -2456,23 +2459,37 @@ static int fixup_pi_state_owner(u32 __user *uaddr, struct futex_q *q, ...@@ -2456,23 +2459,37 @@ static int fixup_pi_state_owner(u32 __user *uaddr, struct futex_q *q,
return 0; return 0;
/* /*
* To handle the page fault we need to drop the locks here. That gives * In order to reschedule or handle a page fault, we need to drop the
* the other task (either the highest priority waiter itself or the * locks here. In the case of a fault, this gives the other task
* task which stole the rtmutex) the chance to try the fixup of the * (either the highest priority waiter itself or the task which stole
* pi_state. So once we are back from handling the fault we need to * the rtmutex) the chance to try the fixup of the pi_state. So once we
* check the pi_state after reacquiring the locks and before trying to * are back from handling the fault we need to check the pi_state after
* do another fixup. When the fixup has been done already we simply * reacquiring the locks and before trying to do another fixup. When
* return. * the fixup has been done already we simply return.
* *
* Note: we hold both hb->lock and pi_mutex->wait_lock. We can safely * Note: we hold both hb->lock and pi_mutex->wait_lock. We can safely
* drop hb->lock since the caller owns the hb -> futex_q relation. * drop hb->lock since the caller owns the hb -> futex_q relation.
* Dropping the pi_mutex->wait_lock requires the state revalidate. * Dropping the pi_mutex->wait_lock requires the state revalidate.
*/ */
handle_fault: handle_err:
raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock); raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
spin_unlock(q->lock_ptr); spin_unlock(q->lock_ptr);
switch (err) {
case -EFAULT:
ret = fault_in_user_writeable(uaddr); ret = fault_in_user_writeable(uaddr);
break;
case -EAGAIN:
cond_resched();
ret = 0;
break;
default:
WARN_ON_ONCE(1);
ret = err;
break;
}
spin_lock(q->lock_ptr); spin_lock(q->lock_ptr);
raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock); raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
...@@ -3041,10 +3058,8 @@ static int futex_unlock_pi(u32 __user *uaddr, unsigned int flags) ...@@ -3041,10 +3058,8 @@ static int futex_unlock_pi(u32 __user *uaddr, unsigned int flags)
* A unconditional UNLOCK_PI op raced against a waiter * A unconditional UNLOCK_PI op raced against a waiter
* setting the FUTEX_WAITERS bit. Try again. * setting the FUTEX_WAITERS bit. Try again.
*/ */
if (ret == -EAGAIN) { if (ret == -EAGAIN)
put_futex_key(&key); goto pi_retry;
goto retry;
}
/* /*
* wake_futex_pi has detected invalid state. Tell user * wake_futex_pi has detected invalid state. Tell user
* space. * space.
...@@ -3059,9 +3074,19 @@ static int futex_unlock_pi(u32 __user *uaddr, unsigned int flags) ...@@ -3059,9 +3074,19 @@ static int futex_unlock_pi(u32 __user *uaddr, unsigned int flags)
* preserve the WAITERS bit not the OWNER_DIED one. We are the * preserve the WAITERS bit not the OWNER_DIED one. We are the
* owner. * owner.
*/ */
if (cmpxchg_futex_value_locked(&curval, uaddr, uval, 0)) { if ((ret = cmpxchg_futex_value_locked(&curval, uaddr, uval, 0))) {
spin_unlock(&hb->lock); spin_unlock(&hb->lock);
switch (ret) {
case -EFAULT:
goto pi_faulted; goto pi_faulted;
case -EAGAIN:
goto pi_retry;
default:
WARN_ON_ONCE(1);
goto out_putkey;
}
} }
/* /*
...@@ -3075,6 +3100,11 @@ static int futex_unlock_pi(u32 __user *uaddr, unsigned int flags) ...@@ -3075,6 +3100,11 @@ static int futex_unlock_pi(u32 __user *uaddr, unsigned int flags)
put_futex_key(&key); put_futex_key(&key);
return ret; return ret;
pi_retry:
put_futex_key(&key);
cond_resched();
goto retry;
pi_faulted: pi_faulted:
put_futex_key(&key); put_futex_key(&key);
...@@ -3435,6 +3465,7 @@ SYSCALL_DEFINE3(get_robust_list, int, pid, ...@@ -3435,6 +3465,7 @@ SYSCALL_DEFINE3(get_robust_list, int, pid,
static int handle_futex_death(u32 __user *uaddr, struct task_struct *curr, int pi) static int handle_futex_death(u32 __user *uaddr, struct task_struct *curr, int pi)
{ {
u32 uval, uninitialized_var(nval), mval; u32 uval, uninitialized_var(nval), mval;
int err;
/* Futex address must be 32bit aligned */ /* Futex address must be 32bit aligned */
if ((((unsigned long)uaddr) % sizeof(*uaddr)) != 0) if ((((unsigned long)uaddr) % sizeof(*uaddr)) != 0)
...@@ -3444,7 +3475,9 @@ static int handle_futex_death(u32 __user *uaddr, struct task_struct *curr, int p ...@@ -3444,7 +3475,9 @@ static int handle_futex_death(u32 __user *uaddr, struct task_struct *curr, int p
if (get_user(uval, uaddr)) if (get_user(uval, uaddr))
return -1; return -1;
if ((uval & FUTEX_TID_MASK) == task_pid_vnr(curr)) { if ((uval & FUTEX_TID_MASK) != task_pid_vnr(curr))
return 0;
/* /*
* Ok, this dying thread is truly holding a futex * Ok, this dying thread is truly holding a futex
* of interest. Set the OWNER_DIED bit atomically * of interest. Set the OWNER_DIED bit atomically
...@@ -3456,6 +3489,7 @@ static int handle_futex_death(u32 __user *uaddr, struct task_struct *curr, int p ...@@ -3456,6 +3489,7 @@ static int handle_futex_death(u32 __user *uaddr, struct task_struct *curr, int p
* userspace. * userspace.
*/ */
mval = (uval & FUTEX_WAITERS) | FUTEX_OWNER_DIED; mval = (uval & FUTEX_WAITERS) | FUTEX_OWNER_DIED;
/* /*
* We are not holding a lock here, but we want to have * We are not holding a lock here, but we want to have
* the pagefault_disable/enable() protection because * the pagefault_disable/enable() protection because
...@@ -3465,11 +3499,23 @@ static int handle_futex_death(u32 __user *uaddr, struct task_struct *curr, int p ...@@ -3465,11 +3499,23 @@ static int handle_futex_death(u32 __user *uaddr, struct task_struct *curr, int p
* does not guarantee R/W access. If that fails we * does not guarantee R/W access. If that fails we
* give up and leave the futex locked. * give up and leave the futex locked.
*/ */
if (cmpxchg_futex_value_locked(&nval, uaddr, uval, mval)) { if ((err = cmpxchg_futex_value_locked(&nval, uaddr, uval, mval))) {
switch (err) {
case -EFAULT:
if (fault_in_user_writeable(uaddr)) if (fault_in_user_writeable(uaddr))
return -1; return -1;
goto retry; goto retry;
case -EAGAIN:
cond_resched();
goto retry;
default:
WARN_ON_ONCE(1);
return err;
}
} }
if (nval != uval) if (nval != uval)
goto retry; goto retry;
...@@ -3479,7 +3525,7 @@ static int handle_futex_death(u32 __user *uaddr, struct task_struct *curr, int p ...@@ -3479,7 +3525,7 @@ static int handle_futex_death(u32 __user *uaddr, struct task_struct *curr, int p
*/ */
if (!pi && (uval & FUTEX_WAITERS)) if (!pi && (uval & FUTEX_WAITERS))
futex_wake(uaddr, 1, 1, FUTEX_BITSET_MATCH_ANY); futex_wake(uaddr, 1, 1, FUTEX_BITSET_MATCH_ANY);
}
return 0; return 0;
} }
......
...@@ -6,10 +6,10 @@ UBSAN_SANITIZE_generic_report.o := n ...@@ -6,10 +6,10 @@ UBSAN_SANITIZE_generic_report.o := n
UBSAN_SANITIZE_tags.o := n UBSAN_SANITIZE_tags.o := n
KCOV_INSTRUMENT := n KCOV_INSTRUMENT := n
CFLAGS_REMOVE_common.o = -pg CFLAGS_REMOVE_common.o = $(CC_FLAGS_FTRACE)
CFLAGS_REMOVE_generic.o = -pg CFLAGS_REMOVE_generic.o = $(CC_FLAGS_FTRACE)
CFLAGS_REMOVE_generic_report.o = -pg CFLAGS_REMOVE_generic_report.o = $(CC_FLAGS_FTRACE)
CFLAGS_REMOVE_tags.o = -pg CFLAGS_REMOVE_tags.o = $(CC_FLAGS_FTRACE)
# Function splitter causes unnecessary splits in __asan_load1/__asan_store1 # Function splitter causes unnecessary splits in __asan_load1/__asan_store1
# see: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63533 # see: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63533
......
...@@ -189,7 +189,7 @@ static void clear_stage2_pmd_entry(struct kvm *kvm, pmd_t *pmd, phys_addr_t addr ...@@ -189,7 +189,7 @@ static void clear_stage2_pmd_entry(struct kvm *kvm, pmd_t *pmd, phys_addr_t addr
VM_BUG_ON(pmd_thp_or_huge(*pmd)); VM_BUG_ON(pmd_thp_or_huge(*pmd));
pmd_clear(pmd); pmd_clear(pmd);
kvm_tlb_flush_vmid_ipa(kvm, addr); kvm_tlb_flush_vmid_ipa(kvm, addr);
pte_free_kernel(NULL, pte_table); free_page((unsigned long)pte_table);
put_page(virt_to_page(pmd)); put_page(virt_to_page(pmd));
} }
......