Commit 533925cb authored by Linus Torvalds

Merge tag 'riscv-for-linus-6.5-mw1' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux

Pull RISC-V updates from Palmer Dabbelt:

 - Support for ACPI

 - Various cleanups to the ISA string parsing, including making them
   case-insensitive

 - Support for the vector extension

 - Support for independent irq/softirq stacks

 - Our CPU DT binding now has "unevaluatedProperties: false"

* tag 'riscv-for-linus-6.5-mw1' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux: (78 commits)
  riscv: hibernate: remove WARN_ON in save_processor_state
  dt-bindings: riscv: cpus: switch to unevaluatedProperties: false
  dt-bindings: riscv: cpus: add a ref the common cpu schema
  riscv: stack: Add config of thread stack size
  riscv: stack: Support HAVE_SOFTIRQ_ON_OWN_STACK
  riscv: stack: Support HAVE_IRQ_EXIT_ON_IRQ_STACK
  RISC-V: always report presence of extensions formerly part of the base ISA
  dt-bindings: riscv: explicitly mention assumption of Zicntr & Zihpm support
  RISC-V: remove decrement/increment dance in ISA string parser
  RISC-V: rework comments in ISA string parser
  RISC-V: validate riscv,isa at boot, not during ISA string parsing
  RISC-V: split early & late of_node to hartid mapping
  RISC-V: simplify register width check in ISA string parsing
  perf: RISC-V: Limit the number of counters returned from SBI
  riscv: replace deprecated scall with ecall
  riscv: uprobes: Restore thread.bad_cause
  riscv: mm: try VMA lock-based page fault handling first
  riscv: mm: Pre-allocate PGD entries for vmalloc/modules area
  RISC-V: hwprobe: Expose Zba, Zbb, and Zbs
  RISC-V: Track ISA extensions per hart
  ...
parents d8b0bd57 488833cc
-	acpi=		[HW,ACPI,X86,ARM64]
+	acpi=		[HW,ACPI,X86,ARM64,RISCV64]
 			Advanced Configuration and Power Interface
 			Format: { force | on | off | strict | noirq | rsdt |
 				  copy_dsdt }
 			force -- enable ACPI if default was off
-			on -- enable ACPI but allow fallback to DT [arm64]
+			on -- enable ACPI but allow fallback to DT [arm64,riscv64]
 			off -- disable ACPI if default was on
 			noirq -- do not use ACPI for IRQ routing
 			strict -- Be less tolerant of platforms that are not
 				strictly ACPI specification compliant.
 			rsdt -- prefer RSDT over (default) XSDT
 			copy_dsdt -- copy DSDT to memory
-			For ARM64, ONLY "acpi=off", "acpi=on" or "acpi=force"
-			are available
+			For ARM64 and RISCV64, ONLY "acpi=off", "acpi=on" or
+			"acpi=force" are available
 			See also Documentation/power/runtime_pm.rst, pci=noacpi
......
@@ -23,6 +23,9 @@ description: |
   two cores, each of which has two hyperthreads, could be described as
   having four harts.
 
+allOf:
+  - $ref: /schemas/cpu.yaml#
+
 properties:
   compatible:
     oneOf:
@@ -61,7 +64,7 @@ properties:
       hart. These values originate from the RISC-V Privileged
       Specification document, available from
       https://riscv.org/specifications/
-    $ref: "/schemas/types.yaml#/definitions/string"
+    $ref: /schemas/types.yaml#/definitions/string
     enum:
       - riscv,sv32
       - riscv,sv39
@@ -89,15 +92,18 @@ properties:
       Due to revisions of the ISA specification, some deviations
       have arisen over time.
       Notably, riscv,isa was defined prior to the creation of the
-      Zicsr and Zifencei extensions and thus "i"
-      implies "zicsr_zifencei".
+      Zicntr, Zicsr, Zifencei and Zihpm extensions and thus "i"
+      implies "zicntr_zicsr_zifencei_zihpm".
 
       While the isa strings in ISA specification are case
       insensitive, letters in the riscv,isa string must be all
-      lowercase to simplify parsing.
+      lowercase.
-    $ref: "/schemas/types.yaml#/definitions/string"
+    $ref: /schemas/types.yaml#/definitions/string
     pattern: ^rv(?:64|32)imaf?d?q?c?b?k?j?p?v?h?(?:[hsxz](?:[a-z])+)?(?:_[hsxz](?:[a-z])+)*$
 
+  # RISC-V has multiple properties for cache op block sizes as the sizes
+  # differ between individual CBO extensions
+  cache-op-block-size: false
+
   # RISC-V requires 'timebase-frequency' in /cpus, so disallow it here
   timebase-frequency: false
@@ -120,7 +126,7 @@ properties:
       - interrupt-controller
 
   cpu-idle-states:
-    $ref: '/schemas/types.yaml#/definitions/phandle-array'
+    $ref: /schemas/types.yaml#/definitions/phandle-array
     items:
       maxItems: 1
     description: |
@@ -137,7 +143,7 @@ required:
   - riscv,isa
   - interrupt-controller
 
-additionalProperties: true
+unevaluatedProperties: false
 
 examples:
   - |
......
@@ -64,6 +64,19 @@ The following keys are defined:
 * :c:macro:`RISCV_HWPROBE_IMA_C`: The C extension is supported, as defined
   by version 2.2 of the RISC-V ISA manual.
 
+* :c:macro:`RISCV_HWPROBE_IMA_V`: The V extension is supported, as defined by
+  version 1.0 of the RISC-V Vector extension manual.
+
+* :c:macro:`RISCV_HWPROBE_EXT_ZBA`: The Zba address generation extension is
+  supported, as defined in version 1.0 of the Bit-Manipulation ISA
+  extensions.
+
+* :c:macro:`RISCV_HWPROBE_EXT_ZBB`: The Zbb extension is supported, as defined
+  in version 1.0 of the Bit-Manipulation ISA extensions.
+
+* :c:macro:`RISCV_HWPROBE_EXT_ZBS`: The Zbs extension is supported, as defined
+  in version 1.0 of the Bit-Manipulation ISA extensions.
+
 * :c:macro:`RISCV_HWPROBE_KEY_CPUPERF_0`: A bitmask that contains performance
   information about the selected set of processors.
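
For illustration, a minimal userspace probe for these new bits might look like
the sketch below. It assumes the uapi <asm/hwprobe.h> header is available, that
the call is made directly via syscall(2) with __NR_riscv_hwprobe (a libc
wrapper may not exist), and that the empty cpuset (cpu_count = 0, cpus = NULL)
selects all online CPUs as described by this document::

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <asm/hwprobe.h>

    int main(void)
    {
        struct riscv_hwprobe pair = { .key = RISCV_HWPROBE_KEY_IMA_EXT_0 };

        /* One key/value pair, all online CPUs, no flags. */
        if (syscall(__NR_riscv_hwprobe, &pair, 1, 0, NULL, 0) != 0)
            return 1;

        printf("V:   %s\n", (pair.value & RISCV_HWPROBE_IMA_V) ? "yes" : "no");
        printf("Zba: %s\n", (pair.value & RISCV_HWPROBE_EXT_ZBA) ? "yes" : "no");
        printf("Zbb: %s\n", (pair.value & RISCV_HWPROBE_EXT_ZBB) ? "yes" : "no");
        printf("Zbs: %s\n", (pair.value & RISCV_HWPROBE_EXT_ZBS) ? "yes" : "no");
        return 0;
    }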
......
@@ -10,6 +10,7 @@ RISC-V architecture
     hwprobe
     patch-acceptance
     uabi
+    vector
 
     features
......
.. SPDX-License-Identifier: GPL-2.0

=========================================
Vector Extension Support for RISC-V Linux
=========================================

This document briefly outlines the interface provided to userspace by Linux in
order to support the use of the RISC-V Vector Extension.
1. prctl() Interface
---------------------

Two new prctl() calls are added to allow programs to manage the enablement
status for the use of Vector in userspace. The intended usage guideline for
these interfaces is to give init systems a way to modify the availability of V
for processes running under their domain. Calling these interfaces is not
recommended in library routines, because libraries should not override policies
configured by the parent process. Also, note that these interfaces are not
portable to non-Linux or non-RISC-V environments, so their use in portable code
is discouraged. To get the availability of V in an ELF program, read the
:c:macro:`COMPAT_HWCAP_ISA_V` bit of :c:macro:`ELF_HWCAP` in the auxiliary
vector.
* prctl(PR_RISCV_V_SET_CONTROL, unsigned long arg)

    Sets the Vector enablement status of the calling thread, where the control
    argument consists of two 2-bit enablement statuses and a bit for
    inheritance mode. Other threads of the calling process are unaffected.

    Enablement status is a tri-state value, each occupying 2 bits of space in
    the control argument:
    * :c:macro:`PR_RISCV_V_VSTATE_CTRL_DEFAULT`: Use the system-wide default
      enablement status on execve(). The system-wide default setting can be
      controlled via the sysctl interface (see the sysctl section below).

    * :c:macro:`PR_RISCV_V_VSTATE_CTRL_ON`: Allow Vector to be run for the
      thread.

    * :c:macro:`PR_RISCV_V_VSTATE_CTRL_OFF`: Disallow Vector. Executing Vector
      instructions in this state will trap and cause the termination of the
      thread.
    arg: The control argument is a 5-bit value consisting of 3 parts, accessed
    by 3 masks respectively.

    The 3 masks, PR_RISCV_V_VSTATE_CTRL_CUR_MASK,
    PR_RISCV_V_VSTATE_CTRL_NEXT_MASK, and PR_RISCV_V_VSTATE_CTRL_INHERIT,
    represent bit[1:0], bit[3:2], and bit[4] respectively. bit[1:0] accounts
    for the enablement status of the current thread, the setting at bit[3:2]
    takes place at the next execve(), and bit[4] defines the inheritance mode
    of the setting in bit[3:2].
    * :c:macro:`PR_RISCV_V_VSTATE_CTRL_CUR_MASK`: bit[1:0]: Accounts for the
      Vector enablement status of the calling thread. The calling thread is
      not able to turn off Vector once it has been enabled. The prctl() call
      fails with EPERM if the value in this mask is PR_RISCV_V_VSTATE_CTRL_OFF
      but the current enablement status is not off. Setting
      PR_RISCV_V_VSTATE_CTRL_DEFAULT here has no effect other than restoring
      the original enablement status.

    * :c:macro:`PR_RISCV_V_VSTATE_CTRL_NEXT_MASK`: bit[3:2]: Accounts for the
      Vector enablement setting for the calling thread at the next execve()
      system call. If PR_RISCV_V_VSTATE_CTRL_DEFAULT is used in this mask,
      then the enablement status will be decided by the system-wide
      enablement status when execve() happens.

    * :c:macro:`PR_RISCV_V_VSTATE_CTRL_INHERIT`: bit[4]: The inheritance
      mode for the setting at PR_RISCV_V_VSTATE_CTRL_NEXT_MASK. If the bit
      is set, then the following execve() will not clear the setting in either
      PR_RISCV_V_VSTATE_CTRL_NEXT_MASK or PR_RISCV_V_VSTATE_CTRL_INHERIT.
      This setting persists across changes in the system-wide default value.
    Return value:

    * 0 on success;
    * EINVAL: Vector not supported, or invalid enablement status for the
      current or next mask;
    * EPERM: Turning off Vector in PR_RISCV_V_VSTATE_CTRL_CUR_MASK when Vector
      was enabled for the calling thread.

    On success:

    * A valid setting for PR_RISCV_V_VSTATE_CTRL_CUR_MASK takes place
      immediately. The enablement status specified in
      PR_RISCV_V_VSTATE_CTRL_NEXT_MASK happens at the next execve() call, or
      all following execve() calls if the PR_RISCV_V_VSTATE_CTRL_INHERIT bit
      is set.
    * Every successful call overwrites a previous setting for the calling
      thread.
* prctl(PR_RISCV_V_GET_CONTROL)

    Gets the Vector enablement status of the calling thread. The setting for
    the next execve() call and the inheritance bit are all OR-ed together.
    Note that ELF programs are able to get the availability of V for
    themselves by reading the :c:macro:`COMPAT_HWCAP_ISA_V` bit of
    :c:macro:`ELF_HWCAP` in the auxiliary vector.

    Return value:

    * a nonnegative value on success;
    * EINVAL: Vector not supported.
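
The following is a minimal sketch of how a program might use these calls. It
assumes a libc whose <sys/prctl.h> already carries the PR_RISCV_V_* constants;
on older headers the values would have to be supplied by hand::

    #include <stdio.h>
    #include <sys/prctl.h>

    int main(void)
    {
        long ctrl;

        /* Enable Vector now, and also request it after the next execve(). */
        if (prctl(PR_RISCV_V_SET_CONTROL,
                  PR_RISCV_V_VSTATE_CTRL_ON |
                  (PR_RISCV_V_VSTATE_CTRL_ON << 2)) != 0) {
            perror("PR_RISCV_V_SET_CONTROL");
            return 1;
        }

        /* CUR, NEXT and INHERIT fields are OR-ed into the return value. */
        ctrl = prctl(PR_RISCV_V_GET_CONTROL);
        if (ctrl < 0) {
            perror("PR_RISCV_V_GET_CONTROL");
            return 1;
        }
        printf("vstate control: 0x%lx\n", (unsigned long)ctrl);
        return 0;
    }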
2. System runtime configuration (sysctl)
-----------------------------------------

To mitigate the ABI impact of the expansion of the signal stack, a policy
mechanism is provided to administrators, distro maintainers, and developers to
control the default Vector enablement status for userspace processes, in the
form of a sysctl knob:

* /proc/sys/abi/riscv_v_default_allow

    Writing the text representation of 0 or 1 to this file sets the default
    system enablement status for newly started userspace programs. Valid
    values are:

    * 0: Do not allow Vector code to be executed as the default for new
      processes.
    * 1: Allow Vector code to be executed as the default for new processes.

    Reading this file returns the current system default enablement status.
    At every execve() call, the enablement status of the new process is set to
    the system default, unless:

    * PR_RISCV_V_VSTATE_CTRL_INHERIT is set for the calling process, and the
      setting in PR_RISCV_V_VSTATE_CTRL_NEXT_MASK is not
      PR_RISCV_V_VSTATE_CTRL_DEFAULT. Or,

    * The setting in PR_RISCV_V_VSTATE_CTRL_NEXT_MASK is not
      PR_RISCV_V_VSTATE_CTRL_DEFAULT.

    Modifying the system default enablement status does not affect the
    enablement status of any existing process or thread that does not make an
    execve() call.
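
As a sketch of how the knob is driven programmatically (the path is exactly
the procfs file documented above; writing it requires sufficient privileges)::

    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/sys/abi/riscv_v_default_allow", "r+");
        int allow = -1;

        if (!f)
            return 1;
        if (fscanf(f, "%d", &allow) == 1)      /* current system default */
            printf("riscv_v_default_allow = %d\n", allow);
        rewind(f);
        fputs("1\n", f);                       /* allow V for new processes */
        return fclose(f) ? 1 : 0;
    }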
@@ -406,6 +406,13 @@ L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:	Maintained
 F:	drivers/acpi/arm64
 
+ACPI FOR RISC-V (ACPI/riscv)
+M:	Sunil V L <sunilvl@ventanamicro.com>
+L:	linux-acpi@vger.kernel.org
+L:	linux-riscv@lists.infradead.org
+S:	Maintained
+F:	drivers/acpi/riscv/
+
 ACPI PCC(Platform Communication Channel) MAILBOX DRIVER
 M:	Sudeep Holla <sudeep.holla@arm.com>
 L:	linux-acpi@vger.kernel.org
......
@@ -12,6 +12,8 @@ config 32BIT
 
 config RISCV
 	def_bool y
+	select ACPI_GENERIC_GSI if ACPI
+	select ACPI_REDUCED_HARDWARE_ONLY if ACPI
 	select ARCH_DMA_DEFAULT_COHERENT
 	select ARCH_ENABLE_HUGEPAGE_MIGRATION if HUGETLB_PAGE && MIGRATION
 	select ARCH_ENABLE_SPLIT_PMD_PTLOCK if PGTABLE_LEVELS > 2
@@ -43,6 +45,7 @@ config RISCV
 	select ARCH_SUPPORTS_DEBUG_PAGEALLOC if MMU
 	select ARCH_SUPPORTS_HUGETLBFS if MMU
 	select ARCH_SUPPORTS_PAGE_TABLE_CHECK if MMU
+	select ARCH_SUPPORTS_PER_VMA_LOCK if MMU
 	select ARCH_USE_MEMTEST
 	select ARCH_USE_QUEUED_RWLOCKS
 	select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT if MMU
@@ -265,6 +268,12 @@ config RISCV_DMA_NONCOHERENT
 config AS_HAS_INSN
 	def_bool $(as-instr,.insn r 51$(comma) 0$(comma) 0$(comma) t0$(comma) t0$(comma) zero)
 
+config AS_HAS_OPTION_ARCH
+	# https://reviews.llvm.org/D123515
+	def_bool y
+	depends on $(as-instr, .option arch$(comma) +m)
+	depends on !$(as-instr, .option arch$(comma) -i)
+
 source "arch/riscv/Kconfig.socs"
 source "arch/riscv/Kconfig.errata"
@@ -463,13 +472,44 @@ config RISCV_ISA_SVPBMT
 
 	   If you don't know what to do here, say Y.
 
+config TOOLCHAIN_HAS_V
+	bool
+	default y
+	depends on !64BIT || $(cc-option,-mabi=lp64 -march=rv64iv)
+	depends on !32BIT || $(cc-option,-mabi=ilp32 -march=rv32iv)
+	depends on LLD_VERSION >= 140000 || LD_VERSION >= 23800
+	depends on AS_HAS_OPTION_ARCH
+
+config RISCV_ISA_V
+	bool "VECTOR extension support"
+	depends on TOOLCHAIN_HAS_V
+	depends on FPU
+	select DYNAMIC_SIGFRAME
+	default y
+	help
+	  Say N here if you want to disable all vector-related procedures
+	  in the kernel.
+
+	  If you don't know what to do here, say Y.
+
+config RISCV_ISA_V_DEFAULT_ENABLE
+	bool "Enable userspace Vector by default"
+	depends on RISCV_ISA_V
+	default y
+	help
+	  Say Y here if you want to enable Vector in userspace by default.
+	  Otherwise, userspace has to make an explicit prctl() call to enable
+	  Vector, or enable it via the sysctl interface.
+
+	  If you don't know what to do here, say Y.
+
 config TOOLCHAIN_HAS_ZBB
 	bool
 	default y
 	depends on !64BIT || $(cc-option,-mabi=lp64 -march=rv64ima_zbb)
 	depends on !32BIT || $(cc-option,-mabi=ilp32 -march=rv32ima_zbb)
 	depends on LLD_VERSION >= 150000 || LD_VERSION >= 23900
-	depends on AS_IS_GNU
+	depends on AS_HAS_OPTION_ARCH
 
 config RISCV_ISA_ZBB
 	bool "Zbb extension support for bit manipulation instructions"
@@ -554,6 +594,25 @@ config FPU
 
 	  If you don't know what to do here, say Y.
 
+config IRQ_STACKS
+	bool "Independent irq & softirq stacks" if EXPERT
+	default y
+	select HAVE_IRQ_EXIT_ON_IRQ_STACK
+	select HAVE_SOFTIRQ_ON_OWN_STACK
+	help
+	  Add independent per-CPU irq and softirq stacks to prevent kernel
+	  stack overflows. Disabling IRQ_STACKS saves some memory footprint.
+
+config THREAD_SIZE_ORDER
+	int "Kernel stack size (in power-of-two numbers of page size)" if VMAP_STACK && EXPERT
+	range 0 4
+	default 1 if 32BIT && !KASAN
+	default 3 if 64BIT && KASAN
+	default 2
+	help
+	  Specify the thread stack size in pages (from 4KB to 64KB); this also
+	  sets the irq stack size, which is equal to the thread stack size.
+
 endmenu # "Platform type"
 
 menu "Kernel features"
@@ -710,6 +769,7 @@ config EFI
 	depends on OF && !XIP_KERNEL
 	depends on MMU
 	default y
+	select ARCH_SUPPORTS_ACPI if 64BIT
 	select EFI_GENERIC_STUB
 	select EFI_PARAMS_FROM_FDT
 	select EFI_RUNTIME_WRAPPERS
@@ -822,3 +882,5 @@ source "drivers/cpufreq/Kconfig"
 endmenu # "CPU Power Management"
 
 source "arch/riscv/kvm/Kconfig"
+
+source "drivers/acpi/Kconfig"
@@ -60,6 +60,7 @@ riscv-march-$(CONFIG_ARCH_RV32I)	:= rv32ima
 riscv-march-$(CONFIG_ARCH_RV64I)	:= rv64ima
 riscv-march-$(CONFIG_FPU)		:= $(riscv-march-y)fd
 riscv-march-$(CONFIG_RISCV_ISA_C)	:= $(riscv-march-y)c
+riscv-march-$(CONFIG_RISCV_ISA_V)	:= $(riscv-march-y)v
 
 ifdef CONFIG_TOOLCHAIN_NEEDS_OLD_ISA_SPEC
 KBUILD_CFLAGS += -Wa,-misa-spec=2.2
@@ -71,7 +72,10 @@ endif
 # Check if the toolchain supports Zihintpause extension
 riscv-march-$(CONFIG_TOOLCHAIN_HAS_ZIHINTPAUSE) := $(riscv-march-y)_zihintpause
 
-KBUILD_CFLAGS += -march=$(subst fd,,$(riscv-march-y))
+# Remove F,D,V from isa string for all. Keep extensions between "fd" and "v" by
+# matching non-v and non-multi-letter extensions out with the filter ([^v_]*)
+KBUILD_CFLAGS += -march=$(shell echo $(riscv-march-y) | sed -E 's/(rv32ima|rv64ima)fd([^v_]*)v?/\1\2/')
+
 KBUILD_AFLAGS += -march=$(riscv-march-y)
 
 KBUILD_CFLAGS += -mno-save-restore
......
@@ -38,6 +38,7 @@ CONFIG_PM=y
 CONFIG_CPU_IDLE=y
 CONFIG_VIRTUALIZATION=y
 CONFIG_KVM=m
+CONFIG_ACPI=y
 CONFIG_JUMP_LABEL=y
 CONFIG_MODULES=y
 CONFIG_MODULE_UNLOAD=y
......
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* RISC-V specific ACPICA environments and implementation
*/
#ifndef _ASM_ACENV_H
#define _ASM_ACENV_H
/* This header is required unconditionally by the ACPI core */
#endif /* _ASM_ACENV_H */
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (C) 2013-2014, Linaro Ltd.
* Author: Al Stone <al.stone@linaro.org>
* Author: Graeme Gregory <graeme.gregory@linaro.org>
* Author: Hanjun Guo <hanjun.guo@linaro.org>
*
* Copyright (C) 2021-2023, Ventana Micro Systems Inc.
* Author: Sunil V L <sunilvl@ventanamicro.com>
*/
#ifndef _ASM_ACPI_H
#define _ASM_ACPI_H
/* Basic configuration for ACPI */
#ifdef CONFIG_ACPI
typedef u64 phys_cpuid_t;
#define PHYS_CPUID_INVALID INVALID_HARTID
/* ACPI table mapping after acpi_permanent_mmap is set */
void *acpi_os_ioremap(acpi_physical_address phys, acpi_size size);
#define acpi_os_ioremap acpi_os_ioremap
#define acpi_strict 1 /* No out-of-spec workarounds on RISC-V */
extern int acpi_disabled;
extern int acpi_noirq;
extern int acpi_pci_disabled;
static inline void disable_acpi(void)
{
acpi_disabled = 1;
acpi_pci_disabled = 1;
acpi_noirq = 1;
}
static inline void enable_acpi(void)
{
acpi_disabled = 0;
acpi_pci_disabled = 0;
acpi_noirq = 0;
}
/*
* The ACPI processor driver for ACPI core code needs this macro
* to find out whether this cpu was already mapped (mapping from CPU hardware
* ID to CPU logical ID) or not.
*/
#define cpu_physical_id(cpu) cpuid_to_hartid_map(cpu)
/*
* Since MADT must provide at least one RINTC structure, the
* CPU will be always available in MADT on RISC-V.
*/
static inline bool acpi_has_cpu_in_madt(void)
{
return true;
}
static inline void arch_fix_phys_package_id(int num, u32 slot) { }
void acpi_init_rintc_map(void);
struct acpi_madt_rintc *acpi_cpu_get_madt_rintc(int cpu);
u32 get_acpi_id_for_cpu(int cpu);
int acpi_get_riscv_isa(struct acpi_table_header *table,
unsigned int cpu, const char **isa);
static inline int acpi_numa_get_nid(unsigned int cpu) { return NUMA_NO_NODE; }
#else
static inline void acpi_init_rintc_map(void) { }
static inline struct acpi_madt_rintc *acpi_cpu_get_madt_rintc(int cpu)
{
return NULL;
}
static inline int acpi_get_riscv_isa(struct acpi_table_header *table,
unsigned int cpu, const char **isa)
{
return -EINVAL;
}
#endif /* CONFIG_ACPI */
#endif /*_ASM_ACPI_H*/
@@ -7,6 +7,8 @@
 #define EX_TYPE_BPF			2
 #define EX_TYPE_UACCESS_ERR_ZERO	3
 
+#ifdef CONFIG_MMU
+
 #ifdef __ASSEMBLY__
 
 #define __ASM_EXTABLE_RAW(insn, fixup, type, data)	\
@@ -62,4 +64,8 @@
 
 #endif /* __ASSEMBLY__ */
 
+#else /* CONFIG_MMU */
+#define _ASM_EXTABLE_UACCESS_ERR(insn, fixup, err)
+#endif /* CONFIG_MMU */
+
 #endif /* __ASM_ASM_EXTABLE_H */
/* SPDX-License-Identifier: GPL-2.0-only */
#ifndef _ASM_CPU_H
#define _ASM_CPU_H
/* This header is required unconditionally by the ACPI core */
#endif /* _ASM_CPU_H */
@@ -6,6 +6,9 @@
 #ifndef _ASM_CPUFEATURE_H
 #define _ASM_CPUFEATURE_H
 
+#include <linux/bitmap.h>
+#include <asm/hwcap.h>
+
 /*
  * These are probed via a device_initcall(), via either the SBI or directly
  * from the corresponding CSRs.
@@ -16,8 +19,15 @@ struct riscv_cpuinfo {
 	unsigned long mimpid;
 };
 
+struct riscv_isainfo {
+	DECLARE_BITMAP(isa, RISCV_ISA_EXT_MAX);
+};
+
 DECLARE_PER_CPU(struct riscv_cpuinfo, riscv_cpuinfo);
 
 DECLARE_PER_CPU(long, misaligned_access_speed);
 
+/* Per-cpu ISA extensions. */
+extern struct riscv_isainfo hart_isa[NR_CPUS];
+
 #endif
@@ -24,16 +24,24 @@
 #define SR_FS_CLEAN	_AC(0x00004000, UL)
 #define SR_FS_DIRTY	_AC(0x00006000, UL)
 
+#define SR_VS		_AC(0x00000600, UL) /* Vector Status */
+#define SR_VS_OFF	_AC(0x00000000, UL)
+#define SR_VS_INITIAL	_AC(0x00000200, UL)
+#define SR_VS_CLEAN	_AC(0x00000400, UL)
+#define SR_VS_DIRTY	_AC(0x00000600, UL)
+
 #define SR_XS		_AC(0x00018000, UL) /* Extension Status */
 #define SR_XS_OFF	_AC(0x00000000, UL)
 #define SR_XS_INITIAL	_AC(0x00008000, UL)
 #define SR_XS_CLEAN	_AC(0x00010000, UL)
 #define SR_XS_DIRTY	_AC(0x00018000, UL)
 
+#define SR_FS_VS	(SR_FS | SR_VS) /* Vector and Floating-Point Unit */
+
 #ifndef CONFIG_64BIT
-#define SR_SD		_AC(0x80000000, UL) /* FS/XS dirty */
+#define SR_SD		_AC(0x80000000, UL) /* FS/VS/XS dirty */
 #else
-#define SR_SD		_AC(0x8000000000000000, UL) /* FS/XS dirty */
+#define SR_SD		_AC(0x8000000000000000, UL) /* FS/VS/XS dirty */
 #endif
 
 #ifdef CONFIG_64BIT
@@ -375,6 +383,12 @@
 #define CSR_MVIPH		0x319
 #define CSR_MIPH		0x354
 
+#define CSR_VSTART		0x8
+#define CSR_VCSR		0xf
+#define CSR_VL			0xc20
+#define CSR_VTYPE		0xc21
+#define CSR_VLENB		0xc22
+
 #ifdef CONFIG_RISCV_M_MODE
 # define CSR_STATUS	CSR_MSTATUS
 # define CSR_IE		CSR_MIE
......
@@ -66,7 +66,7 @@ extern bool compat_elf_check_arch(Elf32_Ehdr *hdr);
  * via a bitmap that corresponds to each single-letter ISA extension. This is
  * essentially defunct, but will remain for compatibility with userspace.
  */
-#define ELF_HWCAP	(elf_hwcap & ((1UL << RISCV_ISA_EXT_BASE) - 1))
+#define ELF_HWCAP	riscv_get_elf_hwcap()
 extern unsigned long elf_hwcap;
 
 /*
@@ -105,6 +105,15 @@ do {							\
 			get_cache_size(3, CACHE_TYPE_UNIFIED));	\
 	NEW_AUX_ENT(AT_L3_CACHEGEOMETRY,			\
 			get_cache_geometry(3, CACHE_TYPE_UNIFIED)); \
+	/*							\
+	 * Should always be nonzero unless there's a kernel bug. \
+	 * If we haven't determined a sensible value to give to	\
+	 * userspace, omit the entry:				\
+	 */							\
+	if (likely(signal_minsigstksz))				\
+		NEW_AUX_ENT(AT_MINSIGSTKSZ, signal_minsigstksz); \
+	else							\
+		NEW_AUX_ENT(AT_IGNORE, 0);			\
 } while (0)
 
 #define ARCH_HAS_SETUP_ADDITIONAL_PAGES
 struct linux_binprm;
......
@@ -32,7 +32,11 @@ do {							\
 	(b)->data = (tmp).data;				\
 } while (0)
 
+#ifdef CONFIG_MMU
 bool fixup_exception(struct pt_regs *regs);
+#else
+static inline bool fixup_exception(struct pt_regs *regs) { return false; }
+#endif
 
 #if defined(CONFIG_BPF_JIT) && defined(CONFIG_ARCH_RV64I)
 bool ex_handler_bpf(const struct exception_table_entry *ex, struct pt_regs *regs);
......
@@ -22,6 +22,7 @@
 #define RISCV_ISA_EXT_m		('m' - 'a')
 #define RISCV_ISA_EXT_s		('s' - 'a')
 #define RISCV_ISA_EXT_u		('u' - 'a')
+#define RISCV_ISA_EXT_v		('v' - 'a')
 
 /*
  * These macros represent the logical IDs of each multi-letter RISC-V ISA
@@ -46,6 +47,12 @@
 #define RISCV_ISA_EXT_ZICBOZ		34
 #define RISCV_ISA_EXT_SMAIA		35
 #define RISCV_ISA_EXT_SSAIA		36
+#define RISCV_ISA_EXT_ZBA		37
+#define RISCV_ISA_EXT_ZBS		38
+#define RISCV_ISA_EXT_ZICNTR		39
+#define RISCV_ISA_EXT_ZICSR		40
+#define RISCV_ISA_EXT_ZIFENCEI		41
+#define RISCV_ISA_EXT_ZIHPM		42
 
 #define RISCV_ISA_EXT_MAX		64
 #define RISCV_ISA_EXT_NAME_LEN_MAX	32
@@ -60,6 +67,8 @@
 
 #include <linux/jump_label.h>
 
+unsigned long riscv_get_elf_hwcap(void);
+
 struct riscv_isa_ext_data {
 	/* Name of the extension displayed to userspace via /proc/cpuinfo */
 	char uprop[RISCV_ISA_EXT_NAME_LEN_MAX];
......
@@ -137,6 +137,26 @@
 #define RVG_OPCODE_JALR		0x67
 #define RVG_OPCODE_JAL		0x6f
 #define RVG_OPCODE_SYSTEM	0x73
+#define RVG_SYSTEM_CSR_OFF	20
+#define RVG_SYSTEM_CSR_MASK	GENMASK(12, 0)
+
+/* parts of opcode for RVF, RVD and RVQ */
+#define RVFDQ_FL_FS_WIDTH_OFF	12
+#define RVFDQ_FL_FS_WIDTH_MASK	GENMASK(3, 0)
+#define RVFDQ_FL_FS_WIDTH_W	2
+#define RVFDQ_FL_FS_WIDTH_D	3
+#define RVFDQ_LS_FS_WIDTH_Q	4
+#define RVFDQ_OPCODE_FL		0x07
+#define RVFDQ_OPCODE_FS		0x27
+
+/* parts of opcode for RVV */
+#define RVV_OPCODE_VECTOR	0x57
+#define RVV_VL_VS_WIDTH_8	0
+#define RVV_VL_VS_WIDTH_16	5
+#define RVV_VL_VS_WIDTH_32	6
+#define RVV_VL_VS_WIDTH_64	7
+#define RVV_OPCODE_VL		RVFDQ_OPCODE_FL
+#define RVV_OPCODE_VS		RVFDQ_OPCODE_FS
 
 /* parts of opcode for RVC*/
 #define RVC_OPCODE_C0		0x0
@@ -304,6 +324,15 @@ static __always_inline bool riscv_insn_is_branch(u32 code)
 	 (RVC_X(x_, RVC_B_IMM_7_6_OPOFF, RVC_B_IMM_7_6_MASK) << RVC_B_IMM_7_6_OFF) | \
 	 (RVC_IMM_SIGN(x_) << RVC_B_IMM_SIGN_OFF); })
 
+#define RVG_EXTRACT_SYSTEM_CSR(x) \
+	({typeof(x) x_ = (x); RV_X(x_, RVG_SYSTEM_CSR_OFF, RVG_SYSTEM_CSR_MASK); })
+
+#define RVFDQ_EXTRACT_FL_FS_WIDTH(x) \
+	({typeof(x) x_ = (x); RV_X(x_, RVFDQ_FL_FS_WIDTH_OFF, \
+				   RVFDQ_FL_FS_WIDTH_MASK); })
+
+#define RVV_EXRACT_VL_VS_WIDTH(x) RVFDQ_EXTRACT_FL_FS_WIDTH(x)
+
 /*
  * Get the immediate from a J-type instruction.
  *
......
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_RISCV_IRQ_STACK_H
#define _ASM_RISCV_IRQ_STACK_H
#include <linux/bug.h>
#include <linux/gfp.h>
#include <linux/kconfig.h>
#include <linux/vmalloc.h>
#include <linux/pgtable.h>
#include <asm/thread_info.h>
DECLARE_PER_CPU(ulong *, irq_stack_ptr);
#ifdef CONFIG_VMAP_STACK
/*
* To ensure that VMAP'd stack overflow detection works correctly, all VMAP'd
* stacks need to have the same alignment.
*/
static inline unsigned long *arch_alloc_vmap_stack(size_t stack_size, int node)
{
void *p;
p = __vmalloc_node(stack_size, THREAD_ALIGN, THREADINFO_GFP, node,
__builtin_return_address(0));
return kasan_reset_tag(p);
}
#endif /* CONFIG_VMAP_STACK */
#endif /* _ASM_RISCV_IRQ_STACK_H */
@@ -15,6 +15,7 @@
 #include <linux/spinlock.h>
 #include <asm/hwcap.h>
 #include <asm/kvm_aia.h>
+#include <asm/ptrace.h>
 #include <asm/kvm_vcpu_fp.h>
 #include <asm/kvm_vcpu_insn.h>
 #include <asm/kvm_vcpu_sbi.h>
@@ -145,6 +146,7 @@ struct kvm_cpu_context {
 	unsigned long sstatus;
 	unsigned long hstatus;
 	union __riscv_fp_state fp;
+	struct __riscv_v_ext_state vector;
 };
 
 struct kvm_vcpu_csr {
......
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (C) 2022 SiFive
*
* Authors:
* Vincent Chen <vincent.chen@sifive.com>
* Greentime Hu <greentime.hu@sifive.com>
*/
#ifndef __KVM_VCPU_RISCV_VECTOR_H
#define __KVM_VCPU_RISCV_VECTOR_H
#include <linux/types.h>
#ifdef CONFIG_RISCV_ISA_V
#include <asm/vector.h>
#include <asm/kvm_host.h>
static __always_inline void __kvm_riscv_vector_save(struct kvm_cpu_context *context)
{
__riscv_v_vstate_save(&context->vector, context->vector.datap);
}
static __always_inline void __kvm_riscv_vector_restore(struct kvm_cpu_context *context)
{
__riscv_v_vstate_restore(&context->vector, context->vector.datap);
}
void kvm_riscv_vcpu_vector_reset(struct kvm_vcpu *vcpu);
void kvm_riscv_vcpu_guest_vector_save(struct kvm_cpu_context *cntx,
unsigned long *isa);
void kvm_riscv_vcpu_guest_vector_restore(struct kvm_cpu_context *cntx,
unsigned long *isa);
void kvm_riscv_vcpu_host_vector_save(struct kvm_cpu_context *cntx);
void kvm_riscv_vcpu_host_vector_restore(struct kvm_cpu_context *cntx);
int kvm_riscv_vcpu_alloc_vector_context(struct kvm_vcpu *vcpu,
struct kvm_cpu_context *cntx);
void kvm_riscv_vcpu_free_vector_context(struct kvm_vcpu *vcpu);
#else
struct kvm_cpu_context;
static inline void kvm_riscv_vcpu_vector_reset(struct kvm_vcpu *vcpu)
{
}
static inline void kvm_riscv_vcpu_guest_vector_save(struct kvm_cpu_context *cntx,
unsigned long *isa)
{
}
static inline void kvm_riscv_vcpu_guest_vector_restore(struct kvm_cpu_context *cntx,
unsigned long *isa)
{
}
static inline void kvm_riscv_vcpu_host_vector_save(struct kvm_cpu_context *cntx)
{
}
static inline void kvm_riscv_vcpu_host_vector_restore(struct kvm_cpu_context *cntx)
{
}
static inline int kvm_riscv_vcpu_alloc_vector_context(struct kvm_vcpu *vcpu,
struct kvm_cpu_context *cntx)
{
return 0;
}
static inline void kvm_riscv_vcpu_free_vector_context(struct kvm_vcpu *vcpu)
{
}
#endif
int kvm_riscv_vcpu_get_reg_vector(struct kvm_vcpu *vcpu,
const struct kvm_one_reg *reg,
unsigned long rtype);
int kvm_riscv_vcpu_set_reg_vector(struct kvm_vcpu *vcpu,
const struct kvm_one_reg *reg,
unsigned long rtype);
#endif
@@ -7,6 +7,7 @@
 #define _ASM_RISCV_PROCESSOR_H
 
 #include <linux/const.h>
+#include <linux/cache.h>
 
 #include <vdso/processor.h>
 
@@ -39,6 +40,8 @@ struct thread_struct {
 	unsigned long s[12];	/* s[0]: frame pointer */
 	struct __riscv_d_ext_state fstate;
 	unsigned long bad_cause;
+	unsigned long vstate_ctrl;
+	struct __riscv_v_ext_state vstate;
 };
 
 /* Whitelist the fstate from the task_struct for hardened usercopy */
@@ -75,11 +78,22 @@ static inline void wait_for_interrupt(void)
 struct device_node;
 int riscv_of_processor_hartid(struct device_node *node, unsigned long *hartid);
+int riscv_early_of_processor_hartid(struct device_node *node, unsigned long *hartid);
 int riscv_of_parent_hartid(struct device_node *node, unsigned long *hartid);
 
 extern void riscv_fill_hwcap(void);
 extern int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src);
 
+extern unsigned long signal_minsigstksz __ro_after_init;
+
+#ifdef CONFIG_RISCV_ISA_V
+/* Userspace interface for PR_RISCV_V_{SET,GET}_VS prctl()s: */
+#define RISCV_V_SET_CONTROL(arg)	riscv_v_vstate_ctrl_set_current(arg)
+#define RISCV_V_GET_CONTROL()		riscv_v_vstate_ctrl_get_current()
+extern long riscv_v_vstate_ctrl_set_current(unsigned long arg);
+extern long riscv_v_vstate_ctrl_get_current(void);
+#endif /* CONFIG_RISCV_ISA_V */
+
 #endif /* __ASSEMBLY__ */
 
 #endif /* _ASM_RISCV_PROCESSOR_H */
@@ -8,6 +8,7 @@
 #include <linux/jump_label.h>
 #include <linux/sched/task_stack.h>
+#include <asm/vector.h>
 #include <asm/hwcap.h>
 #include <asm/processor.h>
 #include <asm/ptrace.h>
@@ -46,7 +47,7 @@ static inline void fstate_restore(struct task_struct *task,
 	}
 }
 
-static inline void __switch_to_aux(struct task_struct *prev,
+static inline void __switch_to_fpu(struct task_struct *prev,
 				   struct task_struct *next)
 {
 	struct pt_regs *regs;
@@ -66,7 +67,7 @@ static __always_inline bool has_fpu(void)
 static __always_inline bool has_fpu(void) { return false; }
 #define fstate_save(task, regs) do { } while (0)
 #define fstate_restore(task, regs) do { } while (0)
-#define __switch_to_aux(__prev, __next) do { } while (0)
+#define __switch_to_fpu(__prev, __next) do { } while (0)
 #endif
 
 extern struct task_struct *__switch_to(struct task_struct *,
@@ -77,7 +78,9 @@ do {							\
 	struct task_struct *__prev = (prev);		\
 	struct task_struct *__next = (next);		\
 	if (has_fpu())					\
-		__switch_to_aux(__prev, __next);	\
+		__switch_to_fpu(__prev, __next);	\
+	if (has_vector())				\
+		__switch_to_vector(__prev, __next);	\
 	((last) = __switch_to(__prev, __next));		\
 } while (0)
......
@@ -11,18 +11,8 @@
 #include <asm/page.h>
 #include <linux/const.h>
 
-#ifdef CONFIG_KASAN
-#define KASAN_STACK_ORDER 1
-#else
-#define KASAN_STACK_ORDER 0
-#endif
-
 /* thread information allocation */
-#ifdef CONFIG_64BIT
-#define THREAD_SIZE_ORDER	(2 + KASAN_STACK_ORDER)
-#else
-#define THREAD_SIZE_ORDER	(1 + KASAN_STACK_ORDER)
-#endif
+#define THREAD_SIZE_ORDER	CONFIG_THREAD_SIZE_ORDER
 #define THREAD_SIZE		(PAGE_SIZE << THREAD_SIZE_ORDER)
 
 /*
@@ -40,6 +30,8 @@
 #define OVERFLOW_STACK_SIZE	SZ_4K
 #define SHADOW_OVERFLOW_STACK_SIZE (1024)
 
+#define IRQ_STACK_SIZE		THREAD_SIZE
+
 #ifndef __ASSEMBLY__
 
 extern long shadow_stack[SHADOW_OVERFLOW_STACK_SIZE / sizeof(long)];
@@ -81,6 +73,9 @@ struct thread_info {
 	.preempt_count	= INIT_PREEMPT_COUNT,	\
 }
 
+void arch_release_task_struct(struct task_struct *tsk);
+int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src);
+
 #endif /* !__ASSEMBLY__ */
/* /*
......
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* Copyright (C) 2020 SiFive
*/
#ifndef __ASM_RISCV_VECTOR_H
#define __ASM_RISCV_VECTOR_H
#include <linux/types.h>
#include <uapi/asm-generic/errno.h>
#ifdef CONFIG_RISCV_ISA_V
#include <linux/stringify.h>
#include <linux/sched.h>
#include <linux/sched/task_stack.h>
#include <asm/ptrace.h>
#include <asm/hwcap.h>
#include <asm/csr.h>
#include <asm/asm.h>
extern unsigned long riscv_v_vsize;
int riscv_v_setup_vsize(void);
bool riscv_v_first_use_handler(struct pt_regs *regs);
static __always_inline bool has_vector(void)
{
return riscv_has_extension_unlikely(RISCV_ISA_EXT_v);
}
static inline void __riscv_v_vstate_clean(struct pt_regs *regs)
{
regs->status = (regs->status & ~SR_VS) | SR_VS_CLEAN;
}
static inline void riscv_v_vstate_off(struct pt_regs *regs)
{
regs->status = (regs->status & ~SR_VS) | SR_VS_OFF;
}
static inline void riscv_v_vstate_on(struct pt_regs *regs)
{
regs->status = (regs->status & ~SR_VS) | SR_VS_INITIAL;
}
static inline bool riscv_v_vstate_query(struct pt_regs *regs)
{
return (regs->status & SR_VS) != 0;
}
static __always_inline void riscv_v_enable(void)
{
csr_set(CSR_SSTATUS, SR_VS);
}
static __always_inline void riscv_v_disable(void)
{
csr_clear(CSR_SSTATUS, SR_VS);
}
static __always_inline void __vstate_csr_save(struct __riscv_v_ext_state *dest)
{
asm volatile (
"csrr %0, " __stringify(CSR_VSTART) "\n\t"
"csrr %1, " __stringify(CSR_VTYPE) "\n\t"
"csrr %2, " __stringify(CSR_VL) "\n\t"
"csrr %3, " __stringify(CSR_VCSR) "\n\t"
: "=r" (dest->vstart), "=r" (dest->vtype), "=r" (dest->vl),
"=r" (dest->vcsr) : :);
}
static __always_inline void __vstate_csr_restore(struct __riscv_v_ext_state *src)
{
asm volatile (
".option push\n\t"
".option arch, +v\n\t"
"vsetvl x0, %2, %1\n\t"
".option pop\n\t"
"csrw " __stringify(CSR_VSTART) ", %0\n\t"
"csrw " __stringify(CSR_VCSR) ", %3\n\t"
: : "r" (src->vstart), "r" (src->vtype), "r" (src->vl),
"r" (src->vcsr) :);
}
static inline void __riscv_v_vstate_save(struct __riscv_v_ext_state *save_to,
void *datap)
{
unsigned long vl;
riscv_v_enable();
__vstate_csr_save(save_to);
asm volatile (
".option push\n\t"
".option arch, +v\n\t"
"vsetvli %0, x0, e8, m8, ta, ma\n\t"
"vse8.v v0, (%1)\n\t"
"add %1, %1, %0\n\t"
"vse8.v v8, (%1)\n\t"
"add %1, %1, %0\n\t"
"vse8.v v16, (%1)\n\t"
"add %1, %1, %0\n\t"
"vse8.v v24, (%1)\n\t"
".option pop\n\t"
: "=&r" (vl) : "r" (datap) : "memory");
riscv_v_disable();
}
static inline void __riscv_v_vstate_restore(struct __riscv_v_ext_state *restore_from,
void *datap)
{
unsigned long vl;
riscv_v_enable();
asm volatile (
".option push\n\t"
".option arch, +v\n\t"
"vsetvli %0, x0, e8, m8, ta, ma\n\t"
"vle8.v v0, (%1)\n\t"
"add %1, %1, %0\n\t"
"vle8.v v8, (%1)\n\t"
"add %1, %1, %0\n\t"
"vle8.v v16, (%1)\n\t"
"add %1, %1, %0\n\t"
"vle8.v v24, (%1)\n\t"
".option pop\n\t"
: "=&r" (vl) : "r" (datap) : "memory");
__vstate_csr_restore(restore_from);
riscv_v_disable();
}
static inline void riscv_v_vstate_save(struct task_struct *task,
struct pt_regs *regs)
{
if ((regs->status & SR_VS) == SR_VS_DIRTY) {
struct __riscv_v_ext_state *vstate = &task->thread.vstate;
__riscv_v_vstate_save(vstate, vstate->datap);
__riscv_v_vstate_clean(regs);
}
}
static inline void riscv_v_vstate_restore(struct task_struct *task,
struct pt_regs *regs)
{
if ((regs->status & SR_VS) != SR_VS_OFF) {
struct __riscv_v_ext_state *vstate = &task->thread.vstate;
__riscv_v_vstate_restore(vstate, vstate->datap);
__riscv_v_vstate_clean(regs);
}
}
static inline void __switch_to_vector(struct task_struct *prev,
struct task_struct *next)
{
struct pt_regs *regs;
regs = task_pt_regs(prev);
riscv_v_vstate_save(prev, regs);
riscv_v_vstate_restore(next, task_pt_regs(next));
}
void riscv_v_vstate_ctrl_init(struct task_struct *tsk);
bool riscv_v_vstate_ctrl_user_allowed(void);
#else /* ! CONFIG_RISCV_ISA_V */
struct pt_regs;
static inline int riscv_v_setup_vsize(void) { return -EOPNOTSUPP; }
static __always_inline bool has_vector(void) { return false; }
static inline bool riscv_v_first_use_handler(struct pt_regs *regs) { return false; }
static inline bool riscv_v_vstate_query(struct pt_regs *regs) { return false; }
static inline bool riscv_v_vstate_ctrl_user_allowed(void) { return false; }
#define riscv_v_vsize (0)
#define riscv_v_vstate_save(task, regs) do {} while (0)
#define riscv_v_vstate_restore(task, regs) do {} while (0)
#define __switch_to_vector(__prev, __next) do {} while (0)
#define riscv_v_vstate_off(regs) do {} while (0)
#define riscv_v_vstate_on(regs) do {} while (0)
#endif /* CONFIG_RISCV_ISA_V */
#endif /* ! __ASM_RISCV_VECTOR_H */
@@ -35,5 +35,6 @@
 /* entries in ARCH_DLINFO */
 #define AT_VECTOR_SIZE_ARCH	9
+#define AT_MINSIGSTKSZ		51
 
 #endif /* _UAPI_ASM_RISCV_AUXVEC_H */
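
As context for the AT_MINSIGSTKSZ entry added above (populated by the elf.h
hunk earlier in this merge), userspace would typically consume it via
getauxval(3) when sizing an alternate signal stack. A hedged sketch, with a
compile-time fallback for headers that predate the constant:

    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/auxv.h>

    #ifndef AT_MINSIGSTKSZ
    #define AT_MINSIGSTKSZ 51	/* value from the uapi header above */
    #endif

    int main(void)
    {
        unsigned long min = getauxval(AT_MINSIGSTKSZ);
        stack_t ss = { 0 };

        /* Fall back to SIGSTKSZ if the kernel omitted the entry. */
        ss.ss_size = min ? min : SIGSTKSZ;
        ss.ss_sp = malloc(ss.ss_size);
        if (!ss.ss_sp || sigaltstack(&ss, NULL))
            return 1;

        printf("signal stack sized to %zu bytes\n", ss.ss_size);
        return 0;
    }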
@@ -21,5 +21,6 @@
 #define COMPAT_HWCAP_ISA_F	(1 << ('F' - 'A'))
 #define COMPAT_HWCAP_ISA_D	(1 << ('D' - 'A'))
 #define COMPAT_HWCAP_ISA_C	(1 << ('C' - 'A'))
+#define COMPAT_HWCAP_ISA_V	(1 << ('V' - 'A'))
 
 #endif /* _UAPI_ASM_RISCV_HWCAP_H */
@@ -25,6 +25,10 @@ struct riscv_hwprobe {
 #define		RISCV_HWPROBE_KEY_IMA_EXT_0	4
 #define		RISCV_HWPROBE_IMA_FD		(1 << 0)
 #define		RISCV_HWPROBE_IMA_C		(1 << 1)
+#define		RISCV_HWPROBE_IMA_V		(1 << 2)
+#define		RISCV_HWPROBE_EXT_ZBA		(1 << 3)
+#define		RISCV_HWPROBE_EXT_ZBB		(1 << 4)
+#define		RISCV_HWPROBE_EXT_ZBS		(1 << 5)
 #define		RISCV_HWPROBE_KEY_CPUPERF_0	5
 #define		RISCV_HWPROBE_MISALIGNED_UNKNOWN	(0 << 0)
 #define		RISCV_HWPROBE_MISALIGNED_EMULATED	(1 << 0)
......
@@ -121,6 +121,7 @@ enum KVM_RISCV_ISA_EXT_ID {
 	KVM_RISCV_ISA_EXT_ZICBOZ,
 	KVM_RISCV_ISA_EXT_ZBB,
 	KVM_RISCV_ISA_EXT_SSAIA,
+	KVM_RISCV_ISA_EXT_V,
 	KVM_RISCV_ISA_EXT_MAX,
 };
 
@@ -203,6 +204,13 @@ enum KVM_RISCV_SBI_EXT_ID {
 #define KVM_REG_RISCV_SBI_MULTI_REG_LAST	\
 		KVM_REG_RISCV_SBI_MULTI_REG(KVM_RISCV_SBI_EXT_MAX - 1)
 
+/* V extension registers are mapped as type 9 */
+#define KVM_REG_RISCV_VECTOR		(0x09 << KVM_REG_RISCV_TYPE_SHIFT)
+#define KVM_REG_RISCV_VECTOR_CSR_REG(name)	\
+		(offsetof(struct __riscv_v_ext_state, name) / sizeof(unsigned long))
+#define KVM_REG_RISCV_VECTOR_REG(n)	\
+		((n) + sizeof(struct __riscv_v_ext_state) / sizeof(unsigned long))
+
 #endif
 
 #endif /* __LINUX_KVM_RISCV_H */
@@ -71,12 +71,51 @@ struct __riscv_q_ext_state {
 	__u32 reserved[3];
 };
 
+struct __riscv_ctx_hdr {
+	__u32 magic;
+	__u32 size;
+};
+
+struct __riscv_extra_ext_header {
+	__u32 __padding[129] __attribute__((aligned(16)));
+	/*
+	 * Reserved for expansion of sigcontext structure. Currently zeroed
+	 * upon signal, and must be zero upon sigreturn.
+	 */
+	__u32 reserved;
+	struct __riscv_ctx_hdr hdr;
+};
+
 union __riscv_fp_state {
 	struct __riscv_f_ext_state f;
 	struct __riscv_d_ext_state d;
 	struct __riscv_q_ext_state q;
 };
 
+struct __riscv_v_ext_state {
+	unsigned long vstart;
+	unsigned long vl;
+	unsigned long vtype;
+	unsigned long vcsr;
+	void *datap;
+	/*
+	 * In a signal handler, datap will be set to a correct user stack
+	 * offset and the vector registers will be copied to the address of
+	 * the datap pointer.
+	 *
+	 * In a ptrace syscall, datap will be set to zero and the vector
+	 * registers will be copied to the address right after this
+	 * structure.
+	 */
+};
+
+/*
+ * According to the spec: The number of bits in a single vector register,
+ * VLEN >= ELEN, must be a power of 2, and must be no greater than
+ * 2^16 = 65536 bits = 8192 bytes.
+ */
+#define RISCV_MAX_VLENB (8192)
+
 #endif /* __ASSEMBLY__ */
 
 #endif /* _UAPI_ASM_RISCV_PTRACE_H */
@@ -8,6 +8,17 @@
 
 #include <asm/ptrace.h>
 
+/* The Magic number for signal context frame header. */
+#define RISCV_V_MAGIC		0x53465457
+#define END_MAGIC		0x0
+
+/* The size of END signal context header. */
+#define END_HDR_SIZE		0x0
+
+struct __sc_riscv_v_state {
+	struct __riscv_v_ext_state v_state;
+} __attribute__((aligned(16)));
+
 /*
  * Signal context structure
  *
@@ -16,7 +27,10 @@
  */
 struct sigcontext {
 	struct user_regs_struct sc_regs;
+	union {
 		union __riscv_fp_state sc_fpregs;
+		struct __riscv_extra_ext_header sc_extdesc;
+	};
 };
 
 #endif /* _UAPI_ASM_RISCV_SIGCONTEXT_H */
@@ -60,6 +60,7 @@ obj-$(CONFIG_MMU) += vdso.o vdso/
 obj-$(CONFIG_RISCV_M_MODE)	+= traps_misaligned.o
 obj-$(CONFIG_FPU)		+= fpu.o
+obj-$(CONFIG_RISCV_ISA_V)	+= vector.o
 obj-$(CONFIG_SMP)		+= smpboot.o
 obj-$(CONFIG_SMP)		+= smp.o
 obj-$(CONFIG_SMP)		+= cpu_ops.o
@@ -96,3 +97,4 @@ obj-$(CONFIG_COMPAT)		+= compat_signal.o
 obj-$(CONFIG_COMPAT)		+= compat_vdso/
 obj-$(CONFIG_64BIT)		+= pi/
+obj-$(CONFIG_ACPI)		+= acpi.o
// SPDX-License-Identifier: GPL-2.0-only
/*
* RISC-V Specific Low-Level ACPI Boot Support
*
* Copyright (C) 2013-2014, Linaro Ltd.
* Author: Al Stone <al.stone@linaro.org>
* Author: Graeme Gregory <graeme.gregory@linaro.org>
* Author: Hanjun Guo <hanjun.guo@linaro.org>
* Author: Tomasz Nowicki <tomasz.nowicki@linaro.org>
* Author: Naresh Bhat <naresh.bhat@linaro.org>
*
* Copyright (C) 2021-2023, Ventana Micro Systems Inc.
* Author: Sunil V L <sunilvl@ventanamicro.com>
*/
#include <linux/acpi.h>
#include <linux/io.h>
#include <linux/pci.h>
#include <linux/efi.h>
int acpi_noirq = 1; /* skip ACPI IRQ initialization */
int acpi_disabled = 1;
EXPORT_SYMBOL(acpi_disabled);
int acpi_pci_disabled = 1; /* skip ACPI PCI scan and IRQ initialization */
EXPORT_SYMBOL(acpi_pci_disabled);
static bool param_acpi_off __initdata;
static bool param_acpi_on __initdata;
static bool param_acpi_force __initdata;
static struct acpi_madt_rintc cpu_madt_rintc[NR_CPUS];
static int __init parse_acpi(char *arg)
{
if (!arg)
return -EINVAL;
/* "acpi=off" disables both ACPI table parsing and interpreter */
if (strcmp(arg, "off") == 0)
param_acpi_off = true;
else if (strcmp(arg, "on") == 0) /* prefer ACPI over DT */
param_acpi_on = true;
else if (strcmp(arg, "force") == 0) /* force ACPI to be enabled */
param_acpi_force = true;
else
return -EINVAL; /* Core will print when we return error */
return 0;
}
early_param("acpi", parse_acpi);
/*
* acpi_fadt_sanity_check() - Check FADT presence and carry out sanity
* checks on it
*
* Return 0 on success, <0 on failure
*/
static int __init acpi_fadt_sanity_check(void)
{
struct acpi_table_header *table;
struct acpi_table_fadt *fadt;
acpi_status status;
int ret = 0;
/*
* FADT is required on riscv; retrieve it to check its presence
* and carry out revision and ACPI HW reduced compliancy tests
*/
status = acpi_get_table(ACPI_SIG_FADT, 0, &table);
if (ACPI_FAILURE(status)) {
const char *msg = acpi_format_exception(status);
pr_err("Failed to get FADT table, %s\n", msg);
return -ENODEV;
}
fadt = (struct acpi_table_fadt *)table;
/*
* The revision in the table header is the FADT's Major revision. The
* FADT also has a minor revision, which is stored in the FADT itself.
*
 * TODO: Currently, we check for 6.5 as the minimum version to check
 * for the HW_REDUCED flag. However, once RISC-V updates are released
 * in the ACPI spec, this check needs updating to the exact minor
 * revision.
*/
if (table->revision < 6 || (table->revision == 6 && fadt->minor_revision < 5))
pr_err(FW_BUG "Unsupported FADT revision %d.%d, should be 6.5+\n",
table->revision, fadt->minor_revision);
if (!(fadt->flags & ACPI_FADT_HW_REDUCED)) {
pr_err("FADT not ACPI hardware reduced compliant\n");
ret = -EINVAL;
}
/*
* acpi_get_table() creates FADT table mapping that
* should be released after parsing and before resuming boot
*/
acpi_put_table(table);
return ret;
}
/*
* acpi_boot_table_init() called from setup_arch(), always.
* 1. find RSDP and get its address, and then find XSDT
* 2. extract all tables and checksums them all
* 3. check ACPI FADT HW reduced flag
*
* We can parse ACPI boot-time tables such as MADT after
* this function is called.
*
* On return ACPI is enabled if either:
*
* - ACPI tables are initialized and sanity checks passed
* - acpi=force was passed in the command line and ACPI was not disabled
* explicitly through acpi=off command line parameter
*
* ACPI is disabled on function return otherwise
*/
void __init acpi_boot_table_init(void)
{
/*
* Enable ACPI instead of device tree unless
* - ACPI has been disabled explicitly (acpi=off), or
* - firmware has not populated ACPI ptr in EFI system table
* and ACPI has not been [force] enabled (acpi=on|force)
*/
if (param_acpi_off ||
(!param_acpi_on && !param_acpi_force &&
efi.acpi20 == EFI_INVALID_TABLE_ADDR))
return;
/*
* ACPI is disabled at this point. Enable it in order to parse
* the ACPI tables and carry out sanity checks
*/
enable_acpi();
/*
* If ACPI tables are initialized and FADT sanity checks passed,
* leave ACPI enabled and carry on booting; otherwise disable ACPI
* on initialization error.
* If acpi=force was passed on the command line it forces ACPI
* to be enabled even if its initialization failed.
*/
if (acpi_table_init() || acpi_fadt_sanity_check()) {
pr_err("Failed to init ACPI tables\n");
if (!param_acpi_force)
disable_acpi();
}
}
static int acpi_parse_madt_rintc(union acpi_subtable_headers *header, const unsigned long end)
{
struct acpi_madt_rintc *rintc = (struct acpi_madt_rintc *)header;
int cpuid;
if (!(rintc->flags & ACPI_MADT_ENABLED))
return 0;
cpuid = riscv_hartid_to_cpuid(rintc->hart_id);
/*
 * When CONFIG_SMP is disabled, the mapping won't be created for
 * all CPUs; CPUs beyond num_possible_cpus() are ignored.
*/
if (cpuid >= 0 && cpuid < num_possible_cpus())
cpu_madt_rintc[cpuid] = *rintc;
return 0;
}
/*
* Instead of parsing (and freeing) the ACPI table, cache
* the RINTC structures since they are frequently used
* like in cpuinfo.
*/
void __init acpi_init_rintc_map(void)
{
if (acpi_table_parse_madt(ACPI_MADT_TYPE_RINTC, acpi_parse_madt_rintc, 0) <= 0) {
pr_err("No valid RINTC entries exist\n");
BUG();
}
}
struct acpi_madt_rintc *acpi_cpu_get_madt_rintc(int cpu)
{
return &cpu_madt_rintc[cpu];
}
u32 get_acpi_id_for_cpu(int cpu)
{
return acpi_cpu_get_madt_rintc(cpu)->uid;
}
/*
* __acpi_map_table() will be called before paging_init(), so early_ioremap()
 * or early_memremap() should be called here for ACPI table mapping.
*/
void __init __iomem *__acpi_map_table(unsigned long phys, unsigned long size)
{
if (!size)
return NULL;
return early_ioremap(phys, size);
}
void __init __acpi_unmap_table(void __iomem *map, unsigned long size)
{
if (!map || !size)
return;
early_iounmap(map, size);
}
void *acpi_os_ioremap(acpi_physical_address phys, acpi_size size)
{
return memremap(phys, size, MEMREMAP_WB);
}
#ifdef CONFIG_PCI
/*
* These interfaces are defined just to enable building ACPI core.
* TODO: Update it with actual implementation when external interrupt
* controller support is added in RISC-V ACPI.
*/
int raw_pci_read(unsigned int domain, unsigned int bus, unsigned int devfn,
int reg, int len, u32 *val)
{
return PCIBIOS_DEVICE_NOT_FOUND;
}
int raw_pci_write(unsigned int domain, unsigned int bus, unsigned int devfn,
int reg, int len, u32 val)
{
return PCIBIOS_DEVICE_NOT_FOUND;
}
int acpi_pci_bus_find_domain_nr(struct pci_bus *bus)
{
return -1;
}
struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root)
{
return NULL;
}
#endif /* CONFIG_PCI */
@@ -3,10 +3,13 @@
  * Copyright (C) 2012 Regents of the University of California
  */
 
+#include <linux/acpi.h>
 #include <linux/cpu.h>
+#include <linux/ctype.h>
 #include <linux/init.h>
 #include <linux/seq_file.h>
 #include <linux/of.h>
+#include <asm/acpi.h>
 #include <asm/cpufeature.h>
 #include <asm/csr.h>
 #include <asm/hwcap.h>
...@@ -19,6 +22,26 @@ ...@@ -19,6 +22,26 @@
* isn't an enabled and valid RISC-V hart node. * isn't an enabled and valid RISC-V hart node.
*/ */
int riscv_of_processor_hartid(struct device_node *node, unsigned long *hart) int riscv_of_processor_hartid(struct device_node *node, unsigned long *hart)
{
int cpu;
*hart = (unsigned long)of_get_cpu_hwid(node, 0);
if (*hart == ~0UL) {
pr_warn("Found CPU without hart ID\n");
return -ENODEV;
}
cpu = riscv_hartid_to_cpuid(*hart);
if (cpu < 0)
return cpu;
if (!cpu_possible(cpu))
return -ENODEV;
return 0;
}
int riscv_early_of_processor_hartid(struct device_node *node, unsigned long *hart)
{
	const char *isa;
@@ -27,7 +50,7 @@ int riscv_of_processor_hartid(struct device_node *node, unsigned long *hart)
		return -ENODEV;
	}

	*hart = (unsigned long)of_get_cpu_hwid(node, 0);
	if (*hart == ~0UL) {
		pr_warn("Found CPU without hart ID\n");
		return -ENODEV;
@@ -42,10 +65,12 @@ int riscv_of_processor_hartid(struct device_node *node, unsigned long *hart)
		pr_warn("CPU with hartid=%lu has no \"riscv,isa\" property\n", *hart);
		return -ENODEV;
	}
-	if (isa[0] != 'r' || isa[1] != 'v') {
-		pr_warn("CPU with hartid=%lu has an invalid ISA of \"%s\"\n", *hart, isa);
-		return -ENODEV;
-	}
	if (IS_ENABLED(CONFIG_32BIT) && strncasecmp(isa, "rv32ima", 7))
		return -ENODEV;

	if (IS_ENABLED(CONFIG_64BIT) && strncasecmp(isa, "rv64ima", 7))
		return -ENODEV;

	return 0;
}
@@ -183,8 +208,14 @@ arch_initcall(riscv_cpuinfo_init);
static struct riscv_isa_ext_data isa_ext_arr[] = {
	__RISCV_ISA_EXT_DATA(zicbom, RISCV_ISA_EXT_ZICBOM),
	__RISCV_ISA_EXT_DATA(zicboz, RISCV_ISA_EXT_ZICBOZ),
	__RISCV_ISA_EXT_DATA(zicntr, RISCV_ISA_EXT_ZICNTR),
	__RISCV_ISA_EXT_DATA(zicsr, RISCV_ISA_EXT_ZICSR),
	__RISCV_ISA_EXT_DATA(zifencei, RISCV_ISA_EXT_ZIFENCEI),
	__RISCV_ISA_EXT_DATA(zihintpause, RISCV_ISA_EXT_ZIHINTPAUSE),
	__RISCV_ISA_EXT_DATA(zihpm, RISCV_ISA_EXT_ZIHPM),
	__RISCV_ISA_EXT_DATA(zba, RISCV_ISA_EXT_ZBA),
	__RISCV_ISA_EXT_DATA(zbb, RISCV_ISA_EXT_ZBB),
	__RISCV_ISA_EXT_DATA(zbs, RISCV_ISA_EXT_ZBS),
	__RISCV_ISA_EXT_DATA(smaia, RISCV_ISA_EXT_SMAIA),
	__RISCV_ISA_EXT_DATA(ssaia, RISCV_ISA_EXT_SSAIA),
	__RISCV_ISA_EXT_DATA(sscofpmf, RISCV_ISA_EXT_SSCOFPMF),
@@ -283,23 +314,35 @@ static void c_stop(struct seq_file *m, void *v)
static int c_show(struct seq_file *m, void *v)
{
	unsigned long cpu_id = (unsigned long)v - 1;
-	struct device_node *node = of_get_cpu_node(cpu_id, NULL);
	struct riscv_cpuinfo *ci = per_cpu_ptr(&riscv_cpuinfo, cpu_id);
	struct device_node *node;
	const char *compat, *isa;

	seq_printf(m, "processor\t: %lu\n", cpu_id);
	seq_printf(m, "hart\t\t: %lu\n", cpuid_to_hartid_map(cpu_id));

	if (acpi_disabled) {
		node = of_get_cpu_node(cpu_id, NULL);

		if (!of_property_read_string(node, "riscv,isa", &isa))
			print_isa(m, isa);

		print_mmu(m);

		if (!of_property_read_string(node, "compatible", &compat) &&
		    strcmp(compat, "riscv"))
			seq_printf(m, "uarch\t\t: %s\n", compat);

		of_node_put(node);
	} else {
		if (!acpi_get_riscv_isa(NULL, cpu_id, &isa))
			print_isa(m, isa);

		print_mmu(m);
	}

	seq_printf(m, "mvendorid\t: 0x%lx\n", ci->mvendorid);
	seq_printf(m, "marchid\t\t: 0x%lx\n", ci->marchid);
	seq_printf(m, "mimpid\t\t: 0x%lx\n", ci->mimpid);
	seq_puts(m, "\n");
-	of_node_put(node);
	return 0;
}
......
@@ -48,10 +48,10 @@ _save_context:
	 * Disable user-mode memory access as it should only be set in the
	 * actual user copy routines.
	 *
-	 * Disable the FPU to detect illegal usage of floating point in kernel
-	 * space.
	 * Disable the FPU/Vector to detect illegal usage of floating point
	 * or vector in kernel space.
	 */
-	li t0, SR_SUM | SR_FS
	li t0, SR_SUM | SR_FS_VS

	REG_L s0, TASK_TI_USER_SP(tp)
	csrrc s1, CSR_STATUS, t0
@@ -348,6 +348,6 @@ SYM_CODE_END(excp_vect_table)
#ifndef CONFIG_MMU
SYM_CODE_START(__user_rt_sigreturn)
	li a7, __NR_rt_sigreturn
-	scall
	ecall
SYM_CODE_END(__user_rt_sigreturn)
#endif
@@ -140,10 +140,10 @@ secondary_start_sbi:
	.option pop

	/*
-	 * Disable FPU to detect illegal usage of
-	 * floating point in kernel space
	 * Disable FPU & VECTOR to detect illegal usage of
	 * floating point or vector in kernel space
	 */
-	li t0, SR_FS
	li t0, SR_FS_VS
	csrc CSR_STATUS, t0

	/* Set trap vector to spin forever to help debug */
@@ -234,10 +234,10 @@ pmp_done:
	.option pop

	/*
-	 * Disable FPU to detect illegal usage of
-	 * floating point in kernel space
	 * Disable FPU & VECTOR to detect illegal usage of
	 * floating point or vector in kernel space
	 */
-	li t0, SR_FS
	li t0, SR_FS_VS
	csrc CSR_STATUS, t0

#ifdef CONFIG_RISCV_BOOT_SPINWAIT
@@ -301,6 +301,7 @@ clear_bss_done:
	la tp, init_task
	la sp, init_thread_union + THREAD_SIZE
	XIP_FIXUP_OFFSET sp
	addi sp, sp, -PT_SIZE_ON_STACK
#ifdef CONFIG_BUILTIN_DTB
	la a0, __dtb_start
	XIP_FIXUP_OFFSET a0
@@ -318,6 +319,7 @@ clear_bss_done:
	/* Restore C environment */
	la tp, init_task
	la sp, init_thread_union + THREAD_SIZE
	addi sp, sp, -PT_SIZE_ON_STACK

#ifdef CONFIG_KASAN
	call kasan_early_init
@@ -392,7 +394,7 @@ ENTRY(reset_regs)
#ifdef CONFIG_FPU
	csrr t0, CSR_MISA
	andi t0, t0, (COMPAT_HWCAP_ISA_F | COMPAT_HWCAP_ISA_D)
-	beqz t0, .Lreset_regs_done
	beqz t0, .Lreset_regs_done_fpu

	li t1, SR_FS
	csrs CSR_STATUS, t1
@@ -430,8 +432,31 @@ ENTRY(reset_regs)
	fmv.s.x f31, zero
	csrw fcsr, 0
	/* note that the caller must clear SR_FS */
.Lreset_regs_done_fpu:
#endif /* CONFIG_FPU */
-.Lreset_regs_done:

#ifdef CONFIG_RISCV_ISA_V
	csrr t0, CSR_MISA
	li t1, COMPAT_HWCAP_ISA_V
	and t0, t0, t1
	beqz t0, .Lreset_regs_done_vector

	/*
	 * Clear vector registers and reset vcsr.
	 * VLMAX has a defined value, VLEN is a constant,
	 * and this form of vsetvli is defined to set vl to VLMAX.
	 */
	li t1, SR_VS
	csrs CSR_STATUS, t1
	csrs CSR_VCSR, x0
	vsetvli t1, x0, e8, m8, ta, ma
	vmv.v.i v0, 0
	vmv.v.i v8, 0
	vmv.v.i v16, 0
	vmv.v.i v24, 0
	/* note that the caller must clear SR_VS */
.Lreset_regs_done_vector:
#endif /* CONFIG_RISCV_ISA_V */
	ret
END(reset_regs)
#endif /* CONFIG_RISCV_M_MODE */
@@ -28,7 +28,6 @@ ENTRY(__hibernate_cpu_resume)
	REG_L	a0, hibernate_cpu_context

-	suspend_restore_csrs
	suspend_restore_regs

	/* Return zero value. */
@@ -50,7 +49,7 @@ ENTRY(hibernate_restore_image)
	REG_L	s4, restore_pblist
	REG_L	a1, relocated_restore_code

-	jalr	a1
	jr	a1
END(hibernate_restore_image)

/*
@@ -73,5 +72,5 @@ ENTRY(hibernate_core_restore_code)
	REG_L	s4, HIBERN_PBE_NEXT(s4)
	bnez	s4, .Lcopy

-	jalr	s2
	jr	s2
END(hibernate_core_restore_code)
@@ -80,7 +80,6 @@ int pfn_is_nosave(unsigned long pfn)
void notrace save_processor_state(void)
{
-	WARN_ON(num_online_cpus() != 1);
}

void notrace restore_processor_state(void)
......
@@ -11,6 +11,9 @@
#include <linux/module.h>
#include <linux/seq_file.h>
#include <asm/sbi.h>
#include <asm/smp.h>
#include <asm/softirq_stack.h>
#include <asm/stacktrace.h>

static struct fwnode_handle *(*__get_intc_node)(void);
@@ -28,6 +31,70 @@ struct fwnode_handle *riscv_get_intc_hwnode(void)
}
EXPORT_SYMBOL_GPL(riscv_get_intc_hwnode);
#ifdef CONFIG_IRQ_STACKS
#include <asm/irq_stack.h>
DEFINE_PER_CPU(ulong *, irq_stack_ptr);
#ifdef CONFIG_VMAP_STACK
static void init_irq_stacks(void)
{
int cpu;
ulong *p;
for_each_possible_cpu(cpu) {
p = arch_alloc_vmap_stack(IRQ_STACK_SIZE, cpu_to_node(cpu));
per_cpu(irq_stack_ptr, cpu) = p;
}
}
#else
/* irq stack only needs to be 16 byte aligned - not IRQ_STACK_SIZE aligned. */
DEFINE_PER_CPU_ALIGNED(ulong [IRQ_STACK_SIZE/sizeof(ulong)], irq_stack);
static void init_irq_stacks(void)
{
int cpu;
for_each_possible_cpu(cpu)
per_cpu(irq_stack_ptr, cpu) = per_cpu(irq_stack, cpu);
}
#endif /* CONFIG_VMAP_STACK */
#ifdef CONFIG_HAVE_SOFTIRQ_ON_OWN_STACK
void do_softirq_own_stack(void)
{
#ifdef CONFIG_IRQ_STACKS
if (on_thread_stack()) {
ulong *sp = per_cpu(irq_stack_ptr, smp_processor_id())
+ IRQ_STACK_SIZE/sizeof(ulong);
__asm__ __volatile(
"addi sp, sp, -"RISCV_SZPTR "\n"
REG_S" ra, (sp) \n"
"addi sp, sp, -"RISCV_SZPTR "\n"
REG_S" s0, (sp) \n"
"addi s0, sp, 2*"RISCV_SZPTR "\n"
"move sp, %[sp] \n"
"call __do_softirq \n"
"addi sp, s0, -2*"RISCV_SZPTR"\n"
REG_L" s0, (sp) \n"
"addi sp, sp, "RISCV_SZPTR "\n"
REG_L" ra, (sp) \n"
"addi sp, sp, "RISCV_SZPTR "\n"
:
: [sp] "r" (sp)
: "a0", "a1", "a2", "a3", "a4", "a5", "a6", "a7",
"t0", "t1", "t2", "t3", "t4", "t5", "t6",
"memory");
} else
#endif
__do_softirq();
}
#endif /* CONFIG_HAVE_SOFTIRQ_ON_OWN_STACK */
#else
static void init_irq_stacks(void) {}
#endif /* CONFIG_IRQ_STACKS */
int arch_show_interrupts(struct seq_file *p, int prec)
{
	show_ipi_stats(p, prec);
@@ -36,6 +103,7 @@ int arch_show_interrupts(struct seq_file *p, int prec)

void __init init_IRQ(void)
{
	init_irq_stacks();
	irqchip_init();
	if (!handle_arch_irq)
		panic("No interrupt controller found.");
......
@@ -67,6 +67,7 @@ int arch_uprobe_post_xol(struct arch_uprobe *auprobe, struct pt_regs *regs)
	struct uprobe_task *utask = current->utask;

	WARN_ON_ONCE(current->thread.bad_cause != UPROBE_TRAP_NR);
	current->thread.bad_cause = utask->autask.saved_cause;

	instruction_pointer_set(regs, utask->vaddr + auprobe->insn_size);
@@ -102,6 +103,7 @@ void arch_uprobe_abort_xol(struct arch_uprobe *auprobe, struct pt_regs *regs)
{
	struct uprobe_task *utask = current->utask;

	current->thread.bad_cause = utask->autask.saved_cause;

	/*
	 * Task has received a fatal signal, so reset back to probed
	 * address.
......
@@ -24,6 +24,7 @@
#include <asm/switch_to.h>
#include <asm/thread_info.h>
#include <asm/cpuidle.h>
#include <asm/vector.h>

register unsigned long gp_in_global __asm__("gp");
@@ -146,12 +147,29 @@ void flush_thread(void)
	fstate_off(current, task_pt_regs(current));
	memset(&current->thread.fstate, 0, sizeof(current->thread.fstate));
#endif
#ifdef CONFIG_RISCV_ISA_V
	/* Reset vector state */
	riscv_v_vstate_ctrl_init(current);
	riscv_v_vstate_off(task_pt_regs(current));
	kfree(current->thread.vstate.datap);
	memset(&current->thread.vstate, 0, sizeof(struct __riscv_v_ext_state));
#endif
}

void arch_release_task_struct(struct task_struct *tsk)
{
	/* Free the vector context of datap. */
	if (has_vector())
		kfree(tsk->thread.vstate.datap);
}

int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
{
	fstate_save(src, task_pt_regs(src));
	*dst = *src;
	/* clear entire V context, including datap for a new task */
	memset(&dst->thread.vstate, 0, sizeof(struct __riscv_v_ext_state));

	return 0;
}
@@ -176,6 +194,8 @@ int copy_thread(struct task_struct *p, const struct kernel_clone_args *args)
		p->thread.s[1] = (unsigned long)args->fn_arg;
	} else {
		*childregs = *(current_pt_regs());
		/* Turn off status.VS */
		riscv_v_vstate_off(childregs);
		if (usp) /* User fork */
			childregs->sp = usp;
		if (clone_flags & CLONE_SETTLS)
......
@@ -7,6 +7,7 @@
 * Copied from arch/tile/kernel/ptrace.c
 */

#include <asm/vector.h>
#include <asm/ptrace.h>
#include <asm/syscall.h>
#include <asm/thread_info.h>
@@ -24,6 +25,9 @@ enum riscv_regset {
#ifdef CONFIG_FPU
	REGSET_F,
#endif
#ifdef CONFIG_RISCV_ISA_V
	REGSET_V,
#endif
};

static int riscv_gpr_get(struct task_struct *target,
@@ -80,6 +84,61 @@ static int riscv_fpr_set(struct task_struct *target,
}
#endif
#ifdef CONFIG_RISCV_ISA_V
static int riscv_vr_get(struct task_struct *target,
const struct user_regset *regset,
struct membuf to)
{
struct __riscv_v_ext_state *vstate = &target->thread.vstate;
if (!riscv_v_vstate_query(task_pt_regs(target)))
return -EINVAL;
	/*
	 * Ensure the vector registers have been saved to memory before
	 * copying them to membuf.
	 */
if (target == current)
riscv_v_vstate_save(current, task_pt_regs(current));
/* Copy vector header from vstate. */
membuf_write(&to, vstate, offsetof(struct __riscv_v_ext_state, datap));
membuf_zero(&to, sizeof(vstate->datap));
/* Copy all the vector registers from vstate. */
return membuf_write(&to, vstate->datap, riscv_v_vsize);
}
static int riscv_vr_set(struct task_struct *target,
const struct user_regset *regset,
unsigned int pos, unsigned int count,
const void *kbuf, const void __user *ubuf)
{
int ret, size;
struct __riscv_v_ext_state *vstate = &target->thread.vstate;
if (!riscv_v_vstate_query(task_pt_regs(target)))
return -EINVAL;
/* Copy rest of the vstate except datap */
ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, vstate, 0,
offsetof(struct __riscv_v_ext_state, datap));
if (unlikely(ret))
return ret;
/* Skip copy datap. */
size = sizeof(vstate->datap);
count -= size;
ubuf += size;
/* Copy all the vector registers. */
pos = 0;
ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, vstate->datap,
0, riscv_v_vsize);
return ret;
}
#endif
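A hedged userspace sketch of how a debugger might consume the new regset from a tracee already in ptrace-stop (NT_RISCV_VECTOR is introduced by this series and may not be in older elf.h headers; error handling is minimal):

	#include <elf.h>
	#include <stdlib.h>
	#include <sys/ptrace.h>
	#include <sys/types.h>
	#include <sys/uio.h>

	/* Fetch the tracee's vector state; bufsz should cover header + 32 regs. */
	static void *read_vector_regset(pid_t pid, size_t bufsz)
	{
		void *buf = malloc(bufsz);
		struct iovec iov = { .iov_base = buf, .iov_len = bufsz };

		/* The kernel clamps iov_len to the regset's actual size. */
		if (ptrace(PTRACE_GETREGSET, pid, NT_RISCV_VECTOR, &iov) != 0) {
			free(buf);
			return NULL;
		}
		return buf;
	}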
static const struct user_regset riscv_user_regset[] = {
	[REGSET_X] = {
		.core_note_type = NT_PRSTATUS,
@@ -99,6 +158,17 @@ static const struct user_regset riscv_user_regset[] = {
		.set = riscv_fpr_set,
	},
#endif
#ifdef CONFIG_RISCV_ISA_V
	[REGSET_V] = {
		.core_note_type = NT_RISCV_VECTOR,
		.align = 16,
		.n = ((32 * RISCV_MAX_VLENB) +
		      sizeof(struct __riscv_v_ext_state)) / sizeof(__u32),
		.size = sizeof(__u32),
		.regset_get = riscv_vr_get,
		.set = riscv_vr_set,
	},
#endif
};

static const struct user_regset_view riscv_user_native_view = {
......
@@ -8,6 +8,7 @@
 * Nick Kossifidis <mick@ics.forth.gr>
 */

#include <linux/acpi.h>
#include <linux/cpu.h>
#include <linux/init.h>
#include <linux/mm.h>
@@ -21,6 +22,7 @@
#include <linux/efi.h>
#include <linux/crash_dump.h>

#include <asm/acpi.h>
#include <asm/alternative.h>
#include <asm/cacheflush.h>
#include <asm/cpu_ops.h>
@@ -262,6 +264,8 @@ static void __init parse_dtb(void)
#endif
}

extern void __init init_rt_signal_env(void);

void __init setup_arch(char **cmdline_p)
{
	parse_dtb();
@@ -270,11 +274,16 @@ void __init setup_arch(char **cmdline_p)
	*cmdline_p = boot_command_line;

	early_ioremap_setup();
	sbi_init();
	jump_label_init();
	parse_early_param();

	efi_init();
	paging_init();

	/* Parse the ACPI tables for possible boot-time configuration */
	acpi_boot_table_init();

#if IS_ENABLED(CONFIG_BUILTIN_DTB)
	unflatten_and_copy_device_tree();
#else
@@ -283,7 +292,6 @@ void __init setup_arch(char **cmdline_p)
	misc_mem_init();
	init_resources();
-	sbi_init();

#ifdef CONFIG_KASAN
	kasan_init();
@@ -293,8 +301,12 @@ void __init setup_arch(char **cmdline_p)
	setup_smp();
#endif

	if (!acpi_disabled)
		acpi_init_rintc_map();

	riscv_init_cbo_blocksizes();
	riscv_fill_hwcap();
	init_rt_signal_env();
	apply_boot_alternatives();
	if (IS_ENABLED(CONFIG_RISCV_ISA_ZICBOM) &&
	    riscv_isa_extension_available(NULL, ZICBOM))
......
@@ -19,10 +19,14 @@
#include <asm/signal.h>
#include <asm/signal32.h>
#include <asm/switch_to.h>
#include <asm/vector.h>
#include <asm/csr.h>
#include <asm/cacheflush.h>

unsigned long signal_minsigstksz __ro_after_init;

extern u32 __user_rt_sigreturn[2];
static size_t riscv_v_sc_size __ro_after_init;

#define DEBUG_SIG 0
@@ -40,26 +44,13 @@ static long restore_fp_state(struct pt_regs *regs,
{
	long err;
	struct __riscv_d_ext_state __user *state = &sc_fpregs->d;
-	size_t i;

	err = __copy_from_user(&current->thread.fstate, state, sizeof(*state));
	if (unlikely(err))
		return err;

	fstate_restore(current, regs);
-
-	/* We support no other extension state at this time. */
-	for (i = 0; i < ARRAY_SIZE(sc_fpregs->q.reserved); i++) {
-		u32 value;
-
-		err = __get_user(value, &sc_fpregs->q.reserved[i]);
-		if (unlikely(err))
-			break;
-		if (value != 0)
-			return -EINVAL;
-	}
-
-	return err;
	return 0;
}
static long save_fp_state(struct pt_regs *regs,
@@ -67,52 +58,186 @@ static long save_fp_state(struct pt_regs *regs,
{
	long err;
	struct __riscv_d_ext_state __user *state = &sc_fpregs->d;
-	size_t i;

	fstate_save(current, regs);
	err = __copy_to_user(state, &current->thread.fstate, sizeof(*state));
-	if (unlikely(err))
-		return err;
-
-	/* We support no other extension state at this time. */
-	for (i = 0; i < ARRAY_SIZE(sc_fpregs->q.reserved); i++) {
-		err = __put_user(0, &sc_fpregs->q.reserved[i]);
-		if (unlikely(err))
-			break;
-	}
	return err;
}
#else
#define save_fp_state(task, regs) (0)
#define restore_fp_state(task, regs) (0)
#endif

#ifdef CONFIG_RISCV_ISA_V

static long save_v_state(struct pt_regs *regs, void __user **sc_vec)
{
	struct __riscv_ctx_hdr __user *hdr;
	struct __sc_riscv_v_state __user *state;
	void __user *datap;
	long err;

	hdr = *sc_vec;
	/* Place state to the user's signal context space after the hdr */
	state = (struct __sc_riscv_v_state __user *)(hdr + 1);
	/* Point datap right after the end of __sc_riscv_v_state */
	datap = state + 1;

	/* datap is designed to be 16 byte aligned for better performance */
	WARN_ON(unlikely(!IS_ALIGNED((unsigned long)datap, 16)));

	riscv_v_vstate_save(current, regs);
	/* Copy everything of vstate but datap. */
	err = __copy_to_user(&state->v_state, &current->thread.vstate,
			     offsetof(struct __riscv_v_ext_state, datap));
	/* Copy the pointer datap itself. */
	err |= __put_user(datap, &state->v_state.datap);
	/* Copy the whole vector content to user space datap. */
	err |= __copy_to_user(datap, current->thread.vstate.datap, riscv_v_vsize);
	/* Copy magic to the user space after saving all vector context */
	err |= __put_user(RISCV_V_MAGIC, &hdr->magic);
	err |= __put_user(riscv_v_sc_size, &hdr->size);
	if (unlikely(err))
		return err;

	/* Only progress the sc_vec if everything has been done successfully */
	*sc_vec += riscv_v_sc_size;
	return 0;
}

/*
 * Restore Vector extension context from the user's signal frame. This function
 * assumes a valid extension header, so magic and size checking must be done by
 * the caller.
 */
static long __restore_v_state(struct pt_regs *regs, void __user *sc_vec)
{
	long err;
	struct __sc_riscv_v_state __user *state = sc_vec;
	void __user *datap;

	/* Copy everything of __sc_riscv_v_state except datap. */
	err = __copy_from_user(&current->thread.vstate, &state->v_state,
			       offsetof(struct __riscv_v_ext_state, datap));
	if (unlikely(err))
		return err;

	/* Copy the pointer datap itself. */
	err = __get_user(datap, &state->v_state.datap);
	if (unlikely(err))
		return err;
	/*
	 * Copy the whole vector content from user space datap. Use
	 * copy_from_user to prevent information leak.
	 */
	err = copy_from_user(current->thread.vstate.datap, datap, riscv_v_vsize);
	if (unlikely(err))
		return err;

	riscv_v_vstate_restore(current, regs);

	return err;
}
#else
#define save_v_state(task, regs) (0)
#define __restore_v_state(task, regs) (0)
#endif
static long restore_sigcontext(struct pt_regs *regs,
	struct sigcontext __user *sc)
{
	void __user *sc_ext_ptr = &sc->sc_extdesc.hdr;
	__u32 rsvd;
	long err;

	/* sc_regs is structured the same as the start of pt_regs */
	err = __copy_from_user(regs, &sc->sc_regs, sizeof(sc->sc_regs));
	if (unlikely(err))
		return err;

	/* Restore the floating-point state. */
	if (has_fpu()) {
		err = restore_fp_state(regs, &sc->sc_fpregs);
		if (unlikely(err))
			return err;
	}

	/* Check the reserved word before extensions parsing */
	err = __get_user(rsvd, &sc->sc_extdesc.reserved);
	if (unlikely(err))
		return err;
	if (unlikely(rsvd))
		return -EINVAL;

	while (!err) {
		__u32 magic, size;
		struct __riscv_ctx_hdr __user *head = sc_ext_ptr;

		err |= __get_user(magic, &head->magic);
		err |= __get_user(size, &head->size);
		if (unlikely(err))
			return err;

		sc_ext_ptr += sizeof(*head);
		switch (magic) {
		case END_MAGIC:
			if (size != END_HDR_SIZE)
				return -EINVAL;
			return 0;
		case RISCV_V_MAGIC:
			if (!has_vector() || !riscv_v_vstate_query(regs) ||
			    size != riscv_v_sc_size)
				return -EINVAL;

			err = __restore_v_state(regs, sc_ext_ptr);
			break;
		default:
			return -EINVAL;
		}
		sc_ext_ptr = (void __user *)head + size;
	}
	return err;
}
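The loop above is easier to follow with the frame layout in view; a sketch of the extension area that setup_sigcontext()/save_v_state() produce, assuming only the V extension is present:

/*
 * Layout sketch (reference only, not in the original source):
 *
 *   sc_extdesc.hdr        { magic = RISCV_V_MAGIC, size = riscv_v_sc_size }
 *   __sc_riscv_v_state    vstart/vl/vtype/vcsr plus the user datap pointer
 *   datap                 riscv_v_vsize bytes of vector register contents
 *   trailing header       { magic = END_MAGIC, size = END_HDR_SIZE }
 */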
static size_t get_rt_frame_size(bool cal_all)
{
struct rt_sigframe __user *frame;
size_t frame_size;
size_t total_context_size = 0;
frame_size = sizeof(*frame);
if (has_vector()) {
if (cal_all || riscv_v_vstate_query(task_pt_regs(current)))
total_context_size += riscv_v_sc_size;
}
	/*
	 * Preserve a __riscv_ctx_hdr for the END signal context header if
	 * any extension uses __riscv_extra_ext_header.
	 */
if (total_context_size)
total_context_size += sizeof(struct __riscv_ctx_hdr);
frame_size += total_context_size;
frame_size = round_up(frame_size, 16);
return frame_size;
}
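A worked sizing example, assuming a VLEN of 128 bits: vlenb is 16 bytes, so riscv_v_vsize = 32 * 16 = 512 bytes and riscv_v_sc_size = sizeof(struct __riscv_ctx_hdr) + sizeof(struct __sc_riscv_v_state) + 512. get_rt_frame_size() then adds riscv_v_sc_size plus one trailing __riscv_ctx_hdr to sizeof(*frame) and rounds the total up to a multiple of 16.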
SYSCALL_DEFINE0(rt_sigreturn)
{
	struct pt_regs *regs = current_pt_regs();
	struct rt_sigframe __user *frame;
	struct task_struct *task;
	sigset_t set;
	size_t frame_size = get_rt_frame_size(false);

	/* Always make any pending restarted system calls return -EINTR */
	current->restart_block.fn = do_no_restart_syscall;

	frame = (struct rt_sigframe __user *)regs->sp;

-	if (!access_ok(frame, sizeof(*frame)))
	if (!access_ok(frame, frame_size))
		goto badframe;

	if (__copy_from_user(&set, &frame->uc.uc_sigmask, sizeof(set)))
@@ -146,12 +271,23 @@ static long setup_sigcontext(struct rt_sigframe __user *frame,
	struct pt_regs *regs)
{
	struct sigcontext __user *sc = &frame->uc.uc_mcontext;
	struct __riscv_ctx_hdr __user *sc_ext_ptr = &sc->sc_extdesc.hdr;
	long err;

	/* sc_regs is structured the same as the start of pt_regs */
	err = __copy_to_user(&sc->sc_regs, regs, sizeof(sc->sc_regs));

	/* Save the floating-point state. */
	if (has_fpu())
		err |= save_fp_state(regs, &sc->sc_fpregs);

	/* Save the vector state. */
	if (has_vector() && riscv_v_vstate_query(regs))
		err |= save_v_state(regs, (void __user **)&sc_ext_ptr);

	/* Write zero to fp-reserved space and check it on restore_sigcontext */
	err |= __put_user(0, &sc->sc_extdesc.reserved);
	/* And put END __riscv_ctx_hdr at the end. */
	err |= __put_user(END_MAGIC, &sc_ext_ptr->magic);
	err |= __put_user(END_HDR_SIZE, &sc_ext_ptr->size);

	return err;
}
@@ -175,6 +311,13 @@ static inline void __user *get_sigframe(struct ksignal *ksig,
	/* Align the stack frame. */
	sp &= ~0xfUL;

	/*
	 * Fail if the size of the altstack is not large enough for the
	 * sigframe construction.
	 */
	if (current->sas_ss_size && sp < current->sas_ss_sp)
		return (void __user __force *)-1UL;

	return (void __user *)sp;
}
@@ -184,9 +327,10 @@ static int setup_rt_frame(struct ksignal *ksig, sigset_t *set,
	struct rt_sigframe __user *frame;
	long err = 0;
	unsigned long __maybe_unused addr;
	size_t frame_size = get_rt_frame_size(false);

-	frame = get_sigframe(ksig, regs, sizeof(*frame));
-	if (!access_ok(frame, sizeof(*frame)))
	frame = get_sigframe(ksig, regs, frame_size);
	if (!access_ok(frame, frame_size))
		return -EFAULT;

	err |= copy_siginfo_to_user(&frame->info, &ksig->info);
@@ -319,3 +463,23 @@ void arch_do_signal_or_restart(struct pt_regs *regs)
	 */
	restore_saved_sigmask();
}
void init_rt_signal_env(void);
void __init init_rt_signal_env(void)
{
riscv_v_sc_size = sizeof(struct __riscv_ctx_hdr) +
sizeof(struct __sc_riscv_v_state) + riscv_v_vsize;
/*
* Determine the stack space required for guaranteed signal delivery.
* The signal_minsigstksz will be populated into the AT_MINSIGSTKSZ entry
* in the auxiliary array at process startup.
*/
signal_minsigstksz = get_rt_frame_size(true);
}
#ifdef CONFIG_DYNAMIC_SIGFRAME
bool sigaltstack_size_valid(size_t ss_size)
{
return ss_size > get_rt_frame_size(false);
}
#endif /* CONFIG_DYNAMIC_SIGFRAME */
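A hedged userspace sketch of how the exported minimum is meant to be consumed when sizing an alternate signal stack (standard glibc APIs; error handling elided):

	#include <signal.h>
	#include <stdlib.h>
	#include <sys/auxv.h>

	/* Size the altstack from AT_MINSIGSTKSZ so vector state always fits. */
	static void setup_altstack(void)
	{
		size_t sz = getauxval(AT_MINSIGSTKSZ);
		stack_t ss;

		if (sz < SIGSTKSZ)
			sz = SIGSTKSZ;
		ss.ss_sp = malloc(sz);
		ss.ss_size = sz;
		ss.ss_flags = 0;
		sigaltstack(&ss, NULL);
	}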
@@ -8,6 +8,7 @@
 * Copyright (C) 2017 SiFive
 */

#include <linux/acpi.h>
#include <linux/arch_topology.h>
#include <linux/module.h>
#include <linux/init.h>
@@ -31,6 +32,8 @@
#include <asm/tlbflush.h>
#include <asm/sections.h>
#include <asm/smp.h>
#include <uapi/asm/hwcap.h>
#include <asm/vector.h>

#include "head.h"
@@ -70,7 +73,73 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
	}
}
-void __init setup_smp(void)
#ifdef CONFIG_ACPI
static unsigned int cpu_count = 1;
static int __init acpi_parse_rintc(union acpi_subtable_headers *header, const unsigned long end)
{
unsigned long hart;
static bool found_boot_cpu;
struct acpi_madt_rintc *processor = (struct acpi_madt_rintc *)header;
	/*
	 * Each RINTC structure in the MADT has a flags field. If the
	 * ACPI_MADT_ENABLED bit is not set, the OS should not try to
	 * bring up the CPU to which that RINTC belongs.
	 */
if (!(processor->flags & ACPI_MADT_ENABLED))
return 0;
if (BAD_MADT_ENTRY(processor, end))
return -EINVAL;
acpi_table_print_madt_entry(&header->common);
hart = processor->hart_id;
if (hart == INVALID_HARTID) {
pr_warn("Invalid hartid\n");
return 0;
}
if (hart == cpuid_to_hartid_map(0)) {
BUG_ON(found_boot_cpu);
found_boot_cpu = true;
early_map_cpu_to_node(0, acpi_numa_get_nid(cpu_count));
return 0;
}
if (cpu_count >= NR_CPUS) {
pr_warn("NR_CPUS is too small for the number of ACPI tables.\n");
return 0;
}
cpuid_to_hartid_map(cpu_count) = hart;
early_map_cpu_to_node(cpu_count, acpi_numa_get_nid(cpu_count));
cpu_count++;
return 0;
}
static void __init acpi_parse_and_init_cpus(void)
{
int cpuid;
cpu_set_ops(0);
acpi_table_parse_madt(ACPI_MADT_TYPE_RINTC, acpi_parse_rintc, 0);
for (cpuid = 1; cpuid < nr_cpu_ids; cpuid++) {
if (cpuid_to_hartid_map(cpuid) != INVALID_HARTID) {
cpu_set_ops(cpuid);
set_cpu_possible(cpuid, true);
}
}
}
#else
#define acpi_parse_and_init_cpus(...) do { } while (0)
#endif
static void __init of_parse_and_init_cpus(void)
{
	struct device_node *dn;
	unsigned long hart;
@@ -81,7 +150,7 @@ void __init setup_smp(void)
	cpu_set_ops(0);

	for_each_of_cpu_node(dn) {
-		rc = riscv_of_processor_hartid(dn, &hart);
		rc = riscv_early_of_processor_hartid(dn, &hart);
		if (rc < 0)
			continue;
@@ -116,6 +185,14 @@ void __init setup_smp(void)
	}
}
void __init setup_smp(void)
{
if (acpi_disabled)
of_parse_and_init_cpus();
else
acpi_parse_and_init_cpus();
}
static int start_secondary_cpu(int cpu, struct task_struct *tidle)
{
	if (cpu_ops[cpu]->cpu_start)
@@ -169,6 +246,11 @@ asmlinkage __visible void smp_callin(void)
	set_cpu_online(curr_cpuid, 1);
	probe_vendor_features(curr_cpuid);

	if (has_vector()) {
		if (riscv_v_setup_vsize())
			elf_hwcap &= ~COMPAT_HWCAP_ISA_V;
	}

	/*
	 * Remote TLB flushes are ignored while the CPU is offline, so emit
	 * a local TLB flush right now just in case.
......
@@ -10,6 +10,7 @@
#include <asm/cpufeature.h>
#include <asm/hwprobe.h>
#include <asm/sbi.h>
#include <asm/vector.h>
#include <asm/switch_to.h>
#include <asm/uaccess.h>
#include <asm/unistd.h>
@@ -121,6 +122,49 @@ static void hwprobe_arch_id(struct riscv_hwprobe *pair,
	pair->value = id;
}
static void hwprobe_isa_ext0(struct riscv_hwprobe *pair,
const struct cpumask *cpus)
{
int cpu;
u64 missing = 0;
pair->value = 0;
if (has_fpu())
pair->value |= RISCV_HWPROBE_IMA_FD;
if (riscv_isa_extension_available(NULL, c))
pair->value |= RISCV_HWPROBE_IMA_C;
if (has_vector())
pair->value |= RISCV_HWPROBE_IMA_V;
/*
* Loop through and record extensions that 1) anyone has, and 2) anyone
* doesn't have.
*/
for_each_cpu(cpu, cpus) {
struct riscv_isainfo *isainfo = &hart_isa[cpu];
if (riscv_isa_extension_available(isainfo->isa, ZBA))
pair->value |= RISCV_HWPROBE_EXT_ZBA;
else
missing |= RISCV_HWPROBE_EXT_ZBA;
if (riscv_isa_extension_available(isainfo->isa, ZBB))
pair->value |= RISCV_HWPROBE_EXT_ZBB;
else
missing |= RISCV_HWPROBE_EXT_ZBB;
if (riscv_isa_extension_available(isainfo->isa, ZBS))
pair->value |= RISCV_HWPROBE_EXT_ZBS;
else
missing |= RISCV_HWPROBE_EXT_ZBS;
}
/* Now turn off reporting features if any CPU is missing it. */
pair->value &= ~missing;
}
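For reference, a hedged userspace sketch of querying these bits through the hwprobe syscall (assumes a libc exposing syscall(2) and the uapi header; error handling elided):

	#include <asm/hwprobe.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	/* Returns nonzero if every CPU implements Zbb. */
	static int have_zbb(void)
	{
		struct riscv_hwprobe pair = { .key = RISCV_HWPROBE_KEY_IMA_EXT_0 };

		/* cpu_count == 0 with a NULL mask means "all CPUs". */
		if (syscall(__NR_riscv_hwprobe, &pair, 1, 0, NULL, 0))
			return 0;
		return (pair.value & RISCV_HWPROBE_EXT_ZBB) != 0;
	}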
static u64 hwprobe_misaligned(const struct cpumask *cpus)
{
	int cpu;
@@ -164,13 +208,7 @@ static void hwprobe_one_pair(struct riscv_hwprobe *pair,
		break;

	case RISCV_HWPROBE_KEY_IMA_EXT_0:
-		pair->value = 0;
-		if (has_fpu())
-			pair->value |= RISCV_HWPROBE_IMA_FD;
-		if (riscv_isa_extension_available(NULL, c))
-			pair->value |= RISCV_HWPROBE_IMA_C;
		hwprobe_isa_ext0(pair, cpus);
		break;

	case RISCV_HWPROBE_KEY_CPUPERF_0:
......
@@ -4,6 +4,7 @@
 * Copyright (C) 2017 SiFive
 */

#include <linux/acpi.h>
#include <linux/of_clk.h>
#include <linux/clockchips.h>
#include <linux/clocksource.h>
@@ -18,17 +19,29 @@ EXPORT_SYMBOL_GPL(riscv_timebase);
void __init time_init(void)
{
	struct device_node *cpu;
	struct acpi_table_rhct *rhct;
	acpi_status status;
	u32 prop;

	if (acpi_disabled) {
		cpu = of_find_node_by_path("/cpus");
		if (!cpu || of_property_read_u32(cpu, "timebase-frequency", &prop))
			panic("RISC-V system with no 'timebase-frequency' in DTS\n");

		of_node_put(cpu);
		riscv_timebase = prop;
		of_clk_init(NULL);
	} else {
		status = acpi_get_table(ACPI_SIG_RHCT, 0, (struct acpi_table_header **)&rhct);
		if (ACPI_FAILURE(status))
			panic("RISC-V ACPI system with no RHCT table\n");

		riscv_timebase = rhct->time_base_freq;
		acpi_put_table((struct acpi_table_header *)rhct);
	}

	lpj_fine = riscv_timebase / HZ;

	timer_probe();

	tick_setup_hrtimer_broadcast();
......
@@ -26,6 +26,8 @@
#include <asm/ptrace.h>
#include <asm/syscall.h>
#include <asm/thread_info.h>
#include <asm/vector.h>
#include <asm/irq_stack.h>

int show_unhandled_signals = 1;
@@ -145,8 +147,29 @@ DO_ERROR_INFO(do_trap_insn_misaligned,
	SIGBUS, BUS_ADRALN, "instruction address misaligned");
DO_ERROR_INFO(do_trap_insn_fault,
	SIGSEGV, SEGV_ACCERR, "instruction access fault");
-DO_ERROR_INFO(do_trap_insn_illegal,
-	SIGILL, ILL_ILLOPC, "illegal instruction");

asmlinkage __visible __trap_section void do_trap_insn_illegal(struct pt_regs *regs)
{
	if (user_mode(regs)) {
		irqentry_enter_from_user_mode(regs);

		local_irq_enable();

		if (!riscv_v_first_use_handler(regs))
			do_trap_error(regs, SIGILL, ILL_ILLOPC, regs->epc,
				      "Oops - illegal instruction");

		irqentry_exit_to_user_mode(regs);
	} else {
		irqentry_state_t state = irqentry_nmi_enter(regs);

		do_trap_error(regs, SIGILL, ILL_ILLOPC, regs->epc,
			      "Oops - illegal instruction");

		irqentry_nmi_exit(regs, state);
	}
}

DO_ERROR_INFO(do_trap_load_fault,
	SIGSEGV, SEGV_ACCERR, "load access fault");
#ifndef CONFIG_RISCV_M_MODE
@@ -305,16 +328,46 @@ asmlinkage __visible noinstr void do_page_fault(struct pt_regs *regs)
}
#endif
-asmlinkage __visible noinstr void do_irq(struct pt_regs *regs)
static void noinstr handle_riscv_irq(struct pt_regs *regs)
{
	struct pt_regs *old_regs;
-	irqentry_state_t state = irqentry_enter(regs);

	irq_enter_rcu();
	old_regs = set_irq_regs(regs);
	handle_arch_irq(regs);
	set_irq_regs(old_regs);
	irq_exit_rcu();
}
asmlinkage void noinstr do_irq(struct pt_regs *regs)
{
irqentry_state_t state = irqentry_enter(regs);
#ifdef CONFIG_IRQ_STACKS
if (on_thread_stack()) {
ulong *sp = per_cpu(irq_stack_ptr, smp_processor_id())
+ IRQ_STACK_SIZE/sizeof(ulong);
__asm__ __volatile(
"addi sp, sp, -"RISCV_SZPTR "\n"
REG_S" ra, (sp) \n"
"addi sp, sp, -"RISCV_SZPTR "\n"
REG_S" s0, (sp) \n"
"addi s0, sp, 2*"RISCV_SZPTR "\n"
"move sp, %[sp] \n"
"move a0, %[regs] \n"
"call handle_riscv_irq \n"
"addi sp, s0, -2*"RISCV_SZPTR"\n"
REG_L" s0, (sp) \n"
"addi sp, sp, "RISCV_SZPTR "\n"
REG_L" ra, (sp) \n"
"addi sp, sp, "RISCV_SZPTR "\n"
:
: [sp] "r" (sp), [regs] "r" (regs)
: "a0", "a1", "a2", "a3", "a4", "a5", "a6", "a7",
"t0", "t1", "t2", "t3", "t4", "t5", "t6",
"memory");
} else
#endif
handle_riscv_irq(regs);
	irqentry_exit(regs, state);
}
......
@@ -11,6 +11,6 @@ ENTRY(__vdso_rt_sigreturn)
	.cfi_startproc
	.cfi_signal_frame
	li a7, __NR_rt_sigreturn
-	scall
	ecall
	.cfi_endproc
ENDPROC(__vdso_rt_sigreturn)
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Copyright (C) 2023 SiFive
* Author: Andy Chiu <andy.chiu@sifive.com>
*/
#include <linux/export.h>
#include <linux/sched/signal.h>
#include <linux/types.h>
#include <linux/slab.h>
#include <linux/sched.h>
#include <linux/uaccess.h>
#include <linux/prctl.h>
#include <asm/thread_info.h>
#include <asm/processor.h>
#include <asm/insn.h>
#include <asm/vector.h>
#include <asm/csr.h>
#include <asm/elf.h>
#include <asm/ptrace.h>
#include <asm/bug.h>
static bool riscv_v_implicit_uacc = IS_ENABLED(CONFIG_RISCV_ISA_V_DEFAULT_ENABLE);
unsigned long riscv_v_vsize __read_mostly;
EXPORT_SYMBOL_GPL(riscv_v_vsize);
int riscv_v_setup_vsize(void)
{
unsigned long this_vsize;
/* There are 32 vector registers with vlenb length. */
riscv_v_enable();
this_vsize = csr_read(CSR_VLENB) * 32;
riscv_v_disable();
if (!riscv_v_vsize) {
riscv_v_vsize = this_vsize;
return 0;
}
if (riscv_v_vsize != this_vsize) {
WARN(1, "RISCV_ISA_V only supports one vlenb on SMP systems");
return -EOPNOTSUPP;
}
return 0;
}
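Concretely, the per-task save area is the whole register file: 32 registers of vlenb bytes each. On a hypothetical implementation with VLEN = 256 bits, CSR_VLENB reads as 32, so riscv_v_vsize = 32 * 32 = 1024 bytes per context.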
static bool insn_is_vector(u32 insn_buf)
{
u32 opcode = insn_buf & __INSN_OPCODE_MASK;
u32 width, csr;
/*
* All V-related instructions, including CSR operations are 4-Byte. So,
* do not handle if the instruction length is not 4-Byte.
*/
if (unlikely(GET_INSN_LENGTH(insn_buf) != 4))
return false;
switch (opcode) {
case RVV_OPCODE_VECTOR:
return true;
case RVV_OPCODE_VL:
case RVV_OPCODE_VS:
width = RVV_EXRACT_VL_VS_WIDTH(insn_buf);
if (width == RVV_VL_VS_WIDTH_8 || width == RVV_VL_VS_WIDTH_16 ||
width == RVV_VL_VS_WIDTH_32 || width == RVV_VL_VS_WIDTH_64)
return true;
break;
case RVG_OPCODE_SYSTEM:
csr = RVG_EXTRACT_SYSTEM_CSR(insn_buf);
if ((csr >= CSR_VSTART && csr <= CSR_VCSR) ||
(csr >= CSR_VL && csr <= CSR_VLENB))
return true;
}
return false;
}
static int riscv_v_thread_zalloc(void)
{
void *datap;
datap = kzalloc(riscv_v_vsize, GFP_KERNEL);
if (!datap)
return -ENOMEM;
current->thread.vstate.datap = datap;
memset(&current->thread.vstate, 0, offsetof(struct __riscv_v_ext_state,
datap));
return 0;
}
#define VSTATE_CTRL_GET_CUR(x) ((x) & PR_RISCV_V_VSTATE_CTRL_CUR_MASK)
#define VSTATE_CTRL_GET_NEXT(x) (((x) & PR_RISCV_V_VSTATE_CTRL_NEXT_MASK) >> 2)
#define VSTATE_CTRL_MAKE_NEXT(x) (((x) << 2) & PR_RISCV_V_VSTATE_CTRL_NEXT_MASK)
#define VSTATE_CTRL_GET_INHERIT(x) (!!((x) & PR_RISCV_V_VSTATE_CTRL_INHERIT))
static inline int riscv_v_ctrl_get_cur(struct task_struct *tsk)
{
return VSTATE_CTRL_GET_CUR(tsk->thread.vstate_ctrl);
}
static inline int riscv_v_ctrl_get_next(struct task_struct *tsk)
{
return VSTATE_CTRL_GET_NEXT(tsk->thread.vstate_ctrl);
}
static inline bool riscv_v_ctrl_test_inherit(struct task_struct *tsk)
{
return VSTATE_CTRL_GET_INHERIT(tsk->thread.vstate_ctrl);
}
static inline void riscv_v_ctrl_set(struct task_struct *tsk, int cur, int nxt,
bool inherit)
{
unsigned long ctrl;
ctrl = cur & PR_RISCV_V_VSTATE_CTRL_CUR_MASK;
ctrl |= VSTATE_CTRL_MAKE_NEXT(nxt);
if (inherit)
ctrl |= PR_RISCV_V_VSTATE_CTRL_INHERIT;
tsk->thread.vstate_ctrl = ctrl;
}
bool riscv_v_vstate_ctrl_user_allowed(void)
{
return riscv_v_ctrl_get_cur(current) == PR_RISCV_V_VSTATE_CTRL_ON;
}
EXPORT_SYMBOL_GPL(riscv_v_vstate_ctrl_user_allowed);
bool riscv_v_first_use_handler(struct pt_regs *regs)
{
u32 __user *epc = (u32 __user *)regs->epc;
u32 insn = (u32)regs->badaddr;
/* Do not handle if V is not supported, or disabled */
if (!(ELF_HWCAP & COMPAT_HWCAP_ISA_V))
return false;
/* If V has been enabled then it is not the first-use trap */
if (riscv_v_vstate_query(regs))
return false;
/* Get the instruction */
if (!insn) {
if (__get_user(insn, epc))
return false;
}
/* Filter out non-V instructions */
if (!insn_is_vector(insn))
return false;
/* Sanity check. datap should be null by the time of the first-use trap */
WARN_ON(current->thread.vstate.datap);
	/*
	 * Now we are sure this is a V instruction, and it executed in a
	 * context where VS was off. So, try to allocate the user's V
	 * context and resume execution.
	 */
if (riscv_v_thread_zalloc()) {
force_sig(SIGBUS);
return true;
}
riscv_v_vstate_on(regs);
return true;
}
void riscv_v_vstate_ctrl_init(struct task_struct *tsk)
{
bool inherit;
int cur, next;
if (!has_vector())
return;
next = riscv_v_ctrl_get_next(tsk);
if (!next) {
if (READ_ONCE(riscv_v_implicit_uacc))
cur = PR_RISCV_V_VSTATE_CTRL_ON;
else
cur = PR_RISCV_V_VSTATE_CTRL_OFF;
} else {
cur = next;
}
/* Clear next mask if inherit-bit is not set */
inherit = riscv_v_ctrl_test_inherit(tsk);
if (!inherit)
next = PR_RISCV_V_VSTATE_CTRL_DEFAULT;
riscv_v_ctrl_set(tsk, cur, next, inherit);
}
long riscv_v_vstate_ctrl_get_current(void)
{
if (!has_vector())
return -EINVAL;
return current->thread.vstate_ctrl & PR_RISCV_V_VSTATE_CTRL_MASK;
}
long riscv_v_vstate_ctrl_set_current(unsigned long arg)
{
bool inherit;
int cur, next;
if (!has_vector())
return -EINVAL;
if (arg & ~PR_RISCV_V_VSTATE_CTRL_MASK)
return -EINVAL;
cur = VSTATE_CTRL_GET_CUR(arg);
switch (cur) {
case PR_RISCV_V_VSTATE_CTRL_OFF:
/* Do not allow user to turn off V if current is not off */
if (riscv_v_ctrl_get_cur(current) != PR_RISCV_V_VSTATE_CTRL_OFF)
return -EPERM;
break;
case PR_RISCV_V_VSTATE_CTRL_ON:
break;
case PR_RISCV_V_VSTATE_CTRL_DEFAULT:
cur = riscv_v_ctrl_get_cur(current);
break;
default:
return -EINVAL;
}
next = VSTATE_CTRL_GET_NEXT(arg);
inherit = VSTATE_CTRL_GET_INHERIT(arg);
switch (next) {
case PR_RISCV_V_VSTATE_CTRL_DEFAULT:
case PR_RISCV_V_VSTATE_CTRL_OFF:
case PR_RISCV_V_VSTATE_CTRL_ON:
riscv_v_ctrl_set(current, cur, next, inherit);
return 0;
}
return -EINVAL;
}
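A hedged userspace sketch of driving this interface through prctl(2) (constants from linux/prctl.h as added by this series; error handling elided):

	#include <sys/prctl.h>

	/*
	 * Explicitly allow this process to use V. The "next"/inherit fields
	 * (bits 2-3 and PR_RISCV_V_VSTATE_CTRL_INHERIT) control what execve'd
	 * children start with, as parsed above.
	 */
	static void vector_allow(void)
	{
		prctl(PR_RISCV_V_SET_CONTROL, PR_RISCV_V_VSTATE_CTRL_ON);
	}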
#ifdef CONFIG_SYSCTL
static struct ctl_table riscv_v_default_vstate_table[] = {
{
.procname = "riscv_v_default_allow",
.data = &riscv_v_implicit_uacc,
.maxlen = sizeof(riscv_v_implicit_uacc),
.mode = 0644,
.proc_handler = proc_dobool,
},
{ }
};
static int __init riscv_v_sysctl_init(void)
{
if (has_vector())
if (!register_sysctl("abi", riscv_v_default_vstate_table))
return -EINVAL;
return 0;
}
#else /* ! CONFIG_SYSCTL */
static int __init riscv_v_sysctl_init(void) { return 0; }
#endif /* ! CONFIG_SYSCTL */
static int riscv_v_init(void)
{
return riscv_v_sysctl_init();
}
core_initcall(riscv_v_init);
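Note the corresponding runtime switch this registers: with CONFIG_SYSCTL, writing 0 to /proc/sys/abi/riscv_v_default_allow makes new processes start with V disabled unless they opt back in via the prctl above.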
@@ -17,6 +17,7 @@ kvm-y += mmu.o
kvm-y += vcpu.o
kvm-y += vcpu_exit.o
kvm-y += vcpu_fp.o
kvm-y += vcpu_vector.o
kvm-y += vcpu_insn.o
kvm-y += vcpu_switch.o
kvm-y += vcpu_sbi.o
......
@@ -22,6 +22,8 @@
#include <asm/cacheflush.h>
#include <asm/hwcap.h>
#include <asm/sbi.h>
#include <asm/vector.h>
#include <asm/kvm_vcpu_vector.h>

const struct _kvm_stats_desc kvm_vcpu_stats_desc[] = {
	KVM_GENERIC_VCPU_STATS(),
@@ -57,6 +59,7 @@ static const unsigned long kvm_isa_ext_arr[] = {
	[KVM_RISCV_ISA_EXT_H] = RISCV_ISA_EXT_h,
	[KVM_RISCV_ISA_EXT_I] = RISCV_ISA_EXT_i,
	[KVM_RISCV_ISA_EXT_M] = RISCV_ISA_EXT_m,
	[KVM_RISCV_ISA_EXT_V] = RISCV_ISA_EXT_v,

	KVM_ISA_EXT_ARR(SSAIA),
	KVM_ISA_EXT_ARR(SSTC),
@@ -85,6 +88,8 @@ static bool kvm_riscv_vcpu_isa_enable_allowed(unsigned long ext)
	switch (ext) {
	case KVM_RISCV_ISA_EXT_H:
		return false;
	case KVM_RISCV_ISA_EXT_V:
		return riscv_v_vstate_ctrl_user_allowed();
	default:
		break;
	}
@@ -138,6 +143,8 @@ static void kvm_riscv_reset_vcpu(struct kvm_vcpu *vcpu)
	kvm_riscv_vcpu_fp_reset(vcpu);

	kvm_riscv_vcpu_vector_reset(vcpu);

	kvm_riscv_vcpu_timer_reset(vcpu);

	kvm_riscv_vcpu_aia_reset(vcpu);
@@ -198,6 +205,9 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
	cntx->hstatus |= HSTATUS_SPVP;
	cntx->hstatus |= HSTATUS_SPV;

	if (kvm_riscv_vcpu_alloc_vector_context(vcpu, cntx))
		return -ENOMEM;

	/* By default, make CY, TM, and IR counters accessible in VU mode */
	reset_csr->scounteren = 0x7;
@@ -241,6 +251,9 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
	/* Free unused pages pre-allocated for G-stage page table mappings */
	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_cache);

	/* Free vector context space for host and guest kernel */
	kvm_riscv_vcpu_free_vector_context(vcpu);
}

int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu)
@@ -679,6 +692,9 @@ static int kvm_riscv_vcpu_set_reg(struct kvm_vcpu *vcpu,
		return kvm_riscv_vcpu_set_reg_isa_ext(vcpu, reg);
	case KVM_REG_RISCV_SBI_EXT:
		return kvm_riscv_vcpu_set_reg_sbi_ext(vcpu, reg);
	case KVM_REG_RISCV_VECTOR:
		return kvm_riscv_vcpu_set_reg_vector(vcpu, reg,
						     KVM_REG_RISCV_VECTOR);
	default:
		break;
	}
@@ -708,6 +724,9 @@ static int kvm_riscv_vcpu_get_reg(struct kvm_vcpu *vcpu,
		return kvm_riscv_vcpu_get_reg_isa_ext(vcpu, reg);
	case KVM_REG_RISCV_SBI_EXT:
		return kvm_riscv_vcpu_get_reg_sbi_ext(vcpu, reg);
	case KVM_REG_RISCV_VECTOR:
		return kvm_riscv_vcpu_get_reg_vector(vcpu, reg,
						     KVM_REG_RISCV_VECTOR);
	default:
		break;
	}
@@ -1002,6 +1021,9 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
	kvm_riscv_vcpu_host_fp_save(&vcpu->arch.host_context);
	kvm_riscv_vcpu_guest_fp_restore(&vcpu->arch.guest_context,
					vcpu->arch.isa);
	kvm_riscv_vcpu_host_vector_save(&vcpu->arch.host_context);
	kvm_riscv_vcpu_guest_vector_restore(&vcpu->arch.guest_context,
					    vcpu->arch.isa);

	kvm_riscv_vcpu_aia_load(vcpu, cpu);
@@ -1021,6 +1043,9 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
	kvm_riscv_vcpu_host_fp_restore(&vcpu->arch.host_context);

	kvm_riscv_vcpu_timer_save(vcpu);
	kvm_riscv_vcpu_guest_vector_save(&vcpu->arch.guest_context,
					 vcpu->arch.isa);
	kvm_riscv_vcpu_host_vector_restore(&vcpu->arch.host_context);

	csr->vsstatus = csr_read(CSR_VSSTATUS);
	csr->vsie = csr_read(CSR_VSIE);
......
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2022 SiFive
*
* Authors:
* Vincent Chen <vincent.chen@sifive.com>
* Greentime Hu <greentime.hu@sifive.com>
*/
#include <linux/errno.h>
#include <linux/err.h>
#include <linux/kvm_host.h>
#include <linux/uaccess.h>
#include <asm/hwcap.h>
#include <asm/kvm_vcpu_vector.h>
#include <asm/vector.h>
#ifdef CONFIG_RISCV_ISA_V
void kvm_riscv_vcpu_vector_reset(struct kvm_vcpu *vcpu)
{
unsigned long *isa = vcpu->arch.isa;
struct kvm_cpu_context *cntx = &vcpu->arch.guest_context;
cntx->sstatus &= ~SR_VS;
if (riscv_isa_extension_available(isa, v)) {
cntx->sstatus |= SR_VS_INITIAL;
WARN_ON(!cntx->vector.datap);
memset(cntx->vector.datap, 0, riscv_v_vsize);
} else {
cntx->sstatus |= SR_VS_OFF;
}
}
static void kvm_riscv_vcpu_vector_clean(struct kvm_cpu_context *cntx)
{
cntx->sstatus &= ~SR_VS;
cntx->sstatus |= SR_VS_CLEAN;
}
void kvm_riscv_vcpu_guest_vector_save(struct kvm_cpu_context *cntx,
unsigned long *isa)
{
if ((cntx->sstatus & SR_VS) == SR_VS_DIRTY) {
if (riscv_isa_extension_available(isa, v))
__kvm_riscv_vector_save(cntx);
kvm_riscv_vcpu_vector_clean(cntx);
}
}
void kvm_riscv_vcpu_guest_vector_restore(struct kvm_cpu_context *cntx,
unsigned long *isa)
{
if ((cntx->sstatus & SR_VS) != SR_VS_OFF) {
if (riscv_isa_extension_available(isa, v))
__kvm_riscv_vector_restore(cntx);
kvm_riscv_vcpu_vector_clean(cntx);
}
}
void kvm_riscv_vcpu_host_vector_save(struct kvm_cpu_context *cntx)
{
/* No need to check host sstatus as it can be modified outside */
if (riscv_isa_extension_available(NULL, v))
__kvm_riscv_vector_save(cntx);
}
void kvm_riscv_vcpu_host_vector_restore(struct kvm_cpu_context *cntx)
{
if (riscv_isa_extension_available(NULL, v))
__kvm_riscv_vector_restore(cntx);
}
int kvm_riscv_vcpu_alloc_vector_context(struct kvm_vcpu *vcpu,
struct kvm_cpu_context *cntx)
{
cntx->vector.datap = kmalloc(riscv_v_vsize, GFP_KERNEL);
if (!cntx->vector.datap)
return -ENOMEM;
vcpu->arch.host_context.vector.datap = kzalloc(riscv_v_vsize, GFP_KERNEL);
if (!vcpu->arch.host_context.vector.datap)
return -ENOMEM;
return 0;
}
void kvm_riscv_vcpu_free_vector_context(struct kvm_vcpu *vcpu)
{
kfree(vcpu->arch.guest_reset_context.vector.datap);
kfree(vcpu->arch.host_context.vector.datap);
}
#endif
static void *kvm_riscv_vcpu_vreg_addr(struct kvm_vcpu *vcpu,
unsigned long reg_num,
size_t reg_size)
{
struct kvm_cpu_context *cntx = &vcpu->arch.guest_context;
void *reg_val;
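	/* riscv_v_vsize spans all 32 vector registers, so VLENB is vsize / 32 */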
size_t vlenb = riscv_v_vsize / 32;
if (reg_num < KVM_REG_RISCV_VECTOR_REG(0)) {
if (reg_size != sizeof(unsigned long))
return NULL;
switch (reg_num) {
case KVM_REG_RISCV_VECTOR_CSR_REG(vstart):
reg_val = &cntx->vector.vstart;
break;
case KVM_REG_RISCV_VECTOR_CSR_REG(vl):
reg_val = &cntx->vector.vl;
break;
case KVM_REG_RISCV_VECTOR_CSR_REG(vtype):
reg_val = &cntx->vector.vtype;
break;
case KVM_REG_RISCV_VECTOR_CSR_REG(vcsr):
reg_val = &cntx->vector.vcsr;
break;
case KVM_REG_RISCV_VECTOR_CSR_REG(datap):
default:
return NULL;
}
} else if (reg_num <= KVM_REG_RISCV_VECTOR_REG(31)) {
if (reg_size != vlenb)
return NULL;
reg_val = cntx->vector.datap
+ (reg_num - KVM_REG_RISCV_VECTOR_REG(0)) * vlenb;
} else {
return NULL;
}
return reg_val;
}
int kvm_riscv_vcpu_get_reg_vector(struct kvm_vcpu *vcpu,
const struct kvm_one_reg *reg,
unsigned long rtype)
{
unsigned long *isa = vcpu->arch.isa;
unsigned long __user *uaddr =
(unsigned long __user *)(unsigned long)reg->addr;
unsigned long reg_num = reg->id & ~(KVM_REG_ARCH_MASK |
KVM_REG_SIZE_MASK |
rtype);
void *reg_val = NULL;
size_t reg_size = KVM_REG_SIZE(reg->id);
if (rtype == KVM_REG_RISCV_VECTOR &&
riscv_isa_extension_available(isa, v)) {
reg_val = kvm_riscv_vcpu_vreg_addr(vcpu, reg_num, reg_size);
}
if (!reg_val)
return -EINVAL;
if (copy_to_user(uaddr, reg_val, reg_size))
return -EFAULT;
return 0;
}
int kvm_riscv_vcpu_set_reg_vector(struct kvm_vcpu *vcpu,
const struct kvm_one_reg *reg,
unsigned long rtype)
{
unsigned long *isa = vcpu->arch.isa;
unsigned long __user *uaddr =
(unsigned long __user *)(unsigned long)reg->addr;
unsigned long reg_num = reg->id & ~(KVM_REG_ARCH_MASK |
KVM_REG_SIZE_MASK |
rtype);
void *reg_val = NULL;
size_t reg_size = KVM_REG_SIZE(reg->id);
if (rtype == KVM_REG_RISCV_VECTOR &&
riscv_isa_extension_available(isa, v)) {
reg_val = kvm_riscv_vcpu_vreg_addr(vcpu, reg_num, reg_size);
}
if (!reg_val)
return -EINVAL;
if (copy_from_user(reg_val, uaddr, reg_size))
return -EFAULT;
return 0;
}
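Userspace reaches the two accessors above through KVM's one-reg ioctls. A
minimal sketch, assuming an rv64 host, the KVM_REG_RISCV_* encodings from
<asm/kvm.h>, and a hypothetical, already-created vcpu_fd:

	#include <stdint.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>
	#include <asm/kvm.h>

	static uint64_t read_guest_vstart(int vcpu_fd)
	{
		uint64_t val = 0;
		struct kvm_one_reg reg = {
			.id = KVM_REG_RISCV | KVM_REG_SIZE_U64 |
			      KVM_REG_RISCV_VECTOR |
			      KVM_REG_RISCV_VECTOR_CSR_REG(vstart),
			.addr = (uint64_t)&val,
		};

		/* serviced by kvm_riscv_vcpu_get_reg_vector() above */
		if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg) < 0)
			return 0; /* error handling elided */
		return val;
	}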
...@@ -13,8 +13,7 @@ endif ...@@ -13,8 +13,7 @@ endif
KCOV_INSTRUMENT_init.o := n KCOV_INSTRUMENT_init.o := n
obj-y += init.o obj-y += init.o
obj-y += extable.o obj-$(CONFIG_MMU) += extable.o fault.o pageattr.o
obj-$(CONFIG_MMU) += fault.o pageattr.o
obj-y += cacheflush.o obj-y += cacheflush.o
obj-y += context.o obj-y += context.o
obj-y += pgtable.o obj-y += pgtable.o
......
...@@ -247,24 +247,12 @@ void handle_page_fault(struct pt_regs *regs) ...@@ -247,24 +247,12 @@ void handle_page_fault(struct pt_regs *regs)
* only copy the information from the master page table, * only copy the information from the master page table,
* nothing more. * nothing more.
*/ */
if (unlikely((addr >= VMALLOC_START) && (addr < VMALLOC_END))) { if ((!IS_ENABLED(CONFIG_MMU) || !IS_ENABLED(CONFIG_64BIT)) &&
unlikely(addr >= VMALLOC_START && addr < VMALLOC_END)) {
vmalloc_fault(regs, code, addr); vmalloc_fault(regs, code, addr);
return; return;
} }
#ifdef CONFIG_64BIT
/*
* Modules in 64bit kernels lie in their own virtual region which is not
* in the vmalloc region, but dealing with page faults in this region
* or the vmalloc region amounts to doing the same thing: checking that
* the mapping exists in init_mm.pgd and updating user page table, so
* just use vmalloc_fault.
*/
if (unlikely(addr >= MODULES_VADDR && addr < MODULES_END)) {
vmalloc_fault(regs, code, addr);
return;
}
#endif
/* Enable interrupts if they were enabled in the parent context. */ /* Enable interrupts if they were enabled in the parent context. */
if (!regs_irqs_disabled(regs)) if (!regs_irqs_disabled(regs))
local_irq_enable(); local_irq_enable();
...@@ -295,6 +283,36 @@ void handle_page_fault(struct pt_regs *regs) ...@@ -295,6 +283,36 @@ void handle_page_fault(struct pt_regs *regs)
flags |= FAULT_FLAG_WRITE; flags |= FAULT_FLAG_WRITE;
else if (cause == EXC_INST_PAGE_FAULT) else if (cause == EXC_INST_PAGE_FAULT)
flags |= FAULT_FLAG_INSTRUCTION; flags |= FAULT_FLAG_INSTRUCTION;
#ifdef CONFIG_PER_VMA_LOCK
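	/*
	 * Fast path: try to handle the fault under a per-VMA read lock,
	 * falling back to the mmap_lock path below if the VMA cannot be
	 * locked, access checks fail, or the fault must retry.
	 */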
if (!(flags & FAULT_FLAG_USER))
goto lock_mmap;
vma = lock_vma_under_rcu(mm, addr);
if (!vma)
goto lock_mmap;
if (unlikely(access_error(cause, vma))) {
vma_end_read(vma);
goto lock_mmap;
}
fault = handle_mm_fault(vma, addr, flags | FAULT_FLAG_VMA_LOCK, regs);
vma_end_read(vma);
if (!(fault & VM_FAULT_RETRY)) {
count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
goto done;
}
count_vm_vma_lock_event(VMA_LOCK_RETRY);
if (fault_signal_pending(fault, regs)) {
if (!user_mode(regs))
no_context(regs, addr);
return;
}
lock_mmap:
#endif /* CONFIG_PER_VMA_LOCK */
retry: retry:
vma = lock_mm_and_find_vma(mm, addr, regs); vma = lock_mm_and_find_vma(mm, addr, regs);
if (unlikely(!vma)) { if (unlikely(!vma)) {
...@@ -350,6 +368,9 @@ void handle_page_fault(struct pt_regs *regs) ...@@ -350,6 +368,9 @@ void handle_page_fault(struct pt_regs *regs)
mmap_read_unlock(mm); mmap_read_unlock(mm);
#ifdef CONFIG_PER_VMA_LOCK
done:
#endif
if (unlikely(fault & VM_FAULT_ERROR)) { if (unlikely(fault & VM_FAULT_ERROR)) {
tsk->thread.bad_cause = cause; tsk->thread.bad_cause = cause;
mm_fault_error(regs, addr, fault); mm_fault_error(regs, addr, fault);
......
...@@ -1389,3 +1389,61 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node, ...@@ -1389,3 +1389,61 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
return vmemmap_populate_basepages(start, end, node, NULL); return vmemmap_populate_basepages(start, end, node, NULL);
} }
#endif #endif
#if defined(CONFIG_MMU) && defined(CONFIG_64BIT)
/*
* Pre-allocates page-table pages for a specific area in the kernel
* page-table. Only the level which needs to be synchronized between
* all page-tables is allocated because the synchronization can be
* expensive.
*/
static void __init preallocate_pgd_pages_range(unsigned long start, unsigned long end,
const char *area)
{
unsigned long addr;
const char *lvl;
for (addr = start; addr < end && addr >= start; addr = ALIGN(addr + 1, PGDIR_SIZE)) {
pgd_t *pgd = pgd_offset_k(addr);
p4d_t *p4d;
pud_t *pud;
pmd_t *pmd;
lvl = "p4d";
p4d = p4d_alloc(&init_mm, pgd, addr);
if (!p4d)
goto failed;
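		/*
		 * With 5-level paging the PGD entry references the p4d just
		 * allocated, so the level shared across all page tables is
		 * now present; deeper levels need not be pre-allocated.
		 */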
if (pgtable_l5_enabled)
continue;
lvl = "pud";
pud = pud_alloc(&init_mm, p4d, addr);
if (!pud)
goto failed;
if (pgtable_l4_enabled)
continue;
lvl = "pmd";
pmd = pmd_alloc(&init_mm, pud, addr);
if (!pmd)
goto failed;
}
return;
failed:
/*
* The pages have to be there now or they will be missing in
* process page-tables later.
*/
panic("Failed to pre-allocate %s pages for %s area\n", lvl, area);
}
void __init pgtable_cache_init(void)
{
preallocate_pgd_pages_range(VMALLOC_START, VMALLOC_END, "vmalloc");
if (IS_ENABLED(CONFIG_MODULES))
preallocate_pgd_pages_range(MODULES_VADDR, MODULES_END, "bpf/modules");
}
#endif
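As an illustration of the walk above, a self-contained userspace model;
PGDIR_SHIFT and the address range are assumed values, not taken from the
kernel headers:

	#include <stdio.h>

	#define PGDIR_SHIFT	30	/* assumed: 1 GiB PGD slots, sv39-style */
	#define PGDIR_SIZE	(1UL << PGDIR_SHIFT)
	#define ALIGN(x, a)	(((x) + (a) - 1) & ~((a) - 1))

	int main(void)
	{
		unsigned long start = 0x40000000UL;	/* hypothetical area */
		unsigned long end = start + 3 * PGDIR_SIZE;
		unsigned long addr;

		/* "addr >= start" stops the walk if ALIGN() wraps past zero */
		for (addr = start; addr < end && addr >= start;
		     addr = ALIGN(addr + 1, PGDIR_SIZE))
			printf("pre-allocate tables for PGD slot %#lx\n", addr);
		return 0;
	}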
...@@ -131,3 +131,5 @@ obj-y += dptf/ ...@@ -131,3 +131,5 @@ obj-y += dptf/
obj-$(CONFIG_ARM64) += arm64/ obj-$(CONFIG_ARM64) += arm64/
obj-$(CONFIG_ACPI_VIOT) += viot.o obj-$(CONFIG_ACPI_VIOT) += viot.o
obj-$(CONFIG_RISCV) += riscv/
...@@ -276,7 +276,7 @@ acpi_map_lookup_virt(void __iomem *virt, acpi_size size) ...@@ -276,7 +276,7 @@ acpi_map_lookup_virt(void __iomem *virt, acpi_size size)
return NULL; return NULL;
} }
#if defined(CONFIG_IA64) || defined(CONFIG_ARM64) #if defined(CONFIG_IA64) || defined(CONFIG_ARM64) || defined(CONFIG_RISCV)
/* ioremap will take care of cache attributes */ /* ioremap will take care of cache attributes */
#define should_use_kmap(pfn) 0 #define should_use_kmap(pfn) 0
#else #else
......
...@@ -106,6 +106,32 @@ static int map_gicc_mpidr(struct acpi_subtable_header *entry, ...@@ -106,6 +106,32 @@ static int map_gicc_mpidr(struct acpi_subtable_header *entry,
return -EINVAL; return -EINVAL;
} }
/*
* Retrieve the RISC-V hartid for the processor
*/
static int map_rintc_hartid(struct acpi_subtable_header *entry,
int device_declaration, u32 acpi_id,
phys_cpuid_t *hartid)
{
struct acpi_madt_rintc *rintc =
container_of(entry, struct acpi_madt_rintc, header);
if (!(rintc->flags & ACPI_MADT_ENABLED))
return -ENODEV;
	/* device_declaration means a Device object in the DSDT. On RISC-V,
	 * logical processors are required to have a Processor Device object
	 * in the DSDT, so we should check device_declaration here.
	 */
if (device_declaration && rintc->uid == acpi_id) {
*hartid = rintc->hart_id;
return 0;
}
return -EINVAL;
}
static phys_cpuid_t map_madt_entry(struct acpi_table_madt *madt, static phys_cpuid_t map_madt_entry(struct acpi_table_madt *madt,
int type, u32 acpi_id) int type, u32 acpi_id)
{ {
...@@ -136,6 +162,9 @@ static phys_cpuid_t map_madt_entry(struct acpi_table_madt *madt, ...@@ -136,6 +162,9 @@ static phys_cpuid_t map_madt_entry(struct acpi_table_madt *madt,
} else if (header->type == ACPI_MADT_TYPE_GENERIC_INTERRUPT) { } else if (header->type == ACPI_MADT_TYPE_GENERIC_INTERRUPT) {
if (!map_gicc_mpidr(header, type, acpi_id, &phys_id)) if (!map_gicc_mpidr(header, type, acpi_id, &phys_id))
break; break;
} else if (header->type == ACPI_MADT_TYPE_RINTC) {
if (!map_rintc_hartid(header, type, acpi_id, &phys_id))
break;
} }
entry += header->length; entry += header->length;
} }
......
# SPDX-License-Identifier: GPL-2.0-only
obj-y += rhct.o
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (C) 2022-2023, Ventana Micro Systems Inc
* Author: Sunil V L <sunilvl@ventanamicro.com>
*
*/
#define pr_fmt(fmt) "ACPI: RHCT: " fmt
#include <linux/acpi.h>
static struct acpi_table_header *acpi_get_rhct(void)
{
static struct acpi_table_header *rhct;
acpi_status status;
/*
* RHCT will be used at runtime on every CPU, so we
* don't need to call acpi_put_table() to release the table mapping.
*/
if (!rhct) {
status = acpi_get_table(ACPI_SIG_RHCT, 0, &rhct);
if (ACPI_FAILURE(status)) {
pr_warn_once("No RHCT table found\n");
return NULL;
}
}
return rhct;
}
/*
 * During early boot, the caller should call acpi_get_table() and pass its
 * pointer to these functions (and free it up later). At run time, since this
 * table can be used multiple times, NULL may be passed in order to use the
 * cached table.
 */
int acpi_get_riscv_isa(struct acpi_table_header *table, unsigned int cpu, const char **isa)
{
struct acpi_rhct_node_header *node, *ref_node, *end;
u32 size_hdr = sizeof(struct acpi_rhct_node_header);
u32 size_hartinfo = sizeof(struct acpi_rhct_hart_info);
struct acpi_rhct_hart_info *hart_info;
struct acpi_rhct_isa_string *isa_node;
struct acpi_table_rhct *rhct;
u32 *hart_info_node_offset;
u32 acpi_cpu_id = get_acpi_id_for_cpu(cpu);
BUG_ON(acpi_disabled);
if (!table) {
rhct = (struct acpi_table_rhct *)acpi_get_rhct();
if (!rhct)
return -ENOENT;
} else {
rhct = (struct acpi_table_rhct *)table;
}
end = ACPI_ADD_PTR(struct acpi_rhct_node_header, rhct, rhct->header.length);
for (node = ACPI_ADD_PTR(struct acpi_rhct_node_header, rhct, rhct->node_offset);
node < end;
node = ACPI_ADD_PTR(struct acpi_rhct_node_header, node, node->length)) {
if (node->type == ACPI_RHCT_NODE_TYPE_HART_INFO) {
hart_info = ACPI_ADD_PTR(struct acpi_rhct_hart_info, node, size_hdr);
hart_info_node_offset = ACPI_ADD_PTR(u32, hart_info, size_hartinfo);
if (acpi_cpu_id != hart_info->uid)
continue;
for (int i = 0; i < hart_info->num_offsets; i++) {
ref_node = ACPI_ADD_PTR(struct acpi_rhct_node_header,
rhct, hart_info_node_offset[i]);
if (ref_node->type == ACPI_RHCT_NODE_TYPE_ISA_STRING) {
isa_node = ACPI_ADD_PTR(struct acpi_rhct_isa_string,
ref_node, size_hdr);
*isa = isa_node->isa;
return 0;
}
}
}
}
return -1;
}
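A sketch of the early-boot pattern the comment above describes; the
enclosing function is hypothetical and error handling is elided:

	struct acpi_table_header *rhct;
	const char *isa;

	if (ACPI_FAILURE(acpi_get_table(ACPI_SIG_RHCT, 0, &rhct)))
		return;				/* no RHCT: fall back to DT */

	if (!acpi_get_riscv_isa(rhct, smp_processor_id(), &isa))
		pr_info("boot hart ISA string: %s\n", isa);

	acpi_put_table(rhct);			/* early caller drops its mapping */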
...@@ -220,6 +220,16 @@ void acpi_table_print_madt_entry(struct acpi_subtable_header *header) ...@@ -220,6 +220,16 @@ void acpi_table_print_madt_entry(struct acpi_subtable_header *header)
} }
break; break;
case ACPI_MADT_TYPE_RINTC:
{
struct acpi_madt_rintc *p = (struct acpi_madt_rintc *)header;
pr_debug("RISC-V INTC (acpi_uid[0x%04x] hart_id[0x%llx] %s)\n",
p->uid, p->hart_id,
(p->flags & ACPI_MADT_ENABLED) ? "enabled" : "disabled");
}
break;
default: default:
pr_warn("Found unsupported MADT entry (type = 0x%x)\n", pr_warn("Found unsupported MADT entry (type = 0x%x)\n",
header->type); header->type);
......
...@@ -10,6 +10,7 @@ ...@@ -10,6 +10,7 @@
#define pr_fmt(fmt) "riscv-timer: " fmt #define pr_fmt(fmt) "riscv-timer: " fmt
#include <linux/acpi.h>
#include <linux/clocksource.h> #include <linux/clocksource.h>
#include <linux/clockchips.h> #include <linux/clockchips.h>
#include <linux/cpu.h> #include <linux/cpu.h>
...@@ -124,61 +125,28 @@ static irqreturn_t riscv_timer_interrupt(int irq, void *dev_id) ...@@ -124,61 +125,28 @@ static irqreturn_t riscv_timer_interrupt(int irq, void *dev_id)
return IRQ_HANDLED; return IRQ_HANDLED;
} }
static int __init riscv_timer_init_dt(struct device_node *n) static int __init riscv_timer_init_common(void)
{ {
int cpuid, error; int error;
unsigned long hartid;
struct device_node *child;
struct irq_domain *domain; struct irq_domain *domain;
struct fwnode_handle *intc_fwnode = riscv_get_intc_hwnode();
error = riscv_of_processor_hartid(n, &hartid); domain = irq_find_matching_fwnode(intc_fwnode, DOMAIN_BUS_ANY);
if (error < 0) {
pr_warn("Not valid hartid for node [%pOF] error = [%lu]\n",
n, hartid);
return error;
}
cpuid = riscv_hartid_to_cpuid(hartid);
if (cpuid < 0) {
pr_warn("Invalid cpuid for hartid [%lu]\n", hartid);
return cpuid;
}
if (cpuid != smp_processor_id())
return 0;
child = of_find_compatible_node(NULL, NULL, "riscv,timer");
if (child) {
riscv_timer_cannot_wake_cpu = of_property_read_bool(child,
"riscv,timer-cannot-wake-cpu");
of_node_put(child);
}
domain = NULL;
child = of_get_compatible_child(n, "riscv,cpu-intc");
if (!child) {
pr_err("Failed to find INTC node [%pOF]\n", n);
return -ENODEV;
}
domain = irq_find_host(child);
of_node_put(child);
if (!domain) { if (!domain) {
pr_err("Failed to find IRQ domain for node [%pOF]\n", n); pr_err("Failed to find irq_domain for INTC node [%pfwP]\n",
intc_fwnode);
return -ENODEV; return -ENODEV;
} }
riscv_clock_event_irq = irq_create_mapping(domain, RV_IRQ_TIMER); riscv_clock_event_irq = irq_create_mapping(domain, RV_IRQ_TIMER);
if (!riscv_clock_event_irq) { if (!riscv_clock_event_irq) {
pr_err("Failed to map timer interrupt for node [%pOF]\n", n); pr_err("Failed to map timer interrupt for node [%pfwP]\n", intc_fwnode);
return -ENODEV; return -ENODEV;
} }
pr_info("%s: Registering clocksource cpuid [%d] hartid [%lu]\n",
__func__, cpuid, hartid);
error = clocksource_register_hz(&riscv_clocksource, riscv_timebase); error = clocksource_register_hz(&riscv_clocksource, riscv_timebase);
if (error) { if (error) {
pr_err("RISCV timer register failed [%d] for cpu = [%d]\n", pr_err("RISCV timer registration failed [%d]\n", error);
error, cpuid);
return error; return error;
} }
...@@ -207,4 +175,46 @@ static int __init riscv_timer_init_dt(struct device_node *n) ...@@ -207,4 +175,46 @@ static int __init riscv_timer_init_dt(struct device_node *n)
return error; return error;
} }
static int __init riscv_timer_init_dt(struct device_node *n)
{
int cpuid, error;
unsigned long hartid;
struct device_node *child;
error = riscv_of_processor_hartid(n, &hartid);
if (error < 0) {
		pr_warn("Invalid hartid for node [%pOF], error = %d\n",
			n, error);
return error;
}
cpuid = riscv_hartid_to_cpuid(hartid);
if (cpuid < 0) {
pr_warn("Invalid cpuid for hartid [%lu]\n", hartid);
return cpuid;
}
if (cpuid != smp_processor_id())
return 0;
child = of_find_compatible_node(NULL, NULL, "riscv,timer");
if (child) {
riscv_timer_cannot_wake_cpu = of_property_read_bool(child,
"riscv,timer-cannot-wake-cpu");
of_node_put(child);
}
return riscv_timer_init_common();
}
TIMER_OF_DECLARE(riscv_timer, "riscv", riscv_timer_init_dt); TIMER_OF_DECLARE(riscv_timer, "riscv", riscv_timer_init_dt);
#ifdef CONFIG_ACPI
static int __init riscv_timer_acpi_init(struct acpi_table_header *table)
{
return riscv_timer_init_common();
}
TIMER_ACPI_DECLARE(aclint_mtimer, ACPI_SIG_RHCT, riscv_timer_acpi_init);
#endif
...@@ -610,7 +610,10 @@ EXPORT_SYMBOL_GPL(hisi_qm_wait_mb_ready); ...@@ -610,7 +610,10 @@ EXPORT_SYMBOL_GPL(hisi_qm_wait_mb_ready);
static void qm_mb_write(struct hisi_qm *qm, const void *src) static void qm_mb_write(struct hisi_qm *qm, const void *src)
{ {
void __iomem *fun_base = qm->io_base + QM_MB_CMD_SEND_BASE; void __iomem *fun_base = qm->io_base + QM_MB_CMD_SEND_BASE;
#if IS_ENABLED(CONFIG_ARM64)
unsigned long tmp0 = 0, tmp1 = 0; unsigned long tmp0 = 0, tmp1 = 0;
#endif
if (!IS_ENABLED(CONFIG_ARM64)) { if (!IS_ENABLED(CONFIG_ARM64)) {
memcpy_toio(fun_base, src, 16); memcpy_toio(fun_base, src, 16);
...@@ -618,6 +621,7 @@ static void qm_mb_write(struct hisi_qm *qm, const void *src) ...@@ -618,6 +621,7 @@ static void qm_mb_write(struct hisi_qm *qm, const void *src)
return; return;
} }
#if IS_ENABLED(CONFIG_ARM64)
asm volatile("ldp %0, %1, %3\n" asm volatile("ldp %0, %1, %3\n"
"stp %0, %1, %2\n" "stp %0, %1, %2\n"
"dmb oshst\n" "dmb oshst\n"
...@@ -626,6 +630,7 @@ static void qm_mb_write(struct hisi_qm *qm, const void *src) ...@@ -626,6 +630,7 @@ static void qm_mb_write(struct hisi_qm *qm, const void *src)
"+Q" (*((char __iomem *)fun_base)) "+Q" (*((char __iomem *)fun_base))
: "Q" (*((char *)src)) : "Q" (*((char *)src))
: "memory"); : "memory");
#endif
} }
static int qm_mb_nolock(struct hisi_qm *qm, struct qm_mailbox *mailbox) static int qm_mb_nolock(struct hisi_qm *qm, struct qm_mailbox *mailbox)
......
...@@ -6,6 +6,7 @@ ...@@ -6,6 +6,7 @@
*/ */
#define pr_fmt(fmt) "riscv-intc: " fmt #define pr_fmt(fmt) "riscv-intc: " fmt
#include <linux/acpi.h>
#include <linux/atomic.h> #include <linux/atomic.h>
#include <linux/bits.h> #include <linux/bits.h>
#include <linux/cpu.h> #include <linux/cpu.h>
...@@ -112,6 +113,30 @@ static struct fwnode_handle *riscv_intc_hwnode(void) ...@@ -112,6 +113,30 @@ static struct fwnode_handle *riscv_intc_hwnode(void)
return intc_domain->fwnode; return intc_domain->fwnode;
} }
static int __init riscv_intc_init_common(struct fwnode_handle *fn)
{
int rc;
intc_domain = irq_domain_create_linear(fn, BITS_PER_LONG,
&riscv_intc_domain_ops, NULL);
if (!intc_domain) {
pr_err("unable to add IRQ domain\n");
return -ENXIO;
}
rc = set_handle_irq(&riscv_intc_irq);
if (rc) {
pr_err("failed to set irq handler\n");
return rc;
}
riscv_set_intc_hwnode_fn(riscv_intc_hwnode);
pr_info("%d local interrupts mapped\n", BITS_PER_LONG);
return 0;
}
static int __init riscv_intc_init(struct device_node *node, static int __init riscv_intc_init(struct device_node *node,
struct device_node *parent) struct device_node *parent)
{ {
...@@ -133,24 +158,39 @@ static int __init riscv_intc_init(struct device_node *node, ...@@ -133,24 +158,39 @@ static int __init riscv_intc_init(struct device_node *node,
if (riscv_hartid_to_cpuid(hartid) != smp_processor_id()) if (riscv_hartid_to_cpuid(hartid) != smp_processor_id())
return 0; return 0;
intc_domain = irq_domain_add_linear(node, BITS_PER_LONG, return riscv_intc_init_common(of_node_to_fwnode(node));
&riscv_intc_domain_ops, NULL); }
if (!intc_domain) {
pr_err("unable to add IRQ domain\n");
return -ENXIO;
}
rc = set_handle_irq(&riscv_intc_irq); IRQCHIP_DECLARE(riscv, "riscv,cpu-intc", riscv_intc_init);
if (rc) {
pr_err("failed to set irq handler\n");
return rc;
}
riscv_set_intc_hwnode_fn(riscv_intc_hwnode); #ifdef CONFIG_ACPI
pr_info("%d local interrupts mapped\n", BITS_PER_LONG); static int __init riscv_intc_acpi_init(union acpi_subtable_headers *header,
const unsigned long end)
{
struct fwnode_handle *fn;
struct acpi_madt_rintc *rintc;
rintc = (struct acpi_madt_rintc *)header;
	/*
	 * The ACPI MADT will have one INTC for each CPU (or HART), so
	 * riscv_intc_acpi_init() will be called once per INTC. We only
	 * initialize the INTC belonging to the boot CPU (or boot HART).
	 */
if (riscv_hartid_to_cpuid(rintc->hart_id) != smp_processor_id())
return 0; return 0;
fn = irq_domain_alloc_named_fwnode("RISCV-INTC");
if (!fn) {
pr_err("unable to allocate INTC FW node\n");
return -ENOMEM;
}
return riscv_intc_init_common(fn);
} }
IRQCHIP_DECLARE(riscv, "riscv,cpu-intc", riscv_intc_init); IRQCHIP_ACPI_DECLARE(riscv_intc, ACPI_MADT_TYPE_RINTC, NULL,
ACPI_MADT_RINTC_VERSION_V1, riscv_intc_acpi_init);
#endif
...@@ -739,7 +739,6 @@ static int pmu_sbi_setup_irqs(struct riscv_pmu *pmu, struct platform_device *pde ...@@ -739,7 +739,6 @@ static int pmu_sbi_setup_irqs(struct riscv_pmu *pmu, struct platform_device *pde
{ {
int ret; int ret;
struct cpu_hw_events __percpu *hw_events = pmu->hw_events; struct cpu_hw_events __percpu *hw_events = pmu->hw_events;
struct device_node *cpu, *child;
struct irq_domain *domain = NULL; struct irq_domain *domain = NULL;
if (riscv_isa_extension_available(NULL, SSCOFPMF)) { if (riscv_isa_extension_available(NULL, SSCOFPMF)) {
...@@ -756,20 +755,8 @@ static int pmu_sbi_setup_irqs(struct riscv_pmu *pmu, struct platform_device *pde ...@@ -756,20 +755,8 @@ static int pmu_sbi_setup_irqs(struct riscv_pmu *pmu, struct platform_device *pde
if (!riscv_pmu_use_irq) if (!riscv_pmu_use_irq)
return -EOPNOTSUPP; return -EOPNOTSUPP;
for_each_of_cpu_node(cpu) { domain = irq_find_matching_fwnode(riscv_get_intc_hwnode(),
child = of_get_compatible_child(cpu, "riscv,cpu-intc"); DOMAIN_BUS_ANY);
if (!child) {
pr_err("Failed to find INTC node\n");
of_node_put(cpu);
return -ENODEV;
}
domain = irq_find_host(child);
of_node_put(child);
if (domain) {
of_node_put(cpu);
break;
}
}
if (!domain) { if (!domain) {
pr_err("Failed to find INTC IRQ root domain\n"); pr_err("Failed to find INTC IRQ root domain\n");
return -ENODEV; return -ENODEV;
...@@ -868,6 +855,12 @@ static int pmu_sbi_device_probe(struct platform_device *pdev) ...@@ -868,6 +855,12 @@ static int pmu_sbi_device_probe(struct platform_device *pdev)
goto out_free; goto out_free;
} }
	/* It is possible for SBI to return more than the maximum number of counters */
	if (num_counters > RISCV_MAX_COUNTERS) {
		num_counters = RISCV_MAX_COUNTERS;
		pr_info("SBI returned more than the maximum number of counters; limiting to %d\n", num_counters);
	}
/* cache all the information about counters now */ /* cache all the information about counters now */
if (pmu_sbi_get_ctrinfo(num_counters, &cmask)) if (pmu_sbi_get_ctrinfo(num_counters, &cmask))
goto out_free; goto out_free;
......
...@@ -4,7 +4,7 @@ ...@@ -4,7 +4,7 @@
menuconfig SURFACE_AGGREGATOR menuconfig SURFACE_AGGREGATOR
tristate "Microsoft Surface System Aggregator Module Subsystem and Drivers" tristate "Microsoft Surface System Aggregator Module Subsystem and Drivers"
depends on SERIAL_DEV_BUS depends on SERIAL_DEV_BUS
depends on ACPI depends on ACPI && !RISCV
select CRC_CCITT select CRC_CCITT
help help
The Surface System Aggregator Module (Surface SAM or SSAM) is an The Surface System Aggregator Module (Surface SAM or SSAM) is an
......
...@@ -443,6 +443,7 @@ typedef struct elf64_shdr { ...@@ -443,6 +443,7 @@ typedef struct elf64_shdr {
#define NT_MIPS_DSP 0x800 /* MIPS DSP ASE registers */ #define NT_MIPS_DSP 0x800 /* MIPS DSP ASE registers */
#define NT_MIPS_FP_MODE 0x801 /* MIPS floating-point mode */ #define NT_MIPS_FP_MODE 0x801 /* MIPS floating-point mode */
#define NT_MIPS_MSA 0x802 /* MIPS SIMD registers */ #define NT_MIPS_MSA 0x802 /* MIPS SIMD registers */
#define NT_RISCV_VECTOR 0x900 /* RISC-V vector registers */
#define NT_LOONGARCH_CPUCFG 0xa00 /* LoongArch CPU config registers */ #define NT_LOONGARCH_CPUCFG 0xa00 /* LoongArch CPU config registers */
#define NT_LOONGARCH_CSR 0xa01 /* LoongArch control and status registers */ #define NT_LOONGARCH_CSR 0xa01 /* LoongArch control and status registers */
#define NT_LOONGARCH_LSX 0xa02 /* LoongArch Loongson SIMD Extension registers */ #define NT_LOONGARCH_LSX 0xa02 /* LoongArch Loongson SIMD Extension registers */
......
...@@ -294,4 +294,15 @@ struct prctl_mm_map { ...@@ -294,4 +294,15 @@ struct prctl_mm_map {
#define PR_SET_MEMORY_MERGE 67 #define PR_SET_MEMORY_MERGE 67
#define PR_GET_MEMORY_MERGE 68 #define PR_GET_MEMORY_MERGE 68
#define PR_RISCV_V_SET_CONTROL 69
#define PR_RISCV_V_GET_CONTROL 70
# define PR_RISCV_V_VSTATE_CTRL_DEFAULT 0
# define PR_RISCV_V_VSTATE_CTRL_OFF 1
# define PR_RISCV_V_VSTATE_CTRL_ON 2
# define PR_RISCV_V_VSTATE_CTRL_INHERIT (1 << 4)
# define PR_RISCV_V_VSTATE_CTRL_CUR_MASK 0x3
# define PR_RISCV_V_VSTATE_CTRL_NEXT_MASK 0xc
# define PR_RISCV_V_VSTATE_CTRL_MASK 0x1f
#endif /* _LINUX_PRCTL_H */ #endif /* _LINUX_PRCTL_H */
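For reference, a userspace sketch that composes these bits; the shift of 2
for the "next" field is an assumption read off the 0xc mask:

	#include <sys/prctl.h>
	#include <linux/prctl.h>

	int enable_v_for_exec(void)
	{
		/* enable vector for execve()d children and mark the setting inherited */
		unsigned long ctl = (PR_RISCV_V_VSTATE_CTRL_ON << 2) |
				    PR_RISCV_V_VSTATE_CTRL_INHERIT;

		return prctl(PR_RISCV_V_SET_CONTROL, ctl);
	}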
...@@ -140,6 +140,12 @@ ...@@ -140,6 +140,12 @@
#ifndef GET_TAGGED_ADDR_CTRL #ifndef GET_TAGGED_ADDR_CTRL
# define GET_TAGGED_ADDR_CTRL() (-EINVAL) # define GET_TAGGED_ADDR_CTRL() (-EINVAL)
#endif #endif
#ifndef RISCV_V_SET_CONTROL
# define RISCV_V_SET_CONTROL(a) (-EINVAL)
#endif
#ifndef RISCV_V_GET_CONTROL
# define RISCV_V_GET_CONTROL() (-EINVAL)
#endif
/* /*
* this is where the system-wide overflow UID and GID are defined, for * this is where the system-wide overflow UID and GID are defined, for
...@@ -2708,6 +2714,12 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3, ...@@ -2708,6 +2714,12 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
error = !!test_bit(MMF_VM_MERGE_ANY, &me->mm->flags); error = !!test_bit(MMF_VM_MERGE_ANY, &me->mm->flags);
break; break;
#endif #endif
case PR_RISCV_V_SET_CONTROL:
error = RISCV_V_SET_CONTROL(arg2);
break;
case PR_RISCV_V_GET_CONTROL:
error = RISCV_V_GET_CONTROL();
break;
default: default:
error = -EINVAL; error = -EINVAL;
break; break;
......
...@@ -5,7 +5,7 @@ ...@@ -5,7 +5,7 @@
ARCH ?= $(shell uname -m 2>/dev/null || echo not) ARCH ?= $(shell uname -m 2>/dev/null || echo not)
ifneq (,$(filter $(ARCH),riscv)) ifneq (,$(filter $(ARCH),riscv))
RISCV_SUBTARGETS ?= hwprobe RISCV_SUBTARGETS ?= hwprobe vector
else else
RISCV_SUBTARGETS := RISCV_SUBTARGETS :=
endif endif
......
# SPDX-License-Identifier: GPL-2.0
# Copyright (C) 2021 ARM Limited
# Originally tools/testing/arm64/abi/Makefile
TEST_GEN_PROGS := vstate_prctl
TEST_GEN_PROGS_EXTENDED := vstate_exec_nolibc
include ../../lib.mk
$(OUTPUT)/vstate_prctl: vstate_prctl.c ../hwprobe/sys_hwprobe.S
$(CC) -static -o$@ $(CFLAGS) $(LDFLAGS) $^
$(OUTPUT)/vstate_exec_nolibc: vstate_exec_nolibc.c
$(CC) -nostdlib -static -include ../../../../include/nolibc/nolibc.h \
-Wall $(CFLAGS) $(LDFLAGS) $^ -o $@ -lgcc
// SPDX-License-Identifier: GPL-2.0-only
#include <sys/prctl.h>
#define THIS_PROGRAM "./vstate_exec_nolibc"
int main(int argc, char **argv)
{
int rc, pid, status, test_inherit = 0;
long ctrl, ctrl_c;
char *exec_argv[2], *exec_envp[2];
if (argc > 1)
test_inherit = 1;
ctrl = my_syscall1(__NR_prctl, PR_RISCV_V_GET_CONTROL);
if (ctrl < 0) {
puts("PR_RISCV_V_GET_CONTROL is not supported\n");
return ctrl;
}
if (test_inherit) {
pid = fork();
if (pid == -1) {
puts("fork failed\n");
exit(-1);
}
/* child */
if (!pid) {
exec_argv[0] = THIS_PROGRAM;
exec_argv[1] = NULL;
exec_envp[0] = NULL;
exec_envp[1] = NULL;
/* launch the program again to check inherit */
rc = execve(THIS_PROGRAM, exec_argv, exec_envp);
if (rc) {
puts("child execve failed\n");
exit(-1);
}
}
} else {
pid = fork();
if (pid == -1) {
puts("fork failed\n");
exit(-1);
}
if (!pid) {
rc = my_syscall1(__NR_prctl, PR_RISCV_V_GET_CONTROL);
if (rc != ctrl) {
puts("child's vstate_ctrl not equal to parent's\n");
exit(-1);
}
asm volatile (".option push\n\t"
".option arch, +v\n\t"
"vsetvli x0, x0, e32, m8, ta, ma\n\t"
".option pop\n\t"
);
exit(ctrl);
}
}
rc = waitpid(-1, &status, 0);
if (WIFEXITED(status) && WEXITSTATUS(status) == -1) {
puts("child exited abnormally\n");
exit(-1);
}
if (WIFSIGNALED(status)) {
if (WTERMSIG(status) != SIGILL) {
puts("child was terminated by unexpected signal\n");
exit(-1);
}
if ((ctrl & PR_RISCV_V_VSTATE_CTRL_CUR_MASK) != PR_RISCV_V_VSTATE_CTRL_OFF) {
puts("child signaled by illegal V access but vstate_ctrl is not off\n");
exit(-1);
}
/* child terminated, and its vstate_ctrl is off */
exit(ctrl);
}
ctrl_c = WEXITSTATUS(status);
if (test_inherit) {
if (ctrl & PR_RISCV_V_VSTATE_CTRL_INHERIT) {
if (!(ctrl_c & PR_RISCV_V_VSTATE_CTRL_INHERIT)) {
puts("parent has inherit bit, but child has not\n");
exit(-1);
}
}
rc = (ctrl & PR_RISCV_V_VSTATE_CTRL_NEXT_MASK) >> 2;
if (rc != PR_RISCV_V_VSTATE_CTRL_DEFAULT) {
if (rc != (ctrl_c & PR_RISCV_V_VSTATE_CTRL_CUR_MASK)) {
				puts("parent's next setting does not match child's\n");
exit(-1);
}
if (!(ctrl & PR_RISCV_V_VSTATE_CTRL_INHERIT)) {
if ((ctrl_c & PR_RISCV_V_VSTATE_CTRL_NEXT_MASK) !=
PR_RISCV_V_VSTATE_CTRL_DEFAULT) {
puts("must clear child's next vstate_ctrl if !inherit\n");
exit(-1);
}
}
}
}
return ctrl;
}
// SPDX-License-Identifier: GPL-2.0-only
#include <sys/prctl.h>
#include <unistd.h>
#include <asm/hwprobe.h>
#include <errno.h>
#include <sys/wait.h>
#include "../../kselftest.h"
/*
* Rather than relying on having a new enough libc to define this, just do it
* ourselves. This way we don't need to be coupled to a new-enough libc to
* contain the call.
*/
long riscv_hwprobe(struct riscv_hwprobe *pairs, size_t pair_count,
size_t cpu_count, unsigned long *cpus, unsigned int flags);
#define NEXT_PROGRAM "./vstate_exec_nolibc"
static int launch_test(int test_inherit)
{
char *exec_argv[3], *exec_envp[1];
int rc, pid, status;
pid = fork();
if (pid < 0) {
ksft_test_result_fail("fork failed %d", pid);
return -1;
}
if (!pid) {
exec_argv[0] = NEXT_PROGRAM;
exec_argv[1] = test_inherit != 0 ? "x" : NULL;
exec_argv[2] = NULL;
exec_envp[0] = NULL;
/* launch the program again to check inherit */
rc = execve(NEXT_PROGRAM, exec_argv, exec_envp);
if (rc) {
perror("execve");
ksft_test_result_fail("child execve failed %d\n", rc);
exit(-1);
}
}
rc = waitpid(-1, &status, 0);
if (rc < 0) {
ksft_test_result_fail("waitpid failed\n");
return -3;
}
if ((WIFEXITED(status) && WEXITSTATUS(status) == -1) ||
WIFSIGNALED(status)) {
ksft_test_result_fail("child exited abnormally\n");
return -4;
}
return WEXITSTATUS(status);
}
int test_and_compare_child(long provided, long expected, int inherit)
{
int rc;
rc = prctl(PR_RISCV_V_SET_CONTROL, provided);
if (rc != 0) {
ksft_test_result_fail("prctl with provided arg %lx failed with code %d\n",
provided, rc);
return -1;
}
rc = launch_test(inherit);
if (rc != expected) {
ksft_test_result_fail("Test failed, check %d != %d\n", rc,
expected);
return -2;
}
return 0;
}
#define PR_RISCV_V_VSTATE_CTRL_CUR_SHIFT 0
#define PR_RISCV_V_VSTATE_CTRL_NEXT_SHIFT 2
int main(void)
{
struct riscv_hwprobe pair;
long flag, expected;
long rc;
pair.key = RISCV_HWPROBE_KEY_IMA_EXT_0;
rc = riscv_hwprobe(&pair, 1, 0, NULL, 0);
if (rc < 0) {
		ksft_test_result_fail("hwprobe() failed with %ld\n", rc);
return -1;
}
if (pair.key != RISCV_HWPROBE_KEY_IMA_EXT_0) {
ksft_test_result_fail("hwprobe cannot probe RISCV_HWPROBE_KEY_IMA_EXT_0\n");
return -2;
}
if (!(pair.value & RISCV_HWPROBE_IMA_V)) {
rc = prctl(PR_RISCV_V_GET_CONTROL);
if (rc != -1 || errno != EINVAL) {
ksft_test_result_fail("GET_CONTROL should fail on kernel/hw without V\n");
return -3;
}
rc = prctl(PR_RISCV_V_SET_CONTROL, PR_RISCV_V_VSTATE_CTRL_ON);
if (rc != -1 || errno != EINVAL) {
			ksft_test_result_fail("SET_CONTROL should fail on kernel/hw without V\n");
return -4;
}
ksft_test_result_skip("Vector not supported\n");
return 0;
}
flag = PR_RISCV_V_VSTATE_CTRL_ON;
rc = prctl(PR_RISCV_V_SET_CONTROL, flag);
if (rc != 0) {
		ksft_test_result_fail("Enabling V for the current process should always succeed\n");
return -5;
}
flag = PR_RISCV_V_VSTATE_CTRL_OFF;
rc = prctl(PR_RISCV_V_SET_CONTROL, flag);
if (rc != -1 || errno != EPERM) {
		ksft_test_result_fail("Disabling V for the current process must fail with EPERM (%d)\n",
				      errno);
return -5;
}
/* Turn on next's vector explicitly and test */
flag = PR_RISCV_V_VSTATE_CTRL_ON << PR_RISCV_V_VSTATE_CTRL_NEXT_SHIFT;
if (test_and_compare_child(flag, PR_RISCV_V_VSTATE_CTRL_ON, 0))
return -6;
/* Turn off next's vector explicitly and test */
flag = PR_RISCV_V_VSTATE_CTRL_OFF << PR_RISCV_V_VSTATE_CTRL_NEXT_SHIFT;
if (test_and_compare_child(flag, PR_RISCV_V_VSTATE_CTRL_OFF, 0))
return -7;
/* Turn on next's vector explicitly and test inherit */
flag = PR_RISCV_V_VSTATE_CTRL_ON << PR_RISCV_V_VSTATE_CTRL_NEXT_SHIFT;
flag |= PR_RISCV_V_VSTATE_CTRL_INHERIT;
expected = flag | PR_RISCV_V_VSTATE_CTRL_ON;
if (test_and_compare_child(flag, expected, 0))
return -8;
if (test_and_compare_child(flag, expected, 1))
return -9;
/* Turn off next's vector explicitly and test inherit */
flag = PR_RISCV_V_VSTATE_CTRL_OFF << PR_RISCV_V_VSTATE_CTRL_NEXT_SHIFT;
flag |= PR_RISCV_V_VSTATE_CTRL_INHERIT;
expected = flag | PR_RISCV_V_VSTATE_CTRL_OFF;
if (test_and_compare_child(flag, expected, 0))
return -10;
if (test_and_compare_child(flag, expected, 1))
return -11;
	/* invalid control arguments should fail with EINVAL */
rc = prctl(PR_RISCV_V_SET_CONTROL, 0xff0);
if (rc != -1 || errno != EINVAL) {
ksft_test_result_fail("Undefined control argument should return EINVAL\n");
return -12;
}
rc = prctl(PR_RISCV_V_SET_CONTROL, 0x3);
if (rc != -1 || errno != EINVAL) {
ksft_test_result_fail("Undefined control argument should return EINVAL\n");
return -12;
}
rc = prctl(PR_RISCV_V_SET_CONTROL, 0xc);
if (rc != -1 || errno != EINVAL) {
ksft_test_result_fail("Undefined control argument should return EINVAL\n");
return -12;
}
ksft_test_result_pass("tests for riscv_v_vstate_ctrl pass\n");
ksft_exit_pass();
return 0;
}