Commit 8fa590bf authored by Linus Torvalds

Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm

Pull kvm updates from Paolo Bonzini:
 "ARM64:

   - Enable the per-vcpu dirty-ring tracking mechanism, together with an
     option to keep the good old dirty log around for pages that are
     dirtied by something other than a vcpu.

   - Switch to the relaxed parallel fault handling, using RCU to delay
     page table reclaim and giving better performance under load.

   - Relax the MTE ABI, allowing a VMM to use the MAP_SHARED mapping
     option, which multi-process VMMs such as crosvm rely on (see merge
     commit 382b5b87: "Fix a number of issues with MTE, such as
     races on the tags being initialised vs the PG_mte_tagged flag as
     well as the lack of support for VM_SHARED when KVM is involved.
     Patches from Catalin Marinas and Peter Collingbourne").

   - Merge the pKVM shadow vcpu state tracking that allows the
     hypervisor to have its own view of a vcpu, keeping that state
     private.

   - Add support for the PMUv3p5 architecture revision, bringing support
     for 64bit counters on systems that support it, and fix the
     not-quite-compliant CHAIN-ed counter support for the machines that
     actually exist out there.

   - Fix a handful of minor issues around 52bit VA/PA support (64kB
     pages only) as a prefix of the oncoming support for 4kB and 16kB
     pages.

   - Pick a small set of documentation and spelling fixes, because no
     good merge window would be complete without those.

  s390:

   - Second batch of the lazy destroy patches

   - First batch of KVM changes for kernel virtual != physical address
     support

   - Removal of an unused function

  x86:

   - Allow compiling out SMM support

   - Cleanup and documentation of SMM state save area format

   - Preserve interrupt shadow in SMM state save area

   - Respond to generic signals during slow page faults

   - Fixes and optimizations for the non-executable huge page errata
     fix.

   - Reprogram all performance counters on PMU filter change

   - Cleanups to Hyper-V emulation and tests

   - Process Hyper-V TLB flushes from a nested guest (i.e. from an L2
     guest running on top of an L1 Hyper-V hypervisor)

   - Advertise several new Intel features

   - x86 Xen-for-KVM:

      - Allow the Xen runstate information to cross a page boundary

      - Allow XEN_RUNSTATE_UPDATE flag behaviour to be configured

      - Add support for 32-bit guests in SCHEDOP_poll

   - Notable x86 fixes and cleanups:

      - One-off fixes for various emulation flows (SGX, VMXON, NRIPS=0).

      - Reinstate IBPB on emulated VM-Exit that was incorrectly dropped
        a few years back when eliminating unnecessary barriers when
        switching between vmcs01 and vmcs02.

      - Clean up vmread_error_trampoline() to make it more obvious that
        params must be passed on the stack, even for x86-64.

      - Let userspace set all supported bits in MSR_IA32_FEAT_CTL
        irrespective of the current guest CPUID.

      - Fudge around a race with TSC refinement that results in KVM
        incorrectly thinking a guest needs TSC scaling when running on a
        CPU with a constant TSC, but no hardware-enumerated TSC
        frequency.

      - Advertise (on AMD) that the SMM_CTL MSR is not supported

      - Remove unnecessary exports

  Generic:

   - Support for responding to signals during page faults; introduces
     new FOLL_INTERRUPTIBLE flag that was reviewed by mm folks

  Selftests:

   - Fix an inverted check in the access tracking perf test, and restore
     support for asserting that there aren't too many idle pages when
     running on bare metal.

   - Fix build errors that occur in certain setups (unsure exactly what
     is unique about the problematic setup) due to glibc overriding
     static_assert() to a variant that requires a custom message.

   - Introduce actual atomics for clear/set_bit() in selftests

   - Add support for pinning vCPUs in dirty_log_perf_test.

   - Rename the so-called "perf_util" framework to "memstress".

   - Add a lightweight pseudo-RNG for guest use, and use it to randomize
     the access pattern and write vs. read percentage in the memstress
     tests.

   - Add a common ucall implementation; code dedup and pre-work for
     running SEV (and beyond) guests in selftests.

   - Provide a common constructor and arch hook, which will eventually
     be used by x86 to automatically select the right hypercall (AMD vs.
     Intel).

   - A bunch of added/enabled/fixed selftests for ARM64, covering
     memslots, breakpoints, stage-2 faults and access tracking.

   - x86-specific selftest changes:

      - Clean up x86's page table management.

      - Clean up and enhance the "smaller maxphyaddr" test, and add a
        related test to cover generic emulation failure.

      - Clean up the nEPT support checks.

      - Add X86_PROPERTY_* framework to retrieve multi-bit CPUID values.

      - Fix an ordering issue in the AMX test introduced by recent
        conversions to use kvm_cpu_has(), and harden the code to guard
        against similar bugs in the future. Anything that triggers
        caching of KVM's supported CPUID, kvm_cpu_has() in this case,
        effectively hides opt-in XSAVE features if the caching occurs
        before the test opts in via prctl().

  Documentation:

   - Remove deleted ioctls from documentation

   - Clean up the docs for the x86 MSR filter.

   - Various fixes"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (361 commits)
  KVM: x86: Add proper ReST tables for userspace MSR exits/flags
  KVM: selftests: Allocate ucall pool from MEM_REGION_DATA
  KVM: arm64: selftests: Align VA space allocator with TTBR0
  KVM: arm64: Fix benign bug with incorrect use of VA_BITS
  KVM: arm64: PMU: Fix period computation for 64bit counters with 32bit overflow
  KVM: x86: Advertise that the SMM_CTL MSR is not supported
  KVM: x86: remove unnecessary exports
  KVM: selftests: Fix spelling mistake "probabalistic" -> "probabilistic"
  tools: KVM: selftests: Convert clear/set_bit() to actual atomics
  tools: Drop "atomic_" prefix from atomic test_and_set_bit()
  tools: Drop conflicting non-atomic test_and_{clear,set}_bit() helpers
  KVM: selftests: Use non-atomic clear/set bit helpers in KVM tests
  perf tools: Use dedicated non-atomic clear/set bit helpers
  tools: Take @bit as an "unsigned long" in {clear,set}_bit() helpers
  KVM: arm64: selftests: Enable single-step without a "full" ucall()
  KVM: x86: fix APICv/x2AVIC disabled when vm reboot by itself
  KVM: Remove stale comment about KVM_REQ_UNHALT
  KVM: Add missing arch for KVM_CREATE_DEVICE and KVM_{SET,GET}_DEVICE_ATTR
  KVM: Reference to kvm_userspace_memory_region in doc and comments
  KVM: Delete all references to removed KVM_SET_MEMORY_ALIAS ioctl
  ...
parents 057b40f4 549a715b
...@@ -23,21 +23,23 @@ the PV_TIME_FEATURES hypercall should be probed using the SMCCC 1.1 ...@@ -23,21 +23,23 @@ the PV_TIME_FEATURES hypercall should be probed using the SMCCC 1.1
ARCH_FEATURES mechanism before calling it. ARCH_FEATURES mechanism before calling it.
PV_TIME_FEATURES PV_TIME_FEATURES
============= ======== ==========
============= ======== =================================================
Function ID: (uint32) 0xC5000020 Function ID: (uint32) 0xC5000020
PV_call_id: (uint32) The function to query for support. PV_call_id: (uint32) The function to query for support.
Currently only PV_TIME_ST is supported. Currently only PV_TIME_ST is supported.
Return value: (int64) NOT_SUPPORTED (-1) or SUCCESS (0) if the relevant Return value: (int64) NOT_SUPPORTED (-1) or SUCCESS (0) if the relevant
PV-time feature is supported by the hypervisor. PV-time feature is supported by the hypervisor.
============= ======== ========== ============= ======== =================================================
PV_TIME_ST PV_TIME_ST
============= ======== ==========
============= ======== ==============================================
Function ID: (uint32) 0xC5000021 Function ID: (uint32) 0xC5000021
Return value: (int64) IPA of the stolen time data structure for this Return value: (int64) IPA of the stolen time data structure for this
VCPU. On failure: VCPU. On failure:
NOT_SUPPORTED (-1) NOT_SUPPORTED (-1)
============= ======== ========== ============= ======== ==============================================
The IPA returned by PV_TIME_ST should be mapped by the guest as normal memory The IPA returned by PV_TIME_ST should be mapped by the guest as normal memory
with inner and outer write back caching attributes, in the inner shareable with inner and outer write back caching attributes, in the inner shareable
...@@ -76,5 +78,5 @@ It is advisable that one or more 64k pages are set aside for the purpose of ...@@ -76,5 +78,5 @@ It is advisable that one or more 64k pages are set aside for the purpose of
these structures and not used for other purposes, this enables the guest to map these structures and not used for other purposes, this enables the guest to map
the region using 64k pages and avoids conflicting attributes with other memory. the region using 64k pages and avoids conflicting attributes with other memory.
For the user space interface see Documentation/virt/kvm/devices/vcpu.rst For the user space interface see
section "3. GROUP: KVM_ARM_VCPU_PVTIME_CTRL". :ref:`Documentation/virt/kvm/devices/vcpu.rst <kvm_arm_vcpu_pvtime_ctrl>`.
\ No newline at end of file
...@@ -52,7 +52,10 @@ KVM_DEV_ARM_VGIC_GRP_CTRL ...@@ -52,7 +52,10 @@ KVM_DEV_ARM_VGIC_GRP_CTRL
KVM_DEV_ARM_ITS_SAVE_TABLES KVM_DEV_ARM_ITS_SAVE_TABLES
save the ITS table data into guest RAM, at the location provisioned save the ITS table data into guest RAM, at the location provisioned
by the guest in corresponding registers/table entries. by the guest in corresponding registers/table entries. Should userspace
require a form of dirty tracking to identify which pages are modified
by the saving process, it should use a bitmap even if using another
mechanism to track the memory dirtied by the vCPUs.
The layout of the tables in guest memory defines an ABI. The entries The layout of the tables in guest memory defines an ABI. The entries
are laid out in little endian format as described in the last paragraph. are laid out in little endian format as described in the last paragraph.
......
...@@ -171,6 +171,8 @@ configured values on other VCPUs. Userspace should configure the interrupt ...@@ -171,6 +171,8 @@ configured values on other VCPUs. Userspace should configure the interrupt
numbers on at least one VCPU after creating all VCPUs and before running any numbers on at least one VCPU after creating all VCPUs and before running any
VCPUs. VCPUs.
.. _kvm_arm_vcpu_pvtime_ctrl:
3. GROUP: KVM_ARM_VCPU_PVTIME_CTRL 3. GROUP: KVM_ARM_VCPU_PVTIME_CTRL
================================== ==================================
......
...@@ -11438,6 +11438,16 @@ F: arch/x86/kvm/svm/hyperv.* ...@@ -11438,6 +11438,16 @@ F: arch/x86/kvm/svm/hyperv.*
F: arch/x86/kvm/svm/svm_onhyperv.* F: arch/x86/kvm/svm/svm_onhyperv.*
F: arch/x86/kvm/vmx/evmcs.* F: arch/x86/kvm/vmx/evmcs.*
KVM X86 Xen (KVM/Xen)
M: David Woodhouse <dwmw2@infradead.org>
M: Paul Durrant <paul@xen.org>
M: Sean Christopherson <seanjc@google.com>
M: Paolo Bonzini <pbonzini@redhat.com>
L: kvm@vger.kernel.org
S: Supported
T: git git://git.kernel.org/pub/scm/virt/kvm/kvm.git
F: arch/x86/kvm/xen.*
KERNFS KERNFS
M: Greg Kroah-Hartman <gregkh@linuxfoundation.org> M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
M: Tejun Heo <tj@kernel.org> M: Tejun Heo <tj@kernel.org>
......
...@@ -1988,6 +1988,7 @@ config ARM64_MTE ...@@ -1988,6 +1988,7 @@ config ARM64_MTE
depends on ARM64_PAN depends on ARM64_PAN
select ARCH_HAS_SUBPAGE_FAULTS select ARCH_HAS_SUBPAGE_FAULTS
select ARCH_USES_HIGH_VMA_FLAGS select ARCH_USES_HIGH_VMA_FLAGS
select ARCH_USES_PG_ARCH_X
help help
Memory Tagging (part of the ARMv8.5 Extensions) provides Memory Tagging (part of the ARMv8.5 Extensions) provides
architectural support for run-time, always-on detection of architectural support for run-time, always-on detection of
......
...@@ -135,7 +135,7 @@ ...@@ -135,7 +135,7 @@
* 40 bits wide (T0SZ = 24). Systems with a PARange smaller than 40 bits are * 40 bits wide (T0SZ = 24). Systems with a PARange smaller than 40 bits are
* not known to exist and will break with this configuration. * not known to exist and will break with this configuration.
* *
* The VTCR_EL2 is configured per VM and is initialised in kvm_arm_setup_stage2(). * The VTCR_EL2 is configured per VM and is initialised in kvm_init_stage2_mmu.
* *
* Note that when using 4K pages, we concatenate two first level page tables * Note that when using 4K pages, we concatenate two first level page tables
* together. With 16K pages, we concatenate 16 first level page tables. * together. With 16K pages, we concatenate 16 first level page tables.
...@@ -340,9 +340,13 @@ ...@@ -340,9 +340,13 @@
* We have * We have
* PAR [PA_Shift - 1 : 12] = PA [PA_Shift - 1 : 12] * PAR [PA_Shift - 1 : 12] = PA [PA_Shift - 1 : 12]
* HPFAR [PA_Shift - 9 : 4] = FIPA [PA_Shift - 1 : 12] * HPFAR [PA_Shift - 9 : 4] = FIPA [PA_Shift - 1 : 12]
*
* Always assume 52 bit PA since at this point, we don't know how many PA bits
* the page table has been set up for. This should be safe since unused address
* bits in PAR are res0.
*/ */
#define PAR_TO_HPFAR(par) \ #define PAR_TO_HPFAR(par) \
(((par) & GENMASK_ULL(PHYS_MASK_SHIFT - 1, 12)) >> 8) (((par) & GENMASK_ULL(52 - 1, 12)) >> 8)
#define ECN(x) { ESR_ELx_EC_##x, #x } #define ECN(x) { ESR_ELx_EC_##x, #x }
......
...@@ -76,6 +76,9 @@ enum __kvm_host_smccc_func { ...@@ -76,6 +76,9 @@ enum __kvm_host_smccc_func {
__KVM_HOST_SMCCC_FUNC___vgic_v3_save_aprs, __KVM_HOST_SMCCC_FUNC___vgic_v3_save_aprs,
__KVM_HOST_SMCCC_FUNC___vgic_v3_restore_aprs, __KVM_HOST_SMCCC_FUNC___vgic_v3_restore_aprs,
__KVM_HOST_SMCCC_FUNC___pkvm_vcpu_init_traps, __KVM_HOST_SMCCC_FUNC___pkvm_vcpu_init_traps,
__KVM_HOST_SMCCC_FUNC___pkvm_init_vm,
__KVM_HOST_SMCCC_FUNC___pkvm_init_vcpu,
__KVM_HOST_SMCCC_FUNC___pkvm_teardown_vm,
}; };
#define DECLARE_KVM_VHE_SYM(sym) extern char sym[] #define DECLARE_KVM_VHE_SYM(sym) extern char sym[]
...@@ -106,7 +109,7 @@ enum __kvm_host_smccc_func { ...@@ -106,7 +109,7 @@ enum __kvm_host_smccc_func {
#define per_cpu_ptr_nvhe_sym(sym, cpu) \ #define per_cpu_ptr_nvhe_sym(sym, cpu) \
({ \ ({ \
unsigned long base, off; \ unsigned long base, off; \
base = kvm_arm_hyp_percpu_base[cpu]; \ base = kvm_nvhe_sym(kvm_arm_hyp_percpu_base)[cpu]; \
off = (unsigned long)&CHOOSE_NVHE_SYM(sym) - \ off = (unsigned long)&CHOOSE_NVHE_SYM(sym) - \
(unsigned long)&CHOOSE_NVHE_SYM(__per_cpu_start); \ (unsigned long)&CHOOSE_NVHE_SYM(__per_cpu_start); \
base ? (typeof(CHOOSE_NVHE_SYM(sym))*)(base + off) : NULL; \ base ? (typeof(CHOOSE_NVHE_SYM(sym))*)(base + off) : NULL; \
...@@ -211,7 +214,7 @@ DECLARE_KVM_HYP_SYM(__kvm_hyp_vector); ...@@ -211,7 +214,7 @@ DECLARE_KVM_HYP_SYM(__kvm_hyp_vector);
#define __kvm_hyp_init CHOOSE_NVHE_SYM(__kvm_hyp_init) #define __kvm_hyp_init CHOOSE_NVHE_SYM(__kvm_hyp_init)
#define __kvm_hyp_vector CHOOSE_HYP_SYM(__kvm_hyp_vector) #define __kvm_hyp_vector CHOOSE_HYP_SYM(__kvm_hyp_vector)
extern unsigned long kvm_arm_hyp_percpu_base[NR_CPUS]; extern unsigned long kvm_nvhe_sym(kvm_arm_hyp_percpu_base)[];
DECLARE_KVM_NVHE_SYM(__per_cpu_start); DECLARE_KVM_NVHE_SYM(__per_cpu_start);
DECLARE_KVM_NVHE_SYM(__per_cpu_end); DECLARE_KVM_NVHE_SYM(__per_cpu_end);
......
...@@ -73,6 +73,63 @@ u32 __attribute_const__ kvm_target_cpu(void); ...@@ -73,6 +73,63 @@ u32 __attribute_const__ kvm_target_cpu(void);
int kvm_reset_vcpu(struct kvm_vcpu *vcpu); int kvm_reset_vcpu(struct kvm_vcpu *vcpu);
void kvm_arm_vcpu_destroy(struct kvm_vcpu *vcpu); void kvm_arm_vcpu_destroy(struct kvm_vcpu *vcpu);
struct kvm_hyp_memcache {
phys_addr_t head;
unsigned long nr_pages;
};
static inline void push_hyp_memcache(struct kvm_hyp_memcache *mc,
phys_addr_t *p,
phys_addr_t (*to_pa)(void *virt))
{
*p = mc->head;
mc->head = to_pa(p);
mc->nr_pages++;
}
static inline void *pop_hyp_memcache(struct kvm_hyp_memcache *mc,
void *(*to_va)(phys_addr_t phys))
{
phys_addr_t *p = to_va(mc->head);
if (!mc->nr_pages)
return NULL;
mc->head = *p;
mc->nr_pages--;
return p;
}
static inline int __topup_hyp_memcache(struct kvm_hyp_memcache *mc,
unsigned long min_pages,
void *(*alloc_fn)(void *arg),
phys_addr_t (*to_pa)(void *virt),
void *arg)
{
while (mc->nr_pages < min_pages) {
phys_addr_t *p = alloc_fn(arg);
if (!p)
return -ENOMEM;
push_hyp_memcache(mc, p, to_pa);
}
return 0;
}
static inline void __free_hyp_memcache(struct kvm_hyp_memcache *mc,
void (*free_fn)(void *virt, void *arg),
void *(*to_va)(phys_addr_t phys),
void *arg)
{
while (mc->nr_pages)
free_fn(pop_hyp_memcache(mc, to_va), arg);
}
void free_hyp_memcache(struct kvm_hyp_memcache *mc);
int topup_hyp_memcache(struct kvm_hyp_memcache *mc, unsigned long min_pages);
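A minimal usage sketch of the memcache helpers declared above (not part of the diff): spare pages are threaded through a singly linked list keyed by physical address, so the same structure can be handed across to the hypervisor. The host_alloc_page/host_free_page/host_pa/host_va wrappers and example_topup_and_drain below are hypothetical names, assuming ordinary kernel context (linux/gfp.h, asm/memory.h); the real wiring lives in the arm64 KVM code.

/* Hypothetical host-side callbacks; any page source and VA<->PA pair works. */
static void *host_alloc_page(void *arg)
{
	return (void *)__get_free_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO);
}

static void host_free_page(void *virt, void *arg)
{
	free_page((unsigned long)virt);
}

static phys_addr_t host_pa(void *virt)
{
	return virt_to_phys(virt);
}

static void *host_va(phys_addr_t phys)
{
	return phys_to_virt(phys);
}

static int example_topup_and_drain(struct kvm_hyp_memcache *mc)
{
	/* Stash at least 4 pages, linked through their physical addresses. */
	int ret = __topup_hyp_memcache(mc, 4, host_alloc_page, host_pa, NULL);

	if (ret)
		return ret;

	/* ... hand mc to the hypervisor, which pops pages as it needs them ... */

	/* Return whatever is left to the host allocator. */
	__free_hyp_memcache(mc, host_free_page, host_va, NULL);
	return 0;
}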
struct kvm_vmid { struct kvm_vmid {
atomic64_t id; atomic64_t id;
}; };
...@@ -115,6 +172,13 @@ struct kvm_smccc_features { ...@@ -115,6 +172,13 @@ struct kvm_smccc_features {
unsigned long vendor_hyp_bmap; unsigned long vendor_hyp_bmap;
}; };
typedef unsigned int pkvm_handle_t;
struct kvm_protected_vm {
pkvm_handle_t handle;
struct kvm_hyp_memcache teardown_mc;
};
struct kvm_arch { struct kvm_arch {
struct kvm_s2_mmu mmu; struct kvm_s2_mmu mmu;
...@@ -163,9 +227,19 @@ struct kvm_arch { ...@@ -163,9 +227,19 @@ struct kvm_arch {
u8 pfr0_csv2; u8 pfr0_csv2;
u8 pfr0_csv3; u8 pfr0_csv3;
struct {
u8 imp:4;
u8 unimp:4;
} dfr0_pmuver;
/* Hypercall features firmware registers' descriptor */ /* Hypercall features firmware registers' descriptor */
struct kvm_smccc_features smccc_feat; struct kvm_smccc_features smccc_feat;
/*
* For an untrusted host VM, 'pkvm.handle' is used to lookup
* the associated pKVM instance in the hypervisor.
*/
struct kvm_protected_vm pkvm;
}; };
struct kvm_vcpu_fault_info { struct kvm_vcpu_fault_info {
...@@ -925,8 +999,6 @@ int kvm_set_ipa_limit(void); ...@@ -925,8 +999,6 @@ int kvm_set_ipa_limit(void);
#define __KVM_HAVE_ARCH_VM_ALLOC #define __KVM_HAVE_ARCH_VM_ALLOC
struct kvm *kvm_arch_alloc_vm(void); struct kvm *kvm_arch_alloc_vm(void);
int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type);
static inline bool kvm_vm_is_protected(struct kvm *kvm) static inline bool kvm_vm_is_protected(struct kvm *kvm)
{ {
return false; return false;
......
...@@ -123,4 +123,7 @@ extern u64 kvm_nvhe_sym(id_aa64mmfr0_el1_sys_val); ...@@ -123,4 +123,7 @@ extern u64 kvm_nvhe_sym(id_aa64mmfr0_el1_sys_val);
extern u64 kvm_nvhe_sym(id_aa64mmfr1_el1_sys_val); extern u64 kvm_nvhe_sym(id_aa64mmfr1_el1_sys_val);
extern u64 kvm_nvhe_sym(id_aa64mmfr2_el1_sys_val); extern u64 kvm_nvhe_sym(id_aa64mmfr2_el1_sys_val);
extern unsigned long kvm_nvhe_sym(__icache_flags);
extern unsigned int kvm_nvhe_sym(kvm_arm_vmid_bits);
#endif /* __ARM64_KVM_HYP_H__ */ #endif /* __ARM64_KVM_HYP_H__ */
...@@ -166,7 +166,7 @@ int create_hyp_exec_mappings(phys_addr_t phys_addr, size_t size, ...@@ -166,7 +166,7 @@ int create_hyp_exec_mappings(phys_addr_t phys_addr, size_t size,
void free_hyp_pgds(void); void free_hyp_pgds(void);
void stage2_unmap_vm(struct kvm *kvm); void stage2_unmap_vm(struct kvm *kvm);
int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu); int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long type);
void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu); void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu);
int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa, int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
phys_addr_t pa, unsigned long size, bool writable); phys_addr_t pa, unsigned long size, bool writable);
......
...@@ -42,6 +42,8 @@ typedef u64 kvm_pte_t; ...@@ -42,6 +42,8 @@ typedef u64 kvm_pte_t;
#define KVM_PTE_ADDR_MASK GENMASK(47, PAGE_SHIFT) #define KVM_PTE_ADDR_MASK GENMASK(47, PAGE_SHIFT)
#define KVM_PTE_ADDR_51_48 GENMASK(15, 12) #define KVM_PTE_ADDR_51_48 GENMASK(15, 12)
#define KVM_PHYS_INVALID (-1ULL)
static inline bool kvm_pte_valid(kvm_pte_t pte) static inline bool kvm_pte_valid(kvm_pte_t pte)
{ {
return pte & KVM_PTE_VALID; return pte & KVM_PTE_VALID;
...@@ -57,6 +59,18 @@ static inline u64 kvm_pte_to_phys(kvm_pte_t pte) ...@@ -57,6 +59,18 @@ static inline u64 kvm_pte_to_phys(kvm_pte_t pte)
return pa; return pa;
} }
static inline kvm_pte_t kvm_phys_to_pte(u64 pa)
{
kvm_pte_t pte = pa & KVM_PTE_ADDR_MASK;
if (PAGE_SHIFT == 16) {
pa &= GENMASK(51, 48);
pte |= FIELD_PREP(KVM_PTE_ADDR_51_48, pa >> 48);
}
return pte;
}
static inline u64 kvm_granule_shift(u32 level) static inline u64 kvm_granule_shift(u32 level)
{ {
/* Assumes KVM_PGTABLE_MAX_LEVELS is 4 */ /* Assumes KVM_PGTABLE_MAX_LEVELS is 4 */
...@@ -85,6 +99,8 @@ static inline bool kvm_level_supports_block_mapping(u32 level) ...@@ -85,6 +99,8 @@ static inline bool kvm_level_supports_block_mapping(u32 level)
* allocation is physically contiguous. * allocation is physically contiguous.
* @free_pages_exact: Free an exact number of memory pages previously * @free_pages_exact: Free an exact number of memory pages previously
* allocated by zalloc_pages_exact. * allocated by zalloc_pages_exact.
* @free_removed_table: Free a removed paging structure by unlinking and
* dropping references.
* @get_page: Increment the refcount on a page. * @get_page: Increment the refcount on a page.
* @put_page: Decrement the refcount on a page. When the * @put_page: Decrement the refcount on a page. When the
* refcount reaches 0 the page is automatically * refcount reaches 0 the page is automatically
...@@ -103,6 +119,7 @@ struct kvm_pgtable_mm_ops { ...@@ -103,6 +119,7 @@ struct kvm_pgtable_mm_ops {
void* (*zalloc_page)(void *arg); void* (*zalloc_page)(void *arg);
void* (*zalloc_pages_exact)(size_t size); void* (*zalloc_pages_exact)(size_t size);
void (*free_pages_exact)(void *addr, size_t size); void (*free_pages_exact)(void *addr, size_t size);
void (*free_removed_table)(void *addr, u32 level);
void (*get_page)(void *addr); void (*get_page)(void *addr);
void (*put_page)(void *addr); void (*put_page)(void *addr);
int (*page_count)(void *addr); int (*page_count)(void *addr);
...@@ -161,29 +178,6 @@ enum kvm_pgtable_prot { ...@@ -161,29 +178,6 @@ enum kvm_pgtable_prot {
typedef bool (*kvm_pgtable_force_pte_cb_t)(u64 addr, u64 end, typedef bool (*kvm_pgtable_force_pte_cb_t)(u64 addr, u64 end,
enum kvm_pgtable_prot prot); enum kvm_pgtable_prot prot);
/**
* struct kvm_pgtable - KVM page-table.
* @ia_bits: Maximum input address size, in bits.
* @start_level: Level at which the page-table walk starts.
* @pgd: Pointer to the first top-level entry of the page-table.
* @mm_ops: Memory management callbacks.
* @mmu: Stage-2 KVM MMU struct. Unused for stage-1 page-tables.
* @flags: Stage-2 page-table flags.
* @force_pte_cb: Function that returns true if page level mappings must
* be used instead of block mappings.
*/
struct kvm_pgtable {
u32 ia_bits;
u32 start_level;
kvm_pte_t *pgd;
struct kvm_pgtable_mm_ops *mm_ops;
/* Stage-2 only */
struct kvm_s2_mmu *mmu;
enum kvm_pgtable_stage2_flags flags;
kvm_pgtable_force_pte_cb_t force_pte_cb;
};
/** /**
* enum kvm_pgtable_walk_flags - Flags to control a depth-first page-table walk. * enum kvm_pgtable_walk_flags - Flags to control a depth-first page-table walk.
* @KVM_PGTABLE_WALK_LEAF: Visit leaf entries, including invalid * @KVM_PGTABLE_WALK_LEAF: Visit leaf entries, including invalid
...@@ -192,17 +186,34 @@ struct kvm_pgtable { ...@@ -192,17 +186,34 @@ struct kvm_pgtable {
* children. * children.
* @KVM_PGTABLE_WALK_TABLE_POST: Visit table entries after their * @KVM_PGTABLE_WALK_TABLE_POST: Visit table entries after their
* children. * children.
* @KVM_PGTABLE_WALK_SHARED: Indicates the page-tables may be shared
* with other software walkers.
*/ */
enum kvm_pgtable_walk_flags { enum kvm_pgtable_walk_flags {
KVM_PGTABLE_WALK_LEAF = BIT(0), KVM_PGTABLE_WALK_LEAF = BIT(0),
KVM_PGTABLE_WALK_TABLE_PRE = BIT(1), KVM_PGTABLE_WALK_TABLE_PRE = BIT(1),
KVM_PGTABLE_WALK_TABLE_POST = BIT(2), KVM_PGTABLE_WALK_TABLE_POST = BIT(2),
KVM_PGTABLE_WALK_SHARED = BIT(3),
}; };
typedef int (*kvm_pgtable_visitor_fn_t)(u64 addr, u64 end, u32 level, struct kvm_pgtable_visit_ctx {
kvm_pte_t *ptep, kvm_pte_t *ptep;
enum kvm_pgtable_walk_flags flag, kvm_pte_t old;
void * const arg); void *arg;
struct kvm_pgtable_mm_ops *mm_ops;
u64 addr;
u64 end;
u32 level;
enum kvm_pgtable_walk_flags flags;
};
typedef int (*kvm_pgtable_visitor_fn_t)(const struct kvm_pgtable_visit_ctx *ctx,
enum kvm_pgtable_walk_flags visit);
static inline bool kvm_pgtable_walk_shared(const struct kvm_pgtable_visit_ctx *ctx)
{
return ctx->flags & KVM_PGTABLE_WALK_SHARED;
}
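For orientation, a hedged sketch of a leaf visitor written against the new kvm_pgtable_visit_ctx; count_valid_leaves and example_count_valid are illustrative names, and the sketch assumes the existing kvm_pgtable_walk() entry point and the walker's cb/arg fields.

/* Illustrative visitor: counts valid leaf PTEs seen during a walk. */
static int count_valid_leaves(const struct kvm_pgtable_visit_ctx *ctx,
			      enum kvm_pgtable_walk_flags visit)
{
	u64 *count = ctx->arg;

	/* ctx->old is the PTE snapshot taken by the walker core. */
	if (visit == KVM_PGTABLE_WALK_LEAF && kvm_pte_valid(ctx->old))
		(*count)++;

	return 0;
}

static int example_count_valid(struct kvm_pgtable *pgt, u64 addr, u64 size,
			       u64 *count)
{
	struct kvm_pgtable_walker walker = {
		.cb	= count_valid_leaves,
		.arg	= count,
		.flags	= KVM_PGTABLE_WALK_LEAF,
	};

	return kvm_pgtable_walk(pgt, addr, size, &walker);
}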
/** /**
* struct kvm_pgtable_walker - Hook into a page-table walk. * struct kvm_pgtable_walker - Hook into a page-table walk.
...@@ -217,6 +228,94 @@ struct kvm_pgtable_walker { ...@@ -217,6 +228,94 @@ struct kvm_pgtable_walker {
const enum kvm_pgtable_walk_flags flags; const enum kvm_pgtable_walk_flags flags;
}; };
/*
* RCU cannot be used in a non-kernel context such as the hyp. As such, page
* table walkers used in hyp do not call into RCU and instead use other
* synchronization mechanisms (such as a spinlock).
*/
#if defined(__KVM_NVHE_HYPERVISOR__) || defined(__KVM_VHE_HYPERVISOR__)
typedef kvm_pte_t *kvm_pteref_t;
static inline kvm_pte_t *kvm_dereference_pteref(struct kvm_pgtable_walker *walker,
kvm_pteref_t pteref)
{
return pteref;
}
static inline int kvm_pgtable_walk_begin(struct kvm_pgtable_walker *walker)
{
/*
* Due to the lack of RCU (or a similar protection scheme), only
* non-shared table walkers are allowed in the hypervisor.
*/
if (walker->flags & KVM_PGTABLE_WALK_SHARED)
return -EPERM;
return 0;
}
static inline void kvm_pgtable_walk_end(struct kvm_pgtable_walker *walker) {}
static inline bool kvm_pgtable_walk_lock_held(void)
{
return true;
}
#else
typedef kvm_pte_t __rcu *kvm_pteref_t;
static inline kvm_pte_t *kvm_dereference_pteref(struct kvm_pgtable_walker *walker,
kvm_pteref_t pteref)
{
return rcu_dereference_check(pteref, !(walker->flags & KVM_PGTABLE_WALK_SHARED));
}
static inline int kvm_pgtable_walk_begin(struct kvm_pgtable_walker *walker)
{
if (walker->flags & KVM_PGTABLE_WALK_SHARED)
rcu_read_lock();
return 0;
}
static inline void kvm_pgtable_walk_end(struct kvm_pgtable_walker *walker)
{
if (walker->flags & KVM_PGTABLE_WALK_SHARED)
rcu_read_unlock();
}
static inline bool kvm_pgtable_walk_lock_held(void)
{
return rcu_read_lock_held();
}
#endif
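A hedged sketch of how these hooks bracket a walk, so that kernel-side shared walkers run under RCU while the hypervisor rejects KVM_PGTABLE_WALK_SHARED outright; example_read_root is an illustrative name, and the real bracketing sits in the walker core.

/* Illustrative: dereference the root entry under the walk-begin/end hooks. */
static kvm_pte_t example_read_root(struct kvm_pgtable *pgt,
				   struct kvm_pgtable_walker *walker)
{
	kvm_pte_t *pgd, pte = 0;

	if (kvm_pgtable_walk_begin(walker))	/* -EPERM for shared walks at hyp */
		return 0;

	pgd = kvm_dereference_pteref(walker, pgt->pgd);	/* RCU-checked deref */
	pte = READ_ONCE(*pgd);

	kvm_pgtable_walk_end(walker);		/* rcu_read_unlock() if shared */
	return pte;
}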
/**
* struct kvm_pgtable - KVM page-table.
* @ia_bits: Maximum input address size, in bits.
* @start_level: Level at which the page-table walk starts.
* @pgd: Pointer to the first top-level entry of the page-table.
* @mm_ops: Memory management callbacks.
* @mmu: Stage-2 KVM MMU struct. Unused for stage-1 page-tables.
* @flags: Stage-2 page-table flags.
* @force_pte_cb: Function that returns true if page level mappings must
* be used instead of block mappings.
*/
struct kvm_pgtable {
u32 ia_bits;
u32 start_level;
kvm_pteref_t pgd;
struct kvm_pgtable_mm_ops *mm_ops;
/* Stage-2 only */
struct kvm_s2_mmu *mmu;
enum kvm_pgtable_stage2_flags flags;
kvm_pgtable_force_pte_cb_t force_pte_cb;
};
/** /**
* kvm_pgtable_hyp_init() - Initialise a hypervisor stage-1 page-table. * kvm_pgtable_hyp_init() - Initialise a hypervisor stage-1 page-table.
* @pgt: Uninitialised page-table structure to initialise. * @pgt: Uninitialised page-table structure to initialise.
...@@ -296,6 +395,14 @@ u64 kvm_pgtable_hyp_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size); ...@@ -296,6 +395,14 @@ u64 kvm_pgtable_hyp_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size);
*/ */
u64 kvm_get_vtcr(u64 mmfr0, u64 mmfr1, u32 phys_shift); u64 kvm_get_vtcr(u64 mmfr0, u64 mmfr1, u32 phys_shift);
/**
* kvm_pgtable_stage2_pgd_size() - Helper to compute size of a stage-2 PGD
* @vtcr: Content of the VTCR register.
*
* Return: the size (in bytes) of the stage-2 PGD
*/
size_t kvm_pgtable_stage2_pgd_size(u64 vtcr);
/** /**
* __kvm_pgtable_stage2_init() - Initialise a guest stage-2 page-table. * __kvm_pgtable_stage2_init() - Initialise a guest stage-2 page-table.
* @pgt: Uninitialised page-table structure to initialise. * @pgt: Uninitialised page-table structure to initialise.
...@@ -324,6 +431,17 @@ int __kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu, ...@@ -324,6 +431,17 @@ int __kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu,
*/ */
void kvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt); void kvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt);
/**
* kvm_pgtable_stage2_free_removed() - Free a removed stage-2 paging structure.
* @mm_ops: Memory management callbacks.
* @pgtable: Unlinked stage-2 paging structure to be freed.
* @level: Level of the stage-2 paging structure to be freed.
*
* The page-table is assumed to be unreachable by any hardware walkers prior to
* freeing and therefore no TLB invalidation is performed.
*/
void kvm_pgtable_stage2_free_removed(struct kvm_pgtable_mm_ops *mm_ops, void *pgtable, u32 level);
/** /**
* kvm_pgtable_stage2_map() - Install a mapping in a guest stage-2 page-table. * kvm_pgtable_stage2_map() - Install a mapping in a guest stage-2 page-table.
* @pgt: Page-table structure initialised by kvm_pgtable_stage2_init*(). * @pgt: Page-table structure initialised by kvm_pgtable_stage2_init*().
...@@ -333,6 +451,7 @@ void kvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt); ...@@ -333,6 +451,7 @@ void kvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt);
* @prot: Permissions and attributes for the mapping. * @prot: Permissions and attributes for the mapping.
* @mc: Cache of pre-allocated and zeroed memory from which to allocate * @mc: Cache of pre-allocated and zeroed memory from which to allocate
* page-table pages. * page-table pages.
* @flags: Flags to control the page-table walk (ex. a shared walk)
* *
* The offset of @addr within a page is ignored, @size is rounded-up to * The offset of @addr within a page is ignored, @size is rounded-up to
* the next page boundary and @phys is rounded-down to the previous page * the next page boundary and @phys is rounded-down to the previous page
...@@ -354,7 +473,7 @@ void kvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt); ...@@ -354,7 +473,7 @@ void kvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt);
*/ */
int kvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size, int kvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
u64 phys, enum kvm_pgtable_prot prot, u64 phys, enum kvm_pgtable_prot prot,
void *mc); void *mc, enum kvm_pgtable_walk_flags flags);
/** /**
* kvm_pgtable_stage2_set_owner() - Unmap and annotate pages in the IPA space to * kvm_pgtable_stage2_set_owner() - Unmap and annotate pages in the IPA space to
......
...@@ -9,11 +9,49 @@ ...@@ -9,11 +9,49 @@
#include <linux/memblock.h> #include <linux/memblock.h>
#include <asm/kvm_pgtable.h> #include <asm/kvm_pgtable.h>
/* Maximum number of VMs that can co-exist under pKVM. */
#define KVM_MAX_PVMS 255
#define HYP_MEMBLOCK_REGIONS 128 #define HYP_MEMBLOCK_REGIONS 128
int pkvm_init_host_vm(struct kvm *kvm);
int pkvm_create_hyp_vm(struct kvm *kvm);
void pkvm_destroy_hyp_vm(struct kvm *kvm);
extern struct memblock_region kvm_nvhe_sym(hyp_memory)[]; extern struct memblock_region kvm_nvhe_sym(hyp_memory)[];
extern unsigned int kvm_nvhe_sym(hyp_memblock_nr); extern unsigned int kvm_nvhe_sym(hyp_memblock_nr);
static inline unsigned long
hyp_vmemmap_memblock_size(struct memblock_region *reg, size_t vmemmap_entry_size)
{
unsigned long nr_pages = reg->size >> PAGE_SHIFT;
unsigned long start, end;
start = (reg->base >> PAGE_SHIFT) * vmemmap_entry_size;
end = start + nr_pages * vmemmap_entry_size;
start = ALIGN_DOWN(start, PAGE_SIZE);
end = ALIGN(end, PAGE_SIZE);
return end - start;
}
static inline unsigned long hyp_vmemmap_pages(size_t vmemmap_entry_size)
{
unsigned long res = 0, i;
for (i = 0; i < kvm_nvhe_sym(hyp_memblock_nr); i++) {
res += hyp_vmemmap_memblock_size(&kvm_nvhe_sym(hyp_memory)[i],
vmemmap_entry_size);
}
return res >> PAGE_SHIFT;
}
static inline unsigned long hyp_vm_table_pages(void)
{
return PAGE_ALIGN(KVM_MAX_PVMS * sizeof(void *)) >> PAGE_SHIFT;
}
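As a worked example of the sizing above (assuming 4kB pages and 8-byte pointers): PAGE_ALIGN(KVM_MAX_PVMS * sizeof(void *)) = PAGE_ALIGN(255 * 8) = 4096 bytes, i.e. the pKVM VM handle table occupies a single page.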
static inline unsigned long __hyp_pgtable_max_pages(unsigned long nr_pages) static inline unsigned long __hyp_pgtable_max_pages(unsigned long nr_pages)
{ {
unsigned long total = 0, i; unsigned long total = 0, i;
......
...@@ -25,7 +25,7 @@ unsigned long mte_copy_tags_to_user(void __user *to, void *from, ...@@ -25,7 +25,7 @@ unsigned long mte_copy_tags_to_user(void __user *to, void *from,
unsigned long n); unsigned long n);
int mte_save_tags(struct page *page); int mte_save_tags(struct page *page);
void mte_save_page_tags(const void *page_addr, void *tag_storage); void mte_save_page_tags(const void *page_addr, void *tag_storage);
bool mte_restore_tags(swp_entry_t entry, struct page *page); void mte_restore_tags(swp_entry_t entry, struct page *page);
void mte_restore_page_tags(void *page_addr, const void *tag_storage); void mte_restore_page_tags(void *page_addr, const void *tag_storage);
void mte_invalidate_tags(int type, pgoff_t offset); void mte_invalidate_tags(int type, pgoff_t offset);
void mte_invalidate_tags_area(int type); void mte_invalidate_tags_area(int type);
...@@ -36,6 +36,58 @@ void mte_free_tag_storage(char *storage); ...@@ -36,6 +36,58 @@ void mte_free_tag_storage(char *storage);
/* track which pages have valid allocation tags */ /* track which pages have valid allocation tags */
#define PG_mte_tagged PG_arch_2 #define PG_mte_tagged PG_arch_2
/* simple lock to avoid multiple threads tagging the same page */
#define PG_mte_lock PG_arch_3
static inline void set_page_mte_tagged(struct page *page)
{
/*
* Ensure that the tags written prior to this function are visible
* before the page flags update.
*/
smp_wmb();
set_bit(PG_mte_tagged, &page->flags);
}
static inline bool page_mte_tagged(struct page *page)
{
bool ret = test_bit(PG_mte_tagged, &page->flags);
/*
* If the page is tagged, ensure ordering with a likely subsequent
* read of the tags.
*/
if (ret)
smp_rmb();
return ret;
}
/*
* Lock the page for tagging and return 'true' if the page can be tagged,
* 'false' if already tagged. PG_mte_tagged is never cleared and therefore the
* locking only happens once for page initialisation.
*
* The page MTE lock state:
*
* Locked: PG_mte_lock && !PG_mte_tagged
* Unlocked: !PG_mte_lock || PG_mte_tagged
*
* Acquire semantics only if the page is tagged (returning 'false').
*/
static inline bool try_page_mte_tagging(struct page *page)
{
if (!test_and_set_bit(PG_mte_lock, &page->flags))
return true;
/*
* The tags are either being initialised or may have been initialised
* already. Check if the PG_mte_tagged flag has been set or wait
* otherwise.
*/
smp_cond_load_acquire(&page->flags, VAL & (1UL << PG_mte_tagged));
return false;
}
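The intended call pattern, mirroring the cpu_enable_mte() and mte_sync_page_tags() hunks later in this diff; example_init_page_tags is only an illustrative wrapper.

/*
 * Initialise a page's tags exactly once; a racing caller waits inside
 * try_page_mte_tagging() until the winner has set PG_mte_tagged.
 */
static void example_init_page_tags(struct page *page)
{
	if (try_page_mte_tagging(page)) {
		mte_clear_page_tags(page_address(page));
		set_page_mte_tagged(page);
	}
}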
void mte_zero_clear_page_tags(void *addr); void mte_zero_clear_page_tags(void *addr);
void mte_sync_tags(pte_t old_pte, pte_t pte); void mte_sync_tags(pte_t old_pte, pte_t pte);
...@@ -56,6 +108,17 @@ size_t mte_probe_user_range(const char __user *uaddr, size_t size); ...@@ -56,6 +108,17 @@ size_t mte_probe_user_range(const char __user *uaddr, size_t size);
/* unused if !CONFIG_ARM64_MTE, silence the compiler */ /* unused if !CONFIG_ARM64_MTE, silence the compiler */
#define PG_mte_tagged 0 #define PG_mte_tagged 0
static inline void set_page_mte_tagged(struct page *page)
{
}
static inline bool page_mte_tagged(struct page *page)
{
return false;
}
static inline bool try_page_mte_tagging(struct page *page)
{
return false;
}
static inline void mte_zero_clear_page_tags(void *addr) static inline void mte_zero_clear_page_tags(void *addr)
{ {
} }
......
...@@ -1046,8 +1046,8 @@ static inline void arch_swap_invalidate_area(int type) ...@@ -1046,8 +1046,8 @@ static inline void arch_swap_invalidate_area(int type)
#define __HAVE_ARCH_SWAP_RESTORE #define __HAVE_ARCH_SWAP_RESTORE
static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio) static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio)
{ {
if (system_supports_mte() && mte_restore_tags(entry, &folio->page)) if (system_supports_mte())
set_bit(PG_mte_tagged, &folio->flags); mte_restore_tags(entry, &folio->page);
} }
#endif /* CONFIG_ARM64_MTE */ #endif /* CONFIG_ARM64_MTE */
......
...@@ -43,6 +43,7 @@ ...@@ -43,6 +43,7 @@
#define __KVM_HAVE_VCPU_EVENTS #define __KVM_HAVE_VCPU_EVENTS
#define KVM_COALESCED_MMIO_PAGE_OFFSET 1 #define KVM_COALESCED_MMIO_PAGE_OFFSET 1
#define KVM_DIRTY_LOG_PAGE_OFFSET 64
#define KVM_REG_SIZE(id) \ #define KVM_REG_SIZE(id) \
(1U << (((id) & KVM_REG_SIZE_MASK) >> KVM_REG_SIZE_SHIFT)) (1U << (((id) & KVM_REG_SIZE_MASK) >> KVM_REG_SIZE_SHIFT))
......
...@@ -2076,8 +2076,10 @@ static void cpu_enable_mte(struct arm64_cpu_capabilities const *cap) ...@@ -2076,8 +2076,10 @@ static void cpu_enable_mte(struct arm64_cpu_capabilities const *cap)
* Clear the tags in the zero page. This needs to be done via the * Clear the tags in the zero page. This needs to be done via the
* linear map which has the Tagged attribute. * linear map which has the Tagged attribute.
*/ */
if (!test_and_set_bit(PG_mte_tagged, &ZERO_PAGE(0)->flags)) if (try_page_mte_tagging(ZERO_PAGE(0))) {
mte_clear_page_tags(lm_alias(empty_zero_page)); mte_clear_page_tags(lm_alias(empty_zero_page));
set_page_mte_tagged(ZERO_PAGE(0));
}
kasan_init_hw_tags_cpu(); kasan_init_hw_tags_cpu();
} }
......
...@@ -47,7 +47,7 @@ static int mte_dump_tag_range(struct coredump_params *cprm, ...@@ -47,7 +47,7 @@ static int mte_dump_tag_range(struct coredump_params *cprm,
* Pages mapped in user space as !pte_access_permitted() (e.g. * Pages mapped in user space as !pte_access_permitted() (e.g.
* PROT_EXEC only) may not have the PG_mte_tagged flag set. * PROT_EXEC only) may not have the PG_mte_tagged flag set.
*/ */
if (!test_bit(PG_mte_tagged, &page->flags)) { if (!page_mte_tagged(page)) {
put_page(page); put_page(page);
dump_skip(cprm, MTE_PAGE_TAG_STORAGE); dump_skip(cprm, MTE_PAGE_TAG_STORAGE);
continue; continue;
......
...@@ -271,7 +271,7 @@ static int swsusp_mte_save_tags(void) ...@@ -271,7 +271,7 @@ static int swsusp_mte_save_tags(void)
if (!page) if (!page)
continue; continue;
if (!test_bit(PG_mte_tagged, &page->flags)) if (!page_mte_tagged(page))
continue; continue;
ret = save_tags(page, pfn); ret = save_tags(page, pfn);
......
...@@ -63,12 +63,6 @@ KVM_NVHE_ALIAS(nvhe_hyp_panic_handler); ...@@ -63,12 +63,6 @@ KVM_NVHE_ALIAS(nvhe_hyp_panic_handler);
/* Vectors installed by hyp-init on reset HVC. */ /* Vectors installed by hyp-init on reset HVC. */
KVM_NVHE_ALIAS(__hyp_stub_vectors); KVM_NVHE_ALIAS(__hyp_stub_vectors);
/* Kernel symbol used by icache_is_vpipt(). */
KVM_NVHE_ALIAS(__icache_flags);
/* VMID bits set by the KVM VMID allocator */
KVM_NVHE_ALIAS(kvm_arm_vmid_bits);
/* Static keys which are set if a vGIC trap should be handled in hyp. */ /* Static keys which are set if a vGIC trap should be handled in hyp. */
KVM_NVHE_ALIAS(vgic_v2_cpuif_trap); KVM_NVHE_ALIAS(vgic_v2_cpuif_trap);
KVM_NVHE_ALIAS(vgic_v3_cpuif_trap); KVM_NVHE_ALIAS(vgic_v3_cpuif_trap);
...@@ -84,9 +78,6 @@ KVM_NVHE_ALIAS(gic_nonsecure_priorities); ...@@ -84,9 +78,6 @@ KVM_NVHE_ALIAS(gic_nonsecure_priorities);
KVM_NVHE_ALIAS(__start___kvm_ex_table); KVM_NVHE_ALIAS(__start___kvm_ex_table);
KVM_NVHE_ALIAS(__stop___kvm_ex_table); KVM_NVHE_ALIAS(__stop___kvm_ex_table);
/* Array containing bases of nVHE per-CPU memory regions. */
KVM_NVHE_ALIAS(kvm_arm_hyp_percpu_base);
/* PMU available static key */ /* PMU available static key */
#ifdef CONFIG_HW_PERF_EVENTS #ifdef CONFIG_HW_PERF_EVENTS
KVM_NVHE_ALIAS(kvm_arm_pmu_available); KVM_NVHE_ALIAS(kvm_arm_pmu_available);
...@@ -103,12 +94,6 @@ KVM_NVHE_ALIAS_HYP(__memcpy, __pi_memcpy); ...@@ -103,12 +94,6 @@ KVM_NVHE_ALIAS_HYP(__memcpy, __pi_memcpy);
KVM_NVHE_ALIAS_HYP(__memset, __pi_memset); KVM_NVHE_ALIAS_HYP(__memset, __pi_memset);
#endif #endif
/* Kernel memory sections */
KVM_NVHE_ALIAS(__start_rodata);
KVM_NVHE_ALIAS(__end_rodata);
KVM_NVHE_ALIAS(__bss_start);
KVM_NVHE_ALIAS(__bss_stop);
/* Hyp memory sections */ /* Hyp memory sections */
KVM_NVHE_ALIAS(__hyp_idmap_text_start); KVM_NVHE_ALIAS(__hyp_idmap_text_start);
KVM_NVHE_ALIAS(__hyp_idmap_text_end); KVM_NVHE_ALIAS(__hyp_idmap_text_end);
......
...@@ -41,19 +41,17 @@ static void mte_sync_page_tags(struct page *page, pte_t old_pte, ...@@ -41,19 +41,17 @@ static void mte_sync_page_tags(struct page *page, pte_t old_pte,
if (check_swap && is_swap_pte(old_pte)) { if (check_swap && is_swap_pte(old_pte)) {
swp_entry_t entry = pte_to_swp_entry(old_pte); swp_entry_t entry = pte_to_swp_entry(old_pte);
if (!non_swap_entry(entry) && mte_restore_tags(entry, page)) if (!non_swap_entry(entry))
return; mte_restore_tags(entry, page);
} }
if (!pte_is_tagged) if (!pte_is_tagged)
return; return;
/* if (try_page_mte_tagging(page)) {
* Test PG_mte_tagged again in case it was racing with another
* set_pte_at().
*/
if (!test_and_set_bit(PG_mte_tagged, &page->flags))
mte_clear_page_tags(page_address(page)); mte_clear_page_tags(page_address(page));
set_page_mte_tagged(page);
}
} }
void mte_sync_tags(pte_t old_pte, pte_t pte) void mte_sync_tags(pte_t old_pte, pte_t pte)
...@@ -69,9 +67,11 @@ void mte_sync_tags(pte_t old_pte, pte_t pte) ...@@ -69,9 +67,11 @@ void mte_sync_tags(pte_t old_pte, pte_t pte)
/* if PG_mte_tagged is set, tags have already been initialised */ /* if PG_mte_tagged is set, tags have already been initialised */
for (i = 0; i < nr_pages; i++, page++) { for (i = 0; i < nr_pages; i++, page++) {
if (!test_bit(PG_mte_tagged, &page->flags)) if (!page_mte_tagged(page)) {
mte_sync_page_tags(page, old_pte, check_swap, mte_sync_page_tags(page, old_pte, check_swap,
pte_is_tagged); pte_is_tagged);
set_page_mte_tagged(page);
}
} }
/* ensure the tags are visible before the PTE is set */ /* ensure the tags are visible before the PTE is set */
...@@ -96,8 +96,7 @@ int memcmp_pages(struct page *page1, struct page *page2) ...@@ -96,8 +96,7 @@ int memcmp_pages(struct page *page1, struct page *page2)
* pages is tagged, set_pte_at() may zero or change the tags of the * pages is tagged, set_pte_at() may zero or change the tags of the
* other page via mte_sync_tags(). * other page via mte_sync_tags().
*/ */
if (test_bit(PG_mte_tagged, &page1->flags) || if (page_mte_tagged(page1) || page_mte_tagged(page2))
test_bit(PG_mte_tagged, &page2->flags))
return addr1 != addr2; return addr1 != addr2;
return ret; return ret;
...@@ -454,7 +453,7 @@ static int __access_remote_tags(struct mm_struct *mm, unsigned long addr, ...@@ -454,7 +453,7 @@ static int __access_remote_tags(struct mm_struct *mm, unsigned long addr,
put_page(page); put_page(page);
break; break;
} }
WARN_ON_ONCE(!test_bit(PG_mte_tagged, &page->flags)); WARN_ON_ONCE(!page_mte_tagged(page));
/* limit access to the end of the page */ /* limit access to the end of the page */
offset = offset_in_page(addr); offset = offset_in_page(addr);
......
...@@ -32,6 +32,8 @@ menuconfig KVM ...@@ -32,6 +32,8 @@ menuconfig KVM
select KVM_VFIO select KVM_VFIO
select HAVE_KVM_EVENTFD select HAVE_KVM_EVENTFD
select HAVE_KVM_IRQFD select HAVE_KVM_IRQFD
select HAVE_KVM_DIRTY_RING_ACQ_REL
select NEED_KVM_DIRTY_RING_WITH_BITMAP
select HAVE_KVM_MSI select HAVE_KVM_MSI
select HAVE_KVM_IRQCHIP select HAVE_KVM_IRQCHIP
select HAVE_KVM_IRQ_ROUTING select HAVE_KVM_IRQ_ROUTING
......
...@@ -37,6 +37,7 @@ ...@@ -37,6 +37,7 @@
#include <asm/kvm_arm.h> #include <asm/kvm_arm.h>
#include <asm/kvm_asm.h> #include <asm/kvm_asm.h>
#include <asm/kvm_mmu.h> #include <asm/kvm_mmu.h>
#include <asm/kvm_pkvm.h>
#include <asm/kvm_emulate.h> #include <asm/kvm_emulate.h>
#include <asm/sections.h> #include <asm/sections.h>
...@@ -50,7 +51,6 @@ DEFINE_STATIC_KEY_FALSE(kvm_protected_mode_initialized); ...@@ -50,7 +51,6 @@ DEFINE_STATIC_KEY_FALSE(kvm_protected_mode_initialized);
DECLARE_KVM_HYP_PER_CPU(unsigned long, kvm_hyp_vector); DECLARE_KVM_HYP_PER_CPU(unsigned long, kvm_hyp_vector);
DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page); DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
unsigned long kvm_arm_hyp_percpu_base[NR_CPUS];
DECLARE_KVM_NVHE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params); DECLARE_KVM_NVHE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params);
static bool vgic_present; static bool vgic_present;
...@@ -138,24 +138,24 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type) ...@@ -138,24 +138,24 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
{ {
int ret; int ret;
ret = kvm_arm_setup_stage2(kvm, type); ret = kvm_share_hyp(kvm, kvm + 1);
if (ret)
return ret;
ret = kvm_init_stage2_mmu(kvm, &kvm->arch.mmu);
if (ret) if (ret)
return ret; return ret;
ret = kvm_share_hyp(kvm, kvm + 1); ret = pkvm_init_host_vm(kvm);
if (ret) if (ret)
goto out_free_stage2_pgd; goto err_unshare_kvm;
if (!zalloc_cpumask_var(&kvm->arch.supported_cpus, GFP_KERNEL)) { if (!zalloc_cpumask_var(&kvm->arch.supported_cpus, GFP_KERNEL)) {
ret = -ENOMEM; ret = -ENOMEM;
goto out_free_stage2_pgd; goto err_unshare_kvm;
} }
cpumask_copy(kvm->arch.supported_cpus, cpu_possible_mask); cpumask_copy(kvm->arch.supported_cpus, cpu_possible_mask);
ret = kvm_init_stage2_mmu(kvm, &kvm->arch.mmu, type);
if (ret)
goto err_free_cpumask;
kvm_vgic_early_init(kvm); kvm_vgic_early_init(kvm);
/* The maximum number of VCPUs is limited by the host's GIC model */ /* The maximum number of VCPUs is limited by the host's GIC model */
...@@ -164,9 +164,18 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type) ...@@ -164,9 +164,18 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
set_default_spectre(kvm); set_default_spectre(kvm);
kvm_arm_init_hypercalls(kvm); kvm_arm_init_hypercalls(kvm);
return ret; /*
out_free_stage2_pgd: * Initialise the default PMUver before there is a chance to
kvm_free_stage2_pgd(&kvm->arch.mmu); * create an actual PMU.
*/
kvm->arch.dfr0_pmuver.imp = kvm_arm_pmu_get_pmuver_limit();
return 0;
err_free_cpumask:
free_cpumask_var(kvm->arch.supported_cpus);
err_unshare_kvm:
kvm_unshare_hyp(kvm, kvm + 1);
return ret; return ret;
} }
...@@ -187,6 +196,9 @@ void kvm_arch_destroy_vm(struct kvm *kvm) ...@@ -187,6 +196,9 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
kvm_vgic_destroy(kvm); kvm_vgic_destroy(kvm);
if (is_protected_kvm_enabled())
pkvm_destroy_hyp_vm(kvm);
kvm_destroy_vcpus(kvm); kvm_destroy_vcpus(kvm);
kvm_unshare_hyp(kvm, kvm + 1); kvm_unshare_hyp(kvm, kvm + 1);
...@@ -569,6 +581,12 @@ int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu) ...@@ -569,6 +581,12 @@ int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu)
if (ret) if (ret)
return ret; return ret;
if (is_protected_kvm_enabled()) {
ret = pkvm_create_hyp_vm(kvm);
if (ret)
return ret;
}
if (!irqchip_in_kernel(kvm)) { if (!irqchip_in_kernel(kvm)) {
/* /*
* Tell the rest of the code that there are userspace irqchip * Tell the rest of the code that there are userspace irqchip
...@@ -746,6 +764,9 @@ static int check_vcpu_requests(struct kvm_vcpu *vcpu) ...@@ -746,6 +764,9 @@ static int check_vcpu_requests(struct kvm_vcpu *vcpu)
if (kvm_check_request(KVM_REQ_SUSPEND, vcpu)) if (kvm_check_request(KVM_REQ_SUSPEND, vcpu))
return kvm_vcpu_suspend(vcpu); return kvm_vcpu_suspend(vcpu);
if (kvm_dirty_ring_check_request(vcpu))
return 0;
} }
return 1; return 1;
...@@ -1518,7 +1539,7 @@ static int kvm_init_vector_slots(void) ...@@ -1518,7 +1539,7 @@ static int kvm_init_vector_slots(void)
return 0; return 0;
} }
static void cpu_prepare_hyp_mode(int cpu) static void cpu_prepare_hyp_mode(int cpu, u32 hyp_va_bits)
{ {
struct kvm_nvhe_init_params *params = per_cpu_ptr_nvhe_sym(kvm_init_params, cpu); struct kvm_nvhe_init_params *params = per_cpu_ptr_nvhe_sym(kvm_init_params, cpu);
unsigned long tcr; unsigned long tcr;
...@@ -1534,23 +1555,9 @@ static void cpu_prepare_hyp_mode(int cpu) ...@@ -1534,23 +1555,9 @@ static void cpu_prepare_hyp_mode(int cpu)
params->mair_el2 = read_sysreg(mair_el1); params->mair_el2 = read_sysreg(mair_el1);
/*
* The ID map may be configured to use an extended virtual address
* range. This is only the case if system RAM is out of range for the
* currently configured page size and VA_BITS, in which case we will
* also need the extended virtual range for the HYP ID map, or we won't
* be able to enable the EL2 MMU.
*
* However, at EL2, there is only one TTBR register, and we can't switch
* between translation tables *and* update TCR_EL2.T0SZ at the same
* time. Bottom line: we need to use the extended range with *both* our
* translation tables.
*
* So use the same T0SZ value we use for the ID map.
*/
tcr = (read_sysreg(tcr_el1) & TCR_EL2_MASK) | TCR_EL2_RES1; tcr = (read_sysreg(tcr_el1) & TCR_EL2_MASK) | TCR_EL2_RES1;
tcr &= ~TCR_T0SZ_MASK; tcr &= ~TCR_T0SZ_MASK;
tcr |= (idmap_t0sz & GENMASK(TCR_TxSZ_WIDTH - 1, 0)) << TCR_T0SZ_OFFSET; tcr |= TCR_T0SZ(hyp_va_bits);
params->tcr_el2 = tcr; params->tcr_el2 = tcr;
params->pgd_pa = kvm_mmu_get_httbr(); params->pgd_pa = kvm_mmu_get_httbr();
...@@ -1844,13 +1851,13 @@ static void teardown_hyp_mode(void) ...@@ -1844,13 +1851,13 @@ static void teardown_hyp_mode(void)
free_hyp_pgds(); free_hyp_pgds();
for_each_possible_cpu(cpu) { for_each_possible_cpu(cpu) {
free_page(per_cpu(kvm_arm_hyp_stack_page, cpu)); free_page(per_cpu(kvm_arm_hyp_stack_page, cpu));
free_pages(kvm_arm_hyp_percpu_base[cpu], nvhe_percpu_order()); free_pages(kvm_nvhe_sym(kvm_arm_hyp_percpu_base)[cpu], nvhe_percpu_order());
} }
} }
static int do_pkvm_init(u32 hyp_va_bits) static int do_pkvm_init(u32 hyp_va_bits)
{ {
void *per_cpu_base = kvm_ksym_ref(kvm_arm_hyp_percpu_base); void *per_cpu_base = kvm_ksym_ref(kvm_nvhe_sym(kvm_arm_hyp_percpu_base));
int ret; int ret;
preempt_disable(); preempt_disable();
...@@ -1870,11 +1877,8 @@ static int do_pkvm_init(u32 hyp_va_bits) ...@@ -1870,11 +1877,8 @@ static int do_pkvm_init(u32 hyp_va_bits)
return ret; return ret;
} }
static int kvm_hyp_init_protection(u32 hyp_va_bits) static void kvm_hyp_init_symbols(void)
{ {
void *addr = phys_to_virt(hyp_mem_base);
int ret;
kvm_nvhe_sym(id_aa64pfr0_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1); kvm_nvhe_sym(id_aa64pfr0_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
kvm_nvhe_sym(id_aa64pfr1_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64PFR1_EL1); kvm_nvhe_sym(id_aa64pfr1_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64PFR1_EL1);
kvm_nvhe_sym(id_aa64isar0_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64ISAR0_EL1); kvm_nvhe_sym(id_aa64isar0_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64ISAR0_EL1);
...@@ -1883,6 +1887,14 @@ static int kvm_hyp_init_protection(u32 hyp_va_bits) ...@@ -1883,6 +1887,14 @@ static int kvm_hyp_init_protection(u32 hyp_va_bits)
kvm_nvhe_sym(id_aa64mmfr0_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1); kvm_nvhe_sym(id_aa64mmfr0_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
kvm_nvhe_sym(id_aa64mmfr1_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1); kvm_nvhe_sym(id_aa64mmfr1_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
kvm_nvhe_sym(id_aa64mmfr2_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR2_EL1); kvm_nvhe_sym(id_aa64mmfr2_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64MMFR2_EL1);
kvm_nvhe_sym(__icache_flags) = __icache_flags;
kvm_nvhe_sym(kvm_arm_vmid_bits) = kvm_arm_vmid_bits;
}
static int kvm_hyp_init_protection(u32 hyp_va_bits)
{
void *addr = phys_to_virt(hyp_mem_base);
int ret;
ret = create_hyp_mappings(addr, addr + hyp_mem_size, PAGE_HYP); ret = create_hyp_mappings(addr, addr + hyp_mem_size, PAGE_HYP);
if (ret) if (ret)
...@@ -1950,7 +1962,7 @@ static int init_hyp_mode(void) ...@@ -1950,7 +1962,7 @@ static int init_hyp_mode(void)
page_addr = page_address(page); page_addr = page_address(page);
memcpy(page_addr, CHOOSE_NVHE_SYM(__per_cpu_start), nvhe_percpu_size()); memcpy(page_addr, CHOOSE_NVHE_SYM(__per_cpu_start), nvhe_percpu_size());
kvm_arm_hyp_percpu_base[cpu] = (unsigned long)page_addr; kvm_nvhe_sym(kvm_arm_hyp_percpu_base)[cpu] = (unsigned long)page_addr;
} }
/* /*
...@@ -2043,7 +2055,7 @@ static int init_hyp_mode(void) ...@@ -2043,7 +2055,7 @@ static int init_hyp_mode(void)
} }
for_each_possible_cpu(cpu) { for_each_possible_cpu(cpu) {
char *percpu_begin = (char *)kvm_arm_hyp_percpu_base[cpu]; char *percpu_begin = (char *)kvm_nvhe_sym(kvm_arm_hyp_percpu_base)[cpu];
char *percpu_end = percpu_begin + nvhe_percpu_size(); char *percpu_end = percpu_begin + nvhe_percpu_size();
/* Map Hyp percpu pages */ /* Map Hyp percpu pages */
...@@ -2054,9 +2066,11 @@ static int init_hyp_mode(void) ...@@ -2054,9 +2066,11 @@ static int init_hyp_mode(void)
} }
/* Prepare the CPU initialization parameters */ /* Prepare the CPU initialization parameters */
cpu_prepare_hyp_mode(cpu); cpu_prepare_hyp_mode(cpu, hyp_va_bits);
} }
kvm_hyp_init_symbols();
if (is_protected_kvm_enabled()) { if (is_protected_kvm_enabled()) {
init_cpu_logical_map(); init_cpu_logical_map();
...@@ -2064,9 +2078,7 @@ static int init_hyp_mode(void) ...@@ -2064,9 +2078,7 @@ static int init_hyp_mode(void)
err = -ENODEV; err = -ENODEV;
goto out_err; goto out_err;
} }
}
if (is_protected_kvm_enabled()) {
err = kvm_hyp_init_protection(hyp_va_bits); err = kvm_hyp_init_protection(hyp_va_bits);
if (err) { if (err) {
kvm_err("Failed to init hyp memory protection\n"); kvm_err("Failed to init hyp memory protection\n");
...@@ -2130,6 +2142,11 @@ struct kvm_vcpu *kvm_mpidr_to_vcpu(struct kvm *kvm, unsigned long mpidr) ...@@ -2130,6 +2142,11 @@ struct kvm_vcpu *kvm_mpidr_to_vcpu(struct kvm *kvm, unsigned long mpidr)
return NULL; return NULL;
} }
bool kvm_arch_irqchip_in_kernel(struct kvm *kvm)
{
return irqchip_in_kernel(kvm);
}
bool kvm_arch_has_irq_bypass(void) bool kvm_arch_has_irq_bypass(void)
{ {
return true; return true;
......
...@@ -1059,7 +1059,7 @@ long kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm, ...@@ -1059,7 +1059,7 @@ long kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
maddr = page_address(page); maddr = page_address(page);
if (!write) { if (!write) {
if (test_bit(PG_mte_tagged, &page->flags)) if (page_mte_tagged(page))
num_tags = mte_copy_tags_to_user(tags, maddr, num_tags = mte_copy_tags_to_user(tags, maddr,
MTE_GRANULES_PER_PAGE); MTE_GRANULES_PER_PAGE);
else else
...@@ -1068,15 +1068,19 @@ long kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm, ...@@ -1068,15 +1068,19 @@ long kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
clear_user(tags, MTE_GRANULES_PER_PAGE); clear_user(tags, MTE_GRANULES_PER_PAGE);
kvm_release_pfn_clean(pfn); kvm_release_pfn_clean(pfn);
} else { } else {
/*
* Only locking to serialise with a concurrent
* set_pte_at() in the VMM but still overriding the
* tags, hence ignoring the return value.
*/
try_page_mte_tagging(page);
num_tags = mte_copy_tags_from_user(maddr, tags, num_tags = mte_copy_tags_from_user(maddr, tags,
MTE_GRANULES_PER_PAGE); MTE_GRANULES_PER_PAGE);
/* /* uaccess failed, don't leave stale tags */
* Set the flag after checking the write if (num_tags != MTE_GRANULES_PER_PAGE)
* completed fully mte_clear_page_tags(page);
*/ set_page_mte_tagged(page);
if (num_tags == MTE_GRANULES_PER_PAGE)
set_bit(PG_mte_tagged, &page->flags);
kvm_release_pfn_dirty(pfn); kvm_release_pfn_dirty(pfn);
} }
......
...@@ -2,9 +2,12 @@ ...@@ -2,9 +2,12 @@
#include <linux/kbuild.h> #include <linux/kbuild.h>
#include <nvhe/memory.h> #include <nvhe/memory.h>
#include <nvhe/pkvm.h>
int main(void) int main(void)
{ {
DEFINE(STRUCT_HYP_PAGE_SIZE, sizeof(struct hyp_page)); DEFINE(STRUCT_HYP_PAGE_SIZE, sizeof(struct hyp_page));
DEFINE(PKVM_HYP_VM_SIZE, sizeof(struct pkvm_hyp_vm));
DEFINE(PKVM_HYP_VCPU_SIZE, sizeof(struct pkvm_hyp_vcpu));
return 0; return 0;
} }
...@@ -8,8 +8,10 @@ ...@@ -8,8 +8,10 @@
#define __KVM_NVHE_MEM_PROTECT__ #define __KVM_NVHE_MEM_PROTECT__
#include <linux/kvm_host.h> #include <linux/kvm_host.h>
#include <asm/kvm_hyp.h> #include <asm/kvm_hyp.h>
#include <asm/kvm_mmu.h>
#include <asm/kvm_pgtable.h> #include <asm/kvm_pgtable.h>
#include <asm/virt.h> #include <asm/virt.h>
#include <nvhe/pkvm.h>
#include <nvhe/spinlock.h> #include <nvhe/spinlock.h>
/* /*
...@@ -43,30 +45,45 @@ static inline enum pkvm_page_state pkvm_getstate(enum kvm_pgtable_prot prot) ...@@ -43,30 +45,45 @@ static inline enum pkvm_page_state pkvm_getstate(enum kvm_pgtable_prot prot)
return prot & PKVM_PAGE_STATE_PROT_MASK; return prot & PKVM_PAGE_STATE_PROT_MASK;
} }
struct host_kvm { struct host_mmu {
struct kvm_arch arch; struct kvm_arch arch;
struct kvm_pgtable pgt; struct kvm_pgtable pgt;
struct kvm_pgtable_mm_ops mm_ops; struct kvm_pgtable_mm_ops mm_ops;
hyp_spinlock_t lock; hyp_spinlock_t lock;
}; };
extern struct host_kvm host_kvm; extern struct host_mmu host_mmu;
extern const u8 pkvm_hyp_id; /* This corresponds to page-table locking order */
enum pkvm_component_id {
PKVM_ID_HOST,
PKVM_ID_HYP,
};
extern unsigned long hyp_nr_cpus;
int __pkvm_prot_finalize(void);
int __pkvm_host_share_hyp(u64 pfn);
int __pkvm_host_unshare_hyp(u64 pfn);
int __pkvm_host_donate_hyp(u64 pfn, u64 nr_pages);
int __pkvm_hyp_donate_host(u64 pfn, u64 nr_pages);
bool addr_is_memory(phys_addr_t phys);
int host_stage2_idmap_locked(phys_addr_t addr, u64 size, enum kvm_pgtable_prot prot);
int host_stage2_set_owner_locked(phys_addr_t addr, u64 size, u8 owner_id);
int kvm_host_prepare_stage2(void *pgt_pool_base);
int kvm_guest_prepare_stage2(struct pkvm_hyp_vm *vm, void *pgd);
void handle_host_mem_abort(struct kvm_cpu_context *host_ctxt);
int hyp_pin_shared_mem(void *from, void *to);
void hyp_unpin_shared_mem(void *from, void *to);
void reclaim_guest_pages(struct pkvm_hyp_vm *vm, struct kvm_hyp_memcache *mc);
int refill_memcache(struct kvm_hyp_memcache *mc, unsigned long min_pages,
struct kvm_hyp_memcache *host_mc);
static __always_inline void __load_host_stage2(void)
{
if (static_branch_likely(&kvm_protected_mode_initialized))
-__load_stage2(&host_kvm.arch.mmu, &host_kvm.arch);
+__load_stage2(&host_mmu.arch.mmu, &host_mmu.arch);
else
write_sysreg(0, vttbr_el2);
}
...
@@ -38,6 +38,10 @@ static inline phys_addr_t hyp_virt_to_phys(void *addr)
#define hyp_page_to_virt(page) __hyp_va(hyp_page_to_phys(page))
#define hyp_page_to_pool(page) (((struct hyp_page *)page)->pool)
/*
* Refcounting for 'struct hyp_page'.
* hyp_pool::lock must be held if atomic access to the refcount is required.
*/
static inline int hyp_page_count(void *addr)
{
struct hyp_page *p = hyp_virt_to_page(addr);
@@ -45,4 +49,27 @@ static inline int hyp_page_count(void *addr)
return p->refcount;
}
static inline void hyp_page_ref_inc(struct hyp_page *p)
{
BUG_ON(p->refcount == USHRT_MAX);
p->refcount++;
}
static inline void hyp_page_ref_dec(struct hyp_page *p)
{
BUG_ON(!p->refcount);
p->refcount--;
}
static inline int hyp_page_ref_dec_and_test(struct hyp_page *p)
{
hyp_page_ref_dec(p);
return (p->refcount == 0);
}
static inline void hyp_set_page_refcounted(struct hyp_page *p)
{
BUG_ON(p->refcount);
p->refcount = 1;
}
#endif /* __KVM_HYP_MEMORY_H */
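A minimal illustrative sketch (not part of the diff) of how a caller might pair the hyp_page refcount helpers that the hunk above moves into this header; the locking noted in the header comment is assumed to be handled by the caller, and the function names are hypothetical.

static void example_pin_page(void *va)
{
	hyp_page_ref_inc(hyp_virt_to_page(va));
}

static void example_unpin_page(void *va)
{
	struct hyp_page *p = hyp_virt_to_page(va);

	/* Non-zero once the last reference has been dropped. */
	if (hyp_page_ref_dec_and_test(p))
		; /* e.g. hand the page back to its hyp_pool */
}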
@@ -13,9 +13,13 @@
extern struct kvm_pgtable pkvm_pgtable;
extern hyp_spinlock_t pkvm_pgd_lock;
int hyp_create_pcpu_fixmap(void);
void *hyp_fixmap_map(phys_addr_t phys);
void hyp_fixmap_unmap(void);
int hyp_create_idmap(u32 hyp_va_bits);
int hyp_map_vectors(void);
-int hyp_back_vmemmap(phys_addr_t phys, unsigned long size, phys_addr_t back);
+int hyp_back_vmemmap(phys_addr_t back);
int pkvm_cpu_set_vector(enum arm64_hyp_spectre_vector slot);
int pkvm_create_mappings(void *from, void *to, enum kvm_pgtable_prot prot);
int pkvm_create_mappings_locked(void *from, void *to, enum kvm_pgtable_prot prot);
@@ -24,16 +28,4 @@ int __pkvm_create_private_mapping(phys_addr_t phys, size_t size,
unsigned long *haddr);
int pkvm_alloc_private_va_range(size_t size, unsigned long *haddr);
static inline void hyp_vmemmap_range(phys_addr_t phys, unsigned long size,
unsigned long *start, unsigned long *end)
{
unsigned long nr_pages = size >> PAGE_SHIFT;
struct hyp_page *p = hyp_phys_to_page(phys);
*start = (unsigned long)p;
*end = *start + nr_pages * sizeof(struct hyp_page);
*start = ALIGN_DOWN(*start, PAGE_SIZE);
*end = ALIGN(*end, PAGE_SIZE);
}
#endif /* __KVM_HYP_MM_H */
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (C) 2021 Google LLC
* Author: Fuad Tabba <tabba@google.com>
*/
#ifndef __ARM64_KVM_NVHE_PKVM_H__
#define __ARM64_KVM_NVHE_PKVM_H__
#include <asm/kvm_pkvm.h>
#include <nvhe/gfp.h>
#include <nvhe/spinlock.h>
/*
* Holds the relevant data for maintaining the vcpu state completely at hyp.
*/
struct pkvm_hyp_vcpu {
struct kvm_vcpu vcpu;
/* Backpointer to the host's (untrusted) vCPU instance. */
struct kvm_vcpu *host_vcpu;
};
/*
* Holds the relevant data for running a protected vm.
*/
struct pkvm_hyp_vm {
struct kvm kvm;
/* Backpointer to the host's (untrusted) KVM instance. */
struct kvm *host_kvm;
/* The guest's stage-2 page-table managed by the hypervisor. */
struct kvm_pgtable pgt;
struct kvm_pgtable_mm_ops mm_ops;
struct hyp_pool pool;
hyp_spinlock_t lock;
/*
* The number of vcpus initialized and ready to run.
* Modifying this is protected by 'vm_table_lock'.
*/
unsigned int nr_vcpus;
/* Array of the hyp vCPU structures for this VM. */
struct pkvm_hyp_vcpu *vcpus[];
};
static inline struct pkvm_hyp_vm *
pkvm_hyp_vcpu_to_hyp_vm(struct pkvm_hyp_vcpu *hyp_vcpu)
{
return container_of(hyp_vcpu->vcpu.kvm, struct pkvm_hyp_vm, kvm);
}
void pkvm_hyp_vm_table_init(void *tbl);
int __pkvm_init_vm(struct kvm *host_kvm, unsigned long vm_hva,
unsigned long pgd_hva);
int __pkvm_init_vcpu(pkvm_handle_t handle, struct kvm_vcpu *host_vcpu,
unsigned long vcpu_hva);
int __pkvm_teardown_vm(pkvm_handle_t handle);
struct pkvm_hyp_vcpu *pkvm_load_hyp_vcpu(pkvm_handle_t handle,
unsigned int vcpu_idx);
void pkvm_put_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu);
#endif /* __ARM64_KVM_NVHE_PKVM_H__ */
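An illustrative sketch (not part of the diff) of the intended load/put pattern for the API declared above; the handle and vcpu index are assumed to come from the host's hypercall arguments, and the function name is hypothetical.

static int example_with_hyp_vcpu(pkvm_handle_t handle, unsigned int vcpu_idx)
{
	struct pkvm_hyp_vcpu *hyp_vcpu;
	struct pkvm_hyp_vm *hyp_vm;

	hyp_vcpu = pkvm_load_hyp_vcpu(handle, vcpu_idx);
	if (!hyp_vcpu)
		return -EINVAL;

	/* Recover the enclosing VM from the embedded struct kvm. */
	hyp_vm = pkvm_hyp_vcpu_to_hyp_vm(hyp_vcpu);
	(void)hyp_vm;

	pkvm_put_hyp_vcpu(hyp_vcpu);
	return 0;
}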
@@ -28,9 +28,17 @@ typedef union hyp_spinlock {
};
} hyp_spinlock_t;
#define __HYP_SPIN_LOCK_INITIALIZER \
{ .__val = 0 }
#define __HYP_SPIN_LOCK_UNLOCKED \
((hyp_spinlock_t) __HYP_SPIN_LOCK_INITIALIZER)
#define DEFINE_HYP_SPINLOCK(x) hyp_spinlock_t x = __HYP_SPIN_LOCK_UNLOCKED
#define hyp_spin_lock_init(l) \
do { \
-*(l) = (hyp_spinlock_t){ .__val = 0 }; \
+*(l) = __HYP_SPIN_LOCK_UNLOCKED; \
} while (0)
static inline void hyp_spin_lock(hyp_spinlock_t *lock)
...
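A minimal sketch (not part of the diff) of the new static initialiser in use, assuming the hyp_spin_lock()/hyp_spin_unlock() helpers declared in this header; the lock name is hypothetical.

static DEFINE_HYP_SPINLOCK(example_lock);

static void example_critical_section(void)
{
	hyp_spin_lock(&example_lock);
	/* ... work protected by the lock ... */
	hyp_spin_unlock(&example_lock);
}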
@@ -12,3 +12,14 @@ SYM_FUNC_START(__pi_dcache_clean_inval_poc)
ret
SYM_FUNC_END(__pi_dcache_clean_inval_poc)
SYM_FUNC_ALIAS(dcache_clean_inval_poc, __pi_dcache_clean_inval_poc)
SYM_FUNC_START(__pi_icache_inval_pou)
alternative_if ARM64_HAS_CACHE_DIC
isb
ret
alternative_else_nop_endif
invalidate_icache_by_line x0, x1, x2, x3
ret
SYM_FUNC_END(__pi_icache_inval_pou)
SYM_FUNC_ALIAS(icache_inval_pou, __pi_icache_inval_pou)
@@ -15,17 +15,93 @@
#include <nvhe/mem_protect.h>
#include <nvhe/mm.h>
#include <nvhe/pkvm.h>
#include <nvhe/trap_handler.h>
DEFINE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params);
void __kvm_hyp_host_forward_smc(struct kvm_cpu_context *host_ctxt);
static void flush_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu)
{
struct kvm_vcpu *host_vcpu = hyp_vcpu->host_vcpu;
hyp_vcpu->vcpu.arch.ctxt = host_vcpu->arch.ctxt;
hyp_vcpu->vcpu.arch.sve_state = kern_hyp_va(host_vcpu->arch.sve_state);
hyp_vcpu->vcpu.arch.sve_max_vl = host_vcpu->arch.sve_max_vl;
hyp_vcpu->vcpu.arch.hw_mmu = host_vcpu->arch.hw_mmu;
hyp_vcpu->vcpu.arch.hcr_el2 = host_vcpu->arch.hcr_el2;
hyp_vcpu->vcpu.arch.mdcr_el2 = host_vcpu->arch.mdcr_el2;
hyp_vcpu->vcpu.arch.cptr_el2 = host_vcpu->arch.cptr_el2;
hyp_vcpu->vcpu.arch.iflags = host_vcpu->arch.iflags;
hyp_vcpu->vcpu.arch.fp_state = host_vcpu->arch.fp_state;
hyp_vcpu->vcpu.arch.debug_ptr = kern_hyp_va(host_vcpu->arch.debug_ptr);
hyp_vcpu->vcpu.arch.host_fpsimd_state = host_vcpu->arch.host_fpsimd_state;
hyp_vcpu->vcpu.arch.vsesr_el2 = host_vcpu->arch.vsesr_el2;
hyp_vcpu->vcpu.arch.vgic_cpu.vgic_v3 = host_vcpu->arch.vgic_cpu.vgic_v3;
}
static void sync_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu)
{
struct kvm_vcpu *host_vcpu = hyp_vcpu->host_vcpu;
struct vgic_v3_cpu_if *hyp_cpu_if = &hyp_vcpu->vcpu.arch.vgic_cpu.vgic_v3;
struct vgic_v3_cpu_if *host_cpu_if = &host_vcpu->arch.vgic_cpu.vgic_v3;
unsigned int i;
host_vcpu->arch.ctxt = hyp_vcpu->vcpu.arch.ctxt;
host_vcpu->arch.hcr_el2 = hyp_vcpu->vcpu.arch.hcr_el2;
host_vcpu->arch.cptr_el2 = hyp_vcpu->vcpu.arch.cptr_el2;
host_vcpu->arch.fault = hyp_vcpu->vcpu.arch.fault;
host_vcpu->arch.iflags = hyp_vcpu->vcpu.arch.iflags;
host_vcpu->arch.fp_state = hyp_vcpu->vcpu.arch.fp_state;
host_cpu_if->vgic_hcr = hyp_cpu_if->vgic_hcr;
for (i = 0; i < hyp_cpu_if->used_lrs; ++i)
host_cpu_if->vgic_lr[i] = hyp_cpu_if->vgic_lr[i];
}
static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt)
{
-DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1);
+DECLARE_REG(struct kvm_vcpu *, host_vcpu, host_ctxt, 1);
+int ret;
-cpu_reg(host_ctxt, 1) = __kvm_vcpu_run(kern_hyp_va(vcpu));
+host_vcpu = kern_hyp_va(host_vcpu);
if (unlikely(is_protected_kvm_enabled())) {
struct pkvm_hyp_vcpu *hyp_vcpu;
struct kvm *host_kvm;
host_kvm = kern_hyp_va(host_vcpu->kvm);
hyp_vcpu = pkvm_load_hyp_vcpu(host_kvm->arch.pkvm.handle,
host_vcpu->vcpu_idx);
if (!hyp_vcpu) {
ret = -EINVAL;
goto out;
}
flush_hyp_vcpu(hyp_vcpu);
ret = __kvm_vcpu_run(&hyp_vcpu->vcpu);
sync_hyp_vcpu(hyp_vcpu);
pkvm_put_hyp_vcpu(hyp_vcpu);
} else {
/* The host is fully trusted, run its vCPU directly. */
ret = __kvm_vcpu_run(host_vcpu);
}
out:
cpu_reg(host_ctxt, 1) = ret;
}
static void handle___kvm_adjust_pc(struct kvm_cpu_context *host_ctxt)
@@ -191,6 +267,33 @@ static void handle___pkvm_vcpu_init_traps(struct kvm_cpu_context *host_ctxt)
__pkvm_vcpu_init_traps(kern_hyp_va(vcpu));
}
static void handle___pkvm_init_vm(struct kvm_cpu_context *host_ctxt)
{
DECLARE_REG(struct kvm *, host_kvm, host_ctxt, 1);
DECLARE_REG(unsigned long, vm_hva, host_ctxt, 2);
DECLARE_REG(unsigned long, pgd_hva, host_ctxt, 3);
host_kvm = kern_hyp_va(host_kvm);
cpu_reg(host_ctxt, 1) = __pkvm_init_vm(host_kvm, vm_hva, pgd_hva);
}
static void handle___pkvm_init_vcpu(struct kvm_cpu_context *host_ctxt)
{
DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1);
DECLARE_REG(struct kvm_vcpu *, host_vcpu, host_ctxt, 2);
DECLARE_REG(unsigned long, vcpu_hva, host_ctxt, 3);
host_vcpu = kern_hyp_va(host_vcpu);
cpu_reg(host_ctxt, 1) = __pkvm_init_vcpu(handle, host_vcpu, vcpu_hva);
}
static void handle___pkvm_teardown_vm(struct kvm_cpu_context *host_ctxt)
{
DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1);
cpu_reg(host_ctxt, 1) = __pkvm_teardown_vm(handle);
}
typedef void (*hcall_t)(struct kvm_cpu_context *);
#define HANDLE_FUNC(x) [__KVM_HOST_SMCCC_FUNC_##x] = (hcall_t)handle_##x
@@ -220,6 +323,9 @@ static const hcall_t host_hcall[] = {
HANDLE_FUNC(__vgic_v3_save_aprs),
HANDLE_FUNC(__vgic_v3_restore_aprs),
HANDLE_FUNC(__pkvm_vcpu_init_traps),
HANDLE_FUNC(__pkvm_init_vm),
HANDLE_FUNC(__pkvm_init_vcpu),
HANDLE_FUNC(__pkvm_teardown_vm),
};
static void handle_host_hcall(struct kvm_cpu_context *host_ctxt)
...
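As an illustrative sketch (not part of the diff), this is roughly how host EL1 code would issue one of the hypercalls registered above, using the existing kvm_call_hyp_nvhe() helper; the wrapper function is hypothetical.

static int example_teardown_protected_vm(struct kvm *host_kvm)
{
	/* The handle is assumed to have been returned by __pkvm_init_vm. */
	return kvm_call_hyp_nvhe(__pkvm_teardown_vm, host_kvm->arch.pkvm.handle);
}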
@@ -23,6 +23,8 @@ u64 cpu_logical_map(unsigned int cpu)
return hyp_cpu_logical_map[cpu];
}
unsigned long __ro_after_init kvm_arm_hyp_percpu_base[NR_CPUS];
unsigned long __hyp_per_cpu_offset(unsigned int cpu)
{
unsigned long *cpu_base_array;
...
@@ -14,6 +14,7 @@
#include <nvhe/early_alloc.h>
#include <nvhe/gfp.h>
#include <nvhe/memory.h>
#include <nvhe/mem_protect.h>
#include <nvhe/mm.h>
#include <nvhe/spinlock.h>
@@ -25,6 +26,12 @@ unsigned int hyp_memblock_nr;
static u64 __io_map_base;
struct hyp_fixmap_slot {
u64 addr;
kvm_pte_t *ptep;
};
static DEFINE_PER_CPU(struct hyp_fixmap_slot, fixmap_slots);
static int __pkvm_create_mappings(unsigned long start, unsigned long size,
unsigned long phys, enum kvm_pgtable_prot prot)
{
@@ -129,13 +136,36 @@ int pkvm_create_mappings(void *from, void *to, enum kvm_pgtable_prot prot)
return ret;
}
-int hyp_back_vmemmap(phys_addr_t phys, unsigned long size, phys_addr_t back)
+int hyp_back_vmemmap(phys_addr_t back)
{
-unsigned long start, end;
-hyp_vmemmap_range(phys, size, &start, &end);
-return __pkvm_create_mappings(start, end - start, back, PAGE_HYP);
+unsigned long i, start, size, end = 0;
+int ret;
+for (i = 0; i < hyp_memblock_nr; i++) {
+start = hyp_memory[i].base;
+start = ALIGN_DOWN((u64)hyp_phys_to_page(start), PAGE_SIZE);
+/*
+ * The begining of the hyp_vmemmap region for the current
+ * memblock may already be backed by the page backing the end
+ * the previous region, so avoid mapping it twice.
+ */
+start = max(start, end);
+end = hyp_memory[i].base + hyp_memory[i].size;
+end = PAGE_ALIGN((u64)hyp_phys_to_page(end));
+if (start >= end)
+continue;
+size = end - start;
+ret = __pkvm_create_mappings(start, size, back, PAGE_HYP);
+if (ret)
+return ret;
+memset(hyp_phys_to_virt(back), 0, size);
+back += size;
+}
+return 0;
}
static void *__hyp_bp_vect_base;
@@ -189,6 +219,102 @@ int hyp_map_vectors(void)
return 0;
}
void *hyp_fixmap_map(phys_addr_t phys)
{
struct hyp_fixmap_slot *slot = this_cpu_ptr(&fixmap_slots);
kvm_pte_t pte, *ptep = slot->ptep;
pte = *ptep;
pte &= ~kvm_phys_to_pte(KVM_PHYS_INVALID);
pte |= kvm_phys_to_pte(phys) | KVM_PTE_VALID;
WRITE_ONCE(*ptep, pte);
dsb(ishst);
return (void *)slot->addr;
}
static void fixmap_clear_slot(struct hyp_fixmap_slot *slot)
{
kvm_pte_t *ptep = slot->ptep;
u64 addr = slot->addr;
WRITE_ONCE(*ptep, *ptep & ~KVM_PTE_VALID);
/*
* Irritatingly, the architecture requires that we use inner-shareable
* broadcast TLB invalidation here in case another CPU speculates
* through our fixmap and decides to create an "amalagamation of the
* values held in the TLB" due to the apparent lack of a
* break-before-make sequence.
*
* https://lore.kernel.org/kvm/20221017115209.2099-1-will@kernel.org/T/#mf10dfbaf1eaef9274c581b81c53758918c1d0f03
*/
dsb(ishst);
__tlbi_level(vale2is, __TLBI_VADDR(addr, 0), (KVM_PGTABLE_MAX_LEVELS - 1));
dsb(ish);
isb();
}
void hyp_fixmap_unmap(void)
{
fixmap_clear_slot(this_cpu_ptr(&fixmap_slots));
}
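/*
 * Illustrative sketch (not part of the diff): the intended map/use/unmap
 * pattern for the per-CPU fixmap above. The page copy is only an example
 * of work done while the temporary mapping is live, and the helper name
 * is hypothetical.
 */
static void example_copy_from_phys(void *dst, phys_addr_t src_phys)
{
	void *src = hyp_fixmap_map(src_phys);	/* map one page at a fixed VA */

	memcpy(dst, src, PAGE_SIZE);
	hyp_fixmap_unmap();			/* tear the mapping down again */
}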
static int __create_fixmap_slot_cb(const struct kvm_pgtable_visit_ctx *ctx,
enum kvm_pgtable_walk_flags visit)
{
struct hyp_fixmap_slot *slot = per_cpu_ptr(&fixmap_slots, (u64)ctx->arg);
if (!kvm_pte_valid(ctx->old) || ctx->level != KVM_PGTABLE_MAX_LEVELS - 1)
return -EINVAL;
slot->addr = ctx->addr;
slot->ptep = ctx->ptep;
/*
* Clear the PTE, but keep the page-table page refcount elevated to
* prevent it from ever being freed. This lets us manipulate the PTEs
* by hand safely without ever needing to allocate memory.
*/
fixmap_clear_slot(slot);
return 0;
}
static int create_fixmap_slot(u64 addr, u64 cpu)
{
struct kvm_pgtable_walker walker = {
.cb = __create_fixmap_slot_cb,
.flags = KVM_PGTABLE_WALK_LEAF,
.arg = (void *)cpu,
};
return kvm_pgtable_walk(&pkvm_pgtable, addr, PAGE_SIZE, &walker);
}
int hyp_create_pcpu_fixmap(void)
{
unsigned long addr, i;
int ret;
for (i = 0; i < hyp_nr_cpus; i++) {
ret = pkvm_alloc_private_va_range(PAGE_SIZE, &addr);
if (ret)
return ret;
ret = kvm_pgtable_hyp_map(&pkvm_pgtable, addr, PAGE_SIZE,
__hyp_pa(__hyp_bss_start), PAGE_HYP);
if (ret)
return ret;
ret = create_fixmap_slot(addr, i);
if (ret)
return ret;
}
return 0;
}
int hyp_create_idmap(u32 hyp_va_bits)
{
unsigned long start, end;
@@ -213,3 +339,36 @@ int hyp_create_idmap(u32 hyp_va_bits)
return __pkvm_create_mappings(start, end - start, start, PAGE_HYP_EXEC);
}
static void *admit_host_page(void *arg)
{
struct kvm_hyp_memcache *host_mc = arg;
if (!host_mc->nr_pages)
return NULL;
/*
* The host still owns the pages in its memcache, so we need to go
* through a full host-to-hyp donation cycle to change it. Fortunately,
* __pkvm_host_donate_hyp() takes care of races for us, so if it
* succeeds we're good to go.
*/
if (__pkvm_host_donate_hyp(hyp_phys_to_pfn(host_mc->head), 1))
return NULL;
return pop_hyp_memcache(host_mc, hyp_phys_to_virt);
}
/* Refill our local memcache by popping pages from the one provided by the host. */
int refill_memcache(struct kvm_hyp_memcache *mc, unsigned long min_pages,
struct kvm_hyp_memcache *host_mc)
{
struct kvm_hyp_memcache tmp = *host_mc;
int ret;
ret = __topup_hyp_memcache(mc, min_pages, admit_host_page,
hyp_virt_to_phys, &tmp);
*host_mc = tmp;
return ret;
}
@@ -93,11 +93,16 @@ static inline struct hyp_page *node_to_page(struct list_head *node)
static void __hyp_attach_page(struct hyp_pool *pool,
struct hyp_page *p)
{
phys_addr_t phys = hyp_page_to_phys(p);
unsigned short order = p->order;
struct hyp_page *buddy;
memset(hyp_page_to_virt(p), 0, PAGE_SIZE << p->order);
/* Skip coalescing for 'external' pages being freed into the pool. */
if (phys < pool->range_start || phys >= pool->range_end)
goto insert;
/*
* Only the first struct hyp_page of a high-order page (otherwise known
* as the 'head') should have p->order set. The non-head pages should
@@ -116,6 +121,7 @@ static void __hyp_attach_page(struct hyp_pool *pool,
p = min(p, buddy);
}
insert:
/* Mark the new head, and insert it */
p->order = order;
page_add_to_list(p, &pool->free_area[order]);
@@ -144,25 +150,6 @@ static struct hyp_page *__hyp_extract_page(struct hyp_pool *pool,
return p;
}
static inline void hyp_page_ref_inc(struct hyp_page *p)
{
BUG_ON(p->refcount == USHRT_MAX);
p->refcount++;
}
static inline int hyp_page_ref_dec_and_test(struct hyp_page *p)
{
BUG_ON(!p->refcount);
p->refcount--;
return (p->refcount == 0);
}
static inline void hyp_set_page_refcounted(struct hyp_page *p)
{
BUG_ON(p->refcount);
p->refcount = 1;
}
static void __hyp_put_page(struct hyp_pool *pool, struct hyp_page *p)
{
if (hyp_page_ref_dec_and_test(p))
@@ -249,10 +236,8 @@ int hyp_pool_init(struct hyp_pool *pool, u64 pfn, unsigned int nr_pages,
/* Init the vmemmap portion */
p = hyp_phys_to_page(phys);
-for (i = 0; i < nr_pages; i++) {
-p[i].order = 0;
+for (i = 0; i < nr_pages; i++)
hyp_set_page_refcounted(&p[i]);
-}
/* Attach the unused pages to the buddy tree */
for (i = reserved_pages; i < nr_pages; i++)
...
@@ -16,6 +16,7 @@
#include <nvhe/memory.h>
#include <nvhe/mem_protect.h>
#include <nvhe/mm.h>
#include <nvhe/pkvm.h>
#include <nvhe/trap_handler.h>
unsigned long hyp_nr_cpus;
@@ -24,6 +25,7 @@ unsigned long hyp_nr_cpus;
(unsigned long)__per_cpu_start)
static void *vmemmap_base;
static void *vm_table_base;
static void *hyp_pgt_base;
static void *host_s2_pgt_base;
static struct kvm_pgtable_mm_ops pkvm_pgtable_mm_ops;
@@ -31,16 +33,20 @@ static struct hyp_pool hpool;
static int divide_memory_pool(void *virt, unsigned long size)
{
-unsigned long vstart, vend, nr_pages;
+unsigned long nr_pages;
hyp_early_alloc_init(virt, size);
-hyp_vmemmap_range(__hyp_pa(virt), size, &vstart, &vend);
-nr_pages = (vend - vstart) >> PAGE_SHIFT;
+nr_pages = hyp_vmemmap_pages(sizeof(struct hyp_page));
vmemmap_base = hyp_early_alloc_contig(nr_pages);
if (!vmemmap_base)
return -ENOMEM;
nr_pages = hyp_vm_table_pages();
vm_table_base = hyp_early_alloc_contig(nr_pages);
if (!vm_table_base)
return -ENOMEM;
nr_pages = hyp_s1_pgtable_pages();
hyp_pgt_base = hyp_early_alloc_contig(nr_pages);
if (!hyp_pgt_base)
@@ -78,7 +84,7 @@ static int recreate_hyp_mappings(phys_addr_t phys, unsigned long size,
if (ret)
return ret;
-ret = hyp_back_vmemmap(phys, size, hyp_virt_to_phys(vmemmap_base));
+ret = hyp_back_vmemmap(hyp_virt_to_phys(vmemmap_base));
if (ret)
return ret;
@@ -138,20 +144,17 @@ static int recreate_hyp_mappings(phys_addr_t phys, unsigned long size,
}
/*
- * Map the host's .bss and .rodata sections RO in the hypervisor, but
- * transfer the ownership from the host to the hypervisor itself to
- * make sure it can't be donated or shared with another entity.
+ * Map the host sections RO in the hypervisor, but transfer the
+ * ownership from the host to the hypervisor itself to make sure they
+ * can't be donated or shared with another entity.
*
* The ownership transition requires matching changes in the host
* stage-2. This will be done later (see finalize_host_mappings()) once
* the hyp_vmemmap is addressable.
*/
prot = pkvm_mkstate(PAGE_HYP_RO, PKVM_PAGE_SHARED_OWNED);
-ret = pkvm_create_mappings(__start_rodata, __end_rodata, prot);
-if (ret)
-return ret;
-ret = pkvm_create_mappings(__hyp_bss_end, __bss_stop, prot);
+ret = pkvm_create_mappings(&kvm_vgic_global_state,
+&kvm_vgic_global_state + 1, prot);
if (ret)
return ret;
@@ -186,33 +189,20 @@ static void hpool_put_page(void *addr)
hyp_put_page(&hpool, addr);
}
-static int finalize_host_mappings_walker(u64 addr, u64 end, u32 level,
-kvm_pte_t *ptep,
-enum kvm_pgtable_walk_flags flag,
-void * const arg)
+static int fix_host_ownership_walker(const struct kvm_pgtable_visit_ctx *ctx,
+enum kvm_pgtable_walk_flags visit)
{
-struct kvm_pgtable_mm_ops *mm_ops = arg;
enum kvm_pgtable_prot prot;
enum pkvm_page_state state;
-kvm_pte_t pte = *ptep;
phys_addr_t phys;
-if (!kvm_pte_valid(pte))
+if (!kvm_pte_valid(ctx->old))
return 0;
-/*
- * Fix-up the refcount for the page-table pages as the early allocator
- * was unable to access the hyp_vmemmap and so the buddy allocator has
- * initialised the refcount to '1'.
- */
-mm_ops->get_page(ptep);
-if (flag != KVM_PGTABLE_WALK_LEAF)
-return 0;
-if (level != (KVM_PGTABLE_MAX_LEVELS - 1))
+if (ctx->level != (KVM_PGTABLE_MAX_LEVELS - 1))
return -EINVAL;
-phys = kvm_pte_to_phys(pte);
+phys = kvm_pte_to_phys(ctx->old);
if (!addr_is_memory(phys))
return -EINVAL;
@@ -220,10 +210,10 @@ static int finalize_host_mappings_walker(u64 addr, u64 end, u32 level,
* Adjust the host stage-2 mappings to match the ownership attributes
* configured in the hypervisor stage-1.
*/
-state = pkvm_getstate(kvm_pgtable_hyp_pte_prot(pte));
+state = pkvm_getstate(kvm_pgtable_hyp_pte_prot(ctx->old));
switch (state) {
case PKVM_PAGE_OWNED:
-return host_stage2_set_owner_locked(phys, PAGE_SIZE, pkvm_hyp_id);
+return host_stage2_set_owner_locked(phys, PAGE_SIZE, PKVM_ID_HYP);
case PKVM_PAGE_SHARED_OWNED:
prot = pkvm_mkstate(PKVM_HOST_MEM_PROT, PKVM_PAGE_SHARED_BORROWED);
break;
@@ -237,12 +227,25 @@ static int finalize_host_mappings_walker(u64 addr, u64 end, u32 level,
return host_stage2_idmap_locked(phys, PAGE_SIZE, prot);
}
-static int finalize_host_mappings(void)
+static int fix_hyp_pgtable_refcnt_walker(const struct kvm_pgtable_visit_ctx *ctx,
enum kvm_pgtable_walk_flags visit)
{
/*
* Fix-up the refcount for the page-table pages as the early allocator
* was unable to access the hyp_vmemmap and so the buddy allocator has
* initialised the refcount to '1'.
*/
if (kvm_pte_valid(ctx->old))
ctx->mm_ops->get_page(ctx->ptep);
return 0;
}
static int fix_host_ownership(void)
{
struct kvm_pgtable_walker walker = {
-.cb = finalize_host_mappings_walker,
+.cb = fix_host_ownership_walker,
-.flags = KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_TABLE_POST,
+.flags = KVM_PGTABLE_WALK_LEAF,
-.arg = pkvm_pgtable.mm_ops,
};
int i, ret;
@@ -258,6 +261,18 @@ static int finalize_host_mappings(void)
return 0;
}
static int fix_hyp_pgtable_refcnt(void)
{
struct kvm_pgtable_walker walker = {
.cb = fix_hyp_pgtable_refcnt_walker,
.flags = KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_TABLE_POST,
.arg = pkvm_pgtable.mm_ops,
};
return kvm_pgtable_walk(&pkvm_pgtable, 0, BIT(pkvm_pgtable.ia_bits),
&walker);
}
void __noreturn __pkvm_init_finalise(void)
{
struct kvm_host_data *host_data = this_cpu_ptr(&kvm_host_data);
@@ -287,10 +302,19 @@ void __noreturn __pkvm_init_finalise(void)
};
pkvm_pgtable.mm_ops = &pkvm_pgtable_mm_ops;
-ret = finalize_host_mappings();
+ret = fix_host_ownership();
+if (ret)
+goto out;
+ret = fix_hyp_pgtable_refcnt();
+if (ret)
+goto out;
+ret = hyp_create_pcpu_fixmap();
if (ret)
goto out;
pkvm_hyp_vm_table_init(vm_table_base);
out:
/*
* We tail-called to here from handle___pkvm_init() and will not return,
...
# SPDX-License-Identifier: GPL-2.0
#
-# Makefile for Kernel-based Virtual Machine module, HYP/nVHE part
+# Makefile for Kernel-based Virtual Machine module, HYP/VHE part
#
asflags-y := -D__KVM_VHE_HYPERVISOR__
...
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* irq.h: in kernel interrupt controller related definitions
* Copyright (c) 2016 Red Hat, Inc.
*
* This header is included by irqchip.c. However, on ARM, interrupt
* controller declarations are located in include/kvm/arm_vgic.h since
* they are mostly shared between arm and arm64.
*/
#ifndef __IRQ_H
#define __IRQ_H
#include <kvm/arm_vgic.h>
#endif
@@ -395,32 +395,3 @@ int kvm_set_ipa_limit(void)
return 0;
}
int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type)
{
u64 mmfr0, mmfr1;
u32 phys_shift;
if (type & ~KVM_VM_TYPE_ARM_IPA_SIZE_MASK)
return -EINVAL;
phys_shift = KVM_VM_TYPE_ARM_IPA_SIZE(type);
if (phys_shift) {
if (phys_shift > kvm_ipa_limit ||
phys_shift < ARM64_MIN_PARANGE_BITS)
return -EINVAL;
} else {
phys_shift = KVM_PHYS_SHIFT;
if (phys_shift > kvm_ipa_limit) {
pr_warn_once("%s using unsupported default IPA limit, upgrade your VMM\n",
current->comm);
return -EINVAL;
}
}
mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
mmfr1 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
kvm->arch.vtcr = kvm_get_vtcr(mmfr0, mmfr1, phys_shift);
return 0;
}
@@ -21,9 +21,12 @@ void copy_highpage(struct page *to, struct page *from)
copy_page(kto, kfrom);
-if (system_supports_mte() && test_bit(PG_mte_tagged, &from->flags)) {
+if (system_supports_mte() && page_mte_tagged(from)) {
-set_bit(PG_mte_tagged, &to->flags);
+page_kasan_tag_reset(to);
/* It's a new page, shouldn't have been tagged yet */
WARN_ON_ONCE(!try_page_mte_tagging(to));
mte_copy_page_tags(kto, kfrom);
set_page_mte_tagged(to);
}
}
EXPORT_SYMBOL(copy_highpage);
...
@@ -943,6 +943,8 @@ struct page *alloc_zeroed_user_highpage_movable(struct vm_area_struct *vma,
void tag_clear_highpage(struct page *page)
{
/* Newly allocated page, shouldn't have been tagged yet */
WARN_ON_ONCE(!try_page_mte_tagging(page));
mte_zero_clear_page_tags(page_address(page));
-set_bit(PG_mte_tagged, &page->flags);
+set_page_mte_tagged(page);
}
@@ -46,6 +46,7 @@ struct stack_frame {
unsigned long sie_savearea;
unsigned long sie_reason;
unsigned long sie_flags;
unsigned long sie_control_block_phys;
};
};
unsigned long gprs[10];
...