Commit 96c31a85 authored by Joel Schopp, committed by Luis Henriques

arm/arm64: KVM: Fix VTTBR_BADDR_MASK and pgd alloc

commit dbff124e upstream.

The current aarch64 calculation for VTTBR_BADDR_MASK masks only 39 bits
and not all the bits in the PA range. This is clearly a bug that
manifests itself on systems that allocate memory in the higher address
space range.

 [ Modified from Joel's original patch to be based on PHYS_MASK_SHIFT
   instead of a hard-coded value and to move the alignment check of the
   allocation to mmu.c.  Also added a comment explaining why we hardcode
   the IPA range and changed the stage-2 pgd allocation to be based on
   the 40 bit IPA range instead of the maximum possible 48 bit PA range.
   - Christoffer ]
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Joel Schopp <joel.schopp@amd.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
parent e2dcb552
arch/arm/kvm/arm.c:
@@ -428,9 +428,9 @@ static void update_vttbr(struct kvm *kvm)
 	/* update vttbr to be used with the new vmid */
 	pgd_phys = virt_to_phys(kvm->arch.pgd);
+	BUG_ON(pgd_phys & ~VTTBR_BADDR_MASK);
 	vmid = ((u64)(kvm->arch.vmid) << VTTBR_VMID_SHIFT) & VTTBR_VMID_MASK;
-	kvm->arch.vttbr = pgd_phys & VTTBR_BADDR_MASK;
-	kvm->arch.vttbr |= vmid;
+	kvm->arch.vttbr = pgd_phys | vmid;
 	spin_unlock(&kvm_vmid_lock);
 }
arch/arm64/include/asm/kvm_arm.h:
@@ -121,6 +121,17 @@
 #define VTCR_EL2_T0SZ_MASK	0x3f
 #define VTCR_EL2_T0SZ_40B	24
 
+/*
+ * We configure the Stage-2 page tables to always restrict the IPA space to be
+ * 40 bits wide (T0SZ = 24). Systems with a PARange smaller than 40 bits are
+ * not known to exist and will break with this configuration.
+ *
+ * Note that when using 4K pages, we concatenate two first level page tables
+ * together.
+ *
+ * The magic numbers used for VTTBR_X in this patch can be found in Tables
+ * D4-23 and D4-25 in ARM DDI 0487A.b.
+ */
 #ifdef CONFIG_ARM64_64K_PAGES
 /*
  * Stage2 translation configuration:
@@ -148,7 +159,7 @@
 #endif
 
 #define VTTBR_BADDR_SHIFT (VTTBR_X - 1)
-#define VTTBR_BADDR_MASK  (((1LLU << (40 - VTTBR_X)) - 1) << VTTBR_BADDR_SHIFT)
+#define VTTBR_BADDR_MASK  (((1LLU << (PHYS_MASK_SHIFT - VTTBR_X)) - 1) << VTTBR_BADDR_SHIFT)
 #define VTTBR_VMID_SHIFT  (48LLU)
 #define VTTBR_VMID_MASK   (0xffLLU << VTTBR_VMID_SHIFT)
arch/arm64/include/asm/kvm_mmu.h:
@@ -59,10 +59,9 @@
 #define KERN_TO_HYP(kva)	((unsigned long)kva - PAGE_OFFSET + HYP_PAGE_OFFSET)
 
 /*
- * Align KVM with the kernel's view of physical memory. Should be
- * 40bit IPA, with PGD being 8kB aligned in the 4KB page configuration.
+ * We currently only support a 40bit IPA.
  */
-#define KVM_PHYS_SHIFT	PHYS_MASK_SHIFT
+#define KVM_PHYS_SHIFT	(40)
 #define KVM_PHYS_SIZE	(1UL << KVM_PHYS_SHIFT)
 #define KVM_PHYS_MASK	(KVM_PHYS_SIZE - 1UL)
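
For illustration only (not part of the patch): the user-space sketch below checks the mask arithmetic the commit message describes. It assumes the 4K-page stage-2 configuration of that kernel era, i.e. T0SZ = 24 so VTTBR_X = 37 - 24 = 13, and PHYS_MASK_SHIFT = 48; with those values the old mask only reaches bit 38, so a stage-2 pgd allocated at or above 2^39 loses its upper address bits, while the PHYS_MASK_SHIFT-based mask covers the full PA range.

#include <assert.h>
#include <stdio.h>

/* Assumed values for the 4K-page configuration of this era:
 * T0SZ = 24 (40-bit IPA), VTTBR_X = 37 - T0SZ = 13, PHYS_MASK_SHIFT = 48. */
#define VTTBR_X			13
#define PHYS_MASK_SHIFT		48
#define VTTBR_BADDR_SHIFT	(VTTBR_X - 1)

#define OLD_VTTBR_BADDR_MASK (((1LLU << (40 - VTTBR_X)) - 1) << VTTBR_BADDR_SHIFT)
#define NEW_VTTBR_BADDR_MASK (((1LLU << (PHYS_MASK_SHIFT - VTTBR_X)) - 1) << VTTBR_BADDR_SHIFT)

int main(void)
{
	/* The old mask tops out at bit 38 (a 39-bit base address). */
	printf("old mask: 0x%016llx\n", OLD_VTTBR_BADDR_MASK);
	printf("new mask: 0x%016llx\n", NEW_VTTBR_BADDR_MASK);

	/* A pgd physical address with bit 39 set survives the new mask
	 * but would have been truncated by the old one. */
	unsigned long long pgd_phys = 1LLU << 39;
	assert((pgd_phys & OLD_VTTBR_BADDR_MASK) != pgd_phys);
	assert((pgd_phys & NEW_VTTBR_BADDR_MASK) == pgd_phys);
	return 0;
}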