Commit c604cffa authored by James Hogan, committed by Radim Krčmář

MIPS: KVM: Fix mapped fault broken commpage handling

kvm_mips_handle_mapped_seg_tlb_fault() appears to map the guest page at
virtual address 0 to PFN 0 if the guest has created its own mapping
there. The intention is unclear, but it may have been an attempt to
protect the zero page from being mapped to anything but the comm page in
code paths you wouldn't expect from genuine commpage accesses (guest
kernel mode cache instructions on that address, hitting trapping
instructions when executing from that address with a coincidental TLB
eviction during the KVM handling, and guest user mode accesses to that
address).
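
For reference, the special case being removed (visible on the removed lines of the hunk below) only catches a VPN2 of zero and forces both halves of the even/odd page pair to PFN 0:

	/* Old handling: a VPN2 of 0 is assumed to be the commpage, and both
	 * the even and odd page of the pair are forced to map PFN 0, even
	 * when only one half (or neither) is actually the commpage. */
	if ((tlb->tlb_hi & VPN2_MASK) == 0) {
		pfn0 = 0;
		pfn1 = 0;
	}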

Fix this to check for mappings exactly at KVM_GUEST_COMMPAGE_ADDR (it
may not be at address 0 since commit 42aa12e7 ("MIPS: KVM: Move
commpage so 0x0 is unmapped")), and set the corresponding EntryLo to be
interpreted as 0 (invalid).
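
Concretely (see the added lines in the hunk below), the fix works on a local copy of the guest EntryLo pair: the XOR with KVM_GUEST_COMMPAGE_ADDR, masked by VPN2_MASK & (PAGE_MASK << 1), is zero only when the faulting entry's even/odd page pair covers the commpage, and bit 0 of the commpage's page number selects which half of the pair to clear:

	tlb_lo[0] = tlb->tlb_lo[0];
	tlb_lo[1] = tlb->tlb_lo[1];

	/* Does this entry's even/odd page pair cover the commpage? */
	if (!((tlb->tlb_hi ^ KVM_GUEST_COMMPAGE_ADDR) &
			VPN2_MASK & (PAGE_MASK << 1)))
		/* Clear the half that would map the commpage; an EntryLo of 0
		 * has ENTRYLO_V clear, so it is treated as invalid. */
		tlb_lo[(KVM_GUEST_COMMPAGE_ADDR >> PAGE_SHIFT) & 1] = 0;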

Fixes: 858dd5d4 ("KVM/MIPS32: MMU/TLB operations for the Guest.")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Cc: <stable@vger.kernel.org> # 3.10.x-
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
parent a28ebea2
@@ -138,35 +138,42 @@ int kvm_mips_handle_mapped_seg_tlb_fault(struct kvm_vcpu *vcpu,
 	unsigned long entryhi = 0, entrylo0 = 0, entrylo1 = 0;
 	struct kvm *kvm = vcpu->kvm;
 	kvm_pfn_t pfn0, pfn1;
+	long tlb_lo[2];
 	int ret;
 
-	if ((tlb->tlb_hi & VPN2_MASK) == 0) {
-		pfn0 = 0;
-		pfn1 = 0;
-	} else {
-		if (kvm_mips_map_page(kvm, mips3_tlbpfn_to_paddr(tlb->tlb_lo[0])
-					   >> PAGE_SHIFT) < 0)
-			return -1;
+	tlb_lo[0] = tlb->tlb_lo[0];
+	tlb_lo[1] = tlb->tlb_lo[1];
 
-		if (kvm_mips_map_page(kvm, mips3_tlbpfn_to_paddr(tlb->tlb_lo[1])
-					   >> PAGE_SHIFT) < 0)
-			return -1;
+	/*
+	 * The commpage address must not be mapped to anything else if the guest
+	 * TLB contains entries nearby, or commpage accesses will break.
+	 */
+	if (!((tlb->tlb_hi ^ KVM_GUEST_COMMPAGE_ADDR) &
+			VPN2_MASK & (PAGE_MASK << 1)))
+		tlb_lo[(KVM_GUEST_COMMPAGE_ADDR >> PAGE_SHIFT) & 1] = 0;
 
-		pfn0 = kvm->arch.guest_pmap[
-			mips3_tlbpfn_to_paddr(tlb->tlb_lo[0]) >> PAGE_SHIFT];
-		pfn1 = kvm->arch.guest_pmap[
-			mips3_tlbpfn_to_paddr(tlb->tlb_lo[1]) >> PAGE_SHIFT];
-	}
+	if (kvm_mips_map_page(kvm, mips3_tlbpfn_to_paddr(tlb_lo[0])
+				   >> PAGE_SHIFT) < 0)
+		return -1;
+
+	if (kvm_mips_map_page(kvm, mips3_tlbpfn_to_paddr(tlb_lo[1])
+				   >> PAGE_SHIFT) < 0)
+		return -1;
+
+	pfn0 = kvm->arch.guest_pmap[
+		mips3_tlbpfn_to_paddr(tlb_lo[0]) >> PAGE_SHIFT];
+	pfn1 = kvm->arch.guest_pmap[
+		mips3_tlbpfn_to_paddr(tlb_lo[1]) >> PAGE_SHIFT];
 
 	/* Get attributes from the Guest TLB */
 	entrylo0 = mips3_paddr_to_tlbpfn(pfn0 << PAGE_SHIFT) |
 		((_page_cachable_default >> _CACHE_SHIFT) << ENTRYLO_C_SHIFT) |
-		(tlb->tlb_lo[0] & ENTRYLO_D) |
-		(tlb->tlb_lo[0] & ENTRYLO_V);
+		(tlb_lo[0] & ENTRYLO_D) |
+		(tlb_lo[0] & ENTRYLO_V);
 	entrylo1 = mips3_paddr_to_tlbpfn(pfn1 << PAGE_SHIFT) |
 		((_page_cachable_default >> _CACHE_SHIFT) << ENTRYLO_C_SHIFT) |
-		(tlb->tlb_lo[1] & ENTRYLO_D) |
-		(tlb->tlb_lo[1] & ENTRYLO_V);
+		(tlb_lo[1] & ENTRYLO_D) |
+		(tlb_lo[1] & ENTRYLO_V);
 
 	kvm_debug("@ %#lx tlb_lo0: 0x%08lx tlb_lo1: 0x%08lx\n", vcpu->arch.pc,
 		  tlb->tlb_lo[0], tlb->tlb_lo[1]);