Commit cc70e737 authored by Avi Kivity

KVM: MMU: Disable write access on clean large pages

By forcing clean huge pages to be read-only, we have separate roles
for the shadow of a clean large page and the shadow of a dirty large
page.  This is necessary because different ptes will be instantiated
for the two cases, even for read faults.
Signed-off-by: Avi Kivity <avi@qumranet.com>
parent c22e3514
@@ -382,6 +382,8 @@ static u64 *FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
         metaphysical = 1;
         hugepage_access = walker->pte;
         hugepage_access &= PT_USER_MASK | PT_WRITABLE_MASK;
+        if (!is_dirty_pte(walker->pte))
+            hugepage_access &= ~PT_WRITABLE_MASK;
         hugepage_access >>= PT_WRITABLE_SHIFT;
         if (walker->pte & PT64_NX_MASK)
             hugepage_access |= (1 << 2);
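
For illustration, here is a minimal standalone sketch (not KVM code) that re-derives the hugepage_access value above for a clean and a dirty guest large-page pte. It assumes the standard x86 page-table bit positions (writable = bit 1, user = bit 2, dirty = bit 6, NX = bit 63); the local constant definitions and the helper hugepage_access_for() are hypothetical, and only the bit manipulation mirrors the patch. The differing results for the clean and dirty cases show why the two end up with distinct shadow page roles.

/*
 * Sketch only: recompute the 3-bit large-page access value from the
 * patch above, using assumed standard x86 pte bit positions.
 */
#include <stdio.h>
#include <stdint.h>

#define PT_WRITABLE_SHIFT 1
#define PT_WRITABLE_MASK  (1ULL << PT_WRITABLE_SHIFT)
#define PT_USER_MASK      (1ULL << 2)
#define PT_DIRTY_MASK     (1ULL << 6)
#define PT64_NX_MASK      (1ULL << 63)

static int is_dirty_pte(uint64_t pte)
{
    return pte & PT_DIRTY_MASK;
}

/* Hypothetical helper mirroring the fetch-path computation. */
static unsigned hugepage_access_for(uint64_t pte)
{
    uint64_t hugepage_access = pte;

    hugepage_access &= PT_USER_MASK | PT_WRITABLE_MASK;
    if (!is_dirty_pte(pte))                 /* the line added by this patch */
        hugepage_access &= ~PT_WRITABLE_MASK;
    hugepage_access >>= PT_WRITABLE_SHIFT;
    if (pte & PT64_NX_MASK)
        hugepage_access |= (1 << 2);
    return (unsigned)hugepage_access;
}

int main(void)
{
    uint64_t clean = PT_USER_MASK | PT_WRITABLE_MASK;
    uint64_t dirty = clean | PT_DIRTY_MASK;

    /* Different access values select different shadow page roles. */
    printf("clean large page: access = %u\n", hugepage_access_for(clean)); /* 2 */
    printf("dirty large page: access = %u\n", hugepage_access_for(dirty)); /* 3 */
    return 0;
}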