Commit 10c30de0 authored by Junaid Shahid, committed by Paolo Bonzini

kvm: mmu: Use fast PF path for access tracking of huge pages when possible

The fast page fault path bails out on write faults to huge pages in
order to accommodate dirty logging. This change adds a check to do that
only when dirty logging is actually enabled, so that access tracking for
huge pages can still use the fast path for write faults in the common
case.
Signed-off-by: Junaid Shahid <junaids@google.com>
Reviewed-by: Ben Gardon <bgardon@google.com>
Reviewed-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20211104003359.2201967-1-junaids@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
parent c435d4b7
@@ -3191,17 +3191,17 @@ static int fast_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 		new_spte |= PT_WRITABLE_MASK;

 		/*
-		 * Do not fix write-permission on the large spte.  Since
-		 * we only dirty the first page into the dirty-bitmap in
+		 * Do not fix write-permission on the large spte when
+		 * dirty logging is enabled. Since we only dirty the
+		 * first page into the dirty-bitmap in
 		 * fast_pf_fix_direct_spte(), other pages are missed
 		 * if its slot has dirty logging enabled.
 		 *
 		 * Instead, we let the slow page fault path create a
 		 * normal spte to fix the access.
-		 *
-		 * See the comments in kvm_arch_commit_memory_region().
 		 */
-		if (sp->role.level > PG_LEVEL_4K)
+		if (sp->role.level > PG_LEVEL_4K &&
+		    kvm_slot_dirty_track_enabled(fault->slot))
 			break;
 	}
...