Commit 4506ecf4 authored by Sean Christopherson, committed by Paolo Bonzini

KVM: x86/mmu: Revert "Revert "KVM: MMU: collapse TLB flushes when zap all pages""

Now that the fast invalidate mechanism has been reintroduced, restore
the performance tweaks for fast invalidation that existed prior to its
removal.

Paraphrasing the original changelog:

  Reload the mmu on all vCPUs after updating the generation number so
  that obsolete pages are not used by any vCPUs.  This allows collapsing
  all TLB flushes during obsolete page zapping into a single flush, as
  there is no need to flush when dropping mmu_lock (to reschedule).

  Note: a remote TLB flush is still needed before freeing the pages as
  other vCPUs may be doing a lockless shadow page walk.

Opportunistically improve the comments restored by the revert (the
code itself is a true revert).

This reverts commit f34d251d.
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
parent fbb158cb
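
For orientation before reading the diff, the following is a condensed C sketch of the flow these changes restore. It mirrors the two functions touched below, kvm_mmu_zap_all_fast() and kvm_zap_obsolete_pages(), but collapses them into one view and elides the zap batching and list bookkeeping, so treat it as an illustration of the lock/flush ordering rather than the literal kernel code.

/*
 * Condensed illustration only: batching, list handling and error paths
 * are omitted; function names match the KVM code of this era.
 */
static void kvm_mmu_zap_all_fast(struct kvm *kvm)
{
	spin_lock(&kvm->mmu_lock);

	/* Mark every existing shadow page obsolete. */
	kvm->arch.mmu_valid_gen++;

	/*
	 * Force all vCPUs to reload their roots (and flush their TLBs)
	 * so that no vCPU can still be using an obsolete shadow page.
	 * This must happen under mmu_lock, otherwise a vCPU could keep
	 * using a shadow page that is about to be zapped without ever
	 * flushing its TLB.
	 */
	kvm_reload_remote_mmus(kvm);

	/*
	 * Zap the obsolete pages.  mmu_lock may be dropped to reschedule
	 * without flushing TLBs, since no vCPU can be using the obsolete
	 * pages; the single remote flush in kvm_mmu_commit_zap_page(),
	 * issued before the pages are freed, protects lockless shadow
	 * page table walkers.
	 */
	kvm_zap_obsolete_pages(kvm);

	spin_unlock(&kvm->mmu_lock);
}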
@@ -5696,11 +5696,15 @@ static void kvm_zap_obsolete_pages(struct kvm *kvm)
 		if (sp->role.invalid)
 			continue;
 
+		/*
+		 * No need to flush the TLB since we're only zapping shadow
+		 * pages with an obsolete generation number and all vCPUS have
+		 * loaded a new root, i.e. the shadow pages being zapped cannot
+		 * be in active use by the guest.
+		 */
 		if (batch >= BATCH_ZAP_PAGES &&
-		    (need_resched() || spin_needbreak(&kvm->mmu_lock))) {
+		    cond_resched_lock(&kvm->mmu_lock)) {
 			batch = 0;
-			kvm_mmu_commit_zap_page(kvm, &invalid_list);
-			cond_resched_lock(&kvm->mmu_lock);
 			goto restart;
 		}
 
@@ -5711,6 +5715,11 @@ static void kvm_zap_obsolete_pages(struct kvm *kvm)
 		}
 	}
 
+	/*
+	 * Trigger a remote TLB flush before freeing the page tables to ensure
+	 * KVM is not in the middle of a lockless shadow page table walk, which
+	 * may reference the pages.
+	 */
 	kvm_mmu_commit_zap_page(kvm, &invalid_list);
 }
 
@@ -5729,6 +5738,16 @@ static void kvm_mmu_zap_all_fast(struct kvm *kvm)
 	trace_kvm_mmu_zap_all_fast(kvm);
 	kvm->arch.mmu_valid_gen++;
 
+	/*
+	 * Notify all vcpus to reload its shadow page table and flush TLB.
+	 * Then all vcpus will switch to new shadow page table with the new
+	 * mmu_valid_gen.
+	 *
+	 * Note: we need to do this under the protection of mmu_lock,
+	 * otherwise, vcpu would purge shadow page but miss tlb flush.
+	 */
+	kvm_reload_remote_mmus(kvm);
+
 	kvm_zap_obsolete_pages(kvm);
 	spin_unlock(&kvm->mmu_lock);
 }