Commit 13000523 authored by Wanpeng Li, committed by Paolo Bonzini

kvm: mmu: don't do memslot overflow check

As Andres pointed out:

| I don't understand the value of this check here. Are we looking for a
| broken memslot? Shouldn't this be a BUG_ON? Is this the place to care
| about these things? npages is capped to KVM_MEM_MAX_NR_PAGES, i.e.
| 2^31. A 64 bit overflow would be caused by a gigantic gfn_start which
| would be trouble in many other ways.

This patch drops the memslot overflow check to make the code simpler.
Reviewed-by: Andres Lagar-Cavilla <andreslc@google.com>
Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
Message-Id: <1429064694-3072-1-git-send-email-wanpeng.li@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
parent 95fce4fa
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -4504,19 +4504,12 @@ void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
 	bool flush = false;
 	unsigned long *rmapp;
 	unsigned long last_index, index;
-	gfn_t gfn_start, gfn_end;
 
 	spin_lock(&kvm->mmu_lock);
 
-	gfn_start = memslot->base_gfn;
-	gfn_end = memslot->base_gfn + memslot->npages - 1;
-	if (gfn_start >= gfn_end)
-		goto out;
-
 	rmapp = memslot->arch.rmap[0];
-
-	last_index = gfn_to_index(gfn_end, memslot->base_gfn,
-				PT_PAGE_TABLE_LEVEL);
+	last_index = gfn_to_index(memslot->base_gfn + memslot->npages - 1,
+				memslot->base_gfn, PT_PAGE_TABLE_LEVEL);
 
 	for (index = 0; index <= last_index; ++index, ++rmapp) {
 		if (*rmapp)
@@ -4534,7 +4527,6 @@ void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
 	if (flush)
 		kvm_flush_remote_tlbs(kvm);
 
-out:
 	spin_unlock(&kvm->mmu_lock);
 }
 
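
To make the overflow argument above concrete, here is a minimal user-space sketch (illustrative only, not part of the patch; the KVM_MEM_MAX_NR_PAGES value below is assumed to mirror the kernel's cap on memslot size):

/*
 * Illustrative only -- not kernel code.  With npages capped at
 * KVM_MEM_MAX_NR_PAGES (about 2^31), base_gfn + npages - 1 can only wrap
 * a 64-bit gfn_t when base_gfn is already within ~2^31 of 2^64, i.e. the
 * memslot is broken in many other ways.  The dropped check also tripped
 * for a degenerate single-page slot (npages == 1), which the new code
 * simply scans.
 */
#include <stdint.h>
#include <stdio.h>

typedef uint64_t gfn_t;		/* guest frame number, 64 bits as in KVM */

/* Assumed to match the kernel's cap on memslot size. */
#define KVM_MEM_MAX_NR_PAGES	((1ULL << 31) - 1)

/* The condition that guarded the removed "goto out". */
static int overflow_check_fires(gfn_t base_gfn, uint64_t npages)
{
	gfn_t gfn_start = base_gfn;
	gfn_t gfn_end = base_gfn + npages - 1;

	return gfn_start >= gfn_end;
}

int main(void)
{
	/* Largest legal slot at a sane base: the check never fires (prints 0). */
	printf("%d\n", overflow_check_fires(0x100000, KVM_MEM_MAX_NR_PAGES));

	/* Only a base_gfn absurdly close to 2^64 makes it fire (prints 1). */
	printf("%d\n", overflow_check_fires(UINT64_MAX - 10, KVM_MEM_MAX_NR_PAGES));

	return 0;
}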