Commit 3d32e4db authored by Quentin Casasnovas, committed by Paolo Bonzini

kvm: fix excessive pages un-pinning in kvm_iommu_map error path.

The third parameter of kvm_unpin_pages(), when called from
kvm_iommu_map_pages(), is wrong: it should be the number of pages to
un-pin, not the page size.

This error was facilitated by an inconsistent API: kvm_pin_pages() takes
a size, but kvm_unpin_pages() takes a number of pages, so fix the problem
by matching the two.

This was introduced by commit 350b8bdd ("kvm: iommu: fix the third parameter
of kvm_iommu_put_pages (CVE-2014-3601)"), which fixed the memory leak of
pages that were meant to be un-pinned but never were, but unfortunately
made it possible to un-pin far more pages than intended, including pages
that should have stayed pinned. As far as I understand, though, the same
practical mitigations apply.

This issue was found during review of Red Hat 6.6 patches to prepare
Ksplice rebootless updates.

Thanks to Vegard for his time on a late Friday evening to help me in
understanding this code.

Fixes: 350b8bdd ("kvm: iommu: fix the third parameter of... (CVE-2014-3601)")
Cc: stable@vger.kernel.org
Signed-off-by: Quentin Casasnovas <quentin.casasnovas@oracle.com>
Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com>
Signed-off-by: Jamie Iles <jamie.iles@oracle.com>
Reviewed-by: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
parent 3f6f1480
@@ -43,13 +43,13 @@ static void kvm_iommu_put_pages(struct kvm *kvm,
 				gfn_t base_gfn, unsigned long npages);
 
 static pfn_t kvm_pin_pages(struct kvm_memory_slot *slot, gfn_t gfn,
-			   unsigned long size)
+			   unsigned long npages)
 {
 	gfn_t end_gfn;
 	pfn_t pfn;
 
 	pfn     = gfn_to_pfn_memslot(slot, gfn);
-	end_gfn = gfn + (size >> PAGE_SHIFT);
+	end_gfn = gfn + npages;
 	gfn    += 1;
 
 	if (is_error_noslot_pfn(pfn))
@@ -119,7 +119,7 @@ int kvm_iommu_map_pages(struct kvm *kvm, struct kvm_memory_slot *slot)
 		 * Pin all pages we are about to map in memory. This is
 		 * important because we unmap and unpin in 4kb steps later.
 		 */
-		pfn = kvm_pin_pages(slot, gfn, page_size);
+		pfn = kvm_pin_pages(slot, gfn, page_size >> PAGE_SHIFT);
 		if (is_error_noslot_pfn(pfn)) {
 			gfn += 1;
 			continue;
@@ -131,7 +131,7 @@ int kvm_iommu_map_pages(struct kvm *kvm, struct kvm_memory_slot *slot)
 		if (r) {
 			printk(KERN_ERR "kvm_iommu_map_address:"
 			       "iommu failed to map pfn=%llx\n", pfn);
-			kvm_unpin_pages(kvm, pfn, page_size);
+			kvm_unpin_pages(kvm, pfn, page_size >> PAGE_SHIFT);
 			goto unmap_pages;
 		}