Commit 28019bca authored by Andi Kleen, committed by Linus Torvalds

[PATCH] i386/x86-64: Fix ioremap off by one

From Terence Ripperda <tripperda@nvidia.com>

When doing iounmap, don't try to change_page_attr back the guard
page that ioremap added.

Since the last round of change_page_attr changes, this would
trigger a BUG because the reference count on the changed pages
wouldn't match up.

The problem would only be visible on machines with >3GB of memory,
because only then is the PCI memory hole below end_pfn and
change_page_attr used.
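
Purely as illustration (not part of the patch), here is a minimal user-space sketch of the size arithmetic, assuming the usual 4 KB pages: get_vm_area() appends one guard page to the mapping, so the size stored in the vm area is one PAGE_SIZE larger than the region change_page_attr should touch, and shifting it directly yields one page too many.

#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)	/* assumption: 4 KB pages */

int main(void)
{
	/* e.g. an ioremap of a 64 KB BAR: the vm area stores the mapped
	   size plus one guard page (names here only mirror the kernel) */
	unsigned long mapped = 16 * PAGE_SIZE;
	unsigned long p_size = mapped + PAGE_SIZE;	/* includes guard page */

	unsigned long old_count = p_size >> PAGE_SHIFT;			/* 17: off by one */
	unsigned long new_count = (p_size - PAGE_SIZE) >> PAGE_SHIFT;	/* 16: correct */

	printf("old=%lu new=%lu\n", old_count, new_count);
	return 0;
}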

Fixed for both i386 and x86-64.

This was actually discovered and fixed by Andrea earlier, but I goofed up
while doing the last ioremap fixes merge and this change got lost.
Poor Terence had to debug it again. Sorry about that.

cc: andrea@suse.de
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
parent b3245157
@@ -238,8 +238,9 @@ void iounmap(volatile void __iomem *addr)
 	}
 	if ((p->flags >> 20) && p->phys_addr < virt_to_phys(high_memory) - 1) {
+		/* p->size includes the guard page, but cpa doesn't like that */
 		change_page_attr(virt_to_page(__va(p->phys_addr)),
-				 p->size >> PAGE_SHIFT,
+				 (p->size - PAGE_SIZE) >> PAGE_SHIFT,
 				 PAGE_KERNEL);
 		global_flush_tlb();
 	}
@@ -265,8 +265,9 @@ void iounmap(volatile void __iomem *addr)
 	unmap_vm_area(p);
 	if ((p->flags >> 20) &&
 	    p->phys_addr + p->size - 1 < virt_to_phys(high_memory)) {
+		/* p->size includes the guard page, but cpa doesn't like that */
 		change_page_attr(virt_to_page(__va(p->phys_addr)),
-				 p->size >> PAGE_SHIFT,
+				 (p->size - PAGE_SIZE) >> PAGE_SHIFT,
 				 PAGE_KERNEL);
 		global_flush_tlb();
 	}