Commit 0b580096 authored by David S. Miller

[MM]: Restore pgd_index() iteration to clear_page_range().

Otherwise ia64 and sparc64 explode with the new ptwalk
iterators.  The pgd level stuff does not handle virtual
address space holes (sparc64) and region based PGD indexing
(ia64) properly.  It only matters in functions like
clear_page_range() which potentially walk over more than
a single VMA worth of address space.
Signed-off-by: David S. Miller <davem@davemloft.net>
parent 13d5e44c
@@ -182,15 +182,19 @@ void clear_page_range(struct mmu_gather *tlb,
 		unsigned long addr, unsigned long end)
 {
 	pgd_t *pgd;
-	unsigned long next;
+	unsigned long i, next;
 
 	pgd = pgd_offset(tlb->mm, addr);
-	do {
+	for (i = pgd_index(addr); i <= pgd_index(end-1); i++) {
 		next = pgd_addr_end(addr, end);
 		if (pgd_none_or_clear_bad(pgd))
 			continue;
 		clear_pud_range(tlb, pgd, addr, next);
-	} while (pgd++, addr = next, addr != end);
+		pgd++;
+		addr = next;
+		if (addr == end)
+			break;
+	}
 }
 
 pte_t fastcall * pte_alloc_map(struct mm_struct *mm, pmd_t *pmd, unsigned long address)