Commit 0f30206b authored by James Morse, committed by Linus Torvalds

fs/proc/task_mmu.c: make the task_mmu walk_page_range() limit in clear_refs_write() obvious

Trying to walk all of virtual memory requires architecture-specific
knowledge.  On x86_64, addresses must be sign extended from bit 48,
whereas on arm64 the top VA_BITS of address space have their own set of
page tables.
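To make the x86_64 half of that concrete, here is a minimal userspace
sketch (not part of this patch; the function name is illustrative) of
the canonical-address rule: a 48-bit virtual address is usable only if
bits 63..48 replicate bit 47, i.e. the address equals its own sign
extension from bit 48.

#include <stdbool.h>
#include <stdint.h>

/* Sketch only: true if addr is canonical under 48-bit virtual
 * addresses, i.e. bits 63..48 are copies of bit 47. */
static bool is_canonical_48(uint64_t addr)
{
	/* Shift the 48-bit value up, arithmetic-shift it back down to
	 * sign-extend from bit 47, and compare with the original. */
	return ((int64_t)(addr << 16) >> 16) == (int64_t)addr;
}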

clear_refs_write() calls walk_page_range() on the range 0 to ~0UL; it
provides a test_walk() callback that only expects to be walking over
VMAs.  Currently walk_pmd_range() will skip memory regions that don't
have a VMA, reporting them as a hole.
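For reference, a minimal sketch of the test_walk() contract under the
walk_page_range() API of this era (the callback name and the VM_PFNMAP
check are illustrative, not the exact code in task_mmu.c): the callback
runs once per VMA, returning 1 to skip it and 0 to walk it; address
ranges with no VMA never reach it.

#include <linux/mm.h>

/* Illustrative only: walk_page_range() invokes test_walk() once per
 * VMA; regions without a VMA are reported as holes before it runs. */
static int example_test_walk(unsigned long start, unsigned long end,
			     struct mm_walk *walk)
{
	struct vm_area_struct *vma = walk->vma;

	/* Return 1 to skip this VMA, 0 to walk its page tables. */
	return (vma->vm_flags & VM_PFNMAP) ? 1 : 0;
}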

As this call only expects to walk user address space, make it walk 0 to
'highest_vm_end'.

Link: http://lkml.kernel.org/r/1472655792-22439-1-git-send-email-james.morse@arm.com
Signed-off-by: James Morse <james.morse@arm.com>
Acked-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 03e86dba
fs/proc/task_mmu.c
@@ -1070,7 +1070,7 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
 			}
 			mmu_notifier_invalidate_range_start(mm, 0, -1);
 		}
-		walk_page_range(0, ~0UL, &clear_refs_walk);
+		walk_page_range(0, mm->highest_vm_end, &clear_refs_walk);
 		if (type == CLEAR_REFS_SOFT_DIRTY)
 			mmu_notifier_invalidate_range_end(mm, 0, -1);
 		flush_tlb_mm(mm);