Commit df79ea40 authored by Andrew Morton, committed by Linus Torvalds

[PATCH] hugetlb mremap fix

If you attempt to perform a relocating 4k-aligned mremap and the new address
for the map lands on top of a hugepage VMA, do_mremap() will attempt to
perform a 4k-aligned unmap inside the hugetlb VMA.  The hugetlb layer goes
BUG.

Fix that by trapping the poorly-aligned unmap attempt in do_munmap().
do_mremap() will then fall through to the place where it tests for a hugetlb
VMA without having done anything.

It would be neater to perform these checks on entry to do_mremap(), but that
would incur another VMA lookup.

Also, if you attempt a munmap() whose address and/or length is only
4k-aligned inside a hugepage VMA, the same BUG happens.  This patch fixes
that too.

This all means that an mremap attempt against a hugetlb area will fail, but
only after having unmapped the source pages.  That's a bit messy, but
supporting hugetlb mremap doesn't seem worth it, and completely disallowing
it will add overhead to normal mremaps.
parent 8a1335e9
@@ -58,6 +58,10 @@ static inline int is_vm_hugetlb_page(struct vm_area_struct *vma)
 #define follow_huge_pmd(mm, addr, pmd, write) 0
 #define pmd_huge(x) 0
+#ifndef HPAGE_MASK
+#define HPAGE_MASK 0 /* Keep the compiler happy */
+#endif
 #endif /* !CONFIG_HUGETLB_PAGE */
 #ifdef CONFIG_HUGETLBFS
@@ -1223,6 +1223,11 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len)
 		return 0;
 	/* we have start < mpnt->vm_end */
+	if (is_vm_hugetlb_page(mpnt)) {
+		if ((start & ~HPAGE_MASK) || (len & ~HPAGE_MASK))
+			return -EINVAL;
+	}
 	/* if it doesn't overlap, we have nothing.. */
 	end = start + len;
 	if (mpnt->vm_start >= end)