Commit 8b5111ec authored by Andrew Morton, committed by Linus Torvalds

[PATCH] Fix hugetlbfs faults

If the underlying mapping has been truncated and someone then references the
now-unmapped memory, the kernel enters handle_mm_fault() and starts
instantiating PAGE_SIZE ptes inside the hugepage VMA.  Everything goes
generally pear-shaped.

So trap this in handle_mm_fault().  It adds no overhead to non-hugepage
builds.
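
The "no overhead" claim holds because the check collapses to a compile-time
constant when hugepages are configured out.  A minimal sketch of the usual
definition (include/linux/hugetlb.h); exact details vary by kernel version:

	/*
	 * With CONFIG_HUGETLB_PAGE unset the predicate is a constant 0,
	 * so the compiler eliminates the branch in handle_mm_fault()
	 * entirely.
	 */
	#ifdef CONFIG_HUGETLB_PAGE
	static inline int is_vm_hugetlb_page(struct vm_area_struct *vma)
	{
		return vma->vm_flags & VM_HUGETLB;
	}
	#else
	#define is_vm_hugetlb_page(vma)	0
	#endif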

Another possible fix would be to not unmap the huge pages at all in truncate,
and instead just anonymise them.

But I think we want full ftruncate semantics for hugepages for management
purposes.
parent 08a1cc4e
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1447,6 +1447,10 @@ int handle_mm_fault(struct mm_struct *mm, struct vm_area_struct * vma,
 	pgd = pgd_offset(mm, address);
 	inc_page_state(pgfault);
+
+	if (is_vm_hugetlb_page(vma))
+		return VM_FAULT_SIGBUS;	/* mapping truncation does this. */
+
 	/*
 	 * We need the page table lock to synchronize with kswapd
 	 * and the SMP-safe atomic PTE updates.
...
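
For illustration, a minimal userspace sketch of the post-patch semantics:
touching a hugetlb mapping after the file has been truncated delivers
SIGBUS instead of instantiating small ptes.  The mount point /mnt/huge,
the 2MB huge page size, and a non-empty huge page pool are assumptions for
the demo, not part of the patch.

	#include <stdio.h>
	#include <signal.h>
	#include <fcntl.h>
	#include <unistd.h>
	#include <sys/mman.h>

	#define HPAGE_SIZE	(2UL * 1024 * 1024)	/* assumed huge page size */

	static void bus_handler(int sig)
	{
		/* With this patch, touching truncated hugetlb memory raises SIGBUS. */
		write(STDOUT_FILENO, "got SIGBUS\n", 11);
		_exit(0);
	}

	int main(void)
	{
		/* /mnt/huge is an assumed hugetlbfs mount; needs nr_hugepages > 0. */
		int fd = open("/mnt/huge/demo", O_CREAT | O_RDWR, 0600);
		char *p;

		if (fd < 0) {
			perror("open");
			return 1;
		}
		p = mmap(NULL, HPAGE_SIZE, PROT_READ | PROT_WRITE,
			 MAP_SHARED, fd, 0);
		if (p == MAP_FAILED) {
			perror("mmap");
			return 1;
		}

		p[0] = 1;		/* fault in the huge page */
		ftruncate(fd, 0);	/* truncate the mapping out from under us */

		signal(SIGBUS, bus_handler);
		p[0] = 2;		/* reference the now-unmapped memory */

		printf("no SIGBUS (pre-patch kernels go pear-shaped here)\n");
		return 0;
	}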