Andrew Morton authored
If the underlying mapping was truncated and someone references the now-unmapped memory, the kernel will enter handle_mm_fault() and will start instantiating PAGE_SIZE ptes inside the hugepage VMA. Everything goes generally pear-shaped.

So trap this in handle_mm_fault(). It adds no overhead to non-hugepage builds.

Another possible fix would be to not unmap the huge pages at all in truncate - just anonymise them. But I think we want full ftruncate semantics for hugepages for management purposes.
8b5111ec