commit 677f0307
Author: Kenneth W. Chen <kenneth.w.chen@intel.com>
Hit a bug check when unmapping a hugetlb vma in PAE mode on i386 (and
x86-64).

Bad page state at free_hot_cold_page (in process 'a.out', page c165cc40)
flags:0x20000000 mapping:f75e1d00 mapped:0 count:0
Backtrace:
Call Trace:
 [<c0133e0d>] bad_page+0x79/0x9e
 [<c0134550>] free_hot_cold_page+0x71/0xfa
 [<c0115d60>] unmap_hugepage_range+0xa3/0xbf
 [<c013d375>] unmap_vmas+0xac/0x252
 [<c0117691>] default_wake_function+0x0/0xc
 [<c0140bea>] unmap_region+0xd8/0x145
 [<c0140f2d>] do_munmap+0xfc/0x14d
 [<c01b8a56>] sys_shmdt+0xa5/0x126
 [<c010a2ad>] sys_ipc+0x23c/0x27f
 [<c014a85e>] sys_write+0x38/0x59
 [<c0103e1b>] syscall_call+0x7/0xb

It turns out there is a bug in hugetlb_prefault(): with a 3-level page
table, huge_pte_alloc() might return a pmd that points to a PTE page.
This happens when the virtual address used for the hugetlb mmap is
recycled from a previously used normal page mmap: free_pgtables() might
not scrub the pmd entry on munmap, and hugetlb_prefault() skips any
present pmd regardless of what type it is.

Patch to fix the bug (see the sketch below).
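A minimal sketch of the check the prefault loop needs, assuming the
2.6-era helpers huge_pte_alloc(), pte_none() and pte_huge(); the -EBUSY
return and the loop shape here are illustrative only, not the literal
upstream diff:

    /*
     * Illustrative sketch: "pmd entry is present" must not be read as
     * "huge page already mapped", because with a 3-level page table
     * the entry may be a stale pointer to a normal PTE page left
     * behind by free_pgtables().
     */
    for (addr = vma->vm_start; addr < vma->vm_end; addr += HPAGE_SIZE) {
    	pte_t *pte = huge_pte_alloc(mm, addr);

    	if (!pte) {
    		ret = -ENOMEM;
    		break;
    	}
    	if (!pte_none(*pte)) {
    		/* Skip only a genuine huge mapping. */
    		if (pte_huge(*pte))
    			continue;	/* already faulted in */
    		ret = -EBUSY;		/* hypothetical error choice for this sketch */
    		break;
    	}
    	/* ... allocate and install the huge page as before ... */
    }

Without the pte_huge() test, a stale normal pmd entry is later handed
to unmap_hugepage_range() as if it mapped a huge page, producing the
bad_page splat above.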
Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>