    mm/thp: fix vma_address() if virtual address below file offset · 494334e4
    Hugh Dickins authored
    Running certain tests with a DEBUG_VM kernel would crash within hours,
    on the total_mapcount BUG() in split_huge_page_to_list(), while trying
    to free up some memory by punching a hole in a shmem huge page: split's
    try_to_unmap() was unable to find all the mappings of the page (which,
    on a !DEBUG_VM kernel, would then keep the huge page pinned in memory).
    
    When that BUG() was changed to a WARN(), it would later crash on the
    VM_BUG_ON_VMA(end < vma->vm_start || start >= vma->vm_end, vma) in
    mm/internal.h:vma_address(), used by rmap_walk_file() for
    try_to_unmap().
    
    vma_address() is usually correct, but there's a wraparound case when the
    vm_start address is unusually low, but vm_pgoff not so low:
    vma_address() chooses max(start, vma->vm_start), but that decides on the
    wrong address, because start has become almost ULONG_MAX.
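
    A minimal, self-contained sketch of that wraparound (plain userspace C with a
    stand-in struct rather than the real vm_area_struct; the field values here are
    illustrative assumptions, not taken from the failing workload):

        #include <stdio.h>

        #define PAGE_SHIFT  12

        struct vma_model {                  /* stand-in for vm_area_struct */
            unsigned long vm_start;         /* unusually low virtual address */
            unsigned long vm_end;
            unsigned long vm_pgoff;         /* not-so-low file offset, in pages */
        };

        int main(void)
        {
            struct vma_model vma = {
                .vm_start = 0x1000,                         /* mapped very low */
                .vm_end   = 0x1000 + (16UL << PAGE_SHIFT),
                .vm_pgoff = 512,                /* maps from 2MB into the file */
            };
            unsigned long pgoff = 0;        /* page near the start of the file */

            /* The old-style calculation underflows when pgoff < vm_pgoff... */
            unsigned long start = vma.vm_start +
                    ((pgoff - vma.vm_pgoff) << PAGE_SHIFT);

            /* ...so start is now almost ULONG_MAX, and max(start, vma.vm_start)
             * picks the bogus wrapped value instead of vm_start. */
            printf("start = %#lx\n", start);
            return 0;
        }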
    
    Rewrite vma_address() to be more careful about vm_pgoff; move the
    VM_BUG_ON_VMA() out of it, returning -EFAULT for errors, so that it can
    be safely used from page_mapped_in_vma() and page_address_in_vma() too.
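
    A sketch of roughly that shape (kernel-style C following the description in
    this message, not the literal patch; the exact comments and checks in the
    real helper may differ):

        static inline unsigned long
        vma_address(struct page *page, struct vm_area_struct *vma)
        {
            pgoff_t pgoff;
            unsigned long address;

            VM_BUG_ON_PAGE(PageKsm(page), page);  /* KSM page->index unusable */
            pgoff = page_to_pgoff(page);
            if (pgoff >= vma->vm_pgoff) {
                address = vma->vm_start +
                        ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
                /* Beyond (or wrapped past) the vma: report failure */
                if (address < vma->vm_start || address >= vma->vm_end)
                    address = -EFAULT;
            } else if (PageHead(page) &&
                       pgoff + compound_nr(page) - 1 >= vma->vm_pgoff) {
                /* A compound page straddling vm_pgoff maps from vm_start */
                address = vma->vm_start;
            } else {
                address = -EFAULT;
            }
            return address;
        }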
    
    Add vma_address_end() to apply similar care to end address calculation,
    in page_vma_mapped_walk() and page_mkclean_one() and try_to_unmap_one();
    though it raises a question of whether callers would do better to supply
    pvmw->end to page_vma_mapped_walk() - I chose not, for a smaller patch.
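
    And correspondingly for the end address, clamped to vm_end rather than
    reporting failure (again only a sketch of the logic described above):

        static inline unsigned long
        vma_address_end(struct page *page, struct vm_area_struct *vma)
        {
            pgoff_t pgoff;
            unsigned long address;

            VM_BUG_ON_PAGE(PageKsm(page), page);  /* KSM page->index unusable */
            pgoff = page_to_pgoff(page) + compound_nr(page);
            address = vma->vm_start +
                    ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
            /* Beyond (or wrapped past) the vma: clamp to vm_end */
            if (address < vma->vm_start || address > vma->vm_end)
                address = vma->vm_end;
            return address;
        }

    Callers such as page_vma_mapped_walk() can then scan from vma_address() up
    to this end without relying on the old VM_BUG_ON_VMA().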
    
    An irritation is that their apparent generality breaks down on KSM
    pages, which cannot be located by the page->index that page_to_pgoff()
    uses: as commit 4b0ece6f ("mm: migrate: fix remove_migration_pte()
    for ksm pages") once discovered.  I dithered over the best thing to do
    about that, and have ended up with a VM_BUG_ON_PAGE(PageKsm) in both
    vma_address() and vma_address_end(); though the only place in danger of
    using it on them was try_to_unmap_one().
    
    Sidenote: vma_address() and vma_address_end() now use compound_nr() on a
    head page, instead of thp_size(): to make the right calculation on a
    hugetlbfs page, whether or not THPs are configured.  try_to_unmap() is
    used on hugetlbfs pages, but perhaps the wrong calculation never
    mattered.
    
    Link: https://lkml.kernel.org/r/caf1c1a3-7cfb-7f8f-1beb-ba816e932825@google.com
    Fixes: a8fa41ad ("mm, rmap: check all VMAs that PTE-mapped THP can be part of")
    Signed-off-by: Hugh Dickins <hughd@google.com>
    Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
    Cc: Alistair Popple <apopple@nvidia.com>
    Cc: Jan Kara <jack@suse.cz>
    Cc: Jue Wang <juew@google.com>
    Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
    Cc: Miaohe Lin <linmiaohe@huawei.com>
    Cc: Minchan Kim <minchan@kernel.org>
    Cc: Naoya Horiguchi <naoya.horiguchi@nec.com>
    Cc: Oscar Salvador <osalvador@suse.de>
    Cc: Peter Xu <peterx@redhat.com>
    Cc: Ralph Campbell <rcampbell@nvidia.com>
    Cc: Shakeel Butt <shakeelb@google.com>
    Cc: Wang Yugui <wangyugui@e16-tech.com>
    Cc: Yang Shi <shy828301@gmail.com>
    Cc: Zi Yan <ziy@nvidia.com>
    Cc: <stable@vger.kernel.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>