Commit f720b471 authored by Kefeng Wang, committed by Andrew Morton

mm: hugetlb: use flush_hugetlb_tlb_range() in move_hugetlb_page_tables()

Architectures may need to do special things when flushing hugepage TLB entries, so use the
more applicable flush_hugetlb_tlb_range() instead of flush_tlb_range().
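
For context, a rough sketch of how this helper is resolved: architectures that need
hugepage-aware flushing provide their own flush_hugetlb_tlb_range(), and everyone else
falls back to the plain range flush. The fallback in mm/hugetlb.c looks roughly like
the following (paraphrased, not quoted verbatim from the tree at this commit), so the
change is a no-op for architectures without an override:

	/* Generic fallback: archs that don't define their own helper get
	 * the ordinary range flush. */
	#ifndef flush_hugetlb_tlb_range
	#define flush_hugetlb_tlb_range(vma, addr, end)	flush_tlb_range(vma, addr, end)
	#endif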

Link: https://lkml.kernel.org/r/20230801023145.17026-2-wangkefeng.wang@huawei.com
Fixes: 550a7d60 ("mm, hugepages: add mremap() support for hugepage backed vma")
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Acked-by: Muchun Song <songmuchun@bytedance.com>
Cc: Barry Song <21cnbao@gmail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Joel Fernandes (Google) <joel@joelfernandes.org>
Cc: Kalesh Singh <kaleshsingh@google.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Mina Almasry <almasrymina@google.com>
Cc: Will Deacon <will@kernel.org>
Cc: William Kucharski <william.kucharski@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent 13cfd63f
mm/hugetlb.c
@@ -5279,9 +5279,9 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
 	}
 
 	if (shared_pmd)
-		flush_tlb_range(vma, range.start, range.end);
+		flush_hugetlb_tlb_range(vma, range.start, range.end);
 	else
-		flush_tlb_range(vma, old_end - len, old_end);
+		flush_hugetlb_tlb_range(vma, old_end - len, old_end);
 	mmu_notifier_invalidate_range_end(&range);
 	i_mmap_unlock_write(mapping);
 	hugetlb_vma_unlock_write(vma);
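
To illustrate why the distinction matters, an architecture override can flush at the
hugepage stride instead of assuming PAGE_SIZE entries. The sketch below is modelled
loosely on arm64's definition in asm/hugetlb.h; the exact __flush_tlb_range() signature
and the trailing page-table-level hints are assumptions for illustration, not a verbatim
quote of the tree at this commit:

	/* Illustrative arch override: flush using the hugepage stride so the
	 * range is invalidated at the right granularity. */
	static inline void flush_hugetlb_tlb_range(struct vm_area_struct *vma,
						   unsigned long start,
						   unsigned long end)
	{
		unsigned long stride = huge_page_size(hstate_vma(vma));

		/* Hint the page-table level to the range-flush primitive. */
		if (stride == PMD_SIZE)
			__flush_tlb_range(vma, start, end, stride, false, 2);
		else if (stride == PUD_SIZE)
			__flush_tlb_range(vma, start, end, stride, false, 1);
		else
			__flush_tlb_range(vma, start, end, PAGE_SIZE, false, 0);
	}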