    mm/tlbbatch: introduce arch_flush_tlb_batched_pending() (commit db6c1f6f)
    Authored by Yicong Yang <yangyicong@hisilicon.com>
    Currently we flush the whole mm in flush_tlb_batched_pending() to avoid a
    race between reclaim, which unmaps pages via a batched TLB flush, and
    mprotect/munmap/etc.  Other architectures such as arm64 may only need a
    synchronization barrier (DSB) here rather than a full mm flush.  So add
    arch_flush_tlb_batched_pending() to allow an arch-specific implementation.
    This intends no functional change on x86, which still performs a full mm
    flush.
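
    A minimal sketch of the idea (the exact hunks, header placement and
    config guards may differ from the actual patch): x86 keeps the full mm
    flush behind the new hook, so its behaviour is unchanged, while an
    architecture such as arm64 could implement the hook as just a barrier.
    The arm64 variant below is illustrative only and belongs to a separate
    arm64 patch:

	/*
	 * x86 (e.g. arch/x86/include/asm/tlbflush.h): keep the existing
	 * behaviour, a full mm flush, so there is no functional change.
	 */
	static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
	{
		flush_tlb_mm(mm);
	}

	/*
	 * Illustrative arm64 variant (e.g. arch/arm64/include/asm/tlbflush.h):
	 * the batched TLB invalidations have already been issued, so waiting
	 * for them to complete with a DSB is sufficient.
	 */
	static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
	{
		dsb(ish);
	}

	/*
	 * mm/rmap.c:flush_tlb_batched_pending() then calls
	 * arch_flush_tlb_batched_pending(mm) where it used to call
	 * flush_tlb_mm(mm) directly.
	 */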
    
    Link: https://lkml.kernel.org/r/20230717131004.12662-4-yangyicong@huawei.com
    Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Anshuman Khandual <anshuman.khandual@arm.com>
    Cc: Anshuman Khandual <khandual@linux.vnet.ibm.com>
    Cc: Arnd Bergmann <arnd@arndb.de>
    Cc: Barry Song <baohua@kernel.org>
    Cc: Barry Song <v-songbaohua@oppo.com>
    Cc: Darren Hart <darren@os.amperecomputing.com>
    Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
    Cc: Jonathan Corbet <corbet@lwn.net>
    Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
    Cc: lipeifeng <lipeifeng@oppo.com>
    Cc: Mark Rutland <mark.rutland@arm.com>
    Cc: Mel Gorman <mgorman@suse.de>
    Cc: Nadav Amit <namit@vmware.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Punit Agrawal <punit.agrawal@bytedance.com>
    Cc: Ryan Roberts <ryan.roberts@arm.com>
    Cc: Steven Miao <realmz6@gmail.com>
    Cc: Will Deacon <will@kernel.org>
    Cc: Xin Hao <xhao@linux.alibaba.com>
    Cc: Zeng Tao <prime.zeng@hisilicon.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>