Commit 6a718bd2 authored by Yicong Yang, committed by Andrew Morton

arm64: tlbflush: add some comments for TLB batched flushing

Add comments for arch_flush_tlb_batched_pending() and
arch_tlbbatch_flush() to illustrate why only a DSB is needed.

Link: https://lkml.kernel.org/r/20230801124203.62164-1-yangyicong@huawei.com
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>
Reviewed-by: Alistair Popple <apopple@nvidia.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Barry Song <21cnbao@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent ebddd111
@@ -304,11 +304,26 @@ static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *b
 	__flush_tlb_page_nosync(mm, uaddr);
 }
 
+/*
+ * If mprotect/munmap/etc occurs during TLB batched flushing, we need to
+ * synchronise all the TLBI issued with a DSB to avoid the race mentioned in
+ * flush_tlb_batched_pending().
+ */
 static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
 {
 	dsb(ish);
 }
 
+/*
+ * To support TLB batched flush for multiple pages unmapping, we only send
+ * the TLBI for each page in arch_tlbbatch_add_pending() and wait for the
+ * completion at the end in arch_tlbbatch_flush(). Since we've already issued
+ * a TLBI for each page, only a DSB is needed to synchronise its effect on the
+ * other CPUs.
+ *
+ * This saves the time spent waiting on the DSB, compared with issuing a
+ * TLBI;DSB sequence for each page.
+ */
 static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 {
 	dsb(ish);
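For context, the pattern the new comments describe can be modelled outside the kernel. The sketch below is a minimal standalone C program, not kernel code: post_invalidate() stands in for the per-page TLBI that arch_tlbbatch_add_pending() issues without waiting, and wait_for_completion() stands in for the single DSB issued by arch_tlbbatch_flush(). All names in the sketch are hypothetical stand-ins chosen for illustration.

/*
 * Illustrative model of batched invalidation: post per-page invalidations
 * without waiting, then pay for one completion barrier at the end instead
 * of one per page.
 */
#include <stdio.h>
#include <stddef.h>

struct unmap_batch {
	size_t pending;	/* invalidations issued but not yet waited for */
};

/* Stand-in for arch_tlbbatch_add_pending(): issue the invalidation, do not wait. */
static void post_invalidate(struct unmap_batch *batch, unsigned long vaddr)
{
	printf("invalidate page at %#lx (no wait)\n", vaddr);
	batch->pending++;
}

/* Stand-in for arch_tlbbatch_flush(): one barrier covers everything issued so far. */
static void wait_for_completion(struct unmap_batch *batch)
{
	printf("barrier: %zu invalidation(s) now complete\n", batch->pending);
	batch->pending = 0;
}

int main(void)
{
	struct unmap_batch batch = { 0 };
	unsigned long pages[] = { 0x400000UL, 0x401000UL, 0x402000UL };

	for (size_t i = 0; i < sizeof(pages) / sizeof(pages[0]); i++)
		post_invalidate(&batch, pages[i]);

	/* A single wait, rather than an invalidate;wait pair per page. */
	wait_for_completion(&batch);
	return 0;
}

The analogy to the patch is that each TLBI completes asynchronously once issued, so deferring the single DSB ISH to arch_tlbbatch_flush() amortises the wait across the whole batch.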