Commit 5a7862e8 authored by Will Deacon, committed by Catalin Marinas

arm64: tlbflush: avoid flushing when fullmm == 1

The TLB gather code sets fullmm=1 when tearing down the entire address
space for an mm_struct on exit or execve. Given that the ASID allocator
will never re-allocate a dirty ASID, this flushing is not needed and can
simply be avoided in the flushing code.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
parent f3e002c2
@@ -37,17 +37,21 @@ static inline void __tlb_remove_table(void *_table)
 
 static inline void tlb_flush(struct mmu_gather *tlb)
 {
-	if (tlb->fullmm) {
-		flush_tlb_mm(tlb->mm);
-	} else {
-		struct vm_area_struct vma = { .vm_mm = tlb->mm, };
-		/*
-		 * The intermediate page table levels are already handled by
-		 * the __(pte|pmd|pud)_free_tlb() functions, so last level
-		 * TLBI is sufficient here.
-		 */
-		__flush_tlb_range(&vma, tlb->start, tlb->end, true);
-	}
+	struct vm_area_struct vma = { .vm_mm = tlb->mm, };
+
+	/*
+	 * The ASID allocator will either invalidate the ASID or mark
+	 * it as used.
+	 */
+	if (tlb->fullmm)
+		return;
+
+	/*
+	 * The intermediate page table levels are already handled by
+	 * the __(pte|pmd|pud)_free_tlb() functions, so last level
+	 * TLBI is sufficient here.
+	 */
+	__flush_tlb_range(&vma, tlb->start, tlb->end, true);
 }
 
 static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte,
...
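
Why skipping the flush is safe: the arm64 ASID allocator tags each ASID with a generation number and performs a full TLB invalidation before any generation rollover, so an ASID abandoned by a dying mm is never handed to another mm while its stale TLB entries could still be hit. Below is a minimal userspace sketch of that invariant; the names (generation, check_and_switch_context, flush_all_tlbs) and the single-CPU structure are illustrative simplifications, not the kernel's actual allocator in arch/arm64/mm/context.c.

#include <stdint.h>
#include <stdio.h>

/*
 * Toy model of generation-based ASID allocation. A cookie packs a
 * generation number in the high bits and the hardware ASID in the low
 * bits. A dying mm simply abandons its cookie; the allocator never
 * re-issues the low ASID bits until after a rollover, and rollover
 * begins with a full TLB invalidation.
 */
#define ASID_BITS	8
#define NUM_ASIDS	(1u << ASID_BITS)
#define ASID_MASK	((uint64_t)NUM_ASIDS - 1)

static uint64_t generation = NUM_ASIDS;	/* current generation, high bits */
static uint64_t next_asid = 1;		/* ASID 0 stays reserved */

static void flush_all_tlbs(void)
{
	/* Stand-in for a full TLB invalidation: all stale entries die. */
	printf("rollover: full TLB invalidation\n");
}

/* Give the caller a cookie that is valid in the current generation. */
static uint64_t check_and_switch_context(uint64_t mm_cookie)
{
	/* Fast path: the mm's cookie belongs to the live generation. */
	if ((mm_cookie & ~ASID_MASK) == generation)
		return mm_cookie;

	/*
	 * Slow path: allocate a fresh ASID. Once the ASID space is
	 * exhausted, bump the generation and flush everything; this
	 * flush is what makes abandoned ASIDs harmless.
	 */
	if (next_asid == NUM_ASIDS) {
		generation += NUM_ASIDS;
		next_asid = 1;
		flush_all_tlbs();
	}
	return generation | next_asid++;
}

int main(void)
{
	/* An exiting mm: its ASID is abandoned, never explicitly flushed. */
	uint64_t dead = check_and_switch_context(0);

	printf("abandoned ASID %llu; harmless until the rollover flush\n",
	       (unsigned long long)(dead & ASID_MASK));
	return 0;
}

With that invariant in place, the fullmm early return in tlb_flush() above costs nothing in correctness: once the dying mm's ASID falls out of the live generation, its stale TLB entries can never be hit again.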