commit 37717bca
Author: Andrew Morton
It has been noticed that across a kernel build, many calls to tlb_flush_mmu() have nothing to flush, apparently because glibc is mmapping a file over a previously-mapped region which has no faulted-in ptes. This patch detects that case and optimises away a little over one third of the tlb invalidations.

The functions which potentially cause an invalidate are tlb_remove_tlb_entry(), pte_free_tlb() and pmd_free_tlb(). These have been front-ended in asm-generic/tlb.h, and the per-arch versions now carry leading double underscores. The generic versions tag the mmu_gather_t as needing a flush and then call the arch-specific version. tlb_flush_mmu() looks at tlb->need_flush and, if it sees that no real activity has happened, avoids the invalidation.

For the time being, the success rate is displayed in /proc/meminfo; this instrumentation should be removed later.
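A minimal sketch of the front-ending described above, assuming a need_flush field in the mmu_gather_t; the declarations here are illustrative, not the patch verbatim:

/* asm-generic/tlb.h: the generic front-end tags the gather as
 * needing a flush, then defers to the per-arch version, which
 * now carries the leading double-underscore.
 */
#define tlb_remove_tlb_entry(tlb, ptep, address)		\
	do {							\
		(tlb)->need_flush = 1;				\
		__tlb_remove_tlb_entry(tlb, ptep, address);	\
	} while (0)

/* pte_free_tlb() and pmd_free_tlb() are wrapped the same way. */

/* tlb_flush_mmu() can then skip the hardware invalidate entirely
 * when nothing has been queued since the last flush.
 */
static inline void tlb_flush_mmu(mmu_gather_t *tlb,
				 unsigned long start, unsigned long end)
{
	if (!tlb->need_flush)
		return;			/* nothing to do: skip the invalidate */
	tlb->need_flush = 0;
	tlb_flush(tlb);			/* arch-provided invalidate */
	/* ... then free the gathered pages ... */
}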