x86/mm/tlb: Always use lazy TLB mode
On most workloads, the number of context switches far exceeds the number of TLB flushes sent. Optimizing context switches by always using lazy TLB mode speeds up those workloads. This patch results in about a 1% reduction in CPU use on a two-socket Broadwell system running a memcache-like workload.
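To make the tradeoff concrete, here is a minimal userspace sketch (not the kernel code; the policy functions and the cost model are invented for illustration only) that counts how many address-space switches each policy performs when context switches vastly outnumber TLB flushes:

/*
 * Toy model only -- NOT the kernel implementation.  It compares two
 * policies for a CPU that switches to a kernel thread:
 *   eager: immediately load init_mm's page tables (one CR3 write per switch)
 *   lazy:  keep the previous mm loaded and switch away only when a TLB
 *          flush actually targets it
 * All names and costs below are assumptions made for this sketch.
 */
#include <stdio.h>

static unsigned long eager_cr3_writes(unsigned long ctx_switches,
				      unsigned long tlb_flushes)
{
	(void)tlb_flushes;
	return ctx_switches;		/* pay on every switch */
}

static unsigned long lazy_cr3_writes(unsigned long ctx_switches,
				     unsigned long tlb_flushes)
{
	(void)ctx_switches;
	return tlb_flushes;		/* pay only when a flush arrives */
}

int main(void)
{
	/* Premise of the patch: switches far outnumber flushes. */
	unsigned long ctx_switches = 1000000, tlb_flushes = 1000;

	printf("eager policy: %lu CR3 writes\n",
	       eager_cr3_writes(ctx_switches, tlb_flushes));
	printf("lazy policy:  %lu CR3 writes\n",
	       lazy_cr3_writes(ctx_switches, tlb_flushes));
	return 0;
}

Under these assumed numbers the lazy policy performs three orders of magnitude fewer address-space switches, which is the intuition behind the measured CPU savings.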
Cc: npiggin@gmail.com
Cc: efault@gmx.de
Cc: will.deacon@arm.com
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: kernel-team@fb.com
Cc: hpa@zytor.com
Cc: luto@kernel.org
Tested-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Rik van Riel <riel@surriel.com>
(cherry picked from commit 95b0e635)
Acked-by: Dave Hansen <dave.hansen@intel.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20180716190337.26133-7-riel@surriel.com