Commit 2b4a0815 authored by Andi Kleen, committed by Linus Torvalds

[PATCH] x86-64: Increase TLB flush array size

The generic TLB flush functions kept up to 506 pages per
CPU to avoid too frequent IPIs.

This value was chosen to fit the L1 cache of older x86 CPUs,
but with modern CPUs it does not make much sense anymore.
TLB flushing is slow enough that sizing for the L2 cache is fine.

This patch increases the flush array on x86-64 to hold
5350 pages, which is roughly 20MB with 4K pages. It speeds
up large munmaps in multithreaded processes on SMP considerably.

The cost is roughly 42k of memory per CPU, which is reasonable.
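
For reference, both figures follow directly from the array size
(assuming 4K pages and 8-byte struct page pointers on x86-64):

  5350 slots * 8 bytes/pointer = 42,800 bytes, about 42k per CPU
  5350 pages * 4096 bytes/page = 21,913,600 bytes, about 20.9MB per flush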

I only increased it on x86-64 for now, but it would probably
make sense to increase it everywhere. Embedded architectures
with SMP may want to keep it smaller to save memory per CPU
(a hypothetical override example follows the diff below).
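
To make the mechanism concrete, here is a minimal self-contained
sketch of the per-CPU batching that FREE_PTE_NR sizes. The names and
stub functions below are illustrative stand-ins, not the kernel's
actual mmu_gather code; the point is only that one flush IPI is
amortized over FREE_PTE_NR freed pages.

#include <stdio.h>

#define FREE_PTE_NR 506                /* old generic batch size */

struct page { int dummy; };            /* stand-in for the kernel's struct page */

static void send_flush_ipi(void)       /* stand-in for the cross-CPU TLB flush */
{
        puts("IPI: flush TLBs on all CPUs");
}

static void free_batch(struct page **pages, unsigned int nr)
{
        (void)pages; (void)nr;         /* stand-in for freeing the gathered pages */
}

struct gather {                        /* per-CPU gather state, like mmu_gather */
        unsigned int nr;               /* pages collected so far */
        struct page *pages[FREE_PTE_NR];
};

/* Collect one unmapped page; flush and free only when the batch is
   full, so a large munmap pays one IPI per FREE_PTE_NR pages rather
   than one per page. */
static void remove_page(struct gather *tlb, struct page *page)
{
        tlb->pages[tlb->nr++] = page;
        if (tlb->nr == FREE_PTE_NR) {
                send_flush_ipi();
                free_batch(tlb->pages, tlb->nr);
                tlb->nr = 0;
        }
}

int main(void)
{
        static struct gather tlb;      /* static: the array is too big for small stacks */
        struct page p = { 0 };
        for (int i = 0; i < 2000; i++)
                remove_page(&tlb, &p); /* 2000 pages => 3 IPIs at 506/batch */
        return 0;
}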
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
parent 165aeb82
include/asm-generic/tlb.h
@@ -23,7 +23,11 @@
  * and page free order so much..
  */
 #ifdef CONFIG_SMP
+  #ifdef ARCH_FREE_PTE_NR
+    #define FREE_PTE_NR ARCH_FREE_PTE_NR
+  #else
 #define FREE_PTE_NR 506
+  #endif
 #define tlb_fast_mode(tlb) ((tlb)->nr == ~0U)
 #else
 #define FREE_PTE_NR 1
include/asm-x86_64/tlbflush.h
@@ -109,6 +109,10 @@ static inline void flush_tlb_range(struct vm_area_struct * vma, unsigned long st
 #define TLBSTATE_OK 1
 #define TLBSTATE_LAZY 2
 
+/* Roughly an IPI every 20MB with 4k pages for freeing page table
+   ranges. Cost is about 42k of memory for each CPU. */
+#define ARCH_FREE_PTE_NR 5350
+
 #endif
 
 #define flush_tlb_kernel_range(start, end) flush_tlb_all()
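
With the ARCH_FREE_PTE_NR hook in place, any architecture can pick
its own batch size by defining the macro in its tlbflush header
before asm-generic/tlb.h is included. A hypothetical example for a
memory-constrained embedded SMP port (the value 128 is made up for
illustration, giving about 1k of pointers per CPU on 64-bit):

/* include/asm-myarch/tlbflush.h (hypothetical embedded SMP port):
   keep the per-CPU flush array small to save memory. */
#define ARCH_FREE_PTE_NR 128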