Commit 2655421a authored by Nicholas Piggin, committed by Andrew Morton

lazy tlb: shoot lazies, non-refcounting lazy tlb mm reference handling scheme

On big systems, the mm refcount can become highly contended when doing a
lot of context switching with threaded applications.  The user<->idle
switch is one of the important cases.  Abandoning lazy tlb entirely slows
this switching down quite a bit in the common uncontended case, so that is
not viable.

Implement a scheme where lazy tlb mm references do not contribute to the
refcount; instead they get explicitly removed when the refcount reaches
zero.
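
As a rough illustration of what that means for the reference helpers,
here is a sketch modeled on the mmgrab_lazy_tlb()/mmdrop_lazy_tlb()
helpers introduced earlier in this series (the exact bodies below are an
approximation, not this patch):

	static inline void mmgrab_lazy_tlb(struct mm_struct *mm)
	{
		/* Lazy tlb references touch mm_count only in refcounting mode. */
		if (IS_ENABLED(CONFIG_MMU_LAZY_TLB_REFCOUNT))
			mmgrab(mm);
	}

	static inline void mmdrop_lazy_tlb(struct mm_struct *mm)
	{
		if (IS_ENABLED(CONFIG_MMU_LAZY_TLB_REFCOUNT)) {
			mmdrop(mm);
		} else {
			/*
			 * Shoot-lazies mode: no count to drop; the final
			 * __mmdrop() IPIs lazy users off the mm instead.
			 * (The real helper keeps a full barrier here for
			 * membarrier ordering.)
			 */
			smp_mb();
		}
	}

The atomic mmgrab()/mmdrop() pair on every switch into and out of lazy
tlb mode is the cost this scheme avoids.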

The final mmdrop() sends IPIs to all CPUs in the mm_cpumask and they
switch away from this mm to init_mm if it was being used as the lazy tlb
mm.  Enabling the shoot lazies option therefore requires that the arch
ensures that mm_cpumask contains all CPUs that could possibly be using mm.
A DEBUG_VM option IPIs every CPU in the system after this to ensure there
are no references remaining before the mm is freed.

Shootdown IPI cost could be an issue, but it has not been observed to be
a serious problem with this scheme, because short-lived processes tend
not to migrate CPUs much, so they don't get much chance to leave lazy tlb
mm references on remote CPUs.  There are a number of options to reduce
the IPIs if necessary, described in the code comments.

The near-worst-case can be benchmarked with will-it-scale:

  context_switch1_threads -t $(($(nproc) / 2))

This will create nproc threads (nproc / 2 switching pairs), all sharing
the same mm, spread over all CPUs so that each CPU does
thread->idle->thread switching.
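
For reference, each switching pair behaves roughly like the following
hypothetical standalone sketch (not the will-it-scale source): two
threads in one process blocking on pipes, so every wakeup/sleep on an
otherwise idle CPU goes through the idle task.

	#include <pthread.h>
	#include <unistd.h>

	static int ping[2], pong[2];

	/* Partner thread: wait to be woken, then wake the main thread back. */
	static void *partner(void *arg)
	{
		char c;

		while (read(ping[0], &c, 1) == 1) {
			if (write(pong[1], &c, 1) != 1)
				break;
		}
		return NULL;
	}

	int main(void)
	{
		pthread_t t;
		char c = 0;
		long iters = 1000000;

		if (pipe(ping) || pipe(pong))
			return 1;
		if (pthread_create(&t, NULL, partner, NULL))
			return 1;

		while (iters--) {
			/*
			 * Wake the partner, then block on the reply: this CPU
			 * switches thread->idle->thread, with the shared mm
			 * held lazily across the idle period.
			 */
			if (write(ping[1], &c, 1) != 1 || read(pong[0], &c, 1) != 1)
				return 1;
		}

		close(ping[1]);		/* partner's read() returns 0 and it exits */
		pthread_join(t, NULL);
		return 0;
	}

Build with gcc -pthread; pinning the two threads of a pair to different
CPUs makes the idle transitions deterministic.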

[ Rik came up with basically the same idea a few years ago, so credit
  to him for that. ]

Link: https://lore.kernel.org/linux-mm/20230118080011.2258375-1-npiggin@gmail.com/
Link: https://lore.kernel.org/all/20180728215357.3249-11-riel@surriel.com/
Link: https://lkml.kernel.org/r/20230203071837.1136453-5-npiggin@gmail.com
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Nadav Amit <nadav.amit@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent 88e3009b
...@@ -481,6 +481,21 @@ config ARCH_WANT_IRQS_OFF_ACTIVATE_MM
# converted already).
config MMU_LAZY_TLB_REFCOUNT
	def_bool y
	depends on !MMU_LAZY_TLB_SHOOTDOWN

# This option allows MMU_LAZY_TLB_REFCOUNT=n. It ensures no CPUs are using an
# mm as a lazy tlb beyond its last reference count, by shooting down these
# users before the mm is deallocated. __mmdrop() first IPIs all CPUs that may
# be using the mm as a lazy tlb, so that they may switch themselves to using
# init_mm for their active mm. mm_cpumask(mm) is used to determine which CPUs
# may be using mm as a lazy tlb mm.
#
# To implement this, an arch *must*:
# - At the time of the final mmdrop of the mm, ensure mm_cpumask(mm) contains
#   at least all possible CPUs in which the mm is lazy.
# - It must meet the requirements for MMU_LAZY_TLB_REFCOUNT=n (see above).
config MMU_LAZY_TLB_SHOOTDOWN
	bool

config ARCH_HAVE_NMI_SAFE_CMPXCHG
	bool
......
...@@ -772,6 +772,67 @@ static void check_mm(struct mm_struct *mm)
#define allocate_mm()	(kmem_cache_alloc(mm_cachep, GFP_KERNEL))
#define free_mm(mm)	(kmem_cache_free(mm_cachep, (mm)))
static void do_check_lazy_tlb(void *arg)
{
	struct mm_struct *mm = arg;

	WARN_ON_ONCE(current->active_mm == mm);
}

static void do_shoot_lazy_tlb(void *arg)
{
	struct mm_struct *mm = arg;

	if (current->active_mm == mm) {
		WARN_ON_ONCE(current->mm);
		current->active_mm = &init_mm;
		switch_mm(mm, &init_mm, current);
	}
}

static void cleanup_lazy_tlbs(struct mm_struct *mm)
{
	if (!IS_ENABLED(CONFIG_MMU_LAZY_TLB_SHOOTDOWN)) {
		/*
		 * In this case, lazy tlb mms are refcounted and would not
		 * reach __mmdrop until all CPUs have switched away and
		 * mmdrop()ed.
		 */
		return;
	}

	/*
	 * Lazy mm shootdown does not refcount "lazy tlb mm" usage, rather it
	 * requires lazy mm users to switch to another mm when the refcount
	 * drops to zero, before the mm is freed. This requires IPIs here to
	 * switch kernel threads to init_mm.
	 *
	 * archs that use IPIs to flush TLBs can piggy-back that lazy tlb mm
	 * switch with the final userspace teardown TLB flush which leaves the
	 * mm lazy on this CPU but no others, reducing the need for additional
	 * IPIs here. There are cases where a final IPI is still required
	 * here, such as the final mmdrop being performed on a different CPU
	 * than the one exiting, or kernel threads using the mm when
	 * userspace exits.
	 *
	 * IPI overheads have not been found to be expensive, but they could
	 * be reduced in a number of possible ways, for example (roughly
	 * increasing order of complexity):
	 * - The last lazy reference created by exit_mm() could instead switch
	 *   to init_mm, however it's probable this will run on the same CPU
	 *   immediately afterwards, so this may not reduce IPIs much.
	 * - A batch of mms requiring IPIs could be gathered and freed at once.
	 * - CPUs store active_mm where it can be remotely checked without a
	 *   lock, to filter out false-positives in the cpumask.
	 * - After mm_users or mm_count reaches zero, switching away from the
	 *   mm could clear mm_cpumask to reduce some IPIs, perhaps together
	 *   with some batching or delaying of the final IPIs.
	 * - A delayed freeing and RCU-like quiescing sequence based on mm
	 *   switching to avoid IPIs completely.
	 */
	on_each_cpu_mask(mm_cpumask(mm), do_shoot_lazy_tlb, (void *)mm, 1);
	if (IS_ENABLED(CONFIG_DEBUG_VM_SHOOT_LAZIES))
		on_each_cpu(do_check_lazy_tlb, (void *)mm, 1);
}
/*
 * Called when the last reference to the mm
 * is dropped: either by a lazy thread or by
...@@ -783,6 +844,10 @@ void __mmdrop(struct mm_struct *mm)
	BUG_ON(mm == &init_mm);
	WARN_ON_ONCE(mm == current->mm);

	/* Ensure no CPUs are using this as their lazy tlb mm */
	cleanup_lazy_tlbs(mm);

	WARN_ON_ONCE(mm == current->active_mm);
	mm_free_pgd(mm);
	destroy_context(mm);
......
...@@ -791,6 +791,16 @@ config DEBUG_VM
	  If unsure, say N.

config DEBUG_VM_SHOOT_LAZIES
	bool "Debug MMU_LAZY_TLB_SHOOTDOWN implementation"
	depends on DEBUG_VM
	depends on MMU_LAZY_TLB_SHOOTDOWN
	help
	  Enable additional IPIs that ensure lazy tlb mm references are
	  removed before the mm is freed.

	  If unsure, say N.

config DEBUG_VM_MAPLE_TREE
	bool "Debug VM maple trees"
	depends on DEBUG_VM
......