Commit 6cad87b0 authored by Nicholas Piggin, committed by Andrew Morton

kthread: simplify kthread_use_mm refcounting

Patch series "shoot lazy tlbs (lazy tlb refcount scalability
improvement)", v7.

This series improves scalability of context switching between user and
kernel threads on large systems with a threaded process spread across a
lot of CPUs.

Discussion of v6 here:
https://lore.kernel.org/linux-mm/20230118080011.2258375-1-npiggin@gmail.com/


This patch (of 5):

Remove the special case avoiding refcounting when the mm to be used is the
same as the kernel thread's active (lazy tlb) mm.  kthread_use_mm() should
not be such a performance critical path that this matters much.  This
simplifies a later change to lazy tlb mm refcounting.
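
For context, kthread_use_mm() is paired with kthread_unuse_mm() by kernel
threads that temporarily need to operate on a user address space. A minimal
sketch of a caller follows; the thread function name and the way the mm
reference arrives are illustrative assumptions, not part of this patch:

	/*
	 * Hypothetical kthread body: adopt a user mm, touch its mappings,
	 * then restore the lazy tlb state. example_kthread_fn and the
	 * handed-over mm reference are assumptions for illustration.
	 */
	#include <linux/kthread.h>
	#include <linux/sched/mm.h>

	static int example_kthread_fn(void *data)
	{
		struct mm_struct *mm = data;	/* caller took this reference */

		kthread_use_mm(mm);	/* switch to mm; refcounting handled inside */
		/* ... copy_to_user()/copy_from_user() against mm's mappings ... */
		kthread_unuse_mm(mm);	/* drop back to the kernel's lazy tlb mm */

		mmput(mm);		/* release the caller's reference */
		return 0;
	}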

Link: https://lkml.kernel.org/r/20230203071837.1136453-1-npiggin@gmail.com
Link: https://lkml.kernel.org/r/20230203071837.1136453-2-npiggin@gmail.com
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Nadav Amit <nadav.amit@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Will Deacon <will@kernel.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent 62bf1258
@@ -1415,14 +1415,13 @@ void kthread_use_mm(struct mm_struct *mm)
 	WARN_ON_ONCE(!(tsk->flags & PF_KTHREAD));
 	WARN_ON_ONCE(tsk->mm);
 
+	mmgrab(mm);
+
 	task_lock(tsk);
 	/* Hold off tlb flush IPIs while switching mm's */
 	local_irq_disable();
 	active_mm = tsk->active_mm;
-	if (active_mm != mm) {
-		mmgrab(mm);
-		tsk->active_mm = mm;
-	}
+	tsk->active_mm = mm;
 	tsk->mm = mm;
 	membarrier_update_current_mm(mm);
 	switch_mm_irqs_off(active_mm, mm, tsk);
@@ -1439,12 +1438,9 @@ void kthread_use_mm(struct mm_struct *mm)
 	 * memory barrier after storing to tsk->mm, before accessing
 	 * user-space memory. A full memory barrier for membarrier
 	 * {PRIVATE,GLOBAL}_EXPEDITED is implicitly provided by
-	 * mmdrop(), or explicitly with smp_mb().
+	 * mmdrop().
 	 */
-	if (active_mm != mm)
-		mmdrop(active_mm);
-	else
-		smp_mb();
+	mmdrop(active_mm);
 }
 EXPORT_SYMBOL_GPL(kthread_use_mm);
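
Taken together, the two hunks leave both the grab and the drop unconditional.
A condensed after-the-patch view of the function; the local declarations and
the elided middle are reconstructed from the surrounding kernel source rather
than shown in the hunks above:

	void kthread_use_mm(struct mm_struct *mm)
	{
		struct mm_struct *active_mm;
		struct task_struct *tsk = current;

		WARN_ON_ONCE(!(tsk->flags & PF_KTHREAD));
		WARN_ON_ONCE(tsk->mm);

		mmgrab(mm);		/* always take the reference up front */

		task_lock(tsk);
		/* Hold off tlb flush IPIs while switching mm's */
		local_irq_disable();
		active_mm = tsk->active_mm;
		tsk->active_mm = mm;	/* no active_mm != mm special case left */
		tsk->mm = mm;
		membarrier_update_current_mm(mm);
		switch_mm_irqs_off(active_mm, mm, tsk);
		/* ... unchanged lines between the two hunks elided ... */
		mmdrop(active_mm);	/* also the full barrier membarrier needs */
	}
	EXPORT_SYMBOL_GPL(kthread_use_mm);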