Commit aaa2cc56 authored by Michel Lespinasse, committed by Linus Torvalds

mmap locking API: convert nested write lock sites

Add API for nested write locks and convert the few call sites doing that.
Signed-off-by: Michel Lespinasse <walken@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Reviewed-by: Laurent Dufour <ldufour@linux.ibm.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Davidlohr Bueso <dbueso@suse.de>
Cc: David Rientjes <rientjes@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Liam Howlett <Liam.Howlett@oracle.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ying Han <yinghan@google.com>
Link: http://lkml.kernel.org/r/20200520052908.204642-7-walken@google.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 89154dd5
--- a/arch/um/include/asm/mmu_context.h
+++ b/arch/um/include/asm/mmu_context.h
@@ -8,6 +8,7 @@
 #include <linux/sched.h>
 #include <linux/mm_types.h>
+#include <linux/mmap_lock.h>
 #include <asm/mmu.h>
@@ -47,7 +48,7 @@ static inline void activate_mm(struct mm_struct *old, struct mm_struct *new)
 	 * when the new ->mm is used for the first time.
 	 */
 	__switch_mm(&new->context.id);
-	down_write_nested(&new->mmap_sem, 1);
+	mmap_write_lock_nested(new, SINGLE_DEPTH_NESTING);
 	uml_setup_stubs(new);
 	mmap_write_unlock(new);
 }
--- a/include/linux/mmap_lock.h
+++ b/include/linux/mmap_lock.h
@@ -11,6 +11,11 @@ static inline void mmap_write_lock(struct mm_struct *mm)
 	down_write(&mm->mmap_sem);
 }
 
+static inline void mmap_write_lock_nested(struct mm_struct *mm, int subclass)
+{
+	down_write_nested(&mm->mmap_sem, subclass);
+}
+
 static inline int mmap_write_lock_killable(struct mm_struct *mm)
 {
 	return down_write_killable(&mm->mmap_sem);
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -501,7 +501,7 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
 	/*
 	 * Not linked in yet - no deadlock potential:
 	 */
-	down_write_nested(&mm->mmap_sem, SINGLE_DEPTH_NESTING);
+	mmap_write_lock_nested(mm, SINGLE_DEPTH_NESTING);
 	/* No ordering required: file already has been exposed. */
 	RCU_INIT_POINTER(mm->exe_file, get_mm_exe_file(oldmm));
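
For readers unfamiliar with the lockdep nesting annotation this commit wraps, here is a minimal, self-contained sketch of the calling pattern. The helper copy_into_new_mm() and its locking order are hypothetical and not part of this commit; the point is that a second mm_struct, not yet visible to any other task, can be write-locked with mmap_write_lock_nested() so lockdep does not report the second acquisition of the same lock class as a potential deadlock.

#include <linux/mm_types.h>
#include <linux/mmap_lock.h>
#include <linux/lockdep.h>	/* SINGLE_DEPTH_NESTING */

/* Hypothetical example, not taken from this commit. */
static void copy_into_new_mm(struct mm_struct *oldmm, struct mm_struct *newmm)
{
	/* Lock the source address space first. */
	mmap_write_lock(oldmm);

	/*
	 * newmm is not linked in anywhere yet, so taking its mmap lock
	 * while holding oldmm's cannot deadlock; the nesting annotation
	 * tells lockdep this second acquisition is intentional.
	 */
	mmap_write_lock_nested(newmm, SINGLE_DEPTH_NESTING);

	/* ... copy VMAs from oldmm into newmm ... */

	mmap_write_unlock(newmm);
	mmap_write_unlock(oldmm);
}

This is the same reasoning the dup_mmap() hunk above relies on ("Not linked in yet - no deadlock potential"): the wrapper only adds the subclass argument so callers no longer touch mmap_sem directly.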