Commit 17fe833b authored by Jann Horn, committed by Andrew Morton

mm: fix (harmless) type confusion in lock_vma_under_rcu()

There is a (harmless) type confusion in lock_vma_under_rcu(): After
vma_start_read(), we have taken the VMA lock but don't know yet whether
the VMA has already been detached and scheduled for RCU freeing.  At this
point, ->vm_start and ->vm_end are accessed.

vm_area_struct contains a union such that ->vm_rcu uses the same memory as
->vm_start and ->vm_end; so accessing ->vm_start and ->vm_end of a
detached VMA is illegal and leads to type confusion between union members.

Fix it by reordering the vma->detached check above the address checks, and
document the rules for RCU readers accessing VMAs.

This will probably change the number of observed VMA_LOCK_MISS events
(since previously, trying to access a detached VMA whose ->vm_rcu has been
scheduled would bail out when checking the fault address against the
rcu_head members reinterpreted as VMA bounds).

Link: https://lkml.kernel.org/r/20240805-fix-vma-lock-type-confusion-v1-1-9f25443a9a71@google.com
Fixes: 50ee3253 ("mm: introduce lock_vma_under_rcu to be used from arch-specific code")
Signed-off-by: Jann Horn <jannh@google.com>
Acked-by: Suren Baghdasaryan <surenb@google.com>
Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent 0e400844
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -660,6 +660,9 @@ struct vma_numab_state {
  * per VM-area/task. A VM area is any part of the process virtual memory
  * space that has a special rule for the page-fault handlers (ie a shared
  * library, the executable area etc).
+ *
+ * Only explicitly marked struct members may be accessed by RCU readers before
+ * getting a stable reference.
  */
 struct vm_area_struct {
 	/* The first cache line has the info for VMA tree walking. */
@@ -675,7 +678,11 @@ struct vm_area_struct {
 #endif
 	};
-	struct mm_struct *vm_mm;	/* The address space we belong to. */
+	/*
+	 * The address space we belong to.
+	 * Unstable RCU readers are allowed to read this.
+	 */
+	struct mm_struct *vm_mm;
 	pgprot_t vm_page_prot;	/* Access permissions of this VMA. */
 	/*
@@ -688,7 +695,10 @@ struct vm_area_struct {
 	};
 #ifdef CONFIG_PER_VMA_LOCK
-	/* Flag to indicate areas detached from the mm->mm_mt tree */
+	/*
+	 * Flag to indicate areas detached from the mm->mm_mt tree.
+	 * Unstable RCU readers are allowed to read this.
+	 */
 	bool detached;
 	/*
@@ -706,6 +716,7 @@ struct vm_area_struct {
 	 * slowpath.
 	 */
 	int vm_lock_seq;
+	/* Unstable RCU readers are allowed to read this. */
 	struct vma_lock *vm_lock;
 #endif
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5997,10 +5997,6 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 	if (!vma_start_read(vma))
 		goto inval;
-	/* Check since vm_start/vm_end might change before we lock the VMA */
-	if (unlikely(address < vma->vm_start || address >= vma->vm_end))
-		goto inval_end_read;
-
 	/* Check if the VMA got isolated after we found it */
 	if (vma->detached) {
 		vma_end_read(vma);
@@ -6008,6 +6004,16 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 		/* The area was replaced with another one */
 		goto retry;
 	}
+
+	/*
+	 * At this point, we have a stable reference to a VMA: The VMA is
+	 * locked and we know it hasn't already been isolated.
+	 * From here on, we can access the VMA without worrying about which
+	 * fields are accessible for RCU readers.
+	 */
+	/* Check since vm_start/vm_end might change before we lock the VMA */
+	if (unlikely(address < vma->vm_start || address >= vma->vm_end))
+		goto inval_end_read;
 
 	rcu_read_unlock();
 	return vma;