Commit 5446f21e authored by Andrew Morton, committed by Linus Torvalds

[PATCH] fix use-after-free bug in move_vma()

move_vma() calls do_munmap() and then uses the memory at *new_vma.

But when starting X11 it just happens that the memory which do_munmap()
unmapped had the same start address as the range at *new_vma.  So new_vma
is freed by do_munmap().

This was never noticed before because (vm_flags & VM_LOCKED) evaluates
false when vm_flags is 0x5a5a5a5a.  But I just changed that to 0x6b6b6b6b
and boom - we call make_pages_present() with start == end == 0x6b6b6b6b and
it goes BUG.

So I think the right fix here is for move_vma() to not inspect the values
of any vma's after it has called do_munmap().

The patch does that, for `new_vma'.

The local variable `vma' is also being used after the call to do_munmap(),
and this may also be a bug.  Proving that this is not so, and adding a
comment to explain why, is hereby added to Hugh's todo list ;)
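The pattern the patch applies — copy every field you will need out of the structure before the call that may free it, then use only the saved copies — can be sketched in user space as follows. This is a minimal illustration, not kernel code: `struct region`, `release_region()`, `REGION_LOCKED` and the 4096-byte page size are hypothetical stand-ins for the vma, do_munmap(), VM_LOCKED and PAGE_SIZE.

```c
/* Minimal user-space sketch of the save-before-free pattern from the
 * patch.  All names here are stand-ins, not kernel APIs. */
#include <assert.h>
#include <stdlib.h>
#include <string.h>

struct region {
	unsigned long start;
	unsigned long end;
	unsigned long flags;
};

#define REGION_LOCKED 0x1UL

/* Stand-in for do_munmap(): frees the region passed to it. */
static void release_region(struct region *r)
{
	memset(r, 0x6b, sizeof(*r));	/* poison, like slab debugging */
	free(r);
}

/* Returns the number of 4096-byte pages that would need faulting in. */
static unsigned long move_region(struct region *r)
{
	/* Save everything we need BEFORE release_region() runs. */
	unsigned long must_fault_in = r->flags & REGION_LOCKED;
	unsigned long fault_in_start = r->start;
	unsigned long fault_in_end = r->end;

	release_region(r);	/* r is now invalid; never touch *r again */

	/* Use only the saved copies. */
	if (must_fault_in)
		return (fault_in_end - fault_in_start) / 4096;
	return 0;
}
```

Dereferencing `r` after `release_region()` would read poisoned memory — exactly the 0x6b6b6b6b failure described above; hoisting the reads before the free is the whole fix.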
parent 985babe8
@@ -227,6 +227,10 @@ static unsigned long move_vma(struct vm_area_struct * vma,
 	}
 	if (!move_page_tables(vma, new_addr, addr, old_len)) {
+		unsigned long must_fault_in;
+		unsigned long fault_in_start;
+		unsigned long fault_in_end;
+
 		if (allocated_vma) {
 			*new_vma = *vma;
 			INIT_LIST_HEAD(&new_vma->shared);
@@ -251,19 +255,25 @@ static unsigned long move_vma(struct vm_area_struct * vma,
 		} else
 			vma = NULL;		/* nothing more to do */
+		must_fault_in = new_vma->vm_flags & VM_LOCKED;
+		fault_in_start = new_vma->vm_start;
+		fault_in_end = new_vma->vm_end;
 		do_munmap(current->mm, addr, old_len);
+		/* new_vma could have been invalidated by do_munmap */
 		/* Restore VM_ACCOUNT if one or two pieces of vma left */
 		if (vma) {
 			vma->vm_flags |= VM_ACCOUNT;
 			if (split)
 				vma->vm_next->vm_flags |= VM_ACCOUNT;
 		}
 		current->mm->total_vm += new_len >> PAGE_SHIFT;
-		if (new_vma->vm_flags & VM_LOCKED) {
+		if (must_fault_in) {
 			current->mm->locked_vm += new_len >> PAGE_SHIFT;
-			make_pages_present(new_vma->vm_start,
-					new_vma->vm_end);
+			make_pages_present(fault_in_start, fault_in_end);
 		}
 		return new_addr;
 	}