Commit 9760c0f9 authored by Kirill A. Shutemov, committed by Luis Henriques

mm: avoid setting up anonymous pages into file mapping

commit 6b7339f4 upstream.

Reading the page fault handler code I've noticed that under the right
circumstances the kernel would map anonymous pages into a file mapping: if
the VMA doesn't have vm_ops->fault() and the VMA wasn't fully populated
on ->mmap(), the kernel would handle a page fault to a not-populated pte with
do_anonymous_page().
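
The setup is easiest to picture with a hypothetical character driver whose
->mmap() populates only part of the range and installs no vm_ops at all.
Everything below (the fault_demo device and the demo_* names) is an
illustrative sketch of such a driver, not code from this commit:

/* Hypothetical driver reproducing the setup described above: ->mmap()
 * populates only the first page of the range and installs no vm_ops
 * (hence no ->fault), so later faults in the rest of the mapping hit
 * a pte_none() entry that the driver cannot service. */
#include <linux/module.h>
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/gfp.h>
#include <linux/miscdevice.h>

static struct page *demo_page;

static int demo_mmap(struct file *file, struct vm_area_struct *vma)
{
	/* Map a single kernel page at the start of the VMA; the tail of
	 * the requested range stays unpopulated and vma->vm_ops stays NULL. */
	return vm_insert_page(vma, vma->vm_start, demo_page);
}

static const struct file_operations demo_fops = {
	.owner = THIS_MODULE,
	.mmap  = demo_mmap,
};

static struct miscdevice demo_dev = {
	.minor = MISC_DYNAMIC_MINOR,
	.name  = "fault_demo",
	.fops  = &demo_fops,
};

static int __init demo_init(void)
{
	int ret;

	demo_page = alloc_page(GFP_KERNEL | __GFP_ZERO);
	if (!demo_page)
		return -ENOMEM;

	ret = misc_register(&demo_dev);
	if (ret)
		__free_page(demo_page);
	return ret;
}

static void __exit demo_exit(void)
{
	misc_deregister(&demo_dev);
	__free_page(demo_page);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");

On a pre-fix kernel, touching the unpopulated tail of a MAP_SHARED mapping of
such a device silently gets a private anonymous page; after this change it
raises SIGBUS, as the userspace sketch further down shows.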

Let's change the page fault handler to use do_anonymous_page() only on an
anonymous VMA (->vm_ops == NULL) and make sure that the VMA is not
shared.
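
Condensed, the dispatch for a pte_none() fault after this change looks like
the sketch below; this is a paraphrase of the hunks further down, not the
literal mm/memory.c code:

	if (vma->vm_ops) {
		/* File (or other special) mapping: it has to provide ->fault. */
		if (!vma->vm_ops->fault)
			return VM_FAULT_SIGBUS;	/* check lives in do_linear_fault() */
		return do_linear_fault(mm, vma, address, pte, pmd, flags, entry);
	}
	/* No vm_ops: only a non-shared VMA may get anonymous pages. */
	if (vma->vm_flags & VM_SHARED)
		return VM_FAULT_SIGBUS;		/* check lives in do_anonymous_page() */
	return do_anonymous_page(mm, vma, address, pte, pmd, flags);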

For file mappings without vm_ops->fault() or a shared VMA without vm_ops,
a page fault on a pte_none() entry would lead to SIGBUS.
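
From userspace the difference shows up as a SIGBUS where older kernels
silently handed out a zero-filled anonymous page. A minimal test against the
hypothetical /dev/fault_demo device sketched above (again illustrative only):

/* The driver populated only the first page at mmap() time and provides
 * no ->fault handler, so the second page is a pte_none() entry. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long pgsz = sysconf(_SC_PAGESIZE);
	int fd = open("/dev/fault_demo", O_RDWR);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	char *p = mmap(NULL, 2 * pgsz, PROT_READ | PROT_WRITE,
		       MAP_SHARED, fd, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	p[0] = 1;	/* populated at mmap() time: works before and after */
	p[pgsz] = 1;	/* pte_none: anonymous page before the fix, SIGBUS after */

	puts("no SIGBUS: this kernel still maps anonymous pages here");
	return 0;
}
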
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Oleg Nesterov <oleg@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Willy Tarreau <w@1wt.eu>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
[ luis: backported to 3.16: used Kirill's backport to 3.18 ]
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
parent 9bc34abf
mm/memory.c
@@ -2637,6 +2637,10 @@ static int do_anonymous_page(struct mm_struct *mm, struct vm_area_struct *vma,
 
 	pte_unmap(page_table);
 
+	/* File mapping without ->vm_ops ? */
+	if (vma->vm_flags & VM_SHARED)
+		return VM_FAULT_SIGBUS;
+
 	/* Check if we need to add a guard page to the stack */
 	if (check_stack_guard_page(vma, address) < 0)
 		return VM_FAULT_SIGSEGV;
@@ -3031,6 +3035,9 @@ static int do_linear_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 			- vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;
 
 	pte_unmap(page_table);
+	/* The VMA was not fully populated on mmap() or missing VM_DONTEXPAND */
+	if (!vma->vm_ops->fault)
+		return VM_FAULT_SIGBUS;
 	if (!(flags & FAULT_FLAG_WRITE))
 		return do_read_fault(mm, vma, address, pmd, pgoff, flags,
 				orig_pte);
@@ -3191,11 +3198,10 @@ static int handle_pte_fault(struct mm_struct *mm,
 	entry = *pte;
 	if (!pte_present(entry)) {
 		if (pte_none(entry)) {
-			if (vma->vm_ops) {
-				if (likely(vma->vm_ops->fault))
-					return do_linear_fault(mm, vma, address,
-						pte, pmd, flags, entry);
-			}
+			if (vma->vm_ops)
+				return do_linear_fault(mm, vma, address,
+					pte, pmd, flags, entry);
+
 			return do_anonymous_page(mm, vma, address,
 						 pte, pmd, flags);
 		}