Commit a1b92a3f authored by Muhammad Usama Anjum, committed by Andrew Morton

mm/userfaultfd: support WP on multiple VMAs

mwriteprotect_range() errors out if [start, end) doesn't fall in one VMA. 
We are facing a use case where multiple VMAs are present in one range of
interest.  For example, the following pseudocode reproduces the error
which we are trying to fix:

- Allocate memory of size 16 pages with PROT_NONE with mmap
- Register userfaultfd
- Change protection of the first half (pages 1 to 8) of the memory to
  PROT_READ | PROT_WRITE. This splits the memory area into two VMAs.
- Now UFFDIO_WRITEPROTECT_MODE_WP on the whole memory of 16 pages errors
  out.

This is a simple use case where the user may or may not know whether the
memory area has been divided into multiple VMAs; a minimal userspace sketch
of these steps is given below.
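
For reference, the steps above translate into roughly the following userspace
program.  This is an editorial sketch for illustration only (it is not part of
the patch) and assumes a kernel built with userfaultfd write-protect support;
it uses only the standard userfaultfd UAPI from <linux/userfaultfd.h>:

#include <fcntl.h>
#include <linux/userfaultfd.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	size_t len = 16 * page;

	/* 16 pages, PROT_NONE, a single anonymous VMA to begin with. */
	char *mem = mmap(NULL, len, PROT_NONE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (mem == MAP_FAILED)
		return 1;

	int uffd = syscall(SYS_userfaultfd, O_CLOEXEC | O_NONBLOCK);
	if (uffd < 0)
		return 1;

	struct uffdio_api api = { .api = UFFD_API };
	if (ioctl(uffd, UFFDIO_API, &api))
		return 1;

	/* Register the whole range in write-protect mode. */
	struct uffdio_register reg = {
		.range = { .start = (unsigned long)mem, .len = len },
		.mode  = UFFDIO_REGISTER_MODE_WP,
	};
	if (ioctl(uffd, UFFDIO_REGISTER, &reg))
		return 1;

	/* Change protection of the first half; this splits the area
	 * into two VMAs. */
	if (mprotect(mem, len / 2, PROT_READ | PROT_WRITE))
		return 1;

	/* Write-protect the whole 16-page range.  Without this patch the
	 * ioctl fails because the range no longer falls in a single VMA. */
	struct uffdio_writeprotect wp = {
		.range = { .start = (unsigned long)mem, .len = len },
		.mode  = UFFDIO_WRITEPROTECT_MODE_WP,
	};
	if (ioctl(uffd, UFFDIO_WRITEPROTECT, &wp))
		perror("UFFDIO_WRITEPROTECT");

	return 0;
}

With the change below, the final ioctl succeeds and write-protects both VMAs
covering the 16-page range.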

We need an implementation which doesn't disrupt the existing users.  So,
keeping things simple, stop iterating over the VMAs as soon as one of them
hasn't been registered in WP mode.  While at it, remove the unneeded error
check as well.

[akpm@linux-foundation.org: s/VM_WARN_ON_ONCE/VM_WARN_ONCE/ to fix build]
Link: https://lkml.kernel.org/r/20230217105558.832710-1-usama.anjum@collabora.com
Signed-off-by: Muhammad Usama Anjum <usama.anjum@collabora.com>
Acked-by: Peter Xu <peterx@redhat.com>
Acked-by: David Hildenbrand <david@redhat.com>
Reported-by: Paul Gofman <pgofman@codeweavers.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent 700d2e9a
@@ -717,6 +717,8 @@ long uffd_wp_range(struct mm_struct *dst_mm, struct vm_area_struct *dst_vma,
 	struct mmu_gather tlb;
 	long ret;
 
+	VM_WARN_ONCE(start < dst_vma->vm_start || start + len > dst_vma->vm_end,
+			"The address range exceeds VMA boundary.\n");
 	if (enable_wp)
 		mm_cp_flags = MM_CP_UFFD_WP;
 	else
@@ -741,9 +743,12 @@ int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
 			unsigned long len, bool enable_wp,
 			atomic_t *mmap_changing)
 {
+	unsigned long end = start + len;
+	unsigned long _start, _end;
 	struct vm_area_struct *dst_vma;
 	unsigned long page_mask;
 	long err;
+	VMA_ITERATOR(vmi, dst_mm, start);
 
 	/*
 	 * Sanitize the command parameters:
@@ -766,28 +771,30 @@ int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
 		goto out_unlock;
 
 	err = -ENOENT;
-	dst_vma = find_dst_vma(dst_mm, start, len);
+	for_each_vma_range(vmi, dst_vma, end) {
 
-	if (!dst_vma)
-		goto out_unlock;
-	if (!userfaultfd_wp(dst_vma))
-		goto out_unlock;
-	if (!vma_can_userfault(dst_vma, dst_vma->vm_flags))
-		goto out_unlock;
+		if (!userfaultfd_wp(dst_vma)) {
+			err = -ENOENT;
+			break;
+		}
 
-	if (is_vm_hugetlb_page(dst_vma)) {
-		err = -EINVAL;
-		page_mask = vma_kernel_pagesize(dst_vma) - 1;
-		if ((start & page_mask) || (len & page_mask))
-			goto out_unlock;
-	}
+		if (is_vm_hugetlb_page(dst_vma)) {
+			err = -EINVAL;
+			page_mask = vma_kernel_pagesize(dst_vma) - 1;
+			if ((start & page_mask) || (len & page_mask))
+				break;
+		}
 
-	err = uffd_wp_range(dst_mm, dst_vma, start, len, enable_wp);
+		_start = max(dst_vma->vm_start, start);
+		_end = min(dst_vma->vm_end, end);
 
-	/* Return 0 on success, <0 on failures */
-	if (err > 0)
-		err = 0;
+		err = uffd_wp_range(dst_mm, dst_vma, _start, _end - _start, enable_wp);
 
+		/* Return 0 on success, <0 on failures */
+		if (err < 0)
+			break;
+		err = 0;
+	}
 out_unlock:
 	mmap_read_unlock(dst_mm);
 	return err;