Commit b5641a5d authored by Linus Torvalds

mm: don't do validate_mm() unnecessarily and without mmap locking

This is an addition to commit ae80b404 ("mm: validate the mm before
dropping the mmap lock"), because it turns out there were two problems,
but lockdep just stopped complaining after finding the first one.

The do_vmi_align_munmap() function now drops the mmap lock after doing
the validate_mm() call, but it turns out that one of the callers then
immediately calls validate_mm() again.

That's both a bit silly, and now (again) happens without the mmap lock
held.

So just remove that validate_mm() call from the caller, but make sure to
not lose any coverage by doing that mm sanity checking in the error path
of do_vmi_align_munmap() too.
Reported-and-tested-by: kernel test robot <oliver.sang@intel.com>
Link: https://lore.kernel.org/lkml/ZKN6CdkKyxBShPHi@xsang-OptiPlex-9020/
Fixes: 408579cd ("mm: Update do_vmi_align_munmap() return semantics")
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 24be4d0b
@@ -2571,6 +2571,7 @@ do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	__mt_destroy(&mt_detach);
 start_split_failed:
 map_count_exceeded:
+	validate_mm(mm);
 	return error;
 }
 
@@ -3019,12 +3020,9 @@ int do_vma_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
 		  bool unlock)
 {
 	struct mm_struct *mm = vma->vm_mm;
-	int ret;
 
 	arch_unmap(mm, start, end);
-	ret = do_vmi_align_munmap(vmi, vma, mm, start, end, uf, unlock);
-	validate_mm(mm);
-	return ret;
+	return do_vmi_align_munmap(vmi, vma, mm, start, end, uf, unlock);
 }
 
 /*
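
The rule this change enforces is general enough to show outside the kernel. Below is a minimal userspace sketch, an illustration only and not kernel code: fake_mm, validate(), align_munmap() and vma_munmap() are hypothetical stand-ins for mm_struct, validate_mm(), do_vmi_align_munmap() and do_vma_munmap(). The point it demonstrates is the ordering the commit establishes: the debug check runs while the lock protecting the structure is still held (or in the error path before returning), and the caller no longer repeats the check after the callee may already have dropped the lock.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct fake_mm {
	pthread_mutex_t lock;	/* stand-in for the mmap lock */
	int vma_count;		/* stand-in for the mm's VMA tree */
};

/* Debug check; the caller must hold mm->lock, like validate_mm(). */
static void validate(struct fake_mm *mm)
{
	if (mm->vma_count < 0) {
		fprintf(stderr, "corrupted mm state\n");
		abort();
	}
}

/*
 * Plays the role of the fixed do_vmi_align_munmap(): the sanity check
 * runs while the lock is still held, and also in the error path, so
 * callers never need to repeat it after the lock may have been dropped.
 */
static int align_munmap(struct fake_mm *mm, int count, int unlock)
{
	if (count > mm->vma_count) {
		validate(mm);		/* error-path check, as in this commit */
		return -1;		/* lock is still held on error */
	}
	mm->vma_count -= count;
	validate(mm);			/* still under the lock */
	if (unlock)
		pthread_mutex_unlock(&mm->lock);
	return 0;
}

/*
 * Plays the role of the fixed do_vma_munmap(): no trailing validate()
 * here, because when unlock is true the lock is already gone by the
 * time the callee returns.
 */
static int vma_munmap(struct fake_mm *mm, int count, int unlock)
{
	return align_munmap(mm, count, unlock);
}

int main(void)
{
	struct fake_mm mm = { .vma_count = 4 };

	pthread_mutex_init(&mm.lock, NULL);
	pthread_mutex_lock(&mm.lock);
	if (vma_munmap(&mm, 2, 1))		/* asks the callee to unlock */
		pthread_mutex_unlock(&mm.lock);	/* only needed on the error path */
	printf("remaining vmas: %d\n", mm.vma_count);
	return 0;
}

Building with something like "cc -pthread sketch.c" and running it prints "remaining vmas: 2"; moving a validate() call into vma_munmap() after the align_munmap() call would reintroduce exactly the unlocked access this commit removes.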