Commit 1acf2e04 authored by Davidlohr Bueso, committed by Linus Torvalds

mm/nommu: share the i_mmap_rwsem

Shrinking/truncate logic can call nommu_shrink_inode_mappings() to verify
that no shared mappings of the inode in question would be broken by the
size change (i.e., fall within the dead zone).  AFAICT the only user is
ramfs, which uses it to handle the size-change attribute.

Pretty much a no-brainer to share the lock.
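Boiled down, the read-side pattern this change adopts looks like the sketch
below (illustration only, not the actual kernel code; any_shared_in_dead_zone()
is a hypothetical helper).  The shrink path only walks the i_mmap interval
tree, it never modifies it, so taking i_mmap_rwsem in read mode suffices and
lets it run concurrently with other readers:

	/* Hypothetical helper for illustration; not part of this commit. */
	static int any_shared_in_dead_zone(struct address_space *mapping,
					   pgoff_t low, pgoff_t high)
	{
		struct vm_area_struct *vma;
		int ret = 0;

		i_mmap_lock_read(mapping);	/* lookup only, tree is not modified */
		vma_interval_tree_foreach(vma, &mapping->i_mmap, low, high) {
			if (vma->vm_flags & VM_SHARED) {
				/* a shared mapping overlaps the dead zone */
				ret = -ETXTBSY;
				break;
			}
		}
		i_mmap_unlock_read(mapping);
		return ret;
	}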
Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
Acked-by: "Kirill A. Shutemov" <kirill@shutemov.name>
Acked-by: Hugh Dickins <hughd@google.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent d28eb9c8
...@@ -2094,14 +2094,14 @@ int nommu_shrink_inode_mappings(struct inode *inode, size_t size, ...@@ -2094,14 +2094,14 @@ int nommu_shrink_inode_mappings(struct inode *inode, size_t size,
high = (size + PAGE_SIZE - 1) >> PAGE_SHIFT; high = (size + PAGE_SIZE - 1) >> PAGE_SHIFT;
down_write(&nommu_region_sem); down_write(&nommu_region_sem);
i_mmap_lock_write(inode->i_mapping); i_mmap_lock_read(inode->i_mapping);
/* search for VMAs that fall within the dead zone */ /* search for VMAs that fall within the dead zone */
vma_interval_tree_foreach(vma, &inode->i_mapping->i_mmap, low, high) { vma_interval_tree_foreach(vma, &inode->i_mapping->i_mmap, low, high) {
/* found one - only interested if it's shared out of the page /* found one - only interested if it's shared out of the page
* cache */ * cache */
if (vma->vm_flags & VM_SHARED) { if (vma->vm_flags & VM_SHARED) {
i_mmap_unlock_write(inode->i_mapping); i_mmap_unlock_read(inode->i_mapping);
up_write(&nommu_region_sem); up_write(&nommu_region_sem);
return -ETXTBSY; /* not quite true, but near enough */ return -ETXTBSY; /* not quite true, but near enough */
} }
...@@ -2113,8 +2113,7 @@ int nommu_shrink_inode_mappings(struct inode *inode, size_t size, ...@@ -2113,8 +2113,7 @@ int nommu_shrink_inode_mappings(struct inode *inode, size_t size,
* we don't check for any regions that start beyond the EOF as there * we don't check for any regions that start beyond the EOF as there
* shouldn't be any * shouldn't be any
*/ */
vma_interval_tree_foreach(vma, &inode->i_mapping->i_mmap, vma_interval_tree_foreach(vma, &inode->i_mapping->i_mmap, 0, ULONG_MAX) {
0, ULONG_MAX) {
if (!(vma->vm_flags & VM_SHARED)) if (!(vma->vm_flags & VM_SHARED))
continue; continue;
...@@ -2129,7 +2128,7 @@ int nommu_shrink_inode_mappings(struct inode *inode, size_t size, ...@@ -2129,7 +2128,7 @@ int nommu_shrink_inode_mappings(struct inode *inode, size_t size,
} }
} }
i_mmap_unlock_write(inode->i_mapping); i_mmap_unlock_read(inode->i_mapping);
up_write(&nommu_region_sem); up_write(&nommu_region_sem);
return 0; return 0;
} }
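For contrast, writers still serialize: any path that modifies the i_mmap
interval tree (e.g. mmap() inserting a shared VMA) must keep taking the rwsem
in write mode, which is what makes the read-mode walk above safe.  A sketch of
that side, again for illustration only:

	/* Writer side (sketch): exclusive while the tree is modified. */
	i_mmap_lock_write(mapping);
	vma_interval_tree_insert(vma, &mapping->i_mmap);
	i_mmap_unlock_write(mapping);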