Commit 16a10019 authored by Hugh Dickins, committed by Linus Torvalds

[PATCH] holepunch: fix disconnected pages after second truncate

shmem_truncate_range makes its own call to truncate_inode_pages_range, to free
any pages racily instantiated while it was in progress: a SHMEM_PAGEIN flag is
set when this might have happened.  But holepunching gets no chance to clear
that flag at the start of vmtruncate_range, so the flag is almost always set
(unless a truncate came just before), and holepunch almost always ends up doing
this second truncate_inode_pages_range.
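
For orientation, a minimal sketch of the logic described above.  This is not
the real mm/shmem.c and does not build on its own: the locking, the swap-entry
walk, and the helper name shmem_truncate_tail_sketch are simplifications.  It
only shows the shape of the SHMEM_PAGEIN-gated second truncate:

	/* Non-buildable orientation sketch, not the actual kernel code. */
	static void shmem_truncate_tail_sketch(struct inode *inode,
					       loff_t start, loff_t end)
	{
		struct shmem_inode_info *info = SHMEM_I(inode);

		/* ... main walk freeing the inode's swap entries and pages ... */

		if (info->flags & SHMEM_PAGEIN) {
			/*
			 * Pages may have been instantiated while the walk
			 * above was in progress: truncate them again.  For
			 * punch_hole this flag is effectively always set,
			 * since nothing cleared it at the start of
			 * vmtruncate_range.
			 */
			truncate_inode_pages_range(inode->i_mapping, start, end);
		}
	}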

shmem holepunch has unlikely swap<->file races hereabouts whatever we do
(without a fuller rework than is fit for this release): I was going to skip
the second truncate in the punch_hole case, but Miklos points out that would
make holepunch correctness more vulnerable to swapoff.  So keep the second
truncate, but follow it with an unmap_mapping_range to eliminate the
disconnected pages (freed from pagecache while still mapped in userspace) that
it might have left behind.
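
As a concrete illustration of the path being fixed, a hedged userspace sketch
(the file path and sizes are illustrative, not from the patch): it maps a
tmpfs file, dirties its pages, then punches a hole with madvise(MADV_REMOVE),
which in kernels of this era reached shmem_truncate_range via vmtruncate_range
while the pages were still mapped:

	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		const size_t len = 16 * 4096;
		char *map;
		int fd;

		fd = open("/dev/shm/holepunch-test", O_RDWR | O_CREAT, 0600);
		if (fd < 0 || ftruncate(fd, len) < 0) {
			perror("setup");
			return 1;
		}

		map = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
		if (map == MAP_FAILED) {
			perror("mmap");
			return 1;
		}

		memset(map, 0xaa, len);		/* instantiate the pages */

		/* Punch a hole over the middle pages while still mapped. */
		if (madvise(map + 4096, 8 * 4096, MADV_REMOVE) < 0)
			perror("madvise(MADV_REMOVE)");

		munmap(map, len);
		close(fd);
		return 0;
	}

Without the unmap_mapping_range this patch adds, a fault racing with that hole
punch could leave a page mapped in userspace but already freed from pagecache.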
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Cc: Miklos Szeredi <mszeredi@suse.cz>
Cc: Badari Pulavarty <pbadari@us.ibm.com>
Cc: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 1ae70006
@@ -674,8 +674,16 @@ static void shmem_truncate_range(struct inode *inode, loff_t start, loff_t end)
 		 * generic_delete_inode did it, before we lowered next_index.
 		 * Also, though shmem_getpage checks i_size before adding to
 		 * cache, no recheck after: so fix the narrow window there too.
+		 *
+		 * Recalling truncate_inode_pages_range and unmap_mapping_range
+		 * every time for punch_hole (which never got a chance to clear
+		 * SHMEM_PAGEIN at the start of vmtruncate_range) is expensive,
+		 * yet hardly ever necessary: try to optimize them out later.
 		 */
 		truncate_inode_pages_range(inode->i_mapping, start, end);
+		if (punch_hole)
+			unmap_mapping_range(inode->i_mapping, start,
+						end - start, 1);
 	}
 
 	spin_lock(&info->lock);
...
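
For reference, the call the patch adds uses byte offsets, with the prototype
declared in include/linux/mm.h of this era:

	void unmap_mapping_range(struct address_space *mapping,
				 loff_t const holebegin, loff_t const holelen,
				 int even_cows);

Here holebegin = start and holelen = end - start cover exactly the punched
range, and even_cows = 1 also zaps private COW copies, so no userspace pte is
left pointing at a page just freed from the pagecache.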