Commit cedabed4 authored by OGAWA Hirofumi, committed by Linus Torvalds

vfs: Fix vmtruncate() regression

If __block_prepare_write() fails in block_write_begin(), the allocated
blocks can be outside of ->i_size.

But the new truncate_pagecache() in vmtruncate() does nothing if
new < old.  This means the above usage no longer works.

So, this patch fixes it by removing the "new < old" check.  More
cleanup/changes will be needed later, but since we are in -rc and the
truncate rework is in progress, this fixes it with a minimal change.
Acked-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent e80c14e1
@@ -522,22 +522,20 @@ EXPORT_SYMBOL_GPL(invalidate_inode_pages2);
  */
 void truncate_pagecache(struct inode *inode, loff_t old, loff_t new)
 {
-	if (new < old) {
-		struct address_space *mapping = inode->i_mapping;
-
-		/*
-		 * unmap_mapping_range is called twice, first simply for
-		 * efficiency so that truncate_inode_pages does fewer
-		 * single-page unmaps. However after this first call, and
-		 * before truncate_inode_pages finishes, it is possible for
-		 * private pages to be COWed, which remain after
-		 * truncate_inode_pages finishes, hence the second
-		 * unmap_mapping_range call must be made for correctness.
-		 */
-		unmap_mapping_range(mapping, new + PAGE_SIZE - 1, 0, 1);
-		truncate_inode_pages(mapping, new);
-		unmap_mapping_range(mapping, new + PAGE_SIZE - 1, 0, 1);
-	}
+	struct address_space *mapping = inode->i_mapping;
+
+	/*
+	 * unmap_mapping_range is called twice, first simply for
+	 * efficiency so that truncate_inode_pages does fewer
+	 * single-page unmaps. However after this first call, and
+	 * before truncate_inode_pages finishes, it is possible for
+	 * private pages to be COWed, which remain after
+	 * truncate_inode_pages finishes, hence the second
+	 * unmap_mapping_range call must be made for correctness.
+	 */
+	unmap_mapping_range(mapping, new + PAGE_SIZE - 1, 0, 1);
+	truncate_inode_pages(mapping, new);
+	unmap_mapping_range(mapping, new + PAGE_SIZE - 1, 0, 1);
 }
 EXPORT_SYMBOL(truncate_pagecache);
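For context, a minimal sketch of the caller pattern the commit message describes. This is not part of the patch; the helper name is hypothetical and the body is loosely modelled on the block_write_begin() error path of this era. The point is that when block allocation fails partway through write_begin, blocks may already have been instantiated past ->i_size, and the error path trims them with vmtruncate() even though i_size itself never changed, i.e. a "new == old" truncate, which the removed "new < old" check in truncate_pagecache() would have silently skipped.

#include <linux/fs.h>
#include <linux/mm.h>

/*
 * Hypothetical helper, shown only to illustrate why a truncate with
 * new == old must still drop pages and blocks instantiated past EOF.
 */
static void example_write_begin_failed(struct inode *inode, loff_t pos,
				       unsigned int len)
{
	/*
	 * i_size was never updated by the failed __block_prepare_write(),
	 * so this truncates back to the current size (new == old) purely
	 * to free the blocks allocated beyond i_size.
	 */
	if (pos + len > inode->i_size)
		vmtruncate(inode, inode->i_size);
}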