Commit 7994e6f7 authored by Jan Kara, committed by Fengguang Wu

vfs: Move waiting for inode writeback from end_writeback() to evict_inode()

Currently, I_SYNC can never be set when evict_inode() (and thus
end_writeback()) is called, because the flusher thread holds an inode reference
while the inode is under writeback. As a result, inode_sync_wait() in those
places currently does nothing. However, that is going to change, and it unveils
problems with calling inode_sync_wait() from end_writeback(). Several filesystems
call end_writeback() after they have deleted the inode (btrfs, gfs2, ...), and
other filesystems (ext3, ext4, reiserfs, ...) can deadlock when waiting for I_SYNC
because they call end_writeback() from within a transaction.

To avoid these issues, we move inode_sync_wait() into evict(), just before the
->evict_inode() callback is invoked. That way we preserve the current property
that ->evict_inode() and writeback never run in parallel, and all filesystems
remain safe.
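
To make the deadlock concrete, here is one plausible ordering for a journalling
filesystem if end_writeback() kept waiting on I_SYNC. This timeline is an
editor's illustration inferred from the description above, not part of the patch:

/*
 * Hypothetical scenario (illustration only, not from the patch):
 *
 *   evicting task (ext3/ext4/reiserfs)      flusher thread
 *   ----------------------------------      --------------
 *   starts/holds a transaction handle
 *   calls ->evict_inode()
 *     calls end_writeback()
 *       inode_sync_wait() blocks,           writeback of the same inode is
 *       waiting for I_SYNC to clear         in flight (I_SYNC set) and needs
 *                                           the journal to make progress in
 *                                           order to finish, but journal
 *                                           progress can be blocked behind
 *                                           the handle the evicting task
 *                                           still holds
 *
 * Neither side can make progress. Moving the wait out of the transaction
 * context (into evict(), before ->evict_inode()) avoids this.
 */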
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
parent 4f8ad655
@@ -500,7 +500,6 @@ void end_writeback(struct inode *inode)
 	BUG_ON(!list_empty(&inode->i_data.private_list));
 	BUG_ON(!(inode->i_state & I_FREEING));
 	BUG_ON(inode->i_state & I_CLEAR);
-	inode_sync_wait(inode);
 	/* don't need i_lock here, no concurrent mods to i_state */
 	inode->i_state = I_FREEING | I_CLEAR;
 }
@@ -531,6 +530,8 @@ static void evict(struct inode *inode)
 	inode_sb_list_del(inode);
 
+	inode_sync_wait(inode);
+
 	if (op->evict_inode) {
 		op->evict_inode(inode);
 	} else {
...
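
For orientation, here is a condensed sketch of what evict() looks like once the
hunk above is applied. It reuses the identifiers visible in the diff; the s_op
lookup and the comments are the editor's additions, and everything the diff
elides (error paths, the body of the default branch, later teardown) is only
summarized in comments rather than reproduced:

/*
 * Condensed, illustrative sketch of the post-patch ordering in evict().
 * Not the full fs/inode.c function: steps not shown in the diff above
 * are only summarized in comments.
 */
static void evict(struct inode *inode)
{
	const struct super_operations *op = inode->i_sb->s_op;

	inode_sb_list_del(inode);

	/*
	 * I_FREEING is already set, so the flusher will not start new
	 * writeback on this inode; waiting here only lets writeback that
	 * is already in flight (I_SYNC set) finish, preserving the rule
	 * that ->evict_inode() and writeback never run in parallel.
	 */
	inode_sync_wait(inode);

	if (op->evict_inode) {
		op->evict_inode(inode);
	} else {
		/* default teardown path, elided in the diff above */
	}

	/* remaining cleanup and wake-up of waiters omitted here */
}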