Commit e25ff835 authored by Dave Jiang, committed by Darrick J. Wong

xfs: Close race between direct IO and xfs_break_layouts()

This patch is the xfs equivalent of Ross's fix for ext4.

If the refcount of a page is lowered between the time that it is returned
by dax_layout_busy_page() and when the refcount is again checked in
xfs_break_layouts() => ___wait_var_event(), the waiting function
xfs_wait_dax_page() will never be called.  This means that
xfs_break_layouts() will still have 'retry' set to false, so we'll stop
looping and never check the refcount of other pages in this inode.

Instead, always continue looping as long as dax_layout_busy_page() gives us
a page which it found with an elevated refcount.
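
To see why unconditionally setting *retry closes the race, here is a minimal sketch of the caller-side loop; the helper name break_dax_layouts_loop() is illustrative only, and the real xfs_break_layouts() also deals with leased layouts and lock modes, so this is a simplified sketch rather than the actual code:

/*
 * Minimal sketch (illustrative, not the real xfs_break_layouts()):
 * keep calling xfs_break_dax_layouts() until it stops reporting a
 * busy page.
 */
static int break_dax_layouts_loop(struct inode *inode)
{
	bool	retry;
	int	error;

	do {
		retry = false;
		/*
		 * With this patch, xfs_break_dax_layouts() sets *retry as
		 * soon as dax_layout_busy_page() returns a page, even if
		 * that page's refcount drops before ___wait_var_event()
		 * re-evaluates its condition, so the loop keeps scanning
		 * until no busy page remains in the inode.
		 */
		error = xfs_break_dax_layouts(inode, &retry);
	} while (error == 0 && retry);

	return error;
}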
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
parent 13942aa9
@@ -721,12 +721,10 @@ xfs_file_write_iter(
 
 static void
 xfs_wait_dax_page(
-	struct inode		*inode,
-	bool			*did_unlock)
+	struct inode		*inode)
 {
 	struct xfs_inode	*ip = XFS_I(inode);
 
-	*did_unlock = true;
 	xfs_iunlock(ip, XFS_MMAPLOCK_EXCL);
 	schedule();
 	xfs_ilock(ip, XFS_MMAPLOCK_EXCL);
@@ -735,7 +733,7 @@ xfs_wait_dax_page(
 static int
 xfs_break_dax_layouts(
 	struct inode		*inode,
-	bool			*did_unlock)
+	bool			*retry)
 {
 	struct page		*page;
 
@@ -745,9 +743,10 @@ xfs_break_dax_layouts(
 	if (!page)
 		return 0;
 
+	*retry = true;
 	return ___wait_var_event(&page->_refcount,
 			atomic_read(&page->_refcount) == 1, TASK_INTERRUPTIBLE,
-			0, 0, xfs_wait_dax_page(inode, did_unlock));
+			0, 0, xfs_wait_dax_page(inode));
 }
 
 int