Commit 471d557a authored by Filipe Manana, committed by David Sterba

Btrfs: fix loss of prealloc extents past i_size after fsync log replay

Currently if we allocate extents beyond an inode's i_size (through the
fallocate system call) and then fsync the file, we log the extents but
after a power failure we replay them and then immediately drop them.
This behaviour has existed since about 2009, commit c71bf099 ("Btrfs:
Avoid orphan inodes cleanup while replaying log"), because it marks
the inode as an orphan instead of dropping any extents beyond i_size
before replaying logged extents, so after the log replay, and while
the mount operation is still ongoing, we find the inode marked as an
orphan and then perform a truncation (drop extents beyond the inode's
i_size). Because the processing of orphan inodes is still done
right after replaying the log and before the mount operation finishes,
the intention of that commit does not make any sense (at least as
of today). However, reverting that behaviour is not enough, because
we cannot simply discard all extents beyond i_size and then replay
logged extents, because we risk dropping extents beyond i_size created
in past transactions, for example:

  add prealloc extent beyond i_size
  fsync - clears the flag BTRFS_INODE_NEEDS_FULL_SYNC from the inode
  transaction commit
  add another prealloc extent beyond i_size
  fsync - triggers the fast fsync path
  power failure

In that scenario, we would drop the first extent and then replay the
second one. To fix this, just make sure that all prealloc extents
beyond i_size are logged, and if we find too many (which is far from
a common case), fall back to a full transaction commit (like we do when
logging regular extents in the fast fsync path).

Trivial reproducer:

 $ mkfs.btrfs -f /dev/sdb
 $ mount /dev/sdb /mnt
 $ xfs_io -f -c "pwrite -S 0xab 0 256K" /mnt/foo
 $ sync
 $ xfs_io -c "falloc -k 256K 1M" /mnt/foo
 $ xfs_io -c "fsync" /mnt/foo
 <power failure>

 # mount to replay log
 $ mount /dev/sdb /mnt
 # at this point the file only has one extent, at offset 0, size 256K
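
For reference, a sketch of the two-transaction scenario described above,
using the same xfs_io tooling as the trivial reproducer (the 1280K offset
and the use of "sync" to force a transaction commit are illustrative
choices, not from the original report):

 $ mkfs.btrfs -f /dev/sdb
 $ mount /dev/sdb /mnt
 $ xfs_io -f -c "pwrite -S 0xab 0 256K" /mnt/foo
 $ xfs_io -c "falloc -k 256K 1M" /mnt/foo    # first prealloc beyond i_size
 $ xfs_io -c "fsync" /mnt/foo                # clears BTRFS_INODE_NEEDS_FULL_SYNC
 $ sync                                      # transaction commit
 $ xfs_io -c "falloc -k 1280K 1M" /mnt/foo   # second prealloc beyond i_size
 $ xfs_io -c "fsync" /mnt/foo                # fast fsync path
 <power failure>

With only the replay-time truncation, the first prealloc extent (committed
in the earlier transaction) would be dropped and only the second one
replayed, which is why the logging side also adds prealloc extents past
i_size to the log.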

A test case for fstests follows soon, covering multiple scenarios that
involve adding prealloc extents with previous shrinking truncates and
without such truncates.

Fixes: c71bf099 ("Btrfs: Avoid orphan inodes cleanup while replaying log")
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
parent af722733
@@ -2458,13 +2458,41 @@ static int replay_one_buffer(struct btrfs_root *log, struct extent_buffer *eb,
                         if (ret)
                                 break;
 
-                        /* for regular files, make sure corresponding
-                         * orphan item exist. extents past the new EOF
-                         * will be truncated later by orphan cleanup.
+                        /*
+                         * Before replaying extents, truncate the inode to its
+                         * size. We need to do it now and not after log replay
+                         * because before an fsync we can have prealloc extents
+                         * added beyond the inode's i_size. If we did it after,
+                         * through orphan cleanup for example, we would drop
+                         * those prealloc extents just after replaying them.
                          */
                         if (S_ISREG(mode)) {
-                                ret = insert_orphan_item(wc->trans, root,
-                                                         key.objectid);
+                                struct inode *inode;
+                                u64 from;
+
+                                inode = read_one_inode(root, key.objectid);
+                                if (!inode) {
+                                        ret = -EIO;
+                                        break;
+                                }
+                                from = ALIGN(i_size_read(inode),
+                                             root->fs_info->sectorsize);
+                                ret = btrfs_drop_extents(wc->trans, root, inode,
+                                                         from, (u64)-1, 1);
+                                /*
+                                 * If the nlink count is zero here, the iput
+                                 * will free the inode.  We bump it to make
+                                 * sure it doesn't get freed until the link
+                                 * count fixup is done.
+                                 */
+                                if (!ret) {
+                                        if (inode->i_nlink == 0)
+                                                inc_nlink(inode);
+                                        /* Update link count and nbytes. */
+                                        ret = btrfs_update_inode(wc->trans,
+                                                                 root, inode);
+                                }
+                                iput(inode);
                                 if (ret)
                                         break;
                         }
@@ -4359,6 +4387,31 @@ static int btrfs_log_changed_extents(struct btrfs_trans_handle *trans,
                 num++;
         }
 
+        /*
+         * Add all prealloc extents beyond the inode's i_size to make sure we
+         * don't lose them after doing a fast fsync and replaying the log.
+         */
+        if (inode->flags & BTRFS_INODE_PREALLOC) {
+                struct rb_node *node;
+
+                for (node = rb_last(&tree->map); node; node = rb_prev(node)) {
+                        em = rb_entry(node, struct extent_map, rb_node);
+                        if (em->start < i_size_read(&inode->vfs_inode))
+                                break;
+                        if (!list_empty(&em->list))
+                                continue;
+                        /* Same as above loop. */
+                        if (++num > 32768) {
+                                list_del_init(&tree->modified_extents);
+                                ret = -EFBIG;
+                                goto process;
+                        }
+                        refcount_inc(&em->refs);
+                        set_bit(EXTENT_FLAG_LOGGING, &em->flags);
+                        list_add_tail(&em->list, &extents);
+                }
+        }
+
         list_sort(NULL, &extents, extent_cmp);
         btrfs_get_logged_extents(inode, logged_list, logged_start, logged_end);
         /*