Commit 300e4f8a authored by Josef Bacik

Btrfs: put the block group cache after we commit the super

In moving some enospc stuff around I noticed that when we unmount we often
evict the free space cache inodes before we do our last commit.  This isn't
bad, but it means we constantly have to re-read the inodes.  So instead,
don't evict the cache until after we do our last commit; this makes things a
little less crappy and lets a future enospc change work properly.  Thanks,
Signed-off-by: Josef Bacik <josef@redhat.com>
parent 4a338542
fs/btrfs/disk-io.c
@@ -2543,8 +2543,6 @@ int close_ctree(struct btrfs_root *root)
 	/* clear out the rbtree of defraggable inodes */
 	btrfs_run_defrag_inodes(root->fs_info);
 
-	btrfs_put_block_group_cache(fs_info);
-
 	/*
 	 * Here come 2 situations when btrfs is broken to flip readonly:
 	 *
@@ -2570,6 +2568,8 @@ int close_ctree(struct btrfs_root *root)
 			printk(KERN_ERR "btrfs: commit super ret %d\n", ret);
 	}
 
+	btrfs_put_block_group_cache(fs_info);
+
 	kthread_stop(root->fs_info->transaction_kthread);
 	kthread_stop(root->fs_info->cleaner_kthread);
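
Taken together, the two hunks above simply move the btrfs_put_block_group_cache() call from before the final btrfs_commit_super() to after it. Below is a condensed sketch of the resulting close_ctree() teardown order, reconstructed from the hunks and the commit message rather than copied from the tree, so the surrounding context is approximate and most steps are elided.

/*
 * Condensed sketch of close_ctree() after this patch (fs/btrfs/disk-io.c).
 * Only the ordering relevant to this change is shown.
 */
int close_ctree(struct btrfs_root *root)
{
	struct btrfs_fs_info *fs_info = root->fs_info;
	int ret;

	/* ... mark the fs as closing, flush work, run the defrag rbtree ... */

	if (!(fs_info->sb->s_flags & MS_RDONLY)) {
		/* the final commit may still read and write the space cache inodes */
		ret = btrfs_commit_super(root);
		if (ret)
			printk(KERN_ERR "btrfs: commit super ret %d\n", ret);
	}

	/*
	 * Only now evict the cached free space inodes; dropping them before
	 * the commit above would just force them to be re-read during it.
	 */
	btrfs_put_block_group_cache(fs_info);

	kthread_stop(root->fs_info->transaction_kthread);
	kthread_stop(root->fs_info->cleaner_kthread);

	/* ... free block groups, drop tree roots, close block devices ... */
	return 0;
}
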
fs/btrfs/free-space-cache.c
@@ -105,7 +105,7 @@ struct inode *lookup_free_space_inode(struct btrfs_root *root,
 		block_group->disk_cache_state = BTRFS_DC_CLEAR;
 	}
 
-	if (!btrfs_fs_closing(root->fs_info)) {
+	if (!block_group->iref) {
 		block_group->inode = igrab(inode);
 		block_group->iref = 1;
 	}
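
The free-space-cache.c hunk is the other half of the reordering. Since the block group cache is now only put after the final commit, lookup_free_space_inode() can run while btrfs_fs_closing() already returns true, so gating the cached reference on "not closing" no longer fits; the likely intent of the new test is simply to avoid taking a second reference when one is already held, since the reference taken here is still dropped later by btrfs_put_block_group_cache(). A short sketch of the changed block, with comments reflecting that reading (the surrounding locking is assumed from this era of the file):

	spin_lock(&block_group->lock);
	/* ... possibly flag an old-style cache inode for clearing ... */

	/*
	 * Safe to pin the inode even during unmount now: the reference is
	 * released by btrfs_put_block_group_cache(), which runs after the
	 * last commit.  Only skip if a reference is already cached.
	 */
	if (!block_group->iref) {
		block_group->inode = igrab(inode);
		block_group->iref = 1;
	}
	spin_unlock(&block_group->lock);

	return inode;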