Commit bd56b302 authored by Chris Mason

Btrfs: Make btrfs_drop_snapshot work in larger and more efficient chunks

Every transaction in btrfs creates a new snapshot, and then schedules the
snapshot from the last transaction for deletion.  Snapshot deletion
works by walking down the btree and dropping the reference counts
on each btree block during the walk.

If a given leaf or node has a reference count greater than one,
the reference count is decremented and the subtree pointed to by that
node is ignored.

If the reference count is one, walking continues down into that node
or leaf, and the references of everything it points to are decremented.
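
The walk described above can be sketched in simplified user-space C. This is
only an illustration: the struct and function names here are invented, and the
real kernel code walks extent_buffer blocks and tracks the reference counts in
the extent allocation tree.

    #include <stdlib.h>

    /* Simplified stand-in for a btree block; purely illustrative. */
    struct demo_block {
    	unsigned long refs;           /* reference count on this block */
    	struct demo_block **children; /* child blocks, NULL entries for leaves */
    	int nr_children;
    };

    /*
     * Drop one reference on a block.  If other snapshots still reference it
     * (refs > 1), just decrement and stop: the whole subtree below it is
     * shared and must be left alone.  If this was the last reference, recurse
     * into the children and then free the block itself.
     */
    static void drop_block(struct demo_block *block)
    {
    	int i;

    	if (block->refs > 1) {
    		block->refs--;
    		return;
    	}

    	for (i = 0; i < block->nr_children; i++)
    		if (block->children[i])
    			drop_block(block->children[i]);

    	free(block->children);
    	free(block);
    }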

The old code would try to work in small pieces, walking down the tree
until it found the lowest leaf or node to free and then returning.  This
was very friendly to the rest of the FS because it didn't have a huge
impact on other operations.

But it couldn't always keep up with the rate at which new commits scheduled
snapshots for deletion, and it wasn't optimal for the extent allocation tree
because it didn't find leaves that were close together on disk and process
them at the same time.

This changes things to walk down to a level 1 node and then process it
in bulk.  All the leaf pointers are sorted and the leaves are dropped
in order based on their extent number.
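
A rough sketch of that bulk pass, again in simplified user-space C.  The
drop_leaf() helper and the flat array of leaf block numbers are hypothetical
stand-ins used only for illustration; on the kernel side this change adds
linux/sort.h and a btrfs_sort_leaf_ref() helper, and the leaf pointers come
from the level 1 node being processed.

    #include <stdint.h>
    #include <stdlib.h>

    typedef uint64_t u64;

    /* Hypothetical helper: drop one leaf given its disk block number. */
    extern void drop_leaf(u64 bytenr);

    /* Compare two leaf block numbers for qsort(). */
    static int cmp_bytenr(const void *a, const void *b)
    {
    	u64 x = *(const u64 *)a;
    	u64 y = *(const u64 *)b;

    	if (x < y)
    		return -1;
    	if (x > y)
    		return 1;
    	return 0;
    }

    /*
     * Process a whole level 1 node at once: sort the leaf pointers it holds
     * so the leaves are read and freed in on-disk order, then drop each leaf
     * in turn.
     */
    static void drop_level_one_node(u64 *leaf_bytenrs, int nr_leaves)
    {
    	int i;

    	qsort(leaf_bytenrs, nr_leaves, sizeof(u64), cmp_bytenr);

    	for (i = 0; i < nr_leaves; i++)
    		drop_leaf(leaf_bytenrs[i]);
    }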

The extent allocation tree and commit code are now fast enough for
this kind of bulk processing to work without slowing the rest of the FS
down.  Overall it does less IO and is better able to keep up with
snapshot deletions under high load.

Signed-off-by: Chris Mason <chris.mason@oracle.com>
parent b4ce94de
@@ -2441,6 +2441,8 @@ static noinline int drop_csum_leaves(struct btrfs_trans_handle *trans,
 		ref->generation = leaf_gen;
 		ref->nritems = 0;
+		btrfs_sort_leaf_ref(ref);
 		ret = btrfs_add_leaf_ref(root, ref, 0);
 		WARN_ON(ret);
 		btrfs_free_leaf_ref(root, ref);
...
@@ -17,6 +17,7 @@
  */
 #include <linux/sched.h>
+#include <linux/sort.h>
 #include "ctree.h"
 #include "ref-cache.h"
 #include "transaction.h"
...
@@ -73,5 +73,4 @@ int btrfs_add_leaf_ref(struct btrfs_root *root, struct btrfs_leaf_ref *ref,
 int btrfs_remove_leaf_refs(struct btrfs_root *root, u64 max_root_gen,
 			   int shared);
 int btrfs_remove_leaf_ref(struct btrfs_root *root, struct btrfs_leaf_ref *ref);
 #endif