Commit 7ed49f18 authored by Josef Bacik

Btrfs: fix space leak when we fail to make an allocation

When changing back to using a spin_lock to protect the extent counters I decided
that since we would only be dropping our original extent, it was ok to just drop
the extent and return.  However since somebody else could have come in and done
a reservation, we need to do the normal song and dance to clear the reservation
out properly.  So calculate how much space we need to free, and then subtract
what we just attempted to reserve.  If it's more, then we know we need to drop
those bytes from the delalloc block rsv.  Thanks,
Signed-off-by: Josef Bacik <josef@redhat.com>
parent a9b5fcdd
@@ -4025,16 +4025,24 @@ int btrfs_delalloc_reserve_metadata(struct inode *inode, u64 num_bytes)
 	ret = reserve_metadata_bytes(NULL, root, block_rsv, to_reserve, 1);
 	if (ret) {
+		u64 to_free = 0;
 		unsigned dropped;
-		/*
-		 * We don't need the return value since our reservation failed,
-		 * we just need to clean up our counter.
-		 */
 		spin_lock(&BTRFS_I(inode)->lock);
 		dropped = drop_outstanding_extent(inode);
-		WARN_ON(dropped > 1);
+		to_free = calc_csum_metadata_size(inode, num_bytes, 0);
+		BTRFS_I(inode)->csum_bytes -= num_bytes;
 		spin_unlock(&BTRFS_I(inode)->lock);
+		to_free += btrfs_calc_trans_metadata_size(root, dropped);
+		/*
+		 * Somebody could have come in and twiddled with the
+		 * reservation, so if we have to free more than we would have
+		 * reserved from this reservation go ahead and release those
+		 * bytes.
+		 */
+		to_free -= to_reserve;
+		if (to_free)
+			btrfs_block_rsv_release(root, block_rsv, to_free);
 		return ret;
 	}
...
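The error path above boils down to a small piece of accounting: total up the metadata the failed reservation was covering (checksum metadata for num_bytes plus per-extent metadata for every outstanding extent that got dropped), subtract the amount we just tried to reserve, and hand any surplus back to the block reserve instead of leaking it. Below is a minimal, self-contained userspace sketch of that arithmetic; the structures, constants and helper names (inode_rsv, block_rsv, EXTENT_ITEM_COST, CSUM_COST_PER_4K, reserve_failed) are invented for illustration and are not the kernel's btrfs API.

/*
 * Simplified userspace model of the error-path accounting in the patch.
 * All sizes and helpers here are made up for illustration only.
 */
#include <stdio.h>

struct inode_rsv {
	unsigned long long csum_bytes;	/* bytes still needing csum metadata */
	unsigned outstanding_extents;	/* extents reserved but not yet flushed */
};

struct block_rsv {
	unsigned long long reserved;	/* bytes currently held by the reserve */
};

/* stand-in for btrfs_calc_trans_metadata_size(): cost of one extent item */
#define EXTENT_ITEM_COST 16384ULL
/* stand-in for calc_csum_metadata_size(): csum metadata per 4K of data */
#define CSUM_COST_PER_4K 32ULL

static unsigned long long csum_metadata_size(unsigned long long bytes)
{
	return (bytes / 4096) * CSUM_COST_PER_4K;
}

/*
 * Mirror of the fixed error path: instead of silently dropping the
 * extent and returning, work out everything this inode's bookkeeping
 * covered, subtract what we just tried (and failed) to reserve, and
 * give only the surplus back to the block reserve.
 */
static void reserve_failed(struct inode_rsv *ino, struct block_rsv *rsv,
			   unsigned long long num_bytes,
			   unsigned long long to_reserve)
{
	unsigned long long to_free;
	unsigned dropped;

	/* in the kernel this section runs under BTRFS_I(inode)->lock */
	dropped = ino->outstanding_extents;	/* drop_outstanding_extent() */
	ino->outstanding_extents = 0;
	to_free = csum_metadata_size(num_bytes);
	ino->csum_bytes -= num_bytes;

	to_free += (unsigned long long)dropped * EXTENT_ITEM_COST;

	/*
	 * Somebody else may have piggybacked on this reservation, so if
	 * the accounted space exceeds what we just tried to reserve,
	 * release the difference back to the block reserve.
	 */
	if (to_free > to_reserve) {
		rsv->reserved -= (to_free - to_reserve);
		printf("released %llu surplus bytes\n", to_free - to_reserve);
	}
}

int main(void)
{
	struct inode_rsv ino = { .csum_bytes = 1 << 20, .outstanding_extents = 2 };
	struct block_rsv rsv = { .reserved = 1 << 20 };

	/* pretend a 16K metadata reservation for 256K of data just failed */
	reserve_failed(&ino, &rsv, 256 * 1024, 16 * 1024);
	return 0;
}

With these example numbers the inode's bookkeeping covers more than the failed 16K attempt (two dropped extents plus the csum bytes), so the surplus is released back to the reserve rather than leaked, which is the leak the patch closes.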