Commit 63447b7d authored by Qu Wenruo, committed by David Sterba

btrfs: scrub: update last_physical after scrubbing one stripe

Currently sctx->stat.last_physical is only updated in the following
cases:

- When the last stripe of a non-RAID56 chunk is scrubbed
  This implies a pitfall: if the last stripe ends at the chunk boundary
  and we finish scrubbing the whole chunk, last_physical is not updated
  at all until the next chunk.

- When a P/Q stripe of a RAID56 chunk is scrubbed

This leads to the following two problems:

- sctx->stat.last_physical is not updated for an almost full chunk
  This is especially bad for scrub resume, as resuming starts from
  last_physical, causing an unnecessary re-scrub.

- "btrfs scrub status" will not report any progress for a long time

Fix the problem by properly updating @last_physical after each stripe is
scrubbed.

And since we're here, for the sake of consistency, use a spin lock to
protect the update of @last_physical, just like all the remaining
call sites touching sctx->stat.
Reported-by: Michel Palleau <michel.palleau@gmail.com>
Link: https://lore.kernel.org/linux-btrfs/CAMFk-+igFTv2E8svg=cQ6o3e6CrR5QwgQ3Ok9EyRaEvvthpqCQ@mail.gmail.com/
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
parent 33eb1e5d
@@ -1875,6 +1875,9 @@ static int flush_scrub_stripes(struct scrub_ctx *sctx)
 		stripe = &sctx->stripes[i];
 		wait_scrub_stripe_io(stripe);
+		spin_lock(&sctx->stat_lock);
+		sctx->stat.last_physical = stripe->physical + stripe_length(stripe);
+		spin_unlock(&sctx->stat_lock);
 		scrub_reset_stripe(stripe);
 	}
 out:
@@ -2143,7 +2146,9 @@ static int scrub_simple_mirror(struct scrub_ctx *sctx,
 					cur_physical, &found_logical);
 		if (ret > 0) {
 			/* No more extent, just update the accounting */
+			spin_lock(&sctx->stat_lock);
 			sctx->stat.last_physical = physical + logical_length;
+			spin_unlock(&sctx->stat_lock);
 			ret = 0;
 			break;
 		}
@@ -2340,6 +2345,10 @@ static noinline_for_stack int scrub_stripe(struct scrub_ctx *sctx,
 			stripe_logical += chunk_logical;
 			ret = scrub_raid56_parity_stripe(sctx, scrub_dev, bg,
 							map, stripe_logical);
+			spin_lock(&sctx->stat_lock);
+			sctx->stat.last_physical = min(physical + BTRFS_STRIPE_LEN,
+						       physical_end);
+			spin_unlock(&sctx->stat_lock);
 			if (ret)
 				goto out;
 			goto next;