    btrfs: scrub: use recovered data stripes as cache to avoid unnecessary read · 94ead93e
    Qu Wenruo authored
    For P/Q stripe scrub, we have quite a lot of duplicated read IO:
    
    - Data stripes read for verification
      This is triggered by the scrub_submit_initial_read() inside
      scrub_raid56_parity_stripe().
    
    - Data stripes read (again) for P/Q stripe verification
      This is triggered by scrub_assemble_read_bios() from scrub_rbio().
    
      Although a read can hit the rbio cache and avoid the duplicated IO,
      the chance is very low, as scrub can easily flush the whole rbio
      cache.
    
    This means that, even in the best case, scrubbing a single P/Q stripe
    reads the data stripes twice.  If we need to recover some data
    stripes, the same data stripes are read again and again.
    
    However, before we call raid56_parity_submit_scrub_rbio() we already
    have all data stripes repaired and their contents ready to use.
    But the RAID56 layer is unaware of the scrub cache, thus it still
    needs to re-read the data stripes.
    
    To avoid such cache misses, this patch would:
    
    - Introduce a new helper, raid56_parity_cache_data_pages()
      This function grabs the pages from an array, copies their content
      into the rbio, and marks all the involved sectors uptodate.
    
      The page copy is unavoidable because the cache pages of the rbio
      are all self-managed, thus it can not utilize outside pages without
      screwing up their lifespan.
    
    - Use the repaired data stripes as cache inside
      scrub_raid56_parity_stripe()
    
    With this, we ensure all the data sectors of the scrub rbio are
    already uptodate, so there is no need to read them from disk again.
    Signed-off-by: Qu Wenruo <wqu@suse.com>
    Signed-off-by: David Sterba <dsterba@suse.com>