1. 22 Feb, 2021 4 commits
    • btrfs: fix race between extent freeing/allocation when using bitmaps · 3c179165
      Nikolay Borisov authored
      During allocation the allocator will try to allocate an extent using
      the cluster policy. Once the current cluster is exhausted it will
      remove the entry under btrfs_free_cluster::lock and subsequently
      acquire btrfs_free_space_ctl::tree_lock to dispose of the
      already-deleted entry and adjust btrfs_free_space_ctl::total_bitmaps.
      This poses a problem because there is a race window between removing
      the entry under one lock and doing the necessary accounting under
      the other lock, since extent freeing takes only the second lock.
      This can result in the following situation:
      
      T1:                                      T2:
      btrfs_alloc_from_cluster                 insert_into_bitmap <holds tree_lock>
       if (entry->bytes == 0)                   if (block_group && !list_empty(&block_group->cluster_list)) {
          rb_erase(entry)

       spin_unlock(&cluster->lock);
         (total_bitmaps is still 4)             spin_lock(&cluster->lock);
                                                <doesn't find entry in cluster->root>
       spin_lock(&ctl->tree_lock);              <goes to new_bitmap label, adds
       <blocked since T2 holds tree_lock>        a new entry and calls add_new_bitmap>
                                                recalculate_thresholds <crashes,
                                                 since total_bitmaps becomes 5
                                                 and triggers an ASSERT>
      
      To fix this, ensure that once the cluster is depleted its entry is
      deleted while both the cluster lock and the tree lock are held in
      the allocator (T1). That way, even if a concurrent
      insert_into_bitmap call races in, it will correctly find the entry
      in the cluster and add the new space to it.
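
      Below is a minimal user-space sketch of the locking pattern the fix
      adopts, not the actual btrfs code: the entry is erased and
      total_bitmaps adjusted only while both locks are held, taken in the
      same order as the freeing path. All names here, including the
      release_depleted_entry() helper, are simplified stand-ins.

      #include <pthread.h>
      #include <stdbool.h>
      #include <stddef.h>

      struct free_space_entry {
              unsigned long bytes;
              bool is_bitmap;
              /* rb-tree linkage elided */
      };

      struct free_cluster {
              pthread_mutex_t lock;          /* btrfs_free_cluster::lock */
              struct free_space_entry *entry;
      };

      struct free_space_ctl {
              pthread_mutex_t tree_lock;     /* btrfs_free_space_ctl::tree_lock */
              int total_bitmaps;
      };

      static void release_depleted_entry(struct free_space_ctl *ctl,
                                         struct free_cluster *cluster)
      {
              /*
               * Take tree_lock first, then the cluster lock (the order
               * the freeing path uses), so removal and accounting happen
               * as one atomic step with respect to insert_into_bitmap.
               */
              pthread_mutex_lock(&ctl->tree_lock);
              pthread_mutex_lock(&cluster->lock);

              if (cluster->entry && cluster->entry->bytes == 0) {
                      if (cluster->entry->is_bitmap)
                              ctl->total_bitmaps--;  /* accounting under the same locks */
                      cluster->entry = NULL;         /* stands in for rb_erase() */
              }

              pthread_mutex_unlock(&cluster->lock);
              pthread_mutex_unlock(&ctl->tree_lock);
      }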
      
      CC: <stable@vger.kernel.org> # 4.4+
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Nikolay Borisov <nborisov@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      3c179165
    • btrfs: make check_compressed_csum() to be subpage compatible · 04d4ba4c
      Qu Wenruo authored
      Currently check_compressed_csum() completely relies on sectorsize ==
      PAGE_SIZE to do checksum verification for compressed extents.
      
      To make it subpage compatible, this patch will:
      - Do extra calculation for the csum range
        Since there can be multiple sectors inside a page, we need to
        hash only the range we want, not the full page anymore.

      - Do a sector-by-sector hash inside the page
      
      With this patch and the previous conversion of
      btrfs_submit_compressed_read(), we can now read subpage compressed
      extents properly and do proper csum verification.
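
      As a rough illustration, here is a user-space sketch of
      sector-by-sector verification within one page. The sector_csum()
      helper is a hypothetical placeholder hash, not btrfs's actual csum
      algorithm, and the layout is simplified:

      #include <stdbool.h>
      #include <stddef.h>
      #include <stdint.h>

      #define PAGE_SIZE 65536u  /* e.g. a 64K page, the subpage case */

      /* hypothetical per-sector checksum (FNV-1a as a placeholder) */
      static uint32_t sector_csum(const uint8_t *data, size_t len)
      {
              uint32_t h = 2166136261u;

              for (size_t i = 0; i < len; i++)
                      h = (h ^ data[i]) * 16777619u;
              return h;
      }

      /*
       * Verify each sector of one page against its expected csums.
       * bytes_left caps the range for a partial last page, so we hash
       * only the range we want instead of the full page.
       */
      static bool check_page_csums(const uint8_t *page, size_t bytes_left,
                                   uint32_t sectorsize,
                                   const uint32_t *expected)
      {
              size_t len = bytes_left < PAGE_SIZE ? bytes_left : PAGE_SIZE;

              for (size_t off = 0; off + sectorsize <= len; off += sectorsize) {
                      if (sector_csum(page + off, sectorsize) != *expected++)
                              return false;  /* csum mismatch in this sector */
              }
              return true;
      }
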
      Reviewed-by: Anand Jain <anand.jain@oracle.com>
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      04d4ba4c
    • btrfs: make btrfs_submit_compressed_read() subpage compatible · be6a1361
      Qu Wenruo authored
      For compressed reads, we always submit page reads using the page
      size.  This doesn't work well with subpage, as for subpage one page
      can contain several sectors.  Such a submission will read a range
      beyond what we want and cause problems.

      Thankfully, to make it subpage compatible, we only need to change
      how the last page of the compressed extent is read.

      Instead of always adding a full page to the compressed read bio, if
      we're at the last page, calculate the size using the compressed
      length, so that we only add part of the range into the compressed
      read bio.

      While at it, also change the PAGE_SIZE used in
      lookup_extent_mapping() to sectorsize.

      This modification won't cause any functional change, as
      lookup_extent_mapping() can handle the case where the search range
      is larger than the found extent range.
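
      A small sketch of that calculation, with illustrative names rather
      than the actual btrfs ones: all but the last page contribute
      PAGE_SIZE bytes to the bio, and the tail is clamped to the
      remaining compressed length.

      #include <stddef.h>

      #define PAGE_SIZE 4096u

      /* Bytes of this page that belong to the compressed extent. */
      static size_t bytes_to_add(size_t compressed_len, size_t cur_offset)
      {
              size_t remaining = compressed_len - cur_offset;

              /* Last page: add only the part of the range we need. */
              return remaining < PAGE_SIZE ? remaining : PAGE_SIZE;
      }
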
      Reviewed-by: Anand Jain <anand.jain@oracle.com>
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      be6a1361
    • btrfs: fix raid6 qstripe kmap · d70cef0d
      Ira Weiny authored
      When a qstripe is required, an extra page is allocated and mapped.
      There were three problems:

      1) There is no corresponding call of kunmap() for the qstripe page.
      2) There is no reason to map the qstripe page more than once when
         the number of bits set in rbio->dbitmap is greater than one.
      3) There is no reason to map the parity page and unmap it each time
         through the loop.

      The page memory can be reused by raid6_call.gen_syndrome() with a
      single mapping, without remapping on each iteration, so map the
      qstripe page once for the duration of the loop.

      Similarly, improve the algorithm by mapping the parity page just
      once, as in the sketch below.
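
      A user-space sketch of the resulting pattern, with kmap() and
      kunmap() modeled as trivial stand-ins and gen_syndrome_stub()
      standing in for raid6_call.gen_syndrome(): each page is mapped once
      before the loop, reused on every iteration, and unmapped once after.

      #include <stdbool.h>
      #include <stddef.h>

      struct page { unsigned char data[4096]; };

      /* trivial stand-ins for the kernel's kmap()/kunmap() */
      static void *kmap(struct page *p)  { return p->data; }
      static void kunmap(struct page *p) { (void)p; }

      static void gen_syndrome_stub(void **ptrs, int n) { (void)ptrs; (void)n; }

      static void scrub_loop(struct page *parity, struct page *qstripe,
                             int nr_iterations)
      {
              void *ptrs[2];
              bool has_q = qstripe != NULL;

              ptrs[0] = kmap(parity);            /* parity mapped just once */
              if (has_q)
                      ptrs[1] = kmap(qstripe);   /* qstripe mapped just once */

              for (int i = 0; i < nr_iterations; i++)
                      gen_syndrome_stub(ptrs, has_q ? 2 : 1);  /* reuse mappings */

              if (has_q)
                      kunmap(qstripe);           /* the previously missing kunmap() */
              kunmap(parity);
      }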
      
      Fixes: 5a6ac9ea ("Btrfs, raid56: support parity scrub on raid56")
      CC: stable@vger.kernel.org # 4.4.x: c17af965: btrfs: raid56: simplify tracking of Q stripe presence
      CC: stable@vger.kernel.org # 4.4.x
      Signed-off-by: Ira Weiny <ira.weiny@intel.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      d70cef0d
  2. 09 Feb, 2021 36 commits