    Btrfs: update fix for read corruption of compressed and shared extents
    commit 808f80b4 upstream.
    
    My previous fix in commit 005efedf ("Btrfs: fix read corruption of
    compressed and shared extents") was effective only if the compressed
    extents cover a file range whose length is not a multiple of 16
    pages. That's because the detection of when we have reached a
    different file range that shares the same compressed extent as the
    previously processed range was done at
    extent_io.c:__do_contiguous_readpages(), which operates on subranges
    of at most 16 pages, since extent_readpages() groups the pages in
    clusters no larger than 16 pages. So fix this by tracking the start
    of the previously processed file range's extent map at
    extent_readpages() instead, where it survives across those 16-page
    clusters.
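    
    The following is a standalone model of why this matters (illustrative
    only, not kernel code: the helpers em_orig_start() and
    detected_transitions() are invented for this sketch). When the shared
    range's length is an exact multiple of 16 pages, the transition to
    the second range coincides with a cluster boundary, so a
    prev_em_start that is reset per cluster never sees the change, while
    one tracked for the whole extent_readpages() call does:
    
      #include <stdio.h>
      #include <stdint.h>
      #include <stdbool.h>
    
      #define PAGE_SZ   4096ULL
      #define CLUSTER   16   /* extent_readpages() batches pages 16 at a time */
      #define NR_PAGES  32   /* two 64K ranges sharing one compressed extent */
    
      static uint64_t em_orig_start(unsigned int page)
      {
          /* Pages 0-15 belong to the file range at offset 0, pages 16-31
           * to the cloned range at 64K; each range has its own extent map
           * (different orig_start) but both point to the same extent. */
          return page < 16 ? 0 : 16 * PAGE_SZ;
      }
    
      static int detected_transitions(bool per_cluster)
      {
          uint64_t prev_em_start = (uint64_t)-1;
          int detected = 0;
          unsigned int page;
    
          for (page = 0; page < NR_PAGES; page++) {
              /* Old behaviour: prev_em_start lived in
               * __do_contiguous_readpages(), so it was reset for every
               * 16-page cluster. */
              if (per_cluster && page % CLUSTER == 0)
                  prev_em_start = (uint64_t)-1;
    
              uint64_t orig = em_orig_start(page);
              if (prev_em_start != (uint64_t)-1 && prev_em_start != orig)
                  detected++;   /* kernel sets force_bio_submit here */
              prev_em_start = orig;
          }
          return detected;
      }
    
      int main(void)
      {
          printf("per-cluster tracking (old): %d transition(s) detected\n",
                 detected_transitions(true));
          printf("whole-call tracking (fix):  %d transition(s) detected\n",
                 detected_transitions(false));
          return 0;
      }
    
    Compiled and run, the model reports 0 detected transitions for the
    old per-cluster tracking and 1 for the whole-call tracking, which is
    exactly the case the updated fix addresses.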
    
    The following test case for fstests reproduces the issue:
    
      seq=`basename $0`
      seqres=$RESULT_DIR/$seq
      echo "QA output created by $seq"
      tmp=/tmp/$$
      status=1	# failure is the default!
      trap "_cleanup; exit \$status" 0 1 2 3 15
    
      _cleanup()
      {
          rm -f $tmp.*
      }
    
      # get standard environment, filters and checks
      . ./common/rc
      . ./common/filter
    
      # real QA test starts here
      _need_to_be_root
      _supported_fs btrfs
      _supported_os Linux
      _require_scratch
      _require_cloner
    
      rm -f $seqres.full
    
      test_clone_and_read_compressed_extent()
      {
          local mount_opts=$1
    
          _scratch_mkfs >>$seqres.full 2>&1
          _scratch_mount $mount_opts
    
          # Create our test file with a single 64K extent that is going to
          # be compressed no matter which compression algo is used (zlib/lzo).
          $XFS_IO_PROG -f -c "pwrite -S 0xaa 0K 64K" \
              $SCRATCH_MNT/foo | _filter_xfs_io
    
          # Now clone the compressed extent into an adjacent file offset.
          $CLONER_PROG -s 0 -d $((64 * 1024)) -l $((64 * 1024)) \
              $SCRATCH_MNT/foo $SCRATCH_MNT/foo
    
          echo "File digest before unmount:"
          md5sum $SCRATCH_MNT/foo | _filter_scratch
    
          # Remount the fs or clear the page cache to trigger the bug in
          # btrfs. Because the extent has an uncompressed length that is a
          # multiple of 16 pages, all the pages belonging to the second range
          # of the file (64K to 128K), which points to the same extent as the
          # first range (0K to 64K), end up with their contents full of zeroes
          # instead of the byte 0xaa. This is a bug exclusively in the read
          # path of compressed extents: the correct data is stored on disk,
          # btrfs just fails to fill in the pages correctly.
          _scratch_remount
    
          echo "File digest after remount:"
          # Must match the digest we got before.
          md5sum $SCRATCH_MNT/foo | _filter_scratch
      }
    
      echo -e "\nTesting with zlib compression..."
      test_clone_and_read_compressed_extent "-o compress=zlib"
    
      _scratch_unmount
    
      echo -e "\nTesting with lzo compression..."
      test_clone_and_read_compressed_extent "-o compress=lzo"
    
      status=0
      exit
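    
    In fstests, a reproducer like this would live under tests/btrfs/ with
    a matching golden output (.out) file and be run through the check
    script (e.g. "./check btrfs/NNN", with NNN being whichever number the
    test is merged under); on an unpatched kernel the two md5sum lines
    differ and the test fails against its golden output.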
    Signed-off-by: Filipe Manana <fdmanana@suse.com>
    Tested-by: Timofey Titovets <nefelim4ag@gmail.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>