1. 12 Jan, 2024 9 commits
    • btrfs: defrag: reject unknown flags of btrfs_ioctl_defrag_range_args · 173431b2
      Qu Wenruo authored
      Add extra sanity check for btrfs_ioctl_defrag_range_args::flags.
      
      This is not really about enhancing fuzzing tests, but a preparation for
      future expansion of btrfs_ioctl_defrag_range_args.
      
      In the future we're going to add new members, allowing finer tuning for
      btrfs defrag.  Without the -ENOTSUPP error, there would be no way to
      detect if the kernel supports those new defrag features.
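
      A minimal sketch of such a check (the mask name and the exact errno are
      assumptions for illustration; only the shape of the check matters here):

        /* Hypothetical mask of the flags the kernel currently understands. */
        #define BTRFS_DEFRAG_RANGE_FLAGS_SUPP \
                (BTRFS_DEFRAG_RANGE_COMPRESS | BTRFS_DEFRAG_RANGE_START_IO)

        if (range.flags & ~BTRFS_DEFRAG_RANGE_FLAGS_SUPP) {
                ret = -ENOTSUPP; /* unknown flag requested by user space */
                goto out;
        }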
      
      CC: stable@vger.kernel.org # 4.14+
      Reviewed-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      173431b2
    • btrfs: avoid copying BTRFS_ROOT_SUBVOL_DEAD flag to snapshot of subvolume being deleted · 3324d054
      Omar Sandoval authored
      Sweet Tea spotted a race between subvolume deletion and snapshotting
      that can result in the root item for the snapshot having the
      BTRFS_ROOT_SUBVOL_DEAD flag set. The race is:
      
      Thread 1                                      | Thread 2
      ----------------------------------------------|----------
      btrfs_delete_subvolume                        |
        btrfs_set_root_flags(BTRFS_ROOT_SUBVOL_DEAD)|
                                                    |btrfs_mksubvol
                                                    |  down_read(subvol_sem)
                                                    |  create_snapshot
                                                    |    ...
                                                    |    create_pending_snapshot
                                                    |      copy root item from source
        down_write(subvol_sem)                      |
      
      This flag is only checked in send and swap activate, which this race
      would cause to fail mysteriously.
      
      create_snapshot() now checks the root refs to reject a deleted
      subvolume, so we can fix this by locking subvol_sem earlier so that the
      BTRFS_ROOT_SUBVOL_DEAD flag and the root refs are updated atomically.
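
      A rough sketch of the reordering in btrfs_delete_subvolume() (variable
      names are illustrative, error handling omitted):

        /* Take subvol_sem before marking the root dead ... */
        down_write(&fs_info->subvol_sem);
        btrfs_set_root_flags(&dest->root_item,
                             root_flags | BTRFS_ROOT_SUBVOL_DEAD);
        /*
         * ... so the flag and the root refs are updated atomically with
         * respect to create_snapshot(), which holds subvol_sem for read.
         */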
      
      CC: stable@vger.kernel.org # 4.14+
      Reported-by: Sweet Tea Dorminy <sweettea-kernel@dorminy.me>
      Reviewed-by: Sweet Tea Dorminy <sweettea-kernel@dorminy.me>
      Reviewed-by: Anand Jain <anand.jain@oracle.com>
      Signed-off-by: Omar Sandoval <osandov@fb.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      3324d054
    • btrfs: don't abort filesystem when attempting to snapshot deleted subvolume · 7081929a
      Omar Sandoval authored
      If the source file descriptor to the snapshot ioctl refers to a deleted
      subvolume, we get the following abort:
      
        BTRFS: Transaction aborted (error -2)
        WARNING: CPU: 0 PID: 833 at fs/btrfs/transaction.c:1875 create_pending_snapshot+0x1040/0x1190 [btrfs]
        Modules linked in: pata_acpi btrfs ata_piix libata scsi_mod virtio_net blake2b_generic xor net_failover virtio_rng failover scsi_common rng_core raid6_pq libcrc32c
        CPU: 0 PID: 833 Comm: t_snapshot_dele Not tainted 6.7.0-rc6 #2
        Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-1.fc39 04/01/2014
        RIP: 0010:create_pending_snapshot+0x1040/0x1190 [btrfs]
        RSP: 0018:ffffa09c01337af8 EFLAGS: 00010282
        RAX: 0000000000000000 RBX: ffff9982053e7c78 RCX: 0000000000000027
        RDX: ffff99827dc20848 RSI: 0000000000000001 RDI: ffff99827dc20840
        RBP: ffffa09c01337c00 R08: 0000000000000000 R09: ffffa09c01337998
        R10: 0000000000000003 R11: ffffffffb96da248 R12: fffffffffffffffe
        R13: ffff99820535bb28 R14: ffff99820b7bd000 R15: ffff99820381ea80
        FS:  00007fe20aadabc0(0000) GS:ffff99827dc00000(0000) knlGS:0000000000000000
        CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
        CR2: 0000559a120b502f CR3: 00000000055b6000 CR4: 00000000000006f0
        Call Trace:
         <TASK>
         ? create_pending_snapshot+0x1040/0x1190 [btrfs]
         ? __warn+0x81/0x130
         ? create_pending_snapshot+0x1040/0x1190 [btrfs]
         ? report_bug+0x171/0x1a0
         ? handle_bug+0x3a/0x70
         ? exc_invalid_op+0x17/0x70
         ? asm_exc_invalid_op+0x1a/0x20
         ? create_pending_snapshot+0x1040/0x1190 [btrfs]
         ? create_pending_snapshot+0x1040/0x1190 [btrfs]
         create_pending_snapshots+0x92/0xc0 [btrfs]
         btrfs_commit_transaction+0x66b/0xf40 [btrfs]
         btrfs_mksubvol+0x301/0x4d0 [btrfs]
         btrfs_mksnapshot+0x80/0xb0 [btrfs]
         __btrfs_ioctl_snap_create+0x1c2/0x1d0 [btrfs]
         btrfs_ioctl_snap_create_v2+0xc4/0x150 [btrfs]
         btrfs_ioctl+0x8a6/0x2650 [btrfs]
         ? kmem_cache_free+0x22/0x340
         ? do_sys_openat2+0x97/0xe0
         __x64_sys_ioctl+0x97/0xd0
         do_syscall_64+0x46/0xf0
         entry_SYSCALL_64_after_hwframe+0x6e/0x76
        RIP: 0033:0x7fe20abe83af
        RSP: 002b:00007ffe6eff1360 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
        RAX: ffffffffffffffda RBX: 0000000000000004 RCX: 00007fe20abe83af
        RDX: 00007ffe6eff23c0 RSI: 0000000050009417 RDI: 0000000000000003
        RBP: 0000000000000003 R08: 0000000000000000 R09: 00007fe20ad16cd0
        R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
        R13: 00007ffe6eff13c0 R14: 00007fe20ad45000 R15: 0000559a120b6d58
         </TASK>
        ---[ end trace 0000000000000000 ]---
        BTRFS: error (device vdc: state A) in create_pending_snapshot:1875: errno=-2 No such entry
        BTRFS info (device vdc: state EA): forced readonly
        BTRFS warning (device vdc: state EA): Skipping commit of aborted transaction.
        BTRFS: error (device vdc: state EA) in cleanup_transaction:2055: errno=-2 No such entry
      
      This happens because create_pending_snapshot() initializes the new root
      item as a copy of the source root item. This includes the refs field,
      which is 0 for a deleted subvolume. The call to btrfs_insert_root()
      therefore inserts a root with refs == 0. btrfs_get_new_fs_root() then
      finds the root and returns -ENOENT if refs == 0, which causes
      create_pending_snapshot() to abort.
      
      Fix it by checking the source root's refs before attempting the
      snapshot, but after locking subvol_sem to avoid racing with deletion.
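
      A minimal sketch of the added check in the snapshot path, done while
      subvol_sem is held (the surrounding code is omitted):

        /* Reject a source subvolume that is already being deleted. */
        if (btrfs_root_refs(&root->root_item) == 0) {
                ret = -ENOENT;
                goto out;
        }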
      
      CC: stable@vger.kernel.org # 4.14+
      Reviewed-by: Sweet Tea Dorminy <sweettea-kernel@dorminy.me>
      Reviewed-by: Anand Jain <anand.jain@oracle.com>
      Signed-off-by: Omar Sandoval <osandov@fb.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      7081929a
    • btrfs: zoned: fix lock ordering in btrfs_zone_activate() · b18f3b60
      Naohiro Aota authored
      The btrfs CI reported the following lockdep warning when running test
      case generic/129.
      
         WARNING: possible circular locking dependency detected
         6.7.0-rc5+ #1 Not tainted
         ------------------------------------------------------
         kworker/u5:5/793427 is trying to acquire lock:
         ffff88813256d028 (&cache->lock){+.+.}-{2:2}, at: btrfs_zone_finish_one_bg+0x5e/0x130
         but task is already holding lock:
         ffff88810a23a318 (&fs_info->zone_active_bgs_lock){+.+.}-{2:2}, at: btrfs_zone_finish_one_bg+0x34/0x130
         which lock already depends on the new lock.
      
         the existing dependency chain (in reverse order) is:
         -> #1 (&fs_info->zone_active_bgs_lock){+.+.}-{2:2}:
         ...
         -> #0 (&cache->lock){+.+.}-{2:2}:
         ...
      
      This is because we take fs_info->zone_active_bgs_lock after a block_group's
      lock in btrfs_zone_activate() while doing the opposite in other places.
      
      Fix the issue by expanding the fs_info->zone_active_bgs_lock's critical
      section and taking it before a block_group's lock.
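
      A simplified sketch of the corrected ordering in btrfs_zone_activate()
      (the activation logic itself is omitted):

        spin_lock(&fs_info->zone_active_bgs_lock);  /* outer lock first */
        spin_lock(&block_group->lock);              /* then the block group */
        /* ... check and account the zone activation ... */
        spin_unlock(&block_group->lock);
        spin_unlock(&fs_info->zone_active_bgs_lock);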
      
      Fixes: a7e1ac7b ("btrfs: zoned: reserve zones for an active metadata/system block group")
      CC: stable@vger.kernel.org # 6.6
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      b18f3b60
    • btrfs: fix unbalanced unlock of mapping_tree_lock · d967c914
      Naohiro Aota authored
      The error path of btrfs_get_chunk_map() releases
      fs_info->mapping_tree_lock, but that lock is taken and released inside
      btrfs_find_chunk_map(), so there is no need to release it again here.
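
      Roughly, the error path then boils down to something like this (the
      message text here is illustrative):

        map = btrfs_find_chunk_map(fs_info, logical, length);
        if (unlikely(!map)) {
                /*
                 * No extra unlock of fs_info->mapping_tree_lock here:
                 * btrfs_find_chunk_map() already dropped it.
                 */
                btrfs_crit(fs_info,
                           "unable to find chunk map for logical %llu length %llu",
                           logical, length);
                return ERR_PTR(-EINVAL);
        }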
      
      Fixes: 7dc66abb ("btrfs: use a dedicated data structure for chunk maps")
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      d967c914
    • btrfs: ref-verify: free ref cache before clearing mount opt · f03e274a
      Fedor Pchelkin authored
      As clearing the REF_VERIFY mount option indicates there were some errors
      in the ref-verify process, the ref cache is not relevant anymore and
      should be freed.
      
      btrfs_free_ref_cache() requires the REF_VERIFY option to be set, so call
      it just before clearing the mount option.
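
      A minimal sketch of the resulting order (simplified from the ref-verify
      error handling):

        /* Free the cache while REF_VERIFY is still set ... */
        btrfs_free_ref_cache(fs_info);
        /* ... then drop the mount option to disable further verification. */
        btrfs_clear_opt(fs_info->mount_opt, REF_VERIFY);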
      
      Found by Linux Verification Center (linuxtesting.org) with Syzkaller.
      
      Reported-by: syzbot+be14ed7728594dc8bd42@syzkaller.appspotmail.com
      Fixes: fd708b81 ("Btrfs: add a extent ref verify tool")
      CC: stable@vger.kernel.org # 5.4+
      Closes: https://lore.kernel.org/lkml/000000000000e5a65c05ee832054@google.com/
      Reported-by: syzbot+c563a3c79927971f950f@syzkaller.appspotmail.com
      Closes: https://lore.kernel.org/lkml/0000000000007fe09705fdc6086c@google.com/
      Reviewed-by: Anand Jain <anand.jain@oracle.com>
      Signed-off-by: Fedor Pchelkin <pchelkin@ispras.ru>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      f03e274a
    • btrfs: fix kvcalloc() arguments order in btrfs_ioctl_send() · 6ff09b6b
      Dmitry Antipov authored
      When compiling with gcc version 14.0.0 20231220 (experimental)
      and W=1, I've noticed the following warning:
      
      fs/btrfs/send.c: In function 'btrfs_ioctl_send':
      fs/btrfs/send.c:8208:44: warning: 'kvcalloc' sizes specified with 'sizeof'
      in the earlier argument and not in the later argument [-Wcalloc-transposed-args]
       8208 |         sctx->clone_roots = kvcalloc(sizeof(*sctx->clone_roots),
            |                                            ^
      
      Since the 'n' and 'size' arguments of 'kvcalloc()' are multiplied to
      calculate the final size, their actual order doesn't affect the result,
      so this is not a bug. But it's still worth fixing.
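
      For reference, kvcalloc() takes the element count first and the element
      size second, so the corrected call has the shape below (the count
      variable and GFP flags are illustrative):

        sctx->clone_roots = kvcalloc(nr_clone_roots,
                                     sizeof(*sctx->clone_roots),
                                     GFP_KERNEL);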
      Signed-off-by: Dmitry Antipov <dmantipov@yandex.ru>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      6ff09b6b
    • btrfs: zoned: optimize hint byte for zoned allocator · 02444f2a
      Naohiro Aota authored
      Writing sequentially to a huge file on btrfs on an SMR HDD revealed a
      performance decline (220 MiB/s to 30 MiB/s after 500 minutes).
      
      The performance goes down because of the increased latency of extent
      allocation, induced by traversing a lot of full block groups.
      
      So, this patch optimizes ffe_ctl->hint_byte by choosing a block group
      with sufficient free space from the active block group list, which does
      not contain full block groups.
      
      After applying the patch, the performance is maintained well.
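
      A condensed sketch of the hint selection (the field names follow the
      zoned allocator code, but the checks are simplified):

        struct btrfs_block_group *bg;

        spin_lock(&fs_info->zone_active_bgs_lock);
        list_for_each_entry(bg, &fs_info->zone_active_bgs, active_bg_list) {
                u64 avail = bg->zone_capacity - bg->alloc_offset;

                /* Start the search at an active, non-full block group. */
                if (block_group_bits(bg, ffe_ctl->flags) &&
                    avail >= ffe_ctl->num_bytes) {
                        ffe_ctl->hint_byte = bg->start;
                        break;
                }
        }
        spin_unlock(&fs_info->zone_active_bgs_lock);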
      
      Fixes: 2eda5708 ("btrfs: zoned: implement sequential extent allocation")
      CC: stable@vger.kernel.org # 5.15+
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      02444f2a
    • btrfs: zoned: factor out prepare_allocation_zoned() · b271fee9
      Naohiro Aota authored
      Factor out prepare_allocation_zoned() for further extension. While at
      it, optimize the if-branch a bit.
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      b271fee9
  2. 15 Dec, 2023 31 commits
    • btrfs: pass btrfs_io_geometry into btrfs_max_io_len · e94dfb7a
      Johannes Thumshirn authored
      Instead of passing three individual members of 'struct btrfs_io_geometry'
      into btrfs_max_io_len(), pass a pointer to btrfs_io_geometry.
      Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      e94dfb7a
    • btrfs: pass struct btrfs_io_geometry to set_io_stripe · 6edf6822
      Johannes Thumshirn authored
      Instead of passing three members of 'struct btrfs_io_geometry' into
      set_io_stripe(), pass a pointer to the whole structure and then get the
      needed members out of btrfs_io_geometry.
      Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      6edf6822
    • btrfs: open code set_io_stripe for RAID56 · 89f547c6
      Johannes Thumshirn authored
      Open code set_io_stripe() for RAID56, as it
      
      a) uses a different method to calculate the stripe_index
      b) doesn't need to go through raid-stripe-tree mapping code.
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      89f547c6
    • btrfs: change block mapping to switch/case in btrfs_map_block · b55b3077
      Johannes Thumshirn authored
      Now that all the per-profile if/else statement blocks have been
      converted to helper calls, the conversion to switch/case is
      straightforward.
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      b55b3077
    • btrfs: factor out block mapping for single profiles · a16fb8c6
      Johannes Thumshirn authored
      Now that we have a container for the I/O geometry that has all the needed
      information for the block mappings of SINGLE profiles, factor out a helper
      calculating this information.
      Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      a16fb8c6
    • btrfs: factor out block mapping for RAID5/6 · 089221d3
      Johannes Thumshirn authored
      Now that we have a container for the I/O geometry that has all the needed
      information for the block mappings of RAID5 and RAID6, factor out a helper
      calculating this information.
      Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      089221d3
    • btrfs: reduce scope of data_stripes in btrfs_map_block · d9d4ce9f
      Johannes Thumshirn authored
      Reduce the scope of 'data_stripes' in btrfs_map_block(). While the
      change alone may not make too much sense, it helps us factor out a
      helper function for the block mapping of RAID56 I/O.
      Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      d9d4ce9f
    • btrfs: factor out block mapping for RAID10 · 8938f112
      Johannes Thumshirn authored
      Now that we have a container for the I/O geometry that has all the needed
      information for the block mappings of RAID10, factor out a helper calculating
      this information.
      Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      8938f112
    • btrfs: factor out block mapping for DUP profiles · 5aeb15c8
      Johannes Thumshirn authored
      Now that we have a container for the I/O geometry that has all the needed
      information for the block mappings of DUP, factor out a helper calculating
      this information.
      Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      5aeb15c8
    • btrfs: factor out RAID1 block mapping · 5e36aba8
      Johannes Thumshirn authored
      Now that we have a container for the I/O geometry that has all the needed
      information for the block mappings of RAID1, factor out a helper calculating
      this information.
      Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      5e36aba8
    • btrfs: factor out block-mapping for RAID0 · 30e8534b
      Johannes Thumshirn authored
      Now that we have a container for the I/O geometry that has all the needed
      information for the block mappings of RAID0, factor out a helper calculating
      this information.
      Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      30e8534b
    • btrfs: re-introduce struct btrfs_io_geometry · fd747f2d
      Johannes Thumshirn authored
      Re-introduce struct btrfs_io_geometry, holding the necessary bits and
      pieces needed in btrfs_map_block() to decide the I/O geometry of a specific
      block mapping.
      Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      fd747f2d
    • btrfs: factor out helper for single device IO check · 02d05b64
      Johannes Thumshirn authored
      The check in btrfs_map_block() deciding if a particular I/O is targeting a
      single device is getting more and more convoluted.
      
      Factor out the check conditions into a helper function, with no functional
      change otherwise.
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      02d05b64
    • btrfs: migrate btrfs_repair_io_failure() to folio interfaces · 96c36eaa
      Qu Wenruo authored
      [BUG]
      Test case btrfs/124 fails if larger metadata folios are enabled, and the
      dying message looks like this:
      
       BTRFS error (device dm-2): bad tree block start, mirror 2 want 31686656 have 0
       BTRFS info (device dm-2): read error corrected: ino 0 off 31686656 (dev /dev/mapper/test-scratch2 sector 20928)
       BUG: kernel NULL pointer dereference, address: 0000000000000020
       #PF: supervisor read access in kernel mode
       #PF: error_code(0x0000) - not-present page
       CPU: 6 PID: 350881 Comm: btrfs Tainted: G           OE      6.7.0-rc3-custom+ #128
       Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 2/2/2022
       RIP: 0010:btrfs_read_extent_buffer+0x106/0x180 [btrfs]
       PKRU: 55555554
       Call Trace:
        <TASK>
        read_tree_block+0x33/0xb0 [btrfs]
        read_block_for_search+0x23e/0x340 [btrfs]
        btrfs_search_slot+0x2f9/0xe60 [btrfs]
        btrfs_lookup_csum+0x75/0x160 [btrfs]
        btrfs_lookup_bio_sums+0x21a/0x560 [btrfs]
        btrfs_submit_chunk+0x152/0x680 [btrfs]
        btrfs_submit_bio+0x1c/0x50 [btrfs]
        submit_one_bio+0x40/0x80 [btrfs]
        submit_extent_page+0x158/0x390 [btrfs]
        btrfs_do_readpage+0x330/0x740 [btrfs]
        extent_readahead+0x38d/0x6c0 [btrfs]
        read_pages+0x94/0x2c0
        page_cache_ra_unbounded+0x12d/0x190
        relocate_file_extent_cluster+0x7c1/0x9d0 [btrfs]
        relocate_block_group+0x2d3/0x560 [btrfs]
        btrfs_relocate_block_group+0x2c7/0x4b0 [btrfs]
        btrfs_relocate_chunk+0x4c/0x1a0 [btrfs]
        btrfs_balance+0x925/0x13c0 [btrfs]
        btrfs_ioctl+0x19f1/0x25d0 [btrfs]
        __x64_sys_ioctl+0x90/0xd0
        do_syscall_64+0x3f/0xf0
        entry_SYSCALL_64_after_hwframe+0x6e/0x76
      
      [CAUSE]
      The dying line is the btrfs_repair_io_failure() call inside
      btrfs_repair_eb_io_failure().
      
      The function still relies on the extent buffer using page sized folios.
      When the extent buffer uses a larger folio, we go into the 2nd slot of
      folios[] and trigger the NULL pointer dereference.
      
      [FIX]
      Migrate btrfs_repair_io_failure() to folio interfaces.
      
      So that when we hit a larger folio, we just submit the whole folio in
      one go.
      
      This also affects the data repair path through btrfs_end_repair_bio().
      Thankfully data is still fully page based, so we can just add an
      ASSERT() and use page_folio() to convert the page to a folio.
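
      On the data side this amounts to something like the following sketch
      (the actual call passes a few more arguments):

        /* In btrfs_end_repair_bio(): data is still fully page based. */
        struct folio *folio = page_folio(page);

        ASSERT(folio_order(folio) == 0);
        /*
         * Hand the folio (and the offset within it) to
         * btrfs_repair_io_failure() instead of the raw page.
         */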
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      96c36eaa
    • btrfs: migrate eb_bitmap_offset() to folio interfaces · f4521b01
      Qu Wenruo authored
      [BUG]
      Test case btrfs/002 would fail if larger folios are enabled for
      metadata:
      
       assertion failed: folio, in fs/btrfs/extent_io.c:4358
       ------------[ cut here ]------------
       kernel BUG at fs/btrfs/extent_io.c:4358!
       invalid opcode: 0000 [#1] PREEMPT SMP NOPTI
       CPU: 1 PID: 30916 Comm: fsstress Tainted: G           OE      6.7.0-rc3-custom+ #128
       Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 2/2/2022
       RIP: 0010:assert_eb_folio_uptodate+0x98/0xe0 [btrfs]
       Call Trace:
        <TASK>
        extent_buffer_test_bit+0x3c/0x70 [btrfs]
        free_space_test_bit+0xcd/0x140 [btrfs]
        modify_free_space_bitmap+0x27a/0x430 [btrfs]
        add_to_free_space_tree+0x8d/0x160 [btrfs]
        __btrfs_free_extent.isra.0+0xef1/0x13c0 [btrfs]
        __btrfs_run_delayed_refs+0x786/0x13c0 [btrfs]
        btrfs_run_delayed_refs+0x33/0x120 [btrfs]
        btrfs_commit_transaction+0xa2/0x1350 [btrfs]
        iterate_supers+0x77/0xe0
        ksys_sync+0x60/0xa0
        __do_sys_sync+0xa/0x20
        do_syscall_64+0x3f/0xf0
        entry_SYSCALL_64_after_hwframe+0x6e/0x76
        </TASK>
      
      [CAUSE]
      The function extent_buffer_test_bit() is not folio compatible.
      
      It still assumes the old fixed page size; when an extent buffer with a
      large folio is passed in, only eb->folios[0] is populated.
      
      If the target bit range falls in the 2nd page of the folio, we would
      check eb->folios[1] and trigger the ASSERT().
      
      [FIX]
      Just migrate eb_bitmap_offset() to folio interfaces, using folio_size()
      to replace PAGE_SIZE.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      f4521b01
    • btrfs: migrate various end io functions to folios · a700ca5e
      Qu Wenruo authored
      If we still use the old page based iterator functions, like
      bio_for_each_segment_all(), we can hit middle pages of a folio (compound
      page).
      
      In that case, if we set any page flag on those middle pages, we can
      easily trigger VM_BUG_ON(), as compound page flags should follow their
      flag policies (normally only set on the leading or tail page).
      
      To avoid such problems in the future full folio migration, here we do:
      
      - Change from bio_for_each_segment_all() to bio_for_each_folio_all()
        This completely removes the ability to access the middle page.
      
      - Add extra ASSERT()s for data read/write paths
        To ensure we only get single paged folio for data now.
      
      - Rename those end io functions to follow a certain schema
        * end_bbio_compressed_read()
        * end_bbio_compressed_write()
      
          These two endio functions don't set any page flags, as they use pages
          not mapped to any address space.
          They can be very good candidates for higher order folio testing.
      
          And they are shared between compression and encoded IO.
      
        * end_bbio_data_read()
        * end_bbio_data_write()
        * end_bbio_meta_read()
        * end_bbio_meta_write()
      
        The old function names are not unified:
          - end_bio_extent_writepage()
          - end_bio_extent_readpage()
          - extent_buffer_write_end_io()
          - extent_buffer_read_end_io()
      
        They share no schema on where the "end_*io" string should be, and
        using just "extent_buffer" and "extent" to distinguish the data and
        metadata paths is confusing.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      a700ca5e
    • btrfs: migrate subpage code to folio interfaces · 55151ea9
      Qu Wenruo authored
      Although subpage itself conflicts with higher order folios, since subpage
      (sectorsize < PAGE_SIZE and nodesize < PAGE_SIZE) means we will never
      need higher order folios, there is a hidden pitfall:
      
      - btrfs_page_*() helpers
      
      Those helpers are an abstraction to handle both subpage and non-subpage
      cases, which means we're going to pass page pointers to those helpers.
      
      And since those helpers are shared between data and metadata paths, it's
      unavoidable to let them handle folios, including higher order folios.
      
      Meanwhile for the true subpage case, we should only have single page
      backed folios anyway, thus add a new ASSERT() to btrfs_subpage_assert()
      to ensure that.
      
      Also, since those helpers are shared between both data and metadata, add
      some extra ASSERT()s for the data path to make sure we only get single
      page backed folios for now.
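
      The new assertion can be as simple as this sketch (the real helper has
      more context around it):

        /* Subpage only ever deals with single page folios for now. */
        ASSERT(folio_order(folio) == 0);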
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      55151ea9
    • btrfs: migrate get_eb_page_index() and get_eb_offset_in_page() to folios · 8d993618
      Qu Wenruo authored
      These two functions are still using the old page based code, which is
      not going to handle larger folios at all.
      
      The migration itself is going to involve the following changes:
      
      - PAGE_SIZE -> folio_size()
      - PAGE_SHIFT -> folio_shift()
      - get_eb_page_index() -> get_eb_folio_index()
      - get_eb_offset_in_page() -> get_eb_offset_in_folio()
      
      And since we're going to support larger folios, although the straight
      conversion above is good enough, this patch adds extra comments in the
      involved functions to explain why the same single line of code can now
      cover 3 cases:
      
      - folio_size == PAGE_SIZE, sectorsize == PAGE_SIZE, nodesize >= PAGE_SIZE
        The common, non-subpage case with per-page folio.
      
      - folio_size > PAGE_SIZE, sectorsize == PAGE_SIZE, nodesize >= PAGE_SIZE
        The incoming larger folio, non-subpage case.
      
      - folio_size == PAGE_SIZE, sectorsize < PAGE_SIZE, nodesize < PAGE_SIZE
        The existing subpage case, where we won't have larger folios anyway.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      8d993618
    • btrfs: don't double put our subpage reference in alloc_extent_buffer · 4a565c80
      Josef Bacik authored
      This fixes a case in "btrfs: refactor alloc_extent_buffer() to
      allocate-then-attach method".
      
      We have been seeing panics in the CI for the subpage stuff recently, it
      happens on btrfs/187 but could potentially happen anywhere.
      
      In the subpage case, if we race with somebody else inserting the same
      extent buffer, the error case will end up calling
      detach_extent_buffer_page() on the page twice.
      
      This is done first in this loop:
      
      for (int i = 0; i < attached; i++)
      	detach_extent_buffer_page(eb, eb->pages[i]);
      
      and then again in btrfs_release_extent_buffer().
      
      This works fine for !subpage because we are the only user of the page
      private, so when we do the initial detach_extent_buffer_page() we know
      we've completely removed it.
      
      However for subpage we could be using this page private elsewhere, so
      this results in a double put on the subpage, which can result in an
      early freeing.
      
      The fix here is to clear eb->pages[i] for everything we detach.  Then
      anything still attached to the eb is freed in
      btrfs_release_extent_buffer().
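
      With the fix, the cleanup loop roughly becomes (a sketch based on the
      description above):

        for (int i = 0; i < attached; i++) {
                detach_extent_buffer_page(eb, eb->pages[i]);
                /*
                 * Clear the slot so btrfs_release_extent_buffer() does not
                 * detach (and put) the same page a second time.
                 */
                eb->pages[i] = NULL;
        }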
      
      Because of this change we must update
      btrfs_release_extent_buffer_pages() to not use num_extent_folios(),
      because that helper assumes eb->folios[0] is set properly.  Since this
      code is only interested in freeing any pages we have on the extent
      buffer, we can simply use INLINE_EXTENT_BUFFER_PAGES.
      Reviewed-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      4a565c80
    • btrfs: cleanup metadata page pointer usage · 13df3775
      Qu Wenruo authored
      Although we have migrated extent_buffer::pages[] to folios[], we're
      still mostly using the folio_page() helper to grab the page.
      
      This patch does the following cleanups for metadata:
      
      - Introduce num_extent_folios() helper
        This is to replace most num_extent_pages() callers.
      
      - Use num_extent_folios() to iterate future large folios
        This allows us to use things like
        bio_add_folio()/bio_add_folio_nofail(), and only set the needed flags
        for the folio (aka the leading/tailing page), which reduces the loop
        iteration to 1 for large folios.
      
      - Change metadata related functions to use folio pointers
        Including their function name, involving:
        * attach_extent_buffer_page()
        * detach_extent_buffer_page()
        * page_range_has_eb()
        * btrfs_release_extent_buffer_pages()
        * btree_clear_page_dirty()
        * btrfs_page_inc_eb_refs()
        * btrfs_page_dec_eb_refs()
      
      - Change btrfs_is_subpage() to accept an address_space pointer
        This is to allow both page->mapping and folio->mapping to be utilized.
        As data is still using the old per-page code and may keep doing so
        for a while.
      
      - Special corner case placeholder for future order mismatches between
        the extent buffer and the inode filemap
        For now it's just a block of comments and a dead ASSERT(), no real
        handling yet.
      
      The subpage code still works on pages, simply because subpage and large
      folios are conflicting conditions, thus we don't need to bother subpage
      with higher order folios at all; folio_page(folio, 0) is enough.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      [ minor styling tweaks ]
      Signed-off-by: David Sterba <dsterba@suse.com>
      13df3775
    • btrfs: migrate extent_buffer::pages[] to folio · 082d5bb9
      Qu Wenruo authored
      For now extent_buffer::pages[] still only accepts single page pointers,
      thus we can migrate to folios pretty easily.
      
      As for single page, page and folio are 1:1 mapped, including their page
      flags.
      
      This patch just does the conversion from struct page to struct folio,
      providing the first step towards higher order folios in the future.
      
      This conversion is pretty simple:
      
      - extent_buffer::pages[] -> extent_buffer::folios[]
      
      - page_address(eb->pages[i]) -> folio_address(eb->folios[i])
      
      - eb->pages[i] -> folio_page(eb->folios[i], 0)
      
      There would be more specific cleanups preparing for the incoming higher
      order folio support.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      082d5bb9
    • btrfs: refactor alloc_extent_buffer() to allocate-then-attach method · 09e6cef1
      Qu Wenruo authored
      Currently alloc_extent_buffer() utilizes find_or_create_page() to
      allocate one page at a time for an extent buffer.
      
      This method has the following disadvantages:
      
      - find_or_create_page() is the legacy way of allocating new pages
        With the new folio infrastructure, find_or_create_page() is just
        redirected to filemap_get_folio().
      
      - Lacks a way to support higher order (order >= 1) folios
        As we cannot yet let filemap give us a higher order folio.
      
      This patch changes the workflow in the following way:
      
      		Old		   |		new
      -----------------------------------+-------------------------------------
                                         | ret = btrfs_alloc_page_array();
      for (i = 0; i < num_pages; i++) {  | for (i = 0; i < num_pages; i++) {
          p = find_or_create_page();     |     ret = filemap_add_folio();
          /* Attach page private */      |     /* Reuse page cache if needed */
          /* Reused eb if needed */      |
      				   |     /* Attach page private and
      				   |        reuse eb if needed */
      				   | }
      
      With this we split the page allocation and the private attaching into
      two parts, allowing easier future updates to each part, and a migration
      to folio interfaces (especially for possible higher order folios).
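
      A condensed sketch of the new flow (error handling and the existing-eb
      reuse path are left out, and the helper signatures are simplified):

        /* Step 1: allocate all pages up front. */
        ret = btrfs_alloc_page_array(num_pages, eb->pages);
        if (ret)
                goto out;

        /* Step 2: add them to the page cache and attach the private data. */
        for (int i = 0; i < num_pages; i++) {
                struct folio *folio = page_folio(eb->pages[i]);

                ret = filemap_add_folio(mapping, folio, index + i, GFP_NOFS);
                if (ret == -EEXIST) {
                        /* Page already cached: reuse the existing folio
                         * (and possibly an existing eb) instead. */
                }
                /* Attach the private/subpage metadata here. */
        }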
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      09e6cef1
    • btrfs: sysfs: validate scrub_speed_max value · 2b0122aa
      David Disseldorp authored
      The value set as scrub_speed_max accepts sizes with suffixes
      (k/m/g/t/p/e), but we should still validate it for trailing characters,
      similar to what we do with chunk_size_store.
      
      CC: stable@vger.kernel.org # 5.15+
      Signed-off-by: David Disseldorp <ddiss@suse.de>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      2b0122aa
    • btrfs: switch btrfs_root::delayed_nodes_tree to xarray from radix-tree · 6140ba8a
      David Sterba authored
      The radix-tree has been superseded by the xarray
      (https://lwn.net/Articles/745073), so this patch converts
      btrfs_root::delayed_nodes; the APIs are used in a simple way.
      
      The first idea is to do xa_insert(), but this would require a GFP_ATOMIC
      allocation, which we want to avoid if possible. The preload mechanism of
      the radix-tree can be emulated within the xarray API:
      
      - xa_reserve() with GFP_NOFS outside of the lock, the reserved entry
        is inserted atomically at most once
      
      - xa_store() under a lock, in case something races in we can detect that
        and xa_load() returns a valid pointer
      
      All uses of xa_load() must check for a valid pointer in case they manage
      to get between the xa_reserve() and xa_store(), this is handled in
      btrfs_get_delayed_node().
      
      Otherwise the functionality is equivalent, xarray implements the
      radix-tree and there should be no performance difference.
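
      A condensed sketch of that pattern (simplified from the delayed node
      setup; the exact locking in the real code differs):

        /* Outside the lock: reserve the slot, allocation may sleep. */
        ret = xa_reserve(&root->delayed_nodes, ino, GFP_NOFS);
        if (ret)
                return ERR_PTR(ret);

        spin_lock(&root->inode_lock);
        existing = xa_load(&root->delayed_nodes, ino);
        if (existing) {
                /* Someone raced in between reserve and store: reuse theirs. */
                node = existing;
        } else {
                /* Fill the reserved slot; no allocation is needed here. */
                xa_store(&root->delayed_nodes, ino, node, GFP_ATOMIC);
        }
        spin_unlock(&root->inode_lock);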
      
      The patch continues the efforts started in 253bf575 ("btrfs: turn
      delayed_nodes_tree into an XArray") and fixes the locking and GFP flag
      problems that led to 088aea3b ("Revert "btrfs: turn delayed_nodes_tree
      into an XArray"").
      Reviewed-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      6140ba8a
    • btrfs: fix typos found by codespell · eefaf0a1
      David Sterba authored
      Signed-off-by: David Sterba <dsterba@suse.com>
      eefaf0a1
    • btrfs: fix mismatching parameter names for btrfs_get_extent() · 4618d0a6
      Qu Wenruo authored
      The definition of btrfs_get_extent() uses "u64 end" as the last
      parameter, but the implementation uses "u64 len", and all call sites
      follow the implementation.
      
      This can be very confusing during development, as most developers,
      including me, would just use the snippet returned by the LSP (clangd in
      my case), which only checks the definition.
      
      Unfortunately this mismatch has been there since the very beginning of
      btrfs.
      
      Fix it to prevent further confusion.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      4618d0a6
    • btrfs: use the flags of an extent map to identify the compression type · f86f7a75
      Filipe Manana authored
      Currently, in struct extent_map, we use an unsigned int (32 bits) to
      identify the compression type of an extent and an unsigned long (64 bits
      on a 64 bits platform, 32 bits otherwise) for flags. We are only using
      6 different flags, so an unsigned long is excessive and we can use flags
      to identify the compression type instead of using a dedicated 32 bits
      field.
      
      We can easily have tens or hundreds of thousands (or more) of extent maps
      on busy and large filesystems, especially with compression enabled or many
      or large files with tons of small extents. So it's convenient to have the
      extent_map structure as small as possible in order to use less memory.
      
      So remove the compression type field from struct extent_map, use flags
      to identify the compression type and shorten the flags field from an
      unsigned long to a u32. This saves 8 bytes (on 64 bits platforms) and
      reduces the size of the structure from 136 bytes down to 128 bytes, using
      now only two cache lines, and increases the number of extent maps we can
      have per 4K page from 30 to 32. By using a u32 for the flags instead of
      an unsigned long, we no longer use test_bit(), set_bit() and clear_bit(),
      but that level of atomicity is not needed as most flags are never cleared
      once set (before adding an extent map to the tree), and the ones that can
      be cleared or set after an extent map is added to the tree, are always
      performed while holding the write lock on the extent map tree, while the
      reader holds a lock on the tree or tests for a flag that never changes
      once the extent map is in the tree (such as compression flags).
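
      Conceptually the compression type is then derived from dedicated flag
      bits, along these lines (the flag and helper names here are assumptions
      for illustration, not necessarily the upstream identifiers):

        static inline int extent_map_compression(const struct extent_map *em)
        {
                if (em->flags & EXTENT_FLAG_COMPRESS_ZLIB)
                        return BTRFS_COMPRESS_ZLIB;
                if (em->flags & EXTENT_FLAG_COMPRESS_LZO)
                        return BTRFS_COMPRESS_LZO;
                if (em->flags & EXTENT_FLAG_COMPRESS_ZSTD)
                        return BTRFS_COMPRESS_ZSTD;
                return BTRFS_COMPRESS_NONE;
        }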
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      f86f7a75
    • btrfs: refactor mergable_maps() for more readability · 27f0d9c9
      Filipe Manana authored
      At mergable_maps(), instead of having a single if statement with many
      ORed and ANDed conditions, refactor it into multiple if statements that
      each check a single condition and return immediately once a requirement
      fails.
      This makes it easier to read.
      
      Also change the return type from int to bool, make the arguments const
      and rename the function from mergable_maps() to mergeable_maps().
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      27f0d9c9
    • btrfs: make extent_map_end() argument const · b144cc04
      Filipe Manana authored
      The extent map pointer argument for extent_map_end() can be const as we
      are not modifying anything in the extent map. So make it const, as it will
      allow further changes to callers that have a const extent map pointer.
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      b144cc04
    • btrfs: avoid useless rbtree iterations when attempting to merge extent map · 1a9fb16c
      Filipe Manana authored
      When trying to merge an extent map that was just inserted or unpinned, we
      will try to merge it with any adjacent extent map that is suitable.
      
      However we will only check if our extent map is mergeable after searching
      for the previous and next extent maps in the rbtree, meaning that we are
      doing unnecessary calls to rb_prev() and rb_next() in case our extent map
      is not mergeable (it's compressed, in the list of modified extents, being
      logged or pinned), wasting CPU time chasing rbtree pointers and pulling
      in unnecessary cache lines.
      
      So change the logic to check first if an extent map is mergeable before
      searching for the next and previous extent maps in the rbtree.
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      1a9fb16c
    • btrfs: log messages at unpin_extent_range() during unexpected cases · 00deaf04
      Filipe Manana authored
      At unpin_extent_range() we trigger a WARN_ON() when we don't find an
      extent map or we find one with a start offset not matching the start
      offset of the target range. This however isn't very useful for debugging
      because:
      
      1) We don't know which condition was triggered, as they are both in the
         same WARN_ON() call;
      
      2) We don't know which inode was affected, from which root, for which
         range, what's the start offset of the extent map, and so on.
      
      So trigger a separate warning for each case and log a message for each
      case providing information about the inode, its root, the target range,
      the generation and the start offset of the extent map we found.
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      00deaf04