26 Oct, 2021 (40 commits)
    • btrfs: avoid expensive search when truncating inode items from the log · 4934a815
      Filipe Manana authored
      Whenever we are logging a file inode in full sync mode we call
      btrfs_truncate_inode_items() to delete items of the inode we may have
      previously logged.
      
      That results in doing a btree search for deletion, which is expensive
      because it always acquires write locks for extent buffers at levels 2, 1
      and 0, and it balances any node that is less than half full. Acquiring
      the write locks can block the task if the extent buffers are already
      locked by another task or block other tasks attempting to lock them,
      which is especially bad in the case of log trees since they are small due to
      their short life, with a root node at a level typically not greater than
      level 2.
      
      If we know that we are logging the inode for the first time in the current
      transaction, we can skip the call to btrfs_truncate_inode_items(), avoiding
      the deletion search. This change does that.
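      The decision this commit describes can be sketched in user space. This is a
      minimal, illustrative model; the struct and field names are hypothetical
      stand-ins for the kernel's btrfs_inode bookkeeping, not the real definitions.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical, simplified model of an inode's logging state. */
struct log_inode {
    uint64_t logged_trans; /* last transaction this inode was logged in */
};

/*
 * Only pay for the expensive btree deletion search when the inode was
 * already logged in the current transaction; on a first-time log there
 * is nothing of ours in the log tree to truncate.
 */
static bool need_truncate_logged_items(const struct log_inode *inode,
                                       uint64_t current_trans)
{
    return inode->logged_trans == current_trans;
}
```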
      
      This patch is part of a patch set comprised of the following patches:
      
        btrfs: check if a log tree exists at inode_logged()
        btrfs: remove no longer needed checks for NULL log context
        btrfs: do not log new dentries when logging that a new name exists
        btrfs: always update the logged transaction when logging new names
        btrfs: avoid expensive search when dropping inode items from log
        btrfs: add helper to truncate inode items when logging inode
        btrfs: avoid expensive search when truncating inode items from the log
        btrfs: avoid search for logged i_size when logging inode if possible
        btrfs: avoid attempt to drop extents when logging inode for the first time
        btrfs: do not commit delayed inode when logging a file in full sync mode
      
      This is patch 7/10 and test results are listed in the change log of the
      last patch in the set.
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: add helper to truncate inode items when logging inode · 8a2b3da1
      Filipe Manana authored
      Move the call to btrfs_truncate_inode_items(), and the surrounding retry
      loop, into a local helper function. This avoids some repetition and
      keeps the next change from becoming awkward due to excessive indentation.
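      The shape of the refactor can be sketched as follows. do_truncate_step() is
      a hypothetical stand-in for btrfs_truncate_inode_items(), assumed here to
      return -EAGAIN when it has to drop locks and must be called again; the
      helper name is illustrative.

```c
#include <assert.h>
#include <errno.h>

/* Stand-in for one truncate attempt; pretends more work remains while
 * *steps_left > 0, mimicking a call that must be retried. */
static int do_truncate_step(int *steps_left)
{
    if (*steps_left > 0) {
        (*steps_left)--;
        return -EAGAIN; /* more items remain, caller must retry */
    }
    return 0;
}

/* The helper hides the retry loop from the (deeply indented) caller. */
static int truncate_log_inode_items(int *steps_left)
{
    int ret;

    do {
        ret = do_truncate_step(steps_left);
    } while (ret == -EAGAIN);

    return ret;
}
```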
      
      This patch is part of a patch set comprised of the following patches:
      
        btrfs: check if a log tree exists at inode_logged()
        btrfs: remove no longer needed checks for NULL log context
        btrfs: do not log new dentries when logging that a new name exists
        btrfs: always update the logged transaction when logging new names
        btrfs: avoid expensive search when dropping inode items from log
        btrfs: add helper to truncate inode items when logging inode
        btrfs: avoid expensive search when truncating inode items from the log
        btrfs: avoid search for logged i_size when logging inode if possible
        btrfs: avoid attempt to drop extents when logging inode for the first time
        btrfs: do not commit delayed inode when logging a file in full sync mode
      
      This is patch 6/10 and test results are listed in the change log of the
      last patch in the set.
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: avoid expensive search when dropping inode items from log · 88e221cd
      Filipe Manana authored
      Whenever we are logging a directory inode, logging that an inode exists or
      logging an inode that has changes in its references or xattrs, we attempt
      to delete items of this inode we may have previously logged (through calls
      to drop_objectid_items()).
      
      That attempt does a btree search for deletion, which is expensive because
      it always acquires write locks for extent buffers at levels 2, 1 and 0,
      and it balances any node that is less than half full. Acquiring the write
      locks can block the task if the extent buffers are already locked or block
      other tasks attempting to lock them, which is especially bad in the case of log
      trees since they are small due to their short life, with a root node at a
      level typically not greater than level 2.
      
      If we know that we are logging the inode for the first time in the current
      transaction, we can skip the search. This change does that.
      
      This patch is part of a patch set comprised of the following patches:
      
        btrfs: check if a log tree exists at inode_logged()
        btrfs: remove no longer needed checks for NULL log context
        btrfs: do not log new dentries when logging that a new name exists
        btrfs: always update the logged transaction when logging new names
        btrfs: avoid expensive search when dropping inode items from log
        btrfs: add helper to truncate inode items when logging inode
        btrfs: avoid expensive search when truncating inode items from the log
        btrfs: avoid search for logged i_size when logging inode if possible
        btrfs: avoid attempt to drop extents when logging inode for the first time
        btrfs: do not commit delayed inode when logging a file in full sync mode
      
      This is patch 5/10 and test results are listed in the change log of the
      last patch in the set.
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: always update the logged transaction when logging new names · 130341be
      Filipe Manana authored
      When we are logging a new name for an inode, due to a link or rename
      operation, if the inode has ancestor inodes that are new, created in the
      current transaction, we need to log that these inodes exist. To ensure
      that a subsequent explicit fsync on one of these ancestor inodes does
      sync the log, we don't set the logged_trans field of these inodes.
      This was done in commit 75b463d2 ("btrfs: do not commit logs and
      transactions during link and rename operations"), to avoid syncing a
      log after a rename or link operation.
      
      In order to allow for future changes to do some optimizations, change
      this behaviour to always update the logged_trans of any logged inode
      and don't update the last_log_commit of the inode if we are logging
      that it exists. This accomplishes that same objective with simpler
      logic, allowing for some optimizations in the next patches.
      
      So just do that simplification.
      
      This patch is part of a patch set comprised of the following patches:
      
        btrfs: check if a log tree exists at inode_logged()
        btrfs: remove no longer needed checks for NULL log context
        btrfs: do not log new dentries when logging that a new name exists
        btrfs: always update the logged transaction when logging new names
        btrfs: avoid expensive search when dropping inode items from log
        btrfs: add helper to truncate inode items when logging inode
        btrfs: avoid expensive search when truncating inode items from the log
        btrfs: avoid search for logged i_size when logging inode if possible
        btrfs: avoid attempt to drop extents when logging inode for the first time
        btrfs: do not commit delayed inode when logging a file in full sync mode
      
      This is patch 4/10 and test results are listed in the change log of the
      last patch in the set.
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: do not log new dentries when logging that a new name exists · c48792c6
      Filipe Manana authored
      When logging a new name for an inode, due to a link or rename operation,
      we don't need to log all new dentries of the parent directories and their
      subdirectories. We only want to log the names of the inode and that any
      new parent directories exist. So in this case don't trigger logging of
      the new dentries, which is only needed when doing an explicit fsync on a
      directory or on a file which requires logging its parent directories.
      
      This avoids unnecessary work and reduces contention on the extent buffers
      of a log tree.
      
      This patch is part of a patch set comprised of the following patches:
      
        btrfs: check if a log tree exists at inode_logged()
        btrfs: remove no longer needed checks for NULL log context
        btrfs: do not log new dentries when logging that a new name exists
        btrfs: always update the logged transaction when logging new names
        btrfs: avoid expensive search when dropping inode items from log
        btrfs: add helper to truncate inode items when logging inode
        btrfs: avoid expensive search when truncating inode items from the log
        btrfs: avoid search for logged i_size when logging inode if possible
        btrfs: avoid attempt to drop extents when logging inode for the first time
        btrfs: do not commit delayed inode when logging a file in full sync mode
      
      This is patch 3/10 and test results are listed in the change log of the
      last patch in the set.
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: remove no longer needed checks for NULL log context · 289cffcb
      Filipe Manana authored
      Since commit 75b463d2 ("btrfs: do not commit logs and transactions
      during link and rename operations"), we always pass a non-NULL log context
      to btrfs_log_inode_parent() and therefore to all the functions that it
      calls. So remove the checks we have all over the place that test for a
      NULL log context, making the code shorter and easier to read, as well as
      reducing the size of the generated code.
      
      This patch is part of a patch set comprised of the following patches:
      
        btrfs: check if a log tree exists at inode_logged()
        btrfs: remove no longer needed checks for NULL log context
        btrfs: do not log new dentries when logging that a new name exists
        btrfs: always update the logged transaction when logging new names
        btrfs: avoid expensive search when dropping inode items from log
        btrfs: add helper to truncate inode items when logging inode
        btrfs: avoid expensive search when truncating inode items from the log
        btrfs: avoid search for logged i_size when logging inode if possible
        btrfs: avoid attempt to drop extents when logging inode for the first time
        btrfs: do not commit delayed inode when logging a file in full sync mode
      
      This is patch 2/10 and test results are listed in the change log of the
      last patch in the set.
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: check if a log tree exists at inode_logged() · 1e0860f3
      Filipe Manana authored
      In case an inode was never logged since it was loaded from disk and was
      modified in the current transaction (its ->last_trans matches the ID of
      the current transaction), inode_logged() returns true even if there's no
      existing log tree. In this case we can simply check if a log tree exists
      and return false if it does not. This avoids a caller of inode_logged()
      doing some unnecessary, but harmless, work.
      
      For btrfs_log_new_name() it avoids logging an inode in case it was
      never logged since it was loaded from disk and there is currently no log
      tree for the inode's root. For the remaining callers of inode_logged(),
      btrfs_del_dir_entries_in_log() and btrfs_del_inode_ref_in_log(), it has
      no effect since they already check if a log tree exists through their
      calls to join_running_log_trans().
      
      So just add a check to inode_logged() to verify if a log tree exists, and
      return false if it does not.
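      A simplified model of the check this commit adds, with hypothetical
      stand-in structs (has_log_tree models root->log_root != NULL; the real
      kernel code of course works on the actual btrfs structures):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct root_state {
    bool has_log_tree;     /* stand-in for root->log_root != NULL */
};

struct inode_state {
    uint64_t logged_trans; /* last transaction the inode was logged in */
    uint64_t last_trans;   /* last transaction the inode was modified in */
};

static bool inode_logged(const struct root_state *root,
                         const struct inode_state *inode, uint64_t transid)
{
    if (inode->logged_trans == transid)
        return true;
    /*
     * Never logged since loaded from disk: it can only be in the log if
     * it was modified in this transaction AND a log tree actually exists.
     */
    return inode->last_trans == transid && root->has_log_tree;
}
```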
      
      This patch is part of a patch set comprised of the following patches:
      
        btrfs: check if a log tree exists at inode_logged()
        btrfs: remove no longer needed checks for NULL log context
        btrfs: do not log new dentries when logging that a new name exists
        btrfs: always update the logged transaction when logging new names
        btrfs: avoid expensive search when dropping inode items from log
        btrfs: add helper to truncate inode items when logging inode
        btrfs: avoid expensive search when truncating inode items from the log
        btrfs: avoid search for logged i_size when logging inode if possible
        btrfs: avoid attempt to drop extents when logging inode for the first time
        btrfs: do not commit delayed inode when logging a file in full sync mode
      
      This is patch 1/10 and test results are listed in the change log of the
      last patch in the set.
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: remove stale comment about the btrfs_show_devname · cdccc03a
      Anand Jain authored
      There were a few lockdep warnings because btrfs_show_devname() was using
      device_list_mutex as recorded in the commits:
      
        0ccd0528 ("btrfs: fix a possible umount deadlock")
        779bf3fe ("btrfs: fix lock dep warning, move scratch dev out of device_list_mutex and uuid_mutex")
      
      And finally, commit 88c14590 ("btrfs: use RCU in btrfs_show_devname
      for device list traversal") removed the device_list_mutex from
      btrfs_show_devname for performance reasons.
      
      This patch removes a stale comment about the function
      btrfs_show_devname and device_list_mutex.
      Signed-off-by: Anand Jain <anand.jain@oracle.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: update latest_dev when we create a sprout device · b7cb29e6
      Anand Jain authored
      When we add a device to the seed filesystem (sprouting), the result is
      a new filesystem (and fsid) on the added device. Update the latest_dev
      so that /proc/self/mounts shows the correct device.
      
      Example:
      
        $ btrfstune -S1 /dev/vg/seed
        $ mount /dev/vg/seed /btrfs
        mount: /btrfs: WARNING: device write-protected, mounted read-only.
      
        $ cat /proc/self/mounts | grep btrfs
        /dev/mapper/vg-seed /btrfs btrfs ro,relatime,space_cache,subvolid=5,subvol=/ 0 0
      
        $ btrfs dev add -f /dev/vg/new /btrfs
      
      Before:
      
        $ cat /proc/self/mounts | grep btrfs
        /dev/mapper/vg-seed /btrfs btrfs ro,relatime,space_cache,subvolid=5,subvol=/ 0 0
      
      After:
      
        $ cat /proc/self/mounts | grep btrfs
        /dev/mapper/vg-new /btrfs btrfs ro,relatime,space_cache,subvolid=5,subvol=/ 0 0
      Tested-by: Su Yue <l@damenly.su>
      Signed-off-by: Anand Jain <anand.jain@oracle.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: use latest_dev in btrfs_show_devname · 6605fd2f
      Anand Jain authored
      The test case btrfs/238 reports the warning below:
      
       WARNING: CPU: 3 PID: 481 at fs/btrfs/super.c:2509 btrfs_show_devname+0x104/0x1e8 [btrfs]
       CPU: 2 PID: 1 Comm: systemd Tainted: G        W  O 5.14.0-rc1-custom #72
       Hardware name: QEMU QEMU Virtual Machine, BIOS 0.0.0 02/06/2015
       Call trace:
         btrfs_show_devname+0x108/0x1b4 [btrfs]
         show_mountinfo+0x234/0x2c4
         m_show+0x28/0x34
         seq_read_iter+0x12c/0x3c4
         vfs_read+0x29c/0x2c8
         ksys_read+0x80/0xec
         __arm64_sys_read+0x28/0x34
         invoke_syscall+0x50/0xf8
         do_el0_svc+0x88/0x138
         el0_svc+0x2c/0x8c
         el0t_64_sync_handler+0x84/0xe4
         el0t_64_sync+0x198/0x19c
      
      Reason:
      While btrfs_prepare_sprout() moves the fs_devices::devices into
      fs_devices::seed_list, btrfs_show_devname() searches the device list,
      finds none, and triggers the warning above.
      
      Fix:
      latest_dev is updated according to the changes to the device list.
      That means we can use latest_dev->name to show the device name in
      /proc/self/mounts; the pointer is always valid as it's assigned
      before the device is deleted from the list in remove or replace.
      The RCU protection is sufficient as the device structure is freed after
      synchronization.
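      The fix can be sketched as follows. The structs are illustrative
      stand-ins, and the sketch omits the RCU protection the kernel code
      relies on to keep the name safe to dereference:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

struct btrfs_dev {
    const char *name;
};

struct fs_devices {
    struct btrfs_dev *latest_dev; /* updated on add/remove/replace */
};

/* Instead of walking the (possibly empty) device list, read the cached
 * latest_dev pointer, which is assigned before any device is deleted. */
static const char *show_devname(const struct fs_devices *fs_devices)
{
    return fs_devices->latest_dev ? fs_devices->latest_dev->name : NULL;
}
```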
      Reported-by: Su Yue <l@damenly.su>
      Tested-by: Su Yue <l@damenly.su>
      Signed-off-by: Anand Jain <anand.jain@oracle.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: convert latest_bdev type to btrfs_device and rename · d24fa5c1
      Anand Jain authored
      In preparation to fix a bug in btrfs_show_devname().
      
      Convert fs_devices::latest_bdev type from struct block_device to struct
      btrfs_device and rename the member to fs_devices::latest_dev, so that
      btrfs_show_devname() can use fs_devices::latest_dev::name.
      Tested-by: Su Yue <l@damenly.su>
      Signed-off-by: Anand Jain <anand.jain@oracle.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: zoned: finish relocating block group · 7ae9bd18
      Naohiro Aota authored
      We will no longer write to a relocating block group. So, we can finish it
      now.
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: zoned: finish fully written block group · be1a1d7a
      Naohiro Aota authored
      If we have written to the zone capacity, the device automatically
      deactivates the zone. Sync up block group side (the active BG list and
      zone_is_active flag) with it.
      
      We need to do it on both data BGs and metadata BGs. On the data side,
      we add a hook to btrfs_finish_ordered_io(). On the metadata side, we
      use end_extent_buffer_writeback().
      
      To reduce excessive lookups of a block group, we mark the last extent buffer in
      a block group with EXTENT_BUFFER_ZONE_FINISH flag. This cannot be done for
      data (ordered_extent), because the address may change due to
      REQ_OP_ZONE_APPEND.
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: zoned: avoid chunk allocation if active block group has enough space · a85f05e5
      Naohiro Aota authored
      The current extent allocator tries to allocate a new block group when the
      existing block groups do not have enough space. On a ZNS device, a new
      block group means a new active zone. If the number of active zones has
      already reached the max_active_zones, activating a new zone needs to finish
      an existing zone, leading to wasting the free space there.
      
      So, instead, it should reuse the existing active block groups as much as
      possible when we can't activate any other zones without sacrificing an
      already activated block group.
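      The allocator policy described above can be sketched as a single decision
      function. This is a hypothetical simplification for illustration, not the
      actual find_free_extent() logic:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Only open a new block group (= a new active zone) when the active
 * ones cannot satisfy the request and another zone can still be
 * activated without finishing an existing one.
 */
static bool should_alloc_new_block_group(uint64_t needed,
                                         uint64_t free_in_active_bgs,
                                         unsigned int active_zones_left)
{
    if (free_in_active_bgs >= needed)
        return false; /* reuse an existing active block group */
    /* Going past max_active_zones would force finishing an existing
     * zone and waste its remaining free space. */
    return active_zones_left > 0;
}
```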
      
      While at it, I converted find_free_extent_update_loop() to check the
      found_extent() case early and made the other conditions simpler.
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: move ffe_ctl one level up · a12b0dc0
      Naohiro Aota authored
      We are already passing too many variables from btrfs_reserve_extent() to
      find_free_extent(). The next commit will add min_alloc_size to ffe_ctl, and
      that means another pass-through argument. Take this opportunity to move
      ffe_ctl one level up and drop the redundant arguments.
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: zoned: activate new block group · eb66a010
      Naohiro Aota authored
      Activate a new block group at btrfs_make_block_group(). We do not check
      the return value; if activation fails, we can try again later at the
      actual extent allocation phase.
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: zoned: activate block group on allocation · 2e654e4b
      Naohiro Aota authored
      Activate a block group when trying to allocate an extent from it. We
      check the read-only and no-space-left cases before trying to activate a
      block group, so as not to consume active zones uselessly.
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: zoned: load active zone info for block group · 68a384b5
      Naohiro Aota authored
      Load activeness of underlying zones of a block group. When underlying zones
      are active, we add the block group to the fs_info->zone_active_bgs list.
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: zoned: implement active zone tracking · afba2bc0
      Naohiro Aota authored
      Add zone_is_active flag to btrfs_block_group. This flag indicates the
      underlying zones are all active. Such zone active block groups are tracked
      by fs_info->active_bg_list.
      
      btrfs_dev_{set,clear}_active_zone() take responsibility for the underlying
      device part. They set/clear the bitmap to indicate zone activeness and
      count the number of zones we can activate left.
      
      btrfs_zone_{activate,finish}() take responsibility for the logical part and
      the list management. In addition, btrfs_zone_finish() waits for any
      writes to the zone and sends REQ_OP_ZONE_FINISH to it.
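      The device-side bookkeeping can be sketched as a bitmap plus a counter.
      This is a minimal illustration assuming at most 64 zones so one 64-bit
      word suffices; the kernel uses a real bitmap and atomic operations, and
      the names here are stand-ins:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct zoned_device {
    uint64_t active_zones;      /* one bit per active zone */
    unsigned int activate_left; /* zones we may still activate */
};

/* Mark a zone active, honoring the device's activation limit. */
static bool dev_set_active_zone(struct zoned_device *dev, unsigned int zone)
{
    if (dev->active_zones & (1ULL << zone))
        return true;  /* already active */
    if (dev->activate_left == 0)
        return false; /* max_active_zones reached */
    dev->active_zones |= 1ULL << zone;
    dev->activate_left--;
    return true;
}

/* Clear a zone's active bit and give the activation slot back. */
static void dev_clear_active_zone(struct zoned_device *dev, unsigned int zone)
{
    if (dev->active_zones & (1ULL << zone)) {
        dev->active_zones &= ~(1ULL << zone);
        dev->activate_left++;
    }
}
```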
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: zoned: introduce physical_map to btrfs_block_group · dafc340d
      Naohiro Aota authored
      We will use a block group's physical location to track active zones and
      finish fully written zones in the following commits. Since the zone
      activation is done in the extent allocation context, which is already holding
      the tree locks, we can't query the chunk tree for the physical locations.
      So, copy the location info into a block group and use it for activation.
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: zoned: load active zone information from devices · ea6f8ddc
      Naohiro Aota authored
      The ZNS specification defines a limit on the number of zones that can be in
      the implicit open, explicit open or closed conditions. Any zone in such
      a condition is defined as an active zone and corresponds to a zone that
      is being written or that has been only partially written. If the maximum
      number of active zones is reached, we must either reset or finish some
      active zones before being able to choose other zones for storing data.
      
      Load queue_max_active_zones() and track the number of active zones left on
      the device.
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: zoned: finish superblock zone once no space left for new SB · 8376d9e1
      Naohiro Aota authored
      If there is no more space left for a new superblock in a superblock zone,
      then it is better to ZONE_FINISH the zone and free up the active zone
      count.
      
      Since btrfs_advance_sb_log() can now issue REQ_OP_ZONE_FINISH, we also need
      to convert it to return int for the error case.
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: zoned: locate superblock position using zone capacity · 9658b72e
      Naohiro Aota authored
      sb_write_pointer() returns the write position of the next superblock.
      For a READ, we need the previous location. When the pointer is at the
      head of a zone, the previous one is the last slot of the other zone.
      Calculate that last slot's position from the zone capacity.
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: zoned: consider zone as full when no more SB can be written · 5daaf552
      Naohiro Aota authored
      We cannot write beyond the zone capacity. So, we should consider a zone
      as "full" when the write pointer goes beyond capacity minus the size of
      the superblock info.
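      The "full" test described above reduces to one comparison. A minimal
      sketch, where the 4 KiB constant mirrors the size of the btrfs superblock
      info and the function name is illustrative:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define SUPER_INFO_SIZE 4096ULL /* size of one superblock copy, in bytes */

/* A superblock zone is full once no further superblock copy would fit
 * between the write pointer and the zone capacity. */
static bool sb_zone_is_full(uint64_t write_pointer, uint64_t zone_capacity)
{
    return write_pointer + SUPER_INFO_SIZE > zone_capacity;
}
```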
      
      Also, take this opportunity to replace a subtle duplicated code with a loop
      and fix a typo in comment.
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: zoned: tweak reclaim threshold for zone capacity · d8da0e85
      Naohiro Aota authored
      With the introduction of zone capacity, the range [capacity, length] is
      always zone unusable. Counting this region as a reclaim target will
      cause reclaiming too early. Reclaim block groups based on bytes that can
      be usable after resetting.
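      The adjusted test can be sketched as below. The percentage form and
      names are assumptions for illustration; the point is that the decision
      is based on the zone capacity (bytes usable again after a reset), so
      the always-unusable [capacity, length) tail no longer triggers reclaim
      too early:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Reclaim when the used portion of the resettable bytes drops below
 * the threshold percentage. */
static bool should_reclaim_bg(uint64_t used, uint64_t capacity,
                              unsigned int reclaim_thresh_pct)
{
    return used * 100 < capacity * (uint64_t)reclaim_thresh_pct;
}
```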
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: zoned: calculate free space from zone capacity · 98173255
      Naohiro Aota authored
      Now that we have introduced capacity in a block group, we need to
      calculate free space using the capacity instead of the length. Thus,
      we account capacity - alloc_pointer bytes as free and the bytes in
      [capacity, length] as zone unusable.
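      The accounting change is simple arithmetic. A sketch with simplified
      stand-in fields for the block group:

```c
#include <assert.h>
#include <stdint.h>

struct zoned_bg {
    uint64_t length;        /* block group (zone) size */
    uint64_t capacity;      /* usable bytes, capacity <= length */
    uint64_t alloc_pointer; /* bytes already allocated */
};

/* Free space is measured against the capacity, not the length. */
static uint64_t bg_free_bytes(const struct zoned_bg *bg)
{
    return bg->capacity - bg->alloc_pointer;
}

/* [capacity, length) can never be written, so it starts out unusable. */
static uint64_t bg_initial_zone_unusable(const struct zoned_bg *bg)
{
    return bg->length - bg->capacity;
}
```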
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: zoned: move btrfs_free_excluded_extents out of btrfs_calc_zone_unusable · c46c4247
      Naohiro Aota authored
      btrfs_free_excluded_extents() is not necessary for
      btrfs_calc_zone_unusable() and it makes btrfs_calc_zone_unusable()
      difficult to reuse. Move it out and call btrfs_free_excluded_extents()
      in the proper context.
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: zoned: load zone capacity information from devices · 8eae532b
      Naohiro Aota authored
      The ZNS specification introduces the concept of a Zone Capacity.  A zone
      capacity is an additional per-zone attribute that indicates the number of
      usable logical blocks within each zone, starting from the first logical
      block of each zone. It is always smaller than or equal to the zone size.
      
      With the SINGLE profile, we can set a block group's "capacity" to the
      underlying zone's Zone Capacity. We will limit the allocation so it does
      not exceed the capacity in a following commit.
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: defrag: enable defrag for subpage case · c22a3572
      Qu Wenruo authored
      With the new infrastructure which has taken subpage into consideration,
      now we should be safe to allow defrag to work for subpage case.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: defrag: remove the old infrastructure · c6357573
      Qu Wenruo authored
      Now the old defrag infrastructure can all be removed.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: defrag: use defrag_one_cluster() to implement btrfs_defrag_file() · 7b508037
      Qu Wenruo authored
      The function defrag_one_cluster() is able to defrag one range well
      enough; we only need to do the preparation for it, including:
      
      - Clamp and align the defrag range
      - Exclude invalid cases
      - Proper inode locking
      
      The old infrastructure will not be removed in this patch, as it would
      be too noisy to review.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: defrag: introduce helper to defrag one cluster · b18c3ab2
      Qu Wenruo authored
      This new helper, defrag_one_cluster(), will defrag one cluster (at most
      256K):
      
      - Collect all initial targets
      
      - Kick in readahead when possible
      
      - Call defrag_one_range() on each initial target, with some extra
        range clamping
      
      - Update @sectors_defragged parameter
      
      This involves one behavior change: the defragged sectors accounting is
      no longer as accurate as the old behavior, as the initial targets are
      not consistent.
      
      We can have new holes punched inside the initial target, and we will
      skip such holes later.
      But the defragged sectors accounting doesn't need to be that accurate
      anyway, so I don't want to push that extra accounting burden into
      defrag_one_range().
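      The "extra range clamping" of an initial target against the cluster can be
      illustrated with a small userspace sketch. clamp_target() and struct
      target_range are hypothetical illustrations, not the kernel code:

```c
#include <assert.h>
#include <stdint.h>

#define CLUSTER_SIZE (256 * 1024) /* one cluster covers at most 256K */

/* Hypothetical simplified "initial target" collected before locking. */
struct target_range {
    uint64_t start;
    uint64_t len;
};

/* Clamp one initial target so it stays inside the cluster
 * [cluster_start, cluster_start + CLUSTER_SIZE). Returns the clamped
 * length, or 0 if the target lies entirely outside the cluster. */
uint64_t clamp_target(const struct target_range *t, uint64_t cluster_start)
{
    uint64_t cluster_end = cluster_start + CLUSTER_SIZE;
    uint64_t start = t->start < cluster_start ? cluster_start : t->start;
    uint64_t end = t->start + t->len;

    if (end > cluster_end)
        end = cluster_end;
    return end > start ? end - start : 0;
}
```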
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      b18c3ab2
    • Qu Wenruo's avatar
      btrfs: defrag: introduce helper to defrag a range · e9eec721
      Qu Wenruo authored
      A new helper, defrag_one_range(), is introduced to defrag one range.
      
      This function will mostly prepare the needed pages and extent status for
      defrag_one_locked_target().
      
      As we can only have a consistent view of the extent map with the page
      and extent bits locked, we need to re-check the range passed in to get
      the real target list for defrag_one_locked_target().
      
      Since defrag_collect_targets() will call defrag_lookup_extent() and
      lock the extent range, we also need to teach those two functions to
      skip the extent lock.  Thus a new parameter, @locked, is introduced to
      skip the extent lock if the caller has already locked the range.
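      The @locked pattern can be sketched as follows. This is a toy userspace
      model: lock_extent_range(), defrag_lookup() and the global flag are
      hypothetical stand-ins for the extent io tree lock, not the kernel API.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy lock state standing in for the extent io tree lock. */
bool extent_locked;

void lock_extent_range(void)
{
    assert(!extent_locked); /* catch double-locking in this toy model */
    extent_locked = true;
}

void unlock_extent_range(void)
{
    assert(extent_locked);
    extent_locked = false;
}

/* Hypothetical lookup mirroring the @locked parameter: only take the
 * extent lock when the caller does not already hold it. */
int defrag_lookup(bool locked)
{
    int ret = 0;

    if (!locked)
        lock_extent_range();
    /* ... the extent map lookup would happen here, under the lock ... */
    if (!locked)
        unlock_extent_range();
    return ret;
}
```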
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      e9eec721
    • Qu Wenruo's avatar
      btrfs: defrag: introduce helper to defrag a contiguous prepared range · 22b398ee
      Qu Wenruo authored
      A new helper, defrag_one_locked_target(), is introduced to do the real
      part of defrag.
      
      The caller needs to ensure both page and extents bits are locked, and no
      ordered extent exists for the range, and all writeback is finished.
      
      The core defrag part is pretty straightforward:
      
      - Reserve space
      - Set extent bits to defrag
      - Update involved pages to be dirty
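      The three steps can be sketched with a toy model. All names below are
      hypothetical; the real code reserves data space against the filesystem's
      space accounting, sets defrag extent bits in the extent io tree, and
      dirties page cache pages.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy space accounting and per-range state, standing in for the real
 * space_info / extent io tree / page cache machinery. */
struct toy_fs {
    uint64_t free_bytes;
};

struct toy_range {
    uint64_t reserved_bytes;
    bool defrag_bit_set;
    bool pages_dirty;
};

/* Sketch of the defrag core: reserve data space, tag the range with a
 * defrag extent bit, then dirty the pages so the next writeback rewrites
 * them contiguously. If the reservation fails, nothing else is done. */
int defrag_one_locked_range(struct toy_fs *fs, struct toy_range *r,
                            uint64_t len)
{
    if (fs->free_bytes < len)
        return -28; /* -ENOSPC: reservation failed */
    fs->free_bytes -= len;
    r->reserved_bytes = len;  /* step 1: reserve space */
    r->defrag_bit_set = true; /* step 2: set the defrag extent bit */
    r->pages_dirty = true;    /* step 3: mark involved pages dirty */
    return 0;
}
```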
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      22b398ee
    • Qu Wenruo's avatar
      btrfs: defrag: introduce helper to collect target file extents · eb793cf8
      Qu Wenruo authored
      Introduce a helper, defrag_collect_targets(), to collect all possible
      targets to be defragged.
      
      This function will not consider things like max_sectors_to_defrag;
      thus the caller is responsible for ensuring we don't exceed the limit.
      
      This function will be the first stage of later defrag rework.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      eb793cf8
    • Qu Wenruo's avatar
      btrfs: defrag: factor out page preparation into a helper · 5767b50c
      Qu Wenruo authored
      In cluster_pages_for_defrag(), we have a complex code block inside one
      for() loop.
      
      The code block prepares one page for defrag, ensuring:
      
      - The page is locked and set up properly.
      - No ordered extent exists in the page range.
      - The page is uptodate.
      
      This behavior is pretty common and will be reused by later defrag
      rework.
      
      So factor out the code into its own helper, defrag_prepare_one_page(),
      for later use, and clean up the code a little.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      5767b50c
    • Qu Wenruo's avatar
      btrfs: defrag: replace hard coded PAGE_SIZE with sectorsize · 76068cae
      Qu Wenruo authored
      When testing subpage defrag support, I kept hitting strange inode
      nbytes errors.  After a lot of debugging, it turned out that
      defrag_lookup_extent() was using PAGE_SIZE as the size for
      lookup_extent_mapping().
      
      Since lookup_extent_mapping() calls __lookup_extent_mapping() with
      @strict == 1, any extent map smaller than one page will be ignored,
      preventing subpage defrag from grabbing a correct extent map.
      
      There are quite a few PAGE_SIZE usages in ioctl.c, but most of them
      are correct and fall into one of the following cases:
      
      - ioctl structure size check
        We want the ioctl structure to be contained within one page.
      
      - real page operations
      
      The remaining cases in defrag_lookup_extent() and
      check_defrag_in_cache() will be addressed in this patch.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      76068cae
    • Qu Wenruo's avatar
      btrfs: defrag: also check PagePrivate for subpage cases in cluster_pages_for_defrag() · cae79686
      Qu Wenruo authored
      In function cluster_pages_for_defrag() we have a window where we
      unlock the page to either start the ordered range or read the content
      from disk.
      
      When we re-lock the page, we need to make sure it still has the correct
      page->private for subpage.
      
      Thus add the extra PagePrivate check here to handle subpage cases
      properly.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      cae79686
    • Qu Wenruo's avatar
      btrfs: defrag: pass file_ra_state instead of file to btrfs_defrag_file() · 1ccc2e8a
      Qu Wenruo authored
      Currently btrfs_defrag_file() accepts both "struct inode" and "struct
      file" as parameters.  We can easily grab "struct inode" from "struct
      file" using the file_inode() helper.
      
      The reason why we need "struct file" is just to re-use its f_ra.
      
      Change this to pass a "struct file_ra_state" parameter instead, so
      that it is clearer what we really want.  While at it, also add some
      comments to btrfs_defrag_file().
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      1ccc2e8a
    • Anand Jain's avatar
      btrfs: rename and switch to bool btrfs_chunk_readonly · a09f23c3
      Anand Jain authored
      btrfs_chunk_readonly() checks if the given chunk is writeable.  It
      returns 1 for readonly and 0 for writeable, so a bool return type
      suffices instead of the current int.
      
      Also, rename btrfs_chunk_readonly() to btrfs_chunk_writeable(), as we
      check whether the block group is writeable; this keeps the logic in
      the parent function simpler to understand.
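      The int-to-bool inversion can be sketched with a toy model. struct
      toy_device and chunk_writeable() below are hypothetical; the real check
      walks the chunk's stripes and their devices.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy per-device flags standing in for the real device state. */
struct toy_device {
    bool missing;
    bool writeable;
};

/* Sketch of the renamed helper: instead of the old int-returning
 * btrfs_chunk_readonly() (1 = readonly, 0 = writeable), return a bool
 * asking the question the caller actually cares about.  A chunk is
 * writeable only if every stripe's device is present and writeable. */
bool chunk_writeable(const struct toy_device *devs, int num_stripes)
{
    for (int i = 0; i < num_stripes; i++) {
        if (devs[i].missing || !devs[i].writeable)
            return false;
    }
    return true;
}
```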
      Signed-off-by: Anand Jain <anand.jain@oracle.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      a09f23c3