1. 15 Dec, 2023 19 commits
    • xfs: add missing nrext64 inode flag check to scrub · 576d30ec
      Darrick J. Wong authored
      Add the missing check that the superblock NREXT64 feature bit is set
      whenever the inode's NREXT64 flag is set.
      
      Fixes: 9b7d16e3 ("xfs: Introduce XFS_DIFLAG2_NREXT64 and associated helpers")
      Signed-off-by: Darrick J. Wong <djwong@kernel.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
    • xfs: try to attach dquots to files before repairing them · 259ba1d3
      Darrick J. Wong authored
      Inode resource usage is tracked in the quota metadata.  Repairing a file
      might change the resources used by that file, which means that we need
      to attach dquots to the file that we're examining before accessing
      anything in the file protected by the ILOCK.
      
      However, there's a twist: a dquot cache miss requires the dquot to be
      read in from the quota file, during which we drop the ILOCK on the file
      being examined.  This means that we *must* try to attach the dquots
      before taking the ILOCK.
      
      Therefore, dquots must be attached to files in the scrub setup function.
      If doing so yields corruption errors (or unknown dquot errors), we
      instead clear the quotachecked status, which will cause a quotacheck on
      next mount.  A future series will make this trigger live quotacheck.
      
      While we're here, change the xrep_ino_dqattach function to use the
      unlocked dqattach functions so that we avoid cycling the ILOCK if the
      inode already has dquots attached.  This makes the naming and locking
      requirements consistent with the rest of the filesystem.
      Signed-off-by: Darrick J. Wong <djwong@kernel.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
    • xfs: disable online repair quota helpers when quota not enabled · d5aa62de
      Darrick J. Wong authored
      Don't compile the quota helper functions if quota isn't being built into
      the XFS module.
      Signed-off-by: Darrick J. Wong <djwong@kernel.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
    • xfs: repair inode btrees · dbfbf3bd
      Darrick J. Wong authored
      Use the rmapbt to find inode chunks, query the chunks to compute hole
      and free masks, and with that information rebuild the inobt and finobt.
      Refer to the case study in
      Documentation/filesystems/xfs-online-fsck-design.rst for more details.
      Signed-off-by: Darrick J. Wong <djwong@kernel.org>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
    • xfs: repair free space btrees · 4bdfd7d1
      Darrick J. Wong authored
      Rebuild the free space btrees from the gaps in the rmap btree.  Refer to
      the case study in Documentation/filesystems/xfs-online-fsck-design.rst
      for more details.
      Signed-off-by: Darrick J. Wong <djwong@kernel.org>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
    • xfs: remove trivial bnobt/inobt scrub helpers · 8bd0bf57
      Darrick J. Wong authored
      Christoph Hellwig complained about awkward code in the next two repair
      patches such as:
      
      	sc->sm->sm_type = XFS_SCRUB_TYPE_BNOBT;
      	error = xchk_bnobt(sc);
      
      This is a little silly, so let's export the xchk_{,i}allocbt functions
      to the dispatch table in scrub.c directly and get rid of the helpers.
      Originally I had planned for each btree to get its own separate entry
      point, but since repair doesn't work that way, it no longer makes sense
      to complicate the call chain.
      Signed-off-by: Darrick J. Wong <djwong@kernel.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
    • xfs: roll the scrub transaction after completing a repair · efb43b35
      Darrick J. Wong authored
      When we've finished repairing an AG header, roll the scrub transaction.
      This ensures that any failures caused by defer ops are captured by the
      xrep_done tracepoint and that any stack traces point to the repair code
      that caused them, instead of to xchk_teardown.
      
      Going forward, repair functions should commit the transaction if they're
      going to return success.  Usually the space reaping functions that run
      after a successful atomic commit of the new metadata will take care of
      that for us.
      Signed-off-by: Darrick J. Wong <djwong@kernel.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
    • xfs: move the per-AG datatype bitmaps to separate files · 0f08af0f
      Darrick J. Wong authored
      Move struct xagb_bitmap to its own pair of C and header files per
      request of Christoph.
      Signed-off-by: Darrick J. Wong <djwong@kernel.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
    • xfs: create separate structures and code for u32 bitmaps · 6ece924b
      Darrick J. Wong authored
      Create a version of the xbitmap that handles 32-bit integer intervals
      and adapt the xfs_agblock_t bitmap to use it.  This reduces the size of
      the interval tree nodes from 48 to 36 bytes and enables us to use a more
      efficient slab (:0000040 instead of :0000048) which allows us to pack
      more nodes into a single slab page (102 vs 85).
      
      As a side effect, the users of these bitmaps no longer have to convert
      between u32 and u64 quantities just to use the bitmap; and the hairy
      overflow checking code in xagb_bitmap_test goes away.
      
      Later in this patchset we're going to add bitmaps for xfs_agino_t,
      xfs_rgblock_t, and xfs_dablk_t, so the increase in code size (5622 vs.
      9959 bytes) seems worth it.
      Signed-off-by: Darrick J. Wong <djwong@kernel.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
    • xfs: constrain dirty buffers while formatting a staged btree · e069d549
      Darrick J. Wong authored
      Constrain the number of dirty buffers that are locked by the btree
      staging code at any given time by establishing a threshold at which we
      put them all on the delwri queue and push them to disk.  This limits
      memory consumption while writing out new btrees.
      Signed-off-by: Darrick J. Wong <djwong@kernel.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
    • xfs: move btree bulkload record initialization to ->get_record implementations · 6dfeb0c2
      Darrick J. Wong authored
      When we're performing a bulk load of a btree, move the code that
      actually stores the btree record in the new btree block out of the
      generic code and into the individual ->get_record implementations.
      This is preparation for being able to store multiple records with a
      single indirect call.
      Signed-off-by: Darrick J. Wong <djwong@kernel.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
    • xfs: add debug knobs to control btree bulk load slack factors · a20ffa7d
      Darrick J. Wong authored
      Add some debug knobs so that we can control the leaf and node block
      slack when rebuilding btrees.
      
      For developers, it might be useful to construct btrees of various
      heights by crafting a filesystem with a certain number of records and
      then using repair+knobs to rebuild the index with a certain shape.
      Practically speaking, you'd only ever do that for extreme stress
      testing of the runtime code or the btree generator.
      Signed-off-by: Darrick J. Wong <djwong@kernel.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
    • xfs: read leaf blocks when computing keys for bulkloading into node blocks · 26de6462
      Darrick J. Wong authored
      When constructing a new btree, xfs_btree_bload_node needs to read the
      btree blocks for level N to compute the keyptrs for the blocks that will
      be loaded into level N+1.  The level N blocks must be formatted at that
      point.
      
      A subsequent patch will change the btree bulkloader to write new btree
      blocks in 256K chunks to moderate memory consumption if the new btree is
      very large.  As a consequence of that, it's possible that the buffers
      for lower level blocks might have been reclaimed by the time the node
      builder comes back to the block.
      
      Therefore, change xfs_btree_bload_node to read the lower level blocks
      to handle the reclaimed buffer case.  As a side effect, the read will
      increase the LRU refs, which will bias towards keeping new btree buffers
      in memory after the new btree commits.
      Signed-off-by: Darrick J. Wong <djwong@kernel.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
    • xfs: set XBF_DONE on newly formatted btree blocks that are ready for writing · c1e0f8e6
      Darrick J. Wong authored
      The btree bulkloading code calls xfs_buf_delwri_queue_here when it has
      finished formatting a new btree block and wants to queue it to be
      written to disk.  Once the new btree root has been committed, the blocks
      (and hence the buffers) will be accessible to the rest of the
      filesystem.  Mark each new buffer as DONE when adding it to the delwri
      list so that the next btree traversal can skip reloading the contents
      from disk.
      Signed-off-by: Darrick J. Wong <djwong@kernel.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
    • xfs: force all buffers to be written during btree bulk load · 13ae04d8
      Darrick J. Wong authored
      While stress-testing online repair of btrees, I noticed periodic
      assertion failures from the buffer cache about buffers with incorrect
      DELWRI_Q state.  Looking further, I observed this race between the AIL
      trying to write out a btree block and repair zapping a btree block after
      the fact:
      
      AIL:    Repair0:
      
      pin buffer X
      delwri_queue:
      set DELWRI_Q
      add to delwri list
      
              stale buf X:
              clear DELWRI_Q
              does not clear b_list
              free space X
              commit
      
      delwri_submit   # oops
      
      Worse yet, I discovered that running the same repair over and over in a
      tight loop can result in a second race that causes data integrity
      problems with the repair:
      
      AIL:    Repair0:        Repair1:
      
      pin buffer X
      delwri_queue:
      set DELWRI_Q
      add to delwri list
      
              stale buf X:
              clear DELWRI_Q
              does not clear b_list
              free space X
              commit
      
                              find free space X
                              get buffer
                              rewrite buffer
                              delwri_queue:
                              set DELWRI_Q
                              already on a list, do not add
                              commit
      
                              BAD: committed tree root before all blocks written
      
      delwri_submit   # too late now
      
      I traced this to my own misunderstanding of how the delwri lists work,
      particularly with regards to the AIL's buffer list.  If a buffer is
      logged and committed, the buffer can end up on that AIL buffer list.  If
      btree repairs are run twice in rapid succession, it's possible that the
      first repair will invalidate the buffer and free it before the next time
      the AIL wakes up.  Marking the buffer stale clears DELWRI_Q from the
      buffer state without removing the buffer from its delwri list.  The
      buffer doesn't know which list it's on, so it cannot know which lock to
      take to protect the list for a removal.
      
      If the second repair allocates the same block, it will then recycle the
      buffer to start writing the new btree block.  Meanwhile, if the AIL
      wakes up and walks the buffer list, it will ignore the buffer because it
      can't lock it, and go back to sleep.
      
      When the second repair calls delwri_queue to put the buffer on the
      list of buffers to write before committing the new btree, it will set
      DELWRI_Q again, but since the buffer hasn't been removed from the AIL's
      buffer list, it won't add it to the bulkload buffer's list.
      
      This is incorrect, because the bulkload caller relies on delwri_submit
      to ensure that all the buffers have been sent to disk /before/
      committing the new btree root pointer.  This ordering requirement is
      required for data consistency.
      
      Worse, the AIL won't clear DELWRI_Q from the buffer when it does finally
      drop it, so the next thread to walk through the btree will trip over a
      debug assertion on that flag.
      
      To fix this, create a new function that waits for the buffer to be
      removed from any other delwri lists before adding the buffer to the
      caller's delwri list.  By waiting for the buffer to clear both the
      delwri list and any potential delwri wait list, we can be sure that
      repair will initiate writes of all buffers and report all write errors
      back to userspace instead of committing the new structure.
      Signed-off-by: Darrick J. Wong <djwong@kernel.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
    • xfs: initialise di_crc in xfs_log_dinode · 0573676f
      Dave Chinner authored
      Alexander Potapenko reported that KMSAN was issuing these warnings:
      
      kmalloc-ed xlog buffer of size 512 : ffff88802fc26200
      kmalloc-ed xlog buffer of size 368 : ffff88802fc24a00
      kmalloc-ed xlog buffer of size 648 : ffff88802b631000
      kmalloc-ed xlog buffer of size 648 : ffff88802b632800
      kmalloc-ed xlog buffer of size 648 : ffff88802b631c00
      xlog_write_iovec: copying 12 bytes from ffff888017ddbbd8 to ffff88802c300400
      xlog_write_iovec: copying 28 bytes from ffff888017ddbbe4 to ffff88802c30040c
      xlog_write_iovec: copying 68 bytes from ffff88802fc26274 to ffff88802c300428
      xlog_write_iovec: copying 188 bytes from ffff88802fc262bc to ffff88802c30046c
      =====================================================
      BUG: KMSAN: uninit-value in xlog_write_iovec fs/xfs/xfs_log.c:2227
      BUG: KMSAN: uninit-value in xlog_write_full fs/xfs/xfs_log.c:2263
      BUG: KMSAN: uninit-value in xlog_write+0x1fac/0x2600 fs/xfs/xfs_log.c:2532
       xlog_write_iovec fs/xfs/xfs_log.c:2227
       xlog_write_full fs/xfs/xfs_log.c:2263
       xlog_write+0x1fac/0x2600 fs/xfs/xfs_log.c:2532
       xlog_cil_write_chain fs/xfs/xfs_log_cil.c:918
       xlog_cil_push_work+0x30f2/0x44e0 fs/xfs/xfs_log_cil.c:1263
       process_one_work kernel/workqueue.c:2630
       process_scheduled_works+0x1188/0x1e30 kernel/workqueue.c:2703
       worker_thread+0xee5/0x14f0 kernel/workqueue.c:2784
       kthread+0x391/0x500 kernel/kthread.c:388
       ret_from_fork+0x66/0x80 arch/x86/kernel/process.c:147
       ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:242
      
      Uninit was created at:
       slab_post_alloc_hook+0x101/0xac0 mm/slab.h:768
       slab_alloc_node mm/slub.c:3482
       __kmem_cache_alloc_node+0x612/0xae0 mm/slub.c:3521
       __do_kmalloc_node mm/slab_common.c:1006
       __kmalloc+0x11a/0x410 mm/slab_common.c:1020
       kmalloc ./include/linux/slab.h:604
       xlog_kvmalloc fs/xfs/xfs_log_priv.h:704
       xlog_cil_alloc_shadow_bufs fs/xfs/xfs_log_cil.c:343
       xlog_cil_commit+0x487/0x4dc0 fs/xfs/xfs_log_cil.c:1574
       __xfs_trans_commit+0x8df/0x1930 fs/xfs/xfs_trans.c:1017
       xfs_trans_commit+0x30/0x40 fs/xfs/xfs_trans.c:1061
       xfs_create+0x15af/0x2150 fs/xfs/xfs_inode.c:1076
       xfs_generic_create+0x4cd/0x1550 fs/xfs/xfs_iops.c:199
       xfs_vn_create+0x4a/0x60 fs/xfs/xfs_iops.c:275
       lookup_open fs/namei.c:3477
       open_last_lookups fs/namei.c:3546
       path_openat+0x29ac/0x6180 fs/namei.c:3776
       do_filp_open+0x24d/0x680 fs/namei.c:3809
       do_sys_openat2+0x1bc/0x330 fs/open.c:1440
       do_sys_open fs/open.c:1455
       __do_sys_openat fs/open.c:1471
       __se_sys_openat fs/open.c:1466
       __x64_sys_openat+0x253/0x330 fs/open.c:1466
       do_syscall_x64 arch/x86/entry/common.c:51
       do_syscall_64+0x4f/0x140 arch/x86/entry/common.c:82
       entry_SYSCALL_64_after_hwframe+0x63/0x6b arch/x86/entry/entry_64.S:120
      
      Bytes 112-115 of 188 are uninitialized
      Memory access of size 188 starts at ffff88802fc262bc
      
      This is caused by the struct xfs_log_dinode not having the di_crc
      field initialised. Log recovery never uses this field (it is only
      present these days for on-disk format compatibility reasons), so its
      value is never checked and nothing in XFS has caught this.
      
      Further, none of the uninitialised memory access warning tools have
      caught this (despite catching other uninit memory accesses in the
      struct xfs_log_dinode back in 2017!) until recently. Alexander
      annotated the XFS code to get the dump of the actual bytes that were
      detected as uninitialised, and from that report it took me about 30s
      to realise what the issue was.
      
      The issue was introduced back in 2016 and every inode that is logged
      fails to initialise this field. There is no actual bad behaviour
      caused by this issue - I find it hard to even classify it as a
      bug...
      Reported-and-tested-by: Alexander Potapenko <glider@google.com>
      Fixes: f8d55aa0 ("xfs: introduce inode log format object")
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: "Darrick J. Wong" <djwong@kernel.org>
      Signed-off-by: Chandan Babu R <chandanbabu@kernel.org>
    • xfs: fix an off-by-one error in xreap_agextent_binval · c0e37f07
      Darrick J. Wong authored
      Overall, this function tries to find and invalidate all buffers for a
      given extent of space on the data device.  The inner for loop in this
      function tries to find all xfs_bufs for a given daddr.  The lengths of
      all possible cached buffers range from 1 fsblock to the largest needed
      to contain a 64k xattr value (~17fsb).  The scan is capped to avoid
      looking at any buffer extending past the given extent.
      
      Unfortunately, the loop continuation test is wrong -- max_fsbs is the
      largest size we want to scan, not one past that.  Put another way, this
      loop is actually 1-indexed, not 0-indexed.  Therefore, the continuation
      test should use <=, not <.
      
      As a result, online repairs of btree blocks fail to stale any buffers
      for btrees that are being torn down, which causes later assertions in
      the buffer cache when another thread creates a different-sized buffer.
      This happens in xfs/709 when allocating an inode cluster buffer:
      
       ------------[ cut here ]------------
       WARNING: CPU: 0 PID: 3346128 at fs/xfs/xfs_message.c:104 assfail+0x3a/0x40 [xfs]
       CPU: 0 PID: 3346128 Comm: fsstress Not tainted 6.7.0-rc4-djwx #rc4
       RIP: 0010:assfail+0x3a/0x40 [xfs]
       Call Trace:
        <TASK>
        _xfs_buf_obj_cmp+0x4a/0x50
        xfs_buf_get_map+0x191/0xba0
        xfs_trans_get_buf_map+0x136/0x280
        xfs_ialloc_inode_init+0x186/0x340
        xfs_ialloc_ag_alloc+0x254/0x720
        xfs_dialloc+0x21f/0x870
        xfs_create_tmpfile+0x1a9/0x2f0
        xfs_rename+0x369/0xfd0
        xfs_vn_rename+0xfa/0x170
        vfs_rename+0x5fb/0xc30
        do_renameat2+0x52d/0x6e0
        __x64_sys_renameat2+0x4b/0x60
        do_syscall_64+0x3b/0xe0
        entry_SYSCALL_64_after_hwframe+0x46/0x4e
      
      A later refactoring patch in the online repair series fixed this by
      accident, which is why I didn't notice this until I started testing only
      the patches that are likely to end up in 6.8.
      
      Fixes: 1c7ce115 ("xfs: reap large AG metadata extents when possible")
      Signed-off-by: "Darrick J. Wong" <djwong@kernel.org>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Chandan Babu R <chandanbabu@kernel.org>
    • xfs: short circuit xfs_growfs_data_private() if delta is zero · 84712492
      Eric Sandeen authored
      Although xfs_growfs_data() doesn't call xfs_growfs_data_private()
      if in->newblocks == mp->m_sb.sb_dblocks, xfs_growfs_data_private()
      further massages the new block count so that we don't, for example,
      try to create a too-small new AG.
      
      This may lead to a delta of "0" in xfs_growfs_data_private(), so
      we end up in the shrink case and emit the EXPERIMENTAL warning
      even if we're not changing anything at all.
      
      Fix this by returning straightaway if the block delta is zero.
      
      (nb: in older kernels, the result of entering the shrink case
      with delta == 0 may actually let an -ENOSPC escape to userspace,
      which is confusing for users.)
      
      Fixes: fb2fc172 ("xfs: support shrinking unused space in the last AG")
      Signed-off-by: Eric Sandeen <sandeen@redhat.com>
      Reviewed-by: "Darrick J. Wong" <djwong@kernel.org>
      Signed-off-by: Chandan Babu R <chandanbabu@kernel.org>
  2. 14 Dec, 2023 6 commits
  3. 13 Dec, 2023 1 commit
    • xfs: recompute growfsrtfree transaction reservation while growing rt volume · 578bd4ce
      Darrick J. Wong authored
      While playing with growfs to create a 20TB realtime section on a
      filesystem that didn't previously have an rt section, I noticed that
      growfs would occasionally shut down the log due to a transaction
      reservation overflow.
      
      xfs_calc_growrtfree_reservation uses the current size of the realtime
      summary file (m_rsumsize) to compute the transaction reservation for a
      growrtfree transaction.  The reservations are computed at mount time,
      which means that m_rsumsize is zero when growfs starts "freeing" the new
      realtime extents into the rt volume.  As a result, the transaction is
      undersized and fails.
      
      Fix this by recomputing the transaction reservations every time we
      change m_rsumsize.
      Signed-off-by: Darrick J. Wong <djwong@kernel.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
  4. 07 Dec, 2023 14 commits