22 Aug, 2017 (9 commits)
    • xfs: stop searching for free slots in an inode chunk when there are none · 2d32311c
      Carlos Maiolino authored
      In a filesystem without finobt, the space manager selects an AG to allocate a new
      inode in, and xfs_dialloc_ag_inobt() then searches that AG for an inode chunk with a free slot.
      
      When the new inode is in the same AG as its parent, the btree is searched
      starting from the parent's record, and then retried from the top if no slot is
      available beyond the parent's record.
      
      To exit this loop, though, xfs_dialloc_ag_inobt() relies on the btree having a
      free slot available, since its callers relied on agi->freecount when deciding
      how and where to allocate this new inode.
      
      When agi->freecount is corrupted, reporting available inodes in an AG when in
      fact there are none, this becomes an infinite loop.
      
      Add a way to stop the loop when no free slot is found in the btree, making the
      function fall back to the whole-AG scan, which will then be able to detect the
      corruption and shut the filesystem down.
      
      As pointed out by Brian, this might impact performance, given that we no longer
      reset the search distance when we reach the end of the tree, giving it fewer
      tries before falling back to the whole-AG search; but it will only affect
      searches that start within 10 records of the end of the tree.
      Signed-off-by: Carlos Maiolino <cmaiolino@redhat.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
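      A minimal user-space sketch of the loop-termination idea in the commit above
      (all names and types here are invented for illustration; this is not the
      kernel code): the neighbour search stops once both directions are exhausted,
      then falls back to a full scan that can notice a summary counter claiming
      free slots which do not exist.

	/* Illustrative sketch only, not xfs_dialloc_ag_inobt() itself. */
	#include <stdio.h>

	#define NCHUNKS 16

	struct chunk { int freecount; };

	/* Search outward from 'parent'; return a chunk with free slots, or -1. */
	static int search_near_parent(const struct chunk *c, int n, int parent)
	{
		int left = parent, right = parent + 1;

		/* Stop once BOTH directions are exhausted instead of looping forever. */
		while (left >= 0 || right < n) {
			if (left >= 0 && c[left].freecount > 0)
				return left;
			if (right < n && c[right].freecount > 0)
				return right;
			left--;
			right++;
		}
		return -1;
	}

	/* Whole-AG style fallback: can detect that the summary counter lied. */
	static int full_scan(const struct chunk *c, int n, int claimed_free)
	{
		for (int i = 0; i < n; i++)
			if (c[i].freecount > 0)
				return i;
		if (claimed_free > 0)
			fprintf(stderr, "corruption: counter claims %d free, found none\n",
				claimed_free);
		return -1;
	}

	int main(void)
	{
		struct chunk chunks[NCHUNKS] = { { 0 } };	/* every chunk is full */
		int claimed_free = 3;				/* corrupted summary counter */
		int idx = search_near_parent(chunks, NCHUNKS, 7);

		if (idx < 0)
			idx = full_scan(chunks, NCHUNKS, claimed_free);
		printf("result chunk: %d\n", idx);
		return 0;
	}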
    • xfs: add log recovery tracepoint for head/tail · e67d3d42
      Brian Foster authored
      Torn write detection and tail overwrite detection can shift the log
      head and tail respectively in the event of CRC mismatch or
      corruption errors. Add a high-level log recovery tracepoint to dump
      the final log head/tail and make those values easily attainable in
      debug/diagnostic situations.
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
    • xfs: handle -EFSCORRUPTED during head/tail verification · a4c9b34d
      Brian Foster authored
      Torn write and tail overwrite detection both trigger only on
      -EFSBADCRC errors. While this is the most likely failure scenario
      for each condition, -EFSCORRUPTED is still possible in certain cases
      depending on what ends up on disk when a torn write or partial tail
      overwrite occurs. For example, an invalid log record h_len can lead
      to an -EFSCORRUPTED error when running the log recovery CRC pass.
      
      Therefore, update log head and tail verification to trigger the
      associated head/tail fixups in the event of -EFSCORRUPTED errors
      along with -EFSBADCRC. Also, -EFSCORRUPTED can currently be returned
      from xlog_do_recovery_pass() before rhead_blk is initialized if the
      first record encountered happens to be corrupted. This leads to an
      incorrect 'first_bad' return value. Initialize rhead_blk earlier in
      the function to address that problem as well.
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
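      A hedged user-space sketch of the two changes above (error values and names
      are placeholders, not the kernel's definitions): both a bad-CRC and a
      corruption error now lead to the head/tail fixup decision, and the
      "first bad block" output is initialized before the first point at which an
      error can be returned.

	/* Illustrative sketch only; EFSBADCRC/EFSCORRUPTED values are made up. */
	#include <stdbool.h>
	#include <stdio.h>

	#define EFSBADCRC	74
	#define EFSCORRUPTED	117

	static bool needs_headtail_fixup(int error)
	{
		/* Previously only -EFSBADCRC triggered the fixup path. */
		return error == -EFSBADCRC || error == -EFSCORRUPTED;
	}

	/* Verify records starting at start_blk; report the first bad block. */
	static int scan_records(const int *recs, int nrecs, int start_blk,
				bool header_read_fails, int *first_bad)
	{
		/* Initialize before any early return so callers never see garbage. */
		*first_bad = start_blk;

		if (header_read_fails)		/* error before any record is examined */
			return -EFSCORRUPTED;

		for (int i = 0; i < nrecs; i++) {
			if (recs[i]) {		/* nonzero simulates a verify failure */
				*first_bad = start_blk + i;
				return recs[i];
			}
		}
		return 0;
	}

	int main(void)
	{
		int recs[] = { 0, -EFSBADCRC, 0 };
		int first_bad;
		int error = scan_records(recs, 3, 100, false, &first_bad);

		printf("error=%d first_bad=%d fixup=%s\n", error, first_bad,
		       needs_headtail_fixup(error) ? "yes" : "no");
		return 0;
	}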
    • xfs: add log item pinning error injection tag · 7f4d01f3
      Brian Foster authored
      Add an error injection tag to force log items in the AIL to the
      pinned state. This option can be used by test infrastructure to
      induce head behind tail conditions. Specifically, this is intended
      to be used by xfstests to reproduce log recovery problems after
      failed/corrupted log writes overwrite the last good tail LSN in the
      log.
      
      When enabled, AIL push attempts see log items in the AIL in the
      pinned state. This stalls metadata writeback and thus prevents the
      current tail of the log from moving forward. When disabled,
      subsequent AIL pushes observe the log items in their appropriate
      state and filesystem operation continues as normal.
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
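      A small sketch of the injection idea (names invented for the sketch, not the
      kernel's error tag machinery): while the knob is set, a push loop sees every
      item as pinned and makes no progress, so the tail cannot move forward;
      clearing the knob lets writeback resume.

	/* Illustrative sketch only of a "force pinned" injection knob. */
	#include <stdbool.h>
	#include <stdio.h>

	enum push_result { PUSH_DONE, PUSH_PINNED };

	static bool inject_force_pinned;	/* the injection tag */

	static enum push_result item_push(int item)
	{
		if (inject_force_pinned)
			return PUSH_PINNED;	/* stall metadata writeback */
		printf("wrote back item %d\n", item);
		return PUSH_DONE;
	}

	int main(void)
	{
		inject_force_pinned = true;	/* enabled: nothing can be pushed */
		for (int i = 0; i < 3; i++)
			if (item_push(i) == PUSH_PINNED)
				printf("item %d pinned, tail stays put\n", i);

		inject_force_pinned = false;	/* disabled: normal operation resumes */
		for (int i = 0; i < 3; i++)
			item_push(i);
		return 0;
	}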
    • xfs: fix log recovery corruption error due to tail overwrite · 4a4f66ea
      Brian Foster authored
      If we consider the case where the tail (T) of the log is pinned long
      enough for the head (H) to push and block behind the tail, we can
      end up blocked in the following state without enough free space (f)
      in the log to satisfy a transaction reservation:
      
      	0	phys. log	N
      	[-------HffT---H'--T'---]
      
      The last good record in the log (before H) refers to T. The tail
      eventually pushes forward (T') leaving more free space in the log
      for writes to H. At this point, suppose space frees up in the log
      for the maximum of 8 in-core log buffers to start flushing out to
      the log. If this pushes the head from H to H', these next writes
      overwrite the previous tail T. This is safe because the items logged
      from T to T' have been written back and removed from the AIL.
      
      If the next log writes (H -> H') happen to fail and result in
      partial records in the log, the filesystem shuts down having
      overwritten T with invalid data. Log recovery correctly locates H on
      the subsequent mount, but H still refers to the now corrupted tail
      T. This results in log corruption errors and recovery failure.
      
      Since the tail overwrite results from otherwise correct runtime
      behavior, it is up to log recovery to try and deal with this
      situation. Update log recovery tail verification to run a CRC pass
      from the first record past the tail to the head. This facilitates
      error detection at T and moves the recovery tail to the first good
      record past H' (similar to truncating the head on torn write
      detection). If corruption is detected beyond the range possibly
      affected by the max number of iclogs, the log is legitimately
      corrupted and log recovery failure is expected.
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
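      A toy sketch of the tail-to-head verification pass described above (simulated
      CRC results and invented names, with a single damaged record assumed): scan
      the records after the tail up to the head and, if a bad one is found, move
      the recovery tail past the damage.

	/* Illustrative sketch only; 'bad' stands in for a CRC mismatch. */
	#include <stdbool.h>
	#include <stdio.h>

	#define LOG_SIZE 8

	struct rec { bool bad; };

	/* Walk (tail, head] in log order; return the first bad index, or -1. */
	static int first_bad_record(const struct rec *log, int tail, int head)
	{
		for (int i = (tail + 1) % LOG_SIZE; ; i = (i + 1) % LOG_SIZE) {
			if (log[i].bad)
				return i;
			if (i == head)
				return -1;
		}
	}

	int main(void)
	{
		struct rec log[LOG_SIZE] = { { false } };
		int tail = 5, head = 2;		/* head has wrapped past the log end */

		log[6].bad = true;		/* old tail region hit by a failed write */
		int bad = first_bad_record(log, tail, head);
		if (bad >= 0) {
			printf("corruption at record %d, moving recovery tail\n", bad);
			tail = bad;		/* recovery now starts at the next record */
		}
		printf("recovery range: tail=%d head=%d\n", tail, head);
		return 0;
	}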
    • xfs: always verify the log tail during recovery · 5297ac1f
      Brian Foster authored
      Log tail verification currently only occurs when torn writes are
      detected at the head of the log. This was introduced because a
      change in the head block due to torn writes can lead to a change in
      the tail block (each log record header references the current tail)
      and the tail block should be verified before log recovery proceeds.
      
      Tail corruption is possible outside of torn write scenarios,
      however. For example, partial log writes can be detected and cleared
      during the initial head/tail block discovery process. If the partial
      write coincides with a tail overwrite, the log tail is corrupted and
      recovery fails.
      
      To facilitate correct handling of log tail overwrites, update log
      recovery to always perform tail verification. This is necessary to
      detect potential tail overwrite conditions when torn writes may not
      have occurred. This changes normal (i.e., no torn writes) recovery
      behavior slightly to detect and return CRC related errors near the
      tail before actual recovery starts.
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
    • xfs: fix recovery failure when log record header wraps log end · 284f1c2c
      Brian Foster authored
      The high-level log recovery algorithm consists of two loops that
      walk the physical log and process log records from the tail to the
      head. The first loop handles the case where the tail is beyond the
      head and processes records up to the end of the physical log. The
      subsequent loop processes records from the beginning of the physical
      log to the head.
      
      Because log records can wrap around the end of the physical log, the
      first loop mentioned above must handle this case appropriately.
      Records are processed from in-core buffers, which means that this
      algorithm must split the reads of such records into two partial
      I/Os: 1.) from the beginning of the record to the end of the log and
      2.) from the beginning of the log to the end of the record. This is
      further complicated by the fact that the log record header and log
      record data are read into independent buffers.
      
      The current handling of each buffer correctly splits the reads when
      either the header or data starts before the end of the log and wraps
      around the end. The data read does not correctly handle the case
      where the prior header read wrapped or ends on the physical log end
      boundary. blk_no is incremented to or beyond the log end after the
      header read to point to the record data, but the split data read
      logic triggers, attempts to read from an invalid log block and
      ultimately causes log recovery to fail. This can be reproduced
      fairly reliably via xfstests tests generic/047 and generic/388 with
      large iclog sizes (256k) and small (10M) logs.
      
      If the record header read has pushed beyond the end of the physical
      log, the subsequent data read is actually contiguous. Update the
      data read logic to detect the case where blk_no has wrapped, mod it
      against the log size to read from the correct address and issue one
      contiguous read for the log data buffer. The log record is processed
      as normal from the buffer(s), the loop exits after the current
      iteration and the subsequent loop picks up with the first new record
      after the start of the log.
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
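      A simplified sketch of the wrapped-read decision (invented names; the block
      counts are arbitrary): if the starting block has already passed the physical
      end of the log, wrap it with a modulo and the data read becomes one
      contiguous I/O; otherwise split it at the log end as before.

	/* Illustrative sketch only of planning a read in a circular log. */
	#include <stdio.h>

	struct read_plan {
		long start1, len1;	/* first (or only) I/O */
		long start2, len2;	/* second I/O; len2 == 0 means unused */
	};

	static struct read_plan plan_read(long log_blocks, long blk_no, long len)
	{
		struct read_plan p = { 0, 0, 0, 0 };

		blk_no %= log_blocks;		/* handle blk_no already past the end */
		if (blk_no + len <= log_blocks) {
			p.start1 = blk_no;	/* contiguous read */
			p.len1 = len;
		} else {
			p.start1 = blk_no;	/* split: up to the physical end ... */
			p.len1 = log_blocks - blk_no;
			p.start2 = 0;		/* ... then from the start of the log */
			p.len2 = len - p.len1;
		}
		return p;
	}

	int main(void)
	{
		/* Header read ended exactly on the log end; data "starts" past it. */
		struct read_plan p = plan_read(2560, 2560, 64);

		printf("io1: %ld+%ld, io2: %ld+%ld\n",
		       p.start1, p.len1, p.start2, p.len2);
		return 0;
	}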
    • xfs: Properly retry failed inode items in case of error during buffer writeback · d3a304b6
      Carlos Maiolino authored
      When a buffer fails during writeback, the inode items attached to it are kept
      flush locked and are never resubmitted because of the flush lock, so if any
      buffer fails to be written, the items in the AIL are never written to disk
      and never unlocked.
      
      This causes the unmount operation to hang because of these flush-locked items
      in the AIL, but it also means the items in the AIL are never written back,
      even when the I/O device comes back to normal.
      
      I've been testing this patch with a DM-thin device, creating a
      filesystem larger than the real device.
      
      When writing enough data to fill the DM-thin device, XFS receives ENOSPC
      errors from the device and keeps spinning in xfsaild (when the 'retry
      forever' configuration is set).
      
      At this point, the filesystem cannot be unmounted because of the flush-locked
      items in the AIL, but worse, the items in the AIL are never retried at all
      (since xfs_inode_item_push() skips items that are flush locked),
      even if the underlying DM-thin device is expanded to the proper size.
      
      This patch fixes both cases, retrying any item that has previously failed,
      using the infrastructure provided by the previous patch.
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Carlos Maiolino <cmaiolino@redhat.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
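      An illustrative user-space sketch of the retry idea (fields and functions are
      invented for the sketch): the push loop still skips flush-locked items, except
      when an item is marked as having failed writeback, in which case its buffer is
      resubmitted instead of being skipped forever.

	/* Illustrative sketch only of retrying previously failed items. */
	#include <stdbool.h>
	#include <stdio.h>

	struct item {
		bool flush_locked;	/* currently under writeback */
		bool write_failed;	/* the backing buffer failed to write */
	};

	static void push(struct item *it, int idx)
	{
		if (it->flush_locked) {
			if (!it->write_failed) {
				printf("item %d: flush locked, skip\n", idx);
				return;
			}
			/* Previously skipped forever; now the buffer is resubmitted. */
			printf("item %d: failed earlier, resubmitting\n", idx);
			it->write_failed = false;
			return;
		}
		printf("item %d: normal writeback\n", idx);
	}

	int main(void)
	{
		struct item ail[3] = {
			{ false, false },
			{ true,  false },
			{ true,  true  },
		};

		for (int i = 0; i < 3; i++)
			push(&ail[i], i);
		return 0;
	}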
    • xfs: Add infrastructure needed for error propagation during buffer IO failure · 0b80ae6e
      Carlos Maiolino authored
      With the current code, XFS never re-submits a failed buffer for I/O, because
      the failed item in the buffer is kept in the flush-locked state forever.
      
      To be able to resubmit a log item for I/O, we need a way to mark an item as
      failed if, for any reason, the buffer to which it belongs fails during
      writeback.
      
      Add a new log item callback to be used after an I/O completion failure and
      make the needed cleanups.
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Carlos Maiolino <cmaiolino@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
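      A minimal sketch of the callback infrastructure described above (the ops
      structure and names are invented, not the kernel's log item vector): when the
      buffer an item is attached to fails writeback, each attached item's error
      callback runs and marks the item failed so it can be retried later.

	/* Illustrative sketch only of an I/O-failure callback per item type. */
	#include <stdbool.h>
	#include <stdio.h>

	struct log_item;

	struct item_ops {
		void (*error)(struct log_item *);	/* called on completion failure */
	};

	struct log_item {
		const struct item_ops *ops;
		bool failed;
	};

	static void inode_item_error(struct log_item *li)
	{
		li->failed = true;	/* remember the failure for a later retry */
	}

	static const struct item_ops inode_item_ops = { inode_item_error };

	/* Buffer I/O completion: on failure, notify every attached item. */
	static void buf_iodone(struct log_item **items, int n, int io_error)
	{
		if (!io_error)
			return;
		for (int i = 0; i < n; i++)
			if (items[i]->ops && items[i]->ops->error)
				items[i]->ops->error(items[i]);
	}

	int main(void)
	{
		struct log_item inode = { &inode_item_ops, false };
		struct log_item *attached[] = { &inode };

		buf_iodone(attached, 1, -5 /* simulated I/O error */);
		printf("inode item failed: %s\n", inode.failed ? "yes" : "no");
		return 0;
	}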