Commit b32e3819 authored by Linus Torvalds

Merge tag 'xfs-5.18-merge-4' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux

Pull xfs fixes from Darrick Wong:
 "This fixes multiple problems in the reserve pool sizing functions: an
  incorrect free space calculation, a pointless infinite loop, and even
  more braindamage that could result in the pool being overfilled. The
  pile of patches from Dave fixes myriad races and UAF bugs in the log
  recovery code that, much to our mutual surprise, nobody's tripped over.
  Dave also removed a performance optimization that had turned into a
  regression.

  Dave Chinner is taking over as XFS maintainer starting Sunday and
  lasting until 5.19-rc1 is tagged so that I can focus on starting a
  massive design review for the (feature complete after five years)
  online repair feature. From then on, he and I will be moving XFS to a
  co-maintainership model by trading duties every other release.

  NOTE: I hope very strongly that the other pieces of the (X)FS
  ecosystem (fstests and xfsprogs) will make similar changes to spread
  their maintenance load.

  Summary:

   - Fix an incorrect free space calculation in xfs_reserve_blocks that
     could lead to a request for free blocks that will never succeed.

   - Fix a hang in xfs_reserve_blocks caused by an infinite loop and the
     incorrect free space calculation.

   - Fix yet a third problem in xfs_reserve_blocks where multiple racing
     threads can overfill the reserve pool.

   - Fix an accounting error that led to us reporting reserved space as
     "available".

   - Fix a race condition during abnormal fs shutdown that could cause
     UAF problems when memory reclaim and log shutdown try to clean up
     inodes.

   - Fix a bug where log shutdown can race with unmount to tear down the
     log, thereby causing UAF errors.

   - Disentangle log and filesystem shutdown to reduce confusion.

   - Fix some confusion in xfs_trans_commit whereby a race between
     transaction commit and filesystem shutdown could cause unlogged dirty
     inode metadata to be committed, thereby corrupting the filesystem.

   - Remove a performance optimization in the log, as it was discovered
     that certain storage hardware handles async log flushes so poorly as
     to cause serious performance regressions. Recent restructuring of
     other parts of the logging code means that no performance benefit is
     seen on hardware that handles it well"

* tag 'xfs-5.18-merge-4' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux:
  xfs: drop async cache flushes from CIL commits.
  xfs: shutdown during log recovery needs to mark the log shutdown
  xfs: xfs_trans_commit() path must check for log shutdown
  xfs: xfs_do_force_shutdown needs to block racing shutdowns
  xfs: log shutdown triggers should only shut down the log
  xfs: run callbacks before waking waiters in xlog_state_shutdown_callbacks
  xfs: shutdown in intent recovery has non-intent items in the AIL
  xfs: aborting inodes on shutdown may need buffer lock
  xfs: don't report reserved bnobt space as available
  xfs: fix overfilling of reserve pool
  xfs: always succeed at setting the reserve pool size
  xfs: remove infinite loop when reserving free block pool
  xfs: don't include bnobt blocks when reserving free block pool
  xfs: document the XFS_ALLOC_AGFL_RESERVE constant
parents 1fdff407 919edbad
@@ -82,6 +82,24 @@ xfs_prealloc_blocks(
 }
 
 /*
+ * The number of blocks per AG that we withhold from xfs_mod_fdblocks to
+ * guarantee that we can refill the AGFL prior to allocating space in a nearly
+ * full AG. Although the space described by the free space btrees, the
+ * blocks used by the freesp btrees themselves, and the blocks owned by the
+ * AGFL are counted in the ondisk fdblocks, it's a mistake to let the ondisk
+ * free space in the AG drop so low that the free space btrees cannot refill an
+ * empty AGFL up to the minimum level. Rather than grind through empty AGs
+ * until the fs goes down, we subtract this many AG blocks from the incore
+ * fdblocks to ensure user allocation does not overcommit the space the
+ * filesystem needs for the AGFLs. The rmap btree uses a per-AG reservation to
+ * withhold space from xfs_mod_fdblocks, so we do not account for that here.
+ */
+#define XFS_ALLOCBT_AGFL_RESERVE	4
+
+/*
+ * Compute the number of blocks that we set aside to guarantee the ability to
+ * refill the AGFL and handle a full bmap btree split.
+ *
  * In order to avoid ENOSPC-related deadlock caused by out-of-order locking of
  * AGF buffer (PV 947395), we place constraints on the relationship among
  * actual allocations for data blocks, freelist blocks, and potential file data
@@ -93,14 +111,14 @@ xfs_prealloc_blocks(
  * extents need to be actually allocated. To get around this, we explicitly set
  * aside a few blocks which will not be reserved in delayed allocation.
  *
- * We need to reserve 4 fsbs _per AG_ for the freelist and 4 more to handle a
- * potential split of the file's bmap btree.
+ * For each AG, we need to reserve enough blocks to replenish a totally empty
+ * AGFL and 4 more to handle a potential split of the file's bmap btree.
  */
 unsigned int
 xfs_alloc_set_aside(
 	struct xfs_mount	*mp)
 {
-	return mp->m_sb.sb_agcount * (XFS_ALLOC_AGFL_RESERVE + 4);
+	return mp->m_sb.sb_agcount * (XFS_ALLOCBT_AGFL_RESERVE + 4);
 }
 
 /*
@@ -124,7 +142,7 @@ xfs_alloc_ag_max_usable(
 	unsigned int		blocks;
 
 	blocks = XFS_BB_TO_FSB(mp, XFS_FSS_TO_BB(mp, 4)); /* ag headers */
-	blocks += XFS_ALLOC_AGFL_RESERVE;
+	blocks += XFS_ALLOCBT_AGFL_RESERVE;
	blocks += 3;			/* AGF, AGI btree root blocks */
 	if (xfs_has_finobt(mp))
 		blocks++;		/* finobt root block */
...
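The set-aside arithmetic above is small enough to model in isolation. The following is a standalone sketch, assuming a hypothetical fake_mount stand-in for the relevant xfs_mount fields (not kernel code):

/*
 * Standalone illustration of the set-aside arithmetic: each AG withholds
 * XFS_ALLOCBT_AGFL_RESERVE blocks to refill the AGFL plus 4 blocks for a
 * potential bmap btree split. fake_mount is a simplified, hypothetical
 * stand-in for the real struct xfs_mount.
 */
#include <stdint.h>
#include <stdio.h>

#define XFS_ALLOCBT_AGFL_RESERVE	4

struct fake_mount {
	uint32_t	agcount;	/* number of allocation groups */
};

static uint64_t fake_alloc_set_aside(const struct fake_mount *mp)
{
	/* mirrors the xfs_alloc_set_aside() formula: AGFL refill + bmbt split per AG */
	return (uint64_t)mp->agcount * (XFS_ALLOCBT_AGFL_RESERVE + 4);
}

int main(void)
{
	struct fake_mount mp = { .agcount = 16 };

	/* 16 AGs * 8 blocks = 128 blocks withheld from the free space count */
	printf("set aside: %llu blocks\n",
	       (unsigned long long)fake_alloc_set_aside(&mp));
	return 0;
}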
@@ -88,7 +88,6 @@ typedef struct xfs_alloc_arg {
 #define XFS_ALLOC_NOBUSY		(1 << 2)/* Busy extents not allowed */
 
 /* freespace limit calculations */
-#define XFS_ALLOC_AGFL_RESERVE	4
 
 unsigned int xfs_alloc_set_aside(struct xfs_mount *mp);
 unsigned int xfs_alloc_ag_max_usable(struct xfs_mount *mp);
...
@@ -9,39 +9,6 @@ static inline unsigned int bio_max_vecs(unsigned int count)
 	return bio_max_segs(howmany(count, PAGE_SIZE));
 }
 
-static void
-xfs_flush_bdev_async_endio(
-	struct bio		*bio)
-{
-	complete(bio->bi_private);
-}
-
-/*
- * Submit a request for an async cache flush to run. If the request queue does
- * not require flush operations, just skip it altogether. If the caller needs
- * to wait for the flush completion at a later point in time, they must supply a
- * valid completion. This will be signalled when the flush completes. The
- * caller never sees the bio that is issued here.
- */
-void
-xfs_flush_bdev_async(
-	struct bio		*bio,
-	struct block_device	*bdev,
-	struct completion	*done)
-{
-	struct request_queue	*q = bdev->bd_disk->queue;
-
-	if (!test_bit(QUEUE_FLAG_WC, &q->queue_flags)) {
-		complete(done);
-		return;
-	}
-
-	bio_init(bio, bdev, NULL, 0, REQ_OP_WRITE | REQ_PREFLUSH | REQ_SYNC);
-	bio->bi_private = done;
-	bio->bi_end_io = xfs_flush_bdev_async_endio;
-	submit_bio(bio);
-}
-
 int
 xfs_rw_bdev(
 	struct block_device	*bdev,
...
@@ -17,6 +17,7 @@
 #include "xfs_fsops.h"
 #include "xfs_trans_space.h"
 #include "xfs_log.h"
+#include "xfs_log_priv.h"
 #include "xfs_ag.h"
 #include "xfs_ag_resv.h"
 #include "xfs_trace.h"
@@ -347,7 +348,7 @@ xfs_fs_counts(
 	cnt->allocino = percpu_counter_read_positive(&mp->m_icount);
 	cnt->freeino = percpu_counter_read_positive(&mp->m_ifree);
 	cnt->freedata = percpu_counter_read_positive(&mp->m_fdblocks) -
-						mp->m_alloc_set_aside;
+						xfs_fdblocks_unavailable(mp);
 
 	spin_lock(&mp->m_sb_lock);
 	cnt->freertx = mp->m_sb.sb_frextents;
@@ -430,46 +431,36 @@ xfs_reserve_blocks(
 	 * If the request is larger than the current reservation, reserve the
 	 * blocks before we update the reserve counters. Sample m_fdblocks and
 	 * perform a partial reservation if the request exceeds free space.
-	 */
-	error = -ENOSPC;
-	do {
-		free = percpu_counter_sum(&mp->m_fdblocks) -
-						mp->m_alloc_set_aside;
-		if (free <= 0)
-			break;
-
-		delta = request - mp->m_resblks;
-		lcounter = free - delta;
-		if (lcounter < 0)
-			/* We can't satisfy the request, just get what we can */
-			fdblks_delta = free;
-		else
-			fdblks_delta = delta;
-
-		/*
-		 * We'll either succeed in getting space from the free block
-		 * count or we'll get an ENOSPC. If we get a ENOSPC, it means
-		 * things changed while we were calculating fdblks_delta and so
-		 * we should try again to see if there is anything left to
-		 * reserve.
-		 *
-		 * Don't set the reserved flag here - we don't want to reserve
-		 * the extra reserve blocks from the reserve.....
-		 */
-		spin_unlock(&mp->m_sb_lock);
-		error = xfs_mod_fdblocks(mp, -fdblks_delta, 0);
-		spin_lock(&mp->m_sb_lock);
-	} while (error == -ENOSPC);
-
-	/*
-	 * Update the reserve counters if blocks have been successfully
-	 * allocated.
-	 */
-	if (!error && fdblks_delta) {
-		mp->m_resblks += fdblks_delta;
-		mp->m_resblks_avail += fdblks_delta;
+	 *
+	 * The code below estimates how many blocks it can request from
+	 * fdblocks to stash in the reserve pool. This is a classic TOCTOU
+	 * race since fdblocks updates are not always coordinated via
+	 * m_sb_lock. Set the reserve size even if there's not enough free
+	 * space to fill it because mod_fdblocks will refill an undersized
+	 * reserve when it can.
+	 */
+	free = percpu_counter_sum(&mp->m_fdblocks) -
+						xfs_fdblocks_unavailable(mp);
+	delta = request - mp->m_resblks;
+	mp->m_resblks = request;
+	if (delta > 0 && free > 0) {
+		/*
+		 * We'll either succeed in getting space from the free block
+		 * count or we'll get an ENOSPC. Don't set the reserved flag
+		 * here - we don't want to reserve the extra reserve blocks
+		 * from the reserve.
+		 *
+		 * The desired reserve size can change after we drop the lock.
+		 * Use mod_fdblocks to put the space into the reserve or into
+		 * fdblocks as appropriate.
+		 */
+		fdblks_delta = min(free, delta);
+
+		spin_unlock(&mp->m_sb_lock);
+		error = xfs_mod_fdblocks(mp, -fdblks_delta, 0);
+		if (!error)
+			xfs_mod_fdblocks(mp, fdblks_delta, 0);
+		spin_lock(&mp->m_sb_lock);
 	}
 out:
 	if (outval) {
 		outval->resblks = mp->m_resblks;
@@ -528,8 +519,11 @@ xfs_do_force_shutdown(
 	int		tag;
 	const char	*why;
 
-	if (test_and_set_bit(XFS_OPSTATE_SHUTDOWN, &mp->m_opstate))
+	if (test_and_set_bit(XFS_OPSTATE_SHUTDOWN, &mp->m_opstate)) {
+		xlog_shutdown_wait(mp->m_log);
 		return;
+	}
 
 	if (mp->m_sb_bp)
 		mp->m_sb_bp->b_flags |= XBF_DONE;
...
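The reworked xfs_reserve_blocks() above boils down to: always record the requested pool size, then move at most min(free, delta) blocks into the pool, leaving any shortfall to be topped up later. A minimal userspace model of that idea follows; the names are hypothetical, and the real code also handles locking and routes the space back through xfs_mod_fdblocks:

/*
 * Simplified model of the new reserve pool sizing logic. Not kernel code:
 * no m_sb_lock, no xfs_mod_fdblocks refill path, just the core arithmetic.
 */
#include <stdint.h>
#include <stdio.h>

struct fake_pool {
	int64_t	fdblocks;	/* free blocks available to allocations */
	int64_t	resblks;	/* desired reserve pool size */
	int64_t	resblks_avail;	/* blocks currently held in the pool */
};

static void fake_reserve_blocks(struct fake_pool *p, int64_t request)
{
	int64_t	delta = request - p->resblks;
	int64_t	free = p->fdblocks;

	p->resblks = request;		/* always record the new target */
	if (delta > 0 && free > 0) {
		int64_t move = delta < free ? delta : free;

		p->fdblocks -= move;	/* partial fill when space is short */
		p->resblks_avail += move;
	}
}

int main(void)
{
	struct fake_pool p = { .fdblocks = 100, .resblks = 0, .resblks_avail = 0 };

	fake_reserve_blocks(&p, 8192);	/* request far exceeds free space */
	printf("target %lld, filled %lld, free left %lld\n",
	       (long long)p.resblks, (long long)p.resblks_avail,
	       (long long)p.fdblocks);
	return 0;
}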
@@ -883,7 +883,7 @@ xfs_reclaim_inode(
 	 */
 	if (xlog_is_shutdown(ip->i_mount->m_log)) {
 		xfs_iunpin_wait(ip);
-		xfs_iflush_abort(ip);
+		xfs_iflush_shutdown_abort(ip);
 		goto reclaim;
 	}
 	if (xfs_ipincount(ip))
...
@@ -3631,7 +3631,7 @@ xfs_iflush_cluster(
 
 	/*
 	 * We must use the safe variant here as on shutdown xfs_iflush_abort()
-	 * can remove itself from the list.
+	 * will remove itself from the list.
 	 */
 	list_for_each_entry_safe(lip, n, &bp->b_li_list, li_bio_list) {
 		iip = (struct xfs_inode_log_item *)lip;
...
...@@ -544,10 +544,17 @@ xfs_inode_item_push( ...@@ -544,10 +544,17 @@ xfs_inode_item_push(
uint rval = XFS_ITEM_SUCCESS; uint rval = XFS_ITEM_SUCCESS;
int error; int error;
ASSERT(iip->ili_item.li_buf); if (!bp || (ip->i_flags & XFS_ISTALE)) {
/*
* Inode item/buffer is being being aborted due to cluster
* buffer deletion. Trigger a log force to have that operation
* completed and items removed from the AIL before the next push
* attempt.
*/
return XFS_ITEM_PINNED;
}
if (xfs_ipincount(ip) > 0 || xfs_buf_ispinned(bp) || if (xfs_ipincount(ip) > 0 || xfs_buf_ispinned(bp))
(ip->i_flags & XFS_ISTALE))
return XFS_ITEM_PINNED; return XFS_ITEM_PINNED;
if (xfs_iflags_test(ip, XFS_IFLUSHING)) if (xfs_iflags_test(ip, XFS_IFLUSHING))
...@@ -834,46 +841,143 @@ xfs_buf_inode_io_fail( ...@@ -834,46 +841,143 @@ xfs_buf_inode_io_fail(
} }
/* /*
* This is the inode flushing abort routine. It is called when * Clear the inode logging fields so no more flushes are attempted. If we are
* the filesystem is shutting down to clean up the inode state. It is * on a buffer list, it is now safe to remove it because the buffer is
* responsible for removing the inode item from the AIL if it has not been * guaranteed to be locked. The caller will drop the reference to the buffer
* re-logged and clearing the inode's flush state. * the log item held.
*/
static void
xfs_iflush_abort_clean(
struct xfs_inode_log_item *iip)
{
iip->ili_last_fields = 0;
iip->ili_fields = 0;
iip->ili_fsync_fields = 0;
iip->ili_flush_lsn = 0;
iip->ili_item.li_buf = NULL;
list_del_init(&iip->ili_item.li_bio_list);
}
/*
* Abort flushing the inode from a context holding the cluster buffer locked.
*
* This is the normal runtime method of aborting writeback of an inode that is
* attached to a cluster buffer. It occurs when the inode and the backing
* cluster buffer have been freed (i.e. inode is XFS_ISTALE), or when cluster
* flushing or buffer IO completion encounters a log shutdown situation.
*
* If we need to abort inode writeback and we don't already hold the buffer
* locked, call xfs_iflush_shutdown_abort() instead as this should only ever be
* necessary in a shutdown situation.
*/ */
void void
xfs_iflush_abort( xfs_iflush_abort(
struct xfs_inode *ip) struct xfs_inode *ip)
{ {
struct xfs_inode_log_item *iip = ip->i_itemp; struct xfs_inode_log_item *iip = ip->i_itemp;
struct xfs_buf *bp = NULL; struct xfs_buf *bp;
if (!iip) {
/* clean inode, nothing to do */
xfs_iflags_clear(ip, XFS_IFLUSHING);
return;
}
if (iip) {
/* /*
* Clear the failed bit before removing the item from the AIL so * Remove the inode item from the AIL before we clear its internal
* xfs_trans_ail_delete() doesn't try to clear and release the * state. Whilst the inode is in the AIL, it should have a valid buffer
* buffer attached to the log item before we are done with it. * pointer for push operations to access - it is only safe to remove the
* inode from the buffer once it has been removed from the AIL.
*
* We also clear the failed bit before removing the item from the AIL
* as xfs_trans_ail_delete()->xfs_clear_li_failed() will release buffer
* references the inode item owns and needs to hold until we've fully
* aborted the inode log item and detached it from the buffer.
*/ */
clear_bit(XFS_LI_FAILED, &iip->ili_item.li_flags); clear_bit(XFS_LI_FAILED, &iip->ili_item.li_flags);
xfs_trans_ail_delete(&iip->ili_item, 0); xfs_trans_ail_delete(&iip->ili_item, 0);
/* /*
* Clear the inode logging fields so no more flushes are * Grab the inode buffer so can we release the reference the inode log
* attempted. * item holds on it.
*/ */
spin_lock(&iip->ili_lock); spin_lock(&iip->ili_lock);
iip->ili_last_fields = 0;
iip->ili_fields = 0;
iip->ili_fsync_fields = 0;
iip->ili_flush_lsn = 0;
bp = iip->ili_item.li_buf; bp = iip->ili_item.li_buf;
iip->ili_item.li_buf = NULL; xfs_iflush_abort_clean(iip);
list_del_init(&iip->ili_item.li_bio_list);
spin_unlock(&iip->ili_lock); spin_unlock(&iip->ili_lock);
}
xfs_iflags_clear(ip, XFS_IFLUSHING); xfs_iflags_clear(ip, XFS_IFLUSHING);
if (bp) if (bp)
xfs_buf_rele(bp); xfs_buf_rele(bp);
} }
/*
* Abort an inode flush in the case of a shutdown filesystem. This can be called
* from anywhere with just an inode reference and does not require holding the
* inode cluster buffer locked. If the inode is attached to a cluster buffer,
* it will grab and lock it safely, then abort the inode flush.
*/
void
xfs_iflush_shutdown_abort(
struct xfs_inode *ip)
{
struct xfs_inode_log_item *iip = ip->i_itemp;
struct xfs_buf *bp;
if (!iip) {
/* clean inode, nothing to do */
xfs_iflags_clear(ip, XFS_IFLUSHING);
return;
}
spin_lock(&iip->ili_lock);
bp = iip->ili_item.li_buf;
if (!bp) {
spin_unlock(&iip->ili_lock);
xfs_iflush_abort(ip);
return;
}
/*
* We have to take a reference to the buffer so that it doesn't get
* freed when we drop the ili_lock and then wait to lock the buffer.
* We'll clean up the extra reference after we pick up the ili_lock
* again.
*/
xfs_buf_hold(bp);
spin_unlock(&iip->ili_lock);
xfs_buf_lock(bp);
spin_lock(&iip->ili_lock);
if (!iip->ili_item.li_buf) {
/*
* Raced with another removal, hold the only reference
* to bp now. Inode should not be in the AIL now, so just clean
* up and return;
*/
ASSERT(list_empty(&iip->ili_item.li_bio_list));
ASSERT(!test_bit(XFS_LI_IN_AIL, &iip->ili_item.li_flags));
xfs_iflush_abort_clean(iip);
spin_unlock(&iip->ili_lock);
xfs_iflags_clear(ip, XFS_IFLUSHING);
xfs_buf_relse(bp);
return;
}
/*
* Got two references to bp. The first will get dropped by
* xfs_iflush_abort() when the item is removed from the buffer list, but
* we can't drop our reference until _abort() returns because we have to
* unlock the buffer as well. Hence we abort and then unlock and release
* our reference to the buffer.
*/
ASSERT(iip->ili_item.li_buf == bp);
spin_unlock(&iip->ili_lock);
xfs_iflush_abort(ip);
xfs_buf_relse(bp);
}
/* /*
* convert an xfs_inode_log_format struct from the old 32 bit version * convert an xfs_inode_log_format struct from the old 32 bit version
* (which can have different field alignments) to the native 64 bit version * (which can have different field alignments) to the native 64 bit version
......
...@@ -44,6 +44,7 @@ static inline int xfs_inode_clean(struct xfs_inode *ip) ...@@ -44,6 +44,7 @@ static inline int xfs_inode_clean(struct xfs_inode *ip)
extern void xfs_inode_item_init(struct xfs_inode *, struct xfs_mount *); extern void xfs_inode_item_init(struct xfs_inode *, struct xfs_mount *);
extern void xfs_inode_item_destroy(struct xfs_inode *); extern void xfs_inode_item_destroy(struct xfs_inode *);
extern void xfs_iflush_abort(struct xfs_inode *); extern void xfs_iflush_abort(struct xfs_inode *);
extern void xfs_iflush_shutdown_abort(struct xfs_inode *);
extern int xfs_inode_item_format_convert(xfs_log_iovec_t *, extern int xfs_inode_item_format_convert(xfs_log_iovec_t *,
struct xfs_inode_log_format *); struct xfs_inode_log_format *);
......
@@ -197,8 +197,6 @@ static inline uint64_t howmany_64(uint64_t x, uint32_t y)
 
 int xfs_rw_bdev(struct block_device *bdev, sector_t sector, unsigned int count,
 		char *data, unsigned int op);
-void xfs_flush_bdev_async(struct bio *bio, struct block_device *bdev,
-		struct completion *done);
 
 #define ASSERT_ALWAYS(expr)	\
 	(likely(expr) ? (void)0 : assfail(NULL, #expr, __FILE__, __LINE__))
...
...@@ -487,7 +487,10 @@ xfs_log_reserve( ...@@ -487,7 +487,10 @@ xfs_log_reserve(
* Run all the pending iclog callbacks and wake log force waiters and iclog * Run all the pending iclog callbacks and wake log force waiters and iclog
* space waiters so they can process the newly set shutdown state. We really * space waiters so they can process the newly set shutdown state. We really
* don't care what order we process callbacks here because the log is shut down * don't care what order we process callbacks here because the log is shut down
* and so state cannot change on disk anymore. * and so state cannot change on disk anymore. However, we cannot wake waiters
* until the callbacks have been processed because we may be in unmount and
* we must ensure that all AIL operations the callbacks perform have completed
* before we tear down the AIL.
* *
* We avoid processing actively referenced iclogs so that we don't run callbacks * We avoid processing actively referenced iclogs so that we don't run callbacks
* while the iclog owner might still be preparing the iclog for IO submssion. * while the iclog owner might still be preparing the iclog for IO submssion.
...@@ -501,7 +504,6 @@ xlog_state_shutdown_callbacks( ...@@ -501,7 +504,6 @@ xlog_state_shutdown_callbacks(
struct xlog_in_core *iclog; struct xlog_in_core *iclog;
LIST_HEAD(cb_list); LIST_HEAD(cb_list);
spin_lock(&log->l_icloglock);
iclog = log->l_iclog; iclog = log->l_iclog;
do { do {
if (atomic_read(&iclog->ic_refcnt)) { if (atomic_read(&iclog->ic_refcnt)) {
...@@ -509,26 +511,22 @@ xlog_state_shutdown_callbacks( ...@@ -509,26 +511,22 @@ xlog_state_shutdown_callbacks(
continue; continue;
} }
list_splice_init(&iclog->ic_callbacks, &cb_list); list_splice_init(&iclog->ic_callbacks, &cb_list);
spin_unlock(&log->l_icloglock);
xlog_cil_process_committed(&cb_list);
spin_lock(&log->l_icloglock);
wake_up_all(&iclog->ic_write_wait); wake_up_all(&iclog->ic_write_wait);
wake_up_all(&iclog->ic_force_wait); wake_up_all(&iclog->ic_force_wait);
} while ((iclog = iclog->ic_next) != log->l_iclog); } while ((iclog = iclog->ic_next) != log->l_iclog);
wake_up_all(&log->l_flush_wait); wake_up_all(&log->l_flush_wait);
spin_unlock(&log->l_icloglock);
xlog_cil_process_committed(&cb_list);
} }
/* /*
* Flush iclog to disk if this is the last reference to the given iclog and the * Flush iclog to disk if this is the last reference to the given iclog and the
* it is in the WANT_SYNC state. * it is in the WANT_SYNC state.
* *
* If the caller passes in a non-zero @old_tail_lsn and the current log tail
* does not match, there may be metadata on disk that must be persisted before
* this iclog is written. To satisfy that requirement, set the
* XLOG_ICL_NEED_FLUSH flag as a condition for writing this iclog with the new
* log tail value.
*
* If XLOG_ICL_NEED_FUA is already set on the iclog, we need to ensure that the * If XLOG_ICL_NEED_FUA is already set on the iclog, we need to ensure that the
* log tail is updated correctly. NEED_FUA indicates that the iclog will be * log tail is updated correctly. NEED_FUA indicates that the iclog will be
* written to stable storage, and implies that a commit record is contained * written to stable storage, and implies that a commit record is contained
...@@ -545,12 +543,10 @@ xlog_state_shutdown_callbacks( ...@@ -545,12 +543,10 @@ xlog_state_shutdown_callbacks(
* always capture the tail lsn on the iclog on the first NEED_FUA release * always capture the tail lsn on the iclog on the first NEED_FUA release
* regardless of the number of active reference counts on this iclog. * regardless of the number of active reference counts on this iclog.
*/ */
int int
xlog_state_release_iclog( xlog_state_release_iclog(
struct xlog *log, struct xlog *log,
struct xlog_in_core *iclog, struct xlog_in_core *iclog)
xfs_lsn_t old_tail_lsn)
{ {
xfs_lsn_t tail_lsn; xfs_lsn_t tail_lsn;
bool last_ref; bool last_ref;
...@@ -561,17 +557,13 @@ xlog_state_release_iclog( ...@@ -561,17 +557,13 @@ xlog_state_release_iclog(
/* /*
* Grabbing the current log tail needs to be atomic w.r.t. the writing * Grabbing the current log tail needs to be atomic w.r.t. the writing
* of the tail LSN into the iclog so we guarantee that the log tail does * of the tail LSN into the iclog so we guarantee that the log tail does
* not move between deciding if a cache flush is required and writing * not move between the first time we know that the iclog needs to be
* the LSN into the iclog below. * made stable and when we eventually submit it.
*/ */
if (old_tail_lsn || iclog->ic_state == XLOG_STATE_WANT_SYNC) { if ((iclog->ic_state == XLOG_STATE_WANT_SYNC ||
(iclog->ic_flags & XLOG_ICL_NEED_FUA)) &&
!iclog->ic_header.h_tail_lsn) {
tail_lsn = xlog_assign_tail_lsn(log->l_mp); tail_lsn = xlog_assign_tail_lsn(log->l_mp);
if (old_tail_lsn && tail_lsn != old_tail_lsn)
iclog->ic_flags |= XLOG_ICL_NEED_FLUSH;
if ((iclog->ic_flags & XLOG_ICL_NEED_FUA) &&
!iclog->ic_header.h_tail_lsn)
iclog->ic_header.h_tail_lsn = cpu_to_be64(tail_lsn); iclog->ic_header.h_tail_lsn = cpu_to_be64(tail_lsn);
} }
...@@ -583,11 +575,8 @@ xlog_state_release_iclog( ...@@ -583,11 +575,8 @@ xlog_state_release_iclog(
* pending iclog callbacks that were waiting on the release of * pending iclog callbacks that were waiting on the release of
* this iclog. * this iclog.
*/ */
if (last_ref) { if (last_ref)
spin_unlock(&log->l_icloglock);
xlog_state_shutdown_callbacks(log); xlog_state_shutdown_callbacks(log);
spin_lock(&log->l_icloglock);
}
return -EIO; return -EIO;
} }
...@@ -600,8 +589,6 @@ xlog_state_release_iclog( ...@@ -600,8 +589,6 @@ xlog_state_release_iclog(
} }
iclog->ic_state = XLOG_STATE_SYNCING; iclog->ic_state = XLOG_STATE_SYNCING;
if (!iclog->ic_header.h_tail_lsn)
iclog->ic_header.h_tail_lsn = cpu_to_be64(tail_lsn);
xlog_verify_tail_lsn(log, iclog); xlog_verify_tail_lsn(log, iclog);
trace_xlog_iclog_syncing(iclog, _RET_IP_); trace_xlog_iclog_syncing(iclog, _RET_IP_);
...@@ -873,7 +860,7 @@ xlog_force_iclog( ...@@ -873,7 +860,7 @@ xlog_force_iclog(
iclog->ic_flags |= XLOG_ICL_NEED_FLUSH | XLOG_ICL_NEED_FUA; iclog->ic_flags |= XLOG_ICL_NEED_FLUSH | XLOG_ICL_NEED_FUA;
if (iclog->ic_state == XLOG_STATE_ACTIVE) if (iclog->ic_state == XLOG_STATE_ACTIVE)
xlog_state_switch_iclogs(iclog->ic_log, iclog, 0); xlog_state_switch_iclogs(iclog->ic_log, iclog, 0);
return xlog_state_release_iclog(iclog->ic_log, iclog, 0); return xlog_state_release_iclog(iclog->ic_log, iclog);
} }
/* /*
...@@ -1373,7 +1360,7 @@ xlog_ioend_work( ...@@ -1373,7 +1360,7 @@ xlog_ioend_work(
*/ */
if (XFS_TEST_ERROR(error, log->l_mp, XFS_ERRTAG_IODONE_IOERR)) { if (XFS_TEST_ERROR(error, log->l_mp, XFS_ERRTAG_IODONE_IOERR)) {
xfs_alert(log->l_mp, "log I/O error %d", error); xfs_alert(log->l_mp, "log I/O error %d", error);
xfs_force_shutdown(log->l_mp, SHUTDOWN_LOG_IO_ERROR); xlog_force_shutdown(log, SHUTDOWN_LOG_IO_ERROR);
} }
xlog_state_done_syncing(iclog); xlog_state_done_syncing(iclog);
...@@ -1912,7 +1899,7 @@ xlog_write_iclog( ...@@ -1912,7 +1899,7 @@ xlog_write_iclog(
iclog->ic_flags &= ~(XLOG_ICL_NEED_FLUSH | XLOG_ICL_NEED_FUA); iclog->ic_flags &= ~(XLOG_ICL_NEED_FLUSH | XLOG_ICL_NEED_FUA);
if (xlog_map_iclog_data(&iclog->ic_bio, iclog->ic_data, count)) { if (xlog_map_iclog_data(&iclog->ic_bio, iclog->ic_data, count)) {
xfs_force_shutdown(log->l_mp, SHUTDOWN_LOG_IO_ERROR); xlog_force_shutdown(log, SHUTDOWN_LOG_IO_ERROR);
return; return;
} }
if (is_vmalloc_addr(iclog->ic_data)) if (is_vmalloc_addr(iclog->ic_data))
...@@ -2411,7 +2398,7 @@ xlog_write_copy_finish( ...@@ -2411,7 +2398,7 @@ xlog_write_copy_finish(
ASSERT(iclog->ic_state == XLOG_STATE_WANT_SYNC || ASSERT(iclog->ic_state == XLOG_STATE_WANT_SYNC ||
xlog_is_shutdown(log)); xlog_is_shutdown(log));
release_iclog: release_iclog:
error = xlog_state_release_iclog(log, iclog, 0); error = xlog_state_release_iclog(log, iclog);
spin_unlock(&log->l_icloglock); spin_unlock(&log->l_icloglock);
return error; return error;
} }
...@@ -2487,7 +2474,7 @@ xlog_write( ...@@ -2487,7 +2474,7 @@ xlog_write(
xfs_alert_tag(log->l_mp, XFS_PTAG_LOGRES, xfs_alert_tag(log->l_mp, XFS_PTAG_LOGRES,
"ctx ticket reservation ran out. Need to up reservation"); "ctx ticket reservation ran out. Need to up reservation");
xlog_print_tic_res(log->l_mp, ticket); xlog_print_tic_res(log->l_mp, ticket);
xfs_force_shutdown(log->l_mp, SHUTDOWN_LOG_IO_ERROR); xlog_force_shutdown(log, SHUTDOWN_LOG_IO_ERROR);
} }
len = xlog_write_calc_vec_length(ticket, log_vector, optype); len = xlog_write_calc_vec_length(ticket, log_vector, optype);
...@@ -2628,7 +2615,7 @@ xlog_write( ...@@ -2628,7 +2615,7 @@ xlog_write(
spin_lock(&log->l_icloglock); spin_lock(&log->l_icloglock);
xlog_state_finish_copy(log, iclog, record_cnt, data_cnt); xlog_state_finish_copy(log, iclog, record_cnt, data_cnt);
error = xlog_state_release_iclog(log, iclog, 0); error = xlog_state_release_iclog(log, iclog);
spin_unlock(&log->l_icloglock); spin_unlock(&log->l_icloglock);
return error; return error;
...@@ -3052,7 +3039,7 @@ xlog_state_get_iclog_space( ...@@ -3052,7 +3039,7 @@ xlog_state_get_iclog_space(
* reference to the iclog. * reference to the iclog.
*/ */
if (!atomic_add_unless(&iclog->ic_refcnt, -1, 1)) if (!atomic_add_unless(&iclog->ic_refcnt, -1, 1))
error = xlog_state_release_iclog(log, iclog, 0); error = xlog_state_release_iclog(log, iclog);
spin_unlock(&log->l_icloglock); spin_unlock(&log->l_icloglock);
if (error) if (error)
return error; return error;
...@@ -3821,9 +3808,10 @@ xlog_verify_iclog( ...@@ -3821,9 +3808,10 @@ xlog_verify_iclog(
#endif #endif
/* /*
* Perform a forced shutdown on the log. This should be called once and once * Perform a forced shutdown on the log.
* only by the high level filesystem shutdown code to shut the log subsystem *
* down cleanly. * This can be called from low level log code to trigger a shutdown, or from the
* high level mount shutdown code when the mount shuts down.
* *
* Our main objectives here are to make sure that: * Our main objectives here are to make sure that:
* a. if the shutdown was not due to a log IO error, flush the logs to * a. if the shutdown was not due to a log IO error, flush the logs to
...@@ -3832,6 +3820,8 @@ xlog_verify_iclog( ...@@ -3832,6 +3820,8 @@ xlog_verify_iclog(
* parties to find out. Nothing new gets queued after this is done. * parties to find out. Nothing new gets queued after this is done.
* c. Tasks sleeping on log reservations, pinned objects and * c. Tasks sleeping on log reservations, pinned objects and
* other resources get woken up. * other resources get woken up.
* d. The mount is also marked as shut down so that log triggered shutdowns
* still behave the same as if they called xfs_forced_shutdown().
* *
* Return true if the shutdown cause was a log IO error and we actually shut the * Return true if the shutdown cause was a log IO error and we actually shut the
* log down. * log down.
...@@ -3843,25 +3833,25 @@ xlog_force_shutdown( ...@@ -3843,25 +3833,25 @@ xlog_force_shutdown(
{ {
bool log_error = (shutdown_flags & SHUTDOWN_LOG_IO_ERROR); bool log_error = (shutdown_flags & SHUTDOWN_LOG_IO_ERROR);
/* if (!log)
* If this happens during log recovery then we aren't using the runtime
* log mechanisms yet so there's nothing to shut down.
*/
if (!log || xlog_in_recovery(log))
return false; return false;
ASSERT(!xlog_is_shutdown(log));
/* /*
* Flush all the completed transactions to disk before marking the log * Flush all the completed transactions to disk before marking the log
* being shut down. We need to do this first as shutting down the log * being shut down. We need to do this first as shutting down the log
* before the force will prevent the log force from flushing the iclogs * before the force will prevent the log force from flushing the iclogs
* to disk. * to disk.
* *
* Re-entry due to a log IO error shutdown during the log force is * When we are in recovery, there are no transactions to flush, and
* prevented by the atomicity of higher level shutdown code. * we don't want to touch the log because we don't want to perturb the
* current head/tail for future recovery attempts. Hence we need to
* avoid a log force in this case.
*
* If we are shutting down due to a log IO error, then we must avoid
* trying to write the log as that may just result in more IO errors and
* an endless shutdown/force loop.
*/ */
if (!log_error) if (!log_error && !xlog_in_recovery(log))
xfs_log_force(log->l_mp, XFS_LOG_SYNC); xfs_log_force(log->l_mp, XFS_LOG_SYNC);
/* /*
...@@ -3878,11 +3868,24 @@ xlog_force_shutdown( ...@@ -3878,11 +3868,24 @@ xlog_force_shutdown(
spin_lock(&log->l_icloglock); spin_lock(&log->l_icloglock);
if (test_and_set_bit(XLOG_IO_ERROR, &log->l_opstate)) { if (test_and_set_bit(XLOG_IO_ERROR, &log->l_opstate)) {
spin_unlock(&log->l_icloglock); spin_unlock(&log->l_icloglock);
ASSERT(0);
return false; return false;
} }
spin_unlock(&log->l_icloglock); spin_unlock(&log->l_icloglock);
/*
* If this log shutdown also sets the mount shutdown state, issue a
* shutdown warning message.
*/
if (!test_and_set_bit(XFS_OPSTATE_SHUTDOWN, &log->l_mp->m_opstate)) {
xfs_alert_tag(log->l_mp, XFS_PTAG_SHUTDOWN_LOGERROR,
"Filesystem has been shut down due to log error (0x%x).",
shutdown_flags);
xfs_alert(log->l_mp,
"Please unmount the filesystem and rectify the problem(s).");
if (xfs_error_level >= XFS_ERRLEVEL_HIGH)
xfs_stack_trace();
}
/* /*
* We don't want anybody waiting for log reservations after this. That * We don't want anybody waiting for log reservations after this. That
* means we have to wake up everybody queued up on reserveq as well as * means we have to wake up everybody queued up on reserveq as well as
...@@ -3903,8 +3906,12 @@ xlog_force_shutdown( ...@@ -3903,8 +3906,12 @@ xlog_force_shutdown(
wake_up_all(&log->l_cilp->xc_start_wait); wake_up_all(&log->l_cilp->xc_start_wait);
wake_up_all(&log->l_cilp->xc_commit_wait); wake_up_all(&log->l_cilp->xc_commit_wait);
spin_unlock(&log->l_cilp->xc_push_lock); spin_unlock(&log->l_cilp->xc_push_lock);
spin_lock(&log->l_icloglock);
xlog_state_shutdown_callbacks(log); xlog_state_shutdown_callbacks(log);
spin_unlock(&log->l_icloglock);
wake_up_var(&log->l_opstate);
return log_error; return log_error;
} }
......
...@@ -540,7 +540,7 @@ xlog_cil_insert_items( ...@@ -540,7 +540,7 @@ xlog_cil_insert_items(
spin_unlock(&cil->xc_cil_lock); spin_unlock(&cil->xc_cil_lock);
if (tp->t_ticket->t_curr_res < 0) if (tp->t_ticket->t_curr_res < 0)
xfs_force_shutdown(log->l_mp, SHUTDOWN_LOG_IO_ERROR); xlog_force_shutdown(log, SHUTDOWN_LOG_IO_ERROR);
} }
static void static void
...@@ -705,11 +705,21 @@ xlog_cil_set_ctx_write_state( ...@@ -705,11 +705,21 @@ xlog_cil_set_ctx_write_state(
* The LSN we need to pass to the log items on transaction * The LSN we need to pass to the log items on transaction
* commit is the LSN reported by the first log vector write, not * commit is the LSN reported by the first log vector write, not
* the commit lsn. If we use the commit record lsn then we can * the commit lsn. If we use the commit record lsn then we can
* move the tail beyond the grant write head. * move the grant write head beyond the tail LSN and overwrite
* it.
*/ */
ctx->start_lsn = lsn; ctx->start_lsn = lsn;
wake_up_all(&cil->xc_start_wait); wake_up_all(&cil->xc_start_wait);
spin_unlock(&cil->xc_push_lock); spin_unlock(&cil->xc_push_lock);
/*
* Make sure the metadata we are about to overwrite in the log
* has been flushed to stable storage before this iclog is
* issued.
*/
spin_lock(&cil->xc_log->l_icloglock);
iclog->ic_flags |= XLOG_ICL_NEED_FLUSH;
spin_unlock(&cil->xc_log->l_icloglock);
return; return;
} }
...@@ -854,7 +864,7 @@ xlog_cil_write_commit_record( ...@@ -854,7 +864,7 @@ xlog_cil_write_commit_record(
error = xlog_write(log, ctx, &vec, ctx->ticket, XLOG_COMMIT_TRANS); error = xlog_write(log, ctx, &vec, ctx->ticket, XLOG_COMMIT_TRANS);
if (error) if (error)
xfs_force_shutdown(log->l_mp, SHUTDOWN_LOG_IO_ERROR); xlog_force_shutdown(log, SHUTDOWN_LOG_IO_ERROR);
return error; return error;
} }
...@@ -888,10 +898,7 @@ xlog_cil_push_work( ...@@ -888,10 +898,7 @@ xlog_cil_push_work(
struct xfs_trans_header thdr; struct xfs_trans_header thdr;
struct xfs_log_iovec lhdr; struct xfs_log_iovec lhdr;
struct xfs_log_vec lvhdr = { NULL }; struct xfs_log_vec lvhdr = { NULL };
xfs_lsn_t preflush_tail_lsn;
xfs_csn_t push_seq; xfs_csn_t push_seq;
struct bio bio;
DECLARE_COMPLETION_ONSTACK(bdev_flush);
bool push_commit_stable; bool push_commit_stable;
new_ctx = xlog_cil_ctx_alloc(); new_ctx = xlog_cil_ctx_alloc();
...@@ -961,23 +968,6 @@ xlog_cil_push_work( ...@@ -961,23 +968,6 @@ xlog_cil_push_work(
list_add(&ctx->committing, &cil->xc_committing); list_add(&ctx->committing, &cil->xc_committing);
spin_unlock(&cil->xc_push_lock); spin_unlock(&cil->xc_push_lock);
/*
* The CIL is stable at this point - nothing new will be added to it
* because we hold the flush lock exclusively. Hence we can now issue
* a cache flush to ensure all the completed metadata in the journal we
* are about to overwrite is on stable storage.
*
* Because we are issuing this cache flush before we've written the
* tail lsn to the iclog, we can have metadata IO completions move the
* tail forwards between the completion of this flush and the iclog
* being written. In this case, we need to re-issue the cache flush
* before the iclog write. To detect whether the log tail moves, sample
* the tail LSN *before* we issue the flush.
*/
preflush_tail_lsn = atomic64_read(&log->l_tail_lsn);
xfs_flush_bdev_async(&bio, log->l_mp->m_ddev_targp->bt_bdev,
&bdev_flush);
/* /*
* Pull all the log vectors off the items in the CIL, and remove the * Pull all the log vectors off the items in the CIL, and remove the
* items from the CIL. We don't need the CIL lock here because it's only * items from the CIL. We don't need the CIL lock here because it's only
...@@ -1054,12 +1044,6 @@ xlog_cil_push_work( ...@@ -1054,12 +1044,6 @@ xlog_cil_push_work(
lvhdr.lv_iovecp = &lhdr; lvhdr.lv_iovecp = &lhdr;
lvhdr.lv_next = ctx->lv_chain; lvhdr.lv_next = ctx->lv_chain;
/*
* Before we format and submit the first iclog, we have to ensure that
* the metadata writeback ordering cache flush is complete.
*/
wait_for_completion(&bdev_flush);
error = xlog_cil_write_chain(ctx, &lvhdr); error = xlog_cil_write_chain(ctx, &lvhdr);
if (error) if (error)
goto out_abort_free_ticket; goto out_abort_free_ticket;
...@@ -1118,7 +1102,7 @@ xlog_cil_push_work( ...@@ -1118,7 +1102,7 @@ xlog_cil_push_work(
if (push_commit_stable && if (push_commit_stable &&
ctx->commit_iclog->ic_state == XLOG_STATE_ACTIVE) ctx->commit_iclog->ic_state == XLOG_STATE_ACTIVE)
xlog_state_switch_iclogs(log, ctx->commit_iclog, 0); xlog_state_switch_iclogs(log, ctx->commit_iclog, 0);
xlog_state_release_iclog(log, ctx->commit_iclog, preflush_tail_lsn); xlog_state_release_iclog(log, ctx->commit_iclog);
/* Not safe to reference ctx now! */ /* Not safe to reference ctx now! */
...@@ -1139,7 +1123,7 @@ xlog_cil_push_work( ...@@ -1139,7 +1123,7 @@ xlog_cil_push_work(
return; return;
} }
spin_lock(&log->l_icloglock); spin_lock(&log->l_icloglock);
xlog_state_release_iclog(log, ctx->commit_iclog, 0); xlog_state_release_iclog(log, ctx->commit_iclog);
/* Not safe to reference ctx now! */ /* Not safe to reference ctx now! */
spin_unlock(&log->l_icloglock); spin_unlock(&log->l_icloglock);
} }
......
@@ -484,6 +484,17 @@ xlog_is_shutdown(struct xlog *log)
 	return test_bit(XLOG_IO_ERROR, &log->l_opstate);
 }
 
+/*
+ * Wait until the xlog_force_shutdown() has marked the log as shut down
+ * so xlog_is_shutdown() will always return true.
+ */
+static inline void
+xlog_shutdown_wait(
+	struct xlog	*log)
+{
+	wait_var_event(&log->l_opstate, xlog_is_shutdown(log));
+}
+
 /* common routines */
 extern int
 xlog_recover(
@@ -524,8 +535,7 @@ void xfs_log_ticket_regrant(struct xlog *log, struct xlog_ticket *ticket);
 void	xlog_state_switch_iclogs(struct xlog *log, struct xlog_in_core *iclog,
 		int eventual_size);
-int	xlog_state_release_iclog(struct xlog *log, struct xlog_in_core *iclog,
-		xfs_lsn_t log_tail_lsn);
+int	xlog_state_release_iclog(struct xlog *log, struct xlog_in_core *iclog);
 
 /*
  * When we crack an atomic LSN, we sample it first so that the value will not
...
...@@ -2485,7 +2485,7 @@ xlog_finish_defer_ops( ...@@ -2485,7 +2485,7 @@ xlog_finish_defer_ops(
error = xfs_trans_alloc(mp, &resv, dfc->dfc_blkres, error = xfs_trans_alloc(mp, &resv, dfc->dfc_blkres,
dfc->dfc_rtxres, XFS_TRANS_RESERVE, &tp); dfc->dfc_rtxres, XFS_TRANS_RESERVE, &tp);
if (error) { if (error) {
xfs_force_shutdown(mp, SHUTDOWN_LOG_IO_ERROR); xlog_force_shutdown(mp->m_log, SHUTDOWN_LOG_IO_ERROR);
return error; return error;
} }
...@@ -2519,21 +2519,22 @@ xlog_abort_defer_ops( ...@@ -2519,21 +2519,22 @@ xlog_abort_defer_ops(
xfs_defer_ops_capture_free(mp, dfc); xfs_defer_ops_capture_free(mp, dfc);
} }
} }
/* /*
* When this is called, all of the log intent items which did not have * When this is called, all of the log intent items which did not have
* corresponding log done items should be in the AIL. What we do now * corresponding log done items should be in the AIL. What we do now is update
* is update the data structures associated with each one. * the data structures associated with each one.
* *
* Since we process the log intent items in normal transactions, they * Since we process the log intent items in normal transactions, they will be
* will be removed at some point after the commit. This prevents us * removed at some point after the commit. This prevents us from just walking
* from just walking down the list processing each one. We'll use a * down the list processing each one. We'll use a flag in the intent item to
* flag in the intent item to skip those that we've already processed * skip those that we've already processed and use the AIL iteration mechanism's
* and use the AIL iteration mechanism's generation count to try to * generation count to try to speed this up at least a bit.
* speed this up at least a bit.
* *
* When we start, we know that the intents are the only things in the * When we start, we know that the intents are the only things in the AIL. As we
* AIL. As we process them, however, other items are added to the * process them, however, other items are added to the AIL. Hence we know we
* AIL. * have started recovery on all the pending intents when we find an non-intent
* item in the AIL.
*/ */
STATIC int STATIC int
xlog_recover_process_intents( xlog_recover_process_intents(
...@@ -2556,17 +2557,8 @@ xlog_recover_process_intents( ...@@ -2556,17 +2557,8 @@ xlog_recover_process_intents(
for (lip = xfs_trans_ail_cursor_first(ailp, &cur, 0); for (lip = xfs_trans_ail_cursor_first(ailp, &cur, 0);
lip != NULL; lip != NULL;
lip = xfs_trans_ail_cursor_next(ailp, &cur)) { lip = xfs_trans_ail_cursor_next(ailp, &cur)) {
/* if (!xlog_item_is_intent(lip))
* We're done when we see something other than an intent.
* There should be no intents left in the AIL now.
*/
if (!xlog_item_is_intent(lip)) {
#ifdef DEBUG
for (; lip; lip = xfs_trans_ail_cursor_next(ailp, &cur))
ASSERT(!xlog_item_is_intent(lip));
#endif
break; break;
}
/* /*
* We should never see a redo item with a LSN higher than * We should never see a redo item with a LSN higher than
...@@ -2607,8 +2599,9 @@ xlog_recover_process_intents( ...@@ -2607,8 +2599,9 @@ xlog_recover_process_intents(
} }
/* /*
* A cancel occurs when the mount has failed and we're bailing out. * A cancel occurs when the mount has failed and we're bailing out. Release all
* Release all pending log intent items so they don't pin the AIL. * pending log intent items that we haven't started recovery on so they don't
* pin the AIL.
*/ */
STATIC void STATIC void
xlog_recover_cancel_intents( xlog_recover_cancel_intents(
...@@ -2622,17 +2615,8 @@ xlog_recover_cancel_intents( ...@@ -2622,17 +2615,8 @@ xlog_recover_cancel_intents(
spin_lock(&ailp->ail_lock); spin_lock(&ailp->ail_lock);
lip = xfs_trans_ail_cursor_first(ailp, &cur, 0); lip = xfs_trans_ail_cursor_first(ailp, &cur, 0);
while (lip != NULL) { while (lip != NULL) {
/* if (!xlog_item_is_intent(lip))
* We're done when we see something other than an intent.
* There should be no intents left in the AIL now.
*/
if (!xlog_item_is_intent(lip)) {
#ifdef DEBUG
for (; lip; lip = xfs_trans_ail_cursor_next(ailp, &cur))
ASSERT(!xlog_item_is_intent(lip));
#endif
break; break;
}
spin_unlock(&ailp->ail_lock); spin_unlock(&ailp->ail_lock);
lip->li_ops->iop_release(lip); lip->li_ops->iop_release(lip);
...@@ -3470,7 +3454,7 @@ xlog_recover_finish( ...@@ -3470,7 +3454,7 @@ xlog_recover_finish(
*/ */
xlog_recover_cancel_intents(log); xlog_recover_cancel_intents(log);
xfs_alert(log->l_mp, "Failed to recover intents"); xfs_alert(log->l_mp, "Failed to recover intents");
xfs_force_shutdown(log->l_mp, SHUTDOWN_LOG_IO_ERROR); xlog_force_shutdown(log, SHUTDOWN_LOG_IO_ERROR);
return error; return error;
} }
...@@ -3517,7 +3501,7 @@ xlog_recover_finish( ...@@ -3517,7 +3501,7 @@ xlog_recover_finish(
* end of intents processing can be pushed through the CIL * end of intents processing can be pushed through the CIL
* and AIL. * and AIL.
*/ */
xfs_force_shutdown(log->l_mp, SHUTDOWN_LOG_IO_ERROR); xlog_force_shutdown(log, SHUTDOWN_LOG_IO_ERROR);
} }
return 0; return 0;
......
@@ -21,6 +21,7 @@
 #include "xfs_trans.h"
 #include "xfs_trans_priv.h"
 #include "xfs_log.h"
+#include "xfs_log_priv.h"
 #include "xfs_error.h"
 #include "xfs_quota.h"
 #include "xfs_fsops.h"
@@ -1146,7 +1147,7 @@ xfs_mod_fdblocks(
 	 * problems (i.e. transaction abort, pagecache discards, etc.) than
 	 * slightly premature -ENOSPC.
 	 */
-	set_aside = mp->m_alloc_set_aside + atomic64_read(&mp->m_allocbt_blks);
+	set_aside = xfs_fdblocks_unavailable(mp);
 	percpu_counter_add_batch(&mp->m_fdblocks, delta, batch);
 	if (__percpu_counter_compare(&mp->m_fdblocks, set_aside,
 				     XFS_FDBLOCKS_BATCH) >= 0) {
...
@@ -479,6 +479,21 @@ extern void xfs_unmountfs(xfs_mount_t *);
  */
 #define XFS_FDBLOCKS_BATCH	1024
 
+/*
+ * Estimate the amount of free space that is not available to userspace and is
+ * not explicitly reserved from the incore fdblocks. This includes:
+ *
+ * - The minimum number of blocks needed to support splitting a bmap btree
+ * - The blocks currently in use by the freespace btrees because they record
+ *   the actual blocks that will fill per-AG metadata space reservations
+ */
+static inline uint64_t
+xfs_fdblocks_unavailable(
+	struct xfs_mount	*mp)
+{
+	return mp->m_alloc_set_aside + atomic64_read(&mp->m_allocbt_blks);
+}
+
 extern int	xfs_mod_fdblocks(struct xfs_mount *mp, int64_t delta,
 				 bool reserved);
 extern int	xfs_mod_frextents(struct xfs_mount *mp, int64_t delta);
...
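The helper added above centralises the "unavailable" block count so callers subtract it from the free block counter and clamp at zero, as the statfs hunk that follows does. A standalone sketch of that usage, with a hypothetical fake_mount in place of struct xfs_mount (not kernel code):

/*
 * Userspace sketch of the xfs_fdblocks_unavailable() usage pattern:
 * unavailable = set-aside reserve + blocks held by the free space btrees,
 * and reported free space is clamped so it never goes negative.
 */
#include <stdint.h>
#include <stdio.h>

struct fake_mount {
	uint64_t	alloc_set_aside;	/* per-AG AGFL + bmbt split reserve */
	uint64_t	allocbt_blks;		/* blocks held by the free space btrees */
	uint64_t	fdblocks;		/* incore free block counter */
};

static uint64_t fake_fdblocks_unavailable(const struct fake_mount *mp)
{
	return mp->alloc_set_aside + mp->allocbt_blks;
}

int main(void)
{
	struct fake_mount mp = {
		.alloc_set_aside = 128, .allocbt_blks = 50, .fdblocks = 150,
	};
	int64_t bfree = (int64_t)mp.fdblocks -
			(int64_t)fake_fdblocks_unavailable(&mp);

	/* clamp like statfs does so reserved space is never reported as free */
	if (bfree < 0)
		bfree = 0;
	printf("reported free blocks: %lld\n", (long long)bfree);
	return 0;
}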
@@ -815,7 +815,8 @@ xfs_fs_statfs(
 	spin_unlock(&mp->m_sb_lock);
 
 	/* make sure statp->f_bfree does not underflow */
-	statp->f_bfree = max_t(int64_t, fdblocks - mp->m_alloc_set_aside, 0);
+	statp->f_bfree = max_t(int64_t, 0,
+				fdblocks - xfs_fdblocks_unavailable(mp));
 	statp->f_bavail = statp->f_bfree;
 
 	fakeinos = XFS_FSB_TO_INO(mp, statp->f_bfree);
...
@@ -836,6 +836,7 @@ __xfs_trans_commit(
 	bool			regrant)
 {
 	struct xfs_mount	*mp = tp->t_mountp;
+	struct xlog		*log = mp->m_log;
 	xfs_csn_t		commit_seq = 0;
 	int			error = 0;
 	int			sync = tp->t_flags & XFS_TRANS_SYNC;
@@ -864,7 +865,13 @@ __xfs_trans_commit(
 	if (!(tp->t_flags & XFS_TRANS_DIRTY))
 		goto out_unreserve;
 
-	if (xfs_is_shutdown(mp)) {
+	/*
+	 * We must check against log shutdown here because we cannot abort log
+	 * items and leave them dirty, inconsistent and unpinned in memory while
+	 * the log is active. This leaves them open to being written back to
+	 * disk, and that will lead to on-disk corruption.
+	 */
+	if (xlog_is_shutdown(log)) {
 		error = -EIO;
 		goto out_unreserve;
 	}
@@ -878,7 +885,7 @@ __xfs_trans_commit(
 		xfs_trans_apply_sb_deltas(tp);
 	xfs_trans_apply_dquot_deltas(tp);
 
-	xlog_cil_commit(mp->m_log, tp, &commit_seq, regrant);
+	xlog_cil_commit(log, tp, &commit_seq, regrant);
 
 	xfs_trans_free(tp);
@@ -905,10 +912,10 @@ __xfs_trans_commit(
 	 */
 	xfs_trans_unreserve_and_mod_dquots(tp);
 	if (tp->t_ticket) {
-		if (regrant && !xlog_is_shutdown(mp->m_log))
-			xfs_log_ticket_regrant(mp->m_log, tp->t_ticket);
+		if (regrant && !xlog_is_shutdown(log))
+			xfs_log_ticket_regrant(log, tp->t_ticket);
 		else
-			xfs_log_ticket_ungrant(mp->m_log, tp->t_ticket);
+			xfs_log_ticket_ungrant(log, tp->t_ticket);
 		tp->t_ticket = NULL;
 	}
 	xfs_trans_free_items(tp, !!error);
@@ -926,18 +933,27 @@ xfs_trans_commit(
 }
 
 /*
- * Unlock all of the transaction's items and free the transaction.
- * The transaction must not have modified any of its items, because
- * there is no way to restore them to their previous state.
+ * Unlock all of the transaction's items and free the transaction. If the
+ * transaction is dirty, we must shut down the filesystem because there is no
+ * way to restore them to their previous state.
  *
- * If the transaction has made a log reservation, make sure to release
- * it as well.
+ * If the transaction has made a log reservation, make sure to release it as
+ * well.
+ *
+ * This is a high level function (equivalent to xfs_trans_commit()) and so can
+ * be called after the transaction has effectively been aborted due to the mount
+ * being shut down. However, if the mount has not been shut down and the
+ * transaction is dirty we will shut the mount down and, in doing so, that
+ * guarantees that the log is shut down, too. Hence we don't need to be as
+ * careful with shutdown state and dirty items here as we need to be in
+ * xfs_trans_commit().
 */
 void
 xfs_trans_cancel(
 	struct xfs_trans	*tp)
 {
 	struct xfs_mount	*mp = tp->t_mountp;
+	struct xlog		*log = mp->m_log;
 	bool			dirty = (tp->t_flags & XFS_TRANS_DIRTY);
 
 	trace_xfs_trans_cancel(tp, _RET_IP_);
@@ -955,16 +971,18 @@ xfs_trans_cancel(
 	}
 
 	/*
-	 * See if the caller is relying on us to shut down the
-	 * filesystem. This happens in paths where we detect
-	 * corruption and decide to give up.
+	 * See if the caller is relying on us to shut down the filesystem. We
+	 * only want an error report if there isn't already a shutdown in
+	 * progress, so we only need to check against the mount shutdown state
+	 * here.
 	 */
 	if (dirty && !xfs_is_shutdown(mp)) {
 		XFS_ERROR_REPORT("xfs_trans_cancel", XFS_ERRLEVEL_LOW, mp);
 		xfs_force_shutdown(mp, SHUTDOWN_CORRUPT_INCORE);
 	}
 #ifdef DEBUG
-	if (!dirty && !xfs_is_shutdown(mp)) {
+	/* Log items need to be consistent until the log is shut down. */
+	if (!dirty && !xlog_is_shutdown(log)) {
 		struct xfs_log_item *lip;
 
 		list_for_each_entry(lip, &tp->t_items, li_trans)
@@ -975,7 +993,7 @@ xfs_trans_cancel(
 
 	xfs_trans_unreserve_and_mod_dquots(tp);
 	if (tp->t_ticket) {
-		xfs_log_ticket_ungrant(mp->m_log, tp->t_ticket);
+		xfs_log_ticket_ungrant(log, tp->t_ticket);
 		tp->t_ticket = NULL;
 	}
 
...
@@ -873,17 +873,17 @@ xfs_trans_ail_delete(
 	int			shutdown_type)
 {
 	struct xfs_ail		*ailp = lip->li_ailp;
-	struct xfs_mount	*mp = ailp->ail_log->l_mp;
+	struct xlog		*log = ailp->ail_log;
 	xfs_lsn_t		tail_lsn;
 
 	spin_lock(&ailp->ail_lock);
 	if (!test_bit(XFS_LI_IN_AIL, &lip->li_flags)) {
 		spin_unlock(&ailp->ail_lock);
-		if (shutdown_type && !xlog_is_shutdown(ailp->ail_log)) {
-			xfs_alert_tag(mp, XFS_PTAG_AILDELETE,
+		if (shutdown_type && !xlog_is_shutdown(log)) {
+			xfs_alert_tag(log->l_mp, XFS_PTAG_AILDELETE,
 				"%s: attempting to delete a log item that is not in the AIL",
 					__func__);
-			xfs_force_shutdown(mp, shutdown_type);
+			xlog_force_shutdown(log, shutdown_type);
 		}
 		return;
 	}
...