Commit b794e7a6 authored by Jan Kara, committed by Theodore Ts'o

jbd2: fix assertion failure in commit code due to lacking transaction credits

ext4 users of data=journal mode with blocksize < pagesize were
occasionally hitting an assertion failure in
jbd2_journal_commit_transaction() checking whether the transaction has
at least as many credits reserved as buffers attached.  The core of the
problem is that when a file gets truncated, buffers that still need
checkpointing or that are attached to the committing transaction are
left with buffer_mapped set.  When this happens to buffers beyond i_size
attached to a page straddling i_size, a subsequent write extending the
file will see these buffers, and since they are mapped (but the
underlying blocks were freed) things go awry from there.
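
For illustration, a minimal userspace sketch (not kernel code; the
4k/1k sizes and the 1500-byte i_size are made-up example values) of the
geometry involved: with blocksize < pagesize, the page straddling i_size
also covers buffers that lie wholly beyond EOF, and those are exactly
the buffers that truncate was leaving buffer_mapped:

#include <stdio.h>

int main(void)
{
	const unsigned long pagesize = 4096, blocksize = 1024;
	const unsigned long i_size = 1500;	/* hypothetical file size */

	/* Start of the page straddling i_size, and where valid data ends in it. */
	unsigned long page_start = (i_size / pagesize) * pagesize;
	unsigned long data_end = i_size - page_start;
	unsigned long off;

	for (off = 0; off < pagesize; off += blocksize)
		printf("buffer at page offset %4lu: %s\n", off,
		       off >= data_end ? "wholly beyond i_size" : "holds file data");
	return 0;
}

The same "is this buffer wholly beyond the truncation point" test is what
the offset <= curr_off check in jbd2_journal_invalidatepage() performs in
the diff below.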

The assertion failure triggers only coincidentally (and luckily for us,
as otherwise we would start corrupting the filesystem), due to the
journal_head not being properly cleaned up as well.

We fix the problem by unmapping buffers where possible (in many cases we
just need the buffer attached to a transaction as a placeholder, but it
must not be written out anyway).  In one case we just have to bite the
bullet and wait for the transaction commit to finish.
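
Condensed into a table of outcomes, the per-buffer decision described
above looks roughly like the userspace sketch below; the enum values and
the classify() helper are invented purely for illustration and are not
jbd2 APIs, the real logic lives in journal_unmap_buffer() in the diff
that follows:

#include <stdio.h>

enum unmap_action {
	ZAP_NOW,		/* safe to fully invalidate: clear mapped/freed/dirty state */
	KEEP_AS_PLACEHOLDER,	/* leave on BJ_Forget, mark freed, clear b_modified */
	WAIT_AND_RETRY,		/* wait for the committing transaction, then retry */
};

/*
 * on_committing: buffer belongs to the currently committing transaction.
 * partial_page:  the truncated page straddles i_size (blocksize < pagesize).
 */
static enum unmap_action classify(int on_committing, int partial_page)
{
	if (!on_committing)
		return ZAP_NOW;
	if (partial_page)
		return WAIT_AND_RETRY;
	return KEEP_AS_PLACEHOLDER;
}

int main(void)
{
	static const char * const names[] = {
		[ZAP_NOW] = "zap now",
		[KEEP_AS_PLACEHOLDER] = "keep as placeholder",
		[WAIT_AND_RETRY] = "wait for commit, retry",
	};
	int committing, partial;

	for (committing = 0; committing <= 1; committing++)
		for (partial = 0; partial <= 1; partial++)
			printf("on committing trans=%d, partial page=%d -> %s\n",
			       committing, partial,
			       names[classify(committing, partial)]);
	return 0;
}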

CC: Josef Bacik <jbacik@fusionio.com>
Signed-off-by: Jan Kara <jack@suse.cz>
parent 9b687332
fs/jbd2/commit.c
@@ -1014,17 +1014,35 @@ void jbd2_journal_commit_transaction(journal_t *journal)
 		 * there's no point in keeping a checkpoint record for
 		 * it. */
 
-		/* A buffer which has been freed while still being
-		 * journaled by a previous transaction may end up still
-		 * being dirty here, but we want to avoid writing back
-		 * that buffer in the future after the "add to orphan"
-		 * operation been committed, That's not only a performance
-		 * gain, it also stops aliasing problems if the buffer is
-		 * left behind for writeback and gets reallocated for another
-		 * use in a different page. */
-		if (buffer_freed(bh) && !jh->b_next_transaction) {
-			clear_buffer_freed(bh);
-			clear_buffer_jbddirty(bh);
+		/*
+		 * A buffer which has been freed while still being journaled by
+		 * a previous transaction.
+		 */
+		if (buffer_freed(bh)) {
+			/*
+			 * If the running transaction is the one containing
+			 * "add to orphan" operation (b_next_transaction !=
+			 * NULL), we have to wait for that transaction to
+			 * commit before we can really get rid of the buffer.
+			 * So just clear b_modified to not confuse transaction
+			 * credit accounting and refile the buffer to
+			 * BJ_Forget of the running transaction. If the just
+			 * committed transaction contains "add to orphan"
+			 * operation, we can completely invalidate the buffer
+			 * now. We are rather thorough in that since the
+			 * buffer may be still accessible when blocksize <
+			 * pagesize and it is attached to the last partial
+			 * page.
+			 */
+			jh->b_modified = 0;
+			if (!jh->b_next_transaction) {
+				clear_buffer_freed(bh);
+				clear_buffer_jbddirty(bh);
+				clear_buffer_mapped(bh);
+				clear_buffer_new(bh);
+				clear_buffer_req(bh);
+				bh->b_bdev = NULL;
+			}
 		}
 
 		if (buffer_jbddirty(bh)) {
fs/jbd2/transaction.c
@@ -1841,15 +1841,16 @@ static int __dispose_buffer(struct journal_head *jh, transaction_t *transaction)
  * We're outside-transaction here. Either or both of j_running_transaction
  * and j_committing_transaction may be NULL.
  */
-static int journal_unmap_buffer(journal_t *journal, struct buffer_head *bh)
+static int journal_unmap_buffer(journal_t *journal, struct buffer_head *bh,
+				int partial_page)
 {
 	transaction_t *transaction;
 	struct journal_head *jh;
 	int may_free = 1;
-	int ret;
 
 	BUFFER_TRACE(bh, "entry");
 
+retry:
 	/*
 	 * It is safe to proceed here without the j_list_lock because the
 	 * buffers cannot be stolen by try_to_free_buffers as long as we are
@@ -1878,10 +1879,18 @@ static int journal_unmap_buffer(journal_t *journal, struct buffer_head *bh)
 	 * clear the buffer dirty bit at latest at the moment when the
 	 * transaction marking the buffer as freed in the filesystem
 	 * structures is committed because from that moment on the
-	 * buffer can be reallocated and used by a different page.
+	 * block can be reallocated and used by a different page.
 	 * Since the block hasn't been freed yet but the inode has
 	 * already been added to orphan list, it is safe for us to add
 	 * the buffer to BJ_Forget list of the newest transaction.
+	 *
+	 * Also we have to clear buffer_mapped flag of a truncated buffer
+	 * because the buffer_head may be attached to the page straddling
+	 * i_size (can happen only when blocksize < pagesize) and thus the
+	 * buffer_head can be reused when the file is extended again. So we end
+	 * up keeping around invalidated buffers attached to transactions'
+	 * BJ_Forget list just to stop checkpointing code from cleaning up
+	 * the transaction this buffer was modified in.
 	 */
 	transaction = jh->b_transaction;
 	if (transaction == NULL) {
@@ -1908,13 +1917,9 @@ static int journal_unmap_buffer(journal_t *journal, struct buffer_head *bh)
 			 * committed, the buffer won't be needed any
 			 * longer. */
 			JBUFFER_TRACE(jh, "checkpointed: add to BJ_Forget");
-			ret = __dispose_buffer(jh,
+			may_free = __dispose_buffer(jh,
 					journal->j_running_transaction);
-			jbd2_journal_put_journal_head(jh);
-			spin_unlock(&journal->j_list_lock);
-			jbd_unlock_bh_state(bh);
-			write_unlock(&journal->j_state_lock);
-			return ret;
+			goto zap_buffer;
 		} else {
 			/* There is no currently-running transaction. So the
 			 * orphan record which we wrote for this file must have
@@ -1922,13 +1927,9 @@ static int journal_unmap_buffer(journal_t *journal, struct buffer_head *bh)
 			 * the committing transaction, if it exists. */
 			if (journal->j_committing_transaction) {
 				JBUFFER_TRACE(jh, "give to committing trans");
-				ret = __dispose_buffer(jh,
+				may_free = __dispose_buffer(jh,
 					journal->j_committing_transaction);
-				jbd2_journal_put_journal_head(jh);
-				spin_unlock(&journal->j_list_lock);
-				jbd_unlock_bh_state(bh);
-				write_unlock(&journal->j_state_lock);
-				return ret;
+				goto zap_buffer;
 			} else {
 				/* The orphan record's transaction has
 				 * committed. We can cleanse this buffer */
@@ -1940,10 +1941,24 @@ static int journal_unmap_buffer(journal_t *journal, struct buffer_head *bh)
 		JBUFFER_TRACE(jh, "on committing transaction");
 		/*
 		 * The buffer is committing, we simply cannot touch
-		 * it. So we just set j_next_transaction to the
-		 * running transaction (if there is one) and mark
-		 * buffer as freed so that commit code knows it should
-		 * clear dirty bits when it is done with the buffer.
+		 * it. If the page is straddling i_size we have to wait
+		 * for commit and try again.
+		 */
+		if (partial_page) {
+			tid_t tid = journal->j_committing_transaction->t_tid;
+
+			jbd2_journal_put_journal_head(jh);
+			spin_unlock(&journal->j_list_lock);
+			jbd_unlock_bh_state(bh);
+			write_unlock(&journal->j_state_lock);
+			jbd2_log_wait_commit(journal, tid);
+			goto retry;
+		}
+		/*
+		 * OK, buffer won't be reachable after truncate. We just set
+		 * j_next_transaction to the running transaction (if there is
+		 * one) and mark buffer as freed so that commit code knows it
+		 * should clear dirty bits when it is done with the buffer.
 		 */
 		set_buffer_freed(bh);
 		if (journal->j_running_transaction && buffer_jbddirty(bh))
@@ -1966,6 +1981,15 @@ static int journal_unmap_buffer(journal_t *journal, struct buffer_head *bh)
 	}
 
 zap_buffer:
+	/*
+	 * This is tricky. Although the buffer is truncated, it may be reused
+	 * if blocksize < pagesize and it is attached to the page straddling
+	 * EOF. Since the buffer might have been added to BJ_Forget list of the
+	 * running transaction, journal_get_write_access() won't clear
+	 * b_modified and credit accounting gets confused. So clear b_modified
+	 * here.
+	 */
+	jh->b_modified = 0;
 	jbd2_journal_put_journal_head(jh);
 zap_buffer_no_jh:
 	spin_unlock(&journal->j_list_lock);
@@ -2017,7 +2041,8 @@ void jbd2_journal_invalidatepage(journal_t *journal,
 		if (offset <= curr_off) {
 			/* This block is wholly outside the truncation point */
 			lock_buffer(bh);
-			may_free &= journal_unmap_buffer(journal, bh);
+			may_free &= journal_unmap_buffer(journal, bh,
+							 offset > 0);
 			unlock_buffer(bh);
 		}
 		curr_off = next_off;