Commit 39db00f1 authored by Curt Wohlgemuth, committed by Theodore Ts'o

ext4: don't set PageUptodate in ext4_end_bio()

In the bio completion routine, we should not be setting
PageUptodate at all -- it's set at sys_write() time, and is
unaffected by success/failure of the write to disk.

This can cause a page corruption bug when the file system's
block size is less than the architecture's VM page size.

If we have only written a single block, we might end up
setting the page's PageUptodate flag, indicating that the
page is completely read into memory, which may not be true.
This could cause subsequent reads to get bad data.
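
For context: a page backed by buffer_heads may only be marked uptodate
once every buffer on it is uptodate, which is what makes calling
SetPageUptodate() after completing a single-block write wrong when the
block size is smaller than the page size.  A minimal sketch of that
check, in the style of fs/buffer.c (the helper name is invented for
illustration and is not part of this patch):

    /*
     * Illustrative only: with 1k blocks on a 4k page, finishing the I/O
     * for one block says nothing about the other three, so the page may
     * be marked uptodate only if every buffer on it is uptodate.
     */
    static void maybe_set_page_uptodate(struct page *page)
    {
        struct buffer_head *bh, *head = page_buffers(page);
        int all_uptodate = 1;

        bh = head;
        do {
            if (!buffer_uptodate(bh))
                all_uptodate = 0;
            bh = bh->b_this_page;
        } while (bh != head);

        if (all_uptodate)
            SetPageUptodate(page);
    }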

This commit also takes the opportunity to clean up error
handling in ext4_end_bio() and to remove some extraneous code:

   - fixes ext4_end_bio() to set AS_EIO in
     page->mapping->flags on error, which was left out by
     mistake.  This is needed so that fsync() will
     return an error if there was an I/O error (see the
     sketch after this list).
   - remove the clear_buffer_dirty() call on unmapped
     buffers for each page.
   - consolidate page/buffer error handling in a single
     section.
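
For reference, fsync() learns about the failure when the writeback
wait path checks the mapping's error bits; roughly the following,
paraphrased from mm/filemap.c into a stand-alone helper (the function
name is invented for illustration):

    /*
     * Paraphrase of the error check performed while waiting for
     * writeback to complete.  If ext4_end_bio() never sets AS_EIO,
     * this returns 0 and fsync() reports success even though the
     * write actually failed.
     */
    static int check_mapping_error(struct address_space *mapping)
    {
        int ret = 0;

        if (test_and_clear_bit(AS_ENOSPC, &mapping->flags))
            ret = -ENOSPC;
        if (test_and_clear_bit(AS_EIO, &mapping->flags))
            ret = -EIO;
        return ret;
    }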
Signed-off-by: Curt Wohlgemuth <curtw@google.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Reported-by: Jim Meyering <jim@meyering.net>
Reported-by: Hugh Dickins <hughd@google.com>
Cc: Mingming Cao <cmm@us.ibm.com>
parent 2035e776
@@ -203,46 +203,29 @@ static void ext4_end_bio(struct bio *bio, int error)
 	for (i = 0; i < io_end->num_io_pages; i++) {
 		struct page *page = io_end->pages[i]->p_page;
 		struct buffer_head *bh, *head;
-		int partial_write = 0;
+		loff_t offset;
+		loff_t io_end_offset;
 
-		head = page_buffers(page);
-		if (error)
+		if (error) {
 			SetPageError(page);
-		BUG_ON(!head);
-		if (head->b_size != PAGE_CACHE_SIZE) {
-			loff_t offset;
-			loff_t io_end_offset = io_end->offset + io_end->size;
+			set_bit(AS_EIO, &page->mapping->flags);
+			head = page_buffers(page);
+			BUG_ON(!head);
+
+			io_end_offset = io_end->offset + io_end->size;
 
 			offset = (sector_t) page->index << PAGE_CACHE_SHIFT;
 			bh = head;
 			do {
 				if ((offset >= io_end->offset) &&
-				    (offset+bh->b_size <= io_end_offset)) {
-					if (error)
-						buffer_io_error(bh);
-
-				}
-				if (buffer_delay(bh))
-					partial_write = 1;
-				else if (!buffer_mapped(bh))
-					clear_buffer_dirty(bh);
-				else if (buffer_dirty(bh))
-					partial_write = 1;
+				    (offset+bh->b_size <= io_end_offset))
+					buffer_io_error(bh);
+
 				offset += bh->b_size;
 				bh = bh->b_this_page;
 			} while (bh != head);
 		}
 
-		/*
-		 * If this is a partial write which happened to make
-		 * all buffers uptodate then we can optimize away a
-		 * bogus readpage() for the next read(). Here we
-		 * 'discover' whether the page went uptodate as a
-		 * result of this (potentially partial) write.
-		 */
-		if (!partial_write)
-			SetPageUptodate(page);
-
 		put_io_page(io_end->pages[i]);
 	}
 	io_end->num_io_pages = 0;