Commit caa2f807 authored by Andrew Morton, committed by Jens Axboe

[PATCH] invalidate_inode_pages fixes

Two fixes here.

First:

Fixes a BUG() which occurs if you try to perform O_DIRECT IO against a
blockdev which has an fs mounted on it.  (We should be able to do
that).

What happens is that do_invalidatepage() ends up calling
discard_buffer() on buffers which it couldn't strip.  That clears
buffer_mapped() against useful things like the superblock buffer_head.
The next submit_bh() goes BUG over the write of an unmapped buffer.

So just run try_to_release_page() (aka try_to_free_buffers()) on the
invalidate path.


Second:

The invalidate_inode_pages() functions are best-effort pagecache
shrinkers.  They are used against pages inside i_size and are not
supposed to throw away dirty data.

However it is possible for another CPU to run set_page_dirty() against
one of these pages after invalidate_inode_pages() has decided that it
is clean.  This could happen if someone was performing O_DIRECT IO
against a file which was also mapped with MAP_SHARED.

So recheck the dirty state of the page inside the mapping->page_lock
and back out if the page has just been marked dirty.

This will also prevent the remove_from_page_cache() BUG which will occur
if someone marks the page dirty between the clear_page_dirty() and
remove_from_page_cache() calls in truncate_complete_page().
parent 303c9cf6
@@ -53,7 +53,34 @@ truncate_complete_page(struct address_space *mapping, struct page *page)
 	clear_page_dirty(page);
 	ClearPageUptodate(page);
 	remove_from_page_cache(page);
-	page_cache_release(page);
+	page_cache_release(page);	/* pagecache ref */
 }
 
+/*
+ * This is for invalidate_inode_pages().  That function can be called at
+ * any time, and is not supposed to throw away dirty pages.  But pages can
+ * be marked dirty at any time too.  So we re-check the dirtiness inside
+ * ->page_lock.  That provides exclusion against the __set_page_dirty
+ * functions.
+ */
+static void
+invalidate_complete_page(struct address_space *mapping, struct page *page)
+{
+	if (page->mapping != mapping)
+		return;
+
+	if (PagePrivate(page) && !try_to_release_page(page, 0))
+		return;
+
+	write_lock(&mapping->page_lock);
+	if (PageDirty(page)) {
+		write_unlock(&mapping->page_lock);
+	} else {
+		__remove_from_page_cache(page);
+		write_unlock(&mapping->page_lock);
+		ClearPageUptodate(page);
+		page_cache_release(page);	/* pagecache ref */
+	}
+}
+
 /**
@@ -172,11 +199,9 @@ void invalidate_inode_pages(struct address_space *mapping)
 			next++;
 			if (PageDirty(page) || PageWriteback(page))
 				goto unlock;
-			if (PagePrivate(page) && !try_to_release_page(page, 0))
-				goto unlock;
 			if (page_mapped(page))
 				goto unlock;
-			truncate_complete_page(mapping, page);
+			invalidate_complete_page(mapping, page);
 unlock:
 			unlock_page(page);
 		}
@@ -213,7 +238,7 @@ void invalidate_inode_pages2(struct address_space *mapping)
 			if (page_mapped(page))
 				clear_page_dirty(page);
 			else
-				truncate_complete_page(mapping, page);
+				invalidate_complete_page(mapping, page);
 		}
 		unlock_page(page);
 	}