Commit d55158b5 authored by Andrew Morton, committed by Linus Torvalds

[PATCH] vmscan: give dirty referenced pages another pass

In a further attempt to prevent dirty pages from being written out from the
LRU, don't write them if they were referenced.  This gives those pages
another trip around the inactive list, so more of them end up being written
via balance_dirty_pages().
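
For orientation, here is a condensed sketch of how the affected part of
shrink_list() reads once this change is applied.  It is assembled from the
hunks below; the intervening writeback, swap and buffer handling is elided:

        pte_chain_lock(page);
        referenced = page_referenced(page);     /* note the result instead of testing it inline */
        if (referenced && page_mapping_inuse(page)) {
                /* In active use or really unfreeable.  Activate it. */
                pte_chain_unlock(page);
                goto activate_locked;
        }
        ...
        if (PageDirty(page)) {
                /*
                 * Dirty and recently referenced: keep it on the inactive
                 * list so balance_dirty_pages() writes it, rather than
                 * issuing an individual writepage() from the LRU scan.
                 */
                if (referenced)
                        goto keep_locked;
                if (!is_page_cache_freeable(page))
                        goto keep_locked;
                ...
        }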

It speeds up an untar of five kernel trees by 5% on a 256M box, presumably
because balance_dirty_pages() has better IO patterns.

It largely fixes the problem Gerrit described at the kernel summit: in their
database workload, the individual writepage()s of dirty pages coming off the
tail of the LRU are reduced by 83%.

I'm a bit worried that it increases scanning and OOM possibilities under
nutty VM stress cases, but nothing untoward has been noted during its four
weeks in -mm, so...
parent 74d74915
@@ -279,6 +279,7 @@ shrink_list(struct list_head *page_list, unsigned int gfp_mask,
         while (!list_empty(page_list)) {
                 struct page *page;
                 int may_enter_fs;
+                int referenced;
 
                 page = list_entry(page_list->prev, struct page, lru);
                 list_del(&page->lru);
@@ -298,7 +299,8 @@ shrink_list(struct list_head *page_list, unsigned int gfp_mask,
                         goto keep_locked;
 
                 pte_chain_lock(page);
-                if (page_referenced(page) && page_mapping_inuse(page)) {
+                referenced = page_referenced(page);
+                if (referenced && page_mapping_inuse(page)) {
                         /* In active use or really unfreeable.  Activate it. */
                         pte_chain_unlock(page);
                         goto activate_locked;
@@ -358,6 +360,8 @@ shrink_list(struct list_head *page_list, unsigned int gfp_mask,
                  * See swapfile.c:page_queue_congested().
                  */
                 if (PageDirty(page)) {
+                        if (referenced)
+                                goto keep_locked;
                         if (!is_page_cache_freeable(page))
                                 goto keep_locked;
                         if (!mapping)