Commit 4eda4823 authored by Johannes Weiner, committed by Linus Torvalds

mm: vmscan: only write dirty pages that the scanner has seen twice

Dirty pages can easily reach the end of the LRU while there are still
clean pages to reclaim around.  Don't let kswapd write them back just
because there are a lot of them.  It costs more CPU to find the clean
pages, but that's almost certainly better than to disrupt writeback from
the flushers with LRU-order single-page writes from reclaim.  And the
flushers have been woken up by that point, so we spend IO capacity on
flushing and CPU capacity on finding the clean cache.

Only start writing dirty pages if they have cycled around the LRU twice
now and STILL haven't been queued on the IO device.  It's possible that
the dirty pages are so sparsely distributed across different bdis,
inodes, memory cgroups, that the flushers take forever to get to the
ones we want reclaimed.  Once we see them twice on the LRU, we know
that's the quicker way to find them, so do LRU writeback.
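As a concrete illustration of the resulting policy, here is a sketch of the inverse of the skip condition in shrink_page_list(). It is illustrative only; may_write_dirty_page() is a hypothetical helper, not a function this patch adds:

	/*
	 * Illustrative sketch, not kernel code: with this patch, a
	 * dirty file page is handed to pageout() only when all three
	 * conditions below hold.
	 */
	static bool may_write_dirty_page(struct page *page,
					 struct pglist_data *pgdat)
	{
		return current_is_kswapd() &&	/* never from direct reclaim */
		       PageReclaim(page) &&	/* second trip around the LRU */
		       test_bit(PGDAT_DIRTY, &pgdat->flags); /* many dirty pages */
	}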

Link: http://lkml.kernel.org/r/20170123181641.23938-5-hannes@cmpxchg.org
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Minchan Kim <minchan@kernel.org>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>
Cc: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent bbef9384
@@ -1153,12 +1153,17 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 		if (PageDirty(page)) {
 			/*
-			 * Only kswapd can writeback filesystem pages to
-			 * avoid risk of stack overflow but only writeback
-			 * if many dirty pages have been encountered.
+			 * Only kswapd can writeback filesystem pages
+			 * to avoid risk of stack overflow. But avoid
+			 * injecting inefficient single-page IO into
+			 * flusher writeback as much as possible: only
+			 * write pages when we've encountered many
+			 * dirty pages, and when we've already scanned
+			 * the rest of the LRU for clean pages and see
+			 * the same dirty pages again (PageReclaim).
 			 */
 			if (page_is_file_cache(page) &&
-			    (!current_is_kswapd() ||
+			    (!current_is_kswapd() || !PageReclaim(page) ||
 			     !test_bit(PGDAT_DIRTY, &pgdat->flags))) {
 				/*
 				 * Immediately reclaim when written back.
...
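The truncated context above hides the skip branch that makes the two-pass scheme work. In simplified form (a paraphrase of the surrounding kernel code in this version, not a verbatim quote from the diff), when the write is deferred the page is tagged before being rotated back:

	/*
	 * Simplified paraphrase of the elided skip branch: defer the
	 * write, but tag the page so that (a) it is reclaimed
	 * immediately once the flushers write it back, and (b) if
	 * reclaim meets it still dirty a second time, the
	 * PageReclaim test above permits LRU writeback.
	 */
	inc_node_page_state(page, NR_VMSCAN_IMMEDIATE);
	SetPageReclaim(page);
	goto activate_locked;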