Commit a18bba06 authored by Mel Gorman, committed by Linus Torvalds

mm: vmscan: remove dead code related to lumpy reclaim waiting on pages under writeback

Lumpy reclaim worked in two passes: the first queued pages for IO and the
second waited on writeback.  As direct reclaim can no longer write pages,
the second pass is now dead code.  This patch removes it, but direct
reclaim will continue to wait on pages under writeback while in
synchronous reclaim mode.
Signed-off-by: Mel Gorman <mgorman@suse.de>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Johannes Weiner <jweiner@redhat.com>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Minchan Kim <minchan.kim@gmail.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Alex Elder <aelder@sgi.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Cc: Chris Mason <chris.mason@oracle.com>
Cc: Dave Hansen <dave@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent ee72886d
@@ -495,15 +495,6 @@ static pageout_t pageout(struct page *page, struct address_space *mapping,
 		return PAGE_ACTIVATE;
 	}
 
-	/*
-	 * Wait on writeback if requested to. This happens when
-	 * direct reclaiming a large contiguous area and the
-	 * first attempt to free a range of pages fails.
-	 */
-	if (PageWriteback(page) &&
-	    (sc->reclaim_mode & RECLAIM_MODE_SYNC))
-		wait_on_page_writeback(page);
-
 	if (!PageWriteback(page)) {
 		/* synchronous write or broken a_ops? */
 		ClearPageReclaim(page);
@@ -804,12 +795,10 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 		if (PageWriteback(page)) {
 			/*
-			 * Synchronous reclaim is performed in two passes,
-			 * first an asynchronous pass over the list to
-			 * start parallel writeback, and a second synchronous
-			 * pass to wait for the IO to complete. Wait here
-			 * for any page for which writeback has already
-			 * started.
+			 * Synchronous reclaim cannot queue pages for
+			 * writeback due to the possibility of stack overflow
+			 * but if it encounters a page under writeback, wait
+			 * for the IO to complete.
 			 */
 			if ((sc->reclaim_mode & RECLAIM_MODE_SYNC) &&
 			    may_enter_fs)
@@ -1414,7 +1403,7 @@ static noinline_for_stack void update_isolated_counts(struct zone *zone,
 }
 
 /*
- * Returns true if the caller should wait to clean dirty/writeback pages.
+ * Returns true if a direct reclaim should wait on pages under writeback.
  *
  * If we are direct reclaiming for contiguous pages and we do not reclaim
  * everything in the list, try again and wait for writeback IO to complete.
...