Commit bd19e012 authored by Nick Piggin, committed by Linus Torvalds

mm: write_cache_pages early loop termination

We'd like to break out of the loop early in many situations; however, the
existing code has been setting mapping->writeback_index past the final
page in the pagevec lookup for cyclic writeback.  This is a problem if we
don't process all pages up to the final page.

Currently the code mostly keeps writeback_index reasonable and has hacked
around this by not breaking out of the loop, and not writing pages outside
the range, in those cases.  Instead, keep track of a real "done index" that
enables us to terminate the loop in a much more flexible manner.

This is needed by the subsequent patch to preserve writepage errors, and by
further patches to break out of the loop early for other reasons.  However,
there are no functional changes with this patch alone.
Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: Chris Mason <chris.mason@oracle.com>
Cc: Dave Chinner <david@fromorbit.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 31a12666
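
Before the diff, a minimal sketch of the pattern this patch introduces may help.
It is a hypothetical user-space model, not write_cache_pages() itself: lookup(),
writepage(), BATCH, and the failing index are invented stand-ins for
pagevec_lookup_tag(), ->writepage(), and the pagevec size; only the loop shape
(record done_index before processing each page, then use it as the resume point)
mirrors the patch.

/*
 * Simplified user-space model of the "done index" pattern.  All names
 * here are illustrative stand-ins, not the kernel's implementation.
 */
#include <stdio.h>
#include <stddef.h>

#define BATCH 4

/*
 * Hypothetical stand-in for pagevec_lookup_tag(): fills up to BATCH
 * ascending "dirty page" indices at or after *start, advancing *start
 * past the batch, and returns how many it found.
 */
static size_t lookup(unsigned long *start, unsigned long end,
		     unsigned long batch[BATCH])
{
	size_t n = 0;

	while (n < BATCH && *start <= end)
		batch[n++] = (*start)++;
	return n;
}

/* Hypothetical stand-in for ->writepage(); pretend index 6 fails. */
static int writepage(unsigned long index)
{
	return index == 6 ? -1 : 0;
}

int main(void)
{
	unsigned long index = 0, end = 9;	/* inclusive range */
	unsigned long done_index = index;	/* next page to resume from */
	unsigned long batch[BATCH];
	size_t n, i;
	int done = 0;

	while (!done && index <= end &&
	       (n = lookup(&index, end, batch)) != 0) {
		for (i = 0; i < n; i++) {
			/*
			 * Record progress per page, before processing it,
			 * so breaking out of either loop leaves a resume
			 * point that covers only pages actually visited.
			 */
			done_index = batch[i] + 1;

			if (writepage(batch[i]) < 0) {
				done = 1;	/* early loop termination */
				break;
			}
		}
	}

	/*
	 * The resume point reflects only pages handed to writepage(),
	 * unlike storing the lookup index, which points past the whole
	 * batch even when the loop stopped partway through it.
	 */
	printf("resume writeback at index %lu\n", done_index);
	return 0;
}

Because done_index advances one page at a time, breaking out at any point cannot
skip pages the loop never reached, which is what the later early-termination
patches rely on.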
@@ -871,6 +871,7 @@ int write_cache_pages(struct address_space *mapping,
 	pgoff_t uninitialized_var(writeback_index);
 	pgoff_t index;
 	pgoff_t end;		/* Inclusive */
+	pgoff_t done_index;
 	int cycled;
 	int range_whole = 0;
 	long nr_to_write = wbc->nr_to_write;
@@ -897,6 +898,7 @@ int write_cache_pages(struct address_space *mapping,
 		cycled = 1; /* ignore range_cyclic tests */
 	}
 retry:
+	done_index = index;
 	while (!done && (index <= end) &&
 	       (nr_pages = pagevec_lookup_tag(&pvec, mapping, &index,
 			      PAGECACHE_TAG_DIRTY,
@@ -906,6 +908,8 @@ int write_cache_pages(struct address_space *mapping,
 		for (i = 0; i < nr_pages; i++) {
 			struct page *page = pvec.pages[i];

+			done_index = page->index + 1;
+
 			/*
 			 * At this point we hold neither mapping->tree_lock nor
 			 * lock on the page itself: the page may be truncated or
@@ -968,7 +972,7 @@ int write_cache_pages(struct address_space *mapping,
 	}
 	if (!wbc->no_nrwrite_index_update) {
 		if (wbc->range_cyclic || (range_whole && nr_to_write > 0))
-			mapping->writeback_index = index;
+			mapping->writeback_index = done_index;
 		wbc->nr_to_write = nr_to_write;
 	}
...