Commit a46cbf3b authored by David Rientjes, committed by Linus Torvalds

mm, compaction: prevent VM_BUG_ON when terminating freeing scanner

It's possible to isolate some freepages in a pageblock and then fail
split_free_page() due to the low watermark check.  In this case, we hit
VM_BUG_ON() because the freeing scanner terminated early without a
contended lock or enough freepages.

This should never have been a VM_BUG_ON() since it's not a fatal
condition.  It should have been a VM_WARN_ON() at best, or even handled
gracefully.

Regardless, we need to terminate anytime the full pageblock scan was not
done.  The logic belongs in isolate_freepages_block(), so handle its
state gracefully by terminating the pageblock loop and making a note to
restart at the same pageblock next time since it was not possible to
complete the scan this time.

[rientjes@google.com: don't rescan pages in a pageblock]
  Link: http://lkml.kernel.org/r/alpine.DEB.2.10.1607111244150.83138@chino.kir.corp.google.com
Link: http://lkml.kernel.org/r/alpine.DEB.2.10.1606291436300.145590@chino.kir.corp.google.com
Signed-off-by: David Rientjes <rientjes@google.com>
Reported-by: Minchan Kim <minchan@kernel.org>
Tested-by: Minchan Kim <minchan@kernel.org>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent f97d1045
mm/compaction.c
@@ -1009,8 +1009,6 @@ static void isolate_freepages(struct compact_control *cc)
 				block_end_pfn = block_start_pfn,
 				block_start_pfn -= pageblock_nr_pages,
 				isolate_start_pfn = block_start_pfn) {
-		unsigned long isolated;
-
 		/*
 		 * This can iterate a massively long zone without finding any
 		 * suitable migration targets, so periodically check if we need
@@ -1034,36 +1032,30 @@ static void isolate_freepages(struct compact_control *cc)
 			continue;
 
 		/* Found a block suitable for isolating free pages from. */
-		isolated = isolate_freepages_block(cc, &isolate_start_pfn,
-					block_end_pfn, freelist, false);
-		/* If isolation failed early, do not continue needlessly */
-		if (!isolated && isolate_start_pfn < block_end_pfn &&
-		    cc->nr_migratepages > cc->nr_freepages)
-			break;
+		isolate_freepages_block(cc, &isolate_start_pfn, block_end_pfn,
+					freelist, false);
 
 		/*
-		 * If we isolated enough freepages, or aborted due to async
-		 * compaction being contended, terminate the loop.
-		 * Remember where the free scanner should restart next time,
-		 * which is where isolate_freepages_block() left off.
-		 * But if it scanned the whole pageblock, isolate_start_pfn
-		 * now points at block_end_pfn, which is the start of the next
-		 * pageblock.
-		 * In that case we will however want to restart at the start
-		 * of the previous pageblock.
+		 * If we isolated enough freepages, or aborted due to lock
+		 * contention, terminate.
 		 */
 		if ((cc->nr_freepages >= cc->nr_migratepages)
 							|| cc->contended) {
-			if (isolate_start_pfn >= block_end_pfn)
+			if (isolate_start_pfn >= block_end_pfn) {
+				/*
+				 * Restart at previous pageblock if more
+				 * freepages can be isolated next time.
+				 */
 				isolate_start_pfn =
 					block_start_pfn - pageblock_nr_pages;
+			}
 			break;
-		} else {
+		} else if (isolate_start_pfn < block_end_pfn) {
 			/*
-			 * isolate_freepages_block() should not terminate
-			 * prematurely unless contended, or isolated enough
+			 * If isolation failed early, do not continue
+			 * needlessly.
 			 */
-			VM_BUG_ON(isolate_start_pfn < block_end_pfn);
+			break;
 		}
 	}
...