Commit c54839a7 authored by Jaewon Kim, committed by Linus Torvalds

vmscan: fix increasing nr_isolated incurred by putback unevictable pages

reclaim_clean_pages_from_list() assumes that shrink_page_list() returns
the number of pages removed from the candidate list.  But shrink_page_list()
puts back mlocked pages without passing them to the caller and without
counting them as nr_reclaimed.  As a result, nr_isolated keeps increasing.

To fix this, this patch changes shrink_page_list() to pass unevictable
pages back to the caller, which will then take care of those pages.
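
To illustrate why the pages must come back to the caller, here is a
simplified sketch of the caller side, modeled on
reclaim_clean_pages_from_list() in mm/vmscan.c (the argument list of
shrink_page_list() is abbreviated and the body is condensed; this is
not the exact kernel code):

/* Simplified sketch, not the exact mm/vmscan.c code. */
unsigned long reclaim_clean_pages_from_list(struct zone *zone,
					    struct list_head *page_list)
{
	unsigned long nr_reclaimed;
	LIST_HEAD(clean_pages);

	/* ... move clean file-backed pages onto clean_pages ... */

	/*
	 * shrink_page_list() frees what it can and, with this patch,
	 * splices everything it could not free (now including
	 * unevictable pages) back onto clean_pages via its internal
	 * ret_pages list.
	 */
	nr_reclaimed = shrink_page_list(&clean_pages, zone, /* ... */);

	/* Unfreed pages go back to the caller for putback. */
	list_splice(&clean_pages, page_list);

	/*
	 * Only the freed pages are subtracted here; the caller's
	 * putback path decrements NR_ISOLATED for the rest.  A page
	 * that silently left the list inside shrink_page_list() (the
	 * old putback_lru_page() behaviour) is counted by neither
	 * path, so NR_ISOLATED only ever drifts upward.
	 */
	mod_zone_page_state(zone, NR_ISOLATED_FILE, -nr_reclaimed);
	return nr_reclaimed;
}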

Minchan said:

It fixes two issues.

1. With unevictable pages handled this way, cma_alloc will succeed.

Strictly speaking, cma_alloc in the current kernel can fail because of
unevictable pages.

2. It fixes the leak of the NR_ISOLATED vmstat counter.

With the fix, too_many_isolated() works as intended.  Otherwise, the
leaked counter could cause a hang until the process gets a SIGKILL.
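
The hang can be modeled with a small userspace toy program (all names
and numbers below are invented for illustration; too_many_isolated()
in mm/vmscan.c roughly compares the isolated count against the
inactive count):

#include <stdio.h>
#include <stdbool.h>

/* Toy model of the vmstat accounting; every value is illustrative. */
static long nr_isolated;          /* models NR_ISOLATED_FILE */
static long nr_inactive = 100;    /* models NR_INACTIVE_FILE */

/* Rough stand-in for the too_many_isolated() check. */
static bool too_many_isolated(void)
{
	return nr_isolated > nr_inactive;
}

/*
 * One reclaim pass: isolate pages, free some, and put the rest back.
 * In the buggy case the mlocked pages were put back inside
 * shrink_page_list() without touching the counter, so they are
 * subtracted by nobody.
 */
static void reclaim_pass(long isolated, long freed, long mlocked, bool fixed)
{
	nr_isolated += isolated;                   /* isolation        */
	nr_isolated -= freed;                      /* reclaimed pages  */
	nr_isolated -= isolated - freed - mlocked; /* caller putback   */
	if (fixed)
		nr_isolated -= mlocked;  /* mlocked pages returned too */
}

int main(void)
{
	for (int pass = 1; pass <= 50; pass++) {
		reclaim_pass(32, 24, 4, /* fixed = */ false);
		if (too_many_isolated()) {
			printf("pass %d: nr_isolated=%ld > nr_inactive=%ld; "
			       "reclaimers would throttle forever\n",
			       pass, nr_isolated, nr_inactive);
			return 0;
		}
	}
	printf("no throttle: nr_isolated=%ld\n", nr_isolated);
	return 0;
}

Flipping the last argument of reclaim_pass() to true models the
patched kernel: the mlocked pages come back to the caller, nr_isolated
returns to zero after every pass, and the check never trips.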
Signed-off-by: Jaewon Kim <jaewon31.kim@samsung.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 0b802f10
@@ -1196,7 +1196,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 		if (PageSwapCache(page))
 			try_to_free_swap(page);
 		unlock_page(page);
-		putback_lru_page(page);
+		list_add(&page->lru, &ret_pages);
 		continue;
 activate_locked: