Commit 2919bfd0 authored by Hugh Dickins, committed by Linus Torvalds

ksm: drain pagevecs to lru

It was hard to explain the page counts which were causing new LTP tests
of KSM to fail: we need to drain the per-cpu pagevecs to LRU occasionally.
Signed-off-by: Hugh Dickins <hughd@google.com>
Reported-by: CAI Qian <caiqian@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 73ae31e5
@@ -1296,6 +1296,18 @@ static struct rmap_item *scan_get_next_rmap_item(struct page **page)
 	slot = ksm_scan.mm_slot;
 	if (slot == &ksm_mm_head) {
+		/*
+		 * A number of pages can hang around indefinitely on per-cpu
+		 * pagevecs, raised page count preventing write_protect_page
+		 * from merging them. Though it doesn't really matter much,
+		 * it is puzzling to see some stuck in pages_volatile until
+		 * other activity jostles them out, and they also prevented
+		 * LTP's KSM test from succeeding deterministically; so drain
+		 * them here (here rather than on entry to ksm_do_scan(),
+		 * so we don't IPI too often when pages_to_scan is set low).
+		 */
+		lru_add_drain_all();
+
 		root_unstable_tree = RB_ROOT;
 		spin_lock(&ksm_mmlist_lock);
...
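The comment in the hunk above summarizes the whole bug: a page parked in a per-cpu pagevec holds an extra reference, and KSM's write_protect_page() refuses to merge a page whose reference count is higher than expected, so such pages sit in pages_volatile until something else drains them to the LRU. Below is a minimal userspace sketch of that mechanism, not kernel code: the names (struct pagevec, PAGEVEC_SIZE, lru_cache_add, lru_add_drain) mirror the kernel's, but the bodies and the simplified mergeable() check are illustrative assumptions, not the real implementations.

/*
 * Userspace sketch (not kernel code): models why a page parked in a
 * per-cpu pagevec carries an extra reference that blocks KSM merging
 * until the pagevec is drained to the LRU.
 */
#include <stdio.h>

#define PAGEVEC_SIZE 14            /* batch size, as in the kernel's pagevec */

struct page {
	int count;                 /* page reference count */
	int on_lru;                /* 1 once the page has reached the LRU */
};

struct pagevec {
	int nr;
	struct page *pages[PAGEVEC_SIZE];
};

static struct pagevec cpu_pvec;    /* stand-in for one CPU's LRU-add pagevec */

static void lru_add_drain(void);

/* Queue a page for the LRU: the pagevec takes a reference of its own. */
static void lru_cache_add(struct page *page)
{
	page->count++;             /* extra ref while parked in the pagevec */
	cpu_pvec.pages[cpu_pvec.nr++] = page;
	if (cpu_pvec.nr == PAGEVEC_SIZE)   /* the kernel drains a full pagevec */
		lru_add_drain();
}

/* Flush the per-cpu pagevec: pages reach the LRU, extra refs are dropped. */
static void lru_add_drain(void)
{
	for (int i = 0; i < cpu_pvec.nr; i++) {
		cpu_pvec.pages[i]->on_lru = 1;
		cpu_pvec.pages[i]->count--;   /* pagevec reference released */
	}
	cpu_pvec.nr = 0;
}

/* Simplified stand-in for write_protect_page's page_count check. */
static int mergeable(struct page *page)
{
	return page->count == 1;   /* only the page table reference remains */
}

int main(void)
{
	struct page p = { .count = 1, .on_lru = 0 };  /* one ref: the mapping */

	lru_cache_add(&p);
	printf("before drain: count=%d on_lru=%d mergeable=%d\n",
	       p.count, p.on_lru, mergeable(&p));

	lru_add_drain();           /* what lru_add_drain_all() does per CPU */
	printf("after drain:  count=%d on_lru=%d mergeable=%d\n",
	       p.count, p.on_lru, mergeable(&p));
	return 0;
}

Running the sketch prints mergeable=0 before the drain and mergeable=1 after, which is the deterministic behavior the LTP KSM test depends on. Since the real lru_add_drain_all() dispatches work to every CPU (hence the comment's concern about IPIs), the patch drains once per full scan cycle, when ksm_scan.mm_slot wraps back to ksm_mm_head, rather than on every entry to ksm_do_scan().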