Commit 1163e785 authored by Yisheng Xie, committed by Greg Kroah-Hartman

mlock: fix mlock count failing to decrease in race condition

commit 70feee0e upstream.

Kefeng reported that when running the following test, the Mlocked count
in meminfo increases permanently:

 [1] testcase
 linux:~ # cat test_mlockal
 grep Mlocked /proc/meminfo
 for j in `seq 0 10`
 do
 	for i in `seq 4 15`
 	do
 		./p_mlockall >> log &
 	done
 	sleep 0.2
 done
 # wait some time to let the mlock counter decrease; 5s may not be enough
 sleep 5
 grep Mlocked /proc/meminfo

 linux:~ # cat p_mlockall.c
 #include <sys/mman.h>
 #include <stdlib.h>
 #include <stdio.h>

 #define SPACE_LEN	4096

 int main(int argc, char **argv)
 {
 	int ret;
 	void *adr = malloc(SPACE_LEN);

 	if (!adr)
 		return -1;

 	/* lock every current and future mapping, then undo it */
 	ret = mlockall(MCL_CURRENT | MCL_FUTURE);
 	printf("mlockall ret = %d\n", ret);

 	ret = munlockall();
 	printf("munlockall ret = %d\n", ret);

 	free(adr);
 	return 0;
 }
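
For completeness, building the reproducer is assumed to need nothing
beyond a stock C toolchain before the script above is run, e.g.:

 linux:~ # gcc -o p_mlockall p_mlockall.c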

In __munlock_pagevec() we should decrement NR_MLOCK for each page where
we clear the PageMlocked flag.  Commit 1ebb7cc6 ("mm: munlock: batch
NR_MLOCK zone state updates") introduced a bug: we do not decrement
NR_MLOCK for pages whose flag we clear but then fail to isolate from
the lru list (e.g. when the pages are on some other cpu's percpu
pagevec).  Since PageMlocked stays cleared, no later munlock accounts
for these pages either, so the NR_MLOCK accounting is permanently
disrupted.

Fix it by counting the number of pages whose PageMlocked flag is cleared.
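
Below is a minimal userspace sketch (illustrative only, not kernel
code; the four page states are invented for the example) of how the
old and new delta_munlocked computations diverge on a single batch:

 #include <stdio.h>

 enum pstate {
 	CLEARED_ISOLATED,		/* flag cleared, page isolated from lru */
 	CLEARED_ISOLATION_FAILED,	/* flag cleared, but isolation failed */
 	ALREADY_CLEAR,			/* TestClearPageMlocked() failed */
 };

 int main(void)
 {
 	/* batch of 4: one page fails isolation (e.g. it sits on some
 	 * other cpu's percpu pagevec), one was already unlocked */
 	enum pstate pvec[] = { CLEARED_ISOLATED, CLEARED_ISOLATED,
 			       CLEARED_ISOLATION_FAILED, ALREADY_CLEAR };
 	int nr = 4, putback = 0, delta_new = -nr;

 	for (int i = 0; i < nr; i++) {
 		if (pvec[i] != CLEARED_ISOLATED)
 			putback++;	/* page goes to pvec_putback */
 		if (pvec[i] == ALREADY_CLEAR)
 			delta_new++;	/* fixed scheme: flag not cleared here */
 	}

 	/* old scheme: -nr + pagevec_count(&pvec_putback) = -2, missing
 	 * the page whose flag was cleared but whose isolation failed */
 	printf("old delta_munlocked = %d\n", -nr + putback);
 	/* fixed scheme: -3, one decrement per flag actually cleared */
 	printf("new delta_munlocked = %d\n", delta_new);
 	return 0;
 }

The fix makes the accounting decision at the point where the flag test
happens, instead of inferring it from the putback count.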

Fixes: 1ebb7cc6 ("mm: munlock: batch NR_MLOCK zone state updates")
Link: http://lkml.kernel.org/r/1495678405-54569-1-git-send-email-xieyisheng1@huawei.com
Signed-off-by: Yisheng Xie <xieyisheng1@huawei.com>
Reported-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Tested-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Joern Engel <joern@logfs.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Michel Lespinasse <walken@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Xishi Qiu <qiuxishi@huawei.com>
Cc: zhongjiang <zhongjiang@huawei.com>
Cc: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
parent d494cab7
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -285,7 +285,7 @@ static void __munlock_pagevec(struct pagevec *pvec, struct zone *zone)
 {
 	int i;
 	int nr = pagevec_count(pvec);
-	int delta_munlocked;
+	int delta_munlocked = -nr;
 	struct pagevec pvec_putback;
 	int pgrescued = 0;
 
@@ -305,6 +305,8 @@ static void __munlock_pagevec(struct pagevec *pvec, struct zone *zone)
 				continue;
 			else
 				__munlock_isolation_failed(page);
+		} else {
+			delta_munlocked++;
 		}
 
 		/*
@@ -316,7 +318,6 @@ static void __munlock_pagevec(struct pagevec *pvec, struct zone *zone)
 		pagevec_add(&pvec_putback, pvec->pages[i]);
 		pvec->pages[i] = NULL;
 	}
-	delta_munlocked = -nr + pagevec_count(&pvec_putback);
 	__mod_zone_page_state(zone, NR_MLOCK, delta_munlocked);
 	spin_unlock_irq(zone_lru_lock(zone));
 