Commit ece369c7 authored by Hugh Dickins, committed by Linus Torvalds

mm/munlock: add lru_add_drain() to fix memcg_stat_test

Mike reports that LTP memcg_stat_test usually leads to

  memcg_stat_test 3 TINFO: Test unevictable with MAP_LOCKED
  memcg_stat_test 3 TINFO: Running memcg_process --mmap-lock1 -s 135168
  memcg_stat_test 3 TINFO: Warming up pid: 3460
  memcg_stat_test 3 TINFO: Process is still here after warm up: 3460
  memcg_stat_test 3 TFAIL: unevictable is 122880, 135168 expected

but may also lead to

  memcg_stat_test 4 TINFO: Test unevictable with mlock
  memcg_stat_test 4 TINFO: Running memcg_process --mmap-lock2 -s 135168
  memcg_stat_test 4 TINFO: Warming up pid: 4271
  memcg_stat_test 4 TINFO: Process is still here after warm up: 4271
  memcg_stat_test 4 TFAIL: unevictable is 122880, 135168 expected

or both.  A wee bit flaky.

follow_page_pte() used to have an lru_add_drain() for each page it mlocked,
and the test came to rely on accurate stats.  The pagevec to be drained
is different now, but still covered by lru_add_drain(); and, never mind
the test, I believe it's in everyone's interest that a bulk faulting
interface like populate_vma_page_range() or faultin_vma_page_range()
should drain its local pagevecs at the end, to save others sometimes
needing the much more expensive lru_add_drain_all().
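
For context (an illustrative aside, not part of this commit's diff): the
cost gap comes from where the draining runs.  A condensed sketch, loosely
based on mm/swap.c around this kernel version (the real functions carry
more detail and change between releases):

	/* Cheap: flush only this CPU's pending pagevecs; after the
	 * munlock rework that includes the mlock pagevec these
	 * unevictable stats depend on. */
	void lru_add_drain(void)
	{
		local_lock(&lru_pvecs.lock);
		lru_add_drain_cpu(smp_processor_id());
		local_unlock(&lru_pvecs.lock);
		mlock_page_drain(smp_processor_id());
	}

	/* Expensive: queue drain work on every CPU that has pending
	 * pagevecs and wait for all of it to complete. */
	void lru_add_drain_all(void)
	{
		__lru_add_drain_all(false);
	}

So a bulk-faulting caller that drains its own CPU on the way out costs
almost nothing, while sparing a later observer a machine-wide flush.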

This does not absolutely guarantee exact stats - the mlocking task can
be migrated between CPUs as it proceeds - but it's good enough and the
tests pass.

Link: https://lkml.kernel.org/r/47f6d39c-a075-50cb-1cfb-26dd957a48af@google.com
Fixes: b67bf49c ("mm/munlock: delete FOLL_MLOCK and FOLL_POPULATE")
Signed-off-by: Hugh Dickins <hughd@google.com>
Reported-by: Mike Galbraith <efault@gmx.de>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent cdd81b31
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1404,6 +1404,7 @@ long populate_vma_page_range(struct vm_area_struct *vma,
 	struct mm_struct *mm = vma->vm_mm;
 	unsigned long nr_pages = (end - start) / PAGE_SIZE;
 	int gup_flags;
+	long ret;
 
 	VM_BUG_ON(!PAGE_ALIGNED(start));
 	VM_BUG_ON(!PAGE_ALIGNED(end));
@@ -1438,8 +1439,10 @@ long populate_vma_page_range(struct vm_area_struct *vma,
 	 * We made sure addr is within a VMA, so the following will
 	 * not result in a stack expansion that recurses back here.
 	 */
-	return __get_user_pages(mm, start, nr_pages, gup_flags,
-				NULL, NULL, locked);
+	ret = __get_user_pages(mm, start, nr_pages, gup_flags,
+			       NULL, NULL, locked);
+	lru_add_drain();
+	return ret;
 }
 
 /*
@@ -1471,6 +1474,7 @@ long faultin_vma_page_range(struct vm_area_struct *vma, unsigned long start,
 	struct mm_struct *mm = vma->vm_mm;
 	unsigned long nr_pages = (end - start) / PAGE_SIZE;
 	int gup_flags;
+	long ret;
 
 	VM_BUG_ON(!PAGE_ALIGNED(start));
 	VM_BUG_ON(!PAGE_ALIGNED(end));
@@ -1498,8 +1502,10 @@ long faultin_vma_page_range(struct vm_area_struct *vma, unsigned long start,
 	if (check_vma_flags(vma, gup_flags))
 		return -EINVAL;
 
-	return __get_user_pages(mm, start, nr_pages, gup_flags,
-				NULL, NULL, locked);
+	ret = __get_user_pages(mm, start, nr_pages, gup_flags,
+			       NULL, NULL, locked);
+	lru_add_drain();
+	return ret;
 }
 
 /*