Commit 73848b46 authored by Hugh Dickins, committed by Linus Torvalds

ksm: fix mlockfreed to munlocked

When KSM merges an mlocked page, it has been forgetting to munlock it:
that's been left to free_page_mlock(), which reports it in /proc/vmstat as
unevictable_pgs_mlockfreed instead of unevictable_pgs_munlocked (and
whinges "Page flag mlocked set for process" in mmotm, whereas mainline is
silently forgiving).  Call munlock_vma_page() to fix that.
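
For context, with this patch applied the tail of try_to_merge_one_page() behaves roughly as sketched below (a simplified illustration, not the verbatim kernel source; the surrounding write-protect and error handling is elided):

	if (pages_identical(page, kpage))
		err = replace_page(vma, page, kpage, orig_pte);

	/*
	 * The merge has just unmapped the old page: munlock it now,
	 * rather than leaving that to free_page_mlock() at free time.
	 */
	if ((vma->vm_flags & VM_LOCKED) && !err)
		munlock_vma_page(page);

	unlock_page(page);
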
Signed-off-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Cc: Izik Eidus <ieidus@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Chris Wright <chrisw@redhat.com>
Acked-by: Rik van Riel <riel@redhat.com>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 08beca44
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -105,9 +105,10 @@ static inline int is_mlocked_vma(struct vm_area_struct *vma, struct page *page)
 }
 
 /*
- * must be called with vma's mmap_sem held for read, and page locked.
+ * must be called with vma's mmap_sem held for read or write, and page locked.
  */
 extern void mlock_vma_page(struct page *page);
+extern void munlock_vma_page(struct page *page);
 
 /*
  * Clear the page's PageMlocked().  This can be useful in a situation where
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -34,6 +34,7 @@
 #include <linux/ksm.h>
 
 #include <asm/tlbflush.h>
+#include "internal.h"
 
 /*
  * A few notes about the KSM scanning process,
@@ -762,6 +763,9 @@ static int try_to_merge_one_page(struct vm_area_struct *vma,
 	    pages_identical(page, kpage))
 		err = replace_page(vma, page, kpage, orig_pte);
 
+	if ((vma->vm_flags & VM_LOCKED) && !err)
+		munlock_vma_page(page);
+
 	unlock_page(page);
 out:
 	return err;
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -99,14 +99,14 @@ void mlock_vma_page(struct page *page)
  * not get another chance to clear PageMlocked.  If we successfully
  * isolate the page and try_to_munlock() detects other VM_LOCKED vmas
  * mapping the page, it will restore the PageMlocked state, unless the page
- * is mapped in a non-linear vma.  So, we go ahead and SetPageMlocked(),
+ * is mapped in a non-linear vma.  So, we go ahead and ClearPageMlocked(),
  * perhaps redundantly.
  * If we lose the isolation race, and the page is mapped by other VM_LOCKED
  * vmas, we'll detect this in vmscan--via try_to_munlock() or try_to_unmap()
  * either of which will restore the PageMlocked state by calling
  * mlock_vma_page() above, if it can grab the vma's mmap sem.
  */
-static void munlock_vma_page(struct page *page)
+void munlock_vma_page(struct page *page)
 {
 	BUG_ON(!PageLocked(page));
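
For orientation, munlock_vma_page() in this era of the tree follows the pattern the comment above describes: clear PageMlocked up front, then isolate the page and let try_to_munlock() restore the flag if another VM_LOCKED vma still maps it. A rough, simplified sketch (the real function also bumps the UNEVICTABLE_PGMUNLOCKED vmstat counter, which is the unevictable_pgs_munlocked figure the changelog mentions):

	void munlock_vma_page(struct page *page)
	{
		BUG_ON(!PageLocked(page));

		if (TestClearPageMlocked(page)) {
			/* no longer counted as an mlocked page */
			dec_zone_page_state(page, NR_MLOCK);
			if (!isolate_lru_page(page)) {
				/*
				 * try_to_munlock() re-sets PageMlocked if
				 * some other VM_LOCKED vma still maps the page.
				 */
				try_to_munlock(page);
				putback_lru_page(page);
			}
		}
	}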