Commit 21333b2b authored by Hugh Dickins, committed by Linus Torvalds

ksm: no debug in page_dup_rmap()

page_dup_rmap(), used on each mapped page when forking, was originally
just an inline atomic_inc of mapcount.  2.6.22 added CONFIG_DEBUG_VM
out-of-line checks to it, which would need to be ever-so-slightly
complicated to allow for the PageKsm() we're about to define.

But I think these checks never caught anything.  And if it's coding errors
we're worried about, such checks should be in page_remove_rmap() too, not
just when forking; whereas if it's pagetable corruption we're worried
about, then they shouldn't be limited to CONFIG_DEBUG_VM.

Oh, just revert page_dup_rmap() to an inline atomic_inc of mapcount.
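
For reference, here is a minimal sketch of the end state this revert leaves behind, drawn from the diff below: the helper in include/linux/rmap.h becomes a plain inline increment again, and the fork-path caller in copy_one_pte() (mm/memory.c) drops the now-unused vma and address arguments. The comments are editorial, not part of the patch.

	/* include/linux/rmap.h after this patch: no CONFIG_DEBUG_VM variant,
	 * just bump the page's _mapcount for the new pte in the child.
	 */
	static inline void page_dup_rmap(struct page *page)
	{
		atomic_inc(&page->_mapcount);
	}

	/* mm/memory.c, copy_one_pte(): the call site loses vma and addr */
	get_page(page);
	page_dup_rmap(page);
	rss[!!PageAnon(page)]++;
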
Signed-off-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Signed-off-by: Chris Wright <chrisw@redhat.com>
Signed-off-by: Izik Eidus <ieidus@redhat.com>
Cc: Nick Piggin <npiggin@suse.de>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Cc: Balbir Singh <balbir@in.ibm.com>
Cc: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
Cc: Avi Kivity <avi@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent f8af4da3
include/linux/rmap.h
@@ -71,14 +71,10 @@ void page_add_new_anon_rmap(struct page *, struct vm_area_struct *, unsigned long
 void page_add_file_rmap(struct page *);
 void page_remove_rmap(struct page *);
 
-#ifdef CONFIG_DEBUG_VM
-void page_dup_rmap(struct page *page, struct vm_area_struct *vma, unsigned long address);
-#else
-static inline void page_dup_rmap(struct page *page, struct vm_area_struct *vma, unsigned long address)
+static inline void page_dup_rmap(struct page *page)
 {
 	atomic_inc(&page->_mapcount);
 }
-#endif
 
 /*
  * Called from mm/vmscan.c to handle paging out
...
mm/memory.c
@@ -597,7 +597,7 @@ copy_one_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	page = vm_normal_page(vma, addr, pte);
 	if (page) {
 		get_page(page);
-		page_dup_rmap(page, vma, addr);
+		page_dup_rmap(page);
 		rss[!!PageAnon(page)]++;
 	}
...
mm/rmap.c
@@ -710,27 +710,6 @@ void page_add_file_rmap(struct page *page)
 	}
 }
 
-#ifdef CONFIG_DEBUG_VM
-/**
- * page_dup_rmap - duplicate pte mapping to a page
- * @page: the page to add the mapping to
- * @vma: the vm area being duplicated
- * @address: the user virtual address mapped
- *
- * For copy_page_range only: minimal extract from page_add_file_rmap /
- * page_add_anon_rmap, avoiding unnecessary tests (already checked) so it's
- * quicker.
- *
- * The caller needs to hold the pte lock.
- */
-void page_dup_rmap(struct page *page, struct vm_area_struct *vma, unsigned long address)
-{
-	if (PageAnon(page))
-		__page_check_anon_rmap(page, vma, address);
-	atomic_inc(&page->_mapcount);
-}
-#endif
-
 /**
  * page_remove_rmap - take down pte mapping from a page
  * @page: page to remove mapping from
...