Commit 5b40998a authored by Vlastimil Babka, committed by Linus Torvalds

mm: munlock: remove redundant get_page/put_page pair on the fast path

The performance of the fast path in munlock_vma_range() can be further
improved by avoiding the atomic operations of a redundant get_page()/put_page() pair.

When calling get_page() during page isolation, we already have the pin
from follow_page_mask().  This pin will then be returned by
__pagevec_lru_add(), after which we do not reference the pages anymore.

After this patch, an 8% speedup was measured when munlocking a 56 GB
memory area with THP disabled.
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Jörn Engel <joern@logfs.org>
Acked-by: Mel Gorman <mgorman@suse.de>
Cc: Michel Lespinasse <walken@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 56afe477
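
To illustrate the refcount hand-off the commit message describes, here is a minimal user-space toy model (not kernel code; toy_page, lookup_pin and putback_consumes_pin are made-up names standing in for struct page, follow_page_mask() and __pagevec_lru_add()). It shows how handing the lookup pin over to the batched putback keeps the refcount balanced while skipping one get_page()/put_page() pair per page.

#include <assert.h>
#include <stdio.h>

/* Toy stand-in for struct page: just a reference count. */
struct toy_page {
        int refcount;
};

/* Models follow_page_mask(): the lookup itself returns a pinned page. */
static void lookup_pin(struct toy_page *p)  { p->refcount++; }
/* Models get_page()/put_page(). */
static void page_get(struct toy_page *p)    { p->refcount++; }
static void page_put(struct toy_page *p)    { assert(p->refcount > 0); p->refcount--; }

/*
 * Models __pagevec_lru_add()/__putback_lru_fast(): putting the page back
 * onto the LRU consumes one reference, i.e. the caller hands its pin over.
 */
static void putback_consumes_pin(struct toy_page *p) { page_put(p); }

int main(void)
{
        struct toy_page page = { .refcount = 0 };

        /* Before the patch: lookup pin plus an extra get/put around isolation. */
        lookup_pin(&page);              /* pin from the page-table walk */
        page_get(&page);                /* redundant second pin taken at isolation */
        putback_consumes_pin(&page);    /* putback drops one pin */
        page_put(&page);                /* old Phase 4 drops the lookup pin */
        assert(page.refcount == 0);

        /* After the patch: the lookup pin itself is handed to the putback. */
        lookup_pin(&page);              /* pin from the page-table walk */
        putback_consumes_pin(&page);    /* putback returns that same pin */
        assert(page.refcount == 0);

        puts("refcounts balanced in both schemes");
        return 0;
}

Both sequences end with a balanced refcount; the second simply avoids two atomic operations per page, which is where the measured speedup on the fast path comes from. The actual change to __munlock_pagevec() follows.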
@@ -303,8 +303,10 @@ static void __munlock_pagevec(struct pagevec *pvec, struct zone *zone)
                         if (PageLRU(page)) {
                                 lruvec = mem_cgroup_page_lruvec(page, zone);
                                 lru = page_lru(page);
-
-                                get_page(page);
+                                /*
+                                 * We already have pin from follow_page_mask()
+                                 * so we can spare the get_page() here.
+                                 */
                                 ClearPageLRU(page);
                                 del_page_from_lru_list(page, lruvec, lru);
                         } else {
@@ -336,25 +338,25 @@ static void __munlock_pagevec(struct pagevec *pvec, struct zone *zone)
                         lock_page(page);
                         if (!__putback_lru_fast_prepare(page, &pvec_putback,
                                         &pgrescued)) {
-                                /* Slow path */
+                                /*
+                                 * Slow path. We don't want to lose the last
+                                 * pin before unlock_page()
+                                 */
+                                get_page(page); /* for putback_lru_page() */
                                 __munlock_isolated_page(page);
                                 unlock_page(page);
+                                put_page(page); /* from follow_page_mask() */
                         }
                 }
         }
 
-        /* Phase 3: page putback for pages that qualified for the fast path */
+        /*
+         * Phase 3: page putback for pages that qualified for the fast path
+         * This will also call put_page() to return pin from follow_page_mask()
+         */
         if (pagevec_count(&pvec_putback))
                 __putback_lru_fast(&pvec_putback, pgrescued);
 
-        /* Phase 4: put_page to return pin from follow_page_mask() */
-        for (i = 0; i < nr; i++) {
-                struct page *page = pvec->pages[i];
-
-                if (page)
-                        put_page(page);
-        }
-
         pagevec_reinit(pvec);
 }
 