Commit e6e88712 authored by Matthew Wilcox (Oracle), committed by Linus Torvalds

mm: optimise madvise WILLNEED

Instead of calling find_get_entry() for every page index, use an XArray
iterator to skip over NULL entries, and avoid calling get_page(),
because we only want the swap entries.
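As a sketch of the XArray idiom this commit adopts (kernel-only API, not compilable in userspace; `first_index`/`last_index` and the "sleeping work" placeholder are illustrative, not from the patch):

```c
/* Sketch only: walk mapping->i_pages, visiting only present entries.
 * xas_for_each() skips NULL slots internally, so there is no per-index
 * lookup; xa_is_value() distinguishes swap (value) entries from pages.
 */
XA_STATE(xas, &mapping->i_pages, first_index);
struct page *page;

rcu_read_lock();
xas_for_each(&xas, page, last_index) {
	if (!xa_is_value(page))
		continue;	/* a real page, not a swap entry */
	/* xas_pause() records our position so the walk can resume
	 * after we drop the RCU read lock to do sleeping work. */
	xas_pause(&xas);
	rcu_read_unlock();
	/* ... sleeping work on the swap entry ... */
	rcu_read_lock();
}
rcu_read_unlock();
```

Pausing and dropping the RCU lock around the sleeping work is what fixed the LTP soft lockups mentioned above: holding the RCU read lock across a long walk with blocking calls stalls the CPU.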

[willy@infradead.org: fix LTP soft lockups]
  Link: https://lkml.kernel.org/r/20200914165032.GS6583@casper.infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: William Kucharski <william.kucharski@oracle.com>
Cc: Qian Cai <cai@redhat.com>
Link: https://lkml.kernel.org/r/20200910183318.20139-4-willy@infradead.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent f5df8635
@@ -224,25 +224,28 @@ static void force_shm_swapin_readahead(struct vm_area_struct *vma,
 		unsigned long start, unsigned long end,
 		struct address_space *mapping)
 {
-	pgoff_t index;
+	XA_STATE(xas, &mapping->i_pages, linear_page_index(vma, start));
+	pgoff_t end_index = end / PAGE_SIZE;
 	struct page *page;
-	swp_entry_t swap;
 
-	for (; start < end; start += PAGE_SIZE) {
-		index = ((start - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;
+	rcu_read_lock();
+	xas_for_each(&xas, page, end_index) {
+		swp_entry_t swap;
 
-		page = find_get_entry(mapping, index);
-		if (!xa_is_value(page)) {
-			if (page)
-				put_page(page);
+		if (!xa_is_value(page))
 			continue;
-		}
+		xas_pause(&xas);
+		rcu_read_unlock();
+
 		swap = radix_to_swp_entry(page);
 		page = read_swap_cache_async(swap, GFP_HIGHUSER_MOVABLE,
 							NULL, 0, false);
 		if (page)
 			put_page(page);
+
+		rcu_read_lock();
 	}
+	rcu_read_unlock();
+
 	lru_add_drain();	/* Push any new pages onto the LRU now */
 }