Commit a6de4b48 authored by Matthew Wilcox (Oracle), committed by Linus Torvalds

mm: convert find_get_entry to return the head page

There are only four callers remaining of find_get_entry().
get_shadow_from_swap_cache() only wants to see shadow entries and doesn't
care about which page is returned.  Push the find_subpage() call into
find_lock_entry(), find_get_incore_page() and pagecache_get_page().

[willy@infradead.org: fix oops]
  Link: https://lkml.kernel.org/r/20200914112738.GM6583@casper.infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: William Kucharski <william.kucharski@oracle.com>
Link: https://lkml.kernel.org/r/20200910183318.20139-7-willy@infradead.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 9dfc8ff3
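
For context, the find_subpage() helper being pushed into the callers maps a head page plus a page cache index to the precise subpage of a compound page. A sketch of its definition around this kernel version (from include/linux/pagemap.h, lightly paraphrased; not part of this diff):

static inline struct page *find_subpage(struct page *head, pgoff_t index)
{
        /* hugetlbfs callers want the head page regardless of the index */
        if (PageHuge(head))
                return head;

        /* step to the tail page selected by the low bits of the index */
        return head + (index & (thp_nr_pages(head) - 1));
}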
mm/filemap.c
@@ -1645,19 +1645,19 @@ EXPORT_SYMBOL(page_cache_prev_miss);
 /**
  * find_get_entry - find and get a page cache entry
  * @mapping: the address_space to search
- * @offset: the page cache index
+ * @index: The page cache index.
  *
  * Looks up the page cache slot at @mapping & @offset. If there is a
- * page cache page, it is returned with an increased refcount.
+ * page cache page, the head page is returned with an increased refcount.
  *
  * If the slot holds a shadow entry of a previously evicted page, or a
  * swap entry from shmem/tmpfs, it is returned.
  *
- * Return: the found page or shadow entry, %NULL if nothing is found.
+ * Return: The head page or shadow entry, %NULL if nothing is found.
  */
-struct page *find_get_entry(struct address_space *mapping, pgoff_t offset)
+struct page *find_get_entry(struct address_space *mapping, pgoff_t index)
 {
-        XA_STATE(xas, &mapping->i_pages, offset);
+        XA_STATE(xas, &mapping->i_pages, index);
         struct page *page;
 
         rcu_read_lock();
@@ -1685,7 +1685,6 @@ struct page *find_get_entry(struct address_space *mapping, pgoff_t offset)
                 put_page(page);
                 goto repeat;
         }
-        page = find_subpage(page, offset);
 out:
         rcu_read_unlock();
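
The fourth caller named in the commit message, get_shadow_from_swap_cache(), needs no find_subpage() call because it only acts on value entries. A sketch of that caller as it looked at the time (reconstructed from mm/swap_state.c, lightly simplified):

void *get_shadow_from_swap_cache(swp_entry_t entry)
{
        struct address_space *address_space = swap_address_space(entry);
        pgoff_t idx = swp_offset(entry);
        struct page *page;

        page = find_get_entry(address_space, idx);
        /* only shadow (value) entries are interesting here */
        if (xa_is_value(page))
                return page;
        /* a real page was found: drop the reference taken above */
        if (page)
                put_page(page);
        return NULL;
}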
@@ -1722,6 +1721,7 @@ struct page *find_lock_entry(struct address_space *mapping, pgoff_t offset)
                         put_page(page);
                         goto repeat;
                 }
+                page = find_subpage(page, offset);
                 VM_BUG_ON_PAGE(page_to_pgoff(page) != offset, page);
         }
         return page;
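
With the hunk above applied, find_lock_entry() locks the head page returned by find_get_entry() and only afterwards narrows it to the subpage matching @offset. A simplified sketch of the resulting flow (reconstructed; the truncation recheck and retry loop are elided):

        page = find_get_entry(mapping, offset);
        if (page && !xa_is_value(page)) {
                lock_page(page);                        /* the lock is taken on the head page */
                page = find_subpage(page, offset);      /* then narrowed to the subpage */
        }
        return page;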
@@ -1768,6 +1768,7 @@ struct page *pagecache_get_page(struct address_space *mapping, pgoff_t index,
                 page = NULL;
         if (!page)
                 goto no_page;
+        page = find_subpage(page, index);
 
         if (fgp_flags & FGP_LOCK) {
                 if (fgp_flags & FGP_NOWAIT) {
mm/swap_state.c
@@ -431,8 +431,10 @@ struct page *find_get_incore_page(struct address_space *mapping, pgoff_t index)
         struct swap_info_struct *si;
         struct page *page = find_get_entry(mapping, index);
 
-        if (!xa_is_value(page))
+        if (!page)
                 return page;
+        if (!xa_is_value(page))
+                return find_subpage(page, index);
         if (!shmem_mapping(mapping))
                 return NULL;
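
The hunk above also shows the general caller pattern this series establishes: check for NULL first, hand real (non-value) entries to find_subpage(), and only then interpret the entry as a shadow or swap value. A standalone illustration (hypothetical caller; handle_value_entry() is an invented placeholder, not a kernel function):

        struct page *entry = find_get_entry(mapping, index);

        if (!entry)
                return NULL;                            /* nothing cached at @index */
        if (xa_is_value(entry))
                return handle_value_entry(entry);       /* shadow or swap entry */
        return find_subpage(entry, index);              /* real page: head -> subpage */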