Commit 0ba5e806 authored by Illia Ostapyshyn, committed by Andrew Morton

mm/vmscan: update stale references to shrink_page_list

Commit 49fd9b6d ("mm/vmscan: fix a lot of comments") renamed
shrink_page_list() to shrink_folio_list().  Fix up the remaining
references to the old name in comments and documentation.

Link: https://lkml.kernel.org/r/20240517091348.1185566-1-illia@yshyn.com
Signed-off-by: Illia Ostapyshyn <illia@yshyn.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent 525c3030
@@ -191,13 +191,13 @@ have become evictable again (via munlock() for example) and have been "rescued"
 from the unevictable list. However, there may be situations where we decide,
 for the sake of expediency, to leave an unevictable folio on one of the regular
 active/inactive LRU lists for vmscan to deal with. vmscan checks for such
-folios in all of the shrink_{active|inactive|page}_list() functions and will
+folios in all of the shrink_{active|inactive|folio}_list() functions and will
 "cull" such folios that it encounters: that is, it diverts those folios to the
 unevictable list for the memory cgroup and node being scanned.
 
 There may be situations where a folio is mapped into a VM_LOCKED VMA,
 but the folio does not have the mlocked flag set. Such folios will make
-it all the way to shrink_active_list() or shrink_page_list() where they
+it all the way to shrink_active_list() or shrink_folio_list() where they
 will be detected when vmscan walks the reverse map in folio_referenced()
 or try_to_unmap(). The folio is culled to the unevictable list when it
 is released by the shrinker.
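
(For orientation, not part of the commit: the "culling" described above boils down to a check of the following shape. folio_evictable(), folio_set_unevictable() and lruvec_add_folio() are real kernel helpers (mm/internal.h, include/linux/mm_inline.h); the wrapper function cull_if_unevictable() is invented here for illustration and is not the verbatim shrink_folio_list() code.)

	/*
	 * Illustrative sketch only -- not verbatim kernel code.  In the
	 * shrink_*_list() functions, a folio found to be unevictable is
	 * diverted to the unevictable LRU of the lruvec (memcg + node)
	 * being scanned instead of being fed further into reclaim.
	 */
	static bool cull_if_unevictable(struct lruvec *lruvec, struct folio *folio)
	{
		if (folio_evictable(folio))
			return false;			/* normal reclaim path */

		folio_set_unevictable(folio);
		lruvec_add_folio(lruvec, folio);	/* onto the unevictable list */
		return true;				/* caller skips this folio */
	}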
@@ -269,7 +269,7 @@ the LRU. Such pages can be "noticed" by memory management in several places:
 (4) in the fault path and when a VM_LOCKED stack segment is expanded; or
 
-(5) as mentioned above, in vmscan:shrink_page_list() when attempting to
+(5) as mentioned above, in vmscan:shrink_folio_list() when attempting to
     reclaim a page in a VM_LOCKED VMA by folio_referenced() or try_to_unmap().
 
 mlocked pages become unlocked and rescued from the unevictable list when:
@@ -548,12 +548,12 @@ Some examples of these unevictable pages on the LRU lists are:
 (3) pages still mapped into VM_LOCKED VMAs, which should be marked mlocked,
     but events left mlock_count too low, so they were munlocked too early.
 
-vmscan's shrink_inactive_list() and shrink_page_list() also divert obviously
+vmscan's shrink_inactive_list() and shrink_folio_list() also divert obviously
 unevictable pages found on the inactive lists to the appropriate memory cgroup
 and node unevictable list.
 
 rmap's folio_referenced_one(), called via vmscan's shrink_active_list() or
-shrink_page_list(), and rmap's try_to_unmap_one() called via shrink_page_list(),
+shrink_folio_list(), and rmap's try_to_unmap_one() called via shrink_folio_list(),
 check for (3) pages still mapped into VM_LOCKED VMAs, and call mlock_vma_folio()
 to correct them. Such pages are culled to the unevictable list when released
 by the shrinker.
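
(For orientation: the rescue described above has roughly the following shape, condensed from the VM_LOCKED handling in the rmap walk. Large-folio handling and walk teardown are omitted, and the mlock_vma_folio() signature is simplified.)

	/*
	 * Illustrative sketch -- condensed from folio_referenced_one() /
	 * try_to_unmap_one(); not verbatim kernel code.
	 */
	if (vma->vm_flags & VM_LOCKED) {
		/* Restore the mlock which got missed */
		mlock_vma_folio(folio, vma);	/* signature simplified */
		/*
		 * Abort the walk: the folio stays mapped and is culled to
		 * the unevictable list when the shrinker releases it.
		 */
		return false;
	}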
@@ -4541,7 +4541,7 @@ static vm_fault_t __do_fault(struct vm_fault *vmf)
  * lock_page(B)
  *				lock_page(B)
  * pte_alloc_one
- *   shrink_page_list
+ *   shrink_folio_list
  *     wait_on_page_writeback(A)
  *				SetPageWriteback(B)
  *				unlock_page(B)
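
(For orientation: this deadlock diagram is why __do_fault() preallocates the page table before any folio lock is taken. Condensed, and not verbatim, the relevant logic is:)

	/*
	 * Condensed from __do_fault() (illustrative): allocate the PTE
	 * page *before* the fault handler locks a folio, so that a later
	 * allocation can never recurse into reclaim -- and thus into
	 * shrink_folio_list() -> wait_on_page_writeback() -- while we
	 * hold a folio lock that writeback completion may depend on.
	 */
	if (pmd_none(*vmf->pmd) && !vmf->prealloc_pte) {
		vmf->prealloc_pte = pte_alloc_one(vma->vm_mm);
		if (!vmf->prealloc_pte)
			return VM_FAULT_OOM;
	}

	ret = vma->vm_ops->fault(vmf);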
@@ -28,7 +28,7 @@
 /*
  * swapper_space is a fiction, retained to simplify the path through
- * vmscan's shrink_page_list.
+ * vmscan's shrink_folio_list.
  */
 static const struct address_space_operations swap_aops = {
 	.writepage	= swap_writepage,
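
(For orientation: the "fiction" pays off because reclaim writes dirty folios through their mapping's address_space_operations. Giving swap-cache folios swapper_space as their mapping means shrink_folio_list() needs no swap special case; roughly, and simplified here:)

	/*
	 * Roughly what vmscan's pageout() does (illustrative; error
	 * handling and writeback-control setup omitted): the same
	 * indirect call writes file folios via their filesystem and
	 * swap-cache folios via swap_writepage(), courtesy of the
	 * swap_aops shown above.
	 */
	res = mapping->a_ops->writepage(&folio->page, &wbc);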
@@ -554,7 +554,7 @@ EXPORT_SYMBOL(invalidate_mapping_pages);
  * This is like mapping_evict_folio(), except it ignores the folio's
  * refcount. We do this because invalidate_inode_pages2() needs stronger
  * invalidation guarantees, and cannot afford to leave folios behind because
- * shrink_page_list() has a temp ref on them, or because they're transiently
+ * shrink_folio_list() has a temp ref on them, or because they're transiently
  * sitting in the folio_add_lru() caches.
  */
 static int invalidate_complete_folio2(struct address_space *mapping,
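
(For orientation: "stronger guarantees" means the folio is removed from the page cache even while such transient references exist. Condensed, with all i_pages/i_lock locking omitted, and not verbatim:)

	/*
	 * Condensed from invalidate_complete_folio2() (illustrative):
	 * unlike mapping_evict_folio(), no folio_ref_count() test is
	 * made, so a temporary reference held by e.g. shrink_folio_list()
	 * cannot keep the folio alive in the page cache.
	 */
	if (folio->mapping != mapping)
		return 0;			/* already truncated away */
	if (!filemap_release_folio(folio, GFP_KERNEL))
		return 0;			/* fs still owns private data */
	if (folio_test_dirty(folio))
		return 0;			/* redirtied; caller retries */
	__filemap_remove_folio(folio, NULL);	/* note: no refcount check */
	return 1;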