Commit 9ae2feac authored by Barry Song, committed by Andrew Morton

mm: use folio_add_new_anon_rmap() if folio_test_anon(folio)==false

For the !folio_test_anon(folio) case, we can now invoke
folio_add_new_anon_rmap() with the rmap flags set to either EXCLUSIVE or
non-EXCLUSIVE.  This keeps us from tripping the VM_WARN_ON_FOLIO check
within __folio_add_anon_rmap() as we bring up mTHP swapin:

 static __always_inline void __folio_add_anon_rmap(struct folio *folio,
                 struct page *page, int nr_pages, struct vm_area_struct *vma,
                 unsigned long address, rmap_t flags, enum rmap_level level)
 {
         ...
         if (unlikely(!folio_test_anon(folio))) {
                 VM_WARN_ON_FOLIO(folio_test_large(folio) &&
                                  level != RMAP_LEVEL_PMD, folio);
         }
         ...
 }

It also improves the code's readability.  Currently, all new anonymous
folios passed to folio_add_anon_rmap_ptes() are order-0, so a new folio
cannot be partially exclusive: it is either entirely exclusive or entirely
shared.
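
The call-site pattern this change introduces (condensed from the mm/memory.c
and mm/swapfile.c hunks below; the comment is mine and only illustrative) is:

         if (!folio_test_anon(folio)) {
                 /* small folios only: either fully exclusive or fully shared */
                 VM_WARN_ON_ONCE(folio_test_large(folio));
                 VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);
                 folio_add_new_anon_rmap(folio, vma, address, rmap_flags);
         } else {
                 folio_add_anon_rmap_ptes(folio, page, nr_pages, vma, address,
                                          rmap_flags);
         }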

A useful comment from Hugh's fix:

: Commit "mm: use folio_add_new_anon_rmap() if folio_test_anon(folio)==
: false" has extended folio_add_new_anon_rmap() to use on non-exclusive
: folios, already visible to others in swap cache and on LRU.
: 
: That renders its non-atomic __folio_set_swapbacked() unsafe: it risks
: overwriting concurrent atomic operations on folio->flags, losing bits
: added or restoring bits cleared.  Since it's only used in this risky way
: when folio_test_locked and !folio_test_anon, many such races are excluded;
: but, for example, isolations by folio_test_clear_lru() are vulnerable, and
: setting or clearing active.
: 
: It could just use the atomic folio_set_swapbacked(); but this function
: does try to avoid atomics where it can, so use a branch instead: just
: avoid setting swapbacked when it is already set, that is good enough. 
: (Swapbacked is normally stable once set: lazyfree can undo it, but only
: later, when found anon in a page table.)
: 
: This fixes a lot of instability under compaction and swapping loads:
: assorted "Bad page"s, VM_BUG_ON_FOLIO()s, apparently even page double
: frees - though I've not worked out what races could lead to the latter.
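
Condensed, the fix in folio_add_new_anon_rmap() (see the mm/rmap.c hunk below;
the comment is mine, restating Hugh's reasoning) is a test-before-set:

         /*
          * Only use the non-atomic __folio_set_swapbacked() when the bit is
          * not yet set: a not-yet-anon folio in the swap cache / on the LRU
          * can see concurrent atomic updates to folio->flags, which a
          * non-atomic read-modify-write could otherwise clobber.
          */
         if (!folio_test_swapbacked(folio))
                 __folio_set_swapbacked(folio);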

[akpm@linux-foundation.org: comment fixes, per David and akpm]
[v-songbaohua@oppo.com: lock the folio to avoid race]
  Link: https://lkml.kernel.org/r/20240622032002.53033-1-21cnbao@gmail.com
[hughd@google.com: folio_add_new_anon_rmap() careful __folio_set_swapbacked()]
  Link: https://lkml.kernel.org/r/f3599b1d-8323-0dc5-e9e0-fdb3cfc3dd5a@google.com
Link: https://lkml.kernel.org/r/20240617231137.80726-3-21cnbao@gmail.com
Signed-off-by: Barry Song <v-songbaohua@oppo.com>
Signed-off-by: Hugh Dickins <hughd@google.com>
Suggested-by: David Hildenbrand <david@redhat.com>
Tested-by: Shuai Yuan <yuanshuai@oppo.com>
Acked-by: David Hildenbrand <david@redhat.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Chris Li <chrisl@kernel.org>
Cc: "Huang, Ying" <ying.huang@intel.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Yosry Ahmed <yosryahmed@google.com>
Cc: Yu Zhao <yuzhao@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent 15bde4ab
mm/memory.c
@@ -4341,6 +4341,15 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
         if (unlikely(folio != swapcache && swapcache)) {
                 folio_add_new_anon_rmap(folio, vma, address, RMAP_EXCLUSIVE);
                 folio_add_lru_vma(folio, vma);
+        } else if (!folio_test_anon(folio)) {
+                /*
+                 * We currently only expect small !anon folios, which are either
+                 * fully exclusive or fully shared. If we ever get large folios
+                 * here, we have to be careful.
+                 */
+                VM_WARN_ON_ONCE(folio_test_large(folio));
+                VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);
+                folio_add_new_anon_rmap(folio, vma, address, rmap_flags);
         } else {
                 folio_add_anon_rmap_ptes(folio, page, nr_pages, vma, address,
                                          rmap_flags);
mm/rmap.c
@@ -1422,7 +1422,9 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
         VM_WARN_ON_FOLIO(!exclusive && !folio_test_locked(folio), folio);
         VM_BUG_ON_VMA(address < vma->vm_start ||
                         address + (nr << PAGE_SHIFT) > vma->vm_end, vma);
-        __folio_set_swapbacked(folio);
+
+        if (!folio_test_swapbacked(folio))
+                __folio_set_swapbacked(folio);
         __folio_set_anon(folio, vma, address, exclusive);
 
         if (likely(!folio_test_large(folio))) {
mm/swapfile.c
@@ -1908,8 +1908,18 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
                 VM_BUG_ON_FOLIO(folio_test_writeback(folio), folio);
                 if (pte_swp_exclusive(old_pte))
                         rmap_flags |= RMAP_EXCLUSIVE;
-
-                folio_add_anon_rmap_pte(folio, page, vma, addr, rmap_flags);
+                /*
+                 * We currently only expect small !anon folios, which are either
+                 * fully exclusive or fully shared. If we ever get large folios
+                 * here, we have to be careful.
+                 */
+                if (!folio_test_anon(folio)) {
+                        VM_WARN_ON_ONCE(folio_test_large(folio));
+                        VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);
+                        folio_add_new_anon_rmap(folio, vma, addr, rmap_flags);
+                } else {
+                        folio_add_anon_rmap_pte(folio, page, vma, addr,
+                                                rmap_flags);
+                }
         } else { /* ksm created a completely new copy */
                 folio_add_new_anon_rmap(folio, vma, addr, RMAP_EXCLUSIVE);
                 folio_add_lru_vma(folio, vma);