Commit 5ca43289 authored by David Hildenbrand, committed by Andrew Morton

mm/rmap: move SetPageAnonExclusive() out of page_move_anon_rmap()

Patch series "mm/rmap: convert page_move_anon_rmap() to
folio_move_anon_rmap()".

Convert page_move_anon_rmap() to folio_move_anon_rmap(), letting the
callers handle PageAnonExclusive.  I'm including cleanup patch #3 because
it fits into the picture and can be done more cleanly after the
conversion.


This patch (of 3):

Let's move it into the caller: there is a difference between whether an
anon folio can be mapped by only one process (e.g., into one VMA) and
whether it is truly exclusive (e.g., no references -- including GUP --
from other processes).
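
As a condensed sketch of that distinction (illustrative only; it mirrors
the hugetlb_wp hunk below, and "folio"/"page" stand for the faulted
folio and page):

	if (folio_mapcount(folio) == 1 && folio_test_anon(folio)) {
		/* Sole mapper: the rmap may be moved to this VMA's anon_vma. */
		page_move_anon_rmap(page, vma);
		/*
		 * True exclusivity (no other references, including GUP pins)
		 * is a stronger property that only the caller can assert.
		 */
		SetPageAnonExclusive(page);
	}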

Further, for large folios the page might not actually point at the head
page of the folio, so it is better handled in the caller.  This is a
preparation for converting page_move_anon_rmap() to consume a folio.
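
As a sketch of why (hypothetical snippet, not taken from this patch;
"page" may be any subpage of a large folio):

	struct folio *folio = page_folio(page);	/* always the head page */

	/* folio->mapping is per-folio state: a folio helper can update it... */
	WRITE_ONCE(folio->mapping, anon_vma);
	/*
	 * ...but PG_anon_exclusive lives on the exact (sub)page, which a
	 * future folio_move_anon_rmap() would no longer know about.
	 */
	SetPageAnonExclusive(page);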

Link: https://lkml.kernel.org/r/20231002142949.235104-1-david@redhat.com
Link: https://lkml.kernel.org/r/20231002142949.235104-2-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Suren Baghdasaryan <surenb@google.com>
Reviewed-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent 4a68fef1
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1377,6 +1377,7 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf)
 		pmd_t entry;
 
 		page_move_anon_rmap(page, vma);
+		SetPageAnonExclusive(page);
 		folio_unlock(folio);
 reuse:
 		if (unlikely(unshare)) {
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5652,8 +5652,10 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
 	 * owner and can reuse this page.
 	 */
 	if (folio_mapcount(old_folio) == 1 && folio_test_anon(old_folio)) {
-		if (!PageAnonExclusive(&old_folio->page))
+		if (!PageAnonExclusive(&old_folio->page)) {
 			page_move_anon_rmap(&old_folio->page, vma);
+			SetPageAnonExclusive(&old_folio->page);
+		}
 		if (likely(!unshare))
 			set_huge_ptep_writable(vma, haddr, ptep);
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3481,6 +3481,7 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf)
 		 * sunglasses. Hit it.
 		 */
 		page_move_anon_rmap(vmf->page, vma);
+		SetPageAnonExclusive(vmf->page);
 		folio_unlock(folio);
 reuse:
 		if (unlikely(unshare)) {
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1152,7 +1152,6 @@ void page_move_anon_rmap(struct page *page, struct vm_area_struct *vma)
 	 * folio_test_anon()) will not see one without the other.
 	 */
 	WRITE_ONCE(folio->mapping, anon_vma);
-	SetPageAnonExclusive(page);
 }
 
 /**