Commit ec177880 authored by Kefeng Wang, committed by Andrew Morton

mm: mprotect: use a folio in change_pte_range()

Use a folio in change_pte_range() to save three compound_head() calls.
Since only normal and PMD-mapped pages are handled by NUMA balancing now,
it is enough to update only the entire folio's access time.

Link: https://lkml.kernel.org/r/20231018140806.2783514-10-wangkefeng.wang@huawei.com
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Zi Yan <ziy@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent 0b201c36
@@ -114,7 +114,7 @@ static long change_pte_range(struct mmu_gather *tlb,
 			 * pages. See similar comment in change_huge_pmd.
 			 */
 			if (prot_numa) {
-				struct page *page;
+				struct folio *folio;
 				int nid;
 				bool toptier;
 
@@ -122,13 +122,14 @@ static long change_pte_range(struct mmu_gather *tlb,
 				if (pte_protnone(oldpte))
 					continue;
 
-				page = vm_normal_page(vma, addr, oldpte);
-				if (!page || is_zone_device_page(page) || PageKsm(page))
+				folio = vm_normal_folio(vma, addr, oldpte);
+				if (!folio || folio_is_zone_device(folio) ||
+				    folio_test_ksm(folio))
 					continue;
 
 				/* Also skip shared copy-on-write pages */
 				if (is_cow_mapping(vma->vm_flags) &&
-				    page_count(page) != 1)
+				    folio_ref_count(folio) != 1)
 					continue;
 
 				/*
@@ -136,14 +137,15 @@ static long change_pte_range(struct mmu_gather *tlb,
 				 * it cannot move them all from MIGRATE_ASYNC
 				 * context.
 				 */
-				if (page_is_file_lru(page) && PageDirty(page))
+				if (folio_is_file_lru(folio) &&
+				    folio_test_dirty(folio))
 					continue;
 
 				/*
 				 * Don't mess with PTEs if page is already on the node
 				 * a single-threaded process is running on.
 				 */
-				nid = page_to_nid(page);
+				nid = folio_nid(folio);
 				if (target_node == nid)
 					continue;
 				toptier = node_is_toptier(nid);
@@ -157,7 +159,7 @@ static long change_pte_range(struct mmu_gather *tlb,
 					continue;
 				if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
 				    !toptier)
-					xchg_page_access_time(page,
+					folio_xchg_access_time(folio,
 						jiffies_to_msecs(jiffies));
 			}
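Background note: the compound_head() savings mentioned in the commit message come from the fact that page-based helpers must first resolve a (possibly tail) page to its head page, while folio-based helpers already hold the head. The following is a minimal user-space sketch of that pattern only; the structs and model_* helpers are illustrative stand-ins, not the real kernel definitions from this commit.

/*
 * Illustrative model, not kernel code: each page helper repeats the
 * "find the head" lookup, folio helpers do not.
 */
#include <stdbool.h>
#include <stdio.h>

struct folio { bool dirty; int nid; };
struct page  { struct folio *head; };	/* every page can reach its folio */

/* stand-in for the repeated compound_head()/page-to-folio lookup */
static struct folio *model_page_folio(struct page *page)
{
	return page->head;
}

/* page-based helpers: one head lookup per call */
static bool model_PageDirty(struct page *page)   { return model_page_folio(page)->dirty; }
static int  model_page_to_nid(struct page *page) { return model_page_folio(page)->nid; }

/* folio-based helpers: the lookup already happened, once */
static bool model_folio_test_dirty(struct folio *folio) { return folio->dirty; }
static int  model_folio_nid(struct folio *folio)        { return folio->nid; }

int main(void)
{
	struct folio f = { .dirty = true, .nid = 1 };
	struct page  p = { .head = &f };

	/* old pattern: every check re-resolves the head */
	printf("page helpers:  dirty=%d nid=%d\n",
	       model_PageDirty(&p), model_page_to_nid(&p));

	/* new pattern: resolve the folio once, then use folio helpers */
	struct folio *folio = model_page_folio(&p);
	printf("folio helpers: dirty=%d nid=%d\n",
	       model_folio_test_dirty(folio), model_folio_nid(folio));
	return 0;
}

In the patch above the same idea is applied by calling vm_normal_folio() once and then using folio_is_zone_device(), folio_test_ksm(), folio_ref_count(), folio_is_file_lru(), folio_test_dirty(), folio_nid() and folio_xchg_access_time() on the result.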