Commit 7cb1d7ef authored by Peter Xu, committed by Andrew Morton

mm/khugepaged: cleanup memcg uncharge for failure path

Explicit memcg uncharging is not needed when the memcg accounting has the
same lifespan as the page/folio.  That became the case for khugepaged
after Yang & Zach's recent rework: the hpage is now allocated fresh for
each collapse attempt rather than being cached.

Clean up the explicit memcg uncharge on the khugepaged failure paths and
leave that to put_page().

Link: https://lkml.kernel.org/r/20230303151218.311015-1-peterx@redhat.com
Signed-off-by: Peter Xu <peterx@redhat.com>
Suggested-by: Zach O'Keefe <zokeefe@google.com>
Reviewed-by: Zach O'Keefe <zokeefe@google.com>
Reviewed-by: Yang Shi <shy828301@gmail.com>
Cc: David Stevens <stevensd@chromium.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent 9dabf6e1
@@ -1135,10 +1135,8 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 out_up_write:
 	mmap_write_unlock(mm);
 out_nolock:
-	if (hpage) {
-		mem_cgroup_uncharge(page_folio(hpage));
+	if (hpage)
 		put_page(hpage);
-	}
 	trace_mm_collapse_huge_page(mm, result == SCAN_SUCCEED, result);
 	return result;
 }
@@ -2137,10 +2135,8 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 	unlock_page(hpage);
 out:
 	VM_BUG_ON(!list_empty(&pagelist));
-	if (hpage) {
-		mem_cgroup_uncharge(page_folio(hpage));
+	if (hpage)
 		put_page(hpage);
-	}
 	trace_mm_khugepaged_collapse_file(mm, hpage, index, is_shmem, addr,
 					  file, nr, result);
 	return result;
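
For illustration, here is a minimal userspace C model of the lifetime coupling the commit message relies on. All names here (toy_page, toy_put, memcg_usage, etc.) are hypothetical and not kernel APIs; this is a sketch of the pattern, not the kernel implementation. Because the charge is released from the object's free path, a failure path only needs to drop its reference, mirroring how the final put_page() on a freshly allocated hpage reaches the folio free path, which drops the memcg charge taken at allocation time.

#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

/* Toy model (hypothetical, not kernel code): a refcounted "page" whose
 * memcg-style charge is released by the free path, so dropping the last
 * reference uncharges automatically. */
struct toy_page {
	int refcount;
	int charged;	/* 1 while accounted against the "memcg" */
};

static int memcg_usage;	/* stand-in for a memcg page counter */

static struct toy_page *toy_alloc_charged(void)
{
	struct toy_page *p = malloc(sizeof(*p));
	if (!p)
		return NULL;
	p->refcount = 1;
	p->charged = 1;
	memcg_usage++;	/* charge at allocation time */
	return p;
}

static void toy_uncharge(struct toy_page *p)
{
	if (p->charged) {	/* idempotent, so a second call is a no-op */
		p->charged = 0;
		memcg_usage--;
	}
}

static void toy_put(struct toy_page *p)
{
	if (--p->refcount == 0) {
		toy_uncharge(p);	/* free path drops the charge */
		free(p);
	}
}

int main(void)
{
	/* Failure path after the cleanup: just drop the reference; no
	 * explicit uncharge needed, since the charge dies with the page. */
	struct toy_page *hpage = toy_alloc_charged();
	assert(memcg_usage == 1);
	toy_put(hpage);
	assert(memcg_usage == 0);
	printf("charge released by final put: usage=%d\n", memcg_usage);
	return 0;
}

In the same spirit, the mem_cgroup_uncharge() calls removed by this patch were redundant rather than wrong: uncharging before put_page() simply did work the free path would repeat harmlessly once the charge had already been cleared.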