Commit 49e50d27 authored by Johannes Weiner, committed by Linus Torvalds

mm: memcontrol: prepare move_account for removal of private page type counters

When memcg uses the generic vmstat counters, it doesn't need to do
anything at charging and uncharging time.  It does, however, need to
migrate counts when pages move to a different cgroup in move_account.

Prepare the move_account function for the arrival of NR_FILE_PAGES,
NR_ANON_MAPPED, NR_ANON_THPS etc.  by having a branch for files and a
branch for anon, which can then be divided into sub-branches.
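
For illustration, a minimal sketch of the shape this block can take once those counters arrive. The placement of NR_FILE_PAGES and NR_ANON_MAPPED below is an assumption made for the example's sake; the real layout is defined by the follow-up patches. Every transfer is the same two-step pattern, a decrement on the source lruvec paired with an increment on the destination, done under lock_page_memcg():

        lock_page_memcg(page);

        if (!PageAnon(page)) {
                /* assumed: all file pages migrate their NR_FILE_PAGES count */
                __mod_lruvec_state(from_vec, NR_FILE_PAGES, -nr_pages);
                __mod_lruvec_state(to_vec, NR_FILE_PAGES, nr_pages);

                /* sub-branch: only mapped file pages */
                if (page_mapped(page)) {
                        __mod_lruvec_state(from_vec, NR_FILE_MAPPED, -nr_pages);
                        __mod_lruvec_state(to_vec, NR_FILE_MAPPED, nr_pages);
                }
        } else {
                /* assumed: mapped anon pages migrate NR_ANON_MAPPED */
                if (page_mapped(page)) {
                        __mod_lruvec_state(from_vec, NR_ANON_MAPPED, -nr_pages);
                        __mod_lruvec_state(to_vec, NR_ANON_MAPPED, nr_pages);
                }
        }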
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Alex Shi <alex.shi@linux.alibaba.com>
Reviewed-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Balbir Singh <bsingharora@gmail.com>
Link: http://lkml.kernel.org/r/20200508183105.225460-8-hannes@cmpxchg.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 9f762dbe
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -5434,7 +5434,6 @@ static int mem_cgroup_move_account(struct page *page,
         struct pglist_data *pgdat;
         unsigned int nr_pages = compound ? hpage_nr_pages(page) : 1;
         int ret;
-        bool anon;
 
         VM_BUG_ON(from == to);
         VM_BUG_ON_PAGE(PageLRU(page), page);
@@ -5452,25 +5451,27 @@ static int mem_cgroup_move_account(struct page *page,
         if (page->mem_cgroup != from)
                 goto out_unlock;
 
-        anon = PageAnon(page);
-
         pgdat = page_pgdat(page);
         from_vec = mem_cgroup_lruvec(from, pgdat);
         to_vec = mem_cgroup_lruvec(to, pgdat);
 
         lock_page_memcg(page);
 
-        if (!anon && page_mapped(page)) {
-                __mod_lruvec_state(from_vec, NR_FILE_MAPPED, -nr_pages);
-                __mod_lruvec_state(to_vec, NR_FILE_MAPPED, nr_pages);
-        }
-
-        if (!anon && PageDirty(page)) {
-                struct address_space *mapping = page_mapping(page);
-
-                if (mapping_cap_account_dirty(mapping)) {
-                        __mod_lruvec_state(from_vec, NR_FILE_DIRTY, -nr_pages);
-                        __mod_lruvec_state(to_vec, NR_FILE_DIRTY, nr_pages);
+        if (!PageAnon(page)) {
+                if (page_mapped(page)) {
+                        __mod_lruvec_state(from_vec, NR_FILE_MAPPED, -nr_pages);
+                        __mod_lruvec_state(to_vec, NR_FILE_MAPPED, nr_pages);
+                }
+
+                if (PageDirty(page)) {
+                        struct address_space *mapping = page_mapping(page);
+
+                        if (mapping_cap_account_dirty(mapping)) {
+                                __mod_lruvec_state(from_vec, NR_FILE_DIRTY,
+                                                   -nr_pages);
+                                __mod_lruvec_state(to_vec, NR_FILE_DIRTY,
+                                                   nr_pages);
+                        }
                 }
         }
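
Note the design choice this restructuring makes possible: with all file-specific checks gathered under a single !PageAnon(page) branch, the page type is tested exactly once, so the cached anon local is no longer needed. Testing PageAnon() directly is safe here because the page is charged, referenced, and isolated at this point, so its type cannot change during the move.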