Commit 20f05310 authored by Li Zefan, committed by Linus Torvalds

memcg: don't use mem_cgroup_get() when creating a kmemcg cache

Use css_get()/css_put() instead of mem_cgroup_get()/mem_cgroup_put().

There are two things being done in the current code:

First, we acquire a css reference to make sure that the underlying cgroup
does not go away.  That is a short-lived reference, and it is put as
soon as the cache is created.

Second, once the cache is created, we acquire a long-lived per-cache memcg
reference count to guarantee that the memcg will still be alive.

So the current lifecycle is:

  enqueue: css_get
  create : memcg_get, css_put
  destroy: memcg_put

So we only need to get rid of the memcg_get, change the memcg_put to
css_put, and get rid of the now extra css_put.

(This changelog was mostly written by Glauber.)
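
For illustration only, here is a minimal userspace sketch of the resulting lifecycle (enqueue: css_get; create: hand the reference over to the new cache, or css_put when the cache already exists or duplication fails; destroy: css_put).  The toy_css type and the enqueue/create/destroy helpers below are invented for this sketch and are not the kernel API.

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

struct toy_css {
	int refcnt;			/* stands in for the css reference count */
};

static void toy_css_get(struct toy_css *css) { css->refcnt++; }
static void toy_css_put(struct toy_css *css) { assert(--css->refcnt >= 0); }

/* enqueue: pin the css so the cgroup cannot go away while the work is queued */
static void enqueue(struct toy_css *css)
{
	toy_css_get(css);
}

/* create: either transfer the pinned reference to the new cache or drop it */
static bool create(struct toy_css *css, bool already_exists, bool dup_fails)
{
	if (already_exists || dup_fails) {
		toy_css_put(css);	/* nothing took ownership: drop the ref */
		return false;
	}
	return true;			/* the new cache now owns the reference */
}

/* destroy: the cache releases the reference it inherited at creation time */
static void destroy(struct toy_css *css)
{
	toy_css_put(css);
}

int main(void)
{
	struct toy_css css = { .refcnt = 1 };	/* base ref held elsewhere */

	enqueue(&css);
	if (create(&css, false, false))
		destroy(&css);

	printf("refcnt is back to %d\n", css.refcnt);	/* prints 1 */
	return 0;
}

The point of the patch is exactly this ownership transfer: the single css reference taken at enqueue time either becomes the cache's long-lived reference or is dropped on the failure paths, so the separate memcg reference count is no longer needed.
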
Signed-off-by: Li Zefan <lizefan@huawei.com>
Acked-by: Michal Hocko <mhocko@suse.cz>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Glauber Costa <glommer@openvz.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 5347e5ae
@@ -3242,7 +3242,7 @@ void memcg_release_cache(struct kmem_cache *s)
 	list_del(&s->memcg_params->list);
 	mutex_unlock(&memcg->slab_caches_mutex);
-	mem_cgroup_put(memcg);
+	css_put(&memcg->css);
 out:
 	kfree(s->memcg_params);
 }
@@ -3402,16 +3402,18 @@ static struct kmem_cache *memcg_create_kmem_cache(struct mem_cgroup *memcg,
 	mutex_lock(&memcg_cache_mutex);
 	new_cachep = cachep->memcg_params->memcg_caches[idx];
-	if (new_cachep)
+	if (new_cachep) {
+		css_put(&memcg->css);
 		goto out;
+	}
 
 	new_cachep = kmem_cache_dup(memcg, cachep);
 	if (new_cachep == NULL) {
 		new_cachep = cachep;
+		css_put(&memcg->css);
 		goto out;
 	}
 
-	mem_cgroup_get(memcg);
 	atomic_set(&new_cachep->memcg_params->nr_pages , 0);
 
 	cachep->memcg_params->memcg_caches[idx] = new_cachep;
@@ -3499,8 +3501,6 @@ static void memcg_create_cache_work_func(struct work_struct *w)
 	cw = container_of(w, struct create_work, work);
 	memcg_create_kmem_cache(cw->memcg, cw->cachep);
-	/* Drop the reference gotten when we enqueued. */
-	css_put(&cw->memcg->css);
 	kfree(cw);
 }