Commit ebe945c2 authored by Glauber Costa, committed by Linus Torvalds

memcg: add comments clarifying aspects of cache attribute propagation

This patch clarifies two aspects of cache attribute propagation.

First, it documents the expected locking context for the
for_each_memcg_cache_index macro in memcontrol.h.  The usages already in
the codebase are safe.  In mm/slub.c, it is trivially safe because the
lock is acquired right before the loop.  In mm/slab.c, it is less so: the
lock is acquired by an outer function a few steps back in the stack, so a
VM_BUG_ON() is added to make sure it is indeed safe.
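To illustrate the pattern the VM_BUG_ON() enforces, here is a minimal
userspace analogue (hypothetical names, plain C with pthreads rather than
kernel code): the function that walks the caches does not take the lock
itself, it only asserts that some caller higher up the stack already did.

    /* Userspace sketch of "assert the caller holds the lock". */
    #include <assert.h>
    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t cache_mutex = PTHREAD_MUTEX_INITIALIZER;
    static int cache_locked; /* stand-in for mutex_is_locked() */

    static void tune_all_caches(void)
    {
        /* Equivalent of VM_BUG_ON(!mutex_is_locked(&slab_mutex)) */
        assert(cache_locked);
        for (int i = 0; i < 4; i++)
            printf("tuning cache %d\n", i);
    }

    int main(void)
    {
        pthread_mutex_lock(&cache_mutex); /* outer function takes the lock */
        cache_locked = 1;
        tune_all_caches();                /* deep callee only asserts it */
        cache_locked = 0;
        pthread_mutex_unlock(&cache_mutex);
        return 0;
    }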

A comment is also added to explain why we return the value of the parent
cache and ignore the children's when we propagate the attributes.
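That return-value policy can be summarized in a small sketch (hypothetical
names, not the kernel code): the parent's status is authoritative, and the
children are updated best-effort with their errors deliberately dropped,
since a partial failure cannot be rolled back anyway.

    #include <errno.h>

    /* Hypothetical per-cache setter; 0 on success or a negative errno. */
    static int store_attribute(int cache_id, int value)
    {
        (void)value;
        return cache_id >= 0 ? 0 : -EINVAL;
    }

    static int propagate_attribute(int parent, const int *children,
                                   int nr_children, int value)
    {
        int ret = store_attribute(parent, value);

        if (ret < 0)
            return ret; /* parent failed; nothing was propagated */

        for (int i = 0; i < nr_children; i++)
            store_attribute(children[i], value); /* best effort */

        return ret; /* parent succeeded, regardless of the children */
    }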
Signed-off-by: Glauber Costa <glommer@parallels.com>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 92e79349
include/linux/memcontrol.h
@@ -422,6 +422,12 @@ static inline void sock_release_memcg(struct sock *sk)
 extern struct static_key memcg_kmem_enabled_key;
 extern int memcg_limited_groups_array_size;
+
+/*
+ * Helper macro to loop through all memcg-specific caches. Callers must still
+ * check if the cache is valid (it is either valid or NULL).
+ * The slab_mutex must be held when looping through those caches.
+ */
 #define for_each_memcg_cache_index(_idx) \
     for ((_idx) = 0; (_idx) < memcg_limited_groups_array_size; (_idx)++)
...
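For reference, the calling convention the new comment prescribes looks
like the following (condensed from the mm/slub.c hunk further down): take
slab_mutex around the loop and skip NULL slots.

        int i;

        mutex_lock(&slab_mutex);        /* required by the macro */
        for_each_memcg_cache_index(i) {
            struct kmem_cache *c = cache_from_memcg(s, i);

            if (c)                      /* an index may hold a NULL cache */
                attribute->store(c, buf, len);
        }
        mutex_unlock(&slab_mutex);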
mm/slab.c
@@ -4099,6 +4099,7 @@ static int do_tune_cpucache(struct kmem_cache *cachep, int limit,
     if ((ret < 0) || !is_root_cache(cachep))
         return ret;
 
+    VM_BUG_ON(!mutex_is_locked(&slab_mutex));
     for_each_memcg_cache_index(i) {
         c = cache_from_memcg(cachep, i);
         if (c)
...
mm/slub.c
@@ -5108,12 +5108,25 @@ static ssize_t slab_attr_store(struct kobject *kobj,
     if (s->max_attr_size < len)
         s->max_attr_size = len;
 
+    /*
+     * This is a best effort propagation, so this function's return
+     * value will be determined by the parent cache only. This is
+     * basically because not all attributes will have well-defined
+     * semantics for rollbacks - most of the actions will have
+     * permanent effects.
+     *
+     * Returning the error value of any of the children that fail
+     * is not 100% defined, in the sense that users seeing the
+     * error code won't be able to know anything about the state of
+     * the cache.
+     *
+     * Only returning the error code for the parent cache at least
+     * has well-defined semantics. The cache being written to
+     * directly either failed or succeeded, in which case we loop
+     * through the descendants with best-effort propagation.
+     */
     for_each_memcg_cache_index(i) {
         struct kmem_cache *c = cache_from_memcg(s, i);
-        /*
-         * This function's return value is determined by the
-         * parent cache only
-         */
         if (c)
             attribute->store(c, buf, len);
     }
...