Commit d46eb14b authored by Shakeel Butt, committed by Linus Torvalds

fs: fsnotify: account fsnotify metadata to kmemcg

Patch series "Directed kmem charging", v8.

The Linux kernel's memory cgroup allows limiting the memory usage of the
jobs running on the system to provide isolation between them.  All
kernel memory allocated in the context of a job and marked with
__GFP_ACCOUNT is also included in the job's memory usage and is subject
to the job's limit.
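
For context, an allocation opts into this accounting by carrying
__GFP_ACCOUNT, typically via GFP_KERNEL_ACCOUNT (which is simply
GFP_KERNEL | __GFP_ACCOUNT).  A minimal, hypothetical example, charged
to the allocating task's memcg:

    /* buf and size are hypothetical; the allocation is charged to
     * the current task's memcg because of __GFP_ACCOUNT. */
    buf = kmalloc(size, GFP_KERNEL_ACCOUNT);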

Kernel memory can currently only be charged to the memcg of the process
in whose context the memory was allocated.  However, there are cases
where the allocated kernel memory should be charged to a memcg different
from the current process's memcg.  This patch series addresses two such
concrete use-cases: fsnotify and buffer_head.

fsnotify event objects can consume a lot of system memory on large or
unlimited queues if the listener is slow or absent.  The events are
allocated in the context of the event producer, but they should be
charged to the event consumer.  Similarly, buffer_head objects can be
allocated in a memcg different from the memcg of the page for which
they are being allocated.

To solve this, the series introduces a mechanism to charge kernel
memory to a given memcg.  For fsnotify events, the memcg of the
consumer is used for charging; for buffer_head, the memcg of the page
is charged.  For directed charging, the caller uses the scope API
memalloc_[un]use_memcg() to specify the memcg to charge for all
__GFP_ACCOUNT allocations within the scope.
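
As a minimal sketch of the intended usage (target_memcg, event and
alloc_len are hypothetical; the scope functions are the ones added by
this series):

    /* Charge the __GFP_ACCOUNT allocation below to target_memcg
     * instead of the current task's memcg.  Not nesting safe. */
    memalloc_use_memcg(target_memcg);
    event = kmalloc(alloc_len, GFP_KERNEL_ACCOUNT);
    memalloc_unuse_memcg();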

This patch (of 2):

Events generated for huge or unlimited queues can consume a lot of
memory if the listener is slow or absent, which can cause system-level
memory pressure or OOMs.  It is therefore better to account the
fsnotify kmem caches to the memcg of the listener.

However, the listener can be in a different memcg than the producer,
and these allocations happen in the context of the event producer.
This patch introduces a remote memcg charging API that the producer can
use to charge the allocations to the memcg of the listener.

There are seven fsnotify kmem caches.  Among them, allocations from
dnotify_struct_cache, dnotify_mark_cache, fanotify_mark_cache and
inotify_inode_mark_cachep happen in the context of a syscall from the
listener, so SLAB_ACCOUNT is enough for these caches.
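
For illustration, adding the flag at cache creation is all that is
needed; every object allocated from such a cache is then charged to the
allocating task's memcg.  Taken from the dnotify change in the diff
below:

    dnotify_struct_cache = KMEM_CACHE(dnotify_struct,
                                      SLAB_PANIC|SLAB_ACCOUNT);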

Objects from fsnotify_mark_connector_cachep are not accounted: they are
small compared to notification marks or events, and it is unclear whom
to account the connector to, since it is shared by all marks attached
to the inode.

Allocations from the event caches happen in the context of the event
producer, so for these caches we need to remote-charge the allocations
to the listener's memcg.  To that end, the memcg reference is saved in
the fsnotify_group structure of the listener.
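
Concretely, the listener's memcg is pinned when the group is created
and released when the group is destroyed; a condensed sketch of the
pattern used in this patch (context noted in the comments):

    /* fanotify_init() / inotify_new_group(): listener context,
     * take a reference on the listener's memcg. */
    group->memcg = get_mem_cgroup_from_mm(current->mm);

    /* fsnotify_final_destroy_group(): drop that reference. */
    mem_cgroup_put(group->memcg);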

This patch also reorders the members of fsnotify_group, filling
existing holes, so that the structure stays the same size (at least for
64-bit builds) despite the additional member.

[shakeelb@google.com: use GFP_KERNEL_ACCOUNT rather than open-coding it]
  Link: http://lkml.kernel.org/r/20180702215439.211597-1-shakeelb@google.com
Link: http://lkml.kernel.org/r/20180627191250.209150-2-shakeelb@google.com
Signed-off-by: Shakeel Butt <shakeelb@google.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Jan Kara <jack@suse.cz>
Cc: Amir Goldstein <amir73il@gmail.com>
Cc: Greg Thelen <gthelen@google.com>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent dc0b5864
--- a/fs/notify/dnotify/dnotify.c
+++ b/fs/notify/dnotify/dnotify.c
@@ -384,8 +384,9 @@ int fcntl_dirnotify(int fd, struct file *filp, unsigned long arg)
 
 static int __init dnotify_init(void)
 {
-	dnotify_struct_cache = KMEM_CACHE(dnotify_struct, SLAB_PANIC);
-	dnotify_mark_cache = KMEM_CACHE(dnotify_mark, SLAB_PANIC);
+	dnotify_struct_cache = KMEM_CACHE(dnotify_struct,
+					  SLAB_PANIC|SLAB_ACCOUNT);
+	dnotify_mark_cache = KMEM_CACHE(dnotify_mark, SLAB_PANIC|SLAB_ACCOUNT);
 
 	dnotify_group = fsnotify_alloc_group(&dnotify_fsnotify_ops);
 	if (IS_ERR(dnotify_group))
--- a/fs/notify/fanotify/fanotify.c
+++ b/fs/notify/fanotify/fanotify.c
@@ -11,6 +11,7 @@
 #include <linux/types.h>
 #include <linux/wait.h>
 #include <linux/audit.h>
+#include <linux/sched/mm.h>
 
 #include "fanotify.h"
@@ -140,8 +141,8 @@ struct fanotify_event_info *fanotify_alloc_event(struct fsnotify_group *group,
 						 struct inode *inode, u32 mask,
 						 const struct path *path)
 {
-	struct fanotify_event_info *event;
-	gfp_t gfp = GFP_KERNEL;
+	struct fanotify_event_info *event = NULL;
+	gfp_t gfp = GFP_KERNEL_ACCOUNT;
 
 	/*
 	 * For queues with unlimited length lost events are not expected and
@@ -151,19 +152,22 @@ struct fanotify_event_info *fanotify_alloc_event(struct fsnotify_group *group,
 	if (group->max_events == UINT_MAX)
 		gfp |= __GFP_NOFAIL;
 
+	/* Whoever is interested in the event, pays for the allocation. */
+	memalloc_use_memcg(group->memcg);
+
 	if (fanotify_is_perm_event(mask)) {
 		struct fanotify_perm_event_info *pevent;
 
 		pevent = kmem_cache_alloc(fanotify_perm_event_cachep, gfp);
 		if (!pevent)
-			return NULL;
+			goto out;
 		event = &pevent->fae;
 		pevent->response = 0;
 		goto init;
 	}
 	event = kmem_cache_alloc(fanotify_event_cachep, gfp);
 	if (!event)
-		return NULL;
+		goto out;
 init: __maybe_unused
 	fsnotify_init_event(&event->fse, inode, mask);
 	event->tgid = get_pid(task_tgid(current));
@@ -174,6 +178,8 @@ init: __maybe_unused
 		event->path.mnt = NULL;
 		event->path.dentry = NULL;
 	}
+out:
+	memalloc_unuse_memcg();
 	return event;
 }
--- a/fs/notify/fanotify/fanotify_user.c
+++ b/fs/notify/fanotify/fanotify_user.c
@@ -16,6 +16,7 @@
 #include <linux/uaccess.h>
 #include <linux/compat.h>
 #include <linux/sched/signal.h>
+#include <linux/memcontrol.h>
 
 #include <asm/ioctls.h>
@@ -756,6 +757,7 @@ SYSCALL_DEFINE2(fanotify_init, unsigned int, flags, unsigned int, event_f_flags)
 
 	group->fanotify_data.user = user;
 	atomic_inc(&user->fanotify_listeners);
+	group->memcg = get_mem_cgroup_from_mm(current->mm);
 
 	oevent = fanotify_alloc_event(group, NULL, FS_Q_OVERFLOW, NULL);
 	if (unlikely(!oevent)) {
@@ -957,7 +959,8 @@ COMPAT_SYSCALL_DEFINE6(fanotify_mark,
  */
 static int __init fanotify_user_setup(void)
 {
-	fanotify_mark_cache = KMEM_CACHE(fsnotify_mark, SLAB_PANIC);
+	fanotify_mark_cache = KMEM_CACHE(fsnotify_mark,
+					 SLAB_PANIC|SLAB_ACCOUNT);
 	fanotify_event_cachep = KMEM_CACHE(fanotify_event_info, SLAB_PANIC);
 	if (IS_ENABLED(CONFIG_FANOTIFY_ACCESS_PERMISSIONS)) {
 		fanotify_perm_event_cachep =
--- a/fs/notify/group.c
+++ b/fs/notify/group.c
@@ -22,6 +22,7 @@
 #include <linux/srcu.h>
 #include <linux/rculist.h>
 #include <linux/wait.h>
+#include <linux/memcontrol.h>
 
 #include <linux/fsnotify_backend.h>
 #include "fsnotify.h"
@@ -36,6 +37,8 @@ static void fsnotify_final_destroy_group(struct fsnotify_group *group)
 	if (group->ops->free_group_priv)
 		group->ops->free_group_priv(group);
 
+	mem_cgroup_put(group->memcg);
+
 	kfree(group);
 }
--- a/fs/notify/inotify/inotify_fsnotify.c
+++ b/fs/notify/inotify/inotify_fsnotify.c
@@ -31,6 +31,7 @@
 #include <linux/types.h>
 #include <linux/sched.h>
 #include <linux/sched/user.h>
+#include <linux/sched/mm.h>
 
 #include "inotify.h"
@@ -98,7 +99,11 @@ int inotify_handle_event(struct fsnotify_group *group,
 	i_mark = container_of(inode_mark, struct inotify_inode_mark,
 			      fsn_mark);
 
-	event = kmalloc(alloc_len, GFP_KERNEL);
+	/* Whoever is interested in the event, pays for the allocation. */
+	memalloc_use_memcg(group->memcg);
+	event = kmalloc(alloc_len, GFP_KERNEL_ACCOUNT);
+	memalloc_unuse_memcg();
+
 	if (unlikely(!event)) {
 		/*
 		 * Treat lost event due to ENOMEM the same way as queue
--- a/fs/notify/inotify/inotify_user.c
+++ b/fs/notify/inotify/inotify_user.c
@@ -38,6 +38,7 @@
 #include <linux/uaccess.h>
 #include <linux/poll.h>
 #include <linux/wait.h>
+#include <linux/memcontrol.h>
 
 #include "inotify.h"
 #include "../fdinfo.h"
@@ -636,6 +637,7 @@ static struct fsnotify_group *inotify_new_group(unsigned int max_events)
 	oevent->name_len = 0;
 
 	group->max_events = max_events;
+	group->memcg = get_mem_cgroup_from_mm(current->mm);
 
 	spin_lock_init(&group->inotify_data.idr_lock);
 	idr_init(&group->inotify_data.idr);
@@ -808,7 +810,8 @@ static int __init inotify_user_setup(void)
 
 	BUG_ON(hweight32(ALL_INOTIFY_BITS) != 21);
 
-	inotify_inode_mark_cachep = KMEM_CACHE(inotify_inode_mark, SLAB_PANIC);
+	inotify_inode_mark_cachep = KMEM_CACHE(inotify_inode_mark,
+					       SLAB_PANIC|SLAB_ACCOUNT);
 
 	inotify_max_queued_events = 16384;
 	init_user_ns.ucount_max[UCOUNT_INOTIFY_INSTANCES] = 128;
--- a/include/linux/fsnotify_backend.h
+++ b/include/linux/fsnotify_backend.h
@@ -84,6 +84,8 @@ struct fsnotify_event_private_data;
 struct fsnotify_fname;
 struct fsnotify_iter_info;
 
+struct mem_cgroup;
+
 /*
  * Each group much define these ops.  The fsnotify infrastructure will call
  * these operations for each relevant group.
@@ -127,6 +129,8 @@ struct fsnotify_event {
  *		everything will be cleaned up.
  */
 struct fsnotify_group {
+	const struct fsnotify_ops *ops;	/* how this group handles things */
+
 	/*
 	 * How the refcnt is used is up to each group.  When the refcnt hits 0
 	 * fsnotify will clean up all of the resources associated with this group.
@@ -137,8 +141,6 @@ struct fsnotify_group {
 	 */
 	refcount_t refcnt;		/* things with interest in this group */
 
-	const struct fsnotify_ops *ops;	/* how this group handles things */
-
 	/* needed to send notification to userspace */
 	spinlock_t notification_lock;	/* protect the notification_list */
 	struct list_head notification_list;	/* list of event_holder this group needs to send to userspace */
@@ -160,6 +162,8 @@ struct fsnotify_group {
 	atomic_t num_marks;		/* 1 for each mark and 1 for not being
 					 * past the point of no return when freeing
 					 * a group */
+	atomic_t user_waits;		/* Number of tasks waiting for user
+					 * response */
 	struct list_head marks_list;	/* all inode marks for this group */
 
 	struct fasync_struct *fsn_fa;    /* async notification */
@@ -167,8 +171,8 @@ struct fsnotify_group {
 	struct fsnotify_event *overflow_event;	/* Event we queue when the
 						 * notification list is too
 						 * full */
-	atomic_t user_waits;		/* Number of tasks waiting for user
-					 * response */
+
+	struct mem_cgroup *memcg;	/* memcg to charge allocations */
 
 	/* groups can define private fields here or use the void *private */
 	union {
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -373,6 +373,8 @@ struct lruvec *mem_cgroup_page_lruvec(struct page *, struct pglist_data *);
 bool task_in_mem_cgroup(struct task_struct *task, struct mem_cgroup *memcg);
 struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p);
 
+struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm);
+
 static inline
 struct mem_cgroup *mem_cgroup_from_css(struct cgroup_subsys_state *css){
 	return css ? container_of(css, struct mem_cgroup, css) : NULL;
@@ -380,6 +382,7 @@ struct mem_cgroup *mem_cgroup_from_css(struct cgroup_subsys_state *css){
 
 static inline void mem_cgroup_put(struct mem_cgroup *memcg)
 {
-	css_put(&memcg->css);
+	if (memcg)
+		css_put(&memcg->css);
 }
@@ -855,6 +858,11 @@ static inline bool task_in_mem_cgroup(struct task_struct *task,
 	return true;
 }
 
+static inline struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm)
+{
+	return NULL;
+}
+
 static inline void mem_cgroup_put(struct mem_cgroup *memcg)
 {
 }
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1152,6 +1152,9 @@ struct task_struct {
 
 	/* Number of pages to reclaim on returning to userland: */
 	unsigned int			memcg_nr_pages_over_high;
+
+	/* Used by memcontrol for targeted memcg charge: */
+	struct mem_cgroup		*active_memcg;
 #endif
 
 #ifdef CONFIG_BLK_CGROUP
--- a/include/linux/sched/mm.h
+++ b/include/linux/sched/mm.h
@@ -248,6 +248,43 @@ static inline void memalloc_noreclaim_restore(unsigned int flags)
 	current->flags = (current->flags & ~PF_MEMALLOC) | flags;
 }
 
+#ifdef CONFIG_MEMCG
+/**
+ * memalloc_use_memcg - Starts the remote memcg charging scope.
+ * @memcg: memcg to charge.
+ *
+ * This function marks the beginning of the remote memcg charging scope. All the
+ * __GFP_ACCOUNT allocations till the end of the scope will be charged to the
+ * given memcg.
+ *
+ * NOTE: This function is not nesting safe.
+ */
+static inline void memalloc_use_memcg(struct mem_cgroup *memcg)
+{
+	WARN_ON_ONCE(current->active_memcg);
+	current->active_memcg = memcg;
+}
+
+/**
+ * memalloc_unuse_memcg - Ends the remote memcg charging scope.
+ *
+ * This function marks the end of the remote memcg charging scope started by
+ * memalloc_use_memcg().
+ */
+static inline void memalloc_unuse_memcg(void)
+{
+	current->active_memcg = NULL;
+}
+#else
+static inline void memalloc_use_memcg(struct mem_cgroup *memcg)
+{
+}
+
+static inline void memalloc_unuse_memcg(void)
+{
+}
+#endif
+
 #ifdef CONFIG_MEMBARRIER
 enum {
 	MEMBARRIER_STATE_PRIVATE_EXPEDITED_READY		= (1U << 0),
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -871,6 +871,9 @@ static struct task_struct *dup_task_struct(struct task_struct *orig, int node)
 	tsk->use_memdelay = 0;
 #endif
 
+#ifdef CONFIG_MEMCG
+	tsk->active_memcg = NULL;
+#endif
 	return tsk;
 
 free_stack:
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -678,9 +678,20 @@ struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p)
 }
 EXPORT_SYMBOL(mem_cgroup_from_task);
 
-static struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm)
+/**
+ * get_mem_cgroup_from_mm: Obtain a reference on given mm_struct's memcg.
+ * @mm: mm from which memcg should be extracted. It can be NULL.
+ *
+ * Obtain a reference on mm->memcg and returns it if successful. Otherwise
+ * root_mem_cgroup is returned. However if mem_cgroup is disabled, NULL is
+ * returned.
+ */
+struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm)
 {
-	struct mem_cgroup *memcg = NULL;
+	struct mem_cgroup *memcg;
+
+	if (mem_cgroup_disabled())
+		return NULL;
 
 	rcu_read_lock();
 	do {
@@ -700,6 +711,24 @@ static struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm)
 	rcu_read_unlock();
 	return memcg;
 }
+EXPORT_SYMBOL(get_mem_cgroup_from_mm);
+
+/**
+ * If current->active_memcg is non-NULL, do not fallback to current->mm->memcg.
+ */
+static __always_inline struct mem_cgroup *get_mem_cgroup_from_current(void)
+{
+	if (unlikely(current->active_memcg)) {
+		struct mem_cgroup *memcg = root_mem_cgroup;
+
+		rcu_read_lock();
+		if (css_tryget_online(&current->active_memcg->css))
+			memcg = current->active_memcg;
+		rcu_read_unlock();
+		return memcg;
+	}
+	return get_mem_cgroup_from_mm(current->mm);
+}
 
 /**
  * mem_cgroup_iter - iterate over memory cgroup hierarchy
@@ -2261,7 +2290,7 @@ struct kmem_cache *memcg_kmem_get_cache(struct kmem_cache *cachep)
 	if (current->memcg_kmem_skip_account)
 		return cachep;
 
-	memcg = get_mem_cgroup_from_mm(current->mm);
+	memcg = get_mem_cgroup_from_current();
 	kmemcg_id = READ_ONCE(memcg->kmemcg_id);
 	if (kmemcg_id < 0)
 		goto out;
@@ -2345,7 +2374,7 @@ int memcg_kmem_charge(struct page *page, gfp_t gfp, int order)
 	if (memcg_kmem_bypass())
 		return 0;
 
-	memcg = get_mem_cgroup_from_mm(current->mm);
+	memcg = get_mem_cgroup_from_current();
 	if (!mem_cgroup_is_root(memcg)) {
 		ret = memcg_kmem_charge_memcg(page, gfp, order, memcg);
 		if (!ret)