Commit a4ebf1b6 authored by Vasily Averin, committed by Linus Torvalds

memcg: prohibit unconditional exceeding the limit of dying tasks

Memory cgroup charging allows killed or exiting tasks to exceed the hard
limit.  It is assumed that the amount of the memory charged by those
tasks is bound and most of the memory will get released while the task
is exiting.  This resembles the heuristic for the global OOM situation,
where dying tasks get access to memory reserves.  There is no global
memory shortage at the memcg level, so the memcg heuristic is more
relaxed.
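Concretely, the heuristic in question is this bypass at the top of the
try_charge_memcg() slow path, removed by the -2613 hunk below:

	/*
	 * Unlike in global OOM situations, memcg is not in a physical
	 * memory shortage. Allow dying and OOM-killed tasks to
	 * bypass the last charges so that they can exit quickly and
	 * free their memory.
	 */
	if (unlikely(should_force_charge()))
		goto force;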

The above assumption is overly optimistic though.  E.g.  vmalloc can
scale to really large requests and the heuristic would allow that.  We
used to have an early break in the vmalloc allocator for killed tasks
but this has been reverted by commit b8c8a338 ("Revert "vmalloc:
back off when the current task is killed"").  There are likely other
similar code paths which do not check for fatal signals in an
allocation&charge loop.  Also, some kernel objects charged to a memcg
are not bound to a process lifetime.
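For illustration, a minimal sketch (hypothetical, not the actual
vmalloc code) of such an allocation&charge loop: nothing in it checks
fatal_signal_pending(), so with the old bypass in place every iteration
issued by a dying task would be force-charged and the memcg limit
exceeded without bound:

	/* hypothetical allocation&charge loop with no fatal-signal check */
	for (i = 0; i < nr_pages; i++) {
		pages[i] = alloc_page(gfp_mask);	/* each page is charged to the memcg */
		if (!pages[i])
			goto fail;	/* never taken while charges are bypassed */
	}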

It has been observed that it is not really hard to trigger these
bypasses and cause a global OOM situation.

One potential way to address these runaways would be to limit the amount
of excess (similar to the global OOM with limited oom reserves).  This
is certainly possible, but it is not clear how much excess is tolerable
while still protecting from global OOMs, as that would have to consider
the overall memcg configuration.

This patch addresses the problem by removing the heuristic altogether.
Bypass is only allowed for requests which either cannot fail or where
failure is not desirable while excess should still be limited (e.g.
atomic requests).  Implementation-wise, a killed or dying task fails to
charge once it has passed the OOM killer stage.  That should give all
forms of reclaim a chance to restore the limit before the failure
(ENOMEM) and tell the caller to back off.
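Condensed from the hunks below, the retry path of try_charge_memcg()
now behaves as follows:

	/* Avoid endless loop for tasks bypassed by the oom killer */
	if (passed_oom && task_is_dying())
		goto nomem;	/* fail the charge with ENOMEM */

	oom_status = mem_cgroup_oom(mem_over_limit, gfp_mask,
				    get_order(nr_pages * PAGE_SIZE));
	if (oom_status == OOM_SUCCESS) {
		/* the OOM killer made progress, so retry the charge */
		passed_oom = true;
		nr_retries = MAX_RECLAIM_RETRIES;
		goto retry;
	}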

In addition, this patch renames the should_force_charge() helper to
task_is_dying() because its use is no longer associated with forced
charging.

This patch depends on pagefault_out_of_memory() not triggering
out_of_memory(), because otherwise a memcg charge failure could unwind
to VM_FAULT_OOM and trigger the global OOM killer.

Link: https://lkml.kernel.org/r/8f5cebbb-06da-4902-91f0-6566fc4b4203@virtuozzo.com
Signed-off-by: Vasily Averin <vvs@virtuozzo.com>
Suggested-by: Michal Hocko <mhocko@suse.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Uladzislau Rezki <urezki@gmail.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 60e2793d
mm/memcontrol.c
@@ -234,7 +234,7 @@ enum res_type {
 	     iter != NULL;				\
 	     iter = mem_cgroup_iter(NULL, iter, NULL))
 
-static inline bool should_force_charge(void)
+static inline bool task_is_dying(void)
 {
 	return tsk_is_oom_victim(current) || fatal_signal_pending(current) ||
 		(current->flags & PF_EXITING);
@@ -1624,7 +1624,7 @@ static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
 	 * A few threads which were not waiting at mutex_lock_killable() can
 	 * fail to bail out. Therefore, check again after holding oom_lock.
 	 */
-	ret = should_force_charge() || out_of_memory(&oc);
+	ret = task_is_dying() || out_of_memory(&oc);
 
 unlock:
 	mutex_unlock(&oom_lock);
@@ -2579,6 +2579,7 @@ static int try_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp_mask,
 	struct page_counter *counter;
 	enum oom_status oom_status;
 	unsigned long nr_reclaimed;
+	bool passed_oom = false;
 	bool may_swap = true;
 	bool drained = false;
 	unsigned long pflags;
@@ -2613,15 +2614,6 @@ static int try_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp_mask,
 	if (gfp_mask & __GFP_ATOMIC)
 		goto force;
 
-	/*
-	 * Unlike in global OOM situations, memcg is not in a physical
-	 * memory shortage. Allow dying and OOM-killed tasks to
-	 * bypass the last charges so that they can exit quickly and
-	 * free their memory.
-	 */
-	if (unlikely(should_force_charge()))
-		goto force;
-
 	/*
 	 * Prevent unbounded recursion when reclaim operations need to
 	 * allocate memory. This might exceed the limits temporarily,
@@ -2679,8 +2671,9 @@ static int try_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp_mask,
 	if (gfp_mask & __GFP_RETRY_MAYFAIL)
 		goto nomem;
 
-	if (fatal_signal_pending(current))
-		goto force;
+	/* Avoid endless loop for tasks bypassed by the oom killer */
+	if (passed_oom && task_is_dying())
+		goto nomem;
 
 	/*
 	 * keep retrying as long as the memcg oom killer is able to make
@@ -2689,14 +2682,10 @@ static int try_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp_mask,
 	 */
 	oom_status = mem_cgroup_oom(mem_over_limit, gfp_mask,
 		       get_order(nr_pages * PAGE_SIZE));
-	switch (oom_status) {
-	case OOM_SUCCESS:
+	if (oom_status == OOM_SUCCESS) {
+		passed_oom = true;
 		nr_retries = MAX_RECLAIM_RETRIES;
 		goto retry;
-	case OOM_FAILED:
-		goto force;
-	default:
-		goto nomem;
 	}
 nomem:
 	if (!(gfp_mask & __GFP_NOFAIL))