Commit a373966d authored by Michal Hocko, committed by Linus Torvalds

mm, oom: hide mm which is shared with kthread or global init

The only case where the oom_reaper is not triggered for the oom victim
is when the victim shares its memory with a kernel thread (aka use_mm) or
with the global init.  After "mm, oom: skip vforked tasks from being
selected" the victim cannot be a vforked task of the global init, so we
are left with clone(CLONE_VM) (without CLONE_SIGHAND).  use_mm() users
are quite rare as well.
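
For illustration only (not part of this patch; child_fn and the stack size
are made up), a minimal userspace sketch of how two separate processes can
end up pinning the same mm via clone(CLONE_VM) without CLONE_SIGHAND.  In
the case described above, one of the sharers would be the global init, or
a kernel thread via use_mm(), neither of which the oom killer kills:

/*
 * Illustration only: the child created below shares this process's mm
 * (CLONE_VM) but not its signal handlers (no CLONE_SIGHAND), so the two
 * remain separate processes pinning the same address space.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

static int child_fn(void *arg)
{
	sleep(1);	/* pretend to use the shared address space */
	return 0;
}

int main(void)
{
	const size_t stack_size = 1024 * 1024;
	char *stack = malloc(stack_size);
	int pid;

	if (!stack)
		return 1;

	/* the stack grows down on most architectures, so pass its top */
	pid = clone(child_fn, stack + stack_size, CLONE_VM | SIGCHLD, NULL);
	if (pid < 0) {
		perror("clone");
		return 1;
	}

	printf("child %d shares this process's mm\n", pid);
	waitpid(pid, NULL, 0);
	free(stack);
	return 0;
}

Killing only one of the two tasks leaves the mm alive, which is what keeps
the oom_reaper away when the surviving sharer is init or a kernel thread.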

In order to help forward progress for the OOM killer, make sure that
this really rare case will not get in the way: hide the mm from the oom
killer by setting the MMF_OOM_REAPED flag on it.  oom_scan_process_thread
will then treat any TIF_MEMDIE task whose mm has MMF_OOM_REAPED set as
OOM_SCAN_CONTINUE rather than OOM_SCAN_ABORT, so such a victim no longer
blocks the selection of another victim.
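
The kernel-thread case mentioned earlier comes from the use_mm() pattern.
A simplified, hypothetical sketch (pinned_worker and its setup are made
up) of how a kthread pins a user mm; because such a thread is never
oom-killed, the reaper cannot free the address space, which is why the mm
is now flagged with MMF_OOM_REAPED instead:

/*
 * Hypothetical sketch, not part of this patch: a kernel thread adopts a
 * user process's mm to access its address space on the process's behalf.
 * It would typically be started via kthread_run(pinned_worker, mm, ...).
 */
#include <linux/kthread.h>
#include <linux/mmu_context.h>	/* use_mm()/unuse_mm() */

static int pinned_worker(void *data)
{
	struct mm_struct *mm = data;	/* mm of some user process */

	use_mm(mm);	/* switch to the user mm; caller holds a reference */
	/* ... work on the user address space ... */
	unuse_mm(mm);	/* drop it again */
	return 0;
}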

After this patch we should guarantee forward progress for the OOM killer
even when the selected victim is sharing memory with a kernel thread or
the global init, as long as the victim's mm is still alive.

Link: http://lkml.kernel.org/r/1466426628-15074-11-git-send-email-mhocko@kernel.org
Signed-off-by: Michal Hocko <mhocko@suse.com>
Acked-by: Oleg Nesterov <oleg@redhat.com>
Cc: Vladimir Davydov <vdavydov@virtuozzo.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 11a410d5
@@ -283,10 +283,22 @@ enum oom_scan_t oom_scan_process_thread(struct oom_control *oc,
 	/*
 	 * This task already has access to memory reserves and is being killed.
-	 * Don't allow any other task to have access to the reserves.
+	 * Don't allow any other task to have access to the reserves unless
+	 * the task has MMF_OOM_REAPED because chances that it would release
+	 * any memory is quite low.
 	 */
-	if (!is_sysrq_oom(oc) && atomic_read(&task->signal->oom_victims))
-		return OOM_SCAN_ABORT;
+	if (!is_sysrq_oom(oc) && atomic_read(&task->signal->oom_victims)) {
+		struct task_struct *p = find_lock_task_mm(task);
+		enum oom_scan_t ret = OOM_SCAN_ABORT;
+
+		if (p) {
+			if (test_bit(MMF_OOM_REAPED, &p->mm->flags))
+				ret = OOM_SCAN_CONTINUE;
+			task_unlock(p);
+		}
+
+		return ret;
+	}
 
 	/*
 	 * If task is allocating a lot of memory and has been marked to be
@@ -913,9 +925,14 @@ void oom_kill_process(struct oom_control *oc, struct task_struct *p,
 			/*
 			 * We cannot use oom_reaper for the mm shared by this
 			 * process because it wouldn't get killed and so the
-			 * memory might be still used.
+			 * memory might be still used. Hide the mm from the oom
+			 * killer to guarantee OOM forward progress.
 			 */
 			can_oom_reap = false;
+			set_bit(MMF_OOM_REAPED, &mm->flags);
+			pr_info("oom killer %d (%s) has mm pinned by %d (%s)\n",
+					task_pid_nr(victim), victim->comm,
+					task_pid_nr(p), p->comm);
 			continue;
 		}
 		do_send_sig_info(SIGKILL, SEND_SIG_FORCED, p, true);