Commit 18f649ef authored by Oleg Nesterov, committed by Ingo Molnar

sched/autogroup: Fix autogroup_move_group() to never skip sched_move_task()

The PF_EXITING check in task_wants_autogroup() is no longer needed. Remove
it, but see the next patch.

However, the comment is correct in that autogroup_move_group() must always
change task_group() for every thread, so the sysctl_sched_autogroup_enabled
check is very wrong; we can race with cgroups, and even sys_setsid() is not
safe, because a task running with task_group() == ag->tg must participate
in refcounting:

	// headers needed to build the reproducer as a standalone program
	#include <assert.h>
	#include <fcntl.h>
	#include <signal.h>
	#include <sys/wait.h>
	#include <unistd.h>

	int main(void)
	{
		int sctl = open("/proc/sys/kernel/sched_autogroup_enabled", O_WRONLY);

		assert(sctl > 0);
		if (fork()) {
			wait(NULL); // destroy the child's ag/tg
			pause();
		}

		assert(pwrite(sctl, "1\n", 2, 0) == 2);
		assert(setsid() > 0);
		if (fork())
			pause();

		kill(getppid(), SIGKILL);
		sleep(1);

		// The child has gone, the grandchild runs with kref == 1
		assert(pwrite(sctl, "0\n", 2, 0) == 2);
		assert(setsid() > 0);

		// runs with the freed ag/tg
		for (;;)
			sleep(1);

		return 0;
	}

This program crashes the kernel. It doesn't really need the sleep(1) calls;
it doesn't matter whether autogroup_move_group() actually frees the
task_group or this happens later.
Reported-by: Vern Lovejoy <vlovejoy@redhat.com>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: hartsjc@redhat.com
Cc: vbendel@redhat.com
Link: http://lkml.kernel.org/r/20161114184609.GA15965@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent 3b404a51
@@ -111,14 +111,11 @@ bool task_wants_autogroup(struct task_struct *p, struct task_group *tg)
 {
 	if (tg != &root_task_group)
 		return false;
-
 	/*
-	 * We can only assume the task group can't go away on us if
-	 * autogroup_move_group() can see us on ->thread_group list.
+	 * If we race with autogroup_move_group() the caller can use the old
+	 * value of signal->autogroup but in this case sched_move_task() will
+	 * be called again before autogroup_kref_put().
 	 */
-	if (p->flags & PF_EXITING)
-		return false;
-
 	return true;
 }
 
@@ -138,13 +135,17 @@ autogroup_move_group(struct task_struct *p, struct autogroup *ag)
 	}
 
 	p->signal->autogroup = autogroup_kref_get(ag);
-
-	if (!READ_ONCE(sysctl_sched_autogroup_enabled))
-		goto out;
-
+	/*
+	 * We can't avoid sched_move_task() after we changed signal->autogroup,
+	 * this process can already run with task_group() == prev->tg or we can
+	 * race with cgroup code which can read autogroup = prev under rq->lock.
+	 * In the latter case for_each_thread() can not miss a migrating thread,
+	 * cpu_cgroup_attach() must not be possible after cgroup_exit() and it
+	 * can't be removed from thread list, we hold ->siglock.
+	 */
 	for_each_thread(p, t)
 		sched_move_task(t);
-out:
+
 	unlock_task_sighand(p, &flags);
 	autogroup_kref_put(prev);
 }
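For readability, the patched autogroup_move_group() ends up looking roughly
like the sketch below. The lines outside the hunk context (the locals,
BUG_ON(!lock_task_sighand()), and the early return when prev == ag) are
reconstructed from the kernel source of that era, so treat this as an
approximation of kernel/sched/auto_group.c rather than a verbatim copy:

static void
autogroup_move_group(struct task_struct *p, struct autogroup *ag)
{
	struct autogroup *prev;
	struct task_struct *t;
	unsigned long flags;

	BUG_ON(!lock_task_sighand(p, &flags));

	prev = p->signal->autogroup;
	if (prev == ag) {
		unlock_task_sighand(p, &flags);
		return;
	}

	p->signal->autogroup = autogroup_kref_get(ag);
	/*
	 * Unconditionally move every thread: a thread may already be running
	 * with task_group() == prev->tg, and the cgroup code may read the old
	 * autogroup under rq->lock, so skipping this step (as the removed
	 * sysctl check allowed) would leave threads on a task_group that
	 * autogroup_kref_put(prev) below can free.
	 */
	for_each_thread(p, t)
		sched_move_task(t);

	unlock_task_sighand(p, &flags);
	autogroup_kref_put(prev);
}

The key property is that once signal->autogroup has been updated,
sched_move_task() runs for every thread before autogroup_kref_put(prev),
so no thread is left running on a task_group that the put may free.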