Commit 1445c08d authored by Oleg Nesterov, committed by Ingo Molnar

sched: move_task_off_dead_cpu(): Take rq->lock around select_fallback_rq()

move_task_off_dead_cpu()->select_fallback_rq() reads/updates ->cpus_allowed
locklessly, so we can race with set_cpus_allowed() running in parallel.
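Roughly, the problematic interleaving looks like this (call chain condensed;
set_cpus_allowed_ptr() stands in for any affinity update that takes the rq lock):

	CPU-hotplug path (no rq->lock held)      affinity update in parallel
	-----------------------------------      ---------------------------
	move_task_off_dead_cpu()
	  select_fallback_rq()                   set_cpus_allowed_ptr()
	    reads p->cpus_allowed                  task_rq_lock()
	    may rewrite p->cpus_allowed              updates p->cpus_allowed
	                                           task_rq_unlock()
	  dest_cpu picked from a stale or
	  half-updated mask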

Change it to take rq->lock around select_fallback_rq(). Note that it is not
trivial to move this spin_lock() into select_fallback_rq(): we must recheck
that the task was not migrated after we take the lock, and the other callers
do not need this lock.

To avoid the races with other callers of select_fallback_rq() which rely on
TASK_WAKING, we also check p->state != TASK_WAKING and do nothing otherwise.
The owner of TASK_WAKING must update ->cpus_allowed and choose the correct
CPU anyway, and the subsequent __migrate_task() is just meaningless because
p->se.on_rq must be false.
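
Condensed from the diff below, the locked section becomes:

	/* recheck under rq->lock: skip if already migrated or TASK_WAKING */
	raw_spin_lock(&rq->lock);
	needs_cpu = (task_cpu(p) == dead_cpu) && (p->state != TASK_WAKING);
	if (needs_cpu)
		dest_cpu = select_fallback_rq(dead_cpu, p);
	raw_spin_unlock(&rq->lock);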

Alternatively, we could change select_task_rq() to take rq->lock right
after it calls sched_class->select_task_rq(), but this looks a bit ugly.

Also, change it to not assume irqs are disabled and absorb __migrate_task_irq().
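With __migrate_task_irq() gone, the irq handling simply brackets the retry
step (again condensed from the diff below):

	local_irq_save(flags);
	/* ... rq->lock section shown above ... */
	if (needs_cpu)
		needs_cpu = !__migrate_task(p, dead_cpu, dest_cpu);
	local_irq_restore(flags);
	if (unlikely(needs_cpu))
		goto again;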
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20100315091010.GA9131@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
parent 897f0b3c
@@ -5448,29 +5448,29 @@ static int migration_thread(void *data)
 }
 
 #ifdef CONFIG_HOTPLUG_CPU
-
-static int __migrate_task_irq(struct task_struct *p, int src_cpu, int dest_cpu)
-{
-	int ret;
-
-	local_irq_disable();
-	ret = __migrate_task(p, src_cpu, dest_cpu);
-	local_irq_enable();
-
-	return ret;
-}
-
 /*
  * Figure out where task on dead CPU should go, use force if necessary.
  */
 static void move_task_off_dead_cpu(int dead_cpu, struct task_struct *p)
 {
-	int dest_cpu;
+	struct rq *rq = cpu_rq(dead_cpu);
+	int needs_cpu, uninitialized_var(dest_cpu);
+	unsigned long flags;
 
 again:
-	dest_cpu = select_fallback_rq(dead_cpu, p);
+	local_irq_save(flags);
+
+	raw_spin_lock(&rq->lock);
+	needs_cpu = (task_cpu(p) == dead_cpu) && (p->state != TASK_WAKING);
+	if (needs_cpu)
+		dest_cpu = select_fallback_rq(dead_cpu, p);
+	raw_spin_unlock(&rq->lock);
 
 	/* It can have affinity changed while we were choosing. */
-	if (unlikely(!__migrate_task_irq(p, dead_cpu, dest_cpu)))
+	if (needs_cpu)
+		needs_cpu = !__migrate_task(p, dead_cpu, dest_cpu);
+	local_irq_restore(flags);
+	if (unlikely(needs_cpu))
 		goto again;
 }