Commit 741ba80f authored by Peter Zijlstra

sched: Relax the set_cpus_allowed_ptr() semantics

Now that we have KTHREAD_IS_PER_CPU to denote the critical per-cpu
tasks to retain during CPU offline, we can relax the warning in
set_cpus_allowed_ptr(). Any spurious kthread that wants to get on at
the last minute will get pushed off before it can run.

During CPU online there is no harm, and actual benefit, in allowing
kthreads back on early; it simplifies hotplug code and fixes a number
of outstanding races.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Lai Jiangshan <jiangshanlai@gmail.com>
Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
Tested-by: Valentin Schneider <valentin.schneider@arm.com>
Link: https://lkml.kernel.org/r/20210121103507.240724591@infradead.org
parent 5ba2ffba
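
For readers unfamiliar with the flag the message refers to: KTHREAD_IS_PER_CPU
is set through the kthread API from earlier in this series
(kthread_set_per_cpu(); kthread_create_on_cpu() sets it for you). The sketch
below is illustrative only and is not part of this commit; example_percpu_worker()
and example_start_on() are invented names. It shows how a strict per-CPU
kthread ends up marked so that it is retained during CPU offline:

#include <linux/kthread.h>
#include <linux/sched.h>
#include <linux/err.h>

/* Hypothetical worker; does its per-CPU work until asked to stop. */
static int example_percpu_worker(void *arg)
{
	while (!kthread_should_stop()) {
		set_current_state(TASK_INTERRUPTIBLE);
		schedule();
	}
	return 0;
}

static struct task_struct *example_start_on(unsigned int cpu)
{
	struct task_struct *p;

	p = kthread_create_on_cpu(example_percpu_worker, NULL, cpu,
				  "example/%u");
	if (IS_ERR(p))
		return p;

	/*
	 * kthread_create_on_cpu() binds the thread and marks it
	 * KTHREAD_IS_PER_CPU; a thread bound by hand would call
	 * kthread_set_per_cpu(p, cpu) itself. Marked threads are
	 * retained during CPU offline; any other kthread that sneaks
	 * on at the last minute is pushed away before it can run.
	 */
	wake_up_process(p);
	return p;
}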
@@ -2342,7 +2342,9 @@ static int __set_cpus_allowed_ptr(struct task_struct *p,
 
 	if (p->flags & PF_KTHREAD || is_migration_disabled(p)) {
 		/*
-		 * Kernel threads are allowed on online && !active CPUs.
+		 * Kernel threads are allowed on online && !active CPUs,
+		 * however, during cpu-hot-unplug, even these might get pushed
+		 * away if not KTHREAD_IS_PER_CPU.
 		 *
 		 * Specifically, migration_disabled() tasks must not fail the
 		 * cpumask_any_and_distribute() pick below, esp. so on
@@ -2386,16 +2388,6 @@ static int __set_cpus_allowed_ptr(struct task_struct *p,
 
 	__do_set_cpus_allowed(p, new_mask, flags);
 
-	if (p->flags & PF_KTHREAD) {
-		/*
-		 * For kernel threads that do indeed end up on online &&
-		 * !active we want to ensure they are strict per-CPU threads.
-		 */
-		WARN_ON(cpumask_intersects(new_mask, cpu_online_mask) &&
-			!cpumask_intersects(new_mask, cpu_active_mask) &&
-			p->nr_cpus_allowed != 1);
-	}
-
 	return affine_move_task(rq, p, &rf, dest_cpu, flags);
 
 out:
@@ -7518,6 +7510,13 @@ int sched_cpu_deactivate(unsigned int cpu)
 	int ret;
 
 	set_cpu_active(cpu, false);
+
+	/*
+	 * From this point forward, this CPU will refuse to run any task that
+	 * is not: migrate_disable() or KTHREAD_IS_PER_CPU, and will actively
+	 * push those tasks away until this gets cleared, see
+	 * sched_cpu_dying().
+	 */
 	balance_push_set(cpu, true);
 
 	/*
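
Taken together, the hunks in __set_cpus_allowed_ptr() and sched_cpu_deactivate()
implement the relaxed rule. As a condensed sketch (paraphrased, not verbatim
kernel code; may_run_on() is a hypothetical name, and is_migration_disabled()
and kthread_is_per_cpu() are scheduler/kthread helpers from this series),
placement now reduces to:

#include <linux/sched.h>
#include <linux/cpumask.h>
#include <linux/kthread.h>

/* Hypothetical condensed form of the placement rule after this change. */
static bool may_run_on(struct task_struct *p, int cpu)
{
	/* Never outside the task's own affinity mask. */
	if (!cpumask_test_cpu(cpu, p->cpus_ptr))
		return false;

	/* migration_disabled() tasks must be allowed to finish. */
	if (is_migration_disabled(p))
		return cpu_online(cpu);

	/* Userspace tasks require a fully active CPU. */
	if (!(p->flags & PF_KTHREAD))
		return cpu_active(cpu);

	/*
	 * Any kthread may now use an online && !active CPU. During
	 * offline, everything that is not KTHREAD_IS_PER_CPU is pushed
	 * away by balance_push() before it can run, which is why the
	 * WARN_ON removed above is no longer needed.
	 */
	return cpu_online(cpu);
}

The design point: instead of warning at affinity-setting time, enforcement
moves entirely to the push side; balance_push_set() arms the CPU to eject
any non-retained task, so admission can be permissive.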