Commit 989442d7 authored by Lai Jiangshan, committed by Tejun Heo

workqueue: Move the code of waking a worker up in unbind_workers()

In unbind_workers(), there are two pool->lock held sections separated
by the code that zaps nr_running.  wake_up_worker() needs to be inside
a pool->lock held section and must come after nr_running is zapped.
Zapping nr_running also had to happen after schedule() back when the
local wake up functionality was in use.  Now that the call to
schedule() has been removed along with the local wake up functionality,
the code can be merged into a single pool->lock held section.

The diffstat shows other code as moved down because the diff tools
cannot tell that the two lock sections were merged by swapping two
code blocks.
Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
parent b4ac9384
@@ -1810,14 +1810,8 @@ static void worker_enter_idle(struct worker *worker)
 	if (too_many_workers(pool) && !timer_pending(&pool->idle_timer))
 		mod_timer(&pool->idle_timer, jiffies + IDLE_WORKER_TIMEOUT);
 
-	/*
-	 * Sanity check nr_running. Because unbind_workers() releases
-	 * pool->lock between setting %WORKER_UNBOUND and zapping
-	 * nr_running, the warning may trigger spuriously.  Check iff
-	 * unbind is not in progress.
-	 */
-	WARN_ON_ONCE(!(pool->flags & POOL_DISASSOCIATED) &&
-		     pool->nr_workers == pool->nr_idle &&
+	/* Sanity check nr_running. */
+	WARN_ON_ONCE(pool->nr_workers == pool->nr_idle &&
 		     atomic_read(&pool->nr_running));
 }
@@ -4988,21 +4982,12 @@ static void unbind_workers(int cpu)
 		pool->flags |= POOL_DISASSOCIATED;
 
-		raw_spin_unlock_irq(&pool->lock);
-
-		for_each_pool_worker(worker, pool) {
-			kthread_set_per_cpu(worker->task, -1);
-			WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, cpu_possible_mask) < 0);
-		}
-
-		mutex_unlock(&wq_pool_attach_mutex);
-
 		/*
-		 * Sched callbacks are disabled now.  Zap nr_running.
-		 * After this, nr_running stays zero and need_more_worker()
-		 * and keep_working() are always true as long as the
-		 * worklist is not empty.  This pool now behaves as an
-		 * unbound (in terms of concurrency management) pool which
-		 * are served by workers tied to the pool.
+		 * The handling of nr_running in sched callbacks are disabled
+		 * now.  Zap nr_running.  After this, nr_running stays zero and
+		 * need_more_worker() and keep_working() are always true as
+		 * long as the worklist is not empty.  This pool now behaves as
+		 * an unbound (in terms of concurrency management) pool which
+		 * are served by workers tied to the pool.
 		 */
 		atomic_set(&pool->nr_running, 0);
@@ -5012,9 +4997,16 @@ static void unbind_workers(int cpu)
 		 * worker blocking could lead to lengthy stalls.  Kick off
 		 * unbound chain execution of currently pending work items.
 		 */
-		raw_spin_lock_irq(&pool->lock);
 		wake_up_worker(pool);
+
 		raw_spin_unlock_irq(&pool->lock);
+
+		for_each_pool_worker(worker, pool) {
+			kthread_set_per_cpu(worker->task, -1);
+			WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, cpu_possible_mask) < 0);
+		}
+
+		mutex_unlock(&wq_pool_attach_mutex);
 	}
 }