Commit 21b195c0 authored by Lai Jiangshan, committed by Tejun Heo

workqueue: Remove the mb() pair between wq_worker_sleeping() and insert_work()

In wq_worker_sleeping(), access to the worklist is protected by
pool->lock, so the memory barrier is unneeded.
Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
parent daadb3bd
@@ -918,10 +918,6 @@ void wq_worker_sleeping(struct task_struct *task)
 	}
 
 	/*
-	 * The counterpart of the following dec_and_test, implied mb,
-	 * worklist not empty test sequence is in insert_work().
-	 * Please read comment there.
-	 *
 	 * NOT_RUNNING is clear. This means that we're bound to and
 	 * running on the local cpu w/ rq lock held and preemption
 	 * disabled, which in turn means that none else could be
@@ -1372,13 +1368,6 @@ static void insert_work(struct pool_workqueue *pwq, struct work_struct *work,
 	list_add_tail(&work->entry, head);
 	get_pwq(pwq);
 
-	/*
-	 * Ensure either wq_worker_sleeping() sees the above
-	 * list_add_tail() or we see zero nr_running to avoid workers lying
-	 * around lazily while there are works to be processed.
-	 */
-	smp_mb();
-
 	if (__need_more_worker(pool))
 		wake_up_worker(pool);
 }
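Why the barrier became removable: once both sides run under pool->lock, the
lock's acquire/release ordering already guarantees that wq_worker_sleeping()
either sees the list_add_tail() from insert_work(), or insert_work() sees the
worker's updated state. The following user-space sketch illustrates that
pattern only; it is not the kernel code, and the names (pool_lock,
worklist_len, need_wakeup) and the pthread mutex standing in for the kernel
spinlock are all illustrative assumptions.

/*
 * Minimal user-space sketch, NOT the workqueue code: all names here
 * (pool_lock, worklist_len, need_wakeup) are hypothetical, and a
 * pthread mutex stands in for the kernel's pool->lock spinlock.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;
static int worklist_len;	/* stands in for pool->worklist */
static int need_wakeup;		/* stands in for the wake_up_worker() decision */

/* Producer, analogous to insert_work(): queue work under the lock. */
static void *producer(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&pool_lock);
	worklist_len++;
	/*
	 * No explicit barrier needed: unlocking pool_lock below
	 * publishes worklist_len to the next holder of the lock.
	 */
	pthread_mutex_unlock(&pool_lock);
	return NULL;
}

/* Consumer, analogous to wq_worker_sleeping(): check under the same lock. */
static void *consumer(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&pool_lock);
	if (worklist_len > 0)
		need_wakeup = 1;	/* queued work is guaranteed visible here */
	pthread_mutex_unlock(&pool_lock);
	return NULL;
}

int main(void)
{
	pthread_t p, c;

	pthread_create(&p, NULL, producer, NULL);
	pthread_create(&c, NULL, consumer, NULL);
	pthread_join(p, NULL);
	pthread_join(c, NULL);
	printf("worklist_len=%d need_wakeup=%d\n", worklist_len, need_wakeup);
	return 0;
}

Because the mutex serializes the two critical sections, whichever side runs
second is guaranteed to see the first side's write. This is the ordering that
pool->lock now provides in both wq_worker_sleeping() and insert_work(), which
is what makes the explicit smp_mb() pairing redundant.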