Commit e41e704b authored by Tejun Heo

workqueue: improve destroy_workqueue() debuggability

Now that the worklist is global, having works pending after wq
destruction can easily lead to an oops, and destroy_workqueue() has
several BUG_ON()s to catch these cases.  Unfortunately, a BUG_ON()
doesn't tell much about how the work became pending after the final
flush_workqueue().

This patch adds WQ_DYING, which is set before the final flush begins.
If a work is requested to be queued on a dying workqueue,
WARN_ON_ONCE() is triggered and the request is ignored.  This clearly
indicates which caller is trying to queue a work on a dying workqueue
and keeps the system working in most cases.

The locking rule comment is updated so that the 'I' rule also covers
modification of the field from the destruction path.
Signed-off-by: Tejun Heo <tj@kernel.org>
parent 972fa1c5
@@ -241,6 +241,8 @@ enum {
 	WQ_HIGHPRI		= 1 << 4,	/* high priority */
 	WQ_CPU_INTENSIVE	= 1 << 5,	/* cpu instensive workqueue */
+	WQ_DYING		= 1 << 6,	/* internal: workqueue is dying */
+
 	WQ_MAX_ACTIVE		= 512,		/* I like 512, better ideas? */
 	WQ_MAX_UNBOUND_PER_CPU	= 4,		/* 4 * #cpus for unbound wq */
 	WQ_DFL_ACTIVE		= WQ_MAX_ACTIVE / 2,
...
@@ -87,7 +87,8 @@ enum {
 /*
  * Structure fields follow one of the following exclusion rules.
  *
- * I: Set during initialization and read-only afterwards.
+ * I: Modifiable by initialization/destruction paths and read-only for
+ *    everyone else.
  *
  * P: Preemption protected.  Disabling preemption is enough and should
  *    only be modified and accessed from the local cpu.
@@ -944,6 +945,9 @@ static void __queue_work(unsigned int cpu, struct workqueue_struct *wq,
 
 	debug_work_activate(work);
 
+	if (WARN_ON_ONCE(wq->flags & WQ_DYING))
+		return;
+
 	/* determine gcwq to use */
 	if (!(wq->flags & WQ_UNBOUND)) {
 		struct global_cwq *last_gcwq;
@@ -2828,6 +2832,7 @@ void destroy_workqueue(struct workqueue_struct *wq)
 {
 	unsigned int cpu;
 
+	wq->flags |= WQ_DYING;
 	flush_workqueue(wq);
 
 	/*
...