Commit e0dd880a authored by Linus Torvalds

Merge branch 'for-4.2' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq

Pull workqueue updates from Tejun Heo:
 "Most of the changes are around implementing and fixing fallouts from
  sysfs and internal interface to limit the CPUs available to all
  unbound workqueues to help isolating CPUs.  It needs more work as
  ordered workqueues can roam unrestricted but still is a significant
  improvement"

* 'for-4.2' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq:
  workqueue: fix typos in comments
  workqueue: move flush_scheduled_work() to workqueue.h
  workqueue: remove the lock from wq_sysfs_prep_attrs()
  workqueue: remove the declaration of copy_workqueue_attrs()
  workqueue: ensure attrs changes are properly synchronized
  workqueue: separate out and refactor the locking of applying attrs
  workqueue: simplify wq_update_unbound_numa()
  workqueue: wq_pool_mutex protects the attrs-installation
  workqueue: fix a typo
  workqueue: function name in the comment differs from the real function name
  workqueue: fix trivial typo in Documentation/workqueue.txt
  workqueue: Allow modifying low level unbound workqueue cpumask
  workqueue: Create low-level unbound workqueues cpumask
  workqueue: split apply_workqueue_attrs() into 3 stages
parents bbe179f8 402dd89d
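
The pull description above refers to a new low-level interface for limiting which CPUs unbound workqueues may run on. The in-kernel entry point, workqueue_set_unbound_cpumask(), is visible in the workqueue.h hunk below, and the "Create low-level unbound workqueues cpumask" patch exposes the same mask as a writable cpumask file in the workqueue sysfs directory. The snippet that follows is only a sketch of how kernel code might call the new interface; the helper name and the choice of excluding CPU 3 are illustrative, not part of this merge.

/* Sketch only: steer all unbound workqueues away from CPU 3. */
#include <linux/cpumask.h>
#include <linux/gfp.h>
#include <linux/workqueue.h>

static int exclude_cpu3_from_unbound_wqs(void)	/* hypothetical helper */
{
	cpumask_var_t mask;
	int ret;

	if (!alloc_cpumask_var(&mask, GFP_KERNEL))
		return -ENOMEM;

	cpumask_copy(mask, cpu_possible_mask);
	cpumask_clear_cpu(3, mask);

	/* As the description notes, ordered workqueues may still roam unrestricted. */
	ret = workqueue_set_unbound_cpumask(mask);

	free_cpumask_var(mask);
	return ret;
}
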
@@ -365,7 +365,7 @@ root 5674 0.0 0.0 0 0 ? S 12:13 0:00 [kworker/1:0]
If kworkers are going crazy (using too much cpu), there are two types
of possible problems:
-1. Something beeing scheduled in rapid succession
+1. Something being scheduled in rapid succession
2. A single work item that consumes lots of cpu cycles
The first one can be tracked using tracing:
......
@@ -424,6 +424,7 @@ struct workqueue_attrs *alloc_workqueue_attrs(gfp_t gfp_mask);
void free_workqueue_attrs(struct workqueue_attrs *attrs);
int apply_workqueue_attrs(struct workqueue_struct *wq,
const struct workqueue_attrs *attrs);
+int workqueue_set_unbound_cpumask(cpumask_var_t cpumask);
extern bool queue_work_on(int cpu, struct workqueue_struct *wq,
struct work_struct *work);
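
Taken together, the attrs helpers declared in the hunk above cover the per-workqueue side of the same feature: alloc_workqueue_attrs(), apply_workqueue_attrs(), and free_workqueue_attrs() let a caller pin an individual unbound workqueue to a cpumask. Below is a minimal sketch under those 4.2-era signatures; the workqueue name, the two-CPU mask, and the helper itself are made-up examples, and it assumes the usual alloc_workqueue()/destroy_workqueue() helpers and the cpumask member of struct workqueue_attrs.

/* Sketch only: confine one unbound workqueue to CPUs 0-1. */
#include <linux/cpumask.h>
#include <linux/gfp.h>
#include <linux/workqueue.h>

static int make_restricted_wq(struct workqueue_struct **out)	/* hypothetical helper */
{
	struct workqueue_struct *wq;
	struct workqueue_attrs *attrs;
	int ret;

	wq = alloc_workqueue("example_unbound", WQ_UNBOUND, 0);
	if (!wq)
		return -ENOMEM;

	attrs = alloc_workqueue_attrs(GFP_KERNEL);
	if (!attrs) {
		destroy_workqueue(wq);
		return -ENOMEM;
	}

	cpumask_clear(attrs->cpumask);
	cpumask_set_cpu(0, attrs->cpumask);
	cpumask_set_cpu(1, attrs->cpumask);

	ret = apply_workqueue_attrs(wq, attrs);
	free_workqueue_attrs(attrs);
	if (ret) {
		destroy_workqueue(wq);
		return ret;
	}

	*out = wq;
	return 0;
}
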
@@ -434,7 +435,6 @@ extern bool mod_delayed_work_on(int cpu, struct workqueue_struct *wq,
extern void flush_workqueue(struct workqueue_struct *wq);
extern void drain_workqueue(struct workqueue_struct *wq);
-extern void flush_scheduled_work(void);
extern int schedule_on_each_cpu(work_func_t func);
@@ -530,6 +530,35 @@ static inline bool schedule_work(struct work_struct *work)
return queue_work(system_wq, work);
}
+
+/**
+ * flush_scheduled_work - ensure that any scheduled work has run to completion.
+ *
+ * Forces execution of the kernel-global workqueue and blocks until its
+ * completion.
+ *
+ * Think twice before calling this function! It's very easy to get into
+ * trouble if you don't take great care. Either of the following situations
+ * will lead to deadlock:
+ *
+ * One of the work items currently on the workqueue needs to acquire
+ * a lock held by your code or its caller.
+ *
+ * Your code is running in the context of a work routine.
+ *
+ * They will be detected by lockdep when they occur, but the first might not
+ * occur very often. It depends on what work items are on the workqueue and
+ * what locks they need, which you have no control over.
+ *
+ * In most situations flushing the entire workqueue is overkill; you merely
+ * need to know that a particular work item isn't queued and isn't running.
+ * In such cases you should use cancel_delayed_work_sync() or
+ * cancel_work_sync() instead.
+ */
+static inline void flush_scheduled_work(void)
+{
+	flush_workqueue(system_wq);
+}
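
To illustrate the advice in the kernel-doc just added above, here is a sketch of the preferred pattern: cancel and wait for the specific items you own rather than flushing the whole system workqueue. my_work, my_dwork, and the teardown helper are hypothetical, not part of this patch.

/* Sketch only: synchronize against specific work items instead of flush_scheduled_work(). */
#include <linux/workqueue.h>

static struct work_struct my_work;	/* hypothetical work item */
static struct delayed_work my_dwork;	/* hypothetical delayed work item */

static void example_teardown(void)
{
	/* Cancel a pending instance, if any, and wait for a running one to finish. */
	cancel_work_sync(&my_work);
	cancel_delayed_work_sync(&my_dwork);
}
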
/**
* schedule_delayed_work_on - queue work in global workqueue on CPU after delay
* @cpu: cpu to use
......