1. 17 Jul, 2012 6 commits
    • workqueue: reimplement CPU online rebinding to handle idle workers · 25511a47
      Tejun Heo authored
      Currently, if there are workers left when a CPU is being brought back
      online, the trustee kills all idle workers and schedules rebind_work
      so that they re-bind to the CPU after the currently executing work
      item is finished.  This works for busy workers because concurrency
      management doesn't try to wake them up from scheduler callbacks,
      which require the target task to be on the local run queue.  The busy
      worker bumps the concurrency counter appropriately as it clears
      WORKER_UNBOUND from the rebind work item and is bound to the CPU
      before returning to the idle state.
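      A condensed sketch of that busy-worker path (locking details and
      error handling omitted; the function carries the name it is given
      further down in this patch):

        static void busy_worker_rebind_fn(struct work_struct *work)
        {
                struct worker *worker = container_of(work, struct worker,
                                                     rebind_work);
                struct global_cwq *gcwq = worker->pool->gcwq;

                /*
                 * Runs on the worker itself after the current work item;
                 * re-bind to the now-online CPU and drop the unbound
                 * marker, which also fixes up the concurrency counter.
                 */
                if (worker_maybe_bind_and_lock(worker))
                        worker_clr_flags(worker, WORKER_UNBOUND);

                spin_unlock_irq(&gcwq->lock);
        }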
      
      To reduce CPU on/offlining overhead (as many embedded systems use it
      for powersaving) and simplify the code path, workqueue is planned to
      be modified to retain idle workers across CPU on/offlining.  This
      patch reimplements CPU online rebinding such that it can also handle
      idle workers.
      
      As noted earlier, due to the local wakeup requirement, rebinding idle
      workers is tricky.  All idle workers must be re-bound before scheduler
      callbacks are enabled.  This is achieved by interlocking idle
      re-binding.  Idle workers are requested to re-bind and then hold until
      all idle re-binding is complete so that no bound worker starts
      executing work items.  Only after all idle workers are re-bound and
      parked does CPU_ONLINE proceed to release them and queue the rebind
      work item on busy workers, thus guaranteeing that scheduler callbacks
      aren't invoked until all idle workers are ready.
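      A condensed sketch of that interlock, using the names from the patch
      (a shared idle_rebind counter plus a rebind_hold waitqueue; error
      paths omitted):

        struct idle_rebind {
                int                     cnt;   /* idle workers left */
                struct completion       done;  /* signaled when cnt hits 0 */
        };

        static void idle_worker_rebind(struct worker *worker)
        {
                struct global_cwq *gcwq = worker->pool->gcwq;

                /* CPU is back online; re-bind and drop the unbound marker */
                if (worker_maybe_bind_and_lock(worker))
                        worker_clr_flags(worker, WORKER_UNBOUND);

                /* tell CPU_ONLINE this idle worker has re-bound ... */
                if (!--worker->idle_rebind->cnt)
                        complete(&worker->idle_rebind->done);
                spin_unlock_irq(&gcwq->lock);

                /* ... and park until all idle workers are through */
                wait_event(gcwq->rebind_hold,
                           !(worker->flags & WORKER_REBIND));
        }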
      
      worker_rebind_fn() is renamed to busy_worker_rebind_fn() and
      idle_worker_rebind() for idle workers is added.  The rebinding logic
      is moved to rebind_workers(), which is now called from CPU_ONLINE
      after flushing the trustee.  While at it, a CPU sanity check is added
      to worker_thread().
      
      Note that a worker may now become idle or become the manager between
      trustee release and rebinding during CPU_ONLINE.  As the previous
      patch updated create_worker() so that it can be used by the regular
      manager while unbound and this patch implements idle re-binding, this
      is safe.
      
      This prepares for removal of trustee and keeping idle workers across
      CPU hotplugs.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: "Rafael J. Wysocki" <rjw@sisk.pl>
    • workqueue: drop @bind from create_worker() · bc2ae0f5
      Tejun Heo authored
      Currently, create_worker()'s callers are responsible for deciding
      whether the newly created worker should be bound to the associated
      CPU, and create_worker() sets WORKER_UNBOUND only for workers of the
      unbound global_cwq.  Creation during normal operation always goes
      through maybe_create_worker() with @bind true.  For workers created
      during hotplug, @bind is false.
      
      The normal operation path is planned to be used even while the CPU is
      going through hotplug operations or is offline, and this static
      decision won't work.
      
      Drop @bind from create_worker() and decide whether to bind by looking
      at GCWQ_DISASSOCIATED.  create_worker() will also set WORKER_UNBOUND
      automatically if disassociated.  To avoid flipping GCWQ_DISASSOCIATED
      while create_worker() is in progress, the flag is now allowed to be
      changed only while holding all manager_mutexes on the global_cwq.
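      The new decision inside create_worker() looks roughly like this
      (allocation, naming, error handling and PF_THREAD_BOUND handling
      elided):

        static struct worker *create_worker(struct worker_pool *pool)
        {
                struct global_cwq *gcwq = pool->gcwq;
                struct worker *worker;

                /* ... allocate the worker and create its kthread ... */

                /*
                 * The caller holds manager_mutex, so GCWQ_DISASSOCIATED
                 * can't change underneath us here.
                 */
                if (gcwq->cpu != WORK_CPU_UNBOUND &&
                    !(gcwq->flags & GCWQ_DISASSOCIATED))
                        kthread_bind(worker->task, gcwq->cpu);
                else
                        worker->flags |= WORKER_UNBOUND;

                return worker;
        }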
      
      This requires that GCWQ_DISASSOCIATED not be cleared behind the
      trustee's back.  CPU_ONLINE no longer clears DISASSOCIATED before
      flushing the trustee, which itself clears DISASSOCIATED before
      rebinding the remaining workers if asked to release.  For cases where
      the trustee isn't around, CPU_ONLINE clears DISASSOCIATED after
      flushing it.  Also, first_idle now has UNBOUND set on creation, which
      is explicitly cleared by CPU_ONLINE while binding it.  These
      convolutions will soon be removed by further simplification of the
      CPU hotplug path.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: "Rafael J. Wysocki" <rjw@sisk.pl>
    • workqueue: use mutex for global_cwq manager exclusion · 60373152
      Tejun Heo authored
      POOL_MANAGING_WORKERS is used to ensure that at most one worker takes
      the manager role at any given time on a given global_cwq.  The
      trustee later hitched onto it to assume the manager role, adding a
      blocking wait for the bit.  As the trustee already needed a custom
      wait mechanism, waiting for MANAGING_WORKERS was rolled into that
      same mechanism.
      
      The trustee is scheduled to be removed.  This patch separates the
      MANAGING_WORKERS wait out into a per-pool mutex.  Workers use
      mutex_trylock() to test for the manager role and the trustee uses
      mutex_lock() to claim it.
      
      gcwq_claim/release_management() helpers are added to grab and release
      the manager roles of all pools on a global_cwq.
      gcwq_claim_management() always grabs the pool manager mutexes in
      ascending pool-index order and uses the pool index as the lockdep
      subclass.
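      Condensed, the two sides of the new exclusion look like this (the
      real functions carry more state handling):

        static bool manage_workers(struct worker *worker)
        {
                struct worker_pool *pool = worker->pool;

                /* only become manager if nobody else holds the role */
                if (!mutex_trylock(&pool->manager_mutex))
                        return false;

                /* ... create/destroy workers as needed ... */

                mutex_unlock(&pool->manager_mutex);
                return true;
        }

        static void gcwq_claim_management(struct global_cwq *gcwq)
        {
                struct worker_pool *pool;

                /* ascending pool index; it doubles as lockdep subclass */
                for_each_worker_pool(pool, gcwq)
                        mutex_lock_nested(&pool->manager_mutex,
                                          pool - gcwq->pools);
        }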
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: "Rafael J. Wysocki" <rjw@sisk.pl>
    • workqueue: ROGUE workers are UNBOUND workers · 403c821d
      Tejun Heo authored
      Currently, WORKER_UNBOUND is used to mark workers for the unbound
      global_cwq and WORKER_ROGUE is used to mark workers for disassociated
      per-cpu global_cwqs.  Both are used to make the marked worker skip
      concurrency management and the only place they make any difference is
      in worker_enter_idle() where WORKER_ROGUE is used to skip scheduling
      the idle timer, which can easily be replaced with trustee state
      testing.
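      Illustratively (the exact condition in the patch may differ), the
      worker_enter_idle() test moves from the per-worker flag to the
      gcwq's trustee state:

        /* before: the per-worker flag decides about the idle timer */
        if (likely(!(worker->flags & WORKER_ROGUE))) {
                if (too_many_workers(pool) &&
                    !timer_pending(&pool->idle_timer))
                        mod_timer(&pool->idle_timer,
                                  jiffies + IDLE_WORKER_TIMEOUT);
        } else
                wake_up_all(&gcwq->trustee_wait);

        /* after: the gcwq-wide trustee state decides instead */
        if (likely(gcwq->trustee_state != TRUSTEE_IN_CHARGE)) {
                if (too_many_workers(pool) &&
                    !timer_pending(&pool->idle_timer))
                        mod_timer(&pool->idle_timer,
                                  jiffies + IDLE_WORKER_TIMEOUT);
        } else
                wake_up_all(&gcwq->trustee_wait);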
      
      This patch replaces WORKER_ROGUE with WORKER_UNBOUND and drops
      WORKER_ROGUE.  This is to prepare for removing trustee and handling
      disassociated global_cwqs as unbound.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: "Rafael J. Wysocki" <rjw@sisk.pl>
    • workqueue: drop CPU_DYING notifier operation · f2d5a0ee
      Tejun Heo authored
      Workqueue used CPU_DYING notification to mark GCWQ_DISASSOCIATED.
      This was necessary because workqueue's CPU_DOWN_PREPARE happened
      before other DOWN_PREPARE notifiers and workqueue needed to stay
      associated across the rest of DOWN_PREPARE.
      
      After the previous patch, workqueue's DOWN_PREPARE happens after
      others and can set GCWQ_DISASSOCIATED directly.  Drop CPU_DYING and
      let the trustee set GCWQ_DISASSOCIATED after disabling concurrency
      management.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: "Rafael J. Wysocki" <rjw@sisk.pl>
    • workqueue: perform cpu down operations from low priority cpu_notifier() · 65758202
      Tejun Heo authored
      Currently, all workqueue cpu hotplug operations run off
      CPU_PRI_WORKQUEUE which is higher than normal notifiers.  This is to
      ensure that workqueue is up and running while bringing up a CPU before
      other notifiers try to use workqueue on the CPU.
      
      Per-cpu workqueues are supposed to remain working and bound to the
      CPU for normal CPU_DOWN_PREPARE notifiers.  This mostly holds true
      even with workqueue offlining running at higher priority because
      workqueue CPU_DOWN_PREPARE only creates a bound trustee thread, which
      runs the per-cpu workqueue without concurrency management but doesn't
      explicitly detach the existing workers.
      
      However, if the trustee needs to create new workers, it creates
      unbound workers which may wander off to other CPUs while
      CPU_DOWN_PREPARE notifiers are in progress.  Furthermore, if the CPU
      down is cancelled, the per-CPU workqueue may end up with workers which
      aren't bound to the CPU.
      
      While reliably reproducible with a convoluted artificial test-case
      involving scheduling and flushing CPU burning work items from CPU down
      notifiers, this isn't very likely to happen in the wild, and, even
      when it happens, the effects are likely to be hidden by the following
      successful CPU down.
      
      Fix it by using different priorities for up and down notifiers - high
      priority for up operations and low priority for down operations.
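      Concretely, registration is split so that the up and down callbacks
      get their own priorities (constant names as in the patch; higher
      priority runs earlier in every notification):

        /* include/linux/cpu.h */
        CPU_PRI_WORKQUEUE_UP    = 5,   /* before most notifiers */
        CPU_PRI_WORKQUEUE_DOWN  = -5,  /* after most notifiers */

        /* kernel/workqueue.c, init_workqueues() */
        cpu_notifier(workqueue_cpu_up_callback, CPU_PRI_WORKQUEUE_UP);
        hotcpu_notifier(workqueue_cpu_down_callback, CPU_PRI_WORKQUEUE_DOWN);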
      
      Workqueue cpu hotplug operations will soon go through further cleanup.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: stable@vger.kernel.org
      Acked-by: "Rafael J. Wysocki" <rjw@sisk.pl>
  2. 14 Jul, 2012 2 commits
    • workqueue: reimplement WQ_HIGHPRI using a separate worker_pool · 3270476a
      Tejun Heo authored
      WQ_HIGHPRI was implemented by queueing highpri work items at the head
      of the global worklist.  Other than queueing at the head, they weren't
      handled differently; unfortunately, this could lead to execution
      latency of a few seconds on heavily loaded systems.
      
      Now that workqueue code has been updated to deal with multiple
      worker_pools per global_cwq, this patch reimplements WQ_HIGHPRI using
      a separate worker_pool.  NR_WORKER_POOLS is bumped to two and
      gcwq->pools[0] is used for normal-pri work items and ->pools[1] for
      highpri ones.  Highpri workers get a -20 nice level and have an 'H'
      suffix in their names.  Note that this change increases the number of
      kworkers per cpu.
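      In sketch form (names from the patch, with worker_pool_pri()
      returning the pool's index in gcwq->pools), pool selection and
      worker naming become:

        /* cwq initialization: WQ_HIGHPRI picks the second pool */
        struct worker_pool *pool =
                &gcwq->pools[(bool)(wq->flags & WQ_HIGHPRI)];

        /* worker creation: highpri gets the nice level and "H" suffix */
        const char *pri = worker_pool_pri(pool) ? "H" : "";

        worker->task = kthread_create(worker_thread, worker,
                                      "kworker/%u:%d%s", gcwq->cpu,
                                      id, pri);
        if (worker_pool_pri(pool))
                set_user_nice(worker->task, HIGHPRI_NICE_LEVEL);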
      
      POOL_HIGHPRI_PENDING, pool_determine_ins_pos() and highpri chain
      wakeup code in process_one_work() are no longer used and removed.
      
      This allows proper prioritization of highpri work items and removes
      high execution latency of highpri work items.
      
      v2: nr_running indexing bug in get_pool_nr_running() fixed.
      
      v3: Refreshed for the get_pool_nr_running() update in the previous
          patch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-by: Josh Hunt <joshhunt00@gmail.com>
      LKML-Reference: <CAKA=qzaHqwZ8eqpLNFjxnO2fX-tgAOjmpvxgBFjv6dJeQaOW1w@mail.gmail.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Fengguang Wu <fengguang.wu@intel.com>
    • workqueue: introduce NR_WORKER_POOLS and for_each_worker_pool() · 4ce62e9e
      Tejun Heo authored
      Introduce NR_WORKER_POOLS and for_each_worker_pool() and convert code
      paths which need to manipulate all pools in a gcwq to use them.
      NR_WORKER_POOLS is currently one and for_each_worker_pool() iterates
      over only @gcwq->pool.
      
      Note that nr_running is a per-pool property and is converted to an
      array with NR_WORKER_POOLS elements, renamed to pool_nr_running.
      Also note that get_pool_nr_running() currently assumes index 0.  The
      next patch will make use of non-zero indices.
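      A condensed sketch of the two pieces (with NR_WORKER_POOLS still
      one, the loop visits just @gcwq->pool):

        #define for_each_worker_pool(pool, gcwq)                        \
                for ((pool) = &(gcwq)->pool; (pool); (pool) = NULL)

        /* nr_running, now one counter per pool */
        static DEFINE_PER_CPU_SHARED_ALIGNED(atomic_t,
                                pool_nr_running[NR_WORKER_POOLS]);
        static atomic_t unbound_pool_nr_running[NR_WORKER_POOLS];

        static atomic_t *get_pool_nr_running(struct worker_pool *pool)
        {
                int cpu = pool->gcwq->cpu;

                /* index 0 assumed; the next patch uses the pool index */
                if (cpu != WORK_CPU_UNBOUND)
                        return &per_cpu(pool_nr_running, cpu)[0];
                else
                        return &unbound_pool_nr_running[0];
        }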
      
      The changes in this patch are mechanical and don't cause any
      functional difference.  This is to prepare for multiple pools per
      gcwq.
      
      v2: nr_running indexing bug in get_pool_nr_running() fixed.
      
      v3: Pointer to array is stupid.  Don't use it in get_pool_nr_running()
          as suggested by Linus.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Fengguang Wu <fengguang.wu@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
  3. 12 Jul, 2012 4 commits
    • workqueue: separate out worker_pool flags · 11ebea50
      Tejun Heo authored
      GCWQ_MANAGE_WORKERS, GCWQ_MANAGING_WORKERS and GCWQ_HIGHPRI_PENDING
      are per-pool properties.  Add worker_pool->flags and make the above
      three flags per-pool flags.
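      A sketch of the split (bit values illustrative):

        enum {
                /* pool flags, previously gcwq flags */
                POOL_MANAGE_WORKERS   = 1 << 0, /* need to manage workers */
                POOL_MANAGING_WORKERS = 1 << 1, /* managing workers */
                POOL_HIGHPRI_PENDING  = 1 << 2, /* highpri works on queue */
        };

        struct worker_pool {
                struct global_cwq     *gcwq;    /* I: owning gcwq */
                unsigned int          flags;    /* X: POOL_* flags */
                /* ... worklist and worker management fields ... */
        };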
      
      The changes in this patch are mechanical and don't cause any
      functional difference.  This is to prepare for multiple pools per
      gcwq.
      Signed-off-by: Tejun Heo <tj@kernel.org>
    • workqueue: use @pool instead of @gcwq or @cpu where applicable · 63d95a91
      Tejun Heo authored
      Modify all functions which deal with per-pool properties to pass
      around @pool instead of @gcwq or @cpu.
      
      The changes in this patch are mechanical and don't cause any
      functional difference.  This is to prepare for multiple pools per
      gcwq.
      Signed-off-by: Tejun Heo <tj@kernel.org>
    • workqueue: factor out worker_pool from global_cwq · bd7bdd43
      Tejun Heo authored
      Move worklist and all worker management fields from global_cwq into
      the new struct worker_pool.  worker_pool points back to the containing
      gcwq.  worker and cpu_workqueue_struct are updated to point to
      worker_pool instead of gcwq too.
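      Condensed, the new structure looks like this (annotations follow the
      file's locking-comment convention; some fields elided):

        struct worker_pool {
                struct global_cwq  *gcwq;         /* I: owning gcwq */

                struct list_head   worklist;      /* L: pending works */
                int                nr_workers;    /* L: total workers */
                int                nr_idle;       /* L: idle ones */

                struct list_head   idle_list;     /* X: idle workers */
                struct timer_list  idle_timer;    /* L: idle timeout */
                struct timer_list  mayday_timer;  /* L: SOS for workers */

                struct ida         worker_ida;    /* L: worker IDs */
                struct worker      *first_idle;   /* L: first idle one */
        };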
      
      This change is mechanical and doesn't introduce any functional
      difference other than rearranging of fields and an added level of
      indirection in some places.  This is to prepare for multiple pools per
      gcwq.
      
      v2: Comment typo fixes as suggested by Namhyung.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
    • workqueue: don't use WQ_HIGHPRI for unbound workqueues · 974271c4
      Tejun Heo authored
      Unbound wqs aren't concurrency-managed and try to execute work items
      as soon as possible.  This is currently achieved by implicitly setting
      %WQ_HIGHPRI on all unbound workqueues; however, WQ_HIGHPRI
      implementation is about to be restructured and this usage won't be
      valid anymore.
      
      Add an explicit chain-wakeup path for unbound workqueues in
      process_one_work() instead of piggybacking on %WQ_HIGHPRI.
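      The explicit path amounts to a few lines in process_one_work()
      (sketch; at this point in the series the code still operates on the
      gcwq rather than a pool):

        /*
         * Unbound gcwq isn't concurrency managed and work items should
         * be executed ASAP.  Wake up another worker if necessary.
         */
        if ((worker->flags & WORKER_UNBOUND) && need_more_worker(gcwq))
                wake_up_worker(gcwq);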
      Signed-off-by: Tejun Heo <tj@kernel.org>
  4. 11 Jul, 2012 28 commits