1. 14 Dec, 2016 2 commits
    • IB/rdmavt: Handle the kthread worker using the new API · f5eabf5e
      Petr Mladek authored
      Use the new API to create and destroy the cq kthread worker.
      The API hides some implementation details.
      
      In particular, kthread_create_worker() allocates and initializes
      struct kthread_worker. It starts the kthread the right way and stores
      the task_struct into the worker structure. In addition, the *_on_cpu()
      variant binds the kthread to the given CPU and the related memory
      node.
      
      kthread_destroy_worker() flushes all pending work, stops
      the kthread, and frees the structure.
      
      This patch does not change the existing behavior. Note that we must
      use the *_on_cpu() variant because the function starts the kthread,
      and it must be bound to the right CPU before it is woken. The NUMA
      node is derived from the given CPU as well.
      Signed-off-by: Petr Mladek <pmladek@suse.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
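      For illustration only: a minimal sketch of the conversion described in the commit above, assuming the 4.9-era kthread worker helpers. The structure and function names here (cq_dev, cq_worker_init, cq_worker_exit) are placeholders, not the actual rdmavt code.

      #include <linux/err.h>
      #include <linux/kthread.h>

      /* Placeholder device structure; stands in for the rdmavt rvt_dev_info. */
      struct cq_dev {
              struct kthread_worker *comp_worker;
      };

      static int cq_worker_init(struct cq_dev *dev, int cpu)
      {
              /*
               * kthread_create_worker_on_cpu() allocates and initializes the
               * struct kthread_worker, starts the kthread, binds it to @cpu and
               * its memory node, and stores the task_struct inside the worker.
               */
              dev->comp_worker = kthread_create_worker_on_cpu(cpu, 0, "cq_comp");
              if (IS_ERR(dev->comp_worker)) {
                      int err = PTR_ERR(dev->comp_worker);

                      dev->comp_worker = NULL;
                      return err;
              }
              return 0;
      }

      static void cq_worker_exit(struct cq_dev *dev)
      {
              /* Flushes all pending work, stops the kthread, frees the worker. */
              kthread_destroy_worker(dev->comp_worker);
              dev->comp_worker = NULL;
      }

      Roughly, the two helpers replace the open-coded allocation, kthread_create_on_node(), kthread_bind() and wake_up_process() sequence that callers previously had to write themselves.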
    • IB/rdmavt: Avoid queuing work into a destroyed cq kthread worker · 6efaf10f
      Petr Mladek authored
      The memory barrier is not enough to prevent work from being queued
      into an already destroyed cq kthread worker. Just imagine the
      following situation:
      
      CPU1				CPU2
      
      rvt_cq_enter()
        worker =  cq->rdi->worker;
      
      				rvt_cq_exit()
      				  rdi->worker = NULL;
      				  smp_wmb();
      				  kthread_flush_worker(worker);
      				  kthread_stop(worker->task);
      				  kfree(worker);
      
      				  // nothing queued yet =>
      				  // nothing flushed and
      				  // happily stopped and freed
      
        if (likely(worker)) {
           // true => read before CPU2 acted
           cq->notify = RVT_CQ_NONE;
           cq->triggered++;
           kthread_queue_work(worker, &cq->comptask);
      
        BANG: worker has been flushed/stopped/freed in the meantime.
      
      This patch solves the problem by protecting the critical sections
      with rdi->n_cqs_lock. This lock does not appear to be heavily
      contended and looks reasonable for this purpose.
      
      One catch is that rvt_cq_enter() might be called from IRQ context.
      Therefore we must always take the lock with IRQs disabled to avoid
      a possible deadlock.
      Signed-off-by: Petr Mladek <pmladek@suse.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
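      A rough sketch of the locking scheme described in the commit above, using the field names it mentions (rdi->worker, rdi->n_cqs_lock, cq->comptask); the exact shape and function split of the real patch may differ.

      #include <linux/kthread.h>
      #include <linux/spinlock.h>
      #include <rdma/rdma_vt.h>
      #include <rdma/rdmavt_cq.h>

      /* Queuing side; it may run in IRQ context, hence the irqsave variant. */
      static void cq_queue_complete(struct rvt_dev_info *rdi, struct rvt_cq *cq)
      {
              unsigned long flags;

              spin_lock_irqsave(&rdi->n_cqs_lock, flags);
              if (likely(rdi->worker))        /* worker still published? */
                      kthread_queue_work(rdi->worker, &cq->comptask);
              spin_unlock_irqrestore(&rdi->n_cqs_lock, flags);
      }

      /* Teardown side: unpublish the worker under the lock, destroy it outside. */
      static void cq_worker_destroy(struct rvt_dev_info *rdi)
      {
              struct kthread_worker *worker;
              unsigned long flags;

              spin_lock_irqsave(&rdi->n_cqs_lock, flags);
              worker = rdi->worker;
              rdi->worker = NULL;
              spin_unlock_irqrestore(&rdi->n_cqs_lock, flags);

              if (worker)
                      kthread_destroy_worker(worker); /* flushes, stops, frees */
      }

      With both sides serialized on rdi->n_cqs_lock, the worker pointer can no longer be read after teardown has unpublished it, so the lone smp_wmb() from the diagram no longer has to carry the synchronization.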