1. 19 Jul, 2012 3 commits
    •
      IB/qib: Add congestion control agent implementation · 36a8f01c
      Mike Marciniszyn authored
      Add a congestion control agent in the driver that handles gets and
      sets from the congestion control manager in the fabric for the
      Performance Scale Messaging (PSM) library.
      Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
    •
      IB/qib: Reduce sdma_lock contention · 551ace12
      Mike Marciniszyn authored
      Profiling has shown that sdma_lock is proving a bottleneck for
      performance. The situations include:
       - RDMA reads when krcvqs > 1
       - post sends from multiple threads
      
      For RDMA reads, the current global qib_wq mechanism runs on all CPUs
      and contends for the sdma_lock when multiple RDMA read requests are
      fielded on different CPUs. For post sends, the direct call to
      qib_do_send() from multiple threads causes the contention.
      
      Since the sdma mechanism is per port, this fix converts the existing
      workqueue to a per port single thread workqueue to reduce the lock
      contention in the RDMA read case, and for any other case where the QP
      is scheduled via the workqueue mechanism from more than 1 CPU.
      
      For the post send case, this patch modifies the post send code to test
      for a non-empty sdma engine.  If the sdma engine is not idle, the (now
      single-thread) workqueue is used to trigger the send engine instead of
      the direct call to qib_do_send().
      Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
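The scheduling decision described above can be sketched in plain C. This is a hedged userspace illustration, not the driver's code: the names (struct port, do_send, engine_busy, sends_direct, sends_deferred) are invented, and a pthread worker stands in for the kernel's per-port single-thread workqueue. The key point it demonstrates is that a post send calls the send routine directly only when the engine is idle; otherwise the work is handed to the one worker thread, so only one CPU ever contends for the engine.

```c
#include <pthread.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the driver's per-port state. */
struct port {
    pthread_t       worker;       /* the per-port single-thread "workqueue" */
    pthread_mutex_t lock;
    pthread_cond_t  kick;
    bool            work_pending;
    bool            stop;
    bool            engine_busy;  /* stands in for "sdma engine not idle" */
    int             sends_direct;
    int             sends_deferred;
};

static void do_send(struct port *p) { (void)p; /* trigger the send engine */ }

static void *worker_fn(void *arg)
{
    struct port *p = arg;
    pthread_mutex_lock(&p->lock);
    for (;;) {
        while (!p->work_pending && !p->stop)
            pthread_cond_wait(&p->kick, &p->lock);
        if (p->work_pending) {
            p->work_pending = false;
            pthread_mutex_unlock(&p->lock);
            do_send(p);           /* only this one thread runs deferred sends */
            pthread_mutex_lock(&p->lock);
        } else if (p->stop) {
            break;
        }
    }
    pthread_mutex_unlock(&p->lock);
    return NULL;
}

/* The post-send decision: direct call when idle, defer when busy. */
static void post_send(struct port *p)
{
    pthread_mutex_lock(&p->lock);
    if (p->engine_busy) {
        p->work_pending = true;
        p->sends_deferred++;
        pthread_cond_signal(&p->kick);
        pthread_mutex_unlock(&p->lock);
    } else {
        p->sends_direct++;
        pthread_mutex_unlock(&p->lock);
        do_send(p);
    }
}

static void port_init(struct port *p)
{
    pthread_mutex_init(&p->lock, NULL);
    pthread_cond_init(&p->kick, NULL);
    p->work_pending = p->stop = p->engine_busy = false;
    p->sends_direct = p->sends_deferred = 0;
    pthread_create(&p->worker, NULL, worker_fn, p);
}

static void port_stop(struct port *p)
{
    pthread_mutex_lock(&p->lock);
    p->stop = true;
    pthread_cond_signal(&p->kick);
    pthread_mutex_unlock(&p->lock);
    pthread_join(p->worker, NULL);
}
```

Because the worker is a single thread per port, deferred sends from many CPUs serialize naturally instead of contending for the engine lock.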
    •
      IB/qib: Fix an incorrect log message · f3331f88
      Betty Dall authored
      There is a cut-and-paste typo in the function qib_pci_slot_reset()
      where it prints that the "link_reset" function is called rather than
      the "slot_reset" function.  This makes the message misleading.
      Signed-off-by: Betty Dall <betty.dall@hp.com>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
  2. 17 Jul, 2012 1 commit
  3. 10 Jul, 2012 1 commit
  4. 09 Jul, 2012 3 commits
    •
      IB/qib: RCU locking for MR validation · 8aac4cc3
      Mike Marciniszyn authored
      Profiling indicates that MR validation locking is expensive.  The MR
      table is largely read-only and is a suitable candidate for RCU locking.
      
      The patch uses RCU locking during validation to eliminate one
      lock/unlock during that validation.
      Reviewed-by: Mike Heinz <michael.william.heinz@intel.com>
      Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
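The read-mostly pattern behind this change can be sketched in userspace C. This is a deliberately simplified analogy with invented names (struct mregion, mr_table, mr_lookup, mr_insert): readers validate an lkey with a lock-free acquire load, analogous to rcu_read_lock()/rcu_dereference(), while writers serialize on a mutex and publish with a release store, analogous to rcu_assign_pointer() under the table lock. Real RCU additionally defers freeing replaced entries until a grace period elapses; that reclamation half is omitted here.

```c
#include <stdatomic.h>
#include <pthread.h>
#include <stddef.h>

/* Hypothetical, vastly simplified MR table. */
struct mregion { unsigned lkey; };

#define TABLE_SIZE 16
static _Atomic(struct mregion *) mr_table[TABLE_SIZE];
static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;

/* Read side: no lock taken -- just an acquire load plus a key check. */
static struct mregion *mr_lookup(unsigned lkey)
{
    struct mregion *mr =
        atomic_load_explicit(&mr_table[lkey % TABLE_SIZE],
                             memory_order_acquire);
    return (mr && mr->lkey == lkey) ? mr : NULL;
}

/* Write side: serialized by the lock, published with a release store
 * so readers see a fully initialized entry. */
static void mr_insert(struct mregion *mr)
{
    pthread_mutex_lock(&table_lock);
    atomic_store_explicit(&mr_table[mr->lkey % TABLE_SIZE], mr,
                          memory_order_release);
    pthread_mutex_unlock(&table_lock);
}
```

Since validation runs on every packet while registration is rare, removing the lock/unlock from the read path is where the savings come from.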
    •
      IB/qib: Avoid returning EBUSY from MR deregister · 6a82649f
      Mike Marciniszyn authored
      A timing issue can occur in which qib_mr_dereg() returns -EBUSY
      because the MR use count is not zero.
      
      This can occur if the MR is de-registered while RDMA read response
      packets are being progressed from the SDMA ring.  The suspicion is
      that the peer sent an RDMA read request, which has already been copied
      across to the peer.  The peer sees the completion of his request and
      then communicates to the responder that the MR is not needed any
      longer.  The responder tries to de-register the MR, catching some
      responses remaining in the SDMA ring holding the MR use count.
      
      The code now uses a get/put paradigm to track MR use counts and
      coordinates with the MR de-registration process using a completion
      when the count has reached zero.  A timeout on the delay is in place
      to catch other EBUSY issues.
      
      The reference count protocol is as follows:
      - The return to the user counts as 1
      - A reference from the lk_table or the qib_ibdev counts as 1.
      - Transient I/O operations increase/decrease as necessary
      
      A lot of code duplication has been folded into the new routines
      init_qib_mregion() and deinit_qib_mregion().  Additionally, explicit
      initialization of fields to zero is now handled by kzalloc().
      
      Also, the duplicated 'while.*num_sge' code that decrements reference
      counts has been consolidated into qib_put_ss().
      Reviewed-by: Ramkrishna Vepa <ramkrishna.vepa@intel.com>
      Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
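The get/put-plus-completion scheme can be sketched in plain C. This is a userspace illustration with invented names (struct mr, mr_get, mr_put, mr_dereg); a pthread condition variable stands in for the kernel's struct completion, and the timeout safety net mentioned above (wait_for_completion_timeout in the real code) is omitted for brevity. The MR starts with one reference owned by the user, transient I/O takes extra references, and deregistration drops the user reference and blocks until the count drains to zero instead of failing with -EBUSY.

```c
#include <pthread.h>

/* Hypothetical sketch of the MR reference-count protocol. */
struct mr {
    pthread_mutex_t lock;
    pthread_cond_t  zero;     /* stands in for struct completion */
    int             refcount;
};

static void mr_init(struct mr *m)
{
    pthread_mutex_init(&m->lock, NULL);
    pthread_cond_init(&m->zero, NULL);
    m->refcount = 1;          /* the reference returned to the user */
}

/* Transient I/O (e.g. a response still in the SDMA ring) holds a ref. */
static void mr_get(struct mr *m)
{
    pthread_mutex_lock(&m->lock);
    m->refcount++;
    pthread_mutex_unlock(&m->lock);
}

static void mr_put(struct mr *m)
{
    pthread_mutex_lock(&m->lock);
    if (--m->refcount == 0)
        pthread_cond_signal(&m->zero);   /* complete() */
    pthread_mutex_unlock(&m->lock);
}

/* Deregister: drop the user reference, then wait for in-flight
 * references to drain rather than returning -EBUSY. */
static void mr_dereg(struct mr *m)
{
    pthread_mutex_lock(&m->lock);
    --m->refcount;
    while (m->refcount > 0)
        pthread_cond_wait(&m->zero, &m->lock);
    pthread_mutex_unlock(&m->lock);
}
```

In the race described above, the last mr_put() from the SDMA completion path wakes the blocked mr_dereg() instead of the deregistration observing a nonzero count and failing.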
    •
      IB/qib: Fix UC MR refs for immediate operations · 354dff1b
      Mike Marciniszyn authored
      An MR reference leak exists when handling UC RDMA writes with
      immediate data because we manipulate the reference counts as if the
      operation had been a send.
      
      This patch moves the last_imm label so that the RDMA write operations
      with immediate data converge at the CQ building code.  The copy/MR
      deref code is now done correctly prior to the branch to last_imm.
      Reviewed-by: Edward Mascarenhas <edward.mascarenhas@intel.com>
      Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
  5. 30 Jun, 2012 13 commits
  6. 29 Jun, 2012 13 commits
  7. 28 Jun, 2012 6 commits