1. 03 Aug, 2018 1 commit
    • RDMA/hns: Support flush cqe for hip08 in kernel space · 0425e3e6
      Yixian Liu authored
      According to the IB protocol, there are cases in which work requests must
      return the flush error completion status through the completion queue. Due
      to a hardware limitation, the driver needs to assist the flush process.
      
      This patch adds support for flush cqe on hip08 in the cases where it is
      needed, such as poll cqe, post send, post recv and aeqe handling.
      
      The patch also takes compatibility between kernel and user space into
      account.
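
      As a rough illustration of what "assisting the flush process" means, a
      driver that cannot rely on hardware to flush outstanding work requests can
      synthesize the flush completions itself when the CQ is polled. The helper
      below is only a sketch, not the hip08 code: struct ib_wc and
      IB_WC_WR_FLUSH_ERR come from the core verbs API, everything else is
      assumed.

      #include <linux/string.h>
      #include <rdma/ib_verbs.h>

      /* Report a software-generated flush completion for a work request that
       * was still outstanding when the QP moved to the error state. */
      static void sketch_flush_one_wqe(struct ib_wc *wc, u64 wr_id,
                                       struct ib_qp *qp)
      {
              memset(wc, 0, sizeof(*wc));
              wc->wr_id  = wr_id;               /* hand the original wr_id back */
              wc->qp     = qp;
              wc->status = IB_WC_WR_FLUSH_ERR;  /* IB-defined flush status */
      }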
      Signed-off-by: Yixian Liu <liuyixian@huawei.com>
      Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
  2. 01 Aug, 2018 12 commits
    • IB/IPoIB: Set ah valid flag in multicast send flow · 75da9606
      Denis Drozdov authored
      The change to the ipoib_ah data structure that added the "valid" flag,
      together with the checks of ah->valid in ipoib_start_xmit, affected the
      multicast packet flow.
      
      Since the multicast flow doesn't invoke path_rec_start, the "ah->valid"
      flag remains unset, so ipoib_start_xmit ends up calling neigh_refresh_path
      instead of sending the packet using the neigh.
      
      "ah->valid" has to be set in multicast send flow. As a result IPoIB
      starts sending packets via neigh immediately and eliminates 60sec delay
      of neigh keep alive interval.
      
      A typical example of this issue is two sequential arpings:
      
      arping 11.134.208.9 -> got response (mcast_send)
      arping 11.134.208.9 -> no response  (ah->valid = 0)
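
      A minimal sketch of the intended behaviour, assuming the ipoib_ah layout
      from fa9391db (the "valid" field); the type and function names below are
      illustrative, not the exact IPoIB code:

      #include <linux/types.h>

      struct ib_ah;                             /* opaque core verbs handle */

      struct sketch_ipoib_ah {                  /* stand-in for struct ipoib_ah */
              struct ib_ah *ah;
              int valid;                        /* flag added by fa9391db */
      };

      /* Multicast send flow: mark the freshly attached AH usable; this is the
       * assignment the patch adds. */
      static void sketch_mcast_attach_ah(struct sketch_ipoib_ah *ah)
      {
              ah->valid = 1;
      }

      /* ipoib_start_xmit only sends via the neigh when the AH is valid;
       * otherwise it falls back to refreshing the path. */
      static bool sketch_can_tx_via_neigh(const struct sketch_ipoib_ah *ah)
      {
              return ah && ah->valid;
      }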
      
      Fixes: fa9391db ("RDMA/ipoib: Update paths on CLIENT_REREG/SM_CHANGE events")
      Signed-off-by: Denis Drozdov <denisd@mellanox.com>
      Reviewed-by: Erez Shitrit <erezsh@mellanox.com>
      Reviewed-by: Feras Daoud <ferasda@mellanox.com>
      Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
      Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
    • IB/uverbs: Allow all DESTROY commands to succeed after disassociate · 0f50d88a
      Jason Gunthorpe authored
      The disassociate function was broken by design because it failed all
      commands. This prevented userspace from calling destroy on a uobject after
      it detected a device fatal error, and thus prevented the resources from
      being reclaimed in userspace.
      
      The fix is now straightforward: when anything other than the user destroys
      a uobject, the object remains in the IDR with a NULL context and object
      pointer. All lookup locking modes other than DESTROY will fail. When the
      user ultimately calls the destroy function, the object is simply dropped
      from the IDR while any related information is returned.
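
      The rule above boils down to a check like this during lookup (sketch only;
      apart from the DESTROY lookup mode named in this series, the names and the
      error code are assumed):

      #include <linux/errno.h>
      #include <rdma/ib_verbs.h>

      /* After disassociation the uobject stays in the IDR with a NULL context
       * and object pointer.  Every lookup mode except DESTROY must now fail,
       * so only the user's final destroy call can still reap the id and return
       * any leftover information. */
      static int sketch_check_disassociated(struct ib_uobject *uobj,
                                            bool is_destroy_lookup)
      {
              if (!uobj->context)               /* device already disassociated */
                      return is_destroy_lookup ? 0 : -EIO;
              return 0;
      }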
      Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
    • IB/uverbs: Do not block disassociate during write() · a9b66d64
      Jason Gunthorpe authored
      Now that all the callbacks are safe to run concurrently with
      disassociation, this test can be eliminated. The ufile core infrastructure
      becomes entirely self-contained and is not sensitive to disassociation.
      Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
    • IB/uverbs: Do not pass struct ib_device to the ioctl methods · e83f0ecd
      Jason Gunthorpe authored
      This does the same as the patch before, except for ioctl. The rules are
      the same, but for the ioctl methods the core code handles setting up the
      uobject.
      
      - Retrieve the ib_dev from uobject->context->device (see the sketch
        below). This is safe under ioctl as the core has already done
        rdma_alloc_begin_uobject and so CREATE calls are entirely protected by
        the rwsem.
      - Retrieve the ib_dev from uobject->object
      - Call ib_uverbs_get_ucontext()
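
      A minimal sketch of the first option; only uobject->context->device and
      rdma_alloc_begin_uobject come from this log, the helper name is assumed:

      #include <rdma/ib_verbs.h>

      /* Safe in ioctl CREATE methods: rdma_alloc_begin_uobject() has already
       * run and the call is protected by the rwsem, so the ucontext cannot be
       * disassociated underneath us. */
      static struct ib_device *sketch_method_ib_dev(struct ib_uobject *uobj)
      {
              return uobj->context->device;
      }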
      Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
    • IB/uverbs: Do not pass struct ib_device to the write based methods · bbd51e88
      Jason Gunthorpe authored
      This is a step to get rid of the global check for disassociation. In this
      model, the ib_dev is not proven to be valid by the core code and cannot be
      provided to the method. Instead, every method decides if it is able to
      run after disassociation and obtains the ib_dev using one of three
      different approaches:
      
      - Call srcu_dereference on the udevice's ib_dev (see the sketch below). As
        before, this means the method cannot be called after disassociation
        begins. (eg alloc ucontext)
      - Retrieve the ib_dev from the ucontext, via ib_uverbs_get_ucontext()
      - Retrieve the ib_dev from the uobject->object after checking
        under SRCU if disassociation has started (eg uobj_get)
      
      Largely, the code is already prepared for this; the main work is to provide
      an ib_dev after calling uobj_alloc(). The few other places simply use
      ib_uverbs_get_ucontext() to get the ib_dev.
      
      This flexibility will let the next patches allow destroy to operate
      after disassociation.
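
      The srcu_dereference variant described above looks roughly like this
      (sketch only; it mirrors how the uverbs device keeps its ib_dev behind
      disassociate_srcu, but the structure and field names are assumed):

      #include <linux/srcu.h>
      #include <rdma/ib_verbs.h>

      struct sketch_uverbs_dev {
              struct srcu_struct disassociate_srcu;
              struct ib_device __rcu *ib_dev;   /* cleared on disassociation */
      };

      /* Returns NULL once disassociation has started.  The caller must pair
       * this with srcu_read_unlock(&udev->disassociate_srcu, *srcu_key). */
      static struct ib_device *sketch_get_ib_dev(struct sketch_uverbs_dev *udev,
                                                 int *srcu_key)
      {
              *srcu_key = srcu_read_lock(&udev->disassociate_srcu);
              return srcu_dereference(udev->ib_dev, &udev->disassociate_srcu);
      }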
      Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
    • IB/uverbs: Lower the test for ongoing disassociation · cc2e14e6
      Jason Gunthorpe authored
      Commands that are reading/writing to objects can test for an ongoing
      disassociation during their initial call to rdma_lookup_get_uobject.  This
      directly prevents all of these commands from conflicting with an ongoing
      disassociation.
      Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
    • IB/uverbs: Allow uobject allocation to work concurrently with disassociate · 1e857e65
      Jason Gunthorpe authored
      After all the recent structural changes this is now straightforward: hold
      the hw_destroy_rwsem across the entire uobject creation. We already take
      this semaphore on the success path, so holding it a bit longer is not
      going to change performance.
      
      After this change none of the create callbacks require the
      disassociate_srcu lock to be correct.
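
      In other words, creation now looks roughly like this (sketch; it assumes
      hw_destroy_rwsem lives on the ufile and elides everything inside the
      critical section):

      #include <linux/errno.h>
      #include <linux/rwsem.h>

      struct sketch_ufile {
              struct rw_semaphore hw_destroy_rwsem;
      };

      static int sketch_create_uobject(struct sketch_ufile *ufile)
      {
              int ret;

              if (!down_read_trylock(&ufile->hw_destroy_rwsem))
                      return -EIO;    /* disassociation already in progress */

              /* ... allocate the uobject, call the driver to create the HW
               *     object and commit it, all under the semaphore ... */
              ret = 0;

              up_read(&ufile->hw_destroy_rwsem);
              return ret;
      }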
      Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
    • IB/uverbs: Allow RDMA_REMOVE_DESTROY to work concurrently with disassociate · 7452a3c7
      Jason Gunthorpe authored
      After all the recent structural changes this is now straightforward: hoist
      the hw_destroy_rwsem up out of rdma_explicit_destroy and wrap it around
      the uobject write lock as well as the destroy.
      
      This is necessary as obtaining a write lock concurrently with
      uverbs_destroy_ufile_hw() will cause malfunction.
      
      After this change none of the destroy callbacks require the
      disassociate_srcu lock to be correct.
      
      This requires introducing a new lookup mode, UVERBS_LOOKUP_DESTROY, as the
      ioctl interface needs to hold an unlocked kref until all command
      verification is completed.
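
      The resulting ordering on the destroy side is roughly as follows (sketch;
      only hw_destroy_rwsem and uverbs_destroy_ufile_hw() are named by this log,
      and the locked section itself is elided):

      #include <linux/errno.h>
      #include <linux/rwsem.h>

      static int sketch_destroy_under_rwsem(struct rw_semaphore *hw_destroy_rwsem)
      {
              /* Taking the rwsem before the uobject write lock means the write
               * lock can never race against uverbs_destroy_ufile_hw(). */
              if (!down_read_trylock(hw_destroy_rwsem))
                      return -EIO;    /* disassociation is already running */

              /* ... write-lock the uobject, destroy the HW object, unlock ... */

              up_read(hw_destroy_rwsem);
              return 0;
      }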
      Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
    • IB/uverbs: Convert 'bool exclusive' into an enum · 9867f5c6
      Jason Gunthorpe authored
      This is more readable, and future patches will need a 3rd lookup type.
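
      A plausible shape for the resulting enum; only UVERBS_LOOKUP_DESTROY is
      named elsewhere in this series, the other names here are assumed:

      enum sketch_rdma_lookup_mode {
              UVERBS_LOOKUP_READ,     /* was exclusive == false */
              UVERBS_LOOKUP_WRITE,    /* was exclusive == true  */
              UVERBS_LOOKUP_DESTROY,  /* the 3rd type, added later in the series */
      };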
      Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
    • IB/uverbs: Consolidate uobject destruction · 87ad80ab
      Jason Gunthorpe authored
      There are several flows that can destroy a uobject, and each one is
      minimized and sprinkled throughout the code base, making the destroy path
      difficult to understand and very hard to modify.
      
      Consolidate all of these into uverbs_destroy_uobject() and call it in all
      cases where a uobject has to be destroyed.
      
      This makes one change to the lifecycle: during any abort (eg when
      alloc_commit is not called) we always call out to alloc_abort, even if
      remove_commit needs to be called to delete a HW object.
      
      This also renames RDMA_REMOVE_DURING_CLEANUP to RDMA_REMOVE_ABORT to
      clarify its actual usage, and revises some of the comments to reflect the
      lifecycle as seen by the type implementation.
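
      The consolidated helper can be pictured like this (sketch only; the names
      taken from this log keep their roles, everything else, including the
      signature and the elided steps, is assumed):

      #include <rdma/ib_verbs.h>

      enum sketch_remove_reason {
              SKETCH_REMOVE_DESTROY,  /* normal, user-requested destroy */
              SKETCH_REMOVE_ABORT,    /* was RDMA_REMOVE_DURING_CLEANUP */
      };

      /* Single exit point for every uobject, regardless of how it dies. */
      static void sketch_uverbs_destroy_uobject(struct ib_uobject *uobj,
                                                enum sketch_remove_reason why)
      {
              if (why == SKETCH_REMOVE_ABORT) {
                      /* alloc_commit never ran: always finish with alloc_abort,
                       * even if remove_commit had to delete a HW object first */
              }
              /* ... common tail for every flow: remove the uobject from the
               *     IDR and drop its kref ... */
      }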
      Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
    • IB/uverbs: Make the write path destroy methods use the same flow as ioctl · 32ed5c00
      Jason Gunthorpe authored
      The ridiculous dance with uobj_remove_commit() is not needed; the write
      path can follow the same flow as ioctl: lock and destroy the HW object,
      then use the data left over in the uobject to form the response to
      userspace.
      
      Two helpers are introduced to make this flow straightforward for the
      caller.
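
      The shared flow the two helpers provide can be sketched as follows (the
      helper names, the response layout and the elided destroy step are all
      assumed):

      #include <linux/errno.h>
      #include <linux/types.h>
      #include <linux/uaccess.h>
      #include <rdma/ib_verbs.h>

      struct sketch_destroy_resp {
              __u32 events_reported;            /* example of "left over" data */
      };

      static int sketch_destroy_then_reply(struct ib_uobject *uobj,
                                           void __user *user_resp)
      {
              struct sketch_destroy_resp resp = {};

              /* 1. lock the uobject and destroy the HW object (elided)        */
              /* 2. bookkeeping such as event counts stays behind in the
               *    uobject and would be copied into resp here                 */
              /* 3. reply to userspace exactly as the ioctl flow already does  */
              if (copy_to_user(user_resp, &resp, sizeof(resp)))
                      return -EFAULT;
              return 0;
      }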
      Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
    • IB/uverbs: Remove rdma_explicit_destroy() from the ioctl methods · aa72c9a5
      Jason Gunthorpe authored
      The core code will destroy the HW object on behalf of the method; if the
      method provides an implementation, it must simply copy data from the stub
      uobj into the response. Destroy methods cannot touch the HW object.
      Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
  3. 31 Jul, 2018 27 commits