- 26 Jul, 2018 6 commits
-
-
Parav Pandit authored
Currently, if the cm_id is not bound to any netdevice, the net namespace is ignored for that cm_id, which is incorrect. Regardless of whether the cm_id is bound to a netdevice, the net namespace must match. When the cm_id is bound to a netdevice, both the net namespace and the netdevice must match.

Fixes: 4c21b5bc ("IB/cma: Add net_dev and private data checks to RDMA CM")
Signed-off-by: Parav Pandit <parav@mellanox.com>
Reviewed-by: Daniel Jurgens <danielj@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
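A minimal sketch of the matching rule described above, with hypothetical names rather than the real cma.c helpers:

#include <linux/netdevice.h>
#include <net/net_namespace.h>

/*
 * Illustrative only: the net namespace must always match, and the bound
 * netdevice must also match whenever the cm_id has one.
 */
static bool demo_cm_id_matches_req(const struct net *id_net,
				   const struct net_device *id_bound_dev,
				   const struct net *req_net,
				   const struct net_device *req_dev)
{
	if (!net_eq(id_net, req_net))
		return false;		/* namespace must always match */
	if (id_bound_dev && id_bound_dev != req_dev)
		return false;		/* bound netdevice must match too */
	return true;
}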
-
Parav Pandit authored
Currently, when no netdevice is found for a request and the request is for a RoCE port, the listener is still matched as long as the port number matches, ignoring the netdevice. Now that we always prefer to have a netdevice associated with RoCE, don't consider RoCE ports when the netdevice is not found; in other words, a NULL netdevice with RoCE is not acceptable. Therefore, remove this confusing RoCE port ignorance check.

Signed-off-by: Parav Pandit <parav@mellanox.com>
Reviewed-by: Daniel Jurgens <danielj@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Parav Pandit authored
For RoCE, when CM requests are received for RC and UD connections, the netdevice of the incoming request is unavailable. Because of that, CM requests are always forwarded to the init_net namespace. Now that we have the GID attribute available, introduce an SGID attribute in incoming CM requests and refer to its netdevice. This is similar to the existing SGID attribute field in outgoing CM requests for RC and UD transports.

Signed-off-by: Parav Pandit <parav@mellanox.com>
Reviewed-by: Daniel Jurgens <danielj@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Jason Gunthorpe authored
This driver doesn't provide any kernel services; it only provides an interface via uverbs, so it should depend on, not select, uverbs support.

Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Raju Rangoju authored
This patch implements the SRQ-specific verbs such as create/destroy/modify and post_srq_recv, and adds SRQ-specific structures and defines to t4.h and the uapi. It also updates the CQ poll logic to deal with completions that are associated with SRQs, and handles kernel mode SRQ_LIMIT events as well as flushed SRQ buffers.

Signed-off-by: Raju Rangoju <rajur@chelsio.com>
Reviewed-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
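For reference, posting a single receive buffer to an SRQ through the generic verb could look like the sketch below; the helper is hypothetical, not cxgb4 code, and passing NULL for the bad-WR argument relies on the core change from the 24 Jul series further down.

#include <rdma/ib_verbs.h>

/* 'dma_addr', 'len' and 'lkey' are assumed to describe a buffer set up elsewhere. */
static int demo_post_one_srq_recv(struct ib_srq *srq, u64 dma_addr,
				  u32 len, u32 lkey)
{
	struct ib_sge sge = {
		.addr   = dma_addr,
		.length = len,
		.lkey   = lkey,
	};
	struct ib_recv_wr wr = {
		.wr_id   = dma_addr,	/* anything that identifies the buffer */
		.sg_list = &sge,
		.num_sge = 1,
	};

	return ib_post_srq_recv(srq, &wr, NULL);
}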
-
Raju Rangoju authored
This patch adds kernel mode t4_srq structures and support functions, uapi structures and defines, as well as firmware work request structures.

Signed-off-by: Raju Rangoju <rajur@chelsio.com>
Reviewed-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
- 25 Jul, 2018 13 commits
-
-
Varsha Rao authored
Remove unnecessary parentheses to fix the clang warning of extraneous parentheses.

Signed-off-by: Varsha Rao <rvarsha016@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
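A reduced example of the warning and its fix (not the driver's actual code); clang flags the redundant parentheses around an equality comparison used directly as a condition:

static int demo_is_ready(int state)
{
	if ((state == 1))	/* before: clang warns (-Wparentheses-equality) */
		return 1;

	if (state == 1)		/* after: same logic, no warning */
		return 1;

	return 0;
}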
-
Bart Van Assche authored
This patch avoids that the following compiler warning is reported when building with gcc 8 and W=1:

In function 'ocrdma_mbx_get_ctrl_attribs',
    inlined from 'ocrdma_init_hw' at drivers/infiniband/hw/ocrdma/ocrdma_hw.c:3224:11:
drivers/infiniband/hw/ocrdma/ocrdma_hw.c:1368:3: warning: 'strncpy' output may be truncated copying 31 bytes from a string of length 31 [-Wstringop-truncation]
   strncpy(dev->model_number,
   ^~~~~~~~~~~~~~~~~~~~~~~~~~
           hba_attribs->controller_model_number, 31);
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Acked-by: Selvin Xavier <selvin.xavier@broadcom.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
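One common way to avoid this class of warning, shown with a hypothetical struct rather than the ocrdma code (and not necessarily the fix this patch applied): strscpy() always NUL-terminates and reports truncation, sidestepping the strncpy() pitfall gcc 8 flags.

#include <linux/string.h>

struct demo_attribs {
	char model_number[32];
};

static void demo_copy_model(struct demo_attribs *dst, const char *src)
{
	strscpy(dst->model_number, src, sizeof(dst->model_number));
}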
-
Jason Gunthorpe authored
We have a parallel unlocked reader and writer with ib_uverbs_get_context() vs everything else, and nothing guarantees this works properly. Audit and fix all of the places that access ucontext to use one of the following locking schemes:

- Call ib_uverbs_get_ucontext() under SRCU and check for failure.
- Access the ucontext through a struct ib_uobject context member while holding a READ or WRITE lock on the uobject. This value cannot be NULL and has no race.
- Hold the ucontext_lock and check that ufile->ucontext is not NULL.

This also re-implements ib_uverbs_get_ucontext() in a way that is safe against concurrent ib_uverbs_get_context() and disassociation. As a side effect, every access to ucontext in the commands is via ib_uverbs_get_ucontext() with an error check, or via the uobject, so there is no longer any need for the core code to check ucontext on every command call. These checks are also removed.

Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
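A schematic of the first scheme above; the member and helper names are assumed from the 4.19-era uverbs core (the private drivers/infiniband/core/uverbs.h header), so treat this as a sketch rather than the exact code:

static int demo_cmd(struct ib_uverbs_file *ufile)
{
	struct ib_ucontext *ucontext;
	int srcu_key, ret = 0;

	srcu_key = srcu_read_lock(&ufile->device->disassociate_srcu);

	ucontext = ib_uverbs_get_ucontext(ufile);
	if (IS_ERR(ucontext)) {
		ret = PTR_ERR(ucontext);
		goto out;
	}
	/* ... use ucontext while the SRCU read lock is held ... */
out:
	srcu_read_unlock(&ufile->device->disassociate_srcu, srcu_key);
	return ret;
}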
-
Jason Gunthorpe authored
This approach matches the standard flow of the typical write method, which relies on the HW object to store the device and on the uobject to access the ucontext. Avoiding the use of devx_ufile2uctx() in several places will make revising the semantics of ib_uverbs_get_ucontext() in the next patch simpler.

Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Jason Gunthorpe authored
Allocating the struct file during alloc_begin creates a strange asymmetry with IDR, where the FD has two krefs pointing at it during the pre-commit phase. In particular, this makes the abort process for FDs very strange and confusing. For instance, abort currently calls the type's destroy_object twice, and the fops release once, if abort is done. This is very counter-intuitive. No fops should be called until alloc_commit succeeds, and destroy_object should only ever be called once.

Moving the struct file allocation to alloc_commit is now simple, as we already support failure of rdma_alloc_commit_uobject, with all the required rollback pieces. This creates an understandable symmetry with IDR and simplifies/fixes the abort handling for FD types.

Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Jason Gunthorpe authored
The ioctl framework already does this correctly, but the write path did not. This is trivially fixed by simply using a standard pattern to return uobj_alloc_commit() as the last statement in every function.

Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
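Schematically, with a hypothetical handler and assuming uobj_alloc_commit() keeps its int-returning form, the pattern is simply:

#include <rdma/uverbs_std_types.h>

static int demo_create_something(struct ib_uobject *uobj, void *hw_obj)
{
	uobj->object = hw_obj;

	/* Last statement in every handler: nothing touches uobj afterwards,
	 * and a commit failure now propagates to the caller. */
	return uobj_alloc_commit(uobj);
}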
-
Jason Gunthorpe authored
The locking here has always been a bit crazy and spread out; upon some careful analysis we can simplify things.

Create a single function uverbs_destroy_ufile_hw() that internally handles all locking. This pulls together pieces of the process that were sprinkled all over the place into one spot and covers them with one lock. It eliminates several duplicate/confusing locks and makes the control flow in ib_uverbs_close() and ib_uverbs_free_hw_resources() extremely simple.

Unfortunately we have to keep an extra mutex, ucontext_lock. This lock is logically part of the rwsem and provides the 'down write, fail if write locked, wait if read locked' semantic we require.

Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Jason Gunthorpe authored
Rename 'cleanup_rwsem' to 'hw_destroy_rwsem', which is held across any call to the type destroy function (aka 'hw' destroy). The main purpose of this lock is to prevent normal add and destroy from running concurrently with uverbs_cleanup_ufile().

Since the uobjects list is always manipulated under the 'hw_destroy_rwsem', we can eliminate the uobjects_lock in the cleanup function. This allows converting that lock to a very simple spinlock with a narrow critical section.

Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Jason Gunthorpe authored
The locking requirements here have changed slightly now that we can rely on the ib_uverbs_file always existing and containing all the necessary locking infrastructure. That means we can get rid of the cleanup_mutex usage (this was protecting the check on !uobj->context).

Otherwise, follow the same pattern that IDR uses for destroy: acquire exclusive write access, then call destroy and then undo the 'lookup'.

Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Jason Gunthorpe authored
This wasn't wrong, but the placement of two krefs didn't make any sense. Follow some simple rules:

- A kref is held inside uobjects_list.
- A kref is held inside the IDR.
- A kref is held inside file->private.
- A stack-based kref is passed between alloc_begin and alloc_abort/alloc_commit.

Any place we destroy one of the above pointers, we stick a put, or 'move' the kref into another pointer. The key functions have sensible semantics:

- alloc_uobj fully initializes the common members in uobj, including the list.
- Get rid of the uverbs_idr_remove_uobj helper, since IDR remove does require a put, but it depends on the situation. Later patches will re-consolidate this differently.
- alloc_abort always consumes the passed kref; done in the type.
- alloc_commit always consumes the passed kref; done in the type.
- rdma_remove_commit_uobject always pairs with a lookup_get.

After it is all done, the only control flow changes are to:

- Move a get from alloc_commit_fd_uobject to rdma_alloc_commit_uobject.
- Add a put to remove_commit_idr_uobject.
- Consistently use rdma_lookup_put in rdma_remove_commit_uobject at the right place.

Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Jason Gunthorpe authored
The alloc_commit callback makes the uobj visible to other threads, and it does so using a 'move' semantic of the uobj kref on the stack into the public storage (e.g. the IDR, uobject list and file_private_data). Once this is done, another thread could start up and trigger deletion of the kref. Fortunately cleanup_rwsem happens to prevent this from being a bug, but that is a fantastically unclear side effect.

Re-organize things so that alloc_commit is the last thing to touch the uobj, get rid of the sneaky implicit dependency on cleanup_rwsem, and add a comment reminding that the uobj is no longer kref'd after alloc_commit.

Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Jason Gunthorpe authored
Our ABI for write() uses an s32 for FDs and a u32 for IDRs, but internally we ended up implicitly casting these ABI values into an 'int'. For ioctl() we use an s64 for FDs and a u64 for IDRs, again casting to an int. The various casts to int are all missing range checks, which can cause userspace values that should be considered invalid to be accepted.

Fix this by making the generic lookup routine accept an s64, which does not truncate the write API's u32/s32 or the ioctl API's s64. Then push the detailed range checking down to the actual type implementations to be shared by both interfaces.

Finally, change the copy of the uobj->id to sign-extend into an s64, so, e.g., if we ever wish to return a negative value for a FD it is carried properly.

This ensures that userspace values are never weirdly interpreted due to the various truncations, and everything that is really out of range gets an EINVAL.

Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
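A reduced sketch of the per-type checks, with hypothetical function names (the exact limits in the real code may differ):

#include <linux/kernel.h>	/* INT_MAX, UINT_MAX */
#include <linux/types.h>	/* s64 */

static int demo_check_idr_id(s64 id)
{
	/* The write ABI carries IDR handles as a u32. */
	if (id < 0 || id > UINT_MAX)
		return -EINVAL;
	return 0;
}

static int demo_check_fd(s64 id)
{
	/* A lookup only makes sense for FDs in 0..INT_MAX. */
	if (id < 0 || id > INT_MAX)
		return -EINVAL;
	return 0;
}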
-
Jason Gunthorpe authored
If the method fails after calling rdma_explicit_destroy (e.g. if copy_to_user faults) then it will trigger a kernel oops:

BUG: unable to handle kernel NULL pointer dereference at 0000000000000000
PGD 800000000548d067 P4D 800000000548d067 PUD 54a0067 PMD 0
SMP PTI
CPU: 0 PID: 359 Comm: ibv_rc_pingpong Not tainted 4.18.0-rc1+ #28
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.7.5-0-ge51488c-20140602_164612-nilsson.home.kraxel.org 04/01/2014
RIP: 0010: (null)
Code: Bad RIP value.
RSP: 0018:ffffc900001a3bf0 EFLAGS: 00010246
RAX: 0000000000000000 RBX: ffff88000603bd00 RCX: 0000000000000003
RDX: 0000000000000001 RSI: 0000000000000001 RDI: ffff88000603bd00
RBP: 0000000000000001 R08: ffffc900001a3cf8 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000000 R12: ffffc900001a3cf0
R13: 0000000000000000 R14: ffffc900001a3cf0 R15: 0000000000000000
FS:  00007fb00dda8700(0000) GS:ffff880007c00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: ffffffffffffffd6 CR3: 000000000548e004 CR4: 00000000003606b0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 ? rdma_lookup_put_uobject+0x22/0x50 [ib_uverbs]
 ? uverbs_finalize_object+0x3b/0x60 [ib_uverbs]
 ? uverbs_finalize_attrs+0x128/0x140 [ib_uverbs]
 ? ib_uverbs_cmd_verbs+0x698/0x7c0 [ib_uverbs]
 ? find_held_lock+0x2d/0x90
 ? __might_fault+0x39/0x90
 ? ib_uverbs_ioctl+0x111/0x1f0 [ib_uverbs]
 ? do_vfs_ioctl+0xa0/0x6d0
 ? trace_hardirqs_on_caller+0xed/0x180
 ? _raw_spin_unlock_irq+0x24/0x40
 ? syscall_trace_enter+0x138/0x1d0
 ? ksys_ioctl+0x35/0x60
 ? __x64_sys_ioctl+0x11/0x20
 ? do_syscall_64+0x5b/0x1c0
 ? entry_SYSCALL_64_after_hwframe+0x49/0xbe

This is because the type was replaced with the null_type during explicit destroy, which cannot complete the destruction. One of the side effects of replacing the type is to make the object handle totally unreachable, so no other command could attempt to use it, even though it remains on the uobjects list.

We can get the same end result by just fully destroying the object inside rdma_explicit_destroy and leaving the caller the residual kref for the uobj, with no attached HW object and no presence in the uobjects list.

Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
-
- 24 Jul, 2018 21 commits
-
-
Bart Van Assche authored
Instead of declaring and passing a dummy 'bad_wr' pointer, pass NULL as third argument to ib_post_(send|recv|srq_recv)().

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Reviewed-by: Chuck Lever <chuck.lever@oracle.com>
Acked-by: Anna Schumaker <Anna.Schumaker@netapp.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
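The shape of the change, shown with a made-up caller rather than any of the actual call sites:

#include <rdma/ib_verbs.h>

static int demo_post(struct ib_qp *qp, struct ib_send_wr *wr)
{
	struct ib_send_wr *bad_wr;
	int ret;

	/* Before: a dummy 'bad_wr' output pointer that is never inspected. */
	ret = ib_post_send(qp, wr, &bad_wr);
	if (ret)
		return ret;

	/* After: pass NULL when the failed WR is of no interest. */
	return ib_post_send(qp, wr, NULL);
}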
-
Bart Van Assche authored
Instead of declaring and passing a dummy 'bad_wr' pointer, pass NULL as third argument to ib_post_(send|recv|srq_recv)().

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Acked-by: Ursula Braun <ubraun@linux.ibm.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Bart Van Assche authored
Remove a WARN_ON() statement that verifies something that is guaranteed by the RDMA API, namely that the failed_wr pointer is not touched if an ib_post_send() call succeeds and that it points at the failed wr if an ib_post_send() call fails.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Acked-by: Ursula Braun <ubraun@linux.ibm.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Bart Van Assche authored
Instead of declaring and passing a dummy 'bad_wr' pointer, pass NULL as third argument to ib_post_(send|recv|srq_recv)().

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Bart Van Assche authored
Remove two WARN_ON() statements that verify something that is guaranteed by the RDMA API, namely that the failed_wr pointer is not touched if an ib_post_send() call succeeds and that it points at the failed wr if an ib_post_send() call fails.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Bart Van Assche authored
Instead of declaring and passing a dummy 'bad_wr' pointer, pass NULL as third argument to ib_post_(send|recv|srq_recv)().

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Bart Van Assche authored
Instead of declaring and passing a dummy 'bad_wr' pointer, pass NULL as third argument to ib_post_(send|recv|srq_recv)().

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Bart Van Assche authored
Instead of declaring and passing a dummy 'bad_wr' pointer, pass NULL as third argument to ib_post_(send|recv|srq_recv)().

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Bart Van Assche authored
Instead of declaring and passing a dummy 'bad_wr' pointer, pass NULL as third argument to ib_post_(send|recv|srq_recv)().

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Bart Van Assche authored
Instead of declaring and passing a dummy 'bad_wr' pointer, pass NULL as third argument to ib_post_(send|recv|srq_recv)().

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Bart Van Assche authored
Instead of declaring and passing a dummy 'bad_wr' pointer, pass NULL as third argument to ib_post_(send|recv|srq_recv)().

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Bart Van Assche authored
Instead of declaring and passing a dummy 'bad_wr' pointer, pass NULL as third argument to ib_post_(send|recv|srq_recv)().

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Bart Van Assche authored
Instead of declaring and passing a dummy 'bad_wr' pointer, pass NULL as third argument to ib_post_(send|recv|srq_recv)().

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Bart Van Assche authored
Instead of declaring and passing a dummy 'bad_wr' pointer, pass NULL as third argument to ib_post_(send|recv|srq_recv)().

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Bart Van Assche authored
Instead of declaring and passing a dummy 'bad_wr' pointer, pass NULL as third argument to ib_post_(send|recv|srq_recv)().

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Bart Van Assche authored
This patch does not change the behavior of the modified functions.

Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Zhu Yanjun authored
According to "Annex A16: RDMA over Converged Ethernet (RoCE)": A16.4.3 MANAGEMENT INTERFACES As defined in the base specification, a special Queue Pair, QP0 is defined solely for communication between subnet manager(s) and subnet management agents. Since such an IB-defined subnet management architecture is outside the scope of this annex, it follows that there is also no requirement that a port which conforms to this annex be associated with a QP0. Thus, for end nodes designed to conform to this annex, the concept of QP0 is undefined and unused for any port connected to an Ethernet network. CA16-8: A packet arriving at a RoCE port containing a BTH with the destination QP field set to QP0 shall be silently dropped. Signed-off-by: Zhu Yanjun <yanjun.zhu@oracle.com> Acked-by: Moni Shoua <monis@mellanox.com> Reviewed-by: Yuval Shaia <yuval.shaia@oracle.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Wei Yongjun authored
Fix to return a negative error code from the ipoib_neigh_hash_init() error handling case instead of 0, as done elsewhere in this function.

Fixes: 515ed4f3 ("IB/IPoIB: Separate control and data related initializations")
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Reviewed-by: Yuval Shaia <yuval.shaia@oracle.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
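A reduced example of the bug class, using a hypothetical stand-in for ipoib_neigh_hash_init() (not the actual ipoib code):

static int demo_hash_init(void);	/* returns 0 or a negative errno */

static int demo_dev_init(void)
{
	int ret;

	ret = demo_hash_init();
	if (ret < 0)
		return ret;	/* before the fix, 0 leaked out of this error path */

	return 0;
}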
-
Yishai Hadas authored
Expose the mlx5 flow steering parsing trees, making this functionality available to user space.

Signed-off-by: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Yishai Hadas authored
Add support for setting a destination that is a flow table; this can come from the DEVX destination.

Signed-off-by: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Yishai Hadas authored
Add support for setting a public flow steering rule whose destination is a TIR by using raw specification data. The logic follows the verbs API, but instead of using ib_spec(s) the raw, device-specific description is used. This allows supporting specialty matchers without having to define new matches in the verbs struct-based language.

Signed-off-by: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-