- 03 Dec, 2018 2 commits
-
-
Jason Gunthorpe authored
This creates a consistent way to access the two core buffers across write and write_ex handlers. Remove the open coded ucore conversion in the write/ex compatibility handlers. Signed-off-by: Jason Gunthorpe <jgg@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Jason Gunthorpe authored
write() methods must work with fixed-size structures, as that is the only way to know where the udata segment starts. The common udata code now rejects any write() that has a response buffer shorter than the core's response. Thus all the checks of out_len for write methods are redundant and can be removed. Signed-off-by: Jason Gunthorpe <jgg@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
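As a rough illustration of why the fixed size matters (helper and variable names here are a sketch, not the literal core code): once the core request/response sizes are known, the driver udata is simply whatever follows them in the user buffers.

/* Sketch only: locate the driver udata after the fixed-size core structs. */
static void build_write_udata(struct ib_udata *udata,
			      const char __user *inbuf, size_t in_len,
			      char __user *outbuf, size_t out_len,
			      size_t core_req_size, size_t core_resp_size)
{
	/* The common code now rejects writes with out_len < core_resp_size,
	 * so these subtractions can never go negative. */
	ib_uverbs_init_udata(udata,
			     inbuf + core_req_size,
			     outbuf + core_resp_size,
			     in_len - core_req_size,
			     out_len - core_resp_size);
}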
-
- 30 Nov, 2018 3 commits
-
-
Guy Levi authored
The current implementation of create QP requires contiguous memory; such a requirement is problematic once memory is fragmented or the system is low on memory, as it causes failures in dma_zalloc_coherent(). This patch takes advantage of the new mlx5_core API which allocates a fragmented buffer. This makes QP creation much more resilient to memory fragmentation. Data-path code was adapted to the fact that WQEs can cross buffers. We also use the opportunity to fix some cosmetic legacy coding-convention errors that were within the scope of this feature. Signed-off-by: Guy Levi <guyle@mellanox.com> Reviewed-by: Majd Dibbiny <majd@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
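A hypothetical sketch (not the actual mlx5_core interface) of what fragment-aware WQE access looks like once the work queue no longer lives in one contiguous allocation: an index is first mapped to the fragment holding it, then to an offset inside that fragment.

/* Hypothetical fragment-aware WQE lookup; names are illustrative. */
struct frag_buf {
	void		**frags;	  /* array of page-sized fragments */
	unsigned int	log_stride;	  /* log2 of the WQE stride */
	unsigned int	log_frag_strides; /* log2 of WQEs per fragment */
};

static void *get_wqe(struct frag_buf *buf, unsigned int ix)
{
	unsigned int frag = ix >> buf->log_frag_strides;
	unsigned int off = (ix & ((1 << buf->log_frag_strides) - 1))
			   << buf->log_stride;

	return buf->frags[frag] + off;
}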
-
Guy Levi authored
The current implementation of create SRQ requires contiguous memory; such a requirement is problematic once memory is fragmented or the system is low on memory, as it causes failures in dma_zalloc_coherent(). This patch takes advantage of the new mlx5_core API which allocates a fragmented buffer, and makes SRQ creation much more resilient to memory fragmentation. Data-path code was adapted to the fact that WQEs can cross buffers. Signed-off-by: Guy Levi <guyle@mellanox.com> Reviewed-by: Majd Dibbiny <majd@mellanox.com> Reviewed-by: Yishai Hadas <yishaih@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Chuck Lever authored
FRWR memory registration is done with a series of calls and WRs. 1. ULP invokes ib_dma_map_sg() 2. ULP invokes ib_map_mr_sg() 3. ULP posts an IB_WR_REG_MR on the Send queue Step 2 generates an iova. It is permissible for ULPs to change this iova (with certain restrictions) between steps 2 and 3. rxe_map_mr_sg captures the MR's iova but later when rxe processes the REG_MR WR, it ignores the MR's iova field. If a ULP alters the MR's iova after step 2 but before step 3, rxe never captures that change. When the remote sends an RDMA Read targeting that MR, rxe looks up the R_key, but the altered iova does not match the iova stored in the MR, causing the RDMA Read request to fail. Reported-by: Anna Schumaker <schumaker.anna@gmail.com> Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Reviewed-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
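For reference, the three-step FRWR sequence described above looks roughly like the following on the ULP side (variable names and access flags are illustrative; error handling is trimmed).

#include <rdma/ib_verbs.h>

static int frwr_register(struct ib_qp *qp, struct ib_mr *mr,
			 struct scatterlist *sgl, int nents)
{
	const struct ib_send_wr *bad_wr;
	struct ib_reg_wr reg_wr = {};
	int n;

	/* 1. DMA-map the S/G list */
	n = ib_dma_map_sg(qp->device, sgl, nents, DMA_BIDIRECTIONAL);
	if (n <= 0)
		return -EIO;

	/* 2. Load the pages into the MR; this computes mr->iova */
	n = ib_map_mr_sg(mr, sgl, n, NULL, PAGE_SIZE);
	if (n <= 0)
		return -EIO;

	/* The ULP may legally adjust mr->iova here, between steps 2 and 3;
	 * rxe must honour the iova carried by the REG_MR WR, which is what
	 * this patch fixes. */

	/* 3. Post the fast-registration WR */
	reg_wr.wr.opcode = IB_WR_REG_MR;
	reg_wr.mr = mr;
	reg_wr.key = mr->rkey;
	reg_wr.access = IB_ACCESS_LOCAL_WRITE | IB_ACCESS_REMOTE_READ;

	return ib_post_send(qp, &reg_wr.wr, &bad_wr);
}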
-
- 29 Nov, 2018 5 commits
-
-
Mark Bloch authored
Allow a user to attach a DEVX counter via mlx5 raw flow creation. In order to attach a counter we introduce a new attribute: MLX5_IB_ATTR_CREATE_FLOW_ARR_COUNTERS_DEVX A counter can be attached to multiple flow steering rules. Signed-off-by: Mark Bloch <markb@mellanox.com> Reviewed-by: Yishai Hadas <yishaih@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Leon Romanovsky authored
The QIB driver was added in 2010 with many BUG_ON() calls; most of them were cleaned out after years of development and use. It now looks safe to remove the rest of the BUG_ONs. Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Acked-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Colin Ian King authored
There is a spelling mistake in a usnic_err error message, fix it. Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
kbuild test robot authored
drivers/infiniband/core/uverbs_cmd.c:1095:1-3: WARNING: PTR_ERR_OR_ZERO can be used Use PTR_ERR_OR_ZERO rather than if(IS_ERR(...)) + PTR_ERR Generated by: scripts/coccinelle/api/ptr_ret.cocci Fixes: 7106a976 ("RDMA/uverbs: Make write() handlers return 0 on success") Signed-off-by: kbuild test robot <fengguang.wu@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
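The coccinelle rewrite it refers to is the usual one-liner; 'uobj' below is just a placeholder variable.

/* before */
if (IS_ERR(uobj))
	return PTR_ERR(uobj);
return 0;

/* after */
return PTR_ERR_OR_ZERO(uobj);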
-
Colin Ian King authored
Fix spelling mistake in usnic_err error message Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
- 26 Nov, 2018 10 commits
-
-
Jason Gunthorpe authored
Have the core code initialize the driver_udata if the method has a udata description. This is done using the same create_udata the handler was supposed to call. This makes ioctl consistent with the write and write_ex paths. Signed-off-by: Jason Gunthorpe <jgg@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
-
Jason Gunthorpe authored
Now that we have metadata describing the command format the core code can directly compute the udata pointers and all the really ugly ib_uverbs_init_udata() calls can be removed from the handlers. This means all the write() handlers are no longer sensitive to the layout of the command buffer. Signed-off-by: Jason Gunthorpe <jgg@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
-
Jason Gunthorpe authored
The core code needs to compute the udata so we may as well pass it in the uverbs_attr_bundle instead of on the stack. This converts the simple case of write_ex() which already has a core calculation. Also change the write() path to use the attrs for ib_uverbs_init_udata() instead of on the stack. This lets the write to write_ex compatibility path continue to follow the lead of the _ex path. Signed-off-by: Jason Gunthorpe <jgg@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
-
Jason Gunthorpe authored
The size meta-data in the prior patch describes the smallest acceptable buffer for the write() interface. Globally check this in the core code. This is necessary in the case of write() methods that have a driver udata, to prevent computing a negative udata buffer length. The return code of -ENOSPC is chosen here as some of the handlers already use this code; however, many other handlers use -EINVAL. Signed-off-by: Jason Gunthorpe <jgg@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
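Conceptually the global check amounts to something like this (field names are illustrative, not the literal uverbs code):

/* Reject short buffers up front so the udata length math stays >= 0. */
if (in_len < method->req_size || out_len < method->resp_size)
	return -ENOSPC;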
-
Jason Gunthorpe authored
We need the structure sizes to compute the location of the udata in the core code. Annotate the sizes into the new macro language. This is generated largely by script and checked by comparing against the similar list in rdma-core. Signed-off-by: Jason Gunthorpe <jgg@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
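After this patch a write() method declaration carries its core structure sizes, roughly in the following shape (the exact macro spelling is recalled from this series and may differ slightly; treat it as a sketch):

DECLARE_UVERBS_WRITE(
	IB_USER_VERBS_CMD_CREATE_CQ,
	ib_uverbs_create_cq,
	UAPI_DEF_WRITE_UDATA_IO(struct ib_uverbs_create_cq,
				struct ib_uverbs_create_cq_resp)),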
-
Jason Gunthorpe authored
The uverbs_attr_bundle already contains this pointer, and most methods don't actually need it. Get rid of the redundant function argument. Signed-off-by: Jason Gunthorpe <jgg@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
-
Jason Gunthorpe authored
Currently they return the command length, while all other handlers return 0. This makes the write path closer to the write_ex and ioctl paths. Signed-off-by: Jason Gunthorpe <jgg@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
-
Jason Gunthorpe authored
Now that we can add meta-data to the description of write() methods we need to pass the uverbs_attr_bundle into all write based handlers so future patches can use it as a container for any new data transferred out of the core. This is the first step to bringing the write() and ioctl() methods to a common interface signature. This is a simple search/replace, and we push the attr down into the uobj and other APIs to keep changes minimal. Signed-off-by: Jason Gunthorpe <jgg@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
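The direction of the interface change, sketched against the eventual common signature the series works toward (prototypes illustrative; intermediate steps not shown):

/* before: each write() handler received the raw buffer layout */
ssize_t ib_uverbs_create_cq(struct ib_uverbs_file *file,
			    const char __user *buf,
			    int in_len, int out_len);

/* after: the bundle is the single container for everything the
 * handler (write or ioctl) needs */
int ib_uverbs_create_cq(struct uverbs_attr_bundle *attrs);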
-
Jason Gunthorpe authored
If the struct is used with a driver_udata it should have a trailing driver_data flex array to mark it as having udata. In most cases this forces the end of the struct to be aligned to u64, which is needed to make the trailing driver_data naturally aligned. Unfortunately we have a few cases where the base struct is not aligned to 8 bytes; these are marked with a u32 driver_data, and userspace will check for alignment issues when it compiles the driver. Also remove the empty ib_uverbs_modify_qp_resp as nothing uses it. pahole says there is no change to any struct sizes from this change. Signed-off-by: Jason Gunthorpe <jgg@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
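A sketch of the convention with a hypothetical command struct (not one of the real ABI structures):

struct ib_uverbs_example_cmd {
	__aligned_u64 response;	/* pointer to the core response buffer */
	__u32 handle;
	__u32 comp_mask;
	__u64 driver_data[];	/* marks the start of driver udata,
				 * naturally u64-aligned here */
};

The handful of commands whose base struct cannot be padded out to 8 bytes instead carry a __u32 driver_data[] marker, and userspace verifies the alignment when building the driver.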
-
Colin Ian King authored
There is a spelling mistake in the module description text, fix it. Signed-off-by: Colin Ian King <colin.king@canonical.com> Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
- 22 Nov, 2018 13 commits
-
-
Parav Pandit authored
When the rdma device is getting removed, get resource info can race with device removal, as below:

CPU-0                                  CPU-1
--------                               --------
rdma_nl_rcv_msg()
   nldev_res_get_cq_dumpit()
      mutex_lock(device_lock);
      get device reference
      mutex_unlock(device_lock);       [..]
                                       ib_unregister_device()
      /* Valid reference to
       * device->dev exists.
       */                              ib_dealloc_device()
      [..]
      provider->fill_res_entry();

Even though the device object is not freed, fill_res_entry() can get called on a device which doesn't have a driver anymore. The kernel core device reference count is not sufficient, as it only keeps the structure valid and doesn't guarantee the driver is still loaded. A similar race can occur with device renaming and device removal, where device_rename() tries to rename an unregistered device. This is fine for devices of a class which is not net namespace aware, but it is incorrect for the net namespace aware class coming in a subsequent series. If a class is net namespace aware, the call trace [1] below is observed in the above situation. Therefore, to avoid the race, keep a reference count and let device unregistration wait until all netlink users drop the reference.

[1] Call trace:
kernfs: ns required in 'infiniband' for 'mlx5_0'
WARNING: CPU: 18 PID: 44270 at fs/kernfs/dir.c:842 kernfs_find_ns+0x104/0x120
libahci i2c_core mlxfw libata dca [last unloaded: devlink]
RIP: 0010:kernfs_find_ns+0x104/0x120
Call Trace:
 kernfs_find_and_get_ns+0x2e/0x50
 sysfs_rename_link_ns+0x40/0xb0
 device_rename+0xb2/0xf0
 ib_device_rename+0xb3/0x100 [ib_core]
 nldev_set_doit+0x165/0x190 [ib_core]
 rdma_nl_rcv_msg+0x249/0x250 [ib_core]
 ? netlink_deliver_tap+0x8f/0x3e0
 rdma_nl_rcv+0xd6/0x120 [ib_core]
 netlink_unicast+0x17c/0x230
 netlink_sendmsg+0x2f0/0x3e0
 sock_sendmsg+0x30/0x40
 __sys_sendto+0xdc/0x160

Fixes: da5c8507 ("RDMA/nldev: add driver-specific resource tracking") Signed-off-by: Parav Pandit <parav@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
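A hypothetical sketch of the scheme (field and helper names are illustrative; the real ib_core ones may differ): netlink users take a reference before calling into the driver, and unregistration blocks until the last such user drops it.

#include <linux/refcount.h>
#include <linux/completion.h>

struct example_ib_device {
	refcount_t	  nl_refcount;	 /* held by in-flight netlink users */
	struct completion nl_users_done;
};

static bool example_device_try_get(struct example_ib_device *dev)
{
	return refcount_inc_not_zero(&dev->nl_refcount);
}

static void example_device_put(struct example_ib_device *dev)
{
	if (refcount_dec_and_test(&dev->nl_refcount))
		complete(&dev->nl_users_done);
}

/* Unregistration drops the initial reference and then waits for any
 * netlink users still inside fill_res_entry() etc. to finish. */
static void example_unregister_wait(struct example_ib_device *dev)
{
	example_device_put(dev);
	wait_for_completion(&dev->nl_users_done);
}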
-
Parav Pandit authored
Currently several rdma_cm module-specific functions are declared in the core_priv.h file. Now that we have a cma_priv.h file specific to the rdma_cm kernel module, move them from core_priv.h to cma_priv.h. Signed-off-by: Parav Pandit <parav@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Jason Gunthorpe authored
Add annotations to the uverbs_api structure indicating which driver methods are called by the implementation. If the required method is NULL, the write API will not be callable. This effectively duplicates the cmd_mask system; however, it does it by expressing invariants required by the core code, not by delegating decision making to the driver. This is another step toward eliminating cmd_mask. Signed-off-by: Jason Gunthorpe <jgg@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
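The annotation ties a write method to the driver function it requires, roughly like this (macro spelling recalled from the series; treat as a sketch). If ib_dev.reg_mr is NULL the method is simply never exposed, rather than each handler consulting cmd_mask at run time:

DECLARE_UVERBS_WRITE(
	IB_USER_VERBS_CMD_REG_MR,
	ib_uverbs_reg_mr,
	UAPI_DEF_METHOD_NEEDS_FN(reg_mr)),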
-
Jason Gunthorpe authored
Now that we use struct uverbs_uapi to link the method functions to the dispatcher there is no reason to have them be extern symbols. Signed-off-by: Jason Gunthorpe <jgg@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
-
Jason Gunthorpe authored
This organizes the write commands into objects and links them to the uverbs_api data structure. The command path is reworked to use uapi instead of its internal structures. The command mask is moved from a runtime check to a registration time check in the uapi. Since the write interface does not have the object ID as part of the command, the radix bins are converted into linear lists to support the lookup. Signed-off-by: Jason Gunthorpe <jgg@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
-
Jason Gunthorpe authored
Bringing all uapi entry points into one place lets us deal with them consistently. For instance the write, write_ex and ioctl paths can be disabled when an API is not supported by the driver. This will replace the uverbs_cmd_table static arrays. Signed-off-by: Jason Gunthorpe <jgg@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
-
Jason Gunthorpe authored
If we can't destroy the object then we certainly shouldn't allow it be created or used. Remove it from the uverbs_uapi in this case. This also disables methods of other objects that have mandatory object handle inputs - ie REG_DM_MR is now automatically removed if DM objects cannot be created. Typically drivers not supporting an interface will mark all of the supporting functions as NULL, including destroy. This is intended to automatically eliminate entire corner cases in the API that are difficult to test. Signed-off-by: Jason Gunthorpe <jgg@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
-
Jason Gunthorpe authored
Rely on UAPI_DEF_IS_OBJ_SUPPORTED instead of manipulating the contents of the driver's definition list. Signed-off-by: Jason Gunthorpe <jgg@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
-
Jason Gunthorpe authored
We have many cases where parts of the uapi are not supported in a driver, require a certain protocol, and so on. It is best to reflect this directly into the struct uverbs_api when it is built, so that everything is simply blocked off and future introspection can report a proper supported list. This is done by adding some additional helpers to the definition list language that disable objects based on a 'supported' callback, and a helper that disables based on a NULL struct ib_device function pointer. Disablement is global. For instance, if a driver disables an object then everything connected to that object is removed, including core methods. Signed-off-by: Jason Gunthorpe <jgg@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
-
Jason Gunthorpe authored
The next patch needs another copy of this, so provide a simple helper to reduce the duplicated code. uapi_add_get_elm() returns an existing entry or adds a new one. Signed-off-by: Jason Gunthorpe <jgg@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
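The get-or-add pattern the helper wraps, sketched generically against the radix tree the uapi is built in (names and storage are illustrative, not the literal helper):

#include <linux/radix-tree.h>
#include <linux/slab.h>
#include <linux/err.h>

static void *uapi_get_or_add(struct radix_tree_root *root, unsigned long key,
			     size_t elm_size, bool *exists)
{
	void *elm = radix_tree_lookup(root, key);
	int rc;

	if (elm) {
		*exists = true;
		return elm;
	}

	elm = kzalloc(elm_size, GFP_KERNEL);
	if (!elm)
		return ERR_PTR(-ENOMEM);

	rc = radix_tree_insert(root, key, elm);
	if (rc) {
		kfree(elm);
		return ERR_PTR(rc);
	}

	*exists = false;
	return elm;
}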
-
Jason Gunthorpe authored
The 'tree' data structure is very hard to build at compile time, and this makes it very limited. The new radix tree based compiler can handle a more complex input language that does not require the compiler to perfectly group everything into a neat tree structure. Instead, use a simple list to describe the input, where the list elements can be of various different 'opcodes' instructing the radix compiler what to do. Start out with opcodes chaining to other definition lists and chaining to the existing 'tree' definition. Replace the very top level of the 'object tree' with this list type and get rid of struct uverbs_object_tree_def and DECLARE_UVERBS_OBJECT_TREE. Signed-off-by: Jason Gunthorpe <jgg@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
-
Jason Gunthorpe authored
For DM there is no reason not to add the spec for the START_OFFSET; if DM is not supported then ib_dev.alloc_dm is already set to NULL, which ensures we do not call the method. For IPSEC, the core code should be setting ib_dev.create_flow_action_esp to NULL to disable it, not relying on wonky manipulation of the specs. Signed-off-by: Jason Gunthorpe <jgg@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
-
Ursula Braun authored
The mlx4 driver does not trigger an IB_EVENT_PORT_ACTIVE when the RoCE network interface is activated. When SMC determines the RoCE device port to be used, it checks the port states. This patch triggers IB events for NETDEV_UP and NETDEV_DOWN. Signed-off-by: Ursula Braun <ubraun@linux.ibm.com> Acked-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
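The essence of the change, sketched (not the literal mlx4 code; the helper name and the mapping of NETDEV_DOWN to IB_EVENT_PORT_ERR are illustrative):

#include <rdma/ib_verbs.h>
#include <linux/netdevice.h>

static void roce_netdev_event_to_ib(struct ib_device *ibdev, u8 port_num,
				    unsigned long netdev_event)
{
	struct ib_event ibev = {
		.device		  = ibdev,
		.element.port_num = port_num,
	};

	switch (netdev_event) {
	case NETDEV_UP:
		ibev.event = IB_EVENT_PORT_ACTIVE;
		break;
	case NETDEV_DOWN:
		ibev.event = IB_EVENT_PORT_ERR;
		break;
	default:
		return;
	}

	/* Let ULPs such as SMC see the RoCE port state change */
	ib_dispatch_event(&ibev);
}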
-
- 21 Nov, 2018 7 commits
-
-
Steve Wise authored
Only retry connection setup with MPAv1 if the peer actually aborted the connection upon receiving the MPAv2 start message. This avoids retrying with MPAv1 in the case where the connection was aborted due to retransmit timeouts. Fixes: d2fe99e8 ("RDMA/cxgb4: Add support for MPAv2 Enhanced RDMA Negotiation") Signed-off-by: Steve Wise <swise@opengridcomputing.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Yuval Shaia authored
Since the function always returns 0 make it void. Reported-by: Håkon Bugge <haakon.bugge@oracle.com> Signed-off-by: Yuval Shaia <yuval.shaia@oracle.com> Reviewed-by: Leon Romanovsky <leonro@mellanox.com> Acked-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Yue Haibing authored
There is no need to have the 'struct se_portal_group *tpg' variable static since a new value is always assigned to it before use. Signed-off-by: Yue Haibing <yuehaibing@huawei.com> Reviewed-by: Leon Romanovsky <leonro@mellanox.com> Reviewed-by: Bart Van Assche <bvanassche@acm.org> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Parav Pandit authored
The structures in ib_verbs.h don't use fields/structures from mm.h, socket.h or scatterlist.h, so remove the inclusion of those header files. Signed-off-by: Parav Pandit <parav@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Sabyasachi Gupta authored
Replaced dma_alloc_coherent + memset with dma_zalloc_coherent Signed-off-by: Sabyasachi Gupta <sabyasachi.linux@gmail.com> Acked-by: Steve Wise <swise@opengridcomputing.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
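The transformation is the standard one ('dev', 'size' and 'dma_handle' are placeholders); dma_zalloc_coherent() was the zeroing helper available at the time of this series:

/* before */
buf = dma_alloc_coherent(dev, size, &dma_handle, GFP_KERNEL);
if (buf)
	memset(buf, 0, size);

/* after */
buf = dma_zalloc_coherent(dev, size, &dma_handle, GFP_KERNEL);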
-
Sabyasachi Gupta authored
Replaced dma_alloc_coherent + memset with dma_zalloc_coherent Signed-off-by: Sabyasachi Gupta <sabyasachi.linux@gmail.com> Acked-by: Steve Wise <swise@opengridcomputing.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Jason Gunthorpe authored
From git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux

mlx5 updates taken for dependencies on later ODP patches. Conflict resolved by deleting mlx5_ib_get_vector_affinity()

* branch 'mlx5-next': (21 commits)
  net/mlx5: EQ, Make EQE access methods inline
  {net,IB}/mlx5: Move Page fault EQ and ODP logic to RDMA
  net/mlx5: EQ, Generic EQ
  net/mlx5: EQ, Different EQ types
  net/mlx5: EQ, Privatize eq_table and friends
  net/mlx5: EQ, irq_info and rmap belong to eq_table
  net/mlx5: EQ, Create all EQs in one place
  net/mlx5: EQ, Move all EQ logic to eq.c
  net/mlx5: EQ, Remove redundant completion EQ list lock
  net/mlx5: EQ, No need to store eq index as a field
  net/mlx5: EQ, Remove unused fields and structures
  net/mlx5: EQ, Use the right place to store/read IRQ affinity hint
  IB/mlx5: Improve ODP debugging messages
  net/mlx5: Use multi threaded workqueue for page fault handling
  net/mlx5: Return success for PAGE_FAULT_RESUME in internal error state
  IB/mlx5: Lock QP during page fault handling
  net/mlx5: Enumerate page fault types
  net/mlx5: Add interface to hold and release core resources
  net/mlx5: Release resource on error flow
  net/mlx5: Fix offsets of ifc reserved fields
  ...

Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-