- 19 Aug, 2021 3 commits
-
-
Christoph Hellwig authored
Just use seq_write to copy the stats into the seq_file buffer instead of poking holes into the seq_file abstraction.

Link: https://lore.kernel.org/r/20210810151711.1795374-1-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@cornelisnetworks.com>
Tested-by: Mike Marciniszyn <mike.marciniszyn@cornelisnetworks.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
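A minimal sketch of the seq_write() pattern described above, with hypothetical driver names and a pre-formatted stats buffer assumed:

```c
#include <linux/seq_file.h>

/* Hypothetical example: hand a pre-formatted stats buffer to seq_write()
 * instead of writing into seq->buf directly.
 */
struct dbg_stats {
	char	*buf;	/* formatted stats text */
	size_t	 len;	/* number of valid bytes in buf */
};

static int stats_show(struct seq_file *s, void *unused)
{
	struct dbg_stats *stats = s->private;

	seq_write(s, stats->buf, stats->len);	/* seq_file handles buffer sizing */
	return 0;
}
```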
-
Christophe JAILLET authored
'sess->rbufs' is known to be NULL here, so there is no point in kfree'ing it.

Fixes: 6a98d71d ("RDMA/rtrs: client: main functionality")
Link: https://lore.kernel.org/r/9a57c9f837fa2c6f0070578a1bc4840688f62962.1628185335.git.christophe.jaillet@wanadoo.fr
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Acked-by: Md Haris Iqbal <haris.iqbal@ionos.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
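For illustration only (the structure and field names here are hypothetical, not the rtrs ones), the cleanup pattern this fix restores: free only what was actually allocated on the error path.

```c
#include <linux/slab.h>
#include <linux/err.h>

struct sess_sketch {
	void *rbufs;
};

static struct sess_sketch *sess_alloc(void)
{
	struct sess_sketch *sess;

	sess = kzalloc(sizeof(*sess), GFP_KERNEL);
	if (!sess)
		return ERR_PTR(-ENOMEM);

	sess->rbufs = kcalloc(16, 64, GFP_KERNEL);
	if (!sess->rbufs) {
		/* rbufs is still NULL here, so a kfree(sess->rbufs) would be a no-op */
		kfree(sess);
		return ERR_PTR(-ENOMEM);
	}
	return sess;
}
```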
-
YueHaibing authored
If re-registering an MR in hns_roce_rereg_user_mr(), we should return NULL instead of passing 0 to ERR_PTR for clarity.

Fixes: 4e9fc1da ("RDMA/hns: Optimize the MR registration process")
Link: https://lore.kernel.org/r/20210804125939.20516-1-yuehaibing@huawei.com
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
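ERR_PTR(0) evaluates to the same pointer value as NULL, so the change is purely about readability. A hedged sketch with an invented function name and policy:

```c
#include <linux/err.h>
#include <linux/errno.h>
#include <rdma/ib_verbs.h>

/* Hypothetical sketch: return NULL explicitly for the "no new MR" case
 * rather than ERR_PTR(0), which reads as if an error were being returned.
 */
static struct ib_mr *rereg_mr_sketch(struct ib_mr *mr, int flags)
{
	if (!flags)
		return NULL;		/* nothing changed, keep using the old MR */

	if (flags & ~IB_MR_REREG_SUPPORTED)
		return ERR_PTR(-EOPNOTSUPP);

	/* ... the actual re-registration work would go here ... */
	return NULL;
}
```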
-
- 03 Aug, 2021 16 commits
-
-
Leon Romanovsky authored
Unify the QP creation interface to provide a clean approach for creating both XRC_TGT and regular QPs.

Link: https://lore.kernel.org/r/5cd50e7d8ad9112545a1a61dea62799a5cb3224a.1628014762.git.leonro@nvidia.com
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Leon Romanovsky authored
The QP usecnts were incremented through the QP attributes structure but decremented through the QP itself. Rely on the ib_create_qp_user() code, which initializes all QP parameters before returning to the user, and increment the usecnts exactly the way destroy decrements them.

Link: https://lore.kernel.org/r/25d256a3bb1fc480b77d7fe439817b993de48610.1628014762.git.leonro@nvidia.com
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Leon Romanovsky authored
All QP creation flows called ib_create_qp_security(), but did so differently. This created the need for exclusion conditions for XRC_TGT, because such QPs already had a selinux configuration call. To fix this, move ib_create_qp_security() into the general QP creation routine.

Link: https://lore.kernel.org/r/4d7cd6f5828aca37fb62283e6b126b73ab86b18c.1628014762.git.leonro@nvidia.com
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Leon Romanovsky authored
The low-level create QP function grew to be larger than any sensible inline function should be. The inline attribute is not really needed for that function, and it can be implemented as an exported symbol instead.

Link: https://lore.kernel.org/r/2c08709d86f876c3dfb77684357b2a939e570ca4.1628014762.git.leonro@nvidia.com
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
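A generic sketch of that pattern, not the actual IB/core code: the oversized `static inline` moves out of the header into a C file and is exported for module users. The name and body below are placeholders.

```c
#include <linux/err.h>
#include <linux/export.h>
#include <rdma/ib_verbs.h>

/* Before: a large 'static inline' in a header, duplicated into every
 * translation unit that called it. After: one definition, exported.
 */
struct ib_qp *create_qp_sketch(struct ib_pd *pd,
			       struct ib_qp_init_attr *attr)
{
	/* ... the former inline body would live here ... */
	return ERR_PTR(-EOPNOTSUPP);
}
EXPORT_SYMBOL(create_qp_sketch);
```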
-
Leon Romanovsky authored
ib_create_named_qp() is a kernel verb that is not used with user-supplied attributes. In that case, it is the ULP's responsibility to provide valid QP attributes. The in-kernel API shouldn't check them, exactly like other functions that don't check device capabilities.

Link: https://lore.kernel.org/r/b9b9e981d1af148b750750196e686199dbbf61f8.1628014762.git.leonro@nvidia.com
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Leon Romanovsky authored
ib_create_named_qp() is a kernel verb, and no kernel users create XRC_INI QPs, so that QP path is not reachable. In addition, delete duplicated assignments of QP attributes from the initialization structure.

Link: https://lore.kernel.org/r/1b4c0d1def5f8f6d26839e14d19da950cc4a0b05.1628014762.git.leonro@nvidia.com
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Leon Romanovsky authored
XRC_TGT QPs are created through kernel verbs and don't have udata at all.

Fixes: 6eefa839 ("RDMA/mlx5: Protect from kernel crash if XRC_TGT doesn't have udata")
Fixes: e383085c ("RDMA/mlx5: Set ECE options during QP create")
Link: https://lore.kernel.org/r/b68228597e730675020aa5162745390a2d39d3a2.1628014762.git.leonro@nvidia.com
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Leon Romanovsky authored
There is no real value in bypassing the IB/core APIs for creating standard objects with standard types. The open-coded variant didn't have any restrack task management calls, which caused such objects to be missing when running rdmatool.

Link: https://lore.kernel.org/r/f745590e5fb7d56f90fdb25f64ee3983ba17e1e4.1627040189.git.leonro@nvidia.com
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Leon Romanovsky authored
Convert the QP object to follow the IB/core general allocation scheme. That change allows us to make sure that restrack properly krefs the memory.

Link: https://lore.kernel.org/r/48e767124758aeecc433360ddd85eaa6325b34d9.1627040189.git.leonro@nvidia.com
Reviewed-by: Gal Pressman <galpress@amazon.com> #efa
Tested-by: Gal Pressman <galpress@amazon.com>
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com> #rdma and core
Tested-by: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Tested-by: Tatyana Nikolova <tatyana.e.nikolova@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Leon Romanovsky authored
The rdmavt QP has fields that are needed for both the control path and the data path. This mixed declaration forced a very specific allocation flow, with kzalloc_node and the SGE list embedded into struct rvt_qp. This patch separates QP creation into two parts: regular memory allocation for the control path and specific code for the SGE list, with the latter accessed through a dereferenced pointer. That pointer and its context are expected to be in the cache, so any performance difference should be negligible.

Link: https://lore.kernel.org/r/f66c1e20ccefba0db3c69c58ca9c897f062b4d1c.1627040189.git.leonro@nvidia.com
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
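An illustrative sketch of the split (structures and names here are invented, not the rdmavt ones): the control-path struct is allocated normally and the data-path SGE list hangs off a pointer allocated separately on the desired NUMA node.

```c
#include <linux/slab.h>
#include <linux/types.h>

struct sge_entry_sketch {
	u64 addr;
	u32 length;
	u32 lkey;
};

struct qp_sketch {
	/* control-path fields ... */
	int			  num_sge;
	struct sge_entry_sketch	 *sg_list;	/* data path, separate allocation */
};

static struct qp_sketch *qp_alloc_sketch(int num_sge, int node)
{
	struct qp_sketch *qp;

	qp = kzalloc(sizeof(*qp), GFP_KERNEL);
	if (!qp)
		return NULL;

	qp->sg_list = kcalloc_node(num_sge, sizeof(*qp->sg_list),
				   GFP_KERNEL, node);
	if (!qp->sg_list) {
		kfree(qp);
		return NULL;
	}
	qp->num_sge = num_sge;
	return qp;
}
```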
-
Leon Romanovsky authored
Starting from commit 2b1f7470 ("RDMA/core: Allow drivers to disable restrack DB"), restrack is able to handle non-standard QP types as well. That change allows us to rewrite custom QP calls to their IB/core counterparts, so the general QP creation flow is used even for the driver QP types.

Link: https://lore.kernel.org/r/51682ab82298748941f38bd23ee3bf77ef1cab7b.1627040189.git.leonro@nvidia.com
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Leon Romanovsky authored
The dev->devr.mutex was intended to protect changes to the GSI QP pointer in struct mlx5_ib_port_resources when it is accessed from pkey_change_work. However, that pointer isn't changed at runtime; once IB/core adds MAD, it stays stable.

Link: https://lore.kernel.org/r/6e338c561033df20d92e1371fc6a7a0d93aad945.1627040189.git.leonro@nvidia.com
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Leon Romanovsky authored
In the driver release flow, we ensure that the notifier is disabled and no new work can be added to pkey_change_handler. That means we can cancel the handler before destroying resources, keeping our unwind routine symmetrical to the allocation one.

Link: https://lore.kernel.org/r/f2b1ea1bad952e4e7a48a6f731de9e0344986b29.1627040189.git.leonro@nvidia.com
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
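A hedged sketch of the teardown ordering (structure and helper names are illustrative, not the mlx5 code): with the notifier already unregistered, cancel the work before destroying what it uses.

```c
#include <linux/workqueue.h>

struct port_resources_sketch {
	struct work_struct pkey_change_work;
	/* ... QPs, CQs, and other resources the handler touches ... */
};

static void port_resources_cleanup(struct port_resources_sketch *res)
{
	/* notifier is already unregistered, so nothing can queue new work;
	 * this waits for a running handler and drops any pending one
	 */
	cancel_work_sync(&res->pkey_change_work);

	/* now it is safe to destroy the resources the handler used */
	/* destroy_gsi_qps(res); destroy_cqs(res); ... (hypothetical helpers) */
}
```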
-
Leon Romanovsky authored
The QP type is set by the IB/core and shouldn't be set in the driver.

Fixes: 40909f66 ("RDMA/efa: Add EFA verbs implementation")
Link: https://lore.kernel.org/r/838c40134c1590167b888ca06ad51071139ff2ae.1627040189.git.leonro@nvidia.com
Acked-by: Gal Pressman <galpress@amazon.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Leon Romanovsky authored
QP attributes that were supplied by IB/core already have all parameters set when they are passed to the driver. The drivers are not supposed to change anything in struct ib_qp_init_attr.

Fixes: 66d86e52 ("RDMA/hns: Add UD support for HIP09")
Link: https://lore.kernel.org/r/5987138875e8ade9aa339d4db6e1bd9694ed4591.1627040189.git.leonro@nvidia.com
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Leon Romanovsky authored
The call to the internal QP creation function skips QP creation checks and misses adding such device QPs to the restrack DB. As a preparation for the general allocation scheme, convert hns to use the proper API.

Link: https://lore.kernel.org/r/7b236c15f7d5abb368958297ac6962d8459cb824.1627040189.git.leonro@nvidia.com
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
- 30 Jul, 2021 10 commits
-
-
Prabhakar Kushwaha authored
Use the -EINVAL return value to identify whether an error is returned because of "Out of MR resources" or some other error type.

Link: https://lore.kernel.org/r/20210729151732.30995-2-pkushwaha@marvell.com
Signed-off-by: Shai Malin <smalin@marvell.com>
Signed-off-by: Ariel Elior <aelior@marvell.com>
Signed-off-by: Prabhakar Kushwaha <pkushwaha@marvell.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Prabhakar Kushwaha authored
For a more accurate error return, use -EOPNOTSUPP instead of -EINVAL.

Link: https://lore.kernel.org/r/20210729151732.30995-1-pkushwaha@marvell.com
Signed-off-by: Shai Malin <smalin@marvell.com>
Signed-off-by: Ariel Elior <aelior@marvell.com>
Signed-off-by: Prabhakar Kushwaha <pkushwaha@marvell.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
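Taken together with the previous commit, a small hedged sketch of the intent (function name, mask, and limits are hypothetical): -EOPNOTSUPP for an unsupported request, -EINVAL to flag the out-of-resources case.

```c
#include <linux/errno.h>

#define SUPPORTED_ACCESS_FLAGS	0x7	/* hypothetical mask */

static int alloc_mr_sketch(unsigned int access_flags,
			   unsigned int used_mrs, unsigned int max_mrs)
{
	if (access_flags & ~SUPPORTED_ACCESS_FLAGS)
		return -EOPNOTSUPP;	/* request not supported */

	if (used_mrs >= max_mrs)
		return -EINVAL;		/* out of MR resources */

	return 0;
}
```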
-
Cai Huoqing authored
Remove the repeated word 'the' from comments.

Link: https://lore.kernel.org/r/20210729082346.1882-1-caihuoqing@baidu.com
Signed-off-by: Cai Huoqing <caihuoqing@baidu.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Leon Romanovsky authored
Since the beginning of the InfiniBand subsystem, the uverbs char devices have started from 192 as a minor number, see commit bc38a6ab ("[PATCH] IB uverbs: core implementation"). This patch updates the admin guide documentation to reflect that.

Fixes: 9d85025b ("docs-rst: create an user's manual book")
Link: https://lore.kernel.org/r/bad03e6bcde45550c01e12908a6fe7dfa4770703.1627477347.git.leonro@nvidia.com
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Leon Romanovsky authored
The core netlink code already guarantees that no netlink callback can be running outside the rdma_nl_register/unregister() region, and this registration happens during module init/exit. Thus iwpm_valid_client() can never fail. Remove it.

Link: https://lore.kernel.org/r/a9f05a78f9996bf6ea47099b5e02671bf742f5ab.1627048781.git.leonro@nvidia.com
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Leon Romanovsky authored
iwpm_init() and iwpm_exit() are called only once, during iw_cm module load. This makes the whole reference count implementation unnecessary.

Link: https://lore.kernel.org/r/1778ded873ba58c9fadc5bb25038de1cec843bec.1627048781.git.leonro@nvidia.com
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Leon Romanovsky authored
A failure during iw_cm module initialization partially left the system with unreleased memory and other resources. Rewrite the module init/exit routines in such a way that netlink commands are opened only after successful initialization.

Fixes: b493d91d ("iwcm: common code for port mapper")
Link: https://lore.kernel.org/r/b01239f99cb1a3e6d2b0694c242d89e6410bcd93.1627048781.git.leonro@nvidia.com
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Xiyu Yang authored
The refcount_t type and its corresponding API protect refcounters from accidental underflow and overflow, and from the use-after-free situations they can lead to.

Link: https://lore.kernel.org/r/1626674454-56075-1-git-send-email-xiyuyang19@fudan.edu.cn
Signed-off-by: Xiyu Yang <xiyuyang19@fudan.edu.cn>
Signed-off-by: Xin Tan <tanxin.ctf@gmail.com>
Tested-by: Josh Fisher <josh.fisher@cornelisnetworks.com>
Acked-by: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
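A generic sketch of the atomic_t-to-refcount_t conversion pattern (the object and function names are illustrative): refcount_t saturates instead of wrapping, so over- and underflows are caught.

```c
#include <linux/refcount.h>
#include <linux/slab.h>

struct tracked_obj {
	refcount_t ref;
	/* ... payload ... */
};

static struct tracked_obj *obj_alloc(void)
{
	struct tracked_obj *obj = kzalloc(sizeof(*obj), GFP_KERNEL);

	if (obj)
		refcount_set(&obj->ref, 1);	/* was atomic_set(&obj->ref, 1) */
	return obj;
}

static void obj_get(struct tracked_obj *obj)
{
	refcount_inc(&obj->ref);		/* was atomic_inc() */
}

static void obj_put(struct tracked_obj *obj)
{
	if (refcount_dec_and_test(&obj->ref))	/* was atomic_dec_and_test() */
		kfree(obj);
}
```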
-
Mike Marciniszyn authored
It is possible for the primary IPoIB network device associated with any RDMA device to fail to join certain multicast groups, preventing IPv6 neighbor discovery and possibly other network ULPs from working correctly. The IPv4 broadcast group is not affected, as the IPoIB network device handles joining that multicast group directly.

This is because the primary IPoIB network device uses the pkey at index 0 in the associated RDMA device's pkey table. Any time the pkey value of index 0 changes, the primary IPoIB network device automatically modifies its broadcast address (i.e. /sys/class/net/[ib0]/broadcast), since the broadcast address includes the pkey value, and then bounces carrier. This includes initial pkey assignment, such as when the pkey at index 0 transitions from the OPA default of invalid (0x0000) to some value such as the OPA default pkey for Virtual Fabric 0: 0x8001, or when the fabric manager is restarted with a configuration change causing the pkey at index 0 to change. Many network ULPs are not sensitive to the carrier bounce and are not expecting the broadcast address to change, including the Linux IPv6 stack. This problem does not affect IPoIB child network devices, as their pkey value is constant for all time.

To mitigate this issue, change the default pkey at index 0 to 0x8001 to cover the predominant case and avoid issues as IPoIB comes up and the FM sweeps. At some point, IPoIB multicast support should automatically fix non-broadcast addresses as it does with the primary broadcast address.

Fixes: 77241056 ("IB/hfi1: add driver files")
Link: https://lore.kernel.org/r/20210715160445.142451.47651.stgit@awfm-01.cornelisnetworks.com
Suggested-by: Josh Collier <josh.d.collier@intel.com>
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@cornelisnetworks.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Mike Marciniszyn authored
There is no counter for dmawait in AIP, which hampers debugging performance issues. Add the counter increment when the txq is queued.

Fixes: d99dc602 ("IB/hfi1: Add functions to transmit datagram ipoib packets")
Link: https://lore.kernel.org/r/20210715160440.142451.8278.stgit@awfm-01.cornelisnetworks.com
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@cornelisnetworks.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
- 20 Jul, 2021 3 commits
-
-
Jason Gunthorpe authored
Leon Romanovsky says:

====================
Add ConnectX DCS offload support

This patchset from Lior adds DCI stream channel (DCS) support. DCS is an offload of the SW load balancing of DC initiator work requests. A single DC QP initiator (DCI) can be connected to only one target at a time and can't start a new connection until the previous work request is completed. This limitation causes delays when the initiator process needs to transfer data to multiple targets at the same time.
====================

* branch 'mlx5_dcs':
  RDMA/mlx5: Add DCS offload support
  RDMA/mlx5: Separate DCI QP creation logic
  net/mlx5: Add DCS caps & fields support
-
Lior Nahmanson authored
DCS is an offload of the SW load balancing of DC initiator work requests. A single DCI can be connected to only one target at a time and can't start a new connection until the previous work request is completed. This limitation causes delays when the initiator process needs to transfer data to multiple targets at the same time. The SW solution is to use a process that handles and spreads the work requests over many DCIs according to their destinations. This feature offloads that process, reducing the load on the CPU and improving performance.

Link: https://lore.kernel.org/r/491c2c2afdb5b07de7f03eab3f93cf0704549dbc.1624258894.git.leonro@nvidia.com
Reviewed-by: Meir Lichtinger <meirl@nvidia.com>
Signed-off-by: Lior Nahmanson <liorna@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
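A purely illustrative sketch of the software approach being offloaded here (every name and the selection policy are hypothetical): pick a DCI per destination so one busy target does not stall transfers to others.

```c
#include <linux/types.h>
#include <rdma/ib_verbs.h>

#define NUM_DCIS_SKETCH	8

struct dci_pool_sketch {
	struct ib_qp *dci[NUM_DCIS_SKETCH];	/* pre-created DC initiators */
};

static struct ib_qp *pick_dci_sketch(struct dci_pool_sketch *pool,
				     u32 remote_qpn)
{
	/* map each destination onto a DCI so targets don't block each other */
	return pool->dci[remote_qpn % NUM_DCIS_SKETCH];
}
```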
-
Lior Nahmanson authored
This patch isolates the DCI QP creation logic into a separate function, which reduces complexity when adding new features to DCI QPs without interfering with other QP types. The code was copied from create_user_qp(), taking only the DCI-relevant bits.

Link: https://lore.kernel.org/r/b4530bdd999349c59691224f016ff1efb5dc3b92.1624258894.git.leonro@nvidia.com
Reviewed-by: Meir Lichtinger <meirl@nvidia.com>
Signed-off-by: Lior Nahmanson <liorna@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
- 18 Jul, 2021 1 commit
-
-
Lior Nahmanson authored
These fields will be needed when adding support for the DCS offload:

max_dci_stream_channels - maximum DCI stream channels supported per DCI.
max_dci_errored_streams - maximum DCI errored stream channels supported per DCI before the DCI moves to the error state.

Signed-off-by: Lior Nahmanson <liorna@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
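A hedged sketch of how a driver might carry and sanity-check such capability fields (the struct and field layout below are illustrative, not the mlx5 firmware interface):

```c
#include <linux/errno.h>
#include <linux/types.h>

struct dcs_caps_sketch {
	u8 max_dci_stream_channels;	/* max stream channels per DCI */
	u8 max_dci_errored_streams;	/* errored streams tolerated before
					 * the DCI moves to the error state
					 */
};

static int dcs_supported_sketch(const struct dcs_caps_sketch *caps)
{
	return caps->max_dci_stream_channels ? 0 : -EOPNOTSUPP;
}
```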
-
- 16 Jul, 2021 7 commits
-
-
Bob Pearson authored
Currently the ICRC is generated as a u32 type and then forced to a __be32 and stored into the ICRC field in the packet. The actual type of the ICRC is __be32. This patch replaces u32 with __be32 and eliminates the casts. The computation is exactly the same as the original, but the types are more consistent.

Link: https://lore.kernel.org/r/20210707040040.15434-10-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
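A hedged sketch of the typing idea only; the CRC accumulation and byte-order handling below are placeholders, not the actual rxe ICRC algorithm.

```c
#include <linux/types.h>
#include <asm/byteorder.h>

/* Keep the ICRC typed as __be32 end to end so no cast is needed when
 * the value is written into the packet's ICRC field.
 */
static __be32 compute_icrc_sketch(const void *pkt, size_t len)
{
	u32 crc = 0;

	/* ... accumulate the CRC over the packet here ... */

	return cpu_to_be32(crc);	/* placeholder byte-order conversion */
}

static void store_icrc_sketch(__be32 *icrc_field, const void *pkt, size_t len)
{
	*icrc_field = compute_icrc_sketch(pkt, len);
}
```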
-
Bob Pearson authored
This patch adds kernel-doc style comments to rxe_icrc.c.

Link: https://lore.kernel.org/r/20210707040040.15434-9-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
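For reference, the shape of a kernel-doc comment; the function name and parameters below are illustrative, not the actual rxe_icrc.c entries.

```c
#include <linux/skbuff.h>

/**
 * icrc_check_sketch() - Verify the ICRC carried in a received packet
 * @skb: packet buffer holding the received packet
 * @len: number of bytes covered by the ICRC
 *
 * Return: 0 if the computed ICRC matches the one in the packet,
 * a negative errno otherwise.
 */
static int icrc_check_sketch(struct sk_buff *skb, unsigned int len)
{
	/* ... compute and compare ... */
	return 0;
}
```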
-
Bob Pearson authored
This patch collects the code from rxe_register_device() that sets up the crc32 calculation into a subroutine rxe_icrc_init() in rxe_icrc.c.

Link: https://lore.kernel.org/r/20210707040040.15434-8-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Bob Pearson authored
rxe_icrc_hdr() in rxe_icrc.c is no longer shared. This patch makes it static and changes the parameter list to match the other routines there.

Link: https://lore.kernel.org/r/20210707040040.15434-7-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Bob Pearson authored
Move rxe_crc32() from rxe.h to rxe_icrc.c as a static local function.

Link: https://lore.kernel.org/r/20210707040040.15434-6-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Bob Pearson authored
Isolate ICRC generation into a single subroutine named rxe_generate_icrc() in rxe_icrc.c. Remove scattered crc generation code from elsewhere.

Link: https://lore.kernel.org/r/20210707040040.15434-5-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Bob Pearson authored
Fix up rxe_send() and rxe_loopback() in rxe_net.c to have the same calling sequence. This patch makes them static and gives them the same parameter list and return value.

Link: https://lore.kernel.org/r/20210707040040.15434-4-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
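An illustrative sketch of what a unified calling sequence looks like; the names echo the commit but the signatures and bodies here are placeholders, not the rxe code.

```c
#include <linux/skbuff.h>

static int send_sketch(struct sk_buff *skb)
{
	/* ... hand the packet to the network stack ... */
	return 0;
}

static int loopback_sketch(struct sk_buff *skb)
{
	/* ... deliver the packet back into the local receive path ... */
	return 0;
}

/* with matching signatures, both paths can be driven through one type */
typedef int (*xmit_fn_sketch)(struct sk_buff *skb);
```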
-