- 27 Jul, 2022 18 commits
-
-
Jason Gunthorpe authored
Cheng Xu says
====================
This v14 patch set introduces the Elastic RDMA Adapter (ERDMA) driver, which was released at the Apsara Conference 2021 by Alibaba. The PR for the ERDMA userspace provider has already been created [1]. ERDMA enables large-scale RDMA acceleration capability in the Alibaba ECS environment, initially offered in the g7re instance. It can significantly improve the efficiency of large-scale distributed computing and communication, and expands dynamically with the cluster scale of Alibaba Cloud.

ERDMA is an RDMA networking adapter based on the Alibaba MOC hardware. It works in the VPC network environment (overlay network) and uses the iWarp transport protocol. ERDMA supports reliable connection (RC), and supports both kernel-space and user-space verbs. We already support HPC/AI applications with libfabric, NoF and some other internal verbs libraries, such as xrdma, epsl, etc.

For an ECS instance with RDMA enabled, our MOC hardware generates two kinds of PCI devices: one for ERDMA, and one for the original net device (virtio-net). They are separate PCI devices.
====================

* branch 'erdma':
  RDMA/erdma: Add driver to kernel build environment
  RDMA/erdma: Add the ABI definitions
  RDMA/erdma: Add the erdma module
  RDMA/erdma: Add connection management (CM) support
  RDMA/erdma: Add verbs implementation
  RDMA/erdma: Add verbs header file
  RDMA/erdma: Add event queue implementation
  RDMA/erdma: Add cmdq implementation
  RDMA/erdma: Add main include file
  RDMA/erdma: Add the hardware related definitions
  RDMA: Add ERDMA to rdma_driver_id definition

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Cheng Xu authored
Add erdma to the kernel build environment, and sort the source order in drivers/infiniband/Kconfig.

Link: https://lore.kernel.org/r/20220727014927.76564-12-chengyou@linux.alibaba.com
Signed-off-by: Cheng Xu <chengyou@linux.alibaba.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Cheng Xu authored
Add the erdma ABI definitions, which will be shared between kernel and userspace. This commit also fixes compile issues reported by lkp.

Link: https://lore.kernel.org/r/20220727014927.76564-11-chengyou@linux.alibaba.com
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Cheng Xu <chengyou@linux.alibaba.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Cheng Xu authored
Add the main erdma module, which provides the interface to the InfiniBand subsystem. This commit includes a modification from Christophe that uses the bitmap API to allocate bitmaps instead of hand-writing the allocation. The commit also fixes warnings reported by static checkers.

Link: https://lore.kernel.org/r/20220727014927.76564-10-chengyou@linux.alibaba.com
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Cheng Xu <chengyou@linux.alibaba.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
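A minimal sketch of the bitmap API change mentioned above, assuming a hypothetical resource map; the helper names are illustrative, not taken from the erdma source:

  #include <linux/bitmap.h>

  /* bitmap_zalloc()/bitmap_free() replace an open-coded
   * kcalloc(BITS_TO_LONGS(nbits), sizeof(long), GFP_KERNEL) pair. */
  static unsigned long *alloc_resource_map(unsigned int nbits)
  {
          return bitmap_zalloc(nbits, GFP_KERNEL);
  }

  static void free_resource_map(unsigned long *map)
  {
          bitmap_free(map);
  }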
-
Cheng Xu authored
ERDMA's transport protocol is iWarp, so the driver must support the CM interface. For the CM part, we take the same approach as SoftiWarp: using a kernel socket to set up the connection, then performing the MPA negotiation in kernel space. So this part of the code mainly comes from SoftiWarp; based on it, we added some more features, such as a non-blocking iw_connect implementation. This commit also fixes a duplicated-include issue reported by Abaci Robot.

Link: https://lore.kernel.org/r/20220727014927.76564-9-chengyou@linux.alibaba.com
Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Signed-off-by: Yang Li <yang.lee@linux.alibaba.com>
Signed-off-by: Cheng Xu <chengyou@linux.alibaba.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Cheng Xu authored
The RDMA verbs implementation of erdma is divided into three files: erdma_qp.c, erdma_cq.c and erdma_verbs.c. Internally used functions and the datapath functions of QP/CQ are put in erdma_qp.c and erdma_cq.c; the rest is in erdma_verbs.c. This commit also fixes some static-check warnings.

Link: https://lore.kernel.org/r/20220727014927.76564-8-chengyou@linux.alibaba.com
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Signed-off-by: Yang Li <yang.lee@linux.alibaba.com>
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: Cheng Xu <chengyou@linux.alibaba.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Cheng Xu authored
This header file defines the main structures and functions used for RDMA verbs, including qp, cq, mr, ucontext, etc.

Link: https://lore.kernel.org/r/20220727014927.76564-7-chengyou@linux.alibaba.com
Signed-off-by: Cheng Xu <chengyou@linux.alibaba.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Cheng Xu authored
The event queue (EQ) is the main notification path from the erdma hardware to its driver. Each erdma device contains two kinds of EQs: the asynchronous EQ (AEQ) and completion EQs (CEQs). Each device has one AEQ, which is used for RDMA async event reporting, and up to 32 CEQs (numbered CEQ0 to CEQ31). CEQ0 is used for cmdq completion event reporting, and the remaining CEQs are used for RDMA completion event reporting.

Link: https://lore.kernel.org/r/20220727014927.76564-6-chengyou@linux.alibaba.com
Signed-off-by: Cheng Xu <chengyou@linux.alibaba.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
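A hedged sketch of the EQ layout described above; the type and field names are illustrative, not the driver's actual definitions:

  #define EXAMPLE_NUM_CEQS 32   /* "up to 32 CEQs" per the text above */

  struct example_eq {
          void *qbuf;   /* queue buffer, doorbell, consumer index ... (elided) */
  };

  struct example_eq_table {
          struct example_eq aeq;                    /* one per device: async events */
          struct example_eq ceq[EXAMPLE_NUM_CEQS];  /* ceq[0]: cmdq completions;
                                                     * ceq[1..31]: RDMA completions */
  };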
-
Cheng Xu authored
The cmdq is the main control-plane channel between the erdma driver and the hardware. After the erdma device is initialized, the cmdq channel remains active for the whole lifetime of the driver. This commit also includes two modifications from Christophe: one uses the bitmap API to allocate bitmaps instead of hand-writing the allocation, and the other uses the non-atomic bitmap API when applicable.

Link: https://lore.kernel.org/r/20220727014927.76564-5-chengyou@linux.alibaba.com
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: Cheng Xu <chengyou@linux.alibaba.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
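As a hedged illustration of the "non-atomic bitmap API when applicable" change, a sketch under assumed names (the struct and its fields are stand-ins, not erdma's real cmdq):

  #include <linux/bitmap.h>
  #include <linux/spinlock.h>
  #include <linux/errno.h>

  struct example_cmdq {
          spinlock_t lock;
          unsigned long *comp_wait_bitmap;
          unsigned int max_comp_wait;
  };

  /* Reserve a completion-wait slot. With cmdq->lock held, the non-atomic
   * __set_bit() suffices; the atomic set_bit() would be wasted cost. */
  static int cmdq_get_comp_slot(struct example_cmdq *cmdq)
  {
          int idx;

          spin_lock(&cmdq->lock);
          idx = find_first_zero_bit(cmdq->comp_wait_bitmap, cmdq->max_comp_wait);
          if (idx < cmdq->max_comp_wait)
                  __set_bit(idx, cmdq->comp_wait_bitmap);
          spin_unlock(&cmdq->lock);

          return idx < cmdq->max_comp_wait ? idx : -ENOSPC;
  }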
-
Cheng Xu authored
Add the ERDMA driver main header file, defining internally used data structures and operations. The defined data structures include the *cmdq*, which is used as the communication channel between the ERDMA driver and the hardware.

Link: https://lore.kernel.org/r/20220727014927.76564-4-chengyou@linux.alibaba.com
Signed-off-by: Cheng Xu <chengyou@linux.alibaba.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Cheng Xu authored
ERDMA is a PCIe device, and this file provides the ERDMA hardware-related definitions, mainly including PCIe device capabilities and restrictions, device register definitions, the doorbell space, doorbell structure definitions and WQE definitions.

Link: https://lore.kernel.org/r/20220727014927.76564-3-chengyou@linux.alibaba.com
Signed-off-by: Cheng Xu <chengyou@linux.alibaba.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Cheng Xu authored
Define RDMA_DRIVER_ERDMA in enum rdma_driver_id.

Link: https://lore.kernel.org/r/20220727014927.76564-2-chengyou@linux.alibaba.com
Signed-off-by: Cheng Xu <chengyou@linux.alibaba.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Aharon Landau authored
After replacing the MR cache with an Mkey cache, rename the variables and functions to fit the new meaning.

Link: https://lore.kernel.org/r/20220726071911.122765-6-michaelgur@nvidia.com
Signed-off-by: Aharon Landau <aharonl@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Aharon Landau authored
Currently, the driver stores the whole mlx5_ib_mr struct in the cache entries, although the only thing the cache actually uses is the mkey. Store only the mkey in the cache.

Link: https://lore.kernel.org/r/20220726071911.122765-5-michaelgur@nvidia.com
Signed-off-by: Aharon Landau <aharonl@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
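A simplified before/after sketch of this change; the entry types are illustrative, and the real mlx5 structures carry more state:

  #include <linux/list.h>
  #include <linux/types.h>

  struct mlx5_ib_mr;   /* opaque here */

  /* Before: the cache kept whole MR wrappers just to reuse their mkeys. */
  struct cache_ent_before {
          struct list_head list;
          struct mlx5_ib_mr *mr;   /* full struct cached, mostly unused */
  };

  /* After: only the mkey index is stored. */
  struct cache_ent_after {
          struct list_head list;
          u32 mkey;                /* all the cache actually needs */
  };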
-
Aharon Landau authored
total_mrs is used only to calculate the number of mkeys currently in use. To simplify things, replace it with a new member called "in_use" and directly store the number of mkeys currently in use.

Link: https://lore.kernel.org/r/20220726071911.122765-4-michaelgur@nvidia.com
Signed-off-by: Aharon Landau <aharonl@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Aharon Landau authored
The Xarray allows us to store the cached mkeys in a memory-efficient way. Entries are reserved in the Xarray using xa_cmpxchg() before calling the upcoming callbacks, to avoid allocations in interrupt context. xa_cmpxchg() can sleep when using GFP_KERNEL, so we call it in a loop to ensure there is one reserved entry for each process trying to reserve.

Link: https://lore.kernel.org/r/20220726071911.122765-3-michaelgur@nvidia.com
Signed-off-by: Aharon Landau <aharonl@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
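A minimal sketch of the reservation pattern described above, assuming a hypothetical helper name; the real mlx5 code differs in detail:

  #include <linux/xarray.h>
  #include <linux/errno.h>

  /* Reserve slot @index so a later interrupt-context producer can store
   * into it without allocating. xa_cmpxchg() with GFP_KERNEL may sleep,
   * hence it must run in process context; callers loop until they own a
   * reserved slot. XA_ZERO_ENTRY marks the index reserved but empty. */
  static int reserve_cache_slot(struct xarray *xa, unsigned long index)
  {
          void *curr;

          curr = xa_cmpxchg(xa, index, NULL, XA_ZERO_ENTRY, GFP_KERNEL);
          if (xa_is_err(curr))
                  return xa_err(curr);

          return curr ? -EBUSY : 0;   /* non-NULL: slot taken, retry elsewhere */
  }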
-
Aharon Landau authored
In the next patch, ent->list will be replaced with an xarray. The xarray uses an internal lock to protect the indexes. Use it to protect all the entry fields, and get rid of ent->lock.

Link: https://lore.kernel.org/r/20220726071911.122765-2-michaelgur@nvidia.com
Signed-off-by: Aharon Landau <aharonl@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
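A hedged sketch of reusing the xarray's internal lock for adjacent entry fields; the type and field names are illustrative:

  #include <linux/xarray.h>

  struct mkey_cache_ent_example {
          struct xarray mkeys;
          unsigned long stored;   /* guarded by mkeys' xa_lock, no ent->lock */
  };

  static void push_mkey(struct mkey_cache_ent_example *ent,
                        unsigned long index, void *mkey)
  {
          xa_lock_irq(&ent->mkeys);
          /* __xa_store() requires the xa_lock; gfp 0 is fine for a
           * pre-reserved slot since no allocation is needed. */
          __xa_store(&ent->mkeys, index, mkey, 0);
          ent->stored++;   /* the same lock now covers the counter */
          xa_unlock_irq(&ent->mkeys);
  }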
-
Li Zhijian authored
The following two commits are reverted:

  commit 8ff5f5d9 ("RDMA/rxe: Prevent double freeing rxe_map_set()")
  commit 647bf13c ("RDMA/rxe: Create duplicate mapping tables for FMRs")

The community has a few bug reports that ultimately pointed at the latter commit. Some proposals were raised in the meantime, but none of them were followed up. The reverted commit made the map_set of an FMR unavailable once the MR is registered again after being invalidated. Although it tried to fix a potential race in building/accessing the same table for fast memory regions, it broke rtrs and other ULPs. Since the latter is the worse problem, revert it.

With the reverted commit, the same MR in an rnbd server is observed to trigger the code path below:

-> rxe_mr_init_fast()
|-> alloc map_set()          # map_set is uninitialized
|...-> rxe_map_mr_sg()       # build the map_set
       |-> rxe_mr_set_page()
|...-> rxe_reg_fast_mr()     # mr->state changes to VALID from FREE, which
                             # means we can access host memory (e.g. rxe_mr_copy)
|...-> rxe_invalidate_mr()   # mr->state changes to FREE from VALID
|...-> rxe_reg_fast_mr()     # mr->state changes to VALID from FREE,
                             # but map_set was not built again
|...-> rxe_mr_copy()         # kernel crash due to accessing wild addresses
                             # looked up from the map_set

The backtraces are not always identical.

[1st]----------
RIP: 0010:lookup_iova+0x66/0xa0 [rdma_rxe]
Code: 00 00 00 48 d3 ee 89 32 c3 4c 8b 18 49 8b 3b 48 8b 47 08 48 39 c6 72 38 48 29 c6 45 31 d2 b8 01 00 00 00 48 63 c8 48 c1 e1 04 <48> 8b 4c 0f 08 48 39 f1 77 21 83 c0 01 48 29 ce 3d 00 01 00 00 75
RSP: 0018:ffffb7ff80063bf0 EFLAGS: 00010246
RAX: 0000000000000000 RBX: ffff9b9949d86800 RCX: 0000000000000000
RDX: ffffb7ff80063c00 RSI: 0000000049f6b378 RDI: 002818da00000004
RBP: 0000000000000120 R08: ffffb7ff80063c08 R09: ffffb7ff80063c04
R10: 0000000000000002 R11: ffff9b9916f7eef8 R12: ffff9b99488a0038
R13: ffff9b99488a0038 R14: ffff9b9914fb346a R15: ffff9b990ab27000
FS:  0000000000000000(0000) GS:ffff9b997dc00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007efc33a98ed0 CR3: 0000000014f32004 CR4: 00000000001706f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <TASK>
 rxe_mr_copy.part.0+0x6f/0x140 [rdma_rxe]
 rxe_responder+0x12ee/0x1b60 [rdma_rxe]
 ? rxe_icrc_check+0x7e/0x100 [rdma_rxe]
 ? rxe_rcv+0x1d0/0x780 [rdma_rxe]
 ? rxe_icrc_hdr.isra.0+0xf6/0x160 [rdma_rxe]
 rxe_do_task+0x67/0xb0 [rdma_rxe]
 rxe_xmit_packet+0xc7/0x210 [rdma_rxe]
 rxe_requester+0x680/0xee0 [rdma_rxe]
 ? update_load_avg+0x5f/0x690
 ? update_load_avg+0x5f/0x690
 ? rtrs_clt_recv_done+0x1b/0x30 [rtrs_client]

[2nd]----------
RIP: 0010:rxe_mr_copy.part.0+0xa8/0x140 [rdma_rxe]
Code: 00 00 49 c1 e7 04 48 8b 00 4c 8d 2c d0 48 8b 44 24 10 4d 03 7d 00 85 ed 7f 10 eb 6c 89 54 24 0c 49 83 c7 10 31 c0 85 ed 7e 5e <49> 8b 3f 8b 14 24 4c 89 f6 48 01 c7 85 d2 74 06 48 89 fe 4c 89 f7
RSP: 0018:ffffae3580063bf8 EFLAGS: 00010202
RAX: 0000000000018978 RBX: ffff9d7ef7a03600 RCX: 0000000000000008
RDX: 000000000000007c RSI: 000000000000007c RDI: ffff9d7ef7a03600
RBP: 0000000000000120 R08: ffffae3580063c08 R09: ffffae3580063c04
R10: ffff9d7efece0038 R11: ffff9d7ec4b1db00 R12: ffff9d7efece0038
R13: ffff9d7ef4098260 R14: ffff9d7f11e23c6a R15: 4c79500065708144
FS:  0000000000000000(0000) GS:ffff9d7f3dc00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fce47276c60 CR3: 0000000003f66004 CR4: 00000000001706f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <TASK>
 rxe_responder+0x12ee/0x1b60 [rdma_rxe]
 ? rxe_icrc_check+0x7e/0x100 [rdma_rxe]
 ? rxe_rcv+0x1d0/0x780 [rdma_rxe]
 ? rxe_icrc_hdr.isra.0+0xf6/0x160 [rdma_rxe]
 rxe_do_task+0x67/0xb0 [rdma_rxe]
 rxe_xmit_packet+0xc7/0x210 [rdma_rxe]
 rxe_requester+0x680/0xee0 [rdma_rxe]
 ? update_load_avg+0x5f/0x690
 ? update_load_avg+0x5f/0x690
 ? rtrs_clt_recv_done+0x1b/0x30 [rtrs_client]
 rxe_do_task+0x67/0xb0 [rdma_rxe]
 tasklet_action_common.constprop.0+0x92/0xc0
 __do_softirq+0xe1/0x2d8
 run_ksoftirqd+0x21/0x30
 smpboot_thread_fn+0x183/0x220
 ? sort_range+0x20/0x20
 kthread+0xe2/0x110
 ? kthread_complete_and_exit+0x20/0x20
 ret_from_fork+0x22/0x30

Link: https://lore.kernel.org/r/1658805386-2-1-git-send-email-lizhijian@fujitsu.com
Link: https://lore.kernel.org/all/20220210073655.42281-1-guoqing.jiang@linux.dev/T/
Link: https://www.spinics.net/lists/linux-rdma/msg110836.html
Link: https://lore.kernel.org/lkml/94a5ea93-b8bb-3a01-9497-e2021f29598a@linux.dev/t/
Tested-by: Md Haris Iqbal <haris.iqbal@ionos.com>
Reviewed-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Li Zhijian <lizhijian@fujitsu.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
-
- 22 Jul, 2022 8 commits
-
-
Bob Pearson authored
In rxe_req.c, replace calls to __rxe_do_task() with calls to rxe_run_task(.., 0). Using __rxe_do_task() is an error because the completer tasklet is not designed to be re-entrant, and __rxe_do_task() should only be called when it is clear that no one else could be calling the completer tasklet, as is the case in rxe_qp.c where this call is used in safe environments.

Link: https://lore.kernel.org/r/20220630190425.2251-10-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Bob Pearson authored
Limit the maximum number of calls to each tasklet from rxe_do_task() before yielding the CPU. When the limit is reached, reschedule the tasklet and exit the calling loop. This patch prevents one tasklet from consuming 100% of a CPU core and causing a deadlock or soft lockup.

Link: https://lore.kernel.org/r/20220630190425.2251-9-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
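A minimal sketch of the bounded-work pattern described above; the budget constant and work function are assumptions, not the actual rxe names:

  #include <linux/interrupt.h>

  #define EXAMPLE_BUDGET 256   /* illustrative iteration limit */

  static int do_one_work_item(void);   /* hypothetical; nonzero while work remains */

  /* Process at most EXAMPLE_BUDGET items per invocation, then reschedule,
   * so a busy queue cannot pin a CPU core and trigger a soft lockup. */
  static void example_tasklet_fn(struct tasklet_struct *t)
  {
          int iterations = 0;

          while (do_one_work_item()) {
                  if (++iterations >= EXAMPLE_BUDGET) {
                          tasklet_schedule(t);   /* yield; continue later */
                          return;
                  }
          }
  }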
-
Bob Pearson authored
Make changes to the three tasklets so that the exit logic from each is the same. This makes the code easier to understand.

Link: https://lore.kernel.org/r/20220630190425.2251-8-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Bob Pearson authored
Currently, when either the retransmit timer or the rnr timer fires, the same flag (qp->req.need_retry) is set, so the completer tasklet attempts a retry flow on the send queue whichever timer fired. This has the effect of responding to an RNR NAK at the first retransmit-timer event, which might not honor the requested rnr timeout. This patch adds a new flag (qp->req.wait_for_rnr_timer) which, if set, prevents a retry flow until the rnr nak timer fires.

This patch fixes rnr retry errors, which can be observed by running the pyverbs test_rdmacm_async_traffic_external_qp test multiple times. With this patch applied, they do not occur.

Link: https://lore.kernel.org/linux-rdma/a8287823-1408-4273-bc22-99a0678db640@gmail.com/
Link: https://lore.kernel.org/linux-rdma/2bafda9e-2bb6-186d-12a1-179e8f6a2678@talpey.com/
Fixes: 8700e3e7 ("Soft RoCE driver")
Link: https://lore.kernel.org/r/20220630190425.2251-6-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
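A hedged sketch of the two-flag arrangement, heavily simplified; only the two flag names come from the commit message, the rest is illustrative:

  struct example_req_state {
          int need_retry;           /* set when either timer fires */
          int wait_for_rnr_timer;   /* new: set on RNR NAK, cleared when the
                                     * rnr nak timer fires */
  };

  /* Requester side: honor the requested rnr timeout by refusing to run a
   * retry flow while the rnr nak timer is still pending. */
  static int retry_allowed(struct example_req_state *req)
  {
          return req->need_retry && !req->wait_for_rnr_timer;
  }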
-
Bob Pearson authored
The code that decides whether to defer execution of a wqe in rxe_requester.c is isolated into a subroutine rxe_is_fenced() and removed from the call to req_next_wqe(). The condition for fencing a wqe is changed to comply with the IBA. Currently an operation is fenced if the fence bit is set in the wqe flags and the last wqe has not completed; for normal operations the IBA actually only requires that the last read or atomic operation be complete.

Link: https://lore.kernel.org/r/20220630190425.2251-2-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
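A hedged sketch of the helper's shape; the types and the inflight counter are assumptions, not rxe's exact fields:

  #include <rdma/ib_verbs.h>   /* IB_SEND_FENCE */
  #include <linux/atomic.h>

  struct example_qp {
          atomic_t rd_atomic_inflight;   /* outstanding reads/atomics */
  };

  struct example_send_wqe {
          int send_flags;
  };

  /* Per the IBA, a fenced wqe need only wait for outstanding read/atomic
   * operations to finish, not for every prior wqe to complete. */
  static int example_is_fenced(struct example_qp *qp,
                               struct example_send_wqe *wqe)
  {
          return (wqe->send_flags & IB_SEND_FENCE) &&
                 atomic_read(&qp->rd_atomic_inflight) > 0;
  }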
-
Md Haris Iqbal authored
The 'rkey' input can be an lkey or an rkey, and in rxe the lkey and rkey have the same value, including the variant bits. So, if mr->rkey is set, compare the invalidate key with it; otherwise compare with mr->lkey. Since we already did a lookup on the non-variant bits to get this far, the check's only purpose is to confirm that the wqe has the correct variant bits.

Fixes: 00134533 ("RDMA/rxe: Separate HW and SW l/rkeys")
Link: https://lore.kernel.org/r/20220707073006.328737-1-haris.phnx@gmail.com
Signed-off-by: Md Haris Iqbal <haris.phnx@gmail.com>
Reviewed-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
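A minimal sketch of the check described above, with illustrative parameter names:

  #include <linux/types.h>

  /* The MR was already found via the non-variant bits of the key; this
   * only confirms the wqe supplied the correct variant bits. */
  static int example_invalidate_key_ok(u32 mr_lkey, u32 mr_rkey, u32 key)
  {
          u32 expected = mr_rkey ? mr_rkey : mr_lkey;   /* rkey if set, else lkey */

          return expected == key;
  }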
-
Xin Gao authored
The word `get' is duplicated; remove one.

Link: https://lore.kernel.org/r/20220722021833.15669-1-gaoxin@cdjrlc.com
Signed-off-by: Xin Gao <gaoxin@cdjrlc.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Slark Xiao authored
Replace 'the the' with 'the' in the comments.

Signed-off-by: Slark Xiao <slark_xiao@163.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
- 21 Jul, 2022 2 commits
-
-
Bob Pearson authored
The current implementation of rxe_check_bind_mw() in rxe_mw.c is incorrect, since it requires the new key portion provided by the mw consumer to be different from the previous key portion. This is not required by the IBA. Remove the test.

Link: https://lore.kernel.org/linux-rdma/fb4614e7-4cac-0dc7-3ef7-766dfd10e8f2@gmail.com/
Fixes: 32a577b4 ("Add support for bind MW work requests")
Link: https://lore.kernel.org/r/20220714204619.13396-1-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Zhang Jiaming authored
There is a spelling mistake (writeable) in function rxe_check_bind_mw. Fix it.

Signed-off-by: Zhang Jiaming <jiaming@nfschina.com>
Reviewed-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
-
- 19 Jul, 2022 6 commits
-
-
Mark Bloch authored
Expose a steering anchor per priority to allow users to re-inject packets back into the default NIC pipeline for additional processing.

MLX5_IB_METHOD_STEERING_ANCHOR_CREATE returns a flow table ID which a user can use to re-inject packets at a specific priority. An FTE (flow table entry) can be created with the flow table ID used as a destination. When a packet is taken into an RDMA-controlled steering domain (like software steering), there may be a need to insert the packet back into the default NIC pipeline; this exposes a flow table ID to the user that can be used as a destination in a flow table entry.

With this new method, priorities that are exposed to users via MLX5_IB_METHOD_FLOW_MATCHER_CREATE can be reached from a non-zero UID. As user-created flow tables (via RDMA DEVX) are created with a non-zero UID, it was previously impossible to point to a NIC core flow table (core driver flow tables are created with a UID value of zero) from userspace. Create the flow tables that are exposed to users with the shared UID; this allows users to point to default NIC flow tables.

Steering loops are prevented at the FW level, as FW enforces that no flow table at level X can point to a table at a level lower than X.

Link: https://lore.kernel.org/all/20220703205407.110890-6-saeed@kernel.org/
Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Yishai Hadas <yishaih@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
-
Mark Bloch authored
_get_flow_table() requires the entire matcher to be passed, while all it needs is the priority and namespace type. Pass the priority and namespace type directly instead.

Link: https://lore.kernel.org/all/20220703205407.110890-5-saeed@kernel.org/
Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
-
Leon Romanovsky authored
Mark Bloch says:
================
Expose steering anchor

Expose a steering anchor per priority to allow users to re-inject packets back into the default NIC pipeline for additional processing.

MLX5_IB_METHOD_STEERING_ANCHOR_CREATE returns a flow table ID which a user can use to re-inject packets at a specific priority. An FTE (flow table entry) can be created with the flow table ID used as a destination. When a packet is taken into an RDMA-controlled steering domain (like software steering), there may be a need to insert the packet back into the default NIC pipeline; this exposes a flow table ID to the user that can be used as a destination in a flow table entry.

With this new method, priorities that are exposed to users via MLX5_IB_METHOD_FLOW_MATCHER_CREATE can be reached from a non-zero UID. As user-created flow tables (via RDMA DEVX) are created with a non-zero UID, it was previously impossible to point to a NIC core flow table (core driver flow tables are created with a UID value of zero) from userspace. Create the flow tables that are exposed to users with the shared UID; this allows users to point to default NIC flow tables.

Steering loops are prevented at the FW level, as FW enforces that no flow table at level X can point to a table at a level lower than X.
================

Link: https://lore.kernel.org/all/20220703205407.110890-1-saeed@kernel.org
Signed-off-by: Leon Romanovsky <leon@kernel.org>
-
Xiao Yang authored
The qp parameter in free_rd_atomic_resource() has become unused, so remove it.

Fixes: 15ae1375 ("RDMA/rxe: Fix qp reference counting for atomic ops")
Link: https://lore.kernel.org/all/20220708035547.6592-1-yangx.jy@fujitsu.com/
Signed-off-by: Xiao Yang <yangx.jy@fujitsu.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
-
Jason Wang authored
The word `are' is duplicated on line 156; remove one.

Link: https://lore.kernel.org/r/20220715054007.5320-1-wangborong@cdjrlc.com
Signed-off-by: Jason Wang <wangborong@cdjrlc.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
-
Jianglei Nie authored
setup_base_ctxt() allocates a memory chunk for uctxt->groups with hfi1_alloc_ctxt_rcv_groups(). When init_user_ctxt() fails, uctxt->groups is not released, which leads to a memory leak. Release uctxt->groups with hfi1_free_ctxt_rcv_groups() when init_user_ctxt() fails.

Fixes: e87473bc ("IB/hfi1: Only set fd pointer when base context is completely initialized")
Link: https://lore.kernel.org/r/20220711070718.2318320-1-niejianglei2021@163.com
Signed-off-by: Jianglei Nie <niejianglei2021@163.com>
Acked-by: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
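A hedged sketch of the fix's shape inside setup_base_ctxt(); the control flow is simplified, and only the function names come from the commit message:

  ret = hfi1_alloc_ctxt_rcv_groups(uctxt);
  if (ret)
          goto done;

  ret = init_user_ctxt(fd, uctxt);
  if (ret) {
          hfi1_free_ctxt_rcv_groups(uctxt);   /* previously leaked on this path */
          goto done;
  }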
-
- 18 Jul, 2022 6 commits
-
-
lizhijian@fujitsu.com authored
This parameter has been deprecated since the commit below:
1a7085b3 ("RDMA/rxe: Skip adjusting remote addr for write in retry operation")

Link: https://lore.kernel.org/r/20220715035340.1900168-1-lizhijian@fujitsu.com
Signed-off-by: Li Zhijian <lizhijian@fujitsu.com>
Reviewed-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
-
Xiao Yang authored
It's better to use a unified naming format.

Link: https://lore.kernel.org/r/20220705145212.12014-2-yangx.jy@fujitsu.com
Signed-off-by: Xiao Yang <yangx.jy@fujitsu.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
-
Xiao Yang authored
It's redundant to prepare resources for Read and Atomic requests in different functions. Replace them with a common rxe_prepare_res() that takes different parameters. In addition, the common rxe_prepare_res() can also be used by the new Flush and Atomic Write requests in the future.

Link: https://lore.kernel.org/r/20220705145212.12014-1-yangx.jy@fujitsu.com
Signed-off-by: Xiao Yang <yangx.jy@fujitsu.com>
Reviewed-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
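A hedged sketch of the consolidation: one preparation helper keyed by resource type instead of per-opcode duplicates. The enum, struct and helper are stand-ins, not rxe's real definitions:

  enum example_res_type { EXAMPLE_READ, EXAMPLE_ATOMIC };

  struct example_res {
          enum example_res_type type;
          /* per-type reply state (elided) */
  };

  static void example_prepare_res(struct example_res *res,
                                  enum example_res_type type)
  {
          res->type = type;
          switch (type) {
          case EXAMPLE_READ:
                  /* record iova/length/rkey for building read replies */
                  break;
          case EXAMPLE_ATOMIC:
                  /* stash the original value for the atomic ack */
                  break;
          }
  }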
-
Zhu Yanjun authored
The function rxe_create_qp() calls rxe_qp_from_init(). If an error occurs, the error handler of rxe_qp_from_init() sets both scq and rcq to NULL. Then rxe_create_qp() calls rxe_put() to handle the qp. In the end, rxe_qp_do_cleanup() is called by rxe_put(), and it accesses scq and rcq without checking them, causing a null-ptr-deref error. The call graph is as below:

rxe_create_qp {
  ...
  rxe_qp_from_init {
    ...
  err1:
    ...
    qp->rcq = NULL;   <--- rcq is set to NULL
    qp->scq = NULL;   <--- scq is set to NULL
    ...
  }
qp_init:
  rxe_put {
    ...
    rxe_qp_do_cleanup {
      ...
      atomic_dec(&qp->scq->num_wq);   <--- scq is accessed
      ...
      atomic_dec(&qp->rcq->num_wq);   <--- rcq is accessed
    }
  }

Fixes: 4703b4f0 ("RDMA/rxe: Enforce IBA C11-17")
Link: https://lore.kernel.org/r/20220705225414.315478-1-yanjun.zhu@linux.dev
Signed-off-by: Zhu Yanjun <yanjun.zhu@linux.dev>
Reviewed-by: Bob Pearson <rpearsonhpe@gmail.com>
Reviewed-by: Md Haris Iqbal <haris.iqbal@ionos.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
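A minimal sketch of the guard implied by the fix, using the fields from the call graph above (an excerpt-style fragment, not the complete cleanup function):

  /* In rxe_qp_do_cleanup(): scq/rcq may be NULL when cleanup runs on the
   * rxe_qp_from_init() error path, so check before dereferencing. */
  if (qp->scq)
          atomic_dec(&qp->scq->num_wq);
  if (qp->rcq)
          atomic_dec(&qp->rcq->num_wq);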
-
Cheng Xu authored
If siw_recv_mpa_rr() returns -EAGAIN, the MPA reply hasn't been received completely, and IW_CM_EVENT_CONNECT_REPLY should not be reported in this case. Doing so may trigger a call trace in iw_cm. A simple way to trigger this:

  server: ib_send_lat
  client: ib_send_lat -R <server_ip>

The call trace looks like this:

kernel BUG at drivers/infiniband/core/iwcm.c:894!
invalid opcode: 0000 [#1] PREEMPT SMP NOPTI
<...>
Workqueue: iw_cm_wq cm_work_handler [iw_cm]
Call Trace:
 <TASK>
 cm_work_handler+0x1dd/0x370 [iw_cm]
 process_one_work+0x1e2/0x3b0
 worker_thread+0x49/0x2e0
 ? rescuer_thread+0x370/0x370
 kthread+0xe5/0x110
 ? kthread_complete_and_exit+0x20/0x20
 ret_from_fork+0x1f/0x30
 </TASK>

Fixes: 6c52fdc2 ("rdma/siw: connection management")
Link: https://lore.kernel.org/r/dae34b5fd5c2ea2bd9744812c1d2653a34a94c67.1657706960.git.chengyou@linux.alibaba.com
Signed-off-by: Cheng Xu <chengyou@linux.alibaba.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
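A hedged sketch of the handling described above, simplified from siw's reply-processing path; only siw_recv_mpa_rr() is named in the commit text, the surrounding fragment is illustrative:

  /* MPA reply not yet complete: keep waiting for more data instead of
   * reporting IW_CM_EVENT_CONNECT_REPLY with a partial reply. */
  rv = siw_recv_mpa_rr(cep);
  if (rv == -EAGAIN)
          return 0;   /* stay in the current CM state; more data will arrive */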
-
Haoyue Xu authored
Since ECC memory maintains a memory system immune to single-bit errors, add support for correcting 1-bit ECC errors, which prevents a 1-bit ECC error from becoming an uncorrected error. When a 1-bit ECC error happens in the internal RAM of the RoCE engine, such as the QPC table, since the error is raised by a read, the RoCE engine corrects it by writing the corrected data back.

Link: https://lore.kernel.org/r/20220714134353.16700-6-liangwenpeng@huawei.com
Signed-off-by: Haoyue Xu <xuhaoyue1@hisilicon.com>
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
-