- 08 Sep, 2021 1 commit
-
Niklas Schnelle authored
In commit 8010d74b ("RDMA/mlx5: Split the WR setup out of mlx5_ib_update_xlt()") the allocation logic was split out of mlx5_ib_update_xlt() and changed to enable better OOM handling. Sadly this change introduced a miscalculation of the number of entries actually allocated: under memory pressure it can become 0, which on s390 makes dma_map_single() fail. It can also corrupt the free pages list when the wrong number of entries is used to calculate sg->length, which is passed to free_pages(). Fix this by using the allocation size instead of misusing get_order(size).

Cc: stable@vger.kernel.org
Fixes: 8010d74b ("RDMA/mlx5: Split the WR setup out of mlx5_ib_update_xlt()")
Link: https://lore.kernel.org/r/20210908081849.7948-1-schnelle@linux.ibm.com
Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
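A minimal userspace sketch of the arithmetic behind this fix, assuming 4 KiB pages and made-up sizes (get_order() below only mimics the kernel helper): dividing the page order by the entry size easily yields 0 entries, while dividing the allocated size gives the real count.

/* Illustrative sketch, not the driver code: why nents must come from the
 * allocated size rather than from get_order(size). */
#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/* Mimics the kernel's get_order() for this example. */
static int get_order(unsigned long size)
{
	int order = 0;

	size = (size - 1) >> PAGE_SHIFT;
	while (size) {
		order++;
		size >>= 1;
	}
	return order;
}

int main(void)
{
	unsigned long ent_size = 16;           /* bytes per XLT entry (example) */
	unsigned long asked = 8 * PAGE_SIZE;   /* size originally requested */
	unsigned long got = PAGE_SIZE;         /* smaller fallback allocation */

	/* Wrong: get_order() returns a small page order, not a byte count,
	 * so dividing it by the entry size can easily become 0. */
	unsigned long bad_nents = get_order(got) / ent_size;
	/* Right: derive the entry count from the size actually allocated. */
	unsigned long good_nents = got / ent_size;

	printf("asked=%lu got=%lu bad_nents=%lu good_nents=%lu\n",
	       asked, got, bad_nents, good_nents);
	return 0;
}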
-
- 30 Aug, 2021 2 commits
-
Jason Gunthorpe authored
From Maor Gottlieb:

====================
Fix the use of nents and orig_nents in the sg table append helpers. nents should be used by the DMA layer to store the number of DMA mapped sges, while orig_nents is the number of CPU sges. Since the sg append logic doesn't always create an SGL with exactly orig_nents entries, store a total_nents as well to allow the table to be properly freed, and reorganize the freeing logic so it is shared across all the use cases.
====================

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

* 'sg_nents':
  RDMA: Use the sg_table directly and remove the opencoded version from umem
  lib/scatterlist: Fix wrong update of orig_nents
  lib/scatterlist: Provide a dedicated function to support table append
-
Lior Nahmanson authored
In order to create DCS QPs, we don't need to rely on both log_max_dci_stream_channels and log_max_dci_errored_streams capabilities.

Fixes: 11656f59 ("RDMA/mlx5: Add DCS offload support")
Link: https://lore.kernel.org/r/3e7b3363fd73686176cc584295e86832a7cf99b2.1630320354.git.leonro@nvidia.com
Signed-off-by: Lior Nahmanson <liorna@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
- 26 Aug, 2021 7 commits
-
Xinhao Liu authored
Just delete unnecessary blank lines.

Link: https://lore.kernel.org/r/1629985056-57004-8-git-send-email-liangwenpeng@huawei.com
Signed-off-by: Xinhao Liu <liuxinhao5@hisilicon.com>
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Yixing Liu authored
Encapsulate the QP doorbell update into two functions: one for user QPs and one for kernel QPs.

Link: https://lore.kernel.org/r/1629985056-57004-7-git-send-email-liangwenpeng@huawei.com
Signed-off-by: Yixing Liu <liuyixing1@huawei.com>
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Wenpeng Liang authored
The driver should first allocate the workqueue and request the IRQ, and only then enable the IRQ.

Link: https://lore.kernel.org/r/1629985056-57004-6-git-send-email-liangwenpeng@huawei.com
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Weihang Li authored
There is no need to print an error for hw_v1.

Link: https://lore.kernel.org/r/1629985056-57004-5-git-send-email-liangwenpeng@huawei.com
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Wenpeng Liang authored
According to the IB specification, the destination qpn is allowed to be filled into the qpc only when the qp transitions from Init to RTR, so this code is unused.

Link: https://lore.kernel.org/r/1629985056-57004-4-git-send-email-liangwenpeng@huawei.com
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Wenpeng Liang authored
The resp passed to user space represents the enable flags of the QP; an incomplete assignment will cause some user-space features to be disabled.

Fixes: 90ae0b57 ("RDMA/hns: Combine enable flags of qp")
Fixes: aba457ca ("RDMA/hns: Support owner mode doorbell")
Link: https://lore.kernel.org/r/1629985056-57004-3-git-send-email-liangwenpeng@huawei.com
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Wenpeng Liang authored
The bit width of dqpn is 24 bits, so storing it in a u8 causes a truncation error.

Fixes: 926a01dc ("RDMA/hns: Add QP operations support for hip08 SoC")
Link: https://lore.kernel.org/r/1629985056-57004-2-git-send-email-liangwenpeng@huawei.com
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
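A tiny standalone illustration of the truncation being fixed (the QPN value here is made up): a 24-bit QPN assigned to a u8 keeps only its low 8 bits.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t qpn = 0x12345;   /* 24-bit QP number, example value */
	uint8_t as_u8 = qpn;      /* a u8 field keeps only 0x45 */
	uint32_t as_u32 = qpn;    /* a u32 field keeps the full 0x12345 */

	printf("qpn=0x%x as_u8=0x%x as_u32=0x%x\n", qpn, as_u8, as_u32);
	return 0;
}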
-
- 25 Aug, 2021 8 commits
-
Cai Huoqing authored
Use an SPDX-License-Identifier instead of the verbose license text.

Link: https://lore.kernel.org/r/20210823042622.109-1-caihuoqing@baidu.com
Signed-off-by: Cai Huoqing <caihuoqing@baidu.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Cai Huoqing authored
Use an SPDX-License-Identifier instead of the verbose license text.

Link: https://lore.kernel.org/r/20210823023530.48-1-caihuoqing@baidu.com
Signed-off-by: Cai Huoqing <caihuoqing@baidu.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Junxian Huang authored
dip_idx and dgid should have a one-to-one mapping relationship, but when qp_num loops back to the start number, two different dgids may incorrectly be associated with the same dip_idx. One solution is to store the qp_nums that are not assigned to a dip_idx in an array. When a dip_idx needs to be allocated to a new dgid, a spare qp_num is extracted and assigned to the dip_idx.

Fixes: f91696f2 ("RDMA/hns: Support congestion control type selection according to the FW")
Link: https://lore.kernel.org/r/1629884592-23424-4-git-send-email-liangwenpeng@huawei.com
Signed-off-by: Junxian Huang <huangjunxian4@hisilicon.com>
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
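A minimal sketch of that spare-pool idea (not the hns driver code; names and the pool size are made up): qp_nums not bound to a dip_idx sit in a pool, and a new dgid takes one out to use as its dip_idx.

#include <stdio.h>

#define POOL_SIZE 8

static unsigned int spare_qpn[POOL_SIZE];
static unsigned int spare_cnt;

/* qpn currently has no dip_idx bound to it, keep it for reuse */
static void put_spare(unsigned int qpn)
{
	if (spare_cnt < POOL_SIZE)
		spare_qpn[spare_cnt++] = qpn;
}

/* a new dgid needs its own dip_idx: hand out a spare qpn */
static int get_dip_idx(void)
{
	if (!spare_cnt)
		return -1;
	return (int)spare_qpn[--spare_cnt];
}

int main(void)
{
	put_spare(3);
	put_spare(5);
	printf("new dgid gets dip_idx %d\n", get_dip_idx());
	return 0;
}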
-
Junxian Huang authored
When the dgid-dip_idx mapping relationship exists, dip should be assigned.

Fixes: f91696f2 ("RDMA/hns: Support congestion control type selection according to the FW")
Link: https://lore.kernel.org/r/1629884592-23424-3-git-send-email-liangwenpeng@huawei.com
Signed-off-by: Junxian Huang <huangjunxian4@hisilicon.com>
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Junxian Huang authored
dip_idx is associated with qp_num, whose data type is u32. However, dip_idx is incorrectly defined as u8 data in the hns_roce_dip struct, which leads to data truncation during value assignment.

Fixes: f91696f2 ("RDMA/hns: Support congestion control type selection according to the FW")
Link: https://lore.kernel.org/r/1629884592-23424-2-git-send-email-liangwenpeng@huawei.com
Signed-off-by: Junxian Huang <huangjunxian4@hisilicon.com>
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Yixing Liu authored
In the RNR NAK scenario, according to the specification, when no credit is available, only the first fragment of the send request can be sent. The LSN (Limit Sequence Number) field should be 0, or the entire packet will be resent.

Fixes: 926a01dc ("RDMA/hns: Add QP operations support for hip08 SoC")
Link: https://lore.kernel.org/r/1629883169-2306-1-git-send-email-liangwenpeng@huawei.com
Signed-off-by: Yixing Liu <liuyixing1@huawei.com>
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Shaokun Zhang authored
Functions 'irdma_alloc_ws_node_id' and 'irdma_free_ws_node_id' are declared twice, so remove the repeated declaration.

Link: https://lore.kernel.org/r/1629861674-53343-1-git-send-email-zhangshaokun@hisilicon.com
Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Håkon Bugge authored
A MAD packet is sent as an unreliable datagram (UD). SA requests are sent as MAD packets. As such, SA requests or responses may be silently dropped.

IB Core's MAD layer has a timeout and retry mechanism, which, amongst others, is used by RDMA CM. But it is not used by SA queries. The lack of retries of SA queries means that when a packet is lost, the long specified timeout must expire and an error is returned; the ULP or user-land process has to perform the retry.

Fix this by taking advantage of the MAD layer's retry mechanism. First, a check against a zero timeout is added in rdma_resolve_route(). In send_mad(), we set the MAD layer timeout to one tenth of the specified timeout and the number of retries to 10. The special case when the timeout is less than 10 is handled.

With this fix:

# ucmatose -c 1000 -S 1024 -C 1

runs stable on an InfiniBand fabric. Without this fix, we see an intermittent behavior and it errors out with:

cmatose: event: RDMA_CM_EVENT_ROUTE_ERROR, error: -110

(110 is ETIMEDOUT)

Link: https://lore.kernel.org/r/1628784755-28316-1-git-send-email-haakon.bugge@oracle.com
Signed-off-by: Håkon Bugge <haakon.bugge@oracle.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
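A small standalone sketch of the timeout split described above (illustrative only; how send_mad() actually handles the sub-10 special case is an assumption here, shown simply as not splitting).

#include <stdio.h>

static void split_timeout(unsigned long timeout_ms,
			  unsigned long *mad_timeout_ms, int *retries)
{
	if (timeout_ms >= 10) {
		*mad_timeout_ms = timeout_ms / 10;  /* per-MAD timeout */
		*retries = 10;                      /* MAD layer retransmits */
	} else {
		*mad_timeout_ms = timeout_ms;       /* too small to split (assumed) */
		*retries = 0;
	}
}

int main(void)
{
	unsigned long mad_ms;
	int retries;

	split_timeout(2000, &mad_ms, &retries);
	printf("2000 ms -> per-MAD %lu ms, %d retries\n", mad_ms, retries);
	return 0;
}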
-
- 24 Aug, 2021 6 commits
-
Maor Gottlieb authored
This allows using the normal sg_table APIs and makes all the code cleaner. Remove sgt, nents and nmapd from ib_umem.

Link: https://lore.kernel.org/r/20210824142531.3877007-4-maorg@nvidia.com
Signed-off-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Maor Gottlieb authored
orig_nents should represent the number of entries with pages, but __sg_alloc_table_from_pages sets orig_nents to the total number of entries in the table. This is wrong when the API is used for dynamic allocation where not all the table entries are mapped with pages. It wasn't observed until now, since the RDMA umem code, which uses this API in the dynamic form, doesn't use orig_nents implicitly or explicitly via the scatterlist APIs.

Fix it by changing the append API to track the SG append table state and adding an API to free the append table according to the total number of entries in the table. Now all APIs set orig_nents to the number of entries with pages.

Fixes: 07da1223 ("lib/scatterlist: Add support in dynamic allocation of SG table from pages")
Link: https://lore.kernel.org/r/20210824142531.3877007-3-maorg@nvidia.com
Signed-off-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
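A hedged, kernel-context usage sketch of the append API shape this series introduces (the wrapper function and its arguments are illustrative, and exact signatures may differ between kernel versions).

#include <linux/scatterlist.h>

/* Illustrative wrapper, not driver code: the append state remembers
 * total_nents, so the table is freed correctly even when fewer entries
 * than allocated ended up holding pages (orig_nents < total_nents). */
static int map_pages_sketch(struct page **pages, unsigned int npages)
{
	struct sg_append_table append = {};
	int ret;

	ret = sg_alloc_append_table_from_pages(&append, pages, npages, 0,
					       (unsigned long)npages << PAGE_SHIFT,
					       UINT_MAX, 0, GFP_KERNEL);
	if (ret)
		return ret;

	/* ... use append.sgt; orig_nents counts only entries with pages ... */

	sg_free_append_table(&append);
	return 0;
}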
-
Maor Gottlieb authored
RDMA is the only in-kernel user that uses __sg_alloc_table_from_pages to append pages dynamically. In the next patch that mode will be extended and that function will get more parameters, so separate it into a dedicated function to make the change clearer.

Link: https://lore.kernel.org/r/20210824142531.3877007-2-maorg@nvidia.com
Signed-off-by: Maor Gottlieb <maorg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Yangyang Li authored
The resources that used the hns bitmap interface (qp, cq, mr, pd, xrcd, uar, srq) have been changed to the IDA interface, so hns' own, now unused, bitmap interfaces need to be deleted.

Link: https://lore.kernel.org/r/1629336980-17499-4-git-send-email-liangwenpeng@huawei.com
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Yangyang Li authored
Switch srq index allocation and release from hns' own bitmap interface to the IDA interface.

Link: https://lore.kernel.org/r/1629336980-17499-3-git-send-email-liangwenpeng@huawei.com
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Yangyang Li authored
Switch uar index allocation and release from hns' own bitmap interface to the IDA interface.

Link: https://lore.kernel.org/r/1629336980-17499-2-git-send-email-liangwenpeng@huawei.com
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
- 23 Aug, 2021 5 commits
-
Lang Cheng authored
The ownerbit mode is for external card mode. Make it controlled by the firmware.

Fixes: aba457ca ("RDMA/hns: Support owner mode doorbell")
Link: https://lore.kernel.org/r/1629539607-33217-4-git-send-email-liangwenpeng@huawei.com
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Yixing Liu authored
The stash feature is enabled by default on HIP09.

Fixes: f93c39bc ("RDMA/hns: Add support for QP stash")
Fixes: bfefae9f ("RDMA/hns: Add support for CQ stash")
Link: https://lore.kernel.org/r/1629539607-33217-3-git-send-email-liangwenpeng@huawei.com
Signed-off-by: Yixing Liu <liuyixing1@huawei.com>
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Lang Cheng authored
The CMDQ supports non-interrupt mode only, and the firmware ignores this mode flag, so remove it.

Fixes: a04ff739 ("RDMA/hns: Add command queue support for hip08 RoCE driver")
Link: https://lore.kernel.org/r/1629539607-33217-2-git-send-email-liangwenpeng@huawei.com
Signed-off-by: Lang Cheng <chenglang@huawei.com>
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Christophe JAILLET authored
The wrappers in include/linux/pci-dma-compat.h should go away.

The patch has been generated with the coccinelle script below. It has been hand modified to use 'dma_set_mask_and_coherent()' instead of 'pci_set_dma_mask()/pci_set_consistent_dma_mask()' when applicable. This is less verbose.

It has been compile tested.

@@
@@
-    PCI_DMA_BIDIRECTIONAL
+    DMA_BIDIRECTIONAL

@@
@@
-    PCI_DMA_TODEVICE
+    DMA_TO_DEVICE

@@
@@
-    PCI_DMA_FROMDEVICE
+    DMA_FROM_DEVICE

@@
@@
-    PCI_DMA_NONE
+    DMA_NONE

@@
expression e1, e2, e3;
@@
-    pci_alloc_consistent(e1, e2, e3)
+    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)

@@
expression e1, e2, e3;
@@
-    pci_zalloc_consistent(e1, e2, e3)
+    dma_alloc_coherent(&e1->dev, e2, e3, GFP_)

@@
expression e1, e2, e3, e4;
@@
-    pci_free_consistent(e1, e2, e3, e4)
+    dma_free_coherent(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_map_single(e1, e2, e3, e4)
+    dma_map_single(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_unmap_single(e1, e2, e3, e4)
+    dma_unmap_single(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4, e5;
@@
-    pci_map_page(e1, e2, e3, e4, e5)
+    dma_map_page(&e1->dev, e2, e3, e4, e5)

@@
expression e1, e2, e3, e4;
@@
-    pci_unmap_page(e1, e2, e3, e4)
+    dma_unmap_page(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_map_sg(e1, e2, e3, e4)
+    dma_map_sg(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_unmap_sg(e1, e2, e3, e4)
+    dma_unmap_sg(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_single_for_cpu(e1, e2, e3, e4)
+    dma_sync_single_for_cpu(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_single_for_device(e1, e2, e3, e4)
+    dma_sync_single_for_device(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_sg_for_cpu(e1, e2, e3, e4)
+    dma_sync_sg_for_cpu(&e1->dev, e2, e3, e4)

@@
expression e1, e2, e3, e4;
@@
-    pci_dma_sync_sg_for_device(e1, e2, e3, e4)
+    dma_sync_sg_for_device(&e1->dev, e2, e3, e4)

@@
expression e1, e2;
@@
-    pci_dma_mapping_error(e1, e2)
+    dma_mapping_error(&e1->dev, e2)

@@
expression e1, e2;
@@
-    pci_set_dma_mask(e1, e2)
+    dma_set_mask(&e1->dev, e2)

@@
expression e1, e2;
@@
-    pci_set_consistent_dma_mask(e1, e2)
+    dma_set_coherent_mask(&e1->dev, e2)

Link: https://lore.kernel.org/r/259e53b7a00f64bf081d41da8761b171b2ad8f5c.1629634798.git.christophe.jaillet@wanadoo.fr
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Li Zhijian authored
current_seq was removed since the commit below.

Fixes: 36f30e48 ("IB/core: Improve ODP to use hmm_range_fault()")
Link: https://lore.kernel.org/r/20210823035246.3506-1-lizhijian@cn.fujitsu.com
Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
- 22 Aug, 2021 8 commits
-
Gal Pressman authored
The vector field naming is quite confusing; it is better referred to as irqn.

Link: https://lore.kernel.org/r/20210811151131.39138-4-galpress@amazon.com
Reviewed-by: Firas JahJah <firasj@amazon.com>
Reviewed-by: Yossi Leybovich <sleybo@amazon.com>
Signed-off-by: Gal Pressman <galpress@amazon.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Gal Pressman authored
The cpu field in the efa_irq struct is unused, remove it.

Link: https://lore.kernel.org/r/20210811151131.39138-3-galpress@amazon.com
Reviewed-by: Firas JahJah <firasj@amazon.com>
Reviewed-by: Yossi Leybovich <sleybo@amazon.com>
Signed-off-by: Gal Pressman <galpress@amazon.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Gioh Kim authored
Casting to (void) does nothing, so remove these casts.

Link: https://lore.kernel.org/r/20210806112112.124313-7-haris.iqbal@ionos.com
Suggested-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Gioh Kim <gi-oh.kim@ionos.com>
Reviewed-by: Md Haris Iqbal <haris.iqbal@ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@ionos.com>
Signed-off-by: Md Haris Iqbal <haris.iqbal@ionos.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Gioh Kim authored
There is a mismatch in counting inflight IOs after changing the multipath policy. For example, we started a fio test with the round-robin policy and then changed the policy to min-inflight. IOs created under the RR policy are finished under the min-inflight policy and only decrement the inflight counter, so the counter can become negative. Conversely, when we started a fio test with the min-inflight policy and changed the policy to round-robin, IOs created under the min-inflight policy incremented the inflight IO counter, but it was never decremented because the policy was round-robin when the IO finished.

So IOs should be counted only if they were created under the min-inflight policy, regardless of the policy in force when the IO finishes. This patch adds a field mp_policy in struct rtrs_clt_io_req and stores the multipath policy when an object of rtrs_clt_io_req is created. Then rtrs-clt checks the mp_policy of the struct rtrs_clt_io_req instead of the struct rtrs_clt.

Link: https://lore.kernel.org/r/20210806112112.124313-6-haris.iqbal@ionos.com
Signed-off-by: Gioh Kim <gi-oh.kim@ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@ionos.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Md Haris Iqbal <haris.iqbal@ionos.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
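A standalone sketch of that accounting rule (the structs and names here are simplified stand-ins, not the rtrs code): the policy in force at submit time is recorded in the request, and only that recorded value is consulted at completion.

#include <stdio.h>

enum mp_policy { MP_POLICY_RR, MP_POLICY_MIN_INFLIGHT };

struct io_req { enum mp_policy mp_policy; };

static int inflight;

static void io_submit(struct io_req *req, enum mp_policy current_policy)
{
	req->mp_policy = current_policy;        /* remember the policy at submit */
	if (req->mp_policy == MP_POLICY_MIN_INFLIGHT)
		inflight++;
}

static void io_complete(struct io_req *req)
{
	/* Decrement only if this request was counted at submit time. */
	if (req->mp_policy == MP_POLICY_MIN_INFLIGHT)
		inflight--;
}

int main(void)
{
	struct io_req a, b;

	io_submit(&a, MP_POLICY_MIN_INFLIGHT);
	/* multipath policy switched to round-robin here */
	io_submit(&b, MP_POLICY_RR);
	io_complete(&a);
	io_complete(&b);
	printf("inflight=%d\n", inflight);      /* stays balanced at 0 */
	return 0;
}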
-
Gioh Kim authored
The IO performance test with fio after swapping the likely and unlikely macros in all if-statements shows no difference. They do not help the performance of rtrs. Thanks to Haakon Bugge for the test scenario.

The fio test did random reads on 32 rnbd devices with 64 processes.

Test environment:
- Intel(R) Xeon(R) Gold 6130 CPU @ 2.10GHz
- 376G memory
- kernel version: 5.4.86
- gcc version: gcc (Debian 8.3.0-6) 8.3.0
- Infiniband controller: Mellanox Technologies MT27800 Family [ConnectX-5]

Test result:
- before swapping: IOPS=829k, BW=3239MiB/s
- after swapping: IOPS=829k, BW=3238MiB/s
- remove all (un)likely: IOPS=829k, BW=3238MiB/s

Link: https://lore.kernel.org/r/20210806112112.124313-5-haris.iqbal@ionos.com
Signed-off-by: Gioh Kim <gi-oh.kim@ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@ionos.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Md Haris Iqbal <haris.iqbal@ionos.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jack Wang authored
The two functions are unused, so just remove them.

Link: https://lore.kernel.org/r/20210806112112.124313-3-haris.iqbal@ionos.com
Signed-off-by: Jack Wang <jinpu.wang@ionos.com>
Reviewed-by: Md Haris Iqbal <haris.iqbal@ionos.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Md Haris Iqbal <haris.iqbal@ionos.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Md Haris Iqbal authored
When all the paths are removed for a session, the addition of the first path is like a new session for the storage server. Hence, for_new_clt has to be set to 1.

Link: https://lore.kernel.org/r/20210806112112.124313-2-haris.iqbal@ionos.com
Signed-off-by: Md Haris Iqbal <haris.iqbal@ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@ionos.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux
Jason Gunthorpe authored
Saeed Mahameed says:

====================
This pulls mlx5-next branch which includes patches already reviewed on net-next and rdma mailing lists.

1) mlx5 single E-Switch FDB for lag
2) IB/mlx5: Rename is_apu_thread_cq function to is_apu_cq
3) Add DCS caps & fields support

We need this in net-next as multiple features are dependent on the single FDB feature.
====================

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

* mellanox/mlx5-next:
  net/mlx5: Lag, Create shared FDB when in switchdev mode
  net/mlx5: E-Switch, add logic to enable shared FDB
  net/mlx5: Lag, move lag destruction to a workqueue
  net/mlx5: Lag, properly lock eswitch if needed
  net/mlx5: Add send to vport rules on paired device
  net/mlx5: E-Switch, Add event callback for representors
  net/mlx5e: Use shared mappings for restoring from metadata
  net/mlx5e: Add an option to create a shared mapping
  net/mlx5: E-Switch, set flow source for send to uplink rule
  RDMA/mlx5: Add shared FDB support
  {net, RDMA}/mlx5: Extend send to vport rules
  RDMA/mlx5: Fill port info based on the relevant eswitch
  net/mlx5: Lag, add initial logic for shared FDB
  net/mlx5: Return mdev from eswitch
  IB/mlx5: Rename is_apu_thread_cq function to is_apu_cq
-
- 19 Aug, 2021 3 commits
-
Håkon Bugge authored
ib_sa_service_rec_query() was introduced in kernel v2.6.13 by commit cbae32c5 ("[PATCH] IB: Add Service Record support to SA client") in 2005. It was not used then and has never been used since. Remove it and its related functions/structs.

Link: https://lore.kernel.org/r/1628702736-12651-1-git-send-email-haakon.bugge@oracle.com
Signed-off-by: Håkon Bugge <haakon.bugge@oracle.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Prabhakar Kushwaha authored
The qedr code is tightly coupled with both of the existing INIT transitions: during the first INIT transition all variables are reset, and the RESET state is checked in post_recv() before any posting. Commit dc70f7c3 ("RDMA/cma: Remove unnecessary INIT->INIT transition") exposed this bug. So move the variable reset to qedr_set_common_qp_params() and also avoid the RESET state check in post_recv().

Link: https://lore.kernel.org/r/20210811051650.14914-1-pkushwaha@marvell.com
Signed-off-by: Michal Kalderon <mkalderon@marvell.com>
Signed-off-by: Ariel Elior <aelior@marvell.com>
Signed-off-by: Shai Malin <smalin@marvell.com>
Signed-off-by: Prabhakar Kushwaha <pkushwaha@marvell.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Christoph Hellwig authored
Just use seq_write to copy the stats into the seq_file buffer instead of poking holes into the seq_file abstraction.

Link: https://lore.kernel.org/r/20210810151711.1795374-1-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@cornelisnetworks.com>
Tested-by: Mike Marciniszyn <mike.marciniszyn@cornelisnetworks.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
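A hedged kernel-context sketch of the pattern (the show callback and stats buffer are placeholders, not the driver code): the raw counter block goes into the seq_file via seq_write() rather than by touching the seq_file's internal buffer.

#include <linux/seq_file.h>

/* Placeholder show callback; 'stats' stands in for the driver's counters. */
static int stats_seq_show(struct seq_file *s, void *unused)
{
	u64 stats[4] = { 0 };

	/* Copy the raw counter block into the seq_file buffer in one call. */
	seq_write(s, stats, sizeof(stats));
	return 0;
}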
-