- 24 Sep, 2020 4 commits
-
Yangyang Li authored
For HIP09, the size of the SCCC (Soft Congestion Control Context) is increased from 32 bytes to 64 bytes. The hardware gets the SCCC configuration from the driver instead of using a fixed value.

Link: https://lore.kernel.org/r/1600245806-56321-5-git-send-email-liweihang@huawei.com
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Wenpeng Liang authored
The new version of RoCEE supports QPC sizes of 256B or 512B, so that HIP09 can support new congestion control algorithms by using a larger QPC.

Link: https://lore.kernel.org/r/1600245806-56321-4-git-send-email-liweihang@huawei.com
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Wenpeng Liang authored
The new version of RoCEE supports CQE sizes of 32B or 64B. Bus performance can be improved by using a larger CQE.

Link: https://lore.kernel.org/r/1600245806-56321-3-git-send-email-liweihang@huawei.com
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Wenpeng Liang authored
The new version of RoCEE supports CEQE sizes of 4B or 64B and AEQE sizes of 16B or 64B. Bus performance can be improved by using a larger EQE.

Link: https://lore.kernel.org/r/1600245806-56321-2-git-send-email-liweihang@huawei.com
Signed-off-by: Wenpeng Liang <liangwenpeng@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
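A minimal sketch of how such version-dependent entry sizes could be selected at init time; the constants, struct, and field names below are illustrative assumptions, not the actual hns_roce code.

#include <linux/types.h>

/* Illustrative values; the real driver derives these from HW capabilities. */
#define EXAMPLE_CQE_SZ_V2       32      /* HIP08 */
#define EXAMPLE_CQE_SZ_V3       64      /* HIP09 */
#define EXAMPLE_AEQE_SZ_V2      16
#define EXAMPLE_AEQE_SZ_V3      64

struct example_caps {
        u32 hw_version;
        u32 cqe_sz;
        u32 aeqe_sz;
};

static void example_set_entry_sizes(struct example_caps *caps)
{
        bool new_hw = caps->hw_version >= 3;    /* HIP09 or newer */

        /* Program the larger entries only when the HW supports them. */
        caps->cqe_sz  = new_hw ? EXAMPLE_CQE_SZ_V3 : EXAMPLE_CQE_SZ_V2;
        caps->aeqe_sz = new_hw ? EXAMPLE_AEQE_SZ_V3 : EXAMPLE_AEQE_SZ_V2;
}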
-
- 22 Sep, 2020 9 commits
-
Julia Lawall authored
sg_init_table() zeroes its first argument, so the allocation of that argument doesn't have to.

The semantic patch that makes this change is as follows: (http://coccinelle.lip6.fr/)

// <smpl>
@@
expression x,n,flags;
@@

x =
-       kcalloc
+       kmalloc_array
        (n,sizeof(*x),flags)
...
sg_init_table(x,n)
// </smpl>

Link: https://lore.kernel.org/r/1600601186-7420-6-git-send-email-Julia.Lawall@inria.fr
Signed-off-by: Julia Lawall <Julia.Lawall@inria.fr>
Acked-by: Gal Pressman <galpress@amazon.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
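In kernel C, the transformation amounts to the following sketch (function and variable names are hypothetical): since sg_init_table() memsets the whole table, allocating with kcalloc() zeroes it twice.

#include <linux/scatterlist.h>
#include <linux/slab.h>

static struct scatterlist *example_alloc_sgl(unsigned int nents, gfp_t gfp)
{
        struct scatterlist *sgl;

        /* Before: sgl = kcalloc(nents, sizeof(*sgl), gfp); */
        sgl = kmalloc_array(nents, sizeof(*sgl), gfp);
        if (!sgl)
                return NULL;

        /* sg_init_table() zeroes the whole table, so kcalloc() was redundant. */
        sg_init_table(sgl, nents);
        return sgl;
}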
-
Sindhu, Devale authored
Occasionally an ib_write_bw crash is seen due to access of a pd object in i40iw_sc_qp_destroy after it is freed. Destroy QP is not synchronous in i40iw, so the iwqp object could be referencing a pd object that is freed by the ib core as a result of a successful return from i40iw_destroy_qp.

Wait in i40iw_destroy_qp till all QP references are released and destroy the QP and its associated resources before returning. Switch to the refcount API vs the atomic API for lifetime management of the QP.

RIP: 0010:i40iw_sc_qp_destroy+0x4b/0x120 [i40iw]
[...]
RSP: 0018:ffffb4a7042e3ba8 EFLAGS: 00010002
RAX: 0000000000000000 RBX: 0000000000000001 RCX: dead000000000122
RDX: ffffb4a7042e3bac RSI: ffff8b7ef9b1e940 RDI: ffff8b7efbf09080
RBP: 0000000000000000 R08: 0000000000000001 R09: 0000000000000000
R10: 8080808080808080 R11: 0000000000000010 R12: ffff8b7efbf08050
R13: 0000000000000001 R14: ffff8b7f15042928 R15: ffff8b7ef9b1e940
FS:  0000000000000000(0000) GS:ffff8b7f2fa00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000400 CR3: 000000020d60a006 CR4: 00000000001606e0
Call Trace:
 i40iw_exec_cqp_cmd+0x4d3/0x5c0 [i40iw]
 ? try_to_wake_up+0x1ea/0x5d0
 ? __switch_to_asm+0x40/0x70
 i40iw_process_cqp_cmd+0x95/0xa0 [i40iw]
 i40iw_handle_cqp_op+0x42/0x1a0 [i40iw]
 ? cm_event_handler+0x13c/0x1f0 [iw_cm]
 i40iw_rem_ref+0xa0/0xf0 [i40iw]
 cm_work_handler+0x99c/0xd10 [iw_cm]
 process_one_work+0x1a1/0x360
 worker_thread+0x30/0x380
 ? process_one_work+0x360/0x360
 kthread+0x10c/0x130
 ? kthread_park+0x80/0x80
 ret_from_fork+0x35/0x40

Fixes: d3749841 ("i40iw: add files for iwarp interface")
Link: https://lore.kernel.org/r/20200916131811.2077-1-shiraz.saleem@intel.com
Reported-by: Kamal Heib <kheib@redhat.com>
Signed-off-by: Sindhu, Devale <sindhu.devale@intel.com>
Signed-off-by: Shiraz, Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
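A hedged sketch of the lifetime pattern described above, using refcount_t plus a completion that destroy waits on; the structure and helper names are illustrative, not the exact i40iw code.

#include <linux/refcount.h>
#include <linux/completion.h>
#include <linux/slab.h>

struct example_qp {
        refcount_t refcnt;              /* refcount_set(..., 1) at create time */
        struct completion free_qp;      /* init_completion() at create time    */
};

static void example_qp_put(struct example_qp *qp)
{
        /* The last reference wakes the waiter in destroy. */
        if (refcount_dec_and_test(&qp->refcnt))
                complete(&qp->free_qp);
}

static void example_destroy_qp(struct example_qp *qp)
{
        example_qp_put(qp);                     /* drop the creation reference */
        wait_for_completion(&qp->free_qp);      /* wait for all other users    */
        /* Safe now: tear down HW resources and free the QP. */
        kfree(qp);
}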
-
Daniel Kranzdorf authored
Add separate stats types for send messages and RDMA read work requests.

Link: https://lore.kernel.org/r/20200915141449.8428-3-galpress@amazon.com
Signed-off-by: Daniel Kranzdorf <dkkranzd@amazon.com>
Reviewed-by: Yossi Leybovich <sleybo@amazon.com>
Signed-off-by: Gal Pressman <galpress@amazon.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Gal Pressman authored
The keep alive received counter is a software stat, keep it grouped with all other software stats. Since all stored stats are software stats, remove the efa_sw_stats struct and use efa_stats instead.

Link: https://lore.kernel.org/r/20200915141449.8428-2-galpress@amazon.com
Reviewed-by: Daniel Kranzdorf <dkkranzd@amazon.com>
Reviewed-by: Yossi Leybovich <sleybo@amazon.com>
Signed-off-by: Gal Pressman <galpress@amazon.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Leon Romanovsky authored
Use rdma_restrack_set_name() and rdma_restrack_parent_name() instead of tricky uses of rdma_restrack_attach_task()/rdma_restrack_uadd(). This makes all restrack entries uniformly added using rdma_restrack_add().

Link: https://lore.kernel.org/r/20200922091106.2152715-6-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Leon Romanovsky authored
Have a single rdma_restrack_add() that adds an entry; there is no reason to split the user/kernel paths here, as rdma_restrack_set_task() is responsible for that difference. This patch prepares the code for the future requirement of making restrack mandatory for managing ib objects.

Link: https://lore.kernel.org/r/20200922091106.2152715-5-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Leon Romanovsky authored
Refactor the restrack code to make sure the kref inside the restrack entry properly kref's the object in which it is embedded. This slight change is needed for future conversions of MR and QP which are refcounted before the release and kfree.

The ideal flow from the ib_core perspective is as follows:
* Allocate the ib_* structure with rdma_zalloc_*.
* Set everything that is known to ib_core on that newly created object.
* Initialize the kref with restrack help.
* Call the driver-specific allocation functions.
* Insert into the restrack DB.
....
* Return and release the restrack with restrack_put.

Largely this means a rdma_restrack_new() should be called near allocating the containing structure.

Link: https://lore.kernel.org/r/20200922091106.2152715-4-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
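A hedged sketch of that flow using the helpers named in this series; the exact signatures and error handling are assumptions, not verbatim ib_core code.

#include <linux/slab.h>
#include <linux/err.h>
#include <rdma/ib_verbs.h>
#include <rdma/restrack.h>

static struct ib_cq *example_create_cq(struct ib_device *dev,
                                       const struct ib_cq_init_attr *attr,
                                       struct ib_udata *udata)
{
        struct ib_cq *cq;
        int ret;

        cq = rdma_zalloc_drv_obj(dev, ib_cq);           /* 1. allocate ib_* object   */
        if (!cq)
                return ERR_PTR(-ENOMEM);

        cq->device = dev;                               /* 2. set ib_core fields     */
        rdma_restrack_new(&cq->res, RDMA_RESTRACK_CQ);  /* 3. init kref via restrack */

        ret = dev->ops.create_cq(cq, attr, udata);      /* 4. driver-specific alloc  */
        if (ret) {
                rdma_restrack_put(&cq->res);            /* release restrack on error */
                kfree(cq);
                return ERR_PTR(ret);
        }

        rdma_restrack_add(&cq->res);                    /* 5. insert into restrack DB */
        return cq;                                      /* 6. restrack_put on destroy */
}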
-
Leon Romanovsky authored
The restrack is going to manage the memory of all IB objects and must be called before an object is created. The GSI QP in mlx5_ib is separated between creating the dummy interface and the HW object beneath, which was achieved by a double call to ib_create_qp(). In order to skip such a reentry, call the internal driver create_qp code directly.

Link: https://lore.kernel.org/r/20200922091106.2152715-3-leon@kernel.org
Reviewed-by: Mark Zhang <markz@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Leon Romanovsky authored
Update the code to have a destroy pattern similar to other IB objects. This change creates asymmetry in the rdma_id_private create flow to make sure that memory is managed by restrack.

Link: https://lore.kernel.org/r/20200922091106.2152715-2-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
- 18 Sep, 2020 14 commits
-
Jason Gunthorpe authored
ucma_destroy_id() assumes that all things accessing the ctx will do so via the xarray. This assumption is violated only in the case the FD is being closed, when the ctx is reached via the ctx_list. Normally this is OK since ucma_destroy_id() cannot run concurrently with release(), however when ucma_migrate_id() is involved this can be violated as the close of the 2nd FD can run concurrently with destroy on the first:

            CPU0                            CPU1
    ucma_destroy_id(fda)
                                    ucma_migrate_id(fda -> fdb)
                                      ucma_get_ctx()
    xa_lock()
     _ucma_find_context()
     xa_erase()
    xa_unlock()
                                      xa_lock()
                                       ctx->file = new_file
                                       list_move()
                                      xa_unlock()
                                     ucma_put_ctx()

                                    ucma_close(fdb)
                                      _destroy_id()
                                      kfree(ctx)

    _destroy_id()
      wait_for_completion()
      // boom, ctx was freed

The ctx->file must be modified under the handler and xa_lock, and prior to modification the ID must be rechecked that it is still reachable from cur_file, ie there is no parallel destroy or migrate.

To make this work remove the double locking and streamline the control flow. The double locking was obsoleted by the handler lock now directly preventing new uevents from being created, and the ctx_list cannot be read while holding fgets on both files. Removing the double locking also removes the need to check for the same file.

Fixes: 88314e4d ("RDMA/cma: add support for rdma_migrate_id()")
Link: https://lore.kernel.org/r/0-v1-05c5a4090305+3a872-ucma_syz_migrate_jgg@nvidia.com
Reported-and-tested-by: syzbot+cc6fc752b3819e082d0c@syzkaller.appspotmail.com
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
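A hedged sketch of that recheck-under-lock, loosely following the ucma.c internals named above (ctx_table, _ucma_find_context(), the per-file ctx_list); the exact layout and calling convention are assumptions.

/* Called with ctx->handler_mutex held; moves ctx only if still reachable. */
static int example_migrate_ctx(struct ucma_context *ctx,
                               struct ucma_file *cur_file,
                               struct ucma_file *new_file)
{
        xa_lock(&ctx_table);
        /* Recheck: a parallel destroy or migrate may have erased the ID. */
        if (_ucma_find_context(ctx->id, cur_file) != ctx) {
                xa_unlock(&ctx_table);
                return -ENOENT;
        }
        ctx->file = new_file;
        list_move_tail(&ctx->list, &new_file->ctx_list);
        xa_unlock(&ctx_table);
        return 0;
}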
-
Jason Gunthorpe authored
Once a mkey is created it can be modified using UMR. This is desirable for performance reasons. However, different hardware has restrictions on what modifications are possible using UMR. Make sense of these checks:

- mlx5_ib_can_reconfig_with_umr() returns true if the access flags can be altered. Most cases create MRs using 0 access flags (now made clear by consistent use of set_mkc_access_pd_addr_fields()), but the old logic here was tormented. Make it clear that this is checking if the current access_flags can be modified using UMR to different access_flags. It is always OK to use UMR to change flags that all HW supports.

- mlx5_ib_can_load_pas_with_umr() returns true if UMR can be used to enable and update the PAS/XLT. Enabling requires updating the entity size, so UMR ends up completely disabled on this old hardware. Make it clear why it is disabled. FRWR, ODP and cache always requires mlx5_ib_can_load_pas_with_umr().

- mlx5_ib_pas_fits_in_mr() is used to tell if an existing MR can be resized to hold a new PAS list. This only works for cached MR's because we don't store the PAS list size in other cases.

To be very clear, arrange things so any pre-created MR's in the cache check the newly requested access_flags before allowing the MR to leave the cache. If UMR cannot set the required access_flags the cache fails to create the MR. This in turn means relaxed ordering and atomic are now correctly blocked early for implicit ODP on older HW.

Link: https://lore.kernel.org/r/20200914112653.345244-6-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@kernel.org>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
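A hedged sketch of the reconfiguration gate described in the first bullet; the capability mask is a hypothetical stand-in for the per-device UMR capabilities, not the mlx5 bit layout.

struct example_umr_caps {
        unsigned int umr_modifiable_flags;  /* access flags UMR may change on this HW */
};

static bool example_can_reconfig_with_umr(const struct example_umr_caps *caps,
                                          unsigned int current_flags,
                                          unsigned int target_flags)
{
        /* Only the flags that actually change need UMR support. */
        unsigned int diff = current_flags ^ target_flags;

        return (diff & ~caps->umr_modifiable_flags) == 0;
}

With such a check in the cache path, a pre-created mkey is only handed out when its current access_flags can be reconfigured to the requested ones; otherwise the cache declines and the unsupported flags are rejected early, as described above.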
-
Jason Gunthorpe authored
set_reg_wr() always fails if !umr_modify_entity_size_disabled because mlx5_ib_can_use_umr() always fails. Without set_reg_wr() IB_WR_REG_MR doesn't work and that means the device should not advertise IB_DEVICE_MEM_MGT_EXTENSIONS.

Fixes: 841b07f9 ("IB/mlx5: Block MR WR if UMR is not possible")
Link: https://lore.kernel.org/r/20200914112653.345244-5-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jason Gunthorpe authored
Any mkey that is not enabled and assigned to userspace should have the PD set to a kernel owned PD. When cache entries are created for the first time the PDN is set to 0, which is probably a kernel PD, but be explicit.

When a MR is registered using the hybrid reg_create with UMR xlt & enable the disabled mkey is pointing at the user PD, keep it pointing at the kernel until a UMR enables it and sets the user PD.

Fixes: 9ec4483a ("IB/mlx5: Move MRs to a kernel PD when freeing them to the MR cache")
Link: https://lore.kernel.org/r/20200914112653.345244-4-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jason Gunthorpe authored
reg_create() open codes this helper, use the shared code.

Link: https://lore.kernel.org/r/20200914112653.345244-3-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jason Gunthorpe authored
alloc_mr_from_cache() no longer returns EAGAIN, this is just dead code now.

Fixes: aad719dc ("RDMA/mlx5: Allow MRs to be created in the cache synchronously")
Link: https://lore.kernel.org/r/20200914112653.345244-2-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Lijun Ou authored
hip06 does not support IB_WR_LOCAL_INV, so the ps_opcode should be set to an invalid value instead of being left uninitialized.

Fixes: 9a443537 ("IB/hns: Add driver files for hns RoCE driver")
Fixes: a2f3d447 ("RDMA/hns: Avoid unncessary initialization")
Link: https://lore.kernel.org/r/1600350615-115217-1-git-send-email-oulijun@huawei.com
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
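A hedged sketch of the defensive pattern: start from an explicitly invalid opcode so an unsupported work request is rejected rather than posted with garbage. The opcode values and helper name are illustrative, not the hip06 encoding.

#include <linux/types.h>
#include <linux/errno.h>
#include <rdma/ib_verbs.h>

#define EXAMPLE_OPCODE_SEND             0x0     /* illustrative values */
#define EXAMPLE_OPCODE_RDMA_WRITE       0x1
#define EXAMPLE_OPCODE_INVALID          0xff

static int example_map_opcode(enum ib_wr_opcode op, u32 *ps_opcode)
{
        /* Default to an invalid value instead of leaving it uninitialized. */
        *ps_opcode = EXAMPLE_OPCODE_INVALID;

        switch (op) {
        case IB_WR_SEND:
                *ps_opcode = EXAMPLE_OPCODE_SEND;
                return 0;
        case IB_WR_RDMA_WRITE:
                *ps_opcode = EXAMPLE_OPCODE_RDMA_WRITE;
                return 0;
        case IB_WR_LOCAL_INV:   /* not supported on hip06 */
        default:
                return -EINVAL;
        }
}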
-
Keita Suzuki authored
When xa_insert() fails, the acquired resource in qedr_create_qp should also be freed. However, the current implementation does not handle the error. Fix this by adding a new goto label that calls qedr_free_qp_resources.

Fixes: 1212767e ("qedr: Add wrapping generic structure for qpidr and adjust idr routines.")
Link: https://lore.kernel.org/r/20200911125159.4577-1-keitasuzuki.park@sslab.ics.keio.ac.jp
Signed-off-by: Keita Suzuki <keitasuzuki.park@sslab.ics.keio.ac.jp>
Acked-by: Michal Kalderon <michal.kalderon@marvell.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
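A hedged sketch of the error-unwind shape of the fix, with hypothetical helper and struct names standing in for the qedr resource allocation and cleanup:

#include <linux/xarray.h>

struct example_dev { struct xarray qps; };
struct example_qp { u32 qp_id; };

int example_alloc_qp_resources(struct example_dev *dev, struct example_qp *qp);
void example_free_qp_resources(struct example_dev *dev, struct example_qp *qp);

static int example_create_qp(struct example_dev *dev, struct example_qp *qp)
{
        int rc;

        rc = example_alloc_qp_resources(dev, qp);
        if (rc)
                return rc;

        /* Make the QP reachable by its ID; undo the allocation on failure. */
        rc = xa_insert(&dev->qps, qp->qp_id, qp, GFP_KERNEL);
        if (rc)
                goto err_free_qp_resources;

        return 0;

err_free_qp_resources:
        example_free_qp_resources(dev, qp);
        return rc;
}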
-
Potnuri Bharat Teja authored
Receive side delayed ack mode is needed only for certain area networks/connections. Therefore disable it by default.

Link: https://lore.kernel.org/r/20200909153707.2795-1-bharat@chelsio.com
Signed-off-by: Potnuri Bharat Teja <bharat@chelsio.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
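A hedged sketch of how such behavior is typically made opt-in via a module parameter that defaults to off; the parameter name here is illustrative rather than necessarily the iw_cxgb4 one.

#include <linux/module.h>

static int dack_mode;   /* 0 = delayed ack disabled by default */
module_param(dack_mode, int, 0644);
MODULE_PARM_DESC(dack_mode, "Delayed ack mode (default = 0, disabled)");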
-
Jason Gunthorpe authored
Leon Romanovsky says:

====================
This series from Alex extends software steering interface to support devices with extra capability "sw_owner_2" which will replace existing "sw_owner".
====================

Based on the mlx5-next branch at
 git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux
due to dependencies.

* branch 'mlx5_sw_owner_v2':
  RDMA/mlx5: Expose TIR and QP ICM address for sw_owner_v2 devices
  RDMA/mlx5: Allow DM allocation for sw_owner_v2 enabled devices
  RDMA/mlx5: Add sw_owner_v2 bit capability
-
Jason Gunthorpe authored
Leon Romanovsky says:

====================
IBTA declares speed as 16 bits, but kernel stores it in u8. This series fixes in-kernel declaration while keeping external interface intact.
====================

Based on the mlx5-next branch at
 git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux
due to dependencies.

* branch 'mlx5_active_speed':
  RDMA: Fix link active_speed size
  RDMA/mlx5: Delete duplicated mlx5_ptys_width enum
  net/mlx5: Refactor query port speed functions
-
Aharon Landau authored
According to the IB spec active_speed size should be u16 and not u8 as before. Changing it to allow further extensions in offered speeds.

Link: https://lore.kernel.org/r/20200917090223.1018224-4-leon@kernel.org
Signed-off-by: Aharon Landau <aharonl@mellanox.com>
Reviewed-by: Michael Guralnik <michaelgur@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Alex Vesker authored
Expose the ICM address to access TIR and QP. This will allow sw_owner_v2 devices to steer traffic to TIRs and QPs the same as is done with the sw_owner capability.

Link: https://lore.kernel.org/r/20200903073857.1129166-4-leon@kernel.org
Signed-off-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Alex Vesker authored
sw_owner_v2 will replace sw_owner for future devices, this means that if sw_owner_v2 is set sw_owner should be ignored and DM allocation is required for sw_owner_v2 devices to function.

Link: https://lore.kernel.org/r/20200903073857.1129166-3-leon@kernel.org
Signed-off-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
- 17 Sep, 2020 13 commits
-
Alex Vesker authored
Added sw_owner_v2 which will be enabled for future devices, replacing sw_owner bit.

Signed-off-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
-
Leon Romanovsky authored
Move struct ib_rwq_ind_table allocation to ib_core.

Link: https://lore.kernel.org/r/20200902081623.746359-3-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Leon Romanovsky authored
Move allocation and destruction of memory windows under ib_core responsibility and clean drivers to ensure that no updates to MW ib_core structures are done in driver layer.

Link: https://lore.kernel.org/r/20200902081623.746359-2-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Aharon Landau authored
Combine two identical enums to avoid duplication.

Signed-off-by: Aharon Landau <aharonl@mellanox.com>
Reviewed-by: Michael Guralnik <michaelgur@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
-
Aharon Landau authored
The functions mlx5_query_port_link_width_oper and mlx5_query_port_ib_proto_oper are always called together, so combine them into a new function called mlx5_query_port_oper to avoid duplication. And since mlx5i_get_port_settings is the same as mlx5_query_port_oper, remove it.

According to the IB spec link_width_oper and ib_proto_oper should be u16 and not u8 as written, so perform casting as a preparation to the cross-RDMA patch which will fix that type for all drivers in the RDMA subsystem.

Fixes: ada68c31 ("net/mlx5: Introduce a new header file for physical port functions")
Signed-off-by: Aharon Landau <aharonl@mellanox.com>
Reviewed-by: Michael Guralnik <michaelgur@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
-
Jason Gunthorpe authored
The roce path triggers a work queue that continues to touch the id_priv but doesn't hold any reference on it. Further, unlike in the IB case, the work queue is not fenced during rdma_destroy_id(). This can trigger a use after free if a destroy is triggered in the incredibly narrow window after the queue_work and the work starting and obtaining the handler_mutex.

The only purpose of this work queue is to run the ULP event callback from the standard context, so switch the design to use the existing cma_work_handler() scheme. This simplifies quite a lot of the flow:

- Use the cma_work_handler() callback to launch the work for roce. This requires generating the event synchronously inside the rdma_join_multicast(), which in turn means the dummy struct ib_sa_multicast can become a simple stack variable.

- cm_work_handler() used the id_priv kref, so we can entirely eliminate the kref inside struct cma_multicast. Since the cma_multicast never leaks into an unprotected work queue the kfree can be done at the same time as for IB.

- Eliminating the general multicast.ib requires using cma_set_mgid() in a few places to recompute the mgid.

Fixes: 3c86aa70 ("RDMA/cm: Add RDMA CM support for IBoE devices")
Link: https://lore.kernel.org/r/20200902081122.745412-9-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jason Gunthorpe authored
Two places were open coding this sequence; also pull in cma_leave_roce_mc_group(), which was called only once.

Link: https://lore.kernel.org/r/20200902081122.745412-8-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jason Gunthorpe authored
There is no kernel user of RDMA CM multicast so this code managing the multicast subscription of the kernel-only internal QP is dead. Remove it. This makes the bug fixes in the next patches much simpler.

Link: https://lore.kernel.org/r/20200902081122.745412-7-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jason Gunthorpe authored
These are the same thing, except that cma_ndev_work doesn't have a state transition. Signal no state transition by setting old_state and new_state == 0. In all cases the handler function should not be called once rdma_destroy_id() has progressed past setting the state.

Link: https://lore.kernel.org/r/20200902081122.745412-6-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
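A hedged sketch of the merged work item, where old_state == new_state == 0 encodes "no state transition"; the struct layout and queueing details are assumptions loosely modeled on cma.c, not the exact code.

#include <linux/workqueue.h>
#include <rdma/rdma_cm.h>

struct rdma_id_private;         /* cma.c internal; opaque here */

struct example_cma_work {
        struct work_struct work;
        struct rdma_id_private *id;
        enum rdma_cm_state old_state;   /* 0/0 means: no state transition */
        enum rdma_cm_state new_state;
        struct rdma_cm_event event;
};

static void example_queue_ndev_event(struct rdma_id_private *id_priv,
                                     struct example_cma_work *work,
                                     struct workqueue_struct *wq,
                                     work_func_t handler)
{
        work->id = id_priv;
        work->old_state = 0;    /* ndev events carry no transition */
        work->new_state = 0;
        INIT_WORK(&work->work, handler);
        queue_work(wq, &work->work);
}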
-
Jason Gunthorpe authored
The only place that still uses it is rdma_join_multicast() which is only doing a sanity check that the caller hasn't done something wrong and doesn't need the spinlock. At least in the case of rdma_join_multicast() the information it needs will remain until the ID is destroyed once it enters these states. Similarly there is no reason to check for these specific states in the handler callback, instead use the usual check for a destroyed id under the handler_mutex.

Link: https://lore.kernel.org/r/20200902081122.745412-5-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jason Gunthorpe authored
There is a strange unlocked read of the ID state when checking for reuseaddr. This is because an ID cannot be reusable once it becomes a listening ID. Instead of using the state to exclude reuse, just clear it as part of rdma_listen()'s flow to convert reusable into not reusable. Once an ID goes to listen there is no way back out, and the only use of reusable is on the bind_list check. Finally, update the checks under handler_mutex to use READ_ONCE and audit that once RDMA_CM_LISTEN is observed in a req callback it is stable under the handler_mutex.

Link: https://lore.kernel.org/r/20200902081122.745412-4-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-
Jason Gunthorpe authored
Re-organize things so the state variable is not read unlocked. The first attempt to go directly from ADDR_BOUND immediately tells us if the ID is already bound; if we can't do that, then the attempt inside rdma_bind_addr() to go from IDLE to ADDR_BOUND confirms the ID needs binding.

Link: https://lore.kernel.org/r/20200902081122.745412-3-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
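The reorganization leans on a compare-and-exchange style state helper (cma.c has one of its own); a hedged, illustrative version is sketched below with a hypothetical struct.

#include <linux/spinlock.h>
#include <rdma/rdma_cm.h>

struct example_id_priv {
        spinlock_t lock;
        enum rdma_cm_state state;
};

/* Atomically move the ID from 'comp' to 'exch'; returns true on success. */
static bool example_comp_exch(struct example_id_priv *id_priv,
                              enum rdma_cm_state comp,
                              enum rdma_cm_state exch)
{
        unsigned long flags;
        bool ret;

        spin_lock_irqsave(&id_priv->lock, flags);
        ret = (id_priv->state == comp);
        if (ret)
                id_priv->state = exch;
        spin_unlock_irqrestore(&id_priv->lock, flags);
        return ret;
}

With such a helper, resolve can first try to leave ADDR_BOUND directly; if that fails, the IDLE to ADDR_BOUND attempt inside rdma_bind_addr() confirms the ID still needed binding, and the state is never read without a lock or a compare-and-exchange.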
-
Jason Gunthorpe authored
It is currently a bit confusing, but the design is that if the handler_mutex is held and the state is RDMA_CM_CONNECT, then the state cannot leave RDMA_CM_CONNECT without also serializing with the handler_mutex. Make this clearer by adding a direct assertion, fixing the usage in rdma_connect, and generally using READ_ONCE to read the state value.

Link: https://lore.kernel.org/r/20200902081122.745412-2-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
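A hedged sketch of the handler-side pattern: assert the lock, then rely on the invariant that the state can only leave RDMA_CM_CONNECT while handler_mutex is also held. Struct and function names are illustrative, not the exact cma.c code.

#include <linux/compiler.h>
#include <linux/errno.h>
#include <linux/lockdep.h>
#include <linux/mutex.h>
#include <rdma/rdma_cm.h>

struct example_id_priv {
        struct mutex handler_mutex;
        enum rdma_cm_state state;
};

static int example_connect_handler(struct example_id_priv *id_priv)
{
        lockdep_assert_held(&id_priv->handler_mutex);

        /*
         * Leaving RDMA_CM_CONNECT also requires handler_mutex, so this
         * check stays valid for the rest of the handler body.
         */
        if (READ_ONCE(id_priv->state) != RDMA_CM_CONNECT)
                return -EINVAL;

        /* ... deliver the ULP event callback here ... */
        return 0;
}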
-