Commit 8aa64be0 authored by Xi Wang, committed by Jason Gunthorpe

RDMA/core: Fix unsafe linked list traversal after failing to allocate CQ

It's not safe to access the next CQ in list_for_each_entry() after
invoking ib_free_cq(), because the CQ has already been freed in the
current iteration. It should be replaced by list_for_each_entry_safe().

Fixes: c7ff819a ("RDMA/core: Introduce shared CQ pool API")
Link: https://lore.kernel.org/r/1598963935-32335-1-git-send-email-liweihang@huawei.com
Signed-off-by: Xi Wang <wangxi11@huawei.com>
Signed-off-by: Weihang Li <liweihang@huawei.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
parent 097a9d23
@@ -379,7 +379,7 @@ static int ib_alloc_cqs(struct ib_device *dev, unsigned int nr_cqes,
 {
 	LIST_HEAD(tmp_list);
 	unsigned int nr_cqs, i;
-	struct ib_cq *cq;
+	struct ib_cq *cq, *n;
 	int ret;
 
 	if (poll_ctx > IB_POLL_LAST_POOL_TYPE) {
@@ -412,7 +412,7 @@ static int ib_alloc_cqs(struct ib_device *dev, unsigned int nr_cqes,
 	return 0;
 
 out_free_cqs:
-	list_for_each_entry(cq, &tmp_list, pool_entry) {
+	list_for_each_entry_safe(cq, n, &tmp_list, pool_entry) {
 		cq->shared = false;
 		ib_free_cq(cq);
 	}