Commit 928da37a authored by Eric Dumazet, committed by Jason Gunthorpe

RDMA/umem: Add a schedule point in ib_umem_get()

Mapping as little as 64GB can take more than 10 seconds, triggering issues
on kernels with CONFIG_PREEMPT_NONE=y.

ib_umem_get() already splits the work into 2MB units on x86_64, so adding a
cond_resched() in the long-running loop is enough to solve the issue.

Note that sg_alloc_table() can still use more than 100 ms, which is also
problematic. This might be addressed later in ib_umem_add_sg_table(),
adding new blocks in sgl on demand.

Link: https://lore.kernel.org/r/20200730015755.1827498-1-edumazet@google.com
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
parent 395f2e8f
...@@ -261,6 +261,7 @@ struct ib_umem *ib_umem_get(struct ib_device *device, unsigned long addr, ...@@ -261,6 +261,7 @@ struct ib_umem *ib_umem_get(struct ib_device *device, unsigned long addr,
sg = umem->sg_head.sgl; sg = umem->sg_head.sgl;
while (npages) { while (npages) {
cond_resched();
ret = pin_user_pages_fast(cur_base, ret = pin_user_pages_fast(cur_base,
min_t(unsigned long, npages, min_t(unsigned long, npages,
PAGE_SIZE / PAGE_SIZE /
......