Commit a4e54011 authored by Chiara Meiohas, committed by Leon Romanovsky

RDMA/mlx5: Set mkeys for dmabuf at PAGE_SIZE

Set the mkey for dmabuf at PAGE_SIZE to support any SGL
after a move operation.

ib_umem_find_best_pgsz() returns 0 on error, so it is
incorrect to check the returned page_size against PAGE_SIZE.
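
As context for the fix below, a minimal userspace sketch of the
ib_umem_find_best_pgsz() contract this relies on. This is illustrative
only: find_best_pgsz_model() and layout_ok are invented stand-ins, and
the real kernel helper walks the umem scatterlist rather than taking a
flag.

#include <stdio.h>

#define PAGE_SIZE 4096UL	/* illustrative; the real value is arch-defined */

/*
 * Toy model of ib_umem_find_best_pgsz(): return the largest page size
 * set in pgsz_bitmap, or 0 on failure. "layout_ok" stands in for the
 * real iova/scatterlist alignment checks.
 */
static unsigned long find_best_pgsz_model(unsigned long pgsz_bitmap,
					  int layout_ok)
{
	if (!layout_ok || !pgsz_bitmap)
		return 0;	/* failure is signalled by 0, not by a small size */
	return 1UL << ((sizeof(pgsz_bitmap) * 8 - 1) -
		       __builtin_clzl(pgsz_bitmap));
}

int main(void)
{
	/* dmabuf mkeys restrict the bitmap to PAGE_SIZE only */
	unsigned long good = find_best_pgsz_model(PAGE_SIZE, 1);
	unsigned long bad = find_best_pgsz_model(PAGE_SIZE, 0);

	printf("good: %lu -> %s\n", good, good ? "map" : "error");
	printf("bad:  %lu -> %s\n", bad, bad ? "map" : "error");
	return 0;
}

With the bitmap restricted to PAGE_SIZE, the only possible return
values are PAGE_SIZE and 0, which is why the fix below tests for
!page_size rather than comparing against PAGE_SIZE.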

Fixes: 90da7dc8 ("RDMA/mlx5: Support dma-buf based userspace memory region")
Signed-off-by: Chiara Meiohas <cmeiohas@nvidia.com>
Reviewed-by: Michael Guralnik <michaelgur@nvidia.com>
Link: https://lore.kernel.org/r/1e2289b9133e89f273a4e68d459057d032cbc2ce.1718301631.git.leon@kernel.org
Signed-off-by: Leon Romanovsky <leon@kernel.org>
parent 5895e70f
@@ -115,6 +115,19 @@ unsigned long __mlx5_umem_find_best_quantized_pgoff(
 		__mlx5_bit_sz(typ, page_offset_fld), 0, scale,	\
 		page_offset_quantized)
 
+static inline unsigned long
+mlx5_umem_dmabuf_find_best_pgsz(struct ib_umem_dmabuf *umem_dmabuf)
+{
+	/*
+	 * mkeys used for dmabuf are fixed at PAGE_SIZE because we must be able
+	 * to hold any sgl after a move operation. Ideally the mkc page size
+	 * could be changed at runtime to be optimal, but right now the driver
+	 * cannot do that.
+	 */
+	return ib_umem_find_best_pgsz(&umem_dmabuf->umem, PAGE_SIZE,
+				      umem_dmabuf->umem.iova);
+}
+
 enum {
 	MLX5_IB_MMAP_OFFSET_START = 9,
 	MLX5_IB_MMAP_OFFSET_END = 255,
@@ -705,10 +705,8 @@ static int pagefault_dmabuf_mr(struct mlx5_ib_mr *mr, size_t bcnt,
 		return err;
 	}
 
-	page_size = mlx5_umem_find_best_pgsz(&umem_dmabuf->umem, mkc,
-					     log_page_size, 0,
-					     umem_dmabuf->umem.iova);
-	if (unlikely(page_size < PAGE_SIZE)) {
+	page_size = mlx5_umem_dmabuf_find_best_pgsz(umem_dmabuf);
+	if (!page_size) {
 		ib_umem_dmabuf_unmap_pages(umem_dmabuf);
 		err = -EINVAL;
 	} else {