Commit 0c507d8f authored by John Hubbard, committed by Jason Gunthorpe

RDMA/umem: Revert broken 'off by one' fix

The previous attempted bug fix overlooked the fact that
ib_umem_odp_map_dma_single_page() already does a put_page() on the failing
page when it hits an error, so there was no off-by-one bug to fix.

Therefore, this reverts the off-by-one change, but keeps the change to use
release_pages() in the error path.

Fixes: 75a3e6a3 ("RDMA/umem: minor bug fix in error handling path")
Suggested-by: Artemy Kovalyov <artemyko@mellanox.com>
Signed-off-by: John Hubbard <jhubbard@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
parent 75a3e6a3
@@ -687,10 +687,13 @@ int ib_umem_odp_map_dma_pages(struct ib_umem_odp *umem_odp, u64 user_virt,
 			if (ret < 0) {
 				/*
-				 * Release pages, starting at the the first page
-				 * that experienced an error.
+				 * Release pages, remembering that the first page
+				 * to hit an error was already released by
+				 * ib_umem_odp_map_dma_single_page().
 				 */
-				release_pages(&local_page_list[j], npages - j);
+				if (npages - (j + 1) > 0)
+					release_pages(&local_page_list[j+1],
+						      npages - (j + 1));
 				break;
 			}
 		}