Commit de79093b authored by Mitko Haralanov, committed by Doug Ledford

IB/hfi1: Correctly compute node interval

The computation of the interval of an interval RB node
was incorrect, leading to data corruption because the
RB search algorithm did not properly find all RB nodes
in an MMU invalidation interval.

The problem stemmed from the fact that the beginning
address of the node's range was being aligned to a page
boundary. For certain buffer sizes, this would lead to
an end address calculation that was off by one page.
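
To make the off-by-one-page concrete, here is a small userspace
sketch (not the driver code) that reimplements the page macros and
compares the old and new end-address computations for a buffer that
starts mid-page; the address and length values are illustrative only.

/*
 * Illustrative userspace demo, assuming 4 KiB pages and the usual
 * PAGE_MASK/PAGE_ALIGN semantics reimplemented locally.
 */
#include <stdio.h>

#define PAGE_SIZE     4096UL
#define PAGE_MASK     (~(PAGE_SIZE - 1))
#define PAGE_ALIGN(x) (((x) + PAGE_SIZE - 1) & PAGE_MASK)

int main(void)
{
	unsigned long addr = 0x1800; /* buffer starts mid-page */
	unsigned long len  = 0x900;  /* last byte at 0x20ff, in the page at 0x2000 */

	/* old: start aligned down first -> 0x1fff, one page short */
	unsigned long old_last = PAGE_ALIGN((addr & PAGE_MASK) + len) - 1;
	/* new: use the real start -> 0x2fff, covers the page holding the tail */
	unsigned long new_last = PAGE_ALIGN(addr + len) - 1;

	printf("old last = %#lx, new last = %#lx\n", old_last, new_last);
	return 0;
}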

An important aspect of keeping the RB tree consistent
is also updating the node's range when it is being
extended.
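
As a hedged illustration of that point (illustrative names, not the
driver source), the sketch below updates a cached node's interval
length together with its page array, and only after the extended
range has been pinned, mirroring where this commit moves the rb.len
assignment.

#include <stdlib.h>

/* hypothetical stand-in for the driver's cached node */
struct cache_node {
	unsigned long addr;   /* interval start */
	unsigned long len;    /* interval length seen by the RB search */
	void **pages;         /* stand-in for the pinned page array */
	unsigned int npages;
};

/* called only once the extended range has been pinned successfully */
void cache_node_extend(struct cache_node *node, void **new_pages,
		       unsigned int pinned, unsigned long new_len)
{
	free(node->pages);    /* old entries were already merged into new_pages */
	node->len = new_len;  /* extend the interval only now */
	node->pages = new_pages;
	node->npages += pinned;
}
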
Reviewed-by: Dean Luick <dean.luick@intel.com>
Signed-off-by: Mitko Haralanov <mitko.haralanov@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
parent 782f6697
@@ -91,7 +91,7 @@ static unsigned long mmu_node_start(struct mmu_rb_node *node)
 
 static unsigned long mmu_node_last(struct mmu_rb_node *node)
 {
-        return PAGE_ALIGN((node->addr & PAGE_MASK) + node->len) - 1;
+        return PAGE_ALIGN(node->addr + node->len) - 1;
 }
 
 int hfi1_mmu_rb_register(struct rb_root *root, struct mmu_rb_ops *ops)
@@ -1076,7 +1076,6 @@ static int pin_vector_pages(struct user_sdma_request *req,
                 return -ENOMEM;
 
         node->rb.addr = (unsigned long)iovec->iov.iov_base;
-        node->rb.len = iovec->iov.iov_len;
         node->pq = pq;
         atomic_set(&node->refcount, 0);
         INIT_LIST_HEAD(&node->list);
@@ -1117,6 +1116,7 @@ static int pin_vector_pages(struct user_sdma_request *req,
                         goto bail;
                 }
                 kfree(node->pages);
+                node->rb.len = iovec->iov.iov_len;
                 node->pages = pages;
                 node->npages += pinned;
                 npages = node->npages;