Commit 2a54e347 authored by Matthew Rosato, committed by Jason Gunthorpe

vfio/ap: Validate iova during dma_unmap and trigger irq disable

Currently, each mapped iova is stashed in its associated vfio_ap_queue;
when we get an unmap request, validate that it matches one or more of
these stashed values before attempting any unpins.

Each stashed iova represents an IRQ that was enabled for a queue.
Therefore, if a match is found, trigger IRQ disable for that queue to
ensure the underlying firmware will no longer try to use the associated
pfn after the page is unpinned.  The IRQ disable path also handles the
associated unpin.
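
For readers unfamiliar with the flow, here is a minimal, standalone C
sketch of the stash-and-validate scheme described above.  It is not
driver code: the names (model_queue, model_unmap, model_irq_disable) and
the plain array standing in for the mdev's queue hashtable are
illustrative assumptions; in the driver the stashed value is
q->saved_iova and the unpin happens inside vfio_ap_irq_disable().

/* Illustrative model only -- not vfio_ap driver code. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct model_queue {
	uint64_t saved_iova;	/* stashed when the IRQ was enabled */
	bool irq_enabled;
};

/* Stand-in for vfio_ap_irq_disable(): would also unpin saved_iova. */
static void model_irq_disable(struct model_queue *q)
{
	q->irq_enabled = false;
	printf("disable IRQ (and unpin) for iova 0x%llx\n",
	       (unsigned long long)q->saved_iova);
}

/* Stand-in for unmap_iova(): range-check every queue's stashed iova. */
static void model_unmap(struct model_queue *queues, size_t nr,
			uint64_t iova, uint64_t length)
{
	for (size_t i = 0; i < nr; i++) {
		struct model_queue *q = &queues[i];

		if (q->irq_enabled &&
		    q->saved_iova >= iova && q->saved_iova < iova + length)
			model_irq_disable(q);
	}
}

int main(void)
{
	struct model_queue queues[] = {
		{ .saved_iova = 0x10000, .irq_enabled = true },
		{ .saved_iova = 0x30000, .irq_enabled = true },
	};

	/* Unmap [0x10000, 0x20000): only the first queue matches. */
	model_unmap(queues, 2, 0x10000, 0x10000);
	return 0;
}

The range check above mirrors the per-queue check performed by
unmap_iova() in the hunk below.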

Link: https://lore.kernel.org/r/20221202135402.756470-3-yi.l.liu@intel.com
Reviewed-by: Tony Krowiak <akrowiak@linux.ibm.com>
Signed-off-by: Matthew Rosato <mjrosato@linux.ibm.com>
Signed-off-by: Yi Liu <yi.l.liu@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
parent 4dc334ca
@@ -1535,13 +1535,29 @@ static int vfio_ap_mdev_set_kvm(struct ap_matrix_mdev *matrix_mdev,
 	return 0;
 }
 
+static void unmap_iova(struct ap_matrix_mdev *matrix_mdev, u64 iova, u64 length)
+{
+	struct ap_queue_table *qtable = &matrix_mdev->qtable;
+	struct vfio_ap_queue *q;
+	int loop_cursor;
+
+	hash_for_each(qtable->queues, loop_cursor, q, mdev_qnode) {
+		if (q->saved_iova >= iova && q->saved_iova < iova + length)
+			vfio_ap_irq_disable(q);
+	}
+}
+
 static void vfio_ap_mdev_dma_unmap(struct vfio_device *vdev, u64 iova,
 				   u64 length)
 {
 	struct ap_matrix_mdev *matrix_mdev =
 		container_of(vdev, struct ap_matrix_mdev, vdev);
 
-	vfio_unpin_pages(&matrix_mdev->vdev, iova, 1);
+	mutex_lock(&matrix_dev->mdevs_lock);
+	unmap_iova(matrix_mdev, iova, length);
+	mutex_unlock(&matrix_dev->mdevs_lock);
 }
 
 /**
......