Commit 7f9e5c48 authored by David Dillow, committed by Roland Dreier

IB: Increase DMA max_segment_size on Mellanox hardware

By default, each device is assumed to be able to handle only 64 KB
chunks during DMA. By giving the segment size a larger value, the
block layer will coalesce more S/G entries together for SRP, allowing
larger requests with the same sg_tablesize setting.  The block layer
is the only direct user of it, though a few IOMMU drivers reference
it as well for their *_map_sg coalescing code: pci-gart_64 on x86,
and a smattering on sparc, powerpc, and ia64.
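
For reference, a minimal sketch of how this cap reaches the block
layer (not part of the patch; example_apply_dma_seg_limit() is a
hypothetical helper, though dma_get_max_seg_size() and
blk_queue_max_segment_size() are the real interfaces the SCSI
midlayer uses when setting up a request queue):

  #include <linux/blkdev.h>
  #include <linux/dma-mapping.h>

  /*
   * Hypothetical helper mirroring the SCSI midlayer's queue setup.
   * dma_get_max_seg_size() returns 64 KB unless the driver raised it
   * with dma_set_max_seg_size(), as this patch does for Mellanox HCAs.
   */
  static void example_apply_dma_seg_limit(struct request_queue *q,
                                          struct device *dma_dev)
  {
          /* The block layer will not merge S/G entries past this. */
          blk_queue_max_segment_size(q, dma_get_max_seg_size(dma_dev));
  }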

Since other IB protocols could potentially see larger segments with
this, let's check those:

 - iSER is fine, because it limits its maximum request size to 512
   KB, so we'll never overrun the page vector in struct iser_page_vec
   (currently 128 entries: 512 KB / 4 KB pages = 128). It is
   independent of the DMA segment size, and handles multi-page
   segments already.

 - IPoIB is fine, as it maps each page individually, and doesn't use
   ib_dma_map_sg().

 - RDS appears to do the right thing and has no dependencies on DMA
   segment size, but I don't claim to have done a complete audit.

 - NFSoRDMA and 9p are OK -- they do not use ib_dma_map_sg(), so they
   don't care about the coalescing.

 - Lustre's ko2iblnd does not care about coalescing -- it properly
   walks the returned sg list, as sketched below.
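
For illustration, the pattern a consumer must follow once segments
can be coalesced (a sketch, not code from any of the ULPs above;
example_walk_mapped_sg() is an invented name, while for_each_sg(),
ib_sg_dma_address(), and ib_sg_dma_len() are the real helpers):

  #include <linux/printk.h>
  #include <linux/scatterlist.h>
  #include <rdma/ib_verbs.h>

  /*
   * Walk the list returned by ib_dma_map_sg().  Iterate over the
   * mapped count returned by the call, not the original nents, and
   * take each segment's length from the sg entry -- with this patch
   * a single entry may now cover up to 1 GB, not just 64 KB.
   */
  static void example_walk_mapped_sg(struct ib_device *ibdev,
                                     struct scatterlist *sgl,
                                     int mapped_nents)
  {
          struct scatterlist *sg;
          int i;

          for_each_sg(sgl, sg, mapped_nents, i) {
                  u64 addr = ib_sg_dma_address(ibdev, sg);
                  unsigned int len = ib_sg_dma_len(ibdev, sg);

                  /* post addr/len to the HCA work request here */
                  pr_debug("seg %d: addr %#llx len %u\n",
                           i, (unsigned long long)addr, len);
          }
  }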

This patch ups the value on Mellanox hardware to 1 GB, which matches
reported firmware limits on mlx4.
Signed-off-by: David Dillow <dillowda@ornl.gov>
Signed-off-by: Roland Dreier <roland@purestorage.com>
parent 1eba843d
@@ -1043,6 +1043,9 @@ static int __mthca_init_one(struct pci_dev *pdev, int hca_type)
 		}
 	}
 
+	/* We can handle large RDMA requests, so allow larger segments. */
+	dma_set_max_seg_size(&pdev->dev, 1024 * 1024 * 1024);
+
 	mdev = (struct mthca_dev *) ib_alloc_device(sizeof *mdev);
 	if (!mdev) {
 		dev_err(&pdev->dev, "Device struct alloc failed, "
@@ -1109,6 +1109,9 @@ static int __mlx4_init_one(struct pci_dev *pdev, const struct pci_device_id *id)
 		}
 	}
 
+	/* Allow large DMA segments, up to the firmware limit of 1 GB */
+	dma_set_max_seg_size(&pdev->dev, 1024 * 1024 * 1024);
+
 	priv = kzalloc(sizeof *priv, GFP_KERNEL);
 	if (!priv) {
 		dev_err(&pdev->dev, "Device struct alloc failed, "