1. 22 Mar, 2011 1 commit
      IB: Increase DMA max_segment_size on Mellanox hardware · 7f9e5c48
      David Dillow authored
      By default, each device is assumed to be able to handle only 64 KB
      chunks during DMA. By giving the segment size a larger value, the
      block layer will coalesce more S/G entries together for SRP, allowing
      larger requests with the same sg_tablesize setting.  The block layer
      is the only direct user of it, though a few IOMMU drivers reference
      it as well for their *_map_sg coalescing code: pci-gart_64 on x86,
      and a smattering on sparc, powerpc, and ia64.
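      
      For context, the cap in question lives in dev->dma_parms and falls
      back to 64 KB whenever a driver never sets it; the block layer reads
      it when it configures a request queue. The helpers below are a
      simplified sketch of the dma_get_max_seg_size()/dma_set_max_seg_size()
      accessors in include/linux/dma-mapping.h, renamed to make clear this
      is a paraphrase rather than the verbatim kernel source:
      
      #include <linux/device.h>
      #include <linux/errno.h>
      
      /* Simplified sketch of dma_get_max_seg_size(); not verbatim source. */
      static unsigned int sketch_dma_get_max_seg_size(struct device *dev)
      {
              /* Without dma_parms from the bus code and an explicit size
               * from the driver, everyone gets the conservative default. */
              if (dev->dma_parms && dev->dma_parms->max_segment_size)
                      return dev->dma_parms->max_segment_size;
              return 65536;   /* SZ_64K -- the default this patch raises */
      }
      
      /* Simplified sketch of dma_set_max_seg_size(); not verbatim source. */
      static int sketch_dma_set_max_seg_size(struct device *dev,
                                             unsigned int size)
      {
              if (!dev->dma_parms)
                      return -EIO;    /* bus code never provided dma_parms */
              dev->dma_parms->max_segment_size = size;
              return 0;
      }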
      
      Since other IB protocols could potentially see larger segments with
      this, let's check those:
      
       - iSER is fine, because it limits its maximum request size to 512
         KB, so we'll never overrun the page vector in struct iser_page_vec
         (128 entries currently). That limit is independent of the DMA
         segment size, and iSER already handles multi-page segments.
      
       - IPoIB is fine, as it maps each page individually, and doesn't use
         ib_dma_map_sg().
      
       - RDS appears to do the right thing and has no dependencies on DMA
         segment size, but I don't claim to have done a complete audit.
      
       - NFSoRDMA and 9p are OK -- they do not use ib_dma_map_sg(), so
         they don't care about the coalescing.
      
       - Lustre's ko2iblnd does not care about coalescing -- it properly
         walks the returned sg list (see the sketch after this list).
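      
      The pattern that last bullet describes is the safe one: trust the
      entry count ib_dma_map_sg() returns and each entry's DMA length,
      rather than assuming one page per entry. A minimal consumer sketch
      follows; sketch_map_and_post() is hypothetical, while
      ib_dma_map_sg(), for_each_sg(), sg_dma_address(), and sg_dma_len()
      are the real kernel APIs:
      
      #include <linux/dma-mapping.h>
      #include <linux/errno.h>
      #include <linux/printk.h>
      #include <linux/scatterlist.h>
      #include <rdma/ib_verbs.h>
      
      /* Hypothetical consumer: handle one descriptor per *mapped* entry.
       * With a 1 GB segment cap an IOMMU may coalesce the list, so the
       * mapped count can be smaller than nents and a single entry can
       * span many pages; only sg_dma_address()/sg_dma_len() are valid. */
      static int sketch_map_and_post(struct ib_device *ibdev,
                                     struct scatterlist *sgl, int nents)
      {
              struct scatterlist *sg;
              int i, mapped;
      
              mapped = ib_dma_map_sg(ibdev, sgl, nents, DMA_TO_DEVICE);
              if (!mapped)
                      return -EIO;
      
              for_each_sg(sgl, sg, mapped, i)
                      pr_debug("desc %d: addr %#llx len %u\n", i,
                               (unsigned long long)sg_dma_address(sg),
                               sg_dma_len(sg));
      
              return mapped;
      }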
      
      This patch ups the value on Mellanox hardware to 1 GB, which matches
      reported firmware limits on mlx4.
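      
      The change then reduces to one call per Mellanox probe path. A
      sketch of the shape of that call (the wrapper function here is
      illustrative, not the verbatim diff):
      
      #include <linux/pci.h>
      #include <linux/dma-mapping.h>
      
      /* Illustrative fragment: raise the per-device segment cap from the
       * 64 KB default to the 1 GB limit reported by mlx4 firmware.  The
       * PCI core supplies dev.dma_parms, so the setter succeeds here. */
      static void sketch_allow_large_segments(struct pci_dev *pdev)
      {
              dma_set_max_seg_size(&pdev->dev, 1024 * 1024 * 1024);
      }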
      Signed-off-by: David Dillow <dillowda@ornl.gov>
      Signed-off-by: Roland Dreier <roland@purestorage.com>