    nvme-rdma: Avoid preallocating big SGL for data · 38e18002
    Israel Rukshin authored
    nvme_rdma_alloc_tagset() preallocates a big buffer for the IO SGL based
    on SG_CHUNK_SIZE.
    
    Modern DMA engines are often capable of dealing with very big segments,
    so SG_CHUNK_SIZE is often unnecessarily large; it results in a static 4KB
    SGL allocation per command.
    
    If a controller has lots of deep queues, preallocation for the sg list can
    consume substantial amounts of memory. For nvme-rdma, nr_hw_queues can be
    128 and each queue's depth 128. This means the resulting preallocation
    for the data SGL is 128*128*4K = 64MB per controller.
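
    The 64MB figure can be checked with simple arithmetic. The sketch below
    just multiplies the values stated above (queue count, queue depth, and
    the 4KB per-command SGL):

    ```c
    #include <stdio.h>

    int main(void)
    {
        /* Values taken from the commit message above. */
        unsigned long nr_hw_queues = 128;
        unsigned long queue_depth  = 128;
        unsigned long sgl_per_cmd  = 4096; /* static 4KB SGL per command */

        unsigned long total = nr_hw_queues * queue_depth * sgl_per_cmd;
        printf("%lu MB per controller\n", total >> 20);
        return 0;
    }
    ```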
    
    Switch to runtime allocation of the SGL for lists longer than 2 entries. This
    is the approach used by NVMe PCI so it should be reasonable for NVMeOF as
    well. Runtime SGL allocation has always been the case for the legacy I/O
    path so this is nothing new.
    
    The preallocated small SGL depends on SG_CHAIN so if the ARCH doesn't
    support SG_CHAIN, use only runtime allocation for the SGL.
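
    The scheme can be sketched in userspace C. This is an illustration of the
    inline-vs-runtime split only; the entry type, the 2-entry constant, and
    the helper names are placeholders, not the driver's actual identifiers:

    ```c
    #include <stdio.h>
    #include <stdlib.h>

    #define INLINE_SG_CNT 2  /* small preallocated SGL, per the message */

    struct sg_entry { void *addr; unsigned int len; };

    struct command {
        /* Tiny SGL preallocated with the command, instead of a 4KB one. */
        struct sg_entry inline_sgl[INLINE_SG_CNT];
    };

    /* Use the inline SGL when it fits; otherwise allocate at runtime. */
    static struct sg_entry *get_sgl(struct command *cmd, unsigned int nents)
    {
        if (nents <= INLINE_SG_CNT)
            return cmd->inline_sgl;
        return calloc(nents, sizeof(struct sg_entry));
    }

    static void put_sgl(struct command *cmd, struct sg_entry *sgl)
    {
        if (sgl != cmd->inline_sgl)
            free(sgl);
    }

    int main(void)
    {
        struct command cmd;
        struct sg_entry *small = get_sgl(&cmd, 2);  /* inline, no alloc */
        struct sg_entry *big   = get_sgl(&cmd, 64); /* runtime alloc    */

        printf("small uses inline SGL: %d\n", small == cmd.inline_sgl);
        printf("big uses inline SGL:   %d\n", big == cmd.inline_sgl);

        put_sgl(&cmd, small);
        put_sgl(&cmd, big);
        return 0;
    }
    ```

    In the real driver the runtime path goes through the chained
    scatterlist machinery (hence the SG_CHAIN dependency for the inline
    case); the sketch only shows the size-based decision.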
    
    We didn't notice any performance degradation, since for small IOs we'll
    use the inline SGL and for bigger IOs allocating a larger SGL from the
    slab is fast enough.
    Suggested-by: Christoph Hellwig <hch@lst.de>
    Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Israel Rukshin <israelr@mellanox.com>
    Signed-off-by: Keith Busch <kbusch@kernel.org>