    xprtrdma: Initialize separate RPC call and reply buffers · 9c40c49f
    Chuck Lever authored
    RPC-over-RDMA needs to separate its RPC call and reply buffers.
    
     o When an RPC Call is sent, rq_snd_buf is DMA mapped for an RDMA
       Send operation using DMA_TO_DEVICE
    
     o If the client expects a large RPC reply, it DMA maps rq_rcv_buf
       as part of a Reply chunk using DMA_FROM_DEVICE
    
    The two mappings are for data movement in opposite directions.
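
    For illustration only (this snippet is not from the patch), the two
    mappings use the existing ib_dma_map_single() verb with opposite DMA
    directions; the ib_device pointer and buffer addresses are assumed
    to be already set up:

        #include <rdma/ib_verbs.h>    /* ib_dma_map_single(), DMA_* */

        /* Sketch: map the Call message for the device to read ... */
        static u64 map_call_buf(struct ib_device *dev, void *snd, size_t len)
        {
                return ib_dma_map_single(dev, snd, len, DMA_TO_DEVICE);
        }

        /* ... and map the Reply landing area for the device to write. */
        static u64 map_reply_buf(struct ib_device *dev, void *rcv, size_t len)
        {
                return ib_dma_map_single(dev, rcv, len, DMA_FROM_DEVICE);
        }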
    
    DMA-API.txt suggests that if these mappings share a DMA cacheline,
    bad things can happen. Because both buffers are carved out of a
    single allocation, the final bytes of rq_snd_buf and the first
    bytes of rq_rcv_buf can end up sharing a DMA cacheline.
    
    On x86_64 the DMA cacheline size is typically 8 bytes, and RPC Call
    messages are usually much smaller than the send buffer, so this
    hasn't been a noticeable problem. But the DMA cacheline size can be
    larger on other platforms.
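
    As a back-of-the-envelope illustration (the offsets and line size
    below are invented, not taken from the code): with an assumed
    128-byte DMA cacheline, a send buffer whose last byte sits at offset
    999 and a receive buffer that starts at offset 1000 both touch the
    same cacheline, so device writes into the reply and CPU accesses to
    the call message can collide:

        #include <stdio.h>

        int main(void)
        {
                unsigned long line = 128;       /* assumed DMA cacheline size */
                unsigned long snd_last = 999;   /* last byte of rq_snd_buf    */
                unsigned long rcv_first = 1000; /* first byte of rq_rcv_buf   */

                if (snd_last / line == rcv_first / line)
                        printf("both buffers touch DMA cacheline %lu\n",
                               snd_last / line);
                return 0;
        }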
    
    Also, rq_rcv_buf often starts most of the way into a page, so an
    additional RDMA segment is needed to map and register the portion
    of that buffer that spills onto the following page. Avoiding that
    layout reduces the cost of registering and invalidating Reply
    chunks.
    
    Instead of carrying a single regbuf that covers both rq_snd_buf and
    rq_rcv_buf, each struct rpcrdma_req now carries one regbuf for
    rq_snd_buf and one regbuf for rq_rcv_buf.
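
    A minimal sketch of the resulting shape of struct rpcrdma_req, with
    unrelated members omitted and the regbuf field names shown here only
    for illustration:

        /* Illustrative sketch; not the full definition from xprt_rdma.h. */
        struct rpcrdma_regbuf;          /* DMA-mappable buffer wrapper */

        struct rpcrdma_req {
                /* ... other per-request state omitted ... */
                struct rpcrdma_regbuf *rl_sendbuf; /* backs rq_snd_buf,
                                                    * DMA_TO_DEVICE      */
                struct rpcrdma_regbuf *rl_recvbuf; /* backs rq_rcv_buf,
                                                    * DMA_FROM_DEVICE    */
        };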
    
    Some incidental changes worth noting:
    
    - To clear out some spaghetti, refactor xprt_rdma_allocate (a rough
      sketch of the resulting shape appears after this list).
    - The value stored in rg_size is the same as the value stored in
      the iov.length field, so eliminate rg_size.
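
    A rough, hypothetical sketch of that refactoring direction (every
    identifier below is invented for illustration and is not taken from
    the patch); the point is two independent allocations, one per
    buffer, instead of one allocation backing both rq_snd_buf and
    rq_rcv_buf:

        #include <linux/errno.h>
        #include <linux/slab.h>

        struct demo_regbuf {
                void    *base;          /* kernel virtual address        */
                size_t   length;        /* plays the role rg_size played */
        };

        struct demo_req {
                struct demo_regbuf sendbuf;     /* backs rq_snd_buf */
                struct demo_regbuf recvbuf;     /* backs rq_rcv_buf */
        };

        static int demo_allocate(struct demo_req *req, size_t sndsize,
                                 size_t rcvsize, gfp_t flags)
        {
                req->sendbuf.base = kmalloc(sndsize, flags);
                if (!req->sendbuf.base)
                        return -ENOMEM;
                req->sendbuf.length = sndsize;

                req->recvbuf.base = kmalloc(rcvsize, flags);
                if (!req->recvbuf.base) {
                        kfree(req->sendbuf.base);
                        return -ENOMEM;
                }
                req->recvbuf.length = rcvsize;
                return 0;
        }
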
    Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
    Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>