    xprtrdma: Chunk list encoders no longer share one rl_segments array · 5ab81428
    Chuck Lever authored
    Currently, the three chunk list encoders each use a portion of the
    single rl_segments array in rpcrdma_req. This is because the MWs for
    each chunk list are preserved in rl_segments so that ro_unmap can
    find and invalidate them after the RPC is complete.
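
    To picture that old layout, here is a minimal userspace sketch (the
    names, sizes, and the cursor field are illustrative stand-ins, not
    the kernel's actual definitions): each encoder claims the next free
    portion of the one array and must leave it untouched so the unmap
    path can still find the MWs afterward.

        #define MAX_SEGS 24                  /* stand-in for RPCRDMA_MAX_SEGS */

        struct seg {                         /* stand-in for rpcrdma_mr_seg */
                void         *rl_mw;         /* MW parked here for ro_unmap */
                void         *mr_page;
                unsigned int  mr_len;
        };

        struct req {                         /* stand-in for rpcrdma_req */
                unsigned int  rl_nused;      /* segments consumed so far */
                struct seg    rl_segments[MAX_SEGS];
        };

        /* Old scheme: each chunk list encoder takes the next unused
         * portion of rl_segments and may not rewind, because the MWs
         * recorded there are needed again after the RPC completes. */
        static struct seg *claim_portion(struct req *req, unsigned int n)
        {
                struct seg *seg = &req->rl_segments[req->rl_nused];

                req->rl_nused += n;          /* three lists share MAX_SEGS */
                return seg;
        }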
    
    However, now that MWs are placed on a per-req linked list as they
    are registered, there is no longer any information in rpcrdma_mr_seg
    that is shared between ro_map and ro_unmap_{sync,safe}, and thus
    nothing in rl_segments needs to be preserved after
    rpcrdma_marshal_req is complete.
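
    A simplified sketch of that decoupling (illustrative types only; the
    per-req list is assumed to be the rl_registered list added earlier
    in this series): everything the unmap paths need now lives on the
    list of registered MWs, so rl_segments is not consulted after
    marshaling.

        struct list_head { struct list_head *next, *prev; };

        struct mw {                          /* stand-in for rpcrdma_mw */
                struct list_head  mw_list;   /* linkage on the per-req list */
                unsigned int      mw_handle; /* what invalidation needs */
        };

        struct req {                         /* stand-in for rpcrdma_req */
                struct list_head  rl_registered; /* MWs to invalidate */
                /* rl_segments[] is now marshal-time scratch space only */
        };

        /* Sketch of an unmap path: it walks rl_registered and never
         * touches rl_segments. */
        static void unmap_all(struct req *req)
        {
                struct list_head *pos;

                for (pos = req->rl_registered.next;
                     pos != &req->rl_registered; pos = pos->next)
                        ; /* invalidate the MW containing 'pos' */
        }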
    
    The rl_segments array can therefore be used just for the needs of
    each rpcrdma_convert_iovs call. Once each chunk list is encoded, the
    next chunk list encoder is free to re-use all of rl_segments.
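
    To make the re-use concrete, a hedged sketch of the marshaling flow
    follows (the helpers are hypothetical stand-ins for
    rpcrdma_convert_iovs and the three chunk list encoders): each pass
    starts again at rl_segments[0] and may fill the whole array.

        #define MAX_SEGS 24                  /* stand-in for RPCRDMA_MAX_SEGS */

        struct seg { unsigned int mr_len; }; /* stand-in for rpcrdma_mr_seg */

        struct req {                         /* stand-in for rpcrdma_req */
                struct seg rl_segments[MAX_SEGS];
        };

        /* Hypothetical helpers: gather the payload into segments, then
         * register and encode one chunk list from those segments. */
        static int convert_iovs(struct seg *seg, int max) { return max; }
        static int encode_list(struct req *req, struct seg *seg, int n) { return n; }

        static int marshal(struct req *req)
        {
                int list, n;

                /* Read, Write, and Reply chunk lists in turn; each one
                 * re-uses all of rl_segments because nothing from the
                 * previous pass has to survive. */
                for (list = 0; list < 3; list++) {
                        n = convert_iovs(req->rl_segments, MAX_SEGS);
                        if (encode_list(req, req->rl_segments, n) < 0)
                                return -1;
                }
                return 0;
        }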
    
    This means all three chunk lists in one RPC request can now each
    encode a full-size data payload with no increase in the size of
    rl_segments.
    
    This is a key requirement for Kerberos support, since both the Call
    and Reply for a single RPC transaction are conveyed via Long
    messages (RDMA Read/Write). Both can be large.
    Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
    Tested-by: Steve Wise <swise@opengridcomputing.com>
    Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>