1. 27 Sep, 2016 13 commits
  2. 23 Sep, 2016 3 commits
    • xprtrdma: use complete() instead of complete_all() · 5690a22d
      Daniel Wagner authored
      There is only one waiter for the completion, therefore there
      is no need to use complete_all(). Let's make that clear by
      using complete() instead of complete_all().
      
      The usage pattern of the completion is:
      
      waiter context                          waker context
      
      frwr_op_unmap_sync()
        reinit_completion()
        ib_post_send()
        wait_for_completion()
      
      					frwr_wc_localinv_wake()
      					  complete()
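      
      A minimal, generic sketch of this single-waiter pattern (illustrative
      only, not the xprtrdma code; the structure and function names below
      are made up):
      
          #include <linux/completion.h>
      
          struct one_shot {
                  struct completion done;
          };
      
          static void one_shot_init(struct one_shot *os)
          {
                  init_completion(&os->done);     /* or reinit_completion() on reuse */
          }
      
          static void one_shot_wait(struct one_shot *os)
          {
                  /* exactly one task ever sleeps here ... */
                  wait_for_completion(&os->done);
          }
      
          static void one_shot_wake(struct one_shot *os)
          {
                  /* ... so complete() wakes it; complete_all() is not needed */
                  complete(&os->done);
          }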
      Signed-off-by: Daniel Wagner <daniel.wagner@bmw-carit.de>
      Cc: Anna Schumaker <Anna.Schumaker@Netapp.com>
      Cc: Trond Myklebust <trond.myklebust@primarydata.com>
      Cc: Chuck Lever <chuck.lever@oracle.com>
      Cc: linux-nfs@vger.kernel.org
      Cc: netdev@vger.kernel.org
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
    • NFS: cache_lib: use complete() instead of complete_all() · 2a446a5d
      Daniel Wagner authored
      There is only one waiter for the completion, therefore there
      is no need to use complete_all(). Let's make that clear by
      using complete() instead of complete_all().
      
      The generic caching code from sunrpc calls revisit() only once.
      
      The usage pattern of the completion is:
      
      waiter context                          waker context
      
      do_cache_lookup_wait()
        nfs_cache_defer_req_alloc()
          init_completion()
        do_cache_lookup()
        nfs_cache_wait_for_upcall()
          wait_for_completion_timeout()
      
      					nfs_dns_cache_revisit()
      					  complete()
      
        nfs_cache_defer_req_put()
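      
      For reference, a minimal sketch of how the wait_for_completion_timeout()
      return value is typically handled (illustrative only, not the NFS cache
      code; the helper and variable names are made up):
      
          #include <linux/completion.h>
      
          /* wait_for_completion_timeout() returns 0 on timeout,
           * otherwise the number of jiffies remaining.
           */
          static bool upcall_arrived(struct completion *done)
          {
                  unsigned long left;
      
                  left = wait_for_completion_timeout(done, 5 * HZ);
                  return left != 0;
          }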
      Signed-off-by: Daniel Wagner <daniel.wagner@bmw-carit.de>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
    • NFS: direct: use complete() instead of complete_all() · 024de8f1
      Daniel Wagner authored
      There is only one waiter for the completion, therefore there
      is no need to use complete_all(). Let's make that clear by
      using complete() instead of complete_all().
      
      nfs_file_direct_write() or nfs_file_direct_read() allocates a request
      object via nfs_direct_req_alloc(), which initializes the
      completion. The request object is freed later in the exit path.
      Between the initialization and the release, either
      nfs_direct_write_schedule_iovec() or
      nfs_direct_read_schedule_iovec() is called to process the request
      asynchronously. The calling function waits via nfs_direct_wait()
      until the asynchronous work has been done. Thus there is only one
      waiter on the completion.
      
      nfs_direct_pgio_init() and nfs_direct_read_completion() are passed via
      function pointers to nfs pageio. The first function does reference
      counting (get_dreq() and put_dreq()), which ensures that
      nfs_direct_read_completion() and nfs_direct_read_schedule_iovec()
      call the completion path only once.
      
      The usage pattern of the completion is:
      
      waiter context                          waker context
      
      nfs_file_direct_write()
        dreq = nfs_direct_req_alloc()
          init_completion()
        nfs_direct_write_schedule_iovec()
        nfs_direct_wait()
          wait_for_completion_killable()
      
                                              nfs_direct_write_schedule_work()
                                                nfs_direct_complete()
                                                  complete()
      
      nfs_file_direct_read()
        dreq = nfs_direct_req_alloc()
          init_completion()
        nfs_direct_read_schedule_iovec()
        nfs_direct_wait()
          wait_for_completion_killable()
                                              nfs_direct_read_schedule_iovec()
                                                nfs_direct_complete()
                                                  complete()
      
                                              nfs_direct_read_completion()
                                                nfs_direct_complete()
                                                  complete()
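      
      As a side note, wait_for_completion_killable() (used above) returns 0
      when the completion fires and -ERESTARTSYS when the wait is interrupted
      by a fatal signal; a minimal sketch of the usual caller pattern
      (illustrative only, not the NFS direct I/O code):
      
          #include <linux/completion.h>
      
          static int wait_for_dreq(struct completion *done)
          {
                  int ret = wait_for_completion_killable(done);
      
                  if (ret)        /* -ERESTARTSYS: caller should bail out */
                          return ret;
                  return 0;       /* completed normally */
          }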
      Signed-off-by: Daniel Wagner <daniel.wagner@bmw-carit.de>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
  3. 22 Sep, 2016 12 commits
  4. 20 Sep, 2016 1 commit
  5. 19 Sep, 2016 11 commits
    • sunrpc: fix write space race causing stalls · d48f9ce7
      David Vrabel authored
      Write space becoming available may race with putting the task to sleep
      in xprt_wait_for_buffer_space().  The existing mechanism to avoid the
      race does not work.
      
      This (edited) partial trace illustrates the problem:
      
         [1] rpc_task_run_action: task:43546@5 ... action=call_transmit
         [2] xs_write_space <-xs_tcp_write_space
         [3] xprt_write_space <-xs_write_space
         [4] rpc_task_sleep: task:43546@5 ...
         [5] xs_write_space <-xs_tcp_write_space
      
      [1] Task 43546 runs but is out of write space.
      
      [2] Space becomes available, xs_write_space() clears the
          SOCKWQ_ASYNC_NOSPACE bit.
      
      [3] xprt_write_space() attempts to wake xprt->snd_task (== 43546), but
          this has not yet been queued and the wake up is lost.
      
      [4] xs_nospace() is called which calls xprt_wait_for_buffer_space()
          which queues task 43546.
      
      [5] The call to sk->sk_write_space() at the end of xs_nospace() (which
          is supposed to handle the above race) does not call
          xprt_write_space() as the SOCKWQ_ASYNC_NOSPACE bit is clear and
          thus the task is not woken.
      
      Fix the race by resetting the SOCKWQ_ASYNC_NOSPACE bit in xs_nospace()
      so the second call to sk->sk_write_space() calls xprt_write_space().
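      
      A simplified sketch of the shape of such a fix in xs_nospace() (not the
      verbatim patch; error handling and locking are omitted, and the helper
      name is made up):
      
          #include <net/sock.h>
      
          /* Re-arm the "no space" notification before the race-breaker call,
           * so that the second sk->sk_write_space() really does reach
           * xprt_write_space() even if space freed up while the task was
           * being queued.
           */
          static void rearm_and_kick_write_space(struct sock *sk)
          {
                  struct socket_wq *wq;
      
                  rcu_read_lock();
                  wq = rcu_dereference(sk->sk_wq);
                  if (wq)
                          set_bit(SOCKWQ_ASYNC_NOSPACE, &wq->flags);
                  rcu_read_unlock();
      
                  sk->sk_write_space(sk);
          }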
      Suggested-by: Trond Myklebust <trondmy@primarydata.com>
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
      cc: stable@vger.kernel.org # 4.4
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
    • pnfs: add a new mechanism to select a layout driver according to an ordered list · ca440c38
      Jeff Layton authored
      Currently, the layout driver selection code always chooses the first one
      from the list. That's not really ideal however, as the server can send
      the list of layout types in any order that it likes. It's up to the
      client to select the best one for its needs.
      
      This patch adds an ordered list of preferred driver types and has the
      selection code sort the list of available layout drivers according to it.
      Any unrecognized layout type is sorted to the end of the list.
      
      For now, the order of preference is hardcoded, but it should be possible
      to make this configurable in the future.
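      
      A hedged sketch of the idea (the preference order, helper names, and
      selection function below are illustrative, not the actual pnfs code):
      
          #include <linux/kernel.h>
          #include <linux/nfs4.h>         /* enum pnfs_layouttype */
      
          /* Lower index == more preferred; unrecognized types rank last. */
          static const u32 layout_prefs[] = {
                  LAYOUT_SCSI,
                  LAYOUT_BLOCK_VOLUME,
                  LAYOUT_FLEX_FILES,
                  LAYOUT_NFSV4_1_FILES,
          };
      
          static int layout_rank(u32 id)
          {
                  int i;
      
                  for (i = 0; i < ARRAY_SIZE(layout_prefs); i++)
                          if (layout_prefs[i] == id)
                                  return i;
                  return ARRAY_SIZE(layout_prefs);        /* unknown: end of list */
          }
      
          /* Pick the most-preferred id from the server-provided array. */
          static u32 pick_layout_type(const u32 *ids, unsigned int n)
          {
                  u32 best = ids[0];
                  unsigned int i;
      
                  for (i = 1; i < n; i++)
                          if (layout_rank(ids[i]) < layout_rank(best))
                                  best = ids[i];
                  return best;
          }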
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Reviewed-by: J. Bruce Fields <bfields@fieldses.org>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
    • xprtrdma: Eliminate rpcrdma_receive_worker() · 496b77a5
      Chuck Lever authored
      Clean up: the extra layer of indirection doesn't add value.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
    • xprtrdma: Rename rpcrdma_receive_wc() · 1519e969
      Chuck Lever authored
      Clean up: When converting xprtrdma to use the new CQ API, I missed a
      spot. The naming convention elsewhere is:
      
        {svc_rdma,rpcrdma}_wc_{operation}
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
    • xprtrdma: Report address of frmr, not mw · eeb30613
      Chuck Lever authored
      Tie frwr debugging messages together by always reporting the address
      of the frwr.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
    • xprtrdma: Support larger inline thresholds · 44829d02
      Chuck Lever authored
      The Version One default inline threshold is still 1KB. But allow
      testing with thresholds up to 64KB.
      
      This maximum is somewhat arbitrary. There's no fundamental
      architectural limit I'm aware of, but it's good to keep the size of
      Receive buffers reasonable. Now that Send can use a s/g list, a
      Send buffer is only as large as each RPC requires. Receive buffers
      are always the size of the inline threshold, however.
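      
      A small sketch of how such a bound might be enforced when the threshold
      is configured (the constants and helper below are illustrative, not
      necessarily the actual xprtrdma code):
      
          #include <linux/kernel.h>
      
          #define EXAMPLE_MIN_INLINE      1024            /* Version One default: 1KB */
          #define EXAMPLE_MAX_INLINE      (64 * 1024)     /* testing ceiling */
      
          static unsigned int clamp_inline_threshold(unsigned int requested)
          {
                  return clamp_t(unsigned int, requested,
                                 EXAMPLE_MIN_INLINE, EXAMPLE_MAX_INLINE);
          }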
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
    • xprtrdma: Use gathered Send for large inline messages · 655fec69
      Chuck Lever authored
      An RPC Call message that is sent inline but that has a data payload
      (ie, one or more items in rq_snd_buf's page list) must be "pulled
      up:"
      
      - call_allocate has to reserve enough RPC Call buffer space to
      accommodate the data payload
      
      - call_transmit has to copy the rq_snd_buf's page list and tail
      into its head iovec before it is sent
      
      As the inline threshold is increased beyond its current 1KB default,
      however, this means data payloads of more than a few KB are copied
      by the host CPU. For example, if the inline threshold is increased
      just to 4KB, then NFS WRITE requests up to 4KB would involve a
      memcpy of the NFS WRITE's payload data into the RPC Call buffer.
      This is an undesirable amount of participation by the host CPU.
      
      The inline threshold may be much larger than 4KB in the future,
      after negotiation with a peer server.
      
      Instead of copying the components of rq_snd_buf into its head iovec,
      construct a gather list of these components, and send them all in
      place. The same approach is already used in the Linux server's
      RPC-over-RDMA reply path.
      
      This mechanism also eliminates the need for rpcrdma_tail_pullup,
      which is used to manage the XDR pad and trailing inline content when
      a Read list is present.
      
      This requires that the pages in rq_snd_buf's page list be DMA-mapped
      during marshaling, and unmapped when a data-bearing RPC is
      completed. This is slightly less efficient for very small I/O
      payloads, but significantly more efficient as data payload size and
      inline threshold increase past a kilobyte.
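      
      A simplified sketch of the gathered-Send idea (illustrative; the helper
      and its parameters are made up, and the caller is assumed to have
      already DMA-mapped each component):
      
          #include <rdma/ib_verbs.h>
      
          static int post_gathered_send(struct ib_qp *qp, u32 lkey,
                                        u64 head_addr, u32 head_len,
                                        u64 page_addr, u32 page_len,
                                        u64 tail_addr, u32 tail_len)
          {
                  struct ib_sge sge[3];
                  struct ib_send_wr wr = { }, *bad_wr;
                  int n = 0;
      
                  /* head iovec: transport and RPC headers */
                  sge[n].addr = head_addr; sge[n].length = head_len; sge[n].lkey = lkey; n++;
      
                  /* data payload page, sent in place instead of being pulled up */
                  if (page_len) {
                          sge[n].addr = page_addr; sge[n].length = page_len; sge[n].lkey = lkey; n++;
                  }
      
                  /* tail: XDR pad and trailing inline content */
                  if (tail_len) {
                          sge[n].addr = tail_addr; sge[n].length = tail_len; sge[n].lkey = lkey; n++;
                  }
      
                  wr.opcode     = IB_WR_SEND;
                  wr.send_flags = IB_SEND_SIGNALED;
                  wr.sg_list    = sge;
                  wr.num_sge    = n;
      
                  return ib_post_send(qp, &wr, &bad_wr);
          }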
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
    • xprtrdma: Basic support for Remote Invalidation · c8b920bb
      Chuck Lever authored
      Have frwr's ro_unmap_sync recognize an invalidated rkey that appears
      as part of a Receive completion. Local invalidation can be skipped
      for that rkey.
      
      Use an out-of-band signaling mechanism to indicate to the server
      that the client is prepared to receive RDMA Send With Invalidate.
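      
      A minimal sketch of the check on the Receive completion (the helper name
      is illustrative, not the actual xprtrdma code):
      
          #include <rdma/ib_verbs.h>
      
          static bool rkey_was_remotely_invalidated(const struct ib_wc *wc, u32 rkey)
          {
                  /* If the server used Send With Invalidate, the Receive
                   * completion carries the rkey it invalidated, and local
                   * invalidation of that rkey can be skipped.
                   */
                  return (wc->wc_flags & IB_WC_WITH_INVALIDATE) &&
                         wc->ex.invalidate_rkey == rkey;
          }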
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
    • xprtrdma: Client-side support for rpcrdma_connect_private · 87cfb9a0
      Chuck Lever authored
      Send an RDMA-CM private message on connect, and look for one during
      a connection-established event.
      
      Both sides can communicate their various implementation limits.
      Implementations that don't support this sideband protocol ignore it.
      
      Once the client knows the server's inline threshold maxima, it can
      adjust the use of Reply chunks, and eliminate most use of Position
      Zero Read chunks. Moderately-sized I/O can be done using a pure
      inline RDMA Send instead of RDMA operations that require memory
      registration.
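      
      A hedged sketch of sending the private message at connect time (the
      helper and the parameter values are illustrative; the private-message
      layout itself is described in the next commit):
      
          #include <rdma/rdma_cm.h>
      
          static int connect_with_private_msg(struct rdma_cm_id *cm_id,
                                              const void *pmsg, u8 len)
          {
                  struct rdma_conn_param param = { };
      
                  /* Advertise implementation limits to the peer; a peer that
                   * does not understand the message simply ignores it.
                   */
                  param.private_data        = pmsg;
                  param.private_data_len    = len;
                  param.responder_resources = 32;         /* illustrative */
                  param.initiator_depth     = 0;
      
                  return rdma_connect(cm_id, &param);
          }
      
      On the client, the server's reply message (if any) shows up in the
      connection-established event's conn.private_data.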
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
    • rpcrdma: RDMA/CM private message data structure · ff06bd19
      Chuck Lever authored
      Introduce data structure used by both client and server to exchange
      implementation details during RDMA/CM connection establishment.
      
      This is an experimental out-of-band exchange between Linux
      RPC-over-RDMA Version One implementations, replacing the deprecated
      CCP (see RFC 5666bis). The purpose of this extension is to enable
      prototyping of features that might be introduced in a subsequent
      version of RPC-over-RDMA.
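      
      A hedged sketch of what such a private message might look like (field
      names and encoding are illustrative, not necessarily the structure as
      merged):
      
          #include <linux/types.h>
      
          struct example_connect_private {
                  __be32  cp_magic;       /* identifies this extension */
                  u8      cp_version;     /* format version of the message */
                  u8      cp_flags;       /* e.g. "remote invalidation supported" */
                  u8      cp_send_size;   /* encoded inline send threshold */
                  u8      cp_recv_size;   /* encoded inline receive threshold */
          } __packed;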
      
      Suggested by Christoph Hellwig and Devesh Sharma.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
    • xprtrdma: Move recv_wr to struct rpcrdma_rep · 6ea8e711
      Chuck Lever authored
      Clean up: The fields in the recv_wr do not vary. There is no need to
      initialize them before each ib_post_recv(). This removes a large-ish
      data structure from the stack.
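      
      A small sketch of the idea (structure and field names are illustrative,
      not the actual rpcrdma_rep layout):
      
          #include <rdma/ib_verbs.h>
      
          struct example_rep {
                  struct ib_sge           rr_sge;
                  struct ib_recv_wr       rr_recv_wr;
          };
      
          /* Fill the invariant fields once at setup time... */
          static void example_rep_init_wr(struct example_rep *rep)
          {
                  rep->rr_recv_wr.next    = NULL;
                  rep->rr_recv_wr.sg_list = &rep->rr_sge;
                  rep->rr_recv_wr.num_sge = 1;
          }
      
          /* ...so posting no longer builds a work request on the stack. */
          static int example_post_recv(struct ib_qp *qp, struct example_rep *rep)
          {
                  struct ib_recv_wr *bad_wr;
      
                  return ib_post_recv(qp, &rep->rr_recv_wr, &bad_wr);
          }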
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>