1. 19 Apr, 2021 4 commits
  2. 16 Apr, 2021 1 commit
    • nfsd: ensure new clients break delegations · 217fd6f6
      J. Bruce Fields authored
      If nfsd already has an open file that it plans to use for IO from
      another client, it may not need to do another vfs open, but it still
      may need to break any delegations in case the existing opens are for
      another client.
      
      Symptoms are that we may incorrectly fail to break a delegation on a
      write open from a different client, when the delegation-holding client
      already has a write open.
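      The fixed condition can be sketched as a standalone check. This is an
      illustrative model, not actual nfsd code; the struct and function names
      here are hypothetical. The point is that reusing a cached open must not
      skip the lease break when any existing open belongs to a different
      client than the one now requesting access:

      ```c
      /* Hypothetical sketch of the fixed logic, not the real nfsd code.
       * Even when a cached struct file is reused (no new vfs open), a
       * delegation break is still required if any existing open comes
       * from a client other than the one making the new request. */
      #include <assert.h>
      #include <stdbool.h>

      struct open_ref { int client_id; };

      /* True when the server must break delegations before reusing the
       * cached open: some existing open belongs to another client. */
      static bool must_break_delegation(const struct open_ref *opens, int n,
                                        int requesting_client)
      {
              for (int i = 0; i < n; i++)
                      if (opens[i].client_id != requesting_client)
                              return true;
              return false;
      }

      int main(void)
      {
              struct open_ref opens[] = { { .client_id = 1 } };

              /* Same client: no break (clients don't break their own
               * delegations, per commit 28df3d15). */
              assert(!must_break_delegation(opens, 1, 1));
              /* Different client: the reused open must trigger a break. */
              assert(must_break_delegation(opens, 1, 2));
              return 0;
      }
      ```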
      
      Fixes: 28df3d15 ("nfsd: clients don't need to break their own delegations")
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
  3. 15 Apr, 2021 2 commits
  4. 14 Apr, 2021 3 commits
  5. 06 Apr, 2021 2 commits
  6. 01 Apr, 2021 1 commit
  7. 31 Mar, 2021 5 commits
    • UAPI: nfsfh.h: Replace one-element array with flexible-array member · c0a744dc
      Gustavo A. R. Silva authored
      There is a regular need in the kernel to provide a way to declare having
      a dynamically sized set of trailing elements in a structure. Kernel code
      should always use “flexible array members”[1] for these cases. The older
      style of one-element or zero-length arrays should no longer be used[2].
      
      Use an anonymous union with a couple of anonymous structs in order to
      keep userspace unchanged:
      
      $ pahole -C nfs_fhbase_new fs/nfsd/nfsfh.o
      struct nfs_fhbase_new {
              union {
                      struct {
                              __u8       fb_version_aux;       /*     0     1 */
                              __u8       fb_auth_type_aux;     /*     1     1 */
                              __u8       fb_fsid_type_aux;     /*     2     1 */
                              __u8       fb_fileid_type_aux;   /*     3     1 */
                              __u32      fb_auth[1];           /*     4     4 */
                      };                                       /*     0     8 */
                      struct {
                              __u8       fb_version;           /*     0     1 */
                              __u8       fb_auth_type;         /*     1     1 */
                              __u8       fb_fsid_type;         /*     2     1 */
                              __u8       fb_fileid_type;       /*     3     1 */
                              __u32      fb_auth_flex[0];      /*     4     0 */
                      };                                       /*     0     4 */
              };                                               /*     0     8 */
      
              /* size: 8, cachelines: 1, members: 1 */
              /* last cacheline: 8 bytes */
      };
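      The general pattern can be sketched standalone: an anonymous union pairs
      the legacy layout (ending in a one-element array) with an identical
      layout whose trailing member is a flexible array, so offsets and size
      are unchanged for existing users. This is an illustrative reconstruction
      with simplified, hypothetical field names, not the nfsfh.h header itself:

      ```c
      /* Sketch of the one-element-array -> flexible-array-member
       * conversion, keeping the on-the-wire layout identical via an
       * anonymous union (names are hypothetical, not from nfsfh.h).
       * Note: a flexible array member inside a nested struct/union is a
       * GNU extension, which is what kernel UAPI headers rely on here. */
      #include <assert.h>
      #include <stddef.h>
      #include <stdint.h>

      struct fh_old {
              uint8_t  version;
              uint8_t  auth_type;
              uint8_t  fsid_type;
              uint8_t  fileid_type;
              uint32_t auth[1];          /* deprecated one-element array */
      };

      struct fh_new {
              union {
                      struct {
                              uint8_t  version_aux;
                              uint8_t  auth_type_aux;
                              uint8_t  fsid_type_aux;
                              uint8_t  fileid_type_aux;
                              uint32_t auth[1];  /* keeps userspace ABI intact */
                      };
                      struct {
                              uint8_t  version;
                              uint8_t  auth_type;
                              uint8_t  fsid_type;
                              uint8_t  fileid_type;
                              uint32_t auth_flex[]; /* flexible-array member */
                      };
              };
      };

      int main(void)
      {
              /* The union preserves both the total size and the offset of
               * the trailing array, so existing users see no change. */
              assert(sizeof(struct fh_new) == sizeof(struct fh_old));
              assert(offsetof(struct fh_new, auth_flex) ==
                     offsetof(struct fh_old, auth));
              return 0;
      }
      ```

      Kernel code can then index `auth_flex` without tripping -Warray-bounds,
      while userspace continues to see the original one-element array.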
      
      Also, this helps with the ongoing efforts to enable -Warray-bounds by
      fixing the following warnings:
      
      fs/nfsd/nfsfh.c: In function ‘nfsd_set_fh_dentry’:
      fs/nfsd/nfsfh.c:191:41: warning: array subscript 1 is above array bounds of ‘__u32[1]’ {aka ‘unsigned int[1]’} [-Warray-bounds]
        191 |        ntohl((__force __be32)fh->fh_fsid[1])));
            |                              ~~~~~~~~~~~^~~
      ./include/linux/kdev_t.h:12:46: note: in definition of macro ‘MKDEV’
         12 | #define MKDEV(ma,mi) (((ma) << MINORBITS) | (mi))
            |                                              ^~
      ./include/uapi/linux/byteorder/little_endian.h:40:26: note: in expansion of macro ‘__swab32’
         40 | #define __be32_to_cpu(x) __swab32((__force __u32)(__be32)(x))
            |                          ^~~~~~~~
      ./include/linux/byteorder/generic.h:136:21: note: in expansion of macro ‘__be32_to_cpu’
        136 | #define ___ntohl(x) __be32_to_cpu(x)
            |                     ^~~~~~~~~~~~~
      ./include/linux/byteorder/generic.h:140:18: note: in expansion of macro ‘___ntohl’
        140 | #define ntohl(x) ___ntohl(x)
            |                  ^~~~~~~~
      fs/nfsd/nfsfh.c:191:8: note: in expansion of macro ‘ntohl’
        191 |        ntohl((__force __be32)fh->fh_fsid[1])));
            |        ^~~~~
      fs/nfsd/nfsfh.c:192:32: warning: array subscript 2 is above array bounds of ‘__u32[1]’ {aka ‘unsigned int[1]’} [-Warray-bounds]
        192 |    fh->fh_fsid[1] = fh->fh_fsid[2];
            |                     ~~~~~~~~~~~^~~
      fs/nfsd/nfsfh.c:192:15: warning: array subscript 1 is above array bounds of ‘__u32[1]’ {aka ‘unsigned int[1]’} [-Warray-bounds]
        192 |    fh->fh_fsid[1] = fh->fh_fsid[2];
            |    ~~~~~~~~~~~^~~
      
      [1] https://en.wikipedia.org/wiki/Flexible_array_member
      [2] https://www.kernel.org/doc/html/v5.10/process/deprecated.html#zero-length-and-one-element-arrays
      
      Link: https://github.com/KSPP/linux/issues/79
      Link: https://github.com/KSPP/linux/issues/109
      Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
    • svcrdma: Clean up dto_q critical section in svc_rdma_recvfrom() · e3eded5e
      Chuck Lever authored
      This, to me, seems less cluttered and less redundant. I was hoping
      it could help reduce lock contention on the dto_q lock by reducing
      the size of the critical section, but alas, the only improvement is
      readability.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
    • svcrdma: Remove svc_rdma_recv_ctxt::rc_pages and ::rc_arg · 5533c4f4
      Chuck Lever authored
      These fields are no longer used.
      
      The size of struct svc_rdma_recv_ctxt is now less than 300 bytes on
      x86_64, down from 2440 bytes.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
    • svcrdma: Remove sc_read_complete_q · 9af723be
      Chuck Lever authored
      Now that svc_rdma_recvfrom() waits for Read completion,
      sc_read_complete_q is no longer used.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
    • svcrdma: Single-stage RDMA Read · 7d81ee87
      Chuck Lever authored
      Currently the generic RPC server layer calls svc_rdma_recvfrom()
      twice to retrieve an RPC message that uses Read chunks. I'm not
      exactly sure why this design was chosen originally.
      
      Instead, let's wait for the Read chunk completion inline in the
      first call to svc_rdma_recvfrom().
      
      The goal is to eliminate some page allocator churn.
      rdma_read_complete() replaces pages in the second svc_rqst by
      calling put_page() repeatedly while the upper layer waits for the
      request to be constructed, which adds unnecessary NFS WRITE round-
      trip latency.
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      Reviewed-by: Tom Talpey <tom@talpey.com>
  8. 22 Mar, 2021 22 commits