Commit 1c87b851 authored by Chuck Lever, committed by Anna Schumaker

NFS: Fix rpcrdma_inline_fixup() crash with new LISTXATTRS operation

By switching to an XFS-backed export, I am able to reproduce the
ibcomp worker crash on my client with xfstests generic/013.

For the failing LISTXATTRS operation, xdr_inline_pages() is called
with page_len=12 and buflen=128.

- When ->send_request() is called, rpcrdma_marshal_req() does not
  set up a Reply chunk because buflen is smaller than the inline
  threshold. Thus rpcrdma_convert_iovs() does not get invoked at
  all, and the transport's XDRBUF_SPARSE_PAGES logic is never
  applied to the receive buffer.

- During reply processing, rpcrdma_inline_fixup() tries to copy
  received data into rq_rcv_buf->pages because page_len is positive.
  But there are no receive pages because rpcrdma_marshal_req() never
  allocated them.

The result is that the ibcomp worker faults and dies. Sometimes that
causes a visible crash, and sometimes it results in a transport hang
without other symptoms.
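
As a rough illustration of the fault (a userspace-only sketch; the
struct and function below are simplified stand-ins, not the kernel's
struct xdr_buf or rpcrdma_inline_fixup()):

#include <stddef.h>
#include <string.h>

/* Simplified stand-in for the receive buffer: a head buffer plus an
 * optional page array described by page_len. */
struct fake_rcv_buf {
	char	*head;
	size_t	head_len;
	char	**pages;	/* NULL when no receive pages were provisioned */
	size_t	page_len;
};

/* Model of the inline fixup: fill the head first, then spill the
 * remainder into the page array whenever page_len is positive. */
static void inline_fixup(struct fake_rcv_buf *buf, const char *data, size_t len)
{
	size_t n = len < buf->head_len ? len : buf->head_len;

	memcpy(buf->head, data, n);
	data += n;
	len -= n;

	if (len && buf->page_len)
		/* With page_len > 0 but pages never allocated, this NULL
		 * dereference is the kind of fault the ibcomp worker hits. */
		memcpy(buf->pages[0], data,
		       len < buf->page_len ? len : buf->page_len);
}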

RPC/RDMA's XDRBUF_SPARSE_PAGES support is not entirely correct, and
should eventually be fixed or replaced. However, my preference is
that upper-layer operations should explicitly allocate their receive
buffers (using GFP_KERNEL) when possible, rather than relying on
XDRBUF_SPARSE_PAGES.
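
The change below follows that preference in _nfs42_proc_listxattrs():
all of the xattr receive pages are allocated up front with GFP_KERNEL,
and allocation failures unwind through goto labels. A minimal,
self-contained sketch of the same allocate-then-unwind shape (plain C,
with malloc() standing in for kcalloc()/alloc_page(); the function
name is illustrative only):

#include <stdlib.h>

/* Allocate 'np' buffers up front; on failure, free whatever was
 * already allocated so nothing leaks. */
static void **alloc_bufs_upfront(int np, size_t bufsize)
{
	void **bufs;
	int i;

	bufs = calloc(np, sizeof(*bufs));
	if (!bufs)
		return NULL;

	for (i = 0; i < np; i++) {
		bufs[i] = malloc(bufsize);
		if (!bufs[i])
			goto out_free;
	}
	return bufs;

out_free:
	while (--i >= 0)
		free(bufs[i]);
	free(bufs);
	return NULL;
}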
Reported-by: Olga Kornievskaia <kolga@netapp.com>
Suggested-by: Olga Kornievskaia <kolga@netapp.com>
Fixes: c10a7514 ("NFSv4.2: add the extended attribute proc functions.")
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Reviewed-by: Olga Kornievskaia <kolga@netapp.com>
Reviewed-by: Frank van der Linden <fllinden@amazon.com>
Tested-by: Olga Kornievskaia <kolga@netapp.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
parent 63e2fffa
@@ -1241,12 +1241,13 @@ static ssize_t _nfs42_proc_listxattrs(struct inode *inode, void *buf,
 		.rpc_resp = &res,
 	};
 	u32 xdrlen;
-	int ret, np;
+	int ret, np, i;
 
 
+	ret = -ENOMEM;
 	res.scratch = alloc_page(GFP_KERNEL);
 	if (!res.scratch)
-		return -ENOMEM;
+		goto out;
 
 	xdrlen = nfs42_listxattr_xdrsize(buflen);
 	if (xdrlen > server->lxasize)
@@ -1254,9 +1255,12 @@ static ssize_t _nfs42_proc_listxattrs(struct inode *inode, void *buf,
 	np = xdrlen / PAGE_SIZE + 1;
 
 	pages = kcalloc(np, sizeof(struct page *), GFP_KERNEL);
-	if (pages == NULL) {
-		__free_page(res.scratch);
-		return -ENOMEM;
+	if (!pages)
+		goto out_free_scratch;
+	for (i = 0; i < np; i++) {
+		pages[i] = alloc_page(GFP_KERNEL);
+		if (!pages[i])
+			goto out_free_pages;
 	}
 
 	arg.xattr_pages = pages;
@@ -1271,14 +1275,15 @@ static ssize_t _nfs42_proc_listxattrs(struct inode *inode, void *buf,
 		*eofp = res.eof;
 	}
 
+out_free_pages:
 	while (--np >= 0) {
 		if (pages[np])
 			__free_page(pages[np]);
 	}
-	__free_page(res.scratch);
 	kfree(pages);
-
+out_free_scratch:
+	__free_page(res.scratch);
+out:
 	return ret;
 }
@@ -1528,7 +1528,6 @@ static void nfs4_xdr_enc_listxattrs(struct rpc_rqst *req,
 
 	rpc_prepare_reply_pages(req, args->xattr_pages, 0, args->count,
 				hdr.replen);
-	req->rq_rcv_buf.flags |= XDRBUF_SPARSE_PAGES;
 
 	encode_nops(&hdr);
 }