- 25 Apr, 2019 39 commits
-
-
Chuck Lever authored
The Receive handler runs in process context and can therefore use on-demand GFP_KERNEL allocations instead of pre-allocation. This makes the xprtrdma backchannel independent of the number of backchannel session slots provisioned by the Upper Layer protocol. Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
Chuck Lever authored
For code legibility, clean up the function names to be consistent with the pattern: "rpcrdma" _ object-type _ action Also rpcrdma_regbuf_alloc and rpcrdma_regbuf_free no longer have any callers outside of verbs.c, and can thus be made static. Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
Chuck Lever authored
Clean up by providing an API to do this common task. At this point, the difference between rpcrdma_get_sendbuf and rpcrdma_get_recvbuf has become tiny. These can be collapsed into a single helper. Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
Chuck Lever authored
Allocating an rpcrdma_req's regbufs at xprt create time enables a pair of micro-optimizations: First, if these regbufs are always there, we can eliminate two conditional branches from the hot xprt_rdma_allocate path. Second, by allocating a 1KB buffer, it places a lower bound on the size of these buffers, without adding yet another conditional branch. The lower bound reduces the number of hardway re-allocations. In fact, for some workloads it completely eliminates hardway allocations. Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
Chuck Lever authored
Allocate the struct rpcrdma_regbuf separately from the I/O buffer to better guarantee the alignment of the I/O buffer and eliminate the wasted space between the rpcrdma_regbuf metadata and the buffer itself. Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
Chuck Lever authored
For code legibility, clean up the function names to be consistent with the pattern: "rpcrdma" _ object-type _ action Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
Chuck Lever authored
Eventually, I'd like to invoke rpcrdma_create_req() during the call_reserve step. Memory allocation there probably needs to use GFP_NOIO. Therefore a set of GFP flags needs to be passed in. As an additional clean up, just return a pointer or NULL, because the only error return code here is -ENOMEM. Lastly, clean up the function names to be consistent with the pattern: "rpcrdma" _ object-type _ action Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
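As a rough sketch of the calling convention described here (hypothetical stand-in types and names, not the actual xprtrdma code), the allocator takes the caller's GFP flags and reports failure purely by returning NULL:

    #include <linux/slab.h>
    #include <linux/gfp.h>

    /* Hypothetical stand-in for struct rpcrdma_req. */
    struct req_stub {
            void *rl_buffer;
    };

    /*
     * Sketch: the caller picks the GFP flags (GFP_KERNEL at transport
     * setup time, GFP_NOIO from a reclaim-sensitive path), and since
     * -ENOMEM is the only possible error, the function returns a
     * pointer or NULL rather than an errno.
     */
    static struct req_stub *req_stub_create(size_t size, gfp_t flags)
    {
            struct req_stub *req;

            req = kzalloc(sizeof(*req), flags);
            if (!req)
                    return NULL;
            req->rl_buffer = kmalloc(size, flags);
            if (!req->rl_buffer) {
                    kfree(req);
                    return NULL;
            }
            return req;
    }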
-
Chuck Lever authored
After a DMA map failure in frwr_map, mark the MR so that recycling won't attempt to DMA unmap it. Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Fixes: e2f34e26 ("xprtrdma: Yet another double DMA-unmap") Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
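The shape of the fix, very roughly (hypothetical fields, not the real struct rpcrdma_mr): keep a per-MR "mapped" flag that the recycling path consults, so an MR whose DMA map failed is never unmapped:

    #include <linux/dma-mapping.h>

    /* Hypothetical stand-in for the MR bookkeeping. */
    struct mr_stub {
            struct device   *dev;
            dma_addr_t      dma_addr;
            size_t          dma_len;
            bool            dma_mapped;     /* false after a map failure */
    };

    /*
     * Sketch: only unmap if the MR was actually mapped, and clear the
     * flag so a second recycle pass cannot unmap it again.
     */
    static void mr_stub_recycle(struct mr_stub *mr)
    {
            if (mr->dma_mapped) {
                    dma_unmap_single(mr->dev, mr->dma_addr, mr->dma_len,
                                     DMA_BIDIRECTIONAL);
                    mr->dma_mapped = false;
            }
    }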
-
Chuck Lever authored
Page allocation requests made when the SPARSE_PAGES flag is set are allowed to fail, and are not critical. No need to spend a rare resource. Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
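One way to express "allowed to fail, don't dip into reserves" to the page allocator (a sketch only; the flag combination actually used by the patch may differ):

    #include <linux/gfp.h>

    /*
     * Sketch: a non-critical page allocation. __GFP_NORETRY keeps the
     * allocator from trying hard (no OOM killer), and __GFP_NOWARN
     * suppresses the failure warning, since the caller handles NULL.
     */
    static struct page *sparse_page_alloc(void)
    {
            return alloc_page(GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN);
    }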
-
Xiaoli Feng authored
The dedupe_file_range operation has been combined into remap_file_range, but nfs42_remap_file_range skips dedupe operations. Before this patch:
    # dd if=/dev/zero of=nfs/file bs=1M count=1
    # xfs_io -c "dedupe nfs/file 4k 64k 4k" nfs/file
    XFS_IOC_FILE_EXTENT_SAME: Invalid argument
After this patch:
    # dd if=/dev/zero of=nfs/file bs=1M count=1
    # xfs_io -c "dedupe nfs/file 4k 64k 4k" nfs/file
    deduped 4096/4096 bytes at offset 65536
    4 KiB, 1 ops; 0.0046 sec (865.988 KiB/sec and 216.4971 ops/sec)
Signed-off-by: Xiaoli Feng <fengxiaoli0714@gmail.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
Trond Myklebust authored
The lock context already references and tracks the open context, so take the opportunity to save some space in struct nfs_page. Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
Trond Myklebust authored
Add a helper for when we remove the explicit pointer to the open context. Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
Trond Myklebust authored
Force the lock context to keep a reference to the parent open context so that we can guarantee the validity of the latter. Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
Trond Myklebust authored
If the server is unable to immediately execute an RPC call, and returns an NFS4ERR_DELAY then we can assume it is safe to interrupt the operation in order to handle ordinary signals. This allows the application to service timer interrupts that would otherwise have to wait until the server is again able to respond. Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
Trond Myklebust authored
When the client is reading or writing using pNFS, and hits an error on the DS, then it typically sends a LAYOUTERROR and/or LAYOUTRETURN to the MDS, before redirtying the failed pages, and going for a new round of reads/writebacks. The problem is that if the server has no way to fix the DS, then we may need a way to interrupt this loop after a set number of attempts have been made. This patch adds an optional module parameter that allows the admin to specify how many times to retry the read/writeback process before failing with a fatal error. The default behaviour is to retry forever. Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
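A tunable like this is typically wired up with module_param(); a sketch with an illustrative name and default (see the patch itself for the real parameter):

    #include <linux/module.h>
    #include <linux/moduleparam.h>

    /*
     * Sketch: admin-settable retry limit, illustrative name only.
     * Here 0 stands for "retry forever", matching the default
     * behaviour described above.
     */
    static unsigned int io_retry_limit;
    module_param(io_retry_limit, uint, 0644);
    MODULE_PARM_DESC(io_retry_limit,
                     "pNFS read/writeback retries before a fatal error (0 = retry forever)");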
-
Trond Myklebust authored
All the callers of nfs_create_request() are now creating page group heads, so we can remove the redundant 'last' page argument. Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
Trond Myklebust authored
We require all NFS I/O subrequests to duplicate the lock context as well as the open context. Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
Trond Myklebust authored
Replace the NFS custom error reporting mechanism with the generic mapping_set_error(). Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
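With the generic mechanism, a writeback error is simply latched on the address_space and picked up by a later fsync(); roughly:

    #include <linux/fs.h>
    #include <linux/pagemap.h>

    /*
     * Sketch: record a writeback error via the generic errseq_t-based
     * mechanism instead of a filesystem-private field. The error is
     * reported to userspace by a subsequent fsync()/msync() on the file.
     */
    static void record_wb_error(struct address_space *mapping, int error)
    {
            if (error < 0)
                    mapping_set_error(mapping, error);
    }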
-
Trond Myklebust authored
vfs_fsync() has the side effect of clearing unreported writeback errors, so we need to make sure that we do not abuse it in situations where applications might not normally expect us to report those errors. The solution is to replace calls to vfs_fsync() with calls to nfs_wb_all(). Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
Trond Myklebust authored
The NFS read code can trigger writeback while holding the page lock. If an error then triggers a call to nfs_write_error_remove_page(), we can deadlock. Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
Trond Myklebust authored
When flushing out dirty pages, the fact that we may hit fatal errors is not a reason to stop writeback. Those errors are reported through fsync(), not through the flush mechanism. Fixes: a6598813 ("NFS: Don't write back further requests if there...") Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
Trond Myklebust authored
Add a mount option that exposes the ETIMEDOUT errors that occur during soft timeouts to the application. This allows aware applications to distinguish between server disk IO errors and client timeout errors. Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
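With this option, a mount command along the lines of "mount -t nfs -o soft,softerr server:/export /mnt" (illustrative; server and path are placeholders) makes a soft timeout surface to the application as ETIMEDOUT rather than EIO.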
-
Trond Myklebust authored
When the label says "for internal use only", then it doesn't belong in the 'uapi' subtree. Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
Trond Myklebust authored
When we introduce the 'softerr' mount option, we will see the RPC layer returning ETIMEDOUT errors if the server is unresponsive. We want to consider those errors to be fatal, on par with the EIO errors that are returned by ordinary 'soft' timeouts. Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
Trond Myklebust authored
Add the 'softerr' rpc client flag that sets the RPC_TASK_TIMEOUT flag on all new rpc tasks that are attached to that rpc client. Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
Trond Myklebust authored
In particular, the timeout messages can be very noisy, so we ought to ratelimit them in order to avoid spamming the syslog. Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
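The usual kernel idiom for this is the ratelimited printk variants; a sketch (message text and function name illustrative):

    #include <linux/printk.h>

    /*
     * Sketch: only a bounded burst of these messages per ratelimit
     * interval reaches the log, however often the timeout fires.
     */
    static void report_server_timeout(const char *servername)
    {
            pr_warn_ratelimited("nfs: server %s not responding, timed out\n",
                                servername);
    }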
-
Trond Myklebust authored
When calculating the major timeout for a new task, when we know that the connection has been broken, use the task->tk_start to ensure that we also take into account the time spent waiting for a slot or session slot. This ensures that we fail over soft requests relatively quickly once the connection has actually been broken, and the first requests have started to fail. Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
Trond Myklebust authored
If a soft NFSv4 request is sent, then we don't need it to time out unless the connection breaks. The reason is that as long as the connection is unbroken, the protocol states that the server is not allowed to drop the request. IOW: as long as the connection remains unbroken, the client may assume that all transmitted RPC requests are being processed by the server, and that retransmissions and timeouts of those requests are unwarranted. Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
Trond Myklebust authored
Add variables to track RPC level errors so that we can distinguish between issues that arose in the RPC transport layer and those arising from the reply message. Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
Trond Myklebust authored
Ensure that, while waiting in the transport layer, we don't sleep past a major timeout. Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
Trond Myklebust authored
Don't wake idle CPUs only for the purpose of servicing an RPC queue timeout. Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
Trond Myklebust authored
Simplify the setting of queue timeouts by using the timer_reduce() function. Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
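timer_reduce() only re-arms the timer if the new expiry is earlier than the one already set, which is exactly what a queue holding tasks with differing deadlines needs; a sketch:

    #include <linux/timer.h>

    /*
     * Sketch: each task offers its own deadline; timer_reduce() arms the
     * timer only if that deadline is sooner than the current expiry, so
     * the queue timer always fires at the earliest pending timeout.
     */
    static void queue_offer_timeout(struct timer_list *queue_timer,
                                    unsigned long expires)
    {
            timer_reduce(queue_timer, expires);
    }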
-
Trond Myklebust authored
Add a helper to ensure that debugfs and friends print out the correct current task timeout value. Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
Trond Myklebust authored
Clean up the RPC task sleep interfaces by replacing the task->tk_timeout 'hidden parameter' to rpc_sleep_on() with a new function that takes an absolute timeout. Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
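In other words, a caller that used to stash a relative value in task->tk_timeout before calling rpc_sleep_on() now passes an absolute deadline (for example, jiffies plus the desired delay) to the new rpc_sleep_on_timeout() helper.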
-
Trond Myklebust authored
None of the callers set the 'action' argument, so let's just remove it. Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
Trond Myklebust authored
rpc_sleep_on() does not need to set the task->tk_callback under the queue lock, so move that out. Also refactor the check for whether the task is active. Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
Trond Myklebust authored
Convert the transport callback to actually put the request to sleep instead of just setting a timeout. This is in preparation for rpc_sleep_on_timeout(). Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
Trond Myklebust authored
Clean up. Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
-
Trond Myklebust authored
The RPC_TASK_KILLED flag should really not be set from another context because it can clobber data in the struct task when task->tk_flags is changed non-atomically. Let's therefore swap out RPC_TASK_KILLED with an atomic flag, and add a function to set that flag and safely wake up the task. Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
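The shape of the fix, very roughly (illustrative names, not the ones used by the patch): move the "killed/signalled" state into a word that is only touched with atomic bitops, and pair setting the bit with a wake-up:

    #include <linux/bitops.h>
    #include <linux/wait.h>

    enum { STUB_TASK_SIGNALLED = 0 };       /* bit number, illustrative */

    struct task_stub {
            unsigned long           tk_runstate;    /* atomic bitops only */
            wait_queue_head_t       tk_wait;        /* hypothetical wakeup target */
    };

    /*
     * Sketch: set_bit() cannot clobber neighbouring flags the way a
     * non-atomic read-modify-write of task->tk_flags can, so this is
     * safe to call from another context; waking the task lets it
     * notice the flag promptly.
     */
    static void task_stub_signal(struct task_stub *task)
    {
            set_bit(STUB_TASK_SIGNALLED, &task->tk_runstate);
            wake_up(&task->tk_wait);
    }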
-
- 21 Apr, 2019 1 commit
-
-
Linus Torvalds authored
-