- 19 Oct, 2010 14 commits
-
-
Pavel Emelyanov authored
The task in question is dereferenced above (and is actually never NULL). Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Reviewed-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Pavel Emelyanov authored
Same for the UDP socket creation paths. Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Reviewed-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Pavel Emelyanov authored
The v4 and v6 wrappers only pass the respective address family to xs_tcp_setup_socket. This family can be taken from the xprt's sockaddr. Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Reviewed-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Pavel Emelyanov authored
Now we have a single socket creation routine and can call it directly from the setup_socket routines. Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Reviewed-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Pavel Emelyanov authored
After xs_bind is merged it's easy to merge its callers. Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Reviewed-by: Chuck Lever <chuck.lever@oracle.com> [bfields@redhat.com: fix address family initialization] Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Pavel Emelyanov authored
The only difference between xs_bind4 and xs_bind6 is the size of the sockaddr structure they use. Fortunately, that size can be obtained indirectly from the transport. Changes since v1: * use sockaddr_storage instead of sockaddr * use rpc_set_port instead of assigning the port manually Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Reviewed-by: Chuck Lever <chuck.lever@oracle.com> [bfields@redhat.com: fix address family initialization] Signed-off-by: J. Bruce Fields <bfields@redhat.com>
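The idea behind the merged helper can be sketched in user space: keep the source address in a struct sockaddr_storage, derive the sockaddr length from the stored family, and set the port through a small helper in the spirit of rpc_set_port. This is an illustrative sketch, not the kernel's xs_bind; the helper names (sockaddr_len, set_port, bind_any) are made up for the example.

```c
/* Minimal sketch: family-agnostic bind driven by sockaddr_storage. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static socklen_t sockaddr_len(const struct sockaddr_storage *ss)
{
    /* The sockaddr size follows from the stored address family. */
    return ss->ss_family == AF_INET6 ? sizeof(struct sockaddr_in6)
                                     : sizeof(struct sockaddr_in);
}

static void set_port(struct sockaddr_storage *ss, unsigned short port)
{
    /* Analogous in spirit to rpc_set_port: one place to set the port. */
    if (ss->ss_family == AF_INET6)
        ((struct sockaddr_in6 *)ss)->sin6_port = htons(port);
    else
        ((struct sockaddr_in *)ss)->sin_port = htons(port);
}

static int bind_any(int family, unsigned short port)
{
    struct sockaddr_storage ss;
    int sock = socket(family, SOCK_DGRAM, 0);

    if (sock < 0)
        return -1;
    memset(&ss, 0, sizeof(ss));          /* wildcard address in either family */
    ss.ss_family = family;               /* family taken from the stored address */
    set_port(&ss, port);
    if (bind(sock, (struct sockaddr *)&ss, sockaddr_len(&ss)) < 0) {
        close(sock);
        return -1;
    }
    return sock;
}

int main(void)
{
    int s4 = bind_any(AF_INET, 0);       /* port 0: let the kernel pick */
    int s6 = bind_any(AF_INET6, 0);

    printf("v4 fd=%d v6 fd=%d\n", s4, s6);
    if (s4 >= 0) close(s4);
    if (s6 >= 0) close(s6);
    return 0;
}
```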
-
Pavel Emelyanov authored
Remove now unneeded wrappers that just add type and protocol to socket creation callback. Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Reviewed-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Pavel Emelyanov authored
Same patch for v6 protocols. Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Reviewed-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Pavel Emelyanov authored
The UDPv4 and TCPv4 socket creation callbacks now look very similar. Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Reviewed-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Pavel Emelyanov authored
Make it look like the TCP socket creation. Unfortunately, the git diff made the patch look messy :( Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Reviewed-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Pavel Emelyanov authored
The xs_tcp_reuse_connection takes the xprt only to pass it down to xs_abort_connection. The latter can get it from the given transport itself. Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Reviewed-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Pavel Emelyanov authored
Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Reviewed-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Pavel Emelyanov authored
Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Reviewed-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Pavel Emelyanov authored
Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Reviewed-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
- 18 Oct, 2010 2 commits
-
-
Tom Tucker authored
There are several error paths in the code that do not unmap DMA. This patch adds calls to svc_rdma_unmap_dma to free these DMA contexts. Signed-off-by: Tom Tucker <tom@opengridcomputing.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
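The general shape of such a fix can be sketched as follows: every error exit taken after a successful mapping must go through a cleanup label that undoes it. This is a hedged, user-space sketch of the pattern only; map_dma, unmap_dma and post_send are placeholders, not the svcrdma functions.

```c
#include <stdbool.h>
#include <stdio.h>

/* Placeholders standing in for the real mapping/posting calls. */
static bool map_dma(int ctxt)   { (void)ctxt; return true; }
static void unmap_dma(int ctxt) { printf("unmapped context %d\n", ctxt); }
static bool post_send(int ctxt) { (void)ctxt; return false; /* simulate failure */ }

static int send_reply(int ctxt)
{
    if (!map_dma(ctxt))
        return -1;              /* nothing mapped yet, a plain return is fine */

    if (!post_send(ctxt))
        goto err_unmap;         /* mapped but not posted: must unmap */

    return 0;

err_unmap:
    unmap_dma(ctxt);            /* the cleanup the original error paths were missing */
    return -1;
}

int main(void)
{
    return send_reply(1) ? 1 : 0;
}
```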
-
Tom Tucker authored
There was logic in the send path that assumed that a page containing data to send to the client has a KVA. This is not always the case and can result in data corruption when page_address returns zero and we end up DMA mapping zero. This patch changes the bus mapping logic to avoid page_address() where necessary and converts all calls from ib_dma_map_single to ib_dma_map_page in order to keep the map/unmap calls symmetric. Signed-off-by: Tom Tucker <tom@ogc.us> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
- 12 Oct, 2010 2 commits
-
-
J. Bruce Fields authored
Expire clients more promptly, at the expense of possibly running the laundromat thread more frequently. Though it's not the default, I'd like it to be feasible to run with a lease time of just a few seconds, at which point a minimum 10 second wait between laundromat runs seems a little much. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
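A minimal sketch of the scheduling idea, assuming the laundromat simply sleeps until the earliest client lease expiry instead of enforcing a long fixed minimum wait; the values and names below are illustrative, not nfsd's.

```c
#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t now = time(NULL);
    /* Per-client lease expiry times (illustrative). */
    time_t expiries[] = { now + 4, now + 90, now + 12 };
    time_t next = expiries[0];

    for (size_t i = 1; i < sizeof(expiries) / sizeof(expiries[0]); i++)
        if (expiries[i] < next)
            next = expiries[i];

    /* Sleep only until the earliest expiry (never negative),
     * rather than clamping to a 10-second minimum. */
    long wait = next > now ? (long)(next - now) : 0;
    printf("run laundromat again in %ld seconds\n", wait);
    return 0;
}
```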
-
Pavel Emelyanov authored
Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
- 11 Oct, 2010 2 commits
-
-
NeilBrown authored
We limit the number of 'defer' requests to DFR_MAX. The imposition of this limit is spread about a bit - sometimes we don't add new things to the list, sometimes we remove old things. Also it is currently applied to requests which we are 'waiting' for rather than 'deferring'. This doesn't seem ideal, as 'waiting' requests are naturally limited by the number of threads. So gather the DFR_MAX handling code in one place and only apply it to requests that are actually being deferred. This means that not all 'cache_deferred_req' structures go on the 'cache_defer_list', so we need to be careful when adding and removing things. Signed-off-by: NeilBrown <neilb@suse.de> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
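A hedged sketch of that policy in plain C: the cap is enforced in one place, and only requests actually sitting on the deferred list count against it - when a new one is queued past the limit, the oldest is dropped. DFR_MAX, defer_request and the list layout below are illustrative, not the kernel's.

```c
#include <stdio.h>
#include <stdlib.h>

#define DFR_MAX 3   /* illustrative cap; the kernel uses its own value */

struct deferred_req {
    int id;
    struct deferred_req *next;
};

static struct deferred_req *head, *tail;
static int nr_deferred;

static void defer_request(int id)
{
    struct deferred_req *dr = malloc(sizeof(*dr));

    dr->id = id;
    dr->next = NULL;
    if (tail)
        tail->next = dr;
    else
        head = dr;
    tail = dr;

    if (++nr_deferred > DFR_MAX) {      /* the cap is enforced in one place */
        struct deferred_req *old = head;

        head = old->next;
        if (!head)
            tail = NULL;
        nr_deferred--;
        printf("dropped oldest deferred request %d\n", old->id);
        free(old);
    }
}

int main(void)
{
    for (int id = 1; id <= 5; id++)
        defer_request(id);
    for (struct deferred_req *dr = head; dr; dr = dr->next)
        printf("still deferred: %d\n", dr->id);
    return 0;
}
```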
-
NeilBrown authored
The return value from cache_defer_req is somewhat confusing. Various different error codes are returned, but the single caller is only interested in success or failure. In fact it can measure this success or failure itself by checking CACHE_PENDING, which makes the point of the code more explicit. So change cache_defer_req to return 'void' and test CACHE_PENDING after it completes, to see if the request was actually deferred or not. Similarly setup_deferral and cache_wait_req don't need a return value, so make them void and remove some code. The call to cache_revisit_request (to guard against a race) is only needed for the second call to setup_deferral, so move it out of setup_deferral to after that second call. With the first call the race is handled differently (by explicitly calling 'wait_for_completion'). Signed-off-by: NeilBrown <neilb@suse.de> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
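A small sketch of the resulting calling convention, assuming a PENDING bit on the cache item: the deferral routine returns void and the caller re-tests the bit to learn whether the request is still deferred. The names below stand in for the kernel's CACHE_PENDING and cache_defer_req.

```c
#include <stdbool.h>
#include <stdio.h>

#define PENDING 0x1   /* stand-in for the kernel's CACHE_PENDING bit */

struct cache_item {
    unsigned int flags;
};

/* If the item gets filled in while we set up the deferral, the
 * fill-in path clears PENDING; we simulate that with a parameter. */
static void defer_req(struct cache_item *item, bool filled_meanwhile)
{
    if (filled_meanwhile)
        item->flags &= ~PENDING;
    /* otherwise the request stays queued and the bit stays set */
}

static const char *try_defer(struct cache_item *item, bool filled_meanwhile)
{
    defer_req(item, filled_meanwhile);
    /* The caller decides by re-checking the flag, not a return code. */
    return (item->flags & PENDING) ? "deferred" : "completed";
}

int main(void)
{
    struct cache_item a = { .flags = PENDING };
    struct cache_item b = { .flags = PENDING };

    printf("a: %s\n", try_defer(&a, false));   /* stays deferred */
    printf("b: %s\n", try_defer(&b, true));    /* answer arrived meanwhile */
    return 0;
}
```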
-
- 02 Oct, 2010 1 commit
-
-
J. Bruce Fields authored
Commit 78155ed7 "nfsd4: distinguish expired from stale stateids" attempted to distinguish expired and stale stateids using time information that may not have been completely reliable, so I reverted it. That was throwing out the baby with the bathwater; we still do want to return expired, but let's do that using the simpler approach of just assuming any stateid is expired if it looks like it was given out by the current server instance, but we can't find it any more. This may help clients that are recovering from network partitions. Reported-by: Bian Naimeng <biannm@cn.fujitsu.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
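The approach can be illustrated with a toy stateid check: the stateid carries an opaque field recording which server instance issued it; if that matches the running instance but the state can no longer be found, report it expired, otherwise report it stale. Field and function names below are assumptions for the example, not nfsd's.

```c
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

/* Illustrative stateid layout: the opaque part records the server
 * instance (boot time) that issued it. */
struct stateid {
    time_t boot_time;     /* which server instance issued this id */
    unsigned int id;      /* index into that instance's state table */
};

enum lookup_status { ST_OK, ST_EXPIRED, ST_STALE };

static time_t server_boot_time;

static bool find_state(unsigned int id)
{
    (void)id;
    return false;         /* pretend we no longer hold this state */
}

static enum lookup_status check_stateid(const struct stateid *sid)
{
    if (sid->boot_time != server_boot_time)
        return ST_STALE;      /* issued by some other server instance */
    if (!find_state(sid->id))
        return ST_EXPIRED;    /* ours, but discarded: treat as expired */
    return ST_OK;
}

int main(void)
{
    server_boot_time = time(NULL);

    struct stateid ours  = { .boot_time = server_boot_time,        .id = 7 };
    struct stateid other = { .boot_time = server_boot_time - 1000, .id = 7 };

    printf("ours:  %d (1 = expired)\n", check_stateid(&ours));
    printf("other: %d (2 = stale)\n", check_stateid(&other));
    return 0;
}
```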
-
- 01 Oct, 2010 19 commits
-
-
J. Bruce Fields authored
As long as we're not implementing any session security, we should just automatically add any new connections that come along to the list of connections associated with the session. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
J. Bruce Fields authored
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
J. Bruce Fields authored
Remove connections from the list when they go down. Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
-
J. Bruce Fields authored
NFSv4.1 needs to be notified when a client TCP connection goes down, if that connection is being used as a backchannel, so that it can warn the client that it has lost the backchannel connection. Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
-
J. Bruce Fields authored
The spec requires us in various places to keep track of the connections associated with each session. Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
-
J. Bruce Fields authored
Changes: - make sure session memory reservation is released on failure path. - use min_t()/min() for more compact code in several places. - break alloc_init_session into smaller pieces. - miscellaneous other cleanup. Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
-
J. Bruce Fields authored
This returns an nfs error, not -ERRNO. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
J. Bruce Fields authored
Note we're allocating an array of nfsd4_slot *'s, not nfsd4_slot's. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
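A small user-space illustration of the distinction being documented: a table of pointers to slots is sized by sizeof(pointer), and each slot is then allocated separately. The struct slot here is a stand-in for nfsd4_slot.

```c
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for a session slot; the real structure is different. */
struct slot {
    char buf[512];
};

int main(void)
{
    size_t n = 16;

    /* A table of pointers to slots: each element is sizeof(struct slot *). */
    struct slot **slot_table = calloc(n, sizeof(struct slot *));

    /* Each slot is then allocated (and sized) separately. */
    for (size_t i = 0; i < n; i++)
        slot_table[i] = calloc(1, sizeof(struct slot));

    printf("table: %zu bytes, one slot: %zu bytes\n",
           n * sizeof(struct slot *), sizeof(struct slot));

    for (size_t i = 0; i < n; i++)
        free(slot_table[i]);
    free(slot_table);
    return 0;
}
```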
-
J. Bruce Fields authored
Instead of creating the new rpc client from a regular server thread, set a flag, kick off a null call, and allow the null call to do the work of setting up the client on the callback workqueue. Use a spinlock to ensure the callback work gets a consistent view of the callback parameters. This allows, for example, changing the callback from contexts where sleeping is not allowed. I hope it will also keep the locking simple as we add more session and trunking features, by serializing most of the callback-specific work. This also closes a small race where the new cb_ident could be used with an old connection (or vice-versa). Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
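A user-space analogue of the scheme, using pthreads in place of the callback workqueue: callers that must not sleep only record the new parameters under a lock and set a flag, and a worker later takes a consistent snapshot and does the slow setup. All names below are illustrative, not nfsd's.

```c
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

/* Illustrative callback parameters. */
struct cb_params {
    int ident;
    char addr[32];
};

static pthread_mutex_t cb_lock = PTHREAD_MUTEX_INITIALIZER;
static struct cb_params cb_params;
static bool cb_update_pending;

/* Called from contexts that must not sleep: just record the change. */
static void request_cb_update(int ident, const char *addr)
{
    pthread_mutex_lock(&cb_lock);
    cb_params.ident = ident;
    snprintf(cb_params.addr, sizeof(cb_params.addr), "%s", addr);
    cb_update_pending = true;
    pthread_mutex_unlock(&cb_lock);
}

/* Worker (standing in for the callback workqueue): takes a consistent
 * snapshot under the lock, then does the slow setup outside it. */
static void *cb_worker(void *arg)
{
    struct cb_params snap;

    (void)arg;
    pthread_mutex_lock(&cb_lock);
    if (!cb_update_pending) {
        pthread_mutex_unlock(&cb_lock);
        return NULL;
    }
    snap = cb_params;                 /* ident and addr seen together */
    cb_update_pending = false;
    pthread_mutex_unlock(&cb_lock);

    printf("setting up callback client to %s (ident %d)\n",
           snap.addr, snap.ident);
    /* ...create rpc client, send null call, etc... */
    return NULL;
}

int main(void)
{
    pthread_t worker;

    request_cb_update(42, "192.0.2.1");
    pthread_create(&worker, NULL, cb_worker, NULL);
    pthread_join(worker, NULL);
    return 0;
}
```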
-
J. Bruce Fields authored
I don't see the point of the separate struct. It seems to just be getting in the way. Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
-
J. Bruce Fields authored
This will eventually allow us, for example, to kick off null callback from contexts where we can't sleep. Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
-
J. Bruce Fields authored
Make the recall callback code more generic, so that other callbacks will be able to use it too. Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
-
J. Bruce Fields authored
With apologies for the gratuitous rename, the new name seems more helpful to me. Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
-
J. Bruce Fields authored
These two structs don't really need to be distinct as far as I can tell. Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
-
J. Bruce Fields authored
Now that we have both nfsd4_callback and nfsd4_cb_conn structures, I get confused if variables of both types are always named cb.... Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
-
J. Bruce Fields authored
Unfortunately, spkm3 never got very far; while interoperability with one other implementation was demonstrated at some point, problems were found with the spec that were deemed not worth fixing. The kernel code is useless on its own without nfs-utils patches which were never merged into nfs-utils, and were only ever available from citi.umich.edu. They appear not to have been updated since 2005. Therefore it seems safe to assume that this code has no users, and never will. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
NeilBrown authored
If we set up to wait for a cache item to be filled in, and then find that it is no longer pending, it could be that some other thread is in 'cache_revisit_request' and has moved our request to its 'pending' list. So when our setup_deferral calls cache_revisit_request, it will find nothing to put on the pending list, and do nothing. We then return from cache_wait_req, thus leaving the 'sleeper' on-stack structure open to being corrupted by subsequent stack usage. However, that 'sleeper' could still be on the 'pending' list that the other thread is looking at, and so any corruption could cause it to behave badly. To avoid this race we simply take the same path as if the 'wait_for_completion_interruptible_timeout' was interrupted, and if the sleeper is no longer on the list (which it won't be), we wait on the completion - which will ensure that any other cache_revisit_request will have let go of the sleeper. Signed-off-by: NeilBrown <neilb@suse.de> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Pavel Emelyanov authored
The context is already known in all the sock_create callers. Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Pavel Emelyanov authored
Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Acked-by: David S. Miller <davem@davemloft.net> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-