    This makes several changes to the gss upcalls · 620563dd
    J. Bruce Fields authored
      1. Currently rpc_queue_upcall returns -EPIPE if we make an upcall on a pipe
         that userland hasn't opened yet, and we time out and retry later.  This
         can lead to an unnecessary delay on mount, because rpc.gssd is racing
         to open the newly created pipe while the nfs code is making the first
         upcall.  If rpc.gssd loses, then we end up with a delay equal to the
         length of the timeout.  So instead we allow rpc_queue_upcall to queue
         upcalls on pipes that aren't open yet.  To deal with other upcall
         users (e.g., the name<->uid mapping upcall code) who do want to know
         when the pipe isn't open (in the name<->uid case, you can choose just
         to map everyone to nobody if you don't want to run idmapd), we add a
         flag parameter to rpc_mkpipe that lets us choose the kind of behavior
         we want at the time we create the pipe; see the sketch after this item.
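
         A minimal sketch of the flag-controlled queueing, assuming the flag
         is named RPC_PIPE_WAIT_FOR_OPEN and guessing at the rpc_inode field
         names (nreaders, flags, pipe, waitq); this illustrates the idea, not
         the verbatim patch:

             #define RPC_PIPE_WAIT_FOR_OPEN 1   /* assumed flag name */

             int
             rpc_queue_upcall(struct inode *inode, struct rpc_pipe_msg *msg)
             {
                     struct rpc_inode *rpci = RPC_I(inode);
                     int res = -EPIPE;

                     down(&inode->i_sem);
                     if (rpci->nreaders ||
                         (rpci->flags & RPC_PIPE_WAIT_FOR_OPEN)) {
                             /*
                              * Either a reader already has the pipe open, or
                              * its creator asked us to queue upcalls anyway
                              * (the gssd case): the message is delivered
                              * whenever the daemon opens the pipe and reads.
                              */
                             list_add_tail(&msg->list, &rpci->pipe);
                             res = 0;
                     }
                     /* Otherwise (the idmapd case), fail fast with -EPIPE. */
                     up(&inode->i_sem);
                     wake_up(&rpci->waitq);
                     return res;
             }

         A gss caller would then pass the flag at creation time, e.g.
         rpc_mkpipe(path, private, &gss_upcall_ops, RPC_PIPE_WAIT_FOR_OPEN)
         (signature assumed), while the idmap pipe would pass 0 and keep the
         old -EPIPE behavior.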
    
      2. Currently gss_msgs are destroyed the moment they have been completely
         read (by the call to destroy_msg in rpc_pipe_read).  This means an
         rpc_wake_up is done then and can't be done later (because the gss_msg
         is gone, along with gss_msg->waitq).  It will typically be some time
         yet before the downcall comes, so the woken-up processes will have to
         wait and retry later; as above, this leads to unnecessary delays.
         Also, since the gss_msg is deleted from the list of gss_msgs, we
         forget that an upcall to get creds for the user in question is still
         pending, so multiple unnecessary upcalls will be made.  This patch
         changes gss_pipe_upcall to never update msg->copied, so that
         rpc_pipe_read never destroys the message.  Instead, we wait until a
         downcall arrives to remove the upcall, using the new function
         __rpc_purge_one_upcall, which searches the list of pending
         rpc_pipe_msgs on the inode as well as checking the current upcall, to
         handle the case where rpc.gssd preemptively creates a context for a
         user who already has a pending upcall.  Note also that this means
         repeated reads by rpc.gssd will return the same data until rpc.gssd
         does a downcall.  This also gives us a better chance of recovering
         from rpc.gssd crashes.  Both mechanisms are sketched below.
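
         A sketch of both mechanisms; apart from gss_pipe_upcall,
         __rpc_purge_one_upcall, msg->copied, and msg->len, the details
         (e.g. using filp->private_data for the message currently being
         read) are illustrative assumptions:

             /* Copy the upcall out to rpc.gssd without consuming it. */
             static ssize_t
             gss_pipe_upcall(struct file *filp, struct rpc_pipe_msg *msg,
                             char __user *dst, size_t buflen)
             {
                     char *data = (char *)msg->data;
                     ssize_t mlen = msg->len;

                     if (mlen > buflen)
                             mlen = buflen;
                     if (copy_to_user(dst, data, mlen))
                             return -EFAULT;
                     /*
                      * Deliberately do NOT advance msg->copied:
                      * rpc_pipe_read only calls destroy_msg once
                      * msg->copied == msg->len, so the message stays
                      * queued (and repeated reads return the same data)
                      * until a downcall purges it below.
                      */
                     return mlen;
             }

             /* On downcall, drop the matching upcall wherever it is. */
             static void
             __rpc_purge_one_upcall(struct file *filp,
                                    struct rpc_pipe_msg *target)
             {
                     struct rpc_inode *rpci =
                             RPC_I(filp->f_dentry->d_inode);
                     struct rpc_pipe_msg *msg;

                     /* The message gssd is currently reading ... */
                     if (filp->private_data == target) {
                             filp->private_data = NULL;
                             return;
                     }
                     /* ... or one still waiting in the pending list,
                      * covering a preemptive downcall for a user whose
                      * upcall is still queued. */
                     list_for_each_entry(msg, &rpci->pipe, list) {
                             if (msg == target) {
                                     list_del(&msg->list);
                                     return;
                             }
                     }
             }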