1. 01 Mar, 2012 7 commits
  2. 27 Feb, 2012 4 commits
    • SUNRPC: move waitq from RPC pipe to RPC inode · 591ad7fe
      Stanislav Kinsbursky authored
      Currently, the wait queue used for polling RPC pipe changes from user space
      is part of the RPC pipe data. But the pipe data itself can be released on NFS
      umount prior to the dentry-inode pair connected to it (in case this pair is
      held open by some process).
      This is not a problem for almost all pipe users, because all PipeFS file
      operations check the pipe reference before using it.
      The exception is eventfd. It registers itself via the "poll" file operation
      and thus holds a reference to the pipe wait queue. This leads to oopses when
      the eventfd is destroyed after NFS umount (as rpc_idmapd does), since no pipe
      data is left by that point.
      The solution is to move the wait queue from the pipe data to the internal RPC
      inode data. This looks more logical, because this wait queue is used only by
      user-space processes, which already hold an inode reference.
      
      Note: upcalls have to take pipe->dentry prior to dereferencing the wait queue
      to make sure that the mount point won't disappear from underneath us.
      Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
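
      A minimal user-space model of the idea in this patch (all names are
      hypothetical stand-ins, not the SUNRPC definitions): the wait queue lives in
      the long-lived inode object, so a poller that keeps the inode open never
      touches memory owned by the shorter-lived pipe data.

        /* Illustrative user-space model, not kernel code; all names are made up.
         * The point: the wait queue belongs to the long-lived inode object, so
         * polling stays safe after the short-lived pipe data is released. */
        #include <stdio.h>
        #include <stdlib.h>

        struct pipe_data {                /* released on NFS umount */
                int pipelen;
        };

        struct rpc_inode_model {          /* lives while user space holds the file open */
                struct pipe_data *pipe;   /* may become NULL before the inode dies */
                int waitq;                /* stand-in for wait_queue_head_t */
        };

        /* poll path: waits on the inode's queue, only peeks at the pipe data */
        static int rpc_pipe_poll(struct rpc_inode_model *inode)
        {
                /* in the kernel this is where the wait queue would be registered */
                return inode->pipe ? inode->pipe->pipelen : 0;
        }

        /* umount path: free the pipe data, leave the inode (and waitq) to its users */
        static void rpc_pipe_release(struct rpc_inode_model *inode)
        {
                free(inode->pipe);
                inode->pipe = NULL;
        }

        int main(void)
        {
                struct rpc_inode_model inode = { calloc(1, sizeof(struct pipe_data)), 0 };

                inode.pipe->pipelen = 1;
                printf("pipelen before umount: %d\n", rpc_pipe_poll(&inode));
                rpc_pipe_release(&inode);  /* "NFS umount" */
                printf("pipelen after umount: %d\n", rpc_pipe_poll(&inode));
                return 0;
        }

      Before the change the queue sat in the pipe data itself, so an eventfd poller
      could end up waiting on freed memory after umount; the upcall side, as the
      note above says, still has to take pipe->dentry before waking the queue.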
    • SUNRPC: check RPC inode's pipe reference before dereferencing · 2c9030ee
      Stanislav Kinsbursky authored
      There are two tightly bound objects: the pipe data (created for kernel needs;
      holds a reference to the dentry, which depends on PipeFS mount/umount) and the
      PipeFS dentry/inode pair (created on mount for user-space needs). Each of
      them, independently, may or may not hold a valid reference to the other.
      This means that we have to make sure that the pipe->dentry reference is valid
      on upcalls, and the dentry->pipe reference is valid on downcalls. The latter
      check is absent - my fault.
      IOW, a PipeFS dentry can be held open by some process (rpc.idmapd, for
      example), while its pipe data belongs to an NFS mount that was already
      unmounted, and thus the pipe data has been destroyed.
      To fix this, the pipe reference has to be set to NULL in rpc_unlink() and
      checked in the PipeFS file operations, instead of checking pipe->dentry.
      
      Note: the PipeFS "poll" file operation will be updated in the next patch,
      because its logic is more complicated.
      Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
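
      A compilable user-space sketch of the downcall-side check described here
      (names invented for illustration; this is not the SUNRPC code): the unlink
      path clears the inode's pipe reference, and file operations that find it NULL
      bail out instead of dereferencing freed pipe data.

        /* Illustrative user-space model; names are hypothetical, not SUNRPC API. */
        #include <errno.h>
        #include <stdio.h>
        #include <stdlib.h>

        struct pipe_data { int msgs; };

        struct rpc_inode_model {
                struct pipe_data *pipe;   /* set to NULL by the unlink path */
        };

        /* Downcall path (e.g. a write from rpc.idmapd): the dentry alone cannot
         * be trusted, the pipe data may already be gone after NFS umount. */
        static int pipe_downcall(struct rpc_inode_model *inode, const char *msg)
        {
                if (inode->pipe == NULL)
                        return -EPIPE;    /* mount is gone, refuse the call */
                inode->pipe->msgs++;
                printf("delivered: %s\n", msg);
                return 0;
        }

        /* rpc_unlink()-like path: destroy the pipe data and clear the reference
         * so later file operations see NULL instead of freed memory. */
        static void pipe_unlink(struct rpc_inode_model *inode)
        {
                free(inode->pipe);
                inode->pipe = NULL;
        }

        int main(void)
        {
                struct rpc_inode_model inode = { calloc(1, sizeof(struct pipe_data)) };

                pipe_downcall(&inode, "idmap reply");   /* ok */
                pipe_unlink(&inode);                    /* "NFS umount" */
                if (pipe_downcall(&inode, "late reply") == -EPIPE)
                        printf("downcall rejected after umount\n");
                return 0;
        }

      In the kernel this check would have to be made under the inode lock; the
      sketch leaves locking out for brevity.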
    • NFS: release per-net clients lock before calling PipeFS dentries creation · e9dbca8d
      Stanislav Kinsbursky authored
      v3:
      1) The lookup for a client is performed from the beginning of the list on
      each PipeFS event handling operation.

      Lockdep complains otherwise, because the inode mutex is taken on PipeFS
      dentry creation, which can be called from the mount notification, where this
      per-net client lock is taken while walking the clients list.
      Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
    • SUNRPC: release per-net clients lock before calling PipeFS dentries creation · da3b4622
      Stanislav Kinsbursky authored
      v3:
      1) The lookup for a client is performed from the beginning of the list on
      each PipeFS event handling operation.

      Lockdep complains otherwise, because the inode mutex is taken on PipeFS
      dentry creation, which can be called from the mount notification, where this
      per-net client lock is taken while walking the clients list.
      Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
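
      Both the NFS and SUNRPC variants of this change follow the same pattern:
      drop the per-net clients lock before the (sleeping) PipeFS dentry creation,
      then restart the walk from the head of the list, since the list may have
      changed while the lock was dropped. A small user-space sketch of that
      pattern (invented names, not the SUNRPC code; the real handler would also
      have to pin the client, e.g. with a reference count, before dropping the
      lock, which the sketch omits):

        /* Illustrative user-space model of the locking pattern, not kernel code. */
        #include <pthread.h>
        #include <stdio.h>

        #define NCLIENTS 3

        struct clnt {
                int id;
                int has_dentry;         /* PipeFS dentry created? */
        };

        static struct clnt clients[NCLIENTS] = { {1, 0}, {2, 0}, {3, 0} };
        static pthread_mutex_t clients_lock = PTHREAD_MUTEX_INITIALIZER;

        /* Stand-in for the dentry creation: may sleep (takes the inode mutex),
         * so it must not be called with clients_lock held. */
        static void create_pipefs_dentry(struct clnt *c)
        {
                c->has_dentry = 1;
                printf("created dentry for client %d\n", c->id);
        }

        /* Mount notification handler: find the next client without a dentry,
         * drop the list lock, create the dentry, then restart the walk from
         * the beginning of the list. */
        static void handle_mount_event(void)
        {
                int i;

        again:
                pthread_mutex_lock(&clients_lock);
                for (i = 0; i < NCLIENTS; i++) {
                        if (!clients[i].has_dentry) {
                                pthread_mutex_unlock(&clients_lock);
                                create_pipefs_dentry(&clients[i]);
                                goto again;   /* list may have changed, restart */
                        }
                }
                pthread_mutex_unlock(&clients_lock);
        }

        int main(void)
        {
                handle_mount_event();
                return 0;
        }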
  3. 26 Feb, 2012 1 commit
  4. 19 Feb, 2012 2 commits
  5. 17 Feb, 2012 2 commits
  6. 16 Feb, 2012 4 commits
  7. 15 Feb, 2012 20 commits