1. 25 Mar, 2011 5 commits
    • fs: move i_sb_list out from under inode_lock · 55fa6091
      Dave Chinner authored
      Protect the per-sb inode list with a new global lock,
      inode_sb_list_lock, and use it to protect the list manipulations
      and traversals. This lock replaces the inode_lock here because the
      inodes on the list can be validity-checked while holding the
      inode->i_lock, so the inode_lock is no longer needed to protect
      the list.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
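
      A rough sketch of the traversal pattern this enables, modelled on
      callers such as drop_pagecache_sb(): the list lock pins the list,
      the per-inode i_lock validates i_state before __iget(), and the
      held reference keeps the cursor inode on the list while the list
      lock is dropped. walk_sb_inodes() and the "work" step are
      hypothetical placeholders, not code from the patch.

      /* Sketch only: assumes the fs/inode.c context (internal.h) where
       * inode_sb_list_lock and __iget() are visible. */
      static void walk_sb_inodes(struct super_block *sb)
      {
              struct inode *inode, *toput_inode = NULL;

              spin_lock(&inode_sb_list_lock);
              list_for_each_entry(inode, &sb->s_inodes, i_sb_list) {
                      spin_lock(&inode->i_lock);
                      if (inode->i_state & (I_FREEING | I_WILL_FREE | I_NEW)) {
                              /* being torn down or not set up yet: skip */
                              spin_unlock(&inode->i_lock);
                              continue;
                      }
                      __iget(inode);          /* i_state checked under i_lock */
                      spin_unlock(&inode->i_lock);
                      spin_unlock(&inode_sb_list_lock);

                      /* ... do per-inode work with the list lock dropped ... */

                      iput(toput_inode);      /* drop the previous ref unlocked */
                      toput_inode = inode;    /* our ref keeps *inode on the list */
                      spin_lock(&inode_sb_list_lock);
              }
              spin_unlock(&inode_sb_list_lock);
              iput(toput_inode);
      }
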
    • fs: remove inode_lock from iput_final and prune_icache · f283c86a
      Dave Chinner authored
      Now that inode state changes are protected by the inode->i_lock and
      the inode LRU manipulations by the inode_lru_lock, we can remove the
      inode_lock from prune_icache and the initial part of iput_final().
      
      Instead of using the inode_lock to protect the inode during
      iput_final, use the inode->i_lock. This protects the inode
      against new references being taken while we change the inode
      state to I_FREEING, and prevents prune_icache from grabbing the
      inode while we are manipulating it. Hence we no longer need the
      inode_lock in iput_final prior to setting I_FREEING on the inode.
      
      For prune_icache, we no longer need the inode_lock to protect the
      LRU list, and the inodes themselves are protected against freeing
      races by the inode->i_lock. Hence we can lift the inode_lock from
      prune_icache as well.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
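
      A condensed sketch of the resulting iput() path, assuming the
      fs/inode.c context of this series: the final reference is dropped
      and iput_final() entered under inode->i_lock rather than the
      global inode_lock. The ->drop_inode hook and the unmount-time
      write_inode_now()/I_WILL_FREE branch are elided, so this is not
      the verbatim patched code.

      void iput(struct inode *inode)
      {
              if (!inode)
                      return;
              /* take i_lock only when this is the last reference */
              if (atomic_dec_and_lock(&inode->i_count, &inode->i_lock))
                      iput_final(inode);      /* called with i_lock held */
      }

      /* called with inode->i_lock held */
      static void iput_final(struct inode *inode)
      {
              int drop = generic_drop_inode(inode);

              if (!drop && (inode->i_sb->s_flags & MS_ACTIVE)) {
                      /* keep the unused inode cached: park it on the LRU */
                      inode->i_state |= I_REFERENCED;
                      inode_lru_list_add(inode);
                      spin_unlock(&inode->i_lock);
                      return;
              }

              /* I_FREEING blocks new references and fends off prune_icache */
              inode->i_state |= I_FREEING;
              inode_lru_list_del(inode);
              spin_unlock(&inode->i_lock);

              evict(inode);
      }
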
    • fs: Lock the inode LRU list separately · 02afc410
      Dave Chinner authored
      Introduce the inode_lru_lock to protect the inode_lru list. This
      lock is nested inside the inode->i_lock to allow the inode to be
      added to the LRU list in iput_final without needing to deal with
      lock inversions. This keeps iput_final() clean and neat.
      
      Further, when marking the inode I_FREEING and removing it from
      the LRU, do the LRU list manipulation within the inode->i_lock to
      keep it consistent with iput_final. This also means that most of
      the open-coded LRU list removal and unused-inode accounting can
      now use the inode_lru_list_del() wrapper, which cleans the code
      up further.
      
      However, this locking change means that the LRU traversal in
      prune_icache() inverts the lock ordering and needs to use trylock
      semantics on the inode->i_lock to avoid deadlocking. In these
      cases, if we fail to lock the inode, we move it to the back of
      the LRU to prevent spinning on it.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
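
      The trylock-and-rotate pattern described above, sketched against
      the fs/inode.c globals of this series (inode_lru, inode_lru_lock,
      dispose_list()); page-cache invalidation and the unused-inode
      accounting are elided, so this is an illustration rather than the
      exact patched prune_icache().

      static void prune_icache(int nr_to_scan)
      {
              LIST_HEAD(freeable);
              int nr_scanned;

              spin_lock(&inode_lru_lock);
              for (nr_scanned = 0; nr_scanned < nr_to_scan; nr_scanned++) {
                      struct inode *inode;

                      if (list_empty(&inode_lru))
                              break;

                      inode = list_entry(inode_lru.prev, struct inode, i_lru);

                      /*
                       * This walk inverts the i_lock -> inode_lru_lock order
                       * used everywhere else, so only trylock the inode. On
                       * failure, rotate it away so we don't spin on it.
                       */
                      if (!spin_trylock(&inode->i_lock)) {
                              list_move(&inode->i_lru, &inode_lru);
                              continue;
                      }

                      if (atomic_read(&inode->i_count) ||
                          (inode->i_state & ~I_REFERENCED)) {
                              /* in use or in transition: drop it from the LRU */
                              list_del_init(&inode->i_lru);
                              spin_unlock(&inode->i_lock);
                              continue;
                      }

                      /* unused and clean: mark I_FREEING and batch it up */
                      inode->i_state |= I_FREEING;
                      list_move(&inode->i_lru, &freeable);
                      spin_unlock(&inode->i_lock);
              }
              spin_unlock(&inode_lru_lock);

              dispose_list(&freeable);
      }
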
    • fs: factor inode disposal · b2b2af8e
      Dave Chinner authored
      We have a couple of places that dispose of inodes. Factor the
      disposal into evict() to isolate this code and make it simpler to
      peel the inode_lock away from it.
      
      While doing this, change the logic flow in iput_final() to separate
      the different cases that need to be handled to make the transitions
      the inode goes through more obvious.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
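
      A simplified sketch of the factored evict(), assuming the helper
      names of the post-series fs/inode.c (inode_wb_list_del(),
      inode_sb_list_del()); block/character device forgetting and
      several sanity checks are elided, so treat this as the shape of
      the function rather than its exact body.

      static void evict(struct inode *inode)
      {
              const struct super_operations *op = inode->i_sb->s_op;

              BUG_ON(!(inode->i_state & I_FREEING));
              BUG_ON(!list_empty(&inode->i_lru));

              /* drop the inode from the writeback and per-sb lists */
              inode_wb_list_del(inode);
              inode_sb_list_del(inode);

              if (op->evict_inode) {
                      op->evict_inode(inode);
              } else {
                      if (inode->i_data.nrpages)
                              truncate_inode_pages(&inode->i_data, 0);
                      end_writeback(inode);   /* marks the inode I_CLEAR */
              }

              remove_inode_hash(inode);

              /* anyone sleeping on I_NEW must be woken before the free */
              wake_up_bit(&inode->i_state, __I_NEW);

              destroy_inode(inode);
      }
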
    • fs: protect inode->i_state with inode->i_lock · 250df6ed
      Dave Chinner authored
      Protect inode state transitions and validity checks with the
      inode->i_lock. This enables us to make inode state transitions
      independently of the inode_lock and is the first step to peeling
      away the inode_lock from the code.
      
      This requires that __iget() is done atomically with i_state checks
      during list traversals so that we don't race with another thread
      marking the inode I_FREEING between the state check and grabbing the
      reference.
      
      Also remove the unlock_new_inode() memory barrier optimisation
      that was required to avoid taking the inode_lock when clearing
      I_NEW. Simplify the code by taking the inode->i_lock around the
      state change and wakeup. Because the wakeup is no longer tricky,
      remove the wake_up_inode() function and open code the wakeup
      where necessary.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
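
      A minimal sketch of the simplified I_NEW clearing this enables,
      following the shape of unlock_new_inode() in this series (lockdep
      annotations elided): the i_lock replaces both the old memory
      barrier dance and the separate wake_up_inode() helper.

      void unlock_new_inode(struct inode *inode)
      {
              spin_lock(&inode->i_lock);
              WARN_ON(!(inode->i_state & I_NEW));
              inode->i_state &= ~I_NEW;               /* state change under i_lock */
              wake_up_bit(&inode->i_state, __I_NEW);  /* open-coded wakeup */
              spin_unlock(&inode->i_lock);
      }
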
  2. 24 Mar, 2011 35 commits