1. 01 Dec, 2010 8 commits
    • rbd: replace the rbd sysfs interface · dfc5606d
      Yehuda Sadeh authored
      The new interface creates directories per mapped image
      and under each it creates a subdir per available snapshot.
      This keeps the interface cleaner and within the sysfs
      guidelines. The ABI documentation was updated too.
      Acked-by: Greg Kroah-Hartman <gregkh@suse.de>
      Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
      Signed-off-by: Sage Weil <sage@newdream.net>
      dfc5606d
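      As a rough illustration of the per-image/per-snapshot hierarchy the commit above
      describes (the paths and attribute names here are assumptions for illustration, not
      a quote of the updated ABI documentation), a mapped image with one snapshot might
      look something like:

          /sys/bus/rbd/
              add                  # assumed: write here to map an image
              remove               # assumed: write a device id here to unmap it
              devices/
                  0/               # one directory per mapped image
                      name
                      size
                      current_snap
                      snap_snap1/  # one subdirectory per available snapshot
                          snap_id
                          snap_size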
    • IB/mlx4: Fix memory ordering of VLAN insertion control bits · e27535b9
      Eli Cohen authored
      We must fully update the control segment before marking it as valid,
      so that hardware doesn't start executing it before we're ready.
      Signed-off-by: Eli Cohen <eli@mellanox.co.il>
      
      [ Move VLAN control bit setting to before wmb().  - Roland ]
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
      e27535b9
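      A minimal sketch of the ordering pattern the commit above describes, assuming
      kernel-style types and the wmb() write barrier; the structure and function names
      are illustrative, not the actual mlx4 send-path code:

          struct hw_ctrl_seg {
              u32 vlan_ctrl;      /* VLAN insertion control bits */
              u32 flags;
              u32 owner_opcode;   /* validity/ownership bit lives here */
          };

          static void post_ctrl_seg(struct hw_ctrl_seg *ctrl, u32 vlan_bits,
                                    u32 flags, u32 owner_opcode)
          {
              /* Fill in every field the hardware will read ... */
              ctrl->vlan_ctrl = vlan_bits;  /* set before the barrier, per the fix */
              ctrl->flags = flags;

              /*
               * ... then make sure those stores are visible before the
               * descriptor is marked valid ...
               */
              wmb();

              /* ... and only now hand the descriptor over to the hardware. */
              ctrl->owner_opcode = owner_opcode;
          }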
    • MAINTAINERS: Update NetEffect entry · e3d33cb1
      Chien Tung authored
      Correct web link as www.neteffect.com is no longer valid.  Remove
      Chien Tung as maintainer.  I am moving on to other responsibilities at
      Intel.  Thanks for all the fish.
      Signed-off-by: Chien Tung <chien.tin.tung@intel.com>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
      e3d33cb1
    • xfs: only run xfs_error_test if error injection is active · c76febef
      Dave Chinner authored
      Recent tests writing lots of small files showed the flusher thread
      being CPU bound and taking a long time to do allocations on a debug
      kernel. perf showed this as the prime reason:
      
                   samples  pcnt function                    DSO
                   _______ _____ ___________________________ _________________
      
                 224648.00 36.8% xfs_error_test              [kernel.kallsyms]
                  86045.00 14.1% xfs_btree_check_sblock      [kernel.kallsyms]
                  39778.00  6.5% prandom32                   [kernel.kallsyms]
                  37436.00  6.1% xfs_btree_increment         [kernel.kallsyms]
                  29278.00  4.8% xfs_btree_get_rec           [kernel.kallsyms]
                  27717.00  4.5% random32                    [kernel.kallsyms]
      
      Walking the btree blocks during allocation and checking them requires
      a call to xfs_error_test() for every block (a cache hit, so no I/O),
      which then does a random32() call as its first operation.  IOWs, ~50%
      of the CPU is being consumed just testing whether we need to inject an
      error, even though error injection is not active.
      
      Kill this overhead when error injection is not active by adding a
      global counter of active error traps and only calling into
      xfs_error_test when fault injection is active.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      c76febef
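      A hedged sketch of the gating idea described in the commit above, written as
      self-contained C with illustrative names (error_trap_count, expensive_error_test);
      it is not the actual XFS code:

          #include <stdatomic.h>
          #include <stdbool.h>

          static atomic_int error_trap_count;   /* number of armed error traps */

          /* Stands in for xfs_error_test(): random number draw, table walk, etc. */
          static bool expensive_error_test(int tag)
          {
              (void)tag;
              return false;
          }

          static inline bool maybe_inject_error(int tag)
          {
              /*
               * With no traps armed this is a single load and branch, so hot
               * paths such as btree walking no longer pay for a random32()
               * call on every block.
               */
              if (atomic_load(&error_trap_count) == 0)
                  return false;
              return expensive_error_test(tag);
          }

          /* Arming a trap opens the gate; disarming closes it again. */
          static void arm_error_trap(void)    { atomic_fetch_add(&error_trap_count, 1); }
          static void disarm_error_trap(void) { atomic_fetch_sub(&error_trap_count, 1); }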
    • xfs: avoid moving stale inodes in the AIL · de25c181
      Dave Chinner authored
      When an inode has been marked stale because the cluster is being
      freed, we don't want to (re-)insert this inode into the AIL. There
      is a race condition where the cluster buffer may be unpinned before
      the inode is inserted into the AIL during transaction committed
      processing. If the buffer is unpinned before the inode item has been
      committed and inserted, then it is possible for the buffer to be
      released and hence process the stale inode callbacks before the inode
      is inserted into the AIL.
      
      In this case, we then insert a clean, stale inode into the AIL which
      will never get removed by an IO completion. It will, however, get
      reclaimed and that triggers an assert in xfs_inode_free()
      complaining about freeing an inode still in the AIL.
      
      This race can be avoided by not moving stale inodes forward in the AIL
      during transaction commit completion processing. This closes the
      race condition by ensuring we never insert clean stale inodes into
      the AIL. It is safe to do this because a dirty stale inode, by
      definition, must already be in the AIL.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      de25c181
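      A minimal sketch, with invented types and helpers (log_item, ITEM_STALE,
      ail_insert), of the "never move a stale item forward in the AIL" rule the commit
      above describes:

          #define ITEM_STALE (1u << 0)

          struct log_item {
              unsigned int       flags;
              unsigned long long lsn;   /* position in the AIL, if any */
          };

          /* Placeholder for the real AIL insertion/move-forward operation. */
          static void ail_insert(struct log_item *item, unsigned long long commit_lsn)
          {
              item->lsn = commit_lsn;
          }

          static void item_committed(struct log_item *item, unsigned long long commit_lsn)
          {
              /*
               * A dirty stale inode is, by definition, already in the AIL, and
               * a clean stale inode must never be inserted; skipping both
               * closes the race with the cluster buffer being unpinned early.
               */
              if (item->flags & ITEM_STALE)
                  return;

              ail_insert(item, commit_lsn);
          }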
    • xfs: delayed alloc blocks beyond EOF are valid after writeback · 309c8480
      Dave Chinner authored
      There is an assumption in parts of XFS that flushing a dirty file will
      make all the delayed allocation blocks disappear from an inode. That
      is, that after calling xfs_flush_pages(), ip->i_delayed_blks will be
      zero.
      
      This is an invalid assumption, as we may have speculative
      preallocation beyond EOF, and those blocks are recorded in
      ip->i_delayed_blks. A flush of the dirty pages of an inode will not
      change the state of these blocks beyond EOF, so a non-zero
      delalloc block count after a flush is valid.
      
      The bmap code has an invalid ASSERT() that needs to be removed, and
      the swapext code has a bug in that while it swaps the data forks
      around, it fails to swap the i_delayed_blks counter associated with
      the fork and hence can get the block accounting wrong.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      309c8480
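      A hedged sketch of the swapext part of the fix above: when two inodes trade data
      forks, the delayed-allocation block counter has to travel with them. The struct
      layout and names here are illustrative, not the real xfs_inode:

          struct fork_stub {
              void *extents;                   /* stands in for the real fork contents */
          };

          struct inode_stub {
              struct fork_stub   data_fork;
              unsigned long long delayed_blks; /* delalloc blocks, incl. beyond EOF */
          };

          static void swap_data_forks(struct inode_stub *a, struct inode_stub *b)
          {
              struct fork_stub   tmp_fork = a->data_fork;
              unsigned long long tmp_blks = a->delayed_blks;

              a->data_fork = b->data_fork;
              b->data_fork = tmp_fork;

              /* Swapping the forks without this leaves the block accounting wrong. */
              a->delayed_blks = b->delayed_blks;
              b->delayed_blks = tmp_blks;
          }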
    • xfs: push stale, pinned buffers on trylock failures · 90810b9e
      Dave Chinner authored
      As reported by Nick Piggin, XFS is suffering from long pauses under
      highly concurrent workloads when hosted on ramdisks. The problem is
      that an inode buffer is stuck in the pinned state in memory and as a
      result either the inode buffer or one of the inodes within the
      buffer is stopping the tail of the log from being moved forward.
      
      The system remains in this state until a periodic log force issued
      by xfssyncd causes the buffer to be unpinned. The main problem is
      that these are stale buffers, and are hence held locked until the
      transaction/checkpoint that marked them stale has been committed to
      disk. When the filesystem gets into this state, only the xfssyncd
      can cause the async transactions to be committed to disk and hence
      unpin the inode buffer.
      
      This problem was encountered when scaling the busy extent list, but
      only the blocking lock interface was fixed to solve the problem.
      Extend the same fix to the buffer trylock operations - if we fail to
      lock a pinned, stale buffer, then force the log immediately so that
      when the next attempt to lock it comes around, it will have been
      unpinned.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      90810b9e
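      A minimal sketch of the trylock-path change described above, with placeholder
      buffer and log helpers (buf_trylock, force_log_async, and the buf fields) standing
      in for the real XFS interfaces:

          #include <stdbool.h>

          struct buf {
              bool locked;
              bool pinned;  /* held in memory until its checkpoint hits the disk */
              bool stale;   /* marked stale by the transaction that freed it */
          };

          static bool buf_trylock(struct buf *bp)
          {
              if (bp->locked)
                  return false;
              bp->locked = true;
              return true;
          }

          /* Placeholder for an asynchronous log force. */
          static void force_log_async(void) { }

          static bool get_buf_nonblocking(struct buf *bp)
          {
              if (buf_trylock(bp))
                  return true;

              /*
               * A pinned, stale buffer stays locked until its transaction or
               * checkpoint reaches the disk.  Kick the log now so the buffer
               * has a chance to be unpinned by the time the next trylock
               * attempt comes around, instead of waiting for xfssyncd's
               * periodic log force.
               */
              if (bp->pinned && bp->stale)
                  force_log_async();

              return false;
          }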
    • xfs: fix failed write truncation handling. · c726de44
      Dave Chinner authored
      Since the move to the new truncate sequence we call xfs_setattr to
      truncate down excessively instantiated blocks.  As shown by the testcase
      in kernel.org BZ #22452, that doesn't work too well.  Due to the confusion
      between the internal inode size and the VFS inode i_size, it zeroes data
      that it shouldn't.
      
      But full-blown truncate seems like overkill here.  We only instantiate
      delayed allocations in the write path, and given that we never released
      the iolock we can't have converted them to real allocations yet either.
      
      The only nasty case is pre-existing preallocation which we need to skip.
      We already do this for page discard during writeback, so make the delayed
      allocation block punching a generic function and call it from the failed
      write path as well as xfs_aops_discard_page. The callers are
      responsible for ensuring that partial blocks are not truncated away,
      and that they hold the ilock.
      
      Based on a fix originally from Christoph Hellwig. This version used
      filesystem blocks as the range unit.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      c726de44
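      A rough sketch of how the shared helper described above might be factored; the
      name punch_delalloc_range() and its callers are illustrative, not the exact XFS
      signatures. Per the commit message, the range is expressed in filesystem blocks
      and the caller must hold the required inode locks and avoid punching away partial
      blocks:

          struct inode_stub;   /* stands in for the real inode type */

          /* One generic helper that releases delayed-allocation blocks in a range. */
          static void punch_delalloc_range(struct inode_stub *ip,
                                           unsigned long long start_fsb,
                                           unsigned long long count_fsb)
          {
              /* Walk the range, removing delalloc extents and skipping
               * pre-existing preallocation (details elided in this sketch). */
              (void)ip;
              (void)start_fsb;
              (void)count_fsb;
          }

          /* Called from the failed-write path with the inode locks still held ... */
          static void write_failed(struct inode_stub *ip,
                                   unsigned long long start_fsb,
                                   unsigned long long count_fsb)
          {
              punch_delalloc_range(ip, start_fsb, count_fsb);
          }

          /* ... and from page discard during writeback. */
          static void discard_page_blocks(struct inode_stub *ip,
                                          unsigned long long page_start_fsb,
                                          unsigned long long blocks_per_page)
          {
              punch_delalloc_range(ip, page_start_fsb, blocks_per_page);
          }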