1. 08 Dec, 2010 1 commit
    • IB/uverbs: Handle large number of entries in poll CQ · 7182afea
      Dan Carpenter authored
      In ib_uverbs_poll_cq() code there is a potential integer overflow if
      userspace passes in a large cmd.ne.  The calls to kmalloc() would
      allocate smaller buffers than intended, leading to memory corruption.
      There is also an information leak if not all of resp was used.
      Unprivileged userspace may call this function, although only if an
      RDMA device that uses this function is present.
      
      Fix this by copying CQ entries one at a time, which avoids the
      allocation entirely, and also by moving this copying into a function
      that makes sure to initialize all memory copied to userspace.
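      The overflow being fixed can be illustrated in plain userspace C (struct
      and field names here are illustrative stand-ins, not the real uverbs
      types): the 32-bit size computation wraps for a large cmd.ne, so the
      kmalloc() would get a far smaller size than the copy loop assumes.

      ```c
      #include <stdint.h>
      #include <stdio.h>

      /* Hypothetical completion-entry layout, 16 bytes */
      struct wc { uint64_t wr_id; uint32_t status; uint32_t opcode; };

      int main(void)
      {
          /* attacker-controlled entry count from userspace */
          uint32_t ne = 0x20000000;
          /* header + ne entries, computed in 32-bit arithmetic: wraps */
          uint32_t size = 16 + ne * (uint32_t)sizeof(struct wc);
          printf("requested entries: %u, computed size: %u\n", ne, size);
          /* kmalloc(size) would return a tiny buffer while the poll loop
           * still writes ne entries -> memory corruption.  Copying one
           * entry at a time, from a fully-initialized stack struct,
           * removes both the overflow and the info leak. */
          return 0;
      }
      ```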
      
      Special thanks to Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
      for his help and advice.
      
      Cc: <stable@kernel.org>
      Signed-off-by: Dan Carpenter <error27@gmail.com>
      
      [ Monkey around with things a bit to avoid bad code generation by gcc
        when designated initializers are used.  - Roland ]
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
  2. 03 Dec, 2010 1 commit
  3. 02 Dec, 2010 33 commits
  4. 01 Dec, 2010 5 commits
    • rbd: replace the rbd sysfs interface · dfc5606d
      Yehuda Sadeh authored
      The new interface creates directories per mapped image
      and under each it creates a subdir per available snapshot.
      This allows keeping a cleaner interface within the sysfs
      guidelines. The ABI documentation was updated too.
      Acked-by: Greg Kroah-Hartman <gregkh@suse.de>
      Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
      Signed-off-by: Sage Weil <sage@newdream.net>
    • IB/mlx4: Fix memory ordering of VLAN insertion control bits · e27535b9
      Eli Cohen authored
      We must fully update the control segment before marking it as valid,
      so that hardware doesn't start executing it before we're ready.
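      A minimal userspace analogue of this ordering requirement, using a C11
      release store where the kernel code uses wmb(); the struct and field
      names are illustrative, not the real mlx4 descriptor layout:

      ```c
      #include <stdatomic.h>
      #include <stdint.h>
      #include <stdio.h>
      #include <string.h>

      /* Hypothetical send descriptor: "hardware" polls owner and consumes
       * the other fields once it flips to 1. */
      struct ctrl_seg {
          uint32_t vlan_tag;
          uint32_t ins_vlan;      /* VLAN insertion control bits */
          _Atomic uint32_t owner; /* validity/ownership bit */
      };

      static void post_send(struct ctrl_seg *c, uint32_t vlan)
      {
          c->vlan_tag = vlan;
          c->ins_vlan = 1;    /* must be written BEFORE the barrier */
          /* The release store plays the role of wmb() + setting the valid
           * bit: no earlier store may be reordered past it, so hardware
           * can never observe owner==1 with stale control bits. */
          atomic_store_explicit(&c->owner, 1, memory_order_release);
      }

      int main(void)
      {
          struct ctrl_seg c;
          memset(&c, 0, sizeof c);
          post_send(&c, 100);
          printf("owner=%u ins_vlan=%u\n",
                 (unsigned)atomic_load(&c.owner), (unsigned)c.ins_vlan);
          return 0;
      }
      ```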
      Signed-off-by: Eli Cohen <eli@mellanox.co.il>
      
      [ Move VLAN control bit setting to before wmb().  - Roland ]
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
    • MAINTAINERS: Update NetEffect entry · e3d33cb1
      Chien Tung authored
      Correct web link as www.neteffect.com is no longer valid.  Remove
      Chien Tung as maintainer.  I am moving on to other responsibilities at
      Intel.  Thanks for all the fish.
      Signed-off-by: Chien Tung <chien.tin.tung@intel.com>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
    • xfs: only run xfs_error_test if error injection is active · c76febef
      Dave Chinner authored
      Recent tests writing lots of small files showed the flusher thread
      being CPU bound and taking a long time to do allocations on a debug
      kernel. perf showed this as the prime reason:
      
                   samples  pcnt function                    DSO
                   _______ _____ ___________________________ _________________
      
                 224648.00 36.8% xfs_error_test              [kernel.kallsyms]
                  86045.00 14.1% xfs_btree_check_sblock      [kernel.kallsyms]
                  39778.00  6.5% prandom32                   [kernel.kallsyms]
                  37436.00  6.1% xfs_btree_increment         [kernel.kallsyms]
                  29278.00  4.8% xfs_btree_get_rec           [kernel.kallsyms]
                  27717.00  4.5% random32                    [kernel.kallsyms]
      
      Walking and checking btree blocks during allocation requires a call
      to xfs_error_test() for each block (a cache hit, so no I/O), which
      then does a random32() call as its first operation.  IOWs, ~50% of the
      CPU is being consumed just testing whether we need to inject an
      error, even though error injection is not active.
      
      Kill this overhead when error injection is not active by adding a
      global counter of active error traps and only calling into
      xfs_error_test when fault injection is active.
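      The gating idea can be sketched in plain C (names here are
      hypothetical; the real patch keeps a global count of armed error
      traps inside XFS):

      ```c
      #include <stdio.h>
      #include <stdlib.h>

      /* Global count of active error traps; incremented when a trap is
       * armed, decremented when it is cleared. */
      static int xfs_error_test_active = 0;

      /* Stand-in for the random32() + trap-table walk the real
       * xfs_error_test() performs. */
      static int expensive_error_test(void)
      {
          return (rand() % 100) == 0;
      }

      /* Fast-path gate: one load in the common case, so no random number
       * is generated when injection is inactive. */
      static int xfs_error_test_gate(void)
      {
          if (!xfs_error_test_active)
              return 0;
          return expensive_error_test();
      }

      int main(void)
      {
          int fired = 0;
          for (int i = 0; i < 1000000; i++)
              fired += xfs_error_test_gate();
          printf("injection inactive, errors fired: %d\n", fired);
          return 0;
      }
      ```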
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
    • xfs: avoid moving stale inodes in the AIL · de25c181
      Dave Chinner authored
      When an inode has been marked stale because the cluster is being
      freed, we don't want to (re-)insert this inode into the AIL. There
      is a race condition where the cluster buffer may be unpinned before
      the inode is inserted into the AIL during transaction committed
      processing. If the buffer is unpinned before the inode item has been
      committed and inserted, then it is possible for the buffer to be
      released, and hence the stale inode callbacks processed, before the inode
      is inserted into the AIL.
      
      In this case, we then insert a clean, stale inode into the AIL which
      will never get removed by an IO completion. It will, however, get
      reclaimed and that triggers an assert in xfs_inode_free()
      complaining about freeing an inode still in the AIL.
      
      This race can be avoided by not moving stale inodes forward in the AIL
      during transaction commit completion processing. This closes the
      race condition by ensuring we never insert clean stale inodes into
      the AIL. It is safe to do this because a dirty stale inode, by
      definition, must already be in the AIL.
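      A toy model of the commit-completion decision (illustrative types,
      not the real xfs_log_item code): items marked stale are simply never
      (re-)inserted into the AIL.

      ```c
      #include <stdbool.h>
      #include <stdio.h>

      /* Hypothetical log item: only the two flags that matter here. */
      struct log_item {
          bool stale;
          bool in_ail;
      };

      /* Transaction-commit-completion processing: skip stale items, so a
       * clean stale inode can never enter the AIL.  A dirty stale inode,
       * by definition, is already in the AIL and is left where it is. */
      static void item_committed(struct log_item *ip)
      {
          if (ip->stale)
              return;
          ip->in_ail = true;  /* normal path: insert/move forward in AIL */
      }

      int main(void)
      {
          struct log_item stale_ip = { .stale = true,  .in_ail = false };
          struct log_item live_ip  = { .stale = false, .in_ail = false };
          item_committed(&stale_ip);
          item_committed(&live_ip);
          printf("stale in_ail=%d live in_ail=%d\n",
                 stale_ip.in_ail, live_ip.in_ail);
          return 0;
      }
      ```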
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>