1. 10 Feb, 2010 5 commits
  2. 09 Feb, 2010 11 commits
  3. 04 Feb, 2010 1 commit
  4. 03 Feb, 2010 13 commits
  5. 02 Feb, 2010 10 commits
    • Merge branch 'for-linus' of git://git.kernel.dk/linux-2.6-block · 1a45dcfe
      Linus Torvalds authored
      * 'for-linus' of git://git.kernel.dk/linux-2.6-block:
        cfq-iosched: Do not idle on async queues
        blk-cgroup: Fix potential deadlock in blk-cgroup
        block: fix bugs in bio-integrity mempool usage
        block: fix bio_add_page for non trivial merge_bvec_fn case
        drbd: null dereference bug
        drbd: fix max_segment_size initialization
    • mm: purge fragmented percpu vmap blocks · 02b709df
      Nick Piggin authored
      Improve handling of fragmented per-CPU vmaps.  Previously, we did not
      free up a per-CPU map until all of its addresses had been used and
      freed, so fragmented blocks could fill up vmalloc space even if they
      actually had no active vmap regions within them.
      
      Add logic to purge these blocks on all CPUs when allocation of a new vm
      area fails, and also to trim such blocks on the current CPU when we hit
      them in the allocation path (so as to avoid a large build-up of them).
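
      As a rough illustration of the idea, here is a minimal userspace sketch
      (hypothetical types and names, not the actual mm/vmalloc.c code): a
      block is fully fragmented when its free and dirty space together cover
      the whole block, i.e. it has no live mappings left, so it can be
      unlinked from the per-CPU list and released.

        /* Sketch only: models a per-CPU vmap block list, not kernel code. */
        #include <stdio.h>

        #define BLOCK_PAGES 16

        struct vmap_block_sketch {
                unsigned int free;              /* pages never handed out     */
                unsigned int dirty;             /* pages handed out and freed */
                struct vmap_block_sketch *next; /* per-CPU block list link    */
        };

        /*
         * Unlink every block that is fully fragmented (free + dirty covers
         * the whole block); the real code would also release its metadata.
         */
        static void purge_fragmented_blocks_sketch(struct vmap_block_sketch **head)
        {
                struct vmap_block_sketch **link = head;

                while (*link) {
                        if ((*link)->free + (*link)->dirty == BLOCK_PAGES)
                                *link = (*link)->next;  /* unlink; would free */
                        else
                                link = &(*link)->next;
                }
        }

        int main(void)
        {
                struct vmap_block_sketch live = { .free = 2, .dirty = 10 }; /* 4 pages mapped */
                struct vmap_block_sketch frag = { .free = 8, .dirty = 8, .next = &live };
                struct vmap_block_sketch *head = &frag;

                purge_fragmented_blocks_sketch(&head);
                printf("fragmented block purged: %s\n", head == &live ? "yes" : "no");
                return 0;
        }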
      
      Christoph reported some vmap allocation failures when using the per-CPU
      vmap APIs in XFS, which can no longer be reproduced after this patch and
      the previous bug fix.
      
      Cc: linux-mm@kvack.org
      Cc: stable@kernel.org
      Tested-by: Christoph Hellwig <hch@infradead.org>
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      --
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: percpu-vmap fix RCU list walking · de560423
      Nick Piggin authored
      RCU list walking of the per-cpu vmap cache was broken.  It did not use
      RCU primitives, and the union of free_list and rcu_head was obviously
      wrong (because free_list is the very list we are RCU walking).
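
      To make the layout problem concrete, here is a minimal sketch with
      stand-in types (hypothetical names, not the kernel's struct vmap_block
      or its list/RCU types): if the list linkage shares storage with the RCU
      callback head through a union, queueing the object for RCU freeing
      overwrites the very pointers a concurrent reader is still following, so
      the two must be separate fields (and the walker has to use the RCU list
      primitives rather than a plain list walk).

        /* Layout sketch only; stand-in types, not the kernel definitions. */
        #include <stdio.h>

        struct list_node_stub {
                struct list_node_stub *next, *prev;
        };

        struct rcu_head_stub {
                struct rcu_head_stub *next;
                void (*func)(struct rcu_head_stub *);
        };

        /* Broken: queueing the RCU callback clobbers free_list under readers. */
        struct vmap_block_broken {
                union {
                        struct list_node_stub free_list; /* walked under RCU  */
                        struct rcu_head_stub rcu_head;   /* reused at free!   */
                };
        };

        /* Fixed: the RCU-walked list and the RCU callback head never overlap. */
        struct vmap_block_fixed {
                struct list_node_stub free_list;         /* walked under RCU  */
                struct rcu_head_stub rcu_head;           /* only at free time */
        };

        int main(void)
        {
                printf("broken: %zu bytes (fields overlap), fixed: %zu bytes\n",
                       sizeof(struct vmap_block_broken),
                       sizeof(struct vmap_block_fixed));
                return 0;
        }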
      
      While we are there, remove a couple of unused fields from an earlier
      iteration.
      
      These APIs aren't actually used anywhere yet, because of problems with
      the XFS conversion.  Christoph has now verified that those problems are
      solved with these patches.  Also, since this is an exported interface, I
      think it would be good to merge it now (and Christoph wants to get the
      XFS changes into their local tree).
      
      Cc: stable@kernel.org
      Cc: linux-mm@kvack.org
      Tested-by: Christoph Hellwig <hch@infradead.org>
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      --
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6 · 489b24f2
      Linus Torvalds authored
      * git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
        random: Remove unused inode variable
        crypto: padlock-sha - Add import/export support
        random: drop weird m_time/a_time manipulation
    • Merge git://git.kernel.org/pub/scm/linux/kernel/git/steve/gfs2-2.6-fixes · 4dab75ec
      Linus Torvalds authored
      * git://git.kernel.org/pub/scm/linux/kernel/git/steve/gfs2-2.6-fixes:
        GFS2: Use GFP_NOFS for alloc structure
        GFS2: Fix previous patch
        GFS2: Don't withdraw on partial rindex entries
        GFS2: Fix refcnt leak on gfs2_follow_link() error path
    • Merge branch 'sh/for-2.6.33' of git://git.kernel.org/pub/scm/linux/kernel/git/lethal/sh-2.6 · 7fbcca25
      Linus Torvalds authored
      * 'sh/for-2.6.33' of git://git.kernel.org/pub/scm/linux/kernel/git/lethal/sh-2.6:
        sh: Fix access to released memory in clk_debugfs_register_one()
        sh: Fix access to released memory in dwarf_unwinder_cleanup()
        usb: r8a66597-hdc disable interrupts fix
        spi: spi_sh_msiof: Fixed data sampling on the correct edge
    • Merge branch 'upstream' of git://ftp.linux-mips.org/pub/scm/upstream-linus · e770a0f1
      Linus Torvalds authored
      * 'upstream' of git://ftp.linux-mips.org/pub/scm/upstream-linus:
        MIPS: 64-bit: Detect virtual memory size
        MIPS: AR7: Fix USB slave mem range typo
        MIPS: Alchemy: Fix dbdma ring destruction memory debugcheck.
    • Fix 'flush_old_exec()/setup_new_exec()' split · 7ab02af4
      Linus Torvalds authored
      Commit 221af7f8 ("Split 'flush_old_exec' into two functions") split
      the function at the point of no return - ie right where there were no
      more error cases to check.  That made sense from a technical standpoint,
      but now that the actual personality setting happens in between
      flush_old_exec() and setup_new_exec(), we need to be a bit more careful.
      
      In particular, we need to make sure that we really flush the old
      personality bits in the 'flush' stage, rather than later in the 'setup'
      stage, since otherwise we might be flushing the _new_ personality state
      that we're just setting up.
      
      So this moves the flags and personality flushing (and 'flush_thread()',
      which is the arch-specific function that generally resets lazy FP state
      etc) of the old process into flush_old_exec(), so that it doesn't affect
      any state that execve() is setting up for the new process environment.
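
      In rough pseudo-C (stub functions only, a simplified sketch of the
      ordering described above rather than the real fs/exec.c or binfmt
      loader code), the required order is: flush the old state, let the
      loader pick the new personality, then finish the setup; a flush that
      happened any later would wipe the freshly-set value.

        /* Ordering sketch only; stubs, not the real exec code paths. */
        #include <stdio.h>

        static unsigned long personality;   /* stand-in for current->personality */

        static void flush_old_exec_sketch(void)
        {
                personality = 0;             /* wipe old personality, flush_thread(), ... */
        }

        static void setup_new_exec_sketch(void)
        {
                /* Must not reset personality here: it now holds the new value. */
                printf("personality in effect: %#lx\n", personality);
        }

        int main(void)
        {
                flush_old_exec_sketch();     /* 1. point of no return: flush old state */
                personality = 0x1;           /* 2. loader sets the new personality
                                              *    (arbitrary illustrative value)      */
                setup_new_exec_sketch();     /* 3. finish setting up the new image     */
                return 0;
        }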
      
      This was reported by Michal Simek as breaking his Microblaze qemu
      environment.
      Reported-and-tested-by: Michal Simek <michal.simek@petalogix.com>
      Cc: Peter Anvin <hpa@zytor.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • cfq-iosched: Do not idle on async queues · 1efe8fe1
      Vivek Goyal authored
      A few weeks back, Shaohua Li posted a similar patch.  I am reposting it
      with more test results.
      
      This patch does two things.
      
      - Do not idle on async queues.
      
      - It also changes the write queue depth CFQ drives (cfq_may_dispatch()).
        Currently, we seem to be driving a queue depth of 1 for WRITES in all
        cases.  This is true even if there is only one write queue in the
        system, and the logic of allowing an infinite queue depth for a single
        busy queue, as well as slowly increasing the queue depth based on the
        last delayed sync request, does not seem to be kicking in at all.

      This patch will allow deeper WRITE queue depths (subject to the other
      WRITE queue depth constraints like cfq_quantum and the last delayed sync
      request).
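
      As a rough sketch of the idling change (hypothetical helper and field
      names, not the actual cfq-iosched.c code), the decision to arm the idle
      timer simply excludes async queues, so CFQ dispatches from other queues
      instead of holding the disk idle waiting for more buffered writes:

        /* Decision sketch only; hypothetical names, not cfq-iosched.c. */
        #include <stdbool.h>
        #include <stdio.h>

        struct cfq_queue_sketch {
                bool sync;   /* sync IO (reads, O_DIRECT) vs async buffered writes */
                int queued;  /* requests currently queued on this queue            */
        };

        static bool cfq_should_idle_sketch(const struct cfq_queue_sketch *cfqq)
        {
                if (!cfqq->sync)
                        return false;         /* the fix: never idle on async queues  */
                return cfqq->queued == 0;     /* idle only when a sync queue runs dry */
        }

        int main(void)
        {
                struct cfq_queue_sketch async_writes = { .sync = false, .queued = 0 };
                struct cfq_queue_sketch sync_reads = { .sync = true, .queued = 0 };

                printf("idle on async queue: %d\n", cfq_should_idle_sketch(&async_writes));
                printf("idle on sync queue:  %d\n", cfq_should_idle_sketch(&sync_reads));
                return 0;
        }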
      
      Shaohua Li had reported getting more out of his SSD.  In my case, I have
      one LUN exported from an HP EVA, and with pure buffered writes on I can
      get more out of the system.  Following are test results of pure buffered
      writes (with end_fsync=1) with the vanilla and patched kernels.  These
      results are the average of 3 sets of runs with an increasing number of
      threads.
      
      AVERAGE[bufwfs][vanilla]
      -------
      job       Set NR  ReadBW(KB/s)   MaxClat(us)    WriteBW(KB/s)  MaxClat(us)
      ---       --- --  ------------   -----------    -------------  -----------
      bufwfs    3   1   0              0              95349          474141
      bufwfs    3   2   0              0              100282         806926
      bufwfs    3   4   0              0              109989         2.7301e+06
      bufwfs    3   8   0              0              116642         3762231
      bufwfs    3   16  0              0              118230         6902970
      
      AVERAGE[bufwfs] [patched kernel]
      -------
      job       Set NR  ReadBW(KB/s)   MaxClat(us)    WriteBW(KB/s)  MaxClat(us)
      ---       --- --  ------------   -----------    -------------  -----------
      bufwfs    3   1   0              0              270722         404352
      bufwfs    3   2   0              0              206770         1.06552e+06
      bufwfs    3   4   0              0              195277         1.62283e+06
      bufwfs    3   8   0              0              260960         2.62979e+06
      bufwfs    3   16  0              0              299260         1.70731e+06
      
      I also ran buffered writes along with some sequential and some random
      buffered reads going on in the system on a SATA disk, because the
      potential risk is that, in order to keep the max clat low, we should not
      be driving the queue depth higher in the presence of sync IO.

      With some random and sequential reads going on in the system on one SATA
      disk, I did not see any significant increase in max clat.  So it looks
      like the other WRITE queue depth control logic is doing its job.  Here
      are the results.
      
      AVERAGE[brr, bsr, bufw together] [vanilla]
      -------
      job       Set NR  ReadBW(KB/s)   MaxClat(us)    WriteBW(KB/s)  MaxClat(us)
      ---       --- --  ------------   -----------    -------------  -----------
      brr       3   1   850            546345         0              0
      bsr       3   1   14650          729543         0              0
      bufw      3   1   0              0              23908          8274517
      
      brr       3   2   981.333        579395         0              0
      bsr       3   2   14149.7        1175689        0              0
      bufw      3   2   0              0              21921          1.28108e+07
      
      brr       3   4   898.333        1.75527e+06    0              0
      bsr       3   4   12230.7        1.40072e+06    0              0
      bufw      3   4   0              0              19722.3        2.4901e+07
      
      brr       3   8   900            3160594        0              0
      bsr       3   8   9282.33        1.91314e+06    0              0
      bufw      3   8   0              0              18789.3        23890622
      
      AVERAGE[brr, bsr, bufw mixed] [patched kernel]
      -------
      job       Set NR  ReadBW(KB/s)   MaxClat(us)    WriteBW(KB/s)  MaxClat(us)
      ---       --- --  ------------   -----------    -------------  -----------
      brr       3   1   837            417973         0              0
      bsr       3   1   14357.7        591275         0              0
      bufw      3   1   0              0              24869.7        8910662
      
      brr       3   2   1038.33        543434         0              0
      bsr       3   2   13351.3        1205858        0              0
      bufw      3   2   0              0              18626.3        13280370
      
      brr       3   4   913            1.86861e+06    0              0
      bsr       3   4   12652.3        1430974        0              0
      bufw      3   4   0              0              15343.3        2.81305e+07
      
      brr       3   8   890            2.92695e+06    0              0
      bsr       3   8   9635.33        1.90244e+06    0              0
      bufw      3   8   0              0              17200.3        24424392
      
      So it looks like it might make sense to include this patch.
      
      Thanks
      Vivek
      Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
    • MIPS: 64-bit: Detect virtual memory size · 91dfc423
      Guenter Roeck authored
      Linux kernel 2.6.32 and later allocate address space from the top of the
      kernel virtual memory address space.
      
      This patch implements virtual memory size detection for 64-bit MIPS CPUs
      in order to avoid the resulting crashes.
      Signed-off-by: Guenter Roeck <guenter.roeck@ericsson.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: http://patchwork.linux-mips.org/patch/935/
      Reviewed-by: David Daney <ddaney@caviumnetworks.com>
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>