1. 14 Feb, 2011 5 commits
    • x86: Avoid tlbstate lock if not enough cpus · 7064d865
      Shaohua Li authored
      This one isn't related to the previous patch. If the number of
      online cpus is below NUM_INVALIDATE_TLB_VECTORS, we don't need
      the lock. The comment in the code claims the check is
      unnecessary, but taking a hot lock still requires an atomic
      operation and is expensive, so add the check here.
      
      Use nr_cpu_ids here, as suggested by Eric Dumazet.
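      
      As a minimal sketch of the resulting fast path (assuming the
      flush_tlb_others_ipi() code in arch/x86/mm/tlb.c, with the IPI
      send details elided):
      
          /* Sketch: serialize the shared vectors only when the CPUs
           * can actually outnumber them. */
          if (nr_cpu_ids > NUM_INVALIDATE_TLB_VECTORS)
                  raw_spin_lock(&f->tlbstate_lock);
      
          /* ... pick a vector and send the invalidate IPI ... */
      
          if (nr_cpu_ids > NUM_INVALIDATE_TLB_VECTORS)
                  raw_spin_unlock(&f->tlbstate_lock);
      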
      Signed-off-by: Shaohua Li <shaohua.li@intel.com>
      Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      LKML-Reference: <1295232730.1949.710.camel@sli10-conroe>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86: Scale up the number of TLB invalidate vectors with NR_CPUs, up to 32 · 70e4a369
      Shaohua Li authored
      Make the number of TLB invalidate vectors scale linearly with
      NR_CPUS, up to a maximum of 32 vectors.
      
      We currently have only 8 vectors for TLB invalidation, and that is
      clearly inadequate. With a lot of CPUs, the CPUs have to share the
      8 vectors, and tlbstate_lock is used to protect them.
      flush_tlb_page() is heavily used in page reclaim, which causes a
      lot of contention on tlbstate_lock.
      
      Andi Kleen suggested increasing the number of vectors to 32, which
      should be enough for current typical systems to reduce the
      tlbstate_lock contention.
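      
      A minimal sketch of the intended definition (assuming it lives in
      arch/x86/include/asm/irq_vectors.h; the exact form in the patch
      may differ):
      
          /* up to 32 vectors used for spreading out TLB flushes: */
          #if NR_CPUS <= 32
          # define NUM_INVALIDATE_TLB_VECTORS   (NR_CPUS)
          #else
          # define NUM_INVALIDATE_TLB_VECTORS   (32)
          #endif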
      
      My test system has 4 sockets, 64 CPUs and 64G of memory. My
      workload creates 64 processes; each process mmaps and reads a big
      empty sparse file. The total size of the files is 2*total_mem, so
      this causes a lot of page reclaim.
      
      Below is the result I get from perf call-graph profiling:
      
       without the patch:
       ------------------
      
          24.25%           usemem  [kernel]                                   [k] _raw_spin_lock
                           |
                           --- _raw_spin_lock
                              |
                              |--42.15%-- native_flush_tlb_others
      
       with the patch:
       ------------------
      
          14.96%           usemem  [kernel]                                   [k] _raw_spin_lock
                           |
                           --- _raw_spin_lock
                              |--13.89%-- native_flush_tlb_others
      
      So this heavily reduces the tlbstate_lock contention.
      Suggested-by: Andi Kleen <andi@firstfloor.org>
      Signed-off-by: Shaohua Li <shaohua.li@intel.com>
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1295232727.1949.709.camel@sli10-conroe>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86: Allocate 32 tlb_invalidate_interrupt handler stubs · 3a09fb45
      Shaohua Li authored
      Add up to 32 invalidate_interrupt handlers. How many handlers are
      actually added depends on NUM_INVALIDATE_TLB_VECTORS, so if
      NUM_INVALIDATE_TLB_VECTORS is smaller than 32 we reduce the code
      size.
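      
      A sketch of how the per-vector registration can be kept in step
      with NUM_INVALIDATE_TLB_VECTORS (assuming the alloc_intr_gate()
      calls in arch/x86/kernel/irqinit.c; the handler stubs themselves
      are emitted in assembly, and the fall-through cases are elided
      here):
      
          /* Sketch: hook up one gate per stub, falling through so that
           * exactly NUM_INVALIDATE_TLB_VECTORS stubs get registered. */
          #define ALLOC_INVTLB_VEC(NR) \
                  alloc_intr_gate(INVALIDATE_TLB_VECTOR_START+NR, \
                          invalidate_interrupt##NR)
      
          switch (NUM_INVALIDATE_TLB_VECTORS) {
          default:
                  ALLOC_INVTLB_VEC(31);
                  /* fall through: cases 31..2 elided */
          case 1:
                  ALLOC_INVTLB_VEC(0);
                  break;
          }
      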
      Signed-off-by: Shaohua Li <shaohua.li@intel.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      LKML-Reference: <1295232725.1949.708.camel@sli10-conroe>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86: Cleanup vector usage · 60f6e65d
      Shaohua Li authored
      Clean up the vector usage and make the vectors contiguous where
      possible.
      Signed-off-by: Shaohua Li <shaohua.li@intel.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      LKML-Reference: <1295232722.1949.707.camel@sli10-conroe>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • klist: Fix object alignment on 64-bit. · 795abaf1
      David Miller authored
      Commit c0e69a5b ("klist.c: bit 0 in pointer can't be used as flag")
      intended to make sure that all klist objects were at least
      pointer-size aligned, but used the constant "4", which only works
      on 32-bit.
      
      Use "sizeof(void *)", which is correct in all cases.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Acked-by: Jesper Nilsson <jesper.nilsson@axis.com>
      Cc: stable <stable@kernel.org>
      Cc: Greg Kroah-Hartman <gregkh@suse.de>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>