1. 04 Dec, 2015 14 commits
    • locking/barriers, arch: Use smp barriers in smp_store_release() · d5a73cad
      Davidlohr Bueso authored
      With commit b92b8b35 ("locking/arch: Rename set_mb() to smp_store_mb()")
      it was made clear that the context of this call (and thus set_mb)
      is strictly for CPU ordering, as opposed to IO. As such all archs
      should use the smp variant of mb(), respecting the semantics and
      saving a mandatory barrier on UP.
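      
      The generic fallback this converges on looks roughly like the
      following (per-arch definitions vary; sketch only):
      
      	#ifndef smp_store_mb
      	#define smp_store_mb(var, value)				\
      		do { WRITE_ONCE(var, value); smp_mb(); } while (0)
      	#endif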
      Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: <linux-arch@vger.kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: dave@stgolabs.net
      Link: http://lkml.kernel.org/r/1445975631-17047-3-git-send-email-dave@stgolabs.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      d5a73cad
    • locking/cmpxchg, arch: Remove tas() definitions · fbd35c0d
      Davidlohr Bueso authored
      It seems that commit 5dc12dde ("Remove tas()") missed some files.
      Correct this and fully drop the macro, for which we should be using
      cmpxchg()-like calls.
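      
      For reference, tas() on the remaining architectures was essentially a
      sugared xchg(); callers should spell the operation out, e.g.:
      
      	/* before: if (tas(&flag)) ... ; roughly equivalent to: */
      	if (xchg(&flag, 1))
      		/* already set */;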
      Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: <linux-arch@vger.kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Aurelien Jacquiot <a-jacquiot@ti.com>
      Cc: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Miao <realmz6@gmail.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: dave@stgolabs.net
      Link: http://lkml.kernel.org/r/1445975631-17047-2-git-send-email-dave@stgolabs.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      fbd35c0d
    • locking/pvqspinlock: Queue node adaptive spinning · cd0272fa
      Waiman Long authored
      In an overcommitted guest where some vCPUs have to be halted to make
      forward progress in other areas, it is highly likely that a vCPU later
      in the spinlock queue will be spinning while the ones earlier in the
      queue would have been halted. The spinning in the later vCPUs is then
      just a waste of precious CPU cycles because they are not going to
      get the lock any time soon, as the earlier ones have to be woken up
      and take their turn to get the lock.
      
      This patch implements an adaptive spinning mechanism where the vCPU
      will call pv_wait() if the previous vCPU is not running.
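      
      A heavily simplified sketch of the idea (prev_vcpu_is_running() is a
      hypothetical stand-in for the actual check on the previous node's
      state):
      
      	while (!READ_ONCE(node->locked)) {
      		if (!prev_vcpu_is_running(prev))
      			pv_wait(&pn->state, vcpu_halted); /* halt, don't spin */
      		cpu_relax();
      	}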
      
      Linux kernel builds were run in KVM guest on an 8-socket, 4
      cores/socket Westmere-EX system and a 4-socket, 8 cores/socket
      Haswell-EX system. Both systems are configured to have 32 physical
      CPUs. The kernel build times before and after the patch were:
      
      		    Westmere			Haswell
        Patch		32 vCPUs    48 vCPUs	32 vCPUs    48 vCPUs
        -----		--------    --------    --------    --------
        Before patch   3m02.3s     5m00.2s     1m43.7s     3m03.5s
        After patch    3m03.0s     4m37.5s	 1m43.0s     2m47.2s
      
      For 32 vCPUs, this patch doesn't cause any noticeable change in
      performance. For 48 vCPUs (over-committed), there is about 8%
      performance improvement.
      Signed-off-by: Waiman Long <Waiman.Long@hpe.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Douglas Hatch <doug.hatch@hpe.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Scott J Norton <scott.norton@hpe.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1447114167-47185-8-git-send-email-Waiman.Long@hpe.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      cd0272fa
    • locking/pvqspinlock: Allow limited lock stealing · 1c4941fd
      Waiman Long authored
      This patch allows one attempt for the lock waiter to steal the lock
      when entering the PV slowpath. To prevent lock starvation, the pending
      bit will be set by the queue head vCPU when it is in the active lock
      spinning loop to disable any lock stealing attempt.  This helps to
      reduce the performance penalty caused by lock waiter preemption while
      not having much of the downsides of a real unfair lock.
      
      The pv_wait_head() function was renamed to pv_wait_head_or_lock(),
      as it was modified to acquire the lock before returning. This is
      necessary because of possible lock stealing attempts from other tasks.
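      
      A condensed sketch of the single steal attempt (helper name
      hypothetical; the pending bit set by the queue head makes the cmpxchg
      fail and forces queuing):
      
      	static bool pv_try_steal_lock(struct qspinlock *lock)
      	{
      		return !(atomic_read(&lock->val) & _Q_LOCKED_PENDING_MASK) &&
      		       (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) == 0);
      	}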
      
      Linux kernel builds were run in KVM guest on an 8-socket, 4
      cores/socket Westmere-EX system and a 4-socket, 8 cores/socket
      Haswell-EX system. Both systems are configured to have 32 physical
      CPUs. The kernel build times before and after the patch were:
      
                          Westmere                    Haswell
        Patch         32 vCPUs    48 vCPUs    32 vCPUs    48 vCPUs
        -----         --------    --------    --------    --------
        Before patch   3m15.6s    10m56.1s     1m44.1s     5m29.1s
        After patch    3m02.3s     5m00.2s     1m43.7s     3m03.5s
      
      For the overcommitted case (48 vCPUs), this patch is able to reduce
      kernel build time by more than 54% for Westmere and 44% for Haswell.
      Signed-off-by: Waiman Long <Waiman.Long@hpe.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Douglas Hatch <doug.hatch@hpe.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Scott J Norton <scott.norton@hpe.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1447190336-53317-1-git-send-email-Waiman.Long@hpe.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      1c4941fd
    • locking/pvqspinlock: Collect slowpath lock statistics · 45e898b7
      Waiman Long authored
      This patch enables the accumulation of kicking and waiting related
      PV qspinlock statistics when the new QUEUED_LOCK_STAT configuration
      option is selected. It also enables the collection of data which
      allows us to calculate the kicking and wakeup latencies, which have
      a heavy dependency on the CPUs being used.
      
      The statistical counters are per-cpu variables to minimize the
      performance overhead in their updates. These counters are exported
      via the debugfs filesystem under the qlockstat directory.  When the
      corresponding debugfs files are read, the summation and computation
      of the required data are performed.
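      
      The mechanism is plain per-cpu counters summed at read time; a
      minimal sketch (counter name illustrative):
      
      	static DEFINE_PER_CPU(unsigned long, pv_kick_unlock_count);
      
      	/* update path: cheap, no atomics needed */
      	this_cpu_inc(pv_kick_unlock_count);
      
      	/* debugfs read path: sum across CPUs */
      	for_each_possible_cpu(cpu)
      		sum += per_cpu(pv_kick_unlock_count, cpu);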
      
      The measured latencies for different CPUs are:
      
      	CPU		Wakeup		Kicking
      	---		------		-------
      	Haswell-EX	63.6us		 7.4us
      	Westmere-EX	67.6us		 9.3us
      
      The measured latencies varied a bit from run-to-run. The wakeup
      latency is much higher than the kicking latency.
      
      A sample of statistical counters after system bootup (with vCPU
      overcommit) was:
      
      	pv_hash_hops=1.00
      	pv_kick_unlock=1148
      	pv_kick_wake=1146
      	pv_latency_kick=11040
      	pv_latency_wake=194840
      	pv_spurious_wakeup=7
      	pv_wait_again=4
      	pv_wait_head=23
      	pv_wait_node=1129
      Signed-off-by: Waiman Long <Waiman.Long@hpe.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Douglas Hatch <doug.hatch@hpe.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Scott J Norton <scott.norton@hpe.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1447114167-47185-6-git-send-email-Waiman.Long@hpe.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      45e898b7
    • sched/core, locking: Document Program-Order guarantees · 8643cda5
      Peter Zijlstra authored
      These are some notes on the scheduler locking and how it provides
      program order guarantees on SMP systems.
      
      ( This commit is in the locking tree, because the new documentation
        refers to a newly introduced locking primitive. )
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Boqun Feng <boqun.feng@gmail.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      8643cda5
    • locking, sched: Introduce smp_cond_acquire() and use it · b3e0b1b6
      Peter Zijlstra authored
      Introduce smp_cond_acquire() which combines a control dependency and a
      read barrier to form acquire semantics.
      
      This primitive has two benefits:
      
       - it documents control dependencies,
       - it's typically cheaper than using smp_load_acquire() in a loop.
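      
      The primitive itself is small; roughly:
      
      	#define smp_cond_acquire(cond)	do {		\
      		while (!(cond))				\
      			cpu_relax();			\
      		smp_rmb(); /* ctrl + rmb := acquire */	\
      	} while (0)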
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      b3e0b1b6
    • Merge branch 'sched/urgent' into locking/core, to pick up scheduler fix we rely on · 829cf317
      Ingo Molnar authored
      So we want to change a locking API, but the scheduler uses it, and a conflict
      is generated by a recent scheduler fix.
      
      Pick up the pending scheduler fixes to make life easier.
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      829cf317
    • sched/core: Fix an SMP ordering race in try_to_wake_up() vs. schedule() · ecf7d01c
      Peter Zijlstra authored
      Oleg noticed that it's possible to falsely observe p->on_cpu == 0 such
      that we'll prematurely continue with the wakeup and effectively run p on
      two CPUs at the same time.
      
      Even though the overlap is very limited (the task is in the middle of
      being scheduled out), it could still result in corruption of the
      scheduler data structures.
      
              CPU0                            CPU1
      
              set_current_state(...)
      
              <preempt_schedule>
                context_switch(X, Y)
                  prepare_lock_switch(Y)
                    Y->on_cpu = 1;
                  finish_lock_switch(X)
                    store_release(X->on_cpu, 0);
      
                                              try_to_wake_up(X)
                                                LOCK(p->pi_lock);
      
                                                t = X->on_cpu; // 0
      
                context_switch(Y, X)
                  prepare_lock_switch(X)
                    X->on_cpu = 1;
                  finish_lock_switch(Y)
                    store_release(Y->on_cpu, 0);
              </preempt_schedule>
      
              schedule();
                deactivate_task(X);
                X->on_rq = 0;
      
                                                if (X->on_rq) // false
      
                                                if (t) while (X->on_cpu)
                                                  cpu_relax();
      
                context_switch(X, ..)
                  finish_lock_switch(X)
                    store_release(X->on_cpu, 0);
      
      Avoid the load of X->on_cpu being hoisted over the X->on_rq load.
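      
      The fix boils down to an smp_rmb() between the two loads; sketched:
      
      	/* in try_to_wake_up(), after the p->on_rq check came back false: */
      	smp_rmb();	/* don't let the p->on_cpu load pass the p->on_rq load */
      	while (p->on_cpu)
      		cpu_relax();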
      Reported-by: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      ecf7d01c
    • sched/core: Better document the try_to_wake_up() barriers · b75a2253
      Peter Zijlstra authored
      Explain how the control dependency and smp_rmb() end up providing
      ACQUIRE semantics and pair with smp_store_release() in
      finish_lock_switch().
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      b75a2253
    • sched/cputime: Fix invalid gtime in proc · 2541117b
      Hiroshi Shimamoto authored
      /proc/stats shows an invalid gtime when the thread is running in a
      guest. When vtime accounting is not enabled, we cannot get a valid
      delta.
      The delta is calculated with now - tsk->vtime_snap, but tsk->vtime_snap
      is only updated when vtime accounting is runtime enabled.
      
      This patch makes task_gtime() just return gtime without computing the
      buggy non-existing tickless delta when vtime accounting is not enabled.
      
      Use context_tracking_is_enabled() to check whether vtime accounting is
      enabled on some CPU; only in that case do we need to check the tickless
      delta. This way we fix the gtime value regression on machines not
      running nohz full.
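      
      The change amounts to an early return; approximately:
      
      	u64 task_gtime(struct task_struct *t)
      	{
      		if (!context_tracking_is_enabled())
      			return t->gtime;	/* no tickless delta exists */
      
      		/* ... otherwise compute the vtime_snap-based delta ... */
      	}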
      
      The kernel config contains CONFIG_VIRT_CPU_ACCOUNTING_GEN=y and
      CONFIG_NO_HZ_FULL_ALL=n, and the system was booted without nohz_full.
      
      I ran and stopped a busy loop in a VM and watched the gtime on the
      host, dumping the 43rd field, which shows the gtime, every second:
      
      	 # while :; do awk '{print $3" "$43}' /proc/3955/task/4014/stat; sleep 1; done
      	S 4348
      	R 7064566
      	R 7064766
      	R 7064967
      	R 7065168
      	S 4759
      	S 4759
      
      While the busy loop is running, a hugely inflated value is returned.
      
      After applying this patch, we see the correct gtime.
      
      	 # while :; do awk '{print $3" "$43}' /proc/10913/task/10956/stat; sleep 1; done
      	S 5338
      	R 5365
      	R 5465
      	R 5566
      	R 5666
      	S 5726
      	S 5726
      Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Luiz Capitulino <lcapitulino@redhat.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1447948054-28668-2-git-send-email-fweisbec@gmail.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      2541117b
    • sched/core: Clear the root_domain cpumasks in init_rootdomain() · 8295c699
      Xunlei Pang authored
      root_domain::rto_mask allocated through alloc_cpumask_var()
      contains garbage data; this may cause problems. For instance,
      when doing pull_rt_task(), it may do useless iterations if
      rto_mask retains some extra garbage bits. Worse still, this
      violates the isolated domain rule for clustered scheduling
      using cpuset, because tasks (with all the CPUs allowed)
      belonging to one root domain can be pulled away into another
      root domain.
      
      The patch cleans the garbage by using zalloc_cpumask_var()
      instead of alloc_cpumask_var() for root_domain::rto_mask
      allocation, thereby addressing the issues.
      
      Do the same thing for root_domain's other cpumask members:
      dlo_mask, span, and online.
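      
      The fix is a one-for-one substitution; for example (error label
      illustrative):
      
      	/* before: if (!alloc_cpumask_var(&rd->rto_mask, GFP_KERNEL)) */
      	if (!zalloc_cpumask_var(&rd->rto_mask, GFP_KERNEL))
      		goto free_dlo_mask;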
      Signed-off-by: Xunlei Pang <xlpang@redhat.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: <stable@vger.kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1449057179-29321-1-git-send-email-xlpang@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      8295c699
    • sched/core: Remove false-positive warning from wake_up_process() · 119d6f6a
      Sasha Levin authored
      Because wakeups can (fundamentally) be late, a task might not be in
      the expected state. Therefore testing against a task's state is racy,
      and can yield false positives.
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: oleg@redhat.com
      Fixes: 9067ac85 ("wake_up_process() should be never used to wakeup a TASK_STOPPED/TRACED task")
      Link: http://lkml.kernel.org/r/1448933660-23082-1-git-send-email-sasha.levin@oracle.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      119d6f6a
    • sched/wait: Fix signal handling in bit wait helpers · 68985633
      Peter Zijlstra authored
      Vladimir reported getting RCU stall warnings and bisected it back to
      commit:
      
        74316201 ("sched: Remove proliferation of wait_on_bit() action functions")
      
      That commit inadvertently reversed the calls to schedule() and
      signal_pending(), thereby not handling the case where a signal is
      received while we sleep.
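      
      After the fix, the helpers check for a pending signal before calling
      schedule(); roughly:
      
      	__sched int bit_wait(struct wait_bit_key *word)
      	{
      		if (signal_pending_state(current->state, current))
      			return 1;
      		schedule();
      		return 0;
      	}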
      Reported-by: Vladimir Murzin <vladimir.murzin@arm.com>
      Tested-by: Vladimir Murzin <vladimir.murzin@arm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: mark.rutland@arm.com
      Cc: neilb@suse.de
      Cc: oleg@redhat.com
      Fixes: 74316201 ("sched: Remove proliferation of wait_on_bit() action functions")
      Fixes: cbbce822 ("SCHED: add some "wait..on_bit...timeout()" interfaces.")
      Link: http://lkml.kernel.org/r/20151201130404.GL3816@twins.programming.kicks-ass.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      68985633
  2. 23 Nov, 2015 7 commits
    • locking/pvqspinlock, x86: Optimize the PV unlock code path · d7804530
      Waiman Long authored
      The unlock function in queued spinlocks was optimized for better
      performance on bare metal systems at the expense of virtualized guests.
      
      For x86-64 systems, the unlock call needs to go through a
      PV_CALLEE_SAVE_REGS_THUNK() which saves and restores 8 64-bit
      registers before calling the real __pv_queued_spin_unlock()
      function. The thunk code may also be in a separate cacheline from
      __pv_queued_spin_unlock().
      
      This patch optimizes the PV unlock code path by:
      
       1) Moving the unlock slowpath code from the fastpath into a separate
          __pv_queued_spin_unlock_slowpath() function to make the fastpath
          as simple as possible.
      
       2) For x86-64, hand-coding an assembly function to combine the register
          saving thunk code with the fastpath code. Only registers that
          are used in the fastpath will be saved and restored. If the
          fastpath fails, the slowpath function will be called via another
          PV_CALLEE_SAVE_REGS_THUNK(). For 32-bit, it falls back to the C
          __pv_queued_spin_unlock() code as the thunk saves and restores
          only one 32-bit register.
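      
      The resulting C fastpath boils down to a single cmpxchg; a condensed
      sketch:
      
      	__visible void __pv_queued_spin_unlock(struct qspinlock *lock)
      	{
      		struct __qspinlock *l = (void *)lock;
      		u8 locked = cmpxchg(&l->locked, _Q_LOCKED_VAL, 0);
      
      		if (likely(locked == _Q_LOCKED_VAL))
      			return;		/* uncontended unlock: done */
      		__pv_queued_spin_unlock_slowpath(lock, locked);
      	}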
      
      With a microbenchmark of 5M lock-unlock loop, the table below shows
      the execution times before and after the patch with different number
      of threads in a VM running on a 32-core Westmere-EX box with x86-64
      4.2-rc1 based kernels:
      
        Threads	Before patch	After patch	% Change
        -------	------------	-----------	--------
           1		   134.1 ms	  119.3 ms	  -11%
           2		   1286  ms	   953  ms	  -26%
           3		   3715  ms	  3480  ms	  -6.3%
           4		   4092  ms	  3764  ms	  -8.0%
      Signed-off-by: Waiman Long <Waiman.Long@hpe.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Douglas Hatch <doug.hatch@hpe.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Scott J Norton <scott.norton@hpe.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1447114167-47185-5-git-send-email-Waiman.Long@hpe.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      d7804530
    • locking/qspinlock: Avoid redundant read of next pointer · aa68744f
      Waiman Long authored
      With the optimistic prefetch of the next node cacheline, the next
      pointer may already have been properly initialized. As a result, the
      read of node->next in the contended path may be redundant. This patch
      eliminates the redundant read if the next pointer value is not NULL.
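      
      A sketch of the change in the contended path:
      
      	/* reuse the value fetched for the prefetch, if already non-NULL */
      	if (!next) {
      		while (!(next = READ_ONCE(node->next)))
      			cpu_relax();
      	}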
      Signed-off-by: Waiman Long <Waiman.Long@hpe.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Douglas Hatch <doug.hatch@hpe.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Scott J Norton <scott.norton@hpe.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1447114167-47185-4-git-send-email-Waiman.Long@hpe.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      aa68744f
    • locking/qspinlock: Prefetch the next node cacheline · 81b55986
      Waiman Long authored
      A queue head CPU, after acquiring the lock, will have to notify
      the next CPU in the wait queue that it has become the new queue
      head. This involves loading a new cacheline from the MCS node of the
      next CPU. That operation can be expensive and adds to the latency of
      the locking operation.
      
      This patch adds code to optimistically prefetch the next MCS node
      cacheline if the next pointer is defined and it has been spinning
      for the MCS lock for a while. This reduces the locking latency and
      improves the system throughput.
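      
      The added hint is small; approximately:
      
      	next = READ_ONCE(node->next);
      	if (next)
      		prefetchw(next);	/* warm the next MCS node's cacheline */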
      
      The performance change will depend on whether the prefetch overhead
      can be hidden within the latency of the lock spin loop. On really
      short critical sections, there may not be any performance gain. With
      longer critical sections, however, it was found to give a performance
      boost of 5-10% over a range of different queue depths with a spinlock
      loop microbenchmark.
      Signed-off-by: Waiman Long <Waiman.Long@hpe.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Douglas Hatch <doug.hatch@hpe.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Scott J Norton <scott.norton@hpe.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1447114167-47185-3-git-send-email-Waiman.Long@hpe.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      81b55986
    • locking/qspinlock: Use _acquire/_release() versions of cmpxchg() & xchg() · 64d816cb
      Waiman Long authored
      This patch replaces the cmpxchg() and xchg() calls in the native
      qspinlock code with the more relaxed _acquire or _release versions of
      those calls to enable other architectures to adopt queued spinlocks
      with less memory barrier performance overhead.
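      
      Illustrative of the substitution in the fastpath (exact call sites
      vary):
      
      	/* lock acquisition needs ACQUIRE semantics on success: */
      	if (atomic_cmpxchg_acquire(&lock->val, 0, _Q_LOCKED_VAL) == 0)
      		return;		/* lock acquired */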
      Signed-off-by: Waiman Long <Waiman.Long@hpe.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Douglas Hatch <doug.hatch@hpe.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Scott J Norton <scott.norton@hpe.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1447114167-47185-2-git-send-email-Waiman.Long@hpe.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      64d816cb
    • atomics: Add test for atomic operations with _relaxed variants · 978e5a36
      Boqun Feng authored
      Some atomic operations now have _relaxed/_acquire/_release variants;
      this patch adds some trivial tests for two purposes:
      
        1. test the behavior of these new operations in single-CPU
           environment.
      
        2. get their code generated before we actually use them anywhere,
           so that we can examine the resulting assembly code.
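      
      For instance, in the style of the trivial checks added (illustrative):
      
      	atomic_set(&v, 0);
      	BUG_ON(atomic_cmpxchg_relaxed(&v, 0, 1) != 0);
      	BUG_ON(atomic_read(&v) != 1);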
      Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Waiman Long <waiman.long@hp.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Link: http://lkml.kernel.org/r/1446634365-25176-1-git-send-email-boqun.feng@gmail.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      978e5a36
    • sched/rt: Hide the push_irq_work_func() declaration · 89b41108
      Arnd Bergmann authored
      The push_irq_work_func() function is conditionally defined only
      when both CONFIG_SMP and HAVE_RT_PUSH_IPI are defined, but the
      forward declaration remains visible without HAVE_RT_PUSH_IPI,
      causing a gcc warning in ARM64 allnoconfig:
      
        kernel/sched/rt.c:68:13: warning: 'push_irq_work_func' declared 'static' but never defined [-Wunused-function]
      
      This changes the code to use the same condition for both the
      declaration and the function definition, which gets rid of the
      warning.
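      
      In other words, the forward declaration now sits under the same guard;
      schematically:
      
      	#if defined(CONFIG_SMP) && defined(HAVE_RT_PUSH_IPI)
      	static void push_irq_work_func(struct irq_work *work);
      	#endif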
      
      As Peter Zijlstra suggested, we can possibly get rid of the whole
      HAVE_RT_PUSH_IPI thing after:
      
        8053871d ("smp: Fix smp_call_function_single_async() locking")
      
      Until that is done, this patch can be used to avoid the warning.
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Fixes: b6366f04 ("sched/rt: Use IPI to trigger RT task push migration instead of pulling")
      Link: http://lkml.kernel.org/r/3828565.oKfGk7yNIT@wuerfel
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      89b41108
    • Linux 4.4-rc2 · 1ec21837
      Linus Torvalds authored
      1ec21837
  3. 22 Nov, 2015 19 commits
    • Merge branch 'akpm' (patches from Andrew) · 104e2a6f
      Linus Torvalds authored
      Merge slub bulk allocator updates from Andrew Morton:
       "This missed the merge window because I was waiting for some repairs to
        come in.  Nothing actually uses the bulk allocator yet and the changes
        to other code paths are pretty small.  And the net guys are waiting
        for this so they can start merging the client code"
      
      More comments from Jesper Dangaard Brouer:
       "The kmem_cache_alloc_bulk() call, in mm/slub.c, were included in
        previous kernel.  The present version contains a bug.  Vladimir
        Davydov noticed it contained a bug, when kernel is compiled with
        CONFIG_MEMCG_KMEM (see commit 03ec0ed5: "slub: fix kmem cgroup
        bug in kmem_cache_alloc_bulk").  Plus the mem cgroup counterpart in
        kmem_cache_free_bulk() were missing (see commit 03374518 "slub:
        add missing kmem cgroup support to kmem_cache_free_bulk").
      
        I don't consider the fix stable-material because there are no in-tree
        users of the API.
      
        But with known bugs (for memcg) I cannot start using the API in the
        net-tree"
      
      * emailed patches from Andrew Morton <akpm@linux-foundation.org>:
        slab/slub: adjust kmem_cache_alloc_bulk API
        slub: add missing kmem cgroup support to kmem_cache_free_bulk
        slub: fix kmem cgroup bug in kmem_cache_alloc_bulk
        slub: optimize bulk slowpath free by detached freelist
        slub: support for bulk free with SLUB freelists
      104e2a6f
    • Merge tag 'tty-4.4-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty · dcfeda9d
      Linus Torvalds authored
      Pull tty/serial fixes from Greg KH:
       "Here are a few small tty/serial driver fixes for 4.4-rc2 that resolve
        some reported problems.
      
        All have been in linux-next, full details are in the shortlog below"
      
      * tag 'tty-4.4-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty:
        serial: export fsl8250_handle_irq
        serial: 8250_mid: Add missing dependency
        tty: audit: Fix audit source
        serial: etraxfs-uart: Fix crash
        serial: fsl_lpuart: Fix earlycon support
        bcm63xx_uart: Use the device name when registering an interrupt
        tty: Fix direct use of tty buffer work
        tty: Fix tty_send_xchar() lock order inversion
      dcfeda9d
    • Merge tag 'staging-4.4-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging · 7f217393
      Linus Torvalds authored
      Pull staging/IIO fixes from Greg KH:
       "Here are some staging and iio driver fixes for 4.4-rc2.  All of these
        are in response to issues that have been reported and have been in
        linux-next for a while"
      
      * tag 'staging-4.4-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging:
        Revert "Staging: wilc1000: coreconfigurator: Drop unneeded wrapper functions"
        iio: adc: xilinx: Fix VREFN scale
        iio: si7020: Swap data byte order
        iio: adc: vf610_adc: Fix division by zero error
        iio:ad7793: Fix ad7785 product ID
        iio: ad5064: Fix ad5629/ad5669 shift
        iio:ad5064: Make sure ad5064_i2c_write() returns 0 on success
        iio: lpc32xx_adc: fix warnings caused by enabling unprepared clock
        staging: iio: select IRQ_WORK for IIO_DUMMY_EVGEN
        vf610_adc: Fix internal temperature calculation
      7f217393
    • Merge tag 'usb-4.4-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb · 6d2d91b3
      Linus Torvalds authored
      Pull USB fixes from Greg KH:
       "Here are a number of USB fixes and new device ids for 4.4-rc2.  All
        have been in linux-next and the details are in the shortlog"
      
      * tag 'usb-4.4-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb: (28 commits)
        usblp: do not set TASK_INTERRUPTIBLE before lock
        USB: MAINTAINERS: cxacru
        usb: kconfig: fix warning of select USB_OTG
        USB: option: add XS Stick W100-2 from 4G Systems
        xhci: Fix a race in usb2 LPM resume, blocking U3 for usb2 devices
        usb: xhci: fix checking ep busy for CFC
        xhci: Workaround to get Intel xHCI reset working more reliably
        usb: chipidea: imx: fix a possible NULL dereference
        usb: chipidea: usbmisc_imx: fix a possible NULL dereference
        usb: chipidea: otg: gadget module load and unload support
        usb: chipidea: debug: disable usb irq while role switch
        ARM: dts: imx27.dtsi: change the clock information for usb
        usb: chipidea: imx: refine clock operations to adapt for all platforms
        usb: gadget: atmel_usba_udc: Expose correct device speed
        usb: musb: enable usb_dma parameter
        usb: phy: phy-mxs-usb: fix a possible NULL dereference
        usb: dwc3: gadget: let us set lower max_speed
        usb: musb: fix tx fifo flush handling
        usb: gadget: f_loopback: fix the warning during the enumeration
        usb: dwc2: host: Fix remote wakeup when not in DWC2_L2
        ...
      6d2d91b3
    • Merge branch 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus · 0ec7dc8d
      Linus Torvalds authored
      Pull MIPS fixes from Ralf Baechle:
      
       - Fix a flood of annoying build warnings
      
       - A number of fixes for Atheros 79xx platforms
      
      * 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus:
        MIPS: ath79: Add a machine entry for booting OF machines
        MIPS: ath79: Fix the size of the MISC INTC registers in ar9132.dtsi
        MIPS: ath79: Fix the DDR control initialization on ar71xx and ar934x
        MIPS: Fix flood of warnings about comparison being always true.
      0ec7dc8d
    • Merge branch 'parisc-4.4-2' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux · 94521b2f
      Linus Torvalds authored
      Pull parisc update from Helge Deller:
       "This patchset adds Huge Page and HUGETLBFS support for parisc"
      
      Honestly, the hugepage support should have gone through in the merge
      window, and is not really an rc-time fix.  But it only touches
      arch/parisc, and I cannot find it in myself to care.  If one of the
      three parisc users notices a breakage, I will point at Helge and make
      rude farting noises.
      
      * 'parisc-4.4-2' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux:
        parisc: Map kernel text and data on huge pages
        parisc: Add Huge Page and HUGETLBFS support
        parisc: Use long branch to do_syscall_trace_exit
        parisc: Increase initial kernel mapping to 32MB on 64bit kernel
        parisc: Initialize the fault vector earlier in the boot process.
        parisc: Add defines for Huge page support
        parisc: Drop unused MADV_xxxK_PAGES flags from asm/mman.h
        parisc: Drop definition of start_thread_som for HP-UX SOM binaries
        parisc: Fix wrong comment regarding first pmd entry flags
      94521b2f
    • Merge branch 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip · 727cde6c
      Linus Torvalds authored
      Pull perf tool fixes from Thomas Gleixner:
       "A couple of fixes for perf tools:
      
         - Build system updates
      
         - Plug a memory leak in an error path of perf probe
      
         - Tear down probes correctly when adding fails
      
         - Fixes to the perf symbol handling
      
         - Fix ordering of event processing in buildid-list
      
         - Fix per DSO filtering in the histogram browser"
      
      * 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
        perf probe: Clear probe_trace_event when add_probe_trace_event() fails
        perf probe: Fix memory leaking on failure by clearing all probe_trace_events
        perf inject: Also re-pipe lost_samples event
        perf buildid-list: Requires ordered events
        perf symbols: Fix dso lookup by long name and missing buildids
        perf symbols: Allow forcing reading of non-root owned files by root
        perf hists browser: The dso can be obtained from popup_action->ms.map->dso
        perf hists browser: Fix 'd' hotkey action to filter by DSO
        perf symbols: Rebuild rbtree when adjusting symbols for kcore
        tools: Add a "make all" rule
        tools: Actually install tmon in the install rule
      727cde6c
    • Merge branch 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip · 069ec229
      Linus Torvalds authored
      Pull x86 fixes from Thomas Gleixner:
       "This update contains:
      
         - MPX updates for handling 32bit processes
      
         - A fix for a long standing bug in 32bit signal frame handling
           related to FPU/XSAVE state
      
         - Handle get_xsave_addr() correctly in KVM
      
         - Fix SMAP check under paravirtualization
      
         - Add a comment to the static function trace entry to avoid further
           confusion about the difference to dynamic tracing"
      
      * 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
        x86/cpu: Fix SMAP check in PVOPS environments
        x86/ftrace: Add comment on static function tracing
        x86/fpu: Fix get_xsave_addr() behavior under virtualization
        x86/fpu: Fix 32-bit signal frame handling
        x86/mpx: Fix 32-bit address space calculation
        x86/mpx: Do proper get_user() when running 32-bit binaries on 64-bit kernels
      069ec229
    • slab/slub: adjust kmem_cache_alloc_bulk API · 865762a8
      Jesper Dangaard Brouer authored
      Adjust kmem_cache_alloc_bulk API before we have any real users.
      
      Adjust the API to return type 'int' instead of the previous 'bool'.
      This is done to allow future extension of the bulk alloc API.
      
      A future extension could be to allow SLUB to stop at a page boundary, when
      specified by a flag, and then return the number of objects.
      
      The advantage of this approach is that it would make it easier to have
      bulk alloc run without local IRQs disabled, using a cmpxchg that
      "steals" the entire c->freelist or page->freelist.  To avoid
      overshooting we would stop processing at a slab-page boundary.  Else we
      always end up returning some objects at the cost of another cmpxchg.
      
      To keep future users of this API compatible when linking against an
      older kernel and using the new flag, we need to return the number of
      allocated objects as part of this API change.
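      
      The adjusted signature and a caller-side sketch (error handling
      abridged):
      
      	int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags,
      				  size_t size, void **p);
      
      	if (kmem_cache_alloc_bulk(s, GFP_KERNEL, nr, objs) != nr)
      		/* allocation failed or was partial; handle it */;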
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Cc: Vladimir Davydov <vdavydov@virtuozzo.com>
      Acked-by: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      865762a8
    • slub: add missing kmem cgroup support to kmem_cache_free_bulk · 03374518
      Jesper Dangaard Brouer authored
      The initial implementation missed kmem cgroup support in the
      kmem_cache_free_bulk() call; add it.
      
      If CONFIG_MEMCG_KMEM is not enabled, the compiler should be smart enough
      to not add any asm code.
      
      Incoming bulk free objects can belong to different kmem cgroups, and
      an object's free call can happen at a later point outside memcg
      context.  Thus, we need to keep the original kmem_cache, to correctly
      verify a memcg object against its "root_cache"
      (s->memcg_params.root_cache).
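      
      Conceptually, each object is routed back through its own cache; a
      sketch of the per-object step (cache_from_obj() resolves the memcg
      cache from the object):
      
      	/* inside the bulk-free loop: */
      	s = cache_from_obj(orig_s, object);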
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Reviewed-by: Vladimir Davydov <vdavydov@virtuozzo.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      03374518
    • slub: fix kmem cgroup bug in kmem_cache_alloc_bulk · 03ec0ed5
      Jesper Dangaard Brouer authored
      The call slab_pre_alloc_hook() interacts with memcg and is not allowed
      to be called several times inside the bulk alloc for loop, due to the
      call to memcg_kmem_get_cache().
      
      This would result in hitting the VM_BUG_ON in __memcg_kmem_get_cache.
      
      As suggested by Vladimir Davydov, change slab_post_alloc_hook() to be able
      to handle an array of objects.
      
      A subtle detail is that the loop iterator "i" in slab_post_alloc_hook()
      must have the same type (size_t) as the size argument.  This makes it
      easier for the compiler to realize that it can remove the loop when all
      debug statements inside the loop evaluate to nothing.  Note, this is
      only an issue because the kernel is compiled with the GCC option
      -fno-strict-overflow.
      
      In slab_alloc_node() the compiler inlines and optimizes the invocation
      of slab_post_alloc_hook(s, flags, 1, &object) by removing the loop and
      accessing the object directly.
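      
      The reworked hook takes the whole array; schematically:
      
      	static inline void slab_post_alloc_hook(struct kmem_cache *s,
      						gfp_t flags, size_t size,
      						void **p)
      	{
      		size_t i;	/* must match the type of 'size' */
      
      		for (i = 0; i < size; i++) {
      			/* per-object debug hooks; compile away when off */
      		}
      	}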
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Reported-by: Vladimir Davydov <vdavydov@virtuozzo.com>
      Suggested-by: Vladimir Davydov <vdavydov@virtuozzo.com>
      Reviewed-by: Vladimir Davydov <vdavydov@virtuozzo.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      03ec0ed5
    • slub: optimize bulk slowpath free by detached freelist · d0ecd894
      Jesper Dangaard Brouer authored
      This change focuses on improving the speed of object freeing in the
      "slowpath" of kmem_cache_free_bulk.
      
      The calls slab_free (fastpath) and __slab_free (slowpath) have been
      extended with support for bulk free, which amortizes the overhead of
      the (locked) cmpxchg_double.
      
      To use the new bulking feature, we build what I call a detached
      freelist.  The detached freelist takes advantage of three properties:
      
       1) the free function call owns the object that is about to be freed,
          thus writing into this memory is synchronization-free.
      
       2) many freelists can co-exist side-by-side in the same slab-page,
          each with a separate head pointer.
      
       3) it is the visibility of the head pointer that needs synchronization.
      
      Given these properties, the brilliant part is that the detached
      freelist can be constructed without any need for synchronization.  The
      freelist is constructed directly in the page objects, without any
      synchronization needed.  The detached freelist is allocated on the
      stack of the function call kmem_cache_free_bulk.  Thus, the freelist
      head pointer is not visible to other CPUs.
      
      All objects in a SLUB freelist must belong to the same slab-page.
      Thus, constructing the detached freelist is about matching objects
      that belong to the same slab-page.  The bulk free array is scanned in
      a progressive manner with a limited look-ahead facility.
      
      Kmem debug support is handled in the call of slab_free().
      
      Notice kmem_cache_free_bulk no longer needs to disable IRQs. This
      change only slowed down bulk free of a single object by approx 3
      cycles.
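      
      The on-stack descriptor is tiny; a sketch of its shape:
      
      	struct detached_freelist {
      		struct page *page;	/* slab-page all objects belong to */
      		void *tail;		/* last object in the list */
      		void *freelist;		/* head of the list */
      		int cnt;		/* number of objects */
      	};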
      
      Performance data:
       Benchmarked[1] obj size 256 bytes on CPU i7-4790K @ 4.00GHz
      
      SLUB fastpath single object quick reuse: 47 cycles(tsc) 11.931 ns
      
      To get stable and comparable numbers, the kernel has been booted with
      "slab_merge" (this also improves performance for larger bulk sizes).
      
      Performance data, compared against fallback bulking:
      
      bulk -  fallback bulk            - improvement with this patch
         1 -  62 cycles(tsc) 15.662 ns - 49 cycles(tsc) 12.407 ns- improved 21.0%
         2 -  55 cycles(tsc) 13.935 ns - 30 cycles(tsc) 7.506 ns - improved 45.5%
         3 -  53 cycles(tsc) 13.341 ns - 23 cycles(tsc) 5.865 ns - improved 56.6%
         4 -  52 cycles(tsc) 13.081 ns - 20 cycles(tsc) 5.048 ns - improved 61.5%
         8 -  50 cycles(tsc) 12.627 ns - 18 cycles(tsc) 4.659 ns - improved 64.0%
        16 -  49 cycles(tsc) 12.412 ns - 17 cycles(tsc) 4.495 ns - improved 65.3%
        30 -  49 cycles(tsc) 12.484 ns - 18 cycles(tsc) 4.533 ns - improved 63.3%
        32 -  50 cycles(tsc) 12.627 ns - 18 cycles(tsc) 4.707 ns - improved 64.0%
        34 -  96 cycles(tsc) 24.243 ns - 23 cycles(tsc) 5.976 ns - improved 76.0%
        48 -  83 cycles(tsc) 20.818 ns - 21 cycles(tsc) 5.329 ns - improved 74.7%
        64 -  74 cycles(tsc) 18.700 ns - 20 cycles(tsc) 5.127 ns - improved 73.0%
       128 -  90 cycles(tsc) 22.734 ns - 27 cycles(tsc) 6.833 ns - improved 70.0%
       158 -  99 cycles(tsc) 24.776 ns - 30 cycles(tsc) 7.583 ns - improved 69.7%
       250 - 104 cycles(tsc) 26.089 ns - 37 cycles(tsc) 9.280 ns - improved 64.4%
      
      Performance data, compared against the current in-kernel bulking:
      
      bulk - curr in-kernel  - improvement with this patch
         1 -  46 cycles(tsc) - 49 cycles(tsc) - improved (cycles:-3) -6.5%
         2 -  27 cycles(tsc) - 30 cycles(tsc) - improved (cycles:-3) -11.1%
         3 -  21 cycles(tsc) - 23 cycles(tsc) - improved (cycles:-2) -9.5%
         4 -  18 cycles(tsc) - 20 cycles(tsc) - improved (cycles:-2) -11.1%
         8 -  17 cycles(tsc) - 18 cycles(tsc) - improved (cycles:-1) -5.9%
        16 -  18 cycles(tsc) - 17 cycles(tsc) - improved (cycles: 1)  5.6%
        30 -  18 cycles(tsc) - 18 cycles(tsc) - improved (cycles: 0)  0.0%
        32 -  18 cycles(tsc) - 18 cycles(tsc) - improved (cycles: 0)  0.0%
        34 -  78 cycles(tsc) - 23 cycles(tsc) - improved (cycles:55) 70.5%
        48 -  60 cycles(tsc) - 21 cycles(tsc) - improved (cycles:39) 65.0%
        64 -  49 cycles(tsc) - 20 cycles(tsc) - improved (cycles:29) 59.2%
       128 -  69 cycles(tsc) - 27 cycles(tsc) - improved (cycles:42) 60.9%
       158 -  79 cycles(tsc) - 30 cycles(tsc) - improved (cycles:49) 62.0%
       250 -  86 cycles(tsc) - 37 cycles(tsc) - improved (cycles:49) 57.0%
      
      Performance with normal SLUB merging is significantly slower for
      larger bulking.  This is believed to (primarily) be an effect of not
      having to share the per-CPU data-structures, as tuning per-CPU size
      can achieve similar performance.
      
      bulk - slab_nomerge   -  normal SLUB merge
         1 -  49 cycles(tsc) - 49 cycles(tsc) - merge slower with cycles:0
         2 -  30 cycles(tsc) - 30 cycles(tsc) - merge slower with cycles:0
         3 -  23 cycles(tsc) - 23 cycles(tsc) - merge slower with cycles:0
         4 -  20 cycles(tsc) - 20 cycles(tsc) - merge slower with cycles:0
         8 -  18 cycles(tsc) - 18 cycles(tsc) - merge slower with cycles:0
        16 -  17 cycles(tsc) - 17 cycles(tsc) - merge slower with cycles:0
        30 -  18 cycles(tsc) - 23 cycles(tsc) - merge slower with cycles:5
        32 -  18 cycles(tsc) - 22 cycles(tsc) - merge slower with cycles:4
        34 -  23 cycles(tsc) - 22 cycles(tsc) - merge slower with cycles:-1
        48 -  21 cycles(tsc) - 22 cycles(tsc) - merge slower with cycles:1
        64 -  20 cycles(tsc) - 48 cycles(tsc) - merge slower with cycles:28
       128 -  27 cycles(tsc) - 57 cycles(tsc) - merge slower with cycles:30
       158 -  30 cycles(tsc) - 59 cycles(tsc) - merge slower with cycles:29
       250 -  37 cycles(tsc) - 56 cycles(tsc) - merge slower with cycles:19
      
      Joint work with Alexander Duyck.
      
      [1] https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/mm/slab_bulk_test01.c
      
      [akpm@linux-foundation.org: BUG_ON -> WARN_ON;return]
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
      Acked-by: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d0ecd894
    • slub: support for bulk free with SLUB freelists · 81084651
      Jesper Dangaard Brouer authored
      Make it possible to free a freelist with several objects by adjusting
      the API of slab_free() and __slab_free() to take head, tail and an
      objects counter (cnt).
      
      A NULL tail indicates a single-object free of the head object.  This
      allows compiler inline constant propagation in slab_free() and
      slab_free_freelist_hook() to avoid adding any overhead in the
      single-object free case.
      
      This allows a freelist with several objects (all within the same
      slab-page) to be freed using a single locked cmpxchg_double in
      __slab_free() and with an unlocked cmpxchg_double in slab_free().
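      
      The extended internal API, roughly:
      
      	static __always_inline void slab_free(struct kmem_cache *s,
      			struct page *page, void *head, void *tail,
      			int cnt, unsigned long addr);
      
      	/* single-object free stays: slab_free(s, page, x, NULL, 1, addr) */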
      
      Object debugging on the free path is also extended to handle these
      freelists.  When CONFIG_SLUB_DEBUG is enabled it will also detect if
      objects don't belong to the same slab-page.
      
      These changes are needed for the next patch to bulk free the detached
      freelists it introduces and constructs.
      
      Micro benchmarking showed no performance reduction due to this change
      when debugging is turned off (kernel compiled with CONFIG_SLUB_DEBUG,
      runtime debugging disabled).
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
      Acked-by: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      81084651
    • parisc: Map kernel text and data on huge pages · 41b85a11
      Helge Deller authored
      Adjust the linker script and map_pages() to map kernel text and data on
      physical 1MB huge/large pages.
      Signed-off-by: Helge Deller <deller@gmx.de>
      41b85a11
    • parisc: Add Huge Page and HUGETLBFS support · 736d2169
      Helge Deller authored
      This patch adds huge page support to allow userspace to allocate huge
      pages and to use hugetlbfs filesystem on 32- and 64-bit Linux kernels.
      A later patch will add kernel support to map kernel text and data on
      huge pages.
      
      The only requirement is that the kernel be compiled for a
      PA8X00 CPU (PA2.0 architecture); older PA1.X CPUs do not support
      variable page sizes. 64-bit kernels are compiled for PA2.0 by default.
      
      Technically on parisc multiple physical huge pages may be needed to
      emulate standard 2MB huge pages.
      Signed-off-by: Helge Deller <deller@gmx.de>
      736d2169
    • parisc: Use long branch to do_syscall_trace_exit · 337685e5
      Helge Deller authored
      Use the 22-bit instead of the 17-bit branch instruction on a 64-bit
      kernel to reach the do_syscall_trace_exit function from the gateway
      page. A huge-page-enabled kernel may need the additional branch
      distance bits.
      Signed-off-by: Helge Deller <deller@gmx.de>
      337685e5
    • parisc: Increase initial kernel mapping to 32MB on 64bit kernel · 332b42e4
      Helge Deller authored
      For the 64-bit kernel, the initial 16 MB kernel mapping might become
      too small if you build a kernel with many modules built-in and with
      kernel text and data areas mapped on huge pages.
      
      This patch increases the initial mapping to 32MB for 64bit kernels and
      keeps 16MB for 32bit kernels.
      Signed-off-by: Helge Deller <deller@gmx.de>
      332b42e4
    • parisc: Initialize the fault vector earlier in the boot process. · 4182d0cd
      Helge Deller authored
      A fault vector on parisc needs to be 2K aligned.  Furthermore the
      checksum of the fault vector needs to sum up to 0 which is being
      calculated and written at runtime.
      
      Up to now we aligned both PA20 and PA11 fault vectors on the same 4K
      page in order to easily write the checksum after having mapped the
      kernel read-only (by mapping this page only as read-write).
      But when we want to map the kernel text and data on huge pages this
      makes things harder.
      So, simplify it by aligning both fault vectors on 2K boundaries and
      write the checksum before we map the page read-only.
      Signed-off-by: Helge Deller <deller@gmx.de>
      4182d0cd
    • parisc: Add defines for Huge page support · 1f25ad26
      Helge Deller authored
      Huge pages on parisc will have the same size as one pmd table: with
      4K kernel pages that is 2MB on a 64-bit kernel and 4MB on a 32-bit
      kernel.
      
      Since parisc does not physically support a 2MB huge page size, emulate
      it with two consecutive 1MB pages instead. Keeping the same huge
      page size as one pmd will allow us to add transparent huge page support
      later on.
      
      Bit 21 in the pte flags was unused and will now be used to mark a page
      as a huge page (_PAGE_HPAGE_BIT).
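      
      Schematically (the exact pte encoding is arch-specific and abridged
      here):
      
      	#define _PAGE_HPAGE_BIT	21	/* marks a pte as a huge page */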
      Signed-off-by: Helge Deller <deller@gmx.de>
      1f25ad26