  1. 29 Nov, 2017 1 commit
    • sched: Stop resched_cpu() from sending IPIs to offline CPUs · a0982dfa
      Paul E. McKenney authored
      The rcutorture test suite occasionally provokes a splat due to invoking
      resched_cpu() on an offline CPU:
      
      WARNING: CPU: 2 PID: 8 at /home/paulmck/public_git/linux-rcu/arch/x86/kernel/smp.c:128 native_smp_send_reschedule+0x37/0x40
      Modules linked in:
      CPU: 2 PID: 8 Comm: rcu_preempt Not tainted 4.14.0-rc4+ #1
      Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Ubuntu-1.8.2-1ubuntu1 04/01/2014
      task: ffff902ede9daf00 task.stack: ffff96c50010c000
      RIP: 0010:native_smp_send_reschedule+0x37/0x40
      RSP: 0018:ffff96c50010fdb8 EFLAGS: 00010096
      RAX: 000000000000002e RBX: ffff902edaab4680 RCX: 0000000000000003
      RDX: 0000000080000003 RSI: 0000000000000000 RDI: 00000000ffffffff
      RBP: ffff96c50010fdb8 R08: 0000000000000000 R09: 0000000000000001
      R10: 0000000000000000 R11: 00000000299f36ae R12: 0000000000000001
      R13: ffffffff9de64240 R14: 0000000000000001 R15: ffffffff9de64240
      FS:  0000000000000000(0000) GS:ffff902edfc80000(0000) knlGS:0000000000000000
      CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      CR2: 00000000f7d4c642 CR3: 000000001e0e2000 CR4: 00000000000006e0
      Call Trace:
       resched_curr+0x8f/0x1c0
       resched_cpu+0x2c/0x40
       rcu_implicit_dynticks_qs+0x152/0x220
       force_qs_rnp+0x147/0x1d0
       ? sync_rcu_exp_select_cpus+0x450/0x450
       rcu_gp_kthread+0x5a9/0x950
       kthread+0x142/0x180
       ? force_qs_rnp+0x1d0/0x1d0
       ? kthread_create_on_node+0x40/0x40
       ret_from_fork+0x27/0x40
      Code: 14 01 0f 92 c0 84 c0 74 14 48 8b 05 14 4f f4 00 be fd 00 00 00 ff 90 a0 00 00 00 5d c3 89 fe 48 c7 c7 38 89 ca 9d e8 e5 56 08 00 <0f> ff 5d c3 0f 1f 44 00 00 8b 05 52 9e 37 02 85 c0 75 38 55 48
      ---[ end trace 26df9e5df4bba4ac ]---
      
      This splat cannot be generated by expedited grace periods because they
      always invoke resched_cpu() on the current CPU, which is good because
      expedited grace periods require that resched_cpu() unconditionally
      succeed.  However, other parts of RCU can tolerate resched_cpu() acting
      as a no-op, at least as long as it doesn't happen too often.
      
      This commit therefore makes resched_cpu() invoke resched_curr() only if
      the CPU is either online or is the current CPU.
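      
      The resulting logic is roughly the following (a sketch based on the
      description above, not the verbatim diff):
      
         void resched_cpu(int cpu)
         {
                 struct rq *rq = cpu_rq(cpu);
                 unsigned long flags;
         
                 raw_spin_lock_irqsave(&rq->lock, flags);
                 /* Only poke CPUs that can legitimately be rescheduled. */
                 if (cpu_online(cpu) || cpu == smp_processor_id())
                         resched_curr(rq);
                 raw_spin_unlock_irqrestore(&rq->lock, flags);
         }
      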
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
  2. 09 Nov, 2017 1 commit
    • sched/core: Optimize sched_feat() for !CONFIG_SCHED_DEBUG builds · 765cc3a4
      Patrick Bellasi authored
      When the kernel is compiled with !CONFIG_SCHED_DEBUG, we expect all
      SCHED_FEATs to be turned into compile-time constants that the compiler
      can propagate and optimize on.
      
      Specifically, we expect that code blocks like this:
      
         if (sched_feat(FEATURE_NAME) [&& <other_conditions>]) {
      	/* FEATURE CODE */
         }
      
      are turned into dead code when FEATURE_NAME defaults to FALSE, and are
      thus removed by the compiler from the final image.
      
      For this mechanism to work properly, the compiler needs full access,
      from each translation unit, to the value produced by the sched_feat
      macro. This macro is defined as:
      
         #define sched_feat(x) (sysctl_sched_features & (1UL << __SCHED_FEAT_##x))
      
      and thus, the compiler can optimize that code only if the value of
      sysctl_sched_features is visible within each translation unit.
      
      Since:
      
         029632fb ("sched: Make separate sched*.c translation units")
      
      the scheduler code has been split into separate translation units.
      However, the definition of sysctl_sched_features lives in
      kernel/sched/core.c, while all the other scheduler modules see it only
      via kernel/sched/sched.h as:
      
         extern const_debug unsigned int sysctl_sched_features;
      
      Unfortunately, an extern reference does not allow the compiler to
      perform constant propagation. Thus, on a !CONFIG_SCHED_DEBUG kernel we
      still end up with code that loads from memory and (potentially) jumps
      over a chunk of code.
      
      This overhead is unavoidable when sched_features can be turned on and
      off at run-time. However, that is not the case for "production" kernels
      compiled with !CONFIG_SCHED_DEBUG: there, sysctl_sched_features is a
      constant that cannot change at run-time, so the memory loads and jumps
      can be avoided altogether.
      
      This patch fixes the !CONFIG_SCHED_DEBUG case by declaring a local
      version of the sysctl_sched_features constant in each translation unit.
      This ultimately allows the compiler to perform constant propagation and
      dead-code elimination.
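      
      A sketch of the resulting arrangement in kernel/sched/sched.h (the
      shape of the change, assuming the features.h x-macro list already used
      by the scheduler):
      
         #ifdef CONFIG_SCHED_DEBUG
         extern const_debug unsigned int sysctl_sched_features;
         #else
         /* Build a per-translation-unit constant from the feature list, so
          * sched_feat() folds to a compile-time constant in every TU. */
         #define SCHED_FEAT(name, enabled) \
                 (1UL << __SCHED_FEAT_##name) * enabled |
         static const_debug __maybe_unused unsigned int sysctl_sched_features =
         #include "features.h"
                 0;
         #undef SCHED_FEAT
         #endif
      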
      
      Tests were run with !CONFIG_SCHED_DEBUG on v4.14-rc8, with and without
      the patch, using 30 iterations of:
      
         perf bench sched messaging --pipe --thread --group 4 --loop 50000
      
      on a 40-core Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz, using the
      powersave governor to rule out variations due to frequency scaling.
      
      Statistics on the reported completion time:
      
                         count     mean       std     min       99%     max
        v4.14-rc8         30.0  15.7831  0.176032  15.442  16.01226  16.014
        v4.14-rc8+patch   30.0  15.5033  0.189681  15.232  15.93938  15.962
      
      ... show a 1.8% speedup in average completion time and a 0.5% speedup
      at the 99th percentile.
      Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
      Signed-off-by: Chris Redpath <chris.redpath@arm.com>
      Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
      Reviewed-by: Brendan Jackman <brendan.jackman@arm.com>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Juri Lelli <juri.lelli@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Morten Rasmussen <morten.rasmussen@arm.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vincent Guittot <vincent.guittot@linaro.org>
      Link: http://lkml.kernel.org/r/20171108184101.16006-1-patrick.bellasi@arm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  3. 27 Oct, 2017 4 commits
  4. 26 Oct, 2017 1 commit
    • cgroup, sched: Move basic cpu stats from cgroup.stat to cpu.stat · d41bf8c9
      Tejun Heo authored
      The basic cpu stat is currently shown with a "cpu." prefix in
      cgroup.stat, and the same information is duplicated in cpu.stat when
      the cpu controller is enabled.  This is ugly and not very scalable, as
      we want to expand the coverage of stat information which is always
      available.
      
      This patch makes the cgroup core always create the "cpu.stat" file,
      show the basic cpu stats there, and call the cpu controller to show the
      extra stats when enabled.  This ensures that the same information isn't
      presented in multiple places and makes future expansion of the basic
      stats easier.
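      
      For illustration, the always-available portion of "cpu.stat" then
      carries the basic usage counters in microseconds (illustrative values):
      
         usage_usec 211523
         user_usec 134098
         system_usec 77425
      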
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  5. 09 Oct, 2017 3 commits
    • rcutorture: Dump writer stack if stalled · 0032f4e8
      Paul E. McKenney authored
      Right now, rcutorture warns if an rcu_torture_writer() kthread stalls,
      but this warning is not always all that helpful.  This commit therefore
      makes the first such warning include a stack dump.
      
      This in turn requires that sched_show_task() be exported to GPL modules,
      so this commit makes that change as well.
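      
      The export amounts to a single line next to the function's definition
      (sketch):
      
         EXPORT_SYMBOL_GPL(sched_show_task);
      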
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    • sched,rcu: Make cond_resched() provide RCU quiescent state · f79c3ad6
      Paul E. McKenney authored
      There is some confusion as to which of cond_resched() or
      cond_resched_rcu_qs() should be added to long in-kernel loops.
      This commit therefore eliminates the decision by adding RCU quiescent
      states to cond_resched().  This commit also simplifies the code that
      used to interact with cond_resched_rcu_qs(), and that now interacts with
      cond_resched(), to reduce its overhead.  This reduction is necessary to
      allow the heavier-weight cond_resched_rcu_qs() mechanism to be invoked
      everywhere that cond_resched() is invoked.
      
      Part of that reduction in overhead converts the jiffies_till_sched_qs
      kernel parameter to read-only at runtime, thus eliminating the need for
      bounds checking.
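      
      For CONFIG_PREEMPT=n kernels the result is roughly the following (a
      sketch of the shape of the change, not the verbatim diff):
      
         int __sched _cond_resched(void)
         {
                 if (should_resched(0)) {
                         preempt_schedule_common();
                         return 1;
                 }
                 /* No reschedule needed; still report an RCU quiescent state. */
                 rcu_all_qs();
                 return 0;
         }
      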
      Reported-by: Michal Hocko <mhocko@kernel.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      [ paulmck: Keep PREEMPT=n cond_resched a no-op, per Peter Zijlstra. ]
    • sched: Make resched_cpu() unconditional · 7c2102e5
      Paul E. McKenney authored
      The current implementation of synchronize_sched_expedited() incorrectly
      assumes that resched_cpu() is unconditional, which it is not.  This means
      that synchronize_sched_expedited() can hang when resched_cpu()'s trylock
      fails as follows (analysis by Neeraj Upadhyay):
      
      o	CPU1 is waiting for expedited wait to complete:
      
      	sync_rcu_exp_select_cpus
      	     rdp->exp_dynticks_snap & 0x1   // returns 1 for CPU5
      	     IPI sent to CPU5
      
      	synchronize_sched_expedited_wait
      		 ret = swait_event_timeout(rsp->expedited_wq,
      					   sync_rcu_preempt_exp_done(rnp_root),
      					   jiffies_stall);
      
      	expmask = 0x20, CPU 5 in idle path (in cpuidle_enter())
      
      o	CPU5 handles IPI and fails to acquire rq lock.
      
      	Handles IPI
      	     sync_sched_exp_handler
      		 resched_cpu
      		     returns after failing to trylock rq->lock
      		 need_resched is not set
      
      o	CPU5 calls rcu_idle_enter() and, as need_resched is not set, goes to
      	idle (schedule() is not called).
      
      o	CPU 1 reports RCU stall.
      
      Given that resched_cpu() is now used only by RCU, this commit fixes the
      assumption by making resched_cpu() unconditional.
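      
      In kernel/sched/core.c the change boils down to replacing the trylock
      with an unconditional lock acquisition (sketch):
      
         /* Before: a failed trylock silently dropped the reschedule. */
         if (!raw_spin_trylock_irqsave(&rq->lock, flags))
                 return;
      
         /* After: wait for the lock so the reschedule always happens. */
         raw_spin_lock_irqsave(&rq->lock, flags);
      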
      Reported-by: Neeraj Upadhyay <neeraju@codeaurora.org>
      Suggested-by: Neeraj Upadhyay <neeraju@codeaurora.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: stable@vger.kernel.org
  6. 29 Sep, 2017 4 commits
    • sched: Implement interface for cgroup unified hierarchy · 0d593634
      Tejun Heo authored
      There are a couple of interface issues which can be addressed in the
      cgroup2 interface.
      
      * Stats from cpuacct being reported separately from the cpu stats.
      
      * Use of different time units.  Writable control knobs use
        microseconds, some stat fields use nanoseconds while other cpuacct
        stat fields use centiseconds.
      
      * Control knobs which can't be used in the root cgroup still show up
        in the root.
      
      * Control knob names and semantics aren't consistent with other
        controllers.
      
      This patchset implements the cpu controller's interface on cgroup2,
      adhering to the controller file conventions described in
      Documentation/cgroups/cgroup-v2.txt.  Overall, the following changes
      are made.
      
      * cpuacct is implicitly enabled and disabled by cpu, and its
        information is reported through "cpu.stat", which now uses
        microseconds for all time durations.  All time duration fields now
        have "_usec" appended to them for clarity.
      
        Note that cpuacct.usage_percpu is currently not included in
        "cpu.stat".  If this information is actually called for, it will be
        added later.
      
      * "cpu.shares" is replaced with "cpu.weight" and operates on the
        standard scale defined by CGROUP_WEIGHT_MIN/DFL/MAX (1, 100, 10000).
        The weight is scaled to scheduler weight so that 100 maps to 1024
        and the ratio relationship is preserved - if weight is W and its
        scaled value is S, W / 100 == S / 1024.  While the mapped range is a
        bit smaller than the orignal scheduler weight range, the dead zones
        on both sides are relatively small and covers wider range than the
        nice value mappings.  This file doesn't make sense in the root
        cgroup and isn't created on root.
      
      * "cpu.weight.nice" is added. When read, it reads back the nice value
        which is closest to the current "cpu.weight".  When written, it sets
        "cpu.weight" to the weight value which matches the nice value.  This
        makes it easy to configure cgroups when they're competing against
        threads in threaded subtrees.
      
      * "cpu.cfs_quota_us" and "cpu.cfs_period_us" are replaced by "cpu.max"
        which contains both quota and period.
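      
      A sketch of the weight mapping referenced above (hypothetical helper
      name, shown only to make the arithmetic concrete):
      
         static unsigned long cgroup2_weight_to_shares(unsigned long weight)
         {
                 /* W / 100 == S / 1024, hence S = W * 1024 / 100:
                  * 1 -> 10, 100 -> 1024, 10000 -> 102400. */
                 return weight * 1024 / 100;
         }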
      
      v4: - Use cgroup2 basic usage stat as the information source instead
            of cpuacct.
      
      v3: - Added "cpu.weight.nice" to allow using nice values when
            configuring the weight.  The feature was requested by PeterZ.
          - Merge the patch to enable threaded support on cpu and cpuacct.
          - Dropped the bits about getting rid of cpuacct from patch
            description as there is a pretty strong case for making cpuacct
            an implicit controller so that basic cpu usage stats are always
            available.
          - Documentation updated accordingly.  "cpu.rt.max" section is
            dropped for now.
      
      v2: - cpu_stats_show() was incorrectly using CONFIG_FAIR_GROUP_SCHED
            for CFS bandwidth stats and also using raw division for u64.
            Use CONFIG_CFS_BANDWIDTH and do_div() instead.  "cpu.rt.max" is
            not included yet.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
    • sched: Misc preps for cgroup unified hierarchy interface · a1f7164c
      Tejun Heo authored
      Make the following changes in preparation for the cpu controller
      interface implementation for cgroup2.  This patch doesn't cause any
      functional differences.
      
      * s/cpu_stats_show()/cpu_cfs_stat_show()/
      
      * s/cpu_files/cpu_legacy_files/
      
      v2: Dropped cpuacct changes as it won't be used by cpu controller
          interface anymore.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
    • sched/fair: Use reweight_entity() for set_user_nice() · 9059393e
      Vincent Guittot authored
      Now that we directly change load_avg and propagate that change into
      the sums, sys_nice() and co should do the same; otherwise it's possible
      to confuse load accounting when we migrate near the weight change.
      Fixes-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
      [ Added changelog, fixed the call condition. ]
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Link: http://lkml.kernel.org/r/20170517095045.GA8420@linaro.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • sched/debug: Ignore TASK_IDLE for SysRq-W · 5d68cc95
      Peter Zijlstra authored
      Markus reported that tasks in TASK_IDLE state are reported by SysRq-W,
      which results in undesirable clutter.
      Reported-by: Markus Trippelsdorf <markus@trippelsdorf.de>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  7. 20 Sep, 2017 1 commit
  8. 12 Sep, 2017 1 commit
  9. 07 Sep, 2017 1 commit
    • sched/cpuset/pm: Fix cpuset vs. suspend-resume bugs · 50e76632
      Peter Zijlstra authored
      Cpusets vs. suspend-resume is _completely_ broken. And it got noticed
      because it now resulted in non-cpuset usage breaking too.
      
      On suspend cpuset_cpu_inactive() doesn't call into
      cpuset_update_active_cpus() because it doesn't want to move tasks about,
      there is no need, all tasks are frozen and won't run again until after
      we've resumed everything.
      
      But this means that when we finally do call into
      cpuset_update_active_cpus() after resuming the last frozen cpu in
      cpuset_cpu_active(), the top_cpuset will not differ from the
      cpu_active_mask, and thus it will not in fact do _anything_.
      
      So the cpuset configuration will not be restored. This was largely
      hidden because we would unconditionally create identity domains and
      mobile users would not in fact use cpusets much. And servers that do
      use cpusets tend not to suspend-resume much.
      
      An additional problem is that we'd not in fact wait for the cpuset work
      to finish before resuming the tasks, allowing spurious migrations
      outside of the specified domains.
      
      Fix the rebuild by introducing cpuset_force_rebuild() and fix the
      ordering with cpuset_wait_for_hotplug().
      Reported-by: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: <stable@vger.kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Fixes: deb7aa30 ("cpuset: reorganize CPU / memory hotplug handling")
      Link: http://lkml.kernel.org/r/20170907091338.orwxrqkbfkki3c24@hirez.programming.kicks-ass.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  10. 17 Aug, 2017 1 commit
    • membarrier: Provide expedited private command · 22e4ebb9
      Mathieu Desnoyers authored
      Implement MEMBARRIER_CMD_PRIVATE_EXPEDITED with IPIs, using a cpumask
      built from all runqueues for which the current thread's mm is the same
      as that of the thread calling sys_membarrier. It executes faster than
      the non-expedited variant (no blocking). It also works on NOHZ_FULL
      configurations.
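      
      From userspace the command is reached via syscall(2); a minimal usage
      sketch against this 4.14-era interface (no registration step existed
      yet):
      
         #include <linux/membarrier.h>
         #include <stdio.h>
         #include <sys/syscall.h>
         #include <unistd.h>
      
         int main(void)
         {
                 /* Expedited barrier across all threads of this process. */
                 if (syscall(__NR_membarrier,
                             MEMBARRIER_CMD_PRIVATE_EXPEDITED, 0) != 0)
                         perror("membarrier");
                 return 0;
         }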
      
      Scheduler-wise, it requires a memory barrier before and after context
      switching between processes (which have different mm). The memory
      barrier before context switch is already present. For the barrier after
      context switch:
      
      * Our TSO archs can do RELEASE without being a full barrier. Look at
        x86 spin_unlock() being a regular STORE for example.  But for those
        archs, all atomics imply smp_mb and all of them have atomic ops in
        switch_mm() for mm_cpumask(), and on x86 the CR3 load acts as a full
        barrier.
      
      * Of all the weakly ordered machines, only ARM64 and PPC can do
        RELEASE; the rest do indeed do smp_mb(), so there the spin_unlock()
        is a full barrier and we're good.
      
      * ARM64 has a very heavy barrier in switch_to(), which suffices.
      
      * PPC just removed its barrier from switch_to(), but appears to be
        talking about adding something to switch_mm(). So add a
        smp_mb__after_unlock_lock() for now, until this is settled on the PPC
        side.
      
      Changes since v3:
      - Properly document the memory barriers provided by each architecture.
      
      Changes since v2:
      - Address comments from Peter Zijlstra,
      - Add smp_mb__after_unlock_lock() after finish_lock_switch() in
        finish_task_switch() to add the memory barrier we need after storing
        to rq->curr. This is much simpler than the previous approach relying
        on atomic_dec_and_test() in mmdrop(), which actually added a memory
        barrier in the common case of switching between userspace processes.
      - Return -EINVAL when MEMBARRIER_CMD_SHARED is used on a nohz_full
        kernel, rather than having the whole membarrier system call returning
        -ENOSYS. Indeed, CMD_PRIVATE_EXPEDITED is compatible with nohz_full.
        Adapt the CMD_QUERY mask accordingly.
      
      Changes since v1:
      - move membarrier code under kernel/sched/ because it uses the
        scheduler runqueue,
      - only add the barrier when we switch from a kernel thread. The case
        where we switch from a user-space thread is already handled by
        the atomic_dec_and_test() in mmdrop().
      - add a comment to mmdrop() documenting the requirement on the implicit
        memory barrier.
      
      CC: Peter Zijlstra <peterz@infradead.org>
      CC: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      CC: Boqun Feng <boqun.feng@gmail.com>
      CC: Andrew Hunter <ahh@google.com>
      CC: Maged Michael <maged.michael@gmail.com>
      CC: gromer@google.com
      CC: Avi Kivity <avi@scylladb.com>
      CC: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      CC: Paul Mackerras <paulus@samba.org>
      CC: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Tested-by: Dave Watson <davejwatson@fb.com>
  11. 11 Aug, 2017 1 commit
  12. 10 Aug, 2017 4 commits
  13. 28 Jul, 2017 1 commit
    • sched: Allow migrating kthreads into online but inactive CPUs · 955dbdf4
      Tejun Heo authored
      Per-cpu workqueues have been tripping CPU affinity sanity checks while
      a CPU is being offlined.  A per-cpu kworker ends up running on a CPU
      which isn't its target CPU while the CPU is online but inactive.
      
      While the scheduler allows kthreads to wake up on an online but
      inactive CPU, it doesn't allow a running kthread to be migrated to
      such a CPU, which leads to an odd situation where setting affinity on
      a sleeping kthread and on a running one leads to different results.
      
      Each mem-reclaim workqueue has one rescuer which guarantees forward
      progress and the rescuer needs to bind itself to the CPU which needs
      help in making forward progress; however, due to the above issue,
      while set_cpus_allowed_ptr() succeeds, the rescuer doesn't end up on
      the correct CPU if the CPU is in the process of going offline,
      tripping the sanity check and executing the work item on the wrong
      CPU.
      
      This patch updates __migrate_task() so that kthreads can be migrated
      into an inactive but online CPU.
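      
      The check in __migrate_task() then distinguishes kthreads from
      ordinary tasks, roughly as follows (sketch):
      
         if (p->flags & PF_KTHREAD) {
                 /* Kthreads may be migrated to any online CPU. */
                 if (!cpu_online(dest_cpu))
                         return rq;
         } else {
                 /* Userspace tasks still require an active CPU. */
                 if (!cpu_active(dest_cpu))
                         return rq;
         }
      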
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-by: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Reported-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  14. 25 Jul, 2017 1 commit
  15. 23 Jun, 2017 3 commits
  16. 20 Jun, 2017 2 commits
    • sched/wait: Move bit_wait_table[] and related functionality from sched/core.c to sched/wait_bit.c · 5822a454
      Ingo Molnar authored
      The key hashed waitqueue data structures and their initialization were
      done in the main scheduler file for no good reason; move them to
      sched/wait_bit.c instead.
      
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • sched/wait: Rename wait_queue_t => wait_queue_entry_t · ac6424b9
      Ingo Molnar authored
      Rename:
      
      	wait_queue_t		=>	wait_queue_entry_t
      
      'wait_queue_t' was always a slight misnomer: its name implies that it's a "queue",
      but in reality it's a queue *entry*. The 'real' queue is the wait queue head,
      which had to carry the name.
      
      Start sorting this out by renaming it to 'wait_queue_entry_t'.
      
      This also allows the real structure name 'struct __wait_queue' to
      lose its double underscore and become 'struct wait_queue_entry',
      which is the more canonical nomenclature for such data types.
      
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  17. 11 Jun, 2017 1 commit
  18. 08 Jun, 2017 9 commits