- 11 Apr, 2011 2 commits
-
-
Peter Zijlstra authored
The code in update_group_power() does what init_sched_groups_power() does and more, so remove the special init_ code and call the generic code instead. Also move the sd->span_weight initialization because update_group_power() needs it. Signed-off-by:
Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Mike Galbraith <efault@gmx.de> Cc: Nick Piggin <npiggin@kernel.dk> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Link: http://lkml.kernel.org/r/20110407122941.875856012@chello.nl Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
Peter Zijlstra authored
Non-weak static functions clearly are not arch specific, so remove the arch_ prefix. Signed-off-by:
Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Mike Galbraith <efault@gmx.de> Cc: Nick Piggin <npiggin@kernel.dk> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> Link: http://lkml.kernel.org/r/20110407122941.820460566@chello.nl Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
- 05 Apr, 2011 1 commit
-
-
Peter Zijlstra authored
Instead of the possible multiple-evaluation of num_online_cpus() in rebalance_domains() that Linus reported, avoid it altogether in the normal case since it's implemented with a Hamming weight function over a cpu bitmask which can be darn expensive for those with big iron. This also makes it cleaner, smaller and documents the code. Reported-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Peter Zijlstra <a.p.zijlstra@chello.nl> LKML-Reference: <1301991265.2225.12.camel@twins> Signed-off-by:
Ingo Molnar <mingo@elte.hu>
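For context on why the call is expensive: at the time, num_online_cpus() reduced to a Hamming weight (population count) over the online cpumask, roughly the following sketch of the header definition (not guaranteed verbatim):

    /* Each call walks the online cpumask and counts the set bits, which is
     * not cheap on machines with very large masks. */
    #define num_online_cpus()  cpumask_weight(cpu_online_mask)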
-
- 31 Mar, 2011 2 commits
-
-
Lucas De Marchi authored
Fixes generated by 'codespell' and manually reviewed. Signed-off-by:
Lucas De Marchi <lucas.demarchi@profusion.mobi>
-
Dario Faggioli authored
sched_setscheduler() (in sched.c) is called in order to change the scheduling policy and/or the real-time priority of a task. Thus, if we find out that neither of those is actually being modified, it is possible to return earlier and save the overhead of a full deactivate+activate cycle of the task in question. Besides that, if we have more than one SCHED_FIFO task with the same priority on the same rq (which means they share the same priority queue), having one of them change its position in the priority queue because of a sched_setscheduler() (as happens by means of the deactivate+activate) that does not actually change the priority violates POSIX, which states, for SCHED_FIFO: "If a thread whose policy or priority has been modified by pthread_setschedprio() is a running thread or is runnable, the effect on its position in the thread list depends on the direction of the modification, as follows: a. <...> b. If the priority is unchanged, the thread does not change position in the thread list. c. <...>" http://pubs.opengroup.org/onlinepubs/009695399/functions/xsh_chap02_08.html (ed: And the POSIX specification here does, briefly and somewhat unexpectedly, match what common sense tells us as well.) Signed-off-by:
Dario Faggioli <raistlin@linux.it> Signed-off-by:
Peter Zijlstra <a.p.zijlstra@chello.nl> LKML-Reference: <1300971618.3960.82.camel@Palantir> Signed-off-by:
Ingo Molnar <mingo@elte.hu>
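A minimal sketch of the early-return idea described above (simplified, not the exact kernel code; the unlock helper is an assumption):

    /* If neither the policy nor the RT priority actually changes, skip the
     * deactivate+activate cycle so the task keeps its position in the
     * priority queue, as POSIX requires. */
    if (policy == p->policy &&
        (!rt_policy(policy) || param->sched_priority == p->rt_priority)) {
            task_rq_unlock(rq, &flags);     /* assumed unlock helper */
            return 0;
    }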
-
- 24 Mar, 2011 1 commit
-
-
Serge E. Hallyn authored
CAP_IPC_OWNER and CAP_IPC_LOCK can be checked against current_user_ns(), because the resource comes from current's own ipc namespace. setuid/setgid are to uids in own namespace, so again checks can be against current_user_ns(). Changelog: Jan 11: Use task_ns_capable() in place of sched_capable(). Jan 11: Use nsown_capable() as suggested by Bastian Blank. Jan 11: Clarify (hopefully) some logic in futex and sched.c Feb 15: use ns_capable for ipc, not nsown_capable Feb 23: let copy_ipcs handle setting ipc_ns->user_ns Feb 23: pass ns down rather than taking it from current [akpm@linux-foundation.org: coding-style fixes] Signed-off-by:
Serge E. Hallyn <serge.hallyn@canonical.com> Acked-by:
"Eric W. Biederman" <ebiederm@xmission.com> Acked-by:
Daniel Lezcano <daniel.lezcano@free.fr> Acked-by:
David Howells <dhowells@redhat.com> Cc: James Morris <jmorris@namei.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
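As an illustration of the pattern (a sketch, not the exact diff): where the IPC code previously checked a capability against the initial user namespace, it now checks against the user namespace that owns the resource, and nsown_capable() is shorthand for a check against current_user_ns().

    /* Before: global capability check. */
    if (!capable(CAP_IPC_LOCK))
            return -EPERM;

    /* After (sketch): check against the user namespace owning the ipc
     * namespace the resource lives in. */
    if (!ns_capable(ns->user_ns, CAP_IPC_LOCK))
            return -EPERM;

    /* For uid/gid changes in the caller's own namespace:
     * nsown_capable(cap) == ns_capable(current_user_ns(), cap). */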
-
- 23 Mar, 2011 1 commit
-
-
Sergey Senozhatsky authored
Signed-off-by:
Sergey Senozhatsky <sergey.senozhatsky@gmail.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Peter Zijlstra <peterz@infradead.org> LKML-Reference: <20110323111722.GA4244@swordfish.minsk.epam.com> Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
- 19 Mar, 2011 1 commit
-
-
Randy Dunlap authored
Add missing function parameters for yield_to(): Warning(kernel/sched.c:5470): No description found for parameter 'p' Warning(kernel/sched.c:5470): No description found for parameter 'preempt' Signed-off-by:
Randy Dunlap <randy.dunlap@oracle.com> Cc: Peter Zijlstra <peterz@infradead.org> LKML-Reference: <20110318093453.8f7489a4.randy.dunlap@oracle.com> Signed-off-by:
Ingo Molnar <mingo@elte.hu>
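The warnings are silenced by giving yield_to() a kernel-doc comment that names both parameters; roughly (wording approximate):

    /**
     * yield_to - yield the current processor to another thread in
     *            your thread group, or accelerate that thread toward
     *            the processor it's on.
     * @p: target task
     * @preempt: whether task preemption due to the yield is allowed
     */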
-
- 16 Mar, 2011 2 commits
-
-
Randy Dunlap authored
Fix kernel-doc warning for runqueue_is_locked(): Warning(kernel/sched.c:664): missing initial short description on line: Signed-off-by:
Randy Dunlap <randy.dunlap@oracle.com> Cc: Ingo Molnar <mingo@elte.hu> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Randy Dunlap authored
Fix kernel-doc warning for runqueue_is_locked(): Warning(kernel/sched.c:664): missing initial short description Signed-off-by:
Randy Dunlap <randy.dunlap@oracle.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> LKML-Reference: <20110315161230.c4e1e8e3.rdunlap@xenotime.net> Signed-off-by:
Ingo Molnar <mingo@elte.hu>
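The fix is simply to give the kernel-doc block an initial short-description line; roughly (wording approximate):

    /**
     * runqueue_is_locked - Returns true if the current cpu runqueue is locked
     * @cpu: the processor in question.
     *
     * This interface allows printk to be called with the runqueue lock
     * held and know whether or not it is OK to wake up the klogd.
     */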
-
- 10 Mar, 2011 2 commits
-
-
Trond Myklebust authored
Although they run as rpciod background tasks, under normal operation (i.e. no SIGKILL), functions like nfs_sillyrename(), nfs4_proc_unlck() and nfs4_do_close() want to be fully synchronous. This means that when we exit, we want all references to the rpc_task to be gone, and we want any dentry references etc. held by that task to be released. For this reason these functions call __rpc_wait_for_completion_task(), followed by rpc_put_task() in the expectation that the latter will be releasing the last reference to the rpc_task, and thus ensuring that the callback_ops->rpc_release() has been called synchronously. This patch fixes a race which exists due to the fact that rpciod calls rpc_complete_task() (in order to wake up the callers of __rpc_wait_for_completion_task()) and then subsequently calls rpc_put_task() without ensuring that these two steps are done atomically. In order to avoid adding new spin locks, the patch uses the existing waitqueue spin lock to order the rpc_task reference count releases between the waiting process and rpciod. The common case where nobody is waiting for completion is optimised for by checking if the RPC_TASK_ASYNC flag is cleared and/or if the rpc_task reference count is 1: in those cases we drop trying to grab the spin lock, and immediately free up the rpc_task. Those few processes that need to put the rpc_task from inside an asynchronous context and that do not care about ordering are given a new helper: rpc_put_task_async(). Signed-off-by:
Trond Myklebust <Trond.Myklebust@netapp.com>
-
Jens Axboe authored
This patch adds support for creating a queuing context outside of the queue itself. This enables us to batch up pieces of IO before grabbing the block device queue lock and submitting them to the IO scheduler. The context is created on the stack of the process and assigned in the task structure, so that we can auto-unplug it if we hit a schedule event. The current queue plugging happens implicitly if IO is submitted to an empty device, yet callers have to remember to unplug that IO when they are going to wait for it. This is an ugly API and has caused bugs in the past. Additionally, it requires hacks in the vm (->sync_page() callback) to handle that logic. By switching to an explicit plugging scheme we make the API a lot nicer and can get rid of the ->sync_page() hack in the vm. Signed-off-by:
Jens Axboe <jaxboe@fusionio.com>
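From the caller's side the new API looks roughly like this (a usage sketch; the helper that issues the IO is hypothetical):

    struct blk_plug plug;

    blk_start_plug(&plug);      /* on-stack context: start batching this task's IO */
    submit_my_bios();           /* hypothetical helper queuing several bios */
    blk_finish_plug(&plug);     /* flush the batched requests to the device */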
-
- 05 Mar, 2011 1 commit
-
-
Arnd Bergmann authored
This removes the implementation of the big kernel lock, at last. A lot of people have worked on this in the past, so the credit for this patch should be with everyone who participated in the hunt. The names on the Cc list are the people that were the most active in this, according to the recorded git history, in alphabetical order. Signed-off-by:
Arnd Bergmann <arnd@arndb.de> Acked-by:
Alan Cox <alan@linux.intel.com> Cc: Alessio Igor Bogani <abogani@texware.it> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Andrew Hendry <andrew.hendry@gmail.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Christoph Hellwig <hch@infradead.org> Cc: Eric W. Biederman <ebiederm@xmission.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Hans Verkuil <hverkuil@xs4all.nl> Acked-by:
Ingo Molnar <mingo@elte.hu> Cc: Jan Blunck <jblunck@infradead.org> Cc: John Kacur <jkacur@redhat.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Matthew Wilcox <matthew@wil.cx> Cc: Oliver Neukum <oliver@neukum.org> Cc: Paul Menage <menage@google.com> Acked-by:
Thomas Gleixner <tglx@linutronix.de> Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
-
- 04 Mar, 2011 2 commits
-
-
Venkatesh Pallipadi authored
yield_to_task_fair() has code to resched the CPU of the yielding task when the intention is to resched the CPU of the task that is being yielded to. The change here fixes the problem and also makes the resched conditional on rq != p_rq. Signed-off-by:
Venkatesh Pallipadi <venki@google.com> Reviewed-by:
Rik van Riel <riel@redhat.com> Signed-off-by:
Peter Zijlstra <a.p.zijlstra@chello.nl> LKML-Reference: <1299025701-22168-1-git-send-email-venki@google.com> Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
Darren Hart authored
The current scheduler implementation returns -EPERM when trying to change from SCHED_IDLE to SCHED_OTHER or SCHED_BATCH. Since SCHED_IDLE is considered to be a nice 20 on steroids, changing to another policy should be allowed provided the RLIMIT_NICE is accounted for. This patch allows the following test-case to pass with RLIMIT_NICE=40, but still fail with RLIMIT_NICE=10 when the calling process is run from a typical shell (nice 0, or 20 in rlimit terms).

    #define _GNU_SOURCE     /* for SCHED_IDLE / SCHED_BATCH */
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
            int ret;
            struct sched_param sp;

            sp.sched_priority = 0;

            /* switch to SCHED_IDLE */
            ret = sched_setscheduler(0, SCHED_IDLE, &sp);
            printf("setscheduler IDLE: %d\n", ret);
            if (ret)
                    return ret;

            /* switch back to SCHED_OTHER */
            ret = sched_setscheduler(0, SCHED_OTHER, &sp);
            printf("setscheduler OTHER: %d\n", ret);
            return ret;
    }

    $ ulimit -e 40
    $ ./test
    setscheduler IDLE: 0
    setscheduler OTHER: 0
    $ ulimit -e 10
    $ ./test
    setscheduler IDLE: 0
    setscheduler OTHER: -1

Signed-off-by:
Darren Hart <dvhart@linux.intel.com> Signed-off-by:
Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Richard Purdie <richard.purdie@linuxfoundation.org> LKML-Reference: <4D657BEE.4040608@linux.intel.com> Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
- 26 Feb, 2011 1 commit
-
-
Venkatesh Pallipadi authored
Fix this warning: lkml.org/lkml/2011/1/30/124 kernel/sched.c:3719: warning: 'irqtime_account_idle_ticks' defined but not used kernel/sched.c:3720: warning: 'irqtime_account_process_tick' defined but not used In a cleaner way than: 7e949870: sched: Add #ifdef around irq time accounting functions This patch will not have any functional impact. Signed-off-by:
Venkatesh Pallipadi <venki@google.com> Cc: heiko.carstens@de.ibm.com Cc: a.p.zijlstra@chello.nl LKML-Reference: <1298675596-10992-1-git-send-email-venki@google.com> Signed-off-by:
Ingo Molnar <mingo@elte.hu>
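The shape of the "cleaner" fix is the usual stub pattern: keep the real definitions under the config option and provide empty stubs otherwise, so callers need no #ifdefs of their own (a sketch with simplified signatures):

    #ifdef CONFIG_IRQ_TIME_ACCOUNTING
    static void irqtime_account_idle_ticks(int ticks) { /* real accounting */ }
    static void irqtime_account_process_tick(struct task_struct *p,
                                             int user_tick, struct rq *rq)
    { /* real accounting */ }
    #else
    static void irqtime_account_idle_ticks(int ticks) { }
    static void irqtime_account_process_tick(struct task_struct *p,
                                             int user_tick, struct rq *rq) { }
    #endif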
-
- 25 Feb, 2011 2 commits
-
-
Thomas Gleixner authored
When we force-thread hard and soft interrupts, the startup of ksoftirqd would hang in kthread_bind() when wait_task_inactive() calls schedule_timeout_uninterruptible(), because there is no softirq yet which will wake us up. Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Cc: Peter Zijlstra <peterz@infradead.org> LKML-Reference: <20110223234956.677109139@linutronix.de>
-
Heiko Carstens authored
Get rid of this: kernel/sched.c:3731:13: warning: 'irqtime_account_idle_ticks' defined but not used kernel/sched.c:3732:13: warning: 'irqtime_account_process_tick' defined but not used Signed-off-by:
Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Venkatesh Pallipadi <venki@google.com> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> LKML-Reference: <20110225133228.GD7469@osiris.boeblingen.de.ibm.com> Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
- 16 Feb, 2011 1 commit
-
-
Peter Zijlstra authored
Make the ::exit method act like ::attach; it is, after all, very nearly the same thing. The bug had no effect on correctness - fixing it is an optimization for the scheduler. Also, later perf-cgroups patches rely on it. Signed-off-by:
Peter Zijlstra <a.p.zijlstra@chello.nl> Acked-by:
Paul Menage <menage@google.com> LKML-Reference: <1297160655.13327.92.camel@laptop> Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
- 11 Feb, 2011 1 commit
-
-
Steven Rostedt authored
When the function graph tracer starts, it needs to make a special stack for each task to save the real return values of the tasks. All running tasks have this stack created, as well as any new tasks. On CPU hot plug, the new idle task will allocate a stack as well when init_idle() is called. The problem is that cpu hotplug does not create a new idle_task. Instead it uses the idle task that existed when the cpu went down. ftrace_graph_init_task() will add a new ret_stack to the task that is given to it. Because a clone will make the task have the stack of its parent, it does not check whether the task's ret_stack is already NULL. When the CPU hotplug code starts a CPU up again, it will allocate a new stack even though one already existed for it. The solution is to treat the idle_task specially. In fact, the function_graph code already does, just not at init_idle(). Instead of using ftrace_graph_init_task() for the idle task, which expects the task to be a clone, have a separate ftrace_graph_init_idle_task(). Also, we will create a per_cpu ret_stack that is used by the idle task. When we call ftrace_graph_init_idle_task() it will check if the idle task's ret_stack is NULL, and if it is, it will assign it the per_cpu ret_stack. Reported-by:
Benjamin Herrenschmidt <benh@kernel.crashing.org> Suggested-by:
Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Stable Tree <stable@kernel.org> Signed-off-by:
Steven Rostedt <rostedt@goodmis.org>
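A rough sketch of the idle-task path described above (simplified; the per-cpu buffer name is an assumption, not the exact kernel code):

    /* Called from init_idle() instead of ftrace_graph_init_task(): the idle
     * task is not a fresh clone, so reuse a per-cpu ret_stack rather than
     * allocating a new one on every CPU-up. */
    void ftrace_graph_init_idle_task(struct task_struct *t, int cpu)
    {
            if (!t->ret_stack)
                    t->ret_stack = per_cpu(idle_ret_stack, cpu); /* assumed per-cpu buffer */
            t->curr_ret_stack = -1;
    }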
-
- 03 Feb, 2011 3 commits
-
-
Mike Galbraith authored
Currently only implemented for fair class tasks. Add a yield_to_task() method to the fair scheduling class, allowing the caller of yield_to() to accelerate another thread in its thread group / task group. Implemented via a scheduler hint, using cfs_rq->next to encourage the target being selected. We can rely on pick_next_entity to keep things fair, so no one can accelerate a thread that has already used its fair share of CPU time. This also means callers should only call yield_to when they really mean it. Calling it too often can result in the scheduler just ignoring the hint. Signed-off-by:
Rik van Riel <riel@redhat.com> Signed-off-by:
Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by:
Mike Galbraith <efault@gmx.de> Signed-off-by:
Peter Zijlstra <a.p.zijlstra@chello.nl> LKML-Reference: <20110201095051.4ddb7738@annuminas.surriel.com> Signed-off-by:
Ingo Molnar <mingo@elte.hu>
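From the caller's side the primitive is small; a usage sketch (the lock-holder pointer is hypothetical, and the exact return convention should be checked against the final patch):

    /* A spinning waiter directing its CPU time at the current lock holder in
     * the same task group: yield_to(target, preempt) hints the scheduler to
     * pick 'target' next. */
    if (yield_to(holder_task, true)) {
            /* the hint was taken and we gave the CPU to holder_task */
    }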
-
Rik van Riel authored
Use the buddy mechanism to implement yield_task_fair. This allows us to skip onto the next highest priority se at every level in the CFS tree, unless doing so would introduce gross unfairness in CPU time distribution. We order the buddy selection in pick_next_entity to check yield first, then last, then next. We need next to be able to override yield, because it is possible for the "next" and "yield" task to be different processes in the same sub-tree of the CFS tree. When they are, we need to go into that sub-tree regardless of the "yield" hint, and pick the correct entity once we get to the right level. Signed-off-by:
Rik van Riel <riel@redhat.com> Signed-off-by:
Peter Zijlstra <a.p.zijlstra@chello.nl> LKML-Reference: <20110201095103.3a79e92a@annuminas.surriel.com> Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
Peter Zijlstra authored
Oleg reported that on architectures with __ARCH_WANT_INTERRUPTS_ON_CTXSW the IPI from task_oncpu_function_call() can land before perf_event_task_sched_in() and cause interesting situations for eg. perf_install_in_context(). This patch reworks the task_oncpu_function_call() interface to give a more usable primitive as well as rework all its users to hopefully be more obvious as well as remove the races. While looking at the code I also found a number of races against perf_event_task_sched_out() which can flip contexts between tasks so plug those too. Reported-and-reviewed-by:
Oleg Nesterov <oleg@redhat.com> Signed-off-by:
Peter Zijlstra <a.p.zijlstra@chello.nl> LKML-Reference: <new-submission> Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
- 27 Jan, 2011 1 commit
-
-
Peter Zijlstra authored
Fix the build on UP. Signed-off-by:
Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Paul Turner <pjt@google.com> LKML-Reference: <20110122044852.102126037@google.com> Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
- 26 Jan, 2011 7 commits
-
-
Peter Zijlstra authored
When a task is taken out of the fair class we must ensure the vruntime is properly normalized because when we put it back in it will assume to be normalized. The case that goes wrong is when changing away from the fair class while sleeping. Sleeping tasks have non-normalized vruntime in order to make sleeper-fairness work. So treat the switch away from fair as a wakeup and preserve the relative vruntime. Also update sysrq-n to call the ->switch_{to,from} methods. Reported-by:
Onkalo Samu <samu.p.onkalo@nokia.com> Signed-off-by:
Peter Zijlstra <a.p.zijlstra@chello.nl> LKML-Reference: <new-submission> Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
Venkatesh Pallipadi authored
softirq time in ksoftirqd context is not accounted in the ns-granularity per-cpu softirq stats, as we want that to be a part of ksoftirqd exec_runtime. Account it separately as softirq in /proc/stat. Tested-by:
Shaun Ruffell <sruffell@digium.com> Signed-off-by:
Venkatesh Pallipadi <venki@google.com> Signed-off-by:
Peter Zijlstra <a.p.zijlstra@chello.nl> LKML-Reference: <1292980144-28796-6-git-send-email-venki@google.com> Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
Venkatesh Pallipadi authored
CONFIG_IRQ_TIME_ACCOUNTING adds ns granularity irq time on each CPU. This info is already used in scheduler to do proper task chargeback (earlier patches). This patch retro-fits this ns granularity hardirq and softirq information to /proc/stat irq and softirq fields. The update is still done on timer tick, where we look at accumulated ns hardirq/softirq time and account the tick to user/system/irq/hardirq/guest accordingly. No new interface added. Earlier versions looked at adding this as new fields in some /proc files. This one seems to be the best in terms of impact to existing apps, even though it has somewhat more kernel code than earlier versions. Tested-by:
Shaun Ruffell <sruffell@digium.com> Signed-off-by:
Venkatesh Pallipadi <venki@google.com> Signed-off-by:
Peter Zijlstra <a.p.zijlstra@chello.nl> LKML-Reference: <1292980144-28796-5-git-send-email-venki@google.com> Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
Venkatesh Pallipadi authored
Refactor account_system_time, to separate out the logic of identifying the update needed and code that does actual update. This is used by following patch for IRQ_TIME_ACCOUNTING, which has different identification logic and same update logic. Tested-by:
Shaun Ruffell <sruffell@digium.com> Signed-off-by:
Venkatesh Pallipadi <venki@google.com> Signed-off-by:
Peter Zijlstra <a.p.zijlstra@chello.nl> LKML-Reference: <1292980144-28796-4-git-send-email-venki@google.com> Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
Venkatesh Pallipadi authored
Cleanup patch, freeing up PF_KSOFTIRQD and use per_cpu ksoftirqd pointer instead, as suggested by Eric Dumazet. Tested-by:
Shaun Ruffell <sruffell@digium.com> Signed-off-by:
Venkatesh Pallipadi <venki@google.com> Signed-off-by:
Peter Zijlstra <a.p.zijlstra@chello.nl> LKML-Reference: <1292980144-28796-2-git-send-email-venki@google.com> Signed-off-by:
Ingo Molnar <mingo@elte.hu>
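The cleanup boils down to the following pattern (a sketch; the accessor name is an assumption based on the changelog):

    /* Instead of tagging the thread with PF_KSOFTIRQD, keep a per-cpu
     * pointer to the ksoftirqd task and compare against it. */
    DEFINE_PER_CPU(struct task_struct *, ksoftirqd);

    static inline struct task_struct *this_cpu_ksoftirqd(void)
    {
            return this_cpu_read(ksoftirqd);
    }

    /* usage: if (this_cpu_ksoftirqd() == current) ... treat as softirq time */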
-
Paul Turner authored
Since cfs->{load_stamp,load_last} are zero-initialized, the initial load update will consider the delta to be 'since the beginning of time'. This results in a lot of pointless divisions to bring this large period to be within the sysctl_sched_shares_window. Fix this by initializing load_stamp to be 1 at cfs_rq initialization; this allows for an initial load_stamp > load_last, which then lets standard idle truncation proceed. We avoid spinning (and slightly improve consistency) by fixing delta to be [period - 1] in this path, resulting in a slightly more predictable shares ramp. (Previously the amount of idle time preserved by the overflow would range between [period/2,period-1].) Signed-off-by:
Paul Turner <pjt@google.com> Signed-off-by:
Peter Zijlstra <a.p.zijlstra@chello.nl> LKML-Reference: <20110122044852.102126037@google.com> Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
Paul Turner authored
Re-visiting this: since update_cfs_shares will now only ever re-weight an entity that is a relative parent of the current entity in enqueue_entity, we can safely issue the account_entity_enqueue relative to that cfs_rq and avoid the requirement for special handling of the enqueue case in update_cfs_shares. Signed-off-by:
Paul Turner <pjt@google.com> Signed-off-by:
Peter Zijlstra <a.p.zijlstra@chello.nl> LKML-Reference: <20110122044851.915214637@google.com> Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
- 19 Jan, 2011 1 commit
-
-
Peter Zijlstra authored
By not notifying the controller of the on-exit move back to init_css_set, we fail to move the task out of the previous cgroup's cfs_rq. This leads to an opportunity for a cgroup-destroy to come in and free the cgroup (there are no active tasks left in it after all) to which the not-quite dead task is still enqueued. Reported-by:
Miklos Vajna <vmiklos@frugalware.org> Fixed-by:
Mike Galbraith <efault@gmx.de> Signed-off-by:
Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: <stable@kernel.org> Cc: Mike Galbraith <efault@gmx.de> Signed-off-by:
Ingo Molnar <mingo@elte.hu> LKML-Reference: <1293206353.29444.205.camel@laptop>
-
- 18 Jan, 2011 2 commits
-
-
Yong Zhang authored
Now that rq->rq_sched_info.bkl_count is not used for rq, fold rq->bkl_count into it. Thus we can save some space for rq. Signed-off-by:
Yong Zhang <yong.zhang0@gmail.com> Signed-off-by:
Peter Zijlstra <a.p.zijlstra@chello.nl> LKML-Reference: <1294991859-13246-1-git-send-email-yong.zhang0@gmail.com> Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
Mike Galbraith authored
If CONFIG_RT_GROUP_SCHED is set, __sched_setscheduler() fails due to autogroup not allocating rt_runtime. Free unused/unusable rt_se and rt_rq, redirect RT tasks to the root task group, and tell __sched_setscheduler() that it's ok. Reported-and-tested-by:
Bharata B Rao <bharata@linux.vnet.ibm.com> Signed-off-by:
Mike Galbraith <efault@gmx.de> Signed-off-by:
Peter Zijlstra <a.p.zijlstra@chello.nl> LKML-Reference: <1294890890.8089.39.camel@marge.simson.net> Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
- 07 Jan, 2011 3 commits
-
-
Hillf Danton authored
One of the operands, buf, is incorrect: the buffer gets stripped, and the correct address for the subsequent string comparison can change if leading white spaces, if any, are removed from buf. Fix it by replacing buf with cmp. Signed-off-by:
Hillf Danton <dhillf@gmail.com> Signed-off-by:
Peter Zijlstra <a.p.zijlstra@chello.nl> LKML-Reference: <AANLkTinOPuYsVovrZpbuCCmG5deEyc8WgA_A1RJx_YK7@mail.gmail.com> Signed-off-by:
Ingo Molnar <mingo@elte.hu>
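A sketch of the fix in place (simplified; the surrounding sched_feat parsing code is assumed):

    /* strstrip() may return a pointer past any leading whitespace, so all
     * later comparisons must use the stripped pointer, not the raw buffer. */
    cmp = strstrip(buf);

    if (strncmp(cmp, "NO_", 3) == 0) {      /* was: strncmp(buf, "NO_", 3) */
            neg = 1;
            cmp += 3;
    }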
-
Mike Galbraith authored
Seems I lost a change somewhere, leaking memory: add the missing change to actually use autogroup_free() and fix the struct autogroup memory leak. Signed-off-by:
Mike Galbraith <efault@gmx.de> Signed-off-by:
Peter Zijlstra <a.p.zijlstra@chello.nl> LKML-Reference: <1294222285.8369.2.camel@marge.simson.net> Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
Yong Zhang authored
root_task_group is the leftover of USER_SCHED; now it's always the same as init_task_group. But as Mike suggested, root_task_group is maybe the more suitable name to keep for a tree. So in this patch: init_task_group --> root_task_group; init_task_group_load --> root_task_group_load; INIT_TASK_GROUP_LOAD --> ROOT_TASK_GROUP_LOAD. Suggested-by:
Mike Galbraith <efault@gmx.de> Signed-off-by:
Yong Zhang <yong.zhang0@gmail.com> Signed-off-by:
Peter Zijlstra <a.p.zijlstra@chello.nl> LKML-Reference: <20110107071736.GA32635@windriver.com> Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-
- 05 Jan, 2011 2 commits
-
-
NeilBrown authored
wait_for_completion_*_timeout() can return: 0 if the wait timed out; a negative value if the wait was interrupted; a positive value if the completion was completed. As they currently return an 'unsigned long', the last two cases are not easily distinguished, which can easily result in buggy code, as is the case for the recently added wait_for_completion_interruptible_timeout() call in net/sunrpc/cache.c. So change them both to return 'long'. As MAX_SCHEDULE_TIMEOUT is LONG_MAX, a large +ve return value should never overflow. Signed-off-by:
NeilBrown <neilb@suse.de> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: J. Bruce Fields <bfields@fieldses.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> LKML-Reference: <20110105125016.64ccab0e@notabene.brown> Signed-off-by:
Ingo Molnar <mingo@elte.hu>
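With the return type widened to long, a caller can tell the three outcomes apart cleanly; a usage sketch (the completion variable and the timeout value are assumptions):

    long ret;

    ret = wait_for_completion_interruptible_timeout(&done, msecs_to_jiffies(500));
    if (ret == 0)
            return -ETIMEDOUT;      /* the wait timed out */
    if (ret < 0)
            return ret;             /* interrupted, e.g. -ERESTARTSYS */
    /* ret > 0: completed, with 'ret' jiffies of the timeout remaining */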
-
Gerald Schaefer authored
The spinning mutex implementation uses cpu_relax() in busy loops as a compiler barrier. Depending on the architecture, cpu_relax() may do more than needed in this specific mutex spin loops. On System z we also give up the time slice of the virtual cpu in cpu_relax(), which prevents effective spinning on the mutex. This patch replaces cpu_relax() in the spinning mutex code with arch_mutex_cpu_relax(), which can be defined by each architecture that selects HAVE_ARCH_MUTEX_CPU_RELAX. The default is still cpu_relax(), so this patch should not affect other architectures than System z for now. Signed-off-by:
Gerald Schaefer <gerald.schaefer@de.ibm.com> Signed-off-by:
Peter Zijlstra <a.p.zijlstra@chello.nl> LKML-Reference: <1290437256.7455.4.camel@thinkpad> Signed-off-by:
Ingo Molnar <mingo@elte.hu>
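The mechanism is a simple override hook; a sketch of the default definition and an architecture override (the override body shown is only an illustration of "do less than cpu_relax()", not necessarily what System z uses):

    /* Default: architectures that do not select HAVE_ARCH_MUTEX_CPU_RELAX
     * keep the existing behaviour. */
    #ifndef CONFIG_HAVE_ARCH_MUTEX_CPU_RELAX
    #define arch_mutex_cpu_relax()  cpu_relax()
    #endif

    /* An architecture selecting HAVE_ARCH_MUTEX_CPU_RELAX can supply a
     * cheaper definition, e.g. a plain compiler barrier: */
    #define arch_mutex_cpu_relax()  barrier()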
-
- 04 Jan, 2011 1 commit
-
-
Yong Zhang authored
CONFIG_[FAIR|RT]_GROUP_SCHED always means CONFIG_CGROUP_SCHED Signed-off-by:
Yong Zhang <yong.zhang0@gmail.com> Cc: Peter Zijlstra <peterz@infradead.org> LKML-Reference: <1293803938-8157-1-git-send-email-yong.zhang0@gmail.com> Signed-off-by:
Ingo Molnar <mingo@elte.hu>
-