- 08 Jan, 2005 40 commits
-
Matthew Dobson authored
Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Matthew Dobson authored
Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Matthew Dobson authored
Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Matthew Dobson authored
Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Matthew Dobson authored
From: Jesse Barnes <jbarnes@engr.sgi.com> Here are some compile fixes for this patch. Looks like simple typos. Signed-off-by: Jesse Barnes <jbarnes@sgi.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Matthew Dobson authored
Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Matthew Dobson authored
Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Matthew Dobson authored
Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Nick Piggin authored
Change the sched-domain debug routine to be called on a per-CPU basis, and executed before the domain is actually attached to the CPU. Previously, all CPUs would have their new domains attached, and then the debug routine would loop over all of them. This has two advantages: first, there are no longer any theoretical races, because we run the debug routine on a domain that isn't yet active and so has no racing access from another CPU. Second, if there is a problem with a domain, the validator has a better chance to catch the error and print a diagnostic _before_ the domain is attached, which may take down the system. Also, change reporting of detected error conditions to KERN_ERR instead of KERN_DEBUG, so they have a better chance of being seen in a hang-on-boot situation. The patch also does an unrelated (and harmless) cleanup in migration_thread(). Signed-off-by: Nick Piggin <nickpiggin@yahoo.com.au> Acked-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
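A minimal sketch of the new ordering, assuming the contemporaneous scheduler names (cpu_attach_domain(), sched_domain_debug(), cpu_rq()) and omitting the migration/quiescing bookkeeping:

    static void cpu_attach_domain(struct sched_domain *sd, int cpu)
    {
        runqueue_t *rq = cpu_rq(cpu);

        /* validate the domain while it is still private to this path */
        sched_domain_debug(sd, cpu);

        /* only now does the domain go live for this CPU */
        rq->sd = sd;
    }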
-
Ingo Molnar authored
The patch below fixes smp_processor_id() warnings that are triggered by numa_node_id(). All uses of numa_node_id() in mm/mempolicy.c seem to use it as a 'hint' only, not for correctness. Once a node is established, it's used in a preemption-safe way. So the simple fix is to disable the checking for numa_node_id(). Additional review would be more than welcome, though, because this patch turns off the preemption checking of numa_node_id() permanently. Tested on amd64. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
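A sketch of the kind of change implied, assuming the non-checking raw accessor of that era was spelled _smp_processor_id() (the exact spelling and header are assumptions):

    /* hint only: bypass the smp_processor_id() preemption check */
    #define numa_node_id()  cpu_to_node(_smp_processor_id())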
-
Ingo Molnar authored
Clean up a few suspicious-looking uses of smp_processor_id() in preemptible code. The current_cpu_data use is unclean but most likely safe. I haven't seen any outright bugs. Since oprofile does not seem to be ready for different-type CPUs (do we even care?), the patch below documents this property by using boot_cpu_data. Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Ingo Molnar authored
The early bootup stage is pretty fragile because the idle thread is not yet functioning as such, so we need preemption disabled. Whether the bootup fails or not seems to depend on timing details, so e.g. the presence of SCHED_SMT makes it go away. Disabling preemption explicitly has another advantage: the atomicity check in schedule() will catch early-bootup schedule() calls from now on. The patch also fixes another preempt-bkl buglet: interrupt-driven forced preemption didn't go through preempt_schedule(), so it resulted in auto-dropping of the BKL. Now we go through preempt_schedule(), which properly deals with the BKL. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Ingo Molnar authored
This patch adds a handful of cond_resched() points to a number of key, scheduling-latency related non-inlined functions. This reduces preemption latency for !PREEMPT kernels. These are scheduling points complementary to PREEMPT_VOLUNTARY scheduling points (might_sleep() places) - i.e. these are all points where an explicit cond_resched() had to be added. Has been tested as part of the -VP patchset. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
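The pattern is simply an explicit voluntary scheduling point in a long-running, non-inlined loop; a hypothetical illustration (process_items(), struct work_item and handle_item() are illustrative only, not actual call sites):

    static void process_items(struct list_head *items)
    {
        struct work_item *item;         /* hypothetical type */

        list_for_each_entry(item, items, list) {
            handle_item(item);          /* hypothetical per-item work */
            cond_resched();             /* bound !PREEMPT latency */
        }
    }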
-
Ingo Molnar authored
This patch fixes scheduling latencies in vgacon_do_font_op(). The code is protected by vga_lock already so it's safe to drop (and re-acquire) the BKL. Has been tested in the -VP patchset. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Ingo Molnar authored
Fix scheduling latencies in the MTRR-setting codepath. Also fix a bad bug: MTRRs _must_ be set with interrupts disabled! From: Bernard Blackham <bernard@blackham.com.au> The patch sched-fix-scheduling-latencies-in-mttr in recent -mm kernels has the bad side-effect of re-enabling interrupts even if they were disabled. This caused bugs in Software Suspend 2, which re-enabled MTRRs whilst interrupts were already disabled. Attached is a replacement patch which uses spin_lock_irqsave() instead of spin_lock_irq(). Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
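A generic illustration of the replacement (lock and function names are hypothetical, not the actual MTRR code): spin_lock_irq()/spin_unlock_irq() unconditionally re-enables interrupts on unlock, while spin_lock_irqsave() restores whatever interrupt state the caller had - which matters when the caller already runs with interrupts off, as MTRR updates must.

    static spinlock_t mtrr_example_lock = SPIN_LOCK_UNLOCKED;  /* era-typical initializer */

    static void set_mtrr_example(void)
    {
        unsigned long flags;

        spin_lock_irqsave(&mtrr_example_lock, flags);
        /* ... program the MTRRs with interrupts off ... */
        spin_unlock_irqrestore(&mtrr_example_lock, flags);
    }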
-
Ingo Molnar authored
We don't want to execute off keventd, since it might hold a semaphore that our callers also hold. This can happen when kthread_create() is called from within keventd. This happened due to the IRQ threading patches, but it could happen with other code too. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Ingo Molnar authored
The attached patch, written by Andrew Morton, fixes long scheduling latencies in filemap_sync(). Has been tested as part of the -VP patchset. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Ingo Molnar authored
The attached patch fixes long scheduling latencies in get_user_pages(). Has been tested as part of the -VP patchset. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Ingo Molnar authored
The attached patch fixes long latencies in unmap_vmas(). We had lock-break code in that function already, but it did not take the delayed effects of TLB gather into account. Has been tested as part of the -VP patchset. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Ingo Molnar authored
The attached patch fixes long scheduling latencies caused by backlog processing triggered from __release_sock(). That code only executes in process context, and we have already made the backlog queue private at this point, so it is safe to do a cond_resched_softirq(). Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
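A sketch of the resulting loop in net/core/sock.c (close to the description above, but treat the exact structure as an approximation): the backlog is detached first, so rescheduling between packets cannot race with softirq enqueueing.

    static void __release_sock(struct sock *sk)
    {
        struct sk_buff *skb = sk->sk_backlog.head;

        do {
            sk->sk_backlog.head = sk->sk_backlog.tail = NULL;
            bh_unlock_sock(sk);

            do {
                struct sk_buff *next = skb->next;

                skb->next = NULL;
                sk->sk_backlog_rcv(sk, skb);

                /*
                 * Process context, softirqs disabled, and the backlog
                 * queue is private: safe to preempt here.
                 */
                cond_resched_softirq();

                skb = next;
            } while (skb != NULL);

            bh_lock_sock(sk);
        } while ((skb = sk->sk_backlog.head) != NULL);
    }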
-
Ingo Molnar authored
The attached patch fixes long scheduling latencies caused by access to the /proc/net/tcp file. The seqfile functions keep softirqs disabled for a very long time (I've seen reports of 20+ msecs, if there are enough sockets in the system). With the attached patch it's below 100 usecs. The cond_resched_softirq() relies on the implicit knowledge that this code executes in process context and runs with softirqs disabled. Potentially enabling softirqs means that the socket list might change between buckets - but this is not an issue, since seqfiles have a 4K iteration granularity anyway and /proc/net/tcp is often (much) larger than that. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Ingo Molnar authored
The attached patch fixes long scheduling latencies in select_parent() and prune_dcache(). The prune_dcache() lock-break is easy, but for select_parent() the only viable solution I found was to break out if a resched is necessary - the reordering is not necessary, and the dcache scanning/shrinking will do it later anyway. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Nick Piggin <nickpiggin@yahoo.com.au> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
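A hypothetical illustration of the two ideas (not the actual fs/dcache.c diff; shrink_some_dentries() is a made-up name): a dcache_lock-protected scan gets a per-iteration lock-break, and bails out early once it has made progress and a resched is due.

    static void shrink_some_dentries(int count)
    {
        spin_lock(&dcache_lock);
        while (count--) {
            /* ... detach one dentry from the LRU and prune it ... */

            /* select_parent()-style early exit: progress is enough */
            if (need_resched())
                break;

            /* prune_dcache()-style lock-break: drop/retake the lock */
            cond_resched_lock(&dcache_lock);
        }
        spin_unlock(&dcache_lock);
    }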
-
Ingo Molnar authored
Break up long latencies in invalidate_list(). Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Ingo Molnar authored
The attached patch fixes long scheduling latencies in the ext3 code, and it also cleans up the existing lock-break functionality to use the new primitives. This patch has been in the -VP patchset for quite some time. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Ingo Molnar authored
This patch adds cond_resched_softirq(), which can be used by _process-context_, softirqs-disabled codepaths to preempt if necessary. The function enables softirqs before scheduling. (Later patches will use this primitive.) Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
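A sketch of the primitive as described, approximating the kernel/sched.c of that era (the __local_bh_enable()/__cond_resched() helper names are assumptions):

    int __sched cond_resched_softirq(void)
    {
        BUG_ON(!in_softirq());

        if (need_resched()) {
            __local_bh_enable();    /* re-enable softirqs ...        */
            __cond_resched();       /* ... reschedule ...            */
            local_bh_disable();     /* ... and restore caller state  */
            return 1;
        }
        return 0;
    }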
-
Ingo Molnar authored
Add lock_need_resched(), which checks for the necessity of a lock-break in a critical section. Used by later latency-break patches. Tested on x86; it should work on all architectures. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
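A sketch of the helpers as described (the definitions approximate the headers of the time): need_lockbreak() tests the break_lock flag introduced by the SMP lock-break work, and lock_need_resched() combines it with need_resched().

    #define need_lockbreak(lock) \
        unlikely(!!(lock)->break_lock)

    #define lock_need_resched(lock) \
        unlikely(need_lockbreak(lock) || need_resched())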
-
Ingo Molnar authored
This is another generic fallout from the voluntary-preempt patchset: a cleanup of the cond_resched() infrastructure, in preparation for the latency reduction patches. The changes:

- uninline cond_resched() - this makes the footprint smaller, especially once the number of cond_resched() points increases.

- add a 'was rescheduled' return value to cond_resched(). This makes it symmetric to cond_resched_lock(), and later latency reduction patches rely on the ability to tell whether there was any preemption.

- make cond_resched() more robust by using the same mechanism as preempt_kernel(): PREEMPT_ACTIVE. This preserves the task's state - e.g. if the task is in TASK_ZOMBIE but gets preempted via cond_resched() just prior to scheduling off, this approach preserves TASK_ZOMBIE.

- the patch also adds need_lockbreak(), which critical sections can use to detect lock-break requests.

Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
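A sketch of the PREEMPT_ACTIVE-based variant described above (close to the contemporaneous kernel/sched.c, but treat the details as approximate):

    static void __cond_resched(void)
    {
        /*
         * PREEMPT_ACTIVE marks this as a preemption, so schedule() does
         * not deactivate the task even if its state is not TASK_RUNNING
         * (e.g. TASK_ZOMBIE is preserved).
         */
        do {
            add_preempt_count(PREEMPT_ACTIVE);
            schedule();
            sub_preempt_count(PREEMPT_ACTIVE);
        } while (need_resched());
    }

    int __sched cond_resched(void)
    {
        if (need_resched()) {
            __cond_resched();
            return 1;       /* 'was rescheduled' */
        }
        return 0;
    }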
-
Ingo Molnar authored
SMP locking latencies are one of the last architectural problems that cause millisecond-category scheduling delays. CONFIG_PREEMPT tries to solve some of the SMP issues, but there are still lots of problems remaining: spinlocks nested at multiple levels, spinning with irqs turned off, and non-nested spinning with preemption turned off permanently.

The nesting problem goes like this: if a piece of kernel code (e.g. the MM or ext3's journalling code) does the following:

    spin_lock(&spinlock_1);
    ...
    spin_lock(&spinlock_2);
    ...

then even with CONFIG_PREEMPT enabled, current kernels may spin on spinlock_2 indefinitely. A number of critical sections break their long paths by using cond_resched_lock(), but this does not break the path on SMP, because need_resched() *of the other CPU* is not set, so cond_resched_lock() doesn't notice that a reschedule is due.

To solve this problem I've introduced a new spinlock field, lock->break_lock, which signals towards the holding CPU that a spinlock-break is requested by another CPU. This field is only set if a CPU is spinning in a spinlock function [at any locking depth], so the default overhead is zero. I've extended cond_resched_lock() to check for this flag - in this case we can also save a reschedule. I've added the lock_need_resched(lock) and need_lockbreak(lock) methods to check for the need to break out of a critical section.

Another latency problem was that the stock kernel, even with CONFIG_PREEMPT enabled, didn't have any spin-nicely preemption logic for the following, commonly used SMP locking primitives: read_lock(), spin_lock_irqsave(), spin_lock_irq(), spin_lock_bh(), read_lock_irqsave(), read_lock_irq(), read_lock_bh(), write_lock_irqsave(), write_lock_irq(), write_lock_bh(). Only spin_lock() and write_lock() [the two simplest cases] were covered. In addition to the preemption latency problems, the _irq() variants in the above list didn't do any IRQ-enabling while spinning - possibly resulting in excessive irqs-off sections of code!

preempt-smp.patch fixes all these latency problems by spinning irq-nicely (if possible) and by requesting lock-breaks if needed. Two architecture-level changes were necessary for this: the addition of the break_lock field to spinlock_t and rwlock_t, and the addition of the _raw_read_trylock() function.

Testing done by Mark H Johnson and myself indicates SMP latencies comparable to the UP kernel - while they were basically indefinitely high without this patch. I successfully test-compiled and test-booted this patch on top of BK-curr using the following .config combinations: SMP && PREEMPT, !SMP && PREEMPT, SMP && !PREEMPT and !SMP && !PREEMPT on x86, and !SMP && !PREEMPT and SMP && PREEMPT on x64. I also test-booted x86 with the generic_read_trylock function to check that it works fine. Essentially the same patch has been in testing as part of the voluntary-preempt patches for some time already.

NOTE to architecture maintainers: generic_raw_read_trylock() is a crude version that should be replaced with the proper arch-optimized version ASAP.

From: Hugh Dickins <hugh@veritas.com> The i386 and x86_64 _raw_read_trylocks in preempt-smp.patch are too successful: atomic_read() returns a signed integer. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Hugh Dickins <hugh@veritas.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
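A simplified sketch of the spin-nicely idea for one of the previously uncovered primitives (this shows the shape of the generated lock functions, not the exact code): spin with interrupts enabled and preemption possible, and raise break_lock so the holder's cond_resched_lock() can see that someone is waiting.

    unsigned long __lockfunc _spin_lock_irqsave(spinlock_t *lock)
    {
        unsigned long flags;

        for (;;) {
            preempt_disable();
            local_irq_save(flags);
            if (likely(_raw_spin_trylock(lock)))
                break;
            /* didn't get it: back off and spin nicely */
            local_irq_restore(flags);
            preempt_enable();

            /* ask the holding CPU for a lock-break */
            lock->break_lock = 1;
            while (spin_is_locked(lock) && lock->break_lock)
                cpu_relax();
        }
        lock->break_lock = 0;
        return flags;
    }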
-
Nathan Lynch authored
Call idle_task_exit() from cpu_die() to avoid an mm_struct leak. Signed-off-by: Nathan Lynch <nathanl@austin.ibm.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Nathan Lynch authored
Heiko Carstens figured out that offlining a cpu can leak mm_structs, because the dying cpu's idle task fails to switch to init_mm and mmdrop its active_mm before the cpu goes down. This patch introduces idle_task_exit(), which allows the idle task to do this, as Ingo suggested. I will follow this up with a patch for ppc64 which calls idle_task_exit() from cpu_die(). Signed-off-by: Nathan Lynch <nathanl@austin.ibm.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
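A sketch of the new helper, close to the description above (treat the details as approximate): the dying CPU's idle task switches to init_mm and drops the mm it was lazily borrowing, so offlining the CPU no longer leaks an mm_struct.

    void idle_task_exit(void)
    {
        struct mm_struct *mm = current->active_mm;

        if (mm != &init_mm)
            switch_mm(mm, &init_mm, current);
        mmdrop(mm);
    }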
-
Josh Aas authored
This patch removes two outdated/misleading comments from the CPU scheduler. 1) The first comment removed is simply incorrect. The function it comments on is no longer used for what the comment says it is. 2) The second comment is a leftover from when the "if" block it comments on contained a goto. It no longer does, and the comment doesn't make sense. There isn't really a reason to add different comments, though someone might feel differently in the case of the second one. I'll leave adding a comment to anybody who wants to - it's more important to just get rid of them now. Signed-off-by: Josh Aas <josha@sgi.com> Acked-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Dean Nelson authored
This patch exports sched_setscheduler() so that it can be used by a kernel module to set a kthread's scheduling policy and associated parameters. Signed-off-by: Dean Nelson <dcn@sgi.com> Acked-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
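Hypothetical module-side usage that the export enables (the worker kthread, policy, priority, and the GPL-only status of the export are illustrative assumptions):

    /* kernel/sched.c gains something like: */
    EXPORT_SYMBOL_GPL(sched_setscheduler);

    /* a module can then do: */
    static int make_worker_realtime(struct task_struct *worker)
    {
        struct sched_param param = { .sched_priority = 50 };

        return sched_setscheduler(worker, SCHED_FIFO, &param);
    }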
-
Robert Love authored
No need to call task_rq() in setscheduler(); just use rq. Signed-off-by: Robert Love <rml@novell.com> Acked-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Oleg Nesterov authored
Replace open-coded thread_group_leader() calls. Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru> Acked-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
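For reference, the helper as defined in <linux/sched.h> of that era (definition approximate); the cleanup replaces open-coded p->pid == p->tgid tests with it:

    #define thread_group_leader(p)  (p->pid == p->tgid)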
-
Oleg Nesterov authored
schedule() can use prev instead of get_current(). Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru> Acked-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Con Kolivas authored
Special-casing tasks by interactive credit was helpful for preventing fully cpu-bound tasks from easily rising to interactive status. However, it did not select out tasks that had periods of being fully cpu-bound and then sleeping while waiting on pipes, signals etc. This led to a disproportionately large share of cpu time. Backing this out removes the special-casing of fully cpu-bound tasks, prevents the variable behaviour that occurs at startup before tasks declare themselves interactive or not, and speeds up application startup slightly under certain circumstances. It does cost slightly in interactivity as load rises, but it is worth it for the fairness gains. Signed-off-by: Con Kolivas <kernel@kolivas.org> Acked-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Con Kolivas authored
Change the granularity code to requeue tasks at their best priority instead of changing priority while they're running. This keeps tasks at their top interactive level during their whole timeslice. Signed-off-by: Con Kolivas <kernel@kolivas.org> Acked-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Con Kolivas authored
We can requeue tasks more cheaply than doing a complete dequeue followed by an enqueue. Add the requeue_task() function and use it where possible. This will be hit frequently by the upcoming changes to timeslice-granularity requeueing. Signed-off-by: Con Kolivas <kernel@kolivas.org> Acked-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
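A sketch of the new helper, assuming the O(1) scheduler's data structures of the time (prio_array_t with per-priority queues): requeueing is a single list splice, with no priority-bitmap or nr_active updates needed.

    static void requeue_task(struct task_struct *p, prio_array_t *array)
    {
        list_move_tail(&p->run_list, array->queue + p->prio);
    }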
-
Con Kolivas authored
The minimum timeslice was decreased from 10ms to 5ms. In the process, the timeslice granularity was leading to much more rapid round-robinning of interactive tasks, at cache-thrashing levels. Restore the minimum granularity to 10ms. Signed-off-by: Con Kolivas <kernel@kolivas.org> Acked-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Con Kolivas authored
The timeslice proportion has been increased substantially for -niced tasks. As a result, kernel threads have much larger timeslices than they previously had. Change the kernel threads' nice value to -5 to bring their timeslices back in line with previous behaviour. This means kernel threads will be less likely to cause large latencies for normal nice-0 tasks under periods of system stress. Signed-off-by: Con Kolivas <kernel@kolivas.org> Acked-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
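An illustration of the idea only (the actual call site is not shown by the log above and is an assumption): a kernel thread renicing itself to -5 when it starts.

    static int example_kthread(void *data)
    {
        set_user_nice(current, -5);     /* illustrative: restore the old timeslice share */

        while (!kthread_should_stop()) {
            /* ... do the thread's work ... */
            set_current_state(TASK_INTERRUPTIBLE);
            schedule_timeout(HZ);
        }
        return 0;
    }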
-