- 26 Sep, 2002 28 commits
-
Linus Torvalds authored
-
Linus Torvalds authored
having a valid name (base kernel: "").
-
Ingo Molnar authored
Make the kernel print out symbolic backtraces if symbol table information is available (CONFIG_KALLSYMS)
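A userspace sketch of the idea (hypothetical table and names, not the kernel's kallsyms code): keep an address-sorted symbol table, resolve each backtrace address to the nearest symbol at or below it, and print name+offset.

    #include <stdio.h>

    /* Hypothetical address-sorted symbol table: a stand-in for the data
     * that CONFIG_KALLSYMS embeds in the kernel image. */
    struct symbol { unsigned long addr; const char *name; };

    static const struct symbol symtab[] = {
        { 0xc0100000UL, "startup_32" },
        { 0xc0105000UL, "schedule" },
        { 0xc0110000UL, "do_fork" },
    };
    #define NSYMS (sizeof(symtab) / sizeof(symtab[0]))

    /* Print "name+0xoffset" for the nearest symbol at or below addr. */
    static void print_symbolic(unsigned long addr)
    {
        const struct symbol *best = NULL;
        size_t i;

        for (i = 0; i < NSYMS; i++)
            if (symtab[i].addr <= addr)
                best = &symtab[i];

        if (best)
            printf("[<%08lx>] %s+0x%lx\n", addr, best->name, addr - best->addr);
        else
            printf("[<%08lx>] (no symbol)\n", addr);
    }

    int main(void)
    {
        /* A fake two-frame backtrace: raw addresses become readable frames. */
        print_symbolic(0xc0105123UL);
        print_symbolic(0xc0110042UL);
        return 0;
    }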
-
Linus Torvalds authored
when it is explicitly overridden in the MADT
-
Christoph Hellwig authored
-
Stephen Lord authored
Modid: 2.5.x-xfs:slinx:128467a
-
Christoph Hellwig authored
Modid: 2.5.x-xfs:slinx:128366a
-
Eric Sandeen authored
Modid: 2.5.x-xfs:slinx:128363a
-
Stephen Lord authored
Modid: 2.5.x-xfs:slinx:128239a
-
Christoph Hellwig authored
Modid: 2.5.x-xfs:slinx:128192a
-
Christoph Hellwig authored
Modid: 2.5.x-xfs:slinx:128159a
-
Christoph Hellwig authored
Modid: 2.5.x-xfs:slinx:127994a
-
Christoph Hellwig authored
Modid: 2.5.x-xfs:slinx:127896a
-
Nathan Scott authored
Modid: 2.5.x-xfs:slinx:127944a
-
Christoph Hellwig authored
Modid: 2.5.x-xfs:slinx:127879a
-
Christoph Hellwig authored
Modid: 2.5.x-xfs:slinx:127872a
-
Christoph Hellwig authored
Modid: 2.5.x-xfs:slinx:127736a
-
Christoph Hellwig authored
Modid: 2.5.x-xfs:slinx:127734a
-
Christoph Hellwig authored
Modid: 2.5.x-xfs:slinx:127729a
-
Christoph Hellwig authored
Modid: 2.5.x-xfs:slinx:127568a
-
Rusty Russell authored
-
Rusty Russell authored
This patch defines cpu_possible() for non-SMP.
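A plausible shape for the fallback (an assumed form, not the verbatim patch): on a uniprocessor build only CPU 0 can ever exist, so the test reduces to a constant expression.

    #ifndef CONFIG_SMP
    /* UP build: CPU 0 is the only CPU that can ever exist
     * (assumed form of the fallback, not the verbatim patch). */
    #define cpu_possible(cpu)   ((cpu) == 0)
    #endif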
-
Linus Torvalds authored
Merge http://gkernel.bkbits.net/misc-2.5 into home.transmeta.com:/home/torvalds/v2.5/linux
-
Linus Torvalds authored
Merge http://gkernel.bkbits.net/net-drivers-2.5 into home.transmeta.com:/home/torvalds/v2.5/linux
-
Linus Torvalds authored
Merge http://gkernel.bkbits.net/irda-2.5 into home.transmeta.com:/home/torvalds/v2.5/linux
-
Linus Torvalds authored
Merge http://gkernel.bkbits.net/i2c-2.5 into home.transmeta.com:/home/torvalds/v2.5/linux
-
Jens Axboe authored
Various small cleanups, optimizations, and fixes (a simplified sketch of the FIFO deadline check follows the list):

o Make fifo_batch=32 the default; from testing this appears to be a good value. We still get good throughput, and latency is good.
o Reintroduce the merge_cleanup logic. deadline needs it for rehashing requests when they have been merged.
o Clean up the last_merge logic. Move it to the new elv_merged_request(), which is where it really belongs. Doing it inside the io scheduler core can cause false positives when the queue merge functions reject an otherwise good merge.
o Have deadline_move_requests() account from the last entry on the dispatch queue, if it is non-empty. It doesn't really matter what the last extracted sector was if we are not right behind it.
o Clean up and optimize deadline_move_requests().
o Account for the size of a request, just a little bit. A streaming transfer isn't free, it's just a lot cheaper than a seek.
o Make deadline_check_fifo() more readable.
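A self-contained sketch of the expired-FIFO check that drives this batching logic - simplified types with a stubbed jiffies counter, not the kernel's actual structures:

    #include <stdbool.h>
    #include <stddef.h>

    static unsigned long jiffies;       /* stub for the kernel's tick counter */

    /* Simplified stand-ins for the real structures (illustration only). */
    struct request {
        unsigned long expires;          /* jiffy at which the deadline hits */
        struct request *next;           /* FIFO kept in age order, oldest first */
    };

    struct deadline_data {
        struct request *fifo_head;
        int fifo_batch;                 /* e.g. 32: requests dispatched per batch */
    };

    /* Only the FIFO head needs checking: the list is age-ordered, so if
     * the oldest request is still within its deadline, the rest are too.
     * The subtract-and-cast idiom is wrap-safe, like time_after_eq(). */
    static bool deadline_check_fifo(const struct deadline_data *dd)
    {
        const struct request *rq = dd->fifo_head;

        return rq != NULL && (long)(jiffies - rq->expires) >= 0;
    }

    int main(void)
    {
        struct request rq = { .expires = 10, .next = NULL };
        struct deadline_data dd = { .fifo_head = &rq, .fifo_batch = 32 };

        jiffies = 20;                   /* past the deadline */
        return deadline_check_fifo(&dd) ? 0 : 1;
    }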
-
Ingo Molnar authored
From Andrew Morton. There are a couple of places where we would enable interrupts while write-holding the tasklist_lock ... nasty.
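A minimal sketch of the hazard, with stub lock/IRQ primitives standing in for the kernel's (illustration, not the actual patch): tasklist_lock is taken with interrupts disabled precisely because readers can run from interrupt context, so re-enabling interrupts while write-holding it invites a deadlock.

    /* Stubs standing in for the kernel primitives (illustration only). */
    static int tasklist_lock;

    static void write_lock_irq(int *lock)   { (void)lock; /* cli + acquire */ }
    static void write_unlock_irq(int *lock) { (void)lock; /* release + sti */ }
    static void local_irq_enable(void)      { /* sti */ }

    void buggy(void)
    {
        write_lock_irq(&tasklist_lock);
        local_irq_enable();             /* BUG: an interrupt can now arrive and
                                         * try to read-lock tasklist_lock on
                                         * this CPU -> deadlock */
        write_unlock_irq(&tasklist_lock);
    }

    void fixed(void)
    {
        write_lock_irq(&tasklist_lock);
        /* ... do the work with interrupts off ... */
        write_unlock_irq(&tasklist_lock);   /* interrupts come back on here */
    }

    int main(void)
    {
        fixed();
        return 0;
    }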
-
- 25 Sep, 2002 12 commits
-
Jeff Garzik authored
into mandrakesoft.com:/home/jgarzik/repo/misc-2.5
-
Jeff Garzik authored
into mandrakesoft.com:/home/jgarzik/repo/irda-2.5
-
Jeff Garzik authored
into mandrakesoft.com:/home/jgarzik/repo/net-drivers-2.5
-
Albert Cranford authored
-
Linus Torvalds authored
Merge bk://ldm.bkbits.net/linux-2.5 into home.transmeta.com:/home/torvalds/v2.5/linux
-
Andrew Morton authored
Had a weird oops from Bill Irwin - the pdflush_list was corrupt. The only thing I can think of is that something sprayed out a wakeup when it shouldn't. So tighten things up against that, and add some printks to catch it if it happens again.
-
Andrew Morton authored
Well it's a one-liner. sys_sync() only syncs one queue at a time, and can be slow if you have a lot of disks. So poke pdflush, which knows how to write all the queues in parallel.
-
Andrew Morton authored
[This has four scalps already. Thomas Molina has agreed to track things as they are identified.]

Infrastructure to detect sleep-inside-spinlock bugs. Really only useful if compiled with CONFIG_PREEMPT=y. It prints out a whiny message and a stack backtrace if someone calls a function which might sleep from within an atomic region (sketched below).

This patch generates a storm of output at boot, due to drivers/ide/ide-probe.c:init_irq() calling lots of things which it shouldn't under ide_lock. It'll find other bugs too.
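A stripped-down userspace sketch of such a check - a hypothetical atomic-region counter and a stubbed backtrace printer stand in for the real machinery, which relies on the preempt count that CONFIG_PREEMPT=y maintains:

    #include <stdio.h>

    static int preempt_count;   /* >0 inside an atomic (e.g. spinlocked) region;
                                 * stand-in for the per-task count that only
                                 * CONFIG_PREEMPT=y keeps accurate */

    static void dump_stack(void)    /* stub for the kernel backtrace printer */
    {
        printf("  <stack backtrace would go here>\n");
    }

    /* Call at the top of any function that may sleep; it whines if we
     * are currently atomic and thus must not schedule. */
    static void might_sleep(const char *file, int line)
    {
        if (preempt_count != 0) {
            printf("Sleeping function called from atomic context at %s:%d\n",
                   file, line);
            dump_stack();
        }
    }

    int main(void)
    {
        preempt_count++;                    /* as if a spinlock were held */
        might_sleep(__FILE__, __LINE__);    /* triggers the warning */
        preempt_count--;
        return 0;
    }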
-
Andrew Morton authored
A patch from Ed Tomlinson which improves the way in which the kernel reclaims slab objects.

The theory is: a cached object's usefulness is measured in terms of the number of disk seeks which it saves. Furthermore, we assume that one dentry or inode saves as many seeks as one pagecache page. So we reap slab objects at the same rate as we reclaim pages. For each 1% of reclaimed pagecache we reclaim 1% of slab. (Actually, we _scan_ 1% of slab for each 1% of scanned pages.)

Furthermore we assume that one swapout costs twice as many seeks as one pagecache page, and twice as many seeks as one slab object. So we double the pressure on slab when anonymous pages are being considered for eviction (see the sketch after this message).

The code works nicely, and smoothly. Possibly it does not shrink slab hard enough, but that is now very easy to tune up and down. It is just: ratio *= 3; in shrink_caches().

Slab caches no longer hold onto completely empty pages. Instead, pages are freed as soon as they have zero objects. This is possibly a performance hit for slabs which have constructors, but it's doubtful. Most allocations after a batch of frees are satisfied from inside internally-fragmented pages, and by the time slab gets back onto using the wholly-empty pages they'll be cache-cold. slab would be better off going and requesting a new, cache-warm page and reconstructing the objects therein. (Once we have the per-cpu hot-page allocator in place. It's happening.)

As a consequence of the above, kmem_cache_shrink() is now unused. No great loss there - the serialising effect of kmem_cache_shrink() and its semaphore in front of page reclaim was measurably bad.

Still todo:
- batch up the shrinking so we don't call into prune_dcache and friends at high frequency asking for a tiny number of objects
- maybe expose the shrink ratio via a tunable
- clean up slab.c
- highmem page reclaim in prune_icache: highmem pages can pin inodes
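A sketch of the proportional-pressure rule described above, with hypothetical names (the real logic lives around shrink_caches(); overflow handling omitted for clarity):

    #include <stdio.h>

    /* Scan slab objects in proportion to the fraction of the pagecache
     * scanned; double the pressure when anonymous pages are also being
     * considered, since a swapout is assumed to cost twice the seeks. */
    static unsigned long slab_objects_to_scan(unsigned long pages_scanned,
                                              unsigned long total_pages,
                                              unsigned long total_slab_objects,
                                              int scanning_anon)
    {
        unsigned long ratio = scanning_anon ? 2 : 1;    /* cf. the "ratio *= 3" knob */

        if (total_pages == 0)
            return 0;
        return total_slab_objects * pages_scanned * ratio / total_pages;
    }

    int main(void)
    {
        /* 1% of the pagecache scanned -> scan 1% of slab (2% with anon). */
        printf("%lu\n", slab_objects_to_scan(10, 1000, 5000, 0));   /* 50 */
        return 0;
    }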
-
Andrew Morton authored
This uses the new wakeup machinery in some hot parts of the VFS and block layers: wait_on_buffer(), wait_on_page(), lock_page(), blk_congestion_wait(). Also in get_request_wait(), although the benefit for exclusive wakeups will be lower.
-
Andrew Morton authored
This is worth a whopping 2% on specweb on an 8-way. Which is faintly surprising, because __wake_up and other wait/wakeup functions are not apparent in the specweb profiles which I've seen.

The main objective of this is to reduce the CPU cost of the wait/wakeup operation. When a task is woken up, its waitqueue is removed from the waitqueue_head by the waker (ie: immediately), rather than by the woken process. This means that a subsequent wakeup does not need to revisit the just-woken task. It also means that the just-woken task does not need to take the waitqueue_head's lock, which may well reside in another CPU's cache (a simplified sketch of this follows the message).

I have no decent measurements on the effect of this change - possibly a 20-30% drop in __wake_up cost in Badari's 40-dds-to-40-disks test (it was the most expensive function), but it's inconclusive. And no quantitative testing of which I am aware has been performed by networking people.

The API is very simple to use (Linus thought it up):

    my_func(waitqueue_head_t *wqh)
    {
        DEFINE_WAIT(wait);

        prepare_to_wait(wqh, &wait, TASK_UNINTERRUPTIBLE);
        if (!some_test)
            schedule();
        finish_wait(wqh, &wait);
    }

or:

    DEFINE_WAIT(wait);

    while (!some_test_1) {
        prepare_to_wait(wqh, &wait, TASK_UNINTERRUPTIBLE);
        if (!some_test_2)
            schedule();
        ...
    }
    finish_wait(wqh, &wait);

You need to bear in mind that once prepare_to_wait has been performed, your task could be removed from the waitqueue_head and placed into TASK_RUNNING at any time. You don't know whether or not you're still on the waitqueue_head.

Running prepare_to_wait() when you're already on the waitqueue_head is fine - it will do the right thing. Running finish_wait() when you're actually not on the waitqueue_head is fine. Running finish_wait() when you've _never_ been on the waitqueue_head is fine, as long as the DEFINE_WAIT() macro was used to initialise the waitqueue.

You don't need to fiddle with current->state. prepare_to_wait() and finish_wait() will do that. finish_wait() will always return in state TASK_RUNNING.

There are plenty of usage examples in vm-wakeups.patch and tcp-wakeups.patch.
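A simplified sketch of why waker-side removal pays off - hypothetical list code, not the kernel's: the waker dequeues the entry before marking the task runnable, so a later wakeup never re-scans it, and finish_wait() only needs the (possibly cache-remote) queue lock in the rare case where the task was never actually woken.

    #include <stdbool.h>

    /* Simplified doubly linked list node (stand-in for the kernel's
     * list_head; illustration only). */
    struct list_node {
        struct list_node *prev, *next;
    };

    static void list_init(struct list_node *n)  { n->prev = n->next = n; }
    static bool list_empty(struct list_node *n) { return n->next == n; }
    static void list_del_init(struct list_node *n)
    {
        n->prev->next = n->next;
        n->next->prev = n->prev;
        list_init(n);
    }

    /* The waker dequeues the entry itself, so a later wakeup never
     * re-scans this task and the wakee usually avoids the queue lock. */
    static void wake_one(struct list_node *waiter)
    {
        list_del_init(waiter);  /* removed by the *waker*, immediately */
        /* ... then mark the owning task TASK_RUNNING ... */
    }

    /* finish_wait() only needs the waitqueue_head lock in the rare case
     * where we were never woken and are therefore still queued. */
    static void finish_wait_sketch(struct list_node *waiter)
    {
        if (!list_empty(waiter)) {
            /* take waitqueue_head lock ... */
            list_del_init(waiter);
            /* ... release lock */
        }
    }

    int main(void)
    {
        struct list_node head, waiter;

        list_init(&head);
        /* enqueue waiter behind head */
        waiter.prev = &head;
        waiter.next = head.next;
        head.next->prev = &waiter;
        head.next = &waiter;

        wake_one(&waiter);              /* waker dequeues immediately */
        finish_wait_sketch(&waiter);    /* fast path: nothing left to do */
        return list_empty(&waiter) ? 0 : 1;
    }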
-
Andrew Morton authored
From David M-T. When this function successfully merges the new range into an existing VMA, it forgets to extend the new protection mode into the just-merged pages.
-