- 06 Oct, 2002 1 commit
-
-
Russell King authored
The PCMCIA layer claims the IO or memory regions for all cards. This means that any port registered via 8250_cs must not cause the 8250 code to claim the resources itself. We also add support for iomem-based ports at initialisation time for PPC.
-
- 05 Oct, 2002 22 commits
-
-
Andrew Morton authored
Hardly anything uses this function, so the debug checks in there are not of much value. The check for bdev_readonly() should be done in submit_bio(). Local variable `major' was altogether unused.
-
Andrew Morton authored
The ratelimiting logic in balance_dirty_pages_ratelimited() is designed to prevent excessive calls to the expensive get_page_state(): on a big machine we only check to see if we're over dirty memory limits once per 1024 dirtyings per cpu. This works OK normally, but it has the effect of allowing each process to go 1024 pages over the dirty limit before it gets throttled. So if someone runs 16000 tiobench threads, they can go 16G over the dirty memory threshold and die the death of buffer_head consumption, because page dirtiness pins the page's buffer_heads, defeating the special buffer_head reclaim logic. I'd left this overshoot artifact in place because it provides a degree of adaptivity - if someone is running hundreds of dirtying processes (dbench!) then they do want to overshoot the dirty memory limit. But it's hard to balance, and is really not worth the futzing around. So change the logic to only perform the get_page_state() call rate limiting if we're known to be under the dirty memory threshold.
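A minimal sketch of the reshaped logic, with illustrative names (the real counter is per-cpu, and dirty_exceeded here stands in for however the threshold state is actually tracked):

    /* Rate-limit the expensive get_page_state() check, but only while
     * we are known to be under the dirty memory threshold. */
    static int dirty_exceeded;              /* set once we cross the limit */

    void balance_dirty_pages_ratelimited(struct address_space *mapping)
    {
            static int calls;               /* per-cpu in the real code */
            int ratelimit = dirty_exceeded ? 8 : 1024;

            if (++calls >= ratelimit) {
                    calls = 0;
                    balance_dirty_pages(mapping);   /* does get_page_state() */
            }
    }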
-
Andrew Morton authored
The patch removes page->virtual for all architectures which do not define WANT_PAGE_VIRTUAL, and hashes for it instead. Possibly we could define WANT_PAGE_VIRTUAL for CONFIG_HIGHMEM4G, but it seems unlikely. A lot of the pressure went off kmap() and page_address() as a result of the move to kmap_atomic(). That should be the preferred way to address CPU load in the set_page_address() and page_address() hashing and locking. If kmap_atomic is not usable then the next best approach is for users to cache the result of kmap() in a local rather than calling page_address() repeatedly. One heavy user of kmap() and page_address() is the ext2 directory code. On a 7G Quad PIII, running four concurrent instances of `while true; do find /usr/src/linux > /dev/null; done' on ext2 with everything cached, profiling shows that the new hashed set_page_address() and page_address() implementations consume 0.4% and 1.3% of CPU time respectively. I think that's OK.
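A sketch of the hashed lookup (illustrative, not the verbatim kernel code; bucket initialisation and locking are omitted):

    struct page_address_map {
            struct page *page;
            void *virtual;
            struct list_head list;
    };

    #define PA_HASH_ORDER 7
    static struct list_head pa_hash[1 << PA_HASH_ORDER];

    void *page_address(struct page *page)
    {
            struct list_head *pos;

            if (!PageHighMem(page))         /* lowmem: directly computable */
                    return lowmem_page_address(page);

            list_for_each(pos, &pa_hash[hash_ptr(page, PA_HASH_ORDER)]) {
                    struct page_address_map *pam =
                            list_entry(pos, struct page_address_map, list);
                    if (pam->page == page)
                            return pam->virtual;
            }
            return NULL;                    /* not currently kmapped */
    }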
-
Andrew Morton authored
This is the replacement for write_mapping_buffers(). Whenever the mpage code sees that it has just written a block which had buffer_boundary() set, it assumes that the next block is dirty filesystem metadata. (This is a good assumption - that's what buffer_boundary is for). So we do a lookup in the blockdev mapping for the next block and if it is present and dirty, then schedule it for IO. So the indirect blocks in the blockdev mapping get merged with the data blocks in the file mapping. This is a bit more general than the write_mapping_buffers() approach. write_mapping_buffers() required that the fs carefully maintain the correct buffers on the mapping->private_list, and that the fs call write_mapping_buffers(), and the implementation was generally rather yuk. This version will "just work" for filesystems which implement buffer_boundary correctly. Currently this is ext2, ext3 and some not-yet-merged reiserfs patches. JFS implements buffer_boundary() but does not use ext2-like layouts - so there will be no change there. Works nicely.
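The idea, sketched (not necessarily the exact helper the patch adds): after writing block N with buffer_boundary() set, probe the blockdev mapping for block N+1 and push it out if it is dirty:

    void write_boundary_block(struct block_device *bdev,
                              sector_t bblock, unsigned blocksize)
    {
            struct buffer_head *bh = __find_get_block(bdev, bblock + 1, blocksize);

            if (bh) {
                    if (buffer_dirty(bh))   /* very likely an indirect block */
                            ll_rw_block(WRITE, 1, &bh);
                    put_bh(bh);
            }
    }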
-
Andrew Morton authored
When the global buffer LRU was present, dirty ext2 indirect blocks were automatically scheduled for writeback alongside their data. I added write_mapping_buffers() to replace this - the idea was to schedule the indirects close in time to the scheduling of their data. It works OK for small-to-medium-sized files, but for large, linear writes it doesn't work: the request queue is completely full of file data, and by the time we come to schedule the indirects, their neighbouring data has already been written. So writeback of really huge files tends to be a bit seeky. So. Kill it. Will fix this problem by other means.
-
Andrew Morton authored
From Badari Pulavarty. Rather than allocating maximum-sized BIOs, use the new bio_get_nr_vecs() hint when sizing the BIOs. Also keep track of the approximate upper-bound on the number of pages remaining to do, so we can again avoid allocating excessively-sized BIOs.
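A sketch of the sizing, with dio_bio_alloc() as a hypothetical wrapper name:

    static struct bio *dio_bio_alloc(struct block_device *bdev,
                                     int nr_pages_left)
    {
            int nr_vecs = bio_get_nr_vecs(bdev);    /* the queue's merge hint */

            if (nr_vecs > nr_pages_left)            /* bound by remaining work */
                    nr_vecs = nr_pages_left;
            return bio_alloc(GFP_KERNEL, nr_vecs);
    }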
-
Andrew Morton authored
By Vincent Hanquez <tab@tuxfamily.org>
-
Andrew Morton authored
Use the bio_get_nr_vecs() hint for sizing the BIOs which writeback allocates.
-
Andrew Morton authored
The page reclaim logic will bail out if all zones are at pages_high. But if the caller is requesting a higher-order allocation we need to go on and free more memory anyway. That's the only way we have of addressing buddy fragmentation.
-
Andrew Morton authored
There is some lack of clarity in what kswapd does and what direct-reclaim tasks do; try_to_free_pages() tries to service both functions, and they are different.
- kswapd's role is to keep all zones on its node at zone->free_pages >= zone->pages_high, and to never stop as long as any zone does not meet that condition.
- A direct reclaimer's role is to try to free some pages from the zones which are suitable for this particular allocation request, and to return when that has been achieved, or when all the relevant zones are at zone->free_pages >= zone->pages_high.
The patch explicitly separates these two code paths; kswapd does not run try_to_free_pages() any more. kswapd should not be aware of zone fallbacks.
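A sketch of kswapd's side of the split, with shrink_zone() standing in for whatever reclaim primitive is actually called:

    static void kswapd_balance_pgdat(pg_data_t *pgdat)
    {
            int i;

            for (i = 0; i < pgdat->nr_zones; i++) {
                    struct zone *zone = pgdat->node_zones + i;

                    /* keep going until this zone reaches pages_high */
                    while (zone->free_pages < zone->pages_high)
                            shrink_zone(zone);      /* illustrative reclaim call */
            }
    }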
-
Andrew Morton authored
When the mempool is empty, tasks wait on the waitqueue in "exclusive mode". So one task is woken for each returned element. But if the number of tasks which are waiting exceeds the mempool's specified size (min_nr), mempool_free() ends up deciding that as the pool is fully replenished, there cannot possibly be anyone waiting for more elements. But with 16384 threads running tiobench, it happens. We could fix this with a waitqueue_active() test in mempool_free(). But rather than adding that test to this fastpath I changed the wait to be non-exclusive, and used the prepare_to_wait/finish_wait API, which will be quite beneficial in this case. Also, convert the schedule() in mempool_alloc() to an io_schedule(), so this sleep time is accounted as "IO wait". Which is a bit approximate - we don't _know_ that the caller is really waiting for IO completion. But for most current users of mempools, io_schedule() is more accurate than schedule() here.
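A sketch of the reworked sleep in mempool_alloc(), trimmed of the reserved-pool refill and gfp-flag handling:

    void *mempool_alloc(mempool_t *pool, int gfp_mask)
    {
            void *element;
            DEFINE_WAIT(wait);

            for (;;) {
                    element = pool->alloc(gfp_mask, pool->pool_data);
                    if (element)
                            return element;
                    /* non-exclusive wait: every freed element wakes all waiters */
                    prepare_to_wait(&pool->wait, &wait, TASK_UNINTERRUPTIBLE);
                    if (!pool->curr_nr)
                            io_schedule();          /* accounted as "IO wait" */
                    finish_wait(&pool->wait, &wait);
            }
    }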
-
Andrew Morton authored
If the alignment checks in generic_direct_IO() fail, we end up not forcing writeback of dirty pagecache pages, but we still run invalidate_inode_pages2(). The net result is that dirty pagecache gets incorrectly removed; I guess this will expose unwritten disk blocks. So move the sync up into generic_file_direct_IO(), where we perform the invalidation, so that we know pagecache and disk are in sync before we do anything else.
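A sketch of the new ordering (signatures abbreviated and illustrative):

    static ssize_t generic_file_direct_IO(int rw, struct inode *inode,
                                          char *buf, loff_t offset, size_t count)
    {
            struct address_space *mapping = inode->i_mapping;
            ssize_t retval;

            filemap_fdatawrite(mapping);    /* push dirty pagecache to disk */
            filemap_fdatawait(mapping);     /* ...and wait for completion */
            retval = generic_direct_IO(rw, inode, buf, offset, count);
            invalidate_inode_pages2(mapping);  /* nothing dirty can be lost now */
            return retval;
    }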
-
Andrew Morton authored
The new truncate code needs to check page->mapping after acquiring the page lock, because the page could have been unmapped by page reclaim or by invalidate_inode_pages() while we waited for the page lock. The page may also have been moved between a tmpfs inode and swapper_space, because we no longer hold mapping->page_lock across the entire truncate operation. Also, change the initial truncate scan (the non-blocking one which is there to stop as much writeout as possible) so that it is immune to other CPUs decreasing page->index. Also fix a negated test in invalidate_inode_pages2(). Not sure how that got in there.
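The re-check pattern, sketched over a pagevec scan (truncate_complete_page() names the per-page work; details are illustrative):

    for (i = 0; i < pagevec_count(&pvec); i++) {
            struct page *page = pvec.pages[i];

            lock_page(page);                /* may sleep: the world can change */
            if (page->mapping != mapping) {
                    /* reclaimed, invalidated, or moved between tmpfs
                     * and swapper_space while we slept */
                    unlock_page(page);
                    continue;
            }
            truncate_complete_page(mapping, page);
            unlock_page(page);
    }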
-
Andrew Morton authored
From David Mosberger. The patch below fixes a bug in nr_free_zone_pages() which shows up when a zone has holes. The problem is due to the fact that "struct zone" didn't keep track of the amount of real memory in a zone. Because of this, nr_free_zone_pages() simply assumed that a zone consists entirely of real memory. On machines with large holes, this has catastrophic effects on VM performance, because the VM system ends up thinking that there is plenty of memory left over in a zone, when in fact it may be completely full. The patch below fixes the problem by replacing the "size" member in "struct zone" with "spanned_pages" and "present_pages" and updating page_alloc.c.
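The shape of the change (other members elided):

    struct zone {
            /* ... other members unchanged ... */
            unsigned long spanned_pages;    /* total pages, holes included */
            unsigned long present_pages;    /* pages actually backed by memory */
    };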
-
Andrew Morton authored
It hasn't caught any bugs, and it is causing confusion over whether this is a permanent part of list_del() behaviour.
-
Andrew Morton authored
From Bill Irwin. This patch makes alloc_hugetlb_page() kmap() the memory it's zeroing, and cleans up a tiny bit of list handling on the side. Without this fix, it oopses every time it's called.
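A sketch of the zeroing loop, with a hypothetical helper name; a highmem page has no permanent kernel mapping, so each constituent page needs a kmap() window:

    static void zero_huge_page(struct page *page)
    {
            int i;

            for (i = 0; i < HPAGE_SIZE / PAGE_SIZE; i++) {
                    void *kaddr = kmap(page + i);   /* map before touching */
                    memset(kaddr, 0, PAGE_SIZE);
                    kunmap(page + i);
            }
    }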
-
Andrew Morton authored
These numbers are being sent to userspace as number-of-sectors, whereas they should be number-of-k.
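With 512-byte sectors the conversion is just a halving; a sketch (helper name illustrative):

    static inline unsigned long sectors_to_kb(unsigned long sectors)
    {
            return sectors >> 1;    /* 512-byte sectors -> 1K units */
    }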
-
Brian Gerst authored
Removes the last member of the union, ext3.
-
Brian Gerst authored
Remove hpfs_sb from struct super_block.
-
Kai Mäkisara authored
Fix device numbering in driverfs and devfs that was broken by the previous patch (bug found by Bjoern A. Zeeb <bz@zabbadoz.net>).
-
Christer Weinigel authored
This patch adds support for the National Semiconductor SCx200 processor family to Linux 2.5. The patch consists of the following drivers:
arch/i386/kernel/scx200.c -- give kernel access to the GPIO pins
drivers/char/scx200_gpio.c -- give userspace access to the GPIO pins
drivers/char/scx200_wdt.c -- watchdog timer driver
drivers/i2c/scx200_i2c.c -- use any two GPIO pins as an I2C bus
drivers/i2c/scx200_acb.c -- driver for the Access.BUS hardware
drivers/mtd/maps/scx200_docflash.c -- driver for a CFI flash connected to the DOCCS pin
-
Petr Vandrovec authored
This patch fixes memory corruption during vfat mount: since the strtok -> strsep conversion, one byte before the mount options is overwritten by ','. It also fixes another problem introduced by the same conversion: VFAT requires that FAT not modify the passed options, but unfortunately the FAT driver fails to preserve the options string if it contains more than one consecutive comma.
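For reference, the behavioural difference that bites here, sketched with generic code (not the actual FAT parser): strsep() writes into its argument and yields empty strings for consecutive separators, so a caller that must hand the original string back intact has to parse a copy and skip empty tokens:

    static int parse_options_copy(const char *options)
    {
            char *buf = kmalloc(strlen(options) + 1, GFP_KERNEL);
            char *next, *p;

            if (!buf)
                    return -ENOMEM;
            strcpy(buf, options);           /* parse a copy, preserve the original */
            next = buf;
            while ((p = strsep(&next, ",")) != NULL) {
                    if (!*p)
                            continue;       /* ",," yields an empty token */
                    /* handle the option in p */
            }
            kfree(buf);
            return 0;
    }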
-
- 04 Oct, 2002 17 commits
-
-
Linus Torvalds authored
to make it possible to track down.
-
Linus Torvalds authored
Cset exclude: ink@jurassic.park.msu.ru|ChangeSet|20021003201553|58706
-
Trond Myklebust authored
Duh... Even a simple one-liner test can be wrong. The really sad bit is that I made the same mistake 3 weeks ago, fixed it, and then lost track of the fix... To recap, a fix to the fix: a valid end-of-directory marker has to read (entry[0] == 0 && entry[1] != 0). Here is the final correct (I hope) patch.
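In other words (a sketch restating the test; entry points at the two 32-bit words of a readdir reply item):

    static inline int nfs_at_eod(u32 *entry)
    {
            /* no further entry follows, and the EOF word is set */
            return entry[0] == 0 && entry[1] != 0;
    }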
-
Anton Blanchard authored
I think I have found it, and it only hits on a 64-bit machine. If the timeout is big enough we still need to initialise timer->entry; otherwise bad things happen when we hit del_timer.
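A sketch of the point of the fix (illustrative function, not the exact diff):

    void sleep_for(signed long timeout)
    {
            struct timer_list timer;

            init_timer(&timer);
            INIT_LIST_HEAD(&timer.entry);   /* even if the timeout is so big
                                             * that the timer is never added,
                                             * a later del_timer(&timer) must
                                             * find a valid, empty list */
            /* ... arm the timer and schedule() as usual ... */
            del_timer(&timer);
    }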
-
Linus Torvalds authored
Merge http://linuxusb.bkbits.net/pci-2.5 into penguin.transmeta.com:/home/penguin/torvalds/repositories/kernel/linux
-
Linus Torvalds authored
-
Greg Kroah-Hartman authored
into kroah.com:/home/greg/linux/BK/pci-2.5
-
Greg Kroah-Hartman authored
-
Linus Torvalds authored
-
Martin Schwidefsky authored
Replace the IMMEDIATE_BH bottom half with tasklets in the helper functions for console control characters. Fix a race condition and make it look nicer.
-
Martin Schwidefsky authored
Don't create /proc/interrupts on s390.
-
Martin Schwidefsky authored
Remove the call to s390_init_machine_check from init/main.c; the new boot code on s390 calls it via arch_initcall.
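The registration pattern this relies on, sketched:

    static int __init s390_init_machine_check(void)
    {
            /* ... set up the machine check handler ... */
            return 0;
    }
    arch_initcall(s390_init_machine_check);     /* no explicit call needed */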
-
Martin Schwidefsky authored
Rework the boot sequence on s390: Traditionally, device detection on s390 is done completely at a _very_ early stage during bootup (from init_irq(), i.e. before memory management or the console are there). This has always been a bad idea, but now it broke even more since the Linux driver model requires device detection to take place after the core_initcalls are done. We now do only a small amount of scanning (probably less in the future) at the early stage; the bulk of it is done from a proper subsys_initcall(). This requires some changes in related areas:
- the machine check handler initialization is split in two halves, since we want to catch major machine malfunctions as early as possible, but device machine checks can only be caught after the channel subsystem is up.
- some functions that are called from the css initialization made some assumptions of when to use kmalloc or bootmem_alloc, which were broken anyway. We fix this here and hopefully can get rid of bootmem_alloc for the css completely in the future.
- the debug logging feature for s390 was not used for functions in the initialization before, since it requires the memory management to be working. Now that we can be sure that it works, some special cases can be removed.
Now that these changes are done, a partial implementation of the device model for the channel subsystem is possible, but at this point, none of the device drivers make use of that yet.
-
Martin Schwidefsky authored
Check if defined chpids are available. Some code simplification.
-
Martin Schwidefsky authored
Clean up s390_process_IRQ a little; the ending_status argument is never really used.
-
Martin Schwidefsky authored
Remove bogus sanity check from {en,dis}able_sync_isc() and really disable all interrupt sub classes except isc 7 in wait_cons_dev.
-
Martin Schwidefsky authored
Add a 'signal quiesce' feature to the s390 hardware console. A signal quiesce is sent from VM or the service element every time the system should shut down. We receive the quiesce signal and call ctrl_alt_del(). Finally the mainframes have ctrl-alt-del as well :-)
-