- 12 Apr, 2004 40 commits
-
-
Andrew Morton authored
From: Geert Uytterhoeven <geert@linux-m68k.org> PCI multi-IO card support depends on PCI
-
Andrew Morton authored
From: Geert Uytterhoeven <geert@linux-m68k.org> Recent serial changes moved some code, causing unused variable warnings.
-
Andrew Morton authored
From: Geert Uytterhoeven <geert@linux-m68k.org> jiffies must be unsigned long
-
Andrew Morton authored
From: Martin Schwidefsky <schwidefsky@de.ibm.com> The patch avoids the instantiation of pagetables for not-present pages in get_user_pages(). Without this, the coredump code can cause total memory exhaustion in pagetables. Consider a store to current stack - 1TB. The stack vma is extended to include this address because of VM_GROWSDOWN. If such a process dies (which is likely for a defunct process) then the elf core dumper will cause the system to hang because of too many page tables. We specially recognise this situation and simply return a reference to the zero page.
-
Andrew Morton authored
From: "Randy.Dunlap" <rddunlap@osdl.org> From: Michael Still <mikal@stillhq.com> Correct a kernel-doc comment which documented the wrong parameters.
-
Andrew Morton authored
From: Pavel Machek <pavel@suse.cz> If you stop a process with ^Z and then suspend, the process is awakened. That's a bug. The solution is to simply leave already-stopped processes alone. Also, we no longer use TASK_STOPPED for processes in the refrigerator: userland might see it and get confused.
-
Andrew Morton authored
From: Pavel Machek <pavel@ucw.cz> It makes swsusp behave correctly w.r.t. discontigmem, and adds highmem handling.
-
Andrew Morton authored
From: Pavel Machek <pavel@ucw.cz> Bill Irwin did some work on this. It makes swsusp behave correctly w.r.t. discontigmem, and adds highmem handling (very simple-minded, but should work OK with 1GB). It should now behave correctly w.r.t. more than one swap device, and it fixes double restoring of the console.
-
Andrew Morton authored
From: Rene Herman <rene.herman@keyaccess.nl> This patch improves the i386/mach-default probe_roms(). It also c99ifies the data, adds an IORESOURCE_IO flag to the I/O port resources, an IORESOURCE_MEM flag to the VRAM resource, and IORESOURCE_READONLY | IORESOURCE_MEM to the ROM resources, and adds two additional "adapter ROM slots" (for a total of 6), since it now also scans the 0xe0000 segment.
-
Andrew Morton authored
From: Rene Herman <rene.herman@keyaccess.nl> The i386 probe_roms() function currently has a fair number of problems:

- When you actually have an adapter ROM in the machine, your video ROM disappears. This is due to the pc9800 subarch merge that split it up into probe_video_rom(int roms) and probe_extension_roms(int roms), but expects a "roms++" in probe_video_rom() to have an effect outside of that function.

- The majority of VGA adapters these days host a ROM larger than 32K, yet the current code hardcodes a 32K ROM. The VGA BIOS "length" byte is normally valid (it in fact needs to be for a regular mainboard BIOS to accept it) and I've verified on a few dozen very new to very old VGAs that it is. However, assuming someone actually did not check the length and checksum there for a reason, the safe thing to do here is to accept the length byte only when we also get a valid checksum.

- The current code scans 0xc0000 to 0xdffff for a video ROM, while the standard PC thing to do (which is what the BIOS does) is to only scan for a video ROM starting between 0xc0000 and 0xc7fff. This means that on a headless (or BIOS-less monochrome adapter) box, the first adapter ROM found triggers the registration of a 32K "Video ROM" at hardcoded address 0xc0000, even when _nothing_ is present between 0xc0000 and 0xc7fff.

- The current adapter ROM scan stops at 0xdffff, whether or not an extension ROM is present at 0xe0000. The PC thing to do is to scan 0xc8000 up to 0xdffff if an extension ROM is present, and up to 0xeffff when it's not (it's not/hardly ever).

- Adapter ROMs are called "Extension ROM", but the latter term is really better reserved for a motherboard extension ROM.

- Currently, the code happily starts scanning through a ROM it just registered looking for the next one (it just does += 2048, even when that's inside the previous ROM), which is at least silly.

Unfortunately, this code is "subarched" between mach-default and mach-pc9800, meaning the patch got a bit involved. Currently all this code, and gobs of data, is defined (not just declared) in the header include/asm-i386/mach-{default,pc9800}/mach_resources.h, which isn't nice. That .h really wants to be a .c.

The first patch, in the next message, does not change any code but only undoes the probe_video_rom / probe_extension_roms split and moves the code to a new file arch/i386/mach-{default,pc9800}/std_resources.c, with a header include/asm-i386/std_resources.h for the prototypes only. The second patch overhauls the code itself for mach-default; please see the comments on top of that patch for (yet more) comments. It's tested on various machines, with and without adapter ROMs.

I haven't touched pc9800; nothing should have changed there though. The pc9800 author, as given in the code, is CCed. Also, x86-64 inherits the probe_roms() code from 2.4, and while it doesn't have the subarch-specific problems, it has all the others. I'll convert it too if this i386 version is deemed desirable.

This patch doesn't change any code, just moves stuff from the "mach_resources.h" header to a "std_resources.c" subarch-specific file, and introduces a "std_resources.h" header for the prototypes.
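In C, the length-plus-checksum rule described above amounts to roughly the following; a minimal sketch, where the helper names and the 32K fallback are illustrative assumptions rather than the patch's exact code:

    /* Expansion-ROM header: 0x55 0xAA signature, then a length byte
       counting 512-byte blocks; a valid image sums to 0 modulo 256. */
    static int romsignature(const unsigned char *rom)
    {
            return rom[0] == 0x55 && rom[1] == 0xaa;
    }

    static int romchecksum(const unsigned char *rom, unsigned long length)
    {
            unsigned char sum = 0;

            while (length--)
                    sum += *rom++;
            return sum == 0;
    }

    /* Trust the BIOS-reported length only when the checksum also matches;
       otherwise fall back to the old hardcoded 32K assumption. */
    static unsigned long video_rom_length(const unsigned char *rom)
    {
            unsigned long length = rom[2] * 512;

            if (romsignature(rom) && length && romchecksum(rom, length))
                    return length;
            return 32 * 1024;
    }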
-
Andrew Morton authored
Almost everywhere JBD removes a buffer from the transaction lists, the caller then nulls out jh->b_transaction. Sometimes the caller does that without holding the locks which are defined to protect b_transaction. This makes me queasy. So change things so that __journal_unfile_buffer() nulls out b_transaction itself, inside both j_list_lock and jbd_lock_bh_state(). It cleans things up a bit, too.
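In sketch form, the new convention is simply the following (the unlink helper's name is an assumption; the real function also maintains the journal list pointers and state):

    /* Called with both j_list_lock and jbd_lock_bh_state() held. */
    void __journal_unfile_buffer(struct journal_head *jh)
    {
            __journal_temp_unlink_buffer(jh);  /* take jh off its transaction list */
            jh->b_transaction = NULL;          /* cleared here, under the locks,
                                                  instead of by each caller */
    }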
-
Andrew Morton authored
We're seeing heavy contention against j_list_lock on 8-way in do_get_write_access(). We actually don't need j_list_lock in there except for one little case - the per-bh jbd_lock_bh_state() is sufficient to protect this buffer's internal state. On some nice quick LVM array Ram Pai measured an overall 3x speedup from this patch: the script took the following time on 2.6.5-mm1:

    real 0m57.504s
    user 0m0.400s
    sys  7m29.867s

and with the two patches it took:

    real 0m19.983s
    user 0m0.438s
    sys  1m55.896s
-
Andrew Morton authored
From: "Randy.Dunlap" <rddunlap@osdl.org>
-
Andrew Morton authored
From: Anton Blanchard <anton@samba.org> Directory notify code was showing up in a dd bs=1024k from 2 raid arrays on an emulex FC adapter:

    3635 69.4896 vmlinux-2.6.5 .default_idle
     332  6.3468 vmlinux-2.6.5 .__copy_tofrom_user
     112  2.1411 vmlinux-2.6.5 .save_remaining_regs
      76  1.4529 vmlinux-2.6.5 .scsi_dispatch_cmd
      64  1.2235 vmlinux-2.6.5 .dnotify_parent
      61  1.1661 vmlinux-2.6.5 .do_generic_mapping_read

We already have a sysctl to enable/disable it; the patch below uses it in dnotify_parent. dnotify_parent disappears and idle time goes up:

    4508 70.8582 vmlinux-2.6.5 .default_idle
     253  3.9767 vmlinux-2.6.5 .__copy_tofrom_user
     142  2.2320 vmlinux-2.6.5 .save_remaining_regs
      88  1.3832 vmlinux-2.6.5 .shrink_zone
      84  1.3203 vmlinux-2.6.5 .elx_drvr_unlock
      75  1.1789 vmlinux-2.6.5 .scsi_dispatch_cmd
      69  1.0846 vmlinux-2.6.5 .do_generic_mapping_read

Of course, to gain this small speedup users need to know to set /proc/sys/fs/dir-notify-enable to zero. Nobody does that.
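The change is essentially a one-line bail-out at the top of dnotify_parent(); a minimal sketch, assuming the existing dir_notify_enable variable that backs /proc/sys/fs/dir-notify-enable (the rest of the body is simplified):

    void dnotify_parent(struct dentry *dentry, unsigned long event)
    {
            struct dentry *parent;

            if (!dir_notify_enable)         /* sysctl is off: do no work at all */
                    return;

            spin_lock(&dentry->d_lock);
            parent = dentry->d_parent;
            if (parent->d_inode->i_dnotify_mask & event) {
                    dget(parent);
                    spin_unlock(&dentry->d_lock);
                    __inode_dir_notify(parent->d_inode, event);
                    dput(parent);
            } else {
                    spin_unlock(&dentry->d_lock);
            }
    }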
-
Andrew Morton authored
From: Marcelo Tosatti <marcelo.tosatti@cyclades.com> The cyclades.c driver was marked BROKEN_ON_SMP during early 2.6. It was fixed later on, but the tag was left in Kconfig. The driver is not very smart wrt SMP locking and can be improved: there is only one spinlock per card, which guarantees command block ordering and protects various shared data, and it can be held for long periods. _But_ the locking works reliably, so remove the BROKEN_ON_SMP tag.
-
Andrew Morton authored
From: "Martin J. Bligh" <mbligh@aracnet.com> I'd prefer we renamed this to page_to_nid() before anyone starts using it. This fits with the naming convention of everything else (pfn_to_nid, etc). Nobody uses it right now - I grepped the whole tree.
-
Andrew Morton authored
From: Hugh Dickins <hugh@veritas.com> Some arches refer to page->mapping for their dcache flushing: use page_mapping(page) for safety, to avoid confusion on anon pages, which will store a different pointer there - though in most cases flush_dcache_page is being applied to pagecache pages. arm has a useful mapping_mapped macro: move that to generic, and add mapping_writably_mapped, to avoid explicit list_empty checks on i_mmap and i_mmap_shared in several places. Very tempted to add page_mapped(page) tests, perhaps along with the mapping_writably_mapped tests in do_generic_mapping_read and do_shmem_file_read, to cut down on wasted flush_dcache effort; but the serialization is not obvious, too unsafe to do in a hurry.
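A minimal sketch of the two helpers, assuming the 2.6.5-era address_space with separate i_mmap and i_mmap_shared lists (written as inline functions here for clarity):

    /* true if anyone has the file mmapped at all */
    static inline int mapping_mapped(struct address_space *mapping)
    {
            return !list_empty(&mapping->i_mmap) ||
                   !list_empty(&mapping->i_mmap_shared);
    }

    /* true only if someone may be writing through a shared mapping */
    static inline int mapping_writably_mapped(struct address_space *mapping)
    {
            return !list_empty(&mapping->i_mmap_shared);
    }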
-
Andrew Morton authored
Fix up the rw_swap_page_sync() horrors by fully decoupling this function from the VM - it is now just a helper function which reads a page from, or writes a page to, swap.
-
Andrew Morton authored
From: Hugh Dickins <hugh@veritas.com> Tracking anonymous pages by anon_vma,pgoff or mm,address needs a pointer,offset pair in struct page: mapping,index is the natural choice. But swapcache uses those for &swapper_space,swp_entry_t. It's trivial to separate swapcache from pagecache with the radix tree; most of swapper_space is actually unused, just a fiction to pretend swap is like a file; and page->private is a good place to keep the swp_entry_t, now that swap never uses bufferheads. Define a PG_anon bit, have page_add_rmap SetPageAnon, and put an oopsable address in page->mapping to test that we're not confused by it. Define a page_mapping(page) macro to give NULL when PageAnon, whatever may be in page->mapping. Define a PG_swapcache bit, and deduce swapper_space from that in the few places we need it. add_to_swap_cache is now distinct from add_to_page_cache. Separating the caches somewhat simplifies the tmpfs swizzling in swap_state.c; now the page can briefly be in both caches. The rmap method remains pte chains, no change to that yet. But there is one small functional difference: the use of PageAnon implies that a page truncated while still mapped will no longer be found and freed (swapped out) by try_to_unmap; it will only be freed by exit or munmap. But normally pages are unmapped by vmtruncate: this should only affect nonlinear mappings, and a later patch not in this batch will fix that.
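Put together, page_mapping() then behaves roughly like this (a sketch based on the description above, written as an inline function rather than the macro the patch describes):

    static inline struct address_space *page_mapping(struct page *page)
    {
            if (PageAnon(page))
                    return NULL;            /* ->mapping holds an oopsable cookie */
            if (PageSwapCache(page))
                    return &swapper_space;  /* deduced from PG_swapcache */
            return page->mapping;
    }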
-
Andrew Morton authored
From: Hugh Dickins <hugh@veritas.com> First of a batch of three rmap patches, paving the way for a move to some form of object-based rmap (probably Andrea's, but drawing from mine too), and making almost no functional change by itself. A few days will intervene before the next batch, to give the struct page changes in the second patch some exposure before proceeding. rmap 1: create include/linux/rmap.h. Start small: linux/rmap-locking.h has already gathered some declarations unrelated to locking, and the rest of the rmap declarations were over in linux/swap.h; gather them all together in linux/rmap.h, and rename pte_chain_lock to rmap_lock.
-
Andrew Morton authored
From: Jens Axboe <axboe@suse.de> CFQ I/O scheduler
-
Andrew Morton authored
From: Jens Axboe <axboe@suse.de> There's a small discrepancy in when we decide to unplug a queue based on q->unplug_thresh. Basically it doesn't work for tagged queues, since q->rq.count[READ] + q->rq.count[WRITE] is just the number of allocated requests, not the number of requests stuck in the io scheduler. We could just change the nr_queued == to a nr_queued >=; however, that is still suboptimal. This patch adds accounting for requests that have been dequeued from the io scheduler but not freed yet: these are q->in_flight. allocated_requests - q->in_flight == requests_in_scheduler, so the condition correctly becomes if (requests_in_scheduler == q->unplug_thresh) instead. I did a quick round of testing, and for dbench on a SCSI disk the number of timer-induced unplugs was reduced from 13 to 5 :-). Not a huge number, but there might be cases where it's more significant. Either way, it gets ->unplug_thresh always right, which the old logic didn't.
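As a sketch, the threshold test in the request submission path becomes something like this (the surrounding code and field names are assumptions based on the description):

    if (blk_queue_plugged(q)) {
            /* allocated requests minus those already dispatched to the
               driver = requests actually sitting in the io scheduler */
            int nr_queued = q->rq.count[READ] + q->rq.count[WRITE]
                            - q->in_flight;

            if (nr_queued == q->unplug_thresh)
                    __generic_unplug_device(q);
    }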
-
Andrew Morton authored
From: Neil Brown <neilb@cse.unsw.edu.au> I've made a bunch of changes to the 'md' bits - largely moving the unplugging into the individual personalities which know more about which drives are actually in use.
-
Andrew Morton authored
From: Jens Axboe <axboe@suse.de> Dog slow software suspend found this one. If WB_SYNC_ALL, then you need to mark the bio as sync as well. This is because swap_writepage() does a remove_exclusive_swap_page() (going to __delete_from_swap_cache -> __remove_from_page_cache) which can kill page->mapping, thus aops->sync_page() has nothing to work with for unplugging the address space.
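The fix boils down to tagging the swap-out bio as synchronous; a sketch of swap_writepage() with the extra two lines (helper names such as get_swap_bio and end_swap_bio_write are assumptions):

    static int swap_writepage(struct page *page, struct writeback_control *wbc)
    {
            struct bio *bio = get_swap_bio(GFP_NOIO, page, end_swap_bio_write);
            int rw = WRITE;

            if (bio == NULL) {
                    set_page_dirty(page);
                    unlock_page(page);
                    return -ENOMEM;
            }
            if (wbc->sync_mode == WB_SYNC_ALL)
                    rw |= (1 << BIO_RW_SYNC);  /* unplug immediately, don't rely
                                                  on ->sync_page() */
            set_page_writeback(page);
            unlock_page(page);
            submit_bio(rw, bio);
            return 0;
    }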
-
Andrew Morton authored
From: Jens Axboe <axboe@suse.de>, Chris Mason, me, others. The global unplug list causes horrid spinlock contention on many-disk many-CPU setups - throughput is worse than halved. The other problem with the global unplugging is of course that it will cause the unplugging of queues which are unrelated to the I/O upon which the caller is about to wait. So what we do to solve these problems is to remove the global unplug and set up the infrastructure under which the VFS can tell the block layer to unplug only those queues which are relevant to the page or buffer_head which is about to be waited upon. We do this via the very appropriate address_space->backing_dev_info structure. Most of the complexity is in device-mapper, MD and swapper_space, because for these backing devices, multiple queues may need to be unplugged to complete a page/buffer I/O. In each case we ensure that data structures are in place to permit us to identify all the lower-level queues which contribute to the higher-level backing_dev_info. Each contributing queue is told to unplug in response to a higher-level unplug. To simplify things in various places we also introduce the concept of a "synchronous BIO": it is tagged with BIO_RW_SYNC. The block layer will perform an immediate unplug when it sees one of these go past.
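The shape of the per-device hook is roughly the following (names such as unplug_io_fn and blk_run_backing_dev are assumptions drawn from the description, not guaranteed to match the patch exactly):

    struct backing_dev_info {
            unsigned long ra_pages;         /* existing readahead field */
            unsigned long state;
            /* per-device unplug callout: DM, MD and swap implement this to
               walk and unplug every lower-level queue they stack upon */
            void (*unplug_io_fn)(struct backing_dev_info *bdi, struct page *page);
            void *unplug_io_data;
    };

    /* The VFS, about to wait on a page, unplugs only the relevant device: */
    static inline void blk_run_backing_dev(struct backing_dev_info *bdi,
                                           struct page *page)
    {
            if (bdi && bdi->unplug_io_fn)
                    bdi->unplug_io_fn(bdi, page);
    }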
-
Andrew Morton authored
From: Joe Thornber <thornber@redhat.com> dm.c: remove __dm_request (merge with previous patch).
-
Andrew Morton authored
From: Miquel van Smoorenburg <miquels@cistron.nl>, Joe Thornber <thornber@redhat.com> This implements the queue congestion callout for DM stacks, so that bdi_read/write_congested() return correct information.

- md->lock protects all fields in md _except_ md->map.
- md->map_lock protects md->map.
- Anyone who wants to read md->map should use dm_get_table(), which increments the table's reference count.

This means the spin lock is now only held for the duration of a reference count increment.

Update: dm.c: protect md->map with a rw spinlock rather than the md->lock semaphore. Also ensure that everyone accesses md->map through dm_get_table(), rather than directly.
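A sketch of the dm_get_table() scheme described above (field names assumed):

    struct dm_table *dm_get_table(struct mapped_device *md)
    {
            struct dm_table *t;

            read_lock(&md->map_lock);       /* rw spinlock, held only briefly */
            t = md->map;
            if (t)
                    dm_table_get(t);        /* take the reference under the lock */
            read_unlock(&md->map_lock);

            return t;
    }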
-
Andrew Morton authored
From: Miquel van Smoorenburg <miquels@cistron.nl> The VM and VFS use the address_space's backing_dev_info to track the realtime status of the device which backs the mapping. The read_congested and write_congested fields are used to determine whether a read or write against that device may block. We use this infrastructure to:

a) allow pdflush to service many queues in parallel (by not getting stuck on any particular one),
b) avoid undesirable and uncontrolled latencies in places such as page reclaim, and
c) avoid blocking in readahead operations.

The current code only supports simple disk queues (and I have a patch here for NFS). Stacked queues (MD and DM) don't get this information right, and problems were expected. Efficiency problems have now been noted and it's time to fix it. This patch lays down the infrastructure which permits the queue implementation to get control when someone at a higher level is querying the queue's congestion state. So DM (for example) can run around and examine all the queues which contribute to the higher-level queue. It also adds bdi_rw_congested() for code in xfs and ext2 that calls both bdi_read_congested() and bdi_write_congested() in a row; it was "free" anyway.
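In sketch form, the callout lets a stacked driver answer the congestion question itself; the congested_fn/congested_data fields and BDI_*_congested bit names below are assumptions for illustration:

    static inline int bdi_congested(struct backing_dev_info *bdi, int bdi_bits)
    {
            if (bdi->congested_fn)          /* stacked device (DM, MD): ask it */
                    return bdi->congested_fn(bdi->congested_data, bdi_bits);
            return bdi->state & bdi_bits;   /* simple queue: use the state bits */
    }

    /* the new combined helper for callers that used to test both in a row */
    static inline int bdi_rw_congested(struct backing_dev_info *bdi)
    {
            return bdi_congested(bdi, (1 << BDI_read_congested) |
                                      (1 << BDI_write_congested));
    }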
-
Andrew Morton authored
From: Martin Schwidefsky <schwidefsky@de.ibm.com> The rewritten qeth network driver.
-
Andrew Morton authored
From: Martin Schwidefsky <schwidefsky@de.ibm.com> The crypto device driver for PCICA & PCICC cards, part 2.
-
Andrew Morton authored
From: Martin Schwidefsky <schwidefsky@de.ibm.com> The crypto device driver for PCICA & PCICC cards, part 1.
-
Andrew Morton authored
From: Martin Schwidefsky <schwidefsky@de.ibm.com> zfcp host adapter log message cleanup part 2:
- Shorten log output.
- Increase log level for some messages.
- Always print leading zeroes for wwpn and fcp-lun.
-
Andrew Morton authored
From: Martin Schwidefsky <schwidefsky@de.ibm.com> zfcp host adapter log message cleanup part 1:
- Shorten log output.
- Increase log level for some messages.
- Always print leading zeroes for wwpn and fcp-lun.
-
Andrew Morton authored
From: Martin Schwidefsky <schwidefsky@de.ibm.com> zfcp host adapter fixes:
- Reuse freed scsi_ids and scsi_luns for mappings.
- Order list of ports/units by assigned scsi_id/scsi_lun.
- Don't update max_id/max_lun in scsi_host anymore.
- Get rid of all magics.
- Add owner field to ccw_driver structure.
- Avoid deadlock on bus->subsys.rwsem.
- Use a macro for all scsi device sysfs attributes.
- Change proc_name from "dummy" to "zfcp".
- Don't wait for scsi_add_device to complete while holding a semaphore.
- Cleanup include files in zfcp_aux.c & zfcp_def.h.
- Get rid of zfcp_erp_fsf_req_handler.
- Proper link up/down handling.
- Avoid possible NULL pointer dereference in zfcp_erp_schedule_work.
- Remove module_exit function. Without an external release function for the zfcp_port/zfcp_unit objects module unloading is racy.
-
Andrew Morton authored
From: Martin Schwidefsky <schwidefsky@de.ibm.com> DCSS block device driver changes: - Fix remove_store function, put_device is called too early.
-
Andrew Morton authored
From: Martin Schwidefsky <schwidefsky@de.ibm.com> Network driver changes:
- ctc: move kfree of driver structure after the last use of it.
- netiucv: stay in state startwait if peer is down.
- lcs: initialize ipm_list and unregister netdev only if it is present.
-
Andrew Morton authored
From: Martin Schwidefsky <schwidefsky@de.ibm.com> dasd driver changes: - Fix check for device type in error recovery for fba devices.
-
Andrew Morton authored
From: Martin Schwidefsky <schwidefsky@de.ibm.com> Tape driver changes:
- Add missing break in tape_34xx_work_handler to avoid misleading message.
- Cleanup offline/remove code.
-
Andrew Morton authored
From: Martin Schwidefsky <schwidefsky@de.ibm.com> Common i/o layer changes:
- Avoid de-registering a ccwgroup device multiple times.
- Remove check for channel path objects in get_subchannel_by_schid. Channel path objects are never in the bus list.
- Avoid NULL pointer dereference in qdio_unmark_q.
- Fix reference counting on subchannel objects.
- Add shutdown function to terminate i/o and disable subchannels at reipl.
- Remove all ccwgroup devices if the ccwgroup driver is unregistered.
-
Andrew Morton authored
From: Martin Schwidefsky <schwidefsky@de.ibm.com> s390 core changes:
- Fix _raw_spin_trylock for 64 bit.
- Add clarification to the s390 debug feature documentation.
-