- 27 Nov, 2002 6 commits
-
-
Stuart MacDonald authored
1-fix-lowlat: QA found that running all four ports at 460800 would drop data. I traced it to data being dropped in the read callback because the flip buffers were full. Turning on the low latency flag fixed things.

2-fix-taint: A side-effect of turning on low latency is that the interrupt context from the callback is now passed through to the tty layer, which passes it on to calls back into usb-serial.c. That causes deadlocks when trying to re-acquire the per-port semaphore. We've already talked about this. This patch is my work-around for the usb-serial.c brokenness. Basically, implement a buffering scheme and schedule a software interrupt to hand the data off to the tty layer some time later; see the sketch below. urb_pool_size defaults to 8, but is a module parameter and can be modified at runtime.

The buffering is needed so that the driver can run while data is waiting to be processed. I could have used the tty layer's scheduling instead of doing my own by turning off low latency, but looking at the tty layer, nothing seems to prevent a really fast device from flipping one buffer, flipping the next, and flipping back to the still-full buffer from before (actually, the flip just gets scheduled for later). So my driver needs to be able to hold onto buffered data and schedule it for processing later anyway; might as well leave low_latency on.

diff -Naur linux-2.5.49-0-virgin/drivers/usb/serial/whiteheat.c linux-2.5.49-1-fix-lowlat/drivers/usb/serial/whiteheat.c
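A minimal sketch of that buffering scheme, assuming hypothetical names (wb_*, rx_queue, rx_work); the real patch keeps a pool of urb_pool_size buffers and does its own softirq scheduling, and all locking around the queue is elided here:

    /* Interrupt context: stash the received bytes and defer the tty
     * hand-off, so we never re-enter usb-serial.c from the callback. */
    static void wb_read_callback(struct urb *urb)
    {
            struct wb_port *port = urb->context;
            struct wb_rx_buf *buf = wb_get_free_buf(port); /* from the pool */

            if (buf) {
                    buf->len = urb->actual_length;
                    memcpy(buf->data, urb->transfer_buffer, buf->len);
                    list_add_tail(&buf->list, &port->rx_queue);
                    schedule_work(&port->rx_work);  /* wb_rx_push() later */
            }
            usb_submit_urb(urb, GFP_ATOMIC);        /* keep receiving */
    }

    /* Deferred context: now it is safe to feed the tty layer. */
    static void wb_rx_push(void *arg)
    {
            struct wb_port *port = arg;
            struct wb_rx_buf *buf;
            int i;

            while ((buf = wb_next_queued_buf(port)) != NULL) {
                    for (i = 0; i < buf->len; i++)
                            tty_insert_flip_char(port->tty, buf->data[i], 0);
                    tty_flip_buffer_push(port->tty); /* low_latency is set */
                    wb_put_free_buf(port, buf);
            }
    }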
-
Stuart MacDonald authored
Attached is a patch that changes the 2.5.x disconnect to be similar to the 2.4.x disconnect. This doesn't fix the race, but does shrink the window such that I've never seen it trigger, even under testing designed to do that. There doesn't seem to be a good way to fix the race. The fix should be to have _disconnect force any sleeping semaphore holders to run to completion between the end of the loop in the patch below and the spot where the underlying memory is freed, but I don't see a way to do that.

diff -Naur linux-2.5.49-2-fix-taint/drivers/usb/serial/usb-serial.c linux-2.5.49-3-fix-drvdata/drivers/usb/serial/usb-serial.c
-
John Tyner authored
This patch cleans up the vicam_decode_color function by removing unused/useless variables and combining the two "x" loops inside the y loop into one. It also reduces the number of times that the "x" loop runs from 512 to 320, which should provide a decent speed increase. It also fixes a bug in the y loop that wrote beyond its bounds.
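A hedged sketch of the loop structure after the change (the colour conversion is reduced to a placeholder; names and arithmetic are illustrative, not the driver's actual decode math):

    #define OUT_WIDTH  320   /* output pixels per row */
    #define RAW_STRIDE 512   /* raw bytes per row from the camera */

    static void decode_sketch(const unsigned char *raw, unsigned char *out,
                              int height)
    {
            int x, y;

            for (y = 0; y < height; y++) {
                    const unsigned char *row = raw + y * RAW_STRIDE;
                    /* one combined pass over the 320 output pixels,
                     * instead of two separate loops walking 512 columns */
                    for (x = 0; x < OUT_WIDTH; x++) {
                            /* placeholder conversion; the out index stays
                             * inside row y, avoiding the old overrun */
                            out[y * OUT_WIDTH + x] = row[x];
                    }
            }
    }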
-
Nemosoft Unv. authored
After a little absence, here's a patch to bring the Philips Webcam driver up to version 8.9 (skipping 8.8, which has been available as a download on my website for a while). This patch is against 2.5.49, and includes some of the following:

* New USB IDs for Logitech and Visionite webcams.
* Better URB link/unlink sequence when opening/closing device and switching resolutions.
* Adding probe for CCD/CMOS sensor type.
* Removed remnants of YUV420 palette stuff.

Also updated the description in 'Kconfig'.
-
Mark W. McClelland authored
-
Ganesh Varadarajan authored
-
- 26 Nov, 2002 3 commits
-
-
Greg Kroah-Hartman authored
-
Randy Dunlap authored
It addresses the timeout parameter in the tiglusb driver.

1. timeout could be 0, causing a divide-by-zero. The patch prevents this.

2. The timeout value passed to usb_bulk_msg() could be rounded down to cause a divide-by-zero if timeout was < 10, e.g. 9, in:

    result = usb_bulk_msg (s->dev, pipe, buffer, bytes_to_read,
                           &bytes_read, HZ / (timeout / 10));

   9 / 10 == 0 => divide-by-zero!

3. The formula above doesn't do very well at converting timeout to tenths of seconds. Even for the default timeout value of 15 (1.5 seconds), it becomes: HZ / (15 / 10) == HZ / 1 == HZ, or 1 second. The patch corrects this formula to use: (HZ * 10) / timeout
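A sketch of the two fixes together (the fallback default is an assumption; only the formula comes from the changelog):

    if (!timeout)
            timeout = 15;   /* assumed: fall back to the 1.5 s default */

    /* (HZ * 10) / timeout cannot divide by zero for timeout >= 1 and
     * avoids the integer truncation in HZ / (timeout / 10). */
    result = usb_bulk_msg(s->dev, pipe, buffer, bytes_to_read,
                          &bytes_read, (HZ * 10) / timeout);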
-
Greg Kroah-Hartman authored
-
- 25 Nov, 2002 3 commits
-
-
John Tyner authored
Here is a patch that fixes the disconnect handling and locking for the vicam driver. It does the following:

1. Change the parameters of send_control_msg to take a struct vicam_camera instead of a struct usb_device, to allow for locking of the device. Note that __send_control_msg does not lock the camera; send_control_msg locks the camera before calling __send_control_msg (see the sketch below).

2. Remove all instances of busy_lock. busy_lock was renamed to cam_lock and used to lock out simultaneous uses of the camera and handle disconnects. We may want to add back a different lock to handle SMP-type stuff.

3. Separate read_frame and vicam_decode_color. This should move us along toward asynchronous urbs.

This patch does not address the locking of the camera that is still needed by the proc interface.
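A minimal sketch of the locking split in point 1 (the lock is shown as a 2.5-era semaphore; the field names are assumptions):

    /* Does the USB work; caller must already hold cam->cam_lock. */
    static int __send_control_msg(struct vicam_camera *cam, u8 request,
                                  u16 value, u16 index, void *buf, u16 len);

    /* Locking wrapper used by ordinary callers. */
    static int send_control_msg(struct vicam_camera *cam, u8 request,
                                u16 value, u16 index, void *buf, u16 len)
    {
            int ret;

            down(&cam->cam_lock);
            ret = __send_control_msg(cam, request, value, index, buf, len);
            up(&cam->cam_lock);
            return ret;
    }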
-
Greg Kroah-Hartman authored
Thanks to Ralf Dietrich <ralle@envicon.de> for the information.
-
Greg Kroah-Hartman authored
fixes bug #26 <http://bugme.osdl.org/show_bug.cgi?id=26>
-
- 23 Nov, 2002 6 commits
-
-
Duncan Sands authored
Description: When an urb has been submitted via usbdevfs, and is still pending when the interface it was submitted to is released, force the urb to be completed. This is the correct behaviour. It fixes an oops on system shutdown when using the user space driver for the speedtouch modem.
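A hedged sketch of the idea (the usbdevfs bookkeeping is simplified; the helper name and field layout here are illustrative):

    /* On interface release: force completion of any usbdevfs-submitted
     * urbs still pending against that interface. */
    static void kill_pending_urbs(struct dev_state *ps, unsigned int ifnum)
    {
            struct list_head *p, *tmp;

            list_for_each_safe(p, tmp, &ps->async_pending) {
                    struct async *as = list_entry(p, struct async, asynclist);

                    if (as->ifnum == ifnum)
                            usb_unlink_urb(as->urb); /* runs its completion */
            }
    }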
-
Ganesh Varadarajan authored
This fixes an uninitialized spinlock in ipaq.c. The driver should work on SMP machines now.
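The fix is the usual initialisation pattern (the field name here is a placeholder for the driver's actual lock):

    /* Must run before the lock is first taken, e.g. at startup/attach: */
    spin_lock_init(&priv->lock);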
-
Randy Dunlap authored
-
David Brownell authored
When chasing down some of those 'bad entry' diagnostics, I once got suspicious that the problem was slab corruption coming from the way the td hashtable code worked. So I put together this patch, eliminating some kmallocation, and the next time I ran that test, the oops went away and it worked like a charm. Hmm. This patch is good because it shrinks memory and code, and gets rid of some could-fail allocations, so I figured I'd send it on (low priority) even if I don't think it fixes the root problem.
-
David Brownell authored
Basically, no point in having short and long timeout options where both are _shorter_ than the timeout from the USB spec.
-
David Brownell authored
Hotplug agents couldn't use /sys/$DEVPATH after /sys/root morphed into /sys/devices ... now they can do it again.
-
- 22 Nov, 2002 22 commits
-
-
Linus Torvalds authored
-
Linus Torvalds authored
Merge bk://bk.arm.linux.org.uk into penguin.transmeta.com:/home/penguin/torvalds/repositories/kernel/linux
-
Russell King authored
Fix compilation errors for do_fork() and print_symbol()
-
Linus Torvalds authored
Merge bk://cifs.bkbits.net/linux-2.5cifs into penguin.transmeta.com:/home/penguin/torvalds/repositories/kernel/linux
-
Andrew Morton authored
Implements a new set of block address_space_operations which will never attach buffer_heads to file pagecache. These can be turned on for ext2 with the `nobh' mount option.

During write-intensive testing on a 7G machine, total buffer_head storage remained below 0.3 megabytes. And those buffer_heads are against ZONE_NORMAL pagecache and will be reclaimed by ZONE_NORMAL memory pressure.

This work is, of course, a special for the huge highmem machines. Possibly it obsoletes the buffer_heads_over_limit stuff (which doesn't work terribly well), but that code is simple, and will provide relief for other filesystems.

It should be noted that the nobh_prepare_write() function and the PageMappedToDisk() infrastructure are what is needed to solve the problem of user data corruption when the filesystem which backs a sparse MAP_SHARED mapping runs out of space. We can use this code in filemap_nopage() to ensure that all mapped pages have space allocated on-disk, and deliver SIGBUS on ENOSPC. This will require a new address_space op, I expect.
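A sketch of how the buffer-head-free aops might be wired up for ext2 (nobh_prepare_write() is named above; the commit-side counterpart and the rest of the struct are assumptions):

    static struct address_space_operations ext2_nobh_aops = {
            .readpage       = ext2_readpage,        /* unchanged paths */
            .writepage      = ext2_writepage,
            .prepare_write  = nobh_prepare_write,   /* no buffer_heads */
            .commit_write   = nobh_commit_write,    /* assumed name */
            .bmap           = ext2_bmap,
    };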
-
Andrew Morton authored
This patch is a general solution to the situation where a zone is full of pinned pages. This can come about if:

a) Someone has allocated all of ZONE_DMA for IO buffers

b) Some application is mlocking some memory and a zone ends up full of mlocked pages (can happen on a 1G ia32 system)

c) All of ZONE_HIGHMEM is pinned in hugetlb pages (can happen on 1G machines)

We'll currently burn 10% of CPU in kswapd when this happens, although it is quite hard to trigger. The algorithm is:

- If page reclaim has scanned 2 * the total number of pages in the zone and there have been no pages freed in that zone, then mark the zone as "all unreclaimable".

- When a zone is "all unreclaimable", page reclaim almost ignores it. We will perform a "light" scan at DEF_PRIORITY (typically 1/4096'th of the zone, or 64 pages) and then forget about the zone.

- When a batch of pages are freed into the zone, clear its "all unreclaimable" state and start full scanning again. The assumption being that some state change has come about which will make reclaim successful again. So if a "light scan" actually frees some pages, the zone will revert to normal state immediately.

So we're effectively putting the zone into "low power" mode, and lightly polling it to see if something has changed. The code works OK, but is quite hard to test - I mainly tested it by pinning all highmem in hugetlb pages.
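A compressed sketch of the three rules (the zone fields follow the description; the surrounding code is elided):

    /* Rule 1, in the reclaim scanner: give up on a zone that yields nothing. */
    if (zone->pages_scanned > 2 * total_pages_in(zone) && nr_freed == 0)
            zone->all_unreclaimable = 1;

    /* Rule 2, when picking zones to scan: only a light DEF_PRIORITY poll. */
    if (zone->all_unreclaimable && priority != DEF_PRIORITY)
            continue;

    /* Rule 3, in the page-freeing path: any progress resets the state. */
    zone->all_unreclaimable = 0;
    zone->pages_scanned = 0;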
-
Andrew Morton authored
Strengthen the `incremental min' logic in the page allocator. Currently it is allowing the allocation to succeed if the zone has free_pages >= pages_high. This was to avoid a lockup corner case in which all the zones were at pages_high so reclaim wasn't doing anything, but the incremental min refused to take pages from those zones anyway. But we want the incremental min zone protection to work. So:

- Only allow the allocator to dip below the incremental min if it cannot run direct reclaim.

- Change the page reclaim code so that on the direct reclaim path, the caller can free pages beyond ->pages_high. So if the incremental min test fails, the caller will go and free some more memory. Eventually, the caller will have freed enough memory for the incremental min test to pass against one of the zones.
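A sketch of the allocator-side check (helper names assumed; `min' accumulates across the zone list, which is what makes it "incremental"):

    struct zone *z;
    unsigned long min = 1UL << order;
    int i;

    for (i = 0; (z = zones[i]) != NULL; i++) {
            min += z->pages_low;            /* grows as we walk the list */
            /* Dip below the incremental min only when the caller cannot
             * run direct reclaim (e.g. atomic allocations). */
            if (z->free_pages >= min ||
                (!wait && z->free_pages >= z->pages_high))
                    return buffered_rmqueue(z, order);  /* assumed helper */
    }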
-
Andrew Morton authored
The vm_writeback address_space operation was designed to provide the VM with a "clustered writeout" capability. It allowed the filesystem to perform more intelligent writearound decisions when the VM was trying to clean a particular page. I can't say I ever saw any real benefit from this - not much writeout actually happens on that path - quite a lot of work has gone into minimising it actually.

The default ->vm_writeback a_op which I provided wrote back the pages in ->dirty_pages order. But there is one scenario in which this causes problems - writing a single 4G file with mem=4G. We end up with all of ZONE_NORMAL full of dirty pages, but all writeback effort is against highmem pages. (Because there is about 1.5G of dirty memory total.) Net effect: the machine stalls ZONE_NORMAL allocation attempts until the ->dirty_pages writeback advances onto ZONE_NORMAL pages.

This can be fixed most sweetly with additional radix-tree infrastructure which will be quite complex. Later. So this patch dumps it all, and goes back to using writepage against individual pages as they come off the LRU.
-
Andrew Morton authored
blk_congestion_wait() is a utility function which various callers use to throttle themselves to the rate at which the IO system can retire writes. The current implementation refuses to wait if no queues are "congested" (>75% of requests are in flight). That doesn't work if the queue is so huge that it can hold more than 40% (dirty_ratio) of memory: the queue simply cannot enter congestion because the VM refuses to allow more than 40% of memory to be dirtied. (This spin could happen with a lot of normal-sized queues too.)

So this patch simply changes blk_congestion_wait() to throttle even if there are no congested queues. It will cause the caller to sleep until someone puts back a write request against any queue. (Nobody uses blk_congestion_wait for read congestion.)

The patch adds new state to backing_dev_info->state: a couple of flags which indicate whether there are _any_ reads or writes in flight against that queue. This was added to prevent blk_congestion_wait() from taking a nap when there are no writes at all in flight. But the "are there any reads" info could be used to defer background writeout from pdflush, to reduce read-vs-write competition. We'll see.

This matters because large request queues have made a fundamental change: blocking in get_request_wait() has been the main form of VM throttling for years, but with large queues it doesn't work any more - all throttling happens in blk_congestion_wait().

Also, change io_schedule_timeout() to propagate the schedule_timeout() return value. I was using that in some debug code, but it should have been like that from day one.
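A sketch of the new behaviour, assuming a per-direction wait queue that request completions wake (close to, but not necessarily identical to, the patch):

    void blk_congestion_wait(int rw, long timeout)
    {
            DEFINE_WAIT(wait);
            wait_queue_head_t *wqh = &congestion_wqh[rw];

            /* Sleep until some request retires anywhere, or we time out -
             * even when no queue is formally "congested". */
            prepare_to_wait(wqh, &wait, TASK_UNINTERRUPTIBLE);
            io_schedule_timeout(timeout);
            finish_wait(wqh, &wait);
    }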
-
Andrew Morton authored
From Roman Zippel. Don't assume that physical memory starts at physical address zero.
-
Andrew Morton authored
Patch from Stephen Tweedie: "In looking at the fix for the ext3 Orlov double-accounting bug, I noticed a change to the sb->s_dir_count accounting, restoring a missing s_dir_count++ when we allocate a new directory. However, I can't find anywhere in the code where we decrement this again on directory deletion, neither in ext2 nor in ext3, in 2.4 nor in 2.5."

Locking is via lock_super().
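The shape of the missing decrement, as a sketch (the placement in the deletion path and the field spelling are assumptions; the lock_super() usage is stated above):

    /* Mirror the s_dir_count++ done when a directory is allocated. */
    lock_super(sb);
    if (S_ISDIR(inode->i_mode))
            EXT2_SB(sb)->s_dir_count--;
    unlock_super(sb);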
-
Andrew Morton authored
There is a warning in there to detect when block_write_full_page() attaches buffers to a blockdev page. This is a bad thing because that page's blocks may then overlap blocks from a different address_space. So I disallowed it. But the message can be triggered when an application is mmapping a blockdev MAP_SHARED. Apparently INND likes to do this. So remove the warning.
-
Andrew Morton authored
Patch from Christopher Li <chrisl@vmware.com>. This little patch fixes two places in the htree code which forget the cpu_to_le16 conversion. This bug causes incorrect record lengths on PPC. Thanks to Franz for reporting the problem.
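The general shape of such a fix (rec_len is the on-disk, little-endian record length of a directory entry):

    /* Wrong on big-endian machines such as PPC: stores host byte order */
    de->rec_len = rec_len;

    /* Right: on-disk directory entries are little-endian */
    de->rec_len = cpu_to_le16(rec_len);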
-
Andrew Morton authored
Patch from Andreas Gruenbacher <agruen@suse.de>

The setxattr inode operation is defined like this in 2.4 and 2.5:

    int (*setxattr) (struct dentry *dentry, const char *name,
                     void *value, size_t size, int flags);

The original type of the value parameter was `const void *'; the const obviously has been lost at some point. The definition should be:

    int (*setxattr) (struct dentry *dentry, const char *name,
                     const void *value, size_t size, int flags);
-
Andrew Morton authored
The page allocator has traditionally just gone BUG when it sees a page in a bad state. This is usually due to hardware errors, sometimes software errors. I'm proposing that we not go BUG() any more, but print lots (and lots) of diagnostic info and try to continue. Might be a bit controversial.
-
Andrew Morton authored
balance_dirty_pages() is too expensive to call once-per-page. Use the ratelimited version.
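In callers this is a one-line change (the mapping variable is illustrative; balance_dirty_pages_ratelimited() is the ratelimited entry point referred to above):

    /* Before: full dirty-state check on every dirtied page */
    balance_dirty_pages(mapping);

    /* After: the full check runs only every N pages per task */
    balance_dirty_pages_ratelimited(mapping);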
-
Andrew Morton authored
From Dipankar Sarma. Before setting ids->entries to the new array, there must be a wmb() to make sure that the memcpy'ed contents of the new array are visible before the new array itself becomes visible.
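The resulting pattern, as a sketch (allocation and sizes elided):

    /* Populate the new array first... */
    memcpy(new, ids->entries, old_size);
    /* ...make the contents globally visible... */
    wmb();
    /* ...and only then publish the new array to lockless readers. */
    ids->entries = new;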
-
Andrew Morton authored
This patch fixes a problem which was discovered by Vladimir Saveliev <vs@namesys.com>.

Radix trees have a `height' field, which defines how far the pages are from the root of the tree. It starts out at zero and increases as the tree's depth grows. But it is never decreased; it cannot be decreased without a full tree traversal.

Because radix_tree_delete() does not decrease `height', we end up returning inodes to their filesystem's inode slab cache with a non-zero height. And when that inode is reused from slab for a new file, it still has a non-zero height. So we're breaking the slab rules by not putting objects back in a fully reinitialised state. The new file starts out life with whatever height the previous owner of the inode had, which is space- and speed-inefficient.

The most efficient place to fix this would be in destroy_inode(). But that only fixes the problem for inodes - there are other users of radix trees. So fix it in radix_tree_delete(): if the tree was emptied, reset `height' to zero.
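A sketch of the reset at the tail of radix_tree_delete() (the root fields follow the standard radix-tree layout; the deletion logic itself is elided):

    /* If removing this item emptied the tree, make a reused tree look
     * freshly initialised instead of keeping its old height. */
    if (root->rnode == NULL)
            root->height = 0;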
-
Andrew Morton authored
Patch from Hugh Dickins <hugh@veritas.com> Fixes the Oracle startup problem reported by Alessandro Suardi. Reverts a "simplification" to shmdt() which was wrong if subsequent mprotects broke up the original VMA, or if parts of it were munmapped.
-
Andrew Morton authored
- I hit a BUG in end_swap_bio_read() under heavy load. The page wasn't locked. No idea how this can happen :( Add a BUG at submission time to catch a caller reading into an unlocked swapcache page.

- Remove a debug check from destroy_inode() - it was in the wrong leg of the `if' statement anyway.
-
Neil Brown authored
This allows NFSv4 responses to cover more than one page. There are still limits though. There can be at most one 'data' response, which includes READ, READLINK, READDIR. For these responses, the interesting data goes in a separate page or, for READ, a list of pages. All responses before the 'data' response must fit in one page, and all responses after it must also fit in one (separate) page.
-
Neil Brown authored
Now that nfsd uses a list of pages for requests instead of one large buffer, NFSv4 needs to know about this. The most interesting part is that a section of a request, like a path name, could span two pages, so we need to be able to kmalloc a little bit of space to copy them into, and make sure it gets freed later.
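A hedged sketch of the straddling-name case (the argp fields are placeholders for the decoder's real state):

    char *p = argp->cur;                      /* current decode position */
    int avail = argp->page_end - p;           /* bytes left in this page */
    char *name;

    if (len <= avail) {
            name = p;                         /* contiguous: use in place */
    } else {
            name = kmalloc(len, GFP_KERNEL);  /* must be freed later */
            if (!name)
                    return nfserr_resource;
            memcpy(name, p, avail);
            memcpy(name + avail, argp->next_page, len - avail);
    }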
-