- 23 Sep, 2002 27 commits
-
-
Greg Kroah-Hartman authored
into kroah.com:/home/greg/linux/BK/gregkh-2.5
-
Adams IT Services authored
- increased timeout value because some people reported problems
- (important!) Vendor ID has changed from 0x1212 to 0x10D2, my officially assigned one
- added usblcd driver to configure.help
-
Stuart MacDonald authored
Update to full working driver status. Latest firmware 4.06 too. Driver now officially supported.
-
Greg Kroah-Hartman authored
Based on a patch from Stuart MacDonald <stuartm@connecttech.com>
-
Stuart MacDonald authored
This cleans up the error path in the open() call to make a bit more sense.
-
Greg Kroah-Hartman authored
This fixes a stupid error in the timeout value when downloading firmware to a device. The WhiteHEAT device now works properly with this patch.
-
Alan Stern authored
Like the header says, this patch fixes up the various Transfer- and Transport-level return codes. There were a lot of places in the various subdrivers that were not particularly careful about distinguishing the two; it would help if the people currently maintaining those drivers could take a look at my changes to make sure I haven't screwed anything up.

# Converted US_BULK_TRANSFER_xxx to USB_STOR_XFER_xxx, to make it more
# easily distinguishable from USB_STOR_TRANSPORT_xxx. (Also, in the
# future these codes may apply to control transfers as well as to bulk
# transfers.)
#
# Changed USB_STOR_XFER_FAILED to USB_STOR_XFER_ERROR, since it implies
# a transport error rather than a transport failure.
#
# Added a USB_STOR_XFER_STALLED code, to indicate a transfer that was
# terminated by an endpoint stall.

This patch is in preparation for one in which usb_stor_transfer_partial() and usb_stor_transfer() are replaced by usb_stor_bulk_transfer_buf() and usb_stor_bulk_transfer_srb() respectively, with slightly different argument lists. Ultimately the subdrivers will be able to use these routines in place of the slightly specialized versions they have now and in place of the ones in raw_bulk.c.
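A minimal sketch of the split between the two code families described above; the constant values and the helper are illustrative, not the driver's actual definitions:

    /* Transfer-level codes: the outcome of one bulk (or, later, control)
     * transfer on the wire. Values here are illustrative. */
    #define USB_STOR_XFER_GOOD      0   /* transfer completed normally  */
    #define USB_STOR_XFER_STALLED   1   /* terminated by endpoint stall */
    #define USB_STOR_XFER_ERROR     2   /* some other transport error   */

    /* Transport-level codes: the outcome of a whole SCSI command. */
    #define USB_STOR_TRANSPORT_GOOD    0
    #define USB_STOR_TRANSPORT_FAILED  1  /* command failed, device OK   */
    #define USB_STOR_TRANSPORT_ERROR   2  /* transport itself broke down */

    /* Hypothetical helper showing why the families stay separate: a
     * stall usually means "this command failed", not "the transport is
     * broken". */
    static int xfer_to_transport(int xfer)
    {
            switch (xfer) {
            case USB_STOR_XFER_GOOD:
                    return USB_STOR_TRANSPORT_GOOD;
            case USB_STOR_XFER_STALLED:
                    return USB_STOR_TRANSPORT_FAILED;
            default:
                    return USB_STOR_TRANSPORT_ERROR;
            }
    }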
-
Luc Van Oostenryck authored
compile fails with the following message:
> In file included from ohci-hcd.c:136:
> ohci-dbg.c:318: parse error
> make[3]: *** [ohci-hcd.o] Error 1
due to a missing #include <linux/version.h>. Here is a trivial patch for this.
-
David Brownell authored
Is it guaranteed that callers have zeroed out the device before this is invoked? If not, the following is necessary to prevent potential oopses dereferencing interface->dev.driver in the generic device layer.
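A minimal sketch of the kind of fix implied here, assuming the interface's embedded struct device may carry stale data (the wrapper function name is made up for illustration):

    #include <linux/string.h>
    #include <linux/usb.h>

    /* Illustrative only: clear the interface's embedded struct device so
     * the generic device layer never dereferences a stale ->driver pointer. */
    static void zero_interface_dev(struct usb_interface *interface)
    {
            memset(&interface->dev, 0, sizeof(interface->dev));
    }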
-
David Brownell authored
Here's an EHCI update; I'll send separate patches to sync 2.4 with this version. Changes in this version include:
- An earlier locking update would give trouble on SPARC, where irqsave "flags" aren't flags. This resolves that issue by adding a module parameter to limit work done with irqs off. (Some net drivers do the same thing.)
- Optionally (now #ifdef DEBUG) collects some statistics on IRQs and URBs. There are more IAA interrupts than I want to see, during extended usb-storage loading.
- Adds a commented-out workaround for a problem I've seen on one VT8235. Seems likely an issue with this specific motherboard; another tester hasn't reported such issues.
- Includes the jiffies time_after() patch from Tim Schmielau.
- Minor tweaks to the hcd portability (get rid of another #if).
- Minor doc/diagnostic/... updates
-
David Brownell authored
This USB patch updates the OHCI driver:
- converts to relying on td_list shadowing the hardware's schedule; only collecting the donelist needs dma_to_td(), and td list handling works much like EHCI or UHCI.
- leaves faulted endpoint queues (bulk/intr) disabled until the relevant drivers had a chance to clean up.
- fixes minor bugs (unreported) in the affected code:
  * byteswap problem when unlinking urbs ... symptom would be data toggle confusion (since 2.4.2x) on big-endian cpus
  * latent bug if folk unlinked queue in LIFO order, not FIFO
- removes unnecessary debug code; mostly de-BUG()ged

The interesting fix is the "leave queues halted" one. As discussed on email a while back, this HCD fault handling policy (also followed by EHCI) is sufficient to let device drivers implement the two key fault handling policies that seem to be necessary:
(a) Datagram style, where issues on one I/O won't affect the next unless the device halted the endpoint. The device driver can ignore most errors other than -EPIPE.
(b) Stream style, where for example it'd be wrong to ever let block N+1 overwrite block N on the disk. Once the first URB fails, the rest would just be unlinked in the completion handler.

As a consequence of using the td_list, you can now see urb queuing in action in the driverfs 'async' file. At least, if you look at the right time, or use drivers (networking, etc) that queue (bulk) reads for a long time.
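As a rough illustration of the two driver-side policies named above (the completion handlers and helpers here are hypothetical, not code from this patch):

    #include <linux/errno.h>
    #include <linux/usb.h>

    /* stand-in helpers, declared only for illustration */
    void schedule_clear_halt(struct urb *urb);
    void unlink_queued_urbs(struct urb *urb);

    /* (a) Datagram style: one failed URB doesn't affect the next one;
     * only an endpoint halt needs explicit recovery. */
    static void datagram_complete(struct urb *urb)
    {
            if (urb->status == -EPIPE)
                    schedule_clear_halt(urb);
            /* other errors: drop or resubmit just this datagram */
    }

    /* (b) Stream style: block N+1 must never overwrite block N, so once
     * one URB fails the handler unlinks everything queued behind it. */
    static void stream_complete(struct urb *urb)
    {
            if (urb->status)
                    unlink_queued_urbs(urb);
    }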
-
Ingo Molnar authored
This fixes all xchg()'s and a preemption bug.
-
Ingo Molnar authored
This does the following things:
- removes the ->thread_group list and uses a new PIDTYPE_TGID pid class to handle thread groups. This cleans up lots of code in signal.c and elsewhere.
- fixes sys_execve() if a non-leader thread calls it. (2.5.38 crashed in this case.)
- renames list_for_each_noprefetch to __list_for_each.
- cleans up delayed-leader parent notification.
- introduces link_pid() to optimize PIDTYPE_TGID installation in the thread-group case.

I've tested the patch with a number of threaded and non-threaded workloads, and it works just fine. Compiles & boots on UP and SMP x86. The session/pgrp bugs reported to lkml are probably still open, they are the next on my todo - now that we have a clean pidhash architecture they should be easier to fix.
-
Linus Torvalds authored
-
http://linux-isdn.bkbits.net/linux-2.5.isdn
Linus Torvalds authored
into home.transmeta.com:/home/torvalds/v2.5/linux
-
Kai Germaschewski authored
T30_s * is part of a union, so the typedef needs to exist even when CONFIG_ISDN_TTY_FAX is not set.
-
http://linux-isdn.bkbits.net/linux-2.5.make
Linus Torvalds authored
into home.transmeta.com:/home/torvalds/v2.5/linux
-
Kai Germaschewski authored
When converting all L_TARGETs to lib.a, I missed these instances.
-
Peter Rival authored
Update alpha port to work with new nanosecond xtime, and the in_atomic() requirements.
-
bk://thebsh.namesys.com/bk/reiser3-linux-2.5
Linus Torvalds authored
into home.transmeta.com:/home/torvalds/v2.5/linux
-
Mikael Pettersson authored
The problem is that the local APIC code references stuff in mpparse, but 2.5.37 changed arch/i386/kernel/Makefile to only compile mpparse for SMP. This patch works around this by enforcing CONFIG_X86_MPPARSE for all LOCAL_APIC-enabled configs.
-
Jens Axboe authored
Add bio_get_nr_vecs(). It returns an approximate number of pages that can be added to a block device. It's just a ballpark number, but I think this is quite fine for the type of thing it is needed for: mpage etc need to know an approx size of a bio that they need to allocate. It would be silly to continuously allocate 64-page sized bio_vec entries, if the target cannot do more than 8, for example.
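A rough usage sketch (the wrapper function is made up; bio_get_nr_vecs() and bio_alloc() are the interfaces being described):

    #include <linux/bio.h>
    #include <linux/slab.h>

    /* Size a new bio by what the target device can actually take, rather
     * than always allocating a 64-page bio_vec. The number is a ballpark,
     * not a guarantee. */
    static struct bio *alloc_bio_for(struct block_device *bdev)
    {
            int nr = bio_get_nr_vecs(bdev);

            return bio_alloc(GFP_KERNEL, nr);
    }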
-
Jens Axboe authored
make pdc4030 work
-
Jens Axboe authored
Bad merge from 2.4.20-pre-ac: ide_build_dmatable() does not need a data direction argument in 2.5 (it's implicit in the request).
-
Tim Schmielau authored
-
Ivan Kokshaysky authored
I'm terribly sorry - I've sent you the wrong diff; it was some intermediate variant. Actually it added extra breakage to ide_hwif_configure(). Desired behavior was: if ctl == base == 0, the device is in "true legacy" mode (as per PCI spec); use values from the base address registers otherwise.
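A small sketch of the decision described (illustrative, not the actual ide_hwif_configure() code; the primary-channel legacy ports are shown just as an example):

    /* If both the control and base BAR values are zero the controller is
     * in "true legacy" mode per the PCI spec, so fall back to the fixed
     * legacy ports; otherwise trust the base address registers. */
    static void pick_ports(unsigned long base, unsigned long ctl,
                           unsigned long *io_port, unsigned long *ctl_port)
    {
            if (base == 0 && ctl == 0) {
                    *io_port  = 0x1f0;      /* primary-channel legacy values */
                    *ctl_port = 0x3f6;
            } else {
                    *io_port  = base;
                    *ctl_port = ctl;
            }
    }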
-
Jens Axboe authored
Clean up end_that_request_first() end_io handling, and fix a bug where partial completions didn't get accounted correctly wrt blk_recalc_rq_sectors().
-
- 22 Sep, 2002 13 commits
-
-
Kai Germaschewski authored
into tp1.ruhr-uni-bochum.de:/home/kai/kernel/v2.5/linux-2.5.isdn
-
Kai Germaschewski authored
It was (only partially) protected by cli() before, which we want to get rid of.
-
Kai Germaschewski authored
Simplifies the code which was previously using an open coded singly linked list. Also, deleting a phone number during dial-out could easily oops the kernel before this patch.
-
Kai Germaschewski authored
ISDN_GLOBAL_STOPPED is a way to globally stop the system from dialing out / accepting incoming calls. Instead of spreading checks all over the place, just catch dial commands / incoming call indications in one place. Also, kill isdn_net_phone typedef and clean up affected code.
-
Kai Germaschewski authored
It's not used for the timeout-controlled hangup anymore, only to hang up depending on the dialmode, which we handle directly now.
-
Kai Germaschewski authored
o PPP_IPX is defined in a header these days
o isdn_net_hangup takes an isdn_net_local *, simplifying code a bit.
-
Kai Germaschewski authored
into tp1.ruhr-uni-bochum.de:/home/kai/kernel/v2.5/linux-2.5.make
-
Linus Torvalds authored
into home.transmeta.com:/home/torvalds/v2.5/linux
-
Linus Torvalds authored
into home.transmeta.com:/home/torvalds/v2.5/linux
-
Andrew Morton authored
Convert the VM to not wait on other people's dirty data.

- If we find a dirty page and its queue is not congested, do some writeback.
- If we find a dirty page and its queue _is_ congested then just refile the page.
- If we find a PageWriteback page then just refile the page.
- There is additional throttling for write(2) callers. Within generic_file_write(), record their backing queue in ->current. Within page reclaim, if this task encounters a page which is dirty or under writeback on this queue, block on it. This gives some more writer throttling and reduces the page refiling frequency.

It's somewhat CPU expensive - under really heavy load we only get a 50% reclaim rate in pages coming off the tail of the LRU. This can be fixed by splitting the inactive list into reclaimable and non-reclaimable lists. But the CPU load isn't too bad, and latency is much, much more important in these situations.

Example: with `mem=512m', running 4 instances of `dbench 100', 2.5.34 took 35 minutes to compile a kernel. With this patch, it took three minutes, 45 seconds.

I haven't done swapcache or MAP_SHARED pages yet. If there's tons of dirty swapcache or mmap data around we still stall heavily in page reclaim. That's less important.

This patch also has a tweak for swapless machines: don't even bother bringing anon pages onto the inactive list if there is no swap online.
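In rough pseudocode (C; the helpers named here are stand-ins, not the actual reclaim code), the per-page policy above is:

    #include <linux/mm.h>
    #include <linux/page-flags.h>
    #include <linux/backing-dev.h>

    /* stand-in helpers, declared only for illustration */
    void refile_page(struct page *page);
    void start_writeback(struct page *page);

    static void reclaim_one_page(struct page *page)
    {
            struct backing_dev_info *bdi = page->mapping->backing_dev_info;

            if (PageWriteback(page)) {
                    refile_page(page);              /* already in flight: don't wait */
            } else if (PageDirty(page)) {
                    if (bdi_write_congested(bdi))
                            refile_page(page);      /* congested queue: skip for now */
                    else
                            start_writeback(page);  /* queue has room: write it out  */
            }
    }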
-
Andrew Morton authored
The key concept here is that pdflush does not block on request queues any more. Instead, it circulates across the queues, keeping any non-congested queues full of write data. When all queues are full, pdflush takes a nap, to be woken when *any* queue exits write congestion. This code can keep sixty spindles saturated - we've never been able to do that before.

- Add the `nonblocking' flag to struct writeback_control, and teach the writeback paths to honour it.
- Add the `encountered_congestion' flag to struct writeback_control and teach the writeback paths to set it. So as soon as a mapping's backing_dev_info indicates that it is getting congested, bale out of writeback. And don't even start writeback against filesystems whose queues are congested.
- Convert pdflush's background_writeback() function to use nonblocking writeback. This way, a single pdflush thread will circulate around all the dirty queues, keeping them filled.
- Convert the pdflush `kupdate' function to do the same thing.

This solves the problem of pdflush thread pool exhaustion. It solves the problem of pdflush startup latency. It solves the (minor) problem wherein `kupdate' writeback only writes back a single disk at a time (it was getting blocked on each queue in turn). It probably means that we only ever need a single pdflush thread.
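A sketch of the circulation loop this enables (the two writeback_control fields are the ones added here; the loop body and helpers are illustrative):

    #include <linux/writeback.h>

    /* stand-in helpers, declared only for illustration */
    int more_dirty_data(void);
    void writeback_pass(struct writeback_control *wbc);
    void nap_until_a_queue_uncongests(void);

    static void pdflush_style_loop(void)
    {
            struct writeback_control wbc = {
                    .nonblocking = 1,       /* never sleep on a full request queue */
            };

            while (more_dirty_data()) {
                    wbc.encountered_congestion = 0;
                    writeback_pass(&wbc);   /* fill every non-congested queue */
                    if (wbc.encountered_congestion)
                            nap_until_a_queue_uncongests();
            }
    }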
-
Andrew Morton authored
Use the new queue congestion detector in ext2_preread_inode(). Don't try the speculative read if the read queue is congested. Also, don't try it if the disk is write-congested. Presumably it is more important to get the dirty memory cleaned out.
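The check amounts to something like this sketch (ext2_preread_inode()'s real body does more than shown):

    #include <linux/fs.h>
    #include <linux/backing-dev.h>

    static void preread_inode_sketch(struct inode *inode)
    {
            struct backing_dev_info *bdi = inode->i_mapping->backing_dev_info;

            /* the read is purely speculative, so don't queue it if the read
             * side is congested or the disk is backed up with dirty writes */
            if (bdi_read_congested(bdi) || bdi_write_congested(bdi))
                    return;

            /* ... otherwise read the inode's block into the page cache ... */
    }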
-
Andrew Morton authored
The patch provides a means for the VM to be able to determine whether a request queue is in a "congested" state. If it is congested, then a write to (or read from) the queue may cause blockage in get_request_wait(). So the VM can do:

    if (!bdi_write_congested(page->mapping->backing_dev_info))
            writepage(page);

This is not exact. The code assumes that if the request queue still has 1/4 of its capacity (queue_nr_requests) available then a request will be non-blocking. There is a small chance that another CPU could zoom in and consume those requests. But on the rare occasions where that may happen the result will merely be some unexpected latency - it's not worth doing anything elaborate to prevent this.

The patch decreases the size of `batch_requests'. batch_requests is positively harmful - when a "heavy" writer and a "light" writer are both writing to the same queue, batch_requests provides a means for the heavy writer to massively stall the light writer. Instead of waiting for one or two requests to come free, the light writer has to wait for 32 requests to complete. Plus batch_requests generally makes things harder to tune, understand and predict. I wanted to kill it altogether, but Jens says that it is important for some hardware - it allows decent size requests to be submitted. The VM changes which go along with this code cause batch_requests to be not so painful anyway - the only processes which sleep in get_request_wait() are the ones which we elect, by design, to wait in there - typically heavy writers.

The patch changes the meaning of `queue_nr_requests'. It used to mean "total number of requests per queue". Half of these are for reads, and half are for writes. This always confused the heck out of me, and the code needs to divide queue_nr_requests by two all over the place. So queue_nr_requests now means "the number of write requests per queue" and "the number of read requests per queue". ie: I halved it. Also, queue_nr_requests was converted to static scope. Nothing else uses it.

The accuracy of bdi_read_congested() and bdi_write_congested() depends upon the accuracy of mapping->backing_dev_info. With complex block stacking arrangements it is possible that ->backing_dev_info is pointing at the wrong queue. I don't know. But the cost of getting this wrong is merely latency, and if it is a problem we can fix it up in the block layer, by getting stacking devices to communicate their congestion state upwards in some manner.
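A sketch of the 1/4-capacity threshold described above (illustrative; the real test lives in the block layer and uses the queue's own counters):

    /* A queue is treated as congested once fewer than a quarter of its
     * requests remain free - at that point a new request is likely to
     * block in get_request_wait(). */
    static int queue_is_congested(int nr_free_requests, int queue_nr_requests)
    {
            return nr_free_requests < queue_nr_requests / 4;
    }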
-