- 12 Jan, 2003 1 commit
-
-
Kai Germaschewski authored
This patch introduces, private to the HiSax driver, new helper functions request_io()/request_mmio(), which correspond to request_region()/request_mem_region() but are also verbose about failures and keep track of the allocated regions, so unwinding in case of errors is automatic. Additionally, request_mmio() will also ioremap() the region.
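The tracking-and-unwind idea can be sketched in a small userspace analogue. Everything here is illustrative (the real helpers wrap request_region()/request_mem_region() and live in the HiSax driver); the stubbed `busy_start` stands in for a region already claimed by another driver:

```c
#include <stddef.h>

/* Hypothetical userspace analogue of the HiSax request_io() helper:
 * each successful allocation is recorded, so one unwind call releases
 * everything instead of hand-written error paths in every caller. */

#define MAX_REGIONS 8

struct region { unsigned long start, len; };

static struct region tracked[MAX_REGIONS];
static int ntracked;

/* stand-in for request_region(): pretend the region starting at
 * busy_start is already taken by another driver */
static unsigned long busy_start = 0x3e0;

static int request_io(unsigned long start, unsigned long len)
{
    if (start == busy_start || ntracked == MAX_REGIONS)
        return 0;                       /* failure: caller just unwinds */
    tracked[ntracked].start = start;
    tracked[ntracked].len = len;
    ntracked++;
    return 1;
}

/* release every region acquired so far, in reverse order */
static void hisax_release_resources(void)
{
    while (ntracked > 0)
        ntracked--;                     /* would call release_region() here */
}
```

On failure the card setup code calls the single release function rather than duplicating partial-cleanup logic at each exit point.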
-
- 11 Jan, 2003 26 commits
-
-
Linus Torvalds authored
Merge http://linux-isdn.bkbits.net/linux-2.5.isdn
into home.transmeta.com:/home/torvalds/v2.5/linux
-
Kai Germaschewski authored
into tp1.ruhr-uni-bochum.de:/scratch/kai/kernel/v2.5/linux-2.5.isdn
-
Kai Germaschewski authored
From: Adrian Bunk <bunk@fs.tum.de> The patch below removes #if'd kernel 2.0 code from drivers/isdn/divert/divert_init.c.
-
Kai Germaschewski authored
From: Christian Borntraeger <linux@borntraeger.net> This patch makes isdn_tty HZ-aware. The first change converts a hard-coded 3000 jiffies (now 3 seconds) to 30 seconds, matching the comment. I don't know if the second change (schedule_timeout(50);) has to be half a second, but this was the value used in 2.4.
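The point of the fix can be sketched briefly. On 2.4/i386, HZ was 100, so a literal 3000 jiffies happened to mean 30 seconds; once HZ can differ (1000 in this sketch, an assumed config), timeouts must be written in terms of HZ:

```c
/* Illustrative sketch of the HZ-awareness fix; values are examples. */

#define HZ 1000  /* assumed 2.5 config; was 100 on 2.4/i386 */

/* before: timeout hard-coded in jiffies, correct only when HZ == 100 */
static long timeout_old(void) { return 3000; }

/* after: the intent (30 seconds) expressed in HZ-aware form */
static long timeout_new(void) { return 30 * HZ; }
```

With HZ at 1000, the old constant would have silently shrunk the timeout to 3 seconds.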
-
Kai Germaschewski authored
-
Kai Germaschewski authored
Instead of having "switch (subtype)" in just about every function, rather use separate functions and invoke the right one using the now existing struct card_ops infrastructure.
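A minimal sketch of the card_ops pattern, with illustrative names (the real struct carries many more callbacks): each subtype fills in its own ops table, and callers dispatch through it instead of switching on the subtype.

```c
/* Hypothetical sketch of per-subtype ops tables replacing
 * "switch (subtype)" in every function. */

struct card_state;

struct card_ops {
    int (*reset)(struct card_state *cs);
};

struct card_state {
    const struct card_ops *card_ops;
};

static int isac_reset(struct card_state *cs) { (void)cs; return 1; }
static int ipac_reset(struct card_state *cs) { (void)cs; return 2; }

static const struct card_ops isac_ops = { .reset = isac_reset };
static const struct card_ops ipac_ops = { .reset = ipac_reset };

static int card_reset(struct card_state *cs)
{
    return cs->card_ops->reset(cs);   /* no switch (subtype) needed */
}
```

Adding a new subtype then means adding one ops table, not touching every function.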
-
Kai Germaschewski authored
All IRQ handlers for IPAC based cards were basically the same (not a big surprise, since the chip is the same), so we can share the IRQ handler.
-
Kai Germaschewski authored
IPAC is basically a combined HSCX/ISAC chip, so we can generate the D- and B-channel access functions knowing how to access the IPAC. For performance reasons, this happens in a macro.
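The macro-generation idea can be sketched like this; the names, offsets, and the `fake_regs` array are illustrative stand-ins, not the driver's actual register map. One macro expands to small, fixed-offset access functions, which is why it is done as a macro rather than an indirect call:

```c
/* Hypothetical sketch: generate per-chip register accessors from one
 * macro, differing only in a base offset. */

static unsigned char fake_regs[0x100];   /* stands in for chip I/O space */

#define BUILD_READREG(name, offset)                          \
static unsigned char name##_read(unsigned char reg)          \
{                                                            \
    return fake_regs[(offset) + (reg)];                      \
}

BUILD_READREG(isac, 0x80)   /* D-channel registers, assumed at 0x80 */
BUILD_READREG(hscx, 0x00)   /* B-channel registers, assumed at 0x00 */
```

Since the offset is a compile-time constant inside each expansion, the generated functions cost no more than hand-written ones.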
-
Kai Germaschewski authored
Just renaming and introducing some helpers makes them look very similar to each other.
-
Kai Germaschewski authored
Except for a minor performance penalty, using the same IRQ handler for cards which used the same code anyway seems perfectly natural...
-
Kai Germaschewski authored
Again, just killing some duplicated code.
-
Kai Germaschewski authored
Same change which happened for the B-channel earlier.
-
Kai Germaschewski authored
This mostly finishes splitting up the multiplexing ->cardmsg.
-
Kai Germaschewski authored
Since we now have a per-card ops struct, use it to provide the irq handler function, too. Some drivers actually drive more than one specific hardware card; instead of having "switch (cs->subtyp)" scattered around, we rather aim at having different card_ops structures which just provide the right functions for the hardware. Of course, this patch is only the beginning of that separation, but it allows for some cleaning already.
-
Kai Germaschewski authored
Linux normally uses separate callbacks instead of a multiplexing function like "cardmsg". So start to break that into pieces.
-
Kai Germaschewski authored
into tp1.ruhr-uni-bochum.de:/scratch/kai/kernel/v2.5/linux-2.5.isdn
-
Andrew Morton authored
The patch arranges for constant 1, 2 and 4-byte copy_*_user() invocations to be inlined. It's hard to tell really, but the AIM9 creat_clo, signal_test and dir_rtns_1 numbers went up by 3%-9%, which is to be expected.
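The mechanism can be sketched in userspace. This is not the real copy_to_user() implementation (which also does access checks and fault handling); it only shows how a compile-time-constant size lets the compiler replace the generic call with a fixed-size move:

```c
#include <string.h>

/* Illustrative sketch: constant 1/2/4-byte copies become single moves;
 * anything else falls back to the generic path. */

static inline unsigned long
copy_small(void *to, const void *from, unsigned long n)
{
    if (__builtin_constant_p(n)) {
        switch (n) {
        case 1: *(unsigned char *)to  = *(const unsigned char *)from;  return 0;
        case 2: *(unsigned short *)to = *(const unsigned short *)from; return 0;
        case 4: *(unsigned int *)to   = *(const unsigned int *)from;   return 0;
        }
    }
    memcpy(to, from, n);              /* generic out-of-line path */
    return 0;
}
```

__builtin_constant_p() is a GCC extension; the switch disappears entirely at compile time for constant sizes.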
-
Andrew Morton authored
set_page_dirty() is racy if the caller has no reference against page->mapping->host, and if the page is unlocked. This is because another CPU could truncate the page off the mapping and then free the mapping. Usually, the page _is_ locked, or the caller is a user-space process which holds a reference on the inode by having an open file. The exceptional cases are where the page was obtained via get_user_pages(). The patch changes those to lock the page around the set_page_dirty() call.
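The shape of the fix, modelled in userspace with a mutex standing in for the page lock bit (struct and names here are simplified stand-ins, not the kernel's): bracket the dirtying with lock/unlock so truncation cannot race past it.

```c
#include <pthread.h>

/* Illustrative analogue: pages from get_user_pages() are unlocked,
 * so set_page_dirty() must be bracketed by lock_page()/unlock_page(). */

struct page {
    pthread_mutex_t lock;   /* stands in for the page lock bit */
    int dirty;
};

static void set_page_dirty(struct page *p) { p->dirty = 1; }

static void set_page_dirty_locked(struct page *p)
{
    pthread_mutex_lock(&p->lock);     /* lock_page() */
    set_page_dirty(p);
    pthread_mutex_unlock(&p->lock);   /* unlock_page() */
}
```

Holding the lock means a concurrent truncate must wait, so the mapping cannot be freed under the dirtier.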
-
Andrew Morton authored
- Fix error-path mem leak in __vfs_follow_link() (from a recent AC->2.4 patch)
- Make drivers/net/aironet4500_proc.c:driver_lock static.
-
Andrew Morton authored
Here is spin_lock():

    #define spin_lock(lock) \
    do { \
        preempt_disable(); \
        _raw_spin_lock(lock); \
    } while (0)

Here is the scenario:

    CPU0: spin_lock(some_lock);
          do_very_long_thing(); /* This has cond_resched()s in it */
    CPU1: spin_lock(some_lock);

Now suppose that the scheduler tries to schedule a task on CPU1. Nothing happens, because CPU1 is spinning on the lock with preemption disabled. CPU0 will happily hold the lock for a long time because nobody has set need_resched() against CPU0. This problem can cause scheduling latencies of many tens of milliseconds on SMP, on kernels which handle UP quite happily. This patch fixes the problem by changing the spin_lock() and write_lock() contended slowpath to spin on the lock by hand, while polling for preemption requests. I would have done read_lock() too, but we don't seem to have read_trylock() primitives. The patch also shrinks the kernel by 30k due to not having separate out-of-line spinning code for each spin_lock() callsite.
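The by-hand spinning can be sketched with C11 atomics; names here are userspace stand-ins, and the "reschedule" is only recorded, not performed. The key property is that the waiter uses trylock attempts, so it can notice a pending reschedule request between attempts:

```c
#include <stdatomic.h>

/* Illustrative sketch of the contended slowpath: spin with trylock
 * and poll for a reschedule request instead of spinning blind with
 * preemption disabled. */

static atomic_flag the_lock = ATOMIC_FLAG_INIT;
static atomic_int need_resched_flag;
static int resched_seen;

static void spin_lock_slowpath(void)
{
    /* test_and_set returns the previous value: 0 means we got it */
    while (atomic_flag_test_and_set(&the_lock)) {
        /* lock busy: between attempts, notice resched requests */
        if (atomic_load(&need_resched_flag)) {
            resched_seen = 1;         /* would call preempt_schedule() */
            atomic_store(&need_resched_flag, 0);
        }
    }
}

static void spin_unlock_local(void)
{
    atomic_flag_clear(&the_lock);
}
```

In the uncontended case the loop body never runs, so the fast path is unchanged.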
-
Andrew Morton authored
Pagetable teardown can hold page_table_lock for extremely long periods - hundreds of milliseconds. This is pretty much the final source of high scheduling latency in the core kernel. We fixed it for zap_page_range() by chunking the work up and dropping the lock occasionally if needed. But that did not fix exit_mmap() and unmap_region(). So what this patch does is to create an uber-zapper "unmap_vmas()" which provides all the vma-walking, page unmapping and low-latency lock-dropping which zap_page_range(), exit_mmap() and unmap_region() require. Those three functions are updated to call unmap_vmas(). It's actually a bit of a cleanup...
-
Andrew Morton authored
touched_by_munmap() returns a reversed list of VMAs. That makes things harder in the low-latency-page-zapping patch. So change touched_by_munmap() to return a VMA list which is in the original order - ascending virtual addresses. Oh, and rename it to <hugh>detach_vmas_to_be_unmapped()</hugh>. It now returns nothing, because we know that the VMA we passed in is the head of the to-be-unmapped list.
-
Andrew Morton authored
In the next patch I wish to add to mm.h prototypes of functions which take an mmu_gather_t* argument. To do this I must either:
a) include tlb.h in mm.h. Not good - more nested includes when a simple forward decl is sufficient.
b) Add `typedef struct free_pte_ctx mmu_gather_t;' to mm.h. That's silly - it's supposed to be an opaque type.
or c) Remove the pesky typedef. Bingo.
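The point about forward declarations can be shown in a few lines; the function name here is an illustrative stand-in. With the typedef gone, a header only needs an incomplete declaration of the struct to declare functions taking a pointer to it:

```c
/* Sketch: an incomplete type is enough for prototypes, so the header
 * needs no nested include. */

struct mmu_gather;                      /* forward decl; type stays opaque */

/* legal with only the incomplete type visible (as in mm.h) */
int unmap_vmas_sketch(struct mmu_gather *tlb);

/* the full definition lives where the tlb.h-equivalent details are known */
struct mmu_gather { int nr; };

int unmap_vmas_sketch(struct mmu_gather *tlb)
{
    return tlb ? tlb->nr : 0;
}
```

A typedef of an incomplete struct cannot be repeated portably across headers in C89, which is why removing the typedef is the clean option.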
-
Andrew Morton authored
cond_resched_lock() _used_ to be "if this is the only lock which I am holding then drop it and schedule if needed". However with the i_shared_lock->i_shared_sem change, neither of its two callsites now need those semantics. So this patch changes it to mean just "if needed, drop this lock and reschedule". This allows us to also schedule if CONFIG_PREEMPT=n, which is useful - zap_page_range() can run for an awfully long time. The preempt and non-preempt versions of cond_resched_lock() have been unified.
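The unified semantics can be modelled in userspace with a pthread mutex standing in for the spinlock; the flag and the schedule stub are illustrative. The helper drops the lock, "schedules", and retakes it, unconditionally of how many other locks are held:

```c
#include <pthread.h>

/* Illustrative sketch of the new cond_resched_lock() semantics:
 * "if needed, drop this lock and reschedule". */

static int need_resched_flag;
static int schedule_count;

static void schedule_stub(void) { schedule_count++; need_resched_flag = 0; }

static int cond_resched_lock(pthread_mutex_t *lock)
{
    if (need_resched_flag) {
        pthread_mutex_unlock(lock);
        schedule_stub();              /* let someone else run */
        pthread_mutex_lock(lock);
        return 1;                     /* we did drop the lock */
    }
    return 0;
}
```

Because nothing here depends on preemption support, the same code serves CONFIG_PREEMPT=y and =n, which is the unification the patch describes.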
-
Andrew Morton authored
i_shared_lock is held for a very long time during vmtruncate() and causes high scheduling latencies when truncating a file which is mmapped. I've seen 100 milliseconds. So turn it into a semaphore. It nests inside mmap_sem. This change is also needed by the shared pagetable patch, which needs to unshare PTEs on the vmtruncate path - lots of pagetable pages need to be allocated, and the allocations use __GFP_WAIT. The patch also makes unmap_vma() static.
-
Ingo Molnar authored
This patch from Roland McGrath fixes a threading related ptrace bug: PTRACE_ATTACH should not stop everybody for each thread attached.
-
- 10 Jan, 2003 13 commits
-
-
Miles Bader authored
-
Miles Bader authored
-
Miles Bader authored
-
Miles Bader authored
-
Miles Bader authored
-
Miles Bader authored
This is needed by some includers of <asm/stat.h>.
-
Neil Brown authored
Implementing hash_str as hash_mem(..., strlen()) is actually quite slow, so create a separate hash_str. Now hash_mem has only one call site, and both are quite small, so we make them both inline.
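The single-pass idea is easy to sketch. The hash function below (FNV-1a) is illustrative, not nfsd's actual mixing function; what matters is that hash_str() stops at the NUL itself instead of paying for a separate strlen() pass first:

```c
#include <stddef.h>
#include <string.h>

/* Two-pass version: strlen() walks the string, then hash_mem()
 * walks it again. */
static unsigned long hash_mem(const char *buf, size_t len)
{
    unsigned long h = 2166136261u;
    size_t i;
    for (i = 0; i < len; i++)
        h = (h ^ (unsigned char)buf[i]) * 16777619u;
    return h;
}

/* Single-pass version: terminate on the NUL directly. */
static unsigned long hash_str(const char *s)
{
    unsigned long h = 2166136261u;
    for (; *s; s++)
        h = (h ^ (unsigned char)*s) * 16777619u;
    return h;
}
```

Both produce the same value for the same bytes, so callers can switch between them freely; the string version simply does half the memory traffic.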
-
Neil Brown authored
blk_plug_device and blk_remove_plug want irqs_disabled, so we give it to them.
-
Neil Brown authored
I think this is finally right:
- we always wake_up when curr_resync decreases.
- we wait for the other guy's curr_resync to be less than ours.
- if we ever wait for someone who might yield, we start checking again from the start.
-
Neil Brown authored
Currently nfsd only worries about read-only exports for directories and files, which allows device special files to be chmodded (for example). This patch extends the test to cover all files, but is careful to avoid it when an IRIX client is doing a write-permission test against a pre-existing device special file.
-
Neil Brown authored
We encode that status in the return value. Also, don't pass 'proc' parameter to ->accept, as it is implicit in rqstp.
-
Neil Brown authored
-
Neil Brown authored
If one of the callbacks (e.g. data ready) is called before socket setup is complete, an oops may occur. With this patch, the socket is kept SK_BUSY until everything is ready, to avoid this. Also, some code is moved around to make it cleaner.
-