- 16 Apr, 2003 13 commits
-
-
Randolph Chung authored
-
Linus Torvalds authored
Merge http://gkernel.bkbits.net/net-drivers-2.5
into home.transmeta.com:/home/torvalds/v2.5/linux
-
David S. Miller authored
into kernel.bkbits.net:/home/davem/net-2.5
-
Jeff Garzik authored
into hum.(none):/garz/repo/net-drivers-2.5
-
Linus Torvalds authored
-
Linus Torvalds authored
with user pointer annotations.
-
Linus Torvalds authored
verifies declarations against definitions and checks argument types.
-
Linus Torvalds authored
-
David Mosberger authored
The patch below is needed to avoid a deadlock on fs->lock. Without the patch, if __emul_lookup_dentry() returns 0, we fail to reacquire current->fs->lock and then go ahead to read_unlock() it anyhow. Bad for your health. I believe the bug was introduced when the fast pathwalk was reverted in order to introduce the RCU lockless path walking.
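A minimal sketch of the lock balance being described, with hypothetical surrounding code and hedged names (the real fix is in the emulation-prefix path of fs/namei.c and may differ):

    #include <linux/sched.h>
    #include <linux/fs_struct.h>
    #include <linux/namei.h>

    /* Hypothetical sketch of the fs->lock balance described above. */
    static int lookup_with_emul_prefix(const char *name, struct nameidata *nd)
    {
            read_lock(&current->fs->lock);
            /* ... capture root/rootmnt while holding fs->lock ... */
            read_unlock(&current->fs->lock);

            if (__emul_lookup_dentry(name, nd))
                    return 0;       /* handled via the emulation prefix */

            /*
             * __emul_lookup_dentry() returned 0: we rejoin the normal lookup
             * path, which ends with read_unlock(&current->fs->lock).  Without
             * re-taking the lock here we would unlock a lock we don't hold.
             */
            read_lock(&current->fs->lock);
            /* ... normal lookup, eventually: read_unlock(&current->fs->lock); */
            return 1;
    }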
-
Hideaki Yoshifuji authored
-
Martin Josefsson authored
-
Hideaki Yoshifuji authored
-
Martin Schwidefsky authored
- lcs: Don't free net_device in lcs_stop_device.
- lcs: Reset card after LGW initiated stoplan.
- lcs: Fix bug in lcs_tasklet.
- ctc: Get channel structure from private pointer. Remove __NO_VERSION__.
- lcs,ctc,iucv: Remove MOD_INC_USE_COUNT/MOD_DEC_USE_COUNT. Set dev->owner.
-
- 15 Apr, 2003 15 commits
-
-
David Stevens authored
-
Jan Harkes authored
The problem is caused by the devfs_mk_dir simplification that went in a couple of weeks ago, which didn't update one of the coda call-sites.
-
David Mosberger authored
Fix for a trivial typo. Without it, you can't insert anything on top of agpgart.ko because agp_register_driver() will erroneously pick up the symbol version from agp_backend_acquire().
-
Rusty Russell authored
This converts connection tracking and all the connection tracking modules to handle non-linear skbs. Enough interfaces have been broken in the process that old helpers won't compile. Interfaces which used to take a "void *data, int len" or "struct iphdr *iph, int len" now take the skb itself (and an offset to the data in the case of the first interface), which is not linearized in any way (although Alexey says after ip_rcv the IP header is always linear, so IPv4 netfilter hooks can always assume a linear IP hdr). Helpers which examine data (amanda, FTP, IRC) now copy it into a buffer and examine that.
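For illustration only (the helper name and parameters here are hypothetical, not taken from the patch), a data-examining helper under the new interface copies the payload out of the possibly non-linear skb with skb_copy_bits() instead of dereferencing a pointer into it:

    #include <linux/skbuff.h>

    /* Hypothetical helper: examine payload from a possibly non-linear skb
     * by copying it into a local buffer, never by direct pointer access.
     * "dataoff" is the offset of the protocol data inside the skb. */
    static int example_help(struct sk_buff *skb, unsigned int dataoff)
    {
            unsigned char buf[64];
            unsigned int len = skb->len - dataoff;

            if (len > sizeof(buf))
                    len = sizeof(buf);

            /* skb_copy_bits() handles fragmented/paged data as well as the
             * linear header area. */
            if (skb_copy_bits(skb, dataoff, buf, len) < 0)
                    return 0;

            /* ... parse buf[0..len) looking for the protocol command ... */
            return 1;
    }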
-
George Anzinger authored
Clean up "pendcount" locking (or rather - lack there-of) by making it a per-timer thing and thus automatically protected by the timer lock. Fix whitespace damage.
-
David S. Miller authored
into nuts.ninka.net:/home/davem/src/BK/net-2.5
-
Linus Torvalds authored
Merge bk://are.twiddle.net/axp-2.5
into penguin.transmeta.com:/home/penguin/torvalds/repositories/kernel/linux
-
Richard Henderson authored
-
Ulrich Drepper authored
Now that the kernel provides code that user programs execute directly (the vsyscall code on x86), it is necessary to add unwind information for that code as well. The unwind information is used not only in C++ code. This patch adds an AT_SYSINFO_EH_FRAME ELF aux-table value that points to the unwinding block description for the sysinfo frame, and makes sure the AT_* value is passed to applications. It defines the static data for the unwind blocks (two, one for int80 and the other for sysenter), and finally adds code to copy the data in place.
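As a user-space illustration of how such aux-table values are consumed, the sketch below walks the ELF auxiliary vector that the kernel places after the environment. Note the assumption: the AT_SYSINFO_EH_FRAME name from this patch may not appear in today's headers, so the sketch prints the related AT_SYSINFO/AT_SYSINFO_EHDR entries that did end up in <elf.h>.

    #include <stdio.h>
    #include <elf.h>
    #include <link.h>               /* ElfW() */

    extern char **environ;

    int main(void)
    {
            char **p = environ;
            ElfW(auxv_t) *aux;

            while (*p)              /* the aux vector follows the environment */
                    p++;

            for (aux = (ElfW(auxv_t) *)(p + 1); aux->a_type != AT_NULL; aux++) {
    #ifdef AT_SYSINFO
                    if (aux->a_type == AT_SYSINFO)
                            printf("AT_SYSINFO      = %#lx\n",
                                   (unsigned long)aux->a_un.a_val);
    #endif
    #ifdef AT_SYSINFO_EHDR
                    if (aux->a_type == AT_SYSINFO_EHDR)
                            printf("AT_SYSINFO_EHDR = %#lx\n",
                                   (unsigned long)aux->a_un.a_val);
    #endif
            }
            return 0;
    }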
-
Linus Torvalds authored
testing. Found by 'sparse', my source parser tool.
-
Matt Reppert authored
-
Ivan Kokshaysky authored
Forward port of Jay's 2.4 patch. Also I've cleaned up EISA configury - we only need it for systems with EISA. Ivan.
-
Ivan Kokshaysky authored
While testing our upcoming kernel update for 7.2 alpha, I've encountered a problem with move_initrd. It allocates a page-aligned chunk to move the initrd into, but it doesn't allocate the entire last page. Subsequent bootmem allocations can then be filled from the last page used by the initrd. This then becomes a problem when the initrd memory is released.
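The essence of the fix, sketched with generic bootmem-style calls rather than the real arch/alpha code (the function name here is a placeholder), is to reserve the destination rounded up to whole pages so later allocations cannot be handed out from the initrd's last page:

    #include <linux/init.h>
    #include <linux/bootmem.h>
    #include <linux/string.h>
    #include <asm/page.h>

    /* Sketch only: cover the tail of the initrd's last page as well. */
    static void * __init move_initrd_sketch(unsigned long start, unsigned long end)
    {
            unsigned long size = end - start;
            void *dest = alloc_bootmem_pages(PAGE_ALIGN(size));

            if (dest)
                    memcpy(dest, (void *)start, size);
            return dest;
    }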
-
Ivan Kokshaysky authored
The 2.5 kernels may hang on execve(). Most easily this can be reproduced by submitting forms in mozilla, apparently because it does execve with very long argument strings. That's what happens in do_execve, I suppose:
    bprm.mm = mm_alloc();
    ...
    init_new_context(current, bprm.mm);
        here we update current ptbr with new mm->pgd
    ...
    copy_strings;
        interrupt -> do_softirq -> switch to ksoftirqd
    ...
        switch back to do_execve
    copy_strings - immediate page fault in copy_user that we can't handle,
    because the new ptbr has been activated after the context switch and
    current->mm is not valid anymore.
The fix is to not update ptbr for the current task in init_new_context(), as we do it later in activate_mm() anyway. With it my (UP) boxes look quite stable so far. Ivan.
-
Richard Henderson authored
into are.twiddle.net:/home/rth/BK/axp-2.5
-
- 14 Apr, 2003 12 commits
-
-
Randolph Chung authored
This one gets rid of sys32_{get,set}affinity in favor of a unified compat implementation.
-
Andrew Morton authored
From: Trond Myklebust <trond.myklebust@fys.uio.no>
The patch fixes some problems with NFS under heavy writeout. NFS pages can be in a clean but unreclaimable state. They are unreclaimable because the server has not yet acked the write - we may need to "redirty" them if the server crashes. These are referred to as "unstable" pages. We need to count them alongside dirty and writeback pages when making flushing and throttling decisions. Otherwise the machine can be flooded with these pages and the VM has problems.
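A hedged sketch of the accounting idea; the counter names and global_page_state() call are from later 2.6-era kernels and are used here only to illustrate the shape of the throttling test, not the code in this patch:

    #include <linux/vmstat.h>

    /* Sketch: treat unstable NFS pages like dirty/writeback pages when
     * deciding whether a writer must be throttled. */
    static int needs_throttling(unsigned long dirty_limit)
    {
            unsigned long pending =
                    global_page_state(NR_FILE_DIRTY) +
                    global_page_state(NR_WRITEBACK) +
                    global_page_state(NR_UNSTABLE_NFS);

            return pending > dirty_limit;
    }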
-
Andrew Morton authored
The MCE code is setting up a timer whose handler uses the workqueue code before workqueue is initialised. If you boot slowly it oopses. Convert the MCE code to use an initcall.
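The shape of such a conversion, shown as a generic sketch (the names below are placeholders, not the real arch/i386 MCE code): instead of arming the timer from early setup, register an initcall, which runs after the core subsystems - including workqueues - are up.

    #include <linux/init.h>
    #include <linux/timer.h>
    #include <linux/jiffies.h>

    static struct timer_list mce_timer;     /* placeholder names */

    static void mce_timer_fn(unsigned long data)
    {
            /* ... schedule work, re-arm the timer ... */
    }

    /* Runs during do_initcalls(), i.e. after the workqueue code is
     * initialised, so the handler may safely use the workqueue API. */
    static int __init mce_timer_setup(void)
    {
            init_timer(&mce_timer);
            mce_timer.function = mce_timer_fn;
            mce_timer.expires = jiffies + HZ;
            add_timer(&mce_timer);
            return 0;
    }
    __initcall(mce_timer_setup);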
-
Andrew Morton authored
From: george anzinger <george@mvista.com>
The MAJOR problem was a hang in the kernel if a user tried to delete a repeating timer that had a signal delivery pending. I was putting the task in a loop waiting for that same task to pick up the signal. OUCH! A minor issue relates to the need by the glibc folks to specify a particular thread to get the signal. I had this code in all along, but somewhere in 2.5 the signal code was made POSIX compliant, i.e. deliver to the first thread that doesn't have it masked out. This now uses the code from the above-mentioned cleanup. Most signals go to the group delivery signal code; however, those specifying THREAD_ID (an extension to the POSIX standard) are sent to the specified thread. That thread MUST be in the same thread group as the thread that creates the timer.
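For illustration, a user-space sketch of the THREAD_ID extension: the expiry signal is directed at one specific thread in the group. Assumptions are flagged in the code - the sigev_notify_thread_id accessor is not exposed uniformly by all libcs, and older glibc needs -lrt for timer_create().

    #define _GNU_SOURCE
    #include <signal.h>
    #include <string.h>
    #include <time.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    #ifndef sigev_notify_thread_id              /* assumption: glibc layout */
    #define sigev_notify_thread_id _sigev_un._tid
    #endif

    /* Sketch: create a timer whose signal goes to the calling thread only. */
    static int make_thread_timer(int signo, timer_t *out)
    {
            struct sigevent sev;

            memset(&sev, 0, sizeof(sev));
            sev.sigev_notify = SIGEV_THREAD_ID;
            sev.sigev_signo  = signo;
            sev.sigev_notify_thread_id = syscall(SYS_gettid);

            return timer_create(CLOCK_REALTIME, &sev, out);
    }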
-
Andrew Morton authored
drivers/md/xor.c needs kernel_fpu_begin() for the mmx checksumming functions. So export that to GPL modules.
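The usage pattern this export enables in a GPL module, as a sketch (the function name is a placeholder; the real user is the MMX/SSE xor code):

    #include <asm/i387.h>           /* kernel_fpu_begin()/kernel_fpu_end() */

    static void xor_with_mmx_sketch(void)
    {
            kernel_fpu_begin();     /* save FPU state, disable preemption */

            /* ... MMX/SSE accelerated XOR loops would run here ... */

            kernel_fpu_end();       /* restore FPU state */
    }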
-
Andrew Morton authored
use `unsigned long' for a jiffies-holding type.
-
Andrew Morton authored
Use-after-free races have been seen due to the workqueue timer in the tty structure going off after the tty was freed. Fix that up by using cancel_scheduled_work() and flush_scheduled_work().
-
Andrew Morton authored
The workqueue code currently has a notion of a per-cpu queue being "busy". flush_scheduled_work()'s responsibility is to wait for a queue to be not busy. Problem is, flush_scheduled_work() can easily hang up.
- The workqueue is deemed "busy" when there are pending delayed (timer-based) works. But if someone repeatedly schedules new delayed work in the callback, the queue will never fall idle, and flush_scheduled_work() will not terminate.
- If someone reschedules work (not delayed work) in the work function, that too will cause the queue to never go idle, and flush_scheduled_work() will not terminate.
So what this patch does is:
- Create a new "cancel_delayed_work()" which will try to kill off any timer-based delayed works.
- Change flush_scheduled_work() so that it is immune to people re-adding work in the work callout handler. We can do this by recognising that the caller does *not* want to wait until the workqueue is "empty". The caller merely wants to wait until all works which were pending at the time flush_scheduled_work() was called have completed. The patch uses a couple of sequence numbers for that.
So now, if someone wants to reliably remove delayed work they should do:
    /*
     * Make sure that my work-callback will no longer schedule new work
     */
    my_driver_is_shutting_down = 1;
    /*
     * Kill off any pending delayed work
     */
    cancel_delayed_work(&my_work);
    /*
     * OK, there will be no new works scheduled.  But there may be one
     * currently queued or in progress.  So wait for that to complete.
     */
    flush_scheduled_work();
The patch also changes the flush_workqueue() sleep to be uninterruptible. We cannot legally bale out if a signal is delivered anyway.
-
Andrew Morton authored
From: Philippe Elie <phil.el@wanadoo.fr>
- oprofile is currently only profiling one sibling. Fix that with appropriate register settings.
- fix an oops which could occur if the userspace driver were to request a non-existent resource.
- in the NMI handler, counter_config[i].event is accessible from user space, so the user can change the event during profiling by echo xxx > /dev/oprofile/event
- the event mask was wrong: the bit field is 6 bits long, not 5, so the events SSE_INPUT_ASSIST and X87_SIMD_MOVES_UOP were affected by masking the high bit of the event number.
-
Andrew Morton authored
A few places were missing the rwlock->spinlock conversion.
-
Linus Torvalds authored
-
Linus Torvalds authored
macro argument, so that portability issues will be found in a timely manner.
-