- 02 Feb, 2003 19 commits
-
-
Andrew Morton authored
Patch from Rusty Russell <rusty@rustcorp.com.au>. Make symbol_get() use undefined weak symbols if !CONFIG_MODULE. Many thanks to RTH for introducing undef weak symbols to me.
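For reference, a minimal sketch of the trick (not necessarily the macro as merged): with modules configured out, symbol_get() can declare the symbol weak, so taking its address yields NULL when nothing defines it, instead of a link error.

	/* sketch only; GCC-specific statement expression */
	#define symbol_get(x) ({				\
		extern typeof(x) x __attribute__((weak));	\
		&(x);						\
	})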
-
Andrew Morton authored
Patch from William Lee Irwin III <wli@holomorphy.com>. BLK_BOUNCE_HIGH and BLK_BOUNCE_ANY are compared against 64-bit quantities. Cast these unsigned long quantities to avoid overflow.
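Presumably something along these lines (a sketch, not the verbatim patch): the pfn-derived limits get a u64 cast before the shift, so comparisons against 64-bit DMA masks cannot overflow on 32-bit machines.

	/* sketch: cast before shifting so the result is 64-bit */
	#define BLK_BOUNCE_HIGH	((u64)blk_max_low_pfn << PAGE_SHIFT)
	#define BLK_BOUNCE_ANY	((u64)blk_max_pfn << PAGE_SHIFT)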
-
Andrew Morton authored
Patch from Manfred Spraul <manfred@colorfullife.com>. cache_alloc_refill() forgets to disable interrupts again on an error path. This exposes us to slab corruption, and it makes slab debugging go BUG (it expects local irqs to be disabled).
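The rule, as a hedged sketch with made-up names (the real function is mm/slab.c:cache_alloc_refill()): the function is entered with local irqs disabled and must leave with them disabled on every path, error paths included.

	/* sketch, hypothetical names: irqs must be off again on exit */
	static void *refill(struct my_cache *c)
	{
		void *obj;

		local_irq_enable();	/* allow irqs while growing the cache */
		obj = grow_cache(c);	/* may sleep, may fail */
		local_irq_disable();	/* required on BOTH paths, including */
		if (!obj)		/* the error return below */
			return NULL;
		return obj;
	}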
-
Andrew Morton authored
Patch from William Lee Irwin III <wli@holomorphy.com>. struct thread_info is shared with the stack, not struct task_struct. False positives have been seen.
-
Andrew Morton authored
Random semicolon makes the whole thing a no-op. It _did_ work. I must have broken it between testing and sending :(
-
Andrew Morton authored
Patch from: jak@rudolph.ccur.com (Joe Korty). The new, preemptable spin_lock() spins on an atomic bus-locking read/write instead of an ordinary read, as the original spin_lock implementation did. Perhaps that is the source of the inefficiency being seen. Attached sample code compiles but is untested and incomplete (present only to illustrate the idea).
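The idea is the classic test-and-test-and-set: spin on a plain read, which stays in the local cache, and only attempt the bus-locked atomic once the lock looks free. A minimal userspace sketch of that approach (C11 atomics; not Joe's actual sample code):

	#include <stdatomic.h>

	typedef struct { atomic_int locked; } tts_lock;

	static void tts_acquire(tts_lock *l)
	{
		for (;;) {
			/* plain read: no bus-locked traffic while contended */
			while (atomic_load_explicit(&l->locked,
						    memory_order_relaxed))
				;
			/* lock looks free: now try the expensive atomic */
			if (!atomic_exchange_explicit(&l->locked, 1,
						      memory_order_acquire))
				return;
		}
	}

	static void tts_release(tts_lock *l)
	{
		atomic_store_explicit(&l->locked, 0, memory_order_release);
	}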
-
Andrew Morton authored
The second quota locking fix. Sorry, I seem to have misplaced the changelog.
-
Andrew Morton authored
Quota locking fix from Jan Kara.
-
Andrew Morton authored
Spotted by Andries Brouwer. There's one place where slab is calling check_poison_obj() but not reporting on any detected failure. We used to go BUG() in there. Convert it over to the kinder, gentler slab_error() regime.
-
Andrew Morton authored
There have been sporadic sightings of ext3 causing little blips of 100,000 context switches per second when under load. At the start of do_get_write_access() we have this logic:

	repeat:
		lock_buffer(jh->bh);
		...
		unlock_buffer(jh->bh);
		...
		if (jh->j_list == BJ_Shadow) {
			sleep_on_buffer(jh->bh);
			goto repeat;
		}

The problem is that the unlock_buffer() will wake up anyone who is sleeping in sleep_on_buffer(). So if task A is asleep in sleep_on_buffer() and task B now runs do_get_write_access(), task B will wake task A by accident. Task B will then sleep on the buffer and task A will loop, run unlock_buffer() and then wake task B. This state continues until I/O completes against the buffer and kjournald changes jh->j_list - unless task A and task B both happen to have realtime scheduling policy, in which case kjournald never runs, the state is never cleared and your box locks up. The fix is to not do the `goto repeat;' until the buffer has been taken off the shadow list, so we don't go and wake up the other waiter(s) until they can actually proceed to use the buffer. The patch removes the exported sleep_on_buffer() function and simply exports an existing function which provides access to a buffer_head's waitqueue pointer. That is a better interface anyway, because it permits the use of wait_event(). This bug was introduced into 2.4.20-pre5 and was faithfully ported up.
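With the waitqueue accessor exported, the fixed wait can look something like this sketch (assuming bh_waitq_head() is the accessor in question):

	/* sketch: don't loop back until the buffer has left the shadow
	 * list, so our unlock_buffer() can no longer ping-pong a
	 * fellow waiter */
	if (jh->j_list == BJ_Shadow) {
		wait_queue_head_t *wqh = bh_waitq_head(jh2bh(jh));

		wait_event(*wqh, jh->j_list != BJ_Shadow);
		goto repeat;
	}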
-
Andrew Morton authored
The general error-handling logic in there is:

	*errp = -EFOO;
	<lots of code>
	if (some_error)
		goto out;

This is fragile and unmaintainable, because the setting of the error code is "far away" from the site where the error was detected. And the code was actually wrong: we were returning -ENOSPC in places where fs metadata inconsistency was detected, whereas we traditionally return -EIO in that case. So change it all to do, effectively:

	if (some_error) {
		*errp = -EFOO;
		goto out;
	}
-
Andrew Morton authored
Patch from: Hugh Dickins <hugh@veritas.com>. For almost a year (since 2.5.4) ext2_new_block has tended to set err to 0 instead of -ENOSPC or -EIO. This manifested variously (typically depending on what's stale in ext2_get_block's chain[4] array): sometimes __brelse "free free buffer" backtraces, sometimes a release_pages oops, usually generic_make_request beyond-end-of-device messages, followed by further ext2 errors. [Insert lecture on dangers of using goto for unwind :-]
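The pitfall, boiled down to a hypothetical sketch: once a success path has cleared the error code, any later failure that jumps to the shared unwind label returns with a stale "success".

	/* illustrative only; all names are made up */
	static int new_block(int *errp)
	{
		int block;

		*errp = -ENOSPC;
		block = alloc_block();		/* hypothetical helper */
		if (!block)
			goto out;		/* *errp still -ENOSPC: fine */
		*errp = 0;			/* optimistically report success */
		if (journal_block(block) < 0) {	/* hypothetical late failure */
			block = 0;
			goto out;		/* bug: returns with *errp == 0 */
		}
	out:
		return block;
	}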
-
Andrew Morton authored
Forward port of a 2.4 patch by Christoph Hellwig. See http://cert.uni-stuttgart.de/archive/bugtraq/2002/03/msg00384.html for the security implications.
-
Andrew Morton authored
Patch from Manfred Spraul <manfred@colorfullife.com>. exec of setuid apps and ptrace must be synchronized, to ensure that a normal user cannot ptrace a setuid app across exec. ptrace_attach acquires the task_lock around the uid checks; compute_creds acquires the BKL. The patch converts compute_creds to the task_lock. Additionally, it removes the do_unlock variable: the task_lock is not heavily used, there is no need to avoid the spinlock by adding branches. The patch is a cleanup, not a fix for a security problem: AFAICS the sys_ptrace in every arch acquires the BKL before calling ptrace_attach.
-
Andrew Morton authored
Patch from "Ph. Marek" <philipp.marek@bmlv.gv.at>. Compile fix in sound/oss/maestro.c
-
Andrew Morton authored
Patch from: "H. J. Lu" <hjl@lucon.org>. Fixes a commonly-reported insmod oops. Move the ksymtab label definitions inside the linker section, so they get the right addresses.
-
Andrew Morton authored
Since Jan removed the lock_kernel()s in inode_add_bytes() and inode_sub_bytes(), these functions have been racy. One problematic workload has been discovered in which concurrent writepage and truncate on SMP quickly causes i_blocks to go negative. writepage() does not take i_sem, and it seems that for ext2, there are no other locks in force when inode_add_bytes() is called. Putting the BKL back in there is not acceptable. To fix this race I have added a new spinlock "i_lock" to the inode. That lock is presently used to protect i_bytes and i_blocks. We could use it to protect i_size as well. The splitting of the used disk space into i_blocks and i_bytes is silly - we should nuke all that and just have a bare loff_t i_usedbytes. Later.
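With the new lock the helpers become plain critical sections. A sketch of the shape (i_bytes carries the sub-512-byte remainder, so there is a carry to fold into i_blocks):

	void inode_add_bytes(struct inode *inode, loff_t bytes)
	{
		spin_lock(&inode->i_lock);
		inode->i_blocks += bytes >> 9;	/* whole 512-byte blocks */
		inode->i_bytes += bytes & 511;	/* remainder */
		if (inode->i_bytes >= 512) {	/* carry into i_blocks */
			inode->i_blocks++;
			inode->i_bytes -= 512;
		}
		spin_unlock(&inode->i_lock);
	}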
-
Andrew Morton authored
When an appending O_DIRECT write hits ENOSPC we're returning a short write which is _too_ short. The file ends up with an undersized i_size and fsck complains. So update the return value with the partial result before bailing out.
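I.e. the usual partial-write convention, sketched here with made-up names: if some bytes made it out before the failure, report those bytes rather than the error.

	/* sketch: prefer partial progress over the error code */
	static ssize_t direct_write_sketch(void)
	{
		ssize_t written = 0;
		ssize_t ret = write_blocks(&written);	/* hypothetical */

		if (ret < 0 && written > 0)
			ret = written;	/* short write: i_size then matches
					 * what actually reached the disk */
		return ret;
	}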
-
Andrew Morton authored
In 2.5.52 I broke sys_sync() for ext2 in subtle ways. sys_sync() will set mapping->dirtied_when non-zero against a clean inode. Later, in (say) __iget(), that inode gets moved over to inode_unused or inode_in_use. But because it has non-zero ->dirtied_when, __mark_inode_dirty() thinks that the inode must still be on sb->s_dirty. But it isn't. It's on inode_in_use. It (and its pages) never get written out and the data gets thrown away on unmount. The patch ceases to use ->dirtied_when as an indicator of inode dirtiness. Not sure why I even did that :(
-
- 16 Jan, 2003 21 commits
-
-
Linus Torvalds authored
-
-
Russell King authored
__virt_to_bus/__bus_to_virt depended on INTEGRATOR_HDR0_SDRAM_BASE. Unfortunately, this is defined in arch-integrator/platform.h, and we really don't want to include it in memory.h. We instead use BUS_OFFSET, which will eventually depend on the CPU number in the system.
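Presumably the translation then becomes a pure offset, something like this sketch:

	/* sketch: translate via a constant bus offset instead of the
	 * platform.h-only INTEGRATOR_HDR0_SDRAM_BASE */
	#define __virt_to_bus(x)	((x) - PAGE_OFFSET + BUS_OFFSET)
	#define __bus_to_virt(x)	((x) - BUS_OFFSET + PAGE_OFFSET)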
-
Russell King authored
Only default BLK_DEV_IDEDMA on BLK_DEV_IDEDMA_ICS if ARCH_ACORN is set, not if ARM is set. There are PCI ARM systems out there!
-
Russell King authored
Ensure that we clean up properly after initialisation error, releasing all claimed resources in an orderly manner and returning the correct error code.
-
Russell King authored
-
Russell King authored
-
Russell King authored
-
Russell King authored
-
Russell King authored
Add cfbfillrect / cfbcopyarea / cfbimgblt objects for SA1100fb. Remove redundant "pm" member.
-
Jeff Wiedemeier authored
Found a buglet in the marvel code -- it doesn't change the number of IRQs, just the logic to get there. This applies on top of the other marvel code. /jeff
-
Richard Henderson authored
into kanga.twiddle.net:/home/rth/linux/axp-2.5
-
Richard Henderson authored
-
Richard Henderson authored
to header files where they belong.
-
Richard Henderson authored
of AGP and SRMCONS patches.
-
Richard Henderson authored
From Jeff.Wiedemeier@hp.com.
-
Richard Henderson authored
(Titan / Marvel), Kconfig and headers. From Jeff Wiedemeier.
-
Martin J. Bligh authored
Patch from Erich Focht. This adds a hook to rebalance globally across nodes every NODE_BALANCE_RATE iterations of the rebalancer. This allows us to easily tune, on an architecture-specific basis, how often we wish to rebalance: machines with higher NUMA ratios (more expensive off-node access) will want to do this less often. It's currently set to 100 for NUMA-Q and 10 for other machines. If the imbalance between nodes is > 125%, we'll rebalance them. The hook for this is added to the NUMA definition of cpus_to_balance, so again, no impact on non-NUMA machines.
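In sketch form (names taken loosely from the description above, all helpers hypothetical): the per-arch rate just gates how often the rebalance tick escalates from intra-node to cross-node balancing.

	/* sketch; NODE_BALANCE_RATE is per-arch (100 on NUMA-Q, 10 elsewhere) */
	static void rebalance_tick_sketch(void)
	{
		static int ticks;

		if (++ticks < NODE_BALANCE_RATE)
			return;
		ticks = 0;
		/* cross-node rebalance only when the busiest node holds
		 * more than 125% of this node's load */
		if (busiest_node_load() * 4 > this_node_load() * 5)
			balance_across_nodes();
	}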
-
Martin J. Bligh authored
Patch from Michael Hohnbaum. This adds a hook, sched_balance_exec(), to the exec code, to make it place the exec'ed task on the least loaded queue. We have less state to move at exec time than at fork time, so this is the cheapest point at which to migrate across nodes; experience with Dynix/PTX and testing on Linux have confirmed this. It also macro-wraps changes to nr_running, to allow us to keep track of per-node nr_running as well. Again, no impact on non-NUMA machines.
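The nr_running wrapping might look like this sketch (field and array names are hypothetical), keeping a per-node counter in sync with each runqueue update:

	/* sketch: a per-node running count alongside the per-rq one */
	static atomic_t node_nr_running[MAX_NUMNODES];

	static inline void nr_running_inc(runqueue_t *rq)
	{
		atomic_inc(&node_nr_running[rq->node]);
		rq->nr_running++;
	}

	static inline void nr_running_dec(runqueue_t *rq)
	{
		atomic_dec(&node_nr_running[rq->node]);
		rq->nr_running--;
	}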
-
Martin J. Bligh authored
Patch from Martin J. Bligh. This adds a small hook to the find_busiest_queue routine to allow us to specify a mask of which CPUs to search over. In the NUMA case, it will only balance inside the node (much cheaper to search, and it stops tasks from bouncing across nodes, which is very costly). The cpus_to_balance routine is conditionally defined to ensure no impact on non-NUMA machines. This is a tiny NUMA scheduler, but it needs the assistance of the second and third patches in order to spread tasks across nodes.
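The conditional definition could be as small as this sketch (assuming topology helpers of the era): on NUMA, restrict the search mask to the current node; otherwise fall back to all online CPUs, so non-NUMA behaviour is unchanged.

	/* sketch: node-local mask on NUMA, everything otherwise */
	#ifdef CONFIG_NUMA
	#define cpus_to_balance(cpu)	node_to_cpumask(cpu_to_node(cpu))
	#else
	#define cpus_to_balance(cpu)	cpu_online_map
	#endif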
-
Christoph Hellwig authored
Another left-over from ancient module code: it was supposed to return non-zero if the module has a use count, but currently it always evaluates to 0. There are a few users of different types:

(1) ioctls that perform a while(MOD_IN_USE) MOD_DEC_USE_COUNT loop. Just rip them out, we now have forced module unloading.

(2) printk's that moan if the use-count is not zero in the exitfunc. Just rip them out, this can't happen.

(3) if(MOD_IN_USE) MOD_DEC_USE_COUNT constructs in ->close of a few serial drivers. Just remove the conditional, we did a MOD_INC_USE_COUNT in ->open.

(4) This one is interesting: drivers/sbus/char/display7seg.c uses the module use count to track openers. Replace this with an atomic_t (see the sketch below).

In addition, remove tons of stale comments in network drivers that aren't understandable to anyone who doesn't know ancient Linux module semantics.
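For case (4), the replacement is just an open counter, sketched here with hypothetical names:

	/* sketch: track openers with an atomic_t instead of the
	 * module use count */
	static atomic_t d7s_users = ATOMIC_INIT(0);

	static int d7s_open(struct inode *inode, struct file *f)
	{
		atomic_inc(&d7s_users);
		return 0;
	}

	static int d7s_release(struct inode *inode, struct file *f)
	{
		atomic_dec(&d7s_users);
		return 0;
	}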
-