- 08 Mar, 2003 16 commits
-
-
Andrew Morton authored
use print_symbol() to decode the offender's program counter.
-
Andrew Morton authored
Patch from Manfred Spraul <manfred@colorfullife.com> A patch that records the last kfree caller's program counter and prints that if a poison check fails.
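A minimal sketch of the mechanism in plain C, not the actual mm/slab.c code (names here are illustrative):

    #include <stdio.h>

    /* per-object debug area; the real slab debug layout differs */
    struct obj_debug {
            void *last_free_pc;
    };

    static struct obj_debug dbg;

    /* the free path remembers its caller's program counter */
    void my_kfree(void *obj)
    {
            dbg.last_free_pc = __builtin_return_address(0);
            /* ... poison the object and return it to the slab ... */
            (void)obj;
    }

    int main(void)
    {
            my_kfree(NULL);
            /* on a poison-check failure the real code prints this address;
               the companion patch feeds it to print_symbol() */
            printf("last freed from %p\n", dbg.last_free_pc);
            return 0;
    }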
-
Andrew Morton authored
Patch from Petr Vandrovec <vandrove@vc.cvut.cz> Modifies the check_poison function to not only verify that the last byte is POISON_END, but also that all preceding bytes are either POISON_BEFORE or POISON_AFTER.
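Schematically, the stricter check walks the whole object instead of only looking at the last byte; a standalone sketch (the constant values and exact layout are assumptions, not the kernel's):

    #include <stddef.h>

    #define POISON_BEFORE 0x5a      /* illustrative values */
    #define POISON_AFTER  0x6b
    #define POISON_END    0xa5

    /* return 0 if the poison pattern is intact, nonzero on corruption */
    static int check_poison(const unsigned char *obj, size_t size)
    {
            size_t i;

            if (obj[size - 1] != POISON_END)
                    return 1;
            for (i = 0; i + 1 < size; i++)
                    if (obj[i] != POISON_BEFORE && obj[i] != POISON_AFTER)
                            return 1;
            return 0;
    }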
-
Andrew Morton authored
Patch from Hugh Dickins <hugh@veritas.com> Hugh's patch fixes vm_area_struct slab corruption due to mremap's move_vma mistaking how do_munmap splits vmas in one case. Neither of us are very happy with it - it is fragile, and obscure. Hugh will revisit this later, but for now it should fix up the potential memory corruption.
-
Andrew Morton authored
Patch from Ravikiran G Thirumalai <kiran@in.ibm.com> Makes the disk stats on struct gendisk per-cpu.
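Roughly the shape of the change (type and field names are stand-ins, not the actual struct gendisk definition):

    #define NR_CPUS 8       /* illustrative */

    struct disk_stats {
            unsigned int reads, writes, read_sectors, write_sectors;
    };

    /* before: one shared stats block, bounced between CPUs on every I/O;
       after: one copy per CPU, summed only when the statistics are read */
    struct gendisk_sketch {
            struct disk_stats dkstats[NR_CPUS];
    };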
-
Andrew Morton authored
Some workloads really, really want to have no readahead: databases performing small synchronous I/Os against a file with extremely poor layout, for example. Any readahead at all is a loss here, but the current readahead code refuses to adapt that low. Fix it up so that we can indeed adaptively disable readahead altogether, and not start it again until we have seen max_readahead()'s worth of consecutive reads.
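A simplified model of the behaviour described above - not the real mm/readahead.c state machine, just the idea:

    /* ra->window == 0 means readahead is switched off for this file */
    struct ra_state {
            unsigned long window;   /* current readahead size */
            unsigned long consec;   /* consecutive sequential reads seen */
            unsigned long max;      /* max_readahead() for this device */
    };

    static void on_read(struct ra_state *ra, int sequential)
    {
            if (!sequential) {
                    ra->consec = 0;
                    if (ra->window)
                            ra->window /= 2;  /* may shrink all the way to 0 */
                    return;
            }
            if (ra->window == 0) {
                    /* stay off until max_readahead()'s worth of
                       consecutive reads has been seen */
                    if (++ra->consec >= ra->max)
                            ra->window = ra->max;
                    return;
            }
            /* normal ramp-up elided */
    }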
-
Andrew Morton authored
Patch from Trond Myklebust <trond.myklebust@fys.uio.no> Implement sendfile() for the NFS client. This is required for loop-on-NFS support.
-
Andrew Morton authored
Tasks which throttle in balance_dirty_pages() will loop until the amount of dirty memory falls below the configured dirty_ratio. This exposes the possibility that one task could be stuck in there for arbitrary periods of time due to page dirtying activity by other tasks. The patch changes the logic so that tasks will break out of the loop if they have written enough pages, regardless of the current dirty memory limits. Here "enough" pages is 1.5x the number of pages which they just dirtied. If the amount of dirty memory in the machine happens to still exceed dirty_ratio (say, due to MAP_SHARED activity) then the task will again throttle after dirtying a single page. But there is now an upper limit on the time for which a single task will be captured in balance_dirty_pages().
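In outline, the loop now looks something like this (the helper names are hypothetical stand-ins for the real writeback calls):

    /* hypothetical helpers standing in for the real writeback machinery */
    extern int dirty_memory_over_ratio(void);
    extern unsigned long write_back_some_dirty_pages(void);

    static void balance_dirty_pages_sketch(unsigned long pages_dirtied)
    {
            unsigned long written = 0;
            unsigned long enough = pages_dirtied + pages_dirtied / 2;  /* 1.5x */

            while (dirty_memory_over_ratio()) {
                    written += write_back_some_dirty_pages();
                    if (written >= enough)
                            break;  /* bounded stay, even if the machine is
                                       still over dirty_ratio */
            }
    }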
-
Andrew Morton authored
Patch from Andries.Brouwer@cwi.nl This patch does the following:
- static const char *blkdevs[MAX_BLKDEV]; disappears
- get_blkdev_list, (un)register_blkdev, __bdevname are moved from block_dev.c to genhd.c
- the third "fops" parameter of register_blkdev was unused; it is now removed everywhere
- zillions of places had printk("cannot get major") upon error return from register_blkdev; all of these are removed and a single printk is inserted in register_blkdev itself
Of course the reason for the patch is that one fixed-size array is eliminated.
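The caller-visible part is the prototype change, roughly (as described above, the unused third argument goes away and the failure printk moves inside register_blkdev):

    /* before */
    int register_blkdev(unsigned int major, const char *name,
                        struct block_device_operations *fops);

    /* after: no fops argument; register_blkdev itself prints the
       "cannot get major" diagnostic on failure, so callers don't have to */
    int register_blkdev(unsigned int major, const char *name);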
-
Martin J. Bligh authored
I'm getting a lot of cacheline bounce from .text.lock.file_table due to false sharing of the cacheline. The following patch just aligns the lock in its own cacheline.
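The fix is essentially a one-liner, something like the following (assuming the lock in question is files_lock in fs/file_table.c):

    /* keep the lock on its own cacheline so writes to neighbouring data
       don't drag it back and forth between CPUs */
    spinlock_t files_lock __cacheline_aligned = SPIN_LOCK_UNLOCKED;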
-
Martin J. Bligh authored
People keep asking for this info, and Andrew asked me to put it under the Documentation directory. It provides really simple instructions for taking a profile so that users can report performance changes in a useful way.
-
Martin J. Bligh authored
From Andy Whitcroft Fix the type of get_zholes_size for NUMA-Q
-
Martin J. Bligh authored
From Andy Whitcroft Convert physnode_map from an int to a u8 to save cachelines.
-
Martin J. Bligh authored
From Andy Whitcroft A few very simple changes to make CONFIG_NUMA work everywhere, so that distros can build one common binary kernel.
-
Martin J. Bligh authored
From Andy Whitcroft Share a common physnode_map structure between NUMA-Q and Summit.
-
Linus Torvalds authored
Merge bk://kernel.bkbits.net/davem/sparc-2.5 into home.transmeta.com:/home/torvalds/v2.5/linux
-
- 07 Mar, 2003 1 commit
-
-
Linus Torvalds authored
Merge bk://linux-dj.bkbits.net/watchdog into home.transmeta.com:/home/torvalds/v2.5/linux
-
- 08 Mar, 2003 1 commit
-
-
Dave Jones authored
-
- 07 Mar, 2003 22 commits
-
-
Linus Torvalds authored
Merge http://linux-isdn.bkbits.net/linux-2.5.isdn into home.transmeta.com:/home/torvalds/v2.5/linux
-
David S. Miller authored
into kernel.bkbits.net:/home/davem/sparc-2.5
-
David S. Miller authored
-
Robert Love authored
This is a minor cleanup. We currently define and declare the BKL's kernel_flag spinlock on either SMP or PREEMPT, which means a UP+PREEMPT machine gets it. We only need the actual lock on SMP.
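Schematically, the definition just moves under the SMP guard (file location and initializer details aside):

    #ifdef CONFIG_SMP
    /* only SMP needs the actual spinlock; a UP+PREEMPT kernel does without it */
    spinlock_t kernel_flag = SPIN_LOCK_UNLOCKED;
    #endif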
-
Kai Germaschewski authored
into tp1.ruhr-uni-bochum.de:/scratch/kai/kernel/v2.5/linux-2.5.isdn
-
Marcel Holtmann authored
-
Linus Torvalds authored
-
Linus Torvalds authored
-
David S. Miller authored
-
David S. Miller authored
into nuts.ninka.net:/home/davem/src/BK/sparc-2.5
-
David S. Miller authored
-
David S. Miller authored
-
Linus Torvalds authored
Merge bk://kernel.bkbits.net/gregkh/linux/initramfs-2.5 into penguin.transmeta.com:/home/penguin/torvalds/repositories/kernel/linux
-
Greg Kroah-Hartman authored
This also shows how to add files to the initramfs build, but is commented out. Patch originally done by Kai.
-
Greg Kroah-Hartman authored
-
David S. Miller authored
-
Linus Torvalds authored
Merge bk://cifs.bkbits.net/linux-2.5cifs into penguin.transmeta.com:/home/penguin/torvalds/repositories/kernel/linux
-
Steve French authored
Fix oops in getdfs when a null path is passed in on mount. Fix oops where a changed readsize caused a readpages problem. Add support for altering rsize so the number of pages read across the net can be reduced below the default of 4.
-
Ingo Molnar authored
This fixes the SMP runqueue locking bug when updating the waker's priority. It also includes:
- only update the priority and do a requeueing if the sleep average has changed. (This does not happen for pure CPU hogs or pure interactive tasks, so there is no need to requeue/recalc-prio in that case.) [All the necessary values are available at that point already, so gcc should have an easy job making this branch really cheap.]
- do not do a full task activation in the migration-thread path - that is supposed to be near-atomic anyway.
- fix up comments
I solved the SMP locking bug by moving the requeueing outside of try_to_wake_up(). It does not matter that the priority update is no longer done atomically, since the current process won't do anything in between. (Well, it could get preempted in a preemptible kernel, but even that won't do any harm.)
-
Stephen Hemminger authored
The following messages are of interest only when debugging aio. Otherwise, they are just console clutter.
-
Matthew Wilcox authored
- Remove broken lock accounting
- Introduce __locks_delete_block()
- Stop using kdevname()
- Fix locks_remove_posix()
-
Ingo Molnar authored
- fix a (now-)bug in kernel/softirq.c: it did a wakeup outside any atomic region, which falsely identified random processes as doing a non-atomic wakeup and caused random priority boosts to be handed out.
- reset the initial idle thread's priority back to PRIO_MAX after doing the wakeup_forked_process() - correct preemption relies on this.
- update current->prio immediately after a backboost.
- clean up effective_prio() & the sleep_avg calculations so that there are fewer RT-task special cases. This has the advantage that the sleep_avg is maintained even for RT tasks - this could be advantageous for tasks that briefly enter/exit RT mode.
-