- 08 Mar, 2005 28 commits
-
-
Nigel Cunningham authored
Here's a patch I've prepared which improves the speed at which memory is freed prior to suspend. It should be a big gain for swsusp. For suspend2 it isn't used much, but it has shown big improvements when I set a very low image size limit and had memory quite full.
1GB P4, 2.6.11+Suspend2 2.1.8. Soft image size limit set to 2MB to emulate Pavel's implementation (eat as much memory as we can).
Without patch:
Freed 16545 pages in 4000 jiffies = 16.16 MB/s
Freed 83281 pages in 14060 jiffies = 23.14 MB/s
Freed 237754 pages in 41482 jiffies = 22.39 MB/s
With patch:
Freed 52257 pages in 6700 jiffies = 30.46 MB/s
Freed 105693 pages in 11035 jiffies = 37.41 MB/s
Freed 239007 pages in 18284 jiffies = 51.06 MB/s
With a less aggressive image size limit (200MB):
Without the patch:
Freed 14600 pages in 1749 jiffies = 32.61 MB/s (anomalous!)
Freed 88563 pages in 14719 jiffies = 23.50 MB/s
Freed 205734 pages in 32389 jiffies = 24.81 MB/s
With the patch:
Freed 68252 pages in 496 jiffies = 537.52 MB/s
Freed 116464 pages in 569 jiffies = 798.54 MB/s
Freed 209699 pages in 705 jiffies = 1161.89 MB/s
The later pages take more work to get, which accounts for the lower MB/s when there are fewer pages to free. Without the patch, though, getting the easier pages also takes longer, because we make a far greater number of invocations of shrink_all_memory to get the same number of pages.
Signed-Off-By: Nigel Cunningham <ncunningham@cyclades.com> Acked-By: Pavel Machek <pavel@ucw.cz> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
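The gain comes from asking the VM for pages in large batches rather than making many small requests. A minimal sketch of that batching idea, assuming the 2.6-era shrink_all_memory(nr_pages) helper, which returns the number of pages it reclaimed (the free_some_memory() wrapper is hypothetical, not the actual swsusp code):

    static unsigned long free_some_memory(unsigned long pages_needed)
    {
            unsigned long freed = 0;
            int ret;

            do {
                    /* Ask for the whole remaining batch at once; each call
                     * then amortises its scan setup over many pages instead
                     * of being repeated for every handful of pages. */
                    ret = shrink_all_memory(pages_needed - freed);
                    if (ret <= 0)
                            break;  /* nothing more could be reclaimed */
                    freed += ret;
            } while (freed < pages_needed);

            return freed;
    }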
-
Christoph Hellwig authored
This way we actually shake dentries before inodes and thus mark more inodes reclaimable once we shake them. Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
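The ordering matters because unused dentries hold references on their inodes. A hedged illustration of the effect (prune_dcache()/prune_icache() are the 2.6 cache pruners; the real patch achieves this ordering through shrinker registration order rather than a literal call sequence like this):

    /* Dropping unused dentries first releases their inode references, so
     * the inode pass that follows finds those inodes unpinned and can
     * reclaim them within the same shrink cycle. */
    prune_dcache(nr_to_scan);
    prune_icache(nr_to_scan);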
-
Hugh Dickins authored
A small change to the tests for "Bad page state", to avoid one class of the page_remove_rmap BUG reports, giving more information while letting the system continue: check page_mapcount (_mapcount != -1) rather than page_mapped (_mapcount >= 0).
And how does _mapcount go bad? In the case under study, it looks sure now that an overheating(?) Pentium III sometimes gets confused by a pair of instructions in the no-buddy-bitmap __free_pages_bulk, and clears the PG_private bit from the _mapcount field while buddying around - changing the PG_private value changes the bit cleared from _mapcount. "Bad page state mapcount:-4096" would have tracked this down much sooner, and will be recognizable if other cpus show the same aberrant reaction to 2.6.11.
The page_remove_rmap BUG does need to be replaced by more permissive and informative handling, but I'm not yet ready to finalize such a patch. Please admit Colin Harrison to the Order of the Iridescent Penguin, for his tireless testing.
Signed-off-by: Hugh Dickins <hugh@veritas.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
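The difference between the two tests, sketched with the 2.6-era helpers (the surrounding check is illustrative, not the exact free-path code):

    /* page_mapped() is true only for _mapcount >= 0, so a corrupted
     * negative count (e.g. mapcount:-4096 from a flipped bit) slips past
     * it.  page_mapcount() is _mapcount + 1, so any value other than the
     * freed-page default of -1 gets reported instead of BUGging. */
    if (page_mapcount(page))
            bad_page(__FUNCTION__, page);   /* "Bad page state mapcount:..." */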
-
Marcelo Tosatti authored
Use the find_*_page helpers in the swap code instead of handcoding the lookups. Signed-off-by: Marcelo Tosatti <marcelo.tosatti@cyclades.com> Acked-by: Hugh Dickins <hugh@veritas.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
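As an example of the helper style (illustrative; in 2.6 the swap cache is the swapper_space mapping indexed by swp_entry_t.val):

    /* find_get_page() performs the locked radix-tree lookup and takes a
     * page reference in one call, replacing the open-coded variant. */
    struct page *page = find_get_page(&swapper_space, entry.val);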
-
Oleg Nesterov authored
On top of "[PATCH 2/2] readahead: improve sequential read detection". Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Oleg Nesterov authored
1. The current code can't always detect sequential reading when the read size is not PAGE_CACHE_SIZE aligned. If an application reads the file in 4096+512 byte chunks, we have:
1st read: first read detected, prev_page = 2.
2nd read: offset == 2, the read is considered random.
page_cache_readahead() should treat prev_page == offset as sequential access. In this case it is better to ++offset, because of blockable_page_cache_readahead(offset, size).
2. If an application reads 4096 bytes with *ppos == 512, we have to read 2 pages, but req_size == 1 in do_generic_mapping_read(). Usually it's not a problem, but in the random read case it results in unnecessary page cache misses.
~$ time dd conv=notrunc if=/tmp/GIG of=/tmp/dummy bs=$((4096+512))
2.6.11-clean:   real=370.35 user=0.16 sys=14.66
2.6.11-patched: real=234.49 user=0.19 sys=12.41
Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
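The sequential test from point 1, sketched out (names follow the 2.6 readahead code, but this is an illustration of the check rather than the literal diff):

    /* A repeated offset means the previous read ended inside this page:
     * treat it as sequential, and advance past the page we already have
     * before handing the offset to blockable_page_cache_readahead(). */
    sequential = (offset == ra->prev_page) || (offset == ra->prev_page + 1);
    if (offset == ra->prev_page)
            offset++;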
-
Oleg Nesterov authored
Currently page_cache_readahead() treats ra->size == 0 (first read) and ra->size == -1 (ra_off() was called) separately, but does exactly the same thing in both cases. With this patch we may assume that reading starts in the 'ra_off()' state, so we don't need to consider the first read as a special case.
file_ra_state_init() sets ra->prev_page = -1; ra->size = 0. When page_cache_readahead() is called for the first time it sets ra->size to a nonzero value, either via get_init_ra_size() or ra_off(). So ra->size == 0 implies that ra->prev_page == -1. (I am ignoring the case when readahead is disabled via ra->ra_pages == 0.)
page_cache_readahead() detects sub-page sized reads with:
	if (offset == ra->prev_page && req_size == 1 && ra->size != 0)
But if offset == ra->prev_page, then ra->size == 0 can happen only if offset == -1, so there is no need to check ra->size here. If an application starts reading a 16TB file from the last page, readahead can't help anyway.
First offset==0 read, or first sequential detection:
	if ((ra->size == 0 && offset == 0) || (ra->size == -1 && sequential))
could be changed to:
	if ((ra->size == 0 && sequential) || (ra->size == -1 && sequential))
which means:
	if (sequential && (ra->size == 0 || ra->size == -1))
Random case detection:
	if (!sequential || (ra->size == 0))
But if sequential == 1, then ra->size can't be 0; that case is already handled above. Now we have:
	if (offset == ra->prev_page && req_size == 1)		/* sub-page reads */
	if (sequential && (ra->size == 0 || ra->size == -1))	/* first offset==0 read or first sequential */
	if (!sequential)					/* random case */
Now ->size is checked in only one place, so ra_off() can set ra->size = 0 and we can just test ->size against 0.
Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Oleg Nesterov authored
I think that do_page_cache_readahead() can be inlined in blockable_page_cache_readahead(), this makes the code a bit more readable in my opinion. Also makes check_ra_success() static inline. Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Oleg Nesterov authored
This patch introduces make_ahead_window() function for simplification of page_cache_readahead. Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Oleg Nesterov authored
get_next_ra_size() can get all info from file_ra_state. Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Oleg Nesterov authored
There is no point in setting ra->prev_page before 'goto out', it will be overwritten anyway. Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Hugh Dickins authored
Ingo's patch to reduce scheduling latencies, by checking for lockbreak in copy_page_range, was in the -VP and -mm patchsets some months ago; but got preempted by the 4level rework, and not reinstated since. Restore it now in copy_pte_range - which mercifully makes it easier. Signed-off-by: Hugh Dickins <hugh@veritas.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
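The lock-break idiom being restored, as a hedged sketch (need_resched()/need_lockbreak() are the era's helpers and the copy path still runs under mm->page_table_lock; the exact placement inside copy_pte_range differs):

    if (need_resched() ||
        need_lockbreak(&src_mm->page_table_lock) ||
        need_lockbreak(&dst_mm->page_table_lock)) {
            /* Bail out of the inner copy loop so the caller can drop the
             * locks, let the waiter or the scheduler run, and then resume
             * at the next pte. */
            break;
    }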
-
Gordon Jin authored
This patch fixes 2 corner cases of overflow caused by the len argument in sys_mincore():
Case 1: len is so large that it overflows to 0 after page alignment, e.g. len=(size_t)(-1), i.e. 0xff...ff. Expected result: overflow, return ENOMEM. Current result: len is aligned to 0, then treated the same as len=0, and the call succeeds. This corner case has already been fixed in do_mmap_pgoff(), and sys_mincore() needs the same fix.
Case 2: len is a large number that does not overflow after alignment, but start+len overflows, e.g. len=(size_t)(-PAGE_SIZE) and start>0. Expected result: overflow, return ENOMEM. Current result: return EINVAL. It looks like len is being treated as a possibly negative value, probably influenced by the manpage. But since len's type is size_t, i.e. unsigned, it shouldn't be treated as negative. I've also reported this inconsistency against the mincore manpage.
Signed-off-by: Gordon Jin <gordon.jin@intel.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
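The two checks, as a hedged sketch of the entry-path validation (variable names and exact placement are illustrative, not the literal sys_mincore() diff):

    size_t orig_len = len;          /* hypothetical: keep pre-alignment value */
    unsigned long end;

    len = PAGE_ALIGN(len);
    if (!len && orig_len)           /* case 1: alignment wrapped len to 0 */
            return -ENOMEM;
    end = start + len;
    if (end < start)                /* case 2: start + len wrapped around */
            return -ENOMEM;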
-
Prasanna Meda authored
- Race in mempool_resize: the memcpy could run past the end of the kmalloced elements.
- When new_min_nr is the same as min_nr, just return instead of reallocating and copying; changed '<' to '<='.
- Changed the while condition to the same sense as the if condition, from '>' to '<'; it is easier to think with only one of the left and right brains at a time.
Signed-off-by: Prasanna Meda <pmeda@akamai.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Dave Hansen authored
Appended is a patch which stops using zone->zone_mem_map to calculate the buddy and combined page pointers. It uses the fact that the mem_map array is guaranteed to be contiguous for the surrounding (1 << MAX_ORDER) pages; the relative positions of the pages in the physical address space provide the alignment, which coincidentally fixes the issue where zones are not aligned at MAX_ORDER. There is a very comprehensive comment in the new code explaining the mathematical relationship between a page and its buddy, so I won't reproduce it here.
This kind of approach is required for CONFIG_NONLINEAR systems, where the mem_map is not contiguous within a zone and zone->zone_mem_map is not used at all.
This patch has been boot-tested on a large variety of systems and architectures: my P4 laptop, 16-way NUMAQ, 16-way Summit, 4-way x86 SMP, ppc64 LPAR, x86_64, and several ia64 configurations. It has been performance-tested on a 16-way NUMAQ. SDET shows a very slight (within margin of error) performance gain. Kernbench shows an approximately 1% decrease in system time with this patch applied. So it has a likely positive performance impact. However, the patch has the potential to hurt performance on systems with an expensive page_to_pfn() implementation. But I think the NUMAQ has one of the more expensive ones around, and it doesn't seem to mind too much.
Signed-off-by: Andy Whitcroft <apw@shadowen.org> Signed-off-by: Dave Hansen <haveblue@us.ibm.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
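The page/buddy relationship the new code relies on fits in one expression; a hedged sketch (the helper name is illustrative, not the kernel's):

    /* Within a MAX_ORDER-aligned, physically contiguous block, the buddy of
     * the page at 'pfn' for a given order differs from it in exactly one
     * bit, so no zone_mem_map base pointer is needed to locate it. */
    static inline unsigned long buddy_pfn(unsigned long pfn, unsigned int order)
    {
            return pfn ^ (1UL << order);
    }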
-
Nick Piggin authored
Here is another small OOM killer improvement. Previously we needed to reclaim SWAP_CLUSTER_MAX pages in a single pass to avoid going OOM. This changes it so that we need only reclaim that many pages over the entire try_to_free_pages() run. Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
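A sketch of the changed success test (variable names are illustrative; the real loop in try_to_free_pages() iterates over scan priorities):

    total_reclaimed += nr_reclaimed;        /* accumulate across priorities */
    if (total_reclaimed >= SWAP_CLUSTER_MAX)
            return 1;                       /* enough progress: not OOM */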
-
Ingo Molnar authored
1. Typos/spelling ;-)
2. Removed prev_vma and find_vma_prev, because the condition checked later was always true.
3. Moved the free_area_cache/mmap_base check into arch_unmap_area_topdown(), where I think it belongs.
4. Removed the extra free_area_cache setting code in the while loop, as it only has to be set when we actually (and successfully) return from this function.
The only visible change to the layout should be the following:
-
Marcelo Tosatti authored
With silly pageout testcases it is possible to place huge amounts of memory under I/O. With a large request queue (CFQ uses 8192 requests) it is possible to place _all_ memory under I/O at the same time. This means that all memory is pinned and unreclaimable, and the VM gets upset and goes oom.
The patch limits the amount of memory which is under pageout writeout to a little more than the amount of memory at which balance_dirty_pages() callers will synchronously throttle. This means that heavy pageout activity can starve heavy writeback activity completely, but heavy writeback activity will not cause starvation of pageout, because we don't want a simple `dd' to be causing excessive latencies in page reclaim.
Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
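A hedged sketch of that limit (read_page_state() and blk_congestion_wait() are 2.6.11-era interfaces; dirty_thresh stands in for the threshold the dirty-limit code computes, and the 10% headroom is illustrative):

    /* Stall pageout briefly while the number of pages already under
     * writeback exceeds a level slightly above the point at which
     * balance_dirty_pages() callers are being throttled. */
    while (read_page_state(nr_writeback) > dirty_thresh + dirty_thresh / 10)
            blk_congestion_wait(WRITE, HZ/10);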
-
Kai Mäkisara authored
Apparently `tar' errors out if it cannot perform lseek() against a tape. Work around that in-kernel. Signed-off-by: Kai Makisara <kai.makisara@kolumbus.fi> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Matt Porter authored
This patch fixes a problem where the current kernel (linux-2.6.11-rc5) could not be compiled when "support for early boot texts over serial port" (CONFIG_SERIAL_TEXT_DEBUG=y) is enabled. Signed-off-by: Gerhard Jaeger <gjaeger@sysgo.com> Signed-off-by: Matt Porter <mporter@kernel.crashing.org> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Andrew Morton authored
`current' is a lousy choice for a variable name. This driver explodes on ppc64 because `current' expands to (local_paca->__current). OK, the driver doesn't compile on power4 anyway, but... Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Eric Lammerts authored
When I stat(2) a device node on a cramfs, the st_blocks field is bogus (it's derived from the size field which in this case holds the major/minor numbers). This makes du(1) output completely wrong. Signed-off-by: Eric Lammerts <eric@lammerts.org> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Andrew Morton authored
We don't actually need this. The reason why the ppc64 build exploded was that I had CC="gcc -m64", and even though the build system turns that into "gcc -m64 -m32", that is, surprisingly, equivalent to "gcc -m64". Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Adrian Bunk authored
For about half a year, this driver has not been selectable via Kconfig. Since it seems no one missed this driver, this patch removes it. Signed-off-by: Adrian Bunk <bunk@stusta.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Andrew Morton authored
include/asm/pgtable.h:267: warning: `struct vm_area_struct' declared inside parameter list
include/asm/pgtable.h:267: warning: its scope is only this definition or declaration, which is probably not what you want
Cc: Andi Kleen <ak@muc.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
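The usual cure for this warning, shown as a sketch (the helper prototype is hypothetical; the point is the forward declaration ahead of any prototype that mentions the struct):

    /* Without this forward declaration, the prototype below introduces a
     * 'struct vm_area_struct' whose scope is only its own parameter list,
     * which is exactly what gcc is warning about. */
    struct vm_area_struct;

    int some_pgtable_helper(struct vm_area_struct *vma, unsigned long addr);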
-
Andrew Morton authored
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Kumar Gala authored
Trivial fix for 2.6.11 oprofile compilation on e500-based ppc. Signed-off-by: Andy Fleming <afleming@freescale.com> Signed-off-by: Kumar Gala <kumar.gala@freescale.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Linus Torvalds authored
Noted by Georgi Guninski.
-
- 07 Mar, 2005 12 commits
-
-
Linus Torvalds authored
Merge bk://linux-1394.bkbits.net/1394-2.6 into ppc970.osdl.org:/home/torvalds/v2.6/linux
-
Linus Torvalds authored
-
Alan Cox authored
-
Alan Cox authored
This gets the stallion driver working again on non-SMP. Patch by Wayne Meissner. Signed-off-by: Alan Cox <alan@redhat.com> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Alan Cox authored
Sorry forgot I made these constants
-
Alan Cox authored
PWC has a new maintainer (Luc Saillard), and the various contentious binary hooks have been removed and replaced with reverse-engineering work. Please restore it to the kernel. Alan Signed-off-by: Luc Saillard <luc@saillard.org> Signed-off-by: Alan Cox <alan@redhat.com> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Alan Cox authored
Patch-by: ARTOP Corp. Basically this adds the small bits for the new card and makes one set of items an array because the new card is multi-channel. Signed-off-by: Alan Cox <alan@redhat.com> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Alan Cox authored
The current driver does workarounds for errata that do not work with fibre-attached devices. This patch avoids doing the workaround on the only known fibre-attached pcnet/32 hardware. All handling is automated via the PCI sub-IDs. Patch by: Guido Guenther Signed-off-by: Alan Cox <alan@redhat.com> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Alan Cox authored
-
Matthew Wilcox authored
- Support sparse - Use CONFIG_64BIT instead of CONFIG_PARISC64 - Find palo in /sbin even if it's not in our $PATH Signed-off-by: Randolph Chung <tausq@parisc-linux.org> Signed-off-by: Matthew Wilcox <willy@parisc-linux.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Matthew Wilcox authored
- Use def_bool where possible - Eliminate PARISC64 Signed-off-by: Matthew Wilcox <willy@parisc-linux.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Matthew Wilcox authored
- Fix all ioremap abuse noticed by CONFIG_DEBUG_IOREMAP on an N4000. - F_EXTEND doesn't do what I thought it did on a 32-bit box. - New calling convention for txn_alloc_irq() - Replace CONFIG_PARISC64 with CONFIG_64BIT - Ensure alignment of the irt buffer to 8 bytes even when slab debugging is enabled Signed-off-by: Matthew Wilcox <willy@parisc-linux.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-