- 04 Dec, 2002 2 commits
Randy Dunlap authored
Originally by Bob Miller <rem@osdl.org>. Allows the raw driver to be built as a module, with a GPL license.
Stephen Rothwell authored
This is the generic part of the start of the compatibility syscall layer. I think I have made it generic enough that each architecture can define what compatibility means. To use this, an architecture must create asm/compat.h and provide typedefs for (currently) 'compat_time_t', 'struct compat_timeval' and 'struct compat_timespec'.
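For illustration, a minimal sketch of what an architecture's asm/compat.h could look like on a 64-bit arch whose 32-bit ABI used a 32-bit time_t (the field widths here are example per-arch choices, not part of the generic layer):

    #ifndef _ASM_COMPAT_H
    #define _ASM_COMPAT_H
    #include <asm/types.h>                  /* for s32 */

    typedef s32 compat_time_t;

    struct compat_timeval {
            compat_time_t   tv_sec;         /* seconds */
            s32             tv_usec;        /* microseconds */
    };

    struct compat_timespec {
            compat_time_t   tv_sec;         /* seconds */
            s32             tv_nsec;        /* nanoseconds */
    };

    #endif /* _ASM_COMPAT_H */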
- 03 Dec, 2002 2 commits
Mike Phillips authored
This fixes a lockup and potential oops when accessing /proc/net/tr_rif while the token ring interface is under heavy load.
Linus Torvalds authored
Merge into penguin.transmeta.com:/home/penguin/torvalds/repositories/kernel/linux
- 04 Dec, 2002 1 commit
Christoph Hellwig authored
- 03 Dec, 2002 35 commits
Christoph Hellwig authored
SGI Modid: 2.5.x-xfs:slinx:134222a
Christoph Hellwig authored
SGI Modid: 2.5.x-xfs:slinx:134216a
Christoph Hellwig authored
SGI Modid: 2.5.x-xfs:slinx:134187a
Christoph Hellwig authored
SGI Modid: 2.5.x-xfs:slinx:134185a
Stephen Lord authored
SGI Modid: 2.5.x-xfs:slinx:134098a
Stephen Lord authored
Keep log buffers around in active state for as long as possible. This allows us to coalesce several transactions into one buffer and reduce disk traffic. SGI Modid: 2.5.x-xfs:slinx:134077a
Christoph Hellwig authored
SGI Modid: 2.5.x-xfs:slinx:134179a
Stephen Lord authored
SGI Modid: 2.5.x-xfs:slinx:133408a
Stephen Lord authored
Remove the callback processing from the log write path; we only do callbacks on I/O completion now. SGI Modid: 2.5.x-xfs:slinx:133285a
Stephen Lord authored
SGI Modid: 2.5.x-xfs:slinx:133254a
Christoph Hellwig authored
SGI Modid: 2.5.x-xfs:slinx:134176a
Stephen Lord authored
SGI Modid: 2.5.x-xfs:slinx:132911a
Christoph Hellwig authored
SGI Modid: 2.5.x-xfs:slinx:134172a
Nathan Scott authored
SGI Modid: 2.5.x-xfs:slinx:134113a
Nathan Scott authored
SGI Modid: 2.5.x-xfs:slinx:134107a
Christoph Hellwig authored
SGI Modid: 2.5.x-xfs:slinx:134068a
Nathan Scott authored
Data structures (sb, agf, agi, agfl) are now sector-size aware. Cleaned up the early mount code dealing with log devices and logsectsize. SGI Modid: 2.5.x-xfs:slinx:134065a
Nathan Scott authored
SGI Modid: 2.5.x-xfs:slinx:134064a
Nathan Scott authored
SGI Modid: 2.5.x-xfs:slinx:134059a
Christoph Hellwig authored
SGI Modid: 2.5.x-xfs:slinx:134013a
Nathan Scott authored
SGI Modid: 2.5.x-xfs:slinx:133971a
Linus Torvalds authored
Make the old 32-bit getdents() look more like the updated getdents64() for maintainability.
Linus Torvalds authored
Andrew Morton authored
2.5 is 20% slower than 2.4 in an AIM9 test which is just running readdir across /bin. A lot of this is due to lots of tiny calls to copy_to_user() in fs/readdir.c. The patch speeds up that test by 50%, so it's comfortably faster than 2.4. Also, there were lots of unchecked copy_to_user() and put_user() calls in there. Fixed all that up as well. The patch assumes that each arch has a working 64-bit put_user(), which appears to be the case.
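The shape of the fix can be sketched like this (a made-up record type and helper, not the actual fs/readdir.c code): build the record in a kernel buffer, then push it out with one checked copy instead of a put_user() per field:

    /* Sketch only - hypothetical names. */
    struct kdirent {
            unsigned long   d_ino;
            unsigned long   d_off;
            unsigned short  d_reclen;
            char            d_name[256];
    };

    static int emit_dirent(char __user *dst, const char *name,
                           int namlen, unsigned long ino,
                           unsigned long off, unsigned short reclen)
    {
            struct kdirent d;       /* reclen covers header + name + NUL,
                                     * assumed <= sizeof(d) */
            d.d_ino = ino;
            d.d_off = off;
            d.d_reclen = reclen;
            memcpy(d.d_name, name, namlen);
            d.d_name[namlen] = '\0';

            /* one checked copy_to_user() instead of several
             * tiny put_user()/copy_to_user() calls per entry */
            if (copy_to_user(dst, &d, reclen))
                    return -EFAULT;
            return 0;
    }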
Andrew Morton authored
If a page is "freed" while in the deferred-lru-addition queue, the final reference to it is held by that queue. When the queue gets spilled onto the LRU, the page is actually freed. Which is all expected and natural and works fine - it's a weird case. But one of the AIM9 tests was taking a 20% performance hit (relative to 2.4) because it was going into the page allocator for new pages while cache-hot pages were languishing out in the deferred-addition queue. So the patch changes things so that we spill the CPU's deferred-lru-addition queue before starting to free pages. This way, the recently-used pages actually make it to the hot/cold lists and are available for new allocations. It gets back 15 of the lost 20%. The other 5% is lost to the general additional complexity of all this stuff. (But we're 250% faster than 2.4 when running four instances of the test on a 4-way).
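Conceptually the change is tiny; a sketch of the idea (lru_add_drain() is the kernel helper that spills the per-CPU deferred-addition pagevec; the placement is paraphrased):

    /* before starting to free a batch of pages: */
    lru_add_drain();    /* flush this CPU's deferred-lru-addition
                         * queue so cache-hot pages reach the
                         * hot/cold lists and can be reallocated */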
Andrew Morton authored
This patch optimises the truncate of a zero-length file, which is a sufficiently common case to justify the extra test-n-branch. It does this by skipping the entire call into the fs if i_size is not being altered. The AIM9 `open_clo' test just loops, creating and unlinking a file. This patch speeds it up 50% for ext2, 600% for reiserfs.
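A sketch of the short-circuit (the placement and surrounding code are paraphrased; ATTR_SIZE/ia_valid/ia_size/i_size are the real names):

    /* skip the call into the filesystem when the size isn't
     * actually changing (e.g. O_TRUNC on an already-empty file) */
    if ((attr->ia_valid & ATTR_SIZE) &&
        attr->ia_size == inode->i_size)
            attr->ia_valid &= ~ATTR_SIZE;   /* nothing to do */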
Andrew Morton authored
The buffer-stripping code gets upset when it sees a non-uptodate buffer against an uptodate page. This happens because the write end_io handler clears BH_Uptodate. Add a buffer_req() test to suppress these warnings.
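Sketched, the suppressed check looks something like this (the exact predicate is paraphrased; buffer_uptodate()/buffer_req() test the real BH_Uptodate/BH_Req bits):

    /* a !uptodate buffer on an uptodate page is only a genuine
     * inconsistency if the buffer never went through I/O -
     * writeback clearing BH_Uptodate while the write is in
     * flight is a legitimate transient state */
    if (page_uptodate && !buffer_uptodate(bh) && !buffer_req(bh))
            buffer_error();     /* stand-in for the warning */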
Andrew Morton authored
Patch from Hugh Dickins and Robert Love. Fixes up the PF_MEMDIE handling so that it actually works. (PF_MEMDIE allows an oom-killed task to use the emergency memory reserves so that it can actually get out of the page allocator and die)
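The effect can be sketched as a check in the allocator's slow path (simplified; the surrounding code is paraphrased):

    /* let an oom-killed (PF_MEMDIE) task dip into the emergency
     * reserves, like PF_MEMALLOC reclaim does, so it can get out
     * of the allocator and exit */
    if ((current->flags & (PF_MEMALLOC | PF_MEMDIE)) &&
        !in_interrupt()) {
            page = rmqueue(zone, order);    /* ignore watermarks */
            if (page)
                    return page;
    }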
Andrew Morton authored
2.5's signal delivery is 20% slower than 2.4. A signal send/handle cycle was performing a total of 24 copy_*_user() calls, and copy_*_user() got optimised for large copies. The patch reduces that to six copy_*_user() calls, and gets us to about 5% slower than 2.4. We'd have to go back to some additional inlined copy_user() code to get the last 3% back, and HZ=100 to get the 2% back. It is noteworthy that the benchmark is not using float at all during the body of the test, yet the kernel is still doing all that floating point stuff (saving and restoring FPU state for the signal frame).
Andrew Morton authored
Patch from Mingming Cao <cmm@us.ibm.com>
- ipc_lock() needs a read_barrier_depends() to prevent indexing an uninitialized new array on the read side. This corresponds to the write memory barrier added in grow_ary() by Dipankar's patch to prevent indexing an uninitialized array.
- Replaced "wmb()" in the IPC code with "smp_wmb()". "wmb()" produces a full write memory barrier in both UP and SMP kernels, while "smp_wmb()" provides a full write memory barrier in an SMP kernel but only a compiler directive in a UP kernel. The same change was made for "rmb()".
- Removed the rmb() in ipc_get(). We do not need a read memory barrier there since ipc_get() is protected by the ipc_ids.sem semaphore.
- Added more comments about why write barriers and read barriers are needed (or not needed) here or there.
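The core publish/consume pattern the barriers enforce, sketched with hypothetical names (not the actual ipc/util.c code):

    /* writer side, e.g. grow_ary(): initialize the new
     * array slot, then publish the array pointer */
    new_entries[id] = entry;
    smp_wmb();                  /* init happens before publish */
    ids->entries = new_entries;

    /* reader side, e.g. ipc_lock(): fetch the pointer,
     * then index through it */
    entries = ids->entries;
    read_barrier_depends();     /* pairs with the smp_wmb():
                                 * don't index the new array
                                 * before seeing its contents */
    entry = entries[id];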
Andrew Morton authored
With some workloads a large number of pages coming off the LRU are pinned blockdev pagecache - things like ext2 group descriptors, pages which have buffers in the per-cpu buffer LRUs, etc. They keep churning around the inactive list, reducing the overall page reclaim effectiveness. So move these pages onto the active list.
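Sketched, the policy change in the inactive-list scan (simplified; the real test is more involved):

    /* buffers we can't release right now mean the page is
     * pinned (group descriptors, per-cpu buffer LRUs, ...);
     * promote it instead of churning it around inactive */
    if (page_has_buffers(page) &&
        !try_to_release_page(page, gfp_mask))
            goto activate_locked;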
Andrew Morton authored
Pages from memory-backed filesystems are supposed to be moved up onto the active list, but that's not working because fail_writepage() is called when the page is not on the LRU. So look for this case in page reclaim and handle it there. It's also more efficient: the VM knows more about what is going on, and this later leads to the removal of fail_writepage().
Andrew Morton authored
The patch addresses some search complexity failures which occur when there is a large amount of dirty data on the inactive list. Normally we attempt to write out those pages and then move them to the head of the inactive list. But this goes against page aging, and means that the page has to traverse the entire list again before it can be reclaimed. But the VM really wants to reclaim that page - it has reached the tail of the LRU. So what we do in this patch is to mark the page as needing reclamation, and then start I/O. In the IO completion handler we check to see if the page is still probably reclaimable and if so, move it to the tail of the inactive list, where it can be reclaimed immediately. Under really heavy swap-intensive loads this increases the page reclaim efficiency (pages reclaimed/pages scanned) from 10% to 25%. Which is OK for that sort of load. Not great, but OK. This code path takes the LRU lock once per page. I didn't bother playing games with batching up the locking work - it's a rare code path, and the machine has plenty of CPU to spare when this is happening.
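The mechanism, sketched (PG_reclaim is the real flag this scheme is built on; the locking and helper names are simplified):

    /* at the tail of the LRU: tag the dirty page, start I/O */
    SetPageReclaim(page);
    writepage(page);

    /* later, in the writeback completion path: */
    if (PageReclaim(page)) {
            ClearPageReclaim(page);
            /* one lru-lock round trip per page: put it at the
             * reclaim end of the inactive list so it can be
             * freed immediately instead of traversing the
             * whole list again */
            spin_lock_irqsave(&zone->lru_lock, flags);
            if (PageLRU(page) && !PageActive(page))
                    list_move_tail(&page->lru, &zone->inactive_list);
            spin_unlock_irqrestore(&zone->lru_lock, flags);
    }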
Andrew Morton authored
This removes the last remnant of the 2.4 way of throttling page allocators: the wait_on_page_writeback() against mapped-or-swapcache pages. I did this because: a) it's not used much, b) it's already causing big latencies, and c) with Jens' large-queue stuff it can cause huuuuuuuuge latencies. Like: ninety seconds. So kill it, and rely on blk_congestion_wait() to slow the allocator down to match the rate at which the IO system can retire writes.
Andrew Morton authored
These are the mount options which turn off and on the Orlov allocator. ext2 supports them but Ted forgot to wire them up for ext3.
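Wiring this up amounts to recognizing the same two option strings ext2 accepts in ext3's mount-option parser; a sketch (simplified matching; the flag spelling is made up):

    /* in ext3's parse_options() loop: */
    if (!strcmp(this_char, "oldalloc"))
            set_opt(sbi->s_mount_opt, OLDALLOC);    /* pre-Orlov */
    else if (!strcmp(this_char, "orlov"))
            clear_opt(sbi->s_mount_opt, OLDALLOC);  /* Orlov (default) */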