Commit ded29eb1 authored by Andrew Morton, committed by Linus Torvalds

[PATCH] revert lazy readahead

From: Trond Myklebust <trond.myklebust@fys.uio.no>

The following reversion is what fixes my regression.  That puts the
sequential read numbers back to the 2.6.0 values of ~140MB/sec (from the
current 2.6.1 values of 14MB/second)...

We were triggering I/O for the `ahead' window only when we hit the last page
in the `current' window.  That's bad because it gives no pipelining at all.

So go back to full pipelining.
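To make the pipelining point concrete, here is a minimal standalone sketch
(not the kernel code: the file_ra_state field names mirror the diff below,
but simulate_reads(), the 8-page window and the printf reporting are made up
for illustration).  Under the lazy heuristic the ahead window's I/O is only
issued when the last page of the current window is touched; with the restored
behaviour it is issued on the first access, so the I/O overlaps with the
application consuming the rest of the current window.

/*
 * Standalone illustration of the two trigger policies.  Field names
 * mirror struct file_ra_state from the diff; everything else is
 * invented for this sketch.
 */
#include <stdio.h>

struct ra_state {
        unsigned long start;        /* first page of the current window  */
        unsigned long size;         /* pages in the current window       */
        unsigned long ahead_start;  /* 0 => ahead window not yet issued  */
};

static void simulate_reads(int lazy)
{
        struct ra_state ra = { .start = 0, .size = 8, .ahead_start = 0 };
        unsigned long offset;

        for (offset = ra.start; offset < ra.start + ra.size; offset++) {
                int trigger = lazy
                        ? (ra.ahead_start == 0 &&
                           offset == ra.start + ra.size - 1) /* last page only */
                        : (ra.ahead_start == 0);             /* any page       */

                if (trigger) {
                        ra.ahead_start = ra.start + ra.size;
                        printf("%s: ahead I/O issued at page %lu of %lu\n",
                               lazy ? "lazy     " : "pipelined",
                               offset, ra.size - 1);
                }
        }
}

int main(void)
{
        simulate_reads(1);  /* lazy: I/O starts only at the window's end */
        simulate_reads(0);  /* pipelined: I/O starts at the first access */
        return 0;
}

Running it, the lazy policy reports the ahead I/O starting at page 7 of 7,
while the pipelined policy reports it at page 0, which is the overlap this
revert restores.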

It's not at all clear why this change made a 10x difference in NFS
throughput.
parent 2fd50585
@@ -449,12 +449,8 @@ page_cache_readahead(struct address_space *mapping, struct file_ra_state *ra,
 			 * accessed in the current window, there
 			 * is a high probability that around 'n' pages
 			 * shall be used in the next current window.
-			 *
-			 * To minimize lazy-readahead triggered
-			 * in the next current window, read in
-			 * an extra page.
 			 */
-			ra->next_size = preoffset - ra->start + 2;
+			ra->next_size = preoffset - ra->start + 1;
 		}
 		ra->start = offset;
 		ra->size = ra->next_size;
@@ -474,13 +470,9 @@ page_cache_readahead(struct address_space *mapping, struct file_ra_state *ra,
 	/*
 	 * This read request is within the current window. It is time
 	 * to submit I/O for the ahead window while the application is
-	 * about to step into the ahead window.
-	 * Heuristic: Defer reading the ahead window till we hit
-	 * the last page in the current window. (lazy readahead)
-	 * If we read in earlier we run the risk of wasting
-	 * the ahead window.
+	 * crunching through the current window.
 	 */
-	if (ra->ahead_start == 0 && offset == (ra->start + ra->size -1)) {
+	if (ra->ahead_start == 0) {
 		ra->ahead_start = ra->start + ra->size;
 		ra->ahead_size = ra->next_size;
 		actual = do_page_cache_readahead(mapping, filp,