Author: Steven Pratt
With Ram Pai <linuxram@us.ibm.com>

- request size is now passed into page_cache_readahead.  This allows the
  removal of the size-averaging code in the current readahead logic.

- readahead ramp-up is now faster (especially for larger request sizes)

- No longer a "slow read path".  Readahead is turned off at the first random
  access and turned back on at the first sequential access.

- Code now handles thrashing, slowly reducing the readahead window until
  thrashing stops or the minimum size is reached.

- Returned to the old behavior where the first access is assumed sequential
  only if it is at offset 0.

- Designed to handle larger (1M or above) window sizes efficiently.

Benchmark results:

machine 1: 8-way Pentium IV, 1GB memory, tests run to a 36GB SCSI disk
(Similar results were seen on a 1-way 866MHz box with an IDE disk.)

TioBench:

tiobench.pl --dir /mnt/tmp --block 4096 --size 4000 --numruns 2 --threads 1(4,16,64)

4k request size sequential read results in MB/sec

Threads    2.6.9    w/patches    %diff    diff
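
As a rough illustration of the window policy described in the list above (this
is not code from the patch; struct ra_window, ra_update and its fields are
hypothetical names invented for the sketch), a minimal C model of the
ramp-up / thrash-shrink / random-off behaviour might look like this:

	/*
	 * Illustrative sketch only.  Models the described policy: ramp up on
	 * sequential access, switch readahead off on the first random access,
	 * and shrink toward a minimum while thrashing is detected.
	 */
	#include <stdio.h>

	struct ra_window {
		unsigned long size;	/* current readahead window, in pages */
		unsigned long min;	/* floor while thrashing */
		unsigned long max;	/* cap, e.g. 1M worth of pages */
		int enabled;		/* off after a random access */
	};

	/* Update the window for one read request of 'req_size' pages. */
	static void ra_update(struct ra_window *ra, int sequential,
			      int thrashing, unsigned long req_size)
	{
		if (!sequential) {
			/* First random access: stop reading ahead entirely. */
			ra->enabled = 0;
			return;
		}

		/* First sequential access turns readahead back on. */
		if (!ra->enabled) {
			ra->enabled = 1;
			ra->size = req_size > ra->min ? req_size : ra->min;
		}

		if (thrashing) {
			/* Shrink slowly until thrashing stops or the floor is hit. */
			if (ra->size / 2 >= ra->min)
				ra->size /= 2;
			else
				ra->size = ra->min;
			return;
		}

		/* Ramp up quickly, seeded by the request size itself. */
		ra->size = ra->size * 2 + req_size;
		if (ra->size > ra->max)
			ra->size = ra->max;
	}

	int main(void)
	{
		struct ra_window ra = { .size = 0, .min = 4, .max = 256, .enabled = 0 };

		ra_update(&ra, 1, 0, 16);	/* sequential: enable and seed */
		ra_update(&ra, 1, 0, 16);	/* sequential: ramp up */
		ra_update(&ra, 1, 1, 16);	/* thrashing: halve */
		ra_update(&ra, 0, 0, 16);	/* random: turn readahead off */
		printf("window=%lu pages, enabled=%d\n", ra.size, ra.enabled);
		return 0;
	}

Passing the request size into the update step is what lets the ramp-up scale
with larger requests instead of relying on size averaging, per the first item
in the list.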