Commit dc30b96a authored by Markus Stockhausen, committed by Jens Axboe

readahead: stricter check for bdi io_pages

ondemand_readahead() checks bdi->io_pages to cap the maximum number of
pages to process. This works until the readit section: if we do an
async-only readahead (async size == sync size) and the target is at the
beginning of the window, the window is expanded by another
get_next_ra_size() pages. blktrace for large reads shows that the kernel
then always issues a read of twice the intended size at the beginning of
processing. Add an additional check for io_pages in the lower part of
the function. The fix helps devices that hard-limit bio pages and rely
on proper handling of max_hw_read_sectors (e.g. older FusionIO cards).
For that reason it could qualify for stable.

Fixes: 9491ae4a ("mm: don't cap request size based on read-ahead setting")
Cc: stable@vger.kernel.org
Signed-off-by: Markus Stockhausen <stockhausen@collogia.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
parent cdcdcaae
mm/readahead.c
@@ -386,6 +386,7 @@ ondemand_readahead(struct address_space *mapping,
 {
 	struct backing_dev_info *bdi = inode_to_bdi(mapping->host);
 	unsigned long max_pages = ra->ra_pages;
+	unsigned long add_pages;
 	pgoff_t prev_offset;
 
 	/*
@@ -475,10 +476,17 @@ ondemand_readahead(struct address_space *mapping,
 	 * Will this read hit the readahead marker made by itself?
 	 * If so, trigger the readahead marker hit now, and merge
 	 * the resulted next readahead window into the current one.
+	 * Take care of maximum IO pages as above.
 	 */
 	if (offset == ra->start && ra->size == ra->async_size) {
-		ra->async_size = get_next_ra_size(ra, max_pages);
-		ra->size += ra->async_size;
+		add_pages = get_next_ra_size(ra, max_pages);
+		if (ra->size + add_pages <= max_pages) {
+			ra->async_size = add_pages;
+			ra->size += add_pages;
+		} else {
+			ra->size = max_pages;
+			ra->async_size = max_pages >> 1;
+		}
 	}
 
 	return ra_submit(ra, mapping, filp);
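
For illustration, here is a minimal userspace sketch (not kernel code) of the
clamp this patch adds. struct ra_state, next_ra_size(), expand_window() and
the sample numbers below are simplified assumptions for demonstration only;
the real get_next_ra_size() uses a different growth heuristic and the kernel's
file_ra_state carries more fields.

/* Standalone sketch of the window-expansion clamp: when an async-only
 * readahead window is grown, the new size must not exceed max_pages
 * (the bdi->io_pages derived cap), otherwise it is capped outright. */
#include <stdio.h>

struct ra_state {
	unsigned long size;        /* current window size in pages */
	unsigned long async_size;  /* pages read ahead asynchronously */
};

/* simplified stand-in for get_next_ra_size(): double, but never above max */
static unsigned long next_ra_size(const struct ra_state *ra, unsigned long max)
{
	unsigned long next = ra->size * 2;

	return next < max ? next : max;
}

static void expand_window(struct ra_state *ra, unsigned long max_pages)
{
	unsigned long add_pages = next_ra_size(ra, max_pages);

	if (ra->size + add_pages <= max_pages) {
		ra->async_size = add_pages;
		ra->size += add_pages;
	} else {
		/* cap the window so one submission never exceeds max_pages */
		ra->size = max_pages;
		ra->async_size = max_pages >> 1;
	}
}

int main(void)
{
	struct ra_state ra = { .size = 256, .async_size = 256 };
	unsigned long max_pages = 256;  /* hypothetical device page cap */

	expand_window(&ra, max_pages);
	/* Without the clamp the window would grow to 512 pages (the doubled
	 * read seen in the traces); with it, size stays at 256 and
	 * async_size becomes 128. */
	printf("size=%lu async_size=%lu\n", ra.size, ra.async_size);
	return 0;
}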