Commit 58c94cc1 authored by NeilBrown, committed by Jens Axboe

block: don't check for BIO_MAX_PAGES in blk_bio_segment_split()

blk_bio_segment_split() makes sure bios have no more than
BIO_MAX_PAGES entries in the bi_io_vec.
This was done because bio_clone_bioset() (when given a
mempool bioset) could not handle larger io_vecs.

No driver uses bio_clone_bioset() any more, they all
use bio_clone_fast() if anything, and bio_clone_fast()
doesn't clone the bi_io_vec.
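For context, a minimal sketch of why the two cloning helpers differ with respect to BIO_MAX_PAGES. This is an illustration only, not the real block/bio.c code; the helper names clone_with_bvecs() and clone_shared_bvecs() are invented for the example.

/* Illustrative sketch only -- not the actual block/bio.c implementations. */
#include <linux/bio.h>

/* bio_clone_bioset()-style clone: allocates and copies a full bvec table.
 * Allocating bio_segments(src) bvecs is what made very large bios a
 * problem, hence the old BIO_MAX_PAGES cap in blk_bio_segment_split(). */
static struct bio *clone_with_bvecs(struct bio *src, gfp_t gfp,
				    struct bio_set *bs)
{
	struct bio *b = bio_alloc_bioset(gfp, bio_segments(src), bs);
	struct bio_vec bv;
	struct bvec_iter iter;

	if (!b)
		return NULL;
	bio_for_each_segment(bv, src, iter)
		b->bi_io_vec[b->bi_vcnt++] = bv;	/* deep copy of each segment */
	return b;
}

/* bio_clone_fast()-style clone: no bvec allocation at all; the clone just
 * points at the source's existing table, so the table's size is irrelevant. */
static struct bio *clone_shared_bvecs(struct bio *src, gfp_t gfp,
				      struct bio_set *bs)
{
	struct bio *b = bio_alloc_bioset(gfp, 0, bs);

	if (!b)
		return NULL;
	b->bi_io_vec = src->bi_io_vec;	/* share, don't copy */
	b->bi_iter = src->bi_iter;
	return b;
}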

The main user of bio_clone_bioset() at this level
is bounce.c, and bouncing now happens before blk_bio_segment_split(),
so that is not of concern.
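A rough sketch of the submission-path ordering that paragraph relies on, assuming the legacy blk_queue_bio() path of this era; the real function does far more and is abridged here to the two calls that matter.

/* Abridged sketch of the legacy submission path (assumption: heavily
 * simplified, only the ordering relevant to this commit is shown). */
static blk_qc_t blk_queue_bio(struct request_queue *q, struct bio *bio)
{
	/* Bounce highmem pages first; bounce.c is the remaining
	 * bio_clone_bioset() caller, so it sees the bio before splitting. */
	blk_queue_bounce(q, &bio);

	/* Only then split, so blk_bio_segment_split() no longer has to keep
	 * its output small enough for a later bio_clone_bioset(). */
	blk_queue_split(q, &bio);

	/* ... request merging and insertion elided ... */
	return BLK_QC_T_NONE;
}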

So remove the big helpful comment and the code.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
parent 9b10f6a9
@@ -108,24 +108,8 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 	bool do_split = true;
 	struct bio *new = NULL;
 	const unsigned max_sectors = get_max_io_size(q, bio);
-	unsigned bvecs = 0;
 
 	bio_for_each_segment(bv, bio, iter) {
-		/*
-		 * With arbitrary bio size, the incoming bio may be very
-		 * big. We have to split the bio into small bios so that
-		 * each holds at most BIO_MAX_PAGES bvecs because
-		 * bio_clone_bioset() can fail to allocate big bvecs.
-		 *
-		 * Those drivers which will need to use bio_clone_bioset()
-		 * should tell us in some way. For now, impose the
-		 * BIO_MAX_PAGES limit on all queues.
-		 *
-		 * TODO: handle users of bio_clone_bioset() differently.
-		 */
-		if (bvecs++ >= BIO_MAX_PAGES)
-			goto split;
-
 		/*
 		 * If the queue doesn't support SG gaps and adding this
 		 * offset would create a gap, disallow it.
...