Commit 8cacc5ab authored by Paolo Valente, committed by Jens Axboe

block, bfq: do not merge queues on flash storage with queueing

To boost throughput with a set of processes doing interleaved I/O
(i.e., a set of processes whose individual I/O is random, but whose
merged cumulative I/O is sequential), BFQ merges the queues associated
with these processes, i.e., redirects the I/O of these processes into a
common, shared queue. In the shared queue, I/O requests are ordered by
their position on the medium, thus sequential I/O gets dispatched to
the device when the shared queue is served.

Queue merging costs execution time, because, to detect which queues to
merge, BFQ must maintain a list of the head I/O requests of active
queues, ordered by request positions. Measurements showed that this
costs about 10% of BFQ's total per-request processing time.
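
To make this housekeeping concrete, here is a minimal user-space sketch of
the idea (plain C, not kernel code: the struct, the sorted array and the
binary search are illustrative stand-ins for the per-group rb-tree that
bfq_pos_tree_add_move() maintains). The head request position of every
active queue is kept in sector order, so that a queue issuing I/O close to
another queue's head request can be found and considered for merging.

	/*
	 * Illustrative user-space sketch, NOT kernel code. BFQ keeps a
	 * per-group rb-tree updated by bfq_pos_tree_add_move(); the
	 * sorted array and binary search below only convey the idea and
	 * the kind of per-request bookkeeping involved.
	 */
	#include <stdio.h>
	#include <stdlib.h>

	struct queue_head {
		unsigned long long sector;	/* position of the queue's head request */
		int qid;			/* identifier of the owning queue */
	};

	static int cmp_sector(const void *a, const void *b)
	{
		const struct queue_head *qa = a, *qb = b;

		if (qa->sector < qb->sector)
			return -1;
		return qa->sector > qb->sector;
	}

	/* Find the active queue whose head request is closest to @sector. */
	static const struct queue_head *closest_queue(const struct queue_head *heads,
						      size_t n, unsigned long long sector)
	{
		size_t lo = 0, hi = n;	/* binary search: first head >= @sector */

		while (lo < hi) {
			size_t mid = lo + (hi - lo) / 2;

			if (heads[mid].sector < sector)
				lo = mid + 1;
			else
				hi = mid;
		}
		if (lo == n)
			return &heads[n - 1];
		if (lo == 0)
			return &heads[0];
		/* pick the nearer of the two neighbours around @sector */
		return heads[lo].sector - sector < sector - heads[lo - 1].sector ?
			&heads[lo] : &heads[lo - 1];
	}

	int main(void)
	{
		/* head request positions of three active queues */
		struct queue_head heads[] = { { 1000, 1 }, { 1008, 2 }, { 500000, 3 } };
		size_t n = sizeof(heads) / sizeof(heads[0]);

		qsort(heads, n, sizeof(heads[0]), cmp_sector);	/* keep position order */

		/* a queue issuing I/O at sector 1004 is interleaved with queues 1 and 2 */
		const struct queue_head *q = closest_queue(heads, n, 1004);
		printf("closest queue: %d (head at sector %llu)\n", q->qid, q->sector);
		return 0;
	}

Keeping such a structure up to date on every request insertion, removal and
merge is exactly the per-request overhead that this commit avoids on fast
flash devices.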

Request processing time becomes more and more critical as the speed of
the underlying storage device grows. Yet, fortunately, queue merging
is basically useless on the very devices that are fast enough to make
request processing time critical. To reach a high throughput, these
devices must have many requests queued at the same time. But, in this
configuration, the internal scheduling algorithms of these devices
also do the job of queue merging: they reorder requests so as to
obtain an I/O pattern that is as sequential as possible. As a
consequence, with processes doing interleaved I/O, the throughput
reached by such a device is likely to be the same with and without
queue merging.

In view of this fact, this commit disables queue merging, and all
related housekeeping, for non-rotational devices with internal
queueing. The total, single-lock-protected, per-request processing
time of BFQ drops to, e.g., 1.9 us on an Intel Core i7-2760QM@2.40GHz
(time measured with simple code instrumentation, and using the
throughput-sync.sh script of the S suite [1], in performance-profiling
mode). To put this result into context, the total,
single-lock-protected, per-request execution time of the lightest I/O
scheduler available in blk-mq, mq-deadline, is 0.7 us (mq-deadline is
~800 LOC, against ~10500 LOC for BFQ).
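
For reference, the detection and gating logic itself is small. Below is a
minimal user-space sketch (plain C with simplified, hypothetical names, not
the kernel API) that mirrors the logic added by the diff further down: the
flag is seeded from the non-rotational hint at init time, refined once the
device is observed to keep many requests in flight, and checked at the top
of the merge path.

	/*
	 * Illustrative user-space sketch, NOT kernel code: simplified,
	 * hypothetical names mirroring the logic added by this commit.
	 */
	#include <stdbool.h>
	#include <stdio.h>

	struct bfq_data_sketch {
		bool nonrot;			/* device reports itself as non-rotational */
		bool hw_tag;			/* device keeps many requests in flight */
		bool nonrot_with_queueing;	/* flag introduced by this commit */
	};

	/* at init time only the rotational hint is known (cf. bfq_init_queue()) */
	static void init_flag(struct bfq_data_sketch *d)
	{
		d->nonrot_with_queueing = d->nonrot;
	}

	/* refined once queueing has been measured (cf. bfq_update_hw_tag()) */
	static void update_flag(struct bfq_data_sketch *d)
	{
		d->nonrot_with_queueing = d->nonrot && d->hw_tag;
	}

	/* merge path: on non-rotational devices with queueing, skip merging entirely */
	static bool may_merge_queues(const struct bfq_data_sketch *d)
	{
		return !d->nonrot_with_queueing;
	}

	int main(void)
	{
		struct bfq_data_sketch ssd = { .nonrot = true };

		init_flag(&ssd);
		ssd.hw_tag = true;		/* internal queueing detected over time */
		update_flag(&ssd);
		printf("queue merging %s\n",
		       may_merge_queues(&ssd) ? "enabled" : "disabled");
		return 0;
	}

In the actual patch the same condition is computed in bfq_update_hw_tag()
and bfq_init_queue(), and bfq_setup_cooperator() returns early when it
holds, as shown in the hunks below.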

Disabling merging provides a further, remarkable benefit in terms of
throughput. Merging tends to make many workloads artificially more
uneven, mainly because shared queues remain non-empty for incomparably
more time than normal queues. So, if, for example, one of the queues
in a set of merged queues has a higher weight than a normal queue,
then the shared queue may inherit such a high weight and, by staying
almost always active, may force BFQ to perform I/O plugging most of
the time. This evidently makes it harder for BFQ to let the device
reach a high throughput.

As a practical example of this problem, and of the benefits of this
commit, we measured again the throughput in the nasty scenario
considered in previous commit messages: dbench test (in the Phoronix
suite), with 6 clients, on a filesystem with journaling, and with the
journaling daemon enjoying a higher weight than normal processes. With
this commit, the throughput grows from ~150 MB/s to ~200 MB/s on a
PLEXTOR PX-256M5 SSD. This is the same peak throughput reached by any
of the other I/O schedulers. As such, this is also likely to be the
maximum possible throughput reachable with this workload on this
device, because I/O is mostly random, and the other schedulers
basically just pass I/O requests to the drive as fast as possible.

[1] https://github.com/Algodev-github/S

Tested-by: Holger Hoffstätte <holger@applied-asynchrony.com>
Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
Tested-by: Francesco Pollicino <fra.fra.800@gmail.com>
Signed-off-by: Alessio Masola <alessio.masola@gmail.com>
Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
parent 2341d662
@@ -578,6 +578,7 @@ void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
 	bfqg_and_blkg_get(bfqg);

 	if (bfq_bfqq_busy(bfqq)) {
+		if (unlikely(!bfqd->nonrot_with_queueing))
 			bfq_pos_tree_add_move(bfqd, bfqq);
 		bfq_activate_bfqq(bfqd, bfqq);
 	}
@@ -595,7 +595,16 @@ static bool bfq_too_late_for_merging(struct bfq_queue *bfqq)
 		bfq_merge_time_limit);
 }

-void bfq_pos_tree_add_move(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+/*
+ * The following function is not marked as __cold because it is
+ * actually cold, but for the same performance goal described in the
+ * comments on the likely() at the beginning of
+ * bfq_setup_cooperator(). Unexpectedly, to reach an even lower
+ * execution time for the case where this function is not invoked, we
+ * had to add an unlikely() in each involved if().
+ */
+void __cold
+bfq_pos_tree_add_move(struct bfq_data *bfqd, struct bfq_queue *bfqq)
 {
 	struct rb_node **p, *parent;
 	struct bfq_queue *__bfqq;
@@ -1849,8 +1858,9 @@ static void bfq_add_request(struct request *rq)
 	/*
 	 * Adjust priority tree position, if next_rq changes.
+	 * See comments on bfq_pos_tree_add_move() for the unlikely().
 	 */
-	if (prev != bfqq->next_rq)
+	if (unlikely(!bfqd->nonrot_with_queueing && prev != bfqq->next_rq))
 		bfq_pos_tree_add_move(bfqd, bfqq);

 	if (!bfq_bfqq_busy(bfqq)) /* switching to busy ... */
@@ -1990,6 +2000,8 @@ static void bfq_remove_request(struct request_queue *q,
 			bfqq->pos_root = NULL;
 		}
 	} else {
+		/* see comments on bfq_pos_tree_add_move() for the unlikely() */
+		if (unlikely(!bfqd->nonrot_with_queueing))
 			bfq_pos_tree_add_move(bfqd, bfqq);
 	}
@@ -2075,6 +2087,11 @@ static void bfq_request_merged(struct request_queue *q, struct request *req,
 		 */
 		if (prev != bfqq->next_rq) {
 			bfq_updated_next_req(bfqd, bfqq);
+			/*
+			 * See comments on bfq_pos_tree_add_move() for
+			 * the unlikely().
+			 */
+			if (unlikely(!bfqd->nonrot_with_queueing))
 				bfq_pos_tree_add_move(bfqd, bfqq);
 		}
 	}
@@ -2357,6 +2374,46 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
 {
 	struct bfq_queue *in_service_bfqq, *new_bfqq;

+	/*
+	 * Do not perform queue merging if the device is non
+	 * rotational and performs internal queueing. In fact, such a
+	 * device reaches a high speed through internal parallelism
+	 * and pipelining. This means that, to reach a high
+	 * throughput, it must have many requests enqueued at the same
+	 * time. But, in this configuration, the internal scheduling
+	 * algorithm of the device does exactly the job of queue
+	 * merging: it reorders requests so as to obtain as much as
+	 * possible a sequential I/O pattern. As a consequence, with
+	 * the workload generated by processes doing interleaved I/O,
+	 * the throughput reached by the device is likely to be the
+	 * same, with and without queue merging.
+	 *
+	 * Disabling merging also provides a remarkable benefit in
+	 * terms of throughput. Merging tends to make many workloads
+	 * artificially more uneven, because of shared queues
+	 * remaining non empty for incomparably more time than
+	 * non-merged queues. This may accentuate workload
+	 * asymmetries. For example, if one of the queues in a set of
+	 * merged queues has a higher weight than a normal queue, then
+	 * the shared queue may inherit such a high weight and, by
+	 * staying almost always active, may force BFQ to perform I/O
+	 * plugging most of the time. This evidently makes it harder
+	 * for BFQ to let the device reach a high throughput.
+	 *
+	 * Finally, the likely() macro below is not used because one
+	 * of the two branches is more likely than the other, but to
+	 * have the code path after the following if() executed as
+	 * fast as possible for the case of a non rotational device
+	 * with queueing. We want it because this is the fastest kind
+	 * of device. On the opposite end, the likely() may lengthen
+	 * the execution time of BFQ for the case of slower devices
+	 * (rotational or at least without queueing). But in this case
+	 * the execution time of BFQ matters very little, if not at
+	 * all.
+	 */
+	if (likely(bfqd->nonrot_with_queueing))
+		return NULL;
+
 	/*
 	 * Prevent bfqq from being merged if it has been created too
 	 * long ago. The idea is that true cooperating processes, and
@@ -2986,7 +3043,9 @@ static void __bfq_bfqq_expire(struct bfq_data *bfqd, struct bfq_queue *bfqq)
 		bfq_requeue_bfqq(bfqd, bfqq, true);
 		/*
 		 * Resort priority tree of potential close cooperators.
+		 * See comments on bfq_pos_tree_add_move() for the unlikely().
 		 */
+		if (unlikely(!bfqd->nonrot_with_queueing))
 			bfq_pos_tree_add_move(bfqd, bfqq);
 	}
@@ -5051,6 +5110,9 @@ static void bfq_update_hw_tag(struct bfq_data *bfqd)
 	bfqd->hw_tag = bfqd->max_rq_in_driver > BFQ_HW_QUEUE_THRESHOLD;
 	bfqd->max_rq_in_driver = 0;
 	bfqd->hw_tag_samples = 0;
+
+	bfqd->nonrot_with_queueing =
+		blk_queue_nonrot(bfqd->queue) && bfqd->hw_tag;
 }

 static void bfq_completed_request(struct bfq_queue *bfqq, struct bfq_data *bfqd)
@@ -5882,6 +5944,7 @@ static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
 	INIT_HLIST_HEAD(&bfqd->burst_list);

 	bfqd->hw_tag = -1;
+	bfqd->nonrot_with_queueing = blk_queue_nonrot(bfqd->queue);

 	bfqd->bfq_max_budget = bfq_default_max_budget;
@@ -497,6 +497,9 @@ struct bfq_data {
 	/* number of requests dispatched and waiting for completion */
 	int rq_in_driver;

+	/* true if the device is non rotational and performs queueing */
+	bool nonrot_with_queueing;
+
 	/*
 	 * Maximum number of requests in driver in the last
 	 * @hw_tag_samples completed requests.
	 */