Commit 1f18b700 authored by Jan Kara, committed by Jens Axboe

bfq: Limit waker detection in time

Currently, when process A starts issuing requests shortly after process
B has completed some IO three times in a row, we decide that B is a
"waker" of A, meaning that completing B's IO is needed for A to make
progress, and we generally stop separating A's and B's IO. This logic
is useful to avoid unnecessary idling, and thus throughput loss, in
cases where the workload needs to switch e.g. between a process and the
journaling thread doing IO. However, the detection heuristic tends to
give frequent false positives when A and B are fighting for IO bandwidth
and other processes aren't doing much IO, since we are then essentially
doomed to eventually accumulate three occurrences of a situation where
one process starts issuing requests right after the other has completed
some IO. To reduce these false positives, also cancel waker detection
if we don't accumulate three detected wakeups within a given timeout.
The rationale is that if wakeups are really that rare, the pointless
idling doesn't hurt throughput much anyway.

This significantly reduces false waker detection for workload like:

[global]
directory=/mnt/repro/
rw=write
size=8g
time_based
runtime=30
ramp_time=10
blocksize=1m
direct=0
ioengine=sync

[slowwriter]
numjobs=1
fsync=200

[fastwriter]
numjobs=1
fsync=200
Acked-by: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Jan Kara <jack@suse.cz>
Link: https://lore.kernel.org/r/20211125133645.27483-5-jack@suse.cz
Signed-off-by: Jens Axboe <axboe@kernel.dk>
parent 76f1df88
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -2091,20 +2091,19 @@ static void bfq_update_io_intensity(struct bfq_queue *bfqq, u64 now_ns)
  * aspect, see the comments on the choice of the queue for injection
  * in bfq_select_queue().
  *
- * Turning back to the detection of a waker queue, a queue Q is deemed
- * as a waker queue for bfqq if, for three consecutive times, bfqq
- * happens to become non empty right after a request of Q has been
- * completed. In this respect, even if bfqq is empty, we do not check
- * for a waker if it still has some in-flight I/O. In fact, in this
- * case bfqq is actually still being served by the drive, and may
- * receive new I/O on the completion of some of the in-flight
- * requests. In particular, on the first time, Q is tentatively set as
- * a candidate waker queue, while on the third consecutive time that Q
- * is detected, the field waker_bfqq is set to Q, to confirm that Q is
- * a waker queue for bfqq. These detection steps are performed only if
- * bfqq has a long think time, so as to make it more likely that
- * bfqq's I/O is actually being blocked by a synchronization. This
- * last filter, plus the above three-times requirement, make false
+ * Turning back to the detection of a waker queue, a queue Q is deemed as a
+ * waker queue for bfqq if, for three consecutive times, bfqq happens to become
+ * non empty right after a request of Q has been completed within given
+ * timeout. In this respect, even if bfqq is empty, we do not check for a waker
+ * if it still has some in-flight I/O. In fact, in this case bfqq is actually
+ * still being served by the drive, and may receive new I/O on the completion
+ * of some of the in-flight requests. In particular, on the first time, Q is
+ * tentatively set as a candidate waker queue, while on the third consecutive
+ * time that Q is detected, the field waker_bfqq is set to Q, to confirm that Q
+ * is a waker queue for bfqq. These detection steps are performed only if bfqq
+ * has a long think time, so as to make it more likely that bfqq's I/O is
+ * actually being blocked by a synchronization. This last filter, plus the
+ * above three-times requirement and time limit for detection, make false
  * positives less likely.
  *
  * NOTE
@@ -2136,8 +2135,16 @@ static void bfq_check_waker(struct bfq_data *bfqd, struct bfq_queue *bfqq,
 	    bfqd->last_completed_rq_bfqq == bfqq->waker_bfqq)
 		return;
 
+	/*
+	 * We reset waker detection logic also if too much time has passed
+	 * since the first detection. If wakeups are rare, pointless idling
+	 * doesn't hurt throughput that much. The condition below makes sure
+	 * we do not uselessly idle blocking waker in more than 1/64 cases.
+	 */
 	if (bfqd->last_completed_rq_bfqq !=
-	    bfqq->tentative_waker_bfqq) {
+	    bfqq->tentative_waker_bfqq ||
+	    now_ns > bfqq->waker_detection_started +
+					128 * (u64)bfqd->bfq_slice_idle) {
 		/*
 		 * First synchronization detected with a
 		 * candidate waker queue, or with a different
@@ -2146,6 +2153,7 @@ static void bfq_check_waker(struct bfq_data *bfqd, struct bfq_queue *bfqq,
 		bfqq->tentative_waker_bfqq =
 			bfqd->last_completed_rq_bfqq;
 		bfqq->num_waker_detections = 1;
+		bfqq->waker_detection_started = now_ns;
 	} else /* Same tentative waker queue detected again */
 		bfqq->num_waker_detections++;
--- a/block/bfq-iosched.h
+++ b/block/bfq-iosched.h
@@ -388,6 +388,8 @@ struct bfq_queue {
 	struct bfq_queue *tentative_waker_bfqq;
 	/* number of times the same tentative waker has been detected */
 	unsigned int num_waker_detections;
+	/* time when we started considering this waker */
+	u64 waker_detection_started;
 	/* node for woken_list, see below */
 	struct hlist_node woken_list_node;