Commit 90fa0288 authored by Hao Xu, committed by Jens Axboe

io_uring: implement async hybrid mode for pollable requests

The current logic for requests with IOSQE_ASYNC is to queue them to an
io-worker first and then execute them there synchronously. For unbound
work like pollable requests (e.g. read/write on a socket fd), the
io-worker may get stuck waiting for events for a long time, and other
work then sits in the list just as long.
Let's introduce a new mode for unbound work (currently pollable
requests): a request is still queued to an io-worker first, but the
worker then issues it with a nonblocking attempt rather than a
synchronous one. If that attempt fails, the worker arms the poll
machinery for the request and can move on to other work.
The detailed flow for this kind of request is:

step1: original context:
           queue it to io-worker
step2: io-worker context:
           nonblock try (the old logic is a synchronous try here)
               |
               |--fail--> arm poll
                            |
                             |--(fail/ready)--> synchronous issue
                            |
                             |--(succeed)--> worker finishes its job, tw
                                             takes over the req
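
For context, here is a minimal userspace sketch (not part of this patch)
of the kind of submission this mode targets: a recv on a socket forced
to the io-worker via IOSQE_ASYNC. It assumes liburing; the helper name
submit_async_recv and the connected sockfd are illustrative only:

#include <liburing.h>
#include <errno.h>

/* Sketch only: submit one pollable recv with IOSQE_ASYNC. With this
 * patch, the io-worker first tries it nonblocking and arms poll on
 * -EAGAIN instead of blocking inside the recv.
 */
static int submit_async_recv(struct io_uring *ring, int sockfd,
			     char *buf, unsigned len)
{
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
	struct io_uring_cqe *cqe;
	int ret;

	if (!sqe)
		return -EBUSY;
	io_uring_prep_recv(sqe, sockfd, buf, len, 0);
	io_uring_sqe_set_flags(sqe, IOSQE_ASYNC);	/* force io-worker path */
	io_uring_submit(ring);

	ret = io_uring_wait_cqe(ring, &cqe);
	if (ret < 0)
		return ret;
	ret = cqe->res;	/* bytes received, or -errno */
	io_uring_cqe_seen(ring, cqe);
	return ret;
}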

This works much better than the old IOSQE_ASYNC logic in cases where
the unbound max_workers limit is relatively small: the number of
io-workers easily climbs to max_workers, no new workers can be created,
and the running workers are stuck handling old IOSQE_ASYNC work.
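
As a side note, the unbound worker cap used in the benchmark below can
be set from userspace; a minimal sketch (not part of this patch),
assuming liburing 2.1+ and a 5.15+ kernel for
io_uring_register_iowq_max_workers(), with the helper name illustrative:

#include <liburing.h>

/* Sketch only: cap io-wq workers, mirroring the benchmark setup below.
 * values[0] = bounded workers, values[1] = unbounded workers; a value
 * of 0 leaves that cap unchanged, and the previous values are written
 * back into the array.
 */
static int cap_unbound_workers(struct io_uring *ring)
{
	unsigned int values[2] = { 0, 20 };

	return io_uring_register_iowq_max_workers(ring, values);
}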

On my 64-core machine, with unbound max_workers set to 20, an
echo-server benchmark gives:
(arguments: register_file, connection count 1000, message size 12 bytes)
original IOSQE_ASYNC: 76664.151 tps
after this patch: 166934.985 tps
Suggested-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Hao Xu <haoxu@linux.alibaba.com>
Link: https://lore.kernel.org/r/20211018133445.103438-1-haoxu@linux.alibaba.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
parent 898df244
@@ -6739,8 +6739,18 @@ static void io_wq_submit_work(struct io_wq_work *work)
 		ret = -ECANCELED;
 
 	if (!ret) {
+		bool needs_poll = false;
+		unsigned int issue_flags = IO_URING_F_UNLOCKED;
+
+		if (req->flags & REQ_F_FORCE_ASYNC) {
+			needs_poll = req->file && file_can_poll(req->file);
+			if (needs_poll)
+				issue_flags |= IO_URING_F_NONBLOCK;
+		}
+
 		do {
-			ret = io_issue_sqe(req, IO_URING_F_UNLOCKED);
+issue_sqe:
+			ret = io_issue_sqe(req, issue_flags);
 			/*
 			 * We can get EAGAIN for polled IO even though we're
 			 * forcing a sync submission from here, since we can't
@@ -6748,6 +6758,30 @@ static void io_wq_submit_work(struct io_wq_work *work)
 			 */
 			if (ret != -EAGAIN)
 				break;
+
+			if (needs_poll) {
+				bool armed = false;
+
+				ret = 0;
+				needs_poll = false;
+				issue_flags &= ~IO_URING_F_NONBLOCK;
+				switch (io_arm_poll_handler(req)) {
+				case IO_APOLL_READY:
+					goto issue_sqe;
+				case IO_APOLL_ABORTED:
+					/*
+					 * somehow we failed to arm the poll infra,
+					 * fallback it to a normal async worker try.
+					 */
+					break;
+				case IO_APOLL_OK:
+					armed = true;
+					break;
+				}
+
+				if (armed)
+					break;
+			}
 			cond_resched();
 		} while (1);
 	}
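
Note that the nonblocking-first path above is gated on REQ_F_FORCE_ASYNC
plus file_can_poll(): only requests explicitly marked IOSQE_ASYNC on a
pollable file take the poll-arming route; everything else keeps the old
synchronous io-worker behavior.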