Commit 1658633c authored by Jens Axboe

io_uring: ensure io_lockdep_assert_cq_locked() handles disabled rings

io_lockdep_assert_cq_locked() checks that locking is correctly done when
a CQE is posted. If the ring is set up in a disabled state with
IORING_SETUP_R_DISABLED, then ctx->submitter_task isn't assigned until
the ring is later enabled. We generally don't post CQEs in this state,
as no SQEs can be submitted. However, it is possible to generate a CQE
if tagged resources are being updated. If this happens and PROVE_LOCKING
is enabled, then the locking check helper will dereference
ctx->submitter_task, which hasn't been set yet.

Fix up io_lockdep_assert_cq_locked() to handle this case correctly. While
at it, convert it to a static inline as well, so that generated line
offsets will actually reflect which condition failed, rather than just
the line offset for io_lockdep_assert_cq_locked() itself.

Reported-and-tested-by: syzbot+efc45d4e7ba6ab4ef1eb@syzkaller.appspotmail.com
Fixes: f26cc959 ("io_uring: lockdep annotate CQ locking")
Cc: stable@vger.kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
parent f8024f1f
@@ -86,20 +86,33 @@ bool __io_alloc_req_refill(struct io_ring_ctx *ctx);
 bool io_match_task_safe(struct io_kiocb *head, struct task_struct *task,
 			bool cancel_all);
 
-#define io_lockdep_assert_cq_locked(ctx)				\
-	do {								\
-		lockdep_assert(in_task());				\
-									\
-		if (ctx->flags & IORING_SETUP_IOPOLL) {			\
-			lockdep_assert_held(&ctx->uring_lock);		\
-		} else if (!ctx->task_complete) {			\
-			lockdep_assert_held(&ctx->completion_lock);	\
-		} else if (ctx->submitter_task->flags & PF_EXITING) {	\
-			lockdep_assert(current_work());			\
-		} else {						\
-			lockdep_assert(current == ctx->submitter_task);	\
-		}							\
-	} while (0)
+#if defined(CONFIG_PROVE_LOCKING)
+static inline void io_lockdep_assert_cq_locked(struct io_ring_ctx *ctx)
+{
+	lockdep_assert(in_task());
+
+	if (ctx->flags & IORING_SETUP_IOPOLL) {
+		lockdep_assert_held(&ctx->uring_lock);
+	} else if (!ctx->task_complete) {
+		lockdep_assert_held(&ctx->completion_lock);
+	} else if (ctx->submitter_task) {
+		/*
+		 * ->submitter_task may be NULL and we can still post a CQE,
+		 * if the ring has been setup with IORING_SETUP_R_DISABLED.
+		 * Not from an SQE, as those cannot be submitted, but via
+		 * updating tagged resources.
+		 */
+		if (ctx->submitter_task->flags & PF_EXITING)
+			lockdep_assert(current_work());
+		else
+			lockdep_assert(current == ctx->submitter_task);
+	}
+}
+#else
+static inline void io_lockdep_assert_cq_locked(struct io_ring_ctx *ctx)
+{
+}
+#endif
 
 static inline void io_req_task_work_add(struct io_kiocb *req)
 {