1. 18 Mar, 2021 3 commits
  2. 15 Mar, 2021 7 commits
  3. 14 Mar, 2021 1 commit
      io_uring: convert io_buffer_idr to XArray · 9e15c3a0
      Jens Axboe authored
      Like we did for the personality idr, convert the IO buffer idr to use
      XArray. This avoids a use-after-free on removal of entries, since idr
      doesn't like doing so from inside an iterator, and it nicely reduces
      the amount of code we need to support this feature.
      
      Fixes: 5a2e745d ("io_uring: buffer registration infrastructure")
      Cc: stable@vger.kernel.org
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: yangerkun <yangerkun@huawei.com>
      Reported-by: Hulk Robot <hulkci@huawei.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
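
      A minimal sketch of the XArray pattern this commit moves to, assuming
      a file-scope array and illustrative helper names (the real patch embeds
      the XArray in struct io_ring_ctx). Unlike idr, xa_erase() is safe to
      call on an entry found during iteration:

        #include <linux/xarray.h>

        static DEFINE_XARRAY(io_buffers);   /* stands in for the old io_buffer_idr */

        /* Insert a buffer list under its group id; 0 on success, -errno on failure. */
        static int io_buffer_add(unsigned long bgid, struct io_buffer *buf)
        {
                return xa_err(xa_store(&io_buffers, bgid, buf, GFP_KERNEL));
        }

        static struct io_buffer *io_buffer_get(unsigned long bgid)
        {
                return xa_load(&io_buffers, bgid);
        }

        static void io_buffer_del(unsigned long bgid)
        {
                xa_erase(&io_buffers, bgid);    /* safe even mid-iteration */
        }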
  4. 13 Mar, 2021 2 commits
  5. 12 Mar, 2021 5 commits
      io_uring: fix OP_ASYNC_CANCEL across tasks · 58f99373
      Pavel Begunkov authored
      IORING_OP_ASYNC_CANCEL tries io-wq cancellation only for the current
      task. If that fails, go over the ring's tctx_list and try it for every
      single tctx.
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
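
      A rough sketch of the fallback order described above: try io-wq
      cancellation in the current task first, and only on -ENOENT walk the
      ring's tctx_list and retry for every attached tctx. The helper
      try_cancel_for_tctx() is an illustrative stand-in, not the exact
      function from the patch:

        /* Illustrative: cancel in the current task, then in all others. */
        static int try_cancel_everywhere(struct io_ring_ctx *ctx, u64 user_data)
        {
                struct io_tctx_node *node;
                int ret;

                ret = try_cancel_for_tctx(current->io_uring, user_data);
                if (ret != -ENOENT)
                        return ret;     /* found (or hard error) locally */

                /* Not in the current task: every attached task gets a try. */
                list_for_each_entry(node, &ctx->tctx_list, ctx_node) {
                        ret = try_cancel_for_tctx(node->task->io_uring, user_data);
                        if (ret != -ENOENT)
                                break;
                }
                return ret;
        }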
      io_uring: cancel sqpoll via task_work · 521d6a73
      Pavel Begunkov authored
      1) The first problem is io_uring_cancel_sqpoll() ->
      io_uring_cancel_task_requests() effectively doing park(); park(); and
      hence hanging.
      
      2) Another one is more subtle: while the master task is doing
      cancellations, the SQPOLL task may submit in between the end of the
      cancellation and finish(), with the requests taking a ref to the ctx
      and so locking it up eternally.
      
      3) Yet another is a dying SQPOLL task doing io_uring_cancel_sqpoll()
      racing with the same io_uring_cancel_sqpoll() from the owner task over
      tctx->wait events. And there are probably more of them.
      
      Instead, do SQPOLL cancellations from within the SQPOLL task context
      via task_work, see io_sqpoll_cancel_sync(). With that we don't need the
      temporary park()/unpark() during cancellation, which is ugly, subtle
      and in any case doesn't allow us to run io_run_task_work() properly.
      
      io_uring_cancel_sqpoll() is called only from SQPOLL task context and
      under sqd locking, so all parking is removed from there. Hence,
      io_sq_thread_[un]park() and io_sq_thread_stop() are no longer used by
      the SQPOLL task, which spares us some headaches.
      
      Also remove the ctx from ->sqd_list early to avoid 2), and kill
      tctx->sqpoll, which is not used anymore.
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
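
      The core of the task_work approach in simplified form: the cancelling
      task queues a callback into the SQPOLL task and waits for it, so the
      cancellation itself runs in SQPOLL task context and no park()/unpark()
      dance is needed. Names are illustrative; see io_sqpoll_cancel_sync()
      in the patch for the real thing:

        #include <linux/task_work.h>

        struct sqpoll_cancel_work {
                struct callback_head cb;        /* executed by the SQPOLL task */
                struct io_ring_ctx *ctx;
                struct completion done;
        };

        static void sqpoll_cancel_cb(struct callback_head *cb)
        {
                struct sqpoll_cancel_work *w =
                        container_of(cb, struct sqpoll_cancel_work, cb);

                /* Now in SQPOLL task context: cancel without any parking. */
                cancel_ctx_requests(w->ctx);    /* illustrative helper */
                complete(&w->done);
        }

        static void sqpoll_cancel_sync(struct io_ring_ctx *ctx,
                                       struct task_struct *sq_task)
        {
                struct sqpoll_cancel_work w = { .ctx = ctx };

                init_completion(&w.done);
                init_task_work(&w.cb, sqpoll_cancel_cb);
                if (!task_work_add(sq_task, &w.cb, TWA_SIGNAL))
                        wait_for_completion(&w.done);
        }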
      io_uring: prevent racy sqd->thread checks · 26984fbf
      Pavel Begunkov authored
      The SQPOLL thread we're trying to attach to may be going away. That in
      itself isn't nice, but a more serious problem is io_sq_offload_create()
      seeing sqd->thread == NULL and trying to init it with a new thread.
      There are tons of ways that can be exploited or fail.
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
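
      The direction of the fix, sketched: never test sqd->thread without
      holding the sqd lock, so the check and the attach happen atomically
      (field and helper names are illustrative):

        static int attach_to_sqd(struct io_ring_ctx *ctx, struct io_sq_data *sqd)
        {
                int ret = -ENXIO;

                mutex_lock(&sqd->lock);
                if (sqd->thread) {              /* check and attach atomically */
                        list_add(&ctx->sqd_list, &sqd->ctx_list);
                        ret = 0;
                }
                mutex_unlock(&sqd->lock);
                return ret;                     /* caller creates a new sqd on failure */
        }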
      io_uring: remove useless ->startup completion · 0df8ea60
      Pavel Begunkov authored
      We always do complete(&sqd->startup) almost right after creating
      sqd->thread, either in the success path or in io_sq_thread_finish().
      The thread is specifically created stopped so that we can set up things
      like sqd->thread and io_uring_alloc_task_context() before waking it
      with wake_up_new_task() right after.
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
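
      Why the completion is redundant, in sketch form: create_io_thread()
      returns a task that is not yet running, so everything the thread needs
      can be set up with plain stores before wake_up_new_task(), and no extra
      handshake is required (simplified; error handling trimmed):

        static int start_sq_thread(struct io_ring_ctx *ctx, struct io_sq_data *sqd)
        {
                struct task_struct *tsk;
                int ret;

                tsk = create_io_thread(io_sq_thread, sqd, NUMA_NO_NODE);
                if (IS_ERR(tsk))
                        return PTR_ERR(tsk);

                /* The task exists but hasn't run yet: plain stores are safe. */
                sqd->thread = tsk;
                ret = io_uring_alloc_task_context(tsk, ctx);
                wake_up_new_task(tsk);          /* only now does io_sq_thread() run */
                return ret;
        }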
      io_uring: cancel deferred requests in try_cancel · e1915f76
      Pavel Begunkov authored
      As io_uring_cancel_files() and others let the SQO task run between
      calls to io_uring_try_cancel_requests(), the SQO task may generate new
      deferred requests, so it's safer to try to cancel those there as well.
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
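
      A simplified sketch of the extra cancellation step: under the
      completion lock, splice matching entries off ctx->defer_list, then
      fail them outside the lock. The match and completion helpers are
      illustrative stand-ins:

        static bool cancel_deferred(struct io_ring_ctx *ctx, struct task_struct *tsk)
        {
                struct io_defer_entry *de, *tmp;
                LIST_HEAD(list);
                bool found;

                spin_lock_irq(&ctx->completion_lock);
                list_for_each_entry_safe(de, tmp, &ctx->defer_list, list) {
                        if (req_belongs_to_task(de->req, tsk))  /* illustrative */
                                list_move(&de->list, &list);
                }
                spin_unlock_irq(&ctx->completion_lock);

                found = !list_empty(&list);
                while (!list_empty(&list)) {
                        de = list_first_entry(&list, struct io_defer_entry, list);
                        list_del_init(&de->list);
                        fail_deferred_req(de->req);             /* illustrative */
                        kfree(de);
                }
                return found;
        }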
  6. 11 Mar, 2021 2 commits
      io_uring: perform IOPOLL reaping if canceler is thread itself · d052d1d6
      Jens Axboe authored
      We bypass IOPOLL completion polling (and reaping) for the SQPOLL
      thread, but if it's the SQPOLL thread itself invoking cancellations,
      then we still need to perform the reaping, or no one will.
      
      Fixes: 9936c7c2 ("io_uring: deduplicate core cancellations sequence")
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
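
      The shape of the fix, simplified: the reap loop is skipped only when
      some other task (the SQPOLL thread) will do the polling; if the
      canceler is that thread itself, it must reap its own completions:

        static void maybe_reap_iopoll(struct io_ring_ctx *ctx)
        {
                /* Skip only if another task (the SQPOLL thread) will reap. */
                if ((ctx->flags & IORING_SETUP_SQPOLL) &&
                    ctx->sq_data && ctx->sq_data->thread != current)
                        return;

                while (!list_empty_careful(&ctx->iopoll_list))
                        io_iopoll_try_reap_events(ctx);
        }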
      io_uring: force creation of separate context for ATTACH_WQ and non-threads · 5c2469e0
      Jens Axboe authored
      Earlier kernels had SQPOLL threads that could share across anything,
      as we grabbed the context we needed on a per-ring basis. This is no
      longer the case, so only allow attaching directly if we're in the same
      thread group. That is the common use case. For non-group tasks, just
      set up a new context and thread, as we would have done if sharing
      wasn't set. This isn't 100% ideal in terms of CPU utilization for the
      forked-and-shared case, but hopefully that isn't much of a concern. If
      it is, there are plans in motion for how to improve it. Most
      importantly, we want to avoid app side regressions where sharing worked
      before and now doesn't. With this patch, functionality is equivalent to
      previous kernels that supported IORING_SETUP_ATTACH_WQ with SQPOLL.
      Reported-by: Stefan Metzmacher <metze@samba.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
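
      The attach decision described above, in illustrative form:
      sqd_from_fd(), the creator field, and alloc_new_sqd() are stand-ins
      for the real lookup and allocation paths in fs/io_uring.c:

        #include <linux/sched/signal.h>         /* same_thread_group() */

        static struct io_sq_data *get_sq_data(struct io_uring_params *p)
        {
                if (p->flags & IORING_SETUP_ATTACH_WQ) {
                        struct io_sq_data *sqd = sqd_from_fd(p->wq_fd);

                        /* Share directly only within the same thread group. */
                        if (!IS_ERR(sqd) && same_thread_group(sqd->creator, current))
                                return sqd;
                        /* Different group: proceed as if ATTACH_WQ was unset. */
                }
                return alloc_new_sqd();         /* fresh context and thread */
        }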
  7. 10 Mar, 2021 15 commits
  8. 07 Mar, 2021 5 commits