1. 24 Jan, 2021 2 commits
    • io_uring: account io_uring internal files as REQ_F_INFLIGHT · 02a13674
      Jens Axboe authored
      We need to actively cancel anything that introduces a potential circular
      loop, where io_uring holds a reference to itself. If the file in question
      is an io_uring file, then add the request to the inflight list.
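      
      As a rough sketch of the approach (not the verbatim patch; the helper
      name io_req_track_inflight() and the inflight_lock/inflight_list
      field names are assumptions here), the file-lookup path flags such
      requests and links them into the per-ctx inflight list so cancelation
      can find them later:
      
        /* mark a request whose target file is itself an io_uring instance */
        static void io_req_track_inflight(struct io_kiocb *req)
        {
        	struct io_ring_ctx *ctx = req->ctx;
        
        	req->flags |= REQ_F_INFLIGHT;
        	spin_lock_irq(&ctx->inflight_lock);
        	list_add(&req->inflight_entry, &ctx->inflight_list);
        	spin_unlock_irq(&ctx->inflight_lock);
        }
        
        /* in the file resolution path, once 'file' is known */
        if (file && unlikely(file->f_op == &io_uring_fops))
        	io_req_track_inflight(req);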
      
      Cc: stable@vger.kernel.org # 5.9+
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: fix sleeping under spin in __io_clean_op · 9d5c8190
      Pavel Begunkov authored
      [   27.629441] BUG: sleeping function called from invalid context
      	at fs/file.c:402
      [   27.631317] in_atomic(): 1, irqs_disabled(): 1, non_block: 0,
      	pid: 1012, name: io_wqe_worker-0
      [   27.633220] 1 lock held by io_wqe_worker-0/1012:
      [   27.634286]  #0: ffff888105e26c98 (&ctx->completion_lock)
      	{....}-{2:2}, at: __io_req_complete.part.102+0x30/0x70
      [   27.649249] Call Trace:
      [   27.649874]  dump_stack+0xac/0xe3
      [   27.650666]  ___might_sleep+0x284/0x2c0
      [   27.651566]  put_files_struct+0xb8/0x120
      [   27.652481]  __io_clean_op+0x10c/0x2a0
      [   27.653362]  __io_cqring_fill_event+0x2c1/0x350
      [   27.654399]  __io_req_complete.part.102+0x41/0x70
      [   27.655464]  io_openat2+0x151/0x300
      [   27.656297]  io_issue_sqe+0x6c/0x14e0
      [   27.660991]  io_wq_submit_work+0x7f/0x240
      [   27.662890]  io_worker_handle_work+0x501/0x8a0
      [   27.664836]  io_wqe_worker+0x158/0x520
      [   27.667726]  kthread+0x134/0x180
      [   27.669641]  ret_from_fork+0x1f/0x30
      
      Instead of cleaning files on overflow, move overflow cancellation back
      into io_uring_cancel_files(). Previously it was racy to clear the
      REQ_F_OVERFLOW flag there, but that flag has since been removed, so the
      cancellation can now be done through repeated attempts targeting all
      matching requests.
      Reported-by: Abaci <abaci@linux.alibaba.com>
      Reported-by: Joseph Qi <joseph.qi@linux.alibaba.com>
      Cc: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  2. 22 Jan, 2021 2 commits
    • io_uring: fix short read retries for non-reg files · 9a173346
      Pavel Begunkov authored
      Sockets and other non-regular files may actually expect short reads to
      happen, so don't retry reads for them. Because non-regular files don't
      set FMODE_BUF_RASYNC and therefore never take the second/retry
      do_read() path, we can filter out those cases after the first
      do_read() attempt returns ret > 0.
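      
      A hedged illustration of the resulting filter (simplified; the
      surrounding io_read() retry logic is assumed context, not the exact
      diff):
      
        /* after the first read attempt returned a short but positive result:
         * only regular files are eligible for a buffered retry */
        if (!S_ISREG(file_inode(req->file)->i_mode))
        	goto done;	/* sockets, pipes, etc.: complete with the short read */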
      
      Cc: stable@vger.kernel.org # 5.9+
      Suggested-by: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: fix SQPOLL IORING_OP_CLOSE cancelation state · 607ec89e
      Jens Axboe authored
      IORING_OP_CLOSE is special in terms of cancelation, since it has an
      intermediate state where we've removed the file descriptor but haven't
      closed the file yet. For that reason, it's currently marked with
      IO_WQ_WORK_NO_CANCEL to prevent cancelation. This ensures that the op
      is always run even if canceled, to prevent leaving us with a live file
      but an fd that is gone. However, with SQPOLL, since a cancel request
      doesn't carry any resources on behalf of the request being canceled, if
      we cancel before any of the close op has been run, we can end up with
      io-wq not having the ->files assigned. This can result in the following
      oops reported by Joseph:
      
      BUG: kernel NULL pointer dereference, address: 00000000000000d8
      PGD 800000010b76f067 P4D 800000010b76f067 PUD 10b462067 PMD 0
      Oops: 0000 [#1] SMP PTI
      CPU: 1 PID: 1788 Comm: io_uring-sq Not tainted 5.11.0-rc4 #1
      Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011
      RIP: 0010:__lock_acquire+0x19d/0x18c0
      Code: 00 00 8b 1d fd 56 dd 08 85 db 0f 85 43 05 00 00 48 c7 c6 98 7b 95 82 48 c7 c7 57 96 93 82 e8 9a bc f5 ff 0f 0b e9 2b 05 00 00 <48> 81 3f c0 ca 67 8a b8 00 00 00 00 41 0f 45 c0 89 04 24 e9 81 fe
      RSP: 0018:ffffc90001933828 EFLAGS: 00010002
      RAX: 0000000000000001 RBX: 0000000000000001 RCX: 0000000000000000
      RDX: 0000000000000000 RSI: 0000000000000000 RDI: 00000000000000d8
      RBP: 0000000000000246 R08: 0000000000000001 R09: 0000000000000000
      R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
      R13: 0000000000000000 R14: ffff888106e8a140 R15: 00000000000000d8
      FS:  0000000000000000(0000) GS:ffff88813bd00000(0000) knlGS:0000000000000000
      CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      CR2: 00000000000000d8 CR3: 0000000106efa004 CR4: 00000000003706e0
      DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
      Call Trace:
       lock_acquire+0x31a/0x440
       ? close_fd_get_file+0x39/0x160
       ? __lock_acquire+0x647/0x18c0
       _raw_spin_lock+0x2c/0x40
       ? close_fd_get_file+0x39/0x160
       close_fd_get_file+0x39/0x160
       io_issue_sqe+0x1334/0x14e0
       ? lock_acquire+0x31a/0x440
       ? __io_free_req+0xcf/0x2e0
       ? __io_free_req+0x175/0x2e0
       ? find_held_lock+0x28/0xb0
       ? io_wq_submit_work+0x7f/0x240
       io_wq_submit_work+0x7f/0x240
       io_wq_cancel_cb+0x161/0x580
       ? io_wqe_wake_worker+0x114/0x360
       ? io_uring_get_socket+0x40/0x40
       io_async_find_and_cancel+0x3b/0x140
       io_issue_sqe+0xbe1/0x14e0
       ? __lock_acquire+0x647/0x18c0
       ? __io_queue_sqe+0x10b/0x5f0
       __io_queue_sqe+0x10b/0x5f0
       ? io_req_prep+0xdb/0x1150
       ? mark_held_locks+0x6d/0xb0
       ? mark_held_locks+0x6d/0xb0
       ? io_queue_sqe+0x235/0x4b0
       io_queue_sqe+0x235/0x4b0
       io_submit_sqes+0xd7e/0x12a0
       ? _raw_spin_unlock_irq+0x24/0x30
       ? io_sq_thread+0x3ae/0x940
       io_sq_thread+0x207/0x940
       ? do_wait_intr_irq+0xc0/0xc0
       ? __ia32_sys_io_uring_enter+0x650/0x650
       kthread+0x134/0x180
       ? kthread_create_worker_on_cpu+0x90/0x90
       ret_from_fork+0x1f/0x30
      
      Fix this by not setting IO_WQ_WORK_NO_CANCEL until _after_ we've
      modified the fdtable. Canceling before this point is totally fine, and
      running the op in the io-wq context _after_ that point is also fine.
      
      For 5.12, we'll handle this internally and get rid of the no-cancel
      flag, as IORING_OP_CLOSE is the only user of it.
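      
      Roughly, the issue path for IORING_OP_CLOSE then looks like the sketch
      below (error handling trimmed; 'close->fd' stands in for the request's
      descriptor field, so treat the exact names as assumptions):
      
        /* remove the fd from the fdtable first; canceling is still fine here */
        ret = close_fd_get_file(close->fd, &file);
        if (ret < 0)
        	goto err;
        
        /* from here the fd is gone but the file is still open, so the op
         * must run to completion even if canceled */
        req->work.flags |= IO_WQ_WORK_NO_CANCEL;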
      
      Cc: stable@vger.kernel.org
      Fixes: b5dba59e ("io_uring: add support for IORING_OP_CLOSE")
      Reported-by: "Abaci <abaci@linux.alibaba.com>"
      Reviewed-and-tested-by: default avatarJoseph Qi <joseph.qi@linux.alibaba.com>
      Signed-off-by: default avatarJens Axboe <axboe@kernel.dk>
      607ec89e
  3. 17 Jan, 2021 1 commit
  4. 16 Jan, 2021 3 commits
    • io_uring: fix uring_flush in exit_files() warning · 4325cb49
      Pavel Begunkov authored
      WARNING: CPU: 1 PID: 11100 at fs/io_uring.c:9096
      	io_uring_flush+0x326/0x3a0 fs/io_uring.c:9096
      RIP: 0010:io_uring_flush+0x326/0x3a0 fs/io_uring.c:9096
      Call Trace:
       filp_close+0xb4/0x170 fs/open.c:1280
       close_files fs/file.c:401 [inline]
       put_files_struct fs/file.c:416 [inline]
       put_files_struct+0x1cc/0x350 fs/file.c:413
       exit_files+0x7e/0xa0 fs/file.c:433
       do_exit+0xc22/0x2ae0 kernel/exit.c:820
       do_group_exit+0x125/0x310 kernel/exit.c:922
       get_signal+0x3e9/0x20a0 kernel/signal.c:2770
       arch_do_signal_or_restart+0x2a8/0x1eb0 arch/x86/kernel/signal.c:811
       handle_signal_work kernel/entry/common.c:147 [inline]
       exit_to_user_mode_loop kernel/entry/common.c:171 [inline]
       exit_to_user_mode_prepare+0x148/0x250 kernel/entry/common.c:201
       __syscall_exit_to_user_mode_work kernel/entry/common.c:291 [inline]
       syscall_exit_to_user_mode+0x19/0x50 kernel/entry/common.c:302
       entry_SYSCALL_64_after_hwframe+0x44/0xa9
      
      An SQPOLL ring creator task may have gotten rid of its file note during
      exit and called io_disable_sqo_submit(), but the io_uring is still left
      referenced through the fdtable, which will be put during close_files()
      and cause a false positive warning.
      
      First split the warning into two for more clarity about when it is hit,
      and then add an sqo_dead check to handle the described case.
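      
      A sketch of what the split looks like in io_uring_flush() for SQPOLL
      rings (field and helper names such as tctx->xa are assumptions based on
      the surrounding code, not the exact diff):
      
        if (ctx->flags & IORING_SETUP_SQPOLL) {
        	/* only the creator task ever holds the single file note */
        	WARN_ON_ONCE(ctx->sqo_task != current &&
        		     xa_load(&tctx->xa, (unsigned long)file));
        	/* the creator may have dropped its note already once sqo_dead
        	 * is set, e.g. during exit_files() */
        	WARN_ON_ONCE(ctx->sqo_task == current && !ctx->sqo_dead &&
        		     !xa_load(&tctx->xa, (unsigned long)file));
        }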
      
      Reported-by: syzbot+a32b546d58dde07875a1@syzkaller.appspotmail.com
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: fix false positive sqo warning on flush · 6b393a1f
      Pavel Begunkov authored
      WARNING: CPU: 1 PID: 9094 at fs/io_uring.c:8884
      	io_disable_sqo_submit+0x106/0x130 fs/io_uring.c:8884
      Call Trace:
       io_uring_flush+0x28b/0x3a0 fs/io_uring.c:9099
       filp_close+0xb4/0x170 fs/open.c:1280
       close_fd+0x5c/0x80 fs/file.c:626
       __do_sys_close fs/open.c:1299 [inline]
       __se_sys_close fs/open.c:1297 [inline]
       __x64_sys_close+0x2f/0xa0 fs/open.c:1297
       do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
       entry_SYSCALL_64_after_hwframe+0x44/0xa9
      
      io_uring's final close() may be triggered by any task, not only the
      creator. It's handled well by io_uring_flush(), including the SQPOLL
      case, though a warning in io_disable_sqo_submit() will fire as a false
      positive. Fix it by moving the warning out to the only call site where
      it matters.
      
      Reported-by: syzbot+2f5d1785dc624932da78@syzkaller.appspotmail.com
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: iopoll requests should also wake task ->in_idle state · c93cc9e1
      Jens Axboe authored
      If we're freeing/finishing iopoll requests, ensure we check whether the
      task is idling in terms of cancelation. Otherwise we could end up
      waiting forever in __io_uring_task_cancel() if the task has active
      iopoll requests that need cancelation.
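      
      Illustrative sketch of the wakeup (simplified; the tctx field names are
      assumptions): when dropping task references for completed iopoll
      requests, wake any task sitting in __io_uring_task_cancel():
      
        struct io_uring_task *tctx = req->task->io_uring;
        
        percpu_counter_sub(&tctx->inflight, 1);
        if (unlikely(atomic_read(&tctx->in_idle)))
        	wake_up(&tctx->wait);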
      
      Cc: stable@vger.kernel.org # 5.9+
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  5. 15 Jan, 2021 2 commits
  6. 13 Jan, 2021 2 commits
    • io_uring: do sqo disable on install_fd error · 06585c49
      Pavel Begunkov authored
      WARNING: CPU: 0 PID: 8494 at fs/io_uring.c:8717
      	io_ring_ctx_wait_and_kill+0x4f2/0x600 fs/io_uring.c:8717
      Call Trace:
       io_uring_release+0x3e/0x50 fs/io_uring.c:8759
       __fput+0x283/0x920 fs/file_table.c:280
       task_work_run+0xdd/0x190 kernel/task_work.c:140
       tracehook_notify_resume include/linux/tracehook.h:189 [inline]
       exit_to_user_mode_loop kernel/entry/common.c:174 [inline]
       exit_to_user_mode_prepare+0x249/0x250 kernel/entry/common.c:201
       __syscall_exit_to_user_mode_work kernel/entry/common.c:291 [inline]
       syscall_exit_to_user_mode+0x19/0x50 kernel/entry/common.c:302
       entry_SYSCALL_64_after_hwframe+0x44/0xa9
      
      A failed io_uring_install_fd() is a special case: we don't call
      io_ring_ctx_wait_and_kill() directly but defer it to fput(), though we
      still need to call io_disable_sqo_submit() beforehand.
      
      Note: this doesn't fix any real problem, just a warning. That's because
      the sqring won't be available to userspace in this case, so SQPOLL
      won't submit anything.
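      
      A minimal sketch of the error path (not the exact diff; the call
      placement is illustrative):
      
        ret = io_uring_install_fd(ctx, file);
        if (ret < 0) {
        	/* stop SQPOLL submissions before the deferred teardown */
        	io_disable_sqo_submit(ctx);
        	/* fput() will drive io_ring_ctx_wait_and_kill() via ->release() */
        	fput(file);
        	return ret;
        }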
      
      Reported-by: syzbot+9c9c35374c0ecac06516@syzkaller.appspotmail.com
      Fixes: d9d05217 ("io_uring: stop SQPOLL submit on creator's death")
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: fix null-deref in io_disable_sqo_submit · b4411616
      Pavel Begunkov authored
      general protection fault, probably for non-canonical address
      	0xdffffc0000000022: 0000 [#1] KASAN: null-ptr-deref
      	in range [0x0000000000000110-0x0000000000000117]
      RIP: 0010:io_ring_set_wakeup_flag fs/io_uring.c:6929 [inline]
      RIP: 0010:io_disable_sqo_submit+0xdb/0x130 fs/io_uring.c:8891
      Call Trace:
       io_uring_create fs/io_uring.c:9711 [inline]
       io_uring_setup+0x12b1/0x38e0 fs/io_uring.c:9739
       do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
       entry_SYSCALL_64_after_hwframe+0x44/0xa9
      
      io_disable_sqo_submit() might be called before the user rings were
      allocated; don't do io_ring_set_wakeup_flag() in that case.
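      
      The guarded version looks roughly like this (a sketch under the
      assumption that ctx->rings stays NULL until the user-visible rings are
      allocated):
      
        static void io_disable_sqo_submit(struct io_ring_ctx *ctx)
        {
        	mutex_lock(&ctx->uring_lock);
        	ctx->sqo_dead = 1;
        	mutex_unlock(&ctx->uring_lock);
        
        	/* setup may fail before the rings were ever allocated */
        	if (ctx->rings)
        		io_ring_set_wakeup_flag(ctx);
        }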
      
      Reported-by: syzbot+ab412638aeb652ded540@syzkaller.appspotmail.com
      Fixes: d9d05217 ("io_uring: stop SQPOLL submit on creator's death")
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  7. 11 Jan, 2021 2 commits
  8. 09 Jan, 2021 4 commits
    • io_uring: stop SQPOLL submit on creator's death · d9d05217
      Pavel Begunkov authored
      When the creator of an SQPOLL io_uring dies (i.e. sqo_task), we don't
      want its internals like ->files and ->mm to be poked by the SQPOLL
      task; that has never been nice and recently got racy. It can happen
      when the owner undergoes destruction while the SQPOLL task tries to
      submit new requests in parallel and so calls io_sq_thread_acquire*().
      
      This patch halts SQPOLL submissions when sqo_task dies by introducing
      an sqo_dead flag. Once set, the SQPOLL task must not do any submission,
      which is synchronised by uring_lock as well as the new flag.
      
      The tricky part is making sure that disabling always happens: either
      the ring is discovered via the creator's do_exit() -> cancel path, or
      the final close() happens first and the disabling is done by the
      creator there. The latter is guaranteed by the fact that for SQPOLL the
      creator task, and only it, holds exactly one file note, so it either
      pins the ring until do_exit() or drops the note itself on the final put
      in flush (see the comments in uring_flush() around file->f_count == 2).
      
      One more place that can trigger io_sq_thread_acquire_*() is
      __io_req_task_submit(). Shoot off requests on sqo_dead there, even
      though actually we don't need to. That's because cancellation of
      sqo_task should wait for the request before going any further.
      
      note 1: io_disable_sqo_submit() does io_ring_set_wakeup_flag() so the
      caller would enter the ring to get an error, but it still doesn't
      guarantee that the flag won't be cleared.
      
      note 2: if final __userspace__ close happens not from the creator
      task, the file note will pin the ring until the task dies.
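      
      The submission-side check, roughly (a sketch of the SQPOLL thread's
      submit path, not the full diff):
      
        mutex_lock(&ctx->uring_lock);
        /* sqo_dead is only ever set under uring_lock */
        if (likely(!ctx->sqo_dead))
        	ret = io_submit_sqes(ctx, to_submit);
        mutex_unlock(&ctx->uring_lock);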
      
      Fixes: b1b6b5a3 ("kernel/io_uring: cancel io_uring before task works")
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: add warn_once for io_uring_flush() · 6b5733eb
      Pavel Begunkov authored
      files_cancel() should cancel all relevant requests and drop file notes,
      so we should never have file notes after that, including on-exit fput
      and flush. Add a WARN_ONCE to be sure.
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: inline io_uring_attempt_task_drop() · 4f793dc4
      Pavel Begunkov authored
      A simple preparation change inlining io_uring_attempt_task_drop() into
      io_uring_flush().
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: io_rw_reissue lockdep annotations · 55e6ac1e
      Pavel Begunkov authored
      We expect io_rw_reissue() to take place only during submission with
      uring_lock held. Add a lockdep annotation to check that invariant.
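      
      The annotation itself is the standard lockdep assertion, placed at the
      top of io_rw_reissue() (sketch):
      
        /* reissue must only happen from the submission path */
        lockdep_assert_held(&req->ctx->uring_lock);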
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  9. 07 Jan, 2021 3 commits
    • io_uring: synchronise ev_posted() with waitqueues · b1445e59
      Pavel Begunkov authored
      waitqueue_active() needs an smp_mb() to be in sync with waitqueue
      modifications, but we miss it in io_cqring_ev_posted*() apart from the
      cq_wait() case.
      
      Take the smp_mb() out of wq_has_sleeper(), turning it into
      waitqueue_active(), and place the barrier a few lines earlier so it can
      synchronise the other waitqueue_active() calls as well.
      
      The patch doesn't add any additional overhead, so even if there are
      no problems currently, it's just safer to have it this way.
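      
      The resulting pattern in io_cqring_ev_posted() looks roughly like this
      (a sketch; the remaining wakeups follow the same shape):
      
        /* pairs with barriers issued by the waiters before they sleep;
         * orders the CQ tail update against the checks below */
        smp_mb();
        if (waitqueue_active(&ctx->wait))
        	wake_up(&ctx->wait);
        /* the other wakeups (sq thread, eventfd, cq_wait) follow the same
         * waitqueue_active() + wake_up() shape after the single barrier */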
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: dont kill fasync under completion_lock · 4aa84f2f
      Pavel Begunkov authored
            CPU0                    CPU1
             ----                    ----
        lock(&new->fa_lock);
                                     local_irq_disable();
                                     lock(&ctx->completion_lock);
                                     lock(&new->fa_lock);
        <Interrupt>
          lock(&ctx->completion_lock);
      
       *** DEADLOCK ***
      
      Move kill_fasync() out of io_commit_cqring() into io_cqring_ev_posted(),
      so it is no longer called while holding completion_lock. That saves us
      from the reported deadlock, and it's also nice to shorten the locking
      time and untangle the nested locks (compl_lock -> wq_head::lock).
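      
      In other words, the SIGIO notification now happens only after
      completion_lock has been dropped, roughly (sketch):
      
        /* in io_cqring_ev_posted(), called outside completion_lock */
        if (waitqueue_active(&ctx->cq_wait)) {
        	wake_up_interruptible(&ctx->cq_wait);
        	kill_fasync(&ctx->cq_fasync, SIGIO, POLL_IN);
        }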
      
      Reported-by: syzbot+91ca3f25bd7f795f019c@syzkaller.appspotmail.com
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: trigger eventfd for IOPOLL · 80c18e4a
      Pavel Begunkov authored
      Make sure io_iopoll_complete() tries to wake up the eventfd, which is
      currently skipped together with io_cqring_ev_posted() for non-SQPOLL
      IOPOLL.
      
      Add an iopoll version of io_cqring_ev_posted(). It duplicates a bit of
      code, but the two actually use different sets of wait queues, so that
      may be for the better.
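      
      A sketch of such an iopoll variant (io_should_trigger_evfd() stands in
      for the existing eventfd gating helper; treat the exact shape as an
      assumption):
      
        static void io_cqring_ev_posted_iopoll(struct io_ring_ctx *ctx)
        {
        	if (waitqueue_active(&ctx->wait))
        		wake_up(&ctx->wait);
        	if (io_should_trigger_evfd(ctx))
        		eventfd_signal(ctx->cq_ev_fd, 1);
        }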
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  10. 06 Jan, 2021 1 commit
  11. 05 Jan, 2021 1 commit
  12. 04 Jan, 2021 4 commits
  13. 31 Dec, 2020 3 commits
  14. 29 Dec, 2020 1 commit
    • io_uring: don't assume mm is constant across submits · 77788775
      Jens Axboe authored
      If we COW the identity, we assume that ->mm never changes. But this
      isn't true if multiple processes end up sharing the ring. Hence treat
      id->mm like any other process component when it comes to the identity
      mapping. This is pretty trivial: just move the existing grab into
      io_grab_identity() and include a check for the match.
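      
      Conceptually the check looks like the other identity comparisons in
      io_grab_identity() (a sketch; the exact field names are assumptions):
      
        /* a mismatch forces a COW of the identity, as for ->files etc. */
        if (id->mm != current->mm)
        	return false;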
      
      Cc: stable@vger.kernel.org # 5.10
      Fixes: 1e6fa521 ("io_uring: COW io_identity on mismatch")
      Reported-by: Christian Brauner <christian.brauner@ubuntu.com>
      Tested-by: Christian Brauner <christian.brauner@ubuntu.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  15. 27 Dec, 2020 8 commits
  16. 26 Dec, 2020 1 commit