1. 19 May, 2022 1 commit
    • bfq: Relax waker detection for shared queues · f9506673
      Jan Kara authored
      Currently we look for a waker only if the current queue has no
      requests. This makes sense for bfq queues with a single process;
      however, for shared queues with a larger number of processes the
      condition that the queue has no requests is difficult to meet,
      because often at least one process has a request in flight while all
      the others are waiting for the waker to do the work, and this harms
      throughput. Relax the "no queued request for bfq queue" condition to
      "the current task has no queued requests yet". For this, we also
      need to start tracking the number of requests in flight for each
      task.
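
      A minimal, self-contained sketch of the relaxed condition (the type
      and function names, such as task_io_ctx and may_detect_waker_new, are
      hypothetical and only model the idea; they are not the BFQ code
      itself): each task keeps a count of its own queued requests, and
      waker detection is allowed as soon as that per-task count is zero,
      even if other tasks sharing the queue still have requests in flight.

      #include <stdbool.h>
      #include <stdio.h>

      struct task_io_ctx {
              unsigned int requests;  /* requests this task has queued/in flight */
      };

      struct shared_queue {
              unsigned int queued;    /* requests queued by all tasks sharing it */
      };

      /* old condition: waker detection only when the whole queue is idle */
      static bool may_detect_waker_old(const struct shared_queue *q)
      {
              return q->queued == 0;
      }

      /* relaxed condition: the current task has nothing queued yet */
      static bool may_detect_waker_new(const struct task_io_ctx *ctx)
      {
              return ctx->requests == 0;
      }

      int main(void)
      {
              struct shared_queue q = { .queued = 1 };   /* a peer has I/O in flight */
              struct task_io_ctx me = { .requests = 0 }; /* but this task is waiting */

              printf("old rule allows detection: %d\n", may_detect_waker_old(&q));
              printf("new rule allows detection: %d\n", may_detect_waker_new(&me));
              return 0;
      }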
      
      This patch (together with the following one) restores the
      performance for dbench with 128 clients that regressed with commit
      c65e6fd4 ("bfq: Do not let waker requests skip proper accounting"),
      because that commit makes waker requests properly enter BFQ queues
      and thus those queues become ineligible for the old waker detection
      logic.
      Dbench results:
      
               Vanilla 5.18-rc3        5.18-rc3 + revert      5.18-rc3 patched
      Mean     1237.36 (   0.00%)      950.16 *  23.21%*      988.35 *  20.12%*
      
      Numbers are the time to complete the workload, so lower is better.
      
      Fixes: c65e6fd4 ("bfq: Do not let waker requests skip proper accounting")
      Signed-off-by: Jan Kara <jack@suse.cz>
      Link: https://lore.kernel.org/r/20220519105235.31397-1-jack@suse.cz
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  2. 18 May, 2022 2 commits
    • blk-cgroup: delete rcu_read_lock_held() WARN_ON_ONCE() · 1305e2c9
      Jens Axboe authored
      A previous commit got rid of the unnecessary rcu_read_lock() inside
      the IRQ-disabling queue_lock, but this debug statement was left
      behind. It is now firing since we are indeed not inside an RCU
      read-side critical section, but we don't need to be: the IRQ-disabled
      queue_lock already keeps us preempt-safe.
      
      Get rid of the check, as we have a lockdep assert for holding the
      queue lock right after it anyway.
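
      A rough sketch of the pattern being changed (illustrative only; the
      function name update_blkg_stats is hypothetical, not the function
      touched by this commit): the caller already holds the IRQ-disabling
      queue_lock, so the lockdep assertion is the check that matters and
      the RCU warning can simply be dropped.

      #include <linux/blkdev.h>
      #include <linux/lockdep.h>

      static void update_blkg_stats(struct request_queue *q)
      {
              /* WARN_ON_ONCE(!rcu_read_lock_held());  <- the kind of check removed */
              lockdep_assert_held(&q->queue_lock);    /* the check that matters */

              /*
               * ... update per-blkg state; the IRQ-disabled queue_lock already
               * prevents preemption, so no RCU read-side section is needed ...
               */
      }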
      
      Link: https://lore.kernel.org/linux-block/46253c48-81cb-0787-20ad-9133afdd9e21@samsung.com/
      Reported-by: Marek Szyprowski <m.szyprowski@samsung.com>
      Fixes: 77c570a1 ("blk-cgroup: Remove unnecessary rcu_read_lock/unlock()")
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • blk-throttle: Set BIO_THROTTLED when bio has been throttled · 5a011f88
      Laibin Qiu authored
      1. Currently, every bio has the BIO_THROTTLED flag set after
      __blk_throtl_bio() returns.

      2. If a bio needs to be throttled, the pending timer is started and
      the bio is not submitted directly; it is submitted later from
      blk_throtl_dispatch_work_fn() when the timer expires. With the
      current ordering, however, BIO_THROTTLED is only set on a throttled
      bio after the timer has been started, so if the bio has already been
      dispatched and completed by then, setting the flag is a
      use-after-free, as shown by the KASAN report below.
      
      BUG: KASAN: use-after-free in blk_throtl_bio+0x12f0/0x2c70
      Read of size 2 at addr ffff88801b8902d4 by task fio/26380
      
       dump_stack+0x9b/0xce
       print_address_description.constprop.6+0x3e/0x60
       kasan_report.cold.9+0x22/0x3a
       blk_throtl_bio+0x12f0/0x2c70
       submit_bio_checks+0x701/0x1550
       submit_bio_noacct+0x83/0xc80
       submit_bio+0xa7/0x330
       mpage_readahead+0x380/0x500
       read_pages+0x1c1/0xbf0
       page_cache_ra_unbounded+0x471/0x6f0
       do_page_cache_ra+0xda/0x110
       ondemand_readahead+0x442/0xae0
       page_cache_async_ra+0x210/0x300
       generic_file_buffered_read+0x4d9/0x2130
       generic_file_read_iter+0x315/0x490
       blkdev_read_iter+0x113/0x1b0
       aio_read+0x2ad/0x450
       io_submit_one+0xc8e/0x1d60
       __se_sys_io_submit+0x125/0x350
       do_syscall_64+0x2d/0x40
       entry_SYSCALL_64_after_hwframe+0x44/0xa9
      
      Allocated by task 26380:
       kasan_save_stack+0x19/0x40
       __kasan_kmalloc.constprop.2+0xc1/0xd0
       kmem_cache_alloc+0x146/0x440
       mempool_alloc+0x125/0x2f0
       bio_alloc_bioset+0x353/0x590
       mpage_alloc+0x3b/0x240
       do_mpage_readpage+0xddf/0x1ef0
       mpage_readahead+0x264/0x500
       read_pages+0x1c1/0xbf0
       page_cache_ra_unbounded+0x471/0x6f0
       do_page_cache_ra+0xda/0x110
       ondemand_readahead+0x442/0xae0
       page_cache_async_ra+0x210/0x300
       generic_file_buffered_read+0x4d9/0x2130
       generic_file_read_iter+0x315/0x490
       blkdev_read_iter+0x113/0x1b0
       aio_read+0x2ad/0x450
       io_submit_one+0xc8e/0x1d60
       __se_sys_io_submit+0x125/0x350
       do_syscall_64+0x2d/0x40
       entry_SYSCALL_64_after_hwframe+0x44/0xa9
      
      Freed by task 0:
       kasan_save_stack+0x19/0x40
       kasan_set_track+0x1c/0x30
       kasan_set_free_info+0x1b/0x30
       __kasan_slab_free+0x111/0x160
       kmem_cache_free+0x94/0x460
       mempool_free+0xd6/0x320
       bio_free+0xe0/0x130
       bio_put+0xab/0xe0
       bio_endio+0x3a6/0x5d0
       blk_update_request+0x590/0x1370
       scsi_end_request+0x7d/0x400
       scsi_io_completion+0x1aa/0xe50
       scsi_softirq_done+0x11b/0x240
       blk_mq_complete_request+0xd4/0x120
       scsi_mq_done+0xf0/0x200
       virtscsi_vq_done+0xbc/0x150
       vring_interrupt+0x179/0x390
       __handle_irq_event_percpu+0xf7/0x490
       handle_irq_event_percpu+0x7b/0x160
       handle_irq_event+0xcc/0x170
       handle_edge_irq+0x215/0xb20
       common_interrupt+0x60/0x120
       asm_common_interrupt+0x1e/0x40
      
      Fix this by moving the setting of BIO_THROTTLED under the queue_lock.
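
      A simplified before/after sketch of the ordering (illustrative only,
      not the exact kernel hunks; the _before/_after suffixes are added
      just for the comparison): the flag has to be set while q->queue_lock
      is still held, before the throttled bio can be dispatched by the
      timer work and completed.

      #include <linux/bio.h>
      #include <linux/blkdev.h>

      /*
       * Before: the caller set the flag after __blk_throtl_bio() returned.
       * A throttled bio may already have been dispatched by the timer work
       * and completed (freed) by then, so this store is a use-after-free.
       */
      static bool blk_throtl_bio_before(struct bio *bio)
      {
              bool throttled = __blk_throtl_bio(bio);

              bio_set_flag(bio, BIO_THROTTLED);       /* too late for throttled bios */
              return throttled;
      }

      /*
       * After: set the flag inside __blk_throtl_bio() while q->queue_lock is
       * held, i.e. before blk_throtl_dispatch_work_fn() can touch the bio.
       */
      static bool __blk_throtl_bio_after(struct bio *bio)
      {
              struct request_queue *q = bdev_get_queue(bio->bi_bdev);
              bool throttled = false;

              spin_lock_irq(&q->queue_lock);
              bio_set_flag(bio, BIO_THROTTLED);
              /*
               * ... charge the bio against its throttle group; if it exceeds
               * the limit, queue it and arm the pending timer (throttled = true) ...
               */
              spin_unlock_irq(&q->queue_lock);

              return throttled;
      }
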
      Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
      Reviewed-by: Ming Lei <ming.lei@redhat.com>
      Link: https://lore.kernel.org/r/20220301123919.2381579-1-qiulaibin@huawei.com
      Signed-off-by: Jens Axboe <axboe@kernel.dk>