Commit 0a9a25ca authored by Ming Lei, committed by Jens Axboe

block: let blkcg_gq grab request queue's refcnt

Throughout the lifetime of a blkcg_gq instance, ->q is referenced: for
example, ->pd_free_fn() is called in blkg_free(), and throtl_pd_free()
may still touch the request queue via &tg->service_queue.pending_timer,
which is handled by throtl_pending_timer_fn(). So it is reasonable for
the blkcg_gq instance to hold a reference on the request queue.

Previously blkcg_exit_queue() was called from blk_release_queue(), and
the use-after-free was hard to avoid. But now that commit 1059699f ("block:
move blkcg initialization/destroy into disk allocation/release handler")
has been merged into for-5.18/block, the issue becomes simple to fix by
grabbing the request queue's refcnt.
Reported-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20220318130144.1066064-3-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
parent ee37eddb
block/blk-cgroup.c

@@ -82,6 +82,8 @@ static void blkg_free(struct blkcg_gq *blkg)
 		if (blkg->pd[i])
 			blkcg_policy[i]->pd_free_fn(blkg->pd[i]);
 
+	if (blkg->q)
+		blk_put_queue(blkg->q);
 	free_percpu(blkg->iostat_cpu);
 	percpu_ref_exit(&blkg->refcnt);
 	kfree(blkg);
@@ -167,6 +169,9 @@ static struct blkcg_gq *blkg_alloc(struct blkcg *blkcg, struct request_queue *q,
 	if (!blkg->iostat_cpu)
 		goto err_free;
 
+	if (!blk_get_queue(q))
+		goto err_free;
+
 	blkg->q = q;
 	INIT_LIST_HEAD(&blkg->q_node);
 	spin_lock_init(&blkg->async_bio_lock);
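
The pattern applied here generalizes: a child object pins its parent with a
reference so that the child's teardown code can still dereference the parent
safely, even after the parent's owner has dropped its own reference. The
userspace sketch below only illustrates that idea and is not kernel code:
struct queue, struct blkg, queue_get() and queue_put() are hypothetical
stand-ins for request_queue, blkcg_gq, blk_get_queue() and blk_put_queue(),
and the plain integer refcount stands in for the kernel's atomic refcounting.

/* Minimal userspace illustration of the child-pins-parent refcount pattern. */
#include <stdio.h>
#include <stdlib.h>

struct queue {                     /* stand-in for struct request_queue */
	int refcnt;
};

struct blkg {                      /* stand-in for struct blkcg_gq */
	struct queue *q;
};

static struct queue *queue_get(struct queue *q)
{
	q->refcnt++;               /* the kernel uses atomic refcounting */
	return q;
}

static void queue_put(struct queue *q)
{
	if (--q->refcnt == 0) {
		printf("queue freed\n");
		free(q);
	}
}

static struct blkg *blkg_alloc(struct queue *q)
{
	struct blkg *blkg = calloc(1, sizeof(*blkg));

	if (!blkg)
		return NULL;
	blkg->q = queue_get(q);    /* mirrors blk_get_queue(q) in the patch */
	return blkg;
}

static void blkg_free(struct blkg *blkg)
{
	/* ->q is still valid here even if its owner already dropped it */
	queue_put(blkg->q);        /* mirrors blk_put_queue(blkg->q) */
	free(blkg);
}

int main(void)
{
	struct queue *q = calloc(1, sizeof(*q));
	struct blkg *blkg;

	q->refcnt = 1;             /* owner's initial reference */
	blkg = blkg_alloc(q);
	queue_put(q);              /* owner releases; queue survives via blkg */
	blkg_free(blkg);           /* last reference drops, queue freed here */
	return 0;
}

As in the patch, the put is placed after the work that may still touch the
parent (in the commit, the ->pd_free_fn() callbacks run before
blk_put_queue() in blkg_free()).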