Commit 30e06628 authored by Jens Axboe

nvme: fix boot hang with only being able to get one IRQ vector

NVMe always asks for io_queues + 1 worth of IRQ vectors, which
means that even when we scale all the way down, we still ask
for 2 vectors and get -ENOSPC in return if the system can't
support more than 1.

Getting just 1 vector is fine, it just means that we'll have
1 IO queue and 1 admin queue, with a shared vector between them.
Check for this case and don't add our + 1 if it happens.

Fixes: 3b6592f7 ("nvme: utilize two queue maps, one for reads and one for writes")
Reported-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
parent d16a6766
@@ -2073,7 +2073,7 @@ static int nvme_setup_irqs(struct nvme_dev *dev, int nr_io_queues)
 		.nr_sets = ARRAY_SIZE(irq_sets),
 		.sets = irq_sets,
 	};
-	int result;
+	int result = 0;
 
 	/*
 	 * For irq sets, we have to ask for minvec == maxvec. This passes
@@ -2088,9 +2088,16 @@ static int nvme_setup_irqs(struct nvme_dev *dev, int nr_io_queues)
 			affd.nr_sets = 1;
 
 		/*
-		 * Need IRQs for read+write queues, and one for the admin queue
+		 * Need IRQs for read+write queues, and one for the admin queue.
+		 * If we can't get more than one vector, we have to share the
+		 * admin queue and IO queue vector. For that case, don't add
+		 * an extra vector for the admin queue, or we'll continue
+		 * asking for 2 and get -ENOSPC in return.
 		 */
-		nr_io_queues = irq_sets[0] + irq_sets[1] + 1;
+		if (result == -ENOSPC && nr_io_queues == 1)
+			nr_io_queues = 1;
+		else
+			nr_io_queues = irq_sets[0] + irq_sets[1] + 1;
 
 		result = pci_alloc_irq_vectors_affinity(pdev, nr_io_queues,
 				nr_io_queues,
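A minimal userspace sketch of the retry loop this change affects, assuming a system that can only hand out one interrupt vector. The allocator below is a stand-in for pci_alloc_irq_vectors_affinity(), and the names fake_alloc_irq_vectors(), setup_irqs() and available_vectors are made up for illustration; the real driver also splits the IO queues into separate read and write sets via struct irq_affinity.

#include <errno.h>
#include <stdio.h>

static int available_vectors = 1;	/* system can only grant one vector */

/*
 * Stand-in for pci_alloc_irq_vectors_affinity(): returns the number of
 * vectors granted, or -ENOSPC when even the minimum cannot be met.
 */
static int fake_alloc_irq_vectors(int min_vecs, int max_vecs)
{
	if (min_vecs > available_vectors)
		return -ENOSPC;
	return max_vecs < available_vectors ? max_vecs : available_vectors;
}

static int setup_irqs(int nr_io_queues)
{
	int result = 0;
	int vecs;

	do {
		/*
		 * Need vectors for the IO queues plus one for the admin
		 * queue. If the last attempt failed with -ENOSPC and we are
		 * down to a single IO queue, share its vector with the admin
		 * queue: ask for 1, not 2, so we stop failing forever.
		 */
		if (result == -ENOSPC && nr_io_queues == 1)
			vecs = 1;
		else
			vecs = nr_io_queues + 1;

		result = fake_alloc_irq_vectors(vecs, vecs);
		if (result == -ENOSPC) {
			/* Scale down and retry; give up when nothing is left. */
			if (--nr_io_queues == 0)
				return result;
			printf("-ENOSPC, retrying with %d IO queue(s)\n",
			       nr_io_queues);
		}
	} while (result == -ENOSPC);

	printf("got %d vector(s) for %d IO queue(s)\n", result, nr_io_queues);
	return 0;
}

int main(void)
{
	return setup_irqs(4) ? 1 : 0;
}

Without the "result == -ENOSPC && nr_io_queues == 1" branch, the loop keeps asking for 2 vectors, keeps getting -ENOSPC, decrements nr_io_queues to 0 and gives up, which corresponds to the failure the commit message describes.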