Commit 54c52978 authored by Maher Sanalla, committed by Saeed Mahameed

net/mlx5: Handle SF IRQ request in the absence of SF IRQ pool

In case the SF IRQ pool is not available due to setup limitations,
the SF currently relies on the already-allocated PF IRQs to fulfill
its IRQ vector requests.

However, with the dynamic EQ allocation introduced in the next patch,
not all PF IRQs are guaranteed to be allocated after the driver loads.
In such a case, if an SF requests a completion IRQ without having its
own independent IRQ pool, the SF will have no PF IRQ to use.

To address this scenario, allocate an IRQ for the SF from the PF's IRQ
pool on demand. The new IRQ will be shared between the SF and its PF.
Signed-off-by: Maher Sanalla <msanalla@nvidia.com>
Reviewed-by: Shay Drory <shayd@nvidia.com>
Reviewed-by: Moshe Shemesh <moshe@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
parent 674dd4e2
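
The fallback in the diff below hinges on the kernel's pointer-encoded
errors from <linux/err.h>: a failed request returns ERR_PTR(-errno), and
the caller uses PTR_ERR() to tell the sentinel -ENOENT ("no SF IRQ pool
exists, fall back to the PF") apart from any other error, which simply
propagates. A minimal standalone sketch of that pattern (userspace, with
the err.h macros re-derived locally and a hypothetical sf_pool_request()
stand-in; not the driver code itself):

#include <errno.h>
#include <stdio.h>

/* Userspace re-creation of the kernel's <linux/err.h> encoding, for
 * illustration only: error codes live in the top page of the address
 * space, so one pointer can carry either a valid object or an errno.
 */
#define MAX_ERRNO	4095
#define ERR_PTR(err)	((void *)(long)(err))
#define PTR_ERR(ptr)	((long)(ptr))
#define IS_ERR(ptr)	((unsigned long)(ptr) >= (unsigned long)-MAX_ERRNO)

/* Hypothetical stand-in for the SF pool request: -ENOENT is the agreed
 * sentinel for "no SF IRQ pool at all"; other errors are real failures.
 */
static void *sf_pool_request(int sf_pool_exists)
{
	if (!sf_pool_exists)
		return ERR_PTR(-ENOENT);
	return ERR_PTR(-ENOMEM);
}

int main(void)
{
	void *irq = sf_pool_request(0);

	if (IS_ERR(irq)) {
		if (PTR_ERR(irq) == -ENOENT)
			puts("no SF pool: fall back to the PF IRQ pool");
		else
			printf("hard failure: %ld\n", PTR_ERR(irq));
	}
	return 0;
}
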
@@ -850,14 +850,29 @@ static int mlx5_cpumask_default_spread(int numa_node, int index)
 	return found_cpu;
 }
 
+static struct cpu_rmap *mlx5_eq_table_get_pci_rmap(struct mlx5_core_dev *dev)
+{
+#ifdef CONFIG_RFS_ACCEL
+#ifdef CONFIG_MLX5_SF
+	if (mlx5_core_is_sf(dev))
+		return dev->priv.parent_mdev->priv.eq_table->rmap;
+#endif
+	return dev->priv.eq_table->rmap;
+#else
+	return NULL;
+#endif
+}
+
 static int comp_irq_request_pci(struct mlx5_core_dev *dev, u16 vecidx)
 {
 	struct mlx5_eq_table *table = dev->priv.eq_table;
+	struct cpu_rmap *rmap;
 	struct mlx5_irq *irq;
 	int cpu;
 
+	rmap = mlx5_eq_table_get_pci_rmap(dev);
 	cpu = mlx5_cpumask_default_spread(dev->priv.numa_node, vecidx);
-	irq = mlx5_irq_request_vector(dev, cpu, vecidx, &table->rmap);
+	irq = mlx5_irq_request_vector(dev, cpu, vecidx, &rmap);
 	if (IS_ERR(irq))
 		return PTR_ERR(irq);
 
@@ -883,8 +898,13 @@ static int comp_irq_request_sf(struct mlx5_core_dev *dev, u16 vecidx)
 	struct mlx5_irq *irq;
 
 	irq = mlx5_irq_affinity_irq_request_auto(dev, &table->used_cpus, vecidx);
-	if (IS_ERR(irq))
+	if (IS_ERR(irq)) {
+		/* In case SF irq pool does not exist, fallback to the PF irqs */
+		if (PTR_ERR(irq) == -ENOENT)
+			return comp_irq_request_pci(dev, vecidx);
+
 		return PTR_ERR(irq);
+	}
 
 	return xa_err(xa_store(&table->comp_irqs, vecidx, irq, GFP_KERNEL));
 }
@@ -191,17 +191,13 @@ struct mlx5_irq *mlx5_irq_affinity_irq_request_auto(struct mlx5_core_dev *dev,
 	struct irq_affinity_desc af_desc = {};
 	struct mlx5_irq *irq;
 
+	if (!mlx5_irq_pool_is_sf_pool(pool))
+		return ERR_PTR(-ENOENT);
+
 	af_desc.is_managed = 1;
 	cpumask_copy(&af_desc.mask, cpu_online_mask);
 	cpumask_andnot(&af_desc.mask, &af_desc.mask, used_cpus);
-	if (mlx5_irq_pool_is_sf_pool(pool))
-		irq = mlx5_irq_affinity_request(pool, &af_desc);
-	else
-		/* In case SF pool doesn't exists, fallback to the PF IRQs.
-		 * The PF IRQs are already allocated and binded to CPU
-		 * at this point. Hence, only an index is needed.
-		 */
-		irq = mlx5_irq_request(dev, vecidx, NULL, NULL);
+	irq = mlx5_irq_affinity_request(pool, &af_desc);
 
 	if (IS_ERR(irq))
 		return irq;
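
A note on the bookkeeping at the end of comp_irq_request_sf() above:
xa_store() returns the previous entry at that index (NULL if none) on
success, or an error encoded in the returned pointer if the store itself
failed, and xa_err() turns that into 0 or a negative errno. A kernel-style
sketch of the same idiom (the helper name comp_irqs_track() is
hypothetical, not part of the driver; this compiles only in-kernel):

#include <linux/types.h>
#include <linux/xarray.h>

struct mlx5_irq;	/* opaque driver handle */

/* Hypothetical helper: keep one mlx5_irq pointer per completion vector
 * index in an xarray.
 */
static int comp_irqs_track(struct xarray *comp_irqs, u16 vecidx,
			   struct mlx5_irq *irq)
{
	/* xa_store() may allocate internal nodes (hence GFP_KERNEL) and
	 * returns the old entry, or an xa_err-encoded pointer on failure,
	 * e.g. -ENOMEM.
	 */
	void *old = xa_store(comp_irqs, vecidx, irq, GFP_KERNEL);

	/* 0 on success, negative errno on failure */
	return xa_err(old);
}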