Commit 25a39f7f authored by Haiyang Zhang, committed by David S. Miller

hv_netvsc: Use the num_online_cpus() for channel limit

Since we no longer localize channel/CPU affiliation within one NUMA
node, use num_online_cpus() as the cap on the number of channels,
instead of the number of processors in a NUMA node.

This patch allows a bigger range for tuning the number of channels.
Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
parent e1586241
@@ -1221,7 +1221,6 @@ struct netvsc_device *rndis_filter_device_add(struct hv_device *dev,
 	struct ndis_recv_scale_cap rsscap;
 	u32 rsscap_size = sizeof(struct ndis_recv_scale_cap);
 	u32 mtu, size;
-	const struct cpumask *node_cpu_mask;
 	u32 num_possible_rss_qs;
 	int i, ret;
@@ -1290,14 +1289,8 @@ struct netvsc_device *rndis_filter_device_add(struct hv_device *dev,
 	if (ret || rsscap.num_recv_que < 2)
 		goto out;
 
-	/*
-	 * We will limit the VRSS channels to the number CPUs in the NUMA node
-	 * the primary channel is currently bound to.
-	 *
-	 * This also guarantees that num_possible_rss_qs <= num_online_cpus
-	 */
-	node_cpu_mask = cpumask_of_node(cpu_to_node(dev->channel->target_cpu));
-	num_possible_rss_qs = min_t(u32, cpumask_weight(node_cpu_mask),
+	/* This guarantees that num_possible_rss_qs <= num_online_cpus */
+	num_possible_rss_qs = min_t(u32, num_online_cpus(),
 				    rsscap.num_recv_que);
 	net_device->max_chn = min_t(u32, VRSS_CHANNEL_MAX, num_possible_rss_qs);
...
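For context, a minimal standalone sketch of the channel-cap computation as it reads after this change. The helper name, simplified types, and the assumed VRSS_CHANNEL_MAX value are illustrative only and not taken from the patch; in the driver the same min() chain is built with min_t() on the real fields.

/*
 * Illustrative sketch only: mirrors the min() chain used after this patch.
 * num_online_cpus(), rsscap.num_recv_que, and VRSS_CHANNEL_MAX come from
 * the kernel/driver; this standalone helper is hypothetical.
 */
#include <stdint.h>

#define VRSS_CHANNEL_MAX 64u	/* driver-defined upper bound (value assumed) */

static uint32_t max_channels(uint32_t online_cpus, uint32_t num_recv_que)
{
	/* This guarantees that num_possible_rss_qs <= online_cpus */
	uint32_t num_possible_rss_qs =
		online_cpus < num_recv_que ? online_cpus : num_recv_que;

	/* Cap the result at the driver's maximum VRSS channel count */
	return num_possible_rss_qs < VRSS_CHANNEL_MAX ?
		num_possible_rss_qs : VRSS_CHANNEL_MAX;
}

For example, with 16 online CPUs and a host offering 8 receive queues this yields 8 channels, whereas before the patch the cap would also have been bounded by the CPU count of the primary channel's NUMA node.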