Commit 78e8311a authored by David S. Miller

Merge branch 'net-rps-lockless'

Jason Xing says:

====================
locklessly protect left members in struct rps_dev_flow

From: Jason Xing <kernelxing@tencent.com>

Since Eric already made a more involved lockless change to the last_qtail
member[1] of struct rps_dev_flow, the remaining members are easier to
convert in the same way.

One important thing I would like to share, quoting Eric:
"rflow is located in rxqueue->rps_flow_table, it is thus private to current
thread. Only one cpu can service an RX queue at a time."
So we only need to pay attention to the reader in rps_may_expire_flow() and
the writer in set_rps_cpu(); those two run in different contexts.

[1]:
https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git/commit/?id=3b4cf29bdab
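
To illustrate that pairing, here is a minimal userspace sketch (NOT kernel
code) of the pattern this series applies: the writer publishes each member
with WRITE_ONCE() while a reader in another context samples it with
READ_ONCE(), so every access is a single, untorn load or store. The
READ_ONCE()/WRITE_ONCE() macros below are simplified volatile-cast stand-ins
for the kernel's, and struct flow, writer() and reader() are made-up names
mirroring the set_rps_cpu() / rps_may_expire_flow() roles:

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

/* simplified stand-ins for the kernel macros */
#define READ_ONCE(x)	 (*(volatile __typeof__(x) *)&(x))
#define WRITE_ONCE(x, v) (*(volatile __typeof__(x) *)&(x) = (v))

struct flow {			/* stand-in for struct rps_dev_flow */
	uint16_t cpu;
	uint16_t filter;
	uint32_t last_qtail;
};

static struct flow rflow;

static void *writer(void *arg)	/* plays the set_rps_cpu() role */
{
	(void)arg;
	for (uint32_t head = 1; head <= 100000; head++) {
		/* publish each member with a single, untorn store */
		WRITE_ONCE(rflow.filter, (uint16_t)head);
		WRITE_ONCE(rflow.last_qtail, head);
		WRITE_ONCE(rflow.cpu, (uint16_t)(head % 8));
	}
	return NULL;
}

static void *reader(void *arg)	/* plays the rps_may_expire_flow() role */
{
	uint32_t last = 0;

	(void)arg;
	for (int i = 0; i < 100000; i++)
		last = READ_ONCE(rflow.last_qtail);	/* untorn load */
	printf("reader last saw last_qtail=%u\n", last);
	return NULL;
}

int main(void)
{
	pthread_t w, r;

	pthread_create(&w, NULL, writer, NULL);
	pthread_create(&r, NULL, reader, NULL);
	pthread_join(w, NULL);
	pthread_join(r, NULL);
	return 0;
}

Build with cc -pthread. The point is not the data produced, but that the
compiler may no longer tear, fuse, or re-load the annotated accesses.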

v3
Link: https://lore.kernel.org/all/20240417062721.45652-1-kerneljasonxing@gmail.com/
1. adjust the protection in the right way (Eric)

v2
1. fix passing qtail with the wrong type.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
parents 00ac0dc3 f7b60cce
@@ -4507,7 +4507,7 @@ set_rps_cpu(struct net_device *dev, struct sk_buff *skb,
 		struct netdev_rx_queue *rxqueue;
 		struct rps_dev_flow_table *flow_table;
 		struct rps_dev_flow *old_rflow;
-		u32 flow_id;
+		u32 flow_id, head;
 		u16 rxq_index;
 		int rc;
@@ -4530,16 +4530,16 @@ set_rps_cpu(struct net_device *dev, struct sk_buff *skb,
 			goto out;
 		old_rflow = rflow;
 		rflow = &flow_table->flows[flow_id];
-		rflow->filter = rc;
-		if (old_rflow->filter == rflow->filter)
-			old_rflow->filter = RPS_NO_FILTER;
+		WRITE_ONCE(rflow->filter, rc);
+		if (old_rflow->filter == rc)
+			WRITE_ONCE(old_rflow->filter, RPS_NO_FILTER);
 	out:
 #endif
-		rflow->last_qtail =
-			READ_ONCE(per_cpu(softnet_data, next_cpu).input_queue_head);
+		head = READ_ONCE(per_cpu(softnet_data, next_cpu).input_queue_head);
+		rps_input_queue_tail_save(&rflow->last_qtail, head);
 	}
-	rflow->cpu = next_cpu;
+	WRITE_ONCE(rflow->cpu, next_cpu);
 	return rflow;
 }
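
For reference, the rps_input_queue_tail_save() helper used above comes from
Eric's earlier change [1]; at the time of this merge it is roughly the
following WRITE_ONCE() wrapper (check include/net/rps.h in the tree for the
exact definition):

static inline void rps_input_queue_tail_save(u32 *dest, u32 tail)
{
#ifdef CONFIG_RPS
	WRITE_ONCE(*dest, tail);
#endif
}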
@@ -4619,7 +4619,7 @@ static int get_rps_cpu(struct net_device *dev, struct sk_buff *skb,
 		if (unlikely(tcpu != next_cpu) &&
 		    (tcpu >= nr_cpu_ids || !cpu_online(tcpu) ||
 		     ((int)(READ_ONCE(per_cpu(softnet_data, tcpu).input_queue_head) -
-		      READ_ONCE(rflow->last_qtail))) >= 0)) {
+		      rflow->last_qtail)) >= 0)) {
 			tcpu = next_cpu;
 			rflow = set_rps_cpu(dev, skb, rflow, next_cpu);
 		}
@@ -4672,7 +4672,7 @@ bool rps_may_expire_flow(struct net_device *dev, u16 rxq_index,
 	if (flow_table && flow_id <= flow_table->mask) {
 		rflow = &flow_table->flows[flow_id];
 		cpu = READ_ONCE(rflow->cpu);
-		if (rflow->filter == filter_id && cpu < nr_cpu_ids &&
+		if (READ_ONCE(rflow->filter) == filter_id && cpu < nr_cpu_ids &&
 		    ((int)(READ_ONCE(per_cpu(softnet_data, cpu).input_queue_head) -
 			   READ_ONCE(rflow->last_qtail)) <
 		     (int)(10 * flow_table->mask)))