Commit 6a1f12dd authored by Eric Dumazet, committed by Jakub Kicinski

udp: relax atomic operation on sk->sk_rmem_alloc

atomic_add_return() is more expensive than atomic_add()
and seems overkill in UDP rx fast path.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Link: https://lore.kernel.org/r/20240328144032.1864988-3-edumazet@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
parent 60557969
@@ -1516,12 +1516,7 @@ int __udp_enqueue_schedule_skb(struct sock *sk, struct sk_buff *skb)
 	size = skb->truesize;
 	udp_set_dev_scratch(skb);
 
-	/* we drop only if the receive buf is full and the receive
-	 * queue contains some other skb
-	 */
-	rmem = atomic_add_return(size, &sk->sk_rmem_alloc);
-	if (rmem > (size + (unsigned int)sk->sk_rcvbuf))
-		goto uncharge_drop;
+	atomic_add(size, &sk->sk_rmem_alloc);
 
 	spin_lock(&list->lock);
 	err = udp_rmem_schedule(sk, size);
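
As a rough userspace illustration of the change (not the kernel code itself), the C11 sketch below contrasts the two charging styles: a value-returning fetch-add whose result feeds a limit check, standing in for atomic_add_return(), versus a relaxed add whose result is discarded, standing in for atomic_add(). All names here (charge_rmem_strict, charge_rmem_relaxed, rmem_alloc, rcvbuf) are invented for the example.

/* Userspace C11 sketch of the two charging styles; not the kernel code.
 * Names (rmem_alloc, rcvbuf, charge_*) are illustrative only.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_int rmem_alloc;        /* stand-in for sk->sk_rmem_alloc */
static const int rcvbuf = 212992;    /* stand-in for sk->sk_rcvbuf */

/* Old style: the post-add value is needed for the limit check, so a
 * value-returning atomic (the analogue of atomic_add_return()) is used
 * and the charge is undone on failure.
 */
static bool charge_rmem_strict(int size)
{
	/* atomic_fetch_add() returns the pre-add value; add size to get
	 * the post-add value, mimicking atomic_add_return().
	 */
	int rmem = atomic_fetch_add(&rmem_alloc, size) + size;

	if (rmem > size + rcvbuf) {
		atomic_fetch_sub(&rmem_alloc, size);  /* uncharge and drop */
		return false;
	}
	return true;
}

/* New style: only the accounting side effect is needed, so a plain,
 * relaxed add with the result discarded (the analogue of atomic_add())
 * is enough; the rcvbuf check is assumed to happen elsewhere.
 */
static void charge_rmem_relaxed(int size)
{
	atomic_fetch_add_explicit(&rmem_alloc, size, memory_order_relaxed);
}

int main(void)
{
	if (charge_rmem_strict(1500))
		printf("strict charge ok, rmem_alloc=%d\n", atomic_load(&rmem_alloc));

	charge_rmem_relaxed(1500);
	printf("relaxed charge, rmem_alloc=%d\n", atomic_load(&rmem_alloc));
	return 0;
}

In the kernel's atomic API, value-returning RMW operations such as atomic_add_return() are fully ordered, while void operations such as atomic_add() carry no ordering guarantee, which is what makes the plain add cheaper on weakly ordered architectures when the result is not needed.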