Commit 90987650 authored by Eric Dumazet, committed by David S. Miller

net: call skb_defer_free_flush() before each napi_poll()

skb_defer_free_flush() can consume cpu cycles,
so it seems better to call it in the inner loop:

- Potentially frees page/skb that will be reallocated while hot.

- Account for the cpu cycles in the @time_limit determination.

- Keep softnet_data.defer_count small to reduce chances for
  skb_attempt_defer_free() to send an IPI.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
parent 39564c3f
@@ -6655,6 +6655,8 @@ static __latent_entropy void net_rx_action(struct softirq_action *h)
 	for (;;) {
 		struct napi_struct *n;
 
+		skb_defer_free_flush(sd);
+
 		if (list_empty(&list)) {
 			if (!sd_has_rps_ipi_waiting(sd) && list_empty(&repoll))
 				goto end;
@@ -6684,8 +6686,7 @@ static __latent_entropy void net_rx_action(struct softirq_action *h)
 		__raise_softirq_irqoff(NET_RX_SOFTIRQ);
 
 	net_rps_action_and_irq_enable(sd);
-end:
-	skb_defer_free_flush(sd);
+end:;
 }
 
 struct netdev_adjacent {