Commit 0eddba52 authored by Anton Vorontsov, committed by David S. Miller

gianfar: Fix TX ring processing on SMP machines

Starting with commit a3bc1f11 ("gianfar: Revive SKB
recycling"), the gianfar driver sooner or later stops transmitting
any packets on SMP machines.

start_xmit() prepares a new skb for transmission; generally it does
three things (a simplified sketch follows the list):

1. sets up all BDs (marks them ready to send), except the first one.
2. stores the skb into tx_queue->tx_skbuff so that clean_tx_ring()
   can clean it up later.
3. sets up the first BD, i.e. marks it ready.

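As a rough, self-contained model of that ordering (the names, RING_SIZE
and TXBD_READY below are illustrative, and the BD ring and skb table are
collapsed onto a single index for brevity -- this is not the actual
gianfar code):

    #include <stddef.h>

    #define RING_SIZE  8
    #define TXBD_READY 0x8000u

    struct txbd {
        unsigned int lstatus;           /* status word the hw clears when done */
        void *bufPtr;
    };

    struct txq_model {
        struct txbd bd_ring[RING_SIZE];
        void *tx_skbuff[RING_SIZE];     /* skb table, freed by cleanup */
        unsigned int skb_curtx;
    };

    /* Ordering of start_xmit() as listed above (before the fix). */
    static void start_xmit_model(struct txq_model *q, void *skb, int nr_bds)
    {
        unsigned int first = q->skb_curtx;
        int i;

        /* 1. mark every BD of the frame except the first one ready */
        for (i = 1; i < nr_bds; i++)
            q->bd_ring[(first + i) % RING_SIZE].lstatus |= TXBD_READY;

        /* 2. store the skb so clean_tx_ring() can free it later */
        q->tx_skbuff[first] = skb;

        /* 3. only now mark the first BD ready to send */
        q->bd_ring[first].lstatus |= TXBD_READY;

        /* (the real driver keeps separate cursors for BDs and skbs) */
        q->skb_curtx = (first + nr_bds) % RING_SIZE;
    }
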
Here is what clean_tx_ring() does (again sketched after the list):

1. reads skbs from tx_queue->tx_skbuff
2. checks if the *last* BD is ready. If it's still ready [to send],
   then it hasn't been transmitted yet, so clean_tx_ring() returns.
   Otherwise it actually cleans up the BDs. All is OK.

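A matching sketch of the cleanup side, reusing the toy types from the
sketch above (again illustrative, not the real clean_tx_ring()):

    /* The *last* BD of the frame decides whether the frame was sent. */
    static void clean_tx_ring_model(struct txq_model *q, unsigned int slot,
                                    int nr_bds)
    {
        void *skb = q->tx_skbuff[slot];
        struct txbd *last = &q->bd_ring[(slot + nr_bds - 1) % RING_SIZE];

        if (!skb)
            return;                     /* nothing queued in this slot */

        if (last->lstatus & TXBD_READY)
            return;                     /* still ready => not sent yet */

        /* last BD no longer ready => hw is done with it, clean up */
        q->tx_skbuff[slot] = NULL;
        last->lstatus = 0;
        /* ...unmap buffers and free the skb here */
    }
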
Now, if there is just one BD, code flow:

- start_xmit(): stores the skb into tx_skbuff. Note that the first BD
  (which is also the last one) isn't marked as ready yet.
- clean_tx_ring(): sees that the skb is not NULL, *and* its lstatus
  says that it is NOT ready (as if the BD had been sent), so it cleans
  it up (bad!)
- start_xmit(): marks the BD as ready [to send], but it's too late.

We can fix this simply by reordering lstatus/tx_skbuff writes.
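
Expressed as a fragment of the model above (just steps 2 and 3 of the
earlier sketch swapped, with a barrier in between; eieio() is the
PowerPC "enforce in-order execution of I/O" barrier, and the real
change is in the patch below):

        /* 3'. mark the first BD ready first ... */
        q->bd_ring[first].lstatus |= TXBD_READY;

        /* ... then force that write out before publishing the skb, so
         * cleanup can only ever see a non-NULL skb after lstatus was
         * set; "not ready" then really means "already transmitted" */
        eieio();

        /* 2'. only now make the skb visible to clean_tx_ring() */
        q->tx_skbuff[first] = skb;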
Reported-by: Martyn Welch <martyn.welch@ge.com>
Bisected-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com>
Tested-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Tested-by: Martyn Welch <martyn.welch@ge.com>
Cc: Sandeep Gopalpet <Sandeep.Kumar@freescale.com>
Cc: Stable <stable@vger.kernel.org> [2.6.33]
Signed-off-by: David S. Miller <davem@davemloft.net>
parent 4c020a96
@@ -2021,7 +2021,6 @@ static int gfar_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	}
 
 	/* setup the TxBD length and buffer pointer for the first BD */
-	tx_queue->tx_skbuff[tx_queue->skb_curtx] = skb;
 	txbdp_start->bufPtr = dma_map_single(&priv->ofdev->dev, skb->data,
 			skb_headlen(skb), DMA_TO_DEVICE);
 
@@ -2053,6 +2052,10 @@ static int gfar_start_xmit(struct sk_buff *skb, struct net_device *dev)
 
 	txbdp_start->lstatus = lstatus;
 
+	eieio(); /* force lstatus write before tx_skbuff */
+
+	tx_queue->tx_skbuff[tx_queue->skb_curtx] = skb;
+
 	/* Update the current skb pointer to the next entry we will use
 	 * (wrapping if necessary) */
 	tx_queue->skb_curtx = (tx_queue->skb_curtx + 1) &
...