Commit fa35864e authored by Dominic Curran, committed by David S. Miller

tuntap: Fix for a race in accessing numqueues

A patch for fixing a race between queue selection and changing queues
was introduced in commit 92bb73ea ("tuntap: fix a possible race between
queue selection and changing queues").

That fix used ACCESS_ONCE() to prevent the driver from reading
tun->numqueues more than once within tun_select_queue().
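
For reference, 3.10-era kernels define ACCESS_ONCE() in
include/linux/compiler.h as a volatile cast, which forces the compiler
to emit exactly one load rather than possibly re-reading the variable:

    #define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))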

We have been experiencing 'Divide-by-zero' errors in tun_net_xmit()
since we moved from 3.6 to 3.10, and believe that they come from a
similar source, where the value of tun->numqueues changes to zero
between the first and a subsequent read of tun->numqueues.
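
As a stand-alone sketch of the failure mode (hypothetical user-space
code, not the kernel source), two reads of a shared counter let a
concurrent writer zero it between the bounds check and the division:

    /* Hypothetical illustration of the race, not kernel code: a second
     * read of 'numqueues' observes the zero written by a concurrent
     * queue teardown, after the bounds check already passed on the
     * first read.
     */
    #include <stdio.h>

    static volatile unsigned int numqueues = 1;

    int main(void)
    {
            unsigned int txq = 0, tx_queue_len = 500;

            if (txq >= numqueues)           /* first read sees 1 */
                    return 0;               /* would drop the packet */

            numqueues = 0;                  /* stands in for a concurrent
                                             * detach zeroing the count */

            /* second read sees 0: SIGFPE, the reported divide-by-zero */
            printf("%u\n", tx_queue_len / numqueues);
            return 0;
    }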

The fix is a similar use of ACCESS_ONCE(), as well as a multiply
instead of a divide in the if statement.
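
Distilled into the same kind of stand-alone sketch (hypothetical names
again, not the kernel source), the fixed pattern takes one snapshot of
the counter and compares with a multiplication, which stays well
defined even when the snapshot is zero:

    /* Hypothetical illustration of the fixed pattern, not kernel code:
     * a single snapshot of the shared counter is used for every check,
     * and 'len * n >= limit' replaces 'len >= limit / n', so n == 0
     * can no longer fault.
     */
    #include <stdio.h>

    static volatile unsigned int shared_numqueues = 1;

    int main(void)
    {
            unsigned int txq = 0, queue_len = 10, tx_queue_len = 500;

            /* single read; the kernel uses ACCESS_ONCE(tun->numqueues) */
            unsigned int numqueues = shared_numqueues;

            if (txq >= numqueues)   /* also catches numqueues == 0 */
                    return 0;       /* drop */

            if (queue_len * numqueues >= tx_queue_len)
                    return 0;       /* drop: per-queue backlog full */

            puts("transmit");
            return 0;
    }

The trade-off is that the checks may act on a slightly stale queue
count, which is acceptable here; only the division by zero was fatal.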
Signed-off-by: Dominic Curran <dominic.curran@citrix.com>
Cc: Jason Wang <jasowang@redhat.com>
Cc: Maxim Krasnyansky <maxk@qti.qualcomm.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Acked-by: Max Krasnyansky <maxk@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
parent bdf4351b
--- a/drivers/net/tun.c
+++ b/drivers/net/tun.c
@@ -738,15 +738,17 @@ static netdev_tx_t tun_net_xmit(struct sk_buff *skb, struct net_device *dev)
 	struct tun_struct *tun = netdev_priv(dev);
 	int txq = skb->queue_mapping;
 	struct tun_file *tfile;
+	u32 numqueues = 0;
 
 	rcu_read_lock();
 	tfile = rcu_dereference(tun->tfiles[txq]);
+	numqueues = ACCESS_ONCE(tun->numqueues);
 
 	/* Drop packet if interface is not attached */
-	if (txq >= tun->numqueues)
+	if (txq >= numqueues)
 		goto drop;
 
-	if (tun->numqueues == 1) {
+	if (numqueues == 1) {
 		/* Select queue was not called for the skbuff, so we extract the
 		 * RPS hash and save it into the flow_table here.
 		 */
@@ -779,8 +781,8 @@ static netdev_tx_t tun_net_xmit(struct sk_buff *skb, struct net_device *dev)
 	/* Limit the number of packets queued by dividing txq length with the
 	 * number of queues.
 	 */
-	if (skb_queue_len(&tfile->socket.sk->sk_receive_queue)
-			  >= dev->tx_queue_len / tun->numqueues)
+	if (skb_queue_len(&tfile->socket.sk->sk_receive_queue) * numqueues
+			  >= dev->tx_queue_len)
 		goto drop;
 
 	if (unlikely(skb_orphan_frags(skb, GFP_ATOMIC)))