1. 04 Jan, 2012 4 commits
  2. 03 Jan, 2012 18 commits
  3. 02 Jan, 2012 4 commits
  4. 01 Jan, 2012 1 commit
  5. 31 Dec, 2011 6 commits
  6. 30 Dec, 2011 7 commits
    • Ajit Khaparde
      be2net: query link status in be_open() · b236916a
      Ajit Khaparde authored
      be2net gets an async link-status notification from the FW when it creates
      an MCC queue. In some cases this gratuitous notification is not received
      from the FW. To cover these cases, explicitly query the link status in
      be_open().
      Signed-off-by: Vasundhara Volam <vasundhara.volam@emulex.com>
      Signed-off-by: Sathya Perla <sathya.perla@emulex.com>
      Signed-off-by: Ajit Khaparde <ajit.khaparde@emulex.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • Ajit Khaparde
      be2net: fix be_vlan_add/rem_vid · 80817cbf
      Ajit Khaparde authored
      1) fix be_vlan_add/rem_vid to return the proper status
      2) perform the appropriate housekeeping only if the firmware command succeeds
      Signed-off-by: Ajit Khaparde <ajit.khaparde@emulex.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • Yevgeny Petrilin
      mlx4_en: nullify cq->vector field when closing completion queue · cd3109d2
      Yevgeny Petrilin authored
      Not doing so caused loss of connectivity when changing the ring size.
      Signed-off-by: Yevgeny Petrilin <yevgenyp@mellanox.co.il>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • Eric Dumazet
      netem: fix classful handling · 50612537
      Eric Dumazet authored
      Commit 10f6dfcf (Revert "sch_netem: Remove classful functionality")
      reintroduced classful functionality to netem, but broke basic netem
      behavior:
      
      netem uses a t(ime)fifo queue and stores timestamps in skb->cb[].
      
      If the qdisc is changed, time constraints are not respected and the
      other qdisc can destroy skb->cb[] and block netem at dequeue time.
      
      Fix this by always using the internal tfifo, and optionally attach a
      child qdisc (or a tree of qdiscs) to netem.
      
      Example of use:
      
      DEV=eth3
      tc qdisc del dev $DEV root
      tc qdisc add dev $DEV root handle 30: est 1sec 8sec netem delay 20ms 10ms
      tc qdisc add dev $DEV handle 40:0 parent 30:0 tbf \
      	burst 20480 limit 20480 mtu 1514 rate 32000bps
      
      qdisc netem 30: root refcnt 18 limit 1000 delay 20.0ms  10.0ms
       Sent 190792 bytes 413 pkt (dropped 0, overlimits 0 requeues 0)
       rate 18416bit 3pps backlog 0b 0p requeues 0
      qdisc tbf 40: parent 30: rate 256000bit burst 20Kb/8 mpu 0b lat 0us
       Sent 190792 bytes 413 pkt (dropped 6, overlimits 10 requeues 0)
       backlog 0b 5p requeues 0
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      CC: Stephen Hemminger <shemminger@vyatta.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • Josh Hunt
      IPv6: Avoid taking write lock for /proc/net/ipv6_route · 32b293a5
      Josh Hunt authored
      During some debugging I needed to look into how /proc/net/ipv6_route
      operated, and in my digging I found that it calls fib6_clean_all(), which
      takes "write_lock_bh(&table->tb6_lock)" before walking the table. I
      found this on 2.6.32, but reading the code I believe the same basic idea
      exists currently. Looking at the rtnetlink code, it only takes
      "read_lock_bh(&table->tb6_lock);" via fib6_dump_table(). While I realize
      reading from proc isn't the recommended way of fetching the ipv6 route
      table, taking a write lock seems unnecessary and would probably cause
      network performance issues.
      
      To verify this I loaded up the ipv6 route table and then ran iperf in 3
      cases:
        * doing nothing
        * reading ipv6 route table via proc
          (while :; do cat /proc/net/ipv6_route > /dev/null; done)
        * reading ipv6 route table via rtnetlink
          (while :; do ip -6 route show table all > /dev/null; done)
      
      * Load the ipv6 route table up with:
        * for ((i = 0;i < 4000;i++)); do ip route add unreachable 2000::$i; done
      
      * iperf commands:
        * client: iperf -i 1 -V -c <ipv6 addr>
        * server: iperf -V -s
      
      * iperf results - 3 runs each (in Mbits/sec)
        * nothing: client: 927,927,927 server: 927,927,927
        * proc: client: 179,97,96,113 server: 142,112,133
        * iproute: client: 928,927,928 server: 927,927,927
      
      lock_stat shows taking the write lock is causing the slowdown. Using this
      info I decided to write a version of fib6_clean_all() which replaces
      write_lock_bh(&table->tb6_lock) with read_lock_bh(&table->tb6_lock). With
      this new function I see the same results as with my rtnetlink iperf test.
      Signed-off-by: Josh Hunt <joshhunt00@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • Pavel Emelyanov
      unix_diag: Fixup RQLEN extension report · c9da99e6
      Pavel Emelyanov authored
      While it's not too late, fix the recently added RQLEN diag extension
      to report rqlen and wqlen the same way TCP does.
      
      I.e. for listening sockets report the ack backlog length (which is the
      input queue length for the socket) in rqlen and the max ack backlog
      length in wqlen; for established sockets report what the CINQ/OUTQ
      ioctls do.
      Signed-off-by: Pavel Emelyanov <xemul@parallels.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>