1. 24 Jun, 2013 32 commits
  2. 21 Jun, 2013 1 commit
  3. 20 Jun, 2013 7 commits
    • ndisc: Convert use of typedef ctl_table to struct ctl_table · fedaf4ff
      Joe Perches authored
      This typedef is unnecessary and should just be removed.
      Signed-off-by: Joe Perches <joe@perches.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
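      A minimal sketch of what this conversion looks like in practice; the
      prototype below is illustrative (demo_sysctl_handler is a made-up name),
      not the actual hunk from net/ipv6/ndisc.c:

          /* Before: relies on the old ctl_table typedef from <linux/sysctl.h> */
          int demo_sysctl_handler(ctl_table *ctl, int write,
                                  void __user *buffer, size_t *lenp, loff_t *ppos);

          /* After: names the struct directly; no behavioural change */
          int demo_sysctl_handler(struct ctl_table *ctl, int write,
                                  void __user *buffer, size_t *lenp, loff_t *ppos);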
    • ipv6: Convert use of typedef ctl_table to struct ctl_table · 9e8cda3b
      Joe Perches authored
      This typedef is unnecessary and should just be removed.
      Signed-off-by: Joe Perches <joe@perches.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • inet: frag, remove an empty ifdef · af92e542
      Rami Rosen authored
      This patch removes an empty ifdef from inet_frag_intern()
      in net/ipv4/inet_fragment.c.
      
      commit b67bfe0d
      (hlist: drop the node parameter from iterators) removed hlist from
      net/ipv4/inet_fragment.c, but did not remove the enclosing ifdef
      block, which is now empty.
      Signed-off-by: Rami Rosen <ramirose@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
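      The shape of the cleanup, sketched; the config symbol below is
      illustrative and not necessarily the one that guarded the removed code:

          /* Before: the iterator rework left nothing between the guards */
          #ifdef CONFIG_SMP
          #endif

          /* After: the empty preprocessor block is deleted outright */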
    • htb: refactor struct htb_sched fields for performance · c9364636
      Eric Dumazet authored
      htb_sched structures are big, and a source of false sharing on SMP.
      
      Every time a packet is queued or dequeued, many cache lines must be
      touched because the structures are not laid out properly.
      
      By carefully splitting htb_sched in two parts and defining sub-structures
      to increase data locality, we can improve performance dramatically on
      SMP.
      
      New htb_prio structure can also be used in htb_class to increase data
      locality.
      
      I got a 26% performance increase on a 24-thread machine, with 200
      concurrent netperf sessions in TCP_RR mode, using an HTB hierarchy of 4 classes.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Tom Herbert <therbert@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
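      A hedged sketch of the technique; the struct and field names below are
      illustrative and do not reproduce the actual htb_sched layout:

          /* Group fields by access pattern: data touched on every
           * enqueue/dequeue stays packed together, while setup-time
           * configuration starts on its own cache line (via
           * ____cacheline_aligned_in_smp) so it cannot false-share
           * with the hot path.
           */
          struct demo_prio {                      /* hot per-priority data */
                  struct rb_root  feed;
                  struct rb_node  *ptr;
          };

          struct demo_sched {
                  /* written on every enqueue/dequeue */
                  struct demo_prio hlevel[8];
                  unsigned long    row_mask;

                  /* configuration, read-mostly after setup */
                  int              defcls ____cacheline_aligned_in_smp;
                  int              rate2quantum;
          };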
    • tcp: introduce a per-route knob for quick ack · bcefe17c
      Cong Wang authored
      In previous discussions, I tried to find some reasonable heuristics
      for delayed ACK; however, this does not seem possible, according to Eric:
      
      	"ACKS might also be delayed because of bidirectional
      	traffic, and is more controlled by the application
      	response time. TCP stack can not easily estimate it."
      
      	"ACK can be incredibly useful to recover from losses in
      	a short time.
      
      	The vast majority of TCP sessions are short-lived, and we
      	send one ACK per received segment anyway at the beginning or
      	on retransmits to let the sender smoothly increase its cwnd,
      	so an auto-tuning facility won't help them that much."
      
      and according to David:
      
      	"ACKs are the only information we have to detect loss.
      
      	And, for the same reasons that TCP VEGAS is fundamentally
      	broken, we cannot measure the pipe or some other
      	receiver-side-visible piece of information to determine
      	when it's "safe" to stretch ACK.
      
      	And even if it's "safe", we should not do it so that losses are
      	accurately detected and we don't spuriously retransmit.
      
      	The only way to know when the bandwidth increases is to
      	"test" it, by sending more and more packets until drops happen.
      	That's why all successful congestion control algorithms must
      	operate on explicitly tested pieces of information.
      
      	Similarly, it's not really possible to universally know if
      	it's safe to stretch ACK or not."
      
      It still makes sense to enable or disable quick ack mode the way
      the TCP_QUICKACK socket option does.
      
      This is similar to the TCP_QUICKACK option, but is for people who
      can't modify the source code and still want to control TCP delayed
      ACK behavior. As David suggested, this should have per-path scope,
      since different paths may want different behaviors.
      
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      Cc: Rick Jones <rick.jones2@hp.com>
      Cc: Stephen Hemminger <stephen@networkplumber.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Thomas Graf <tgraf@suug.ch>
      CC: David Laight <David.Laight@ACULAB.COM>
      Signed-off-by: Cong Wang <amwang@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
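      For context, the socket-level knob this per-route setting mirrors; a
      minimal userspace sketch (demo_set_quickack is a made-up helper name):

          #include <netinet/in.h>
          #include <netinet/tcp.h>
          #include <sys/socket.h>

          /* Disable delayed ACKs on one connected socket.  The per-route
           * knob added here gives a similar effect to applications whose
           * source cannot be modified.
           */
          static int demo_set_quickack(int fd)
          {
                  int one = 1;

                  return setsockopt(fd, IPPROTO_TCP, TCP_QUICKACK,
                                    &one, sizeof(one));
          }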
    • Dave Jones · 2c0740e4
    • bnx2: use pdev->pm_cap instead of pci_find_capability(.., PCI_CAP_ID_PM) · 85768271
      Yijing Wang authored
      The PCI core already saves the PM capability register offset in pdev->pm_cap
      in pci_pm_init() during the init path, so we can use pdev->pm_cap instead of
      pci_find_capability(pdev, PCI_CAP_ID_PM) for better performance and simpler code.
      Signed-off-by: Yijing Wang <wangyijing@huawei.com>
      Cc: Michael Chan <mchan@broadcom.com>
      Cc: netdev@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: David S. Miller <davem@davemloft.net>
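      A hedged sketch of the pattern; the helper name is made up and this is
      not a hunk from the bnx2 driver:

          #include <linux/pci.h>

          /* The PM capability offset is cached by the PCI core at probe time
           * in pdev->pm_cap, so there is no need to walk the capability list
           * on every call.
           */
          static int demo_pm_cap_offset(struct pci_dev *pdev)
          {
                  /* before: return pci_find_capability(pdev, PCI_CAP_ID_PM); */
                  return pdev->pm_cap;
          }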