1. 21 Dec, 2010 9 commits
    • sundance: Wrap up access to ASICCtrl high word with a macro · 24de5285
      Denis Kirjanov authored
      Wrap up access to ASICCtrl high word with a macro
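      Illustratively, such a wrapper might look like this (the macro name
      and usage below are assumptions for this sketch, not necessarily the
      driver's exact code; ASICCtrl is a 32-bit register whose high 16 bits
      sit at offset +2):

              /* hide the open-coded "+ 2" reaching the high 16 bits */
              #define ASIC_HI_WORD(ioaddr)    ((ioaddr) + ASICCtrl + 2)

              /* before: iowrite16(GlobalReset, ioaddr + ASICCtrl + 2); */
              iowrite16(GlobalReset, ASIC_HI_WORD(ioaddr));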
      Signed-off-by: Denis Kirjanov <dkirjanov@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • filter: optimize accesses to ancillary data · 12b16dad
      Eric Dumazet authored
      We can translate pseudo load instructions at filter check time to
      dedicated instructions to speed up filtering and avoid one switch().
      libpcap currently uses SKF_AD_PROTOCOL, but custom filters probably use
      other ancillary accesses.
      
      Note: I assume that ancillary data is always accessed with
      BPF_LD|BPF_?|BPF_ABS instructions, not with BPF_LD|BPF_?|BPF_IND ones
      (the offset given by the K constant, not by K + the X register).
      
      On x86_64, this saves a few bytes of text:
      
      # size net/core/filter.o.*
         text	   data	    bss	    dec	    hex	filename
         4864	      0	      0	   4864	   1300	net/core/filter.o.new
         4944	      0	      0	   4944	   1350	net/core/filter.o.old
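      As a sketch of the check-time translation (BPF_S_ANC_PROTOCOL here
      is a hypothetical dedicated opcode value; BPF_CLASS/BPF_MODE and the
      SKF_AD_* constants are the standard ones from linux/filter.h):

              if (BPF_CLASS(ftest->code) == BPF_LD &&
                  BPF_MODE(ftest->code) == BPF_ABS) {
                      switch (ftest->k) {
                      case SKF_AD_OFF + SKF_AD_PROTOCOL:
                              /* rewrite to a dedicated internal opcode so
                               * the run loop avoids the ancillary switch() */
                              ftest->code = BPF_S_ANC_PROTOCOL; /* hypothetical */
                              break;
                      /* ... one case per ancillary field ... */
                      }
              }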
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • bnx2: remove cancel_work_sync() from remove_one · cb8f4048
      Tejun Heo authored
      Michael pointed out that bnx2_close() already cancels bp->reset_task
      and thus it is guaranteed to be idle when bnx2_remove_one() is called.
      Remove the unnecessary cancel_work_sync() in remove_one.
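      In sketch form, the guarantee reads (function body abbreviated; only
      the cancel_work_sync() call and the work item come from the commit
      text):

              static int bnx2_close(struct net_device *dev)
              {
                      struct bnx2 *bp = netdev_priv(dev);

                      cancel_work_sync(&bp->reset_task); /* waits for the
                                                          * task to finish */
                      /* ... rest of the close path ... */
                      return 0;
              }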
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Michael Chan <mchan@broadcom.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • David S. Miller
    • stmmac: unwind properly in stmmac_dvr_probe() · 34a52f36
      Dan Carpenter authored
      The original code had several problems (see the sketch after this
      list for the corrected unwind pattern):
      *) It had potential NULL dereferences of "priv" and "res".
      *) It released the memory region before it was acquired.
      *) It didn't free "ndev" after it was allocated.
      *) It didn't call unregister_netdev() after calling stmmac_probe().
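      A minimal sketch of the goto-based unwind pattern that addresses all
      four points (labels, error codes, and the later setup step are
      illustrative, not the driver's exact code):

              static int stmmac_dvr_probe(struct platform_device *pdev)
              {
                      struct resource *res;
                      struct net_device *ndev;
                      int ret;

                      res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
                      if (!res)                       /* check before use */
                              return -ENODEV;
                      if (!request_mem_region(res->start, resource_size(res),
                                              pdev->name))
                              return -EBUSY;  /* acquire before any release */

                      ndev = alloc_etherdev(sizeof(struct stmmac_priv));
                      if (!ndev) {
                              ret = -ENOMEM;
                              goto out_release_region;
                      }

                      ret = stmmac_probe(ndev);       /* registers the netdev */
                      if (ret)
                              goto out_free_netdev;

                      ret = later_setup(pdev, ndev);  /* hypothetical step */
                      if (ret)
                              goto out_unregister;

                      return 0;

              out_unregister:
                      unregister_netdev(ndev);        /* undo in reverse order */
              out_free_netdev:
                      free_netdev(ndev);
              out_release_region:
                      release_mem_region(res->start, resource_size(res));
                      return ret;
              }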
      Signed-off-by: Dan Carpenter <error27@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • bnx2x: remove bogus check · 4b97f8e1
      Dan Carpenter authored
      We dereferenced params on the line before, so it's too late to check
      whether params is NULL.  In fact, params can never be NULL, and
      strict_cos is either 0 or 1, so that part of the check is bogus too.
      Let's remove it.
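      Generically, this is the dereference-before-check anti-pattern
      (illustrative, not the bnx2x source):

              u16 val = params->some_field;   /* dereference happens first ... */

              if (!params)                    /* ... so this test is dead code:
                                               * a NULL params would already
                                               * have oopsed above */
                      return;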
      Signed-off-by: Dan Carpenter <error27@gmail.com>
      Acked-by: Eilon Greenstein <eilong@broadcom.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: timestamp cloned packet in dev_queue_xmit_nit · 70978182
      Eric Dumazet authored
      On Friday 17 December 2010 at 10:26 +0100, Eric Dumazet wrote:
      
      >
      > I think we can add this after the latest Changli patch:
      >
      > His patch does one skb_clone() before calling the sniffers.
      > We could set the timestamp on this clone instead of on the original skb.
      >
      > Problem solved.
      >
      
      [PATCH net-next-2.6] net: timestamp cloned packet in dev_queue_xmit_nit
      
      Now that we make one clone of the skb when at least one sniffer might
      take the packet, we can also do the skb timestamping on the clone and
      leave the original packet unchanged.
      
      This is a generalization of commit 8caf1539 (net: sch_netem: Fix an
      inconsistency in ingress netem timestamps.)
      
      This way, we can get a good idea of when packets are delivered to our
      stack (tcpdump -i ifb0), while a tcpdump on the original device gives
      timestamps right before they ingress.
      
      This also speeds up our stack by avoiding timestamping when it is not
      needed.
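      A simplified sketch of the resulting logic (not the exact kernel
      code; net_timestamp_set() is assumed to be the helper that stores the
      current time in an skb):

              static void dev_queue_xmit_nit(struct sk_buff *skb,
                                             struct net_device *dev)
              {
                      struct sk_buff *skb2 = NULL;

                      if (!list_empty(&ptype_all))    /* sniffers present? */
                              skb2 = skb_clone(skb, GFP_ATOMIC);
                      if (skb2)
                              net_timestamp_set(skb2); /* clone only; the
                                                        * original skb is
                                                        * left untouched */
                      /* ... deliver skb2 to each matching packet tap ... */
              }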
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Cc: Changli Gao <xiaosuo@gmail.com>
      Cc: Patrick McHardy <kaber@trash.net>
      Cc: Jarek Poplawski <jarkao2@gmail.com>
      Acked-by: Changli Gao <xiaosuo@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • TCP: increase default initial receive window. · 356f0398
      Nandita Dukkipati authored
      This patch changes the default initial receive window to 10 mss
      (a defined constant). The default window is limited to the maximum
      of 10*1460 and 2*mss (when mss > 1460).
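      A sketch of that computation (constant and helper names here are
      illustrative; the arithmetic follows the description above):

              #define TCP_DEFAULT_INIT_RCVWND 10      /* assumed name */

              static u32 initial_rcvwnd(u32 mss)      /* illustrative */
              {
                      u32 segs = TCP_DEFAULT_INIT_RCVWND;

                      /* 10 segments of 1460 bytes, but never fewer than
                       * two full-sized segments when mss > 1460 */
                      if (mss > 1460)
                              segs = max_t(u32, 2, (segs * 1460) / mss);
                      return segs * mss;              /* window in bytes */
              }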
      
      draft-ietf-tcpm-initcwnd-00 is a proposal to the IETF that recommends
      increasing TCP's initial congestion window to 10 mss or about 15KB.
      Leading up to this proposal were several large-scale live Internet
      experiments with an initial congestion window of 10 mss (IW10), where
      we showed that the average latency of HTTP responses improved by
      approximately 10%. This was accompanied by a slight increase in the
      retransmission rate (0.5%), most of which comes from applications
      opening multiple simultaneous connections. To understand extreme
      worst-case scenarios and fairness issues (IW10 versus IW3), we further
      conducted controlled testbed experiments. We came away finding minimal
      negative impact even under low link bandwidths (dial-up) and small
      buffers.  These results are extremely encouraging for adopting IW10.
      
      However, an initial congestion window of 10 mss is useless unless a TCP
      receiver advertises an initial receive window of at least 10 mss.
      Fortunately, in the large-scale Internet experiments we found that most
      widely used operating systems advertised large initial receive windows
      of 64KB, allowing us to experiment with a wide range of initial
      congestion windows. Linux systems were among the few exceptions that
      advertised a small receive window of 6KB. The purpose of this patch is
      to fix this shortcoming.
      
      References:
      1. A comprehensive list of all IW10 references to date.
      http://code.google.com/speed/protocols/tcpm-IW10.html
      
      2. Paper describing results from large-scale Internet experiments with IW10.
      http://ccr.sigcomm.org/drupal/?q=node/621
      
      3. Controlled testbed experiments under worst case scenarios and a
      fairness study.
      http://www.ietf.org/proceedings/79/slides/tcpm-0.pdf
      
      4. Raw test data from testbed experiments (Linux senders/receivers)
      with initial congestion and receive windows of both 10 mss.
      http://research.csc.ncsu.edu/netsrv/?q=content/iw10
      
      5. Internet-Draft. Increasing TCP's Initial Window.
      https://datatracker.ietf.org/doc/draft-ietf-tcpm-initcwnd/
      Signed-off-by: Nandita Dukkipati <nanditad@google.com>
      Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net_sched: sch_sfq: better struct layouts · eda83e3b
      Eric Dumazet authored
      Here is a respin of the patch.
      
      I'll send a short patch to make SFQ more fair in the presence of
      large packets as well.
      
      Thanks
      
      [PATCH v3 net-next-2.6] net_sched: sch_sfq: better struct layouts
      
      This patch shrinks sizeof(struct sfq_sched_data)
      from 0x14f8 bytes (or more if spinlocks are bigger) to 0x1180 bytes,
      and reduces text size as well.
      
         text    data     bss     dec     hex filename
         4821     152       0    4973    136d old/net/sched/sch_sfq.o
         4627     136       0    4763    129b new/net/sched/sch_sfq.o
      
      All data for a slot/flow is now grouped in a compact, cache-friendly
      structure instead of being spread across many different places.
      
      struct sfq_slot {
              struct sk_buff  *skblist_next;
              struct sk_buff  *skblist_prev;
              sfq_index       qlen; /* number of skbs in skblist */
              sfq_index       next; /* next slot in sfq chain */
              struct sfq_head dep; /* anchor in dep[] chains */
              unsigned short  hash; /* hash value (index in ht[]) */
              short           allot; /* credit for this slot */
      };
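      Because skblist_next and skblist_prev are the first fields, the slot
      itself can be cast to act as the anchor of a circular skb list; a
      sketch of helpers built on that layout (names illustrative):

              static inline void slot_queue_init(struct sfq_slot *slot)
              {
                      slot->skblist_prev = slot->skblist_next =
                              (struct sk_buff *)slot;  /* empty list */
              }

              /* add skb to the tail of the slot's queue */
              static inline void slot_queue_add(struct sfq_slot *slot,
                                                struct sk_buff *skb)
              {
                      skb->prev = slot->skblist_prev;
                      skb->next = (struct sk_buff *)slot;
                      slot->skblist_prev->next = skb;
                      slot->skblist_prev = skb;
              }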
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Cc: Jarek Poplawski <jarkao2@gmail.com>
      Cc: Patrick McHardy <kaber@trash.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  2. 20 Dec, 2010 16 commits
  3. 19 Dec, 2010 2 commits
  4. 17 Dec, 2010 8 commits
  5. 16 Dec, 2010 5 commits