1. 10 Dec, 2010 3 commits
    • filter: use size of fetched data in __load_pointer() · 4bc65dd8
      Eric Dumazet authored
      __load_pointer() checks that the data we fetch from the skb lies
      within the head portion, but it assumes we fetch one byte rather
      than up to four.
      
      This won't crash, because extra bytes (struct skb_shared_info)
      follow the head, but it can read uninitialized bytes.
      
      Fix this by using the size of the fetched data (1, 2, or 4 bytes)
      in the test.
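      
      A minimal sketch of the corrected helper, assuming the mainline
      names in net/core/filter.c (illustrative, not the verbatim patch):
      
      #include <linux/skbuff.h>
      #include <linux/filter.h>
      
      static void *__load_pointer(const struct sk_buff *skb, int k,
                                  unsigned int size)
      {
              u8 *ptr = NULL;
      
              if (k >= SKF_NET_OFF)
                      ptr = skb_network_header(skb) + k - SKF_NET_OFF;
              else if (k >= SKF_LL_OFF)
                      ptr = skb_mac_header(skb) + k - SKF_LL_OFF;
      
              /* The old test only guaranteed the first byte was inside
               * the head; adding "size" covers the whole 1/2/4-byte
               * fetch. */
              if (ptr >= skb->head && ptr + size <= skb_tail_pointer(skb))
                      return ptr;
              return NULL;
      }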
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • The new jhash implementation · 60d509c8
      Jozsef Kadlecsik authored
      The current jhash.h implements the lookup2() hash function by Bob
      Jenkins. However, lookup2() is outdated: Bob has since written a new
      hash function called lookup3(). This patch replaces the lookup2()
      implementation of the 'jhash*' functions with that of lookup3().
      
      You can read a longer comparison of the two and other hash functions at
      http://burtleburtle.net/bob/hash/doobs.html.
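      
      The jhash API itself is unchanged by the patch; only the internal
      mixing moves from lookup2() to lookup3(). A hedged usage sketch
      (conn_hash() is a hypothetical caller):
      
      #include <linux/jhash.h>
      
      /* Hash a (saddr, daddr, ports) triple into 32 bits; initval acts
       * as a seed. */
      static u32 conn_hash(__be32 saddr, __be32 daddr, __be32 ports,
                           u32 initval)
      {
              return jhash_3words((__force u32)saddr, (__force u32)daddr,
                                  (__force u32)ports, initval);
      }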
      Signed-off-by: Jozsef Kadlecsik <kadlec@blackhole.kfki.hu>
      Acked-by: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: optimize INET input path further · 68835aba
      Eric Dumazet authored
      Follow-up to commit b178bb3d ("net: reorder struct sock fields")
      
      Optimize the INET input path a bit further by:
      
      1) moving sk_refcnt close to sk_lock.
      
      This reduces the number of dirtied cache lines by one on 64-bit
      arches (with a 64-byte cache line size).
      
      2) moving inet_daddr & inet_rcv_saddr to the beginning of sk
      
      (same cache line as hash / family / bound_dev_if / nulls_node)
      
      This reduces the number of cache lines accessed in lookups by one,
      and does not increase the size of inet and timewait socks.
      inet and tw sockets now share the same place-holder for these fields.
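      
      A sketch of that sharing, assuming the skc_* place-holder names and
      the accessor #defines used in the mainline headers (a layout
      illustration, not the verbatim patch):
      
      /* include/net/sock.h: inet and timewait sockets both start with
       * struct sock_common, so fields placed here are shared. */
      struct sock_common {
              __be32          skc_daddr;
              __be32          skc_rcv_saddr;
              /* ... hash / family / bound_dev_if / nulls_node ... */
      };
      
      /* include/net/inet_sock.h */
      #define inet_daddr      sk.__sk_common.skc_daddr
      #define inet_rcv_saddr  sk.__sk_common.skc_rcv_saddr
      
      /* include/net/inet_timewait_sock.h */
      #define tw_daddr        __tw_common.skc_daddr
      #define tw_rcv_saddr    __tw_common.skc_rcv_saddr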
      
      Before patch:
      
      offsetof(struct sock, sk_refcnt) = 0x10
      offsetof(struct sock, sk_lock) = 0x40
      offsetof(struct sock, sk_receive_queue) = 0x60
      offsetof(struct inet_sock, inet_daddr) = 0x270
      offsetof(struct inet_sock, inet_rcv_saddr) = 0x274
      
      After patch:
      
      offsetof(struct sock, sk_refcnt) = 0x44
      offsetof(struct sock, sk_lock) = 0x48
      offsetof(struct sock, sk_receive_queue) = 0x68
      offsetof(struct inet_sock, inet_daddr) = 0x0
      offsetof(struct inet_sock, inet_rcv_saddr) = 0x4
      
      compute_score() (UDP or TCP) now uses a single cache line per
      ignored item, instead of two.
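      
      A simplified sketch of why one line suffices (the real
      compute_score() also checks net, hash, ports and family):
      
      #include <net/inet_sock.h>
      
      static int compute_score_sketch(struct sock *sk, __be32 daddr,
                                      int dif)
      {
              const struct inet_sock *inet = inet_sk(sk);
      
              /* inet_rcv_saddr now lives in the socket's first cache
               * line, so a non-matching item is rejected after touching
               * only that line. */
              if (inet->inet_rcv_saddr && inet->inet_rcv_saddr != daddr)
                      return -1;
              if (sk->sk_bound_dev_if && sk->sk_bound_dev_if != dif)
                      return -1;
              return 1;
      }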
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  2. 09 Dec, 2010 2 commits
  3. 08 Dec, 2010 28 commits
  4. 07 Dec, 2010 5 commits
  5. 06 Dec, 2010 2 commits