1. 11 Feb, 2016 2 commits
    • Merge branch 'net-mitigate-kmem_free-slowpath' · 3134b9f0
      David S. Miller authored
      Jesper Dangaard Brouer says:
      
      ====================
      net: mitigating kmem_cache free slowpath
      
      This patchset is the first real use-case for kmem_cache bulk _free_.
      The use of bulk _alloc_ is NOT included in this patchset.  The full
      use-case has previously been posted here [1].
      
      The bulk free side has the largest benefit for the network stack
      use-case, because the network stack hits the kmem_cache/SLUB
      slowpath when freeing SKBs, due to the number of outstanding SKBs.
      This is solved by using the new API kmem_cache_free_bulk().
      
      Introduce a new API, napi_consume_skb(), which hides/handles bulk
      freeing for the caller.  Drivers simply need to use this call when
      freeing SKBs in NAPI context, e.g. replacing their calls to
      dev_kfree_skb() / dev_consume_skb_any().  A driver-side sketch
      follows this commit entry.
      
      Driver ixgbe is the first user of this new API.
      
      [1] http://thread.gmane.org/gmane.linux.network/384302/focus=397373
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
      3134b9f0
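      A minimal driver-side sketch of the change the cover letter describes.
      The ring and buffer types (my_ring, my_tx_buffer) and the helpers
      (my_desc_done, my_next_tx_buffer) are illustrative assumptions, not
      part of the patchset; napi_consume_skb() and dev_consume_skb_any()
      are the real APIs:

          /* Sketch only: hypothetical TX-completion loop in a NAPI poll
           * path.  Passing the NAPI budget lets the core batch the frees;
           * a zero budget (netpoll) falls back to immediate freeing.
           */
          static void my_clean_tx_irq(struct my_ring *ring, int napi_budget)
          {
                  struct my_tx_buffer *buf = &ring->buf[ring->next_to_clean];

                  while (buf->skb && my_desc_done(buf)) {
                          /* previously: dev_consume_skb_any(buf->skb); */
                          napi_consume_skb(buf->skb, napi_budget);
                          buf->skb = NULL;
                          buf = my_next_tx_buffer(ring, &ring->next_to_clean);
                  }
          }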
    • ixgbe: bulk free SKBs during TX completion cleanup cycle · a3a8749d
      Jesper Dangaard Brouer authored
      There is an opportunity to bulk free SKBs while reclaiming resources
      after DMA transmit completion in ixgbe_clean_tx_irq.  Thus, bulk
      freeing at this point does not introduce any added latency.
      
      Simply use napi_consume_skb(), which was recently introduced.  The
      napi_budget parameter is needed by napi_consume_skb() to detect
      whether it is called from netpoll; see the sketch after this commit
      entry for how the budget gates bulk freeing.
      
      Benchmarking IPv4-forwarding, on CPU i7-4790K @4.2GHz (no turbo boost)
       Single CPU/flow numbers: before: 1982144 pps ->  after : 2064446 pps
       Improvement: +82302 pps, -20 nanosec, +4.1%
       (SLUB and GCC version 5.1.1 20150618 (Red Hat 5.1.1-4))
      
      Joint work with Alexander Duyck.
      Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a3a8749d
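      A conceptual sketch of the budget-gated bulk freeing that
      napi_consume_skb() performs, assuming a hypothetical per-CPU cache
      (struct my_skb_cache, MY_CACHE_SIZE, my_consume_skb are illustrative
      names); this is not the actual net/core/skbuff.c code, but
      kmem_cache_free_bulk() and dev_consume_skb_any() are the real
      kernel APIs:

          #define MY_CACHE_SIZE 64        /* illustrative batch size */

          struct my_skb_cache {
                  void *slots[MY_CACHE_SIZE];
                  unsigned int count;
          };

          static void my_consume_skb(struct sk_buff *skb, int budget,
                                     struct my_skb_cache *cache,
                                     struct kmem_cache *skb_head_cache)
          {
                  if (!budget) {
                          /* Zero budget means netpoll context: free now. */
                          dev_consume_skb_any(skb);
                          return;
                  }

                  /* The real code releases skb data/state first, then
                   * defers only the skb head to a per-CPU cache.
                   */
                  cache->slots[cache->count++] = skb;

                  if (cache->count == MY_CACHE_SIZE) {
                          /* One SLUB slowpath entry frees the whole batch. */
                          kmem_cache_free_bulk(skb_head_cache, MY_CACHE_SIZE,
                                               cache->slots);
                          cache->count = 0;
                  }
          }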