1. 13 Aug, 2018 22 commits
  2. 12 Aug, 2018 4 commits
    • Merge branch 'ip-faster-in-order-IP-fragments' · 78cbac64
      David S. Miller authored
      Peter Oskolkov says:
      
      ====================
      ip: faster in-order IP fragments
      
      Added "Signed-off-by" in v2.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
      78cbac64
    • ip: process in-order fragments efficiently · a4fd284a
      Peter Oskolkov authored
      This patch changes the runtime behavior of the IP defrag queue:
      incoming in-order fragments are appended to the current list ("run")
      of in-order fragments at the tail.
      
      On some workloads, UDP stream performance is substantially improved:
      
      RX: ./udp_stream -F 10 -T 2 -l 60
      TX: ./udp_stream -c -H <host> -F 10 -T 5 -l 60
      
      with this patchset applied on a 10Gbps receiver:
      
        throughput=9524.18
        throughput_units=Mbit/s
      
      upstream (net-next):
      
        throughput=4608.93
        throughput_units=Mbit/s
      Reported-by: Willem de Bruijn <willemb@google.com>
      Signed-off-by: Peter Oskolkov <posk@google.com>
      Cc: Eric Dumazet <edumazet@google.com>
      Cc: Florian Westphal <fw@strlen.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a4fd284a
    • ip: add helpers to process in-order fragments faster. · 353c9cb3
      Peter Oskolkov authored
      This patch introduces several helper functions/macros that will be
      used in the follow-up patch. No runtime changes yet.
      
      The new logic (fully implemented in the second patch) is as follows:
      
      * Nodes in the rb-tree will now contain not single fragments, but lists
        of consecutive fragments ("runs").
      
      * At each point in time, the current "active" run at the tail is
        maintained/tracked. Fragments that arrive in-order, adjacent
        to the previous tail fragment, are added to this tail run without
        triggering the re-balancing of the rb-tree.
      
      * If a fragment arrives out of order with the offset _before_ the tail run,
        it is inserted into the rb-tree as a single fragment.
      
      * If a fragment arrives after the current tail fragment (with a gap),
        it starts a new "tail" run and is inserted into the rb-tree
        at the end as the head of the new run.
      
      skb->cb is used to store additional information
      needed here (suggested by Eric Dumazet).
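
      Below is a minimal C sketch of the run bookkeeping described above,
      using hypothetical struct and field names rather than the kernel's
      actual inet_frag/skb structures: a fragment whose offset matches the
      end of the tail run is appended in O(1); anything else falls back to
      a regular rb-tree insert.

        #include <stdbool.h>
        #include <stddef.h>

        /* Hypothetical stand-ins for the real queue/fragment fields. */
        struct frag {
                unsigned int offset;        /* start offset in the datagram */
                unsigned int len;           /* payload length               */
                struct frag *next_in_run;   /* next fragment in this run    */
        };

        struct frag_queue {
                struct frag *tail_run_last; /* last fragment of the tail run */
                unsigned int tail_run_end;  /* offset just past the tail run */
        };

        /* Append an in-order fragment to the tail run without touching the
         * rb-tree.  Returns false when the fragment is out of order or
         * leaves a gap; the caller must then do a normal rb-tree insert
         * (before the tail run, or as the head of a new tail run).
         */
        static bool try_append_to_tail_run(struct frag_queue *q, struct frag *f)
        {
                if (!q->tail_run_last || f->offset != q->tail_run_end)
                        return false;

                q->tail_run_last->next_in_run = f;
                f->next_in_run = NULL;
                q->tail_run_last = f;
                q->tail_run_end += f->len;
                return true;
        }
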
      Reported-by: Willem de Bruijn <willemb@google.com>
      Signed-off-by: Peter Oskolkov <posk@google.com>
      Cc: Eric Dumazet <edumazet@google.com>
      Cc: Florian Westphal <fw@strlen.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      353c9cb3
  3. 11 Aug, 2018 14 commits
    • Merge branch 'Remove-rtnl-lock-dependency-from-all-action-implementations' · 9a95d9c6
      David S. Miller authored
      Vlad Buslov says:
      
      ====================
      Remove rtnl lock dependency from all action implementations
      
      Currently, all netlink protocol handlers for updating rules, actions and
      qdiscs are protected by a single global rtnl lock, which removes any
      possibility of parallelism. This patch set is a second step toward removing
      the rtnl lock dependency from the TC rules update path.
      
      Recently, a new rtnl registration flag, RTNL_FLAG_DOIT_UNLOCKED, was added.
      Handlers registered with this flag are called without the rtnl lock taken.
      The end goal is to have the rule update handlers (RTM_NEWTFILTER,
      RTM_DELTFILTER, etc.) registered with the UNLOCKED flag to allow parallel
      execution. However, there is no intention to completely remove or split the
      rtnl lock itself. This patch set addresses specific problems in the
      implementation of tc actions that prevent their control path from being
      executed concurrently. Additional changes are required to refactor the
      classifiers API and individual classifiers for parallel execution. This
      patch set lays the groundwork to eventually register rule update handlers
      as rtnl-unlocked.
      
      The action API is already prepared for parallel execution by the previous
      patch set, which means that action ops that use the action API for their
      implementation (delete, search, etc.) do not require additional
      modifications. The action API implements concurrency-safe reference
      counting and guarantees that cleanup/delete is called only once, after the
      last reference to the action is released.
      
      The goal of this change is to update the APIs of specific actions that
      access action private state directly, so that they are independent of
      external locking. The general approach is to re-use the existing tcf_lock
      spinlock (already used by some action implementations to synchronize the
      control path with the data path) to protect action private state from
      concurrent modification. If an action has an rcu-protected pointer, the
      tcf spinlock is used to protect its update code, instead of relying on the
      rtnl lock.
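
      As a rough sketch of that pattern (using a made-up "example" action;
      only the per-action spinlock usage mirrors the real code), a dump
      callback copies the private state under the tcf lock instead of
      depending on rtnl:

        #include <net/act_api.h>

        /* Hypothetical action private data; real actions embed struct
         * tc_action as "common" and reach its tcfa_lock via a tcf_lock
         * shorthand.
         */
        struct tcf_example {
                struct tc_action common;
                int              ex_param;      /* private state to protect */
        };
        #define to_example(a) ((struct tcf_example *)a)

        static int tcf_example_dump(struct sk_buff *skb, struct tc_action *a,
                                    int bind, int ref)
        {
                struct tcf_example *e = to_example(a);
                int snapshot;

                spin_lock_bh(&e->common.tcfa_lock);  /* was: implicit rtnl */
                snapshot = e->ex_param;              /* copy private state */
                spin_unlock_bh(&e->common.tcfa_lock);

                /* ...fill netlink attributes from the snapshot... */
                return skb->len;
        }

      The same shape repeats in the per-action patches below (police, gact,
      simple, sample, ipt, skbmod): the dump and/or init paths take the tcf
      spinlock around reads and writes of the action's private fields.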
      
      Some actions need to know whether the rtnl mutex is held in order to
      release it. For example, the ife action can load additional kernel modules
      (meta ops) and must make sure that no locks are held during module load.
      In such cases the 'rtnl_held' argument is used to conditionally release
      the rtnl mutex.
      
      Changes from V1 to V2:
      - Patch 12:
        - new patch
      - Patch 14:
        - refactor gen_new_estimator() to reuse stats_lock when re-assigning
          rate estimator statistics pointer
      - Remove mirred and tunnel_key helper function changes. (to be submitted
        as a standalone patch)
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
      9a95d9c6
    • net: sched: act_police: remove dependency on rtnl lock · e329bc42
      Vlad Buslov authored
      Use tcf spinlock to protect police action private data from concurrent
      modification during dump. (init already uses tcf spinlock when changing
      police action state)
      
      Pass tcf spinlock as estimator lock argument to gen_replace_estimator()
      during action init.
      Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      e329bc42
    • net: core: protect rate estimator statistics pointer with lock · 51a9f5ae
      Vlad Buslov authored
      Extend gen_new_estimator() to also take stats_lock when re-assigning rate
      estimator statistics pointer. (to be used by unlocked actions)
      
      Rename 'stats_lock' to 'lock' and change argument description to explain
      that it is now also used for control path.
      Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      51a9f5ae
    • net: sched: act_mirred: remove dependency on rtnl lock · 4e232818
      Vlad Buslov authored
      Re-introduce the mirred list spinlock, which was removed some time ago, in
      order to protect the list from concurrent modifications instead of relying
      on the rtnl lock.
      
      Use tcf spinlock to protect mirred action private data from concurrent
      modification in init and dump. Rearrange access to mirred data in order to
      be performed only while holding the lock.
      
      Rearrange net dev access to always hold a reference while working with it,
      instead of relying on the rtnl lock.
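
      A sketch of the "hold a reference while working with the device"
      pattern (struct and field names here are placeholders, not mirred's
      real ones; dev_hold()/dev_put() are the actual helpers):

        #include <linux/netdevice.h>
        #include <net/act_api.h>

        struct tcf_example_mirror {
                struct tc_action          common;
                struct net_device __rcu  *ex_dev;
        };

        static void example_use_dev(struct tcf_example_mirror *m)
        {
                struct net_device *dev;

                spin_lock_bh(&m->common.tcfa_lock);
                dev = rcu_dereference_protected(m->ex_dev,
                                lockdep_is_held(&m->common.tcfa_lock));
                if (dev)
                        dev_hold(dev);   /* pin before dropping the lock */
                spin_unlock_bh(&m->common.tcfa_lock);

                if (dev) {
                        /* ... work with dev without holding rtnl ... */
                        dev_put(dev);    /* drop the reference when done */
                }
        }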
      Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      4e232818
    • net: sched: extend action ops with put_dev callback · 84a75b32
      Vlad Buslov authored
      As a preparation for removing dependency on rtnl lock from rules update
      path, all users of shared objects must take reference while working with
      them.
      
      Extend action ops with put_dev() API to be used on net device returned by
      get_dev().
      
      Modify the mirred action (the only action that implements the get_dev
      callback):
      - Take a reference to the net device in get_dev.
      - Implement the put_dev API that releases the reference to the net device.
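
      A sketch of how a caller is expected to use the pair (the get_dev/put_dev
      callbacks are the ones named above; the wrapper function itself is
      illustrative):

        #include <net/act_api.h>

        static void example_caller(struct tc_action *a)
        {
                struct net_device *dev;

                dev = a->ops->get_dev(a);       /* returns with a ref held */
                if (!dev)
                        return;
                /* ... use dev without rtnl protection ... */
                a->ops->put_dev(dev);           /* release the reference */
        }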
      Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      84a75b32
    • net: sched: act_vlan: remove dependency on rtnl lock · 764e9a24
      Vlad Buslov authored
      Use tcf spinlock to protect vlan action private data from concurrent
      modification during dump and init. Use an rcu swap operation to reassign
      the params pointer under protection of the tcf lock. (the old params value
      is not used by init, so there is no need for a standalone rcu dereference
      step)
      
      Remove rtnl assertion that is no longer necessary.
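
      A sketch of the params replacement described here (struct and field
      names are placeholders; the swap is open-coded with
      rcu_dereference_protected()/rcu_assign_pointer()):

        #include <net/act_api.h>

        struct tcf_example_params {
                int             value;          /* stand-in for vlan params */
                struct rcu_head rcu;
        };

        struct tcf_example_vlan {
                struct tc_action                  common;
                struct tcf_example_params __rcu  *ex_params;
        };

        static void example_replace_params(struct tcf_example_vlan *v,
                                           struct tcf_example_params *new_p)
        {
                struct tcf_example_params *old;

                spin_lock_bh(&v->common.tcfa_lock);
                old = rcu_dereference_protected(v->ex_params,
                                lockdep_is_held(&v->common.tcfa_lock));
                rcu_assign_pointer(v->ex_params, new_p);
                spin_unlock_bh(&v->common.tcfa_lock);

                if (old)
                        kfree_rcu(old, rcu);    /* free after readers finish */
        }

      The tunnel_key and ife patches below follow the same params-swap
      pattern.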
      Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      764e9a24
    • net: sched: act_tunnel_key: remove dependency on rtnl lock · 729e0126
      Vlad Buslov authored
      Use tcf lock to protect tunnel key action struct private data from
      concurrent modification in init and dump. Use an rcu swap operation to
      reassign the params pointer under protection of the tcf lock. (the old
      params value is not used by init, so there is no need for a standalone
      rcu dereference step)
      
      Remove rtnl lock assertion that is no longer required.
      Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      729e0126
    • net: sched: act_skbmod: remove dependency on rtnl lock · c8814552
      Vlad Buslov authored
      Move read of skbmod_p rcu pointer to be protected by tcf spinlock. Use tcf
      spinlock to protect private skbmod data from concurrent modification during
      dump.
      Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      c8814552
    • net: sched: act_simple: remove dependency on rtnl lock · 5e48180e
      Vlad Buslov authored
      Use tcf spinlock to protect private simple action data from concurrent
      modification during dump. (simple init already uses tcf spinlock when
      changing action state)
      Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      5e48180e
    • net: sched: act_sample: remove dependency on rtnl lock · d7728495
      Vlad Buslov authored
      Use tcf spinlock to protect private sample action data from concurrent
      modification during dump and init.
      Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      d7728495
    • net: sched: act_pedit: remove dependency on rtnl lock · 67b0c1a3
      Vlad Buslov authored
      Rearrange pedit init code to only access pedit action data while holding
      tcf spinlock. Change keys allocation type to atomic to allow it to execute
      while holding tcf spinlock. Take tcf spinlock in dump function when
      accessing pedit action data.
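
      A sketch of the allocation change (placeholder struct and field names):
      memory that may be allocated while the tcf spinlock is held must not
      sleep, hence GFP_ATOMIC instead of GFP_KERNEL.

        #include <linux/slab.h>
        #include <net/act_api.h>

        struct example_key { u32 mask, val, off; };   /* stand-in key */

        struct tcf_example_pedit {
                struct tc_action    common;
                struct example_key *ex_keys;
                int                 ex_nkeys;
        };

        static int example_replace_keys(struct tcf_example_pedit *p,
                                        const struct example_key *keys,
                                        int nkeys)
        {
                struct example_key *new_keys;

                spin_lock_bh(&p->common.tcfa_lock);
                /* GFP_KERNEL may sleep and is not allowed under a spinlock */
                new_keys = kmemdup(keys, nkeys * sizeof(*keys), GFP_ATOMIC);
                if (!new_keys) {
                        spin_unlock_bh(&p->common.tcfa_lock);
                        return -ENOMEM;
                }
                kfree(p->ex_keys);               /* kfree() does not sleep */
                p->ex_keys  = new_keys;
                p->ex_nkeys = nkeys;
                spin_unlock_bh(&p->common.tcfa_lock);
                return 0;
        }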
      Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      67b0c1a3
    • net: sched: act_ipt: remove dependency on rtnl lock · ff25276d
      Vlad Buslov authored
      Use tcf spinlock to protect ipt action private data from concurrent
      modification during dump. Ipt init already takes tcf spinlock when
      modifying ipt state.
      Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      ff25276d
    • net: sched: act_ife: remove dependency on rtnl lock · 54d0d423
      Vlad Buslov authored
      Use tcf spinlock and rcu to protect the params pointer from concurrent
      modification during dump and init. Use an rcu swap operation to reassign
      the params pointer under protection of the tcf lock. (the old params value
      is not used by init, so there is no need for a standalone rcu dereference
      step)
      
      The ife action has meta-actions that are compiled as standalone modules.
      The rtnl mutex must be released while loading a kernel module. In order to
      support execution without the rtnl mutex, propagate the 'rtnl_held'
      argument to the meta action loading functions. When requesting a meta
      action module, conditionally release the rtnl lock depending on the
      'rtnl_held' argument.
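
      A sketch of that conditional release (the wrapper function and module
      name format are illustrative; request_module() and
      rtnl_lock()/rtnl_unlock() are the real kernel APIs):

        #include <linux/kmod.h>
        #include <linux/rtnetlink.h>

        static int example_load_meta_module(const char *name, bool rtnl_held)
        {
                int err;

                if (rtnl_held)
                        rtnl_unlock();  /* never hold rtnl across module load */

                err = request_module("%s", name);

                if (rtnl_held)
                        rtnl_lock();    /* restore the caller's locking state */

                return err;
        }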
      Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      54d0d423
    • net: sched: act_gact: remove dependency on rtnl lock · e8917f43
      Vlad Buslov authored
      Use tcf spinlock to protect gact action private state from concurrent
      modification during dump and init. Remove rtnl assertion that is no longer
      necessary.
      Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      e8917f43