1. 05 Oct, 2015 1 commit
2. 29 Sep, 2015 17 commits
3. 22 Sep, 2015 18 commits
4. 27 Aug, 2015 2 commits
5. 26 Aug, 2015 2 commits
    • Merge branch 'act_bpf_lockless' · 8c5bbe77
      David S. Miller authored
      Alexei Starovoitov says:
      
      ====================
      act_bpf: remove spinlock in fast path
      
      The v1 version had a race condition in the cleanup path of bpf_prog.
      I tried to fix it by adding a new callback 'cleanup_rcu' to 'struct tcf_common'
      and calling it out of the act_api cleanup path, but Daniel noticed
      (thanks for the idea!) that most of the classifiers already do action cleanup
      from an rcu callback.
      So instead this set of patches converts the tcindex and rsvp classifiers to call
      tcf_exts_destroy() after the rcu grace period, and since the action cleanup logic
      in __tcf_hash_release() is only invoked when bind and refcnt go to zero,
      it's guaranteed that the cleanup() callback is called from an rcu callback.
      More specifically:
      patches 1 and 2 - simple fixes
      patches 3 and 4 - convert tcf_exts_destroy in tcindex and rsvp to call_rcu
      patch 5 - removes spin_lock from act_bpf
      
      The cleanup of actions is now universally done after the rcu grace period,
      and in the future we can drop the (now unnecessary) call_rcu from tcf_hash_destroy().
      Patch 5 uses synchronize_rcu() in the act_bpf replacement path, since replacement is
      very rare and the alternative of dynamically allocating a 'struct tcf_bpf_cfg' just
      to pass it to call_rcu looks even less appealing.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
      8c5bbe77
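      The conversion the cover letter describes (deferring tcf_exts_destroy() until
      after the rcu grace period by embedding an rcu_head in the filter) follows a
      standard kernel pattern. A minimal user-space sketch of that pattern, where
      call_rcu() and container_of() are hypothetical stand-ins for the kernel
      primitives (the real call_rcu() runs the callback only after a grace period;
      here it runs immediately so the sketch is executable):

      ```c
      #include <stddef.h>
      #include <stdlib.h>

      /* Stand-in for the kernel's struct rcu_head; the member exists only so
       * the struct is non-empty. */
      struct rcu_head { void (*func)(struct rcu_head *); };

      /* Stand-in for the kernel's call_rcu(): invokes the callback at once.
       * In the kernel the callback fires only after all pre-existing RCU
       * readers have finished. */
      static void call_rcu(struct rcu_head *head, void (*func)(struct rcu_head *))
      {
              func(head);
      }

      #define container_of(ptr, type, member) \
              ((type *)((char *)(ptr) - offsetof(type, member)))

      /* Simplified filter modeled on tcindex/rsvp: the rcu_head is embedded
       * so the destroy path can run from the rcu callback. */
      struct filter {
              int handle;
              struct rcu_head rcu;
      };

      static int last_destroyed;      /* records which filter was torn down */

      static void filter_destroy_rcu(struct rcu_head *head)
      {
              struct filter *f = container_of(head, struct filter, rcu);

              /* tcf_exts_destroy() would run here, after the grace period,
               * so the action cleanup() callback runs from rcu context. */
              last_destroyed = f->handle;
              free(f);
      }

      static void filter_delete(struct filter *f)
      {
              /* Unlink f from the lookup structures (not shown), then defer
               * destruction instead of freeing immediately. */
              call_rcu(&f->rcu, filter_destroy_rcu);
      }
      ```

      In the kernel, the deferred callback cannot fire while any reader can still
      see the filter, which is what makes dropping the fast-path lock safe.
      
      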
    • net_sched: act_bpf: remove spinlock in fast path · cff82457
      Alexei Starovoitov authored
      Similar to act_gact/act_mirred, act_bpf can be lockless in packet processing,
      with extra care taken to free bpf programs after the rcu grace period.
      Replacement of an existing act_bpf (very rare) is done with synchronize_rcu(),
      and final destruction is done from the tc_action_ops->cleanup() callback that is
      called from tcf_exts_destroy()->tcf_action_destroy()->__tcf_hash_release() when
      bind and refcnt reach zero, which is only possible when the classifier is destroyed.
      The previous two patches fixed the last two classifiers (tcindex and rsvp) to
      call tcf_exts_destroy() from an rcu callback.
      
      Similar to gact/mirred, there is a race between prog->filter and
      prog->tcf_action, meaning that the program being replaced may use the
      previous default action if it happened to return TC_ACT_UNSPEC.
      The act_mirred race between tcf_action and tcfm_dev is similar.
      In all cases the race is harmless.
      Long term we may want to improve the situation by replacing the whole
      tc_action->priv as a single pointer instead of updating inner fields one by one.
      Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
      Acked-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      cff82457
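      The synchronize_rcu()-based replacement path described above can be sketched
      in user space. This is a simplified model, not the kernel code: synchronize_rcu()
      is a no-op stand-in (the real one blocks until a grace period elapses), and
      struct tcf_bpf / tcf_bpf_cfg are stripped down to the one field that matters here:

      ```c
      #include <stdlib.h>

      /* Stand-in: the real synchronize_rcu() waits until every pre-existing
       * RCU reader has finished; here it is a no-op so the sketch runs. */
      static void synchronize_rcu(void) { }

      struct bpf_prog { int id; };

      struct tcf_bpf {
              struct bpf_prog *filter;   /* read locklessly in the fast path */
      };

      /* Mirrors the role of 'struct tcf_bpf_cfg': old state is saved here,
       * on the caller's stack, so no allocation is needed just to defer the
       * free past the grace period. */
      struct tcf_bpf_cfg {
              struct bpf_prog *filter;
      };

      /* Replace the program; returns the old program's id, or -1 if none. */
      static int replace_prog(struct tcf_bpf *act, struct bpf_prog *newp)
      {
              struct tcf_bpf_cfg old;

              old.filter = act->filter;   /* save old state on the stack */
              act->filter = newp;         /* rcu_assign_pointer() in the kernel */

              /* Replacement is rare, so waiting inline for the grace period
               * beats allocating a cfg just to pass it to call_rcu(). */
              synchronize_rcu();

              if (old.filter) {
                      int id = old.filter->id;
                      free(old.filter);   /* bpf_prog_put() in the kernel */
                      return id;
              }
              return -1;
      }
      ```

      Because the old state is captured on the stack before the wait, the old
      program can be freed directly afterwards, which is the trade-off the cover
      letter weighs against call_rcu().
      
      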