1. 26 Oct, 2022 1 commit
    • bpf: Remove prog->active check for bpf_lsm and bpf_iter · 271de525
      Martin KaFai Lau authored
      The commit 64696c40 ("bpf: Add __bpf_prog_{enter,exit}_struct_ops for struct_ops trampoline")
      removed the prog->active check for struct_ops progs.  The bpf_lsm
      and bpf_iter progs also use the trampoline.  Like struct_ops, bpf_lsm
      and bpf_iter have fixed hooks for the prog to attach to.  The
      kernel does not call the same hook recursively.
      This patch also removes the prog->active check for
      bpf_lsm and bpf_iter.
      
      A later patch has a test to reproduce the recursion issue
      for a sleepable bpf_lsm program.
      
      This patch appends a '_recur' suffix to the existing
      enter and exit functions that track the prog->active counter.
      New __bpf_prog_{enter,exit}[_sleepable] functions are
      added to skip the prog->active tracking. The '_struct_ops'
      versions are also removed.
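      
      For orientation, here is a minimal sketch of the two enter variants,
      modeled on the shape of the kernel helpers (simplified; the real
      functions also carry sparse annotations and details this sketch omits):
      
          /* '_recur' variant: keeps the per-CPU prog->active counter and
           * returns 0 so the trampoline skips the prog when it re-enters
           * on the same CPU. */
          u64 notrace __bpf_prog_enter_recur(struct bpf_prog *prog,
                                             struct bpf_tramp_run_ctx *run_ctx)
          {
                  rcu_read_lock();
                  migrate_disable();
                  run_ctx->saved_run_ctx = bpf_set_run_ctx(&run_ctx->run_ctx);
                  if (unlikely(this_cpu_inc_return(*(prog->active)) != 1)) {
                          bpf_prog_inc_misses_counter(prog);
                          return 0;
                  }
                  return bpf_prog_start_time();
          }
      
          /* Plain variant: no prog->active tracking, so a sleepable bpf_lsm
           * or bpf_iter prog is never falsely skipped. */
          u64 notrace __bpf_prog_enter(struct bpf_prog *prog,
                                       struct bpf_tramp_run_ctx *run_ctx)
          {
                  rcu_read_lock();
                  migrate_disable();
                  run_ctx->saved_run_ctx = bpf_set_run_ctx(&run_ctx->run_ctx);
                  return bpf_prog_start_time();
          }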
      
      It also moves the decision on picking the enter and exit functions into
      the new bpf_trampoline_{enter,exit}().  These return the '_recur' variants
      for all tracing progs to use.  For bpf_lsm, bpf_iter,
      struct_ops (no prog->active tracking after 64696c40), and
      bpf_lsm_cgroup (no prog->active tracking after 69fd337a),
      they return the functions that don't track prog->active.
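      
      Roughly, the enter-side selection looks like the following sketch
      (simplified; the bpf_lsm_cgroup special case and the exit-side
      selection follow the same pattern and are omitted here):
      
          /* Does this prog type still need prog->active recursion
           * protection?  Tracing progs do, except iterators. */
          static bool bpf_prog_check_recur(const struct bpf_prog *prog)
          {
                  switch (resolve_prog_type(prog)) {
                  case BPF_PROG_TYPE_TRACING:
                          return prog->expected_attach_type != BPF_TRACE_ITER;
                  case BPF_PROG_TYPE_STRUCT_OPS:
                  case BPF_PROG_TYPE_LSM:
                          return false;
                  default:
                          return true;
                  }
          }
      
          bpf_trampoline_enter_t bpf_trampoline_enter(const struct bpf_prog *prog)
          {
                  bool sleepable = prog->aux->sleepable;
      
                  if (bpf_prog_check_recur(prog))
                          return sleepable ? __bpf_prog_enter_sleepable_recur
                                           : __bpf_prog_enter_recur;
                  return sleepable ? __bpf_prog_enter_sleepable
                                   : __bpf_prog_enter;
          }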
      Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
      Link: https://lore.kernel.org/r/20221025184524.3526117-2-martin.lau@linux.dev
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
  2. 25 Oct, 2022 19 commits
  3. 22 Oct, 2022 4 commits
  4. 21 Oct, 2022 15 commits
  5. 19 Oct, 2022 1 commit
    • Merge branch 'bpf,x64: Use BMI2 for shifts' · 04a8f9d7
      Alexei Starovoitov authored
      Jie Meng says:
      
      ====================
      
      With the baseline x64 instruction set, a shift count can only be an
      immediate or live in %cl. The implicit dependency on %cl makes it
      necessary to shuffle registers around and/or add push/pop operations.
      
      BMI2 provides shift instructions that can use any general register as
      the shift count, saving us instructions and a few bytes in most cases.
      
      Suboptimal codegen when %ecx is source and/or destination is also
      addressed and unnecessary instructions are removed.
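      
      To make the %cl constraint concrete, here is a small userspace sketch
      (not the JIT itself) that emits both encodings via inline asm; the
      shlx path assumes a BMI2-capable CPU:
      
          #include <stdint.h>
          #include <stdio.h>
      
          /* Baseline x64: a variable shift count must sit in %cl, so the
           * "c" constraint forces the count through %rcx. */
          static uint64_t shl_baseline(uint64_t val, uint64_t cnt)
          {
                  __asm__ ("shlq %%cl, %0" : "+r" (val) : "c" (cnt));
                  return val;
          }
      
          /* BMI2: shlx reads the count from any general-purpose register
           * and writes a separate destination, so no %rcx shuffling. */
          static uint64_t shl_bmi2(uint64_t val, uint64_t cnt)
          {
                  uint64_t out;
                  __asm__ ("shlxq %2, %1, %0" : "=r" (out) : "r" (val), "r" (cnt));
                  return out;
          }
      
          int main(void)
          {
                  printf("%llu %llu\n",
                         (unsigned long long)shl_baseline(1, 5),
                         (unsigned long long)shl_bmi2(1, 5));
                  return 0;
          }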
      
      test_progs: Summary: 267/1340 PASSED, 25 SKIPPED, 0 FAILED
      test_progs-no_alu32: Summary: 267/1333 PASSED, 26 SKIPPED, 0 FAILED
      test_verifier: Summary: 1367 PASSED, 636 SKIPPED, 0 FAILED
      (same result with or without BMI2)
      test_maps: OK, 0 SKIPPED
      lib/test_bpf:
        test_bpf: Summary: 1026 PASSED, 0 FAILED, [1014/1014 JIT'ed]
        test_bpf: test_tail_calls: Summary: 10 PASSED, 0 FAILED, [10/10 JIT'ed]
        test_bpf: test_skb_segment: Summary: 2 PASSED, 0 FAILED
      ---
      v4 -> v5:
      - More comments regarding instruction encoding
      v3 -> v4:
      - Fixed a regression when BMI2 isn't available
      ====================
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>