1. 09 Dec, 2022 3 commits
  2. 08 Dec, 2022 8 commits
      selftests/bpf: Bring test_offload.py back to life · e60db051
      Stanislav Fomichev authored
      bpftool has a new extra libbpf_det_bind probing map that we need to exclude.
      Also skip trying to load the netdevsim module if it's already loaded (builtin).
      
      v2:
      - drop iproute2->bpftool changes (Toke)
      Signed-off-by: Stanislav Fomichev <sdf@google.com>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20221206232739.2504890-1-sdf@google.com
      bpf: Fix comment error in fixup_kfunc_call function · c2cc0ce7
      Yang Jihong authored
      For kfuncs, insn->imm is the address relative to __bpf_call_base,
      not __bpf_base_call. Fix the comment error.
      Signed-off-by: Yang Jihong <yangjihong1@huawei.com>
      Link: https://lore.kernel.org/r/20221208013724.257848-1-yangjihong1@huawei.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      bpf: Do not zero-extend kfunc return values · d35af0a7
      Björn Töpel authored
      In BPF, all global functions and BPF helpers return a 64-bit
      value. For kfunc calls this is not the case; they can return,
      e.g., 32-bit values.
      
      The return register R0 for kfunc calls can therefore be marked as
      subreg_def != DEF_NOT_SUBREG. In general, if a register is marked with
      subreg_def != DEF_NOT_SUBREG, some archs (where bpf_jit_needs_zext()
      returns true) require the verifier to insert explicit zero-extension
      instructions.
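      
      As a host-side C analogue (an illustration only, not the BPF ISA
      itself): a write to a 32-bit subregister zero-extends into the upper
      half of the 64-bit destination register, which is exactly what the
      explicit zero-extension instructions guarantee on archs whose JITs
      need them.

      ```c
      #include <assert.h>
      #include <stdint.h>

      /* Illustration only: BPF 32-bit ALU ops (e.g. w0 = w1) zero-extend
       * the result into the upper 32 bits of the 64-bit destination
       * register. Modeled here with plain C integer conversions. */
      static uint64_t mov32(uint32_t src)
      {
              return (uint64_t)src; /* upper 32 bits become zero */
      }

      int main(void)
      {
              uint64_t dst = 0xdeadbeefcafef00dULL;

              dst = mov32((uint32_t)dst); /* like w0 = w0 */
              assert(dst == 0xcafef00dULL);
              return 0;
      }
      ```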
      
      For kfunc calls, however, the caller should do the sign/zero extension
      of return values. In other words, the compiler is responsible for
      inserting the proper instructions, not the verifier.
      
      An example, provided by Yonghong Song:
      
      $ cat t.c
      extern unsigned foo(void);
      unsigned bar1(void) {
           return foo();
      }
      unsigned bar2(void) {
           if (foo()) return 10; else return 20;
      }
      
      $ clang -target bpf -mcpu=v3 -O2 -c t.c && llvm-objdump -d t.o
      t.o:    file format elf64-bpf
      
      Disassembly of section .text:
      
      0000000000000000 <bar1>:
      	0:       85 10 00 00 ff ff ff ff call -0x1
      	1:       95 00 00 00 00 00 00 00 exit
      
      0000000000000010 <bar2>:
      	2:       85 10 00 00 ff ff ff ff call -0x1
      	3:       bc 01 00 00 00 00 00 00 w1 = w0
      	4:       b4 00 00 00 14 00 00 00 w0 = 0x14
      	5:       16 01 01 00 00 00 00 00 if w1 == 0x0 goto +0x1 <LBB1_2>
      	6:       b4 00 00 00 0a 00 00 00 w0 = 0xa
      
      0000000000000038 <LBB1_2>:
      	7:       95 00 00 00 00 00 00 00 exit
      
      If the return value of 'foo()' is used in the BPF program, the proper
      zero-extension will be done.
      
      Currently, the verifier correctly marks, say, a 32-bit return value as
      subreg_def != DEF_NOT_SUBREG, but fails to perform the actual
      zero-extension, due to a verifier bug in
      opt_subreg_zext_lo32_rnd_hi32(): load_reg is not properly set to R0,
      and the following path is taken:
      
      		if (WARN_ON(load_reg == -1)) {
      			verbose(env, "verifier bug. zext_dst is set, but no reg is defined\n");
      			return -EFAULT;
      		}
      
      A longer discussion from v1 can be found in the link below.
      
      Correct the verifier by not doing explicit zero-extension of R0 for
      kfunc calls. Note that R0 will still be marked as a sub-register for
      return values smaller than 64 bits.
      
      Fixes: 83a28819 ("bpf: Account for BPF_FETCH in insn_has_def32()")
      Link: https://lore.kernel.org/bpf/20221202103620.1915679-1-bjorn@kernel.org/
      Suggested-by: Yonghong Song <yhs@meta.com>
      Signed-off-by: Björn Töpel <bjorn@rivosinc.com>
      Acked-by: Yonghong Song <yhs@fb.com>
      Link: https://lore.kernel.org/r/20221207103540.396496-1-bjorn@kernel.org
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Merge branch 'Document some recent core kfunc additions' · 2d141236
      Alexei Starovoitov authored
      David Vernet says:
      
      ====================
      
      A series of recent patch sets introduced kfuncs that allowed struct
      task_struct and struct cgroup objects to be used as kptrs. These were
      introduced in [0], [1], and [2].
      
      [0]: https://lore.kernel.org/lkml/20221120051004.3605026-1-void@manifault.com/
      [1]: https://lore.kernel.org/lkml/20221122145300.251210-2-void@manifault.com/T/
      [2]: https://lore.kernel.org/lkml/20221122055458.173143-1-void@manifault.com/
      
      These are "core" kfuncs, in that they may be used by a wide variety of
      possible BPF tracepoint or struct_ops programs, and are defined in
      kernel/bpf/helpers.c. Even though as kfuncs they have no ABI stability
      guarantees, they should still be properly documented. This patch set
      adds that documentation.
      
      Some other kfuncs were added recently as well, such as
      bpf_rcu_read_lock() and bpf_rcu_read_unlock(). Those could and should be
      added to this "Core kfuncs" section as well in subsequent patch sets.
      
      Note that this patch set does not contain documentation for
      bpf_task_acquire_not_zero(), or bpf_task_kptr_get(). As discussed in
      [3], those kfuncs currently always return NULL pending resolution on how
      to properly protect their arguments using RCU.
      
      [3]: https://lore.kernel.org/all/20221206210538.597606-1-void@manifault.com/
      ---
      Changelog:
      v2 -> v3:
      - Don't document bpf_task_kptr_get(), and instead provide a more
        substantive example for bpf_cgroup_kptr_get().
      - Further clarify expected behavior of bpf_task_from_pid() in comments
        (Alexei)
      
      v1 -> v2:
      - Expand comment to specify that a map holds a reference to a task kptr
        if we don't end up releasing it (Alexei)
      - Just read task->pid instead of using a probed read (Alexei)
      ====================
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      bpf/docs: Document struct cgroup * kfuncs · 36aa10ff
      David Vernet authored
      bpf_cgroup_acquire(), bpf_cgroup_release(), bpf_cgroup_kptr_get(), and
      bpf_cgroup_ancestor() are kfuncs that were recently added to
      kernel/bpf/helpers.c. These are "core" kfuncs in that they're available
      for use in any tracepoint or struct_ops BPF program. Though they have no
      ABI stability guarantees, we should still document them. This patch adds
      a struct cgroup * subsection to the Core kfuncs section which describes
      each of these kfuncs.
      Signed-off-by: David Vernet <void@manifault.com>
      Link: https://lore.kernel.org/r/20221207204911.873646-3-void@manifault.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      bpf/docs: Document struct task_struct * kfuncs · 25c5e92d
      David Vernet authored
      bpf_task_acquire(), bpf_task_release(), and bpf_task_from_pid() are
      kfuncs that were recently added to kernel/bpf/helpers.c. These are
      "core" kfuncs in that they're available for use in any tracepoint or
      struct_ops BPF program. Though they have no ABI stability guarantees, we
      should still document them. This patch adds a new Core kfuncs section to
      the BPF kfuncs doc, and adds entries for all of these task kfuncs.
      
      Note that bpf_task_kptr_get() is not documented, as it still returns
      NULL while we're working to resolve how it can use RCU to ensure struct
      task_struct * lifetime.
      Signed-off-by: David Vernet <void@manifault.com>
      Link: https://lore.kernel.org/r/20221207204911.873646-2-void@manifault.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      selftests/bpf: convert dynptr_fail and map_kptr_fail subtests to generic tester · 26c386ec
      Andrii Nakryiko authored
      Convert big chunks of the dynptr and map_kptr subtests to use the
      generic verification_tester. They are switched from manually
      maintained tables of test cases, specifying program name and expected
      verifier error message, to btf_decl_tag-based annotations directly on
      the corresponding BPF programs: __failure to specify that a BPF
      program is expected to fail verification, and __msg() to specify the
      expected log message.
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
      Link: https://lore.kernel.org/r/20221207201648.2990661-2-andrii@kernel.org
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      selftests/bpf: add generic BPF program tester-loader · 537c3f66
      Andrii Nakryiko authored
      It's become a common pattern to have a collection of small BPF programs
      in one BPF object file, each representing one test case. On user-space
      side of such tests we maintain a table of program names and expected
      failure or success, along with optional expected verifier log message.
      
      This works, but each set of tests reimplements this mundane code over
      and over again, which is a waste of time for anyone trying to add a new
      set of tests. Furthermore, it's quite error-prone, as it's way too easy
      to miss some entries in these manually maintained test tables (as
      evidenced by the dynptr_fail tests, in which the
      ringbuf_release_uninit_dynptr subtest was accidentally missed; this is
      fixed in the next patch).
      
      So this patch implements a generic test_loader, which accepts a
      skeleton name and handles the rest of the details: it opens and loads
      the BPF object file, making sure each program is tested in isolation.
      Optionally, each test case can specify the expected BPF verifier log
      message. On failure, the tester makes sure to report the verifier log;
      in verbose mode it reports the verifier log unconditionally.
      
      Now, the interesting deviation from existing custom implementations is
      the use of btf_decl_tag attribute to specify expected-to-fail vs
      expected-to-succeed markers and, optionally, expected log message
      directly next to BPF program source code, eliminating the need to
      manually create and update table of tests.
      
      We define a few macros wrapping btf_decl_tag with the convention that
      all btf_decl_tag values start with a "comment:" prefix, and then use a
      very simple "just_some_text_tag" or "some_key_name=<value>" pattern to
      define things like expected success/failure, the expected verifier
      message, and extra verifier log level (if necessary). This approach is
      demonstrated by the next patch, in which two existing sets of failure
      tests are converted.
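      
      A minimal sketch of what such annotation macros could look like. The
      __failure/__msg names come from the commit above; the exact tag
      strings, the release_uninit_example function, and its expected message
      are illustrative assumptions, not the patch's actual contents.

      ```c
      #include <stdio.h>

      /* Sketch of the "comment:"-prefixed btf_decl_tag convention described
       * above. Host compilers don't know btf_decl_tag, so the attribute is
       * only applied when building for the BPF target. */
      #ifdef __bpf__
      #define __failure __attribute__((btf_decl_tag("comment:test_expect_failure")))
      #define __msg(m)  __attribute__((btf_decl_tag("comment:test_expect_msg=" m)))
      #else
      #define __failure /* recorded in BTF only on BPF builds */
      #define __msg(m)
      #endif

      /* An expected-to-fail test program is then annotated in place, next
       * to its source, instead of in a user-space table (names below are
       * hypothetical): */
      __failure __msg("Unreleased reference")
      int release_uninit_example(void)
      {
              return 0;
      }

      int main(void)
      {
              printf("%d\n", release_uninit_example());
              return 0;
      }
      ```

      On a BPF build, the tester would read these decl tags back out of the
      object's BTF to decide how each program is expected to behave.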
      
      The tester supports both expected-to-fail and expected-to-succeed
      programs, though this patch set didn't convert any existing
      expected-to-succeed programs yet, as existing tests couple BPF program
      loading with their further execution through attach or test_prog_run.
      One way to allow testing scenarios like this would be the ability to
      specify a custom callback, executed for each successfully loaded BPF
      program. This is left for follow-up patches, after some more analysis
      of existing test cases.
      
      This test_loader is, hopefully, the start of a test_verifier-like
      runner, but integrated into the test_progs infrastructure. It will
      allow a much better "user experience" of defining low-level
      verification tests that can take advantage of all the libbpf-provided
      niceties on the BPF side: global variables, declarative maps, etc. All
      while having a choice of defining them in C or as BPF assembly
      (through __attribute__((naked)) functions and embedded asm), depending
      on what makes the most sense in each particular case. This will be
      explored in follow-up patches as well.
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/r/20221207201648.2990661-1-andrii@kernel.org
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
  3. 07 Dec, 2022 15 commits
  4. 06 Dec, 2022 8 commits
  5. 05 Dec, 2022 6 commits