- 10 May, 2022 22 commits
-
-
Jiri Olsa authored
Adding ftrace_lookup_symbols function that resolves an array of symbols with a single pass over kallsyms. The user provides an array of string pointers with a count and a pointer to a preallocated array for the resolved values. int ftrace_lookup_symbols(const char **sorted_syms, size_t cnt, unsigned long *addrs) It iterates all kallsyms symbols and tries to look up each one in the provided symbols array with bsearch. The symbols array needs to be sorted by name for this reason. We also check each symbol to pass ftrace_location, because this API will be used for fprobe symbol resolving. Suggested-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Jiri Olsa <jolsa@kernel.org> Link: https://lore.kernel.org/r/20220510122616.2652285-3-jolsa@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
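A minimal kernel-side sketch of how a caller might use this API, assuming kernel context (linux/ftrace.h) and hypothetical symbol names; the input array must already be sorted by name because the lookup relies on bsearch:

    /* hypothetical symbols, pre-sorted alphabetically */
    static const char *syms[] = {
    	"ksys_read",
    	"ksys_write",
    };
    static unsigned long addrs[ARRAY_SIZE(syms)];

    static int resolve_syms(void)
    {
    	int err;

    	err = ftrace_lookup_symbols(syms, ARRAY_SIZE(syms), addrs);
    	if (err)
    		return err; /* a symbol is missing or fails the ftrace_location() check */

    	/* addrs[i] now holds the resolved address of syms[i] */
    	return 0;
    }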
-
Jiri Olsa authored
Making kallsyms_on_each_symbol generally available, so it can be used outside of the CONFIG_LIVEPATCH option in the following changes. Rather than adding another ifdef option, let's make the function generally available (when the CONFIG_KALLSYMS option is defined). Cc: Christoph Hellwig <hch@lst.de> Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Jiri Olsa <jolsa@kernel.org> Link: https://lore.kernel.org/r/20220510122616.2652285-2-jolsa@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
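A hedged sketch of a caller, assuming the kallsyms_on_each_symbol() signature at the time of this change (a callback receiving opaque data, the symbol name, the owning module, and the address):

    static int count_symbol(void *data, const char *name,
    			struct module *mod, unsigned long addr)
    {
    	unsigned long *count = data;

    	(*count)++;
    	return 0; /* returning non-zero stops the iteration */
    }

    static unsigned long count_all_symbols(void)
    {
    	unsigned long count = 0;

    	kallsyms_on_each_symbol(count_symbol, &count);
    	return count;
    }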
-
Alexei Starovoitov authored
Dmitrii Dolgov says: ==================== BPF links are one of the important structures for which no iterator is provided. Such an iterator could be useful in those cases when the generic 'task/file' iterator is not suitable or better performance is needed. The implementation is mostly copied from the prog iterator. This time tests were executed, although I still had to exclude test_bpf_nf (failed to find BTF info for global/extern symbol 'bpf_skb_ct_lookup') -- since it's unrelated, I hope it's a minor issue. Per a suggestion from the previous discussion, there is a new patch for converting CHECK to the corresponding ASSERT_* macro. Such replacement is done only if the final result would be the same; e.g., CHECKs with important-looking custom format strings are still in place -- from what I understand, ASSERT_* doesn't allow specifying such a format. The third small patch fixes what looks like a copy-paste error in a condition check. ==================== Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Dmitrii Dolgov authored
Add a simple test for the bpf_link iterator. Signed-off-by: Dmitrii Dolgov <9erthalion6@gmail.com> Link: https://lore.kernel.org/r/20220510155233.9815-5-9erthalion6@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Dmitrii Dolgov authored
Replace usage of CHECK with a corresponding ASSERT_* macro for bpf_iter tests. Only done if the final result is equivalent; no changes when the replacement would mean losing some information, e.g. from a formatting string. Signed-off-by: Dmitrii Dolgov <9erthalion6@gmail.com> Link: https://lore.kernel.org/r/20220510155233.9815-4-9erthalion6@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Dmitrii Dolgov authored
The original condition looks like a typo; verify the skeleton loading result instead. Signed-off-by: Dmitrii Dolgov <9erthalion6@gmail.com> Link: https://lore.kernel.org/r/20220510155233.9815-3-9erthalion6@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Dmitrii Dolgov authored
Implement a bpf_link iterator to traverse links via bpf_seq_file operations. The changeset is mostly shamelessly copied from commit a228a64f ("bpf: Add bpf_prog iterator"). Signed-off-by: Dmitrii Dolgov <9erthalion6@gmail.com> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/r/20220510155233.9815-2-9erthalion6@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
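A hedged BPF-side sketch of consuming the new iterator, loosely modeled on the selftest added later in this series (vmlinux.h and bpf_helpers.h assumed; field names are best-effort):

    SEC("iter/bpf_link")
    int dump_bpf_link(struct bpf_iter__bpf_link *ctx)
    {
    	struct seq_file *seq = ctx->meta->seq;
    	struct bpf_link *link = ctx->link;
    	int link_id;

    	/* the last call of the iteration passes a NULL link */
    	if (!link)
    		return 0;

    	link_id = link->id;
    	bpf_seq_write(seq, &link_id, sizeof(link_id));
    	return 0;
    }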
-
Alexei Starovoitov authored
Kaixi Fan says: ==================== From: Kaixi Fan <fankaixi.li@bytedance.com> Currently, BPF code cannot set the tunnel source IP address of an IP tunnel, so it cannot fully support flow-based tunnel mode, which allows setting the tunnel source, destination IP address, and tunnel key simultaneously. Flow-based tunnels are useful for overlay networks, and by configuring the tunnel source IP address, users can make their networks more elastic. For example, the tunnel source IP can be used to select a different egress NIC interface for different flows with the same tunnel destination IP. As another example, the user can choose one of multiple IP addresses of the egress NIC interface as the packet's tunnel source IP. Add tunnel and tunnel source testcases in test_progs. Other types of tunnel testcases will be moved to test_progs step by step in the future. v6: - use libbpf api to attach tc progs and remove some shell commands to reduce test runtime based on Alexei Starovoitov's suggestion v5: - fix some code format errors - use bpf kernel code at namespace at_ns0 to set tunnel metadata v4: - fix subject error of first patch v3: - move vxlan tunnel testcases to test_progs - replace bpf_trace_printk with bpf_printk - rename bpf kernel prog section name to tic v2: - merge vxlan tunnel and tunnel source ip testcases in test_tunnel.sh ==================== Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Kaixi Fan authored
Replace bpf_trace_printk with bpf_printk in test_tunnel_kern.c. The bpf_printk function is easier to use and more useful than bpf_trace_printk. Signed-off-by: Kaixi Fan <fankaixi.li@bytedance.com> Link: https://lore.kernel.org/r/20220430074844.69214-4-fankaixi.li@bytedance.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Kaixi Fan authored
Move the vxlan tunnel testcases from test_tunnel.sh to test_progs, and add vxlan tunnel source testcases as well. Other tunnel testcases will be moved to test_progs step by step in the future. Rename the BPF program section name to SEC("tc") because the test_progs BPF loader cannot load sections with names like SEC("gre_set_tunnel"). Because of this, use bpftool to load the BPF programs in test_tunnel.sh. Signed-off-by: Kaixi Fan <fankaixi.li@bytedance.com> Link: https://lore.kernel.org/r/20220430074844.69214-3-fankaixi.li@bytedance.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Kaixi Fan authored
Add a tunnel source IP field to "struct bpf_tunnel_key" and related code to set and get the tunnel source field. Signed-off-by: Kaixi Fan <fankaixi.li@bytedance.com> Link: https://lore.kernel.org/r/20220430074844.69214-2-fankaixi.li@bytedance.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
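A hedged TC egress sketch showing how a program could pick the tunnel source once this field exists; it assumes the new member is named local_ipv4 and uses example addresses:

    #include <linux/bpf.h>
    #include <linux/pkt_cls.h>
    #include <bpf/bpf_helpers.h>

    SEC("tc")
    int set_tunnel_src(struct __sk_buff *skb)
    {
    	struct bpf_tunnel_key key = {};

    	key.remote_ipv4 = 0xac100164; /* 172.16.1.100, example destination */
    	key.local_ipv4  = 0xac100165; /* 172.16.1.101, example tunnel source */
    	key.tunnel_id   = 2;
    	key.tunnel_ttl  = 64;

    	if (bpf_skb_set_tunnel_key(skb, &key, sizeof(key), BPF_F_ZERO_CSUM_TX) < 0)
    		return TC_ACT_SHOT;
    	return TC_ACT_OK;
    }

    char _license[] SEC("license") = "GPL";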
-
KP Singh authored
bpf_link_get_from_fd currently returns a NULL fd for LSM programs. LSM programs are similar to tracing programs and can also use skel_raw_tracepoint_open. Signed-off-by: KP Singh <kpsingh@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20220509214905.3754984-1-kpsingh@kernel.org
-
Takshak Chahande authored
This patch adds test cases that handle 4 combinations: a) outer map: BPF_MAP_TYPE_ARRAY_OF_MAPS inner maps: BPF_MAP_TYPE_ARRAY and BPF_MAP_TYPE_HASH b) outer map: BPF_MAP_TYPE_HASH_OF_MAPS inner maps: BPF_MAP_TYPE_ARRAY and BPF_MAP_TYPE_HASH Signed-off-by: Takshak Chahande <ctakshak@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20220510082221.2390540-2-ctakshak@fb.com
-
Takshak Chahande authored
This patch extends batch operations support for the map-in-map map types: BPF_MAP_TYPE_HASH_OF_MAPS and BPF_MAP_TYPE_ARRAY_OF_MAPS. A use case where an outer HASH map holds hundreds of VIP entries, with each VIP's associated reuse-ports stored in a REUSEPORT_SOCKARRAY-type inner map, needs batch operations for performance gains. This patch leverages the existing generic functions for most of the batch operations. As a map-in-map's value contains a reference to the actual inner map, the BPF_MAP_TYPE_HASH_OF_MAPS type needs an extra step to fetch the map_id from the reference value. Selftests are added in the next patch (2/2). Signed-off-by: Takshak Chahande <ctakshak@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20220510082221.2390540-1-ctakshak@fb.com
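A hedged user-space sketch of a batch lookup over an outer BPF_MAP_TYPE_HASH_OF_MAPS map using libbpf's bpf_map_lookup_batch(); it assumes __u32 keys and relies on the lookup returning inner map IDs as the outer map's values:

    #include <errno.h>
    #include <stdio.h>
    #include <bpf/bpf.h>

    static int dump_outer_map(int outer_fd)
    {
    	__u32 keys[64], values[64]; /* values come back as inner map IDs */
    	__u32 count = 64, next_batch;
    	int err;

    	err = bpf_map_lookup_batch(outer_fd, NULL, &next_batch,
    				   keys, values, &count, NULL);
    	if (err && errno != ENOENT) /* ENOENT signals end of iteration */
    		return -errno;

    	for (__u32 i = 0; i < count; i++)
    		printf("key %u -> inner map id %u\n", keys[i], values[i]);
    	return 0;
    }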
-
Tiezhu Yang authored
A user told me that bpf_jit_enable could be disabled on one system, but he failed to disable bpf_jit_enable on another system: # echo 0 > /proc/sys/net/core/bpf_jit_enable bash: echo: write error: Invalid argument No useful info is available in the dmesg log; a quick analysis shows that the issue is related to CONFIG_BPF_JIT_ALWAYS_ON. When CONFIG_BPF_JIT_ALWAYS_ON is enabled, bpf_jit_enable is permanently set to 1 and setting any other value will fail. It is better to print some info to tell the user why disabling bpf_jit_enable failed. Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/1652153703-22729-3-git-send-email-yangtiezhu@loongson.cn
-
Tiezhu Yang authored
It is better to use SYSCTL_TWO instead of &two, and then we can remove the variable "two" in net/core/sysctl_net_core.c. Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/1652153703-22729-2-git-send-email-yangtiezhu@loongson.cn
-
Yuntao Wang authored
The func_id parameter in find_kfunc_desc_btf() is not used, get rid of it. Fixes: 2357672c ("bpf: Introduce BPF support for kernel module function calls") Signed-off-by: Yuntao Wang <ytcoode@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Martin KaFai Lau <kafai@fb.com> Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Link: https://lore.kernel.org/bpf/20220505070114.3522522-1-ytcoode@gmail.com
-
Jason Wang authored
Most code generators declare their name, so do the same for bpftool. Signed-off-by: Jason Wang <jasowang@redhat.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220509090247.5457-1-jasowang@redhat.com
-
Jerome Marchand authored
The samples/bpf build currently always fails if it can't generate vmlinux.h from vmlinux, even when vmlinux.h is directly provided by the VMLINUX_H variable, which makes VMLINUX_H pointless. Only fail when neither method works. Fixes: 384b6b3b ("samples: bpf: Add vmlinux.h generation support") Reported-by: CKI Project <cki-project@redhat.com> Reported-by: Veronika Kabatova <vkabatov@redhat.com> Signed-off-by: Jerome Marchand <jmarchan@redhat.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220507161635.2219052-1-jmarchan@redhat.com
-
Andrii Nakryiko authored
Milan Landaverde says: ==================== Currently in bpftool's feature probe, we incorrectly tell the user that all of the helper functions are supported for program types where helper probing fails or is explicitly unsupported[1]: $ bpftool feature probe ... eBPF helpers supported for program type tracing: - bpf_map_lookup_elem - bpf_map_update_elem - bpf_map_delete_elem ... - bpf_redirect_neigh - bpf_check_mtu - bpf_sys_bpf - bpf_sys_close This patch adjusts bpftool to relay to the user when helper support can't be determined: $ bpftool feature probe ... eBPF helpers supported for program type lirc_mode2: Program type not supported eBPF helpers supported for program type tracing: Could not determine which helpers are available eBPF helpers supported for program type struct_ops: Could not determine which helpers are available eBPF helpers supported for program type ext: Could not determine which helpers are available Rather than implying that no helpers are available for the program type, we let the user know that helper function probing failed entirely. [1] https://lore.kernel.org/bpf/20211217171202.3352835-2-andrii@kernel.org/ ==================== Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
-
Milan Landaverde authored
Currently in libbpf, we have hardcoded program types that are not supported for helper function probing (e.g. tracing, ext, lsm). Due to this (and other legitimate failures), bpftool feature probe returns an empty helper list for those program types. Instead of implying to the user that there are no helper functions available for a program type, output a message explaining that helper function probing failed for that program type. Signed-off-by: Milan Landaverde <milan@mdaverde.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220504161356.3497972-3-milan@mdaverde.com
-
Milan Landaverde authored
Originally [1], libbpf's (now deprecated) probe functions returned a bool to acknowledge support, but the new APIs return an int with a possible negative error code to reflect probe failure. With this change, bpftool declares that maps and helpers are not available on probe failures. [1]: https://lore.kernel.org/bpf/20220202225916.3313522-3-andrii@kernel.org/ Signed-off-by: Milan Landaverde <milan@mdaverde.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220504161356.3497972-2-milan@mdaverde.com
-
- 09 May, 2022 9 commits
-
-
Andrii Nakryiko authored
Make sure we always exercise libbpf's ringbuf map size adjustment logic by specifying a non-zero size that's definitely not a multiple of the page size. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20220509004148.1801791-10-andrii@kernel.org
-
Andrii Nakryiko authored
The kernel imposes a rather particular restriction on the ringbuf map size: it has to be a power-of-2 multiple of the page size. While generally this isn't hard for a user to satisfy, sometimes it's impossible to do declaratively in BPF source code or just plain inconvenient to do at runtime. One such example might be BPF libraries that are supposed to work on different architectures, which might not agree on what the common page size is. Let libbpf find the right size for the user instead, if the specified size doesn't satisfy kernel requirements. If the user didn't set a size at all, though, that's most probably a mistake, so don't upsize such a zero size to one full page. We also need to be careful not to overflow __u32 max_entries. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20220509004148.1801791-9-andrii@kernel.org
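As a hedged illustration of the new behavior, a BPF-side ringbuf definition whose size isn't a power-of-2 multiple of the page size now gets rounded up by libbpf rather than rejected by the kernel (map name and sizes are examples):

    struct {
    	__uint(type, BPF_MAP_TYPE_RINGBUF);
    	__uint(max_entries, 3 * 4096); /* libbpf adjusts this to a valid size */
    } events SEC(".maps");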
-
Andrii Nakryiko authored
Add barrier() and barrier_var() macros to bpf_helpers.h to be used by end users. While somewhat advanced and specialized instruments, they are sometimes indispensable. Instead of requiring each user to figure out the exact asm volatile incantations for themselves, provide them from bpf_helpers.h. Also remove conflicting definitions from selftests. Some tests rely on the barrier_var() definition being empty; those will still work, as libbpf does the #ifndef/#endif guarding for barrier() and barrier_var(), allowing users to redefine them if necessary. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20220509004148.1801791-8-andrii@kernel.org
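A hedged sketch of a typical use: barrier_var() keeps the compiler from rewriting a bounds check into a form the verifier can't track (the definitions quoted in the comment approximate what bpf_helpers.h provides):

    /* bpf_helpers.h now provides roughly:
     *   #define barrier()        asm volatile("" ::: "memory")
     *   #define barrier_var(var) asm volatile("" : "+r"(var))
     */
    static __always_inline int read_byte(const char *buf, __u32 len, __u32 off)
    {
    	barrier_var(off); /* prevent the compiler from transforming the bounds check */
    	if (off >= len)
    		return -1;
    	return buf[off];
    }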
-
Andrii Nakryiko authored
Add test cases for bpf_core_field_offset() helper. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20220509004148.1801791-7-andrii@kernel.org
-
Andrii Nakryiko authored
Add bpf_core_field_offset() helper to complete field-based CO-RE helpers. This helper can be useful for feature-detection and for some more advanced cases of field reading (e.g., reading flexible array members). Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20220509004148.1801791-6-andrii@kernel.org
-
Andrii Nakryiko authored
Exercise both supported forms of the bpf_core_field_exists() and bpf_core_field_size() helpers: a variable-based field reference and a type/field name-based one. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20220509004148.1801791-5-andrii@kernel.org
-
Andrii Nakryiko authored
Allow specifying a field reference in two ways: - if the user has a variable of the necessary type, they can use a variable-based reference (my_var.my_field or my_var_ptr->my_field). This was the only supported syntax up till now. - now, bpf_core_field_exists() and bpf_core_field_size() also support specifying the field in a fashion similar to the offsetof() macro, by specifying the type of the containing struct/union separately and the field name separately: bpf_core_field_exists(struct my_type, my_field). This form is quite often more convenient in practice and it matches the type-based CO-RE helpers, which support specifying a type by its name without requiring any variables. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20220509004148.1801791-4-andrii@kernel.org
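A hedged sketch of both forms side by side, using task_struct's comm field as an example target (vmlinux.h, bpf_helpers.h, bpf_tracing.h, and bpf_core_read.h assumed):

    SEC("fentry/do_nanosleep")
    int BPF_PROG(check_comm_field)
    {
    	struct task_struct *task = (void *)bpf_get_current_task();
    	__u64 sz = 0, off = 0;

    	/* variable-based reference: previously the only supported form */
    	if (bpf_core_field_exists(task->comm))
    		sz = bpf_core_field_size(task->comm);

    	/* type/field name-based reference, similar to offsetof() */
    	if (bpf_core_field_exists(struct task_struct, comm)) {
    		sz  = bpf_core_field_size(struct task_struct, comm);
    		off = bpf_core_field_offset(struct task_struct, comm);
    	}
    	return sz + off > 0;
    }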
-
Andrii Nakryiko authored
It will be annoying and surprising for users of __kptr and __kptr_ref if libbpf silently ignores them just because the Clang used for compilation didn't support btf_type_tag(). It's much better to get a clear compiler error than to debug BPF verifier failures later on. Fixes: ef89654f ("libbpf: Add kptr type tag macros to bpf_helpers.h") Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20220509004148.1801791-3-andrii@kernel.org
-
Andrii Nakryiko authored
Prevent "classic" and light skeleton generation rules from stomping on each other's toes due to the use of the same <obj>.linked{1,2,3}.o naming pattern. There is no coordination and synchronization between the .skel.h and .lskel.h rules, so they can easily overwrite each other's intermediate object files, leading to errors like: /bin/sh: line 1: 170928 Bus error (core dumped) /data/users/andriin/linux/tools/testing/selftests/bpf/tools/sbin/bpftool gen skeleton /data/users/andriin/linux/tools/testing/selftests/bpf/test_ksyms_weak.linked3.o name test_ksyms_weak > /data/users/andriin/linux/tools/testing/selftests/bpf/test_ksyms_weak.skel.h make: *** [Makefile:507: /data/users/andriin/linux/tools/testing/selftests/bpf/test_ksyms_weak.skel.h] Error 135 make: *** Deleting file '/data/users/andriin/linux/tools/testing/selftests/bpf/test_ksyms_weak.skel.h' Fix by using a different suffix for the light skeleton rule. Fixes: c48e51c8 ("bpf: selftests: Add selftests for module kfunc support") Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20220509004148.1801791-2-andrii@kernel.org
-
- 29 Apr, 2022 6 commits
-
-
Mykola Lysenko authored
Fix the log_fp memory leak in dispatch_thread_read_log. Remove obsolete log_fp clean-up code in dispatch_thread. Also, release the memory of subtest_selector. The leak can be reproduced with the -n 2/1 parameters. Signed-off-by: Mykola Lysenko <mykolal@fb.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220428225744.1961643-1-mykolal@fb.com
-
Alexei Starovoitov authored
Andrii Nakryiko says: ==================== Add the bpf_map__set_autocreate() API, which is the BPF map counterpart of bpf_program__set_autoload() and serves a similar goal of allowing users to build more flexible CO-RE applications. See patch #3 for example scenarios in which the need for such an API came up previously. Patch #1 is a follow-up to the previous patch set adding verifier log fixup logic, making sure bpf_core_format_spec()'s return result is used for something useful. Patch #2 is a small refactoring to avoid unnecessarily verbose memory management around the obj->maps array. Patch #3 adds the API and the corresponding BPF verifier log fixup logic to provide a human-comprehensible error message with useful details. Patch #4 adds a simple selftest validating both the API itself and libbpf's log fixup logic for it. ==================== Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
Add a subtest that exercises the bpf_map__set_autocreate() API and validates that libbpf properly fixes up the BPF verifier log with correct map information. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20220428041523.4089853-5-andrii@kernel.org
-
Andrii Nakryiko authored
Add a bpf_map__set_autocreate() API that allows the user to opt out of libbpf automatically creating a BPF map during BPF object load. This is a useful feature when building a CO-RE-enabled BPF application that takes advantage of some new-ish BPF map type (e.g., socket-local storage) if the kernel supports it, but otherwise uses some alternative way (e.g., an extra HASH map). In such a case, being able to disable the creation of a map that the kernel doesn't support allows successfully creating and loading the BPF object file with all its other maps and programs. It's still up to the user to make sure that no "live" code in any of their BPF programs references such a map instance, which can be achieved by guarding such code with a CO-RE relocation check or by using .rodata global variables. If the user fails to properly guard such code to turn it into "dead code", libbpf will helpfully post-process the BPF verifier log and provide a more meaningful error and the map name that needs to be guarded properly. As such, instead of: ; value = bpf_map_lookup_elem(&missing_map, &zero); 4: (85) call unknown#2001000000 invalid func unknown#2001000000 ... the user will see: ; value = bpf_map_lookup_elem(&missing_map, &zero); 4: <invalid BPF map reference> BPF map 'missing_map' is referenced but wasn't created Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20220428041523.4089853-4-andrii@kernel.org
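A hedged user-space sketch of the intended flow; the skeleton, map, and feature-probe names are hypothetical:

    struct my_bpf *skel = my_bpf__open();
    if (!skel)
    	return -1;

    /* hypothetical runtime probe for a map type the kernel may lack */
    if (!kernel_supports_task_local_storage())
    	bpf_map__set_autocreate(skel->maps.optional_storage, false);

    /* load succeeds even if the skipped map's type is unsupported,
     * as long as no live BPF code references that map
     */
    if (my_bpf__load(skel))
    	return -1;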
-
Andrii Nakryiko authored
Reuse libbpf_mem_ensure() when adding a new map to the list of maps inside bpf_object. It takes care of proper resizing and reallocating of map array and zeroing out newly allocated memory. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20220428041523.4089853-3-andrii@kernel.org
-
Andrii Nakryiko authored
Detect CO-RE spec truncation and append "..." to make user aware that there was supposed to be more of the spec there. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20220428041523.4089853-2-andrii@kernel.org
-
- 28 Apr, 2022 3 commits
-
-
Andrii Nakryiko authored
Add new or modify existing SEC() definitions to be target-less and validate that libbpf handles such program definitions correctly. For kprobe/kretprobe we also add an explicit test that the generic bpf_program__attach() works in cases when the kprobe definition contains a proper target. It wasn't previously tested, as the selftests code always explicitly specified the target regardless. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20220428185349.3799599-4-andrii@kernel.org
-
Andrii Nakryiko authored
Similar to the previous patch, support target-less definitions like SEC("fentry"), SEC("freplace"), etc. For such BTF-backed program types it is expected that the user will specify the BTF target programmatically at runtime using bpf_program__set_attach_target() *before* the load phase. If not, libbpf will report this as an error. Also use the SEC_ATTACH_BTF flag instead of explicitly listing a set of types that are expected to require attach_btf_id. This was an accidental omission during the custom SEC() support refactoring. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20220428185349.3799599-3-andrii@kernel.org
-
Andrii Nakryiko authored
In a lot of cases the target of a kprobe/kretprobe, tracepoint, raw tracepoint, etc. BPF program might not be known at compilation time and will be discovered at runtime. This has always been supported by libbpf, with APIs like bpf_program__attach_{kprobe,tracepoint,etc}() accepting a full target definition, regardless of what was defined in the SEC() definition in the BPF source code. Unfortunately, up till now libbpf still forced users to specify at least something for the fake target, e.g., SEC("kprobe/whatever"), which is cumbersome and somewhat misleading. This patch allows target-less SEC() definitions for basic tracing BPF program types: - kprobe/kretprobe; - multi-kprobe/multi-kretprobe; - tracepoints; - raw tracepoints. Such target-less SEC() definitions are meant to declaratively specify only the proper BPF program type. Attaching them will have to be handled programmatically using the correct APIs. As such, the skeleton's auto-attachment of such BPF programs is skipped and generic bpf_program__attach() will fail, if attempted, due to the lack of enough target information. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20220428185349.3799599-2-andrii@kernel.org
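A hedged sketch of the new target-less flow: the SEC() carries only the program type, and the attach point (an example kernel function here) is supplied at runtime; skeleton and program names are hypothetical:

    /* BPF side */
    SEC("kprobe")
    int BPF_KPROBE(handle_entry)
    {
    	return 0;
    }

    /* user-space side, after my_bpf__open_and_load() */
    struct bpf_link *link;

    link = bpf_program__attach_kprobe(skel->progs.handle_entry,
    				      false /* not a retprobe */, "tcp_v4_connect");
    if (!link)
    	/* handle the attach error */;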
-