- 28 May, 2024 1 commit
Mykyta Yatsenko authored
Configure logging verbosity by setting the LIBBPF_LOG_LEVEL environment variable, which applies only to the default logger. Once the user sets a custom logging callback, it is up to them to handle filtering.
Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20240524131840.114289-1-yatsenko@meta.com
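A minimal sketch of the two paths (the callback below is illustrative, not from the commit): LIBBPF_LOG_LEVEL accepts warn, info, or debug and filters only the built-in logger, e.g. LIBBPF_LOG_LEVEL=debug ./app. Once libbpf_set_print() installs a custom callback, that callback must filter by level itself:

  #include <stdarg.h>
  #include <stdio.h>
  #include <bpf/libbpf.h>

  /* With a custom callback installed, LIBBPF_LOG_LEVEL no longer applies. */
  static int my_print(enum libbpf_print_level level, const char *fmt, va_list args)
  {
          if (level == LIBBPF_DEBUG)
                  return 0; /* drop debug output ourselves */
          return vfprintf(stderr, fmt, args);
  }

  int main(void)
  {
          libbpf_set_print(my_print);
          return 0;
  }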
- 30 Apr, 2024 1 commit
Jiri Olsa authored
Adding support to attach programs in kprobe session mode with the bpf_program__attach_kprobe_multi_opts function. Adding a session bool to struct bpf_kprobe_multi_opts that instructs the attachment to create a kprobe multi session. Also adding a new program loader section, SEC("kprobe.session/bpf_fentry_test*"), that loads/attaches the kprobe program as a kprobe session.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20240430112830.1184228-5-jolsa@kernel.org
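A short sketch of both halves; the skeleton and program names are assumptions:

  /* BPF side: one program runs on both entry and return. */
  SEC("kprobe.session/bpf_fentry_test*")
  int handle(struct pt_regs *ctx)
  {
          return 0;
  }

  /* User-space side: the explicit opts route described above. */
  LIBBPF_OPTS(bpf_kprobe_multi_opts, opts, .session = true);
  struct bpf_link *link;

  link = bpf_program__attach_kprobe_multi_opts(skel->progs.handle,
                                               "bpf_fentry_test*", &opts);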
- 11 Apr, 2024 1 commit
Yonghong Song authored
Introduce a libbpf API function bpf_program__attach_sockmap() which allows users to get a bpf_link for their corresponding programs.
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Reviewed-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20240410043532.3737722-1-yonghong.song@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
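A hedged usage sketch; the map and program names are assumptions, not from the commit:

  int map_fd = bpf_map__fd(skel->maps.sock_map);
  struct bpf_link *link;

  link = bpf_program__attach_sockmap(skel->progs.verdict, map_fd);
  if (!link)
          /* handle error; detach later with bpf_link__destroy(link) */;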
- 06 Apr, 2024 1 commit
Andrea Righi authored
Introduce a new API to consume items from a ring buffer, limited to a specified amount, and return to the caller the actual number of items consumed.
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/lkml/20240310154726.734289-1-andrea.righi@canonical.com/T
Link: https://lore.kernel.org/bpf/20240406092005.92399-4-andrea.righi@canonical.com
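The entry point this series adds is ring_buffer__consume_n() (with a per-ring ring__consume_n() counterpart); a sketch that bounds per-iteration work, assuming rb is an already set-up ring_buffer manager:

  /* Drain at most 64 records this pass; returns the number consumed. */
  int n = ring_buffer__consume_n(rb, 64);

  if (n < 0)
          /* handle error */;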
- 20 Mar, 2024 1 commit
Andrii Nakryiko authored
Wire up BPF cookie passing for raw_tp and tp_btf programs, both in low-level and high-level APIs.
Acked-by: Stanislav Fomichev <sdf@google.com>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Message-ID: <20240319233852.1977493-5-andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
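A sketch of the high-level path, assuming an already-loaded raw_tp program and the opts-based attach variant this series covers:

  /* User space: attach with a cookie. */
  LIBBPF_OPTS(bpf_raw_tracepoint_opts, opts, .cookie = 42);
  struct bpf_link *link;

  link = bpf_program__attach_raw_tracepoint_opts(prog, "sched_switch", &opts);

  /* BPF side: read the cookie back. */
  SEC("raw_tp/sched_switch")
  int handle(void *ctx)
  {
          __u64 cookie = bpf_get_attach_cookie(ctx);
          return 0;
  }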
- 11 Mar, 2024 1 commit
Andrii Nakryiko authored
LLVM automatically places __arena variables into the ".arena.1" ELF section. In order to use such global variables, the bpf program must include a definition of the arena map in the ".maps" section, like:

  struct {
          __uint(type, BPF_MAP_TYPE_ARENA);
          __uint(map_flags, BPF_F_MMAPABLE);
          __uint(max_entries, 1000);      /* number of pages */
          __ulong(map_extra, 2ull << 44); /* start of mmap() region */
  } arena SEC(".maps");

libbpf recognizes both uses of arena and creates a single `struct bpf_map *` instance in libbpf APIs. ".arena.1" ELF section data is used as the initial data image, which is exposed through the skeleton and bpf_map__initial_value() to the user, if they need to tune it before the load phase. During the load phase, this initial image is copied over into the mmap()'ed region corresponding to the arena, and discarded.

A few small checks here and there had to be added to make sure this approach works with bpf_map__initial_value(), mostly due to the hard-coded assumption that map->mmaped is set up with the mmap() syscall and should be munmap()'ed. For the arena, .arena.1 can be (much) smaller than the maximum arena size, so this smaller data size has to be tracked separately. Given it is enforced that there is only one arena for the entire bpf_object instance, we just keep it in a separate field. This can be generalized if necessary later.

All global variables from the ".arena.1" section are accessible from user space via skel->arena->name_of_var.

For bss/data/rodata the skeleton/libbpf perform the following sequence:
  1. addr = mmap(MAP_ANONYMOUS)
  2. user space optionally modifies global vars
  3. map_fd = bpf_create_map()
  4. bpf_update_map_elem(map_fd, addr) // to store values into the kernel
  5. mmap(addr, MAP_FIXED, map_fd)
After step 5 user space sees the values it wrote at step 2 at the same addresses.

The arena doesn't support update_map_elem, hence the skeleton/libbpf do:
  1. addr = malloc(sizeof SEC ".arena.1")
  2. user space optionally modifies global vars
  3. map_fd = bpf_create_map(MAP_TYPE_ARENA)
  4. real_addr = mmap(map->map_extra, MAP_SHARED | MAP_FIXED, map_fd)
  5. memcpy(real_addr, addr) // this will fault-in and allocate pages
At the end, the look and feel of global data vs __arena global data is the same from the bpf program's point of view.

Another complication is:

  struct {
          __uint(type, BPF_MAP_TYPE_ARENA);
  } arena SEC(".maps");

  int __arena foo;
  int bar;

  ptr1 = &foo;   // relocation against ".arena.1" section
  ptr2 = &arena; // relocation against ".maps" section
  ptr3 = &bar;   // relocation against ".bss" section

For the kernel, ptr1 and ptr2 point to the same arena's map_fd, while ptr3 points to a different global array's map_fd. For the verifier:
  ptr1->type == unknown_scalar
  ptr2->type == const_ptr_to_map
  ptr3->type == ptr_to_map_value
After verification, from the JIT's point of view all 3 pointers are normal ld_imm64 insns.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Quentin Monnet <quentin@isovalent.com>
Link: https://lore.kernel.org/bpf/20240308010812.89848-11-alexei.starovoitov@gmail.com
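A user-space sketch of the tuning step described above; the skeleton type and variable name are hypothetical:

  struct prog_bpf *skel = prog_bpf__open();

  skel->arena->foo = 123; /* edit the initial image before load */
  prog_bpf__load(skel);   /* image is copied into the mmap()'ed arena */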
- 25 Jan, 2024 2 commits
Andrii Nakryiko authored
To allow an external admin authority to override the default BPF FS location (/sys/fs/bpf) for implicit BPF token creation, teach libbpf to recognize the LIBBPF_BPF_TOKEN_PATH envvar. If it is specified and the user application didn't explicitly specify the bpf_token_path option, it will be treated exactly like the bpf_token_path option, overriding the default /sys/fs/bpf location and making the BPF token mandatory.
Suggested-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20240124022127.2379740-29-andrii@kernel.org
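A sketch of what the override amounts to; the path is an assumption:

  /* Admin, without touching the app:  LIBBPF_BPF_TOKEN_PATH=/run/bpffs ./app */
  /* Equivalent to the app itself setting: */
  LIBBPF_OPTS(bpf_object_open_opts, opts, .bpf_token_path = "/run/bpffs");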
Andrii Nakryiko authored
Add BPF token support to BPF object-level functionality. BPF token is supported by BPF object logic either as an explicitly provided BPF token from outside (through a BPF FS path), or implicitly (unless prevented through bpf_object_open_opts).

Implicit mode is assumed to be the most common one for user-namespaced unprivileged workloads. The assumption is that a privileged container manager sets up a default BPF FS mount point at /sys/fs/bpf with BPF token delegation options (delegate_{cmds,maps,progs,attachs} mount options). The BPF object, during loading, will attempt to create a BPF token from the /sys/fs/bpf location and pass it for all relevant operations (currently, map creation, BTF load, and program load). In this implicit mode, if BPF token creation fails for whatever reason (BPF FS is not mounted, or the kernel doesn't support BPF token, etc.), this is not considered an error; the BPF object loading sequence will proceed with no BPF token.

In explicit BPF token mode, the user provides a custom BPF FS mount point path. In such a case, the BPF object will attempt to create a BPF token from the provided BPF FS location. If BPF token creation fails, that is considered a critical error and BPF object load fails with an error.

Libbpf provides a way to disable implicit BPF token creation, if it causes any trouble (BPF token is designed to be completely optional and shouldn't cause any problems even if provided, but in the world of BPF LSM, custom security logic can be installed that might change the outcome depending on the presence of a BPF token). To disable libbpf's default BPF token creation behavior, the user should provide either an invalid BPF token FD (negative) or an empty bpf_token_path option.

BPF token presence can influence libbpf's feature probing, so if a BPF object has an associated BPF token, feature probing is instructed to use the BPF object-specific feature detection cache and token FD.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20240124022127.2379740-26-andrii@kernel.org
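A sketch of the explicit and disabled modes; the object file name is an assumption:

  /* Explicit: token creation failure makes load fail. */
  LIBBPF_OPTS(bpf_object_open_opts, opts, .bpf_token_path = "/sys/fs/bpf");
  struct bpf_object *obj = bpf_object__open_file("prog.bpf.o", &opts);

  /* Disabled: an empty path suppresses implicit token creation. */
  LIBBPF_OPTS(bpf_object_open_opts, no_token, .bpf_token_path = "");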
- 19 Dec, 2023 1 commit
Andrii Nakryiko authored
This patch includes the following reverts (one conflicting BPF FS patch and three token patch sets, represented by merge commits):
- revert 0f5d5454 "Merge branch 'bpf-fs-mount-options-parsing-follow-ups'";
- revert 750e7857 "bpf: Support uid and gid when mounting bpffs";
- revert 73376328 "Merge branch 'bpf-token-support-in-libbpf-s-bpf-object'";
- revert c35919dc "Merge branch 'bpf-token-and-bpf-fs-based-delegation'".
Link: https://lore.kernel.org/bpf/CAHk-=wg7JuFYwGy=GOMbRCtOL+jwSQsdUaBsRWkDVYbxipbM5A@mail.gmail.com
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
- 13 Dec, 2023 2 commits
Andrii Nakryiko authored
To allow an external admin authority to override the default BPF FS location (/sys/fs/bpf) for implicit BPF token creation, teach libbpf to recognize the LIBBPF_BPF_TOKEN_PATH envvar. If it is specified and the user application explicitly specified neither the bpf_token_path nor the bpf_token_fd option, it will be treated exactly like the bpf_token_path option, overriding the default /sys/fs/bpf location and making the BPF token mandatory.
Suggested-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20231213190842.3844987-10-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Andrii Nakryiko authored
Add BPF token support to BPF object-level functionality. BPF token is supported by BPF object logic either as an explicitly provided BPF token from outside (through a BPF FS path or an explicit BPF token FD), or implicitly (unless prevented through bpf_object_open_opts).

Implicit mode is assumed to be the most common one for user-namespaced unprivileged workloads. The assumption is that a privileged container manager sets up a default BPF FS mount point at /sys/fs/bpf with BPF token delegation options (delegate_{cmds,maps,progs,attachs} mount options). The BPF object, during loading, will attempt to create a BPF token from the /sys/fs/bpf location and pass it for all relevant operations (currently, map creation, BTF load, and program load). In this implicit mode, if BPF token creation fails for whatever reason (BPF FS is not mounted, or the kernel doesn't support BPF token, etc.), this is not considered an error; the BPF object loading sequence will proceed with no BPF token.

In explicit BPF token mode, the user either provides a custom BPF FS mount point path or creates a BPF token on their own and just passes the token FD directly. In such a case, the BPF object will either dup() the token FD (so the caller need not hold onto it for the entire duration of the BPF object's lifetime) or will attempt to create a BPF token from the provided BPF FS location. If BPF token creation fails, that is considered a critical error and BPF object load fails with an error.

Libbpf provides a way to disable implicit BPF token creation, if it causes any trouble (BPF token is designed to be completely optional and shouldn't cause any problems even if provided, but in the world of BPF LSM, custom security logic can be installed that might change the outcome depending on the presence of a BPF token). To disable libbpf's default BPF token creation behavior, the user should provide either an invalid BPF token FD (negative) or an empty bpf_token_path option.

BPF token presence can influence libbpf's feature probing, so if a BPF object has an associated BPF token, feature probing is instructed to use the BPF object-specific feature detection cache and token FD.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20231213190842.3844987-7-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
- 24 Oct, 2023 1 commit
Daniel Borkmann authored
This adds the bpf_program__attach_netkit() API to libbpf. Overall it is very similar to tcx. The API looks as follows:

  LIBBPF_API struct bpf_link *
  bpf_program__attach_netkit(const struct bpf_program *prog, int ifindex,
                             const struct bpf_netkit_opts *opts);

The struct bpf_netkit_opts is done in a similar way as struct bpf_tcx_opts for supporting bpf_mprog control parameters. The attach location for the primary and peer device is derived from the program section "netkit/primary" and "netkit/peer", respectively.
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Martin KaFai Lau <martin.lau@kernel.org>
Link: https://lore.kernel.org/r/20231024214904.29825-4-daniel@iogearbox.net
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
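A hedged attach sketch; the program name and ifindex are assumptions:

  /* prog carries SEC("netkit/primary"); the peer side uses SEC("netkit/peer") */
  LIBBPF_OPTS(bpf_netkit_opts, opts);
  struct bpf_link *link;

  link = bpf_program__attach_netkit(skel->progs.nk_primary, ifindex, &opts);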
- 25 Sep, 2023 6 commits
Martin Kelly authored
Add ring__consume to consume a single ringbuffer, analogous to ring_buffer__consume.
Signed-off-by: Martin Kelly <martin.kelly@crowdstrike.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20230925215045.2375758-14-martin.kelly@crowdstrike.com
Martin Kelly authored
Add ring__map_fd to get the file descriptor underlying a given ringbuffer.
Signed-off-by: Martin Kelly <martin.kelly@crowdstrike.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20230925215045.2375758-12-martin.kelly@crowdstrike.com
Martin Kelly authored
Add ring__size to get the total size of a given ringbuffer.
Signed-off-by: Martin Kelly <martin.kelly@crowdstrike.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20230925215045.2375758-10-martin.kelly@crowdstrike.com
Martin Kelly authored
Add ring__avail_data_size for querying the currently available data in the ringbuffer, similar to the BPF_RB_AVAIL_DATA flag in bpf_ringbuf_query. This is racy during ongoing operations but is still useful for overall information on how a ringbuffer is behaving.
Signed-off-by: Martin Kelly <martin.kelly@crowdstrike.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20230925215045.2375758-8-martin.kelly@crowdstrike.com
Martin Kelly authored
Add APIs to get the producer and consumer position for a given ringbuffer.
Signed-off-by: Martin Kelly <martin.kelly@crowdstrike.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20230925215045.2375758-6-martin.kelly@crowdstrike.com
Martin Kelly authored
Add a new function ring_buffer__ring, which exposes struct ring * to the user, representing a single ringbuffer.
Signed-off-by: Martin Kelly <martin.kelly@crowdstrike.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20230925215045.2375758-4-martin.kelly@crowdstrike.com
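Taken together, the six APIs in the commits above allow inspecting and draining one ring at a time; a sketch, with the rb setup and ring index assumed:

  struct ring *r = ring_buffer__ring(rb, 0); /* first registered ring */

  if (r) {
          printf("size=%zu avail=%zu prod=%lu cons=%lu map_fd=%d\n",
                 ring__size(r), ring__avail_data_size(r),
                 ring__producer_pos(r), ring__consumer_pos(r),
                 ring__map_fd(r));
          ring__consume(r); /* drain only this ring */
  }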
- 24 Aug, 2023 1 commit
Daniel Xu authored
For bpf_object__pin_programs() there is bpf_object__unpin_programs(). Likewise bpf_object__unpin_maps() for bpf_object__pin_maps(). But there is no bpf_object__unpin() for bpf_object__pin(). Adding the former adds symmetry to the API. It's also convenient for cleanup in application code. It's an API I would've used if it had been available for a repro I was writing earlier.
Signed-off-by: Daniel Xu <dxu@dxuuu.xyz>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Reviewed-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/bpf/b2f9d41da4a350281a0b53a804d11b68327e14e5.1692832478.git.dxu@dxuuu.xyz
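A sketch of the symmetric pair; the pin path is an assumption:

  if (bpf_object__pin(obj, "/sys/fs/bpf/myapp"))
          /* handle error */;
  /* ... */
  bpf_object__unpin(obj, "/sys/fs/bpf/myapp"); /* removes prog and map pins */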
- 21 Aug, 2023 1 commit
Jiri Olsa authored
Adding the bpf_program__attach_uprobe_multi function that allows attaching multiple uprobes with an uprobe_multi link. The user can specify uprobes with direct arguments (binary_path/func_pattern/pid) or with struct bpf_uprobe_multi_opts opts argument fields:

  const char **syms;
  const unsigned long *offsets;
  const unsigned long *ref_ctr_offsets;
  const __u64 *cookies;

The user can specify 2 mutually exclusive sets of inputs:
1) use only path/func_pattern/pid arguments
2) use path/pid with allowed combinations of syms/offsets/ref_ctr_offsets/cookies/cnt:
   - syms and offsets are mutually exclusive
   - ref_ctr_offsets and cookies are optional
Any other usage results in an error.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20230809083440.3209381-15-jolsa@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
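A sketch of the pattern-based form, input set 1) above; the library path and pattern are assumptions:

  struct bpf_link *link;

  link = bpf_program__attach_uprobe_multi(prog, -1 /* all PIDs */,
                                          "/usr/lib/libc.so.6",
                                          "malloc*", NULL);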
- 19 Jul, 2023 2 commits
Daniel Borkmann authored
Implement tcx BPF link support for libbpf. The bpf_program__attach_fd() API has been refactored slightly in order to pass a bpf_link_create_opts pointer as input. A new bpf_program__attach_tcx() has been added on top of this which allows for passing all relevant data via the extensible struct bpf_tcx_opts. The program sections tcx/ingress and tcx/egress correspond to the hook locations for tc ingress and egress, respectively. For concrete usage examples, see the extensive selftests that have been developed as part of this series.
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20230719140858.13224-5-daniel@iogearbox.net
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
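A minimal sketch; the program name and ifindex are assumptions:

  /* the program was loaded from SEC("tcx/ingress") */
  LIBBPF_OPTS(bpf_tcx_opts, opts);
  struct bpf_link *link;

  link = bpf_program__attach_tcx(skel->progs.tc_ingress, ifindex, &opts);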
Maciej Fijalkowski authored
Introduce a new netlink attribute NETDEV_A_DEV_XDP_ZC_MAX_SEGS that will carry the maximum number of fragments that the underlying ZC driver is able to handle on the TX side. It is going to be included in the netlink response only when the driver supports ZC. Any value higher than 1 implies multi-buffer ZC support on the underlying device.
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Link: https://lore.kernel.org/r/20230719132421.584801-11-maciej.fijalkowski@intel.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
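On the libbpf side this surfaces through bpf_xdp_query_opts; a sketch assuming the xdp_zc_max_segs field added alongside this attribute:

  LIBBPF_OPTS(bpf_xdp_query_opts, opts);

  if (!bpf_xdp_query(ifindex, 0, &opts) && opts.xdp_zc_max_segs > 1)
          /* device does multi-buffer XDP in zero-copy mode */;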
- 30 Jun, 2023 1 commit
Florian Westphal authored
Add a new API function: bpf_program__attach_netfilter. It takes a bpf program (netfilter type) and a pointer to an options struct that contains the desired attachment (protocol family, priority, hook location, ...). It returns a pointer to a 'bpf_link' structure or NULL on error. The next patch adds a new netfilter_basic test that uses this function to attach a program to a few pf/hook/priority combinations.
v2: change name and use bpf_link_create.
Suggested-by: Andrii Nakryiko <andrii.nakryiko@gmail.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>
Acked-by: Daniel Xu <dxu@dxuuu.xyz>
Link: https://lore.kernel.org/bpf/CAEf4BzZrmUv27AJp0dDxBDMY_B8e55-wLs8DUKK69vCWsCG_pQ@mail.gmail.com/
Link: https://lore.kernel.org/bpf/CAEf4BzZ69YgrQW7DHCJUT_X+GqMq_ZQQPBwopaJJVGFD5=d5Vg@mail.gmail.com/
Link: https://lore.kernel.org/bpf/20230628152738.22765-2-fw@strlen.de
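A hedged sketch; the pf/hook/priority combination is an arbitrary example:

  LIBBPF_OPTS(bpf_netfilter_opts, opts,
              .pf = NFPROTO_IPV4,
              .hooknum = NF_INET_LOCAL_IN,
              .priority = -128);
  struct bpf_link *link = bpf_program__attach_netfilter(prog, &opts);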
- 24 May, 2023 1 commit
JP Kobryn authored
This patch updates bpf_map__set_value_size() so that if the given map is memory mapped, it will attempt to resize the mapped region. Initial contents of the mapped region are preserved. BTF is not required, but after the mapping is resized an attempt is made to adjust the associated BTF information if the following criteria are met:
- BTF info is present
- the map is a datasec
- the final variable in the datasec is an array
...the resulting BTF info will be updated so that the final array variable is associated with a new BTF array type sized to cover the requested size. Note that the initial resizing of the memory mapped region can succeed while the subsequent BTF adjustment can fail. In this case, BTF info is dropped from the map by clearing the key and value type.
Signed-off-by: JP Kobryn <inwardvessel@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Stanislav Fomichev <sdf@google.com>
Link: https://lore.kernel.org/bpf/20230524004537.18614-2-inwardvessel@gmail.com
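A sketch of resizing a datasec-backed map before load; the skeleton map handle is an assumption:

  struct bpf_map *map = skel->maps.data; /* the .data datasec map */

  /* must be called before the object is loaded */
  if (bpf_map__set_value_size(map, 2 * 4096))
          /* handle error */;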
- 27 Mar, 2023 1 commit
JP Kobryn authored
This patch prevents races on the print function pointer, allowing the libbpf_set_print() function to become thread-safe.
Signed-off-by: JP Kobryn <inwardvessel@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20230325010845.46000-1-inwardvessel@gmail.com
- 23 Mar, 2023 1 commit
Kui-Feng Lee authored
Introduce bpf_link__update_map(), which allows atomically updating the underlying struct_ops implementation for a given struct_ops BPF link. Also add old_map_fd to struct bpf_link_update_opts to handle the BPF_F_REPLACE feature.
Signed-off-by: Kui-Feng Lee <kuifeng@meta.com>
Link: https://lore.kernel.org/r/20230323032405.3735486-7-kuifeng@meta.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
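A sketch of the atomic swap; the map names are hypothetical:

  struct bpf_link *link = bpf_map__attach_struct_ops(skel->maps.ops_v1);

  /* later: replace the implementation without detaching */
  if (bpf_link__update_map(link, skel->maps.ops_v2))
          /* handle error */;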
- 06 Mar, 2023 1 commit
Menglong Dong authored
By default, libbpf will attach the kprobe/uprobe BPF program in the latest mode supported by the kernel. In this patch, we add support to let users manually attach the kprobe/uprobe in legacy or perf mode. There are 3 modes supported by the kernel to attach a kprobe/uprobe:
- LEGACY: create the perf event in the legacy way and don't use bpf_link
- PERF: create the perf event with perf_event_open() and don't use bpf_link
- LINK: create the perf event with perf_event_open() and use bpf_link
Users now can manually choose the mode with bpf_program__attach_uprobe_opts()/bpf_program__attach_kprobe_opts().
Signed-off-by: Menglong Dong <imagedong@tencent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Reviewed-by: Biao Jiang <benbjiang@tencent.com>
Link: https://lore.kernel.org/bpf/20230113093427.1666466-1-imagedong@tencent.com/
Link: https://lore.kernel.org/bpf/20230306064833.7932-2-imagedong@tencent.com
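A sketch forcing perf mode, assuming the attach_mode field and PROBE_ATTACH_MODE_* values this series introduces:

  LIBBPF_OPTS(bpf_kprobe_opts, opts, .attach_mode = PROBE_ATTACH_MODE_PERF);
  struct bpf_link *link;

  link = bpf_program__attach_kprobe_opts(prog, "do_unlinkat", &opts);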
- 08 Feb, 2023 1 commit
Jon Doron authored
Add an option to set when the perf buffer should wake up; by default the perf buffer becomes signaled for every event that is pushed to it. In case of a high throughput of events it will be more efficient to wake up only once there are X events ready to be read, so the application can wake up once and drain the entire perf buffer.
Signed-off-by: Jon Doron <jond@wiz.io>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20230207081916.3398417-1-arilou@gmail.com
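A sketch assuming the sample_period field this commit adds to perf_buffer_opts; the callbacks and map fd are assumptions:

  /* Wake the consumer only once 64 events are queued. */
  LIBBPF_OPTS(perf_buffer_opts, opts, .sample_period = 64);
  struct perf_buffer *pb;

  pb = perf_buffer__new(map_fd, 8 /* pages per CPU */,
                        handle_event, handle_lost, NULL, &opts);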
- 03 Feb, 2023 1 commit
Lorenzo Bianconi authored
Extend the bpf_xdp_query routine in order to get XDP/XSK supported features of a netdev over the route netlink interface. Extend the libbpf netlink implementation in order to support the netlink_generic protocol.
Co-developed-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Co-developed-by: Marek Majtyka <alardam@gmail.com>
Signed-off-by: Marek Majtyka <alardam@gmail.com>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Link: https://lore.kernel.org/r/a72609ef4f0de7fee5376c40dbf54ad7f13bfb8d.1675245258.git.lorenzo@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
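A sketch reading the feature bits through bpf_xdp_query_opts, assuming the feature_flags field and the NETDEV_XDP_ACT_* bits from this series:

  LIBBPF_OPTS(bpf_xdp_query_opts, opts);

  if (!bpf_xdp_query(ifindex, 0, &opts) &&
      (opts.feature_flags & NETDEV_XDP_ACT_REDIRECT))
          /* driver supports XDP_REDIRECT */;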
- 27 Jan, 2023 2 commits
Grant Seltzer authored
This adds documentation for the following API functions:
- bpf_map__set_pin_path()
- bpf_map__pin_path()
- bpf_map__is_pinned()
- bpf_map__pin()
- bpf_map__unpin()
- bpf_object__pin_maps()
- bpf_object__unpin_maps()
Signed-off-by: Grant Seltzer <grantseltzer@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20230126024225.520685-1-grantseltzer@gmail.com
Grant Seltzer authored
This fixes the doxygen format documentation above the user_ring_buffer__* APIs. There has to be a newline before the @brief, otherwise doxygen won't render them for libbpf.readthedocs.org.
Signed-off-by: Grant Seltzer <grantseltzer@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20230126024749.522278-1-grantseltzer@gmail.com
- 29 Dec, 2022 1 commit
Xin Liu authored
Currently, many API functions are not described in the documentation. Add API descriptions for the following four API functions:
- libbpf_set_print;
- bpf_object__open;
- bpf_object__load;
- bpf_object__close.
Signed-off-by: Xin Liu <liuxin350@huawei.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20221224112058.12038-1-liuxin350@huawei.com
- 23 Sep, 2022 1 commit
Andrii Nakryiko authored
When the attach_prog_fd field was removed in libbpf 1.0 and replaced with a `long: 0` placeholder, it actually shifted all the subsequent fields by 8 bytes. This is due to `long: 0` promising to adjust the next field's offset to a long-aligned offset. But in this case we were already long-aligned, as pin_root_path is a pointer, so `long: 0` had no effect and thus didn't fill the gap created by the removed attach_prog_fd. A non-zero bitfield should have been used instead. I validated using pahole: originally the kconfig field was at offset 40; with `long: 0` it's at offset 32, which is wrong; with this change it's back at offset 40. While technically libbpf 1.0 is allowed to break backwards compatibility and applications should have been recompiled against libbpf 1.0 headers, given how trivial it is to preserve the memory layout, let's fix this.
Reported-by: Grant Seltzer Richman <grantseltzer@gmail.com>
Fixes: 146bf811 ("libbpf: remove most other deprecated high-level APIs")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220923230559.666608-1-andrii@kernel.org
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
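A self-contained illustration of the alignment rule described above; the structs and field names are hypothetical, not the real bpf_object_open_opts:

  #include <stdio.h>
  #include <stddef.h>

  struct with_zero  { const char *path; long: 0; void *next; }; /* no gap: next at 8 */
  struct with_sized { const char *path; int: 32; void *next; }; /* gap kept: next at 16 */

  int main(void)
  {
          printf("%zu %zu\n", offsetof(struct with_zero, next),
                 offsetof(struct with_sized, next)); /* prints: 8 16 */
          return 0;
  }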
- 21 Sep, 2022 1 commit
David Vernet authored
Now that all of the logic is in place in the kernel to support user-space produced ring buffers, we can add the user-space logic to libbpf. This patch therefore adds the following public symbols to libbpf:

  struct user_ring_buffer *user_ring_buffer__new(int map_fd, const struct user_ring_buffer_opts *opts);
  void *user_ring_buffer__reserve(struct user_ring_buffer *rb, __u32 size);
  void *user_ring_buffer__reserve_blocking(struct user_ring_buffer *rb, __u32 size, int timeout_ms);
  void user_ring_buffer__submit(struct user_ring_buffer *rb, void *sample);
  void user_ring_buffer__discard(struct user_ring_buffer *rb, void *sample);
  void user_ring_buffer__free(struct user_ring_buffer *rb);

A user-space producer must first create a struct user_ring_buffer * object with user_ring_buffer__new(), and can then reserve samples in the ring buffer using one of the following two symbols:

  void *user_ring_buffer__reserve(struct user_ring_buffer *rb, __u32 size);
  void *user_ring_buffer__reserve_blocking(struct user_ring_buffer *rb, __u32 size, int timeout_ms);

With user_ring_buffer__reserve(), a pointer to a 'size' region of the ring buffer will be returned if sufficient space is available in the buffer. user_ring_buffer__reserve_blocking() provides similar semantics, but will block for up to 'timeout_ms' in epoll_wait if there is insufficient space in the buffer. This function has the guarantee from the kernel that it will receive at least one event-notification per invocation to bpf_ringbuf_drain(), provided that at least one sample is drained, and the BPF program did not pass the BPF_RB_NO_WAKEUP flag to bpf_ringbuf_drain(). Once a sample is reserved, it must either be committed to the ring buffer with user_ring_buffer__submit(), or discarded with user_ring_buffer__discard().
Signed-off-by: David Vernet <void@manifault.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220920000100.477320-4-void@manifault.com
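A producer-side sketch; the map fd and the sample struct are assumptions, and the map is of type BPF_MAP_TYPE_USER_RINGBUF:

  struct my_event { int id; };  /* hypothetical sample layout */
  struct user_ring_buffer *rb = user_ring_buffer__new(map_fd, NULL);
  struct my_event *e = user_ring_buffer__reserve(rb, sizeof(*e));

  if (!e) {
          /* no space (errno == ENOSPC) or other error */
  } else {
          e->id = 1;                       /* fill the sample */
          user_ring_buffer__submit(rb, e); /* or user_ring_buffer__discard() */
  }
  user_ring_buffer__free(rb);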
- 17 Aug, 2022 1 commit
Hao Luo authored
Adds libbpf APIs for disabling auto-attach for individual functions. This is motivated by the use case of cgroup iter [1]. Some iter types require their parameters to be non-zero, therefore applying auto-attach on them will fail. With these two new APIs, users who want to use auto-attach and these types of iters can disable auto-attach on the program and perform manual attach.
[1] https://lore.kernel.org/bpf/CAEf4BzZ+a2uDo_t6kGBziqdz--m2gh2_EUwkGLDtMd65uwxUjA@mail.gmail.com/
Signed-off-by: Hao Luo <haoluo@google.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220816234012.910255-1-haoluo@google.com
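A sketch of the pair; the skeleton and program names are hypothetical:

  /* Skip this program during skeleton auto-attach... */
  bpf_program__set_autoattach(skel->progs.cg_iter, false);

  /* ...confirm, then attach it manually with the required iter parameters. */
  if (!bpf_program__autoattach(skel->progs.cg_iter))
          /* perform manual attach here */;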
- 19 Jul, 2022 1 commit
Andrii Nakryiko authored
Add SEC("ksyscall")/SEC("ksyscall/<syscall_name>") and corresponding kretsyscall variants (for return kprobes) to allow users to kprobe syscall functions in kernel. These special sections allow to ignore complexities and differences between kernel versions and host architectures when it comes to syscall wrapper and corresponding __<arch>_sys_<syscall> vs __se_sys_<syscall> differences, depending on whether host kernel has CONFIG_ARCH_HAS_SYSCALL_WRAPPER (though libbpf itself doesn't rely on /proc/config.gz for detecting this, see BPF_KSYSCALL patch for how it's done internally). Combined with the use of BPF_KSYSCALL() macro, this allows to just specify intended syscall name and expected input arguments and leave dealing with all the variations to libbpf. In addition to SEC("ksyscall+") and SEC("kretsyscall+") add bpf_program__attach_ksyscall() API which allows to specify syscall name at runtime and provide associated BPF cookie value. At the moment SEC("ksyscall") and bpf_program__attach_ksyscall() do not handle all the calling convention quirks for mmap(), clone() and compat syscalls. It also only attaches to "native" syscall interfaces. If host system supports compat syscalls or defines 32-bit syscalls in 64-bit kernel, such syscall interfaces won't be attached to by libbpf. These limitations may or may not change in the future. Therefore it is recommended to use SEC("kprobe") for these syscalls or if working with compat and 32-bit interfaces is required. Tested-by:
Alan Maguire <alan.maguire@oracle.com> Signed-off-by:
Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20220714070755.3235561-5-andrii@kernel.orgSigned-off-by:
Alexei Starovoitov <ast@kernel.org>
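A BPF-side sketch combining the section with the BPF_KSYSCALL() macro; the syscall choice is arbitrary:

  SEC("ksyscall/unlinkat")
  int BPF_KSYSCALL(handle_unlinkat, int dfd, const char *pathname)
  {
          bpf_printk("unlinkat dfd=%d", dfd);
          return 0;
  }

  /* Or pick the syscall at runtime from user space: */
  struct bpf_link *link = bpf_program__attach_ksyscall(prog, "unlinkat", NULL);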
- 15 Jul, 2022 1 commit
Jon Doron authored
Add support for writing a custom event reader by exposing the ring buffer. With the new API perf_buffer__buffer() you get access to the raw mmap()'ed per-cpu underlying memory of the ring buffer. This region contains both the perf buffer data and header (struct perf_event_mmap_page), which manages the ring buffer state (head/tail positions; when accessing the head/tail position it's important to take SMP into consideration). With this type of low-level access one can implement different types of consumers. Here are a few simple examples where this API helps:

1. perf_event_read_simple allocates using malloc; perhaps you want to handle the wrap-around in some other way.

2. Since the perf buffer is per-cpu, the order of the events is not guaranteed. For example, given 3 events where each event has a timestamp t0 < t1 < t2, and the events are spread across more than 1 CPU, we can end up with the following state in the ring buffer:
  CPU[0] => [t0, t2]
  CPU[1] => [t1]
When you consume the events from CPU[0], you know there is a t1 missing (assuming there are no drops, and your event data contains a sequential index). So one can simply do the following: for CPU[0], store the addresses of t0 and t2 in an array (without moving the tail, so the data does not perish), then move on to CPU[1] and set the address of t1 in the same array. You end up with something like void **arr = [&t0, &t1, &t2]; now you can consume it in order and move the tails as you process.

3. Assuming there are multiple CPUs and we want to start draining the messages from them, we can "pick" which one to start with according to the remaining free space in the ring buffer.
Signed-off-by: Jon Doron <jond@wiz.io>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220715181122.149224-1-arilou@gmail.com
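A sketch of peeking at one CPU's raw ring; the pb setup is assumed:

  void *buf;
  size_t buf_size;

  if (!perf_buffer__buffer(pb, 0 /* buffer index */, &buf, &buf_size)) {
          struct perf_event_mmap_page *hdr = buf;
          /* hdr->data_head / hdr->data_tail track the ring; mind SMP barriers */
  }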
- 28 Jun, 2022 3 commits
Andrii Nakryiko authored
Remove support for legacy features and behaviors that previously had to be disabled by calling libbpf_set_strict_mode():
- legacy BPF map definitions are not supported now;
- RLIMIT_MEMLOCK auto-setting, if necessary, is always on (but see libbpf_set_memlock_rlim());
- the program name is used for program pinning (instead of the section name);
- cleaned up error returning logic;
- entry BPF programs should always have SEC().
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220627211527.2245459-15-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Andrii Nakryiko authored
Remove all the public APIs that are related to creating multi-instance bpf_programs through the custom preprocessing callback and generally working with them. Also remove all the bpf_{object,map,program}__[set_]priv() APIs.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220627211527.2245459-10-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Andrii Nakryiko authored
Remove a bunch of high-level bpf_object/bpf_map/bpf_program related APIs. All the APIs related to private per-object/map/prog state, the program preprocessing callback, and generally everything multi-instance related is removed in a separate patch.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220627211527.2245459-9-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>