- 03 Feb, 2022 7 commits
-
-
Andrii Nakryiko authored
libbpf 1.0 is not going to support passing ifindex to BPF prog/map/helper feature probing APIs. Remove the support for BPF offload feature probing. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Quentin Monnet <quentin@isovalent.com> Link: https://lore.kernel.org/bpf/20220202225916.3313522-3-andrii@kernel.org
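For reference, the replacement probing APIs take an opts pointer instead of an ifindex. A minimal sketch of how they can be called (the program type and helper chosen here are arbitrary examples, not taken from the patch):

```c
#include <stdio.h>
#include <linux/bpf.h>
#include <bpf/libbpf.h>

int main(void)
{
	/* returns 1 if supported, 0 if not, <0 on error; opts is NULL for now */
	int ret = libbpf_probe_bpf_prog_type(BPF_PROG_TYPE_XDP, NULL);

	if (ret < 0) {
		fprintf(stderr, "probe failed: %d\n", ret);
		return 1;
	}
	printf("XDP prog type is %ssupported\n", ret ? "" : "not ");

	/* same pattern for helper availability within a given program type */
	ret = libbpf_probe_bpf_helper(BPF_PROG_TYPE_XDP, BPF_FUNC_redirect, NULL);
	printf("bpf_redirect() is %susable from XDP\n", ret == 1 ? "" : "not ");
	return 0;
}
```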
-
Andrii Nakryiko authored
Open-code bpf_map__is_offload_neutral() logic in one place in to-be-deprecated bpf_prog_load_xattr2. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Quentin Monnet <quentin@isovalent.com> Link: https://lore.kernel.org/bpf/20220202225916.3313522-2-andrii@kernel.org
-
Andrii Nakryiko authored
Delyan Kratunov says: ==================== Fairly straight-forward mechanical transformation from bpf_prog_test_run and bpf_prog_test_run_xattr to the bpf_prog_test_run_opts goodness. I did a fair amount of drive-by CHECK/CHECK_ATTR cleanups as well, though certainly not everything possible. Primarily, I did not want to just change arguments to CHECK calls, though I had to do a bit more than that in some cases (overall, -119 CHECK calls and all CHECK_ATTR calls). v2 -> v3: Don't introduce CHECK_OPTS, replace CHECK/CHECK_ATTR usages we need to touch with ASSERT_* calls instead. Don't be prescriptive about the opts var name and keep old names where that would minimize unnecessary code churn. Drop _xattr-specific checks in prog_run_xattr and rename accordingly. v1 -> v2: Split selftests/bpf changes into two commits to appease the mailing list. ==================== Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
-
Delyan Kratunov authored
Deprecate non-extendable bpf_prog_test_run{,_xattr} in favor of OPTS-based bpf_prog_test_run_opts ([0]). [0] Closes: https://github.com/libbpf/libbpf/issues/286 Signed-off-by: Delyan Kratunov <delyank@fb.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220202235423.1097270-5-delyank@fb.com
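A minimal sketch of the OPTS-based replacement (field choices are illustrative, not lifted from the series):

```c
#include <bpf/bpf.h>
#include <bpf/libbpf.h>

/* run an already-loaded program once on a packet buffer */
static int run_prog_once(int prog_fd, void *pkt, __u32 pkt_len, __u32 *retval)
{
	LIBBPF_OPTS(bpf_test_run_opts, topts,
		.data_in = pkt,
		.data_size_in = pkt_len,
		.repeat = 1,
	);
	int err = bpf_prog_test_run_opts(prog_fd, &topts);

	if (!err)
		*retval = topts.retval;
	return err;
}
```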
-
Delyan Kratunov authored
bpf_prog_test_run is being deprecated in favor of the OPTS-based bpf_prog_test_run_opts. Signed-off-by: Delyan Kratunov <delyank@fb.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220202235423.1097270-4-delyank@fb.com
-
Delyan Kratunov authored
bpf_prog_test_run_xattr is being deprecated in favor of the OPTS-based bpf_prog_test_run_opts. We end up unable to use CHECK_ATTR so replace usages with ASSERT_* calls. Also, prog_run_xattr is now prog_run_opts. Signed-off-by: Delyan Kratunov <delyank@fb.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220202235423.1097270-3-delyank@fb.com
-
Delyan Kratunov authored
bpf_prog_test_run is being deprecated in favor of the OPTS-based bpf_prog_test_run_opts. We end up unable to use CHECK in most cases, so replace usages with ASSERT_* calls. Signed-off-by: Delyan Kratunov <delyank@fb.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220202235423.1097270-2-delyank@fb.com
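As an illustration of the CHECK-to-ASSERT conversion (a simplified sketch assuming the selftests' test_progs.h harness, not code from the actual patches):

```c
#include "test_progs.h"	/* selftest harness providing CHECK()/ASSERT_*() */

static void check_test_run(int err, __u32 retval)
{
	/* old style: CHECK() needs a local "duration" and a hand-rolled condition:
	 * CHECK(err || retval != 123, "test_run", "err %d retval %d\n", err, retval);
	 */

	/* new style: the condition and failure message are built into the macro */
	ASSERT_OK(err, "test_run");
	ASSERT_EQ(retval, 123, "retval");
}
```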
-
- 02 Feb, 2022 6 commits
-
-
Daniel Borkmann authored
Nathan Chancellor says: ==================== This series allows CONFIG_DEBUG_INFO_DWARF5 to be selected with CONFIG_DEBUG_INFO_BTF=y by checking the pahole version. The first four patches add CONFIG_PAHOLE_VERSION and scripts/pahole-version.sh to clean up all the places that pahole's version is transformed into a 3-digit form. The fifth patch adds a PAHOLE_VERSION dependency to DEBUG_INFO_DWARF5 so that there are no build errors when it is selected with DEBUG_INFO_BTF. I build tested Fedora's aarch64 and x86_64 configs with ToT clang 14.0.0 and GCC 11 with CONFIG_DEBUG_INFO_DWARF5 enabled with both pahole 1.21 and 1.23. ==================== Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Nathan Chancellor authored
Commit 98cd6f52 ("Kconfig: allow explicit opt in to DWARF v5") prevented CONFIG_DEBUG_INFO_DWARF5 from being selected when CONFIG_DEBUG_INFO_BTF is enabled because pahole had issues with clang's DWARF5 info. This was resolved by [1], which is in pahole v1.21. Allow DEBUG_INFO_DWARF5 to be selected with DEBUG_INFO_BTF when using pahole v1.21 or newer. [1]: https://git.kernel.org/pub/scm/devel/pahole/pahole.git/commit/?id=7d8e829f636f47aba2e1b6eda57e74d8e31f733c Signed-off-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220201205624.652313-6-nathan@kernel.org
-
Nathan Chancellor authored
Now that CONFIG_PAHOLE_VERSION exists, use it in the definition of CONFIG_PAHOLE_HAS_SPLIT_BTF and CONFIG_PAHOLE_HAS_BTF_TAG to reduce the amount of duplication across the tree. Signed-off-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220201205624.652313-5-nathan@kernel.org
-
Nathan Chancellor authored
Use pahole-version.sh to get pahole's version code to reduce the amount of duplication across the tree. Signed-off-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220201205624.652313-4-nathan@kernel.org
-
Nathan Chancellor authored
There are a few different places where pahole's version is turned into a three digit form with the exact same command. Move this command into scripts/pahole-version.sh to reduce the amount of duplication across the tree. Create CONFIG_PAHOLE_VERSION so the version code can be used in Kconfig to enable and disable configuration options based on the pahole version, which is already done in a couple of places. Signed-off-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220201205624.652313-3-nathan@kernel.org
-
Nathan Chancellor authored
Currently, scripts/pahole-flags.sh has no formal maintainer. Add it to the BPF section so that patches to it can be properly reviewed and picked up. Signed-off-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220201205624.652313-2-nathan@kernel.org
-
- 01 Feb, 2022 13 commits
-
-
Daniel Borkmann authored
Alexei Starovoitov says: ==================== CO-RE in the kernel support allows bpf preload to switch to light skeleton and remove libbpf dependency. This reduces the size of bpf_preload_umd from 300kbyte to 19kbyte and eventually will make "kernel skeleton" possible. ==================== Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Alexei Starovoitov authored
Drop libbpf, libelf, libz dependency from bpf preload. This reduces bpf_preload_umd binary size from 1.7M to 30k unstripped with debug info and from 300k to 19k stripped. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20220131220528.98088-8-alexei.starovoitov@gmail.com
-
Alexei Starovoitov authored
Open code obj_get_info_by_fd in bpf preload. It's the last part of libbpf that preload/iterators were using. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20220131220528.98088-7-alexei.starovoitov@gmail.com
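Open-coding this boils down to filling union bpf_attr and invoking the bpf(2) syscall directly. A hedged sketch of the pattern (not the preload code itself):

```c
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/bpf.h>

/* minimal wrapper around the bpf(2) syscall, no libbpf involved */
static long sys_bpf(enum bpf_cmd cmd, union bpf_attr *attr, unsigned int size)
{
	return syscall(__NR_bpf, cmd, attr, size);
}

/* fetch bpf_prog_info/bpf_map_info for an already-open fd */
static int obj_get_info_by_fd(int fd, void *info, __u32 info_len)
{
	union bpf_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.info.bpf_fd = fd;
	attr.info.info_len = info_len;
	attr.info.info = (__u64)(unsigned long)info;

	return sys_bpf(BPF_OBJ_GET_INFO_BY_FD, &attr, sizeof(attr));
}
```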
-
Alexei Starovoitov authored
Convert bpffs preload iterators to light skeleton. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20220131220528.98088-6-alexei.starovoitov@gmail.com
-
Alexei Starovoitov authored
BPF programs and maps are memcg accounted. setrlimit is obsolete. Remove its use from bpf preload. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20220131220528.98088-5-alexei.starovoitov@gmail.com
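The pattern being dropped is the classic RLIMIT_MEMLOCK bump, roughly as below (illustrative, not the exact removed code):

```c
#include <sys/resource.h>

/* needed on pre-5.11 kernels; with memcg-based accounting it is a legacy no-op */
static int bump_memlock_rlimit(void)
{
	struct rlimit rlim = {
		.rlim_cur = RLIM_INFINITY,
		.rlim_max = RLIM_INFINITY,
	};

	return setrlimit(RLIMIT_MEMLOCK, &rlim);
}
```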
-
Alexei Starovoitov authored
Open code raw_tracepoint_open and link_create used by light skeleton to be able to avoid full libbpf eventually. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20220131220528.98088-4-alexei.starovoitov@gmail.com
-
Alexei Starovoitov authored
Open code low level bpf commands used by light skeleton to be able to avoid full libbpf eventually. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20220131220528.98088-3-alexei.starovoitov@gmail.com
-
Alexei Starovoitov authored
bpf iterator programs should use bpf_link_create to attach instead of bpf_raw_tracepoint_open like other tracing programs. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20220131220528.98088-2-alexei.starovoitov@gmail.com
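In libbpf API terms the switch looks roughly like this (a sketch; the real change lives in the preload/light-skeleton attach path):

```c
#include <linux/bpf.h>
#include <bpf/bpf.h>

/* attach an already-loaded iterator program, returning a link fd */
static int attach_iter_prog(int prog_fd)
{
	/* previously: bpf_raw_tracepoint_open(NULL, prog_fd); */
	return bpf_link_create(prog_fd, 0, BPF_TRACE_ITER, NULL);
}
```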
-
Andrii Nakryiko authored
Lorenzo Bianconi says: ==================== Deprecate xdp_cpumap, xdp_devmap and classifier sec definitions. Update cpumap/devmap samples and kselftests. Changes since v2: - update warning log - split libbpf and samples/kselftests changes - deprecate classifier sec definition Changes since v1: - refer to Libbpf-1.0-migration-guide in the warning raised by libbpf ==================== Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
-
Lorenzo Bianconi authored
Substitute deprecated xdp_cpumap and xdp_devmap sec_name with xdp/cpumap and xdp/devmap respectively. Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/509201497c6c4926bc941f1cba24173cf500e760.1643727185.git.lorenzo@kernel.org
-
Lorenzo Bianconi authored
Substitute deprecated xdp_cpumap and xdp_devmap sec_name with xdp/cpumap and xdp/devmap respectively in bpf kselftests. Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/9a4286cd36781e2c31ba3773bfdcf45cf1bbaa9e.1643727185.git.lorenzo@kernel.org
-
Lorenzo Bianconi authored
Deprecate xdp_cpumap, xdp_devmap and classifier sec definitions. Introduce xdp/devmap and xdp/cpumap definitions according to the standard for SEC("") in libbpf: prog_type.prog_flags/attach_place. Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/5c7bd9426b3ce6a31d9a4b1f97eb299e1467fc52.1643727185.git.lorenzo@kernel.org
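On the BPF program side the new section names look roughly like this (a minimal sketch; program names and bodies are placeholders):

```c
// SPDX-License-Identifier: GPL-2.0
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp/devmap")	/* replaces the deprecated xdp_devmap section prefix */
int xdp_devmap_prog(struct xdp_md *ctx)
{
	return XDP_PASS;
}

SEC("xdp/cpumap")	/* replaces the deprecated xdp_cpumap section prefix */
int xdp_cpumap_prog(struct xdp_md *ctx)
{
	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";
```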
-
Dave Marchevsky authored
btf_ext__{func,line}_info_rec_size functions are used in conjunction with already-deprecated btf_ext__reloc_{func,line}_info functions. Since struct btf_ext is opaque to the user it was necessary to expose rec_size getters in the past. btf_ext__reloc_{func,line}_info were deprecated in commit 8505e870 ("libbpf: Implement generalized .BTF.ext func/line info adjustment") as they're not compatible with support for multiple programs per section. It was decided[0] that users of these APIs should implement their own .btf.ext parsing to access this data, in which case the rec_size getters are unnecessary. So deprecate them from libbpf 0.7.0 onwards. [0] Closes: https://github.com/libbpf/libbpf/issues/277 Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20220201014610.3522985-1-davemarchevsky@fb.com
-
- 31 Jan, 2022 4 commits
-
-
Kenta Tada authored
access_process_vm() is exported by EXPORT_SYMBOL_GPL(). Signed-off-by: Kenta Tada <Kenta.Tada@sony.com> Link: https://lore.kernel.org/r/20220128170906.21154-1-Kenta.Tada@sony.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Alexei Starovoitov authored
Jakub Sitnicki says: ==================== This is a follow-up to the discussion around the idea of making dst_port in struct bpf_sock a 16-bit field that happened in [1]. v2: - use an anonymous field for zero padding (Alexei) v1: - keep the dst_port field offset unchanged to prevent existing BPF program breakage (Martin) - allow 8-bit loads from dst_port[0] and [1] - add test coverage for the verifier and the context access converter [1] https://lore.kernel.org/bpf/87sftbobys.fsf@cloudflare.com/ ==================== Acked-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jakub Sitnicki authored
Add coverage to the verifier tests and tests for reading bpf_sock fields to ensure that 32-bit, 16-bit, and 8-bit loads from the dst_port field are allowed only at intended offsets and produce expected values. While 16-bit and 8-bit access to the dst_port field is straightforward, 32-bit wide loads need to be allowed and produce a zero-padded 16-bit value for backward compatibility. Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com> Link: https://lore.kernel.org/r/20220130115518.213259-3-jakub@cloudflare.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jakub Sitnicki authored
Menglong Dong reports that the documentation for the dst_port field in struct bpf_sock is inaccurate and confusing. From the BPF program PoV, the field is a zero-padded 16-bit integer in network byte order. The value appears to the BPF user as if laid out in memory as: offsetof(struct bpf_sock, dst_port) +0 <port MSB>, +8 <port LSB>, +16 0x00, +24 0x00. 32-, 16-, and 8-bit wide loads from the field are all allowed, but only if the offset into the field is 0. 32-bit wide loads from dst_port are especially confusing. The loaded value, after converting to host byte order with bpf_ntohl(dst_port), contains the port number in the upper 16 bits. Remove the confusion by splitting the field into two 16-bit fields. For backward compatibility, allow 32-bit wide loads from offsetof(struct bpf_sock, dst_port). While at it, allow 8-bit loads at offsets [0] and [1] from dst_port. Reported-by: Menglong Dong <imagedong@tencent.com> Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com> Link: https://lore.kernel.org/r/20220130115518.213259-2-jakub@cloudflare.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
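A hedged sketch of the two access patterns from a BPF program's point of view (the helper function and port check are illustrative, not from the series):

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

static __always_inline int dst_port_is_http(struct bpf_sock *sk)
{
	/* natural 16-bit load; dst_port stays in network byte order */
	__u16 dport = bpf_ntohs(sk->dst_port);

	/* legacy 32-bit load keeps verifying for old programs; after
	 * byte-swapping, the port number sits in the upper 16 bits
	 */
	__u32 legacy = bpf_ntohl(*(__u32 *)&sk->dst_port) >> 16;

	return dport == 80 && legacy == 80;
}
```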
-
- 28 Jan, 2022 8 commits
-
-
Alexei Starovoitov authored
Hangbin Liu says: ==================== There are some bpf tests using hard code netns name like ns0, ns1, etc. This kind of ns name is easily used by other tests or system. If there is already a such netns, all the related tests will failed. So let's use temp netns name for testing. The first patch not only change to temp netns. But also fixed an interface index issue. So I add fixes tag. For the later patches, I think that should be an update instead of fixes, so the fixes tag is not added. ==================== Acked-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Hangbin Liu authored
Use a temp netns instead of a hard-coded name for testing, in case the netns already exists. Signed-off-by: Hangbin Liu <liuhangbin@gmail.com> Acked-by: William Tu <u9012063@gmail.com> Link: https://lore.kernel.org/r/20220125081717.1260849-8-liuhangbin@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Hangbin Liu authored
Use a temp netns instead of a hard-coded name for testing, in case the netns already exists. Signed-off-by: Hangbin Liu <liuhangbin@gmail.com> Link: https://lore.kernel.org/r/20220125081717.1260849-7-liuhangbin@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Hangbin Liu authored
Use a temp netns instead of a hard-coded name for testing, in case the netns already exists. Signed-off-by: Hangbin Liu <liuhangbin@gmail.com> Acked-by: Lorenz Bauer <lmb@cloudflare.com> Link: https://lore.kernel.org/r/20220125081717.1260849-6-liuhangbin@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Hangbin Liu authored
Use a temp netns instead of a hard-coded name for testing, in case the netns already exists. Signed-off-by: Hangbin Liu <liuhangbin@gmail.com> Link: https://lore.kernel.org/r/20220125081717.1260849-5-liuhangbin@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Hangbin Liu authored
Use a temp netns instead of a hard-coded name for testing, in case the netns already exists. Signed-off-by: Hangbin Liu <liuhangbin@gmail.com> Link: https://lore.kernel.org/r/20220125081717.1260849-4-liuhangbin@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Hangbin Liu authored
Use a temp netns instead of a hard-coded name for testing, in case the netns already exists. Signed-off-by: Hangbin Liu <liuhangbin@gmail.com> Link: https://lore.kernel.org/r/20220125081717.1260849-3-liuhangbin@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Hangbin Liu authored
Use a temp netns instead of a hard-coded name for testing, in case the netns already exists. Remove the hard-coded interface index when creating the veth interfaces, because when the system loads some virtual interface modules (e.g. tunnels), ifindex 2 may already be taken and the command will fail. As the netns is not created if the environment check fails, trap the clean-up function only after checking the environment. Fixes: 8955c1a3 ("selftests/bpf/xdp_redirect_multi: Limit the tests in netns") Signed-off-by: Hangbin Liu <liuhangbin@gmail.com> Acked-by: William Tu <u9012063@gmail.com> Link: https://lore.kernel.org/r/20220125081717.1260849-2-liuhangbin@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
- 27 Jan, 2022 2 commits
-
-
Hou Tao authored
According to the LLVM commit (https://reviews.llvm.org/D72184), __sync_fetch_and_sub() is implemented as a negation followed by __sync_fetch_and_add(), so there will be no BPF_SUB op; thus just remove it. BPF_SUB is also rejected by the verifier anyway. Signed-off-by: Hou Tao <houtao1@huawei.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Brendan Jackman <jackmanb@google.com> Link: https://lore.kernel.org/bpf/20220127083240.1425481-1-houtao1@huawei.com
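In other words, at the source level the compiler rewrites the atomic subtract as an add of the negated operand, roughly:

```c
static void atomic_sub(long *counter, long delta)
{
	/* what clang emits for __sync_fetch_and_sub(counter, delta): */
	__sync_fetch_and_add(counter, -delta);
}
```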
-
Alexei Starovoitov authored
Yonghong Song says: ==================== The __user attribute is currently mainly used by sparse for type checking. The attribute indicates whether a memory access is in user memory address space or not. Such information is important when tracing kernel internal functions or data structures, as accessing user memory often involves different mechanisms than accessing kernel memory. For example, perf-probe needs an explicit command-line specification to indicate that a particular argument or string is in user-space memory ([1], [2], [3]). Currently, vmlinux BTF is available in the kernel in many distributions. If __user attribute information is available in vmlinux BTF, the explicit user memory access information from users will not be necessary, as the kernel can figure it out by itself with vmlinux BTF. Besides the above possible use for perf/probe, another use case is the bpf verifier. Currently, for BPF_PROG_TYPE_TRACING bpf programs, users can write direct code like p->m1->m2 where "p" could be a function parameter. Without __user information in BTF, the verifier will assume p->m1 is accessing kernel memory and will generate normal loads. Let us say "p" is actually tagged with __user in the source code. In such cases, p->m1 is actually accessing user memory, so a direct load is not right and may produce an incorrect result. For such cases, bpf_probe_read_user() is the correct way to read p->m1. To support encoding __user information in BTF, a new attribute __attribute__((btf_type_tag("<arbitrary_string>"))) is implemented in clang ([4]). For example, if we have #define __user __attribute__((btf_type_tag("user"))) during kernel compilation, the "user" attribute information will be preserved in dwarf. After pahole converts dwarf to BTF, __user information will be available in vmlinux BTF and can be used by the bpf verifier, perf/probe or other use cases. Currently btf_type_tag is only supported in clang (>= clang14) and pahole (>= 1.23). gcc support is also proposed and under development ([5]). In the rest of the patch set, Patch 1 adds support for the __user btf_type_tag during compilation. Patch 2 adds bpf verifier support to utilize __user tag information to reject bpf programs not using the proper helpers to access user memory. Patches 3-5 are bpf selftests which demonstrate that the verifier can reject direct user memory accesses. [1] http://lkml.kernel.org/r/155789874562.26965.10836126971405890891.stgit@devnote2 [2] http://lkml.kernel.org/r/155789872187.26965.4468456816590888687.stgit@devnote2 [3] http://lkml.kernel.org/r/155789871009.26965.14167558859557329331.stgit@devnote2 [4] https://reviews.llvm.org/D111199 [5] https://lore.kernel.org/bpf/0cbeb2fb-1a18-f690-e360-24b1c90c2a91@fb.com/ Changelog: v2 -> v3: - remove FLAG_DONTCARE enumerator and just use 0 as the dontcare flag. - explain how the btf type_tag is encoded in the btf type chain. v1 -> v2: - use MEM_USER flag for the PTR_TO_BTF_ID reg type instead of a separate field to encode the __user tag. - add a test with kernel function __sys_getsockname which has a __user tagged argument. ==================== Signed-off-by: Alexei Starovoitov <ast@kernel.org>
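A heavily hedged sketch of the idea from the BPF program side; the attach target __sys_getsockname comes from the cover letter's test description, while the program name, vmlinux.h generation and exact shape are assumptions:

```c
// SPDX-License-Identifier: GPL-2.0
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

SEC("fentry/__sys_getsockname")
int BPF_PROG(getsockname_entry, int fd, struct sockaddr *usockaddr)
{
	struct sockaddr sa = {};

	/* usockaddr carries the "user" btf_type_tag in vmlinux BTF, so a direct
	 * usockaddr->sa_family load is rejected by the verifier; user memory has
	 * to go through the dedicated helper instead
	 */
	bpf_probe_read_user(&sa, sizeof(sa), usockaddr);
	return 0;
}

char _license[] SEC("license") = "GPL";
```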
-