- 04 Feb, 2022 2 commits
-
-
Hou Tao authored
Since commit b2eed9b5 ("arm64/kernel: kaslr: reduce module randomization range to 2 GB"), modules on arm64 are placed within 2GB of the kernel region whether or not KASLR is enabled, so an s32 in bpf_kfunc_desc is sufficient to represent the offset of a module function relative to __bpf_call_base. The only thing needed is to override bpf_jit_supports_kfunc_call(). Signed-off-by: Hou Tao <houtao1@huawei.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20220130092917.14544-2-hotforest@gmail.com
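
A minimal sketch of what such an override looks like, assuming the arch JIT overrides the generic __weak hook (e.g. from arch/arm64/net/bpf_jit_comp.c); this is an illustration, not the verbatim patch:

    /* enable direct kfunc calls from BPF programs on this arch */
    bool bpf_jit_supports_kfunc_call(void)
    {
        return true;
    }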
-
Andrii Nakryiko authored
btf__get_map_kv_tids() is in the same group of APIs as btf_ext__reloc_func_info()/btf_ext__reloc_line_info(), which were only used by BCC. It was missed when those APIs were marked as deprecated in [0]. Fix that to complete [1]. [0] https://patchwork.kernel.org/project/netdevbpf/patch/20220201014610.3522985-1-davemarchevsky@fb.com/ [1] Closes: https://github.com/libbpf/libbpf/issues/277 Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20220203225017.1795946-1-andrii@kernel.org
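
A hedged sketch of how such a marking typically looks in libbpf's btf.h (the exact deprecation version and message here are assumptions, not the patch itself):

    LIBBPF_API LIBBPF_DEPRECATED_SINCE(0, 7, "this API is not necessary for libbpf users")
    int btf__get_map_kv_tids(const struct btf *btf, const char *map_name,
                             __u32 expected_key_size, __u32 expected_value_size,
                             __u32 *key_type_id, __u32 *value_type_id);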
-
- 03 Feb, 2022 21 commits
-
-
Yonghong Song authored
Added a selftest similar to [1], which exposed a kernel bug. Without the fix in the previous patch, a similar kasan error will appear. [1] https://lore.kernel.org/bpf/0000000000009b6eaa05d71a8c06@google.com/ Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20220203191732.742285-1-yhs@fb.com
-
Yonghong Song authored
syzbot reported a btf decl_tag bug with the stack trace below:

general protection fault, probably for non-canonical address 0xdffffc0000000000: 0000 [#1] PREEMPT SMP KASAN
KASAN: null-ptr-deref in range [0x0000000000000000-0x0000000000000007]
CPU: 0 PID: 3592 Comm: syz-executor914 Not tainted 5.16.0-syzkaller-11424-gb7892f7d #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
RIP: 0010:btf_type_vlen include/linux/btf.h:231 [inline]
RIP: 0010:btf_decl_tag_resolve+0x83e/0xaa0 kernel/bpf/btf.c:3910
...
Call Trace:
<TASK>
btf_resolve+0x251/0x1020 kernel/bpf/btf.c:4198
btf_check_all_types kernel/bpf/btf.c:4239 [inline]
btf_parse_type_sec kernel/bpf/btf.c:4280 [inline]
btf_parse kernel/bpf/btf.c:4513 [inline]
btf_new_fd+0x19fe/0x2370 kernel/bpf/btf.c:6047
bpf_btf_load kernel/bpf/syscall.c:4039 [inline]
__sys_bpf+0x1cbb/0x5970 kernel/bpf/syscall.c:4679
__do_sys_bpf kernel/bpf/syscall.c:4738 [inline]
__se_sys_bpf kernel/bpf/syscall.c:4736 [inline]
__x64_sys_bpf+0x75/0xb0 kernel/bpf/syscall.c:4736
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x44/0xae

The kasan error is triggered with an illegal BTF like below:
type 0: void
type 1: int
type 2: decl_tag to func type 3
type 3: func to func_proto type 8

The total number of types is 4, so type 3 is illegal since its func_proto type is out of range. Currently, the target type of a decl_tag can be struct/union, var or func. Both struct/union and var implement their own 'resolve' callback functions and hence are handled properly in the kernel. But func type doesn't have a 'resolve' callback function. When btf_decl_tag_resolve() tries to check a func type, it tries to get the vlen of its func_proto type, which triggered the above kasan error. To fix the issue, btf_decl_tag_resolve() needs to do btf_func_check() before trying to access the func_proto type. In the current implementation, func type is checked with btf_func_check() in the main checking function btf_check_all_types(). To fix the above kasan issue, let us implement the 'resolve' callback for func type properly. The 'resolve' callback will also be called in btf_check_all_types() for func types.

Fixes: b5ea834d ("bpf: Support for new btf kind BTF_KIND_TAG")
Reported-by: syzbot+53619be9444215e785ed@syzkaller.appspotmail.com
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20220203191727.741862-1-yhs@fb.com
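
A rough user-space sketch of BTF resembling the reported layout, built with libbpf's btf__add_*() APIs. This assumes libbpf accepts forward/dangling type ids at construction time and is not the original syzkaller reproducer; error handling is simplified:

    #include <bpf/btf.h>
    #include <linux/btf.h>

    int build_bad_btf(void)
    {
        struct btf *btf = btf__new_empty();

        if (!btf)
            return -1;
        btf__add_int(btf, "int", 4, BTF_INT_SIGNED);   /* type 1 */
        btf__add_decl_tag(btf, "tag", 3, -1);          /* type 2: decl_tag -> func (type 3) */
        btf__add_func(btf, "f", BTF_FUNC_STATIC, 8);   /* type 3: func -> out-of-range proto (8) */
        /* with the fix, the kernel should reject this BTF cleanly instead of crashing */
        return btf__load_into_kernel(btf);
    }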
-
Delyan Kratunov authored
Arbitrary storage via bpf_*__set_priv/__priv is being deprecated without a replacement ([1]). perf uses this capability, but most of that is going away with the removal of prologue generation ([2]). perf is already suppressing deprecation warnings, so the remaining cleanup will happen separately. [1]: Closes: https://github.com/libbpf/libbpf/issues/294 [2]: https://lore.kernel.org/bpf/20220123221932.537060-1-jolsa@kernel.org/ Signed-off-by: Delyan Kratunov <delyank@fb.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220203180032.1921580-1-delyank@fb.com
-
Lorenzo Bianconi authored
Fix the following kasan issue reported by syzbot:

BUG: KASAN: slab-out-of-bounds in __skb_frag_set_page include/linux/skbuff.h:3242 [inline]
BUG: KASAN: slab-out-of-bounds in bpf_prog_test_run_xdp+0x10ac/0x1150 net/bpf/test_run.c:972
Write of size 8 at addr ffff888048c75000 by task syz-executor.5/23405
CPU: 1 PID: 23405 Comm: syz-executor.5 Not tainted 5.16.0-syzkaller #0
Hardware name: Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:106
print_address_description.constprop.0.cold+0x8d/0x336 mm/kasan/report.c:255
__kasan_report mm/kasan/report.c:442 [inline]
kasan_report.cold+0x83/0xdf mm/kasan/report.c:459
__skb_frag_set_page include/linux/skbuff.h:3242 [inline]
bpf_prog_test_run_xdp+0x10ac/0x1150 net/bpf/test_run.c:972
bpf_prog_test_run kernel/bpf/syscall.c:3356 [inline]
__sys_bpf+0x1858/0x59a0 kernel/bpf/syscall.c:4658
__do_sys_bpf kernel/bpf/syscall.c:4744 [inline]
__se_sys_bpf kernel/bpf/syscall.c:4742 [inline]
__x64_sys_bpf+0x75/0xb0 kernel/bpf/syscall.c:4742
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x44/0xae
RIP: 0033:0x7f4ea30dd059
RSP: 002b:00007f4ea1a52168 EFLAGS: 00000246 ORIG_RAX: 0000000000000141
RAX: ffffffffffffffda RBX: 00007f4ea31eff60 RCX: 00007f4ea30dd059
RDX: 0000000000000048 RSI: 0000000020000000 RDI: 000000000000000a
RBP: 00007f4ea313708d R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007ffc8367c5af R14: 00007f4ea1a52300 R15: 0000000000022000
</TASK>

Allocated by task 23405:
kasan_save_stack+0x1e/0x50 mm/kasan/common.c:38
kasan_set_track mm/kasan/common.c:46 [inline]
set_alloc_info mm/kasan/common.c:437 [inline]
____kasan_kmalloc mm/kasan/common.c:516 [inline]
____kasan_kmalloc mm/kasan/common.c:475 [inline]
__kasan_kmalloc+0xa9/0xd0 mm/kasan/common.c:525
kmalloc include/linux/slab.h:586 [inline]
kzalloc include/linux/slab.h:715 [inline]
bpf_test_init.isra.0+0x9f/0x150 net/bpf/test_run.c:411
bpf_prog_test_run_xdp+0x2f8/0x1150 net/bpf/test_run.c:941
bpf_prog_test_run kernel/bpf/syscall.c:3356 [inline]
__sys_bpf+0x1858/0x59a0 kernel/bpf/syscall.c:4658
__do_sys_bpf kernel/bpf/syscall.c:4744 [inline]
__se_sys_bpf kernel/bpf/syscall.c:4742 [inline]
__x64_sys_bpf+0x75/0xb0 kernel/bpf/syscall.c:4742
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x44/0xae

The buggy address belongs to the object at ffff888048c74000 which belongs to the cache kmalloc-4k of size 4096
The buggy address is located 0 bytes to the right of 4096-byte region [ffff888048c74000, ffff888048c75000)
The buggy address belongs to the page:
page:ffffea0001231c00 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x48c70
head:ffffea0001231c00 order:3 compound_mapcount:0 compound_pincount:0
flags: 0xfff00000010200(slab|head|node=0|zone=1|lastcpupid=0x7ff)
raw: 00fff00000010200 dead000000000100 dead000000000122 ffff888010c42140
raw: 0000000000000000 0000000080040004 00000001ffffffff 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
prep_new_page mm/page_alloc.c:2434 [inline]
get_page_from_freelist+0xa72/0x2f50 mm/page_alloc.c:4165
__alloc_pages+0x1b2/0x500 mm/page_alloc.c:5389
alloc_pages+0x1aa/0x310 mm/mempolicy.c:2271
alloc_slab_page mm/slub.c:1799 [inline]
allocate_slab mm/slub.c:1944 [inline]
new_slab+0x28a/0x3b0 mm/slub.c:2004
___slab_alloc+0x87c/0xe90 mm/slub.c:3018
__slab_alloc.constprop.0+0x4d/0xa0 mm/slub.c:3105
slab_alloc_node mm/slub.c:3196 [inline]
__kmalloc_node_track_caller+0x2cb/0x360 mm/slub.c:4957
kmalloc_reserve net/core/skbuff.c:354 [inline]
__alloc_skb+0xde/0x340 net/core/skbuff.c:426
alloc_skb include/linux/skbuff.h:1159 [inline]
nsim_dev_trap_skb_build drivers/net/netdevsim/dev.c:745 [inline]
nsim_dev_trap_report drivers/net/netdevsim/dev.c:802 [inline]
nsim_dev_trap_report_work+0x29a/0xbc0 drivers/net/netdevsim/dev.c:843
process_one_work+0x9ac/0x1650 kernel/workqueue.c:2307
worker_thread+0x657/0x1110 kernel/workqueue.c:2454
kthread+0x2e9/0x3a0 kernel/kthread.c:377
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295

page last free stack trace:
reset_page_owner include/linux/page_owner.h:24 [inline]
free_pages_prepare mm/page_alloc.c:1352 [inline]
free_pcp_prepare+0x374/0x870 mm/page_alloc.c:1404
free_unref_page_prepare mm/page_alloc.c:3325 [inline]
free_unref_page+0x19/0x690 mm/page_alloc.c:3404
qlink_free mm/kasan/quarantine.c:157 [inline]
qlist_free_all+0x6d/0x160 mm/kasan/quarantine.c:176
kasan_quarantine_reduce+0x180/0x200 mm/kasan/quarantine.c:283
__kasan_slab_alloc+0xa2/0xc0 mm/kasan/common.c:447
kasan_slab_alloc include/linux/kasan.h:260 [inline]
slab_post_alloc_hook mm/slab.h:732 [inline]
slab_alloc_node mm/slub.c:3230 [inline]
slab_alloc mm/slub.c:3238 [inline]
kmem_cache_alloc+0x202/0x3a0 mm/slub.c:3243
getname_flags.part.0+0x50/0x4f0 fs/namei.c:138
getname_flags include/linux/audit.h:323 [inline]
getname+0x8e/0xd0 fs/namei.c:217
do_sys_openat2+0xf5/0x4d0 fs/open.c:1208
do_sys_open fs/open.c:1230 [inline]
__do_sys_openat fs/open.c:1246 [inline]
__se_sys_openat fs/open.c:1241 [inline]
__x64_sys_openat+0x13f/0x1f0 fs/open.c:1241
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x44/0xae

Memory state around the buggy address:
ffff888048c74f00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
ffff888048c74f80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
^
ffff888048c75080: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
ffff888048c75100: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
==================================================================

Fixes: 1c194998 ("bpf: introduce frags support to bpf_prog_test_run_xdp()")
Reported-by: syzbot+6d70ca7438345077c549@syzkaller.appspotmail.com
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/688c26f9dd6e885e58e8e834ede3f0139bb7fa95.1643835097.git.lorenzo@kernel.org
-
Christoph Hellwig authored
Use proper tables and RST markup to document the atomic instructions in a structured way. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20220131183638.3934982-6-hch@lst.de
-
Christoph Hellwig authored
In addition to the normal 64-bit instruction encoding, eBPF also has a single wide instruction that uses a second 64-bit word to hold an additional immediate value. Instead of only documenting this format deep down in the document, mention it in the instruction encoding section. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20220131183638.3934982-5-hch@lst.de
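
For illustration, the wide (16-byte) load-64-bit-immediate form can be written as two struct bpf_insn slots, with the 64-bit immediate split across the two imm fields. This is a sketch of the encoding the documentation describes, not text from the patch:

    #include <linux/bpf.h>

    /* r1 = 0x5566778811223344, encoded as one lddw spanning two 8-byte slots */
    const struct bpf_insn ld_imm64[2] = {
        { .code = BPF_LD | BPF_DW | BPF_IMM, .dst_reg = BPF_REG_1,
          .imm = 0x11223344 },                 /* low 32 bits of the immediate */
        { .code = 0, .imm = 0x55667788 },      /* pseudo insn: high 32 bits */
    };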
-
Christoph Hellwig authored
Use consistent terminology and structured RST elements to better document these two oddball instructions. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20220131183638.3934982-4-hch@lst.de
-
Christoph Hellwig authored
Add a separate section and a little intro blurb for the regular load and store instructions. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20220131183638.3934982-3-hch@lst.de
-
Christoph Hellwig authored
Add a section to document the byte swapping instructions. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20220131183638.3934982-2-hch@lst.de
-
Daniel Borkmann authored
Andrii Nakryiko says: ==================== Clean up remaining missed uses of deprecated libbpf APIs across samples/bpf, selftests/bpf, libbpf, and bpftool. Also fix uninit variable warning in bpftool. ==================== Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Andrii Nakryiko authored
Remove all the remaining uses of the deprecated bpf_prog_load_xattr() API. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Quentin Monnet <quentin@isovalent.com> Link: https://lore.kernel.org/bpf/20220202225916.3313522-7-andrii@kernel.org
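
For reference, a minimal sketch of the usual replacement pattern via the bpf_object API; the object file and program names are made up, and error handling is simplified:

    #include <bpf/libbpf.h>

    int load_example(void)
    {
        struct bpf_object *obj;
        struct bpf_program *prog;

        obj = bpf_object__open_file("prog.bpf.o", NULL);
        if (!obj)
            return -1;
        if (bpf_object__load(obj)) {
            bpf_object__close(obj);
            return -1;
        }
        prog = bpf_object__find_program_by_name(obj, "my_prog");
        return prog ? bpf_program__fd(prog) : -1;
    }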
-
Andrii Nakryiko authored
Switch to using new bpf_xdp_*() APIs across all selftests. Take advantage of a more straightforward and user-friendly semantics of old_prog_fd (0 means "don't care") in few places. This is a redo of 54435652 ("selftests/bpf: switch to new libbpf XDP APIs"), which was previously reverted to minimize conflicts during bpf and bpf-next tree merge. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Quentin Monnet <quentin@isovalent.com> Link: https://lore.kernel.org/bpf/20220202225916.3313522-6-andrii@kernel.org
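
A short usage sketch of the new API surface (flags and opts are kept minimal here and are placeholders):

    #include <bpf/libbpf.h>

    int swap_xdp_prog(int ifindex, int prog_fd)
    {
        int err;

        /* attach the program; NULL opts leaves old_prog_fd at 0, i.e. "don't care" */
        err = bpf_xdp_attach(ifindex, prog_fd, 0, NULL);
        if (err)
            return err;
        /* detach whatever is attached, again without naming the old program */
        return bpf_xdp_detach(ifindex, 0, NULL);
    }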
-
Andrii Nakryiko authored
Switch to libbpf_probe_*() APIs instead of the deprecated ones. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Quentin Monnet <quentin@isovalent.com> Link: https://lore.kernel.org/bpf/20220202225916.3313522-5-andrii@kernel.org
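
A hedged example of the probe API surface; the specific program type, map type and helper probed here are arbitrary choices for illustration:

    #include <bpf/libbpf.h>

    void probe_example(void)
    {
        /* each probe returns 1 if supported, 0 if not, <0 on error */
        int has_xdp  = libbpf_probe_bpf_prog_type(BPF_PROG_TYPE_XDP, NULL);
        int has_hash = libbpf_probe_bpf_map_type(BPF_MAP_TYPE_HASH, NULL);
        int has_redir = libbpf_probe_bpf_helper(BPF_PROG_TYPE_XDP,
                                                BPF_FUNC_redirect_map, NULL);

        (void)has_xdp; (void)has_hash; (void)has_redir;
    }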
-
Andrii Nakryiko authored
Newer GCC complains about capturing the address of an uninitialized variable. While there is nothing wrong with the code (the variable is filled out by the kernel), initialize the variable anyway to make the compiler happy. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Quentin Monnet <quentin@isovalent.com> Link: https://lore.kernel.org/bpf/20220202225916.3313522-4-andrii@kernel.org
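
A generic illustration of the pattern (the actual variable touched in bpftool is not shown here):

    #include <bpf/bpf.h>

    void query_prog_info(int prog_fd)
    {
        struct bpf_prog_info info = {}; /* zero-init only to silence the warning */
        __u32 info_len = sizeof(info);

        /* the kernel fills 'info' on success, so the init is purely for the compiler */
        bpf_obj_get_info_by_fd(prog_fd, &info, &info_len);
    }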
-
Andrii Nakryiko authored
libbpf 1.0 is not going to support passing ifindex to BPF prog/map/helper feature probing APIs. Remove the support for BPF offload feature probing. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Quentin Monnet <quentin@isovalent.com> Link: https://lore.kernel.org/bpf/20220202225916.3313522-3-andrii@kernel.org
-
Andrii Nakryiko authored
Open-code the bpf_map__is_offload_neutral() logic in one place, in the to-be-deprecated bpf_prog_load_xattr2. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Quentin Monnet <quentin@isovalent.com> Link: https://lore.kernel.org/bpf/20220202225916.3313522-2-andrii@kernel.org
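
For context, the logic being open-coded amounts to a perf-event-array check; a sketch assuming the public bpf_map__type() accessor:

    #include <bpf/libbpf.h>

    /* offload-neutral maps are usable regardless of device offload */
    static bool map_is_offload_neutral(const struct bpf_map *map)
    {
        return bpf_map__type(map) == BPF_MAP_TYPE_PERF_EVENT_ARRAY;
    }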
-
Andrii Nakryiko authored
Delyan Kratunov says: ==================== Fairly straight-forward mechanical transformation from bpf_prog_test_run and bpf_prog_test_run_xattr to the bpf_prog_test_run_opts goodness. I did a fair amount of drive-by CHECK/CHECK_ATTR cleanups as well, though certainly not everything possible. Primarily, I did not want to just change arguments to CHECK calls, though I had to do a bit more than that in some cases (overall, -119 CHECK calls and all CHECK_ATTR calls). v2 -> v3: Don't introduce CHECK_OPTS, replace CHECK/CHECK_ATTR usages we need to touch with ASSERT_* calls instead. Don't be prescriptive about the opts var name and keep old names where that would minimize unnecessary code churn. Drop _xattr-specific checks in prog_run_xattr and rename accordingly. v1 -> v2: Split selftests/bpf changes into two commits to appease the mailing list. ==================== Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
-
Delyan Kratunov authored
Deprecate non-extendable bpf_prog_test_run{,_xattr} in favor of OPTS-based bpf_prog_test_run_opts ([0]). [0] Closes: https://github.com/libbpf/libbpf/issues/286 Signed-off-by: Delyan Kratunov <delyank@fb.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220202235423.1097270-5-delyank@fb.com
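
A minimal usage sketch of the OPTS-based replacement; the packet buffer and expectations are placeholders:

    #include <bpf/bpf.h>

    int run_prog_once(int prog_fd, void *pkt, size_t pkt_len)
    {
        LIBBPF_OPTS(bpf_test_run_opts, topts,
            .data_in = pkt,
            .data_size_in = pkt_len,
            .repeat = 1,
        );
        int err = bpf_prog_test_run_opts(prog_fd, &topts);

        /* on success, topts.retval and topts.duration are filled in by the kernel */
        return err ?: (int)topts.retval;
    }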
-
Delyan Kratunov authored
bpf_prog_test_run is being deprecated in favor of the OPTS-based bpf_prog_test_run_opts. Signed-off-by: Delyan Kratunov <delyank@fb.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220202235423.1097270-4-delyank@fb.com
-
Delyan Kratunov authored
bpf_prog_test_run_xattr is being deprecated in favor of the OPTS-based bpf_prog_test_run_opts. We end up unable to use CHECK_ATTR so replace usages with ASSERT_* calls. Also, prog_run_xattr is now prog_run_opts. Signed-off-by: Delyan Kratunov <delyank@fb.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220202235423.1097270-3-delyank@fb.com
-
Delyan Kratunov authored
bpf_prog_test_run is being deprecated in favor of the OPTS-based bpf_prog_test_run_opts. We end up unable to use CHECK in most cases, so replace usages with ASSERT_* calls. Signed-off-by: Delyan Kratunov <delyank@fb.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220202235423.1097270-2-delyank@fb.com
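
Roughly, the conversions follow this shape; the names are illustrative and 'topts' is assumed to have been declared with LIBBPF_OPTS by the caller:

    #include <bpf/bpf.h>
    #include "test_progs.h"  /* selftests harness providing ASSERT_OK()/ASSERT_EQ() */

    static void check_run(int prog_fd, struct bpf_test_run_opts *topts)
    {
        int err = bpf_prog_test_run_opts(prog_fd, topts);

        /* replaces the old CHECK(err || retval, ...) pattern */
        ASSERT_OK(err, "test_run_opts");
        ASSERT_EQ(topts->retval, 0, "test_run_opts retval");
    }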
-
- 02 Feb, 2022 6 commits
-
-
Daniel Borkmann authored
Nathan Chancellor says: ==================== This series allows CONFIG_DEBUG_INFO_DWARF5 to be selected with CONFIG_DEBUG_INFO_BTF=y by checking the pahole version. The first four patches add CONFIG_PAHOLE_VERSION and scripts/pahole-version.sh to clean up all the places where pahole's version is transformed into a 3-digit form. The fourth patch adds a PAHOLE_VERSION dependency to DEBUG_INFO_DWARF5 so that there are no build errors when it is selected with DEBUG_INFO_BTF. I build tested Fedora's aarch64 and x86_64 configs with ToT clang 14.0.0 and GCC 11 with CONFIG_DEBUG_INFO_DWARF5 enabled with both pahole 1.21 and 1.23. ==================== Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Nathan Chancellor authored
Commit 98cd6f52 ("Kconfig: allow explicit opt in to DWARF v5") prevented CONFIG_DEBUG_INFO_DWARF5 from being selected when CONFIG_DEBUG_INFO_BTF is enabled because pahole had issues with clang's DWARF5 info. This was resolved by [1], which is in pahole v1.21. Allow DEBUG_INFO_DWARF5 to be selected with DEBUG_INFO_BTF when using pahole v1.21 or newer. [1]: https://git.kernel.org/pub/scm/devel/pahole/pahole.git/commit/?id=7d8e829f636f47aba2e1b6eda57e74d8e31f733c Signed-off-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220201205624.652313-6-nathan@kernel.org
-
Nathan Chancellor authored
Now that CONFIG_PAHOLE_VERSION exists, use it in the definition of CONFIG_PAHOLE_HAS_SPLIT_BTF and CONFIG_PAHOLE_HAS_BTF_TAG to reduce the amount of duplication across the tree. Signed-off-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220201205624.652313-5-nathan@kernel.org
-
Nathan Chancellor authored
Use pahole-version.sh to get pahole's version code to reduce the amount of duplication across the tree. Signed-off-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220201205624.652313-4-nathan@kernel.org
-
Nathan Chancellor authored
There are a few different places where pahole's version is turned into a three digit form with the exact same command. Move this command into scripts/pahole-version.sh to reduce the amount of duplication across the tree. Create CONFIG_PAHOLE_VERSION so the version code can be used in Kconfig to enable and disable configuration options based on the pahole version, which is already done in a couple of places. Signed-off-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220201205624.652313-3-nathan@kernel.org
-
Nathan Chancellor authored
Currently, scripts/pahole-flags.sh has no formal maintainer. Add it to the BPF section so that patches to it can be properly reviewed and picked up. Signed-off-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220201205624.652313-2-nathan@kernel.org
-
- 01 Feb, 2022 11 commits
-
-
Daniel Borkmann authored
Alexei Starovoitov says: ==================== CO-RE in the kernel support allows bpf preload to switch to light skeleton and remove libbpf dependency. This reduces the size of bpf_preload_umd from 300kbyte to 19kbyte and eventually will make "kernel skeleton" possible. ==================== Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Alexei Starovoitov authored
Drop libbpf, libelf, libz dependency from bpf preload. This reduces bpf_preload_umd binary size from 1.7M to 30k unstripped with debug info and from 300k to 19k stripped. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20220131220528.98088-8-alexei.starovoitov@gmail.com
-
Alexei Starovoitov authored
Open code obj_get_info_by_fd in bpf preload. It's the last part of libbpf that preload/iterators were using. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20220131220528.98088-7-alexei.starovoitov@gmail.com
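
A hedged sketch of what open-coding that call boils down to; field names follow the uapi union bpf_attr, while the helper name itself is made up:

    #include <string.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/bpf.h>

    static int sys_obj_get_info_by_fd(int fd, void *info, __u32 *info_len)
    {
        union bpf_attr attr;
        int err;

        memset(&attr, 0, sizeof(attr));
        attr.info.bpf_fd = fd;
        attr.info.info_len = *info_len;
        attr.info.info = (__u64)(unsigned long)info;

        err = syscall(__NR_bpf, BPF_OBJ_GET_INFO_BY_FD, &attr, sizeof(attr));
        if (!err)
            *info_len = attr.info.info_len;
        return err;
    }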
-
Alexei Starovoitov authored
Convert bpffs preload iterators to light skeleton. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20220131220528.98088-6-alexei.starovoitov@gmail.com
-
Alexei Starovoitov authored
BPF programs and maps are memcg accounted. setrlimit is obsolete. Remove its use from bpf preload. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20220131220528.98088-5-alexei.starovoitov@gmail.com
-
Alexei Starovoitov authored
Open code raw_tracepoint_open and link_create used by light skeleton to be able to avoid full libbpf eventually. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20220131220528.98088-4-alexei.starovoitov@gmail.com
-
Alexei Starovoitov authored
Open code low level bpf commands used by light skeleton to be able to avoid full libbpf eventually. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20220131220528.98088-3-alexei.starovoitov@gmail.com
-
Alexei Starovoitov authored
bpf iterator programs should use bpf_link_create to attach instead of bpf_raw_tracepoint_open like other tracing programs. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20220131220528.98088-2-alexei.starovoitov@gmail.com
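
A small sketch of the attach-side difference; prog_fd is assumed to be an already loaded iterator program:

    #include <bpf/bpf.h>

    int attach_iter(int prog_fd)
    {
        /* preferred: create a BPF link for the iterator program */
        int link_fd = bpf_link_create(prog_fd, 0, BPF_TRACE_ITER, NULL);

        /* older pattern being replaced: bpf_raw_tracepoint_open(NULL, prog_fd) */
        return link_fd;
    }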
-
Andrii Nakryiko authored
Lorenzo Bianconi says: ==================== Deprecate xdp_cpumap, xdp_devmap and classifier sec definitions. Update cpumap/devmap samples and kselftests. Changes since v2: - update warning log - split libbpf and samples/kselftests changes - deprecate classifier sec definition Changes since v1: - refer to Libbpf-1.0-migration-guide in the warning raised by libbpf ==================== Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
-
Lorenzo Bianconi authored
Substitute deprecated xdp_cpumap and xdp_devmap sec_name with xdp/cpumap and xdp/devmap respectively. Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/509201497c6c4926bc941f1cba24173cf500e760.1643727185.git.lorenzo@kernel.org
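
For example, a cpumap program under the new section naming might look like the following; the program body is a trivial placeholder:

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    SEC("xdp/cpumap")
    int xdp_cpumap_pass(struct xdp_md *ctx)
    {
        return XDP_PASS;
    }

    char _license[] SEC("license") = "GPL";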
-
Lorenzo Bianconi authored
Substitute deprecated xdp_cpumap and xdp_devmap sec_name with xdp/cpumap and xdp/devmap respectively in bpf kselftests. Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/9a4286cd36781e2c31ba3773bfdcf45cf1bbaa9e.1643727185.git.lorenzo@kernel.org
-