- 30 Sep, 2022 3 commits
-
-
Dave Thaler authored
Move Clang notes to a separate file. Signed-off-by: Dave Thaler <dthaler@microsoft.com> Link: https://lore.kernel.org/r/20220927185958.14995-3-dthaler1968@googlemail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Dave Thaler authored
Add Linux byteswap note. Signed-off-by: Dave Thaler <dthaler@microsoft.com> Link: https://lore.kernel.org/r/20220927185958.14995-2-dthaler1968@googlemail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Dave Thaler authored
Move legacy packet instructions to a separate file. Signed-off-by: Dave Thaler <dthaler@microsoft.com> Link: https://lore.kernel.org/r/20220927185958.14995-1-dthaler1968@googlemail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
- 29 Sep, 2022 18 commits
-
-
Alexei Starovoitov authored
Martin KaFai Lau says: ==================== From: Martin KaFai Lau <martin.lau@kernel.org> The struct_ops is sharing the tracing-trampoline's enter/exit function which tracks prog->active to avoid recursion. It turns out the struct_ops bpf prog will hit this prog->active and unnecessarily skipped running the struct_ops prog. eg. The '.ssthresh' may run in_task() and then interrupted by softirq that runs the same '.ssthresh'. The kernel does not call the tcp-cc's ops in a recursive way, so this set is to remove the recursion check for struct_ops prog. v3: - Clear the bpf_chg_cc_inprogress from the newly cloned tcp_sock in tcp_create_openreq_child() because the listen sk can be cloned without lock being held. (Eric Dumazet) v2: - v1 [0] turned into a long discussion on a few cases and also whether it needs to follow the bpf_run_ctx chain if there is tracing bpf_run_ctx (kprobe/trace/trampoline) running in between. It is a good signal that it is not obvious enough to reason about it and needs a tradeoff for a more straight forward approach. This revision uses one bit out of an existing 1 byte hole in the tcp_sock. It is in Patch 4. [0]: https://lore.kernel.org/bpf/20220922225616.3054840-1-kafai@fb.com/T/#md98d40ac5ec295fdadef476c227a3401b2b6b911 ==================== Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Martin KaFai Lau authored
This patch changes the bpf_dctcp test to ensure that a recursive bpf_setsockopt(TCP_CONGESTION) call returns -EBUSY. Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Link: https://lore.kernel.org/r/20220929070407.965581-6-martin.lau@linux.dev Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Martin KaFai Lau authored
When a bad bpf prog '.init' calls bpf_setsockopt(TCP_CONGESTION, "itself"), it will trigger this loop: .init => bpf_setsockopt(tcp_cc) => .init => bpf_setsockopt(tcp_cc) ... ... => .init => bpf_setsockopt(tcp_cc). It was prevented by the prog->active counter before, but the prog->active detection cannot be used in struct_ops, as explained in the earlier patch of this set. In this patch, the second bpf_setsockopt(tcp_cc) is not allowed, in order to break the loop. This is done by using a bit of an existing 1-byte hole in tcp_sock to check if there is an on-going bpf_setsockopt(TCP_CONGESTION) on this tcp_sock. Note that this essentially means only the first '.init' can call bpf_setsockopt(TCP_CONGESTION) to pick a fallback cc (e.g. the peer does not support ECN); a second '.init' cannot fall back to another cc. This applies even if the second bpf_setsockopt(TCP_CONGESTION) would not cause a loop. Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Reviewed-by: Eric Dumazet <edumazet@google.com> Link: https://lore.kernel.org/r/20220929070407.965581-5-martin.lau@linux.dev Signed-off-by: Alexei Starovoitov <ast@kernel.org>
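Below is a hedged C sketch of the guard described in this patch. The bpf_chg_cc_inprogress bit name comes from the series' cover letter; the wrapper function name and exact call site are illustrative assumptions, not the verbatim kernel hunk.

/* Hedged sketch: one tcp_sock bit breaks the .init => bpf_setsockopt(tcp_cc)
 * loop. The wrapper name is hypothetical.
 */
static int bpf_setsockopt_tcp_congestion_sketch(struct sock *sk,
                                                char *optval, int optlen)
{
        struct tcp_sock *tp = tcp_sk(sk);
        int ret;

        /* A recursive bpf_setsockopt(TCP_CONGESTION) from the cc's own
         * '.init' lands here again; refuse it instead of looping.
         */
        if (tp->bpf_chg_cc_inprogress)
                return -EBUSY;

        tp->bpf_chg_cc_inprogress = 1;
        ret = do_tcp_setsockopt(sk, SOL_TCP, TCP_CONGESTION,
                                KERNEL_SOCKPTR(optval), optlen);
        tp->bpf_chg_cc_inprogress = 0;

        return ret;
}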
-
Martin KaFai Lau authored
This patch moves the bpf_setsockopt(TCP_CONGESTION) logic into another function. The next patch will add extra logic to avoid recursion and this will make the latter patch easier to follow. Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Link: https://lore.kernel.org/r/20220929070407.965581-4-martin.lau@linux.dev Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Martin KaFai Lau authored
The check on the tcp-cc "cdg" is done in bpf_sk_setsockopt, which is used by the bpf_tcp_ca, bpf_lsm, cg_sockopt, and tcp_iter hooks. However, it is not done for cg sock_addr, cg sockops, and some of the bpf_lsm_cgroup hooks. The tcp-cc "cdg" should have very limited usage. This patch moves the "cdg" check to the common sol_tcp_sockopt() so that all hooks have a consistent behavior. The motivation for making this check consistent now is that a later patch will refactor bpf_setsockopt(TCP_CONGESTION) into another function, so it is better to take this chance to refactor this piece also. Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Link: https://lore.kernel.org/r/20220929070407.965581-3-martin.lau@linux.dev Signed-off-by: Alexei Starovoitov <ast@kernel.org>
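As a rough illustration of the consistent check in the common sol_tcp_sockopt() path (the helper name is hypothetical and the -ENOTSUPP value is an assumption based on the existing bpf_sk_setsockopt behavior):

/* Hedged sketch, not the exact kernel hunk. */
static int sol_tcp_reject_cdg_sketch(const char *optval, int optlen)
{
        /* "cdg" should have very limited usage; reject it once here so
         * every bpf_setsockopt() hook behaves the same.
         */
        if (optlen >= (int)sizeof("cdg") - 1 &&
            !strncmp("cdg", optval, optlen))
                return -ENOTSUPP;

        return 0;
}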
-
Martin KaFai Lau authored
The struct_ops prog allows using bpf to implement the functions in a struct (e.g. a kernel module). The current usage is to implement the tcp_congestion. The kernel does not call the tcp-cc's ops (i.e. the bpf prog) in a recursive way. The struct_ops is sharing the tracing-trampoline's enter/exit function, which tracks prog->active to avoid recursion. It is needed for tracing progs. However, it turns out the struct_ops bpf prog will hit this prog->active and unnecessarily skip running the struct_ops prog, e.g. '.ssthresh' may run in_task() and then be interrupted by a softirq that runs the same '.ssthresh'. Skipping the '.ssthresh' will end up returning a random value to the caller. The patch adds __bpf_prog_{enter,exit}_struct_ops for the struct_ops trampoline. They do not track prog->active to detect recursion. One exception is when the tcp_congestion's '.init' ops is doing bpf_setsockopt(TCP_CONGESTION) and then recurs to the same '.init' ops. This will be addressed in the following patches. Fixes: ca06f55b ("bpf: Add per-program recursion prevention mechanism") Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Link: https://lore.kernel.org/r/20220929070407.965581-2-martin.lau@linux.dev Signed-off-by: Alexei Starovoitov <ast@kernel.org>
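A rough conceptual sketch of what such a struct_ops enter/exit pair looks like; the surrounding stats/run_ctx handling is simplified and the _sketch names make clear this is not the exact kernel code.

/* Conceptual sketch only: like the tracing variants, but without the
 * prog->active recursion counter.
 */
static u64 notrace bpf_prog_enter_struct_ops_sketch(struct bpf_prog *prog)
{
        rcu_read_lock();
        migrate_disable();
        /* Deliberately no prog->active increment/check here: the kernel
         * never calls the tcp-cc ops recursively, and a softirq re-running
         * '.ssthresh' must not be skipped.
         */
        return sched_clock();   /* start timestamp for runtime stats */
}

static void notrace bpf_prog_exit_struct_ops_sketch(struct bpf_prog *prog,
                                                    u64 start)
{
        /* runtime stats would be updated here using sched_clock() - start */
        migrate_enable();
        rcu_read_unlock();
}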
-
Andrii Nakryiko authored
Wang Yufen says: ==================== Convert some tests to use the preferred ASSERT_* macros instead of the deprecated CHECK(). ==================== Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
-
Wang Yufen authored
Convert the selftest to use the preferred ASSERT_* macros instead of the deprecated CHECK(). Signed-off-by: Wang Yufen <wangyufen@huawei.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/1664169131-32405-12-git-send-email-wangyufen@huawei.com
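For readers unfamiliar with the two macro families, here is a generic before/after sketch of such a conversion (illustrative only, not the diff of this particular selftest):

/* Before: CHECK() takes a condition that is true on failure, relies on the
 * implicit 'duration' variable, and returns non-zero on failure.
 */
if (CHECK(err, "connect_to_fd", "errno %d\n", errno))
        goto cleanup;

/* After: ASSERT_OK() records the failure and prints the offending value
 * itself, and returns true on success.
 */
if (!ASSERT_OK(err, "connect_to_fd"))
        goto cleanup;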
-
Wang Yufen authored
Convert the selftest to use the preferred ASSERT_* macros instead of the deprecated CHECK(). Signed-off-by: Wang Yufen <wangyufen@huawei.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/1664169131-32405-11-git-send-email-wangyufen@huawei.com
-
Wang Yufen authored
Convert the selftest to use the preferred ASSERT_* macros instead of the deprecated CHECK(). Signed-off-by: Wang Yufen <wangyufen@huawei.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/1664169131-32405-10-git-send-email-wangyufen@huawei.com
-
Wang Yufen authored
Convert the selftest to use the preferred ASSERT_* macros instead of the deprecated CHECK(). Signed-off-by: Wang Yufen <wangyufen@huawei.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/1664169131-32405-9-git-send-email-wangyufen@huawei.com
-
Wang Yufen authored
Convert the selftest to use the preferred ASSERT_* macros instead of the deprecated CHECK(). Signed-off-by: Wang Yufen <wangyufen@huawei.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/1664169131-32405-8-git-send-email-wangyufen@huawei.com
-
Wang Yufen authored
Convert the selftest to use the preferred ASSERT_* macros instead of the deprecated CHECK(). Signed-off-by: Wang Yufen <wangyufen@huawei.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/1664169131-32405-7-git-send-email-wangyufen@huawei.com
-
Wang Yufen authored
Convert the selftest to use the preferred ASSERT_* macros instead of the deprecated CHECK(). Signed-off-by: Wang Yufen <wangyufen@huawei.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/1664169131-32405-6-git-send-email-wangyufen@huawei.com
-
Wang Yufen authored
Convert the selftest to use the preferred ASSERT_* macros instead of the deprecated CHECK(). Signed-off-by: Wang Yufen <wangyufen@huawei.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/1664169131-32405-5-git-send-email-wangyufen@huawei.com
-
Wang Yufen authored
Convert the selftest to use the preferred ASSERT_* macros instead of the deprecated CHECK(). Signed-off-by: Wang Yufen <wangyufen@huawei.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/1664169131-32405-4-git-send-email-wangyufen@huawei.com
-
Wang Yufen authored
Convert the selftest to use the preferred ASSERT_* macros instead of the deprecated CHECK(). Signed-off-by: Wang Yufen <wangyufen@huawei.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/1664169131-32405-3-git-send-email-wangyufen@huawei.com
-
Wang Yufen authored
Convert the selftest to use the preferred ASSERT_* macros instead of the deprecated CHECK(). Signed-off-by: Wang Yufen <wangyufen@huawei.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/1664169131-32405-2-git-send-email-wangyufen@huawei.com
-
- 28 Sep, 2022 6 commits
-
-
Andrii Nakryiko authored
Kui-Feng Lee says: ==================== Allow creating an iterator that loops through resources of one task/thread. People could only create iterators to loop through all resources of files, vma, and tasks in the system, even though they were interested in only the resources of a specific task or process. Passing the additional parameters, people can now create an iterator to go through all resources or only the resources of a task. Major Changes: - Add new parameters in bpf_iter_link_info to indicate to go through all tasks or to go through a specific task. - Change the implementations of BPF iterators of vma, files, and tasks to allow going through only the resources of a specific task. - Provide the arguments of parameterized task iterators in bpf_link_info. Differences from v10: - Check pid_alive() to avoid potential errors. Differences from v9: - Fix the boundary check of computing page_shift. - Rewording the reason of checking and returning the same task. Differences from v8: - Fix uninitialized variable. - Avoid redundant work of getting task from pid. - Change format string to use %u instead of %d. - Use the value of page_shift to compute correct offset in bpf_iter_vm_offset.c. Differences from v7: - Travel the tasks of a process through task_group linked list instead of traveling through the whole namespace. Differences from v6: - Add part 5 to make bpftool show the value of parameters. - Change of wording of show_fdinfo() to show pid or tid instead of always pid. - Simplify error handling and naming of test cases. Differences from v5: - Use user-space tid/pid terminologies in bpf_iter_link_info and bpf_link_info. - Fix reference count - Merge all variants to one 'u32 pid' in internal structs. (bpf_iter_aux_info and bpf_iter_seq_task_common) - Compare the result of get_uprobe_offset() with the implementation with the vma iterators. - Implement show_fdinfo. Differences from v4: - Remove 'type' from bpf_iter_link_info and bpf_link_info. v10: https://lore.kernel.org/all/20220831181039.2680134-1-kuifeng@fb.com/ v9: https://lore.kernel.org/bpf/20220829192317.486946-1-kuifeng@fb.com/ v8: https://lore.kernel.org/bpf/20220829192317.486946-1-kuifeng@fb.com/ v7: https://lore.kernel.org/bpf/20220826003712.2810158-1-kuifeng@fb.com/ v6: https://lore.kernel.org/bpf/20220819220927.3409575-1-kuifeng@fb.com/ v5: https://lore.kernel.org/bpf/20220811001654.1316689-1-kuifeng@fb.com/ v4: https://lore.kernel.org/bpf/20220809195429.1043220-1-kuifeng@fb.com/ v3: https://lore.kernel.org/bpf/20220809063501.667610-1-kuifeng@fb.com/ v2: https://lore.kernel.org/bpf/20220801232649.2306614-1-kuifeng@fb.com/ v1: https://lore.kernel.org/bpf/20220726051713.840431-1-kuifeng@fb.com/ ==================== Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
-
Kui-Feng Lee authored
Show the tid or pid of an iterator if it was given a tid or pid argument. For example, the command `bpftool link list` may list the following lines. 1: iter prog 2 target_name bpf_map 2: iter prog 3 target_name bpf_prog 33: iter prog 225 target_name task_file tid 1644 pids test_progs(1644) Link 33 is a task_file iterator with tid 1644. For now, only the task, task_file and task_vma targets may carry a tid or pid to filter out tasks other than those belonging to a process (pid) or a thread (tid). Signed-off-by: Kui-Feng Lee <kuifeng@fb.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Quentin Monnet <quentin@isovalent.com> Acked-by: Yonghong Song <yhs@fb.com> Acked-by: Martin KaFai Lau <martin.lau@kernel.org> Link: https://lore.kernel.org/bpf/20220926184957.208194-6-kuifeng@fb.com
-
Kui-Feng Lee authored
Test iterators of vma, files and tasks. Ensure the API works appropriately to visit all tasks, tasks in a process, or a particular task. Signed-off-by: Kui-Feng Lee <kuifeng@fb.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Acked-by: Martin KaFai Lau <martin.lau@kernel.org> Link: https://lore.kernel.org/bpf/20220926184957.208194-5-kuifeng@fb.com
-
Kui-Feng Lee authored
Show information about iterators in the respective files under /proc/<pid>/fdinfo/. For example, for a task_file iterator with 1723 as the value of the tid parameter, its fdinfo would look like the following lines. pos: 0 flags: 02000000 mnt_id: 14 ino: 38 link_type: iter link_id: 51 prog_tag: a590ac96db22b825 prog_id: 299 target_name: task_file task_type: TID tid: 1723 This patch adds the last three fields. task_type is the type of the task parameter. TID means the iterator visits only the thread specified by tid; the value of tid in the above example is 1723. For the PID task_type, the iterator visits only the threads of a process and will show the pid value of the process instead of a tid. Signed-off-by: Kui-Feng Lee <kuifeng@fb.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Acked-by: Martin KaFai Lau <martin.lau@kernel.org> Link: https://lore.kernel.org/bpf/20220926184957.208194-4-kuifeng@fb.com
-
Kui-Feng Lee authored
Add new fields to bpf_link_info that users can query through bpf_obj_get_info_by_fd(). Signed-off-by: Kui-Feng Lee <kuifeng@fb.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Acked-by: Martin KaFai Lau <martin.lau@kernel.org> Link: https://lore.kernel.org/bpf/20220926184957.208194-3-kuifeng@fb.com
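A hedged userspace sketch of querying the new fields; the field names (info.iter.task.tid/pid) follow this series' uapi change but should be treated as assumptions here.

#include <string.h>
#include <stdio.h>
#include <bpf/bpf.h>

/* Sketch: dump the task-iterator parameters of an iter link fd. */
static void print_task_iter_params(int link_fd)
{
        struct bpf_link_info info;
        __u32 len = sizeof(info);

        memset(&info, 0, sizeof(info));
        if (bpf_obj_get_info_by_fd(link_fd, &info, &len))
                return;

        if (info.type == BPF_LINK_TYPE_ITER)
                printf("iter link %u: tid %u pid %u\n",
                       info.id, info.iter.task.tid, info.iter.task.pid);
}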
-
Kui-Feng Lee authored
Allow creating an iterator that loops through the resources of one thread/process. Previously, people could only create iterators to loop through all resources of files, vma, and tasks in the system, even if they were interested in only the resources of a specific task or process. By passing the additional parameters, people can now create an iterator to go through all resources or only the resources of a task. Signed-off-by: Kui-Feng Lee <kuifeng@fb.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Acked-by: Martin KaFai Lau <martin.lau@kernel.org> Link: https://lore.kernel.org/bpf/20220926184957.208194-2-kuifeng@fb.com
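A hedged libbpf sketch of using the new parameter to attach a task/task_file/task_vma iterator to a single thread; the bpf_iter_link_info.task field names follow this series and are an assumption here.

#include <string.h>
#include <bpf/libbpf.h>

/* Sketch: attach an iterator program restricted to one thread (tid). */
static struct bpf_link *attach_iter_for_tid(struct bpf_program *prog, __u32 tid)
{
        union bpf_iter_link_info linfo;
        DECLARE_LIBBPF_OPTS(bpf_iter_attach_opts, opts);

        memset(&linfo, 0, sizeof(linfo));
        linfo.task.tid = tid;   /* 0 keeps the old "all tasks" behavior */
        opts.link_info = &linfo;
        opts.link_info_len = sizeof(linfo);

        return bpf_program__attach_iter(prog, &opts);
}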
-
- 27 Sep, 2022 13 commits
-
-
Andrii Nakryiko authored
Drop the requirement for system-wide kernel UAPI headers to provide the full struct btf_enum64 definition. This is an unexpected requirement that slipped into libbpf 1.0 and put unnecessary pressure ([0]) on users to have a bleeding-edge kernel UAPI header from unreleased Linux 6.0. To achieve this, we forward declare struct btf_enum64. But that's not enough, as there is the btf_enum64_value() helper that expects to know the layout of struct btf_enum64. So we get a bit creative with reinterpreting the memory layout as an array of __u32 and accessing the lo32/hi32 fields as array elements. An alternative way would be to have a local pointer variable for an anonymous struct with exactly the same layout as struct btf_enum64, but that gets us into C++ compiler errors complaining about invalid type casts. So play it safe, if ugly. [0] Closes: https://github.com/libbpf/libbpf/issues/562 Fixes: d90ec262 ("libbpf: Add enum64 support for btf_dump") Reported-by: Toke Høiland-Jørgensen <toke@toke.dk> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Toke Høiland-Jørgensen <toke@toke.dk> Link: https://lore.kernel.org/bpf/20220927042940.147185-1-andrii@kernel.org
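The described trick boils down to roughly the following (a simplified sketch of the helper, not the verbatim libbpf code):

#include <linux/types.h>

struct btf_enum64;      /* forward declaration is now enough */

/* struct btf_enum64 is laid out as three __u32s: name_off, val_lo32,
 * val_hi32. Read the value via a __u32 array so the full definition is
 * not required from the system UAPI headers (and C++ stays happy).
 */
static inline __u64 btf_enum64_value(const struct btf_enum64 *e)
{
        const __u32 *e64 = (const __u32 *)(const void *)e;

        return ((__u64)e64[2] << 32) | e64[1];
}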
-
Yauheni Kaliuta authored
Since the tests are run in a function, $@ there actually contains the function's arguments, not the script's. Pass "$@" to the function as well. Fixes: 272d1f4c ("selftests: bpf: test_kmod.sh: Pass parameters to the module") Signed-off-by: Yauheni Kaliuta <ykaliuta@redhat.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20220926092320.564631-1-ykaliuta@redhat.com
-
Jon Doron authored
When running rootless with special capabilities like FOWNER / DAC_OVERRIDE / DAC_READ_SEARCH, the "access" API will not properly check whether there really is access to a file or not. From the access man page: " The check is done using the calling process's real UID and GID, rather than the effective IDs as is done when actually attempting an operation (e.g., open(2)) on the file. Similarly, for the root user, the check uses the set of permitted capabilities rather than the set of effective capabilities; ***and for non-root users, the check uses an empty set of capabilities.*** " That means that for a non-root user the access API will not do the proper validation of whether the process really has permission to a file or not. To resolve this, this patch replaces all the access API calls with faccessat with the AT_EACCESS flag. Signed-off-by: Jon Doron <jond@wiz.io> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220925070431.1313680-1-arilou@gmail.com
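For illustration, the switch looks roughly like this (standard POSIX calls, not the exact libbpf hunk):

#include <fcntl.h>
#include <unistd.h>

/* Before: access(path, R_OK) uses the real UID/GID and, for non-root
 * users, an empty capability set, so a capable rootless process may be
 * wrongly denied.
 * After: faccessat(..., AT_EACCESS) checks with the effective IDs and
 * capabilities, matching what a later open(2) would actually do.
 */
static int file_is_readable(const char *path)
{
        return faccessat(AT_FDCWD, path, R_OK, AT_EACCESS) == 0;
}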
-
Alexei Starovoitov authored
Song Liu says: ==================== Changes v1 => v2: 1. Update arch_prepare_bpf_dispatcher to use a RO image and a RW buffer. (Alexei) Note: I haven't found an existing test to cover this part, so this part was tested manually (comparing that the generated dispatcher is the same). Jeff Layton reported a CPA W^X warning on linux-next [1]. It turns out to be a W^X issue with the bpf trampoline and the bpf dispatcher. Fix these by: 1. Use bpf_prog_pack for bpf_dispatcher; 2. Set memory permissions properly with bpf trampoline. [1] https://lore.kernel.org/lkml/c84cc27c1a5031a003039748c3c099732a718aec.camel@kernel.org/ ==================== Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Song Liu authored
Mark the trampoline as RO+X after arch_prepare_bpf_trampoline, so that the trampoline strictly follows the W^X rule. This will turn off warnings like CPA refuse W^X violation: 8000000000000163 -> 0000000000000163 range: ... Also remove bpf_jit_alloc_exec_page(), since it is not used any more. Signed-off-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20220926184739.3512547-3-song@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
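The general pattern is roughly as follows (a hedged sketch of the idea; the exact kernel call site and helper name are assumptions):

#include <linux/set_memory.h>

/* Sketch: once the arch code has finished writing the trampoline image,
 * flip the page to read-only + executable so it is never W and X at once.
 */
static void bpf_tramp_image_protect_sketch(void *image)
{
        set_memory_ro((unsigned long)image, 1);
        set_memory_x((unsigned long)image, 1);
}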
-
Song Liu authored
Allocate bpf_dispatcher with bpf_prog_pack_alloc so that bpf_dispatcher can share pages with bpf programs. arch_prepare_bpf_dispatcher() is updated to provide a RW buffer as a working area for the arch code to write to. This also fixes CPA W^X warnings like: CPA refuse W^X violation: 8000000000000163 -> 0000000000000163 range: ... Signed-off-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20220926184739.3512547-2-song@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Alexei Starovoitov authored
Jiri Olsa says: ==================== Martynas reported bpf_get_func_ip returning +4 address when CONFIG_X86_KERNEL_IBT option is enabled and I found there are some failing bpf tests when this option is enabled. The CONFIG_X86_KERNEL_IBT option adds endbr instruction at the function entry, so the idea is to 'fix' entry ip for kprobe_multi and trampoline probes, because they are placed on the function entry. v5 changes: - updated uapi/linux/bpf.h headers with comment for bpf_get_func_ip returning 0 [Andrii] - added acks v4 changes: - used get_kernel_nofault to read previous instruction [Peter] - used movabs instruction in trampoline comment [Peter] - renamed fentry_ip argument in kprobe_multi_link_handler [Peter] v3 changes: - using 'unused' bpf function to get IBT config option into selftest skeleton - rebased to current bpf-next/master - added ack/review from Masami v2 changes: - change kprobes get_func_ip to return zero for kprobes attached within the function body [Andrii] - detect IBT config and properly test kprobe with offset [Andrii] v1 changes: - read previous instruction in kprobe_multi link handler and adjust entry_ip for CONFIG_X86_KERNEL_IBT option - split first patch into 2 separate changes - update changelogs ==================== Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jiri Olsa authored
With CONFIG_X86_KERNEL_IBT enabled, the test for a kprobe with an offset won't work because of the extra endbr instruction. As suggested by Andrii, add CONFIG_X86_KERNEL_IBT detection and use an appropriate offset value based on that. Also remove the test7 program, because it does the same as test6. Suggested-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Jiri Olsa <jolsa@kernel.org> Link: https://lore.kernel.org/r/20220926153340.1621984-7-jolsa@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jiri Olsa authored
Change the return value of the kprobe version of bpf_get_func_ip to zero if the attach address is not on the function's entry point. For kprobes attached in the middle of a function we can't easily get to the function address, especially now with the CONFIG_X86_KERNEL_IBT support. If users care about the current IP for kprobes attached within the function body, they can get it with PT_REGS_IP(ctx). Suggested-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Martynas Pumputis <m@lambda.lt> Signed-off-by: Jiri Olsa <jolsa@kernel.org> Link: https://lore.kernel.org/r/20220926153340.1621984-6-jolsa@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
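From the BPF program side, the new contract can be used roughly like this (a hypothetical kprobe program shown only to illustrate the fallback; the attach point and names are assumptions):

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

SEC("kprobe/tcp_v4_connect")    /* hypothetical attach point */
int BPF_KPROBE(handle_kprobe)
{
        __u64 ip = bpf_get_func_ip(ctx);

        /* 0 means the kprobe is attached inside the function body;
         * fall back to the current instruction pointer.
         */
        if (!ip)
                ip = PT_REGS_IP(ctx);

        bpf_printk("ip: %llx", ip);
        return 0;
}

char LICENSE[] SEC("license") = "GPL";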
-
Jiri Olsa authored
Martynas reported bpf_get_func_ip returning a +4 address when the CONFIG_X86_KERNEL_IBT option is enabled. When CONFIG_X86_KERNEL_IBT is enabled we'll have an endbr instruction at the function entry, which screws up the return value of the bpf_get_func_ip() helper, which should return the function address. There's a short-term workaround for kprobe_multi bpf programs made by Alexei [1], but we need this fixup also for bpf_get_attach_cookie, which returns the cookie based on the entry_ip value. Moving the fixup into the fprobe handler, so both bpf_get_func_ip and bpf_get_attach_cookie get the expected function address when the CONFIG_X86_KERNEL_IBT option is enabled. Also renaming the kprobe_multi_link_handler entry_ip argument to fentry_ip so it's clearer this is an ftrace __fentry__ ip. [1] commit 7f0059b5 ("selftests/bpf: Fix kprobe_multi test.") Cc: Peter Zijlstra <peterz@infradead.org> Reported-by: Martynas Pumputis <m@lambda.lt> Acked-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Jiri Olsa <jolsa@kernel.org> Link: https://lore.kernel.org/r/20220926153340.1621984-5-jolsa@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
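The described fixup amounts to something like the following x86 sketch (get_kernel_nofault use is mentioned in the series' cover letter; treat the function as an illustration, not the exact kernel hunk):

/* Sketch: if the instruction right before the ftrace __fentry__ ip is an
 * endbr, shift the reported ip back so it points at the real function
 * entry and matches the cookie/address bookkeeping.
 */
static u64 get_entry_ip_sketch(u64 fentry_ip)
{
        u32 instr;

        if (get_kernel_nofault(instr, (u32 *)(fentry_ip - ENDBR_INSN_SIZE)))
                return fentry_ip;       /* could not read, leave unchanged */
        if (is_endbr(instr))
                fentry_ip -= ENDBR_INSN_SIZE;
        return fentry_ip;
}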
-
Jiri Olsa authored
Use the function address given at generation time as the trampoline ip argument. This way we directly get the function address that we need, so we don't need to: - read the ip from the stack - subtract X86_PATCH_SIZE - subtract ENDBR_INSN_SIZE if CONFIG_X86_KERNEL_IBT is enabled, which is not even implemented yet ;-) Signed-off-by: Jiri Olsa <jolsa@kernel.org> Link: https://lore.kernel.org/r/20220926153340.1621984-4-jolsa@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jiri Olsa authored
Keep the resolved 'addr' in kallsyms_callback instead of taking the ftrace_location value, because we depend on the symbol address in the cookie-related code. With the CONFIG_X86_KERNEL_IBT option the ftrace_location value differs from the symbol address, which screws up the symbol-address cookie matching. There are 2 users of this function: - bpf_kprobe_multi_link_attach, for which this fix is - get_ftrace_locations, which is used by register_fprobe_syms; this function needs to get symbols resolved to addresses, but does not need 'ftrace location addresses' at this point. There's another ftrace location translation later in the path, done by the ftrace_set_filter_ips call: register_fprobe_syms addrs = get_ftrace_locations register_fprobe_ips(addrs) ... ftrace_set_filter_ips ... __ftrace_match_addr ip = ftrace_location(ip); ... Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Signed-off-by: Jiri Olsa <jolsa@kernel.org> Link: https://lore.kernel.org/r/20220926153340.1621984-3-jolsa@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jiri Olsa authored
Add the KPROBE_FLAG_ON_FUNC_ENTRY kprobe flag to indicate that the attach address is on the function entry. This is used in the following changes to the get_func_ip helper so it returns the correct function address. Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Signed-off-by: Jiri Olsa <jolsa@kernel.org> Link: https://lore.kernel.org/r/20220926153340.1621984-2-jolsa@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-