- 05 Jan, 2023 3 commits
-
-
Siddharth Vadapalli authored
CPSW9G in J721e supports additional modes like QSGMII. Add new compatible for J721e in am65-cpsw driver. Signed-off-by: Siddharth Vadapalli <s-vadapalli@ti.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Siddharth Vadapalli authored
Update bindings for the TI K3 J721e SoC, which contains the 9-port (8 external ports) CPSW9G module, and add a compatible for it. Changes made:
- Add new compatible ti,j721e-cpswxg-nuss for CPSW9G.
- Extend pattern properties for the new compatible.
- Change the maximum number of CPSW ports to 8 for the new compatible.
Signed-off-by: Siddharth Vadapalli <s-vadapalli@ti.com> Reviewed-by: Rob Herring <robh@kernel.org> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next
Jakub Kicinski authored
Daniel Borkmann says:
====================
bpf-next 2023-01-04
We've added 45 non-merge commits during the last 21 day(s) which contain a total of 50 files changed, 1454 insertions(+), 375 deletions(-).
The main changes are:
1) Fixes, improvements and refactoring of parts of BPF verifier's state equivalence checks, from Andrii Nakryiko.
2) Fix a few corner cases in libbpf's BTF-to-C converter in particular around padding handling and enums, also from Andrii Nakryiko.
3) Add BPF_F_NO_TUNNEL_KEY extension to bpf_skb_set_tunnel_key to better support decap on GRE tunnel devices not operating in collect metadata, from Christian Ehrig.
4) Improve x86 JIT's codegen for PROBE_MEM runtime error checks, from Dave Marchevsky.
5) Remove the need for trace_printk_lock for bpf_trace_printk and bpf_trace_vprintk helpers, from Jiri Olsa.
6) Add proper documentation for BPF_MAP_TYPE_SOCK{MAP,HASH} maps, from Maryam Tahhan.
7) Improvements in libbpf's btf_parse_elf error handling, from Changbin Du.
8) Bigger batch of improvements to BPF tracing code samples, from Daniel T. Lee.
9) Add LoongArch support to libbpf's bpf_tracing helper header, from Hengqi Chen.
10) Fix a libbpf compiler warning in perf_event_open_probe on arm32, from Khem Raj.
11) Optimize bpf_local_storage_elem by removing 56 bytes of padding, from Martin KaFai Lau.
12) Use pkg-config to locate libelf for resolve_btfids build, from Shen Jiamin.
13) Various libbpf improvements around API documentation and errno handling, from Xin Liu.
* tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (45 commits)
  libbpf: Return -ENODATA for missing btf section
  libbpf: Add LoongArch support to bpf_tracing.h
  libbpf: Restore errno after pr_warn.
  libbpf: Added the description of some API functions
  libbpf: Fix invalid return address register in s390
  samples/bpf: Use BPF_KSYSCALL macro in syscall tracing programs
  samples/bpf: Fix tracex2 by using BPF_KSYSCALL macro
  samples/bpf: Change _kern suffix to .bpf with syscall tracing program
  samples/bpf: Use vmlinux.h instead of implicit headers in syscall tracing program
  samples/bpf: Use kyscall instead of kprobe in syscall tracing program
  bpf: rename list_head -> graph_root in field info types
  libbpf: fix errno is overwritten after being closed.
  bpf: fix regs_exact() logic in regsafe() to remap IDs correctly
  bpf: perform byte-by-byte comparison only when necessary in regsafe()
  bpf: reject non-exact register type matches in regsafe()
  bpf: generalize MAYBE_NULL vs non-MAYBE_NULL rule
  bpf: reorganize struct bpf_reg_state fields
  bpf: teach refsafe() to take into account ID remapping
  bpf: Remove unused field initialization in bpf's ctl_table
  selftests/bpf: Add jit probe_mem corner case tests to s390x denylist
  ...
====================
Link: https://lore.kernel.org/r/20230105000926.31350-1-daniel@iogearbox.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
- 04 Jan, 2023 1 commit
-
-
git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue
David S. Miller authored
Tony Nguyen says:
====================
Intel Wired LAN Driver Updates 2023-01-03 (igc)
Muhammad Husaini Zulkifli says:
This patch series improves the Time-Sensitive Networking (TSN) Qbv scheduling capabilities of the I226 SKU. An overview of each patch is given below:
Patch 1: To enable basetime scheduling in the future, remove the existing restriction for I226 stepping while maintaining the restriction for I225.
Patch 2: Remove the restriction that requires a controller reset when setting the basetime register for new I226 steps, and enable the second GCL configuration.
Patch 3: Remove the adapter power-cycle reset when disabling the TSN config.
---
Patches remaining from initial PR: https://lore.kernel.org/netdev/20221205212414.3197525-1-anthony.l.nguyen@intel.com/ after sending net patches: https://lore.kernel.org/netdev/20221215230758.3595578-1-anthony.l.nguyen@intel.com/
Note: patch 3 is an additional patch from the initial PR.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 03 Jan, 2023 5 commits
-
-
Changbin Du authored
As discussed before, returning -ENODATA (No data available) is more meaningful than -ENOENT (No such file or directory). Suggested-by: Leo Yan <leo.yan@linaro.org> Signed-off-by: Changbin Du <changbin.du@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20221231151436.6541-1-changbin.du@gmail.com
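For illustration, a standalone toy sketch of the errno distinction (not libbpf's actual parsing code): -ENODATA tells the user the file was found but carries no BTF, while -ENOENT would wrongly suggest the file itself is missing.

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    /* Toy stand-in for the ELF scan: pretend ".BTF" was not found. */
    static int find_section(const char *name)
    {
            return strcmp(name, ".BTF") == 0 ? -1 : 0;
    }

    static int parse_btf(const char *path)
    {
            if (find_section(".BTF") < 0) {
                    fprintf(stderr, "failed to find '.BTF' ELF section in %s\n", path);
                    return -ENODATA; /* was -ENOENT before this change */
            }
            return 0;
    }

    int main(void)
    {
            int err = parse_btf("vmlinux");

            if (err)
                    fprintf(stderr, "failed to load BTF: %s\n", strerror(-err));
            return 0;
    }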
-
Muhammad Husaini Zulkifli authored
I225 has a limitation that programming the BaseTime register requires a power cycle of the controller. This limitation has already been lifted in I226. This patch removes the restriction so that when the user configures or removes any TSN mode, the adapter does not go through a power-cycle reset.
How to test: Schedule any gate control list configuration or delete it. Example:
1) BASE_TIME=$(date +%s%N)
   tc qdisc replace dev $interface_name parent root handle 100 taprio \
      num_tc 4 \
      map 3 1 0 2 3 3 3 3 3 3 3 3 3 3 3 3 \
      queues 1@0 1@1 1@2 1@3 \
      base-time $BASE_TIME \
      sched-entry S 0F 1000000 \
      flags 0x2
2) tc qdisc del dev $interface_name root
Signed-off-by: Muhammad Husaini Zulkifli <muhammad.husaini.zulkifli@intel.com> Tested-by: Naama Meir <naamax.meir@linux.intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
-
Tan Tee Min authored
Make the reset task execute only for i225 and on Qbv disabling, to allow i226 to configure a 2nd GCL without resetting the adapter. In i226, Tx won't hang if a GCL is already running, so in this case we don't need to set the FutScdDis bit. Signed-off-by: Tan Tee Min <tee.min.tan@linux.intel.com> Signed-off-by: Muhammad Husaini Zulkifli <muhammad.husaini.zulkifli@intel.com> Tested-by: Naama Meir <naamax.meir@linux.intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
-
Muhammad Husaini Zulkifli authored
Remove the Qbv BaseTime restriction for I226 so that the BaseTime can be scheduled to a future time. A new register bit of Tx Qav Control (Bit-7: FutScdDis) was introduced to allow I226 to schedule a future time as the Qbv BaseTime without hitting the Tx hang timeout issue. Besides, according to datasheet section 7.5.2.9.3.3, the FutScdDis bit has to be configured before the cycle time and base time. Indeed, the FutScdDis bit is only active on re-configuration, thus we have to set BASET_L to zero and only then set it to the desired value. Please also note that the Qbv configuration flow is moved around based on the Qbv programming guideline documented in the latest datasheet. Co-developed-by: Tan Tee Min <tee.min.tan@linux.intel.com> Signed-off-by: Tan Tee Min <tee.min.tan@linux.intel.com> Signed-off-by: Muhammad Husaini Zulkifli <muhammad.husaini.zulkifli@intel.com> Tested-by: Naama Meir <naamax.meir@linux.intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
-
Hengqi Chen authored
Add PT_REGS macros for LoongArch ([0]).
[0]: https://loongson.github.io/LoongArch-Documentation/LoongArch-ELF-ABI-EN.html
Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Huacai Chen <chenhuacai@loongson.cn> Link: https://lore.kernel.org/bpf/20221231100757.3177034-1-hengqi.chen@gmail.com
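For illustration, a hedged sketch of what the new macros enable; the probed function is arbitrary and the program assumes a generated vmlinux.h:

    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_tracing.h>

    SEC("kprobe/do_unlinkat")
    int trace_unlink(struct pt_regs *ctx)
    {
            /* With this patch, PT_REGS_PARM1() also resolves on LoongArch,
             * where argument registers $a0-$a7 live in regs[4..11]. */
            long dfd = PT_REGS_PARM1(ctx);

            bpf_printk("do_unlinkat dfd=%ld", dfd);
            return 0;
    }

    char LICENSE[] SEC("license") = "GPL";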
-
- 30 Dec, 2022 1 commit
-
-
Alexei Starovoitov authored
pr_warn calls into a user-provided callback, which can clobber errno, so `errno = saved_errno` should happen after pr_warn. Fixes: 07453245 ("libbpf: fix errno is overwritten after being closed.") Signed-off-by: Alexei Starovoitov <ast@kernel.org>
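A simplified userspace sketch of the resulting ordering (function name approximate; fprintf stands in for pr_warn and its user-provided callback):

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int ensure_good_fd_sketch(int fd)
    {
            int new_fd = fcntl(fd, F_DUPFD_CLOEXEC, 3);
            int saved_errno = errno;        /* errno as left by fcntl() */

            if (close(fd))
                    fprintf(stderr, "close(%d) failed\n", fd); /* may clobber errno */
            errno = saved_errno;            /* restore only after the warning */
            return new_fd;
    }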
-
- 29 Dec, 2022 9 commits
-
-
Xin Liu authored
Currently, many API functions are not described in the documentation. Add API descriptions for the following four functions:
- libbpf_set_print;
- bpf_object__open;
- bpf_object__load;
- bpf_object__close.
Signed-off-by: Xin Liu <liuxin350@huawei.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20221224112058.12038-1-liuxin350@huawei.com
-
Andrii Nakryiko authored
"Daniel T. Lee" says: ==================== Syscall tracing using kprobe is quite unstable. Since it uses the exact name of the kernel function, the program might broke due to the rename of a function. The problem can also be caused by a changes in the arguments of the function to which the kprobe connects. This commit enhances syscall tracing program with the following instruments. In this patchset, ksyscall is used instead of kprobe. By using ksyscall, libbpf will detect the appropriate kernel function name. (e.g. sys_write -> __s390_sys_write). This eliminates the need to worry about which wrapper function to attach in order to parse arguments. Also ksyscall provides more fine method with attaching system call, the coarse SYSCALL helper at trace_common.h can be removed. Next, BPF_SYSCALL is used to reduce the inconvenience of parsing arguments. Since the nature of SYSCALL_WRAPPER function wraps the argument once, additional process of argument extraction is required to properly parse the argument. The BPF_SYSCALL macro will reduces the hassle of parsing arguments from pt_regs. Lastly, vmlinux.h is applied to syscall tracing program. This change allows the bpf program to refer to the internal structure as a single "vmlinux.h" instead of including each header referenced by the bpf program. Additionally, this patchset changes the suffix of _kern to .bpf to make use of the new compile rule (CLANG-BPF) which is more simple and neat. By just changing the _kern suffix to .bpf will inherit the benefit of the new CLANG-BPF compile target. Also, this commit adds dummy gnu/stub.h to the samples/bpf directory. This will fix the compiling problem with 'clang -target bpf'. To fix the build error with the s390x, this patchset also includes the fix of libbpf invalid return address register mapping in s390. --- Changes in V2: - add gnu/stub.h hack to fix compile error with 'clang -target bpf' Changes in V3: - fix libbpf invalid return address register mapping in s390 ==================== Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
-
Daniel T. Lee authored
There is currently an invalid mapping for the s390 return address register. As the manual[1] states, the return address can be found at r14. In bpf_tracing.h, the s390 registers are named gprs (general purpose registers). This commit fixes the problem by correcting the mistyped mapping. [1]: https://uclibc.org/docs/psABI-s390x.pdf#page=14 Fixes: 3cc31d79 ("libbpf: Normalize PT_REGS_xxx() macro definitions") Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20221224071527.2292-7-danieltimlee@gmail.com
-
Daniel T. Lee authored
This commit enhances the syscall tracing programs by using the BPF_KSYSCALL macro to reduce the inconvenience of parsing arguments from pt_regs. By simplifying argument extraction, the bpf programs become clearer and easier to understand. Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20221224071527.2292-6-danieltimlee@gmail.com
-
Daniel T. Lee authored
Currently, there is a problem with tracex2: it doesn't print the histogram properly and the results are misleading (all results report as 0). The problem is caused by a change in the arguments of the function to which the kprobe attaches. This tracex2 bpf program uses a kprobe (attached to __x64_sys_write) to figure out the size of the write system call. In order to achieve this, the third argument 'count' must be intact. The following are prototypes of the sys_write variants (checked with pfunct):

~/git/linux$ pfunct -P fs/read_write.o | grep sys_write
ssize_t ksys_write(unsigned int fd, const char * buf, size_t count);
long int __x64_sys_write(const struct pt_regs * regs);
... cross compile with s390x ...
long int __s390_sys_write(struct pt_regs * regs);

Since the SYSCALL_WRAPPER function by nature wraps the arguments once, an additional argument-extraction step is required to parse them properly:

#define BPF_KSYSCALL(name, args...)
... snip ...
        struct pt_regs *regs = LINUX_HAS_SYSCALL_WRAPPER            \
                ? (struct pt_regs *)PT_REGS_PARM1(ctx)              \
                : ctx;                                              \

In order to fix this problem, the BPF_KSYSCALL macro has been used. This reduces the hassle of parsing arguments from pt_regs. Since the macro uses the CO-RE variant of argument extraction, additional portability comes with it, too. Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20221224071527.2292-5-danieltimlee@gmail.com
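For reference, a minimal ksyscall program in the spirit of the fixed tracex2 (a sketch reduced to a print; the real sample also updates a histogram map):

    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_tracing.h>

    SEC("ksyscall/write")
    int BPF_KSYSCALL(sys_write_entry, unsigned int fd, const char *buf, size_t count)
    {
            /* 'count' arrives already unwrapped from pt_regs, on x86 and s390x alike */
            bpf_printk("write: fd=%u count=%lu", fd, (unsigned long)count);
            return 0;
    }

    char LICENSE[] SEC("license") = "GPL";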
-
Daniel T. Lee authored
Currently the old compile rule (CLANG-bpf) doesn't contain the VMLINUX_H define flag, which is essential for bpf programs that include "vmlinux.h". The old rule also doesn't directly specify the compile target as bpf; instead it uses a bunch of extra options with clang followed by a long chain of commands (e.g. clang | opt | llvm-dis | llc). In the Makefile, there is already a new compile rule which is simpler and neater, and it also has the -D__VMLINUX_H__ option. Just changing the _kern suffix to .bpf inherits the benefit of the new CLANG-BPF compile target. Also, this commit adds a dummy gnu/stub.h to the samples/bpf directory. As commit 1c2dd16a ("selftests/bpf: get rid of -D__x86_64__") noted, compiling with 'clang -target bpf' will raise an error with stubs.h unless a workaround (-D__x86_64__) is used. This commit solves the problem by adding a dummy stub.h so that /usr/include/features.h follows the expected path, the same way selftests/bpf dealt with it. Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20221224071527.2292-4-danieltimlee@gmail.com
-
Daniel T. Lee authored
This commit applies vmlinux.h to the syscall tracing programs. This change allows the bpf programs to refer to internal structures through a single "vmlinux.h" instead of including each header they reference. Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20221224071527.2292-3-danieltimlee@gmail.com
-
Daniel T. Lee authored
Syscall tracing using kprobes is quite unstable. Since it uses the exact name of the kernel function, the program might break due to a function rename. The problem can also be caused by a change in the arguments of the function to which the kprobe attaches. In this commit, ksyscall is used instead of kprobe. By using ksyscall, libbpf will detect the appropriate kernel function name (e.g. sys_write -> __s390_sys_write). This eliminates the need to worry about which wrapper function to attach to in order to parse arguments. In addition, since ksyscall provides a finer-grained method of attaching to system calls, the coarse SYSCALL helper in trace_common.h can be removed. Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20221224071527.2292-2-danieltimlee@gmail.com
-
Dave Marchevsky authored
Many of the structs recently added to track field info for linked-list head are useful as-is for rbtree root. So let's do a mechanical renaming of list_head-related types and fields:
include/linux/bpf.h:
  struct btf_field_list_head -> struct btf_field_graph_root
  list_head -> graph_root in struct btf_field union
kernel/bpf/btf.c:
  list_head -> graph_root in struct btf_field_info
This is a nonfunctional change; functionality to actually use these fields for rbtree will be added in further patches.
Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
Link: https://lore.kernel.org/r/20221217082506.1570898-5-davemarchevsky@fb.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
- 28 Dec, 2022 7 commits
-
-
Xin Liu authored
In the ensure_good_fd function, if the fcntl call succeeds but the close call fails, ensure_good_fd returns a valid fd yet sets errno, which may mislead users. The close failure is not a serious problem, as the correct FD has already been handed over to the upper-layer application. Let's restore errno here. Signed-off-by: Xin Liu <liuxin350@huawei.com>
Link: https://lore.kernel.org/r/20221223133618.10323-1-liuxin350@huawei.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
Comparing IDs exactly between two separate states is not just suboptimal, but also incorrect in some cases. So update the regs_exact() check to do a byte-by-byte memcmp() only up to id/ref_obj_id, and for id and ref_obj_id perform proper check_ids() checks, taking the idmap into account. This change makes more states equivalent, improving insn and state stats across a bunch of selftest BPF programs:

File  Program  Insns (A)  Insns (B)  Insns (DIFF)  States (A)  States (B)  States (DIFF)
-----------------------------------------------------------------------------------------
cgrp_kfunc_success.bpf.linked1.o  test_cgrp_get_release  141  137  -4 (-2.84%)  13  13  +0 (+0.00%)
cgrp_kfunc_success.bpf.linked1.o  test_cgrp_xchg_release  142  139  -3 (-2.11%)  14  13  -1 (-7.14%)
connect6_prog.bpf.linked1.o  connect_v6_prog  139  102  -37 (-26.62%)  9  6  -3 (-33.33%)
ima.bpf.linked1.o  bprm_creds_for_exec  68  61  -7 (-10.29%)  6  5  -1 (-16.67%)
linked_list.bpf.linked1.o  global_list_in_list  569  499  -70 (-12.30%)  60  52  -8 (-13.33%)
linked_list.bpf.linked1.o  global_list_push_pop  167  150  -17 (-10.18%)  18  16  -2 (-11.11%)
linked_list.bpf.linked1.o  global_list_push_pop_multiple  881  815  -66 (-7.49%)  74  63  -11 (-14.86%)
linked_list.bpf.linked1.o  inner_map_list_in_list  579  534  -45 (-7.77%)  61  55  -6 (-9.84%)
linked_list.bpf.linked1.o  inner_map_list_push_pop  190  181  -9 (-4.74%)  19  18  -1 (-5.26%)
linked_list.bpf.linked1.o  inner_map_list_push_pop_multiple  916  850  -66 (-7.21%)  75  64  -11 (-14.67%)
linked_list.bpf.linked1.o  map_list_in_list  588  525  -63 (-10.71%)  62  55  -7 (-11.29%)
linked_list.bpf.linked1.o  map_list_push_pop  183  174  -9 (-4.92%)  18  17  -1 (-5.56%)
linked_list.bpf.linked1.o  map_list_push_pop_multiple  909  843  -66 (-7.26%)  75  64  -11 (-14.67%)
map_kptr.bpf.linked1.o  test_map_kptr  264  256  -8 (-3.03%)  26  26  +0 (+0.00%)
map_kptr.bpf.linked1.o  test_map_kptr_ref  95  91  -4 (-4.21%)  9  8  -1 (-11.11%)
task_kfunc_success.bpf.linked1.o  test_task_xchg_release  139  136  -3 (-2.16%)  14  13  -1 (-7.14%)
test_bpf_nf.bpf.linked1.o  nf_skb_ct_test  815  509  -306 (-37.55%)  57  30  -27 (-47.37%)
test_bpf_nf.bpf.linked1.o  nf_xdp_ct_test  815  509  -306 (-37.55%)  57  30  -27 (-47.37%)
test_cls_redirect.bpf.linked1.o  cls_redirect  78925  78390  -535 (-0.68%)  4782  4704  -78 (-1.63%)
test_cls_redirect_subprogs.bpf.linked1.o  cls_redirect  64901  63897  -1004 (-1.55%)  4612  4470  -142 (-3.08%)
test_sk_lookup.bpf.linked1.o  access_ctx_sk  181  95  -86 (-47.51%)  19  10  -9 (-47.37%)
test_sk_lookup.bpf.linked1.o  ctx_narrow_access  447  437  -10 (-2.24%)  38  37  -1 (-2.63%)
test_sk_lookup_kern.bpf.linked1.o  sk_lookup_success  148  133  -15 (-10.14%)  14  12  -2 (-14.29%)
test_tcp_check_syncookie_kern.bpf.linked1.o  check_syncookie_clsact  304  300  -4 (-1.32%)  23  22  -1 (-4.35%)
test_tcp_check_syncookie_kern.bpf.linked1.o  check_syncookie_xdp  304  300  -4 (-1.32%)  23  22  -1 (-4.35%)
test_verify_pkcs7_sig.bpf.linked1.o  bpf  87  76  -11 (-12.64%)  7  6  -1 (-14.29%)

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20221223054921.958283-7-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
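Condensed, the resulting check has roughly this shape (a sketch following the description above; struct layout per the "reorganize struct bpf_reg_state fields" patch in this series):

    static bool regs_exact(const struct bpf_reg_state *rold,
                           const struct bpf_reg_state *rcur,
                           struct bpf_id_pair *idmap)
    {
            /* byte-for-byte equality only up to the id/ref_obj_id pair... */
            return memcmp(rold, rcur, offsetof(struct bpf_reg_state, id)) == 0 &&
                   /* ...while the IDs go through the old->cur mapping */
                   check_ids(rold->id, rcur->id, idmap) &&
                   check_ids(rold->ref_obj_id, rcur->ref_obj_id, idmap);
    }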
-
Andrii Nakryiko authored
Extract the byte-by-byte comparison of bpf_reg_state in regsafe() into a helper function, which makes it more convenient to use "on demand" only for registers that benefit from such checks, instead of doing it all the time, even if the result of such comparison is ignored. Also, remove the WARN_ON_ONCE(1)+return false dead code. There is no risk of missing some case, as the compiler will warn about a non-void function not returning a value in some branches (under the assumption that the default case is removed in the future). Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20221223054921.958283-6-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
Generalize the (somewhat implicit) rule of regsafe(), which states that if register types in old and current states do not match *exactly*, they can't be safely considered equivalent. Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20221223054921.958283-5-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
Make a generic check to prevent XXX_OR_NULL and XXX register types from being intermixed. While technically in some situations it could be safe, it's impossible to enforce due to the loss of an ID when converting XXX_OR_NULL to its non-NULL variant. So prevent this in general, not just for PTR_TO_MAP_KEY and PTR_TO_MAP_VALUE. The PTR_TO_MAP_KEY_OR_NULL and PTR_TO_MAP_VALUE_OR_NULL checks, which were previously special-cased, are simplified to a generic check that takes range_within() and tnum_in() into account. This is correct, as the BPF verifier doesn't allow arithmetic on XXX_OR_NULL register types, so var_off and ranges should stay zero. But even if this restriction is lifted in the future, it's even more important to enforce that var_off and ranges are compatible; otherwise it would be possible to construct a case where this can be exploited to bypass the verifier's memory range safety checks. Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20221223054921.958283-4-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
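Inside regsafe(), the generalized rule reduces to roughly the following fragment (a sketch assuming the verifier's type_may_be_null(), range_within() and tnum_in() helpers):

    /* old and cur must agree on NULL-ability, whatever the base type */
    if (type_may_be_null(rold->type) != type_may_be_null(rcur->type))
            return false;
    /* for the XXX_OR_NULL case, var_off/ranges must still be compatible */
    if (type_may_be_null(rold->type) &&
        (!range_within(rold, rcur) || !tnum_in(rold->var_off, rcur->var_off)))
            return false;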
-
Andrii Nakryiko authored
Move the id and ref_obj_id fields after the scalar data section (var_off and ranges). This is necessary to simplify the next patch, which will change regsafe()'s logic to be safer, as it makes the contents that have to match exactly (type-specific parts, off, type, and var_off+ranges) a single sequential block of memory, while id and ref_obj_id should always be remapped and thus can't be memcpy()'ed. There are a few places that assume that var_off is after id/ref_obj_id to clear out id/ref_obj_id with a single memset(0). These are changed to explicitly zero out the id/ref_obj_id fields. Other places are adjusted to preserve exact byte-by-byte comparison behavior. No functional changes. Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20221223054921.958283-3-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
The states_equal() check performs ID mapping between old and new states to establish a 1-to-1 correspondence between IDs, even if their absolute numeric values across two equivalent states differ. This is important both for correctness and to avoid unnecessary work when two states are equivalent. With recent changes we partially fixed this logic by maintaining the ID map across all function frames. This patch also makes the refsafe() check take into account (and maintain) the ID map, making states_equal() behavior more optimal and correct. Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20221223054921.958283-2-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
- 22 Dec, 2022 2 commits
-
-
Ricardo Ribalda authored
Maxlen is used by standard proc_handlers such as proc_dointvec(), but in this case we have our own proc_handler via bpf_stats_handler(). Therefore, remove the initialization. Signed-off-by: Ricardo Ribalda <ribalda@chromium.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Stanislav Fomichev <sdf@google.com> Link: https://lore.kernel.org/bpf/20221221-bpf-syscall-v1-0-9550f5f2c3fc@chromium.org
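The resulting entry looks roughly like this (a sketch abridged from kernel/sysctl.c):

    static struct ctl_table bpf_syscall_table[] = {
            {
                    .procname     = "bpf_stats_enabled",
                    .data         = &bpf_stats_enabled_key.key,
                    .mode         = 0644,
                    .proc_handler = bpf_stats_handler,
                    /* no .maxlen: only generic handlers such as
                     * proc_dointvec() consult it */
            },
            { }
    };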
-
Daniel Borkmann authored
BPF CI fails for s390x with the following result:

[...]
All error logs:
libbpf: prog 'test_jit_probe_mem': BPF program load failed: ERROR: strerror_r(524)=22
libbpf: prog 'test_jit_probe_mem': -- BEGIN PROG LOAD LOG --
JIT does not support calling kernel function
processed 0 insns (limit 1000000) max_states_per_insn 0 total_states 0 peak_states 0 mark_read 0
-- END PROG LOAD LOG --
libbpf: prog 'test_jit_probe_mem': failed to load: -524
libbpf: failed to load object 'jit_probe_mem'
libbpf: failed to load BPF skeleton 'jit_probe_mem': -524
test_jit_probe_mem:FAIL:jit_probe_mem__open_and_load unexpected error: -524
#89 jit_probe_mem:FAIL
[...]

Add the test to the deny list. Fixes: 59fe41b5 ("selftests/bpf: Add verifier test exercising jit PROBE_MEM logic") Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
- 21 Dec, 2022 12 commits
-
-
Dave Marchevsky authored
This patch adds a test exercising logic that was fixed / improved in the previous patch in the series, as well as general sanity checking for the jit's PROBE_MEM logic, which should've been unaffected by the previous patch. The added verifier test does the following:
* Acquire a referenced kptr to struct prog_test_ref_kfunc using the existing net/bpf/test_run.c kfunc
  * Helper returns ptr to a specific prog_test_ref_kfunc whose first two fields - both ints - have been prepopulated w/ vals 42 and 108, respectively
* kptr_xchg the acquired ptr into an arraymap
* Do a direct map_value load of the just-added ptr
  * Goal of all this setup is to get an unreferenced kptr pointing to a struct with ints of known value, which is the result of this step
* Using the unreferenced kptr obtained in the previous step, do loads of prog_test_ref_kfunc.a (offset 0) and .b (offset 4)
* Then incr the kptr by 8 and load prog_test_ref_kfunc.a again (this time at offset -8)
* Add all the loaded ints together and return
Before the PROBE_MEM fixes in the previous patch, the loads at offset 0 and 4 would succeed, while the load at offset -8 would incorrectly fail the runtime check emitted by the JIT and 0 out the dst reg as a result. This is confirmed by a retval of 150 for this test before the previous patch - since the second .a read is 0'd out - and a retval of 192 with the fixed logic. The test exercises the two optimizations to the fixed logic added in the last patch as well:
* The first load, with insn "r8 = *(u32 *)(r9 + 0)", exercises the "insn->off is 0, no need to add / sub from src_reg" optimization
* The third load, with insn "r9 = *(u32 *)(r9 - 8)", exercises the "src_reg == dst_reg, no need to restore src_reg after load" optimization
Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20221216214319.3408356-2-davemarchevsky@fb.com
-
Dave Marchevsky authored
This patch rewrites the runtime PROBE_MEM check insns emitted by the BPF JIT in order to ensure load safety. The changes in the patch fix two issues with the previous logic and more generally improve the size of emitted code. Paragraphs between this one and "FIX 1" below explain the purpose of the runtime check and examine the current implementation.
When a load is marked PROBE_MEM - e.g. due to PTR_UNTRUSTED access - the address being loaded from is not necessarily valid. The BPF jit sets up exception handlers for each such load which catch page faults and 0 out the destination register. Arbitrary register-relative loads can escape this exception handling mechanism. Specifically, a load like dst_reg = *(src_reg + off) will not trigger BPF exception handling if (src_reg + off) is outside of kernel address space, resulting in an uncaught page fault. A concrete example of such behavior is a program like:

struct result {
        char space[40];
        long a;
};

/* if err, returns ERR_PTR(-EINVAL) */
struct result *ptr = get_ptr_maybe_err();
long x = ptr->a;

If get_ptr_maybe_err returns ERR_PTR(-EINVAL) and the result isn't checked for err, 'result' will be (u64)-EINVAL, a number close to U64_MAX. The ptr->a load will be > U64_MAX and will wrap over to a small positive u64, which will be in userspace and thus not covered by the BPF exception handling mechanism.
In order to prevent such loads from occurring, the BPF jit emits some instructions which do runtime checking of (src_reg + off) and skip the actual load if it's out of range. As an example, here are the instructions emitted for a %rdi = *(%rdi + 0x10) PROBE_MEM load:

72:   movabs $0x800000000010,%r11      --|
7c:   cmp    %r11,%rdi                   |- 72 - 7f: Check 1
7f:   jb     0x000000000000008d        --|
81:   mov    %rdi,%r11                 -----|
84:   add    $0x0000000000000010,%r11      |- 81 - 8b: Check 2
8b:   jnc    0x0000000000000091        -----|
8d:   xor    %edi,%edi                 ---- 0 out dest
8f:   jmp    0x0000000000000095
91:   mov    0x10(%rdi),%rdi           ---- Actual load
95:

The JIT considers kernel address space to start at MAX_TASK_SIZE + PAGE_SIZE. Determining whether a load will be outside of kernel address space should be a simple check:

(src_reg + off) >= MAX_TASK_SIZE + PAGE_SIZE

But because there is only one spare register when the checking logic is emitted, this logic is split into two checks:

Check 1: src_reg >= (MAX_TASK_SIZE + PAGE_SIZE - off)
Check 2: src_reg + off doesn't wrap over U64_MAX and result in small pos u64

Emitted insns implementing Checks 1 and 2 are annotated in the above example. Check 1 can be done with a single spare register since the source reg by definition is the left-hand side of the inequality. Since adding 'off' to both sides of Check 1's inequality results in the original inequality we want, it's equivalent to testing that inequality. Except in the case where src_reg + off wraps past U64_MAX, which is why Check 2 needs to actually add src_reg + off if Check 1 passes - again using the single spare reg.
FIX 1: The Check 1 inequality listed above is not what the current code is doing. The current code is a bit more pessimistic, instead checking:

src_reg >= (MAX_TASK_SIZE + PAGE_SIZE + abs(off))

The 0x800000000010 in the above example comes from this current check. If Check 1 were corrected to use the proper right-hand side, the value would be 0x7ffffffffff0. This patch changes the checking logic more broadly (FIX 2 below will elaborate), fixing this issue as a side-effect of the rewrite. Regardless, it's important to understand why Check 1 should've been doing MAX_TASK_SIZE + PAGE_SIZE - off before proceeding.
FIX 2: The current code relies on a 'jnc' to determine whether the src_reg + off addition wrapped over. For negative offsets this logic is incorrect. Consider the Check 2 insns emitted when off = -0x10:

81:   mov    %rdi,%r11
84:   add    0xfffffffffffffff0,%r11
8b:   jnc    0x0000000000000091

The 2's complement representation of -0x10 is a large positive u64. Any value of src_reg that passes Check 1 will result in the carry flag being set after the (src_reg + off) addition. So a load with any negative offset will always fail Check 2 at runtime and never do the actual load. This patch fixes the negative offset issue by rewriting both checks so as not to rely on the carry flag. The rewrite takes advantage of the fact that, while we only have one scratch reg to hold arbitrary values, we know the offset at JIT time. Thus we can use src_reg as a temporary scratch reg to hold src_reg + offset, since we can return it to its original value by later subtracting offset. As a result we can directly check the original inequality we care about:

(src_reg + off) >= MAX_TASK_SIZE + PAGE_SIZE

For a load like %rdi = *(%rsi + -0x10), this results in emitted code:

43:   movabs $0x800000000000,%r11
4d:   add    $0xfffffffffffffff0,%rsi   --- src_reg += off
54:   cmp    %r11,%rsi                  --- Check original inequality
57:   jae    0x000000000000005d
59:   xor    %edi,%edi
5b:   jmp    0x0000000000000061
5d:   mov    0x0(%rsi),%rdi             --- Actual Load
61:   sub    $0xfffffffffffffff0,%rsi   --- src_reg -= off

Note that the actual load is always done with offset 0, since previous insns have already done src_reg += off. Regardless of whether the new check succeeds or fails, insn 61 is always executed, returning src_reg to its original value. Because the goal of these checks is to ensure that the loaded-from address will be protected by the BPF exception handler, the new check can safely ignore any wrapover from insn 4d. If such a wrapped-over address passes insn 54 + 57's cmp-and-jmp it will have such protection, so the load can proceed.
IMPROVEMENTS: The above improved logic is 8 insns vs the original logic's 9, and has 1 fewer jmp. The number of checking insns can be further improved in common scenarios: If src_reg == dst_reg, the actual load insn will clobber src_reg, so there's no original src_reg state for the sub insn immediately following the load to restore, and it can be omitted. In fact, it must be omitted, since it would incorrectly subtract from the result of the load if it weren't. So for src_reg == dst_reg, the JIT emits these insns:

3c:   movabs $0x800000000000,%r11
46:   add    $0xfffffffffffffff0,%rdi
4d:   cmp    %r11,%rdi
50:   jae    0x0000000000000056
52:   xor    %edi,%edi
54:   jmp    0x000000000000005a
56:   mov    0x0(%rdi),%rdi
5a:

The only difference from the larger example is the omitted sub, which would've been insn 5a in this example. If offset == 0, we can similarly omit the sub as in the previous case, since there's nothing added to subtract. For the same reason we can omit the addition as well, resulting in the JIT emitting these insns:

46:   movabs $0x800000000000,%r11
4d:   cmp    %r11,%rdi
50:   jae    0x0000000000000056
52:   xor    %edi,%edi
54:   jmp    0x000000000000005a
56:   mov    0x0(%rdi),%rdi
5a:

Although the above example also has src_reg == dst_reg, the same offset == 0 optimization is valid to apply if src_reg != dst_reg.
To summarize the improvements in emitted insn count for the check-and-load:

BEFORE:               8 check insns, 3 jmps
AFTER (general case): 7 check insns, 2 jmps (12.5% fewer insn, 33% jmp)
AFTER (src == dst):   6 check insns, 2 jmps (25% fewer insn)
AFTER (offset == 0):  5 check insns, 2 jmps (37.5% fewer insn)

(Above counts don't include the 1 load insn, just checking around it)
Based on the BPF bytecode + JITted x86 insns I saw while experimenting with these improvements, I expect the src_reg == dst_reg case to occur most often, followed by offset == 0, then the general case. Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20221216214319.3408356-1-davemarchevsky@fb.com
-
git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
Jakub Kicinski authored
No conflicts. Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Andrii Nakryiko authored
Bump current version for new development cycle to v1.2. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Stanislav Fomichev <sdf@google.com>
Link: https://lore.kernel.org/r/20221221180049.853365-1-andrii@kernel.org
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
-
git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
Linus Torvalds authored
Pull networking fixes from Jakub Kicinski:
"Including fixes from bpf, netfilter and can.
Current release - regressions:
- bpf: synchronize dispatcher update with bpf_dispatcher_xdp_func
- rxrpc:
  - fix security setting propagation
  - fix null-deref in rxrpc_unuse_local()
  - fix switched parameters in peer tracing
Current release - new code bugs:
- rxrpc:
  - fix I/O thread startup getting skipped
  - fix locking issues in rxrpc_put_peer_locked()
  - fix I/O thread stop
  - fix uninitialised variable in rxperf server
  - fix the return value of rxrpc_new_incoming_call()
- microchip: vcap: fix initialization of value and mask
- nfp: fix unaligned io read of capabilities word
Previous releases - regressions:
- stop in-kernel socket users from corrupting socket's task_frag
- stream: purge sk_error_queue in sk_stream_kill_queues()
- openvswitch: fix flow lookup to use unmasked key
- dsa: mv88e6xxx: avoid reg_lock deadlock in mv88e6xxx_setup_port()
- devlink:
  - hold region lock when flushing snapshots
  - protect devlink dump by the instance lock
Previous releases - always broken:
- bpf:
  - prevent leak of lsm program after failed attach
  - resolve fext program type when checking map compatibility
- skbuff: account for tail adjustment during pull operations
- macsec: fix net device access prior to holding a lock
- bonding: switch back when high prio link up
- netfilter: flowtable: really fix NAT IPv6 offload
- enetc: avoid buffer leaks on xdp_do_redirect() failure
- unix: fix race in SOCK_SEQPACKET's unix_dgram_sendmsg()
- dsa: microchip: remove IRQF_TRIGGER_FALLING in request_threaded_irq"
* tag 'net-6.2-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (64 commits)
  net: fec: check the return value of build_skb()
  net: simplify sk_page_frag
  Treewide: Stop corrupting socket's task_frag
  net: Introduce sk_use_task_frag in struct sock.
  mctp: Remove device type check at unregister
  net: dsa: microchip: remove IRQF_TRIGGER_FALLING in request_threaded_irq
  can: kvaser_usb: hydra: help gcc-13 to figure out cmd_len
  can: flexcan: avoid unbalanced pm_runtime_enable warning
  Documentation: devlink: add missing toc entry for etas_es58x devlink doc
  mctp: serial: Fix starting value for frame check sequence
  nfp: fix unaligned io read of capabilities word
  net: stream: purge sk_error_queue in sk_stream_kill_queues()
  myri10ge: Fix an error handling path in myri10ge_probe()
  net: microchip: vcap: Fix initialization of value and mask
  rxrpc: Fix the return value of rxrpc_new_incoming_call()
  rxrpc: rxperf: Fix uninitialised variable
  rxrpc: Fix I/O thread stop
  rxrpc: Fix switched parameters in peer tracing
  rxrpc: Fix locking issues in rxrpc_put_peer_locked()
  rxrpc: Fix I/O thread startup getting skipped
  ...
-
git://git.kernel.org/pub/scm/linux/kernel/git/vfs/idmapping
Linus Torvalds authored
Pull vfsuid cleanup from Christian Brauner: "This moves the ima specific vfs{g,u}id_t comparison helpers out of the header and into the one file in ima where they are used. We shouldn't incentivize people to use them by placing them into the header. As discussed and suggested by Linus in [1] let's just define them locally in the one file in ima where they are used" Link: https://lore.kernel.org/lkml/CAHk-=wj4BpEwUd=OkTv1F9uykvSrsBNZJVHMp+p_+e2kiV71_A@mail.gmail.com [1] * tag 'fs.vfsuid.ima.v6.2-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/idmapping: mnt_idmapping: move ima-only helpers to ima
-
git://git.kernel.org/pub/scm/linux/kernel/git/crng/random
Linus Torvalds authored
Pull more random number generator updates from Jason Donenfeld:
"Two remaining changes that are now possible after you merged a few other trees:
- #include <asm/archrandom.h> can be removed from random.h now, making the direct use of the arch_random_* API more of a private implementation detail between the archs and random.c, rather than something for general consumers.
- Two additional uses of prandom_u32_max() snuck in during the initial phase of pulls, so these have been converted to get_random_u32_below(), and now the deprecated prandom_u32_max() alias -- which was just a wrapper around get_random_u32_below() -- can be removed.
In addition, there is one fix:
- Check efi_rt_services_supported() before attempting to use an EFI runtime function. This affected EFI systems that disable runtime services yet still boot via EFI (e.g. the reporter's Lenovo Thinkpad X13s laptop), as well as systems where EFI runtime services have been forcibly disabled, such as on PREEMPT_RT. On those machines, a very early and hard to diagnose crash would happen, preventing boot"
* tag 'random-6.2-rc1-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/crng/random:
  prandom: remove prandom_u32_max()
  efi: random: fix NULL-deref when refreshing seed
  random: do not include <asm/archrandom.h> from random.h
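The two prandom conversions follow the usual mechanical pattern (illustrative sketch):

    #include <linux/random.h>

    /* Both helpers return a uniform value in [0, n); only the name changes. */
    static u32 pick_index(u32 n)
    {
            return get_random_u32_below(n);   /* was: prandom_u32_max(n) */
    }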
-
Linus Torvalds authored
Merge tag 'rcu-urgent.2022.12.17a' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu Pull RCU fix from Paul McKenney: "This fixes a lockdep false positive in synchronize_rcu() that can otherwise occur during early boot. The fix simply avoids invoking lockdep if the scheduler has not yet been initialized, that is, during that portion of boot when interrupts are disabled" * tag 'rcu-urgent.2022.12.17a' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu: rcu: Don't assert interrupts enabled too early in boot
-
Martin KaFai Lau authored
'struct bpf_local_storage_elem' has an unused 56 byte padding at the end due to struct's cache-line alignment requirement. This padding space is overlapped by storage value contents, so if we use sizeof() to calculate the total size, we overinflate it by 56 bytes. Use offsetof() instead to calculate more exact memory use. Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Yonghong Song <yhs@fb.com> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20221221013036.3427431-1-martin.lau@linux.dev
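A standalone toy model of the arithmetic (not the real struct; any 64-byte-aligned trailing member reproduces the 56-byte tail padding):

    #include <stdalign.h>
    #include <stddef.h>
    #include <stdio.h>

    struct elem {
            long bookkeeping[4];    /* stands in for the list/rcu fields */
            alignas(64) long sdata; /* value contents start here */
    };

    int main(void)
    {
            size_t need = offsetof(struct elem, sdata) + sizeof(long);

            /* prints: sizeof=128 need=72 wasted=56 */
            printf("sizeof=%zu need=%zu wasted=%zu\n",
                   sizeof(struct elem), need, sizeof(struct elem) - need);
            return 0;
    }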
-
Andrii Nakryiko authored
Changbin Du says:
====================
Display an error message for a missing ".BTF" section and clean up the empty vmlinux.h file.
v3:
- fix typo and make error message consistent. (Andrii Nakryiko)
- split out perf change.
v2:
- remove vmlinux specific error info.
- use builtin target .DELETE_ON_ERROR: to delete empty vmlinux.h
====================
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
-
Changbin Du authored
Remove the empty vmlinux.h if bpftool failed to dump btf info. The empty vmlinux.h can hide the real error when reading the output of make. This is done by adding the .DELETE_ON_ERROR special target in the related makefiles. Signed-off-by: Changbin Du <changbin.du@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Quentin Monnet <quentin@isovalent.com> Link: https://lore.kernel.org/bpf/20221217223509.88254-3-changbin.du@gmail.com
-
Changbin Du authored
Show the real problem instead of just saying "No such file or directory". It will now print the info below:

libbpf: failed to find '.BTF' ELF section in /home/changbin/work/linux/vmlinux
Error: failed to load BTF from /home/changbin/work/linux/vmlinux: No such file or directory

Signed-off-by: Changbin Du <changbin.du@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20221217223509.88254-2-changbin.du@gmail.com
-