- 12 Apr, 2023 7 commits
-
-
Christian Ehrig authored
Add two new kfuncs that allow a BPF tc-hook, installed on an ipip device in collect-metadata mode, to control FOU encap parameters on a per-packet level. The set of kfuncs is registered with the fou module. The bpf_skb_set_fou_encap kfunc is meant to be used in tandem with, and after, a successful call to the bpf_skb_set_tunnel_key bpf-helper. UDP source and destination ports can be controlled by passing a struct bpf_fou_encap. A source port of zero will auto-assign a source port. enum bpf_fou_encap_type is used to specify whether the egress path should FOU or GUE encap the packet. On the ingress path bpf_skb_get_fou_encap can be used to read UDP source and destination ports from the receiver's point of view and allows for packet multiplexing across different destination ports within a single BPF program and ipip device. Signed-off-by: Christian Ehrig <cehrig@cloudflare.com> Link: https://lore.kernel.org/r/e17c94a646b63e78ce0dbf3f04b2c33dc948a32d.1680874078.git.cehrig@cloudflare.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
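A minimal egress sketch of how the new kfuncs could be used together; the struct bpf_fou_encap layout and the FOU_BPF_ENCAP_* names are assumptions based on the description above, and the addresses and ports are illustrative:
```
/* Hedged sketch: tc-BPF egress program on an ipip device in collect-metadata
 * mode. struct bpf_fou_encap and FOU_BPF_ENCAP_* are assumptions based on the
 * commit text, not copied from the kernel headers. */
#include <linux/bpf.h>
#include <linux/pkt_cls.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

struct bpf_fou_encap {
	__be16 sport;
	__be16 dport;
};

enum bpf_fou_encap_type {
	FOU_BPF_ENCAP_FOU,
	FOU_BPF_ENCAP_GUE,
};

/* kfunc prototypes as assumed from the commit description */
extern int bpf_skb_set_fou_encap(struct __sk_buff *skb,
				 struct bpf_fou_encap *encap, int type) __ksym;
extern int bpf_skb_get_fou_encap(struct __sk_buff *skb,
				 struct bpf_fou_encap *encap) __ksym;

SEC("tc")
int fou_encap_egress(struct __sk_buff *skb)
{
	struct bpf_tunnel_key key = {
		.remote_ipv4 = 0x0a000002,	/* 10.0.0.2, example peer */
		.tunnel_ttl  = 64,
	};
	struct bpf_fou_encap encap = {
		.sport = 0,			/* 0 auto-assigns the source port */
		.dport = bpf_htons(5555),	/* example FOU destination port */
	};

	/* The tunnel key must be set first; the FOU parameters come second. */
	if (bpf_skb_set_tunnel_key(skb, &key, sizeof(key), BPF_F_ZERO_CSUM_TX))
		return TC_ACT_SHOT;
	if (bpf_skb_set_fou_encap(skb, &encap, FOU_BPF_ENCAP_FOU))
		return TC_ACT_SHOT;
	return TC_ACT_OK;
}

char _license[] SEC("license") = "GPL";
```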
-
Christian Ehrig authored
Today ipip devices in collect-metadata mode don't allow for sending FOU or GUE encapsulated packets. This patch lifts the restriction by adding a struct ip_tunnel_encap to the tunnel metadata. On the egress path, the members of this struct can be set by the bpf_skb_set_fou_encap kfunc via a BPF tc-hook. Instead of dropping packets wishing to use additional UDP encapsulation, ip_md_tunnel_xmit now evaluates the contents of this struct and adds the corresponding FOU or GUE header. Furthermore, it makes sure that the additional header bytes are taken into account for PMTU discovery. On the ingress path, an ipip device in collect-metadata mode will fill this struct and a BPF tc-hook can obtain the information via a call to the bpf_skb_get_fou_encap kfunc. The minor change to ip_tunnel_encap(), which now takes a pointer to struct ip_tunnel_encap instead of struct ip_tunnel, allows us to control FOU encap type and parameters on a per-packet level. Signed-off-by: Christian Ehrig <cehrig@cloudflare.com> Link: https://lore.kernel.org/r/cfea47de655d0f870248abf725932f851b53960a.1680874078.git.cehrig@cloudflare.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Xin Liu authored
When the reporter uses the sched_switch tracepoint, the attached eBPF program does only one thing: it deletes a fixed element in a sockhash ([0]). Elements in a sockhash are rarely actively deleted by users or eBPF programs, so little attention has been paid to their deletion path. Compared with hash maps, sockhash only provides spin_lock_bh protection. This causes it to appear to have self-locking behavior in the interrupt context. [0]: https://lore.kernel.org/all/CABcoxUayum5oOqFMMqAeWuS8+EzojquSOSyDA3J_2omY=2EeAg@mail.gmail.com/ Reported-by: Hsin-Wei Hung <hsinweih@uci.edu> Fixes: 604326b4 ("bpf, sockmap: convert to generic sk_msg interface") Signed-off-by: Xin Liu <liuxin350@huawei.com> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/r/20230406122622.109978-1-liuxin350@huawei.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
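A hedged sketch of the triggering scenario as described above; map sizing, the key choice, and all names are illustrative, not taken from the report:
```
/* Hedged sketch: a sched_switch tracepoint program that does nothing but
 * delete a fixed sockhash element, the pattern from the report above. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_SOCKHASH);
	__uint(max_entries, 16);
	__type(key, __u32);
	__type(value, __u64);
} sock_hash SEC(".maps");

SEC("tracepoint/sched/sched_switch")
int on_sched_switch(void *ctx)
{
	__u32 key = 0;

	/* Deleting a sockhash element from tracing context exercises the
	 * spin_lock_bh-only locking path discussed above. */
	bpf_map_delete_elem(&sock_hash, &key);
	return 0;
}

char _license[] SEC("license") = "GPL";
```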
-
David Vernet authored
The bpf_cgroup_kptr_get() kfunc has been removed, and bpf_cgroup_acquire() / bpf_cgroup_release() now have the same semantics as bpf_task_acquire() / bpf_task_release(). This patch updates the BPF documentation to reflect this. Signed-off-by: David Vernet <void@manifault.com> Link: https://lore.kernel.org/r/20230411041633.179404-3-void@manifault.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
David Vernet authored
Now that bpf_cgroup_acquire() is KF_RCU | KF_RET_NULL, bpf_cgroup_kptr_get() is redundant. Let's remove it, and update selftests to instead use bpf_cgroup_acquire() where appropriate. The next patch will update the BPF documentation to not mention bpf_cgroup_kptr_get(). Signed-off-by: David Vernet <void@manifault.com> Link: https://lore.kernel.org/r/20230411041633.179404-2-void@manifault.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
David Vernet authored
struct cgroup is already an RCU-safe type in the verifier. We can therefore update bpf_cgroup_acquire() to be KF_RCU | KF_RET_NULL, and subsequently remove bpf_cgroup_kptr_get(). This patch does the first of these by updating bpf_cgroup_acquire() to be KF_RCU | KF_RET_NULL, and also updates selftests accordingly. Signed-off-by: David Vernet <void@manifault.com> Link: https://lore.kernel.org/r/20230411041633.179404-1-void@manifault.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
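A hedged sketch of the resulting calling convention, mirroring the cgrp_kfunc selftest style from memory; the tracepoint and program names are illustrative:
```
/* Hedged sketch: acquire a cgroup reference from a trusted tracepoint
 * argument and handle the NULL return now allowed by KF_RET_NULL. */
#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

struct cgroup *bpf_cgroup_acquire(struct cgroup *cgrp) __ksym;
void bpf_cgroup_release(struct cgroup *cgrp) __ksym;

SEC("tp_btf/cgroup_mkdir")
int BPF_PROG(on_cgroup_mkdir, struct cgroup *cgrp, const char *path)
{
	struct cgroup *acquired;

	/* KF_RET_NULL: the acquire may fail, e.g. if the cgroup is dying. */
	acquired = bpf_cgroup_acquire(cgrp);
	if (!acquired)
		return 0;

	bpf_cgroup_release(acquired);
	return 0;
}

char _license[] SEC("license") = "GPL";
```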
-
Alexei Starovoitov authored
During OOM, bpf_local_storage_alloc() may fail to allocate 'storage', and a call to bpf_local_storage_free() with a NULL pointer will cause a crash like: [ 271718.917646] BUG: kernel NULL pointer dereference, address: 00000000000000a0 [ 271719.019620] RIP: 0010:call_rcu+0x2d/0x240 [ 271719.216274] bpf_local_storage_alloc+0x19e/0x1e0 [ 271719.250121] bpf_local_storage_update+0x33b/0x740 Fixes: 7e30a847 ("bpf: Add bpf_local_storage_free()") Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20230412171252.15635-1-alexei.starovoitov@gmail.com
-
- 11 Apr, 2023 23 commits
-
-
Feng Zhou authored
Adding a verifier test for accessing a u32 pointer argument in tracing programs. The test program loads the 1st argument of the bpf_fentry_test9 function, which is a u32 pointer, and checks that the verifier allows that. Co-developed-by: Chengming Zhou <zhouchengming@bytedance.com> Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com> Signed-off-by: Feng Zhou <zhoufeng.zf@bytedance.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Jiri Olsa <jolsa@kernel.org> Link: https://lore.kernel.org/bpf/20230410085908.98493-3-zhoufeng.zf@bytedance.com
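A hedged sketch of what the fix permits; bpf_fentry_test9 is named in the commit text, while the program body and variable names are illustrative:
```
/* Hedged sketch: an fentry program on bpf_fentry_test9() whose u32 * argument
 * can now be accessed from the tracing context. */
#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

__u32 last_val;

SEC("fentry/bpf_fentry_test9")
int BPF_PROG(read_u32_ptr_arg, __u32 *arg)
{
	__u32 val = 0;

	/* Loading the u32 * argument from the context is what used to be
	 * rejected with an "... type INT is not a struct" error. */
	bpf_probe_read_kernel(&val, sizeof(val), arg);
	last_val = val;
	return 0;
}

char _license[] SEC("license") = "GPL";
```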
-
Feng Zhou authored
When tracing a kernel function whose argument type is u32*, btf_ctx_access() would report the error: arg2 type INT is not a struct. The commit bb6728d7 ("bpf: Allow access to int pointer arguments in tracing programs") added support for int pointers, but did not skip modifiers before checking the type. This patch fixes it. Fixes: bb6728d7 ("bpf: Allow access to int pointer arguments in tracing programs") Co-developed-by: Chengming Zhou <zhouchengming@bytedance.com> Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com> Signed-off-by: Feng Zhou <zhoufeng.zf@bytedance.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Jiri Olsa <jolsa@kernel.org> Link: https://lore.kernel.org/bpf/20230410085908.98493-2-zhoufeng.zf@bytedance.com
-
Daniel Borkmann authored
Andrii Nakryiko says: ==================== This patch set changes BPF verifier log behavior to behave as a rotating log, by default. If the user-supplied log buffer is big enough to contain the entire verifier log output, there is no effective difference. But where previously a user supplied too small a log buffer and would get an -ENOSPC error and the beginning part of the verifier log, now there will be no error and the user will get the ending part of the verifier log filling up the user-supplied log buffer. This is, in the absolute majority of cases, exactly what's useful, relevant, and what users want and need, as the ending of the verifier log contains the details of the verifier failure and the relevant state that got us to that failure. So this rotating mode is made the default, but for some niche advanced debugging scenarios it's possible to request the old behavior by specifying the additional BPF_LOG_FIXED (8) flag. This patch set adjusts libbpf to allow specifying flags beyond 1 | 2 | 4. We also add --log-size and --log-fixed options to veristat, both to be able to test this functionality manually and to use it in various debugging scenarios. We also add selftests that try many variants of log buffer size to stress-test the correctness of the internal verifier log bookkeeping code. Further, this patch set is merged with the log_size_actual v1 patch set ([0]), which adds the ability to get the required log buffer size to fit the entire verifier log output. This addresses a long-standing limitation, which causes users and BPF loader library writers to guess and pre-size the log buffer, often allocating unnecessary extra memory for this or doing extra program verifications just to size logs better, ultimately wasting resources. This was requested most recently by the Go BPF library maintainers ([1]). See respective patches for details. A bunch of them are drive-by fixes detected while working with the code. Some others further refactor and compartmentalize the verifier log handling code into kernel/bpf/log.c, which should also make it simpler to integrate such a verbose log for other complicated bpf() syscall commands, if necessary. The rest are the actual logic to calculate the maximum log buffer size needed and return it to user-space. A few patches wire this up on the libbpf side, and the rest add selftests to test proper log truncation and log_buf==NULL handling. This turned into a pretty sizable patch set with lots of arithmetic, but hopefully the set of features added to the verifier log in this patch set is both useful for BPF users and self-contained and isolated enough to not cause trouble going forward. v3->v4: - s/log_size_actual/log_true_size/ (Alexei); - log_buf==NULL && log_size==0 don't trigger -ENOSPC (Lorenz); - added WARN_ON_ONCE if we try bpf_vlog_reset() forward (Lorenz); - added selftests for truncation in BPF_LOG_FIXED mode; - fixed edge case in BPF_LOG_FIXED when log_size==1, leaving buf not zero terminated; v2->v3: - typos and comment improvement (Lorenz); - merged with log_size_actual v1 ([0]) patch set (Alexei); - added log_buf==NULL condition allowed (Lorenz); - added BPF_BTF_LOAD logs tests (Lorenz); - more clean up and refactoring of internal verifier log API; v1->v2: - return -ENOSPC even in rotating log mode for preserving backwards compatibility (Lorenz); [0] https://patchwork.kernel.org/project/netdevbpf/list/?series=735213&state=* [1] https://lore.kernel.org/bpf/CAN+4W8iNoEbQzQVbB_o1W0MWBDV4xCJAq7K3f6psVE-kkCfMqg@mail.gmail.com/ ==================== Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Andrii Nakryiko authored
Add verifier log tests for the BPF_BTF_LOAD command, which are very similar, conceptually, to the BPF_PROG_LOAD tests. These are two separate commands dealing with the verbose verifier log, so both should be tested separately. Test that the log_buf==NULL condition *does not* return -ENOSPC. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Lorenz Bauer <lmb@isovalent.com> Link: https://lore.kernel.org/bpf/20230406234205.323208-20-andrii@kernel.org
-
Andrii Nakryiko authored
Add a few extra test conditions to validate that it's ok to pass log_buf==NULL and log_size==0 to the BPF_PROG_LOAD command with the intent to get log_true_size without providing a buffer. Test that the log_buf==NULL condition *does not* return -ENOSPC. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Lorenz Bauer <lmb@isovalent.com> Link: https://lore.kernel.org/bpf/20230406234205.323208-19-andrii@kernel.org
-
Andrii Nakryiko authored
Add additional test cases validating that log_true_size is consistent between fixed and rotating log modes, and that log_true_size can be used *exactly* without causing -ENOSPC, while using a log buffer just 1 byte shorter would cause -ENOSPC. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Lorenz Bauer <lmb@isovalent.com> Link: https://lore.kernel.org/bpf/20230406234205.323208-18-andrii@kernel.org
-
Andrii Nakryiko authored
Similar to what we did for bpf_prog_load() in the previous patch, wire up returning the log_true_size value from the kernel back to the user through the OPTS out field. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20230406234205.323208-17-andrii@kernel.org
-
Andrii Nakryiko authored
Add an output-only log_true_size field to bpf_prog_load_opts to return the bpf_attr->log_true_size value back from the bpf() syscall. Note that we have to drop the const modifier from opts in bpf_prog_load(). This could potentially cause a compilation error for some users. But the usual practice is to define bpf_prog_load_opts as a local variable next to the bpf_prog_load() call and pass a pointer to it, so const vs non-const makes no difference and won't even come up in most (if not all) cases. There are no runtime or ABI backwards/forward compatibility issues at all. If a user provides an old struct bpf_prog_load_opts, libbpf won't set the new field. If an old libbpf is provided a new bpf_prog_load_opts, nothing will happen either, as the old libbpf doesn't yet know about this new field. Adding a new variant of bpf_prog_load() just for this seems like unnecessary overkill. Corroborating evidence is the fact that the entire selftests/bpf code base required no adjustment whatsoever. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20230406234205.323208-16-andrii@kernel.org
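A hedged user-space sketch of how the new field could be used (it assumes a kernel and libbpf with this series; function and program names are illustrative, error handling is trimmed):
```
/* Hedged sketch: probe the required log size with a NULL buffer, then reload
 * with an exactly-sized buffer. */
#include <stdlib.h>
#include <bpf/bpf.h>

int load_with_sized_log(enum bpf_prog_type type, const struct bpf_insn *insns,
			size_t insn_cnt)
{
	LIBBPF_OPTS(bpf_prog_load_opts, opts,
		.log_level = 2,		/* verbose log requested ... */
		.log_buf = NULL,	/* ... but no buffer: just measure */
		.log_size = 0,
	);
	char *buf;
	int fd;

	fd = bpf_prog_load(type, "sized_log_prog", "GPL", insns, insn_cnt, &opts);
	if (fd >= 0)
		return fd;	/* loaded fine, no log needed */

	/* opts.log_true_size now holds the size that fits the whole log. */
	buf = malloc(opts.log_true_size);
	if (!buf)
		return -1;

	opts.log_buf = buf;
	opts.log_size = opts.log_true_size;
	fd = bpf_prog_load(type, "sized_log_prog", "GPL", insns, insn_cnt, &opts);
	/* buf now contains the complete verifier log for the failed load. */
	free(buf);
	return fd;
}
```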
-
Andrii Nakryiko authored
Drop the log_size>0 and log_buf!=NULL condition when log_level>0. This allows users to request the log_true_size of a full log without providing an actual (even if small) log buffer. The verifier log handling code was mostly ready to handle a NULL log->ubuf, so only a few small changes were necessary to prevent a NULL log->ubuf from causing problems. Note that if the user provides a NULL log_buf with log_level>0, we don't consider this a log truncation, and thus won't return -ENOSPC. We also enforce that either (log_buf==NULL && log_size==0) or (log_buf!=NULL && log_size>0). Suggested-by: Lorenz Bauer <lmb@isovalent.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Lorenz Bauer <lmb@isovalent.com> Link: https://lore.kernel.org/bpf/20230406234205.323208-15-andrii@kernel.org
-
Andrii Nakryiko authored
Simplify the internal verifier log API down to bpf_vlog_init() and bpf_vlog_finalize(). The former handles input argument validation in one place and makes it easier to change. The latter subsumes -ENOSPC (truncation) and -EFAULT handling and simplifies both callers' code (bpf_check() and btf_parse()). For btf_parse(), this patch also makes sure that verifier log finalization happens even if there is some error condition during the BTF verification process prior to the normal finalization step. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Lorenz Bauer <lmb@isovalent.com> Link: https://lore.kernel.org/bpf/20230406234205.323208-14-andrii@kernel.org
-
Andrii Nakryiko authored
Add output-only log_true_size and btf_log_true_size fields to the BPF_PROG_LOAD and BPF_BTF_LOAD commands, respectively. They will return the size of the log buffer necessary to fit all the log contents at the specified log_level. This is very useful for BPF loader libraries like libbpf to be able to size the log buffer correctly, but could be used by users directly, if necessary, as well. This patch plumbs all this through the code, taking into account the actual bpf_attr size provided by the user to determine if these new fields are expected by users. And if they are, set them from the kernel on return. We refactor the btf_parse() function to accommodate this, moving attr and uattr handling inside it. The rest is very straightforward code, which is split from the logging accounting changes in the previous patch to make it simpler to review the logic vs the UAPI changes. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Lorenz Bauer <lmb@isovalent.com> Link: https://lore.kernel.org/bpf/20230406234205.323208-13-andrii@kernel.org
-
Andrii Nakryiko authored
Change how we do accounting in BPF_LOG_FIXED mode and adopt log->end_pos as the *logical* log position. This means that we can now go beyond the physical log buffer size and are able to tell what log buffer size would be needed to fit the entire log contents without -ENOSPC. To do this for BPF_LOG_FIXED mode, we need to remove the short-circuiting logic of not vsnprintf()'ing further log content once we have filled up the user-provided buffer, which is done by the bpf_verifier_log_needed() checks. We modify these checks to always keep going if log->level is non-zero (i.e., a log is requested), even if log->ubuf was NULL'ed out due to copying data to user-space, or if the entire log buffer is physically full. We adapt the bpf_verifier_vlog() routine to work correctly with the log->ubuf == NULL condition, performing log formatting into a temporary kernel buffer and doing all the necessary accounting, but just avoiding copying data out if the buffer is full or NULL'ed out. With these changes, it's now possible to do this sort of determination of log contents size in both BPF_LOG_FIXED and the default rolling log mode. We need to keep in mind bpf_vlog_reset(), though, which shrinks the log contents after successful verification of a particular code path. This log reset means that log->end_pos isn't always increasing, so to return back to users what the log buffer size should be to fit all log content without causing -ENOSPC even in the presence of log resetting, we need to keep the maximum over the "lifetime" of logging. We do this accounting in the bpf_vlog_update_len_max() helper. A related and subtle aspect is that with this logical log->end_pos, even in BPF_LOG_FIXED mode we could temporarily "overflow" the buffer, but then reset it back with bpf_vlog_reset() to a position inside the user-supplied log_buf. In such a situation we still want to properly maintain the terminating zero. We will eventually return -ENOSPC even if the final log buffer is small (we detect this through the log->len_max check). This behavior is simpler to reason about and is consistent with the current behavior of the verifier log. Handling of this required a small addition to the bpf_vlog_reset() logic to avoid doing put_user() beyond the physical log buffer dimensions. Another issue to keep in mind is that we limit the log buffer size to a 32-bit value and keep such log length as u32, but theoretically the verifier could produce a huge log stretching beyond 4GB. Instead of keeping (and later returning) a 64-bit log length, we cap it at UINT_MAX. The current UAPI makes it impossible to specify a log buffer size bigger than 4GB anyway, so we don't really lose anything here and keep everything consistently 32-bit in the UAPI. This property will be utilized in the next patch. Doing the same determination of the maximum log buffer for rolling mode is trivial, as log->end_pos and log->start_pos are already logical positions, so there is nothing new there. These changes do incidentally fix one small issue with the previous logging logic. Previously, if the user provided a log buffer of size N, and the actual log output was exactly N-1 bytes + terminating \0, kernel logic couldn't distinguish this condition from the log truncation scenario, which would end up with truncated log contents of N-1 bytes + terminating \0 as well. But now, with log->end_pos being a logical position that can go beyond the actual log buffer size, we can distinguish these two conditions, which we do in this patch.
This plays nicely with returning log_size_actual (implemented in the UAPI in the next patch), as we can now guarantee that if the user takes such log_size_actual and provides a log buffer of that exact size, they will not get -ENOSPC in return. All in all, these changes conceptually unify fixed and rolling log modes much better, and allow a nice feature requested by users: knowing what the size of the buffer should be to avoid -ENOSPC. We'll plumb this through the UAPI and the code in the next patch. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Lorenz Bauer <lmb@isovalent.com> Link: https://lore.kernel.org/bpf/20230406234205.323208-12-andrii@kernel.org
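A hedged, simplified reconstruction of the high-water-mark bookkeeping described above; the field and function names are stand-ins for the real ones in kernel/bpf/log.c and include/linux/bpf_verifier.h:
```
#include <limits.h>

struct vlog_state {
	unsigned long long end_pos;	/* logical end of log contents */
	unsigned int len_max;		/* maximum over the log's lifetime */
};

static void vlog_update_len_max(struct vlog_state *log, unsigned int add_len)
{
	unsigned long long len = log->end_pos + add_len;

	/* Cap at UINT_MAX: UAPI log_size is a u32, so bigger buffers cannot
	 * be requested anyway. */
	if (len > UINT_MAX)
		log->len_max = UINT_MAX;
	else if (len > log->len_max)
		log->len_max = (unsigned int)len;
}
```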
-
Andrii Nakryiko authored
Move log->level == 0 check into bpf_vlog_truncated() instead of doing it explicitly. Also remove unnecessary goto in kernel/bpf/verifier.c. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Lorenz Bauer <lmb@isovalent.com> Link: https://lore.kernel.org/bpf/20230406234205.323208-11-andrii@kernel.org
-
Andrii Nakryiko authored
If verifier log is in BPF_LOG_KERNEL mode, no log->ubuf is expected and it stays NULL throughout entire verification process. Don't erroneously return -EFAULT in such case. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Lorenz Bauer <lmb@isovalent.com> Link: https://lore.kernel.org/bpf/20230406234205.323208-10-andrii@kernel.org
-
Andrii Nakryiko authored
btf_parse() is missing -EFAULT error return if log->ubuf was NULL-ed out due to error while copying data into user-provided buffer. Add it, but handle a special case of BPF_LOG_KERNEL in which log->ubuf is always NULL. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Lorenz Bauer <lmb@isovalent.com> Link: https://lore.kernel.org/bpf/20230406234205.323208-9-andrii@kernel.org
-
Andrii Nakryiko authored
Verifier log position reset is meaningless in BPF_LOG_KERNEL mode, so just exit early in bpf_vlog_reset() if log->level is BPF_LOG_KERNEL. This avoids a meaningless put_user() into a NULL log->ubuf. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Lorenz Bauer <lmb@isovalent.com> Link: https://lore.kernel.org/bpf/20230406234205.323208-8-andrii@kernel.org
-
Andrii Nakryiko authored
Add selftests validating BPF_LOG_FIXED behavior, which used to be the only behavior, and the now-default rotating BPF verifier log, which returns just up to the last N bytes of the full verifier log instead of returning -ENOSPC. To stress-test the correctness of the in-kernel verifier log logic, we force it to truncate the program's verifier log to all lengths from 1 all the way to its full size (about 450 bytes today). This was a useful stress test while developing the feature. For both fixed and rotating log modes we expect -ENOSPC if the log contents don't fit in the user-supplied log buffer. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Lorenz Bauer <lmb@isovalent.com> Link: https://lore.kernel.org/bpf/20230406234205.323208-7-andrii@kernel.org
-
Andrii Nakryiko authored
Add --log-size to be able to customize the log buffer sent to the bpf() syscall for BPF program verification logging. Add --log-fixed to enforce BPF_LOG_FIXED behavior for the BPF verifier log. This is useful in the unlikely event that the beginning of the truncated verifier log is more important than the end of it (rotating verifier log behavior is now the default). Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20230406234205.323208-6-andrii@kernel.org
-
Andrii Nakryiko authored
This basically prevents any forward compatibility, and either way we just return -EINVAL, which would otherwise be returned from the bpf() syscall anyway. Similarly, drop the enforcement of a non-NULL log_buf when log_level > 0. This won't be true anymore soon. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Lorenz Bauer <lmb@isovalent.com> Link: https://lore.kernel.org/bpf/20230406234205.323208-5-andrii@kernel.org
-
Andrii Nakryiko authored
Currently, if the user-supplied log buffer to collect the BPF verifier log turns out to be too small to contain the full log, the bpf() syscall returns -ENOSPC, fails BPF program verification/load, and preserves the first N-1 bytes of the verifier log (where N is the size of the user-supplied buffer). This is problematic in a bunch of common scenarios, especially when working with real-world BPF programs that tend to be pretty complex as far as verification goes and require big log buffers. Typically, it's when debugging tricky cases at log level 2 (verbose). Also, when a BPF program is successfully validated, log level 2 is the only way to actually see the verifier state progression and all the important details. Even with log level 1, it's possible to get -ENOSPC even if the final verifier log fits in the log buffer, if there is a code path that's deep enough to fill up the entire log, even if normally it would be reset later on (there is logic to chop off successfully validated portions of the BPF verifier log). In short, it's not always possible to pre-size the log buffer. Also, what's worse, in practice the end of the log is most often way more important than the beginning, but the verifier stops emitting log as soon as the initial log buffer is filled up. This patch switches BPF verifier log behavior to effectively behave as a rotating log. That is, if the user-supplied log buffer turns out to be too short, the verifier will keep overwriting previously written log, effectively treating the user's log buffer as a ring buffer. -ENOSPC is still going to be returned at the end, to notify the user that the log contents were truncated, but the important last N bytes of the log will be returned, which might be all that the user really needs. This consistent -ENOSPC behavior, regardless of rotating or fixed log mode, helps prevent backwards compatibility breakage. The only user-visible change is which portion of the verifier log the user ends up seeing *if the buffer is too small*. Given the contents of the verifier log itself are not an ABI, there is no breakage due to this behavior change. Specialized tools that rely on specific contents of the verifier log in the -ENOSPC scenario are expected to be easily adapted to accommodate the old and new behaviors. Importantly, though, to preserve a good user experience and not require every user-space application to adapt to this new behavior, before exiting to user-space the verifier will rotate the log (in place) to make it start at the very beginning of the user buffer as a continuous zero-terminated string. The contents will be the last N-1 bytes of the full verifier log, chopped off at the front, of course. Given the beginning of the log is sometimes important as well, we add the BPF_LOG_FIXED (which equals 8) flag to force the old behavior, which allows tools like veristat to request the first part of the verifier log, if necessary. The BPF_LOG_FIXED flag is also a simple and straightforward way to check if the BPF verifier supports rotating behavior. On the implementation side, conceptually, it's all simple. We maintain 64-bit logical start and end positions. If we need to truncate the log, the start position will be adjusted accordingly to lag the end position by N bytes. We then use those logical positions to calculate their matching actual positions in the user buffer and handle wrap-around at the end of the buffer properly. Finally, right before returning from bpf_check(), we rotate the user log buffer contents in-place as necessary, to make the log contents contiguous. See comments in relevant functions for details.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Lorenz Bauer <lmb@isovalent.com> Link: https://lore.kernel.org/bpf/20230406234205.323208-4-andrii@kernel.org
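A hedged sketch of the rotating-log bookkeeping described above; the struct and function names are illustrative stand-ins for the real state kept in the verifier log code:
```
struct rotating_log {
	unsigned long long start_pos;	/* logical start of retained contents */
	unsigned long long end_pos;	/* logical end of log contents */
	unsigned int len_total;		/* physical size of the user buffer */
};

/* Map a logical position onto an offset inside the physical ring buffer. */
static unsigned int log_pos_to_off(const struct rotating_log *log,
				   unsigned long long pos)
{
	return (unsigned int)(pos % log->len_total);
}

/* Before appending add_len bytes, advance start_pos so that at most len_total
 * bytes remain retained; older bytes get overwritten in the ring. */
static void log_make_room(struct rotating_log *log, unsigned int add_len)
{
	unsigned long long new_end = log->end_pos + add_len;

	if (new_end - log->start_pos > log->len_total)
		log->start_pos = new_end - log->len_total;
}
```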
-
Andrii Nakryiko authored
It's not clear why we have 128 as the minimum size, but it makes testing harder and seems unnecessary, as we carefully handle truncation scenarios and use proper snprintf variants. So remove this limitation and just enforce a positive length for the log buffer. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Lorenz Bauer <lmb@isovalent.com> Link: https://lore.kernel.org/bpf/20230406234205.323208-3-andrii@kernel.org
-
Andrii Nakryiko authored
The kernel/bpf/verifier.c file is large and growing larger all the time. So it's good to start splitting off more or less self-contained parts into separate files to keep the source code size (somewhat) under control. This patch is one step in this direction, moving some of the BPF verifier log routines into a separate kernel/bpf/log.c. Right now it's the most low-level and isolated routines to append data to the log, reset the log to a previous position, etc. Eventually we could probably move the verifier state printing logic here as well, but this patch doesn't attempt to do that yet. Subsequent patches will add more logic to verifier log management, so having the basics in a separate file will make sure verifier.c doesn't grow more with new changes. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Lorenz Bauer <lmb@isovalent.com> Link: https://lore.kernel.org/bpf/20230406234205.323208-2-andrii@kernel.org
-
Alejandro Colomar authored
There is an extra whitespace in the SPDX tag, before the license name, in the script for generating man pages for the bpf() syscall and the helpers. It has caused problems in Debian packaging, in the tool that autodetects licenses. Let's clean it up. Fixes: 5cb62b75 ("bpf, docs: Use SPDX license identifier in bpf_doc.py") Signed-off-by: Alejandro Colomar <alx@kernel.org> Signed-off-by: Quentin Monnet <quentin@isovalent.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20230411144747.66734-1-quentin@isovalent.com
-
- 10 Apr, 2023 1 commit
-
-
Manu Bretelle authored
When trying to add a name to the hashmap, an error code of EEXIST is returned and we continue, as names are possibly duplicated in the sys file. If the last name in the file is a duplicate, we will continue to the next iteration of the while loop and exit the loop with err set to EEXIST, then enter the error label with err set, which causes the test to fail when it should not. This change resets err to 0 before continuing to the next iteration; this way, if there is no more data to read from the file we iterate through, err will be 0. Behaviour prior to this change: ``` test_kprobe_multi_bench_attach:FAIL:get_syms unexpected error: -17 (errno 2) All error logs: test_kprobe_multi_bench_attach:FAIL:get_syms unexpected error: -17 (errno 2) Summary: 0/1 PASSED, 0 SKIPPED, 1 FAILED ``` After this change: ``` Summary: 1/2 PASSED, 0 SKIPPED, 0 FAILED ``` Signed-off-by: Manu Bretelle <chantr4@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20230408022919.54601-1-chantr4@gmail.com
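A hedged, simplified reconstruction of the loop pattern being fixed; add_name() stands in for the real hashmap insertion and is assumed to return -EEXIST for duplicates:
```
#include <errno.h>
#include <stdio.h>

/* Hypothetical stand-in for the real insertion helper used by the test. */
extern int add_name(const char *name);

int read_names(FILE *f)
{
	char name[256];
	int err = 0;

	while (fscanf(f, "%255s", name) == 1) {
		err = add_name(name);
		if (err == -EEXIST) {
			/* Duplicates in the sys file are expected; reset err so
			 * a duplicate on the very last line does not make the
			 * whole read look like a failure. */
			err = 0;
			continue;
		}
		if (err)
			break;
	}
	return err;
}
```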
-
- 07 Apr, 2023 4 commits
-
-
Eduard Zingerman authored
The following example forces veristat to loop indefinitely: $ cat two-ok file_name,prog_name,verdict,total_states file-a,a,success,12 file-b,b,success,67 $ cat add-failure file_name,prog_name,verdict,total_states file-a,a,success,12 file-b,b,success,67 file-b,c,failure,32 $ veristat -C two-ok add-failure <does not return> The loop is caused by handle_comparison_mode() not checking if the `base` variable points to `fallback_stats` prior to advancing the joined results using `base`. Signed-off-by: Eduard Zingerman <eddyz87@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20230407154125.896927-1-eddyz87@gmail.com
-
Wei Yongjun authored
After commit d6e6286a ("libbpf: disassociate section handler on explicit bpf_program__set_type() call"), bpf_program__set_type() will forcibly clean up the program's SEC() definition. That commit fixed the test helper but missed bpftool, which left bpftool prog autoattach broken as follows: $ bpftool prog load spi-xfer-r1v1.o /sys/fs/bpf/test autoattach Program spi_xfer_r1v1 does not support autoattach, falling back to pinning This patch fixes bpftool to set the program type only if it differs. Fixes: d6e6286a ("libbpf: disassociate section handler on explicit bpf_program__set_type() call") Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20230407081427.2621590-1-weiyongjun@huaweicloud.com
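A hedged sketch of the fix's idea; the wrapper function name is illustrative, only the libbpf calls are assumed as-is:
```
#include <bpf/libbpf.h>

/* Skip the bpf_program__set_type() call when the type already matches, so
 * libbpf keeps the SEC()-derived section handler needed for autoattach. */
static int maybe_set_prog_type(struct bpf_program *prog,
			       enum bpf_prog_type prog_type)
{
	if (bpf_program__type(prog) == prog_type)
		return 0;
	return bpf_program__set_type(prog, prog_type);
}
```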
-
Song Liu authored
A perf_event with type=PERF_TYPE_RAW and config=0x1b00 turned out to be unreliable in ensuring LBR is active. Thus, test_progs:get_branch_snapshot is not reliable on some systems. Replace it with the PERF_COUNT_HW_CPU_CYCLES event, which gives more consistent results. Signed-off-by: Song Liu <song@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Jiri Olsa <jolsa@kernel.org> Link: https://lore.kernel.org/bpf/20230407190130.2093736-1-song@kernel.org
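A hedged user-space sketch of the replacement event setup: a plain hardware cycles counter with branch-stack sampling instead of the raw 0x1b00 event; the sampling frequency and branch filter flags are illustrative, not taken from the selftest:
```
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

static int open_cycles_lbr_event(int cpu)
{
	struct perf_event_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_HARDWARE;
	attr.config = PERF_COUNT_HW_CPU_CYCLES;	/* generic, portable event */
	attr.sample_type = PERF_SAMPLE_BRANCH_STACK;
	attr.branch_sample_type = PERF_SAMPLE_BRANCH_KERNEL |
				  PERF_SAMPLE_BRANCH_USER |
				  PERF_SAMPLE_BRANCH_ANY;
	attr.freq = 1;
	attr.sample_freq = 1000;		/* illustrative frequency */

	return syscall(__NR_perf_event_open, &attr, -1 /* pid */, cpu,
		       -1 /* group_fd */, 0 /* flags */);
}
```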
-
Barret Rhoden authored
BPF helpers that take an ARG_PTR_TO_UNINIT_MEM must ensure that all of the memory is set, including beyond the end of the string. Signed-off-by: Barret Rhoden <brho@google.com> Link: https://lore.kernel.org/r/20230407001808.1622968-1-brho@google.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
- 06 Apr, 2023 5 commits
-
-
Alexei Starovoitov authored
Yonghong Song says: ==================== LLVM commit [1] introduced a hoistMinMax optimization that rewrites (i < VIRTIO_MAX_SGS) && (i < out_sgs) to upper = MIN(VIRTIO_MAX_SGS, out_sgs) ... i < upper ... and caused a verification failure. Commit [2] worked around the issue by adding some bpf assembly code to prohibit the above optimization. This patch set improves the verifier such that verification can succeed without the above workaround. Without [2], the current verifier will hit the following failures: ... 119: (15) if r1 == 0x0 goto pc+1 The sequence of 8193 jumps is too complex. verification time 525829 usec stack depth 64 processed 156616 insns (limit 1000000) max_states_per_insn 8 total_states 1754 peak_states 1712 mark_read 12 -- END PROG LOAD LOG -- libbpf: prog 'trace_virtqueue_add_sgs': failed to load: -14 libbpf: failed to load object 'loop6.bpf.o' ... The failure is due to the verifier inadequately handling '<const> <cond_op> <non_const>', which will go through both paths and generate the following verification states: ... 89: (07) r2 += 1 ; R2_w=5 90: (79) r8 = *(u64 *)(r10 -48) ; R8_w=scalar() R10=fp0 91: (79) r1 = *(u64 *)(r10 -56) ; R1_w=scalar(umax=5,var_off=(0x0; 0x7)) R10=fp0 92: (ad) if r2 < r1 goto pc+41 ; R0_w=scalar() R1_w=scalar(umin=6,umax=5,var_off=(0x4; 0x3)) R2_w=5 R6_w=scalar(id=385) R7_w=0 R8_w=scalar() R9_w=scalar(umax=21474836475,var_off=(0x0; 0x7ffffffff)) R10=fp0 fp-8=mmmmmmmm fp-16=mmmmmmmm fp-24=mmmm???? fp-32= fp-40_w=4 fp-48=mmmmmmmm fp-56= fp-64=mmmmmmmm ... 89: (07) r2 += 1 ; R2_w=6 90: (79) r8 = *(u64 *)(r10 -48) ; R8_w=scalar() R10=fp0 91: (79) r1 = *(u64 *)(r10 -56) ; R1_w=scalar(umax=5,var_off=(0x0; 0x7)) R10=fp0 92: (ad) if r2 < r1 goto pc+41 ; R0_w=scalar() R1_w=scalar(umin=7,umax=5,var_off=(0x4; 0x3)) R2_w=6 R6=scalar(id=388) R7=0 R8_w=scalar() R9_w=scalar(umax=25769803770,var_off=(0x0; 0x7ffffffff)) R10=fp0 fp-8=mmmmmmmm fp-16=mmmmmmmm fp-24=mmmm???? fp-32= fp-40=5 fp-48=mmmmmmmm fp-56= fp-64=mmmmmmmm ... 89: (07) r2 += 1 ; R2_w=4088 90: (79) r8 = *(u64 *)(r10 -48) ; R8_w=scalar() R10=fp0 91: (79) r1 = *(u64 *)(r10 -56) ; R1_w=scalar(umax=5,var_off=(0x0; 0x7)) R10=fp0 92: (ad) if r2 < r1 goto pc+41 ; R0=scalar() R1=scalar(umin=4089,umax=5,var_off=(0x0; 0x7)) R2=4088 R6=scalar(id=12634) R7=0 R8=scalar() R9=scalar(umax=17557826301960,var_off=(0x0; 0xfffffffffff)) R10=fp0 fp-8=mmmmmmmm fp-16=mmmmmmmm fp-24=mmmm???? fp-32= fp-40=4087 fp-48=mmmmmmmm fp-56= fp-64=mmmmmmmm Patch 3 fixed the above issue by handling '<const> <cond_op> <non_const>' properly. While developing selftests for Patch 3, I found some issues with bound deduction with BPF_EQ/BPF_NE and fixed the issue in Patch 1. After the above issue is fixed, the second issue shows up. ... 67: (07) r1 += -16 ; R1_w=fp-16 ; bpf_probe_read_kernel(&sgp, sizeof(sgp), sgs + i); 68: (b4) w2 = 8 ; R2_w=8 69: (85) call bpf_probe_read_kernel#113 ; R0_w=scalar() fp-16=mmmmmmmm ; return sgp; 70: (79) r6 = *(u64 *)(r10 -16) ; R6=scalar() R10=fp0 ; for (n = 0, sgp = get_sgp(sgs, i); sgp && (n < SG_MAX); 71: (15) if r6 == 0x0 goto pc-49 ; R6=scalar() 72: (b4) w1 = 0 ; R1_w=0 73: (05) goto pc-46 ; for (i = 0; (i < VIRTIO_MAX_SGS) && (i < out_sgs); i++) { 28: (bc) w7 = w1 ; R1_w=0 R7_w=0 ; bpf_probe_read_kernel(&len, sizeof(len), &sgp->length); ...
23: (79) r3 = *(u64 *)(r10 -40) ; R3_w=2 R10=fp0 ; for (i = 0; (i < VIRTIO_MAX_SGS) && (i < out_sgs); i++) { 24: (07) r3 += 1 ; R3_w=3 ; for (i = 0; (i < VIRTIO_MAX_SGS) && (i < out_sgs); i++) { 25: (79) r1 = *(u64 *)(r10 -56) ; R1_w=scalar(umax=5,var_off=(0x0; 0x7)) R10=fp0 26: (ad) if r3 < r1 goto pc+34 61: R0=scalar() R1_w=scalar(umin=4,umax=5,var_off=(0x4; 0x1)) R3_w=3 R6=scalar(id=1658) R7=0 R8=scalar(id=1653) R9=scalar(umax=4294967295,var_off=(0x0; 0xffffffff)) R10=fp0 fp-8=mmmmmmmm fp-16=mmmmmmmm fp-24=mmmm???? fp-32= fp-40=2 fp-56= fp-64=mmmmmmmm ; if (sg_is_chain(&sg)) 61: (7b) *(u64 *)(r10 -40) = r3 ; R3_w=3 R10=fp0 fp-40_w=3 ... 67: (07) r1 += -16 ; R1_w=fp-16 ; bpf_probe_read_kernel(&sgp, sizeof(sgp), sgs + i); 68: (b4) w2 = 8 ; R2_w=8 69: (85) call bpf_probe_read_kernel#113 ; R0_w=scalar() fp-16=mmmmmmmm ; return sgp; 70: (79) r6 = *(u64 *)(r10 -16) ; for (n = 0, sgp = get_sgp(sgs, i); sgp && (n < SG_MAX); infinite loop detected at insn 71 verification time 90800 usec stack depth 64 processed 25017 insns (limit 1000000) max_states_per_insn 20 total_states 491 peak_states 169 mark_read 12 -- END PROG LOAD LOG -- libbpf: prog 'trace_virtqueue_add_sgs': failed to load: -22 Further analysis found that the index variable 'i' is spilled but not marked as precise. This is more tricky, as identifying an induction variable is not easy in the verifier. Although a heuristic is possible, let us leave it for now. [1] https://reviews.llvm.org/D143726 [2] Commit 3c2611ba ("selftests/bpf: Fix trace_virtqueue_add_sgs test issue with LLVM 17.") ==================== Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Yonghong Song authored
Add various tests for the code pattern '<const> <cond_op> <non_const>' to exercise the previous verifier patch. The following veristat comparison shows the change in the number of processed insns, previous patch (A) vs. this patch (B):

File                          Program      Insns (A)  Insns (B)  Insns (DIFF)
test_seg6_loop.bpf.linked3.o  __add_egr_x  12423      12314      -109 (-0.88%)

Only one program is affected, with a minor change. Signed-off-by: Yonghong Song <yhs@fb.com> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20230406164510.1047757-1-yhs@fb.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Yonghong Song authored
Currently, the verifier does not handle '<const> <cond_op> <non_const>' well. For example, ... 10: (79) r1 = *(u64 *)(r10 -16) ; R1_w=scalar() R10=fp0 11: (b7) r2 = 0 ; R2_w=0 12: (2d) if r2 > r1 goto pc+2 13: (b7) r0 = 0 14: (95) exit 15: (65) if r1 s> 0x1 goto pc+3 16: (0f) r0 += r1 ... At insn 12, the verifier decides both the true and false branches are possible, but actually only the false branch is possible. Currently, the verifier already supports patterns '<non_const> <cond_op> <const>'. Add support for patterns '<const> <cond_op> <non_const>' in a similar way. Also fix selftest 'verifier_bounds_mix_sign_unsign/bounds checks mixing signed and unsigned, variant 10' due to this change. Signed-off-by: Yonghong Song <yhs@fb.com> Acked-by: Dave Marchevsky <davemarchevsky@fb.com> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20230406164505.1046801-1-yhs@fb.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
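A hedged sketch of the underlying idea: when the constant is the first operand, flip the comparison operator so the existing '<non_const> <cond_op> <const>' branch analysis can be reused; the helper name is illustrative:
```
#include <stdint.h>
#include <linux/bpf.h>

static uint8_t flip_cond_opcode(uint8_t opcode)
{
	switch (opcode) {
	case BPF_JGT:  return BPF_JLT;
	case BPF_JLT:  return BPF_JGT;
	case BPF_JGE:  return BPF_JLE;
	case BPF_JLE:  return BPF_JGE;
	case BPF_JSGT: return BPF_JSLT;
	case BPF_JSLT: return BPF_JSGT;
	case BPF_JSGE: return BPF_JSLE;
	case BPF_JSLE: return BPF_JSGE;
	default:       return opcode;	/* JEQ/JNE/JSET are symmetric */
	}
}
```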
-
Yonghong Song authored
Add various tests for code pattern '<non-const> NE/EQ <const>' implemented in the previous verifier patch. Without the verifier patch, these new tests will fail. Signed-off-by: Yonghong Song <yhs@fb.com> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20230406164500.1045715-1-yhs@fb.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Yonghong Song authored
Currently, for BPF_JEQ/BPF_JNE insns, the verifier determines whether the branch is taken or not only if both operands are constants. Therefore, for the following code snippet, 0: (85) call bpf_ktime_get_ns#5 ; R0_w=scalar() 1: (a5) if r0 < 0x3 goto pc+2 ; R0_w=scalar(umin=3) 2: (b7) r2 = 2 ; R2_w=2 3: (1d) if r0 == r2 goto pc+2 6 At insn 3, since r0 is not a constant, the verifier assumes both branches can be taken, which may lead to improper verification failure. Add a comparison of the umin/umax values against the constant. If the umin value is greater than the constant, or the umax value is smaller than the constant, for JEQ the branch must be not-taken, and for JNE the branch must be taken. The jmp32 mode JEQ/JNE branch-taken checking is handled similarly. The following lists the veristat result w.r.t. the changed number of processed insns during verification:

File                                         Program               Insns (A)  Insns (B)  Insns (DIFF)
test_cls_redirect.bpf.linked3.o              cls_redirect          64980      73472      +8492 (+13.07%)
test_seg6_loop.bpf.linked3.o                 __add_egr_x           12425      12423      -2 (-0.02%)
test_tcp_hdr_options.bpf.linked3.o           estab                 2634       2558       -76 (-2.89%)
test_parse_tcp_hdr_opt.bpf.linked3.o         xdp_ingress_v6        1421       1420       -1 (-0.07%)
test_parse_tcp_hdr_opt_dynptr.bpf.linked3.o  xdp_ingress_v6        1238       1237       -1 (-0.08%)
test_tc_dtime.bpf.linked3.o                  egress_fwdns_prio100  414        411        -3 (-0.72%)

Mostly a small improvement, but test_cls_redirect.bpf.linked3.o has a 13% regression. I checked the verifier log and found that this is due to pruning. For some JEQ/JNE branches impacted by this patch, one branch is explored and the other has state equivalence and is pruned. Signed-off-by: Yonghong Song <yhs@fb.com> Acked-by: Dave Marchevsky <davemarchevsky@fb.com> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20230406164455.1045294-1-yhs@fb.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
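A hedged sketch of the added range check, in the spirit of the verifier's branch-taken analysis, simplified to the unsigned 64-bit JEQ/JNE case; the function name and signature are illustrative:
```
#include <stdbool.h>
#include <stdint.h>

/* Returns 1 if the branch is always taken, 0 if never taken, -1 if unknown. */
static int jeq_jne_branch_taken(uint64_t umin, uint64_t umax, uint64_t cst,
				bool is_jeq)
{
	/* Constant outside [umin, umax]: the operands can never be equal, so
	 * JEQ is never taken and JNE is always taken. */
	if (cst < umin || cst > umax)
		return is_jeq ? 0 : 1;
	/* Range collapsed to exactly the constant: equality is certain. */
	if (umin == umax && umin == cst)
		return is_jeq ? 1 : 0;
	return -1;	/* both outcomes remain possible */
}
```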
-