- 24 Jan, 2024 14 commits
-
-
Kui-Feng Lee authored
Once new struct_ops can be registered from modules, btf_vmlinux is no longer the only btf that struct_ops_map would face. st_map should remember what btf it should use to get type information. Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com> Link: https://lore.kernel.org/r/20240119225005.668602-6-thinker.li@gmail.com Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
-
Kui-Feng Lee authored
Maintain a registry of registered struct_ops types in the per-btf (module) struct_ops_tab. This registry allows for easy lookup of struct_ops types that are registered by a specific module. It is preparation for supporting kernel module struct_ops in a later patch. Each struct_ops will be registered under its own kernel module btf and will be stored in the newly added btf->struct_ops_tab. The bpf verifier and bpf syscall (e.g. prog and map cmd) can find the struct_ops and its btf type/size/id... information from btf->struct_ops_tab. Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com> Link: https://lore.kernel.org/r/20240119225005.668602-5-thinker.li@gmail.com Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
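A rough sketch of what such a per-btf registry could look like; the struct name and layout here are assumptions based on the description above, not the exact kernel code:

    /* hypothetical shape of the per-btf struct_ops registry */
    struct btf_struct_ops_tab {
            u32 cnt;                           /* number of registered struct_ops types */
            u32 capacity;                      /* allocated descriptor slots */
            struct bpf_struct_ops_desc ops[];  /* one descriptor per struct_ops type */
    };

    /* each btf (vmlinux or module) would then carry a pointer to its own tab,
     * which the verifier and syscall paths consult for type/size/id lookups */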
-
Kui-Feng Lee authored
Move some members of bpf_struct_ops to bpf_struct_ops_desc. type_id is no longer available in bpf_struct_ops. Modules should get it from the btf received by the kmod's init function. Cc: netdev@vger.kernel.org Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com> Link: https://lore.kernel.org/r/20240119225005.668602-4-thinker.li@gmail.com Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
-
Kui-Feng Lee authored
Get ready to remove bpf_struct_ops_init() in the future. By using BTF_ID_LIST, it is possible to gather type information at build time instead of at runtime. Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com> Link: https://lore.kernel.org/r/20240119225005.668602-3-thinker.li@gmail.com Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
-
Kui-Feng Lee authored
Move the majority of the code to bpf_struct_ops_init_one(), which can then be utilized for the initialization of newly registered dynamically allocated struct_ops types in the following patches. Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com> Link: https://lore.kernel.org/r/20240119225005.668602-2-thinker.li@gmail.com Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
-
Alexei Starovoitov authored
Jiri Olsa says: ==================== bpf: Add cookies retrieval for perf/kprobe multi links hi, this patchset adds support to retrieve cookies from existing tracing links that did not yet support it, plus changes to bpftool to display them. It's a leftover we discussed some time ago [1]. thanks, jirka v2 changes: - added review/ack tags - fixed memory leak [Quentin] - align the uapi fields properly [Yafang Shao] [1] https://lore.kernel.org/bpf/CALOAHbAZ6=A9j3VFCLoAC_WhgQKU7injMf06=cM2sU4Hi4Sx+Q@mail.gmail.com/ Reviewed-by: Quentin Monnet <quentin@isovalent.com> --- ==================== Reviewed-by: Quentin Monnet <quentin@isovalent.com> Link: https://lore.kernel.org/r/20240119110505.400573-1-jolsa@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jiri Olsa authored
Displaying cookies for kprobe multi link, in plain mode: # bpftool link ... 1397: kprobe_multi prog 47532 kretprobe.multi func_cnt 3 addr cookie func [module] ffffffff82b370c0 3 bpf_fentry_test1 ffffffff82b39780 1 bpf_fentry_test2 ffffffff82b397a0 2 bpf_fentry_test3 And in json mode: # bpftool link -j | jq ... { "id": 1397, "type": "kprobe_multi", "prog_id": 47532, "retprobe": true, "func_cnt": 3, "missed": 0, "funcs": [ { "addr": 18446744071607382208, "func": "bpf_fentry_test1", "module": null, "cookie": 3 }, { "addr": 18446744071607392128, "func": "bpf_fentry_test2", "module": null, "cookie": 1 }, { "addr": 18446744071607392160, "func": "bpf_fentry_test3", "module": null, "cookie": 2 } ] } A cookie is attached to a specific address, and because we sort addresses before printing, we need to sort cookies the same way, hence adding the struct addr_cookie to keep and sort them together. Also adding the missing dd.sym_count check to show_kprobe_multi_json. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Acked-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20240119110505.400573-9-jolsa@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
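The pairing-and-sorting idea could look roughly like this; a sketch only, not the verbatim bpftool code:

    struct addr_cookie {
            __u64 addr;
            __u64 cookie;
    };

    static int cmp_addr_cookie(const void *A, const void *B)
    {
            const struct addr_cookie *a = A, *b = B;

            if (a->addr == b->addr)
                    return 0;
            return a->addr < b->addr ? -1 : 1;
    }

    /* fill data[i].addr / data[i].cookie from the link info arrays, then: */
    qsort(data, count, sizeof(data[0]), cmp_addr_cookie);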
-
Jiri Olsa authored
Displaying cookie for perf event link probes, in plain mode: # bpftool link 17: perf_event prog 90 kprobe ffffffff82b1c2b0 bpf_fentry_test1 cookie 3735928559 18: perf_event prog 90 kretprobe ffffffff82b1c2b0 bpf_fentry_test1 cookie 3735928559 20: perf_event prog 92 tracepoint sched_switch cookie 3735928559 21: perf_event prog 93 event software:page-faults cookie 3735928559 22: perf_event prog 91 uprobe /proc/self/exe+0xd703c cookie 3735928559 And in json mode: # bpftool link -j | jq { "id": 30, "type": "perf_event", "prog_id": 160, "retprobe": false, "addr": 18446744071607272112, "func": "bpf_fentry_test1", "offset": 0, "missed": 0, "cookie": 3735928559 } { "id": 33, "type": "perf_event", "prog_id": 162, "tracepoint": "sched_switch", "cookie": 3735928559 } { "id": 34, "type": "perf_event", "prog_id": 163, "event_type": "software", "event_config": "page-faults", "cookie": 3735928559 } { "id": 35, "type": "perf_event", "prog_id": 161, "retprobe": false, "file": "/proc/self/exe", "offset": 880700, "cookie": 3735928559 } Reviewed-by: Quentin Monnet <quentin@isovalent.com> Signed-off-by: Jiri Olsa <jolsa@kernel.org> Acked-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20240119110505.400573-8-jolsa@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jiri Olsa authored
Adding a fill_link_info test for perf event and testing that we get its values back through the bpf_link_info interface. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Acked-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20240119110505.400573-7-jolsa@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jiri Olsa authored
Now that we get cookies for perf_event probes, adding cookie tests for kprobe/uprobe/tracepoint. The perf_event test needs to be added from scratch and is coming in a following change. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Acked-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20240119110505.400573-6-jolsa@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jiri Olsa authored
Adding a cookies check for the kprobe_multi fill_link_info test, plus tests for invalid values related to cookies. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Acked-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20240119110505.400573-5-jolsa@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jiri Olsa authored
The error path frees the wrong array; it should free ref_ctr_offsets. Acked-by: Yafang Shao <laoar.shao@gmail.com> Reviewed-by: Quentin Monnet <quentin@isovalent.com> Fixes: a7795698 ("bpftool: Add support to display uprobe_multi links") Signed-off-by: Jiri Olsa <jolsa@kernel.org> Acked-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20240119110505.400573-4-jolsa@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jiri Olsa authored
Storing cookies in kprobe_multi bpf_link_info data. The cookies field is optional and, if provided, it needs to be an array of __u64 with kprobe_multi.count length. Acked-by: Yafang Shao <laoar.shao@gmail.com> Signed-off-by: Jiri Olsa <jolsa@kernel.org> Acked-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20240119110505.400573-3-jolsa@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
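From user space, retrieval could look roughly like this; a hedged sketch where the buffer size, variable names, and error handling are illustrative, built on libbpf's bpf_link_get_info_by_fd() wrapper:

    #define MAX_CNT 16  /* illustrative upper bound */

    __u64 addrs[MAX_CNT], cookies[MAX_CNT];
    struct bpf_link_info info = {};
    __u32 len = sizeof(info);

    info.kprobe_multi.addrs = (__u64)(unsigned long)addrs;
    info.kprobe_multi.cookies = (__u64)(unsigned long)cookies;
    info.kprobe_multi.count = MAX_CNT;

    if (!bpf_link_get_info_by_fd(link_fd, &info, &len)) {
            /* cookies[i] belongs to the function at addrs[i] */
    }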
-
Jiri Olsa authored
At the moment we don't store the cookie for perf_event probes, while we do for the rest of the probes. Adding cookie fields to the struct bpf_link_info perf event probe records: perf_event.uprobe perf_event.kprobe perf_event.tracepoint perf_event.perf_event And the code to store them in the bpf_link_info struct. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Acked-by: Song Liu <song@kernel.org> Acked-by: Yafang Shao <laoar.shao@gmail.com> Link: https://lore.kernel.org/r/20240119110505.400573-2-jolsa@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
- 23 Jan, 2024 26 commits
-
-
Jose E. Marchesi authored
Some of the BPF selftests use the "p" constraint in inline assembly snippets, for input operands for MOV (rN = rM) instructions. This is mainly done via the __imm_ptr macro defined in tools/testing/selftests/bpf/progs/bpf_misc.h: #define __imm_ptr(name) [name]"p"(&name) Example: int consume_first_item_only(void *ctx) { struct bpf_iter_num iter; asm volatile ( /* create iterator */ "r1 = %[iter];" [...] : : __imm_ptr(iter) : CLOBBERS); [...] } The "p" constraint is a tricky one. It is documented in the GCC manual section "Simple Constraints": An operand that is a valid memory address is allowed. This is for ``load address'' and ``push address'' instructions. p in the constraint must be accompanied by address_operand as the predicate in the match_operand. This predicate interprets the mode specified in the match_operand as the mode of the memory reference for which the address would be valid. There are two problems: 1. It is questionable whether that constraint was ever intended to be used in inline assembly templates, because its behavior really depends on compiler internals. A "memory address" is not the same as a "memory operand" or a "memory reference" (constraint "m"), and in fact its usage in the template above results in an error in both x86_64-linux-gnu and bpf-unknown-none: foo.c: In function ‘bar’: foo.c:6:3: error: invalid 'asm': invalid expression as operand 6 | asm volatile ("r1 = %[jorl]" : : [jorl]"p"(&jorl)); | ^~~ I would assume the same happens with aarch64, riscv, and most/all other targets in GCC, that do not accept operands of the form A + B that are not wrapped either in a const or in a memory reference. To avoid that error, the usage of the "p" constraint in internal GCC instruction templates is supposed to be complemented by the 'a' modifier, like in: asm volatile ("r1 = %a[jorl]" : : [jorl]"p"(&jorl)); Internally documented (in GCC's final.cc) as: %aN means expect operand N to be a memory address (not a memory reference!) and print a reference to that address. That works because when the modifier 'a' is found, GCC prints an "operand address", which is not the same as an "operand". But... 2. Even if we used the internal 'a' modifier (we shouldn't) the 'rN = rM' instruction really requires a register argument. In cases involving automatics, like in the examples above, we easily end with: bar: #APP r1 = r10-4 #NO_APP In other cases we could conceivably also end with a 64-bit label that may overflow the 32-bit immediate operand of `rN = imm32' instructions: r1 = foo All of which is clearly wrong. clang happens to do "the right thing" in the current usage of __imm_ptr in the BPF tests, because even with -O2 it seems to "reload" the fp-relative address of the automatic to a register like in: bar: r1 = r10 r1 += -4 #APP r1 = r1 #NO_APP Which is what GCC would generate with -O0. Whether this is by chance or by design, the compiler shouldn't be expected to do that reload driven by the "p" constraint. This patch changes the usage of the "p" constraint in the BPF selftests macros to use the "r" constraint instead. If a register is what is required, we should let the compiler know. Previous discussion in bpf@vger: https://lore.kernel.org/bpf/87h6p5ebpb.fsf@oracle.com/T/#ef0df83d6975c34dff20bf0dd52e078f5b8ca2767 Tested in bpf-next master. No regressions. Signed-off-by: Jose E. Marchesi <jose.marchesi@oracle.com> Cc: Yonghong Song <yonghong.song@linux.dev> Cc: Eduard Zingerman <eddyz87@gmail.com> Acked-by: Yonghong Song <yonghong.song@linux.dev> Link: https://lore.kernel.org/r/20240123181309.19853-1-jose.marchesi@oracle.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
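The change presumably amounts to this one-character constraint swap in the macro (a sketch of the fix, not a quote of the patch):

    /* tools/testing/selftests/bpf/progs/bpf_misc.h */
    /* before: "p" asks for a memory address, which 'rN = rM' cannot take */
    #define __imm_ptr(name) [name]"p"(&name)
    /* after: "r" tells the compiler a register holding &name is required */
    #define __imm_ptr(name) [name]"r"(&name)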
-
Jose E. Marchesi authored
GCC emits a warning: progs/test_tcpbpf_kern.c:60:9: error: ‘op’ is used uninitialized [-Werror=uninitialized] when an uninitialized op is used with a "+r" constraint. The + modifier means a read-write operand, but that operand in the selftest is only written to. This patch changes the selftest to use a "=r" constraint. This pacifies GCC. Tested in bpf-next master. No regressions. Signed-off-by: Jose E. Marchesi <jose.marchesi@oracle.com> Cc: Yonghong Song <yhs@meta.com> Cc: Eduard Zingerman <eddyz87@gmail.com> Cc: david.faust@oracle.com Cc: cupertino.miranda@oracle.com Acked-by: Yonghong Song <yonghong.song@linux.dev> Link: https://lore.kernel.org/r/20240123205624.14746-1-jose.marchesi@oracle.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
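The change boils down to the following kind of constraint swap; the exact instruction in the selftest may differ, this only illustrates the pattern:

    /* before: "+r" makes 'op' an input as well, so GCC warns it is read uninitialized */
    asm volatile ("%0 = 1" : "+r"(op));

    /* after: "=r" declares a write-only operand, which matches the actual use */
    asm volatile ("%0 = 1" : "=r"(op));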
-
Jose E. Marchesi authored
VLAs are supported by neither the BPF port of clang nor GCC. The selftest test_xdp_dynptr.c contains the following code: const size_t tcphdr_sz = sizeof(struct tcphdr); const size_t udphdr_sz = sizeof(struct udphdr); const size_t ethhdr_sz = sizeof(struct ethhdr); const size_t iphdr_sz = sizeof(struct iphdr); const size_t ipv6hdr_sz = sizeof(struct ipv6hdr); [...] static __always_inline int handle_ipv4(struct xdp_md *xdp, struct bpf_dynptr *xdp_ptr) { __u8 eth_buffer[ethhdr_sz + iphdr_sz + ethhdr_sz]; __u8 iph_buffer_tcp[iphdr_sz + tcphdr_sz]; __u8 iph_buffer_udp[iphdr_sz + udphdr_sz]; [...] } The eth_buffer, iph_buffer_tcp and other automatics are fixed size only if the compiler optimizes away the constant global variables. clang does this, but GCC does not, turning these automatics into variable length arrays. This patch removes the global variables and turns these values into preprocessor constants. This makes the selftest build properly with GCC. Tested in bpf-next master. No regressions. Signed-off-by: Jose E. Marchesi <jose.marchesi@oracle.com> Cc: Yonghong Song <yhs@meta.com> Cc: Eduard Zingerman <eddyz87@gmail.com> Cc: david.faust@oracle.com Cc: cupertino.miranda@oracle.com Acked-by: Yonghong Song <yonghong.song@linux.dev> Link: https://lore.kernel.org/r/20240123201729.16173-1-jose.marchesi@oracle.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
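The fix follows this general pattern; the macro names below are illustrative, not necessarily the ones used in the patch:

    /* before: const globals are not folded by GCC, so the arrays become VLAs */
    const size_t iphdr_sz = sizeof(struct iphdr);
    const size_t tcphdr_sz = sizeof(struct tcphdr);
    __u8 iph_buffer_tcp[iphdr_sz + tcphdr_sz];

    /* after: sizeof() in a macro is an integer constant expression for both
     * compilers, so the array bound is fixed at compile time */
    #define IPHDR_SZ  sizeof(struct iphdr)
    #define TCPHDR_SZ sizeof(struct tcphdr)
    __u8 iph_buffer_tcp[IPHDR_SZ + TCPHDR_SZ];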
-
Andrii Nakryiko authored
We've run into issues with using the dup2() API in a production setting, where libbpf is linked into a large production environment and ends up calling unintended custom implementations of dup2(). These custom implementations don't provide the atomic FD replacement guarantees of the dup2() syscall, leading to subtle and hard to debug issues. To prevent this in the future and guarantee that no libc implementation will substitute its own custom non-atomic dup2() implementation, call the dup2() syscall directly with syscall(SYS_dup2). Note that some architectures don't seem to provide dup2 and have dup3 instead. Try to detect and pick the best syscall. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Song Liu <song@kernel.org> Acked-by: Yonghong Song <yonghong.song@linux.dev> Link: https://lore.kernel.org/r/20240119210201.1295511-1-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
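A minimal sketch of the direct-syscall approach; libbpf's actual helper may differ, for example in how it handles the oldfd == newfd corner case that dup3 rejects:

    #include <sys/syscall.h>
    #include <unistd.h>

    static int sys_dup2(int oldfd, int newfd)
    {
    #ifdef SYS_dup2
            /* bypasses any libc/interposed dup2(), keeping atomic FD replacement */
            return syscall(SYS_dup2, oldfd, newfd);
    #else
            /* some architectures (e.g. arm64, riscv64) only expose dup3 */
            return syscall(SYS_dup3, oldfd, newfd, 0);
    #endif
    }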
-
Alexei Starovoitov authored
Hou Tao says: ==================== Enable the inline of kptr_xchg for arm64 From: Hou Tao <houtao1@huawei.com> Hi, The patch set is just a follow-up for "bpf: inline bpf_kptr_xchg()". It enables the inline of bpf_kptr_xchg() and the kptr_xchg_inline test for arm64. Please see individual patches for more details. And comments are always welcome. ==================== Link: https://lore.kernel.org/r/20240119102529.99581-1-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Hou Tao authored
Now that the arm64 bpf jit has enabled bpf_jit_supports_ptr_xchg(), enable the test for arm64 as well. Signed-off-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20240119102529.99581-3-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Hou Tao authored
ARM64 bpf jit satisfies the following two conditions: 1) it supports BPF_XCHG() on pointer-sized words. 2) the implementation of xchg is the same as atomic_xchg() on pointer-sized words. Both functions use arch_xchg() to implement the exchange. So enable the inline of bpf_kptr_xchg() for arm64 bpf jit. Signed-off-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20240119102529.99581-2-houtao@huaweicloud.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
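The enablement is likely as small as advertising the capability from the arm64 JIT; a sketch, not a quote of the patch:

    /* arch/arm64/net/bpf_jit_comp.c (sketch) */
    bool bpf_jit_supports_ptr_xchg(void)
    {
            /* verifier may now inline bpf_kptr_xchg() into a BPF_XCHG insn */
            return true;
    }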
-
Dave Thaler authored
Per discussion on the mailing list at https://mailarchive.ietf.org/arch/msg/bpf/uQiqhURdtxV_ZQOTgjCdm-seh74/ the MOVSX operation is only defined to support register extension. The document didn't previously state this and incorrectly implied that one could use an immediate value. Signed-off-by: Dave Thaler <dthaler1968@gmail.com> Acked-by: David Vernet <void@manifault.com> Acked-by: Yonghong Song <yonghong.song@linux.dev> Link: https://lore.kernel.org/r/20240118232954.27206-1-dthaler1968@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Kuniyuki Iwashima authored
kernel test robot reported the warning below: >> net/core/filter.c:11842:13: warning: declaration of 'struct bpf_tcp_req_attrs' will not be visible outside of this function [-Wvisibility] 11842 | struct bpf_tcp_req_attrs *attrs, int attrs__sz) | ^ 1 warning generated. struct bpf_tcp_req_attrs is defined under CONFIG_SYN_COOKIES but used in kfunc without the config. Let's move struct bpf_tcp_req_attrs definition outside of the CONFIG_SYN_COOKIES guard. Fixes: e472f888 ("bpf: tcp: Support arbitrary SYN Cookie.") Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202401180418.CUVc0hxF-lkp@intel.com/ Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Link: https://lore.kernel.org/r/20240118211751.25790-1-kuniyu@amazon.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Hao Sun authored
Current checking rules are structured to disallow alu on particular ptr types explicitly, so default cases are allowed implicitly. This may lead to newly added ptr types being allowed unexpectedly. So restructure it to allow alu explicitly. The tradeoff is mainly a bit more cases added in the switch. The following table from Eduard summarizes the rules: | Pointer type | Arithmetics allowed | |---------------------+---------------------| | PTR_TO_CTX | yes | | CONST_PTR_TO_MAP | conditionally | | PTR_TO_MAP_VALUE | yes | | PTR_TO_MAP_KEY | yes | | PTR_TO_STACK | yes | | PTR_TO_PACKET_META | yes | | PTR_TO_PACKET | yes | | PTR_TO_PACKET_END | no | | PTR_TO_FLOW_KEYS | conditionally | | PTR_TO_SOCKET | no | | PTR_TO_SOCK_COMMON | no | | PTR_TO_TCP_SOCK | no | | PTR_TO_TP_BUFFER | yes | | PTR_TO_XDP_SOCK | no | | PTR_TO_BTF_ID | yes | | PTR_TO_MEM | yes | | PTR_TO_BUF | yes | | PTR_TO_FUNC | yes | | CONST_PTR_TO_DYNPTR | yes | The refactored rules are equivalent to the original ones. Note that PTR_TO_FUNC and CONST_PTR_TO_DYNPTR are not rejected here because: (1) check_mem_access() rejects load/store on those ptrs, and those ptrs passed to calls with an offset are rejected by check_func_arg_reg_off(); (2) someone may rely on the verifier not rejecting programs early. Signed-off-by: Hao Sun <sunhao.th@gmail.com> Acked-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/r/20240117094012.36798-1-sunhao.th@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
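The restructuring amounts to an allow-list style switch along these lines; a simplified sketch, not verbatim verifier code, and the conditional check name is hypothetical:

    switch (base_type(ptr_reg->type)) {
    case PTR_TO_CTX:
    case PTR_TO_MAP_VALUE:
    case PTR_TO_MAP_KEY:
    case PTR_TO_STACK:
    case PTR_TO_PACKET_META:
    case PTR_TO_PACKET:
    case PTR_TO_TP_BUFFER:
    case PTR_TO_BTF_ID:
    case PTR_TO_MEM:
    case PTR_TO_BUF:
    case PTR_TO_FUNC:
    case CONST_PTR_TO_DYNPTR:
            break;                          /* arithmetic allowed */
    case CONST_PTR_TO_MAP:
    case PTR_TO_FLOW_KEYS:
            if (arithmetic_conditionally_ok) /* hypothetical condition */
                    break;
            fallthrough;
    default:
            return -EACCES;                 /* anything not listed is rejected */
    }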
-
Andrey Grafin authored
Check that bpf_object__load() successfully creates map_in_maps with BPF_MAP_TYPE_PERF_EVENT_ARRAY values. These changes cover the fix in the previous patch "libbpf: Apply map_set_def_max_entries() for inner_maps on creation". A command line output is: - w/o fix $ sudo ./test_maps libbpf: map 'mim_array_pe': failed to create inner map: -22 libbpf: map 'mim_array_pe': failed to create: Invalid argument(-22) libbpf: failed to load object './test_map_in_map.bpf.o' Failed to load test prog - with fix $ sudo ./test_maps ... test_maps: OK, 0 SKIPPED Fixes: 646f02ff ("libbpf: Add BTF-defined map-in-map support") Signed-off-by: Andrey Grafin <conquistador@yandex-team.ru> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Yonghong Song <yonghong.song@linux.dev> Acked-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/bpf/20240117130619.9403-2-conquistador@yandex-team.ru Signed-off-by: Alexei Starovoitov <ast@kernel.org>
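The map definition exercised by such a test is presumably of this shape; a sketch in libbpf's BTF-defined map syntax, where only the map name mim_array_pe comes from the output above:

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    /* inner map template: PERF_EVENT_ARRAY needs a non-zero max_entries */
    struct inner_pe_map {
            __uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
            __uint(max_entries, 1);
            __uint(key_size, sizeof(int));
            __uint(value_size, sizeof(int));
    };

    struct {
            __uint(type, BPF_MAP_TYPE_ARRAY_OF_MAPS);
            __uint(max_entries, 1);
            __type(key, int);
            __array(values, struct inner_pe_map);
    } mim_array_pe SEC(".maps");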
-
Andrey Grafin authored
This patch allows bpf_object__load() to auto-create BPF_MAP_TYPE_ARRAY_OF_MAPS and BPF_MAP_TYPE_HASH_OF_MAPS with values of BPF_MAP_TYPE_PERF_EVENT_ARRAY. The previous behaviour created a zero-filled btf_map_def for inner maps and tried to use it for map creation, but the Linux kernel forbids creating a BPF_MAP_TYPE_PERF_EVENT_ARRAY map with max_entries=0. Fixes: 646f02ff ("libbpf: Add BTF-defined map-in-map support") Signed-off-by: Andrey Grafin <conquistador@yandex-team.ru> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Yonghong Song <yonghong.song@linux.dev> Acked-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/bpf/20240117130619.9403-1-conquistador@yandex-team.ru Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Daniel Borkmann authored
Both commit 91051f00 ("tcp: Dump bound-only sockets in inet_diag.") and commit 985b8ea9ec7e ("bpf, docs: Fix bpf_redirect_peer header doc") missed the tooling header sync. Fix it. Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Victor Stewart authored
Amend the bpf_redirect_peer() header documentation to also mention support for the netkit device type. Signed-off-by: Victor Stewart <v@nametag.social> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20240116202952.241009-1-v@nametag.social Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Martin KaFai Lau authored
Kuniyuki Iwashima says: ==================== Under SYN Flood, the TCP stack generates SYN Cookie to remain stateless for the connection request until a valid ACK is responded to the SYN+ACK. The cookie contains two kinds of host-specific bits, a timestamp and secrets, so it can only be validated by the generator. It means SYN Cookie consumes network resources between the client and the server; intermediate nodes must remember which node to route the ACK for the cookie to. SYN Proxy reduces such unwanted resource allocation by handling 3WHS at the edge network. After SYN Proxy completes 3WHS, it forwards SYN to the backend server and completes another 3WHS. However, since the server's ISN differs from the cookie, the proxy must manage the ISN mappings and fix up SEQ/ACK numbers in every packet for each connection. If a proxy node goes down, all the connections through it are terminated. Keeping state at the proxy is painful from that perspective. At AWS, we use a dirty hack to build truly stateless SYN Proxy at scale. Our SYN Proxy consists of the front proxy layer and the backend kernel module. (See slides of LPC2023 [0], p37 - p48) The cookie that SYN Proxy generates differs from the kernel's cookie in that it contains a secret (called rolling salt) (i) shared by all the proxy nodes so that any node can validate ACK and (ii) updated periodically so that old cookies cannot be validated and we need not encode a timestamp for the cookie. Also, the ISN contains WScale, SACK, and ECN, rather than carrying them in TS val. This is done so as not to sacrifice any connection quality, since some customers turn off the TCP timestamps option due to a retro CVE. After 3WHS, the proxy restores SYN, encapsulates ACK into SYN, and forwards the TCP-in-TCP packet to the backend server. Our kernel module works at Netfilter input/output hooks and first feeds SYN to the TCP stack to initiate 3WHS. When the module is triggered for SYN+ACK, it looks up the corresponding request socket and overwrites tcp_rsk(req)->snt_isn with the proxy's cookie. Then, the module can complete 3WHS with the original ACK as is. This way, our SYN Proxy does not manage the ISN mappings nor wait for SYN+ACK from the backend and thus can remain stateless. It's working very well for high-bandwidth services like multiple Tbps, but we are looking for a way to drop the dirty hack and further optimise the sequences. If we could validate an arbitrary SYN Cookie on the backend server with BPF, the proxy would not need to restore SYN nor pass it along. After validating ACK, the proxy node just needs to forward it, and then the server can do the lightweight validation (e.g. check if ACK came from proxy nodes, etc) and create a connection from the ACK. This series allows us to create a full sk from an arbitrary SYN Cookie, which is done in 3 steps. 1) At tc, BPF prog calls a new kfunc to create a reqsk and configure it based on the argument populated from SYN Cookie. The reqsk has its listener as req->rsk_listener and is passed to the TCP stack as skb->sk. 2) During TCP socket lookup for the skb, skb_steal_sock() returns a listener in the reuseport group that inet_reqsk(skb->sk)->rsk_listener belongs to. 3) In cookie_v[46]_check(), the reqsk (skb->sk) is fully initialised and a full sk is created.
The kfunc usage is as follows: struct bpf_tcp_req_attrs attrs = { .mss = mss, .wscale_ok = wscale_ok, .rcv_wscale = rcv_wscale, /* Server's WScale < 15 */ .snd_wscale = snd_wscale, /* Client's WScale < 15 */ .tstamp_ok = tstamp_ok, .rcv_tsval = tsval, .rcv_tsecr = tsecr, /* Server's Initial TSval */ .usec_ts_ok = usec_ts_ok, .sack_ok = sack_ok, .ecn_ok = ecn_ok, } skc = bpf_skc_lookup_tcp(...); sk = (struct sock *)bpf_skc_to_tcp_sock(skc); bpf_sk_assign_tcp_reqsk(skb, sk, attrs, sizeof(attrs)); bpf_sk_release(skc); [0]: https://lpc.events/event/17/contributions/1645/attachments/1350/2701/SYN_Proxy_at_Scale_with_BPF.pdf Changes: v8 * Rebase on Yonghong's cpuv4 fix * Patch 5 * Fill the trailing 3-bytes padding in struct bpf_tcp_req_attrs and test it as null * Patch 6 * Remove unused IPPROTP_MPTCP definition v7: https://lore.kernel.org/bpf/20231221012806.37137-1-kuniyu@amazon.com/ * Patch 5 & 6 * Drop MPTCP support v6: https://lore.kernel.org/bpf/20231214155424.67136-1-kuniyu@amazon.com/ * Patch 5 & 6 * /struct /s/tcp_cookie_attributes/bpf_tcp_req_attrs/ * Don't reuse struct tcp_options_received and use u8 for each attrs * Patch 6 * Check retval of test__start_subtest() v5: https://lore.kernel.org/netdev/20231211073650.90819-1-kuniyu@amazon.com/ * Split patch 1-3 * Patch 3 * Clear req->rsk_listener in skb_steal_sock() * Patch 4 & 5 * Move sysctl validation and tsoff init from cookie_bpf_check() to kfunc * Patch 5 * Do not increment LINUX_MIB_SYNCOOKIES(RECV|FAILED) * Patch 6 * Remove __always_inline * Test if tcp_handle_{syn,ack}() is executed * Move some definition to bpf_tracing_net.h * s/BPF_F_CURRENT_NETNS/-1/ v4: https://lore.kernel.org/bpf/20231205013420.88067-1-kuniyu@amazon.com/ * Patch 1 & 2 * s/CONFIG_SYN_COOKIE/CONFIG_SYN_COOKIES/ * Patch 1 * Don't set rcv_wscale for BPF SYN Cookie case. * Patch 2 * Add test for tcp_opt.{unused,rcv_wscale} in kfunc * Modify skb_steal_sock() to avoid resetting skb-sk * Support SO_REUSEPORT lookup * Patch 3 * Add CONFIG_SYN_COOKIES to Kconfig for CI * Define BPF_F_CURRENT_NETNS v3: https://lore.kernel.org/netdev/20231121184245.69569-1-kuniyu@amazon.com/ * Guard kfunc and req->syncookie part in inet6?_steal_sock() with CONFIG_SYN_COOKIE v2: https://lore.kernel.org/netdev/20231120222341.54776-1-kuniyu@amazon.com/ * Drop SOCK_OPS and move SYN Cookie validation logic to TC with kfunc. * Add cleanup patches to reduce discrepancy between cookie_v[46]_check() v1: https://lore.kernel.org/bpf/20231013220433.70792-1-kuniyu@amazon.com/ ==================== Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Kuniyuki Iwashima authored
This commit adds a sample selftest to demonstrate how we can use bpf_sk_assign_tcp_reqsk() as the backend of SYN Proxy. The test creates IPv4/IPv6 x TCP connections and transfers messages over them on lo with a BPF tc prog attached. The tc prog processes SYN and returns SYN+ACK with the following ISN and TS. In a real use case, this part will be done by other hosts. MSB LSB ISN: | 31 ... 8 | 7 6 | 5 | 4 | 3 2 1 0 | | Hash_1 | MSS | ECN | SACK | WScale | TS: | 31 ... 8 | 7 ... 0 | | Random | Hash_2 | WScale in SYN is reused in SYN+ACK. The client returns ACK, and the tc prog will recalculate ISN and TS from ACK and validate the SYN Cookie. If it's valid, the prog calls the kfunc to allocate a reqsk for the skb and configure the reqsk based on the argument created from the SYN Cookie. Later, the reqsk will be processed in cookie_v[46]_check() to create a connection. Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Link: https://lore.kernel.org/r/20240115205514.68364-7-kuniyu@amazon.com Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Kuniyuki Iwashima authored
This patch adds a new kfunc available at TC hook to support arbitrary SYN Cookie. The basic usage is as follows: struct bpf_tcp_req_attrs attrs = { .mss = mss, .wscale_ok = wscale_ok, .rcv_wscale = rcv_wscale, /* Server's WScale < 15 */ .snd_wscale = snd_wscale, /* Client's WScale < 15 */ .tstamp_ok = tstamp_ok, .rcv_tsval = tsval, .rcv_tsecr = tsecr, /* Server's Initial TSval */ .usec_ts_ok = usec_ts_ok, .sack_ok = sack_ok, .ecn_ok = ecn_ok, } skc = bpf_skc_lookup_tcp(...); sk = (struct sock *)bpf_skc_to_tcp_sock(skc); bpf_sk_assign_tcp_reqsk(skb, sk, attrs, sizeof(attrs)); bpf_sk_release(skc); bpf_sk_assign_tcp_reqsk() takes skb, a listener sk, and struct bpf_tcp_req_attrs and allocates a reqsk and configures it. Then, bpf_sk_assign_tcp_reqsk() links the reqsk with skb and the listener. The notable thing here is that we do not hold a refcnt for either the reqsk or the listener. To differentiate that, we mark reqsk->syncookie, which is only used in TX for now. So, if reqsk->syncookie is 1 in RX, it means that the reqsk is allocated by kfunc. When skb is freed, sock_pfree() checks if reqsk->syncookie is 1, and in that case, we set reqsk->rsk_listener to NULL before calling reqsk_free() as the reqsk does not hold a refcnt of the listener. When the TCP stack looks up a socket from the skb, we steal the listener from the reqsk in skb_steal_sock() and create a full sk in cookie_v[46]_check(). The refcnt of reqsk will finally be set to 1 in tcp_get_cookie_sock() after creating a full sk. Note that we can extend struct bpf_tcp_req_attrs in the future when we add a new attribute that is determined in 3WHS. Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Link: https://lore.kernel.org/r/20240115205514.68364-6-kuniyu@amazon.com Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Kuniyuki Iwashima authored
We will support arbitrary SYN Cookie with BPF in the following patch. If BPF prog validates ACK and kfunc allocates a reqsk, it will be carried to cookie_v[46]_check() as skb->sk. If skb->sk is not NULL, we call cookie_bpf_check(). Then, we clear skb->sk and skb->destructor, which is needed so that we do not hold a refcnt for the reqsk and the listener. See the following patch for details. After that, we finish initialisation for the remaining fields with cookie_tcp_reqsk_init(). Note that the server side WScale is set only for non-BPF SYN Cookie. Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Link: https://lore.kernel.org/r/20240115205514.68364-5-kuniyu@amazon.com Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Kuniyuki Iwashima authored
We will support arbitrary SYN Cookie with BPF. If BPF prog validates ACK and kfunc allocates a reqsk, it will be carried to TCP stack as skb->sk with req->syncookie 1. Also, the reqsk has its listener as req->rsk_listener with no refcnt taken. When the TCP stack looks up a socket from the skb, we steal inet_reqsk(skb->sk)->rsk_listener in skb_steal_sock() so that the skb will be processed in cookie_v[46]_check() with the listener. Note that we do not clear skb->sk and skb->destructor so that we can carry the reqsk to cookie_v[46]_check(). Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Link: https://lore.kernel.org/r/20240115205514.68364-4-kuniyu@amazon.com Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Artem Savkov authored
It is possible for bpf_kfunc_call_test_release() to be called from bpf_map_free_deferred() when bpf_testmod is already unloaded and prog_test_struct.cnt, which it tries to decrease, is no longer in memory. This patch tries to fix the issue by waiting for all references to be dropped in bpf_testmod_exit(). The issue can be triggered by running 'test_progs -t map_kptr' in 6.5, but is obscured in 6.6 by d119357d ("rcu-tasks: Treat only synchronous grace periods urgently"). Fixes: 65eb006d ("bpf: Move kernel test kfuncs to bpf_testmod") Signed-off-by: Artem Savkov <asavkov@redhat.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Yonghong Song <yonghong.song@linux.dev> Cc: Jiri Olsa <jolsa@kernel.org> Link: https://lore.kernel.org/bpf/82f55c0e-0ec8-4fe1-8d8c-b1de07558ad9@linux.dev Link: https://lore.kernel.org/bpf/20240110085737.8895-1-asavkov@redhat.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Kuniyuki Iwashima authored
We will support arbitrary SYN Cookie with BPF. If BPF prog validates ACK and kfunc allocates a reqsk, it will be carried to TCP stack as skb->sk with req->syncookie 1. In skb_steal_sock(), we need to check inet_reqsk(sk)->syncookie to see if the reqsk is created by kfunc. However, inet_reqsk() is not available in sock.h. Let's move skb_steal_sock() to request_sock.h. While at it, we refactor skb_steal_sock() so it returns early if skb->sk is NULL to minimise the following patch. Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Reviewed-by: Eric Dumazet <edumazet@google.com> Link: https://lore.kernel.org/r/20240115205514.68364-3-kuniyu@amazon.com Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Tiezhu Yang authored
There exists the following warning when building bpftool: CC prog.o prog.c: In function ‘profile_open_perf_events’: prog.c:2301:24: warning: ‘calloc’ sizes specified with ‘sizeof’ in the earlier argument and not in the later argument [-Wcalloc-transposed-args] 2301 | sizeof(int), obj->rodata->num_cpu * obj->rodata->num_metric); | ^~~ prog.c:2301:24: note: earlier argument should specify number of elements, later size of each element Tested with the latest upstream GCC which contains a new warning option -Wcalloc-transposed-args. The first argument to calloc is documented to be the number of elements in the array, while the second argument is the size of each element. Just switch the first and second arguments of calloc() to silence the build warning; compile tested only. Fixes: 47c09d6a ("bpftool: Introduce "prog profile" command") Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Quentin Monnet <quentin@isovalent.com> Link: https://lore.kernel.org/bpf/20240116061920.31172-1-yangtiezhu@loongson.cn Signed-off-by: Alexei Starovoitov <ast@kernel.org>
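The fix is a straightforward argument swap; the destination variable below is illustrative, only the call arguments come from the warning above:

    /* before: element size first trips -Wcalloc-transposed-args on newer GCC */
    fds = calloc(sizeof(int), obj->rodata->num_cpu * obj->rodata->num_metric);

    /* after: number of elements first, element size second */
    fds = calloc(obj->rodata->num_cpu * obj->rodata->num_metric, sizeof(int));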
-
Kuniyuki Iwashima authored
We will support arbitrary SYN Cookie with BPF. When BPF prog validates ACK and kfunc allocates a reqsk, we need to call tcp_ns_to_ts() to calculate an offset of TSval for later use: time t0 : Send SYN+ACK -> tsval = Initial TSval (Random Number) t1 : Recv ACK of 3WHS -> tsoff = TSecr - tcp_ns_to_ts(usec_ts_ok, tcp_clock_ns()) = Initial TSval - t1 t2 : Send ACK -> tsval = t2 + tsoff = Initial TSval + (t2 - t1) = Initial TSval + Time Delta (x) (x) Note that the time delta does not include the initial RTT from t0 to t1. Let's move tcp_ns_to_ts() to tcp.h. Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Reviewed-by: Eric Dumazet <edumazet@google.com> Link: https://lore.kernel.org/r/20240115205514.68364-2-kuniyu@amazon.com Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Alexei Starovoitov authored
A few minor improvements for the bpf_cmp() macro: . reduce number of args in __bpf_cmp() . rename NOFLIP to UNLIKELY . add a comment about 64-bit truncation in "i" constraint . use "ri" constraint for sizeof(rhs) <= 4 . improve error message for bpf_cmp_likely() Before: progs/iters_task_vma.c:31:7: error: variable 'ret' is uninitialized when used here [-Werror,-Wuninitialized] 31 | if (bpf_cmp_likely(seen, <==, 1000)) | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ../bpf/bpf_experimental.h:325:3: note: expanded from macro 'bpf_cmp_likely' 325 | ret; | ^~~ progs/iters_task_vma.c:31:7: note: variable 'ret' is declared here ../bpf/bpf_experimental.h:310:3: note: expanded from macro 'bpf_cmp_likely' 310 | bool ret; | ^ After: progs/iters_task_vma.c:31:7: error: invalid operand for instruction 31 | if (bpf_cmp_likely(seen, <==, 1000)) | ^ ../bpf/bpf_experimental.h:324:17: note: expanded from macro 'bpf_cmp_likely' 324 | asm volatile("r0 " #OP " invalid compare"); | ^ <inline asm>:1:5: note: instantiated into assembly here 1 | r0 <== invalid compare | ^ Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Yonghong Song <yonghong.song@linux.dev> Link: https://lore.kernel.org/bpf/20240112220134.71209-1-alexei.starovoitov@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Yonghong Song authored
In verifier.rst, I found an incorrect statement (maybe a typo) in section 'Liveness marks tracking'. Basically, the wrong register is attributed to have a read mark. This may confuse the user. Signed-off-by: Yonghong Song <yonghong.song@linux.dev> Acked-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/r/20240111052136.3440417-1-yonghong.song@linux.dev Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Yonghong Song authored
Add a selftest with a 4-byte BPF_ST of 0 where the store is not 8-byte aligned. The goal is to ensure that STACK_ZERO is properly marked in stack slots and the STACK_ZERO value can propagate properly during the load. Acked-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Yonghong Song <yonghong.song@linux.dev> Link: https://lore.kernel.org/r/20240110051355.2737232-1-yonghong.song@linux.dev Signed-off-by: Alexei Starovoitov <ast@kernel.org>
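Such a test case presumably looks something like this; a sketch in the usual inline-asm verifier-test style, where the function name, SEC, and the __naked/__clobber_all helpers (from progs/bpf_misc.h) are assumptions:

    SEC("socket")
    __naked void unaligned_st_zero(void)
    {
            asm volatile (
                    "*(u32 *)(r10 - 12) = 0;"    /* 4-byte BPF_ST at a non-8-byte-aligned slot */
                    "r0 = *(u32 *)(r10 - 12);"   /* the load relies on STACK_ZERO propagation */
                    "exit;"
                    ::: __clobber_all);
    }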
-