- 29 Mar, 2023 3 commits
-
-
Yixin Shen authored
Test whether a TCP CC implemented in BPF is allowed to write app_limited in struct tcp_sock. This is already allowed for the built-in TCP CC. Signed-off-by: Yixin Shen <bobankhshen@gmail.com> Link: https://lore.kernel.org/r/20230329073558.8136-3-bobankhshen@gmail.com Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
-
Yixin Shen authored
A CC that implements tcp_congestion_ops.cong_control() should be able to write app_limited. A built-in CC or one from a kernel module is already able to write to this member of struct tcp_sock. For a BPF program, write access has not been allowed yet. Signed-off-by: Yixin Shen <bobankhshen@gmail.com> Link: https://lore.kernel.org/r/20230329073558.8136-2-bobankhshen@gmail.com Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
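For illustration only (not the actual selftest or kernel change): a minimal BPF struct_ops fragment showing the kind of write this series enables. The program name, the cast of sk to tcp_sock, and the assigned value are assumptions, and the struct_ops map registration with its mandatory callbacks is omitted.

  /* Sketch of a BPF TCP CC callback writing app_limited. */
  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  char _license[] SEC("license") = "GPL";

  SEC("struct_ops/sample_cong_control")
  void BPF_PROG(sample_cong_control, struct sock *sk,
                const struct rate_sample *rs)
  {
          struct tcp_sock *tp = (struct tcp_sock *)sk;

          /* Previously rejected by the verifier for BPF CCs; already
           * allowed for built-in CCs and now for BPF ones as well.
           * The value here is purely illustrative. */
          tp->app_limited = tp->delivered ? tp->delivered : 1;
  }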
-
Manu Bretelle authored
This is essentially a backport of iproute2's commit ed54f76484b5 ("json: fix backslash escape typo in jsonw_puts"). Also add the stdio.h include in json_writer.h to be able to compile and run the json_writer test (as used below).

Before this fix:

  $ gcc -D notused -D TEST -I../../include -o json_writer json_writer.c json_writer.h
  $ ./json_writer
  { "Vyatta": { "url": "http://vyatta.com", "downloads": 2000000, "stock": 8.16, "ARGV": [], "empty": [], "NIL": {}, "my_null": null, "special chars": [ "slash": "/", "newline": "\n", "tab": "\t", "ff": "\f", "quote": "\"", "tick": "'", "backslash": "\n" ] } }

After:

  $ gcc -D notused -D TEST -I../../include -o json_writer json_writer.c json_writer.h
  $ ./json_writer
  { "Vyatta": { "url": "http://vyatta.com", "downloads": 2000000, "stock": 8.16, "ARGV": [], "empty": [], "NIL": {}, "my_null": null, "special chars": [ "slash": "/", "newline": "\n", "tab": "\t", "ff": "\f", "quote": "\"", "tick": "'", "backslash": "\\" ] } }

Signed-off-by: Manu Bretelle <chantr4@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Reviewed-by: Quentin Monnet <quentin@isovalent.com> Link: https://lore.kernel.org/bpf/20230329073002.2026563-1-chantr4@gmail.com
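For readers who want the gist of the one-character fix, here is a stand-alone sketch (not the iproute2/json_writer source) of an escaping loop with the corrected backslash case; the function and variable names are made up for the example.

  #include <stdio.h>

  static void put_escaped(FILE *out, const char *str)
  {
          for (; *str; str++) {
                  switch (*str) {
                  case '"':  fputs("\\\"", out); break;
                  case '\n': fputs("\\n", out);  break;
                  case '\t': fputs("\\t", out);  break;
                  case '\\': fputs("\\\\", out); break; /* the typo emitted "\n" here */
                  default:   fputc(*str, out);
                  }
          }
  }

  int main(void)
  {
          put_escaped(stdout, "backslash: \\\n");
          return 0;
  }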
-
- 28 Mar, 2023 4 commits
-
-
Andrii Nakryiko authored
Eduard Zingerman says:
====================
verifier/xdp_direct_packet_access.c automatically converted to inline assembly using [1]. This is a leftover from [2]; the last patch in that batch was blocked by the mail server for being too long. This patch-set splits it in two:
- one to add the migrated test to progs/
- one to remove the old test from verifier/

[1] Migration tool https://github.com/eddyz87/verifier-tests-migrator
[2] First batch of migrated verifier/*.c tests https://lore.kernel.org/bpf/167979433109.17761.17302808621381963629.git-patchwork-notify@kernel.org/
====================
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
-
Eduard Zingerman authored
selftests/bpf: Remove verifier/xdp_direct_packet_access.c, converted to progs/verifier_xdp_direct_packet_access.c
Removing verifier/xdp_direct_packet_access.c as it was automatically converted to use inline assembly in the previous commit. It is now available as progs/verifier_xdp_direct_packet_access.c. Signed-off-by: Eduard Zingerman <eddyz87@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20230328020813.392560-3-eddyz87@gmail.com
-
Eduard Zingerman authored
Test verifier/xdp_direct_packet_access.c automatically converted to use inline assembly. The original test is removed in the next patch. Signed-off-by: Eduard Zingerman <eddyz87@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20230328020813.392560-2-eddyz87@gmail.com
-
Eduard Zingerman authored
Double-free error in bpf_linker__free() was reported by James Hilliard. The error is caused by misuse of realloc() in extend_sec(). The error occurs when two files with empty sections of the same name are linked:
- when the first file is processed:
  - extend_sec() calls realloc(dst->raw_data, dst_align_sz) with dst->raw_data == NULL and dst_align_sz == 0;
  - dst->raw_data is set to a special pointer to a memory block of size zero;
- when the second file is processed:
  - extend_sec() calls realloc(dst->raw_data, dst_align_sz) with dst->raw_data == <special pointer> and dst_align_sz == 0;
  - realloc() "frees" the dst->raw_data special pointer and returns NULL;
  - extend_sec() exits with -ENOMEM, and the old dst->raw_data value is preserved (it is now invalid);
  - eventually, bpf_linker__free() attempts to free dst->raw_data again.
This patch fixes the bug by avoiding the -ENOMEM exit for dst_align_sz == 0. The fix was suggested by Andrii Nakryiko <andrii.nakryiko@gmail.com>. Reported-by: James Hilliard <james.hilliard1@gmail.com> Signed-off-by: Eduard Zingerman <eddyz87@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Tested-by: James Hilliard <james.hilliard1@gmail.com> Link: https://lore.kernel.org/bpf/CADvTj4o7ZWUikKwNTwFq0O_AaX+46t_+Ca9gvWMYdWdRtTGeHQ@mail.gmail.com/ Link: https://lore.kernel.org/bpf/20230328004738.381898-3-eddyz87@gmail.com
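A self-contained sketch (not libbpf code) of the failure mode and the fix, assuming glibc realloc() semantics for size zero; the grow() helper is hypothetical and only mirrors the extend_sec() pattern described above.

  #include <stdio.h>
  #include <stdlib.h>

  static int grow(void **buf, size_t new_sz)
  {
          void *tmp = realloc(*buf, new_sz);

          /* Buggy pattern: "if (!tmp) return -ENOMEM;" — with new_sz == 0,
           * realloc() may have freed *buf and returned NULL, so erroring out
           * here leaves *buf dangling and it gets freed again later.
           * The fix: only treat NULL as failure for a non-zero size. */
          if (!tmp && new_sz != 0)
                  return -1;
          *buf = tmp;
          return 0;
  }

  int main(void)
  {
          void *sec = NULL;

          grow(&sec, 0);  /* first empty section: sec -> zero-size block */
          grow(&sec, 0);  /* second empty section: old block freed, sec = NULL */
          free(sec);      /* safe now; with the bug this was a double free */
          return 0;
  }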
-
- 27 Mar, 2023 2 commits
-
-
Hengqi Chen authored
The verifier test creates BPF ringbuf maps using hard-coded 4096 as max_entries. Some tests will fail if the page size of the running kernel is not 4096. Use getpagesize() instead. Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20230326095341.816023-1-hengqi.chen@gmail.com
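A minimal userspace sketch of the idea, using libbpf's bpf_map_create() to size a ringbuf map from the running kernel's page size; the map name and error handling are illustrative, and actually creating the map requires CAP_BPF.

  #include <unistd.h>
  #include <stdio.h>
  #include <bpf/bpf.h>

  int main(void)
  {
          /* Ringbuf max_entries must be a page-aligned power of two, so use
           * the real page size instead of assuming 4096. */
          int fd = bpf_map_create(BPF_MAP_TYPE_RINGBUF, "rb", 0, 0,
                                  getpagesize(), NULL);

          if (fd < 0) {
                  perror("bpf_map_create");
                  return 1;
          }
          close(fd);
          return 0;
  }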
-
JP Kobryn authored
This patch prevents races on the print function pointer, allowing the libbpf_set_print() function to become thread-safe. Signed-off-by: JP Kobryn <inwardvessel@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20230325010845.46000-1-inwardvessel@gmail.com
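A generic sketch of the approach (not the libbpf implementation): publish and read a global callback pointer with GCC/Clang __atomic builtins so concurrent setters and callers never observe a torn update. All names here are invented for the example.

  #include <stdarg.h>
  #include <stdio.h>

  typedef int (*print_fn_t)(const char *fmt, va_list ap);

  static print_fn_t print_fn = vprintf;

  print_fn_t set_print(print_fn_t fn)
  {
          /* Atomically install the new callback and hand back the old one. */
          return __atomic_exchange_n(&print_fn, fn, __ATOMIC_RELAXED);
  }

  static void log_msg(const char *fmt, ...)
  {
          /* Load the pointer once, atomically, then call through the copy. */
          print_fn_t fn = __atomic_load_n(&print_fn, __ATOMIC_RELAXED);
          va_list ap;

          if (!fn)
                  return;
          va_start(ap, fmt);
          fn(fmt, ap);
          va_end(ap);
  }

  int main(void)
  {
          log_msg("hello %s\n", "world");
          return 0;
  }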
-
- 26 Mar, 2023 31 commits
-
-
Nuno Gonçalves authored
The remap of fill and completion rings was frowned upon as they control the usage of UMEM which does not support concurrent use. At the same time this would disallow the remap of these rings into another process. A possible use case is that the user wants to transfer the socket/UMEM ownership to another process (via SYS_pidfd_getfd) and so would need to also remap these rings. This will have no impact on current usages and just relaxes the remap limitation. Signed-off-by: Nuno Gonçalves <nunog@fr24.com> Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Acked-by: Magnus Karlsson <magnus.karlsson@intel.com> Link: https://lore.kernel.org/r/20230324100222.13434-1-nunog@fr24.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
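A hedged sketch of the use case mentioned above: grabbing another process's XSK socket fd via pidfd_getfd() so its rings can then be mmap()ed locally. The pid and fd numbers are placeholders, and the headers are assumed to define SYS_pidfd_open/SYS_pidfd_getfd (kernel >= 5.6).

  #define _GNU_SOURCE
  #include <sys/syscall.h>
  #include <unistd.h>
  #include <stdio.h>

  int main(void)
  {
          pid_t pid = 1234;   /* placeholder: process owning the XSK socket */
          int targetfd = 5;   /* placeholder: the XSK fd in that process */

          int pidfd = syscall(SYS_pidfd_open, pid, 0);
          if (pidfd < 0) { perror("pidfd_open"); return 1; }

          int xsk_fd = syscall(SYS_pidfd_getfd, pidfd, targetfd, 0);
          if (xsk_fd < 0) { perror("pidfd_getfd"); return 1; }

          /* xsk_fd can now be used to mmap the fill/completion rings,
           * which the patch above newly permits. */
          printf("got fd %d\n", xsk_fd);
          return 0;
  }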
-
Dave Thaler authored
Document the extended call instructions. The term "program-local" is used for calls by offset. There are also instructions for calling helper functions by "address" (the old way of using integer values) and for calling helper functions by BTF ID (for kfuncs).
V1 -> V2: addressed comments from David Vernet
V2 -> V3: made descriptions in the table consistent with updated names
V3 -> V4: addressed comments from Alexei
V4 -> V5: fixed alignment
Signed-off-by: Dave Thaler <dthaler@microsoft.com> Acked-by: David Vernet <void@manifault.com> Link: https://lore.kernel.org/r/20230326033117.1075-1-dthaler1968@googlemail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
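As a reference for the three call flavors being documented, a small illustration using the uapi insn encoding; the helper ID, offset, and BTF ID values below are placeholders.

  #include <stdio.h>
  #include <linux/bpf.h>

  /* src_reg selects the call flavor; imm carries the helper ID, the
   * program-local offset, or the kfunc BTF ID respectively. */
  static const struct bpf_insn helper_call = {
          .code = BPF_JMP | BPF_CALL, .src_reg = 0,
          .imm = BPF_FUNC_map_lookup_elem,      /* helper by "address" (ID) */
  };
  static const struct bpf_insn local_call = {
          .code = BPF_JMP | BPF_CALL, .src_reg = BPF_PSEUDO_CALL,
          .imm = 2,                             /* placeholder offset to callee */
  };
  static const struct bpf_insn kfunc_call = {
          .code = BPF_JMP | BPF_CALL, .src_reg = BPF_PSEUDO_KFUNC_CALL,
          .imm = 0,                             /* BTF ID, normally filled by libbpf */
  };

  int main(void)
  {
          printf("src_reg: helper=%d local=%d kfunc=%d\n",
                 helper_call.src_reg, local_call.src_reg, kfunc_call.src_reg);
          return 0;
  }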
-
Alexei Starovoitov authored
Martin KaFai Lau says:
====================
From: Martin KaFai Lau <martin.lau@kernel.org>

This set is a continuation of the effort to use bpf_mem_cache_alloc/free in bpf_local_storage [1]. The major change is only using bpf_mem_alloc for task and cgrp storage while sk and inode stay with kzalloc/kfree. The details are in patch 2.

[1]: https://lore.kernel.org/bpf/20230308065936.1550103-1-martin.lau@linux.dev/

v3:
- Only use bpf_mem_alloc for task and cgrp storage.
- sk and inode storage stay with kzalloc/kfree.
- Check NULL and add comments in bpf_mem_cache_raw_free() in patch 1.
- Added test and benchmark for task storage.

v2:
- Added bpf_mem_cache_alloc_flags() and bpf_mem_cache_raw_free() to hide the internal data structure of the bpf allocator.
- Fixed a typo bug in bpf_selem_free().
- Simplified the test_local_storage test by directly using the err returned from libbpf.
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Martin KaFai Lau authored
This patch adds a task storage benchmark to the existing local-storage-create benchmark. For task storage:

./bench --storage-type task --batch-size 32:
  bpf_ma:    Summary: creates 30.456 ± 0.507k/s ( 30.456k/prod), 6.08 kmallocs/create
  no bpf_ma: Summary: creates 31.962 ± 0.486k/s ( 31.962k/prod), 6.13 kmallocs/create

./bench --storage-type task --batch-size 64:
  bpf_ma:    Summary: creates 30.197 ± 1.476k/s ( 30.197k/prod), 6.08 kmallocs/create
  no bpf_ma: Summary: creates 31.103 ± 0.297k/s ( 31.103k/prod), 6.13 kmallocs/create

Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Link: https://lore.kernel.org/r/20230322215246.1675516-6-martin.lau@linux.dev Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Martin KaFai Lau authored
The current sk storage test ensures the memory free works when the local_storage->smap is NULL. This patch adds a task storage test to ensure the memory free code path works when local_storage->smap is NULL. Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Link: https://lore.kernel.org/r/20230322215246.1675516-5-martin.lau@linux.dev Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Martin KaFai Lau authored
This patch uses bpf_mem_cache_alloc/free for allocating and freeing bpf_local_storage for task and cgroup storage. The changes are similar to the previous patch.

A few things worth mentioning for bpf_local_storage:

The local_storage is freed when the last selem is deleted. Before deleting a selem from local_storage, it needs to retrieve the local_storage->smap because bpf_selem_unlink_storage_nolock() may have set it to NULL. Note that local_storage->smap may have already been NULL when the selem that created this local_storage has been removed. In this case, call_rcu will be used to free the local_storage. Also, the bpf_ma (true or false) value is needed before calling bpf_local_storage_free(). The bpf_ma can be obtained either from local_storage->smap (if available) or from any of its selem's smap. A new helper check_storage_bpf_ma() is added to obtain bpf_ma for a bpf_local_storage that is being deleted.

When bpf_local_storage_alloc() gets reused memory, all fields either already hold the correct values or will be initialized: 'cache[]' must already be all NULLs, 'list' must be empty, and the others will be initialized.

Cc: Namhyung Kim <namhyung@kernel.org> Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Link: https://lore.kernel.org/r/20230322215246.1675516-4-martin.lau@linux.dev Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Martin KaFai Lau authored
This patch uses bpf_mem_alloc for the task and cgroup local storage so that the bpf prog can easily get hold of the storage owner's PTR_TO_BTF_ID. For example, bpf_get_current_task_btf() can be used in some of the kmalloc code paths, which will cause deadlock/recursion. bpf_mem_cache_alloc is deadlock free and will solve a legit use case in [1].

For sk storage, its batch creation benchmark shows a few percent regression when the sk create/destroy batch size is larger than 32. The sk creation/destruction happens much more often and depends on external traffic. Considering it is hypothetical to be able to cause deadlock with sk storage, it can cross the bridge to use bpf_mem_alloc when a legit (i.e. useful) use case comes up. For inode storage, bpf_local_storage_destroy() is called before waiting for a rcu gp and its memory cannot be reused immediately. inode stays with kmalloc/kfree after the rcu [or tasks_trace] gp.

A 'bool bpf_ma' argument is added to bpf_local_storage_map_alloc(). Only task and cgroup storage have 'bpf_ma == true', which means to use bpf_mem_cache_alloc/free(). This patch only changes selem to use bpf_mem_alloc for task and cgroup. The next patch will change the local_storage to use bpf_mem_alloc also for task and cgroup. Here are some more details on the changes:

* memory allocation: After bpf_mem_cache_alloc(), the SDATA(selem)->data is zero-ed because bpf_mem_cache_alloc() could return a reused selem. It is to keep the existing bpf_map_kzalloc() behavior. Only SDATA(selem)->data is zero-ed; SDATA(selem)->data is the part visible to the bpf prog. There is no need to use zero_map_value() to do the zeroing because bpf_selem_free(..., reuse_now = true) ensures no bpf prog is using the selem before returning the selem through bpf_mem_cache_free(). The internal fields of selem will be initialized when linking to the new smap and the new local_storage. When 'bpf_ma == false', nothing changes in this patch; it stays with bpf_map_kzalloc().

* memory free: bpf_selem_free() and bpf_selem_free_rcu() are modified to handle the bpf_ma == true case. For the common selem free path where its owner is also being destroyed, the memory is freed in bpf_local_storage_destroy(); the owner (task and cgroup) has gone through a rcu gp, so the memory can be reused immediately and bpf_local_storage_destroy() calls bpf_selem_free(..., reuse_now = true), which does bpf_mem_cache_free() for immediate-reuse consideration. An exception is the delete elem code path, which is called from the helper bpf_*_storage_delete() and the syscall bpf_map_delete_elem(). This path is an unusual case for local storage because the common use case is to have the local storage stay with its owner's lifetime so that the bpf prog and the user space do not have to monitor the owner's destruction. For the delete elem path, the selem cannot be reused immediately because there could be a bpf prog using it. It calls bpf_selem_free(..., reuse_now = false) and waits for a rcu tasks trace gp before freeing the elem. The rcu callback is changed to do bpf_mem_cache_raw_free() instead of kfree(). When 'bpf_ma == false', it should be the same as before; __bpf_selem_free() is added to do the kfree_rcu and call_tasks_trace_rcu().

A few words on 'reuse_now == true': it is still racing with bpf_local_storage_map_free, which is under rcu protection, so it still needs to wait for a rcu gp instead of kfree(). Otherwise, the selem may be reused by slab for a totally different struct while bpf_local_storage_map_free() is still using it (as a rcu reader). For the inode case, there may be other rcu readers as well. In short, when bpf_ma == false and reuse_now == true => vanilla rcu.

[1]: https://lore.kernel.org/bpf/20221118190109.1512674-1-namhyung@kernel.org/
Cc: Namhyung Kim <namhyung@kernel.org> Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Link: https://lore.kernel.org/r/20230322215246.1675516-3-martin.lau@linux.dev Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Martin KaFai Lau authored
This patch adds a few bpf mem allocator functions which will be used in the bpf_local_storage in a later patch. bpf_mem_cache_alloc_flags(..., gfp_t flags) is added. When flags == GFP_KERNEL, it will fall back to __alloc(..., GFP_KERNEL). bpf_local_storage knows its running context is sleepable (GFP_KERNEL) and provides a better guarantee on memory allocation. bpf_local_storage has some uncommon cases in which its selem cannot be reused immediately. It handles its own rcu_head, goes through a rcu_trace gp and then frees it. bpf_mem_cache_raw_free() is added for direct-free purposes without leaking the LLIST_NODE_SZ internal knowledge. At free time, the 'struct bpf_mem_alloc *ma' is no longer available. However, the caller should know whether it is percpu memory or not and can call the corresponding raw_free function. bpf_local_storage does not support percpu values, so only the non-percpu bpf_mem_cache_raw_free() is added in this patch. Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Link: https://lore.kernel.org/r/20230322215246.1675516-2-martin.lau@linux.dev Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Alexei Starovoitov authored
Eduard Zingerman says:
====================
This is a follow up for RFC [1]. It migrates a first batch of 38 verifier/*.c tests to inline assembly and use of ./test_progs for actual execution. The migration is done by a python script (see [2]).

Each migrated verifier/xxx.c file is mapped to progs/verifier_xxx.c plus an entry in the prog_tests/verifier.c. One patch per each file.

A few patches at the beginning of the patch-set extend test_loader with necessary functionality, mainly:
- support for tests execution in unprivileged mode;
- support for test runs for test programs.

Migrated tests could be selected for execution using the following filter:

  ./test_progs -a verifier_*

An example of the migrated test:

  SEC("xdp")
  __description("XDP pkt read, pkt_data' > pkt_end, corner case, good access")
  __success __retval(0) __flag(BPF_F_ANY_ALIGNMENT)
  __naked void end_corner_case_good_access_1(void)
  {
          asm volatile ("                                 \
          r2 = *(u32*)(r1 + %[xdp_md_data]);              \
          r3 = *(u32*)(r1 + %[xdp_md_data_end]);          \
          r1 = r2;                                        \
          r1 += 8;                                        \
          if r1 > r3 goto l0_%=;                          \
          r0 = *(u64*)(r1 - 8);                           \
  l0_%=:  r0 = 0;                                         \
          exit;                                           \
  "       :
          : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)),
            __imm_const(xdp_md_data_end, offsetof(struct xdp_md, data_end))
          : __clobber_all);
  }

Changes compared to RFC:
- test_loader.c is extended to support test program runs;
- capabilities handling now matches behavior of test_verifier;
- BPF_ST_MEM instructions are automatically replaced by BPF_STX_MEM instructions to overcome current clang limitations;
- tests styling updates according to RFC feedback;
- 38 migrated files are included instead of 1.

I used the following means for testing:
- migration tool itself has a set of self-tests;
- migrated tests are passing;
- manually compared each old/new file side-by-side.

While doing side-by-side comparison I've noted a few defects in the original tests:
- and.c:
  - One of the jump targets is off by one;
  - BPF_ST_MEM wrong OFF/IMM ordering;
- array_access.c:
  - BPF_ST_MEM wrong OFF/IMM ordering;
- value_or_null.c:
  - BPF_ST_MEM wrong OFF/IMM ordering.

These defects would be addressed separately.

[1] RFC https://lore.kernel.org/bpf/20230123145148.2791939-1-eddyz87@gmail.com/
[2] Migration tool https://github.com/eddyz87/verifier-tests-migrator
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Eduard Zingerman authored
Test verifier/xdp.c automatically converted to use inline assembly. Signed-off-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/r/20230325025524.144043-43-eddyz87@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Eduard Zingerman authored
Test verifier/xadd.c automatically converted to use inline assembly. Signed-off-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/r/20230325025524.144043-42-eddyz87@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Eduard Zingerman authored
Test verifier/var_off.c automatically converted to use inline assembly. Signed-off-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/r/20230325025524.144043-41-eddyz87@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Eduard Zingerman authored
Test verifier/value_or_null.c automatically converted to use inline assembly. Signed-off-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/r/20230325025524.144043-40-eddyz87@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Eduard Zingerman authored
Test verifier/value.c automatically converted to use inline assembly. Signed-off-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/r/20230325025524.144043-39-eddyz87@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Eduard Zingerman authored
Test verifier/value_adj_spill.c automatically converted to use inline assembly. Signed-off-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/r/20230325025524.144043-38-eddyz87@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Eduard Zingerman authored
Test verifier/uninit.c automatically converted to use inline assembly. Signed-off-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/r/20230325025524.144043-37-eddyz87@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Eduard Zingerman authored
Test verifier/stack_ptr.c automatically converted to use inline assembly. Signed-off-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/r/20230325025524.144043-36-eddyz87@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Eduard Zingerman authored
Test verifier/spill_fill.c automatically converted to use inline assembly. Signed-off-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/r/20230325025524.144043-35-eddyz87@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Eduard Zingerman authored
Test verifier/ringbuf.c automatically converted to use inline assembly. Signed-off-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/r/20230325025524.144043-34-eddyz87@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Eduard Zingerman authored
Test verifier/raw_tp_writable.c automatically converted to use inline assembly. Signed-off-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/r/20230325025524.144043-33-eddyz87@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Eduard Zingerman authored
Test verifier/raw_stack.c automatically converted to use inline assembly. Signed-off-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/r/20230325025524.144043-32-eddyz87@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Eduard Zingerman authored
Test verifier/meta_access.c automatically converted to use inline assembly. Signed-off-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/r/20230325025524.144043-31-eddyz87@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Eduard Zingerman authored
Test verifier/masking.c automatically converted to use inline assembly. Signed-off-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/r/20230325025524.144043-30-eddyz87@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Eduard Zingerman authored
Test verifier/map_ret_val.c automatically converted to use inline assembly. Signed-off-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/r/20230325025524.144043-29-eddyz87@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Eduard Zingerman authored
Test verifier/map_ptr.c automatically converted to use inline assembly. Signed-off-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/r/20230325025524.144043-28-eddyz87@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Eduard Zingerman authored
Test verifier/leak_ptr.c automatically converted to use inline assembly. Signed-off-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/r/20230325025524.144043-27-eddyz87@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Eduard Zingerman authored
Test verifier/ld_ind.c automatically converted to use inline assembly. Signed-off-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/r/20230325025524.144043-26-eddyz87@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Eduard Zingerman authored
Test verifier/int_ptr.c automatically converted to use inline assembly. Signed-off-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/r/20230325025524.144043-25-eddyz87@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Eduard Zingerman authored
Test verifier/helper_value_access.c automatically converted to use inline assembly. Signed-off-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/r/20230325025524.144043-24-eddyz87@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Eduard Zingerman authored
Test verifier/helper_restricted.c automatically converted to use inline assembly. Signed-off-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/r/20230325025524.144043-23-eddyz87@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Eduard Zingerman authored
Test verifier/helper_packet_access.c automatically converted to use inline assembly. Signed-off-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/r/20230325025524.144043-22-eddyz87@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-