- 12 Oct, 2019 12 commits
-
-
Ivan Khoronzhuk authored
There is currently no way to pass C/LDFLAGS correctly to the build command, for instance when --sysroot is used or when external libraries such as -lelf, which can be absent from the toolchain, are needed. Passing them enables cross-compiling samples/bpf, allowing libelf to be taken from the sysroot. Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20191011002808.28206-13-ivan.khoronzhuk@linaro.org
-
Ivan Khoronzhuk authored
There is no need to use C++ for the test_libbpf target: libbpf is written in C and can be tested with C. After this change, CXXFLAGS can be dropped from the makefiles, at least in bpf samples, when a sysroot is used, passing the same C/LDFLAGS as for the library. Add "return 0" in test_libbpf to avoid a warning, and also remove the leading spaces to keep the same style and avoid warnings while applying. Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20191011002808.28206-12-ivan.khoronzhuk@linaro.org
-
Ivan Khoronzhuk authored
There is no need to hack HOSTCC into being a cross-compiler any more, so drop this trick and use the target CC for HDR_PROBE. Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20191011002808.28206-11-ivan.khoronzhuk@linaro.org
-
Ivan Khoronzhuk authored
When compiling natively, the target's cflags and ldflags are equal to the ones taken from HOSTCFLAGS and HOSTLDFLAGS. When cross compiling, the samples should have their own flags for the target arch. During verification, for arm, arm64 and x86_64 the following flags were always used: -Wall -O2 -fomit-frame-pointer -Wmissing-prototypes -Wstrict-prototypes. So add them, as they were verified and used before Makefile.target was introduced, but omit "-fomit-frame-pointer" as proposed during review, since such an optimization makes no sense for samples. Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20191011002808.28206-10-ivan.khoronzhuk@linaro.org
-
Ivan Khoronzhuk authored
The main reason for this is that HOSTCC and CC have different aims: HOSTCC is used to build programs that run on the host, which can in turn cross-compile target programs with CC. This was tested for arm and arm64 cross compilation, based on the Linaro toolchain, but should work for others. So, in order to separate cross compilation (CC) from the host build (HOSTCC), base the samples on Makefile.target. This allows samples/bpf programs to be cross-compiled with CC while the auxiliary tools that run on the host are built with HOSTCC. Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20191011002808.28206-9-ivan.khoronzhuk@linaro.org
-
Ivan Khoronzhuk authored
Makefile.target is only added here; it will be used in samples/bpf/Makefile later in order to switch cross-compiling from the HOSTCC environment to CC. HOSTCC is then meant for building binaries and tools that run on the host afterwards, in order to simplify the build, such as "fixdep". In the cross-compiling case, "fixdep" is executed on the host while the rest of the samples run on the target arch. In order to build binaries for the target arch with CC and tools running on the host with HOSTCC, add Makefile.target for simplicity, with definitions and routines similar to the ones used in scripts/Makefile.host. This allows cross-compilation to be added to samples/bpf later with minimal changes. The "tprog" prefix stands for target programs built with CC. Makefile.target contains only what is needed for samples/bpf; it can potentially be reused later, but for now it is needed only to unblock the tricky samples/bpf cross compilation. Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20191011002808.28206-8-ivan.khoronzhuk@linaro.org
-
Ivan Khoronzhuk authored
Drop the -I$(objtree)/usr/include include path for bpf_load, as it is already added for all objects by the line above it: KBUILD_HOSTCFLAGS += -I$(objtree)/usr/include. Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20191011002808.28206-7-ivan.khoronzhuk@linaro.org
-
Ivan Khoronzhuk authored
For arm, -D__LINUX_ARM_ARCH__=X is the minimum version used as the instruction set selector and is absolutely required when parsing some parts of the headers. It is present in KBUILD_CFLAGS but not in autoconf.h, so retrieve it from there and add it to the programs' cflags. Otherwise, errors like "SMP is not supported" for armv7, and a bunch of other errors, are issued, resulting in an incorrect final object. Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20191011002808.28206-6-ivan.khoronzhuk@linaro.org
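A hedged illustration of why the define matters (not the exact kernel header; the guard below is simplified for the example):

    /* Sketch: arm headers select code paths based on __LINUX_ARM_ARCH__,
     * roughly along these lines. Without the define, the "SMP is not
     * supported" class of errors mentioned above fires during parsing. */
    #ifndef __LINUX_ARM_ARCH__
    #error "__LINUX_ARM_ARCH__ must be defined to pick the instruction set"
    #elif __LINUX_ARM_ARCH__ < 6 && defined(CONFIG_SMP)
    #error "SMP is not supported"
    #endif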
-
Ivan Khoronzhuk authored
It can overlap with the CFLAGS used for libraries built with gcc, if not now then in later patches. Correct it here for simplicity. Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20191011002808.28206-5-ivan.khoronzhuk@linaro.org
-
Ivan Khoronzhuk authored
For cross compiling, the target triple can be inherited from the cross-compile prefix, as is done for CLANG_FLAGS in the kernel Makefile. So copy this decision from the kernel Makefile. Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20191011002808.28206-4-ivan.khoronzhuk@linaro.org
-
Ivan Khoronzhuk authored
Don't list the userspace "cookie_uid_helper_example" object in the list of bpf objects. The 'always' target is used for listing bpf programs, but 'cookie_uid_helper_example.o' is a user-space ELF file, already covered by the `per_socket_stats_example` rule, so it shouldn't be in 'always'. Remove `always += cookie_uid_helper_example.o`, which avoids breaking cross compilation due to mismatched includes. Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20191011002808.28206-3-ivan.khoronzhuk@linaro.org
-
Ivan Khoronzhuk authored
To handle '\n' correctly, echo would need to be replaced with 'echo -e'; instead, replace it with printf, as some systems can't handle 'echo -e'. Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20191011002808.28206-2-ivan.khoronzhuk@linaro.org
-
- 11 Oct, 2019 7 commits
-
-
Andrii Nakryiko authored
Old GCC versions produce an invalid typedef for __gnuc_va_list that points to void. Special-case this and emit the valid form: typedef __builtin_va_list __gnuc_va_list; Reported-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Martin KaFai Lau <kafai@fb.com> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/20191011032901.452042-1-andriin@fb.com
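For illustration (the broken rendering below is a guess; only the valid line is quoted from the change above):

    /* BTF emitted by old GCC makes the typedef point to void, which a
     * straight dump would render as invalid C, roughly:
     *
     *     typedef void __gnuc_va_list;   // hypothetical broken output
     *
     * With the special case, btf_dump emits the valid form instead: */
    typedef __builtin_va_list __gnuc_va_list;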
-
Andrii Nakryiko authored
The existing BPF_CORE_READ() macro generates slightly suboptimal code. If there are intermediate pointers to be read, the initial source pointer is assigned into a temporary variable and then that temporary is uniformly used as the "source" pointer for all intermediate pointer reads. Schematically (ignoring all the type casts), BPF_CORE_READ(s, a, b, c) is expanded into:

    ({
        const void *__t = src;
        bpf_probe_read(&__t, sizeof(*__t), &__t->a);
        bpf_probe_read(&__t, sizeof(*__t), &__t->b);
        typeof(s->a->b->c) __r;
        bpf_probe_read(&__r, sizeof(*__r), &__t->c);
    })

The initial `__t = src` makes the calls more uniform, but sometimes causes slightly less optimal register usage when compiled with Clang, which can cascade into, e.g., more register spills. This patch fixes the issue by generating a more optimal sequence:

    ({
        const void *__t;
        bpf_probe_read(&__t, sizeof(*__t), &src->a); /* <-- src here */
        bpf_probe_read(&__t, sizeof(*__t), &__t->b);
        typeof(s->a->b->c) __r;
        bpf_probe_read(&__r, sizeof(*__r), &__t->c);
    })

Fixes: 7db3822a ("libbpf: Add BPF_CORE_READ/BPF_CORE_READ_INTO helpers") Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20191011023847.275936-1-andriin@fb.com
-
Anton Ivanov authored
Fix the typo 'boolian' to 'boolean'. Signed-off-by: Anton Ivanov <anton.ivanov@cambridgegreys.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20191011084303.28418-1-anton.ivanov@cambridgegreys.com
-
Andrii Nakryiko authored
Fix "warning: cast to pointer from integer of different size" when casting u64 addr to void *. Fixes: a23740ec ("bpf: Track contents of read-only maps as scalars") Reported-by: kbuild test robot <lkp@intel.com> Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20191011172053.2980619-1-andriin@fb.com
-
Jakub Sitnicki authored
Make sure a new flow dissector program can be attached to replace the old one with a single syscall. Also check that attaching the same program twice is prohibited. Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Stanislav Fomichev <sdf@google.com> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20191011082946.22695-3-jakub@cloudflare.com
-
Jakub Sitnicki authored
It is currently not possible to detach the flow dissector program and attach a new one in an atomic fashion, that is, with a single syscall. Attempts to do so are met with an EEXIST error. This makes updates to the flow dissector program hard: traffic steering that relies on BPF-powered flow dissection gets disrupted while the old program has already been detached but the new one has not yet been attached. There is also a window of opportunity to attach a flow dissector to a non-root namespace while updating the root flow dissector, thus blocking the update. Lastly, the behavior is inconsistent with cgroup BPF programs, which can be replaced with a single bpf(BPF_PROG_ATTACH, ...) syscall without any restrictions. Allow attaching a new flow dissector program when another one is already present, with the restriction that it can't be the same program. Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Stanislav Fomichev <sdf@google.com> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20191011082946.22695-2-jakub@cloudflare.com
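A minimal user-space sketch of the resulting semantics using libbpf's bpf_prog_attach() (a sketch under the assumption that new_fd comes from a previously loaded flow dissector program and that some other dissector is already attached; error handling is reduced to return codes):

    #include <errno.h>
    #include <bpf/bpf.h>

    int replace_flow_dissector(int new_fd)
    {
        /* attaching a different program now replaces the old one
         * atomically, in a single syscall */
        if (bpf_prog_attach(new_fd, 0, BPF_FLOW_DISSECTOR, 0))
            return -errno;

        /* re-attaching the very same program is still rejected */
        if (!bpf_prog_attach(new_fd, 0, BPF_FLOW_DISSECTOR, 0))
            return -1; /* unexpectedly succeeded */

        return 0;
    }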
-
Eric Dumazet authored
Do not risk spanning these small structures across two cache lines. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20191011181140.2898-1-edumazet@google.com
-
- 10 Oct, 2019 4 commits
-
-
Daniel Borkmann authored
Andrii Nakryiko says: ==================== With BPF maps supporting direct map access (currently, an array_map with a single element, used for global data) that are read-only from both the system call and the BPF side, it's possible for the BPF verifier to track their contents as known constants. A user-space control app can now pre-initialize a read-only map (e.g., for the .rodata section) with user-provided flags and parameters and rely on the BPF verifier to detect and eliminate dead code resulting from the specific combination of input parameters.

v1->v2:
- BPF_F_RDONLY means nothing, stick to just map->frozen (Daniel);
- stick to passing just offset into map_direct_value_addr (Martin).
==================== Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
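A hedged sketch of the kind of BPF C this enables (program and variable names are made up; assumes the usual clang -target bpf build and that user space freezes the map before load):

    #include <linux/bpf.h>
    #include "bpf_helpers.h"  /* header location depends on the tree/libbpf install */

    /* lands in .rodata; user space may set it before load, after which
     * the map is frozen and read-only on both sides */
    const volatile int feature_on = 0;

    SEC("raw_tracepoint/sys_enter")
    int probe(void *ctx)
    {
        int work = 0;

        if (feature_on) /* a known constant to the verifier ... */
            work = 1;   /* ... so this branch is pruned as dead code
                         * when the value stays 0 */
        return work;
    }

    char _license[] SEC("license") = "GPL";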
-
Andrii Nakryiko authored
Add tests checking that the verifier does proper constant propagation for read-only maps. If constant propagation didn't work, the skipp_loop and part_loop BPF programs would be rejected, because the BPF verifier would otherwise not be able to prove that they ever complete. With constant propagation, though, they are successfully validated as properly terminating loops. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20191009201458.2679171-3-andriin@fb.com
-
Andrii Nakryiko authored
Maps that are read-only from both the BPF program side and the user-space side have constant contents, so the verifier can track the referenced values precisely and use that knowledge for dead code elimination, branch pruning, etc. This patch teaches the BPF verifier how to do this. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20191009201458.2679171-2-andriin@fb.com
-
Andrii Nakryiko authored
Fix the 'struct xpd_md' typo in the output generated by bpf_helpers_doc.py, which causes compilation warnings for programs using bpf_helpers.h. Fixes: 7a387bed ("scripts/bpf: teach bpf_helpers_doc.py to dump BPF helper definitions") Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20191010042534.290562-1-andriin@fb.com
-
- 09 Oct, 2019 5 commits
-
-
Ilya Maximets authored
'struct xdp_umem_reg' has 4 bytes of padding at the end that make valgrind complain about passing uninitialized stack memory to the syscall:

    Syscall param socketcall.setsockopt() points to uninitialised byte(s)
      at 0x4E7AB7E: setsockopt (in /usr/lib64/libc-2.29.so)
      by 0x4BDE035: xsk_umem__create@@LIBBPF_0.0.4 (xsk.c:172)
    Uninitialised value was created by a stack allocation
      at 0x4BDDEBA: xsk_umem__create@@LIBBPF_0.0.4 (xsk.c:140)

The padding bytes appeared after the introduction of the new 'flags' field. A memset() is required to clear them. Fixes: 10d30e30 ("libbpf: add flags to umem config") Signed-off-by: Ilya Maximets <i.maximets@ovn.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20191009164929.17242-1-i.maximets@ovn.org
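A minimal sketch of that fix shape (assumed, not the verbatim patch): zero the whole struct before filling it so the trailing padding is initialized as well:

    #include <stdint.h>
    #include <string.h>
    #include <linux/if_xdp.h>  /* struct xdp_umem_reg */

    static void prepare_umem_reg(struct xdp_umem_reg *mr, void *umem_area,
                                 __u64 size, __u32 chunk_size, __u32 headroom)
    {
        /* also clears the trailing padding bytes valgrind flagged */
        memset(mr, 0, sizeof(*mr));
        mr->addr = (uintptr_t)umem_area;
        mr->len = size;
        mr->chunk_size = chunk_size;
        mr->headroom = headroom;
        /* mr->flags stays 0 unless the caller sets it afterwards */
    }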
-
Alexei Starovoitov authored
Andrii Nakryiko says: ==================== Fix the BTF-to-C logic for handling padding at the end of a struct. Fix the existing test that should have caught this. Also move test_btf_dump under test_progs to leverage the common infrastructure. ==================== Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
The existing btf_dump padding test case was supposed to cover padding generation at the end of a struct, but its expected output was specified incorrectly. Fix this. Fixes: 2d2a3ad8 ("selftests/bpf: add btf_dump BTF-to-C conversion tests") Reported-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20191008231009.2991130-4-andriin@fb.com
-
Andrii Nakryiko authored
Convert test_btf_dump into a part of test_progs, instead of a stand-alone test binary. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20191008231009.2991130-3-andriin@fb.com
-
Andrii Nakryiko authored
Fix a case where explicit padding at the end of a struct is necessary due to non-standard alignment requirements of fields (which BTF doesn't capture explicitly). Fixes: 351131b5 ("libbpf: add btf_dump API for BTF-to-C conversion") Reported-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Tested-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/20191008231009.2991130-2-andriin@fb.com
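A hedged example of the situation (the struct is made up): the members cover only 4 bytes, but the forced alignment makes sizeof() 16, so a faithful C rendering needs explicit trailing padding that BTF never records as a field:

    struct aligned_sample {
        int a;
    } __attribute__((aligned(16)));

    /* sizeof(struct aligned_sample) == 16. A BTF-to-C dumper that must
     * reproduce that size without seeing the attribute has to emit
     * something along the lines of:
     *
     *     struct aligned_sample {
     *         int a;
     *         char __pad[12];
     *     };
     */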
-
- 08 Oct, 2019 12 commits
-
-
Daniel Borkmann authored
Andrii Nakryiko says: ==================== This patch set makes bpf_helpers.h and bpf_endian.h a part of libbpf itself for consumption by user BPF programs, not just selftests. It also splits off tracing helpers into bpf_tracing.h, which also becomes part of libbpf. Some of the legacy stuff (BPF_ANNOTATE_KV_PAIR, load_{byte,half,word}, bpf_map_def with unsupported fields, etc.) is extracted into a selftests-only bpf_legacy.h. All the selftests and samples are switched to use libbpf's headers and the selftests' own ones are removed.

As part of this patch set we also add variadic BPF_CORE_READ macros that simplify BPF CO-RE reads, especially the ones that have to follow a few pointers. E.g., what in the non-BPF world (and when using BCC) would be:

    int x = s->a->b.c->d; /* s, a, and b.c are pointers */

today would have to be written using explicit bpf_probe_read() calls as:

    void *t;
    int x;

    bpf_probe_read(&t, sizeof(t), s->a);
    bpf_probe_read(&t, sizeof(t), ((struct b *)t)->b.c);
    bpf_probe_read(&x, sizeof(x), ((struct c *)t)->d);

This is super inconvenient and distracts from program logic a lot. Now, with the added BPF_CORE_READ() macros, you can write the above as:

    int x = BPF_CORE_READ(s, a, b.c, d);

Up to 9 levels of pointer chasing are supported, which should be enough for any practical purpose, hopefully, without adding too many boilerplate macro definitions (though there is admittedly some, given how variadic and recursive C macros have to be implemented).

There is also a BPF_CORE_READ_INTO() variant, which relies on the caller to allocate space for the result:

    int x;
    BPF_CORE_READ_INTO(&x, s, a, b.c, d);

The result of the last bpf_probe_read() call in the chain of calls is the result of BPF_CORE_READ_INTO(). If any intermediate bpf_probe_read() call fails, then all the subsequent ones will fail too, so this is sufficient to know whether the overall "operation" succeeded or not. No short-circuiting of bpf_probe_read()s is done, though.

BPF_CORE_READ_STR_INTO() is added as well; it differs from BPF_CORE_READ_INTO() only in that the last bpf_probe_read() call (to read the final field after chasing pointers) is replaced with bpf_probe_read_str(). The result of bpf_probe_read_str() is returned as the result of the BPF_CORE_READ_STR_INTO() macro itself, so that applications can track the return code and/or the length of the read string.

Patch set outline:
- patch #1 undoes the previously added GCC-specific bpf-helpers.h include;
- patch #2 splits off legacy stuff we don't want to carry over;
- patch #3 adjusts CO-RE reloc tests to avoid a subsequent naming conflict with BPF_CORE_READ;
- patch #4 splits off bpf_tracing.h;
- patch #5 moves bpf_{helpers,endian,tracing}.h and bpf_helper_defs.h generation into libbpf and adjusts Makefiles to include libbpf for header search;
- patch #6 adds the variadic BPF_CORE_READ() macro family, as described above;
- patch #7 adds tests to verify all possible levels of pointer nestedness for BPF_CORE_READ(), as well as a correctness test for BPF_CORE_READ_STR_INTO().

v4->v5:
- move BPF_CORE_READ() stuff into bpf_core_read.h header (Alexei);
v3->v4:
- rebase on latest bpf-next master;
- bpf_helper_defs.h generation is moved into libbpf's Makefile;
v2->v3:
- small formatting fixes and macro () fixes (Song);
v1->v2:
- fix CO-RE reloc tests before bpf_helpers.h move (Song);
- split off legacy stuff we don't want to carry over (Daniel, Toke);
- split off bpf_tracing.h (Daniel);
- fix samples/bpf build (assuming other fixes are applied);
- switch remaining maps either to bpf_map_def_legacy or BTF-defined maps;
==================== Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Andrii Nakryiko authored
Validate BPF_CORE_READ correctness and handling of up to 9 levels of nesting using cyclic task->(group_leader->)*->tgid chains. Also add a test of the maximum-depth BPF_CORE_READ_STR_INTO() macro. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: John Fastabend <john.fastabend@gmail.com> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20191008175942.1769476-8-andriin@fb.com
-
Andrii Nakryiko authored
Add a few macros simplifying BCC-like multi-level probe reads, while also emitting CO-RE relocations for each read. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: John Fastabend <john.fastabend@gmail.com> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20191008175942.1769476-7-andriin@fb.com
-
Andrii Nakryiko authored
Move bpf_helpers.h, bpf_tracing.h, and bpf_endian.h into libbpf. Move bpf_helper_defs.h generation into libbpf's Makefile. Ensure all those headers are installed alongside the other libbpf headers. Also, adjust the selftests' and samples' include paths to pick them up from libbpf. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20191008175942.1769476-6-andriin@fb.com
-
Andrii Nakryiko authored
Split off the PT_REGS-related helpers into a bpf_tracing.h header. Adjust selftests and samples to include it where necessary. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: John Fastabend <john.fastabend@gmail.com> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20191008175942.1769476-5-andriin@fb.com
-
Andrii Nakryiko authored
To allow adding a variadic BPF_CORE_READ macro with slightly different syntax and semantics, define CORE_READ in the CO-RE reloc tests as a thin wrapper around the low-level bpf_core_read() macro, which in turn is just a wrapper around bpf_probe_read(). Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: John Fastabend <john.fastabend@gmail.com> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20191008175942.1769476-4-andriin@fb.com
-
Andrii Nakryiko authored
Split off a few legacy things from bpf_helpers.h into a separate bpf_legacy.h file:
- load_{byte|half|word};
- remove the extra inner_idx and numa_node fields from bpf_map_def and introduce bpf_map_def_legacy for use in samples;
- move BPF_ANNOTATE_KV_PAIR into bpf_legacy.h.
Adjust samples and selftests accordingly, either by including bpf_legacy.h and using bpf_map_def_legacy, or by switching to BTF-defined maps altogether (see the sketch below). Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: John Fastabend <john.fastabend@gmail.com> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20191008175942.1769476-3-andriin@fb.com
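For reference, a minimal sketch of the BTF-defined map style that maps can be switched to (the map name, key/value types and size are made up; assumes the __uint/__type convenience macros from bpf_helpers.h):

    #include <linux/bpf.h>
    #include "bpf_helpers.h"  /* header location depends on the tree/libbpf install */

    struct {
        __uint(type, BPF_MAP_TYPE_HASH);
        __uint(max_entries, 1024);
        __type(key, __u32);
        __type(value, __u64);
    } counts SEC(".maps");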
-
Andrii Nakryiko authored
Having GCC provide its own bpf-helper.h is not the right approach and is going to be changed. Undo the bpf_helpers.h change before moving bpf_helpers.h into libbpf. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Song Liu <songliubraving@fb.com> Acked-by: Ilya Leoshkevich <iii@linux.ibm.com> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/20191008175942.1769476-2-andriin@fb.com
-
Daniel T. Lee authored
Currently, in xdp_adjust_tail_kern.c, MAX_PCKT_SIZE is limited to 600. To make this size flexible, a static global variable 'max_pcktsz' is added. When a new packet size is set from user space, xdp_adjust_tail_kern.o will use this value as the new maximum packet size. The static global variable is accessible from the .data section with bpf_object__find_map* from user space, since it is treated as an internal map (accessible via the .bss/.data/.rodata suffix). If no '-P <MAX_PCKT_SIZE>' option is given, the maximum packet size defaults to 600. For clarity, change the helper used to fetch the map from 'bpf_map__next' to 'bpf_object__find_map_fd_by_name'. Also, change the way prog_fd and map_fd are tested from '!= 0' to '< 0', since an fd could be 0 when stdin is closed. Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20191007172117.3916-1-danieltimlee@gmail.com
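A hedged user-space sketch of that access pattern (the helper name and the internal map name string are illustrative; libbpf derives the real name by truncating the object name and appending ".data", and the single-element update below assumes 'max_pcktsz' is the only variable in .data):

    #include <bpf/bpf.h>
    #include <bpf/libbpf.h>

    static int set_max_pckt_size(struct bpf_object *obj, __u32 new_size)
    {
        __u32 key = 0;
        int map_fd;

        /* internal .data map; exact name is derived from the object name */
        map_fd = bpf_object__find_map_fd_by_name(obj, "xdp_adjust.data");
        if (map_fd < 0)
            return map_fd;

        /* overwrite the value backing the BPF-side 'max_pcktsz' global */
        return bpf_map_update_elem(map_fd, &key, &new_size, BPF_ANY);
    }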
-
Alexei Starovoitov authored
Stanislav Fomichev says: ==================== While having per-net-ns flow dissector programs is convenient for testing, security-wise it's better to have only one vetted global flow dissector implementation. Let's have a convention that when a BPF flow dissector is installed in the root namespace, child namespaces can't override it. The intended use-case is to attach a global BPF flow dissector early from the init scripts/systemd. Attaching the global dissector is prohibited if some non-root namespace already has a flow dissector attached. Also, attaching to a non-root namespace is prohibited when there is a flow dissector attached to the root namespace.

v3:
* drop extra check and empty line (Andrii Nakryiko)
v2:
* EPERM -> EEXIST (Song Liu)
* Make sure we don't have a dissector attached to non-root namespaces when attaching the global one (Andrii Nakryiko)
==================== Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Stanislav Fomichev authored
Make sure non-root namespaces get an error if root flow dissector is attached. Cc: Petar Penkov <ppenkov@google.com> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Stanislav Fomichev authored
Always use the init_net flow dissector BPF program if one is attached, and fall back to the per-net-namespace one otherwise. Also, deny installing new programs if there is already one attached to the root namespace. Users can still detach their BPF programs, but can't attach any new ones (-EEXIST). Cc: Petar Penkov <ppenkov@google.com> Acked-by: Andrii Nakryiko <andriin@fb.com> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-