- 11 Jun, 2018 1 commit
Gary Lin authored
The two map types (DevMap and CpuMap) are necessary to support bpf_redirect_map() in XDP.

v2: Use ArrayBase as the base class of DevMap and CpuMap

Signed-off-by: Gary Lin <glin@suse.com>
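A minimal sketch of how the redirect path can look from bcc, assuming the BPF_DEVMAP macro and its redirect_map() table method; the map name, size, and slot index are illustrative:

```c
#include <uapi/linux/bpf.h>

// Device map whose slots user space fills with egress interface indexes.
BPF_DEVMAP(tx_port, 8);

// XDP program: redirect every packet to the interface stored at slot 0.
int xdp_redirect_prog(struct xdp_md *ctx) {
    return tx_port.redirect_map(0, 0);
}
```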
- 01 Jun, 2018 1 commit
Quentin Monnet authored
Update doc/kernel-versions.md with latest eBPF features, map types, JIT-compiler, helpers. Synchronise headers with bpf-next (at commit bcece5dc40b9). Add prototypes for the following helpers:
- bpf_get_stack()
- bpf_skb_load_bytes_relative()
- bpf_fib_lookup()
- bpf_sock_hash_update()
- bpf_msg_redirect_hash()
- bpf_sk_redirect_hash()
- bpf_lwt_push_encap()
- bpf_lwt_seg6_store_bytes()
- bpf_lwt_seg6_adjust_srh()
- bpf_lwt_seg6_action()
- bpf_rc_repeat()
- bpf_rc_keydown()
- 29 Apr, 2018 2 commits
Quentin Monnet authored
- Update links in doc (make them point from net-next to linux, when relevant).
- Add helpers bpf_xdp_adjust_tail() and bpf_skb_get_xfrm_state() to documentation and headers.
- Synchronise helpers with latest net-next.
Yonghong Song authored
The motivation comes from pull request #1689, which attached a kprobe BPF program to the kernel function ttwu_do_wakeup for more accurate tracing. Unfortunately, it broke runqlat.py in my 4.17 environment, since ttwu_do_wakeup is inlined in my kernel with gcc 7.3.1. Kernel 4.17 introduced raw tracepoints, and this patch adds the relevant API to bcc. With this, we can use the tracepoints sched:{sched_wakeup, sched_wakeup_new, sched_switch} to measure runq latency more reliably.

Signed-off-by: Yonghong Song <yhs@fb.com>
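A minimal sketch of a raw tracepoint probe, assuming bcc's RAW_TRACEPOINT_PROBE macro; the argument layout follows sched_switch's TP_PROTO(bool preempt, struct task_struct *prev, struct task_struct *next), and the body is illustrative:

```c
#include <linux/sched.h>

RAW_TRACEPOINT_PROBE(sched_switch) {
    // ctx->args[] carries the raw tracepoint arguments.
    struct task_struct *prev = (struct task_struct *)ctx->args[1];
    struct task_struct *next = (struct task_struct *)ctx->args[2];
    u32 prev_pid = prev->pid;
    u32 next_pid = next->pid;
    bpf_trace_printk("switch %d -> %d\n", prev_pid, next_pid);
    return 0;
}
```

From Python, such a probe would be attached with b.attach_raw_tracepoint(tp="sched_switch", fn_name=...), assuming that is the API this patch adds.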
- 26 Apr, 2018 1 commit
Teng Qin authored
- 02 Apr, 2018 1 commit
Quentin Monnet authored
- Update links in doc (make them point from net-next to linux, when relevant).
- Fix kernel version for bpf_msg_*() helpers, 4.17 instead of 4.16, in doc and in header.
- Add helper bpf_bind() to documentation and headers.
- Synchronise helpers with latest net-next.
- 26 Mar, 2018 1 commit
Teng Qin authored
- 22 Mar, 2018 1 commit
Quentin Monnet authored
- 14 Feb, 2018 1 commit
Yonghong Song authored
Signed-off-by: Yonghong Song <yhs@fb.com>
- 06 Feb, 2018 2 commits
Joel Fernandes authored
BCC at the moment builds eBPF only considering the local architecture instead of the one that the user's target kernel is running on. For cross-compile environments, the ARCH environment variable is used to specify which architecture to build the kernel for. In this patch we add support to read ARCH and, if that's not set, fall back to detecting based on the local architecture. This patch borrows some code from a patch doing a similar thing for eBPF samples in the kernel that I submitted recently [1].

[1] https://patchwork.kernel.org/patch/9961801/

Signed-off-by: Joel Fernandes <joelaf@google.com>
Gary Lin authored
Define a new macro to make it easier to declare a program array. Also replace BPF_TABLE("prog") in examples and tests with BPF_PROG_ARRAY.

Signed-off-by: Gary Lin <glin@suse.com>
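A minimal sketch of the macro in use, together with a tail call through the table's call() method; the names, size, and slot index are illustrative:

```c
#include <uapi/linux/ptrace.h>

// Program array with 8 tail-call slots; user space populates the slots
// with the file descriptors of other BPF programs.
BPF_PROG_ARRAY(jump_table, 8);

int dispatch(struct pt_regs *ctx) {
    // Tail-call the program at slot 1; on success, control never returns.
    jump_table.call(ctx, 1);
    // Reached only if slot 1 is empty.
    return 0;
}
```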
- 02 Feb, 2018 1 commit
Song Liu authored
The semicolon is usually added when the macro is used. Update both the macro definition and all uses.

Signed-off-by: Song Liu <songliubraving@fb.com>
- 28 Jan, 2018 1 commit
Paul Chaignon authored
- 17 Jan, 2018 1 commit
Howard McLauchlan authored
- 08 Jan, 2018 1 commit
Yonghong Song authored
When running with the latest Linus tree and net-next, the Python test tests/python/test_tracepoint.py failed with the following symptoms:

```
......
R0=map_value(id=0,off=0,ks=4,vs=64,imm=0) R6=map_value(id=0,off=0,ks=4,vs=64,imm=0) R7=ctx(id=0,off=0,imm=0) R10=fp0,call_-1
34: (69) r1 = *(u16 *)(r7 +8)
35: (67) r1 <<= 48
36: (c7) r1 s>>= 48
37: (0f) r7 += r1
math between ctx pointer and register with unbounded min value is not allowed
......
```

The failure is due to tightening added to the verifier by the following commit:

```
commit f4be569a429987343e3f424d3854b3622ffae436
Author: Alexei Starovoitov <ast@kernel.org>
Date:   Mon Dec 18 20:12:00 2017 -0800

    bpf: fix integer overflows
    ......
```

This patch changes the `offset` type in `ctx + offset` from signed to unsigned so that there is no `unbounded min value`, allowing test_tracepoint.py to pass.

Signed-off-by: Yonghong Song <yhs@fb.com>
- 15 Dec, 2017 1 commit
Teng Qin authored
- 27 Oct, 2017 1 commit
Yonghong Song authored
Changes include:
- Add PT_REGS_FP to access the base (FP) register on x64.
- Use macros, instead of directly using ctx-><reg_name>, in a few places.
- Let user space fill in the value of PAGE_SIZE; otherwise, arm64 needs additional headers to get this value for the kernel.
- In tools/wakeuptime.py, arm64 and x86_64 have the same stack walker mechanism, but they have different symbols/macros to represent the kernel start address.

With these changes, the test py_test_tools_smoke can pass on arm64.

Signed-off-by: Yonghong Song <yhs@fb.com>
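A minimal sketch of the arch-independent accessors in a kprobe; the probe body is illustrative:

```c
#include <uapi/linux/ptrace.h>

int trace_entry(struct pt_regs *ctx) {
    // These macros expand to the right register for x64, arm64, powerpc...
    u64 arg1 = PT_REGS_PARM1(ctx);  // first function argument
    u64 sp   = PT_REGS_SP(ctx);    // stack pointer
    u64 fp   = PT_REGS_FP(ctx);    // frame/base pointer (added here for x64)
    bpf_trace_printk("arg1=%llu sp=%llx fp=%llx\n", arg1, sp, fp);
    return 0;
}
```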
- 18 Oct, 2017 2 commits
Sandipan Das authored
This fixes the definition of PT_REGS_SP() for powerpc to refer to GPR1.

Fixes: #529
Fixes: 4afa96a7 ("cc: introduce helpers to access pt_regs in an arch-independent manner")

Signed-off-by: Sandipan Das <sandipan@linux.vnet.ibm.com>
Yonghong Song authored
The function bpf_get_stackid is defined in helpers.h:

    int bpf_get_stackid(uintptr_t map, void *ctx, u64 flags)

But the same function is also defined in linux:include/linux/bpf.h:

    u64 bpf_get_stackid(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5);

If a BPF program also includes, directly or indirectly, linux/bpf.h, compilation will fail because of the incompatible definitions. This patch renames the bcc helpers.h definition to bcc_get_stackid to avoid this issue.

Signed-off-by: Yonghong Song <yhs@fb.com>
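For context, bcc programs usually reach this helper through the stack-trace table method rather than calling it directly; a minimal sketch, with the map name and size illustrative:

```c
BPF_STACK_TRACE(stack_traces, 1024);

int probe(struct pt_regs *ctx) {
    // Expands to the (now renamed) helper under the hood; flags 0 requests
    // a kernel stack with default behavior.
    int stack_id = stack_traces.get_stackid(ctx, 0);
    if (stack_id >= 0)
        bpf_trace_printk("stackid=%d\n", stack_id);
    return 0;
}
```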
- 26 Sep, 2017 1 commit
Paul Chaignon authored
LPM trie maps require the BPF_F_NO_PREALLOC flag on creation. The need for this flag is not obvious at first; this new macro should help avoid the mistake.
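A minimal sketch of an LPM trie declared through the macro, assuming it is the BPF_LPM_TRIE form that bakes in the flag; the key layout and sizes are illustrative:

```c
// LPM keys start with the number of significant prefix bits, followed
// by the address bytes being matched.
struct lpm_key {
    u32 prefixlen;
    u32 addr;       // IPv4 address
};

// The macro passes BPF_F_NO_PREALLOC at map creation, so callers cannot
// forget the flag.
BPF_LPM_TRIE(routes, struct lpm_key, u64, 1024);
```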
- 25 Sep, 2017 1 commit
Kirill Smelkov authored
For the following program:

```
#include <linux/interrupt.h>

// remember t(last-interrupt) on interface
int kprobe__handle_irq_event_percpu(struct pt_regs *ctx, struct irq_desc *desc)
{
    const char *irqname = desc->action->name;
    char c;

    bpf_probe_read(&c, 1, &irqname[0]);
    if (c != 'e')
        return 0;

    bpf_probe_read(&c, 1, &irqname[1]);
    if (c != 't')
        return 0;
    ...
```

LLVM gives warnings because irqaction->name is `const char *`:

```
/virtual/main.c:10:27: warning: passing 'const char *' to parameter of type 'void *' discards qualifiers [-Wincompatible-pointer-types-discards-qualifiers]
        bpf_probe_read(&c, 1, &irqname[0]);
                              ^~~~~~~~~~~
/virtual/main.c:13:27: warning: passing 'const char *' to parameter of type 'void *' discards qualifiers [-Wincompatible-pointer-types-discards-qualifiers]
        bpf_probe_read(&c, 1, &irqname[1]);
                              ^~~~~~~~~~~
...
```

Instead of adding casts everywhere in the source, fix the bpf_probe_read* signatures to indicate that the memory referenced by src won't be modified, as it should be.

P.S. bpf_probe_read_str was in fact already marked so in several places in comments, but not in the actual signature.
- 17 Aug, 2017 1 commit
Yonghong Song authored
In bcc, the internal BPF_F_TABLE macro defines a structure to contain all the table information for later easy extraction, and a global structure is defined with this type. Note that this structure will be allocated by LLVM during compilation. In the table structure, one of the fields is:

    _leaf_type data[_max_entries]

If _leaf_type and _max_entries are big, significant memory will be consumed. A big _leaf_type example is the BPF_STACK_TRACE map, with 127*8=1016 bytes per leaf. If max_entries is big as well, a significant amount of memory will be consumed by LLVM. This patch replaces `_leaf_type data[_max_entries]` with:

    unsigned int max_entries

Details of a test example can be found in issue #1291. For the example in #1291, without this patch, for a BPF_STACK_TRACE map with 1M entries, the RSS is roughly 3GB (roughly 3KB per entry). With this patch, it is 5.8MB.

Signed-off-by: Yonghong Song <yhs@fb.com>
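An illustrative before/after sketch of the descriptor change; the struct shapes are simplified, not bcc's exact internal definitions:

```c
struct stacktrace_t { u64 ip[127]; };       // 127*8 = 1016 bytes per leaf

// Before: the descriptor embedded the whole data region, so LLVM
// materialized max_entries leaves at compile time (~3GB for 1M entries).
struct table_before {
    int key;
    struct stacktrace_t leaf;
    struct stacktrace_t data[1024 * 1024];
};

// After: only the entry count is kept; the kernel allocates the real
// storage when the map is created.
struct table_after {
    int key;
    struct stacktrace_t leaf;
    unsigned int max_entries;
};
```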
- 31 Jul, 2017 1 commit
Nan Xiao authored
- 29 Jun, 2017 1 commit
Romain authored
- 24 May, 2017 1 commit
Teng Qin authored
- 16 May, 2017 1 commit
Teng Qin authored
- 09 May, 2017 1 commit
Teng Qin authored
Also use it in the RecordMySQLQuery example and update the documentation.
- 15 Apr, 2017 1 commit
Huapeng Zhou authored
- 01 Apr, 2017 1 commit
Paul Chaignon authored
Inserts an element into the map only if it does not already exist. Throws a warning during the rewriter step if used on a BPF array.
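A minimal sketch of the method on a hash map; the names are illustrative:

```c
BPF_HASH(start_ts, u32, u64);

int trace_start(struct pt_regs *ctx) {
    u32 pid = bpf_get_current_pid_tgid() >> 32;
    u64 ts = bpf_ktime_get_ns();
    // Unlike update(), insert() leaves an existing entry untouched, so
    // the first recorded timestamp wins.
    start_ts.insert(&pid, &ts);
    return 0;
}
```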
- 29 Mar, 2017 1 commit
Teng Qin authored
- 07 Mar, 2017 1 commit
Zvonko Kosic authored
- 09 Feb, 2017 1 commit
Brenden Blanco authored
Signed-off-by: Brenden Blanco <bblanco@gmail.com>
- 01 Feb, 2017 1 commit
Sasha Goldshtein authored
`__data_loc` fields are dynamically sized by the kernel at runtime. The field data follows the tracepoint structure entry and needs to be extracted in a special way. The `__data_loc` field itself is a 32-bit value that consists of two 16-bit parts: the high 16 bits are the length of the data, and the low 16 bits are the offset of the data from the beginning of the tracepoint structure. From a cursory look, there are >200 tracepoints in recent kernels that have this kind of field.

This patch fixes `tp_frontend_action.cc` to recognize and emit `__data_loc` fields correctly, as 32-bit opaque fields. Then, it introduces two helper macros:

`TP_DATA_LOC_READ(dst, field)` reads from `args->field` by finding the right offset and length and emitting the `bpf_probe_read` required to fetch the data. This will only work with new kernels.

`TP_DATA_LOC_READ_CONST(dst, field, length)` takes a user-specified length rather than finding it from `args->field`. This will work on older kernels, where the BPF verifier doesn't allow non-constant sizes to be passed to `bpf_probe_read`.
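A minimal sketch of the macros in a probe; irq:irq_handler_entry's name field is a `__data_loc` string, and the buffer size is illustrative:

```c
TRACEPOINT_PROBE(irq, irq_handler_entry) {
    char handler[64] = {};
    // New kernels: offset and length are decoded from args->name itself.
    TP_DATA_LOC_READ(handler, name);
    // Older kernels: pass a constant length instead:
    // TP_DATA_LOC_READ_CONST(handler, name, sizeof(handler));
    bpf_trace_printk("irq %d handler: %s\n", args->irq, handler);
    return 0;
}
```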
- 09 Jan, 2017 1 commit
Mauricio Vasquez B authored
Signed-off-by: Mauricio Vasquez B <mauricio.vasquez@polito.it>
- 05 Jan, 2017 1 commit
Brenden Blanco authored
Avoid conflicting [no]inline attributes in function annotation. This was probably always there, but now LLVM 4.0 is treating it as an error. Also, explicitly inline several functions in helpers.h.

Turn off unwind tables in the flags passed to clang. This was generating calls to the ELF relocator, which doesn't work for the BPF target. It is unclear which change in LLVM 4.0 altered this behavior.

On python3, handle byte strings in the usual way for supporting backwards compatibility.

Signed-off-by: Brenden Blanco <bblanco@gmail.com>
- 08 Dec, 2016 1 commit
Huapeng Zhou authored
- 06 Dec, 2016 1 commit
Zhiyi Sun authored
ABI for aarch64: registers x0-x7 are used for parameters and results. In bcc, there are 6 parameter registers defined, so x0-x5 are used as parameters. The frame pointer, link register, stack pointer, and PC are added to the PT_REGS_xx macros according to the arm64 architecture. The bpf syscall number for aarch64 is defined in the kernel header uapi/asm-generic/unistd.h.

Signed-off-by: Zhiyi Sun <zhiyisun@gmail.com>
- 24 Aug, 2016 1 commit
Martin KaFai Lau authored
For BPF_PROG_TYPE_SCHED_CLS/ACT, the upstream kernel has recently added a feature to efficiently output skb + meta data: commit 555c8a8623a3 ("bpf: avoid stack copy and use skb ctx for event output").

This patch adds perf_submit_skb to the BPF_PERF_OUTPUT macro. It takes an extra u32 argument, and perf_submit_skb is expanded to bpf_perf_event_output so as to treat the newly added u32 argument as the skb's len. Other than the above described changes, perf_submit_skb is almost a carbon copy of perf_submit, except for the removal of the 'string name' variable, since I cannot find a specific use of it. Note that the type of the 3rd param of bpf_perf_event_output has also been changed from u32 to u64.

Added a sample program tc_perf_event.py. Here is how the output looks:

```
[root@arch-fb-vm1 networking]# ./tc_perf_event.py
Try: "ping -6 ff02::1%me"

CPU  SRC IP                     DST IP   Magic
0    fe80::982f:5dff:fec1:e52b  ff02::1  0xfaceb00c
0    fe80::982f:5dff:fec1:e52b  ff02::1  0xfaceb00c
0    fe80::982f:5dff:fec1:e52b  ff02::1  0xfaceb00c
1    fe80::982f:5dff:fec1:e52b  ff02::1  0xfaceb00c
1    fe80::982f:5dff:fec1:e52b  ff02::1  0xfaceb00c
1    fe80::982f:5dff:fec1:e52b  ff02::1  0xfaceb00c
```
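A minimal sketch of the method from a tc classifier, mirroring the sample output above; the metadata word is illustrative:

```c
#include <uapi/linux/bpf.h>
#include <uapi/linux/pkt_cls.h>

BPF_PERF_OUTPUT(skb_events);

int handle_egress(struct __sk_buff *skb) {
    u32 magic = 0xfaceb00c;
    // The extra u32 (here skb->len) tells bpf_perf_event_output how many
    // packet bytes to append after the metadata.
    skb_events.perf_submit_skb(skb, skb->len, &magic, sizeof(magic));
    return TC_ACT_OK;
}
```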
- 06 Aug, 2016 1 commit
Omar Sandoval authored
Signed-off-by: Omar Sandoval <osandov@fb.com>
- 09 Jul, 2016 1 commit
Sasha Goldshtein authored
When a probe function refers to a tracepoint arguments structure, such as `struct tracepoint__irq__irq_handler_entry`, add that structure on-the-fly using a Clang frontend action that runs before any other steps take place. Typically, the user will create tracepoint probe functions using the TRACEPOINT_PROBE macro, which avoids the need for specifying the tracepoint category and event twice in the signature of the probe function.
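A minimal sketch of the macro in use; the generated struct tracepoint__sched__sched_switch exposes the event's format fields, and the field read here is illustrative:

```c
TRACEPOINT_PROBE(sched, sched_switch) {
    // 'args' points to the auto-generated tracepoint argument structure.
    bpf_trace_printk("next pid: %d\n", args->next_pid);
    return 0;
}
```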