  1. 11 Jun, 2018 1 commit
  2. 01 Jun, 2018 1 commit
    • sync BPF compat headers with latest bpf-next, update BPF features list · 8ce57acc
      Quentin Monnet authored
      Update doc/kernel-versions.md with latest eBPF features, map types,
      JIT-compiler, helpers.
      
      Synchronise headers with bpf-next (at commit bcece5dc40b9). Add
      prototypes for the following helpers:
      
      - bpf_get_stack()
      - bpf_skb_load_bytes_relative()
      - bpf_fib_lookup()
      - bpf_sock_hash_update()
      - bpf_msg_redirect_hash()
      - bpf_sk_redirect_hash()
      - bpf_lwt_push_encap()
      - bpf_lwt_seg6_store_bytes()
      - bpf_lwt_seg6_adjust_srh()
      - bpf_lwt_seg6_action()
      - bpf_rc_repeat()
      - bpf_rc_keydown()
  3. 29 Apr, 2018 2 commits
    • sync bpf compat headers with latest net-next, update doc for helpers · 4e285455
      Quentin Monnet authored
      - Update links in doc (make them point from net-next to linux, when
        relevant).
      - Add helpers bpf_xdp_adjust_tail() and bpf_skb_get_xfrm_state() to
        documentation and headers.
      - Synchronise helpers with latest net-next.
    • introduce {attach|detach}_raw_tracepoint API · 0d722379
      Yonghong Song authored
      The motivation comes from pull request #1689, which attached
      a kprobe bpf program to the kernel function ttwu_do_wakeup
      for more accurate tracing. Unfortunately, it broke runqlat.py
      in my 4.17 environment, since the ttwu_do_wakeup function is
      inlined in my kernel with gcc 7.3.1.
      
      4.17 introduced raw_tracepoint, and this patch
      adds the relevant API to bcc. With this,
      we can use the tracepoints
      sched:{sched_wakeup, sched_wakeup_new, sched_switch}
      to measure runq latency more reliably.
      Signed-off-by: Yonghong Song <yhs@fb.com>
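      As a rough illustration of the API this commit describes, a raw tracepoint probe can be attached from bcc's Python front end roughly as follows. This is a hedged sketch: the program body and handler name are invented for illustration, and actually loading it requires bcc, root privileges, and a kernel >= 4.17, so the attach step is kept in a function that is not executed here.

      ```python
      # BPF C program text for a raw tracepoint handler. Raw tracepoint
      # programs receive the tracepoint's raw arguments via
      # struct bpf_raw_tracepoint_args.
      prog = r"""
      int do_trace(struct bpf_raw_tracepoint_args *ctx) {
          bpf_trace_printk("sched_switch fired\\n");
          return 0;
      }
      """

      def attach_sched_switch():
          """Load prog and attach it to the sched_switch raw tracepoint.

          Requires bcc, kernel >= 4.17, and root; not executed here.
          """
          from bcc import BPF
          b = BPF(text=prog)
          b.attach_raw_tracepoint(tp="sched_switch", fn_name="do_trace")
          return b
      ```

      The same pattern applies to sched_wakeup and sched_wakeup_new; detaching goes through the corresponding detach_raw_tracepoint call.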
  4. 26 Apr, 2018 1 commit
  5. 02 Apr, 2018 1 commit
  6. 26 Mar, 2018 1 commit
  7. 22 Mar, 2018 1 commit
  8. 14 Feb, 2018 1 commit
  9. 06 Feb, 2018 2 commits
  10. 02 Feb, 2018 1 commit
  11. 28 Jan, 2018 1 commit
  12. 17 Jan, 2018 1 commit
  13. 08 Jan, 2018 1 commit
    • fix a verifier failure · 0cfd665b
      Yonghong Song authored
      When running with the latest Linus tree and net-next, the Python test
      tests/python/test_tracepoint.py failed with the following
      symptoms:
      ```
      ......
       R0=map_value(id=0,off=0,ks=4,vs=64,imm=0) R6=map_value(id=0,off=0,ks=4,vs=64,imm=0) R7=ctx(id=0,off=0,imm=0) R10=fp0,call_-1
      34: (69) r1 = *(u16 *)(r7 +8)
      35: (67) r1 <<= 48
      36: (c7) r1 s>>= 48
      37: (0f) r7 += r1
      math between ctx pointer and register with unbounded min value is not allowed
      ......
      ```
      
      The failure is caused by tightened verifier checks introduced by
      the following commit:
      ```
      commit f4be569a429987343e3f424d3854b3622ffae436
      Author: Alexei Starovoitov <ast@kernel.org>
      Date: Mon Dec 18 20:12:00 2017 -0800
      
      bpf: fix integer overflows
      ......
      
      ```
      
      The patch changes the type of `offset` in `ctx + offset` from signed to
      unsigned so that there is no `unbounded min value`, and the
      test test_tracepoint.py passes again.
      Signed-off-by: Yonghong Song <yhs@fb.com>
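      The `r1 <<= 48; r1 s>>= 48` pair in the verifier dump is the compiler sign-extending a 16-bit load. A quick plain-Python sketch (not tied to bcc) of why a signed 16-bit offset has an unbounded min value while an unsigned one does not:

      ```python
      def sext16(v):
          """Sign-extend a 16-bit value, mirroring r1 <<= 48; r1 s>>= 48."""
          v &= 0xffff
          return v - 0x10000 if v & 0x8000 else v

      # A signed 16-bit offset can be as low as -32768, so the verifier
      # cannot bound ctx + offset from below ("unbounded min value").
      print(sext16(0x8000))   # -32768
      print(sext16(0x0008))   # 8

      # Treated as unsigned, the same bits stay within [0, 65535].
      print(0x8000 & 0xffff)  # 32768
      ```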
  14. 15 Dec, 2017 1 commit
  15. 27 Oct, 2017 1 commit
    • bpf: make test py_test_tools_smoke pass on arm64 · eb6ddc0e
      Yonghong Song authored
      Changes include:
        . Add PT_REGS_FP to access the base (FP) register on x64
        . Use macros, instead of directly accessing ctx-><reg_name>,
          in a few places
        . Let userspace fill in the value of PAGE_SIZE.
          Otherwise, arm64 needs additional headers to
          get this value from the kernel.
        . In tools/wakeuptime.py, arm64 and x86_64 have
          the same stack walker mechanism, but they use
          different symbols/macros to represent the
          kernel start address.
      With these changes, the test py_test_tools_smoke
      can pass on arm64.
      Signed-off-by: Yonghong Song <yhs@fb.com>
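      The "let userspace fill in PAGE_SIZE" idea can be sketched in plain Python. The `-DPAGE_SIZE` flag name here is illustrative, not necessarily the exact mechanism bcc uses; the point is that the value comes from userspace instead of kernel headers.

      ```python
      import resource

      def page_size_cflag():
          """Build a -D flag so the BPF C program need not dig PAGE_SIZE
          out of architecture-specific kernel headers."""
          page = resource.getpagesize()
          # Page sizes are powers of two (4096 on x86_64; arm64 kernels
          # commonly use 4096 or 65536).
          assert page > 0 and page & (page - 1) == 0
          return "-DPAGE_SIZE=%d" % page
      ```

      A program would then be compiled with something like `BPF(text=prog, cflags=[page_size_cflag()])`.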
  16. 18 Oct, 2017 2 commits
  17. 26 Sep, 2017 1 commit
  18. 25 Sep, 2017 1 commit
    • bpf_probe_read*: src argument should be const void *. · 2dc7daad
      Kirill Smelkov authored
      For the following program:
      
          #include <linux/interrupt.h>
      
          // remember t(last-interrupt) on interface
          int kprobe__handle_irq_event_percpu(struct pt_regs *ctx, struct irq_desc *desc) {
              const char *irqname = desc->action->name;
      
              char c;
      
              bpf_probe_read(&c, 1, &irqname[0]);
              if (c != 'e') return 0;
      
              bpf_probe_read(&c, 1, &irqname[1]);
              if (c != 't') return 0;
      
              ...
      
      LLVM gives warnings because irqaction->name is `const char *`:
      
          /virtual/main.c:10:27: warning: passing 'const char *' to parameter of type 'void *' discards qualifiers [-Wincompatible-pointer-types-discards-qualifiers]
              bpf_probe_read(&c, 1, &irqname[0]);
                                    ^~~~~~~~~~~
          /virtual/main.c:13:27: warning: passing 'const char *' to parameter of type 'void *' discards qualifiers [-Wincompatible-pointer-types-discards-qualifiers]
              bpf_probe_read(&c, 1, &irqname[1]);
                                    ^~~~~~~~~~~
          ...
      
      Instead of adding casts everywhere in the source, fix the bpf_probe_read*
      signatures to indicate that the memory referenced by src won't be
      modified, as it should be.
      
      P.S.
      
      bpf_probe_read_str was in fact already marked this way in several places
      in comments, but not in the actual signature.
  19. 17 Aug, 2017 1 commit
    • avoid large map memory allocation in userspace · 067219b2
      Yonghong Song authored
      In bcc, the internal BPF_F_TABLE macro defines a structure
      containing all the table information for later easy
      extraction, and a global variable of this type is
      defined. Note that this structure is
      allocated by LLVM during compilation.
      
      In the table structure, one of the fields is:
         _leaf_type data[_max_entries]

      If _leaf_type and _max_entries are large,
      significant memory will be consumed. One example
      of a large _leaf_type is the BPF_STACK_TRACE map,
      whose leaf is 127*8 = 1016 bytes. If max_entries is
      large as well, a significant amount of memory will
      be consumed by LLVM.
      
      This patch replaces
        _leaf_type data[_max_entries]
      with
        unsigned int max_entries
      
      The detail of a test example can be found in issue #1291.
      For the example in #1291, without this patch, for a
      BPF_STACK_TRACE map with 1M entries, the RSS is roughly
      3GB (roughly 3KB per entry). With this patch, it is 5.8MB.
      Signed-off-by: Yonghong Song <yhs@fb.com>
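      The arithmetic behind the numbers above, as a quick check (the observed ~3 KB/entry RSS includes LLVM bookkeeping beyond the raw leaf size):

      ```python
      # Leaf size of a BPF_STACK_TRACE map: 127 stack frames of 8 bytes each.
      leaf_size = 127 * 8          # 1016 bytes
      entries = 1 << 20            # 1M entries, as in issue #1291

      # Old scheme: the dummy array _leaf_type data[_max_entries] alone
      # accounts for about 1 GiB; observed RSS (~3 GB) is higher because
      # of additional LLVM overhead per entry.
      array_bytes = leaf_size * entries
      print(array_bytes)           # 1065353216 bytes, ~0.99 GiB

      # New scheme: the array is replaced by a single unsigned int,
      # so the per-map cost of this field drops to 4 bytes.
      ```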
  20. 31 Jul, 2017 1 commit
  21. 29 Jun, 2017 1 commit
  22. 24 May, 2017 1 commit
  23. 16 May, 2017 1 commit
  24. 09 May, 2017 1 commit
  25. 15 Apr, 2017 1 commit
  26. 01 Apr, 2017 1 commit
  27. 29 Mar, 2017 1 commit
  28. 07 Mar, 2017 1 commit
  29. 09 Feb, 2017 1 commit
  30. 01 Feb, 2017 1 commit
    • cc: Support for __data_loc tracepoint fields · b9545a5c
      Sasha Goldshtein authored
      `__data_loc` fields are dynamically sized by the kernel at
      runtime. The field data follows the tracepoint structure entry,
      and needs to be extracted in a special way. The `__data_loc` field
      itself is a 32-bit value that consists of two 16-bit parts: the
      high 16 bits are the length of the data, and the low 16 bits are
      the offset of the data from the beginning of the tracepoint
      structure. From a cursory look, there are >200 tracepoints in
      recent kernels that have this kind of field.
      
      This patch fixes `tp_frontend_action.cc` to recognize and emit
      `__data_loc` fields correctly, as 32-bit opaque fields. Then, it
      introduces two helper macros:
      
      `TP_DATA_LOC_READ(dst, field)` reads from `args->field` by finding
      the right offset and length and emitting the `bpf_probe_read`
      required to fetch the data. This will only work with new kernels.
      
      `TP_DATA_LOC_READ_CONST(dst, field, length)` takes a user-specified
      length rather than finding it from `args->field`. This will work
      on older kernels, where the BPF verifier doesn't allow non-constant
      sizes to be passed to `bpf_probe_read`.
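      The 16/16 split described above can be decoded as follows. This is a plain-Python sketch mirroring what the code generated by `TP_DATA_LOC_READ` has to compute; the example value is invented for illustration.

      ```python
      def decode_data_loc(value):
          """Split a 32-bit __data_loc value into (offset, length).

          The low 16 bits are the offset of the data from the start of
          the tracepoint structure; the high 16 bits are its length.
          """
          offset = value & 0xffff
          length = (value >> 16) & 0xffff
          return offset, length

      # e.g. data at offset 44, 12 bytes long, encodes as (12 << 16) | 44:
      print(decode_data_loc((12 << 16) | 44))  # (44, 12)
      ```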
  31. 09 Jan, 2017 1 commit
  32. 05 Jan, 2017 1 commit
    • Fixes for LLVM 4.0 and python3 · 2d862046
      Brenden Blanco authored
      Avoid conflicting [no]inline attributes in function annotation. This was
      probably always there, but LLVM 4.0 now treats it as an error.
      Also, explicitly inline several functions in helpers.h.
      
      Turn off unwind tables in the flags passed to clang. This was generating
      calls to the elf relocator, which doesn't work for the BPF target. It is
      unclear which change in LLVM 4.0 altered this behavior.
      
      On python3, handle byte strings in the usual way for supporting
      backwards compatibility.
      Signed-off-by: Brenden Blanco <bblanco@gmail.com>
  33. 08 Dec, 2016 1 commit
  34. 06 Dec, 2016 1 commit
    • Add support for aarch64 · 8e434b79
      Zhiyi Sun authored
      ABI for aarch64: registers x0-x7 are used for parameters and the
      result. In bcc, six parameter registers are defined, so x0-x5 are
      used as parameters. The frame pointer, link register, stack pointer
      and PC are added to the PT_REGS_xx macros according to the arm64
      architecture.

      The bpf syscall number for aarch64 is defined in the kernel
      header uapi/asm-generic/unistd.h.
      Signed-off-by: Zhiyi Sun <zhiyisun@gmail.com>
  35. 24 Aug, 2016 1 commit
    • Add perf_submit_skb · bdad3840
      Martin KaFai Lau authored
      For BPF_PROG_TYPE_SCHED_CLS/ACT, the upstream kernel has recently added a
      feature to efficiently output skb + meta data:
      commit 555c8a8623a3 ("bpf: avoid stack copy and use skb ctx for event output")
      
      This patch adds perf_submit_skb to BPF_PERF_OUTPUT macro.  It takes
      an extra u32 argument.  perf_submit_skb will then be expanded to
      bpf_perf_event_output properly to consider the newly added
      u32 argument as the skb's len.
      
      Other than the changes described above, perf_submit_skb is almost
      a carbon copy of perf_submit, except for the removal of the 'string name'
      variable, since I cannot find a specific use of it.
      
      Note that the 3rd param type of bpf_perf_event_output has also been
      changed from u32 to u64.
      
      Added a sample program, tc_perf_event.py. Here is what the output
      looks like:
      [root@arch-fb-vm1 networking]# ./tc_perf_event.py
      Try: "ping -6 ff02::1%me"
      
      CPU SRC IP                           DST IP       Magic
      0   fe80::982f:5dff:fec1:e52b        ff02::1      0xfaceb00c
      0   fe80::982f:5dff:fec1:e52b        ff02::1      0xfaceb00c
      0   fe80::982f:5dff:fec1:e52b        ff02::1      0xfaceb00c
      1   fe80::982f:5dff:fec1:e52b        ff02::1      0xfaceb00c
      1   fe80::982f:5dff:fec1:e52b        ff02::1      0xfaceb00c
      1   fe80::982f:5dff:fec1:e52b        ff02::1      0xfaceb00c
  36. 06 Aug, 2016 1 commit
  37. 09 Jul, 2016 1 commit
    • cc: Rewrite probe functions that refer to tracepoint structures · fab68e3a
      Sasha Goldshtein authored
      When a probe function refers to a tracepoint arguments structure,
      such as `struct tracepoint__irq__irq_handler_entry`, add that structure
      on-the-fly using a Clang frontend action that runs before any other
      steps take place.
      
      Typically, the user will create tracepoint probe functions using
      the TRACEPOINT_PROBE macro, which avoids the need for specifying
      the tracepoint category and event twice in the signature of the
      probe function.
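      A hedged sketch of the workflow this entry describes, using bcc's TRACEPOINT_PROBE macro. Only the program text is built here; compiling it requires bcc, root privileges, and a kernel exposing the irq:irq_handler_entry tracepoint, so loading is kept in a function that is not executed.

      ```python
      # The probe names its category (irq) and event (irq_handler_entry)
      # exactly once, in the macro; the Clang frontend action synthesizes
      # the matching struct tracepoint__irq__irq_handler_entry on the fly,
      # which is what makes args-> usable in the body.
      prog = r"""
      TRACEPOINT_PROBE(irq, irq_handler_entry) {
          bpf_trace_printk("irq %d\\n", args->irq);
          return 0;
      }
      """

      def load():
          """Compile and load prog. Requires bcc and root; not executed here."""
          from bcc import BPF
          return BPF(text=prog)
      ```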