1. 27 Sep, 2021 13 commits
  2. 26 Sep, 2021 5 commits
    • Merge branch 'bpf: Support <8-byte scalar spill and refill' · e7d5184b
      Alexei Starovoitov authored
Martin KaFai Lau says:
      
      ====================
      
The verifier currently does not save the reg state when
spilling a <8-byte bounded scalar to the stack.  The bpf program
is then incorrectly rejected when this scalar is refilled into
a reg and used to offset into a packet header.
A later patch has a simplified bpf prog from a real use case
to demonstrate this.  The current workaround is
to reparse the packet so that this offset scalar
is computed close to where the packet data will be accessed,
avoiding the spill.  Thus, the header is parsed twice.
      
The llvm patch [1] will align the <8-byte spill to
an 8-byte stack address.  This set makes the necessary
verifier changes to support <8-byte scalar spill and refill.
      
      [1] https://reviews.llvm.org/D109073
      
      v2:
- Changed the xdpwall selftest in patch 3 to trigger a u32
  spill at a non-8-byte-aligned stack address.  v1 had
  simplified the real example too much, so it only
  triggered a u32 spill but not at a non-8-byte-aligned
  stack address.
      - Changed README.rst in patch 3 to explain the llvm dependency
        for the xdpwall test.
      ====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      e7d5184b
    • bpf: selftest: Add verifier tests for <8-byte scalar spill and refill · ef979017
      Martin KaFai Lau authored
      This patch adds a few verifier tests for <8-byte spill and refill.
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20210922004953.627183-1-kafai@fb.com
      ef979017
    • bpf: selftest: A bpf prog that has a 32bit scalar spill · 54ea6079
      Martin KaFai Lau authored
It is a simplified example that can trigger a 32bit scalar spill.
The const scalar is later refilled and added to skb->data.
Since the reg state of the 32bit scalar spill is not saved,
adding the refilled reg to skb->data and then comparing it with
skb->data_end cannot verify the skb->data access.
      
With the earlier verifier patch and the llvm patch [1], the verifier
can correctly verify the bpf prog.
      
Here is the snippet of the verifier log that leads to the verifier's
conclusion that the packet data is unsafe to read.  The log is from a kernel
without the previous verifier patch that saves the <8-byte scalar spill.
      67: R0=inv1 R1=inv17 R2=invP2 R3=inv1 R4=pkt(id=0,off=68,r=102,imm=0) R5=inv102 R6=pkt(id=0,off=62,r=102,imm=0) R7=pkt(id=0,off=0,r=102,imm=0) R8=pkt_end(id=0,off=0,imm=0) R9=inv17 R10=fp0
      67: (63) *(u32 *)(r10 -12) = r5
      68: R0=inv1 R1=inv17 R2=invP2 R3=inv1 R4=pkt(id=0,off=68,r=102,imm=0) R5=inv102 R6=pkt(id=0,off=62,r=102,imm=0) R7=pkt(id=0,off=0,r=102,imm=0) R8=pkt_end(id=0,off=0,imm=0) R9=inv17 R10=fp0 fp-16=mmmm????
      ...
      101: R0_w=map_value_or_null(id=2,off=0,ks=16,vs=1,imm=0) R6_w=pkt(id=0,off=70,r=102,imm=0) R7=pkt(id=0,off=0,r=102,imm=0) R8=pkt_end(id=0,off=0,imm=0) R9=inv17 R10=fp0 fp-16=mmmmmmmm
      101: (61) r1 = *(u32 *)(r10 -12)
      102: R0_w=map_value_or_null(id=2,off=0,ks=16,vs=1,imm=0) R1_w=inv(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R6_w=pkt(id=0,off=70,r=102,imm=0) R7=pkt(id=0,off=0,r=102,imm=0) R8=pkt_end(id=0,off=0,imm=0) R9=inv17 R10=fp0 fp-16=mmmmmmmm
      102: (bc) w1 = w1
      103: R0_w=map_value_or_null(id=2,off=0,ks=16,vs=1,imm=0) R1_w=inv(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R6_w=pkt(id=0,off=70,r=102,imm=0) R7=pkt(id=0,off=0,r=102,imm=0) R8=pkt_end(id=0,off=0,imm=0) R9=inv17 R10=fp0 fp-16=mmmmmmmm
      103: (0f) r7 += r1
      last_idx 103 first_idx 67
      regs=2 stack=0 before 102: (bc) w1 = w1
      regs=2 stack=0 before 101: (61) r1 = *(u32 *)(r10 -12)
      104: R0_w=map_value_or_null(id=2,off=0,ks=16,vs=1,imm=0) R1_w=invP(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R6_w=pkt(id=0,off=70,r=102,imm=0) R7_w=pkt(id=3,off=0,r=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R8=pkt_end(id=0,off=0,imm=0) R9=inv17 R10=fp0 fp-16=mmmmmmmm
      ...
      127: R0_w=inv1 R1=invP(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R6=pkt(id=0,off=70,r=102,imm=0) R7=pkt(id=3,off=0,r=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R8=pkt_end(id=0,off=0,imm=0) R9_w=invP17 R10=fp0 fp-16=mmmmmmmm
      127: (bf) r1 = r7
      128: R0_w=inv1 R1_w=pkt(id=3,off=0,r=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R6=pkt(id=0,off=70,r=102,imm=0) R7=pkt(id=3,off=0,r=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R8=pkt_end(id=0,off=0,imm=0) R9_w=invP17 R10=fp0 fp-16=mmmmmmmm
      128: (07) r1 += 8
      129: R0_w=inv1 R1_w=pkt(id=3,off=8,r=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R6=pkt(id=0,off=70,r=102,imm=0) R7=pkt(id=3,off=0,r=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R8=pkt_end(id=0,off=0,imm=0) R9_w=invP17 R10=fp0 fp-16=mmmmmmmm
      129: (b4) w0 = 1
      130: R0=inv1 R1=pkt(id=3,off=8,r=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R6=pkt(id=0,off=70,r=102,imm=0) R7=pkt(id=3,off=0,r=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R8=pkt_end(id=0,off=0,imm=0) R9=invP17 R10=fp0 fp-16=mmmmmmmm
      130: (2d) if r1 > r8 goto pc-66
       R0=inv1 R1=pkt(id=3,off=8,r=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R6=pkt(id=0,off=70,r=102,imm=0) R7=pkt(id=3,off=0,r=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R8=pkt_end(id=0,off=0,imm=0) R9=invP17 R10=fp0 fp-16=mmmmmmmm
      131: R0=inv1 R1=pkt(id=3,off=8,r=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R6=pkt(id=0,off=70,r=102,imm=0) R7=pkt(id=3,off=0,r=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R8=pkt_end(id=0,off=0,imm=0) R9=invP17 R10=fp0 fp-16=mmmmmmmm
      131: (69) r6 = *(u16 *)(r7 +0)
      invalid access to packet, off=0 size=2, R7(id=3,off=0,r=0)
      R7 offset is outside of the packet
      
[1]: https://reviews.llvm.org/D109073
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20210922004947.626286-1-kafai@fb.com
      54ea6079
    • bpf: Support <8-byte scalar spill and refill · 354e8f19
      Martin KaFai Lau authored
The verifier currently does not save the reg state when
spilling a <8-byte bounded scalar to the stack.  The bpf program
is then incorrectly rejected when this scalar is refilled into
a reg and used to offset into a packet header.
A later patch has a simplified bpf prog from a real use case
to demonstrate this.  The current workaround is
to reparse the packet so that this offset scalar
is computed close to where the packet data will be accessed,
avoiding the spill.  Thus, the header is parsed twice.
      
The llvm patch [1] will align the <8-byte spill to
an 8-byte stack address.  This simplifies the verifier
support by avoiding the need to store multiple reg states for
each 8-byte stack slot.
      
This patch changes the verifier to save the reg state when
spilling a <8-byte scalar to the stack.  This reg state saving
is limited to spills aligned to an 8-byte stack address.
The current refill logic has already called coerce_reg_to_size(),
so coerce_reg_to_size() is not called on state->stack[spi].spilled_ptr
during spill.
      
When refilling in check_stack_read_fixed_off(), it checks that
the refill size is the same as the number of bytes marked with
STACK_SPILL before restoring the reg state.  When restoring
the reg state to state->regs[dst_regno], it needs
to avoid overwriting state->regs[dst_regno].subreg_def,
which has already been marked by the earlier check_reg_arg()
[check_mem_access() is called after check_reg_arg() in
do_check()].  Reordering check_mem_access() and check_reg_arg()
would require a lot of changes in test_verifier's tests because
of differences in the verifier's error messages.  Thus, this
patch saves state->regs[dst_regno].subreg_def
first in check_stack_read_fixed_off().
      
There are cases where the verifier needs to scrub the spilled slot
from STACK_SPILL to STACK_MISC.  After this patch a spill is no longer
always 8 bytes, so the verifier can no longer assume the other 7 bytes
are always marked as STACK_SPILL.  In particular, the scrub must avoid
promoting an uninitialized byte from STACK_INVALID to STACK_MISC.
Otherwise, the verifier would incorrectly accept a bpf program reading
uninitialized bytes from the stack.  A new helper scrub_spilled_slot()
is created for this purpose.
      
[1]: https://reviews.llvm.org/D109073
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20210922004941.625398-1-kafai@fb.com
      354e8f19
    • bpf: Check the other end of slot_type for STACK_SPILL · 27113c59
      Martin KaFai Lau authored
Every 8 bytes of the stack is tracked by a bpf_stack_state.
Within each bpf_stack_state, there is a 'u8 slot_type[8]' to track
the type of each byte.  The verifier tests slot_type[0] == STACK_SPILL
to decide if the spilled reg state is saved.  The verifier currently
only saves the reg state if the whole 8 bytes are spilled to the stack,
so checking slot_type[7] is the same as checking slot_type[0].

A later patch will allow the verifier to save the bounded scalar
reg also for <8-byte spills.  There is an llvm patch [1] to ensure
the <8-byte spill will be 8-byte aligned, so checking
slot_type[7] instead of slot_type[0] is required.

While at it, this patch refactors the slot_type[0] == STACK_SPILL
test into a new function is_spilled_reg() and changes the
slot_type[0] check to a slot_type[7] check there as well.
      
[1] https://reviews.llvm.org/D109073
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20210922004934.624194-1-kafai@fb.com
      27113c59
  3. 24 Sep, 2021 1 commit
    • selftests/bpf: Fix btf_dump __int128 test failure with clang build kernel · 091037fb
      Yonghong Song authored
With a clang-built kernel (adding LLVM=1 to the kernel and selftests/bpf
build command line), I hit the following test failure:
      
        $ ./test_progs -t btf_dump
        ...
        btf_dump_data:PASS:ensure expected/actual match 0 nsec
        btf_dump_data:FAIL:find type id unexpected find type id: actual -2 < expected 0
        btf_dump_data:FAIL:find type id unexpected find type id: actual -2 < expected 0
        test_btf_dump_int_data:FAIL:dump __int128 unexpected error: -2 (errno 2)
        #15/9 btf_dump/btf_dump: int_data:FAIL
      
Further analysis showed that the gcc-built kernel has the type "__int128"
in dwarf/BTF while the clang-built kernel does not. Searching the kernel
code found the following:
        arch/s390/include/asm/types.h:  unsigned __int128 pair;
        crypto/ecc.c:   unsigned __int128 m = (unsigned __int128)left * right;
        include/linux/math64.h: return (u64)(((unsigned __int128)a * mul) >> shift);
        include/linux/math64.h: return (u64)(((unsigned __int128)a * mul) >> shift);
        lib/ubsan.h:typedef __int128 s_max;
        lib/ubsan.h:typedef unsigned __int128 u_max;
      
In my case, CONFIG_UBSAN is not enabled. Even though we only have
"unsigned __int128" in the code, somehow gcc still puts "__int128" in dwarf
while clang doesn't. Hence the current test works fine for gcc but not
for clang.
      
Enabling CONFIG_UBSAN is an option to reliably get the __int128 type
into dwarf for both gcc and clang, but not everybody enables CONFIG_UBSAN
in their kernel build. So the best choice is to use the "unsigned __int128"
type, which is available in both clang- and gcc-built kernels. But the
clang and gcc dwarf-encoded names for "unsigned __int128" differ:
      
        [$ ~] cat t.c
        unsigned __int128 a;
        [$ ~] gcc -g -c t.c && llvm-dwarfdump t.o | grep __int128
                        DW_AT_type      (0x00000031 "__int128 unsigned")
                        DW_AT_name      ("__int128 unsigned")
        [$ ~] clang -g -c t.c && llvm-dwarfdump t.o | grep __int128
                        DW_AT_type      (0x00000033 "unsigned __int128")
                        DW_AT_name      ("unsigned __int128")
      
The test change in this patch checks the type name before
doing the actual test.
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Reviewed-by: Alan Maguire <alan.maguire@oracle.com>
      Link: https://lore.kernel.org/bpf/20210924025856.2192476-1-yhs@fb.com
      091037fb
  4. 22 Sep, 2021 7 commits
  5. 21 Sep, 2021 3 commits
    • samples: bpf: Convert ARP table network order fields into readable format · cf8980a3
      Gokul Sivakumar authored
The ARP table that is dumped when the xdp_router_ipv4 process is launched
has the IP address & MAC address in a non-readable network-byte-order
format, and the alignment is off when printing the table.
      
      Address HwAddress
      160000e0                1600005e0001
      ff96a8c0                ffffffffffff
      faffffef                faff7f5e0001
      196a8c0		9607871293ea
      fb0000e0                fb00005e0001
      0               0
      196a8c0		9607871293ea
      ffff11ac                ffffffffffff
      faffffef                faff7f5e0001
      fb0000e0                fb00005e0001
      160000e0                1600005e0001
      160000e0                1600005e0001
      faffffef                faff7f5e0001
      fb0000e0                fb00005e0001
      40011ac         40011ac4202
      
Fix this by converting the "Address" field from network-byte-order hex into
dotted-decimal IPv4 notation and the "HwAddress" field from network-byte-order
hex into colon-separated hex format. Also fix the alignment of the
fields in the ARP table.
      
      Address         HwAddress
      224.0.0.22      01:00:5e:00:00:16
      192.168.150.255 ff:ff:ff:ff:ff:ff
      239.255.255.250 01:00:5e:7f:ff:fa
      192.168.150.1	ea:93:12:87:07:96
      224.0.0.251     01:00:5e:00:00:fb
      0.0.0.0         00:00:00:00:00:00
      192.168.150.1	ea:93:12:87:07:96
      172.17.255.255  ff:ff:ff:ff:ff:ff
      239.255.255.250 01:00:5e:7f:ff:fa
      224.0.0.251     01:00:5e:00:00:fb
      224.0.0.22      01:00:5e:00:00:16
      224.0.0.22      01:00:5e:00:00:16
      239.255.255.250 01:00:5e:7f:ff:fa
      224.0.0.251     01:00:5e:00:00:fb
      172.17.0.4      02:42:ac:11:00:04
Signed-off-by: Gokul Sivakumar <gokulkumar792@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20210919080305.173588-2-gokulkumar792@gmail.com
      cf8980a3
    • samples: bpf: Convert route table network order fields into readable format · f5c4e419
      Gokul Sivakumar authored
The route table that is dumped when the xdp_router_ipv4 process is launched
has the "Gateway" field in a non-readable network-byte-order format, and
the alignment is off when printing the table.
      
      Destination             Gateway         Genmask         Metric          Iface
        0.0.0.0               196a8c0         0               0               enp7s0
        0.0.0.0               196a8c0         0               0               wlp6s0
      169.254.0.0             196a8c0         16              0               enp7s0
      172.17.0.0                0             16              0               docker0
      192.168.150.0             0             24              0               enp7s0
      192.168.150.0             0             24              0               wlp6s0
      
Fix this by converting the "Gateway" field from network-byte-order hex into
dotted-decimal IPv4 notation and the "Genmask" field from CIDR notation into
dotted-decimal IPv4 notation. Also fix the alignment of the fields
in the route table.
      
      Destination     Gateway         Genmask         Metric Iface
      0.0.0.0         192.168.150.1   0.0.0.0         0      enp7s0
      0.0.0.0         192.168.150.1   0.0.0.0         0      wlp6s0
      169.254.0.0     192.168.150.1   255.255.0.0     0      enp7s0
      172.17.0.0      0.0.0.0         255.255.0.0     0      docker0
      192.168.150.0   0.0.0.0         255.255.255.0   0      enp7s0
      192.168.150.0   0.0.0.0         255.255.255.0   0      wlp6s0
Signed-off-by: Gokul Sivakumar <gokulkumar792@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20210919080305.173588-1-gokulkumar792@gmail.com
      f5c4e419
    • libbpf: Add doc comments in libbpf.h · 97c140d9
      Grant Seltzer authored
This adds comments above functions in libbpf.h which document
their uses. These comments are in a format that doxygen and sphinx
can pick up and render. They are rendered by libbpf.readthedocs.org
      
      These doc comments are for:
      - bpf_object__find_map_by_name()
      - bpf_map__fd()
      - bpf_map__is_internal()
      - libbpf_get_error()
      - libbpf_num_possible_cpus()
Signed-off-by: Grant Seltzer <grantseltzer@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20210918031457.36204-1-grantseltzer@gmail.com
      97c140d9
  6. 17 Sep, 2021 11 commits
    • Merge branch 'bpf: implement variadic printk helper' · e57f52b4
      Alexei Starovoitov authored
      Dave Marchevsky says:
      
      ====================
      
      This series introduces a new helper, bpf_trace_vprintk, which functions
      like bpf_trace_printk but supports > 3 arguments via a pseudo-vararg u64
      array. The bpf_printk libbpf convenience macro is modified to use
      bpf_trace_vprintk when > 3 varargs are passed, otherwise the previous
      behavior - using bpf_trace_printk - is retained.
      
      Helper functions and macros added during the implementation of
      bpf_seq_printf and bpf_snprintf do most of the heavy lifting for
      bpf_trace_vprintk. There's no novel format string wrangling here.
      
The use case here is straightforward: giving BPF program writers a more
powerful printk will ease development of BPF programs, particularly
during debugging and testing, where printk tends to be used.
      
      This feature was proposed by Andrii in libbpf mirror's issue tracker
      [1].
      
      [1] https://github.com/libbpf/libbpf/issues/315
      
      v5 -> v6: Rebase to pick up newly-added helper
      
      v4 -> v5:
      
      * patch 8: added test for "%pS" format string w/ NULL fmt arg [Daniel]
      * patch 8: dmesg -> /sys/kernel/debug/tracing/trace_pipe in commit message [Andrii]
      * patch 9: squash into patch 8, remove the added test in favor of just bpf_printk'ing in patch 8's test [Andrii]
          * migrate comment to /* */
    * header comments improved
          * uapi/linux/bpf.h: u64 -> long return type [Daniel]
          * uapi/linux/bpf.h: function description explains benefit of bpf_trace_vprintk over bpf_trace_printk [Daniel]
          * uapi/linux/bpf.h: added patch explaining that data_len should be a multiple of 8 in bpf_seq_printf, bpf_snprintf descriptions [Daniel]
          * tools/lib/bpf/bpf_helpers.h: move comment to new bpf_printk [Andrii]
      * rebase
      
      v3 -> v4:
      * Add patch 2, which migrates reference_tracking prog_test away from
        bpf_program__load. Could be placed a bit later in the series, but
        wanted to keep the actual vprintk-related patches contiguous
      * Add patch 9, which adds a program w/ 0 fmt arg bpf_printk to vprintk
        test
      * bpf_printk convenience macro isn't multiline anymore, so simplify [Andrii]
      * Add some comments to ___bpf_pick_printk to make it more obvious when
        implementation switches from printk to vprintk [Andrii]
      * BPF_PRINTK_FMT_TYPE -> BPF_PRINTK_FMT_MOD for 'static const' fmt string
        in printk wrapper macro [Andrii]
          * checkpatch.pl doesn't like this, says "Macros with complex values
            should be enclosed in parentheses". Strange that it didn't have similar
            complaints about v3's BPF_PRINTK_FMT_TYPE. Regardless, IMO the complaint
            is not highlighting a real issue in the case of this macro.
      * Fix alignment of __bpf_vprintk and __bpf_pick_printk [Andrii]
      * rebase
      
      v2 -> v3:
      * Clean up patch 3's commit message [Alexei]
      * Add patch 4, which modifies __bpf_printk to use 'static const char' to
        store fmt string with fallback for older kernels [Andrii]
      * rebase
      
      v1 -> v2:
      
      * Naming conversation seems to have gone in favor of keeping
        bpf_trace_vprintk, names are unchanged
      
      * Patch 3 now modifies bpf_printk convenience macro to choose between
        __bpf_printk and __bpf_vprintk 'implementation' macros based on arg
        count. __bpf_vprintk is a renaming of bpf_vprintk convenience macro
        from v1, __bpf_printk is the existing bpf_printk implementation.
      
        This patch could use some scrutiny as I think current implementation
        may regress developer experience in a specific case, turning a
        compile-time error into a load-time error. Unclear to me how
        common the case is, or whether the macro magic I chose is ideal.
      
      * char ___fmt[] to static const char ___fmt[] change was not done,
        wanted to leave __bpf_printk 'implementation' macro unchanged for v2
        to ease discussion of above point
      
      * Removed __always_inline from __set_printk_clr_event [Andrii]
      * Simplified bpf_trace_printk docstring to refer to other functions
        instead of copy/pasting and avoid specifying 12 vararg limit [Andrii]
      * Migrated trace_printk selftest to use ASSERT_ instead of CHECK
        * Adds new patch 5, previous patch 5 is now 6
      * Migrated trace_vprintk selftest to use ASSERT_ instead of CHECK,
        open_and_load instead of separate open, load [Andrii]
      * Patch 2's commit message now correctly mentions trace_pipe instead of
        dmesg [Andrii]
      ====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      e57f52b4
    • bpf: Clarify data_len param in bpf_snprintf and bpf_seq_printf comments · a42effb0
      Dave Marchevsky authored
      Since the data_len in these two functions is a byte len of the preceding
      u64 *data array, it must always be a multiple of 8. If this isn't the
      case both helpers error out, so let's make the requirement explicit so
      users don't need to infer it.
Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20210917182911.2426606-10-davemarchevsky@fb.com
      a42effb0
    • selftests/bpf: Add trace_vprintk test prog · 7606729f
      Dave Marchevsky authored
      This commit adds a test prog for vprintk which confirms that:
        * bpf_trace_vprintk is writing to /sys/kernel/debug/tracing/trace_pipe
        * __bpf_vprintk macro works as expected
        * >3 args are printed
        * bpf_printk w/ 0 format args compiles
        * bpf_trace_vprintk call w/ a fmt specifier but NULL fmt data fails
      
      Approach and code are borrowed from trace_printk test.
Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20210917182911.2426606-9-davemarchevsky@fb.com
      7606729f
    • selftests/bpf: Migrate prog_tests/trace_printk CHECKs to ASSERTs · d313d45a
      Dave Marchevsky authored
      Guidance for new tests is to use ASSERT macros instead of CHECK. Since
      trace_vprintk test will borrow heavily from trace_printk's, migrate its
      CHECKs so it remains obvious that the two are closely related.
Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20210917182911.2426606-8-davemarchevsky@fb.com
      d313d45a
    • bpftool: Only probe trace_vprintk feature in 'full' mode · 4190c299
      Dave Marchevsky authored
Since commit 368cb0e7 ("bpftool: Make probes which emit dmesg
warnings optional"), some helpers aren't probed by bpftool unless
the `full` arg is added to `bpftool feature probe`.

bpf_trace_vprintk can emit dmesg warnings when probed, so probe it
only in full mode.
Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20210917182911.2426606-7-davemarchevsky@fb.com
      4190c299
    • libbpf: Use static const fmt string in __bpf_printk · 6c66b0e7
      Dave Marchevsky authored
      The __bpf_printk convenience macro was using a 'char' fmt string holder
      as it predates support for globals in libbpf. Move to more efficient
      'static const char', but provide a fallback to the old way via
      BPF_NO_GLOBAL_DATA so users on old kernels can still use the macro.
Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20210917182911.2426606-6-davemarchevsky@fb.com
      6c66b0e7
    • libbpf: Modify bpf_printk to choose helper based on arg count · c2758baa
      Dave Marchevsky authored
      Instead of being a thin wrapper which calls into bpf_trace_printk,
      libbpf's bpf_printk convenience macro now chooses between
      bpf_trace_printk and bpf_trace_vprintk. If the arg count (excluding
      format string) is >3, use bpf_trace_vprintk, otherwise use the older
      helper.
      
      The motivation behind this added complexity - instead of migrating
      entirely to bpf_trace_vprintk - is to maintain good developer experience
      for users compiling against new libbpf but running on older kernels.
      Users who are passing <=3 args to bpf_printk will see no change in their
      bytecode.
      
      __bpf_vprintk functions similarly to BPF_SEQ_PRINTF and BPF_SNPRINTF
      macros elsewhere in the file - it allows use of bpf_trace_vprintk
      without manual conversion of varargs to u64 array. Previous
      implementation of bpf_printk macro is moved to __bpf_printk for use by
      the new implementation.
      
      This does change behavior of bpf_printk calls with >3 args in the "new
      libbpf, old kernels" scenario. Before this patch, attempting to use 4
      args to bpf_printk results in a compile-time error. After this patch,
      using bpf_printk with 4 args results in a trace_vprintk helper call
      being emitted and a load-time failure on older kernels.
Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20210917182911.2426606-5-davemarchevsky@fb.com
      c2758baa
    • bpf: Add bpf_trace_vprintk helper · 10aceb62
      Dave Marchevsky authored
      This helper is meant to be "bpf_trace_printk, but with proper vararg
      support". Follow bpf_snprintf's example and take a u64 pseudo-vararg
      array. Write to /sys/kernel/debug/tracing/trace_pipe using the same
      mechanism as bpf_trace_printk. The functionality of this helper was
      requested in the libbpf issue tracker [0].
      
[0] Closes: https://github.com/libbpf/libbpf/issues/315
Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20210917182911.2426606-4-davemarchevsky@fb.com
      10aceb62
    • selftests/bpf: Stop using bpf_program__load · 84b4c529
      Dave Marchevsky authored
      bpf_program__load is not supposed to be used directly. Replace it with
      bpf_object__ APIs for the reference_tracking prog_test, which is the
      last offender in bpf selftests.
      
Some additional complexity is added for this test, namely the use of one
bpf_object to iterate through progs, while a second bpf_object is
created and opened/closed to test actual loading of progs. This is
because the test was doing bpf_program__load then __unload to test
loading of individual progs, and the same semantics with
bpf_object__load/__unload fail to load an already-__unload-ed obj.
Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20210917182911.2426606-3-davemarchevsky@fb.com
      84b4c529
    • bpf: Merge printk and seq_printf VARARG max macros · 335ff499
      Dave Marchevsky authored
      MAX_SNPRINTF_VARARGS and MAX_SEQ_PRINTF_VARARGS are used by bpf helpers
      bpf_snprintf and bpf_seq_printf to limit their varargs. Both call into
      bpf_bprintf_prepare for print formatting logic and have convenience
      macros in libbpf (BPF_SNPRINTF, BPF_SEQ_PRINTF) which use the same
      helper macros to convert varargs to a byte array.
      
      Changing shared functionality to support more varargs for either bpf
      helper would affect the other as well, so let's combine the _VARARGS
      macros to make this more obvious.
Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20210917182911.2426606-2-davemarchevsky@fb.com
      335ff499
    • Merge https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next · af54faab
      Jakub Kicinski authored
      Alexei Starovoitov says:
      
      ====================
      pull-request: bpf-next 2021-09-17
      
      We've added 63 non-merge commits during the last 12 day(s) which contain
      a total of 65 files changed, 2653 insertions(+), 751 deletions(-).
      
      The main changes are:
      
      1) Streamline internal BPF program sections handling and
         bpf_program__set_attach_target() in libbpf, from Andrii.
      
      2) Add support for new btf kind BTF_KIND_TAG, from Yonghong.
      
      3) Introduce bpf_get_branch_snapshot() to capture LBR, from Song.
      
      4) IMUL optimization for x86-64 JIT, from Jie.
      
      5) xsk selftest improvements, from Magnus.
      
      6) Introduce legacy kprobe events support in libbpf, from Rafael.
      
      7) Access hw timestamp through BPF's __sk_buff, from Vadim.
      
      * https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (63 commits)
        selftests/bpf: Fix a few compiler warnings
        libbpf: Constify all high-level program attach APIs
        libbpf: Schedule open_opts.attach_prog_fd deprecation since v0.7
        selftests/bpf: Switch fexit_bpf2bpf selftest to set_attach_target() API
        libbpf: Allow skipping attach_func_name in bpf_program__set_attach_target()
        libbpf: Deprecated bpf_object_open_opts.relaxed_core_relocs
        selftests/bpf: Stop using relaxed_core_relocs which has no effect
        libbpf: Use pre-setup sec_def in libbpf_find_attach_btf_id()
        bpf: Update bpf_get_smp_processor_id() documentation
        libbpf: Add sphinx code documentation comments
        selftests/bpf: Skip btf_tag test if btf_tag attribute not supported
        docs/bpf: Add documentation for BTF_KIND_TAG
        selftests/bpf: Add a test with a bpf program with btf_tag attributes
        selftests/bpf: Test BTF_KIND_TAG for deduplication
        selftests/bpf: Add BTF_KIND_TAG unit tests
        selftests/bpf: Change NAME_NTH/IS_NAME_NTH for BTF_KIND_TAG format
        selftests/bpf: Test libbpf API function btf__add_tag()
        bpftool: Add support for BTF_KIND_TAG
        libbpf: Add support for BTF_KIND_TAG
        libbpf: Rename btf_{hash,equal}_int to btf_{hash,equal}_int_tag
        ...
      ====================
      
Link: https://lore.kernel.org/r/20210917173738.3397064-1-ast@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
      af54faab