1. 10 Dec, 2018 6 commits
    • tools/bpf: rename *_info_cnt to nr_*_info · cfc54241
      Yonghong Song authored
      Rename all occurrences of *_info_cnt field accesses
      to nr_*_info in the tools directory.
      
      The local variables finfo_cnt, linfo_cnt and jited_linfo_cnt
      in function do_dump() of tools/bpf/bpftool/prog.c are also
      changed to nr_finfo, nr_linfo and nr_jited_linfo to
      keep the naming convention consistent.
      Acked-by: Martin KaFai Lau <kafai@fb.com>
      Signed-off-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • tools/bpf: sync kernel uapi bpf.h to tools directory · b4f8623c
      Yonghong Song authored
      Sync kernel uapi bpf.h "*_info_cnt => nr_*_info"
      changes to the tools directory.
      Acked-by: Martin KaFai Lau <kafai@fb.com>
      Signed-off-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: rename *_info_cnt to nr_*_info in bpf_prog_info · 11d8b82d
      Yonghong Song authored
      In uapi bpf.h, currently we have the following fields in
      the struct bpf_prog_info:
      	__u32 func_info_cnt;
      	__u32 line_info_cnt;
      	__u32 jited_line_info_cnt;
      The above field names "func_info_cnt" and "line_info_cnt"
      also appear in union bpf_attr for program loading.
      
      The original intention was to keep the names the same
      between bpf_prog_info and bpf_attr, implying that what is
      returned to user space is the same as what user space
      passed to the kernel.
      
      Such a naming convention in bpf_prog_info is not consistent
      with other fields like:
              __u32 nr_jited_ksyms;
              __u32 nr_jited_func_lens;
      
      This patch makes that adjustment, so the newly introduced
      *_info_cnt fields in bpf_prog_info become nr_*_info.
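
      For reference, a minimal sketch of the renamed counters (other
      bpf_prog_info fields are elided and the ordering here is illustrative,
      not copied from the uapi header):

        #include <linux/types.h>

        struct bpf_prog_info {
                /* ... */
                __u32 nr_func_info;        /* was: func_info_cnt       */
                __u32 nr_line_info;        /* was: line_info_cnt       */
                __u32 nr_jited_line_info;  /* was: jited_line_info_cnt */
                /* ... */
        };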
      Acked-by: Song Liu <songliubraving@fb.com>
      Acked-by: Martin KaFai Lau <kafai@fb.com>
      Signed-off-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: clean up bpf_prog_get_info_by_fd() · 7a5725dd
      Song Liu authored
      info.nr_jited_ksyms and info.nr_jited_func_lens cannot be 0 in these two
      statements, so we don't need to check them.
      Signed-off-by: Song Liu <songliubraving@fb.com>
      Acked-by: Martin KaFai Lau <kafai@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: bpftool: Fix newline and p_err issue · 10a5ce98
      Martin KaFai Lau authored
      This patch fixes a few newline issues and also
      replaces p_err with p_info in prog.c.
      
      Fixes: b053b439 ("bpf: libbpf: bpftool: Print bpf_line_info during prog dump")
      Cc: Jakub Kicinski <jakub.kicinski@netronome.com>
      Signed-off-by: Martin KaFai Lau <kafai@fb.com>
      Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: relax verifier restriction on BPF_MOV | BPF_ALU · e434b8cd
      Jiong Wang authored
      Currently, the destination register is marked as unknown for 32-bit
      sub-register move (BPF_MOV | BPF_ALU) whenever the source register type is
      SCALAR_VALUE.
      
      This is too conservative: some valid cases are rejected.
      In particular, it may turn a constant scalar value into an unknown
      value, which can break assumptions made by the verifier.
      
      For example, test_l4lb_noinline.c has the following C code:
      
          struct real_definition *dst
      
      1:  if (!get_packet_dst(&dst, &pckt, vip_info, is_ipv6))
      2:    return TC_ACT_SHOT;
      3:
      4:  if (dst->flags & F_IPV6) {
      
      get_packet_dst is responsible for initializing "dst" to a valid pointer
      and returning true (1); otherwise it returns false (0). The compiled
      instruction sequence using alu32 is:
      
        412: (54) (u32) r7 &= (u32) 1
        413: (bc) (u32) r0 = (u32) r7
        414: (95) exit
      
      Insn 413, a BPF_MOV | BPF_ALU, however turns r0 into an unknown value
      even though r7 contains the SCALAR_VALUE 1.
      
      This causes trouble when the verifier walks the code path in which "dst"
      has not been initialized inside get_packet_dst and 0 is returned: we
      would expect the verifier to conclude that the "if" check at line 1 in
      the above C code is taken and therefore to skip the fall-through path
      starting at line 4. But because the r0 returned from the callee has
      become an unknown value, the verifier does not skip analyzing the path
      starting at line 4, and "dst->flags" requires dereferencing the pointer
      "dst", which has not actually been initialized on this path.
      
      This patch relaxes the marking of the sub-register move destination.
      For a SCALAR_VALUE source, it is safe to simply copy the value from the
      source and then truncate it to 32 bits.
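
      A simplified, self-contained model of the relaxed marking (this is not
      the verifier code itself, which operates on the verifier's register
      state in kernel/bpf/verifier.c; it only illustrates the rule described
      above):

        #include <stdbool.h>
        #include <stdint.h>

        /* Toy model of a register: either a known constant or unknown. */
        struct toy_reg {
                bool known;
                uint64_t val;
        };

        /* Destination marking for a 32-bit MOV.  Before the patch the
         * destination was always marked unknown; after the patch a known
         * scalar source is copied and then truncated to 32 bits, so
         * "(u32) r0 = (u32) r7" above keeps the constant 1.
         */
        static void mov32_mark_dst(struct toy_reg *dst, const struct toy_reg *src)
        {
                if (src->known) {
                        *dst = *src;
                        dst->val = (uint32_t)dst->val;  /* truncate to 32 bits */
                } else {
                        dst->known = false;
                }
        }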
      
      A unit test is also included to demonstrate this issue; it fails
      without this patch.
      
      This relaxation can let the verifier skip more paths when a conditional
      compares against an immediate. It also lets the verifier record a more
      accurate/strict value for a register in a given state; if that state
      ends up reaching exit without rejection and is later used for state
      comparison, a less accurate/more permissive value might have pruned
      more. So the real impact on the number of instructions the verifier
      processes is complex. In any case, without this fix valid programs can
      be rejected.
      
      From real benchmarking on kernel selftests and Cilium bpf tests, there
      is no impact on the processed instruction count when the tests are
      compiled with default compilation options. There are slight improvements
      when they are compiled with -mattr=+alu32 after this patch.
      
      Also, test_xdp_noinline compiled with -mattr=+alu32 now passes
      verification; it was rejected before this fix.
      
      Insn processed before/after this patch:
      
                              default     -mattr=+alu32
      
      Kernel selftest
      
      ===
      test_xdp.o              371/371      369/369
      test_l4lb.o             6345/6345    5623/5623
      test_xdp_noinline.o     2971/2971    rejected/2727
      test_tcp_estates.o      429/429      430/430
      
      Cilium bpf
      ===
      bpf_lb-DLB_L3.o:        2085/2085     1685/1687
      bpf_lb-DLB_L4.o:        2287/2287     1986/1982
      bpf_lb-DUNKNOWN.o:      690/690       622/622
      bpf_lxc.o:              95033/95033   N/A
      bpf_netdev.o:           7245/7245     N/A
      bpf_overlay.o:          2898/2898     3085/2947
      
      NOTE:
        - bpf_lxc.o and bpf_netdev.o compiled with -mattr=+alu32 are rejected
          by the verifier due to another issue in the verifier's support for
          alu32 binaries.
        - Each cilium bpf program can produce several processed-insn counts;
          the numbers above are their sums.
      
      v1->v2:
       - Restrict the change on SCALAR_VALUE.
       - Update benchmark numbers on Cilium bpf tests.
       Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
       Signed-off-by: Alexei Starovoitov <ast@kernel.org>
  2. 09 Dec, 2018 9 commits
    • media: bpf: add bpf function to report mouse movement · 01d3240a
      Sean Young authored
      Some IR remotes have a directional pad or other pointer-like thing that
      can be used as a mouse. Make it possible to decode these types of IR
      protocols in BPF.
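
      A hedged sketch of a lirc_mode2 program using the new helper.  The
      helper name and its (ctx, rel_x, rel_y) arguments reflect my reading of
      this patch; the decoding "logic" below is entirely made up, and the
      helper declaration mimics the usual selftests bpf_helpers.h pattern:

        #include <linux/bpf.h>

        #define SEC(name) __attribute__((section(name), used))

        static int (*bpf_rc_pointer_rel)(void *ctx, int rel_x, int rel_y) =
                (void *)BPF_FUNC_rc_pointer_rel;

        SEC("lirc_mode2")
        int ir_mouse(unsigned int *sample)
        {
                /* Hypothetical: nudge the pointer one pixel right per sample. */
                bpf_rc_pointer_rel(sample, 1, 0);
                return 0;
        }

        char _license[] SEC("license") = "GPL";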
      
      Cc: netdev@vger.kernel.org
      Signed-off-by: Sean Young <sean@mess.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • Merge branch 'bpf_line_info' · ca5d1a7f
      Alexei Starovoitov authored
      Martin Lau says:
      
      ====================
      This patch series introduces bpf_line_info.  Please see the individual
      patches for details.

      It will be useful for introspection purposes, for example:
      
      [root@arch-fb-vm1 bpf]# ~/devshare/fb-kernel/linux/tools/bpf/bpftool/bpftool prog dump jited pinned /sys/fs/bpf/test_btf_haskv
      [...]
      int test_long_fname_2(struct dummy_tracepoint_args * arg):
      bpf_prog_44a040bf25481309_test_long_fname_2:
      ; static int test_long_fname_2(struct dummy_tracepoint_args *arg)
         0:   push   %rbp
         1:   mov    %rsp,%rbp
         4:   sub    $0x30,%rsp
         b:   sub    $0x28,%rbp
         f:   mov    %rbx,0x0(%rbp)
        13:   mov    %r13,0x8(%rbp)
        17:   mov    %r14,0x10(%rbp)
        1b:   mov    %r15,0x18(%rbp)
        1f:   xor    %eax,%eax
        21:   mov    %rax,0x20(%rbp)
        25:   xor    %esi,%esi
      ; int key = 0;
        27:   mov    %esi,-0x4(%rbp)
      ; if (!arg->sock)
        2a:   mov    0x8(%rdi),%rdi
      ; if (!arg->sock)
        2e:   cmp    $0x0,%rdi
        32:   je     0x0000000000000070
        34:   mov    %rbp,%rsi
      ; counts = bpf_map_lookup_elem(&btf_map, &key);
        37:   add    $0xfffffffffffffffc,%rsi
        3b:   movabs $0xffff8881139d7480,%rdi
        45:   add    $0x110,%rdi
        4c:   mov    0x0(%rsi),%eax
        4f:   cmp    $0x4,%rax
        53:   jae    0x000000000000005e
        55:   shl    $0x3,%rax
        59:   add    %rdi,%rax
        5c:   jmp    0x0000000000000060
        5e:   xor    %eax,%eax
      ; if (!counts)
        60:   cmp    $0x0,%rax
        64:   je     0x0000000000000070
      ; counts->v6++;
        66:   mov    0x4(%rax),%edi
        69:   add    $0x1,%rdi
        6d:   mov    %edi,0x4(%rax)
        70:   mov    0x0(%rbp),%rbx
        74:   mov    0x8(%rbp),%r13
        78:   mov    0x10(%rbp),%r14
        7c:   mov    0x18(%rbp),%r15
        80:   add    $0x28,%rbp
        84:   leaveq
        85:   retq
      [...]
      ====================
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: libbpf: bpftool: Print bpf_line_info during prog dump · b053b439
      Martin KaFai Lau authored
      This patch adds printing of bpf_line_info in 'prog dump jited'
      and 'prog dump xlated':
      
      [root@arch-fb-vm1 bpf]# ~/devshare/fb-kernel/linux/tools/bpf/bpftool/bpftool prog dump jited pinned /sys/fs/bpf/test_btf_haskv
      [...]
      int test_long_fname_2(struct dummy_tracepoint_args * arg):
      bpf_prog_44a040bf25481309_test_long_fname_2:
      ; static int test_long_fname_2(struct dummy_tracepoint_args *arg)
         0:	push   %rbp
         1:	mov    %rsp,%rbp
         4:	sub    $0x30,%rsp
         b:	sub    $0x28,%rbp
         f:	mov    %rbx,0x0(%rbp)
        13:	mov    %r13,0x8(%rbp)
        17:	mov    %r14,0x10(%rbp)
        1b:	mov    %r15,0x18(%rbp)
        1f:	xor    %eax,%eax
        21:	mov    %rax,0x20(%rbp)
        25:	xor    %esi,%esi
      ; int key = 0;
        27:	mov    %esi,-0x4(%rbp)
      ; if (!arg->sock)
        2a:	mov    0x8(%rdi),%rdi
      ; if (!arg->sock)
        2e:	cmp    $0x0,%rdi
        32:	je     0x0000000000000070
        34:	mov    %rbp,%rsi
      ; counts = bpf_map_lookup_elem(&btf_map, &key);
        37:	add    $0xfffffffffffffffc,%rsi
        3b:	movabs $0xffff8881139d7480,%rdi
        45:	add    $0x110,%rdi
        4c:	mov    0x0(%rsi),%eax
        4f:	cmp    $0x4,%rax
        53:	jae    0x000000000000005e
        55:	shl    $0x3,%rax
        59:	add    %rdi,%rax
        5c:	jmp    0x0000000000000060
        5e:	xor    %eax,%eax
      ; if (!counts)
        60:	cmp    $0x0,%rax
        64:	je     0x0000000000000070
      ; counts->v6++;
        66:	mov    0x4(%rax),%edi
        69:	add    $0x1,%rdi
        6d:	mov    %edi,0x4(%rax)
        70:	mov    0x0(%rbp),%rbx
        74:	mov    0x8(%rbp),%r13
        78:	mov    0x10(%rbp),%r14
        7c:	mov    0x18(%rbp),%r15
        80:	add    $0x28,%rbp
        84:	leaveq
        85:	retq
      [...]
      
      With linum:
      [root@arch-fb-vm1 bpf]# ~/devshare/fb-kernel/linux/tools/bpf/bpftool/bpftool prog dump jited pinned /sys/fs/bpf/test_btf_haskv linum
      int _dummy_tracepoint(struct dummy_tracepoint_args * arg):
      bpf_prog_b07ccb89267cf242__dummy_tracepoint:
      ; return test_long_fname_1(arg); [file:/data/users/kafai/fb-kernel/linux/tools/testing/selftests/bpf/test_btf_haskv.c line_num:54 line_col:9]
         0:	push   %rbp
         1:	mov    %rsp,%rbp
         4:	sub    $0x28,%rsp
         b:	sub    $0x28,%rbp
         f:	mov    %rbx,0x0(%rbp)
        13:	mov    %r13,0x8(%rbp)
        17:	mov    %r14,0x10(%rbp)
        1b:	mov    %r15,0x18(%rbp)
        1f:	xor    %eax,%eax
        21:	mov    %rax,0x20(%rbp)
        25:	callq  0x000000000000851e
      ; return test_long_fname_1(arg); [file:/data/users/kafai/fb-kernel/linux/tools/testing/selftests/bpf/test_btf_haskv.c line_num:54 line_col:2]
        2a:	xor    %eax,%eax
        2c:	mov    0x0(%rbp),%rbx
        30:	mov    0x8(%rbp),%r13
        34:	mov    0x10(%rbp),%r14
        38:	mov    0x18(%rbp),%r15
        3c:	add    $0x28,%rbp
        40:	leaveq
        41:	retq
      [...]
      Signed-off-by: Martin KaFai Lau <kafai@fb.com>
      Acked-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: libbpf: Add btf_line_info support to libbpf · 3d650141
      Martin KaFai Lau authored
      This patch adds bpf_line_info support to libbpf:
      1) Parsing the line_info sec from ".BTF.ext"
      2) Relocating the line_info.  If the main prog *_info relocation
         fails, it will ignore the remaining subprog line_info and continue.
         If the subprog *_info relocation fails, it will bail out.
      3) BPF_PROG_LOAD a prog with line_info
      Signed-off-by: Martin KaFai Lau <kafai@fb.com>
      Acked-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: libbpf: Refactor and bug fix on the bpf_func_info loading logic · f0187f0b
      Martin KaFai Lau authored
      This patch refactors, and fixes a bug in, libbpf's bpf_func_info loading
      logic.  The bug fix and the refactoring both target
      commit 2993e051 ("tools/bpf: add support to read .BTF.ext sections"),
      which is in the bpf-next branch.
      
      1) In bpf_load_program_xattr(), it should retry when errno == E2BIG
         regardless of log_buf and log_buf_sz.  This patch fixes it.
      
      2) btf_ext__reloc_init() and btf_ext__reloc() are essentially
         the same except btf_ext__reloc_init() always has insns_cnt == 0.
         Hence, btf_ext__reloc_init() is removed.
      
         btf_ext__reloc() is also renamed to btf_ext__reloc_func_info()
         to get ready for the line_info support in the next patch.
      
      3) Consolidate func_info section logic from "btf_ext_parse_hdr()",
         "btf_ext_validate_func_info()" and "btf_ext__new()" to
         a new function "btf_ext_copy_func_info()" such that similar
         logic can be reused by the later libbpf's line_info patch.
      
      4) The next line_info patch will store line_info_cnt instead of
         line_info_len in the bpf_program because the kernel is taking
         line_info_cnt also.  It will save a few "len" to "cnt" conversions
         and will also save some function args.
      
         Hence, this patch also makes bpf_program to store func_info_cnt
         instead of func_info_len.
      
      5) btf_ext depends on btf.  e.g. the func_info's type_id
         in ".BTF.ext" is not useful when ".BTF" is absent.
         This patch only initializes the obj->btf_ext pointer after
         it has successfully initialized the obj->btf pointer.

         This avoids always checking "obj->btf && obj->btf_ext"
         together when accessing ".BTF.ext".  Checking "obj->btf_ext"
         alone will do.
      
      6) Move "struct btf_sec_func_info" from btf.h to btf.c.
         There is no external usage outside btf.c.
      
      Fixes: 2993e051 ("tools/bpf: add support to read .BTF.ext sections")
      Signed-off-by: Martin KaFai Lau <kafai@fb.com>
      Acked-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: Add unit tests for bpf_line_info · 4d6304c7
      Martin KaFai Lau authored
      Add unit tests for bpf_line_info for both BPF_PROG_LOAD and
      BPF_OBJ_GET_INFO_BY_FD.
      
      jit enabled:
      [root@arch-fb-vm1 bpf]# ./test_btf -k 0
      BTF prog info raw test[5] (line_info (No subprog)): OK
      BTF prog info raw test[6] (line_info (No subprog. insn_off >= prog->len)): OK
      BTF prog info raw test[7] (line_info (No subprog. zero tailing line_info): OK
      BTF prog info raw test[8] (line_info (No subprog. nonzero tailing line_info)): OK
      BTF prog info raw test[9] (line_info (subprog)): OK
      BTF prog info raw test[10] (line_info (subprog + func_info)): OK
      BTF prog info raw test[11] (line_info (subprog. missing 1st func line info)): OK
      BTF prog info raw test[12] (line_info (subprog. missing 2nd func line info)): OK
      BTF prog info raw test[13] (line_info (subprog. unordered insn offset)): OK
      
      jit disabled:
      BTF prog info raw test[5] (line_info (No subprog)): not jited. skipping jited_line_info check. OK
      BTF prog info raw test[6] (line_info (No subprog. insn_off >= prog->len)): OK
      BTF prog info raw test[7] (line_info (No subprog. zero tailing line_info): not jited. skipping jited_line_info check. OK
      BTF prog info raw test[8] (line_info (No subprog. nonzero tailing line_info)): OK
      BTF prog info raw test[9] (line_info (subprog)): not jited. skipping jited_line_info check. OK
      BTF prog info raw test[10] (line_info (subprog + func_info)): not jited. skipping jited_line_info check. OK
      BTF prog info raw test[11] (line_info (subprog. missing 1st func line info)): OK
      BTF prog info raw test[12] (line_info (subprog. missing 2nd func line info)): OK
      BTF prog info raw test[13] (line_info (subprog. unordered insn offset)): OK
      Signed-off-by: Martin KaFai Lau <kafai@fb.com>
      Acked-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: Refactor and bug fix in test_func_type in test_btf.c · 05687352
      Martin KaFai Lau authored
      1) bpf_load_program_xattr() is absorbing the E2BIG error,
         which makes testing this case impossible.  It is replaced
         with a direct syscall(__NR_bpf, BPF_PROG_LOAD,...).
      2) The test_func_type() is renamed to test_info_raw() to
         prepare for the new line_info test in the next patch.
      3) The bpf_obj_get_info_by_fd() testing for func_info
         is refactored to test_get_finfo().  A new
         test_get_linfo() will be added in the next patch
         for testing line_info purpose.
      4) The test->func_info_cnt is checked instead of
         a static value "2".
      5) Remove unnecessary "\n" in error message.
      6) Adding back info_raw_test_num to the cmd arg such
         that a specific test case can be tested, like
         all other existing tests.
      
      7) Fix a bug in handling expected_prog_load_failure.
         A test could pass even if prog_fd != -1 while
         expected_prog_load_failure is true.
      8) The min rec_size check should be < 8 instead of < 4.
      
      Fixes: 4798c4ba ("tools/bpf: extends test_btf to test load/retrieve func_type info")
      Signed-off-by: Martin KaFai Lau <kafai@fb.com>
      Acked-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: tools: Sync uapi bpf.h · ee491d8d
      Martin KaFai Lau authored
      Sync uapi bpf.h to tools/include/uapi/linux for
      the new bpf_line_info.
      Signed-off-by: Martin KaFai Lau <kafai@fb.com>
      Acked-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: Add bpf_line_info support · c454a46b
      Martin KaFai Lau authored
      This patch adds bpf_line_info support.
      
      It accepts an array of bpf_line_info objects during BPF_PROG_LOAD.
      The "line_info", "line_info_cnt" and "line_info_rec_size" are added
      to the "union bpf_attr".  The "line_info_rec_size" makes
      bpf_line_info extensible in the future.
      
      The new "check_btf_line()" ensures the userspace line_info is valid
      for the kernel to use.
      
      When the verifier is translating/patching the bpf_prog (through
      "bpf_patch_insn_single()"), the line_infos' insn_off is also
      adjusted by the newly added "bpf_adj_linfo()".
      
      If the bpf_prog is jited, this patch also provides the jited addrs (in
      aux->jited_linfo) for the corresponding line_info.insn_off.
      "bpf_prog_fill_jited_linfo()" is added to fill the aux->jited_linfo.
      It is currently called by the x86 jit.  Other jits can also use
      "bpf_prog_fill_jited_linfo()", and that will be done in follow-up patches.
      In the future, if deemed necessary, a particular jit could also provide
      its own "bpf_prog_fill_jited_linfo()" implementation.
      
      A few "*line_info*" fields are added to the bpf_prog_info such
      that the user can get the xlated line_info back (i.e. the line_info
      with its insn_off reflecting the translated prog).  The jited_line_info
      is available if the prog is jited.  It is an array of __u64.
      If the prog is not jited, jited_line_info_cnt is 0.
      
      The verifier's verbose log with line_info will be done in
      a follow up patch.
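
      A hedged userspace sketch of passing line_info through the new
      BPF_PROG_LOAD attributes named above (program/BTF setup, the
      bpf_line_info contents and all error handling are omitted; the function
      and variable names are made up, and prog_btf_fd comes from the earlier
      bpf_func_info work):

        #include <linux/bpf.h>
        #include <string.h>
        #include <unistd.h>
        #include <sys/syscall.h>

        static __u64 ptr_to_u64(const void *ptr)
        {
                return (__u64)(unsigned long)ptr;
        }

        static int load_prog_with_line_info(const struct bpf_insn *insns,
                                            __u32 insn_cnt, int btf_fd,
                                            const struct bpf_line_info *linfo,
                                            __u32 linfo_cnt)
        {
                union bpf_attr attr;

                memset(&attr, 0, sizeof(attr));
                attr.prog_type = BPF_PROG_TYPE_TRACEPOINT;    /* example type */
                attr.insns = ptr_to_u64(insns);
                attr.insn_cnt = insn_cnt;
                attr.license = ptr_to_u64("GPL");
                attr.prog_btf_fd = btf_fd;                /* BTF the line_info refers to */
                attr.line_info = ptr_to_u64(linfo);       /* new in this patch */
                attr.line_info_cnt = linfo_cnt;           /* new in this patch */
                attr.line_info_rec_size = sizeof(*linfo); /* new in this patch */

                return syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr));
        }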
      Signed-off-by: Martin KaFai Lau <kafai@fb.com>
      Acked-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
  3. 07 Dec, 2018 9 commits
  4. 06 Dec, 2018 5 commits
    • Merge branch 'bpf_func_info-improvements' · a06aef4e
      Alexei Starovoitov authored
      Martin KaFai Lau says:
      
      ====================
      The patchset has a few improvements on bpf_func_info:
      1. Improvements on the behaviors of info.func_info, info.func_info_cnt
         and info.func_info_rec_size.
      2. Name change: s/insn_offset/insn_off/
      
      Please see individual patch for details.
      ====================
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: Expect !info.func_info and insn_off name changes in test_btf/libbpf/bpftool · 84ecc1f9
      Martin KaFai Lau authored
      Similar to info.jited_*, info.func_info could be 0 if
      bpf_dump_raw_ok() == false.
      
      This patch makes changes to test_btf and bpftool to expect info.func_info
      could be 0.
      
      This patch also makes the needed changes for s/insn_offset/insn_off/.
      Signed-off-by: Martin KaFai Lau <kafai@fb.com>
      Acked-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: tools: Sync uapi bpf.h for the name changes in bpf_func_info · 555249df
      Martin KaFai Lau authored
      This patch syncs the name changes in bpf_func_info to
      the tools/ directory.
      Signed-off-by: Martin KaFai Lau <kafai@fb.com>
      Acked-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: Change insn_offset to insn_off in bpf_func_info · d30d42e0
      Martin KaFai Lau authored
      A later patch will introduce "struct bpf_line_info", which
      has members "line_off" and "file_off" referring back to the
      string section in btf.  The line_"off" and file_"off" naming
      is more consistent with the convention in btf.h, where "off"
      means "offset" (e.g. name_off in "struct btf_type").
      
      The to-be-added "struct bpf_line_info" also has another
      member, "insn_off" which is the same as the "insn_offset"
      in "struct bpf_func_info".  Hence, this patch renames "insn_offset"
      to "insn_off" for "struct bpf_func_info".
      Signed-off-by: Martin KaFai Lau <kafai@fb.com>
      Acked-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: Improve the info.func_info and info.func_info_rec_size behavior · 7337224f
      Martin KaFai Lau authored
      1) When bpf_dump_raw_ok() == false and the kernel can provide >=1
         func_info to the userspace, the current behavior is setting
         the info.func_info_cnt to 0 instead of setting info.func_info
         to 0.
      
         This is different from the behavior of jited_func_lens/nr_jited_func_lens,
         jited_ksyms/nr_jited_ksyms, etc.

         This patch fixes it (i.e. sets func_info to 0, instead of
         func_info_cnt, when bpf_dump_raw_ok() == false).
      
      2) When userspace passes in info.func_info_cnt == 0, the kernel
         sets the expected func_info size back in
         info.func_info_rec_size.  It is a way for userspace to learn
         the func_info_rec_size the kernel expects, introduced in
         commit 838e9690 ("bpf: Introduce bpf_func_info").

         The exception is that the kernel expected size is not set when
         func_info is not available for a bpf_prog.  This makes the
         returned info.func_info_rec_size have different values
         depending on the returned value of info.func_info_cnt.

         This patch sets the kernel expected size in info.func_info_rec_size
         independent of info.func_info_cnt (see the probing sketch after
         this list).
      
      3) The current logic only rejects an invalid func_info_rec_size if
         func_info_cnt is non-zero.  This patch also rejects a nonzero
         info.func_info_rec_size that does not equal the kernel
         expected size.
      
      4) Set info.btf_id as long as prog->aux->btf != NULL.  That makes
         the later copy_to_user() code look the same as the others,
         which is easier to understand and maintain.

         prog->aux->btf is non-NULL only if prog->aux->func_info_cnt > 0.

         Decoupling info.btf_id from prog->aux->func_info_cnt is needed
         for the later line_info patch anyway.

         A similar change is made to bpf_get_prog_name().
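
      A hedged userspace sketch of the probing pattern from point (2): query
      once with func_info_cnt == 0 to learn the count and the kernel's
      expected record size, then query again with a buffer.  Error handling
      is minimal, bpf_obj_get_info_by_fd() is the existing libbpf wrapper for
      BPF_OBJ_GET_INFO_BY_FD, and the field names are the pre-rename ones
      used at this point in the series (func_info_cnt, not nr_func_info):

        #include <stdlib.h>
        #include <string.h>
        #include <linux/bpf.h>
        #include <bpf/bpf.h>

        static void *fetch_func_info(int prog_fd, __u32 *cnt, __u32 *rec_size)
        {
                struct bpf_prog_info info;
                __u32 info_len = sizeof(info);
                void *buf;

                /* Pass 1: func_info_cnt == 0, so the kernel reports the count
                 * and the record size it expects.
                 */
                memset(&info, 0, sizeof(info));
                if (bpf_obj_get_info_by_fd(prog_fd, &info, &info_len))
                        return NULL;

                *cnt = info.func_info_cnt;
                *rec_size = info.func_info_rec_size;
                if (!*cnt)
                        return NULL;

                /* Pass 2: hand the kernel a properly sized buffer. */
                buf = calloc(*cnt, *rec_size);
                if (!buf)
                        return NULL;
                memset(&info, 0, sizeof(info));
                info.func_info_cnt = *cnt;
                info.func_info_rec_size = *rec_size;
                info.func_info = (__u64)(unsigned long)buf;
                info_len = sizeof(info);
                if (bpf_obj_get_info_by_fd(prog_fd, &info, &info_len)) {
                        free(buf);
                        return NULL;
                }
                return buf;
        }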
      
      Fixes: 838e9690 ("bpf: Introduce bpf_func_info")
      Signed-off-by: Martin KaFai Lau <kafai@fb.com>
      Acked-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
  5. 05 Dec, 2018 4 commits
    • tools: bpftool: add a command to dump the trace pipe · 30da46b5
      Quentin Monnet authored
      BPF programs can use the bpf_trace_printk() helper to print debug
      information into the trace pipe. Add a subcommand
      "bpftool prog tracelog" to simply dump this pipe to the console.
      
      This is in good part copied from iproute2, where the feature is
      available with "tc exec bpf dbg". Changes include dumping pipe content
      to stdout instead of stderr and adding JSON support (content is dumped
      as an array of strings, one per line read from the pipe). This version
      is dual-licensed, with Daniel's permission.
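
      For context, a hedged sketch of the kind of program whose output lands
      in the pipe that "bpftool prog tracelog" dumps (the section name,
      attach point and helper-declaration pattern are assumptions in the
      selftests style; only bpf_trace_printk() itself is taken from the text
      above):

        #include <linux/bpf.h>

        #define SEC(name) __attribute__((section(name), used))

        static int (*bpf_trace_printk)(const char *fmt, int fmt_size, ...) =
                (void *)BPF_FUNC_trace_printk;

        SEC("tracepoint/syscalls/sys_enter_execve")
        int log_execve(void *ctx)
        {
                char fmt[] = "execve entered\n";

                bpf_trace_printk(fmt, sizeof(fmt));
                return 0;
        }

        char _license[] SEC("license") = "GPL";

      With such a program attached, "bpftool prog tracelog" streams these
      lines to stdout (or as a JSON array of strings with -j).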
      
      Cc: Daniel Borkmann <daniel@iogearbox.net>
      Suggested-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
      Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • Merge branch 'bpf-jit-overridable-alloc' · 41888179
      Daniel Borkmann authored
      Ard Biesheuvel says:
      
      ====================
      On arm64, modules are allocated from a 128 MB window which is close to
      the core kernel, so that relative direct branches are guaranteed to be
      in range (except in some KASLR configurations). Also, module_alloc()
      is in charge of allocating KASAN shadow memory when running with KASAN
      enabled.
      
      This means that the way BPF reuses module_alloc()/module_memfree() is
      undesirable on arm64 (and potentially other architectures as well),
      and so this series refactors BPF's use of those functions to permit
      architectures to change this behavior.
      
      Patch #1 breaks out the module_alloc() and module_memfree() calls into
      __weak functions so they can be overridden.
      
      Patch #2 implements the new alloc/free overrides for arm64
      
      Changes since v3:
      - drop 'const' modifier for free() hook void* argument
      - move the dedicated BPF region to before the module region, putting it
        within 4GB of the module and kernel regions on non-KASLR kernels
      
      Changes since v2:
      - properly build time and runtime tested this time (log after the diffstat)
      - create a dedicated 128 MB region at the top of the vmalloc space for BPF
        programs, ensuring that the programs will be in branching range of each
        other (which we currently rely upon) but at an arbitrary distance from
        the kernel and modules (which we don't care about)
      
      Changes since v1:
      - Drop misguided attempt to 'fix' and refactor the free path. Instead,
        just add another __weak wrapper for the invocation of module_memfree()
      ====================
      
      Cc: Daniel Borkmann <daniel@iogearbox.net>
      Cc: Alexei Starovoitov <ast@kernel.org>
      Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      Cc: Jann Horn <jannh@google.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Jessica Yu <jeyu@kernel.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: linux-kernel@vger.kernel.org
      Cc: netdev@vger.kernel.org
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • arm64/bpf: don't allocate BPF JIT programs in module memory · 91fc957c
      Ard Biesheuvel authored
      The arm64 module region is a 128 MB region that is kept close to
      the core kernel, in order to ensure that relative branches are
      always in range. So using the same region for programs that do
      not have this restriction is wasteful, and preferably avoided.
      
      Now that the core BPF JIT code permits the alloc/free routines to
      be overridden, implement them by vmalloc()/vfree() calls from a
      dedicated 128 MB region set aside for BPF programs. This ensures
      that BPF programs are still in branching range of each other, which
      is something the JIT currently depends upon (and is not guaranteed
      when using module_alloc() on KASLR kernels like we do currently).
      It also ensures that placement of BPF programs does not correlate
      with the placement of the core kernel or modules, making it less
      likely that leaking the former will reveal the latter.
      
      This also solves an issue under KASAN, where shadow memory is
      needlessly allocated for all BPF programs (which don't require KASAN
      shadow pages since they are not KASAN instrumented).
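
      A hedged sketch of what the arm64 override described above can look
      like: allocate from a dedicated BPF window of the vmalloc space and
      free with vfree().  The BPF_JIT_REGION_* macros and the exact
      __vmalloc_node_range() arguments are assumptions based on my reading of
      the arm64 code, not quoted from this log:

        #include <linux/filter.h>
        #include <linux/vmalloc.h>
        #include <asm/memory.h>

        void *bpf_jit_alloc_exec(unsigned long size)
        {
                /* Dedicated 128 MB window, away from the kernel and modules. */
                return __vmalloc_node_range(size, PAGE_SIZE,
                                            BPF_JIT_REGION_START, BPF_JIT_REGION_END,
                                            GFP_KERNEL, PAGE_KERNEL_EXEC, 0,
                                            NUMA_NO_NODE, __builtin_return_address(0));
        }

        void bpf_jit_free_exec(void *addr)
        {
                return vfree(addr);
        }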
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • bpf: add __weak hook for allocating executable memory · dc002bb6
      Ard Biesheuvel authored
      By default, BPF uses module_alloc() to allocate executable memory,
      but this is not necessary on all arches and potentially undesirable
      on some of them.
      
      So break out the module_alloc() and module_memfree() calls into __weak
      functions to allow them to be overridden in arch code.
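
      As I read the resulting bpf-next code, the __weak defaults look roughly
      like this (the hook names bpf_jit_alloc_exec()/bpf_jit_free_exec() are
      not stated in this log, so treat them as an assumption):

        #include <linux/filter.h>
        #include <linux/moduleloader.h>

        void *__weak bpf_jit_alloc_exec(unsigned long size)
        {
                return module_alloc(size);
        }

        void __weak bpf_jit_free_exec(void *addr)
        {
                module_memfree(addr);
        }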
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
  6. 04 Dec, 2018 5 commits
  7. 03 Dec, 2018 2 commits