1. 24 Oct, 2023 3 commits
    • bpf: exact states comparison for iterator convergence checks · 2793a8b0
      Eduard Zingerman authored
      Convergence for open coded iterators is computed in is_state_visited()
      by examining states with branches count > 1 and using states_equal().
      states_equal() computes sub-state relation using read and precision marks.
      Read and precision marks are propagated from children states,
      thus are not guaranteed to be complete inside a loop when branches
      count > 1. This could be demonstrated using the following unsafe program:
      
           1. r7 = -16
           2. r6 = bpf_get_prandom_u32()
           3. while (bpf_iter_num_next(&fp[-8])) {
           4.   if (r6 != 42) {
           5.     r7 = -32
           6.     r6 = bpf_get_prandom_u32()
           7.     continue
           8.   }
           9.   r0 = r10
          10.   r0 += r7
          11.   r8 = *(u64 *)(r0 + 0)
          12.   r6 = bpf_get_prandom_u32()
          13. }
      
      Here the verifier would first visit path 1-3, create a checkpoint at 3
      with r7=-16, then continue along 4-7 back to 3 with r7=-32.
      
      Because instructions at 9-12 had not been visited yet, the existing
      checkpoint at 3 does not have read or precision marks for r7.
      Thus states_equal() would return true, the verifier would discard the
      current state, and the unsafe memory access at 11 would not be caught.
      
      This commit fixes this loophole by introducing exact state comparisons
      for iterator convergence logic:
      - registers are compared using regs_exact() regardless of read or
        precision marks;
      - stack slots have to have identical type.
      
      Unfortunately, this is too strict even for simple programs like below:
      
          i = 0;
          while(iter_next(&it))
            i++;
      
      At each iteration step i++ would produce a new distinct state and
      eventually the instruction processing limit would be reached.
      
      To avoid such behavior, speculatively forget (widen) the range for
      imprecise scalar registers, if those registers were not precise at the
      end of the previous iteration and do not match exactly.
      
      This is a conservative heuristic that allows verification of a wide
      range of programs, however it precludes verification of programs that
      conjure an imprecise value on the first loop iteration and use it as
      precise on the second.
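      
      A minimal sketch of the widening rule, with illustrative names (the
      helper below is a simplification under assumptions about the verifier
      state layout, not the verbatim verifier code):
      
          /* Sketch: widen a scalar register at a loop head checkpoint if it
           * carries no precision mark and does not already match exactly.
           * maybe_widen_reg() is illustrative; only SCALAR_VALUE, 'precise'
           * and __mark_reg_unknown() are taken from verifier.c.
           */
          static void maybe_widen_reg(struct bpf_verifier_env *env,
                                      struct bpf_reg_state *rold,
                                      struct bpf_reg_state *rcur)
          {
                  /* only scalar registers are subject to widening */
                  if (rold->type != SCALAR_VALUE || rcur->type != SCALAR_VALUE)
                          return;
                  /* keep registers that carry precision marks */
                  if (rold->precise || rcur->precise)
                          return;
                  /* keep registers that already match exactly */
                  if (memcmp(rold, rcur, sizeof(*rold)) == 0)
                          return;
                  /* forget the concrete range: treat as unknown scalar */
                  __mark_reg_unknown(env, rcur);
          }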
      
      Test case iter_task_vma_for_each() presents one such case:
      
              unsigned int seen = 0;
              ...
              bpf_for_each(task_vma, vma, task, 0) {
                      if (seen >= 1000)
                              break;
                      ...
                      seen++;
              }
      
      Here clang generates the following code:
      
      <LBB0_4>:
            24:       r8 = r6                          ; stash current value of
                      ... body ...                       'seen'
            29:       r1 = r10
            30:       r1 += -0x8
            31:       call bpf_iter_task_vma_next
            32:       r6 += 0x1                        ; seen++;
            33:       if r0 == 0x0 goto +0x2 <LBB0_6>  ; exit on next() == NULL
            34:       r7 += 0x10
            35:       if r8 < 0x3e7 goto -0xc <LBB0_4> ; loop on seen < 1000
      
      <LBB0_6>:
            ... exit ...
      
      Note that the counter in r6 is copied to r8 before being incremented,
      and the conditional jump is done using r8. Because of this, the
      precision mark for r6 lags one state behind the precision mark on r8,
      and the widening logic kicks in.
      
      Adding barrier_var(seen) after the conditional is sufficient to force
      clang to use the same register for both counting and the conditional
      jump; a sketch of the workaround follows.
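      
      For illustration, the reworked loop might look as follows
      (barrier_var() is the asm-barrier macro from bpf_helpers.h; the
      surrounding program is abridged to the relevant part):
      
          unsigned int seen = 0;
          
          bpf_for_each(task_vma, vma, task, 0) {
                  if (seen >= 1000)
                          break;
                  /* compiler barrier: forces clang to keep 'seen' in one
                   * register across the conditional and the increment */
                  barrier_var(seen);
                  seen++;
          }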
      
      This issue was discussed in the thread [1] which was started by
      Andrew Werner <awerner32@gmail.com> demonstrating a similar bug
      in callback functions handling. The callbacks would be addressed
      in a followup patch.
      
      [1] https://lore.kernel.org/bpf/97a90da09404c65c8e810cf83c94ac703705dc0e.camel@gmail.com/
      
      Co-developed-by: Andrii Nakryiko <andrii.nakryiko@gmail.com>
      Co-developed-by: Alexei Starovoitov <alexei.starovoitov@gmail.com>
      Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
      Link: https://lore.kernel.org/r/20231024000917.12153-4-eddyz87@gmail.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: extract same_callsites() as utility function · 4c97259a
      Eduard Zingerman authored
      Extract same_callsites() from clean_live_states() as a utility function.
      This function would be used by the next patch in the set.
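      
      For reference, the extracted helper is essentially a frame-by-frame
      comparison of callsites; a sketch under the assumption that the
      verifier state layout matches verifier.c:
      
          static bool same_callsites(struct bpf_verifier_state *a,
                                     struct bpf_verifier_state *b)
          {
                  int fr;
          
                  if (a->curframe != b->curframe)
                          return false;
          
                  /* states are comparable only if every active frame was
                   * entered from the same call instruction */
                  for (fr = a->curframe; fr >= 0; fr--)
                          if (a->frame[fr]->callsite != b->frame[fr]->callsite)
                                  return false;
          
                  return true;
          }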
      Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
      Link: https://lore.kernel.org/r/20231024000917.12153-3-eddyz87@gmail.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: move explored_state() closer to the beginning of verifier.c · 3c4e420c
      Eduard Zingerman authored
      Subsequent patches would make use of the explored_state() function.
      Move it up to avoid adding an unnecessary prototype.
      Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
      Link: https://lore.kernel.org/r/20231024000917.12153-2-eddyz87@gmail.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
  2. 17 Oct, 2023 8 commits
    • selftests/bpf: Add additional mprog query test coverage · 24516309
      Daniel Borkmann authored
      Add several new test cases which assert corner cases of the mprog query
      mechanism, for example, passing in an array that is smaller or larger
      than the current count; a sketch of such a query follows below.
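      
      For illustration, a query passing a deliberately undersized array might
      look roughly like this (a sketch using libbpf's bpf_prog_query_opts();
      'loopback' stands for the test interface's ifindex and is assumed from
      the selftest context):
      
          LIBBPF_OPTS(bpf_prog_query_opts, optq);
          __u32 prog_ids[1]; /* smaller than the number of attached programs */
          int err;
      
          optq.prog_ids = prog_ids;
          optq.count = ARRAY_SIZE(prog_ids);
          /* the query must fail or truncate cleanly, never overrun the array */
          err = bpf_prog_query_opts(loopback, BPF_TCX_INGRESS, &optq);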
      
        ./test_progs -t tc_opts
        #252     tc_opts_after:OK
        #253     tc_opts_append:OK
        #254     tc_opts_basic:OK
        #255     tc_opts_before:OK
        #256     tc_opts_chain_classic:OK
        #257     tc_opts_chain_mixed:OK
        #258     tc_opts_delete_empty:OK
        #259     tc_opts_demixed:OK
        #260     tc_opts_detach:OK
        #261     tc_opts_detach_after:OK
        #262     tc_opts_detach_before:OK
        #263     tc_opts_dev_cleanup:OK
        #264     tc_opts_invalid:OK
        #265     tc_opts_max:OK
        #266     tc_opts_mixed:OK
        #267     tc_opts_prepend:OK
        #268     tc_opts_query:OK
        #269     tc_opts_query_attach:OK
        #270     tc_opts_replace:OK
        #271     tc_opts_revision:OK
        Summary: 20/0 PASSED, 0 SKIPPED, 0 FAILED
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Reviewed-by: Alan Maguire <alan.maguire@oracle.com>
      Link: https://lore.kernel.org/bpf/20231017081728.24769-1-daniel@iogearbox.net
    • selftests/bpf: Add selftest for bpf_task_under_cgroup() in sleepable prog · 44cb03f1
      Yafang Shao authored
      The result is as follows:
      
        $ tools/testing/selftests/bpf/test_progs --name=task_under_cgroup
        #237     task_under_cgroup:OK
        Summary: 1/0 PASSED, 0 SKIPPED, 0 FAILED
      
      Without the previous patch, there will be RCU warnings in dmesg when
      CONFIG_PROVE_RCU is enabled; with the previous patch applied, there
      are no warnings.
      Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Stanislav Fomichev <sdf@google.com>
      Link: https://lore.kernel.org/bpf/20231007135945.4306-2-laoar.shao@gmail.com
    • bpf: Fix missed rcu read lock in bpf_task_under_cgroup() · 29a7e00f
      Yafang Shao authored
      When employed within a sleepable program not under RCU protection, the
      use of 'bpf_task_under_cgroup()' may trigger a warning in the kernel log,
      particularly when CONFIG_PROVE_RCU is enabled:
      
        [ 1259.662357] WARNING: suspicious RCU usage
        [ 1259.662358] 6.5.0+ #33 Not tainted
        [ 1259.662360] -----------------------------
        [ 1259.662361] include/linux/cgroup.h:423 suspicious rcu_dereference_check() usage!
      
      Other info that might help to debug this:
      
        [ 1259.662366] rcu_scheduler_active = 2, debug_locks = 1
        [ 1259.662368] 1 lock held by trace/72954:
        [ 1259.662369]  #0: ffffffffb5e3eda0 (rcu_read_lock_trace){....}-{0:0}, at: __bpf_prog_enter_sleepable+0x0/0xb0
      
      Stack backtrace:
      
        [ 1259.662385] CPU: 50 PID: 72954 Comm: trace Kdump: loaded Not tainted 6.5.0+ #33
        [ 1259.662391] Call Trace:
        [ 1259.662393]  <TASK>
        [ 1259.662395]  dump_stack_lvl+0x6e/0x90
        [ 1259.662401]  dump_stack+0x10/0x20
        [ 1259.662404]  lockdep_rcu_suspicious+0x163/0x1b0
        [ 1259.662412]  task_css_set.part.0+0x23/0x30
        [ 1259.662417]  bpf_task_under_cgroup+0xe7/0xf0
        [ 1259.662422]  bpf_prog_7fffba481a3bcf88_lsm_run+0x5c/0x93
        [ 1259.662431]  bpf_trampoline_6442505574+0x60/0x1000
        [ 1259.662439]  bpf_lsm_bpf+0x5/0x20
        [ 1259.662443]  ? security_bpf+0x32/0x50
        [ 1259.662452]  __sys_bpf+0xe6/0xdd0
        [ 1259.662463]  __x64_sys_bpf+0x1a/0x30
        [ 1259.662467]  do_syscall_64+0x38/0x90
        [ 1259.662472]  entry_SYSCALL_64_after_hwframe+0x6e/0xd8
        [ 1259.662479] RIP: 0033:0x7f487baf8e29
        [...]
        [ 1259.662504]  </TASK>
      
      This issue can be reproduced by executing a straightforward program, as
      demonstrated below:
      
      SEC("lsm.s/bpf")
      int BPF_PROG(lsm_run, int cmd, union bpf_attr *attr, unsigned int size)
      {
              struct cgroup *cgrp = NULL;
              struct task_struct *task;
              int ret = 0;
      
              if (cmd != BPF_LINK_CREATE)
                      return 0;
      
              // The cgroup2 should be mounted first
              cgrp = bpf_cgroup_from_id(1);
              if (!cgrp)
                      goto out;
              task = bpf_get_current_task_btf();
              if (bpf_task_under_cgroup(task, cgrp))
                      ret = -1;
              bpf_cgroup_release(cgrp);
      
      out:
              return ret;
      }
      
      After running the program, if you subsequently execute another BPF program,
      you will encounter the warning.
      
      It's worth noting that task_under_cgroup_hierarchy() is also utilized by
      bpf_current_task_under_cgroup(). However, bpf_current_task_under_cgroup()
      doesn't exhibit this issue because it cannot be used in sleepable BPF
      programs.
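      
      The fix takes the RCU read lock around the cgroup hierarchy walk. A
      sketch of the kfunc after the change (assuming the pre-existing helper
      names in kernel/bpf/helpers.c):
      
          __bpf_kfunc long bpf_task_under_cgroup(struct task_struct *task,
                                                 struct cgroup *ancestor)
          {
                  long ret;
      
                  /* task_under_cgroup_hierarchy() dereferences the task's
                   * css_set, which requires RCU protection */
                  rcu_read_lock();
                  ret = task_under_cgroup_hierarchy(task, ancestor);
                  rcu_read_unlock();
                  return ret;
          }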
      
      Fixes: b5ad4cdc ("bpf: Add bpf_task_under_cgroup() kfunc")
      Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Stanislav Fomichev <sdf@google.com>
      Cc: Feng Zhou <zhoufeng.zf@bytedance.com>
      Cc: KP Singh <kpsingh@kernel.org>
      Link: https://lore.kernel.org/bpf/20231007135945.4306-1-laoar.shao@gmail.com
    • net, bpf: Add a warning if NAPI cb missed xdp_do_flush(). · 9a675ba5
      Sebastian Andrzej Siewior authored
      A few drivers were missing a xdp_do_flush() invocation after
      XDP_REDIRECT.
      
      Add three helper functions, one for each of the per-CPU lists; each
      returns true if its per-CPU list is non-empty and flushes the list.
      
      Add xdp_do_check_flushed() which invokes each helper function and
      emits a warning if any of them found a non-empty list.
      
      Hide everything behind CONFIG_DEBUG_NET.
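      
      A sketch of the resulting check, using the helper names described
      above (illustrative rather than verbatim):
      
          #ifdef CONFIG_DEBUG_NET
          void xdp_do_check_flushed(struct napi_struct *napi)
          {
                  bool ret;
      
                  /* each helper flushes its per-CPU list and reports
                   * whether anything was left unflushed by the driver */
                  ret = dev_check_flush();
                  ret |= cpu_map_check_flush();
                  ret |= xsk_map_check_flush();
      
                  WARN_ONCE(ret, "Missing xdp_do_flush() invocation after NAPI by %ps\n",
                            napi->poll);
          }
          #endif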
      Suggested-by: Jesper Dangaard Brouer <hawk@kernel.org>
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>
      Acked-by: Jakub Kicinski <kuba@kernel.org>
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Link: https://lore.kernel.org/bpf/20231016125738.Yt79p1uF@linutronix.de
    • libbpf: Don't assume SHT_GNU_verdef presence for SHT_GNU_versym section · 137df118
      Andrii Nakryiko authored
      Fix too eager assumption that SHT_GNU_verdef ELF section is going to be
      present whenever binary has SHT_GNU_versym section. It seems like either
      SHT_GNU_verdef or SHT_GNU_verneed can be used, so failing on missing
      SHT_GNU_verdef actually breaks use cases in production.
      
      One specific reported issue, which was used to manually test this fix,
      was trying to attach to `readline` function in BASH binary.
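      
      For reference, that scenario boils down to a uprobe like the following
      (a sketch; the binary path and symbol come from the report, the
      program body is a placeholder):
      
          #include "vmlinux.h"
          #include <bpf/bpf_helpers.h>
          #include <bpf/bpf_tracing.h>
      
          /* auto-attach to the versioned readline symbol in bash */
          SEC("uprobe//bin/bash:readline")
          int BPF_UPROBE(readline_entry)
          {
                  bpf_printk("readline called");
                  return 0;
          }
      
          char LICENSE[] SEC("license") = "GPL";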
      
      Fixes: bb7fa093 ("libbpf: Support symbol versioning for uprobe")
      Reported-by: Liam Wisehart <liamwisehart@meta.com>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Tested-by: Manu Bretelle <chantr4@gmail.com>
      Reviewed-by: Fangrui Song <maskray@google.com>
      Acked-by: Hengqi Chen <hengqi.chen@gmail.com>
      Link: https://lore.kernel.org/bpf/20231016182840.4033346-1-andrii@kernel.org
    • Merge tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next · a3c2dd96
      Jakub Kicinski authored
      Daniel Borkmann says:
      
      ====================
      pull-request: bpf-next 2023-10-16
      
      We've added 90 non-merge commits during the last 25 day(s) which contain
      a total of 120 files changed, 3519 insertions(+), 895 deletions(-).
      
      The main changes are:
      
      1) Add missed stats for kprobes to retrieve the number of missed kprobe
         executions and subsequent executions of BPF programs, from Jiri Olsa.
      
      2) Add cgroup BPF sockaddr hooks for unix sockets. The use case is
         for systemd to reimplement the LogNamespace feature which allows
         running multiple instances of systemd-journald to process the logs
         of different services, from Daan De Meyer.
      
      3) Implement BPF CPUv4 support for s390x BPF JIT, from Ilya Leoshkevich.
      
      4) Improve BPF verifier log output for scalar registers to better
         disambiguate their internal state wrt defaults vs min/max values
         matching, from Andrii Nakryiko.
      
      5) Extend the BPF fib lookup helpers for IPv4/IPv6 to support retrieving
         the source IP address with a new BPF_FIB_LOOKUP_SRC flag,
         from Martynas Pumputis.
      
      6) Add support for open-coded task_vma iterator to help with symbolization
         for BPF-collected user stacks, from Dave Marchevsky.
      
      7) Add libbpf getters for accessing individual BPF ring buffers which
         is useful for polling them individually, for example, from Martin Kelly.
      
      8) Extend AF_XDP selftests to validate the SHARED_UMEM feature,
         from Tushar Vyavahare.
      
      9) Improve BPF selftests cross-building support for riscv arch,
         from Björn Töpel.
      
      10) Add the ability to pin a BPF timer to the same calling CPU,
         from David Vernet.
      
      11) Fix libbpf's bpf_tracing.h macros for riscv to use the generic
         implementation of PT_REGS_SYSCALL_REGS() to access syscall arguments,
         from Alexandre Ghiti.
      
      12) Extend libbpf to support symbol versioning for uprobes, from Hengqi Chen.
      
      13) Fix bpftool's skeleton code generation to guarantee that ELF data
          is 8 byte aligned, from Ian Rogers.
      
      14) Inherit system-wide cpu_mitigations_off() setting for Spectre v1/v4
          security mitigations in BPF verifier, from Yafang Shao.
      
      15) Annotate struct bpf_stack_map with __counted_by attribute to prepare
          BPF side for upcoming __counted_by compiler support, from Kees Cook.
      
      * tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (90 commits)
        bpf: Ensure proper register state printing for cond jumps
        bpf: Disambiguate SCALAR register state output in verifier logs
        selftests/bpf: Make align selftests more robust
        selftests/bpf: Improve missed_kprobe_recursion test robustness
        selftests/bpf: Improve percpu_alloc test robustness
        selftests/bpf: Add tests for open-coded task_vma iter
        bpf: Introduce task_vma open-coded iterator kfuncs
        selftests/bpf: Rename bpf_iter_task_vma.c to bpf_iter_task_vmas.c
        bpf: Don't explicitly emit BTF for struct btf_iter_num
        bpf: Change syscall_nr type to int in struct syscall_tp_t
        net/bpf: Avoid unused "sin_addr_len" warning when CONFIG_CGROUP_BPF is not set
        bpf: Avoid unnecessary audit log for CPU security mitigations
        selftests/bpf: Add tests for cgroup unix socket address hooks
        selftests/bpf: Make sure mount directory exists
        documentation/bpf: Document cgroup unix socket address hooks
        bpftool: Add support for cgroup unix socket address hooks
        libbpf: Add support for cgroup unix socket address hooks
        bpf: Implement cgroup sockaddr hooks for unix sockets
        bpf: Add bpf_sock_addr_set_sun_path() to allow writing unix sockaddr from bpf
        bpf: Propagate modified uaddrlen from cgroup sockaddr programs
        ...
      ====================
      
      Link: https://lore.kernel.org/r/20231016204803.30153-1-daniel@iogearbox.net
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
    • page_pool: fragment API support for 32-bit arch with 64-bit DMA · 90de47f0
      Yunsheng Lin authored
      Currently page_pool_alloc_frag() is not supported on 32-bit arches
      with 64-bit DMA because of the overlap between pp_frag_count and
      dma_addr_upper in 'struct page' on those arches. Such configurations
      seem to be quite common, see [1], which means drivers may need to
      handle this case themselves when using the fragment API.
      
      It is assumed that the combination of the above arches with an
      address space >16TB does not exist: all those arches have a 64-bit
      equivalent, and it seems logical to use the 64-bit version on a
      system with a large address space. It is also assumed that the DMA
      address is page aligned when we are DMA-mapping a page-aligned
      buffer, see [2].
      
      That means the lower 12 bits of a page-aligned DMA address are always
      zero, so we can reuse those bits on the above arches to support
      32b+12b addressing, which covers 16TB of memory.
      
      If either assumption turns out to be wrong, a warning is emitted so
      that users can report it to us.
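      
      A sketch of the 32b+12b encoding (illustrative only; the guard
      condition and placement in the page_pool headers are assumptions):
      
          /* Store a page-aligned 64-bit DMA address in a 32-bit unsigned
           * long by shifting out the always-zero low PAGE_SHIFT bits;
           * report failure if the address does not survive the round trip,
           * i.e. if the assumptions above are violated.
           */
          static inline bool page_pool_set_dma_addr(struct page *page,
                                                    dma_addr_t addr)
          {
                  if (sizeof(dma_addr_t) > sizeof(unsigned long)) {
                          page->dma_addr = addr >> PAGE_SHIFT;
                          return addr != (dma_addr_t)page->dma_addr << PAGE_SHIFT;
                  }
                  page->dma_addr = addr;
                  return false;
          }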
      
      1. https://lore.kernel.org/all/20211117075652.58299-1-linyunsheng@huawei.com/
      2. https://lore.kernel.org/all/20230818145145.4b357c89@kernel.org/
      
      Tested-by: Alexander Lobakin <aleksander.lobakin@intel.com>
      Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
      CC: Lorenzo Bianconi <lorenzo@kernel.org>
      CC: Alexander Duyck <alexander.duyck@gmail.com>
      CC: Liang Chen <liangchen.linux@gmail.com>
      CC: Guillaume Tucker <guillaume.tucker@collabora.com>
      CC: Matthew Wilcox <willy@infradead.org>
      CC: Linux-MM <linux-mm@kvack.org>
      Link: https://lore.kernel.org/r/20231013064827.61135-2-linyunsheng@huawei.com
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
    • net: stub tcp_gro_complete if CONFIG_INET=n · e411a8e3
      Jacob Keller authored
      A few networking drivers including bnx2x, bnxt, qede, and idpf call
      tcp_gro_complete as part of offloading TCP GRO. The function is only
      defined if CONFIG_INET is true, since it's TCP-specific and is
      meaningless if the kernel lacks IP networking support.
      
      The combination of trying to use the complex network drivers with
      CONFIG_NET but not CONFIG_INET is rather unlikely in practice: most use
      cases are going to need IP networking.
      
      The tcp_gro_complete function just sets some data in the socket buffer for
      use in processing the TCP packet in the event that the GRO was offloaded to
      the device. If the kernel lacks TCP support, such setup will simply go
      unused.
      
      The bnx2x, bnxt, and qede drivers wrap their TCP offload support in
      CONFIG_INET checks and skip handling on such kernels.
      
      The idpf driver did not check CONFIG_INET and thus fails to link if the
      kernel is configured with CONFIG_NET=y, CONFIG_IDPF=(m|y), and
      CONFIG_INET=n.
      
      While checking CONFIG_INET does allow the driver to bypass significantly
      more instructions in the event that we know TCP networking isn't supported,
      the configuration is unlikely to be used widely.
      
      Rather than require driver authors to care about this, stub the
      tcp_gro_complete function when CONFIG_INET=n. This allows drivers to be
      left as-is. It does mean the idpf driver will perform slightly more work
      than strictly necessary when CONFIG_INET=n, since it will still execute
      some of the skb setup in idpf_rx_rsc. However, that work would be performed
      in the case where CONFIG_INET=y anyways.
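      
      A sketch of such a stub (assuming it sits next to the existing
      declaration in include/net/tcp.h):
      
          #ifdef CONFIG_INET
          void tcp_gro_complete(struct sk_buff *skb);
          #else
          static inline void tcp_gro_complete(struct sk_buff *skb) { }
          #endif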
      
      I did not change the existing drivers, since they appear to wrap a
      significant portion of code when CONFIG_INET=n. There is little benefit in
      trashing these drivers just to unwrap and remove the CONFIG_INET check.
      
      Using a stub for tcp_gro_complete is still beneficial, as it means future
      drivers no longer need to worry about this case of CONFIG_NET=y and
      CONFIG_INET=n, which should reduce noise from buildbots that check such a
      configuration.
      Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
      Acked-by: Randy Dunlap <rdunlap@infradead.org>
      Tested-by: Randy Dunlap <rdunlap@infradead.org> # build-tested
      Link: https://lore.kernel.org/r/20231013185502.1473541-1-jacob.e.keller@intel.com
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>