1. 14 Feb, 2023 8 commits
    • bpf, documentation: Add graph documentation for non-owning refs · c31315c3
      Dave Marchevsky authored
      It is difficult to intuit the semantics of owning and non-owning
      references from verifier code. In order to keep the high-level details
      from being lost in the mailing list, this patch adds documentation
      explaining semantics and details.
      
      The target audience of doc added in this patch is folks working on BPF
      internals, as there's focus on "what should the verifier do here". Via
      reorganization or copy-and-paste, much of the content can probably be
      repurposed for BPF program writer audience as well.
      Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
      Link: https://lore.kernel.org/r/20230214004017.2534011-9-davemarchevsky@fb.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • selftests/bpf: Add rbtree selftests · 215249f6
      Dave Marchevsky authored
      This patch adds selftests exercising the logic changed/added in the
      previous patches in the series. A variety of successful and unsuccessful
      rbtree usages are validated:
      
      Success:
        * Add some nodes, let map_value bpf_rbtree_root destructor clean them
          up
        * Add some nodes, remove one using the non-owning ref leftover by
          successful rbtree_add() call
        * Add some nodes, remove one using the non-owning ref returned by
          rbtree_first() call
      
      Failure:
        * BTF where bpf_rb_root owns bpf_list_node should fail to load
        * BTF where node of type X is added to tree containing nodes of type Y
          should fail to load
        * No calling rbtree api functions in 'less' callback for rbtree_add
        * No releasing lock in 'less' callback for rbtree_add
        * No removing a node which hasn't been added to any tree
        * No adding a node which has already been added to a tree
        * No escaping of non-owning references past their lock's
          critical section
        * No escaping of non-owning references past other invalidation points
          (rbtree_remove)
      
      These tests mostly focus on rbtree-specific additions, but some of the
      failure cases revalidate scenarios common to both linked_list and rbtree
      which are covered in the former's tests. Better to be a bit redundant in
      case linked_list and rbtree semantics deviate over time.
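
      The second success scenario above can be sketched roughly as follows.
      This is a hedged reconstruction, not copied from the selftest sources;
      everything besides the bpf_rbtree_*/bpf_rb_* kfuncs and types (node_data,
      glock, groot, the private()/__contains() macros) follows the conventions
      of the series' selftest headers and is illustrative:

      ```c
      /* Add a node, then remove it via the non-owning ref left over
       * by a successful bpf_rbtree_add() call.
       */
      struct node_data {
              long key;
              struct bpf_rb_node node;
      };

      private(A) struct bpf_spin_lock glock;
      private(A) struct bpf_rb_root groot __contains(node_data, node);

      static bool less(struct bpf_rb_node *a, const struct bpf_rb_node *b)
      {
              struct node_data *na = container_of(a, struct node_data, node);
              struct node_data *nb = container_of(b, struct node_data, node);

              return na->key < nb->key;
      }

      SEC("tc")
      long rbtree_add_and_remove(void *ctx)
      {
              struct bpf_rb_node *res;
              struct node_data *n;

              n = bpf_obj_new(typeof(*n));
              if (!n)
                      return 1;
              n->key = 5;

              bpf_spin_lock(&glock);
              bpf_rbtree_add(&groot, &n->node, less);
              /* n is now a non-owning reference to the inserted node */
              res = bpf_rbtree_remove(&groot, &n->node);
              bpf_spin_unlock(&glock);

              if (res)
                      bpf_obj_drop(container_of(res, struct node_data, node));
              return 0;
      }
      ```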
      Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
      Link: https://lore.kernel.org/r/20230214004017.2534011-8-davemarchevsky@fb.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • c834df84
    • bpf: Special verifier handling for bpf_rbtree_{remove,first} · a40d3632
      Dave Marchevsky authored
      Newly-added bpf_rbtree_{remove,first} kfuncs have some special properties
      that require handling in the verifier:
      
        * both bpf_rbtree_remove and bpf_rbtree_first return the type containing
          the bpf_rb_node field, with the offset set to that field's offset,
          instead of a struct bpf_rb_node *
          * mark_reg_graph_node helper added in previous patch generalizes
            this logic, use it
      
        * bpf_rbtree_remove's node input is a node that's been inserted
          in the tree - a non-owning reference.
      
        * bpf_rbtree_remove must invalidate non-owning references in order to
          avoid aliasing issue. Use previously-added
          invalidate_non_owning_refs helper to mark this function as a
          non-owning ref invalidation point.
      
        * Unlike other functions, which convert one of their input arg regs to
          non-owning reference, bpf_rbtree_first takes no arguments and just
          returns a non-owning reference (possibly null)
          * For now verifier logic for this is special-cased instead of
            adding new kfunc flag.
      
      This patch, along with the previous one, complete special verifier
      handling for all rbtree API functions added in this series.
      
      With functional verifier handling of rbtree_remove, under current
      non-owning reference scheme, a node type with both bpf_{list,rb}_node
      fields could cause the verifier to accept programs which remove such
      nodes from collections they haven't been added to.
      
      In order to prevent this, this patch adds a check to btf_parse_fields
      which rejects structs with both bpf_{list,rb}_node fields. This is a
      temporary measure that can be removed after "collection identity"
      followup. See comment added in btf_parse_fields. A linked_list BTF test
      exercising the new check is added in this patch as well.
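
      The aliasing issue that invalidation prevents can be pictured with the
      following hypothetical program (illustrative names, assuming the rbtree
      types from this series):

      ```c
      bpf_spin_lock(&glock);
      n = bpf_rbtree_first(&groot);   /* non-owning ref */
      m = bpf_rbtree_first(&groot);   /* non-owning ref, aliases n */
      if (!n || !m)
              goto unlock;
      res = bpf_rbtree_remove(&groot, &n->node);
      /* res is now an owning reference to the removed node. If m survived
       * this call, it would be a stale alias to memory the program owns
       * outright and may free via bpf_obj_drop. So bpf_rbtree_remove
       * clobbers every non-owning ref, and this read is rejected:
       */
      key = m->key;   /* verifier error: m was invalidated */
      unlock:
      bpf_spin_unlock(&glock);
      ```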
      Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
      Link: https://lore.kernel.org/r/20230214004017.2534011-6-davemarchevsky@fb.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: Add callback validation to kfunc verifier logic · 5d92ddc3
      Dave Marchevsky authored
      Some BPF helpers take a callback function which the helper calls. For
      each helper that takes such a callback, there's a special call to
      __check_func_call with a callback-state-setting callback that sets up
      verifier bpf_func_state for the callback's frame.
      
      kfuncs don't have any of this infrastructure yet, so let's add it in
      this patch, following existing helper pattern as much as possible. To
      validate functionality of this added plumbing, this patch adds
      callback handling for the bpf_rbtree_add kfunc and hopes to lay
      groundwork for future graph datastructure callbacks.
      
      In the "general plumbing" category we have:
      
        * check_kfunc_call doing callback verification right before clearing
          CALLER_SAVED_REGS, exactly like check_helper_call
        * recognition of func_ptr BTF types in kfunc args as
          KF_ARG_PTR_TO_CALLBACK + propagation of subprogno for this arg type
      
      In the "rbtree_add / graph datastructure-specific plumbing" category:
      
        * Since bpf_rbtree_add must be called while the spin_lock associated
          with the tree is held, don't complain when callback's func_state
          doesn't unlock it by frame exit
        * Mark rbtree_add callback's args with ref_set_non_owning
          to prevent rbtree api functions from being called in the callback.
          Semantically this makes sense, as less() takes no ownership of its
          args when determining which comes first.
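
      Concretely, a less() callback for bpf_rbtree_add might look like the
      sketch below (struct and field names are illustrative). Because its args
      are marked non-owning, the callback may read them but cannot pass them
      to rbtree kfuncs:

      ```c
      static bool less(struct bpf_rb_node *a, const struct bpf_rb_node *b)
      {
              struct node_data *na = container_of(a, struct node_data, node);
              struct node_data *nb = container_of(b, struct node_data, node);

              /* Reads are fine. Calling e.g. bpf_rbtree_add(&groot, a, less)
               * here would be rejected: the kfunc expects an owning ref,
               * and a/b were marked non-owning via ref_set_non_owning. */
              return na->key < nb->key;
      }
      ```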
      Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
      Link: https://lore.kernel.org/r/20230214004017.2534011-5-davemarchevsky@fb.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: Add support for bpf_rb_root and bpf_rb_node in kfunc args · cd6791b4
      Dave Marchevsky authored
      Now that we find bpf_rb_root and bpf_rb_node in structs, let's give args
      that contain those types special classification and properly handle
      these types when checking kfunc args.
      
      "Properly handling" these types largely requires generalizing similar
      handling for bpf_list_{head,node}, with little new logic added in this
      patch.
      Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
      Link: https://lore.kernel.org/r/20230214004017.2534011-4-davemarchevsky@fb.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: Add bpf_rbtree_{add,remove,first} kfuncs · bd1279ae
      Dave Marchevsky authored
      This patch adds implementations of bpf_rbtree_{add,remove,first}
      and teaches verifier about their BTF_IDs as well as those of
      bpf_rb_{root,node}.
      
      All three kfuncs have some nonstandard component to their verification
      that needs to be addressed in future patches before programs can
      properly use them:
      
        * bpf_rbtree_add:     Takes 'less' callback, need to verify it
      
        * bpf_rbtree_first:   Returns ptr_to_node_type(off=rb_node_off) instead
                              of ptr_to_rb_node(off=0). Return value ref is
                              non-owning.

        * bpf_rbtree_remove:  Returns ptr_to_node_type(off=rb_node_off) instead
                              of ptr_to_rb_node(off=0). 2nd arg (node) is a
                              non-owning reference.
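
      In kernel-side C, the three kfuncs have roughly the following shapes
      (a sketch based on the description above; check the patch itself for
      the exact signatures):

      ```c
      /* Add node to the rbtree rooted at root, using less() for ordering. */
      void bpf_rbtree_add(struct bpf_rb_root *root, struct bpf_rb_node *node,
                          bool (less)(struct bpf_rb_node *a,
                                      const struct bpf_rb_node *b));

      /* Return the leftmost node, or NULL if the tree is empty.
       * The returned ref is non-owning. */
      struct bpf_rb_node *bpf_rbtree_first(struct bpf_rb_root *root);

      /* Remove node (passed as a non-owning ref) from the tree; the
       * return value is an owning ref that must be released. */
      struct bpf_rb_node *bpf_rbtree_remove(struct bpf_rb_root *root,
                                            struct bpf_rb_node *node);
      ```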
      Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
      Link: https://lore.kernel.org/r/20230214004017.2534011-3-davemarchevsky@fb.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: Add basic bpf_rb_{root,node} support · 9c395c1b
      Dave Marchevsky authored
      This patch adds special BPF_RB_{ROOT,NODE} btf_field_types similar to
      BPF_LIST_{HEAD,NODE}, adds the necessary plumbing to detect the new
      types, and adds bpf_rb_root_free function for freeing bpf_rb_root in
      map_values.
      
      structs bpf_rb_root and bpf_rb_node are opaque types meant to
      obscure structs rb_root_cached and rb_node, respectively.
      
      btf_struct_access will prevent BPF programs from touching these special
      fields automatically now that they're recognized.
      
      btf_check_and_fixup_fields now groups list_head and rb_root together as
      "graph root" fields and {list,rb}_node as "graph node", and does same
      ownership cycle checking as before. Note that this function does _not_
      prevent ownership type mixups (e.g. rb_root owning list_node) - that's
      handled by btf_parse_graph_root.
      
      After this patch, a bpf program can have a struct bpf_rb_root in a
      map_value, but not add anything to nor do anything useful with it.
      Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
      Link: https://lore.kernel.org/r/20230214004017.2534011-2-davemarchevsky@fb.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
  2. 13 Feb, 2023 10 commits
    • bpf: Migrate release_on_unlock logic to non-owning ref semantics · 6a3cd331
      Dave Marchevsky authored
      This patch introduces non-owning reference semantics to the verifier,
      specifically linked_list API kfunc handling. release_on_unlock logic for
      refs is refactored - with small functional changes - to implement these
      semantics, and bpf_list_push_{front,back} are migrated to use them.
      
      When a list node is pushed to a list, the program still has a pointer to
      the node:
      
        n = bpf_obj_new(typeof(*n));
      
        bpf_spin_lock(&l);
        bpf_list_push_back(&l, n);
        /* n still points to the just-added node */
        bpf_spin_unlock(&l);
      
      What the verifier considers n to be after the push, and thus what can be
      done with n, are changed by this patch.
      
      Common properties both before/after this patch:
        * After push, n is only a valid reference to the node until end of
          critical section
        * After push, n cannot be pushed to any list
        * After push, the program can read the node's fields using n
      
      Before:
        * After push, n retains the ref_obj_id which it received on
          bpf_obj_new, but the associated bpf_reference_state's
          release_on_unlock field is set to true
          * release_on_unlock field and associated logic is used to implement
            "n is only a valid ref until end of critical section"
        * After push, n cannot be written to, the node must be removed from
          the list before writing to its fields
        * After push, n is marked PTR_UNTRUSTED
      
      After:
        * After push, n's ref is released and ref_obj_id set to 0. NON_OWN_REF
          type flag is added to reg's type, indicating that it's a non-owning
          reference.
          * NON_OWN_REF flag and logic is used to implement "n is only a
            valid ref until end of critical section"
        * n can be written to (except for special fields e.g. bpf_list_node,
          timer, ...)
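
      Extending the snippet above, the post-patch semantics allow and reject
      the following (hedged sketch; the data field is illustrative):

      ```c
      n = bpf_obj_new(typeof(*n));
      if (!n)
              return 0;

      bpf_spin_lock(&l);
      bpf_list_push_back(&l, n);
      n->data = 42;          /* OK after this patch: non-owning refs are
                              * writable (special fields excluded) */
      bpf_spin_unlock(&l);

      n->data = 13;          /* still rejected: the non-owning ref was
                              * invalidated at end of critical section */
      ```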
      
      Summary of specific implementation changes to achieve the above:
      
        * release_on_unlock field, ref_set_release_on_unlock helper, and logic
          to "release on unlock" based on that field are removed
      
        * The anonymous active_lock struct used by bpf_verifier_state is
          pulled out into a named struct bpf_active_lock.
      
        * NON_OWN_REF type flag is introduced along with verifier logic
          changes to handle non-owning refs
      
        * Helpers are added to use NON_OWN_REF flag to implement non-owning
          ref semantics as described above
          * invalidate_non_owning_refs - helper to clobber all non-owning refs
            matching a particular bpf_active_lock identity. Replaces
            release_on_unlock logic in process_spin_lock.
          * ref_set_non_owning - set NON_OWN_REF type flag after doing some
            sanity checking
          * ref_convert_owning_non_owning - convert owning reference w/
            specified ref_obj_id to non-owning references. Set NON_OWN_REF
            flag for each reg with that ref_obj_id and 0-out its ref_obj_id
      
        * Update linked_list selftests to account for minor semantic
          differences introduced by this patch
          * Writes to a release_on_unlock node ref are not allowed, while
            writes to non-owning reference pointees are. As a result the
            linked_list "write after push" failure tests are no longer scenarios
            that should fail.
          * The test##missing_lock##op and test##incorrect_lock##op
            macro-generated failure tests need to have a valid node argument in
            order to have the same error output as before. Otherwise
            verification will fail early and the expected error output won't be seen.
      Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
      Link: https://lore.kernel.org/r/20230212092715.1422619-2-davemarchevsky@fb.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • Merge branch 'xdp-ice-mbuf' · 39c536ac
      Daniel Borkmann authored
      Alexander Lobakin says:
      
      ====================
      The set grew from the poor performance of %BPF_F_TEST_XDP_LIVE_FRAMES
      when the ice-backed device is a sender. Initially there were around
      3.3 Mpps / thread, while I have 5.5 on skb-based pktgen ...
      
      After fixing 0005 (0004 is a prereq for it) first (strange thing nobody
      noticed that earlier), I started catching random OOMs. This is how 0002
      (and partially 0001) appeared.
      
      0003 is a suggestion from Maciej to not waste time on refactoring dead
      lines. 0006 is a "cherry on top" to get away with the final 6.7 Mpps.
      4.5 of 6 are fixes, but only the first three are tagged, since it then
      starts being tricky. I may backport them manually later on.
      
      TL;DR for the series is that shortcuts are good, but only as long as
      they don't make the driver miss important things. %XDP_TX is purely
      driver-local, however .ndo_xdp_xmit() is not, and sometimes assumptions
      can be unsafe there.
      
      With that series and also one core code patch[0], "live frames" and
      xdp-trafficgen are now safe'n'fast on ice (probably more to come).
      
        [0] https://lore.kernel.org/all/20230209172827.874728-1-alexandr.lobakin@intel.com
      ====================
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • ice: Micro-optimize .ndo_xdp_xmit() path · ad07f29b
      Alexander Lobakin authored
      After the recent mbuf changes, ice_xmit_xdp_ring() became a 3-liner.
      It makes no sense to keep it global in a different file than its caller.
      Move it just next to the sole call site and mark static. Also, it
      doesn't need a full xdp_convert_frame_to_buff(). Save several cycles
      and fill only the fields used by __ice_xmit_xdp_ring() later on.
      Finally, since it doesn't modify @xdpf anyhow, mark the argument const
      to save some more (whole -11 bytes of .text! :D).
      
      Thanks to 1 jump less and less calcs as well, this yields as many as
      6.7 Mpps per queue. `xdp.data_hard_start = xdpf` is fully intentional
      again (see xdp_convert_buff_to_frame()) and just works when there are
      no source device's driver issues.
      Signed-off-by: Alexander Lobakin <alexandr.lobakin@intel.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
      Link: https://lore.kernel.org/bpf/20230210170618.1973430-7-alexandr.lobakin@intel.com
    • ice: Fix freeing XDP frames backed by Page Pool · 055d0920
      Alexander Lobakin authored
      As already mentioned, freeing any &xdp_frame via page_frag_free() is
      wrong, as it assumes the frame is backed by either an order-0 page or
      a page with no "patrons" behind it, while in fact frames backed by
      Page Pool can be redirected to a device whose driver doesn't use
      Page Pool.
      Keep storing a pointer to the raw buffer and then freeing it
      unconditionally via page_frag_free() for %XDP_TX frames, but introduce
      a separate type in the enum for frames coming through .ndo_xdp_xmit(),
      and free them via xdp_return_frame_bulk(). Note that saving xdpf as
      xdp_buff->data_hard_start is intentional and is always true when
      everything is configured properly.
      After this change, %XDP_REDIRECT from a Page Pool based driver to ice
      becomes zero-alloc as it should be and horrendous 3.3 Mpps / queue
      turn into 6.6, hehe.
      
      Let it go with no "Fixes:" tag as it spans across good 5+ commits and
      can't be trivially backported.
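
      The resulting cleanup path can be pictured as the sketch below. Enum
      member and field names are assumed for illustration, not copied from
      the driver; xdp_return_frame_bulk() and page_frag_free() are the real
      kernel helpers:

      ```c
      switch (tx_buf->type) {
      case ICE_TX_BUF_XDP_TX:
              /* driver-local XDP_TX frame: raw buffer, page_frag_free
               * remains safe here */
              page_frag_free(tx_buf->raw_buf);
              break;
      case ICE_TX_BUF_XDP_XMIT:
              /* .ndo_xdp_xmit() frame, possibly Page Pool backed: return
               * it via the generic helper, bulked for performance */
              xdp_return_frame_bulk(tx_buf->xdpf, &bq);
              break;
      default:
              break;
      }
      ```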
      Signed-off-by: Alexander Lobakin <alexandr.lobakin@intel.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
      Link: https://lore.kernel.org/bpf/20230210170618.1973430-6-alexandr.lobakin@intel.com
    • ice: Robustify cleaning/completing XDP Tx buffers · aa1d3faf
      Alexander Lobakin authored
      When queueing frames from a Page Pool for redirecting to a device backed
      by the ice driver, `perf top` shows heavy load on page_alloc() and
      page_frag_free(), despite that on a properly working system it must be
      fully or at least almost zero-alloc. The problem is in fact a bit deeper
      and raises from how ice cleans up completed Tx buffers.
      
      The story so far: when cleaning/freeing the resources related to
      a particular completed Tx frame (skbs, DMA mappings etc.), ice uses some
      heuristics only without setting any type explicitly (except for dummy
      Flow Director packets, which are marked via ice_tx_buf::tx_flags).
      This kinda works, but only up to some point. For example, currently ice
      assumes that each frame coming to __ice_xmit_xdp_ring(), is backed by
      either plain order-0 page or plain page frag, while it may also be
      backed by Page Pool or any other possible memory models introduced in
      future. This means any &xdp_frame must be freed properly via
      xdp_return_frame() family with no assumptions.
      
      In order to do that, the whole heuristics must be replaced with setting
      the Tx buffer/frame type explicitly, just how it's always been done via
      an enum. Let us reuse 16 bits from ::tx_flags -- 1 bit-and instr won't
      hurt much -- especially given that sometimes there was a check for
      %ICE_TX_FLAGS_DUMMY_PKT, which is now turned from a flag to an enum
      member. The rest of the changes is straightforward and most of it is
      just a conversion to rely now on the type set in &ice_tx_buf rather than
      to some secondary properties.
      For now, no functional changes intended, the change only prepares the
      ground for starting freeing XDP frames properly next step. And it must
      be done atomically/synchronously to not break stuff.
      Signed-off-by: Alexander Lobakin <alexandr.lobakin@intel.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
      Link: https://lore.kernel.org/bpf/20230210170618.1973430-5-alexandr.lobakin@intel.com
    • ice: Remove two impossible branches on XDP Tx cleaning · 923096b5
      Alexander Lobakin authored
      The tagged commit started sending %XDP_TX frames from the XSk Rx ring
      directly without converting them to &xdp_frame. However, when XSk is
      enabled on a queue pair, it has its separate Tx cleaning functions, so
      neither ice_clean_xdp_irq() nor ice_unmap_and_free_tx_buf() ever happens
      there.
      Remove impossible branches in order to reduce the diffstat of the
      upcoming change.
      
      Fixes: a24b4c6e ("ice: xsk: Do not convert to buff to frame for XDP_TX")
      Suggested-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
      Signed-off-by: Alexander Lobakin <alexandr.lobakin@intel.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
      Link: https://lore.kernel.org/bpf/20230210170618.1973430-4-alexandr.lobakin@intel.com
    • ice: Fix XDP Tx ring overrun · 0bd939b6
      Alexander Lobakin authored
      Sometimes, under heavy XDP Tx traffic, e.g. when using XDP traffic
      generator (%BPF_F_TEST_XDP_LIVE_FRAMES), the machine can catch OOM due
      to the driver not freeing all of the pages passed to it by
      .ndo_xdp_xmit().
      Turned out that during the development of the tagged commit, the check,
      which ensures that we have a free descriptor to queue a frame, moved
      into the branch happening only when a buffer has frags. Otherwise, we
      only run a cleaning cycle, but don't check anything.
      At the same time, there can be situations when the driver gets new
      frames to send, but there are no buffers that can be cleaned/completed
      and the ring has no free slots. It's very rare, but still possible
      (> 6.5 Mpps per ring).
      The driver then fills the next buffer/descriptor, effectively
      overwriting the data, which still needs to be freed.
      
      Restore the check after the cleaning routine to make sure there is a
      slot to queue a new frame. When there are frags, there still will be a
      separate check that we can place all of them, but if the ring is full,
      there's no point in wasting any more time.
      
      (minor: make `!ready_frames` unlikely since it happens ~1-2 times per
       billion of frames)
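
      The restored logic amounts to something like this hedged sketch (the
      ICE_DESC_UNUSED macro and stats fields are assumed for illustration):

      ```c
      free_slots = ICE_DESC_UNUSED(xdp_ring);
      if (!free_slots)
              free_slots = ice_clean_xdp_irq(xdp_ring); /* complete Tx */
      if (unlikely(!free_slots)) {
              /* ring is full: drop the frame instead of overwriting a
               * descriptor whose buffer hasn't been freed yet */
              xdp_ring->ring_stats->tx_stats.tx_busy++;
              return ICE_XDP_CONSUMED;
      }
      ```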
      
      Fixes: 3246a107 ("ice: Add support for XDP multi-buffer on Tx side")
      Signed-off-by: Alexander Lobakin <alexandr.lobakin@intel.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
      Link: https://lore.kernel.org/bpf/20230210170618.1973430-3-alexandr.lobakin@intel.com
    • ice: Fix ice_tx_ring::xdp_tx_active underflow · bc4db834
      Alexander Lobakin authored
      xdp_tx_active is used to indicate whether an XDP ring has any %XDP_TX
      frames queued to shortcut processing Tx cleaning for XSk-enabled queues.
      When !XSk, it simply indicates whether the ring has any queued frames in
      general.
      It gets increased on each frame placed onto the ring and counts the
      whole frame, not each frag. However, currently it gets decremented in
      ice_clean_xdp_tx_buf(), which is called per each buffer, i.e. per each
      frag. Thus, on completing multi-frag frames, an underflow happens.
      Move the decrement to the outer function and do it once per frame, not
      buf. Also, do that on the stack and update the ring counter after the
      loop is done to save several cycles.
      XSk rings are fine since there are no frags at the moment.
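
      The shape of the fix, as a hedged sketch (the helper and field names
      other than xdp_tx_active are assumed, not taken from the driver):

      ```c
      /* Count completed frames in a stack variable and write the ring
       * counter back once, instead of decrementing per buffer (per frag).
       */
      u32 i, frames = 0;

      for (i = 0; i < done_bufs; i++) {
              struct ice_tx_buf *tx_buf = &xdp_ring->tx_buf[idx];

              if (!tx_buf_is_frag(tx_buf))  /* head buf => one whole frame */
                      frames++;
              ice_free_tx_buf(xdp_ring, tx_buf);
              idx = next_idx(xdp_ring, idx);
      }
      xdp_ring->xdp_tx_active -= frames;
      ```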
      
      Fixes: 3246a107 ("ice: Add support for XDP multi-buffer on Tx side")
      Signed-off-by: Alexander Lobakin <alexandr.lobakin@intel.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
      Link: https://lore.kernel.org/bpf/20230210170618.1973430-2-alexandr.lobakin@intel.com
    • selftests/bpf: Fix out-of-srctree build · 0b075724
      Ilya Leoshkevich authored
      Building BPF selftests out of srctree fails with:
      
        make: *** No rule to make target '/linux-build//ima_setup.sh', needed by 'ima_setup.sh'.  Stop.
      
      The culprit is the rule that defines convenient shorthands like
      "make test_progs", which builds $(OUTPUT)/test_progs. These shorthands
      make sense only for binaries that are built though; scripts that live
      in the source tree do not end up in $(OUTPUT).
      
      Therefore drop $(TEST_PROGS) and $(TEST_PROGS_EXTENDED) from the rule.
      
      The issue has existed for a while, but it became a problem only after
      commit d68ae498 ("selftests/bpf: Install all required files to run
      selftests"), which added dependencies on these scripts.
      
      Fixes: 03dcb784 ("selftests/bpf: Add simple per-test targets to Makefile")
      Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Link: https://lore.kernel.org/bpf/20230208231211.283606-1-iii@linux.ibm.com
    • bpf: Add --skip_encoding_btf_inconsistent_proto, --btf_gen_optimized to pahole flags for v1.25 · 0243d3df
      Alan Maguire authored
      v1.25 of pahole supports filtering out functions with multiple inconsistent
      function prototypes or optimized-out parameters from the BTF representation.
      These present problems because there is no additional info in BTF saying which
      inconsistent prototype matches which function instance to help guide attachment,
      and functions with optimized-out parameters can lead to incorrect assumptions
      about register contents.
      
      So for now, filter out such functions while adding BTF representations for
      functions that have "."-suffixes (foo.isra.0) but not optimized-out parameters.
      This patch assumes that below linked changes land in pahole for v1.25.
      Signed-off-by: Alan Maguire <alan.maguire@oracle.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Acked-by: Jiri Olsa <jolsa@kernel.org>
      Link: https://lore.kernel.org/bpf/1675790102-23037-1-git-send-email-alan.maguire@oracle.com
      Link: https://lore.kernel.org/bpf/1675949331-27935-1-git-send-email-alan.maguire@oracle.com
  3. 11 Feb, 2023 14 commits
  4. 10 Feb, 2023 8 commits