1. 26 Mar, 2023 14 commits
  2. 25 Mar, 2023 4 commits
    • Merge branch 'Don't invoke KPTR_REF destructor on NULL xchg' · 496f4f1b
      Alexei Starovoitov authored
      David Vernet says:
      
      ====================
      
      When a map value is being freed, we loop over all of the fields of the
      corresponding BPF object and issue the appropriate cleanup calls
      corresponding to the field's type. If the field is a referenced kptr, we
      atomically xchg the value out of the map, and invoke the kptr's
      destructor on whatever was there before.
      
      Currently, we always invoke the destructor (or bpf_obj_drop() for a
      local kptr) on any kptr, including if no value was xchg'd out of the
      map. This means that any function serving as the kptr's KF_RELEASE
      destructor must always treat the argument as possibly NULL, and we
      invoke unnecessary (and seemingly unsafe) cleanup logic for the local
      kptr path as well.
      
      This is an odd requirement -- KF_RELEASE kfuncs that are invoked by BPF
      programs do not have this restriction, and the verifier will fail to
      load the program if the register containing the to-be-released type has
      any untrusted modifiers (e.g. PTR_UNTRUSTED or PTR_MAYBE_NULL). So as to
      simplify the expectations required for a KF_RELEASE kfunc, this patch
      set updates the KPTR_REF destructor logic to only be invoked when a
      non-NULL value is xchg'd out of the map.
      
      Additionally, the patch removes now-unnecessary KF_RELEASE calls from
      several kfuncs, and finally, updates the verifier to have KF_RELEASE
      automatically imply KF_TRUSTED_ARGS. This restriction was already
      implicitly happening because of the aforementioned logic in the verifier
      to reject any regs with untrusted modifiers, and to enforce that
      KF_RELEASE args are passed with a 0 offset. This change just updates the
      behavior to match that of other trusted args. This patch is left to the
      end of the series in case it happens to be controversial, as it arguably
      is slightly orthogonal to the purpose of the rest of the series.
      ====================
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: Treat KF_RELEASE kfuncs as KF_TRUSTED_ARGS · 6c831c46
      David Vernet authored
      KF_RELEASE kfuncs are not currently treated as having KF_TRUSTED_ARGS,
      even though they have a superset of the requirements of KF_TRUSTED_ARGS.
      Like KF_TRUSTED_ARGS, KF_RELEASE kfuncs require a 0-offset argument, and
      don't allow NULL-able arguments. Unlike KF_TRUSTED_ARGS which require
      _either_ an argument with ref_obj_id > 0, _or_ (ref->type &
      BPF_REG_TRUSTED_MODIFIERS) (and no unsafe modifiers allowed), KF_RELEASE
      only allows for ref_obj_id > 0.  Because KF_RELEASE today doesn't
      automatically imply KF_TRUSTED_ARGS, some of these requirements are
      enforced in different ways that can make the behavior of the verifier
      feel unpredictable. For example, a KF_RELEASE kfunc with a NULL-able
      argument will currently fail in the verifier with a message like, "arg#0
      is ptr_or_null_ expected ptr_ or socket" rather than "Possibly NULL
      pointer passed to trusted arg0". Our intention is the same, but the
      semantics are different due to implementation details that kfunc authors
      and BPF program writers should not need to care about.
      
      Let's make the behavior of the verifier more consistent and intuitive by
      having KF_RELEASE kfuncs imply the presence of KF_TRUSTED_ARGS. Our
      eventual goal is to have all kfuncs assume KF_TRUSTED_ARGS by default
      anyways, so this takes us a step in that direction.
      
      Note that it does not make sense to assume KF_TRUSTED_ARGS for all
      KF_ACQUIRE kfuncs. KF_ACQUIRE kfuncs can have looser semantics than
      KF_RELEASE, with e.g. KF_RCU | KF_RET_NULL. We may want to have
      KF_ACQUIRE imply KF_TRUSTED_ARGS _unless_ KF_RCU is specified, but that
      can be left to another patch set, and there are no such subtleties to
      address for KF_RELEASE.
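
      A sketch of the verifier-side change, roughly of this shape (not the
      exact diff):

         static bool is_kfunc_trusted_args(struct bpf_kfunc_call_arg_meta *meta)
         {
                 /* KF_RELEASE now implies the KF_TRUSTED_ARGS requirements. */
                 return meta->kfunc_flags & (KF_TRUSTED_ARGS | KF_RELEASE);
         }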
      Signed-off-by: David Vernet <void@manifault.com>
      Link: https://lore.kernel.org/r/20230325213144.486885-4-void@manifault.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: Remove now-unnecessary NULL checks for KF_RELEASE kfuncs · fb2211a5
      David Vernet authored
      Now that we do not invoke kfunc destructors when the kptr in a map is
      NULL, we no longer require NULL checks in many of our KF_RELEASE kfuncs.
      This patch removes those NULL checks.
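
      As a hedged illustration of the kind of check that becomes unnecessary
      (the kfunc and helper names here are hypothetical, not the exact ones
      touched by this patch):

         __bpf_kfunc void bpf_foo_release(struct foo *f)
         {
                 if (!f)         /* a guard like this can now be dropped */
                         return;
                 foo_put(f);
         }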
      Signed-off-by: David Vernet <void@manifault.com>
      Link: https://lore.kernel.org/r/20230325213144.486885-3-void@manifault.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: Only invoke kptr dtor following non-NULL xchg · 1431d0b5
      David Vernet authored
      When a map value is being freed, we loop over all of the fields of the
      corresponding BPF object and issue the appropriate cleanup calls
      corresponding to the field's type. If the field is a referenced kptr, we
      atomically xchg the value out of the map, and invoke the kptr's
      destructor on whatever was there before (or bpf_obj_drop() it if it was
      a local kptr).
      
      Currently, we always invoke the destructor (either bpf_obj_drop() or the
      kptr's registered destructor) on any KPTR_REF-type field in a map, even
      if there wasn't a value in the map. This means that any function serving
      as the kptr's KF_RELEASE destructor must always treat the argument as
      possibly NULL, as the following can and regularly does happen:
      
      void *xchgd_field;
      
      /* No value was in the map, so xchgd_field is NULL */
       xchgd_field = (void *)xchg((unsigned long *)field_ptr, 0);
      field->kptr.dtor(xchgd_field);
      
      These are odd semantics to impose on KF_RELEASE kfuncs -- BPF programs
      are prohibited by the verifier from passing NULL pointers to KF_RELEASE
      kfuncs, so it doesn't make sense to require this of BPF programs, but
      not the main kernel destructor path. It's also unnecessary to invoke any
      cleanup logic for local kptrs. If there is no object there, there's
      nothing to drop.
      
      So as to allow KF_RELEASE kfuncs to fully assume that an argument is
      non-NULL, this patch updates a KPTR_REF's destructor to only be invoked
      when a non-NULL value is xchg'd out of the kptr map field.
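
      A sketch of the resulting cleanup path (simplified; not the exact kernel
      code, and "is_local_kptr" is a placeholder for the actual check):

         xchgd_field = (void *)xchg((unsigned long *)field_ptr, 0);
         if (!xchgd_field)
                 break;          /* nothing was in the map, nothing to release */

         if (is_local_kptr)
                 bpf_obj_drop(xchgd_field);
         else
                 field->kptr.dtor(xchgd_field);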
      Signed-off-by: David Vernet <void@manifault.com>
      Link: https://lore.kernel.org/r/20230325213144.486885-2-void@manifault.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
  3. 24 Mar, 2023 1 commit
  4. 23 Mar, 2023 10 commits
    • Merge branch 'Transit between BPF TCP congestion controls.' · 226bc6ae
      Martin KaFai Lau authored
      Kui-Feng Lee says:
      
      ====================
      
      Major changes:
      
       - Create bpf_links in the kernel for BPF struct_ops to register and
         unregister them.

       - Enable switching between implementations of bpf-tcp-cc under a
         name instantly by replacing the backing struct_ops map of a
         bpf_link.
      
      Previously, a BPF struct_ops did not go away even when the user
      program that created it terminated, despite never having been pinned.
      For instance, the TCP congestion control subsystem indirectly
      maintains a reference count on the struct_ops of any registered
      BPF-implemented algorithm, so the algorithm won't be deactivated until
      someone deliberately unregisters it.  For consistency with other BPF
      programs, bpf_links have been created to work in coordination with
      struct_ops maps. This ensures that the registration and unregistration
      of these maps is carried out at the creation and destruction of the
      bpf_link.
      
      We also faced complications when attempting to replace an existing TCP
      congestion control algorithm with a new implementation on the fly. A
      struct_ops map was used to register a TCP congestion control algorithm
      with a unique name.  We had to either register the alternative
      implementation under a new name and move over, or unregister the
      current one before re-registering it with the same name.  To fix this
      problem, we add an option to migrate the registration of the algorithm
      from struct_ops maps to bpf_links. By modifying the backing map of a
      bpf_link, it becomes possible to replace an existing TCP congestion
      control algorithm with ease.
      ---
      
      The major differences from v11:
      
       - Fix incorrectly setting both old_prog_fd and old_map_fd.
      
      The major differences from v10:
      
       - Add old_map_fd as an additional field instead of an union in
         bpf_link_update_opts.
      
      The major differences from v9:
      
       - Add test case for BPF_F_LINK.  Includes adding old_map_fd to struct
         bpf_link_update_opts in patch 6.
      
       - Return -EPERM instead of -EINVAL when the old map fd doesn't match
         with BPF_F_LINK.
      
       - Fix -EBUSY case in bpf_map__attach_struct_ops().
      
      The major differences from v8:
      
       - Check bpf_struct_ops::{validate,update} in
         bpf_struct_ops_map_alloc()
      
      The major differences from v7:
      
       - Use synchronize_rcu_mult(call_rcu, call_rcu_tasks) to replace
         synchronize_rcu() and synchronize_rcu_tasks().
      
       - Call synchronize_rcu() in tcp_update_congestion_control().
      
       - Handle -EBUSY in bpf_map__attach_struct_ops() to allow a struct_ops
         to be used to create links more than once.  Include a test case.
      
       - Add old_map_fd to bpf_attr and handle BPF_F_REPLACE in
         bpf_struct_ops_map_link_update().
      
       - Remove changes in bpf_dummy_struct_ops.c and add a check of .update
         function pointer of bpf_struct_ops.
      
      The major differences from v6:
      
       - Reword commit logs of the patch 1, 2, and 8.
      
       - Call synchronize_rcu_tasks() as well in bpf_struct_ops_map_free().
      
       - Refactor bpf_struct_ops_map_free() so that
         bpf_struct_ops_map_alloc() can free a struct_ops without waiting
         for an RCU grace period.
      
      The major differences from v5:
      
       - Add a new step to bpf_object__load() to prepare vdata.
      
       - Accept BPF_F_REPLACE.
      
       - Check section IDs in find_struct_ops_map_by_offset()
      
       - Add a test case to check mixing w/ and w/o link struct_ops.
      
       - Add a test case of using struct_ops w/o link to update a link.
      
       - Improve bpf_link__detach_struct_ops() to handle the w/ link case.
      
      The major differences from v4:
      
       - Rebase.
      
       - Reorder patches and merge part 4 to part 2 of the v4.
      
      The major differences from v3:
      
       - Remove bpf_struct_ops_map_free_rcu(), and use synchronize_rcu().
      
       - Improve the commit log of the part 1.
      
       - Before transitioning to the READY state, we conduct a value check
         to ensure that struct_ops can be successfully utilized and links
         created later.
      
      The major differences from v2:
      
       - Simplify states
      
         - Remove TOBEUNREG.
      
         - Rename UNREG to READY.
      
       - Stop using the refcnt of the kvalue of a struct_ops. Explicitly
         increase and decrease the refcount of struct_ops.
      
       - Prepare kernel vdata during the load phase of libbpf.
      
      The major differences from v1:
      
       - Added bpf_struct_ops_link to replace the previous union-based
         approach.
      
       - Added UNREG and TOBEUNREG to the state of bpf_struct_ops_map.
      
         - bpf_struct_ops_transit_state() maintains state transitions.
      
       - Fixed synchronization issue.
      
       - Prepare kernel vdata of struct_ops during the loading phase of
         bpf_object.
      
       - Merged previous patch 3 to patch 1.
      
      v11: https://lore.kernel.org/all/20230323010409.2265383-1-kuifeng@meta.com/
      v10: https://lore.kernel.org/all/20230321232813.3376064-1-kuifeng@meta.com/
      v9: https://lore.kernel.org/all/20230320195644.1953096-1-kuifeng@meta.com/
      v8: https://lore.kernel.org/all/20230318053144.1180301-1-kuifeng@meta.com/
      v7: https://lore.kernel.org/all/20230316023641.2092778-1-kuifeng@meta.com/
      v6: https://lore.kernel.org/all/20230310043812.3087672-1-kuifeng@meta.com/
      v5: https://lore.kernel.org/all/20230308005050.255859-1-kuifeng@meta.com/
      v4: https://lore.kernel.org/all/20230307232913.576893-1-andrii@kernel.org/
      v3: https://lore.kernel.org/all/20230303012122.852654-1-kuifeng@meta.com/
      v2: https://lore.kernel.org/bpf/20230223011238.12313-1-kuifeng@meta.com/
      v1: https://lore.kernel.org/bpf/20230214221718.503964-1-kuifeng@meta.com/
      ====================
      Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
    • selftests/bpf: Test switching TCP Congestion Control algorithms. · 06da9f3b
      Kui-Feng Lee authored
      Create a pair of sockets that utilize the congestion control algorithm
      under a particular name. Then switch up this congestion control
      algorithm to another implementation and check whether newly created
      connections using the same cc name now run the new implementation.
      
      Also, try to update a link with a struct_ops that is without
      BPF_F_LINK or with a wrong or different name.  These cases should fail
      due to the violation of assumptions.  To update a bpf_link of a
      struct_ops, it must be replaced with another struct_ops that is
      identical in type and name and has the BPF_F_LINK flag.
      
      The other test case is to create links from the same struct_ops more
      than once.  It makes sure a struct_ops can be used repeatedly.
      Signed-off-by: Kui-Feng Lee <kuifeng@meta.com>
      Link: https://lore.kernel.org/r/20230323032405.3735486-9-kuifeng@meta.com
      Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
    • libbpf: Use .struct_ops.link section to indicate a struct_ops with a link. · 809a69d6
      Kui-Feng Lee authored
      Flag a struct_ops as backing a bpf_link by placing it in the
      ".struct_ops.link" section.  Once it is flagged, the created
      struct_ops can be used to create a bpf_link or to update a bpf_link
      that has been backed by another struct_ops.
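
      For example, a BPF-side struct_ops placed in the new section might look
      like this (a sketch; the ops and names are illustrative):

         /* in a BPF object built against vmlinux.h and bpf_helpers.h */
         SEC(".struct_ops.link")
         struct tcp_congestion_ops ca_update = {
                 .init      = (void *)ca_init,
                 .ssthresh  = (void *)ca_ssthresh,
                 .undo_cwnd = (void *)ca_undo_cwnd,
                 .name      = "tcp_ca_update",
         };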
      Signed-off-by: Kui-Feng Lee <kuifeng@meta.com>
      Acked-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/r/20230323032405.3735486-8-kuifeng@meta.com
      Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
    • libbpf: Update a bpf_link with another struct_ops. · 912dd4b0
      Kui-Feng Lee authored
      Introduce bpf_link__update_map(), which allows atomically updating the
      underlying struct_ops implementation for a given struct_ops BPF link.
      
      Also add old_map_fd to struct bpf_link_update_opts to handle
      BPF_F_REPLACE feature.
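
      A sketch of the intended usage (skeleton and map names are illustrative):

         struct bpf_link *link;
         int err;

         link = bpf_map__attach_struct_ops(skel->maps.ca_v1);
         /* ... later, atomically switch the same CC name to another map ... */
         err = bpf_link__update_map(link, skel->maps.ca_v2);

         /* With BPF_F_REPLACE, the low-level bpf_link_update() can also demand
          * that the link still points at the expected old map:
          */
         LIBBPF_OPTS(bpf_link_update_opts, opts,
                 .flags      = BPF_F_REPLACE,
                 .old_map_fd = bpf_map__fd(skel->maps.ca_v1),
         );
         err = bpf_link_update(bpf_link__fd(link), bpf_map__fd(skel->maps.ca_v2),
                               &opts);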
      Signed-off-by: Kui-Feng Lee <kuifeng@meta.com>
      Link: https://lore.kernel.org/r/20230323032405.3735486-7-kuifeng@meta.com
      Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
    • bpf: Update the struct_ops of a bpf_link. · aef56f2e
      Kui-Feng Lee authored
      Extend the BPF_LINK_UPDATE command of bpf() to allow conveniently
      switching between different struct_ops on a single bpf_link. This
      enables smoother transitions from one struct_ops to another.

      The struct_ops maps passed along with BPF_LINK_UPDATE should have the
      BPF_F_LINK flag.
      Signed-off-by: Kui-Feng Lee <kuifeng@meta.com>
      Acked-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/r/20230323032405.3735486-6-kuifeng@meta.com
      Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
    • libbpf: Create a bpf_link in bpf_map__attach_struct_ops(). · 8d1608d7
      Kui-Feng Lee authored
      bpf_map__attach_struct_ops() was creating a dummy bpf_link as a
      placeholder, but now it is constructing an authentic one by calling
      bpf_link_create() if the map has the BPF_F_LINK flag.
      
      You can flag a struct_ops map with BPF_F_LINK by calling
      bpf_map__set_map_flags().
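
      A sketch of flagging the map (before load) and then attaching it, with
      illustrative skeleton/map names:

         bpf_map__set_map_flags(skel->maps.ca,
                                bpf_map__map_flags(skel->maps.ca) | BPF_F_LINK);
         /* ... bpf_object__load() ... */
         link = bpf_map__attach_struct_ops(skel->maps.ca);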
      Signed-off-by: Kui-Feng Lee <kuifeng@meta.com>
      Acked-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/r/20230323032405.3735486-5-kuifeng@meta.com
      Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
    • bpf: Create links for BPF struct_ops maps. · 68b04864
      Kui-Feng Lee authored
      Make bpf_link support struct_ops.  Previously, struct_ops were always
      used alone without any associated links. Upon updating its value, a
      struct_ops would be activated automatically. Yet other BPF program
      types require creating a bpf_link for their instances before they
      can become active. Now, however, you can create an inactive
      struct_ops, and create a link to activate it later.
      
      With bpf_links, struct_ops behaves similarly to other BPF program
      types. You can pin/unpin them through their links, and a struct_ops is
      deactivated when its link is removed, whereas previously someone had
      to delete the map value for it to be deactivated.
      
      bpf_links are responsible for registering their associated
      struct_ops. You can only use a struct_ops that has the BPF_F_LINK flag
      set to create a bpf_link, while a struct_ops without this flag behaves in
      the same manner as before and is registered upon updating its value.
      
      The BPF_LINK_TYPE_STRUCT_OPS serves a dual purpose. Not only is it
      used to craft the links for BPF struct_ops programs, but also to
      create links for BPF struct_ops themselves.  Since the links of BPF
      struct_ops programs are only used to create trampolines internally,
      they are never seen in other contexts. Thus, they can be reused for
      struct_ops themselves.
      
      To maintain a reference to the map supporting this link, we add
      bpf_struct_ops_link as an additional type. The pointer to the map is
      RCU-protected and won't be needed until later in the patchset.
      Signed-off-by: Kui-Feng Lee <kuifeng@meta.com>
      Link: https://lore.kernel.org/r/20230323032405.3735486-4-kuifeng@meta.com
      Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
    • net: Update an existing TCP congestion control algorithm. · 8fb1a76a
      Kui-Feng Lee authored
      This feature lets you immediately transition to another congestion
      control algorithm or implementation with the same name.  Once a name
      is updated, new connections will apply this new algorithm.
      
      The purpose is to update a customized algorithm implemented in BPF
      struct_ops with a new version on the fly.  The following is an
      example of using the userspace API implemented in later BPF patches.
      
         link = bpf_map__attach_struct_ops(skel->maps.ca_update_1);
         .......
         err = bpf_link__update_map(link, skel->maps.ca_update_2);
      
      We first load and register an algorithm implemented in BPF struct_ops,
      then swap it out with a new one using the same name. After that, newly
      created connections will apply the updated algorithm, while older ones
      retain the previous version already applied.
      
      This patch also takes this chance to refactor the ca validation into
      the new tcp_validate_congestion_control() function.
      
      Cc: netdev@vger.kernel.org, Eric Dumazet <edumazet@google.com>
      Signed-off-by: Kui-Feng Lee <kuifeng@meta.com>
      Link: https://lore.kernel.org/r/20230323032405.3735486-3-kuifeng@meta.com
      Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
    • bpf: Retire the struct_ops map kvalue->refcnt. · b671c206
      Kui-Feng Lee authored
      We have replaced kvalue->refcnt with synchronize_rcu() to wait for an
      RCU grace period.
      
      Maintenance of kvalue->refcnt was a complicated task, as we had to
      keep two reference counts in sync: kvalue->refcnt itself and the
      reference count of the bpf_map. When kvalue->refcnt reaches zero, we
      also have to reduce the reference count on the bpf_map - yet these
      steps are not performed atomically and require us to be vigilant when
      managing them. By eliminating kvalue->refcnt, maintenance becomes more
      straightforward, as only the refcount of the bpf_map has to be
      managed.
      
      To prevent the trampoline image of a struct_ops from being released
      while it is still in use, we wait for an RCU grace period. The
      setsockopt(TCP_CONGESTION, "...") command allows you to change your
      socket's congestion control algorithm and can result in releasing the
      old struct_ops implementation. That is fine on its own. However, this
      function is also exposed through bpf_setsockopt(), so it may be
      invoked by BPF programs as well. To ensure that the trampoline image
      belonging to a struct_ops can be safely called while its method is in
      use, the trampoline safeguards the BPF program with rcu_read_lock().
      Doing so prevents any destruction of the associated images before
      returning from the trampoline and requires us to wait for an RCU grace
      period.
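
      A sketch of the resulting free path (simplified; the helper name is an
      assumption based on the refactor described in the cover letter):

         static void bpf_struct_ops_map_free(struct bpf_map *map)
         {
                 /* The trampoline runs BPF programs under rcu_read_lock() and
                  * the trampoline code itself is protected by RCU tasks, so
                  * wait for both flavors of grace period before tearing the
                  * image and the map down.
                  */
                 synchronize_rcu_mult(call_rcu, call_rcu_tasks);
                 __bpf_struct_ops_map_free(map);
         }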
      Signed-off-by: Kui-Feng Lee <kuifeng@meta.com>
      Link: https://lore.kernel.org/r/20230323032405.3735486-2-kuifeng@meta.com
      Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
    • bpf: remember meta->iter info only for initialized iters · b63cbc49
      Andrii Nakryiko authored
      For iter_new() functions, the iterator state's slot might not yet be
      initialized, in which case iter_get_spi() will return -ERANGE. This is
      expected and is handled properly. But for iter_next() and iter_destroy()
      cases iter slot is supposed to be initialized and correct, so -ERANGE is
      not possible.
      
      Move meta->iter.{spi,frameno} initialization into iter_next/iter_destroy
      handling branch to make it more explicit that valid information will be
      remembered in meta->iter block for subsequent use in process_iter_next_call(),
      avoiding confusingly looking -ERANGE assignment for meta->iter.spi.
      Reported-by: Dan Carpenter <error27@gmail.com>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/r/20230322232502.836171-1-andrii@kernel.org
      Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
  5. 22 Mar, 2023 11 commits
    • selftests/bpf: Check when bounds are not in the 32-bit range · 1a3148fc
      Xu Kuohai authored
      Add cases to check whether bounds are updated correctly when a 64-bit
      value is not in the 32-bit range.
      Signed-off-by: Xu Kuohai <xukuohai@huawei.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Link: https://lore.kernel.org/bpf/20230322213056.2470-2-daniel@iogearbox.net
    • bpf: Fix __reg_bound_offset 64->32 var_off subreg propagation · 7be14c1c
      Daniel Borkmann authored
      Xu reports that after commit 3f50f132 ("bpf: Verifier, do explicit ALU32
      bounds tracking"), the following BPF program is rejected by the verifier:
      
         0: (61) r2 = *(u32 *)(r1 +0)          ; R2_w=pkt(off=0,r=0,imm=0)
         1: (61) r3 = *(u32 *)(r1 +4)          ; R3_w=pkt_end(off=0,imm=0)
         2: (bf) r1 = r2
         3: (07) r1 += 1
         4: (2d) if r1 > r3 goto pc+8
         5: (71) r1 = *(u8 *)(r2 +0)           ; R1_w=scalar(umax=255,var_off=(0x0; 0xff))
         6: (18) r0 = 0x7fffffffffffff10
         8: (0f) r1 += r0                      ; R1_w=scalar(umin=0x7fffffffffffff10,umax=0x800000000000000f)
         9: (18) r0 = 0x8000000000000000
        11: (07) r0 += 1
        12: (ad) if r0 < r1 goto pc-2
        13: (b7) r0 = 0
        14: (95) exit
      
      And the verifier log says:
      
        func#0 @0
        0: R1=ctx(off=0,imm=0) R10=fp0
        0: (61) r2 = *(u32 *)(r1 +0)          ; R1=ctx(off=0,imm=0) R2_w=pkt(off=0,r=0,imm=0)
        1: (61) r3 = *(u32 *)(r1 +4)          ; R1=ctx(off=0,imm=0) R3_w=pkt_end(off=0,imm=0)
        2: (bf) r1 = r2                       ; R1_w=pkt(off=0,r=0,imm=0) R2_w=pkt(off=0,r=0,imm=0)
        3: (07) r1 += 1                       ; R1_w=pkt(off=1,r=0,imm=0)
        4: (2d) if r1 > r3 goto pc+8          ; R1_w=pkt(off=1,r=1,imm=0) R3_w=pkt_end(off=0,imm=0)
        5: (71) r1 = *(u8 *)(r2 +0)           ; R1_w=scalar(umax=255,var_off=(0x0; 0xff)) R2_w=pkt(off=0,r=1,imm=0)
        6: (18) r0 = 0x7fffffffffffff10       ; R0_w=9223372036854775568
        8: (0f) r1 += r0                      ; R0_w=9223372036854775568 R1_w=scalar(umin=9223372036854775568,umax=9223372036854775823,s32_min=-240,s32_max=15)
        9: (18) r0 = 0x8000000000000000       ; R0_w=-9223372036854775808
        11: (07) r0 += 1                      ; R0_w=-9223372036854775807
        12: (ad) if r0 < r1 goto pc-2         ; R0_w=-9223372036854775807 R1_w=scalar(umin=9223372036854775568,umax=9223372036854775809)
        13: (b7) r0 = 0                       ; R0_w=0
        14: (95) exit
      
        from 12 to 11: R0_w=-9223372036854775807 R1_w=scalar(umin=9223372036854775810,umax=9223372036854775823,var_off=(0x8000000000000000; 0xffffffff)) R2_w=pkt(off=0,r=1,imm=0) R3_w=pkt_end(off=0,imm=0) R10=fp0
        11: (07) r0 += 1                      ; R0_w=-9223372036854775806
        12: (ad) if r0 < r1 goto pc-2         ; R0_w=-9223372036854775806 R1_w=scalar(umin=9223372036854775810,umax=9223372036854775810,var_off=(0x8000000000000000; 0xffffffff))
        13: safe
      
        [...]
      
        from 12 to 11: R0_w=-9223372036854775795 R1=scalar(umin=9223372036854775822,umax=9223372036854775823,var_off=(0x8000000000000000; 0xffffffff)) R2=pkt(off=0,r=1,imm=0) R3=pkt_end(off=0,imm=0) R10=fp0
        11: (07) r0 += 1                      ; R0_w=-9223372036854775794
        12: (ad) if r0 < r1 goto pc-2         ; R0_w=-9223372036854775794 R1=scalar(umin=9223372036854775822,umax=9223372036854775822,var_off=(0x8000000000000000; 0xffffffff))
        13: safe
      
        from 12 to 11: R0_w=-9223372036854775794 R1=scalar(umin=9223372036854775823,umax=9223372036854775823,var_off=(0x8000000000000000; 0xffffffff)) R2=pkt(off=0,r=1,imm=0) R3=pkt_end(off=0,imm=0) R10=fp0
        11: (07) r0 += 1                      ; R0_w=-9223372036854775793
        12: (ad) if r0 < r1 goto pc-2         ; R0_w=-9223372036854775793 R1=scalar(umin=9223372036854775823,umax=9223372036854775823,var_off=(0x8000000000000000; 0xffffffff))
        13: safe
      
        from 12 to 11: R0_w=-9223372036854775793 R1=scalar(umin=9223372036854775824,umax=9223372036854775823,var_off=(0x8000000000000000; 0xffffffff)) R2=pkt(off=0,r=1,imm=0) R3=pkt_end(off=0,imm=0) R10=fp0
        11: (07) r0 += 1                      ; R0_w=-9223372036854775792
        12: (ad) if r0 < r1 goto pc-2         ; R0_w=-9223372036854775792 R1=scalar(umin=9223372036854775824,umax=9223372036854775823,var_off=(0x8000000000000000; 0xffffffff))
        13: safe
      
        [...]
      
      The 64bit umin=9223372036854775810 bound continuously bumps by +1 while
      umax=9223372036854775823 stays as-is until the verifier complexity limit
      is reached and the program gets finally rejected. During this simulation,
      the umin also eventually surpasses umax. Looking at the first 'from 12
      to 11' output line from the loop, R1 has the following state:
      
        R1_w=scalar(umin=0x8000000000000002 (9223372036854775810),
                    umax=0x800000000000000f (9223372036854775823),
                var_off=(0x8000000000000000;
                                 0xffffffff))
      
      The var_off is technically not in an inconsistent state, but it is very
      imprecise and far off, surpassing the 64bit umax bounds, whereas the
      expected output with refined known bits in var_off should have been:
      
        R1_w=scalar(umin=0x8000000000000002 (9223372036854775810),
                    umax=0x800000000000000f (9223372036854775823),
                var_off=(0x8000000000000000;
                                        0xf))
      
      In the above log, var_off stays as var_off=(0x8000000000000000; 0xffffffff)
      and does not converge into a narrower mask where more bits become known,
      eventually transforming R1 into a constant upon umin=9223372036854775823,
      umax=9223372036854775823 case where the verifier would have terminated and
      let the program pass.
      
      The __reg_combine_64_into_32() marks the subregister unknown and propagates
      64bit {s,u}min/{s,u}max bounds to their 32bit equivalents iff they are within
      the 32bit universe. The question came up whether __reg_combine_64_into_32()
      should special case the situation that when 64bit {s,u}min bounds have
      the same value as 64bit {s,u}max bounds to then assign the latter as
      well to the 32bit reg->{s,u}32_{min,max}_value. As can be seen from the
      above example however, that is just /one/ special case and not a /generic/
      solution, given the above example would still not be addressed this way
      and would remain at an imprecise var_off=(0x8000000000000000; 0xffffffff).
      
      The improvement is needed in __reg_bound_offset() to refine var32_off with
      the updated var64_off instead of the prior reg->var_off. The reg_bounds_sync()
      code first refines information about the register's min/max bounds via
      __update_reg_bounds() from the current var_off, then in __reg_deduce_bounds()
      from sign bit and with the potentially learned bits from bounds it'll
      update the var_off tnum in __reg_bound_offset(). For example, intersecting
      with the old var_off might have improved bounds slightly, e.g. if umax
      was 0x7f...f and var_off was (0; 0xf...fc), then new var_off will then
      result in (0; 0x7f...fc). The intersected var64_off holds then the
      universe which is a superset of var32_off. The point for the latter is
      not to broaden, but to further refine known bits based on the intersection
      of var_off with 32 bit bounds, so that we later construct the final var_off
      from upper and lower 32 bits. The final __update_reg_bounds() can then
      potentially still slightly refine bounds if more bits became known from the
      new var_off.
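
      The change is roughly of the following shape (a sketch of
      __reg_bound_offset() after the fix):

         static void __reg_bound_offset(struct bpf_reg_state *reg)
         {
                 struct tnum var64_off = tnum_intersect(reg->var_off,
                                                        tnum_range(reg->umin_value,
                                                                   reg->umax_value));
                 /* Refine the 32-bit half from the freshly intersected var64_off
                  * rather than from the stale reg->var_off:
                  */
                 struct tnum var32_off = tnum_intersect(tnum_subreg(var64_off),
                                                        tnum_range(reg->u32_min_value,
                                                                   reg->u32_max_value));

                 reg->var_off = tnum_or(tnum_clear_subreg(var64_off), var32_off);
         }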
      
      After the improvement, we can see R1 converging successively:
      
        func#0 @0
        0: R1=ctx(off=0,imm=0) R10=fp0
        0: (61) r2 = *(u32 *)(r1 +0)          ; R1=ctx(off=0,imm=0) R2_w=pkt(off=0,r=0,imm=0)
        1: (61) r3 = *(u32 *)(r1 +4)          ; R1=ctx(off=0,imm=0) R3_w=pkt_end(off=0,imm=0)
        2: (bf) r1 = r2                       ; R1_w=pkt(off=0,r=0,imm=0) R2_w=pkt(off=0,r=0,imm=0)
        3: (07) r1 += 1                       ; R1_w=pkt(off=1,r=0,imm=0)
        4: (2d) if r1 > r3 goto pc+8          ; R1_w=pkt(off=1,r=1,imm=0) R3_w=pkt_end(off=0,imm=0)
        5: (71) r1 = *(u8 *)(r2 +0)           ; R1_w=scalar(umax=255,var_off=(0x0; 0xff)) R2_w=pkt(off=0,r=1,imm=0)
        6: (18) r0 = 0x7fffffffffffff10       ; R0_w=9223372036854775568
        8: (0f) r1 += r0                      ; R0_w=9223372036854775568 R1_w=scalar(umin=9223372036854775568,umax=9223372036854775823,s32_min=-240,s32_max=15)
        9: (18) r0 = 0x8000000000000000       ; R0_w=-9223372036854775808
        11: (07) r0 += 1                      ; R0_w=-9223372036854775807
        12: (ad) if r0 < r1 goto pc-2         ; R0_w=-9223372036854775807 R1_w=scalar(umin=9223372036854775568,umax=9223372036854775809)
        13: (b7) r0 = 0                       ; R0_w=0
        14: (95) exit
      
        from 12 to 11: R0_w=-9223372036854775807 R1_w=scalar(umin=9223372036854775810,umax=9223372036854775823,var_off=(0x8000000000000000; 0xf),s32_min=0,s32_max=15,u32_max=15) R2_w=pkt(off=0,r=1,imm=0) R3_w=pkt_end(off=0,imm=0) R10=fp0
        11: (07) r0 += 1                      ; R0_w=-9223372036854775806
        12: (ad) if r0 < r1 goto pc-2         ; R0_w=-9223372036854775806 R1_w=-9223372036854775806
        13: safe
      
        from 12 to 11: R0_w=-9223372036854775806 R1_w=scalar(umin=9223372036854775811,umax=9223372036854775823,var_off=(0x8000000000000000; 0xf),s32_min=0,s32_max=15,u32_max=15) R2_w=pkt(off=0,r=1,imm=0) R3_w=pkt_end(off=0,imm=0) R10=fp0
        11: (07) r0 += 1                      ; R0_w=-9223372036854775805
        12: (ad) if r0 < r1 goto pc-2         ; R0_w=-9223372036854775805 R1_w=-9223372036854775805
        13: safe
      
        [...]
      
        from 12 to 11: R0_w=-9223372036854775798 R1=scalar(umin=9223372036854775819,umax=9223372036854775823,var_off=(0x8000000000000008; 0x7),s32_min=8,s32_max=15,u32_min=8,u32_max=15) R2=pkt(off=0,r=1,imm=0) R3=pkt_end(off=0,imm=0) R10=fp0
        11: (07) r0 += 1                      ; R0_w=-9223372036854775797
        12: (ad) if r0 < r1 goto pc-2         ; R0_w=-9223372036854775797 R1=-9223372036854775797
        13: safe
      
        from 12 to 11: R0_w=-9223372036854775797 R1=scalar(umin=9223372036854775820,umax=9223372036854775823,var_off=(0x800000000000000c; 0x3),s32_min=12,s32_max=15,u32_min=12,u32_max=15) R2=pkt(off=0,r=1,imm=0) R3=pkt_end(off=0,imm=0) R10=fp0
        11: (07) r0 += 1                      ; R0_w=-9223372036854775796
        12: (ad) if r0 < r1 goto pc-2         ; R0_w=-9223372036854775796 R1=-9223372036854775796
        13: safe
      
        from 12 to 11: R0_w=-9223372036854775796 R1=scalar(umin=9223372036854775821,umax=9223372036854775823,var_off=(0x800000000000000c; 0x3),s32_min=12,s32_max=15,u32_min=12,u32_max=15) R2=pkt(off=0,r=1,imm=0) R3=pkt_end(off=0,imm=0) R10=fp0
        11: (07) r0 += 1                      ; R0_w=-9223372036854775795
        12: (ad) if r0 < r1 goto pc-2         ; R0_w=-9223372036854775795 R1=-9223372036854775795
        13: safe
      
        from 12 to 11: R0_w=-9223372036854775795 R1=scalar(umin=9223372036854775822,umax=9223372036854775823,var_off=(0x800000000000000e; 0x1),s32_min=14,s32_max=15,u32_min=14,u32_max=15) R2=pkt(off=0,r=1,imm=0) R3=pkt_end(off=0,imm=0) R10=fp0
        11: (07) r0 += 1                      ; R0_w=-9223372036854775794
        12: (ad) if r0 < r1 goto pc-2         ; R0_w=-9223372036854775794 R1=-9223372036854775794
        13: safe
      
        from 12 to 11: R0_w=-9223372036854775794 R1=-9223372036854775793 R2=pkt(off=0,r=1,imm=0) R3=pkt_end(off=0,imm=0) R10=fp0
        11: (07) r0 += 1                      ; R0_w=-9223372036854775793
        12: (ad) if r0 < r1 goto pc-2
        last_idx 12 first_idx 12
        parent didn't have regs=1 stack=0 marks: R0_rw=P-9223372036854775801 R1_r=scalar(umin=9223372036854775815,umax=9223372036854775823,var_off=(0x8000000000000000; 0xf),s32_min=0,s32_max=15,u32_max=15) R2=pkt(off=0,r=1,imm=0) R3=pkt_end(off=0,imm=0) R10=fp0
        last_idx 11 first_idx 11
        regs=1 stack=0 before 11: (07) r0 += 1
        parent didn't have regs=1 stack=0 marks: R0_rw=P-9223372036854775805 R1_rw=scalar(umin=9223372036854775812,umax=9223372036854775823,var_off=(0x8000000000000000; 0xf),s32_min=0,s32_max=15,u32_max=15) R2_w=pkt(off=0,r=1,imm=0) R3_w=pkt_end(off=0,imm=0) R10=fp0
        last_idx 12 first_idx 0
        regs=1 stack=0 before 12: (ad) if r0 < r1 goto pc-2
        regs=1 stack=0 before 11: (07) r0 += 1
        regs=1 stack=0 before 12: (ad) if r0 < r1 goto pc-2
        regs=1 stack=0 before 11: (07) r0 += 1
        regs=1 stack=0 before 12: (ad) if r0 < r1 goto pc-2
        regs=1 stack=0 before 11: (07) r0 += 1
        regs=1 stack=0 before 9: (18) r0 = 0x8000000000000000
        last_idx 12 first_idx 12
        parent didn't have regs=2 stack=0 marks: R0_rw=P-9223372036854775801 R1_r=Pscalar(umin=9223372036854775815,umax=9223372036854775823,var_off=(0x8000000000000000; 0xf),s32_min=0,s32_max=15,u32_max=15) R2=pkt(off=0,r=1,imm=0) R3=pkt_end(off=0,imm=0) R10=fp0
        last_idx 11 first_idx 11
        regs=2 stack=0 before 11: (07) r0 += 1
        parent didn't have regs=2 stack=0 marks: R0_rw=P-9223372036854775805 R1_rw=Pscalar(umin=9223372036854775812,umax=9223372036854775823,var_off=(0x8000000000000000; 0xf),s32_min=0,s32_max=15,u32_max=15) R2_w=pkt(off=0,r=1,imm=0) R3_w=pkt_end(off=0,imm=0) R10=fp0
        last_idx 12 first_idx 0
        regs=2 stack=0 before 12: (ad) if r0 < r1 goto pc-2
        regs=2 stack=0 before 11: (07) r0 += 1
        regs=2 stack=0 before 12: (ad) if r0 < r1 goto pc-2
        regs=2 stack=0 before 11: (07) r0 += 1
        regs=2 stack=0 before 12: (ad) if r0 < r1 goto pc-2
        regs=2 stack=0 before 11: (07) r0 += 1
        regs=2 stack=0 before 9: (18) r0 = 0x8000000000000000
        regs=2 stack=0 before 8: (0f) r1 += r0
        regs=3 stack=0 before 6: (18) r0 = 0x7fffffffffffff10
        regs=2 stack=0 before 5: (71) r1 = *(u8 *)(r2 +0)
        13: safe
      
        from 4 to 13: safe
        verification time 322 usec
        stack depth 0
        processed 56 insns (limit 1000000) max_states_per_insn 1 total_states 3 peak_states 3 mark_read 1
      
      This also fixes up a test case along with this improvement where we match
      on the verifier log. The updated log now has a refined var_off, too.
      
      Fixes: 3f50f132 ("bpf: Verifier, do explicit ALU32 bounds tracking")
      Reported-by: Xu Kuohai <xukuohai@huaweicloud.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Reviewed-by: John Fastabend <john.fastabend@gmail.com>
      Link: https://lore.kernel.org/bpf/20230314203424.4015351-2-xukuohai@huaweicloud.com
      Link: https://lore.kernel.org/bpf/20230322213056.2470-1-daniel@iogearbox.net
    • Merge branch 'error checking where helpers call bpf_map_ops' · 02adf9e9
      Alexei Starovoitov authored
      JP Kobryn says:
      
      ====================
      
      Within bpf programs, the bpf helper functions can make inline calls to
      kernel functions. In this scenario there can be a disconnect between the
      register the kernel function writes a return value to and the register the
      bpf program uses to evaluate that return value.
      
      As an example, this bpf code:
      
      long err = bpf_map_update_elem(...);
      if (err && err != -EEXIST)
      	// got some error other than -EEXIST
      
      ...can result in the bpf assembly:
      
      ; err = bpf_map_update_elem(&mymap, &key, &val, BPF_NOEXIST);
        37:	movabs $0xffff976a10730400,%rdi
        41:	mov    $0x1,%ecx
        46:	call   0xffffffffe103291c	; htab_map_update_elem
      ; if (err && err != -EEXIST) {
        4b:	cmp    $0xffffffffffffffef,%rax ; cmp -EEXIST,%rax
        4f:	je     0x000000000000008e
        51:	test   %rax,%rax
        54:	je     0x000000000000008e
      
      The compare operation here evaluates %rax, while in the preceding call to
      htab_map_update_elem the corresponding assembly returns -EEXIST via %eax
      (the lower 32 bits of %rax):
      
      movl $0xffffffef, %r9d
      ...
      movl %r9d, %eax
      
      ...since it's returning int (32-bit). So the resulting comparison becomes:
      
      cmp $0xffffffffffffffef, $0x00000000ffffffef
      
      ...making it not possible to check for negative errors or specific errors,
      since the sign bit stays in the lower 32 bits and is never extended into
      the upper half. It means that in the original example, the conditional
      branch will be entered even when the error is -EEXIST, which was not
      intended.
      
      The selftests added cover these cases for the different bpf_map_ops
      functions. When the second patch is applied, changing the return type of
      those functions to long, the comparison works as intended and the tests
      pass.
      ====================
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: return long from bpf_map_ops funcs · d7ba4cc9
      JP Kobryn authored
      This patch changes the return types of bpf_map_ops functions to long, where
      previously int was returned. Using long allows for bpf programs to maintain
      the sign bit in the absence of sign extension during situations where
      inlined bpf helper funcs make calls to the bpf_map_ops funcs and a negative
      error is returned.
      
      The definitions of the helper funcs are generated from comments in the bpf
      uapi header at `include/uapi/linux/bpf.h`. The return type of these
      helpers was previously changed from int to long in commit bdb7b79b. For
      any case where one of the map helpers call the bpf_map_ops funcs that are
      still returning 32-bit int, a compiler might not include sign extension
      instructions to properly convert the 32-bit negative value a 64-bit
      negative value.
      
      For example:
      bpf assembly excerpt of an inlined helper calling a kernel function and
      checking for a specific error:
      
      ; err = bpf_map_update_elem(&mymap, &key, &val, BPF_NOEXIST);
        ...
        46:	call   0xffffffffe103291c	; htab_map_update_elem
      ; if (err && err != -EEXIST) {
        4b:	cmp    $0xffffffffffffffef,%rax ; cmp -EEXIST,%rax
      
      kernel function assembly excerpt of return value from
      `htab_map_update_elem` returning 32-bit int:
      
      movl $0xffffffef, %r9d
      ...
      movl %r9d, %eax
      
      ...results in the comparison:
      cmp $0xffffffffffffffef, $0x00000000ffffffef
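
      The change to the ops is roughly of this shape (illustrative subset of
      struct bpf_map_ops):

         /* before */
         int  (*map_update_elem)(struct bpf_map *map, void *key, void *value,
                                 u64 flags);
         /* after */
         long (*map_update_elem)(struct bpf_map *map, void *key, void *value,
                                 u64 flags);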
      
      Fixes: bdb7b79b ("bpf: Switch most helper return values from 32-bit int to 64-bit long")
      Tested-by: Eduard Zingerman <eddyz87@gmail.com>
      Signed-off-by: JP Kobryn <inwardvessel@gmail.com>
      Link: https://lore.kernel.org/r/20230322194754.185781-3-inwardvessel@gmail.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf/selftests: coverage for bpf_map_ops errors · 830154cd
      JP Kobryn authored
      These tests expose the issue of being unable to properly check for errors
      returned from inlined bpf map helpers that make calls to the bpf_map_ops
      functions. At best, a check for zero or non-zero can be done but these
      tests show it is not possible to check for a negative value or for a
      specific error value.
      Signed-off-by: JP Kobryn <inwardvessel@gmail.com>
      Tested-by: Eduard Zingerman <eddyz87@gmail.com>
      Link: https://lore.kernel.org/r/20230322194754.185781-2-inwardvessel@gmail.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • Merge branch 'bpf: Support ksym detection in light skeleton.' · d9d93f3b
      Andrii Nakryiko authored
      Alexei Starovoitov says:
      
      ====================
      
      From: Alexei Starovoitov <ast@kernel.org>
      
      v1->v2: update denylist on s390
      
      Patch 1: Cleanup internal libbpf names.
      Patch 2: Teach the verifier that rdonly_mem != NULL.
      Patch 3: Fix gen_loader to support ksym detection.
      Patch 4: Selftest and update denylist.
      ====================
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    • selftests/bpf: Add light skeleton test for kfunc detection. · 3b2ec214
      Alexei Starovoitov authored
      Add light skeleton test for kfunc detection and denylist it for s390.
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20230321203854.3035-5-alexei.starovoitov@gmail.com
    • libbpf: Support kfunc detection in light skeleton. · 708cdc57
      Alexei Starovoitov authored
      Teach gen_loader to find {btf_id, btf_obj_fd} of kernel variables and kfuncs
      and populate corresponding ld_imm64 and bpf_call insns.
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20230321203854.3035-4-alexei.starovoitov@gmail.com
    • bpf: Teach the verifier to recognize rdonly_mem as not null. · 1057d299
      Alexei Starovoitov authored
      Teach the verifier to recognize PTR_TO_MEM | MEM_RDONLY as not NULL;
      otherwise, if (!bpf_ksym_exists(known_kfunc)) doesn't go through
      dead code elimination.
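
      A sketch of the BPF-side pattern this enables (the kfunc name is
      hypothetical):

         extern void some_kfunc(void) __ksym __weak;

         SEC("tc")
         int prog(void *ctx)
         {
                 if (bpf_ksym_exists(some_kfunc))
                         some_kfunc();  /* branch is pruned when the kfunc is absent */
                 return 0;
         }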
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Acked-by: David Vernet <void@manifault.com>
      Link: https://lore.kernel.org/bpf/20230321203854.3035-3-alexei.starovoitov@gmail.com
    • libbpf: Rename RELO_EXTERN_VAR/FUNC. · a18f7214
      Alexei Starovoitov authored
      RELO_EXTERN_VAR/FUNC names are not correct anymore. RELO_EXTERN_VAR represents
      a ksym symbol in an ld_imm64 insn. It can point to a kernel variable or a kfunc.
      Rename RELO_EXTERN_VAR->RELO_EXTERN_LD64 and RELO_EXTERN_FUNC->RELO_EXTERN_CALL
      to match what they actually represent.
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Acked-by: David Vernet <void@manifault.com>
      Link: https://lore.kernel.org/bpf/20230321203854.3035-2-alexei.starovoitov@gmail.com
    • selftests/xsk: add xdp populate metadata test · 9a321fd3
      Tushar Vyavahare authored
      Add a new test in copy-mode for testing the copying of metadata from the
      buffer in kernel-space to user-space. This is accomplished by adding a
      new XDP program and using the bss map to store a counter that is written
      to the metadata field. This counter is incremented for every packet so
      that the number becomes unique and should be the same as the payload. It
      is stored in the .bss so the value can be reset between runs.
      
      The XDP program populates the metadata and the userspace program checks
      the value stored in the metadata field against the payload using the new
      is_metadata_correct() function. To turn this verification on or off, add
      a new parameter (use_metadata) to the ifobject structure.
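
      A sketch of the XDP side (counter, names and sizes are illustrative, not
      the exact selftest code):

         static __u32 count;

         SEC("xdp")
         int xsk_xdp_populate_metadata(struct xdp_md *ctx)
         {
                 __u32 *meta;

                 /* Reserve room in front of the packet data for the metadata. */
                 if (bpf_xdp_adjust_meta(ctx, -(int)sizeof(*meta)))
                         return XDP_DROP;

                 meta = (void *)(long)ctx->data_meta;
                 if ((void *)(meta + 1) > (void *)(long)ctx->data)
                         return XDP_DROP;

                 /* Counter lives in .bss so user space can reset it between runs. */
                 *meta = count++;
                 return XDP_PASS;
         }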
      Signed-off-by: Tushar Vyavahare <tushar.vyavahare@intel.com>
      Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
      Link: https://lore.kernel.org/r/20230320102705.306187-1-tushar.vyavahare@intel.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>