1. 17 Jun, 2021 6 commits
    • selftests/bpf: Fix selftests build with old system-wide headers · f20792d4
      Andrii Nakryiko authored
      The migrate_reuseport.c selftest relies on having TCP_FASTOPEN_CONNECT
      defined in the system-wide netinet/tcp.h. Selftests can use the
      up-to-date uapi/linux/tcp.h, but that one doesn't have SOL_TCP. So
      instead of switching everything to the uapi header, add a #define for
      TCP_FASTOPEN_CONNECT to fix the build.
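      The guard added by the patch can be sketched as below (the fallback value 30 mirrors the kernel's uapi/linux/tcp.h definition; an illustration, not the exact diff):

```c
#include <netinet/tcp.h>

/* Older system-wide netinet/tcp.h may predate TCP_FASTOPEN_CONNECT;
 * fall back to the value from the kernel's uapi/linux/tcp.h. */
#ifndef TCP_FASTOPEN_CONNECT
#define TCP_FASTOPEN_CONNECT 30
#endif
```

      With the guard in place the selftest builds whether or not the system header already provides the option.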
      
      Fixes: c9d0bdef ("bpf: Test BPF_SK_REUSEPORT_SELECT_OR_MIGRATE.")
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Kuniyuki Iwashima <kuniyu@amazon.co.jp>
      Link: https://lore.kernel.org/bpf/20210617041446.425283-1-andrii@kernel.org
    • bpf: Fix up register-based shifts in interpreter to silence KUBSAN · 28131e9d
      Daniel Borkmann authored
      syzbot reported a shift-out-of-bounds that KUBSAN observed in the
      interpreter:
      
        [...]
        UBSAN: shift-out-of-bounds in kernel/bpf/core.c:1420:2
        shift exponent 255 is too large for 64-bit type 'long long unsigned int'
        CPU: 1 PID: 11097 Comm: syz-executor.4 Not tainted 5.12.0-rc2-syzkaller #0
        Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
        Call Trace:
         __dump_stack lib/dump_stack.c:79 [inline]
         dump_stack+0x141/0x1d7 lib/dump_stack.c:120
         ubsan_epilogue+0xb/0x5a lib/ubsan.c:148
         __ubsan_handle_shift_out_of_bounds.cold+0xb1/0x181 lib/ubsan.c:327
         ___bpf_prog_run.cold+0x19/0x56c kernel/bpf/core.c:1420
         __bpf_prog_run32+0x8f/0xd0 kernel/bpf/core.c:1735
         bpf_dispatcher_nop_func include/linux/bpf.h:644 [inline]
         bpf_prog_run_pin_on_cpu include/linux/filter.h:624 [inline]
         bpf_prog_run_clear_cb include/linux/filter.h:755 [inline]
         run_filter+0x1a1/0x470 net/packet/af_packet.c:2031
         packet_rcv+0x313/0x13e0 net/packet/af_packet.c:2104
         dev_queue_xmit_nit+0x7c2/0xa90 net/core/dev.c:2387
         xmit_one net/core/dev.c:3588 [inline]
         dev_hard_start_xmit+0xad/0x920 net/core/dev.c:3609
         __dev_queue_xmit+0x2121/0x2e00 net/core/dev.c:4182
         __bpf_tx_skb net/core/filter.c:2116 [inline]
         __bpf_redirect_no_mac net/core/filter.c:2141 [inline]
         __bpf_redirect+0x548/0xc80 net/core/filter.c:2164
         ____bpf_clone_redirect net/core/filter.c:2448 [inline]
         bpf_clone_redirect+0x2ae/0x420 net/core/filter.c:2420
         ___bpf_prog_run+0x34e1/0x77d0 kernel/bpf/core.c:1523
         __bpf_prog_run512+0x99/0xe0 kernel/bpf/core.c:1737
         bpf_dispatcher_nop_func include/linux/bpf.h:644 [inline]
         bpf_test_run+0x3ed/0xc50 net/bpf/test_run.c:50
         bpf_prog_test_run_skb+0xabc/0x1c50 net/bpf/test_run.c:582
         bpf_prog_test_run kernel/bpf/syscall.c:3127 [inline]
         __do_sys_bpf+0x1ea9/0x4f00 kernel/bpf/syscall.c:4406
         do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
         entry_SYSCALL_64_after_hwframe+0x44/0xae
        [...]
      
      Generally speaking, KUBSAN reports from the kernel should be fixed.
      However, in the case of BPF, this particular report caused concerns
      since the large shift is not wrong from the BPF point of view, just
      undefined. In the verifier, K-based shifts that are >= {64,32}
      (depending on the bitwidth of the instruction) are already rejected.
      The register-based cases were not, given their content might not be
      known at verification time. Ideas such as a verifier instruction
      rewrite with an additional AND instruction for the source register
      were brought up, but regularly rejected due to the additional runtime
      overhead they would incur.
      
      As Edward Cree rightly put it:
      
        Shifts by more than insn bitness are legal in the BPF ISA; they are
        implementation-defined behaviour [of the underlying architecture],
        rather than UB, and have been made legal for performance reasons.
        Each of the JIT backends compiles the BPF shift operations to machine
        instructions which produce implementation-defined results in such a
        case; the resulting contents of the register may be arbitrary but
        program behaviour as a whole remains defined.
      
        Guard checks in the fast path (i.e. affecting JITted code) will thus
        not be accepted.
      
        The case of division by zero is not truly analogous here, as division
        instructions on many of the JIT-targeted architectures will raise a
        machine exception / fault on division by zero, whereas (to the best
        of my knowledge) none will do so on an out-of-bounds shift.
      
      Given the KUBSAN report only affects the BPF interpreter, but not the
      JITs, one solution is to add the ANDs with 63 or 31 into
      ___bpf_prog_run(). That makes the shifts defined, thus shuts up
      KUBSAN, and the compiler optimizes the AND away on any CPU that
      interprets the shift amounts modulo the width anyway (e.g., confirmed
      from disassembly that on x86-64 and arm64 the generated interpreter
      code is the same before and after this fix).
      
      The BPF interpreter is a slow path, and most likely compiled out
      anyway, as distros select BPF_JIT_ALWAYS_ON to avoid speculative
      execution of BPF instructions by the interpreter. Given the main
      argument was to avoid sacrificing performance, the fact that the
      compiler optimizes the AND away on mainstream archs helps this
      solution moving forward. Also add a comment on the LSH/RSH/ARSH
      translation for JIT authors, to provide guidance when they see the
      ___bpf_prog_run() interpreter code and use it as a model for a new
      JIT backend.
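      The effect of the fix can be modeled in plain C (a sketch of the masking semantics, not the exact kernel diff):

```c
#include <stdint.h>

/* Masking the register-based shift amount with the operand width minus
 * one makes the C-level shift defined, while matching what x86-64 and
 * arm64 do anyway: shift amounts are interpreted modulo the width. */
static uint64_t bpf_alu64_lsh(uint64_t dst, uint64_t src)
{
	return dst << (src & 63);
}

static uint32_t bpf_alu32_rsh(uint32_t dst, uint32_t src)
{
	return dst >> (src & 31);
}
```

      A shift amount of 255, as in the syzbot report, is masked down to 63 and produces a well-defined result.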
      
      Reported-by: syzbot+bed360704c521841c85d@syzkaller.appspotmail.com
      Reported-by: Kurt Manucredo <fuzzybritches0@gmail.com>
      Co-developed-by: Eric Biggers <ebiggers@kernel.org>
      Signed-off-by: Eric Biggers <ebiggers@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Andrii Nakryiko <andrii@kernel.org>
      Tested-by: syzbot+bed360704c521841c85d@syzkaller.appspotmail.com
      Cc: Edward Cree <ecree.xilinx@gmail.com>
      Link: https://lore.kernel.org/bpf/0000000000008f912605bd30d5d7@google.com
      Link: https://lore.kernel.org/bpf/bac16d8d-c174-bdc4-91bd-bfa62b410190@gmail.com
    • libbpf: Fail compilation if target arch is missing · 4a638d58
      Lorenz Bauer authored
      bpf2go is the Go equivalent of libbpf skeleton. The convention is that
      the compiled BPF is checked into the repository to facilitate distributing
      BPF as part of Go packages. To make this portable, bpf2go by default
      generates both bpfel and bpfeb variants of the C.
      
      Using bpf_tracing.h is inherently non-portable since the fields of
      struct pt_regs differ between platforms, so CO-RE can't help us here.
      The only way of working around this is to compile for each target
      platform independently. bpf2go can't do this by default since there
      are too many platforms.
      
      Define the various PT_... macros when no target can be determined and
      turn them into compilation failures. This works because bpf2go always
      compiles for bpf targets, so the compiler fallback doesn't kick in.
      Conditionally define __BPF_MISSING_TARGET so that we can inject a
      more appropriate error message at build time. The user can then
      choose which platform to target explicitly.
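      The approach can be sketched as follows (macro names approximate those in bpf_tracing.h, and the pt_regs struct here is a stand-in for illustration):

```c
/* With a known target arch, the PT_REGS accessors expand to real field
 * reads; with no target, they expand to a _Pragma that fails the build
 * with a clear message instead of silently reading the wrong fields. */
#define __TARGET_ARCH_x86 /* comment this out to provoke the build error */

#if defined(__TARGET_ARCH_x86)
struct pt_regs_sketch { unsigned long di; };
#define PT_REGS_PARM1(x) ((x)->di)
#else
#define __BPF_TARGET_MISSING \
	"GCC error \"Must specify a BPF target arch via __TARGET_ARCH_xxx\""
#define PT_REGS_PARM1(x) ({ _Pragma(__BPF_TARGET_MISSING); 0l; })
#endif
```

      The _Pragma trick turns any *use* of the macro into a compile error, so headers that merely include bpf_tracing.h without touching pt_regs still build.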
      Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20210616083635.11434-1-lmb@cloudflare.com
    • samples/bpf: Add missing option to xdp_sample_pkts usage · dfdda1a0
      Wang Hai authored
      xdp_sample_pkts usage() is missing the introduction of the "-S"
      option; this patch adds it.
      
      Fixes: d50ecc46 ("samples/bpf: Attach XDP programs in driver mode by default")
      Signed-off-by: Wang Hai <wanghai38@huawei.com>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Link: https://lore.kernel.org/bpf/20210615135724.29528-1-wanghai38@huawei.com
    • samples/bpf: Add missing option to xdp_fwd usage · bf067f1c
      Wang Hai authored
      xdp_fwd usage() is missing the introduction of the "-S" and "-F"
      options; this patch adds them.
      
      Fixes: d50ecc46 ("samples/bpf: Attach XDP programs in driver mode by default")
      Signed-off-by: Wang Hai <wanghai38@huawei.com>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Link: https://lore.kernel.org/bpf/20210615135554.29158-1-wanghai38@huawei.com
  2. 16 Jun, 2021 1 commit
  3. 15 Jun, 2021 14 commits
  4. 11 Jun, 2021 2 commits
  5. 08 Jun, 2021 3 commits
  6. 03 Jun, 2021 4 commits
  7. 01 Jun, 2021 1 commit
  8. 28 May, 2021 2 commits
  9. 26 May, 2021 7 commits
    • libbpf: Move BPF_SEQ_PRINTF and BPF_SNPRINTF to bpf_helpers.h · d6a6a555
      Florent Revest authored
      These macros are convenient wrappers around the bpf_seq_printf and
      bpf_snprintf helpers. They are currently provided by bpf_tracing.h which
      targets low level tracing primitives. bpf_helpers.h is a better fit.
      
      The __bpf_narg and __bpf_apply macros are needed in both files and
      were provided twice. __bpf_empty isn't used anywhere and is removed
      from bpf_tracing.h.
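      After the move, a program that only includes bpf_helpers.h can use the macros; a sketch compiled for the bpf target (section name and format string are illustrative):

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

char msg[64];

SEC("tp/syscalls/sys_enter_execve")
int log_exec(void *ctx)
{
	__u32 pid = bpf_get_current_pid_tgid() >> 32;

	/* BPF_SNPRINTF packs its varargs into the u64 array that the
	 * underlying bpf_snprintf() helper expects. */
	BPF_SNPRINTF(msg, sizeof(msg), "execve from pid %d", pid);
	return 0;
}

char _license[] SEC("license") = "GPL";
```

      Previously the same program would have had to pull in bpf_tracing.h (and thus define a target arch) just for the printf wrappers.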
      Reported-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Florent Revest <revest@chromium.org>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20210526164643.2881368-1-revest@chromium.org
    • Merge branch 'bpf-xdp-bcast' · aa7f1f03
      Daniel Borkmann authored
      Hangbin Liu says:
      
      ====================
      This patchset is a new implementation for XDP multicast support, based
      on my previous two-maps implementation [1]. The reason is that Daniel
      thinks the exclude-map implementation is missing proper bond support
      in the XDP context, and there is a plan to add native XDP bonding
      support. Adding an exclude map to the helper also increases the
      complexity of the verifier and has drawbacks on performance.
      
      The new implementation just adds two new flags, BPF_F_BROADCAST and
      BPF_F_EXCLUDE_INGRESS, to extend xdp_redirect_map for broadcast
      support.
      
      With BPF_F_BROADCAST the packet will be broadcast to all the
      interfaces in the map. With BPF_F_EXCLUDE_INGRESS the ingress
      interface will be excluded when broadcasting.
      
      The v11 patchset link is here [2].
      
        [1] https://lore.kernel.org/bpf/20210223125809.1376577-1-liuhangbin@gmail.com
        [2] https://lore.kernel.org/bpf/20210513070447.1878448-1-liuhangbin@gmail.com
      
      v12: As Daniel pointed out:
        a) defined as const u64 for flag_mask and action_mask in
           __bpf_xdp_redirect_map()
        b) remove BPF_F_ACTION_MASK in uapi header
        c) remove EXPORT_SYMBOL_GPL for xdpf_clone()
      
      v11:
        a) Use unlikely() when checking if this is for broadcast redirecting.
        b) Fix a tracepoint NULL pointer issue Jesper found.
        c) Remove BPF_F_REDIR_MASK and just use OR'ed flags to make it
           clearer to the reader which flags we are using.
        d) Add the performance numbers with multiple veth interfaces to the
           patch 01 description.
        e) Remove some sleeps to reduce the testing time in patch 04.
           Restructure the test and make clear which flags we are testing.
      
      v10: Use READ_ONCE/WRITE_ONCE when reading/writing the map instead of
           xchg().
      v9: Update patch 01 commit description.
      v8: Use hlist_for_each_entry_rcu() when looping the devmap hash objects.
      v7: No need to free xdpf in dev_map_enqueue_clone() if xdpf_clone failed.
      v6: Fix an skb leak in the error path for generic XDP.
      v5: Just walk the map directly to get interfaces, as get_next_key() of
          the devmap hash may restart looping from the first key if a device
          gets removed. After this update the performance has improved 10%
          compared with v4.
      v4: Fix a flags-never-cleared issue in patch 02. Update selftest to
          cover this.
      v3: Rebase the code on latest bpf-next.
      v2: Fix a flag renaming issue in patch 02.
      ====================
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • selftests/bpf: Add xdp_redirect_multi test · d2329247
      Hangbin Liu authored
      Add a bpf selftest for the new helper xdp_redirect_map_multi(). In
      this test there are 3 forward groups and 1 exclude group. The test
      will redirect each interface's packets to all the interfaces in the
      forward group, and exclude the interface in the exclude map.
      
      Two maps (DEVMAP, DEVMAP_HASH) and two xdp modes (generic, driver)
      will be tested. The XDP egress program will also be tested by setting
      the pkt src MAC to the egress interface's MAC address.
      
      For more test details, see the test script. Here is the test result.
      ]# time ./test_xdp_redirect_multi.sh
      Pass: xdpgeneric arp(F_BROADCAST) ns1-1
      Pass: xdpgeneric arp(F_BROADCAST) ns1-2
      Pass: xdpgeneric arp(F_BROADCAST) ns1-3
      Pass: xdpgeneric IPv4 (F_BROADCAST|F_EXCLUDE_INGRESS) ns1-1
      Pass: xdpgeneric IPv4 (F_BROADCAST|F_EXCLUDE_INGRESS) ns1-2
      Pass: xdpgeneric IPv4 (F_BROADCAST|F_EXCLUDE_INGRESS) ns1-3
      Pass: xdpgeneric IPv6 (no flags) ns1-1
      Pass: xdpgeneric IPv6 (no flags) ns1-2
      Pass: xdpdrv arp(F_BROADCAST) ns1-1
      Pass: xdpdrv arp(F_BROADCAST) ns1-2
      Pass: xdpdrv arp(F_BROADCAST) ns1-3
      Pass: xdpdrv IPv4 (F_BROADCAST|F_EXCLUDE_INGRESS) ns1-1
      Pass: xdpdrv IPv4 (F_BROADCAST|F_EXCLUDE_INGRESS) ns1-2
      Pass: xdpdrv IPv4 (F_BROADCAST|F_EXCLUDE_INGRESS) ns1-3
      Pass: xdpdrv IPv6 (no flags) ns1-1
      Pass: xdpdrv IPv6 (no flags) ns1-2
      Pass: xdpegress mac ns1-2
      Pass: xdpegress mac ns1-3
      Summary: PASS 18, FAIL 0
      
      real    1m18.321s
      user    0m0.123s
      sys     0m0.350s
      Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
      Link: https://lore.kernel.org/bpf/20210519090747.1655268-5-liuhangbin@gmail.com
    • sample/bpf: Add xdp_redirect_map_multi for redirect_map broadcast test · e48cfe4b
      Hangbin Liu authored
      This is a sample for XDP redirect broadcast. With the sample we can
      forward all packets between the given interfaces. There is also an
      option -X that enables a 2nd xdp_prog on the egress interface.
      Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
      Link: https://lore.kernel.org/bpf/20210519090747.1655268-4-liuhangbin@gmail.com
    • xdp: Extend xdp_redirect_map with broadcast support · e624d4ed
      Hangbin Liu authored
      This patch adds two flags, BPF_F_BROADCAST and BPF_F_EXCLUDE_INGRESS,
      to extend xdp_redirect_map for broadcast support.
      
      With BPF_F_BROADCAST the packet will be broadcast to all the
      interfaces in the map. With BPF_F_EXCLUDE_INGRESS the ingress
      interface will be excluded when broadcasting.
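      From the BPF program side, the flags are passed as the third argument to bpf_redirect_map(); a sketch (the map layout is illustrative, and the key is ignored in broadcast mode):

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_DEVMAP_HASH);
	__uint(key_size, sizeof(int));
	__uint(value_size, sizeof(int));
	__uint(max_entries, 32);
} forward_map SEC(".maps");

SEC("xdp")
int xdp_broadcast(struct xdp_md *ctx)
{
	/* Clone the frame to every interface in forward_map except the
	 * one it arrived on. */
	return bpf_redirect_map(&forward_map, 0,
				BPF_F_BROADCAST | BPF_F_EXCLUDE_INGRESS);
}

char _license[] SEC("license") = "GPL";
```

      Without the flags, the same call degenerates to the existing single-interface redirect keyed by the second argument.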
      
      When getting the devices in the dev hash map via
      dev_map_hash_get_next_key(), there is a possibility that we fall back
      to the first key when a device is removed. This would duplicate
      packets on some interfaces. So just walk the whole set of buckets to
      avoid this issue. For the dev array map, we also walk the whole map
      to find valid interfaces.
      
      Function bpf_clear_redirect_map() was removed in
      commit ee75aef2 ("bpf, xdp: Restructure redirect actions").
      Add it back as we need to use ri->map again.
      
      With test topology:
        +-------------------+             +-------------------+
        | Host A (i40e 10G) |  ---------- | eno1(i40e 10G)    |
        +-------------------+             |                   |
                                          |   Host B          |
        +-------------------+             |                   |
        | Host C (i40e 10G) |  ---------- | eno2(i40e 10G)    |
        +-------------------+             |                   |
                                          |          +------+ |
                                          | veth0 -- | Peer | |
                                          | veth1 -- |      | |
                                          | veth2 -- |  NS  | |
                                          |          +------+ |
                                          +-------------------+
      
      On Host A:
       # pktgen/pktgen_sample03_burst_single_flow.sh -i eno1 -d $dst_ip -m $dst_mac -s 64
      
      On Host B (Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz, 128G memory):
      use xdp_redirect_map and xdp_redirect_map_multi in samples/bpf for
      testing. All the veth peers in the NS have an XDP_DROP program loaded.
      The forward_map max_entries in xdp_redirect_map_multi is modified to 4.
      
      Testing the performance impact on the regular xdp_redirect path with and
      without patch (to check impact of additional check for broadcast mode):
      
      Version          | Test                                | Generic | Native
      5.12 rc4         | redirect_map        i40e->i40e      |    2.0M |  9.7M
      5.12 rc4         | redirect_map        i40e->veth      |    1.7M | 11.8M
      5.12 rc4 + patch | redirect_map        i40e->i40e      |    2.0M |  9.6M
      5.12 rc4 + patch | redirect_map        i40e->veth      |    1.7M | 11.7M
      
      Testing the performance when cloning packets with the redirect_map_multi
      test, using a redirect map size of 4, filled with 1-3 devices:
      
      Version          | Test                                | Generic | Native
      5.12 rc4 + patch | redirect_map multi  i40e->veth (x1) |    1.7M | 11.4M
      5.12 rc4 + patch | redirect_map multi  i40e->veth (x2) |    1.1M |  4.3M
      5.12 rc4 + patch | redirect_map multi  i40e->veth (x3) |    0.8M |  2.6M
      Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
      Acked-by: Martin KaFai Lau <kafai@fb.com>
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Link: https://lore.kernel.org/bpf/20210519090747.1655268-3-liuhangbin@gmail.com
    • bpf: Run devmap xdp_prog on flush instead of bulk enqueue · cb261b59
      Jesper Dangaard Brouer authored
      This changes the devmap XDP program support to run the program when the
      bulk queue is flushed instead of before the frame is enqueued. This has
      a couple of benefits:
      
      - It "sorts" the packets by destination devmap entry, and then runs the
        same BPF program on all the packets in sequence. This ensures that we
        keep the XDP program and destination device properties hot in I-cache.
      
      - It makes the multicast implementation simpler because it can just
        enqueue packets using bq_enqueue() without having to deal with the
        devmap program at all.
      
      The drawback is that if the devmap program drops the packet, the enqueue
      step is redundant. However, arguably this is mostly visible in a
      micro-benchmark, and with more mixed traffic the I-cache benefit should
      win out. The performance impact of just this patch is as follows:
      
      Using two 10Gb i40e NICs, redirecting one to another, or into a veth
      interface which does XDP_DROP on the veth peer. With xdp_redirect_map
      in samples/bpf, send pkts via the pktgen cmd:
      ./pktgen_sample03_burst_single_flow.sh -i eno1 -d $dst_ip -m $dst_mac -t 10 -s 64
      
      There is about +/- 0.1M deviation for native testing; the performance
      improved for the base case, but drops back somewhat with an xdp devmap
      prog attached.
      
      Version          | Test                           | Generic | Native | Native + 2nd xdp_prog
      5.12 rc4         | xdp_redirect_map   i40e->i40e  |    1.9M |   9.6M |  8.4M
      5.12 rc4         | xdp_redirect_map   i40e->veth  |    1.7M |  11.7M |  9.8M
      5.12 rc4 + patch | xdp_redirect_map   i40e->i40e  |    1.9M |   9.8M |  8.0M
      5.12 rc4 + patch | xdp_redirect_map   i40e->veth  |    1.7M |  12.0M |  9.4M
      
      When bq_xmit_all() is called from bq_enqueue(), another packet will
      always be enqueued immediately after, so clearing dev_rx, xdp_prog and
      flush_node in bq_xmit_all() is redundant. Move the clear to __dev_flush(),
      and only check them once in bq_enqueue() since they are all modified
      together.
      
      This change also has the side effect of extending the lifetime of the
      RCU-protected xdp_prog that lives inside the devmap entries: Instead of
      just living for the duration of the XDP program invocation, the
      reference now lives all the way until the bq is flushed. This is safe
      because the bq flush happens at the end of the NAPI poll loop, so
      everything happens between a local_bh_disable()/local_bh_enable() pair.
      However, this is by no means obvious from looking at the call sites; in
      particular, some drivers have an additional rcu_read_lock() around only
      the XDP program invocation, which only confuses matters further.
      Cleaning this up will be done in a separate patch series.
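      The flush-time batching can be modeled in plain C (a toy model of the control flow, not kernel code):

```c
#include <stddef.h>

#define BQ_SIZE 16

/* Per-destination bulk queue: packets (modeled as ints) are enqueued
 * without running the devmap program; the program now runs over the
 * whole batch at flush time, keeping it hot in I-cache. */
struct bulk_queue {
	int pkts[BQ_SIZE];
	size_t count;
};

static int bq_enqueue(struct bulk_queue *bq, int pkt)
{
	if (bq->count == BQ_SIZE)
		return -1; /* real code would flush first */
	bq->pkts[bq->count++] = pkt;
	return 0;
}

/* Run the (simulated) devmap program once over the batch; packets for
 * which it returns 0 are dropped. Returns the number kept. */
static size_t bq_flush(struct bulk_queue *bq, int (*prog)(int pkt))
{
	size_t kept = 0;

	for (size_t i = 0; i < bq->count; i++)
		if (prog(bq->pkts[i]))
			bq->pkts[kept++] = bq->pkts[i];
	bq->count = 0;
	return kept;
}

/* Example "program": drop odd-numbered packets. */
static int drop_odd(int pkt)
{
	return (pkt & 1) == 0;
}
```

      The drawback noted above is visible in the model too: a dropped packet still pays the enqueue cost before being discarded at flush.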
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Link: https://lore.kernel.org/bpf/20210519090747.1655268-2-liuhangbin@gmail.com
    • Merge branch 'libbpf: error reporting changes for v1.0' · 21703cf7
      Alexei Starovoitov authored
      Andrii Nakryiko says:
      
      ====================
      
      Implement the error reporting changes discussed in the "Libbpf: the
      road to v1.0" document ([0]).
      
      Libbpf gets a new API, libbpf_set_strict_mode(), which accepts a set
      of flags that turn on a set of libbpf 1.0 changes that might
      potentially be breaking. It's possible to opt in to all current and
      future 1.0 features by specifying the LIBBPF_STRICT_ALL flag.
      
      When some of the 1.0 "features" are requested, libbpf APIs might
      behave differently. In this patch set a first set of changes is
      implemented, all related to the way libbpf returns errors. See the
      individual patches for details.
      
      Patch #1 adds a no-op libbpf_set_strict_mode() functionality to enable
      updating selftests.
      
      Patch #2 gets rid of all the bad code patterns that will break in
      libbpf 1.0 (exact -1 comparison for low-level APIs, direct IS_ERR()
      macro usage to check pointer-returning APIs for errors, etc.). These
      changes make the selftests work in both legacy and 1.0 libbpf modes.
      Selftests also opt in to 100% libbpf 1.0 mode to automatically gain
      all the subsequent changes, which will come in follow-up patches.
      
      Patch #3 streamlines error reporting for low-level APIs wrapping bpf() syscall.
      
      Patch #4 streamlines errors for all the rest APIs.
      
      Patch #5 ensures that BPF skeletons propagate errors properly as well, as
      currently on error some APIs will return NULL with no way of checking exact
      error code.
      
        [0] https://docs.google.com/document/d/1UyjTZuPFWiPFyKk1tV5an11_iaRuec6U-ZESZ54nNTY
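      The opt-in flow and the resulting error-checking pattern can be sketched as follows (the object file name is illustrative; requires linking against libbpf):

```c
#include <bpf/libbpf.h>
#include <errno.h>
#include <stdio.h>

int main(void)
{
	/* Opt in to all current and future libbpf 1.0 behaviors, before
	 * any other libbpf call. */
	libbpf_set_strict_mode(LIBBPF_STRICT_ALL);

	struct bpf_object *obj = bpf_object__open("prog.bpf.o");
	if (!obj) {
		/* In strict mode, pointer-returning APIs return NULL and
		 * set errno, instead of encoding the error in the pointer
		 * (no more IS_ERR()/libbpf_get_error() juggling). */
		fprintf(stderr, "failed to open object: %d\n", -errno);
		return 1;
	}

	bpf_object__close(obj);
	return 0;
}
```

      In legacy mode the same open call could return an ERR_PTR-encoded pointer, which is exactly the pattern patch #2 removes from the selftests.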
      
      v1->v2:
        - move libbpf_set_strict_mode() implementation to patch #1, where it belongs
          (Alexei);
        - add acks, slight rewording of commit messages.
      ====================
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>