1. 17 Jan, 2020 2 commits
    • xdp: Move devmap bulk queue into struct net_device · 75ccae62
      Toke Høiland-Jørgensen authored
      Commit 96360004 ("xdp: Make devmap flush_list common for all map
      instances"), changed devmap flushing to be a global operation instead of a
      per-map operation. However, the queue structure used for bulking was still
      allocated as part of the containing map.
      
      This patch moves the devmap bulk queue into struct net_device. The
      motivation for this is reusing it for the non-map variant of XDP_REDIRECT,
      which will be changed in a subsequent commit.  To avoid other fields of
      struct net_device moving to different cache lines, we also move a couple of
      other members around.
      
      We defer the actual allocation of the bulk queue structure until the
      NETDEV_REGISTER notification in devmap.c. This makes it possible to check for
      ndo_xdp_xmit support before allocating the structure, which is not possible
      at the time struct net_device is allocated. However, we keep the freeing in
      free_netdev() to avoid adding another RCU callback on NETDEV_UNREGISTER.
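
      For illustration, a minimal sketch of what such a deferred allocation can
      look like in the netdev notifier. The function name is hypothetical, the
      per-CPU allocation is an assumption, and the real devmap.c code differs in
      details:

      #include <linux/netdevice.h>
      #include <linux/notifier.h>
      #include <linux/percpu.h>

      static int devmap_netdev_event(struct notifier_block *nb,
      				     unsigned long event, void *ptr)
      {
      	struct net_device *dev = netdev_notifier_info_to_dev(ptr);

      	if (event != NETDEV_REGISTER)
      		return NOTIFY_OK;

      	/* only devices that can transmit XDP frames need a bulk queue */
      	if (!dev->netdev_ops->ndo_xdp_xmit || dev->xdp_bulkq)
      		return NOTIFY_OK;

      	dev->xdp_bulkq = alloc_percpu(struct xdp_dev_bulk_queue);
      	if (!dev->xdp_bulkq)
      		return NOTIFY_BAD;

      	/* the queue itself is freed in free_netdev(), not here */
      	return NOTIFY_OK;
      }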
      
      Because of this change, we lose the reference back to the map that
      originated the redirect, so change the tracepoint to always return 0 as the
      map ID and index. Otherwise no functional change is intended with this
      patch.
      
      After this patch, the relevant part of struct net_device looks like this,
      according to pahole:
      
      	/* --- cacheline 14 boundary (896 bytes) --- */
      	struct netdev_queue *      _tx __attribute__((__aligned__(64))); /*   896     8 */
      	unsigned int               num_tx_queues;        /*   904     4 */
      	unsigned int               real_num_tx_queues;   /*   908     4 */
      	struct Qdisc *             qdisc;                /*   912     8 */
      	unsigned int               tx_queue_len;         /*   920     4 */
      	spinlock_t                 tx_global_lock;       /*   924     4 */
      	struct xdp_dev_bulk_queue * xdp_bulkq;           /*   928     8 */
      	struct xps_dev_maps *      xps_cpus_map;         /*   936     8 */
      	struct xps_dev_maps *      xps_rxqs_map;         /*   944     8 */
      	struct mini_Qdisc *        miniq_egress;         /*   952     8 */
      	/* --- cacheline 15 boundary (960 bytes) --- */
      	struct hlist_head  qdisc_hash[16];               /*   960   128 */
      	/* --- cacheline 17 boundary (1088 bytes) --- */
      	struct timer_list  watchdog_timer;               /*  1088    40 */
      
      	/* XXX last struct has 4 bytes of padding */
      
      	int                        watchdog_timeo;       /*  1128     4 */
      
      	/* XXX 4 bytes hole, try to pack */
      
      	struct list_head   todo_list;                    /*  1136    16 */
      	/* --- cacheline 18 boundary (1152 bytes) --- */
      Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Björn Töpel <bjorn.topel@intel.com>
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Link: https://lore.kernel.org/bpf/157918768397.1458396.12673224324627072349.stgit@toke.dk
    • libbpf: Revert bpf_helper_defs.h inclusion regression · 20f21d98
      Andrii Nakryiko authored
      Revert bpf_helpers.h's change to include auto-generated bpf_helper_defs.h
      through <> instead of "", which causes it to be searched in the include path. This
      can break existing applications that don't have their include path pointing
      directly to where libbpf installs its headers.
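
      For reference, the preprocessor semantics behind the revert (a minimal
      example, not libbpf code):

      /* "" is looked up relative to the including file first, so bpf_helpers.h
       * finds the generated header sitting next to it without extra -I flags */
      #include "bpf_helper_defs.h"

      /* <> is only searched in the include path, so this fails unless the
       * directory containing bpf_helper_defs.h is passed via -I */
      #include <bpf_helper_defs.h>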
      
      There is ongoing work to make all (not just bpf_helper_defs.h) includes more
      consistent across libbpf and its consumers, but this unbreaks user code as-is
      right now without any regressions. Selftests still behave sub-optimally
      (taking bpf_helper_defs.h from libbpf's source directory, if it's present
      there), which will be fixed in subsequent patches.
      
      Fixes: 6910d7d3 ("selftests/bpf: Ensure bpf_helper_defs.h are taken from selftests dir")
      Reported-by: Toke Høiland-Jørgensen <toke@redhat.com>
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20200117004103.148068-1-andriin@fb.com
  2. 16 Jan, 2020 3 commits
  3. 15 Jan, 2020 22 commits
  4. 14 Jan, 2020 9 commits
  5. 10 Jan, 2020 4 commits
    • selftests/bpf: Add BPF_PROG, BPF_KPROBE, and BPF_KRETPROBE macros · ac065870
      Andrii Nakryiko authored
      Streamline the BPF_TRACE_x macro by moving the return type and section
      attribute definition out of the macro itself. That makes those functions
      look similar to other BPF programs in the source code. Additionally,
      simplify its usage by determining the number of arguments automatically
      (so just a single BPF_TRACE vs a family of BPF_TRACE_1, BPF_TRACE_2,
      etc.). Also, allow a more natural function argument syntax, without commas
      between argument type and name.
      
      Given this helper is useful not only for tracing tp_btf/fentry/fexit
      programs, but could also be used for LSM programs and others following the
      same pattern, rename the BPF_TRACE macro into the more generic BPF_PROG.
      Existing BPF_TRACE_x usages in selftests are converted to the new BPF_PROG
      macro.
      
      Following the same pattern, define BPF_KPROBE and BPF_KRETPROBE macros for
      nicer usage of kprobe and kretprobe arguments, respectively. BPF_KRETPROBE
      adopts the same convention used by fexit programs: the last defined
      argument is the probed function's return value.
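
      Illustrative usage, assuming the selftests' bpf_helpers.h and
      bpf_trace_helpers.h are included (__set_task_comm is the kernel function
      already used by the test_overhead selftest):

      /* fentry: arguments are written naturally as "type name"; no commas
       * between type and name inside the macro */
      SEC("fentry/__set_task_comm")
      int BPF_PROG(fentry_comm, struct task_struct *tsk, const char *buf, bool exec)
      {
      	return 0;
      }

      /* kprobe on the same function; BPF_KRETPROBE would, for a non-void
       * function, take the return value as its last argument */
      SEC("kprobe/__set_task_comm")
      int BPF_KPROBE(kprobe_comm, struct task_struct *tsk, const char *buf, bool exec)
      {
      	return 0;
      }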
      
      v4->v5:
      - fix test_overhead test (__set_task_comm is void) (Alexei);
      
      v3->v4:
      - rebased and fixed one more BPF_TRACE_x occurrence (Alexei);
      
      v2->v3:
      - rename to shorter and as generic BPF_PROG (Alexei);
      
      v1->v2:
      - verified GCC handles pragmas as expected;
      - added descriptions to macros;
      - converted new STRUCT_OPS selftest to BPF_HANDLER (worked as expected);
      - added original context as 'ctx' parameter, for cases where it has to be
        passed into BPF helpers. This might cause an accidental naming collision,
        unfortunately, but at least it's easy to work around. Fortunately, this
        situation produces a quite legible compilation error:
      
      progs/bpf_dctcp.c:46:6: error: redefinition of 'ctx' with a different type: 'int' vs 'unsigned long long *'
              int ctx = 123;
                  ^
      progs/bpf_dctcp.c:42:6: note: previous definition is here
      void BPF_HANDLER(dctcp_init, struct sock *sk)
           ^
      ./bpf_trace_helpers.h:58:32: note: expanded from macro 'BPF_HANDLER'
      ____##name(unsigned long long *ctx, ##args)
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Martin KaFai Lau <kafai@fb.com>
      Link: https://lore.kernel.org/bpf/20200110211634.1614739-1-andriin@fb.com
    • libbpf: Poison kernel-only integer types · 1d1a3bcf
      Andrii Nakryiko authored
      Types like u32 accidentally slipping into libbpf source code have been a
      recurring issue. This is not detected during builds inside the kernel
      source tree, but becomes a compilation error in libbpf's GitHub repo.
      Libbpf is supposed to use only the __{s,u}{8,16,32,64} typedefs, so poison
      {s,u}{8,16,32,64} explicitly in every .c file. Doing that in a more
      centralized way, e.g., inside libbpf_internal.h, breaks selftests, which
      use both the kernel's u32 and libbpf_internal.h.
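
      The poisoning itself amounts to a preprocessor directive of roughly this
      form near the top of each .c file; any later use of a poisoned identifier
      becomes a hard compile error:

      /* make sure libbpf doesn't use kernel-only integer typedefs */
      #pragma GCC poison u8 u16 u32 u64 s8 s16 s32 s64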
      
      This patch also fixes a recently added u32 occurrence in libbpf.c.
      
      Fixes: 590a0088 ("bpf: libbpf: Add STRUCT_OPS support")
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Martin KaFai Lau <kafai@fb.com>
      Link: https://lore.kernel.org/bpf/20200110181916.271446-1-andriin@fb.com
    • Merge branch 'bpf-global-funcs' · 7a2d070f
      Daniel Borkmann authored
      Alexei Starovoitov says:
      
      ====================
      Introduce static vs global functions and function-by-function verification.
      This is another step toward dynamic re-linking (or replacement) of global
      functions. See patch 2 for details.
      
      v2->v3:
      - cleaned up a check spotted by Song.
      - rebased and dropped patch 2 that was trying to improve BTF based on ELF.
      - added one more unit test for scalar return value from global func.
      
      v1->v2:
      - addressed review comments from Song, Andrii, Yonghong
      - fixed memory leak in error path
      - added modified ctx check
      - added more tests in patch 7
      ====================
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • selftests/bpf: Add unit tests for global functions · 360301a6
      Alexei Starovoitov authored
      test_global_func[12] - check the 512 byte stack limit.
      test_global_func[34] - check the 8 frame call chain limit.
      test_global_func5    - check that a non-ctx pointer cannot be passed into
                             a function that expects context.
      test_global_func6    - check that the ctx pointer is unmodified.
      test_global_func7    - check that a global function returns a scalar.
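
      A minimal sketch of the distinction these tests exercise (function and
      section names are illustrative; assumes <linux/bpf.h> and the selftests'
      bpf_helpers.h): a static function is verified together with its caller,
      while a non-static "global" function is verified once on its own, based on
      its BTF signature.

      #include <linux/bpf.h>
      #include "bpf_helpers.h"

      char _license[] SEC("license") = "GPL";

      /* verified as part of its caller, as before */
      static __attribute__((noinline)) int add_static(int a, int b)
      {
      	return a + b;
      }

      /* non-static "global" function: verified independently, so its scalar
       * arguments and return value are checked from its BTF signature */
      __attribute__((noinline)) int add_global(int a, int b)
      {
      	return a + b;
      }

      SEC("classifier/test")
      int test_cls(struct __sk_buff *skb)
      {
      	return add_static(1, 2) + add_global(3, 4);
      }
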
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Song Liu <songliubraving@fb.com>
      Link: https://lore.kernel.org/bpf/20200110064124.1760511-7-ast@kernel.org