1. 20 Nov, 2018 10 commits
  2. 19 Nov, 2018 6 commits
  3. 17 Nov, 2018 9 commits
  4. 11 Nov, 2018 4 commits
    • Merge branch 'narrow-loads' · 407be8d0
      Alexei Starovoitov authored
      Andrey Ignatov says:
      
      ====================
      This patch set adds support for narrow loads with offset > 0 to the BPF
      verifier.
      
      Patch 1 provides more details and is the main patch in the set.
      Patches 2 and 3 add new test cases to test_verifier and test_sock_addr
      selftests.
      
      v1->v2:
      - fix -Wdeclaration-after-statement warning.
      ====================
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • selftests/bpf: Test narrow loads with off > 0 for bpf_sock_addr · e7605475
      Andrey Ignatov authored
      Add more test cases for the bpf_sock_addr context to test narrow loads
      with offset > 0 for the ctx->user_ip4 field (__u32), as sketched after
      the list:
      * off=1, size=1;
      * off=2, size=1;
      * off=3, size=1;
      * off=2, size=2.
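      A minimal C-level sketch of the kind of access these cases exercise
      (the program body, section name and "bpf_helpers.h" include are
      illustrative assumptions, not the actual selftest code):

        #include <linux/bpf.h>
        #include "bpf_helpers.h"

        SEC("cgroup/connect4")
        int narrow_user_ip4(struct bpf_sock_addr *ctx)
        {
                /* off=3/size=1 and off=2/size=2 reads into the 4-byte
                 * user_ip4 field; volatile discourages LLVM from widening
                 * them back into a full 32-bit load. */
                volatile __u8 b = ((volatile __u8 *)&ctx->user_ip4)[3];
                volatile __u16 h = *(volatile __u16 *)((__u8 *)&ctx->user_ip4 + 2);

                (void)b;
                (void)h;
                return 1; /* allow */
        }

        char _license[] SEC("license") = "GPL";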
      Signed-off-by: Andrey Ignatov <rdna@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • selftests/bpf: Test narrow loads with off > 0 in test_verifier · 6c2afb67
      Andrey Ignatov authored
      Test the following narrow loads in test_verifier for the __sk_buff
      context (an instruction-level sketch follows the list):
      * off=1, size=1 - ok;
      * off=2, size=1 - ok;
      * off=3, size=1 - ok;
      * off=0, size=2 - ok;
      * off=1, size=2 - fail;
      * off=2, size=2 - ok;
      * off=3, size=2 - fail.
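      A rough instruction-level sketch of what such a test case exercises
      (the field, registers and surrounding instructions are illustrative,
      not the actual test_verifier entry):

        /* Narrow 1-byte context load at off=2 into the 4-byte mark field
         * of struct __sk_buff; accepted once offsets > 0 are allowed. */
        BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1,
                    offsetof(struct __sk_buff, mark) + 2),
        BPF_EXIT_INSN(),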
      Signed-off-by: Andrey Ignatov <rdna@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: Allow narrow loads with offset > 0 · 46f53a65
      Andrey Ignatov authored
      Currently the BPF verifier allows narrow loads for a context field only
      with offset zero. E.g. if there is a __u32 field, then only the following
      loads are permitted:
        * off=0, size=1 (narrow);
        * off=0, size=2 (narrow);
        * off=0, size=4 (full).
      
      On the other hand, LLVM can generate a load with an offset different
      from zero that makes sense from the program logic point of view, but the
      verifier doesn't accept it.
      
      E.g. tools/testing/selftests/bpf/sendmsg4_prog.c has code:
      
        #define DST_IP4			0xC0A801FEU /* 192.168.1.254 */
        ...
        	if ((ctx->user_ip4 >> 24) == (bpf_htonl(DST_IP4) >> 24) &&
      
      where ctx is struct bpf_sock_addr.
      
      Some versions of LLVM can produce the following byte code for it:
      
             8:       71 12 07 00 00 00 00 00         r2 = *(u8 *)(r1 + 7)
             9:       67 02 00 00 18 00 00 00         r2 <<= 24
            10:       18 03 00 00 00 00 00 fe 00 00 00 00 00 00 00 00         r3 = 4261412864 ll
            12:       5d 32 07 00 00 00 00 00         if r2 != r3 goto +7 <LBB0_6>
      
      where `*(u8 *)(r1 + 7)` means a narrow load of ctx->user_ip4 with size=1
      and offset=3 (7 - sizeof(ctx->user_family) = 3). This load is currently
      rejected by the verifier.
      
      The verifier code that rejects such loads is in
      bpf_ctx_narrow_access_ok(), which means that any is_valid_access
      implementation using this function behaves the same way, e.g.
      bpf_skb_is_valid_access() for __sk_buff or sock_addr_is_valid_access()
      for bpf_sock_addr.
      
      This patch adds support for such loads. The offset can be in
      [0; size_default) but has to be a multiple of the load size (a minimal
      sketch of this rule follows the list below). E.g. for a __u32 field the
      following loads are now supported:
        * off=0, size=1 (narrow);
        * off=1, size=1 (narrow);
        * off=2, size=1 (narrow);
        * off=3, size=1 (narrow);
        * off=0, size=2 (narrow);
        * off=2, size=2 (narrow);
        * off=0, size=4 (full).
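      A minimal sketch of the relaxed check described above (an illustration,
      not the actual bpf_ctx_narrow_access_ok() implementation):

        #include <stdbool.h>

        /* A narrow load of `size` bytes at byte offset `off` within a field
         * of `size_default` bytes is acceptable when it is strictly narrower
         * than the field, starts inside it, and the offset is a multiple of
         * the load size. Full-width loads (size == size_default, off == 0)
         * are handled as ordinary loads. */
        static bool narrow_load_ok(unsigned int off, unsigned int size,
                                   unsigned int size_default)
        {
                return size < size_default &&
                       off < size_default &&
                       off % size == 0;
        }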
      Reported-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: Andrey Ignatov <rdna@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
  5. 10 Nov, 2018 11 commits
    • Merge branch 'bpftool-flow-dissector' · f2cbf958
      Alexei Starovoitov authored
      Stanislav Fomichev says:
      
      ====================
      v5 changes:
      * FILE -> PATH for load/loadall (can be either file or directory now)
      * simpler implementation for __bpf_program__pin_name
      * removed p_err for REQ_ARGS checks
      * parse_atach_detach_args -> parse_attach_detach_args
      * for -> while in bpf_object__pin_{programs,maps} recovery
      
      v4 changes:
      * addressed another round of comments/style issues from Jakub Kicinski &
        Quentin Monnet (thanks!)
      * implemented bpf_object__pin_maps and bpf_object__pin_programs helpers and
        used them in bpf_program__pin
      * added new pin_name to bpf_program so bpf_program__pin
        works with sections that contain '/'
      * moved *loadall* command implementation into a separate patch
      * added patch that implements *pinmaps* to pin maps when doing
        load/loadall
      
      v3 changes:
      * (maybe) better cleanup for partial failure in bpf_object__pin
      * added special case in bpf_program__pin for programs with single
        instances
      
      v2 changes:
      * addressed comments/style issues from Jakub Kicinski & Quentin Monnet
      * removed logic that populates jump table
      * added cleanup for partial failure in bpf_object__pin
      
      This patch series adds support for loading and attaching flow dissector
      programs from the bpftool:
      
      * first patch fixes flow dissector section name in the selftests (so
        libbpf auto-detection works)
      * second patch adds proper cleanup to bpf_object__pin, parts of which are now
        being used to attach all flow dissector progs/maps
      * third patch adds special case in bpf_program__pin for programs with
        single instances (we don't create <prog>/0 pin anymore, just <prog>)
      * fourth patch adds pin_name to the bpf_program struct
        which is now used as a pin name in bpf_program__pin et al
      * fifth patch adds *loadall* command that pins all programs, not just
        the first one
      * sixth patch adds *pinmaps* argument to load/loadall to let users pin
        all maps of the obj file
      * seventh patch adds actual flow_dissector support to the bpftool and
        an example
      ====================
      Acked-by: Quentin Monnet <quentin.monnet@netronome.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpftool: support loading flow dissector · 092f0892
      Stanislav Fomichev authored
      This commit adds support for loading/attaching/detaching the flow
      dissector program.
      
      When `bpftool loadall` is called with a flow_dissector prog (i.e. when the
      'type flow_dissector' argument is passed), we load and pin all programs.
      The user is responsible for constructing the jump table for the tail calls.
      
      The last argument of `bpftool attach` is made optional for this use
      case.
      
      Example:
      bpftool prog load tools/testing/selftests/bpf/bpf_flow.o \
              /sys/fs/bpf/flow type flow_dissector \
              pinmaps /sys/fs/bpf/flow
      
      bpftool map update pinned /sys/fs/bpf/flow/jmp_table \
              key 0 0 0 0 \
              value pinned /sys/fs/bpf/flow/IP
      
      bpftool map update pinned /sys/fs/bpf/flow/jmp_table \
              key 1 0 0 0 \
              value pinned /sys/fs/bpf/flow/IPV6
      
      bpftool map update pinned /sys/fs/bpf/flow/jmp_table \
              key 2 0 0 0 \
              value pinned /sys/fs/bpf/flow/IPV6OP
      
      bpftool map update pinned /sys/fs/bpf/flow/jmp_table \
              key 3 0 0 0 \
              value pinned /sys/fs/bpf/flow/IPV6FR
      
      bpftool map update pinned /sys/fs/bpf/flow/jmp_table \
              key 4 0 0 0 \
              value pinned /sys/fs/bpf/flow/MPLS
      
      bpftool map update pinned /sys/fs/bpf/flow/jmp_table \
              key 5 0 0 0 \
              value pinned /sys/fs/bpf/flow/VLAN
      
      bpftool prog attach pinned /sys/fs/bpf/flow/flow_dissector flow_dissector
      
      Tested by using the above lines to load the prog in
      the test_flow_dissector.sh selftest.
      Signed-off-by: Stanislav Fomichev <sdf@google.com>
      Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpftool: add pinmaps argument to the load/loadall · 3767a94b
      Stanislav Fomichev authored
      This new argument lets users pin all maps from the object at the
      specified path.
      Signed-off-by: Stanislav Fomichev <sdf@google.com>
      Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpftool: add loadall command · 77380998
      Stanislav Fomichev authored
      This patch adds a new *loadall* command which slightly differs from the
      existing *load*. The *load* command loads all programs from the obj file,
      but pins only the first program. *loadall* pins all programs from the
      obj file under the specified directory.

      The intended use case is flow_dissector, where we want to load a bunch
      of progs, pin them all, and after that construct a jump table.
      Signed-off-by: Stanislav Fomichev <sdf@google.com>
      Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • libbpf: add internal pin_name · 33a2c75c
      Stanislav Fomichev authored
      pin_name is the same as section_name where '/' is replaced
      by '_'. bpf_object__pin_programs is converted to use pin_name
      to avoid the situation where section_name would require creating another
      subdirectory for a pin (as, for example, when calling bpf_object__pin_programs
      for programs in sections like "cgroup/connect6").
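      A minimal sketch of the transformation described above (the helper name
      is made up; this is not the libbpf internals):

        #include <stdlib.h>
        #include <string.h>

        /* "cgroup/connect6" -> "cgroup_connect6", so the pin becomes a single
         * file instead of a nested directory. Caller frees the result. */
        static char *make_pin_name(const char *section_name)
        {
                char *name = strdup(section_name);
                char *p;

                for (p = name; p && *p; p++)
                        if (*p == '/')
                                *p = '_';
                return name;
        }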
      Signed-off-by: Stanislav Fomichev <sdf@google.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • libbpf: bpf_program__pin: add special case for instances.nr == 1 · fd734c5c
      Stanislav Fomichev authored
      When bpf_program has only one instance, don't create a subdirectory with
      per-instance pin files (<prog>/0). Instead, just create a single pin file
      for that single instance. This simplifies object pinning by not creating
      unnecessary subdirectories.
      
      This can potentially break existing users that depend on the case
      where '/0' is always created. However, I couldn't find any serious
      usage of bpf_program__pin inside the kernel tree and I suppose there
      should be none outside.
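      A hypothetical usage sketch (the pin path is illustrative; assumes
      `prog` comes from an already loaded object):

        int err;

        /* For a single-instance program the pin now lands at the given path
         * itself; before this change the same call produced
         * /sys/fs/bpf/flow_dissector/0. */
        err = bpf_program__pin(prog, "/sys/fs/bpf/flow_dissector");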
      Signed-off-by: Stanislav Fomichev <sdf@google.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • libbpf: cleanup after partial failure in bpf_object__pin · 0c19a9fb
      Stanislav Fomichev authored
      bpftool will use bpf_object__pin in the next commits to pin all programs
      and maps from the file; in case of a partial failure, we need to get
      back to the clean state (undo previous program/map pins).
      
      As part of a cleanup, I've added and exported separate routines to
      pin all maps (bpf_object__pin_maps) and progs (bpf_object__pin_programs)
      of an object.
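      A minimal usage sketch of the newly exported helpers (the wrapper name
      and paths are illustrative; assumes the object has already been opened
      and loaded):

        #include "libbpf.h"  /* tools/lib/bpf */

        static int pin_all(struct bpf_object *obj, const char *pin_dir)
        {
                /* Pin every map, then every program, under pin_dir; the
                 * series adds cleanup so that a partial failure undoes the
                 * pins created so far. */
                if (bpf_object__pin_maps(obj, pin_dir))
                        return -1;
                if (bpf_object__pin_programs(obj, pin_dir))
                        return -1;
                return 0;
        }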
      Signed-off-by: Stanislav Fomichev <sdf@google.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • selftests/bpf: rename flow dissector section to flow_dissector · 108d50a9
      Stanislav Fomichev authored
      Makes it compatible with the logic that derives the program type from
      the section name in libbpf_prog_type_by_name().
      Signed-off-by: Stanislav Fomichev <sdf@google.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • Merge branch 'device-ops-as-cb' · 0157edc8
      Alexei Starovoitov authored
      Quentin Monnet says:
      
      ====================
      For passing device functions for offloaded eBPF programs, there used to
      be no place to store the pointer without making the non-offloaded
      programs pay a memory price.
      
      As a consequence, three functions were called with ndo_bpf() through
      specific commands. Now that we have struct bpf_offload_dev, and since none
      of those operations rely on RTNL, we can turn these three commands into
      hooks inside the struct bpf_prog_offload_ops, and pass them as part of
      bpf_offload_dev_create().
      
      This patch set changes the offload architecture to do so, and brings the
      relevant changes to the nfp and netdevsim drivers.
      ====================
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: do not pass netdev to translate() and prepare() offload callbacks · 16a8cb5c
      Quentin Monnet authored
      The kernel functions that prepare the verifier and translate an
      offloaded program retrieve "offload" from "prog", and "netdev" from
      "offload". Then both "prog" and "netdev" are passed to the callbacks.
      
      Simplify this by letting the drivers retrieve the net device themselves
      from the offload object attached to prog - if they need it at all. There
      is currently no need to pass the netdev as an argument to those
      functions.
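      A hypothetical driver-side sketch of the pattern (not the nfp or
      netdevsim code): the callback receives only the prog and, if it needs
      the net device at all, recovers it from the offload object attached to
      the prog:

        #include <linux/bpf.h>
        #include <linux/netdevice.h>

        static int example_offload_translate(struct bpf_prog *prog)
        {
                /* The offload object already carries the bound net device. */
                struct net_device *netdev = prog->aux->offload->netdev;

                netdev_dbg(netdev, "translating offloaded prog id %u\n",
                           prog->aux->id);
                return 0;
        }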
      Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
      Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: pass prog instead of env to bpf_prog_offload_verifier_prep() · a40a2632
      Quentin Monnet authored
      Function bpf_prog_offload_verifier_prep(), called from the kernel BPF
      verifier to run a driver-specific callback for preparing for the
      verification step for offloaded programs, takes a pointer to a struct
      bpf_verifier_env object. However, no driver callback needs the whole
      structure at this time: the two drivers supporting this, nfp and
      netdevsim, only need a pointer to the struct bpf_prog instance held by
      env.
      
      Update the callback accordingly, on the kernel side and in these two
      drivers.
      Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
      Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>