1. 09 Mar, 2023 7 commits
    • selftests/bpf: add number iterator tests · f59b1460
      Andrii Nakryiko authored
      Add number iterator (bpf_iter_num_{new,next,destroy}()) tests,
      validating the correct handling of various corner and common cases
      *at runtime*.
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/r/20230308184121.1165081-8-andrii@kernel.org
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • selftests/bpf: add iterators tests · 57400dcc
      Andrii Nakryiko authored
      Add various tests for open-coded iterators. Some of them exercise
      various possible coding patterns in C, some go down to low-level
      assembly for more control over various conditions, especially invalid
      ones.
      
      We also make use of bpf_for(), bpf_for_each(), bpf_repeat() macros in
      some of these tests.
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/r/20230308184121.1165081-7-andrii@kernel.org
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • selftests/bpf: add bpf_for_each(), bpf_for(), and bpf_repeat() macros · 8c2b5e90
      Andrii Nakryiko authored
      Add bpf_for_each(), bpf_for(), and bpf_repeat() macros that make writing
      open-coded iterator-based loops much more convenient and natural. These
      macros utilize the cleanup attribute to ensure proper destruction of the
      iterator and, thanks to that, manage to provide ergonomics very close to
      C's for() construct. A typical loop would look like:
      
        int i;
        int arr[N];
      
        bpf_for(i, 0, N) {
            /* verifier will know that i >= 0 && i < N, so could be used to
             * directly access array elements with no extra checks
             */
             arr[i] = i;
        }
      
      bpf_repeat() is very similar, but it doesn't expose the iteration number
      and is meant as a simple "repeat action N times" loop:
      
        bpf_repeat(N) { /* whatever, N times */ }
      
      Note that `break` and `continue` statements inside the {} block work as
      expected.
      
      bpf_for_each() is a generalization over any kind of BPF open-coded
      iterator, allowing a for-each-like approach instead of explicitly calling
      low-level bpf_iter_<type>_{new,next,destroy}() APIs. E.g.:
      
        struct cgroup *cg;
      
        bpf_for_each(cgroup, cg, some, input, args) {
            /* do something with each cg */
        }
      
      would call (not-yet-implemented) bpf_iter_cgroup_{new,next,destroy}()
      functions to form a loop over cgroups, where `some, input, args` are
      passed verbatim into the constructor as
      
        bpf_iter_cgroup_new(&it, some, input, args).
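
      For a sense of the mechanics behind all three macros, here is a
      simplified sketch of what bpf_for() can expand to; the real macro in
      selftests' bpf_misc.h additionally enforces alignment, works around
      a Clang BTF-emission quirk, and re-checks bounds to help the verifier:

        #define bpf_for(i, start, end)                                      \
            for (struct bpf_iter_num ___it                                  \
                     __attribute__((cleanup(bpf_iter_num_destroy))),        \
                 /* ___p exists only to run the constructor exactly once */ \
                 *___p __attribute__((unused)) =                            \
                     (bpf_iter_num_new(&___it, (start), (end)), (void *)0); \
                 ({ int *___t = bpf_iter_num_next(&___it);                  \
                    ___t ? ((i) = *___t, 1) : 0; });                        \
                 /* no increment: next() advances the iterator */)

      The cleanup attribute guarantees that bpf_iter_num_destroy() runs when
      ___it goes out of scope, which is what makes `break` inside the loop
      body safe.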
      
      As a first demonstration, add pyperf variant based on the bpf_for() loop.
      
      Also clean up a few tests that either included the bpf_misc.h header
      unnecessarily from user-space, which is unsupported, or included it
      before any common types were defined (potentially leading to unnecessary
      compilation warnings).
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/r/20230308184121.1165081-6-andrii@kernel.org
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: implement numbers iterator · 6018e1f4
      Andrii Nakryiko authored
      Implement the first open-coded iterator type over a range of integers.
      
      Its public API consists of:
        - bpf_iter_num_new() constructor, which accepts a [start, end) range
          (that is, start is inclusive, end is exclusive).
        - bpf_iter_num_next(), which will keep returning a read-only pointer
          to int until the range is exhausted, at which point NULL is
          returned. If bpf_iter_num_next() keeps being called after that,
          NULL will be persistently returned.
        - bpf_iter_num_destroy() destructor, which needs to be called at some
          point to clean up iterator state. The BPF verifier enforces that
          the iterator destructor is called before the BPF program exits.
      
      Note that `start = end = X` is a valid combination to set up an empty
      iterator. bpf_iter_num_new() will return 0 (success) for any such
      combination.

      If bpf_iter_num_new() detects an invalid combination of input arguments,
      it returns an error and resets the iterator state to, effectively, an
      empty iterator, so any subsequent call to bpf_iter_num_next() will keep
      returning NULL.
      
      The BPF verifier has no knowledge that the returned integers are in the
      [start, end) value range, as `start` and `end` are runtime values, not
      statically known and enforced.
      
      While the implementation is pretty trivial, some care needs to be taken
      to avoid overflows and underflows. Subsequent selftests will validate
      correctness of [start, end) semantics, especially around extremes
      (INT_MIN and INT_MAX).
      
      Similarly to bpf_loop(), we enforce that no more than BPF_MAX_LOOPS
      iterations can be requested.
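
      As a toy user-space model of the semantics described above (struct
      num_iter and all names below are illustrative, not the kernel's
      internal layout; MAX_LOOPS stands in for the kernel's BPF_MAX_LOOPS
      cap):

        #include <limits.h>
        #include <stdio.h>

        #define MAX_LOOPS (8 * 1024 * 1024)

        struct num_iter {
            long long cur, end; /* 64-bit so INT_MAX extremes can't overflow */
            int val;            /* storage handed back to the caller */
        };

        int iter_num_new(struct num_iter *it, int start, int end)
        {
            /* invalid or oversized range: reset to empty, report error */
            if (start > end || (long long)end - start > MAX_LOOPS) {
                it->cur = it->end = 0;
                return -1;
            }
            it->cur = start; /* start == end is valid: an empty range */
            it->end = end;
            return 0;
        }

        int *iter_num_next(struct num_iter *it)
        {
            if (it->cur >= it->end)
                return NULL; /* exhausted; stays NULL on repeated calls */
            it->val = (int)it->cur++;
            return &it->val;
        }

        int main(void)
        {
            struct num_iter it;
            int *v;

            iter_num_new(&it, INT_MAX - 2, INT_MAX); /* extremes are safe */
            while ((v = iter_num_next(&it)))
                printf("%d\n", *v); /* prints 2147483645, 2147483646 */
            return 0;
        }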
      
      bpf_iter_num_{new,next,destroy}() is a logical evolution from bounded
      BPF loops and the bpf_loop() helper, and is the basis for implementing
      ergonomic BPF loops with no statically known or verified bounds.
      Subsequent patches implement the bpf_for() macro, demonstrating how this
      can be wrapped into something that works and feels like a normal for()
      loop in the C language.
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/r/20230308184121.1165081-5-andrii@kernel.org
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: add support for open-coded iterator loops · 06accc87
      Andrii Nakryiko authored
      Teach the verifier about the concept of open-coded (or inline) iterators.
      
      This patch adds generic iterator loop verification logic, a new
      STACK_ITER stack slot type to contain iterator state, and the necessary
      kfunc plumbing for an iterator's constructor, destructor, and next
      methods. The next patch implements the first specific iterator (a
      numbers iterator for implementing for() loop logic). Such a split allows
      more focused commits for the verifier logic and a separate commit that
      we can later point to when demonstrating what it takes to add a new kind
      of iterator.
      
      Each kind of iterator has its own associated struct bpf_iter_<type>,
      where <type> denotes a specific type of iterator. struct bpf_iter_<type>
      state is supposed to live on the BPF program stack, so there will be no
      way to change its size later on without breaking backwards
      compatibility, so choose wisely! But given this struct is specific to
      a given <type> of iterator, this allows a lot of flexibility: simple
      iterators could be fine with just one stack slot (8 bytes), like the
      numbers iterator in the next patch, while other more complicated
      iterators might need way more to keep their iterator state. Either way,
      such a design avoids runtime memory allocations, which would otherwise
      be necessary if we fixed the on-stack size and it turned out to be too
      small for a given iterator implementation.
      
      The way the BPF verifier logic is implemented, there are no artificial
      restrictions on the number of active iterators: multiple active
      iterators at the same time work correctly. This also means you can have
      multiple nested iteration loops, as in the sketch below. A struct
      bpf_iter_<type> reference can be safely passed to subprograms as well.
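
      A sketch of nested loops using the numbers iterator from the next
      patch:

        struct bpf_iter_num outer, inner;
        int *i, *j;

        bpf_iter_num_new(&outer, 0, 3);
        while ((i = bpf_iter_num_next(&outer))) {
            /* each iterator has its own independent on-stack state */
            bpf_iter_num_new(&inner, 0, *i);
            while ((j = bpf_iter_num_next(&inner)))
                bpf_printk("i=%d j=%d", *i, *j);
            bpf_iter_num_destroy(&inner);
        }
        bpf_iter_num_destroy(&outer);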
      
      The general flow is easiest to demonstrate with a simple example using
      the numbers iterator implemented in the next patch. Here's the simplest
      possible loop:
      
        struct bpf_iter_num it;
        int *v;
      
        bpf_iter_num_new(&it, 2, 5);
        while ((v = bpf_iter_num_next(&it))) {
            bpf_printk("X = %d", *v);
        }
        bpf_iter_num_destroy(&it);
      
      The above snippet should output "X = 2", "X = 3", "X = 4". Note that 5
      is exclusive and is not returned. This matches similar APIs (e.g.,
      slices in Go or Rust) that implement a range of elements, where the end
      index is non-inclusive.
      
      In the above example, we see a trio of functions:
        - constructor, bpf_iter_num_new(), which initializes iterator state
        (struct bpf_iter_num it) on the stack. If any of the input arguments
        are invalid, the constructor should make sure to still initialize the
        state such that subsequent bpf_iter_num_next() calls return NULL.
        I.e., on error, return an error and construct an empty iterator.
        - next method, bpf_iter_num_next(), which accepts a pointer to
        iterator state and produces an element. The next method should always
        return a pointer. The contract with the BPF verifier is that the next
        method will always eventually return NULL once elements are
        exhausted. Once NULL is returned, subsequent next calls should keep
        returning NULL. In the case of the numbers iterator,
        bpf_iter_num_next() returns a pointer to an int (storage for this
        integer is inside the iterator state itself), which can be
        dereferenced after the corresponding NULL check.
        - once done with the iterator, the user is required to clean up its
        state with a call to the destructor, bpf_iter_num_destroy() in this
        case. The destructor frees up any resources and marks the stack space
        used by struct bpf_iter_num as usable for something else.
      
      Any other iterator implementation will have to implement at least these
      three methods. It is enforced that for any given type of iterator only
      the applicable constructor/destructor/next methods are callable. I.e.,
      the verifier ensures you can't pass a numbers iterator's state into,
      say, a cgroup iterator's next method.
      
      It is important to keep the naming pattern consistent to be able to
      create generic macros that help with BPF iter usability. E.g., one of
      the follow-up patches adds a generic bpf_for_each() macro to bpf_misc.h
      in selftests, which allows the iterator "trio" to be used nicely without
      having to code the above somewhat tedious loop explicitly every time.
      The naming convention is enforced at kfunc registration time by one of
      the previous patches in this series.
      
      At the implementation level, iterator state tracking for verification
      purposes is very similar to dynptr. We add a STACK_ITER stack slot type,
      reserve the necessary number of slots, depending on
      sizeof(struct bpf_iter_<type>), and keep track of the necessary extra
      state in the "main" slot, which is marked with a non-zero ref_obj_id.
      Other slots are also marked as STACK_ITER, but have a zero ref_obj_id.
      This is simpler than having a separate "is_first_slot" flag.
      
      Another big distinction is that STACK_ITER is *always refcounted*, which
      simplifies the implementation without sacrificing usability. So there is
      no need for an extra "iter_id", no need to anticipate reuse of
      STACK_ITER slots for new constructors, etc. Keeping it simple here.
      
      As far as the verification logic goes, there are two extensive comments,
      in process_iter_next_call() and iter_active_depths_differ(), explaining
      some important and sometimes subtle aspects. Please refer to them for
      details.
      
      But from a 10,000-foot point of view, next methods are the points where
      a verification state forks, conceptually similar to what the verifier
      does when validating a conditional jump. We branch out at
      a `call bpf_iter_<type>_next` instruction and simulate two outcomes:
      NULL (iteration is done) and non-NULL (a new element is returned). NULL
      is simulated first and is supposed to reach the exit without looping.
      After that, the non-NULL case is validated and it either reaches the
      exit (for trivial examples with no real loop), or reaches another
      `call bpf_iter_<type>_next` instruction with a state equivalent to an
      already (partially) validated one. State equivalence at that point means
      we would technically be looping forever without "breaking out" of the
      established "state envelope" (i.e., subsequent iterations don't add any
      new knowledge or constraints to the verifier state, so running 1, 2, 10,
      or a million of them doesn't matter). But taking into account the
      contract stating that the iterator's next method *has to* return NULL
      eventually, we can conclude that the loop body is safe and will
      eventually terminate. Given we validated the logic outside of the loop
      (the NULL case), and concluded that the loop body is safe (though
      potentially looping many times), the verifier can claim safety of the
      overall program logic.
      
      The rest of the patch is necessary plumbing for state tracking, marking,
      validation, and necessary further kfunc plumbing to allow implementing
      iterator constructor, destructor, and next methods.
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/r/20230308184121.1165081-4-andrii@kernel.org
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: add iterator kfuncs registration and validation logic · 215bf496
      Andrii Nakryiko authored
      Add the ability to register kfuncs that implement the BPF open-coded
      iterator contract, and enforce naming and function proto conventions.
      Enforcement happens at kfunc registration time and significantly
      simplifies the rest of the iterators logic in the verifier.
      
      More details follow in subsequent patches, but we enforce the following
      conditions.
      
      All kfuncs (constructor, next, destructor) have to be named consistently
      as bpf_iter_<type>_{new,next,destroy}(), respectively. <type> represents
      the iterator type, and iterator state should be represented as a
      matching `struct bpf_iter_<type>` state type. Also, all iter kfuncs
      should have a pointer to this `struct bpf_iter_<type>` as the very first
      argument.
      
      Additionally:
        - Constructor, i.e., bpf_iter_<type>_new(), can have an arbitrary
        number of extra arguments. The return type is not enforced either.
        - Next method, i.e., bpf_iter_<type>_next(), has to return a pointer
        type and should have exactly one argument: `struct bpf_iter_<type> *`
        (const/volatile/restrict and typedefs are ignored).
        - Destructor, i.e., bpf_iter_<type>_destroy(), should return void and
        should have exactly one argument, similar to the next method.
        - struct bpf_iter_<type> size is enforced to be positive and
        a multiple of 8 bytes (to fit stack slots correctly).
      
      Such strictness and consistency allow building generic helpers that
      abstract away important but boilerplate details, making it possible to
      use open-coded iterators effectively and ergonomically (see
      bpf_for_each() in subsequent patches). It also simplifies the verifier
      logic in some places. At the same time, this doesn't hurt the generality
      of possible iterator implementations. Win-win.
      
      The constructor kfunc is marked with a new KF_ITER_NEW flag, the next
      method is marked with KF_ITER_NEXT (and should also have KF_RET_NULL,
      of course), while the destructor kfunc is marked as KF_ITER_DESTROY.
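
      For instance, the numbers iterator implemented elsewhere in this series
      is registered along these lines (following the kernel's BTF_ID_FLAGS
      pattern for kfunc ID sets):

        BTF_SET8_START(common_btf_ids)
        /* the flags tell the verifier which role each kfunc plays */
        BTF_ID_FLAGS(func, bpf_iter_num_new, KF_ITER_NEW)
        BTF_ID_FLAGS(func, bpf_iter_num_next, KF_ITER_NEXT | KF_RET_NULL)
        BTF_ID_FLAGS(func, bpf_iter_num_destroy, KF_ITER_DESTROY)
        BTF_SET8_END(common_btf_ids)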
      
      Additionally, we add a trivial kfunc name validation: it should be
      a valid non-NULL and non-empty string.
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/r/20230308184121.1165081-3-andrii@kernel.org
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: factor out fetching basic kfunc metadata · 07236eab
      Andrii Nakryiko authored
      Factor out logic to fetch basic kfunc metadata based on struct bpf_insn.
      This is not exactly short or trivial code to just copy/paste, and this
      information is sometimes necessary in other parts of the verifier logic.
      Subsequent patches will rely on this to determine if an instruction is
      a kfunc call to an iterator's next method.
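
      The factored-out helper's shape is roughly the following (a sketch; the
      exact signature in verifier.c may differ):

        /* resolve a kfunc call insn to its BTF, name, proto, and flags */
        static int fetch_kfunc_meta(struct bpf_verifier_env *env,
                                    struct bpf_insn *insn,
                                    struct bpf_kfunc_call_arg_meta *meta,
                                    const char **kfunc_name);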
      
      No functional changes intended, including the verbose() warning
      behavior when a kfunc is not allowed for a particular program type.
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/r/20230308184121.1165081-2-andrii@kernel.org
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
  2. 08 Mar, 2023 24 commits
  3. 07 Mar, 2023 9 commits
    • Merge branch 'libbpf: usdt arm arg parsing support' · d1d51a62
      Andrii Nakryiko authored
      Puranjay Mohan says:
      
      ====================
      
      This series adds support for the ARM architecture to libbpf USDT. This
      involves implementing the parse_usdt_arg() function for ARM.

      It was seen that the last part of parse_usdt_arg() is repeated for all
      architectures, so the first patch in this series refactors these
      functions and moves the post-processing to parse_usdt_spec().
      
      Changes from V2[1] to V3:
      
      - Use a tabular approach to find register offsets.
      - Add the patch for refactoring parse_usdt_arg()
      ====================
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    • libbpf: USDT arm arg parsing support · 720d93b6
      Puranjay Mohan authored
      Parsing of USDT arguments is architecture-specific; on arm it is
      relatively easy since the registers used are r[0-10], fp, ip, sp, lr,
      pc. The format is slightly different compared to aarch64; the forms are:

      - "size @ [ reg, #offset ]" for dereferences, for example
        "-8 @ [ sp, #76 ]" or "-4 @ [ sp ]"
      - "size @ reg" for register values, for example
        "-4@r0"
      - "size @ #value" for raw values, for example
        "-8@#1"
      
      Add support for parsing USDT arguments for the ARM architecture.
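
      The tabular register lookup mentioned in the cover letter can be
      sketched as follows (condensed for illustration; on 32-bit arm all of
      these registers live in pt_regs::uregs[]):

        static int calc_pt_regs_off(const char *reg_name)
        {
                static struct {
                        const char *name;
                        size_t pt_regs_off;
                } reg_map[] = {
                        { "r0", offsetof(struct pt_regs, uregs[0]) },
                        /* ... r1 through r10 elided ... */
                        { "fp", offsetof(struct pt_regs, uregs[11]) },
                        { "ip", offsetof(struct pt_regs, uregs[12]) },
                        { "sp", offsetof(struct pt_regs, uregs[13]) },
                        { "lr", offsetof(struct pt_regs, uregs[14]) },
                        { "pc", offsetof(struct pt_regs, uregs[15]) },
                };
                int i;

                for (i = 0; i < ARRAY_SIZE(reg_map); i++) {
                        if (strcmp(reg_name, reg_map[i].name) == 0)
                                return reg_map[i].pt_regs_off;
                }
                return -ENOENT;
        }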
      
      To test the above changes, QEMU's virt[1] board with a cortex-a15
      CPU was used. libbpf-bootstrap's usdt example[2] was modified to attach
      to a test program with DTRACE_PROBE1/2/3/4... probes to test different
      combinations.
      
      [1] https://www.qemu.org/docs/master/system/arm/virt.html
      [2] https://github.com/libbpf/libbpf-bootstrap/blob/master/examples/c/usdt.bpf.c
      Signed-off-by: Puranjay Mohan <puranjay12@gmail.com>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20230307120440.25941-3-puranjay12@gmail.com
    • libbpf: Refactor parse_usdt_arg() to re-use code · 98e678e9
      Puranjay Mohan authored
      The parse_usdt_arg() function is defined differently for each
      architecture, but the last part of the function is repeated verbatim
      for every architecture.

      Refactor parse_usdt_arg() to only fill in arg_sz, and do the repeated
      post-processing once in parse_usdt_spec(), as sketched below.
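
      A sketch of the shared tail that moves into parse_usdt_spec() (based on
      the post-processing each arch previously duplicated; field names follow
      libbpf's struct usdt_arg_spec):

        /* a negative size means a signed argument; only 1/2/4/8-byte
         * argument sizes are valid */
        arg->arg_signed = arg_sz < 0;
        if (arg_sz < 0)
                arg_sz = -arg_sz;

        switch (arg_sz) {
        case 1: case 2: case 4: case 8:
                arg->arg_bitshift = 64 - arg_sz * 8;
                break;
        default:
                return -EINVAL;
        }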
      Signed-off-by: Puranjay Mohan <puranjay12@gmail.com>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20230307120440.25941-2-puranjay12@gmail.com
    • libbpf: Fix theoretical u32 underflow in find_cd() function · 3ecde218
      Daniel Müller authored
      Coverity reported a potential underflow of the offset variable used in
      the find_cd() function. Switch to a signed 64-bit integer for the
      representation of the offset to make sure it can never underflow.
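
      The failure mode being guarded against looks roughly like this
      (illustrative names only; find_cd() scans backwards through the archive
      for the ZIP end-of-central-directory record):

        /* with a u32 offset, a backwards scan like this never terminates:
         * instead of going negative, offset wraps around to UINT32_MAX */
        __s64 offset;

        for (offset = (__s64)size - min_record_sz; offset >= 0; offset--) {
                if (try_parse_end_of_cd(data + offset))
                        break;
        }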
      
      Fixes: 1eebcb60 ("libbpf: Implement basic zip archive parsing support")
      Signed-off-by: Daniel Müller <deso@posteo.net>
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20230307215504.837321-1-deso@posteo.net
    • Merge branch 'bpf: bpf memory usage' · a73dc912
      Alexei Starovoitov authored
      Yafang Shao says:
      
      ====================
      
      Currently we can't get bpf memory usage reliably either from memcg or
      from bpftool.
      
      In memcg, there's no 'bpf' item in memory.stat, only 'kernel',
      'sock', 'vmalloc' and 'percpu', which may be related to bpf memory. With
      these items we still can't get the bpf memory usage, because bpf memory
      usage may be far less than the kmem in a memcg; for example, dentries
      may consume lots of kmem.
      
      bpftool currently shows the bpf memory footprint, which is different
      from bpf memory usage. The difference can be quite large in some cases,
      for example:
      
      - non-preallocated bpf map
        A non-preallocated bpf map's memory usage changes dynamically. The
        allocated element count can be anywhere from 0 to max_entries, but
        the memory footprint in bpftool only shows a fixed number.

      - bpf metadata consumes more memory than bpf elements
        In some corner cases, the bpf metadata can consume a lot more memory
        than the bpf elements themselves. For example, this can happen when
        the element size is quite small.

      - some maps don't have a key, value or max_entries
        For example, the key_size and value_size of a ringbuf are 0, so its
        memlock is always 0.
      
      We need a way to show the bpf memory usage, especially as more and more
      bpf programs run in production environments, making the bpf memory
      usage non-trivial.
      
      This patchset introduces a new map op, ->map_mem_usage, to calculate the
      memory usage (see the sketch below). Note that we don't intend to make
      the memory usage 100% accurate; the goal is to make sure there is only a
      small difference between what bpftool reports and the real memory usage.
      That small difference can be ignored compared to the total usage, which
      is enough to monitor the bpf memory usage. For example, the user can
      rely on this value to monitor the trend of bpf memory usage, compare the
      difference in bpf memory usage between different bpf program versions,
      figure out which maps consume large amounts of memory, etc.
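
      A sketch of the callback's shape, with a simplified array-map-style
      implementation for illustration (the real per-map implementations live
      in the individual patches):

        /* new op in struct bpf_map_ops */
        u64 (*map_mem_usage)(const struct bpf_map *map);

        /* simplified example: metadata plus preallocated element storage */
        static u64 array_map_mem_usage(const struct bpf_map *map)
        {
                struct bpf_array *array =
                        container_of(map, struct bpf_array, map);

                return sizeof(*array) +
                       (u64)map->max_entries * round_up(map->value_size, 8);
        }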
      
      This patchset implements the bpf memory usage for all maps, yet there's
      still work to do. We don't want to introduce runtime overhead in the
      element update and delete paths, but we have to for some
      non-preallocated maps:
      - devmap, xskmap
        When we update or delete an element, memory is allocated or freed.
        In order to track this dynamic memory, we have to track the count in
        the element update and delete path.
      
      - cpumap
        The size of each cpumap element is not fixed. If we want to track the
        usage, we have to count the size of all elements in the element
        update and delete path. So I just put it aside currently.
      
      - local_storage, bpf_local_storage
        When we attach or detach a cgroup, memory is allocated or freed. If
        we want to track this dynamic memory, we also need to do something in
        the update and delete path. So I just put it aside currently.
      
      - offload map
        Element update and delete for the offload map go via the netdev
        dev_ops, which may dynamically allocate or free memory, but this
        dynamic memory isn't currently counted in the offload map's memory
        usage.
      
      The results for each map can be found in the individual patches.
      
      We may also need to track per-container bpf memory usage; that will be
      addressed by a different patchset.
      
      Changes:
      v3->v4: code improvement on ringbuf (Andrii)
              use READ_ONCE() to read lpm_trie (Tao)
              explain why we can't get bpf memory usage from memcg.
      v2->v3: check callback at map creation time and avoid warning (Alexei)
              fix build error under CONFIG_BPF=n (lkp@intel.com)
      v1->v2: calculate the memory usage within bpf (Alexei)
      - [v1] bpf, mm: bpf memory usage
        https://lwn.net/Articles/921991/
      - [RFC PATCH v2] mm, bpf: Add BPF into /proc/meminfo
        https://lwn.net/Articles/919848/
      - [RFC PATCH v1] mm, bpf: Add BPF into /proc/meminfo
        https://lwn.net/Articles/917647/
      - [RFC PATCH] bpf, mm: Add a new item bpf into memory.stat
        https://lore.kernel.org/bpf/20220921170002.29557-1-laoar.shao@gmail.com/
      ====================
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: enforce all maps having memory usage callback · 6b4a6ea2
      Yafang Shao authored
      We have implemented the memory usage callback for all maps, and we
      enforce that any newly added map has the callback as well. We check for
      the callback at map creation time; if it is missing, map creation
      returns EINVAL, as sketched below.
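
      A sketch of the described creation-time check:

        /* in map_create(): reject any map type lacking the new callback */
        if (!ops->map_mem_usage)
                return -EINVAL;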
      Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
      Link: https://lore.kernel.org/r/20230305124615.12358-19-laoar.shao@gmail.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: offload map memory usage · 9629363c
      Yafang Shao authored
      A new helper is introduced to calculate offload map memory usage.
      However, the memory dynamically allocated in netdev dev_ops, like
      nsim_map_update_elem, is not counted yet. Let's put that aside for now.
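
      The helper is therefore essentially static for now (a sketch of the
      described behavior):

        static u64 bpf_map_offload_map_mem_usage(const struct bpf_map *map)
        {
                /* dev_ops-side dynamic allocations are not tracked yet */
                return sizeof(struct bpf_offloaded_map);
        }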
      Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
      Link: https://lore.kernel.org/r/20230305124615.12358-18-laoar.shao@gmail.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf, net: xskmap memory usage · b4fd0d67
      Yafang Shao authored
      A new helper is introduced to calculate xskmap memory usage.
      
      The xskmap memory usage can change dynamically when we add or remove an
      xsk_map_node. Hence we need to track the count of xsk_map_nodes to get
      its memory usage.
      
      The result is as follows:
      - before
      10: xskmap  name count_map  flags 0x0
              key 4B  value 4B  max_entries 65536  memlock 524288B
      
      - after
      10: xskmap  name count_map  flags 0x0 <<< no elements case
              key 4B  value 4B  max_entries 65536  memlock 524608B
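
      A sketch of the helper producing the delta above (assuming an atomic
      node count maintained on the update/delete path, as described; field
      names are illustrative):

        static u64 xsk_map_mem_usage(const struct bpf_map *map)
        {
                struct xsk_map *m = container_of(map, struct xsk_map, map);

                /* fixed part: map struct plus entry array; dynamic part:
                 * one xsk_map_node per tracked element */
                return struct_size(m, xsk_map, map->max_entries) +
                       (u64)atomic_read(&m->count) * sizeof(struct xsk_map_node);
        }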
      Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
      Link: https://lore.kernel.org/r/20230305124615.12358-17-laoar.shao@gmail.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf, net: sock_map memory usage · 73d2c619
      Yafang Shao authored
      sockmap and sockhash don't share allocation logic, so let's introduce
      separate helpers to calculate their memory usage.
      
      The result is as follows:
      
      - before
      28: sockmap  name count_map  flags 0x0
              key 4B  value 4B  max_entries 65536  memlock 524288B
      29: sockhash  name count_map  flags 0x0
              key 4B  value 4B  max_entries 65536  memlock 524288B
      
      - after
      28: sockmap  name count_map  flags 0x0
              key 4B  value 4B  max_entries 65536  memlock 524608B
      29: sockhash  name count_map  flags 0x0  <<<< no updated elements
              key 4B  value 4B  max_entries 65536  memlock 1048896B
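
      A sketch of the two helpers (illustrative; field names follow the
      shapes of bpf_stab/bpf_shtab in sock_map.c):

        static u64 sock_map_mem_usage(const struct bpf_map *map)
        {
                /* sockmap: metadata plus a fixed array of sock pointers */
                return sizeof(struct bpf_stab) +
                       (u64)map->max_entries * sizeof(struct sock *);
        }

        static u64 sock_hash_mem_usage(const struct bpf_map *map)
        {
                struct bpf_shtab *htab =
                        container_of(map, struct bpf_shtab, map);

                /* sockhash: buckets are fixed, elements are dynamic */
                return sizeof(*htab) +
                       (u64)htab->buckets_num * sizeof(struct bpf_shtab_bucket) +
                       (u64)atomic_read(&htab->count) * htab->elem_size;
        }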
      Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
      Link: https://lore.kernel.org/r/20230305124615.12358-16-laoar.shao@gmail.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>