Commit 76283171 authored by Daniel Borkmann

Merge branch 'bpf-timers'

Alexei Starovoitov says:

====================
The first request to support timers in bpf was made in 2013, before the sys_bpf
syscall was added. That use case was periodic sampling. It was addressed by
attaching bpf programs to perf_events. Then during XDP development timers
were requested to do garbage collection and health checks. They were worked
around by implementing timers in user space and triggering progs with the
BPF_PROG_RUN command. The user space timers and perf_event+bpf timers are not
armed by the bpf program. They run asynchronously vs program execution. The
XDP program cannot send a packet and arm the timer at the same time. The
tracing prog cannot record an event and arm the timer right away. This large
class of use cases remained unaddressed. Jiffy based and hrtimer based
timers are an essential part of kernel development, and with this patch set
hrtimer based timers become available to bpf programs.

TLDR: bpf timers are a wrapper around hrtimers with extra safety added
to make sure bpf progs cannot crash the kernel.
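
For illustration, here is a minimal sketch of a program-armed timer, modeled
on the selftests added at the end of this series (the map, struct and function
names below are illustrative only, not part of the patches):

  // SPDX-License-Identifier: GPL-2.0
  #include <linux/bpf.h>
  #include <time.h>
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  char _license[] SEC("license") = "GPL";

  struct map_elem {
          int counter;
          struct bpf_timer t;     /* the timer lives inside the map value */
  };

  struct {
          __uint(type, BPF_MAP_TYPE_ARRAY);
          __uint(max_entries, 2);
          __type(key, int);
          __type(value, struct map_elem);
  } timer_map SEC(".maps");

  /* invoked in soft irq context once the timer expires */
  static int timer_cb(void *map, int *key, struct map_elem *val)
  {
          val->counter++;
          return 0;
  }

  SEC("fentry/bpf_fentry_test1")
  int BPF_PROG(arm_timer)
  {
          struct map_elem *val;
          int key = 0;

          val = bpf_map_lookup_elem(&timer_map, &key);
          if (val) {
                  /* the program itself arms the timer, unlike the earlier
                   * user space / perf_event based workarounds
                   */
                  bpf_timer_init(&val->t, &timer_map, CLOCK_MONOTONIC);
                  bpf_timer_set_callback(&val->t, timer_cb);
                  bpf_timer_start(&val->t, 1000 /* ns */, 0);
          }
          return 0;
  }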

v6->v7:
- address Andrii's comments and add his Acks.

v5->v6:
- address code review feedback from Martin and add his Acks.
- add usercnt > 0 check to bpf_timer_init and remove timers_cancel_and_free
second loop in map_free callbacks.
- add cond_resched_rcu.

v4->v5:
- Martin noticed the following issues:
. prog could be reallocated by bpf_patch_insn_data().
Fixed by passing 'aux' into bpf_timer_set_callback, since 'aux' is stable
during insn patching.
. Added missing rcu_read_lock.
. Removed redundant record_map.
- Discovered few bugs with stress testing:
. One cpu does htab_free_prealloced_timers->bpf_timer_cancel_and_free->hrtimer_cancel
while another is trying to do something with the timer like bpf_timer_start/set_callback.
Those ops try to acquire bpf_spin_lock that is already taken by bpf_timer_cancel_and_free,
so both cpus spin forever. The same problem existed in bpf_timer_cancel().
One bpf prog on one cpu might call bpf_timer_cancel and wait, while another cpu is in
the timer callback that tries to do bpf_timer_*() helper on the same timer.
The fix is to do drop_prog_refcnt() and unlock, and only then call hrtimer_cancel().
Because of this had to add callback_fn != NULL check to bpf_timer_cb().
Also removed redundant bpf_prog_inc/put from bpf_timer_cb() and replaced
with rcu_dereference_check similar to recent rcu_read_lock-removal from drivers.
bpf_timer_cb is in softirq.
. Managed to hit refcnt==0 while doing bpf_prog_put from bpf_timer_cancel_and_free().
That exposed the issue that bpf_prog_put wasn't ready to be called from irq context.
Fixed similar to bpf_map_put which is irq ready.
- Refactored BPF_CALL_1(bpf_spin_lock) into __bpf_spin_lock_irqsave() to
make the main logic more clear, since Martin and Yonghong brought up this concern.

v3->v4:
1.
Split callback_fn from bpf_timer_start into bpf_timer_set_callback as
suggested by Martin. That makes bpf timer api match one to one to
kernel hrtimer api and provides greater flexibility.
2.
Martin also discovered the following issue with uref approach:
bpftool prog load xdp_timer.o /sys/fs/bpf/xdp_timer type xdp
bpftool net attach xdpgeneric pinned /sys/fs/bpf/xdp_timer dev lo
rm /sys/fs/bpf/xdp_timer
nc -6 ::1 8888
bpftool net detach xdpgeneric dev lo
The timer callback stays active in the kernel even though the prog was detached
and map usercnt == 0.
It happened because 'bpftool prog load' pinned the prog only.
The map usercnt went to zero. Subsequent attach and runs didn't
affect map usercnt. The timer was able to start and bpf_prog_inc itself.
When the prog was detached the prog stayed active.
To address this issue the first patch adds
if (!atomic64_read(&(t->map->usercnt))) return -EPERM;
which means that timers are allowed only in maps that are held
by user space with an open file descriptor or pinned in bpffs.
3.
Discovered that timers in inner maps were broken.
The inner map pointers are dynamic. Therefore changed bpf_timer_init()
to accept explicit map pointer supplied by the program instead
of hidden map pointer supplied by the verifier.
To make sure that pointer to a timer actually belongs to that map
added the verifier check in patch 3.
4.
Addressed Yonghong's feedback. Improved comments and added
dynamic in_nmi() check.
Added Acks.

v2->v3:
The v2 approach attempted to bump bpf_prog refcnt when bpf_timer_start is
called to make sure callback code doesn't disappear when timer is active and
drop refcnt when timer cb is done. That led to a ton of race conditions between
callback running and concurrent bpf_timer_init/start/cancel on another cpu,
and concurrent bpf_map_update/delete_elem, and map destroy.

Then the v2.5 approach skipped prog refcnt altogether. Instead it remembered all
timers that the bpf prog armed in a linked list and canceled them when the prog
refcnt went to zero. The race conditions disappeared, but timers in map-in-map
could not be supported cleanly, since timers in inner maps have the inner map's
lifetime and don't match the prog's lifetime.

This v3 approach makes timers owned by maps. It allows timers in inner
maps to be supported from the start. This approach relies on the "user refcnt"
scheme used in prog_array that stores bpf programs for bpf_tail_call. The
bpf_timer_start() increments the prog refcnt, but unlike the 1st approach the
timer callback does not decrement it. The ops->map_release_uref is
responsible for cancelling the timers and dropping the prog refcnt when the
user space reference to a map is dropped. That addressed all the races and
simplified locking.
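
From the user space side, keeping that "user refcnt" above zero means either
holding the map FD open or pinning the map in bpffs. A rough libbpf sketch
(error handling omitted; the map name and pin path are just examples):

  #include <bpf/libbpf.h>

  static void keep_timers_alive(struct bpf_object *obj)
  {
          struct bpf_map *map = bpf_object__find_map_by_name(obj, "timer_map");

          if (!map)
                  return;
          /* pinning in bpffs holds a user reference, so armed timers in
           * this map keep running even after all FDs are closed
           */
          bpf_map__pin(map, "/sys/fs/bpf/timer_map");

          /* conversely, unpinning and closing the last FD drops the user
           * refcnt to zero; ops->map_release_uref then cancels and frees
           * every timer in the map
           */
  }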

Andrii presented a use case where specifying callback_fn in bpf_timer_init()
is inconvenient vs specifying it in bpf_timer_start(). The bpf_timer_init()
is typically called outside of the timer callback, while bpf_timer_start() will
most likely be called from the callback.
timer_cb() { ... bpf_timer_start(timer_cb); ...} looks like recursion and an
infinite loop to the verifier. The verifier had to be made smarter to recognize
such async callbacks. Patches 7,8,9 addressed that; a sketch of such a
self-rearming callback is shown below.
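
For example, with the verifier changes in patches 7-9 a callback can re-arm its
own timer, which would previously have looked like unbounded recursion. A rough
sketch, reusing struct map_elem from the sketch near the top of this log (the
1000ns period is arbitrary):

  static int timer_cb_rearm(void *map, int *key, struct map_elem *val)
  {
          /* The verifier models this as scheduling an async callback,
           * not as timer_cb_rearm() calling into itself, so the program
           * is accepted and the callback keeps re-arming the timer.
           */
          bpf_timer_start(&val->t, 1000 /* ns */, 0);
          return 0;
  }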

Patches 1 and 2 are refactoring.
Patch 3 implements bpf timer helpers and locking.
Patch 4 implements map side of bpf timer support.
Patch 5 prevents pointer mismatch in bpf_timer_init().
Patch 6 adds support for BTF in inner maps.
Patch 7 teaches check_cfg() pass to understand async callbacks.
Patch 8 teaches do_check() pass to understand async callbacks.
Patch 9 teaches check_max_stack_depth() pass to understand async callbacks.
Patches 10 and 11 are the tests.

v1->v2:
- Addressed great feedback from Andrii and Toke.
- Fixed race between parallel bpf_timer_*() ops.
- Fixed deadlock between timer callback and LRU eviction or bpf_map_delete/update.
- Disallowed mmap and global timers.
- Allowed spin_lock and bpf_timer in an element.
- Fixed memory leaks due to map destruction and LRU eviction.
- A ton more tests.
====================
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
parents de587d56 61f71e74
@@ -168,6 +168,7 @@ struct bpf_map {
 	u32 max_entries;
 	u32 map_flags;
 	int spin_lock_off; /* >=0 valid offset, <0 error */
+	int timer_off; /* >=0 valid offset, <0 error */
 	u32 id;
 	int numa_node;
 	u32 btf_key_type_id;
@@ -197,30 +198,53 @@ static inline bool map_value_has_spin_lock(const struct bpf_map *map)
 	return map->spin_lock_off >= 0;
 }
 
-static inline void check_and_init_map_lock(struct bpf_map *map, void *dst)
+static inline bool map_value_has_timer(const struct bpf_map *map)
 {
-	if (likely(!map_value_has_spin_lock(map)))
-		return;
-	*(struct bpf_spin_lock *)(dst + map->spin_lock_off) =
-		(struct bpf_spin_lock){};
+	return map->timer_off >= 0;
+}
+
+static inline void check_and_init_map_value(struct bpf_map *map, void *dst)
+{
+	if (unlikely(map_value_has_spin_lock(map)))
+		*(struct bpf_spin_lock *)(dst + map->spin_lock_off) =
+			(struct bpf_spin_lock){};
+	if (unlikely(map_value_has_timer(map)))
+		*(struct bpf_timer *)(dst + map->timer_off) =
+			(struct bpf_timer){};
 }
 
-/* copy everything but bpf_spin_lock */
+/* copy everything but bpf_spin_lock and bpf_timer. There could be one of each. */
 static inline void copy_map_value(struct bpf_map *map, void *dst, void *src)
 {
+	u32 s_off = 0, s_sz = 0, t_off = 0, t_sz = 0;
+
 	if (unlikely(map_value_has_spin_lock(map))) {
-		u32 off = map->spin_lock_off;
+		s_off = map->spin_lock_off;
+		s_sz = sizeof(struct bpf_spin_lock);
+	} else if (unlikely(map_value_has_timer(map))) {
+		t_off = map->timer_off;
+		t_sz = sizeof(struct bpf_timer);
+	}
 
-		memcpy(dst, src, off);
-		memcpy(dst + off + sizeof(struct bpf_spin_lock),
-		       src + off + sizeof(struct bpf_spin_lock),
-		       map->value_size - off - sizeof(struct bpf_spin_lock));
+	if (unlikely(s_sz || t_sz)) {
+		if (s_off < t_off || !s_sz) {
+			swap(s_off, t_off);
+			swap(s_sz, t_sz);
+		}
+		memcpy(dst, src, t_off);
+		memcpy(dst + t_off + t_sz,
+		       src + t_off + t_sz,
+		       s_off - t_off - t_sz);
+		memcpy(dst + s_off + s_sz,
+		       src + s_off + s_sz,
+		       map->value_size - s_off - s_sz);
 	} else {
 		memcpy(dst, src, map->value_size);
 	}
 }
 
 void copy_map_value_locked(struct bpf_map *map, void *dst, void *src,
 			   bool lock_src);
+void bpf_timer_cancel_and_free(void *timer);
 int bpf_obj_name_cpy(char *dst, const char *src, unsigned int size);
 
 struct bpf_offload_dev;
@@ -314,6 +338,7 @@ enum bpf_arg_type {
 	ARG_PTR_TO_FUNC,	/* pointer to a bpf program function */
 	ARG_PTR_TO_STACK_OR_NULL,	/* pointer to stack or NULL */
 	ARG_PTR_TO_CONST_STR,	/* pointer to a null terminated read-only string */
+	ARG_PTR_TO_TIMER,	/* pointer to bpf_timer */
 	__BPF_ARG_TYPE_MAX,
 };
......
@@ -53,7 +53,14 @@ struct bpf_reg_state {
 	/* valid when type == CONST_PTR_TO_MAP | PTR_TO_MAP_VALUE |
 	 * PTR_TO_MAP_VALUE_OR_NULL
 	 */
-	struct bpf_map *map_ptr;
+	struct {
+		struct bpf_map *map_ptr;
+		/* To distinguish map lookups from outer map
+		 * the map_uid is non-zero for registers
+		 * pointing to inner maps.
+		 */
+		u32 map_uid;
+	};
 
 	/* for PTR_TO_BTF_ID */
 	struct {
@@ -201,12 +208,19 @@ struct bpf_func_state {
 	 * zero == main subprog
 	 */
 	u32 subprogno;
+	/* Every bpf_timer_start will increment async_entry_cnt.
+	 * It's used to distinguish:
+	 * void foo(void) { for(;;); }
+	 * void foo(void) { bpf_timer_set_callback(,foo); }
+	 */
+	u32 async_entry_cnt;
+	bool in_callback_fn;
+	bool in_async_callback_fn;
 
 	/* The following fields should be last. See copy_func_state() */
 	int acquired_refs;
 	struct bpf_reference_state *refs;
 	int allocated_stack;
-	bool in_callback_fn;
 	struct bpf_stack_state *stack;
 };
@@ -392,6 +406,7 @@ struct bpf_subprog_info {
 	bool has_tail_call;
 	bool tail_call_reachable;
 	bool has_ld_abs;
+	bool is_async_cb;
 };
 
 /* single container for all structs
......
...@@ -99,6 +99,7 @@ bool btf_member_is_reg_int(const struct btf *btf, const struct btf_type *s, ...@@ -99,6 +99,7 @@ bool btf_member_is_reg_int(const struct btf *btf, const struct btf_type *s,
const struct btf_member *m, const struct btf_member *m,
u32 expected_offset, u32 expected_size); u32 expected_offset, u32 expected_size);
int btf_find_spin_lock(const struct btf *btf, const struct btf_type *t); int btf_find_spin_lock(const struct btf *btf, const struct btf_type *t);
int btf_find_timer(const struct btf *btf, const struct btf_type *t);
bool btf_type_is_void(const struct btf_type *t); bool btf_type_is_void(const struct btf_type *t);
s32 btf_find_by_name_kind(const struct btf *btf, const char *name, u8 kind); s32 btf_find_by_name_kind(const struct btf *btf, const char *name, u8 kind);
const struct btf_type *btf_type_skip_modifiers(const struct btf *btf, const struct btf_type *btf_type_skip_modifiers(const struct btf *btf,
......
...@@ -4777,6 +4777,70 @@ union bpf_attr { ...@@ -4777,6 +4777,70 @@ union bpf_attr {
* Execute close syscall for given FD. * Execute close syscall for given FD.
* Return * Return
* A syscall result. * A syscall result.
*
* long bpf_timer_init(struct bpf_timer *timer, struct bpf_map *map, u64 flags)
* Description
* Initialize the timer.
* First 4 bits of *flags* specify clockid.
* Only CLOCK_MONOTONIC, CLOCK_REALTIME, CLOCK_BOOTTIME are allowed.
* All other bits of *flags* are reserved.
* The verifier will reject the program if *timer* is not from
* the same *map*.
* Return
* 0 on success.
* **-EBUSY** if *timer* is already initialized.
* **-EINVAL** if invalid *flags* are passed.
* **-EPERM** if *timer* is in a map that doesn't have any user references.
* The user space should either hold a file descriptor to a map with timers
* or pin such map in bpffs. When map is unpinned or file descriptor is
* closed all timers in the map will be cancelled and freed.
*
* long bpf_timer_set_callback(struct bpf_timer *timer, void *callback_fn)
* Description
* Configure the timer to call *callback_fn* static function.
* Return
* 0 on success.
* **-EINVAL** if *timer* was not initialized with bpf_timer_init() earlier.
* **-EPERM** if *timer* is in a map that doesn't have any user references.
* The user space should either hold a file descriptor to a map with timers
* or pin such map in bpffs. When map is unpinned or file descriptor is
* closed all timers in the map will be cancelled and freed.
*
* long bpf_timer_start(struct bpf_timer *timer, u64 nsecs, u64 flags)
* Description
* Set timer expiration N nanoseconds from the current time. The
* configured callback will be invoked in soft irq context on some cpu
* and will not repeat unless another bpf_timer_start() is made.
* In such case the next invocation can migrate to a different cpu.
* Since struct bpf_timer is a field inside map element the map
* owns the timer. The bpf_timer_set_callback() will increment refcnt
* of BPF program to make sure that callback_fn code stays valid.
* When user space reference to a map reaches zero all timers
* in a map are cancelled and corresponding program's refcnts are
* decremented. This is done to make sure that Ctrl-C of a user
* process doesn't leave any timers running. If map is pinned in
* bpffs the callback_fn can re-arm itself indefinitely.
* bpf_map_update/delete_elem() helpers and user space sys_bpf commands
* cancel and free the timer in the given map element.
* The map can contain timers that invoke callback_fn-s from different
* programs. The same callback_fn can serve different timers from
* different maps if key/value layout matches across maps.
* Every bpf_timer_set_callback() can have different callback_fn.
*
* Return
* 0 on success.
* **-EINVAL** if *timer* was not initialized with bpf_timer_init() earlier
* or invalid *flags* are passed.
*
* long bpf_timer_cancel(struct bpf_timer *timer)
* Description
* Cancel the timer and wait for callback_fn to finish if it was running.
* Return
* 0 if the timer was not active.
* 1 if the timer was active.
* **-EINVAL** if *timer* was not initialized with bpf_timer_init() earlier.
* **-EDEADLK** if callback_fn tried to call bpf_timer_cancel() on its
* own timer which would have led to a deadlock otherwise.
*/ */
#define __BPF_FUNC_MAPPER(FN) \ #define __BPF_FUNC_MAPPER(FN) \
FN(unspec), \ FN(unspec), \
...@@ -4948,6 +5012,10 @@ union bpf_attr { ...@@ -4948,6 +5012,10 @@ union bpf_attr {
FN(sys_bpf), \ FN(sys_bpf), \
FN(btf_find_by_name_kind), \ FN(btf_find_by_name_kind), \
FN(sys_close), \ FN(sys_close), \
FN(timer_init), \
FN(timer_set_callback), \
FN(timer_start), \
FN(timer_cancel), \
/* */ /* */
/* integer value in 'imm' field of BPF_CALL instruction selects which helper /* integer value in 'imm' field of BPF_CALL instruction selects which helper
...@@ -6074,6 +6142,11 @@ struct bpf_spin_lock { ...@@ -6074,6 +6142,11 @@ struct bpf_spin_lock {
__u32 val; __u32 val;
}; };
struct bpf_timer {
__u64 :64;
__u64 :64;
} __attribute__((aligned(8)));
struct bpf_sysctl { struct bpf_sysctl {
__u32 write; /* Sysctl is being read (= 0) or written (= 1). __u32 write; /* Sysctl is being read (= 0) or written (= 1).
* Allows 1,2,4-byte read, but no write. * Allows 1,2,4-byte read, but no write.
......
...@@ -287,6 +287,12 @@ static int array_map_get_next_key(struct bpf_map *map, void *key, void *next_key ...@@ -287,6 +287,12 @@ static int array_map_get_next_key(struct bpf_map *map, void *key, void *next_key
return 0; return 0;
} }
static void check_and_free_timer_in_array(struct bpf_array *arr, void *val)
{
if (unlikely(map_value_has_timer(&arr->map)))
bpf_timer_cancel_and_free(val + arr->map.timer_off);
}
/* Called from syscall or from eBPF program */ /* Called from syscall or from eBPF program */
static int array_map_update_elem(struct bpf_map *map, void *key, void *value, static int array_map_update_elem(struct bpf_map *map, void *key, void *value,
u64 map_flags) u64 map_flags)
...@@ -321,6 +327,7 @@ static int array_map_update_elem(struct bpf_map *map, void *key, void *value, ...@@ -321,6 +327,7 @@ static int array_map_update_elem(struct bpf_map *map, void *key, void *value,
copy_map_value_locked(map, val, value, false); copy_map_value_locked(map, val, value, false);
else else
copy_map_value(map, val, value); copy_map_value(map, val, value);
check_and_free_timer_in_array(array, val);
} }
return 0; return 0;
} }
...@@ -374,6 +381,19 @@ static void *array_map_vmalloc_addr(struct bpf_array *array) ...@@ -374,6 +381,19 @@ static void *array_map_vmalloc_addr(struct bpf_array *array)
return (void *)round_down((unsigned long)array, PAGE_SIZE); return (void *)round_down((unsigned long)array, PAGE_SIZE);
} }
static void array_map_free_timers(struct bpf_map *map)
{
struct bpf_array *array = container_of(map, struct bpf_array, map);
int i;
if (likely(!map_value_has_timer(map)))
return;
for (i = 0; i < array->map.max_entries; i++)
bpf_timer_cancel_and_free(array->value + array->elem_size * i +
map->timer_off);
}
/* Called when map->refcnt goes to zero, either from workqueue or from syscall */ /* Called when map->refcnt goes to zero, either from workqueue or from syscall */
static void array_map_free(struct bpf_map *map) static void array_map_free(struct bpf_map *map)
{ {
...@@ -668,6 +688,7 @@ const struct bpf_map_ops array_map_ops = { ...@@ -668,6 +688,7 @@ const struct bpf_map_ops array_map_ops = {
.map_alloc = array_map_alloc, .map_alloc = array_map_alloc,
.map_free = array_map_free, .map_free = array_map_free,
.map_get_next_key = array_map_get_next_key, .map_get_next_key = array_map_get_next_key,
.map_release_uref = array_map_free_timers,
.map_lookup_elem = array_map_lookup_elem, .map_lookup_elem = array_map_lookup_elem,
.map_update_elem = array_map_update_elem, .map_update_elem = array_map_update_elem,
.map_delete_elem = array_map_delete_elem, .map_delete_elem = array_map_delete_elem,
......
@@ -3046,43 +3046,92 @@ static void btf_struct_log(struct btf_verifier_env *env,
 	btf_verifier_log(env, "size=%u vlen=%u", t->size, btf_type_vlen(t));
 }
 
-/* find 'struct bpf_spin_lock' in map value.
- * return >= 0 offset if found
- * and < 0 in case of error
- */
-int btf_find_spin_lock(const struct btf *btf, const struct btf_type *t)
+static int btf_find_struct_field(const struct btf *btf, const struct btf_type *t,
+				 const char *name, int sz, int align)
 {
 	const struct btf_member *member;
 	u32 i, off = -ENOENT;
 
-	if (!__btf_type_is_struct(t))
-		return -EINVAL;
-
 	for_each_member(i, t, member) {
 		const struct btf_type *member_type = btf_type_by_id(btf,
 								    member->type);
 		if (!__btf_type_is_struct(member_type))
 			continue;
-		if (member_type->size != sizeof(struct bpf_spin_lock))
+		if (member_type->size != sz)
 			continue;
-		if (strcmp(__btf_name_by_offset(btf, member_type->name_off),
-			   "bpf_spin_lock"))
+		if (strcmp(__btf_name_by_offset(btf, member_type->name_off), name))
 			continue;
 		if (off != -ENOENT)
-			/* only one 'struct bpf_spin_lock' is allowed */
+			/* only one such field is allowed */
 			return -E2BIG;
 		off = btf_member_bit_offset(t, member);
 		if (off % 8)
 			/* valid C code cannot generate such BTF */
 			return -EINVAL;
 		off /= 8;
-		if (off % __alignof__(struct bpf_spin_lock))
-			/* valid struct bpf_spin_lock will be 4 byte aligned */
+		if (off % align)
+			return -EINVAL;
+	}
+	return off;
+}
+
+static int btf_find_datasec_var(const struct btf *btf, const struct btf_type *t,
+				const char *name, int sz, int align)
+{
+	const struct btf_var_secinfo *vsi;
+	u32 i, off = -ENOENT;
+
+	for_each_vsi(i, t, vsi) {
+		const struct btf_type *var = btf_type_by_id(btf, vsi->type);
+		const struct btf_type *var_type = btf_type_by_id(btf, var->type);
+
+		if (!__btf_type_is_struct(var_type))
+			continue;
+		if (var_type->size != sz)
+			continue;
+		if (vsi->size != sz)
+			continue;
+		if (strcmp(__btf_name_by_offset(btf, var_type->name_off), name))
+			continue;
+		if (off != -ENOENT)
+			/* only one such field is allowed */
+			return -E2BIG;
+		off = vsi->offset;
+		if (off % align)
 			return -EINVAL;
 	}
 	return off;
 }
 
+static int btf_find_field(const struct btf *btf, const struct btf_type *t,
+			  const char *name, int sz, int align)
+{
+	if (__btf_type_is_struct(t))
+		return btf_find_struct_field(btf, t, name, sz, align);
+	else if (btf_type_is_datasec(t))
+		return btf_find_datasec_var(btf, t, name, sz, align);
+	return -EINVAL;
+}
+
+/* find 'struct bpf_spin_lock' in map value.
+ * return >= 0 offset if found
+ * and < 0 in case of error
+ */
+int btf_find_spin_lock(const struct btf *btf, const struct btf_type *t)
+{
+	return btf_find_field(btf, t, "bpf_spin_lock",
+			      sizeof(struct bpf_spin_lock),
+			      __alignof__(struct bpf_spin_lock));
+}
+
+int btf_find_timer(const struct btf *btf, const struct btf_type *t)
+{
+	return btf_find_field(btf, t, "bpf_timer",
+			      sizeof(struct bpf_timer),
+			      __alignof__(struct bpf_timer));
+}
+
 static void __btf_struct_show(const struct btf *btf, const struct btf_type *t,
 			      u32 type_id, void *data, u8 bits_offset,
 			      struct btf_show *show)
......
...@@ -228,6 +228,32 @@ static struct htab_elem *get_htab_elem(struct bpf_htab *htab, int i) ...@@ -228,6 +228,32 @@ static struct htab_elem *get_htab_elem(struct bpf_htab *htab, int i)
return (struct htab_elem *) (htab->elems + i * (u64)htab->elem_size); return (struct htab_elem *) (htab->elems + i * (u64)htab->elem_size);
} }
static bool htab_has_extra_elems(struct bpf_htab *htab)
{
return !htab_is_percpu(htab) && !htab_is_lru(htab);
}
static void htab_free_prealloced_timers(struct bpf_htab *htab)
{
u32 num_entries = htab->map.max_entries;
int i;
if (likely(!map_value_has_timer(&htab->map)))
return;
if (htab_has_extra_elems(htab))
num_entries += num_possible_cpus();
for (i = 0; i < num_entries; i++) {
struct htab_elem *elem;
elem = get_htab_elem(htab, i);
bpf_timer_cancel_and_free(elem->key +
round_up(htab->map.key_size, 8) +
htab->map.timer_off);
cond_resched();
}
}
static void htab_free_elems(struct bpf_htab *htab) static void htab_free_elems(struct bpf_htab *htab)
{ {
int i; int i;
...@@ -265,8 +291,12 @@ static struct htab_elem *prealloc_lru_pop(struct bpf_htab *htab, void *key, ...@@ -265,8 +291,12 @@ static struct htab_elem *prealloc_lru_pop(struct bpf_htab *htab, void *key,
struct htab_elem *l; struct htab_elem *l;
if (node) { if (node) {
u32 key_size = htab->map.key_size;
l = container_of(node, struct htab_elem, lru_node); l = container_of(node, struct htab_elem, lru_node);
memcpy(l->key, key, htab->map.key_size); memcpy(l->key, key, key_size);
check_and_init_map_value(&htab->map,
l->key + round_up(key_size, 8));
return l; return l;
} }
...@@ -278,7 +308,7 @@ static int prealloc_init(struct bpf_htab *htab) ...@@ -278,7 +308,7 @@ static int prealloc_init(struct bpf_htab *htab)
u32 num_entries = htab->map.max_entries; u32 num_entries = htab->map.max_entries;
int err = -ENOMEM, i; int err = -ENOMEM, i;
if (!htab_is_percpu(htab) && !htab_is_lru(htab)) if (htab_has_extra_elems(htab))
num_entries += num_possible_cpus(); num_entries += num_possible_cpus();
htab->elems = bpf_map_area_alloc((u64)htab->elem_size * num_entries, htab->elems = bpf_map_area_alloc((u64)htab->elem_size * num_entries,
...@@ -695,6 +725,14 @@ static int htab_lru_map_gen_lookup(struct bpf_map *map, ...@@ -695,6 +725,14 @@ static int htab_lru_map_gen_lookup(struct bpf_map *map,
return insn - insn_buf; return insn - insn_buf;
} }
static void check_and_free_timer(struct bpf_htab *htab, struct htab_elem *elem)
{
if (unlikely(map_value_has_timer(&htab->map)))
bpf_timer_cancel_and_free(elem->key +
round_up(htab->map.key_size, 8) +
htab->map.timer_off);
}
/* It is called from the bpf_lru_list when the LRU needs to delete /* It is called from the bpf_lru_list when the LRU needs to delete
* older elements from the htab. * older elements from the htab.
*/ */
...@@ -719,6 +757,7 @@ static bool htab_lru_map_delete_node(void *arg, struct bpf_lru_node *node) ...@@ -719,6 +757,7 @@ static bool htab_lru_map_delete_node(void *arg, struct bpf_lru_node *node)
hlist_nulls_for_each_entry_rcu(l, n, head, hash_node) hlist_nulls_for_each_entry_rcu(l, n, head, hash_node)
if (l == tgt_l) { if (l == tgt_l) {
hlist_nulls_del_rcu(&l->hash_node); hlist_nulls_del_rcu(&l->hash_node);
check_and_free_timer(htab, l);
break; break;
} }
...@@ -790,6 +829,7 @@ static void htab_elem_free(struct bpf_htab *htab, struct htab_elem *l) ...@@ -790,6 +829,7 @@ static void htab_elem_free(struct bpf_htab *htab, struct htab_elem *l)
{ {
if (htab->map.map_type == BPF_MAP_TYPE_PERCPU_HASH) if (htab->map.map_type == BPF_MAP_TYPE_PERCPU_HASH)
free_percpu(htab_elem_get_ptr(l, htab->map.key_size)); free_percpu(htab_elem_get_ptr(l, htab->map.key_size));
check_and_free_timer(htab, l);
kfree(l); kfree(l);
} }
...@@ -817,6 +857,7 @@ static void free_htab_elem(struct bpf_htab *htab, struct htab_elem *l) ...@@ -817,6 +857,7 @@ static void free_htab_elem(struct bpf_htab *htab, struct htab_elem *l)
htab_put_fd_value(htab, l); htab_put_fd_value(htab, l);
if (htab_is_prealloc(htab)) { if (htab_is_prealloc(htab)) {
check_and_free_timer(htab, l);
__pcpu_freelist_push(&htab->freelist, &l->fnode); __pcpu_freelist_push(&htab->freelist, &l->fnode);
} else { } else {
atomic_dec(&htab->count); atomic_dec(&htab->count);
...@@ -920,7 +961,7 @@ static struct htab_elem *alloc_htab_elem(struct bpf_htab *htab, void *key, ...@@ -920,7 +961,7 @@ static struct htab_elem *alloc_htab_elem(struct bpf_htab *htab, void *key,
l_new = ERR_PTR(-ENOMEM); l_new = ERR_PTR(-ENOMEM);
goto dec_count; goto dec_count;
} }
check_and_init_map_lock(&htab->map, check_and_init_map_value(&htab->map,
l_new->key + round_up(key_size, 8)); l_new->key + round_up(key_size, 8));
} }
...@@ -1062,6 +1103,8 @@ static int htab_map_update_elem(struct bpf_map *map, void *key, void *value, ...@@ -1062,6 +1103,8 @@ static int htab_map_update_elem(struct bpf_map *map, void *key, void *value,
hlist_nulls_del_rcu(&l_old->hash_node); hlist_nulls_del_rcu(&l_old->hash_node);
if (!htab_is_prealloc(htab)) if (!htab_is_prealloc(htab))
free_htab_elem(htab, l_old); free_htab_elem(htab, l_old);
else
check_and_free_timer(htab, l_old);
} }
ret = 0; ret = 0;
err: err:
...@@ -1069,6 +1112,12 @@ static int htab_map_update_elem(struct bpf_map *map, void *key, void *value, ...@@ -1069,6 +1112,12 @@ static int htab_map_update_elem(struct bpf_map *map, void *key, void *value,
return ret; return ret;
} }
static void htab_lru_push_free(struct bpf_htab *htab, struct htab_elem *elem)
{
check_and_free_timer(htab, elem);
bpf_lru_push_free(&htab->lru, &elem->lru_node);
}
static int htab_lru_map_update_elem(struct bpf_map *map, void *key, void *value, static int htab_lru_map_update_elem(struct bpf_map *map, void *key, void *value,
u64 map_flags) u64 map_flags)
{ {
...@@ -1102,7 +1151,8 @@ static int htab_lru_map_update_elem(struct bpf_map *map, void *key, void *value, ...@@ -1102,7 +1151,8 @@ static int htab_lru_map_update_elem(struct bpf_map *map, void *key, void *value,
l_new = prealloc_lru_pop(htab, key, hash); l_new = prealloc_lru_pop(htab, key, hash);
if (!l_new) if (!l_new)
return -ENOMEM; return -ENOMEM;
memcpy(l_new->key + round_up(map->key_size, 8), value, map->value_size); copy_map_value(&htab->map,
l_new->key + round_up(map->key_size, 8), value);
ret = htab_lock_bucket(htab, b, hash, &flags); ret = htab_lock_bucket(htab, b, hash, &flags);
if (ret) if (ret)
...@@ -1128,9 +1178,9 @@ static int htab_lru_map_update_elem(struct bpf_map *map, void *key, void *value, ...@@ -1128,9 +1178,9 @@ static int htab_lru_map_update_elem(struct bpf_map *map, void *key, void *value,
htab_unlock_bucket(htab, b, hash, flags); htab_unlock_bucket(htab, b, hash, flags);
if (ret) if (ret)
bpf_lru_push_free(&htab->lru, &l_new->lru_node); htab_lru_push_free(htab, l_new);
else if (l_old) else if (l_old)
bpf_lru_push_free(&htab->lru, &l_old->lru_node); htab_lru_push_free(htab, l_old);
return ret; return ret;
} }
...@@ -1339,7 +1389,7 @@ static int htab_lru_map_delete_elem(struct bpf_map *map, void *key) ...@@ -1339,7 +1389,7 @@ static int htab_lru_map_delete_elem(struct bpf_map *map, void *key)
htab_unlock_bucket(htab, b, hash, flags); htab_unlock_bucket(htab, b, hash, flags);
if (l) if (l)
bpf_lru_push_free(&htab->lru, &l->lru_node); htab_lru_push_free(htab, l);
return ret; return ret;
} }
...@@ -1359,6 +1409,35 @@ static void delete_all_elements(struct bpf_htab *htab) ...@@ -1359,6 +1409,35 @@ static void delete_all_elements(struct bpf_htab *htab)
} }
} }
static void htab_free_malloced_timers(struct bpf_htab *htab)
{
int i;
rcu_read_lock();
for (i = 0; i < htab->n_buckets; i++) {
struct hlist_nulls_head *head = select_bucket(htab, i);
struct hlist_nulls_node *n;
struct htab_elem *l;
hlist_nulls_for_each_entry(l, n, head, hash_node)
check_and_free_timer(htab, l);
cond_resched_rcu();
}
rcu_read_unlock();
}
static void htab_map_free_timers(struct bpf_map *map)
{
struct bpf_htab *htab = container_of(map, struct bpf_htab, map);
if (likely(!map_value_has_timer(&htab->map)))
return;
if (!htab_is_prealloc(htab))
htab_free_malloced_timers(htab);
else
htab_free_prealloced_timers(htab);
}
/* Called when map->refcnt goes to zero, either from workqueue or from syscall */ /* Called when map->refcnt goes to zero, either from workqueue or from syscall */
static void htab_map_free(struct bpf_map *map) static void htab_map_free(struct bpf_map *map)
{ {
...@@ -1456,7 +1535,7 @@ static int __htab_map_lookup_and_delete_elem(struct bpf_map *map, void *key, ...@@ -1456,7 +1535,7 @@ static int __htab_map_lookup_and_delete_elem(struct bpf_map *map, void *key,
else else
copy_map_value(map, value, l->key + copy_map_value(map, value, l->key +
roundup_key_size); roundup_key_size);
check_and_init_map_lock(map, value); check_and_init_map_value(map, value);
} }
hlist_nulls_del_rcu(&l->hash_node); hlist_nulls_del_rcu(&l->hash_node);
...@@ -1467,7 +1546,7 @@ static int __htab_map_lookup_and_delete_elem(struct bpf_map *map, void *key, ...@@ -1467,7 +1546,7 @@ static int __htab_map_lookup_and_delete_elem(struct bpf_map *map, void *key,
htab_unlock_bucket(htab, b, hash, bflags); htab_unlock_bucket(htab, b, hash, bflags);
if (is_lru_map && l) if (is_lru_map && l)
bpf_lru_push_free(&htab->lru, &l->lru_node); htab_lru_push_free(htab, l);
return ret; return ret;
} }
...@@ -1645,7 +1724,7 @@ __htab_map_lookup_and_delete_batch(struct bpf_map *map, ...@@ -1645,7 +1724,7 @@ __htab_map_lookup_and_delete_batch(struct bpf_map *map,
true); true);
else else
copy_map_value(map, dst_val, value); copy_map_value(map, dst_val, value);
check_and_init_map_lock(map, dst_val); check_and_init_map_value(map, dst_val);
} }
if (do_delete) { if (do_delete) {
hlist_nulls_del_rcu(&l->hash_node); hlist_nulls_del_rcu(&l->hash_node);
...@@ -1672,7 +1751,7 @@ __htab_map_lookup_and_delete_batch(struct bpf_map *map, ...@@ -1672,7 +1751,7 @@ __htab_map_lookup_and_delete_batch(struct bpf_map *map,
while (node_to_free) { while (node_to_free) {
l = node_to_free; l = node_to_free;
node_to_free = node_to_free->batch_flink; node_to_free = node_to_free->batch_flink;
bpf_lru_push_free(&htab->lru, &l->lru_node); htab_lru_push_free(htab, l);
} }
next_batch: next_batch:
...@@ -2034,6 +2113,7 @@ const struct bpf_map_ops htab_map_ops = { ...@@ -2034,6 +2113,7 @@ const struct bpf_map_ops htab_map_ops = {
.map_alloc = htab_map_alloc, .map_alloc = htab_map_alloc,
.map_free = htab_map_free, .map_free = htab_map_free,
.map_get_next_key = htab_map_get_next_key, .map_get_next_key = htab_map_get_next_key,
.map_release_uref = htab_map_free_timers,
.map_lookup_elem = htab_map_lookup_elem, .map_lookup_elem = htab_map_lookup_elem,
.map_lookup_and_delete_elem = htab_map_lookup_and_delete_elem, .map_lookup_and_delete_elem = htab_map_lookup_and_delete_elem,
.map_update_elem = htab_map_update_elem, .map_update_elem = htab_map_update_elem,
...@@ -2055,6 +2135,7 @@ const struct bpf_map_ops htab_lru_map_ops = { ...@@ -2055,6 +2135,7 @@ const struct bpf_map_ops htab_lru_map_ops = {
.map_alloc = htab_map_alloc, .map_alloc = htab_map_alloc,
.map_free = htab_map_free, .map_free = htab_map_free,
.map_get_next_key = htab_map_get_next_key, .map_get_next_key = htab_map_get_next_key,
.map_release_uref = htab_map_free_timers,
.map_lookup_elem = htab_lru_map_lookup_elem, .map_lookup_elem = htab_lru_map_lookup_elem,
.map_lookup_and_delete_elem = htab_lru_map_lookup_and_delete_elem, .map_lookup_and_delete_elem = htab_lru_map_lookup_and_delete_elem,
.map_lookup_elem_sys_only = htab_lru_map_lookup_elem_sys, .map_lookup_elem_sys_only = htab_lru_map_lookup_elem_sys,
......
......
@@ -173,7 +173,7 @@ static int cgroup_storage_update_elem(struct bpf_map *map, void *key,
 		return -ENOMEM;
 
 	memcpy(&new->data[0], value, map->value_size);
-	check_and_init_map_lock(map, new->data);
+	check_and_init_map_value(map, new->data);
 
 	new = xchg(&storage->buf, new);
 	kfree_rcu(new, rcu);
@@ -509,7 +509,7 @@ struct bpf_cgroup_storage *bpf_cgroup_storage_alloc(struct bpf_prog *prog,
 					map->numa_node);
 		if (!storage->buf)
 			goto enomem;
-		check_and_init_map_lock(map, storage->buf->data);
+		check_and_init_map_value(map, storage->buf->data);
 	} else {
 		storage->percpu_buf = bpf_map_alloc_percpu(map, size, 8, gfp);
 		if (!storage->percpu_buf)
......
...@@ -3,6 +3,7 @@ ...@@ -3,6 +3,7 @@
*/ */
#include <linux/slab.h> #include <linux/slab.h>
#include <linux/bpf.h> #include <linux/bpf.h>
#include <linux/btf.h>
#include "map_in_map.h" #include "map_in_map.h"
...@@ -50,6 +51,11 @@ struct bpf_map *bpf_map_meta_alloc(int inner_map_ufd) ...@@ -50,6 +51,11 @@ struct bpf_map *bpf_map_meta_alloc(int inner_map_ufd)
inner_map_meta->map_flags = inner_map->map_flags; inner_map_meta->map_flags = inner_map->map_flags;
inner_map_meta->max_entries = inner_map->max_entries; inner_map_meta->max_entries = inner_map->max_entries;
inner_map_meta->spin_lock_off = inner_map->spin_lock_off; inner_map_meta->spin_lock_off = inner_map->spin_lock_off;
inner_map_meta->timer_off = inner_map->timer_off;
if (inner_map->btf) {
btf_get(inner_map->btf);
inner_map_meta->btf = inner_map->btf;
}
/* Misc members not needed in bpf_map_meta_equal() check. */ /* Misc members not needed in bpf_map_meta_equal() check. */
inner_map_meta->ops = inner_map->ops; inner_map_meta->ops = inner_map->ops;
...@@ -65,6 +71,7 @@ struct bpf_map *bpf_map_meta_alloc(int inner_map_ufd) ...@@ -65,6 +71,7 @@ struct bpf_map *bpf_map_meta_alloc(int inner_map_ufd)
void bpf_map_meta_free(struct bpf_map *map_meta) void bpf_map_meta_free(struct bpf_map *map_meta)
{ {
btf_put(map_meta->btf);
kfree(map_meta); kfree(map_meta);
} }
...@@ -75,6 +82,7 @@ bool bpf_map_meta_equal(const struct bpf_map *meta0, ...@@ -75,6 +82,7 @@ bool bpf_map_meta_equal(const struct bpf_map *meta0,
return meta0->map_type == meta1->map_type && return meta0->map_type == meta1->map_type &&
meta0->key_size == meta1->key_size && meta0->key_size == meta1->key_size &&
meta0->value_size == meta1->value_size && meta0->value_size == meta1->value_size &&
meta0->timer_off == meta1->timer_off &&
meta0->map_flags == meta1->map_flags; meta0->map_flags == meta1->map_flags;
} }
......
...@@ -260,8 +260,8 @@ static int bpf_map_copy_value(struct bpf_map *map, void *key, void *value, ...@@ -260,8 +260,8 @@ static int bpf_map_copy_value(struct bpf_map *map, void *key, void *value,
copy_map_value_locked(map, value, ptr, true); copy_map_value_locked(map, value, ptr, true);
else else
copy_map_value(map, value, ptr); copy_map_value(map, value, ptr);
/* mask lock, since value wasn't zero inited */ /* mask lock and timer, since value wasn't zero inited */
check_and_init_map_lock(map, value); check_and_init_map_value(map, value);
} }
rcu_read_unlock(); rcu_read_unlock();
} }
...@@ -623,7 +623,8 @@ static int bpf_map_mmap(struct file *filp, struct vm_area_struct *vma) ...@@ -623,7 +623,8 @@ static int bpf_map_mmap(struct file *filp, struct vm_area_struct *vma)
struct bpf_map *map = filp->private_data; struct bpf_map *map = filp->private_data;
int err; int err;
if (!map->ops->map_mmap || map_value_has_spin_lock(map)) if (!map->ops->map_mmap || map_value_has_spin_lock(map) ||
map_value_has_timer(map))
return -ENOTSUPP; return -ENOTSUPP;
if (!(vma->vm_flags & VM_SHARED)) if (!(vma->vm_flags & VM_SHARED))
...@@ -793,6 +794,16 @@ static int map_check_btf(struct bpf_map *map, const struct btf *btf, ...@@ -793,6 +794,16 @@ static int map_check_btf(struct bpf_map *map, const struct btf *btf,
} }
} }
map->timer_off = btf_find_timer(btf, value_type);
if (map_value_has_timer(map)) {
if (map->map_flags & BPF_F_RDONLY_PROG)
return -EACCES;
if (map->map_type != BPF_MAP_TYPE_HASH &&
map->map_type != BPF_MAP_TYPE_LRU_HASH &&
map->map_type != BPF_MAP_TYPE_ARRAY)
return -EOPNOTSUPP;
}
if (map->ops->map_check_btf) if (map->ops->map_check_btf)
ret = map->ops->map_check_btf(map, btf, key_type, value_type); ret = map->ops->map_check_btf(map, btf, key_type, value_type);
...@@ -844,6 +855,7 @@ static int map_create(union bpf_attr *attr) ...@@ -844,6 +855,7 @@ static int map_create(union bpf_attr *attr)
mutex_init(&map->freeze_mutex); mutex_init(&map->freeze_mutex);
map->spin_lock_off = -EINVAL; map->spin_lock_off = -EINVAL;
map->timer_off = -EINVAL;
if (attr->btf_key_type_id || attr->btf_value_type_id || if (attr->btf_key_type_id || attr->btf_value_type_id ||
/* Even the map's value is a kernel's struct, /* Even the map's value is a kernel's struct,
* the bpf_prog.o must have BTF to begin with * the bpf_prog.o must have BTF to begin with
...@@ -1591,7 +1603,8 @@ static int map_freeze(const union bpf_attr *attr) ...@@ -1591,7 +1603,8 @@ static int map_freeze(const union bpf_attr *attr)
if (IS_ERR(map)) if (IS_ERR(map))
return PTR_ERR(map); return PTR_ERR(map);
if (map->map_type == BPF_MAP_TYPE_STRUCT_OPS) { if (map->map_type == BPF_MAP_TYPE_STRUCT_OPS ||
map_value_has_timer(map)) {
fdput(f); fdput(f);
return -ENOTSUPP; return -ENOTSUPP;
} }
@@ -1699,6 +1712,8 @@ static int bpf_prog_alloc_id(struct bpf_prog *prog)
 void bpf_prog_free_id(struct bpf_prog *prog, bool do_idr_lock)
 {
+	unsigned long flags;
+
 	/* cBPF to eBPF migrations are currently not in the idr store.
 	 * Offloaded programs are removed from the store when their device
 	 * disappears - even if someone grabs an fd to them they are unusable,
@@ -1708,7 +1723,7 @@ void bpf_prog_free_id(struct bpf_prog *prog, bool do_idr_lock)
 		return;
 
 	if (do_idr_lock)
-		spin_lock_bh(&prog_idr_lock);
+		spin_lock_irqsave(&prog_idr_lock, flags);
 	else
 		__acquire(&prog_idr_lock);
@@ -1716,7 +1731,7 @@ void bpf_prog_free_id(struct bpf_prog *prog, bool do_idr_lock)
 	prog->aux->id = 0;
 
 	if (do_idr_lock)
-		spin_unlock_bh(&prog_idr_lock);
+		spin_unlock_irqrestore(&prog_idr_lock, flags);
 	else
 		__release(&prog_idr_lock);
 }
@@ -1752,14 +1767,32 @@ static void __bpf_prog_put_noref(struct bpf_prog *prog, bool deferred)
 	}
 }
 
-static void __bpf_prog_put(struct bpf_prog *prog, bool do_idr_lock)
+static void bpf_prog_put_deferred(struct work_struct *work)
 {
-	if (atomic64_dec_and_test(&prog->aux->refcnt)) {
-		perf_event_bpf_event(prog, PERF_BPF_EVENT_PROG_UNLOAD, 0);
-		bpf_audit_prog(prog, BPF_AUDIT_UNLOAD);
+	struct bpf_prog_aux *aux;
+	struct bpf_prog *prog;
+
+	aux = container_of(work, struct bpf_prog_aux, work);
+	prog = aux->prog;
+	perf_event_bpf_event(prog, PERF_BPF_EVENT_PROG_UNLOAD, 0);
+	bpf_audit_prog(prog, BPF_AUDIT_UNLOAD);
+	__bpf_prog_put_noref(prog, true);
+}
+
+static void __bpf_prog_put(struct bpf_prog *prog, bool do_idr_lock)
+{
+	struct bpf_prog_aux *aux = prog->aux;
+
+	if (atomic64_dec_and_test(&aux->refcnt)) {
 		/* bpf_prog_free_id() must be called first */
 		bpf_prog_free_id(prog, do_idr_lock);
-		__bpf_prog_put_noref(prog, true);
+
+		if (in_irq() || irqs_disabled()) {
+			INIT_WORK(&aux->work, bpf_prog_put_deferred);
+			schedule_work(&aux->work);
+		} else {
+			bpf_prog_put_deferred(&aux->work);
+		}
 	}
 }
......
......
@@ -1059,7 +1059,7 @@ bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 	case BPF_FUNC_snprintf:
 		return &bpf_snprintf_proto;
 	default:
-		return NULL;
+		return bpf_base_func_proto(func_id);
 	}
 }
......
...@@ -547,6 +547,7 @@ class PrinterHelpers(Printer): ...@@ -547,6 +547,7 @@ class PrinterHelpers(Printer):
'struct inode', 'struct inode',
'struct socket', 'struct socket',
'struct file', 'struct file',
'struct bpf_timer',
] ]
known_types = { known_types = {
'...', '...',
...@@ -594,6 +595,7 @@ class PrinterHelpers(Printer): ...@@ -594,6 +595,7 @@ class PrinterHelpers(Printer):
'struct inode', 'struct inode',
'struct socket', 'struct socket',
'struct file', 'struct file',
'struct bpf_timer',
} }
mapped_types = { mapped_types = {
'u8': '__u8', 'u8': '__u8',
......
...@@ -4777,6 +4777,70 @@ union bpf_attr { ...@@ -4777,6 +4777,70 @@ union bpf_attr {
* Execute close syscall for given FD. * Execute close syscall for given FD.
* Return * Return
* A syscall result. * A syscall result.
*
* long bpf_timer_init(struct bpf_timer *timer, struct bpf_map *map, u64 flags)
* Description
* Initialize the timer.
* First 4 bits of *flags* specify clockid.
* Only CLOCK_MONOTONIC, CLOCK_REALTIME, CLOCK_BOOTTIME are allowed.
* All other bits of *flags* are reserved.
* The verifier will reject the program if *timer* is not from
* the same *map*.
* Return
* 0 on success.
* **-EBUSY** if *timer* is already initialized.
* **-EINVAL** if invalid *flags* are passed.
* **-EPERM** if *timer* is in a map that doesn't have any user references.
* The user space should either hold a file descriptor to a map with timers
* or pin such map in bpffs. When map is unpinned or file descriptor is
* closed all timers in the map will be cancelled and freed.
*
* long bpf_timer_set_callback(struct bpf_timer *timer, void *callback_fn)
* Description
* Configure the timer to call *callback_fn* static function.
* Return
* 0 on success.
* **-EINVAL** if *timer* was not initialized with bpf_timer_init() earlier.
* **-EPERM** if *timer* is in a map that doesn't have any user references.
* The user space should either hold a file descriptor to a map with timers
* or pin such map in bpffs. When map is unpinned or file descriptor is
* closed all timers in the map will be cancelled and freed.
*
* long bpf_timer_start(struct bpf_timer *timer, u64 nsecs, u64 flags)
* Description
* Set timer expiration N nanoseconds from the current time. The
* configured callback will be invoked in soft irq context on some cpu
* and will not repeat unless another bpf_timer_start() is made.
* In such case the next invocation can migrate to a different cpu.
* Since struct bpf_timer is a field inside map element the map
* owns the timer. The bpf_timer_set_callback() will increment refcnt
* of BPF program to make sure that callback_fn code stays valid.
* When user space reference to a map reaches zero all timers
* in a map are cancelled and corresponding program's refcnts are
* decremented. This is done to make sure that Ctrl-C of a user
* process doesn't leave any timers running. If map is pinned in
* bpffs the callback_fn can re-arm itself indefinitely.
* bpf_map_update/delete_elem() helpers and user space sys_bpf commands
* cancel and free the timer in the given map element.
* The map can contain timers that invoke callback_fn-s from different
* programs. The same callback_fn can serve different timers from
* different maps if key/value layout matches across maps.
* Every bpf_timer_set_callback() can have different callback_fn.
*
* Return
* 0 on success.
* **-EINVAL** if *timer* was not initialized with bpf_timer_init() earlier
* or invalid *flags* are passed.
*
* long bpf_timer_cancel(struct bpf_timer *timer)
* Description
* Cancel the timer and wait for callback_fn to finish if it was running.
* Return
* 0 if the timer was not active.
* 1 if the timer was active.
* **-EINVAL** if *timer* was not initialized with bpf_timer_init() earlier.
* **-EDEADLK** if callback_fn tried to call bpf_timer_cancel() on its
* own timer which would have led to a deadlock otherwise.
*/ */
#define __BPF_FUNC_MAPPER(FN) \ #define __BPF_FUNC_MAPPER(FN) \
FN(unspec), \ FN(unspec), \
...@@ -4948,6 +5012,10 @@ union bpf_attr { ...@@ -4948,6 +5012,10 @@ union bpf_attr {
FN(sys_bpf), \ FN(sys_bpf), \
FN(btf_find_by_name_kind), \ FN(btf_find_by_name_kind), \
FN(sys_close), \ FN(sys_close), \
FN(timer_init), \
FN(timer_set_callback), \
FN(timer_start), \
FN(timer_cancel), \
/* */ /* */
/* integer value in 'imm' field of BPF_CALL instruction selects which helper /* integer value in 'imm' field of BPF_CALL instruction selects which helper
...@@ -6074,6 +6142,11 @@ struct bpf_spin_lock { ...@@ -6074,6 +6142,11 @@ struct bpf_spin_lock {
__u32 val; __u32 val;
}; };
struct bpf_timer {
__u64 :64;
__u64 :64;
} __attribute__((aligned(8)));
struct bpf_sysctl { struct bpf_sysctl {
__u32 write; /* Sysctl is being read (= 0) or written (= 1). __u32 write; /* Sysctl is being read (= 0) or written (= 1).
* Allows 1,2,4-byte read, but no write. * Allows 1,2,4-byte read, but no write.
......
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2021 Facebook */
#include <test_progs.h>
#include "timer.skel.h"
static int timer(struct timer *timer_skel)
{
int err, prog_fd;
__u32 duration = 0, retval;
err = timer__attach(timer_skel);
if (!ASSERT_OK(err, "timer_attach"))
return err;
ASSERT_EQ(timer_skel->data->callback_check, 52, "callback_check1");
ASSERT_EQ(timer_skel->data->callback2_check, 52, "callback2_check1");
prog_fd = bpf_program__fd(timer_skel->progs.test1);
err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
NULL, NULL, &retval, &duration);
ASSERT_OK(err, "test_run");
ASSERT_EQ(retval, 0, "test_run");
timer__detach(timer_skel);
usleep(50); /* 10 usecs should be enough, but give it extra */
/* check that timer_cb1() was executed 10+10 times */
ASSERT_EQ(timer_skel->data->callback_check, 42, "callback_check2");
ASSERT_EQ(timer_skel->data->callback2_check, 42, "callback2_check2");
/* check that timer_cb2() was executed twice */
ASSERT_EQ(timer_skel->bss->bss_data, 10, "bss_data");
/* check that there were no errors in timer execution */
ASSERT_EQ(timer_skel->bss->err, 0, "err");
/* check that code paths completed */
ASSERT_EQ(timer_skel->bss->ok, 1 | 2 | 4, "ok");
return 0;
}
void test_timer(void)
{
struct timer *timer_skel = NULL;
int err;
timer_skel = timer__open_and_load();
if (!ASSERT_OK_PTR(timer_skel, "timer_skel_load"))
goto cleanup;
err = timer(timer_skel);
ASSERT_OK(err, "timer");
cleanup:
timer__destroy(timer_skel);
}
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2021 Facebook */
#include <test_progs.h>
#include "timer_mim.skel.h"
#include "timer_mim_reject.skel.h"
static int timer_mim(struct timer_mim *timer_skel)
{
__u32 duration = 0, retval;
__u64 cnt1, cnt2;
int err, prog_fd, key1 = 1;
err = timer_mim__attach(timer_skel);
if (!ASSERT_OK(err, "timer_attach"))
return err;
prog_fd = bpf_program__fd(timer_skel->progs.test1);
err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
NULL, NULL, &retval, &duration);
ASSERT_OK(err, "test_run");
ASSERT_EQ(retval, 0, "test_run");
timer_mim__detach(timer_skel);
/* check that timer_cb[12] are incrementing 'cnt' */
cnt1 = READ_ONCE(timer_skel->bss->cnt);
usleep(200); /* 100 times more than interval */
cnt2 = READ_ONCE(timer_skel->bss->cnt);
ASSERT_GT(cnt2, cnt1, "cnt");
ASSERT_EQ(timer_skel->bss->err, 0, "err");
/* check that code paths completed */
ASSERT_EQ(timer_skel->bss->ok, 1 | 2, "ok");
close(bpf_map__fd(timer_skel->maps.inner_htab));
err = bpf_map_delete_elem(bpf_map__fd(timer_skel->maps.outer_arr), &key1);
ASSERT_EQ(err, 0, "delete inner map");
/* check that timer_cb[12] are no longer running */
cnt1 = READ_ONCE(timer_skel->bss->cnt);
usleep(200);
cnt2 = READ_ONCE(timer_skel->bss->cnt);
ASSERT_EQ(cnt2, cnt1, "cnt");
return 0;
}
void test_timer_mim(void)
{
struct timer_mim_reject *timer_reject_skel = NULL;
libbpf_print_fn_t old_print_fn = NULL;
struct timer_mim *timer_skel = NULL;
int err;
old_print_fn = libbpf_set_print(NULL);
timer_reject_skel = timer_mim_reject__open_and_load();
libbpf_set_print(old_print_fn);
if (!ASSERT_ERR_PTR(timer_reject_skel, "timer_reject_skel_load"))
goto cleanup;
timer_skel = timer_mim__open_and_load();
if (!ASSERT_OK_PTR(timer_skel, "timer_skel_load"))
goto cleanup;
err = timer_mim(timer_skel);
ASSERT_OK(err, "timer_mim");
cleanup:
timer_mim__destroy(timer_skel);
timer_mim_reject__destroy(timer_reject_skel);
}
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2021 Facebook */
#include <linux/bpf.h>
#include <time.h>
#include <errno.h>
#include <bpf/bpf_helpers.h>
#include "bpf_tcp_helpers.h"
char _license[] SEC("license") = "GPL";
struct hmap_elem {
int counter;
struct bpf_timer timer;
struct bpf_spin_lock lock; /* unused */
};
struct {
__uint(type, BPF_MAP_TYPE_HASH);
__uint(max_entries, 1000);
__type(key, int);
__type(value, struct hmap_elem);
} hmap SEC(".maps");
struct {
__uint(type, BPF_MAP_TYPE_HASH);
__uint(map_flags, BPF_F_NO_PREALLOC);
__uint(max_entries, 1000);
__type(key, int);
__type(value, struct hmap_elem);
} hmap_malloc SEC(".maps");
struct elem {
struct bpf_timer t;
};
struct {
__uint(type, BPF_MAP_TYPE_ARRAY);
__uint(max_entries, 2);
__type(key, int);
__type(value, struct elem);
} array SEC(".maps");
struct {
__uint(type, BPF_MAP_TYPE_LRU_HASH);
__uint(max_entries, 4);
__type(key, int);
__type(value, struct elem);
} lru SEC(".maps");
__u64 bss_data;
__u64 err;
__u64 ok;
__u64 callback_check = 52;
__u64 callback2_check = 52;
#define ARRAY 1
#define HTAB 2
#define HTAB_MALLOC 3
#define LRU 4
/* callback for array and lru timers */
static int timer_cb1(void *map, int *key, struct bpf_timer *timer)
{
/* increment bss variable twice.
* Once via array timer callback and once via lru timer callback
*/
bss_data += 5;
/* *key == 0 - the callback was called for array timer.
* *key == 4 - the callback was called from lru timer.
*/
if (*key == ARRAY) {
struct bpf_timer *lru_timer;
int lru_key = LRU;
/* rearm array timer to be called again in ~35 seconds */
if (bpf_timer_start(timer, 1ull << 35, 0) != 0)
err |= 1;
lru_timer = bpf_map_lookup_elem(&lru, &lru_key);
if (!lru_timer)
return 0;
bpf_timer_set_callback(lru_timer, timer_cb1);
if (bpf_timer_start(lru_timer, 0, 0) != 0)
err |= 2;
} else if (*key == LRU) {
int lru_key, i;
for (i = LRU + 1;
i <= 100 /* for current LRU eviction algorithm this number
* should be larger than ~ lru->max_entries * 2
*/;
i++) {
struct elem init = {};
/* lru_key cannot be used as loop induction variable
* otherwise the loop will be unbounded.
*/
lru_key = i;
/* add more elements into lru map to push out current
* element and force deletion of this timer
*/
bpf_map_update_elem(map, &lru_key, &init, 0);
/* look it up to bump it into active list */
bpf_map_lookup_elem(map, &lru_key);
/* keep adding until *key changes underneath,
* which means that key/timer memory was reused
*/
if (*key != LRU)
break;
}
/* check that the timer was removed */
if (bpf_timer_cancel(timer) != -EINVAL)
err |= 4;
ok |= 1;
}
return 0;
}
SEC("fentry/bpf_fentry_test1")
int BPF_PROG(test1, int a)
{
struct bpf_timer *arr_timer, *lru_timer;
struct elem init = {};
int lru_key = LRU;
int array_key = ARRAY;
arr_timer = bpf_map_lookup_elem(&array, &array_key);
if (!arr_timer)
return 0;
bpf_timer_init(arr_timer, &array, CLOCK_MONOTONIC);
bpf_map_update_elem(&lru, &lru_key, &init, 0);
lru_timer = bpf_map_lookup_elem(&lru, &lru_key);
if (!lru_timer)
return 0;
bpf_timer_init(lru_timer, &lru, CLOCK_MONOTONIC);
bpf_timer_set_callback(arr_timer, timer_cb1);
bpf_timer_start(arr_timer, 0 /* call timer_cb1 asap */, 0);
/* init more timers to check that array destruction
* doesn't leak timer memory.
*/
array_key = 0;
arr_timer = bpf_map_lookup_elem(&array, &array_key);
if (!arr_timer)
return 0;
bpf_timer_init(arr_timer, &array, CLOCK_MONOTONIC);
return 0;
}
/* callback for prealloc and non-prealloc hashtab timers */
static int timer_cb2(void *map, int *key, struct hmap_elem *val)
{
if (*key == HTAB)
callback_check--;
else
callback2_check--;
if (val->counter > 0 && --val->counter) {
/* re-arm the timer again to execute after 1 usec */
bpf_timer_start(&val->timer, 1000, 0);
} else if (*key == HTAB) {
struct bpf_timer *arr_timer;
int array_key = ARRAY;
/* cancel arr_timer otherwise bpf_fentry_test1 prog
* will stay alive forever.
*/
arr_timer = bpf_map_lookup_elem(&array, &array_key);
if (!arr_timer)
return 0;
if (bpf_timer_cancel(arr_timer) != 1)
/* bpf_timer_cancel should return 1 to indicate
* that arr_timer was active at this time
*/
err |= 8;
/* try to cancel ourself. It shouldn't deadlock. */
if (bpf_timer_cancel(&val->timer) != -EDEADLK)
err |= 16;
/* delete this key and this timer anyway.
* It shouldn't deadlock either.
*/
bpf_map_delete_elem(map, key);
/* in preallocated hashmap both 'key' and 'val' could have been
* reused to store another map element (like in LRU above),
* but in controlled test environment the below test works.
* It's not a use-after-free. The memory is owned by the map.
*/
if (bpf_timer_start(&val->timer, 1000, 0) != -EINVAL)
err |= 32;
ok |= 2;
} else {
if (*key != HTAB_MALLOC)
err |= 64;
/* try to cancel ourself. It shouldn't deadlock. */
if (bpf_timer_cancel(&val->timer) != -EDEADLK)
err |= 128;
/* delete this key and this timer anyway.
* It shouldn't deadlock either.
*/
bpf_map_delete_elem(map, key);
/* in non-preallocated hashmap both 'key' and 'val' are RCU
* protected and still valid though this element was deleted
* from the map. Arm this timer for ~35 seconds. When callback
* finishes the call_rcu will invoke:
* htab_elem_free_rcu
* check_and_free_timer
* bpf_timer_cancel_and_free
* to cancel this 35 second sleep and delete the timer for real.
*/
if (bpf_timer_start(&val->timer, 1ull << 35, 0) != 0)
err |= 256;
ok |= 4;
}
return 0;
}
int bpf_timer_test(void)
{
struct hmap_elem *val;
int key = HTAB, key_malloc = HTAB_MALLOC;
val = bpf_map_lookup_elem(&hmap, &key);
if (val) {
if (bpf_timer_init(&val->timer, &hmap, CLOCK_BOOTTIME) != 0)
err |= 512;
bpf_timer_set_callback(&val->timer, timer_cb2);
bpf_timer_start(&val->timer, 1000, 0);
}
val = bpf_map_lookup_elem(&hmap_malloc, &key_malloc);
if (val) {
if (bpf_timer_init(&val->timer, &hmap_malloc, CLOCK_BOOTTIME) != 0)
err |= 1024;
bpf_timer_set_callback(&val->timer, timer_cb2);
bpf_timer_start(&val->timer, 1000, 0);
}
return 0;
}
SEC("fentry/bpf_fentry_test2")
int BPF_PROG(test2, int a, int b)
{
struct hmap_elem init = {}, *val;
int key = HTAB, key_malloc = HTAB_MALLOC;
init.counter = 10; /* number of times to trigger timer_cb2 */
bpf_map_update_elem(&hmap, &key, &init, 0);
val = bpf_map_lookup_elem(&hmap, &key);
if (val)
bpf_timer_init(&val->timer, &hmap, CLOCK_BOOTTIME);
/* update the same key to free the timer */
bpf_map_update_elem(&hmap, &key, &init, 0);
bpf_map_update_elem(&hmap_malloc, &key_malloc, &init, 0);
val = bpf_map_lookup_elem(&hmap_malloc, &key_malloc);
if (val)
bpf_timer_init(&val->timer, &hmap_malloc, CLOCK_BOOTTIME);
/* update the same key to free the timer */
bpf_map_update_elem(&hmap_malloc, &key_malloc, &init, 0);
/* init more timers to check that htab operations
* don't leak timer memory.
*/
key = 0;
bpf_map_update_elem(&hmap, &key, &init, 0);
val = bpf_map_lookup_elem(&hmap, &key);
if (val)
bpf_timer_init(&val->timer, &hmap, CLOCK_BOOTTIME);
bpf_map_delete_elem(&hmap, &key);
bpf_map_update_elem(&hmap, &key, &init, 0);
val = bpf_map_lookup_elem(&hmap, &key);
if (val)
bpf_timer_init(&val->timer, &hmap, CLOCK_BOOTTIME);
/* and with non-prealloc htab */
key_malloc = 0;
bpf_map_update_elem(&hmap_malloc, &key_malloc, &init, 0);
val = bpf_map_lookup_elem(&hmap_malloc, &key_malloc);
if (val)
bpf_timer_init(&val->timer, &hmap_malloc, CLOCK_BOOTTIME);
bpf_map_delete_elem(&hmap_malloc, &key_malloc);
bpf_map_update_elem(&hmap_malloc, &key_malloc, &init, 0);
val = bpf_map_lookup_elem(&hmap_malloc, &key_malloc);
if (val)
bpf_timer_init(&val->timer, &hmap_malloc, CLOCK_BOOTTIME);
return bpf_timer_test();
}
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2021 Facebook */
#include <linux/bpf.h>
#include <time.h>
#include <errno.h>
#include <bpf/bpf_helpers.h>
#include "bpf_tcp_helpers.h"
char _license[] SEC("license") = "GPL";
struct hmap_elem {
int pad; /* unused */
struct bpf_timer timer;
};
struct inner_map {
__uint(type, BPF_MAP_TYPE_HASH);
__uint(max_entries, 1024);
__type(key, int);
__type(value, struct hmap_elem);
} inner_htab SEC(".maps");
#define ARRAY_KEY 1
#define HASH_KEY 1234
struct outer_arr {
__uint(type, BPF_MAP_TYPE_ARRAY_OF_MAPS);
__uint(max_entries, 2);
__uint(key_size, sizeof(int));
__uint(value_size, sizeof(int));
__array(values, struct inner_map);
} outer_arr SEC(".maps") = {
.values = { [ARRAY_KEY] = &inner_htab },
};
__u64 err;
__u64 ok;
__u64 cnt;
static int timer_cb1(void *map, int *key, struct hmap_elem *val);
static int timer_cb2(void *map, int *key, struct hmap_elem *val)
{
cnt++;
bpf_timer_set_callback(&val->timer, timer_cb1);
if (bpf_timer_start(&val->timer, 1000, 0))
err |= 1;
ok |= 1;
return 0;
}
/* callback for inner hash map */
static int timer_cb1(void *map, int *key, struct hmap_elem *val)
{
cnt++;
bpf_timer_set_callback(&val->timer, timer_cb2);
if (bpf_timer_start(&val->timer, 1000, 0))
err |= 2;
/* Do a lookup to make sure 'map' and 'key' pointers are correct */
bpf_map_lookup_elem(map, key);
ok |= 2;
return 0;
}
SEC("fentry/bpf_fentry_test1")
int BPF_PROG(test1, int a)
{
struct hmap_elem init = {};
struct bpf_map *inner_map;
struct hmap_elem *val;
int array_key = ARRAY_KEY;
int hash_key = HASH_KEY;
inner_map = bpf_map_lookup_elem(&outer_arr, &array_key);
if (!inner_map)
return 0;
bpf_map_update_elem(inner_map, &hash_key, &init, 0);
val = bpf_map_lookup_elem(inner_map, &hash_key);
if (!val)
return 0;
bpf_timer_init(&val->timer, inner_map, CLOCK_MONOTONIC);
if (bpf_timer_set_callback(&val->timer, timer_cb1))
err |= 4;
if (bpf_timer_start(&val->timer, 0, 0))
err |= 8;
return 0;
}
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2021 Facebook */
#include <linux/bpf.h>
#include <time.h>
#include <errno.h>
#include <bpf/bpf_helpers.h>
#include "bpf_tcp_helpers.h"
char _license[] SEC("license") = "GPL";
struct hmap_elem {
int pad; /* unused */
struct bpf_timer timer;
};
struct inner_map {
__uint(type, BPF_MAP_TYPE_HASH);
__uint(max_entries, 1024);
__type(key, int);
__type(value, struct hmap_elem);
} inner_htab SEC(".maps");
#define ARRAY_KEY 1
#define ARRAY_KEY2 2
#define HASH_KEY 1234
struct outer_arr {
__uint(type, BPF_MAP_TYPE_ARRAY_OF_MAPS);
__uint(max_entries, 2);
__uint(key_size, sizeof(int));
__uint(value_size, sizeof(int));
__array(values, struct inner_map);
} outer_arr SEC(".maps") = {
.values = { [ARRAY_KEY] = &inner_htab },
};
__u64 err;
__u64 ok;
__u64 cnt;
/* callback for inner hash map */
static int timer_cb(void *map, int *key, struct hmap_elem *val)
{
return 0;
}
SEC("fentry/bpf_fentry_test1")
int BPF_PROG(test1, int a)
{
struct hmap_elem init = {};
struct bpf_map *inner_map, *inner_map2;
struct hmap_elem *val;
int array_key = ARRAY_KEY;
int array_key2 = ARRAY_KEY2;
int hash_key = HASH_KEY;
inner_map = bpf_map_lookup_elem(&outer_arr, &array_key);
if (!inner_map)
return 0;
inner_map2 = bpf_map_lookup_elem(&outer_arr, &array_key2);
if (!inner_map2)
return 0;
bpf_map_update_elem(inner_map, &hash_key, &init, 0);
val = bpf_map_lookup_elem(inner_map, &hash_key);
if (!val)
return 0;
bpf_timer_init(&val->timer, inner_map2, CLOCK_MONOTONIC);
if (bpf_timer_set_callback(&val->timer, timer_cb))
err |= 4;
if (bpf_timer_start(&val->timer, 0, 0))
err |= 8;
return 0;
}