Commit c22dfdd2 authored by Kumar Kartikeya Dwivedi, committed by Alexei Starovoitov

bpf: Add comments for map BTF matching requirement for bpf_list_head

The old behavior of bpf_map_meta_equal was that it compared timer_off
for equality (but not spin_lock_off, because that was not allowed for
inner maps), and did a memcmp of kptr_off_tab.

Now, we memcmp the btf_record of the two bpf_map structs, which
contains all of these fields.

We preserve backwards compatibility because we kzalloc the array: if
only a spin lock and a timer exist in the map, we only compare their
offsets, while the rest of the unused members in each btf_field struct
are zeroed out.

In the case of kptr, the btf and everything else belong to vmlinux or a
module, so as long as the type is the same the fields will match, since
the kernel BTF, module BTF, and dtor pointer will be the same across
maps.
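
As a rough illustration of why this stays backwards compatible, here is
a minimal standalone sketch in plain userspace C (mock fake_record /
fake_field structs, not the kernel's btf_record / btf_field): members a
field type does not use stay zero in both allocations because the whole
array is zero-initialized, and pointer members (like the btf and dtor
pointers of a kptr field) only compare equal because both records
reference the very same object.

#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Stand-ins for the kernel structs; the layout is illustrative only. */
struct fake_field {
	unsigned int offset;	/* meaningful for every field type */
	int type;
	const void *btf;	/* only used by kptr-style fields, else left zero */
};

struct fake_record {
	unsigned int cnt;
	struct fake_field fields[];
};

static size_t record_size(unsigned int cnt)
{
	/* mirrors offsetof(struct btf_record, fields[cnt]) in the kernel */
	return sizeof(struct fake_record) + cnt * sizeof(struct fake_field);
}

static struct fake_record *record_alloc(unsigned int cnt)
{
	/* calloc plays the role of kzalloc: padding and unused members are 0 */
	struct fake_record *rec = calloc(1, record_size(cnt));

	if (rec)
		rec->cnt = cnt;
	return rec;
}

int main(void)
{
	static const int shared_vmlinux_obj = 1;	/* same object seen by both "maps" */
	struct fake_record *a = record_alloc(2), *b = record_alloc(2);

	if (!a || !b)
		return 1;
	/* field 0: timer-like, only the offset matters */
	a->fields[0].offset = b->fields[0].offset = 8;
	/* field 1: kptr-like, both records point at the same shared object */
	a->fields[1].offset = b->fields[1].offset = 16;
	a->fields[1].btf = b->fields[1].btf = &shared_vmlinux_obj;

	printf("records equal: %d\n", !memcmp(a, b, record_size(2)));
	free(a);
	free(b);
	return 0;
}

Built as a normal userspace program this prints "records equal: 1";
pointing the kptr-like field of one record at a different object makes
the memcmp fail, mirroring what happens when the referenced type (and
hence its btf/dtor pointers) differs.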

Now with list_head in the mix, things are a bit complicated. We
implicitly add a requirement that both BTFs are the same, because
struct btf_field_list_head has btf and value_rec members.

We obviously shouldn't force BTFs to be equal by default, as that breaks
backwards compatibility.

Currently it is only implicitly required, because matching a list_head
field means matching its struct btf and value_rec members. value_rec
points back into a btf_record stashed in the map BTF (the btf member of
btf_field_list_head), so that pointer and the btf member have to match
exactly.
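
To illustrate the resulting constraint, here is another minimal
standalone sketch (again with mock types, not the kernel's
btf_field_list_head): the list_head field carries pointers into the BTF
it was parsed from, so a byte-wise comparison of two such fields can
only succeed when both records were parsed from the very same btf
object, even if the two BTFs are logically identical.

#include <stdio.h>
#include <string.h>

struct fake_btf { int id; };		/* stand-in for struct btf */
struct fake_value_rec { int cnt; };	/* stand-in for the stashed btf_record */

/* Rough shape of what a list_head field has to store: a pointer to the
 * map BTF it was parsed from, and a pointer to the value type's
 * btf_record stashed inside that same BTF.
 */
struct fake_list_head_field {
	struct fake_btf *btf;
	struct fake_value_rec *value_rec;
};

int main(void)
{
	/* Two logically identical BTFs, but distinct objects in memory. */
	struct fake_btf btf_a = { .id = 1 }, btf_b = { .id = 1 };
	struct fake_value_rec rec_a = { .cnt = 1 }, rec_b = { .cnt = 1 };

	struct fake_list_head_field in_map_a  = { &btf_a, &rec_a };
	struct fake_list_head_field in_map_b  = { &btf_b, &rec_b };
	struct fake_list_head_field in_map_a2 = { &btf_a, &rec_a };

	/* Distinct BTF objects: the pointers differ, so memcmp fails even
	 * though the BTF contents are logically the same.
	 */
	printf("different BTF objects: %d\n",
	       !memcmp(&in_map_a, &in_map_b, sizeof(in_map_a)));
	/* Same BTF object: pointers are byte-identical, memcmp succeeds. */
	printf("same BTF object:      %d\n",
	       !memcmp(&in_map_a, &in_map_a2, sizeof(in_map_a)));
	return 0;
}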

Document all these subtle details so that things don't break in the
future when touching this code.
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20221118015614.2013203-19-memxor@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
parent 534e86bc
@@ -3648,6 +3648,9 @@ struct btf_record *btf_parse_fields(const struct btf *btf, const struct btf_type
 		return NULL;
 	cnt = ret;
+	/* This needs to be kzalloc to zero out padding and unused fields, see
+	 * comment in btf_record_equal.
+	 */
 	rec = kzalloc(offsetof(struct btf_record, fields[cnt]), GFP_KERNEL | __GFP_NOWARN);
 	if (!rec)
 		return ERR_PTR(-ENOMEM);
...
@@ -68,6 +68,11 @@ struct bpf_map *bpf_map_meta_alloc(int inner_map_ufd)
 		}
 		inner_map_meta->field_offs = field_offs;
 	}
+	/* Note: We must use the same BTF, as we also used btf_record_dup above
+	 * which relies on BTF being same for both maps, as some members like
+	 * record->fields.list_head have pointers like value_rec pointing into
+	 * inner_map->btf.
+	 */
 	if (inner_map->btf) {
 		btf_get(inner_map->btf);
 		inner_map_meta->btf = inner_map->btf;
...
@@ -611,6 +611,20 @@ bool btf_record_equal(const struct btf_record *rec_a, const struct btf_record *r
 	if (rec_a->cnt != rec_b->cnt)
 		return false;
 	size = offsetof(struct btf_record, fields[rec_a->cnt]);
+	/* btf_parse_fields uses kzalloc to allocate a btf_record, so unused
+	 * members are zeroed out. So memcmp is safe to do without worrying
+	 * about padding/unused fields.
+	 *
+	 * While spin_lock, timer, and kptr have no relation to map BTF,
+	 * list_head metadata is specific to map BTF, the btf and value_rec
+	 * members in particular. btf is the map BTF, while value_rec points to
+	 * btf_record in that map BTF.
+	 *
+	 * So while by default, we don't rely on the map BTF (which the records
+	 * were parsed from) matching for both records, which is not backwards
+	 * compatible, in case list_head is part of it, we implicitly rely on
+	 * that by way of depending on memcmp succeeding for it.
+	 */
 	return !memcmp(rec_a, rec_b, size);
 }
...