Commit 477bb4c1 authored by Andrii Nakryiko

Merge branch 'libbpf: Implement BTFGen'

Mauricio Vásquez <mauricio@kinvolk.io> says:

====================

CO-RE requires BTF information describing the kernel types in order to
perform the relocations. This is usually provided by the kernel itself
when it's configured with CONFIG_DEBUG_INFO_BTF. However, this
configuration is not enabled in all distributions and it's not
available on kernels before 5.12.

It's possible to use CO-RE in kernels without CONFIG_DEBUG_INFO_BTF
support by providing the BTF information from an external source.
BTFHub[0] contains BTF files for each released kernel that lacks BTF
support, for the most popular distributions.

Providing this BTF file for a given kernel has some challenges:
1. Each BTF file is a few MBs in size, so it's not possible to ship the
eBPF program with all the BTF files needed to run on different kernels.
(The BTF files would add up to GBs if you want to support a high number
of kernels.)
2. Downloading the BTF file for the current kernel at runtime delays the
start of the program, and it's not always possible to reach an external
host to download such a file.

Providing a BTF file with information about all of the kernel's data
types just to run one eBPF program is overkill in many cases: an eBPF
program usually accesses only a few kernel fields.
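
As an illustration (a hypothetical program, not part of this series): a
tracepoint handler that only reads task_struct->pid needs just a tiny
slice of the kernel's types present in the target BTF file.

  /* minimal CO-RE sketch; assumes a vmlinux.h generated from kernel BTF */
  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>
  #include <bpf/bpf_core_read.h>

  SEC("tp_btf/sched_switch")
  int BPF_PROG(on_switch, bool preempt, struct task_struct *prev,
               struct task_struct *next)
  {
          /* CO-RE field relocation: only 'struct task_struct' and the few
           * types reachable from 'pid' must exist in the target BTF
           */
          pid_t pid = BPF_CORE_READ(next, pid);

          bpf_printk("next pid: %d", pid);
          return 0;
  }

  char LICENSE[] SEC("license") = "GPL";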

This series implements BTFGen support in bpftool. This idea was
discussed during the "Towards truly portable eBPF"[1] presentation at
Linux Plumbers 2021.

There is a good example[2] of how to use BTFGen and BTFHub together to
generate multiple BTF files, one for each existing/supported kernel,
tailored to one application. For example: a complex bpf object might
support nearly 400 kernels with BTF files summing to only 1.5 MB.
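
For instance, with hypothetical file names, the BTF file tailored to one
kernel and one object is generated with:

  $ bpftool gen min_core_btf 5.4.0-example.btf 5.4.0-small.btf app.bpf.o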

[0]: https://github.com/aquasecurity/btfhub/
[1]: https://www.youtube.com/watch?v=igJLKyP1lFk&t=2418s
[2]: https://github.com/aquasecurity/btfhub/tree/main/tools

Changelog:
v6 > v7:
- use array instead of hashmap to store ids
- use btf__add_{struct,union}() instead of memcpy()
- don't use fixed path for testing BTF file
- update example to use DECLARE_LIBBPF_OPTS()

v5 > v6:
- use BTF structure to store used member/types instead of hashmaps
- remove support for input/output folders
- remove bpf_core_{created,free}_cand_cache()
- reorganize commits to avoid having unused static functions
- remove usage of libbpf_get_error()
- fix some errno propagation issues
- do not record full types for type-based relocations
- add support for BTF_KIND_FUNC_PROTO
- implement tests based on core_reloc ones

v4 > v5:
- move some checks before invoking prog->obj->gen_loader
- use p_info() instead of printf()
- improve command output
- fix issue with record_relo_core()
- implement bash completion
- write man page
- implement some tests

v3 > v4:
- parse BTF and BTF.ext sections in bpftool and use
  bpf_core_calc_relo_insn() directly
- expose less internal details from libbpf to bpftool
- implement support for enum-based relocations
- split commits in a more granular way

v2 > v3:
- expose internal libbpf APIs to bpftool instead
- implement btfgen in bpftool
- drop btf__raw_data() from libbpf

v1 > v2:
- introduce bpf_object__prepare() and ‘record_core_relos’ to expose
  CO-RE relocations instead of bpf_object__reloc_info_gen()
- rename btf__save_to_file() to btf__raw_data()

v1: https://lore.kernel.org/bpf/20211027203727.208847-1-mauricio@kinvolk.io/
v2: https://lore.kernel.org/bpf/20211116164208.164245-1-mauricio@kinvolk.io/
v3: https://lore.kernel.org/bpf/20211217185654.311609-1-mauricio@kinvolk.io/
v4: https://lore.kernel.org/bpf/20220112142709.102423-1-mauricio@kinvolk.io/
v5: https://lore.kernel.org/bpf/20220128223312.1253169-1-mauricio@kinvolk.io/
v6: https://lore.kernel.org/bpf/20220209222646.348365-1-mauricio@kinvolk.io/

Mauricio Vásquez (6):
  libbpf: split bpf_core_apply_relo()
  libbpf: Expose bpf_core_{add,free}_cands() to bpftool
  bpftool: Add gen min_core_btf command
  bpftool: Implement "gen min_core_btf" logic
  bpftool: Implement btfgen_get_btf()
  selftests/bpf: Test "bpftool gen min_core_btf"
====================
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
parents 8cbf062a 704c91e5
@@ -7225,6 +7225,7 @@ int bpf_core_apply(struct bpf_core_ctx *ctx, const struct bpf_core_relo *relo,
{
bool need_cands = relo->kind != BPF_CORE_TYPE_ID_LOCAL;
struct bpf_core_cand_list cands = {};
struct bpf_core_relo_res targ_res;
struct bpf_core_spec *specs;
int err;
@@ -7264,13 +7265,19 @@ int bpf_core_apply(struct bpf_core_ctx *ctx, const struct bpf_core_relo *relo,
cands.len = cc->cnt;
/* cand_cache_mutex needs to span the cache lookup and
* copy of btf pointer into bpf_core_cand_list,
-* since module can be unloaded while bpf_core_apply_relo_insn
+* since module can be unloaded while bpf_core_calc_relo_insn
* is working with module's btf.
*/
}

-err = bpf_core_apply_relo_insn((void *)ctx->log, insn, relo->insn_off / 8,
-relo, relo_idx, ctx->btf, &cands, specs);
+err = bpf_core_calc_relo_insn((void *)ctx->log, relo, relo_idx, ctx->btf, &cands, specs,
+&targ_res);
+if (err)
+goto out;
+
+err = bpf_core_patch_insn((void *)ctx->log, insn, relo->insn_off / 8, relo, relo_idx,
+&targ_res);

out:
kfree(specs);
if (need_cands) {
...
@@ -25,6 +25,7 @@ GEN COMMANDS
| **bpftool** **gen object** *OUTPUT_FILE* *INPUT_FILE* [*INPUT_FILE*...]
| **bpftool** **gen skeleton** *FILE* [**name** *OBJECT_NAME*]
| **bpftool** **gen min_core_btf** *INPUT* *OUTPUT* *OBJECT* [*OBJECT*...]
| **bpftool** **gen help**

DESCRIPTION
@@ -149,6 +150,26 @@ DESCRIPTION
(non-read-only) data from userspace, with same simplicity
as for BPF side.

**bpftool** **gen min_core_btf** *INPUT* *OUTPUT* *OBJECT* [*OBJECT*...]
Generate a minimum BTF file as *OUTPUT*, derived from a given
*INPUT* BTF file, containing all needed BTF types so that the
CO-RE relocations of one or more given eBPF objects may be
satisfied.

When kernels aren't compiled with CONFIG_DEBUG_INFO_BTF,
libbpf, when loading an eBPF object, has to rely on external
BTF files to be able to calculate CO-RE relocations.

Usually, an external BTF file is built from existing kernel
DWARF data using pahole. It contains all the types used by
its respective kernel image and, because of that, is big.

The min_core_btf feature builds smaller BTF files, customized
to one or multiple eBPF objects, so they can be distributed
together with an eBPF CO-RE based application, making the
application portable across different kernel versions.

Check the examples below for more information on how to use it.

**bpftool gen help**
Print short help message.
@@ -215,7 +236,9 @@ This is example BPF application with two BPF programs and a mix of BPF maps
and global variables. Source code is split across two source code files.

**$ clang -target bpf -g example1.bpf.c -o example1.bpf.o**

**$ clang -target bpf -g example2.bpf.c -o example2.bpf.o**

**$ bpftool gen object example.bpf.o example1.bpf.o example2.bpf.o**

This set of commands compiles *example1.bpf.c* and *example2.bpf.c*
@@ -329,3 +352,70 @@ BPF ELF object file *example.bpf.o*.
my_static_var: 7

This is a stripped-out version of skeleton generated for above example code.
min_core_btf
------------
**$ bpftool btf dump file 5.4.0-example.btf format raw**
::
[1] INT 'long unsigned int' size=8 bits_offset=0 nr_bits=64 encoding=(none)
[2] CONST '(anon)' type_id=1
[3] VOLATILE '(anon)' type_id=1
[4] ARRAY '(anon)' type_id=1 index_type_id=21 nr_elems=2
[5] PTR '(anon)' type_id=8
[6] CONST '(anon)' type_id=5
[7] INT 'char' size=1 bits_offset=0 nr_bits=8 encoding=(none)
[8] CONST '(anon)' type_id=7
[9] INT 'unsigned int' size=4 bits_offset=0 nr_bits=32 encoding=(none)
<long output>
**$ bpftool btf dump file one.bpf.o format raw**
::
[1] PTR '(anon)' type_id=2
[2] STRUCT 'trace_event_raw_sys_enter' size=64 vlen=4
'ent' type_id=3 bits_offset=0
'id' type_id=7 bits_offset=64
'args' type_id=9 bits_offset=128
'__data' type_id=12 bits_offset=512
[3] STRUCT 'trace_entry' size=8 vlen=4
'type' type_id=4 bits_offset=0
'flags' type_id=5 bits_offset=16
'preempt_count' type_id=5 bits_offset=24
<long output>
**$ bpftool gen min_core_btf 5.4.0-example.btf 5.4.0-smaller.btf one.bpf.o**
**$ bpftool btf dump file 5.4.0-smaller.btf format raw**
::
[1] TYPEDEF 'pid_t' type_id=6
[2] STRUCT 'trace_event_raw_sys_enter' size=64 vlen=1
'args' type_id=4 bits_offset=128
[3] STRUCT 'task_struct' size=9216 vlen=2
'pid' type_id=1 bits_offset=17920
'real_parent' type_id=7 bits_offset=18048
[4] ARRAY '(anon)' type_id=5 index_type_id=8 nr_elems=6
[5] INT 'long unsigned int' size=8 bits_offset=0 nr_bits=64 encoding=(none)
[6] TYPEDEF '__kernel_pid_t' type_id=8
[7] PTR '(anon)' type_id=3
[8] INT 'int' size=4 bits_offset=0 nr_bits=32 encoding=SIGNED
<end>
Now, the "5.4.0-smaller.btf" file may be used by libbpf as an external BTF file
when loading the "one.bpf.o" object into the "5.4.0-example" kernel. Note that
the generated BTF file won't allow other eBPF objects to be loaded, just the
ones given to min_core_btf.
::
LIBBPF_OPTS(bpf_object_open_opts, opts, .btf_custom_path = "5.4.0-smaller.btf");
struct bpf_object *obj;
obj = bpf_object__open_file("one.bpf.o", &opts);
...
@@ -34,10 +34,10 @@ LIBBPF_BOOTSTRAP_INCLUDE := $(LIBBPF_BOOTSTRAP_DESTDIR)/include
LIBBPF_BOOTSTRAP_HDRS_DIR := $(LIBBPF_BOOTSTRAP_INCLUDE)/bpf
LIBBPF_BOOTSTRAP := $(LIBBPF_BOOTSTRAP_OUTPUT)libbpf.a

-# We need to copy hashmap.h and nlattr.h which is not otherwise exported by
-# libbpf, but still required by bpftool.
-LIBBPF_INTERNAL_HDRS := $(addprefix $(LIBBPF_HDRS_DIR)/,hashmap.h nlattr.h)
-LIBBPF_BOOTSTRAP_INTERNAL_HDRS := $(addprefix $(LIBBPF_BOOTSTRAP_HDRS_DIR)/,hashmap.h)
+# We need to copy hashmap.h, nlattr.h, relo_core.h and libbpf_internal.h
+# which are not otherwise exported by libbpf, but still required by bpftool.
+LIBBPF_INTERNAL_HDRS := $(addprefix $(LIBBPF_HDRS_DIR)/,hashmap.h nlattr.h relo_core.h libbpf_internal.h)
+LIBBPF_BOOTSTRAP_INTERNAL_HDRS := $(addprefix $(LIBBPF_BOOTSTRAP_HDRS_DIR)/,hashmap.h relo_core.h libbpf_internal.h)

$(LIBBPF_OUTPUT) $(BOOTSTRAP_OUTPUT) $(LIBBPF_BOOTSTRAP_OUTPUT) $(LIBBPF_HDRS_DIR) $(LIBBPF_BOOTSTRAP_HDRS_DIR):
$(QUIET_MKDIR)mkdir -p $@
...
@@ -1003,9 +1003,13 @@ _bpftool()
;;
esac
;;
min_core_btf)
_filedir
return 0
;;
*)
[[ $prev == $object ]] && \
-COMPREPLY=( $( compgen -W 'object skeleton help' -- "$cur" ) )
+COMPREPLY=( $( compgen -W 'object skeleton help min_core_btf' -- "$cur" ) )
;;
esac
;;
...
@@ -14,6 +14,7 @@
#include <unistd.h>
#include <bpf/bpf.h>
#include <bpf/libbpf.h>
#include <bpf/libbpf_internal.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/mman.h>
@@ -1108,6 +1109,7 @@ static int do_help(int argc, char **argv)
fprintf(stderr,
"Usage: %1$s %2$s object OUTPUT_FILE INPUT_FILE [INPUT_FILE...]\n"
"       %1$s %2$s skeleton FILE [name OBJECT_NAME]\n"
"       %1$s %2$s min_core_btf INPUT OUTPUT OBJECT [OBJECT...]\n"
"       %1$s %2$s help\n"
"\n"
"       " HELP_SPEC_OPTIONS " |\n"
@@ -1118,9 +1120,592 @@ static int do_help(int argc, char **argv)
return 0;
}
static int btf_save_raw(const struct btf *btf, const char *path)
{
const void *data;
FILE *f = NULL;
__u32 data_sz;
int err = 0;
data = btf__raw_data(btf, &data_sz);
if (!data)
return -ENOMEM;
f = fopen(path, "wb");
if (!f)
return -errno;
if (fwrite(data, 1, data_sz, f) != data_sz)
err = -errno;
fclose(f);
return err;
}
struct btfgen_info {
struct btf *src_btf;
struct btf *marked_btf; /* btf structure used to mark used types */
};
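/* hashmap callbacks: BTF type IDs are used directly as keys, so hashing
* and equality operate on the pointer-encoded ID itself
*/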
static size_t btfgen_hash_fn(const void *key, void *ctx)
{
return (size_t)key;
}
static bool btfgen_equal_fn(const void *k1, const void *k2, void *ctx)
{
return k1 == k2;
}
static void *u32_as_hash_key(__u32 x)
{
return (void *)(uintptr_t)x;
}
static void btfgen_free_info(struct btfgen_info *info)
{
if (!info)
return;
btf__free(info->src_btf);
btf__free(info->marked_btf);
free(info);
}
static struct btfgen_info *
btfgen_new_info(const char *targ_btf_path)
{
struct btfgen_info *info;
int err;
info = calloc(1, sizeof(*info));
if (!info)
return NULL;
info->src_btf = btf__parse(targ_btf_path, NULL);
if (!info->src_btf) {
err = -errno;
p_err("failed parsing '%s' BTF file: %s", targ_btf_path, strerror(errno));
goto err_out;
}
info->marked_btf = btf__parse(targ_btf_path, NULL);
if (!info->marked_btf) {
err = -errno;
p_err("failed parsing '%s' BTF file: %s", targ_btf_path, strerror(errno));
goto err_out;
}
return info;
err_out:
btfgen_free_info(info);
errno = -err;
return NULL;
}
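/* btfgen never mutates the source BTF; used types and members are marked
* on a second parsed copy ("marked_btf") by overwriting name_off, which is
* safe because that copy is only used for bookkeeping and never emitted
*/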
#define MARKED UINT32_MAX
static void btfgen_mark_member(struct btfgen_info *info, int type_id, int idx)
{
const struct btf_type *t = btf__type_by_id(info->marked_btf, type_id);
struct btf_member *m = btf_members(t) + idx;
m->name_off = MARKED;
}
static int
btfgen_mark_type(struct btfgen_info *info, unsigned int type_id, bool follow_pointers)
{
const struct btf_type *btf_type = btf__type_by_id(info->src_btf, type_id);
struct btf_type *cloned_type;
struct btf_param *param;
struct btf_array *array;
int err, i;
if (type_id == 0)
return 0;
/* mark type on cloned BTF as used */
cloned_type = (struct btf_type *) btf__type_by_id(info->marked_btf, type_id);
cloned_type->name_off = MARKED;
/* recursively mark other types needed by it */
switch (btf_kind(btf_type)) {
case BTF_KIND_UNKN:
case BTF_KIND_INT:
case BTF_KIND_FLOAT:
case BTF_KIND_ENUM:
case BTF_KIND_STRUCT:
case BTF_KIND_UNION:
break;
case BTF_KIND_PTR:
if (follow_pointers) {
err = btfgen_mark_type(info, btf_type->type, follow_pointers);
if (err)
return err;
}
break;
case BTF_KIND_CONST:
case BTF_KIND_VOLATILE:
case BTF_KIND_TYPEDEF:
err = btfgen_mark_type(info, btf_type->type, follow_pointers);
if (err)
return err;
break;
case BTF_KIND_ARRAY:
array = btf_array(btf_type);
/* mark array type */
err = btfgen_mark_type(info, array->type, follow_pointers);
/* mark array's index type */
err = err ? : btfgen_mark_type(info, array->index_type, follow_pointers);
if (err)
return err;
break;
case BTF_KIND_FUNC_PROTO:
/* mark ret type */
err = btfgen_mark_type(info, btf_type->type, follow_pointers);
if (err)
return err;
/* mark parameters types */
param = btf_params(btf_type);
for (i = 0; i < btf_vlen(btf_type); i++) {
err = btfgen_mark_type(info, param->type, follow_pointers);
if (err)
return err;
param++;
}
break;
/* tells if some other type needs to be handled */
default:
p_err("unsupported kind: %s (%d)", btf_kind_str(btf_type), type_id);
return -EINVAL;
}
return 0;
}
static int btfgen_record_field_relo(struct btfgen_info *info, struct bpf_core_spec *targ_spec)
{
struct btf *btf = info->src_btf;
const struct btf_type *btf_type;
struct btf_member *btf_member;
struct btf_array *array;
unsigned int type_id = targ_spec->root_type_id;
int idx, err;
/* mark root type */
btf_type = btf__type_by_id(btf, type_id);
err = btfgen_mark_type(info, type_id, false);
if (err)
return err;
/* mark types for complex types (arrays, unions, structures) */
for (int i = 1; i < targ_spec->raw_len; i++) {
/* skip typedefs and mods */
while (btf_is_mod(btf_type) || btf_is_typedef(btf_type)) {
type_id = btf_type->type;
btf_type = btf__type_by_id(btf, type_id);
}
switch (btf_kind(btf_type)) {
case BTF_KIND_STRUCT:
case BTF_KIND_UNION:
idx = targ_spec->raw_spec[i];
btf_member = btf_members(btf_type) + idx;
/* mark member */
btfgen_mark_member(info, type_id, idx);
/* mark member's type */
type_id = btf_member->type;
btf_type = btf__type_by_id(btf, type_id);
err = btfgen_mark_type(info, type_id, false);
if (err)
return err;
break;
case BTF_KIND_ARRAY:
array = btf_array(btf_type);
type_id = array->type;
btf_type = btf__type_by_id(btf, type_id);
break;
default:
p_err("unsupported kind: %s (%d)",
btf_kind_str(btf_type), btf_type->type);
return -EINVAL;
}
}
return 0;
}
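/* Type-based relocations mark the whole type chain, following pointers;
* struct/union members are not marked, so these types end up "empty" in
* the generated BTF
*/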
static int btfgen_record_type_relo(struct btfgen_info *info, struct bpf_core_spec *targ_spec)
{
return btfgen_mark_type(info, targ_spec->root_type_id, true);
}
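/* Enum-based relocations record the enum type itself; its values are part
* of the type, so no pointer chasing is required
*/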
static int btfgen_record_enumval_relo(struct btfgen_info *info, struct bpf_core_spec *targ_spec)
{
return btfgen_mark_type(info, targ_spec->root_type_id, false);
}
static int btfgen_record_reloc(struct btfgen_info *info, struct bpf_core_spec *res)
{
switch (res->relo_kind) {
case BPF_CORE_FIELD_BYTE_OFFSET:
case BPF_CORE_FIELD_BYTE_SIZE:
case BPF_CORE_FIELD_EXISTS:
case BPF_CORE_FIELD_SIGNED:
case BPF_CORE_FIELD_LSHIFT_U64:
case BPF_CORE_FIELD_RSHIFT_U64:
return btfgen_record_field_relo(info, res);
case BPF_CORE_TYPE_ID_LOCAL: /* BPF_CORE_TYPE_ID_LOCAL doesn't require kernel BTF */
return 0;
case BPF_CORE_TYPE_ID_TARGET:
case BPF_CORE_TYPE_EXISTS:
case BPF_CORE_TYPE_SIZE:
return btfgen_record_type_relo(info, res);
case BPF_CORE_ENUMVAL_EXISTS:
case BPF_CORE_ENUMVAL_VALUE:
return btfgen_record_enumval_relo(info, res);
default:
return -EINVAL;
}
}
static struct bpf_core_cand_list *
btfgen_find_cands(const struct btf *local_btf, const struct btf *targ_btf, __u32 local_id)
{
const struct btf_type *local_type;
struct bpf_core_cand_list *cands = NULL;
struct bpf_core_cand local_cand = {};
size_t local_essent_len;
const char *local_name;
int err;
local_cand.btf = local_btf;
local_cand.id = local_id;
local_type = btf__type_by_id(local_btf, local_id);
if (!local_type) {
err = -EINVAL;
goto err_out;
}
local_name = btf__name_by_offset(local_btf, local_type->name_off);
if (!local_name) {
err = -EINVAL;
goto err_out;
}
local_essent_len = bpf_core_essential_name_len(local_name);
cands = calloc(1, sizeof(*cands));
if (!cands)
return NULL;
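/* look up candidate target types by (essential) name, starting at type
* ID 1; "vmlinux" is only the name used for the target BTF in messages
*/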
err = bpf_core_add_cands(&local_cand, local_essent_len, targ_btf, "vmlinux", 1, cands);
if (err)
goto err_out;
return cands;
err_out:
bpf_core_free_cands(cands);
errno = -err;
return NULL;
}
/* Record relocation information for a single BPF object */
static int btfgen_record_obj(struct btfgen_info *info, const char *obj_path)
{
const struct btf_ext_info_sec *sec;
const struct bpf_core_relo *relo;
const struct btf_ext_info *seg;
struct hashmap_entry *entry;
struct hashmap *cand_cache = NULL;
struct btf_ext *btf_ext = NULL;
unsigned int relo_idx;
struct btf *btf = NULL;
size_t i;
int err;
btf = btf__parse(obj_path, &btf_ext);
if (!btf) {
err = -errno;
p_err("failed to parse BPF object '%s': %s", obj_path, strerror(errno));
return err;
}
if (!btf_ext) {
p_err("failed to parse BPF object '%s': section %s not found",
obj_path, BTF_EXT_ELF_SEC);
err = -EINVAL;
goto out;
}
if (btf_ext->core_relo_info.len == 0) {
err = 0;
goto out;
}
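/* candidate lists are cached by root type ID so that relocations
* referencing the same type reuse the same candidate list
*/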
cand_cache = hashmap__new(btfgen_hash_fn, btfgen_equal_fn, NULL);
if (IS_ERR(cand_cache)) {
err = PTR_ERR(cand_cache);
goto out;
}
seg = &btf_ext->core_relo_info;
for_each_btf_ext_sec(seg, sec) {
for_each_btf_ext_rec(seg, sec, relo_idx, relo) {
struct bpf_core_spec specs_scratch[3] = {};
struct bpf_core_relo_res targ_res = {};
struct bpf_core_cand_list *cands = NULL;
const void *type_key = u32_as_hash_key(relo->type_id);
const char *sec_name = btf__name_by_offset(btf, sec->sec_name_off);
if (relo->kind != BPF_CORE_TYPE_ID_LOCAL &&
!hashmap__find(cand_cache, type_key, (void **)&cands)) {
cands = btfgen_find_cands(btf, info->src_btf, relo->type_id);
if (!cands) {
err = -errno;
goto out;
}
err = hashmap__set(cand_cache, type_key, cands, NULL, NULL);
if (err)
goto out;
}
err = bpf_core_calc_relo_insn(sec_name, relo, relo_idx, btf, cands,
specs_scratch, &targ_res);
if (err)
goto out;
/* specs_scratch[2] is the target spec */
err = btfgen_record_reloc(info, &specs_scratch[2]);
if (err)
goto out;
}
}
out:
btf__free(btf);
btf_ext__free(btf_ext);
if (!IS_ERR_OR_NULL(cand_cache)) {
hashmap__for_each_entry(cand_cache, entry, i) {
bpf_core_free_cands(entry->value);
}
hashmap__free(cand_cache);
}
return err;
}
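/* btf_type_visit_type_ids() callback: rewrite each type ID through the
* old-to-new ID map built in the first pass of btfgen_get_btf(); ID 0
* (void) maps to itself because the map is zero-initialized
*/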
static int btfgen_remap_id(__u32 *type_id, void *ctx)
{
unsigned int *ids = ctx;
*type_id = ids[*type_id];
return 0;
}
/* Generate BTF from relocation information previously recorded */
static struct btf *btfgen_get_btf(struct btfgen_info *info)
{
struct btf *btf_new = NULL;
unsigned int *ids = NULL;
unsigned int i, n = btf__type_cnt(info->marked_btf);
int err = 0;
btf_new = btf__new_empty();
if (!btf_new) {
err = -errno;
goto err_out;
}
ids = calloc(n, sizeof(*ids));
if (!ids) {
err = -errno;
goto err_out;
}
/* first pass: add all marked types to btf_new and add their new ids to the ids map */
for (i = 1; i < n; i++) {
const struct btf_type *cloned_type, *type;
const char *name;
int new_id;
cloned_type = btf__type_by_id(info->marked_btf, i);
if (cloned_type->name_off != MARKED)
continue;
type = btf__type_by_id(info->src_btf, i);
/* add members for struct and union */
if (btf_is_composite(type)) {
struct btf_member *cloned_m, *m;
unsigned short vlen;
int idx_src;
name = btf__str_by_offset(info->src_btf, type->name_off);
if (btf_is_struct(type))
err = btf__add_struct(btf_new, name, type->size);
else
err = btf__add_union(btf_new, name, type->size);
if (err < 0)
goto err_out;
new_id = err;
cloned_m = btf_members(cloned_type);
m = btf_members(type);
vlen = btf_vlen(cloned_type);
for (idx_src = 0; idx_src < vlen; idx_src++, cloned_m++, m++) {
/* add only members that are marked as used */
if (cloned_m->name_off != MARKED)
continue;
name = btf__str_by_offset(info->src_btf, m->name_off);
err = btf__add_field(btf_new, name, m->type,
btf_member_bit_offset(cloned_type, idx_src),
btf_member_bitfield_size(cloned_type, idx_src));
if (err < 0)
goto err_out;
}
} else {
err = btf__add_type(btf_new, info->src_btf, type);
if (err < 0)
goto err_out;
new_id = err;
}
/* add ID mapping */
ids[i] = new_id;
}
/* second pass: fix up type ids */
for (i = 1; i < btf__type_cnt(btf_new); i++) {
struct btf_type *btf_type = (struct btf_type *) btf__type_by_id(btf_new, i);
err = btf_type_visit_type_ids(btf_type, btfgen_remap_id, ids);
if (err)
goto err_out;
}
free(ids);
return btf_new;
err_out:
btf__free(btf_new);
free(ids);
errno = -err;
return NULL;
}
/* Create minimized BTF file for a set of BPF objects.
*
* The BTFGen algorithm is divided in two main parts: (1) collect the
* BTF types that are involved in relocations and (2) generate the BTF
* object using the collected types.
*
* In order to collect the types involved in the relocations, we parse
* the BTF and BTF.ext sections of the BPF objects and use
* bpf_core_calc_relo_insn() to get the target specification, this
* indicates how the types and fields are used in a relocation.
*
* Types are recorded in different ways according to the kind of the
* relocation. For field-based relocations only the members that are
* actually used are saved in order to reduce the size of the generated
* BTF file. For type-based relocations empty struct / unions are
* generated and for enum-based relocations the whole type is saved.
*
* The second part of the algorithm generates the BTF object. It creates
* an empty BTF object and fills it with the types recorded in the
* previous step. This function takes care of only adding the structure
* and union members that were marked as used and it also fixes up the
* type IDs on the generated BTF object.
*/
static int minimize_btf(const char *src_btf, const char *dst_btf, const char *objspaths[])
{
struct btfgen_info *info;
struct btf *btf_new = NULL;
int err, i;
info = btfgen_new_info(src_btf);
if (!info) {
err = -errno;
p_err("failed to allocate info structure: %s", strerror(errno));
goto out;
}
for (i = 0; objspaths[i] != NULL; i++) {
err = btfgen_record_obj(info, objspaths[i]);
if (err) {
p_err("error recording relocations for %s: %s", objspaths[i],
strerror(errno));
goto out;
}
}
btf_new = btfgen_get_btf(info);
if (!btf_new) {
err = -errno;
p_err("error generating BTF: %s", strerror(errno));
goto out;
}
err = btf_save_raw(btf_new, dst_btf);
if (err) {
p_err("error saving btf file: %s", strerror(errno));
goto out;
}
out:
btf__free(btf_new);
btfgen_free_info(info);
return err;
}
static int do_min_core_btf(int argc, char **argv)
{
const char *input, *output, **objs;
int i, err;
if (!REQ_ARGS(3)) {
usage();
return -1;
}
input = GET_ARG();
output = GET_ARG();
objs = (const char **) calloc(argc + 1, sizeof(*objs));
if (!objs) {
p_err("failed to allocate array for object names");
return -ENOMEM;
}
i = 0;
while (argc)
objs[i++] = GET_ARG();
err = minimize_btf(input, output, objs);
free(objs);
return err;
}
static const struct cmd cmds[] = {
{ "object", do_object },
{ "skeleton", do_skeleton },
{ "min_core_btf", do_min_core_btf },
{ "help", do_help },
{ 0 }
};
...
@@ -5192,13 +5192,16 @@ size_t bpf_core_essential_name_len(const char *name)
return n;
}

-static void bpf_core_free_cands(struct bpf_core_cand_list *cands)
+void bpf_core_free_cands(struct bpf_core_cand_list *cands)
{
if (!cands)
return;

free(cands->cands);
free(cands);
}

-static int bpf_core_add_cands(struct bpf_core_cand *local_cand,
+int bpf_core_add_cands(struct bpf_core_cand *local_cand,
size_t local_essent_len,
const struct btf *targ_btf,
const char *targ_btf_name,
@@ -5530,11 +5533,12 @@ static int record_relo_core(struct bpf_program *prog,
return 0;
}

-static int bpf_core_apply_relo(struct bpf_program *prog,
+static int bpf_core_resolve_relo(struct bpf_program *prog,
const struct bpf_core_relo *relo,
int relo_idx,
const struct btf *local_btf,
-struct hashmap *cand_cache)
+struct hashmap *cand_cache,
+struct bpf_core_relo_res *targ_res)
{
struct bpf_core_spec specs_scratch[3] = {};
const void *type_key = u32_as_hash_key(relo->type_id);
@@ -5543,20 +5547,7 @@ static int bpf_core_apply_relo(struct bpf_program *prog,
const struct btf_type *local_type;
const char *local_name;
__u32 local_id = relo->type_id;
-struct bpf_insn *insn;
-int insn_idx, err;
-
-if (relo->insn_off % BPF_INSN_SZ)
-return -EINVAL;
-insn_idx = relo->insn_off / BPF_INSN_SZ;
-/* adjust insn_idx from section frame of reference to the local
- * program's frame of reference; (sub-)program code is not yet
- * relocated, so it's enough to just subtract in-section offset
- */
-insn_idx = insn_idx - prog->sec_insn_off;
-if (insn_idx >= prog->insns_cnt)
-return -EINVAL;
-insn = &prog->insns[insn_idx];
+int err;

local_type = btf__type_by_id(local_btf, local_id);
if (!local_type)
@@ -5566,15 +5557,6 @@ static int bpf_core_apply_relo(struct bpf_program *prog,
if (!local_name)
return -EINVAL;

-if (prog->obj->gen_loader) {
-const char *spec_str = btf__name_by_offset(local_btf, relo->access_str_off);
-
-pr_debug("record_relo_core: prog %td insn[%d] %s %s %s final insn_idx %d\n",
-prog - prog->obj->programs, relo->insn_off / 8,
-btf_kind_str(local_type), local_name, spec_str, insn_idx);
-return record_relo_core(prog, relo, insn_idx);
-}
-
if (relo->kind != BPF_CORE_TYPE_ID_LOCAL &&
!hashmap__find(cand_cache, type_key, (void **)&cands)) {
cands = bpf_core_find_cands(prog->obj, local_btf, local_id);
@@ -5591,19 +5573,21 @@ static int bpf_core_apply_relo(struct bpf_program *prog,
}
}

-return bpf_core_apply_relo_insn(prog_name, insn, insn_idx, relo,
-relo_idx, local_btf, cands, specs_scratch);
+return bpf_core_calc_relo_insn(prog_name, relo, relo_idx, local_btf, cands, specs_scratch,
+targ_res);
}

static int
bpf_object__relocate_core(struct bpf_object *obj, const char *targ_btf_path)
{
const struct btf_ext_info_sec *sec;
struct bpf_core_relo_res targ_res;
const struct bpf_core_relo *rec;
const struct btf_ext_info *seg;
struct hashmap_entry *entry;
struct hashmap *cand_cache = NULL;
struct bpf_program *prog;
struct bpf_insn *insn;
const char *sec_name;
int i, err = 0, insn_idx, sec_idx;
@@ -5654,6 +5638,8 @@ bpf_object__relocate_core(struct bpf_object *obj, const char *targ_btf_path)
sec_name, sec->num_info);

for_each_btf_ext_rec(seg, sec, i, rec) {
if (rec->insn_off % BPF_INSN_SZ)
return -EINVAL;
insn_idx = rec->insn_off / BPF_INSN_SZ;
prog = find_prog_by_sec_insn(obj, sec_idx, insn_idx);
if (!prog) {
@@ -5668,12 +5654,38 @@ bpf_object__relocate_core(struct bpf_object *obj, const char *targ_btf_path)
if (!prog->load)
continue;

-err = bpf_core_apply_relo(prog, rec, i, obj->btf, cand_cache);
+/* adjust insn_idx from section frame of reference to the local
+ * program's frame of reference; (sub-)program code is not yet
+ * relocated, so it's enough to just subtract in-section offset
+ */
+insn_idx = insn_idx - prog->sec_insn_off;
+if (insn_idx >= prog->insns_cnt)
+return -EINVAL;
+insn = &prog->insns[insn_idx];
+
+if (prog->obj->gen_loader) {
+err = record_relo_core(prog, rec, insn_idx);
+if (err) {
+pr_warn("prog '%s': relo #%d: failed to record relocation: %d\n",
+prog->name, i, err);
+goto out;
+}
+continue;
+}
+
+err = bpf_core_resolve_relo(prog, rec, i, obj->btf, cand_cache, &targ_res);
if (err) {
pr_warn("prog '%s': relo #%d: failed to relocate: %d\n",
prog->name, i, err);
goto out;
}
+
+err = bpf_core_patch_insn(prog->name, insn, insn_idx, rec, i, &targ_res);
+if (err) {
+pr_warn("prog '%s': relo #%d: failed to patch insn #%u: %d\n",
+prog->name, i, insn_idx, err);
+goto out;
+}
}
}
...
@@ -529,4 +529,13 @@ static inline int ensure_good_fd(int fd)
return fd;
}
/* The following two functions are exposed to bpftool */
int bpf_core_add_cands(struct bpf_core_cand *local_cand,
size_t local_essent_len,
const struct btf *targ_btf,
const char *targ_btf_name,
int targ_start_id,
struct bpf_core_cand_list *cands);
void bpf_core_free_cands(struct bpf_core_cand_list *cands);
#endif /* __LIBBPF_LIBBPF_INTERNAL_H */
@@ -775,31 +775,6 @@ static int bpf_core_calc_enumval_relo(const struct bpf_core_relo *relo,
return 0;
}

-struct bpf_core_relo_res
-{
-/* expected value in the instruction, unless validate == false */
-__u32 orig_val;
-/* new value that needs to be patched up to */
-__u32 new_val;
-/* relocation unsuccessful, poison instruction, but don't fail load */
-bool poison;
-/* some relocations can't be validated against orig_val */
-bool validate;
-/* for field byte offset relocations or the forms:
- * *(T *)(rX + <off>) = rY
- * rX = *(T *)(rY + <off>),
- * we remember original and resolved field size to adjust direct
- * memory loads of pointers and integers; this is necessary for 32-bit
- * host kernel architectures, but also allows to automatically
- * relocate fields that were resized from, e.g., u32 to u64, etc.
- */
-bool fail_memsz_adjust;
-__u32 orig_sz;
-__u32 orig_type_id;
-__u32 new_sz;
-__u32 new_type_id;
-};

/* Calculate original and target relocation values, given local and target
* specs and relocation kind. These values are calculated for each candidate.
* If there are multiple candidates, resulting values should all be consistent
@@ -951,7 +926,7 @@ static int insn_bytes_to_bpf_size(__u32 sz)
* 5. *(T *)(rX + <off>) = rY, where T is one of {u8, u16, u32, u64};
* 6. *(T *)(rX + <off>) = <imm>, where T is one of {u8, u16, u32, u64}.
*/
-static int bpf_core_patch_insn(const char *prog_name, struct bpf_insn *insn,
+int bpf_core_patch_insn(const char *prog_name, struct bpf_insn *insn,
int insn_idx, const struct bpf_core_relo *relo,
int relo_idx, const struct bpf_core_relo_res *res)
{
@@ -1128,7 +1103,7 @@ static void bpf_core_dump_spec(const char *prog_name, int level, const struct bp
}

/*
- * CO-RE relocate single instruction.
+ * Calculate CO-RE relocation target result.
*
* The outline and important points of the algorithm:
* 1. For given local type, find corresponding candidate target types.
@@ -1177,18 +1152,18 @@ static void bpf_core_dump_spec(const char *prog_name, int level, const struct bp
* between multiple relocations for the same type ID and is updated as some
* of the candidates are pruned due to structural incompatibility.
*/
-int bpf_core_apply_relo_insn(const char *prog_name, struct bpf_insn *insn,
-int insn_idx,
+int bpf_core_calc_relo_insn(const char *prog_name,
const struct bpf_core_relo *relo,
int relo_idx,
const struct btf *local_btf,
struct bpf_core_cand_list *cands,
-struct bpf_core_spec *specs_scratch)
+struct bpf_core_spec *specs_scratch,
+struct bpf_core_relo_res *targ_res)
{
struct bpf_core_spec *local_spec = &specs_scratch[0];
struct bpf_core_spec *cand_spec = &specs_scratch[1];
struct bpf_core_spec *targ_spec = &specs_scratch[2];
-struct bpf_core_relo_res cand_res, targ_res;
+struct bpf_core_relo_res cand_res;
const struct btf_type *local_type;
const char *local_name;
__u32 local_id;
@@ -1223,12 +1198,12 @@ int bpf_core_apply_relo_insn(const char *prog_name, struct bpf_insn *insn,
/* TYPE_ID_LOCAL relo is special and doesn't need candidate search */
if (relo->kind == BPF_CORE_TYPE_ID_LOCAL) {
/* bpf_insn's imm value could get out of sync during linking */
-memset(&targ_res, 0, sizeof(targ_res));
-targ_res.validate = false;
-targ_res.poison = false;
-targ_res.orig_val = local_spec->root_type_id;
-targ_res.new_val = local_spec->root_type_id;
-goto patch_insn;
+memset(targ_res, 0, sizeof(*targ_res));
+targ_res->validate = false;
+targ_res->poison = false;
+targ_res->orig_val = local_spec->root_type_id;
+targ_res->new_val = local_spec->root_type_id;
+return 0;
}

/* libbpf doesn't support candidate search for anonymous types */
@@ -1262,7 +1237,7 @@ int bpf_core_apply_relo_insn(const char *prog_name, struct bpf_insn *insn,
return err;

if (j == 0) {
-targ_res = cand_res;
+*targ_res = cand_res;
*targ_spec = *cand_spec;
} else if (cand_spec->bit_offset != targ_spec->bit_offset) {
/* if there are many field relo candidates, they
@@ -1272,7 +1247,8 @@ int bpf_core_apply_relo_insn(const char *prog_name, struct bpf_insn *insn,
prog_name, relo_idx, cand_spec->bit_offset,
targ_spec->bit_offset);
return -EINVAL;
-} else if (cand_res.poison != targ_res.poison || cand_res.new_val != targ_res.new_val) {
+} else if (cand_res.poison != targ_res->poison ||
+cand_res.new_val != targ_res->new_val) {
/* all candidates should result in the same relocation
* decision and value, otherwise it's dangerous to
* proceed due to ambiguity
@@ -1280,7 +1256,7 @@ int bpf_core_apply_relo_insn(const char *prog_name, struct bpf_insn *insn,
pr_warn("prog '%s': relo #%d: relocation decision ambiguity: %s %u != %s %u\n",
prog_name, relo_idx,
cand_res.poison ? "failure" : "success", cand_res.new_val,
-targ_res.poison ? "failure" : "success", targ_res.new_val);
+targ_res->poison ? "failure" : "success", targ_res->new_val);
return -EINVAL;
}
@@ -1314,19 +1290,10 @@ int bpf_core_apply_relo_insn(const char *prog_name, struct bpf_insn *insn,
prog_name, relo_idx);

/* calculate single target relo result explicitly */
-err = bpf_core_calc_relo(prog_name, relo, relo_idx, local_spec, NULL, &targ_res);
+err = bpf_core_calc_relo(prog_name, relo, relo_idx, local_spec, NULL, targ_res);
if (err)
return err;
}

-patch_insn:
-/* bpf_core_patch_insn() should know how to handle missing targ_spec */
-err = bpf_core_patch_insn(prog_name, insn, insn_idx, relo, relo_idx, &targ_res);
-if (err) {
-pr_warn("prog '%s': relo #%d: failed to patch insn #%u: %d\n",
-prog_name, relo_idx, relo->insn_off / 8, err);
-return -EINVAL;
-}
-
return 0;
}
@@ -44,14 +44,44 @@ struct bpf_core_spec {
__u32 bit_offset;
};

-int bpf_core_apply_relo_insn(const char *prog_name,
-struct bpf_insn *insn, int insn_idx,
-const struct bpf_core_relo *relo, int relo_idx,
-const struct btf *local_btf,
-struct bpf_core_cand_list *cands,
-struct bpf_core_spec *specs_scratch);

struct bpf_core_relo_res {
/* expected value in the instruction, unless validate == false */
__u32 orig_val;
/* new value that needs to be patched up to */
__u32 new_val;
/* relocation unsuccessful, poison instruction, but don't fail load */
bool poison;
/* some relocations can't be validated against orig_val */
bool validate;
/* for field byte offset relocations or the forms:
 * *(T *)(rX + <off>) = rY
 * rX = *(T *)(rY + <off>),
 * we remember original and resolved field size to adjust direct
 * memory loads of pointers and integers; this is necessary for 32-bit
 * host kernel architectures, but also allows to automatically
 * relocate fields that were resized from, e.g., u32 to u64, etc.
 */
bool fail_memsz_adjust;
__u32 orig_sz;
__u32 orig_type_id;
__u32 new_sz;
__u32 new_type_id;
};

int bpf_core_types_are_compat(const struct btf *local_btf, __u32 local_id,
const struct btf *targ_btf, __u32 targ_id);
size_t bpf_core_essential_name_len(const char *name);

int bpf_core_calc_relo_insn(const char *prog_name,
const struct bpf_core_relo *relo, int relo_idx,
const struct btf *local_btf,
struct bpf_core_cand_list *cands,
struct bpf_core_spec *specs_scratch,
struct bpf_core_relo_res *targ_res);

int bpf_core_patch_insn(const char *prog_name, struct bpf_insn *insn,
int insn_idx, const struct bpf_core_relo *relo,
int relo_idx, const struct bpf_core_relo_res *res);

#endif
@@ -2,6 +2,7 @@
#include <test_progs.h>
#include "progs/core_reloc_types.h"
#include "bpf_testmod/bpf_testmod.h"
#include <linux/limits.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <bpf/btf.h>
@@ -836,13 +837,27 @@ static size_t roundup_page(size_t sz)
return (sz + page_size - 1) / page_size * page_size;
}
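/* run bpftool (from the build tree) to produce a BTF file trimmed down
* to the types used by the given BPF object
*/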
-void test_core_reloc(void)
+static int run_btfgen(const char *src_btf, const char *dst_btf, const char *objpath)
{
char command[4096];
int n;
n = snprintf(command, sizeof(command),
"./tools/build/bpftool/bpftool gen min_core_btf %s %s %s",
src_btf, dst_btf, objpath);
if (n < 0 || n >= sizeof(command))
return -1;
return system(command);
}
static void run_core_reloc_tests(bool use_btfgen)
{
const size_t mmap_sz = roundup_page(sizeof(struct data));
DECLARE_LIBBPF_OPTS(bpf_object_open_opts, open_opts);
struct core_reloc_test_case *test_case;
const char *tp_name, *probe_name;
-int err, i, equal;
+int err, i, equal, fd;
struct bpf_link *link = NULL;
struct bpf_map *data_map;
struct bpf_program *prog;
@@ -854,6 +869,7 @@ void test_core_reloc(void)
my_pid_tgid = getpid() | ((uint64_t)syscall(SYS_gettid) << 32);

for (i = 0; i < ARRAY_SIZE(test_cases); i++) {
char btf_file[] = "/tmp/core_reloc.btf.XXXXXX";
test_case = &test_cases[i];
if (!test__start_subtest(test_case->case_name))
continue;
@@ -863,6 +879,25 @@ void test_core_reloc(void)
continue;
}
/* generate a "minimal" BTF file and use it as source */
if (use_btfgen) {
if (!test_case->btf_src_file || test_case->fails) {
test__skip();
continue;
}
fd = mkstemp(btf_file);
if (!ASSERT_GE(fd, 0, "btf_tmp"))
goto cleanup;
close(fd); /* we only need the path */
err = run_btfgen(test_case->btf_src_file, btf_file,
test_case->bpf_obj_file);
if (!ASSERT_OK(err, "run_btfgen"))
goto cleanup;
test_case->btf_src_file = btf_file;
}
if (test_case->setup) {
err = test_case->setup(test_case);
if (CHECK(err, "test_setup", "test #%d setup failed: %d\n", i, err))
@@ -954,8 +989,19 @@ void test_core_reloc(void)
CHECK_FAIL(munmap(mmap_data, mmap_sz));
mmap_data = NULL;
}
remove(btf_file);
bpf_link__destroy(link);
link = NULL;
bpf_object__close(obj);
}
}
void test_core_reloc(void)
{
run_core_reloc_tests(false);
}
void test_core_btfgen(void)
{
run_core_reloc_tests(true);
}