Commit f263a814 authored by John Fastabend, committed by Daniel Borkmann

bpf: Track subprog poke descriptors correctly and fix use-after-free

Subprograms are calling map_poke_track(), but on program release there is no
hook to call map_poke_untrack(). However, on program release, the aux memory
(and poke descriptor table) is freed even though we still have a reference to
it in the element list of the map aux data. When we run map_poke_run(), we then
end up accessing free'd memory, triggering KASAN in prog_array_map_poke_run():

  [...]
  [  402.824689] BUG: KASAN: use-after-free in prog_array_map_poke_run+0xc2/0x34e
  [  402.824698] Read of size 4 at addr ffff8881905a7940 by task hubble-fgs/4337
  [  402.824705] CPU: 1 PID: 4337 Comm: hubble-fgs Tainted: G          I       5.12.0+ #399
  [  402.824715] Call Trace:
  [  402.824719]  dump_stack+0x93/0xc2
  [  402.824727]  print_address_description.constprop.0+0x1a/0x140
  [  402.824736]  ? prog_array_map_poke_run+0xc2/0x34e
  [  402.824740]  ? prog_array_map_poke_run+0xc2/0x34e
  [  402.824744]  kasan_report.cold+0x7c/0xd8
  [  402.824752]  ? prog_array_map_poke_run+0xc2/0x34e
  [  402.824757]  prog_array_map_poke_run+0xc2/0x34e
  [  402.824765]  bpf_fd_array_map_update_elem+0x124/0x1a0
  [...]

The elements concerned are walked as follows:

    for (i = 0; i < elem->aux->size_poke_tab; i++) {
           poke = &elem->aux->poke_tab[i];
    [...]

The access to size_poke_tab is a 4-byte read, as verified by checking the
offsets in the KASAN dump:

  [  402.825004] The buggy address belongs to the object at ffff8881905a7800
                 which belongs to the cache kmalloc-1k of size 1024
  [  402.825008] The buggy address is located 320 bytes inside of
                 1024-byte region [ffff8881905a7800, ffff8881905a7c00)

The pahole output of bpf_prog_aux:

  struct bpf_prog_aux {
    [...]
    /* --- cacheline 5 boundary (320 bytes) --- */
    u32                        size_poke_tab;        /*   320     4 */
    [...]

In general, subprograms do not necessarily manage their own data structures.
For example, BTF func_info and linfo are just pointers to the main program
structure. This allows reference counting and cleanup to be done on the latter,
which simplifies their management a bit. The aux->poke_tab struct, however,
did not follow this logic. The initial proposed fix for this use-after-free
bug further embedded poke data tracking into the subprogram with proper
reference counting. However, Daniel and Alexei questioned why we were treating
these objects as special; I agree, it's unnecessary. The fix here removes the
per-subprogram poke table allocation and map tracking and instead simply points
the aux->poke_tab pointer at the main program's poke table. This way, map
tracking is confined to the main program and the descriptors no longer need to
be managed per subprogram.
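
For illustration, the core of the new sharing scheme in jit_subprogs() boils
down to the excerpt below, condensed from the verifier hunk further down (the
comment is added here for clarity; the surrounding loop over subprograms and
error handling are omitted):

  /* Subprogs reuse the main program's poke table; ownership of an entry
   * is recorded in poke->aux so each subprog JIT only patches entries
   * that belong to it.
   */
  func[i]->aux->poke_tab = prog->aux->poke_tab;
  func[i]->aux->size_poke_tab = prog->aux->size_poke_tab;

  for (j = 0; j < prog->aux->size_poke_tab; j++) {
          struct bpf_jit_poke_descriptor *poke;

          poke = &prog->aux->poke_tab[j];
          if (poke->insn_idx < subprog_end &&
              poke->insn_idx >= subprog_start)
                  poke->aux = func[i]->aux;
  }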

This also means that bpf_prog_free_deferred(), which unwinds the program
reference counting and kfrees objects, needs to ensure that we don't try to
double-free the poke_tab when freeing the subprog structures. This is easily
solved by NULL'ing the poke_tab pointer.

The second detail is to ensure that the per-subprogram JIT logic only does
fixups on poke_tab[] entries it owns. To do this, we add a pointer in the poke
structure that points at the owning subprogram, so JITs can easily check,
while walking the poke_tab structure, whether the current entry belongs to the
current program. The aux pointer is stable and therefore suitable for such a
comparison. On the jit_subprogs() error path, we omit cleaning up the
poke->aux field because these fields are only ever referenced from the JIT
side, and on error we never make it to the JIT, so it's fine to leave them
dangling. Removing these pointers would complicate the error path for no
reason. However, we do need to untrack all poke descriptors from the main
program, as otherwise they could race with the freeing of JIT memory from the
subprograms.

Lastly, a748c697 ("bpf: propagate poke descriptors to subprograms") had an
off-by-one in the subprogram instruction index range check: it tested
'insn_idx >= subprog_start && insn_idx <= subprog_end'. However, subprog_end
is the start instruction of the next subprogram, so the upper bound must be
exclusive.
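
To make the interval semantics explicit, the check before and after the fix
(the fixed form is the one used in the jit_subprogs() hunk below):

  /* before: off-by-one, also matches the next subprogram's first insn */
  if (insn_idx >= subprog_start && insn_idx <= subprog_end)

  /* after: half-open interval [subprog_start, subprog_end) */
  if (poke->insn_idx < subprog_end && poke->insn_idx >= subprog_start)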

Fixes: a748c697 ("bpf: propagate poke descriptors to subprograms")
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Co-developed-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210707223848.14580-2-john.fastabend@gmail.com
arch/x86/net/bpf_jit_comp.c

@@ -570,6 +570,9 @@ static void bpf_tail_call_direct_fixup(struct bpf_prog *prog)
 	for (i = 0; i < prog->aux->size_poke_tab; i++) {
 		poke = &prog->aux->poke_tab[i];
+		if (poke->aux && poke->aux != prog->aux)
+			continue;
+
 		WARN_ON_ONCE(READ_ONCE(poke->tailcall_target_stable));
 		if (poke->reason != BPF_POKE_REASON_TAIL_CALL)
...
include/linux/bpf.h

@@ -780,6 +780,7 @@ struct bpf_jit_poke_descriptor {
 	void *tailcall_target;
 	void *tailcall_bypass;
 	void *bypass_addr;
+	void *aux;
 	union {
 		struct {
 			struct bpf_map *map;
...
kernel/bpf/core.c

@@ -2236,8 +2236,14 @@ static void bpf_prog_free_deferred(struct work_struct *work)
 #endif
 	if (aux->dst_trampoline)
 		bpf_trampoline_put(aux->dst_trampoline);
-	for (i = 0; i < aux->func_cnt; i++)
+	for (i = 0; i < aux->func_cnt; i++) {
+		/* We can just unlink the subprog poke descriptor table as
+		 * it was originally linked to the main program and is also
+		 * released along with it.
+		 */
+		aux->func[i]->aux->poke_tab = NULL;
 		bpf_jit_free(aux->func[i]);
+	}
 	if (aux->func_cnt) {
 		kfree(aux->func);
 		bpf_prog_unlock_free(aux->prog);
...
kernel/bpf/verifier.c

@@ -12121,33 +12121,19 @@ static int jit_subprogs(struct bpf_verifier_env *env)
 			goto out_free;
 		func[i]->is_func = 1;
 		func[i]->aux->func_idx = i;
-		/* the btf and func_info will be freed only at prog->aux */
+		/* Below members will be freed only at prog->aux */
 		func[i]->aux->btf = prog->aux->btf;
 		func[i]->aux->func_info = prog->aux->func_info;
+		func[i]->aux->poke_tab = prog->aux->poke_tab;
+		func[i]->aux->size_poke_tab = prog->aux->size_poke_tab;

 		for (j = 0; j < prog->aux->size_poke_tab; j++) {
-			u32 insn_idx = prog->aux->poke_tab[j].insn_idx;
-			int ret;
+			struct bpf_jit_poke_descriptor *poke;

-			if (!(insn_idx >= subprog_start &&
-			      insn_idx <= subprog_end))
-				continue;
-
-			ret = bpf_jit_add_poke_descriptor(func[i],
-							  &prog->aux->poke_tab[j]);
-			if (ret < 0) {
-				verbose(env, "adding tail call poke descriptor failed\n");
-				goto out_free;
-			}
-
-			func[i]->insnsi[insn_idx - subprog_start].imm = ret + 1;
-
-			map_ptr = func[i]->aux->poke_tab[ret].tail_call.map;
-			ret = map_ptr->ops->map_poke_track(map_ptr, func[i]->aux);
-			if (ret < 0) {
-				verbose(env, "tracking tail call prog failed\n");
-				goto out_free;
-			}
+			poke = &prog->aux->poke_tab[j];
+			if (poke->insn_idx < subprog_end &&
+			    poke->insn_idx >= subprog_start)
+				poke->aux = func[i]->aux;
 		}

 		/* Use bpf_prog_F_tag to indicate functions in stack traces.

@@ -12178,18 +12164,6 @@ static int jit_subprogs(struct bpf_verifier_env *env)
 		cond_resched();
 	}

-	/* Untrack main program's aux structs so that during map_poke_run()
-	 * we will not stumble upon the unfilled poke descriptors; each
-	 * of the main program's poke descs got distributed across subprogs
-	 * and got tracked onto map, so we are sure that none of them will
-	 * be missed after the operation below
-	 */
-	for (i = 0; i < prog->aux->size_poke_tab; i++) {
-		map_ptr = prog->aux->poke_tab[i].tail_call.map;
-
-		map_ptr->ops->map_poke_untrack(map_ptr, prog->aux);
-	}
-
 	/* at this point all bpf functions were successfully JITed
 	 * now populate all bpf_calls with correct addresses and
 	 * run last pass of JIT

@@ -12267,14 +12241,22 @@ static int jit_subprogs(struct bpf_verifier_env *env)
 	bpf_prog_jit_attempt_done(prog);
 	return 0;
 out_free:
+	/* We failed JIT'ing, so at this point we need to unregister poke
+	 * descriptors from subprogs, so that kernel is not attempting to
+	 * patch it anymore as we're freeing the subprog JIT memory.
+	 */
+	for (i = 0; i < prog->aux->size_poke_tab; i++) {
+		map_ptr = prog->aux->poke_tab[i].tail_call.map;
+		map_ptr->ops->map_poke_untrack(map_ptr, prog->aux);
+	}
+	/* At this point we're guaranteed that poke descriptors are not
+	 * live anymore. We can just unlink its descriptor table as it's
+	 * released with the main prog.
+	 */
 	for (i = 0; i < env->subprog_cnt; i++) {
 		if (!func[i])
 			continue;
-
-		for (j = 0; j < func[i]->aux->size_poke_tab; j++) {
-			map_ptr = func[i]->aux->poke_tab[j].tail_call.map;
-			map_ptr->ops->map_poke_untrack(map_ptr, func[i]->aux);
-		}
+		func[i]->aux->poke_tab = NULL;
 		bpf_jit_free(func[i]);
 	}
 	kfree(func);
...