Commit cd9c1270 authored by Alexei Starovoitov

Merge branch 'bpf-register-bounds-logic-and-testing-improvements'

Andrii Nakryiko says:

====================
BPF register bounds logic and testing improvements

This patch set adds a big set of manual and auto-generated test cases
validating BPF verifier's register bounds tracking and deduction logic. See
details in the last patch.

We start by building a tester that validates the existing <range> vs <scalar>
verifier logic for range bounds. To make all this work, BPF verifier's logic
needed a bunch of improvements to handle cases that previously were not
covered. This has no implications for the correctness of the verifier's logic,
but it was incomplete enough to cause significant disagreements with the
alternative implementation of register bounds logic that the tests in this
patch set implement. So we need those BPF verifier logic improvements to make
all the tests pass. This is what we do in patches #3 through #9.

The end goal of this work, though, is to extend BPF verifier range state
tracking so as to allow deriving new range bounds when comparing non-const
registers. Some more investigation is required to find and fix existing
potential issues with range tracking as part of ALU/ALU64 operations, so the
<range> x <range> part of the v5 patch set ([0]) is dropped until these
issues are sorted out.

For now, we include preparatory refactorings and clean-ups that set up the BPF
verifier code base for extending the logic to <range> vs <range> in a
subsequent patch set. Patches #10-#16 perform preliminary refactorings without
functionally changing anything. But they do clean up check_cond_jmp_op() logic
and generalize a bunch of other pieces in is_branch_taken() logic.

  [0] https://patchwork.kernel.org/project/netdevbpf/list/?series=797178&state=*

v5->v6:
  - dropped <range> vs <range> patches (original patches #18 through #23) to
    add more register range sanity checks and fix preexisting issues;
  - comments improvements, addressing other feedback on first 17 patches
    (Eduard, Alexei);
v4->v5:
  - added entirety of verifier reg bounds tracking changes, now handling
    <range> vs <range> cases (Alexei);
  - added way more comments trying to explain why deductions added are
    correct, hopefully they are useful and clarify things a bit (Daniel,
    Shung-Hsi);
  - added two preliminary selftest fixes necessary for the RELEASE=1 build to
    work again, as it keeps breaking.
v3->v4:
  - improvements to reg_bounds tester (progress report, split 32-bit and
    64-bit ranges, fix various verbosity output issues, etc);
v2->v3:
  - fix a subtle little-endianness assumption inside parse_reg_state() (CI);
v1->v2:
  - fix compilation when building selftests with llvm-16 toolchain (CI).
====================

Link: https://lore.kernel.org/r/20231102033759.2541186-1-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
parents bf4a64b9 4621202a
@@ -2324,6 +2324,81 @@ static void __update_reg_bounds(struct bpf_reg_state *reg)
/* Uses signed min/max values to inform unsigned, and vice-versa */
static void __reg32_deduce_bounds(struct bpf_reg_state *reg)
{
/* If upper 32 bits of u64/s64 range don't change, we can use lower 32
* bits to improve our u32/s32 boundaries.
*
* E.g., the case where we have upper 32 bits as zero ([10, 20] in
* u64) is pretty trivial, it's obvious that in u32 we'll also have
* [10, 20] range. But this property holds for any 64-bit range as
* long as upper 32 bits in that entire range of values stay the same.
*
* E.g., u64 range [0x10000000A, 0x10000000F] ([4294967306, 4294967311]
* in decimal) has the same upper 32 bits throughout all the values in
* that range. As such, lower 32 bits form a valid [0xA, 0xF] ([10, 15])
* range.
*
* Note also, that [0xA, 0xF] is a valid range both in u32 and in s32,
* following the rules outlined below about u64/s64 correspondence
* (which equally applies to u32 vs s32 correspondence). In general it
* depends on actual hexadecimal values of 32-bit range. They can form
* only valid u32, or only valid s32 ranges in some cases.
*
* So we use all these insights to derive bounds for subregisters here.
*/
if ((reg->umin_value >> 32) == (reg->umax_value >> 32)) {
/* u64 to u32 casting preserves validity of low 32 bits as
* a range, if upper 32 bits are the same
*/
reg->u32_min_value = max_t(u32, reg->u32_min_value, (u32)reg->umin_value);
reg->u32_max_value = min_t(u32, reg->u32_max_value, (u32)reg->umax_value);
if ((s32)reg->umin_value <= (s32)reg->umax_value) {
reg->s32_min_value = max_t(s32, reg->s32_min_value, (s32)reg->umin_value);
reg->s32_max_value = min_t(s32, reg->s32_max_value, (s32)reg->umax_value);
}
}
if ((reg->smin_value >> 32) == (reg->smax_value >> 32)) {
/* low 32 bits should form a proper u32 range */
if ((u32)reg->smin_value <= (u32)reg->smax_value) {
reg->u32_min_value = max_t(u32, reg->u32_min_value, (u32)reg->smin_value);
reg->u32_max_value = min_t(u32, reg->u32_max_value, (u32)reg->smax_value);
}
/* low 32 bits should form a proper s32 range */
if ((s32)reg->smin_value <= (s32)reg->smax_value) {
reg->s32_min_value = max_t(s32, reg->s32_min_value, (s32)reg->smin_value);
reg->s32_max_value = min_t(s32, reg->s32_max_value, (s32)reg->smax_value);
}
}
/* Special case where upper bits form a small sequence of two
* sequential numbers (in 32-bit unsigned space, so 0xffffffff to
* 0x00000000 is also valid), while lower bits form a proper s32 range
* going from negative numbers to positive numbers. E.g., let's say we
* have s64 range [-1, 1] ([0xffffffffffffffff, 0x0000000000000001]).
* Possible s64 values are {-1, 0, 1} ({0xffffffffffffffff,
* 0x0000000000000000, 0x0000000000000001}). Ignoring upper 32 bits,
* we still get a valid s32 range [-1, 1] ([0xffffffff, 0x00000001]).
* Note that it doesn't have to be 0xffffffff going to 0x00000000 in
* upper 32 bits. As a random example, s64 range
* [0xfffffff0fffffff0; 0xfffffff100000010], forms a valid s32 range
* [-16, 16] ([0xfffffff0; 0x00000010]) in its 32 bit subregister.
*/
if ((u32)(reg->umin_value >> 32) + 1 == (u32)(reg->umax_value >> 32) &&
(s32)reg->umin_value < 0 && (s32)reg->umax_value >= 0) {
reg->s32_min_value = max_t(s32, reg->s32_min_value, (s32)reg->umin_value);
reg->s32_max_value = min_t(s32, reg->s32_max_value, (s32)reg->umax_value);
}
if ((u32)(reg->smin_value >> 32) + 1 == (u32)(reg->smax_value >> 32) &&
(s32)reg->smin_value < 0 && (s32)reg->smax_value >= 0) {
reg->s32_min_value = max_t(s32, reg->s32_min_value, (s32)reg->smin_value);
reg->s32_max_value = min_t(s32, reg->s32_max_value, (s32)reg->smax_value);
}
/* if u32 range forms a valid s32 range (due to matching sign bit),
* try to learn from that
*/
if ((s32)reg->u32_min_value <= (s32)reg->u32_max_value) {
reg->s32_min_value = max_t(s32, reg->s32_min_value, reg->u32_min_value);
reg->s32_max_value = min_t(s32, reg->s32_max_value, reg->u32_max_value);
}
/* Learn sign from signed bounds.
* If we cannot cross the sign boundary, then signed and unsigned bounds
* are the same, so combine. This works even in the negative case, e.g.
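As an aside for readers following the comment block above (not part of the patch, and written as a standalone userspace program with made-up names): the first deduction can be brute-forced on a tiny range to confirm that, when umin and umax share their upper 32 bits, every value in [umin, umax] lands inside [(u32)umin, (u32)umax].

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t umin = 0x10000000AULL, umax = 0x10000000FULL;
	uint32_t u32_min, u32_max;
	uint64_t v;

	/* precondition of the deduction: upper 32 bits are identical */
	assert((umin >> 32) == (umax >> 32));
	u32_min = (uint32_t)umin;
	u32_max = (uint32_t)umax;

	/* every value in [umin, umax] must fall into the deduced u32 range */
	for (v = umin; v <= umax; v++)
		assert((uint32_t)v >= u32_min && (uint32_t)v <= u32_max);

	printf("deduced u32 range: [%u, %u]\n", u32_min, u32_max);
	return 0;
}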
@@ -2358,6 +2433,77 @@ static void __reg32_deduce_bounds(struct bpf_reg_state *reg)
static void __reg64_deduce_bounds(struct bpf_reg_state *reg)
{
/* If u64 range forms a valid s64 range (due to matching sign bit),
* try to learn from that. Let's do a bit of ASCII art to see when
* this is happening. Let's take u64 range first:
*
* 0 0x7fffffffffffffff 0x8000000000000000 U64_MAX
* |-------------------------------|--------------------------------|
*
* Valid u64 range is formed when umin and umax are anywhere in the
* range [0, U64_MAX], and umin <= umax. u64 case is simple and
* straightforward. Let's see how s64 range maps onto the same range
* of values, annotated below the line for comparison:
*
* 0 0x7fffffffffffffff 0x8000000000000000 U64_MAX
* |-------------------------------|--------------------------------|
* 0 S64_MAX S64_MIN -1
*
* So s64 values basically start in the middle and they are logically
* contiguous to the right of it, wrapping around from -1 to 0, and
* then finishing as S64_MAX (0x7fffffffffffffff) right before
* S64_MIN. We can try drawing the continuity of u64 vs s64 values
* more visually as mapped to sign-agnostic range of hex values.
*
* u64 start u64 end
* _______________________________________________________________
* / \
* 0 0x7fffffffffffffff 0x8000000000000000 U64_MAX
* |-------------------------------|--------------------------------|
* 0 S64_MAX S64_MIN -1
* / \
* >------------------------------ ------------------------------->
* s64 continues... s64 end s64 start s64 "midpoint"
*
* What this means is that, in general, we can't always derive
* something new about u64 from any random s64 range, and vice versa.
*
* But we can do that in two particular cases. One is when entire
* u64/s64 range is *entirely* contained within left half of the above
* diagram or when it is *entirely* contained in the right half. I.e.:
*
* |-------------------------------|--------------------------------|
* ^ ^ ^ ^
* A B C D
*
* [A, B] and [C, D] are contained entirely in their respective halves
* and form valid contiguous ranges as both u64 and s64 values. [A, B]
* will be non-negative both as u64 and s64 (and in fact it will be
* identical ranges no matter the signedness). [C, D] treated as s64
* will be a range of negative values, while in u64 it will be
* non-negative range of values larger than 0x8000000000000000.
*
* Now, any other range here can't be represented in both u64 and s64
* simultaneously. E.g., [A, C], [A, D], [B, C], [B, D] are valid
* contiguous u64 ranges, but they are discontinuous in s64. [B, C]
* in s64 would be properly presented as [S64_MIN, C] and [B, S64_MAX],
* for example. Similarly, valid s64 range [D, A] (going from negative
* to positive values), would be two separate [D, U64_MAX] and [0, A]
* ranges as u64. Currently reg_state can't represent two segments per
* numeric domain, so in such situations we can only derive maximal
* possible range ([0, U64_MAX] for u64, and [S64_MIN, S64_MAX] for s64).
*
* So we use these facts to derive umin/umax from smin/smax and vice
* versa only if they stay within the same "half". This is equivalent
* to checking sign bit: lower half will have sign bit as zero, upper
* half have sign bit 1. Below in code we simplify this by just
* casting umin/umax as smin/smax and checking if they form valid
* range, and vice versa. Those are equivalent checks.
*/
if ((s64)reg->umin_value <= (s64)reg->umax_value) {
reg->smin_value = max_t(s64, reg->smin_value, reg->umin_value);
reg->smax_value = min_t(s64, reg->smax_value, reg->umax_value);
}
/* Learn sign from signed bounds.
* If we cannot cross the sign boundary, then signed and unsigned bounds
* are the same, so combine. This works even in the negative case, e.g.
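Purely as an illustration of the "same half" rule explained above (not kernel code; toy values chosen here): if (s64)umin <= (s64)umax, the interval does not cross the sign boundary and is simultaneously a valid u64 and s64 range, which a brute-force check confirms.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* a u64 range sitting entirely in the "upper half" of the diagram */
	uint64_t umin = 0xfffffffffffffff0ULL, umax = 0xfffffffffffffffcULL;
	uint64_t v;

	/* precondition: the range does not cross the sign boundary */
	assert((int64_t)umin <= (int64_t)umax);

	/* then the very same interval is a valid s64 range as well */
	for (v = umin; v <= umax; v++)
		assert((int64_t)v >= (int64_t)umin && (int64_t)v <= (int64_t)umax);

	printf("s64 view of the range: [%lld, %lld]\n",
	       (long long)(int64_t)umin, (long long)(int64_t)umax);
	return 0;
}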
@@ -2390,10 +2536,54 @@ static void __reg64_deduce_bounds(struct bpf_reg_state *reg)
}
}
static void __reg_deduce_mixed_bounds(struct bpf_reg_state *reg)
{
/* Try to tighten 64-bit bounds from 32-bit knowledge, using 32-bit
* values on both sides of 64-bit range in hope to have a tighter range.
* E.g., if r1 is [0x1'00000000, 0x3'80000000], and we learn from
* 32-bit signed > 0 operation that s32 bounds are now [1; 0x7fffffff].
* With this, we can substitute 1 as low 32-bits of _low_ 64-bit bound
* (0x100000000 -> 0x100000001) and 0x7fffffff as low 32-bits of
* _high_ 64-bit bound (0x380000000 -> 0x37fffffff) and arrive at a
* better overall bounds for r1 as [0x1'000000001; 0x3'7fffffff].
* We just need to make sure that derived bounds we are intersecting
* with are well-formed ranges in respective s64 or u64 domain, just
* like we do with similar kinds of 32-to-64 or 64-to-32 adjustments.
*/
__u64 new_umin, new_umax;
__s64 new_smin, new_smax;
/* u32 -> u64 tightening, it's always well-formed */
new_umin = (reg->umin_value & ~0xffffffffULL) | reg->u32_min_value;
new_umax = (reg->umax_value & ~0xffffffffULL) | reg->u32_max_value;
reg->umin_value = max_t(u64, reg->umin_value, new_umin);
reg->umax_value = min_t(u64, reg->umax_value, new_umax);
/* u32 -> s64 tightening, u32 range embedded into s64 preserves range validity */
new_smin = (reg->smin_value & ~0xffffffffULL) | reg->u32_min_value;
new_smax = (reg->smax_value & ~0xffffffffULL) | reg->u32_max_value;
reg->smin_value = max_t(s64, reg->smin_value, new_smin);
reg->smax_value = min_t(s64, reg->smax_value, new_smax);
/* if s32 can be treated as valid u32 range, we can use it as well */
if ((u32)reg->s32_min_value <= (u32)reg->s32_max_value) {
/* s32 -> u64 tightening */
new_umin = (reg->umin_value & ~0xffffffffULL) | (u32)reg->s32_min_value;
new_umax = (reg->umax_value & ~0xffffffffULL) | (u32)reg->s32_max_value;
reg->umin_value = max_t(u64, reg->umin_value, new_umin);
reg->umax_value = min_t(u64, reg->umax_value, new_umax);
/* s32 -> s64 tightening */
new_smin = (reg->smin_value & ~0xffffffffULL) | (u32)reg->s32_min_value;
new_smax = (reg->smax_value & ~0xffffffffULL) | (u32)reg->s32_max_value;
reg->smin_value = max_t(s64, reg->smin_value, new_smin);
reg->smax_value = min_t(s64, reg->smax_value, new_smax);
}
}
static void __reg_deduce_bounds(struct bpf_reg_state *reg)
{
__reg32_deduce_bounds(reg);
__reg64_deduce_bounds(reg);
__reg_deduce_mixed_bounds(reg);
}
/* Attempts to improve var_off based on unsigned min/max information */
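The worked example from the __reg_deduce_mixed_bounds() comment above can be reproduced with a small standalone program (illustration only, not part of the patch; the helper names are invented here): splicing the 32-bit bounds into the low 32 bits of the 64-bit bounds and intersecting tightens [0x1'00000000, 0x3'80000000] to [0x1'00000001, 0x3'7fffffff].

#include <stdint.h>
#include <stdio.h>

static uint64_t max_u64(uint64_t a, uint64_t b) { return a > b ? a : b; }
static uint64_t min_u64(uint64_t a, uint64_t b) { return a < b ? a : b; }

int main(void)
{
	uint64_t umin = 0x100000000ULL, umax = 0x380000000ULL;
	uint32_t u32_min = 1, u32_max = 0x7fffffff;
	/* splice 32-bit bounds into low 32 bits of the 64-bit bounds */
	uint64_t new_umin = (umin & ~0xffffffffULL) | u32_min;
	uint64_t new_umax = (umax & ~0xffffffffULL) | u32_max;

	umin = max_u64(umin, new_umin);
	umax = min_u64(umax, new_umax);
	/* prints [0x100000001, 0x37fffffff], matching the comment */
	printf("tightened to [0x%llx, 0x%llx]\n",
	       (unsigned long long)umin, (unsigned long long)umax);
	return 0;
}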
@@ -2415,6 +2605,7 @@ static void reg_bounds_sync(struct bpf_reg_state *reg)
__update_reg_bounds(reg);
/* We might have learned something about the sign bit. */
__reg_deduce_bounds(reg);
__reg_deduce_bounds(reg);
/* We might have learned some bits from the bounds. */
__reg_bound_offset(reg);
/* Intersecting with the old var_off might have improved our bounds
@@ -2448,51 +2639,6 @@ static void __reg_assign_32_into_64(struct bpf_reg_state *reg)
}
}
static void __reg_combine_32_into_64(struct bpf_reg_state *reg)
{
/* special case when 64-bit register has upper 32-bit register
* zeroed. Typically happens after zext or <<32, >>32 sequence
* allowing us to use 32-bit bounds directly,
*/
if (tnum_equals_const(tnum_clear_subreg(reg->var_off), 0)) {
__reg_assign_32_into_64(reg);
} else {
/* Otherwise the best we can do is push lower 32bit known and
* unknown bits into register (var_off set from jmp logic)
* then learn as much as possible from the 64-bit tnum
* known and unknown bits. The previous smin/smax bounds are
* invalid here because of jmp32 compare so mark them unknown
* so they do not impact tnum bounds calculation.
*/
__mark_reg64_unbounded(reg);
}
reg_bounds_sync(reg);
}
static bool __reg64_bound_s32(s64 a)
{
return a >= S32_MIN && a <= S32_MAX;
}
static bool __reg64_bound_u32(u64 a)
{
return a >= U32_MIN && a <= U32_MAX;
}
static void __reg_combine_64_into_32(struct bpf_reg_state *reg)
{
__mark_reg32_unbounded(reg);
if (__reg64_bound_s32(reg->smin_value) && __reg64_bound_s32(reg->smax_value)) {
reg->s32_min_value = (s32)reg->smin_value;
reg->s32_max_value = (s32)reg->smax_value;
}
if (__reg64_bound_u32(reg->umin_value) && __reg64_bound_u32(reg->umax_value)) {
reg->u32_min_value = (u32)reg->umin_value;
reg->u32_max_value = (u32)reg->umax_value;
}
reg_bounds_sync(reg);
}
/* Mark a register as having a completely unknown (scalar) value. */
static void __mark_reg_unknown(const struct bpf_verifier_env *env,
struct bpf_reg_state *reg)
@@ -6196,9 +6342,10 @@ static void coerce_reg_to_size(struct bpf_reg_state *reg, int size)
* values are also truncated so we push 64-bit bounds into
* 32-bit bounds. Above were truncated < 32-bits already.
*/
if (size >= 4)
return;
__reg_combine_64_into_32(reg);
if (size < 4) {
__mark_reg32_unbounded(reg);
reg_bounds_sync(reg);
}
}
static void set_sext64_default_val(struct bpf_reg_state *reg, int size)
@@ -14041,161 +14188,102 @@ static void find_good_pkt_pointers(struct bpf_verifier_state *vstate,
}));
}
static int is_branch32_taken(struct bpf_reg_state *reg, u32 val, u8 opcode)
{
struct tnum subreg = tnum_subreg(reg->var_off);
s32 sval = (s32)val;
switch (opcode) {
case BPF_JEQ:
if (tnum_is_const(subreg))
return !!tnum_equals_const(subreg, val);
else if (val < reg->u32_min_value || val > reg->u32_max_value)
return 0;
else if (sval < reg->s32_min_value || sval > reg->s32_max_value)
return 0;
break;
case BPF_JNE:
if (tnum_is_const(subreg))
return !tnum_equals_const(subreg, val);
else if (val < reg->u32_min_value || val > reg->u32_max_value)
return 1;
else if (sval < reg->s32_min_value || sval > reg->s32_max_value)
return 1;
break;
case BPF_JSET:
if ((~subreg.mask & subreg.value) & val)
return 1;
if (!((subreg.mask | subreg.value) & val))
return 0;
break;
case BPF_JGT:
if (reg->u32_min_value > val)
return 1;
else if (reg->u32_max_value <= val)
return 0;
break;
case BPF_JSGT:
if (reg->s32_min_value > sval)
return 1;
else if (reg->s32_max_value <= sval)
return 0;
break;
case BPF_JLT:
if (reg->u32_max_value < val)
return 1;
else if (reg->u32_min_value >= val)
return 0;
break;
case BPF_JSLT:
if (reg->s32_max_value < sval)
return 1;
else if (reg->s32_min_value >= sval)
return 0;
break;
case BPF_JGE:
if (reg->u32_min_value >= val)
return 1;
else if (reg->u32_max_value < val)
return 0;
break;
case BPF_JSGE:
if (reg->s32_min_value >= sval)
return 1;
else if (reg->s32_max_value < sval)
return 0;
break;
case BPF_JLE:
if (reg->u32_max_value <= val)
return 1;
else if (reg->u32_min_value > val)
return 0;
break;
case BPF_JSLE:
if (reg->s32_max_value <= sval)
return 1;
else if (reg->s32_min_value > sval)
return 0;
break;
}
return -1;
}
static int is_branch64_taken(struct bpf_reg_state *reg, u64 val, u8 opcode)
{
s64 sval = (s64)val;
switch (opcode) {
case BPF_JEQ:
if (tnum_is_const(reg->var_off))
return !!tnum_equals_const(reg->var_off, val);
else if (val < reg->umin_value || val > reg->umax_value)
return 0;
else if (sval < reg->smin_value || sval > reg->smax_value)
return 0;
break;
case BPF_JNE:
if (tnum_is_const(reg->var_off))
return !tnum_equals_const(reg->var_off, val);
else if (val < reg->umin_value || val > reg->umax_value)
return 1;
else if (sval < reg->smin_value || sval > reg->smax_value)
return 1;
break;
case BPF_JSET:
if ((~reg->var_off.mask & reg->var_off.value) & val)
return 1;
if (!((reg->var_off.mask | reg->var_off.value) & val))
return 0;
break;
case BPF_JGT:
if (reg->umin_value > val)
return 1;
else if (reg->umax_value <= val)
return 0;
break;
case BPF_JSGT:
if (reg->smin_value > sval)
return 1;
else if (reg->smax_value <= sval)
return 0;
break;
case BPF_JLT:
if (reg->umax_value < val)
return 1;
else if (reg->umin_value >= val)
return 0;
break;
case BPF_JSLT:
if (reg->smax_value < sval)
return 1;
else if (reg->smin_value >= sval)
return 0;
break;
case BPF_JGE:
if (reg->umin_value >= val)
return 1;
else if (reg->umax_value < val)
return 0;
break;
case BPF_JSGE:
if (reg->smin_value >= sval)
return 1;
else if (reg->smax_value < sval)
return 0;
break;
case BPF_JLE:
if (reg->umax_value <= val)
return 1;
else if (reg->umin_value > val)
return 0;
break;
case BPF_JSLE:
if (reg->smax_value <= sval)
return 1;
else if (reg->smin_value > sval)
return 0;
break;
}
/* check if register is a constant scalar value */
static bool is_reg_const(struct bpf_reg_state *reg, bool subreg32)
{
return reg->type == SCALAR_VALUE &&
tnum_is_const(subreg32 ? tnum_subreg(reg->var_off) : reg->var_off);
}
/* assuming is_reg_const() is true, return constant value of a register */
static u64 reg_const_value(struct bpf_reg_state *reg, bool subreg32)
{
return subreg32 ? tnum_subreg(reg->var_off).value : reg->var_off.value;
}
/*
* <reg1> <op> <reg2>, currently assuming reg2 is a constant
*/
static int is_scalar_branch_taken(struct bpf_reg_state *reg1, struct bpf_reg_state *reg2,
u8 opcode, bool is_jmp32)
{
struct tnum t1 = is_jmp32 ? tnum_subreg(reg1->var_off) : reg1->var_off;
u64 umin1 = is_jmp32 ? (u64)reg1->u32_min_value : reg1->umin_value;
u64 umax1 = is_jmp32 ? (u64)reg1->u32_max_value : reg1->umax_value;
s64 smin1 = is_jmp32 ? (s64)reg1->s32_min_value : reg1->smin_value;
s64 smax1 = is_jmp32 ? (s64)reg1->s32_max_value : reg1->smax_value;
u64 uval = is_jmp32 ? (u32)tnum_subreg(reg2->var_off).value : reg2->var_off.value;
s64 sval = is_jmp32 ? (s32)uval : (s64)uval;
switch (opcode) {
case BPF_JEQ:
if (tnum_is_const(t1))
return !!tnum_equals_const(t1, uval);
else if (uval < umin1 || uval > umax1)
return 0;
else if (sval < smin1 || sval > smax1)
return 0;
break;
case BPF_JNE:
if (tnum_is_const(t1))
return !tnum_equals_const(t1, uval);
else if (uval < umin1 || uval > umax1)
return 1;
else if (sval < smin1 || sval > smax1)
return 1;
break;
case BPF_JSET:
if ((~t1.mask & t1.value) & uval)
return 1;
if (!((t1.mask | t1.value) & uval))
return 0;
break;
case BPF_JGT:
if (umin1 > uval)
return 1;
else if (umax1 <= uval)
return 0;
break;
case BPF_JSGT:
if (smin1 > sval)
return 1;
else if (smax1 <= sval)
return 0;
break;
case BPF_JLT:
if (umax1 < uval)
return 1;
else if (umin1 >= uval)
return 0;
break;
case BPF_JSLT:
if (smax1 < sval)
return 1;
else if (smin1 >= sval)
return 0;
break;
case BPF_JGE:
if (umin1 >= uval)
return 1;
else if (umax1 < uval)
return 0;
break;
case BPF_JSGE:
if (smin1 >= sval)
return 1;
else if (smax1 < sval)
return 0;
break;
case BPF_JLE:
if (umax1 <= uval)
return 1;
else if (umin1 > uval)
return 0;
break;
case BPF_JSLE:
if (smax1 <= sval)
return 1;
else if (smin1 > sval)
return 0;
break;
}
@@ -14203,41 +14291,6 @@ static int is_branch64_taken(struct bpf_reg_state *reg, u64 val, u8 opcode)
return -1;
}
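For illustration only (not part of the patch, and written as a self-contained userspace sketch): the three-way decision these helpers make can be seen in isolation for the unsigned '>' case, where a ranged register compared against a constant either always, never, or maybe takes the branch.

#include <stdint.h>
#include <stdio.h>

/* 1 = branch always taken, 0 = never taken, -1 = can't tell */
static int jgt_branch_taken(uint64_t umin, uint64_t umax, uint64_t val)
{
	if (umin > val)
		return 1;
	if (umax <= val)
		return 0;
	return -1;
}

int main(void)
{
	printf("%d\n", jgt_branch_taken(10, 20, 5));   /* 1: always taken */
	printf("%d\n", jgt_branch_taken(10, 20, 20));  /* 0: never taken  */
	printf("%d\n", jgt_branch_taken(10, 20, 15));  /* -1: unknown     */
	return 0;
}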
/* compute branch direction of the expression "if (reg opcode val) goto target;"
* and return:
* 1 - branch will be taken and "goto target" will be executed
* 0 - branch will not be taken and fall-through to next insn
* -1 - unknown. Example: "if (reg < 5)" is unknown when register value
* range [0,10]
*/
static int is_branch_taken(struct bpf_reg_state *reg, u64 val, u8 opcode,
bool is_jmp32)
{
if (__is_pointer_value(false, reg)) {
if (!reg_not_null(reg))
return -1;
/* If pointer is valid tests against zero will fail so we can
* use this to direct branch taken.
*/
if (val != 0)
return -1;
switch (opcode) {
case BPF_JEQ:
return 0;
case BPF_JNE:
return 1;
default:
return -1;
}
}
if (is_jmp32)
return is_branch32_taken(reg, val, opcode);
return is_branch64_taken(reg, val, opcode);
}
static int flip_opcode(u32 opcode)
{
/* How can we transform "a <op> b" into "b <op> a"? */
@@ -14299,32 +14352,98 @@ static int is_pkt_ptr_branch_taken(struct bpf_reg_state *dst_reg,
return -1;
}
/* Adjusts the register min/max values in the case that the dst_reg is the
* variable register that we are working on, and src_reg is a constant or we're
* simply doing a BPF_K check.
* In JEQ/JNE cases we also adjust the var_off values.
*/
static void reg_set_min_max(struct bpf_reg_state *true_reg,
struct bpf_reg_state *false_reg,
u64 val, u32 val32,
u8 opcode, bool is_jmp32)
{
struct tnum false_32off = tnum_subreg(false_reg->var_off);
struct tnum false_64off = false_reg->var_off;
struct tnum true_32off = tnum_subreg(true_reg->var_off);
struct tnum true_64off = true_reg->var_off;
s64 sval = (s64)val;
s32 sval32 = (s32)val32;
/* If the dst_reg is a pointer, we can't learn anything about its
* variable offset from the compare (unless src_reg were a pointer into
* the same object, but we don't bother with that.
* Since false_reg and true_reg have the same type by construction, we
* only need to check one of them for pointerness.
*/
if (__is_pointer_value(false, false_reg))
return;
/* compute branch direction of the expression "if (<reg1> opcode <reg2>) goto target;"
* and return:
* 1 - branch will be taken and "goto target" will be executed
* 0 - branch will not be taken and fall-through to next insn
* -1 - unknown. Example: "if (reg1 < 5)" is unknown when register value
* range [0,10]
*/
static int is_branch_taken(struct bpf_reg_state *reg1, struct bpf_reg_state *reg2,
u8 opcode, bool is_jmp32)
{
u64 val;
if (reg_is_pkt_pointer_any(reg1) && reg_is_pkt_pointer_any(reg2) && !is_jmp32)
return is_pkt_ptr_branch_taken(reg1, reg2, opcode);
/* try to make sure reg2 is a constant SCALAR_VALUE */
if (!is_reg_const(reg2, is_jmp32)) {
opcode = flip_opcode(opcode);
swap(reg1, reg2);
}
/* for now we expect reg2 to be a constant to make any useful decisions */
if (!is_reg_const(reg2, is_jmp32))
return -1;
val = reg_const_value(reg2, is_jmp32);
if (__is_pointer_value(false, reg1)) {
if (!reg_not_null(reg1))
return -1;
/* If pointer is valid tests against zero will fail so we can
* use this to direct branch taken.
*/
if (val != 0)
return -1;
switch (opcode) {
case BPF_JEQ:
return 0;
case BPF_JNE:
return 1;
default:
return -1;
}
}
return is_scalar_branch_taken(reg1, reg2, opcode, is_jmp32);
}
/* Adjusts the register min/max values in the case that the dst_reg and
* src_reg are both SCALAR_VALUE registers (or we are simply doing a BPF_K
* check, in which case we have a fake SCALAR_VALUE representing insn->imm).
* Technically we can do similar adjustments for pointers to the same object,
* but we don't support that right now.
*/
static void reg_set_min_max(struct bpf_reg_state *true_reg1,
struct bpf_reg_state *true_reg2,
struct bpf_reg_state *false_reg1,
struct bpf_reg_state *false_reg2,
u8 opcode, bool is_jmp32)
{
struct tnum false_32off, false_64off;
struct tnum true_32off, true_64off;
u64 uval;
u32 uval32;
s64 sval;
s32 sval32;
/* If either register is a pointer, we can't learn anything about its
* variable offset from the compare (unless they were a pointer into
* the same object, but we don't bother with that).
*/
if (false_reg1->type != SCALAR_VALUE || false_reg2->type != SCALAR_VALUE)
return;
/* we expect right-hand registers (src ones) to be constants, for now */
if (!is_reg_const(false_reg2, is_jmp32)) {
opcode = flip_opcode(opcode);
swap(true_reg1, true_reg2);
swap(false_reg1, false_reg2);
}
if (!is_reg_const(false_reg2, is_jmp32))
return;
false_32off = tnum_subreg(false_reg1->var_off);
false_64off = false_reg1->var_off;
true_32off = tnum_subreg(true_reg1->var_off);
true_64off = true_reg1->var_off;
uval = false_reg2->var_off.value;
uval32 = (u32)tnum_subreg(false_reg2->var_off).value;
sval = (s64)uval;
sval32 = (s32)uval32;
switch (opcode) {
/* JEQ/JNE comparison doesn't change the register equivalence.
*
@@ -14337,52 +14456,52 @@ static void reg_set_min_max(struct bpf_reg_state *true_reg,
*/
case BPF_JEQ:
if (is_jmp32) {
__mark_reg32_known(true_reg, val32);
true_32off = tnum_subreg(true_reg->var_off);
} else {
___mark_reg_known(true_reg, val);
true_64off = true_reg->var_off;
}
break;
case BPF_JNE:
if (is_jmp32) {
__mark_reg32_known(false_reg, val32);
false_32off = tnum_subreg(false_reg->var_off);
} else {
___mark_reg_known(false_reg, val);
false_64off = false_reg->var_off;
}
break;
case BPF_JSET:
if (is_jmp32) {
false_32off = tnum_and(false_32off, tnum_const(~val32));
if (is_power_of_2(val32))
true_32off = tnum_or(true_32off,
tnum_const(val32));
} else {
false_64off = tnum_and(false_64off, tnum_const(~val));
if (is_power_of_2(val))
true_64off = tnum_or(true_64off,
tnum_const(val));
}
break;
case BPF_JGE:
case BPF_JGT:
{
if (is_jmp32) {
u32 false_umax = opcode == BPF_JGT ? val32 : val32 - 1;
u32 true_umin = opcode == BPF_JGT ? val32 + 1 : val32;
false_reg->u32_max_value = min(false_reg->u32_max_value,
false_umax);
true_reg->u32_min_value = max(true_reg->u32_min_value,
true_umin);
} else {
u64 false_umax = opcode == BPF_JGT ? val : val - 1;
u64 true_umin = opcode == BPF_JGT ? val + 1 : val;
false_reg->umax_value = min(false_reg->umax_value, false_umax);
true_reg->umin_value = max(true_reg->umin_value, true_umin);
}
break;
}
*/
case BPF_JEQ:
if (is_jmp32) {
__mark_reg32_known(true_reg1, uval32);
true_32off = tnum_subreg(true_reg1->var_off);
} else {
___mark_reg_known(true_reg1, uval);
true_64off = true_reg1->var_off;
}
break;
case BPF_JNE:
if (is_jmp32) {
__mark_reg32_known(false_reg1, uval32);
false_32off = tnum_subreg(false_reg1->var_off);
} else {
___mark_reg_known(false_reg1, uval);
false_64off = false_reg1->var_off;
}
break;
case BPF_JSET:
if (is_jmp32) {
false_32off = tnum_and(false_32off, tnum_const(~uval32));
if (is_power_of_2(uval32))
true_32off = tnum_or(true_32off,
tnum_const(uval32));
} else {
false_64off = tnum_and(false_64off, tnum_const(~uval));
if (is_power_of_2(uval))
true_64off = tnum_or(true_64off,
tnum_const(uval));
}
break;
case BPF_JGE:
case BPF_JGT:
{
if (is_jmp32) {
u32 false_umax = opcode == BPF_JGT ? uval32 : uval32 - 1;
u32 true_umin = opcode == BPF_JGT ? uval32 + 1 : uval32;
false_reg1->u32_max_value = min(false_reg1->u32_max_value,
false_umax);
true_reg1->u32_min_value = max(true_reg1->u32_min_value,
true_umin);
} else {
u64 false_umax = opcode == BPF_JGT ? uval : uval - 1;
u64 true_umin = opcode == BPF_JGT ? uval + 1 : uval;
false_reg1->umax_value = min(false_reg1->umax_value, false_umax);
true_reg1->umin_value = max(true_reg1->umin_value, true_umin);
}
break;
}
@@ -14393,14 +14512,14 @@ static void reg_set_min_max(struct bpf_reg_state *true_reg,
s32 false_smax = opcode == BPF_JSGT ? sval32 : sval32 - 1;
s32 true_smin = opcode == BPF_JSGT ? sval32 + 1 : sval32;
false_reg->s32_max_value = min(false_reg->s32_max_value, false_smax);
true_reg->s32_min_value = max(true_reg->s32_min_value, true_smin);
} else {
s64 false_smax = opcode == BPF_JSGT ? sval : sval - 1;
s64 true_smin = opcode == BPF_JSGT ? sval + 1 : sval;
false_reg->smax_value = min(false_reg->smax_value, false_smax);
true_reg->smin_value = max(true_reg->smin_value, true_smin);
}
break;
}
s32 false_smax = opcode == BPF_JSGT ? sval32 : sval32 - 1;
s32 true_smin = opcode == BPF_JSGT ? sval32 + 1 : sval32;
false_reg1->s32_max_value = min(false_reg1->s32_max_value, false_smax);
true_reg1->s32_min_value = max(true_reg1->s32_min_value, true_smin);
} else {
s64 false_smax = opcode == BPF_JSGT ? sval : sval - 1;
s64 true_smin = opcode == BPF_JSGT ? sval + 1 : sval;
false_reg1->smax_value = min(false_reg1->smax_value, false_smax);
true_reg1->smin_value = max(true_reg1->smin_value, true_smin);
}
break;
}
@@ -14408,19 +14527,19 @@ static void reg_set_min_max(struct bpf_reg_state *true_reg,
case BPF_JLT:
{
if (is_jmp32) {
u32 false_umin = opcode == BPF_JLT ? val32 : val32 + 1;
u32 true_umax = opcode == BPF_JLT ? val32 - 1 : val32;
false_reg->u32_min_value = max(false_reg->u32_min_value,
false_umin);
true_reg->u32_max_value = min(true_reg->u32_max_value,
true_umax);
} else {
u64 false_umin = opcode == BPF_JLT ? val : val + 1;
u64 true_umax = opcode == BPF_JLT ? val - 1 : val;
false_reg->umin_value = max(false_reg->umin_value, false_umin);
true_reg->umax_value = min(true_reg->umax_value, true_umax);
}
break;
}
case BPF_JLT:
{
if (is_jmp32) {
u32 false_umin = opcode == BPF_JLT ? uval32 : uval32 + 1;
u32 true_umax = opcode == BPF_JLT ? uval32 - 1 : uval32;
false_reg1->u32_min_value = max(false_reg1->u32_min_value,
false_umin);
true_reg1->u32_max_value = min(true_reg1->u32_max_value,
true_umax);
} else {
u64 false_umin = opcode == BPF_JLT ? uval : uval + 1;
u64 true_umax = opcode == BPF_JLT ? uval - 1 : uval;
false_reg1->umin_value = max(false_reg1->umin_value, false_umin);
true_reg1->umax_value = min(true_reg1->umax_value, true_umax);
}
break;
}
@@ -14431,14 +14550,14 @@ static void reg_set_min_max(struct bpf_reg_state *true_reg,
s32 false_smin = opcode == BPF_JSLT ? sval32 : sval32 + 1;
s32 true_smax = opcode == BPF_JSLT ? sval32 - 1 : sval32;
false_reg->s32_min_value = max(false_reg->s32_min_value, false_smin);
true_reg->s32_max_value = min(true_reg->s32_max_value, true_smax);
} else {
s64 false_smin = opcode == BPF_JSLT ? sval : sval + 1;
s64 true_smax = opcode == BPF_JSLT ? sval - 1 : sval;
false_reg->smin_value = max(false_reg->smin_value, false_smin);
true_reg->smax_value = min(true_reg->smax_value, true_smax);
}
break;
}
s32 false_smin = opcode == BPF_JSLT ? sval32 : sval32 + 1;
s32 true_smax = opcode == BPF_JSLT ? sval32 - 1 : sval32;
false_reg1->s32_min_value = max(false_reg1->s32_min_value, false_smin);
true_reg1->s32_max_value = min(true_reg1->s32_max_value, true_smax);
} else {
s64 false_smin = opcode == BPF_JSLT ? sval : sval + 1;
s64 true_smax = opcode == BPF_JSLT ? sval - 1 : sval;
false_reg1->smin_value = max(false_reg1->smin_value, false_smin);
true_reg1->smax_value = min(true_reg1->smax_value, true_smax);
}
break;
}
@@ -14447,36 +14566,20 @@ static void reg_set_min_max(struct bpf_reg_state *true_reg,
}
if (is_jmp32) {
false_reg->var_off = tnum_or(tnum_clear_subreg(false_64off),
tnum_subreg(false_32off));
true_reg->var_off = tnum_or(tnum_clear_subreg(true_64off),
tnum_subreg(true_32off));
__reg_combine_32_into_64(false_reg);
__reg_combine_32_into_64(true_reg);
} else {
false_reg->var_off = false_64off;
true_reg->var_off = true_64off;
__reg_combine_64_into_32(false_reg);
__reg_combine_64_into_32(true_reg);
}
}
}
if (is_jmp32) {
false_reg1->var_off = tnum_or(tnum_clear_subreg(false_64off),
tnum_subreg(false_32off));
true_reg1->var_off = tnum_or(tnum_clear_subreg(true_64off),
tnum_subreg(true_32off));
reg_bounds_sync(false_reg1);
reg_bounds_sync(true_reg1);
} else {
false_reg1->var_off = false_64off;
true_reg1->var_off = true_64off;
reg_bounds_sync(false_reg1);
reg_bounds_sync(true_reg1);
}
}
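As a standalone illustration of what reg_set_min_max() computes for the BPF_JGE/BPF_JGT case above (not kernel code; toy values chosen here): a single unsigned '>' comparison splits one range into a true-branch range and a false-branch range.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t umin = 0, umax = 100, val = 42;
	/* true branch: "r1 > val" held, so umin can only grow past val */
	uint64_t t_umin = umin > val + 1 ? umin : val + 1;
	uint64_t t_umax = umax;
	/* false branch: "r1 <= val" held, so umax can only shrink to val */
	uint64_t f_umin = umin;
	uint64_t f_umax = umax < val ? umax : val;

	printf("true branch:  [%llu, %llu]\n",
	       (unsigned long long)t_umin, (unsigned long long)t_umax);
	printf("false branch: [%llu, %llu]\n",
	       (unsigned long long)f_umin, (unsigned long long)f_umax);
	return 0;
}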
/* Same as above, but for the case that dst_reg holds a constant and src_reg is
* the variable reg.
*/
static void reg_set_min_max_inv(struct bpf_reg_state *true_reg,
struct bpf_reg_state *false_reg,
u64 val, u32 val32,
u8 opcode, bool is_jmp32)
{
opcode = flip_opcode(opcode);
/* This uses zero as "not present in table"; luckily the zero opcode,
* BPF_JA, can't get here.
*/
if (opcode)
reg_set_min_max(true_reg, false_reg, val, val32, opcode, is_jmp32);
}
/* Regs are known to be equal, so intersect their min/max/var_off */
static void __reg_combine_min_max(struct bpf_reg_state *src_reg,
struct bpf_reg_state *dst_reg)
@@ -14706,6 +14809,7 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
struct bpf_reg_state *regs = this_branch->frame[this_branch->curframe]->regs;
struct bpf_reg_state *dst_reg, *other_branch_regs, *src_reg = NULL;
struct bpf_reg_state *eq_branch_regs;
struct bpf_reg_state fake_reg = {};
u8 opcode = BPF_OP(insn->code);
bool is_jmp32;
int pred = -1;
@@ -14746,42 +14850,13 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
verbose(env, "BPF_JMP/JMP32 uses reserved fields\n");
return -EINVAL;
}
src_reg = &fake_reg;
src_reg->type = SCALAR_VALUE;
__mark_reg_known(src_reg, insn->imm);
}
is_jmp32 = BPF_CLASS(insn->code) == BPF_JMP32;
pred = is_branch_taken(dst_reg, src_reg, opcode, is_jmp32);
if (BPF_SRC(insn->code) == BPF_K) {
pred = is_branch_taken(dst_reg, insn->imm, opcode, is_jmp32);
} else if (src_reg->type == SCALAR_VALUE &&
is_jmp32 && tnum_is_const(tnum_subreg(src_reg->var_off))) {
pred = is_branch_taken(dst_reg,
tnum_subreg(src_reg->var_off).value,
opcode,
is_jmp32);
} else if (src_reg->type == SCALAR_VALUE &&
!is_jmp32 && tnum_is_const(src_reg->var_off)) {
pred = is_branch_taken(dst_reg,
src_reg->var_off.value,
opcode,
is_jmp32);
} else if (dst_reg->type == SCALAR_VALUE &&
is_jmp32 && tnum_is_const(tnum_subreg(dst_reg->var_off))) {
pred = is_branch_taken(src_reg,
tnum_subreg(dst_reg->var_off).value,
flip_opcode(opcode),
is_jmp32);
} else if (dst_reg->type == SCALAR_VALUE &&
!is_jmp32 && tnum_is_const(dst_reg->var_off)) {
pred = is_branch_taken(src_reg,
dst_reg->var_off.value,
flip_opcode(opcode),
is_jmp32);
} else if (reg_is_pkt_pointer_any(dst_reg) &&
reg_is_pkt_pointer_any(src_reg) &&
!is_jmp32) {
pred = is_pkt_ptr_branch_taken(dst_reg, src_reg, opcode);
}
if (pred >= 0) {
/* If we get here with a dst_reg pointer type it is because
* above is_branch_taken() special cased the 0 comparison.
@@ -14829,53 +14904,32 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
return -EFAULT;
other_branch_regs = other_branch->frame[other_branch->curframe]->regs;
/* detect if we are comparing against a constant value so we can adjust
* our min/max values for our dst register.
* this is only legit if both are scalars (or pointers to the same
* object, I suppose, see the PTR_MAYBE_NULL related if block below),
* because otherwise the different base pointers mean the offsets aren't
* comparable.
*/
if (BPF_SRC(insn->code) == BPF_X) {
struct bpf_reg_state *src_reg = &regs[insn->src_reg];
if (dst_reg->type == SCALAR_VALUE &&
src_reg->type == SCALAR_VALUE) {
if (tnum_is_const(src_reg->var_off) ||
(is_jmp32 &&
tnum_is_const(tnum_subreg(src_reg->var_off))))
reg_set_min_max(&other_branch_regs[insn->dst_reg],
dst_reg,
src_reg->var_off.value,
tnum_subreg(src_reg->var_off).value,
opcode, is_jmp32);
else if (tnum_is_const(dst_reg->var_off) ||
(is_jmp32 &&
tnum_is_const(tnum_subreg(dst_reg->var_off))))
reg_set_min_max_inv(&other_branch_regs[insn->src_reg],
src_reg,
dst_reg->var_off.value,
tnum_subreg(dst_reg->var_off).value,
opcode, is_jmp32);
else if (!is_jmp32 &&
(opcode == BPF_JEQ || opcode == BPF_JNE))
/* Comparing for equality, we can combine knowledge */
reg_combine_min_max(&other_branch_regs[insn->src_reg],
&other_branch_regs[insn->dst_reg],
src_reg, dst_reg, opcode);
if (src_reg->id &&
!WARN_ON_ONCE(src_reg->id != other_branch_regs[insn->src_reg].id)) {
find_equal_scalars(this_branch, src_reg);
find_equal_scalars(other_branch, &other_branch_regs[insn->src_reg]);
}
}
} else if (dst_reg->type == SCALAR_VALUE) {
reg_set_min_max(&other_branch_regs[insn->dst_reg],
dst_reg, insn->imm, (u32)insn->imm,
opcode, is_jmp32);
}
if (BPF_SRC(insn->code) == BPF_X) {
reg_set_min_max(&other_branch_regs[insn->dst_reg],
&other_branch_regs[insn->src_reg],
dst_reg, src_reg, opcode, is_jmp32);
if (dst_reg->type == SCALAR_VALUE &&
src_reg->type == SCALAR_VALUE &&
!is_jmp32 && (opcode == BPF_JEQ || opcode == BPF_JNE)) {
/* Comparing for equality, we can combine knowledge */
reg_combine_min_max(&other_branch_regs[insn->src_reg],
&other_branch_regs[insn->dst_reg],
src_reg, dst_reg, opcode);
}
} else /* BPF_SRC(insn->code) == BPF_K */ {
reg_set_min_max(&other_branch_regs[insn->dst_reg],
src_reg /* fake one */,
dst_reg, src_reg /* same fake one */,
opcode, is_jmp32);
}
if (BPF_SRC(insn->code) == BPF_X &&
src_reg->type == SCALAR_VALUE && src_reg->id &&
!WARN_ON_ONCE(src_reg->id != other_branch_regs[insn->src_reg].id)) {
find_equal_scalars(this_branch, src_reg);
find_equal_scalars(other_branch, &other_branch_regs[insn->src_reg]);
}
if (dst_reg->type == SCALAR_VALUE && dst_reg->id &&
!WARN_ON_ONCE(dst_reg->id != other_branch_regs[insn->dst_reg].id)) {
find_equal_scalars(this_branch, dst_reg);
...
@@ -5265,6 +5265,7 @@ static size_t get_pprint_mapv_size(enum pprint_mapv_kind_t mapv_kind)
#endif
assert(0);
return 0;
}
static void set_pprint_mapv(enum pprint_mapv_kind_t mapv_kind,
...
@@ -2387,12 +2387,9 @@ static int generate_dummy_prog(void)
const size_t prog_insn_cnt = sizeof(prog_insns) / sizeof(struct bpf_insn);
LIBBPF_OPTS(bpf_prog_load_opts, opts);
const size_t log_buf_sz = 256;
char *log_buf;
char log_buf[log_buf_sz];
int fd = -1;
log_buf = malloc(log_buf_sz);
if (!ASSERT_OK_PTR(log_buf, "log_buf_alloc"))
return fd;
opts.log_buf = log_buf;
opts.log_size = log_buf_sz;
@@ -2402,7 +2399,6 @@ static int generate_dummy_prog(void)
prog_insns, prog_insn_cnt, &opts);
ASSERT_STREQ(log_buf, "", "log_0");
ASSERT_GE(fd, 0, "prog_fd");
free(log_buf);
return fd;
}
...