Commit db53dce9 authored by David S. Miller

Merge branch 'bpf-rewrite-value-tracking-in-verifier'

Edward Cree says:

====================
bpf: rewrite value tracking in verifier

This series simplifies alignment tracking, generalises bounds tracking
and fixes some bounds-tracking bugs in the BPF verifier.  Pointer
arithmetic on packet pointers, stack pointers, map value pointers and
context pointers has been unified, and bounds on these pointers are
only checked when the pointer is dereferenced.

Operations on pointers which destroy all relation to the original
pointer (such as multiplies and shifts) are disallowed if
!env->allow_ptr_leaks, otherwise they convert the pointer to an
unknown scalar and feed it to the normal scalar arithmetic handling.

Pointer types have been unified with the corresponding
adjusted-pointer types where those existed
(e.g. PTR_TO_MAP_VALUE[_ADJ] or FRAME_PTR vs PTR_TO_STACK); similarly,
CONST_IMM and UNKNOWN_VALUE have been unified into SCALAR_VALUE.

Pointer types (except CONST_PTR_TO_MAP, PTR_TO_MAP_VALUE_OR_NULL and
PTR_TO_PACKET_END, which do not allow arithmetic) have a 'fixed
offset' and a 'variable offset'; the former is used when e.g. adding
an immediate or a known-constant register, as long as it does not
overflow.  Otherwise the latter is used, and any operation creating a
new variable offset creates a new 'id' (and, for PTR_TO_PACKET, clears
the 'range').  SCALAR_VALUEs use the 'variable offset' fields to track
the range of possible values; the 'fixed offset' should never be set
on a scalar.
====================
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
parents e1cb90f2 8e17c1b1
@@ -793,7 +793,7 @@ Some core changes of the new internal format:
  bpf_exit
After the call the registers R1-R5 contain junk values and cannot be read.
-In the future an eBPF verifier can be used to validate internal BPF programs.
+An in-kernel eBPF verifier is used to validate internal BPF programs.
Also in the new design, eBPF is limited to 4096 insns, which means that any
program will terminate quickly and will only call a fixed number of kernel
@@ -1017,7 +1017,7 @@ At the start of the program the register R1 contains a pointer to context
and has type PTR_TO_CTX.
If verifier sees an insn that does R2=R1, then R2 has now type
PTR_TO_CTX as well and can be used on the right hand side of expression.
-If R1=PTR_TO_CTX and insn is R2=R1+R1, then R2=UNKNOWN_VALUE,
+If R1=PTR_TO_CTX and insn is R2=R1+R1, then R2=SCALAR_VALUE,
since addition of two valid pointers makes invalid pointer.
(In 'secure' mode verifier will reject any type of pointer arithmetic to make
sure that kernel addresses don't leak to unprivileged users)
@@ -1039,7 +1039,7 @@ is a correct program. If there was R1 instead of R6, it would have
been rejected.
load/store instructions are allowed only with registers of valid types, which
-are PTR_TO_CTX, PTR_TO_MAP, FRAME_PTR. They are bounds and alignment checked.
+are PTR_TO_CTX, PTR_TO_MAP, PTR_TO_STACK. They are bounds and alignment checked.
For example:
 bpf_mov R1 = 1
 bpf_mov R2 = 2
@@ -1058,7 +1058,7 @@ intends to load a word from address R6 + 8 and store it into R0
If R6=PTR_TO_CTX, via is_valid_access() callback the verifier will know
that offset 8 of size 4 bytes can be accessed for reading, otherwise
the verifier will reject the program.
-If R6=FRAME_PTR, then access should be aligned and be within
+If R6=PTR_TO_STACK, then access should be aligned and be within
stack bounds, which are [-MAX_BPF_STACK, 0). In this example offset is 8,
so it will fail verification, since it's out of bounds.
@@ -1069,7 +1069,7 @@ For example:
  bpf_ld R0 = *(u32 *)(R10 - 4)
  bpf_exit
is invalid program.
-Though R10 is correct read-only register and has type FRAME_PTR
+Though R10 is correct read-only register and has type PTR_TO_STACK
and R10 - 4 is within stack bounds, there were no stores into that location.
Pointer register spill/fill is tracked as well, since four (R6-R9)
@@ -1094,6 +1094,71 @@ all use cases.
See details of eBPF verifier in kernel/bpf/verifier.c
Register value tracking
-----------------------
In order to determine the safety of an eBPF program, the verifier must track
the range of possible values in each register and also in each stack slot.
This is done with 'struct bpf_reg_state', defined in include/linux/
bpf_verifier.h, which unifies tracking of scalar and pointer values. Each
register state has a type, which is either NOT_INIT (the register has not been
written to), SCALAR_VALUE (some value which is not usable as a pointer), or a
pointer type. The types of pointers describe their base, as follows:
PTR_TO_CTX Pointer to bpf_context.
CONST_PTR_TO_MAP Pointer to struct bpf_map. "Const" because arithmetic
on these pointers is forbidden.
PTR_TO_MAP_VALUE Pointer to the value stored in a map element.
PTR_TO_MAP_VALUE_OR_NULL
Either a pointer to a map value, or NULL; map accesses
(see section 'eBPF maps', below) return this type,
which becomes a PTR_TO_MAP_VALUE when checked != NULL.
Arithmetic on these pointers is forbidden.
PTR_TO_STACK Frame pointer.
PTR_TO_PACKET skb->data.
PTR_TO_PACKET_END skb->data + headlen; arithmetic forbidden.
However, a pointer may be offset from this base (as a result of pointer
arithmetic), and this is tracked in two parts: the 'fixed offset' and 'variable
offset'. The former is used when an exactly-known value (e.g. an immediate
operand) is added to a pointer, while the latter is used for values which are
not exactly known. The variable offset is also used in SCALAR_VALUEs, to track
the range of possible values in the register.
The verifier's knowledge about the variable offset consists of:
* minimum and maximum values as unsigned
* minimum and maximum values as signed
* knowledge of the values of individual bits, in the form of a 'tnum': a u64
'mask' and a u64 'value'. 1s in the mask represent bits whose value is unknown;
1s in the value represent bits known to be 1. Bits known to be 0 have 0 in both
mask and value; no bit should ever be 1 in both. For example, if a byte is read
into a register from memory, the register's top 56 bits are known zero, while
the low 8 are unknown - which is represented as the tnum (0x0; 0xff). If we
then OR this with 0x40, we get (0x40; 0xbf), then if we add 1 we get (0x0;
0x1ff), because of potential carries.
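
For illustration, the following userspace sketch walks through exactly this
example, using the same OR and ADD algorithms as kernel/bpf/tnum.c (added later
in this series). The standalone program, its includes and the printing are
illustrative only and are not part of the kernel sources:

#include <stdint.h>
#include <stdio.h>

struct tnum { uint64_t value; uint64_t mask; };
#define TNUM(_v, _m) ((struct tnum){ .value = (_v), .mask = (_m) })

/* OR: bits known to be 1 in either operand become known 1 in the result */
static struct tnum tnum_or(struct tnum a, struct tnum b)
{
	uint64_t v = a.value | b.value;
	uint64_t mu = a.mask | b.mask;

	return TNUM(v, mu & ~v);
}

/* ADD: unknown bits propagate through the possible carries */
static struct tnum tnum_add(struct tnum a, struct tnum b)
{
	uint64_t sm = a.mask + b.mask;
	uint64_t sv = a.value + b.value;
	uint64_t sigma = sm + sv;
	uint64_t chi = sigma ^ sv;
	uint64_t mu = chi | a.mask | b.mask;

	return TNUM(sv & ~mu, mu);
}

int main(void)
{
	struct tnum byte = TNUM(0x0, 0xff);               /* unknown byte read from memory */
	struct tnum ored = tnum_or(byte, TNUM(0x40, 0));  /* | 0x40  -> (0x40; 0xbf) */
	struct tnum sum  = tnum_add(ored, TNUM(1, 0));    /* + 1     -> (0x0; 0x1ff) */

	printf("(%#llx; %#llx)\n", (unsigned long long)ored.value, (unsigned long long)ored.mask);
	printf("(%#llx; %#llx)\n", (unsigned long long)sum.value, (unsigned long long)sum.mask);
	return 0;
}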
Besides arithmetic, the register state can also be updated by conditional
branches. For instance, if a SCALAR_VALUE is compared > 8, in the 'true' branch
it will have a umin_value (unsigned minimum value) of 9, whereas in the 'false'
branch it will have a umax_value of 8. A signed compare (with BPF_JSGT or
BPF_JSGE) would instead update the signed minimum/maximum values. Information
from the signed and unsigned bounds can be combined; for instance if a value is
first tested < 8 and then tested s> 4, the verifier will conclude that the value
is also > 4 and s< 8, since the bounds prevent crossing the sign boundary.
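
A minimal sketch of that last deduction, mirroring the idea of
__reg_deduce_bounds() (added in this series) but written as ordinary userspace
C with made-up names; it reproduces the "> 4 and s< 8" result:

#include <stdint.h>
#include <stdio.h>

struct bounds {
	int64_t  smin, smax;	/* signed bounds */
	uint64_t umin, umax;	/* unsigned bounds */
};

int main(void)
{
	struct bounds x = {
		.smin = INT64_MIN, .smax = INT64_MAX,
		.umin = 0,         .umax = UINT64_MAX,
	};

	x.umax = 7;	/* 'true' branch of the unsigned test x < 8 */
	x.smin = 5;	/* 'true' branch of the signed test x s> 4 */

	/* smin >= 0 (or smax < 0) means the value cannot cross the sign
	 * boundary, so the signed and unsigned views describe the same set
	 * of numbers and their bounds can be merged.
	 */
	if (x.smin >= 0 || x.smax < 0) {
		uint64_t lo = (uint64_t)x.smin > x.umin ? (uint64_t)x.smin : x.umin;
		uint64_t hi = (uint64_t)x.smax < x.umax ? (uint64_t)x.smax : x.umax;

		x.smin = x.umin = lo;	/* 5 */
		x.smax = x.umax = hi;	/* 7 */
	}
	printf("umin=%llu umax=%llu smin=%lld smax=%lld\n",
	       (unsigned long long)x.umin, (unsigned long long)x.umax,
	       (long long)x.smin, (long long)x.smax);
	return 0;
}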
PTR_TO_PACKETs with a variable offset part have an 'id', which is common to all
pointers sharing that same variable offset. This is important for packet range
checks: after adding some variable to a packet pointer, if you then copy it to
another register and (say) add a constant 4, both registers will share the same
'id' but one will have a fixed offset of +4. Then if it is bounds-checked and
found to be less than a PTR_TO_PACKET_END, the other register is now known to
have a safe range of at least 4 bytes. See 'Direct packet access', below, for
more on PTR_TO_PACKET ranges.
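
In program terms this is the familiar cls_bpf pattern. The sketch below is
illustrative (the function and variable names are made up), but the plain
data/data_end comparison is exactly the bounds check that grants the packet
pointers their range:

#include <linux/bpf.h>
#include <linux/pkt_cls.h>

int classify(struct __sk_buff *skb)
{
	unsigned char *data     = (unsigned char *)(long)skb->data;     /* PTR_TO_PACKET */
	unsigned char *data_end = (unsigned char *)(long)skb->data_end; /* PTR_TO_PACKET_END */
	unsigned char *hdr      = data;          /* copy; shares id/range with 'data' */

	/* one check grants a 4-byte safe range to both 'data' and 'hdr' */
	if (data + 4 > data_end)
		return TC_ACT_SHOT;

	return hdr[0] == 0x45 ? TC_ACT_OK : TC_ACT_SHOT;  /* loads are now safe */
}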
The 'id' field is also used on PTR_TO_MAP_VALUE_OR_NULL, common to all copies of
the pointer returned from a map lookup. This means that when one copy is
checked and found to be non-NULL, all copies can become PTR_TO_MAP_VALUEs.
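
A typical shape of that flow, as a hedged sketch: the map name, function name
and the libbpf-style map declaration are illustrative and not part of this
patch, but bpf_map_lookup_elem() is the standard helper being described:

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 1);
	__type(key, __u32);
	__type(value, __u64);
} counters SEC(".maps");

int count(void *ctx)
{
	__u32 key = 0;
	__u64 *val = bpf_map_lookup_elem(&counters, &key); /* PTR_TO_MAP_VALUE_OR_NULL */

	if (!val)	/* NULL check: val (and any copies) become PTR_TO_MAP_VALUE */
		return 0;
	(*val)++;	/* plain map-value access, bounds/alignment checked */
	return 0;
}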
As well as range-checking, the tracked information is also used for enforcing
alignment of pointer accesses. For instance, on most systems the packet pointer
is 2 bytes after a 4-byte alignment. If a program adds 14 bytes to that to jump
over the Ethernet header, then reads IHL and adds (IHL * 4), the resulting
pointer will have a variable offset known to be 4n+2 for some n, so adding the 2
bytes (NET_IP_ALIGN) gives a 4-byte alignment and so word-sized accesses through
that pointer are safe.
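
A small userspace sketch of that alignment argument, using the same alignment
test as tnum_is_aligned() from this series (the standalone program and its
names are illustrative only):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct tnum { uint64_t value; uint64_t mask; };

/* same test as the kernel's tnum_is_aligned(): no possibly-set bit below 'size' */
static bool tnum_is_aligned(struct tnum a, uint64_t size)
{
	return !size || !((a.value | a.mask) & (size - 1));
}

int main(void)
{
	/* variable part of the offset: (IHL & 0xf) << 2, an unknown multiple of 4 */
	struct tnum var_off = { .value = 0x0, .mask = 0x3c };
	/* fixed part: 14 (Ethernet header) + 2 (NET_IP_ALIGN) */
	int fixed_off = 14 + 2;

	/* both parts are multiples of 4, so a 4-byte access is aligned */
	printf("aligned: %d\n", tnum_is_aligned(var_off, 4) && fixed_off % 4 == 0);
	return 0;
}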
Direct packet access
--------------------
In cls_bpf and act_bpf programs the verifier allows direct access to the packet
@@ -1121,7 +1186,7 @@ it now points to 'skb->data + 14' and accessible range is [R5, R5 + 14 - 14)
which is zero bytes.
More complex packet access may look like:
-R0=imm1 R1=ctx R3=pkt(id=0,off=0,r=14) R4=pkt_end R5=pkt(id=0,off=14,r=14) R10=fp
+R0=inv1 R1=ctx R3=pkt(id=0,off=0,r=14) R4=pkt_end R5=pkt(id=0,off=14,r=14) R10=fp
6: r0 = *(u8 *)(r3 +7) /* load 7th byte from the packet */
7: r4 = *(u8 *)(r3 +12)
8: r4 *= 14
@@ -1135,26 +1200,31 @@ More complex packet access may look like:
16: r2 += 8
17: r1 = *(u32 *)(r1 +80) /* load skb->data_end */
18: if r2 > r1 goto pc+2
-R0=inv56 R1=pkt_end R2=pkt(id=2,off=8,r=8) R3=pkt(id=2,off=0,r=8) R4=inv52 R5=pkt(id=0,off=14,r=14) R10=fp
+R0=inv(id=0,umax_value=255,var_off=(0x0; 0xff)) R1=pkt_end R2=pkt(id=2,off=8,r=8) R3=pkt(id=2,off=0,r=8) R4=inv(id=0,umax_value=3570,var_off=(0x0; 0xfffe)) R5=pkt(id=0,off=14,r=14) R10=fp
19: r1 = *(u8 *)(r3 +4)
The state of the register R3 is R3=pkt(id=2,off=0,r=8)
id=2 means that two 'r3 += rX' instructions were seen, so r3 points to some
offset within a packet and since the program author did
'if (r3 + 8 > r1) goto err' at insn #18, the safe range is [R3, R3 + 8).
-The verifier only allows 'add' operation on packet registers. Any other
-operation will set the register state to 'unknown_value' and it won't be
+The verifier only allows 'add'/'sub' operations on packet registers. Any other
+operation will set the register state to 'SCALAR_VALUE' and it won't be
available for direct packet access.
Operation 'r3 += rX' may overflow and become less than original skb->data,
-therefore the verifier has to prevent that. So it tracks the number of
-upper zero bits in all 'uknown_value' registers, so when it sees
-'r3 += rX' instruction and rX is more than 16-bit value, it will error as:
-"cannot add integer value with N upper zero bits to ptr_to_packet"
+therefore the verifier has to prevent that. So when it sees 'r3 += rX'
+instruction and rX is more than 16-bit value, any subsequent bounds-check of r3
+against skb->data_end will not give us 'range' information, so attempts to read
+through the pointer will give "invalid access to packet" error.
Ex. after insn 'r4 = *(u8 *)(r3 +12)' (insn #7 above) the state of r4 is
-R4=inv56 which means that upper 56 bits on the register are guaranteed
-to be zero. After insn 'r4 *= 14' the state becomes R4=inv52, since
-multiplying 8-bit value by constant 14 will keep upper 52 bits as zero.
-Similarly 'r2 >>= 48' will make R2=inv48, since the shift is not sign
-extending. This logic is implemented in evaluate_reg_alu() function.
+R4=inv(id=0,umax_value=255,var_off=(0x0; 0xff)) which means that upper 56 bits
+of the register are guaranteed to be zero, and nothing is known about the lower
+8 bits. After insn 'r4 *= 14' the state becomes
+R4=inv(id=0,umax_value=3570,var_off=(0x0; 0xfffe)), since multiplying an 8-bit
+value by constant 14 will keep upper 52 bits as zero, also the least significant
+bit will be zero as 14 is even. Similarly 'r2 >>= 48' will make
+R2=inv(id=0,umax_value=65535,var_off=(0x0; 0xffff)), since the shift is not sign
+extending. This logic is implemented in adjust_reg_min_max_vals() function,
+which calls adjust_ptr_min_max_vals() for adding pointer to scalar (or vice
+versa) and adjust_scalar_min_max_vals() for operations on two scalars.
The end result is that bpf program author can access packet directly
using normal C code as:
@@ -1214,6 +1284,22 @@ The map is defined by:
. key size in bytes
. value size in bytes
Pruning
-------
The verifier does not actually walk all possible paths through the program. For
each new branch to analyse, the verifier looks at all the states it's previously
been in when at this instruction. If any of them contain the current state as a
subset, the branch is 'pruned' - that is, the fact that the previous state was
accepted implies the current state would be as well. For instance, if in the
previous state, r1 held a packet-pointer, and in the current state, r1 holds a
packet-pointer with a range as long or longer and at least as strict an
alignment, then r1 is safe. Similarly, if r2 was NOT_INIT before then it can't
have been used by any path from that point, so any value in r2 (including
another NOT_INIT) is safe. The implementation is in the function regsafe().
Pruning considers not only the registers but also the stack (and any spilled
registers it may hold). They must all be safe for the branch to be pruned.
This is implemented in states_equal().
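
As a rough sketch of the inclusion test for one scalar register (the real
checks live in regsafe() and states_equal() and also cover pointer types, ids
and the stack), written as standalone C with illustrative names:

#include <stdbool.h>
#include <stdint.h>

struct scalar_state {
	int64_t  smin, smax;			/* signed bounds */
	uint64_t umin, umax;			/* unsigned bounds */
	uint64_t var_off_value, var_off_mask;	/* tnum */
};

/* true if every value the current state allows was already allowed by the
 * old, already-verified state, i.e. the old state includes the new one
 */
bool scalar_state_included(const struct scalar_state *old,
			   const struct scalar_state *cur)
{
	bool bounds_ok = old->smin <= cur->smin && old->smax >= cur->smax &&
			 old->umin <= cur->umin && old->umax >= cur->umax;
	/* tnum_in(old, cur): cur may not know less than old did, and must
	 * agree with old on every bit old already knew
	 */
	bool tnum_ok = !(cur->var_off_mask & ~old->var_off_mask) &&
		       (cur->var_off_value & ~old->var_off_mask) == old->var_off_value;

	return bounds_ok && tnum_ok;
}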
Understanding eBPF verifier messages
------------------------------------
......
@@ -79,28 +79,32 @@ nfp_bpf_check_exit(struct nfp_prog *nfp_prog,
		   const struct bpf_verifier_env *env)
{
	const struct bpf_reg_state *reg0 = &env->cur_state.regs[0];
+	u64 imm;

	if (nfp_prog->act == NN_ACT_XDP)
		return 0;

-	if (reg0->type != CONST_IMM) {
-		pr_info("unsupported exit state: %d, imm: %llx\n",
-			reg0->type, reg0->imm);
+	if (!(reg0->type == SCALAR_VALUE && tnum_is_const(reg0->var_off))) {
+		char tn_buf[48];
+
+		tnum_strn(tn_buf, sizeof(tn_buf), reg0->var_off);
+		pr_info("unsupported exit state: %d, var_off: %s\n",
+			reg0->type, tn_buf);
		return -EINVAL;
	}

-	if (nfp_prog->act != NN_ACT_DIRECT &&
-	    reg0->imm != 0 && (reg0->imm & ~0U) != ~0U) {
+	imm = reg0->var_off.value;
+	if (nfp_prog->act != NN_ACT_DIRECT && imm != 0 && (imm & ~0U) != ~0U) {
		pr_info("unsupported exit state: %d, imm: %llx\n",
-			reg0->type, reg0->imm);
+			reg0->type, imm);
		return -EINVAL;
	}

-	if (nfp_prog->act == NN_ACT_DIRECT && reg0->imm <= TC_ACT_REDIRECT &&
-	    reg0->imm != TC_ACT_SHOT && reg0->imm != TC_ACT_STOLEN &&
-	    reg0->imm != TC_ACT_QUEUED) {
+	if (nfp_prog->act == NN_ACT_DIRECT && imm <= TC_ACT_REDIRECT &&
+	    imm != TC_ACT_SHOT && imm != TC_ACT_STOLEN &&
+	    imm != TC_ACT_QUEUED) {
		pr_info("unsupported exit state: %d, imm: %llx\n",
-			reg0->type, reg0->imm);
+			reg0->type, imm);
		return -EINVAL;
	}
......
@@ -117,35 +117,25 @@ enum bpf_access_type {
};

/* types of values stored in eBPF registers */
+/* Pointer types represent:
+ * pointer
+ * pointer + imm
+ * pointer + (u16) var
+ * pointer + (u16) var + imm
+ * if (range > 0) then [ptr, ptr + range - off) is safe to access
+ * if (id > 0) means that some 'var' was added
+ * if (off > 0) means that 'imm' was added
+ */
enum bpf_reg_type {
	NOT_INIT = 0,		 /* nothing was written into register */
-	UNKNOWN_VALUE,		 /* reg doesn't contain a valid pointer */
+	SCALAR_VALUE,		 /* reg doesn't contain a valid pointer */
	PTR_TO_CTX,		 /* reg points to bpf_context */
	CONST_PTR_TO_MAP,	 /* reg points to struct bpf_map */
	PTR_TO_MAP_VALUE,	 /* reg points to map element value */
	PTR_TO_MAP_VALUE_OR_NULL,/* points to map elem value or NULL */
-	FRAME_PTR,		 /* reg == frame_pointer */
-	PTR_TO_STACK,		 /* reg == frame_pointer + imm */
-	CONST_IMM,		 /* constant integer value */
-
-	/* PTR_TO_PACKET represents:
-	 * skb->data
-	 * skb->data + imm
-	 * skb->data + (u16) var
-	 * skb->data + (u16) var + imm
-	 * if (range > 0) then [ptr, ptr + range - off) is safe to access
-	 * if (id > 0) means that some 'var' was added
-	 * if (off > 0) menas that 'imm' was added
-	 */
-	PTR_TO_PACKET,
+	PTR_TO_STACK,		 /* reg == frame_pointer + offset */
+	PTR_TO_PACKET,		 /* reg points to skb->data */
	PTR_TO_PACKET_END,	 /* skb->data + headlen */
-
-	/* PTR_TO_MAP_VALUE_ADJ is used for doing pointer math inside of a map
-	 * elem value. We only allow this if we can statically verify that
-	 * access from this register are going to fall within the size of the
-	 * map element.
-	 */
-	PTR_TO_MAP_VALUE_ADJ,
};
struct bpf_prog; struct bpf_prog;
......
@@ -9,41 +9,54 @@

#include <linux/bpf.h> /* for enum bpf_reg_type */
#include <linux/filter.h> /* for MAX_BPF_STACK */
+#include <linux/tnum.h>

-/* Just some arbitrary values so we can safely do math without overflowing and
- * are obviously wrong for any sort of memory access.
- */
-#define BPF_REGISTER_MAX_RANGE (1024 * 1024 * 1024)
-#define BPF_REGISTER_MIN_RANGE -1
+/* Maximum variable offset umax_value permitted when resolving memory accesses.
+ * In practice this is far bigger than any realistic pointer offset; this limit
+ * ensures that umax_value + (int)off + (int)size cannot overflow a u64.
+ */
+#define BPF_MAX_VAR_OFF	(1ULL << 31)
+/* Maximum variable size permitted for ARG_CONST_SIZE[_OR_ZERO]. This ensures
+ * that converting umax_value to int cannot overflow.
+ */
+#define BPF_MAX_VAR_SIZ	INT_MAX

struct bpf_reg_state {
	enum bpf_reg_type type;
	union {
-		/* valid when type == CONST_IMM | PTR_TO_STACK | UNKNOWN_VALUE */
-		s64 imm;
-
-		/* valid when type == PTR_TO_PACKET* */
-		struct {
-			u16 off;
-			u16 range;
-		};
+		/* valid when type == PTR_TO_PACKET */
+		u16 range;

		/* valid when type == CONST_PTR_TO_MAP | PTR_TO_MAP_VALUE |
		 *   PTR_TO_MAP_VALUE_OR_NULL
		 */
		struct bpf_map *map_ptr;
	};
+	/* Fixed part of pointer offset, pointer types only */
+	s32 off;
+	/* For PTR_TO_PACKET, used to find other pointers with the same variable
+	 * offset, so they can share range knowledge.
+	 * For PTR_TO_MAP_VALUE_OR_NULL this is used to share which map value we
+	 * came from, when one is tested for != NULL.
+	 */
	u32 id;
+	/* These five fields must be last. See states_equal() */
+	/* For scalar types (SCALAR_VALUE), this represents our knowledge of
+	 * the actual value.
+	 * For pointer types, this represents the variable part of the offset
+	 * from the pointed-to object, and is shared with all bpf_reg_states
+	 * with the same id as us.
+	 */
+	struct tnum var_off;
	/* Used to determine if any memory access using this register will
-	 * result in a bad access. These two fields must be last.
-	 * See states_equal()
+	 * result in a bad access.
+	 * These refer to the same value as var_off, not necessarily the actual
+	 * contents of the register.
	 */
-	s64 min_value;
-	u64 max_value;
-	u32 min_align;
-	u32 aux_off;
-	u32 aux_off_align;
-	bool value_from_signed;
+	s64 smin_value; /* minimum possible (s64)value */
+	s64 smax_value; /* maximum possible (s64)value */
+	u64 umin_value; /* minimum possible (u64)value */
+	u64 umax_value; /* maximum possible (u64)value */
};

enum bpf_stack_slot_type {
......
/* tnum: tracked (or tristate) numbers
*
* A tnum tracks knowledge about the bits of a value. Each bit can be either
* known (0 or 1), or unknown (x). Arithmetic operations on tnums will
* propagate the unknown bits such that the tnum result represents all the
* possible results for possible values of the operands.
*/
#include <linux/types.h>
struct tnum {
u64 value;
u64 mask;
};
/* Constructors */
/* Represent a known constant as a tnum. */
struct tnum tnum_const(u64 value);
/* A completely unknown value */
extern const struct tnum tnum_unknown;
/* A value that's unknown except that @min <= value <= @max */
struct tnum tnum_range(u64 min, u64 max);
/* Arithmetic and logical ops */
/* Shift a tnum left (by a fixed shift) */
struct tnum tnum_lshift(struct tnum a, u8 shift);
/* Shift a tnum right (by a fixed shift) */
struct tnum tnum_rshift(struct tnum a, u8 shift);
/* Add two tnums, return @a + @b */
struct tnum tnum_add(struct tnum a, struct tnum b);
/* Subtract two tnums, return @a - @b */
struct tnum tnum_sub(struct tnum a, struct tnum b);
/* Bitwise-AND, return @a & @b */
struct tnum tnum_and(struct tnum a, struct tnum b);
/* Bitwise-OR, return @a | @b */
struct tnum tnum_or(struct tnum a, struct tnum b);
/* Bitwise-XOR, return @a ^ @b */
struct tnum tnum_xor(struct tnum a, struct tnum b);
/* Multiply two tnums, return @a * @b */
struct tnum tnum_mul(struct tnum a, struct tnum b);
/* Return a tnum representing numbers satisfying both @a and @b */
struct tnum tnum_intersect(struct tnum a, struct tnum b);
/* Return @a with all but the lowest @size bytes cleared */
struct tnum tnum_cast(struct tnum a, u8 size);
/* Returns true if @a is a known constant */
static inline bool tnum_is_const(struct tnum a)
{
return !a.mask;
}
/* Returns true if @a == tnum_const(@b) */
static inline bool tnum_equals_const(struct tnum a, u64 b)
{
return tnum_is_const(a) && a.value == b;
}
/* Returns true if @a is completely unknown */
static inline bool tnum_is_unknown(struct tnum a)
{
return !~a.mask;
}
/* Returns true if @a is known to be a multiple of @size.
* @size must be a power of two.
*/
bool tnum_is_aligned(struct tnum a, u64 size);
/* Returns true if @b represents a subset of @a. */
bool tnum_in(struct tnum a, struct tnum b);
/* Formatting functions. These have snprintf-like semantics: they will write
* up to @size bytes (including the terminating NUL byte), and return the number
* of bytes (excluding the terminating NUL) which would have been written had
* sufficient space been available. (Thus tnum_sbin always returns 64.)
*/
/* Format a tnum as a pair of hex numbers (value; mask) */
int tnum_strn(char *str, size_t size, struct tnum a);
/* Format a tnum as tristate binary expansion */
int tnum_sbin(char *str, size_t size, struct tnum a);
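
For orientation, a short illustrative use of the interface above, assuming
kernel context and the implementations in kernel/bpf/tnum.c below; the demo
function is made up and merely reproduces the 'r2 >>= 48' figure from the
documentation change in this series:

#include <linux/tnum.h>

/* Model a register whose contents are completely unknown, then shift it
 * right by 48: the result is known to fit in 16 bits.
 */
static void tnum_shift_demo(void)
{
	struct tnum r2 = tnum_unknown;
	char buf[48];

	r2 = tnum_rshift(r2, 48);		/* the docs write this as (0x0; 0xffff) */
	tnum_strn(buf, sizeof(buf), r2);	/* formats r2 as "(value; mask)" */

	/* a later unsigned bounds check can tighten it further, as
	 * __reg_bound_offset() does in the verifier
	 */
	r2 = tnum_intersect(r2, tnum_range(0, 255));	/* now (0x0; 0xff) */
}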
obj-y := core.o
-obj-$(CONFIG_BPF_SYSCALL) += syscall.o verifier.o inode.o helpers.o
+obj-$(CONFIG_BPF_SYSCALL) += syscall.o verifier.o inode.o helpers.o tnum.o
obj-$(CONFIG_BPF_SYSCALL) += hashtab.o arraymap.o percpu_freelist.o bpf_lru_list.o lpm_trie.o map_in_map.o
ifeq ($(CONFIG_NET),y)
obj-$(CONFIG_BPF_SYSCALL) += devmap.o
......
/* tnum: tracked (or tristate) numbers
*
* A tnum tracks knowledge about the bits of a value. Each bit can be either
* known (0 or 1), or unknown (x). Arithmetic operations on tnums will
* propagate the unknown bits such that the tnum result represents all the
* possible results for possible values of the operands.
*/
#include <linux/kernel.h>
#include <linux/tnum.h>
#define TNUM(_v, _m) (struct tnum){.value = _v, .mask = _m}
/* A completely unknown value */
const struct tnum tnum_unknown = { .value = 0, .mask = -1 };
struct tnum tnum_const(u64 value)
{
return TNUM(value, 0);
}
struct tnum tnum_range(u64 min, u64 max)
{
u64 chi = min ^ max, delta;
u8 bits = fls64(chi);
/* special case, needed because 1ULL << 64 is undefined */
if (bits > 63)
return tnum_unknown;
/* e.g. if chi = 4, bits = 3, delta = (1<<3) - 1 = 7.
* if chi = 0, bits = 0, delta = (1<<0) - 1 = 0, so we return
* constant min (since min == max).
*/
delta = (1ULL << bits) - 1;
return TNUM(min & ~delta, delta);
}
struct tnum tnum_lshift(struct tnum a, u8 shift)
{
return TNUM(a.value << shift, a.mask << shift);
}
struct tnum tnum_rshift(struct tnum a, u8 shift)
{
return TNUM(a.value >> shift, a.mask >> shift);
}
struct tnum tnum_add(struct tnum a, struct tnum b)
{
u64 sm, sv, sigma, chi, mu;
sm = a.mask + b.mask;
sv = a.value + b.value;
sigma = sm + sv;
chi = sigma ^ sv;
mu = chi | a.mask | b.mask;
return TNUM(sv & ~mu, mu);
}
struct tnum tnum_sub(struct tnum a, struct tnum b)
{
u64 dv, alpha, beta, chi, mu;
dv = a.value - b.value;
alpha = dv + a.mask;
beta = dv - b.mask;
chi = alpha ^ beta;
mu = chi | a.mask | b.mask;
return TNUM(dv & ~mu, mu);
}
struct tnum tnum_and(struct tnum a, struct tnum b)
{
u64 alpha, beta, v;
alpha = a.value | a.mask;
beta = b.value | b.mask;
v = a.value & b.value;
return TNUM(v, alpha & beta & ~v);
}
struct tnum tnum_or(struct tnum a, struct tnum b)
{
u64 v, mu;
v = a.value | b.value;
mu = a.mask | b.mask;
return TNUM(v, mu & ~v);
}
struct tnum tnum_xor(struct tnum a, struct tnum b)
{
u64 v, mu;
v = a.value ^ b.value;
mu = a.mask | b.mask;
return TNUM(v & ~mu, mu);
}
/* half-multiply add: acc += (unknown * mask * value).
* An intermediate step in the multiply algorithm.
*/
static struct tnum hma(struct tnum acc, u64 value, u64 mask)
{
while (mask) {
if (mask & 1)
acc = tnum_add(acc, TNUM(0, value));
mask >>= 1;
value <<= 1;
}
return acc;
}
struct tnum tnum_mul(struct tnum a, struct tnum b)
{
struct tnum acc;
u64 pi;
pi = a.value * b.value;
acc = hma(TNUM(pi, 0), a.mask, b.mask | b.value);
return hma(acc, b.mask, a.value);
}
/* Note that if a and b disagree - i.e. one has a 'known 1' where the other has
* a 'known 0' - this will return a 'known 1' for that bit.
*/
struct tnum tnum_intersect(struct tnum a, struct tnum b)
{
u64 v, mu;
v = a.value | b.value;
mu = a.mask & b.mask;
return TNUM(v & ~mu, mu);
}
struct tnum tnum_cast(struct tnum a, u8 size)
{
a.value &= (1ULL << (size * 8)) - 1;
a.mask &= (1ULL << (size * 8)) - 1;
return a;
}
bool tnum_is_aligned(struct tnum a, u64 size)
{
if (!size)
return true;
return !((a.value | a.mask) & (size - 1));
}
bool tnum_in(struct tnum a, struct tnum b)
{
if (b.mask & ~a.mask)
return false;
b.value &= ~a.mask;
return a.value == b.value;
}
int tnum_strn(char *str, size_t size, struct tnum a)
{
return snprintf(str, size, "(%#llx; %#llx)", a.value, a.mask);
}
EXPORT_SYMBOL_GPL(tnum_strn);
int tnum_sbin(char *str, size_t size, struct tnum a)
{
size_t n;
for (n = 64; n; n--) {
if (n < size) {
if (a.mask & 1)
str[n - 1] = 'x';
else if (a.value & 1)
str[n - 1] = '1';
else
str[n - 1] = '0';
}
a.mask >>= 1;
a.value >>= 1;
}
str[min(size - 1, (size_t)64)] = 0;
return 64;
}
@@ -61,12 +61,12 @@
 * (and -20 constant is saved for further stack bounds checking).
 * Meaning that this reg is a pointer to stack plus known immediate constant.
 *
- * Most of the time the registers have UNKNOWN_VALUE type, which
+ * Most of the time the registers have SCALAR_VALUE type, which
 * means the register has some value, but it's not a valid pointer.
- * (like pointer plus pointer becomes UNKNOWN_VALUE type)
+ * (like pointer plus pointer becomes SCALAR_VALUE type)
 *
 * When verifier sees load or store instructions the type of base register
- * can be: PTR_TO_MAP_VALUE, PTR_TO_CTX, FRAME_PTR. These are three pointer
+ * can be: PTR_TO_MAP_VALUE, PTR_TO_CTX, PTR_TO_STACK. These are three pointer
 * types recognized by check_mem_access() function.
 *
 * PTR_TO_MAP_VALUE means that this register is pointing to 'map element value'
@@ -140,7 +140,7 @@ struct bpf_verifier_stack_elem {
	struct bpf_verifier_stack_elem *next;
};

-#define BPF_COMPLEXITY_LIMIT_INSNS	98304
+#define BPF_COMPLEXITY_LIMIT_INSNS	131072
#define BPF_COMPLEXITY_LIMIT_STACK	1024

#define BPF_MAP_PTR_POISON ((void *)0xeB9F + POISON_POINTER_DELTA)
@@ -180,15 +180,12 @@ static __printf(1, 2) void verbose(const char *fmt, ...)
/* string representation of 'enum bpf_reg_type' */
static const char * const reg_type_str[] = {
	[NOT_INIT]		= "?",
-	[UNKNOWN_VALUE]		= "inv",
+	[SCALAR_VALUE]		= "inv",
	[PTR_TO_CTX]		= "ctx",
	[CONST_PTR_TO_MAP]	= "map_ptr",
	[PTR_TO_MAP_VALUE]	= "map_value",
	[PTR_TO_MAP_VALUE_OR_NULL] = "map_value_or_null",
-	[PTR_TO_MAP_VALUE_ADJ]	= "map_value_adj",
-	[FRAME_PTR]		= "fp",
	[PTR_TO_STACK]		= "fp",
-	[CONST_IMM]		= "imm",
	[PTR_TO_PACKET]		= "pkt",
	[PTR_TO_PACKET_END]	= "pkt_end",
};
...@@ -221,32 +218,52 @@ static void print_verifier_state(struct bpf_verifier_state *state) ...@@ -221,32 +218,52 @@ static void print_verifier_state(struct bpf_verifier_state *state)
if (t == NOT_INIT) if (t == NOT_INIT)
continue; continue;
verbose(" R%d=%s", i, reg_type_str[t]); verbose(" R%d=%s", i, reg_type_str[t]);
if (t == CONST_IMM || t == PTR_TO_STACK) if ((t == SCALAR_VALUE || t == PTR_TO_STACK) &&
verbose("%lld", reg->imm); tnum_is_const(reg->var_off)) {
else if (t == PTR_TO_PACKET) /* reg->off should be 0 for SCALAR_VALUE */
verbose("(id=%d,off=%d,r=%d)", verbose("%lld", reg->var_off.value + reg->off);
reg->id, reg->off, reg->range); } else {
else if (t == UNKNOWN_VALUE && reg->imm) verbose("(id=%d", reg->id);
verbose("%lld", reg->imm); if (t != SCALAR_VALUE)
else if (t == CONST_PTR_TO_MAP || t == PTR_TO_MAP_VALUE || verbose(",off=%d", reg->off);
t == PTR_TO_MAP_VALUE_OR_NULL || if (t == PTR_TO_PACKET)
t == PTR_TO_MAP_VALUE_ADJ) verbose(",r=%d", reg->range);
verbose("(ks=%d,vs=%d,id=%u)", else if (t == CONST_PTR_TO_MAP ||
reg->map_ptr->key_size, t == PTR_TO_MAP_VALUE ||
reg->map_ptr->value_size, t == PTR_TO_MAP_VALUE_OR_NULL)
reg->id); verbose(",ks=%d,vs=%d",
if (reg->min_value != BPF_REGISTER_MIN_RANGE) reg->map_ptr->key_size,
verbose(",min_value=%lld", reg->map_ptr->value_size);
(long long)reg->min_value); if (tnum_is_const(reg->var_off)) {
if (reg->max_value != BPF_REGISTER_MAX_RANGE) /* Typically an immediate SCALAR_VALUE, but
verbose(",max_value=%llu", * could be a pointer whose offset is too big
(unsigned long long)reg->max_value); * for reg->off
if (reg->min_align) */
verbose(",min_align=%u", reg->min_align); verbose(",imm=%llx", reg->var_off.value);
if (reg->aux_off) } else {
verbose(",aux_off=%u", reg->aux_off); if (reg->smin_value != reg->umin_value &&
if (reg->aux_off_align) reg->smin_value != S64_MIN)
verbose(",aux_off_align=%u", reg->aux_off_align); verbose(",smin_value=%lld",
(long long)reg->smin_value);
if (reg->smax_value != reg->umax_value &&
reg->smax_value != S64_MAX)
verbose(",smax_value=%lld",
(long long)reg->smax_value);
if (reg->umin_value != 0)
verbose(",umin_value=%llu",
(unsigned long long)reg->umin_value);
if (reg->umax_value != U64_MAX)
verbose(",umax_value=%llu",
(unsigned long long)reg->umax_value);
if (!tnum_is_unknown(reg->var_off)) {
char tn_buf[48];
tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off);
verbose(",var_off=%s", tn_buf);
}
}
verbose(")");
}
} }
for (i = 0; i < MAX_BPF_STACK; i += BPF_REG_SIZE) { for (i = 0; i < MAX_BPF_STACK; i += BPF_REG_SIZE) {
if (state->stack_slot_type[i] == STACK_SPILL) if (state->stack_slot_type[i] == STACK_SPILL)
...@@ -463,56 +480,161 @@ static const int caller_saved[CALLER_SAVED_REGS] = { ...@@ -463,56 +480,161 @@ static const int caller_saved[CALLER_SAVED_REGS] = {
BPF_REG_0, BPF_REG_1, BPF_REG_2, BPF_REG_3, BPF_REG_4, BPF_REG_5 BPF_REG_0, BPF_REG_1, BPF_REG_2, BPF_REG_3, BPF_REG_4, BPF_REG_5
}; };
static void mark_reg_not_init(struct bpf_reg_state *regs, u32 regno) static void __mark_reg_not_init(struct bpf_reg_state *reg);
/* Mark the unknown part of a register (variable offset or scalar value) as
* known to have the value @imm.
*/
static void __mark_reg_known(struct bpf_reg_state *reg, u64 imm)
{ {
BUG_ON(regno >= MAX_BPF_REG); reg->id = 0;
reg->var_off = tnum_const(imm);
reg->smin_value = (s64)imm;
reg->smax_value = (s64)imm;
reg->umin_value = imm;
reg->umax_value = imm;
}
memset(&regs[regno], 0, sizeof(regs[regno])); /* Mark the 'variable offset' part of a register as zero. This should be
regs[regno].type = NOT_INIT; * used only on registers holding a pointer type.
regs[regno].min_value = BPF_REGISTER_MIN_RANGE; */
regs[regno].max_value = BPF_REGISTER_MAX_RANGE; static void __mark_reg_known_zero(struct bpf_reg_state *reg)
{
__mark_reg_known(reg, 0);
} }
static void init_reg_state(struct bpf_reg_state *regs) static void mark_reg_known_zero(struct bpf_reg_state *regs, u32 regno)
{ {
int i; if (WARN_ON(regno >= MAX_BPF_REG)) {
verbose("mark_reg_known_zero(regs, %u)\n", regno);
/* Something bad happened, let's kill all regs */
for (regno = 0; regno < MAX_BPF_REG; regno++)
__mark_reg_not_init(regs + regno);
return;
}
__mark_reg_known_zero(regs + regno);
}
for (i = 0; i < MAX_BPF_REG; i++) /* Attempts to improve min/max values based on var_off information */
mark_reg_not_init(regs, i); static void __update_reg_bounds(struct bpf_reg_state *reg)
{
/* min signed is max(sign bit) | min(other bits) */
reg->smin_value = max_t(s64, reg->smin_value,
reg->var_off.value | (reg->var_off.mask & S64_MIN));
/* max signed is min(sign bit) | max(other bits) */
reg->smax_value = min_t(s64, reg->smax_value,
reg->var_off.value | (reg->var_off.mask & S64_MAX));
reg->umin_value = max(reg->umin_value, reg->var_off.value);
reg->umax_value = min(reg->umax_value,
reg->var_off.value | reg->var_off.mask);
}
/* frame pointer */ /* Uses signed min/max values to inform unsigned, and vice-versa */
regs[BPF_REG_FP].type = FRAME_PTR; static void __reg_deduce_bounds(struct bpf_reg_state *reg)
{
/* Learn sign from signed bounds.
* If we cannot cross the sign boundary, then signed and unsigned bounds
* are the same, so combine. This works even in the negative case, e.g.
* -3 s<= x s<= -1 implies 0xf...fd u<= x u<= 0xf...ff.
*/
if (reg->smin_value >= 0 || reg->smax_value < 0) {
reg->smin_value = reg->umin_value = max_t(u64, reg->smin_value,
reg->umin_value);
reg->smax_value = reg->umax_value = min_t(u64, reg->smax_value,
reg->umax_value);
return;
}
/* Learn sign from unsigned bounds. Signed bounds cross the sign
* boundary, so we must be careful.
*/
if ((s64)reg->umax_value >= 0) {
/* Positive. We can't learn anything from the smin, but smax
* is positive, hence safe.
*/
reg->smin_value = reg->umin_value;
reg->smax_value = reg->umax_value = min_t(u64, reg->smax_value,
reg->umax_value);
} else if ((s64)reg->umin_value < 0) {
/* Negative. We can't learn anything from the smax, but smin
* is negative, hence safe.
*/
reg->smin_value = reg->umin_value = max_t(u64, reg->smin_value,
reg->umin_value);
reg->smax_value = reg->umax_value;
}
}
/* 1st arg to a function */ /* Attempts to improve var_off based on unsigned min/max information */
regs[BPF_REG_1].type = PTR_TO_CTX; static void __reg_bound_offset(struct bpf_reg_state *reg)
{
reg->var_off = tnum_intersect(reg->var_off,
tnum_range(reg->umin_value,
reg->umax_value));
} }
static void __mark_reg_unknown_value(struct bpf_reg_state *regs, u32 regno) /* Reset the min/max bounds of a register */
static void __mark_reg_unbounded(struct bpf_reg_state *reg)
{ {
regs[regno].type = UNKNOWN_VALUE; reg->smin_value = S64_MIN;
regs[regno].id = 0; reg->smax_value = S64_MAX;
regs[regno].imm = 0; reg->umin_value = 0;
reg->umax_value = U64_MAX;
} }
static void mark_reg_unknown_value(struct bpf_reg_state *regs, u32 regno) /* Mark a register as having a completely unknown (scalar) value. */
static void __mark_reg_unknown(struct bpf_reg_state *reg)
{ {
BUG_ON(regno >= MAX_BPF_REG); reg->type = SCALAR_VALUE;
__mark_reg_unknown_value(regs, regno); reg->id = 0;
reg->off = 0;
reg->var_off = tnum_unknown;
__mark_reg_unbounded(reg);
} }
static void reset_reg_range_values(struct bpf_reg_state *regs, u32 regno) static void mark_reg_unknown(struct bpf_reg_state *regs, u32 regno)
{ {
regs[regno].min_value = BPF_REGISTER_MIN_RANGE; if (WARN_ON(regno >= MAX_BPF_REG)) {
regs[regno].max_value = BPF_REGISTER_MAX_RANGE; verbose("mark_reg_unknown(regs, %u)\n", regno);
regs[regno].value_from_signed = false; /* Something bad happened, let's kill all regs */
regs[regno].min_align = 0; for (regno = 0; regno < MAX_BPF_REG; regno++)
__mark_reg_not_init(regs + regno);
return;
}
__mark_reg_unknown(regs + regno);
} }
static void mark_reg_unknown_value_and_range(struct bpf_reg_state *regs, static void __mark_reg_not_init(struct bpf_reg_state *reg)
u32 regno)
{ {
mark_reg_unknown_value(regs, regno); __mark_reg_unknown(reg);
reset_reg_range_values(regs, regno); reg->type = NOT_INIT;
}
static void mark_reg_not_init(struct bpf_reg_state *regs, u32 regno)
{
if (WARN_ON(regno >= MAX_BPF_REG)) {
verbose("mark_reg_not_init(regs, %u)\n", regno);
/* Something bad happened, let's kill all regs */
for (regno = 0; regno < MAX_BPF_REG; regno++)
__mark_reg_not_init(regs + regno);
return;
}
__mark_reg_not_init(regs + regno);
}
static void init_reg_state(struct bpf_reg_state *regs)
{
int i;
for (i = 0; i < MAX_BPF_REG; i++)
mark_reg_not_init(regs, i);
/* frame pointer */
regs[BPF_REG_FP].type = PTR_TO_STACK;
mark_reg_known_zero(regs, BPF_REG_FP);
/* 1st arg to a function */
regs[BPF_REG_1].type = PTR_TO_CTX;
mark_reg_known_zero(regs, BPF_REG_1);
} }
enum reg_arg_type { enum reg_arg_type {
...@@ -542,7 +664,7 @@ static int check_reg_arg(struct bpf_reg_state *regs, u32 regno, ...@@ -542,7 +664,7 @@ static int check_reg_arg(struct bpf_reg_state *regs, u32 regno,
return -EACCES; return -EACCES;
} }
if (t == DST_OP) if (t == DST_OP)
mark_reg_unknown_value(regs, regno); mark_reg_unknown(regs, regno);
} }
return 0; return 0;
} }
...@@ -552,12 +674,10 @@ static bool is_spillable_regtype(enum bpf_reg_type type) ...@@ -552,12 +674,10 @@ static bool is_spillable_regtype(enum bpf_reg_type type)
switch (type) { switch (type) {
case PTR_TO_MAP_VALUE: case PTR_TO_MAP_VALUE:
case PTR_TO_MAP_VALUE_OR_NULL: case PTR_TO_MAP_VALUE_OR_NULL:
case PTR_TO_MAP_VALUE_ADJ:
case PTR_TO_STACK: case PTR_TO_STACK:
case PTR_TO_CTX: case PTR_TO_CTX:
case PTR_TO_PACKET: case PTR_TO_PACKET:
case PTR_TO_PACKET_END: case PTR_TO_PACKET_END:
case FRAME_PTR:
case CONST_PTR_TO_MAP: case CONST_PTR_TO_MAP:
return true; return true;
default: default:
...@@ -637,14 +757,13 @@ static int check_stack_read(struct bpf_verifier_state *state, int off, int size, ...@@ -637,14 +757,13 @@ static int check_stack_read(struct bpf_verifier_state *state, int off, int size,
} }
if (value_regno >= 0) if (value_regno >= 0)
/* have read misc data from the stack */ /* have read misc data from the stack */
mark_reg_unknown_value_and_range(state->regs, mark_reg_unknown(state->regs, value_regno);
value_regno);
return 0; return 0;
} }
} }
/* check read/write into map element returned by bpf_map_lookup_elem() */ /* check read/write into map element returned by bpf_map_lookup_elem() */
static int check_map_access(struct bpf_verifier_env *env, u32 regno, int off, static int __check_map_access(struct bpf_verifier_env *env, u32 regno, int off,
int size) int size)
{ {
struct bpf_map *map = env->cur_state.regs[regno].map_ptr; struct bpf_map *map = env->cur_state.regs[regno].map_ptr;
...@@ -657,49 +776,55 @@ static int check_map_access(struct bpf_verifier_env *env, u32 regno, int off, ...@@ -657,49 +776,55 @@ static int check_map_access(struct bpf_verifier_env *env, u32 regno, int off,
return 0; return 0;
} }
/* check read/write into an adjusted map element */ /* check read/write into a map element with possible variable offset */
static int check_map_access_adj(struct bpf_verifier_env *env, u32 regno, static int check_map_access(struct bpf_verifier_env *env, u32 regno,
int off, int size) int off, int size)
{ {
struct bpf_verifier_state *state = &env->cur_state; struct bpf_verifier_state *state = &env->cur_state;
struct bpf_reg_state *reg = &state->regs[regno]; struct bpf_reg_state *reg = &state->regs[regno];
int err; int err;
/* We adjusted the register to this map value, so we /* We may have adjusted the register to this map value, so we
* need to change off and size to min_value and max_value * need to try adding each of min_value and max_value to off
* respectively to make sure our theoretical access will be * to make sure our theoretical access will be safe.
* safe.
*/ */
if (log_level) if (log_level)
print_verifier_state(state); print_verifier_state(state);
env->varlen_map_value_access = true; /* If the offset is variable, we will need to be stricter in state
* pruning from now on.
*/
if (!tnum_is_const(reg->var_off))
env->varlen_map_value_access = true;
/* The minimum value is only important with signed /* The minimum value is only important with signed
* comparisons where we can't assume the floor of a * comparisons where we can't assume the floor of a
* value is 0. If we are using signed variables for our * value is 0. If we are using signed variables for our
* index'es we need to make sure that whatever we use * index'es we need to make sure that whatever we use
* will have a set floor within our range. * will have a set floor within our range.
*/ */
if (reg->min_value < 0) { if (reg->smin_value < 0) {
verbose("R%d min value is negative, either use unsigned index or do a if (index >=0) check.\n", verbose("R%d min value is negative, either use unsigned index or do a if (index >=0) check.\n",
regno); regno);
return -EACCES; return -EACCES;
} }
err = check_map_access(env, regno, reg->min_value + off, size); err = __check_map_access(env, regno, reg->smin_value + off, size);
if (err) { if (err) {
verbose("R%d min value is outside of the array range\n", verbose("R%d min value is outside of the array range\n", regno);
regno);
return err; return err;
} }
/* If we haven't set a max value then we need to bail /* If we haven't set a max value then we need to bail since we can't be
* since we can't be sure we won't do bad things. * sure we won't do bad things.
* If reg->umax_value + off could overflow, treat that as unbounded too.
*/ */
if (reg->max_value == BPF_REGISTER_MAX_RANGE) { if (reg->umax_value >= BPF_MAX_VAR_OFF) {
verbose("R%d unbounded memory access, make sure to bounds check any array access into a map\n", verbose("R%d unbounded memory access, make sure to bounds check any array access into a map\n",
regno); regno);
return -EACCES; return -EACCES;
} }
return check_map_access(env, regno, reg->max_value + off, size); err = __check_map_access(env, regno, reg->umax_value + off, size);
if (err)
verbose("R%d max value is outside of the array range\n", regno);
return err;
} }
#define MAX_PACKET_OFF 0xffff #define MAX_PACKET_OFF 0xffff
...@@ -729,14 +854,13 @@ static bool may_access_direct_pkt_data(struct bpf_verifier_env *env, ...@@ -729,14 +854,13 @@ static bool may_access_direct_pkt_data(struct bpf_verifier_env *env,
} }
} }
static int check_packet_access(struct bpf_verifier_env *env, u32 regno, int off, static int __check_packet_access(struct bpf_verifier_env *env, u32 regno,
int size) int off, int size)
{ {
struct bpf_reg_state *regs = env->cur_state.regs; struct bpf_reg_state *regs = env->cur_state.regs;
struct bpf_reg_state *reg = &regs[regno]; struct bpf_reg_state *reg = &regs[regno];
off += reg->off; if (off < 0 || size <= 0 || (u64)off + size > reg->range) {
if (off < 0 || size <= 0 || off + size > reg->range) {
verbose("invalid access to packet, off=%d size=%d, R%d(id=%d,off=%d,r=%d)\n", verbose("invalid access to packet, off=%d size=%d, R%d(id=%d,off=%d,r=%d)\n",
off, size, regno, reg->id, reg->off, reg->range); off, size, regno, reg->id, reg->off, reg->range);
return -EACCES; return -EACCES;
...@@ -744,7 +868,35 @@ static int check_packet_access(struct bpf_verifier_env *env, u32 regno, int off, ...@@ -744,7 +868,35 @@ static int check_packet_access(struct bpf_verifier_env *env, u32 regno, int off,
return 0; return 0;
} }
/* check access to 'struct bpf_context' fields */ static int check_packet_access(struct bpf_verifier_env *env, u32 regno, int off,
int size)
{
struct bpf_reg_state *regs = env->cur_state.regs;
struct bpf_reg_state *reg = &regs[regno];
int err;
/* We may have added a variable offset to the packet pointer; but any
* reg->range we have comes after that. We are only checking the fixed
* offset.
*/
/* We don't allow negative numbers, because we aren't tracking enough
* detail to prove they're safe.
*/
if (reg->smin_value < 0) {
verbose("R%d min value is negative, either use unsigned index or do a if (index >=0) check.\n",
regno);
return -EACCES;
}
err = __check_packet_access(env, regno, off, size);
if (err) {
verbose("R%d offset is outside of the packet\n", regno);
return err;
}
return err;
}
/* check access to 'struct bpf_context' fields. Supports fixed offsets only */
static int check_ctx_access(struct bpf_verifier_env *env, int insn_idx, int off, int size, static int check_ctx_access(struct bpf_verifier_env *env, int insn_idx, int off, int size,
enum bpf_access_type t, enum bpf_reg_type *reg_type) enum bpf_access_type t, enum bpf_reg_type *reg_type)
{ {
...@@ -784,13 +936,7 @@ static bool __is_pointer_value(bool allow_ptr_leaks, ...@@ -784,13 +936,7 @@ static bool __is_pointer_value(bool allow_ptr_leaks,
if (allow_ptr_leaks) if (allow_ptr_leaks)
return false; return false;
switch (reg->type) { return reg->type != SCALAR_VALUE;
case UNKNOWN_VALUE:
case CONST_IMM:
return false;
default:
return true;
}
} }
static bool is_pointer_value(struct bpf_verifier_env *env, int regno) static bool is_pointer_value(struct bpf_verifier_env *env, int regno)
...@@ -801,23 +947,13 @@ static bool is_pointer_value(struct bpf_verifier_env *env, int regno) ...@@ -801,23 +947,13 @@ static bool is_pointer_value(struct bpf_verifier_env *env, int regno)
static int check_pkt_ptr_alignment(const struct bpf_reg_state *reg, static int check_pkt_ptr_alignment(const struct bpf_reg_state *reg,
int off, int size, bool strict) int off, int size, bool strict)
{ {
struct tnum reg_off;
int ip_align; int ip_align;
int reg_off;
/* Byte size accesses are always allowed. */ /* Byte size accesses are always allowed. */
if (!strict || size == 1) if (!strict || size == 1)
return 0; return 0;
reg_off = reg->off;
if (reg->id) {
if (reg->aux_off_align % size) {
verbose("Packet access is only %u byte aligned, %d byte access not allowed\n",
reg->aux_off_align, size);
return -EACCES;
}
reg_off += reg->aux_off;
}
/* For platforms that do not have a Kconfig enabling /* For platforms that do not have a Kconfig enabling
* CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS the value of * CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS the value of
* NET_IP_ALIGN is universally set to '2'. And on platforms * NET_IP_ALIGN is universally set to '2'. And on platforms
...@@ -827,20 +963,37 @@ static int check_pkt_ptr_alignment(const struct bpf_reg_state *reg, ...@@ -827,20 +963,37 @@ static int check_pkt_ptr_alignment(const struct bpf_reg_state *reg,
* unconditional IP align value of '2'. * unconditional IP align value of '2'.
*/ */
ip_align = 2; ip_align = 2;
if ((ip_align + reg_off + off) % size != 0) {
verbose("misaligned packet access off %d+%d+%d size %d\n", reg_off = tnum_add(reg->var_off, tnum_const(ip_align + reg->off + off));
ip_align, reg_off, off, size); if (!tnum_is_aligned(reg_off, size)) {
char tn_buf[48];
tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off);
verbose("misaligned packet access off %d+%s+%d+%d size %d\n",
ip_align, tn_buf, reg->off, off, size);
return -EACCES; return -EACCES;
} }
return 0; return 0;
} }
static int check_val_ptr_alignment(const struct bpf_reg_state *reg, static int check_generic_ptr_alignment(const struct bpf_reg_state *reg,
int size, bool strict) const char *pointer_desc,
int off, int size, bool strict)
{ {
if (strict && size != 1) { struct tnum reg_off;
verbose("Unknown alignment. Only byte-sized access allowed in value access.\n");
/* Byte size accesses are always allowed. */
if (!strict || size == 1)
return 0;
reg_off = tnum_add(reg->var_off, tnum_const(reg->off + off));
if (!tnum_is_aligned(reg_off, size)) {
char tn_buf[48];
tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off);
verbose("misaligned %saccess off %s+%d+%d size %d\n",
pointer_desc, tn_buf, reg->off, off, size);
return -EACCES; return -EACCES;
} }
...@@ -852,21 +1005,25 @@ static int check_ptr_alignment(struct bpf_verifier_env *env, ...@@ -852,21 +1005,25 @@ static int check_ptr_alignment(struct bpf_verifier_env *env,
int off, int size) int off, int size)
{ {
bool strict = env->strict_alignment; bool strict = env->strict_alignment;
const char *pointer_desc = "";
switch (reg->type) { switch (reg->type) {
case PTR_TO_PACKET: case PTR_TO_PACKET:
/* special case, because of NET_IP_ALIGN */
return check_pkt_ptr_alignment(reg, off, size, strict); return check_pkt_ptr_alignment(reg, off, size, strict);
case PTR_TO_MAP_VALUE_ADJ: case PTR_TO_MAP_VALUE:
return check_val_ptr_alignment(reg, size, strict); pointer_desc = "value ";
break;
case PTR_TO_CTX:
pointer_desc = "context ";
break;
case PTR_TO_STACK:
pointer_desc = "stack ";
break;
default: default:
if (off % size != 0) { break;
verbose("misaligned access off %d size %d\n",
off, size);
return -EACCES;
}
return 0;
} }
return check_generic_ptr_alignment(reg, pointer_desc, off, size, strict);
} }
/* check whether memory at (regno + off) is accessible for t = (read | write) /* check whether memory at (regno + off) is accessible for t = (read | write)
...@@ -883,52 +1040,79 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn ...@@ -883,52 +1040,79 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
struct bpf_reg_state *reg = &state->regs[regno]; struct bpf_reg_state *reg = &state->regs[regno];
int size, err = 0; int size, err = 0;
if (reg->type == PTR_TO_STACK)
off += reg->imm;
size = bpf_size_to_bytes(bpf_size); size = bpf_size_to_bytes(bpf_size);
if (size < 0) if (size < 0)
return size; return size;
/* alignment checks will add in reg->off themselves */
err = check_ptr_alignment(env, reg, off, size); err = check_ptr_alignment(env, reg, off, size);
if (err) if (err)
return err; return err;
if (reg->type == PTR_TO_MAP_VALUE || /* for access checks, reg->off is just part of off */
reg->type == PTR_TO_MAP_VALUE_ADJ) { off += reg->off;
if (reg->type == PTR_TO_MAP_VALUE) {
if (t == BPF_WRITE && value_regno >= 0 && if (t == BPF_WRITE && value_regno >= 0 &&
is_pointer_value(env, value_regno)) { is_pointer_value(env, value_regno)) {
verbose("R%d leaks addr into map\n", value_regno); verbose("R%d leaks addr into map\n", value_regno);
return -EACCES; return -EACCES;
} }
if (reg->type == PTR_TO_MAP_VALUE_ADJ) err = check_map_access(env, regno, off, size);
err = check_map_access_adj(env, regno, off, size);
else
err = check_map_access(env, regno, off, size);
if (!err && t == BPF_READ && value_regno >= 0) if (!err && t == BPF_READ && value_regno >= 0)
mark_reg_unknown_value_and_range(state->regs, mark_reg_unknown(state->regs, value_regno);
value_regno);
} else if (reg->type == PTR_TO_CTX) { } else if (reg->type == PTR_TO_CTX) {
enum bpf_reg_type reg_type = UNKNOWN_VALUE; enum bpf_reg_type reg_type = SCALAR_VALUE;
if (t == BPF_WRITE && value_regno >= 0 && if (t == BPF_WRITE && value_regno >= 0 &&
is_pointer_value(env, value_regno)) { is_pointer_value(env, value_regno)) {
verbose("R%d leaks addr into ctx\n", value_regno); verbose("R%d leaks addr into ctx\n", value_regno);
return -EACCES; return -EACCES;
} }
/* ctx accesses must be at a fixed offset, so that we can
* determine what type of data were returned.
*/
if (!tnum_is_const(reg->var_off)) {
char tn_buf[48];
tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off);
verbose("variable ctx access var_off=%s off=%d size=%d",
tn_buf, off, size);
return -EACCES;
}
off += reg->var_off.value;
err = check_ctx_access(env, insn_idx, off, size, t, &reg_type); err = check_ctx_access(env, insn_idx, off, size, t, &reg_type);
if (!err && t == BPF_READ && value_regno >= 0) { if (!err && t == BPF_READ && value_regno >= 0) {
mark_reg_unknown_value_and_range(state->regs, /* ctx access returns either a scalar, or a
value_regno); * PTR_TO_PACKET[_END]. In the latter case, we know
/* note that reg.[id|off|range] == 0 */ * the offset is zero.
*/
if (reg_type == SCALAR_VALUE)
mark_reg_unknown(state->regs, value_regno);
else
mark_reg_known_zero(state->regs, value_regno);
state->regs[value_regno].id = 0;
state->regs[value_regno].off = 0;
state->regs[value_regno].range = 0;
state->regs[value_regno].type = reg_type; state->regs[value_regno].type = reg_type;
state->regs[value_regno].aux_off = 0;
state->regs[value_regno].aux_off_align = 0;
} }
} else if (reg->type == FRAME_PTR || reg->type == PTR_TO_STACK) { } else if (reg->type == PTR_TO_STACK) {
/* stack accesses must be at a fixed offset, so that we can
* determine what type of data were returned.
* See check_stack_read().
*/
if (!tnum_is_const(reg->var_off)) {
char tn_buf[48];
tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off);
verbose("variable stack access var_off=%s off=%d size=%d",
tn_buf, off, size);
return -EACCES;
}
off += reg->var_off.value;
if (off >= 0 || off < -MAX_BPF_STACK) { if (off >= 0 || off < -MAX_BPF_STACK) {
verbose("invalid stack off=%d size=%d\n", off, size); verbose("invalid stack off=%d size=%d\n", off, size);
return -EACCES; return -EACCES;
...@@ -948,7 +1132,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn ...@@ -948,7 +1132,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
} else { } else {
err = check_stack_read(state, off, size, value_regno); err = check_stack_read(state, off, size, value_regno);
} }
} else if (state->regs[regno].type == PTR_TO_PACKET) { } else if (reg->type == PTR_TO_PACKET) {
if (t == BPF_WRITE && !may_access_direct_pkt_data(env, NULL, t)) { if (t == BPF_WRITE && !may_access_direct_pkt_data(env, NULL, t)) {
verbose("cannot write into packet\n"); verbose("cannot write into packet\n");
return -EACCES; return -EACCES;
...@@ -960,21 +1144,19 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn ...@@ -960,21 +1144,19 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
} }
err = check_packet_access(env, regno, off, size); err = check_packet_access(env, regno, off, size);
if (!err && t == BPF_READ && value_regno >= 0) if (!err && t == BPF_READ && value_regno >= 0)
mark_reg_unknown_value_and_range(state->regs, mark_reg_unknown(state->regs, value_regno);
value_regno);
} else { } else {
verbose("R%d invalid mem access '%s'\n", verbose("R%d invalid mem access '%s'\n",
regno, reg_type_str[reg->type]); regno, reg_type_str[reg->type]);
return -EACCES; return -EACCES;
} }
if (!err && size <= 2 && value_regno >= 0 && env->allow_ptr_leaks && if (!err && size < BPF_REG_SIZE && value_regno >= 0 && t == BPF_READ &&
state->regs[value_regno].type == UNKNOWN_VALUE) { state->regs[value_regno].type == SCALAR_VALUE) {
/* 1 or 2 byte load zero-extends, determine the number of /* b/h/w load zero-extends, mark upper bits as known 0 */
* zero upper bits. Not doing it fo 4 byte load, since state->regs[value_regno].var_off = tnum_cast(
* such values cannot be added to ptr_to_packet anyway. state->regs[value_regno].var_off, size);
*/ __update_reg_bounds(&state->regs[value_regno]);
state->regs[value_regno].imm = 64 - size * 8;
} }
return err; return err;
} }
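The zero-extension handling above leans on the verifier's (value, mask) "tnum" representation of partially-known values: bits set in the mask are unknown, bits clear in the mask are known and equal the corresponding bit of the value. Below is a minimal standalone sketch of that idea (not the kernel's tnum.h; names tn/tn_cast are just for the illustration), showing why casting to the load size marks the upper bits as known zero:

    #include <stdint.h>
    #include <stdio.h>

    struct tn { uint64_t value; uint64_t mask; };   /* mask bit set = unknown */

    /* keep only the low 'size' bytes; the rest become known-zero */
    static struct tn tn_cast(struct tn a, uint8_t size)
    {
        uint64_t keep = (size == 8) ? ~0ULL : (1ULL << (size * 8)) - 1;

        a.value &= keep;
        a.mask &= keep;
        return a;
    }

    int main(void)
    {
        struct tn r = { 0, ~0ULL };        /* completely unknown scalar */
        struct tn b = tn_cast(r, 1);       /* result of a 1-byte load */

        /* umax is value | mask: prints 255, i.e. upper 56 bits known zero */
        printf("mask=%#llx umax=%llu\n",
               (unsigned long long)b.mask,
               (unsigned long long)(b.value | b.mask));
        return 0;
    }

This is exactly the bound the b/h/w-load path above records via tnum_cast() followed by __update_reg_bounds().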
...@@ -1016,9 +1198,17 @@ static int check_xadd(struct bpf_verifier_env *env, int insn_idx, struct bpf_ins ...@@ -1016,9 +1198,17 @@ static int check_xadd(struct bpf_verifier_env *env, int insn_idx, struct bpf_ins
BPF_SIZE(insn->code), BPF_WRITE, -1); BPF_SIZE(insn->code), BPF_WRITE, -1);
} }
/* Does this register contain a constant zero? */
static bool register_is_null(struct bpf_reg_state reg)
{
return reg.type == SCALAR_VALUE && tnum_equals_const(reg.var_off, 0);
}
/* when register 'regno' is passed into function that will read 'access_size' /* when register 'regno' is passed into function that will read 'access_size'
* bytes from that pointer, make sure that it's within stack boundary * bytes from that pointer, make sure that it's within stack boundary
* and all elements of stack are initialized * and all elements of stack are initialized.
* Unlike most pointer bounds-checking functions, this one doesn't take an
* 'off' argument, so it has to add in reg->off itself.
*/ */
static int check_stack_boundary(struct bpf_verifier_env *env, int regno, static int check_stack_boundary(struct bpf_verifier_env *env, int regno,
int access_size, bool zero_size_allowed, int access_size, bool zero_size_allowed,
...@@ -1029,9 +1219,9 @@ static int check_stack_boundary(struct bpf_verifier_env *env, int regno, ...@@ -1029,9 +1219,9 @@ static int check_stack_boundary(struct bpf_verifier_env *env, int regno,
int off, i; int off, i;
if (regs[regno].type != PTR_TO_STACK) { if (regs[regno].type != PTR_TO_STACK) {
/* Allow zero-byte read from NULL, regardless of pointer type */
if (zero_size_allowed && access_size == 0 && if (zero_size_allowed && access_size == 0 &&
regs[regno].type == CONST_IMM && register_is_null(regs[regno]))
regs[regno].imm == 0)
return 0; return 0;
verbose("R%d type=%s expected=%s\n", regno, verbose("R%d type=%s expected=%s\n", regno,
...@@ -1040,7 +1230,15 @@ static int check_stack_boundary(struct bpf_verifier_env *env, int regno, ...@@ -1040,7 +1230,15 @@ static int check_stack_boundary(struct bpf_verifier_env *env, int regno,
return -EACCES; return -EACCES;
} }
off = regs[regno].imm; /* Only allow fixed-offset stack reads */
if (!tnum_is_const(regs[regno].var_off)) {
char tn_buf[48];
tnum_strn(tn_buf, sizeof(tn_buf), regs[regno].var_off);
verbose("invalid variable stack read R%d var_off=%s\n",
regno, tn_buf);
}
off = regs[regno].off + regs[regno].var_off.value;
if (off >= 0 || off < -MAX_BPF_STACK || off + access_size > 0 || if (off >= 0 || off < -MAX_BPF_STACK || off + access_size > 0 ||
access_size <= 0) { access_size <= 0) {
verbose("invalid stack type R%d off=%d access_size=%d\n", verbose("invalid stack type R%d off=%d access_size=%d\n",
...@@ -1071,16 +1269,14 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno, ...@@ -1071,16 +1269,14 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
int access_size, bool zero_size_allowed, int access_size, bool zero_size_allowed,
struct bpf_call_arg_meta *meta) struct bpf_call_arg_meta *meta)
{ {
struct bpf_reg_state *regs = env->cur_state.regs; struct bpf_reg_state *regs = env->cur_state.regs, *reg = &regs[regno];
switch (regs[regno].type) { switch (reg->type) {
case PTR_TO_PACKET: case PTR_TO_PACKET:
return check_packet_access(env, regno, 0, access_size); return check_packet_access(env, regno, reg->off, access_size);
case PTR_TO_MAP_VALUE: case PTR_TO_MAP_VALUE:
return check_map_access(env, regno, 0, access_size); return check_map_access(env, regno, reg->off, access_size);
case PTR_TO_MAP_VALUE_ADJ: default: /* scalar_value|ptr_to_stack or invalid ptr */
return check_map_access_adj(env, regno, 0, access_size);
default: /* const_imm|ptr_to_stack or invalid ptr */
return check_stack_boundary(env, regno, access_size, return check_stack_boundary(env, regno, access_size,
zero_size_allowed, meta); zero_size_allowed, meta);
} }
...@@ -1123,11 +1319,8 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 regno, ...@@ -1123,11 +1319,8 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 regno,
goto err_type; goto err_type;
} else if (arg_type == ARG_CONST_SIZE || } else if (arg_type == ARG_CONST_SIZE ||
arg_type == ARG_CONST_SIZE_OR_ZERO) { arg_type == ARG_CONST_SIZE_OR_ZERO) {
expected_type = CONST_IMM; expected_type = SCALAR_VALUE;
/* One exception. Allow UNKNOWN_VALUE registers when the if (type != expected_type)
* boundaries are known and don't cause unsafe memory accesses
*/
if (type != UNKNOWN_VALUE && type != expected_type)
goto err_type; goto err_type;
} else if (arg_type == ARG_CONST_MAP_PTR) { } else if (arg_type == ARG_CONST_MAP_PTR) {
expected_type = CONST_PTR_TO_MAP; expected_type = CONST_PTR_TO_MAP;
...@@ -1141,13 +1334,13 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 regno, ...@@ -1141,13 +1334,13 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 regno,
arg_type == ARG_PTR_TO_UNINIT_MEM) { arg_type == ARG_PTR_TO_UNINIT_MEM) {
expected_type = PTR_TO_STACK; expected_type = PTR_TO_STACK;
/* One exception here. In case function allows for NULL to be /* One exception here. In case function allows for NULL to be
* passed in as argument, it's a CONST_IMM type. Final test * passed in as argument, it's a SCALAR_VALUE type. Final test
* happens during stack boundary checking. * happens during stack boundary checking.
*/ */
if (type == CONST_IMM && reg->imm == 0) if (register_is_null(*reg))
/* final test in check_stack_boundary() */; /* final test in check_stack_boundary() */;
else if (type != PTR_TO_PACKET && type != PTR_TO_MAP_VALUE && else if (type != PTR_TO_PACKET && type != PTR_TO_MAP_VALUE &&
type != PTR_TO_MAP_VALUE_ADJ && type != expected_type) type != expected_type)
goto err_type; goto err_type;
meta->raw_mode = arg_type == ARG_PTR_TO_UNINIT_MEM; meta->raw_mode = arg_type == ARG_PTR_TO_UNINIT_MEM;
} else { } else {
...@@ -1173,7 +1366,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 regno, ...@@ -1173,7 +1366,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 regno,
return -EACCES; return -EACCES;
} }
if (type == PTR_TO_PACKET) if (type == PTR_TO_PACKET)
err = check_packet_access(env, regno, 0, err = check_packet_access(env, regno, reg->off,
meta->map_ptr->key_size); meta->map_ptr->key_size);
else else
err = check_stack_boundary(env, regno, err = check_stack_boundary(env, regno,
...@@ -1189,7 +1382,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 regno, ...@@ -1189,7 +1382,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 regno,
return -EACCES; return -EACCES;
} }
if (type == PTR_TO_PACKET) if (type == PTR_TO_PACKET)
err = check_packet_access(env, regno, 0, err = check_packet_access(env, regno, reg->off,
meta->map_ptr->value_size); meta->map_ptr->value_size);
else else
err = check_stack_boundary(env, regno, err = check_stack_boundary(env, regno,
...@@ -1209,10 +1402,11 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 regno, ...@@ -1209,10 +1402,11 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 regno,
return -EACCES; return -EACCES;
} }
/* If the register is UNKNOWN_VALUE, the access check happens /* The register is SCALAR_VALUE; the access check
* using its boundaries. Otherwise, just use its imm * happens using its boundaries.
*/ */
if (type == UNKNOWN_VALUE) {
if (!tnum_is_const(reg->var_off))
/* For unprivileged variable accesses, disable raw /* For unprivileged variable accesses, disable raw
* mode so that the program is required to * mode so that the program is required to
* initialize all the memory that the helper could * initialize all the memory that the helper could
...@@ -1220,35 +1414,28 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 regno, ...@@ -1220,35 +1414,28 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 regno,
*/ */
meta = NULL; meta = NULL;
if (reg->min_value < 0) { if (reg->smin_value < 0) {
verbose("R%d min value is negative, either use unsigned or 'var &= const'\n", verbose("R%d min value is negative, either use unsigned or 'var &= const'\n",
regno); regno);
return -EACCES; return -EACCES;
} }
if (reg->min_value == 0) {
err = check_helper_mem_access(env, regno - 1, 0,
zero_size_allowed,
meta);
if (err)
return err;
}
if (reg->max_value == BPF_REGISTER_MAX_RANGE) { if (reg->umin_value == 0) {
verbose("R%d unbounded memory access, use 'var &= const' or 'if (var < const)'\n", err = check_helper_mem_access(env, regno - 1, 0,
regno); zero_size_allowed,
return -EACCES; meta);
}
err = check_helper_mem_access(env, regno - 1,
reg->max_value,
zero_size_allowed, meta);
if (err) if (err)
return err; return err;
} else {
/* register is CONST_IMM */
err = check_helper_mem_access(env, regno - 1, reg->imm,
zero_size_allowed, meta);
} }
if (reg->umax_value >= BPF_MAX_VAR_SIZ) {
verbose("R%d unbounded memory access, use 'var &= const' or 'if (var < const)'\n",
regno);
return -EACCES;
}
err = check_helper_mem_access(env, regno - 1,
reg->umax_value,
zero_size_allowed, meta);
} }
return err; return err;
...@@ -1352,6 +1539,9 @@ static int check_raw_mode(const struct bpf_func_proto *fn) ...@@ -1352,6 +1539,9 @@ static int check_raw_mode(const struct bpf_func_proto *fn)
return count > 1 ? -EINVAL : 0; return count > 1 ? -EINVAL : 0;
} }
/* Packet data might have moved, any old PTR_TO_PACKET[_END] are now invalid,
* so turn them into unknown SCALAR_VALUE.
*/
static void clear_all_pkt_pointers(struct bpf_verifier_env *env) static void clear_all_pkt_pointers(struct bpf_verifier_env *env)
{ {
struct bpf_verifier_state *state = &env->cur_state; struct bpf_verifier_state *state = &env->cur_state;
...@@ -1361,7 +1551,7 @@ static void clear_all_pkt_pointers(struct bpf_verifier_env *env) ...@@ -1361,7 +1551,7 @@ static void clear_all_pkt_pointers(struct bpf_verifier_env *env)
for (i = 0; i < MAX_BPF_REG; i++) for (i = 0; i < MAX_BPF_REG; i++)
if (regs[i].type == PTR_TO_PACKET || if (regs[i].type == PTR_TO_PACKET ||
regs[i].type == PTR_TO_PACKET_END) regs[i].type == PTR_TO_PACKET_END)
mark_reg_unknown_value(regs, i); mark_reg_unknown(regs, i);
for (i = 0; i < MAX_BPF_STACK; i += BPF_REG_SIZE) { for (i = 0; i < MAX_BPF_STACK; i += BPF_REG_SIZE) {
if (state->stack_slot_type[i] != STACK_SPILL) if (state->stack_slot_type[i] != STACK_SPILL)
...@@ -1370,8 +1560,7 @@ static void clear_all_pkt_pointers(struct bpf_verifier_env *env) ...@@ -1370,8 +1560,7 @@ static void clear_all_pkt_pointers(struct bpf_verifier_env *env)
if (reg->type != PTR_TO_PACKET && if (reg->type != PTR_TO_PACKET &&
reg->type != PTR_TO_PACKET_END) reg->type != PTR_TO_PACKET_END)
continue; continue;
__mark_reg_unknown_value(state->spilled_regs, __mark_reg_unknown(reg);
i / BPF_REG_SIZE);
} }
} }
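clear_all_pkt_pointers() is what forces programs to re-derive and re-check packet bounds after a helper that may move or reallocate packet data. A minimal restricted-C illustration (assuming libbpf's bpf_helpers.h and an XDP attach point; the function and section names are only for the example):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    SEC("xdp")
    int adjust_then_read(struct xdp_md *ctx)
    {
        void *data = (void *)(long)ctx->data;
        void *data_end = (void *)(long)ctx->data_end;
        __u8 *p = data;

        if (p + 4 > data_end)               /* pkt pointer gets range=4 */
            return XDP_DROP;

        if (bpf_xdp_adjust_head(ctx, 0))    /* may move packet data... */
            return XDP_DROP;

        /* ...so 'p' is now an unknown scalar; dereferencing it here would
         * be rejected.  Re-derive and re-check the pointers instead:
         */
        data = (void *)(long)ctx->data;
        data_end = (void *)(long)ctx->data_end;
        p = data;
        if (p + 4 > data_end)
            return XDP_DROP;
        return p[0] ? XDP_PASS : XDP_DROP;
    }

    char _license[] SEC("license") = "GPL";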
...@@ -1451,14 +1640,17 @@ static int check_call(struct bpf_verifier_env *env, int func_id, int insn_idx) ...@@ -1451,14 +1640,17 @@ static int check_call(struct bpf_verifier_env *env, int func_id, int insn_idx)
/* update return register */ /* update return register */
if (fn->ret_type == RET_INTEGER) { if (fn->ret_type == RET_INTEGER) {
regs[BPF_REG_0].type = UNKNOWN_VALUE; /* sets type to SCALAR_VALUE */
mark_reg_unknown(regs, BPF_REG_0);
} else if (fn->ret_type == RET_VOID) { } else if (fn->ret_type == RET_VOID) {
regs[BPF_REG_0].type = NOT_INIT; regs[BPF_REG_0].type = NOT_INIT;
} else if (fn->ret_type == RET_PTR_TO_MAP_VALUE_OR_NULL) { } else if (fn->ret_type == RET_PTR_TO_MAP_VALUE_OR_NULL) {
struct bpf_insn_aux_data *insn_aux; struct bpf_insn_aux_data *insn_aux;
regs[BPF_REG_0].type = PTR_TO_MAP_VALUE_OR_NULL; regs[BPF_REG_0].type = PTR_TO_MAP_VALUE_OR_NULL;
regs[BPF_REG_0].max_value = regs[BPF_REG_0].min_value = 0; /* There is no offset yet applied, variable or fixed */
mark_reg_known_zero(regs, BPF_REG_0);
regs[BPF_REG_0].off = 0;
/* remember map_ptr, so that check_map_access() /* remember map_ptr, so that check_map_access()
* can check 'value_size' boundary of memory access * can check 'value_size' boundary of memory access
* to map element returned from bpf_map_lookup_elem() * to map element returned from bpf_map_lookup_elem()
...@@ -1489,494 +1681,551 @@ static int check_call(struct bpf_verifier_env *env, int func_id, int insn_idx) ...@@ -1489,494 +1681,551 @@ static int check_call(struct bpf_verifier_env *env, int func_id, int insn_idx)
return 0; return 0;
} }
static int check_packet_ptr_add(struct bpf_verifier_env *env, static void coerce_reg_to_32(struct bpf_reg_state *reg)
struct bpf_insn *insn)
{ {
struct bpf_reg_state *regs = env->cur_state.regs; /* clear high 32 bits */
struct bpf_reg_state *dst_reg = &regs[insn->dst_reg]; reg->var_off = tnum_cast(reg->var_off, 4);
struct bpf_reg_state *src_reg = &regs[insn->src_reg]; /* Update bounds */
struct bpf_reg_state tmp_reg; __update_reg_bounds(reg);
s32 imm; }
if (BPF_SRC(insn->code) == BPF_K) {
/* pkt_ptr += imm */
imm = insn->imm;
add_imm:
if (imm < 0) {
verbose("addition of negative constant to packet pointer is not allowed\n");
return -EACCES;
}
if (imm >= MAX_PACKET_OFF ||
imm + dst_reg->off >= MAX_PACKET_OFF) {
verbose("constant %d is too large to add to packet pointer\n",
imm);
return -EACCES;
}
/* a constant was added to pkt_ptr.
* Remember it while keeping the same 'id'
*/
dst_reg->off += imm;
} else {
bool had_id;
if (src_reg->type == PTR_TO_PACKET) {
/* R6=pkt(id=0,off=0,r=62) R7=imm22; r7 += r6 */
tmp_reg = *dst_reg; /* save r7 state */
*dst_reg = *src_reg; /* copy pkt_ptr state r6 into r7 */
src_reg = &tmp_reg; /* pretend it's src_reg state */
/* if the checks below reject it, the copy won't matter,
* since we're rejecting the whole program. If all ok,
* then imm22 state will be added to r7
* and r7 will be pkt(id=0,off=22,r=62) while
* r6 will stay as pkt(id=0,off=0,r=62)
*/
}
if (src_reg->type == CONST_IMM) { static bool signed_add_overflows(s64 a, s64 b)
/* pkt_ptr += reg where reg is known constant */ {
imm = src_reg->imm; /* Do the add in u64, where overflow is well-defined */
goto add_imm; s64 res = (s64)((u64)a + (u64)b);
}
/* disallow pkt_ptr += reg if (b < 0)
* if reg is not uknown_value with guaranteed zero upper bits return res > a;
* otherwise pkt_ptr may overflow and addition will become return res < a;
* subtraction which is not allowed }
*/
if (src_reg->type != UNKNOWN_VALUE) {
verbose("cannot add '%s' to ptr_to_packet\n",
reg_type_str[src_reg->type]);
return -EACCES;
}
if (src_reg->imm < 48) {
verbose("cannot add integer value with %lld upper zero bits to ptr_to_packet\n",
src_reg->imm);
return -EACCES;
}
had_id = (dst_reg->id != 0); static bool signed_sub_overflows(s64 a, s64 b)
{
/* Do the sub in u64, where overflow is well-defined */
s64 res = (s64)((u64)a - (u64)b);
/* dst_reg stays as pkt_ptr type and since some positive if (b < 0)
* integer value was added to the pointer, increment its 'id' return res < a;
*/ return res > a;
dst_reg->id = ++env->id_gen;
/* something was added to pkt_ptr, set range to zero */
dst_reg->aux_off += dst_reg->off;
dst_reg->off = 0;
dst_reg->range = 0;
if (had_id)
dst_reg->aux_off_align = min(dst_reg->aux_off_align,
src_reg->min_align);
else
dst_reg->aux_off_align = src_reg->min_align;
}
return 0;
} }
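Both helpers perform the arithmetic in u64, where wrap-around is well defined, and then compare the result against one operand to detect signed overflow. A standalone userspace check of the same logic (a sketch, not kernel code; it relies on the usual two's-complement conversion from u64 to s64, as the kernel does):

    #include <stdint.h>
    #include <stdio.h>

    typedef int64_t s64;
    typedef uint64_t u64;

    static int signed_add_overflows(s64 a, s64 b)
    {
        /* Do the add in u64, where overflow is well-defined */
        s64 res = (s64)((u64)a + (u64)b);

        if (b < 0)
            return res > a;   /* adding a negative must not increase a */
        return res < a;       /* adding a non-negative must not decrease a */
    }

    int main(void)
    {
        printf("%d\n", signed_add_overflows(INT64_MAX, 1));    /* 1: wraps */
        printf("%d\n", signed_add_overflows(INT64_MAX, -1));   /* 0: fine  */
        printf("%d\n", signed_add_overflows(INT64_MIN, -1));   /* 1: wraps */
        return 0;
    }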
static int evaluate_reg_alu(struct bpf_verifier_env *env, struct bpf_insn *insn) /* Handles arithmetic on a pointer and a scalar: computes new min/max and var_off.
* Caller should also handle BPF_MOV case separately.
* If we return -EACCES, caller may want to try again treating pointer as a
* scalar. So we only emit a diagnostic if !env->allow_ptr_leaks.
*/
static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
struct bpf_insn *insn,
const struct bpf_reg_state *ptr_reg,
const struct bpf_reg_state *off_reg)
{ {
struct bpf_reg_state *regs = env->cur_state.regs; struct bpf_reg_state *regs = env->cur_state.regs, *dst_reg;
struct bpf_reg_state *dst_reg = &regs[insn->dst_reg]; bool known = tnum_is_const(off_reg->var_off);
s64 smin_val = off_reg->smin_value, smax_val = off_reg->smax_value,
smin_ptr = ptr_reg->smin_value, smax_ptr = ptr_reg->smax_value;
u64 umin_val = off_reg->umin_value, umax_val = off_reg->umax_value,
umin_ptr = ptr_reg->umin_value, umax_ptr = ptr_reg->umax_value;
u8 opcode = BPF_OP(insn->code); u8 opcode = BPF_OP(insn->code);
s64 imm_log2; u32 dst = insn->dst_reg;
/* for type == UNKNOWN_VALUE: dst_reg = &regs[dst];
* imm > 0 -> number of zero upper bits
* imm == 0 -> don't track which is the same as all bits can be non-zero
*/
if (BPF_SRC(insn->code) == BPF_X) { if (WARN_ON_ONCE(known && (smin_val != smax_val))) {
struct bpf_reg_state *src_reg = &regs[insn->src_reg]; print_verifier_state(&env->cur_state);
verbose("verifier internal error: known but bad sbounds\n");
if (src_reg->type == UNKNOWN_VALUE && src_reg->imm > 0 && return -EINVAL;
dst_reg->imm && opcode == BPF_ADD) { }
/* dreg += sreg if (WARN_ON_ONCE(known && (umin_val != umax_val))) {
* where both have zero upper bits. Adding them print_verifier_state(&env->cur_state);
* can only result making one more bit non-zero verbose("verifier internal error: known but bad ubounds\n");
* in the larger value. return -EINVAL;
* Ex. 0xffff (imm=48) + 1 (imm=63) = 0x10000 (imm=47)
* 0xffff (imm=48) + 0xffff = 0x1fffe (imm=47)
*/
dst_reg->imm = min(dst_reg->imm, src_reg->imm);
dst_reg->imm--;
return 0;
}
if (src_reg->type == CONST_IMM && src_reg->imm > 0 &&
dst_reg->imm && opcode == BPF_ADD) {
/* dreg += sreg
* where dreg has zero upper bits and sreg is const.
* Adding them can only result making one more bit
* non-zero in the larger value.
*/
imm_log2 = __ilog2_u64((long long)src_reg->imm);
dst_reg->imm = min(dst_reg->imm, 63 - imm_log2);
dst_reg->imm--;
return 0;
}
/* all other cases non supported yet, just mark dst_reg */
dst_reg->imm = 0;
return 0;
} }
/* sign extend 32-bit imm into 64-bit to make sure that if (BPF_CLASS(insn->code) != BPF_ALU64) {
* negative values occupy bit 63. Note ilog2() would have /* 32-bit ALU ops on pointers produce (meaningless) scalars */
* been incorrect, since sizeof(insn->imm) == 4 if (!env->allow_ptr_leaks)
*/ verbose("R%d 32-bit pointer arithmetic prohibited\n",
imm_log2 = __ilog2_u64((long long)insn->imm); dst);
return -EACCES;
if (dst_reg->imm && opcode == BPF_LSH) {
/* reg <<= imm
* if reg was a result of 2 byte load, then its imm == 48
* which means that upper 48 bits are zero and shifting this reg
* left by 4 would mean that upper 44 bits are still zero
*/
dst_reg->imm -= insn->imm;
} else if (dst_reg->imm && opcode == BPF_MUL) {
/* reg *= imm
* if multiplying by 14 subtract 4
* This is conservative calculation of upper zero bits.
* It's not trying to special case insn->imm == 1 or 0 cases
*/
dst_reg->imm -= imm_log2 + 1;
} else if (opcode == BPF_AND) {
/* reg &= imm */
dst_reg->imm = 63 - imm_log2;
} else if (dst_reg->imm && opcode == BPF_ADD) {
/* reg += imm */
dst_reg->imm = min(dst_reg->imm, 63 - imm_log2);
dst_reg->imm--;
} else if (opcode == BPF_RSH) {
/* reg >>= imm
* which means that after right shift, upper bits will be zero
* note that verifier already checked that
* 0 <= imm < 64 for shift insn
*/
dst_reg->imm += insn->imm;
if (unlikely(dst_reg->imm > 64))
/* some dumb code did:
* r2 = *(u32 *)mem;
* r2 >>= 32;
* and all bits are zero now */
dst_reg->imm = 64;
} else {
/* all other alu ops, means that we don't know what will
* happen to the value, mark it with unknown number of zero bits
*/
dst_reg->imm = 0;
} }
if (dst_reg->imm < 0) { if (ptr_reg->type == PTR_TO_MAP_VALUE_OR_NULL) {
/* all 64 bits of the register can contain non-zero bits if (!env->allow_ptr_leaks)
* and such value cannot be added to ptr_to_packet, since it verbose("R%d pointer arithmetic on PTR_TO_MAP_VALUE_OR_NULL prohibited, null-check it first\n",
* may overflow, mark it as unknown to avoid further eval dst);
*/ return -EACCES;
dst_reg->imm = 0; }
if (ptr_reg->type == CONST_PTR_TO_MAP) {
if (!env->allow_ptr_leaks)
verbose("R%d pointer arithmetic on CONST_PTR_TO_MAP prohibited\n",
dst);
return -EACCES;
}
if (ptr_reg->type == PTR_TO_PACKET_END) {
if (!env->allow_ptr_leaks)
verbose("R%d pointer arithmetic on PTR_TO_PACKET_END prohibited\n",
dst);
return -EACCES;
} }
return 0;
}
static int evaluate_reg_imm_alu_unknown(struct bpf_verifier_env *env, /* In case of 'scalar += pointer', dst_reg inherits pointer type and id.
struct bpf_insn *insn) * The id may be overwritten later if we create a new variable offset.
{ */
struct bpf_reg_state *regs = env->cur_state.regs; dst_reg->type = ptr_reg->type;
struct bpf_reg_state *dst_reg = &regs[insn->dst_reg]; dst_reg->id = ptr_reg->id;
struct bpf_reg_state *src_reg = &regs[insn->src_reg];
u8 opcode = BPF_OP(insn->code); switch (opcode) {
s64 imm_log2 = __ilog2_u64((long long)dst_reg->imm); case BPF_ADD:
/* We can take a fixed offset as long as it doesn't overflow
/* BPF_X code with src_reg->type UNKNOWN_VALUE here. */ * the s32 'off' field
if (src_reg->imm > 0 && dst_reg->imm) { */
switch (opcode) { if (known && (ptr_reg->off + smin_val ==
case BPF_ADD: (s64)(s32)(ptr_reg->off + smin_val))) {
/* dreg += sreg /* pointer += K. Accumulate it into fixed offset */
* where both have zero upper bits. Adding them dst_reg->smin_value = smin_ptr;
* can only result making one more bit non-zero dst_reg->smax_value = smax_ptr;
* in the larger value. dst_reg->umin_value = umin_ptr;
* Ex. 0xffff (imm=48) + 1 (imm=63) = 0x10000 (imm=47) dst_reg->umax_value = umax_ptr;
* 0xffff (imm=48) + 0xffff = 0x1fffe (imm=47) dst_reg->var_off = ptr_reg->var_off;
*/ dst_reg->off = ptr_reg->off + smin_val;
dst_reg->imm = min(src_reg->imm, 63 - imm_log2); dst_reg->range = ptr_reg->range;
dst_reg->imm--;
break;
case BPF_AND:
/* dreg &= sreg
* AND can not extend zero bits only shrink
* Ex. 0x00..00ffffff
* & 0x0f..ffffffff
* ----------------
* 0x00..00ffffff
*/
dst_reg->imm = max(src_reg->imm, 63 - imm_log2);
break; break;
case BPF_OR: }
/* dreg |= sreg /* A new variable offset is created. Note that off_reg->off
* OR can only extend zero bits * == 0, since it's a scalar.
* Ex. 0x00..00ffffff * dst_reg gets the pointer type and since some positive
* | 0x0f..ffffffff * integer value was added to the pointer, give it a new 'id'
* ---------------- * if it's a PTR_TO_PACKET.
* 0x0f..00ffffff * this creates a new 'base' pointer, off_reg (variable) gets
*/ * added into the variable offset, and we copy the fixed offset
dst_reg->imm = min(src_reg->imm, 63 - imm_log2); * from ptr_reg.
*/
if (signed_add_overflows(smin_ptr, smin_val) ||
signed_add_overflows(smax_ptr, smax_val)) {
dst_reg->smin_value = S64_MIN;
dst_reg->smax_value = S64_MAX;
} else {
dst_reg->smin_value = smin_ptr + smin_val;
dst_reg->smax_value = smax_ptr + smax_val;
}
if (umin_ptr + umin_val < umin_ptr ||
umax_ptr + umax_val < umax_ptr) {
dst_reg->umin_value = 0;
dst_reg->umax_value = U64_MAX;
} else {
dst_reg->umin_value = umin_ptr + umin_val;
dst_reg->umax_value = umax_ptr + umax_val;
}
dst_reg->var_off = tnum_add(ptr_reg->var_off, off_reg->var_off);
dst_reg->off = ptr_reg->off;
if (ptr_reg->type == PTR_TO_PACKET) {
dst_reg->id = ++env->id_gen;
/* something was added to pkt_ptr, set range to zero */
dst_reg->range = 0;
}
break;
case BPF_SUB:
if (dst_reg == off_reg) {
/* scalar -= pointer. Creates an unknown scalar */
if (!env->allow_ptr_leaks)
verbose("R%d tried to subtract pointer from scalar\n",
dst);
return -EACCES;
}
/* We don't allow subtraction from FP, because (according to the
* test_verifier.c test "invalid fp arithmetic") JITs might not
* be able to deal with it.
*/
if (ptr_reg->type == PTR_TO_STACK) {
if (!env->allow_ptr_leaks)
verbose("R%d subtraction from stack pointer prohibited\n",
dst);
return -EACCES;
}
if (known && (ptr_reg->off - smin_val ==
(s64)(s32)(ptr_reg->off - smin_val))) {
/* pointer -= K. Subtract it from fixed offset */
dst_reg->smin_value = smin_ptr;
dst_reg->smax_value = smax_ptr;
dst_reg->umin_value = umin_ptr;
dst_reg->umax_value = umax_ptr;
dst_reg->var_off = ptr_reg->var_off;
dst_reg->id = ptr_reg->id;
dst_reg->off = ptr_reg->off - smin_val;
dst_reg->range = ptr_reg->range;
break; break;
case BPF_SUB:
case BPF_MUL:
case BPF_RSH:
case BPF_LSH:
/* These may be flushed out later */
default:
mark_reg_unknown_value(regs, insn->dst_reg);
} }
} else { /* A new variable offset is created. If the subtrahend is known
mark_reg_unknown_value(regs, insn->dst_reg); * nonnegative, then any reg->range we had before is still good.
*/
if (signed_sub_overflows(smin_ptr, smax_val) ||
signed_sub_overflows(smax_ptr, smin_val)) {
/* Overflow possible, we know nothing */
dst_reg->smin_value = S64_MIN;
dst_reg->smax_value = S64_MAX;
} else {
dst_reg->smin_value = smin_ptr - smax_val;
dst_reg->smax_value = smax_ptr - smin_val;
}
if (umin_ptr < umax_val) {
/* Overflow possible, we know nothing */
dst_reg->umin_value = 0;
dst_reg->umax_value = U64_MAX;
} else {
/* Cannot overflow (as long as bounds are consistent) */
dst_reg->umin_value = umin_ptr - umax_val;
dst_reg->umax_value = umax_ptr - umin_val;
}
dst_reg->var_off = tnum_sub(ptr_reg->var_off, off_reg->var_off);
dst_reg->off = ptr_reg->off;
if (ptr_reg->type == PTR_TO_PACKET) {
dst_reg->id = ++env->id_gen;
/* something was subtracted from pkt_ptr; if the subtrahend might be
* negative, the checked range may no longer be valid, so clear it */
if (smin_val < 0)
dst_reg->range = 0;
}
break;
case BPF_AND:
case BPF_OR:
case BPF_XOR:
/* bitwise ops on pointers are troublesome, prohibit for now.
* (However, in principle we could allow some cases, e.g.
* ptr &= ~3 which would reduce min_value by 3.)
*/
if (!env->allow_ptr_leaks)
verbose("R%d bitwise operator %s on pointer prohibited\n",
dst, bpf_alu_string[opcode >> 4]);
return -EACCES;
default:
/* other operators (e.g. MUL,LSH) produce non-pointer results */
if (!env->allow_ptr_leaks)
verbose("R%d pointer arithmetic with %s operator prohibited\n",
dst, bpf_alu_string[opcode >> 4]);
return -EACCES;
} }
dst_reg->type = UNKNOWN_VALUE; __update_reg_bounds(dst_reg);
__reg_deduce_bounds(dst_reg);
__reg_bound_offset(dst_reg);
return 0; return 0;
} }
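Whether a known constant is folded into the fixed 'off' field or spills into the variable offset comes down to the s32-truncation test used in the BPF_ADD and BPF_SUB cases above. A standalone sketch of just that test (the helper name fits_fixed_off is hypothetical, and the sketch ignores the rest of the register state):

    #include <stdint.h>
    #include <stdio.h>

    /* A constant can be accumulated into the fixed offset only if the sum
     * still fits in the s32 'off' field; otherwise a new variable offset
     * (and, for packet pointers, a new id) is created.
     */
    static int fits_fixed_off(int32_t cur_off, int64_t k)
    {
        int64_t sum = (int64_t)cur_off + k;

        return sum == (int64_t)(int32_t)sum;
    }

    int main(void)
    {
        printf("%d\n", fits_fixed_off(100, 28));       /* 1: stays fixed  */
        printf("%d\n", fits_fixed_off(0, 1LL << 40));  /* 0: goes variable */
        return 0;
    }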
static int evaluate_reg_imm_alu(struct bpf_verifier_env *env, static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
struct bpf_insn *insn) struct bpf_insn *insn,
struct bpf_reg_state *dst_reg,
struct bpf_reg_state src_reg)
{ {
struct bpf_reg_state *regs = env->cur_state.regs; struct bpf_reg_state *regs = env->cur_state.regs;
struct bpf_reg_state *dst_reg = &regs[insn->dst_reg];
struct bpf_reg_state *src_reg = &regs[insn->src_reg];
u8 opcode = BPF_OP(insn->code); u8 opcode = BPF_OP(insn->code);
u64 dst_imm = dst_reg->imm; bool src_known, dst_known;
s64 smin_val, smax_val;
if (BPF_SRC(insn->code) == BPF_X && src_reg->type == UNKNOWN_VALUE) u64 umin_val, umax_val;
return evaluate_reg_imm_alu_unknown(env, insn);
if (BPF_CLASS(insn->code) != BPF_ALU64) {
/* dst_reg->type == CONST_IMM here. Simulate execution of insns /* 32-bit ALU ops are (32,32)->64 */
* containing ALU ops. Don't care about overflow or negative coerce_reg_to_32(dst_reg);
* values, just add/sub/... them; registers are in u64. coerce_reg_to_32(&src_reg);
*/
if (opcode == BPF_ADD && BPF_SRC(insn->code) == BPF_K) {
dst_imm += insn->imm;
} else if (opcode == BPF_ADD && BPF_SRC(insn->code) == BPF_X &&
src_reg->type == CONST_IMM) {
dst_imm += src_reg->imm;
} else if (opcode == BPF_SUB && BPF_SRC(insn->code) == BPF_K) {
dst_imm -= insn->imm;
} else if (opcode == BPF_SUB && BPF_SRC(insn->code) == BPF_X &&
src_reg->type == CONST_IMM) {
dst_imm -= src_reg->imm;
} else if (opcode == BPF_MUL && BPF_SRC(insn->code) == BPF_K) {
dst_imm *= insn->imm;
} else if (opcode == BPF_MUL && BPF_SRC(insn->code) == BPF_X &&
src_reg->type == CONST_IMM) {
dst_imm *= src_reg->imm;
} else if (opcode == BPF_OR && BPF_SRC(insn->code) == BPF_K) {
dst_imm |= insn->imm;
} else if (opcode == BPF_OR && BPF_SRC(insn->code) == BPF_X &&
src_reg->type == CONST_IMM) {
dst_imm |= src_reg->imm;
} else if (opcode == BPF_AND && BPF_SRC(insn->code) == BPF_K) {
dst_imm &= insn->imm;
} else if (opcode == BPF_AND && BPF_SRC(insn->code) == BPF_X &&
src_reg->type == CONST_IMM) {
dst_imm &= src_reg->imm;
} else if (opcode == BPF_RSH && BPF_SRC(insn->code) == BPF_K) {
dst_imm >>= insn->imm;
} else if (opcode == BPF_RSH && BPF_SRC(insn->code) == BPF_X &&
src_reg->type == CONST_IMM) {
dst_imm >>= src_reg->imm;
} else if (opcode == BPF_LSH && BPF_SRC(insn->code) == BPF_K) {
dst_imm <<= insn->imm;
} else if (opcode == BPF_LSH && BPF_SRC(insn->code) == BPF_X &&
src_reg->type == CONST_IMM) {
dst_imm <<= src_reg->imm;
} else {
mark_reg_unknown_value(regs, insn->dst_reg);
goto out;
}
dst_reg->imm = dst_imm;
out:
return 0;
}
static void check_reg_overflow(struct bpf_reg_state *reg)
{
if (reg->max_value > BPF_REGISTER_MAX_RANGE)
reg->max_value = BPF_REGISTER_MAX_RANGE;
if (reg->min_value < BPF_REGISTER_MIN_RANGE ||
reg->min_value > BPF_REGISTER_MAX_RANGE)
reg->min_value = BPF_REGISTER_MIN_RANGE;
}
static u32 calc_align(u32 imm)
{
if (!imm)
return 1U << 31;
return imm - ((imm - 1) & imm);
}
static void adjust_reg_min_max_vals(struct bpf_verifier_env *env,
struct bpf_insn *insn)
{
struct bpf_reg_state *regs = env->cur_state.regs, *dst_reg;
s64 min_val = BPF_REGISTER_MIN_RANGE;
u64 max_val = BPF_REGISTER_MAX_RANGE;
u8 opcode = BPF_OP(insn->code);
u32 dst_align, src_align;
dst_reg = &regs[insn->dst_reg];
src_align = 0;
if (BPF_SRC(insn->code) == BPF_X) {
check_reg_overflow(&regs[insn->src_reg]);
min_val = regs[insn->src_reg].min_value;
max_val = regs[insn->src_reg].max_value;
/* If the source register is a random pointer then the
* min_value/max_value values represent the range of the known
* accesses into that value, not the actual min/max value of the
* register itself. In this case we have to reset the reg range
* values so we know it is not safe to look at.
*/
if (regs[insn->src_reg].type != CONST_IMM &&
regs[insn->src_reg].type != UNKNOWN_VALUE) {
min_val = BPF_REGISTER_MIN_RANGE;
max_val = BPF_REGISTER_MAX_RANGE;
src_align = 0;
} else {
src_align = regs[insn->src_reg].min_align;
}
} else if (insn->imm < BPF_REGISTER_MAX_RANGE &&
(s64)insn->imm > BPF_REGISTER_MIN_RANGE) {
min_val = max_val = insn->imm;
src_align = calc_align(insn->imm);
}
dst_align = dst_reg->min_align;
/* We don't know anything about what was done to this register, mark it
* as unknown. Also, if both derived bounds came from signed/unsigned
* mixed compares and one side is unbounded, we cannot really do anything
* with them as boundaries cannot be trusted. Thus, arithmetic of two
* regs of such kind will get invalidated bounds on the dst side.
*/
if ((min_val == BPF_REGISTER_MIN_RANGE &&
max_val == BPF_REGISTER_MAX_RANGE) ||
(BPF_SRC(insn->code) == BPF_X &&
((min_val != BPF_REGISTER_MIN_RANGE &&
max_val == BPF_REGISTER_MAX_RANGE) ||
(min_val == BPF_REGISTER_MIN_RANGE &&
max_val != BPF_REGISTER_MAX_RANGE) ||
(dst_reg->min_value != BPF_REGISTER_MIN_RANGE &&
dst_reg->max_value == BPF_REGISTER_MAX_RANGE) ||
(dst_reg->min_value == BPF_REGISTER_MIN_RANGE &&
dst_reg->max_value != BPF_REGISTER_MAX_RANGE)) &&
regs[insn->dst_reg].value_from_signed !=
regs[insn->src_reg].value_from_signed)) {
reset_reg_range_values(regs, insn->dst_reg);
return;
}
/* If one of our values was at the end of our ranges then we can't just
* do our normal operations to the register, we need to set the values
* to the min/max since they are undefined.
*/
if (opcode != BPF_SUB) {
if (min_val == BPF_REGISTER_MIN_RANGE)
dst_reg->min_value = BPF_REGISTER_MIN_RANGE;
if (max_val == BPF_REGISTER_MAX_RANGE)
dst_reg->max_value = BPF_REGISTER_MAX_RANGE;
} }
smin_val = src_reg.smin_value;
smax_val = src_reg.smax_value;
umin_val = src_reg.umin_value;
umax_val = src_reg.umax_value;
src_known = tnum_is_const(src_reg.var_off);
dst_known = tnum_is_const(dst_reg->var_off);
switch (opcode) { switch (opcode) {
case BPF_ADD: case BPF_ADD:
if (dst_reg->min_value != BPF_REGISTER_MIN_RANGE) if (signed_add_overflows(dst_reg->smin_value, smin_val) ||
dst_reg->min_value += min_val; signed_add_overflows(dst_reg->smax_value, smax_val)) {
if (dst_reg->max_value != BPF_REGISTER_MAX_RANGE) dst_reg->smin_value = S64_MIN;
dst_reg->max_value += max_val; dst_reg->smax_value = S64_MAX;
dst_reg->min_align = min(src_align, dst_align); } else {
dst_reg->smin_value += smin_val;
dst_reg->smax_value += smax_val;
}
if (dst_reg->umin_value + umin_val < umin_val ||
dst_reg->umax_value + umax_val < umax_val) {
dst_reg->umin_value = 0;
dst_reg->umax_value = U64_MAX;
} else {
dst_reg->umin_value += umin_val;
dst_reg->umax_value += umax_val;
}
dst_reg->var_off = tnum_add(dst_reg->var_off, src_reg.var_off);
break; break;
case BPF_SUB: case BPF_SUB:
/* If one of our values was at the end of our ranges, then the if (signed_sub_overflows(dst_reg->smin_value, smax_val) ||
* _opposite_ value in the dst_reg goes to the end of our range. signed_sub_overflows(dst_reg->smax_value, smin_val)) {
*/ /* Overflow possible, we know nothing */
if (min_val == BPF_REGISTER_MIN_RANGE) dst_reg->smin_value = S64_MIN;
dst_reg->max_value = BPF_REGISTER_MAX_RANGE; dst_reg->smax_value = S64_MAX;
if (max_val == BPF_REGISTER_MAX_RANGE) } else {
dst_reg->min_value = BPF_REGISTER_MIN_RANGE; dst_reg->smin_value -= smax_val;
if (dst_reg->min_value != BPF_REGISTER_MIN_RANGE) dst_reg->smax_value -= smin_val;
dst_reg->min_value -= max_val; }
if (dst_reg->max_value != BPF_REGISTER_MAX_RANGE) if (dst_reg->umin_value < umax_val) {
dst_reg->max_value -= min_val; /* Overflow possible, we know nothing */
dst_reg->min_align = min(src_align, dst_align); dst_reg->umin_value = 0;
dst_reg->umax_value = U64_MAX;
} else {
/* Cannot overflow (as long as bounds are consistent) */
dst_reg->umin_value -= umax_val;
dst_reg->umax_value -= umin_val;
}
dst_reg->var_off = tnum_sub(dst_reg->var_off, src_reg.var_off);
break; break;
case BPF_MUL: case BPF_MUL:
if (dst_reg->min_value != BPF_REGISTER_MIN_RANGE) dst_reg->var_off = tnum_mul(dst_reg->var_off, src_reg.var_off);
dst_reg->min_value *= min_val; if (smin_val < 0 || dst_reg->smin_value < 0) {
if (dst_reg->max_value != BPF_REGISTER_MAX_RANGE) /* Ain't nobody got time to multiply that sign */
dst_reg->max_value *= max_val; __mark_reg_unbounded(dst_reg);
dst_reg->min_align = max(src_align, dst_align); __update_reg_bounds(dst_reg);
break;
}
/* Both values are positive, so we can work with unsigned and
* copy the result to signed (unless it exceeds S64_MAX).
*/
if (umax_val > U32_MAX || dst_reg->umax_value > U32_MAX) {
/* Potential overflow, we know nothing */
__mark_reg_unbounded(dst_reg);
/* (except what we can learn from the var_off) */
__update_reg_bounds(dst_reg);
break;
}
dst_reg->umin_value *= umin_val;
dst_reg->umax_value *= umax_val;
if (dst_reg->umax_value > S64_MAX) {
/* Overflow possible, we know nothing */
dst_reg->smin_value = S64_MIN;
dst_reg->smax_value = S64_MAX;
} else {
dst_reg->smin_value = dst_reg->umin_value;
dst_reg->smax_value = dst_reg->umax_value;
}
break; break;
case BPF_AND: case BPF_AND:
/* Disallow AND'ing of negative numbers, ain't nobody got time if (src_known && dst_known) {
* for that. Otherwise the minimum is 0 and the max is the max __mark_reg_known(dst_reg, dst_reg->var_off.value &
* value we could AND against. src_reg.var_off.value);
break;
}
/* We get our minimum from the var_off, since that's inherently
* bitwise. Our maximum is the minimum of the operands' maxima.
*/ */
if (min_val < 0) dst_reg->var_off = tnum_and(dst_reg->var_off, src_reg.var_off);
dst_reg->min_value = BPF_REGISTER_MIN_RANGE; dst_reg->umin_value = dst_reg->var_off.value;
else dst_reg->umax_value = min(dst_reg->umax_value, umax_val);
dst_reg->min_value = 0; if (dst_reg->smin_value < 0 || smin_val < 0) {
dst_reg->max_value = max_val; /* Lose signed bounds when ANDing negative numbers,
dst_reg->min_align = max(src_align, dst_align); * ain't nobody got time for that.
*/
dst_reg->smin_value = S64_MIN;
dst_reg->smax_value = S64_MAX;
} else {
/* ANDing two positives gives a positive, so safe to
* cast result into s64.
*/
dst_reg->smin_value = dst_reg->umin_value;
dst_reg->smax_value = dst_reg->umax_value;
}
/* We may learn something more from the var_off */
__update_reg_bounds(dst_reg);
break; break;
case BPF_LSH: case BPF_OR:
/* Gotta have special overflow logic here, if we're shifting if (src_known && dst_known) {
* more than MAX_RANGE then just assume we have an invalid __mark_reg_known(dst_reg, dst_reg->var_off.value |
* range. src_reg.var_off.value);
break;
}
/* We get our maximum from the var_off, and our minimum is the
* maximum of the operands' minima
*/ */
if (min_val > ilog2(BPF_REGISTER_MAX_RANGE)) { dst_reg->var_off = tnum_or(dst_reg->var_off, src_reg.var_off);
dst_reg->min_value = BPF_REGISTER_MIN_RANGE; dst_reg->umin_value = max(dst_reg->umin_value, umin_val);
dst_reg->min_align = 1; dst_reg->umax_value = dst_reg->var_off.value |
dst_reg->var_off.mask;
if (dst_reg->smin_value < 0 || smin_val < 0) {
/* Lose signed bounds when ORing negative numbers,
* ain't nobody got time for that.
*/
dst_reg->smin_value = S64_MIN;
dst_reg->smax_value = S64_MAX;
} else { } else {
if (dst_reg->min_value != BPF_REGISTER_MIN_RANGE) /* ORing two positives gives a positive, so safe to
dst_reg->min_value <<= min_val; * cast result into s64.
if (!dst_reg->min_align) */
dst_reg->min_align = 1; dst_reg->smin_value = dst_reg->umin_value;
dst_reg->min_align <<= min_val; dst_reg->smax_value = dst_reg->umax_value;
} }
if (max_val > ilog2(BPF_REGISTER_MAX_RANGE)) /* We may learn something more from the var_off */
dst_reg->max_value = BPF_REGISTER_MAX_RANGE; __update_reg_bounds(dst_reg);
else if (dst_reg->max_value != BPF_REGISTER_MAX_RANGE)
dst_reg->max_value <<= max_val;
break; break;
case BPF_RSH: case BPF_LSH:
/* RSH by a negative number is undefined, and the BPF_RSH is an if (umax_val > 63) {
* unsigned shift, so make the appropriate casts. /* Shifts greater than 63 are undefined. This includes
* shifts by a negative number.
*/
mark_reg_unknown(regs, insn->dst_reg);
break;
}
/* We lose all sign bit information (except what we can pick
* up from var_off)
*/ */
if (min_val < 0 || dst_reg->min_value < 0) { dst_reg->smin_value = S64_MIN;
dst_reg->min_value = BPF_REGISTER_MIN_RANGE; dst_reg->smax_value = S64_MAX;
/* If we might shift our top bit out, then we know nothing */
if (dst_reg->umax_value > 1ULL << (63 - umax_val)) {
dst_reg->umin_value = 0;
dst_reg->umax_value = U64_MAX;
} else { } else {
dst_reg->min_value = dst_reg->umin_value <<= umin_val;
(u64)(dst_reg->min_value) >> min_val; dst_reg->umax_value <<= umax_val;
}
if (src_known)
dst_reg->var_off = tnum_lshift(dst_reg->var_off, umin_val);
else
dst_reg->var_off = tnum_lshift(tnum_unknown, umin_val);
/* We may learn something more from the var_off */
__update_reg_bounds(dst_reg);
break;
case BPF_RSH:
if (umax_val > 63) {
/* Shifts greater than 63 are undefined. This includes
* shifts by a negative number.
*/
mark_reg_unknown(regs, insn->dst_reg);
break;
} }
if (min_val < 0) { /* BPF_RSH is an unsigned shift, so make the appropriate casts */
dst_reg->min_align = 1; if (dst_reg->smin_value < 0) {
if (umin_val) {
/* Sign bit will be cleared */
dst_reg->smin_value = 0;
} else {
/* Lost sign bit information */
dst_reg->smin_value = S64_MIN;
dst_reg->smax_value = S64_MAX;
}
} else { } else {
dst_reg->min_align >>= (u64) min_val; dst_reg->smin_value =
if (!dst_reg->min_align) (u64)(dst_reg->smin_value) >> umax_val;
dst_reg->min_align = 1;
} }
if (dst_reg->max_value != BPF_REGISTER_MAX_RANGE) if (src_known)
dst_reg->max_value >>= max_val; dst_reg->var_off = tnum_rshift(dst_reg->var_off,
umin_val);
else
dst_reg->var_off = tnum_rshift(tnum_unknown, umin_val);
dst_reg->umin_value >>= umax_val;
dst_reg->umax_value >>= umin_val;
/* We may learn something more from the var_off */
__update_reg_bounds(dst_reg);
break; break;
default: default:
reset_reg_range_values(regs, insn->dst_reg); mark_reg_unknown(regs, insn->dst_reg);
break; break;
} }
check_reg_overflow(dst_reg); __reg_deduce_bounds(dst_reg);
__reg_bound_offset(dst_reg);
return 0;
}
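As a concrete illustration of the scalar rules above, here is a standalone sketch of the unsigned BPF_SUB rule: if the minuend's umin is below the subtrahend's umax the result may wrap, so the bounds collapse; otherwise the new range is [umin - src_umax, umax - src_umin]. (Sketch only; the field names just mirror the code above.)

    #include <stdint.h>
    #include <stdio.h>

    struct bounds { uint64_t umin, umax; };

    static struct bounds sub_bounds(struct bounds dst, struct bounds src)
    {
        struct bounds out;

        if (dst.umin < src.umax) {
            /* Overflow possible, we know nothing */
            out.umin = 0;
            out.umax = UINT64_MAX;
        } else {
            out.umin = dst.umin - src.umax;
            out.umax = dst.umax - src.umin;
        }
        return out;
    }

    int main(void)
    {
        struct bounds r = { 100, 200 }, k = { 10, 20 };
        struct bounds d = sub_bounds(r, k);

        printf("[%llu, %llu]\n", (unsigned long long)d.umin,
               (unsigned long long)d.umax);    /* [80, 190] */
        return 0;
    }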
/* Handles ALU ops other than BPF_END, BPF_NEG and BPF_MOV: computes new min/max
* and var_off.
*/
static int adjust_reg_min_max_vals(struct bpf_verifier_env *env,
struct bpf_insn *insn)
{
struct bpf_reg_state *regs = env->cur_state.regs, *dst_reg, *src_reg;
struct bpf_reg_state *ptr_reg = NULL, off_reg = {0};
u8 opcode = BPF_OP(insn->code);
int rc;
dst_reg = &regs[insn->dst_reg];
src_reg = NULL;
if (dst_reg->type != SCALAR_VALUE)
ptr_reg = dst_reg;
if (BPF_SRC(insn->code) == BPF_X) {
src_reg = &regs[insn->src_reg];
if (src_reg->type != SCALAR_VALUE) {
if (dst_reg->type != SCALAR_VALUE) {
/* Combining two pointers by any ALU op yields
* an arbitrary scalar.
*/
if (!env->allow_ptr_leaks) {
verbose("R%d pointer %s pointer prohibited\n",
insn->dst_reg,
bpf_alu_string[opcode >> 4]);
return -EACCES;
}
mark_reg_unknown(regs, insn->dst_reg);
return 0;
} else {
/* scalar += pointer
* This is legal, but we have to reverse our
* src/dest handling in computing the range
*/
rc = adjust_ptr_min_max_vals(env, insn,
src_reg, dst_reg);
if (rc == -EACCES && env->allow_ptr_leaks) {
/* scalar += unknown scalar */
__mark_reg_unknown(&off_reg);
return adjust_scalar_min_max_vals(
env, insn,
dst_reg, off_reg);
}
return rc;
}
} else if (ptr_reg) {
/* pointer += scalar */
rc = adjust_ptr_min_max_vals(env, insn,
dst_reg, src_reg);
if (rc == -EACCES && env->allow_ptr_leaks) {
/* unknown scalar += scalar */
__mark_reg_unknown(dst_reg);
return adjust_scalar_min_max_vals(
env, insn, dst_reg, *src_reg);
}
return rc;
}
} else {
/* Pretend the src is a reg with a known value, since we only
* need to be able to read from this state.
*/
off_reg.type = SCALAR_VALUE;
__mark_reg_known(&off_reg, insn->imm);
src_reg = &off_reg;
if (ptr_reg) { /* pointer += K */
rc = adjust_ptr_min_max_vals(env, insn,
ptr_reg, src_reg);
if (rc == -EACCES && env->allow_ptr_leaks) {
/* unknown scalar += K */
__mark_reg_unknown(dst_reg);
return adjust_scalar_min_max_vals(
env, insn, dst_reg, off_reg);
}
return rc;
}
}
/* Got here implies adding two SCALAR_VALUEs */
if (WARN_ON_ONCE(ptr_reg)) {
print_verifier_state(&env->cur_state);
verbose("verifier internal error: unexpected ptr_reg\n");
return -EINVAL;
}
if (WARN_ON(!src_reg)) {
print_verifier_state(&env->cur_state);
verbose("verifier internal error: no src_reg\n");
return -EINVAL;
}
return adjust_scalar_min_max_vals(env, insn, dst_reg, *src_reg);
} }
/* check validity of 32-bit and 64-bit arithmetic operations */ /* check validity of 32-bit and 64-bit arithmetic operations */
static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn) static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
{ {
struct bpf_reg_state *regs = env->cur_state.regs, *dst_reg; struct bpf_reg_state *regs = env->cur_state.regs;
u8 opcode = BPF_OP(insn->code); u8 opcode = BPF_OP(insn->code);
int err; int err;
...@@ -2036,11 +2285,6 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn) ...@@ -2036,11 +2285,6 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
if (err) if (err)
return err; return err;
/* we are setting our register to something new, we need to
* reset its range values.
*/
reset_reg_range_values(regs, insn->dst_reg);
if (BPF_SRC(insn->code) == BPF_X) { if (BPF_SRC(insn->code) == BPF_X) {
if (BPF_CLASS(insn->code) == BPF_ALU64) { if (BPF_CLASS(insn->code) == BPF_ALU64) {
/* case: R1 = R2 /* case: R1 = R2
...@@ -2048,24 +2292,24 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn) ...@@ -2048,24 +2292,24 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
*/ */
regs[insn->dst_reg] = regs[insn->src_reg]; regs[insn->dst_reg] = regs[insn->src_reg];
} else { } else {
/* R1 = (u32) R2 */
if (is_pointer_value(env, insn->src_reg)) { if (is_pointer_value(env, insn->src_reg)) {
verbose("R%d partial copy of pointer\n", verbose("R%d partial copy of pointer\n",
insn->src_reg); insn->src_reg);
return -EACCES; return -EACCES;
} }
mark_reg_unknown_value(regs, insn->dst_reg); mark_reg_unknown(regs, insn->dst_reg);
/* high 32 bits are known zero. */
regs[insn->dst_reg].var_off = tnum_cast(
regs[insn->dst_reg].var_off, 4);
__update_reg_bounds(&regs[insn->dst_reg]);
} }
} else { } else {
/* case: R = imm /* case: R = imm
* remember the value we stored into this reg * remember the value we stored into this reg
*/ */
regs[insn->dst_reg].type = CONST_IMM; regs[insn->dst_reg].type = SCALAR_VALUE;
regs[insn->dst_reg].imm = insn->imm; __mark_reg_known(regs + insn->dst_reg, insn->imm);
regs[insn->dst_reg].id = 0;
regs[insn->dst_reg].max_value = insn->imm;
regs[insn->dst_reg].min_value = insn->imm;
regs[insn->dst_reg].min_align = calc_align(insn->imm);
regs[insn->dst_reg].value_from_signed = false;
} }
} else if (opcode > BPF_END) { } else if (opcode > BPF_END) {
...@@ -2116,68 +2360,7 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn) ...@@ -2116,68 +2360,7 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
if (err) if (err)
return err; return err;
dst_reg = &regs[insn->dst_reg]; return adjust_reg_min_max_vals(env, insn);
/* first we want to adjust our ranges. */
adjust_reg_min_max_vals(env, insn);
/* pattern match 'bpf_add Rx, imm' instruction */
if (opcode == BPF_ADD && BPF_CLASS(insn->code) == BPF_ALU64 &&
dst_reg->type == FRAME_PTR && BPF_SRC(insn->code) == BPF_K) {
dst_reg->type = PTR_TO_STACK;
dst_reg->imm = insn->imm;
return 0;
} else if (opcode == BPF_ADD &&
BPF_CLASS(insn->code) == BPF_ALU64 &&
dst_reg->type == PTR_TO_STACK &&
((BPF_SRC(insn->code) == BPF_X &&
regs[insn->src_reg].type == CONST_IMM) ||
BPF_SRC(insn->code) == BPF_K)) {
if (BPF_SRC(insn->code) == BPF_X)
dst_reg->imm += regs[insn->src_reg].imm;
else
dst_reg->imm += insn->imm;
return 0;
} else if (opcode == BPF_ADD &&
BPF_CLASS(insn->code) == BPF_ALU64 &&
(dst_reg->type == PTR_TO_PACKET ||
(BPF_SRC(insn->code) == BPF_X &&
regs[insn->src_reg].type == PTR_TO_PACKET))) {
/* ptr_to_packet += K|X */
return check_packet_ptr_add(env, insn);
} else if (BPF_CLASS(insn->code) == BPF_ALU64 &&
dst_reg->type == UNKNOWN_VALUE &&
env->allow_ptr_leaks) {
/* unknown += K|X */
return evaluate_reg_alu(env, insn);
} else if (BPF_CLASS(insn->code) == BPF_ALU64 &&
dst_reg->type == CONST_IMM &&
env->allow_ptr_leaks) {
/* reg_imm += K|X */
return evaluate_reg_imm_alu(env, insn);
} else if (is_pointer_value(env, insn->dst_reg)) {
verbose("R%d pointer arithmetic prohibited\n",
insn->dst_reg);
return -EACCES;
} else if (BPF_SRC(insn->code) == BPF_X &&
is_pointer_value(env, insn->src_reg)) {
verbose("R%d pointer arithmetic prohibited\n",
insn->src_reg);
return -EACCES;
}
/* If we did pointer math on a map value then just set it to our
* PTR_TO_MAP_VALUE_ADJ type so we can deal with any stores or
* loads to this register appropriately, otherwise just mark the
* register as unknown.
*/
if (env->allow_ptr_leaks &&
BPF_CLASS(insn->code) == BPF_ALU64 && opcode == BPF_ADD &&
(dst_reg->type == PTR_TO_MAP_VALUE ||
dst_reg->type == PTR_TO_MAP_VALUE_ADJ))
dst_reg->type = PTR_TO_MAP_VALUE_ADJ;
else
mark_reg_unknown_value(regs, insn->dst_reg);
} }
return 0; return 0;
...@@ -2189,6 +2372,17 @@ static void find_good_pkt_pointers(struct bpf_verifier_state *state, ...@@ -2189,6 +2372,17 @@ static void find_good_pkt_pointers(struct bpf_verifier_state *state,
struct bpf_reg_state *regs = state->regs, *reg; struct bpf_reg_state *regs = state->regs, *reg;
int i; int i;
if (dst_reg->off < 0)
/* This doesn't give us any range */
return;
if (dst_reg->umax_value > MAX_PACKET_OFF ||
dst_reg->umax_value + dst_reg->off > MAX_PACKET_OFF)
/* Risk of overflow. For instance, ptr + (1<<63) may be less
* than pkt_end, but that's because it's also less than pkt.
*/
return;
/* LLVM can generate two kinds of checks: /* LLVM can generate two kinds of checks:
* *
* Type 1: * Type 1:
...@@ -2219,193 +2413,215 @@ static void find_good_pkt_pointers(struct bpf_verifier_state *state, ...@@ -2219,193 +2413,215 @@ static void find_good_pkt_pointers(struct bpf_verifier_state *state,
* so that range of bytes [r3, r3 + 8) is safe to access. * so that range of bytes [r3, r3 + 8) is safe to access.
*/ */
/* If our ids match, then we must have the same max_value. And we
* don't care about the other reg's fixed offset, since if it's too big
* the range won't allow anything.
* dst_reg->off is known < MAX_PACKET_OFF, therefore it fits in a u16.
*/
for (i = 0; i < MAX_BPF_REG; i++) for (i = 0; i < MAX_BPF_REG; i++)
if (regs[i].type == PTR_TO_PACKET && regs[i].id == dst_reg->id) if (regs[i].type == PTR_TO_PACKET && regs[i].id == dst_reg->id)
/* keep the maximum range already checked */ /* keep the maximum range already checked */
regs[i].range = max(regs[i].range, dst_reg->off); regs[i].range = max_t(u16, regs[i].range, dst_reg->off);
for (i = 0; i < MAX_BPF_STACK; i += BPF_REG_SIZE) { for (i = 0; i < MAX_BPF_STACK; i += BPF_REG_SIZE) {
if (state->stack_slot_type[i] != STACK_SPILL) if (state->stack_slot_type[i] != STACK_SPILL)
continue; continue;
reg = &state->spilled_regs[i / BPF_REG_SIZE]; reg = &state->spilled_regs[i / BPF_REG_SIZE];
if (reg->type == PTR_TO_PACKET && reg->id == dst_reg->id) if (reg->type == PTR_TO_PACKET && reg->id == dst_reg->id)
reg->range = max(reg->range, dst_reg->off); reg->range = max_t(u16, reg->range, dst_reg->off);
} }
} }
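The id-based range propagation above is what lets a single comparison against pkt_end cover every copy of the same packet pointer. A minimal restricted-C illustration of the "Type 1" check (assuming libbpf's bpf_helpers.h and an XDP attach point; names are only for the example):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    SEC("xdp")
    int range_propagation(struct xdp_md *ctx)
    {
        void *data = (void *)(long)ctx->data;
        void *data_end = (void *)(long)ctx->data_end;
        __u8 *p = data;
        __u8 *q = p;                     /* copy shares the same id */

        if (p + 8 > data_end)            /* p (and q) get range=8 ... */
            return XDP_DROP;
        return q[7] ? XDP_PASS : XDP_DROP;   /* ... so this read is in bounds */
    }

    char _license[] SEC("license") = "GPL";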
/* Adjusts the register min/max values in the case that the dst_reg is the /* Adjusts the register min/max values in the case that the dst_reg is the
* variable register that we are working on, and src_reg is a constant or we're * variable register that we are working on, and src_reg is a constant or we're
* simply doing a BPF_K check. * simply doing a BPF_K check.
* In JEQ/JNE cases we also adjust the var_off values.
*/ */
static void reg_set_min_max(struct bpf_reg_state *true_reg, static void reg_set_min_max(struct bpf_reg_state *true_reg,
struct bpf_reg_state *false_reg, u64 val, struct bpf_reg_state *false_reg, u64 val,
u8 opcode) u8 opcode)
{ {
bool value_from_signed = true; /* If the dst_reg is a pointer, we can't learn anything about its
bool is_range = true; * variable offset from the compare (unless src_reg were a pointer into
* the same object, but we don't bother with that.
* Since false_reg and true_reg have the same type by construction, we
* only need to check one of them for pointerness.
*/
if (__is_pointer_value(false, false_reg))
return;
switch (opcode) { switch (opcode) {
case BPF_JEQ: case BPF_JEQ:
/* If this is false then we know nothing Jon Snow, but if it is /* If this is false then we know nothing Jon Snow, but if it is
* true then we know for sure. * true then we know for sure.
*/ */
true_reg->max_value = true_reg->min_value = val; __mark_reg_known(true_reg, val);
is_range = false;
break; break;
case BPF_JNE: case BPF_JNE:
/* If this is true we know nothing Jon Snow, but if it is false /* If this is true we know nothing Jon Snow, but if it is false
* we know the value for sure; * we know the value for sure;
*/ */
false_reg->max_value = false_reg->min_value = val; __mark_reg_known(false_reg, val);
is_range = false;
break; break;
case BPF_JGT: case BPF_JGT:
value_from_signed = false; false_reg->umax_value = min(false_reg->umax_value, val);
/* fallthrough */ true_reg->umin_value = max(true_reg->umin_value, val + 1);
break;
case BPF_JSGT: case BPF_JSGT:
if (true_reg->value_from_signed != value_from_signed) false_reg->smax_value = min_t(s64, false_reg->smax_value, val);
reset_reg_range_values(true_reg, 0); true_reg->smin_value = max_t(s64, true_reg->smin_value, val + 1);
if (false_reg->value_from_signed != value_from_signed)
reset_reg_range_values(false_reg, 0);
if (opcode == BPF_JGT) {
/* Unsigned comparison, the minimum value is 0. */
false_reg->min_value = 0;
}
/* If this is false then we know the maximum val is val,
* otherwise we know the min val is val+1.
*/
false_reg->max_value = val;
false_reg->value_from_signed = value_from_signed;
true_reg->min_value = val + 1;
true_reg->value_from_signed = value_from_signed;
break; break;
case BPF_JGE: case BPF_JGE:
value_from_signed = false; false_reg->umax_value = min(false_reg->umax_value, val - 1);
/* fallthrough */ true_reg->umin_value = max(true_reg->umin_value, val);
break;
case BPF_JSGE: case BPF_JSGE:
if (true_reg->value_from_signed != value_from_signed) false_reg->smax_value = min_t(s64, false_reg->smax_value, val - 1);
reset_reg_range_values(true_reg, 0); true_reg->smin_value = max_t(s64, true_reg->smin_value, val);
if (false_reg->value_from_signed != value_from_signed)
reset_reg_range_values(false_reg, 0);
if (opcode == BPF_JGE) {
/* Unsigned comparison, the minimum value is 0. */
false_reg->min_value = 0;
}
/* If this is false then we know the maximum value is val - 1,
* otherwise we know the mimimum value is val.
*/
false_reg->max_value = val - 1;
false_reg->value_from_signed = value_from_signed;
true_reg->min_value = val;
true_reg->value_from_signed = value_from_signed;
break; break;
default: default:
break; break;
} }
check_reg_overflow(false_reg); __reg_deduce_bounds(false_reg);
check_reg_overflow(true_reg); __reg_deduce_bounds(true_reg);
if (is_range) { /* We might have learned some bits from the bounds. */
if (__is_pointer_value(false, false_reg)) __reg_bound_offset(false_reg);
reset_reg_range_values(false_reg, 0); __reg_bound_offset(true_reg);
if (__is_pointer_value(false, true_reg)) /* Intersecting with the old var_off might have improved our bounds
reset_reg_range_values(true_reg, 0); * slightly. e.g. if umax was 0x7f...f and var_off was (0; 0xf...fc),
} * then new var_off is (0; 0x7f...fc) which improves our umax.
*/
__update_reg_bounds(false_reg);
__update_reg_bounds(true_reg);
} }
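For example, with a BPF_JGT against an unsigned constant, the false branch caps umax at val and the true branch raises umin to val + 1, exactly as above. A standalone sketch of just that case (assumes val < UINT64_MAX so val + 1 cannot wrap):

    #include <stdint.h>
    #include <stdio.h>

    struct b { uint64_t umin, umax; };

    /* 'if (reg > val)': refine the copy of reg seen on each outcome */
    static void jgt_refine(struct b *t, struct b *f, uint64_t val)
    {
        if (f->umax > val)
            f->umax = val;          /* false branch: reg <= val */
        if (t->umin < val + 1)
            t->umin = val + 1;      /* true branch:  reg >= val + 1 */
    }

    int main(void)
    {
        struct b t = { 0, UINT64_MAX }, f = t;

        jgt_refine(&t, &f, 63);
        printf("false: [%llu, %llu]  true: [%llu, %llu]\n",
               (unsigned long long)f.umin, (unsigned long long)f.umax,
               (unsigned long long)t.umin, (unsigned long long)t.umax);
        /* false: [0, 63]  true: [64, 18446744073709551615] */
        return 0;
    }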
/* Same as above, but for the case that dst_reg is a CONST_IMM reg and src_reg /* Same as above, but for the case that dst_reg holds a constant and src_reg is
* is the variable reg. * the variable reg.
*/ */
static void reg_set_min_max_inv(struct bpf_reg_state *true_reg, static void reg_set_min_max_inv(struct bpf_reg_state *true_reg,
struct bpf_reg_state *false_reg, u64 val, struct bpf_reg_state *false_reg, u64 val,
u8 opcode) u8 opcode)
{ {
bool value_from_signed = true; if (__is_pointer_value(false, false_reg))
bool is_range = true; return;
switch (opcode) { switch (opcode) {
case BPF_JEQ: case BPF_JEQ:
/* If this is false then we know nothing Jon Snow, but if it is /* If this is false then we know nothing Jon Snow, but if it is
* true then we know for sure. * true then we know for sure.
*/ */
true_reg->max_value = true_reg->min_value = val; __mark_reg_known(true_reg, val);
is_range = false;
break; break;
case BPF_JNE: case BPF_JNE:
/* If this is true we know nothing Jon Snow, but if it is false /* If this is true we know nothing Jon Snow, but if it is false
* we know the value for sure; * we know the value for sure;
*/ */
false_reg->max_value = false_reg->min_value = val; __mark_reg_known(false_reg, val);
is_range = false;
break; break;
case BPF_JGT: case BPF_JGT:
value_from_signed = false; true_reg->umax_value = min(true_reg->umax_value, val - 1);
/* fallthrough */ false_reg->umin_value = max(false_reg->umin_value, val);
break;
case BPF_JSGT: case BPF_JSGT:
if (true_reg->value_from_signed != value_from_signed) true_reg->smax_value = min_t(s64, true_reg->smax_value, val - 1);
reset_reg_range_values(true_reg, 0); false_reg->smin_value = max_t(s64, false_reg->smin_value, val);
if (false_reg->value_from_signed != value_from_signed)
reset_reg_range_values(false_reg, 0);
if (opcode == BPF_JGT) {
/* Unsigned comparison, the minimum value is 0. */
true_reg->min_value = 0;
}
/*
 * If this is false, then the val is <= the register; if it is
 * true, the register is <= the val.
*/
false_reg->min_value = val;
false_reg->value_from_signed = value_from_signed;
true_reg->max_value = val - 1;
true_reg->value_from_signed = value_from_signed;
break; break;
case BPF_JGE: case BPF_JGE:
value_from_signed = false; true_reg->umax_value = min(true_reg->umax_value, val);
/* fallthrough */ false_reg->umin_value = max(false_reg->umin_value, val + 1);
break;
case BPF_JSGE: case BPF_JSGE:
if (true_reg->value_from_signed != value_from_signed) true_reg->smax_value = min_t(s64, true_reg->smax_value, val);
reset_reg_range_values(true_reg, 0); false_reg->smin_value = max_t(s64, false_reg->smin_value, val + 1);
if (false_reg->value_from_signed != value_from_signed)
reset_reg_range_values(false_reg, 0);
if (opcode == BPF_JGE) {
/* Unsigned comparison, the minimum value is 0. */
true_reg->min_value = 0;
}
/* If this is false then constant < register, if it is true then
* the register < constant.
*/
false_reg->min_value = val + 1;
false_reg->value_from_signed = value_from_signed;
true_reg->max_value = val;
true_reg->value_from_signed = value_from_signed;
break; break;
default: default:
break; break;
} }
check_reg_overflow(false_reg); __reg_deduce_bounds(false_reg);
check_reg_overflow(true_reg); __reg_deduce_bounds(true_reg);
if (is_range) { /* We might have learned some bits from the bounds. */
if (__is_pointer_value(false, false_reg)) __reg_bound_offset(false_reg);
reset_reg_range_values(false_reg, 0); __reg_bound_offset(true_reg);
if (__is_pointer_value(false, true_reg)) /* Intersecting with the old var_off might have improved our bounds
reset_reg_range_values(true_reg, 0); * slightly. e.g. if umax was 0x7f...f and var_off was (0; 0xf...fc),
* then new var_off is (0; 0x7f...fc) which improves our umax.
*/
__update_reg_bounds(false_reg);
__update_reg_bounds(true_reg);
}
/* Regs are known to be equal, so intersect their min/max/var_off */
static void __reg_combine_min_max(struct bpf_reg_state *src_reg,
struct bpf_reg_state *dst_reg)
{
src_reg->umin_value = dst_reg->umin_value = max(src_reg->umin_value,
dst_reg->umin_value);
src_reg->umax_value = dst_reg->umax_value = min(src_reg->umax_value,
dst_reg->umax_value);
src_reg->smin_value = dst_reg->smin_value = max(src_reg->smin_value,
dst_reg->smin_value);
src_reg->smax_value = dst_reg->smax_value = min(src_reg->smax_value,
dst_reg->smax_value);
src_reg->var_off = dst_reg->var_off = tnum_intersect(src_reg->var_off,
dst_reg->var_off);
/* We might have learned new bounds from the var_off. */
__update_reg_bounds(src_reg);
__update_reg_bounds(dst_reg);
/* We might have learned something about the sign bit. */
__reg_deduce_bounds(src_reg);
__reg_deduce_bounds(dst_reg);
/* We might have learned some bits from the bounds. */
__reg_bound_offset(src_reg);
__reg_bound_offset(dst_reg);
/* Intersecting with the old var_off might have improved our bounds
* slightly. e.g. if umax was 0x7f...f and var_off was (0; 0xf...fc),
* then new var_off is (0; 0x7f...fc) which improves our umax.
*/
__update_reg_bounds(src_reg);
__update_reg_bounds(dst_reg);
}
static void reg_combine_min_max(struct bpf_reg_state *true_src,
struct bpf_reg_state *true_dst,
struct bpf_reg_state *false_src,
struct bpf_reg_state *false_dst,
u8 opcode)
{
switch (opcode) {
case BPF_JEQ:
__reg_combine_min_max(true_src, true_dst);
break;
case BPF_JNE:
__reg_combine_min_max(false_src, false_dst);
break;
} }
} }
static void mark_map_reg(struct bpf_reg_state *regs, u32 regno, u32 id, static void mark_map_reg(struct bpf_reg_state *regs, u32 regno, u32 id,
enum bpf_reg_type type) bool is_null)
{ {
struct bpf_reg_state *reg = &regs[regno]; struct bpf_reg_state *reg = &regs[regno];
if (reg->type == PTR_TO_MAP_VALUE_OR_NULL && reg->id == id) { if (reg->type == PTR_TO_MAP_VALUE_OR_NULL && reg->id == id) {
if (type == UNKNOWN_VALUE) { /* Old offset (both fixed and variable parts) should
__mark_reg_unknown_value(regs, regno); * have been known-zero, because we don't allow pointer
* arithmetic on pointers that might be NULL.
*/
if (WARN_ON_ONCE(reg->smin_value || reg->smax_value ||
!tnum_equals_const(reg->var_off, 0) ||
reg->off)) {
__mark_reg_known_zero(reg);
reg->off = 0;
}
if (is_null) {
reg->type = SCALAR_VALUE;
} else if (reg->map_ptr->inner_map_meta) { } else if (reg->map_ptr->inner_map_meta) {
reg->type = CONST_PTR_TO_MAP; reg->type = CONST_PTR_TO_MAP;
reg->map_ptr = reg->map_ptr->inner_map_meta; reg->map_ptr = reg->map_ptr->inner_map_meta;
} else { } else {
reg->type = type; reg->type = PTR_TO_MAP_VALUE;
} }
/* We don't need id from this point onwards anymore, thus we /* We don't need id from this point onwards anymore, thus we
* should better reset it, so that state pruning has chances * should better reset it, so that state pruning has chances
...@@ -2419,19 +2635,19 @@ static void mark_map_reg(struct bpf_reg_state *regs, u32 regno, u32 id, ...@@ -2419,19 +2635,19 @@ static void mark_map_reg(struct bpf_reg_state *regs, u32 regno, u32 id,
* be folded together at some point. * be folded together at some point.
*/ */
static void mark_map_regs(struct bpf_verifier_state *state, u32 regno, static void mark_map_regs(struct bpf_verifier_state *state, u32 regno,
enum bpf_reg_type type) bool is_null)
{ {
struct bpf_reg_state *regs = state->regs; struct bpf_reg_state *regs = state->regs;
u32 id = regs[regno].id; u32 id = regs[regno].id;
int i; int i;
for (i = 0; i < MAX_BPF_REG; i++) for (i = 0; i < MAX_BPF_REG; i++)
mark_map_reg(regs, i, id, type); mark_map_reg(regs, i, id, is_null);
for (i = 0; i < MAX_BPF_STACK; i += BPF_REG_SIZE) { for (i = 0; i < MAX_BPF_STACK; i += BPF_REG_SIZE) {
if (state->stack_slot_type[i] != STACK_SPILL) if (state->stack_slot_type[i] != STACK_SPILL)
continue; continue;
mark_map_reg(state->spilled_regs, i / BPF_REG_SIZE, id, type); mark_map_reg(state->spilled_regs, i / BPF_REG_SIZE, id, is_null);
} }
} }
...@@ -2481,7 +2697,8 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env, ...@@ -2481,7 +2697,8 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
/* detect if R == 0 where R was initialized to zero earlier */ /* detect if R == 0 where R was initialized to zero earlier */
if (BPF_SRC(insn->code) == BPF_K && if (BPF_SRC(insn->code) == BPF_K &&
(opcode == BPF_JEQ || opcode == BPF_JNE) && (opcode == BPF_JEQ || opcode == BPF_JNE) &&
dst_reg->type == CONST_IMM && dst_reg->imm == insn->imm) { dst_reg->type == SCALAR_VALUE &&
tnum_equals_const(dst_reg->var_off, insn->imm)) {
if (opcode == BPF_JEQ) { if (opcode == BPF_JEQ) {
/* if (imm == imm) goto pc+off; /* if (imm == imm) goto pc+off;
* only follow the goto, ignore fall-through * only follow the goto, ignore fall-through
...@@ -2503,17 +2720,30 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env, ...@@ -2503,17 +2720,30 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
/* detect if we are comparing against a constant value so we can adjust /* detect if we are comparing against a constant value so we can adjust
* our min/max values for our dst register. * our min/max values for our dst register.
* this is only legit if both are scalars (or pointers to the same
* object, I suppose, but we don't support that right now), because
* otherwise the different base pointers mean the offsets aren't
* comparable.
*/ */
if (BPF_SRC(insn->code) == BPF_X) { if (BPF_SRC(insn->code) == BPF_X) {
if (regs[insn->src_reg].type == CONST_IMM) if (dst_reg->type == SCALAR_VALUE &&
reg_set_min_max(&other_branch->regs[insn->dst_reg], regs[insn->src_reg].type == SCALAR_VALUE) {
dst_reg, regs[insn->src_reg].imm, if (tnum_is_const(regs[insn->src_reg].var_off))
opcode); reg_set_min_max(&other_branch->regs[insn->dst_reg],
else if (dst_reg->type == CONST_IMM) dst_reg, regs[insn->src_reg].var_off.value,
reg_set_min_max_inv(&other_branch->regs[insn->src_reg], opcode);
&regs[insn->src_reg], dst_reg->imm, else if (tnum_is_const(dst_reg->var_off))
opcode); reg_set_min_max_inv(&other_branch->regs[insn->src_reg],
} else { &regs[insn->src_reg],
dst_reg->var_off.value, opcode);
else if (opcode == BPF_JEQ || opcode == BPF_JNE)
/* Comparing for equality, we can combine knowledge */
reg_combine_min_max(&other_branch->regs[insn->src_reg],
&other_branch->regs[insn->dst_reg],
&regs[insn->src_reg],
&regs[insn->dst_reg], opcode);
}
} else if (dst_reg->type == SCALAR_VALUE) {
reg_set_min_max(&other_branch->regs[insn->dst_reg], reg_set_min_max(&other_branch->regs[insn->dst_reg],
dst_reg, insn->imm, opcode); dst_reg, insn->imm, opcode);
} }
...@@ -2525,10 +2755,8 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env, ...@@ -2525,10 +2755,8 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
/* Mark all identical map registers in each branch as either /* Mark all identical map registers in each branch as either
* safe or unknown depending R == 0 or R != 0 conditional. * safe or unknown depending R == 0 or R != 0 conditional.
*/ */
mark_map_regs(this_branch, insn->dst_reg, mark_map_regs(this_branch, insn->dst_reg, opcode == BPF_JNE);
opcode == BPF_JEQ ? PTR_TO_MAP_VALUE : UNKNOWN_VALUE); mark_map_regs(other_branch, insn->dst_reg, opcode == BPF_JEQ);
mark_map_regs(other_branch, insn->dst_reg,
opcode == BPF_JEQ ? UNKNOWN_VALUE : PTR_TO_MAP_VALUE);
} else if (BPF_SRC(insn->code) == BPF_X && opcode == BPF_JGT && } else if (BPF_SRC(insn->code) == BPF_X && opcode == BPF_JGT &&
dst_reg->type == PTR_TO_PACKET && dst_reg->type == PTR_TO_PACKET &&
regs[insn->src_reg].type == PTR_TO_PACKET_END) { regs[insn->src_reg].type == PTR_TO_PACKET_END) {
...@@ -2576,9 +2804,8 @@ static int check_ld_imm(struct bpf_verifier_env *env, struct bpf_insn *insn) ...@@ -2576,9 +2804,8 @@ static int check_ld_imm(struct bpf_verifier_env *env, struct bpf_insn *insn)
if (insn->src_reg == 0) { if (insn->src_reg == 0) {
u64 imm = ((u64)(insn + 1)->imm << 32) | (u32)insn->imm; u64 imm = ((u64)(insn + 1)->imm << 32) | (u32)insn->imm;
regs[insn->dst_reg].type = CONST_IMM; regs[insn->dst_reg].type = SCALAR_VALUE;
regs[insn->dst_reg].imm = imm; __mark_reg_known(&regs[insn->dst_reg], imm);
regs[insn->dst_reg].id = 0;
return 0; return 0;
} }
...@@ -2659,7 +2886,7 @@ static int check_ld_abs(struct bpf_verifier_env *env, struct bpf_insn *insn) ...@@ -2659,7 +2886,7 @@ static int check_ld_abs(struct bpf_verifier_env *env, struct bpf_insn *insn)
/* mark destination R0 register as readable, since it contains /* mark destination R0 register as readable, since it contains
* the value fetched from the packet * the value fetched from the packet
*/ */
regs[BPF_REG_0].type = UNKNOWN_VALUE; mark_reg_unknown(regs, BPF_REG_0);
return 0; return 0;
} }
...@@ -2862,57 +3089,149 @@ static int check_cfg(struct bpf_verifier_env *env) ...@@ -2862,57 +3089,149 @@ static int check_cfg(struct bpf_verifier_env *env)
return ret; return ret;
} }
/* the following conditions reduce the number of explored insns /* check %cur's range satisfies %old's */
* from ~140k to ~80k for ultra large programs that use a lot of ptr_to_packet static bool range_within(struct bpf_reg_state *old,
struct bpf_reg_state *cur)
{
return old->umin_value <= cur->umin_value &&
old->umax_value >= cur->umax_value &&
old->smin_value <= cur->smin_value &&
old->smax_value >= cur->smax_value;
}
/* Maximum number of register states that can exist at once */
#define ID_MAP_SIZE (MAX_BPF_REG + MAX_BPF_STACK / BPF_REG_SIZE)
struct idpair {
u32 old;
u32 cur;
};
/* If in the old state two registers had the same id, then they need to have
* the same id in the new state as well. But that id could be different from
* the old state, so we need to track the mapping from old to new ids.
* Once we have seen that, say, a reg with old id 5 had new id 9, any subsequent
* regs with old id 5 must also have new id 9 for the new state to be safe. But
* regs with a different old id could still have new id 9, we don't care about
* that.
* So we look through our idmap to see if this old id has been seen before. If
* so, we require the new id to match; otherwise, we add the id pair to the map.
*/ */
static bool compare_ptrs_to_packet(struct bpf_verifier_env *env, static bool check_ids(u32 old_id, u32 cur_id, struct idpair *idmap)
struct bpf_reg_state *old,
struct bpf_reg_state *cur)
{ {
if (old->id != cur->id) unsigned int i;
return false;
for (i = 0; i < ID_MAP_SIZE; i++) {
if (!idmap[i].old) {
/* Reached an empty slot; haven't seen this id before */
idmap[i].old = old_id;
idmap[i].cur = cur_id;
return true;
}
if (idmap[i].old == old_id)
return idmap[i].cur == cur_id;
}
/* We ran out of idmap slots, which should be impossible */
WARN_ON_ONCE(1);
return false;
}
/* old ptr_to_packet is more conservative, since it allows smaller /* Returns true if (rold safe implies rcur safe) */
* range. Ex: static bool regsafe(struct bpf_reg_state *rold,
* old(off=0,r=10) is equal to cur(off=0,r=20), because struct bpf_reg_state *rcur,
* old(off=0,r=10) means that with range=10 the verifier proceeded bool varlen_map_access, struct idpair *idmap)
* further and found no issues with the program. Now we're in the same {
* spot with cur(off=0,r=20), so we're safe too, since anything further if (memcmp(rold, rcur, sizeof(*rold)) == 0)
* will only be looking at most 10 bytes after this pointer.
*/
if (old->off == cur->off && old->range < cur->range)
return true; return true;
 * old(off=20,r=10) is equal to cur(off=22,r=22 or 5 or 0) if (rold->type == NOT_INIT)
* since both cannot be used for packet access and safe(old) /* explored state can't have used this */
* pointer has smaller off that could be used for further
* 'if (ptr > data_end)' check
* Ex:
* old(off=20,r=10) and cur(off=22,r=22) and cur(off=22,r=0) mean
* that we cannot access the packet.
* The safe range is:
* [ptr, ptr + range - off)
* so whenever off >=range, it means no safe bytes from this pointer.
* When comparing old->off <= cur->off, it means that older code
* went with smaller offset and that offset was later
* used to figure out the safe range after 'if (ptr > data_end)' check
* Say, 'old' state was explored like:
* ... R3(off=0, r=0)
* R4 = R3 + 20
* ... now R4(off=20,r=0) <-- here
* if (R4 > data_end)
* ... R4(off=20,r=20), R3(off=0,r=20) and R3 can be used to access.
* ... the code further went all the way to bpf_exit.
* Now the 'cur' state at the mark 'here' has R4(off=30,r=0).
* old_R4(off=20,r=0) equal to cur_R4(off=30,r=0), since if the verifier
* goes further, such cur_R4 will give larger safe packet range after
* 'if (R4 > data_end)' and all further insn were already good with r=20,
* so they will be good with r=30 and we can prune the search.
*/
if (!env->strict_alignment && old->off <= cur->off &&
old->off >= old->range && cur->off >= cur->range)
return true; return true;
if (rcur->type == NOT_INIT)
return false;
switch (rold->type) {
case SCALAR_VALUE:
if (rcur->type == SCALAR_VALUE) {
/* new val must satisfy old val knowledge */
return range_within(rold, rcur) &&
tnum_in(rold->var_off, rcur->var_off);
} else {
/* if we knew anything about the old value, we're not
* equal, because we can't know anything about the
* scalar value of the pointer in the new value.
*/
return rold->umin_value == 0 &&
rold->umax_value == U64_MAX &&
rold->smin_value == S64_MIN &&
rold->smax_value == S64_MAX &&
tnum_is_unknown(rold->var_off);
}
case PTR_TO_MAP_VALUE:
if (varlen_map_access) {
/* If the new min/max/var_off satisfy the old ones and
* everything else matches, we are OK.
* We don't care about the 'id' value, because nothing
* uses it for PTR_TO_MAP_VALUE (only for ..._OR_NULL)
*/
return memcmp(rold, rcur, offsetof(struct bpf_reg_state, id)) == 0 &&
range_within(rold, rcur) &&
tnum_in(rold->var_off, rcur->var_off);
} else {
/* If the ranges/var_off were not the same, but
* everything else was and we didn't do a variable
* access into a map then we are a-ok.
*/
return memcmp(rold, rcur, offsetof(struct bpf_reg_state, id)) == 0;
}
case PTR_TO_MAP_VALUE_OR_NULL:
/* a PTR_TO_MAP_VALUE could be safe to use as a
* PTR_TO_MAP_VALUE_OR_NULL into the same map.
* However, if the old PTR_TO_MAP_VALUE_OR_NULL then got NULL-
* checked, doing so could have affected others with the same
* id, and we can't check for that because we lost the id when
* we converted to a PTR_TO_MAP_VALUE.
*/
if (rcur->type != PTR_TO_MAP_VALUE_OR_NULL)
return false;
if (memcmp(rold, rcur, offsetof(struct bpf_reg_state, id)))
return false;
/* Check our ids match any regs they're supposed to */
return check_ids(rold->id, rcur->id, idmap);
case PTR_TO_PACKET:
if (rcur->type != PTR_TO_PACKET)
return false;
/* We must have at least as much range as the old ptr
* did, so that any accesses which were safe before are
* still safe. This is true even if old range < old off,
* since someone could have accessed through (ptr - k), or
* even done ptr -= k in a register, to get a safe access.
*/
if (rold->range > rcur->range)
return false;
/* If the offsets don't match, we can't trust our alignment;
* nor can we be sure that we won't fall out of range.
*/
if (rold->off != rcur->off)
return false;
/* id relations must be preserved */
if (rold->id && !check_ids(rold->id, rcur->id, idmap))
return false;
/* new val must satisfy old val knowledge */
return range_within(rold, rcur) &&
tnum_in(rold->var_off, rcur->var_off);
case PTR_TO_CTX:
case CONST_PTR_TO_MAP:
case PTR_TO_STACK:
case PTR_TO_PACKET_END:
/* Only valid matches are exact, which memcmp() above
* would have accepted
*/
default:
/* Don't know what's going on, just say it's not safe */
return false;
}
/* Shouldn't get here; if we do, say it's not safe */
WARN_ON_ONCE(1);
return false; return false;
} }
...@@ -2947,43 +3266,19 @@ static bool states_equal(struct bpf_verifier_env *env, ...@@ -2947,43 +3266,19 @@ static bool states_equal(struct bpf_verifier_env *env,
struct bpf_verifier_state *cur) struct bpf_verifier_state *cur)
{ {
bool varlen_map_access = env->varlen_map_value_access; bool varlen_map_access = env->varlen_map_value_access;
struct bpf_reg_state *rold, *rcur; struct idpair *idmap;
bool ret = false;
int i; int i;
for (i = 0; i < MAX_BPF_REG; i++) { idmap = kcalloc(ID_MAP_SIZE, sizeof(struct idpair), GFP_KERNEL);
rold = &old->regs[i]; /* If we failed to allocate the idmap, just say it's not safe */
rcur = &cur->regs[i]; if (!idmap)
if (memcmp(rold, rcur, sizeof(*rold)) == 0)
continue;
/* If the ranges were not the same, but everything else was and
* we didn't do a variable access into a map then we are a-ok.
*/
if (!varlen_map_access &&
memcmp(rold, rcur, offsetofend(struct bpf_reg_state, id)) == 0)
continue;
/* If we didn't map access then again we don't care about the
* mismatched range values and it's ok if our old type was
* UNKNOWN and we didn't go to a NOT_INIT'ed reg.
*/
if (rold->type == NOT_INIT ||
(!varlen_map_access && rold->type == UNKNOWN_VALUE &&
rcur->type != NOT_INIT))
continue;
/* Don't care about the reg->id in this case. */
if (rold->type == PTR_TO_MAP_VALUE_OR_NULL &&
rcur->type == PTR_TO_MAP_VALUE_OR_NULL &&
rold->map_ptr == rcur->map_ptr)
continue;
if (rold->type == PTR_TO_PACKET && rcur->type == PTR_TO_PACKET &&
compare_ptrs_to_packet(env, rold, rcur))
continue;
return false; return false;
for (i = 0; i < MAX_BPF_REG; i++) {
if (!regsafe(&old->regs[i], &cur->regs[i], varlen_map_access,
idmap))
goto out_free;
} }
for (i = 0; i < MAX_BPF_STACK; i++) { for (i = 0; i < MAX_BPF_STACK; i++) {
...@@ -2995,29 +3290,32 @@ static bool states_equal(struct bpf_verifier_env *env, ...@@ -2995,29 +3290,32 @@ static bool states_equal(struct bpf_verifier_env *env,
* this verifier states are not equivalent, * this verifier states are not equivalent,
* return false to continue verification of this path * return false to continue verification of this path
*/ */
return false; goto out_free;
if (i % BPF_REG_SIZE) if (i % BPF_REG_SIZE)
continue; continue;
if (old->stack_slot_type[i] != STACK_SPILL) if (old->stack_slot_type[i] != STACK_SPILL)
continue; continue;
if (memcmp(&old->spilled_regs[i / BPF_REG_SIZE], if (!regsafe(&old->spilled_regs[i / BPF_REG_SIZE],
&cur->spilled_regs[i / BPF_REG_SIZE], &cur->spilled_regs[i / BPF_REG_SIZE],
sizeof(old->spilled_regs[0]))) varlen_map_access, idmap))
/* when explored and current stack slot types are /* when explored and current stack slot are both storing
* the same, check that stored pointers types * spilled registers, check that stored pointers types
* are the same as well. * are the same as well.
* Ex: explored safe path could have stored * Ex: explored safe path could have stored
* (bpf_reg_state) {.type = PTR_TO_STACK, .imm = -8} * (bpf_reg_state) {.type = PTR_TO_STACK, .off = -8}
* but current path has stored: * but current path has stored:
* (bpf_reg_state) {.type = PTR_TO_STACK, .imm = -16} * (bpf_reg_state) {.type = PTR_TO_STACK, .off = -16}
* such verifier states are not equivalent. * such verifier states are not equivalent.
* return false to continue verification of this path * return false to continue verification of this path
*/ */
return false; goto out_free;
else else
continue; continue;
} }
return true; ret = true;
out_free:
kfree(idmap);
return ret;
} }
static int is_state_visited(struct bpf_verifier_env *env, int insn_idx) static int is_state_visited(struct bpf_verifier_env *env, int insn_idx)
...@@ -3331,7 +3629,6 @@ static int do_check(struct bpf_verifier_env *env) ...@@ -3331,7 +3629,6 @@ static int do_check(struct bpf_verifier_env *env)
verbose("invalid BPF_LD mode\n"); verbose("invalid BPF_LD mode\n");
return -EINVAL; return -EINVAL;
} }
reset_reg_range_values(regs, insn->dst_reg);
} else { } else {
verbose("unknown insn class %d\n", class); verbose("unknown insn class %d\n", class);
return -EINVAL; return -EINVAL;
......
...@@ -27,6 +27,11 @@ ...@@ -27,6 +27,11 @@
#define MAX_INSNS 512 #define MAX_INSNS 512
#define MAX_MATCHES 16 #define MAX_MATCHES 16
struct bpf_reg_match {
unsigned int line;
const char *match;
};
struct bpf_align_test { struct bpf_align_test {
const char *descr; const char *descr;
struct bpf_insn insns[MAX_INSNS]; struct bpf_insn insns[MAX_INSNS];
...@@ -36,10 +41,14 @@ struct bpf_align_test { ...@@ -36,10 +41,14 @@ struct bpf_align_test {
REJECT REJECT
} result; } result;
enum bpf_prog_type prog_type; enum bpf_prog_type prog_type;
const char *matches[MAX_MATCHES]; /* Matches must be in order of increasing line */
struct bpf_reg_match matches[MAX_MATCHES];
}; };
static struct bpf_align_test tests[] = { static struct bpf_align_test tests[] = {
/* Four tests of known constants. These aren't staggeringly
* interesting since we track exact values now.
*/
{ {
.descr = "mov", .descr = "mov",
.insns = { .insns = {
...@@ -53,11 +62,13 @@ static struct bpf_align_test tests[] = { ...@@ -53,11 +62,13 @@ static struct bpf_align_test tests[] = {
}, },
.prog_type = BPF_PROG_TYPE_SCHED_CLS, .prog_type = BPF_PROG_TYPE_SCHED_CLS,
.matches = { .matches = {
"1: R1=ctx R3=imm2,min_value=2,max_value=2,min_align=2 R10=fp", {1, "R1=ctx(id=0,off=0,imm=0)"},
"2: R1=ctx R3=imm4,min_value=4,max_value=4,min_align=4 R10=fp", {1, "R10=fp0"},
"3: R1=ctx R3=imm8,min_value=8,max_value=8,min_align=8 R10=fp", {1, "R3=inv2"},
"4: R1=ctx R3=imm16,min_value=16,max_value=16,min_align=16 R10=fp", {2, "R3=inv4"},
"5: R1=ctx R3=imm32,min_value=32,max_value=32,min_align=32 R10=fp", {3, "R3=inv8"},
{4, "R3=inv16"},
{5, "R3=inv32"},
}, },
}, },
{ {
...@@ -79,17 +90,19 @@ static struct bpf_align_test tests[] = { ...@@ -79,17 +90,19 @@ static struct bpf_align_test tests[] = {
}, },
.prog_type = BPF_PROG_TYPE_SCHED_CLS, .prog_type = BPF_PROG_TYPE_SCHED_CLS,
.matches = { .matches = {
"1: R1=ctx R3=imm1,min_value=1,max_value=1,min_align=1 R10=fp", {1, "R1=ctx(id=0,off=0,imm=0)"},
"2: R1=ctx R3=imm2,min_value=2,max_value=2,min_align=2 R10=fp", {1, "R10=fp0"},
"3: R1=ctx R3=imm4,min_value=4,max_value=4,min_align=4 R10=fp", {1, "R3=inv1"},
"4: R1=ctx R3=imm8,min_value=8,max_value=8,min_align=8 R10=fp", {2, "R3=inv2"},
"5: R1=ctx R3=imm16,min_value=16,max_value=16,min_align=16 R10=fp", {3, "R3=inv4"},
"6: R1=ctx R3=imm1,min_value=1,max_value=1,min_align=1 R10=fp", {4, "R3=inv8"},
"7: R1=ctx R3=imm1,min_value=1,max_value=1,min_align=1 R4=imm32,min_value=32,max_value=32,min_align=32 R10=fp", {5, "R3=inv16"},
"8: R1=ctx R3=imm1,min_value=1,max_value=1,min_align=1 R4=imm16,min_value=16,max_value=16,min_align=16 R10=fp", {6, "R3=inv1"},
"9: R1=ctx R3=imm1,min_value=1,max_value=1,min_align=1 R4=imm8,min_value=8,max_value=8,min_align=8 R10=fp", {7, "R4=inv32"},
"10: R1=ctx R3=imm1,min_value=1,max_value=1,min_align=1 R4=imm4,min_value=4,max_value=4,min_align=4 R10=fp", {8, "R4=inv16"},
"11: R1=ctx R3=imm1,min_value=1,max_value=1,min_align=1 R4=imm2,min_value=2,max_value=2,min_align=2 R10=fp", {9, "R4=inv8"},
{10, "R4=inv4"},
{11, "R4=inv2"},
}, },
}, },
{ {
...@@ -106,12 +119,14 @@ static struct bpf_align_test tests[] = { ...@@ -106,12 +119,14 @@ static struct bpf_align_test tests[] = {
}, },
.prog_type = BPF_PROG_TYPE_SCHED_CLS, .prog_type = BPF_PROG_TYPE_SCHED_CLS,
.matches = { .matches = {
"1: R1=ctx R3=imm4,min_value=4,max_value=4,min_align=4 R10=fp", {1, "R1=ctx(id=0,off=0,imm=0)"},
"2: R1=ctx R3=imm8,min_value=8,max_value=8,min_align=4 R10=fp", {1, "R10=fp0"},
"3: R1=ctx R3=imm10,min_value=10,max_value=10,min_align=2 R10=fp", {1, "R3=inv4"},
"4: R1=ctx R3=imm10,min_value=10,max_value=10,min_align=2 R4=imm8,min_value=8,max_value=8,min_align=8 R10=fp", {2, "R3=inv8"},
"5: R1=ctx R3=imm10,min_value=10,max_value=10,min_align=2 R4=imm12,min_value=12,max_value=12,min_align=4 R10=fp", {3, "R3=inv10"},
"6: R1=ctx R3=imm10,min_value=10,max_value=10,min_align=2 R4=imm14,min_value=14,max_value=14,min_align=2 R10=fp", {4, "R4=inv8"},
{5, "R4=inv12"},
{6, "R4=inv14"},
}, },
}, },
{ {
...@@ -126,13 +141,16 @@ static struct bpf_align_test tests[] = { ...@@ -126,13 +141,16 @@ static struct bpf_align_test tests[] = {
}, },
.prog_type = BPF_PROG_TYPE_SCHED_CLS, .prog_type = BPF_PROG_TYPE_SCHED_CLS,
.matches = { .matches = {
"1: R1=ctx R3=imm7,min_value=7,max_value=7,min_align=1 R10=fp", {1, "R1=ctx(id=0,off=0,imm=0)"},
"2: R1=ctx R3=imm7,min_value=7,max_value=7,min_align=1 R10=fp", {1, "R10=fp0"},
"3: R1=ctx R3=imm14,min_value=14,max_value=14,min_align=2 R10=fp", {1, "R3=inv7"},
"4: R1=ctx R3=imm56,min_value=56,max_value=56,min_align=4 R10=fp", {2, "R3=inv7"},
{3, "R3=inv14"},
{4, "R3=inv56"},
}, },
}, },
/* Tests using unknown values */
#define PREP_PKT_POINTERS \ #define PREP_PKT_POINTERS \
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, \ BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, \
offsetof(struct __sk_buff, data)), \ offsetof(struct __sk_buff, data)), \
...@@ -166,17 +184,19 @@ static struct bpf_align_test tests[] = { ...@@ -166,17 +184,19 @@ static struct bpf_align_test tests[] = {
}, },
.prog_type = BPF_PROG_TYPE_SCHED_CLS, .prog_type = BPF_PROG_TYPE_SCHED_CLS,
.matches = { .matches = {
"7: R0=pkt(id=0,off=8,r=8) R1=ctx R2=pkt(id=0,off=0,r=8) R3=inv56 R10=fp", {7, "R0=pkt(id=0,off=8,r=8,imm=0)"},
"8: R0=pkt(id=0,off=8,r=8) R1=ctx R2=pkt(id=0,off=0,r=8) R3=inv55,min_align=2 R10=fp", {7, "R3=inv(id=0,umax_value=255,var_off=(0x0; 0xff))"},
"9: R0=pkt(id=0,off=8,r=8) R1=ctx R2=pkt(id=0,off=0,r=8) R3=inv54,min_align=4 R10=fp", {8, "R3=inv(id=0,umax_value=510,var_off=(0x0; 0x1fe))"},
"10: R0=pkt(id=0,off=8,r=8) R1=ctx R2=pkt(id=0,off=0,r=8) R3=inv53,min_align=8 R10=fp", {9, "R3=inv(id=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
"11: R0=pkt(id=0,off=8,r=8) R1=ctx R2=pkt(id=0,off=0,r=8) R3=inv52,min_align=16 R10=fp", {10, "R3=inv(id=0,umax_value=2040,var_off=(0x0; 0x7f8))"},
"18: R0=pkt(id=0,off=8,r=8) R1=ctx R2=pkt(id=0,off=0,r=8) R3=pkt_end R4=inv56 R10=fp", {11, "R3=inv(id=0,umax_value=4080,var_off=(0x0; 0xff0))"},
"19: R0=pkt(id=0,off=8,r=8) R1=ctx R2=pkt(id=0,off=0,r=8) R3=pkt_end R4=inv51,min_align=32 R10=fp", {18, "R3=pkt_end(id=0,off=0,imm=0)"},
"20: R0=pkt(id=0,off=8,r=8) R1=ctx R2=pkt(id=0,off=0,r=8) R3=pkt_end R4=inv52,min_align=16 R10=fp", {18, "R4=inv(id=0,umax_value=255,var_off=(0x0; 0xff))"},
"21: R0=pkt(id=0,off=8,r=8) R1=ctx R2=pkt(id=0,off=0,r=8) R3=pkt_end R4=inv53,min_align=8 R10=fp", {19, "R4=inv(id=0,umax_value=8160,var_off=(0x0; 0x1fe0))"},
"22: R0=pkt(id=0,off=8,r=8) R1=ctx R2=pkt(id=0,off=0,r=8) R3=pkt_end R4=inv54,min_align=4 R10=fp", {20, "R4=inv(id=0,umax_value=4080,var_off=(0x0; 0xff0))"},
"23: R0=pkt(id=0,off=8,r=8) R1=ctx R2=pkt(id=0,off=0,r=8) R3=pkt_end R4=inv55,min_align=2 R10=fp", {21, "R4=inv(id=0,umax_value=2040,var_off=(0x0; 0x7f8))"},
{22, "R4=inv(id=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
{23, "R4=inv(id=0,umax_value=510,var_off=(0x0; 0x1fe))"},
}, },
}, },
{ {
...@@ -197,16 +217,16 @@ static struct bpf_align_test tests[] = { ...@@ -197,16 +217,16 @@ static struct bpf_align_test tests[] = {
}, },
.prog_type = BPF_PROG_TYPE_SCHED_CLS, .prog_type = BPF_PROG_TYPE_SCHED_CLS,
.matches = { .matches = {
"7: R0=pkt(id=0,off=8,r=8) R1=ctx R2=pkt(id=0,off=0,r=8) R3=inv56 R10=fp", {7, "R3=inv(id=0,umax_value=255,var_off=(0x0; 0xff))"},
"8: R0=pkt(id=0,off=8,r=8) R1=ctx R2=pkt(id=0,off=0,r=8) R3=inv56 R4=inv56 R10=fp", {8, "R4=inv(id=0,umax_value=255,var_off=(0x0; 0xff))"},
"9: R0=pkt(id=0,off=8,r=8) R1=ctx R2=pkt(id=0,off=0,r=8) R3=inv56 R4=inv55,min_align=1 R10=fp", {9, "R4=inv(id=0,umax_value=255,var_off=(0x0; 0xff))"},
"10: R0=pkt(id=0,off=8,r=8) R1=ctx R2=pkt(id=0,off=0,r=8) R3=inv56 R4=inv56 R10=fp", {10, "R4=inv(id=0,umax_value=255,var_off=(0x0; 0xff))"},
"11: R0=pkt(id=0,off=8,r=8) R1=ctx R2=pkt(id=0,off=0,r=8) R3=inv56 R4=inv54,min_align=2 R10=fp", {11, "R4=inv(id=0,umax_value=510,var_off=(0x0; 0x1fe))"},
"12: R0=pkt(id=0,off=8,r=8) R1=ctx R2=pkt(id=0,off=0,r=8) R3=inv56 R4=inv56 R10=fp", {12, "R4=inv(id=0,umax_value=255,var_off=(0x0; 0xff))"},
"13: R0=pkt(id=0,off=8,r=8) R1=ctx R2=pkt(id=0,off=0,r=8) R3=inv56 R4=inv53,min_align=4 R10=fp", {13, "R4=inv(id=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
"14: R0=pkt(id=0,off=8,r=8) R1=ctx R2=pkt(id=0,off=0,r=8) R3=inv56 R4=inv56 R10=fp", {14, "R4=inv(id=0,umax_value=255,var_off=(0x0; 0xff))"},
"15: R0=pkt(id=0,off=8,r=8) R1=ctx R2=pkt(id=0,off=0,r=8) R3=inv56 R4=inv52,min_align=8 R10=fp", {15, "R4=inv(id=0,umax_value=2040,var_off=(0x0; 0x7f8))"},
"16: R0=pkt(id=0,off=8,r=8) R1=ctx R2=pkt(id=0,off=0,r=8) R3=inv56 R4=inv50,min_align=8 R10=fp" {16, "R4=inv(id=0,umax_value=4080,var_off=(0x0; 0xff0))"},
}, },
}, },
{ {
...@@ -237,12 +257,14 @@ static struct bpf_align_test tests[] = { ...@@ -237,12 +257,14 @@ static struct bpf_align_test tests[] = {
}, },
.prog_type = BPF_PROG_TYPE_SCHED_CLS, .prog_type = BPF_PROG_TYPE_SCHED_CLS,
.matches = { .matches = {
"4: R0=imm0,min_value=0,max_value=0,min_align=2147483648 R1=ctx R2=pkt(id=0,off=0,r=0) R3=pkt_end R5=pkt(id=0,off=0,r=0) R10=fp", {4, "R5=pkt(id=0,off=0,r=0,imm=0)"},
"5: R0=imm0,min_value=0,max_value=0,min_align=2147483648 R1=ctx R2=pkt(id=0,off=0,r=0) R3=pkt_end R5=pkt(id=0,off=14,r=0) R10=fp", {5, "R5=pkt(id=0,off=14,r=0,imm=0)"},
"6: R0=imm0,min_value=0,max_value=0,min_align=2147483648 R1=ctx R2=pkt(id=0,off=0,r=0) R3=pkt_end R4=pkt(id=0,off=14,r=0) R5=pkt(id=0,off=14,r=0) R10=fp", {6, "R4=pkt(id=0,off=14,r=0,imm=0)"},
"10: R0=imm0,min_value=0,max_value=0,min_align=2147483648 R1=ctx R2=pkt(id=0,off=0,r=18) R3=pkt_end R4=inv56 R5=pkt(id=0,off=14,r=18) R10=fp", {10, "R2=pkt(id=0,off=0,r=18,imm=0)"},
"14: R0=imm0,min_value=0,max_value=0,min_align=2147483648 R1=ctx R2=pkt(id=0,off=0,r=18) R3=pkt_end R4=inv48 R5=pkt(id=0,off=14,r=18) R10=fp", {10, "R5=pkt(id=0,off=14,r=18,imm=0)"},
"15: R0=imm0,min_value=0,max_value=0,min_align=2147483648 R1=ctx R2=pkt(id=0,off=0,r=18) R3=pkt_end R4=inv48 R5=pkt(id=0,off=14,r=18) R10=fp", {10, "R4=inv(id=0,umax_value=255,var_off=(0x0; 0xff))"},
{14, "R4=inv(id=0,umax_value=65535,var_off=(0x0; 0xffff))"},
{15, "R4=inv(id=0,umax_value=65535,var_off=(0x0; 0xffff))"},
}, },
}, },
{ {
...@@ -297,62 +319,286 @@ static struct bpf_align_test tests[] = { ...@@ -297,62 +319,286 @@ static struct bpf_align_test tests[] = {
/* Calculated offset in R6 has unknown value, but known /* Calculated offset in R6 has unknown value, but known
* alignment of 4. * alignment of 4.
*/ */
"8: R0=pkt(id=0,off=8,r=8) R1=ctx R2=pkt(id=0,off=0,r=8) R3=pkt_end R6=inv54,min_align=4 R10=fp", {8, "R2=pkt(id=0,off=0,r=8,imm=0)"},
{8, "R6=inv(id=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
/* Offset is added to packet pointer R5, resulting in known /* Offset is added to packet pointer R5, resulting in
* auxiliary alignment and offset. * known fixed offset, and variable offset from R6.
*/ */
"11: R0=pkt(id=0,off=8,r=8) R1=ctx R2=pkt(id=0,off=0,r=8) R3=pkt_end R5=pkt(id=1,off=0,r=0),aux_off=14,aux_off_align=4 R6=inv54,min_align=4 R10=fp", {11, "R5=pkt(id=1,off=14,r=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
/* At the time the word size load is performed from R5, /* At the time the word size load is performed from R5,
 * its total offset is NET_IP_ALIGN + reg->off (0) + * its total offset is NET_IP_ALIGN + reg->off (0) +
* reg->aux_off (14) which is 16. Then the variable * reg->aux_off (14) which is 16. Then the variable
* offset is considered using reg->aux_off_align which * offset is considered using reg->aux_off_align which
* is 4 and meets the load's requirements. * is 4 and meets the load's requirements.
*/ */
"15: R0=pkt(id=0,off=8,r=8) R1=ctx R2=pkt(id=0,off=0,r=8) R3=pkt_end R4=pkt(id=1,off=4,r=4),aux_off=14,aux_off_align=4 R5=pkt(id=1,off=0,r=4),aux_off=14,aux_off_align=4 R6=inv54,min_align=4 R10=fp", {15, "R4=pkt(id=1,off=18,r=18,umax_value=1020,var_off=(0x0; 0x3fc))"},
{15, "R5=pkt(id=1,off=14,r=18,umax_value=1020,var_off=(0x0; 0x3fc))"},
/* Variable offset is added to R5 packet pointer, /* Variable offset is added to R5 packet pointer,
* resulting in auxiliary alignment of 4. * resulting in auxiliary alignment of 4.
*/ */
"18: R0=pkt(id=0,off=8,r=8) R1=ctx R2=pkt(id=0,off=0,r=8) R3=pkt_end R4=inv,aux_off=14,aux_off_align=4 R5=pkt(id=2,off=0,r=0),aux_off_align=4 R6=inv54,min_align=4 R10=fp", {18, "R5=pkt(id=2,off=0,r=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
/* Constant offset is added to R5, resulting in /* Constant offset is added to R5, resulting in
* reg->off of 14. * reg->off of 14.
*/ */
"19: R0=pkt(id=0,off=8,r=8) R1=ctx R2=pkt(id=0,off=0,r=8) R3=pkt_end R4=inv,aux_off=14,aux_off_align=4 R5=pkt(id=2,off=14,r=0),aux_off_align=4 R6=inv54,min_align=4 R10=fp", {19, "R5=pkt(id=2,off=14,r=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
/* At the time the word size load is performed from R5, /* At the time the word size load is performed from R5,
* it's total offset is NET_IP_ALIGN + reg->off (14) which * its total fixed offset is NET_IP_ALIGN + reg->off
* is 16. Then the variable offset is considered using * (14) which is 16. Then the variable offset is 4-byte
* reg->aux_off_align which is 4 and meets the load's * aligned, so the total offset is 4-byte aligned and
* requirements. * meets the load's requirements.
*/ */
"23: R0=pkt(id=0,off=8,r=8) R1=ctx R2=pkt(id=0,off=0,r=8) R3=pkt_end R4=pkt(id=2,off=18,r=18),aux_off_align=4 R5=pkt(id=2,off=14,r=18),aux_off_align=4 R6=inv54,min_align=4 R10=fp", {23, "R4=pkt(id=2,off=18,r=18,umax_value=1020,var_off=(0x0; 0x3fc))"},
{23, "R5=pkt(id=2,off=14,r=18,umax_value=1020,var_off=(0x0; 0x3fc))"},
/* Constant offset is added to R5 packet pointer, /* Constant offset is added to R5 packet pointer,
* resulting in reg->off value of 14. * resulting in reg->off value of 14.
*/ */
"26: R0=pkt(id=0,off=8,r=8) R1=ctx R2=pkt(id=0,off=0,r=8) R3=pkt_end R4=inv,aux_off_align=4 R5=pkt(id=0,off=14,r=8) R6=inv54,min_align=4 R10=fp", {26, "R5=pkt(id=0,off=14,r=8"},
/* Variable offset is added to R5, resulting in an /* Variable offset is added to R5, resulting in a
* auxiliary offset of 14, and an auxiliary alignment of 4. * variable offset of (4n).
*/ */
"27: R0=pkt(id=0,off=8,r=8) R1=ctx R2=pkt(id=0,off=0,r=8) R3=pkt_end R4=inv,aux_off_align=4 R5=pkt(id=3,off=0,r=0),aux_off=14,aux_off_align=4 R6=inv54,min_align=4 R10=fp", {27, "R5=pkt(id=3,off=14,r=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
/* Constant is added to R5 again, setting reg->off to 4. */ /* Constant is added to R5 again, setting reg->off to 18. */
"28: R0=pkt(id=0,off=8,r=8) R1=ctx R2=pkt(id=0,off=0,r=8) R3=pkt_end R4=inv,aux_off_align=4 R5=pkt(id=3,off=4,r=0),aux_off=14,aux_off_align=4 R6=inv54,min_align=4 R10=fp", {28, "R5=pkt(id=3,off=18,r=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
/* And once more we add a variable, which causes an accumulation /* And once more we add a variable; resulting var_off
* of reg->off into reg->aux_off_align, with resulting value of * is still (4n), fixed offset is not changed.
* 18. The auxiliary alignment stays at 4. * Also, we create a new reg->id.
*/ */
"29: R0=pkt(id=0,off=8,r=8) R1=ctx R2=pkt(id=0,off=0,r=8) R3=pkt_end R4=inv,aux_off_align=4 R5=pkt(id=4,off=0,r=0),aux_off=18,aux_off_align=4 R6=inv54,min_align=4 R10=fp", {29, "R5=pkt(id=4,off=18,r=0,umax_value=2040,var_off=(0x0; 0x7fc))"},
/* At the time the word size load is performed from R5, /* At the time the word size load is performed from R5,
* it's total offset is NET_IP_ALIGN + reg->off (0) + * its total fixed offset is NET_IP_ALIGN + reg->off (18)
* reg->aux_off (18) which is 20. Then the variable offset * which is 20. Then the variable offset is (4n), so
* is considered using reg->aux_off_align which is 4 and meets * the total offset is 4-byte aligned and meets the
* the load's requirements. * load's requirements.
*/
{33, "R4=pkt(id=4,off=22,r=22,umax_value=2040,var_off=(0x0; 0x7fc))"},
{33, "R5=pkt(id=4,off=18,r=22,umax_value=2040,var_off=(0x0; 0x7fc))"},
},
},
{
.descr = "packet variable offset 2",
.insns = {
/* Create an unknown offset, (4n+2)-aligned */
LOAD_UNKNOWN(BPF_REG_6),
BPF_ALU64_IMM(BPF_LSH, BPF_REG_6, 2),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, 14),
/* Add it to the packet pointer */
BPF_MOV64_REG(BPF_REG_5, BPF_REG_2),
BPF_ALU64_REG(BPF_ADD, BPF_REG_5, BPF_REG_6),
/* Check bounds and perform a read */
BPF_MOV64_REG(BPF_REG_4, BPF_REG_5),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 4),
BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_4, 1),
BPF_EXIT_INSN(),
BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_5, 0),
/* Make a (4n) offset from the value we just read */
BPF_ALU64_IMM(BPF_AND, BPF_REG_6, 0xff),
BPF_ALU64_IMM(BPF_LSH, BPF_REG_6, 2),
/* Add it to the packet pointer */
BPF_ALU64_REG(BPF_ADD, BPF_REG_5, BPF_REG_6),
/* Check bounds and perform a read */
BPF_MOV64_REG(BPF_REG_4, BPF_REG_5),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 4),
BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_4, 1),
BPF_EXIT_INSN(),
BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_5, 0),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.matches = {
/* Calculated offset in R6 has unknown value, but known
* alignment of 4.
*/
{8, "R2=pkt(id=0,off=0,r=8,imm=0)"},
{8, "R6=inv(id=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
/* Adding 14 makes R6 be (4n+2) */
{9, "R6=inv(id=0,umin_value=14,umax_value=1034,var_off=(0x2; 0x7fc))"},
/* Packet pointer has (4n+2) offset */
{11, "R5=pkt(id=1,off=0,r=0,umin_value=14,umax_value=1034,var_off=(0x2; 0x7fc))"},
{13, "R4=pkt(id=1,off=4,r=0,umin_value=14,umax_value=1034,var_off=(0x2; 0x7fc))"},
/* At the time the word size load is performed from R5,
* its total fixed offset is NET_IP_ALIGN + reg->off (0)
* which is 2. Then the variable offset is (4n+2), so
* the total offset is 4-byte aligned and meets the
* load's requirements.
*/
{15, "R5=pkt(id=1,off=0,r=4,umin_value=14,umax_value=1034,var_off=(0x2; 0x7fc))"},
/* Newly read value in R6 was shifted left by 2, so has
* known alignment of 4.
*/
{18, "R6=inv(id=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
/* Added (4n) to packet pointer's (4n+2) var_off, giving
* another (4n+2).
*/
{19, "R5=pkt(id=2,off=0,r=0,umin_value=14,umax_value=2054,var_off=(0x2; 0xffc))"},
{21, "R4=pkt(id=2,off=4,r=0,umin_value=14,umax_value=2054,var_off=(0x2; 0xffc))"},
/* At the time the word size load is performed from R5,
* its total fixed offset is NET_IP_ALIGN + reg->off (0)
* which is 2. Then the variable offset is (4n+2), so
* the total offset is 4-byte aligned and meets the
* load's requirements.
*/
{23, "R5=pkt(id=2,off=0,r=4,umin_value=14,umax_value=2054,var_off=(0x2; 0xffc))"},
},
},
{
.descr = "dubious pointer arithmetic",
.insns = {
PREP_PKT_POINTERS,
BPF_MOV64_IMM(BPF_REG_0, 0),
/* ptr & const => unknown & const */
BPF_MOV64_REG(BPF_REG_5, BPF_REG_2),
BPF_ALU64_IMM(BPF_AND, BPF_REG_5, 0x40),
/* ptr << const => unknown << const */
BPF_MOV64_REG(BPF_REG_5, BPF_REG_2),
BPF_ALU64_IMM(BPF_LSH, BPF_REG_5, 2),
/* We have a (4n) value. Let's make a packet offset
* out of it. First add 14, to make it a (4n+2)
*/
BPF_ALU64_IMM(BPF_ADD, BPF_REG_5, 14),
/* Then make sure it's nonnegative */
BPF_JMP_IMM(BPF_JSGE, BPF_REG_5, 0, 1),
BPF_EXIT_INSN(),
/* Add it to packet pointer */
BPF_MOV64_REG(BPF_REG_6, BPF_REG_2),
BPF_ALU64_REG(BPF_ADD, BPF_REG_6, BPF_REG_5),
/* Check bounds and perform a read */
BPF_MOV64_REG(BPF_REG_4, BPF_REG_6),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 4),
BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_4, 1),
BPF_EXIT_INSN(),
BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_6, 0),
BPF_EXIT_INSN(),
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.result = REJECT,
.matches = {
{4, "R5=pkt(id=0,off=0,r=0,imm=0)"},
/* ptr & 0x40 == either 0 or 0x40 */
{5, "R5=inv(id=0,umax_value=64,var_off=(0x0; 0x40))"},
/* ptr << 2 == unknown, (4n) */
{7, "R5=inv(id=0,smax_value=9223372036854775804,umax_value=18446744073709551612,var_off=(0x0; 0xfffffffffffffffc))"},
/* (4n) + 14 == (4n+2). We blow our bounds, because
* the add could overflow.
*/
{8, "R5=inv(id=0,var_off=(0x2; 0xfffffffffffffffc))"},
/* Checked s>=0 */
{10, "R5=inv(id=0,umin_value=2,umax_value=9223372036854775806,var_off=(0x2; 0x7ffffffffffffffc))"},
/* packet pointer + nonnegative (4n+2) */
{12, "R6=pkt(id=1,off=0,r=0,umin_value=2,umax_value=9223372036854775806,var_off=(0x2; 0x7ffffffffffffffc))"},
{14, "R4=pkt(id=1,off=4,r=0,umin_value=2,umax_value=9223372036854775806,var_off=(0x2; 0x7ffffffffffffffc))"},
/* NET_IP_ALIGN + (4n+2) == (4n), alignment is fine.
* We checked the bounds, but it might have been able
* to overflow if the packet pointer started in the
* upper half of the address space.
* So we did not get a 'range' on R6, and the access
* attempt will fail.
*/
{16, "R6=pkt(id=1,off=0,r=0,umin_value=2,umax_value=9223372036854775806,var_off=(0x2; 0x7ffffffffffffffc))"},
}
},
{
.descr = "variable subtraction",
.insns = {
/* Create an unknown offset, (4n+2)-aligned */
LOAD_UNKNOWN(BPF_REG_6),
BPF_MOV64_REG(BPF_REG_7, BPF_REG_6),
BPF_ALU64_IMM(BPF_LSH, BPF_REG_6, 2),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, 14),
/* Create another unknown, (4n)-aligned, and subtract
* it from the first one
*/
BPF_ALU64_IMM(BPF_LSH, BPF_REG_7, 2),
BPF_ALU64_REG(BPF_SUB, BPF_REG_6, BPF_REG_7),
/* Bounds-check the result */
BPF_JMP_IMM(BPF_JSGE, BPF_REG_6, 0, 1),
BPF_EXIT_INSN(),
/* Add it to the packet pointer */
BPF_MOV64_REG(BPF_REG_5, BPF_REG_2),
BPF_ALU64_REG(BPF_ADD, BPF_REG_5, BPF_REG_6),
/* Check bounds and perform a read */
BPF_MOV64_REG(BPF_REG_4, BPF_REG_5),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 4),
BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_4, 1),
BPF_EXIT_INSN(),
BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_5, 0),
BPF_EXIT_INSN(),
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.matches = {
/* Calculated offset in R6 has unknown value, but known
* alignment of 4.
*/
{7, "R2=pkt(id=0,off=0,r=8,imm=0)"},
{9, "R6=inv(id=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
/* Adding 14 makes R6 be (4n+2) */
{10, "R6=inv(id=0,umin_value=14,umax_value=1034,var_off=(0x2; 0x7fc))"},
/* New unknown value in R7 is (4n) */
{11, "R7=inv(id=0,umax_value=1020,var_off=(0x0; 0x3fc))"},
/* Subtracting it from R6 blows our unsigned bounds */
{12, "R6=inv(id=0,smin_value=-1006,smax_value=1034,var_off=(0x2; 0xfffffffffffffffc))"},
/* Checked s>=0 */
{14, "R6=inv(id=0,umin_value=2,umax_value=1034,var_off=(0x2; 0x7fc))"},
/* At the time the word size load is performed from R5,
* its total fixed offset is NET_IP_ALIGN + reg->off (0)
* which is 2. Then the variable offset is (4n+2), so
* the total offset is 4-byte aligned and meets the
* load's requirements.
*/
{20, "R5=pkt(id=1,off=0,r=4,umin_value=2,umax_value=1034,var_off=(0x2; 0x7fc))"},
},
},
{
.descr = "pointer variable subtraction",
.insns = {
/* Create an unknown offset, (4n+2)-aligned and bounded
* to [14,74]
*/
LOAD_UNKNOWN(BPF_REG_6),
BPF_MOV64_REG(BPF_REG_7, BPF_REG_6),
BPF_ALU64_IMM(BPF_AND, BPF_REG_6, 0xf),
BPF_ALU64_IMM(BPF_LSH, BPF_REG_6, 2),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, 14),
/* Subtract it from the packet pointer */
BPF_MOV64_REG(BPF_REG_5, BPF_REG_2),
BPF_ALU64_REG(BPF_SUB, BPF_REG_5, BPF_REG_6),
/* Create another unknown, (4n)-aligned and >= 74.
* That in fact means >= 76, since 74 % 4 == 2
*/ */
"33: R0=pkt(id=0,off=8,r=8) R1=ctx R2=pkt(id=0,off=0,r=8) R3=pkt_end R4=pkt(id=4,off=4,r=4),aux_off=18,aux_off_align=4 R5=pkt(id=4,off=0,r=4),aux_off=18,aux_off_align=4 R6=inv54,min_align=4 R10=fp", BPF_ALU64_IMM(BPF_LSH, BPF_REG_7, 2),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_7, 76),
/* Add it to the packet pointer */
BPF_ALU64_REG(BPF_ADD, BPF_REG_5, BPF_REG_7),
/* Check bounds and perform a read */
BPF_MOV64_REG(BPF_REG_4, BPF_REG_5),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 4),
BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_4, 1),
BPF_EXIT_INSN(),
BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_5, 0),
BPF_EXIT_INSN(),
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.matches = {
/* Calculated offset in R6 has unknown value, but known
* alignment of 4.
*/
{7, "R2=pkt(id=0,off=0,r=8,imm=0)"},
{10, "R6=inv(id=0,umax_value=60,var_off=(0x0; 0x3c))"},
/* Adding 14 makes R6 be (4n+2) */
{11, "R6=inv(id=0,umin_value=14,umax_value=74,var_off=(0x2; 0x7c))"},
/* Subtracting from packet pointer overflows ubounds */
{13, "R5=pkt(id=1,off=0,r=8,umin_value=18446744073709551542,umax_value=18446744073709551602,var_off=(0xffffffffffffff82; 0x7c))"},
/* New unknown value in R7 is (4n), >= 76 */
{15, "R7=inv(id=0,umin_value=76,umax_value=1096,var_off=(0x0; 0x7fc))"},
/* Adding it to packet pointer gives nice bounds again */
{16, "R5=pkt(id=2,off=0,r=0,umin_value=2,umax_value=1082,var_off=(0x2; 0x7fc))"},
/* At the time the word size load is performed from R5,
* its total fixed offset is NET_IP_ALIGN + reg->off (0)
* which is 2. Then the variable offset is (4n+2), so
* the total offset is 4-byte aligned and meets the
* load's requirements.
*/
{20, "R5=pkt(id=2,off=0,r=4,umin_value=2,umax_value=1082,var_off=(0x2; 0x7fc))"},
}, },
}, },
}; };
...@@ -373,6 +619,9 @@ static int do_test_single(struct bpf_align_test *test) ...@@ -373,6 +619,9 @@ static int do_test_single(struct bpf_align_test *test)
{ {
struct bpf_insn *prog = test->insns; struct bpf_insn *prog = test->insns;
int prog_type = test->prog_type; int prog_type = test->prog_type;
char bpf_vlog_copy[32768];
const char *line_ptr;
int cur_line = -1;
int prog_len, i; int prog_len, i;
int fd_prog; int fd_prog;
int ret; int ret;
...@@ -381,26 +630,49 @@ static int do_test_single(struct bpf_align_test *test) ...@@ -381,26 +630,49 @@ static int do_test_single(struct bpf_align_test *test)
fd_prog = bpf_verify_program(prog_type ? : BPF_PROG_TYPE_SOCKET_FILTER, fd_prog = bpf_verify_program(prog_type ? : BPF_PROG_TYPE_SOCKET_FILTER,
prog, prog_len, 1, "GPL", 0, prog, prog_len, 1, "GPL", 0,
bpf_vlog, sizeof(bpf_vlog), 2); bpf_vlog, sizeof(bpf_vlog), 2);
if (fd_prog < 0) { if (fd_prog < 0 && test->result != REJECT) {
printf("Failed to load program.\n"); printf("Failed to load program.\n");
printf("%s", bpf_vlog); printf("%s", bpf_vlog);
ret = 1; ret = 1;
} else if (fd_prog >= 0 && test->result == REJECT) {
printf("Unexpected success to load!\n");
printf("%s", bpf_vlog);
ret = 1;
close(fd_prog);
} else { } else {
ret = 0; ret = 0;
/* We make a local copy so that we can strtok() it */
strncpy(bpf_vlog_copy, bpf_vlog, sizeof(bpf_vlog_copy));
line_ptr = strtok(bpf_vlog_copy, "\n");
for (i = 0; i < MAX_MATCHES; i++) { for (i = 0; i < MAX_MATCHES; i++) {
const char *t, *m = test->matches[i]; struct bpf_reg_match m = test->matches[i];
if (!m) if (!m.match)
break; break;
t = strstr(bpf_vlog, m); while (line_ptr) {
if (!t) { cur_line = -1;
printf("Failed to find match: %s\n", m); sscanf(line_ptr, "%u: ", &cur_line);
if (cur_line == m.line)
break;
line_ptr = strtok(NULL, "\n");
}
if (!line_ptr) {
printf("Failed to find line %u for match: %s\n",
m.line, m.match);
ret = 1;
printf("%s", bpf_vlog);
break;
}
if (!strstr(line_ptr, m.match)) {
printf("Failed to find match %u: %s\n",
m.line, m.match);
ret = 1; ret = 1;
printf("%s", bpf_vlog); printf("%s", bpf_vlog);
break; break;
} }
} }
close(fd_prog); if (fd_prog >= 0)
close(fd_prog);
} }
return ret; return ret;
} }
......
...@@ -421,7 +421,7 @@ static struct bpf_test tests[] = { ...@@ -421,7 +421,7 @@ static struct bpf_test tests[] = {
BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 0), BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 0),
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.errstr_unpriv = "R1 pointer arithmetic", .errstr_unpriv = "R1 subtraction from stack pointer",
.result_unpriv = REJECT, .result_unpriv = REJECT,
.errstr = "R1 invalid mem access", .errstr = "R1 invalid mem access",
.result = REJECT, .result = REJECT,
...@@ -603,8 +603,9 @@ static struct bpf_test tests[] = { ...@@ -603,8 +603,9 @@ static struct bpf_test tests[] = {
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_2, -4), BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_2, -4),
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.errstr = "misaligned access", .errstr = "misaligned stack access",
.result = REJECT, .result = REJECT,
.flags = F_LOAD_WITH_STRICT_ALIGNMENT,
}, },
{ {
"invalid map_fd for function call", "invalid map_fd for function call",
...@@ -650,8 +651,9 @@ static struct bpf_test tests[] = { ...@@ -650,8 +651,9 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.fixup_map1 = { 3 }, .fixup_map1 = { 3 },
.errstr = "misaligned access", .errstr = "misaligned value access",
.result = REJECT, .result = REJECT,
.flags = F_LOAD_WITH_STRICT_ALIGNMENT,
}, },
{ {
"sometimes access memory with incorrect alignment", "sometimes access memory with incorrect alignment",
...@@ -672,6 +674,7 @@ static struct bpf_test tests[] = { ...@@ -672,6 +674,7 @@ static struct bpf_test tests[] = {
.errstr = "R0 invalid mem access", .errstr = "R0 invalid mem access",
.errstr_unpriv = "R0 leaks addr", .errstr_unpriv = "R0 leaks addr",
.result = REJECT, .result = REJECT,
.flags = F_LOAD_WITH_STRICT_ALIGNMENT,
}, },
{ {
"jump test 1", "jump test 1",
...@@ -1215,8 +1218,9 @@ static struct bpf_test tests[] = { ...@@ -1215,8 +1218,9 @@ static struct bpf_test tests[] = {
offsetof(struct __sk_buff, cb[0]) + 1), offsetof(struct __sk_buff, cb[0]) + 1),
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.errstr = "misaligned access", .errstr = "misaligned context access",
.result = REJECT, .result = REJECT,
.flags = F_LOAD_WITH_STRICT_ALIGNMENT,
}, },
{ {
"check __sk_buff->hash, offset 0, half store not permitted", "check __sk_buff->hash, offset 0, half store not permitted",
...@@ -1319,8 +1323,9 @@ static struct bpf_test tests[] = { ...@@ -1319,8 +1323,9 @@ static struct bpf_test tests[] = {
offsetof(struct __sk_buff, cb[0]) + 2), offsetof(struct __sk_buff, cb[0]) + 2),
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.errstr = "misaligned access", .errstr = "misaligned context access",
.result = REJECT, .result = REJECT,
.flags = F_LOAD_WITH_STRICT_ALIGNMENT,
}, },
{ {
"check cb access: word, unaligned 2", "check cb access: word, unaligned 2",
...@@ -1330,8 +1335,9 @@ static struct bpf_test tests[] = { ...@@ -1330,8 +1335,9 @@ static struct bpf_test tests[] = {
offsetof(struct __sk_buff, cb[4]) + 1), offsetof(struct __sk_buff, cb[4]) + 1),
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.errstr = "misaligned access", .errstr = "misaligned context access",
.result = REJECT, .result = REJECT,
.flags = F_LOAD_WITH_STRICT_ALIGNMENT,
}, },
{ {
"check cb access: word, unaligned 3", "check cb access: word, unaligned 3",
...@@ -1341,8 +1347,9 @@ static struct bpf_test tests[] = { ...@@ -1341,8 +1347,9 @@ static struct bpf_test tests[] = {
offsetof(struct __sk_buff, cb[4]) + 2), offsetof(struct __sk_buff, cb[4]) + 2),
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.errstr = "misaligned access", .errstr = "misaligned context access",
.result = REJECT, .result = REJECT,
.flags = F_LOAD_WITH_STRICT_ALIGNMENT,
}, },
{ {
"check cb access: word, unaligned 4", "check cb access: word, unaligned 4",
...@@ -1352,8 +1359,9 @@ static struct bpf_test tests[] = { ...@@ -1352,8 +1359,9 @@ static struct bpf_test tests[] = {
offsetof(struct __sk_buff, cb[4]) + 3), offsetof(struct __sk_buff, cb[4]) + 3),
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.errstr = "misaligned access", .errstr = "misaligned context access",
.result = REJECT, .result = REJECT,
.flags = F_LOAD_WITH_STRICT_ALIGNMENT,
}, },
{ {
"check cb access: double", "check cb access: double",
...@@ -1379,8 +1387,9 @@ static struct bpf_test tests[] = { ...@@ -1379,8 +1387,9 @@ static struct bpf_test tests[] = {
offsetof(struct __sk_buff, cb[1])), offsetof(struct __sk_buff, cb[1])),
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.errstr = "misaligned access", .errstr = "misaligned context access",
.result = REJECT, .result = REJECT,
.flags = F_LOAD_WITH_STRICT_ALIGNMENT,
}, },
{ {
"check cb access: double, unaligned 2", "check cb access: double, unaligned 2",
...@@ -1390,8 +1399,9 @@ static struct bpf_test tests[] = { ...@@ -1390,8 +1399,9 @@ static struct bpf_test tests[] = {
offsetof(struct __sk_buff, cb[3])), offsetof(struct __sk_buff, cb[3])),
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.errstr = "misaligned access", .errstr = "misaligned context access",
.result = REJECT, .result = REJECT,
.flags = F_LOAD_WITH_STRICT_ALIGNMENT,
}, },
{ {
"check cb access: double, oob 1", "check cb access: double, oob 1",
...@@ -1523,7 +1533,8 @@ static struct bpf_test tests[] = { ...@@ -1523,7 +1533,8 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.result = REJECT, .result = REJECT,
.errstr = "misaligned access off -6 size 8", .errstr = "misaligned stack access off (0x0; 0x0)+-8+2 size 8",
.flags = F_LOAD_WITH_STRICT_ALIGNMENT,
}, },
{ {
"PTR_TO_STACK store/load - bad alignment on reg", "PTR_TO_STACK store/load - bad alignment on reg",
...@@ -1535,7 +1546,8 @@ static struct bpf_test tests[] = { ...@@ -1535,7 +1546,8 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.result = REJECT, .result = REJECT,
.errstr = "misaligned access off -2 size 8", .errstr = "misaligned stack access off (0x0; 0x0)+-10+8 size 8",
.flags = F_LOAD_WITH_STRICT_ALIGNMENT,
}, },
{ {
"PTR_TO_STACK store/load - out of bounds low", "PTR_TO_STACK store/load - out of bounds low",
...@@ -1579,8 +1591,6 @@ static struct bpf_test tests[] = { ...@@ -1579,8 +1591,6 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.result = ACCEPT, .result = ACCEPT,
.result_unpriv = REJECT,
.errstr_unpriv = "R1 pointer arithmetic",
}, },
{ {
"unpriv: add pointer to pointer", "unpriv: add pointer to pointer",
...@@ -1591,7 +1601,7 @@ static struct bpf_test tests[] = { ...@@ -1591,7 +1601,7 @@ static struct bpf_test tests[] = {
}, },
.result = ACCEPT, .result = ACCEPT,
.result_unpriv = REJECT, .result_unpriv = REJECT,
.errstr_unpriv = "R1 pointer arithmetic", .errstr_unpriv = "R1 pointer += pointer",
}, },
{ {
"unpriv: neg pointer", "unpriv: neg pointer",
...@@ -1932,10 +1942,7 @@ static struct bpf_test tests[] = { ...@@ -1932,10 +1942,7 @@ static struct bpf_test tests[] = {
BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, -8), BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, -8),
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.errstr_unpriv = "pointer arithmetic prohibited", .result = ACCEPT,
.result_unpriv = REJECT,
.errstr = "R1 invalid mem access",
.result = REJECT,
}, },
{ {
"unpriv: cmp of stack pointer", "unpriv: cmp of stack pointer",
...@@ -1999,7 +2006,7 @@ static struct bpf_test tests[] = { ...@@ -1999,7 +2006,7 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.result = REJECT, .result = REJECT,
.errstr = "invalid stack type R3", .errstr = "R4 min value is negative",
.prog_type = BPF_PROG_TYPE_SCHED_CLS, .prog_type = BPF_PROG_TYPE_SCHED_CLS,
}, },
{ {
...@@ -2016,7 +2023,7 @@ static struct bpf_test tests[] = { ...@@ -2016,7 +2023,7 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.result = REJECT, .result = REJECT,
.errstr = "invalid stack type R3", .errstr = "R4 min value is negative",
.prog_type = BPF_PROG_TYPE_SCHED_CLS, .prog_type = BPF_PROG_TYPE_SCHED_CLS,
}, },
{ {
...@@ -2218,7 +2225,7 @@ static struct bpf_test tests[] = { ...@@ -2218,7 +2225,7 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.result = REJECT, .result = REJECT,
.errstr = "invalid stack type R3 off=-1 access_size=-1", .errstr = "R4 min value is negative",
.prog_type = BPF_PROG_TYPE_SCHED_CLS, .prog_type = BPF_PROG_TYPE_SCHED_CLS,
}, },
{ {
...@@ -2235,7 +2242,7 @@ static struct bpf_test tests[] = { ...@@ -2235,7 +2242,7 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.result = REJECT, .result = REJECT,
.errstr = "invalid stack type R3 off=-1 access_size=2147483647", .errstr = "R4 unbounded memory access, use 'var &= const' or 'if (var < const)'",
.prog_type = BPF_PROG_TYPE_SCHED_CLS, .prog_type = BPF_PROG_TYPE_SCHED_CLS,
}, },
{ {
...@@ -2252,7 +2259,7 @@ static struct bpf_test tests[] = { ...@@ -2252,7 +2259,7 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.result = REJECT, .result = REJECT,
.errstr = "invalid stack type R3 off=-512 access_size=2147483647", .errstr = "R4 unbounded memory access, use 'var &= const' or 'if (var < const)'",
.prog_type = BPF_PROG_TYPE_SCHED_CLS, .prog_type = BPF_PROG_TYPE_SCHED_CLS,
}, },
{ {
...@@ -2323,8 +2330,8 @@ static struct bpf_test tests[] = { ...@@ -2323,8 +2330,8 @@ static struct bpf_test tests[] = {
offsetof(struct __sk_buff, data)), offsetof(struct __sk_buff, data)),
BPF_ALU64_REG(BPF_ADD, BPF_REG_3, BPF_REG_4), BPF_ALU64_REG(BPF_ADD, BPF_REG_3, BPF_REG_4),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_1), BPF_MOV64_REG(BPF_REG_2, BPF_REG_1),
BPF_ALU64_IMM(BPF_LSH, BPF_REG_2, 48), BPF_ALU64_IMM(BPF_LSH, BPF_REG_2, 49),
BPF_ALU64_IMM(BPF_RSH, BPF_REG_2, 48), BPF_ALU64_IMM(BPF_RSH, BPF_REG_2, 49),
BPF_ALU64_REG(BPF_ADD, BPF_REG_3, BPF_REG_2), BPF_ALU64_REG(BPF_ADD, BPF_REG_3, BPF_REG_2),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_3), BPF_MOV64_REG(BPF_REG_2, BPF_REG_3),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 8), BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 8),
...@@ -2652,7 +2659,7 @@ static struct bpf_test tests[] = { ...@@ -2652,7 +2659,7 @@ static struct bpf_test tests[] = {
BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1), BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
BPF_JMP_A(-6), BPF_JMP_A(-6),
}, },
.errstr = "misaligned packet access off 2+15+-4 size 4", .errstr = "misaligned packet access off 2+(0x0; 0x0)+15+-4 size 4",
.result = REJECT, .result = REJECT,
.prog_type = BPF_PROG_TYPE_SCHED_CLS, .prog_type = BPF_PROG_TYPE_SCHED_CLS,
.flags = F_LOAD_WITH_STRICT_ALIGNMENT, .flags = F_LOAD_WITH_STRICT_ALIGNMENT,
...@@ -2703,11 +2710,11 @@ static struct bpf_test tests[] = { ...@@ -2703,11 +2710,11 @@ static struct bpf_test tests[] = {
BPF_MOV64_IMM(BPF_REG_0, 0xffffffff), BPF_MOV64_IMM(BPF_REG_0, 0xffffffff),
BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -8), BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -8),
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8), BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8),
BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 0xffff), BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 0x7fff),
BPF_MOV64_REG(BPF_REG_4, BPF_REG_0), BPF_MOV64_REG(BPF_REG_4, BPF_REG_0),
BPF_ALU64_REG(BPF_ADD, BPF_REG_4, BPF_REG_2), BPF_ALU64_REG(BPF_ADD, BPF_REG_4, BPF_REG_2),
BPF_MOV64_REG(BPF_REG_5, BPF_REG_4), BPF_MOV64_REG(BPF_REG_5, BPF_REG_4),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 0xffff - 1), BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 0x7fff - 1),
BPF_JMP_REG(BPF_JGT, BPF_REG_4, BPF_REG_3, 1), BPF_JMP_REG(BPF_JGT, BPF_REG_4, BPF_REG_3, 1),
BPF_STX_MEM(BPF_DW, BPF_REG_5, BPF_REG_4, 0), BPF_STX_MEM(BPF_DW, BPF_REG_5, BPF_REG_4, 0),
BPF_MOV64_IMM(BPF_REG_0, 0), BPF_MOV64_IMM(BPF_REG_0, 0),
...@@ -2729,10 +2736,10 @@ static struct bpf_test tests[] = { ...@@ -2729,10 +2736,10 @@ static struct bpf_test tests[] = {
BPF_MOV64_IMM(BPF_REG_4, 0xffffffff), BPF_MOV64_IMM(BPF_REG_4, 0xffffffff),
BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_4, -8), BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_4, -8),
BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_10, -8), BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_10, -8),
BPF_ALU64_IMM(BPF_AND, BPF_REG_4, 0xffff), BPF_ALU64_IMM(BPF_AND, BPF_REG_4, 0x7fff),
BPF_ALU64_REG(BPF_ADD, BPF_REG_4, BPF_REG_2), BPF_ALU64_REG(BPF_ADD, BPF_REG_4, BPF_REG_2),
BPF_MOV64_REG(BPF_REG_5, BPF_REG_4), BPF_MOV64_REG(BPF_REG_5, BPF_REG_4),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 0xffff - 1), BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 0x7fff - 1),
BPF_JMP_REG(BPF_JGT, BPF_REG_4, BPF_REG_3, 1), BPF_JMP_REG(BPF_JGT, BPF_REG_4, BPF_REG_3, 1),
BPF_STX_MEM(BPF_DW, BPF_REG_5, BPF_REG_4, 0), BPF_STX_MEM(BPF_DW, BPF_REG_5, BPF_REG_4, 0),
BPF_MOV64_IMM(BPF_REG_0, 0), BPF_MOV64_IMM(BPF_REG_0, 0),
...@@ -2758,7 +2765,7 @@ static struct bpf_test tests[] = { ...@@ -2758,7 +2765,7 @@ static struct bpf_test tests[] = {
BPF_MOV64_IMM(BPF_REG_4, 0xffffffff), BPF_MOV64_IMM(BPF_REG_4, 0xffffffff),
BPF_STX_XADD(BPF_DW, BPF_REG_10, BPF_REG_4, -8), BPF_STX_XADD(BPF_DW, BPF_REG_10, BPF_REG_4, -8),
BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_10, -8), BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_10, -8),
BPF_ALU64_IMM(BPF_RSH, BPF_REG_4, 48), BPF_ALU64_IMM(BPF_RSH, BPF_REG_4, 49),
BPF_ALU64_REG(BPF_ADD, BPF_REG_4, BPF_REG_2), BPF_ALU64_REG(BPF_ADD, BPF_REG_4, BPF_REG_2),
BPF_MOV64_REG(BPF_REG_0, BPF_REG_4), BPF_MOV64_REG(BPF_REG_0, BPF_REG_4),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 2), BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 2),
...@@ -2795,7 +2802,7 @@ static struct bpf_test tests[] = { ...@@ -2795,7 +2802,7 @@ static struct bpf_test tests[] = {
}, },
.prog_type = BPF_PROG_TYPE_SCHED_CLS, .prog_type = BPF_PROG_TYPE_SCHED_CLS,
.result = REJECT, .result = REJECT,
.errstr = "cannot add integer value with 47 upper zero bits to ptr_to_packet", .errstr = "invalid access to packet, off=0 size=8, R5(id=1,off=0,r=0)",
}, },
{ {
"direct packet access: test24 (x += pkt_ptr, 5)", "direct packet access: test24 (x += pkt_ptr, 5)",
...@@ -2813,7 +2820,7 @@ static struct bpf_test tests[] = { ...@@ -2813,7 +2820,7 @@ static struct bpf_test tests[] = {
BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_4), BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_4),
BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_2), BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_2),
BPF_MOV64_REG(BPF_REG_5, BPF_REG_0), BPF_MOV64_REG(BPF_REG_5, BPF_REG_0),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 0xffff - 1), BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 0x7fff - 1),
BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1), BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
BPF_STX_MEM(BPF_DW, BPF_REG_5, BPF_REG_0, 0), BPF_STX_MEM(BPF_DW, BPF_REG_5, BPF_REG_0, 0),
BPF_MOV64_IMM(BPF_REG_0, 0), BPF_MOV64_IMM(BPF_REG_0, 0),
...@@ -3112,7 +3119,7 @@ static struct bpf_test tests[] = { ...@@ -3112,7 +3119,7 @@ static struct bpf_test tests[] = {
.prog_type = BPF_PROG_TYPE_SCHED_CLS, .prog_type = BPF_PROG_TYPE_SCHED_CLS,
}, },
{ {
"helper access to packet: test14, cls helper fail sub", "helper access to packet: test14, cls helper ok sub",
.insns = { .insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1, BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1,
offsetof(struct __sk_buff, data)), offsetof(struct __sk_buff, data)),
...@@ -3132,12 +3139,36 @@ static struct bpf_test tests[] = { ...@@ -3132,12 +3139,36 @@ static struct bpf_test tests[] = {
BPF_MOV64_IMM(BPF_REG_0, 0), BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.result = ACCEPT,
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
},
{
"helper access to packet: test15, cls helper fail sub",
.insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1,
offsetof(struct __sk_buff, data)),
BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_1,
offsetof(struct __sk_buff, data_end)),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, 1),
BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_7, 6),
BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 12),
BPF_MOV64_IMM(BPF_REG_2, 4),
BPF_MOV64_IMM(BPF_REG_3, 0),
BPF_MOV64_IMM(BPF_REG_4, 0),
BPF_MOV64_IMM(BPF_REG_5, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
BPF_FUNC_csum_diff),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.result = REJECT, .result = REJECT,
.errstr = "type=inv expected=fp", .errstr = "invalid access to packet",
.prog_type = BPF_PROG_TYPE_SCHED_CLS, .prog_type = BPF_PROG_TYPE_SCHED_CLS,
}, },
{ {
"helper access to packet: test15, cls helper fail range 1", "helper access to packet: test16, cls helper fail range 1",
.insns = { .insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1, BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1,
offsetof(struct __sk_buff, data)), offsetof(struct __sk_buff, data)),
...@@ -3162,7 +3193,7 @@ static struct bpf_test tests[] = { ...@@ -3162,7 +3193,7 @@ static struct bpf_test tests[] = {
.prog_type = BPF_PROG_TYPE_SCHED_CLS, .prog_type = BPF_PROG_TYPE_SCHED_CLS,
}, },
{ {
"helper access to packet: test16, cls helper fail range 2", "helper access to packet: test17, cls helper fail range 2",
.insns = { .insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1, BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1,
offsetof(struct __sk_buff, data)), offsetof(struct __sk_buff, data)),
...@@ -3183,11 +3214,11 @@ static struct bpf_test tests[] = { ...@@ -3183,11 +3214,11 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.result = REJECT, .result = REJECT,
.errstr = "invalid access to packet", .errstr = "R2 min value is negative",
.prog_type = BPF_PROG_TYPE_SCHED_CLS, .prog_type = BPF_PROG_TYPE_SCHED_CLS,
}, },
{ {
"helper access to packet: test17, cls helper fail range 3", "helper access to packet: test18, cls helper fail range 3",
.insns = { .insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1, BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1,
offsetof(struct __sk_buff, data)), offsetof(struct __sk_buff, data)),
...@@ -3208,11 +3239,11 @@ static struct bpf_test tests[] = { ...@@ -3208,11 +3239,11 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.result = REJECT, .result = REJECT,
.errstr = "invalid access to packet", .errstr = "R2 min value is negative",
.prog_type = BPF_PROG_TYPE_SCHED_CLS, .prog_type = BPF_PROG_TYPE_SCHED_CLS,
}, },
{ {
"helper access to packet: test18, cls helper fail range zero", "helper access to packet: test19, cls helper fail range zero",
.insns = { .insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1, BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1,
offsetof(struct __sk_buff, data)), offsetof(struct __sk_buff, data)),
...@@ -3237,7 +3268,7 @@ static struct bpf_test tests[] = { ...@@ -3237,7 +3268,7 @@ static struct bpf_test tests[] = {
.prog_type = BPF_PROG_TYPE_SCHED_CLS, .prog_type = BPF_PROG_TYPE_SCHED_CLS,
}, },
{ {
"helper access to packet: test19, pkt end as input", "helper access to packet: test20, pkt end as input",
.insns = { .insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1, BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1,
offsetof(struct __sk_buff, data)), offsetof(struct __sk_buff, data)),
...@@ -3262,7 +3293,7 @@ static struct bpf_test tests[] = { ...@@ -3262,7 +3293,7 @@ static struct bpf_test tests[] = {
.prog_type = BPF_PROG_TYPE_SCHED_CLS, .prog_type = BPF_PROG_TYPE_SCHED_CLS,
}, },
{ {
"helper access to packet: test20, wrong reg", "helper access to packet: test21, wrong reg",
.insns = { .insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1, BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1,
offsetof(struct __sk_buff, data)), offsetof(struct __sk_buff, data)),
...@@ -3322,7 +3353,7 @@ static struct bpf_test tests[] = { ...@@ -3322,7 +3353,7 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.fixup_map2 = { 3 }, .fixup_map2 = { 3 },
.errstr_unpriv = "R0 pointer arithmetic prohibited", .errstr_unpriv = "R0 leaks addr",
.result_unpriv = REJECT, .result_unpriv = REJECT,
.result = ACCEPT, .result = ACCEPT,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
...@@ -3346,7 +3377,7 @@ static struct bpf_test tests[] = { ...@@ -3346,7 +3377,7 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.fixup_map2 = { 3 }, .fixup_map2 = { 3 },
.errstr_unpriv = "R0 pointer arithmetic prohibited", .errstr_unpriv = "R0 leaks addr",
.result_unpriv = REJECT, .result_unpriv = REJECT,
.result = ACCEPT, .result = ACCEPT,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
...@@ -3374,7 +3405,7 @@ static struct bpf_test tests[] = { ...@@ -3374,7 +3405,7 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.fixup_map2 = { 3 }, .fixup_map2 = { 3 },
.errstr_unpriv = "R0 pointer arithmetic prohibited", .errstr_unpriv = "R0 leaks addr",
.result_unpriv = REJECT, .result_unpriv = REJECT,
.result = ACCEPT, .result = ACCEPT,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
...@@ -3415,9 +3446,7 @@ static struct bpf_test tests[] = { ...@@ -3415,9 +3446,7 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.fixup_map2 = { 3 }, .fixup_map2 = { 3 },
.errstr_unpriv = "R0 pointer arithmetic prohibited",
.errstr = "R0 min value is outside of the array range", .errstr = "R0 min value is outside of the array range",
.result_unpriv = REJECT,
.result = REJECT, .result = REJECT,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
}, },
...@@ -3439,9 +3468,7 @@ static struct bpf_test tests[] = { ...@@ -3439,9 +3468,7 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.fixup_map2 = { 3 }, .fixup_map2 = { 3 },
.errstr_unpriv = "R0 pointer arithmetic prohibited", .errstr = "R0 unbounded memory access, make sure to bounds check any array access into a map",
.errstr = "R0 min value is negative, either use unsigned index or do a if (index >=0) check.",
.result_unpriv = REJECT,
.result = REJECT, .result = REJECT,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
}, },
...@@ -3455,7 +3482,7 @@ static struct bpf_test tests[] = { ...@@ -3455,7 +3482,7 @@ static struct bpf_test tests[] = {
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
BPF_FUNC_map_lookup_elem), BPF_FUNC_map_lookup_elem),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7), BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0), BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 0),
BPF_MOV32_IMM(BPF_REG_2, MAX_ENTRIES), BPF_MOV32_IMM(BPF_REG_2, MAX_ENTRIES),
BPF_JMP_REG(BPF_JSGT, BPF_REG_2, BPF_REG_1, 1), BPF_JMP_REG(BPF_JSGT, BPF_REG_2, BPF_REG_1, 1),
BPF_MOV32_IMM(BPF_REG_1, 0), BPF_MOV32_IMM(BPF_REG_1, 0),
...@@ -3466,8 +3493,8 @@ static struct bpf_test tests[] = { ...@@ -3466,8 +3493,8 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.fixup_map2 = { 3 }, .fixup_map2 = { 3 },
.errstr_unpriv = "R0 pointer arithmetic prohibited", .errstr_unpriv = "R0 leaks addr",
.errstr = "R0 min value is negative, either use unsigned index or do a if (index >=0) check.", .errstr = "R0 unbounded memory access",
.result_unpriv = REJECT, .result_unpriv = REJECT,
.result = REJECT, .result = REJECT,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
...@@ -3493,7 +3520,7 @@ static struct bpf_test tests[] = { ...@@ -3493,7 +3520,7 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.fixup_map2 = { 3 }, .fixup_map2 = { 3 },
.errstr_unpriv = "R0 pointer arithmetic prohibited", .errstr_unpriv = "R0 leaks addr",
.errstr = "invalid access to map value, value_size=48 off=44 size=8", .errstr = "invalid access to map value, value_size=48 off=44 size=8",
.result_unpriv = REJECT, .result_unpriv = REJECT,
.result = REJECT, .result = REJECT,
...@@ -3523,8 +3550,8 @@ static struct bpf_test tests[] = { ...@@ -3523,8 +3550,8 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.fixup_map2 = { 3, 11 }, .fixup_map2 = { 3, 11 },
.errstr_unpriv = "R0 pointer arithmetic prohibited", .errstr_unpriv = "R0 pointer += pointer",
.errstr = "R0 min value is negative, either use unsigned index or do a if (index >=0) check.", .errstr = "R0 invalid mem access 'inv'",
.result_unpriv = REJECT, .result_unpriv = REJECT,
.result = REJECT, .result = REJECT,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
...@@ -3665,34 +3692,6 @@ static struct bpf_test tests[] = { ...@@ -3665,34 +3692,6 @@ static struct bpf_test tests[] = {
.result = ACCEPT, .result = ACCEPT,
.prog_type = BPF_PROG_TYPE_SCHED_CLS .prog_type = BPF_PROG_TYPE_SCHED_CLS
}, },
{
"multiple registers share map_lookup_elem bad reg type",
.insns = {
BPF_MOV64_IMM(BPF_REG_1, 10),
BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_1, -8),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
BPF_LD_MAP_FD(BPF_REG_1, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
BPF_FUNC_map_lookup_elem),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
BPF_MOV64_REG(BPF_REG_3, BPF_REG_0),
BPF_MOV64_REG(BPF_REG_4, BPF_REG_0),
BPF_MOV64_REG(BPF_REG_5, BPF_REG_0),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
BPF_MOV64_IMM(BPF_REG_1, 1),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
BPF_MOV64_IMM(BPF_REG_1, 2),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_3, 0, 1),
BPF_ST_MEM(BPF_DW, BPF_REG_3, 0, 0),
BPF_MOV64_IMM(BPF_REG_1, 3),
BPF_EXIT_INSN(),
},
.fixup_map1 = { 4 },
.result = REJECT,
.errstr = "R3 invalid mem access 'inv'",
.prog_type = BPF_PROG_TYPE_SCHED_CLS
},
{ {
"invalid map access from else condition", "invalid map access from else condition",
.insns = { .insns = {
...@@ -3711,9 +3710,9 @@ static struct bpf_test tests[] = { ...@@ -3711,9 +3710,9 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.fixup_map2 = { 3 }, .fixup_map2 = { 3 },
.errstr = "R0 unbounded memory access, make sure to bounds check any array access into a map", .errstr = "R0 unbounded memory access",
.result = REJECT, .result = REJECT,
.errstr_unpriv = "R0 pointer arithmetic prohibited", .errstr_unpriv = "R0 leaks addr",
.result_unpriv = REJECT, .result_unpriv = REJECT,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
}, },
...@@ -4091,7 +4090,7 @@ static struct bpf_test tests[] = { ...@@ -4091,7 +4090,7 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.fixup_map2 = { 3 }, .fixup_map2 = { 3 },
.errstr = "invalid access to map value, value_size=48 off=0 size=-8", .errstr = "R2 min value is negative",
.result = REJECT, .result = REJECT,
.prog_type = BPF_PROG_TYPE_TRACEPOINT, .prog_type = BPF_PROG_TYPE_TRACEPOINT,
}, },
...@@ -4157,7 +4156,7 @@ static struct bpf_test tests[] = { ...@@ -4157,7 +4156,7 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.fixup_map2 = { 3 }, .fixup_map2 = { 3 },
.errstr = "R1 min value is outside of the array range", .errstr = "invalid access to map value, value_size=48 off=4 size=0",
.result = REJECT, .result = REJECT,
.prog_type = BPF_PROG_TYPE_TRACEPOINT, .prog_type = BPF_PROG_TYPE_TRACEPOINT,
}, },
...@@ -4203,7 +4202,7 @@ static struct bpf_test tests[] = { ...@@ -4203,7 +4202,7 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.fixup_map2 = { 3 }, .fixup_map2 = { 3 },
.errstr = "invalid access to map value, value_size=48 off=4 size=-8", .errstr = "R2 min value is negative",
.result = REJECT, .result = REJECT,
.prog_type = BPF_PROG_TYPE_TRACEPOINT, .prog_type = BPF_PROG_TYPE_TRACEPOINT,
}, },
...@@ -4225,7 +4224,7 @@ static struct bpf_test tests[] = { ...@@ -4225,7 +4224,7 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.fixup_map2 = { 3 }, .fixup_map2 = { 3 },
.errstr = "R1 min value is outside of the array range", .errstr = "R2 min value is negative",
.result = REJECT, .result = REJECT,
.prog_type = BPF_PROG_TYPE_TRACEPOINT, .prog_type = BPF_PROG_TYPE_TRACEPOINT,
}, },
...@@ -4341,7 +4340,7 @@ static struct bpf_test tests[] = { ...@@ -4341,7 +4340,7 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.fixup_map2 = { 3 }, .fixup_map2 = { 3 },
.errstr = "invalid access to map value, value_size=48 off=4 size=-8", .errstr = "R2 min value is negative",
.result = REJECT, .result = REJECT,
.prog_type = BPF_PROG_TYPE_TRACEPOINT, .prog_type = BPF_PROG_TYPE_TRACEPOINT,
}, },
...@@ -4364,7 +4363,7 @@ static struct bpf_test tests[] = { ...@@ -4364,7 +4363,7 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.fixup_map2 = { 3 }, .fixup_map2 = { 3 },
.errstr = "R1 min value is outside of the array range", .errstr = "R2 min value is negative",
.result = REJECT, .result = REJECT,
.prog_type = BPF_PROG_TYPE_TRACEPOINT, .prog_type = BPF_PROG_TYPE_TRACEPOINT,
}, },
...@@ -4452,13 +4451,13 @@ static struct bpf_test tests[] = { ...@@ -4452,13 +4451,13 @@ static struct bpf_test tests[] = {
BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_0, 0), BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_0, 0),
BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3), BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3),
BPF_MOV64_IMM(BPF_REG_2, 0), BPF_MOV64_IMM(BPF_REG_2, 1),
BPF_MOV64_IMM(BPF_REG_3, 0), BPF_MOV64_IMM(BPF_REG_3, 0),
BPF_EMIT_CALL(BPF_FUNC_probe_read), BPF_EMIT_CALL(BPF_FUNC_probe_read),
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.fixup_map2 = { 3 }, .fixup_map2 = { 3 },
.errstr = "R1 min value is negative, either use unsigned index or do a if (index >=0) check", .errstr = "R1 unbounded memory access",
.result = REJECT, .result = REJECT,
.prog_type = BPF_PROG_TYPE_TRACEPOINT, .prog_type = BPF_PROG_TYPE_TRACEPOINT,
}, },
...@@ -4578,7 +4577,7 @@ static struct bpf_test tests[] = { ...@@ -4578,7 +4577,7 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.fixup_map2 = { 3 }, .fixup_map2 = { 3 },
.errstr_unpriv = "R0 pointer arithmetic prohibited", .errstr_unpriv = "R0 leaks addr",
.result = ACCEPT, .result = ACCEPT,
.result_unpriv = REJECT, .result_unpriv = REJECT,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
...@@ -4606,7 +4605,7 @@ static struct bpf_test tests[] = { ...@@ -4606,7 +4605,7 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.fixup_map2 = { 3 }, .fixup_map2 = { 3 },
.errstr_unpriv = "R0 pointer arithmetic prohibited", .errstr_unpriv = "R0 leaks addr",
.result = ACCEPT, .result = ACCEPT,
.result_unpriv = REJECT, .result_unpriv = REJECT,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
...@@ -4625,7 +4624,7 @@ static struct bpf_test tests[] = { ...@@ -4625,7 +4624,7 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.fixup_map2 = { 3 }, .fixup_map2 = { 3 },
.errstr_unpriv = "R0 pointer arithmetic prohibited", .errstr_unpriv = "R0 bitwise operator &= on pointer",
.errstr = "invalid mem access 'inv'", .errstr = "invalid mem access 'inv'",
.result = REJECT, .result = REJECT,
.result_unpriv = REJECT, .result_unpriv = REJECT,
...@@ -4644,7 +4643,7 @@ static struct bpf_test tests[] = { ...@@ -4644,7 +4643,7 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.fixup_map2 = { 3 }, .fixup_map2 = { 3 },
.errstr_unpriv = "R0 pointer arithmetic prohibited", .errstr_unpriv = "R0 32-bit pointer arithmetic prohibited",
.errstr = "invalid mem access 'inv'", .errstr = "invalid mem access 'inv'",
.result = REJECT, .result = REJECT,
.result_unpriv = REJECT, .result_unpriv = REJECT,
...@@ -4663,7 +4662,7 @@ static struct bpf_test tests[] = { ...@@ -4663,7 +4662,7 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.fixup_map2 = { 3 }, .fixup_map2 = { 3 },
.errstr_unpriv = "R0 pointer arithmetic prohibited", .errstr_unpriv = "R0 pointer arithmetic with /= operator",
.errstr = "invalid mem access 'inv'", .errstr = "invalid mem access 'inv'",
.result = REJECT, .result = REJECT,
.result_unpriv = REJECT, .result_unpriv = REJECT,
...@@ -4706,10 +4705,8 @@ static struct bpf_test tests[] = { ...@@ -4706,10 +4705,8 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.fixup_map2 = { 3 }, .fixup_map2 = { 3 },
.errstr_unpriv = "R0 invalid mem access 'inv'",
.errstr = "R0 invalid mem access 'inv'", .errstr = "R0 invalid mem access 'inv'",
.result = REJECT, .result = REJECT,
.result_unpriv = REJECT,
}, },
{ {
"map element value is preserved across register spilling", "map element value is preserved across register spilling",
...@@ -4731,7 +4728,7 @@ static struct bpf_test tests[] = { ...@@ -4731,7 +4728,7 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.fixup_map2 = { 3 }, .fixup_map2 = { 3 },
.errstr_unpriv = "R0 pointer arithmetic prohibited", .errstr_unpriv = "R0 leaks addr",
.result = ACCEPT, .result = ACCEPT,
.result_unpriv = REJECT, .result_unpriv = REJECT,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
...@@ -4913,7 +4910,8 @@ static struct bpf_test tests[] = { ...@@ -4913,7 +4910,8 @@ static struct bpf_test tests[] = {
BPF_MOV64_IMM(BPF_REG_0, 0), BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.errstr = "R2 unbounded memory access", /* because max wasn't checked, signed min is negative */
.errstr = "R2 min value is negative, either use unsigned or 'var &= const'",
.result = REJECT, .result = REJECT,
.prog_type = BPF_PROG_TYPE_TRACEPOINT, .prog_type = BPF_PROG_TYPE_TRACEPOINT,
}, },
...@@ -5061,6 +5059,20 @@ static struct bpf_test tests[] = { ...@@ -5061,6 +5059,20 @@ static struct bpf_test tests[] = {
.result = REJECT, .result = REJECT,
.prog_type = BPF_PROG_TYPE_TRACEPOINT, .prog_type = BPF_PROG_TYPE_TRACEPOINT,
}, },
{
"helper access to variable memory: size = 0 allowed on NULL",
.insns = {
BPF_MOV64_IMM(BPF_REG_1, 0),
BPF_MOV64_IMM(BPF_REG_2, 0),
BPF_MOV64_IMM(BPF_REG_3, 0),
BPF_MOV64_IMM(BPF_REG_4, 0),
BPF_MOV64_IMM(BPF_REG_5, 0),
BPF_EMIT_CALL(BPF_FUNC_csum_diff),
BPF_EXIT_INSN(),
},
.result = ACCEPT,
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
},
{ {
"helper access to variable memory: size > 0 not allowed on NULL", "helper access to variable memory: size > 0 not allowed on NULL",
.insns = { .insns = {
...@@ -5075,7 +5087,7 @@ static struct bpf_test tests[] = { ...@@ -5075,7 +5087,7 @@ static struct bpf_test tests[] = {
BPF_EMIT_CALL(BPF_FUNC_csum_diff), BPF_EMIT_CALL(BPF_FUNC_csum_diff),
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.errstr = "R1 type=imm expected=fp", .errstr = "R1 type=inv expected=fp",
.result = REJECT, .result = REJECT,
.prog_type = BPF_PROG_TYPE_SCHED_CLS, .prog_type = BPF_PROG_TYPE_SCHED_CLS,
}, },
...@@ -5160,7 +5172,7 @@ static struct bpf_test tests[] = { ...@@ -5160,7 +5172,7 @@ static struct bpf_test tests[] = {
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
BPF_FUNC_map_lookup_elem), BPF_FUNC_map_lookup_elem),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4), BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
BPF_MOV64_IMM(BPF_REG_1, 6), BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
BPF_ALU64_IMM(BPF_AND, BPF_REG_1, -4), BPF_ALU64_IMM(BPF_AND, BPF_REG_1, -4),
BPF_ALU64_IMM(BPF_LSH, BPF_REG_1, 2), BPF_ALU64_IMM(BPF_LSH, BPF_REG_1, 2),
BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
...@@ -5169,10 +5181,8 @@ static struct bpf_test tests[] = { ...@@ -5169,10 +5181,8 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.fixup_map2 = { 3 }, .fixup_map2 = { 3 },
.errstr_unpriv = "R0 pointer arithmetic prohibited", .errstr = "R0 max value is outside of the array range",
.errstr = "R0 min value is negative, either use unsigned index or do a if (index >=0) check.",
.result = REJECT, .result = REJECT,
.result_unpriv = REJECT,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
}, },
{ {
...@@ -5201,10 +5211,8 @@ static struct bpf_test tests[] = { ...@@ -5201,10 +5211,8 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.fixup_map2 = { 3 }, .fixup_map2 = { 3 },
.errstr_unpriv = "R0 pointer arithmetic prohibited", .errstr = "R0 max value is outside of the array range",
.errstr = "R0 min value is negative, either use unsigned index or do a if (index >=0) check.",
.result = REJECT, .result = REJECT,
.result_unpriv = REJECT,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
}, },
{ {
...@@ -5251,7 +5259,7 @@ static struct bpf_test tests[] = { ...@@ -5251,7 +5259,7 @@ static struct bpf_test tests[] = {
}, },
.fixup_map_in_map = { 3 }, .fixup_map_in_map = { 3 },
.errstr = "R1 type=inv expected=map_ptr", .errstr = "R1 type=inv expected=map_ptr",
.errstr_unpriv = "R1 pointer arithmetic prohibited", .errstr_unpriv = "R1 pointer arithmetic on CONST_PTR_TO_MAP prohibited",
.result = REJECT, .result = REJECT,
}, },
{ {
...@@ -5531,10 +5539,8 @@ static struct bpf_test tests[] = { ...@@ -5531,10 +5539,8 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.fixup_map1 = { 3 }, .fixup_map1 = { 3 },
.errstr_unpriv = "R0 pointer arithmetic prohibited",
.errstr = "R0 min value is negative", .errstr = "R0 min value is negative",
.result = REJECT, .result = REJECT,
.result_unpriv = REJECT,
}, },
{ {
"bounds checks mixing signed and unsigned", "bounds checks mixing signed and unsigned",
...@@ -5557,10 +5563,8 @@ static struct bpf_test tests[] = { ...@@ -5557,10 +5563,8 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.fixup_map1 = { 3 }, .fixup_map1 = { 3 },
.errstr_unpriv = "R0 pointer arithmetic prohibited",
.errstr = "R0 min value is negative", .errstr = "R0 min value is negative",
.result = REJECT, .result = REJECT,
.result_unpriv = REJECT,
}, },
{ {
"bounds checks mixing signed and unsigned, variant 2", "bounds checks mixing signed and unsigned, variant 2",
...@@ -5585,10 +5589,8 @@ static struct bpf_test tests[] = { ...@@ -5585,10 +5589,8 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.fixup_map1 = { 3 }, .fixup_map1 = { 3 },
.errstr_unpriv = "R0 pointer arithmetic prohibited",
.errstr = "R8 invalid mem access 'inv'", .errstr = "R8 invalid mem access 'inv'",
.result = REJECT, .result = REJECT,
.result_unpriv = REJECT,
}, },
{ {
"bounds checks mixing signed and unsigned, variant 3", "bounds checks mixing signed and unsigned, variant 3",
...@@ -5612,10 +5614,8 @@ static struct bpf_test tests[] = { ...@@ -5612,10 +5614,8 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.fixup_map1 = { 3 }, .fixup_map1 = { 3 },
.errstr_unpriv = "R0 pointer arithmetic prohibited",
.errstr = "R8 invalid mem access 'inv'", .errstr = "R8 invalid mem access 'inv'",
.result = REJECT, .result = REJECT,
.result_unpriv = REJECT,
}, },
{ {
"bounds checks mixing signed and unsigned, variant 4", "bounds checks mixing signed and unsigned, variant 4",
...@@ -5638,10 +5638,7 @@ static struct bpf_test tests[] = { ...@@ -5638,10 +5638,7 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.fixup_map1 = { 3 }, .fixup_map1 = { 3 },
.errstr_unpriv = "R0 pointer arithmetic prohibited", .result = ACCEPT,
.errstr = "R0 min value is negative",
.result = REJECT,
.result_unpriv = REJECT,
}, },
{ {
"bounds checks mixing signed and unsigned, variant 5", "bounds checks mixing signed and unsigned, variant 5",
...@@ -5665,10 +5662,8 @@ static struct bpf_test tests[] = { ...@@ -5665,10 +5662,8 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.fixup_map1 = { 3 }, .fixup_map1 = { 3 },
.errstr_unpriv = "R0 pointer arithmetic prohibited", .errstr = "R0 min value is negative",
.errstr = "R0 invalid mem access",
.result = REJECT, .result = REJECT,
.result_unpriv = REJECT,
}, },
{ {
"bounds checks mixing signed and unsigned, variant 6", "bounds checks mixing signed and unsigned, variant 6",
...@@ -5689,10 +5684,8 @@ static struct bpf_test tests[] = { ...@@ -5689,10 +5684,8 @@ static struct bpf_test tests[] = {
BPF_MOV64_IMM(BPF_REG_0, 0), BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.errstr_unpriv = "R4 min value is negative, either use unsigned",
.errstr = "R4 min value is negative, either use unsigned", .errstr = "R4 min value is negative, either use unsigned",
.result = REJECT, .result = REJECT,
.result_unpriv = REJECT,
}, },
{ {
"bounds checks mixing signed and unsigned, variant 7", "bounds checks mixing signed and unsigned, variant 7",
...@@ -5715,39 +5708,10 @@ static struct bpf_test tests[] = { ...@@ -5715,39 +5708,10 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.fixup_map1 = { 3 }, .fixup_map1 = { 3 },
.errstr_unpriv = "R0 pointer arithmetic prohibited", .result = ACCEPT,
.errstr = "R0 min value is negative",
.result = REJECT,
.result_unpriv = REJECT,
}, },
{ {
"bounds checks mixing signed and unsigned, variant 8", "bounds checks mixing signed and unsigned, variant 8",
.insns = {
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
BPF_LD_MAP_FD(BPF_REG_1, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
BPF_FUNC_map_lookup_elem),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
BPF_MOV64_IMM(BPF_REG_2, 1024 * 1024 * 1024 + 1),
BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_2, 3),
BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 1, 2),
BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.fixup_map1 = { 3 },
.errstr_unpriv = "R0 pointer arithmetic prohibited",
.errstr = "R0 min value is negative",
.result = REJECT,
.result_unpriv = REJECT,
},
{
"bounds checks mixing signed and unsigned, variant 9",
.insns = { .insns = {
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
...@@ -5769,13 +5733,11 @@ static struct bpf_test tests[] = { ...@@ -5769,13 +5733,11 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.fixup_map1 = { 3 }, .fixup_map1 = { 3 },
.errstr_unpriv = "R0 pointer arithmetic prohibited",
.errstr = "R0 min value is negative", .errstr = "R0 min value is negative",
.result = REJECT, .result = REJECT,
.result_unpriv = REJECT,
}, },
{ {
"bounds checks mixing signed and unsigned, variant 10", "bounds checks mixing signed and unsigned, variant 9",
.insns = { .insns = {
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
...@@ -5797,13 +5759,10 @@ static struct bpf_test tests[] = { ...@@ -5797,13 +5759,10 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.fixup_map1 = { 3 }, .fixup_map1 = { 3 },
.errstr_unpriv = "R0 pointer arithmetic prohibited", .result = ACCEPT,
.errstr = "R0 min value is negative",
.result = REJECT,
.result_unpriv = REJECT,
}, },
{ {
"bounds checks mixing signed and unsigned, variant 11", "bounds checks mixing signed and unsigned, variant 10",
.insns = { .insns = {
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
...@@ -5825,13 +5784,11 @@ static struct bpf_test tests[] = { ...@@ -5825,13 +5784,11 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.fixup_map1 = { 3 }, .fixup_map1 = { 3 },
.errstr_unpriv = "R0 pointer arithmetic prohibited",
.errstr = "R0 min value is negative", .errstr = "R0 min value is negative",
.result = REJECT, .result = REJECT,
.result_unpriv = REJECT,
}, },
{ {
"bounds checks mixing signed and unsigned, variant 12", "bounds checks mixing signed and unsigned, variant 11",
.insns = { .insns = {
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
...@@ -5854,13 +5811,11 @@ static struct bpf_test tests[] = { ...@@ -5854,13 +5811,11 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.fixup_map1 = { 3 }, .fixup_map1 = { 3 },
.errstr_unpriv = "R0 pointer arithmetic prohibited",
.errstr = "R0 min value is negative", .errstr = "R0 min value is negative",
.result = REJECT, .result = REJECT,
.result_unpriv = REJECT,
}, },
{ {
"bounds checks mixing signed and unsigned, variant 13", "bounds checks mixing signed and unsigned, variant 12",
.insns = { .insns = {
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
...@@ -5882,13 +5837,11 @@ static struct bpf_test tests[] = { ...@@ -5882,13 +5837,11 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.fixup_map1 = { 3 }, .fixup_map1 = { 3 },
.errstr_unpriv = "R0 pointer arithmetic prohibited",
.errstr = "R0 min value is negative", .errstr = "R0 min value is negative",
.result = REJECT, .result = REJECT,
.result_unpriv = REJECT,
}, },
{ {
"bounds checks mixing signed and unsigned, variant 14", "bounds checks mixing signed and unsigned, variant 13",
.insns = { .insns = {
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
...@@ -5913,13 +5866,11 @@ static struct bpf_test tests[] = { ...@@ -5913,13 +5866,11 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.fixup_map1 = { 3 }, .fixup_map1 = { 3 },
.errstr_unpriv = "R0 pointer arithmetic prohibited",
.errstr = "R0 min value is negative", .errstr = "R0 min value is negative",
.result = REJECT, .result = REJECT,
.result_unpriv = REJECT,
}, },
{ {
"bounds checks mixing signed and unsigned, variant 15", "bounds checks mixing signed and unsigned, variant 14",
.insns = { .insns = {
BPF_LDX_MEM(BPF_W, BPF_REG_9, BPF_REG_1, BPF_LDX_MEM(BPF_W, BPF_REG_9, BPF_REG_1,
offsetof(struct __sk_buff, mark)), offsetof(struct __sk_buff, mark)),
...@@ -5945,13 +5896,11 @@ static struct bpf_test tests[] = { ...@@ -5945,13 +5896,11 @@ static struct bpf_test tests[] = {
BPF_JMP_IMM(BPF_JA, 0, 0, -7), BPF_JMP_IMM(BPF_JA, 0, 0, -7),
}, },
.fixup_map1 = { 4 }, .fixup_map1 = { 4 },
.errstr_unpriv = "R0 pointer arithmetic prohibited",
.errstr = "R0 min value is negative", .errstr = "R0 min value is negative",
.result = REJECT, .result = REJECT,
.result_unpriv = REJECT,
}, },
{ {
"bounds checks mixing signed and unsigned, variant 16", "bounds checks mixing signed and unsigned, variant 15",
.insns = { .insns = {
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
...@@ -5975,13 +5924,13 @@ static struct bpf_test tests[] = { ...@@ -5975,13 +5924,13 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.fixup_map1 = { 3 }, .fixup_map1 = { 3 },
.errstr_unpriv = "R0 pointer arithmetic prohibited", .errstr_unpriv = "R0 pointer comparison prohibited",
.errstr = "R0 min value is negative", .errstr = "R0 min value is negative",
.result = REJECT, .result = REJECT,
.result_unpriv = REJECT, .result_unpriv = REJECT,
}, },
{ {
"subtraction bounds (map value)", "subtraction bounds (map value) variant 1",
.insns = { .insns = {
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
...@@ -6003,10 +5952,74 @@ static struct bpf_test tests[] = { ...@@ -6003,10 +5952,74 @@ static struct bpf_test tests[] = {
BPF_EXIT_INSN(), BPF_EXIT_INSN(),
}, },
.fixup_map1 = { 3 }, .fixup_map1 = { 3 },
.errstr_unpriv = "R0 pointer arithmetic prohibited", .errstr = "R0 max value is outside of the array range",
.result = REJECT,
},
{
"subtraction bounds (map value) variant 2",
.insns = {
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
BPF_LD_MAP_FD(BPF_REG_1, 0),
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
BPF_FUNC_map_lookup_elem),
BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
BPF_JMP_IMM(BPF_JGT, BPF_REG_1, 0xff, 6),
BPF_LDX_MEM(BPF_B, BPF_REG_3, BPF_REG_0, 1),
BPF_JMP_IMM(BPF_JGT, BPF_REG_3, 0xff, 4),
BPF_ALU64_REG(BPF_SUB, BPF_REG_1, BPF_REG_3),
BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
BPF_EXIT_INSN(),
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
},
.fixup_map1 = { 3 },
.errstr = "R0 min value is negative, either use unsigned index or do a if (index >=0) check.", .errstr = "R0 min value is negative, either use unsigned index or do a if (index >=0) check.",
.result = REJECT, .result = REJECT,
.result_unpriv = REJECT, },
{
"variable-offset ctx access",
.insns = {
/* Get an unknown value */
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 0),
/* Make it small and 4-byte aligned */
BPF_ALU64_IMM(BPF_AND, BPF_REG_2, 4),
/* add it to skb. We now have either &skb->len or
* &skb->pkt_type, but we don't know which
*/
BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_2),
/* dereference it */
BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 0),
BPF_EXIT_INSN(),
},
.errstr = "variable ctx access var_off=(0x0; 0x4)",
.result = REJECT,
.prog_type = BPF_PROG_TYPE_LWT_IN,
},
{
"variable-offset stack access",
.insns = {
/* Fill the top 8 bytes of the stack */
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
/* Get an unknown value */
BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 0),
/* Make it small and 4-byte aligned */
BPF_ALU64_IMM(BPF_AND, BPF_REG_2, 4),
BPF_ALU64_IMM(BPF_SUB, BPF_REG_2, 8),
/* add it to fp. We now have either fp-4 or fp-8, but
* we don't know which
*/
BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_10),
/* dereference it */
BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_2, 0),
BPF_EXIT_INSN(),
},
.errstr = "variable stack access var_off=(0xfffffffffffffff8; 0x4)",
.result = REJECT,
.prog_type = BPF_PROG_TYPE_LWT_IN,
}, },
}; };
......