Commit f7e6bd33 authored by Alexei Starovoitov

Merge branch 'bpf-support-new-insns-from-cpu-v4'

Yonghong Song says:

====================
bpf: Support new insns from cpu v4

In previous discussion ([1]), it was agreed that we should introduce
cpu version 4 (llvm flag -mcpu=v4) which contains some instructions
that can simplify code, make code easier to understand, fix existing
problems, or are simply there for feature completeness. More specifically,
the following new insns are proposed:
  . sign extended load
  . sign extended mov
  . bswap
  . signed div/mod
  . ja with 32-bit offset

This patch set adds kernel support for the insns proposed in [1],
except BPF_ST, which already has full kernel support. Besides the
proposed insns above, LLVM will also generate BPF_ST insns under -mcpu=v4.
The llvm patch ([2]) has been merged into the llvm-project 'main' branch.

The patchset implements interpreter, jit and verifier support for these new
insns.

For this patch set, I tested cpu v2/v3/v4 and all selftests passed.
Besides normal jit testing (bpf_jit_enable = 1 and bpf_jit_harden = 0),
I also tested the selftests introduced in this patch set with
  - bpf_jit_enable = 0
  - bpf_jit_enable = 1 and bpf_jit_harden = 1
and both runs passed.

  [1] https://lore.kernel.org/bpf/4bfe98be-5333-1c7e-2f6d-42486c8ec039@meta.com/
  [2] https://reviews.llvm.org/D144829

Changelogs:
  v4 -> v5:
   . for v4, patch 8/17 was missing from the mailing list and patchwork, so resend.
   . rebase on top of master
  v3 -> v4:
   . some minor asm syntax adjustments based on llvm changes.
   . add clang version and target arch guard for new tests
     so they can still compile with old llvm compilers.
   . some changes to the bpf doc.
  v2 -> v3:
   . add the disasm change missed in v2.
   . handle signed load of ctx fields properly.
   . fix some interpreter sdiv/smod errors when bpf_jit_enable = 0.
   . fix some verifier range bounding errors.
   . add more C tests.
  RFCv1 -> v2:
   . add more verifier support for sign-extended load and mov insns.
   . rename some insn names to be more consistent with Intel practice.
   . add cpuv4 test runner for test progs.
   . add more unit and C tests.
   . add documentation.
====================

Link: https://lore.kernel.org/r/20230728011143.3710005-1-yonghong.song@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
parents 10d78a66 245d4c40
@@ -140,11 +140,6 @@ A: Because if we picked one-to-one relationship to x64 it would have made
    it more complicated to support on arm64 and other archs. Also it
    needs div-by-zero runtime check.
 
-Q: Why there is no BPF_SDIV for signed divide operation?
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-A: Because it would be rarely used. llvm errors in such case and
-   prints a suggestion to use unsigned divide instead.
-
 Q: Why BPF has implicit prologue and epilogue?
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 A: Because architectures like sparc have register windows and in general
......
@@ -154,24 +154,27 @@ otherwise identical operations.
 The 'code' field encodes the operation as below, where 'src' and 'dst' refer
 to the values of the source and destination registers, respectively.
 
-========  =====  ==========================================================
-code      value  description
-========  =====  ==========================================================
-BPF_ADD   0x00   dst += src
-BPF_SUB   0x10   dst -= src
-BPF_MUL   0x20   dst \*= src
-BPF_DIV   0x30   dst = (src != 0) ? (dst / src) : 0
-BPF_OR    0x40   dst \|= src
-BPF_AND   0x50   dst &= src
-BPF_LSH   0x60   dst <<= (src & mask)
-BPF_RSH   0x70   dst >>= (src & mask)
-BPF_NEG   0x80   dst = -dst
-BPF_MOD   0x90   dst = (src != 0) ? (dst % src) : dst
-BPF_XOR   0xa0   dst ^= src
-BPF_MOV   0xb0   dst = src
-BPF_ARSH  0xc0   sign extending dst >>= (src & mask)
-BPF_END   0xd0   byte swap operations (see `Byte swap instructions`_ below)
-========  =====  ==========================================================
+=========  =====  =======  ==========================================================
+code       value  offset   description
+=========  =====  =======  ==========================================================
+BPF_ADD    0x00   0        dst += src
+BPF_SUB    0x10   0        dst -= src
+BPF_MUL    0x20   0        dst \*= src
+BPF_DIV    0x30   0        dst = (src != 0) ? (dst / src) : 0
+BPF_SDIV   0x30   1        dst = (src != 0) ? (dst s/ src) : 0
+BPF_OR     0x40   0        dst \|= src
+BPF_AND    0x50   0        dst &= src
+BPF_LSH    0x60   0        dst <<= (src & mask)
+BPF_RSH    0x70   0        dst >>= (src & mask)
+BPF_NEG    0x80   0        dst = -dst
+BPF_MOD    0x90   0        dst = (src != 0) ? (dst % src) : dst
+BPF_SMOD   0x90   1        dst = (src != 0) ? (dst s% src) : dst
+BPF_XOR    0xa0   0        dst ^= src
+BPF_MOV    0xb0   0        dst = src
+BPF_MOVSX  0xb0   8/16/32  dst = (s8,s16,s32)src
+BPF_ARSH   0xc0   0        sign extending dst >>= (src & mask)
+BPF_END    0xd0   0        byte swap operations (see `Byte swap instructions`_ below)
+=========  =====  =======  ==========================================================
 
 Underflow and overflow are allowed during arithmetic operations, meaning
 the 64-bit or 32-bit value will wrap. If eBPF program execution would
@@ -198,11 +201,20 @@ where '(u32)' indicates that the upper 32 bits are zeroed.
   dst = dst ^ imm32
 
-Also note that the division and modulo operations are unsigned. Thus, for
-``BPF_ALU``, 'imm' is first interpreted as an unsigned 32-bit value, whereas
-for ``BPF_ALU64``, 'imm' is first sign extended to 64 bits and the result
-interpreted as an unsigned 64-bit value. There are no instructions for
-signed division or modulo.
+Note that most instructions have an instruction offset of 0. Only three
+instructions (BPF_SDIV, BPF_SMOD, BPF_MOVSX) have a non-zero offset.
+
+The division and modulo operations support both unsigned and signed flavors.
+
+For unsigned operations (BPF_DIV and BPF_MOD), for ``BPF_ALU``, 'imm' is first
+interpreted as an unsigned 32-bit value, whereas for ``BPF_ALU64``, 'imm' is
+first sign extended to 64 bits and the result interpreted as an unsigned 64-bit
+value. For signed operations (BPF_SDIV and BPF_SMOD), for ``BPF_ALU``, 'imm' is
+interpreted as a signed value. For ``BPF_ALU64``, the 'imm' is sign extended
+from 32 to 64 bits and interpreted as a signed 64-bit value.
+
+Instruction BPF_MOVSX does a move operation with sign extension.
+``BPF_ALU | MOVSX`` sign extends 8-bit and 16-bit operands into 32 bit, and
+zeroes the remaining upper 32 bits.
+``BPF_ALU64 | MOVSX`` sign extends 8-bit, 16-bit and 32-bit operands into 64 bit.
 
 Shift operations use a mask of 0x3F (63) for 64-bit operations and 0x1F (31)
 for 32-bit operations.
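
For intuition, the BPF_MOVSX and BPF_SDIV semantics above can be modeled in
plain C. The sketch below is illustrative only; it is not part of the patch
and all names in it are made up:

  #include <stdint.h>

  /* Models BPF_ALU64 | BPF_MOV | BPF_X with offset 8/16/32: the low
   * 'off' bits of src are sign extended into the full 64-bit dst.
   */
  static uint64_t movsx64(uint64_t src, int off)
  {
          switch (off) {
          case 8:  return (uint64_t)(int8_t)src;   /* dst = (s8)src */
          case 16: return (uint64_t)(int16_t)src;  /* dst = (s16)src */
          case 32: return (uint64_t)(int32_t)src;  /* dst = (s32)src */
          default: return src;                     /* off == 0: plain mov */
          }
  }

  /* Models BPF_ALU64 | BPF_SDIV: dst = (src != 0) ? (dst s/ src) : 0.
   * C's '/' truncates toward zero, matching the s/ operator above,
   * e.g. movsx64(0xff, 8) == (uint64_t)-1 and sdiv64(-7, 2) == -3.
   */
  static uint64_t sdiv64(uint64_t dst, uint64_t src)
  {
          return src ? (uint64_t)((int64_t)dst / (int64_t)src) : 0;
  }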
@@ -210,21 +222,23 @@ for 32-bit operations.
 Byte swap instructions
 ~~~~~~~~~~~~~~~~~~~~~~
 
-The byte swap instructions use an instruction class of ``BPF_ALU`` and a 4-bit
-'code' field of ``BPF_END``.
+The byte swap instructions use instruction classes of ``BPF_ALU`` and ``BPF_ALU64``
+and a 4-bit 'code' field of ``BPF_END``.
 
 The byte swap instructions operate on the destination register
 only and do not use a separate source register or immediate value.
 
-The 1-bit source operand field in the opcode is used to select what byte
-order the operation convert from or to:
+For ``BPF_ALU``, the 1-bit source operand field in the opcode is used to select what
+byte order the operation converts from or to. For ``BPF_ALU64``, the 1-bit source
+operand field in the opcode is not used and must be 0.
 
-=========  =====  =================================================
-source     value  description
-=========  =====  =================================================
-BPF_TO_LE  0x00   convert between host byte order and little endian
-BPF_TO_BE  0x08   convert between host byte order and big endian
-=========  =====  =================================================
+=========  =========  =====  =================================================
+class      source     value  description
+=========  =========  =====  =================================================
+BPF_ALU    BPF_TO_LE  0x00   convert between host byte order and little endian
+BPF_ALU    BPF_TO_BE  0x08   convert between host byte order and big endian
+BPF_ALU64  BPF_TO_LE  0x00   do byte swap unconditionally
+=========  =========  =====  =================================================
 
 The 'imm' field encodes the width of the swap operations. The following widths
 are supported: 16, 32 and 64.
@@ -239,6 +253,12 @@ Examples:
 
   dst = htobe64(dst)
 
+``BPF_ALU64 | BPF_TO_LE | BPF_END`` with imm = 16/32/64 means::
+
+  dst = bswap16 dst
+  dst = bswap32 dst
+  dst = bswap64 dst
+
 Jump instructions
 -----------------
 
@@ -249,7 +269,8 @@ The 'code' field encodes the operation as below:
 ========  =====  ===  ===========================================  =========================================
 code      value  src  description                                  notes
 ========  =====  ===  ===========================================  =========================================
-BPF_JA    0x0    0x0  PC += offset                                 BPF_JMP only
+BPF_JA    0x0    0x0  PC += offset                                 BPF_JMP class
+BPF_JA    0x0    0x0  PC += imm                                    BPF_JMP32 class
 BPF_JEQ   0x1    any  PC += offset if dst == src
 BPF_JGT   0x2    any  PC += offset if dst > src                    unsigned
 BPF_JGE   0x3    any  PC += offset if dst >= src                   unsigned
@@ -278,6 +299,16 @@ Example:
 
 where 's>=' indicates a signed '>=' comparison.
 
+``BPF_JA | BPF_K | BPF_JMP32`` (0x06) means::
+
+  gotol +imm
+
+where 'imm' means the branch offset comes from the insn 'imm' field.
+
+Note that there are two flavors of BPF_JA instructions. The BPF_JMP class permits a
+16-bit jump offset, while BPF_JMP32 permits a 32-bit jump offset. A conditional jump
+that needs more than a 16-bit offset can be converted to a <16-bit conditional jump
+plus a 32-bit unconditional jump.
+
 Helper functions
 ~~~~~~~~~~~~~~~~
 
@@ -320,6 +351,7 @@ The mode modifier is one of:
 BPF_ABS        0x20   legacy BPF packet access (absolute)   `Legacy BPF Packet access instructions`_
 BPF_IND        0x40   legacy BPF packet access (indirect)   `Legacy BPF Packet access instructions`_
 BPF_MEM        0x60   regular load and store operations     `Regular load and store operations`_
+BPF_MEMSX      0x80   sign-extension load operations        `Sign-extension load operations`_
 BPF_ATOMIC     0xc0   atomic operations                     `Atomic operations`_
 =============  =====  ====================================  =============
@@ -350,9 +382,20 @@ instructions that transfer data between a register and memory.
 
 ``BPF_MEM | <size> | BPF_LDX`` means::
 
-  dst = *(size *) (src + offset)
+  dst = *(unsigned size *) (src + offset)
 
-Where size is one of: ``BPF_B``, ``BPF_H``, ``BPF_W``, or ``BPF_DW``.
+Where size is one of: ``BPF_B``, ``BPF_H``, ``BPF_W``, or ``BPF_DW`` and
+'unsigned size' is one of u8, u16, u32 and u64.
+
+The ``BPF_MEMSX`` mode modifier is used to encode sign-extension load
+instructions that transfer data between a register and memory.
+
+``BPF_MEMSX | <size> | BPF_LDX`` means::
+
+  dst = *(signed size *) (src + offset)
+
+Where size is one of: ``BPF_B``, ``BPF_H`` or ``BPF_W``, and
+'signed size' is one of s8, s16 and s32.
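
The difference from the existing ``BPF_MEM`` loads is only in how the loaded
value is widened; an illustrative C model (not part of the patch):

  #include <stdint.h>

  /* BPF_MEM loads zero-extend, BPF_MEMSX loads sign-extend. With the
   * byte 0x80 in memory, ldx_mem_b() yields 0x80 while ldx_memsx_b()
   * yields 0xffffffffffffff80 (-128).
   */
  static uint64_t ldx_mem_b(const void *p)
  {
          return *(const uint8_t *)p;
  }

  static uint64_t ldx_memsx_b(const void *p)
  {
          return (uint64_t)(int64_t)*(const int8_t *)p;
  }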
 
 Atomic operations
 -----------------
......
@@ -701,6 +701,38 @@ static void emit_mov_reg(u8 **pprog, bool is64, u32 dst_reg, u32 src_reg)
 	*pprog = prog;
 }
 
+static void emit_movsx_reg(u8 **pprog, int num_bits, bool is64, u32 dst_reg,
+			   u32 src_reg)
+{
+	u8 *prog = *pprog;
+
+	if (is64) {
+		/* movs[b,w,l]q dst, src */
+		if (num_bits == 8)
+			EMIT4(add_2mod(0x48, src_reg, dst_reg), 0x0f, 0xbe,
+			      add_2reg(0xC0, src_reg, dst_reg));
+		else if (num_bits == 16)
+			EMIT4(add_2mod(0x48, src_reg, dst_reg), 0x0f, 0xbf,
+			      add_2reg(0xC0, src_reg, dst_reg));
+		else if (num_bits == 32)
+			EMIT3(add_2mod(0x48, src_reg, dst_reg), 0x63,
+			      add_2reg(0xC0, src_reg, dst_reg));
+	} else {
+		/* movs[b,w]l dst, src */
+		if (num_bits == 8) {
+			EMIT4(add_2mod(0x40, src_reg, dst_reg), 0x0f, 0xbe,
+			      add_2reg(0xC0, src_reg, dst_reg));
+		} else if (num_bits == 16) {
+			if (is_ereg(dst_reg) || is_ereg(src_reg))
+				EMIT1(add_2mod(0x40, src_reg, dst_reg));
+			EMIT3(add_2mod(0x0f, src_reg, dst_reg), 0xbf,
+			      add_2reg(0xC0, src_reg, dst_reg));
+		}
+	}
+
+	*pprog = prog;
+}
+
 /* Emit the suffix (ModR/M etc) for addressing *(ptr_reg + off) and val_reg */
 static void emit_insn_suffix(u8 **pprog, u32 ptr_reg, u32 val_reg, int off)
 {
@@ -779,6 +811,29 @@ static void emit_ldx(u8 **pprog, u32 size, u32 dst_reg, u32 src_reg, int off)
 	*pprog = prog;
 }
 
+/* LDSX: dst_reg = *(s8*)(src_reg + off) */
+static void emit_ldsx(u8 **pprog, u32 size, u32 dst_reg, u32 src_reg, int off)
+{
+	u8 *prog = *pprog;
+
+	switch (size) {
+	case BPF_B:
+		/* Emit 'movsx rax, byte ptr [rax + off]' */
+		EMIT3(add_2mod(0x48, src_reg, dst_reg), 0x0F, 0xBE);
+		break;
+	case BPF_H:
+		/* Emit 'movsx rax, word ptr [rax + off]' */
+		EMIT3(add_2mod(0x48, src_reg, dst_reg), 0x0F, 0xBF);
+		break;
+	case BPF_W:
+		/* Emit 'movsx rax, dword ptr [rax+0x14]' */
+		EMIT2(add_2mod(0x48, src_reg, dst_reg), 0x63);
+		break;
+	}
+	emit_insn_suffix(&prog, src_reg, dst_reg, off);
+	*pprog = prog;
+}
+
 /* STX: *(u8*)(dst_reg + off) = src_reg */
 static void emit_stx(u8 **pprog, u32 size, u32 dst_reg, u32 src_reg, int off)
 {
@@ -1028,9 +1083,14 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image
 		case BPF_ALU64 | BPF_MOV | BPF_X:
 		case BPF_ALU | BPF_MOV | BPF_X:
-			emit_mov_reg(&prog,
-				     BPF_CLASS(insn->code) == BPF_ALU64,
-				     dst_reg, src_reg);
+			if (insn->off == 0)
+				emit_mov_reg(&prog,
+					     BPF_CLASS(insn->code) == BPF_ALU64,
+					     dst_reg, src_reg);
+			else
+				emit_movsx_reg(&prog, insn->off,
+					       BPF_CLASS(insn->code) == BPF_ALU64,
+					       dst_reg, src_reg);
 			break;
 
 		/* neg dst */
@@ -1134,15 +1194,26 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image
 			/* mov rax, dst_reg */
 			emit_mov_reg(&prog, is64, BPF_REG_0, dst_reg);
 
-			/*
-			 * xor edx, edx
-			 * equivalent to 'xor rdx, rdx', but one byte less
-			 */
-			EMIT2(0x31, 0xd2);
+			if (insn->off == 0) {
+				/*
+				 * xor edx, edx
+				 * equivalent to 'xor rdx, rdx', but one byte less
+				 */
+				EMIT2(0x31, 0xd2);
 
-			/* div src_reg */
-			maybe_emit_1mod(&prog, src_reg, is64);
-			EMIT2(0xF7, add_1reg(0xF0, src_reg));
+				/* div src_reg */
+				maybe_emit_1mod(&prog, src_reg, is64);
+				EMIT2(0xF7, add_1reg(0xF0, src_reg));
+			} else {
+				if (BPF_CLASS(insn->code) == BPF_ALU)
+					EMIT1(0x99); /* cdq */
+				else
+					EMIT2(0x48, 0x99); /* cqo */
+
+				/* idiv src_reg */
+				maybe_emit_1mod(&prog, src_reg, is64);
+				EMIT2(0xF7, add_1reg(0xF8, src_reg));
+			}
 
 			if (BPF_OP(insn->code) == BPF_MOD &&
 			    dst_reg != BPF_REG_3)
@@ -1262,6 +1333,7 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image
 			break;
 
 		case BPF_ALU | BPF_END | BPF_FROM_BE:
+		case BPF_ALU64 | BPF_END | BPF_FROM_LE:
 			switch (imm32) {
 			case 16:
 				/* Emit 'ror %ax, 8' to swap lower 2 bytes */
@@ -1370,9 +1442,17 @@ st: if (is_imm8(insn->off))
 		case BPF_LDX | BPF_PROBE_MEM | BPF_W:
 		case BPF_LDX | BPF_MEM | BPF_DW:
 		case BPF_LDX | BPF_PROBE_MEM | BPF_DW:
+		/* LDXS: dst_reg = *(s8*)(src_reg + off) */
+		case BPF_LDX | BPF_MEMSX | BPF_B:
+		case BPF_LDX | BPF_MEMSX | BPF_H:
+		case BPF_LDX | BPF_MEMSX | BPF_W:
+		case BPF_LDX | BPF_PROBE_MEMSX | BPF_B:
+		case BPF_LDX | BPF_PROBE_MEMSX | BPF_H:
+		case BPF_LDX | BPF_PROBE_MEMSX | BPF_W:
 			insn_off = insn->off;
 
-			if (BPF_MODE(insn->code) == BPF_PROBE_MEM) {
+			if (BPF_MODE(insn->code) == BPF_PROBE_MEM ||
+			    BPF_MODE(insn->code) == BPF_PROBE_MEMSX) {
 				/* Conservatively check that src_reg + insn->off is a kernel address:
 				 * src_reg + insn->off >= TASK_SIZE_MAX + PAGE_SIZE
 				 * src_reg is used as scratch for src_reg += insn->off and restored
@@ -1415,8 +1495,13 @@ st: if (is_imm8(insn->off))
 				start_of_ldx = prog;
 				end_of_jmp[-1] = start_of_ldx - end_of_jmp;
 			}
-			emit_ldx(&prog, BPF_SIZE(insn->code), dst_reg, src_reg, insn_off);
-			if (BPF_MODE(insn->code) == BPF_PROBE_MEM) {
+			if (BPF_MODE(insn->code) == BPF_PROBE_MEMSX ||
+			    BPF_MODE(insn->code) == BPF_MEMSX)
+				emit_ldsx(&prog, BPF_SIZE(insn->code), dst_reg, src_reg, insn_off);
+			else
+				emit_ldx(&prog, BPF_SIZE(insn->code), dst_reg, src_reg, insn_off);
+			if (BPF_MODE(insn->code) == BPF_PROBE_MEM ||
+			    BPF_MODE(insn->code) == BPF_PROBE_MEMSX) {
 				struct exception_table_entry *ex;
 				u8 *_insn = image + proglen + (start_of_ldx - temp);
 				s64 delta;
@@ -1730,16 +1815,24 @@ st: if (is_imm8(insn->off))
 			break;
 
 		case BPF_JMP | BPF_JA:
-			if (insn->off == -1)
-				/* -1 jmp instructions will always jump
-				 * backwards two bytes. Explicitly handling
-				 * this case avoids wasting too many passes
-				 * when there are long sequences of replaced
-				 * dead code.
-				 */
-				jmp_offset = -2;
-			else
-				jmp_offset = addrs[i + insn->off] - addrs[i];
+		case BPF_JMP32 | BPF_JA:
+			if (BPF_CLASS(insn->code) == BPF_JMP) {
+				if (insn->off == -1)
+					/* -1 jmp instructions will always jump
+					 * backwards two bytes. Explicitly handling
+					 * this case avoids wasting too many passes
+					 * when there are long sequences of replaced
+					 * dead code.
+					 */
+					jmp_offset = -2;
+				else
+					jmp_offset = addrs[i + insn->off] - addrs[i];
+			} else {
+				if (insn->imm == -1)
+					jmp_offset = -2;
+				else
+					jmp_offset = addrs[i + insn->imm] - addrs[i];
+			}
 
 			if (!jmp_offset) {
 				/*
......
@@ -69,6 +69,9 @@ struct ctl_table_header;
 /* unused opcode to mark special load instruction. Same as BPF_ABS */
 #define BPF_PROBE_MEM	0x20
 
+/* unused opcode to mark special ldsx instruction. Same as BPF_IND */
+#define BPF_PROBE_MEMSX	0x40
+
 /* unused opcode to mark call to interpreter with arguments */
 #define BPF_CALL_ARGS	0xe0
 
@@ -90,22 +93,28 @@ struct ctl_table_header;
 /* ALU ops on registers, bpf_add|sub|...: dst_reg += src_reg */
 
-#define BPF_ALU64_REG(OP, DST, SRC)				\
+#define BPF_ALU64_REG_OFF(OP, DST, SRC, OFF)			\
 	((struct bpf_insn) {					\
 		.code  = BPF_ALU64 | BPF_OP(OP) | BPF_X,	\
 		.dst_reg = DST,					\
 		.src_reg = SRC,					\
-		.off   = 0,					\
+		.off   = OFF,					\
 		.imm   = 0 })
 
-#define BPF_ALU32_REG(OP, DST, SRC)				\
+#define BPF_ALU64_REG(OP, DST, SRC)				\
+	BPF_ALU64_REG_OFF(OP, DST, SRC, 0)
+
+#define BPF_ALU32_REG_OFF(OP, DST, SRC, OFF)			\
 	((struct bpf_insn) {					\
 		.code  = BPF_ALU | BPF_OP(OP) | BPF_X,		\
 		.dst_reg = DST,					\
 		.src_reg = SRC,					\
-		.off   = 0,					\
+		.off   = OFF,					\
 		.imm   = 0 })
 
+#define BPF_ALU32_REG(OP, DST, SRC)				\
+	BPF_ALU32_REG_OFF(OP, DST, SRC, 0)
+
 /* ALU ops on immediates, bpf_add|sub|...: dst_reg += imm32 */
 
 #define BPF_ALU64_IMM(OP, DST, IMM)				\
......
@@ -19,6 +19,7 @@
 /* ld/ldx fields */
 #define BPF_DW		0x18	/* double word (64-bit) */
+#define BPF_MEMSX	0x80	/* load with sign extension */
 #define BPF_ATOMIC	0xc0	/* atomic memory ops - op type in immediate */
 #define BPF_XADD	0xc0	/* exclusive add - legacy name */
......
@@ -61,6 +61,7 @@
 #define AX	regs[BPF_REG_AX]
 #define ARG1	regs[BPF_REG_ARG1]
 #define CTX	regs[BPF_REG_CTX]
+#define OFF	insn->off
 #define IMM	insn->imm
 
 struct bpf_mem_alloc bpf_global_ma;
@@ -372,7 +373,12 @@ static int bpf_adj_delta_to_off(struct bpf_insn *insn, u32 pos, s32 end_old,
 {
 	const s32 off_min = S16_MIN, off_max = S16_MAX;
 	s32 delta = end_new - end_old;
-	s32 off = insn->off;
+	s32 off;
+
+	if (insn->code == (BPF_JMP32 | BPF_JA))
+		off = insn->imm;
+	else
+		off = insn->off;
 
 	if (curr < pos && curr + off + 1 >= end_old)
 		off += delta;
@@ -380,8 +386,12 @@ static int bpf_adj_delta_to_off(struct bpf_insn *insn, u32 pos, s32 end_old,
 		off -= delta;
 	if (off < off_min || off > off_max)
 		return -ERANGE;
-	if (!probe_pass)
-		insn->off = off;
+	if (!probe_pass) {
+		if (insn->code == (BPF_JMP32 | BPF_JA))
+			insn->imm = off;
+		else
+			insn->off = off;
+	}
 
 	return 0;
 }
@@ -1271,7 +1281,7 @@ static int bpf_jit_blind_insn(const struct bpf_insn *from,
 		case BPF_ALU | BPF_MOD | BPF_K:
 			*to++ = BPF_ALU32_IMM(BPF_MOV, BPF_REG_AX, imm_rnd ^ from->imm);
 			*to++ = BPF_ALU32_IMM(BPF_XOR, BPF_REG_AX, imm_rnd);
-			*to++ = BPF_ALU32_REG(from->code, from->dst_reg, BPF_REG_AX);
+			*to++ = BPF_ALU32_REG_OFF(from->code, from->dst_reg, BPF_REG_AX, from->off);
 			break;
 
 		case BPF_ALU64 | BPF_ADD | BPF_K:
@@ -1285,7 +1295,7 @@ static int bpf_jit_blind_insn(const struct bpf_insn *from,
 		case BPF_ALU64 | BPF_MOD | BPF_K:
 			*to++ = BPF_ALU64_IMM(BPF_MOV, BPF_REG_AX, imm_rnd ^ from->imm);
 			*to++ = BPF_ALU64_IMM(BPF_XOR, BPF_REG_AX, imm_rnd);
-			*to++ = BPF_ALU64_REG(from->code, from->dst_reg, BPF_REG_AX);
+			*to++ = BPF_ALU64_REG_OFF(from->code, from->dst_reg, BPF_REG_AX, from->off);
 			break;
 
 		case BPF_JMP | BPF_JEQ | BPF_K:
@@ -1523,6 +1533,7 @@ EXPORT_SYMBOL_GPL(__bpf_call_base);
 	INSN_3(ALU64, DIV, X),			\
 	INSN_3(ALU64, MOD, X),			\
 	INSN_2(ALU64, NEG),			\
+	INSN_3(ALU64, END, TO_LE),		\
 	/* Immediate based. */			\
 	INSN_3(ALU64, ADD, K),			\
 	INSN_3(ALU64, SUB, K),			\
@@ -1591,6 +1602,7 @@ EXPORT_SYMBOL_GPL(__bpf_call_base);
 	INSN_3(JMP, JSLE, K),			\
 	INSN_3(JMP, JSET, K),			\
 	INSN_2(JMP, JA),			\
+	INSN_2(JMP32, JA),			\
 	/* Store instructions. */		\
 	/* Register based. */			\
 	INSN_3(STX, MEM, B),			\
@@ -1610,6 +1622,9 @@ EXPORT_SYMBOL_GPL(__bpf_call_base);
 	INSN_3(LDX, MEM, H),			\
 	INSN_3(LDX, MEM, W),			\
 	INSN_3(LDX, MEM, DW),			\
+	INSN_3(LDX, MEMSX, B),			\
+	INSN_3(LDX, MEMSX, H),			\
+	INSN_3(LDX, MEMSX, W),			\
 	/* Immediate based. */			\
 	INSN_3(LD, IMM, DW)
 
@@ -1666,6 +1681,9 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn)
 		[BPF_LDX | BPF_PROBE_MEM | BPF_H] = &&LDX_PROBE_MEM_H,
 		[BPF_LDX | BPF_PROBE_MEM | BPF_W] = &&LDX_PROBE_MEM_W,
 		[BPF_LDX | BPF_PROBE_MEM | BPF_DW] = &&LDX_PROBE_MEM_DW,
+		[BPF_LDX | BPF_PROBE_MEMSX | BPF_B] = &&LDX_PROBE_MEMSX_B,
+		[BPF_LDX | BPF_PROBE_MEMSX | BPF_H] = &&LDX_PROBE_MEMSX_H,
+		[BPF_LDX | BPF_PROBE_MEMSX | BPF_W] = &&LDX_PROBE_MEMSX_W,
 	};
 #undef BPF_INSN_3_LBL
 #undef BPF_INSN_2_LBL
@@ -1733,13 +1751,36 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn)
 		DST = -DST;
 		CONT;
 	ALU_MOV_X:
-		DST = (u32) SRC;
+		switch (OFF) {
+		case 0:
+			DST = (u32) SRC;
+			break;
+		case 8:
+			DST = (u32)(s8) SRC;
+			break;
+		case 16:
+			DST = (u32)(s16) SRC;
+			break;
+		}
 		CONT;
 	ALU_MOV_K:
 		DST = (u32) IMM;
 		CONT;
 	ALU64_MOV_X:
-		DST = SRC;
+		switch (OFF) {
+		case 0:
+			DST = SRC;
+			break;
+		case 8:
+			DST = (s8) SRC;
+			break;
+		case 16:
+			DST = (s16) SRC;
+			break;
+		case 32:
+			DST = (s32) SRC;
+			break;
+		}
 		CONT;
 	ALU64_MOV_K:
 		DST = IMM;
@@ -1761,36 +1802,114 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn)
 		(*(s64 *) &DST) >>= IMM;
 		CONT;
 	ALU64_MOD_X:
-		div64_u64_rem(DST, SRC, &AX);
-		DST = AX;
+		switch (OFF) {
+		case 0:
+			div64_u64_rem(DST, SRC, &AX);
+			DST = AX;
+			break;
+		case 1:
+			AX = div64_s64(DST, SRC);
+			DST = DST - AX * SRC;
+			break;
+		}
 		CONT;
 	ALU_MOD_X:
-		AX = (u32) DST;
-		DST = do_div(AX, (u32) SRC);
+		switch (OFF) {
+		case 0:
+			AX = (u32) DST;
+			DST = do_div(AX, (u32) SRC);
+			break;
+		case 1:
+			AX = abs((s32)DST);
+			AX = do_div(AX, abs((s32)SRC));
+			if ((s32)DST < 0)
+				DST = (u32)-AX;
+			else
+				DST = (u32)AX;
+			break;
+		}
 		CONT;
 	ALU64_MOD_K:
-		div64_u64_rem(DST, IMM, &AX);
-		DST = AX;
+		switch (OFF) {
+		case 0:
+			div64_u64_rem(DST, IMM, &AX);
+			DST = AX;
+			break;
+		case 1:
+			AX = div64_s64(DST, IMM);
+			DST = DST - AX * IMM;
+			break;
+		}
 		CONT;
 	ALU_MOD_K:
-		AX = (u32) DST;
-		DST = do_div(AX, (u32) IMM);
+		switch (OFF) {
+		case 0:
+			AX = (u32) DST;
+			DST = do_div(AX, (u32) IMM);
+			break;
+		case 1:
+			AX = abs((s32)DST);
+			AX = do_div(AX, abs((s32)IMM));
+			if ((s32)DST < 0)
+				DST = (u32)-AX;
+			else
+				DST = (u32)AX;
+			break;
+		}
 		CONT;
 	ALU64_DIV_X:
-		DST = div64_u64(DST, SRC);
+		switch (OFF) {
+		case 0:
+			DST = div64_u64(DST, SRC);
+			break;
+		case 1:
+			DST = div64_s64(DST, SRC);
+			break;
+		}
 		CONT;
 	ALU_DIV_X:
-		AX = (u32) DST;
-		do_div(AX, (u32) SRC);
-		DST = (u32) AX;
+		switch (OFF) {
+		case 0:
+			AX = (u32) DST;
+			do_div(AX, (u32) SRC);
+			DST = (u32) AX;
+			break;
+		case 1:
+			AX = abs((s32)DST);
+			do_div(AX, abs((s32)SRC));
+			if (((s32)DST < 0) == ((s32)SRC < 0))
+				DST = (u32)AX;
+			else
+				DST = (u32)-AX;
+			break;
+		}
 		CONT;
 	ALU64_DIV_K:
-		DST = div64_u64(DST, IMM);
+		switch (OFF) {
+		case 0:
+			DST = div64_u64(DST, IMM);
+			break;
+		case 1:
+			DST = div64_s64(DST, IMM);
+			break;
+		}
 		CONT;
 	ALU_DIV_K:
-		AX = (u32) DST;
-		do_div(AX, (u32) IMM);
-		DST = (u32) AX;
+		switch (OFF) {
+		case 0:
+			AX = (u32) DST;
+			do_div(AX, (u32) IMM);
+			DST = (u32) AX;
+			break;
+		case 1:
+			AX = abs((s32)DST);
+			do_div(AX, abs((s32)IMM));
+			if (((s32)DST < 0) == ((s32)IMM < 0))
+				DST = (u32)AX;
+			else
+				DST = (u32)-AX;
+			break;
+		}
 		CONT;
 	ALU_END_TO_BE:
 		switch (IMM) {
@@ -1818,6 +1937,19 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn)
 			break;
 		}
 		CONT;
+	ALU64_END_TO_LE:
+		switch (IMM) {
+		case 16:
+			DST = (__force u16) __swab16(DST);
+			break;
+		case 32:
+			DST = (__force u32) __swab32(DST);
+			break;
+		case 64:
+			DST = (__force u64) __swab64(DST);
+			break;
+		}
+		CONT;
 
 	/* CALL */
 	JMP_CALL:
@@ -1867,6 +1999,9 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn)
 	JMP_JA:
 		insn += insn->off;
 		CONT;
+	JMP32_JA:
+		insn += insn->imm;
+		CONT;
 	JMP_EXIT:
 		return BPF_R0;
 	/* JMP */
@@ -1942,6 +2077,21 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn)
 	LDST(DW, u64)
 #undef LDST
 
+#define LDSX(SIZEOP, SIZE)						\
+	LDX_MEMSX_##SIZEOP:						\
+		DST = *(SIZE *)(unsigned long) (SRC + insn->off);	\
+		CONT;							\
+	LDX_PROBE_MEMSX_##SIZEOP:					\
+		bpf_probe_read_kernel(&DST, sizeof(SIZE),		\
+				      (const void *)(long) (SRC + insn->off));	\
+		DST = *((SIZE *)&DST);					\
+		CONT;
+
+	LDSX(B,  s8)
+	LDSX(H, s16)
+	LDSX(W, s32)
+#undef LDSX
+
 #define ATOMIC_ALU_OP(BOP, KOP)						\
 		case BOP:						\
 			if (BPF_SIZE(insn->code) == BPF_W)		\
......
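
The 32-bit signed div/mod paths above reuse the unsigned do_div() helper on
absolute values and fix up the sign afterwards. A stand-alone illustrative
sketch of that trick (user-space C, not from the patch; it ignores the
INT_MIN corner case):

  #include <stdint.h>
  #include <stdlib.h>

  /* Mirrors the interpreter's ALU_MOD_X path for off == 1: take the
   * remainder of the absolute values, then give it the sign of the
   * dividend, which matches C's truncated-division remainder, e.g.
   * smod32(-7, 2) == (uint32_t)-1.
   */
  static uint32_t smod32(uint32_t dst, uint32_t src)
  {
          uint32_t ax = abs((int32_t)dst) % abs((int32_t)src);

          return (int32_t)dst < 0 ? (uint32_t)-ax : ax;
  }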
@@ -87,6 +87,17 @@ const char *const bpf_alu_string[16] = {
 	[BPF_END >> 4]  = "endian",
 };
 
+const char *const bpf_alu_sign_string[16] = {
+	[BPF_DIV >> 4]  = "s/=",
+	[BPF_MOD >> 4]  = "s%=",
+};
+
+const char *const bpf_movsx_string[4] = {
+	[0] = "(s8)",
+	[1] = "(s16)",
+	[3] = "(s32)",
+};
+
 static const char *const bpf_atomic_alu_string[16] = {
 	[BPF_ADD >> 4]  = "add",
 	[BPF_AND >> 4]  = "and",
@@ -101,6 +112,12 @@ static const char *const bpf_ldst_string[] = {
 	[BPF_DW >> 3] = "u64",
 };
 
+static const char *const bpf_ldsx_string[] = {
+	[BPF_W >> 3]  = "s32",
+	[BPF_H >> 3]  = "s16",
+	[BPF_B >> 3]  = "s8",
+};
+
 static const char *const bpf_jmp_string[16] = {
 	[BPF_JA >> 4]   = "jmp",
 	[BPF_JEQ >> 4]  = "==",
@@ -128,6 +145,26 @@ static void print_bpf_end_insn(bpf_insn_print_t verbose,
 		insn->imm, insn->dst_reg);
 }
 
+static void print_bpf_bswap_insn(bpf_insn_print_t verbose,
+				 void *private_data,
+				 const struct bpf_insn *insn)
+{
+	verbose(private_data, "(%02x) r%d = bswap%d r%d\n",
+		insn->code, insn->dst_reg,
+		insn->imm, insn->dst_reg);
+}
+
+static bool is_sdiv_smod(const struct bpf_insn *insn)
+{
+	return (BPF_OP(insn->code) == BPF_DIV || BPF_OP(insn->code) == BPF_MOD) &&
+	       insn->off == 1;
+}
+
+static bool is_movsx(const struct bpf_insn *insn)
+{
+	return BPF_OP(insn->code) == BPF_MOV && insn->off != 0;
+}
+
 void print_bpf_insn(const struct bpf_insn_cbs *cbs,
 		    const struct bpf_insn *insn,
 		    bool allow_ptr_leaks)
@@ -138,7 +175,7 @@ void print_bpf_insn(const struct bpf_insn_cbs *cbs,
 	if (class == BPF_ALU || class == BPF_ALU64) {
 		if (BPF_OP(insn->code) == BPF_END) {
 			if (class == BPF_ALU64)
-				verbose(cbs->private_data, "BUG_alu64_%02x\n", insn->code);
+				print_bpf_bswap_insn(verbose, cbs->private_data, insn);
 			else
 				print_bpf_end_insn(verbose, cbs->private_data, insn);
 		} else if (BPF_OP(insn->code) == BPF_NEG) {
@@ -147,17 +184,20 @@ void print_bpf_insn(const struct bpf_insn_cbs *cbs,
 				insn->dst_reg, class == BPF_ALU ? 'w' : 'r',
 				insn->dst_reg);
 		} else if (BPF_SRC(insn->code) == BPF_X) {
-			verbose(cbs->private_data, "(%02x) %c%d %s %c%d\n",
+			verbose(cbs->private_data, "(%02x) %c%d %s %s%c%d\n",
 				insn->code, class == BPF_ALU ? 'w' : 'r',
 				insn->dst_reg,
-				bpf_alu_string[BPF_OP(insn->code) >> 4],
+				is_sdiv_smod(insn) ? bpf_alu_sign_string[BPF_OP(insn->code) >> 4]
+						   : bpf_alu_string[BPF_OP(insn->code) >> 4],
+				is_movsx(insn) ? bpf_movsx_string[(insn->off >> 3) - 1] : "",
 				class == BPF_ALU ? 'w' : 'r',
 				insn->src_reg);
 		} else {
 			verbose(cbs->private_data, "(%02x) %c%d %s %d\n",
 				insn->code, class == BPF_ALU ? 'w' : 'r',
 				insn->dst_reg,
-				bpf_alu_string[BPF_OP(insn->code) >> 4],
+				is_sdiv_smod(insn) ? bpf_alu_sign_string[BPF_OP(insn->code) >> 4]
+						   : bpf_alu_string[BPF_OP(insn->code) >> 4],
 				insn->imm);
 		}
 	} else if (class == BPF_STX) {
@@ -218,13 +258,15 @@ void print_bpf_insn(const struct bpf_insn_cbs *cbs,
 			verbose(cbs->private_data, "BUG_st_%02x\n", insn->code);
 		}
 	} else if (class == BPF_LDX) {
-		if (BPF_MODE(insn->code) != BPF_MEM) {
+		if (BPF_MODE(insn->code) != BPF_MEM && BPF_MODE(insn->code) != BPF_MEMSX) {
 			verbose(cbs->private_data, "BUG_ldx_%02x\n", insn->code);
 			return;
 		}
 		verbose(cbs->private_data, "(%02x) r%d = *(%s *)(r%d %+d)\n",
 			insn->code, insn->dst_reg,
-			bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
+			BPF_MODE(insn->code) == BPF_MEM ?
+				bpf_ldst_string[BPF_SIZE(insn->code) >> 3] :
+				bpf_ldsx_string[BPF_SIZE(insn->code) >> 3],
 			insn->src_reg, insn->off);
 	} else if (class == BPF_LD) {
 		if (BPF_MODE(insn->code) == BPF_ABS) {
@@ -279,6 +321,9 @@ void print_bpf_insn(const struct bpf_insn_cbs *cbs,
 	} else if (insn->code == (BPF_JMP | BPF_JA)) {
 		verbose(cbs->private_data, "(%02x) goto pc%+d\n",
 			insn->code, insn->off);
+	} else if (insn->code == (BPF_JMP32 | BPF_JA)) {
+		verbose(cbs->private_data, "(%02x) gotol pc%+d\n",
+			insn->code, insn->imm);
 	} else if (insn->code == (BPF_JMP | BPF_EXIT)) {
 		verbose(cbs->private_data, "(%02x) exit\n", insn->code);
 	} else if (BPF_SRC(insn->code) == BPF_X) {
......
@@ -2855,7 +2855,10 @@ static int check_subprogs(struct bpf_verifier_env *env)
 			goto next;
 		if (BPF_OP(code) == BPF_EXIT || BPF_OP(code) == BPF_CALL)
 			goto next;
-		off = i + insn[i].off + 1;
+		if (code == (BPF_JMP32 | BPF_JA))
+			off = i + insn[i].imm + 1;
+		else
+			off = i + insn[i].off + 1;
 		if (off < subprog_start || off >= subprog_end) {
 			verbose(env, "jump out of range from insn %d to %d\n", i, off);
 			return -EINVAL;
@@ -2867,6 +2870,7 @@ static int check_subprogs(struct bpf_verifier_env *env)
 		 * or unconditional jump back
 		 */
 		if (code != (BPF_JMP | BPF_EXIT) &&
+		    code != (BPF_JMP32 | BPF_JA) &&
 		    code != (BPF_JMP | BPF_JA)) {
 			verbose(env, "last insn is not an exit or jmp\n");
 			return -EINVAL;
@@ -3012,8 +3016,10 @@ static bool is_reg64(struct bpf_verifier_env *env, struct bpf_insn *insn,
 		}
 	}
 
+	if (class == BPF_ALU64 && op == BPF_END && (insn->imm == 16 || insn->imm == 32))
+		return false;
+
 	if (class == BPF_ALU64 || class == BPF_JMP ||
-	    /* BPF_END always use BPF_ALU class. */
 	    (class == BPF_ALU && op == BPF_END && insn->imm == 64))
 		return true;
 
@@ -3421,7 +3427,7 @@ static int backtrack_insn(struct bpf_verifier_env *env, int idx, int subseq_idx,
 		return 0;
 	if (opcode == BPF_MOV) {
 		if (BPF_SRC(insn->code) == BPF_X) {
-			/* dreg = sreg
+			/* dreg = sreg or dreg = (s8, s16, s32)sreg
 			 * dreg needs precision after this insn
 			 * sreg needs precision before this insn
 			 */
@@ -5827,6 +5833,147 @@ static void coerce_reg_to_size(struct bpf_reg_state *reg, int size)
 	__reg_combine_64_into_32(reg);
 }
 
+static void set_sext64_default_val(struct bpf_reg_state *reg, int size)
+{
+	if (size == 1) {
+		reg->smin_value = reg->s32_min_value = S8_MIN;
+		reg->smax_value = reg->s32_max_value = S8_MAX;
+	} else if (size == 2) {
+		reg->smin_value = reg->s32_min_value = S16_MIN;
+		reg->smax_value = reg->s32_max_value = S16_MAX;
+	} else {
+		/* size == 4 */
+		reg->smin_value = reg->s32_min_value = S32_MIN;
+		reg->smax_value = reg->s32_max_value = S32_MAX;
+	}
+	reg->umin_value = reg->u32_min_value = 0;
+	reg->umax_value = U64_MAX;
+	reg->u32_max_value = U32_MAX;
+	reg->var_off = tnum_unknown;
+}
+
+static void coerce_reg_to_size_sx(struct bpf_reg_state *reg, int size)
+{
+	s64 init_s64_max, init_s64_min, s64_max, s64_min, u64_cval;
+	u64 top_smax_value, top_smin_value;
+	u64 num_bits = size * 8;
+
+	if (tnum_is_const(reg->var_off)) {
+		u64_cval = reg->var_off.value;
+		if (size == 1)
+			reg->var_off = tnum_const((s8)u64_cval);
+		else if (size == 2)
+			reg->var_off = tnum_const((s16)u64_cval);
+		else
+			/* size == 4 */
+			reg->var_off = tnum_const((s32)u64_cval);
+
+		u64_cval = reg->var_off.value;
+		reg->smax_value = reg->smin_value = u64_cval;
+		reg->umax_value = reg->umin_value = u64_cval;
+		reg->s32_max_value = reg->s32_min_value = u64_cval;
+		reg->u32_max_value = reg->u32_min_value = u64_cval;
+		return;
+	}
+
+	top_smax_value = ((u64)reg->smax_value >> num_bits) << num_bits;
+	top_smin_value = ((u64)reg->smin_value >> num_bits) << num_bits;
+
+	if (top_smax_value != top_smin_value)
+		goto out;
+
+	/* find the s64_min and s64_max after sign extension */
+	if (size == 1) {
+		init_s64_max = (s8)reg->smax_value;
+		init_s64_min = (s8)reg->smin_value;
+	} else if (size == 2) {
+		init_s64_max = (s16)reg->smax_value;
+		init_s64_min = (s16)reg->smin_value;
+	} else {
+		init_s64_max = (s32)reg->smax_value;
+		init_s64_min = (s32)reg->smin_value;
+	}
+	s64_max = max(init_s64_max, init_s64_min);
+	s64_min = min(init_s64_max, init_s64_min);
+
+	/* both of s64_max/s64_min positive or negative */
+	if ((s64_max >= 0) == (s64_min >= 0)) {
+		reg->smin_value = reg->s32_min_value = s64_min;
+		reg->smax_value = reg->s32_max_value = s64_max;
+		reg->umin_value = reg->u32_min_value = s64_min;
+		reg->umax_value = reg->u32_max_value = s64_max;
+		reg->var_off = tnum_range(s64_min, s64_max);
+		return;
+	}
+
+out:
+	set_sext64_default_val(reg, size);
+}
+
+static void set_sext32_default_val(struct bpf_reg_state *reg, int size)
+{
+	if (size == 1) {
+		reg->s32_min_value = S8_MIN;
+		reg->s32_max_value = S8_MAX;
+	} else {
+		/* size == 2 */
+		reg->s32_min_value = S16_MIN;
+		reg->s32_max_value = S16_MAX;
+	}
+	reg->u32_min_value = 0;
+	reg->u32_max_value = U32_MAX;
+}
+
+static void coerce_subreg_to_size_sx(struct bpf_reg_state *reg, int size)
+{
+	s32 init_s32_max, init_s32_min, s32_max, s32_min, u32_val;
+	u32 top_smax_value, top_smin_value;
+	u32 num_bits = size * 8;
+
+	if (tnum_is_const(reg->var_off)) {
+		u32_val = reg->var_off.value;
+		if (size == 1)
+			reg->var_off = tnum_const((s8)u32_val);
+		else
+			reg->var_off = tnum_const((s16)u32_val);
+
+		u32_val = reg->var_off.value;
+		reg->s32_min_value = reg->s32_max_value = u32_val;
+		reg->u32_min_value = reg->u32_max_value = u32_val;
+		return;
+	}
+
+	top_smax_value = ((u32)reg->s32_max_value >> num_bits) << num_bits;
+	top_smin_value = ((u32)reg->s32_min_value >> num_bits) << num_bits;
+
+	if (top_smax_value != top_smin_value)
+		goto out;
+
+	/* find the s32_min and s32_max after sign extension */
+	if (size == 1) {
+		init_s32_max = (s8)reg->s32_max_value;
+		init_s32_min = (s8)reg->s32_min_value;
+	} else {
+		/* size == 2 */
+		init_s32_max = (s16)reg->s32_max_value;
+		init_s32_min = (s16)reg->s32_min_value;
+	}
+	s32_max = max(init_s32_max, init_s32_min);
+	s32_min = min(init_s32_max, init_s32_min);
+
+	if ((s32_min >= 0) == (s32_max >= 0)) {
+		reg->s32_min_value = s32_min;
+		reg->s32_max_value = s32_max;
+		reg->u32_min_value = (u32)s32_min;
+		reg->u32_max_value = (u32)s32_max;
+		return;
+	}
+
+out:
+	set_sext32_default_val(reg, size);
+}
+
 static bool bpf_map_is_rdonly(const struct bpf_map *map)
 {
 	/* A map is considered read-only if the following condition are true:
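
To see what coerce_reg_to_size_sx() tracks, consider its known-constant case:
a register holding exactly 0xf0 that is sign extended from 8 bits becomes the
known constant -16, so every min/max bound collapses to that value. An
illustrative user-space model of just that branch (not from the patch):

  #include <stdint.h>

  /* The constant branch of coerce_reg_to_size_sx(): truncate to 'size'
   * bytes and sign extend back to 64 bits, e.g. sext_const(0xf0, 1) == -16.
   */
  static int64_t sext_const(uint64_t cval, int size)
  {
          switch (size) {
          case 1: return (int8_t)cval;
          case 2: return (int16_t)cval;
          default: return (int32_t)cval;  /* size == 4 */
          }
  }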
@@ -5847,7 +5994,8 @@ static bool bpf_map_is_rdonly(const struct bpf_map *map)
 	       !bpf_map_write_active(map);
 }
 
-static int bpf_map_direct_read(struct bpf_map *map, int off, int size, u64 *val)
+static int bpf_map_direct_read(struct bpf_map *map, int off, int size, u64 *val,
+			       bool is_ldsx)
 {
 	void *ptr;
 	u64 addr;
@@ -5860,13 +6008,13 @@ static int bpf_map_direct_read(struct bpf_map *map, int off, int size, u64 *val)
 	switch (size) {
 	case sizeof(u8):
-		*val = (u64)*(u8 *)ptr;
+		*val = is_ldsx ? (s64)*(s8 *)ptr : (u64)*(u8 *)ptr;
 		break;
 	case sizeof(u16):
-		*val = (u64)*(u16 *)ptr;
+		*val = is_ldsx ? (s64)*(s16 *)ptr : (u64)*(u16 *)ptr;
 		break;
 	case sizeof(u32):
-		*val = (u64)*(u32 *)ptr;
+		*val = is_ldsx ? (s64)*(s32 *)ptr : (u64)*(u32 *)ptr;
 		break;
 	case sizeof(u64):
 		*val = *(u64 *)ptr;
@@ -6285,7 +6433,7 @@ static int check_stack_access_within_bounds(
  */
 static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regno,
 			    int off, int bpf_size, enum bpf_access_type t,
-			    int value_regno, bool strict_alignment_once)
+			    int value_regno, bool strict_alignment_once, bool is_ldsx)
 {
 	struct bpf_reg_state *regs = cur_regs(env);
 	struct bpf_reg_state *reg = regs + regno;
@@ -6346,7 +6494,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
 				u64 val = 0;
 
 				err = bpf_map_direct_read(map, map_off, size,
-							  &val);
+							  &val, is_ldsx);
 				if (err)
 					return err;
 
@@ -6516,8 +6664,11 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
 	if (!err && size < BPF_REG_SIZE && value_regno >= 0 && t == BPF_READ &&
 	    regs[value_regno].type == SCALAR_VALUE) {
-		/* b/h/w load zero-extends, mark upper bits as known 0 */
-		coerce_reg_to_size(&regs[value_regno], size);
+		if (!is_ldsx)
+			/* b/h/w load zero-extends, mark upper bits as known 0 */
+			coerce_reg_to_size(&regs[value_regno], size);
+		else
+			coerce_reg_to_size_sx(&regs[value_regno], size);
 	}
 	return err;
 }
@@ -6609,17 +6760,17 @@ static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_i
 	 * case to simulate the register fill.
 	 */
 	err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off,
-			       BPF_SIZE(insn->code), BPF_READ, -1, true);
+			       BPF_SIZE(insn->code), BPF_READ, -1, true, false);
 	if (!err && load_reg >= 0)
 		err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off,
 				       BPF_SIZE(insn->code), BPF_READ, load_reg,
-				       true);
+				       true, false);
 	if (err)
 		return err;
 
 	/* Check whether we can write into the same memory. */
 	err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off,
-			       BPF_SIZE(insn->code), BPF_WRITE, -1, true);
+			       BPF_SIZE(insn->code), BPF_WRITE, -1, true, false);
 	if (err)
 		return err;
 
@@ -6865,7 +7016,7 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
 			return zero_size_allowed ? 0 : -EACCES;
 
 		return check_mem_access(env, env->insn_idx, regno, offset, BPF_B,
-					atype, -1, false);
+					atype, -1, false, false);
 	}
 	fallthrough;
@@ -7237,7 +7388,7 @@ static int process_dynptr_func(struct bpf_verifier_env *env, int regno, int insn
 	/* we write BPF_DW bits (8 bytes) at a time */
 	for (i = 0; i < BPF_DYNPTR_SIZE; i += 8) {
 		err = check_mem_access(env, insn_idx, regno,
-				       i, BPF_DW, BPF_WRITE, -1, false);
+				       i, BPF_DW, BPF_WRITE, -1, false, false);
 		if (err)
 			return err;
 	}
@@ -7330,7 +7481,7 @@ static int process_iter_arg(struct bpf_verifier_env *env, int regno, int insn_id
 	for (i = 0; i < nr_slots * 8; i += BPF_REG_SIZE) {
 		err = check_mem_access(env, insn_idx, regno,
-				       i, BPF_DW, BPF_WRITE, -1, false);
+				       i, BPF_DW, BPF_WRITE, -1, false, false);
 		if (err)
 			return err;
 	}
@@ -9474,7 +9625,7 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
 	 */
 	for (i = 0; i < meta.access_size; i++) {
 		err = check_mem_access(env, insn_idx, meta.regno, i, BPF_B,
-				       BPF_WRITE, -1, false);
+				       BPF_WRITE, -1, false, false);
 		if (err)
 			return err;
 	}
@@ -12931,7 +13082,8 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
 		} else {
 			if (insn->src_reg != BPF_REG_0 || insn->off != 0 ||
 			    (insn->imm != 16 && insn->imm != 32 && insn->imm != 64) ||
-			    BPF_CLASS(insn->code) == BPF_ALU64) {
+			    (BPF_CLASS(insn->code) == BPF_ALU64 &&
+			     BPF_SRC(insn->code) != BPF_TO_LE)) {
 				verbose(env, "BPF_END uses reserved fields\n");
 				return -EINVAL;
 			}
@@ -12956,11 +13108,24 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
 	} else if (opcode == BPF_MOV) {
 
 		if (BPF_SRC(insn->code) == BPF_X) {
-			if (insn->imm != 0 || insn->off != 0) {
+			if (insn->imm != 0) {
 				verbose(env, "BPF_MOV uses reserved fields\n");
 				return -EINVAL;
 			}
 
+			if (BPF_CLASS(insn->code) == BPF_ALU) {
+				if (insn->off != 0 && insn->off != 8 && insn->off != 16) {
+					verbose(env, "BPF_MOV uses reserved fields\n");
+					return -EINVAL;
+				}
+			} else {
+				if (insn->off != 0 && insn->off != 8 && insn->off != 16 &&
+				    insn->off != 32) {
+					verbose(env, "BPF_MOV uses reserved fields\n");
+					return -EINVAL;
+				}
+			}
+
 			/* check src operand */
 			err = check_reg_arg(env, insn->src_reg, SRC_OP);
 			if (err)
@@ -12984,18 +13149,33 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
 				       !tnum_is_const(src_reg->var_off);
 
 			if (BPF_CLASS(insn->code) == BPF_ALU64) {
-				/* case: R1 = R2
-				 * copy register state to dest reg
-				 */
-				if (need_id)
-					/* Assign src and dst registers the same ID
-					 * that will be used by find_equal_scalars()
-					 * to propagate min/max range.
-					 */
-					src_reg->id = ++env->id_gen;
-				copy_register_state(dst_reg, src_reg);
-				dst_reg->live |= REG_LIVE_WRITTEN;
-				dst_reg->subreg_def = DEF_NOT_SUBREG;
+				if (insn->off == 0) {
+					/* case: R1 = R2
+					 * copy register state to dest reg
+					 */
+					if (need_id)
+						/* Assign src and dst registers the same ID
+						 * that will be used by find_equal_scalars()
+						 * to propagate min/max range.
+						 */
+						src_reg->id = ++env->id_gen;
+					copy_register_state(dst_reg, src_reg);
+					dst_reg->live |= REG_LIVE_WRITTEN;
+					dst_reg->subreg_def = DEF_NOT_SUBREG;
+				} else {
+					/* case: R1 = (s8, s16, s32)R2 */
+					bool no_sext;
+
+					no_sext = src_reg->umax_value < (1ULL << (insn->off - 1));
+					if (no_sext && need_id)
+						src_reg->id = ++env->id_gen;
+					copy_register_state(dst_reg, src_reg);
+					if (!no_sext)
+						dst_reg->id = 0;
+					coerce_reg_to_size_sx(dst_reg, insn->off >> 3);
+					dst_reg->live |= REG_LIVE_WRITTEN;
+					dst_reg->subreg_def = DEF_NOT_SUBREG;
+				}
 			} else {
 				/* R1 = (u32) R2 */
 				if (is_pointer_value(env, insn->src_reg)) {
@@ -13004,19 +13184,33 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
 						insn->src_reg);
 					return -EACCES;
 				} else if (src_reg->type == SCALAR_VALUE) {
-					bool is_src_reg_u32 = src_reg->umax_value <= U32_MAX;
-
-					if (is_src_reg_u32 && need_id)
-						src_reg->id = ++env->id_gen;
-					copy_register_state(dst_reg, src_reg);
-					/* Make sure ID is cleared if src_reg is not in u32 range otherwise
-					 * dst_reg min/max could be incorrectly
-					 * propagated into src_reg by find_equal_scalars()
-					 */
-					if (!is_src_reg_u32)
-						dst_reg->id = 0;
-					dst_reg->live |= REG_LIVE_WRITTEN;
-					dst_reg->subreg_def = env->insn_idx + 1;
+					if (insn->off == 0) {
+						bool is_src_reg_u32 = src_reg->umax_value <= U32_MAX;
+
+						if (is_src_reg_u32 && need_id)
+							src_reg->id = ++env->id_gen;
+						copy_register_state(dst_reg, src_reg);
+						/* Make sure ID is cleared if src_reg is not in u32
+						 * range otherwise dst_reg min/max could be incorrectly
+						 * propagated into src_reg by find_equal_scalars()
+						 */
+						if (!is_src_reg_u32)
+							dst_reg->id = 0;
+						dst_reg->live |= REG_LIVE_WRITTEN;
+						dst_reg->subreg_def = env->insn_idx + 1;
+					} else {
+						/* case: W1 = (s8, s16)W2 */
+						bool no_sext = src_reg->umax_value < (1ULL << (insn->off - 1));
+
+						if (no_sext && need_id)
+							src_reg->id = ++env->id_gen;
+						copy_register_state(dst_reg, src_reg);
+						if (!no_sext)
+							dst_reg->id = 0;
+						dst_reg->live |= REG_LIVE_WRITTEN;
+						dst_reg->subreg_def = env->insn_idx + 1;
+						coerce_subreg_to_size_sx(dst_reg, insn->off >> 3);
+					}
 				} else {
 					mark_reg_unknown(env, regs,
 							 insn->dst_reg);
@@ -13047,7 +13241,8 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
 	} else {	/* all other ALU ops: and, sub, xor, add, ... */
 
 		if (BPF_SRC(insn->code) == BPF_X) {
-			if (insn->imm != 0 || insn->off != 0) {
+			if (insn->imm != 0 || insn->off > 1 ||
+			    (insn->off == 1 && opcode != BPF_MOD && opcode != BPF_DIV)) {
 				verbose(env, "BPF_ALU uses reserved fields\n");
 				return -EINVAL;
 			}
@@ -13056,7 +13251,8 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
 			if (err)
 				return err;
 		} else {
-			if (insn->src_reg != BPF_REG_0 || insn->off != 0) {
+			if (insn->src_reg != BPF_REG_0 || insn->off > 1 ||
+			    (insn->off == 1 && opcode != BPF_MOD && opcode != BPF_DIV)) {
 				verbose(env, "BPF_ALU uses reserved fields\n");
 				return -EINVAL;
 			}
@@ -14600,7 +14796,7 @@ static int visit_func_call_insn(int t, struct bpf_insn *insns,
 static int visit_insn(int t, struct bpf_verifier_env *env)
 {
 	struct bpf_insn *insns = env->prog->insnsi, *insn = &insns[t];
-	int ret;
+	int ret, off;

 	if (bpf_pseudo_func(insn))
 		return visit_func_call_insn(t, insns, env, true);
@@ -14648,14 +14844,19 @@ static int visit_insn(int t, struct bpf_verifier_env *env)
 		if (BPF_SRC(insn->code) != BPF_K)
 			return -EINVAL;

+		if (BPF_CLASS(insn->code) == BPF_JMP)
+			off = insn->off;
+		else
+			off = insn->imm;
+
 		/* unconditional jump with single edge */
-		ret = push_insn(t, t + insn->off + 1, FALLTHROUGH, env,
+		ret = push_insn(t, t + off + 1, FALLTHROUGH, env,
 				true);
 		if (ret)
 			return ret;

-		mark_prune_point(env, t + insn->off + 1);
-		mark_jmp_point(env, t + insn->off + 1);
+		mark_prune_point(env, t + off + 1);
+		mark_jmp_point(env, t + off + 1);

 		return ret;
@@ -16202,7 +16403,7 @@ static int save_aux_ptr_type(struct bpf_verifier_env *env, enum bpf_reg_type typ
 			 * Have to support a use case when one path through
 			 * the program yields TRUSTED pointer while another
 			 * is UNTRUSTED. Fallback to UNTRUSTED to generate
-			 * BPF_PROBE_MEM.
+			 * BPF_PROBE_MEM/BPF_PROBE_MEMSX.
 			 */
 			*prev_type = PTR_TO_BTF_ID | PTR_UNTRUSTED;
 		} else {
@@ -16343,7 +16544,8 @@ static int do_check(struct bpf_verifier_env *env)
 			 */
 			err = check_mem_access(env, env->insn_idx, insn->src_reg,
 					       insn->off, BPF_SIZE(insn->code),
-					       BPF_READ, insn->dst_reg, false);
+					       BPF_READ, insn->dst_reg, false,
+					       BPF_MODE(insn->code) == BPF_MEMSX);
 			if (err)
 				return err;
@@ -16380,7 +16582,7 @@ static int do_check(struct bpf_verifier_env *env)
 			/* check that memory (dst_reg + off) is writeable */
 			err = check_mem_access(env, env->insn_idx, insn->dst_reg,
 					       insn->off, BPF_SIZE(insn->code),
-					       BPF_WRITE, insn->src_reg, false);
+					       BPF_WRITE, insn->src_reg, false, false);
 			if (err)
 				return err;
@@ -16405,7 +16607,7 @@ static int do_check(struct bpf_verifier_env *env)
 			/* check that memory (dst_reg + off) is writeable */
 			err = check_mem_access(env, env->insn_idx, insn->dst_reg,
 					       insn->off, BPF_SIZE(insn->code),
-					       BPF_WRITE, -1, false);
+					       BPF_WRITE, -1, false, false);
 			if (err)
 				return err;
@@ -16450,15 +16652,18 @@ static int do_check(struct bpf_verifier_env *env)
 				mark_reg_scratched(env, BPF_REG_0);
 			} else if (opcode == BPF_JA) {
 				if (BPF_SRC(insn->code) != BPF_K ||
-				    insn->imm != 0 ||
 				    insn->src_reg != BPF_REG_0 ||
 				    insn->dst_reg != BPF_REG_0 ||
-				    class == BPF_JMP32) {
+				    (class == BPF_JMP && insn->imm != 0) ||
+				    (class == BPF_JMP32 && insn->off != 0)) {
 					verbose(env, "BPF_JA uses reserved fields\n");
 					return -EINVAL;
 				}

-				env->insn_idx += insn->off + 1;
+				if (class == BPF_JMP)
+					env->insn_idx += insn->off + 1;
+				else
+					env->insn_idx += insn->imm + 1;
 				continue;

 			} else if (opcode == BPF_EXIT) {
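Both BPF_JA hunks encode the same rule: classic BPF_JMP|BPF_JA keeps its 16-bit displacement in insn->off, while the new BPF_JMP32|BPF_JA (gotol) carries a 32-bit displacement in insn->imm. A sketch of the selection logic in plain C (ja_displacement() is an illustrative helper, not a kernel function):

#include <linux/bpf.h>

static int ja_displacement(const struct bpf_insn *insn)
{
	if (BPF_CLASS(insn->code) == BPF_JMP)
		return insn->off;	/* 16-bit range: about +-32k insns */
	return insn->imm;		/* BPF_JMP32: full 32-bit range */
}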
@@ -16833,7 +17038,8 @@ static int resolve_pseudo_ldimm64(struct bpf_verifier_env *env)
 	for (i = 0; i < insn_cnt; i++, insn++) {
 		if (BPF_CLASS(insn->code) == BPF_LDX &&
-		    (BPF_MODE(insn->code) != BPF_MEM || insn->imm != 0)) {
+		    ((BPF_MODE(insn->code) != BPF_MEM && BPF_MODE(insn->code) != BPF_MEMSX) ||
+		     insn->imm != 0)) {
 			verbose(env, "BPF_LDX uses reserved fields\n");
 			return -EINVAL;
 		}
@@ -17304,13 +17510,13 @@ static bool insn_is_cond_jump(u8 code)
 {
 	u8 op;

+	op = BPF_OP(code);
 	if (BPF_CLASS(code) == BPF_JMP32)
-		return true;
+		return op != BPF_JA;

 	if (BPF_CLASS(code) != BPF_JMP)
 		return false;

-	op = BPF_OP(code);
 	return op != BPF_JA && op != BPF_EXIT && op != BPF_CALL;
 }
@@ -17527,11 +17733,15 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
 	for (i = 0; i < insn_cnt; i++, insn++) {
 		bpf_convert_ctx_access_t convert_ctx_access;
+		u8 mode;

 		if (insn->code == (BPF_LDX | BPF_MEM | BPF_B) ||
 		    insn->code == (BPF_LDX | BPF_MEM | BPF_H) ||
 		    insn->code == (BPF_LDX | BPF_MEM | BPF_W) ||
-		    insn->code == (BPF_LDX | BPF_MEM | BPF_DW)) {
+		    insn->code == (BPF_LDX | BPF_MEM | BPF_DW) ||
+		    insn->code == (BPF_LDX | BPF_MEMSX | BPF_B) ||
+		    insn->code == (BPF_LDX | BPF_MEMSX | BPF_H) ||
+		    insn->code == (BPF_LDX | BPF_MEMSX | BPF_W)) {
 			type = BPF_READ;
 		} else if (insn->code == (BPF_STX | BPF_MEM | BPF_B) ||
 			   insn->code == (BPF_STX | BPF_MEM | BPF_H) ||
@@ -17590,8 +17800,12 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
 		 */
 		case PTR_TO_BTF_ID | MEM_ALLOC | PTR_UNTRUSTED:
 			if (type == BPF_READ) {
-				insn->code = BPF_LDX | BPF_PROBE_MEM |
-					BPF_SIZE((insn)->code);
+				if (BPF_MODE(insn->code) == BPF_MEM)
+					insn->code = BPF_LDX | BPF_PROBE_MEM |
+						     BPF_SIZE((insn)->code);
+				else
+					insn->code = BPF_LDX | BPF_PROBE_MEMSX |
+						     BPF_SIZE((insn)->code);
 				env->prog->aux->num_exentries++;
 			}
 			continue;
@@ -17601,6 +17815,7 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
 		ctx_field_size = env->insn_aux_data[i + delta].ctx_field_size;
 		size = BPF_LDST_BYTES(insn);
+		mode = BPF_MODE(insn->code);

 		/* If the read access is a narrower load of the field,
 		 * convert to a 4/8-byte load, to minimum program type specific
@@ -17660,6 +17875,10 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
 							(1ULL << size * 8) - 1);
 			}
 		}
+		if (mode == BPF_MEMSX)
+			insn_buf[cnt++] = BPF_RAW_INSN(BPF_ALU64 | BPF_MOV | BPF_X,
+						       insn->dst_reg, insn->dst_reg,
+						       size * 8, 0);

 		new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
 		if (!new_prog)
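The rewrite above means a sign-extending ctx load becomes the usual program-type-specific BPF_MEM load followed by a sign-extending mov whose off field names the source width in bits (size * 8). Roughly, the appended instruction looks like this; a sketch using the uapi struct bpf_insn, assuming a one-byte load into r2:

#include <linux/bpf.h>

/* Sign-extending mov appended after the converted ctx load. */
static const struct bpf_insn memsx_tail = {
	.code	 = BPF_ALU64 | BPF_MOV | BPF_X,
	.dst_reg = BPF_REG_2,
	.src_reg = BPF_REG_2,
	.off	 = 8,	/* size * 8 for a one-byte load */
	.imm	 = 0,
};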
@@ -17779,7 +17998,8 @@ static int jit_subprogs(struct bpf_verifier_env *env)
 		insn = func[i]->insnsi;
 		for (j = 0; j < func[i]->len; j++, insn++) {
 			if (BPF_CLASS(insn->code) == BPF_LDX &&
-			    BPF_MODE(insn->code) == BPF_PROBE_MEM)
+			    (BPF_MODE(insn->code) == BPF_PROBE_MEM ||
+			     BPF_MODE(insn->code) == BPF_PROBE_MEMSX))
 				num_exentries++;
 		}
 		func[i]->aux->num_exentries = num_exentries;
......
@@ -19,6 +19,7 @@

 /* ld/ldx fields */
 #define BPF_DW		0x18	/* double word (64-bit) */
+#define BPF_MEMSX	0x80	/* load with sign extension */
 #define BPF_ATOMIC	0xc0	/* atomic memory ops - op type in immediate */
 #define BPF_XADD	0xc0	/* exclusive add - legacy name */
......
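For orientation, the new mode composes with the existing class and size bits; a small userspace check of the resulting sign-extended load opcodes (assumes a uapi linux/bpf.h that already carries the BPF_MEMSX define above):

#include <stdio.h>
#include <linux/bpf.h>

int main(void)
{
	printf("0x%02x\n", BPF_LDX | BPF_MEMSX | BPF_B);	/* 0x91: r = *(s8 *) */
	printf("0x%02x\n", BPF_LDX | BPF_MEMSX | BPF_H);	/* 0x89: r = *(s16 *) */
	printf("0x%02x\n", BPF_LDX | BPF_MEMSX | BPF_W);	/* 0x81: r = *(s32 *) */
	return 0;
}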
@@ -13,6 +13,7 @@ test_dev_cgroup
 /test_progs
 /test_progs-no_alu32
 /test_progs-bpf_gcc
+/test_progs-cpuv4
 test_verifier_log
 feature
 test_sock
@@ -36,6 +37,7 @@ test_cpp
 *.lskel.h
 /no_alu32
 /bpf_gcc
+/cpuv4
 /host-tools
 /tools
 /runqslower
......
@@ -33,9 +33,13 @@ CFLAGS += -g -O0 -rdynamic -Wall -Werror $(GENFLAGS) $(SAN_CFLAGS) \
 LDFLAGS += $(SAN_LDFLAGS)
 LDLIBS += -lelf -lz -lrt -lpthread

-# Silence some warnings when compiled with clang
 ifneq ($(LLVM),)
+# Silence some warnings when compiled with clang
 CFLAGS += -Wno-unused-command-line-argument
+# Check whether cpu=v4 is supported or not by clang
+ifneq ($(shell $(CLANG) --target=bpf -mcpu=help 2>&1 | grep 'v4'),)
+CLANG_CPUV4 := 1
+endif
 endif

 # Order correspond to 'make run_tests' order
@@ -51,6 +55,10 @@ ifneq ($(BPF_GCC),)
 TEST_GEN_PROGS += test_progs-bpf_gcc
 endif

+ifneq ($(CLANG_CPUV4),)
+TEST_GEN_PROGS += test_progs-cpuv4
+endif
+
 TEST_GEN_FILES = test_lwt_ip_encap.bpf.o test_tc_edt.bpf.o
 TEST_FILES = xsk_prereqs.sh $(wildcard progs/btf_dump_test_case_*.c)
@@ -383,6 +391,11 @@ define CLANG_NOALU32_BPF_BUILD_RULE
 	$(call msg,CLNG-BPF,$(TRUNNER_BINARY),$2)
 	$(Q)$(CLANG) $3 -O2 --target=bpf -c $1 -mcpu=v2 -o $2
 endef
+# Similar to CLANG_BPF_BUILD_RULE, but with cpu-v4
+define CLANG_CPUV4_BPF_BUILD_RULE
+	$(call msg,CLNG-BPF,$(TRUNNER_BINARY),$2)
+	$(Q)$(CLANG) $3 -O2 --target=bpf -c $1 -mcpu=v4 -o $2
+endef
 # Build BPF object using GCC
 define GCC_BPF_BUILD_RULE
 	$(call msg,GCC-BPF,$(TRUNNER_BINARY),$2)
@@ -425,7 +438,7 @@ LINKED_BPF_SRCS := $(patsubst %.bpf.o,%.c,$(foreach skel,$(LINKED_SKELS),$($(ske
 # $eval()) and pass control to DEFINE_TEST_RUNNER_RULES.
 # Parameters:
 # $1 - test runner base binary name (e.g., test_progs)
-# $2 - test runner extra "flavor" (e.g., no_alu32, gcc-bpf, etc)
+# $2 - test runner extra "flavor" (e.g., no_alu32, cpuv4, gcc-bpf, etc)
 define DEFINE_TEST_RUNNER

 TRUNNER_OUTPUT := $(OUTPUT)$(if $2,/)$2
@@ -453,7 +466,7 @@ endef
 # Using TRUNNER_XXX variables, provided by callers of DEFINE_TEST_RUNNER and
 # set up by DEFINE_TEST_RUNNER itself, create test runner build rules with:
 # $1 - test runner base binary name (e.g., test_progs)
-# $2 - test runner extra "flavor" (e.g., no_alu32, gcc-bpf, etc)
+# $2 - test runner extra "flavor" (e.g., no_alu32, cpuv4, gcc-bpf, etc)
 define DEFINE_TEST_RUNNER_RULES

 ifeq ($($(TRUNNER_OUTPUT)-dir),)
@@ -584,6 +597,13 @@ TRUNNER_BPF_BUILD_RULE := CLANG_NOALU32_BPF_BUILD_RULE
 TRUNNER_BPF_CFLAGS := $(BPF_CFLAGS) $(CLANG_CFLAGS)
 $(eval $(call DEFINE_TEST_RUNNER,test_progs,no_alu32))

+# Define test_progs-cpuv4 test runner.
+ifneq ($(CLANG_CPUV4),)
+TRUNNER_BPF_BUILD_RULE := CLANG_CPUV4_BPF_BUILD_RULE
+TRUNNER_BPF_CFLAGS := $(BPF_CFLAGS) $(CLANG_CFLAGS)
+$(eval $(call DEFINE_TEST_RUNNER,test_progs,cpuv4))
+endif
+
 # Define test_progs BPF-GCC-flavored test runner.
 ifneq ($(BPF_GCC),)
 TRUNNER_BPF_BUILD_RULE := GCC_BPF_BUILD_RULE
@@ -681,7 +701,7 @@ EXTRA_CLEAN := $(TEST_CUSTOM_PROGS) $(SCRATCH_DIR) $(HOST_SCRATCH_DIR) \
 	prog_tests/tests.h map_tests/tests.h verifier/tests.h \
 	feature bpftool \
 	$(addprefix $(OUTPUT)/,*.o *.skel.h *.lskel.h *.subskel.h \
-		       no_alu32 bpf_gcc bpf_testmod.ko \
+		       no_alu32 cpuv4 bpf_gcc bpf_testmod.ko \
 		       liburandom_read.so)

 .PHONY: docs docs-clean
......
@@ -98,6 +98,12 @@ bpf_testmod_test_struct_arg_8(u64 a, void *b, short c, int d, void *e,
 	return bpf_testmod_test_struct_arg_result;
 }

+noinline int
+bpf_testmod_test_arg_ptr_to_struct(struct bpf_testmod_struct_arg_1 *a) {
+	bpf_testmod_test_struct_arg_result = a->a;
+	return bpf_testmod_test_struct_arg_result;
+}
+
 __bpf_kfunc void
 bpf_testmod_test_mod_kfunc(int i)
 {
@@ -240,7 +246,7 @@ bpf_testmod_test_read(struct file *file, struct kobject *kobj,
 		.off = off,
 		.len = len,
 	};
-	struct bpf_testmod_struct_arg_1 struct_arg1 = {10};
+	struct bpf_testmod_struct_arg_1 struct_arg1 = {10}, struct_arg1_2 = {-1};
 	struct bpf_testmod_struct_arg_2 struct_arg2 = {2, 3};
 	struct bpf_testmod_struct_arg_3 *struct_arg3;
 	struct bpf_testmod_struct_arg_4 struct_arg4 = {21, 22};
@@ -259,6 +265,7 @@ bpf_testmod_test_read(struct file *file, struct kobject *kobj,
 	(void)bpf_testmod_test_struct_arg_8(16, (void *)17, 18, 19,
 					    (void *)20, struct_arg4, 23);

+	(void)bpf_testmod_test_arg_ptr_to_struct(&struct_arg1_2);

 	struct_arg3 = kmalloc((sizeof(struct bpf_testmod_struct_arg_3) +
 				sizeof(int)), GFP_KERNEL);
......
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates.*/
#include <test_progs.h>
#include <network_helpers.h>
#include "test_ldsx_insn.skel.h"
static void test_map_val_and_probed_memory(void)
{
struct test_ldsx_insn *skel;
int err;
skel = test_ldsx_insn__open();
if (!ASSERT_OK_PTR(skel, "test_ldsx_insn__open"))
return;
if (skel->rodata->skip) {
test__skip();
goto out;
}
bpf_program__set_autoload(skel->progs.rdonly_map_prog, true);
bpf_program__set_autoload(skel->progs.map_val_prog, true);
bpf_program__set_autoload(skel->progs.test_ptr_struct_arg, true);
err = test_ldsx_insn__load(skel);
if (!ASSERT_OK(err, "test_ldsx_insn__load"))
goto out;
err = test_ldsx_insn__attach(skel);
if (!ASSERT_OK(err, "test_ldsx_insn__attach"))
goto out;
ASSERT_OK(trigger_module_test_read(256), "trigger_read");
ASSERT_EQ(skel->bss->done1, 1, "done1");
ASSERT_EQ(skel->bss->ret1, 1, "ret1");
ASSERT_EQ(skel->bss->done2, 1, "done2");
ASSERT_EQ(skel->bss->ret2, 1, "ret2");
ASSERT_EQ(skel->bss->int_member, -1, "int_member");
out:
test_ldsx_insn__destroy(skel);
}
static void test_ctx_member_sign_ext(void)
{
struct test_ldsx_insn *skel;
int err, fd, cgroup_fd;
char buf[16] = {0};
socklen_t optlen;
cgroup_fd = test__join_cgroup("/ldsx_test");
if (!ASSERT_GE(cgroup_fd, 0, "join_cgroup /ldsx_test"))
return;
skel = test_ldsx_insn__open();
if (!ASSERT_OK_PTR(skel, "test_ldsx_insn__open"))
goto close_cgroup_fd;
if (skel->rodata->skip) {
test__skip();
goto destroy_skel;
}
bpf_program__set_autoload(skel->progs._getsockopt, true);
err = test_ldsx_insn__load(skel);
if (!ASSERT_OK(err, "test_ldsx_insn__load"))
goto destroy_skel;
skel->links._getsockopt =
bpf_program__attach_cgroup(skel->progs._getsockopt, cgroup_fd);
if (!ASSERT_OK_PTR(skel->links._getsockopt, "getsockopt_link"))
goto destroy_skel;
fd = socket(AF_INET, SOCK_STREAM, 0);
if (!ASSERT_GE(fd, 0, "socket"))
goto destroy_skel;
optlen = sizeof(buf);
(void)getsockopt(fd, SOL_IP, IP_TTL, buf, &optlen);
ASSERT_EQ(skel->bss->set_optlen, -1, "optlen");
ASSERT_EQ(skel->bss->set_retval, -1, "retval");
close(fd);
destroy_skel:
test_ldsx_insn__destroy(skel);
close_cgroup_fd:
close(cgroup_fd);
}
static void test_ctx_member_narrow_sign_ext(void)
{
struct test_ldsx_insn *skel;
struct __sk_buff skb = {};
LIBBPF_OPTS(bpf_test_run_opts, topts,
.data_in = &pkt_v4,
.data_size_in = sizeof(pkt_v4),
.ctx_in = &skb,
.ctx_size_in = sizeof(skb),
);
int err, prog_fd;
skel = test_ldsx_insn__open();
if (!ASSERT_OK_PTR(skel, "test_ldsx_insn__open"))
return;
if (skel->rodata->skip) {
test__skip();
goto out;
}
bpf_program__set_autoload(skel->progs._tc, true);
err = test_ldsx_insn__load(skel);
if (!ASSERT_OK(err, "test_ldsx_insn__load"))
goto out;
prog_fd = bpf_program__fd(skel->progs._tc);
err = bpf_prog_test_run_opts(prog_fd, &topts);
ASSERT_OK(err, "test_run");
ASSERT_EQ(skel->bss->set_mark, -2, "set_mark");
out:
test_ldsx_insn__destroy(skel);
}
void test_ldsx_insn(void)
{
if (test__start_subtest("map_val and probed_memory"))
test_map_val_and_probed_memory();
if (test__start_subtest("ctx_member_sign_ext"))
test_ctx_member_sign_ext();
if (test__start_subtest("ctx_member_narrow_sign_ext"))
test_ctx_member_narrow_sign_ext();
}
@@ -11,6 +11,7 @@
 #include "verifier_bounds_deduction_non_const.skel.h"
 #include "verifier_bounds_mix_sign_unsign.skel.h"
 #include "verifier_bpf_get_stack.skel.h"
+#include "verifier_bswap.skel.h"
 #include "verifier_btf_ctx_access.skel.h"
 #include "verifier_cfg.skel.h"
 #include "verifier_cgroup_inv_retcode.skel.h"
@@ -24,6 +25,7 @@
 #include "verifier_direct_stack_access_wraparound.skel.h"
 #include "verifier_div0.skel.h"
 #include "verifier_div_overflow.skel.h"
+#include "verifier_gotol.skel.h"
 #include "verifier_helper_access_var_len.skel.h"
 #include "verifier_helper_packet_access.skel.h"
 #include "verifier_helper_restricted.skel.h"
@@ -31,6 +33,7 @@
 #include "verifier_int_ptr.skel.h"
 #include "verifier_jeq_infer_not_null.skel.h"
 #include "verifier_ld_ind.skel.h"
+#include "verifier_ldsx.skel.h"
 #include "verifier_leak_ptr.skel.h"
 #include "verifier_loops1.skel.h"
 #include "verifier_lwt.skel.h"
@@ -40,6 +43,7 @@
 #include "verifier_map_ret_val.skel.h"
 #include "verifier_masking.skel.h"
 #include "verifier_meta_access.skel.h"
+#include "verifier_movsx.skel.h"
 #include "verifier_netfilter_ctx.skel.h"
 #include "verifier_netfilter_retcode.skel.h"
 #include "verifier_prevent_map_lookup.skel.h"
@@ -51,6 +55,7 @@
 #include "verifier_ringbuf.skel.h"
 #include "verifier_runtime_jit.skel.h"
 #include "verifier_scalar_ids.skel.h"
+#include "verifier_sdiv.skel.h"
 #include "verifier_search_pruning.skel.h"
 #include "verifier_sock.skel.h"
 #include "verifier_spill_fill.skel.h"
@@ -113,6 +118,7 @@ void test_verifier_bounds_deduction(void) { RUN(verifier_bounds_deduction); }
 void test_verifier_bounds_deduction_non_const(void) { RUN(verifier_bounds_deduction_non_const); }
 void test_verifier_bounds_mix_sign_unsign(void) { RUN(verifier_bounds_mix_sign_unsign); }
 void test_verifier_bpf_get_stack(void) { RUN(verifier_bpf_get_stack); }
+void test_verifier_bswap(void) { RUN(verifier_bswap); }
 void test_verifier_btf_ctx_access(void) { RUN(verifier_btf_ctx_access); }
 void test_verifier_cfg(void) { RUN(verifier_cfg); }
 void test_verifier_cgroup_inv_retcode(void) { RUN(verifier_cgroup_inv_retcode); }
@@ -126,6 +132,7 @@ void test_verifier_direct_packet_access(void) { RUN(verifier_direct_packet_acces
 void test_verifier_direct_stack_access_wraparound(void) { RUN(verifier_direct_stack_access_wraparound); }
 void test_verifier_div0(void) { RUN(verifier_div0); }
 void test_verifier_div_overflow(void) { RUN(verifier_div_overflow); }
+void test_verifier_gotol(void) { RUN(verifier_gotol); }
 void test_verifier_helper_access_var_len(void) { RUN(verifier_helper_access_var_len); }
 void test_verifier_helper_packet_access(void) { RUN(verifier_helper_packet_access); }
 void test_verifier_helper_restricted(void) { RUN(verifier_helper_restricted); }
@@ -133,6 +140,7 @@ void test_verifier_helper_value_access(void) { RUN(verifier_helper_value_access
 void test_verifier_int_ptr(void) { RUN(verifier_int_ptr); }
 void test_verifier_jeq_infer_not_null(void) { RUN(verifier_jeq_infer_not_null); }
 void test_verifier_ld_ind(void) { RUN(verifier_ld_ind); }
+void test_verifier_ldsx(void) { RUN(verifier_ldsx); }
 void test_verifier_leak_ptr(void) { RUN(verifier_leak_ptr); }
 void test_verifier_loops1(void) { RUN(verifier_loops1); }
 void test_verifier_lwt(void) { RUN(verifier_lwt); }
@@ -142,6 +150,7 @@ void test_verifier_map_ptr_mixing(void) { RUN(verifier_map_ptr_mixing); }
 void test_verifier_map_ret_val(void) { RUN(verifier_map_ret_val); }
 void test_verifier_masking(void) { RUN(verifier_masking); }
 void test_verifier_meta_access(void) { RUN(verifier_meta_access); }
+void test_verifier_movsx(void) { RUN(verifier_movsx); }
 void test_verifier_netfilter_ctx(void) { RUN(verifier_netfilter_ctx); }
 void test_verifier_netfilter_retcode(void) { RUN(verifier_netfilter_retcode); }
 void test_verifier_prevent_map_lookup(void) { RUN(verifier_prevent_map_lookup); }
@@ -153,6 +162,7 @@ void test_verifier_regalloc(void) { RUN(verifier_regalloc); }
 void test_verifier_ringbuf(void) { RUN(verifier_ringbuf); }
 void test_verifier_runtime_jit(void) { RUN(verifier_runtime_jit); }
 void test_verifier_scalar_ids(void) { RUN(verifier_scalar_ids); }
+void test_verifier_sdiv(void) { RUN(verifier_sdiv); }
 void test_verifier_search_pruning(void) { RUN(verifier_search_pruning); }
 void test_verifier_sock(void) { RUN(verifier_sock); }
 void test_verifier_spill_fill(void) { RUN(verifier_spill_fill); }
......
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
#if defined(__TARGET_ARCH_x86) && __clang_major__ >= 18
const volatile int skip = 0;
#else
const volatile int skip = 1;
#endif
volatile const short val1 = -1;
volatile const int val2 = -1;
short val3 = -1;
int val4 = -1;
int done1, done2, ret1, ret2;
SEC("?raw_tp/sys_enter")
int rdonly_map_prog(const void *ctx)
{
if (done1)
return 0;
done1 = 1;
/* val1/val2 readonly map */
if (val1 == val2)
ret1 = 1;
return 0;
}
SEC("?raw_tp/sys_enter")
int map_val_prog(const void *ctx)
{
if (done2)
return 0;
done2 = 1;
/* val3/val4 regular read/write map */
if (val3 == val4)
ret2 = 1;
return 0;
}
struct bpf_testmod_struct_arg_1 {
int a;
};
long long int_member;
SEC("?fentry/bpf_testmod_test_arg_ptr_to_struct")
int BPF_PROG2(test_ptr_struct_arg, struct bpf_testmod_struct_arg_1 *, p)
{
/* probed memory access */
int_member = p->a;
return 0;
}
long long set_optlen, set_retval;
SEC("?cgroup/getsockopt")
int _getsockopt(volatile struct bpf_sockopt *ctx)
{
int old_optlen, old_retval;
old_optlen = ctx->optlen;
old_retval = ctx->retval;
ctx->optlen = -1;
ctx->retval = -1;
/* sign extension for ctx member */
set_optlen = ctx->optlen;
set_retval = ctx->retval;
ctx->optlen = old_optlen;
ctx->retval = old_retval;
return 0;
}
long long set_mark;
SEC("?tc")
int _tc(volatile struct __sk_buff *skb)
{
long long tmp_mark;
int old_mark;
old_mark = skb->mark;
skb->mark = 0xf6fe;
/* narrowed sign extension for ctx member */
#if __clang_major__ >= 18
/* force narrow one-byte signed load. Otherwise, compiler may
* generate a 32-bit unsigned load followed by an s8 movsx.
*/
asm volatile ("r1 = *(s8 *)(%[ctx] + %[off_mark])\n\t"
"%[tmp_mark] = r1"
: [tmp_mark]"=r"(tmp_mark)
: [ctx]"r"(skb),
[off_mark]"i"(offsetof(struct __sk_buff, mark))
: "r1");
#else
tmp_mark = (char)skb->mark;
#endif
set_mark = tmp_mark;
skb->mark = old_mark;
return 0;
}
char _license[] SEC("license") = "GPL";
// SPDX-License-Identifier: GPL-2.0
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
#include "bpf_misc.h"
#if defined(__TARGET_ARCH_x86) && __clang_major__ >= 18
SEC("socket")
__description("BSWAP, 16")
__success __success_unpriv __retval(0x23ff)
__naked void bswap_16(void)
{
asm volatile (" \
r0 = 0xff23; \
r0 = bswap16 r0; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("BSWAP, 32")
__success __success_unpriv __retval(0x23ff0000)
__naked void bswap_32(void)
{
asm volatile (" \
r0 = 0xff23; \
r0 = bswap32 r0; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("BSWAP, 64")
__success __success_unpriv __retval(0x34ff12ff)
__naked void bswap_64(void)
{
asm volatile (" \
r0 = %[u64_val] ll; \
r0 = bswap64 r0; \
exit; \
" :
: [u64_val]"i"(0xff12ff34ff56ff78ull)
: __clobber_all);
}
#else
SEC("socket")
__description("cpuv4 is not supported by compiler or jit, use a dummy test")
__success
int dummy_test(void)
{
return 0;
}
#endif
char _license[] SEC("license") = "GPL";
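The expected __retval() values in these bswap tests can be cross-checked in plain C with the compiler byte-swap builtins; the test runner compares the low 32 bits of r0, hence the truncation in the 64-bit case:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	printf("0x%x\n", __builtin_bswap16(0xff23));	/* 0x23ff */
	printf("0x%x\n", __builtin_bswap32(0xff23));	/* 0x23ff0000 */
	printf("0x%x\n", (uint32_t)__builtin_bswap64(0xff12ff34ff56ff78ull)); /* 0x34ff12ff */
	return 0;
}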
// SPDX-License-Identifier: GPL-2.0
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
#include "bpf_misc.h"
#if defined(__TARGET_ARCH_x86) && __clang_major__ >= 18
SEC("socket")
__description("gotol, small_imm")
__success __success_unpriv __retval(1)
__naked void gotol_small_imm(void)
{
asm volatile (" \
call %[bpf_ktime_get_ns]; \
if r0 == 0 goto l0_%=; \
gotol l1_%=; \
l2_%=: \
gotol l3_%=; \
l1_%=: \
r0 = 1; \
gotol l2_%=; \
l0_%=: \
r0 = 2; \
l3_%=: \
exit; \
" :
: __imm(bpf_ktime_get_ns)
: __clobber_all);
}
#else
SEC("socket")
__description("cpuv4 is not supported by compiler or jit, use a dummy test")
__success
int dummy_test(void)
{
return 0;
}
#endif
char _license[] SEC("license") = "GPL";
// SPDX-License-Identifier: GPL-2.0
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
#include "bpf_misc.h"
#if defined(__TARGET_ARCH_x86) && __clang_major__ >= 18
SEC("socket")
__description("LDSX, S8")
__success __success_unpriv __retval(-2)
__naked void ldsx_s8(void)
{
asm volatile (" \
r1 = 0x3fe; \
*(u64 *)(r10 - 8) = r1; \
r0 = *(s8 *)(r10 - 8); \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("LDSX, S16")
__success __success_unpriv __retval(-2)
__naked void ldsx_s16(void)
{
asm volatile (" \
r1 = 0x3fffe; \
*(u64 *)(r10 - 8) = r1; \
r0 = *(s16 *)(r10 - 8); \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("LDSX, S32")
__success __success_unpriv __retval(-1)
__naked void ldsx_s32(void)
{
asm volatile (" \
r1 = 0xfffffffe; \
*(u64 *)(r10 - 8) = r1; \
r0 = *(s32 *)(r10 - 8); \
r0 >>= 1; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("LDSX, S8 range checking, privileged")
__log_level(2) __success __retval(1)
__msg("R1_w=scalar(smin=-128,smax=127)")
__naked void ldsx_s8_range_priv(void)
{
asm volatile (" \
call %[bpf_get_prandom_u32]; \
*(u64 *)(r10 - 8) = r0; \
r1 = *(s8 *)(r10 - 8); \
/* r1 with s8 range */ \
if r1 s> 0x7f goto l0_%=; \
if r1 s< -0x80 goto l0_%=; \
r0 = 1; \
l1_%=: \
exit; \
l0_%=: \
r0 = 2; \
goto l1_%=; \
" :
: __imm(bpf_get_prandom_u32)
: __clobber_all);
}
SEC("socket")
__description("LDSX, S16 range checking")
__success __success_unpriv __retval(1)
__naked void ldsx_s16_range(void)
{
asm volatile (" \
call %[bpf_get_prandom_u32]; \
*(u64 *)(r10 - 8) = r0; \
r1 = *(s16 *)(r10 - 8); \
/* r1 with s16 range */ \
if r1 s> 0x7fff goto l0_%=; \
if r1 s< -0x8000 goto l0_%=; \
r0 = 1; \
l1_%=: \
exit; \
l0_%=: \
r0 = 2; \
goto l1_%=; \
" :
: __imm(bpf_get_prandom_u32)
: __clobber_all);
}
SEC("socket")
__description("LDSX, S32 range checking")
__success __success_unpriv __retval(1)
__naked void ldsx_s32_range(void)
{
asm volatile (" \
call %[bpf_get_prandom_u32]; \
*(u64 *)(r10 - 8) = r0; \
r1 = *(s32 *)(r10 - 8); \
/* r1 with s32 range */ \
if r1 s> 0x7fffFFFF goto l0_%=; \
if r1 s< -0x80000000 goto l0_%=; \
r0 = 1; \
l1_%=: \
exit; \
l0_%=: \
r0 = 2; \
goto l1_%=; \
" :
: __imm(bpf_get_prandom_u32)
: __clobber_all);
}
#else
SEC("socket")
__description("cpuv4 is not supported by compiler or jit, use a dummy test")
__success
int dummy_test(void)
{
return 0;
}
#endif
char _license[] SEC("license") = "GPL";
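The LDSX retvals follow directly from narrowing then sign-extending: the low byte of 0x3fe is 0xfe (-2), the low half of 0x3fffe is 0xfffe (-2), and the s32 case shifts 0xfffffffffffffffe right by one before the runner truncates r0 to 32 bits. A plain C cross-check:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	printf("%d\n", (int8_t)0x3fe);		/* -2: LDSX, S8 */
	printf("%d\n", (int16_t)0x3fffe);	/* -2: LDSX, S16 */
	/* s32 load then logical right shift, as in ldsx_s32:
	 * 0xfffffffffffffffe >> 1 = 0x7fffffffffffffff, low 32 bits = -1
	 */
	printf("%d\n", (int)((uint64_t)(int32_t)0xfffffffe >> 1));	/* -1 */
	return 0;
}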
// SPDX-License-Identifier: GPL-2.0
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
#include "bpf_misc.h"
#if defined(__TARGET_ARCH_x86) && __clang_major__ >= 18
SEC("socket")
__description("MOV32SX, S8")
__success __success_unpriv __retval(0x23)
__naked void mov32sx_s8(void)
{
asm volatile (" \
w0 = 0xff23; \
w0 = (s8)w0; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("MOV32SX, S16")
__success __success_unpriv __retval(0xFFFFff23)
__naked void mov32sx_s16(void)
{
asm volatile (" \
w0 = 0xff23; \
w0 = (s16)w0; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("MOV64SX, S8")
__success __success_unpriv __retval(-2)
__naked void mov64sx_s8(void)
{
asm volatile (" \
r0 = 0x1fe; \
r0 = (s8)r0; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("MOV64SX, S16")
__success __success_unpriv __retval(0xf23)
__naked void mov64sx_s16(void)
{
asm volatile (" \
r0 = 0xf0f23; \
r0 = (s16)r0; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("MOV64SX, S32")
__success __success_unpriv __retval(-1)
__naked void mov64sx_s32(void)
{
asm volatile (" \
r0 = 0xfffffffe; \
r0 = (s32)r0; \
r0 >>= 1; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("MOV32SX, S8, range_check")
__success __success_unpriv __retval(1)
__naked void mov32sx_s8_range(void)
{
asm volatile (" \
call %[bpf_get_prandom_u32]; \
w1 = (s8)w0; \
/* w1 with s8 range */ \
if w1 s> 0x7f goto l0_%=; \
if w1 s< -0x80 goto l0_%=; \
r0 = 1; \
l1_%=: \
exit; \
l0_%=: \
r0 = 2; \
goto l1_%=; \
" :
: __imm(bpf_get_prandom_u32)
: __clobber_all);
}
SEC("socket")
__description("MOV32SX, S16, range_check")
__success __success_unpriv __retval(1)
__naked void mov32sx_s16_range(void)
{
asm volatile (" \
call %[bpf_get_prandom_u32]; \
w1 = (s16)w0; \
/* w1 with s16 range */ \
if w1 s> 0x7fff goto l0_%=; \
if w1 s< -0x8000 goto l0_%=; \
r0 = 1; \
l1_%=: \
exit; \
l0_%=: \
r0 = 2; \
goto l1_%=; \
" :
: __imm(bpf_get_prandom_u32)
: __clobber_all);
}
SEC("socket")
__description("MOV32SX, S16, range_check 2")
__success __success_unpriv __retval(1)
__naked void mov32sx_s16_range_2(void)
{
asm volatile (" \
r1 = 65535; \
w2 = (s16)w1; \
r2 >>= 1; \
if r2 != 0x7fffFFFF goto l0_%=; \
r0 = 1; \
l1_%=: \
exit; \
l0_%=: \
r0 = 0; \
goto l1_%=; \
" :
: __imm(bpf_get_prandom_u32)
: __clobber_all);
}
SEC("socket")
__description("MOV64SX, S8, range_check")
__success __success_unpriv __retval(1)
__naked void mov64sx_s8_range(void)
{
asm volatile (" \
call %[bpf_get_prandom_u32]; \
r1 = (s8)r0; \
/* r1 with s8 range */ \
if r1 s> 0x7f goto l0_%=; \
if r1 s< -0x80 goto l0_%=; \
r0 = 1; \
l1_%=: \
exit; \
l0_%=: \
r0 = 2; \
goto l1_%=; \
" :
: __imm(bpf_get_prandom_u32)
: __clobber_all);
}
SEC("socket")
__description("MOV64SX, S16, range_check")
__success __success_unpriv __retval(1)
__naked void mov64sx_s16_range(void)
{
asm volatile (" \
call %[bpf_get_prandom_u32]; \
r1 = (s16)r0; \
/* r1 with s16 range */ \
if r1 s> 0x7fff goto l0_%=; \
if r1 s< -0x8000 goto l0_%=; \
r0 = 1; \
l1_%=: \
exit; \
l0_%=: \
r0 = 2; \
goto l1_%=; \
" :
: __imm(bpf_get_prandom_u32)
: __clobber_all);
}
SEC("socket")
__description("MOV64SX, S32, range_check")
__success __success_unpriv __retval(1)
__naked void mov64sx_s32_range(void)
{
asm volatile (" \
call %[bpf_get_prandom_u32]; \
r1 = (s32)r0; \
/* r1 with s32 range */ \
if r1 s> 0x7fffffff goto l0_%=; \
if r1 s< -0x80000000 goto l0_%=; \
r0 = 1; \
l1_%=: \
exit; \
l0_%=: \
r0 = 2; \
goto l1_%=; \
" :
: __imm(bpf_get_prandom_u32)
: __clobber_all);
}
#else
SEC("socket")
__description("cpuv4 is not supported by compiler or jit, use a dummy test")
__success
int dummy_test(void)
{
return 0;
}
#endif
char _license[] SEC("license") = "GPL";
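Likewise for the MOVSX retvals, which plain C casts reproduce: (s8) keeps only the low byte, (s16) the low half, each sign-extended to the destination width:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	printf("0x%x\n", (uint32_t)(int8_t)0xff23);	/* 0x23: MOV32SX, S8 */
	printf("0x%x\n", (uint32_t)(int16_t)0xff23);	/* 0xffffff23: MOV32SX, S16 */
	printf("%lld\n", (long long)(int8_t)0x1fe);	/* -2: MOV64SX, S8 */
	printf("0x%llx\n", (unsigned long long)(int16_t)0xf0f23); /* 0xf23: MOV64SX, S16 */
	return 0;
}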
// SPDX-License-Identifier: GPL-2.0
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
#include "bpf_misc.h"
#if defined(__TARGET_ARCH_x86) && __clang_major__ >= 18
SEC("socket")
__description("SDIV32, non-zero imm divisor, check 1")
__success __success_unpriv __retval(-20)
__naked void sdiv32_non_zero_imm_1(void)
{
asm volatile (" \
w0 = -41; \
w0 s/= 2; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SDIV32, non-zero imm divisor, check 2")
__success __success_unpriv __retval(-20)
__naked void sdiv32_non_zero_imm_2(void)
{
asm volatile (" \
w0 = 41; \
w0 s/= -2; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SDIV32, non-zero imm divisor, check 3")
__success __success_unpriv __retval(20)
__naked void sdiv32_non_zero_imm_3(void)
{
asm volatile (" \
w0 = -41; \
w0 s/= -2; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SDIV32, non-zero imm divisor, check 4")
__success __success_unpriv __retval(-21)
__naked void sdiv32_non_zero_imm_4(void)
{
asm volatile (" \
w0 = -42; \
w0 s/= 2; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SDIV32, non-zero imm divisor, check 5")
__success __success_unpriv __retval(-21)
__naked void sdiv32_non_zero_imm_5(void)
{
asm volatile (" \
w0 = 42; \
w0 s/= -2; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SDIV32, non-zero imm divisor, check 6")
__success __success_unpriv __retval(21)
__naked void sdiv32_non_zero_imm_6(void)
{
asm volatile (" \
w0 = -42; \
w0 s/= -2; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SDIV32, non-zero imm divisor, check 7")
__success __success_unpriv __retval(21)
__naked void sdiv32_non_zero_imm_7(void)
{
asm volatile (" \
w0 = 42; \
w0 s/= 2; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SDIV32, non-zero imm divisor, check 8")
__success __success_unpriv __retval(20)
__naked void sdiv32_non_zero_imm_8(void)
{
asm volatile (" \
w0 = 41; \
w0 s/= 2; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SDIV32, non-zero reg divisor, check 1")
__success __success_unpriv __retval(-20)
__naked void sdiv32_non_zero_reg_1(void)
{
asm volatile (" \
w0 = -41; \
w1 = 2; \
w0 s/= w1; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SDIV32, non-zero reg divisor, check 2")
__success __success_unpriv __retval(-20)
__naked void sdiv32_non_zero_reg_2(void)
{
asm volatile (" \
w0 = 41; \
w1 = -2; \
w0 s/= w1; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SDIV32, non-zero reg divisor, check 3")
__success __success_unpriv __retval(20)
__naked void sdiv32_non_zero_reg_3(void)
{
asm volatile (" \
w0 = -41; \
w1 = -2; \
w0 s/= w1; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SDIV32, non-zero reg divisor, check 4")
__success __success_unpriv __retval(-21)
__naked void sdiv32_non_zero_reg_4(void)
{
asm volatile (" \
w0 = -42; \
w1 = 2; \
w0 s/= w1; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SDIV32, non-zero reg divisor, check 5")
__success __success_unpriv __retval(-21)
__naked void sdiv32_non_zero_reg_5(void)
{
asm volatile (" \
w0 = 42; \
w1 = -2; \
w0 s/= w1; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SDIV32, non-zero reg divisor, check 6")
__success __success_unpriv __retval(21)
__naked void sdiv32_non_zero_reg_6(void)
{
asm volatile (" \
w0 = -42; \
w1 = -2; \
w0 s/= w1; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SDIV32, non-zero reg divisor, check 7")
__success __success_unpriv __retval(21)
__naked void sdiv32_non_zero_reg_7(void)
{
asm volatile (" \
w0 = 42; \
w1 = 2; \
w0 s/= w1; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SDIV32, non-zero reg divisor, check 8")
__success __success_unpriv __retval(20)
__naked void sdiv32_non_zero_reg_8(void)
{
asm volatile (" \
w0 = 41; \
w1 = 2; \
w0 s/= w1; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SDIV64, non-zero imm divisor, check 1")
__success __success_unpriv __retval(-20)
__naked void sdiv64_non_zero_imm_1(void)
{
asm volatile (" \
r0 = -41; \
r0 s/= 2; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SDIV64, non-zero imm divisor, check 2")
__success __success_unpriv __retval(-20)
__naked void sdiv64_non_zero_imm_2(void)
{
asm volatile (" \
r0 = 41; \
r0 s/= -2; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SDIV64, non-zero imm divisor, check 3")
__success __success_unpriv __retval(20)
__naked void sdiv64_non_zero_imm_3(void)
{
asm volatile (" \
r0 = -41; \
r0 s/= -2; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SDIV64, non-zero imm divisor, check 4")
__success __success_unpriv __retval(-21)
__naked void sdiv64_non_zero_imm_4(void)
{
asm volatile (" \
r0 = -42; \
r0 s/= 2; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SDIV64, non-zero imm divisor, check 5")
__success __success_unpriv __retval(-21)
__naked void sdiv64_non_zero_imm_5(void)
{
asm volatile (" \
r0 = 42; \
r0 s/= -2; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SDIV64, non-zero imm divisor, check 6")
__success __success_unpriv __retval(21)
__naked void sdiv64_non_zero_imm_6(void)
{
asm volatile (" \
r0 = -42; \
r0 s/= -2; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SDIV64, non-zero reg divisor, check 1")
__success __success_unpriv __retval(-20)
__naked void sdiv64_non_zero_reg_1(void)
{
asm volatile (" \
r0 = -41; \
r1 = 2; \
r0 s/= r1; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SDIV64, non-zero reg divisor, check 2")
__success __success_unpriv __retval(-20)
__naked void sdiv64_non_zero_reg_2(void)
{
asm volatile (" \
r0 = 41; \
r1 = -2; \
r0 s/= r1; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SDIV64, non-zero reg divisor, check 3")
__success __success_unpriv __retval(20)
__naked void sdiv64_non_zero_reg_3(void)
{
asm volatile (" \
r0 = -41; \
r1 = -2; \
r0 s/= r1; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SDIV64, non-zero reg divisor, check 4")
__success __success_unpriv __retval(-21)
__naked void sdiv64_non_zero_reg_4(void)
{
asm volatile (" \
r0 = -42; \
r1 = 2; \
r0 s/= r1; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SDIV64, non-zero reg divisor, check 5")
__success __success_unpriv __retval(-21)
__naked void sdiv64_non_zero_reg_5(void)
{
asm volatile (" \
r0 = 42; \
r1 = -2; \
r0 s/= r1; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SDIV64, non-zero reg divisor, check 6")
__success __success_unpriv __retval(21)
__naked void sdiv64_non_zero_reg_6(void)
{
asm volatile (" \
r0 = -42; \
r1 = -2; \
r0 s/= r1; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SMOD32, non-zero imm divisor, check 1")
__success __success_unpriv __retval(-1)
__naked void smod32_non_zero_imm_1(void)
{
asm volatile (" \
w0 = -41; \
w0 s%%= 2; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SMOD32, non-zero imm divisor, check 2")
__success __success_unpriv __retval(1)
__naked void smod32_non_zero_imm_2(void)
{
asm volatile (" \
w0 = 41; \
w0 s%%= -2; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SMOD32, non-zero imm divisor, check 3")
__success __success_unpriv __retval(-1)
__naked void smod32_non_zero_imm_3(void)
{
asm volatile (" \
w0 = -41; \
w0 s%%= -2; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SMOD32, non-zero imm divisor, check 4")
__success __success_unpriv __retval(0)
__naked void smod32_non_zero_imm_4(void)
{
asm volatile (" \
w0 = -42; \
w0 s%%= 2; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SMOD32, non-zero imm divisor, check 5")
__success __success_unpriv __retval(0)
__naked void smod32_non_zero_imm_5(void)
{
asm volatile (" \
w0 = 42; \
w0 s%%= -2; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SMOD32, non-zero imm divisor, check 6")
__success __success_unpriv __retval(0)
__naked void smod32_non_zero_imm_6(void)
{
asm volatile (" \
w0 = -42; \
w0 s%%= -2; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SMOD32, non-zero reg divisor, check 1")
__success __success_unpriv __retval(-1)
__naked void smod32_non_zero_reg_1(void)
{
asm volatile (" \
w0 = -41; \
w1 = 2; \
w0 s%%= w1; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SMOD32, non-zero reg divisor, check 2")
__success __success_unpriv __retval(1)
__naked void smod32_non_zero_reg_2(void)
{
asm volatile (" \
w0 = 41; \
w1 = -2; \
w0 s%%= w1; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SMOD32, non-zero reg divisor, check 3")
__success __success_unpriv __retval(-1)
__naked void smod32_non_zero_reg_3(void)
{
asm volatile (" \
w0 = -41; \
w1 = -2; \
w0 s%%= w1; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SMOD32, non-zero reg divisor, check 4")
__success __success_unpriv __retval(0)
__naked void smod32_non_zero_reg_4(void)
{
asm volatile (" \
w0 = -42; \
w1 = 2; \
w0 s%%= w1; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SMOD32, non-zero reg divisor, check 5")
__success __success_unpriv __retval(0)
__naked void smod32_non_zero_reg_5(void)
{
asm volatile (" \
w0 = 42; \
w1 = -2; \
w0 s%%= w1; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SMOD32, non-zero reg divisor, check 6")
__success __success_unpriv __retval(0)
__naked void smod32_non_zero_reg_6(void)
{
asm volatile (" \
w0 = -42; \
w1 = -2; \
w0 s%%= w1; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SMOD64, non-zero imm divisor, check 1")
__success __success_unpriv __retval(-1)
__naked void smod64_non_zero_imm_1(void)
{
asm volatile (" \
r0 = -41; \
r0 s%%= 2; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SMOD64, non-zero imm divisor, check 2")
__success __success_unpriv __retval(1)
__naked void smod64_non_zero_imm_2(void)
{
asm volatile (" \
r0 = 41; \
r0 s%%= -2; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SMOD64, non-zero imm divisor, check 3")
__success __success_unpriv __retval(-1)
__naked void smod64_non_zero_imm_3(void)
{
asm volatile (" \
r0 = -41; \
r0 s%%= -2; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SMOD64, non-zero imm divisor, check 4")
__success __success_unpriv __retval(0)
__naked void smod64_non_zero_imm_4(void)
{
asm volatile (" \
r0 = -42; \
r0 s%%= 2; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SMOD64, non-zero imm divisor, check 5")
__success __success_unpriv __retval(0)
__naked void smod64_non_zero_imm_5(void)
{
asm volatile (" \
r0 = 42; \
r0 s%%= -2; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SMOD64, non-zero imm divisor, check 6")
__success __success_unpriv __retval(0)
__naked void smod64_non_zero_imm_6(void)
{
asm volatile (" \
r0 = -42; \
r0 s%%= -2; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SMOD64, non-zero imm divisor, check 7")
__success __success_unpriv __retval(0)
__naked void smod64_non_zero_imm_7(void)
{
asm volatile (" \
r0 = 42; \
r0 s%%= 2; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SMOD64, non-zero imm divisor, check 8")
__success __success_unpriv __retval(1)
__naked void smod64_non_zero_imm_8(void)
{
asm volatile (" \
r0 = 41; \
r0 s%%= 2; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SMOD64, non-zero reg divisor, check 1")
__success __success_unpriv __retval(-1)
__naked void smod64_non_zero_reg_1(void)
{
asm volatile (" \
r0 = -41; \
r1 = 2; \
r0 s%%= r1; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SMOD64, non-zero reg divisor, check 2")
__success __success_unpriv __retval(1)
__naked void smod64_non_zero_reg_2(void)
{
asm volatile (" \
r0 = 41; \
r1 = -2; \
r0 s%%= r1; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SMOD64, non-zero reg divisor, check 3")
__success __success_unpriv __retval(-1)
__naked void smod64_non_zero_reg_3(void)
{
asm volatile (" \
r0 = -41; \
r1 = -2; \
r0 s%%= r1; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SMOD64, non-zero reg divisor, check 4")
__success __success_unpriv __retval(0)
__naked void smod64_non_zero_reg_4(void)
{
asm volatile (" \
r0 = -42; \
r1 = 2; \
r0 s%%= r1; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SMOD64, non-zero reg divisor, check 5")
__success __success_unpriv __retval(0)
__naked void smod64_non_zero_reg_5(void)
{
asm volatile (" \
r0 = 42; \
r1 = -2; \
r0 s%%= r1; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SMOD64, non-zero reg divisor, check 6")
__success __success_unpriv __retval(0)
__naked void smod64_non_zero_reg_6(void)
{
asm volatile (" \
r0 = -42; \
r1 = -2; \
r0 s%%= r1; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SMOD64, non-zero reg divisor, check 7")
__success __success_unpriv __retval(0)
__naked void smod64_non_zero_reg_7(void)
{
asm volatile (" \
r0 = 42; \
r1 = 2; \
r0 s%%= r1; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SMOD64, non-zero reg divisor, check 8")
__success __success_unpriv __retval(1)
__naked void smod64_non_zero_reg_8(void)
{
asm volatile (" \
r0 = 41; \
r1 = 2; \
r0 s%%= r1; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SDIV32, zero divisor")
__success __success_unpriv __retval(0)
__naked void sdiv32_zero_divisor(void)
{
asm volatile (" \
w0 = 42; \
w1 = 0; \
w2 = -1; \
w2 s/= w1; \
w0 = w2; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SDIV64, zero divisor")
__success __success_unpriv __retval(0)
__naked void sdiv64_zero_divisor(void)
{
asm volatile (" \
r0 = 42; \
r1 = 0; \
r2 = -1; \
r2 s/= r1; \
r0 = r2; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SMOD32, zero divisor")
__success __success_unpriv __retval(-1)
__naked void smod32_zero_divisor(void)
{
asm volatile (" \
w0 = 42; \
w1 = 0; \
w2 = -1; \
w2 s%%= w1; \
w0 = w2; \
exit; \
" ::: __clobber_all);
}
SEC("socket")
__description("SMOD64, zero divisor")
__success __success_unpriv __retval(-1)
__naked void smod64_zero_divisor(void)
{
asm volatile (" \
r0 = 42; \
r1 = 0; \
r2 = -1; \
r2 s%%= r1; \
r0 = r2; \
exit; \
" ::: __clobber_all);
}
#else
SEC("socket")
__description("cpuv4 is not supported by compiler or jit, use a dummy test")
__success
int dummy_test(void)
{
return 0;
}
#endif
char _license[] SEC("license") = "GPL";
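Two semantic points these sdiv/smod tests pin down: signed division truncates toward zero (so the remainder takes the dividend's sign), and a zero divisor is well-defined thanks to the inserted runtime check, yielding 0 for sdiv and the unchanged dividend for smod. A sketch in plain C (bpf_sdiv()/bpf_smod() are illustrative names, not kernel functions):

#include <stdio.h>

static long long bpf_sdiv(long long a, long long b) { return b ? a / b : 0; }
static long long bpf_smod(long long a, long long b) { return b ? a % b : a; }

int main(void)
{
	printf("%lld\n", bpf_sdiv(-41, 2));	/* -20: truncation toward zero */
	printf("%lld\n", bpf_smod(-41, 2));	/* -1: sign follows the dividend */
	printf("%lld\n", bpf_sdiv(-1, 0));	/* 0: guarded zero divisor */
	printf("%lld\n", bpf_smod(-1, 0));	/* -1: dividend passes through */
	return 0;
}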
@@ -176,11 +176,11 @@
 	.retval = 1,
 },
 {
-	"invalid 64-bit BPF_END",
+	"invalid 64-bit BPF_END with BPF_TO_BE",
 	.insns = {
 	BPF_MOV32_IMM(BPF_REG_0, 0),
 	{
-		.code  = BPF_ALU64 | BPF_END | BPF_TO_LE,
+		.code  = BPF_ALU64 | BPF_END | BPF_TO_BE,
 		.dst_reg = BPF_REG_0,
 		.src_reg = 0,
 		.off   = 0,
@@ -188,7 +188,7 @@
 	},
 	BPF_EXIT_INSN(),
 	},
-	.errstr = "unknown opcode d7",
+	.errstr = "unknown opcode df",
 	.result = REJECT,
 },
 {
......