Commit ec7146db authored by David S. Miller

Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next

Daniel Borkmann says:

====================
pull-request: bpf-next 2019-01-29

The following pull-request contains BPF updates for your *net-next* tree.

The main changes are:

1) Teach the verifier dead code removal; this also allows for optimizing /
   removing conditional branches around dead code and shrinking the
   resulting image. Code-store constrained architectures like nfp would
   have a hard time doing this at the JIT level, from Jakub.

2) Add JMP32 instructions to the BPF ISA in order to allow for optimizing
   code generation for 32-bit sub-registers. Evaluation shows that this
   can result in a code reduction of ~5-20% compared to 64-bit-only code
   generation (see the encoding sketch below). Also add implementations
   for most JITs, from Jiong.

3) Add support for __int128 types in BTF, which is also needed for
   vmlinux's BTF conversion to work, from Yonghong.

4) Add a new command to bpftool in order to dump a list of BPF-related
   parameters from the system or for a specific network device, e.g. in
   terms of available prog/map types or helper functions, from Quentin.

5) Add an AF_XDP sock_diag interface for querying sockets from user
   space, which provides information about the RX/TX/fill/completion
   rings, umem, memory usage, etc., from Björn.

6) Add skb context access for skb_shared_info->gso_segs field, from Eric.

7) Add support for testing flow dissector BPF programs by extending the
   existing BPF_PROG_TEST_RUN infrastructure (see the test-run sketch
   below), from Stanislav.

8) Split BPF kselftest's test_verifier into various subgroups of tests
   in order to better deal with merge conflicts in this area, from Jakub.

9) Add support for queue/stack manipulations in bpftool, from Stanislav.

10) Document BTF, from Yonghong.

11) Dump supported ELF section names in libbpf on program load
    failure, from Taeung.

12) Silence a false positive compiler warning in verifier's BTF
    handling, from Peter.

13) Fix help string in bpftool's feature probing, from Prashant.

14) Remove duplicate includes in BPF kselftests, from Yue.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
parents 343917b4 3d2af27a
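
As a minimal sketch of what the new JMP32 class from 2) looks like on the
wire: the snippet below hand-encodes one 32-bit conditional jump using only
the uapi definitions from linux/bpf.h (the layout mirrors the BPF_JMP32_IMM()
helper added to include/linux/filter.h further down in this diff); the
surrounding program and register contents are assumed.

/* Hand-encoded "if ((u32)r1 < 10) goto pc+2" -- only the low 32 bits of R1
 * participate in the comparison, unlike BPF_JMP | BPF_JLT | BPF_K, which
 * compares the full 64-bit register.
 */
#include <linux/bpf.h>	/* struct bpf_insn, BPF_JMP32, BPF_JLT, BPF_K */

static const struct bpf_insn jmp32_jlt_k = {
	.code    = BPF_JMP32 | BPF_JLT | BPF_K,
	.dst_reg = BPF_REG_1,
	.src_reg = 0,
	.off     = 2,	/* jump target, relative to the next insn */
	.imm     = 10,
};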
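
For the flow dissector test-run support in 7), a hedged user-space sketch: it
assumes libbpf's bpf_prog_test_run() helper and follows the selftests'
convention that the dissected keys come back through the data_out buffer as a
struct bpf_flow_keys; prog_fd is assumed to be an already loaded
BPF_PROG_TYPE_FLOW_DISSECTOR program and pkt a raw packet buffer.

#include <stdio.h>
#include <bpf/bpf.h>		/* bpf_prog_test_run() */
#include <linux/bpf.h>		/* struct bpf_flow_keys */

static int run_flow_dissector(int prog_fd, void *pkt, __u32 pkt_len)
{
	struct bpf_flow_keys keys = {};
	__u32 keys_len = sizeof(keys);
	__u32 retval, duration;
	int err;

	/* One BPF_PROG_TEST_RUN invocation over the packet buffer. */
	err = bpf_prog_test_run(prog_fd, 1, pkt, pkt_len,
				&keys, &keys_len, &retval, &duration);
	if (err)
		return err;

	printf("retval %u, nhoff %u, thoff %u\n",
	       retval, keys.nhoff, keys.thoff);
	return 0;
}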
......@@ -15,6 +15,13 @@ that goes into great technical depth about the BPF Architecture.
The primary info for the bpf syscall is available in the `man-pages`_
for `bpf(2)`_.
BPF Type Format (BTF)
=====================
.. toctree::
:maxdepth: 1
btf
Frequently asked questions (FAQ)
......
......@@ -865,7 +865,7 @@ Three LSB bits store instruction class which is one of:
BPF_STX 0x03 BPF_STX 0x03
BPF_ALU 0x04 BPF_ALU 0x04
BPF_JMP 0x05 BPF_JMP 0x05
BPF_RET 0x06 [ class 6 unused, for future if needed ]
BPF_RET 0x06 BPF_JMP32 0x06
BPF_MISC 0x07 BPF_ALU64 0x07
When BPF_CLASS(code) == BPF_ALU or BPF_JMP, 4th bit encodes source operand ...
......@@ -902,9 +902,9 @@ If BPF_CLASS(code) == BPF_ALU or BPF_ALU64 [ in eBPF ], BPF_OP(code) is one of:
BPF_ARSH 0xc0 /* eBPF only: sign extending shift right */
BPF_END 0xd0 /* eBPF only: endianness conversion */
If BPF_CLASS(code) == BPF_JMP, BPF_OP(code) is one of:
If BPF_CLASS(code) == BPF_JMP or BPF_JMP32 [ in eBPF ], BPF_OP(code) is one of:
BPF_JA 0x00
BPF_JA 0x00 /* BPF_JMP only */
BPF_JEQ 0x10
BPF_JGT 0x20
BPF_JGE 0x30
......@@ -912,8 +912,8 @@ If BPF_CLASS(code) == BPF_JMP, BPF_OP(code) is one of:
BPF_JNE 0x50 /* eBPF only: jump != */
BPF_JSGT 0x60 /* eBPF only: signed '>' */
BPF_JSGE 0x70 /* eBPF only: signed '>=' */
BPF_CALL 0x80 /* eBPF only: function call */
BPF_EXIT 0x90 /* eBPF only: function return */
BPF_CALL 0x80 /* eBPF BPF_JMP only: function call */
BPF_EXIT 0x90 /* eBPF BPF_JMP only: function return */
BPF_JLT 0xa0 /* eBPF only: unsigned '<' */
BPF_JLE 0xb0 /* eBPF only: unsigned '<=' */
BPF_JSLT 0xc0 /* eBPF only: signed '<' */
......@@ -936,8 +936,9 @@ Classic BPF wastes the whole BPF_RET class to represent a single 'ret'
operation. Classic BPF_RET | BPF_K means copy imm32 into return register
and perform function exit. eBPF is modeled to match CPU, so BPF_JMP | BPF_EXIT
in eBPF means function exit only. The eBPF program needs to store return
value into register R0 before doing a BPF_EXIT. Class 6 in eBPF is currently
unused and reserved for future use.
value into register R0 before doing a BPF_EXIT. Class 6 in eBPF is used as
BPF_JMP32 to mean exactly the same operations as BPF_JMP, but with 32-bit wide
operands for the comparisons instead.
For load and store instructions the 8-bit 'code' field is divided as:
......
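
As a concrete illustration of the BPF_JMP32 semantics described a few lines
above (an editorial example, not part of the documentation change): assume R1
holds the 64-bit value 0x100000002. Then

	BPF_JMP   | BPF_JEQ | BPF_K with imm=2  is not taken (0x100000002 != 2),
	BPF_JMP32 | BPF_JEQ | BPF_K with imm=2  is taken, since only the low
	                                        32 bits (== 2) are compared.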
......@@ -1083,12 +1083,17 @@ static inline void emit_ldx_r(const s8 dst[], const s8 src,
/* Arithmatic Operation */
static inline void emit_ar_r(const u8 rd, const u8 rt, const u8 rm,
const u8 rn, struct jit_ctx *ctx, u8 op) {
const u8 rn, struct jit_ctx *ctx, u8 op,
bool is_jmp64) {
switch (op) {
case BPF_JSET:
emit(ARM_AND_R(ARM_IP, rt, rn), ctx);
emit(ARM_AND_R(ARM_LR, rd, rm), ctx);
emit(ARM_ORRS_R(ARM_IP, ARM_LR, ARM_IP), ctx);
if (is_jmp64) {
emit(ARM_AND_R(ARM_IP, rt, rn), ctx);
emit(ARM_AND_R(ARM_LR, rd, rm), ctx);
emit(ARM_ORRS_R(ARM_IP, ARM_LR, ARM_IP), ctx);
} else {
emit(ARM_ANDS_R(ARM_IP, rt, rn), ctx);
}
break;
case BPF_JEQ:
case BPF_JNE:
......@@ -1096,18 +1101,25 @@ static inline void emit_ar_r(const u8 rd, const u8 rt, const u8 rm,
case BPF_JGE:
case BPF_JLE:
case BPF_JLT:
emit(ARM_CMP_R(rd, rm), ctx);
_emit(ARM_COND_EQ, ARM_CMP_R(rt, rn), ctx);
if (is_jmp64) {
emit(ARM_CMP_R(rd, rm), ctx);
/* Only compare low halve if high halve are equal. */
_emit(ARM_COND_EQ, ARM_CMP_R(rt, rn), ctx);
} else {
emit(ARM_CMP_R(rt, rn), ctx);
}
break;
case BPF_JSLE:
case BPF_JSGT:
emit(ARM_CMP_R(rn, rt), ctx);
emit(ARM_SBCS_R(ARM_IP, rm, rd), ctx);
if (is_jmp64)
emit(ARM_SBCS_R(ARM_IP, rm, rd), ctx);
break;
case BPF_JSLT:
case BPF_JSGE:
emit(ARM_CMP_R(rt, rn), ctx);
emit(ARM_SBCS_R(ARM_IP, rd, rm), ctx);
if (is_jmp64)
emit(ARM_SBCS_R(ARM_IP, rd, rm), ctx);
break;
}
}
......@@ -1615,6 +1627,17 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
case BPF_JMP | BPF_JLT | BPF_X:
case BPF_JMP | BPF_JSLT | BPF_X:
case BPF_JMP | BPF_JSLE | BPF_X:
case BPF_JMP32 | BPF_JEQ | BPF_X:
case BPF_JMP32 | BPF_JGT | BPF_X:
case BPF_JMP32 | BPF_JGE | BPF_X:
case BPF_JMP32 | BPF_JNE | BPF_X:
case BPF_JMP32 | BPF_JSGT | BPF_X:
case BPF_JMP32 | BPF_JSGE | BPF_X:
case BPF_JMP32 | BPF_JSET | BPF_X:
case BPF_JMP32 | BPF_JLE | BPF_X:
case BPF_JMP32 | BPF_JLT | BPF_X:
case BPF_JMP32 | BPF_JSLT | BPF_X:
case BPF_JMP32 | BPF_JSLE | BPF_X:
/* Setup source registers */
rm = arm_bpf_get_reg32(src_hi, tmp2[0], ctx);
rn = arm_bpf_get_reg32(src_lo, tmp2[1], ctx);
......@@ -1641,6 +1664,17 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
case BPF_JMP | BPF_JLE | BPF_K:
case BPF_JMP | BPF_JSLT | BPF_K:
case BPF_JMP | BPF_JSLE | BPF_K:
case BPF_JMP32 | BPF_JEQ | BPF_K:
case BPF_JMP32 | BPF_JGT | BPF_K:
case BPF_JMP32 | BPF_JGE | BPF_K:
case BPF_JMP32 | BPF_JNE | BPF_K:
case BPF_JMP32 | BPF_JSGT | BPF_K:
case BPF_JMP32 | BPF_JSGE | BPF_K:
case BPF_JMP32 | BPF_JSET | BPF_K:
case BPF_JMP32 | BPF_JLT | BPF_K:
case BPF_JMP32 | BPF_JLE | BPF_K:
case BPF_JMP32 | BPF_JSLT | BPF_K:
case BPF_JMP32 | BPF_JSLE | BPF_K:
if (off == 0)
break;
rm = tmp2[0];
......@@ -1652,7 +1686,8 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
rd = arm_bpf_get_reg64(dst, tmp, ctx);
/* Check for the condition */
emit_ar_r(rd[0], rd[1], rm, rn, ctx, BPF_OP(code));
emit_ar_r(rd[0], rd[1], rm, rn, ctx, BPF_OP(code),
BPF_CLASS(code) == BPF_JMP);
/* Setup JUMP instruction */
jmp_offset = bpf2a32_offset(i+off, i, ctx);
......
......@@ -62,6 +62,7 @@
#define ARM_INST_ADDS_I 0x02900000
#define ARM_INST_AND_R 0x00000000
#define ARM_INST_ANDS_R 0x00100000
#define ARM_INST_AND_I 0x02000000
#define ARM_INST_BIC_R 0x01c00000
......@@ -172,6 +173,7 @@
#define ARM_ADC_I(rd, rn, imm) _AL3_I(ARM_INST_ADC, rd, rn, imm)
#define ARM_AND_R(rd, rn, rm) _AL3_R(ARM_INST_AND, rd, rn, rm)
#define ARM_ANDS_R(rd, rn, rm) _AL3_R(ARM_INST_ANDS, rd, rn, rm)
#define ARM_AND_I(rd, rn, imm) _AL3_I(ARM_INST_AND, rd, rn, imm)
#define ARM_BIC_R(rd, rn, rm) _AL3_R(ARM_INST_BIC, rd, rn, rm)
......
......@@ -362,7 +362,8 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
const s16 off = insn->off;
const s32 imm = insn->imm;
const int i = insn - ctx->prog->insnsi;
const bool is64 = BPF_CLASS(code) == BPF_ALU64;
const bool is64 = BPF_CLASS(code) == BPF_ALU64 ||
BPF_CLASS(code) == BPF_JMP;
const bool isdw = BPF_SIZE(code) == BPF_DW;
u8 jmp_cond;
s32 jmp_offset;
......@@ -559,7 +560,17 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
case BPF_JMP | BPF_JSLT | BPF_X:
case BPF_JMP | BPF_JSGE | BPF_X:
case BPF_JMP | BPF_JSLE | BPF_X:
emit(A64_CMP(1, dst, src), ctx);
case BPF_JMP32 | BPF_JEQ | BPF_X:
case BPF_JMP32 | BPF_JGT | BPF_X:
case BPF_JMP32 | BPF_JLT | BPF_X:
case BPF_JMP32 | BPF_JGE | BPF_X:
case BPF_JMP32 | BPF_JLE | BPF_X:
case BPF_JMP32 | BPF_JNE | BPF_X:
case BPF_JMP32 | BPF_JSGT | BPF_X:
case BPF_JMP32 | BPF_JSLT | BPF_X:
case BPF_JMP32 | BPF_JSGE | BPF_X:
case BPF_JMP32 | BPF_JSLE | BPF_X:
emit(A64_CMP(is64, dst, src), ctx);
emit_cond_jmp:
jmp_offset = bpf2a64_offset(i + off, i, ctx);
check_imm19(jmp_offset);
......@@ -601,7 +612,8 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
emit(A64_B_(jmp_cond, jmp_offset), ctx);
break;
case BPF_JMP | BPF_JSET | BPF_X:
emit(A64_TST(1, dst, src), ctx);
case BPF_JMP32 | BPF_JSET | BPF_X:
emit(A64_TST(is64, dst, src), ctx);
goto emit_cond_jmp;
/* IF (dst COND imm) JUMP off */
case BPF_JMP | BPF_JEQ | BPF_K:
......@@ -614,12 +626,23 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
case BPF_JMP | BPF_JSLT | BPF_K:
case BPF_JMP | BPF_JSGE | BPF_K:
case BPF_JMP | BPF_JSLE | BPF_K:
emit_a64_mov_i(1, tmp, imm, ctx);
emit(A64_CMP(1, dst, tmp), ctx);
case BPF_JMP32 | BPF_JEQ | BPF_K:
case BPF_JMP32 | BPF_JGT | BPF_K:
case BPF_JMP32 | BPF_JLT | BPF_K:
case BPF_JMP32 | BPF_JGE | BPF_K:
case BPF_JMP32 | BPF_JLE | BPF_K:
case BPF_JMP32 | BPF_JNE | BPF_K:
case BPF_JMP32 | BPF_JSGT | BPF_K:
case BPF_JMP32 | BPF_JSLT | BPF_K:
case BPF_JMP32 | BPF_JSGE | BPF_K:
case BPF_JMP32 | BPF_JSLE | BPF_K:
emit_a64_mov_i(is64, tmp, imm, ctx);
emit(A64_CMP(is64, dst, tmp), ctx);
goto emit_cond_jmp;
case BPF_JMP | BPF_JSET | BPF_K:
emit_a64_mov_i(1, tmp, imm, ctx);
emit(A64_TST(1, dst, tmp), ctx);
case BPF_JMP32 | BPF_JSET | BPF_K:
emit_a64_mov_i(is64, tmp, imm, ctx);
emit(A64_TST(is64, dst, tmp), ctx);
goto emit_cond_jmp;
/* function call */
case BPF_JMP | BPF_CALL:
......
......@@ -337,6 +337,7 @@
#define PPC_INST_DIVWU 0x7c000396
#define PPC_INST_DIVD 0x7c0003d2
#define PPC_INST_RLWINM 0x54000000
#define PPC_INST_RLWINM_DOT 0x54000001
#define PPC_INST_RLWIMI 0x50000000
#define PPC_INST_RLDICL 0x78000000
#define PPC_INST_RLDICR 0x78000004
......
......@@ -165,6 +165,10 @@
#define PPC_RLWINM(d, a, i, mb, me) EMIT(PPC_INST_RLWINM | ___PPC_RA(d) | \
___PPC_RS(a) | __PPC_SH(i) | \
__PPC_MB(mb) | __PPC_ME(me))
#define PPC_RLWINM_DOT(d, a, i, mb, me) EMIT(PPC_INST_RLWINM_DOT | \
___PPC_RA(d) | ___PPC_RS(a) | \
__PPC_SH(i) | __PPC_MB(mb) | \
__PPC_ME(me))
#define PPC_RLWIMI(d, a, i, mb, me) EMIT(PPC_INST_RLWIMI | ___PPC_RA(d) | \
___PPC_RS(a) | __PPC_SH(i) | \
__PPC_MB(mb) | __PPC_ME(me))
......
......@@ -768,36 +768,58 @@ static int bpf_jit_build_body(struct bpf_prog *fp, u32 *image,
case BPF_JMP | BPF_JGT | BPF_X:
case BPF_JMP | BPF_JSGT | BPF_K:
case BPF_JMP | BPF_JSGT | BPF_X:
case BPF_JMP32 | BPF_JGT | BPF_K:
case BPF_JMP32 | BPF_JGT | BPF_X:
case BPF_JMP32 | BPF_JSGT | BPF_K:
case BPF_JMP32 | BPF_JSGT | BPF_X:
true_cond = COND_GT;
goto cond_branch;
case BPF_JMP | BPF_JLT | BPF_K:
case BPF_JMP | BPF_JLT | BPF_X:
case BPF_JMP | BPF_JSLT | BPF_K:
case BPF_JMP | BPF_JSLT | BPF_X:
case BPF_JMP32 | BPF_JLT | BPF_K:
case BPF_JMP32 | BPF_JLT | BPF_X:
case BPF_JMP32 | BPF_JSLT | BPF_K:
case BPF_JMP32 | BPF_JSLT | BPF_X:
true_cond = COND_LT;
goto cond_branch;
case BPF_JMP | BPF_JGE | BPF_K:
case BPF_JMP | BPF_JGE | BPF_X:
case BPF_JMP | BPF_JSGE | BPF_K:
case BPF_JMP | BPF_JSGE | BPF_X:
case BPF_JMP32 | BPF_JGE | BPF_K:
case BPF_JMP32 | BPF_JGE | BPF_X:
case BPF_JMP32 | BPF_JSGE | BPF_K:
case BPF_JMP32 | BPF_JSGE | BPF_X:
true_cond = COND_GE;
goto cond_branch;
case BPF_JMP | BPF_JLE | BPF_K:
case BPF_JMP | BPF_JLE | BPF_X:
case BPF_JMP | BPF_JSLE | BPF_K:
case BPF_JMP | BPF_JSLE | BPF_X:
case BPF_JMP32 | BPF_JLE | BPF_K:
case BPF_JMP32 | BPF_JLE | BPF_X:
case BPF_JMP32 | BPF_JSLE | BPF_K:
case BPF_JMP32 | BPF_JSLE | BPF_X:
true_cond = COND_LE;
goto cond_branch;
case BPF_JMP | BPF_JEQ | BPF_K:
case BPF_JMP | BPF_JEQ | BPF_X:
case BPF_JMP32 | BPF_JEQ | BPF_K:
case BPF_JMP32 | BPF_JEQ | BPF_X:
true_cond = COND_EQ;
goto cond_branch;
case BPF_JMP | BPF_JNE | BPF_K:
case BPF_JMP | BPF_JNE | BPF_X:
case BPF_JMP32 | BPF_JNE | BPF_K:
case BPF_JMP32 | BPF_JNE | BPF_X:
true_cond = COND_NE;
goto cond_branch;
case BPF_JMP | BPF_JSET | BPF_K:
case BPF_JMP | BPF_JSET | BPF_X:
case BPF_JMP32 | BPF_JSET | BPF_K:
case BPF_JMP32 | BPF_JSET | BPF_X:
true_cond = COND_NE;
/* Fall through */
......@@ -809,18 +831,44 @@ static int bpf_jit_build_body(struct bpf_prog *fp, u32 *image,
case BPF_JMP | BPF_JLE | BPF_X:
case BPF_JMP | BPF_JEQ | BPF_X:
case BPF_JMP | BPF_JNE | BPF_X:
case BPF_JMP32 | BPF_JGT | BPF_X:
case BPF_JMP32 | BPF_JLT | BPF_X:
case BPF_JMP32 | BPF_JGE | BPF_X:
case BPF_JMP32 | BPF_JLE | BPF_X:
case BPF_JMP32 | BPF_JEQ | BPF_X:
case BPF_JMP32 | BPF_JNE | BPF_X:
/* unsigned comparison */
PPC_CMPLD(dst_reg, src_reg);
if (BPF_CLASS(code) == BPF_JMP32)
PPC_CMPLW(dst_reg, src_reg);
else
PPC_CMPLD(dst_reg, src_reg);
break;
case BPF_JMP | BPF_JSGT | BPF_X:
case BPF_JMP | BPF_JSLT | BPF_X:
case BPF_JMP | BPF_JSGE | BPF_X:
case BPF_JMP | BPF_JSLE | BPF_X:
case BPF_JMP32 | BPF_JSGT | BPF_X:
case BPF_JMP32 | BPF_JSLT | BPF_X:
case BPF_JMP32 | BPF_JSGE | BPF_X:
case BPF_JMP32 | BPF_JSLE | BPF_X:
/* signed comparison */
PPC_CMPD(dst_reg, src_reg);
if (BPF_CLASS(code) == BPF_JMP32)
PPC_CMPW(dst_reg, src_reg);
else
PPC_CMPD(dst_reg, src_reg);
break;
case BPF_JMP | BPF_JSET | BPF_X:
PPC_AND_DOT(b2p[TMP_REG_1], dst_reg, src_reg);
case BPF_JMP32 | BPF_JSET | BPF_X:
if (BPF_CLASS(code) == BPF_JMP) {
PPC_AND_DOT(b2p[TMP_REG_1], dst_reg,
src_reg);
} else {
int tmp_reg = b2p[TMP_REG_1];
PPC_AND(tmp_reg, dst_reg, src_reg);
PPC_RLWINM_DOT(tmp_reg, tmp_reg, 0, 0,
31);
}
break;
case BPF_JMP | BPF_JNE | BPF_K:
case BPF_JMP | BPF_JEQ | BPF_K:
......@@ -828,43 +876,87 @@ static int bpf_jit_build_body(struct bpf_prog *fp, u32 *image,
case BPF_JMP | BPF_JLT | BPF_K:
case BPF_JMP | BPF_JGE | BPF_K:
case BPF_JMP | BPF_JLE | BPF_K:
case BPF_JMP32 | BPF_JNE | BPF_K:
case BPF_JMP32 | BPF_JEQ | BPF_K:
case BPF_JMP32 | BPF_JGT | BPF_K:
case BPF_JMP32 | BPF_JLT | BPF_K:
case BPF_JMP32 | BPF_JGE | BPF_K:
case BPF_JMP32 | BPF_JLE | BPF_K:
{
bool is_jmp32 = BPF_CLASS(code) == BPF_JMP32;
/*
* Need sign-extended load, so only positive
* values can be used as imm in cmpldi
*/
if (imm >= 0 && imm < 32768)
PPC_CMPLDI(dst_reg, imm);
else {
if (imm >= 0 && imm < 32768) {
if (is_jmp32)
PPC_CMPLWI(dst_reg, imm);
else
PPC_CMPLDI(dst_reg, imm);
} else {
/* sign-extending load */
PPC_LI32(b2p[TMP_REG_1], imm);
/* ... but unsigned comparison */
PPC_CMPLD(dst_reg, b2p[TMP_REG_1]);
if (is_jmp32)
PPC_CMPLW(dst_reg,
b2p[TMP_REG_1]);
else
PPC_CMPLD(dst_reg,
b2p[TMP_REG_1]);
}
break;
}
case BPF_JMP | BPF_JSGT | BPF_K:
case BPF_JMP | BPF_JSLT | BPF_K:
case BPF_JMP | BPF_JSGE | BPF_K:
case BPF_JMP | BPF_JSLE | BPF_K:
case BPF_JMP32 | BPF_JSGT | BPF_K:
case BPF_JMP32 | BPF_JSLT | BPF_K:
case BPF_JMP32 | BPF_JSGE | BPF_K:
case BPF_JMP32 | BPF_JSLE | BPF_K:
{
bool is_jmp32 = BPF_CLASS(code) == BPF_JMP32;
/*
* signed comparison, so any 16-bit value
* can be used in cmpdi
*/
if (imm >= -32768 && imm < 32768)
PPC_CMPDI(dst_reg, imm);
else {
if (imm >= -32768 && imm < 32768) {
if (is_jmp32)
PPC_CMPWI(dst_reg, imm);
else
PPC_CMPDI(dst_reg, imm);
} else {
PPC_LI32(b2p[TMP_REG_1], imm);
PPC_CMPD(dst_reg, b2p[TMP_REG_1]);
if (is_jmp32)
PPC_CMPW(dst_reg,
b2p[TMP_REG_1]);
else
PPC_CMPD(dst_reg,
b2p[TMP_REG_1]);
}
break;
}
case BPF_JMP | BPF_JSET | BPF_K:
case BPF_JMP32 | BPF_JSET | BPF_K:
/* andi does not sign-extend the immediate */
if (imm >= 0 && imm < 32768)
/* PPC_ANDI is _only/always_ dot-form */
PPC_ANDI(b2p[TMP_REG_1], dst_reg, imm);
else {
PPC_LI32(b2p[TMP_REG_1], imm);
PPC_AND_DOT(b2p[TMP_REG_1], dst_reg,
b2p[TMP_REG_1]);
int tmp_reg = b2p[TMP_REG_1];
PPC_LI32(tmp_reg, imm);
if (BPF_CLASS(code) == BPF_JMP) {
PPC_AND_DOT(tmp_reg, dst_reg,
tmp_reg);
} else {
PPC_AND(tmp_reg, dst_reg,
tmp_reg);
PPC_RLWINM_DOT(tmp_reg, tmp_reg,
0, 0, 31);
}
}
break;
}
......
......@@ -1110,103 +1110,141 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp, int i
mask = 0xf000; /* j */
goto branch_oc;
case BPF_JMP | BPF_JSGT | BPF_K: /* ((s64) dst > (s64) imm) */
case BPF_JMP32 | BPF_JSGT | BPF_K: /* ((s32) dst > (s32) imm) */
mask = 0x2000; /* jh */
goto branch_ks;
case BPF_JMP | BPF_JSLT | BPF_K: /* ((s64) dst < (s64) imm) */
case BPF_JMP32 | BPF_JSLT | BPF_K: /* ((s32) dst < (s32) imm) */
mask = 0x4000; /* jl */
goto branch_ks;
case BPF_JMP | BPF_JSGE | BPF_K: /* ((s64) dst >= (s64) imm) */
case BPF_JMP32 | BPF_JSGE | BPF_K: /* ((s32) dst >= (s32) imm) */
mask = 0xa000; /* jhe */
goto branch_ks;
case BPF_JMP | BPF_JSLE | BPF_K: /* ((s64) dst <= (s64) imm) */
case BPF_JMP32 | BPF_JSLE | BPF_K: /* ((s32) dst <= (s32) imm) */
mask = 0xc000; /* jle */
goto branch_ks;
case BPF_JMP | BPF_JGT | BPF_K: /* (dst_reg > imm) */
case BPF_JMP32 | BPF_JGT | BPF_K: /* ((u32) dst_reg > (u32) imm) */
mask = 0x2000; /* jh */
goto branch_ku;
case BPF_JMP | BPF_JLT | BPF_K: /* (dst_reg < imm) */
case BPF_JMP32 | BPF_JLT | BPF_K: /* ((u32) dst_reg < (u32) imm) */
mask = 0x4000; /* jl */
goto branch_ku;
case BPF_JMP | BPF_JGE | BPF_K: /* (dst_reg >= imm) */
case BPF_JMP32 | BPF_JGE | BPF_K: /* ((u32) dst_reg >= (u32) imm) */
mask = 0xa000; /* jhe */
goto branch_ku;
case BPF_JMP | BPF_JLE | BPF_K: /* (dst_reg <= imm) */
case BPF_JMP32 | BPF_JLE | BPF_K: /* ((u32) dst_reg <= (u32) imm) */
mask = 0xc000; /* jle */
goto branch_ku;
case BPF_JMP | BPF_JNE | BPF_K: /* (dst_reg != imm) */
case BPF_JMP32 | BPF_JNE | BPF_K: /* ((u32) dst_reg != (u32) imm) */
mask = 0x7000; /* jne */
goto branch_ku;
case BPF_JMP | BPF_JEQ | BPF_K: /* (dst_reg == imm) */
case BPF_JMP32 | BPF_JEQ | BPF_K: /* ((u32) dst_reg == (u32) imm) */
mask = 0x8000; /* je */
goto branch_ku;
case BPF_JMP | BPF_JSET | BPF_K: /* (dst_reg & imm) */
case BPF_JMP32 | BPF_JSET | BPF_K: /* ((u32) dst_reg & (u32) imm) */
mask = 0x7000; /* jnz */
/* lgfi %w1,imm (load sign extend imm) */
EMIT6_IMM(0xc0010000, REG_W1, imm);
/* ngr %w1,%dst */
EMIT4(0xb9800000, REG_W1, dst_reg);
if (BPF_CLASS(insn->code) == BPF_JMP32) {
/* llilf %w1,imm (load zero extend imm) */
EMIT6_IMM(0xc0010000, REG_W1, imm);
/* nr %w1,%dst */
EMIT2(0x1400, REG_W1, dst_reg);
} else {
/* lgfi %w1,imm (load sign extend imm) */
EMIT6_IMM(0xc0010000, REG_W1, imm);
/* ngr %w1,%dst */
EMIT4(0xb9800000, REG_W1, dst_reg);
}
goto branch_oc;
case BPF_JMP | BPF_JSGT | BPF_X: /* ((s64) dst > (s64) src) */
case BPF_JMP32 | BPF_JSGT | BPF_X: /* ((s32) dst > (s32) src) */
mask = 0x2000; /* jh */
goto branch_xs;
case BPF_JMP | BPF_JSLT | BPF_X: /* ((s64) dst < (s64) src) */
case BPF_JMP32 | BPF_JSLT | BPF_X: /* ((s32) dst < (s32) src) */
mask = 0x4000; /* jl */
goto branch_xs;
case BPF_JMP | BPF_JSGE | BPF_X: /* ((s64) dst >= (s64) src) */
case BPF_JMP32 | BPF_JSGE | BPF_X: /* ((s32) dst >= (s32) src) */
mask = 0xa000; /* jhe */
goto branch_xs;
case BPF_JMP | BPF_JSLE | BPF_X: /* ((s64) dst <= (s64) src) */
case BPF_JMP32 | BPF_JSLE | BPF_X: /* ((s32) dst <= (s32) src) */
mask = 0xc000; /* jle */
goto branch_xs;
case BPF_JMP | BPF_JGT | BPF_X: /* (dst > src) */
case BPF_JMP32 | BPF_JGT | BPF_X: /* ((u32) dst > (u32) src) */
mask = 0x2000; /* jh */
goto branch_xu;
case BPF_JMP | BPF_JLT | BPF_X: /* (dst < src) */
case BPF_JMP32 | BPF_JLT | BPF_X: /* ((u32) dst < (u32) src) */
mask = 0x4000; /* jl */
goto branch_xu;
case BPF_JMP | BPF_JGE | BPF_X: /* (dst >= src) */
case BPF_JMP32 | BPF_JGE | BPF_X: /* ((u32) dst >= (u32) src) */
mask = 0xa000; /* jhe */
goto branch_xu;
case BPF_JMP | BPF_JLE | BPF_X: /* (dst <= src) */
case BPF_JMP32 | BPF_JLE | BPF_X: /* ((u32) dst <= (u32) src) */
mask = 0xc000; /* jle */
goto branch_xu;
case BPF_JMP | BPF_JNE | BPF_X: /* (dst != src) */
case BPF_JMP32 | BPF_JNE | BPF_X: /* ((u32) dst != (u32) src) */
mask = 0x7000; /* jne */
goto branch_xu;
case BPF_JMP | BPF_JEQ | BPF_X: /* (dst == src) */
case BPF_JMP32 | BPF_JEQ | BPF_X: /* ((u32) dst == (u32) src) */
mask = 0x8000; /* je */
goto branch_xu;
case BPF_JMP | BPF_JSET | BPF_X: /* (dst & src) */
case BPF_JMP32 | BPF_JSET | BPF_X: /* ((u32) dst & (u32) src) */
{
bool is_jmp32 = BPF_CLASS(insn->code) == BPF_JMP32;
mask = 0x7000; /* jnz */
/* ngrk %w1,%dst,%src */
EMIT4_RRF(0xb9e40000, REG_W1, dst_reg, src_reg);
/* nrk or ngrk %w1,%dst,%src */
EMIT4_RRF((is_jmp32 ? 0xb9f40000 : 0xb9e40000),
REG_W1, dst_reg, src_reg);
goto branch_oc;
branch_ks:
/* lgfi %w1,imm (load sign extend imm) */
EMIT6_IMM(0xc0010000, REG_W1, imm);
/* cgrj %dst,%w1,mask,off */
EMIT6_PCREL(0xec000000, 0x0064, dst_reg, REG_W1, i, off, mask);
/* crj or cgrj %dst,%w1,mask,off */
EMIT6_PCREL(0xec000000, (is_jmp32 ? 0x0076 : 0x0064),
dst_reg, REG_W1, i, off, mask);
break;
branch_ku:
/* lgfi %w1,imm (load sign extend imm) */
EMIT6_IMM(0xc0010000, REG_W1, imm);
/* clgrj %dst,%w1,mask,off */
EMIT6_PCREL(0xec000000, 0x0065, dst_reg, REG_W1, i, off, mask);
/* clrj or clgrj %dst,%w1,mask,off */
EMIT6_PCREL(0xec000000, (is_jmp32 ? 0x0077 : 0x0065),
dst_reg, REG_W1, i, off, mask);
break;
branch_xs:
/* cgrj %dst,%src,mask,off */
EMIT6_PCREL(0xec000000, 0x0064, dst_reg, src_reg, i, off, mask);
/* crj or cgrj %dst,%src,mask,off */
EMIT6_PCREL(0xec000000, (is_jmp32 ? 0x0076 : 0x0064),
dst_reg, src_reg, i, off, mask);
break;
branch_xu:
/* clgrj %dst,%src,mask,off */
EMIT6_PCREL(0xec000000, 0x0065, dst_reg, src_reg, i, off, mask);
/* clrj or clgrj %dst,%src,mask,off */
EMIT6_PCREL(0xec000000, (is_jmp32 ? 0x0077 : 0x0065),
dst_reg, src_reg, i, off, mask);
break;
branch_oc:
/* brc mask,jmp_off (branch instruction needs 4 bytes) */
jmp_off = addrs[i + off + 1] - (addrs[i + 1] - 4);
EMIT4_PCREL(0xa7040000 | mask << 8, jmp_off);
break;
}
default: /* too complex, give up */
pr_err("Unknown opcode %02x\n", insn->code);
return -1;
......
......@@ -881,20 +881,41 @@ xadd: if (is_imm8(insn->off))
case BPF_JMP | BPF_JSLT | BPF_X:
case BPF_JMP | BPF_JSGE | BPF_X:
case BPF_JMP | BPF_JSLE | BPF_X:
case BPF_JMP32 | BPF_JEQ | BPF_X:
case BPF_JMP32 | BPF_JNE | BPF_X:
case BPF_JMP32 | BPF_JGT | BPF_X:
case BPF_JMP32 | BPF_JLT | BPF_X:
case BPF_JMP32 | BPF_JGE | BPF_X:
case BPF_JMP32 | BPF_JLE | BPF_X:
case BPF_JMP32 | BPF_JSGT | BPF_X:
case BPF_JMP32 | BPF_JSLT | BPF_X:
case BPF_JMP32 | BPF_JSGE | BPF_X:
case BPF_JMP32 | BPF_JSLE | BPF_X:
/* cmp dst_reg, src_reg */
EMIT3(add_2mod(0x48, dst_reg, src_reg), 0x39,
add_2reg(0xC0, dst_reg, src_reg));
if (BPF_CLASS(insn->code) == BPF_JMP)
EMIT1(add_2mod(0x48, dst_reg, src_reg));
else if (is_ereg(dst_reg) || is_ereg(src_reg))
EMIT1(add_2mod(0x40, dst_reg, src_reg));
EMIT2(0x39, add_2reg(0xC0, dst_reg, src_reg));
goto emit_cond_jmp;
case BPF_JMP | BPF_JSET | BPF_X:
case BPF_JMP32 | BPF_JSET | BPF_X:
/* test dst_reg, src_reg */
EMIT3(add_2mod(0x48, dst_reg, src_reg), 0x85,
add_2reg(0xC0, dst_reg, src_reg));
if (BPF_CLASS(insn->code) == BPF_JMP)
EMIT1(add_2mod(0x48, dst_reg, src_reg));
else if (is_ereg(dst_reg) || is_ereg(src_reg))
EMIT1(add_2mod(0x40, dst_reg, src_reg));
EMIT2(0x85, add_2reg(0xC0, dst_reg, src_reg));
goto emit_cond_jmp;
case BPF_JMP | BPF_JSET | BPF_K:
case BPF_JMP32 | BPF_JSET | BPF_K:
/* test dst_reg, imm32 */
EMIT1(add_1mod(0x48, dst_reg));
if (BPF_CLASS(insn->code) == BPF_JMP)
EMIT1(add_1mod(0x48, dst_reg));
else if (is_ereg(dst_reg))
EMIT1(add_1mod(0x40, dst_reg));
EMIT2_off32(0xF7, add_1reg(0xC0, dst_reg), imm32);
goto emit_cond_jmp;
......@@ -908,8 +929,21 @@ xadd: if (is_imm8(insn->off))
case BPF_JMP | BPF_JSLT | BPF_K:
case BPF_JMP | BPF_JSGE | BPF_K:
case BPF_JMP | BPF_JSLE | BPF_K:
case BPF_JMP32 | BPF_JEQ | BPF_K:
case BPF_JMP32 | BPF_JNE | BPF_K:
case BPF_JMP32 | BPF_JGT | BPF_K:
case BPF_JMP32 | BPF_JLT | BPF_K:
case BPF_JMP32 | BPF_JGE | BPF_K:
case BPF_JMP32 | BPF_JLE | BPF_K:
case BPF_JMP32 | BPF_JSGT | BPF_K:
case BPF_JMP32 | BPF_JSLT | BPF_K:
case BPF_JMP32 | BPF_JSGE | BPF_K:
case BPF_JMP32 | BPF_JSLE | BPF_K:
/* cmp dst_reg, imm8/32 */
EMIT1(add_1mod(0x48, dst_reg));
if (BPF_CLASS(insn->code) == BPF_JMP)
EMIT1(add_1mod(0x48, dst_reg));
else if (is_ereg(dst_reg))
EMIT1(add_1mod(0x40, dst_reg));
if (is_imm8(imm32))
EMIT3(0x83, add_1reg(0xF8, dst_reg), imm32);
......
......@@ -2072,7 +2072,18 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
case BPF_JMP | BPF_JSGT | BPF_X:
case BPF_JMP | BPF_JSLE | BPF_X:
case BPF_JMP | BPF_JSLT | BPF_X:
case BPF_JMP | BPF_JSGE | BPF_X: {
case BPF_JMP | BPF_JSGE | BPF_X:
case BPF_JMP32 | BPF_JEQ | BPF_X:
case BPF_JMP32 | BPF_JNE | BPF_X:
case BPF_JMP32 | BPF_JGT | BPF_X:
case BPF_JMP32 | BPF_JLT | BPF_X:
case BPF_JMP32 | BPF_JGE | BPF_X:
case BPF_JMP32 | BPF_JLE | BPF_X:
case BPF_JMP32 | BPF_JSGT | BPF_X:
case BPF_JMP32 | BPF_JSLE | BPF_X:
case BPF_JMP32 | BPF_JSLT | BPF_X:
case BPF_JMP32 | BPF_JSGE | BPF_X: {
bool is_jmp64 = BPF_CLASS(insn->code) == BPF_JMP;
u8 dreg_lo = dstk ? IA32_EAX : dst_lo;
u8 dreg_hi = dstk ? IA32_EDX : dst_hi;
u8 sreg_lo = sstk ? IA32_ECX : src_lo;
......@@ -2081,25 +2092,35 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
if (dstk) {
EMIT3(0x8B, add_2reg(0x40, IA32_EBP, IA32_EAX),
STACK_VAR(dst_lo));
EMIT3(0x8B, add_2reg(0x40, IA32_EBP, IA32_EDX),
STACK_VAR(dst_hi));
if (is_jmp64)
EMIT3(0x8B,
add_2reg(0x40, IA32_EBP,
IA32_EDX),
STACK_VAR(dst_hi));
}
if (sstk) {
EMIT3(0x8B, add_2reg(0x40, IA32_EBP, IA32_ECX),
STACK_VAR(src_lo));
EMIT3(0x8B, add_2reg(0x40, IA32_EBP, IA32_EBX),
STACK_VAR(src_hi));
if (is_jmp64)
EMIT3(0x8B,
add_2reg(0x40, IA32_EBP,
IA32_EBX),
STACK_VAR(src_hi));
}
/* cmp dreg_hi,sreg_hi */
EMIT2(0x39, add_2reg(0xC0, dreg_hi, sreg_hi));
EMIT2(IA32_JNE, 2);
if (is_jmp64) {
/* cmp dreg_hi,sreg_hi */
EMIT2(0x39, add_2reg(0xC0, dreg_hi, sreg_hi));
EMIT2(IA32_JNE, 2);
}
/* cmp dreg_lo,sreg_lo */
EMIT2(0x39, add_2reg(0xC0, dreg_lo, sreg_lo));
goto emit_cond_jmp;
}
case BPF_JMP | BPF_JSET | BPF_X: {
case BPF_JMP | BPF_JSET | BPF_X:
case BPF_JMP32 | BPF_JSET | BPF_X: {
bool is_jmp64 = BPF_CLASS(insn->code) == BPF_JMP;
u8 dreg_lo = dstk ? IA32_EAX : dst_lo;
u8 dreg_hi = dstk ? IA32_EDX : dst_hi;
u8 sreg_lo = sstk ? IA32_ECX : src_lo;
......@@ -2108,15 +2129,21 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
if (dstk) {
EMIT3(0x8B, add_2reg(0x40, IA32_EBP, IA32_EAX),
STACK_VAR(dst_lo));
EMIT3(0x8B, add_2reg(0x40, IA32_EBP, IA32_EDX),
STACK_VAR(dst_hi));
if (is_jmp64)
EMIT3(0x8B,
add_2reg(0x40, IA32_EBP,
IA32_EDX),
STACK_VAR(dst_hi));
}
if (sstk) {
EMIT3(0x8B, add_2reg(0x40, IA32_EBP, IA32_ECX),
STACK_VAR(src_lo));
EMIT3(0x8B, add_2reg(0x40, IA32_EBP, IA32_EBX),
STACK_VAR(src_hi));
if (is_jmp64)
EMIT3(0x8B,
add_2reg(0x40, IA32_EBP,
IA32_EBX),
STACK_VAR(src_hi));
}
/* and dreg_lo,sreg_lo */
EMIT2(0x23, add_2reg(0xC0, sreg_lo, dreg_lo));
......@@ -2126,32 +2153,39 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
EMIT2(0x09, add_2reg(0xC0, dreg_lo, dreg_hi));
goto emit_cond_jmp;
}
case BPF_JMP | BPF_JSET | BPF_K: {
u32 hi;
case BPF_JMP | BPF_JSET | BPF_K:
case BPF_JMP32 | BPF_JSET | BPF_K: {
bool is_jmp64 = BPF_CLASS(insn->code) == BPF_JMP;
u8 dreg_lo = dstk ? IA32_EAX : dst_lo;
u8 dreg_hi = dstk ? IA32_EDX : dst_hi;
u8 sreg_lo = IA32_ECX;
u8 sreg_hi = IA32_EBX;
u32 hi;
if (dstk) {
EMIT3(0x8B, add_2reg(0x40, IA32_EBP, IA32_EAX),
STACK_VAR(dst_lo));
EMIT3(0x8B, add_2reg(0x40, IA32_EBP, IA32_EDX),
STACK_VAR(dst_hi));
if (is_jmp64)
EMIT3(0x8B,
add_2reg(0x40, IA32_EBP,
IA32_EDX),
STACK_VAR(dst_hi));
}
hi = imm32 & (1<<31) ? (u32)~0 : 0;
/* mov ecx,imm32 */
EMIT2_off32(0xC7, add_1reg(0xC0, IA32_ECX), imm32);
/* mov ebx,imm32 */
EMIT2_off32(0xC7, add_1reg(0xC0, IA32_EBX), hi);
EMIT2_off32(0xC7, add_1reg(0xC0, sreg_lo), imm32);
/* and dreg_lo,sreg_lo */
EMIT2(0x23, add_2reg(0xC0, sreg_lo, dreg_lo));
/* and dreg_hi,sreg_hi */
EMIT2(0x23, add_2reg(0xC0, sreg_hi, dreg_hi));
/* or dreg_lo,dreg_hi */
EMIT2(0x09, add_2reg(0xC0, dreg_lo, dreg_hi));
if (is_jmp64) {
hi = imm32 & (1 << 31) ? (u32)~0 : 0;
/* mov ebx,imm32 */
EMIT2_off32(0xC7, add_1reg(0xC0, sreg_hi), hi);
/* and dreg_hi,sreg_hi */
EMIT2(0x23, add_2reg(0xC0, sreg_hi, dreg_hi));
/* or dreg_lo,dreg_hi */
EMIT2(0x09, add_2reg(0xC0, dreg_lo, dreg_hi));
}
goto emit_cond_jmp;
}
case BPF_JMP | BPF_JEQ | BPF_K:
......@@ -2163,29 +2197,44 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
case BPF_JMP | BPF_JSGT | BPF_K:
case BPF_JMP | BPF_JSLE | BPF_K:
case BPF_JMP | BPF_JSLT | BPF_K:
case BPF_JMP | BPF_JSGE | BPF_K: {
u32 hi;
case BPF_JMP | BPF_JSGE | BPF_K:
case BPF_JMP32 | BPF_JEQ | BPF_K:
case BPF_JMP32 | BPF_JNE | BPF_K:
case BPF_JMP32 | BPF_JGT | BPF_K:
case BPF_JMP32 | BPF_JLT | BPF_K:
case BPF_JMP32 | BPF_JGE | BPF_K:
case BPF_JMP32 | BPF_JLE | BPF_K:
case BPF_JMP32 | BPF_JSGT | BPF_K:
case BPF_JMP32 | BPF_JSLE | BPF_K:
case BPF_JMP32 | BPF_JSLT | BPF_K:
case BPF_JMP32 | BPF_JSGE | BPF_K: {
bool is_jmp64 = BPF_CLASS(insn->code) == BPF_JMP;
u8 dreg_lo = dstk ? IA32_EAX : dst_lo;
u8 dreg_hi = dstk ? IA32_EDX : dst_hi;
u8 sreg_lo = IA32_ECX;
u8 sreg_hi = IA32_EBX;
u32 hi;
if (dstk) {
EMIT3(0x8B, add_2reg(0x40, IA32_EBP, IA32_EAX),
STACK_VAR(dst_lo));
EMIT3(0x8B, add_2reg(0x40, IA32_EBP, IA32_EDX),
STACK_VAR(dst_hi));
if (is_jmp64)
EMIT3(0x8B,
add_2reg(0x40, IA32_EBP,
IA32_EDX),
STACK_VAR(dst_hi));
}
hi = imm32 & (1<<31) ? (u32)~0 : 0;
/* mov ecx,imm32 */
EMIT2_off32(0xC7, add_1reg(0xC0, IA32_ECX), imm32);
/* mov ebx,imm32 */
EMIT2_off32(0xC7, add_1reg(0xC0, IA32_EBX), hi);
/* cmp dreg_hi,sreg_hi */
EMIT2(0x39, add_2reg(0xC0, dreg_hi, sreg_hi));
EMIT2(IA32_JNE, 2);
if (is_jmp64) {
hi = imm32 & (1 << 31) ? (u32)~0 : 0;
/* mov ebx,imm32 */
EMIT2_off32(0xC7, add_1reg(0xC0, IA32_EBX), hi);
/* cmp dreg_hi,sreg_hi */
EMIT2(0x39, add_2reg(0xC0, dreg_hi, sreg_hi));
EMIT2(IA32_JNE, 2);
}
/* cmp dreg_lo,sreg_lo */
EMIT2(0x39, add_2reg(0xC0, dreg_lo, sreg_lo));
......
......@@ -243,6 +243,16 @@ struct nfp_bpf_reg_state {
#define FLAG_INSN_IS_JUMP_DST BIT(0)
#define FLAG_INSN_IS_SUBPROG_START BIT(1)
#define FLAG_INSN_PTR_CALLER_STACK_FRAME BIT(2)
/* Instruction is pointless, noop even on its own */
#define FLAG_INSN_SKIP_NOOP BIT(3)
/* Instruction is optimized out based on preceding instructions */
#define FLAG_INSN_SKIP_PREC_DEPENDENT BIT(4)
/* Instruction is optimized by the verifier */
#define FLAG_INSN_SKIP_VERIFIER_OPT BIT(5)
#define FLAG_INSN_SKIP_MASK (FLAG_INSN_SKIP_NOOP | \
FLAG_INSN_SKIP_PREC_DEPENDENT | \
FLAG_INSN_SKIP_VERIFIER_OPT)
/**
* struct nfp_insn_meta - BPF instruction wrapper
......@@ -271,7 +281,6 @@ struct nfp_bpf_reg_state {
* @n: eBPF instruction number
* @flags: eBPF instruction extra optimization flags
* @subprog_idx: index of subprogram to which the instruction belongs
* @skip: skip this instruction (optimized out)
* @double_cb: callback for second part of the instruction
* @l: link on nfp_prog->insns list
*/
......@@ -319,7 +328,6 @@ struct nfp_insn_meta {
unsigned short n;
unsigned short flags;
unsigned short subprog_idx;
bool skip;
instr_cb_t double_cb;
struct list_head l;
......@@ -357,6 +365,21 @@ static inline bool is_mbpf_load(const struct nfp_insn_meta *meta)
return (meta->insn.code & ~BPF_SIZE_MASK) == (BPF_LDX | BPF_MEM);
}
static inline bool is_mbpf_jmp32(const struct nfp_insn_meta *meta)
{
return mbpf_class(meta) == BPF_JMP32;
}
static inline bool is_mbpf_jmp64(const struct nfp_insn_meta *meta)
{
return mbpf_class(meta) == BPF_JMP;
}
static inline bool is_mbpf_jmp(const struct nfp_insn_meta *meta)
{
return is_mbpf_jmp32(meta) || is_mbpf_jmp64(meta);
}
static inline bool is_mbpf_store(const struct nfp_insn_meta *meta)
{
return (meta->insn.code & ~BPF_SIZE_MASK) == (BPF_STX | BPF_MEM);
......@@ -407,6 +430,20 @@ static inline bool is_mbpf_div(const struct nfp_insn_meta *meta)
return is_mbpf_alu(meta) && mbpf_op(meta) == BPF_DIV;
}
static inline bool is_mbpf_cond_jump(const struct nfp_insn_meta *meta)
{
u8 op;
if (is_mbpf_jmp32(meta))
return true;
if (!is_mbpf_jmp64(meta))
return false;
op = mbpf_op(meta);
return op != BPF_JA && op != BPF_EXIT && op != BPF_CALL;
}
static inline bool is_mbpf_helper_call(const struct nfp_insn_meta *meta)
{
struct bpf_insn insn = meta->insn;
......@@ -457,6 +494,7 @@ struct nfp_bpf_subprog_info {
* @subprog_cnt: number of sub-programs, including main function
* @map_records: the map record pointers from bpf->maps_neutral
* @subprog: pointer to an array of objects holding info about sub-programs
* @n_insns: number of instructions on @insns list
* @insns: list of BPF instruction wrappers (struct nfp_insn_meta)
*/
struct nfp_prog {
......@@ -489,6 +527,7 @@ struct nfp_prog {
struct nfp_bpf_neutral_map **map_records;
struct nfp_bpf_subprog_info *subprog;
unsigned int n_insns;
struct list_head insns;
};
......@@ -505,7 +544,7 @@ struct nfp_bpf_vnic {
};
bool nfp_is_subprog_start(struct nfp_insn_meta *meta);
void nfp_bpf_jit_prepare(struct nfp_prog *nfp_prog, unsigned int cnt);
void nfp_bpf_jit_prepare(struct nfp_prog *nfp_prog);
int nfp_bpf_jit(struct nfp_prog *prog);
bool nfp_bpf_supported_opcode(u8 code);
......@@ -513,6 +552,10 @@ int nfp_verify_insn(struct bpf_verifier_env *env, int insn_idx,
int prev_insn_idx);
int nfp_bpf_finalize(struct bpf_verifier_env *env);
int nfp_bpf_opt_replace_insn(struct bpf_verifier_env *env, u32 off,
struct bpf_insn *insn);
int nfp_bpf_opt_remove_insns(struct bpf_verifier_env *env, u32 off, u32 cnt);
extern const struct bpf_prog_offload_ops nfp_bpf_dev_ops;
struct netdev_bpf;
......@@ -526,7 +569,7 @@ int nfp_net_bpf_offload(struct nfp_net *nn, struct bpf_prog *prog,
struct nfp_insn_meta *
nfp_bpf_goto_meta(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
unsigned int insn_idx, unsigned int n_insns);
unsigned int insn_idx);
void *nfp_bpf_relo_for_vnic(struct nfp_prog *nfp_prog, struct nfp_bpf_vnic *bv);
......
......@@ -163,8 +163,9 @@ nfp_prog_prepare(struct nfp_prog *nfp_prog, const struct bpf_insn *prog,
list_add_tail(&meta->l, &nfp_prog->insns);
}
nfp_prog->n_insns = cnt;
nfp_bpf_jit_prepare(nfp_prog, cnt);
nfp_bpf_jit_prepare(nfp_prog);
return 0;
}
......@@ -219,6 +220,10 @@ static int nfp_bpf_translate(struct bpf_prog *prog)
unsigned int max_instr;
int err;
/* We depend on dead code elimination succeeding */
if (prog->aux->offload->opt_failed)
return -EINVAL;
max_instr = nn_readw(nn, NFP_NET_CFG_BPF_MAX_LEN);
nfp_prog->__prog_alloc_len = max_instr * sizeof(u64);
......@@ -591,6 +596,8 @@ int nfp_net_bpf_offload(struct nfp_net *nn, struct bpf_prog *prog,
const struct bpf_prog_offload_ops nfp_bpf_dev_ops = {
.insn_hook = nfp_verify_insn,
.finalize = nfp_bpf_finalize,
.replace_insn = nfp_bpf_opt_replace_insn,
.remove_insns = nfp_bpf_opt_remove_insns,
.prepare = nfp_bpf_verifier_prep,
.translate = nfp_bpf_translate,
.destroy = nfp_bpf_destroy,
......
......@@ -18,15 +18,15 @@
struct nfp_insn_meta *
nfp_bpf_goto_meta(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
unsigned int insn_idx, unsigned int n_insns)
unsigned int insn_idx)
{
unsigned int forward, backward, i;
backward = meta->n - insn_idx;
forward = insn_idx - meta->n;
if (min(forward, backward) > n_insns - insn_idx - 1) {
backward = n_insns - insn_idx - 1;
if (min(forward, backward) > nfp_prog->n_insns - insn_idx - 1) {
backward = nfp_prog->n_insns - insn_idx - 1;
meta = nfp_prog_last_meta(nfp_prog);
}
if (min(forward, backward) > insn_idx && backward > insn_idx) {
......@@ -629,7 +629,7 @@ int nfp_verify_insn(struct bpf_verifier_env *env, int insn_idx,
struct nfp_prog *nfp_prog = env->prog->aux->offload->dev_priv;
struct nfp_insn_meta *meta = nfp_prog->verifier_meta;
meta = nfp_bpf_goto_meta(nfp_prog, meta, insn_idx, env->prog->len);
meta = nfp_bpf_goto_meta(nfp_prog, meta, insn_idx);
nfp_prog->verifier_meta = meta;
if (!nfp_bpf_supported_opcode(meta->insn.code)) {
......@@ -690,8 +690,7 @@ nfp_assign_subprog_idx_and_regs(struct bpf_verifier_env *env,
return 0;
}
static unsigned int
nfp_bpf_get_stack_usage(struct nfp_prog *nfp_prog, unsigned int cnt)
static unsigned int nfp_bpf_get_stack_usage(struct nfp_prog *nfp_prog)
{
struct nfp_insn_meta *meta = nfp_prog_first_meta(nfp_prog);
unsigned int max_depth = 0, depth = 0, frame = 0;
......@@ -726,7 +725,7 @@ nfp_bpf_get_stack_usage(struct nfp_prog *nfp_prog, unsigned int cnt)
/* Find the callee and start processing it. */
meta = nfp_bpf_goto_meta(nfp_prog, meta,
meta->n + 1 + meta->insn.imm, cnt);
meta->n + 1 + meta->insn.imm);
idx = meta->subprog_idx;
frame++;
goto process_subprog;
......@@ -778,8 +777,7 @@ int nfp_bpf_finalize(struct bpf_verifier_env *env)
nn = netdev_priv(env->prog->aux->offload->netdev);
max_stack = nn_readb(nn, NFP_NET_CFG_BPF_STACK_SZ) * 64;
nfp_prog->stack_size = nfp_bpf_get_stack_usage(nfp_prog,
env->prog->len);
nfp_prog->stack_size = nfp_bpf_get_stack_usage(nfp_prog);
if (nfp_prog->stack_size > max_stack) {
pr_vlog(env, "stack too large: program %dB > FW stack %dB\n",
nfp_prog->stack_size, max_stack);
......@@ -788,3 +786,61 @@ int nfp_bpf_finalize(struct bpf_verifier_env *env)
return 0;
}
int nfp_bpf_opt_replace_insn(struct bpf_verifier_env *env, u32 off,
struct bpf_insn *insn)
{
struct nfp_prog *nfp_prog = env->prog->aux->offload->dev_priv;
struct bpf_insn_aux_data *aux_data = env->insn_aux_data;
struct nfp_insn_meta *meta = nfp_prog->verifier_meta;
meta = nfp_bpf_goto_meta(nfp_prog, meta, aux_data[off].orig_idx);
nfp_prog->verifier_meta = meta;
/* conditional jump to jump conversion */
if (is_mbpf_cond_jump(meta) &&
insn->code == (BPF_JMP | BPF_JA | BPF_K)) {
unsigned int tgt_off;
tgt_off = off + insn->off + 1;
if (!insn->off) {
meta->jmp_dst = list_next_entry(meta, l);
meta->jump_neg_op = false;
} else if (meta->jmp_dst->n != aux_data[tgt_off].orig_idx) {
pr_vlog(env, "branch hard wire at %d changes target %d -> %d\n",
off, meta->jmp_dst->n,
aux_data[tgt_off].orig_idx);
return -EINVAL;
}
return 0;
}
pr_vlog(env, "unsupported instruction replacement %hhx -> %hhx\n",
meta->insn.code, insn->code);
return -EINVAL;
}
int nfp_bpf_opt_remove_insns(struct bpf_verifier_env *env, u32 off, u32 cnt)
{
struct nfp_prog *nfp_prog = env->prog->aux->offload->dev_priv;
struct bpf_insn_aux_data *aux_data = env->insn_aux_data;
struct nfp_insn_meta *meta = nfp_prog->verifier_meta;
unsigned int i;
meta = nfp_bpf_goto_meta(nfp_prog, meta, aux_data[off].orig_idx);
for (i = 0; i < cnt; i++) {
if (WARN_ON_ONCE(&meta->l == &nfp_prog->insns))
return -EINVAL;
/* doesn't count if it already has the flag */
if (meta->flags & FLAG_INSN_SKIP_VERIFIER_OPT)
i--;
meta->flags |= FLAG_INSN_SKIP_VERIFIER_OPT;
meta = list_next_entry(meta, l);
}
return 0;
}
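
For context, a hedged sketch of the kind of rewrite the two callbacks above
observe from the verifier's dead code passes (editorial illustration;
instruction indices are shown before any renumbering):

	/* R2 is known to equal 1 when insn 0 executes */
	0: if r2 > 0 goto +2	/* branch proven always taken */
	1: r0 = 1		/* becomes unreachable        */
	2: exit			/* becomes unreachable        */
	3: r0 = 2
	4: exit

	replace_insn(off=0):        insn 0 is hard-wired into an unconditional
	                            jump (BPF_JA), which the "conditional jump
	                            to jump conversion" branch above handles.
	remove_insns(off=1, cnt=2): the unreachable insns 1-2 are deleted and
	                            remaining jump offsets are adjusted by the
	                            core via bpf_remove_insns().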
......@@ -268,9 +268,15 @@ struct bpf_verifier_ops {
};
struct bpf_prog_offload_ops {
/* verifier basic callbacks */
int (*insn_hook)(struct bpf_verifier_env *env,
int insn_idx, int prev_insn_idx);
int (*finalize)(struct bpf_verifier_env *env);
/* verifier optimization callbacks (called after .finalize) */
int (*replace_insn)(struct bpf_verifier_env *env, u32 off,
struct bpf_insn *insn);
int (*remove_insns)(struct bpf_verifier_env *env, u32 off, u32 cnt);
/* program management callbacks */
int (*prepare)(struct bpf_prog *prog);
int (*translate)(struct bpf_prog *prog);
void (*destroy)(struct bpf_prog *prog);
......@@ -283,6 +289,7 @@ struct bpf_prog_offload {
void *dev_priv;
struct list_head offloads;
bool dev_state;
bool opt_failed;
void *jited_image;
u32 jited_len;
};
......@@ -397,6 +404,9 @@ int bpf_prog_test_run_xdp(struct bpf_prog *prog, const union bpf_attr *kattr,
union bpf_attr __user *uattr);
int bpf_prog_test_run_skb(struct bpf_prog *prog, const union bpf_attr *kattr,
union bpf_attr __user *uattr);
int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog,
const union bpf_attr *kattr,
union bpf_attr __user *uattr);
/* an array of programs to be executed under rcu_lock.
*
......
......@@ -187,6 +187,7 @@ struct bpf_insn_aux_data {
int sanitize_stack_off; /* stack slot to be cleared */
bool seen; /* this insn was processed by the verifier */
u8 alu_state; /* used in combination with alu_limit */
unsigned int orig_idx; /* original instruction index */
};
#define MAX_USED_MAPS 64 /* max number of maps accessed by one eBPF program */
......@@ -265,5 +266,10 @@ int bpf_prog_offload_verifier_prep(struct bpf_prog *prog);
int bpf_prog_offload_verify_insn(struct bpf_verifier_env *env,
int insn_idx, int prev_insn_idx);
int bpf_prog_offload_finalize(struct bpf_verifier_env *env);
void
bpf_prog_offload_replace_insn(struct bpf_verifier_env *env, u32 off,
struct bpf_insn *insn);
void
bpf_prog_offload_remove_insns(struct bpf_verifier_env *env, u32 off, u32 cnt);
#endif /* _LINUX_BPF_VERIFIER_H */
......@@ -277,6 +277,26 @@ struct sock_reuseport;
.off = OFF, \
.imm = IMM })
/* Like BPF_JMP_REG, but with 32-bit wide operands for comparison. */
#define BPF_JMP32_REG(OP, DST, SRC, OFF) \
((struct bpf_insn) { \
.code = BPF_JMP32 | BPF_OP(OP) | BPF_X, \
.dst_reg = DST, \
.src_reg = SRC, \
.off = OFF, \
.imm = 0 })
/* Like BPF_JMP_IMM, but with 32-bit wide operands for comparison. */
#define BPF_JMP32_IMM(OP, DST, IMM, OFF) \
((struct bpf_insn) { \
.code = BPF_JMP32 | BPF_OP(OP) | BPF_K, \
.dst_reg = DST, \
.src_reg = 0, \
.off = OFF, \
.imm = IMM })
/* Unconditional jumps, goto pc + off16 */
#define BPF_JMP_A(OFF) \
......@@ -778,6 +798,7 @@ static inline bool bpf_dump_raw_ok(void)
struct bpf_prog *bpf_patch_insn_single(struct bpf_prog *prog, u32 off,
const struct bpf_insn *patch, u32 len);
int bpf_remove_insns(struct bpf_prog *prog, u32 off, u32 cnt);
void bpf_clear_redirect_map(struct bpf_map *map);
......
......@@ -1221,6 +1221,11 @@ static inline int skb_flow_dissector_bpf_prog_detach(const union bpf_attr *attr)
}
#endif
struct bpf_flow_keys;
bool __skb_flow_bpf_dissect(struct bpf_prog *prog,
const struct sk_buff *skb,
struct flow_dissector *flow_dissector,
struct bpf_flow_keys *flow_keys);
bool __skb_flow_dissect(const struct sk_buff *skb,
struct flow_dissector *flow_dissector,
void *target_container,
......
......@@ -31,6 +31,7 @@
#include <net/netns/xfrm.h>
#include <net/netns/mpls.h>
#include <net/netns/can.h>
#include <net/netns/xdp.h>
#include <linux/ns_common.h>
#include <linux/idr.h>
#include <linux/skbuff.h>
......@@ -160,6 +161,9 @@ struct net {
#endif
#if IS_ENABLED(CONFIG_CAN)
struct netns_can can;
#endif
#ifdef CONFIG_XDP_SOCKETS
struct netns_xdp xdp;
#endif
struct sock *diag_nlsk;
atomic_t fnhe_genid;
......
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __NETNS_XDP_H__
#define __NETNS_XDP_H__
#include <linux/rculist.h>
#include <linux/mutex.h>
struct netns_xdp {
struct mutex lock;
struct hlist_head list;
};
#endif /* __NETNS_XDP_H__ */
......@@ -42,6 +42,7 @@ struct xdp_umem {
struct work_struct work;
struct page **pgs;
u32 npgs;
int id;
struct net_device *dev;
struct xdp_umem_fq_reuse *fq_reuse;
u16 queue_id;
......
......@@ -14,6 +14,7 @@
/* Extended instruction set based on top of classic BPF */
/* instruction classes */
#define BPF_JMP32 0x06 /* jmp mode in word width */
#define BPF_ALU64 0x07 /* alu mode in double word width */
/* ld/ldx fields */
......@@ -2540,6 +2541,7 @@ struct __sk_buff {
__bpf_md_ptr(struct bpf_flow_keys *, flow_keys);
__u64 tstamp;
__u32 wire_len;
__u32 gso_segs;
};
struct bpf_tunnel_key {
......
/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
/*
* xdp_diag: interface for query/monitor XDP sockets
* Copyright(c) 2019 Intel Corporation.
*/
#ifndef _LINUX_XDP_DIAG_H
#define _LINUX_XDP_DIAG_H
#include <linux/types.h>
struct xdp_diag_req {
__u8 sdiag_family;
__u8 sdiag_protocol;
__u16 pad;
__u32 xdiag_ino;
__u32 xdiag_show;
__u32 xdiag_cookie[2];
};
struct xdp_diag_msg {
__u8 xdiag_family;
__u8 xdiag_type;
__u16 pad;
__u32 xdiag_ino;
__u32 xdiag_cookie[2];
};
#define XDP_SHOW_INFO (1 << 0) /* Basic information */
#define XDP_SHOW_RING_CFG (1 << 1)
#define XDP_SHOW_UMEM (1 << 2)
#define XDP_SHOW_MEMINFO (1 << 3)
enum {
XDP_DIAG_NONE,
XDP_DIAG_INFO,
XDP_DIAG_UID,
XDP_DIAG_RX_RING,
XDP_DIAG_TX_RING,
XDP_DIAG_UMEM,
XDP_DIAG_UMEM_FILL_RING,
XDP_DIAG_UMEM_COMPLETION_RING,
XDP_DIAG_MEMINFO,
__XDP_DIAG_MAX,
};
#define XDP_DIAG_MAX (__XDP_DIAG_MAX - 1)
struct xdp_diag_info {
__u32 ifindex;
__u32 queue_id;
};
struct xdp_diag_ring {
__u32 entries; /*num descs */
};
#define XDP_DU_F_ZEROCOPY (1 << 0)
struct xdp_diag_umem {
__u64 size;
__u32 id;
__u32 num_pages;
__u32 chunk_size;
__u32 headroom;
__u32 ifindex;
__u32 queue_id;
__u32 flags;
__u32 refs;
};
#endif /* _LINUX_XDP_DIAG_H */
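
Below is a hedged user-space sketch of querying the new AF_XDP sock_diag
interface defined by the header above; the AF_XDP fallback define, the
fixed-size receive buffer, and the omitted attribute parsing are
simplifications for illustration.

#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/sock_diag.h>	/* SOCK_DIAG_BY_FAMILY */
#include <linux/xdp_diag.h>	/* struct xdp_diag_req, XDP_SHOW_* */

#ifndef AF_XDP
#define AF_XDP 44
#endif

int main(void)
{
	struct {
		struct nlmsghdr nlh;
		struct xdp_diag_req req;
	} msg = {
		.nlh = {
			.nlmsg_len   = sizeof(msg),
			.nlmsg_type  = SOCK_DIAG_BY_FAMILY,
			.nlmsg_flags = NLM_F_REQUEST | NLM_F_DUMP,
		},
		.req = {
			.sdiag_family = AF_XDP,
			.xdiag_show   = XDP_SHOW_INFO | XDP_SHOW_RING_CFG |
					XDP_SHOW_UMEM,
		},
	};
	char buf[8192];
	ssize_t len;
	int fd;

	fd = socket(AF_NETLINK, SOCK_DGRAM, NETLINK_SOCK_DIAG);
	if (fd < 0 || send(fd, &msg, sizeof(msg), 0) < 0)
		return 1;

	/* Each reply carries a struct xdp_diag_msg followed by netlink
	 * attributes (XDP_DIAG_INFO, XDP_DIAG_RX_RING, XDP_DIAG_UMEM, ...);
	 * a real tool would iterate them with the NLMSG_ and RTA_ macros.
	 */
	len = recv(fd, buf, sizeof(buf), 0);
	printf("received %zd bytes of XDP sock_diag dump data\n", len);

	close(fd);
	return 0;
}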
......@@ -157,7 +157,7 @@
*
*/
#define BITS_PER_U64 (sizeof(u64) * BITS_PER_BYTE)
#define BITS_PER_U128 (sizeof(u64) * BITS_PER_BYTE * 2)
#define BITS_PER_BYTE_MASK (BITS_PER_BYTE - 1)
#define BITS_PER_BYTE_MASKED(bits) ((bits) & BITS_PER_BYTE_MASK)
#define BITS_ROUNDDOWN_BYTES(bits) ((bits) >> 3)
......@@ -525,7 +525,7 @@ const struct btf_type *btf_type_by_id(const struct btf *btf, u32 type_id)
/*
* Regular int is not a bit field and it must be either
* u8/u16/u32/u64.
* u8/u16/u32/u64 or __int128.
*/
static bool btf_type_int_is_regular(const struct btf_type *t)
{
......@@ -538,7 +538,8 @@ static bool btf_type_int_is_regular(const struct btf_type *t)
if (BITS_PER_BYTE_MASKED(nr_bits) ||
BTF_INT_OFFSET(int_data) ||
(nr_bytes != sizeof(u8) && nr_bytes != sizeof(u16) &&
nr_bytes != sizeof(u32) && nr_bytes != sizeof(u64))) {
nr_bytes != sizeof(u32) && nr_bytes != sizeof(u64) &&
nr_bytes != (2 * sizeof(u64)))) {
return false;
}
......@@ -1063,9 +1064,9 @@ static int btf_int_check_member(struct btf_verifier_env *env,
nr_copy_bits = BTF_INT_BITS(int_data) +
BITS_PER_BYTE_MASKED(struct_bits_off);
if (nr_copy_bits > BITS_PER_U64) {
if (nr_copy_bits > BITS_PER_U128) {
btf_verifier_log_member(env, struct_type, member,
"nr_copy_bits exceeds 64");
"nr_copy_bits exceeds 128");
return -EINVAL;
}
......@@ -1119,9 +1120,9 @@ static int btf_int_check_kflag_member(struct btf_verifier_env *env,
bytes_offset = BITS_ROUNDDOWN_BYTES(struct_bits_off);
nr_copy_bits = nr_bits + BITS_PER_BYTE_MASKED(struct_bits_off);
if (nr_copy_bits > BITS_PER_U64) {
if (nr_copy_bits > BITS_PER_U128) {
btf_verifier_log_member(env, struct_type, member,
"nr_copy_bits exceeds 64");
"nr_copy_bits exceeds 128");
return -EINVAL;
}
......@@ -1168,9 +1169,9 @@ static s32 btf_int_check_meta(struct btf_verifier_env *env,
nr_bits = BTF_INT_BITS(int_data) + BTF_INT_OFFSET(int_data);
if (nr_bits > BITS_PER_U64) {
if (nr_bits > BITS_PER_U128) {
btf_verifier_log_type(env, t, "nr_bits exceeds %zu",
BITS_PER_U64);
BITS_PER_U128);
return -EINVAL;
}
......@@ -1211,31 +1212,93 @@ static void btf_int_log(struct btf_verifier_env *env,
btf_int_encoding_str(BTF_INT_ENCODING(int_data)));
}
static void btf_int128_print(struct seq_file *m, void *data)
{
/* data points to a __int128 number.
* Suppose
* int128_num = *(__int128 *)data;
* The below formulas shows what upper_num and lower_num represents:
* upper_num = int128_num >> 64;
* lower_num = int128_num & 0xffffffffFFFFFFFFULL;
*/
u64 upper_num, lower_num;
#ifdef __BIG_ENDIAN_BITFIELD
upper_num = *(u64 *)data;
lower_num = *(u64 *)(data + 8);
#else
upper_num = *(u64 *)(data + 8);
lower_num = *(u64 *)data;
#endif
if (upper_num == 0)
seq_printf(m, "0x%llx", lower_num);
else
seq_printf(m, "0x%llx%016llx", upper_num, lower_num);
}
static void btf_int128_shift(u64 *print_num, u16 left_shift_bits,
u16 right_shift_bits)
{
u64 upper_num, lower_num;
#ifdef __BIG_ENDIAN_BITFIELD
upper_num = print_num[0];
lower_num = print_num[1];
#else
upper_num = print_num[1];
lower_num = print_num[0];
#endif
/* shake out un-needed bits by shift/or operations */
if (left_shift_bits >= 64) {
upper_num = lower_num << (left_shift_bits - 64);
lower_num = 0;
} else {
upper_num = (upper_num << left_shift_bits) |
(lower_num >> (64 - left_shift_bits));
lower_num = lower_num << left_shift_bits;
}
if (right_shift_bits >= 64) {
lower_num = upper_num >> (right_shift_bits - 64);
upper_num = 0;
} else {
lower_num = (lower_num >> right_shift_bits) |
(upper_num << (64 - right_shift_bits));
upper_num = upper_num >> right_shift_bits;
}
#ifdef __BIG_ENDIAN_BITFIELD
print_num[0] = upper_num;
print_num[1] = lower_num;
#else
print_num[0] = lower_num;
print_num[1] = upper_num;
#endif
}
static void btf_bitfield_seq_show(void *data, u8 bits_offset,
u8 nr_bits, struct seq_file *m)
{
u16 left_shift_bits, right_shift_bits;
u8 nr_copy_bytes;
u8 nr_copy_bits;
u64 print_num;
u64 print_num[2] = {};
nr_copy_bits = nr_bits + bits_offset;
nr_copy_bytes = BITS_ROUNDUP_BYTES(nr_copy_bits);
print_num = 0;
memcpy(&print_num, data, nr_copy_bytes);
memcpy(print_num, data, nr_copy_bytes);
#ifdef __BIG_ENDIAN_BITFIELD
left_shift_bits = bits_offset;
#else
left_shift_bits = BITS_PER_U64 - nr_copy_bits;
left_shift_bits = BITS_PER_U128 - nr_copy_bits;
#endif
right_shift_bits = BITS_PER_U64 - nr_bits;
print_num <<= left_shift_bits;
print_num >>= right_shift_bits;
right_shift_bits = BITS_PER_U128 - nr_bits;
seq_printf(m, "0x%llx", print_num);
btf_int128_shift(print_num, left_shift_bits, right_shift_bits);
btf_int128_print(m, print_num);
}
......@@ -1250,7 +1313,7 @@ static void btf_int_bits_seq_show(const struct btf *btf,
/*
* bits_offset is at most 7.
* BTF_INT_OFFSET() cannot exceed 64 bits.
* BTF_INT_OFFSET() cannot exceed 128 bits.
*/
total_bits_offset = bits_offset + BTF_INT_OFFSET(int_data);
data += BITS_ROUNDDOWN_BYTES(total_bits_offset);
......@@ -1274,6 +1337,9 @@ static void btf_int_seq_show(const struct btf *btf, const struct btf_type *t,
}
switch (nr_bits) {
case 128:
btf_int128_print(m, data);
break;
case 64:
if (sign)
seq_printf(m, "%lld", *(s64 *)data);
......
......@@ -307,15 +307,16 @@ int bpf_prog_calc_tag(struct bpf_prog *fp)
return 0;
}
static int bpf_adj_delta_to_imm(struct bpf_insn *insn, u32 pos, u32 delta,
u32 curr, const bool probe_pass)
static int bpf_adj_delta_to_imm(struct bpf_insn *insn, u32 pos, s32 end_old,
s32 end_new, u32 curr, const bool probe_pass)
{
const s64 imm_min = S32_MIN, imm_max = S32_MAX;
s32 delta = end_new - end_old;
s64 imm = insn->imm;
if (curr < pos && curr + imm + 1 > pos)
if (curr < pos && curr + imm + 1 >= end_old)
imm += delta;
else if (curr > pos + delta && curr + imm + 1 <= pos + delta)
else if (curr >= end_new && curr + imm + 1 < end_new)
imm -= delta;
if (imm < imm_min || imm > imm_max)
return -ERANGE;
......@@ -324,15 +325,16 @@ static int bpf_adj_delta_to_imm(struct bpf_insn *insn, u32 pos, u32 delta,
return 0;
}
static int bpf_adj_delta_to_off(struct bpf_insn *insn, u32 pos, u32 delta,
u32 curr, const bool probe_pass)
static int bpf_adj_delta_to_off(struct bpf_insn *insn, u32 pos, s32 end_old,
s32 end_new, u32 curr, const bool probe_pass)
{
const s32 off_min = S16_MIN, off_max = S16_MAX;
s32 delta = end_new - end_old;
s32 off = insn->off;
if (curr < pos && curr + off + 1 > pos)
if (curr < pos && curr + off + 1 >= end_old)
off += delta;
else if (curr > pos + delta && curr + off + 1 <= pos + delta)
else if (curr >= end_new && curr + off + 1 < end_new)
off -= delta;
if (off < off_min || off > off_max)
return -ERANGE;
......@@ -341,10 +343,10 @@ static int bpf_adj_delta_to_off(struct bpf_insn *insn, u32 pos, u32 delta,
return 0;
}
static int bpf_adj_branches(struct bpf_prog *prog, u32 pos, u32 delta,
const bool probe_pass)
static int bpf_adj_branches(struct bpf_prog *prog, u32 pos, s32 end_old,
s32 end_new, const bool probe_pass)
{
u32 i, insn_cnt = prog->len + (probe_pass ? delta : 0);
u32 i, insn_cnt = prog->len + (probe_pass ? end_new - end_old : 0);
struct bpf_insn *insn = prog->insnsi;
int ret = 0;
......@@ -356,22 +358,23 @@ static int bpf_adj_branches(struct bpf_prog *prog, u32 pos, u32 delta,
* do any other adjustments. Therefore skip the patchlet.
*/
if (probe_pass && i == pos) {
i += delta + 1;
insn++;
i = end_new;
insn = prog->insnsi + end_old;
}
code = insn->code;
if (BPF_CLASS(code) != BPF_JMP ||
if ((BPF_CLASS(code) != BPF_JMP &&
BPF_CLASS(code) != BPF_JMP32) ||
BPF_OP(code) == BPF_EXIT)
continue;
/* Adjust offset of jmps if we cross patch boundaries. */
if (BPF_OP(code) == BPF_CALL) {
if (insn->src_reg != BPF_PSEUDO_CALL)
continue;
ret = bpf_adj_delta_to_imm(insn, pos, delta, i,
probe_pass);
ret = bpf_adj_delta_to_imm(insn, pos, end_old,
end_new, i, probe_pass);
} else {
ret = bpf_adj_delta_to_off(insn, pos, delta, i,
probe_pass);
ret = bpf_adj_delta_to_off(insn, pos, end_old,
end_new, i, probe_pass);
}
if (ret)
break;
......@@ -421,7 +424,7 @@ struct bpf_prog *bpf_patch_insn_single(struct bpf_prog *prog, u32 off,
* we afterwards may not fail anymore.
*/
if (insn_adj_cnt > cnt_max &&
bpf_adj_branches(prog, off, insn_delta, true))
bpf_adj_branches(prog, off, off + 1, off + len, true))
return NULL;
/* Several new instructions need to be inserted. Make room
......@@ -453,13 +456,25 @@ struct bpf_prog *bpf_patch_insn_single(struct bpf_prog *prog, u32 off,
* the ship has sailed to reverse to the original state. An
* overflow cannot happen at this point.
*/
BUG_ON(bpf_adj_branches(prog_adj, off, insn_delta, false));
BUG_ON(bpf_adj_branches(prog_adj, off, off + 1, off + len, false));
bpf_adj_linfo(prog_adj, off, insn_delta);
return prog_adj;
}
int bpf_remove_insns(struct bpf_prog *prog, u32 off, u32 cnt)
{
/* Branch offsets can't overflow when program is shrinking, no need
* to call bpf_adj_branches(..., true) here
*/
memmove(prog->insnsi + off, prog->insnsi + off + cnt,
sizeof(struct bpf_insn) * (prog->len - off - cnt));
prog->len -= cnt;
return WARN_ON_ONCE(bpf_adj_branches(prog, off, off + cnt, off, false));
}
void bpf_prog_kallsyms_del_subprogs(struct bpf_prog *fp)
{
int i;
......@@ -934,6 +949,27 @@ static int bpf_jit_blind_insn(const struct bpf_insn *from,
*to++ = BPF_JMP_REG(from->code, from->dst_reg, BPF_REG_AX, off);
break;
case BPF_JMP32 | BPF_JEQ | BPF_K:
case BPF_JMP32 | BPF_JNE | BPF_K:
case BPF_JMP32 | BPF_JGT | BPF_K:
case BPF_JMP32 | BPF_JLT | BPF_K:
case BPF_JMP32 | BPF_JGE | BPF_K:
case BPF_JMP32 | BPF_JLE | BPF_K:
case BPF_JMP32 | BPF_JSGT | BPF_K:
case BPF_JMP32 | BPF_JSLT | BPF_K:
case BPF_JMP32 | BPF_JSGE | BPF_K:
case BPF_JMP32 | BPF_JSLE | BPF_K:
case BPF_JMP32 | BPF_JSET | BPF_K:
/* Accommodate for extra offset in case of a backjump. */
off = from->off;
if (off < 0)
off -= 2;
*to++ = BPF_ALU32_IMM(BPF_MOV, BPF_REG_AX, imm_rnd ^ from->imm);
*to++ = BPF_ALU32_IMM(BPF_XOR, BPF_REG_AX, imm_rnd);
*to++ = BPF_JMP32_REG(from->code, from->dst_reg, BPF_REG_AX,
off);
break;
case BPF_LD | BPF_IMM | BPF_DW:
*to++ = BPF_ALU64_IMM(BPF_MOV, BPF_REG_AX, imm_rnd ^ aux[1].imm);
*to++ = BPF_ALU64_IMM(BPF_XOR, BPF_REG_AX, imm_rnd);
......@@ -1130,6 +1166,31 @@ EXPORT_SYMBOL_GPL(__bpf_call_base);
INSN_2(JMP, CALL), \
/* Exit instruction. */ \
INSN_2(JMP, EXIT), \
/* 32-bit Jump instructions. */ \
/* Register based. */ \
INSN_3(JMP32, JEQ, X), \
INSN_3(JMP32, JNE, X), \
INSN_3(JMP32, JGT, X), \
INSN_3(JMP32, JLT, X), \
INSN_3(JMP32, JGE, X), \
INSN_3(JMP32, JLE, X), \
INSN_3(JMP32, JSGT, X), \
INSN_3(JMP32, JSLT, X), \
INSN_3(JMP32, JSGE, X), \
INSN_3(JMP32, JSLE, X), \
INSN_3(JMP32, JSET, X), \
/* Immediate based. */ \
INSN_3(JMP32, JEQ, K), \
INSN_3(JMP32, JNE, K), \
INSN_3(JMP32, JGT, K), \
INSN_3(JMP32, JLT, K), \
INSN_3(JMP32, JGE, K), \
INSN_3(JMP32, JLE, K), \
INSN_3(JMP32, JSGT, K), \
INSN_3(JMP32, JSLT, K), \
INSN_3(JMP32, JSGE, K), \
INSN_3(JMP32, JSLE, K), \
INSN_3(JMP32, JSET, K), \
/* Jump instructions. */ \
/* Register based. */ \
INSN_3(JMP, JEQ, X), \
......@@ -1390,145 +1451,49 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack)
out:
CONT;
}
/* JMP */
JMP_JA:
insn += insn->off;
CONT;
JMP_JEQ_X:
if (DST == SRC) {
insn += insn->off;
CONT_JMP;
}
CONT;
JMP_JEQ_K:
if (DST == IMM) {
insn += insn->off;
CONT_JMP;
}
CONT;
JMP_JNE_X:
if (DST != SRC) {
insn += insn->off;
CONT_JMP;
}
CONT;
JMP_JNE_K:
if (DST != IMM) {
insn += insn->off;
CONT_JMP;
}
CONT;
JMP_JGT_X:
if (DST > SRC) {
insn += insn->off;
CONT_JMP;
}
CONT;
JMP_JGT_K:
if (DST > IMM) {
insn += insn->off;
CONT_JMP;
}
CONT;
JMP_JLT_X:
if (DST < SRC) {
insn += insn->off;
CONT_JMP;
}
CONT;
JMP_JLT_K:
if (DST < IMM) {
insn += insn->off;
CONT_JMP;
}
CONT;
JMP_JGE_X:
if (DST >= SRC) {
insn += insn->off;
CONT_JMP;
}
CONT;
JMP_JGE_K:
if (DST >= IMM) {
insn += insn->off;
CONT_JMP;
}
CONT;
JMP_JLE_X:
if (DST <= SRC) {
insn += insn->off;
CONT_JMP;
}
CONT;
JMP_JLE_K:
if (DST <= IMM) {
insn += insn->off;
CONT_JMP;
}
CONT;
JMP_JSGT_X:
if (((s64) DST) > ((s64) SRC)) {
insn += insn->off;
CONT_JMP;
}
CONT;
JMP_JSGT_K:
if (((s64) DST) > ((s64) IMM)) {
insn += insn->off;
CONT_JMP;
}
CONT;
JMP_JSLT_X:
if (((s64) DST) < ((s64) SRC)) {
insn += insn->off;
CONT_JMP;
}
CONT;
JMP_JSLT_K:
if (((s64) DST) < ((s64) IMM)) {
insn += insn->off;
CONT_JMP;
}
CONT;
JMP_JSGE_X:
if (((s64) DST) >= ((s64) SRC)) {
insn += insn->off;
CONT_JMP;
}
CONT;
JMP_JSGE_K:
if (((s64) DST) >= ((s64) IMM)) {
insn += insn->off;
CONT_JMP;
}
CONT;
JMP_JSLE_X:
if (((s64) DST) <= ((s64) SRC)) {
insn += insn->off;
CONT_JMP;
}
CONT;
JMP_JSLE_K:
if (((s64) DST) <= ((s64) IMM)) {
insn += insn->off;
CONT_JMP;
}
CONT;
JMP_JSET_X:
if (DST & SRC) {
insn += insn->off;
CONT_JMP;
}
CONT;
JMP_JSET_K:
if (DST & IMM) {
insn += insn->off;
CONT_JMP;
}
CONT;
JMP_EXIT:
return BPF_R0;
/* JMP */
#define COND_JMP(SIGN, OPCODE, CMP_OP) \
JMP_##OPCODE##_X: \
if ((SIGN##64) DST CMP_OP (SIGN##64) SRC) { \
insn += insn->off; \
CONT_JMP; \
} \
CONT; \
JMP32_##OPCODE##_X: \
if ((SIGN##32) DST CMP_OP (SIGN##32) SRC) { \
insn += insn->off; \
CONT_JMP; \
} \
CONT; \
JMP_##OPCODE##_K: \
if ((SIGN##64) DST CMP_OP (SIGN##64) IMM) { \
insn += insn->off; \
CONT_JMP; \
} \
CONT; \
JMP32_##OPCODE##_K: \
if ((SIGN##32) DST CMP_OP (SIGN##32) IMM) { \
insn += insn->off; \
CONT_JMP; \
} \
CONT;
COND_JMP(u, JEQ, ==)
COND_JMP(u, JNE, !=)
COND_JMP(u, JGT, >)
COND_JMP(u, JLT, <)
COND_JMP(u, JGE, >=)
COND_JMP(u, JLE, <=)
COND_JMP(u, JSET, &)
COND_JMP(s, JSGT, >)
COND_JMP(s, JSLT, <)
COND_JMP(s, JSGE, >=)
COND_JMP(s, JSLE, <=)
#undef COND_JMP
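	/* Illustration (not part of the patch): COND_JMP(s, JSGT, >) expands
	 * to the four labels JMP_JSGT_X, JMP32_JSGT_X, JMP_JSGT_K and
	 * JMP32_JSGT_K, where the JMP variants compare the full registers as
	 * s64 and the JMP32 variants cast DST/SRC (or IMM) to s32 first,
	 * i.e. only the low 32 bits take part in the comparison.
	 */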
	/* STX and ST and LDX */
#define LDST(SIZEOP, SIZE) \
STX_MEM_##SIZEOP: \
......
......@@ -67,7 +67,7 @@ const char *const bpf_class_string[8] = {
[BPF_STX] = "stx",
[BPF_ALU] = "alu",
[BPF_JMP] = "jmp",
[BPF_RET] = "BUG",
[BPF_JMP32] = "jmp32",
[BPF_ALU64] = "alu64",
};
......@@ -136,23 +136,22 @@ void print_bpf_insn(const struct bpf_insn_cbs *cbs,
else
print_bpf_end_insn(verbose, cbs->private_data, insn);
} else if (BPF_OP(insn->code) == BPF_NEG) {
verbose(cbs->private_data, "(%02x) r%d = %s-r%d\n",
insn->code, insn->dst_reg,
class == BPF_ALU ? "(u32) " : "",
verbose(cbs->private_data, "(%02x) %c%d = -%c%d\n",
insn->code, class == BPF_ALU ? 'w' : 'r',
insn->dst_reg, class == BPF_ALU ? 'w' : 'r',
insn->dst_reg);
} else if (BPF_SRC(insn->code) == BPF_X) {
verbose(cbs->private_data, "(%02x) %sr%d %s %sr%d\n",
insn->code, class == BPF_ALU ? "(u32) " : "",
verbose(cbs->private_data, "(%02x) %c%d %s %c%d\n",
insn->code, class == BPF_ALU ? 'w' : 'r',
insn->dst_reg,
bpf_alu_string[BPF_OP(insn->code) >> 4],
class == BPF_ALU ? "(u32) " : "",
class == BPF_ALU ? 'w' : 'r',
insn->src_reg);
} else {
verbose(cbs->private_data, "(%02x) %sr%d %s %s%d\n",
insn->code, class == BPF_ALU ? "(u32) " : "",
verbose(cbs->private_data, "(%02x) %c%d %s %d\n",
insn->code, class == BPF_ALU ? 'w' : 'r',
insn->dst_reg,
bpf_alu_string[BPF_OP(insn->code) >> 4],
class == BPF_ALU ? "(u32) " : "",
insn->imm);
}
} else if (class == BPF_STX) {
......@@ -220,7 +219,7 @@ void print_bpf_insn(const struct bpf_insn_cbs *cbs,
verbose(cbs->private_data, "BUG_ld_%02x\n", insn->code);
return;
}
} else if (class == BPF_JMP) {
} else if (class == BPF_JMP32 || class == BPF_JMP) {
u8 opcode = BPF_OP(insn->code);
if (opcode == BPF_CALL) {
......@@ -244,13 +243,18 @@ void print_bpf_insn(const struct bpf_insn_cbs *cbs,
} else if (insn->code == (BPF_JMP | BPF_EXIT)) {
verbose(cbs->private_data, "(%02x) exit\n", insn->code);
} else if (BPF_SRC(insn->code) == BPF_X) {
verbose(cbs->private_data, "(%02x) if r%d %s r%d goto pc%+d\n",
insn->code, insn->dst_reg,
verbose(cbs->private_data,
"(%02x) if %c%d %s %c%d goto pc%+d\n",
insn->code, class == BPF_JMP32 ? 'w' : 'r',
insn->dst_reg,
bpf_jmp_string[BPF_OP(insn->code) >> 4],
class == BPF_JMP32 ? 'w' : 'r',
insn->src_reg, insn->off);
} else {
verbose(cbs->private_data, "(%02x) if r%d %s 0x%x goto pc%+d\n",
insn->code, insn->dst_reg,
verbose(cbs->private_data,
"(%02x) if %c%d %s 0x%x goto pc%+d\n",
insn->code, class == BPF_JMP32 ? 'w' : 'r',
insn->dst_reg,
bpf_jmp_string[BPF_OP(insn->code) >> 4],
insn->imm, insn->off);
}
......
......@@ -173,6 +173,41 @@ int bpf_prog_offload_finalize(struct bpf_verifier_env *env)
return ret;
}
void
bpf_prog_offload_replace_insn(struct bpf_verifier_env *env, u32 off,
struct bpf_insn *insn)
{
const struct bpf_prog_offload_ops *ops;
struct bpf_prog_offload *offload;
int ret = -EOPNOTSUPP;
down_read(&bpf_devs_lock);
offload = env->prog->aux->offload;
if (offload) {
ops = offload->offdev->ops;
if (!offload->opt_failed && ops->replace_insn)
ret = ops->replace_insn(env, off, insn);
offload->opt_failed |= ret;
}
up_read(&bpf_devs_lock);
}
void
bpf_prog_offload_remove_insns(struct bpf_verifier_env *env, u32 off, u32 cnt)
{
struct bpf_prog_offload *offload;
int ret = -EOPNOTSUPP;
down_read(&bpf_devs_lock);
offload = env->prog->aux->offload;
if (offload) {
if (!offload->opt_failed && offload->offdev->ops->remove_insns)
ret = offload->offdev->ops->remove_insns(env, off, cnt);
offload->opt_failed |= ret;
}
up_read(&bpf_devs_lock);
}
static void __bpf_prog_offload_destroy(struct bpf_prog *prog)
{
struct bpf_prog_offload *offload = prog->aux->offload;
......
......@@ -240,3 +240,85 @@ int bpf_prog_test_run_xdp(struct bpf_prog *prog, const union bpf_attr *kattr,
kfree(data);
return ret;
}
int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog,
const union bpf_attr *kattr,
union bpf_attr __user *uattr)
{
u32 size = kattr->test.data_size_in;
u32 repeat = kattr->test.repeat;
struct bpf_flow_keys flow_keys;
u64 time_start, time_spent = 0;
struct bpf_skb_data_end *cb;
u32 retval, duration;
struct sk_buff *skb;
struct sock *sk;
void *data;
int ret;
u32 i;
if (prog->type != BPF_PROG_TYPE_FLOW_DISSECTOR)
return -EINVAL;
data = bpf_test_init(kattr, size, NET_SKB_PAD + NET_IP_ALIGN,
SKB_DATA_ALIGN(sizeof(struct skb_shared_info)));
if (IS_ERR(data))
return PTR_ERR(data);
sk = kzalloc(sizeof(*sk), GFP_USER);
if (!sk) {
kfree(data);
return -ENOMEM;
}
sock_net_set(sk, current->nsproxy->net_ns);
sock_init_data(NULL, sk);
skb = build_skb(data, 0);
if (!skb) {
kfree(data);
kfree(sk);
return -ENOMEM;
}
skb->sk = sk;
skb_reserve(skb, NET_SKB_PAD + NET_IP_ALIGN);
__skb_put(skb, size);
skb->protocol = eth_type_trans(skb,
current->nsproxy->net_ns->loopback_dev);
skb_reset_network_header(skb);
cb = (struct bpf_skb_data_end *)skb->cb;
cb->qdisc_cb.flow_keys = &flow_keys;
if (!repeat)
repeat = 1;
time_start = ktime_get_ns();
for (i = 0; i < repeat; i++) {
preempt_disable();
rcu_read_lock();
retval = __skb_flow_bpf_dissect(prog, skb,
&flow_keys_dissector,
&flow_keys);
rcu_read_unlock();
preempt_enable();
if (need_resched()) {
if (signal_pending(current))
break;
time_spent += ktime_get_ns() - time_start;
cond_resched();
time_start = ktime_get_ns();
}
}
time_spent += ktime_get_ns() - time_start;
do_div(time_spent, repeat);
duration = time_spent > U32_MAX ? U32_MAX : (u32)time_spent;
ret = bpf_test_finish(kattr, uattr, &flow_keys, sizeof(flow_keys),
retval, duration);
kfree_skb(skb);
kfree(sk);
return ret;
}
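/* User-space sketch (not part of the patch): driving this path through
 * libbpf's bpf_prog_test_run() wrapper. prog_fd is an already loaded
 * BPF_PROG_TYPE_FLOW_DISSECTOR program and pkt a raw Ethernet frame;
 * header paths are assumptions.
 */
#include <linux/bpf.h>
#include <bpf/bpf.h>

static int dissect_once(int prog_fd, void *pkt, __u32 pkt_len)
{
	struct bpf_flow_keys flow_keys = {};
	__u32 size_out = sizeof(flow_keys);
	__u32 retval, duration;
	int err;

	err = bpf_prog_test_run(prog_fd, 1 /* repeat */, pkt, pkt_len,
				&flow_keys, &size_out, &retval, &duration);
	if (err)
		return err;
	/* retval is the program's verdict (BPF_OK/BPF_DROP); flow_keys now
	 * holds the nhoff/thoff offsets and the dissected fields.
	 */
	return retval == BPF_OK ? 0 : -1;
}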
......@@ -6708,6 +6708,27 @@ static u32 bpf_convert_ctx_access(enum bpf_access_type type,
target_size));
break;
case offsetof(struct __sk_buff, gso_segs):
/* si->dst_reg = skb_shinfo(SKB); */
#ifdef NET_SKBUFF_DATA_USES_OFFSET
*insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF(struct sk_buff, head),
si->dst_reg, si->src_reg,
offsetof(struct sk_buff, head));
*insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF(struct sk_buff, end),
BPF_REG_AX, si->src_reg,
offsetof(struct sk_buff, end));
*insn++ = BPF_ALU64_REG(BPF_ADD, si->dst_reg, BPF_REG_AX);
#else
*insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF(struct sk_buff, end),
si->dst_reg, si->src_reg,
offsetof(struct sk_buff, end));
#endif
*insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF(struct skb_shared_info, gso_segs),
si->dst_reg, si->dst_reg,
bpf_target_off(struct skb_shared_info,
gso_segs, 2,
target_size));
break;
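		/* Illustration (not part of the patch): with this rewrite in
		 * place a program simply dereferences the new field, e.g.
		 * (hypothetical tc classifier, section name and includes
		 * assumed):
		 *
		 *	SEC("classifier")
		 *	int drop_multi_seg(struct __sk_buff *skb)
		 *	{
		 *		return skb->gso_segs > 1 ? TC_ACT_SHOT : TC_ACT_OK;
		 *	}
		 *
		 * and the verifier converts the single __sk_buff load into
		 * the skb_shinfo(skb)->gso_segs loads emitted above.
		 */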
case offsetof(struct __sk_buff, wire_len):
BUILD_BUG_ON(FIELD_SIZEOF(struct qdisc_skb_cb, pkt_len) != 4);
......@@ -7698,6 +7719,7 @@ const struct bpf_verifier_ops flow_dissector_verifier_ops = {
};
const struct bpf_prog_ops flow_dissector_prog_ops = {
.test_run = bpf_prog_test_run_flow_dissector,
};
int sk_detach_filter(struct sock *sk)
......
......@@ -683,6 +683,46 @@ static void __skb_flow_bpf_to_target(const struct bpf_flow_keys *flow_keys,
}
}
bool __skb_flow_bpf_dissect(struct bpf_prog *prog,
const struct sk_buff *skb,
struct flow_dissector *flow_dissector,
struct bpf_flow_keys *flow_keys)
{
struct bpf_skb_data_end cb_saved;
struct bpf_skb_data_end *cb;
u32 result;
/* Note that even though the const qualifier is discarded
* throughout the execution of the BPF program, all changes (to the
* control block) are reverted after the BPF program returns.
* Therefore, __skb_flow_dissect does not alter the skb.
*/
cb = (struct bpf_skb_data_end *)skb->cb;
/* Save Control Block */
memcpy(&cb_saved, cb, sizeof(cb_saved));
memset(cb, 0, sizeof(*cb));
/* Pass parameters to the BPF program */
memset(flow_keys, 0, sizeof(*flow_keys));
cb->qdisc_cb.flow_keys = flow_keys;
flow_keys->nhoff = skb_network_offset(skb);
flow_keys->thoff = flow_keys->nhoff;
bpf_compute_data_pointers((struct sk_buff *)skb);
result = BPF_PROG_RUN(prog, skb);
/* Restore state */
memcpy(cb, &cb_saved, sizeof(cb_saved));
flow_keys->nhoff = clamp_t(u16, flow_keys->nhoff, 0, skb->len);
flow_keys->thoff = clamp_t(u16, flow_keys->thoff,
flow_keys->nhoff, skb->len);
return result == BPF_OK;
}
/**
* __skb_flow_dissect - extract the flow_keys struct and return it
* @skb: sk_buff to extract the flow from, can be NULL if the rest are specified
......@@ -714,7 +754,6 @@ bool __skb_flow_dissect(const struct sk_buff *skb,
struct flow_dissector_key_vlan *key_vlan;
enum flow_dissect_ret fdret;
enum flow_dissector_key_id dissector_vlan = FLOW_DISSECTOR_KEY_MAX;
struct bpf_prog *attached = NULL;
int num_hdrs = 0;
u8 ip_proto = 0;
bool ret;
......@@ -754,53 +793,30 @@ bool __skb_flow_dissect(const struct sk_buff *skb,
FLOW_DISSECTOR_KEY_BASIC,
target_container);
rcu_read_lock();
if (skb) {
struct bpf_flow_keys flow_keys;
struct bpf_prog *attached = NULL;
rcu_read_lock();
if (skb->dev)
attached = rcu_dereference(dev_net(skb->dev)->flow_dissector_prog);
else if (skb->sk)
attached = rcu_dereference(sock_net(skb->sk)->flow_dissector_prog);
else
WARN_ON_ONCE(1);
}
if (attached) {
/* Note that even though the const qualifier is discarded
* throughout the execution of the BPF program, all changes(the
* control block) are reverted after the BPF program returns.
* Therefore, __skb_flow_dissect does not alter the skb.
*/
struct bpf_flow_keys flow_keys = {};
struct bpf_skb_data_end cb_saved;
struct bpf_skb_data_end *cb;
u32 result;
cb = (struct bpf_skb_data_end *)skb->cb;
/* Save Control Block */
memcpy(&cb_saved, cb, sizeof(cb_saved));
memset(cb, 0, sizeof(cb_saved));
/* Pass parameters to the BPF program */
cb->qdisc_cb.flow_keys = &flow_keys;
flow_keys.nhoff = nhoff;
flow_keys.thoff = nhoff;
bpf_compute_data_pointers((struct sk_buff *)skb);
result = BPF_PROG_RUN(attached, skb);
/* Restore state */
memcpy(cb, &cb_saved, sizeof(cb_saved));
flow_keys.nhoff = clamp_t(u16, flow_keys.nhoff, 0, skb->len);
flow_keys.thoff = clamp_t(u16, flow_keys.thoff,
flow_keys.nhoff, skb->len);
__skb_flow_bpf_to_target(&flow_keys, flow_dissector,
target_container);
if (attached) {
ret = __skb_flow_bpf_dissect(attached, skb,
flow_dissector,
&flow_keys);
__skb_flow_bpf_to_target(&flow_keys, flow_dissector,
target_container);
rcu_read_unlock();
return ret;
}
rcu_read_unlock();
return result == BPF_OK;
}
rcu_read_unlock();
if (dissector_uses_key(flow_dissector,
FLOW_DISSECTOR_KEY_ETH_ADDRS)) {
......
......@@ -5,3 +5,11 @@ config XDP_SOCKETS
help
XDP sockets allows a channel between XDP programs and
userspace applications.
config XDP_SOCKETS_DIAG
tristate "XDP sockets: monitoring interface"
depends on XDP_SOCKETS
default n
help
Support for PF_XDP sockets monitoring interface used by the ss tool.
If unsure, say Y.
obj-$(CONFIG_XDP_SOCKETS) += xsk.o xdp_umem.o xsk_queue.o
obj-$(CONFIG_XDP_SOCKETS_DIAG) += xsk_diag.o
......@@ -13,12 +13,15 @@
#include <linux/mm.h>
#include <linux/netdevice.h>
#include <linux/rtnetlink.h>
#include <linux/idr.h>
#include "xdp_umem.h"
#include "xsk_queue.h"
#define XDP_UMEM_MIN_CHUNK_SIZE 2048
static DEFINE_IDA(umem_ida);
void xdp_add_sk_umem(struct xdp_umem *umem, struct xdp_sock *xs)
{
unsigned long flags;
......@@ -194,6 +197,8 @@ static void xdp_umem_release(struct xdp_umem *umem)
xdp_umem_clear_dev(umem);
ida_simple_remove(&umem_ida, umem->id);
if (umem->fq) {
xskq_destroy(umem->fq);
umem->fq = NULL;
......@@ -400,8 +405,16 @@ struct xdp_umem *xdp_umem_create(struct xdp_umem_reg *mr)
if (!umem)
return ERR_PTR(-ENOMEM);
err = ida_simple_get(&umem_ida, 0, 0, GFP_KERNEL);
if (err < 0) {
kfree(umem);
return ERR_PTR(err);
}
umem->id = err;
err = xdp_umem_reg(umem, mr);
if (err) {
ida_simple_remove(&umem_ida, umem->id);
kfree(umem);
return ERR_PTR(err);
}
......
......@@ -27,14 +27,10 @@
#include "xsk_queue.h"
#include "xdp_umem.h"
#include "xsk.h"
#define TX_BATCH_SIZE 16
static struct xdp_sock *xdp_sk(struct sock *sk)
{
return (struct xdp_sock *)sk;
}
bool xsk_is_setup_for_bpf_map(struct xdp_sock *xs)
{
return READ_ONCE(xs->rx) && READ_ONCE(xs->umem) &&
......@@ -350,6 +346,10 @@ static int xsk_release(struct socket *sock)
net = sock_net(sk);
mutex_lock(&net->xdp.lock);
sk_del_node_init_rcu(sk);
mutex_unlock(&net->xdp.lock);
local_bh_disable();
sock_prot_inuse_add(net, sk->sk_prot, -1);
local_bh_enable();
......@@ -746,6 +746,10 @@ static int xsk_create(struct net *net, struct socket *sock, int protocol,
mutex_init(&xs->mutex);
spin_lock_init(&xs->tx_completion_lock);
mutex_lock(&net->xdp.lock);
sk_add_node_rcu(sk, &net->xdp.list);
mutex_unlock(&net->xdp.lock);
local_bh_disable();
sock_prot_inuse_add(net, &xsk_proto, 1);
local_bh_enable();
......@@ -759,6 +763,23 @@ static const struct net_proto_family xsk_family_ops = {
.owner = THIS_MODULE,
};
static int __net_init xsk_net_init(struct net *net)
{
mutex_init(&net->xdp.lock);
INIT_HLIST_HEAD(&net->xdp.list);
return 0;
}
static void __net_exit xsk_net_exit(struct net *net)
{
WARN_ON_ONCE(!hlist_empty(&net->xdp.list));
}
static struct pernet_operations xsk_net_ops = {
.init = xsk_net_init,
.exit = xsk_net_exit,
};
static int __init xsk_init(void)
{
int err;
......@@ -771,8 +792,13 @@ static int __init xsk_init(void)
if (err)
goto out_proto;
err = register_pernet_subsys(&xsk_net_ops);
if (err)
goto out_sk;
return 0;
out_sk:
sock_unregister(PF_XDP);
out_proto:
proto_unregister(&xsk_proto);
out:
......
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2019 Intel Corporation. */
#ifndef XSK_H_
#define XSK_H_
static inline struct xdp_sock *xdp_sk(struct sock *sk)
{
return (struct xdp_sock *)sk;
}
#endif /* XSK_H_ */
// SPDX-License-Identifier: GPL-2.0
/* XDP sockets monitoring support
*
* Copyright(c) 2019 Intel Corporation.
*
* Author: Björn Töpel <bjorn.topel@intel.com>
*/
#include <linux/module.h>
#include <net/xdp_sock.h>
#include <linux/xdp_diag.h>
#include <linux/sock_diag.h>
#include "xsk_queue.h"
#include "xsk.h"
static int xsk_diag_put_info(const struct xdp_sock *xs, struct sk_buff *nlskb)
{
struct xdp_diag_info di = {};
di.ifindex = xs->dev ? xs->dev->ifindex : 0;
di.queue_id = xs->queue_id;
return nla_put(nlskb, XDP_DIAG_INFO, sizeof(di), &di);
}
static int xsk_diag_put_ring(const struct xsk_queue *queue, int nl_type,
struct sk_buff *nlskb)
{
struct xdp_diag_ring dr = {};
dr.entries = queue->nentries;
return nla_put(nlskb, nl_type, sizeof(dr), &dr);
}
static int xsk_diag_put_rings_cfg(const struct xdp_sock *xs,
struct sk_buff *nlskb)
{
int err = 0;
if (xs->rx)
err = xsk_diag_put_ring(xs->rx, XDP_DIAG_RX_RING, nlskb);
if (!err && xs->tx)
err = xsk_diag_put_ring(xs->tx, XDP_DIAG_TX_RING, nlskb);
return err;
}
static int xsk_diag_put_umem(const struct xdp_sock *xs, struct sk_buff *nlskb)
{
struct xdp_umem *umem = xs->umem;
struct xdp_diag_umem du = {};
int err;
if (!umem)
return 0;
du.id = umem->id;
du.size = umem->size;
du.num_pages = umem->npgs;
du.chunk_size = (__u32)(~umem->chunk_mask + 1);
du.headroom = umem->headroom;
du.ifindex = umem->dev ? umem->dev->ifindex : 0;
du.queue_id = umem->queue_id;
du.flags = 0;
if (umem->zc)
du.flags |= XDP_DU_F_ZEROCOPY;
du.refs = refcount_read(&umem->users);
err = nla_put(nlskb, XDP_DIAG_UMEM, sizeof(du), &du);
if (!err && umem->fq)
err = xsk_diag_put_ring(umem->fq, XDP_DIAG_UMEM_FILL_RING, nlskb);
if (!err && umem->cq) {
err = xsk_diag_put_ring(umem->cq, XDP_DIAG_UMEM_COMPLETION_RING,
nlskb);
}
return err;
}
static int xsk_diag_fill(struct sock *sk, struct sk_buff *nlskb,
struct xdp_diag_req *req,
struct user_namespace *user_ns,
u32 portid, u32 seq, u32 flags, int sk_ino)
{
struct xdp_sock *xs = xdp_sk(sk);
struct xdp_diag_msg *msg;
struct nlmsghdr *nlh;
nlh = nlmsg_put(nlskb, portid, seq, SOCK_DIAG_BY_FAMILY, sizeof(*msg),
flags);
if (!nlh)
return -EMSGSIZE;
msg = nlmsg_data(nlh);
memset(msg, 0, sizeof(*msg));
msg->xdiag_family = AF_XDP;
msg->xdiag_type = sk->sk_type;
msg->xdiag_ino = sk_ino;
sock_diag_save_cookie(sk, msg->xdiag_cookie);
if ((req->xdiag_show & XDP_SHOW_INFO) && xsk_diag_put_info(xs, nlskb))
goto out_nlmsg_trim;
if ((req->xdiag_show & XDP_SHOW_INFO) &&
nla_put_u32(nlskb, XDP_DIAG_UID,
from_kuid_munged(user_ns, sock_i_uid(sk))))
goto out_nlmsg_trim;
if ((req->xdiag_show & XDP_SHOW_RING_CFG) &&
xsk_diag_put_rings_cfg(xs, nlskb))
goto out_nlmsg_trim;
if ((req->xdiag_show & XDP_SHOW_UMEM) &&
xsk_diag_put_umem(xs, nlskb))
goto out_nlmsg_trim;
if ((req->xdiag_show & XDP_SHOW_MEMINFO) &&
sock_diag_put_meminfo(sk, nlskb, XDP_DIAG_MEMINFO))
goto out_nlmsg_trim;
nlmsg_end(nlskb, nlh);
return 0;
out_nlmsg_trim:
nlmsg_cancel(nlskb, nlh);
return -EMSGSIZE;
}
static int xsk_diag_dump(struct sk_buff *nlskb, struct netlink_callback *cb)
{
struct xdp_diag_req *req = nlmsg_data(cb->nlh);
struct net *net = sock_net(nlskb->sk);
int num = 0, s_num = cb->args[0];
struct sock *sk;
mutex_lock(&net->xdp.lock);
sk_for_each(sk, &net->xdp.list) {
if (!net_eq(sock_net(sk), net))
continue;
if (num++ < s_num)
continue;
if (xsk_diag_fill(sk, nlskb, req,
sk_user_ns(NETLINK_CB(cb->skb).sk),
NETLINK_CB(cb->skb).portid,
cb->nlh->nlmsg_seq, NLM_F_MULTI,
sock_i_ino(sk)) < 0) {
num--;
break;
}
}
mutex_unlock(&net->xdp.lock);
cb->args[0] = num;
return nlskb->len;
}
static int xsk_diag_handler_dump(struct sk_buff *nlskb, struct nlmsghdr *hdr)
{
struct netlink_dump_control c = { .dump = xsk_diag_dump };
int hdrlen = sizeof(struct xdp_diag_req);
struct net *net = sock_net(nlskb->sk);
if (nlmsg_len(hdr) < hdrlen)
return -EINVAL;
if (!(hdr->nlmsg_flags & NLM_F_DUMP))
return -EOPNOTSUPP;
return netlink_dump_start(net->diag_nlsk, nlskb, hdr, &c);
}
static const struct sock_diag_handler xsk_diag_handler = {
.family = AF_XDP,
.dump = xsk_diag_handler_dump,
};
static int __init xsk_diag_init(void)
{
return sock_diag_register(&xsk_diag_handler);
}
static void __exit xsk_diag_exit(void)
{
sock_diag_unregister(&xsk_diag_handler);
}
module_init(xsk_diag_init);
module_exit(xsk_diag_exit);
MODULE_LICENSE("GPL");
MODULE_ALIAS_NET_PF_PROTO_TYPE(PF_NETLINK, NETLINK_SOCK_DIAG, AF_XDP);
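/* User-space sketch (not part of the patch): dumping AF_XDP sockets
 * through the new sock_diag interface, roughly what ss(8) does for other
 * families. Field names follow the xdp_diag uapi header added in this
 * series; error handling and attribute parsing are trimmed.
 */
#include <linux/netlink.h>
#include <linux/sock_diag.h>
#include <linux/xdp_diag.h>
#include <sys/socket.h>
#include <unistd.h>

#ifndef AF_XDP
#define AF_XDP 44	/* may be missing from older libc headers */
#endif

static int dump_xdp_sockets(void)
{
	struct {
		struct nlmsghdr nlh;
		struct xdp_diag_req req;
	} msg = {
		.nlh = {
			.nlmsg_len = sizeof(msg),
			.nlmsg_type = SOCK_DIAG_BY_FAMILY,
			.nlmsg_flags = NLM_F_REQUEST | NLM_F_DUMP,
		},
		.req = {
			.sdiag_family = AF_XDP,
			.xdiag_show = XDP_SHOW_INFO | XDP_SHOW_RING_CFG |
				      XDP_SHOW_UMEM | XDP_SHOW_MEMINFO,
		},
	};
	char buf[8192];
	ssize_t len;
	int fd;

	fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_SOCK_DIAG);
	if (fd < 0)
		return -1;
	if (send(fd, &msg, sizeof(msg), 0) < 0) {
		close(fd);
		return -1;
	}
	/* Each NLM_F_MULTI reply carries a struct xdp_diag_msg followed by
	 * XDP_DIAG_* attributes; walk it with NLMSG_OK()/NLMSG_NEXT() as for
	 * the other sock_diag families.
	 */
	len = recv(fd, buf, sizeof(buf), 0);
	close(fd);
	return len < 0 ? -1 : 0;
}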
......@@ -164,6 +164,16 @@ struct bpf_insn;
.off = OFF, \
.imm = 0 })
/* Like BPF_JMP_REG, but with 32-bit wide operands for comparison. */
#define BPF_JMP32_REG(OP, DST, SRC, OFF) \
((struct bpf_insn) { \
.code = BPF_JMP32 | BPF_OP(OP) | BPF_X, \
.dst_reg = DST, \
.src_reg = SRC, \
.off = OFF, \
.imm = 0 })
/* Conditional jumps against immediates, if (dst_reg 'op' imm32) goto pc + off16 */
#define BPF_JMP_IMM(OP, DST, IMM, OFF) \
......@@ -174,6 +184,16 @@ struct bpf_insn;
.off = OFF, \
.imm = IMM })
/* Like BPF_JMP_IMM, but with 32-bit wide operands for comparison. */
#define BPF_JMP32_IMM(OP, DST, IMM, OFF) \
((struct bpf_insn) { \
.code = BPF_JMP32 | BPF_OP(OP) | BPF_K, \
.dst_reg = DST, \
.src_reg = 0, \
.off = OFF, \
.imm = IMM })
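/* Illustration (not part of the patch): the JMP32 macros slot into
 * hand-built programs exactly like their 64-bit counterparts, e.g.
 *
 *	struct bpf_insn prog[] = {
 *		BPF_MOV64_IMM(BPF_REG_0, 1),
 *		BPF_JMP32_IMM(BPF_JEQ, BPF_REG_1, 7, 1),
 *		BPF_MOV64_IMM(BPF_REG_0, 0),
 *		BPF_EXIT_INSN(),
 *	};
 *
 * which returns 1 only if the low 32 bits of R1 equal 7, regardless of
 * the upper half of the register.
 */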
/* Raw code statement block */
#define BPF_RAW_INSN(CODE, DST, SRC, OFF, IMM) \
......
......@@ -142,5 +142,6 @@ SEE ALSO
**bpftool**\ (8),
**bpftool-prog**\ (8),
**bpftool-map**\ (8),
**bpftool-feature**\ (8),
**bpftool-net**\ (8),
**bpftool-perf**\ (8)
===============
bpftool-feature
===============
-------------------------------------------------------------------------------
tool for inspection of eBPF-related parameters for Linux kernel or net device
-------------------------------------------------------------------------------
:Manual section: 8
SYNOPSIS
========
**bpftool** [*OPTIONS*] **feature** *COMMAND*
*OPTIONS* := { { **-j** | **--json** } [{ **-p** | **--pretty** }] }
*COMMANDS* := { **probe** | **help** }
FEATURE COMMANDS
================
| **bpftool** **feature probe** [*COMPONENT*] [**macros** [**prefix** *PREFIX*]]
| **bpftool** **feature help**
|
| *COMPONENT* := { **kernel** | **dev** *NAME* }
DESCRIPTION
===========
**bpftool feature probe** [**kernel**] [**macros** [**prefix** *PREFIX*]]
Probe the running kernel and dump a number of eBPF-related
parameters, such as availability of the **bpf()** system call,
JIT status, eBPF program types availability, eBPF helper
functions availability, and more.
If the **macros** keyword (but not the **-j** option) is
passed, a subset of the output is dumped as a list of
**#define** macros that are ready to be included in a C
header file, for example. If, additionally, **prefix** is
used to define a *PREFIX*, the provided string will be used
as a prefix to the names of the macros: this can be used to
avoid conflicts on macro names when including the output of
this command as a header file.
Keyword **kernel** can be omitted. If no probe target is
specified, probing the kernel is the default behaviour.
Note that when probed, some eBPF helpers (e.g.
**bpf_trace_printk**\ () or **bpf_probe_write_user**\ ()) may
print warnings to kernel logs.
**bpftool feature probe dev** *NAME* [**macros** [**prefix** *PREFIX*]]
Probe network device for supported eBPF features and dump
results to the console.
The two keywords **macros** and **prefix** have the same
role as when probing the kernel.
**bpftool feature help**
Print short help message.
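The same probes are exposed programmatically by libbpf, which gains a
``libbpf_probes.o`` object in this series. A minimal C sketch (an
illustration, not part of this page), assuming the
``bpf_probe_prog_type()`` and ``bpf_probe_map_type()`` helpers and the
installed ``bpf/libbpf.h`` header path::

    #include <stdbool.h>
    #include <stdio.h>
    #include <linux/bpf.h>
    #include <bpf/libbpf.h>

    int main(void)
    {
            /* ifindex 0 probes the kernel rather than an offload device */
            bool have_xdp = bpf_probe_prog_type(BPF_PROG_TYPE_XDP, 0);
            bool have_queue = bpf_probe_map_type(BPF_MAP_TYPE_QUEUE, 0);

            printf("XDP programs: %s\n", have_xdp ? "yes" : "no");
            printf("queue maps:   %s\n", have_queue ? "yes" : "no");
            return 0;
    }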
OPTIONS
=======
-h, --help
Print short generic help message (similar to **bpftool help**).
-v, --version
Print version number (similar to **bpftool version**).
-j, --json
Generate JSON output. For commands that cannot produce JSON, this
option has no effect.
-p, --pretty
Generate human-readable JSON output. Implies **-j**.
SEE ALSO
========
**bpf**\ (2),
**bpf-helpers**\ (7),
**bpftool**\ (8),
**bpftool-prog**\ (8),
**bpftool-map**\ (8),
**bpftool-cgroup**\ (8),
**bpftool-net**\ (8),
**bpftool-perf**\ (8)
......@@ -25,12 +25,17 @@ MAP COMMANDS
| **bpftool** **map create** *FILE* **type** *TYPE* **key** *KEY_SIZE* **value** *VALUE_SIZE* \
| **entries** *MAX_ENTRIES* **name** *NAME* [**flags** *FLAGS*] [**dev** *NAME*]
| **bpftool** **map dump** *MAP*
| **bpftool** **map update** *MAP* **key** *DATA* **value** *VALUE* [*UPDATE_FLAGS*]
| **bpftool** **map lookup** *MAP* **key** *DATA*
| **bpftool** **map update** *MAP* [**key** *DATA*] [**value** *VALUE*] [*UPDATE_FLAGS*]
| **bpftool** **map lookup** *MAP* [**key** *DATA*]
| **bpftool** **map getnext** *MAP* [**key** *DATA*]
| **bpftool** **map delete** *MAP* **key** *DATA*
| **bpftool** **map pin** *MAP* *FILE*
| **bpftool** **map event_pipe** *MAP* [**cpu** *N* **index** *M*]
| **bpftool** **map peek** *MAP*
| **bpftool** **map push** *MAP* **value** *VALUE*
| **bpftool** **map pop** *MAP*
| **bpftool** **map enqueue** *MAP* **value** *VALUE*
| **bpftool** **map dequeue** *MAP*
| **bpftool** **map help**
|
| *MAP* := { **id** *MAP_ID* | **pinned** *FILE* }
......@@ -62,7 +67,7 @@ DESCRIPTION
**bpftool map dump** *MAP*
Dump all entries in a given *MAP*.
**bpftool map update** *MAP* **key** *DATA* **value** *VALUE* [*UPDATE_FLAGS*]
**bpftool map update** *MAP* [**key** *DATA*] [**value** *VALUE*] [*UPDATE_FLAGS*]
Update map entry for a given *KEY*.
*UPDATE_FLAGS* can be one of: **any** update existing entry
......@@ -75,7 +80,7 @@ DESCRIPTION
the bytes are parsed as decimal values, unless a "0x" prefix
(for hexadecimal) or a "0" prefix (for octal) is provided.
**bpftool map lookup** *MAP* **key** *DATA*
**bpftool map lookup** *MAP* [**key** *DATA*]
Lookup **key** in the map.
**bpftool map getnext** *MAP* [**key** *DATA*]
......@@ -107,6 +112,21 @@ DESCRIPTION
replace any existing ring. Any other application will stop
receiving events if it installed its rings earlier.
**bpftool map peek** *MAP*
Peek next **value** in the queue or stack.
**bpftool map push** *MAP* **value** *VALUE*
Push **value** onto the stack.
**bpftool map pop** *MAP*
Pop and print **value** from the stack.
**bpftool map enqueue** *MAP* **value** *VALUE*
Enqueue **value** into the queue.
**bpftool map dequeue** *MAP*
Dequeue and print **value** from the queue.
**bpftool map help**
Print short help message.
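The queue/stack commands above map onto the generic map syscalls (update
with a NULL key pushes/enqueues, lookup peeks, lookup-and-delete
pops/dequeues), so the same operations are available from C. A minimal
libbpf sketch (an illustration, not part of this page), assuming the
listed wrappers and a privileged process::

    #include <bpf/bpf.h>
    #include <linux/bpf.h>

    int queue_demo(void)
    {
            __u32 val = 42, out = 0;
            int fd, err;

            /* queue/stack maps take no key, hence key_size == 0 */
            fd = bpf_create_map(BPF_MAP_TYPE_QUEUE, 0, sizeof(val), 16, 0);
            if (fd < 0)
                    return fd;

            err = bpf_map_update_elem(fd, NULL, &val, 0);  /* enqueue */
            if (!err)
                    err = bpf_map_lookup_elem(fd, NULL, &out);  /* peek */
            if (!err)
                    err = bpf_map_lookup_and_delete_elem(fd, NULL, &out);  /* dequeue */
            return err;
    }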
......@@ -236,5 +256,6 @@ SEE ALSO
**bpftool**\ (8),
**bpftool-prog**\ (8),
**bpftool-cgroup**\ (8),
**bpftool-feature**\ (8),
**bpftool-net**\ (8),
**bpftool-perf**\ (8)
......@@ -142,4 +142,5 @@ SEE ALSO
**bpftool-prog**\ (8),
**bpftool-map**\ (8),
**bpftool-cgroup**\ (8),
**bpftool-feature**\ (8),
**bpftool-perf**\ (8)
......@@ -84,4 +84,5 @@ SEE ALSO
**bpftool-prog**\ (8),
**bpftool-map**\ (8),
**bpftool-cgroup**\ (8),
**bpftool-feature**\ (8),
**bpftool-net**\ (8)
......@@ -258,5 +258,6 @@ SEE ALSO
**bpftool**\ (8),
**bpftool-map**\ (8),
**bpftool-cgroup**\ (8),
**bpftool-feature**\ (8),
**bpftool-net**\ (8),
**bpftool-perf**\ (8)
......@@ -72,5 +72,6 @@ SEE ALSO
**bpftool-prog**\ (8),
**bpftool-map**\ (8),
**bpftool-cgroup**\ (8),
**bpftool-feature**\ (8),
**bpftool-net**\ (8),
**bpftool-perf**\ (8)
This diff is collapsed.
......@@ -73,35 +73,104 @@ static int btf_dumper_array(const struct btf_dumper *d, __u32 type_id,
return ret;
}
static void btf_int128_print(json_writer_t *jw, const void *data,
bool is_plain_text)
{
/* data points to a __int128 number.
* Suppose
* int128_num = *(__int128 *)data;
* The below formulas shows what upper_num and lower_num represents:
* upper_num = int128_num >> 64;
* lower_num = int128_num & 0xffffffffFFFFFFFFULL;
*/
__u64 upper_num, lower_num;
#ifdef __BIG_ENDIAN_BITFIELD
upper_num = *(__u64 *)data;
lower_num = *(__u64 *)(data + 8);
#else
upper_num = *(__u64 *)(data + 8);
lower_num = *(__u64 *)data;
#endif
if (is_plain_text) {
if (upper_num == 0)
jsonw_printf(jw, "0x%llx", lower_num);
else
jsonw_printf(jw, "0x%llx%016llx", upper_num, lower_num);
} else {
if (upper_num == 0)
jsonw_printf(jw, "\"0x%llx\"", lower_num);
else
jsonw_printf(jw, "\"0x%llx%016llx\"", upper_num, lower_num);
}
}
static void btf_int128_shift(__u64 *print_num, u16 left_shift_bits,
u16 right_shift_bits)
{
__u64 upper_num, lower_num;
#ifdef __BIG_ENDIAN_BITFIELD
upper_num = print_num[0];
lower_num = print_num[1];
#else
upper_num = print_num[1];
lower_num = print_num[0];
#endif
/* shake out un-needed bits by shift/or operations */
if (left_shift_bits >= 64) {
upper_num = lower_num << (left_shift_bits - 64);
lower_num = 0;
} else {
upper_num = (upper_num << left_shift_bits) |
(lower_num >> (64 - left_shift_bits));
lower_num = lower_num << left_shift_bits;
}
if (right_shift_bits >= 64) {
lower_num = upper_num >> (right_shift_bits - 64);
upper_num = 0;
} else {
lower_num = (lower_num >> right_shift_bits) |
(upper_num << (64 - right_shift_bits));
upper_num = upper_num >> right_shift_bits;
}
#ifdef __BIG_ENDIAN_BITFIELD
print_num[0] = upper_num;
print_num[1] = lower_num;
#else
print_num[0] = lower_num;
print_num[1] = upper_num;
#endif
}
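/* Worked example (not part of the patch), little-endian: for a bitfield
 * with bit_offset = 1 and nr_bits = 100 the caller copies 13 bytes
 * (bits 0..103) into print_num; shifting left by 128 - 101 = 27 throws
 * away the three stray bits above the field, and shifting right by
 * 128 - 100 = 28 drops the single offset bit, leaving the 100-bit value
 * right-aligned in the two-word buffer.
 */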
static void btf_dumper_bitfield(__u32 nr_bits, __u8 bit_offset,
const void *data, json_writer_t *jw,
bool is_plain_text)
{
int left_shift_bits, right_shift_bits;
__u64 print_num[2] = {};
int bytes_to_copy;
int bits_to_copy;
__u64 print_num;
bits_to_copy = bit_offset + nr_bits;
bytes_to_copy = BITS_ROUNDUP_BYTES(bits_to_copy);
print_num = 0;
memcpy(&print_num, data, bytes_to_copy);
memcpy(print_num, data, bytes_to_copy);
#if defined(__BIG_ENDIAN_BITFIELD)
left_shift_bits = bit_offset;
#elif defined(__LITTLE_ENDIAN_BITFIELD)
left_shift_bits = 64 - bits_to_copy;
left_shift_bits = 128 - bits_to_copy;
#else
#error neither big nor little endian
#endif
right_shift_bits = 64 - nr_bits;
right_shift_bits = 128 - nr_bits;
print_num <<= left_shift_bits;
print_num >>= right_shift_bits;
if (is_plain_text)
jsonw_printf(jw, "0x%llx", print_num);
else
jsonw_printf(jw, "%llu", print_num);
btf_int128_shift(print_num, left_shift_bits, right_shift_bits);
btf_int128_print(jw, print_num, is_plain_text);
}
......@@ -113,7 +182,7 @@ static void btf_dumper_int_bits(__u32 int_type, __u8 bit_offset,
int total_bits_offset;
/* bits_offset is at most 7.
* BTF_INT_OFFSET() cannot exceed 64 bits.
* BTF_INT_OFFSET() cannot exceed 128 bits.
*/
total_bits_offset = bit_offset + BTF_INT_OFFSET(int_type);
data += BITS_ROUNDDOWN_BYTES(total_bits_offset);
......@@ -139,6 +208,11 @@ static int btf_dumper_int(const struct btf_type *t, __u8 bit_offset,
return 0;
}
if (nr_bits == 128) {
btf_int128_print(jw, data, is_plain_text);
return 0;
}
switch (BTF_INT_ENCODING(*int_type)) {
case 0:
if (BTF_INT_BITS(*int_type) == 64)
......
......@@ -157,6 +157,11 @@ static bool cfg_partition_funcs(struct cfg *cfg, struct bpf_insn *cur,
return false;
}
static bool is_jmp_insn(u8 code)
{
return BPF_CLASS(code) == BPF_JMP || BPF_CLASS(code) == BPF_JMP32;
}
static bool func_partition_bb_head(struct func_node *func)
{
struct bpf_insn *cur, *end;
......@@ -170,7 +175,7 @@ static bool func_partition_bb_head(struct func_node *func)
return true;
for (; cur <= end; cur++) {
if (BPF_CLASS(cur->code) == BPF_JMP) {
if (is_jmp_insn(cur->code)) {
u8 opcode = BPF_OP(cur->code);
if (opcode == BPF_EXIT || opcode == BPF_CALL)
......@@ -296,7 +301,7 @@ static bool func_add_bb_edges(struct func_node *func)
e->src = bb;
insn = bb->tail;
if (BPF_CLASS(insn->code) != BPF_JMP ||
if (!is_jmp_insn(insn->code) ||
BPF_OP(insn->code) == BPF_EXIT) {
e->dst = bb_next(bb);
e->flags |= EDGE_FLAG_FALLTHROUGH;
......
......@@ -56,7 +56,7 @@ static int do_help(int argc, char **argv)
" %s batch file FILE\n"
" %s version\n"
"\n"
" OBJECT := { prog | map | cgroup | perf | net }\n"
" OBJECT := { prog | map | cgroup | perf | net | feature }\n"
" " HELP_SPEC_OPTIONS "\n"
"",
bin_name, bin_name, bin_name);
......@@ -187,6 +187,7 @@ static const struct cmd cmds[] = {
{ "cgroup", do_cgroup },
{ "perf", do_perf },
{ "net", do_net },
{ "feature", do_feature },
{ "version", do_version },
{ 0 }
};
......
......@@ -75,6 +75,9 @@ static const char * const prog_type_name[] = {
[BPF_PROG_TYPE_FLOW_DISSECTOR] = "flow_dissector",
};
extern const char * const map_type_name[];
extern const size_t map_type_name_size;
enum bpf_obj_type {
BPF_OBJ_UNKNOWN,
BPF_OBJ_PROG,
......@@ -145,6 +148,7 @@ int do_cgroup(int argc, char **arg);
int do_perf(int argc, char **arg);
int do_net(int argc, char **arg);
int do_tracelog(int argc, char **arg);
int do_feature(int argc, char **argv);
int parse_u32_arg(int *argc, char ***argv, __u32 *val, const char *what);
int prog_parse_fd(int *argc, char ***argv);
......
libbpf-y := libbpf.o bpf.o nlattr.o btf.o libbpf_errno.o str_error.o netlink.o bpf_prog_linfo.o
libbpf-y := libbpf.o bpf.o nlattr.o btf.o libbpf_errno.o str_error.o netlink.o bpf_prog_linfo.o libbpf_probes.o