Commit de55c9a1 authored by Alexei Starovoitov

Merge branch 'Add support for transmitting packets using XDP in bpf_prog_run()'

Toke Høiland-Jørgensen says:

====================

This series adds support for transmitting packets using XDP in
bpf_prog_run(), by adding a new "live packet" mode which handles the XDP
program return codes and redirects the packets to the stack or other
devices.

The primary use case for this is testing the redirect map types and the
ndo_xdp_xmit driver operation without an external traffic generator. But it
turns out to also be useful for creating a programmable traffic generator
in XDP, as well as injecting frames into the stack. A sample traffic
generator, included in previous versions of this series but since moved to
xdp-tools, transmits up to 9 Mpps/core on my test machine.

To transmit the frames, the new mode instantiates a page_pool structure in
bpf_prog_run() and initialises the pages to contain XDP frames with the
data passed in by userspace. These frames can then be handled as though
they came from the hardware XDP path, and the existing page_pool code takes
care of returning and recycling them. The setup is optimised for high
performance with a high number of repetitions to support stress testing and
the traffic generator use case; see patch 1 for details.

v11:
- Fix override of return code in xdp_test_run_batch()
- Add Martin's ACKs to remaining patches

v10:
- Only propagate memory allocation errors from xdp_test_run_batch()
- Get rid of BPF_F_TEST_XDP_RESERVED; batch_size can be used to probe
- Check that batch_size is unset in non-XDP test_run funcs
- Lower the number of repetitions in the selftest to 10k
- Count number of recycled pages in the selftest
- Fix a few other nits from Martin, carry forward ACKs

v9:
- XDP_DROP packets in the selftest to ensure pages are recycled
- Fix a few issues reported by the kernel test robot
- Rewrite the documentation of the batch size to make it a bit clearer
- Rebase to newest bpf-next

v8:
- Make the batch size configurable from userspace
- Don't interrupt the packet loop on errors in do_redirect (this can be
  caught from the tracepoint)
- Add documentation of the feature
- Add reserved flag userspace can use to probe for support (kernel didn't
  check flags previously)
- Rebase to newest bpf-next, disallow live mode for jumbo frames

v7:
- Extend the local_bh_disable() to cover the full test run loop, to prevent
  running concurrently with the softirq. Fixes a deadlock with veth xmit.
- Reinstate the forwarding sysctl setting in the selftest, and bump up the
  number of packets being transmitted to trigger the above bug.
- Update commit message to make it clear that user space can select the
  ingress interface.

v6:
- Fix meta vs data pointer setting and add a selftest for it
- Add local_bh_disable() around code passing packets up the stack
- Create a new netns for the selftest and use a TC program instead of the
  forwarding hack to count packets being XDP_PASS'ed from the test prog.
- Check for the correct ingress ifindex in the selftest
- Rebase and drop patches 1-5 that were already merged

v5:
- Rebase to current bpf-next

v4:
- Fix a few code style issues (Alexei)
- Also handle the other return codes: XDP_PASS builds skbs and injects them
  into the stack, and XDP_TX is turned into a redirect out the same
  interface (Alexei).
- Drop the last patch adding an xdp_trafficgen program to samples/bpf; this
  will live in xdp-tools instead (Alexei).
- Add a separate bpf_test_run_xdp_live() function to test_run.c instead of
  entangling the new mode in the existing bpf_test_run().

v3:
- Reorder patches to make sure they all build individually (Patchwork)
- Remove a couple of unused variables (Patchwork)
- Remove unlikely() annotation in slow path and add back John's ACK that I
  accidentally dropped for v2 (John)

v2:
- Split up __xdp_do_redirect to avoid passing two pointers to it (John)
- Always reset context pointers before each test run (John)
- Use get_mac_addr() from xdp_sample_user.h instead of rolling our own (Kumar)
- Fix wrong offset for metadata pointer
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
parents 3399dd9f 55fcacca
.. SPDX-License-Identifier: GPL-2.0
===================================
Running BPF programs from userspace
===================================
This document describes the ``BPF_PROG_RUN`` facility for running BPF programs
from userspace.
.. contents::
   :local:
   :depth: 2
Overview
--------
The ``BPF_PROG_RUN`` command can be used through the ``bpf()`` syscall to
execute a BPF program in the kernel and return the results to userspace. This
can be used to unit test BPF programs against user-supplied context objects, and
as a way to explicitly execute programs in the kernel for their side effects. The
command was previously named ``BPF_PROG_TEST_RUN``, and both constants continue
to be defined in the UAPI header, aliased to the same value.
The ``BPF_PROG_RUN`` command can be used to execute BPF programs of the
following types:
- ``BPF_PROG_TYPE_SOCKET_FILTER``
- ``BPF_PROG_TYPE_SCHED_CLS``
- ``BPF_PROG_TYPE_SCHED_ACT``
- ``BPF_PROG_TYPE_XDP``
- ``BPF_PROG_TYPE_SK_LOOKUP``
- ``BPF_PROG_TYPE_CGROUP_SKB``
- ``BPF_PROG_TYPE_LWT_IN``
- ``BPF_PROG_TYPE_LWT_OUT``
- ``BPF_PROG_TYPE_LWT_XMIT``
- ``BPF_PROG_TYPE_LWT_SEG6LOCAL``
- ``BPF_PROG_TYPE_FLOW_DISSECTOR``
- ``BPF_PROG_TYPE_STRUCT_OPS``
- ``BPF_PROG_TYPE_RAW_TRACEPOINT``
- ``BPF_PROG_TYPE_SYSCALL``
When using the ``BPF_PROG_RUN`` command, userspace supplies an input context
object and (for program types operating on network packets) a buffer containing
the packet data that the BPF program will operate on. The kernel will then
execute the program and return the results to userspace. Note that programs will
not have any side effects while being run in this mode; in particular, packets
will not actually be redirected or dropped. The program return code is simply
returned to userspace. A separate mode for live execution of XDP programs is
provided, documented separately below.
Running XDP programs in "live frame mode"
-----------------------------------------
The ``BPF_PROG_RUN`` command has a separate mode for running live XDP programs,
which can be used to execute XDP programs in a way where packets will actually
be processed by the kernel after the execution of the XDP program as if they
arrived on a physical interface. This mode is activated by setting the
``BPF_F_TEST_XDP_LIVE_FRAMES`` flag when supplying an XDP program to
``BPF_PROG_RUN``.
The live packet mode is optimised for high-performance execution of the supplied
XDP program many times (suitable for, e.g., running as a traffic generator),
which means the semantics are not quite as straightforward as the regular test
run mode. Specifically:
- When executing an XDP program in live frame mode, the result of the execution
will not be returned to userspace; instead, the kernel will perform the
operation indicated by the program's return code (drop the packet, redirect
it, etc). For this reason, setting the ``data_out`` or ``ctx_out`` attributes
in the syscall parameters when running in this mode will be rejected. In
addition, not all failures will be reported back to userspace directly;
specifically, only fatal errors in setup or during execution (like memory
allocation errors) will halt execution and return an error. If an error occurs
in packet processing, like a failure to redirect to a given interface,
execution will continue with the next repetition; these errors can be detected
via the same trace points as for regular XDP programs.
- Userspace can supply an ifindex as part of the context object, just like in
the regular (non-live) mode. The XDP program will be executed as though the
packet arrived on this interface; i.e., the ``ingress_ifindex`` of the context
object will point to that interface. Furthermore, if the XDP program returns
``XDP_PASS``, the packet will be injected into the kernel networking stack as
though it arrived on that ifindex, and if it returns ``XDP_TX``, the packet
will be transmitted *out* of that same interface. Do note, though, that
because the program execution is not happening in driver context, an
``XDP_TX`` is actually turned into the same action as an ``XDP_REDIRECT`` to
that same interface (i.e., it will only work if the driver has support for the
``ndo_xdp_xmit`` driver op).
- When running the program with multiple repetitions, the execution will happen
in batches. The batch size defaults to 64 packets (which is the same as the
maximum NAPI receive batch size), but can be specified by userspace through
the ``batch_size`` parameter, up to a maximum of 256 packets. For each batch,
the kernel executes the XDP program repeatedly, each invocation getting a
separate copy of the packet data. For each repetition, if the program drops
the packet, the data page is immediately recycled (see below). Otherwise, the
packet is buffered until the end of the batch, at which point all packets
buffered this way during the batch are transmitted at once.
- When setting up the test run, the kernel will initialise a pool containing as
many memory pages as the batch size. Each memory page will be initialised
with the initial packet data supplied by userspace at ``BPF_PROG_RUN``
invocation. When possible, the pages will be recycled on future program
invocations, to improve performance. Pages will generally be recycled a full
batch at a time, except when a packet is dropped (by return code or because
of, say, a redirection error), in which case that page will be recycled
immediately. If a packet ends up being passed to the regular networking stack
(because the XDP program returns ``XDP_PASS``, or because it ends up being
redirected to an interface that injects it into the stack), the page will be
released and a new one will be allocated when the pool is empty.
When recycling, the page content is not rewritten; only the packet boundary
pointers (``data``, ``data_end`` and ``data_meta``) in the context object will
be reset to the original values. This means that if a program rewrites the
packet contents, it has to be prepared to see either the original content or
the modified version on subsequent invocations.
......@@ -21,6 +21,7 @@ that goes into great technical depth about the BPF Architecture.
helpers
programs
maps
bpf_prog_run
classic_vs_extended.rst
bpf_licensing
test_debug
......
......@@ -1232,6 +1232,8 @@ enum {
/* If set, run the test on the cpu specified by bpf_attr.test.cpu */
#define BPF_F_TEST_RUN_ON_CPU (1U << 0)
/* If set, XDP frames will be transmitted after processing */
#define BPF_F_TEST_XDP_LIVE_FRAMES (1U << 1)
/* type for BPF_ENABLE_STATS */
enum bpf_stats_type {
......@@ -1393,6 +1395,7 @@ union bpf_attr {
__aligned_u64 ctx_out;
__u32 flags;
__u32 cpu;
__u32 batch_size;
} test;
struct { /* anonymous struct used by BPF_*_GET_*_ID */
......
......@@ -30,6 +30,7 @@ config BPF_SYSCALL
select TASKS_TRACE_RCU
select BINARY_PRINTF
select NET_SOCK_MSG if NET
select PAGE_POOL if NET
default n
help
Enable the bpf() system call that allows to manipulate BPF programs
......
......@@ -3336,7 +3336,7 @@ static int bpf_prog_query(const union bpf_attr *attr,
}
}
#define BPF_PROG_TEST_RUN_LAST_FIELD test.cpu
#define BPF_PROG_TEST_RUN_LAST_FIELD test.batch_size
static int bpf_prog_test_run(const union bpf_attr *attr,
union bpf_attr __user *uattr)
......
......@@ -1232,6 +1232,8 @@ enum {
/* If set, run the test on the cpu specified by bpf_attr.test.cpu */
#define BPF_F_TEST_RUN_ON_CPU (1U << 0)
/* If set, XDP frames will be transmitted after processing */
#define BPF_F_TEST_XDP_LIVE_FRAMES (1U << 1)
/* type for BPF_ENABLE_STATS */
enum bpf_stats_type {
......@@ -1393,6 +1395,7 @@ union bpf_attr {
__aligned_u64 ctx_out;
__u32 flags;
__u32 cpu;
__u32 batch_size;
} test;
struct { /* anonymous struct used by BPF_*_GET_*_ID */
......
......@@ -995,6 +995,7 @@ int bpf_prog_test_run_opts(int prog_fd, struct bpf_test_run_opts *opts)
memset(&attr, 0, sizeof(attr));
attr.test.prog_fd = prog_fd;
attr.test.batch_size = OPTS_GET(opts, batch_size, 0);
attr.test.cpu = OPTS_GET(opts, cpu, 0);
attr.test.flags = OPTS_GET(opts, flags, 0);
attr.test.repeat = OPTS_GET(opts, repeat, 0);
......
......@@ -512,8 +512,9 @@ struct bpf_test_run_opts {
__u32 duration; /* out: average per repetition in ns */
__u32 flags;
__u32 cpu;
__u32 batch_size;
};
#define bpf_test_run_opts__last_field cpu
#define bpf_test_run_opts__last_field batch_size
LIBBPF_API int bpf_prog_test_run_opts(int prog_fd,
struct bpf_test_run_opts *opts);
......
// SPDX-License-Identifier: GPL-2.0-only
#define _GNU_SOURCE
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sched.h>
#include <arpa/inet.h>
#include <sys/mount.h>
#include <sys/stat.h>
#include <linux/err.h>
#include <linux/in.h>
#include <linux/in6.h>
#include <linux/limits.h>
#include "bpf_util.h"
#include "network_helpers.h"
#include "test_progs.h"
#define clean_errno() (errno == 0 ? "None" : strerror(errno))
#define log_err(MSG, ...) ({ \
......@@ -356,3 +363,82 @@ char *ping_command(int family)
}
return "ping";
}
struct nstoken {
int orig_netns_fd;
};
static int setns_by_fd(int nsfd)
{
int err;
err = setns(nsfd, CLONE_NEWNET);
close(nsfd);
if (!ASSERT_OK(err, "setns"))
return err;
/* Switch /sys to the new namespace so that e.g. /sys/class/net
* reflects the devices in the new namespace.
*/
err = unshare(CLONE_NEWNS);
if (!ASSERT_OK(err, "unshare"))
return err;
/* Make our /sys mount private, so the following umount won't
* trigger the global umount in case it's shared.
*/
err = mount("none", "/sys", NULL, MS_PRIVATE, NULL);
if (!ASSERT_OK(err, "remount private /sys"))
return err;
err = umount2("/sys", MNT_DETACH);
if (!ASSERT_OK(err, "umount2 /sys"))
return err;
err = mount("sysfs", "/sys", "sysfs", 0, NULL);
if (!ASSERT_OK(err, "mount /sys"))
return err;
err = mount("bpffs", "/sys/fs/bpf", "bpf", 0, NULL);
if (!ASSERT_OK(err, "mount /sys/fs/bpf"))
return err;
return 0;
}
struct nstoken *open_netns(const char *name)
{
int nsfd;
char nspath[PATH_MAX];
int err;
struct nstoken *token;
token = malloc(sizeof(struct nstoken));
if (!ASSERT_OK_PTR(token, "malloc token"))
return NULL;
token->orig_netns_fd = open("/proc/self/ns/net", O_RDONLY);
if (!ASSERT_GE(token->orig_netns_fd, 0, "open /proc/self/ns/net"))
goto fail;
snprintf(nspath, sizeof(nspath), "%s/%s", "/var/run/netns", name);
nsfd = open(nspath, O_RDONLY | O_CLOEXEC);
if (!ASSERT_GE(nsfd, 0, "open netns fd"))
goto fail;
err = setns_by_fd(nsfd);
if (!ASSERT_OK(err, "setns_by_fd"))
goto fail;
return token;
fail:
free(token);
return NULL;
}
void close_netns(struct nstoken *token)
{
ASSERT_OK(setns_by_fd(token->orig_netns_fd), "setns_by_fd");
free(token);
}
......@@ -55,4 +55,13 @@ int make_sockaddr(int family, const char *addr_str, __u16 port,
struct sockaddr_storage *addr, socklen_t *len);
char *ping_command(int family);
struct nstoken;
/**
* open_netns() - Switch to specified network namespace by name.
*
* Returns token with which to restore the original namespace
* using close_netns().
*/
struct nstoken *open_netns(const char *name);
void close_netns(struct nstoken *token);
#endif
......@@ -10,8 +10,6 @@
* to drop unexpected traffic.
*/
#define _GNU_SOURCE
#include <arpa/inet.h>
#include <linux/if.h>
#include <linux/if_tun.h>
......@@ -19,10 +17,8 @@
#include <linux/sysctl.h>
#include <linux/time_types.h>
#include <linux/net_tstamp.h>
#include <sched.h>
#include <stdbool.h>
#include <stdio.h>
#include <sys/mount.h>
#include <sys/stat.h>
#include <unistd.h>
......@@ -92,91 +88,6 @@ static int write_file(const char *path, const char *newval)
return 0;
}
struct nstoken {
int orig_netns_fd;
};
static int setns_by_fd(int nsfd)
{
int err;
err = setns(nsfd, CLONE_NEWNET);
close(nsfd);
if (!ASSERT_OK(err, "setns"))
return err;
/* Switch /sys to the new namespace so that e.g. /sys/class/net
* reflects the devices in the new namespace.
*/
err = unshare(CLONE_NEWNS);
if (!ASSERT_OK(err, "unshare"))
return err;
/* Make our /sys mount private, so the following umount won't
* trigger the global umount in case it's shared.
*/
err = mount("none", "/sys", NULL, MS_PRIVATE, NULL);
if (!ASSERT_OK(err, "remount private /sys"))
return err;
err = umount2("/sys", MNT_DETACH);
if (!ASSERT_OK(err, "umount2 /sys"))
return err;
err = mount("sysfs", "/sys", "sysfs", 0, NULL);
if (!ASSERT_OK(err, "mount /sys"))
return err;
err = mount("bpffs", "/sys/fs/bpf", "bpf", 0, NULL);
if (!ASSERT_OK(err, "mount /sys/fs/bpf"))
return err;
return 0;
}
/**
* open_netns() - Switch to specified network namespace by name.
*
* Returns token with which to restore the original namespace
* using close_netns().
*/
static struct nstoken *open_netns(const char *name)
{
int nsfd;
char nspath[PATH_MAX];
int err;
struct nstoken *token;
token = calloc(1, sizeof(struct nstoken));
if (!ASSERT_OK_PTR(token, "malloc token"))
return NULL;
token->orig_netns_fd = open("/proc/self/ns/net", O_RDONLY);
if (!ASSERT_GE(token->orig_netns_fd, 0, "open /proc/self/ns/net"))
goto fail;
snprintf(nspath, sizeof(nspath), "%s/%s", "/var/run/netns", name);
nsfd = open(nspath, O_RDONLY | O_CLOEXEC);
if (!ASSERT_GE(nsfd, 0, "open netns fd"))
goto fail;
err = setns_by_fd(nsfd);
if (!ASSERT_OK(err, "setns_by_fd"))
goto fail;
return token;
fail:
free(token);
return NULL;
}
static void close_netns(struct nstoken *token)
{
ASSERT_OK(setns_by_fd(token->orig_netns_fd), "setns_by_fd");
free(token);
}
static int netns_setup_namespaces(const char *verb)
{
const char * const *ns = namespaces;
......
// SPDX-License-Identifier: GPL-2.0
#include <test_progs.h>
#include <network_helpers.h>
#include <net/if.h>
#include <linux/if_ether.h>
#include <linux/if_packet.h>
#include <linux/ipv6.h>
#include <linux/in6.h>
#include <linux/udp.h>
#include <bpf/bpf_endian.h>
#include "test_xdp_do_redirect.skel.h"
#define SYS(fmt, ...) \
({ \
char cmd[1024]; \
snprintf(cmd, sizeof(cmd), fmt, ##__VA_ARGS__); \
if (!ASSERT_OK(system(cmd), cmd)) \
goto out; \
})
struct udp_packet {
struct ethhdr eth;
struct ipv6hdr iph;
struct udphdr udp;
__u8 payload[64 - sizeof(struct udphdr)
- sizeof(struct ethhdr) - sizeof(struct ipv6hdr)];
} __packed;
static struct udp_packet pkt_udp = {
.eth.h_proto = __bpf_constant_htons(ETH_P_IPV6),
.eth.h_dest = {0x00, 0x11, 0x22, 0x33, 0x44, 0x55},
.eth.h_source = {0x66, 0x77, 0x88, 0x99, 0xaa, 0xbb},
.iph.version = 6,
.iph.nexthdr = IPPROTO_UDP,
.iph.payload_len = bpf_htons(sizeof(struct udp_packet)
- offsetof(struct udp_packet, udp)),
.iph.hop_limit = 2,
.iph.saddr.s6_addr16 = {bpf_htons(0xfc00), 0, 0, 0, 0, 0, 0, bpf_htons(1)},
.iph.daddr.s6_addr16 = {bpf_htons(0xfc00), 0, 0, 0, 0, 0, 0, bpf_htons(2)},
.udp.source = bpf_htons(1),
.udp.dest = bpf_htons(1),
.udp.len = bpf_htons(sizeof(struct udp_packet)
- offsetof(struct udp_packet, udp)),
.payload = {0x42}, /* receiver XDP program matches on this */
};
static int attach_tc_prog(struct bpf_tc_hook *hook, int fd)
{
DECLARE_LIBBPF_OPTS(bpf_tc_opts, opts, .handle = 1, .priority = 1, .prog_fd = fd);
int ret;
ret = bpf_tc_hook_create(hook);
if (!ASSERT_OK(ret, "create tc hook"))
return ret;
ret = bpf_tc_attach(hook, &opts);
if (!ASSERT_OK(ret, "bpf_tc_attach")) {
bpf_tc_hook_destroy(hook);
return ret;
}
return 0;
}
#define NUM_PKTS 10000
void test_xdp_do_redirect(void)
{
int err, xdp_prog_fd, tc_prog_fd, ifindex_src, ifindex_dst;
char data[sizeof(pkt_udp) + sizeof(__u32)];
struct test_xdp_do_redirect *skel = NULL;
struct nstoken *nstoken = NULL;
struct bpf_link *link;
struct xdp_md ctx_in = { .data = sizeof(__u32),
.data_end = sizeof(data) };
DECLARE_LIBBPF_OPTS(bpf_test_run_opts, opts,
.data_in = &data,
.data_size_in = sizeof(data),
.ctx_in = &ctx_in,
.ctx_size_in = sizeof(ctx_in),
.flags = BPF_F_TEST_XDP_LIVE_FRAMES,
.repeat = NUM_PKTS,
.batch_size = 64,
);
DECLARE_LIBBPF_OPTS(bpf_tc_hook, tc_hook,
.attach_point = BPF_TC_INGRESS);
memcpy(&data[sizeof(__u32)], &pkt_udp, sizeof(pkt_udp));
*((__u32 *)data) = 0x42; /* metadata test value */
skel = test_xdp_do_redirect__open();
if (!ASSERT_OK_PTR(skel, "skel"))
return;
/* The XDP program we run with bpf_prog_run() will cycle through all
* three xmit (PASS/TX/REDIRECT) return codes starting from above, and
* ending up with PASS, so we should end up with two packets on the dst
* iface and NUM_PKTS-2 in the TC hook. We match the packets on the UDP
* payload.
*/
SYS("ip netns add testns");
nstoken = open_netns("testns");
if (!ASSERT_OK_PTR(nstoken, "setns"))
goto out;
SYS("ip link add veth_src type veth peer name veth_dst");
SYS("ip link set dev veth_src address 00:11:22:33:44:55");
SYS("ip link set dev veth_dst address 66:77:88:99:aa:bb");
SYS("ip link set dev veth_src up");
SYS("ip link set dev veth_dst up");
SYS("ip addr add dev veth_src fc00::1/64");
SYS("ip addr add dev veth_dst fc00::2/64");
SYS("ip neigh add fc00::2 dev veth_src lladdr 66:77:88:99:aa:bb");
/* We enable forwarding in the test namespace because that will cause
* the packets that go through the kernel stack (with XDP_PASS) to be
* forwarded back out the same interface (because of the packet dst
* combined with the interface addresses). When this happens, the
* regular forwarding path will end up going through the same
* veth_xdp_xmit() call as the XDP_REDIRECT code, which can cause a
* deadlock if it happens on the same CPU. There's a local_bh_disable()
* in the test_run code to prevent this, but an earlier version of the
* code didn't have this, so we keep the test behaviour to make sure the
* bug doesn't resurface.
*/
SYS("sysctl -qw net.ipv6.conf.all.forwarding=1");
ifindex_src = if_nametoindex("veth_src");
ifindex_dst = if_nametoindex("veth_dst");
if (!ASSERT_NEQ(ifindex_src, 0, "ifindex_src") ||
!ASSERT_NEQ(ifindex_dst, 0, "ifindex_dst"))
goto out;
memcpy(skel->rodata->expect_dst, &pkt_udp.eth.h_dest, ETH_ALEN);
skel->rodata->ifindex_out = ifindex_src; /* redirect back to the same iface */
skel->rodata->ifindex_in = ifindex_src;
ctx_in.ingress_ifindex = ifindex_src;
tc_hook.ifindex = ifindex_src;
if (!ASSERT_OK(test_xdp_do_redirect__load(skel), "load"))
goto out;
link = bpf_program__attach_xdp(skel->progs.xdp_count_pkts, ifindex_dst);
if (!ASSERT_OK_PTR(link, "prog_attach"))
goto out;
skel->links.xdp_count_pkts = link;
tc_prog_fd = bpf_program__fd(skel->progs.tc_count_pkts);
if (attach_tc_prog(&tc_hook, tc_prog_fd))
goto out;
xdp_prog_fd = bpf_program__fd(skel->progs.xdp_redirect);
err = bpf_prog_test_run_opts(xdp_prog_fd, &opts);
if (!ASSERT_OK(err, "prog_run"))
goto out_tc;
/* wait for the packets to be flushed */
kern_sync_rcu();
/* There will be one packet sent through XDP_REDIRECT and one through
* XDP_TX; these will show up on the XDP counting program, while the
* rest will be counted at the TC ingress hook (and the counting program
* resets the packet payload so they don't get counted twice even though
they are re-transmitted out the veth device).
*/
ASSERT_EQ(skel->bss->pkts_seen_xdp, 2, "pkt_count_xdp");
ASSERT_EQ(skel->bss->pkts_seen_zero, 2, "pkt_count_zero");
ASSERT_EQ(skel->bss->pkts_seen_tc, NUM_PKTS - 2, "pkt_count_tc");
out_tc:
bpf_tc_hook_destroy(&tc_hook);
out:
if (nstoken)
close_netns(nstoken);
system("ip netns del testns");
test_xdp_do_redirect__destroy(skel);
}
// SPDX-License-Identifier: GPL-2.0
#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#define ETH_ALEN 6
#define HDR_SZ (sizeof(struct ethhdr) + sizeof(struct ipv6hdr) + sizeof(struct udphdr))
const volatile int ifindex_out;
const volatile int ifindex_in;
const volatile __u8 expect_dst[ETH_ALEN];
volatile int pkts_seen_xdp = 0;
volatile int pkts_seen_zero = 0;
volatile int pkts_seen_tc = 0;
volatile int retcode = XDP_REDIRECT;
SEC("xdp")
int xdp_redirect(struct xdp_md *xdp)
{
__u32 *metadata = (void *)(long)xdp->data_meta;
void *data_end = (void *)(long)xdp->data_end;
void *data = (void *)(long)xdp->data;
__u8 *payload = data + HDR_SZ;
int ret = retcode;
if (payload + 1 > data_end)
return XDP_ABORTED;
if (xdp->ingress_ifindex != ifindex_in)
return XDP_ABORTED;
if (metadata + 1 > data)
return XDP_ABORTED;
if (*metadata != 0x42)
return XDP_ABORTED;
if (*payload == 0) {
*payload = 0x42;
pkts_seen_zero++;
}
if (bpf_xdp_adjust_meta(xdp, 4))
return XDP_ABORTED;
if (retcode > XDP_PASS)
retcode--;
if (ret == XDP_REDIRECT)
return bpf_redirect(ifindex_out, 0);
return ret;
}
static bool check_pkt(void *data, void *data_end)
{
struct ipv6hdr *iph = data + sizeof(struct ethhdr);
__u8 *payload = data + HDR_SZ;
if (payload + 1 > data_end)
return false;
if (iph->nexthdr != IPPROTO_UDP || *payload != 0x42)
return false;
/* reset the payload so the same packet doesn't get counted twice when
* it cycles back through the kernel path and out the dst veth
*/
*payload = 0;
return true;
}
SEC("xdp")
int xdp_count_pkts(struct xdp_md *xdp)
{
void *data = (void *)(long)xdp->data;
void *data_end = (void *)(long)xdp->data_end;
if (check_pkt(data, data_end))
pkts_seen_xdp++;
/* Return XDP_DROP to make sure the data page is recycled, like when it
* exits a physical NIC. Recycled pages will be counted in the
* pkts_seen_zero counter above.
*/
return XDP_DROP;
}
SEC("tc")
int tc_count_pkts(struct __sk_buff *skb)
{
void *data = (void *)(long)skb->data;
void *data_end = (void *)(long)skb->data_end;
if (check_pkt(data, data_end))
pkts_seen_tc++;
return 0;
}
char _license[] SEC("license") = "GPL";