Commit eb1e1478 authored by Daniel Borkmann

Merge branch 'bpf-sockmap-listen'

Jakub Sitnicki says:

====================
This patch set turns SOCK{MAP,HASH} into generic collections for TCP
sockets, both listening and established. Adding support for listening
sockets enables us to use these BPF map types with reuseport BPF programs.

Why? SOCKMAP and SOCKHASH, in comparison to REUSEPORT_SOCKARRAY, allow
the socket to be in more than one map at the same time.
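
To make the end goal concrete, here is a minimal user-space sketch (not part
of this series' selftests; it uses libbpf's bpf_create_map and made-up
variable names) that inserts a listening TCP socket into a SOCKMAP, an
operation that used to fail with EOPNOTSUPP:

  #include <assert.h>
  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <sys/socket.h>
  #include <bpf/bpf.h>

  int main(void)
  {
  	/* SOCKMAP keyed by u32 index; an 8-byte value size is also
  	 * accepted with this series, so the cookie can be read back. */
  	int map_fd = bpf_create_map(BPF_MAP_TYPE_SOCKMAP, sizeof(__u32),
  				    sizeof(__u64), 1, 0);
  	int srv = socket(AF_INET, SOCK_STREAM, 0);
  	struct sockaddr_in addr = {
  		.sin_family = AF_INET,
  		.sin_addr.s_addr = htonl(INADDR_LOOPBACK),
  	};
  	__u64 value;
  	__u32 key = 0;

  	assert(map_fd >= 0 && srv >= 0);
  	assert(!bind(srv, (struct sockaddr *)&addr, sizeof(addr)));
  	assert(!listen(srv, 128));

  	value = srv;
  	assert(!bpf_map_update_elem(map_fd, &key, &value, BPF_ANY));
  	return 0;
  }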

Having a BPF map type that can hold listening sockets, and that gracefully
co-exists with reuseport BPF, is important if, in the future, we want
BPF programs that run at socket lookup time [0]. The cover letter for v1 of
this series tells the full story of how we got here [1].

Although SOCK{MAP,HASH} are not a drop-in replacement for SOCKARRAY just
yet, because UDP support is lacking, this is a step in that direction. We're
working with Lorenz on extending SOCK{MAP,HASH} to hold UDP sockets, and
expect to post an RFC series for sockmap + UDP in the near future.

I've dropped Acks from all patches that have been touched since v6.

The audit for missing READ_ONCE annotations for access to sk_prot is
ongoing. Thus far I've found one location specific to TCP listening sockets
that needed annotating. This got fixed in this iteration. I wonder if the
sparse checker could be put to work to identify places where we access
sk_prot without holding sk_lock...
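
For reference, the pattern being audited looks roughly like this. This is a
stand-alone toy sketch with simplified stand-ins for the kernel's
READ_ONCE/WRITE_ONCE macros, not kernel code:

  #include <stdio.h>

  #define WRITE_ONCE(x, val) (*(volatile __typeof__(x) *)&(x) = (val))
  #define READ_ONCE(x)       (*(volatile __typeof__(x) *)&(x))

  struct proto { const char *name; };
  struct sock { struct proto *sk_prot; };

  static struct proto tcp_prot = { "tcp" };
  static struct proto tcp_bpf_prot = { "tcp_bpf" };

  int main(void)
  {
  	struct sock sk = { .sk_prot = &tcp_prot };
  	struct proto *prot;

  	/* Writer (e.g. sk_psock_update_proto), under the socket lock: */
  	WRITE_ONCE(sk.sk_prot, &tcp_bpf_prot);

  	/* Reader (e.g. sk_clone_lock), without the socket lock: */
  	prot = READ_ONCE(sk.sk_prot);
  	printf("%s\n", prot->name);
  	return 0;
  }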

The patch series depends on another one, posted earlier [2], that has
been split out of it.

v6 -> v7:

- Extended the series to cover SOCKHASH. (patches 4-8, 10-11) (John)

- Rebased onto recent bpf-next. Resolved conflicts in recent fixes to
  sk_state checks on sockmap/sockhash update path. (patch 4)

- Added missing READ_ONCE annotation in sock_copy. (patch 1)

- Split out patches that simplify sk_psock_restore_proto [2].

v5 -> v6:

- Added a fix-up for patch 1 which I forgot to commit in v5. Sigh.

v4 -> v5:

- Rebase onto recent bpf-next to resolve conflicts. (Daniel)

v3 -> v4:

- Make tcp_bpf_clone parameter names consistent across function declaration
  and definition. (Martin)

- Use sock_map_redirect_okay helper everywhere we need to take a different
  action for listening sockets. (Lorenz)

- Expand comment explaining the need for a callback from reuseport to
  sockarray code in reuseport_detach_sock. (Martin)

- Mention the possibility of using a u64 counter for reuseport IDs in the
  future in the description for patch 10. (Martin)

v2 -> v3:

- Generate reuseport ID when group is created. Please see patch 10
  description for details. (Martin)

- Fix the build when CONFIG_NET_SOCK_MSG is not selected by either
  CONFIG_BPF_STREAM_PARSER or CONFIG_TLS. (kbuild bot & John)

- Allow updating a sockmap from BPF on the BPF_SOCK_OPS_TCP_LISTEN_CB
  callback; an oversight in previous iterations. Users may want to populate
  the sockmap with listening sockets from BPF as well (see the sockops
  sketch at the end of this v2 -> v3 list).

- Removed RCU read lock assertion in sock_map_lookup_sys. (Martin)

- Get rid of a warning when child socket was cloned with parent's psock
  state. (John)

- Check for tcp_bpf_unhash rather than tcp_bpf_recvmsg when deciding if
  sk_proto needs restoring on clone. Checking for recvmsg in the context of
  listening socket cloning was confusing. (Martin)

- Consolidate sock_map_sk_is_suitable with sock_map_update_okay. This led
  to adding dedicated predicates for sockhash. Update self-tests
  accordingly. (John)

- Annotate unlikely branch in bpf_{sk,msg}_redirect_map when socket isn't
  in a map, or isn't a valid redirect target. (John)

- Document paired READ/WRITE_ONCE annotations and cover shared access in
  more detail in patch 2 description. (John)

- Correct a couple of log messages in sockmap_listen self-tests so the
  message reflects the actual failure.

- Rework reuseport tests from the sockmap_listen suite so that an ENOENT
  error from the bpf_sk_select_reuseport handler does not happen on the
  happy path.
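
As a rough illustration of the TCP_LISTEN_CB update path mentioned above
(map and program names are made up, not taken from this series), a sockops
program could now do:

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  struct {
  	__uint(type, BPF_MAP_TYPE_SOCKMAP);
  	__uint(max_entries, 1);
  	__type(key, __u32);
  	__type(value, __u64);
  } listener_map SEC(".maps");

  SEC("sockops")
  int add_listener(struct bpf_sock_ops *skops)
  {
  	__u32 key = 0;

  	/* Inserting at TCP_LISTEN_CB was rejected before this change. */
  	if (skops->op == BPF_SOCK_OPS_TCP_LISTEN_CB)
  		bpf_sock_map_update(skops, &listener_map, &key, BPF_ANY);
  	return 1;
  }

  char _license[] SEC("license") = "GPL";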

v1 -> v2:

- The af_ops->syn_recv_sock callback is no longer overridden and burdened with
  restoring sk_prot and clearing sk_user_data in the child socket. As the
  child socket is already hashed when syn_recv_sock returns, it is too late to
  put it in the right state. Instead, patches 3 & 4 restore sk_prot and clear
  sk_user_data before we hash the child socket.
  (Pointed out by Martin Lau)

- Annotate shared access to sk->sk_prot with READ_ONCE/WRITE_ONCE macros, as
  we write to it from sk_msg while the socket might be getting cloned on
  another CPU. (Suggested by John Fastabend)

- Convert tests for SOCKMAP holding listening sockets to return-on-error
  style, and hook them up to test_progs. Also use BPF skeleton for setup.
  Add new tests to cover the race scenario discovered during v1 review.

RFC -> v1:

- Switch from overriding proto->accept to af_ops->syn_recv_sock, which
  happens earlier. Clearing the psock state after accept() does not work
  for child sockets that become orphaned (never got accepted). v4-mapped
  sockets need special care.

- Return the socket cookie on SOCKMAP lookup from syscall to be on par with
  REUSEPORT_SOCKARRAY. Requires SOCKMAP to take a u64 value on lookup/update
  from syscall (see the lookup sketch after this list).

- Make bpf_sk_redirect_map (ingress) and bpf_msg_redirect_map (egress)
  SOCKMAP helpers fail when target socket is a listening one.

- Make bpf_sk_select_reuseport helper fail when target is a TCP established
  socket.

- Teach libbpf to recognize SK_REUSEPORT program type from section name.

- Add a dedicated set of tests for SOCKMAP holding listening sockets,
  covering map operations, overridden socket callbacks, and BPF helpers.
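
For illustration, a hedged user-space sketch of the cookie lookup described
in the SOCKMAP-lookup item above (the helper name and map handle are
assumptions, not code from this series):

  #include <stdio.h>
  #include <bpf/bpf.h>

  /* Assumes map_fd is a BPF_MAP_TYPE_SOCKMAP created with an 8-byte
   * value size that already holds a socket at index 0. */
  static int print_cookie(int map_fd)
  {
  	__u64 cookie;
  	__u32 key = 0;

  	/* From the syscall side the lookup returns the socket cookie,
  	 * matching REUSEPORT_SOCKARRAY; a map with a 4-byte value size
  	 * fails this lookup instead. */
  	if (bpf_map_lookup_elem(map_fd, &key, &cookie))
  		return -1;
  	printf("socket cookie: %llu\n", (unsigned long long)cookie);
  	return 0;
  }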

[0] https://lore.kernel.org/bpf/20190828072250.29828-1-jakub@cloudflare.com/
[1] https://lore.kernel.org/bpf/20191123110751.6729-1-jakub@cloudflare.com/
[2] https://lore.kernel.org/bpf/20200217121530.754315-1-jakub@cloudflare.com/
====================
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
parents e42da4c6 44d28be2
@@ -352,7 +352,8 @@ static inline void sk_psock_update_proto(struct sock *sk,
 	psock->saved_write_space = sk->sk_write_space;
 
 	psock->sk_proto = sk->sk_prot;
-	sk->sk_prot = ops;
+	/* Pairs with lockless read in sk_clone_lock() */
+	WRITE_ONCE(sk->sk_prot, ops);
 }
 
 static inline void sk_psock_restore_proto(struct sock *sk,
@@ -502,10 +502,43 @@ enum sk_pacing {
 	SK_PACING_FQ		= 2,
 };
 
+/* Pointer stored in sk_user_data might not be suitable for copying
+ * when cloning the socket. For instance, it can point to a reference
+ * counted object. sk_user_data bottom bit is set if pointer must not
+ * be copied.
+ */
+#define SK_USER_DATA_NOCOPY	1UL
+#define SK_USER_DATA_PTRMASK	~(SK_USER_DATA_NOCOPY)
+
+/**
+ * sk_user_data_is_nocopy - Test if sk_user_data pointer must not be copied
+ * @sk: socket
+ */
+static inline bool sk_user_data_is_nocopy(const struct sock *sk)
+{
+	return ((uintptr_t)sk->sk_user_data & SK_USER_DATA_NOCOPY);
+}
+
 #define __sk_user_data(sk) ((*((void __rcu **)&(sk)->sk_user_data)))
 
-#define rcu_dereference_sk_user_data(sk)	rcu_dereference(__sk_user_data((sk)))
-#define rcu_assign_sk_user_data(sk, ptr)	rcu_assign_pointer(__sk_user_data((sk)), ptr)
+#define rcu_dereference_sk_user_data(sk)				\
+({									\
+	void *__tmp = rcu_dereference(__sk_user_data((sk)));		\
+	(void *)((uintptr_t)__tmp & SK_USER_DATA_PTRMASK);		\
+})
+#define rcu_assign_sk_user_data(sk, ptr)				\
+({									\
+	uintptr_t __tmp = (uintptr_t)(ptr);				\
+	WARN_ON_ONCE(__tmp & ~SK_USER_DATA_PTRMASK);			\
+	rcu_assign_pointer(__sk_user_data((sk)), __tmp);		\
+})
+#define rcu_assign_sk_user_data_nocopy(sk, ptr)				\
+({									\
+	uintptr_t __tmp = (uintptr_t)(ptr);				\
+	WARN_ON_ONCE(__tmp & ~SK_USER_DATA_PTRMASK);			\
+	rcu_assign_pointer(__sk_user_data((sk)),			\
+			   __tmp | SK_USER_DATA_NOCOPY);		\
+})
 
 /*
  * SK_CAN_REUSE and SK_NO_REUSE on a socket mean that the socket is OK
@@ -55,6 +55,4 @@ static inline bool reuseport_has_conns(struct sock *sk, bool set)
 	return ret;
 }
 
-int reuseport_get_id(struct sock_reuseport *reuse);
-
 #endif  /* _SOCK_REUSEPORT_H */
@@ -2203,6 +2203,13 @@ int tcp_bpf_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
 		    int nonblock, int flags, int *addr_len);
 int __tcp_bpf_recvmsg(struct sock *sk, struct sk_psock *psock,
 		      struct msghdr *msg, int len, int flags);
+#ifdef CONFIG_NET_SOCK_MSG
+void tcp_bpf_clone(const struct sock *sk, struct sock *newsk);
+#else
+static inline void tcp_bpf_clone(const struct sock *sk, struct sock *newsk)
+{
+}
+#endif
 
 /* Call BPF_SOCK_OPS program that returns an int. If the return value
  * is < 0, then the BPF op failed (for example if the loaded BPF
@@ -305,11 +305,6 @@ int bpf_fd_reuseport_array_update_elem(struct bpf_map *map, void *key,
 	if (err)
 		goto put_file_unlock;
 
-	/* Ensure reuse->reuseport_id is set */
-	err = reuseport_get_id(reuse);
-	if (err < 0)
-		goto put_file_unlock;
-
 	WRITE_ONCE(nsk->sk_user_data, &array->ptrs[index]);
 	rcu_assign_pointer(array->ptrs[index], nsk);
 	free_osk = osk;
@@ -3693,14 +3693,16 @@ static int check_map_func_compatibility(struct bpf_verifier_env *env,
 		if (func_id != BPF_FUNC_sk_redirect_map &&
 		    func_id != BPF_FUNC_sock_map_update &&
 		    func_id != BPF_FUNC_map_delete_elem &&
-		    func_id != BPF_FUNC_msg_redirect_map)
+		    func_id != BPF_FUNC_msg_redirect_map &&
+		    func_id != BPF_FUNC_sk_select_reuseport)
 			goto error;
 		break;
 	case BPF_MAP_TYPE_SOCKHASH:
 		if (func_id != BPF_FUNC_sk_redirect_hash &&
 		    func_id != BPF_FUNC_sock_hash_update &&
 		    func_id != BPF_FUNC_map_delete_elem &&
-		    func_id != BPF_FUNC_msg_redirect_hash)
+		    func_id != BPF_FUNC_msg_redirect_hash &&
+		    func_id != BPF_FUNC_sk_select_reuseport)
 			goto error;
 		break;
 	case BPF_MAP_TYPE_REUSEPORT_SOCKARRAY:

@@ -3774,7 +3776,9 @@ static int check_map_func_compatibility(struct bpf_verifier_env *env,
 			goto error;
 		break;
 	case BPF_FUNC_sk_select_reuseport:
-		if (map->map_type != BPF_MAP_TYPE_REUSEPORT_SOCKARRAY)
+		if (map->map_type != BPF_MAP_TYPE_REUSEPORT_SOCKARRAY &&
+		    map->map_type != BPF_MAP_TYPE_SOCKMAP &&
+		    map->map_type != BPF_MAP_TYPE_SOCKHASH)
 			goto error;
 		break;
 	case BPF_FUNC_map_peek_elem:
@@ -8620,6 +8620,7 @@ struct sock *bpf_run_sk_reuseport(struct sock_reuseport *reuse, struct sock *sk,
 BPF_CALL_4(sk_select_reuseport, struct sk_reuseport_kern *, reuse_kern,
 	   struct bpf_map *, map, void *, key, u32, flags)
 {
+	bool is_sockarray = map->map_type == BPF_MAP_TYPE_REUSEPORT_SOCKARRAY;
 	struct sock_reuseport *reuse;
 	struct sock *selected_sk;
 
@@ -8628,26 +8629,20 @@ BPF_CALL_4(sk_select_reuseport, struct sk_reuseport_kern *, reuse_kern,
 		return -ENOENT;
 
 	reuse = rcu_dereference(selected_sk->sk_reuseport_cb);
-	if (!reuse)
-		/* selected_sk is unhashed (e.g. by close()) after the
-		 * above map_lookup_elem(). Treat selected_sk has already
-		 * been removed from the map.
+	if (!reuse) {
+		/* reuseport_array has only sk with non NULL sk_reuseport_cb.
+		 * The only (!reuse) case here is - the sk has already been
+		 * unhashed (e.g. by close()), so treat it as -ENOENT.
+		 *
+		 * Other maps (e.g. sock_map) do not provide this guarantee and
+		 * the sk may never be in the reuseport group to begin with.
 		 */
-		return -ENOENT;
+		return is_sockarray ? -ENOENT : -EINVAL;
+	}
 
 	if (unlikely(reuse->reuseport_id != reuse_kern->reuseport_id)) {
-		struct sock *sk;
-
-		if (unlikely(!reuse_kern->reuseport_id))
-			/* There is a small race between adding the
-			 * sk to the map and setting the
-			 * reuse_kern->reuseport_id.
-			 * Treat it as the sk has not been added to
-			 * the bpf map yet.
-			 */
-			return -ENOENT;
-
-		sk = reuse_kern->sk;
+		struct sock *sk = reuse_kern->sk;
+
 		if (sk->sk_protocol != selected_sk->sk_protocol)
 			return -EPROTOTYPE;
 		else if (sk->sk_family != selected_sk->sk_family)
@@ -512,7 +512,7 @@ struct sk_psock *sk_psock_init(struct sock *sk, int node)
 	sk_psock_set_state(psock, SK_PSOCK_TX_ENABLED);
 	refcount_set(&psock->refcnt, 1);
 
-	rcu_assign_sk_user_data(sk, psock);
+	rcu_assign_sk_user_data_nocopy(sk, psock);
 	sock_hold(sk);
 
 	return psock;
@@ -1572,13 +1572,14 @@ static inline void sock_lock_init(struct sock *sk)
  */
 static void sock_copy(struct sock *nsk, const struct sock *osk)
 {
+	const struct proto *prot = READ_ONCE(osk->sk_prot);
 #ifdef CONFIG_SECURITY_NETWORK
 	void *sptr = nsk->sk_security;
 #endif
 	memcpy(nsk, osk, offsetof(struct sock, sk_dontcopy_begin));
 
 	memcpy(&nsk->sk_dontcopy_end, &osk->sk_dontcopy_end,
-	       osk->sk_prot->obj_size - offsetof(struct sock, sk_dontcopy_end));
+	       prot->obj_size - offsetof(struct sock, sk_dontcopy_end));
 
 #ifdef CONFIG_SECURITY_NETWORK
 	nsk->sk_security = sptr;

@@ -1792,16 +1793,17 @@ static void sk_init_common(struct sock *sk)
  */
 struct sock *sk_clone_lock(const struct sock *sk, const gfp_t priority)
 {
+	struct proto *prot = READ_ONCE(sk->sk_prot);
 	struct sock *newsk;
 	bool is_charged = true;
 
-	newsk = sk_prot_alloc(sk->sk_prot, priority, sk->sk_family);
+	newsk = sk_prot_alloc(prot, priority, sk->sk_family);
 	if (newsk != NULL) {
 		struct sk_filter *filter;
 
 		sock_copy(newsk, sk);
 
-		newsk->sk_prot_creator = sk->sk_prot;
+		newsk->sk_prot_creator = prot;
 
 		/* SANITY */
 		if (likely(newsk->sk_net_refcnt))

@@ -1863,6 +1865,12 @@ struct sock *sk_clone_lock(const struct sock *sk, const gfp_t priority)
 			goto out;
 		}
 
+		/* Clear sk_user_data if parent had the pointer tagged
+		 * as not suitable for copying when cloning.
+		 */
+		if (sk_user_data_is_nocopy(newsk))
+			RCU_INIT_POINTER(newsk->sk_user_data, NULL);
+
 		newsk->sk_err	   = 0;
 		newsk->sk_err_soft = 0;
 		newsk->sk_priority = 0;
@@ -10,6 +10,7 @@
 #include <linux/skmsg.h>
 #include <linux/list.h>
 #include <linux/jhash.h>
+#include <linux/sock_diag.h>
 
 struct bpf_stab {
 	struct bpf_map map;

@@ -31,7 +32,8 @@ static struct bpf_map *sock_map_alloc(union bpf_attr *attr)
 		return ERR_PTR(-EPERM);
 	if (attr->max_entries == 0 ||
 	    attr->key_size    != 4 ||
-	    attr->value_size  != 4 ||
+	    (attr->value_size != sizeof(u32) &&
+	     attr->value_size != sizeof(u64)) ||
 	    attr->map_flags & ~SOCK_CREATE_FLAG_MASK)
 		return ERR_PTR(-EINVAL);

@@ -228,6 +230,30 @@ static int sock_map_link(struct bpf_map *map, struct sk_psock_progs *progs,
 	return ret;
 }
 
+static int sock_map_link_no_progs(struct bpf_map *map, struct sock *sk)
+{
+	struct sk_psock *psock;
+	int ret;
+
+	psock = sk_psock_get_checked(sk);
+	if (IS_ERR(psock))
+		return PTR_ERR(psock);
+
+	if (psock) {
+		tcp_bpf_reinit(sk);
+		return 0;
+	}
+
+	psock = sk_psock_init(sk, map->numa_node);
+	if (!psock)
+		return -ENOMEM;
+
+	ret = tcp_bpf_init(sk);
+	if (ret < 0)
+		sk_psock_put(sk, psock);
+	return ret;
+}
+
 static void sock_map_free(struct bpf_map *map)
 {
 	struct bpf_stab *stab = container_of(map, struct bpf_stab, map);

@@ -275,7 +301,22 @@ static struct sock *__sock_map_lookup_elem(struct bpf_map *map, u32 key)
 
 static void *sock_map_lookup(struct bpf_map *map, void *key)
 {
-	return ERR_PTR(-EOPNOTSUPP);
+	return __sock_map_lookup_elem(map, *(u32 *)key);
+}
+
+static void *sock_map_lookup_sys(struct bpf_map *map, void *key)
+{
+	struct sock *sk;
+
+	if (map->value_size != sizeof(u64))
+		return ERR_PTR(-ENOSPC);
+
+	sk = __sock_map_lookup_elem(map, *(u32 *)key);
+	if (!sk)
+		return ERR_PTR(-ENOENT);
+
+	sock_gen_cookie(sk);
+	return &sk->sk_cookie;
 }
 
 static int __sock_map_delete(struct bpf_stab *stab, struct sock *sk_test,
@@ -334,6 +375,11 @@ static int sock_map_get_next_key(struct bpf_map *map, void *key, void *next)
 	return 0;
 }
 
+static bool sock_map_redirect_allowed(const struct sock *sk)
+{
+	return sk->sk_state != TCP_LISTEN;
+}
+
 static int sock_map_update_common(struct bpf_map *map, u32 idx,
 				  struct sock *sk, u64 flags)
 {

@@ -356,7 +402,14 @@ static int sock_map_update_common(struct bpf_map *map, u32 idx,
 	if (!link)
 		return -ENOMEM;
 
-	ret = sock_map_link(map, &stab->progs, sk);
+	/* Only sockets we can redirect into/from in BPF need to hold
+	 * refs to parser/verdict progs and have their sk_data_ready
+	 * and sk_write_space callbacks overridden.
+	 */
+	if (sock_map_redirect_allowed(sk))
+		ret = sock_map_link(map, &stab->progs, sk);
+	else
+		ret = sock_map_link_no_progs(map, sk);
 	if (ret < 0)
 		goto out_free;

@@ -391,7 +444,8 @@ static int sock_map_update_common(struct bpf_map *map, u32 idx,
 static bool sock_map_op_okay(const struct bpf_sock_ops_kern *ops)
 {
 	return ops->op == BPF_SOCK_OPS_PASSIVE_ESTABLISHED_CB ||
-	       ops->op == BPF_SOCK_OPS_ACTIVE_ESTABLISHED_CB;
+	       ops->op == BPF_SOCK_OPS_ACTIVE_ESTABLISHED_CB ||
+	       ops->op == BPF_SOCK_OPS_TCP_LISTEN_CB;
 }
 
 static bool sock_map_sk_is_suitable(const struct sock *sk)

@@ -400,14 +454,26 @@ static bool sock_map_sk_is_suitable(const struct sock *sk)
 	       sk->sk_protocol == IPPROTO_TCP;
 }
 
+static bool sock_map_sk_state_allowed(const struct sock *sk)
+{
+	return (1 << sk->sk_state) & (TCPF_ESTABLISHED | TCPF_LISTEN);
+}
+
 static int sock_map_update_elem(struct bpf_map *map, void *key,
 				void *value, u64 flags)
 {
-	u32 ufd = *(u32 *)value;
 	u32 idx = *(u32 *)key;
 	struct socket *sock;
 	struct sock *sk;
 	int ret;
+	u64 ufd;
+
+	if (map->value_size == sizeof(u64))
+		ufd = *(u64 *)value;
+	else
+		ufd = *(u32 *)value;
+	if (ufd > S32_MAX)
+		return -EINVAL;
 
 	sock = sockfd_lookup(ufd, &ret);
 	if (!sock)

@@ -423,7 +489,7 @@ static int sock_map_update_elem(struct bpf_map *map, void *key,
 	}
 
 	sock_map_sk_acquire(sk);
-	if (sk->sk_state != TCP_ESTABLISHED)
+	if (!sock_map_sk_state_allowed(sk))
 		ret = -EOPNOTSUPP;
 	else
 		ret = sock_map_update_common(map, idx, sk, flags);
@@ -460,13 +526,17 @@ BPF_CALL_4(bpf_sk_redirect_map, struct sk_buff *, skb,
 	   struct bpf_map *, map, u32, key, u64, flags)
 {
 	struct tcp_skb_cb *tcb = TCP_SKB_CB(skb);
+	struct sock *sk;
 
 	if (unlikely(flags & ~(BPF_F_INGRESS)))
 		return SK_DROP;
-	tcb->bpf.flags = flags;
-	tcb->bpf.sk_redir = __sock_map_lookup_elem(map, key);
-	if (!tcb->bpf.sk_redir)
+
+	sk = __sock_map_lookup_elem(map, key);
+	if (unlikely(!sk || !sock_map_redirect_allowed(sk)))
 		return SK_DROP;
+
+	tcb->bpf.flags = flags;
+	tcb->bpf.sk_redir = sk;
 	return SK_PASS;
 }

@@ -483,12 +553,17 @@ const struct bpf_func_proto bpf_sk_redirect_map_proto = {
 BPF_CALL_4(bpf_msg_redirect_map, struct sk_msg *, msg,
 	   struct bpf_map *, map, u32, key, u64, flags)
 {
+	struct sock *sk;
+
 	if (unlikely(flags & ~(BPF_F_INGRESS)))
 		return SK_DROP;
-	msg->flags = flags;
-	msg->sk_redir = __sock_map_lookup_elem(map, key);
-	if (!msg->sk_redir)
+
+	sk = __sock_map_lookup_elem(map, key);
+	if (unlikely(!sk || !sock_map_redirect_allowed(sk)))
 		return SK_DROP;
+
+	msg->flags = flags;
+	msg->sk_redir = sk;
 	return SK_PASS;
 }

@@ -506,6 +581,7 @@ const struct bpf_map_ops sock_map_ops = {
 	.map_alloc		= sock_map_alloc,
 	.map_free		= sock_map_free,
 	.map_get_next_key	= sock_map_get_next_key,
+	.map_lookup_elem_sys_only = sock_map_lookup_sys,
 	.map_update_elem	= sock_map_update_elem,
 	.map_delete_elem	= sock_map_delete_elem,
 	.map_lookup_elem	= sock_map_lookup,
@@ -680,7 +756,14 @@ static int sock_hash_update_common(struct bpf_map *map, void *key,
 	if (!link)
 		return -ENOMEM;
 
-	ret = sock_map_link(map, &htab->progs, sk);
+	/* Only sockets we can redirect into/from in BPF need to hold
+	 * refs to parser/verdict progs and have their sk_data_ready
+	 * and sk_write_space callbacks overridden.
+	 */
+	if (sock_map_redirect_allowed(sk))
+		ret = sock_map_link(map, &htab->progs, sk);
+	else
+		ret = sock_map_link_no_progs(map, sk);
 	if (ret < 0)
 		goto out_free;

@@ -729,10 +812,17 @@ static int sock_hash_update_common(struct bpf_map *map, void *key,
 static int sock_hash_update_elem(struct bpf_map *map, void *key,
 				 void *value, u64 flags)
 {
-	u32 ufd = *(u32 *)value;
 	struct socket *sock;
 	struct sock *sk;
 	int ret;
+	u64 ufd;
+
+	if (map->value_size == sizeof(u64))
+		ufd = *(u64 *)value;
+	else
+		ufd = *(u32 *)value;
+	if (ufd > S32_MAX)
+		return -EINVAL;
 
 	sock = sockfd_lookup(ufd, &ret);
 	if (!sock)

@@ -748,7 +838,7 @@ static int sock_hash_update_elem(struct bpf_map *map, void *key,
 	}
 
 	sock_map_sk_acquire(sk);
-	if (sk->sk_state != TCP_ESTABLISHED)
+	if (!sock_map_sk_state_allowed(sk))
 		ret = -EOPNOTSUPP;
 	else
 		ret = sock_hash_update_common(map, key, sk, flags);

@@ -808,7 +898,8 @@ static struct bpf_map *sock_hash_alloc(union bpf_attr *attr)
 		return ERR_PTR(-EPERM);
 	if (attr->max_entries == 0 ||
 	    attr->key_size    == 0 ||
-	    attr->value_size  != 4 ||
+	    (attr->value_size != sizeof(u32) &&
+	     attr->value_size != sizeof(u64)) ||
 	    attr->map_flags & ~SOCK_CREATE_FLAG_MASK)
 		return ERR_PTR(-EINVAL);
 	if (attr->key_size > MAX_BPF_STACK)

@@ -885,6 +976,26 @@ static void sock_hash_free(struct bpf_map *map)
 	kfree(htab);
 }
 
+static void *sock_hash_lookup_sys(struct bpf_map *map, void *key)
+{
+	struct sock *sk;
+
+	if (map->value_size != sizeof(u64))
+		return ERR_PTR(-ENOSPC);
+
+	sk = __sock_hash_lookup_elem(map, key);
+	if (!sk)
+		return ERR_PTR(-ENOENT);
+
+	sock_gen_cookie(sk);
+	return &sk->sk_cookie;
+}
+
+static void *sock_hash_lookup(struct bpf_map *map, void *key)
+{
+	return __sock_hash_lookup_elem(map, key);
+}
+
 static void sock_hash_release_progs(struct bpf_map *map)
 {
 	psock_progs_drop(&container_of(map, struct bpf_htab, map)->progs);

@@ -916,13 +1027,17 @@ BPF_CALL_4(bpf_sk_redirect_hash, struct sk_buff *, skb,
 	   struct bpf_map *, map, void *, key, u64, flags)
 {
 	struct tcp_skb_cb *tcb = TCP_SKB_CB(skb);
+	struct sock *sk;
 
 	if (unlikely(flags & ~(BPF_F_INGRESS)))
 		return SK_DROP;
-	tcb->bpf.flags = flags;
-	tcb->bpf.sk_redir = __sock_hash_lookup_elem(map, key);
-	if (!tcb->bpf.sk_redir)
+
+	sk = __sock_hash_lookup_elem(map, key);
+	if (unlikely(!sk || !sock_map_redirect_allowed(sk)))
 		return SK_DROP;
+
+	tcb->bpf.flags = flags;
+	tcb->bpf.sk_redir = sk;
 	return SK_PASS;
 }

@@ -939,12 +1054,17 @@ const struct bpf_func_proto bpf_sk_redirect_hash_proto = {
 BPF_CALL_4(bpf_msg_redirect_hash, struct sk_msg *, msg,
 	   struct bpf_map *, map, void *, key, u64, flags)
 {
+	struct sock *sk;
+
 	if (unlikely(flags & ~(BPF_F_INGRESS)))
 		return SK_DROP;
-	msg->flags = flags;
-	msg->sk_redir = __sock_hash_lookup_elem(map, key);
-	if (!msg->sk_redir)
+
+	sk = __sock_hash_lookup_elem(map, key);
+	if (unlikely(!sk || !sock_map_redirect_allowed(sk)))
 		return SK_DROP;
+
+	msg->flags = flags;
+	msg->sk_redir = sk;
 	return SK_PASS;
 }

@@ -964,7 +1084,8 @@ const struct bpf_map_ops sock_hash_ops = {
 	.map_get_next_key	= sock_hash_get_next_key,
 	.map_update_elem	= sock_hash_update_elem,
 	.map_delete_elem	= sock_hash_delete_elem,
-	.map_lookup_elem	= sock_map_lookup,
+	.map_lookup_elem	= sock_hash_lookup,
+	.map_lookup_elem_sys_only = sock_hash_lookup_sys,
 	.map_release_uref	= sock_hash_release_progs,
 	.map_check_btf		= map_check_no_btf,
 };
@@ -16,27 +16,8 @@
 
 DEFINE_SPINLOCK(reuseport_lock);
 
-#define REUSEPORT_MIN_ID 1
 static DEFINE_IDA(reuseport_ida);
 
-int reuseport_get_id(struct sock_reuseport *reuse)
-{
-	int id;
-
-	if (reuse->reuseport_id)
-		return reuse->reuseport_id;
-
-	id = ida_simple_get(&reuseport_ida, REUSEPORT_MIN_ID, 0,
-			    /* Called under reuseport_lock */
-			    GFP_ATOMIC);
-	if (id < 0)
-		return id;
-
-	reuse->reuseport_id = id;
-
-	return reuse->reuseport_id;
-}
-
 static struct sock_reuseport *__reuseport_alloc(unsigned int max_socks)
 {
 	unsigned int size = sizeof(struct sock_reuseport) +

@@ -55,6 +36,7 @@ static struct sock_reuseport *__reuseport_alloc(unsigned int max_socks)
 int reuseport_alloc(struct sock *sk, bool bind_inany)
 {
 	struct sock_reuseport *reuse;
+	int id, ret = 0;
 
 	/* bh lock used since this function call may precede hlist lock in
 	 * soft irq of receive path or setsockopt from process context

@@ -78,10 +60,18 @@ int reuseport_alloc(struct sock *sk, bool bind_inany)
 
 	reuse = __reuseport_alloc(INIT_SOCKS);
 	if (!reuse) {
-		spin_unlock_bh(&reuseport_lock);
-		return -ENOMEM;
+		ret = -ENOMEM;
+		goto out;
 	}
 
+	id = ida_alloc(&reuseport_ida, GFP_ATOMIC);
+	if (id < 0) {
+		kfree(reuse);
+		ret = id;
+		goto out;
+	}
+
+	reuse->reuseport_id = id;
 	reuse->socks[0] = sk;
 	reuse->num_socks = 1;
 	reuse->bind_inany = bind_inany;

@@ -90,7 +80,7 @@ int reuseport_alloc(struct sock *sk, bool bind_inany)
 out:
 	spin_unlock_bh(&reuseport_lock);
-	return 0;
+	return ret;
 }
 EXPORT_SYMBOL(reuseport_alloc);

@@ -134,8 +124,7 @@ static void reuseport_free_rcu(struct rcu_head *head)
 	reuse = container_of(head, struct sock_reuseport, rcu);
 	sk_reuseport_prog_free(rcu_dereference_protected(reuse->prog, 1));
-	if (reuse->reuseport_id)
-		ida_simple_remove(&reuseport_ida, reuse->reuseport_id);
+	ida_free(&reuseport_ida, reuse->reuseport_id);
 	kfree(reuse);
 }

@@ -199,11 +188,14 @@ void reuseport_detach_sock(struct sock *sk)
 	reuse = rcu_dereference_protected(sk->sk_reuseport_cb,
 					  lockdep_is_held(&reuseport_lock));
 
-	/* At least one of the sk in this reuseport group is added to
-	 * a bpf map. Notify the bpf side. The bpf map logic will
-	 * remove the sk if it is indeed added to a bpf map.
+	/* Notify the bpf side. The sk may be added to a sockarray
+	 * map. If so, sockarray logic will remove it from the map.
+	 *
+	 * Other bpf map types that work with reuseport, like sockmap,
+	 * don't need an explicit callback from here. They override sk
+	 * unhash/close ops to remove the sk from the map before we
+	 * get to this point.
 	 */
-	if (reuse->reuseport_id)
-		bpf_sk_reuseport_detach(sk);
+	bpf_sk_reuseport_detach(sk);
 
 	rcu_assign_pointer(sk->sk_reuseport_cb, NULL);
@@ -645,8 +645,10 @@ static void tcp_bpf_reinit_sk_prot(struct sock *sk, struct sk_psock *psock)
 	/* Reinit occurs when program types change e.g. TCP_BPF_TX is removed
 	 * or added requiring sk_prot hook updates. We keep original saved
 	 * hooks in this case.
+	 *
+	 * Pairs with lockless read in sk_clone_lock().
 	 */
-	sk->sk_prot = &tcp_bpf_prots[family][config];
+	WRITE_ONCE(sk->sk_prot, &tcp_bpf_prots[family][config]);
 }
 
 static int tcp_bpf_assert_proto_ops(struct proto *ops)

@@ -691,3 +693,17 @@ int tcp_bpf_init(struct sock *sk)
 	rcu_read_unlock();
 	return 0;
 }
+
+/* If a child got cloned from a listening socket that had tcp_bpf
+ * protocol callbacks installed, we need to restore the callbacks to
+ * the default ones because the child does not inherit the psock state
+ * that tcp_bpf callbacks expect.
+ */
+void tcp_bpf_clone(const struct sock *sk, struct sock *newsk)
+{
+	int family = sk->sk_family == AF_INET6 ? TCP_BPF_IPV6 : TCP_BPF_IPV4;
+	struct proto *prot = newsk->sk_prot;
+
+	if (prot == &tcp_bpf_prots[family][TCP_BPF_BASE])
+		newsk->sk_prot = sk->sk_prot_creator;
+}
@@ -548,6 +548,8 @@ struct sock *tcp_create_openreq_child(const struct sock *sk,
 	newtp->fastopen_req = NULL;
 	RCU_INIT_POINTER(newtp->fastopen_rsk, NULL);
 
+	tcp_bpf_clone(sk, newsk);
+
 	__TCP_INC_STATS(sock_net(sk), TCP_MIB_PASSIVEOPENS);
 
 	return newsk;
@@ -106,7 +106,8 @@ void tcp_update_ulp(struct sock *sk, struct proto *proto,
 	if (!icsk->icsk_ulp_ops) {
 		sk->sk_write_space = write_space;
-		sk->sk_prot = proto;
+		/* Pairs with lockless read in sk_clone_lock() */
+		WRITE_ONCE(sk->sk_prot, proto);
 		return;
 	}
@@ -742,7 +742,8 @@ static void tls_update(struct sock *sk, struct proto *p,
 		ctx->sk_write_space = write_space;
 		ctx->sk_proto = p;
 	} else {
-		sk->sk_prot = p;
+		/* Pairs with lockless read in sk_clone_lock(). */
+		WRITE_ONCE(sk->sk_prot, p);
 		sk->sk_write_space = write_space;
 	}
 }
@@ -36,6 +36,7 @@ static int result_map, tmp_index_ovr_map, linum_map, data_check_map;
 static __u32 expected_results[NR_RESULTS];
 static int sk_fds[REUSEPORT_ARRAY_SIZE];
 static int reuseport_array = -1, outer_map = -1;
+static enum bpf_map_type inner_map_type;
 static int select_by_skb_data_prog;
 static int saved_tcp_syncookie = -1;
 static struct bpf_object *obj;

@@ -63,13 +64,15 @@ static union sa46 {
 	} \
 })
 
-static int create_maps(void)
+static int create_maps(enum bpf_map_type inner_type)
 {
 	struct bpf_create_map_attr attr = {};
 
+	inner_map_type = inner_type;
+
 	/* Creating reuseport_array */
 	attr.name = "reuseport_array";
-	attr.map_type = BPF_MAP_TYPE_REUSEPORT_SOCKARRAY;
+	attr.map_type = inner_type;
 	attr.key_size = sizeof(__u32);
 	attr.value_size = sizeof(__u32);
 	attr.max_entries = REUSEPORT_ARRAY_SIZE;

@@ -726,12 +729,36 @@ static void cleanup_per_test(bool no_inner_map)
 
 static void cleanup(void)
 {
-	if (outer_map != -1)
+	if (outer_map != -1) {
 		close(outer_map);
-	if (reuseport_array != -1)
+		outer_map = -1;
+	}
+
+	if (reuseport_array != -1) {
 		close(reuseport_array);
-	if (obj)
+		reuseport_array = -1;
+	}
+
+	if (obj) {
 		bpf_object__close(obj);
+		obj = NULL;
+	}
+
+	memset(expected_results, 0, sizeof(expected_results));
+}
+
+static const char *maptype_str(enum bpf_map_type type)
+{
+	switch (type) {
+	case BPF_MAP_TYPE_REUSEPORT_SOCKARRAY:
+		return "reuseport_sockarray";
+	case BPF_MAP_TYPE_SOCKMAP:
+		return "sockmap";
+	case BPF_MAP_TYPE_SOCKHASH:
+		return "sockhash";
+	default:
+		return "unknown";
+	}
 }
 
 static const char *family_str(sa_family_t family)
@@ -779,13 +806,21 @@ static void test_config(int sotype, sa_family_t family, bool inany)
 	const struct test *t;
 
 	for (t = tests; t < tests + ARRAY_SIZE(tests); t++) {
-		snprintf(s, sizeof(s), "%s/%s %s %s",
+		snprintf(s, sizeof(s), "%s %s/%s %s %s",
+			 maptype_str(inner_map_type),
 			 family_str(family), sotype_str(sotype),
 			 inany ? "INANY" : "LOOPBACK", t->name);
 		if (!test__start_subtest(s))
 			continue;
 
+		if (sotype == SOCK_DGRAM &&
+		    inner_map_type != BPF_MAP_TYPE_REUSEPORT_SOCKARRAY) {
+			/* SOCKMAP/SOCKHASH don't support UDP yet */
+			test__skip();
+			continue;
+		}
+
 		setup_per_test(sotype, family, inany, t->no_inner_map);
 		t->fn(sotype, family);
 		cleanup_per_test(t->no_inner_map);

@@ -814,13 +849,20 @@ static void test_all(void)
 		test_config(c->sotype, c->family, c->inany);
 }
 
-void test_select_reuseport(void)
+void test_map_type(enum bpf_map_type mt)
 {
-	if (create_maps())
+	if (create_maps(mt))
 		goto out;
 	if (prepare_bpf_obj())
 		goto out;
 
+	test_all();
+out:
+	cleanup();
+}
+
+void test_select_reuseport(void)
+{
 	saved_tcp_fo = read_int_sysctl(TCP_FO_SYSCTL);
 	saved_tcp_syncookie = read_int_sysctl(TCP_SYNCOOKIE_SYSCTL);
 	if (saved_tcp_syncookie < 0 || saved_tcp_syncookie < 0)

@@ -831,8 +873,9 @@ void test_select_reuseport(void)
 	if (disable_syncookie())
 		goto out;
 
-	test_all();
+	test_map_type(BPF_MAP_TYPE_REUSEPORT_SOCKARRAY);
+	test_map_type(BPF_MAP_TYPE_SOCKMAP);
+	test_map_type(BPF_MAP_TYPE_SOCKHASH);
 out:
-	cleanup();
 	restore_sysctls();
 }
// SPDX-License-Identifier: GPL-2.0
// Copyright (c) 2020 Cloudflare

#include <errno.h>
#include <stdbool.h>
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_SOCKMAP);
	__uint(max_entries, 2);
	__type(key, __u32);
	__type(value, __u64);
} sock_map SEC(".maps");

struct {
	__uint(type, BPF_MAP_TYPE_SOCKHASH);
	__uint(max_entries, 2);
	__type(key, __u32);
	__type(value, __u64);
} sock_hash SEC(".maps");

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 2);
	__type(key, int);
	__type(value, unsigned int);
} verdict_map SEC(".maps");

static volatile bool test_sockmap; /* toggled by user-space */

SEC("sk_skb/stream_parser")
int prog_skb_parser(struct __sk_buff *skb)
{
	return skb->len;
}

SEC("sk_skb/stream_verdict")
int prog_skb_verdict(struct __sk_buff *skb)
{
	unsigned int *count;
	__u32 zero = 0;
	int verdict;

	if (test_sockmap)
		verdict = bpf_sk_redirect_map(skb, &sock_map, zero, 0);
	else
		verdict = bpf_sk_redirect_hash(skb, &sock_hash, &zero, 0);

	count = bpf_map_lookup_elem(&verdict_map, &verdict);
	if (count)
		(*count)++;

	return verdict;
}

SEC("sk_msg")
int prog_msg_verdict(struct sk_msg_md *msg)
{
	unsigned int *count;
	__u32 zero = 0;
	int verdict;

	if (test_sockmap)
		verdict = bpf_msg_redirect_map(msg, &sock_map, zero, 0);
	else
		verdict = bpf_msg_redirect_hash(msg, &sock_hash, &zero, 0);

	count = bpf_map_lookup_elem(&verdict_map, &verdict);
	if (count)
		(*count)++;

	return verdict;
}

SEC("sk_reuseport")
int prog_reuseport(struct sk_reuseport_md *reuse)
{
	unsigned int *count;
	int err, verdict;
	__u32 zero = 0;

	if (test_sockmap)
		err = bpf_sk_select_reuseport(reuse, &sock_map, &zero, 0);
	else
		err = bpf_sk_select_reuseport(reuse, &sock_hash, &zero, 0);
	verdict = err ? SK_DROP : SK_PASS;

	count = bpf_map_lookup_elem(&verdict_map, &verdict);
	if (count)
		(*count)++;

	return verdict;
}

int _version SEC("version") = 1;
char _license[] SEC("license") = "GPL";
@@ -756,11 +756,7 @@ static void test_sockmap(unsigned int tasks, void *data)
 	/* Test update without programs */
 	for (i = 0; i < 6; i++) {
 		err = bpf_map_update_elem(fd, &i, &sfd[i], BPF_ANY);
-		if (i < 2 && !err) {
-			printf("Allowed update sockmap '%i:%i' not in ESTABLISHED\n",
-			       i, sfd[i]);
-			goto out_sockmap;
-		} else if (i >= 2 && err) {
+		if (err) {
 			printf("Failed noprog update sockmap '%i:%i'\n",
 			       i, sfd[i]);
 			goto out_sockmap;