Commit cc3baecb authored by David S. Miller

Merge tag 'rxrpc-rewrite-20160706' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs

David Howells says:

====================
rxrpc: Improve conn/call lookup and fix call number generation [ver #3]

I've fixed a couple of patch descriptions and excised the patch that
duplicated the connections list for reconsideration at a later date.

For reference, the excised patch is sitting on the rxrpc-experimental
branch of my git tree, based on top of the rxrpc-rewrite branch.  Diffing
it against yesterday's tag shows no differences.

Would you prefer the patch set to be emailed afresh instead of a git-pull
request?

David
---
Here's the next part of the AF_RXRPC rewrite.  The two main purposes of
this set are to fix the call number handling and to make use of RCU when
looking up the connection or call to pass a received packet to.

Important changes in this set include:

 (1) Avoidance of placing stack data into SG lists in rxkad so that kernel
     stacks can become vmalloc'd (Herbert Xu).

 (2) Calls cease pinning the connection they used as soon as possible,
     which allows the connection to be discarded sooner and allows the call
     channel on that connection to be reused earlier.

 (3) Make each call channel on a connection have a separate and independent
     call number space rather than having a shared number space for the
     connection.  Call numbers should increment monotonically per channel
     on the client, and the server should ignore a call with a lower call
     number for that channel than the latest it has seen.  The RESPONSE
     packet sets the minimum values of each call ID counter on a
     connection.
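
     To illustrate the check this implies on the service side, here is a
     minimal sketch (the field names follow the rxrpc_channel struct added
     in the diff below; the helper itself is hypothetical):

	static bool channel_may_accept_call(struct rxrpc_channel *chan,
					    u32 call_id)
	{
		/* Anything at or below the highest call number already seen
		 * on this channel is a duplicate or stale retransmission.
		 */
		if (call_id <= chan->call_counter)
			return false;
		chan->call_counter = call_id;
		return true;
	}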

 (4) Look up calls by indexing the channel array on a connection rather
     than by keeping calls in an rbtree on that connection.  Also look up
     calls using the channel array rather than using a hashtable.

     The call hashtable can then be removed.
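
     With the channel array, lookup on the receive path reduces to something
     like the following sketch (modelled on the rxrpc_incoming_call() changes
     in the diff below):

	chan = sp->hdr.cid & RXRPC_CHANNELMASK;
	call = rcu_dereference(conn->channels[chan].call);
	if (call && call->call_id == sp->hdr.callNumber)
		/* deliver the packet to this call */;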

 (5) Call terminal statuses are cached in the channel array for the last
     call.  It is assumed that if we, the server, have seen call N, then the
     client no longer cares about call N-1 on the same channel.

     This will allow retransmission of the terminal status in future
     without the need to keep the rxrpc_call struct around.
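
     A hypothetical sketch of how the cached state might then be used to
     answer a retransmission for a completed call (rxrpc_resend_final_ack()
     is an invented name; last_call and last_result are the fields added to
     struct rxrpc_channel in the diff below):

	if (call_id == chan->last_call) {
		/* The rxrpc_call is gone; reply from the cached terminal
		 * state (0 for the final ACK, otherwise an abort code).
		 */
		rxrpc_resend_final_ack(conn, chan->last_call,
				       chan->last_result);
	}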

 (6) Peer lookups are moved out of common connection handling code and into
     service connection handling code as client connections (a) must point
     to a peer before they can be used and (b) are looked up by a
     machine-unique connection ID directly, so we only need to look up the
     peer first if we're going to deal with a service call.

 (7) The reference count on a connection is held elevated by 1 whilst it is
     alive (ie. idle unused connections have a refcount of 1).  The reaper
     will attempt to change the refcount from 1 to 0 and will skip the
     connection if this cannot be done, whilst lookups only increment the
     refcount if it's non-zero.

     This makes the implementation of RCU lookups easier as we don't have
     to get a ref on the connection or a lock on the connection list to
     prevent a connection being reaped whilst we're contemplating queueing
     a packet that initiates a new service call upon it.

     If we need to get a connection, but there's a dead connection in the
     tree, we use rb_replace_node() to replace the dead one with a new one.
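
     The lookup side of this scheme is the rxrpc_get_connection_maybe()
     helper added in the diff below:

	static inline
	struct rxrpc_connection *rxrpc_get_connection_maybe(struct rxrpc_connection *conn)
	{
		return atomic_inc_not_zero(&conn->usage) ? conn : NULL;
	}

     and the reaper's side can be sketched as follows (illustrative, not the
     exact reaper code):

	if (atomic_cmpxchg(&conn->usage, 1, 0) != 1)
		continue;	/* Still in use; try again next pass. */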

 (8) Use a seqlock to validate the walk over the service connection rbtree
     attached to a peer when it's being walked in RCU mode.
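
     On the read side, that pairs with something like the following sketch
     (read_seqbegin()/read_seqretry() are the standard seqlock primitives;
     rxrpc_walk_service_conns() is an invented name standing in for the
     actual RCU tree walk):

	unsigned int seq;
	struct rxrpc_connection *conn;

	do {
		seq = read_seqbegin(&peer->service_conn_lock);
		conn = rxrpc_walk_service_conns(peer, skb);
	} while (read_seqretry(&peer->service_conn_lock, seq));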

 (9) Make the incoming call/connection packet handling code use RCU mode
     and locks and make it only take a reference if the call/connection
     gets queued on a workqueue.

The intention is that the next set will introduce the connection lifetime
management and capacity limits to prevent clients from overloading the
server.

There are some fixes too:

 (1) Verifying that a packet coming in to a client connection came from the
     expected source.

 (2) Fix handling of connection failure in client call creation: we didn't
     reinitialise the list linkage block, so a second attempt to unlink the
     failed connection would oops; we also didn't set the state correctly,
     which caused an assertion failure.

 (3) New service calls were being added to the socket's accept queue under
     the wrong lock.

Changes:

 (V2) In rxrpc_find_service_conn_rcu() initialised the sequence number to 0.

      Fixed the RCU handling in conn_service.c by introducing and using
      rb_replace_node_rcu() as an RCU-safe alternative in
      rxrpc_publish_service_conn().

      Modified and used rcu_dereference_raw() to avoid RCU sparse warnings
      in rxrpc_find_service_conn_rcu().

      Added in some missing RCU dereference wrappers.  It seems to be
      necessary to turn on CONFIG_PROVE_RCU_REPEATEDLY as well as
      CONFIG_SPARSE_RCU_POINTER to get the static __rcu annotation checking
      to happen.

      Fixed some other sparse warnings, including a missing ntohs() in
      jumbo packet processing.

 (V3) Fixed some commit descriptions.

      Excised the patch that duplicated the connection list to separate out
      the procfs list for reconsideration at a later date.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
parents 99a50bb1 d440a1ce
@@ -76,6 +76,8 @@ extern struct rb_node *rb_next_postorder(const struct rb_node *);
 /* Fast replacement of a single node without remove/rebalance/add/rebalance */
 extern void rb_replace_node(struct rb_node *victim, struct rb_node *new,
			    struct rb_root *root);
+extern void rb_replace_node_rcu(struct rb_node *victim, struct rb_node *new,
+				struct rb_root *root);

 static inline void rb_link_node(struct rb_node *node, struct rb_node *parent,
				struct rb_node **rb_link)
...
@@ -130,6 +130,19 @@ __rb_change_child(struct rb_node *old, struct rb_node *new,
		WRITE_ONCE(root->rb_node, new);
 }

+static inline void
+__rb_change_child_rcu(struct rb_node *old, struct rb_node *new,
+		      struct rb_node *parent, struct rb_root *root)
+{
+	if (parent) {
+		if (parent->rb_left == old)
+			rcu_assign_pointer(parent->rb_left, new);
+		else
+			rcu_assign_pointer(parent->rb_right, new);
+	} else
+		rcu_assign_pointer(root->rb_node, new);
+}
+
 extern void __rb_erase_color(struct rb_node *parent, struct rb_root *root,
	void (*augment_rotate)(struct rb_node *old, struct rb_node *new));
...
@@ -611,6 +611,12 @@ static inline void rcu_preempt_sleep_check(void)
	rcu_dereference_sparse(p, space); \
	((typeof(*p) __force __kernel *)(p)); \
 })
+#define rcu_dereference_raw(p) \
+({ \
+	/* Dependency order vs. p above. */ \
+	typeof(p) ________p1 = lockless_dereference(p); \
+	((typeof(*p) __force __kernel *)(________p1)); \
+})

 /**
  * RCU_INITIALIZER() - statically initialize an RCU-protected global variable
@@ -729,8 +735,6 @@ static inline void rcu_preempt_sleep_check(void)
	__rcu_dereference_check((p), (c) || rcu_read_lock_sched_held(), \
				__rcu)

-#define rcu_dereference_raw(p) rcu_dereference_check(p, 1) /*@@@ needed? @@@*/
-
 /*
  * The tracing infrastructure traces RCU (we want that), but unfortunately
  * some of the RCU checks causes tracing to lock up the system.
...
@@ -539,17 +539,39 @@ void rb_replace_node(struct rb_node *victim, struct rb_node *new,
 {
	struct rb_node *parent = rb_parent(victim);

+	/* Copy the pointers/colour from the victim to the replacement */
+	*new = *victim;
+
	/* Set the surrounding nodes to point to the replacement */
-	__rb_change_child(victim, new, parent, root);
	if (victim->rb_left)
		rb_set_parent(victim->rb_left, new);
	if (victim->rb_right)
		rb_set_parent(victim->rb_right, new);
+	__rb_change_child(victim, new, parent, root);
+}
+EXPORT_SYMBOL(rb_replace_node);
+
+void rb_replace_node_rcu(struct rb_node *victim, struct rb_node *new,
+			 struct rb_root *root)
+{
+	struct rb_node *parent = rb_parent(victim);

	/* Copy the pointers/colour from the victim to the replacement */
	*new = *victim;
+
+	/* Set the surrounding nodes to point to the replacement */
+	if (victim->rb_left)
+		rb_set_parent(victim->rb_left, new);
+	if (victim->rb_right)
+		rb_set_parent(victim->rb_right, new);
+
+	/* Set the parent's pointer to the new node last after an RCU barrier
+	 * so that the pointers onwards are seen to be set correctly when doing
+	 * an RCU walk over the tree.
+	 */
+	__rb_change_child_rcu(victim, new, parent, root);
 }
-EXPORT_SYMBOL(rb_replace_node);
+EXPORT_SYMBOL(rb_replace_node_rcu);

 static struct rb_node *rb_left_deepest_node(const struct rb_node *node)
 {
...
@@ -10,6 +10,7 @@ af-rxrpc-y := \
	conn_client.o \
	conn_event.o \
	conn_object.o \
+	conn_service.o \
	input.o \
	insecure.o \
	key.o \
...
@@ -788,27 +788,7 @@ static void __exit af_rxrpc_exit(void)
	proto_unregister(&rxrpc_proto);
	rxrpc_destroy_all_calls();
	rxrpc_destroy_all_connections();
	ASSERTCMP(atomic_read(&rxrpc_n_skbs), ==, 0);
-
-	/* We need to flush the scheduled work twice because the local endpoint
-	 * records involve a work item in their destruction as they can only be
-	 * destroyed from process context.  However, a connection may have a
-	 * work item outstanding - and this will pin the local endpoint record
-	 * until the connection goes away.
-	 *
-	 * Peers don't pin locals and calls pin sockets - which prevents the
-	 * module from being unloaded - so we should only need two flushes.
-	 */
-	_debug("flush scheduled work");
-	flush_workqueue(rxrpc_workqueue);
-	_debug("flush scheduled work 2");
-	flush_workqueue(rxrpc_workqueue);
-	_debug("synchronise RCU");
-	rcu_barrier();
-	_debug("destroy locals");
-	ASSERT(idr_is_empty(&rxrpc_client_conn_ids));
-	idr_destroy(&rxrpc_client_conn_ids);
-
	rxrpc_destroy_all_locals();

	remove_proc_entry("rxrpc_conns", init_net.proc_net);
...
@@ -10,6 +10,7 @@
  */

 #include <linux/atomic.h>
+#include <linux/seqlock.h>
 #include <net/sock.h>
 #include <net/af_rxrpc.h>
 #include <rxrpc/packet.h>
@@ -35,7 +36,6 @@ struct rxrpc_crypt {
	queue_delayed_work(rxrpc_workqueue, (WS), (D))

 #define rxrpc_queue_call(CALL)	rxrpc_queue_work(&(CALL)->processor)
-#define rxrpc_queue_conn(CONN)	rxrpc_queue_work(&(CONN)->processor)

 struct rxrpc_connection;
@@ -141,17 +141,16 @@ struct rxrpc_security {
	int (*init_connection_security)(struct rxrpc_connection *);

	/* prime a connection's packet security */
-	void (*prime_packet_security)(struct rxrpc_connection *);
+	int (*prime_packet_security)(struct rxrpc_connection *);

	/* impose security on a packet */
-	int (*secure_packet)(const struct rxrpc_call *,
+	int (*secure_packet)(struct rxrpc_call *,
			     struct sk_buff *,
			     size_t,
			     void *);

	/* verify the security on a received packet */
-	int (*verify_packet)(const struct rxrpc_call *, struct sk_buff *,
-			     u32 *);
+	int (*verify_packet)(struct rxrpc_call *, struct sk_buff *, u32 *);

	/* issue a challenge */
	int (*issue_challenge)(struct rxrpc_connection *);
@@ -208,7 +207,7 @@ struct rxrpc_peer {
	struct hlist_head	error_targets;	/* targets for net error distribution */
	struct work_struct	error_distributor;
	struct rb_root		service_conns;	/* Service connections */
-	rwlock_t		conn_lock;
+	seqlock_t		service_conn_lock;
	spinlock_t		lock;		/* access lock */
	unsigned int		if_mtu;		/* interface MTU for this peer */
	unsigned int		mtu;		/* network MTU for this peer */
@@ -231,18 +230,12 @@ struct rxrpc_peer {
  * Keys for matching a connection.
  */
 struct rxrpc_conn_proto {
-	unsigned long		hash_key;
-	struct rxrpc_local	*local;		/* Representation of local endpoint */
-	u32			epoch;		/* epoch of this connection */
-	u32			cid;		/* connection ID */
-	u8			in_clientflag;	/* RXRPC_CLIENT_INITIATED if we are server */
-	u8			addr_size;	/* Size of the address */
-	sa_family_t		family;		/* Transport protocol */
-	__be16			port;		/* Peer UDP/UDP6 port */
-	union {					/* Peer address */
-		struct in_addr	ipv4_addr;
-		struct in6_addr	ipv6_addr;
-		u32		raw_addr[0];
+	union {
+		struct {
+			u32	epoch;		/* epoch of this connection */
+			u32	cid;		/* connection ID */
+		};
+		u64		index_key;
	};
 };
@@ -255,6 +248,37 @@ struct rxrpc_conn_parameters {
	u32			security_level;	/* Security level selected */
 };

+/*
+ * Bits in the connection flags.
+ */
+enum rxrpc_conn_flag {
+	RXRPC_CONN_HAS_IDR,		/* Has a client conn ID assigned */
+	RXRPC_CONN_IN_SERVICE_CONNS,	/* Conn is in peer->service_conns */
+	RXRPC_CONN_IN_CLIENT_CONNS,	/* Conn is in local->client_conns */
+};
+
+/*
+ * Events that can be raised upon a connection.
+ */
+enum rxrpc_conn_event {
+	RXRPC_CONN_EV_CHALLENGE,	/* Send challenge packet */
+};
+
+/*
+ * The connection protocol state.
+ */
+enum rxrpc_conn_proto_state {
+	RXRPC_CONN_UNUSED,		/* Connection not yet attempted */
+	RXRPC_CONN_CLIENT,		/* Client connection */
+	RXRPC_CONN_SERVICE_UNSECURED,	/* Service unsecured connection */
+	RXRPC_CONN_SERVICE_CHALLENGING,	/* Service challenging for security */
+	RXRPC_CONN_SERVICE,		/* Service secured connection */
+	RXRPC_CONN_REMOTELY_ABORTED,	/* Conn aborted by peer */
+	RXRPC_CONN_LOCALLY_ABORTED,	/* Conn aborted locally */
+	RXRPC_CONN_NETWORK_ERROR,	/* Conn terminated by network error */
+	RXRPC_CONN__NR_STATES
+};
+
 /*
  * RxRPC connection definition
  * - matched by { local, peer, epoch, conn_id, direction }
@@ -265,44 +289,38 @@ struct rxrpc_connection {
	struct rxrpc_conn_parameters params;

	spinlock_t		channel_lock;
-	struct rxrpc_call	*channels[RXRPC_MAXCALLS]; /* active calls */
+
+	struct rxrpc_channel {
+		struct rxrpc_call __rcu	*call;		/* Active call */
+		u32			call_id;	/* ID of current call */
+		u32			call_counter;	/* Call ID counter */
+		u32			last_call;	/* ID of last call */
+		u32			last_result;	/* Result of last call (0/abort) */
+	} channels[RXRPC_MAXCALLS];
+
	wait_queue_head_t	channel_wq;	/* queue to wait for channel to become available */

+	struct rcu_head		rcu;
	struct work_struct	processor;	/* connection event processor */
	union {
		struct rb_node	client_node;	/* Node in local->client_conns */
		struct rb_node	service_node;	/* Node in peer->service_conns */
	};
	struct list_head	link;		/* link in master connection list */
-	struct rb_root		calls;		/* calls on this connection */
	struct sk_buff_head	rx_queue;	/* received conn-level packets */
	const struct rxrpc_security *security;	/* applied security module */
	struct key		*server_key;	/* security for this service */
	struct crypto_skcipher	*cipher;	/* encryption handle */
	struct rxrpc_crypt	csum_iv;	/* packet checksum base */
	unsigned long		flags;
-#define RXRPC_CONN_HAS_IDR	0		/* - Has a client conn ID assigned */
	unsigned long		events;
-#define RXRPC_CONN_CHALLENGE	0		/* send challenge packet */
	unsigned long		put_time;	/* Time at which last put */
-	rwlock_t		lock;		/* access lock */
	spinlock_t		state_lock;	/* state-change lock */
	atomic_t		usage;
-	enum {					/* current state of connection */
-		RXRPC_CONN_UNUSED,		/* - connection not yet attempted */
-		RXRPC_CONN_CLIENT,		/* - client connection */
-		RXRPC_CONN_SERVER_UNSECURED,	/* - server unsecured connection */
-		RXRPC_CONN_SERVER_CHALLENGING,	/* - server challenging for security */
-		RXRPC_CONN_SERVER,		/* - server secured connection */
-		RXRPC_CONN_REMOTELY_ABORTED,	/* - conn aborted by peer */
-		RXRPC_CONN_LOCALLY_ABORTED,	/* - conn aborted locally */
-		RXRPC_CONN_NETWORK_ERROR,	/* - conn terminated by network error */
-	} state;
+	enum rxrpc_conn_proto_state state : 8;	/* current state of connection */
	u32			local_abort;	/* local abort code */
	u32			remote_abort;	/* remote abort code */
	int			error;		/* local error incurred */
	int			debug_id;	/* debug ID for printks */
-	unsigned int		call_counter;	/* call ID counter */
	atomic_t		serial;		/* packet serial number counter */
	atomic_t		hi_serial;	/* highest serial number received */
	atomic_t		avail_chans;	/* number of channels available */
@@ -382,6 +400,7 @@ enum rxrpc_call_state {
  * - matched by { connection, call_id }
  */
 struct rxrpc_call {
+	struct rcu_head		rcu;
	struct rxrpc_connection	*conn;		/* connection carrying call */
	struct rxrpc_sock	*socket;	/* socket responsible */
	struct timer_list	lifetimer;	/* lifetime remaining on call */
@@ -394,11 +413,11 @@ struct rxrpc_call {
	struct hlist_node	error_link;	/* link in error distribution list */
	struct list_head	accept_link;	/* calls awaiting acceptance */
	struct rb_node		sock_node;	/* node in socket call tree */
-	struct rb_node		conn_node;	/* node in connection call tree */
	struct sk_buff_head	rx_queue;	/* received packets */
	struct sk_buff_head	rx_oos_queue;	/* packets received out of sequence */
	struct sk_buff		*tx_pending;	/* Tx socket buffer being filled */
	wait_queue_head_t	tx_waitq;	/* wait for Tx window space to become available */
+	__be32			crypto_buf[2];	/* Temporary packet crypto buffer */
	unsigned long		user_call_ID;	/* user-defined call ID */
	unsigned long		creation_jif;	/* time of call creation */
	unsigned long		flags;
@@ -442,19 +461,12 @@ struct rxrpc_call {
 #define RXRPC_ACKR_WINDOW_ASZ DIV_ROUND_UP(RXRPC_MAXACKS, BITS_PER_LONG)
	unsigned long		ackr_window[RXRPC_ACKR_WINDOW_ASZ + 1];

-	struct hlist_node	hash_node;
-	unsigned long		hash_key;	/* Full hash key */
-	u8			in_clientflag;	/* Copy of conn->in_clientflag for hashing */
-	struct rxrpc_local	*local;		/* Local endpoint. Used for hashing. */
-	sa_family_t		family;		/* Frame protocol */
+	u8			in_clientflag;	/* Copy of conn->in_clientflag */
+	struct rxrpc_local	*local;		/* Local endpoint. */
	u32			call_id;	/* call ID on connection */
	u32			cid;		/* connection ID plus channel index */
	u32			epoch;		/* epoch of this connection */
	u16			service_id;	/* service ID */
-	union {					/* Peer IP address for hashing */
-		__be32	ipv4_addr;
-		__u8	ipv6_addr[16];	/* Anticipates eventual IPv6 support */
-	} peer_ip;
 };

 /*
@@ -502,8 +514,6 @@ extern struct kmem_cache *rxrpc_call_jar;
 extern struct list_head rxrpc_calls;
 extern rwlock_t rxrpc_call_lock;

-struct rxrpc_call *rxrpc_find_call_hash(struct rxrpc_host_header *,
-					void *, sa_family_t, const void *);
 struct rxrpc_call *rxrpc_find_call_by_user_ID(struct rxrpc_sock *, unsigned long);
 struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *,
					 struct rxrpc_conn_parameters *,
@@ -522,8 +532,10 @@ void __exit rxrpc_destroy_all_calls(void);
  */
 extern struct idr rxrpc_client_conn_ids;

-int rxrpc_get_client_connection_id(struct rxrpc_connection *, gfp_t);
-void rxrpc_put_client_connection_id(struct rxrpc_connection *);
+void rxrpc_destroy_client_conn_ids(void);
+int rxrpc_connect_call(struct rxrpc_call *, struct rxrpc_conn_parameters *,
+		       struct sockaddr_rxrpc *, gfp_t);
+void rxrpc_unpublish_client_conn(struct rxrpc_connection *);

 /*
  * conn_event.c
@@ -539,17 +551,14 @@ extern unsigned int rxrpc_connection_expiry;
 extern struct list_head rxrpc_connections;
 extern rwlock_t rxrpc_connection_lock;

-int rxrpc_connect_call(struct rxrpc_call *, struct rxrpc_conn_parameters *,
-		       struct sockaddr_rxrpc *, gfp_t);
-struct rxrpc_connection *rxrpc_find_connection(struct rxrpc_local *,
-					       struct rxrpc_peer *,
-					       struct sk_buff *);
+int rxrpc_extract_addr_from_skb(struct sockaddr_rxrpc *, struct sk_buff *);
+struct rxrpc_connection *rxrpc_alloc_connection(gfp_t);
+struct rxrpc_connection *rxrpc_find_connection_rcu(struct rxrpc_local *,
+						   struct sk_buff *);
+void __rxrpc_disconnect_call(struct rxrpc_call *);
 void rxrpc_disconnect_call(struct rxrpc_call *);
 void rxrpc_put_connection(struct rxrpc_connection *);
 void __exit rxrpc_destroy_all_connections(void);
-struct rxrpc_connection *rxrpc_incoming_connection(struct rxrpc_local *,
-						   struct rxrpc_peer *,
-						   struct sk_buff *);

 static inline bool rxrpc_conn_is_client(const struct rxrpc_connection *conn)
 {
@@ -558,7 +567,7 @@ static inline bool rxrpc_conn_is_client(const struct rxrpc_connection *conn)

 static inline bool rxrpc_conn_is_service(const struct rxrpc_connection *conn)
 {
-	return conn->proto.in_clientflag;
+	return !rxrpc_conn_is_client(conn);
 }

 static inline void rxrpc_get_connection(struct rxrpc_connection *conn)
@@ -566,6 +575,31 @@ static inline void rxrpc_get_connection(struct rxrpc_connection *conn)
	atomic_inc(&conn->usage);
 }

+static inline
+struct rxrpc_connection *rxrpc_get_connection_maybe(struct rxrpc_connection *conn)
+{
+	return atomic_inc_not_zero(&conn->usage) ? conn : NULL;
+}
+
+static inline bool rxrpc_queue_conn(struct rxrpc_connection *conn)
+{
+	if (!rxrpc_get_connection_maybe(conn))
+		return false;
+	if (!rxrpc_queue_work(&conn->processor))
+		rxrpc_put_connection(conn);
+	return true;
+}
+
+/*
+ * conn_service.c
+ */
+struct rxrpc_connection *rxrpc_find_service_conn_rcu(struct rxrpc_peer *,
+						     struct sk_buff *);
+struct rxrpc_connection *rxrpc_incoming_connection(struct rxrpc_local *,
+						   struct sockaddr_rxrpc *,
+						   struct sk_buff *);
+void rxrpc_unpublish_service_conn(struct rxrpc_connection *);
+
 /*
  * input.c
  */
@@ -618,6 +652,11 @@ static inline void rxrpc_put_local(struct rxrpc_local *local)
	__rxrpc_put_local(local);
 }

+static inline void rxrpc_queue_local(struct rxrpc_local *local)
+{
+	rxrpc_queue_work(&local->processor);
+}
+
 /*
  * misc.c
  */
@@ -722,8 +761,7 @@ static inline void rxrpc_sysctl_exit(void) {}
 /*
  * utils.c
  */
-void rxrpc_get_addr_from_skb(struct rxrpc_local *, const struct sk_buff *,
-			     struct sockaddr_rxrpc *);
+int rxrpc_extract_addr_from_skb(struct sockaddr_rxrpc *, struct sk_buff *);

 /*
  * debug tracing
...
@@ -75,7 +75,6 @@ static int rxrpc_accept_incoming_call(struct rxrpc_local *local,
 {
	struct rxrpc_connection *conn;
	struct rxrpc_skb_priv *sp, *nsp;
-	struct rxrpc_peer *peer;
	struct rxrpc_call *call;
	struct sk_buff *notification;
	int ret;
@@ -94,15 +93,7 @@ static int rxrpc_accept_incoming_call(struct rxrpc_local *local,
	rxrpc_new_skb(notification);
	notification->mark = RXRPC_SKB_MARK_NEW_CALL;

-	peer = rxrpc_lookup_peer(local, srx, GFP_NOIO);
-	if (!peer) {
-		_debug("no peer");
-		ret = -EBUSY;
-		goto error;
-	}
-
-	conn = rxrpc_incoming_connection(local, peer, skb);
-	rxrpc_put_peer(peer);
+	conn = rxrpc_incoming_connection(local, srx, skb);
	if (IS_ERR(conn)) {
		_debug("no conn");
		ret = PTR_ERR(conn);
@@ -128,12 +119,11 @@ static int rxrpc_accept_incoming_call(struct rxrpc_local *local,
	spin_lock(&call->conn->state_lock);
	if (sp->hdr.securityIndex > 0 &&
-	    call->conn->state == RXRPC_CONN_SERVER_UNSECURED) {
+	    call->conn->state == RXRPC_CONN_SERVICE_UNSECURED) {
		_debug("await conn sec");
		list_add_tail(&call->accept_link, &rx->secureq);
-		call->conn->state = RXRPC_CONN_SERVER_CHALLENGING;
-		rxrpc_get_connection(call->conn);
-		set_bit(RXRPC_CONN_CHALLENGE, &call->conn->events);
+		call->conn->state = RXRPC_CONN_SERVICE_CHALLENGING;
+		set_bit(RXRPC_CONN_EV_CHALLENGE, &call->conn->events);
		rxrpc_queue_conn(call->conn);
	} else {
		_debug("conn ready");
@@ -227,20 +217,8 @@ void rxrpc_accept_incoming_calls(struct rxrpc_local *local)
	whdr._rsvd	= 0;
	whdr.serviceId	= htons(sp->hdr.serviceId);

-	/* determine the remote address */
-	memset(&srx, 0, sizeof(srx));
-	srx.srx_family = AF_RXRPC;
-	srx.transport.family = local->srx.transport.family;
-	srx.transport_type = local->srx.transport_type;
-	switch (srx.transport.family) {
-	case AF_INET:
-		srx.transport_len = sizeof(struct sockaddr_in);
-		srx.transport.sin.sin_port = udp_hdr(skb)->source;
-		srx.transport.sin.sin_addr.s_addr = ip_hdr(skb)->saddr;
-		break;
-	default:
-		goto busy;
-	}
+	if (rxrpc_extract_addr_from_skb(&srx, skb) < 0)
+		goto drop;

	/* get the socket providing the service */
	read_lock_bh(&local->services_lock);
@@ -286,6 +264,10 @@ void rxrpc_accept_incoming_calls(struct rxrpc_local *local)
	rxrpc_free_skb(skb);
	return;

+drop:
+	rxrpc_free_skb(skb);
+	return;
+
 invalid_service:
	skb->priority = RX_INVALID_OPERATION;
	rxrpc_reject_packet(local, skb);
...
@@ -858,11 +858,6 @@ void rxrpc_process_call(struct work_struct *work)
	iov[0].iov_len	= sizeof(whdr);

	/* deal with events of a final nature */
-	if (test_bit(RXRPC_CALL_EV_RELEASE, &call->events)) {
-		rxrpc_release_call(call);
-		clear_bit(RXRPC_CALL_EV_RELEASE, &call->events);
-	}
-
	if (test_bit(RXRPC_CALL_EV_RCVD_ERROR, &call->events)) {
		enum rxrpc_skb_mark mark;
		int error;
@@ -1094,7 +1089,7 @@ void rxrpc_process_call(struct work_struct *work)

	if (call->state == RXRPC_CALL_SERVER_SECURING) {
		_debug("securing");
-		write_lock(&call->conn->lock);
+		write_lock(&call->socket->call_lock);
		if (!test_bit(RXRPC_CALL_RELEASED, &call->flags) &&
		    !test_bit(RXRPC_CALL_EV_RELEASE, &call->events)) {
			_debug("not released");
@@ -1102,7 +1097,7 @@ void rxrpc_process_call(struct work_struct *work)
			list_move_tail(&call->accept_link,
				       &call->socket->acceptq);
		}
-		write_unlock(&call->conn->lock);
+		write_unlock(&call->socket->call_lock);
		read_lock(&call->state_lock);
		if (call->state < RXRPC_CALL_COMPLETE)
			set_bit(RXRPC_CALL_EV_POST_ACCEPT, &call->events);
@@ -1144,6 +1139,11 @@ void rxrpc_process_call(struct work_struct *work)
		goto maybe_reschedule;
	}

+	if (test_bit(RXRPC_CALL_EV_RELEASE, &call->events)) {
+		rxrpc_release_call(call);
+		clear_bit(RXRPC_CALL_EV_RELEASE, &call->events);
+	}
+
	/* other events may have been raised since we started checking */
	goto maybe_reschedule;
...
@@ -14,7 +14,6 @@
 #include <linux/slab.h>
 #include <linux/module.h>
 #include <linux/circ_buf.h>
-#include <linux/hashtable.h>
 #include <linux/spinlock_types.h>
 #include <net/sock.h>
 #include <net/af_rxrpc.h>
@@ -61,142 +60,6 @@ static void rxrpc_dead_call_expired(unsigned long _call);
 static void rxrpc_ack_time_expired(unsigned long _call);
 static void rxrpc_resend_time_expired(unsigned long _call);

-static DEFINE_SPINLOCK(rxrpc_call_hash_lock);
-static DEFINE_HASHTABLE(rxrpc_call_hash, 10);
-
-/*
- * Hash function for rxrpc_call_hash
- */
-static unsigned long rxrpc_call_hashfunc(
-	u8		in_clientflag,
-	u32		cid,
-	u32		call_id,
-	u32		epoch,
-	u16		service_id,
-	sa_family_t	family,
-	void		*localptr,
-	unsigned int	addr_size,
-	const u8	*peer_addr)
-{
-	const u16 *p;
-	unsigned int i;
-	unsigned long key;
-
-	_enter("");
-
-	key = (unsigned long)localptr;
-	/* We just want to add up the __be32 values, so forcing the
-	 * cast should be okay.
-	 */
-	key += epoch;
-	key += service_id;
-	key += call_id;
-	key += (cid & RXRPC_CIDMASK) >> RXRPC_CIDSHIFT;
-	key += cid & RXRPC_CHANNELMASK;
-	key += in_clientflag;
-	key += family;
-	/* Step through the peer address in 16-bit portions for speed */
-	for (i = 0, p = (const u16 *)peer_addr; i < addr_size >> 1; i++, p++)
-		key += *p;
-	_leave(" key = 0x%lx", key);
-	return key;
-}
-
-/*
- * Add a call to the hashtable
- */
-static void rxrpc_call_hash_add(struct rxrpc_call *call)
-{
-	unsigned long key;
-	unsigned int addr_size = 0;
-
-	_enter("");
-	switch (call->family) {
-	case AF_INET:
-		addr_size = sizeof(call->peer_ip.ipv4_addr);
-		break;
-	case AF_INET6:
-		addr_size = sizeof(call->peer_ip.ipv6_addr);
-		break;
-	default:
-		break;
-	}
-	key = rxrpc_call_hashfunc(call->in_clientflag, call->cid,
-				  call->call_id, call->epoch,
-				  call->service_id, call->family,
-				  call->conn->params.local, addr_size,
-				  call->peer_ip.ipv6_addr);
-	/* Store the full key in the call */
-	call->hash_key = key;
-	spin_lock(&rxrpc_call_hash_lock);
-	hash_add_rcu(rxrpc_call_hash, &call->hash_node, key);
-	spin_unlock(&rxrpc_call_hash_lock);
-	_leave("");
-}
-
-/*
- * Remove a call from the hashtable
- */
-static void rxrpc_call_hash_del(struct rxrpc_call *call)
-{
-	_enter("");
-	spin_lock(&rxrpc_call_hash_lock);
-	hash_del_rcu(&call->hash_node);
-	spin_unlock(&rxrpc_call_hash_lock);
-	_leave("");
-}
-
-/*
- * Find a call in the hashtable and return it, or NULL if it
- * isn't there.
- */
-struct rxrpc_call *rxrpc_find_call_hash(
-	struct rxrpc_host_header *hdr,
-	void *localptr,
-	sa_family_t family,
-	const void *peer_addr)
-{
-	unsigned long key;
-	unsigned int addr_size = 0;
-	struct rxrpc_call *call = NULL;
-	struct rxrpc_call *ret = NULL;
-	u8 in_clientflag = hdr->flags & RXRPC_CLIENT_INITIATED;
-
-	_enter("");
-	switch (family) {
-	case AF_INET:
-		addr_size = sizeof(call->peer_ip.ipv4_addr);
-		break;
-	case AF_INET6:
-		addr_size = sizeof(call->peer_ip.ipv6_addr);
-		break;
-	default:
-		break;
-	}
-
-	key = rxrpc_call_hashfunc(in_clientflag, hdr->cid, hdr->callNumber,
-				  hdr->epoch, hdr->serviceId,
-				  family, localptr, addr_size,
-				  peer_addr);
-	hash_for_each_possible_rcu(rxrpc_call_hash, call, hash_node, key) {
-		if (call->hash_key == key &&
-		    call->call_id == hdr->callNumber &&
-		    call->cid == hdr->cid &&
-		    call->in_clientflag == in_clientflag &&
-		    call->service_id == hdr->serviceId &&
-		    call->family == family &&
-		    call->local == localptr &&
-		    memcmp(call->peer_ip.ipv6_addr, peer_addr,
-			   addr_size) == 0 &&
-		    call->epoch == hdr->epoch) {
-			ret = call;
-			break;
-		}
-	}
-	_leave(" = %p", ret);
-	return ret;
-}
-
 /*
  * find an extant server call
  * - called in process context with IRQs enabled
@@ -305,20 +168,7 @@ static struct rxrpc_call *rxrpc_alloc_client_call(struct rxrpc_sock *rx,
	call->socket = rx;
	call->rx_data_post = 1;

-	/* Record copies of information for hashtable lookup */
-	call->family = rx->family;
	call->local = rx->local;
-	switch (call->family) {
-	case AF_INET:
-		call->peer_ip.ipv4_addr = srx->transport.sin.sin_addr.s_addr;
-		break;
-	case AF_INET6:
-		memcpy(call->peer_ip.ipv6_addr,
-		       srx->transport.sin6.sin6_addr.in6_u.u6_addr8,
-		       sizeof(call->peer_ip.ipv6_addr));
-		break;
-	}
	call->service_id = srx->srx_service;
	call->in_clientflag = 0;
@@ -345,9 +195,6 @@ static int rxrpc_begin_client_call(struct rxrpc_call *call,

	call->state = RXRPC_CALL_CLIENT_SEND_REQUEST;

-	/* Add the new call to the hashtable */
-	rxrpc_call_hash_add(call);
-
	spin_lock(&call->conn->params.peer->lock);
	hlist_add_head(&call->error_link, &call->conn->params.peer->error_targets);
	spin_unlock(&call->conn->params.peer->lock);
@@ -425,9 +272,10 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx,
	rxrpc_put_call(call);

	write_lock_bh(&rxrpc_call_lock);
-	list_del(&call->link);
+	list_del_init(&call->link);
	write_unlock_bh(&rxrpc_call_lock);

+	call->state = RXRPC_CALL_DEAD;
	rxrpc_put_call(call);
	_leave(" = %d", ret);
	return ERR_PTR(ret);
@@ -439,6 +287,7 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx,
  */
 found_user_ID_now_present:
	write_unlock(&rx->call_lock);
+	call->state = RXRPC_CALL_DEAD;
	rxrpc_put_call(call);
	_leave(" = -EEXIST [%p]", call);
	return ERR_PTR(-EEXIST);
@@ -454,8 +303,7 @@ struct rxrpc_call *rxrpc_incoming_call(struct rxrpc_sock *rx,
 {
	struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
	struct rxrpc_call *call, *candidate;
-	struct rb_node **p, *parent;
-	u32 call_id;
+	u32 call_id, chan;

	_enter(",%d", conn->debug_id);

@@ -465,20 +313,23 @@ struct rxrpc_call *rxrpc_incoming_call(struct rxrpc_sock *rx,
	if (!candidate)
		return ERR_PTR(-EBUSY);

+	chan = sp->hdr.cid & RXRPC_CHANNELMASK;
	candidate->socket	= rx;
	candidate->conn		= conn;
	candidate->cid		= sp->hdr.cid;
	candidate->call_id	= sp->hdr.callNumber;
-	candidate->channel	= sp->hdr.cid & RXRPC_CHANNELMASK;
+	candidate->channel	= chan;
	candidate->rx_data_post	= 0;
	candidate->state	= RXRPC_CALL_SERVER_ACCEPTING;
	if (conn->security_ix > 0)
		candidate->state = RXRPC_CALL_SERVER_SECURING;

-	write_lock_bh(&conn->lock);
+	spin_lock(&conn->channel_lock);

	/* set the channel for this call */
-	call = conn->channels[candidate->channel];
+	call = rcu_dereference_protected(conn->channels[chan].call,
+					 lockdep_is_held(&conn->channel_lock));
+
	_debug("channel[%u] is %p", candidate->channel, call);
	if (call && call->call_id == sp->hdr.callNumber) {
		/* already set; must've been a duplicate packet */
@@ -507,9 +358,9 @@ struct rxrpc_call *rxrpc_incoming_call(struct rxrpc_sock *rx,
		      call->debug_id, rxrpc_call_states[call->state]);

		if (call->state >= RXRPC_CALL_COMPLETE) {
-			conn->channels[call->channel] = NULL;
+			__rxrpc_disconnect_call(call);
		} else {
-			write_unlock_bh(&conn->lock);
+			spin_unlock(&conn->channel_lock);
			kmem_cache_free(rxrpc_call_jar, candidate);
			_leave(" = -EBUSY");
			return ERR_PTR(-EBUSY);
@@ -519,33 +370,22 @@ struct rxrpc_call *rxrpc_incoming_call(struct rxrpc_sock *rx,
	/* check the call number isn't duplicate */
	_debug("check dup");
	call_id = sp->hdr.callNumber;
-	p = &conn->calls.rb_node;
-	parent = NULL;
-	while (*p) {
-		parent = *p;
-		call = rb_entry(parent, struct rxrpc_call, conn_node);
-
-		/* The tree is sorted in order of the __be32 value without
-		 * turning it into host order.
-		 */
-		if (call_id < call->call_id)
-			p = &(*p)->rb_left;
-		else if (call_id > call->call_id)
-			p = &(*p)->rb_right;
-		else
-			goto old_call;
-	}
+
+	/* We just ignore calls prior to the current call ID.  Terminated calls
+	 * are handled via the connection.
+	 */
+	if (call_id <= conn->channels[chan].call_counter)
+		goto old_call; /* TODO: Just drop packet */

	/* make the call available */
	_debug("new call");
	call = candidate;
	candidate = NULL;
-	rb_link_node(&call->conn_node, parent, p);
-	rb_insert_color(&call->conn_node, &conn->calls);
-	conn->channels[call->channel] = call;
+	conn->channels[chan].call_counter = call_id;
+	rcu_assign_pointer(conn->channels[chan].call, call);
	sock_hold(&rx->sk);
	rxrpc_get_connection(conn);
-	write_unlock_bh(&conn->lock);
+	spin_unlock(&conn->channel_lock);

	spin_lock(&conn->params.peer->lock);
	hlist_add_head(&call->error_link, &conn->params.peer->error_targets);
@@ -555,27 +395,10 @@ struct rxrpc_call *rxrpc_incoming_call(struct rxrpc_sock *rx,
	list_add_tail(&call->link, &rxrpc_calls);
	write_unlock_bh(&rxrpc_call_lock);

-	/* Record copies of information for hashtable lookup */
-	call->family = rx->family;
	call->local = conn->params.local;
-	switch (call->family) {
-	case AF_INET:
-		call->peer_ip.ipv4_addr =
-			conn->params.peer->srx.transport.sin.sin_addr.s_addr;
-		break;
-	case AF_INET6:
-		memcpy(call->peer_ip.ipv6_addr,
-		       conn->params.peer->srx.transport.sin6.sin6_addr.in6_u.u6_addr8,
-		       sizeof(call->peer_ip.ipv6_addr));
-		break;
-	default:
-		break;
-	}
	call->epoch = conn->proto.epoch;
	call->service_id = conn->params.service_id;
-	call->in_clientflag = conn->proto.in_clientflag;
-
-	/* Add the new call to the hashtable */
-	rxrpc_call_hash_add(call);
+	call->in_clientflag = RXRPC_CLIENT_INITIATED;

	_net("CALL incoming %d on CONN %d", call->debug_id, call->conn->debug_id);
@@ -585,19 +408,19 @@ struct rxrpc_call *rxrpc_incoming_call(struct rxrpc_sock *rx,
	return call;

 extant_call:
-	write_unlock_bh(&conn->lock);
+	spin_unlock(&conn->channel_lock);
	kmem_cache_free(rxrpc_call_jar, candidate);
	_leave(" = %p {%d} [extant]", call, call ? call->debug_id : -1);
	return call;

 aborted_call:
-	write_unlock_bh(&conn->lock);
+	spin_unlock(&conn->channel_lock);
	kmem_cache_free(rxrpc_call_jar, candidate);
	_leave(" = -ECONNABORTED");
	return ERR_PTR(-ECONNABORTED);

 old_call:
-	write_unlock_bh(&conn->lock);
+	spin_unlock(&conn->channel_lock);
	kmem_cache_free(rxrpc_call_jar, candidate);
	_leave(" = -ECONNRESET [old]");
	return ERR_PTR(-ECONNRESET);
@@ -626,6 +449,10 @@ void rxrpc_release_call(struct rxrpc_call *call)
	 */
	_debug("RELEASE CALL %p (%d CONN %p)", call, call->debug_id, conn);

+	spin_lock(&conn->params.peer->lock);
+	hlist_del_init(&call->error_link);
+	spin_unlock(&conn->params.peer->lock);
+
	write_lock_bh(&rx->call_lock);
	if (!list_empty(&call->accept_link)) {
		_debug("unlinking once-pending call %p { e=%lx f=%lx }",
@@ -641,24 +468,17 @@ void rxrpc_release_call(struct rxrpc_call *call)
	write_unlock_bh(&rx->call_lock);

	/* free up the channel for reuse */
-	spin_lock(&conn->channel_lock);
-	write_lock_bh(&conn->lock);
-	write_lock(&call->state_lock);
-
-	rxrpc_disconnect_call(call);
-	spin_unlock(&conn->channel_lock);
+	write_lock_bh(&call->state_lock);

	if (call->state < RXRPC_CALL_COMPLETE &&
	    call->state != RXRPC_CALL_CLIENT_FINAL_ACK) {
		_debug("+++ ABORTING STATE %d +++\n", call->state);
		call->state = RXRPC_CALL_LOCALLY_ABORTED;
		call->local_abort = RX_CALL_DEAD;
-		set_bit(RXRPC_CALL_EV_ABORT, &call->events);
-		rxrpc_queue_call(call);
	}
-	write_unlock(&call->state_lock);
-	write_unlock_bh(&conn->lock);
+	write_unlock_bh(&call->state_lock);
+
+	rxrpc_disconnect_call(call);

	/* clean up the Rx queue */
	if (!skb_queue_empty(&call->rx_queue) ||
@@ -791,6 +611,17 @@ void __rxrpc_put_call(struct rxrpc_call *call)
	_leave("");
 }

+/*
+ * Final call destruction under RCU.
+ */
+static void rxrpc_rcu_destroy_call(struct rcu_head *rcu)
+{
+	struct rxrpc_call *call = container_of(rcu, struct rxrpc_call, rcu);
+
+	rxrpc_purge_queue(&call->rx_queue);
+	kmem_cache_free(rxrpc_call_jar, call);
+}
+
 /*
  * clean up a call
  */
@@ -815,19 +646,7 @@ static void rxrpc_cleanup_call(struct rxrpc_call *call)
		return;
	}

-	if (call->conn) {
-		spin_lock(&call->conn->params.peer->lock);
-		hlist_del_init(&call->error_link);
-		spin_unlock(&call->conn->params.peer->lock);
-
-		write_lock_bh(&call->conn->lock);
-		rb_erase(&call->conn_node, &call->conn->calls);
-		write_unlock_bh(&call->conn->lock);
-		rxrpc_put_connection(call->conn);
-	}
-
-	/* Remove the call from the hash */
-	rxrpc_call_hash_del(call);
+	ASSERTCMP(call->conn, ==, NULL);

	if (call->acks_window) {
		_debug("kill Tx window %d",
@@ -855,7 +674,7 @@ static void rxrpc_cleanup_call(struct rxrpc_call *call)
	rxrpc_purge_queue(&call->rx_queue);
	ASSERT(skb_queue_empty(&call->rx_oos_queue));
	sock_put(&call->socket->sk);
-	kmem_cache_free(rxrpc_call_jar, call);
+	call_rcu(&call->rcu, rxrpc_rcu_destroy_call);
 }
...
...@@ -33,7 +33,8 @@ static DEFINE_SPINLOCK(rxrpc_conn_id_lock); ...@@ -33,7 +33,8 @@ static DEFINE_SPINLOCK(rxrpc_conn_id_lock);
* client conns away from the current allocation point to try and keep the IDs * client conns away from the current allocation point to try and keep the IDs
* concentrated. We will also need to retire connections from an old epoch. * concentrated. We will also need to retire connections from an old epoch.
*/ */
int rxrpc_get_client_connection_id(struct rxrpc_connection *conn, gfp_t gfp) static int rxrpc_get_client_connection_id(struct rxrpc_connection *conn,
gfp_t gfp)
{ {
u32 epoch; u32 epoch;
int id; int id;
...@@ -83,7 +84,7 @@ int rxrpc_get_client_connection_id(struct rxrpc_connection *conn, gfp_t gfp) ...@@ -83,7 +84,7 @@ int rxrpc_get_client_connection_id(struct rxrpc_connection *conn, gfp_t gfp)
/* /*
* Release a connection ID for a client connection from the global pool. * Release a connection ID for a client connection from the global pool.
*/ */
void rxrpc_put_client_connection_id(struct rxrpc_connection *conn) static void rxrpc_put_client_connection_id(struct rxrpc_connection *conn)
{ {
if (test_bit(RXRPC_CONN_HAS_IDR, &conn->flags)) { if (test_bit(RXRPC_CONN_HAS_IDR, &conn->flags)) {
spin_lock(&rxrpc_conn_id_lock); spin_lock(&rxrpc_conn_id_lock);
...@@ -92,3 +93,280 @@ void rxrpc_put_client_connection_id(struct rxrpc_connection *conn) ...@@ -92,3 +93,280 @@ void rxrpc_put_client_connection_id(struct rxrpc_connection *conn)
spin_unlock(&rxrpc_conn_id_lock); spin_unlock(&rxrpc_conn_id_lock);
} }
} }
/*
* Destroy the client connection ID tree.
*/
void rxrpc_destroy_client_conn_ids(void)
{
struct rxrpc_connection *conn;
int id;
if (!idr_is_empty(&rxrpc_client_conn_ids)) {
idr_for_each_entry(&rxrpc_client_conn_ids, conn, id) {
pr_err("AF_RXRPC: Leaked client conn %p {%d}\n",
conn, atomic_read(&conn->usage));
}
BUG();
}
idr_destroy(&rxrpc_client_conn_ids);
}
/*
* Allocate a client connection. The caller must take care to clear any
* padding bytes in *cp.
*/
static struct rxrpc_connection *
rxrpc_alloc_client_connection(struct rxrpc_conn_parameters *cp, gfp_t gfp)
{
struct rxrpc_connection *conn;
int ret;
_enter("");
conn = rxrpc_alloc_connection(gfp);
if (!conn) {
_leave(" = -ENOMEM");
return ERR_PTR(-ENOMEM);
}
conn->params = *cp;
conn->out_clientflag = RXRPC_CLIENT_INITIATED;
conn->state = RXRPC_CONN_CLIENT;
ret = rxrpc_get_client_connection_id(conn, gfp);
if (ret < 0)
goto error_0;
ret = rxrpc_init_client_conn_security(conn);
if (ret < 0)
goto error_1;
ret = conn->security->prime_packet_security(conn);
if (ret < 0)
goto error_2;
write_lock(&rxrpc_connection_lock);
list_add_tail(&conn->link, &rxrpc_connections);
write_unlock(&rxrpc_connection_lock);
/* We steal the caller's peer ref. */
cp->peer = NULL;
rxrpc_get_local(conn->params.local);
key_get(conn->params.key);
_leave(" = %p", conn);
return conn;
error_2:
conn->security->clear(conn);
error_1:
rxrpc_put_client_connection_id(conn);
error_0:
kfree(conn);
_leave(" = %d", ret);
return ERR_PTR(ret);
}
/*
* find a connection for a call
* - called in process context with IRQs enabled
*/
int rxrpc_connect_call(struct rxrpc_call *call,
struct rxrpc_conn_parameters *cp,
struct sockaddr_rxrpc *srx,
gfp_t gfp)
{
struct rxrpc_connection *conn, *candidate = NULL;
struct rxrpc_local *local = cp->local;
struct rb_node *p, **pp, *parent;
long diff;
int chan;
DECLARE_WAITQUEUE(myself, current);
_enter("{%d,%lx},", call->debug_id, call->user_call_ID);
cp->peer = rxrpc_lookup_peer(cp->local, srx, gfp);
if (!cp->peer)
return -ENOMEM;
if (!cp->exclusive) {
/* Search for a existing client connection unless this is going
* to be a connection that's used exclusively for a single call.
*/
_debug("search 1");
spin_lock(&local->client_conns_lock);
p = local->client_conns.rb_node;
while (p) {
conn = rb_entry(p, struct rxrpc_connection, client_node);
#define cmp(X) ((long)conn->params.X - (long)cp->X)
diff = (cmp(peer) ?:
cmp(key) ?:
cmp(security_level));
if (diff < 0)
p = p->rb_left;
else if (diff > 0)
p = p->rb_right;
else
goto found_extant_conn;
}
spin_unlock(&local->client_conns_lock);
}
/* We didn't find a connection or we want an exclusive one. */
_debug("get new conn");
candidate = rxrpc_alloc_client_connection(cp, gfp);
if (!candidate) {
_leave(" = -ENOMEM");
return -ENOMEM;
}
if (cp->exclusive) {
/* Assign the call on an exclusive connection to channel 0 and
* don't add the connection to the endpoint's shareable conn
* lookup tree.
*/
_debug("exclusive chan 0");
conn = candidate;
atomic_set(&conn->avail_chans, RXRPC_MAXCALLS - 1);
spin_lock(&conn->channel_lock);
chan = 0;
goto found_channel;
}
/* We need to redo the search before attempting to add a new connection
* lest we race with someone else adding a conflicting instance.
*/
_debug("search 2");
spin_lock(&local->client_conns_lock);
pp = &local->client_conns.rb_node;
parent = NULL;
while (*pp) {
parent = *pp;
conn = rb_entry(parent, struct rxrpc_connection, client_node);
diff = (cmp(peer) ?:
cmp(key) ?:
cmp(security_level));
if (diff < 0)
pp = &(*pp)->rb_left;
else if (diff > 0)
pp = &(*pp)->rb_right;
else
goto found_extant_conn;
}
/* The second search also failed; simply add the new connection with
* the new call in channel 0. Note that we need to take the channel
* lock before dropping the client conn lock.
*/
_debug("new conn");
set_bit(RXRPC_CONN_IN_CLIENT_CONNS, &candidate->flags);
rb_link_node(&candidate->client_node, parent, pp);
rb_insert_color(&candidate->client_node, &local->client_conns);
attached:
conn = candidate;
candidate = NULL;
atomic_set(&conn->avail_chans, RXRPC_MAXCALLS - 1);
spin_lock(&conn->channel_lock);
spin_unlock(&local->client_conns_lock);
chan = 0;
found_channel:
_debug("found chan");
call->conn = conn;
call->channel = chan;
call->epoch = conn->proto.epoch;
call->cid = conn->proto.cid | chan;
call->call_id = ++conn->channels[chan].call_counter;
conn->channels[chan].call_id = call->call_id;
rcu_assign_pointer(conn->channels[chan].call, call);
_net("CONNECT call %d on conn %d", call->debug_id, conn->debug_id);
spin_unlock(&conn->channel_lock);
rxrpc_put_peer(cp->peer);
cp->peer = NULL;
_leave(" = %p {u=%d}", conn, atomic_read(&conn->usage));
return 0;
/* We found a potentially suitable connection already in existence. If
* we can reuse it (ie. its usage count hasn't been reduced to 0 by the
* reaper), discard any candidate we may have allocated, and try to get
* a channel on this one, otherwise we have to replace it.
*/
found_extant_conn:
_debug("found conn");
if (!rxrpc_get_connection_maybe(conn)) {
set_bit(RXRPC_CONN_IN_CLIENT_CONNS, &candidate->flags);
rb_replace_node(&conn->client_node,
&candidate->client_node,
&local->client_conns);
clear_bit(RXRPC_CONN_IN_CLIENT_CONNS, &conn->flags);
goto attached;
}
spin_unlock(&local->client_conns_lock);
rxrpc_put_connection(candidate);
if (!atomic_add_unless(&conn->avail_chans, -1, 0)) {
if (!gfpflags_allow_blocking(gfp)) {
rxrpc_put_connection(conn);
_leave(" = -EAGAIN");
return -EAGAIN;
}
add_wait_queue(&conn->channel_wq, &myself);
for (;;) {
set_current_state(TASK_INTERRUPTIBLE);
if (atomic_add_unless(&conn->avail_chans, -1, 0))
break;
if (signal_pending(current))
goto interrupted;
schedule();
}
remove_wait_queue(&conn->channel_wq, &myself);
__set_current_state(TASK_RUNNING);
}
/* The connection allegedly now has a free channel and we can now
* attach the call to it.
*/
spin_lock(&conn->channel_lock);
for (chan = 0; chan < RXRPC_MAXCALLS; chan++)
if (!conn->channels[chan].call)
goto found_channel;
BUG();
interrupted:
remove_wait_queue(&conn->channel_wq, &myself);
__set_current_state(TASK_RUNNING);
rxrpc_put_connection(conn);
rxrpc_put_peer(cp->peer);
cp->peer = NULL;
_leave(" = -ERESTARTSYS");
return -ERESTARTSYS;
}
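The slow path above is a hand-rolled counting semaphore: avail_chans is only ever decremented via atomic_add_unless(), so it can never pass through zero, and sleepers on channel_wq are woken by the disconnect path when a channel frees up. A minimal, self-contained sketch of the same pattern (illustrative only, not part of the patch; grab_slot() is a hypothetical name):
static int grab_slot(atomic_t *slots, wait_queue_head_t *wq, bool may_block)
{
        DECLARE_WAITQUEUE(myself, current);
        /* Fast path: take a slot only if at least one remains; the counter
         * is never decremented below zero.
         */
        if (atomic_add_unless(slots, -1, 0))
                return 0;
        if (!may_block)
                return -EAGAIN;
        /* Slow path: sleep until a releasing caller wakes the queue. */
        add_wait_queue(wq, &myself);
        for (;;) {
                set_current_state(TASK_INTERRUPTIBLE);
                if (atomic_add_unless(slots, -1, 0))
                        break;
                if (signal_pending(current)) {
                        remove_wait_queue(wq, &myself);
                        __set_current_state(TASK_RUNNING);
                        return -ERESTARTSYS;
                }
                schedule();
        }
        remove_wait_queue(wq, &myself);
        __set_current_state(TASK_RUNNING);
        return 0;
}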
/*
* Remove a client connection from the local endpoint's tree, thereby removing
* it as a target for reuse for new client calls.
*/
void rxrpc_unpublish_client_conn(struct rxrpc_connection *conn)
{
struct rxrpc_local *local = conn->params.local;
spin_lock(&local->client_conns_lock);
if (test_and_clear_bit(RXRPC_CONN_IN_CLIENT_CONNS, &conn->flags))
rb_erase(&conn->client_node, &local->client_conns);
spin_unlock(&local->client_conns_lock);
rxrpc_put_client_connection_id(conn);
}
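Both searches in rxrpc_connect_call() order the tree with the cmp(X) chain; the GNU "a ?: b" operator yields a when a is non-zero, so the first field that differs (peer, then key, then security level) decides whether the walk descends left or right. A sketch of that composite-key comparison (conn_key_diff() is a hypothetical helper, not in the patch):
static long conn_key_diff(const struct rxrpc_connection *conn,
                          const struct rxrpc_conn_parameters *cp)
{
#define cmp(X) ((long)conn->params.X - (long)cp->X)
        /* The first non-zero difference wins; later fields only break ties. */
        return cmp(peer) ?: cmp(key) ?: cmp(security_level);
#undef cmp
}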
@@ -31,15 +31,17 @@ static void rxrpc_abort_calls(struct rxrpc_connection *conn, int state,
u32 abort_code)
{
struct rxrpc_call *call;
int i;
_enter("{%d},%x", conn->debug_id, abort_code);
spin_lock(&conn->channel_lock);
for (i = 0; i < RXRPC_MAXCALLS; i++) {
call = rcu_dereference_protected(
conn->channels[i].call,
lockdep_is_held(&conn->channel_lock));
write_lock_bh(&call->state_lock);
if (call->state <= RXRPC_CALL_COMPLETE) {
call->state = state;
if (state == RXRPC_CALL_LOCALLY_ABORTED) {
@@ -51,10 +53,10 @@ static void rxrpc_abort_calls(struct rxrpc_connection *conn, int state,
}
rxrpc_queue_call(call);
}
write_unlock_bh(&call->state_lock);
}
spin_unlock(&conn->channel_lock);
_leave("");
}
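The channel array is an RCU-managed structure read here from the update side: holding channel_lock excludes all writers, so rcu_dereference_protected() is the right accessor, and lockdep verifies the stated lock at runtime. In miniature (a sketch, not part of the patch):
static struct rxrpc_call *channel_call_locked(struct rxrpc_connection *conn,
                                              int i)
{
        /* Safe without rcu_read_lock(): channel_lock pins the pointer. */
        return rcu_dereference_protected(conn->channels[i].call,
                                         lockdep_is_held(&conn->channel_lock));
}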
@@ -188,18 +190,24 @@ static int rxrpc_process_event(struct rxrpc_connection *conn,
if (ret < 0)
return ret;
ret = conn->security->prime_packet_security(conn);
if (ret < 0)
return ret;
spin_lock(&conn->channel_lock);
spin_lock(&conn->state_lock);
if (conn->state == RXRPC_CONN_SERVICE_CHALLENGING) {
conn->state = RXRPC_CONN_SERVICE;
for (loop = 0; loop < RXRPC_MAXCALLS; loop++)
rxrpc_call_is_secure(
rcu_dereference_protected(
conn->channels[loop].call,
lockdep_is_held(&conn->channel_lock)));
}
spin_unlock(&conn->state_lock);
spin_unlock(&conn->channel_lock);
return 0;
default:
@@ -263,12 +271,8 @@ void rxrpc_process_connection(struct work_struct *work)
_enter("{%d}", conn->debug_id);
if (test_and_clear_bit(RXRPC_CONN_EV_CHALLENGE, &conn->events))
rxrpc_secure_connection(conn);
/* go through the conn-level event packets, releasing the ref on this
* connection that each one has when we've finished with it */
@@ -283,7 +287,6 @@ void rxrpc_process_connection(struct work_struct *work)
goto requeue_and_leave;
case -ECONNABORTED:
default:
rxrpc_free_skb(skb);
break;
}
@@ -301,7 +304,6 @@ void rxrpc_process_connection(struct work_struct *work)
protocol_error:
if (rxrpc_abort_connection(conn, -ret, abort_code) < 0)
goto requeue_and_leave;
rxrpc_free_skb(skb);
_leave(" [EPROTO]");
goto out;
@@ -315,7 +317,7 @@ void rxrpc_reject_packet(struct rxrpc_local *local, struct sk_buff *skb)
CHECK_SLAB_OKAY(&local->usage);
skb_queue_tail(&local->reject_queue, skb);
rxrpc_queue_local(local);
}
...
@@ -15,7 +15,6 @@
#include <linux/slab.h>
#include <linux/net.h>
#include <linux/skbuff.h>
#include <net/sock.h>
#include <net/af_rxrpc.h>
#include "ar-internal.h"
@@ -34,7 +33,7 @@ static DECLARE_DELAYED_WORK(rxrpc_connection_reap, rxrpc_connection_reaper);
/*
* allocate a new connection
*/
struct rxrpc_connection *rxrpc_alloc_connection(gfp_t gfp)
{
struct rxrpc_connection *conn;
@@ -46,12 +45,13 @@ static struct rxrpc_connection *rxrpc_alloc_connection(gfp_t gfp)
init_waitqueue_head(&conn->channel_wq);
INIT_WORK(&conn->processor, &rxrpc_process_connection);
INIT_LIST_HEAD(&conn->link);
skb_queue_head_init(&conn->rx_queue);
conn->security = &rxrpc_no_security;
spin_lock_init(&conn->state_lock);
/* We maintain an extra ref on the connection whilst it is
* on the rxrpc_connections list.
*/
atomic_set(&conn->usage, 2);
conn->debug_id = atomic_inc_return(&rxrpc_debug_id);
atomic_set(&conn->avail_chans, RXRPC_MAXCALLS);
conn->size_align = 4;
@@ -63,465 +63,118 @@ static struct rxrpc_connection *rxrpc_alloc_connection(gfp_t gfp)
}
/*
* Look up a connection in the cache by protocol parameters.
*
* If successful, a pointer to the connection is returned, but no ref is taken.
* NULL is returned if there is no match.
*
* The caller must be holding the RCU read lock.
*/
struct rxrpc_connection *rxrpc_find_connection_rcu(struct rxrpc_local *local,
struct sk_buff *skb)
{
struct rxrpc_connection *conn;
struct rxrpc_conn_proto k;
struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
struct sockaddr_rxrpc srx;
struct rxrpc_peer *peer;
_enter(",%x", sp->hdr.cid & RXRPC_CIDMASK);
if (rxrpc_extract_addr_from_skb(&srx, skb) < 0)
goto not_found;
/* We may have to handle mixing IPv4 and IPv6 */
if (srx.transport.family != local->srx.transport.family) {
pr_warn_ratelimited("AF_RXRPC: Protocol mismatch %u not %u\n",
srx.transport.family,
local->srx.transport.family);
goto not_found;
}
k.epoch = sp->hdr.epoch;
k.cid = sp->hdr.cid & RXRPC_CIDMASK;
if (sp->hdr.flags & RXRPC_CLIENT_INITIATED) {
/* We need to look up service connections by the full protocol
* parameter set. We look up the peer first as an intermediate
* step and then the connection from the peer's tree.
*/
peer = rxrpc_lookup_peer_rcu(local, &srx);
if (!peer)
goto not_found;
conn = rxrpc_find_service_conn_rcu(peer, skb);
if (!conn || atomic_read(&conn->usage) == 0)
goto not_found;
_leave(" = %p", conn);
return conn;
} else {
/* Look up client connections by connection ID alone as their
* IDs are unique for this machine.
*/
conn = idr_find(&rxrpc_client_conn_ids,
sp->hdr.cid >> RXRPC_CIDSHIFT);
if (!conn || atomic_read(&conn->usage) == 0) {
_debug("no conn");
goto not_found;
}
if (conn->proto.epoch != k.epoch ||
conn->params.local != local)
goto not_found;
peer = conn->params.peer;
switch (srx.transport.family) {
case AF_INET:
if (peer->srx.transport.sin.sin_port !=
srx.transport.sin.sin_port ||
peer->srx.transport.sin.sin_addr.s_addr !=
srx.transport.sin.sin_addr.s_addr)
goto not_found;
break;
default:
BUG();
}
_leave(" = %p", conn);
return conn;
}
not_found:
_leave(" = NULL");
return NULL;
}
/*
* Disconnect a call and clear any channel it occupies when that call
* terminates. The caller must hold the channel_lock and must release the
* call's ref on the connection.
*/
void __rxrpc_disconnect_call(struct rxrpc_call *call)
{
struct rxrpc_connection *conn = call->conn;
struct rxrpc_channel *chan = &conn->channels[call->channel];
_enter("%d,%d", conn->debug_id, call->channel);
if (rcu_access_pointer(chan->call) == call) {
/* Save the result of the call so that we can repeat it if necessary
* through the channel, whilst disposing of the actual call record.
*/
chan->last_result = call->local_abort;
smp_wmb();
chan->last_call = chan->call_id;
chan->call_id = chan->call_counter;
rcu_assign_pointer(chan->call, NULL);
atomic_inc(&conn->avail_chans);
wake_up(&conn->channel_wq);
}
_leave("");
}
@@ -531,15 +184,13 @@ struct rxrpc_connection *rxrpc_find_connection(struct rxrpc_local *local,
void rxrpc_disconnect_call(struct rxrpc_call *call)
{
struct rxrpc_connection *conn = call->conn;
spin_lock(&conn->channel_lock);
__rxrpc_disconnect_call(call);
spin_unlock(&conn->channel_lock);
call->conn = NULL;
rxrpc_put_connection(conn);
}
@@ -553,10 +204,10 @@ void rxrpc_put_connection(struct rxrpc_connection *conn)
_enter("%p{u=%d,d=%d}",
conn, atomic_read(&conn->usage), conn->debug_id);
ASSERTCMP(atomic_read(&conn->usage), >, 1);
conn->put_time = ktime_get_seconds();
if (atomic_dec_return(&conn->usage) == 1) {
_debug("zombie");
rxrpc_queue_delayed_work(&rxrpc_connection_reap, 0);
}
@@ -567,15 +218,17 @@ void rxrpc_put_connection(struct rxrpc_connection *conn)
/*
* destroy a virtual connection
*/
static void rxrpc_destroy_connection(struct rcu_head *rcu)
{
struct rxrpc_connection *conn =
container_of(rcu, struct rxrpc_connection, rcu);
_enter("{%d,u=%d}", conn->debug_id, atomic_read(&conn->usage));
ASSERTCMP(atomic_read(&conn->usage), ==, 0);
_net("DESTROY CONN %d", conn->debug_id);
rxrpc_purge_queue(&conn->rx_queue);
conn->security->clear(conn);
@@ -594,59 +247,41 @@ static void rxrpc_destroy_connection(struct rxrpc_connection *conn)
static void rxrpc_connection_reaper(struct work_struct *work)
{
struct rxrpc_connection *conn, *_p;
unsigned long reap_older_than, earliest, put_time, now;
LIST_HEAD(graveyard);
_enter("");
now = ktime_get_seconds();
reap_older_than = now - rxrpc_connection_expiry;
earliest = ULONG_MAX;
write_lock(&rxrpc_connection_lock);
list_for_each_entry_safe(conn, _p, &rxrpc_connections, link) {
ASSERTCMP(atomic_read(&conn->usage), >, 0);
if (likely(atomic_read(&conn->usage) > 1))
continue;
put_time = READ_ONCE(conn->put_time);
if (time_after(put_time, reap_older_than)) {
if (time_before(put_time, earliest))
earliest = put_time;
continue;
}
/* The usage count sits at 1 whilst the object is unused on the
* list; we reduce that to 0 to make the object unavailable.
*/
if (atomic_cmpxchg(&conn->usage, 1, 0) != 1)
continue;
if (rxrpc_conn_is_client(conn))
rxrpc_unpublish_client_conn(conn);
else
rxrpc_unpublish_service_conn(conn);
list_move_tail(&conn->link, &graveyard);
}
write_unlock(&rxrpc_connection_lock);
@@ -657,14 +292,14 @@ static void rxrpc_connection_reaper(struct work_struct *work)
(earliest - now) * HZ);
}
while (!list_empty(&graveyard)) {
conn = list_entry(graveyard.next, struct rxrpc_connection,
link);
list_del_init(&conn->link);
ASSERTCMP(atomic_read(&conn->usage), ==, 0);
skb_queue_purge(&conn->rx_queue);
call_rcu(&conn->rcu, rxrpc_destroy_connection);
}
_leave("");
@@ -676,11 +311,30 @@ static void rxrpc_connection_reaper(struct work_struct *work)
*/
void __exit rxrpc_destroy_all_connections(void)
{
struct rxrpc_connection *conn, *_p;
bool leak = false;
_enter("");
rxrpc_connection_expiry = 0;
cancel_delayed_work(&rxrpc_connection_reap);
rxrpc_queue_delayed_work(&rxrpc_connection_reap, 0);
flush_workqueue(rxrpc_workqueue);
write_lock(&rxrpc_connection_lock);
list_for_each_entry_safe(conn, _p, &rxrpc_connections, link) {
pr_err("AF_RXRPC: Leaked conn %p {%d}\n",
conn, atomic_read(&conn->usage));
leak = true;
}
write_unlock(&rxrpc_connection_lock);
BUG_ON(leak);
/* Make sure the local and peer records pinned by any dying connections
* are released.
*/
rcu_barrier();
rxrpc_destroy_client_conn_ids();
_leave("");
}
/* Service connection management
*
* Copyright (C) 2016 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public Licence
* as published by the Free Software Foundation; either version
* 2 of the Licence, or (at your option) any later version.
*/
#include <linux/slab.h>
#include "ar-internal.h"
/*
* Find a service connection under RCU conditions.
*
* We could use a hash table, but that is subject to bucket stuffing by an
* attacker as the client gets to pick the epoch and cid values and would know
* the hash function. So, instead, we use a hash table for the peer and from
* that an rbtree to find the service connection. Under ordinary circumstances
* it might be slower than a large hash table, but it is at least limited in
* depth.
*/
struct rxrpc_connection *rxrpc_find_service_conn_rcu(struct rxrpc_peer *peer,
struct sk_buff *skb)
{
struct rxrpc_connection *conn = NULL;
struct rxrpc_conn_proto k;
struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
struct rb_node *p;
unsigned int seq = 0;
k.epoch = sp->hdr.epoch;
k.cid = sp->hdr.cid & RXRPC_CIDMASK;
do {
/* Unfortunately, rbtree walking doesn't give reliable results
* under just the RCU read lock, so we have to check for
* changes.
*/
read_seqbegin_or_lock(&peer->service_conn_lock, &seq);
p = rcu_dereference_raw(peer->service_conns.rb_node);
while (p) {
conn = rb_entry(p, struct rxrpc_connection, service_node);
if (conn->proto.index_key < k.index_key)
p = rcu_dereference_raw(p->rb_left);
else if (conn->proto.index_key > k.index_key)
p = rcu_dereference_raw(p->rb_right);
else
goto done;
conn = NULL;
}
} while (need_seqretry(&peer->service_conn_lock, seq));
done:
done_seqretry(&peer->service_conn_lock, seq);
_leave(" = %d", conn ? conn->debug_id : -1);
return conn;
}
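The read side above is the standard seqlock retry loop: the first pass walks the tree locklessly and is validated against the sequence counter; if a writer interleaved, need_seqretry() sends the walk round again, and done_seqretry() releases the lock if one was taken along the way. The skeleton of that pattern (a sketch under those assumptions, not part of the patch):
static int read_stable(seqlock_t *lock, const int *value)
{
        int seq = 0;
        int result;
        do {
                read_seqbegin_or_lock(lock, &seq);
                result = *value;        /* the rbtree walk goes here */
        } while (need_seqretry(lock, seq));
        done_seqretry(lock, seq);
        return result;
}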
/*
* Insert a service connection into a peer's tree, thereby making it a target
* for incoming packets.
*/
static struct rxrpc_connection *
rxrpc_publish_service_conn(struct rxrpc_peer *peer,
struct rxrpc_connection *conn)
{
struct rxrpc_connection *cursor = NULL;
struct rxrpc_conn_proto k = conn->proto;
struct rb_node **pp, *parent;
write_seqlock_bh(&peer->service_conn_lock);
pp = &peer->service_conns.rb_node;
parent = NULL;
while (*pp) {
parent = *pp;
cursor = rb_entry(parent,
struct rxrpc_connection, service_node);
if (cursor->proto.index_key < k.index_key)
pp = &(*pp)->rb_left;
else if (cursor->proto.index_key > k.index_key)
pp = &(*pp)->rb_right;
else
goto found_extant_conn;
}
rb_link_node_rcu(&conn->service_node, parent, pp);
rb_insert_color(&conn->service_node, &peer->service_conns);
conn_published:
set_bit(RXRPC_CONN_IN_SERVICE_CONNS, &conn->flags);
write_sequnlock_bh(&peer->service_conn_lock);
_leave(" = %d [new]", conn->debug_id);
return conn;
found_extant_conn:
if (atomic_read(&cursor->usage) == 0)
goto replace_old_connection;
write_sequnlock_bh(&peer->service_conn_lock);
/* We should not be able to get here. rxrpc_incoming_connection() is
* called in a non-reentrant context, so there can't be a race to
* insert a new connection.
*/
BUG();
replace_old_connection:
/* The old connection is from an outdated epoch. */
_debug("replace conn");
rb_replace_node_rcu(&cursor->service_node,
&conn->service_node,
&peer->service_conns);
clear_bit(RXRPC_CONN_IN_SERVICE_CONNS, &cursor->flags);
goto conn_published;
}
/*
* get a record of an incoming connection
*/
struct rxrpc_connection *rxrpc_incoming_connection(struct rxrpc_local *local,
struct sockaddr_rxrpc *srx,
struct sk_buff *skb)
{
struct rxrpc_connection *conn;
struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
struct rxrpc_peer *peer;
const char *new = "old";
_enter("");
ASSERT(sp->hdr.flags & RXRPC_CLIENT_INITIATED);
rcu_read_lock();
peer = rxrpc_lookup_peer_rcu(local, srx);
if (peer) {
conn = rxrpc_find_service_conn_rcu(peer, skb);
if (conn) {
if (sp->hdr.securityIndex != conn->security_ix)
goto security_mismatch_rcu;
if (rxrpc_get_connection_maybe(conn))
goto found_extant_connection_rcu;
/* The conn has expired but we can't remove it without
* the appropriate lock, so we attempt to replace it
* when we have a new candidate.
*/
}
if (!rxrpc_get_peer_maybe(peer))
peer = NULL;
}
rcu_read_unlock();
if (!peer) {
peer = rxrpc_lookup_peer(local, srx, GFP_NOIO);
if (IS_ERR(peer))
goto enomem;
}
/* We don't have a matching record yet. */
conn = rxrpc_alloc_connection(GFP_NOIO);
if (!conn)
goto enomem_peer;
conn->proto.epoch = sp->hdr.epoch;
conn->proto.cid = sp->hdr.cid & RXRPC_CIDMASK;
conn->params.local = local;
conn->params.peer = peer;
conn->params.service_id = sp->hdr.serviceId;
conn->security_ix = sp->hdr.securityIndex;
conn->out_clientflag = 0;
conn->state = RXRPC_CONN_SERVICE;
if (conn->params.service_id)
conn->state = RXRPC_CONN_SERVICE_UNSECURED;
rxrpc_get_local(local);
write_lock(&rxrpc_connection_lock);
list_add_tail(&conn->link, &rxrpc_connections);
write_unlock(&rxrpc_connection_lock);
/* Make the connection a target for incoming packets. */
rxrpc_publish_service_conn(peer, conn);
new = "new";
success:
_net("CONNECTION %s %d {%x}", new, conn->debug_id, conn->proto.cid);
_leave(" = %p {u=%d}", conn, atomic_read(&conn->usage));
return conn;
found_extant_connection_rcu:
rcu_read_unlock();
goto success;
security_mismatch_rcu:
rcu_read_unlock();
_leave(" = -EKEYREJECTED");
return ERR_PTR(-EKEYREJECTED);
enomem_peer:
rxrpc_put_peer(peer);
enomem:
_leave(" = -ENOMEM");
return ERR_PTR(-ENOMEM);
}
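rxrpc_incoming_connection() uses a two-phase lookup: find the peer under the RCU read lock, then try to convert the sighting into a real reference; if inc-not-zero fails, the record is already being destroyed and a fresh one must be obtained outside the RCU section. Reduced to the peer half (a sketch, not part of the patch):
static struct rxrpc_peer *peer_fast_lookup(struct rxrpc_local *local,
                                           struct sockaddr_rxrpc *srx)
{
        struct rxrpc_peer *peer;
        rcu_read_lock();
        peer = rxrpc_lookup_peer_rcu(local, srx);
        if (peer && !rxrpc_get_peer_maybe(peer))
                peer = NULL;    /* found, but already dying */
        rcu_read_unlock();
        return peer;            /* NULL: caller allocates or looks up anew */
}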
/*
* Remove the service connection from the peer's tree, thereby removing it as a
* target for incoming packets.
*/
void rxrpc_unpublish_service_conn(struct rxrpc_connection *conn)
{
struct rxrpc_peer *peer = conn->params.peer;
write_seqlock_bh(&peer->service_conn_lock);
if (test_and_clear_bit(RXRPC_CONN_IN_SERVICE_CONNS, &conn->flags))
rb_erase(&conn->service_node, &peer->service_conns);
write_sequnlock_bh(&peer->service_conn_lock);
}
@@ -476,7 +476,7 @@ static void rxrpc_process_jumbo_packet(struct rxrpc_call *call,
sp->hdr.seq += 1;
sp->hdr.serial += 1;
sp->hdr.flags = jhdr.flags;
sp->hdr._rsvd = ntohs(jhdr._rsvd);
_proto("Rx DATA Jumbo %%%u", sp->hdr.serial - 1);
@@ -575,14 +575,13 @@ static void rxrpc_post_packet_to_call(struct rxrpc_call *call,
* post connection-level events to the connection
* - this includes challenges, responses and some aborts
*/
static bool rxrpc_post_packet_to_conn(struct rxrpc_connection *conn,
struct sk_buff *skb)
{
_enter("%p,%p", conn, skb);
skb_queue_tail(&conn->rx_queue, skb);
return rxrpc_queue_conn(conn);
}
@@ -595,7 +594,7 @@ static void rxrpc_post_packet_to_local(struct rxrpc_local *local,
_enter("%p,%p", local, skb);
skb_queue_tail(&local->event_queue, skb);
rxrpc_queue_local(local);
}
@@ -627,32 +626,6 @@ int rxrpc_extract_header(struct rxrpc_skb_priv *sp, struct sk_buff *skb)
return 0;
}
/*
* handle data received on the local endpoint
* - may be called in interrupt context
@@ -663,6 +636,7 @@ static struct rxrpc_connection *rxrpc_conn_from_local(struct rxrpc_local *local,
*/
void rxrpc_data_ready(struct sock *sk)
{
struct rxrpc_connection *conn;
struct rxrpc_skb_priv *sp;
struct rxrpc_local *local = sk->sk_user_data;
struct sk_buff *skb;
@@ -726,34 +700,37 @@ void rxrpc_data_ready(struct sock *sk)
(sp->hdr.callNumber == 0 || sp->hdr.seq == 0))
goto bad_message;
rcu_read_lock();
retry_find_conn:
conn = rxrpc_find_connection_rcu(local, skb);
if (!conn)
goto cant_route_call;
if (sp->hdr.callNumber == 0) {
/* Connection-level packet */
_debug("CONN %p {%d}", conn, conn->debug_id);
if (!rxrpc_post_packet_to_conn(conn, skb))
goto retry_find_conn;
} else {
/* Call-bound packets are routed by connection channel. */
unsigned int channel = sp->hdr.cid & RXRPC_CHANNELMASK;
struct rxrpc_channel *chan = &conn->channels[channel];
struct rxrpc_call *call = rcu_dereference(chan->call);
if (!call || atomic_read(&call->usage) == 0)
goto cant_route_call;
rxrpc_post_packet_to_call(call, skb);
}
rcu_read_unlock();
out:
return;
cant_route_call:
rcu_read_unlock();
_debug("can't route call");
if (sp->hdr.flags & RXRPC_CLIENT_INITIATED &&
sp->hdr.type == RXRPC_PACKET_TYPE_DATA) {
...
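The retry_find_conn loop works because rxrpc_post_packet_to_conn() now reports whether the connection's work item could be charged with a reference; false means the connection died between lookup and queueing, so the packet is re-routed through a fresh lookup. A plausible shape for the queueing helper, inferred from that usage (an assumption, not shown in the patch):
static inline bool queue_conn_sketch(struct rxrpc_connection *conn)
{
        if (!rxrpc_get_connection_maybe(conn))
                return false;                   /* usage already hit zero */
        if (!rxrpc_queue_work(&conn->processor))
                rxrpc_put_connection(conn);     /* already queued; drop our ref */
        return true;
}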
@@ -17,11 +17,12 @@ static int none_init_connection_security(struct rxrpc_connection *conn)
return 0;
}
static int none_prime_packet_security(struct rxrpc_connection *conn)
{
return 0;
}
static int none_secure_packet(struct rxrpc_call *call,
struct sk_buff *skb,
size_t data_size,
void *sechdr)
@@ -29,7 +30,7 @@ static int none_secure_packet(const struct rxrpc_call *call,
return 0;
}
static int none_verify_packet(struct rxrpc_call *call,
struct sk_buff *skb,
u32 *_abort_code)
{
...
@@ -374,14 +374,17 @@ void __exit rxrpc_destroy_all_locals(void)
_enter("");
flush_workqueue(rxrpc_workqueue);
if (!list_empty(&rxrpc_local_endpoints)) {
mutex_lock(&rxrpc_local_mutex);
list_for_each_entry(local, &rxrpc_local_endpoints, link) {
pr_err("AF_RXRPC: Leaked local %p {%d}\n",
local, atomic_read(&local->usage));
}
mutex_unlock(&rxrpc_local_mutex);
BUG();
}
rcu_barrier();
}
@@ -189,7 +189,7 @@ struct rxrpc_peer *rxrpc_alloc_peer(struct rxrpc_local *local, gfp_t gfp)
INIT_WORK(&peer->error_distributor,
&rxrpc_peer_error_distributor);
peer->service_conns = RB_ROOT;
seqlock_init(&peer->service_conn_lock);
spin_lock_init(&peer->lock);
peer->debug_id = atomic_inc_return(&rxrpc_debug_id);
}
...
@@ -14,15 +14,15 @@
#include <net/af_rxrpc.h>
#include "ar-internal.h"
static const char *const rxrpc_conn_states[RXRPC_CONN__NR_STATES] = {
[RXRPC_CONN_UNUSED] = "Unused ",
[RXRPC_CONN_CLIENT] = "Client ",
[RXRPC_CONN_SERVICE_UNSECURED] = "SvUnsec ",
[RXRPC_CONN_SERVICE_CHALLENGING] = "SvChall ",
[RXRPC_CONN_SERVICE] = "SvSecure",
[RXRPC_CONN_REMOTELY_ABORTED] = "RmtAbort",
[RXRPC_CONN_LOCALLY_ABORTED] = "LocAbort",
[RXRPC_CONN_NETWORK_ERROR] = "NetError",
};
@@ -137,7 +137,7 @@ static int rxrpc_connection_seq_show(struct seq_file *seq, void *v)
if (v == &rxrpc_connections) {
seq_puts(seq,
"Proto Local Remote "
" SvID ConnID End Use State Key "
" Serial ISerial\n"
);
return 0;
@@ -154,13 +154,12 @@ static int rxrpc_connection_seq_show(struct seq_file *seq, void *v)
ntohs(conn->params.peer->srx.transport.sin.sin_port));
seq_printf(seq,
"UDP %-22.22s %-22.22s %4x %08x %s %3u"
" %s %08x %08x %08x\n",
lbuff,
rbuff,
conn->params.service_id,
conn->proto.cid,
rxrpc_conn_is_service(conn) ? "Svc" : "Clt",
atomic_read(&conn->usage),
rxrpc_conn_states[conn->state],
...
@@ -103,43 +103,43 @@ static int rxkad_init_connection_security(struct rxrpc_connection *conn)
* prime the encryption state with the invariant parts of a connection's
* description
*/
static int rxkad_prime_packet_security(struct rxrpc_connection *conn)
{
struct rxrpc_key_token *token;
SKCIPHER_REQUEST_ON_STACK(req, conn->cipher);
struct scatterlist sg;
struct rxrpc_crypt iv;
__be32 *tmpbuf;
size_t tmpsize = 4 * sizeof(__be32);
_enter("");
if (!conn->params.key)
return 0;
tmpbuf = kmalloc(tmpsize, GFP_KERNEL);
if (!tmpbuf)
return -ENOMEM;
token = conn->params.key->payload.data[0];
memcpy(&iv, token->kad->session_key, sizeof(iv));
tmpbuf[0] = htonl(conn->proto.epoch);
tmpbuf[1] = htonl(conn->proto.cid);
tmpbuf[2] = 0;
tmpbuf[3] = htonl(conn->security_ix);
sg_init_one(&sg, tmpbuf, tmpsize);
skcipher_request_set_tfm(req, conn->cipher);
skcipher_request_set_callback(req, 0, NULL, NULL);
skcipher_request_set_crypt(req, &sg, &sg, tmpsize, iv.x);
crypto_skcipher_encrypt(req);
skcipher_request_zero(req);
memcpy(&conn->csum_iv, tmpbuf + 2, sizeof(conn->csum_iv));
kfree(tmpbuf);
_leave(" = 0");
return 0;
}
@@ -152,12 +152,9 @@ static int rxkad_secure_packet_auth(const struct rxrpc_call *call,
{
struct rxrpc_skb_priv *sp;
SKCIPHER_REQUEST_ON_STACK(req, call->conn->cipher);
struct rxkad_level1_hdr hdr;
struct rxrpc_crypt iv;
struct scatterlist sg;
u16 check;
sp = rxrpc_skb(skb);
@@ -167,24 +164,19 @@ static int rxkad_secure_packet_auth(const struct rxrpc_call *call,
check = sp->hdr.seq ^ sp->hdr.callNumber;
data_size |= (u32)check << 16;
hdr.data_size = htonl(data_size);
memcpy(sechdr, &hdr, sizeof(hdr));
/* start the encryption afresh */
memset(&iv, 0, sizeof(iv));
sg_init_one(&sg, sechdr, 8);
skcipher_request_set_tfm(req, call->conn->cipher);
skcipher_request_set_callback(req, 0, NULL, NULL);
skcipher_request_set_crypt(req, &sg, &sg, 8, iv.x);
crypto_skcipher_encrypt(req);
skcipher_request_zero(req);
_leave(" = 0");
return 0;
}
@@ -198,8 +190,7 @@ static int rxkad_secure_packet_encrypt(const struct rxrpc_call *call,
void *sechdr)
{
const struct rxrpc_key_token *token;
struct rxkad_level2_hdr rxkhdr;
struct rxrpc_skb_priv *sp;
SKCIPHER_REQUEST_ON_STACK(req, call->conn->cipher);
struct rxrpc_crypt iv;
@@ -218,18 +209,16 @@ static int rxkad_secure_packet_encrypt(const struct rxrpc_call *call,
rxkhdr.data_size = htonl(data_size | (u32)check << 16);
rxkhdr.checksum = 0;
memcpy(sechdr, &rxkhdr, sizeof(rxkhdr));
/* encrypt from the session key */
token = call->conn->params.key->payload.data[0];
memcpy(&iv, token->kad->session_key, sizeof(iv));
sg_init_one(&sg[0], sechdr, sizeof(rxkhdr));
skcipher_request_set_tfm(req, call->conn->cipher);
skcipher_request_set_callback(req, 0, NULL, NULL);
skcipher_request_set_crypt(req, &sg[0], &sg[0], sizeof(rxkhdr), iv.x);
crypto_skcipher_encrypt(req);
/* we want to encrypt the skbuff in-place */
@@ -243,9 +232,7 @@ static int rxkad_secure_packet_encrypt(const struct rxrpc_call *call,
sg_init_table(sg, nsg);
skb_to_sgvec(skb, sg, 0, len);
skcipher_request_set_crypt(req, sg, sg, len, iv.x);
crypto_skcipher_encrypt(req);
_leave(" = 0");
/*
* checksum an RxRPC packet header
*/
static int rxkad_secure_packet(struct rxrpc_call *call,
struct sk_buff *skb,
size_t data_size,
void *sechdr)
@@ -267,10 +254,7 @@ static int rxkad_secure_packet(const struct rxrpc_call *call,
struct rxrpc_skb_priv *sp;
SKCIPHER_REQUEST_ON_STACK(req, call->conn->cipher);
struct rxrpc_crypt iv;
struct scatterlist sg;
u32 x, y;
int ret;
@@ -293,20 +277,17 @@ static int rxkad_secure_packet(const struct rxrpc_call *call,
/* calculate the security checksum */
x = call->channel << (32 - RXRPC_CIDSHIFT);
x |= sp->hdr.seq & 0x3fffffff;
call->crypto_buf[0] = htonl(sp->hdr.callNumber);
call->crypto_buf[1] = htonl(x);
sg_init_one(&sg, call->crypto_buf, 8);
skcipher_request_set_tfm(req, call->conn->cipher);
skcipher_request_set_callback(req, 0, NULL, NULL);
skcipher_request_set_crypt(req, &sg, &sg, 8, iv.x);
crypto_skcipher_encrypt(req);
skcipher_request_zero(req);
y = ntohl(call->crypto_buf[1]);
y = (y >> 16) & 0xffff;
if (y == 0)
y = 1; /* zero checksums are not permitted */
@@ -367,7 +348,6 @@ static int rxkad_verify_packet_auth(const struct rxrpc_call *call,
skcipher_request_set_tfm(req, call->conn->cipher);
skcipher_request_set_callback(req, 0, NULL, NULL);
skcipher_request_set_crypt(req, sg, sg, 8, iv.x);
crypto_skcipher_decrypt(req);
skcipher_request_zero(req);
@@ -452,7 +432,6 @@ static int rxkad_verify_packet_encrypt(const struct rxrpc_call *call,
skcipher_request_set_tfm(req, call->conn->cipher);
skcipher_request_set_callback(req, 0, NULL, NULL);
skcipher_request_set_crypt(req, sg, sg, skb->len, iv.x);
crypto_skcipher_decrypt(req);
skcipher_request_zero(req);
if (sg != _sg)
@@ -498,17 +477,14 @@ static int rxkad_verify_packet_encrypt(const struct rxrpc_call *call,
/*
* verify the security on a received packet
*/
static int rxkad_verify_packet(struct rxrpc_call *call,
struct sk_buff *skb,
u32 *_abort_code)
{
SKCIPHER_REQUEST_ON_STACK(req, call->conn->cipher);
struct rxrpc_skb_priv *sp;
struct rxrpc_crypt iv;
struct scatterlist sg;
u16 cksum;
u32 x, y;
int ret;
@@ -533,20 +509,17 @@ static int rxkad_verify_packet(const struct rxrpc_call *call,
/* validate the security checksum */
x = call->channel << (32 - RXRPC_CIDSHIFT);
x |= sp->hdr.seq & 0x3fffffff;
call->crypto_buf[0] = htonl(call->call_id);
call->crypto_buf[1] = htonl(x);
sg_init_one(&sg, call->crypto_buf, 8);
skcipher_request_set_tfm(req, call->conn->cipher);
skcipher_request_set_callback(req, 0, NULL, NULL);
skcipher_request_set_crypt(req, &sg, &sg, 8, iv.x);
crypto_skcipher_encrypt(req);
skcipher_request_zero(req);
y = ntohl(call->crypto_buf[1]);
cksum = (y >> 16) & 0xffff;
if (cksum == 0)
cksum = 1; /* zero checksums are not permitted */
@@ -709,29 +682,6 @@ static void rxkad_calc_response_checksum(struct rxkad_response *response)
response->encrypted.checksum = htonl(csum);
}
/* /*
* encrypt the response packet * encrypt the response packet
*/ */
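
The helper removed above existed because a scatterlist entry describes one (page, offset, length) run, so a buffer whose end crosses into the following page may need two entries. Its split condition can be restated with offset_in_page(); this is a worked restatement for clarity, not code from the patch:

#include <linux/mm.h> /* offset_in_page(), PAGE_SIZE */

/* True when a linear buffer extends past the end of the page holding
 * its first byte: the "sg[0].offset + buflen > PAGE_SIZE" test in the
 * removed rxkad_sg_set_buf2().
 */
static inline bool buf_straddles_page(const void *buf, size_t buflen)
{
	return offset_in_page(buf) + buflen > PAGE_SIZE;
}
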
@@ -741,17 +691,16 @@ static void rxkad_encrypt_response(struct rxrpc_connection *conn,
 {
 	SKCIPHER_REQUEST_ON_STACK(req, conn->cipher);
 	struct rxrpc_crypt iv;
-	struct scatterlist sg[2];
+	struct scatterlist sg[1];
 
 	/* continue encrypting from where we left off */
 	memcpy(&iv, s2->session_key, sizeof(iv));
-	rxkad_sg_set_buf2(sg, &resp->encrypted, sizeof(resp->encrypted));
+	sg_init_table(sg, 1);
+	sg_set_buf(sg, &resp->encrypted, sizeof(resp->encrypted));
 
 	skcipher_request_set_tfm(req, conn->cipher);
 	skcipher_request_set_callback(req, 0, NULL, NULL);
 	skcipher_request_set_crypt(req, sg, sg, sizeof(resp->encrypted), iv.x);
 	crypto_skcipher_encrypt(req);
 	skcipher_request_zero(req);
 }
@@ -818,14 +767,10 @@ static int rxkad_respond_to_challenge(struct rxrpc_connection *conn,
 	resp.kvno = htonl(token->kad->kvno);
 	resp.ticket_len = htonl(token->kad->ticket_len);
 
-	resp.encrypted.call_id[0] =
-		htonl(conn->channels[0] ? conn->channels[0]->call_id : 0);
-	resp.encrypted.call_id[1] =
-		htonl(conn->channels[1] ? conn->channels[1]->call_id : 0);
-	resp.encrypted.call_id[2] =
-		htonl(conn->channels[2] ? conn->channels[2]->call_id : 0);
-	resp.encrypted.call_id[3] =
-		htonl(conn->channels[3] ? conn->channels[3]->call_id : 0);
+	resp.encrypted.call_id[0] = htonl(conn->channels[0].call_counter);
+	resp.encrypted.call_id[1] = htonl(conn->channels[1].call_counter);
+	resp.encrypted.call_id[2] = htonl(conn->channels[2].call_counter);
+	resp.encrypted.call_id[3] = htonl(conn->channels[3].call_counter);
 
 	/* calculate the response checksum and then do the encryption */
 	rxkad_calc_response_checksum(&resp);
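
The four call_id slots in the RESPONSE are now filled from a per-channel counter instead of from call pointers that had to be tested for NULL. A sketch of the connection-side state these hunks rely on; the field names follow the diff, but the authoritative definition lives in net/rxrpc/ar-internal.h and may carry further members:

struct rxrpc_connection {
	/* ... */
	spinlock_t		channel_lock;
	struct rxrpc_channel {
		struct rxrpc_call __rcu	*call;		/* active call on this channel */
		u32			call_id;	/* ID of the current call */
		u32			call_counter;	/* call ID counter for this channel */
	} channels[RXRPC_MAXCALLS];	/* one entry per channel; RXRPC_MAXCALLS is 4 */
	/* ... */
};
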
@@ -887,10 +832,8 @@ static int rxkad_decrypt_ticket(struct rxrpc_connection *conn,
 	}
 
 	sg_init_one(&sg[0], ticket, ticket_len);
-
 	skcipher_request_set_callback(req, 0, NULL, NULL);
 	skcipher_request_set_crypt(req, sg, sg, ticket_len, iv.x);
-
 	crypto_skcipher_decrypt(req);
 	skcipher_request_free(req);
@@ -1001,7 +944,7 @@ static void rxkad_decrypt_response(struct rxrpc_connection *conn,
 				   const struct rxrpc_crypt *session_key)
 {
 	SKCIPHER_REQUEST_ON_STACK(req, rxkad_ci);
-	struct scatterlist sg[2];
+	struct scatterlist sg[1];
 	struct rxrpc_crypt iv;
 
 	_enter(",,%08x%08x",
@@ -1016,12 +959,11 @@ static void rxkad_decrypt_response(struct rxrpc_connection *conn,
 	memcpy(&iv, session_key, sizeof(iv));
 
-	rxkad_sg_set_buf2(sg, &resp->encrypted, sizeof(resp->encrypted));
+	sg_init_table(sg, 1);
+	sg_set_buf(sg, &resp->encrypted, sizeof(resp->encrypted));
 
 	skcipher_request_set_tfm(req, rxkad_ci);
 	skcipher_request_set_callback(req, 0, NULL, NULL);
 	skcipher_request_set_crypt(req, sg, sg, sizeof(resp->encrypted), iv.x);
-
 	crypto_skcipher_decrypt(req);
 	skcipher_request_zero(req);
@@ -1045,7 +987,7 @@ static int rxkad_verify_response(struct rxrpc_connection *conn,
 	void *ticket;
 	u32 abort_code, version, kvno, ticket_len, level;
 	__be32 csum;
-	int ret;
+	int ret, i;
 
 	_enter("{%d,%x}", conn->debug_id, key_serial(conn->server_key));
@@ -1108,11 +1050,26 @@ static int rxkad_verify_response(struct rxrpc_connection *conn,
 	if (response.encrypted.checksum != csum)
 		goto protocol_error_free;
 
-	if (ntohl(response.encrypted.call_id[0]) > INT_MAX ||
-	    ntohl(response.encrypted.call_id[1]) > INT_MAX ||
-	    ntohl(response.encrypted.call_id[2]) > INT_MAX ||
-	    ntohl(response.encrypted.call_id[3]) > INT_MAX)
-		goto protocol_error_free;
+	spin_lock(&conn->channel_lock);
+	for (i = 0; i < RXRPC_MAXCALLS; i++) {
+		struct rxrpc_call *call;
+		u32 call_id = ntohl(response.encrypted.call_id[i]);
+
+		if (call_id > INT_MAX)
+			goto protocol_error_unlock;
+
+		if (call_id < conn->channels[i].call_counter)
+			goto protocol_error_unlock;
+		if (call_id > conn->channels[i].call_counter) {
+			call = rcu_dereference_protected(
+				conn->channels[i].call,
+				lockdep_is_held(&conn->channel_lock));
+			if (call && call->state < RXRPC_CALL_COMPLETE)
+				goto protocol_error_unlock;
+			conn->channels[i].call_counter = call_id;
+		}
+	}
+	spin_unlock(&conn->channel_lock);
 
 	abort_code = RXKADOUTOFSEQUENCE;
 	if (ntohl(response.encrypted.inc_nonce) != conn->security_nonce + 1)
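
The new loop enforces the per-channel call number rules: a RESPONSE may never move a channel's counter backwards, and may only move it forwards past a call that has already completed. Factored out as a standalone predicate it reads as follows; this is a hypothetical refactoring for clarity, not part of the patch:

/* Hypothetical restatement of the per-channel check above.  Returns 0
 * if the advertised call ID is acceptable, -EPROTO otherwise.  Must be
 * called with conn->channel_lock held.
 */
static int check_advertised_call_id(struct rxrpc_connection *conn,
				    unsigned int ch, u32 call_id)
{
	struct rxrpc_call *call;

	if (call_id > INT_MAX)
		return -EPROTO;		/* the top bit is not usable */
	if (call_id < conn->channels[ch].call_counter)
		return -EPROTO;		/* counters never go backwards */
	if (call_id > conn->channels[ch].call_counter) {
		call = rcu_dereference_protected(
			conn->channels[ch].call,
			lockdep_is_held(&conn->channel_lock));
		if (call && call->state < RXRPC_CALL_COMPLETE)
			return -EPROTO;	/* cannot skip past a live call */
		conn->channels[ch].call_counter = call_id;
	}
	return 0;
}
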
@@ -1137,6 +1094,8 @@ static int rxkad_verify_response(struct rxrpc_connection *conn,
 	_leave(" = 0");
 	return 0;
 
+protocol_error_unlock:
+	spin_unlock(&conn->channel_lock);
 protocol_error_free:
 	kfree(ticket);
 protocol_error:
...
@@ -10,32 +10,37 @@
  */
 #include <linux/ip.h>
+#include <linux/ipv6.h>
 #include <linux/udp.h>
 #include "ar-internal.h"
 
 /*
- * Set up an RxRPC address from a socket buffer.
+ * Fill out a peer address from a socket buffer containing a packet.
  */
-void rxrpc_get_addr_from_skb(struct rxrpc_local *local,
-			     const struct sk_buff *skb,
-			     struct sockaddr_rxrpc *srx)
+int rxrpc_extract_addr_from_skb(struct sockaddr_rxrpc *srx, struct sk_buff *skb)
 {
 	memset(srx, 0, sizeof(*srx));
-	srx->transport_type = local->srx.transport_type;
-	srx->transport.family = local->srx.transport.family;
 
-	/* Can we see an ipv4 UDP packet on an ipv6 UDP socket?  and vice
-	 * versa?
-	 */
-	switch (srx->transport.family) {
-	case AF_INET:
+	switch (ntohs(skb->protocol)) {
+	case ETH_P_IP:
+		srx->transport_type = SOCK_DGRAM;
+		srx->transport_len = sizeof(srx->transport.sin);
+		srx->transport.sin.sin_family = AF_INET;
 		srx->transport.sin.sin_port = udp_hdr(skb)->source;
-		srx->transport_len = sizeof(struct sockaddr_in);
-		memcpy(&srx->transport.sin.sin_addr, &ip_hdr(skb)->saddr,
-		       sizeof(struct in_addr));
-		break;
+		srx->transport.sin.sin_addr.s_addr = ip_hdr(skb)->saddr;
+		return 0;
+
+	case ETH_P_IPV6:
+		srx->transport_type = SOCK_DGRAM;
+		srx->transport_len = sizeof(srx->transport.sin6);
+		srx->transport.sin6.sin6_family = AF_INET6;
+		srx->transport.sin6.sin6_port = udp_hdr(skb)->source;
+		srx->transport.sin6.sin6_addr = ipv6_hdr(skb)->saddr;
+		return 0;
 
 	default:
-		BUG();
+		pr_warn_ratelimited("AF_RXRPC: Unknown eth protocol %u\n",
+				    ntohs(skb->protocol));
+		return -EAFNOSUPPORT;
 	}
 }
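
The rewritten extractor derives the peer address purely from the packet headers and returns an error for non-IP traffic instead of calling BUG(). A hedged usage sketch; the calling function is invented for illustration and the peer lookup is left as a placeholder:

/* Illustrative caller: build the peer address for an incoming packet
 * before any peer lookup is attempted.
 */
static int example_handle_packet(struct rxrpc_local *local,
				 struct sk_buff *skb)
{
	struct sockaddr_rxrpc srx;
	int ret;

	ret = rxrpc_extract_addr_from_skb(&srx, skb);
	if (ret < 0)
		return ret;	/* unknown protocol: drop, don't crash */

	/* ... look up the peer by (local, srx) and continue ... */
	return 0;
}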