Commit ca0a53dc authored by David S. Miller

Merge branch 'net-hw-counters-for-soft-devices'

Ido Schimmel says:

====================
HW counters for soft devices

Petr says:

Offloading switch device drivers may be able to collect statistics of the
traffic taking place in the HW datapath that pertains to a certain soft
netdevice, such as a VLAN. In this patch set, add the necessary
infrastructure to allow exposing these statistics to the offloaded
netdevice in question, and add mlxsw offload.

Across HW platforms, the counter itself very likely constitutes a limited
resource, and the act of counting may have a performance impact. Therefore
this patch set makes the HW statistics collection opt-in and togglable from
userspace on a per-netdevice basis.

Additionally, HW devices may have various limiting conditions under which
they can realize the counter. Therefore it is also possible to query
whether the requested counter is realized by any driver. In TC parlance,
which is to a degree reused in this patch set, two values are recognized:
"request" tracks whether the user enabled collecting HW statistics, and
"used" tracks whether any HW statistics are actually collected.

In the past, this author has expressed the opinion that `a typical user
doing "ip -s l sh", including various scripts, wants to see the full
picture and not worry what's going on where'. While that would be nice,
unfortunately it cannot work:

- Packets that trap from the HW datapath to the SW datapath would be
  double counted.

  For a given netdevice, some traffic can be purely a SW artifact, and some
  may flow through the HW object corresponding to the netdevice. But some
  traffic can also get trapped to the SW datapath after bumping the HW
  counter. It is not clear how to make sure double-counting does not occur
  in the SW datapath in that case, while still making sure that possibly
  divergent SW forwarding path gets bumped as appropriate.

  So simply adding HW and SW stats may work roughly, most of the time, but
  there are scenarios where the result is nonsensical.

- HW devices will have limitations as to what type of traffic they can
  count.

  In case of mlxsw, which is part of this patch set, there is no reasonable
  way to count all traffic going through a certain netdevice, such as a
  VLAN netdevice enslaved to a bridge. It is however very simple to count
  traffic flowing through an L3 object, such as a VLAN netdevice with an IP
  address.

  Similarly for physical netdevices, the L3 object at which the counter is
  installed is the subport carrying untagged traffic.

  These are not "just counters". It is important that the user understands
  what is being counted. It would be incorrect to conflate these statistics
  with another existing statistics suite.

To that end, this patch set introduces a statistics suite called "L3
stats". This label should make it easy to understand what is being counted,
and to decide whether a given device can or cannot implement this suite for
some type of netdevice. At the same time, the code is written to make
future extensions easy, should a device pop up that can implement a
different flavor of statistics suite (say L2, or an address-family-specific
suite).

For example, using a work-in-progress iproute2[1], to turn on and then list
the counters on a VLAN netdevice:

    # ip stats set dev swp1.200 l3_stats on
    # ip stats show dev swp1.200 group offload subgroup l3_stats
    56: swp1.200: group offload subgroup l3_stats on used on
	RX:  bytes packets errors dropped  missed   mcast
		0       0      0       0       0       0
	TX:  bytes packets errors dropped carrier collsns
		0       0      0       0       0       0

The patchset progresses as follows:

- Patch #1 is a cleanup.

- In patch #2, remove the assumption that all LINK_OFFLOAD_XSTATS are
  dev-backed.

  The only attribute defined under the nest is currently
  IFLA_OFFLOAD_XSTATS_CPU_HIT. L3_STATS differs from CPU_HIT in that the
  driver that supplies the statistics is not the same as the driver that
  implements the netdevice. Make the code compatible with this in patch #2.

- In patch #3, add the possibility to filter inside nests.

  The filter_mask field of the RTM_GETSTATS header determines which
  top-level attributes should be included in the netlink response. This
  saves processing time by only including the bits that the user cares
  about instead of always dumping everything. This is doubly important
  for HW-backed statistics that would typically require a trip to the
  device to fetch the stats. In this patch, the UAPI is extended to
  allow filtering inside IFLA_STATS_LINK_OFFLOAD_XSTATS in particular,
  but the scheme is easily extensible to other nests as well. A rough
  userspace sketch of such a filtered request follows the patch
  walk-through below.

- In patch #4, propagate extack where we need it.
  In patch #5, make it possible to propagate errors from drivers to the
  user.

- In patch #6, add the in-kernel APIs for keeping track of the new stats
  suite, and the notifiers that the core uses to communicate with the
  drivers.

- In patch #7, add UAPI for obtaining the new stats suite.

- In patch #8, add a new UAPI message, RTM_SETSTATS, which will carry
  the message to toggle the newly-added stats suite.
  In patch #9, add the toggle itself.

At this point the core is ready for drivers to add support for the new
stats suite.
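
Not part of the patch set itself, but to illustrate how the pieces fit
together: a driver that can realize L3 counters would register a netdevice
notifier and answer the four new events roughly as sketched below. The
foo_counter_*() helpers are hypothetical; the real hook-up for mlxsw comes in
patch #13. Whatever is handed to netdev_offload_xstats_report_delta() is
accumulated by the core, so the driver can treat its HW counter as
read-and-clear.

    static int foo_netdevice_event(struct notifier_block *nb,
                                   unsigned long event, void *ptr)
    {
            struct netdev_notifier_offload_xstats_info *info;
            struct rtnl_hw_stats64 stats = {};

            /* Only these events carry an offload_xstats notifier info. */
            switch (event) {
            case NETDEV_OFFLOAD_XSTATS_ENABLE:
            case NETDEV_OFFLOAD_XSTATS_DISABLE:
            case NETDEV_OFFLOAD_XSTATS_REPORT_USED:
            case NETDEV_OFFLOAD_XSTATS_REPORT_DELTA:
                    break;
            default:
                    return NOTIFY_DONE;
            }

            info = ptr;
            if (info->type != NETDEV_OFFLOAD_XSTATS_TYPE_L3)
                    return NOTIFY_DONE;

            switch (event) {
            case NETDEV_OFFLOAD_XSTATS_ENABLE:
                    /* Bind a HW counter to the netdevice; may veto with extack. */
                    return notifier_from_errno(foo_counter_bind(info->info.dev,
                                                                info->info.extack));
            case NETDEV_OFFLOAD_XSTATS_DISABLE:
                    foo_counter_unbind(info->info.dev);
                    break;
            case NETDEV_OFFLOAD_XSTATS_REPORT_USED:
                    /* Confirm that the counter is actually realized in HW. */
                    netdev_offload_xstats_report_used(info->report_used);
                    break;
            case NETDEV_OFFLOAD_XSTATS_REPORT_DELTA:
                    /* Read-and-clear the HW counter, hand the delta to the core. */
                    foo_counter_read_and_clear(info->info.dev, &stats);
                    netdev_offload_xstats_report_delta(info->report_delta, &stats);
                    break;
            }

            return NOTIFY_OK;
    }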

- In patches #10, #11 and #12, apply small tweaks to mlxsw code.

- In patch #13, add support for L3 stats, which are realized as RIF
  counters.

- Finally in patch #14, a selftest is added to the net/forwarding
  directory. Technically this is a HW-specific test, in that without a HW
  implementing the counters, it just will not pass. But devices that
  support L3 statistics at all are likely to be able to reuse this
  selftest, so it seems appropriate to put it in the general forwarding
  directory.
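
As a userspace companion to the iproute2 example above, here is a rough
sketch (using libmnl; not part of this series) of the filtered RTM_GETSTATS
request that patch #3 enables. It asks for the OFFLOAD_XSTATS group only and,
within that group, for the L3 stats only. Sequence numbers, error handling
and reply parsing are omitted.

    #include <libmnl/libmnl.h>
    #include <linux/if_link.h>
    #include <linux/rtnetlink.h>
    #include <sys/socket.h>

    static void request_l3_stats(struct mnl_socket *nl, int ifindex)
    {
            char buf[MNL_SOCKET_BUFFER_SIZE];
            struct nlmsghdr *nlh = mnl_nlmsg_put_header(buf);
            struct if_stats_msg *ifsm;
            struct nlattr *filters;

            nlh->nlmsg_type = RTM_GETSTATS;
            nlh->nlmsg_flags = NLM_F_REQUEST;

            ifsm = mnl_nlmsg_put_extra_header(nlh, sizeof(*ifsm));
            ifsm->family = AF_UNSPEC;
            ifsm->ifindex = ifindex;
            /* Top level: only the IFLA_STATS_LINK_OFFLOAD_XSTATS group. */
            ifsm->filter_mask =
                    IFLA_STATS_FILTER_BIT(IFLA_STATS_LINK_OFFLOAD_XSTATS);

            /* Inside that group: only IFLA_OFFLOAD_XSTATS_L3_STATS. */
            filters = mnl_attr_nest_start(nlh, IFLA_STATS_GET_FILTERS);
            mnl_attr_put_u32(nlh, IFLA_STATS_LINK_OFFLOAD_XSTATS,
                             IFLA_STATS_FILTER_BIT(IFLA_OFFLOAD_XSTATS_L3_STATS));
            mnl_attr_nest_end(nlh, filters);

            mnl_socket_sendto(nl, nlh, nlh->nlmsg_len);
            /* ... mnl_socket_recvfrom() and attribute parsing elided ... */
    }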

We also have a netdevsim implementation, and a corresponding selftest that
verifies specifically some of the core code. We intend to contribute these
later. Interested parties can take a look at the raw code at [2].

[1] https://github.com/pmachata/iproute2/commits/soft_counters
[2] https://github.com/pmachata/linux_mlxsw/commits/petrm_soft_counters_2

v2:
- Patch #3:
    - Do not declare strict_start_type at the new policies, since they are
      used with nla_parse_nested() (sans _deprecated).
    - Use NLA_POLICY_NESTED to declare what the nest contents should be
    - Use NLA_POLICY_MASK instead of BITFIELD32 for the filtering
      attribute.
- Patch #6:
    - s/monotonous/monotonic/ in commit message
    - Use a newly-added struct rtnl_hw_stats64 for stats transfer
- Patch #7:
    - Use a newly-added struct rtnl_hw_stats64 for stats transfer
- Patch #8:
    - Do not declare strict_start_type at the new policies, since they are
      used with nla_parse_nested() (sans _deprecated).
- Patch #13:
    - Use a newly-added struct rtnl_hw_stats64 for stats transfer
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
parents cbcc44db ba95e793
......
@@ -6784,12 +6784,14 @@ static inline void mlxsw_reg_ritr_counter_pack(char *payload, u32 index,
         set_type = MLXSW_REG_RITR_COUNTER_SET_TYPE_BASIC;
     else
         set_type = MLXSW_REG_RITR_COUNTER_SET_TYPE_NO_COUNT;
-    mlxsw_reg_ritr_egress_counter_set_type_set(payload, set_type);
 
-    if (egress)
+    if (egress) {
+        mlxsw_reg_ritr_egress_counter_set_type_set(payload, set_type);
         mlxsw_reg_ritr_egress_counter_index_set(payload, index);
-    else
+    } else {
+        mlxsw_reg_ritr_ingress_counter_set_type_set(payload, set_type);
         mlxsw_reg_ritr_ingress_counter_index_set(payload, index);
+    }
 }
 
 static inline void mlxsw_reg_ritr_rif_pack(char *payload, u16 rif)
......
......
@@ -4823,6 +4823,22 @@ static int mlxsw_sp_netdevice_vxlan_event(struct mlxsw_sp *mlxsw_sp,
return 0;
}
static bool mlxsw_sp_netdevice_event_is_router(unsigned long event)
{
switch (event) {
case NETDEV_PRE_CHANGEADDR:
case NETDEV_CHANGEADDR:
case NETDEV_CHANGEMTU:
case NETDEV_OFFLOAD_XSTATS_ENABLE:
case NETDEV_OFFLOAD_XSTATS_DISABLE:
case NETDEV_OFFLOAD_XSTATS_REPORT_USED:
case NETDEV_OFFLOAD_XSTATS_REPORT_DELTA:
return true;
default:
return false;
}
}
static int mlxsw_sp_netdevice_event(struct notifier_block *nb,
unsigned long event, void *ptr)
{
......
@@ -4847,9 +4863,7 @@ static int mlxsw_sp_netdevice_event(struct notifier_block *nb,
     else if (mlxsw_sp_netdev_is_ipip_ul(mlxsw_sp, dev))
         err = mlxsw_sp_netdevice_ipip_ul_event(mlxsw_sp, dev,
                                                event, ptr);
-    else if (event == NETDEV_PRE_CHANGEADDR ||
-         event == NETDEV_CHANGEADDR ||
-         event == NETDEV_CHANGEMTU)
+    else if (mlxsw_sp_netdevice_event_is_router(event))
         err = mlxsw_sp_netdevice_router_port_event(dev, event, ptr);
     else if (mlxsw_sp_is_vrf_event(event, ptr))
         err = mlxsw_sp_netdevice_vrf_event(dev, event, ptr);
......
......
@@ -266,10 +266,10 @@ static int mlxsw_sp_dpipe_table_erif_counters_update(void *priv, bool enable)
         if (!rif)
             continue;
         if (enable)
-            mlxsw_sp_rif_counter_alloc(mlxsw_sp, rif,
+            mlxsw_sp_rif_counter_alloc(rif,
                                        MLXSW_SP_RIF_COUNTER_EGRESS);
         else
-            mlxsw_sp_rif_counter_free(mlxsw_sp, rif,
+            mlxsw_sp_rif_counter_free(rif,
                                       MLXSW_SP_RIF_COUNTER_EGRESS);
     }
     mutex_unlock(&mlxsw_sp->router->lock);
......
......
@@ -159,11 +159,9 @@ int mlxsw_sp_rif_counter_value_get(struct mlxsw_sp *mlxsw_sp,
                                    struct mlxsw_sp_rif *rif,
                                    enum mlxsw_sp_rif_counter_dir dir,
                                    u64 *cnt);
-void mlxsw_sp_rif_counter_free(struct mlxsw_sp *mlxsw_sp,
-                               struct mlxsw_sp_rif *rif,
+void mlxsw_sp_rif_counter_free(struct mlxsw_sp_rif *rif,
                                enum mlxsw_sp_rif_counter_dir dir);
-int mlxsw_sp_rif_counter_alloc(struct mlxsw_sp *mlxsw_sp,
-                               struct mlxsw_sp_rif *rif,
+int mlxsw_sp_rif_counter_alloc(struct mlxsw_sp_rif *rif,
                                enum mlxsw_sp_rif_counter_dir dir);
 struct mlxsw_sp_neigh_entry *
 mlxsw_sp_rif_neigh_next(struct mlxsw_sp_rif *rif,
......
......
@@ -1950,6 +1950,7 @@ enum netdev_ml_priv_type {
* @watchdog_dev_tracker: refcount tracker used by watchdog.
* @dev_registered_tracker: tracker for reference held while
* registered
* @offload_xstats_l3: L3 HW stats for this netdevice.
*
* FIXME: cleanup struct net_device such that network protocol info
* moves out.
......
@@ -2287,6 +2288,7 @@ struct net_device {
netdevice_tracker linkwatch_dev_tracker;
netdevice_tracker watchdog_dev_tracker;
netdevice_tracker dev_registered_tracker;
struct rtnl_hw_stats64 *offload_xstats_l3;
};
#define to_net_dev(d) container_of(d, struct net_device, dev)
......
@@ -2727,6 +2729,10 @@ enum netdev_cmd {
NETDEV_CVLAN_FILTER_DROP_INFO,
NETDEV_SVLAN_FILTER_PUSH_INFO,
NETDEV_SVLAN_FILTER_DROP_INFO,
NETDEV_OFFLOAD_XSTATS_ENABLE,
NETDEV_OFFLOAD_XSTATS_DISABLE,
NETDEV_OFFLOAD_XSTATS_REPORT_USED,
NETDEV_OFFLOAD_XSTATS_REPORT_DELTA,
};
const char *netdev_cmd_to_name(enum netdev_cmd cmd);
......
@@ -2777,6 +2783,42 @@ struct netdev_notifier_pre_changeaddr_info {
const unsigned char *dev_addr;
};
enum netdev_offload_xstats_type {
NETDEV_OFFLOAD_XSTATS_TYPE_L3 = 1,
};
struct netdev_notifier_offload_xstats_info {
struct netdev_notifier_info info; /* must be first */
enum netdev_offload_xstats_type type;
union {
/* NETDEV_OFFLOAD_XSTATS_REPORT_DELTA */
struct netdev_notifier_offload_xstats_rd *report_delta;
/* NETDEV_OFFLOAD_XSTATS_REPORT_USED */
struct netdev_notifier_offload_xstats_ru *report_used;
};
};
int netdev_offload_xstats_enable(struct net_device *dev,
enum netdev_offload_xstats_type type,
struct netlink_ext_ack *extack);
int netdev_offload_xstats_disable(struct net_device *dev,
enum netdev_offload_xstats_type type);
bool netdev_offload_xstats_enabled(const struct net_device *dev,
enum netdev_offload_xstats_type type);
int netdev_offload_xstats_get(struct net_device *dev,
enum netdev_offload_xstats_type type,
struct rtnl_hw_stats64 *stats, bool *used,
struct netlink_ext_ack *extack);
void
netdev_offload_xstats_report_delta(struct netdev_notifier_offload_xstats_rd *rd,
const struct rtnl_hw_stats64 *stats);
void
netdev_offload_xstats_report_used(struct netdev_notifier_offload_xstats_ru *ru);
void netdev_offload_xstats_push_delta(struct net_device *dev,
enum netdev_offload_xstats_type type,
const struct rtnl_hw_stats64 *stats);
static inline void netdev_notifier_info_init(struct netdev_notifier_info *info,
struct net_device *dev)
{
......
......
@@ -134,4 +134,7 @@ extern int ndo_dflt_bridge_getlink(struct sk_buff *skb, u32 pid, u32 seq,
int (*vlan_fill)(struct sk_buff *skb,
struct net_device *dev,
u32 filter_mask));
extern void rtnl_offload_xstats_notify(struct net_device *dev);
#endif /* __LINUX_RTNETLINK_H */
......
@@ -245,6 +245,21 @@ struct rtnl_link_stats64 {
__u64 rx_nohandler;
};
/* Subset of link stats useful for in-HW collection. Meaning of the fields is as
* for struct rtnl_link_stats64.
*/
struct rtnl_hw_stats64 {
__u64 rx_packets;
__u64 tx_packets;
__u64 rx_bytes;
__u64 tx_bytes;
__u64 rx_errors;
__u64 tx_errors;
__u64 rx_dropped;
__u64 tx_dropped;
__u64 multicast;
};
/* The struct should be in sync with struct ifmap */
struct rtnl_link_ifmap {
__u64 mem_start;
......
@@ -1207,6 +1222,17 @@ enum {
#define IFLA_STATS_FILTER_BIT(ATTR) (1 << (ATTR - 1))
enum {
IFLA_STATS_GETSET_UNSPEC,
IFLA_STATS_GET_FILTERS, /* Nest of IFLA_STATS_LINK_xxx, each a u32 with
* a filter mask for the corresponding group.
*/
IFLA_STATS_SET_OFFLOAD_XSTATS_L3_STATS, /* 0 or 1 as u8 */
__IFLA_STATS_GETSET_MAX,
};
#define IFLA_STATS_GETSET_MAX (__IFLA_STATS_GETSET_MAX - 1)
/* These are embedded into IFLA_STATS_LINK_XSTATS:
* [IFLA_STATS_LINK_XSTATS]
* -> [LINK_XSTATS_TYPE_xxx]
......
@@ -1224,10 +1250,21 @@ enum {
enum {
IFLA_OFFLOAD_XSTATS_UNSPEC,
IFLA_OFFLOAD_XSTATS_CPU_HIT, /* struct rtnl_link_stats64 */
IFLA_OFFLOAD_XSTATS_HW_S_INFO, /* HW stats info. A nest */
IFLA_OFFLOAD_XSTATS_L3_STATS, /* struct rtnl_hw_stats64 */
__IFLA_OFFLOAD_XSTATS_MAX
};
#define IFLA_OFFLOAD_XSTATS_MAX (__IFLA_OFFLOAD_XSTATS_MAX - 1)
enum {
IFLA_OFFLOAD_XSTATS_HW_S_INFO_UNSPEC,
IFLA_OFFLOAD_XSTATS_HW_S_INFO_REQUEST, /* u8 */
IFLA_OFFLOAD_XSTATS_HW_S_INFO_USED, /* u8 */
__IFLA_OFFLOAD_XSTATS_HW_S_INFO_MAX,
};
#define IFLA_OFFLOAD_XSTATS_HW_S_INFO_MAX \
(__IFLA_OFFLOAD_XSTATS_HW_S_INFO_MAX - 1)
/* XDP section */
#define XDP_FLAGS_UPDATE_IF_NOEXIST (1U << 0)
......
......
@@ -146,6 +146,8 @@ enum {
#define RTM_NEWSTATS RTM_NEWSTATS
RTM_GETSTATS = 94,
#define RTM_GETSTATS RTM_GETSTATS
RTM_SETSTATS,
#define RTM_SETSTATS RTM_SETSTATS
RTM_NEWCACHEREPORT = 96,
#define RTM_NEWCACHEREPORT RTM_NEWCACHEREPORT
......
@@ -765,6 +767,8 @@ enum rtnetlink_groups {
#define RTNLGRP_MCTP_IFADDR RTNLGRP_MCTP_IFADDR
RTNLGRP_TUNNEL,
#define RTNLGRP_TUNNEL RTNLGRP_TUNNEL
RTNLGRP_STATS,
#define RTNLGRP_STATS RTNLGRP_STATS
__RTNLGRP_MAX
};
#define RTNLGRP_MAX (__RTNLGRP_MAX - 1)
......
......
@@ -1622,7 +1622,8 @@ const char *netdev_cmd_to_name(enum netdev_cmd cmd)
     N(UDP_TUNNEL_DROP_INFO) N(CHANGE_TX_QUEUE_LEN)
     N(CVLAN_FILTER_PUSH_INFO) N(CVLAN_FILTER_DROP_INFO)
     N(SVLAN_FILTER_PUSH_INFO) N(SVLAN_FILTER_DROP_INFO)
-    N(PRE_CHANGEADDR)
+    N(PRE_CHANGEADDR) N(OFFLOAD_XSTATS_ENABLE) N(OFFLOAD_XSTATS_DISABLE)
+    N(OFFLOAD_XSTATS_REPORT_USED) N(OFFLOAD_XSTATS_REPORT_DELTA)
     }
 #undef N
     return "UNKNOWN_NETDEV_EVENT";
......
@@ -1939,6 +1940,32 @@ static int call_netdevice_notifiers_info(unsigned long val,
return raw_notifier_call_chain(&netdev_chain, val, info);
}
/**
* call_netdevice_notifiers_info_robust - call per-netns notifier blocks
* for and rollback on error
* @val_up: value passed unmodified to notifier function
* @val_down: value passed unmodified to the notifier function when
* recovering from an error on @val_up
* @info: notifier information data
*
* Call all per-netns network notifier blocks, but not notifier blocks on
* the global notifier chain. Parameters and return value are as for
* raw_notifier_call_chain_robust().
*/
static int
call_netdevice_notifiers_info_robust(unsigned long val_up,
unsigned long val_down,
struct netdev_notifier_info *info)
{
struct net *net = dev_net(info->dev);
ASSERT_RTNL();
return raw_notifier_call_chain_robust(&net->netdev_chain,
val_up, val_down, info);
}
static int call_netdevice_notifiers_extack(unsigned long val,
struct net_device *dev,
struct netlink_ext_ack *extack)
......
@@ -7728,6 +7755,242 @@ void netdev_bonding_info_change(struct net_device *dev,
}
EXPORT_SYMBOL(netdev_bonding_info_change);
static int netdev_offload_xstats_enable_l3(struct net_device *dev,
struct netlink_ext_ack *extack)
{
struct netdev_notifier_offload_xstats_info info = {
.info.dev = dev,
.info.extack = extack,
.type = NETDEV_OFFLOAD_XSTATS_TYPE_L3,
};
int err;
int rc;
dev->offload_xstats_l3 = kzalloc(sizeof(*dev->offload_xstats_l3),
GFP_KERNEL);
if (!dev->offload_xstats_l3)
return -ENOMEM;
rc = call_netdevice_notifiers_info_robust(NETDEV_OFFLOAD_XSTATS_ENABLE,
NETDEV_OFFLOAD_XSTATS_DISABLE,
&info.info);
err = notifier_to_errno(rc);
if (err)
goto free_stats;
return 0;
free_stats:
kfree(dev->offload_xstats_l3);
dev->offload_xstats_l3 = NULL;
return err;
}
int netdev_offload_xstats_enable(struct net_device *dev,
enum netdev_offload_xstats_type type,
struct netlink_ext_ack *extack)
{
ASSERT_RTNL();
if (netdev_offload_xstats_enabled(dev, type))
return -EALREADY;
switch (type) {
case NETDEV_OFFLOAD_XSTATS_TYPE_L3:
return netdev_offload_xstats_enable_l3(dev, extack);
}
WARN_ON(1);
return -EINVAL;
}
EXPORT_SYMBOL(netdev_offload_xstats_enable);
static void netdev_offload_xstats_disable_l3(struct net_device *dev)
{
struct netdev_notifier_offload_xstats_info info = {
.info.dev = dev,
.type = NETDEV_OFFLOAD_XSTATS_TYPE_L3,
};
call_netdevice_notifiers_info(NETDEV_OFFLOAD_XSTATS_DISABLE,
&info.info);
kfree(dev->offload_xstats_l3);
dev->offload_xstats_l3 = NULL;
}
int netdev_offload_xstats_disable(struct net_device *dev,
enum netdev_offload_xstats_type type)
{
ASSERT_RTNL();
if (!netdev_offload_xstats_enabled(dev, type))
return -EALREADY;
switch (type) {
case NETDEV_OFFLOAD_XSTATS_TYPE_L3:
netdev_offload_xstats_disable_l3(dev);
return 0;
}
WARN_ON(1);
return -EINVAL;
}
EXPORT_SYMBOL(netdev_offload_xstats_disable);
static void netdev_offload_xstats_disable_all(struct net_device *dev)
{
netdev_offload_xstats_disable(dev, NETDEV_OFFLOAD_XSTATS_TYPE_L3);
}
static struct rtnl_hw_stats64 *
netdev_offload_xstats_get_ptr(const struct net_device *dev,
enum netdev_offload_xstats_type type)
{
switch (type) {
case NETDEV_OFFLOAD_XSTATS_TYPE_L3:
return dev->offload_xstats_l3;
}
WARN_ON(1);
return NULL;
}
bool netdev_offload_xstats_enabled(const struct net_device *dev,
enum netdev_offload_xstats_type type)
{
ASSERT_RTNL();
return netdev_offload_xstats_get_ptr(dev, type);
}
EXPORT_SYMBOL(netdev_offload_xstats_enabled);
struct netdev_notifier_offload_xstats_ru {
bool used;
};
struct netdev_notifier_offload_xstats_rd {
struct rtnl_hw_stats64 stats;
bool used;
};
static void netdev_hw_stats64_add(struct rtnl_hw_stats64 *dest,
const struct rtnl_hw_stats64 *src)
{
dest->rx_packets += src->rx_packets;
dest->tx_packets += src->tx_packets;
dest->rx_bytes += src->rx_bytes;
dest->tx_bytes += src->tx_bytes;
dest->rx_errors += src->rx_errors;
dest->tx_errors += src->tx_errors;
dest->rx_dropped += src->rx_dropped;
dest->tx_dropped += src->tx_dropped;
dest->multicast += src->multicast;
}
static int netdev_offload_xstats_get_used(struct net_device *dev,
enum netdev_offload_xstats_type type,
bool *p_used,
struct netlink_ext_ack *extack)
{
struct netdev_notifier_offload_xstats_ru report_used = {};
struct netdev_notifier_offload_xstats_info info = {
.info.dev = dev,
.info.extack = extack,
.type = type,
.report_used = &report_used,
};
int rc;
WARN_ON(!netdev_offload_xstats_enabled(dev, type));
rc = call_netdevice_notifiers_info(NETDEV_OFFLOAD_XSTATS_REPORT_USED,
&info.info);
*p_used = report_used.used;
return notifier_to_errno(rc);
}
static int netdev_offload_xstats_get_stats(struct net_device *dev,
enum netdev_offload_xstats_type type,
struct rtnl_hw_stats64 *p_stats,
bool *p_used,
struct netlink_ext_ack *extack)
{
struct netdev_notifier_offload_xstats_rd report_delta = {};
struct netdev_notifier_offload_xstats_info info = {
.info.dev = dev,
.info.extack = extack,
.type = type,
.report_delta = &report_delta,
};
struct rtnl_hw_stats64 *stats;
int rc;
stats = netdev_offload_xstats_get_ptr(dev, type);
if (WARN_ON(!stats))
return -EINVAL;
rc = call_netdevice_notifiers_info(NETDEV_OFFLOAD_XSTATS_REPORT_DELTA,
&info.info);
/* Cache whatever we got, even if there was an error, otherwise the
* successful stats retrievals would get lost.
*/
netdev_hw_stats64_add(stats, &report_delta.stats);
if (p_stats)
*p_stats = *stats;
*p_used = report_delta.used;
return notifier_to_errno(rc);
}
int netdev_offload_xstats_get(struct net_device *dev,
enum netdev_offload_xstats_type type,
struct rtnl_hw_stats64 *p_stats, bool *p_used,
struct netlink_ext_ack *extack)
{
ASSERT_RTNL();
if (p_stats)
return netdev_offload_xstats_get_stats(dev, type, p_stats,
p_used, extack);
else
return netdev_offload_xstats_get_used(dev, type, p_used,
extack);
}
EXPORT_SYMBOL(netdev_offload_xstats_get);
void
netdev_offload_xstats_report_delta(struct netdev_notifier_offload_xstats_rd *report_delta,
const struct rtnl_hw_stats64 *stats)
{
report_delta->used = true;
netdev_hw_stats64_add(&report_delta->stats, stats);
}
EXPORT_SYMBOL(netdev_offload_xstats_report_delta);
void
netdev_offload_xstats_report_used(struct netdev_notifier_offload_xstats_ru *report_used)
{
report_used->used = true;
}
EXPORT_SYMBOL(netdev_offload_xstats_report_used);
void netdev_offload_xstats_push_delta(struct net_device *dev,
enum netdev_offload_xstats_type type,
const struct rtnl_hw_stats64 *p_stats)
{
struct rtnl_hw_stats64 *stats;
ASSERT_RTNL();
stats = netdev_offload_xstats_get_ptr(dev, type);
if (WARN_ON(!stats))
return;
netdev_hw_stats64_add(stats, p_stats);
}
EXPORT_SYMBOL(netdev_offload_xstats_push_delta);
/**
* netdev_get_xmit_slave - Get the xmit slave of master device
* @dev: device
......
@@ -10417,6 +10680,8 @@ void unregister_netdevice_many(struct list_head *head)
dev_xdp_uninstall(dev);
netdev_offload_xstats_disable_all(dev);
/* Notify protocols, that we are about to destroy
* this device. They should clean all the things.
*/
......
......
@@ -76,6 +76,7 @@ static const struct nlmsg_perm nlmsg_route_perms[] =
{ RTM_GETNSID, NETLINK_ROUTE_SOCKET__NLMSG_READ },
{ RTM_NEWSTATS, NETLINK_ROUTE_SOCKET__NLMSG_READ },
{ RTM_GETSTATS, NETLINK_ROUTE_SOCKET__NLMSG_READ },
{ RTM_SETSTATS, NETLINK_ROUTE_SOCKET__NLMSG_WRITE },
{ RTM_NEWCACHEREPORT, NETLINK_ROUTE_SOCKET__NLMSG_READ },
{ RTM_NEWCHAIN, NETLINK_ROUTE_SOCKET__NLMSG_WRITE },
{ RTM_DELCHAIN, NETLINK_ROUTE_SOCKET__NLMSG_WRITE },
......
#!/bin/bash
# SPDX-License-Identifier: GPL-2.0
# +--------------------+ +----------------------+
# | H1 | | H2 |
# | | | |
# | $h1.200 + | | + $h2.200 |
# | 192.0.2.1/28 | | | | 192.0.2.18/28 |
# | 2001:db8:1::1/64 | | | | 2001:db8:2::1/64 |
# | | | | | |
# | $h1 + | | + $h2 |
# | | | | | |
# +------------------|-+ +-|--------------------+
# | |
# +------------------|-------------------------|--------------------+
# | SW | | |
# | | | |
# | $rp1 + + $rp2 |
# | | | |
# | $rp1.200 + + $rp2.200 |
# | 192.0.2.2/28 192.0.2.17/28 |
# | 2001:db8:1::2/64 2001:db8:2::2/64 |
# | |
# +-----------------------------------------------------------------+
ALL_TESTS="
ping_ipv4
ping_ipv6
test_stats_rx_ipv4
test_stats_tx_ipv4
test_stats_rx_ipv6
test_stats_tx_ipv6
respin_enablement
test_stats_rx_ipv4
test_stats_tx_ipv4
test_stats_rx_ipv6
test_stats_tx_ipv6
reapply_config
ping_ipv4
ping_ipv6
test_stats_rx_ipv4
test_stats_tx_ipv4
test_stats_rx_ipv6
test_stats_tx_ipv6
test_stats_report_rx
test_stats_report_tx
test_destroy_enabled
test_double_enable
"
NUM_NETIFS=4
source lib.sh
h1_create()
{
simple_if_init $h1
vlan_create $h1 200 v$h1 192.0.2.1/28 2001:db8:1::1/64
ip route add 192.0.2.16/28 vrf v$h1 nexthop via 192.0.2.2
ip -6 route add 2001:db8:2::/64 vrf v$h1 nexthop via 2001:db8:1::2
}
h1_destroy()
{
ip -6 route del 2001:db8:2::/64 vrf v$h1 nexthop via 2001:db8:1::2
ip route del 192.0.2.16/28 vrf v$h1 nexthop via 192.0.2.2
vlan_destroy $h1 200
simple_if_fini $h1
}
h2_create()
{
simple_if_init $h2
vlan_create $h2 200 v$h2 192.0.2.18/28 2001:db8:2::1/64
ip route add 192.0.2.0/28 vrf v$h2 nexthop via 192.0.2.17
ip -6 route add 2001:db8:1::/64 vrf v$h2 nexthop via 2001:db8:2::2
}
h2_destroy()
{
ip -6 route del 2001:db8:1::/64 vrf v$h2 nexthop via 2001:db8:2::2
ip route del 192.0.2.0/28 vrf v$h2 nexthop via 192.0.2.17
vlan_destroy $h2 200
simple_if_fini $h2
}
router_rp1_200_create()
{
ip link add name $rp1.200 up \
link $rp1 addrgenmode eui64 type vlan id 200
ip address add dev $rp1.200 192.0.2.2/28
ip address add dev $rp1.200 2001:db8:1::2/64
ip stats set dev $rp1.200 l3_stats on
}
router_rp1_200_destroy()
{
ip stats set dev $rp1.200 l3_stats off
ip address del dev $rp1.200 2001:db8:1::2/64
ip address del dev $rp1.200 192.0.2.2/28
ip link del dev $rp1.200
}
router_create()
{
ip link set dev $rp1 up
router_rp1_200_create
ip link set dev $rp2 up
vlan_create $rp2 200 "" 192.0.2.17/28 2001:db8:2::2/64
}
router_destroy()
{
vlan_destroy $rp2 200
ip link set dev $rp2 down
router_rp1_200_destroy
ip link set dev $rp1 down
}
setup_prepare()
{
h1=${NETIFS[p1]}
rp1=${NETIFS[p2]}
rp2=${NETIFS[p3]}
h2=${NETIFS[p4]}
rp1mac=$(mac_get $rp1)
rp2mac=$(mac_get $rp2)
vrf_prepare
h1_create
h2_create
router_create
forwarding_enable
}
cleanup()
{
pre_cleanup
forwarding_restore
router_destroy
h2_destroy
h1_destroy
vrf_cleanup
}
ping_ipv4()
{
ping_test $h1.200 192.0.2.18 " IPv4"
}
ping_ipv6()
{
ping_test $h1.200 2001:db8:2::1 " IPv6"
}
get_l3_stat()
{
local selector=$1; shift
ip -j stats show dev $rp1.200 group offload subgroup l3_stats |
jq '.[0].stats64.'$selector
}
send_packets_rx_ipv4()
{
# Send 21 packets instead of 20, because the first one might trap and go
# through the SW datapath, which might not bump the HW counter.
$MZ $h1.200 -c 21 -d 20msec -p 100 \
-a own -b $rp1mac -A 192.0.2.1 -B 192.0.2.18 \
-q -t udp sp=54321,dp=12345
}
send_packets_rx_ipv6()
{
$MZ $h1.200 -6 -c 21 -d 20msec -p 100 \
-a own -b $rp1mac -A 2001:db8:1::1 -B 2001:db8:2::1 \
-q -t udp sp=54321,dp=12345
}
send_packets_tx_ipv4()
{
$MZ $h2.200 -c 21 -d 20msec -p 100 \
-a own -b $rp2mac -A 192.0.2.18 -B 192.0.2.1 \
-q -t udp sp=54321,dp=12345
}
send_packets_tx_ipv6()
{
$MZ $h2.200 -6 -c 21 -d 20msec -p 100 \
-a own -b $rp2mac -A 2001:db8:2::1 -B 2001:db8:1::1 \
-q -t udp sp=54321,dp=12345
}
___test_stats()
{
local dir=$1; shift
local prot=$1; shift
local a
local b
a=$(get_l3_stat ${dir}.packets)
send_packets_${dir}_${prot}
"$@"
b=$(busywait "$TC_HIT_TIMEOUT" until_counter_is ">= $a + 20" \
get_l3_stat ${dir}.packets)
check_err $? "Traffic not reflected in the counter: $a -> $b"
}
__test_stats()
{
local dir=$1; shift
local prot=$1; shift
RET=0
___test_stats "$dir" "$prot"
log_test "Test $dir packets: $prot"
}
test_stats_rx_ipv4()
{
__test_stats rx ipv4
}
test_stats_tx_ipv4()
{
__test_stats tx ipv4
}
test_stats_rx_ipv6()
{
__test_stats rx ipv6
}
test_stats_tx_ipv6()
{
__test_stats tx ipv6
}
# Make sure everything works well even after stats have been disabled and
# reenabled on the same device without touching the L3 configuration.
respin_enablement()
{
log_info "Turning stats off and on again"
ip stats set dev $rp1.200 l3_stats off
ip stats set dev $rp1.200 l3_stats on
}
# For the initial run, l3_stats is enabled on a completely set up netdevice. Now
# do it the other way around: enabling the L3 stats on an L2 netdevice, and only
# then apply the L3 configuration.
reapply_config()
{
log_info "Reapplying configuration"
router_rp1_200_destroy
ip link add name $rp1.200 link $rp1 addrgenmode none type vlan id 200
ip stats set dev $rp1.200 l3_stats on
ip link set dev $rp1.200 up addrgenmode eui64
ip address add dev $rp1.200 192.0.2.2/28
ip address add dev $rp1.200 2001:db8:1::2/64
}
__test_stats_report()
{
local dir=$1; shift
local prot=$1; shift
local a
local b
RET=0
a=$(get_l3_stat ${dir}.packets)
send_packets_${dir}_${prot}
ip address flush dev $rp1.200
b=$(busywait "$TC_HIT_TIMEOUT" until_counter_is ">= $a + 20" \
get_l3_stat ${dir}.packets)
check_err $? "Traffic not reflected in the counter: $a -> $b"
log_test "Test ${dir} packets: stats pushed on loss of L3"
ip stats set dev $rp1.200 l3_stats off
ip link del dev $rp1.200
router_rp1_200_create
}
test_stats_report_rx()
{
__test_stats_report rx ipv4
}
test_stats_report_tx()
{
__test_stats_report tx ipv4
}
test_destroy_enabled()
{
RET=0
ip link del dev $rp1.200
router_rp1_200_create
log_test "Destroy l3_stats-enabled netdev"
}
test_double_enable()
{
RET=0
___test_stats rx ipv4 \
ip stats set dev $rp1.200 l3_stats on
log_test "Test stat retention across a spurious enablement"
}
trap cleanup EXIT
setup_prepare
setup_wait
tests_run
exit $EXIT_STATUS