- 08 Mar, 2024 40 commits
-
-
Shannon Nelson authored
A simple change to the struct ionic_queue layout removes some unnecessary padding and saves us a cacheline in the struct ionic_qcq layout.

struct ionic_queue:
  Before: /* size: 256, cachelines: 4, members: 29 */
  After:  /* size: 192, cachelines: 3, members: 29 */

struct ionic_qcq:
  Before: /* size: 2112, cachelines: 33, members: 23 */
  After:  /* size: 2048, cachelines: 32, members: 23 */

Reviewed-by: Brett Creeley <brett.creeley@amd.com> Signed-off-by: Shannon Nelson <shannon.nelson@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
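For readers who haven't chased struct padding before, a purely illustrative example (these are not the ionic structs) of how reordering fields removes holes on a 64-bit build:

struct before_layout {
	u8  flag_a;	/* offset  0, size 1; 7 bytes of padding follow */
	u64 addr_a;	/* offset  8, size 8 */
	u8  flag_b;	/* offset 16, size 1; 7 bytes of padding follow */
	u64 addr_b;	/* offset 24, size 8 */
};			/* total: 32 bytes */

struct after_layout {
	u64 addr_a;	/* offset  0, size 8 */
	u64 addr_b;	/* offset  8, size 8 */
	u8  flag_a;	/* offset 16, size 1 */
	u8  flag_b;	/* offset 17, size 1; 6 bytes of tail padding */
};			/* total: 24 bytes */

The Before/After annotations quoted in these commit messages are pahole output; running e.g. pahole -C ionic_queue on the built ionic.ko reproduces them.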
-
Shannon Nelson authored
Rearrange a few fields for better cache use and to put the flags field up into the first cacheline rather than the last.

struct ionic_qcq:
  Before: /* size: 2176, cachelines: 34, members: 23 */
  After:  /* size: 2112, cachelines: 33, members: 23 */

Reviewed-by: Brett Creeley <brett.creeley@amd.com> Signed-off-by: Shannon Nelson <shannon.nelson@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Shannon Nelson authored
Remove the idev field from ionic_queue, which saves us a bit of space, and add it into ionic_cq where there's room within some cacheline padding. Use this pointer rather than doing a multi level reference from lif->ionic. Suggested-by: Neel Patel <npatel2@amd.com> Reviewed-by: Brett Creeley <brett.creeley@amd.com> Signed-off-by: Shannon Nelson <shannon.nelson@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Shannon Nelson authored
The existing ionic_rx_frags() code is a bit of a mess and can be cleaned up by unrolling the first frag/header setup from the loop, then reworking the do-while-loop into a for-loop. We rename the function to a more descriptive ionic_rx_build_skb(). We also change a couple of related variable names for readability. Reviewed-by: Brett Creeley <brett.creeley@amd.com> Signed-off-by: Shannon Nelson <shannon.nelson@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Shannon Nelson authored
Since the AdminQ clean is a simple action called from only one place, fold it back into the service routine. Reviewed-by: Brett Creeley <brett.creeley@amd.com> Signed-off-by: Shannon Nelson <shannon.nelson@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Shannon Nelson authored
Make the desc_info structure specific to the queue type, which allows us to cut down the Rx and AdminQ descriptor sizes by not including all the fields needed for the Tx descriptors.

Before:
struct ionic_desc_info { /* size: 464, cachelines: 8, members: 6 */

After:
struct ionic_tx_desc_info { /* size: 464, cachelines: 8, members: 6 */
struct ionic_rx_desc_info { /* size: 224, cachelines: 4, members: 2 */
struct ionic_admin_desc_info { /* size: 8, cachelines: 1, members: 1 */

Suggested-by: Neel Patel <npatel2@amd.com> Reviewed-by: Brett Creeley <brett.creeley@amd.com> Signed-off-by: Shannon Nelson <shannon.nelson@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
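A hedged sketch of the shape of this split (type and field names here are illustrative, not the exact driver definitions): only the Tx path needs per-fragment DMA bookkeeping and an skb pointer, so the Rx and AdminQ variants can drop those fields.

struct tx_desc_info_like {
	unsigned int bytes;			/* Tx completion accounting */
	unsigned int nbufs;
	struct sk_buff *skb;
	struct buf_info bufs[MAX_FRAGS + 1];	/* one mapping per frag */
};

struct rx_desc_info_like {
	unsigned int nbufs;
	struct buf_info bufs[RX_MAX_SG + 1];	/* far fewer entries */
};

struct admin_desc_info_like {
	void *ctx;				/* completion context only */
};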
-
Shannon Nelson authored
With a little simple math we don't need another struct array to find the completion structs, so we can remove the ionic_cq_info altogether. This doesn't really save anything in the ionic_cq since it gets padded out to the cacheline, but it does remove the parallel array allocation of 8 * num_descriptors, or about 8 Kbytes per queue in a default configuration. Suggested-by: Neel Patel <npatel2@amd.com> Reviewed-by: Brett Creeley <brett.creeley@amd.com> Signed-off-by: Shannon Nelson <shannon.nelson@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
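The "simple math" amounts to indexing the DMA-contiguous completion ring directly; a minimal sketch, assuming base and desc_size fields on the cq:

static inline void *cq_comp_at(struct ionic_cq *cq, unsigned int idx)
{
	/* completions are fixed-size entries in one contiguous ring,
	 * so base + idx * entry_size replaces the old cq_info[] array */
	return cq->base + (size_t)idx * cq->desc_size;
}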
-
Shannon Nelson authored
By reworking the queue service routines to have their own servicing loops we can remove the cb pointer from desc_info to save another 8 bytes per descriptor. This simplifies some of the queue handling indirection, makes the code a little easier to follow, and keeps service code in one place rather than jumping between code files.

struct ionic_desc_info:
  Before: /* size: 472, cachelines: 8, members: 7 */
  After:  /* size: 464, cachelines: 8, members: 6 */

Suggested-by: Neel Patel <npatel2@amd.com> Reviewed-by: Brett Creeley <brett.creeley@amd.com> Signed-off-by: Shannon Nelson <shannon.nelson@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Shannon Nelson authored
Move the AdminQ and NotifyQ queue handling to ionic_main.c with the rest of the adminq code. Suggested-by: Neel Patel <npatel2@amd.com> Reviewed-by: Brett Creeley <brett.creeley@amd.com> Signed-off-by: Shannon Nelson <shannon.nelson@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Shannon Nelson authored
Now that we're not using desc_info pointers mapped in every q we can simplify and drop the unnecessary utility functions. Reviewed-by: Brett Creeley <brett.creeley@amd.com> Signed-off-by: Shannon Nelson <shannon.nelson@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Shannon Nelson authored
Remove the struct pointers from desc_info to use less space. Instead of pointers in every desc_info to its descriptor, we can use the queue descriptor index to find the individual desc, desc_info, and sgl structs in their parallel arrays.

struct ionic_desc_info:
  Before: /* size: 496, cachelines: 8, members: 10 */
  After:  /* size: 472, cachelines: 8, members: 7 */

Suggested-by: Neel Patel <npatel2@amd.com> Reviewed-by: Brett Creeley <brett.creeley@amd.com> Signed-off-by: Shannon Nelson <shannon.nelson@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
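A hedged sketch of the pattern (member names illustrative): the ring index that identifies a descriptor also selects the matching entry in each parallel array, so per-entry back-pointers carry no information.

static struct tx_desc_info *tx_entry(struct queue_like *q, unsigned int idx,
				     struct txq_desc **desc,
				     struct txq_sg_desc **sg_desc)
{
	/* one ring index selects the matching slot in every parallel
	 * array -- the descriptor, its SG list, and its info struct */
	*desc = &q->desc_base[idx];
	*sg_desc = &q->sg_base[idx];
	return &q->tx_info[idx];
}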
-
git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue
David S. Miller authored
Tony Nguyen says: ==================== Intel Wired LAN Driver Updates 2024-03-06 (iavf, i40e, ixgbe) This series contains updates to iavf, i40e, and ixgbe drivers. Alexey Kodanev removes duplicate calls related to cloud filters on iavf and unnecessary null checks on i40e. Maciej adds helper functions for common code relating to updating statistics for ixgbe. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Kicinski authored
Jeff was retired as the Intel driver maintainer in commit 6667df91 ("MAINTAINERS: Update MAINTAINERS for Intel ethernet drivers"), and his address bounces. But he has signed-off a lot of patches over the years so get_maintainer insists on CCing him. We haven't heard from him since he left Intel, so remapping the address via mailmap is also pointless. Add to ignored addresses. Signed-off-by: Jakub Kicinski <kuba@kernel.org> Acked-by: Tony Nguyen <anthony.l.nguyen@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Eric Dumazet says: ==================== ipv6: lockless inet6_dump_addr() This series removes RTNL locking to dump ipv6 addresses. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
We can now remove RTNL acquisition while running inet6_dump_addr(), inet6_dump_ifmcaddr() and inet6_dump_ifacaddr(). Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
inet6_dump_addr() can use the new xa_array iterator for better scalability. Make it ready for RCU-only protection. RTNL use is removed in the following patch. Also properly return 0 at the end of a dump to avoid an extra recvmsg() to get NLMSG_DONE. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
in6_dump_addrs() is called with RCU protection. There is no need to hold idev->lock to iterate through unicast addresses. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Make inet6_fill_ifaddr() lockless, and add appropriate annotations on ifa->tstamp, ifa->valid_lft, ifa->preferred_lft, ifa->ifa_proto and ifa->rt_priority. Also constify the 2nd argument of inet6_fill_ifaddr(), inet6_fill_ifmcaddr() and inet6_fill_ifacaddr(). Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Merge tag 'ipsec-next-2024-03-06' of git://git.kernel.org/pub/scm/linux/kernel/git/klassert/ipsec-next

Steffen Klassert says: ====================
1) Introduce forwarding of ICMP Error messages. That is specified in RFC 4301 but was never implemented. From Antony Antony.
2) Use KMEM_CACHE instead of kmem_cache_create in xfrm6_tunnel_init() and xfrm_policy_init(). From Kunwu Chan.
3) Do not allocate stats in the xfrm interface driver, this can be done by the net core now. From Breno Leitao.
==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Petr Machata says: ==================== Support for nexthop group statistics

ECMP is a fundamental component in L3 designs. However, it's fragile. Many factors influence whether an ECMP group will operate as intended: hash policy (i.e. the set of fields that contribute to ECMP hash calculation), neighbor validity, hash seed (which might lead to polarization) or the type of ECMP group used (hash-threshold or resilient). At the same time, collection of statistics that would help an operator determine that the group performs as desired is difficult.

A solution that we present in this patchset is to add counters to next hop group entries. For SW-datapath deployments, this will on its own allow collection and evaluation of relevant statistics. For HW-datapath deployments, we further add a way to request that HW counters be installed for a given group, in-kernel interfaces to collect the HW statistics, and netlink interfaces to query them. For example:

# ip nexthop replace id 4000 group 4001/4002 hw_stats on
# ip -s -d nexthop show id 4000
id 4000 group 4001/4002 scope global proto unspec offload hw_stats on used on
  stats:
    id 4001 packets 5002 packets_hw 5000
    id 4002 packets 4999 packets_hw 4999

The point of the patchset is visibility of ECMP balance, and that is influenced by packet headers, not their payload. Correspondingly, we only include packet counters in the statistics, not byte counters.

We also decided to model HW statistics as a nexthop group attribute, not an arbitrary nexthop one. The latter would count any traffic going through a given nexthop, regardless of which ECMP group it is in, or any at all. The reason is again that the point of the patchset is ECMP balance visibility, not arbitrary inspection of how busy a particular nexthop is. Implementation of individual-nexthop statistics is certainly possible, and could well follow the general approach we are taking in this patchset. For resilient groups, per-bucket statistics could be done in a similar manner as well.

This patchset contains the core code. mlxsw support will be sent in a follow-up patch set. This patchset progresses as follows:

- Patches #1 and #2 add support for a new next-hop object attribute, NHA_OP_FLAGS. That is meant to carry various op-specific signaling, in particular whether SW- and HW-collected nexthop stats should be part of the get or dump response. The idea is to avoid wasting message space, and time for collection of HW statistics, when the values are not needed.
- Patches #3 and #4 add SW-datapath stats and the corresponding UAPI.
- Patches #5, #6 and #7 add support for HW-datapath stats and UAPI. Individual drivers still need to contribute the appropriate HW-specific support code.

v4:
- Patch #2: s/nla_get_bitfield32/nla_get_u32/ in __nh_valid_dump_req().

v3:
- Patch #3: Convert to u64_stats_t.
- Patch #4: Give a symbolic name to the set of all valid dump flags for the NHA_OP_FLAGS attribute. Convert to u64_stats_t.
- Patch #6: Use a named constant for the NHA_HW_STATS_ENABLE policy.

v2:
- Patch #2: Change OP_FLAGS to u32, enforce through NLA_POLICY_MASK.
- Patch #3: Set err on nexthop_create_group() error path.
- Patch #4: Use uint to encode NHA_GROUP_STATS_ENTRY_PACKETS. Rename jump target in nla_put_nh_group_stats() to avoid having to rename further in the patchset.
- Patch #7: Use uint to encode NHA_GROUP_STATS_ENTRY_PACKETS_HW. Do not cancel outside of nesting in nla_put_nh_group_stats().
==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ido Schimmel authored
Add netlink support for reading NH group hardware stats. Stats collection is done through a new notifier, NEXTHOP_EVENT_HW_STATS_REPORT_DELTA. Drivers that implement HW counters for a given NH group are thereby asked to collect the stats and report back to the core by calling nh_grp_hw_stats_report_delta(). This is similar to what the netdevice L3 stats do. Besides exposing the number of packets that passed in the HW datapath, also include information on whether any driver actually realizes the counters. The core can tell based on whether it got any _report_delta() reports from the drivers. This allows enabling the statistics on a group at any time, with drivers opting into supporting them. This is also in line with what the netdevice L3 stats are doing. So as not to waste time and space, tie the collection and reporting of HW stats to a new op flag, NHA_OP_FLAG_DUMP_HW_STATS. Co-developed-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Kees Cook <keescook@chromium.org> # For the __counted_by bits Reviewed-by: David Ahern <dsahern@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
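A hedged sketch of a driver's side of that notifier (the helper my_read_hw_packets_delta() and the exact info-structure fields are assumptions; nh_grp_hw_stats_report_delta() is the in-kernel helper named above):

static int my_nh_event(struct notifier_block *nb, unsigned long event,
		       void *ptr)
{
	struct nh_notifier_info *info = ptr;
	unsigned int i;

	if (event != NEXTHOP_EVENT_HW_STATS_REPORT_DELTA)
		return NOTIFY_DONE;

	/* read each nexthop's HW counter and report how much it grew
	 * since the previous query back to the nexthop core */
	for (i = 0; i < info->nh_grp_hw_stats->num_nh; i++)
		nh_grp_hw_stats_report_delta(info->nh_grp_hw_stats, i,
					     my_read_hw_packets_delta(i));

	return NOTIFY_OK;
}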
-
Ido Schimmel authored
Add netlink support for enabling collection of HW statistics on nexthop groups. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: David Ahern <dsahern@kernel.org> Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ido Schimmel authored
Add hw_stats field to several notifier structures to communicate to the drivers that HW statistics should be configured for nexthops within a given group. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: David Ahern <dsahern@kernel.org> Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ido Schimmel authored
Add netlink support for reading NH group stats. This data is only for statistics of the traffic in the SW datapath. HW nexthop group statistics will be added in the following patches. Emission of the stats is keyed to a new op_stats flag to avoid cluttering the netlink message with stats if the user doesn't need them: NHA_OP_FLAG_DUMP_STATS. Co-developed-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ido Schimmel authored
Add nexthop group entry stats to count the number of packets forwarded via each nexthop in the group. The stats will be exposed to user space for better data path observability in the next patch. The per-CPU stats pointer is placed at the beginning of 'struct nh_grp_entry', so that all the fields accessed for the data path reside on the same cache line:

struct nh_grp_entry {
	struct nexthop *            nh;                  /*     0     8 */
	struct nh_grp_entry_stats * stats;               /*     8     8 */
	u8                          weight;              /*    16     1 */

	/* XXX 7 bytes hole, try to pack */

	union {
		struct {
			atomic_t   upper_bound;          /*    24     4 */
		} hthr;                                  /*    24     4 */
		struct {
			struct list_head uw_nh_entry;    /*    24    16 */
			u16        count_buckets;        /*    40     2 */
			u16        wants_buckets;        /*    42     2 */
		} res;                                   /*    24    24 */
	};                                               /*    24    24 */
	struct list_head            nh_list;             /*    48    16 */
	/* --- cacheline 1 boundary (64 bytes) --- */
	struct nexthop *            nh_parent;           /*    64     8 */

	/* size: 72, cachelines: 2, members: 6 */
	/* sum members: 65, holes: 1, sum holes: 7 */
	/* last cacheline: 8 bytes */
};

Co-developed-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
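A minimal sketch of the forwarding-path update this layout serves (the packets/syncp members and the u64_stats usage follow the cover letter's u64_stats_t note, but are assumptions here):

static void nh_stats_inc(struct nh_grp_entry *nhge)
{
	struct nh_grp_entry_stats *cpu_stats;

	/* per-CPU counter: no atomics or shared-cacheline bouncing on
	 * the hot path; readers sum the per-CPU values at dump time */
	cpu_stats = get_cpu_ptr(nhge->stats);
	u64_stats_update_begin(&cpu_stats->syncp);
	u64_stats_inc(&cpu_stats->packets);
	u64_stats_update_end(&cpu_stats->syncp);
	put_cpu_ptr(nhge->stats);
}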
-
Petr Machata authored
In order to add per-nexthop statistics, but still not increase netlink message size for consumers that do not care about them, there needs to be a toggle through which the user indicates their desire to get the statistics. To that end, add a new attribute, NHA_OP_FLAGS. The idea is to be able to use the attribute for carrying arbitrary operation-specific flags, i.e. not make it specific to get / dump. Add the new attribute to the get and dump policies, but do not actually allow any flags yet -- those will come later as the flags themselves are defined. Add the necessary parsing code. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: David Ahern <dsahern@kernel.org> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
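Given the cover letter's note that NHA_OP_FLAGS is a u32 enforced through NLA_POLICY_MASK, a hedged sketch of the plumbing (policy-table and helper names assumed):

static const struct nla_policy nh_policy_get_like[] = {
	[NHA_ID]	= { .type = NLA_U32 },
	/* no flags defined yet: the accepted mask is 0, so any set
	 * bit is rejected at parse time with an extack error */
	[NHA_OP_FLAGS]	= NLA_POLICY_MASK(NLA_U32, 0),
};

static u32 nh_parse_op_flags(struct nlattr **tb)
{
	return tb[NHA_OP_FLAGS] ? nla_get_u32(tb[NHA_OP_FLAGS]) : 0;
}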
-
Petr Machata authored
A following patch will introduce a new attribute, op-specific flags, to adjust the behavior of an operation. Different operations will recognize different flags.

- To make the differentiation possible, stop sharing the policies for get and del operations.
- To allow querying for presence of the attribute, have all the attribute arrays sized to NHA_MAX, regardless of what is permitted by policy, and pass the corresponding value to nlmsg_parse() as well.

Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: David Ahern <dsahern@kernel.org> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Sai Krishna authored
This patch adds TC offload support for matching TCP flags from the TCP header.

Example usage:
tc qdisc add dev eth0 ingress

TC rule to drop the TCP SYN packets (tcp_flags 0x02/0x3f matches packets with only the SYN bit set among the six low flag bits):
tc filter add dev eth0 ingress protocol ip flower ip_proto tcp tcp_flags 0x02/0x3f skip_sw action drop

Signed-off-by: Sai Krishna <saikrishnag@marvell.com> Reviewed-by: Simon Horman <horms@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
fuyuanli authored
It is useful to expose the skb addr and sock addr to user space in the tracepoint tcp_probe, so that we can get more information while monitoring the receipt of tcp data, by ebpf or other means. For example, we need to identify a packet by seq and end_seq when calculating transmit latency between layer 2 and layer 4 by ebpf, but these are not available in tcp_probe, so we can only use a kprobe hooking tcp_rcv_established to get them. We could use tcp_probe directly if the skb addr and sock addr were available, which is more efficient. Signed-off-by: fuyuanli <fuyuanli@didiglobal.com> Reviewed-by: Jason Xing <kerneljasonxing@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
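A hedged sketch of the kind of consumer this enables, assuming the new fields follow the usual net-tracepoint naming (skbaddr/skaddr; the real names can be checked in /sys/kernel/tracing/events/tcp/tcp_probe/format):

/* libbpf-style program; vmlinux.h supplies the generated
 * trace_event_raw_tcp_probe layout under CO-RE */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

SEC("tracepoint/tcp/tcp_probe")
int log_tcp_probe(struct trace_event_raw_tcp_probe *ctx)
{
	/* the raw skb/sk pointers let this event be correlated with
	 * hooks at other layers when measuring per-packet latency */
	bpf_printk("skb=%lx sk=%lx len=%u",
		   (unsigned long)ctx->skbaddr,
		   (unsigned long)ctx->skaddr,
		   ctx->data_len);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";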
-
Jakub Kicinski authored
softnet_data->time_squeeze is sometimes used as a proxy for host overload or an indication of scheduling problems. In practice this statistic is very noisy and has hard-to-grasp units - e.g. is 10 squeezes a second to be expected, or high?

Delaying network (NAPI) processing leads to drops on NIC queues but also RTT bloat, impacting pacing and CA decisions. Stalls are a little hard to detect on the Rx side, because there may simply have not been any packets received in a given period of time. Packet timestamps help a little bit, but again we don't know if packets are stale because we're not keeping up or because someone (*cough* cgroups) disabled IRQs for a long time.

We can, however, use Tx as a proxy for Rx stalls. Most drivers use combined Rx+Tx NAPIs, so if Tx gets starved so will Rx. On the Tx side we know exactly when packets get queued and completed, so there is no uncertainty.

This patch adds stall checks to BQL. Why BQL? Because it's a convenient place to add such checks, already called by most drivers, and it has copious free space in its structures (this patch adds no extra cache references or dirtying to the fast path).

The algorithm takes one parameter - max delay AKA stall threshold - and increments a counter whenever NAPI got delayed for at least that amount of time. It also records the length of the longest stall. To be precise, every time NAPI has not polled for at least stall_thrs we check if there were any Tx packets queued between the last NAPI run and now - stall_thrs/2.

Unlike the classic Tx watchdog this mechanism does not ignore stalls caused by Tx being disabled, or loss of link. I don't think the check is worth the complexity, and a stall is a stall, whether due to host overload, flow control, link down... it doesn't matter much to the application.

We have been running this detector in production at Meta for 2 years, with a threshold of 8ms. It's the lowest value where false positives become rare. There's still a constant stream of reported stalls (especially without the ksoftirqd deferral patches reverted); those who like their stall metrics to be 0 may prefer a higher value.

Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: Breno Leitao <leitao@debian.org> Signed-off-by: David S. Miller <davem@davemloft.net>
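A hedged C paraphrase of that check (field names are illustrative, not the actual dynamic-queue-limits members):

/* called from the completion (reap) path; all times in jiffies */
static void stall_check(struct dql_like *dql)
{
	unsigned long now = jiffies;

	if (dql->stall_thrs &&
	    time_after_eq(now, dql->last_reap + dql->stall_thrs)) {
		unsigned long cutoff = now - dql->stall_thrs / 2;

		/* was anything queued between the last reap and the
		 * cutoff? if so, NAPI genuinely sat on pending work */
		if (time_in_range(dql->last_queued, dql->last_reap, cutoff)) {
			dql->stall_cnt++;
			dql->stall_max = max_t(unsigned long, dql->stall_max,
					       now - dql->last_queued);
		}
	}
	dql->last_reap = now;
}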
-
Colin Ian King authored
The inlined helper function calc_tx_descs is not used and is redundant. Remove it. Cleans up clang scan build warning: drivers/net/ethernet/chelsio/cxgb4/sge.c:814:28: warning: unused function 'calc_tx_descs' [-Wunused-function] Signed-off-by: Colin Ian King <colin.i.king@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jakub Kicinski authored
Jakub Kicinski says: ==================== netdev: add per-queue statistics

Per-queue stats keep coming up, so it's about time someone laid the foundation. This series adds the uAPI, a handful of stats and sample support for bnxt. It's not very comprehensive in terms of stat types or driver support. The expectation is that the support will grow organically. If we have the basic pieces in place it will be easy for reviewers to request new stats, or use of the API in place of ethtool -S. See patch 3 for sample output.

v2: https://lore.kernel.org/all/20240229010221.2408413-1-kuba@kernel.org/
v1: https://lore.kernel.org/all/20240226211015.1244807-1-kuba@kernel.org/
rfc: https://lore.kernel.org/all/20240222223629.158254-1-kuba@kernel.org/
====================
Link: https://lore.kernel.org/r/20240306195509.1502746-1-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
Support per-queue statistics API in bnxt.

$ ethtool -S eth0
NIC statistics:
     [0]: rx_ucast_packets: 1418
     [0]: rx_mcast_packets: 178
     [0]: rx_bcast_packets: 0
     [0]: rx_discards: 0
     [0]: rx_errors: 0
     [0]: rx_ucast_bytes: 1141815
     [0]: rx_mcast_bytes: 16766
     [0]: rx_bcast_bytes: 0
     [0]: tx_ucast_packets: 1734
     ...

$ ./cli.py --spec netlink/specs/netdev.yaml \
           --dump qstats-get --json '{"scope": "queue"}'
[{'ifindex': 2, 'queue-id': 0, 'queue-type': 'rx',
  'rx-alloc-fail': 0, 'rx-bytes': 1164931, 'rx-packets': 1641},
 ...
 {'ifindex': 2, 'queue-id': 0, 'queue-type': 'tx',
  'tx-bytes': 631494, 'tx-packets': 1771},
 ...

Reset the per queue counters:
$ ethtool -L eth0 combined 4

Inspect again:

$ ./cli.py --spec netlink/specs/netdev.yaml \
           --dump qstats-get --json '{"scope": "queue"}'
[{'ifindex': 2, 'queue-id': 0, 'queue-type': 'rx',
  'rx-alloc-fail': 0, 'rx-bytes': 32397, 'rx-packets': 145},
 ...
 {'ifindex': 2, 'queue-id': 0, 'queue-type': 'tx',
  'tx-bytes': 37481, 'tx-packets': 196},
 ...

$ ethtool -S eth0 | head
NIC statistics:
     [0]: rx_ucast_packets: 174
     [0]: rx_mcast_packets: 3
     [0]: rx_bcast_packets: 0
     [0]: rx_discards: 0
     [0]: rx_errors: 0
     [0]: rx_ucast_bytes: 37151
     [0]: rx_mcast_bytes: 267
     [0]: rx_bcast_bytes: 0
     [0]: tx_ucast_packets: 267
     ...

Totals are still correct:

$ ./cli.py --spec netlink/specs/netdev.yaml --dump qstats-get
[{'ifindex': 2, 'rx-alloc-fail': 0, 'rx-bytes': 281949995,
  'rx-packets': 216524, 'tx-bytes': 52694905, 'tx-packets': 75546}]

$ ip -s link show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 14:23:f2:61:05:40 brd ff:ff:ff:ff:ff:ff
    RX:  bytes  packets errors dropped  missed   mcast
     282519546   218100      0       0       0     516
    TX:  bytes  packets errors dropped carrier collsns
      53323054    77674      0       0       0       0

Acked-by: Stanislav Fomichev <sdf@google.com> Reviewed-by: Amritha Nambiar <amritha.nambiar@intel.com> Reviewed-by: Michael Chan <michael.chan@broadcom.com> Reviewed-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Link: https://lore.kernel.org/r/20240306195509.1502746-4-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
Rx alloc failures are commonly counted by drivers. Support reporting those via netdev-genl queue stats. Acked-by: Stanislav Fomichev <sdf@google.com> Reviewed-by: Amritha Nambiar <amritha.nambiar@intel.com> Reviewed-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Link: https://lore.kernel.org/r/20240306195509.1502746-3-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
The ethtool-nl family does a good job exposing various protocol-related and IEEE/IETF statistics which used to get dumped under ethtool -S, with creative names. Queue stats don't have a netlink API yet, and remain a lion's share of ethtool -S output for new drivers. Not only is that bad because the names differ driver to driver, it is also bug-prone. Intuitively drivers try to report only the stats for active queues, but querying ethtool stats involves multiple system calls, and the number of stats is read separately from the stats themselves. Worse still, when user space asks for the values of the stats, it doesn't inform the kernel how big the buffer is. If the number of stats increases in the meantime the kernel will overflow the user buffer.

Add a netlink API for dumping queue stats. Queue information is exposed via the netdev-genl family, so add the stats there. Support per-queue and sum-for-device dumps. The latter will be useful when subsequent patches add more interesting common stats than just bytes and packets.

The API does not currently distinguish between HW and SW stats. The expectation is that the source of the stats will either not matter much (good packets) or be obvious (skb alloc errors).

Acked-by: Stanislav Fomichev <sdf@google.com> Reviewed-by: Amritha Nambiar <amritha.nambiar@intel.com> Reviewed-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Link: https://lore.kernel.org/r/20240306195509.1502746-2-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
Eric Dumazet says: ==================== net: group together hot data

While our recent structure reorganizations were focused on increasing max throughput, there is still an area where improvements are much needed. In many cases, a cpu handles one packet at a time, instead of a nice batch:

Hardware interrupt. -> Software interrupt. -> Network/Protocol stacks.

If the cpu was idle or busy in other layers, it has to pull many cache lines. This series adds a new net_hotdata structure, where some critical (and read-mostly) data used in the rx and tx paths is packed in a small number of cache lines. Synthetic benchmarks will not see much difference, but the latency of a single packet should improve. net_hotdata's current size on 64bit is 416 bytes, but it might grow in the future.

Also move RPS definitions to a new include file.
====================
Link: https://lore.kernel.org/r/20240306160031.874438-1-edumazet@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
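A hedged sketch of the shape of such a structure (the real member list differs; the names below mirror existing sysctl knobs):

struct net_hotdata_like {
	/* read-mostly fields the rx/tx fast paths would otherwise
	 * fetch from many scattered cache lines */
	struct packet_offload	ip_packet_offload;
	int			gro_normal_batch;
	int			netdev_budget;
	int			netdev_budget_usecs;
	u32			tstamp_prequeue;
} ____cacheline_aligned;

extern struct net_hotdata_like net_hotdata_like;	/* one hot instance */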
-
Eric Dumazet authored
rps_sock_flow_table and rps_cpu_mask are used in fast path. Move them to net_hotdata for better cache locality. Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Soheil Hassas Yeganeh <soheil@google.com> Reviewed-by: David Ahern <dsahern@kernel.org>
Link: https://lore.kernel.org/r/20240306160031.874438-19-edumazet@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Eric Dumazet authored
Move RPS related structures and helpers from include/linux/netdevice.h and include/net/sock.h to a new include file. Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Soheil Hassas Yeganeh <soheil@google.com> Reviewed-by: David Ahern <dsahern@kernel.org>
Link: https://lore.kernel.org/r/20240306160031.874438-18-edumazet@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Eric Dumazet authored
Use a 32bit hole in "struct net_offload" to store the remaining 32bit secrets used by TCPv6 and UDPv6. Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Soheil Hassas Yeganeh <soheil@google.com> Reviewed-by: David Ahern <dsahern@kernel.org>
Link: https://lore.kernel.org/r/20240306160031.874438-17-edumazet@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
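An illustrative layout (simplified, not the real struct net_offload) of why the addition is free on 64-bit: the function pointer forces 8-byte struct alignment, so the trailing 32-bit flags field leaves a 4-byte hole the secret can occupy without growing the struct.

struct offload_like {
	int (*gro_complete)(struct sk_buff *skb, int nhoff); /* off 0, 8 */
	unsigned int flags;	/* off  8, 4 */
	u32 secret;		/* off 12, 4: fills former tail padding */
};				/* size stays 16 on 64-bit */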
-
Eric Dumazet authored
"struct inet6_protocol" has a 32bit hole in 32bit arches. Use it to store the 32bit secret used by UDP and TCP, to increase cache locality in rx path. Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Soheil Hassas Yeganeh <soheil@google.com> Reviewed-by: David Ahern <dsahern@kernel.org> Link: https://lore.kernel.org/r/20240306160031.874438-16-edumazet@google.comSigned-off-by: Jakub Kicinski <kuba@kernel.org>
-