- 04 Mar, 2024 40 commits
-
-
Geliang Tang authored
This patch renames mptcp_pm_nl_get_addr_dumpit() to a dedicated in-kernel netlink PM dump-addrs function, mptcp_pm_nl_dump_addr(), and invokes a newly added wrapper, mptcp_pm_dump_addr(), from mptcp_pm_nl_get_addr_dumpit(). The wrapper invokes the in-kernel PM dump-addrs function mptcp_pm_nl_dump_addr() or the userspace PM dump-addrs function mptcp_userspace_pm_dump_addr(), depending on whether the token parameter is passed in. Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn> Reviewed-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Reviewed-by: Mat Martineau <martineau@kernel.org> Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
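A minimal sketch of the dispatch the wrapper performs, assuming the generic netlink dump context carries the parsed attributes (the exact signatures may differ):

    static int mptcp_pm_dump_addr(struct sk_buff *msg, struct netlink_callback *cb)
    {
            const struct genl_info *info = genl_info_dump(cb);

            /* A token attribute selects the userspace PM; otherwise use the
             * in-kernel netlink PM dump path. */
            if (info->attrs[MPTCP_PM_ATTR_TOKEN])
                    return mptcp_userspace_pm_dump_addr(msg, cb);

            return mptcp_pm_nl_dump_addr(msg, cb);
    }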
-
Geliang Tang authored
This patch adds the token parameter together with addr in the get-addr section of mptcp_pm.yaml, then uses the following commands to update mptcp_pm_gen.c and mptcp_pm_gen.h: ./tools/net/ynl/ynl-gen-c.py --mode kernel \ --spec Documentation/netlink/specs/mptcp_pm.yaml --source \ -o net/mptcp/mptcp_pm_gen.c ./tools/net/ynl/ynl-gen-c.py --mode kernel \ --spec Documentation/netlink/specs/mptcp_pm.yaml --header \ -o net/mptcp/mptcp_pm_gen.h Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn> Reviewed-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Reviewed-by: Mat Martineau <martineau@kernel.org> Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Geliang Tang authored
This patch implements mptcp_userspace_pm_dump_addr() to dump addresses from the userspace PM address list. Use mptcp_token_get_sock() to get the msk from the given token; if the userspace PM is enabled on it, traverse each address entry in the address list and put every entry to userspace using mptcp_pm_nl_put_entry_msg(). Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn> Reviewed-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Reviewed-by: Mat Martineau <martineau@kernel.org> Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Geliang Tang authored
This patch exports struct mptcp_genl_family and the mptcp_nl_fill_addr() helper so they can be used in pm_userspace.c. Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn> Reviewed-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Reviewed-by: Mat Martineau <martineau@kernel.org> Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Geliang Tang authored
mptcp_pm_remove_addrs_and_subflows() is only used in pm_netlink.c; it has not been used in pm_userspace.c since commit 8b1c94da ("mptcp: only send RM_ADDR in nl_cmd_remove"). So this patch changes it to a static function. Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn> Reviewed-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Alex Elder says: ==================== net: ipa: simplify device pointer access This version of the patch series fixes the bugs in the first patch (which were fixed in the second), where ipa_interrupt_config() had two remaining spots that returned a pointer rather than an integer. Outside of initialization, all uses of the platform device pointer stored in the IPA structure just derive the address of the device structure embedded within the platform device structure. By changing some of the initialization functions to take a platform device as argument, we can simplify getting at the device structure address by storing it (instead of the platform device pointer) in the IPA structure. The first two patches split the interrupt initialization code into two parts, with one part done earlier than before. The next four patches update some initialization functions to take a platform device pointer as argument. And the last patch replaces the platform device pointer with a device pointer and converts all remaining &ipa->pdev->dev references to ipa->dev. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alex Elder authored
The IPA platform device is now only used as the structure containing the IPA device structure. Replace the platform device pointer with a pointer to the device structure. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
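A sketch of the pattern described, with illustrative (not exact) allocation and field layout:

    struct ipa {
            struct device *dev;             /* was: struct platform_device *pdev */
            /* ... */
    };

    static int ipa_probe(struct platform_device *pdev)
    {
            struct ipa *ipa;

            ipa = devm_kzalloc(&pdev->dev, sizeof(*ipa), GFP_KERNEL);
            if (!ipa)
                    return -ENOMEM;

            ipa->dev = &pdev->dev;          /* later code uses ipa->dev directly */

            return 0;
    }

Only the probe path needs the platform device itself; everything else can work from the plain device pointer.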
-
Alex Elder authored
Rather than using the platform device pointer field in the IPA pointer, pass a platform device pointer to ipa_smp2p_init(). Use that pointer throughout that function. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alex Elder authored
Rather than using the platform device pointer field in the IPA pointer, pass a platform device pointer to ipa_smp2p_irq_init(). Use that pointer throughout that function (without assuming it's the same as the IPA platform device pointer). Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alex Elder authored
Rather than using the platform device pointer field in the IPA pointer, pass a platform device pointer to ipa_mem_init(). Use that pointer throughout that function. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alex Elder authored
Rather than using the platform device pointer field in the IPA pointer, pass a platform device pointer to ipa_reg_init(). Use that pointer throughout that function. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alex Elder authored
Create a new function ipa_interrupt_init() that is called at probe time to allocate and initialize the IPA interrupt data structure. Create ipa_interrupt_exit() as its inverse. This follows the normal IPA driver pattern of *_init() functions doing things that can be done before access to hardware is required. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alex Elder authored
Change the return type of ipa_interrupt_config() to be an error code rather than an IPA interrupt structure pointer, and assign the pointer within that function. Change ipa_interrupt_deconfig() to take the IPA pointer as argument and have it invalidate the ipa->interrupt pointer. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
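Roughly, the calling convention becomes the following (helper names are assumed, not the driver's):

    int ipa_interrupt_config(struct ipa *ipa)
    {
            struct ipa_interrupt *interrupt;

            interrupt = ipa_interrupt_setup(ipa);   /* assumed helper */
            if (IS_ERR(interrupt))
                    return PTR_ERR(interrupt);      /* error code, not a pointer */

            ipa->interrupt = interrupt;             /* assigned here, not by the caller */

            return 0;
    }

    void ipa_interrupt_deconfig(struct ipa *ipa)
    {
            ipa_interrupt_teardown(ipa->interrupt); /* assumed helper */
            ipa->interrupt = NULL;                  /* invalidate the pointer */
    }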
-
David S. Miller authored
Matthieu Baerts says: ==================== mptcp: add TCP_NOTSENT_LOWAT sockopt support Patch 3 does the magic of adding TCP_NOTSENT_LOWAT support; all the other ones are minor cleanups seen along the way while working on the new feature. Note that this feature relies on the existing accounting for snd_nxt. Such accounting is not fully accurate, as it tracks the most recent sequence number queued to any subflow, and not the actual sequence number sent on the wire. Paolo experimented a lot, trying to implement the latter, and in the end it proved to be both "too complex" and "not necessary". The complexity arises from the need for an additional lock and a lot of refactoring to introduce such protections without adding significant overhead. Additionally, snd_nxt is currently used and exposed with its current semantics by the internal packet scheduling. Introducing a different tracking will still require us to keep the old one. More interestingly, a more accurate tracking may not be strictly necessary: as the MPTCP socket enqueues data to the subflows only up to the available send window, any enqueued data is sent on the wire instantly, without any blocking operation, short of a drop in the tx path at the nft or TC layer. ==================== Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org>
-
Paolo Abeni authored
Most TCP-level socket options get an integer from user space and set the corresponding field under the msk-level socket lock. Reduce the code duplication by moving such operations into common code. Signed-off-by: Paolo Abeni <pabeni@redhat.com> Reviewed-by: Mat Martineau <martineau@kernel.org> Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Paolo Abeni authored
Add support for the TCP_NOTSENT_LOWAT socket option, storing the user-space-provided value in a new msk field and using that data to implement the _mptcp_stream_memory_free() helper, similar to the TCP one. To avoid adding more indirect calls in the fast path, open-code a variant of sk_stream_memory_free() in mptcp_sendmsg() and add direct calls to the mptcp stream memory free helper where possible. Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/464 Signed-off-by: Paolo Abeni <pabeni@redhat.com> Reviewed-by: Mat Martineau <martineau@kernel.org> Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
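The core idea, in a simplified sketch (field names follow the description above, but the exact code differs):

    static bool mptcp_stream_memory_free(const struct sock *sk, int wake)
    {
            const struct mptcp_sock *msk = mptcp_sk(sk);
            u64 notsent_bytes;

            /* Writable only while not-yet-sent data is below the threshold,
             * mirroring the TCP_NOTSENT_LOWAT behaviour of plain TCP. */
            notsent_bytes = READ_ONCE(msk->write_seq) - READ_ONCE(msk->snd_nxt);
            return notsent_bytes < READ_ONCE(msk->notsent_lowat);
    }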
-
Paolo Abeni authored
The mptcp_get_int_option() helper is needlessly open-coded in a couple of places; replace the duplicated code with calls to the helper. Signed-off-by: Paolo Abeni <pabeni@redhat.com> Reviewed-by: Mat Martineau <martineau@kernel.org> Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
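For reference, such a helper plausibly looks like this (a sketch, not necessarily the exact code):

    static int mptcp_get_int_option(struct mptcp_sock *msk, sockptr_t optval,
                                    unsigned int optlen, int *val)
    {
            if (optlen < sizeof(int))
                    return -EINVAL;

            if (copy_from_sockptr(val, optval, sizeof(*val)))
                    return -EFAULT;

            return 0;
    }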
-
Paolo Abeni authored
After commit 5cf92bba ("mptcp: re-enable sndbuf autotune"), the MPTCP_NOSPACE bit is redundant: it is always set and cleared together with SOCK_NOSPACE. Let's drop the former and always rely on the latter, removing a bunch of useless code. Signed-off-by: Paolo Abeni <pabeni@redhat.com> Reviewed-by: Mat Martineau <martineau@kernel.org> Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Breno Leitao authored
Do not set the rtnl_link_stats64 fields to zero, since they are already zeroed before ops->ndo_get_stats64 is called in the core dev_get_stats() function. Also, simplify the data collection by removing the temporary variable. Signed-off-by: Breno Leitao <leitao@debian.org> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Breno Leitao authored
With commit 34d21de9 ("net: Move {l,t,d}stats allocation to core and convert veth & vrf"), stats allocation can be done by the net core instead of by this driver. With this new approach, the driver doesn't have to bother with error handling (allocation failure checking, making sure the free happens in the right spot, etc.); this is now the core's responsibility. Remove the allocation in the nlmon driver and leverage the network core allocation. Signed-off-by: Breno Leitao <leitao@debian.org> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: David S. Miller <davem@davemloft.net>
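A sketch of the pattern that commit enables, assuming the NETDEV_PCPU_STAT_LSTATS mechanism it introduced:

    static void nlmon_setup(struct net_device *dev)
    {
            /* ... other netdev setup ... */

            /* Declare the stats type; the core then allocates and frees the
             * per-CPU lstats around register/unregister_netdevice(). */
            dev->pcpu_stat_type = NETDEV_PCPU_STAT_LSTATS;
    }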
-
Hariprasad Kelam authored
The last patch, which extended the firmware shared data to add channel data information, introduced a bug because the reserved space was not adjusted accordingly. This patch fixes the issue and also adds a BUILD_BUG check to avoid this regression in the future. Fixes: 99781449 ("Octeontx2-af: Fetch MAC channel info from firmware") Signed-off-by: Hariprasad Kelam <hkelam@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
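As an illustration of such a guard (the struct layout and size below are hypothetical, not the AF driver's real layout):

    /* Keep the overall footprint fixed: new fields must come out of the
     * reserved area, and the assertion fails the build otherwise. */
    struct fwdata_example {
            u64 channel_info[8];            /* newly added data */
            u64 reserved[56];               /* shrunk to keep the total size fixed */
    };

    static_assert(sizeof(struct fwdata_example) == 512,
                  "firmware shared data size changed; adjust the reserved field");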
-
Jakub Kicinski authored
The struct net_device poll_dev member of struct igc_q_vector was added in one of the initial commits but never used. Signed-off-by: Jakub Kicinski <kuba@kernel.org> Reviewed-by: Eric Dumazet <edumazet@google.com> Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Ziwei Xiao says: ==================== gve: Add header split support ethtool's ringparam configuration now includes a tcp-data-split field for enabling and disabling header split. These three patches use that ethtool flag to support header split in the GVE driver. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jeroen de Borst authored
To record the stats of header split packets, three stats are added to the driver's ethtool stats. - rx_hsplit_pkt is the count of packets split via header split - rx_hsplit_bytes is the count of header bytes received via header split - rx_hsplit_unsplit_pkt is the count of packets not split due to header buffer overflow or zero header length when header split is enabled Currently, it's entering the stats_update critical section more than once per packet. We plan to avoid that in a future change by letting all the stats_update happen in one place at the end of `gve_rx_poll_dqo`. Co-developed-by: Ziwei Xiao <ziweixiao@google.com> Signed-off-by: Ziwei Xiao <ziweixiao@google.com> Signed-off-by: Jeroen de Borst <jeroendb@google.com> Reviewed-by: Praveen Kaligineedi <pkaligineedi@google.com> Reviewed-by: Harshitha Ramamurthy <hramamurthy@google.com> Reviewed-by: Willem de Bruijn <willemb@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jeroen de Borst authored
Add header buffers and ethtool support to enable header split via the tcp-data-split flag in ethtool's ringparam config. A coherent DMA memory region is allocated for the header buffers. There is one header buffer per ring entry, located by calculating its offset from the header buffers' starting address. The header buffer is always copied directly into the skb and the payload is always added as frags. When there is a header buffer overflow or the header length is 0, the driver places the whole unsplit packet in frags. When toggling header split, the driver will call gve_adjust_config to set its queues appropriately. If header split is enabled by the user and the max packet buffer size is no less than 4KB, the driver will set the packet buffer size to 4KB to support TCP_ZEROCOPY_RECEIVE. Otherwise the driver will use the default 2KB packet buffer size. `ethtool -G <dev> tcp-data-split on/off` is the command to toggle header split. `ethtool -g <dev>` will show the status of header split in the `tcp-data-split` field. Co-developed-by: Ziwei Xiao <ziweixiao@google.com> Signed-off-by: Ziwei Xiao <ziweixiao@google.com> Signed-off-by: Jeroen de Borst <jeroendb@google.com> Reviewed-by: Praveen Kaligineedi <pkaligineedi@google.com> Reviewed-by: Harshitha Ramamurthy <hramamurthy@google.com> Reviewed-by: Willem de Bruijn <willemb@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
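The per-entry header buffer lookup amounts to a simple offset calculation; a hypothetical helper (names and parameters are illustrative, not the driver's):

    static void *gve_hdr_buf(void *hdr_bufs_base, u32 entry_idx, u32 hdr_buf_size)
    {
            /* One coherent DMA region, one fixed-size header buffer per entry. */
            return (u8 *)hdr_bufs_base + entry_idx * hdr_buf_size;
    }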
-
Jeroen de Borst authored
To enable header split via ethtool, we first need to query the device to get the max rx buffer size and header buffer size. Add a device option to get these values and store them in the driver. If the header buffer size received from the device is non-zero, it means header split is supported by the device. Currently the max rx buffer size is only used when header split is enabled, in which case data_buffer_size_dqo is set to the max rx buffer size. Also change data_buffer_size_dqo from int to u16, since we are modifying it and to keep it consistent with max_rx_buffer_size. Co-developed-by: Ziwei Xiao <ziweixiao@google.com> Signed-off-by: Ziwei Xiao <ziweixiao@google.com> Signed-off-by: Jeroen de Borst <jeroendb@google.com> Reviewed-by: Praveen Kaligineedi <pkaligineedi@google.com> Reviewed-by: Harshitha Ramamurthy <hramamurthy@google.com> Reviewed-by: Willem de Bruijn <willemb@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Breno Leitao authored
With commit 34d21de9 ("net: Move {l,t,d}stats allocation to core and convert veth & vrf"), stats allocation can be done by the net core instead of in this driver. With this new approach, the driver doesn't have to bother with error handling (allocation failure checking, making sure the free happens in the right spot, etc.); this is now the core's responsibility. Remove the allocation in the ip6_tunnel driver and leverage the network core allocation instead. Signed-off-by: Breno Leitao <leitao@debian.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Shannon Nelson says: ==================== ionic: code cleanup and performance tuning Brett has been performance testing and code tweaking and has come up with several improvements for our fast path operations. In a simple single thread / single queue iperf case on a 1500 MTU connection we see an improvement from 74.2 to 86.7 Gbits/sec. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Shannon Nelson authored
The MODULE_AUTHOR macro is supposed to name a person, not a company. Reviewed-by: Brett Creeley <brett.creeley@amd.com> Signed-off-by: Shannon Nelson <shannon.nelson@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
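The change amounts to something like the following (the exact string is the maintainer's choice):

    MODULE_AUTHOR("Shannon Nelson <shannon.nelson@amd.com>");      /* a person, not a company */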
-
Brett Creeley authored
Clean up the reverse-xmas-tree (local variable declaration ordering) complaints from an xmastree.py scan. Signed-off-by: Brett Creeley <brett.creeley@amd.com> Signed-off-by: Shannon Nelson <shannon.nelson@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Brett Creeley authored
Use the kernel's CQE dim table to align better with the driver's use of completion queues, and use the tx moderation when using Tx interrupts. Signed-off-by: Brett Creeley <brett.creeley@amd.com> Signed-off-by: Shannon Nelson <shannon.nelson@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Brett Creeley authored
An earlier change moved the hwstamp queue check into a helper function with an unlikely(). However, it makes more sense for the caller to decide if it's likely() or unlikely(), so make the change to support that. Signed-off-by: Brett Creeley <brett.creeley@amd.com> Signed-off-by: Shannon Nelson <shannon.nelson@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
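A sketch of the resulting split (the field and flag names are shown for illustration):

    static inline bool ionic_txq_hwstamp_enabled(const struct ionic_queue *q)
    {
            return q->features & IONIC_TXQ_F_HWSTAMP;       /* no unlikely() here */
    }

A hot-path caller can then write if (unlikely(ionic_txq_hwstamp_enabled(q))), while a configuration path uses a plain if.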
-
Shannon Nelson authored
To help make sure we're only accessing things we really need to access, we can cut down on the q->lif->netdev references by using q->dev, which is already in cache. Reviewed-by: Brett Creeley <brett.creeley@amd.com> Signed-off-by: Shannon Nelson <shannon.nelson@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Brett Creeley authored
Instead of using q->lif->netdev, just pass the netdev when it's locally defined. Signed-off-by: Brett Creeley <brett.creeley@amd.com> Signed-off-by: Shannon Nelson <shannon.nelson@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Brett Creeley authored
If there is a lot of transmit traffic, the driver can get into a situation where the device is starved because the doorbell is never rung. This can happen if xmit_more is set constantly and __netdev_tx_sent_queue() keeps returning false. Fix this by checking whether the queue needs to be stopped right before calling __netdev_tx_sent_queue(). Use MAX_SKB_FRAGS + 1 as the stop condition because that's the maximum number of frags supported for non-TSO transmit. Signed-off-by: Brett Creeley <brett.creeley@amd.com> Signed-off-by: Shannon Nelson <shannon.nelson@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
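A sketch of the resulting end-of-xmit ordering (helper names are assumed, not the driver's):

    static void ionic_tx_finish_example(struct ionic_queue *q,
                                        struct netdev_queue *ndq,
                                        unsigned int bytes)
    {
            /* Stop if another worst-case frame (MAX_SKB_FRAGS + 1 descriptors
             * for non-TSO) would not fit. */
            if (ionic_q_space_avail(q) < MAX_SKB_FRAGS + 1)
                    netif_tx_stop_queue(ndq);

            /* __netdev_tx_sent_queue() returns true when the doorbell must be
             * rung (xmit_more is false or the BQL limit was reached). */
            if (__netdev_tx_sent_queue(ndq, bytes, netdev_xmit_more()))
                    ionic_dbell_ring_example(q);    /* assumed doorbell helper */
    }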
-
Brett Creeley authored
The driver currently calls netdev_tx_completed_queue() for every Tx completion. However, this API is only meant to be called once per NAPI if any Tx work is done. Make the necessary changes to support calling netdev_tx_completed_queue() only once per NAPI. Also, use the __netdev_tx_sent_queue() API, which supports the xmit_more functionality. Signed-off-by: Brett Creeley <brett.creeley@amd.com> Signed-off-by: Shannon Nelson <shannon.nelson@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
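A simplified sketch of per-poll accounting (the completion-walk helper is assumed):

    static unsigned int ionic_tx_cq_service_example(struct ionic_cq *cq, int budget,
                                                    struct netdev_queue *ndq)
    {
            unsigned int pkts = 0, bytes = 0;
            struct sk_buff *skb;

            while (pkts < budget && (skb = ionic_tx_next_completed_skb(cq))) {
                    bytes += skb->len;
                    pkts++;
                    napi_consume_skb(skb, budget);  /* recycle via the napi skb cache */
            }

            if (pkts)
                    netdev_tx_completed_queue(ndq, pkts, bytes);    /* once per NAPI */

            return pkts;
    }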
-
Brett Creeley authored
Make use of napi_consume_skb so that skb recycling can happen by way of the napi_skb_cache. Signed-off-by: Brett Creeley <brett.creeley@amd.com> Signed-off-by: Shannon Nelson <shannon.nelson@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Brett Creeley authored
Perf was showing some hot spots in ionic_tx_descs_needed() for TSO traffic. Rework the function to return sooner where possible. Signed-off-by: Brett Creeley <brett.creeley@amd.com> Signed-off-by: Shannon Nelson <shannon.nelson@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Brett Creeley authored
Cut down the number of default Tx and Rx descriptors to save initial memory requirements. Signed-off-by: Brett Creeley <brett.creeley@amd.com> Signed-off-by: Shannon Nelson <shannon.nelson@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Brett Creeley authored
Currently the driver attempts to wake the Tx queue for every descriptor processed. However, this is overkill and can cause thrashing, since Tx xmit can be running concurrently on a different CPU than Tx clean. Fix this by refactoring Tx cq servicing into its own function, so the Tx wake code can run after processing all Tx descriptors. The driver isn't using the expected memory barriers to make sure the stop/start bits are coherent; fix this by using the correct memory barriers. Also, the driver is using the wake API during Tx xmit even though it's already scheduled; fix this by using the start API during Tx xmit. Signed-off-by: Brett Creeley <brett.creeley@amd.com> Signed-off-by: Shannon Nelson <shannon.nelson@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
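A sketch of the wake-after-clean pattern with the pairing barrier (helper names are assumed):

    static void ionic_tx_wake_check_example(struct ionic_queue *q,
                                            struct netdev_queue *ndq)
    {
            if (!netif_tx_queue_stopped(ndq))
                    return;

            /* Make the just-freed descriptors visible before re-checking space;
             * pairs with a barrier on the stop side in the xmit path. */
            smp_mb();

            if (ionic_q_space_avail(q) >= MAX_SKB_FRAGS + 1)
                    netif_tx_wake_queue(ndq);
    }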
-