- 17 Nov, 2020 33 commits
-
-
Jakub Kicinski authored
Huazhong Tan says:

====================
net: hns3: updates for -next

There are several updates relating to interrupt coalescing for the HNS3 ethernet driver, covering QL (quantity limiting, i.e. coalescing based on the frame quantity) and GL (gap limiting, i.e. coalescing based on the gap time) configuration.

change log:
V4 - remove #5~#10 from this series, which need more discussion.
V3 - fix a typo in #1 reported by Jakub Kicinski.
   - rewrite the #9 commit log.
   - remove #11 from this series.
V2 - reorder #2 & #3 to fix a compiler error.
   - fix some checkpatch warnings in #10 & #11.

previous versions:
V3: https://patchwork.ozlabs.org/project/netdev/cover/1605151998-12633-1-git-send-email-tanhuazhong@huawei.com/
V2: https://patchwork.ozlabs.org/project/netdev/cover/1604892159-19990-1-git-send-email-tanhuazhong@huawei.com/
V1: https://patchwork.ozlabs.org/project/netdev/cover/1604730681-32559-1-git-send-email-tanhuazhong@huawei.com/
====================

Link: https://lore.kernel.org/r/1605514854-11205-1-git-send-email-tanhuazhong@huawei.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Huazhong Tan authored
Besides GL (gap limiting), QL (quantity limiting) can also be modified dynamically when DIM is supported, so rename gl_adapt_enable to adapt_enable in struct hns3_enet_coalesce. Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Huazhong Tan authored
For devices whose version is V3 or above, the GL configuration can be set in 1us units, so add support for configuring this field. Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Huazhong Tan authored
For maintainability and compatibility, add support for querying the maximum value of GL. Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Huazhong Tan authored
QL (quantity limiting) means that the hardware supports interrupt coalescing based on the frame quantity. QL can be configured when int_ql_max in the device's specification is non-zero, so add support for configuring it. Also, rename the two coalesce init functions to fit their purpose. Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
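To make the GL/QL split concrete, below is a minimal sketch of how a driver's ethtool .set_coalesce callback (two-argument form, as in kernels of this era) might map the standard coalescing parameters onto a time-based GL limit and a frame-count-based QL limit. The private struct, field names and register-write helpers are hypothetical, not the actual hns3 code; only struct ethtool_coalesce and netdev_priv() are real kernel interfaces.

    #include <linux/ethtool.h>
    #include <linux/netdevice.h>

    /* Hypothetical per-device state; names chosen for illustration only. */
    struct example_priv {
            u32 gl_max_usecs;       /* max time-based (GL) limit            */
            u32 int_ql_max;         /* max frame-based (QL) limit, 0 = none */
            bool adapt_enable;      /* dynamic (DIM) adaptation enabled     */
    };

    /* Stubs standing in for the real register writes. */
    static void example_write_rx_gl(struct example_priv *priv, u32 usecs) { }
    static void example_write_rx_ql(struct example_priv *priv, u32 frames) { }

    static int example_set_coalesce(struct net_device *ndev,
                                    struct ethtool_coalesce *ec)
    {
            struct example_priv *priv = netdev_priv(ndev);

            /* GL: time-based limit, expressed in microseconds. */
            if (ec->rx_coalesce_usecs > priv->gl_max_usecs)
                    return -EINVAL;
            example_write_rx_gl(priv, ec->rx_coalesce_usecs);

            /* QL: frame-count limit, only if the device advertises it. */
            if (priv->int_ql_max) {
                    if (ec->rx_max_coalesced_frames > priv->int_ql_max)
                            return -EINVAL;
                    example_write_rx_ql(priv, ec->rx_max_coalesced_frames);
            }

            /* Let DIM adjust the limits dynamically, if requested. */
            priv->adapt_enable = ec->use_adaptive_rx_coalesce;

            return 0;
    }

From user space these parameters correspond to the usual ethtool -C rx-usecs / rx-frames / adaptive-rx knobs.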
-
Jakub Kicinski authored
Ioana Ciornei says:

====================
net: phy: add support for shared interrupts (part 2)

This patch set aims to actually add support for shared interrupts in phylib and not only for multi-PHY devices. While we are at it, streamline the interrupt handling in phylib.

For a bit of context, at the moment there are multiple phy_driver ops that deal with this subject:

- .config_intr() - Enable/disable the interrupt line.
- .ack_interrupt() - Should quiesce any interrupts that may have been fired. It's also used by phylib, in conjunction with .config_intr(), to clear any pending interrupts after the line was disabled and before it is going to be enabled.
- .did_interrupt() - Intended for multi-PHY devices with a shared IRQ line and used by phylib to discern which PHY in the package was the one that actually fired the interrupt.
- .handle_interrupt() - Completely overrides the default interrupt handling logic from phylib. The PHY driver is responsible for checking whether any interrupt was fired by the respective PHY and choosing accordingly if it's the one that should trigger the link state machine.

From my point of view, the interrupt handling in phylib has become somewhat confusing with all these callbacks that actually read the same PHY register - the interrupt status. A more streamlined approach would be to just move the responsibility of writing an interrupt handler to the driver (as any other device driver does) and make .handle_interrupt() the only way to deal with interrupts. Another advantage of this approach is that phylib would gain support for IRQs shared between different PHYs (not just multi-PHY devices), something which at the moment would require extending every PHY driver anyway in order to implement its .did_interrupt() callback and duplicate the same logic as in .ack_interrupt(). The disadvantage of making .did_interrupt() mandatory would be that we are slightly changing the semantics of the phylib API, and that would increase confusion instead of reducing it.

What I am proposing is the following:

- As a first step, make the .ack_interrupt() callback optional so that we do not break any PHY driver amid the transition.
- Every PHY driver gains a .handle_interrupt() implementation that, for the most part, would look like below:

    irq_status = phy_read(phydev, INTR_STATUS);
    if (irq_status < 0) {
            phy_error(phydev);
            return IRQ_NONE;
    }

    if (!(irq_status & irq_mask))
            return IRQ_NONE;

    phy_trigger_machine(phydev);

    return IRQ_HANDLED;

- Remove each PHY driver's implementation of .ack_interrupt() by actually taking care of quiescing any pending interrupts before enabling/after disabling the interrupt line.
- Finally, after all drivers have been ported, remove the .ack_interrupt() and .did_interrupt() callbacks from phy_driver.

This patch set is part 2 of the entire change set and it addresses the changes needed in 9 PHY drivers. The rest can be found on my Github branch here: https://github.com/IoanaCiornei/linux/commits/phylib-shared-irq
====================

Link: https://lore.kernel.org/r/20201113165226.561153-1-ciorneiioana@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
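Filled out into a compilable form, such a .handle_interrupt() implementation for an imaginary PHY could look like the sketch below. The register addresses and bit masks (MII_EXAMPLE_ISR/IMR, EXAMPLE_INTR_*) are made up for illustration; phy_read(), phy_error(), phy_trigger_machine() and the irqreturn_t codes are the real phylib/kernel interfaces the cover letter refers to.

    #include <linux/bits.h>
    #include <linux/interrupt.h>
    #include <linux/phy.h>

    /* Hypothetical register layout of an imaginary PHY. */
    #define MII_EXAMPLE_ISR         0x1a    /* interrupt status, read to clear */
    #define MII_EXAMPLE_IMR         0x1b    /* interrupt mask/enable           */
    #define EXAMPLE_INTR_LINK       BIT(0)
    #define EXAMPLE_INTR_AN_DONE    BIT(1)
    #define EXAMPLE_INTR_MASK       (EXAMPLE_INTR_LINK | EXAMPLE_INTR_AN_DONE)

    static irqreturn_t example_handle_interrupt(struct phy_device *phydev)
    {
            int irq_status;

            irq_status = phy_read(phydev, MII_EXAMPLE_ISR);
            if (irq_status < 0) {
                    phy_error(phydev);
                    return IRQ_NONE;
            }

            /* On a shared IRQ line this PHY may not be the source at all. */
            if (!(irq_status & EXAMPLE_INTR_MASK))
                    return IRQ_NONE;

            phy_trigger_machine(phydev);

            return IRQ_HANDLED;
    }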
-
Ioana Ciornei authored
In preparation for removing the .ack_interrupt() callback, we must replace its occurrences (aka phy_clear_interrupt) in the two places it is called from (phy_enable_interrupts and phy_disable_interrupts) with equivalent functionality. This means that clearing interrupts now becomes something the PHY driver is responsible for doing, before enabling interrupts and after disabling them. Make this driver follow the new contract. Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Acked-by: Alexandru Ardelean <alexandru.ardelean@analog.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
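In practice that means the clearing moves into .config_intr(), roughly as sketched below (reusing the hypothetical MII_EXAMPLE_ISR/MII_EXAMPLE_IMR registers and EXAMPLE_INTR_MASK from the sketch above). The ordering - clear before enabling, clear again after disabling - is the contract the series establishes; the register accesses themselves are illustrative.

    static int example_config_intr(struct phy_device *phydev)
    {
            int err;

            if (phydev->interrupts == PHY_INTERRUPT_ENABLED) {
                    /* Discard anything still pending before unmasking. */
                    err = phy_read(phydev, MII_EXAMPLE_ISR);
                    if (err < 0)
                            return err;

                    err = phy_write(phydev, MII_EXAMPLE_IMR, EXAMPLE_INTR_MASK);
            } else {
                    /* Mask first, then clear whatever latched meanwhile. */
                    err = phy_write(phydev, MII_EXAMPLE_IMR, 0);
                    if (err < 0)
                            return err;

                    err = phy_read(phydev, MII_EXAMPLE_ISR);
                    if (err > 0)
                            err = 0;
            }

            return err;
    }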
-
Ioana Ciornei authored
In an attempt to actually support shared IRQs in phylib, we now move the responsibility of triggering the phylib state machine, or just returning IRQ_NONE, based on the IRQ status register, to the PHY driver. Having three different IRQ handling callbacks (.handle_interrupt(), .did_interrupt() and .ack_interrupt()) is confusing, so let the PHY driver implement an IRQ handler directly, like any other device driver. Make this driver follow the new convention. Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Acked-by: Alexandru Ardelean <alexandru.ardelean@analog.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Ioana Ciornei authored
In preparation for removing the .ack_interrupt() callback, we must replace its occurrences (aka phy_clear_interrupt) in the two places it is called from (phy_enable_interrupts and phy_disable_interrupts) with equivalent functionality. This means that clearing interrupts now becomes something the PHY driver is responsible for doing, before enabling interrupts and after disabling them. Make this driver follow the new contract. Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Ioana Ciornei authored
In an attempt to actually support shared IRQs in phylib, we now move the responsibility of triggering the phylib state machine, or just returning IRQ_NONE, based on the IRQ status register, to the PHY driver. Having three different IRQ handling callbacks (.handle_interrupt(), .did_interrupt() and .ack_interrupt()) is confusing, so let the PHY driver implement an IRQ handler directly, like any other device driver. Make this driver follow the new convention. Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Ioana Ciornei authored
In preparation for removing the .ack_interrupt() callback, we must replace its occurrences (aka phy_clear_interrupt) in the two places it is called from (phy_enable_interrupts and phy_disable_interrupts) with equivalent functionality. This means that clearing interrupts now becomes something the PHY driver is responsible for doing, before enabling interrupts and after disabling them. Make this driver follow the new contract. Cc: Andre Edich <andre.edich@microchip.com> Cc: Marco Felsch <m.felsch@pengutronix.de> Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Ioana Ciornei authored
In an attempt to actually support shared IRQs in phylib, we now move the responsibility of triggering the phylib state machine, or just returning IRQ_NONE, based on the IRQ status register, to the PHY driver. Having three different IRQ handling callbacks (.handle_interrupt(), .did_interrupt() and .ack_interrupt()) is confusing, so let the PHY driver implement an IRQ handler directly, like any other device driver. Make this driver follow the new convention. Cc: Andre Edich <andre.edich@microchip.com> Cc: Marco Felsch <m.felsch@pengutronix.de> Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Ioana Ciornei authored
In preparation for removing the .ack_interrupt() callback, we must replace its occurrences (aka phy_clear_interrupt) in the two places it is called from (phy_enable_interrupts and phy_disable_interrupts) with equivalent functionality. This means that clearing interrupts now becomes something the PHY driver is responsible for doing, before enabling interrupts and after disabling them. Make this driver follow the new contract. Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Ioana Ciornei authored
In an attempt to actually support shared IRQs in phylib, we now move the responsibility of triggering the phylib state machine, or just returning IRQ_NONE, based on the IRQ status register, to the PHY driver. Having three different IRQ handling callbacks (.handle_interrupt(), .did_interrupt() and .ack_interrupt()) is confusing, so let the PHY driver implement an IRQ handler directly, like any other device driver. Make this driver follow the new convention. Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Ioana Ciornei authored
In preparation for removing the .ack_interrupt() callback, we must replace its occurrences (aka phy_clear_interrupt) in the two places it is called from (phy_enable_interrupts and phy_disable_interrupts) with equivalent functionality. This means that clearing interrupts now becomes something the PHY driver is responsible for doing, before enabling interrupts and after disabling them. Make this driver follow the new contract. Cc: Marek Vasut <marex@denx.de> Cc: Oleksij Rempel <o.rempel@pengutronix.de> Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Ioana Ciornei authored
In an attempt to actually support shared IRQs in phylib, we now move the responsibility of triggering the phylib state machine, or just returning IRQ_NONE, based on the IRQ status register, to the PHY driver. Having three different IRQ handling callbacks (.handle_interrupt(), .did_interrupt() and .ack_interrupt()) is confusing, so let the PHY driver implement an IRQ handler directly, like any other device driver. Make this driver follow the new convention. Cc: Marek Vasut <marex@denx.de> Cc: Oleksij Rempel <o.rempel@pengutronix.de> Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Ioana Ciornei authored
In preparation for removing the .ack_interrupt() callback, we must replace its occurrences (aka phy_clear_interrupt) in the two places it is called from (phy_enable_interrupts and phy_disable_interrupts) with equivalent functionality. This means that clearing interrupts now becomes something the PHY driver is responsible for doing, before enabling interrupts and after disabling them. Make this driver follow the new contract. Cc: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Ioana Ciornei authored
In an attempt to actually support shared IRQs in phylib, we now move the responsibility of triggering the phylib state machine, or just returning IRQ_NONE, based on the IRQ status register, to the PHY driver. Having three different IRQ handling callbacks (.handle_interrupt(), .did_interrupt() and .ack_interrupt()) is confusing, so let the PHY driver implement an IRQ handler directly, like any other device driver. Make this driver follow the new convention. Cc: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Ioana Ciornei authored
In preparation for removing the .ack_interrupt() callback, we must replace its occurrences (aka phy_clear_interrupt) in the two places it is called from (phy_enable_interrupts and phy_disable_interrupts) with equivalent functionality. This means that clearing interrupts now becomes something the PHY driver is responsible for doing, before enabling interrupts and after disabling them. Make this driver follow the new contract. Cc: Maxim Kochetkov <fido_max@inbox.ru> Cc: Baruch Siach <baruch@tkos.co.il> Cc: Robert Hancock <robert.hancock@calian.com> Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Tested-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Ioana Ciornei authored
In an attempt to actually support shared IRQs in phylib, we now move the responsibility of triggering the phylib state machine, or just returning IRQ_NONE, based on the IRQ status register, to the PHY driver. Having three different IRQ handling callbacks (.handle_interrupt(), .did_interrupt() and .ack_interrupt()) is confusing, so let the PHY driver implement an IRQ handler directly, like any other device driver. Make this driver follow the new convention. Cc: Maxim Kochetkov <fido_max@inbox.ru> Cc: Baruch Siach <baruch@tkos.co.il> Cc: Robert Hancock <robert.hancock@calian.com> Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Ioana Ciornei authored
In preparation for removing the .ack_interrupt() callback, we must replace its occurrences (aka phy_clear_interrupt) in the two places it is called from (phy_enable_interrupts and phy_disable_interrupts) with equivalent functionality. This means that clearing interrupts now becomes something the PHY driver is responsible for doing, before enabling interrupts and after disabling them. Make this driver follow the new contract. Cc: Nisar Sayed <Nisar.Sayed@microchip.com> Cc: Yuiko Oshino <yuiko.oshino@microchip.com> Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Ioana Ciornei authored
In an attempt to actually support shared IRQs in phylib, we now move the responsibility of triggering the phylib state machine, or just returning IRQ_NONE, based on the IRQ status register, to the PHY driver. Having three different IRQ handling callbacks (.handle_interrupt(), .did_interrupt() and .ack_interrupt()) is confusing, so let the PHY driver implement an IRQ handler directly, like any other device driver. Make this driver follow the new convention. Cc: Nisar Sayed <Nisar.Sayed@microchip.com> Cc: Yuiko Oshino <yuiko.oshino@microchip.com> Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Ioana Ciornei authored
In preparation for removing the .ack_interrupt() callback, we must replace its occurrences (aka phy_clear_interrupt) in the two places it is called from (phy_enable_interrupts and phy_disable_interrupts) with equivalent functionality. This means that clearing interrupts now becomes something the PHY driver is responsible for doing, before enabling interrupts and after disabling them. Make this driver follow the new contract. Cc: Kavya Sree Kotagiri <kavyasree.kotagiri@microchip.com> Cc: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Ioana Ciornei authored
In an attempt to actually support shared IRQs in phylib, we now move the responsibility of triggering the phylib state machine, or just returning IRQ_NONE, based on the IRQ status register, to the PHY driver. Having three different IRQ handling callbacks (.handle_interrupt(), .did_interrupt() and .ack_interrupt()) is confusing, so let the PHY driver implement an IRQ handler directly, like any other device driver. Make this driver follow the new convention. Cc: Kavya Sree Kotagiri <kavyasree.kotagiri@microchip.com> Cc: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Randy Dunlap authored
The previous Kconfig patch led to some other build errors as reported by the 0day bot and my own overnight build testing. These are all in <linux/skbuff.h> when KCOV is enabled but SKB_EXTENSIONS is not enabled, so fix those by combining those conditions in the header file. Fixes: 6370cc3b ("net: add kcov handle to skb extensions") Fixes: 85ce50d3 ("net: kcov: don't select SKB_EXTENSIONS when there is no NET") Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Reported-by: kernel test robot <lkp@intel.com> Cc: Aleksandr Nogikh <nogikh@google.com> Cc: Willem de Bruijn <willemb@google.com> Acked-by: Florian Westphal <fw@strlen.de> Link: https://lore.kernel.org/r/20201116212108.32465-1-rdunlap@infradead.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
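The shape of the fix, as described, is to make the header guard depend on both options at once. A sketch of the resulting <linux/skbuff.h> hunk is below; the real accessors are implemented via the SKB_EXT_KCOV_HANDLE skb extension (bodies elided here), and the exact hunk may differ.

    #if IS_ENABLED(CONFIG_KCOV) && IS_ENABLED(CONFIG_SKB_EXTENSIONS)
    /* Real accessors, backed by the SKB_EXT_KCOV_HANDLE extension. */
    void skb_set_kcov_handle(struct sk_buff *skb, const u64 kcov_handle);
    u64 skb_get_kcov_handle(struct sk_buff *skb);
    #else
    /* No-op stubs so !SKB_EXTENSIONS (or !KCOV) builds keep compiling. */
    static inline void skb_set_kcov_handle(struct sk_buff *skb,
                                           const u64 kcov_handle) { }
    static inline u64 skb_get_kcov_handle(struct sk_buff *skb) { return 0; }
    #endif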
-
Sven Van Asbroeck authored
The code in this driver which parses the devicetree to determine the phy/fixed-link setup can be replaced by a single library function: of_phy_get_and_connect(). Behaviour is identical, except that the library function will complain when 'phy-connection-type' is omitted, instead of blindly using PHY_INTERFACE_MODE_NA, which would result in an invalid phy configuration. The library function no longer brings out the exact phy_mode, but the driver doesn't need this, because phy_interface_is_rgmii() queries the phydev directly. Remove 'phy_mode' from the private adapter struct. While we're here, log info about the attached phy on connect; this is useful because the phy type and connection method are now fully configurable via the devicetree. Tested on a lan7430 chip with built-in phy. Verified that adding fixed-link/phy-connection-type in the devicetree results in a fixed-link setup. Used ethtool to verify that the devicetree settings are used. Tested-by: Sven Van Asbroeck <thesven73@gmail.com> # lan7430 Signed-off-by: Sven Van Asbroeck <thesven73@gmail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://lore.kernel.org/r/20201116170155.26967-1-TheSven73@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
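For reference, the library call reduces the DT handling to roughly the following in a driver's open/probe path. The netdev helper names here are placeholders rather than lan743x code, while of_phy_get_and_connect(), phy_print_status() and phy_attached_info() are the real kernel APIs.

    #include <linux/netdevice.h>
    #include <linux/of_mdio.h>
    #include <linux/phy.h>

    static void example_link_change(struct net_device *ndev)
    {
            /* Called by phylib on link up/down. */
            phy_print_status(ndev->phydev);
    }

    static int example_connect_phy(struct net_device *ndev,
                                   struct device_node *port_np)
    {
            struct phy_device *phydev;

            /* Handles both a phy-handle and a fixed-link subnode, and
             * requires phy-connection-type/phy-mode to be specified.
             */
            phydev = of_phy_get_and_connect(ndev, port_np, example_link_change);
            if (!phydev)
                    return -ENODEV;

            phy_attached_info(phydev);

            return 0;
    }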
-
Heiner Kallweit authored
Currently we print the driver name twice in phy_attached_print(): - phy_dev_info() prints it as part of the device info - and we print it as part of the info string This is a little bit ugly and makes the info harder to read, especially if the driver name is a little bit longer. Therefore omit the driver name (if set) in the info string. Example from r8169 that uses phylib: old: Generic FE-GE Realtek PHY r8169-300:00: attached PHY driver \ [Generic FE-GE Realtek PHY] (mii_bus:phy_addr=r8169-300:00, irq=IGNORE) new: Generic FE-GE Realtek PHY r8169-300:00: attached PHY driver \ (mii_bus:phy_addr=r8169-300:00, irq=IGNORE) Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://lore.kernel.org/r/8ab72586-f079-41d8-84ee-9f6a5bd97b2a@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Heiner Kallweit authored
The only time when nr_frags isn't MAX_SKB_FRAGS is when entering rtl8169_start_xmit(). However we can use MAX_SKB_FRAGS here as well, because when the queue isn't stopped there should always be room for MAX_SKB_FRAGS + 1 descriptors. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Link: https://lore.kernel.org/r/3d1f2ad7-31d5-2cac-4f4a-394f8a3cab63@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
kernel test robot authored
Condition !A || A && B is equivalent to !A || B. Generated by: scripts/coccinelle/misc/excluded_middle.cocci Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: kernel test robot <lkp@intel.com> Signed-off-by: Julia Lawall <julia.lawall@inria.fr> Reviewed-by: Antoine Tenart <atenart@kernel.org> Link: https://lore.kernel.org/r/alpine.DEB.2.22.394.2011161633240.2682@hadrien Signed-off-by: Jakub Kicinski <kuba@kernel.org>
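As a plain illustration of the transformation the semantic patch applies (the condition names are made up, not taken from the code it actually touched):

    #include <stdbool.h>

    /* Before: the middle term is redundant. */
    static bool ready_before(bool link_up, bool carrier_ok)
    {
            return !link_up || (link_up && carrier_ok);
    }

    /* After: !A || (A && B) is equivalent to !A || B. */
    static bool ready_after(bool link_up, bool carrier_ok)
    {
            return !link_up || carrier_ok;
    }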
-
Jakub Kicinski authored
Tobias Waldekranz says:

====================
net: dsa: tag_dsa: Unify regular and ethertype DSA taggers

The first patch ports tag_edsa.c's handling of IGMP/MLD traps to tag_dsa.c. That way, we start from two logically equivalent taggers that are then merged. The second commit does the heavy lifting of actually fusing tag_dsa.c and tag_edsa.c. The final one just follows up with some clean up of existing comments.

v2 -> v3:
- Add the first patch described above as suggested by Andrew.
- Better documentation of TO_SNIFFER and FORWARD tags.
- Spelling.

v1 -> v2:
- Fixed some grammar and whitespace errors.
- Removed unnecessary default value in Kconfig.
- Removed unnecessary #ifdef.
- Split out comment fixes from functional changes.
- Fully document enum dsa_code.
====================

Link: https://lore.kernel.org/r/20201114234558.31203-1-tobias@waldekranz.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Tobias Waldekranz authored
Use a consistent style of one-line/multi-line comments throughout the file. Signed-off-by: Tobias Waldekranz <tobias@waldekranz.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Tobias Waldekranz authored
Ethertype DSA encodes exactly the same information in the DSA tag as the non-ethertype variety. So refactor out the common parts and reuse them for both protocols. This ensures tag parsing and generation is always consistent across all mv88e6xxx chips. While we are at it, explicitly deal with all possible CPU codes on receive, making sure to set offload_fwd_mark as appropriate. Signed-off-by: Tobias Waldekranz <tobias@waldekranz.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Tobias Waldekranz authored
When receiving an IGMP/MLD frame with a TO_CPU tag, the switch has not performed any forwarding of it. This means that we should not set the offload_fwd_mark on the skb, in case a software bridge wants it forwarded. This is a port of: 1ed9ec9b ("dsa: Allow forwarding of redirected IGMP traffic"), which corrected the issue for chips using EDSA tags, but not for those using regular DSA tags. Signed-off-by: Tobias Waldekranz <tobias@waldekranz.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
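In tagger terms the rule is roughly the following; this is a conceptual sketch with made-up mode names, not the tag_dsa.c source, and only skb->offload_fwd_mark is the real field involved.

    #include <linux/skbuff.h>

    /* Simplified view of the two DSA tag modes that matter here. */
    enum example_dsa_mode {
            EXAMPLE_TO_CPU,         /* trapped/mirrored to the CPU, not forwarded */
            EXAMPLE_FORWARD,        /* already forwarded/flooded by the switch    */
    };

    static void example_set_fwd_mark(struct sk_buff *skb,
                                     enum example_dsa_mode mode)
    {
            /* Only claim hardware forwarding for frames the switch really
             * forwarded; trapped IGMP/MLD (TO_CPU) must leave the mark
             * clear so a software bridge may still forward them.
             */
            skb->offload_fwd_mark = (mode == EXAMPLE_FORWARD);
    }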
-
- 16 Nov, 2020 7 commits
-
-
Jakub Kicinski authored
Paolo Abeni says:

====================
mptcp: improve multiple xmit streams support

This series improves MPTCP handling of multiple concurrent xmit streams. The to-be-transmitted data is enqueued to a subflow only when the send window is open, keeping the subflows' xmit queues shorter and allowing for faster switch-over.

The above requires more accurate msk socket state tracking and some additional infrastructure to allow pushing the data pending in the msk xmit queue as soon as the MPTCP-level send window opens (patches 6-10).

As a side effect, the MPTCP socket can enqueue data to subflows after close() time, in order to completely spool the data sitting in the msk xmit queue. Dealing with this requires some infrastructure and core TCP changes (patches 1-5).

Finally, patches 11-12 introduce a more accurate tracking of the other end's receive window.

Overall this refactors the MPTCP xmit path without introducing new features - the new code is covered by the existing self-tests.

v2 -> v3:
- rebased
- fixed checkpatch issue in patch 1/13
- fixed some state tracking issues in patch 8/13

v1 -> v2:
- this is just a repost, to cope with patchwork issues, no changes at all
====================

Link: https://lore.kernel.org/r/cover.1605458224.git.pabeni@redhat.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Paolo Abeni authored
When the worker moves some bytes from the OoO queue into the receive queue, msk->ack_seq is updated, but the MPTCP-level ack carrying that value needs to wait for the next ingress packet, possibly slowing down or hanging the peer. Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Florian Westphal authored
Before sending 'x' new bytes also check that the new snd_una would be within the permitted receive window. For every ACK that also contains a DSS ack, check whether its tcp-level receive window would advance the current mptcp window right edge and update it if so. Signed-off-by: Florian Westphal <fw@strlen.de> Co-developed-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
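Conceptually the two checks amount to the sketch below; the struct, fields and helpers are made up for illustration (the real logic lives in the MPTCP option handling and scheduler paths), and the 64-bit MPTCP sequence space lets us ignore wrap-around here.

    #include <stdbool.h>
    #include <stdint.h>

    /* Simplified MPTCP-level send-window state. */
    struct example_msk {
            uint64_t snd_una;       /* oldest unacked MPTCP-level sequence    */
            uint64_t wnd_end;       /* right edge: snd_una + announced window */
    };

    /* Sender side: only push len new bytes if they fit within the window. */
    static bool example_can_send(const struct example_msk *msk,
                                 uint64_t write_seq, uint64_t len)
    {
            return write_seq + len <= msk->wnd_end;
    }

    /* On every ACK carrying a DSS ack: the TCP-level window may advance
     * the MPTCP right edge, but never move it backwards.
     */
    static void example_update_wnd(struct example_msk *msk,
                                   uint64_t dss_ack, uint64_t tcp_wnd)
    {
            uint64_t new_end = dss_ack + tcp_wnd;

            if (new_end > msk->wnd_end)
                    msk->wnd_end = new_end;
    }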
-
Florian Westphal authored
MPTCP maintains a status bit, MPTCP_SEND_SPACE, that is set when at least one subflow and the mptcp socket itself are writeable. mptcp_poll returns EPOLLOUT if the bit is set. mptcp_sendmsg makes sure MPTCP_SEND_SPACE gets cleared when the last write has used up all subflows or the mptcp socket wmem. This reworks nospace handling as follows: MPTCP_SEND_SPACE is replaced with MPTCP_NOSPACE, i.e. with inverted meaning. This bit is set when the mptcp socket is not writeable. The mptcp-level ack path will then schedule the mptcp worker to allow it to free already-acked data (and reduce wmem usage). This will then wake userspace processes that wait for a POLLOUT event. sendmsg will set MPTCP_NOSPACE only when it has to wait for more wmem (the blocking I/O case). The poll path will set MPTCP_NOSPACE in case the mptcp socket is not writeable. The normal tcp-level notification (SOCK_NOSPACE) is only enabled in case the subflow socket has no available wmem. Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
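A conceptual sketch of the reworked signalling follows; the names are placeholders (EXAMPLE_NOSPACE standing in for MPTCP_NOSPACE) and the real mptcp_poll()/worker code does considerably more.

    #include <linux/bitops.h>
    #include <linux/poll.h>

    #define EXAMPLE_NOSPACE 0       /* stand-in for the MPTCP_NOSPACE bit */

    struct example_msk {
            unsigned long flags;
            bool writeable;         /* wmem below the limit and a subflow usable */
    };

    /* poll(): advertise EPOLLOUT only while writeable, otherwise arm NOSPACE. */
    static __poll_t example_poll_out(struct example_msk *msk)
    {
            if (msk->writeable)
                    return EPOLLOUT | EPOLLWRNORM;

            set_bit(EXAMPLE_NOSPACE, &msk->flags);
            return 0;
    }

    /* MPTCP-level ack path: acks freed wmem, so a wake-up is due only if
     * somebody previously armed NOSPACE.
     */
    static bool example_ack_needs_wakeup(struct example_msk *msk)
    {
            return test_and_clear_bit(EXAMPLE_NOSPACE, &msk->flags);
    }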
-
Paolo Abeni authored
After the previous patch we may end up with unsent data in the write buffer. If such a buffer is full, the writer will block for an unlimited time. We need to trigger the MPTCP xmit path even from the subflow rx path, on MPTCP snd_una updates. Keep things simple and just schedule the work queue if needed. Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Paolo Abeni authored
mptcp_sendmsg() is refactored so that it first copies the data provided from user space into the send queue, and then tries to spool the send queue via sendmsg_frag. There is a subtle change in the mptcp-level collapsing of consecutive data fragments: we now allow it only on unsent data. The latter step no longer needs to deal with msghdr data and can be simplified considerably. snd_nxt and write_seq are now tracked independently. Overall this allows some relevant cleanup and will allow sending pending mptcp data on msk una updates in a later patch. Co-developed-by: Florian Westphal <fw@strlen.de> Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Paolo Abeni authored
We must not close the subflows before all the MPTCP-level data, including the DATA_FIN, has been acked at the MPTCP level, otherwise we could be unable to retransmit as needed. __mptcp_wr_shutdown() is responsible for checking for the correct status and closing all subflows. It is called by the output path after spooling any data and at shutdown/close time. In a similar way, __mptcp_destroy_sock() is responsible for cleaning up the MPTCP-level status, and is called when the msk transitions to TCP_CLOSE. The protocol-level close() no longer forces the TCP_CLOSE status, but orphans the msk socket and all the subflows. Orphaned msk sockets are forcibly closed after a timeout or when all MPTCP-level data is acked. There is a caveat about keeping the orphaned subflows around: the TCP stack can asynchronously call tcp_cleanup_ulp() on them via tcp_close(). To prevent accessing freed memory in later MPTCP-level operations, the msk acquires a reference to each subflow socket and prevents subflow_ulp_release() from releasing the subflow context before __mptcp_destroy_sock(). The additional subflow references are released by __mptcp_done() and the async ULP release is detected by checking the ULP ops. If that field has already been cleared by the ULP release path, the dangling context is freed directly by __mptcp_done(). Co-developed-by: Davide Caratti <dcaratti@redhat.com> Signed-off-by: Davide Caratti <dcaratti@redhat.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-