- 17 Mar, 2023 23 commits
-
-
Russell King (Oracle) authored
Phylink does not want the current state of the link when reading the PCS link state - it wants the latched state. Don't double-read the MII status register. Phylink will re-read as necessary to capture transient link-down events as of dbae3388 ("net: phylink: Force retrigger in case of latched link-fail indicator"). The above referenced commit is a dependency for this change, and thus this change should not be backported to any kernel that does not contain the above referenced commit. Fixes: fcb26bd2 ("net: phy: Add Synopsys DesignWare XPCS MDIO module") Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Ido Schimmel says: ==================== vxlan: Add MDB support

tl;dr
=====

This patchset implements MDB support in the VXLAN driver, allowing it to selectively forward IP multicast traffic to VTEPs with interested receivers instead of flooding it to all the VTEPs as BUM. The motivating use case is intra and inter subnet multicast forwarding using EVPN [1][2], which means that MDB entries are only installed by the user space control plane and no snooping is implemented, thereby avoiding a lot of unnecessary complexity in the kernel.

Background
==========

Both the bridge and VXLAN drivers have an FDB that allows them to forward Ethernet frames based on their destination MAC addresses and VLAN/VNI. These FDBs are managed using the same PF_BRIDGE/RTM_*NEIGH netlink messages and bridge(8) utility.

However, only the bridge driver has an MDB that allows it to selectively forward IP multicast packets to bridge ports with interested receivers behind them, based on (S, G) and (*, G) MDB entries. When these packets reach the VXLAN driver they are flooded using the "all-zeros" FDB entry (00:00:00:00:00:00). The entry either includes the list of all the VTEPs in the tenant domain (when ingress replication is used) or the multicast address of the BUM tunnel (when P2MP tunnels are used), to which all the VTEPs join.

Networks that make heavy use of multicast in the overlay can benefit from a solution that allows them to selectively forward IP multicast traffic only to VTEPs with interested receivers. Such a solution is described in the next section.

Motivation
==========

RFC 7432 [3] defines a "MAC/IP Advertisement route" (type 2) [4] that allows VTEPs in the EVPN network to advertise and learn reachability information for unicast MAC addresses. Traffic destined to a unicast MAC address can therefore be selectively forwarded to a single VTEP behind which the MAC is located.

The same is not true for IP multicast traffic. Such traffic is simply flooded as BUM to all VTEPs in the broadcast domain (BD) / subnet, regardless of whether a VTEP has interested receivers for the multicast stream or not. This is especially problematic for overlay networks that make heavy use of multicast.

The issue is addressed by RFC 9251 [1] that defines a "Selective Multicast Ethernet Tag Route" (type 6) [5] which allows VTEPs in the EVPN network to advertise multicast streams that they are interested in. This is done by having each VTEP suppress IGMP/MLD packets from being transmitted to the NVE network and instead communicate the information over BGP to other VTEPs.

The draft in [2] further extends RFC 9251 with procedures to allow efficient forwarding of IP multicast traffic not only in a given subnet, but also between different subnets in a tenant domain.

The required changes in the bridge driver to support the above were already merged in merge commit 8150f0cf ("Merge branch 'bridge-mcast-extensions-for-evpn'"). However, full support entails MDB support in the VXLAN driver so that it will be able to selectively forward IP multicast traffic only to VTEPs with interested receivers. The implementation of this MDB is described in the next section.

Implementation
==============

The user interface is extended to allow user space to specify the destination VTEP(s) and related parameters.
Example usage:

 # bridge mdb add dev vxlan0 port vxlan0 grp 239.1.1.1 permanent dst 198.51.100.1
 # bridge mdb add dev vxlan0 port vxlan0 grp 239.1.1.1 permanent dst 192.0.2.1

 $ bridge -d -s mdb show
 dev vxlan0 port vxlan0 grp 239.1.1.1 permanent filter_mode exclude proto static dst 192.0.2.1 0.00
 dev vxlan0 port vxlan0 grp 239.1.1.1 permanent filter_mode exclude proto static dst 198.51.100.1 0.00

Since the MDB is fully managed by user space and since snooping is not implemented, only permanent entries can be installed and temporary entries are rejected by the kernel.

The netlink interface is extended with a few new attributes in the RTM_NEWMDB / RTM_DELMDB request messages:

 [ struct nlmsghdr ]
 [ struct br_port_msg ]
 [ MDBA_SET_ENTRY ]
     struct br_mdb_entry
 [ MDBA_SET_ENTRY_ATTRS ]
     [ MDBE_ATTR_SOURCE ]
         struct in_addr / struct in6_addr
     [ MDBE_ATTR_SRC_LIST ]
         [ MDBE_SRC_LIST_ENTRY ]
             [ MDBE_SRCATTR_ADDRESS ]
                 struct in_addr / struct in6_addr
         [ ...]
     [ MDBE_ATTR_GROUP_MODE ]
         u8
     [ MDBE_ATTR_RTPORT ]
         u8
     [ MDBE_ATTR_DST ]        // new
         struct in_addr / struct in6_addr
     [ MDBE_ATTR_DST_PORT ]   // new
         u16
     [ MDBE_ATTR_VNI ]        // new
         u32
     [ MDBE_ATTR_IFINDEX ]    // new
         s32
     [ MDBE_ATTR_SRC_VNI ]    // new
         u32

RTM_NEWMDB / RTM_DELMDB responses and notifications are extended with corresponding attributes.

One MDB entry that can be installed in the VXLAN MDB, but not in the bridge MDB, is the catchall entry (0.0.0.0 / ::). It is used to transmit unregistered multicast traffic that is not link-local and is especially useful when inter-subnet multicast forwarding is required. See patch #12 for a detailed explanation and motivation. It is similar to the "all-zeros" FDB entry that can be installed in the VXLAN FDB, but not the bridge FDB.

"added_by_star_ex" entries
--------------------------

The bridge driver automatically installs (S, G) MDB port group entries marked as "added_by_star_ex" whenever it detects that an (S, G) entry can prevent traffic from being forwarded via a port associated with an EXCLUDE (*, G) entry. The bridge will add the port to the port group of the (S, G) entry, thereby creating a new port group entry. The complexity associated with these entries is not trivial, but it needs to reside in the bridge driver because it automatically installs MDB entries in response to snooped IGMP / MLD packets.

The same is not true for the VXLAN MDB, which is entirely managed by user space, which is fully capable of forming the correct replication lists on its own. In addition, the complexity associated with the "added_by_star_ex" entries in the VXLAN driver is higher compared to the bridge: Whenever a remote VTEP is added to the catchall entry, it needs to be added to all the existing MDB entries, as such a remote requested all the multicast traffic to be forwarded to it. Similarly, whenever a (*, G) or (S, G) entry is added, all the remotes associated with the catchall entry need to be added to it.

Given the above, this patchset does not implement support for such entries. One argument against this decision can be that in the future someone might want to populate the VXLAN MDB in response to decapsulated IGMP / MLD packets and not according to EVPN routes. Regardless of my doubts regarding this possibility, it can be implemented using a new VXLAN device knob that will also enable the "added_by_star_ex" functionality.

Testing
=======

Tested using existing VXLAN and MDB selftests under "net/" and "net/forwarding/". Added a dedicated selftest in the last patch.
Patchset overview
=================

Patches #1-#3 are small preparations in the bridge driver. I plan to submit them separately together with an MDB dump test case.

Patches #4-#6 are additional preparations centered around the extraction of the MDB netlink handlers from the bridge driver to the common rtnetlink code. This allows reusing the existing MDB netlink messages for the configuration of the VXLAN MDB.

Patches #7-#9 include more small preparations in the common rtnetlink code and the VXLAN driver.

Patch #10 implements the MDB control path in the VXLAN driver, which will allow user space to create, delete, replace and dump MDB entries.

Patches #11-#12 implement the MDB data path in the VXLAN driver, allowing it to selectively forward IP multicast traffic according to the matched MDB entry.

Patch #13 finally enables MDB support in the VXLAN driver.

iproute2 patches can be found here [6].

Note that in order to fully support the specifications in [1] and [2], additional functionality is required from the data path. However, it can be achieved using existing kernel interfaces, which is why it is not described here.

Changelog
=========

Since v1 [7]:

Patch #9: Use htons() in 'case' instead of ntohs() in 'switch'.

Since RFC [8]:

Patch #3: Use NL_ASSERT_DUMP_CTX_FITS().
Patch #3: memset the entire context when moving to the next device.
Patch #3: Reset sequence counters when moving to the next device.
Patch #3: Use NL_SET_ERR_MSG_ATTR() in rtnl_validate_mdb_entry().
Patch #7: Remove restrictions regarding mixing of multicast and unicast remote destination IPs in an MDB entry. While such a configuration does not make sense to me, it is not forbidden by the VXLAN FDB code and does not crash the kernel.
Patch #7: Fix check regarding all-zeros MDB entry and source.
Patch #11: New patch.

[1] https://datatracker.ietf.org/doc/html/rfc9251
[2] https://datatracker.ietf.org/doc/html/draft-ietf-bess-evpn-irb-mcast
[3] https://datatracker.ietf.org/doc/html/rfc7432
[4] https://datatracker.ietf.org/doc/html/rfc7432#section-7.2
[5] https://datatracker.ietf.org/doc/html/rfc9251#section-9.1
[6] https://github.com/idosch/iproute2/commits/submit/mdb_vxlan_rfc_v1
[7] https://lore.kernel.org/netdev/20230313145349.3557231-1-idosch@nvidia.com/
[8] https://lore.kernel.org/netdev/20230204170801.3897900-1-idosch@nvidia.com/
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ido Schimmel authored
Add test cases for VXLAN MDB, testing the control and data paths. Two different sets of namespaces (i.e., ns{1,2}_v4 and ns{1,2}_v6) are used in order to test VXLAN MDB with both IPv4 and IPv6 underlays, respectively.

Example truncated output:

 # ./test_vxlan_mdb.sh
 [...]
 Tests passed: 620
 Tests failed:   0

Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: Nikolay Aleksandrov <razor@blackwall.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ido Schimmel authored
Now that the VXLAN MDB control and data paths are in place we can expose the VXLAN MDB functionality to user space. Set the VXLAN MDB net device operations to the appropriate functions, thereby allowing the rtnetlink code to reach the VXLAN driver. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ido Schimmel authored
Integrate MDB support into the Tx path of the VXLAN driver, allowing it to selectively forward IP multicast traffic according to the matched MDB entry.

If MDB entries are configured (i.e., 'VXLAN_F_MDB' is set) and the packet is an IP multicast packet, perform up to three different lookups according to the following priority:

1. For an (S, G) entry, using {Source VNI, Source IP, Destination IP}.
2. For a (*, G) entry, using {Source VNI, Destination IP}.
3. For the catchall MDB entry (0.0.0.0 or ::), using the source VNI.

The catchall MDB entry is similar to the catchall FDB entry (00:00:00:00:00:00) that is currently used to transmit BUM (broadcast, unknown unicast and multicast) traffic. However, unlike the catchall FDB entry, this entry is only used to transmit unregistered IP multicast traffic that is not link-local. Therefore, when configured, the catchall FDB entry will only transmit BULL (broadcast, unknown unicast, link-local multicast) traffic.

The catchall MDB entry is useful in deployments where inter-subnet multicast forwarding is used and not all the VTEPs in a tenant domain are members in all the broadcast domains. In such deployments it is advantageous to transmit BULL (broadcast, unknown unicast and link-local multicast) and unregistered IP multicast traffic on different tunnels. If the same tunnel was used, a VTEP only interested in IP multicast traffic would also pull all the BULL traffic and drop it as it is not a member in the originating broadcast domain [1].

If the packet did not match an MDB entry (or if the packet is not an IP multicast packet), return it to the Tx path, allowing it to be forwarded according to the FDB.

If the packet did match an MDB entry, forward it to the associated remote VTEPs. However, if the entry is a (*, G) entry and the associated remote is in INCLUDE mode, then skip over it as the source IP is not in its source list (otherwise the packet would have matched on an (S, G) entry). Similarly, if the associated remote is marked as BLOCKED (can only be set on (S, G) entries), then skip over it as well, as the remote is in EXCLUDE mode and the source IP is in its source list.

[1] https://datatracker.ietf.org/doc/html/draft-ietf-bess-evpn-irb-mcast#section-2.6

Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: Nikolay Aleksandrov <razor@blackwall.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
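For illustration, the three-stage lookup cascade might look roughly like the following sketch (the helper name mdb_find() and its key layout are hypothetical, not the driver's actual symbols; IPv4 case only):

	/* Hypothetical sketch of the MDB lookup priority described above. */
	static struct vxlan_mdb_entry *mdb_lookup(struct vxlan_dev *vxlan,
						  __be32 src_vni, __be32 src,
						  __be32 dst)
	{
		struct vxlan_mdb_entry *entry;

		/* 1. (S, G): {source VNI, source IP, destination IP} */
		entry = mdb_find(vxlan, src_vni, src, dst);
		if (entry)
			return entry;

		/* 2. (*, G): {source VNI, destination IP} */
		entry = mdb_find(vxlan, src_vni, 0, dst);
		if (entry)
			return entry;

		/* 3. Catchall (0.0.0.0): source VNI only */
		return mdb_find(vxlan, src_vni, 0, 0);
	}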
-
Ido Schimmel authored
Add an internal flag to indicate whether MDB entries are configured or not. Set the flag after installing the first MDB entry and clear it before deleting the last one. The flag will be consulted by the data path which will only perform an MDB lookup if the flag is set, thereby keeping the MDB overhead to a minimum when the MDB is not used. Another option would have been to use a static key, but it is global and not per-device, unlike the current approach. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: David S. Miller <davem@davemloft.net>
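As a rough sketch, the data path check presumably reduces to something like this (assuming the flag sits alongside the other VXLAN_F_* flags in the device configuration; illustrative only):

	/* Only pay for the MDB lookup when at least one entry exists. */
	if (vxlan->cfg.flags & VXLAN_F_MDB) {
		/* ... perform the MDB lookup described in the previous patch ... */
	}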
-
Ido Schimmel authored
Implement MDB control path support, enabling the creation, deletion, replacement and dumping of MDB entries in a similar fashion to the bridge driver. Unlike the bridge driver, each entry stores a list of remote VTEPs to which matched packets need to be replicated, and not a list of bridge ports.

The motivating use case is the installation of MDB entries by a user space control plane in response to received EVPN routes. As such, only allow permanent MDB entries to be installed and do not implement snooping functionality, avoiding a lot of unnecessary complexity.

Since entries can only be modified by user space under RTNL, use RTNL as the write lock. Use RCU to ensure that MDB entries and remotes are not freed while being accessed from the data path during transmission.

In terms of uAPI, reuse the existing MDB netlink interface, but add a few new attributes to request and response messages:

* IP address of the destination VXLAN tunnel endpoint where the multicast receivers reside.
* UDP destination port number to use to connect to the remote VXLAN tunnel endpoint.
* VXLAN Network Identifier (VNI) to use to connect to the remote VXLAN tunnel endpoint. Required when Ingress Replication (IR) is used and the remote VTEP is not a member of the originating broadcast domain (VLAN/VNI) [1].
* Source VNI the MDB entry belongs to. Used only when the VXLAN device is in external mode.
* Interface index of the outgoing interface to reach the remote VXLAN tunnel endpoint. This is required when the underlay destination IP is multicast (P2MP), as the multicast routing tables are not consulted.

All the new attributes are added under the 'MDBA_SET_ENTRY_ATTRS' nest, which is strictly validated by the bridge driver, thereby automatically rejecting the new attributes.

[1] https://datatracker.ietf.org/doc/html/draft-ietf-bess-evpn-irb-mcast#section-3.2.2

Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: Nikolay Aleksandrov <razor@blackwall.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
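Hypothetical command-line usage of the new attributes, based on the iproute2 patches referenced in the cover letter (the exact keyword names may differ):

 # bridge mdb add dev vxlan0 port vxlan0 grp 239.1.1.1 permanent dst 198.51.100.1 dst_port 4789 vni 200 src_vni 100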
-
Ido Schimmel authored
Given a packet and a remote destination, the function will take care of encapsulating the packet and transmitting it to the destination. Expose it so that it could be used in subsequent patches by the MDB code to transmit a packet to the remote destination(s) stored in the MDB entry. It will allow us to keep the MDB code self-contained, not exposing its data structures to the rest of the VXLAN driver. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ido Schimmel authored
Move the helpers out of the core C file to the private header so that they could be used by the upcoming MDB code. While at it, constify the second argument of vxlan_nla_get_addr(). Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: David S. Miller <davem@davemloft.net>
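For reference, the constification presumably amounts to a one-word change in the helper's prototype (a sketch; the surrounding signature is assumed from the existing driver code):

	/* Before: int vxlan_nla_get_addr(union vxlan_addr *ip, struct nlattr *nla); */
	int vxlan_nla_get_addr(union vxlan_addr *ip, const struct nlattr *nla);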
-
Ido Schimmel authored
In the upcoming VXLAN MDB implementation, the 0.0.0.0 and :: MDB entries will act as catchall entries for unregistered IP multicast traffic in a similar fashion to the 00:00:00:00:00:00 VXLAN FDB entry that is used to transmit BUM traffic.

In deployments where inter-subnet multicast forwarding is used, not all the VTEPs in a tenant domain are members in all the broadcast domains. It is therefore advantageous to transmit BULL (broadcast, unknown unicast and link-local multicast) and unregistered IP multicast traffic on different tunnels. If the same tunnel was used, a VTEP only interested in IP multicast traffic would also pull all the BULL traffic and drop it as it is not a member in the originating broadcast domain [1].

Prepare for this change by allowing the 0.0.0.0 group address in the common rtnetlink MDB code and forbid it in the bridge driver. A similar change is not needed for IPv6 because the common code only validates that the group address is not the all-nodes address.

[1] https://datatracker.ietf.org/doc/html/draft-ietf-bess-evpn-irb-mcast#section-2.6

Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: Nikolay Aleksandrov <razor@blackwall.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
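The bridge-side rejection presumably looks something like this sketch (error message and placement are illustrative, not the actual patch):

	/* Bridge driver: the common code now accepts 0.0.0.0, so reject it here. */
	if (entry->addr.proto == htons(ETH_P_IP) && !entry->addr.u.ip4) {
		NL_SET_ERR_MSG_MOD(extack, "Zero group address is not allowed");
		return -EINVAL;
	}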
-
Ido Schimmel authored
Currently, the bridge driver registers handlers for MDB netlink messages, making it impossible for other drivers to implement MDB support. As a preparation for VXLAN MDB support, move the MDB handlers out of the bridge driver to the core rtnetlink code. The rtnetlink code will call into individual drivers by invoking their previously added MDB net device operations. Note that while the diffstat is large, the change is mechanical. It moves code out of the bridge driver to rtnetlink code. Also note that a similar change was made in 2012 with commit 77162022 ("net: add generic PF_BRIDGE:RTM_ FDB hooks") that moved FDB handlers out of the bridge driver to the core rtnetlink code. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: David S. Miller <davem@davemloft.net>
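Conceptually, the rtnetlink handler now dispatches into the driver via the new operations, along these lines (a sketch; the real handler also looks up the device and parses the message attributes):

	static int rtnl_mdb_add(struct sk_buff *skb, struct nlmsghdr *nlh,
				struct netlink_ext_ack *extack)
	{
		struct nlattr *tb[MDBA_SET_ENTRY_MAX + 1];
		struct net_device *dev;

		/* ... resolve dev from the ancillary header, parse tb[] ... */

		if (!dev->netdev_ops->ndo_mdb_add) {
			NL_SET_ERR_MSG(extack, "Device does not support MDB operations");
			return -EOPNOTSUPP;
		}

		return dev->netdev_ops->ndo_mdb_add(dev, tb, nlh->nlmsg_flags, extack);
	}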
-
Ido Schimmel authored
Implement the previously added MDB net device operations in the bridge driver so that they could be invoked by core rtnetlink code in the next patch. The operations are identical to the existing br_mdb_{dump,add,del} functions. The '_new' suffix will be removed in the next patch. The functions are re-implemented in this patch to make the conversion in the next patch easier to review. Add dummy implementations when 'CONFIG_BRIDGE_IGMP_SNOOPING' is disabled, so that an error will be returned to user space when it is trying to add or delete an MDB entry. This is consistent with existing behavior where the bridge driver does not even register rtnetlink handlers for RTM_{NEW,DEL,GET}MDB messages when this Kconfig option is disabled. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: David S. Miller <davem@davemloft.net>
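The dummy implementations for the !CONFIG_BRIDGE_IGMP_SNOOPING case presumably reduce to stubs like the following (a sketch; only the add stub is shown, and the exact function layout is assumed):

	#else
	static int br_mdb_add_new(struct net_device *dev, struct nlattr *tb[],
				  u16 nlmsg_flags, struct netlink_ext_ack *extack)
	{
		return -EOPNOTSUPP;
	}
	#endif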
-
Ido Schimmel authored
Add MDB net device operations that will be invoked by rtnetlink code in response to received RTM_{NEW,DEL,GET}MDB messages. Subsequent patches will implement these operations in the bridge and VXLAN drivers. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: David S. Miller <davem@davemloft.net>
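The new operations presumably mirror the existing bridge MDB handlers, roughly as follows (a sketch of the struct net_device_ops additions; the exact signatures are assumed here):

	int (*ndo_mdb_add)(struct net_device *dev, struct nlattr *tb[],
			   u16 nlmsg_flags, struct netlink_ext_ack *extack);
	int (*ndo_mdb_del)(struct net_device *dev, struct nlattr *tb[],
			   struct netlink_ext_ack *extack);
	int (*ndo_mdb_dump)(struct net_device *dev, struct sk_buff *skb,
			    struct netlink_callback *cb);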
-
David S. Miller authored
Siddharth Vadapalli says: ==================== Add J784S4 CPSW9G NET Bindings This series cleans up the bindings by reordering the compatibles, followed by adding the bindings for CPSW9G instance of CPSW Ethernet Switch on TI's J784S4 SoC. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Siddharth Vadapalli authored
Update bindings for the TI K3 J784S4 SoC, which contains a 9-port (8 external ports) CPSW9G module, and add a compatible for it.

Signed-off-by: Siddharth Vadapalli <s-vadapalli@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Siddharth Vadapalli authored
Reorder compatibles to follow alphanumeric order. Signed-off-by: Siddharth Vadapalli <s-vadapalli@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Shradha Gupta authored
Extended performance counter stats in 'ethtool -S <interface>' output for MANA VF to facilitate troubleshooting. Tested-on: Ubuntu22 Signed-off-by: Shradha Gupta <shradhagupta@linux.microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Mengyuan Lou authored
Add ngbe and txgbe ndo_change_mtu support. Signed-off-by: Mengyuan Lou <mengyuanlou@net-swift.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Luiz Angelo Daros de Luca authored
The rtl8365mb was using a fixed MTU size of 1536, which was probably inspired by the rtl8366rb's initial frame size. However, unlike that family, the rtl8365mb family can specify the max frame size in bytes, rather than in fixed steps.

DSA calls change_mtu for the CPU port once the max MTU value among the ports changes. As the max frame size is defined globally, the switch is configured only when the call affects the CPU port.

The available specifications do not directly define the max supported frame size, but they mention a 16k limit. This driver will use the 0x3FFF limit as it is used in the vendor API code. However, the switch sets the max frame size to 16368 bytes (0x3FF0) after it resets.

change_mtu uses the MTU size, or Ethernet payload size, while the switch works with the frame size. The frame size is calculated considering the Ethernet header (14 bytes), a possible 802.1Q tag (4 bytes), the payload size (MTU), and the Ethernet FCS (4 bytes). The CPU tag (8 bytes) is consumed before the switch enforces the limit.

During setup, the driver will use the default 1500-byte MTU of DSA to set the maximum frame size. The current sum will be VLAN_ETH_HLEN+1500+ETH_FCS_LEN, which results in 1522 bytes. Although it is lower than the previous initial value of 1536 bytes, the driver will increase the frame size for a larger MTU. However, if something requires more space without increasing the MTU, such as QinQ, we would need to add the extra length to the rtl8365mb_port_change_mtu() formula.

The MTU was tested up to 2018 (with 802.1Q) as that is as far as mt7620 (where rtl8367s is stacked) can go. The register was manually manipulated byte-by-byte to ensure the MTU to frame size conversion was correct. For frames without an 802.1Q tag, the frame size limit will be 4 bytes over the required size.

There is a jumbo register, enabled by default at 6k frame size. However, the jumbo settings do not seem to limit nor expand the maximum tested MTU (2018), even when jumbo is disabled. More tests are needed with a device that can handle larger frames.

Signed-off-by: Luiz Angelo Daros de Luca <luizluca@gmail.com>
Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>
Reviewed-by: Alvin Šipraga <alsi@bang-olufsen.dk>
Signed-off-by: David S. Miller <davem@davemloft.net>
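The MTU to frame size conversion described above fits in one line (VLAN_ETH_HLEN and ETH_FCS_LEN are the kernel's existing constants; this is a sketch of the formula, not the driver's exact code):

	/* 14-byte Ethernet header + 4-byte 802.1Q tag + payload + 4-byte FCS;
	 * for the default 1500-byte MTU: 18 + 1500 + 4 = 1522 bytes.
	 */
	frame_size = VLAN_ETH_HLEN + mtu + ETH_FCS_LEN;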
-
Jakub Kicinski authored
Durai Manickam says: ==================== Add PTP support for sama7g5 This patch series is intended to add PTP capability to the GEM and EMAC for sama7g5. ====================

Link: https://lore.kernel.org/r/20230315095053.53969-1-durai.manickamkr@microchip.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Durai Manickam KR authored
Add PTP capability to the Ethernet MAC. Signed-off-by: Durai Manickam KR <durai.manickamkr@microchip.com> Reviewed-by: Claudiu Beznea <claudiu.beznea@microchip.com> Reviewed-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Durai Manickam KR authored
Add PTP capability to the Gigabit Ethernet MAC. Signed-off-by: Durai Manickam KR <durai.manickamkr@microchip.com> Reviewed-by: Claudiu Beznea <claudiu.beznea@microchip.com> Reviewed-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Andy Shevchenko authored
LED core provides a helper to parse the default state from a firmware node. Use it instead of a custom implementation.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Reviewed-by: Kurt Kanzenbach <kurt@linutronix.de>
Reviewed-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
Link: https://lore.kernel.org/r/20230314181824.56881-1-andriy.shevchenko@linux.intel.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
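The helper in question is presumably led_init_default_state_get() from the LED core, used along these lines (a sketch, not the driver's exact code):

	enum led_default_state state = led_init_default_state_get(fwnode);

	switch (state) {
	case LEDS_DEFSTATE_ON:
		/* ... turn the LED on at probe ... */
		break;
	case LEDS_DEFSTATE_KEEP:
		/* ... retain the state programmed by the bootloader ... */
		break;
	default:	/* LEDS_DEFSTATE_OFF */
		break;
	}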
-
- 16 Mar, 2023 17 commits
-
-
Rob Herring authored
It is preferred to use typed property access functions (i.e. of_property_read_<type> functions) rather than low-level of_get_property/of_find_property functions for reading properties. Convert reading boolean properties to of_property_read_bool(). Reviewed-by: Simon Horman <simon.horman@corigine.com> Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Signed-off-by: Rob Herring <robh@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
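The shape of the conversion, with a hypothetical property name for illustration:

	/* Before: low-level lookup used only to test for presence. */
	if (of_get_property(np, "some-flag", NULL))
		do_something();

	/* After: the typed accessor states the intent directly. */
	if (of_property_read_bool(np, "some-flag"))
		do_something();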
-
Rob Herring authored
There are no users of nfcmrvl platform_data struct outside of the driver and none will be added, so move it into the driver. Signed-off-by: Rob Herring <robh@kernel.org> Reviewed-by: Simon Horman <simon.horman@corigine.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Xu Liang authored
GPY2xx devices need 3 seconds to fully switch out of loopback mode before loopback mode can safely be re-entered. Implement a timeout mechanism to guarantee that 3 seconds have passed before loopback mode is re-entered.

Signed-off-by: Xu Liang <lxu@maxlinear.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
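One plausible shape for such a mechanism, using jiffies (a sketch with hypothetical field names, not the driver's actual code):

	/* When leaving loopback, record the earliest time it may be re-entered. */
	priv->loopback_off_time = jiffies + msecs_to_jiffies(3000);

	/* When asked to re-enter loopback, wait out the remainder of the window. */
	if (time_before(jiffies, priv->loopback_off_time))
		msleep(jiffies_to_msecs(priv->loopback_off_time - jiffies));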
-
Colin Ian King authored
There is a spelling mistake in a pr_warn_ratelimited message. Fix it.

Signed-off-by: Colin Ian King <colin.i.king@gmail.com>
Link: https://lore.kernel.org/r/20230314082315.26532-1-colin.i.king@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
Louis Peens says: ==================== nfp: flower: add support for multi-zone conntrack

This series adds changes to support offload of connection tracking across multiple zones. Previously the driver only supported offloading of a single goto_chain, spanning a single zone. This was implemented by merging a pre_ct rule, post_ct rule and the nft rule. This series provides updates to let the original post_ct rule act as the new pre_ct rule for a next set of merges if it contains another goto and conntrack action.

In pseudo-tc rule format this adds support for:

 ingress chain 0 proto ip flower action ct zone 1 pipe action goto chain 1
 ingress chain 1 proto ip flower ct_state +trk+new ct_zone 1 action ct_clear pipe action ct zone 2 pipe action goto chain 2
 ingress chain 1 proto ip flower ct_state +trk+est ct_zone 1 action ct_clear pipe action ct zone 2 pipe action goto chain 2
 ingress chain 2 proto ip flower ct_state +trk+new ct_zone 2 action mirred egress redirect dev ...
 ingress chain 2 proto ip flower ct_state +trk+est ct_zone 2 action mirred egress redirect dev ...

This can continue for up to a maximum of 4 zone recirculations. The first few patches are some smaller preparation patches while the last one introduces the functionality.
====================

Link: https://lore.kernel.org/r/20230314063610.10544-1-louis.peens@corigine.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Wentao Jia authored
If a goto_chain action is present in the post ct flow rule, merge the flow rules in this ct-zone and create a new pre_ct entry as the pre ct flow rule of the next ct-zone, but do not offload the merged flow rules to firmware yet. Repeat the process in the next ct-zone until no goto_chain action is present in the post ct flow rule of a given ct-zone, at which point all the flow rules have been merged. Finally, offload to firmware.

Signed-off-by: Wentao Jia <wentao.jia@corigine.com>
Acked-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: Louis Peens <louis.peens@corigine.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Wentao Jia authored
A fixed number of offloaded flow rules is only supported in the scenario of one ct zone. In the scenario of multiple ct zones, a dynamic and larger number of offloaded flow rules is required. In order to support the scenario of multiple ct zones, add a parameter num_rules to the offloading of flow rules.

Signed-off-by: Wentao Jia <wentao.jia@corigine.com>
Acked-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: Louis Peens <louis.peens@corigine.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Wentao Jia authored
The chain_index field has different meanings in the pre ct entry and the post ct entry. In the pre ct entry it means the chain index, but in the post ct entry it means the goto chain index, which is confusing. chain_index and goto_chain_index may both be present in one flow rule, so they cannot be distinguished by the single field chain_index; both chain_index and goto_chain_index are required in the follow-up patch to support multiple ct zones.

Add another field, goto_chain_index, to record the goto chain index. If there is no goto action in the post ct entry, goto_chain_index is 0.

Signed-off-by: Wentao Jia <wentao.jia@corigine.com>
Acked-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: Louis Peens <louis.peens@corigine.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Wentao Jia authored
Only a 'ct_clear' action, or no ct action at all, is supported for a 'post_ct_flow'. But in the scenario of multiple ct zones, one non-'ct_clear' ct action, or several ct actions including a 'ct_clear' action, may be present in one flow rule. If the ct state match key is 'ct_established', the flow rule is still expected to be classified as a 'post_ct_flow'. Check the ct status first in the function "is_post_ct_flow" to achieve this.

Signed-off-by: Wentao Jia <wentao.jia@corigine.com>
Acked-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: Louis Peens <louis.peens@corigine.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Wentao Jia authored
In the scenario of multiple ct zones, a ct state key match and a ct action are present in one flow rule, and by design such a flow rule is classified as a post_ct_flow. A pre ct flow has no ct state key match, so add that judging condition to the function "is_pre_ct_flow". chain_index is another field for judging which flows are pre ct flows: if chain_index is not 0, the flow is not a pre ct flow.

Signed-off-by: Wentao Jia <wentao.jia@corigine.com>
Acked-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: Louis Peens <louis.peens@corigine.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
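The resulting classification presumably looks roughly like this (a sketch; the ct action scan is elided and the helper layout is assumed, not taken from the patch):

	static bool is_pre_ct_flow(struct flow_cls_offload *flow)
	{
		struct flow_rule *rule = flow_cls_offload_flow_rule(flow);

		/* A pre ct flow carries no ct state match... */
		if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_CT))
			return false;
		/* ...and only lives in chain 0... */
		if (flow->common.chain_index)
			return false;
		/* ...but must contain a ct action (other than ct_clear). */
		/* ... scan the rule's actions here ... */
		return true;
	}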
-
Wentao Jia authored
The CT action is a special case, different from other actions: a CT clear action is not wanted when getting the ct action, but this case was not considered. If a CT clear action is present in the flow rule, skip it when getting the ct action and return the first ct action that is not a CT clear action.

Signed-off-by: Wentao Jia <wentao.jia@corigine.com>
Acked-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: Louis Peens <louis.peens@corigine.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
Saeed Mahameed says: ==================== mlx5-updates-2023-03-13

1) Trivial cleanup patches
2) By Sandipan Patra: Implement thermal zone to report NIC temperature
3) Adham Faris, Improves devlink health diagnostics for netdev objects
4) From Maor, Enable TC offload for egress and ingress MACVLAN over bond
5) From Gal, add devlink hairpin queues parameters to replace debugfs as was discussed in [1]:

[1] https://lore.kernel.org/all/20230111194608.7f15b9a1@kernel.org/
====================

Link: https://lore.kernel.org/all/20230314054234.267365-1-saeed@kernel.org/
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Maor Dickman authored
Support offloading of TC rules that mirror/redirect egress traffic to a MACVLAN device, which is attached to a bond device that masters mlx5 devices.

Signed-off-by: Maor Dickman <maord@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Link: https://lore.kernel.org/r/20230314054234.267365-16-saeed@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Maor Dickman authored
Support offloading of TC rules that filter ingress traffic from a MACVLAN device, which is attached to a bond device.

Signed-off-by: Maor Dickman <maord@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Link: https://lore.kernel.org/r/20230314054234.267365-15-saeed@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Maor Dickman authored
In preparation for the next patch, which will add a new check of whether a device block can be set up, extract all existing checks into a function to make the code more readable and maintainable.

Signed-off-by: Maor Dickman <maord@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Link: https://lore.kernel.org/r/20230314054234.267365-14-saeed@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Gal Pressman authored
Print the number of hairpin queues and size as part of the hairpin table dump.

Signed-off-by: Gal Pressman <gal@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Link: https://lore.kernel.org/r/20230314054234.267365-13-saeed@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Gal Pressman authored
We refer to a TC NIC rule that involves forwarding as "hairpin". Hairpin queues are the mlx5 hardware-specific implementation for hardware forwarding of such packets.

Per the discussion in [1], move the hairpin queues control (number and size) from debugfs to devlink. Expose two devlink params:
- hairpin_num_queues: control the number of hairpin queues
- hairpin_queue_size: control the size (in packets) of the hairpin queues

[1] https://lore.kernel.org/all/20230111194608.7f15b9a1@kernel.org/

Signed-off-by: Gal Pressman <gal@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Link: https://lore.kernel.org/r/20230314054234.267365-12-saeed@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
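Assuming a device at pci/0000:08:00.0 (hypothetical address) and driverinit cmode, the parameters could presumably be tuned as follows:

 $ devlink dev param set pci/0000:08:00.0 name hairpin_num_queues value 4 cmode driverinit
 $ devlink dev param set pci/0000:08:00.0 name hairpin_queue_size value 1024 cmode driverinit
 $ devlink dev reload pci/0000:08:00.0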
-