1. 17 Mar, 2023 24 commits
    • net: pcs: lynx: don't print an_enabled in pcs_get_state() · ecec0ebb
      Russell King (Oracle) authored
      an_enabled will be going away, and in any case, pcs_get_state() should
      not be updating this member. Remove the print.
      Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
      Reviewed-by: Steen Hegelund <Steen.Hegelund@microchip.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: pcs: xpcs: remove double-read of link state when using AN · ef63461c
      Russell King (Oracle) authored
      Phylink does not want the current state of the link when reading the
      PCS link state - it wants the latched state. Don't double-read the
      MII status register. Phylink will re-read as necessary to capture
      transient link-down events as of dbae3388 ("net: phylink: Force
      retrigger in case of latched link-fail indicator").
      
      The above referenced commit is a dependency for this change, and thus
      this change should not be backported to any kernel that does not
      contain the above referenced commit.
      
      Fixes: fcb26bd2 ("net: phy: Add Synopsys DesignWare XPCS MDIO module")
      Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • Merge branch 'vxlan-MDB-support' · abf36703
      David S. Miller authored
      Ido Schimmel says:
      
      ====================
      vxlan: Add MDB support
      
      tl;dr
      =====
      
      This patchset implements MDB support in the VXLAN driver, allowing it to
      selectively forward IP multicast traffic to VTEPs with interested
      receivers instead of flooding it to all the VTEPs as BUM. The motivating
      use case is intra- and inter-subnet multicast forwarding using EVPN
      [1][2], which means that MDB entries are only installed by the user
      space control plane and no snooping is implemented, thereby avoiding a
      lot of unnecessary complexity in the kernel.
      
      Background
      ==========
      
      Both the bridge and VXLAN drivers have an FDB that allows them to
      forward Ethernet frames based on their destination MAC addresses and
      VLAN/VNI. These FDBs are managed using the same PF_BRIDGE/RTM_*NEIGH
      netlink messages and bridge(8) utility.
      
      However, only the bridge driver has an MDB that allows it to selectively
      forward IP multicast packets to bridge ports with interested receivers
      behind them, based on (S, G) and (*, G) MDB entries. When these packets
      reach the VXLAN driver they are flooded using the "all-zeros" FDB entry
      (00:00:00:00:00:00). The entry either includes the list of all the VTEPs
      in the tenant domain (when ingress replication is used) or the multicast
      address of the BUM tunnel (when P2MP tunnels are used), to which all the
      VTEPs join.
      
      Networks that make heavy use of multicast in the overlay can benefit
      from a solution that allows them to selectively forward IP multicast
      traffic only to VTEPs with interested receivers. Such a solution is
      described in the next section.
      
      Motivation
      ==========
      
      RFC 7432 [3] defines a "MAC/IP Advertisement route" (type 2) [4] that
      allows VTEPs in the EVPN network to advertise and learn reachability
      information for unicast MAC addresses. Traffic destined to a unicast MAC
      address can therefore be selectively forwarded to a single VTEP behind
      which the MAC is located.
      
      The same is not true for IP multicast traffic. Such traffic is simply
      flooded as BUM to all VTEPs in the broadcast domain (BD) / subnet,
      regardless if a VTEP has interested receivers for the multicast stream
      or not. This is especially problematic for overlay networks that make
      heavy use of multicast.
      
      The issue is addressed by RFC 9251 [1] that defines a "Selective
      Multicast Ethernet Tag Route" (type 6) [5] which allows VTEPs in the
      EVPN network to advertise multicast streams that they are interested in.
      This is done by having each VTEP suppress IGMP/MLD packets from being
      transmitted to the NVE network and instead communicate the information
      over BGP to other VTEPs.
      
      The draft in [2] further extends RFC 9251 with procedures to allow
      efficient forwarding of IP multicast traffic not only in a given subnet,
      but also between different subnets in a tenant domain.
      
      The required changes in the bridge driver to support the above were
      already merged in merge commit 8150f0cf ("Merge branch
      'bridge-mcast-extensions-for-evpn'"). However, full support entails MDB
      support in the VXLAN driver so that it will be able to selectively
      forward IP multicast traffic only to VTEPs with interested receivers.
      The implementation of this MDB is described in the next section.
      
      Implementation
      ==============
      
      The user interface is extended to allow user space to specify the
      destination VTEP(s) and related parameters. Example usage:
      
       # bridge mdb add dev vxlan0 port vxlan0 grp 239.1.1.1 permanent dst 198.51.100.1
       # bridge mdb add dev vxlan0 port vxlan0 grp 239.1.1.1 permanent dst 192.0.2.1
      
       $ bridge -d -s mdb show
       dev vxlan0 port vxlan0 grp 239.1.1.1 permanent filter_mode exclude proto static dst 192.0.2.1    0.00
       dev vxlan0 port vxlan0 grp 239.1.1.1 permanent filter_mode exclude proto static dst 198.51.100.1    0.00
      
      Since the MDB is fully managed by user space and since snooping is not
      implemented, only permanent entries can be installed and temporary
      entries are rejected by the kernel.
      
      The netlink interface is extended with a few new attributes in the
      RTM_NEWMDB / RTM_DELMDB request messages:
      
      [ struct nlmsghdr ]
      [ struct br_port_msg ]
      [ MDBA_SET_ENTRY ]
      	struct br_mdb_entry
      [ MDBA_SET_ENTRY_ATTRS ]
      	[ MDBE_ATTR_SOURCE ]
      		struct in_addr / struct in6_addr
      	[ MDBE_ATTR_SRC_LIST ]
      		[ MDBE_SRC_LIST_ENTRY ]
      			[ MDBE_SRCATTR_ADDRESS ]
      				struct in_addr / struct in6_addr
      		[ ...]
      	[ MDBE_ATTR_GROUP_MODE ]
      		u8
      	[ MDBE_ATTR_RTPORT ]
      		u8
      	[ MDBE_ATTR_DST ]	// new
      		struct in_addr / struct in6_addr
      	[ MDBE_ATTR_DST_PORT ]	// new
      		u16
      	[ MDBE_ATTR_VNI ]	// new
      		u32
      	[ MDBE_ATTR_IFINDEX ]	// new
      		s32
      	[ MDBE_ATTR_SRC_VNI ]	// new
      		u32
      
      RTM_NEWMDB / RTM_DELMDB responses and notifications are extended with
      corresponding attributes.
      
      One MDB entry that can be installed in the VXLAN MDB, but not in the
      bridge MDB is the catchall entry (0.0.0.0 / ::). It is used to transmit
      unregistered multicast traffic that is not link-local and is especially
      useful when inter-subnet multicast forwarding is required. See patch #12
      for a detailed explanation and motivation. It is similar to the
      "all-zeros" FDB entry that can be installed in the VXLAN FDB, but not
      the bridge FDB.
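As a sketch of how such a catchall entry might be installed with the iproute2 changes in [6] (device names and VTEP addresses here are illustrative, not taken from the patchset):

```shell
# Hypothetical usage: send unregistered, non-link-local IP multicast to a
# dedicated VTEP instead of flooding it together with BUM traffic.
bridge mdb add dev vxlan0 port vxlan0 grp 0.0.0.0 permanent dst 198.51.100.1

# IPv6 overlay equivalent, using the :: catchall group address.
bridge mdb add dev vxlan0 port vxlan0 grp :: permanent dst 2001:db8:1::1
```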
      
      "added_by_star_ex" entries
      --------------------------
      
      The bridge driver automatically installs (S, G) MDB port group entries
      marked as "added_by_star_ex" whenever it detects that an (S, G) entry
      can prevent traffic from being forwarded via a port associated with an
      EXCLUDE (*, G) entry. The bridge will add the port to the port group of
      the (S, G) entry, thereby creating a new port group entry. The
      complexity associated with these entries is not trivial, but it needs to
      reside in the bridge driver because it automatically installs MDB
      entries in response to snooped IGMP / MLD packets.
      
      The same is not true for the VXLAN MDB, which is entirely managed by a
      user space control plane that is fully capable of forming the correct
      replication lists on its own. In addition, the complexity associated
      with the "added_by_star_ex" entries in the VXLAN driver is higher
      compared to the bridge: whenever a remote VTEP is added to the
      catchall entry, it needs to be added to all the existing MDB entries,
      as such a remote requested that all the multicast traffic be forwarded
      to it. Similarly, whenever a (*, G) or (S, G) entry is added, all the
      remotes associated with the catchall entry need to be added to it.
      
      Given the above, this patchset does not implement support for such
      entries.  One argument against this decision can be that in the future
      someone might want to populate the VXLAN MDB in response to decapsulated
      IGMP / MLD packets and not according to EVPN routes. Regardless of my
      doubts regarding this possibility, it can be implemented using a new
      VXLAN device knob that will also enable the "added_by_star_ex"
      functionality.
      
      Testing
      =======
      
      Tested using existing VXLAN and MDB selftests under "net/" and
      "net/forwarding/". Added a dedicated selftest in the last patch.
      
      Patchset overview
      =================
      
      Patches #1-#3 are small preparations in the bridge driver. I plan to
      submit them separately together with an MDB dump test case.
      
      Patches #4-#6 are additional preparations centered around the extraction
      of the MDB netlink handlers from the bridge driver to the common
      rtnetlink code. This allows reusing the existing MDB netlink messages
      for the configuration of the VXLAN MDB.
      
      Patches #7-#9 include more small preparations in the common rtnetlink
      code and the VXLAN driver.
      
      Patch #10 implements the MDB control path in the VXLAN driver, which
      will allow user space to create, delete, replace and dump MDB entries.
      
      Patches #11-#12 implement the MDB data path in the VXLAN driver,
      allowing it to selectively forward IP multicast traffic according to the
      matched MDB entry.
      
      Patch #13 finally enables MDB support in the VXLAN driver.
      
      iproute2 patches can be found here [6].
      
      Note that in order to fully support the specifications in [1] and [2],
      additional functionality is required from the data path. However, it can
      be achieved using existing kernel interfaces which is why it is not
      described here.
      
      Changelog
      =========
      
      Since v1 [7]:
      
      Patch #9: Use htons() in 'case' instead of ntohs() in 'switch'.
      
      Since RFC [8]:
      
      Patch #3: Use NL_ASSERT_DUMP_CTX_FITS().
      Patch #3: memset the entire context when moving to the next device.
      Patch #3: Reset sequence counters when moving to the next device.
      Patch #3: Use NL_SET_ERR_MSG_ATTR() in rtnl_validate_mdb_entry().
      Patch #7: Remove restrictions regarding mixing of multicast and unicast
      remote destination IPs in an MDB entry. While such configuration does
      not make sense to me, it is not forbidden by the VXLAN FDB code and does
      not crash the kernel.
      Patch #7: Fix check regarding all-zeros MDB entry and source.
      Patch #11: New patch.
      
      [1] https://datatracker.ietf.org/doc/html/rfc9251
      [2] https://datatracker.ietf.org/doc/html/draft-ietf-bess-evpn-irb-mcast
      [3] https://datatracker.ietf.org/doc/html/rfc7432
      [4] https://datatracker.ietf.org/doc/html/rfc7432#section-7.2
      [5] https://datatracker.ietf.org/doc/html/rfc9251#section-9.1
      [6] https://github.com/idosch/iproute2/commits/submit/mdb_vxlan_rfc_v1
      [7] https://lore.kernel.org/netdev/20230313145349.3557231-1-idosch@nvidia.com/
      [8] https://lore.kernel.org/netdev/20230204170801.3897900-1-idosch@nvidia.com/
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • selftests: net: Add VXLAN MDB test · 62199e3f
      Ido Schimmel authored
      Add test cases for VXLAN MDB, testing the control and data paths. Two
      different sets of namespaces (i.e., ns{1,2}_v4 and ns{1,2}_v6) are used
      in order to test VXLAN MDB with both IPv4 and IPv6 underlays,
      respectively.
      
      Example truncated output:
      
       # ./test_vxlan_mdb.sh
       [...]
       Tests passed: 620
       Tests failed:   0
      Signed-off-by: Ido Schimmel <idosch@nvidia.com>
      Reviewed-by: Nikolay Aleksandrov <razor@blackwall.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • vxlan: Enable MDB support · 08f876a7
      Ido Schimmel authored
      Now that the VXLAN MDB control and data paths are in place we can expose
      the VXLAN MDB functionality to user space.
      
      Set the VXLAN MDB net device operations to the appropriate functions,
      thereby allowing the rtnetlink code to reach the VXLAN driver.
      Signed-off-by: Ido Schimmel <idosch@nvidia.com>
      Reviewed-by: Nikolay Aleksandrov <razor@blackwall.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • vxlan: Add MDB data path support · 0f83e69f
      Ido Schimmel authored
      Integrate MDB support into the Tx path of the VXLAN driver, allowing it
      to selectively forward IP multicast traffic according to the matched MDB
      entry.
      
      If MDB entries are configured (i.e., 'VXLAN_F_MDB' is set) and the
      packet is an IP multicast packet, perform up to three different lookups
      according to the following priority:
      
      1. For an (S, G) entry, using {Source VNI, Source IP, Destination IP}.
      2. For a (*, G) entry, using {Source VNI, Destination IP}.
      3. For the catchall MDB entry (0.0.0.0 or ::), using the source VNI.
      
      The catchall MDB entry is similar to the catchall FDB entry
      (00:00:00:00:00:00) that is currently used to transmit BUM (broadcast,
      unknown unicast and multicast) traffic. However, unlike the catchall FDB
      entry, this entry is only used to transmit unregistered IP multicast
      traffic that is not link-local. Therefore, when configured, the catchall
      FDB entry will only transmit BULL (broadcast, unknown unicast,
      link-local multicast) traffic.
      
      The catchall MDB entry is useful in deployments where inter-subnet
      multicast forwarding is used and not all the VTEPs in a tenant domain
      are members in all the broadcast domains. In such deployments it is
      advantageous to transmit BULL (broadcast, unknown unicast and link-local
      multicast) and unregistered IP multicast traffic on different tunnels.
      If the same tunnel was used, a VTEP only interested in IP multicast
      traffic would also pull all the BULL traffic and drop it as it is not a
      member in the originating broadcast domain [1].
      
      If the packet did not match an MDB entry (or if the packet is not an IP
      multicast packet), return it to the Tx path, allowing it to be forwarded
      according to the FDB.
      
      If the packet did match an MDB entry, forward it to the associated
      remote VTEPs. However, if the entry is a (*, G) entry and the associated
      remote is in INCLUDE mode, then skip over it as the source IP is not in
      its source list (otherwise the packet would have matched on an (S, G)
      entry). Similarly, if the associated remote is marked as BLOCKED (can
      only be set on (S, G) entries), then skip over it as well as the remote
      is in EXCLUDE mode and the source IP is in its source list.
      
      [1] https://datatracker.ietf.org/doc/html/draft-ietf-bess-evpn-irb-mcast#section-2.6
      Signed-off-by: Ido Schimmel <idosch@nvidia.com>
      Reviewed-by: Nikolay Aleksandrov <razor@blackwall.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • vxlan: mdb: Add an internal flag to indicate MDB usage · bc6c6b01
      Ido Schimmel authored
      Add an internal flag to indicate whether MDB entries are configured or
      not. Set the flag after installing the first MDB entry and clear it
      before deleting the last one.
      
      The flag will be consulted by the data path which will only perform an
      MDB lookup if the flag is set, thereby keeping the MDB overhead to a
      minimum when the MDB is not used.
      
      Another option would have been to use a static key, but it is global and
      not per-device, unlike the current approach.
      Signed-off-by: Ido Schimmel <idosch@nvidia.com>
      Reviewed-by: Nikolay Aleksandrov <razor@blackwall.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • vxlan: mdb: Add MDB control path support · a3a48de5
      Ido Schimmel authored
      Implement MDB control path support, enabling the creation, deletion,
      replacement and dumping of MDB entries in a similar fashion to the
      bridge driver. Unlike the bridge driver, each entry stores a list of
      remote VTEPs to which matched packets need to be replicated, rather
      than a list of bridge ports.
      
      The motivating use case is the installation of MDB entries by a user
      space control plane in response to received EVPN routes. As such, only
      allow permanent MDB entries to be installed and do not implement
      snooping functionality, avoiding a lot of unnecessary complexity.
      
      Since entries can only be modified by user space under RTNL, use RTNL as
      the write lock. Use RCU to ensure that MDB entries and remotes are not
      freed while being accessed from the data path during transmission.
      
      In terms of uAPI, reuse the existing MDB netlink interface, but add a
      few new attributes to request and response messages:
      
      * IP address of the destination VXLAN tunnel endpoint where the
        multicast receivers reside.
      
      * UDP destination port number to use to connect to the remote VXLAN
        tunnel endpoint.
      
      * VXLAN Network Identifier (VNI) to use to connect to the remote VXLAN
        tunnel endpoint. Required when Ingress Replication (IR) is used and
        the remote VTEP is not a member of the originating broadcast domain
        (VLAN/VNI) [1].
      
      * Source VNI the MDB entry belongs to. Used only when the VXLAN device
        is in external mode.
      
      * Interface index of the outgoing interface to reach the remote VXLAN
        tunnel endpoint. This is required when the underlay destination IP is
        multicast (P2MP), as the multicast routing tables are not consulted.
      
      All the new attributes are added under the 'MDBA_SET_ENTRY_ATTRS' nest
      which is strictly validated by the bridge driver, thereby automatically
      rejecting the new attributes.
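Assuming the iproute2 counterpart in [6], these attributes surface as per-entry options roughly as follows (option names follow the iproute2 patches; all device names and values are illustrative):

```shell
# Hypothetical example: a (*, G) entry replicated to one remote VTEP, with
# an explicit UDP destination port, destination VNI, source VNI and
# outgoing interface for a multicast underlay destination.
bridge mdb add dev vxlan0 port vxlan0 grp 239.1.1.1 permanent \
        dst 198.51.100.1 dst_port 4789 vni 2000 src_vni 1000 via eth0
```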
      
      [1] https://datatracker.ietf.org/doc/html/draft-ietf-bess-evpn-irb-mcast#section-3.2.2
      Signed-off-by: Ido Schimmel <idosch@nvidia.com>
      Reviewed-by: Nikolay Aleksandrov <razor@blackwall.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • vxlan: Expose vxlan_xmit_one() · 6ab271aa
      Ido Schimmel authored
      Given a packet and a remote destination, the function will take care of
      encapsulating the packet and transmitting it to the destination.
      
      Expose it so that it could be used in subsequent patches by the MDB code
      to transmit a packet to the remote destination(s) stored in the MDB
      entry.
      
      It will allow us to keep the MDB code self-contained, not exposing its
      data structures to the rest of the VXLAN driver.
      Signed-off-by: Ido Schimmel <idosch@nvidia.com>
      Reviewed-by: Nikolay Aleksandrov <razor@blackwall.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • vxlan: Move address helpers to private headers · f307c8bf
      Ido Schimmel authored
      Move the helpers out of the core C file to the private header so that
      they could be used by the upcoming MDB code.
      
      While at it, constify the second argument of vxlan_nla_get_addr().
      Signed-off-by: Ido Schimmel <idosch@nvidia.com>
      Reviewed-by: Nikolay Aleksandrov <razor@blackwall.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • rtnetlink: bridge: mcast: Relax group address validation in common code · da654c80
      Ido Schimmel authored
      In the upcoming VXLAN MDB implementation, the 0.0.0.0 and :: MDB entries
      will act as catchall entries for unregistered IP multicast traffic in a
      similar fashion to the 00:00:00:00:00:00 VXLAN FDB entry that is used to
      transmit BUM traffic.
      
      In deployments where inter-subnet multicast forwarding is used, not all
      the VTEPs in a tenant domain are members in all the broadcast domains.
      It is therefore advantageous to transmit BULL (broadcast, unknown
      unicast and link-local multicast) and unregistered IP multicast traffic
      on different tunnels. If the same tunnel was used, a VTEP only
      interested in IP multicast traffic would also pull all the BULL traffic
      and drop it as it is not a member in the originating broadcast domain
      [1].
      
      Prepare for this change by allowing the 0.0.0.0 group address in the
      common rtnetlink MDB code and forbid it in the bridge driver. A similar
      change is not needed for IPv6 because the common code only validates
      that the group address is not the all-nodes address.
      
      [1] https://datatracker.ietf.org/doc/html/draft-ietf-bess-evpn-irb-mcast#section-2.6
      Signed-off-by: Ido Schimmel <idosch@nvidia.com>
      Reviewed-by: Nikolay Aleksandrov <razor@blackwall.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • rtnetlink: bridge: mcast: Move MDB handlers out of bridge driver · cc7f5022
      Ido Schimmel authored
      Currently, the bridge driver registers handlers for MDB netlink
      messages, making it impossible for other drivers to implement MDB
      support.
      
      As a preparation for VXLAN MDB support, move the MDB handlers out of the
      bridge driver to the core rtnetlink code. The rtnetlink code will call
      into individual drivers by invoking their previously added MDB net
      device operations.
      
      Note that while the diffstat is large, the change is mechanical. It
      moves code out of the bridge driver to rtnetlink code. Also note that a
      similar change was made in 2012 with commit 77162022 ("net: add
      generic PF_BRIDGE:RTM_ FDB hooks") that moved FDB handlers out of the
      bridge driver to the core rtnetlink code.
      Signed-off-by: Ido Schimmel <idosch@nvidia.com>
      Reviewed-by: Nikolay Aleksandrov <razor@blackwall.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • bridge: mcast: Implement MDB net device operations · c009de10
      Ido Schimmel authored
      Implement the previously added MDB net device operations in the bridge
      driver so that they could be invoked by core rtnetlink code in the next
      patch.
      
      The operations are identical to the existing br_mdb_{dump,add,del}
      functions. The '_new' suffix will be removed in the next patch. The
      functions are re-implemented in this patch to make the conversion in the
      next patch easier to review.
      
      Add dummy implementations when 'CONFIG_BRIDGE_IGMP_SNOOPING' is
      disabled, so that an error will be returned to user space when it is
      trying to add or delete an MDB entry. This is consistent with existing
      behavior where the bridge driver does not even register rtnetlink
      handlers for RTM_{NEW,DEL,GET}MDB messages when this Kconfig option is
      disabled.
      Signed-off-by: Ido Schimmel <idosch@nvidia.com>
      Reviewed-by: Nikolay Aleksandrov <razor@blackwall.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: Add MDB net device operations · 8c44fa12
      Ido Schimmel authored
      Add MDB net device operations that will be invoked by rtnetlink code in
      response to received RTM_{NEW,DEL,GET}MDB messages. Subsequent patches
      will implement these operations in the bridge and VXLAN drivers.
      Signed-off-by: Ido Schimmel <idosch@nvidia.com>
      Reviewed-by: Nikolay Aleksandrov <razor@blackwall.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • Merge branch 'J784S4-CPSW9G-bindings' · ec47dcb4
      David S. Miller authored
      Siddharth Vadapalli says:
      
      ====================
      Add J784S4 CPSW9G NET Bindings
      
      This series cleans up the bindings by reordering the compatibles, followed
      by adding the bindings for the CPSW9G instance of the CPSW Ethernet Switch
      on TI's J784S4 SoC.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • dt-bindings: net: ti: k3-am654-cpsw-nuss: Add J784S4 CPSW9G support · e0c9c2a7
      Siddharth Vadapalli authored
      Update bindings for the TI K3 J784S4 SoC, which contains a 9-port
      (8 external ports) CPSW9G module, and add a compatible for it.
      Signed-off-by: Siddharth Vadapalli <s-vadapalli@ti.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • dt-bindings: net: ti: k3-am654-cpsw-nuss: Fix compatible order · 40235ede
      Siddharth Vadapalli authored
      Reorder compatibles to follow alphanumeric order.
      Signed-off-by: Siddharth Vadapalli <s-vadapalli@ti.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: mana: Add new MANA VF performance counters for easier troubleshooting · bd7fc6e1
      Shradha Gupta authored
      Extended performance counter stats in 'ethtool -S <interface>' output
      for MANA VF to facilitate troubleshooting.
      
      Tested-on: Ubuntu22
      Signed-off-by: Shradha Gupta <shradhagupta@linux.microsoft.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: wangxun: Implement the ndo change mtu interface · 81dc0741
      Mengyuan Lou authored
      Add ngbe and txgbe ndo_change_mtu support.
      Signed-off-by: Mengyuan Lou <mengyuanlou@net-swift.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: dsa: realtek: rtl8365mb: add change_mtu · c36a77c3
      Luiz Angelo Daros de Luca authored
      The rtl8365mb was using a fixed MTU size of 1536, which was probably
      inspired by the rtl8366rb's initial frame size. However, unlike that
      family, the rtl8365mb family can specify the max frame size in bytes,
      rather than in fixed steps.
      
      DSA calls change_mtu for the CPU port once the max MTU value among the
      ports changes. As the max frame size is defined globally, the switch
      is configured only when the call affects the CPU port.
      
      The available specifications do not directly define the max supported
      frame size, but it mentions a 16k limit. This driver will use the 0x3FFF
      limit as it is used in the vendor API code. However, the switch sets the
      max frame size to 16368 bytes (0x3FF0) after it resets.
      
      change_mtu uses MTU size, or ethernet payload size, while the switch
      works with frame size. The frame size is calculated considering the
      ethernet header (14 bytes), a possible 802.1Q tag (4 bytes), the payload
      size (MTU), and the Ethernet FCS (4 bytes). The CPU tag (8 bytes) is
      consumed before the switch enforces the limit.
      
      During setup, the driver will use the default 1500-byte MTU of DSA to
      set the maximum frame size. The current sum will be
      VLAN_ETH_HLEN+1500+ETH_FCS_LEN, which results in 1522 bytes.  Although
      it is lower than the previous initial value of 1536 bytes, the driver
      will increase the frame size for a larger MTU. However, if something
      requires more space without increasing the MTU, such as QinQ, we would
      need to add the extra length to the rtl8365mb_port_change_mtu() formula.
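As a quick sanity check of the arithmetic above (a plain calculation, not driver code):

```shell
# Ethernet header (14) + 802.1Q tag (4) + MTU (1500) + FCS (4),
# i.e. VLAN_ETH_HLEN + MTU + ETH_FCS_LEN.
echo $((14 + 4 + 1500 + 4))   # prints 1522
```

which matches the 1522-byte frame size quoted above.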
      
      MTU was tested up to 2018 (with 802.1Q) as that is as far as mt7620
      (where rtl8367s is stacked) can go. The register was manually
      manipulated byte-by-byte to ensure the MTU to frame size conversion was
      correct. For frames without 802.1Q tag, the frame size limit will be 4
      bytes over the required size.
      
      There is a jumbo register, enabled by default at 6k frame size.
      However, the jumbo settings do not seem to limit nor expand the maximum
      tested MTU (2018), even when jumbo is disabled. More tests are needed
      with a device that can handle larger frames.
      Signed-off-by: Luiz Angelo Daros de Luca <luizluca@gmail.com>
      Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>
      Reviewed-by: Alvin Šipraga <alsi@bang-olufsen.dk>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • Merge branch 'add-ptp-support-for-sama7g5' · b883d1ee
      Jakub Kicinski authored
      Durai Manickam says:
      
      ====================
      Add PTP support for sama7g5
      
      This patch series is intended to add PTP capability to the GEM and
      EMAC for sama7g5.
      ====================
      
      Link: https://lore.kernel.org/r/20230315095053.53969-1-durai.manickamkr@microchip.com
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
    • 9bae0dd0
      Durai Manickam KR authored
    • net: macb: Add PTP support to GEM for sama7g5 · abc783a7
      Durai Manickam KR authored
      Add PTP capability to the Gigabit Ethernet MAC.
      Signed-off-by: Durai Manickam KR <durai.manickamkr@microchip.com>
      Reviewed-by: Claudiu Beznea <claudiu.beznea@microchip.com>
      Reviewed-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
    • net: dsa: hellcreek: Get rid of custom led_init_default_state_get() · d565263b
      Andy Shevchenko authored
      LED core provides a helper to parse default state from firmware node.
      Use it instead of custom implementation.
      Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
      Reviewed-by: Kurt Kanzenbach <kurt@linutronix.de>
      Reviewed-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
      Link: https://lore.kernel.org/r/20230314181824.56881-1-andriy.shevchenko@linux.intel.com
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
  2. 16 Mar, 2023 16 commits