Commit e8b9f0da authored by Paolo Abeni

Merge branch 'dsa-changes-for-multiple-cpu-ports-part-4'

Vladimir Oltean says:

====================
DSA changes for multiple CPU ports (part 4)

Those who have been following part 1:
https://patchwork.kernel.org/project/netdevbpf/cover/20220511095020.562461-1-vladimir.oltean@nxp.com/
part 2:
https://patchwork.kernel.org/project/netdevbpf/cover/20220521213743.2735445-1-vladimir.oltean@nxp.com/
and part 3:
https://patchwork.kernel.org/project/netdevbpf/cover/20220819174820.3585002-1-vladimir.oltean@nxp.com/
will know that I am trying to enable the second internal port pair from
the NXP LS1028A Felix switch for DSA-tagged traffic via "ocelot-8021q".

This series represents the final part of that effort. We have:

- the introduction of new UAPI in the form of IFLA_DSA_MASTER, the
  iproute2 patch for which is here:
  https://patchwork.kernel.org/project/netdevbpf/patch/20220904190025.813574-1-vladimir.oltean@nxp.com/

- preparation for LAG DSA masters in terms of suppressing some
  operations for masters in the DSA core that simply don't make sense
  when those masters are a bonding/team interface

- handling all the net device events that occur between DSA and a
  LAG DSA master, including migration to a different DSA master when the
  current master joins a LAG, or the LAG gets destroyed

- updating documentation

- adding an implementation for NXP LS1028A, where things are insanely
  complicated due to hardware limitations. We have 2 tagging protocols:

  * the native "ocelot" protocol (NPI port mode). This does not support
    CPU ports in a LAG, and supports a single DSA master. The DSA master
    can be changed between eno2 (2.5G) and eno3 (1G), but all ports must
    be down during the changing process, and user ports assigned to the
    old DSA master will refuse to come up if the user requests that
    during a "transient" state.

  * the "ocelot-8021q" software-defined protocol, where the Ethernet
    ports connected to the CPU are not actually "god mode" ports as far
    as the hardware is concerned. So here, static assignment between
    user and CPU ports is possible by editing the PGID_SRC masks for
    the port-based forwarding matrix, and "CPU ports in a LAG" simply
    means "a LAG like any other".

The series was regression-tested on LS1028A using the local_termination.sh
kselftest, in most of the possible operating modes and tagging protocols.
I have not done a detailed performance evaluation yet, but using a LAG, it is
possible to exceed the termination bandwidth of a single CPU port in an
iperf3 test with multiple senders and multiple receivers.

v1 at:
https://patchwork.kernel.org/project/netdevbpf/cover/20220830195932.683432-1-vladimir.oltean@nxp.com/

Previous (older) RFC at:
https://lore.kernel.org/netdev/20220523104256.3556016-1-olteanv@gmail.com/
====================

Link: https://lore.kernel.org/r/20220911010706.2137967-1-vladimir.oltean@nxp.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
parents 42e53b44 eca70102
......@@ -49,6 +49,9 @@ In this documentation the following Ethernet interfaces are used:
*eth0*
  the master interface
*eth1*
  another master interface
*lan1*
  a slave interface
......@@ -360,3 +363,96 @@ the ``self`` flag) has been removed. This results in the following changes:
Script writers are therefore encouraged to use the ``master static`` set of
flags when working with bridge FDB entries on DSA switch interfaces.

Affinity of user ports to CPU ports
-----------------------------------

Typically, DSA switches are attached to the host via a single Ethernet
interface, but in cases where the switch chip is discrete, the hardware design
may permit the use of 2 or more ports connected to the host, for an increase in
termination throughput.

DSA can make use of multiple CPU ports in two ways. First, it is possible to
statically assign the termination traffic associated with a certain user port
to be processed by a certain CPU port. This way, user space can implement
custom policies of static load balancing between user ports, by spreading the
affinities according to the available CPU ports.

Secondly, it is possible to perform load balancing between CPU ports on a
per-packet basis, rather than statically assigning user ports to CPU ports.
This can be achieved by placing the DSA masters under a LAG interface (bonding
or team). DSA monitors this operation and creates a mirror of this software LAG
on the CPU ports facing the physical DSA masters that constitute the LAG slave
devices.

To make use of multiple CPU ports, the firmware (device tree) description of
the switch must mark all the links between CPU ports and their DSA masters
using the ``ethernet`` reference/phandle. At startup, only a single CPU port
and DSA master will be used - the numerically first port from the firmware
description which has an ``ethernet`` property. It is up to the user to
configure the system for the switch to use other masters.

DSA uses the ``rtnl_link_ops`` mechanism (with a "dsa" ``kind``) to allow
changing the DSA master of a user port. The ``IFLA_DSA_MASTER`` u32 netlink
attribute contains the ifindex of the master device that handles each slave
device. The DSA master must be a valid candidate based on firmware node
information, or a LAG interface which contains only slaves which are valid
candidates.

Using iproute2, the following manipulations are possible:

.. code-block:: sh

    # See the DSA master in current use
    ip -d link show dev swp0
        (...)
        dsa master eth0

    # Static CPU port distribution
    ip link set swp0 type dsa master eth1
    ip link set swp1 type dsa master eth0
    ip link set swp2 type dsa master eth1
    ip link set swp3 type dsa master eth0

    # CPU ports in LAG, using explicit assignment of the DSA master
    ip link add bond0 type bond mode balance-xor && ip link set bond0 up
    ip link set eth1 down && ip link set eth1 master bond0
    ip link set swp0 type dsa master bond0
    ip link set swp1 type dsa master bond0
    ip link set swp2 type dsa master bond0
    ip link set swp3 type dsa master bond0
    ip link set eth0 down && ip link set eth0 master bond0
    ip -d link show dev swp0
        (...)
        dsa master bond0

    # CPU ports in LAG, relying on implicit migration of the DSA master
    ip link add bond0 type bond mode balance-xor && ip link set bond0 up
    ip link set eth0 down && ip link set eth0 master bond0
    ip link set eth1 down && ip link set eth1 master bond0
    ip -d link show dev swp0
        (...)
        dsa master bond0

Notice that in the case of CPU ports under a LAG, the use of the
``IFLA_DSA_MASTER`` netlink attribute is not strictly needed, but rather, DSA
reacts to the ``IFLA_MASTER`` attribute change of its present master (``eth0``)
and migrates all user ports to the new upper of ``eth0``, ``bond0``. Similarly,
when ``bond0`` is destroyed using ``RTM_DELLINK``, DSA migrates the user ports
that were assigned to this interface to the first physical DSA master which is
eligible, based on the firmware description (it effectively reverts to the
startup configuration).

In a setup with more than 2 physical CPU ports, it is therefore possible to mix
static user to CPU port assignment with LAG between DSA masters. It is not
possible to statically assign a user port towards a DSA master that has any
upper interfaces (this includes LAG devices - the master must always be the LAG
in this case).

Live changing of the DSA master (and thus CPU port) affinity of a user port is
permitted, in order to allow dynamic redistribution in response to traffic.

Physical DSA masters are allowed to join and leave at any time a LAG interface
used as a DSA master; however, DSA will reject a LAG interface as a valid
candidate for being a DSA master unless it has at least one physical DSA master
as a slave device.
......@@ -303,6 +303,20 @@ These frames are then queued for transmission using the master network device
Ethernet switch will be able to process these incoming frames from the
management interface and deliver them to the physical switch port.

When using multiple CPU ports, it is possible to stack a LAG (bonding/team)
device between the DSA slave devices and the physical DSA masters. The LAG
device is thus also a DSA master, but the LAG slave devices continue to be DSA
masters as well (just with no user port assigned to them; this is needed for
recovery in case the LAG DSA master disappears). Thus, the data path of the LAG
DSA master is used asymmetrically. On RX, the ``ETH_P_XDSA`` handler, which
calls ``dsa_switch_rcv()``, is invoked early (on the physical DSA master;
LAG slave). Therefore, the RX data path of the LAG DSA master is not used.
On the other hand, TX takes place linearly: ``dsa_slave_xmit`` calls
``dsa_enqueue_skb``, which calls ``dev_queue_xmit`` towards the LAG DSA master.
The latter calls ``dev_queue_xmit`` towards one physical DSA master or the
other, and in both cases, the packet exits the system through a hardware path
towards the switch.
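
The TX half of this can be summarized with a condensed, illustrative sketch of
the in-tree helpers (``dsa_slave_xmit_sketch`` is an invented name; error
handling and statistics are omitted):

.. code-block:: c

    static netdev_tx_t dsa_slave_xmit_sketch(struct sk_buff *skb,
                                             struct net_device *dev)
    {
        struct dsa_slave_priv *p = netdev_priv(dev);
        struct sk_buff *nskb;

        /* The tagger callback prepends the switch tag */
        nskb = p->xmit(skb, dev);
        if (!nskb)
            return NETDEV_TX_OK;

        /* As in dsa_enqueue_skb(): skb->dev becomes the (possibly LAG)
         * DSA master, e.g. bond0. The bonding/team driver then selects
         * a physical DSA master, whose own dev_queue_xmit() puts the
         * packet on the wire towards the switch.
         */
        nskb->dev = dsa_slave_to_master(dev);
        dev_queue_xmit(nskb);

        return NETDEV_TX_OK;
    }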
Graphical representation
------------------------
......@@ -629,6 +643,24 @@ Switch configuration
PHY cannot be found. In this case, probing of the DSA switch continues
without that particular port.

- ``port_change_master``: method through which the affinity (association used
  for traffic termination purposes) between a user port and a CPU port can be
  changed. By default all user ports from a tree are assigned to the first
  available CPU port that makes sense for them (most of the time, this means
  the user ports of a tree are all assigned to the same CPU port, except for H
  topologies as described in commit 2c0b03258b8b). The ``port`` argument
  represents the index of the user port, and the ``master`` argument represents
  the new DSA master ``net_device``. The CPU port associated with the new
  master can be retrieved by looking at ``struct dsa_port *cpu_dp =
  master->dsa_ptr``. Additionally, the master can also be a LAG device where
  all the slave devices are physical DSA masters. LAG DSA masters also have a
  valid ``master->dsa_ptr`` pointer, however this is not unique, but rather a
  duplicate of the first physical DSA master's (LAG slave) ``dsa_ptr``. In case
  of a LAG DSA master, a further call to ``port_lag_join`` will be emitted
  separately for the physical CPU ports associated with the physical DSA
  masters, requesting them to create a hardware LAG associated with the LAG
  interface. A minimal sketch of such an implementation follows below.
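
In the sketch below, ``foo_set_port_affinity()``, ``struct foo_priv`` and the
``can_bond_cpu_ports`` flag are hypothetical driver internals, not an in-tree
API; a real implementation would reprogram the forwarding matrix of its own
hardware:

.. code-block:: c

    static int foo_port_change_master(struct dsa_switch *ds, int port,
                                      struct net_device *master,
                                      struct netlink_ext_ack *extack)
    {
        struct foo_priv *priv = ds->priv;
        struct dsa_port *cpu_dp = master->dsa_ptr;

        /* For a LAG DSA master, cpu_dp is a duplicate of the first
         * physical DSA master's dsa_ptr; the hardware LAG itself is
         * created through separate port_lag_join calls.
         */
        if (netif_is_lag_master(master) && !priv->can_bond_cpu_ports) {
            NL_SET_ERR_MSG_MOD(extack, "CPU ports in a LAG not supported");
            return -EOPNOTSUPP;
        }

        /* Steer traffic terminated on this user port to the new CPU port */
        return foo_set_port_affinity(priv, port, cpu_dp->index);
    }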
PHY devices and link management
-------------------------------
......@@ -1095,9 +1127,3 @@ capable hardware, but does not enforce a strict switch device driver model. On
the other hand, DSA enforces a fairly strict device driver model, and deals with
most of the switch specifics. At some point we should envision a merger between
these two subsystems and get the best of both worlds.
Other hanging fruits
--------------------
- allowing more than one CPU/management interface:
http://comments.gmane.org/gmane.linux.network/365657
......@@ -983,7 +983,7 @@ static int bcm_sf2_sw_resume(struct dsa_switch *ds)
static void bcm_sf2_sw_get_wol(struct dsa_switch *ds, int port,
struct ethtool_wolinfo *wol)
{
struct net_device *p = dsa_to_port(ds, port)->cpu_dp->master;
struct net_device *p = dsa_port_to_master(dsa_to_port(ds, port));
struct bcm_sf2_priv *priv = bcm_sf2_to_priv(ds);
struct ethtool_wolinfo pwol = { };
......@@ -1007,7 +1007,7 @@ static void bcm_sf2_sw_get_wol(struct dsa_switch *ds, int port,
static int bcm_sf2_sw_set_wol(struct dsa_switch *ds, int port,
struct ethtool_wolinfo *wol)
{
struct net_device *p = dsa_to_port(ds, port)->cpu_dp->master;
struct net_device *p = dsa_port_to_master(dsa_to_port(ds, port));
struct bcm_sf2_priv *priv = bcm_sf2_to_priv(ds);
s8 cpu_port = dsa_to_port(ds, port)->cpu_dp->index;
struct ethtool_wolinfo pwol = { };
......
......@@ -1102,7 +1102,7 @@ static int bcm_sf2_cfp_rule_get_all(struct bcm_sf2_priv *priv,
int bcm_sf2_get_rxnfc(struct dsa_switch *ds, int port,
struct ethtool_rxnfc *nfc, u32 *rule_locs)
{
struct net_device *p = dsa_to_port(ds, port)->cpu_dp->master;
struct net_device *p = dsa_port_to_master(dsa_to_port(ds, port));
struct bcm_sf2_priv *priv = bcm_sf2_to_priv(ds);
int ret = 0;
......@@ -1145,7 +1145,7 @@ int bcm_sf2_get_rxnfc(struct dsa_switch *ds, int port,
int bcm_sf2_set_rxnfc(struct dsa_switch *ds, int port,
struct ethtool_rxnfc *nfc)
{
struct net_device *p = dsa_to_port(ds, port)->cpu_dp->master;
struct net_device *p = dsa_port_to_master(dsa_to_port(ds, port));
struct bcm_sf2_priv *priv = bcm_sf2_to_priv(ds);
int ret = 0;
......
......@@ -1092,7 +1092,7 @@ static int lan9303_port_enable(struct dsa_switch *ds, int port,
if (!dsa_port_is_user(dp))
return 0;
vlan_vid_add(dp->cpu_dp->master, htons(ETH_P_8021Q), port);
vlan_vid_add(dsa_port_to_master(dp), htons(ETH_P_8021Q), port);
return lan9303_enable_processing_port(chip, port);
}
......@@ -1105,7 +1105,7 @@ static void lan9303_port_disable(struct dsa_switch *ds, int port)
if (!dsa_port_is_user(dp))
return;
vlan_vid_del(dp->cpu_dp->master, htons(ETH_P_8021Q), port);
vlan_vid_del(dsa_port_to_master(dp), htons(ETH_P_8021Q), port);
lan9303_disable_processing_port(chip, port);
lan9303_phy_write(ds, chip->phy_addr_base + port, MII_BMCR, BMCR_PDOWN);
......
......@@ -6593,14 +6593,17 @@ static int mv88e6xxx_port_bridge_flags(struct dsa_switch *ds, int port,
static bool mv88e6xxx_lag_can_offload(struct dsa_switch *ds,
struct dsa_lag lag,
struct netdev_lag_upper_info *info)
struct netdev_lag_upper_info *info,
struct netlink_ext_ack *extack)
{
struct mv88e6xxx_chip *chip = ds->priv;
struct dsa_port *dp;
int members = 0;
if (!mv88e6xxx_has_lag(chip))
if (!mv88e6xxx_has_lag(chip)) {
NL_SET_ERR_MSG_MOD(extack, "Chip does not support LAG offload");
return false;
}
if (!lag.id)
return false;
......@@ -6609,14 +6612,20 @@ static bool mv88e6xxx_lag_can_offload(struct dsa_switch *ds,
/* Includes the port joining the LAG */
members++;
if (members > 8)
if (members > 8) {
NL_SET_ERR_MSG_MOD(extack,
"Cannot offload more than 8 LAG ports");
return false;
}
/* We could potentially relax this to include active
* backup in the future.
*/
if (info->tx_type != NETDEV_LAG_TX_TYPE_HASH)
if (info->tx_type != NETDEV_LAG_TX_TYPE_HASH) {
NL_SET_ERR_MSG_MOD(extack,
"Can only offload LAG using hash TX type");
return false;
}
/* Ideally we would also validate that the hash type matches
* the hardware. Alas, this is always set to unknown on team
......@@ -6769,12 +6778,13 @@ static int mv88e6xxx_port_lag_change(struct dsa_switch *ds, int port)
static int mv88e6xxx_port_lag_join(struct dsa_switch *ds, int port,
struct dsa_lag lag,
struct netdev_lag_upper_info *info)
struct netdev_lag_upper_info *info,
struct netlink_ext_ack *extack)
{
struct mv88e6xxx_chip *chip = ds->priv;
int err, id;
if (!mv88e6xxx_lag_can_offload(ds, lag, info))
if (!mv88e6xxx_lag_can_offload(ds, lag, info, extack))
return -EOPNOTSUPP;
/* DSA LAG IDs are one-based */
......@@ -6827,12 +6837,13 @@ static int mv88e6xxx_crosschip_lag_change(struct dsa_switch *ds, int sw_index,
static int mv88e6xxx_crosschip_lag_join(struct dsa_switch *ds, int sw_index,
int port, struct dsa_lag lag,
struct netdev_lag_upper_info *info)
struct netdev_lag_upper_info *info,
struct netlink_ext_ack *extack)
{
struct mv88e6xxx_chip *chip = ds->priv;
int err;
if (!mv88e6xxx_lag_can_offload(ds, lag, info))
if (!mv88e6xxx_lag_can_offload(ds, lag, info, extack))
return -EOPNOTSUPP;
mv88e6xxx_reg_lock(chip);
......
......@@ -42,6 +42,25 @@ static struct net_device *felix_classify_db(struct dsa_db db)
}
}
static int felix_cpu_port_for_master(struct dsa_switch *ds,
struct net_device *master)
{
struct ocelot *ocelot = ds->priv;
struct dsa_port *cpu_dp;
int lag;
if (netif_is_lag_master(master)) {
mutex_lock(&ocelot->fwd_domain_lock);
lag = ocelot_bond_get_id(ocelot, master);
mutex_unlock(&ocelot->fwd_domain_lock);
return lag;
}
cpu_dp = master->dsa_ptr;
return cpu_dp->index;
}
/* Set up VCAP ES0 rules for pushing a tag_8021q VLAN towards the CPU such that
* the tagger can perform RX source port identification.
*/
......@@ -422,6 +441,40 @@ static unsigned long felix_tag_npi_get_host_fwd_mask(struct dsa_switch *ds)
return BIT(ocelot->num_phys_ports);
}
static int felix_tag_npi_change_master(struct dsa_switch *ds, int port,
struct net_device *master,
struct netlink_ext_ack *extack)
{
struct dsa_port *dp = dsa_to_port(ds, port), *other_dp;
struct ocelot *ocelot = ds->priv;
if (netif_is_lag_master(master)) {
NL_SET_ERR_MSG_MOD(extack,
"LAG DSA master only supported using ocelot-8021q");
return -EOPNOTSUPP;
}
/* Changing the NPI port breaks user ports still assigned to the old
* one, so only allow it while they're down, and don't allow them to
* come back up until they're all changed to the new one.
*/
dsa_switch_for_each_user_port(other_dp, ds) {
struct net_device *slave = other_dp->slave;
if (other_dp != dp && (slave->flags & IFF_UP) &&
dsa_port_to_master(other_dp) != master) {
NL_SET_ERR_MSG_MOD(extack,
"Cannot change while old master still has users");
return -EOPNOTSUPP;
}
}
felix_npi_port_deinit(ocelot, ocelot->npi);
felix_npi_port_init(ocelot, felix_cpu_port_for_master(ds, master));
return 0;
}
/* Alternatively to using the NPI functionality, that same hardware MAC
* connected internally to the enetc or fman DSA master can be configured to
* use the software-defined tag_8021q frame format. As far as the hardware is
......@@ -433,6 +486,7 @@ static const struct felix_tag_proto_ops felix_tag_npi_proto_ops = {
.setup = felix_tag_npi_setup,
.teardown = felix_tag_npi_teardown,
.get_host_fwd_mask = felix_tag_npi_get_host_fwd_mask,
.change_master = felix_tag_npi_change_master,
};
static int felix_tag_8021q_setup(struct dsa_switch *ds)
......@@ -507,10 +561,24 @@ static unsigned long felix_tag_8021q_get_host_fwd_mask(struct dsa_switch *ds)
return dsa_cpu_ports(ds);
}
static int felix_tag_8021q_change_master(struct dsa_switch *ds, int port,
struct net_device *master,
struct netlink_ext_ack *extack)
{
int cpu = felix_cpu_port_for_master(ds, master);
struct ocelot *ocelot = ds->priv;
ocelot_port_unassign_dsa_8021q_cpu(ocelot, port);
ocelot_port_assign_dsa_8021q_cpu(ocelot, port, cpu);
return felix_update_trapping_destinations(ds, true);
}
static const struct felix_tag_proto_ops felix_tag_8021q_proto_ops = {
.setup = felix_tag_8021q_setup,
.teardown = felix_tag_8021q_teardown,
.get_host_fwd_mask = felix_tag_8021q_get_host_fwd_mask,
.change_master = felix_tag_8021q_change_master,
};
static void felix_set_host_flood(struct dsa_switch *ds, unsigned long mask,
......@@ -673,6 +741,16 @@ static void felix_port_set_host_flood(struct dsa_switch *ds, int port,
!!felix->host_flood_mc_mask, true);
}
static int felix_port_change_master(struct dsa_switch *ds, int port,
struct net_device *master,
struct netlink_ext_ack *extack)
{
struct ocelot *ocelot = ds->priv;
struct felix *felix = ocelot_to_felix(ocelot);
return felix->tag_proto_ops->change_master(ds, port, master, extack);
}
static int felix_set_ageing_time(struct dsa_switch *ds,
unsigned int ageing_time)
{
......@@ -861,11 +939,21 @@ static void felix_bridge_leave(struct dsa_switch *ds, int port,
static int felix_lag_join(struct dsa_switch *ds, int port,
struct dsa_lag lag,
struct netdev_lag_upper_info *info)
struct netdev_lag_upper_info *info,
struct netlink_ext_ack *extack)
{
struct ocelot *ocelot = ds->priv;
int err;
return ocelot_port_lag_join(ocelot, port, lag.dev, info);
err = ocelot_port_lag_join(ocelot, port, lag.dev, info, extack);
if (err)
return err;
/* Update the logical LAG port that serves as tag_8021q CPU port */
if (!dsa_is_cpu_port(ds, port))
return 0;
return felix_port_change_master(ds, port, lag.dev, extack);
}
static int felix_lag_leave(struct dsa_switch *ds, int port,
......@@ -875,7 +963,11 @@ static int felix_lag_leave(struct dsa_switch *ds, int port,
ocelot_port_lag_leave(ocelot, port, lag.dev);
return 0;
/* Update the logical LAG port that serves as tag_8021q CPU port */
if (!dsa_is_cpu_port(ds, port))
return 0;
return felix_port_change_master(ds, port, lag.dev, NULL);
}
static int felix_lag_change(struct dsa_switch *ds, int port)
......@@ -1013,6 +1105,27 @@ static void felix_phylink_mac_link_up(struct dsa_switch *ds, int port,
felix->info->port_sched_speed_set(ocelot, port, speed);
}
static int felix_port_enable(struct dsa_switch *ds, int port,
struct phy_device *phydev)
{
struct dsa_port *dp = dsa_to_port(ds, port);
struct ocelot *ocelot = ds->priv;
if (!dsa_port_is_user(dp))
return 0;
if (ocelot->npi >= 0) {
struct net_device *master = dsa_port_to_master(dp);
if (felix_cpu_port_for_master(ds, master) != ocelot->npi) {
dev_err(ds->dev, "Multiple masters are not allowed\n");
return -EINVAL;
}
}
return 0;
}
static void felix_port_qos_map_init(struct ocelot *ocelot, int port)
{
int i;
......@@ -1912,6 +2025,7 @@ const struct dsa_switch_ops felix_switch_ops = {
.phylink_mac_select_pcs = felix_phylink_mac_select_pcs,
.phylink_mac_link_down = felix_phylink_mac_link_down,
.phylink_mac_link_up = felix_phylink_mac_link_up,
.port_enable = felix_port_enable,
.port_fast_age = felix_port_fast_age,
.port_fdb_dump = felix_fdb_dump,
.port_fdb_add = felix_fdb_add,
......@@ -1967,6 +2081,7 @@ const struct dsa_switch_ops felix_switch_ops = {
.port_add_dscp_prio = felix_port_add_dscp_prio,
.port_del_dscp_prio = felix_port_del_dscp_prio,
.port_set_host_flood = felix_port_set_host_flood,
.port_change_master = felix_port_change_master,
};
struct net_device *felix_port_to_netdev(struct ocelot *ocelot, int port)
......
......@@ -71,6 +71,9 @@ struct felix_tag_proto_ops {
int (*setup)(struct dsa_switch *ds);
void (*teardown)(struct dsa_switch *ds);
unsigned long (*get_host_fwd_mask)(struct dsa_switch *ds);
int (*change_master)(struct dsa_switch *ds, int port,
struct net_device *master,
struct netlink_ext_ack *extack);
};
extern const struct dsa_switch_ops felix_switch_ops;
......
......@@ -1017,7 +1017,8 @@ int qca8k_port_vlan_del(struct dsa_switch *ds, int port,
static bool qca8k_lag_can_offload(struct dsa_switch *ds,
struct dsa_lag lag,
struct netdev_lag_upper_info *info)
struct netdev_lag_upper_info *info,
struct netlink_ext_ack *extack)
{
struct dsa_port *dp;
int members = 0;
......@@ -1029,15 +1030,24 @@ static bool qca8k_lag_can_offload(struct dsa_switch *ds,
/* Includes the port joining the LAG */
members++;
if (members > QCA8K_NUM_PORTS_FOR_LAG)
if (members > QCA8K_NUM_PORTS_FOR_LAG) {
NL_SET_ERR_MSG_MOD(extack,
"Cannot offload more than 4 LAG ports");
return false;
}
if (info->tx_type != NETDEV_LAG_TX_TYPE_HASH)
if (info->tx_type != NETDEV_LAG_TX_TYPE_HASH) {
NL_SET_ERR_MSG_MOD(extack,
"Can only offload LAG using hash TX type");
return false;
}
if (info->hash_type != NETDEV_LAG_HASH_L2 &&
info->hash_type != NETDEV_LAG_HASH_L23)
info->hash_type != NETDEV_LAG_HASH_L23) {
NL_SET_ERR_MSG_MOD(extack,
"Can only offload L2 or L2+L3 TX hash");
return false;
}
return true;
}
......@@ -1160,11 +1170,12 @@ static int qca8k_lag_refresh_portmap(struct dsa_switch *ds, int port,
}
int qca8k_port_lag_join(struct dsa_switch *ds, int port, struct dsa_lag lag,
struct netdev_lag_upper_info *info)
struct netdev_lag_upper_info *info,
struct netlink_ext_ack *extack)
{
int ret;
if (!qca8k_lag_can_offload(ds, lag, info))
if (!qca8k_lag_can_offload(ds, lag, info, extack))
return -EOPNOTSUPP;
ret = qca8k_lag_setup_hash(ds, lag, info);
......
......@@ -512,7 +512,8 @@ int qca8k_port_vlan_del(struct dsa_switch *ds, int port,
/* Common port LAG function */
int qca8k_port_lag_join(struct dsa_switch *ds, int port, struct dsa_lag lag,
struct netdev_lag_upper_info *info);
struct netdev_lag_upper_info *info,
struct netlink_ext_ack *extack);
int qca8k_port_lag_leave(struct dsa_switch *ds, int port,
struct dsa_lag lag);
......
......@@ -173,7 +173,7 @@ mtk_flow_get_dsa_port(struct net_device **dev)
if (dp->cpu_dp->tag_ops->proto != DSA_TAG_PROTO_MTK)
return -ENODEV;
*dev = dp->cpu_dp->master;
*dev = dsa_port_to_master(dp);
return dp->index;
#else
......
......@@ -1382,7 +1382,7 @@ static u32 ocelot_get_bond_mask(struct ocelot *ocelot, struct net_device *bond)
/* The logical port number of a LAG is equal to the lowest numbered physical
* port ID present in that LAG. It may change if that port ever leaves the LAG.
*/
static int ocelot_bond_get_id(struct ocelot *ocelot, struct net_device *bond)
int ocelot_bond_get_id(struct ocelot *ocelot, struct net_device *bond)
{
int bond_mask = ocelot_get_bond_mask(ocelot, bond);
......@@ -1391,6 +1391,7 @@ static int ocelot_bond_get_id(struct ocelot *ocelot, struct net_device *bond)
return __ffs(bond_mask);
}
EXPORT_SYMBOL_GPL(ocelot_bond_get_id);
/* Returns the mask of user ports assigned to this DSA tag_8021q CPU port.
* Note that when CPU ports are in a LAG, the user ports are assigned to the
......@@ -2132,10 +2133,14 @@ static void ocelot_migrate_lag_fdbs(struct ocelot *ocelot,
int ocelot_port_lag_join(struct ocelot *ocelot, int port,
struct net_device *bond,
struct netdev_lag_upper_info *info)
struct netdev_lag_upper_info *info,
struct netlink_ext_ack *extack)
{
if (info->tx_type != NETDEV_LAG_TX_TYPE_HASH)
if (info->tx_type != NETDEV_LAG_TX_TYPE_HASH) {
NL_SET_ERR_MSG_MOD(extack,
"Can only offload LAG using hash TX type");
return -EOPNOTSUPP;
}
mutex_lock(&ocelot->fwd_domain_lock);
......
......@@ -1412,11 +1412,10 @@ static int ocelot_netdevice_lag_join(struct net_device *dev,
int port = priv->port.index;
int err;
err = ocelot_port_lag_join(ocelot, port, bond, info);
if (err == -EOPNOTSUPP) {
NL_SET_ERR_MSG_MOD(extack, "Offloading not supported");
err = ocelot_port_lag_join(ocelot, port, bond, info, extack);
if (err == -EOPNOTSUPP)
/* Offloading not supported, fall back to software LAG */
return 0;
}
bridge_dev = netdev_master_upper_dev_get(bond);
if (!bridge_dev || !netif_is_bridge_master(bridge_dev))
......
......@@ -253,11 +253,17 @@ struct netdev_hw_addr_list {
#define netdev_uc_empty(dev) netdev_hw_addr_list_empty(&(dev)->uc)
#define netdev_for_each_uc_addr(ha, dev) \
netdev_hw_addr_list_for_each(ha, &(dev)->uc)
#define netdev_for_each_synced_uc_addr(_ha, _dev) \
netdev_for_each_uc_addr((_ha), (_dev)) \
if ((_ha)->sync_cnt)
#define netdev_mc_count(dev) netdev_hw_addr_list_count(&(dev)->mc)
#define netdev_mc_empty(dev) netdev_hw_addr_list_empty(&(dev)->mc)
#define netdev_for_each_mc_addr(ha, dev) \
netdev_hw_addr_list_for_each(ha, &(dev)->mc)
#define netdev_for_each_synced_mc_addr(_ha, _dev) \
netdev_for_each_mc_addr((_ha), (_dev)) \
if ((_ha)->sync_cnt)
struct hh_cache {
unsigned int hh_len;
......
......@@ -300,6 +300,9 @@ struct dsa_port {
u8 master_admin_up:1;
u8 master_oper_up:1;
/* Valid only on user ports */
u8 cpu_port_in_lag:1;
u8 setup:1;
struct device_node *dn;
......@@ -559,6 +562,10 @@ static inline bool dsa_is_user_port(struct dsa_switch *ds, int p)
list_for_each_entry((_dp), &(_dst)->ports, list) \
if (dsa_port_is_user((_dp)))
#define dsa_tree_for_each_user_port_continue_reverse(_dp, _dst) \
list_for_each_entry_continue_reverse((_dp), &(_dst)->ports, list) \
if (dsa_port_is_user((_dp)))
#define dsa_tree_for_each_cpu_port(_dp, _dst) \
list_for_each_entry((_dp), &(_dst)->ports, list) \
if (dsa_port_is_cpu((_dp)))
......@@ -718,6 +725,14 @@ static inline bool dsa_port_offloads_lag(struct dsa_port *dp,
return dsa_port_lag_dev_get(dp) == lag->dev;
}
static inline struct net_device *dsa_port_to_master(const struct dsa_port *dp)
{
if (dp->cpu_port_in_lag)
return dsa_port_lag_dev_get(dp->cpu_dp);
return dp->cpu_dp->master;
}
static inline
struct net_device *dsa_port_to_bridge_port(const struct dsa_port *dp)
{
......@@ -802,6 +817,12 @@ dsa_tree_offloads_bridge_dev(struct dsa_switch_tree *dst,
return false;
}
static inline bool dsa_port_tree_same(const struct dsa_port *a,
const struct dsa_port *b)
{
return a->ds->dst == b->ds->dst;
}
typedef int dsa_fdb_dump_cb_t(const unsigned char *addr, u16 vid,
bool is_static, void *data);
struct dsa_switch_ops {
......@@ -825,6 +846,10 @@ struct dsa_switch_ops {
int (*connect_tag_protocol)(struct dsa_switch *ds,
enum dsa_tag_protocol proto);
int (*port_change_master)(struct dsa_switch *ds, int port,
struct net_device *master,
struct netlink_ext_ack *extack);
/* Optional switch-wide initialization and destruction methods */
int (*setup)(struct dsa_switch *ds);
void (*teardown)(struct dsa_switch *ds);
......@@ -1081,7 +1106,8 @@ struct dsa_switch_ops {
int port);
int (*crosschip_lag_join)(struct dsa_switch *ds, int sw_index,
int port, struct dsa_lag lag,
struct netdev_lag_upper_info *info);
struct netdev_lag_upper_info *info,
struct netlink_ext_ack *extack);
int (*crosschip_lag_leave)(struct dsa_switch *ds, int sw_index,
int port, struct dsa_lag lag);
......@@ -1156,7 +1182,8 @@ struct dsa_switch_ops {
int (*port_lag_change)(struct dsa_switch *ds, int port);
int (*port_lag_join)(struct dsa_switch *ds, int port,
struct dsa_lag lag,
struct netdev_lag_upper_info *info);
struct netdev_lag_upper_info *info,
struct netlink_ext_ack *extack);
int (*port_lag_leave)(struct dsa_switch *ds, int port,
struct dsa_lag lag);
......
......@@ -1229,10 +1229,12 @@ int ocelot_port_mdb_del(struct ocelot *ocelot, int port,
const struct net_device *bridge);
int ocelot_port_lag_join(struct ocelot *ocelot, int port,
struct net_device *bond,
struct netdev_lag_upper_info *info);
struct netdev_lag_upper_info *info,
struct netlink_ext_ack *extack);
void ocelot_port_lag_leave(struct ocelot *ocelot, int port,
struct net_device *bond);
void ocelot_port_lag_change(struct ocelot *ocelot, int port, bool lag_tx_active);
int ocelot_bond_get_id(struct ocelot *ocelot, struct net_device *bond);
int ocelot_devlink_sb_register(struct ocelot *ocelot);
void ocelot_devlink_sb_unregister(struct ocelot *ocelot);
......
......@@ -1375,4 +1375,14 @@ enum {
#define IFLA_MCTP_MAX (__IFLA_MCTP_MAX - 1)
/* DSA section */
enum {
IFLA_DSA_UNSPEC,
IFLA_DSA_MASTER,
__IFLA_DSA_MAX,
};
#define IFLA_DSA_MAX (__IFLA_DSA_MAX - 1)
#endif /* _UAPI_LINUX_IF_LINK_H */
# SPDX-License-Identifier: GPL-2.0
# the core
obj-$(CONFIG_NET_DSA) += dsa_core.o
dsa_core-y += dsa.o dsa2.o master.o port.o slave.o switch.o tag_8021q.o
dsa_core-y += \
dsa.o \
dsa2.o \
master.o \
netlink.o \
port.o \
slave.o \
switch.o \
tag_8021q.o
# tagging formats
obj-$(CONFIG_NET_DSA_TAG_AR9331) += tag_ar9331.o
......
......@@ -536,8 +536,16 @@ static int __init dsa_init_module(void)
dsa_tag_driver_register(&DSA_TAG_DRIVER_NAME(none_ops),
THIS_MODULE);
rc = rtnl_link_register(&dsa_link_ops);
if (rc)
goto netlink_register_fail;
return 0;
netlink_register_fail:
dsa_tag_driver_unregister(&DSA_TAG_DRIVER_NAME(none_ops));
dsa_slave_unregister_notifier();
dev_remove_pack(&dsa_pack_type);
register_notifier_fail:
destroy_workqueue(dsa_owq);
......@@ -547,6 +555,7 @@ module_init(dsa_init_module);
static void __exit dsa_cleanup_module(void)
{
rtnl_link_unregister(&dsa_link_ops);
dsa_tag_driver_unregister(&DSA_TAG_DRIVER_NAME(none_ops));
dsa_slave_unregister_notifier();
......
......@@ -387,6 +387,20 @@ static struct dsa_port *dsa_tree_find_first_cpu(struct dsa_switch_tree *dst)
return NULL;
}
struct net_device *dsa_tree_find_first_master(struct dsa_switch_tree *dst)
{
struct device_node *ethernet;
struct net_device *master;
struct dsa_port *cpu_dp;
cpu_dp = dsa_tree_find_first_cpu(dst);
ethernet = of_parse_phandle(cpu_dp->dn, "ethernet", 0);
master = of_find_net_device_by_node(ethernet);
of_node_put(ethernet);
return master;
}
/* Assign the default CPU port (the first one in the tree) to all ports of the
* fabric which don't already have one as part of their own switch.
*/
......@@ -1263,11 +1277,11 @@ int dsa_tree_change_tag_proto(struct dsa_switch_tree *dst,
* attempts to change the tagging protocol. If we ever lift the IFF_UP
* restriction, there needs to be another mutex which serializes this.
*/
list_for_each_entry(dp, &dst->ports, list) {
if (dsa_port_is_cpu(dp) && (dp->master->flags & IFF_UP))
dsa_tree_for_each_user_port(dp, dst) {
if (dsa_port_to_master(dp)->flags & IFF_UP)
goto out_unlock;
if (dsa_port_is_user(dp) && (dp->slave->flags & IFF_UP))
if (dp->slave->flags & IFF_UP)
goto out_unlock;
}
......@@ -1312,6 +1326,12 @@ void dsa_tree_master_admin_state_change(struct dsa_switch_tree *dst,
struct dsa_port *cpu_dp = master->dsa_ptr;
bool notify = false;
/* Don't keep track of admin state on LAG DSA masters,
* but rather just of physical DSA masters
*/
if (netif_is_lag_master(master))
return;
if ((dsa_port_master_is_operational(cpu_dp)) !=
(up && cpu_dp->master_oper_up))
notify = true;
......@@ -1329,6 +1349,12 @@ void dsa_tree_master_oper_state_change(struct dsa_switch_tree *dst,
struct dsa_port *cpu_dp = master->dsa_ptr;
bool notify = false;
/* Don't keep track of oper state on LAG DSA masters,
* but rather just of physical DSA masters
*/
if (netif_is_lag_master(master))
return;
if ((dsa_port_master_is_operational(cpu_dp)) !=
(cpu_dp->master_admin_up && up))
notify = true;
......@@ -1797,7 +1823,7 @@ void dsa_switch_shutdown(struct dsa_switch *ds)
rtnl_lock();
dsa_switch_for_each_user_port(dp, ds) {
master = dp->cpu_dp->master;
master = dsa_port_to_master(dp);
slave_dev = dp->slave;
netdev_upper_dev_unlink(master, slave_dev);
......
......@@ -88,6 +88,7 @@ struct dsa_notifier_lag_info {
const struct dsa_port *dp;
struct dsa_lag lag;
struct netdev_lag_upper_info *info;
struct netlink_ext_ack *extack;
};
/* DSA_NOTIFIER_VLAN_* */
......@@ -184,6 +185,11 @@ static inline int dsa_tag_protocol_overhead(const struct dsa_device_ops *ops)
/* master.c */
int dsa_master_setup(struct net_device *dev, struct dsa_port *cpu_dp);
void dsa_master_teardown(struct net_device *dev);
int dsa_master_lag_setup(struct net_device *lag_dev, struct dsa_port *cpu_dp,
struct netdev_lag_upper_info *uinfo,
struct netlink_ext_ack *extack);
void dsa_master_lag_teardown(struct net_device *lag_dev,
struct dsa_port *cpu_dp);
static inline struct net_device *dsa_master_find_slave(struct net_device *dev,
int device, int port)
......@@ -200,6 +206,9 @@ static inline struct net_device *dsa_master_find_slave(struct net_device *dev,
return NULL;
}
/* netlink.c */
extern struct rtnl_link_ops dsa_link_ops __read_mostly;
/* port.c */
void dsa_port_set_tag_protocol(struct dsa_port *cpu_dp,
const struct dsa_device_ops *tag_ops);
......@@ -292,6 +301,8 @@ void dsa_port_hsr_leave(struct dsa_port *dp, struct net_device *hsr);
int dsa_port_tag_8021q_vlan_add(struct dsa_port *dp, u16 vid, bool broadcast);
void dsa_port_tag_8021q_vlan_del(struct dsa_port *dp, u16 vid, bool broadcast);
void dsa_port_set_host_flood(struct dsa_port *dp, bool uc, bool mc);
int dsa_port_change_master(struct dsa_port *dp, struct net_device *master,
struct netlink_ext_ack *extack);
/* slave.c */
extern const struct dsa_device_ops notag_netdev_ops;
......@@ -305,8 +316,12 @@ int dsa_slave_suspend(struct net_device *slave_dev);
int dsa_slave_resume(struct net_device *slave_dev);
int dsa_slave_register_notifier(void);
void dsa_slave_unregister_notifier(void);
void dsa_slave_sync_ha(struct net_device *dev);
void dsa_slave_unsync_ha(struct net_device *dev);
void dsa_slave_setup_tagger(struct net_device *slave);
int dsa_slave_change_mtu(struct net_device *dev, int new_mtu);
int dsa_slave_change_master(struct net_device *dev, struct net_device *master,
struct netlink_ext_ack *extack);
int dsa_slave_manage_vlan_filtering(struct net_device *dev,
bool vlan_filtering);
......@@ -322,7 +337,7 @@ dsa_slave_to_master(const struct net_device *dev)
{
struct dsa_port *dp = dsa_slave_to_port(dev);
return dp->cpu_dp->master;
return dsa_port_to_master(dp);
}
/* If under a bridge with vlan_filtering=0, make sure to send pvid-tagged
......@@ -542,6 +557,7 @@ void dsa_lag_map(struct dsa_switch_tree *dst, struct dsa_lag *lag);
void dsa_lag_unmap(struct dsa_switch_tree *dst, struct dsa_lag *lag);
struct dsa_lag *dsa_tree_lag_find(struct dsa_switch_tree *dst,
const struct net_device *lag_dev);
struct net_device *dsa_tree_find_first_master(struct dsa_switch_tree *dst);
int dsa_tree_notify(struct dsa_switch_tree *dst, unsigned long e, void *v);
int dsa_broadcast(unsigned long e, void *v);
int dsa_tree_change_tag_proto(struct dsa_switch_tree *dst,
......
......@@ -226,6 +226,9 @@ static int dsa_master_ethtool_setup(struct net_device *dev)
struct dsa_switch *ds = cpu_dp->ds;
struct ethtool_ops *ops;
if (netif_is_lag_master(dev))
return 0;
ops = devm_kzalloc(ds->dev, sizeof(*ops), GFP_KERNEL);
if (!ops)
return -ENOMEM;
......@@ -250,6 +253,9 @@ static void dsa_master_ethtool_teardown(struct net_device *dev)
{
struct dsa_port *cpu_dp = dev->dsa_ptr;
if (netif_is_lag_master(dev))
return;
dev->ethtool_ops = cpu_dp->orig_ethtool_ops;
cpu_dp->orig_ethtool_ops = NULL;
}
......@@ -257,6 +263,9 @@ static void dsa_master_ethtool_teardown(struct net_device *dev)
static void dsa_netdev_ops_set(struct net_device *dev,
const struct dsa_netdevice_ops *ops)
{
if (netif_is_lag_master(dev))
return;
dev->dsa_ptr->netdev_ops = ops;
}
......@@ -355,12 +364,14 @@ int dsa_master_setup(struct net_device *dev, struct dsa_port *cpu_dp)
mtu = ETH_DATA_LEN + dsa_tag_protocol_overhead(tag_ops);
/* The DSA master must use SET_NETDEV_DEV for this to work. */
consumer_link = device_link_add(ds->dev, dev->dev.parent,
DL_FLAG_AUTOREMOVE_CONSUMER);
if (!consumer_link)
netdev_err(dev,
"Failed to create a device link to DSA switch %s\n",
dev_name(ds->dev));
if (!netif_is_lag_master(dev)) {
consumer_link = device_link_add(ds->dev, dev->dev.parent,
DL_FLAG_AUTOREMOVE_CONSUMER);
if (!consumer_link)
netdev_err(dev,
"Failed to create a device link to DSA switch %s\n",
dev_name(ds->dev));
}
/* The switch driver may not implement ->port_change_mtu(), case in
* which dsa_slave_change_mtu() will not update the master MTU either,
......@@ -417,3 +428,52 @@ void dsa_master_teardown(struct net_device *dev)
*/
wmb();
}
int dsa_master_lag_setup(struct net_device *lag_dev, struct dsa_port *cpu_dp,
struct netdev_lag_upper_info *uinfo,
struct netlink_ext_ack *extack)
{
bool master_setup = false;
int err;
if (!netdev_uses_dsa(lag_dev)) {
err = dsa_master_setup(lag_dev, cpu_dp);
if (err)
return err;
master_setup = true;
}
err = dsa_port_lag_join(cpu_dp, lag_dev, uinfo, extack);
if (err) {
if (extack && !extack->_msg)
NL_SET_ERR_MSG_MOD(extack,
"CPU port failed to join LAG");
goto out_master_teardown;
}
return 0;
out_master_teardown:
if (master_setup)
dsa_master_teardown(lag_dev);
return err;
}
/* Tear down a master if there isn't any other user port on it,
* optionally also destroying LAG information.
*/
void dsa_master_lag_teardown(struct net_device *lag_dev,
struct dsa_port *cpu_dp)
{
struct net_device *upper;
struct list_head *iter;
dsa_port_lag_leave(cpu_dp, lag_dev);
netdev_for_each_upper_dev_rcu(lag_dev, upper, iter)
if (dsa_slave_dev_check(upper))
return;
dsa_master_teardown(lag_dev);
}
// SPDX-License-Identifier: GPL-2.0
/* Copyright 2022 NXP
*/
#include <linux/netdevice.h>
#include <net/rtnetlink.h>
#include "dsa_priv.h"
static const struct nla_policy dsa_policy[IFLA_DSA_MAX + 1] = {
[IFLA_DSA_MASTER] = { .type = NLA_U32 },
};
static int dsa_changelink(struct net_device *dev, struct nlattr *tb[],
struct nlattr *data[],
struct netlink_ext_ack *extack)
{
int err;
if (!data)
return 0;
if (data[IFLA_DSA_MASTER]) {
u32 ifindex = nla_get_u32(data[IFLA_DSA_MASTER]);
struct net_device *master;
master = __dev_get_by_index(dev_net(dev), ifindex);
if (!master)
return -EINVAL;
err = dsa_slave_change_master(dev, master, extack);
if (err)
return err;
}
return 0;
}
static size_t dsa_get_size(const struct net_device *dev)
{
return nla_total_size(sizeof(u32)) + /* IFLA_DSA_MASTER */
0;
}
static int dsa_fill_info(struct sk_buff *skb, const struct net_device *dev)
{
struct net_device *master = dsa_slave_to_master(dev);
if (nla_put_u32(skb, IFLA_DSA_MASTER, master->ifindex))
return -EMSGSIZE;
return 0;
}
struct rtnl_link_ops dsa_link_ops __read_mostly = {
.kind = "dsa",
.priv_size = sizeof(struct dsa_port),
.maxtype = IFLA_DSA_MAX,
.policy = dsa_policy,
.changelink = dsa_changelink,
.get_size = dsa_get_size,
.fill_info = dsa_fill_info,
};
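
For reference, here is a hypothetical user space sketch (using libmnl; not
part of this patch set) of the request that "ip link set dev swp0 type dsa
master eth1" generates: an RTM_NEWLINK message carrying IFLA_DSA_MASTER
nested under IFLA_LINKINFO / IFLA_INFO_DATA, which lands in dsa_changelink()
above.

#include <stdio.h>
#include <time.h>
#include <net/if.h>
#include <linux/if_link.h>
#include <linux/rtnetlink.h>
#include <libmnl/libmnl.h>

int main(void)
{
	char buf[MNL_SOCKET_BUFFER_SIZE];
	struct nlattr *linkinfo, *data;
	struct mnl_socket *nl;
	struct ifinfomsg *ifi;
	struct nlmsghdr *nlh;
	unsigned int seq;
	int ret;

	/* RTM_NEWLINK on an existing ifindex modifies that link */
	nlh = mnl_nlmsg_put_header(buf);
	nlh->nlmsg_type = RTM_NEWLINK;
	nlh->nlmsg_flags = NLM_F_REQUEST | NLM_F_ACK;
	nlh->nlmsg_seq = seq = time(NULL);

	ifi = mnl_nlmsg_put_extra_header(nlh, sizeof(*ifi));
	ifi->ifi_family = AF_UNSPEC;
	ifi->ifi_index = if_nametoindex("swp0");

	/* dsa_changelink() sees the attributes nested under IFLA_INFO_DATA */
	linkinfo = mnl_attr_nest_start(nlh, IFLA_LINKINFO);
	mnl_attr_put_strz(nlh, IFLA_INFO_KIND, "dsa");
	data = mnl_attr_nest_start(nlh, IFLA_INFO_DATA);
	mnl_attr_put_u32(nlh, IFLA_DSA_MASTER, if_nametoindex("eth1"));
	mnl_attr_nest_end(nlh, data);
	mnl_attr_nest_end(nlh, linkinfo);

	nl = mnl_socket_open(NETLINK_ROUTE);
	if (!nl || mnl_socket_bind(nl, 0, MNL_SOCKET_AUTOPID) < 0) {
		perror("mnl_socket");
		return 1;
	}

	if (mnl_socket_sendto(nl, nlh, nlh->nlmsg_len) < 0) {
		perror("mnl_socket_sendto");
		return 1;
	}

	/* The kernel answers with an ACK, or an error plus extack message */
	ret = mnl_socket_recvfrom(nl, buf, sizeof(buf));
	if (ret > 0)
		ret = mnl_cb_run(buf, ret, seq, mnl_socket_get_portid(nl),
				 NULL, NULL);
	if (ret < 0)
		perror("RTM_NEWLINK");

	mnl_socket_close(nl);
	return ret < 0 ? 1 : 0;
}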
......@@ -7,6 +7,7 @@
*/
#include <linux/if_bridge.h>
#include <linux/netdevice.h>
#include <linux/notifier.h>
#include <linux/of_mdio.h>
#include <linux/of_net.h>
......@@ -634,6 +635,7 @@ int dsa_port_lag_join(struct dsa_port *dp, struct net_device *lag_dev,
struct dsa_notifier_lag_info info = {
.dp = dp,
.info = uinfo,
.extack = extack,
};
struct net_device *bridge_dev;
int err;
......@@ -1026,7 +1028,7 @@ int dsa_port_standalone_host_fdb_add(struct dsa_port *dp,
int dsa_port_bridge_host_fdb_add(struct dsa_port *dp,
const unsigned char *addr, u16 vid)
{
struct dsa_port *cpu_dp = dp->cpu_dp;
struct net_device *master = dsa_port_to_master(dp);
struct dsa_db db = {
.type = DSA_DB_BRIDGE,
.bridge = *dp->bridge,
......@@ -1037,8 +1039,8 @@ int dsa_port_bridge_host_fdb_add(struct dsa_port *dp,
* requires rtnl_lock(), since we can't guarantee that is held here,
* and we can't take it either.
*/
if (cpu_dp->master->priv_flags & IFF_UNICAST_FLT) {
err = dev_uc_add(cpu_dp->master, addr);
if (master->priv_flags & IFF_UNICAST_FLT) {
err = dev_uc_add(master, addr);
if (err)
return err;
}
......@@ -1077,15 +1079,15 @@ int dsa_port_standalone_host_fdb_del(struct dsa_port *dp,
int dsa_port_bridge_host_fdb_del(struct dsa_port *dp,
const unsigned char *addr, u16 vid)
{
struct dsa_port *cpu_dp = dp->cpu_dp;
struct net_device *master = dsa_port_to_master(dp);
struct dsa_db db = {
.type = DSA_DB_BRIDGE,
.bridge = *dp->bridge,
};
int err;
if (cpu_dp->master->priv_flags & IFF_UNICAST_FLT) {
err = dev_uc_del(cpu_dp->master, addr);
if (master->priv_flags & IFF_UNICAST_FLT) {
err = dev_uc_del(master, addr);
if (err)
return err;
}
......@@ -1208,14 +1210,14 @@ int dsa_port_standalone_host_mdb_add(const struct dsa_port *dp,
int dsa_port_bridge_host_mdb_add(const struct dsa_port *dp,
const struct switchdev_obj_port_mdb *mdb)
{
struct dsa_port *cpu_dp = dp->cpu_dp;
struct net_device *master = dsa_port_to_master(dp);
struct dsa_db db = {
.type = DSA_DB_BRIDGE,
.bridge = *dp->bridge,
};
int err;
err = dev_mc_add(cpu_dp->master, mdb->addr);
err = dev_mc_add(master, mdb->addr);
if (err)
return err;
......@@ -1252,14 +1254,14 @@ int dsa_port_standalone_host_mdb_del(const struct dsa_port *dp,
int dsa_port_bridge_host_mdb_del(const struct dsa_port *dp,
const struct switchdev_obj_port_mdb *mdb)
{
struct dsa_port *cpu_dp = dp->cpu_dp;
struct net_device *master = dsa_port_to_master(dp);
struct dsa_db db = {
.type = DSA_DB_BRIDGE,
.bridge = *dp->bridge,
};
int err;
err = dev_mc_del(cpu_dp->master, mdb->addr);
err = dev_mc_del(master, mdb->addr);
if (err)
return err;
......@@ -1294,19 +1296,19 @@ int dsa_port_host_vlan_add(struct dsa_port *dp,
const struct switchdev_obj_port_vlan *vlan,
struct netlink_ext_ack *extack)
{
struct net_device *master = dsa_port_to_master(dp);
struct dsa_notifier_vlan_info info = {
.dp = dp,
.vlan = vlan,
.extack = extack,
};
struct dsa_port *cpu_dp = dp->cpu_dp;
int err;
err = dsa_port_notify(dp, DSA_NOTIFIER_HOST_VLAN_ADD, &info);
if (err && err != -EOPNOTSUPP)
return err;
vlan_vid_add(cpu_dp->master, htons(ETH_P_8021Q), vlan->vid);
vlan_vid_add(master, htons(ETH_P_8021Q), vlan->vid);
return err;
}
......@@ -1314,18 +1316,18 @@ int dsa_port_host_vlan_add(struct dsa_port *dp,
int dsa_port_host_vlan_del(struct dsa_port *dp,
const struct switchdev_obj_port_vlan *vlan)
{
struct net_device *master = dsa_port_to_master(dp);
struct dsa_notifier_vlan_info info = {
.dp = dp,
.vlan = vlan,
};
struct dsa_port *cpu_dp = dp->cpu_dp;
int err;
err = dsa_port_notify(dp, DSA_NOTIFIER_HOST_VLAN_DEL, &info);
if (err && err != -EOPNOTSUPP)
return err;
vlan_vid_del(cpu_dp->master, htons(ETH_P_8021Q), vlan->vid);
vlan_vid_del(master, htons(ETH_P_8021Q), vlan->vid);
return err;
}
......@@ -1374,6 +1376,136 @@ int dsa_port_mrp_del_ring_role(const struct dsa_port *dp,
return ds->ops->port_mrp_del_ring_role(ds, dp->index, mrp);
}
static int dsa_port_assign_master(struct dsa_port *dp,
struct net_device *master,
struct netlink_ext_ack *extack,
bool fail_on_err)
{
struct dsa_switch *ds = dp->ds;
int port = dp->index, err;
err = ds->ops->port_change_master(ds, port, master, extack);
if (err && !fail_on_err)
dev_err(ds->dev, "port %d failed to assign master %s: %pe\n",
port, master->name, ERR_PTR(err));
if (err && fail_on_err)
return err;
dp->cpu_dp = master->dsa_ptr;
dp->cpu_port_in_lag = netif_is_lag_master(master);
return 0;
}
/* Change the dp->cpu_dp affinity for a user port. Note that both cross-chip
* notifiers and drivers have implicit assumptions about user-to-CPU-port
* mappings, so we unfortunately cannot delay the deletion of the objects
* (switchdev, standalone addresses, standalone VLANs) on the old CPU port
* until the new CPU port has been set up. So we need to completely tear down
* the old CPU port before changing it, and restore it on errors during the
* bringup of the new one.
*/
int dsa_port_change_master(struct dsa_port *dp, struct net_device *master,
struct netlink_ext_ack *extack)
{
struct net_device *bridge_dev = dsa_port_bridge_dev_get(dp);
struct net_device *old_master = dsa_port_to_master(dp);
struct net_device *dev = dp->slave;
struct dsa_switch *ds = dp->ds;
bool vlan_filtering;
int err, tmp;
/* Bridges may hold host FDB, MDB and VLAN objects. These need to be
* migrated, so dynamically unoffload and later reoffload the bridge
* port.
*/
if (bridge_dev) {
dsa_port_pre_bridge_leave(dp, bridge_dev);
dsa_port_bridge_leave(dp, bridge_dev);
}
/* The port might still be VLAN filtering even if it's no longer
* under a bridge, either due to ds->vlan_filtering_is_global or
* ds->needs_standalone_vlan_filtering. In turn this means VLANs
* on the CPU port.
*/
vlan_filtering = dsa_port_is_vlan_filtering(dp);
if (vlan_filtering) {
err = dsa_slave_manage_vlan_filtering(dev, false);
if (err) {
NL_SET_ERR_MSG_MOD(extack,
"Failed to remove standalone VLANs");
goto rewind_old_bridge;
}
}
/* Standalone addresses, and addresses of upper interfaces like
* VLAN, LAG, HSR need to be migrated.
*/
dsa_slave_unsync_ha(dev);
err = dsa_port_assign_master(dp, master, extack, true);
if (err)
goto rewind_old_addrs;
dsa_slave_sync_ha(dev);
if (vlan_filtering) {
err = dsa_slave_manage_vlan_filtering(dev, true);
if (err) {
NL_SET_ERR_MSG_MOD(extack,
"Failed to restore standalone VLANs");
goto rewind_new_addrs;
}
}
if (bridge_dev) {
err = dsa_port_bridge_join(dp, bridge_dev, extack);
if (err && err == -EOPNOTSUPP) {
NL_SET_ERR_MSG_MOD(extack,
"Failed to reoffload bridge");
goto rewind_new_vlan;
}
}
return 0;
rewind_new_vlan:
if (vlan_filtering)
dsa_slave_manage_vlan_filtering(dev, false);
rewind_new_addrs:
dsa_slave_unsync_ha(dev);
dsa_port_assign_master(dp, old_master, NULL, false);
/* Restore the objects on the old CPU port */
rewind_old_addrs:
dsa_slave_sync_ha(dev);
if (vlan_filtering) {
tmp = dsa_slave_manage_vlan_filtering(dev, true);
if (tmp) {
dev_err(ds->dev,
"port %d failed to restore standalone VLANs: %pe\n",
dp->index, ERR_PTR(tmp));
}
}
rewind_old_bridge:
if (bridge_dev) {
tmp = dsa_port_bridge_join(dp, bridge_dev, extack);
if (tmp) {
dev_err(ds->dev,
"port %d failed to rejoin bridge %s: %pe\n",
dp->index, bridge_dev->name, ERR_PTR(tmp));
}
}
return err;
}
void dsa_port_set_tag_protocol(struct dsa_port *cpu_dp,
const struct dsa_device_ops *tag_ops)
{
......
......@@ -164,6 +164,48 @@ static int dsa_slave_unsync_mc(struct net_device *dev,
return dsa_slave_schedule_standalone_work(dev, DSA_MC_DEL, addr, 0);
}
void dsa_slave_sync_ha(struct net_device *dev)
{
struct dsa_port *dp = dsa_slave_to_port(dev);
struct dsa_switch *ds = dp->ds;
struct netdev_hw_addr *ha;
netif_addr_lock_bh(dev);
netdev_for_each_synced_mc_addr(ha, dev)
dsa_slave_sync_mc(dev, ha->addr);
netdev_for_each_synced_uc_addr(ha, dev)
dsa_slave_sync_uc(dev, ha->addr);
netif_addr_unlock_bh(dev);
if (dsa_switch_supports_uc_filtering(ds) ||
dsa_switch_supports_mc_filtering(ds))
dsa_flush_workqueue();
}
void dsa_slave_unsync_ha(struct net_device *dev)
{
struct dsa_port *dp = dsa_slave_to_port(dev);
struct dsa_switch *ds = dp->ds;
struct netdev_hw_addr *ha;
netif_addr_lock_bh(dev);
netdev_for_each_synced_uc_addr(ha, dev)
dsa_slave_unsync_uc(dev, ha->addr);
netdev_for_each_synced_mc_addr(ha, dev)
dsa_slave_unsync_mc(dev, ha->addr);
netif_addr_unlock_bh(dev);
if (dsa_switch_supports_uc_filtering(ds) ||
dsa_switch_supports_mc_filtering(ds))
dsa_flush_workqueue();
}
/* slave mii_bus handling ***************************************************/
static int dsa_slave_phy_read(struct mii_bus *bus, int addr, int reg)
{
......@@ -1503,8 +1545,7 @@ static int dsa_slave_setup_tc_block(struct net_device *dev,
static int dsa_slave_setup_ft_block(struct dsa_switch *ds, int port,
void *type_data)
{
struct dsa_port *cpu_dp = dsa_to_port(ds, port)->cpu_dp;
struct net_device *master = cpu_dp->master;
struct net_device *master = dsa_port_to_master(dsa_to_port(ds, port));
if (!master->netdev_ops->ndo_setup_tc)
return -EOPNOTSUPP;
......@@ -2147,13 +2188,14 @@ static int dsa_slave_fill_forward_path(struct net_device_path_ctx *ctx,
struct net_device_path *path)
{
struct dsa_port *dp = dsa_slave_to_port(ctx->dev);
struct net_device *master = dsa_port_to_master(dp);
struct dsa_port *cpu_dp = dp->cpu_dp;
path->dev = ctx->dev;
path->type = DEV_PATH_DSA;
path->dsa.proto = cpu_dp->tag_ops->proto;
path->dsa.port = dp->index;
ctx->dev = cpu_dp->master;
ctx->dev = master;
return 0;
}
......@@ -2271,9 +2313,9 @@ static int dsa_slave_phy_setup(struct net_device *slave_dev)
void dsa_slave_setup_tagger(struct net_device *slave)
{
struct dsa_port *dp = dsa_slave_to_port(slave);
struct net_device *master = dsa_port_to_master(dp);
struct dsa_slave_priv *p = netdev_priv(slave);
const struct dsa_port *cpu_dp = dp->cpu_dp;
struct net_device *master = cpu_dp->master;
const struct dsa_switch *ds = dp->ds;
slave->needed_headroom = cpu_dp->tag_ops->needed_headroom;
......@@ -2330,8 +2372,7 @@ int dsa_slave_resume(struct net_device *slave_dev)
int dsa_slave_create(struct dsa_port *port)
{
const struct dsa_port *cpu_dp = port->cpu_dp;
struct net_device *master = cpu_dp->master;
struct net_device *master = dsa_port_to_master(port);
struct dsa_switch *ds = port->ds;
const char *name = port->name;
struct net_device *slave_dev;
......@@ -2347,6 +2388,7 @@ int dsa_slave_create(struct dsa_port *port)
if (slave_dev == NULL)
return -ENOMEM;
slave_dev->rtnl_link_ops = &dsa_link_ops;
slave_dev->ethtool_ops = &dsa_slave_ethtool_ops;
#if IS_ENABLED(CONFIG_DCB)
slave_dev->dcbnl_ops = &dsa_slave_dcbnl_ops;
......@@ -2463,6 +2505,83 @@ void dsa_slave_destroy(struct net_device *slave_dev)
free_netdev(slave_dev);
}
int dsa_slave_change_master(struct net_device *dev, struct net_device *master,
struct netlink_ext_ack *extack)
{
struct net_device *old_master = dsa_slave_to_master(dev);
struct dsa_port *dp = dsa_slave_to_port(dev);
struct dsa_switch *ds = dp->ds;
struct net_device *upper;
struct list_head *iter;
int err;
if (master == old_master)
return 0;
if (!ds->ops->port_change_master) {
NL_SET_ERR_MSG_MOD(extack,
"Driver does not support changing DSA master");
return -EOPNOTSUPP;
}
if (!netdev_uses_dsa(master)) {
NL_SET_ERR_MSG_MOD(extack,
"Interface not eligible as DSA master");
return -EOPNOTSUPP;
}
netdev_for_each_upper_dev_rcu(master, upper, iter) {
if (dsa_slave_dev_check(upper))
continue;
if (netif_is_bridge_master(upper))
continue;
NL_SET_ERR_MSG_MOD(extack, "Cannot join master with unknown uppers");
return -EOPNOTSUPP;
}
/* Since we allow live-changing the DSA master, plus we auto-open the
* DSA master when the user port opens => we need to ensure that the
* new DSA master is open too.
*/
if (dev->flags & IFF_UP) {
err = dev_open(master, extack);
if (err)
return err;
}
netdev_upper_dev_unlink(old_master, dev);
err = netdev_upper_dev_link(master, dev, extack);
if (err)
goto out_revert_old_master_unlink;
err = dsa_port_change_master(dp, master, extack);
if (err)
goto out_revert_master_link;
/* Update the MTU of the new CPU port through cross-chip notifiers */
err = dsa_slave_change_mtu(dev, dev->mtu);
if (err && err != -EOPNOTSUPP) {
netdev_warn(dev,
"nonfatal error updating MTU with new master: %pe\n",
ERR_PTR(err));
}
/* If the port doesn't have its own MAC address and relies on the DSA
* master's one, inherit it again from the new DSA master.
*/
if (is_zero_ether_addr(dp->mac))
eth_hw_addr_inherit(dev, master);
return 0;
out_revert_master_link:
netdev_upper_dev_unlink(master, dev);
out_revert_old_master_unlink:
netdev_upper_dev_link(old_master, dev, NULL);
return err;
}
bool dsa_slave_dev_check(const struct net_device *dev)
{
return dev->netdev_ops == &dsa_slave_netdev_ops;
......@@ -2699,11 +2818,45 @@ dsa_slave_prechangeupper_sanity_check(struct net_device *dev,
return NOTIFY_DONE;
}
/* To be eligible as a DSA master, a LAG must have all lower interfaces be
* eligible DSA masters. Additionally, all LAG slaves must be DSA masters of
* switches in the same switch tree.
*/
static int dsa_lag_master_validate(struct net_device *lag_dev,
struct netlink_ext_ack *extack)
{
struct net_device *lower1, *lower2;
struct list_head *iter1, *iter2;
netdev_for_each_lower_dev(lag_dev, lower1, iter1) {
netdev_for_each_lower_dev(lag_dev, lower2, iter2) {
if (!netdev_uses_dsa(lower1) ||
!netdev_uses_dsa(lower2)) {
NL_SET_ERR_MSG_MOD(extack,
"All LAG ports must be eligible as DSA masters");
return notifier_from_errno(-EINVAL);
}
if (lower1 == lower2)
continue;
if (!dsa_port_tree_same(lower1->dsa_ptr,
lower2->dsa_ptr)) {
NL_SET_ERR_MSG_MOD(extack,
"LAG contains DSA masters of disjoint switch trees");
return notifier_from_errno(-EINVAL);
}
}
}
return NOTIFY_DONE;
}
static int
dsa_master_prechangeupper_sanity_check(struct net_device *master,
struct netdev_notifier_changeupper_info *info)
{
struct netlink_ext_ack *extack;
struct netlink_ext_ack *extack = netdev_notifier_info_to_extack(&info->info);
if (!netdev_uses_dsa(master))
return NOTIFY_DONE;
......@@ -2721,13 +2874,51 @@ dsa_master_prechangeupper_sanity_check(struct net_device *master,
if (netif_is_bridge_master(info->upper_dev))
return NOTIFY_DONE;
extack = netdev_notifier_info_to_extack(&info->info);
/* Allow LAG uppers, subject to further restrictions in
* dsa_lag_master_prechangelower_sanity_check()
*/
if (netif_is_lag_master(info->upper_dev))
return dsa_lag_master_validate(info->upper_dev, extack);
NL_SET_ERR_MSG_MOD(extack,
"DSA master cannot join unknown upper interfaces");
return notifier_from_errno(-EBUSY);
}
static int
dsa_lag_master_prechangelower_sanity_check(struct net_device *dev,
struct netdev_notifier_changeupper_info *info)
{
struct netlink_ext_ack *extack = netdev_notifier_info_to_extack(&info->info);
struct net_device *lag_dev = info->upper_dev;
struct net_device *lower;
struct list_head *iter;
if (!netdev_uses_dsa(lag_dev) || !netif_is_lag_master(lag_dev))
return NOTIFY_DONE;
if (!info->linking)
return NOTIFY_DONE;
if (!netdev_uses_dsa(dev)) {
NL_SET_ERR_MSG(extack,
"Only DSA masters can join a LAG DSA master");
return notifier_from_errno(-EINVAL);
}
netdev_for_each_lower_dev(lag_dev, lower, iter) {
if (!dsa_port_tree_same(dev->dsa_ptr, lower->dsa_ptr)) {
NL_SET_ERR_MSG(extack,
"Interface is DSA master for a different switch tree than this LAG");
return notifier_from_errno(-EINVAL);
}
break;
}
return NOTIFY_DONE;
}
/* Don't allow bridging of DSA masters, since the bridge layer rx_handler
* prevents the DSA fake ethertype handler to be invoked, so we don't get the
* chance to strip off and parse the DSA switch tag protocol header (the bridge
......@@ -2768,6 +2959,136 @@ dsa_bridge_prechangelower_sanity_check(struct net_device *new_lower,
return NOTIFY_DONE;
}
static void dsa_tree_migrate_ports_from_lag_master(struct dsa_switch_tree *dst,
struct net_device *lag_dev)
{
struct net_device *new_master = dsa_tree_find_first_master(dst);
struct dsa_port *dp;
int err;
dsa_tree_for_each_user_port(dp, dst) {
if (dsa_port_to_master(dp) != lag_dev)
continue;
err = dsa_slave_change_master(dp->slave, new_master, NULL);
if (err) {
netdev_err(dp->slave,
"failed to restore master to %s: %pe\n",
new_master->name, ERR_PTR(err));
}
}
}
static int dsa_master_lag_join(struct net_device *master,
struct net_device *lag_dev,
struct netdev_lag_upper_info *uinfo,
struct netlink_ext_ack *extack)
{
struct dsa_port *cpu_dp = master->dsa_ptr;
struct dsa_switch_tree *dst = cpu_dp->dst;
struct dsa_port *dp;
int err;
err = dsa_master_lag_setup(lag_dev, cpu_dp, uinfo, extack);
if (err)
return err;
dsa_tree_for_each_user_port(dp, dst) {
if (dsa_port_to_master(dp) != master)
continue;
err = dsa_slave_change_master(dp->slave, lag_dev, extack);
if (err)
goto restore;
}
return 0;
restore:
dsa_tree_for_each_user_port_continue_reverse(dp, dst) {
if (dsa_port_to_master(dp) != lag_dev)
continue;
err = dsa_slave_change_master(dp->slave, master, NULL);
if (err) {
netdev_err(dp->slave,
"failed to restore master to %s: %pe\n",
master->name, ERR_PTR(err));
}
}
dsa_master_lag_teardown(lag_dev, master->dsa_ptr);
return err;
}
static void dsa_master_lag_leave(struct net_device *master,
struct net_device *lag_dev)
{
struct dsa_port *dp, *cpu_dp = lag_dev->dsa_ptr;
struct dsa_switch_tree *dst = cpu_dp->dst;
struct dsa_port *new_cpu_dp = NULL;
struct net_device *lower;
struct list_head *iter;
netdev_for_each_lower_dev(lag_dev, lower, iter) {
if (netdev_uses_dsa(lower)) {
new_cpu_dp = lower->dsa_ptr;
break;
}
}
if (new_cpu_dp) {
/* Update the CPU port of the user ports still under the LAG
* so that dsa_port_to_master() continues to work properly
*/
dsa_tree_for_each_user_port(dp, dst)
if (dsa_port_to_master(dp) == lag_dev)
dp->cpu_dp = new_cpu_dp;
/* Update the index of the virtual CPU port to match the lowest
* physical CPU port
*/
lag_dev->dsa_ptr = new_cpu_dp;
wmb();
} else {
/* If the LAG DSA master has no ports left, migrate back all
* user ports to the first physical CPU port
*/
dsa_tree_migrate_ports_from_lag_master(dst, lag_dev);
}
/* This DSA master has left its LAG in any case, so let
* the CPU port leave the hardware LAG as well
*/
dsa_master_lag_teardown(lag_dev, master->dsa_ptr);
}
static int dsa_master_changeupper(struct net_device *dev,
struct netdev_notifier_changeupper_info *info)
{
struct netlink_ext_ack *extack;
int err = NOTIFY_DONE;
if (!netdev_uses_dsa(dev))
return err;
extack = netdev_notifier_info_to_extack(&info->info);
if (netif_is_lag_master(info->upper_dev)) {
if (info->linking) {
err = dsa_master_lag_join(dev, info->upper_dev,
info->upper_info, extack);
err = notifier_from_errno(err);
} else {
dsa_master_lag_leave(dev, info->upper_dev);
err = NOTIFY_OK;
}
}
return err;
}
static int dsa_slave_netdevice_event(struct notifier_block *nb,
unsigned long event, void *ptr)
{
......@@ -2786,6 +3107,10 @@ static int dsa_slave_netdevice_event(struct notifier_block *nb,
if (notifier_to_errno(err))
return err;
err = dsa_lag_master_prechangelower_sanity_check(dev, info);
if (notifier_to_errno(err))
return err;
err = dsa_bridge_prechangelower_sanity_check(dev, info);
if (notifier_to_errno(err))
return err;
......@@ -2811,6 +3136,10 @@ static int dsa_slave_netdevice_event(struct notifier_block *nb,
if (notifier_to_errno(err))
return err;
err = dsa_master_changeupper(dev, ptr);
if (notifier_to_errno(err))
return err;
break;
}
case NETDEV_CHANGELOWERSTATE: {
......@@ -2818,12 +3147,21 @@ static int dsa_slave_netdevice_event(struct notifier_block *nb,
struct dsa_port *dp;
int err;
if (!dsa_slave_dev_check(dev))
break;
if (dsa_slave_dev_check(dev)) {
dp = dsa_slave_to_port(dev);
dp = dsa_slave_to_port(dev);
err = dsa_port_lag_change(dp, info->lower_state_info);
}
/* Mirror LAG port events on DSA masters that are in
* a LAG towards their respective switch CPU ports
*/
if (netdev_uses_dsa(dev)) {
dp = dev->dsa_ptr;
err = dsa_port_lag_change(dp, info->lower_state_info);
}
err = dsa_port_lag_change(dp, info->lower_state_info);
return notifier_from_errno(err);
}
case NETDEV_CHANGE:
......
......@@ -398,8 +398,15 @@ static int dsa_switch_host_fdb_add(struct dsa_switch *ds,
dsa_switch_for_each_port(dp, ds) {
if (dsa_port_host_address_match(dp, info->dp)) {
err = dsa_port_do_fdb_add(dp, info->addr, info->vid,
info->db);
if (dsa_port_is_cpu(dp) && info->dp->cpu_port_in_lag) {
err = dsa_switch_do_lag_fdb_add(ds, dp->lag,
info->addr,
info->vid,
info->db);
} else {
err = dsa_port_do_fdb_add(dp, info->addr,
info->vid, info->db);
}
if (err)
break;
}
......@@ -419,8 +426,15 @@ static int dsa_switch_host_fdb_del(struct dsa_switch *ds,
dsa_switch_for_each_port(dp, ds) {
if (dsa_port_host_address_match(dp, info->dp)) {
err = dsa_port_do_fdb_del(dp, info->addr, info->vid,
info->db);
if (dsa_port_is_cpu(dp) && info->dp->cpu_port_in_lag) {
err = dsa_switch_do_lag_fdb_del(ds, dp->lag,
info->addr,
info->vid,
info->db);
} else {
err = dsa_port_do_fdb_del(dp, info->addr,
info->vid, info->db);
}
if (err)
break;
}
......@@ -507,12 +521,12 @@ static int dsa_switch_lag_join(struct dsa_switch *ds,
{
if (info->dp->ds == ds && ds->ops->port_lag_join)
return ds->ops->port_lag_join(ds, info->dp->index, info->lag,
info->info);
info->info, info->extack);
if (info->dp->ds != ds && ds->ops->crosschip_lag_join)
return ds->ops->crosschip_lag_join(ds, info->dp->ds->index,
info->dp->index, info->lag,
info->info);
info->info, info->extack);
return -EOPNOTSUPP;
}
......
......@@ -330,7 +330,7 @@ static int dsa_tag_8021q_port_setup(struct dsa_switch *ds, int port)
if (!dsa_port_is_user(dp))
return 0;
master = dp->cpu_dp->master;
master = dsa_port_to_master(dp);
err = dsa_port_tag_8021q_vlan_add(dp, vid, false);
if (err) {
......@@ -359,7 +359,7 @@ static void dsa_tag_8021q_port_teardown(struct dsa_switch *ds, int port)
if (!dsa_port_is_user(dp))
return;
master = dp->cpu_dp->master;
master = dsa_port_to_master(dp);
dsa_port_tag_8021q_vlan_del(dp, vid, false);
......