Commit 82e94d41 authored by Jakub Kicinski

Merge branch 'net-bridge-multiple-spanning-trees'

Tobias Waldekranz says:

====================
net: bridge: Multiple Spanning Trees

The bridge has had per-VLAN STP support for a while now, since:

https://lore.kernel.org/netdev/20200124114022.10883-1-nikolay@cumulusnetworks.com/

The current implementation has some problems:

- The mapping from VLAN to STP state is fixed as 1:1, i.e. each VLAN
  is managed independently. This is awkward from an MSTP (802.1Q-2018,
  Clause 13.5) point of view, where the model is that multiple VLANs
  are grouped into MST instances.

  Because of the way that the standard is written, presumably, this is
  also reflected in hardware implementations. It is not uncommon for a
  switch to support the full 4k range of VIDs, but that the pool of
  MST instances is much smaller. Some examples:

  Marvell LinkStreet (mv88e6xxx): 4k VLANs, but only 64 MSTIs
  Marvell Prestera: 4k VLANs, but only 128 MSTIs
  Microchip SparX-5i: 4k VLANs, but only 128 MSTIs

- By default, the feature is enabled, and there is no way to disable
  it. This makes it hard to add offloading in a backwards compatible
  way, since any underlying switchdevs have no way to refuse the
  function if the hardware does not support it.

- The port-global STP state has precedence over per-VLAN states. In
  MSTP, as far as I understand it, all VLANs will use the common
  spanning tree (CST) by default - through traffic engineering you can
  then optimize your network to group subsets of VLANs to use
  different trees (MSTI). To my understanding, the way this is
  typically managed in silicon is roughly:

  Incoming packet:
  .----.----.--------------.----.-------------
  | DA | SA | 802.1Q VID=X | ET | Payload ...
  '----'----'--------------'----'-------------
                        |
                        '->|\     .----------------------------.
                           | +--> | VID | Members | ... | MSTI |
                   PVID -->|/     |-----|---------|-----|------|
                                  |   1 | 0001001 | ... |    0 |
                                  |   2 | 0001010 | ... |   10 |
                                  |   3 | 0001100 | ... |   10 |
                                  '----------------------------'
                                                             |
                               .-----------------------------'
                               |  .------------------------.
                               '->| MSTI | Fwding | Lrning |
                                  |------|--------|--------|
                                  |    0 | 111110 | 111110 |
                                  |   10 | 110111 | 110111 |
                                  '------------------------'

  What this is trying to show is that the STP state (whether MSTP is
  used, or ye olde STP) is always accessed via the VLAN table. If STP
  is running, all MSTI pointers in that table will reference the same
  index in the STP state table - if MSTP is running, some VLANs may
  point to other trees (like in this example; a minimal C sketch of
  this lookup follows the list below).

  The fact that in the Linux bridge, the global state (think: index 0
  in most hardware implementations) is supposed to override the
  per-VLAN state, is very awkward to offload. In effect, this means
  that when the global state changes to blocking, drivers will have to
  iterate over all MSTIs in use, and alter them all to match. This
  also means that you have to cache whether the hardware state is
  currently tracking the global state or the per-VLAN state. In the
  first case, you also have to cache the per-VLAN state so that you
  can restore it if the global state transitions back to forwarding.
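
To make the table walk sketched in the diagram above concrete, here is
a minimal C sketch of that lookup (the struct and function names are
invented for illustration, kernel types and the BIT() macro are
assumed; this is not code from any driver):

  struct vlan_entry {
          u16 members;    /* per-port membership bitmap */
          u8  msti;       /* index into the STP state table */
  };

  struct msti_entry {
          u16 fwding;     /* per-port "may forward" bitmap */
          u16 lrning;     /* per-port "may learn" bitmap */
  };

  /* Classify the frame to a VID (tag or PVID), then follow that
   * VLAN's MSTI pointer into the spanning tree state table.
   */
  static bool port_may_forward(const struct vlan_entry *vtu,
                               const struct msti_entry *stu,
                               u16 vid, int port)
  {
          const struct vlan_entry *v = &vtu[vid];

          return (v->members & BIT(port)) &&
                 (stu[v->msti].fwding & BIT(port));
  }

With classic STP every vlan_entry's msti points at the same table
index; with MSTP different groups of VLANs point at different trees.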

This series adds a new mst_enable bridge setting (as suggested by Nik)
that can only be changed when no VLANs are configured on the
bridge. Enabling this mode has the following effect:

- The port-global STP state is used to represent the CST (Common
  Spanning Tree) (1/15)

- Ingress STP filtering is deferred until the frame's VLAN has been
  resolved (1/15)

- The preexisting per-VLAN states can no longer be controlled directly
  (1/15). They are instead placed under the MST module's control,
  which is managed using a new netlink interface (described in 3/15;
  the attribute nesting is sketched after this list)

- VLANs can be mapped to MSTIs in an arbitrary M:N fashion, using a
  new global VLAN option (2/15)
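
For orientation, the per-port MST states are read and written through
an IFLA_BRIDGE_MST nest inside IFLA_AF_SPEC. The nesting added to
include/uapi/linux/if_bridge.h looks as follows (the ranges shown are
the netlink policy limits, not a captured message):

  IFLA_AF_SPEC
    IFLA_BRIDGE_MST
      IFLA_BRIDGE_MST_ENTRY
        IFLA_BRIDGE_MST_ENTRY_MSTI   (u16, 1..VLAN_N_VID-1; 0 is reserved for the CST)
        IFLA_BRIDGE_MST_ENTRY_STATE  (u8, BR_STATE_DISABLED..BR_STATE_BLOCKING)
      IFLA_BRIDGE_MST_ENTRY
        ...                          (one entry per MSTI whose state is changed)

States can only be set on bridge ports and only via RTM_SETLINK; the
VID-to-MSTI mapping itself is the BRIDGE_VLANDB_GOPTS_MSTI global VLAN
option mentioned above.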

Switchdev notifications are added so that a driver can track:
- MST enabled state
- VID to MSTI mappings
- MST port states

An offloading implementation is thus provided for mv88e6xxx.
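
A driver that wants to offload this hooks the new attributes in its
switchdev attr_set path. As a rough sketch (this is not the mv88e6xxx
code; the foo_hw_*() helpers are hypothetical placeholders and error
handling is elided):

  /* Hypothetical hardware accessors assumed to exist in the driver */
  int foo_hw_set_mst_enabled(struct net_device *dev, bool on);
  int foo_hw_map_vid_to_msti(struct net_device *dev, u16 vid, u16 msti);
  int foo_hw_set_mst_state(struct net_device *dev, u16 msti, u8 state);

  static int foo_port_attr_set(struct net_device *dev, const void *ctx,
                               const struct switchdev_attr *attr,
                               struct netlink_ext_ack *extack)
  {
          switch (attr->id) {
          case SWITCHDEV_ATTR_ID_BRIDGE_MST:
                  /* Accept or refuse the mode for the whole switch */
                  return foo_hw_set_mst_enabled(dev, attr->u.mst);
          case SWITCHDEV_ATTR_ID_VLAN_MSTI:
                  /* Point the VLAN's table entry at the given MSTI */
                  return foo_hw_map_vid_to_msti(dev, attr->u.vlan_msti.vid,
                                                attr->u.vlan_msti.msti);
          case SWITCHDEV_ATTR_ID_PORT_MST_STATE:
                  /* Program this port's state in one spanning tree */
                  return foo_hw_set_mst_state(dev, attr->u.mst_state.msti,
                                              attr->u.mst_state.state);
          default:
                  return -EOPNOTSUPP;
          }
  }
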
====================

Link: https://lore.kernel.org/r/20220316150857.2442916-1-tobias@waldekranz.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
parents 54744510 acaf4d2e
@@ -20,6 +20,7 @@
#define EDSA_HLEN 8
#define MV88E6XXX_N_FID 4096
#define MV88E6XXX_N_SID 64
#define MV88E6XXX_FID_STANDALONE 0
#define MV88E6XXX_FID_BRIDGED 1
@@ -130,6 +131,7 @@ struct mv88e6xxx_info {
unsigned int num_internal_phys;
unsigned int num_gpio;
unsigned int max_vid;
unsigned int max_sid;
unsigned int port_base_addr;
unsigned int phy_base_addr;
unsigned int global1_addr;
@@ -181,6 +183,12 @@ struct mv88e6xxx_vtu_entry {
bool valid;
bool policy;
u8 member[DSA_MAX_PORTS];
u8 state[DSA_MAX_PORTS]; /* Older silicon has no STU */
};
struct mv88e6xxx_stu_entry {
u8 sid;
bool valid;
u8 state[DSA_MAX_PORTS];
};
@@ -279,6 +287,7 @@ enum mv88e6xxx_region_id {
MV88E6XXX_REGION_GLOBAL2,
MV88E6XXX_REGION_ATU,
MV88E6XXX_REGION_VTU,
MV88E6XXX_REGION_STU,
MV88E6XXX_REGION_PVT,
_MV88E6XXX_REGION_MAX,
@@ -288,6 +297,16 @@ struct mv88e6xxx_region_priv {
enum mv88e6xxx_region_id id;
};
struct mv88e6xxx_mst {
struct list_head node;
refcount_t refcnt;
struct net_device *br;
u16 msti;
struct mv88e6xxx_stu_entry stu;
};
struct mv88e6xxx_chip {
const struct mv88e6xxx_info *info;
@@ -388,6 +407,9 @@ struct mv88e6xxx_chip {
/* devlink regions */
struct devlink_region *regions[_MV88E6XXX_REGION_MAX];
/* Bridge MST to SID mappings */
struct list_head msts;
};
struct mv88e6xxx_bus_ops {
@@ -602,6 +624,12 @@ struct mv88e6xxx_ops {
int (*vtu_loadpurge)(struct mv88e6xxx_chip *chip,
struct mv88e6xxx_vtu_entry *entry);
/* Spanning Tree Unit operations */
int (*stu_getnext)(struct mv88e6xxx_chip *chip,
struct mv88e6xxx_stu_entry *entry);
int (*stu_loadpurge)(struct mv88e6xxx_chip *chip,
struct mv88e6xxx_stu_entry *entry);
/* GPIO operations */
const struct mv88e6xxx_gpio_ops *gpio_ops;
@@ -700,6 +728,11 @@ struct mv88e6xxx_hw_stat {
int type;
};
static inline bool mv88e6xxx_has_stu(struct mv88e6xxx_chip *chip)
{
return chip->info->max_sid > 0;
}
static inline bool mv88e6xxx_has_pvt(struct mv88e6xxx_chip *chip)
{
return chip->info->pvt;
@@ -730,6 +763,11 @@ static inline unsigned int mv88e6xxx_max_vid(struct mv88e6xxx_chip *chip)
return chip->info->max_vid;
}
static inline unsigned int mv88e6xxx_max_sid(struct mv88e6xxx_chip *chip)
{
return chip->info->max_sid;
}
static inline u16 mv88e6xxx_port_mask(struct mv88e6xxx_chip *chip)
{
return GENMASK((s32)mv88e6xxx_num_ports(chip) - 1, 0);
......
@@ -503,6 +503,85 @@ static int mv88e6xxx_region_vtu_snapshot(struct devlink *dl,
return 0;
}
/**
* struct mv88e6xxx_devlink_stu_entry - Devlink STU entry
* @sid: Global1/3: SID, unknown filters and learning.
* @vid: Global1/6: Valid bit.
* @data: Global1/7-9: Membership data and priority override.
* @resvd: Reserved. In case we forgot something.
*
* The STU entry format varies between chipset generations. Peridot
* and Amethyst packs the STU data into Global1/7-8. Older silicon
* spreads the information across all three VTU data registers -
* inheriting the layout of even older hardware that had no STU at
* all. Since this is a low-level debug interface, copy all data
* verbatim and defer parsing to the consumer.
*/
struct mv88e6xxx_devlink_stu_entry {
u16 sid;
u16 vid;
u16 data[3];
u16 resvd;
};
static int mv88e6xxx_region_stu_snapshot(struct devlink *dl,
const struct devlink_region_ops *ops,
struct netlink_ext_ack *extack,
u8 **data)
{
struct mv88e6xxx_devlink_stu_entry *table, *entry;
struct dsa_switch *ds = dsa_devlink_to_ds(dl);
struct mv88e6xxx_chip *chip = ds->priv;
struct mv88e6xxx_stu_entry stu;
int err;
table = kcalloc(mv88e6xxx_max_sid(chip) + 1,
sizeof(struct mv88e6xxx_devlink_stu_entry),
GFP_KERNEL);
if (!table)
return -ENOMEM;
entry = table;
stu.sid = mv88e6xxx_max_sid(chip);
stu.valid = false;
mv88e6xxx_reg_lock(chip);
do {
err = mv88e6xxx_g1_stu_getnext(chip, &stu);
if (err)
break;
if (!stu.valid)
break;
err = err ? : mv88e6xxx_g1_read(chip, MV88E6352_G1_VTU_SID,
&entry->sid);
err = err ? : mv88e6xxx_g1_read(chip, MV88E6XXX_G1_VTU_VID,
&entry->vid);
err = err ? : mv88e6xxx_g1_read(chip, MV88E6XXX_G1_VTU_DATA1,
&entry->data[0]);
err = err ? : mv88e6xxx_g1_read(chip, MV88E6XXX_G1_VTU_DATA2,
&entry->data[1]);
err = err ? : mv88e6xxx_g1_read(chip, MV88E6XXX_G1_VTU_DATA3,
&entry->data[2]);
if (err)
break;
entry++;
} while (stu.sid < mv88e6xxx_max_sid(chip));
mv88e6xxx_reg_unlock(chip);
if (err) {
kfree(table);
return err;
}
*data = (u8 *)table;
return 0;
}
static int mv88e6xxx_region_pvt_snapshot(struct devlink *dl,
const struct devlink_region_ops *ops,
struct netlink_ext_ack *extack,
@@ -605,6 +684,12 @@ static struct devlink_region_ops mv88e6xxx_region_vtu_ops = {
.destructor = kfree,
};
static struct devlink_region_ops mv88e6xxx_region_stu_ops = {
.name = "stu",
.snapshot = mv88e6xxx_region_stu_snapshot,
.destructor = kfree,
};
static struct devlink_region_ops mv88e6xxx_region_pvt_ops = {
.name = "pvt",
.snapshot = mv88e6xxx_region_pvt_snapshot,
@@ -640,6 +725,11 @@ static struct mv88e6xxx_region mv88e6xxx_regions[] = {
.ops = &mv88e6xxx_region_vtu_ops
/* calculated at runtime */
},
[MV88E6XXX_REGION_STU] = {
.ops = &mv88e6xxx_region_stu_ops,
.cond = mv88e6xxx_has_stu,
/* calculated at runtime */
},
[MV88E6XXX_REGION_PVT] = {
.ops = &mv88e6xxx_region_pvt_ops,
.size = MV88E6XXX_MAX_PVT_ENTRIES * sizeof(u16),
@@ -706,6 +796,10 @@ int mv88e6xxx_setup_devlink_regions_global(struct dsa_switch *ds)
size = (mv88e6xxx_max_vid(chip) + 1) *
sizeof(struct mv88e6xxx_devlink_vtu_entry);
break;
case MV88E6XXX_REGION_STU:
size = (mv88e6xxx_max_sid(chip) + 1) *
sizeof(struct mv88e6xxx_devlink_stu_entry);
break;
}
region = dsa_devlink_region_create(ds, ops, 1, size);
......
@@ -348,6 +348,16 @@ int mv88e6390_g1_vtu_getnext(struct mv88e6xxx_chip *chip,
int mv88e6390_g1_vtu_loadpurge(struct mv88e6xxx_chip *chip,
struct mv88e6xxx_vtu_entry *entry);
int mv88e6xxx_g1_vtu_flush(struct mv88e6xxx_chip *chip);
int mv88e6xxx_g1_stu_getnext(struct mv88e6xxx_chip *chip,
struct mv88e6xxx_stu_entry *entry);
int mv88e6352_g1_stu_getnext(struct mv88e6xxx_chip *chip,
struct mv88e6xxx_stu_entry *entry);
int mv88e6352_g1_stu_loadpurge(struct mv88e6xxx_chip *chip,
struct mv88e6xxx_stu_entry *entry);
int mv88e6390_g1_stu_getnext(struct mv88e6xxx_chip *chip,
struct mv88e6xxx_stu_entry *entry);
int mv88e6390_g1_stu_loadpurge(struct mv88e6xxx_chip *chip,
struct mv88e6xxx_stu_entry *entry);
int mv88e6xxx_g1_vtu_prob_irq_setup(struct mv88e6xxx_chip *chip);
void mv88e6xxx_g1_vtu_prob_irq_free(struct mv88e6xxx_chip *chip);
int mv88e6xxx_g1_atu_get_next(struct mv88e6xxx_chip *chip, u16 fid);
......
@@ -119,6 +119,9 @@ int br_vlan_get_info(const struct net_device *dev, u16 vid,
struct bridge_vlan_info *p_vinfo);
int br_vlan_get_info_rcu(const struct net_device *dev, u16 vid,
struct bridge_vlan_info *p_vinfo);
bool br_mst_enabled(const struct net_device *dev);
int br_mst_get_info(const struct net_device *dev, u16 msti, unsigned long *vids);
int br_mst_get_state(const struct net_device *dev, u16 msti, u8 *state);
#else
static inline bool br_vlan_enabled(const struct net_device *dev)
{
@@ -151,6 +154,22 @@ static inline int br_vlan_get_info_rcu(const struct net_device *dev, u16 vid,
{
return -EINVAL;
}
static inline bool br_mst_enabled(const struct net_device *dev)
{
return false;
}
static inline int br_mst_get_info(const struct net_device *dev, u16 msti,
unsigned long *vids)
{
return -EINVAL;
}
static inline int br_mst_get_state(const struct net_device *dev, u16 msti,
u8 *state)
{
return -EINVAL;
}
#endif
#if IS_ENABLED(CONFIG_BRIDGE)
......
@@ -957,7 +957,10 @@ struct dsa_switch_ops {
struct dsa_bridge bridge);
void (*port_stp_state_set)(struct dsa_switch *ds, int port,
u8 state);
int (*port_mst_state_set)(struct dsa_switch *ds, int port,
const struct switchdev_mst_state *state);
void (*port_fast_age)(struct dsa_switch *ds, int port);
int (*port_vlan_fast_age)(struct dsa_switch *ds, int port, u16 vid);
int (*port_pre_bridge_flags)(struct dsa_switch *ds, int port,
struct switchdev_brport_flags flags,
struct netlink_ext_ack *extack);
@@ -976,6 +979,9 @@ struct dsa_switch_ops {
struct netlink_ext_ack *extack);
int (*port_vlan_del)(struct dsa_switch *ds, int port,
const struct switchdev_obj_port_vlan *vlan);
int (*vlan_msti_set)(struct dsa_switch *ds, struct dsa_bridge bridge,
const struct switchdev_vlan_msti *msti);
/*
* Forwarding database
*/
......
@@ -19,6 +19,7 @@
enum switchdev_attr_id {
SWITCHDEV_ATTR_ID_UNDEFINED,
SWITCHDEV_ATTR_ID_PORT_STP_STATE,
SWITCHDEV_ATTR_ID_PORT_MST_STATE,
SWITCHDEV_ATTR_ID_PORT_BRIDGE_FLAGS,
SWITCHDEV_ATTR_ID_PORT_PRE_BRIDGE_FLAGS,
SWITCHDEV_ATTR_ID_PORT_MROUTER,
@@ -27,7 +28,14 @@ enum switchdev_attr_id {
SWITCHDEV_ATTR_ID_BRIDGE_VLAN_PROTOCOL,
SWITCHDEV_ATTR_ID_BRIDGE_MC_DISABLED,
SWITCHDEV_ATTR_ID_BRIDGE_MROUTER,
SWITCHDEV_ATTR_ID_BRIDGE_MST,
SWITCHDEV_ATTR_ID_MRP_PORT_ROLE,
SWITCHDEV_ATTR_ID_VLAN_MSTI,
};
struct switchdev_mst_state {
u16 msti;
u8 state;
};
struct switchdev_brport_flags {
@@ -35,6 +43,11 @@ struct switchdev_brport_flags {
unsigned long mask;
};
struct switchdev_vlan_msti {
u16 vid;
u16 msti;
};
struct switchdev_attr {
struct net_device *orig_dev;
enum switchdev_attr_id id;
@@ -43,13 +56,16 @@ struct switchdev_attr {
void (*complete)(struct net_device *dev, int err, void *priv);
union {
u8 stp_state; /* PORT_STP_STATE */
struct switchdev_mst_state mst_state; /* PORT_MST_STATE */
struct switchdev_brport_flags brport_flags; /* PORT_BRIDGE_FLAGS */
bool mrouter; /* PORT_MROUTER */
clock_t ageing_time; /* BRIDGE_AGEING_TIME */
bool vlan_filtering; /* BRIDGE_VLAN_FILTERING */
u16 vlan_protocol; /* BRIDGE_VLAN_PROTOCOL */
bool mst; /* BRIDGE_MST */
bool mc_disabled; /* MC_DISABLED */
u8 mrp_port_role; /* MRP_PORT_ROLE */
struct switchdev_vlan_msti vlan_msti; /* VLAN_MSTI */
} u;
};
......
@@ -122,6 +122,7 @@ enum {
IFLA_BRIDGE_VLAN_TUNNEL_INFO,
IFLA_BRIDGE_MRP,
IFLA_BRIDGE_CFM,
IFLA_BRIDGE_MST,
__IFLA_BRIDGE_MAX,
};
#define IFLA_BRIDGE_MAX (__IFLA_BRIDGE_MAX - 1)
@@ -453,6 +454,21 @@ enum {
#define IFLA_BRIDGE_CFM_CC_PEER_STATUS_MAX (__IFLA_BRIDGE_CFM_CC_PEER_STATUS_MAX - 1)
enum {
IFLA_BRIDGE_MST_UNSPEC,
IFLA_BRIDGE_MST_ENTRY,
__IFLA_BRIDGE_MST_MAX,
};
#define IFLA_BRIDGE_MST_MAX (__IFLA_BRIDGE_MST_MAX - 1)
enum {
IFLA_BRIDGE_MST_ENTRY_UNSPEC,
IFLA_BRIDGE_MST_ENTRY_MSTI,
IFLA_BRIDGE_MST_ENTRY_STATE,
__IFLA_BRIDGE_MST_ENTRY_MAX,
};
#define IFLA_BRIDGE_MST_ENTRY_MAX (__IFLA_BRIDGE_MST_ENTRY_MAX - 1)
struct bridge_stp_xstats {
__u64 transition_blk;
__u64 transition_fwd;
@@ -564,6 +580,7 @@ enum {
BRIDGE_VLANDB_GOPTS_MCAST_QUERIER,
BRIDGE_VLANDB_GOPTS_MCAST_ROUTER_PORTS,
BRIDGE_VLANDB_GOPTS_MCAST_QUERIER_STATE,
BRIDGE_VLANDB_GOPTS_MSTI,
__BRIDGE_VLANDB_GOPTS_MAX
};
#define BRIDGE_VLANDB_GOPTS_MAX (__BRIDGE_VLANDB_GOPTS_MAX - 1)
@@ -759,6 +776,7 @@ struct br_mcast_stats {
enum br_boolopt_id {
BR_BOOLOPT_NO_LL_LEARN,
BR_BOOLOPT_MCAST_VLAN_SNOOPING,
BR_BOOLOPT_MST_ENABLE,
BR_BOOLOPT_MAX
};
......
@@ -817,6 +817,7 @@ enum {
#define RTEXT_FILTER_MRP (1 << 4)
#define RTEXT_FILTER_CFM_CONFIG (1 << 5)
#define RTEXT_FILTER_CFM_STATUS (1 << 6)
#define RTEXT_FILTER_MST (1 << 7)
/* End of information exported to user level */
......
@@ -20,7 +20,7 @@ obj-$(CONFIG_BRIDGE_NETFILTER) += br_netfilter.o
bridge-$(CONFIG_BRIDGE_IGMP_SNOOPING) += br_multicast.o br_mdb.o br_multicast_eht.o
bridge-$(CONFIG_BRIDGE_VLAN_FILTERING) += br_vlan.o br_vlan_tunnel.o br_vlan_options.o br_mst.o
bridge-$(CONFIG_NET_SWITCHDEV) += br_switchdev.o
......
@@ -265,6 +265,9 @@ int br_boolopt_toggle(struct net_bridge *br, enum br_boolopt_id opt, bool on,
case BR_BOOLOPT_MCAST_VLAN_SNOOPING:
err = br_multicast_toggle_vlan_snooping(br, on, extack);
break;
case BR_BOOLOPT_MST_ENABLE:
err = br_mst_set_enabled(br, on, extack);
break;
default:
/* shouldn't be called with unsupported options */
WARN_ON(1);
@@ -281,6 +284,8 @@ int br_boolopt_get(const struct net_bridge *br, enum br_boolopt_id opt)
return br_opt_get(br, BROPT_NO_LL_LEARN);
case BR_BOOLOPT_MCAST_VLAN_SNOOPING:
return br_opt_get(br, BROPT_MCAST_VLAN_SNOOPING_ENABLED);
case BR_BOOLOPT_MST_ENABLE:
return br_opt_get(br, BROPT_MST_ENABLED);
default:
/* shouldn't be called with unsupported options */
WARN_ON(1);
......
@@ -78,13 +78,22 @@ int br_handle_frame_finish(struct net *net, struct sock *sk, struct sk_buff *skb
u16 vid = 0;
u8 state;
if (!p)
goto drop;
br = p->br;
if (br_mst_is_enabled(br)) {
state = BR_STATE_FORWARDING;
} else {
if (p->state == BR_STATE_DISABLED)
goto drop;
state = p->state;
}
brmctx = &p->br->multicast_ctx;
pmctx = &p->multicast_ctx;
if (!br_allowed_ingress(p->br, nbp_vlan_group_rcu(p), skb, &vid,
&state, &vlan))
goto out;
@@ -370,9 +379,13 @@ static rx_handler_result_t br_handle_frame(struct sk_buff **pskb)
return RX_HANDLER_PASS;
forward:
if (br_mst_is_enabled(p->br))
goto defer_stp_filtering;
switch (p->state) {
case BR_STATE_FORWARDING:
case BR_STATE_LEARNING:
defer_stp_filtering:
if (ether_addr_equal(p->br->dev->dev_addr, dest))
skb->pkt_type = PACKET_HOST;
......
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Bridge Multiple Spanning Tree Support
*
* Authors:
* Tobias Waldekranz <tobias@waldekranz.com>
*/
#include <linux/kernel.h>
#include <net/switchdev.h>
#include "br_private.h"
DEFINE_STATIC_KEY_FALSE(br_mst_used);
bool br_mst_enabled(const struct net_device *dev)
{
if (!netif_is_bridge_master(dev))
return false;
return br_opt_get(netdev_priv(dev), BROPT_MST_ENABLED);
}
EXPORT_SYMBOL_GPL(br_mst_enabled);
int br_mst_get_info(const struct net_device *dev, u16 msti, unsigned long *vids)
{
const struct net_bridge_vlan_group *vg;
const struct net_bridge_vlan *v;
const struct net_bridge *br;
ASSERT_RTNL();
if (!netif_is_bridge_master(dev))
return -EINVAL;
br = netdev_priv(dev);
if (!br_opt_get(br, BROPT_MST_ENABLED))
return -EINVAL;
vg = br_vlan_group(br);
list_for_each_entry(v, &vg->vlan_list, vlist) {
if (v->msti == msti)
__set_bit(v->vid, vids);
}
return 0;
}
EXPORT_SYMBOL_GPL(br_mst_get_info);
int br_mst_get_state(const struct net_device *dev, u16 msti, u8 *state)
{
const struct net_bridge_port *p = NULL;
const struct net_bridge_vlan_group *vg;
const struct net_bridge_vlan *v;
ASSERT_RTNL();
p = br_port_get_check_rtnl(dev);
if (!p || !br_opt_get(p->br, BROPT_MST_ENABLED))
return -EINVAL;
vg = nbp_vlan_group(p);
list_for_each_entry(v, &vg->vlan_list, vlist) {
if (v->brvlan->msti == msti) {
*state = v->state;
return 0;
}
}
return -ENOENT;
}
EXPORT_SYMBOL_GPL(br_mst_get_state);
static void br_mst_vlan_set_state(struct net_bridge_port *p, struct net_bridge_vlan *v,
u8 state)
{
struct net_bridge_vlan_group *vg = nbp_vlan_group(p);
if (v->state == state)
return;
br_vlan_set_state(v, state);
if (v->vid == vg->pvid)
br_vlan_set_pvid_state(vg, state);
}
int br_mst_set_state(struct net_bridge_port *p, u16 msti, u8 state,
struct netlink_ext_ack *extack)
{
struct switchdev_attr attr = {
.id = SWITCHDEV_ATTR_ID_PORT_MST_STATE,
.orig_dev = p->dev,
.u.mst_state = {
.msti = msti,
.state = state,
},
};
struct net_bridge_vlan_group *vg;
struct net_bridge_vlan *v;
int err;
vg = nbp_vlan_group(p);
if (!vg)
return 0;
/* MSTI 0 (CST) state changes are notified via the regular
* SWITCHDEV_ATTR_ID_PORT_STP_STATE.
*/
if (msti) {
err = switchdev_port_attr_set(p->dev, &attr, extack);
if (err && err != -EOPNOTSUPP)
return err;
}
list_for_each_entry(v, &vg->vlan_list, vlist) {
if (v->brvlan->msti != msti)
continue;
br_mst_vlan_set_state(p, v, state);
}
return 0;
}
static void br_mst_vlan_sync_state(struct net_bridge_vlan *pv, u16 msti)
{
struct net_bridge_vlan_group *vg = nbp_vlan_group(pv->port);
struct net_bridge_vlan *v;
list_for_each_entry(v, &vg->vlan_list, vlist) {
/* If this port already has a defined state in this
* MSTI (through some other VLAN membership), inherit
* it.
*/
if (v != pv && v->brvlan->msti == msti) {
br_mst_vlan_set_state(pv->port, pv, v->state);
return;
}
}
/* Otherwise, start out in a new MSTI with all ports disabled. */
return br_mst_vlan_set_state(pv->port, pv, BR_STATE_DISABLED);
}
int br_mst_vlan_set_msti(struct net_bridge_vlan *mv, u16 msti)
{
struct switchdev_attr attr = {
.id = SWITCHDEV_ATTR_ID_VLAN_MSTI,
.orig_dev = mv->br->dev,
.u.vlan_msti = {
.vid = mv->vid,
.msti = msti,
},
};
struct net_bridge_vlan_group *vg;
struct net_bridge_vlan *pv;
struct net_bridge_port *p;
int err;
if (mv->msti == msti)
return 0;
err = switchdev_port_attr_set(mv->br->dev, &attr, NULL);
if (err && err != -EOPNOTSUPP)
return err;
mv->msti = msti;
list_for_each_entry(p, &mv->br->port_list, list) {
vg = nbp_vlan_group(p);
pv = br_vlan_find(vg, mv->vid);
if (pv)
br_mst_vlan_sync_state(pv, msti);
}
return 0;
}
void br_mst_vlan_init_state(struct net_bridge_vlan *v)
{
/* VLANs always start out in MSTI 0 (CST) */
v->msti = 0;
if (br_vlan_is_master(v))
v->state = BR_STATE_FORWARDING;
else
v->state = v->port->state;
}
int br_mst_set_enabled(struct net_bridge *br, bool on,
struct netlink_ext_ack *extack)
{
struct switchdev_attr attr = {
.id = SWITCHDEV_ATTR_ID_BRIDGE_MST,
.orig_dev = br->dev,
.u.mst = on,
};
struct net_bridge_vlan_group *vg;
struct net_bridge_port *p;
int err;
list_for_each_entry(p, &br->port_list, list) {
vg = nbp_vlan_group(p);
if (!vg->num_vlans)
continue;
NL_SET_ERR_MSG(extack,
"MST mode can't be changed while VLANs exist");
return -EBUSY;
}
if (br_opt_get(br, BROPT_MST_ENABLED) == on)
return 0;
err = switchdev_port_attr_set(br->dev, &attr, extack);
if (err && err != -EOPNOTSUPP)
return err;
if (on)
static_branch_enable(&br_mst_used);
else
static_branch_disable(&br_mst_used);
br_opt_toggle(br, BROPT_MST_ENABLED, on);
return 0;
}
size_t br_mst_info_size(const struct net_bridge_vlan_group *vg)
{
DECLARE_BITMAP(seen, VLAN_N_VID) = { 0 };
const struct net_bridge_vlan *v;
size_t sz;
/* IFLA_BRIDGE_MST */
sz = nla_total_size(0);
list_for_each_entry_rcu(v, &vg->vlan_list, vlist) {
if (test_bit(v->brvlan->msti, seen))
continue;
/* IFLA_BRIDGE_MST_ENTRY */
sz += nla_total_size(0) +
/* IFLA_BRIDGE_MST_ENTRY_MSTI */
nla_total_size(sizeof(u16)) +
/* IFLA_BRIDGE_MST_ENTRY_STATE */
nla_total_size(sizeof(u8));
__set_bit(v->brvlan->msti, seen);
}
return sz;
}
int br_mst_fill_info(struct sk_buff *skb,
const struct net_bridge_vlan_group *vg)
{
DECLARE_BITMAP(seen, VLAN_N_VID) = { 0 };
const struct net_bridge_vlan *v;
struct nlattr *nest;
int err = 0;
list_for_each_entry(v, &vg->vlan_list, vlist) {
if (test_bit(v->brvlan->msti, seen))
continue;
nest = nla_nest_start_noflag(skb, IFLA_BRIDGE_MST_ENTRY);
if (!nest ||
nla_put_u16(skb, IFLA_BRIDGE_MST_ENTRY_MSTI, v->brvlan->msti) ||
nla_put_u8(skb, IFLA_BRIDGE_MST_ENTRY_STATE, v->state)) {
err = -EMSGSIZE;
break;
}
nla_nest_end(skb, nest);
__set_bit(v->brvlan->msti, seen);
}
return err;
}
static const struct nla_policy br_mst_nl_policy[IFLA_BRIDGE_MST_ENTRY_MAX + 1] = {
[IFLA_BRIDGE_MST_ENTRY_MSTI] = NLA_POLICY_RANGE(NLA_U16,
1, /* 0 reserved for CST */
VLAN_N_VID - 1),
[IFLA_BRIDGE_MST_ENTRY_STATE] = NLA_POLICY_RANGE(NLA_U8,
BR_STATE_DISABLED,
BR_STATE_BLOCKING),
};
static int br_mst_process_one(struct net_bridge_port *p,
const struct nlattr *attr,
struct netlink_ext_ack *extack)
{
struct nlattr *tb[IFLA_BRIDGE_MST_ENTRY_MAX + 1];
u16 msti;
u8 state;
int err;
err = nla_parse_nested(tb, IFLA_BRIDGE_MST_ENTRY_MAX, attr,
br_mst_nl_policy, extack);
if (err)
return err;
if (!tb[IFLA_BRIDGE_MST_ENTRY_MSTI]) {
NL_SET_ERR_MSG_MOD(extack, "MSTI not specified");
return -EINVAL;
}
if (!tb[IFLA_BRIDGE_MST_ENTRY_STATE]) {
NL_SET_ERR_MSG_MOD(extack, "State not specified");
return -EINVAL;
}
msti = nla_get_u16(tb[IFLA_BRIDGE_MST_ENTRY_MSTI]);
state = nla_get_u8(tb[IFLA_BRIDGE_MST_ENTRY_STATE]);
return br_mst_set_state(p, msti, state, extack);
}
int br_mst_process(struct net_bridge_port *p, const struct nlattr *mst_attr,
struct netlink_ext_ack *extack)
{
struct nlattr *attr;
int err, msts = 0;
int rem;
if (!br_opt_get(p->br, BROPT_MST_ENABLED)) {
NL_SET_ERR_MSG_MOD(extack, "Can't modify MST state when MST is disabled");
return -EBUSY;
}
nla_for_each_nested(attr, mst_attr, rem) {
switch (nla_type(attr)) {
case IFLA_BRIDGE_MST_ENTRY:
err = br_mst_process_one(p, attr, extack);
break;
default:
continue;
}
msts++;
if (err)
break;
}
if (!msts) {
NL_SET_ERR_MSG_MOD(extack, "Found no MST entries to process");
err = -EINVAL;
}
return err;
}
@@ -119,6 +119,9 @@ static size_t br_get_link_af_size_filtered(const struct net_device *dev,
/* Each VLAN is returned in bridge_vlan_info along with flags */
vinfo_sz += num_vlan_infos * nla_total_size(sizeof(struct bridge_vlan_info));
if (filter_mask & RTEXT_FILTER_MST)
vinfo_sz += br_mst_info_size(vg);
if (!(filter_mask & RTEXT_FILTER_CFM_STATUS))
return vinfo_sz;
@@ -485,7 +488,8 @@ static int br_fill_ifinfo(struct sk_buff *skb,
RTEXT_FILTER_BRVLAN_COMPRESSED |
RTEXT_FILTER_MRP |
RTEXT_FILTER_CFM_CONFIG |
RTEXT_FILTER_CFM_STATUS |
RTEXT_FILTER_MST)) {
af = nla_nest_start_noflag(skb, IFLA_AF_SPEC);
if (!af)
goto nla_put_failure;
@@ -564,7 +568,28 @@ static int br_fill_ifinfo(struct sk_buff *skb,
nla_nest_end(skb, cfm_nest);
}
if ((filter_mask & RTEXT_FILTER_MST) &&
br_opt_get(br, BROPT_MST_ENABLED) && port) {
const struct net_bridge_vlan_group *vg = nbp_vlan_group(port);
struct nlattr *mst_nest;
int err;
if (!vg || !vg->num_vlans)
goto done;
mst_nest = nla_nest_start(skb, IFLA_BRIDGE_MST);
if (!mst_nest)
goto nla_put_failure;
err = br_mst_fill_info(skb, vg);
if (err)
goto nla_put_failure;
nla_nest_end(skb, mst_nest);
}
done:
if (af)
nla_nest_end(skb, af);
nlmsg_end(skb, nlh);
@@ -803,6 +828,23 @@ static int br_afspec(struct net_bridge *br,
if (err)
return err;
break;
case IFLA_BRIDGE_MST:
if (!p) {
NL_SET_ERR_MSG(extack,
"MST states can only be set on bridge ports");
return -EINVAL;
}
if (cmd != RTM_SETLINK) {
NL_SET_ERR_MSG(extack,
"MST states can only be set through RTM_SETLINK");
return -EINVAL;
}
err = br_mst_process(p, attr, extack);
if (err)
return err;
break;
}
}
......
@@ -178,6 +178,7 @@ enum {
* @br_mcast_ctx: if MASTER flag set, this is the global vlan multicast context
* @port_mcast_ctx: if MASTER flag unset, this is the per-port/vlan multicast
* context
* @msti: if MASTER flag set, this holds the VLANs MST instance
* @vlist: sorted list of VLAN entries
* @rcu: used for entry destruction
*
@@ -210,6 +211,8 @@ struct net_bridge_vlan {
struct net_bridge_mcast_port port_mcast_ctx;
};
u16 msti;
struct list_head vlist;
struct rcu_head rcu;
@@ -445,6 +448,7 @@ enum net_bridge_opts {
BROPT_NO_LL_LEARN,
BROPT_VLAN_BRIDGE_BINDING,
BROPT_MCAST_VLAN_SNOOPING_ENABLED,
BROPT_MST_ENABLED,
};
struct net_bridge {
@@ -1765,6 +1769,63 @@ static inline bool br_vlan_state_allowed(u8 state, bool learn_allow)
}
#endif
/* br_mst.c */
#ifdef CONFIG_BRIDGE_VLAN_FILTERING
DECLARE_STATIC_KEY_FALSE(br_mst_used);
static inline bool br_mst_is_enabled(struct net_bridge *br)
{
return static_branch_unlikely(&br_mst_used) &&
br_opt_get(br, BROPT_MST_ENABLED);
}
int br_mst_set_state(struct net_bridge_port *p, u16 msti, u8 state,
struct netlink_ext_ack *extack);
int br_mst_vlan_set_msti(struct net_bridge_vlan *v, u16 msti);
void br_mst_vlan_init_state(struct net_bridge_vlan *v);
int br_mst_set_enabled(struct net_bridge *br, bool on,
struct netlink_ext_ack *extack);
size_t br_mst_info_size(const struct net_bridge_vlan_group *vg);
int br_mst_fill_info(struct sk_buff *skb,
const struct net_bridge_vlan_group *vg);
int br_mst_process(struct net_bridge_port *p, const struct nlattr *mst_attr,
struct netlink_ext_ack *extack);
#else
static inline bool br_mst_is_enabled(struct net_bridge *br)
{
return false;
}
static inline int br_mst_set_state(struct net_bridge_port *p, u16 msti,
u8 state, struct netlink_ext_ack *extack)
{
return -EOPNOTSUPP;
}
static inline int br_mst_set_enabled(struct net_bridge *br, bool on,
struct netlink_ext_ack *extack)
{
return -EOPNOTSUPP;
}
static inline size_t br_mst_info_size(const struct net_bridge_vlan_group *vg)
{
return 0;
}
static inline int br_mst_fill_info(struct sk_buff *skb,
const struct net_bridge_vlan_group *vg)
{
return -EOPNOTSUPP;
}
static inline int br_mst_process(struct net_bridge_port *p,
const struct nlattr *mst_attr,
struct netlink_ext_ack *extack)
{
return -EOPNOTSUPP;
}
#endif
struct nf_br_ops {
int (*br_dev_xmit_hook)(struct sk_buff *skb);
};
......
@@ -43,6 +43,12 @@ void br_set_state(struct net_bridge_port *p, unsigned int state)
return;
p->state = state;
if (br_opt_get(p->br, BROPT_MST_ENABLED)) {
err = br_mst_set_state(p, 0, state, NULL);
if (err)
br_warn(p->br, "error setting MST state on port %u(%s)\n",
p->port_no, netdev_name(p->dev));
}
err = switchdev_port_attr_set(p->dev, &attr, NULL);
if (err && err != -EOPNOTSUPP)
br_warn(p->br, "error setting offload STP state on port %u(%s)\n",
......
@@ -331,6 +331,46 @@ br_switchdev_fdb_replay(const struct net_device *br_dev, const void *ctx,
return err;
}
static int br_switchdev_vlan_attr_replay(struct net_device *br_dev,
const void *ctx,
struct notifier_block *nb,
struct netlink_ext_ack *extack)
{
struct switchdev_notifier_port_attr_info attr_info = {
.info = {
.dev = br_dev,
.extack = extack,
.ctx = ctx,
},
};
struct net_bridge *br = netdev_priv(br_dev);
struct net_bridge_vlan_group *vg;
struct switchdev_attr attr;
struct net_bridge_vlan *v;
int err;
attr_info.attr = &attr;
attr.orig_dev = br_dev;
vg = br_vlan_group(br);
list_for_each_entry(v, &vg->vlan_list, vlist) {
if (v->msti) {
attr.id = SWITCHDEV_ATTR_ID_VLAN_MSTI;
attr.u.vlan_msti.vid = v->vid;
attr.u.vlan_msti.msti = v->msti;
err = nb->notifier_call(nb, SWITCHDEV_PORT_ATTR_SET,
&attr_info);
err = notifier_to_errno(err);
if (err)
return err;
}
}
return 0;
}
static int
br_switchdev_vlan_replay_one(struct notifier_block *nb,
struct net_device *dev,
@@ -425,6 +465,12 @@ static int br_switchdev_vlan_replay(struct net_device *br_dev,
return err;
}
if (adding) {
err = br_switchdev_vlan_attr_replay(br_dev, ctx, nb, extack);
if (err)
return err;
}
return 0;
}
......
@@ -226,6 +226,24 @@ static void nbp_vlan_rcu_free(struct rcu_head *rcu)
kfree(v);
}
static void br_vlan_init_state(struct net_bridge_vlan *v)
{
struct net_bridge *br;
if (br_vlan_is_master(v))
br = v->br;
else
br = v->port->br;
if (br_opt_get(br, BROPT_MST_ENABLED)) {
br_mst_vlan_init_state(v);
return;
}
v->state = BR_STATE_FORWARDING;
v->msti = 0;
}
/* This is the shared VLAN add function which works for both ports and bridge
* devices. There are four possible calls to this function in terms of the
* vlan entry type:
@@ -322,7 +340,7 @@ static int __vlan_add(struct net_bridge_vlan *v, u16 flags,
}
/* set the state before publishing */
br_vlan_init_state(v);
err = rhashtable_lookup_insert_fast(&vg->vlan_hash, &v->vnode,
br_vlan_rht_params);
......
@@ -99,6 +99,11 @@ static int br_vlan_modify_state(struct net_bridge_vlan_group *vg,
return -EBUSY;
}
if (br_opt_get(br, BROPT_MST_ENABLED)) {
NL_SET_ERR_MSG_MOD(extack, "Can't modify vlan state directly when MST is enabled");
return -EBUSY;
}
if (v->state == state)
return 0;
@@ -291,6 +296,7 @@ bool br_vlan_global_opts_can_enter_range(const struct net_bridge_vlan *v_curr,
const struct net_bridge_vlan *r_end)
{
return v_curr->vid - r_end->vid == 1 &&
v_curr->msti == r_end->msti &&
((v_curr->priv_flags ^ r_end->priv_flags) &
BR_VLFLAG_GLOBAL_MCAST_ENABLED) == 0 &&
br_multicast_ctx_options_equal(&v_curr->br_mcast_ctx,
@@ -379,6 +385,9 @@ bool br_vlan_global_opts_fill(struct sk_buff *skb, u16 vid, u16 vid_range,
#endif
#endif
if (nla_put_u16(skb, BRIDGE_VLANDB_GOPTS_MSTI, v_opts->msti))
goto out_err;
nla_nest_end(skb, nest);
return true;
@@ -410,6 +419,7 @@ static size_t rtnl_vlan_global_opts_nlmsg_size(const struct net_bridge_vlan *v)
+ nla_total_size(0) /* BRIDGE_VLANDB_GOPTS_MCAST_ROUTER_PORTS */
+ br_rports_size(&v->br_mcast_ctx) /* BRIDGE_VLANDB_GOPTS_MCAST_ROUTER_PORTS */
#endif
+ nla_total_size(sizeof(u16)) /* BRIDGE_VLANDB_GOPTS_MSTI */
+ nla_total_size(sizeof(u16)); /* BRIDGE_VLANDB_GOPTS_RANGE */
}
@@ -559,6 +569,15 @@ static int br_vlan_process_global_one_opts(const struct net_bridge *br,
}
#endif
#endif
if (tb[BRIDGE_VLANDB_GOPTS_MSTI]) {
u16 msti;
msti = nla_get_u16(tb[BRIDGE_VLANDB_GOPTS_MSTI]);
err = br_mst_vlan_set_msti(v, msti);
if (err)
return err;
*changed = true;
}
return 0;
}
@@ -578,6 +597,7 @@ static const struct nla_policy br_vlan_db_gpol[BRIDGE_VLANDB_GOPTS_MAX + 1] = {
[BRIDGE_VLANDB_GOPTS_MCAST_QUERIER_INTVL] = { .type = NLA_U64 },
[BRIDGE_VLANDB_GOPTS_MCAST_STARTUP_QUERY_INTVL] = { .type = NLA_U64 },
[BRIDGE_VLANDB_GOPTS_MCAST_QUERY_RESPONSE_INTVL] = { .type = NLA_U64 },
[BRIDGE_VLANDB_GOPTS_MSTI] = NLA_POLICY_MAX(NLA_U16, VLAN_N_VID - 1),
};
int br_vlan_rtm_process_global_options(struct net_device *dev,
......
@@ -215,6 +215,9 @@ static inline struct net_device *dsa_master_find_slave(struct net_device *dev,
void dsa_port_set_tag_protocol(struct dsa_port *cpu_dp,
const struct dsa_device_ops *tag_ops);
int dsa_port_set_state(struct dsa_port *dp, u8 state, bool do_fast_age);
int dsa_port_set_mst_state(struct dsa_port *dp,
const struct switchdev_mst_state *state,
struct netlink_ext_ack *extack);
int dsa_port_enable_rt(struct dsa_port *dp, struct phy_device *phy);
int dsa_port_enable(struct dsa_port *dp, struct phy_device *phy);
void dsa_port_disable_rt(struct dsa_port *dp);
@@ -234,6 +237,10 @@ int dsa_port_vlan_filtering(struct dsa_port *dp, bool vlan_filtering,
struct netlink_ext_ack *extack);
bool dsa_port_skip_vlan_configuration(struct dsa_port *dp);
int dsa_port_ageing_time(struct dsa_port *dp, clock_t ageing_clock);
int dsa_port_mst_enable(struct dsa_port *dp, bool on,
struct netlink_ext_ack *extack);
int dsa_port_vlan_msti(struct dsa_port *dp,
const struct switchdev_vlan_msti *msti);
int dsa_port_mtu_change(struct dsa_port *dp, int new_mtu,
bool targeted_match);
int dsa_port_fdb_add(struct dsa_port *dp, const unsigned char *addr,
......
@@ -30,12 +30,11 @@ static int dsa_port_notify(const struct dsa_port *dp, unsigned long e, void *v)
return dsa_tree_notify(dp->ds->dst, e, v);
}
static void dsa_port_notify_bridge_fdb_flush(const struct dsa_port *dp, u16 vid)
{
struct net_device *brport_dev = dsa_port_to_bridge_port(dp);
struct switchdev_notifier_fdb_info info = {
.vid = vid,
};
/* When the port becomes standalone it has already left the bridge.
@@ -57,7 +56,42 @@ static void dsa_port_fast_age(const struct dsa_port *dp)
ds->ops->port_fast_age(ds, dp->index);
/* flush all VLANs */
dsa_port_notify_bridge_fdb_flush(dp, 0);
}
static int dsa_port_vlan_fast_age(const struct dsa_port *dp, u16 vid)
{
struct dsa_switch *ds = dp->ds;
int err;
if (!ds->ops->port_vlan_fast_age)
return -EOPNOTSUPP;
err = ds->ops->port_vlan_fast_age(ds, dp->index, vid);
if (!err)
dsa_port_notify_bridge_fdb_flush(dp, vid);
return err;
}
static int dsa_port_msti_fast_age(const struct dsa_port *dp, u16 msti)
{
DECLARE_BITMAP(vids, VLAN_N_VID) = { 0 };
int err, vid;
err = br_mst_get_info(dsa_port_bridge_dev_get(dp), msti, vids);
if (err)
return err;
for_each_set_bit(vid, vids, VLAN_N_VID) {
err = dsa_port_vlan_fast_age(dp, vid);
if (err)
return err;
}
return 0;
}
static bool dsa_port_can_configure_learning(struct dsa_port *dp)
@@ -118,6 +152,42 @@ static void dsa_port_set_state_now(struct dsa_port *dp, u8 state,
pr_err("DSA: failed to set STP state %u (%d)\n", state, err);
}
int dsa_port_set_mst_state(struct dsa_port *dp,
const struct switchdev_mst_state *state,
struct netlink_ext_ack *extack)
{
struct dsa_switch *ds = dp->ds;
u8 prev_state;
int err;
if (!ds->ops->port_mst_state_set)
return -EOPNOTSUPP;
err = br_mst_get_state(dsa_port_to_bridge_port(dp), state->msti,
&prev_state);
if (err)
return err;
err = ds->ops->port_mst_state_set(ds, dp->index, state);
if (err)
return err;
if (!(dp->learning &&
(prev_state == BR_STATE_LEARNING ||
prev_state == BR_STATE_FORWARDING) &&
(state->state == BR_STATE_DISABLED ||
state->state == BR_STATE_BLOCKING ||
state->state == BR_STATE_LISTENING)))
return 0;
err = dsa_port_msti_fast_age(dp, state->msti);
if (err)
NL_SET_ERR_MSG_MOD(extack,
"Unable to flush associated VLANs");
return 0;
}
int dsa_port_enable_rt(struct dsa_port *dp, struct phy_device *phy)
{
struct dsa_switch *ds = dp->ds;
@@ -321,6 +391,16 @@ static void dsa_port_bridge_destroy(struct dsa_port *dp,
kfree(bridge);
}
static bool dsa_port_supports_mst(struct dsa_port *dp)
{
struct dsa_switch *ds = dp->ds;
return ds->ops->vlan_msti_set &&
ds->ops->port_mst_state_set &&
ds->ops->port_vlan_fast_age &&
dsa_port_can_configure_learning(dp);
}
int dsa_port_bridge_join(struct dsa_port *dp, struct net_device *br,
struct netlink_ext_ack *extack)
{
@@ -334,6 +414,9 @@ int dsa_port_bridge_join(struct dsa_port *dp, struct net_device *br,
struct net_device *brport_dev;
int err;
if (br_mst_enabled(br) && !dsa_port_supports_mst(dp))
return -EOPNOTSUPP;
/* Here the interface is already bridged. Reflect the current
* configuration so that drivers can program their chips accordingly.
*/
@@ -735,6 +818,17 @@ int dsa_port_ageing_time(struct dsa_port *dp, clock_t ageing_clock)
return 0;
}
int dsa_port_mst_enable(struct dsa_port *dp, bool on,
struct netlink_ext_ack *extack)
{
if (on && !dsa_port_supports_mst(dp)) {
NL_SET_ERR_MSG_MOD(extack, "Hardware does not support MST");
return -EINVAL;
}
return 0;
}
int dsa_port_pre_bridge_flags(const struct dsa_port *dp,
struct switchdev_brport_flags flags,
struct netlink_ext_ack *extack)
@@ -778,6 +872,17 @@ int dsa_port_bridge_flags(struct dsa_port *dp,
return 0;
}
int dsa_port_vlan_msti(struct dsa_port *dp,
const struct switchdev_vlan_msti *msti)
{
struct dsa_switch *ds = dp->ds;
if (!ds->ops->vlan_msti_set)
return -EOPNOTSUPP;
return ds->ops->vlan_msti_set(ds, *dp->bridge, msti);
}
int dsa_port_mtu_change(struct dsa_port *dp, int new_mtu,
bool targeted_match)
{
......
@@ -451,6 +451,12 @@ static int dsa_slave_port_attr_set(struct net_device *dev, const void *ctx,
ret = dsa_port_set_state(dp, attr->u.stp_state, true);
break;
case SWITCHDEV_ATTR_ID_PORT_MST_STATE:
if (!dsa_port_offloads_bridge_port(dp, attr->orig_dev))
return -EOPNOTSUPP;
ret = dsa_port_set_mst_state(dp, &attr->u.mst_state, extack);
break;
case SWITCHDEV_ATTR_ID_BRIDGE_VLAN_FILTERING:
if (!dsa_port_offloads_bridge_dev(dp, attr->orig_dev))
return -EOPNOTSUPP;
@@ -464,6 +470,12 @@ static int dsa_slave_port_attr_set(struct net_device *dev, const void *ctx,
ret = dsa_port_ageing_time(dp, attr->u.ageing_time);
break;
case SWITCHDEV_ATTR_ID_BRIDGE_MST:
if (!dsa_port_offloads_bridge_dev(dp, attr->orig_dev))
return -EOPNOTSUPP;
ret = dsa_port_mst_enable(dp, attr->u.mst, extack);
break;
case SWITCHDEV_ATTR_ID_PORT_PRE_BRIDGE_FLAGS:
if (!dsa_port_offloads_bridge_port(dp, attr->orig_dev))
return -EOPNOTSUPP;
@@ -477,6 +489,12 @@ static int dsa_slave_port_attr_set(struct net_device *dev, const void *ctx,
ret = dsa_port_bridge_flags(dp, attr->u.brport_flags, extack);
break;
case SWITCHDEV_ATTR_ID_VLAN_MSTI:
if (!dsa_port_offloads_bridge_dev(dp, attr->orig_dev))
return -EOPNOTSUPP;
ret = dsa_port_vlan_msti(dp, &attr->u.vlan_msti);
break;
default:
ret = -EOPNOTSUPP;
break;
......