Commit 08f329fc authored by David S. Miller

Merge branch 'tag_8021q-cross-chip'

Vladimir Oltean says:

====================
Proper cross-chip support for tag_8021q

The cross-chip bridging support for tag_8021q/sja1105 introduced here:
https://patchwork.ozlabs.org/project/netdev/cover/20200510163743.18032-1-olteanv@gmail.com/

took some shortcuts and is not reusable in any topology other than
the one it was written for: disjoint DSA trees. A diagram of this
topology can be seen here:
https://patchwork.ozlabs.org/project/netdev/patch/20200510163743.18032-3-olteanv@gmail.com/

However there are sja1105 switches on other boards using other
topologies, most notably:

- Daisy chained:
                                             |
    sw0p0     sw0p1     sw0p2     sw0p3     sw0p4
 [  user ] [  user ] [  user ] [  dsa  ] [  cpu  ]
                                   |
                                   +---------+
                                             |
    sw1p0     sw1p1     sw1p2     sw1p3     sw1p4
 [  user ] [  user ] [  user ] [  dsa  ] [  dsa  ]
                                   |
                                   +---------+
                                             |
    sw2p0     sw2p1     sw2p2     sw2p3     sw2p4
 [  user ] [  user ] [  user ] [  user ] [  dsa  ]

- "H" topology:

         eth0                                                     eth1
          |                                                        |
       CPU port                                                CPU port
          |                        DSA link                        |
 sw0p0  sw0p1  sw0p2  sw0p3  sw0p4 -------- sw1p4  sw1p3  sw1p2  sw1p1  sw1p0
   |             |      |                            |      |             |
 user          user   user                         user   user          user
 port          port   port                         port   port          port

In fact, the current code for tag_8021q cross-chip links works for
neither of these 2 classes of topologies.

The main reasons are:
(a) The sja1105 driver does not treat DSA links. In the "disjoint trees"
    topology, the routing port towards any other switch is also the CPU
    port, and that was already configured so it already worked.
    This series does not deal with enabling DSA links in the sja1105
    driver, that is a fairly trivial task that will be dealt with
    separately.
(b) The tag_8021q code for cross-chip links assumes that any 2 switches
    between which cross-chip forwarding needs to be enabled (i.e. which have
    user ports part of the same bridge) are at most 1 hop away from each
    other. This was true for the "disjoint trees" case because
    once a packet reached the CPU port, VLAN-unaware bridging was done
    by the DSA master towards the other switches based on destination
    MAC address, so the tag_8021q header was not interpreted in any way.
    However, in a daisy chain setup with 3 switches, all of them will
    interpret the tag_8021q header, and all tag_8021q VLANs need to be
    installed in all switches.
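
To make (b) concrete, here is a toy model of the shared tag_8021q header. The bit layout below is an assumption chosen for the example (the real layout lives in net/dsa/tag_8021q.c and may differ); the point is only that the VID encodes where the packet entered the tree, so every switch on the path must know the VLAN:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative tag_8021q-style VID: a 12-bit VLAN ID packing a
 * direction, a switch ID and a port number. In a daisy chain, sw2 must
 * recognize the RX VID that sw0 assigned, which is why the VLAN has to
 * be installed on all switches in between, not just on adjacent pairs.
 */
enum ex_8021q_dir { EX_8021Q_DIR_RX = 1, EX_8021Q_DIR_TX = 2 };

static uint16_t ex_8021q_vid(enum ex_8021q_dir dir, int swid, int port)
{
	return (uint16_t)((dir << 10) | ((swid & 0x7) << 4) | (port & 0xf));
}
```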

When looking at the real O(n^2) complexity of the problem, it is clear
that the current code had absolutely no chance of working in the general
case. So this patch series brings a redesign of tag_8021q, in light of
its new requirements. Anything with O(n^2) complexity (where n is the
number of switches in a DSA tree) is an obvious candidate for the DSA
cross-chip notifier support.
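
The notifier-based shape the redesign moves towards can be sketched as follows. Every name here is invented for illustration; only the broadcast-to-all-switches pattern reflects how DSA cross-chip notifiers work:

```c
#include <assert.h>

/* Minimal sketch of cross-chip notifier fan-out: an event such as
 * "tag_8021q VLAN add" is emitted once, the core replays it to every
 * switch in the tree, and each switch applies it locally. Each handler
 * stays simple even though the overall problem is O(n^2) across all
 * switch pairs.
 */
#define EX_NUM_SWITCHES 3

static int ex_vid_installed[EX_NUM_SWITCHES];

static int ex_switch_tag_8021q_vlan_add(int sw, int vid)
{
	ex_vid_installed[sw] = vid;	/* stand-in for hardware programming */
	return 0;
}

static int ex_tree_notify_vlan_add(int vid)
{
	int sw, err;

	for (sw = 0; sw < EX_NUM_SWITCHES; sw++) {
		err = ex_switch_tag_8021q_vlan_add(sw, vid);
		if (err)
			return err;
	}
	return 0;
}
```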

One by one, the patches are:
- The sja1105 driver is extremely entangled with tag_8021q, to be exact,
  with that driver's best_effort_vlan_filtering support. We drop this
  operating mode, which means that sja1105 temporarily loses network
  stack termination for VLAN-aware bridges. That operating mode raced
  itself to its own grave anyway due to some hardware limitations in
  combination with PTP reported by NXP customers. I can't say a lot
  more, but network stack termination for VLAN-aware bridges in sja1105
  will be reimplemented soon with a much, much better solution.
- What remains of tag_8021q in sja1105 is support for standalone ports
  mode and for VLAN-unaware bridging. We refactor the API surface of
  tag_8021q to a single pair of dsa_tag_8021q_{register,unregister}
  functions and we clean up everything else related to tag_8021q from
  sja1105 and felix.
- Then we move tag_8021q into the DSA core. I thought about this a lot,
  and there is really no other way to add a DSA_NOTIFIER_TAG_8021Q_VLAN_ADD
  cross-chip notifier if DSA has no way to know if the individual
  switches use tag_8021q or not. So it needs to be part of the core to
  use notifiers.
- Then we modify tag_8021q to update dynamically on bridge_{join,leave}
  events, instead of what we have today which is simply installing the
  VLANs on all ports of a switch and leaving port isolation up to
  somebody else. This change is necessary because port isolation over a
  DSA link cannot be done in any other way except based on VLAN
  membership, as opposed to bridging within the same switch which had 2
  choices (at least on sja1105).
- Finally we add 2 new cross-chip notifiers for adding and deleting a
  tag_8021q VLAN, which is properly refcounted similar to the bridge FDB
  and MDB code, and complete cleanup is done on teardown (note that this
  is unlike regular bridge VLANs, where we currently cannot do
  refcounting because the user can run "bridge vlan add dev swp0 vid 100"
  a gazillion times, and "bridge vlan del dev swp0 vid 100" just once,
  and for some reason expect that the VLAN will be deleted. But I digress).
  With this opportunity we remove a lot of hard-to-digest code and
  replace it with much more idiomatic DSA-style code.
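
The register/unregister pair from the second patch above can be sketched standalone. Only the two entry-point names come from the series; the stub switch structure and bodies are assumptions made so the setup/teardown pattern compiles on its own:

```c
#include <assert.h>

/* Sketch of the slimmed-down API surface: one registration call at
 * setup, one unregistration call at teardown.
 */
struct ex_dsa_switch {
	int tag_8021q_users;
};

static int dsa_tag_8021q_register(struct ex_dsa_switch *ds, unsigned short proto)
{
	(void)proto;		/* e.g. ETH_P_8021AD for felix */
	ds->tag_8021q_users++;	/* stand-in for context allocation */
	return 0;
}

static void dsa_tag_8021q_unregister(struct ex_dsa_switch *ds)
{
	ds->tag_8021q_users--;	/* stand-in for context teardown */
}

static int ex_driver_setup(struct ex_dsa_switch *ds)
{
	int err = dsa_tag_8021q_register(ds, 0x88a8);

	if (err)
		return err;
	/* further setup steps would unwind with
	 * dsa_tag_8021q_unregister(ds) on failure
	 */
	return 0;
}
```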
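
The isolation argument in the bridge_{join,leave} patch above rests on VLAN membership. A toy model, with made-up port counts and helpers:

```c
#include <assert.h>

/* Isolation by VLAN membership: two ports can exchange traffic only if
 * at least one VLAN contains both. Installing tag_8021q VLANs only on
 * ports that joined the same bridge is what isolates the rest, which
 * matters on a DSA link where no other isolation mechanism exists.
 */
#define EX_NUM_PORTS 5

static unsigned long ex_vlan_members[EX_NUM_PORTS]; /* bit i = member of toy VID i */

static void ex_bridge_join(int port, int vid)
{
	ex_vlan_members[port] |= 1UL << vid;
}

static void ex_bridge_leave(int port, int vid)
{
	ex_vlan_members[port] &= ~(1UL << vid);
}

static int ex_can_forward(int from, int to)
{
	return (ex_vlan_members[from] & ex_vlan_members[to]) != 0;
}
```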
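
The refcounting described in the last patch above, sketched in the spirit of the bridge FDB/MDB code (the structure and names are invented for the example): the first add installs the VLAN, repeated adds only bump the refcount, and the entry goes away when the last user deletes it.

```c
#include <assert.h>
#include <stdlib.h>

/* Toy refcounted tag_8021q VLAN table. */
struct ex_8021q_vlan {
	int vid;
	int refcount;
	struct ex_8021q_vlan *next;
};

static struct ex_8021q_vlan *ex_vlans;

static int ex_vlan_add(int vid)
{
	struct ex_8021q_vlan *v;

	for (v = ex_vlans; v; v = v->next) {
		if (v->vid == vid) {
			v->refcount++;	/* already installed, just refcount */
			return 0;
		}
	}
	v = calloc(1, sizeof(*v));
	if (!v)
		return -1;
	v->vid = vid;
	v->refcount = 1;
	v->next = ex_vlans;
	ex_vlans = v;
	return 0;
}

static int ex_vlan_del(int vid)
{
	struct ex_8021q_vlan **pv, *v;

	for (pv = &ex_vlans; (v = *pv); pv = &v->next) {
		if (v->vid != vid)
			continue;
		if (--v->refcount == 0) {	/* last user: really remove */
			*pv = v->next;
			free(v);
		}
		return 0;
	}
	return -1;	/* was never installed */
}
```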

This series was regression-tested on:
- Single-switch boards with SJA1105T
- Disjoint-tree boards with SJA1105S and Felix (using ocelot-8021q)
- H topology boards using SJA1110A
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
parents c18e9405 c64b9c05
@@ -231,11 +231,6 @@ static int felix_tag_8021q_vlan_del(struct dsa_switch *ds, int port, u16 vid)
return 0;
}
static const struct dsa_8021q_ops felix_tag_8021q_ops = {
.vlan_add = felix_tag_8021q_vlan_add,
.vlan_del = felix_tag_8021q_vlan_del,
};
/* Alternatively to using the NPI functionality, that same hardware MAC
* connected internally to the enetc or fman DSA master can be configured to
* use the software-defined tag_8021q frame format. As far as the hardware is
@@ -425,29 +420,18 @@ static int felix_setup_tag_8021q(struct dsa_switch *ds, int cpu)
ocelot_rmw_rix(ocelot, 0, cpu_flood, ANA_PGID_PGID, PGID_MC);
ocelot_rmw_rix(ocelot, 0, cpu_flood, ANA_PGID_PGID, PGID_BC);
felix->dsa_8021q_ctx = kzalloc(sizeof(*felix->dsa_8021q_ctx),
GFP_KERNEL);
if (!felix->dsa_8021q_ctx)
return -ENOMEM;
felix->dsa_8021q_ctx->ops = &felix_tag_8021q_ops;
felix->dsa_8021q_ctx->proto = htons(ETH_P_8021AD);
felix->dsa_8021q_ctx->ds = ds;
err = dsa_8021q_setup(felix->dsa_8021q_ctx, true);
err = dsa_tag_8021q_register(ds, htons(ETH_P_8021AD));
if (err)
goto out_free_dsa_8021_ctx;
return err;
err = felix_setup_mmio_filtering(felix);
if (err)
goto out_teardown_dsa_8021q;
goto out_tag_8021q_unregister;
return 0;
out_teardown_dsa_8021q:
dsa_8021q_setup(felix->dsa_8021q_ctx, false);
out_free_dsa_8021_ctx:
kfree(felix->dsa_8021q_ctx);
out_tag_8021q_unregister:
dsa_tag_8021q_unregister(ds);
return err;
}
@@ -462,11 +446,7 @@ static void felix_teardown_tag_8021q(struct dsa_switch *ds, int cpu)
dev_err(ds->dev, "felix_teardown_mmio_filtering returned %d",
err);
err = dsa_8021q_setup(felix->dsa_8021q_ctx, false);
if (err)
dev_err(ds->dev, "dsa_8021q_setup returned %d", err);
kfree(felix->dsa_8021q_ctx);
dsa_tag_8021q_unregister(ds);
for (port = 0; port < ds->num_ports; port++) {
if (dsa_is_unused_port(ds, port))
@@ -1679,6 +1659,8 @@ const struct dsa_switch_ops felix_switch_ops = {
.port_mrp_del = felix_mrp_del,
.port_mrp_add_ring_role = felix_mrp_add_ring_role,
.port_mrp_del_ring_role = felix_mrp_del_ring_role,
.tag_8021q_vlan_add = felix_tag_8021q_vlan_add,
.tag_8021q_vlan_del = felix_tag_8021q_vlan_del,
};
struct net_device *felix_port_to_netdev(struct ocelot *ocelot, int port)
@@ -60,7 +60,6 @@ struct felix {
struct lynx_pcs **pcs;
resource_size_t switch_base;
resource_size_t imdio_base;
struct dsa_8021q_context *dsa_8021q_ctx;
enum dsa_tag_protocol tag_proto;
};
@@ -234,19 +234,13 @@ struct sja1105_bridge_vlan {
bool untagged;
};
enum sja1105_vlan_state {
SJA1105_VLAN_UNAWARE,
SJA1105_VLAN_BEST_EFFORT,
SJA1105_VLAN_FILTERING_FULL,
};
struct sja1105_private {
struct sja1105_static_config static_config;
bool rgmii_rx_delay[SJA1105_MAX_NUM_PORTS];
bool rgmii_tx_delay[SJA1105_MAX_NUM_PORTS];
phy_interface_t phy_mode[SJA1105_MAX_NUM_PORTS];
bool fixed_link[SJA1105_MAX_NUM_PORTS];
bool best_effort_vlan_filtering;
bool vlan_aware;
unsigned long learn_ena;
unsigned long ucast_egress_floods;
unsigned long bcast_egress_floods;
@@ -263,8 +257,6 @@ struct sja1105_private {
* the switch doesn't confuse them with one another.
*/
struct mutex mgmt_lock;
struct dsa_8021q_context *dsa_8021q_ctx;
enum sja1105_vlan_state vlan_state;
struct devlink_region **regions;
struct sja1105_cbs_entry *cbs;
struct mii_bus *mdio_base_t1;
@@ -311,10 +303,6 @@ int sja1110_pcs_mdio_write(struct mii_bus *bus, int phy, int reg, u16 val);
/* From sja1105_devlink.c */
int sja1105_devlink_setup(struct dsa_switch *ds);
void sja1105_devlink_teardown(struct dsa_switch *ds);
int sja1105_devlink_param_get(struct dsa_switch *ds, u32 id,
struct devlink_param_gset_ctx *ctx);
int sja1105_devlink_param_set(struct dsa_switch *ds, u32 id,
struct devlink_param_gset_ctx *ctx);
int sja1105_devlink_info_get(struct dsa_switch *ds,
struct devlink_info_req *req,
struct netlink_ext_ack *extack);
@@ -115,105 +115,6 @@ static void sja1105_teardown_devlink_regions(struct dsa_switch *ds)
kfree(priv->regions);
}
static int sja1105_best_effort_vlan_filtering_get(struct sja1105_private *priv,
bool *be_vlan)
{
*be_vlan = priv->best_effort_vlan_filtering;
return 0;
}
static int sja1105_best_effort_vlan_filtering_set(struct sja1105_private *priv,
bool be_vlan)
{
struct dsa_switch *ds = priv->ds;
bool vlan_filtering;
int port;
int rc;
priv->best_effort_vlan_filtering = be_vlan;
rtnl_lock();
for (port = 0; port < ds->num_ports; port++) {
struct dsa_port *dp;
if (!dsa_is_user_port(ds, port))
continue;
dp = dsa_to_port(ds, port);
vlan_filtering = dsa_port_is_vlan_filtering(dp);
rc = sja1105_vlan_filtering(ds, port, vlan_filtering, NULL);
if (rc)
break;
}
rtnl_unlock();
return rc;
}
enum sja1105_devlink_param_id {
SJA1105_DEVLINK_PARAM_ID_BASE = DEVLINK_PARAM_GENERIC_ID_MAX,
SJA1105_DEVLINK_PARAM_ID_BEST_EFFORT_VLAN_FILTERING,
};
int sja1105_devlink_param_get(struct dsa_switch *ds, u32 id,
struct devlink_param_gset_ctx *ctx)
{
struct sja1105_private *priv = ds->priv;
int err;
switch (id) {
case SJA1105_DEVLINK_PARAM_ID_BEST_EFFORT_VLAN_FILTERING:
err = sja1105_best_effort_vlan_filtering_get(priv,
&ctx->val.vbool);
break;
default:
err = -EOPNOTSUPP;
break;
}
return err;
}
int sja1105_devlink_param_set(struct dsa_switch *ds, u32 id,
struct devlink_param_gset_ctx *ctx)
{
struct sja1105_private *priv = ds->priv;
int err;
switch (id) {
case SJA1105_DEVLINK_PARAM_ID_BEST_EFFORT_VLAN_FILTERING:
err = sja1105_best_effort_vlan_filtering_set(priv,
ctx->val.vbool);
break;
default:
err = -EOPNOTSUPP;
break;
}
return err;
}
static const struct devlink_param sja1105_devlink_params[] = {
DSA_DEVLINK_PARAM_DRIVER(SJA1105_DEVLINK_PARAM_ID_BEST_EFFORT_VLAN_FILTERING,
"best_effort_vlan_filtering",
DEVLINK_PARAM_TYPE_BOOL,
BIT(DEVLINK_PARAM_CMODE_RUNTIME)),
};
static int sja1105_setup_devlink_params(struct dsa_switch *ds)
{
return dsa_devlink_params_register(ds, sja1105_devlink_params,
ARRAY_SIZE(sja1105_devlink_params));
}
static void sja1105_teardown_devlink_params(struct dsa_switch *ds)
{
dsa_devlink_params_unregister(ds, sja1105_devlink_params,
ARRAY_SIZE(sja1105_devlink_params));
}
int sja1105_devlink_info_get(struct dsa_switch *ds,
struct devlink_info_req *req,
struct netlink_ext_ack *extack)
@@ -233,23 +134,10 @@ int sja1105_devlink_info_get(struct dsa_switch *ds,
int sja1105_devlink_setup(struct dsa_switch *ds)
{
int rc;
rc = sja1105_setup_devlink_params(ds);
if (rc)
return rc;
rc = sja1105_setup_devlink_regions(ds);
if (rc < 0) {
sja1105_teardown_devlink_params(ds);
return rc;
}
return 0;
return sja1105_setup_devlink_regions(ds);
}
void sja1105_devlink_teardown(struct dsa_switch *ds)
{
sja1105_teardown_devlink_params(ds);
sja1105_teardown_devlink_regions(ds);
}
@@ -545,18 +545,11 @@ void sja1105_frame_memory_partitioning(struct sja1105_private *priv)
{
struct sja1105_l2_forwarding_params_entry *l2_fwd_params;
struct sja1105_vl_forwarding_params_entry *vl_fwd_params;
int max_mem = priv->info->max_frame_mem;
struct sja1105_table *table;
/* VLAN retagging is implemented using a loopback port that consumes
* frame buffers. That leaves less for us.
*/
if (priv->vlan_state == SJA1105_VLAN_BEST_EFFORT)
max_mem -= SJA1105_FRAME_MEMORY_RETAGGING_OVERHEAD;
table = &priv->static_config.tables[BLK_IDX_L2_FORWARDING_PARAMS];
l2_fwd_params = table->entries;
l2_fwd_params->part_spc[0] = max_mem;
l2_fwd_params->part_spc[0] = SJA1105_MAX_FRAME_MEMORY;
/* If we have any critical-traffic virtual links, we need to reserve
* some frame buffer memory for them. At the moment, hardcode the value
@@ -1416,7 +1409,7 @@ int sja1105pqrs_fdb_add(struct dsa_switch *ds, int port,
l2_lookup.vlanid = vid;
l2_lookup.iotag = SJA1105_S_TAG;
l2_lookup.mask_macaddr = GENMASK_ULL(ETH_ALEN * 8 - 1, 0);
if (priv->vlan_state != SJA1105_VLAN_UNAWARE) {
if (priv->vlan_aware) {
l2_lookup.mask_vlanid = VLAN_VID_MASK;
l2_lookup.mask_iotag = BIT(0);
} else {
@@ -1479,7 +1472,7 @@ int sja1105pqrs_fdb_del(struct dsa_switch *ds, int port,
l2_lookup.vlanid = vid;
l2_lookup.iotag = SJA1105_S_TAG;
l2_lookup.mask_macaddr = GENMASK_ULL(ETH_ALEN * 8 - 1, 0);
if (priv->vlan_state != SJA1105_VLAN_UNAWARE) {
if (priv->vlan_aware) {
l2_lookup.mask_vlanid = VLAN_VID_MASK;
l2_lookup.mask_iotag = BIT(0);
} else {
@@ -1525,7 +1518,7 @@ static int sja1105_fdb_add(struct dsa_switch *ds, int port,
* for what gets printed in 'bridge fdb show'. In the case of zero,
* no VID gets printed at all.
*/
if (priv->vlan_state != SJA1105_VLAN_FILTERING_FULL)
if (!priv->vlan_aware)
vid = 0;
return priv->info->fdb_add_cmd(ds, port, addr, vid);
@@ -1536,7 +1529,7 @@ static int sja1105_fdb_del(struct dsa_switch *ds, int port,
{
struct sja1105_private *priv = ds->priv;
if (priv->vlan_state != SJA1105_VLAN_FILTERING_FULL)
if (!priv->vlan_aware)
vid = 0;
return priv->info->fdb_del_cmd(ds, port, addr, vid);
@@ -1581,7 +1574,7 @@ static int sja1105_fdb_dump(struct dsa_switch *ds, int port,
u64_to_ether_addr(l2_lookup.macaddr, macaddr);
/* We need to hide the dsa_8021q VLANs from the user. */
if (priv->vlan_state == SJA1105_VLAN_UNAWARE)
if (!priv->vlan_aware)
l2_lookup.vlanid = 0;
cb(macaddr, l2_lookup.vlanid, l2_lookup.lockeds, data);
}
@@ -1997,85 +1990,6 @@ static int sja1105_pvid_apply(struct sja1105_private *priv, int port, u16 pvid)
&mac[port], true);
}
static int sja1105_crosschip_bridge_join(struct dsa_switch *ds,
int tree_index, int sw_index,
int other_port, struct net_device *br)
{
struct dsa_switch *other_ds = dsa_switch_find(tree_index, sw_index);
struct sja1105_private *other_priv = other_ds->priv;
struct sja1105_private *priv = ds->priv;
int port, rc;
if (other_ds->ops != &sja1105_switch_ops)
return 0;
for (port = 0; port < ds->num_ports; port++) {
if (!dsa_is_user_port(ds, port))
continue;
if (dsa_to_port(ds, port)->bridge_dev != br)
continue;
rc = dsa_8021q_crosschip_bridge_join(priv->dsa_8021q_ctx,
port,
other_priv->dsa_8021q_ctx,
other_port);
if (rc)
return rc;
rc = dsa_8021q_crosschip_bridge_join(other_priv->dsa_8021q_ctx,
other_port,
priv->dsa_8021q_ctx,
port);
if (rc)
return rc;
}
return 0;
}
static void sja1105_crosschip_bridge_leave(struct dsa_switch *ds,
int tree_index, int sw_index,
int other_port,
struct net_device *br)
{
struct dsa_switch *other_ds = dsa_switch_find(tree_index, sw_index);
struct sja1105_private *other_priv = other_ds->priv;
struct sja1105_private *priv = ds->priv;
int port;
if (other_ds->ops != &sja1105_switch_ops)
return;
for (port = 0; port < ds->num_ports; port++) {
if (!dsa_is_user_port(ds, port))
continue;
if (dsa_to_port(ds, port)->bridge_dev != br)
continue;
dsa_8021q_crosschip_bridge_leave(priv->dsa_8021q_ctx, port,
other_priv->dsa_8021q_ctx,
other_port);
dsa_8021q_crosschip_bridge_leave(other_priv->dsa_8021q_ctx,
other_port,
priv->dsa_8021q_ctx, port);
}
}
static int sja1105_setup_8021q_tagging(struct dsa_switch *ds, bool enabled)
{
struct sja1105_private *priv = ds->priv;
int rc;
rc = dsa_8021q_setup(priv->dsa_8021q_ctx, enabled);
if (rc)
return rc;
dev_info(ds->dev, "%s switch tagging\n",
enabled ? "Enabled" : "Disabled");
return 0;
}
static enum dsa_tag_protocol
sja1105_get_tag_protocol(struct dsa_switch *ds, int port,
enum dsa_tag_protocol mp)
@@ -2085,57 +1999,6 @@ sja1105_get_tag_protocol(struct dsa_switch *ds, int port,
return priv->info->tag_proto;
}
static int sja1105_find_free_subvlan(u16 *subvlan_map, bool pvid)
{
int subvlan;
if (pvid)
return 0;
for (subvlan = 1; subvlan < DSA_8021Q_N_SUBVLAN; subvlan++)
if (subvlan_map[subvlan] == VLAN_N_VID)
return subvlan;
return -1;
}
static int sja1105_find_subvlan(u16 *subvlan_map, u16 vid)
{
int subvlan;
for (subvlan = 0; subvlan < DSA_8021Q_N_SUBVLAN; subvlan++)
if (subvlan_map[subvlan] == vid)
return subvlan;
return -1;
}
static int sja1105_find_committed_subvlan(struct sja1105_private *priv,
int port, u16 vid)
{
struct sja1105_port *sp = &priv->ports[port];
return sja1105_find_subvlan(sp->subvlan_map, vid);
}
static void sja1105_init_subvlan_map(u16 *subvlan_map)
{
int subvlan;
for (subvlan = 0; subvlan < DSA_8021Q_N_SUBVLAN; subvlan++)
subvlan_map[subvlan] = VLAN_N_VID;
}
static void sja1105_commit_subvlan_map(struct sja1105_private *priv, int port,
u16 *subvlan_map)
{
struct sja1105_port *sp = &priv->ports[port];
int subvlan;
for (subvlan = 0; subvlan < DSA_8021Q_N_SUBVLAN; subvlan++)
sp->subvlan_map[subvlan] = subvlan_map[subvlan];
}
static int sja1105_is_vlan_configured(struct sja1105_private *priv, u16 vid)
{
struct sja1105_vlan_lookup_entry *vlan;
@@ -2152,29 +2015,9 @@ static int sja1105_is_vlan_configured(struct sja1105_private *priv, u16 vid)
return -1;
}
static int
sja1105_find_retagging_entry(struct sja1105_retagging_entry *retagging,
int count, int from_port, u16 from_vid,
u16 to_vid)
{
int i;
for (i = 0; i < count; i++)
if (retagging[i].ing_port == BIT(from_port) &&
retagging[i].vlan_ing == from_vid &&
retagging[i].vlan_egr == to_vid)
return i;
/* Return an invalid entry index if not found */
return -1;
}
static int sja1105_commit_vlans(struct sja1105_private *priv,
struct sja1105_vlan_lookup_entry *new_vlan,
struct sja1105_retagging_entry *new_retagging,
int num_retagging)
struct sja1105_vlan_lookup_entry *new_vlan)
{
struct sja1105_retagging_entry *retagging;
struct sja1105_vlan_lookup_entry *vlan;
struct sja1105_table *table;
int num_vlans = 0;
@@ -2234,62 +2077,16 @@ static int sja1105_commit_vlans(struct sja1105_private *priv,
vlan[k++] = new_vlan[i];
}
/* VLAN Retagging Table */
table = &priv->static_config.tables[BLK_IDX_RETAGGING];
retagging = table->entries;
for (i = 0; i < table->entry_count; i++) {
rc = sja1105_dynamic_config_write(priv, BLK_IDX_RETAGGING,
i, &retagging[i], false);
if (rc)
return rc;
}
if (table->entry_count)
kfree(table->entries);
table->entries = kcalloc(num_retagging, table->ops->unpacked_entry_size,
GFP_KERNEL);
if (!table->entries)
return -ENOMEM;
table->entry_count = num_retagging;
retagging = table->entries;
for (i = 0; i < num_retagging; i++) {
retagging[i] = new_retagging[i];
/* Update entry */
rc = sja1105_dynamic_config_write(priv, BLK_IDX_RETAGGING,
i, &retagging[i], true);
if (rc < 0)
return rc;
}
return 0;
}
struct sja1105_crosschip_vlan {
struct list_head list;
u16 vid;
bool untagged;
int port;
int other_port;
struct dsa_8021q_context *other_ctx;
};
struct sja1105_crosschip_switch {
struct list_head list;
struct dsa_8021q_context *other_ctx;
};
static int sja1105_commit_pvid(struct sja1105_private *priv)
{
struct sja1105_bridge_vlan *v;
struct list_head *vlan_list;
int rc = 0;
if (priv->vlan_state == SJA1105_VLAN_FILTERING_FULL)
if (priv->vlan_aware)
vlan_list = &priv->bridge_vlans;
else
vlan_list = &priv->dsa_8021q_vlans;
@@ -2311,7 +2108,7 @@ sja1105_build_bridge_vlans(struct sja1105_private *priv,
{
struct sja1105_bridge_vlan *v;
if (priv->vlan_state == SJA1105_VLAN_UNAWARE)
if (!priv->vlan_aware)
return 0;
list_for_each_entry(v, &priv->bridge_vlans, list) {
@@ -2334,9 +2131,6 @@ sja1105_build_dsa_8021q_vlans(struct sja1105_private *priv,
{
struct sja1105_bridge_vlan *v;
if (priv->vlan_state == SJA1105_VLAN_FILTERING_FULL)
return 0;
list_for_each_entry(v, &priv->dsa_8021q_vlans, list) {
int match = v->vid;
@@ -2351,326 +2145,11 @@ sja1105_build_dsa_8021q_vlans(struct sja1105_private *priv,
return 0;
}
static int sja1105_build_subvlans(struct sja1105_private *priv,
u16 subvlan_map[][DSA_8021Q_N_SUBVLAN],
struct sja1105_vlan_lookup_entry *new_vlan,
struct sja1105_retagging_entry *new_retagging,
int *num_retagging)
{
struct sja1105_bridge_vlan *v;
int k = *num_retagging;
if (priv->vlan_state != SJA1105_VLAN_BEST_EFFORT)
return 0;
list_for_each_entry(v, &priv->bridge_vlans, list) {
int upstream = dsa_upstream_port(priv->ds, v->port);
int match, subvlan;
u16 rx_vid;
/* Only sub-VLANs on user ports need to be applied.
* Bridge VLANs also include VLANs added automatically
* by DSA on the CPU port.
*/
if (!dsa_is_user_port(priv->ds, v->port))
continue;
subvlan = sja1105_find_subvlan(subvlan_map[v->port],
v->vid);
if (subvlan < 0) {
subvlan = sja1105_find_free_subvlan(subvlan_map[v->port],
v->pvid);
if (subvlan < 0) {
dev_err(priv->ds->dev, "No more free subvlans\n");
return -ENOSPC;
}
}
rx_vid = dsa_8021q_rx_vid_subvlan(priv->ds, v->port, subvlan);
/* @v->vid on @v->port needs to be retagged to @rx_vid
* on @upstream. Assume @v->vid on @v->port and on
* @upstream was already configured by the previous
* iteration over bridge_vlans.
*/
match = rx_vid;
new_vlan[match].vlanid = rx_vid;
new_vlan[match].vmemb_port |= BIT(v->port);
new_vlan[match].vmemb_port |= BIT(upstream);
new_vlan[match].vlan_bc |= BIT(v->port);
new_vlan[match].vlan_bc |= BIT(upstream);
/* The "untagged" flag is set the same as for the
* original VLAN
*/
if (!v->untagged)
new_vlan[match].tag_port |= BIT(v->port);
/* But it's always tagged towards the CPU */
new_vlan[match].tag_port |= BIT(upstream);
new_vlan[match].type_entry = SJA1110_VLAN_D_TAG;
/* The Retagging Table generates packet *clones* with
* the new VLAN. This is a very odd hardware quirk
* which we need to suppress by dropping the original
* packet.
* Deny egress of the original VLAN towards the CPU
* port. This will force the switch to drop it, and
* we'll see only the retagged packets.
*/
match = v->vid;
new_vlan[match].vlan_bc &= ~BIT(upstream);
/* And the retagging itself */
new_retagging[k].vlan_ing = v->vid;
new_retagging[k].vlan_egr = rx_vid;
new_retagging[k].ing_port = BIT(v->port);
new_retagging[k].egr_port = BIT(upstream);
if (k++ == SJA1105_MAX_RETAGGING_COUNT) {
dev_err(priv->ds->dev, "No more retagging rules\n");
return -ENOSPC;
}
subvlan_map[v->port][subvlan] = v->vid;
}
*num_retagging = k;
return 0;
}
/* Sadly, in crosschip scenarios where the CPU port is also the link to another
* switch, we should retag backwards (the dsa_8021q vid to the original vid) on
* the CPU port of neighbour switches.
*/
static int
sja1105_build_crosschip_subvlans(struct sja1105_private *priv,
struct sja1105_vlan_lookup_entry *new_vlan,
struct sja1105_retagging_entry *new_retagging,
int *num_retagging)
{
struct sja1105_crosschip_vlan *tmp, *pos;
struct dsa_8021q_crosschip_link *c;
struct sja1105_bridge_vlan *v, *w;
struct list_head crosschip_vlans;
int k = *num_retagging;
int rc = 0;
if (priv->vlan_state != SJA1105_VLAN_BEST_EFFORT)
return 0;
INIT_LIST_HEAD(&crosschip_vlans);
list_for_each_entry(c, &priv->dsa_8021q_ctx->crosschip_links, list) {
struct sja1105_private *other_priv = c->other_ctx->ds->priv;
if (other_priv->vlan_state == SJA1105_VLAN_FILTERING_FULL)
continue;
/* Crosschip links are also added to the CPU ports.
* Ignore those.
*/
if (!dsa_is_user_port(priv->ds, c->port))
continue;
if (!dsa_is_user_port(c->other_ctx->ds, c->other_port))
continue;
/* Search for VLANs on the remote port */
list_for_each_entry(v, &other_priv->bridge_vlans, list) {
bool already_added = false;
bool we_have_it = false;
if (v->port != c->other_port)
continue;
/* If @v is a pvid on @other_ds, it does not need
* re-retagging, because its SVL field is 0 and we
* already allow that, via the dsa_8021q crosschip
* links.
*/
if (v->pvid)
continue;
/* Search for the VLAN on our local port */
list_for_each_entry(w, &priv->bridge_vlans, list) {
if (w->port == c->port && w->vid == v->vid) {
we_have_it = true;
break;
}
}
if (!we_have_it)
continue;
list_for_each_entry(tmp, &crosschip_vlans, list) {
if (tmp->vid == v->vid &&
tmp->untagged == v->untagged &&
tmp->port == c->port &&
tmp->other_port == v->port &&
tmp->other_ctx == c->other_ctx) {
already_added = true;
break;
}
}
if (already_added)
continue;
tmp = kzalloc(sizeof(*tmp), GFP_KERNEL);
if (!tmp) {
dev_err(priv->ds->dev, "Failed to allocate memory\n");
rc = -ENOMEM;
goto out;
}
tmp->vid = v->vid;
tmp->port = c->port;
tmp->other_port = v->port;
tmp->other_ctx = c->other_ctx;
tmp->untagged = v->untagged;
list_add(&tmp->list, &crosschip_vlans);
}
}
list_for_each_entry(tmp, &crosschip_vlans, list) {
struct sja1105_private *other_priv = tmp->other_ctx->ds->priv;
int upstream = dsa_upstream_port(priv->ds, tmp->port);
int match, subvlan;
u16 rx_vid;
subvlan = sja1105_find_committed_subvlan(other_priv,
tmp->other_port,
tmp->vid);
/* If this happens, it's a bug. The neighbour switch does not
* have a subvlan for tmp->vid on tmp->other_port, but it
* should, since we already checked for its vlan_state.
*/
if (WARN_ON(subvlan < 0)) {
rc = -EINVAL;
goto out;
}
rx_vid = dsa_8021q_rx_vid_subvlan(tmp->other_ctx->ds,
tmp->other_port,
subvlan);
/* The @rx_vid retagged from @tmp->vid on
* {@tmp->other_ds, @tmp->other_port} needs to be
* re-retagged to @tmp->vid on the way back to us.
*
* Assume the original @tmp->vid is already configured
* on this local switch, otherwise we wouldn't be
* retagging its subvlan on the other switch in the
* first place. We just need to add a reverse retagging
* rule for @rx_vid and install @rx_vid on our ports.
*/
match = rx_vid;
new_vlan[match].vlanid = rx_vid;
new_vlan[match].vmemb_port |= BIT(tmp->port);
new_vlan[match].vmemb_port |= BIT(upstream);
/* The "untagged" flag is set the same as for the
* original VLAN. And towards the CPU, it doesn't
* really matter, because @rx_vid will only receive
* traffic on that port. For consistency with other dsa_8021q
* VLANs, we'll keep the CPU port tagged.
*/
if (!tmp->untagged)
new_vlan[match].tag_port |= BIT(tmp->port);
new_vlan[match].tag_port |= BIT(upstream);
new_vlan[match].type_entry = SJA1110_VLAN_D_TAG;
/* Deny egress of @rx_vid towards our front-panel port.
* This will force the switch to drop it, and we'll see
* only the re-retagged packets (having the original,
* pre-initial-retagging, VLAN @tmp->vid).
*/
new_vlan[match].vlan_bc &= ~BIT(tmp->port);
/* On reverse retagging, the same ingress VLAN goes to multiple
* ports. So we have an opportunity to create composite rules
* to not waste the limited space in the retagging table.
*/
k = sja1105_find_retagging_entry(new_retagging, *num_retagging,
upstream, rx_vid, tmp->vid);
if (k < 0) {
if (*num_retagging == SJA1105_MAX_RETAGGING_COUNT) {
dev_err(priv->ds->dev, "No more retagging rules\n");
rc = -ENOSPC;
goto out;
}
k = (*num_retagging)++;
}
/* And the retagging itself */
new_retagging[k].vlan_ing = rx_vid;
new_retagging[k].vlan_egr = tmp->vid;
new_retagging[k].ing_port = BIT(upstream);
new_retagging[k].egr_port |= BIT(tmp->port);
}
out:
list_for_each_entry_safe(tmp, pos, &crosschip_vlans, list) {
list_del(&tmp->list);
kfree(tmp);
}
return rc;
}
static int sja1105_build_vlan_table(struct sja1105_private *priv, bool notify);
static int sja1105_notify_crosschip_switches(struct sja1105_private *priv)
{
struct sja1105_crosschip_switch *s, *pos;
struct list_head crosschip_switches;
struct dsa_8021q_crosschip_link *c;
int rc = 0;
INIT_LIST_HEAD(&crosschip_switches);
list_for_each_entry(c, &priv->dsa_8021q_ctx->crosschip_links, list) {
bool already_added = false;
list_for_each_entry(s, &crosschip_switches, list) {
if (s->other_ctx == c->other_ctx) {
already_added = true;
break;
}
}
if (already_added)
continue;
s = kzalloc(sizeof(*s), GFP_KERNEL);
if (!s) {
dev_err(priv->ds->dev, "Failed to allocate memory\n");
rc = -ENOMEM;
goto out;
}
s->other_ctx = c->other_ctx;
list_add(&s->list, &crosschip_switches);
}
list_for_each_entry(s, &crosschip_switches, list) {
struct sja1105_private *other_priv = s->other_ctx->ds->priv;
rc = sja1105_build_vlan_table(other_priv, false);
if (rc)
goto out;
}
out:
list_for_each_entry_safe(s, pos, &crosschip_switches, list) {
list_del(&s->list);
kfree(s);
}
return rc;
}
static int sja1105_build_vlan_table(struct sja1105_private *priv, bool notify)
static int sja1105_build_vlan_table(struct sja1105_private *priv)
{
u16 subvlan_map[SJA1105_MAX_NUM_PORTS][DSA_8021Q_N_SUBVLAN];
struct sja1105_retagging_entry *new_retagging;
struct sja1105_vlan_lookup_entry *new_vlan;
struct sja1105_table *table;
int i, num_retagging = 0;
int rc;
int rc, i;
table = &priv->static_config.tables[BLK_IDX_VLAN_LOOKUP];
new_vlan = kcalloc(VLAN_N_VID,
@@ -2679,22 +2158,10 @@ static int sja1105_build_vlan_table(struct sja1105_private *priv, bool notify)
return -ENOMEM;
table = &priv->static_config.tables[BLK_IDX_VLAN_LOOKUP];
new_retagging = kcalloc(SJA1105_MAX_RETAGGING_COUNT,
table->ops->unpacked_entry_size, GFP_KERNEL);
if (!new_retagging) {
kfree(new_vlan);
return -ENOMEM;
}
for (i = 0; i < VLAN_N_VID; i++)
new_vlan[i].vlanid = VLAN_N_VID;
for (i = 0; i < SJA1105_MAX_RETAGGING_COUNT; i++)
new_retagging[i].vlan_ing = VLAN_N_VID;
for (i = 0; i < priv->ds->num_ports; i++)
sja1105_init_subvlan_map(subvlan_map[i]);
/* Bridge VLANs */
rc = sja1105_build_bridge_vlans(priv, new_vlan);
if (rc)
@@ -2709,22 +2176,7 @@ static int sja1105_build_vlan_table(struct sja1105_private *priv, bool notify)
if (rc)
goto out;
/* Private VLANs necessary for dsa_8021q operation, which we need to
* determine on our own:
* - Sub-VLANs
* - Sub-VLANs of crosschip switches
*/
rc = sja1105_build_subvlans(priv, subvlan_map, new_vlan, new_retagging,
&num_retagging);
if (rc)
goto out;
rc = sja1105_build_crosschip_subvlans(priv, new_vlan, new_retagging,
&num_retagging);
if (rc)
goto out;
rc = sja1105_commit_vlans(priv, new_vlan, new_retagging, num_retagging);
rc = sja1105_commit_vlans(priv, new_vlan);
if (rc)
goto out;
@@ -2732,18 +2184,8 @@ static int sja1105_build_vlan_table(struct sja1105_private *priv, bool notify)
if (rc)
goto out;
for (i = 0; i < priv->ds->num_ports; i++)
sja1105_commit_subvlan_map(priv, i, subvlan_map[i]);
if (notify) {
rc = sja1105_notify_crosschip_switches(priv);
if (rc)
goto out;
}
out:
kfree(new_vlan);
kfree(new_retagging);
return rc;
}
@@ -2758,10 +2200,8 @@ int sja1105_vlan_filtering(struct dsa_switch *ds, int port, bool enabled,
struct sja1105_l2_lookup_params_entry *l2_lookup_params;
struct sja1105_general_params_entry *general_params;
struct sja1105_private *priv = ds->priv;
enum sja1105_vlan_state state;
struct sja1105_table *table;
struct sja1105_rule *rule;
bool want_tagging;
u16 tpid, tpid2;
int rc;
@@ -2792,19 +2232,10 @@ int sja1105_vlan_filtering(struct dsa_switch *ds, int port, bool enabled,
sp->xmit_tpid = ETH_P_SJA1105;
}
if (!enabled)
state = SJA1105_VLAN_UNAWARE;
else if (priv->best_effort_vlan_filtering)
state = SJA1105_VLAN_BEST_EFFORT;
else
state = SJA1105_VLAN_FILTERING_FULL;
if (priv->vlan_state == state)
if (priv->vlan_aware == enabled)
return 0;
priv->vlan_state = state;
want_tagging = (state == SJA1105_VLAN_UNAWARE ||
state == SJA1105_VLAN_BEST_EFFORT);
priv->vlan_aware = enabled;
table = &priv->static_config.tables[BLK_IDX_GENERAL_PARAMS];
general_params = table->entries;
@@ -2818,8 +2249,6 @@ int sja1105_vlan_filtering(struct dsa_switch *ds, int port, bool enabled,
general_params->incl_srcpt1 = enabled;
general_params->incl_srcpt0 = enabled;
want_tagging = priv->best_effort_vlan_filtering || !enabled;
/* VLAN filtering => independent VLAN learning.
* No VLAN filtering (or best effort) => shared VLAN learning.
*
@@ -2840,11 +2269,9 @@ int sja1105_vlan_filtering(struct dsa_switch *ds, int port, bool enabled,
*/
table = &priv->static_config.tables[BLK_IDX_L2_LOOKUP_PARAMS];
l2_lookup_params = table->entries;
l2_lookup_params->shared_learn = want_tagging;
sja1105_frame_memory_partitioning(priv);
l2_lookup_params->shared_learn = !priv->vlan_aware;
rc = sja1105_build_vlan_table(priv, false);
rc = sja1105_build_vlan_table(priv);
if (rc)
return rc;
@@ -2852,12 +2279,7 @@ int sja1105_vlan_filtering(struct dsa_switch *ds, int port, bool enabled,
if (rc)
NL_SET_ERR_MSG_MOD(extack, "Failed to change VLAN Ethertype");
/* Switch port identification based on 802.1Q is only passable
* if we are not under a vlan_filtering bridge. So make sure
* the two configurations are mutually exclusive (of course, the
* user may know better, i.e. best_effort_vlan_filtering).
*/
return sja1105_setup_8021q_tagging(ds, want_tagging);
return rc;
}
/* Returns number of VLANs added (0 or 1) on success,
@@ -2927,12 +2349,9 @@ static int sja1105_vlan_add(struct dsa_switch *ds, int port,
bool vlan_table_changed = false;
int rc;
/* If the user wants best-effort VLAN filtering (aka vlan_filtering
* bridge plus tagging), be sure to at least deny alterations to the
* configuration done by dsa_8021q.
/* Be sure to deny alterations to the configuration done by tag_8021q.
*/
if (priv->vlan_state != SJA1105_VLAN_FILTERING_FULL &&
vid_is_dsa_8021q(vlan->vid)) {
if (vid_is_dsa_8021q(vlan->vid)) {
NL_SET_ERR_MSG_MOD(extack,
"Range 1024-3071 reserved for dsa_8021q operation");
return -EBUSY;
@@ -2948,7 +2367,7 @@ static int sja1105_vlan_add(struct dsa_switch *ds, int port,
if (!vlan_table_changed)
return 0;
return sja1105_build_vlan_table(priv, true);
return sja1105_build_vlan_table(priv);
}
static int sja1105_vlan_del(struct dsa_switch *ds, int port,
@@ -2965,7 +2384,7 @@ static int sja1105_vlan_del(struct dsa_switch *ds, int port,
if (!vlan_table_changed)
return 0;
return sja1105_build_vlan_table(priv, true);
return sja1105_build_vlan_table(priv);
}
static int sja1105_dsa_8021q_vlan_add(struct dsa_switch *ds, int port, u16 vid,
@@ -2978,7 +2397,7 @@ static int sja1105_dsa_8021q_vlan_add(struct dsa_switch *ds, int port, u16 vid,
if (rc <= 0)
return rc;
return sja1105_build_vlan_table(priv, true);
return sja1105_build_vlan_table(priv);
}
static int sja1105_dsa_8021q_vlan_del(struct dsa_switch *ds, int port, u16 vid)
@@ -2990,14 +2409,9 @@ static int sja1105_dsa_8021q_vlan_del(struct dsa_switch *ds, int port, u16 vid)
if (!rc)
return 0;
return sja1105_build_vlan_table(priv, true);
return sja1105_build_vlan_table(priv);
}
static const struct dsa_8021q_ops sja1105_dsa_8021q_ops = {
.vlan_add = sja1105_dsa_8021q_vlan_add,
.vlan_del = sja1105_dsa_8021q_vlan_del,
};
/* The programming model for the SJA1105 switch is "all-at-once" via static
* configuration tables. Some of these can be dynamically modified at runtime,
* but not the xMII mode parameters table.
@@ -3086,18 +2500,12 @@ static int sja1105_setup(struct dsa_switch *ds)
ds->mtu_enforcement_ingress = true;
priv->best_effort_vlan_filtering = true;
rc = sja1105_devlink_setup(ds);
if (rc < 0)
goto out_static_config_free;
/* The DSA/switchdev model brings up switch ports in standalone mode by
* default, and that means vlan_filtering is 0 since they're not under
* a bridge, so it's safe to set up switch tagging at this time.
*/
rtnl_lock();
rc = sja1105_setup_8021q_tagging(ds, true);
rc = dsa_tag_8021q_register(ds, htons(ETH_P_8021Q));
rtnl_unlock();
if (rc)
goto out_devlink_teardown;
@@ -3122,6 +2530,10 @@ static void sja1105_teardown(struct dsa_switch *ds)
struct sja1105_bridge_vlan *v, *n;
int port;
rtnl_lock();
dsa_tag_8021q_unregister(ds);
rtnl_unlock();
for (port = 0; port < ds->num_ports; port++) {
struct sja1105_port *sp = &priv->ports[port];
@@ -3602,11 +3014,9 @@ static const struct dsa_switch_ops sja1105_switch_ops = {
.cls_flower_add = sja1105_cls_flower_add,
.cls_flower_del = sja1105_cls_flower_del,
.cls_flower_stats = sja1105_cls_flower_stats,
.crosschip_bridge_join = sja1105_crosschip_bridge_join,
.crosschip_bridge_leave = sja1105_crosschip_bridge_leave,
.devlink_param_get = sja1105_devlink_param_get,
.devlink_param_set = sja1105_devlink_param_set,
.devlink_info_get = sja1105_devlink_info_get,
.tag_8021q_vlan_add = sja1105_dsa_8021q_vlan_add,
.tag_8021q_vlan_del = sja1105_dsa_8021q_vlan_del,
};
static const struct of_device_id sja1105_dt_ids[];
@@ -3750,16 +3160,6 @@ static int sja1105_probe(struct spi_device *spi)
mutex_init(&priv->ptp_data.lock);
mutex_init(&priv->mgmt_lock);
priv->dsa_8021q_ctx = devm_kzalloc(dev, sizeof(*priv->dsa_8021q_ctx),
GFP_KERNEL);
if (!priv->dsa_8021q_ctx)
return -ENOMEM;
priv->dsa_8021q_ctx->ops = &sja1105_dsa_8021q_ops;
priv->dsa_8021q_ctx->proto = htons(ETH_P_8021Q);
priv->dsa_8021q_ctx->ds = ds;
INIT_LIST_HEAD(&priv->dsa_8021q_ctx->crosschip_links);
INIT_LIST_HEAD(&priv->bridge_vlans);
INIT_LIST_HEAD(&priv->dsa_8021q_vlans);
@@ -3785,7 +3185,6 @@ static int sja1105_probe(struct spi_device *spi)
struct sja1105_port *sp = &priv->ports[port];
struct dsa_port *dp = dsa_to_port(ds, port);
struct net_device *slave;
int subvlan;
if (!dsa_is_user_port(ds, port))
continue;
@@ -3806,9 +3205,6 @@ static int sja1105_probe(struct spi_device *spi)
}
skb_queue_head_init(&sp->xmit_queue);
sp->xmit_tpid = ETH_P_SJA1105;
for (subvlan = 0; subvlan < DSA_8021Q_N_SUBVLAN; subvlan++)
sp->subvlan_map[subvlan] = VLAN_N_VID;
}
return 0;
@@ -3832,8 +3228,10 @@ static int sja1105_probe(struct spi_device *spi)
static int sja1105_remove(struct spi_device *spi)
{
struct sja1105_private *priv = spi_get_drvdata(spi);
struct dsa_switch *ds = priv->ds;
dsa_unregister_switch(ds);
dsa_unregister_switch(priv->ds);
return 0;
}
@@ -496,14 +496,11 @@ int sja1105_vl_redirect(struct sja1105_private *priv, int port,
struct sja1105_rule *rule = sja1105_rule_find(priv, cookie);
int rc;
if (priv->vlan_state == SJA1105_VLAN_UNAWARE &&
key->type != SJA1105_KEY_VLAN_UNAWARE_VL) {
if (!priv->vlan_aware && key->type != SJA1105_KEY_VLAN_UNAWARE_VL) {
NL_SET_ERR_MSG_MOD(extack,
"Can only redirect based on DMAC");
return -EOPNOTSUPP;
} else if ((priv->vlan_state == SJA1105_VLAN_BEST_EFFORT ||
priv->vlan_state == SJA1105_VLAN_FILTERING_FULL) &&
key->type != SJA1105_KEY_VLAN_AWARE_VL) {
} else if (priv->vlan_aware && key->type != SJA1105_KEY_VLAN_AWARE_VL) {
NL_SET_ERR_MSG_MOD(extack,
"Can only redirect based on {DMAC, VID, PCP}");
return -EOPNOTSUPP;
@@ -595,14 +592,11 @@ int sja1105_vl_gate(struct sja1105_private *priv, int port,
return -ERANGE;
}
if (priv->vlan_state == SJA1105_VLAN_UNAWARE &&
key->type != SJA1105_KEY_VLAN_UNAWARE_VL) {
if (!priv->vlan_aware && key->type != SJA1105_KEY_VLAN_UNAWARE_VL) {
NL_SET_ERR_MSG_MOD(extack,
"Can only gate based on DMAC");
return -EOPNOTSUPP;
} else if ((priv->vlan_state == SJA1105_VLAN_BEST_EFFORT ||
priv->vlan_state == SJA1105_VLAN_FILTERING_FULL) &&
key->type != SJA1105_KEY_VLAN_AWARE_VL) {
} else if (priv->vlan_aware && key->type != SJA1105_KEY_VLAN_AWARE_VL) {
NL_SET_ERR_MSG_MOD(extack,
"Can only gate based on {DMAC, VID, PCP}");
return -EOPNOTSUPP;
@@ -11,60 +11,38 @@
struct dsa_switch;
struct sk_buff;
struct net_device;
struct packet_type;
struct dsa_8021q_context;
struct dsa_8021q_crosschip_link {
struct dsa_tag_8021q_vlan {
struct list_head list;
int port;
struct dsa_8021q_context *other_ctx;
int other_port;
u16 vid;
refcount_t refcount;
};
struct dsa_8021q_ops {
int (*vlan_add)(struct dsa_switch *ds, int port, u16 vid, u16 flags);
int (*vlan_del)(struct dsa_switch *ds, int port, u16 vid);
};
struct dsa_8021q_context {
const struct dsa_8021q_ops *ops;
struct dsa_switch *ds;
struct list_head crosschip_links;
struct list_head vlans;
/* EtherType of RX VID, used for filtering on master interface */
__be16 proto;
};
#define DSA_8021Q_N_SUBVLAN 8
int dsa_8021q_setup(struct dsa_8021q_context *ctx, bool enabled);
int dsa_tag_8021q_register(struct dsa_switch *ds, __be16 proto);
int dsa_8021q_crosschip_bridge_join(struct dsa_8021q_context *ctx, int port,
struct dsa_8021q_context *other_ctx,
int other_port);
int dsa_8021q_crosschip_bridge_leave(struct dsa_8021q_context *ctx, int port,
struct dsa_8021q_context *other_ctx,
int other_port);
void dsa_tag_8021q_unregister(struct dsa_switch *ds);
struct sk_buff *dsa_8021q_xmit(struct sk_buff *skb, struct net_device *netdev,
u16 tpid, u16 tci);
void dsa_8021q_rcv(struct sk_buff *skb, int *source_port, int *switch_id,
int *subvlan);
void dsa_8021q_rcv(struct sk_buff *skb, int *source_port, int *switch_id);
u16 dsa_8021q_tx_vid(struct dsa_switch *ds, int port);
u16 dsa_8021q_rx_vid(struct dsa_switch *ds, int port);
u16 dsa_8021q_rx_vid_subvlan(struct dsa_switch *ds, int port, u16 subvlan);
int dsa_8021q_rx_switch_id(u16 vid);
int dsa_8021q_rx_source_port(u16 vid);
u16 dsa_8021q_rx_subvlan(u16 vid);
bool vid_is_dsa_8021q_rxvlan(u16 vid);
bool vid_is_dsa_8021q_txvlan(u16 vid);
@@ -59,7 +59,6 @@ struct sja1105_skb_cb {
((struct sja1105_skb_cb *)((skb)->cb))
struct sja1105_port {
u16 subvlan_map[DSA_8021Q_N_SUBVLAN];
struct kthread_worker *xmit_worker;
struct kthread_work xmit_work;
struct sk_buff_head xmit_queue;
@@ -352,6 +352,9 @@ struct dsa_switch {
unsigned int ageing_time_min;
unsigned int ageing_time_max;
/* Storage for drivers using tag_8021q */
struct dsa_8021q_context *tag_8021q_ctx;
/* devlink used to represent this switch device */
struct devlink *devlink;
@@ -869,6 +872,13 @@ struct dsa_switch_ops {
const struct switchdev_obj_ring_role_mrp *mrp);
int (*port_mrp_del_ring_role)(struct dsa_switch *ds, int port,
const struct switchdev_obj_ring_role_mrp *mrp);
/*
* tag_8021q operations
*/
int (*tag_8021q_vlan_add)(struct dsa_switch *ds, int port, u16 vid,
u16 flags);
int (*tag_8021q_vlan_del)(struct dsa_switch *ds, int port, u16 vid);
};
#define DSA_DEVLINK_PARAM_DRIVER(_id, _name, _type, _cmodes) \
@@ -18,16 +18,6 @@ if NET_DSA
# Drivers must select the appropriate tagging format(s)
config NET_DSA_TAG_8021Q
tristate
select VLAN_8021Q
help
Unlike the other tagging protocols, the 802.1Q config option simply
provides helpers for other tagging implementations that might rely on
VLAN in one way or another. It is not a complete solution.
Drivers which use these helpers should select this as dependency.
config NET_DSA_TAG_AR9331
tristate "Tag driver for Atheros AR9331 SoC with built-in switch"
help
@@ -126,7 +116,6 @@ config NET_DSA_TAG_OCELOT_8021Q
tristate "Tag driver for Ocelot family of switches, using VLAN"
depends on MSCC_OCELOT_SWITCH_LIB || \
(MSCC_OCELOT_SWITCH_LIB=n && COMPILE_TEST)
select NET_DSA_TAG_8021Q
help
Say Y or M if you want to enable support for tagging frames with a
custom VLAN-based header. Frames that require timestamping, such as
@@ -149,7 +138,6 @@ config NET_DSA_TAG_LAN9303
config NET_DSA_TAG_SJA1105
tristate "Tag driver for NXP SJA1105 switches"
select NET_DSA_TAG_8021Q
select PACKING
help
Say Y or M if you want to enable support for tagging frames with the
# SPDX-License-Identifier: GPL-2.0
# the core
obj-$(CONFIG_NET_DSA) += dsa_core.o
dsa_core-y += dsa.o dsa2.o master.o port.o slave.o switch.o
dsa_core-y += dsa.o dsa2.o master.o port.o slave.o switch.o tag_8021q.o
# tagging formats
obj-$(CONFIG_NET_DSA_TAG_8021Q) += tag_8021q.o
obj-$(CONFIG_NET_DSA_TAG_AR9331) += tag_ar9331.o
obj-$(CONFIG_NET_DSA_TAG_BRCM_COMMON) += tag_brcm.o
obj-$(CONFIG_NET_DSA_TAG_DSA_COMMON) += tag_dsa.o
@@ -39,6 +39,8 @@ enum {
DSA_NOTIFIER_MRP_DEL,
DSA_NOTIFIER_MRP_ADD_RING_ROLE,
DSA_NOTIFIER_MRP_DEL_RING_ROLE,
DSA_NOTIFIER_TAG_8021Q_VLAN_ADD,
DSA_NOTIFIER_TAG_8021Q_VLAN_DEL,
};
/* DSA_NOTIFIER_AGEING_TIME */
@@ -113,6 +115,14 @@ struct dsa_notifier_mrp_ring_role_info {
int port;
};
/* DSA_NOTIFIER_TAG_8021Q_VLAN_* */
struct dsa_notifier_tag_8021q_vlan_info {
int tree_index;
int sw_index;
int port;
u16 vid;
};
struct dsa_switchdev_event_work {
struct dsa_switch *ds;
int port;
@@ -253,6 +263,8 @@ int dsa_port_link_register_of(struct dsa_port *dp);
void dsa_port_link_unregister_of(struct dsa_port *dp);
int dsa_port_hsr_join(struct dsa_port *dp, struct net_device *hsr);
void dsa_port_hsr_leave(struct dsa_port *dp, struct net_device *hsr);
int dsa_port_tag_8021q_vlan_add(struct dsa_port *dp, u16 vid);
void dsa_port_tag_8021q_vlan_del(struct dsa_port *dp, u16 vid);
extern const struct phylink_mac_ops dsa_port_phylink_mac_ops;
static inline bool dsa_port_offloads_bridge_port(struct dsa_port *dp,
@@ -386,6 +398,16 @@ int dsa_tree_change_tag_proto(struct dsa_switch_tree *dst,
const struct dsa_device_ops *tag_ops,
const struct dsa_device_ops *old_tag_ops);
/* tag_8021q.c */
int dsa_tag_8021q_bridge_join(struct dsa_switch *ds,
struct dsa_notifier_bridge_info *info);
int dsa_tag_8021q_bridge_leave(struct dsa_switch *ds,
struct dsa_notifier_bridge_info *info);
int dsa_switch_tag_8021q_vlan_add(struct dsa_switch *ds,
struct dsa_notifier_tag_8021q_vlan_info *info);
int dsa_switch_tag_8021q_vlan_del(struct dsa_switch *ds,
struct dsa_notifier_tag_8021q_vlan_info *info);
extern struct list_head dsa_tree_list;
#endif
@@ -1217,3 +1217,31 @@ void dsa_port_hsr_leave(struct dsa_port *dp, struct net_device *hsr)
if (err)
pr_err("DSA: failed to notify DSA_NOTIFIER_HSR_LEAVE\n");
}
int dsa_port_tag_8021q_vlan_add(struct dsa_port *dp, u16 vid)
{
struct dsa_notifier_tag_8021q_vlan_info info = {
.tree_index = dp->ds->dst->index,
.sw_index = dp->ds->index,
.port = dp->index,
.vid = vid,
};
return dsa_broadcast(DSA_NOTIFIER_TAG_8021Q_VLAN_ADD, &info);
}
void dsa_port_tag_8021q_vlan_del(struct dsa_port *dp, u16 vid)
{
struct dsa_notifier_tag_8021q_vlan_info info = {
.tree_index = dp->ds->dst->index,
.sw_index = dp->ds->index,
.port = dp->index,
.vid = vid,
};
int err;
err = dsa_broadcast(DSA_NOTIFIER_TAG_8021Q_VLAN_DEL, &info);
if (err)
pr_err("DSA: failed to notify tag_8021q VLAN deletion: %pe\n",
ERR_PTR(err));
}
@@ -90,18 +90,25 @@ static int dsa_switch_bridge_join(struct dsa_switch *ds,
struct dsa_notifier_bridge_info *info)
{
struct dsa_switch_tree *dst = ds->dst;
int err;
if (dst->index == info->tree_index && ds->index == info->sw_index &&
ds->ops->port_bridge_join)
return ds->ops->port_bridge_join(ds, info->port, info->br);
ds->ops->port_bridge_join) {
err = ds->ops->port_bridge_join(ds, info->port, info->br);
if (err)
return err;
}
if ((dst->index != info->tree_index || ds->index != info->sw_index) &&
ds->ops->crosschip_bridge_join)
return ds->ops->crosschip_bridge_join(ds, info->tree_index,
ds->ops->crosschip_bridge_join) {
err = ds->ops->crosschip_bridge_join(ds, info->tree_index,
info->sw_index,
info->port, info->br);
if (err)
return err;
}
return 0;
return dsa_tag_8021q_bridge_join(ds, info);
}
static int dsa_switch_bridge_leave(struct dsa_switch *ds,
@@ -151,7 +158,8 @@ static int dsa_switch_bridge_leave(struct dsa_switch *ds,
if (err && err != EOPNOTSUPP)
return err;
}
return 0;
return dsa_tag_8021q_bridge_leave(ds, info);
}
/* Matches for all upstream-facing ports (the CPU port and all upstream-facing
@@ -726,6 +734,12 @@ static int dsa_switch_event(struct notifier_block *nb,
case DSA_NOTIFIER_MRP_DEL_RING_ROLE:
err = dsa_switch_mrp_del_ring_role(ds, info);
break;
case DSA_NOTIFIER_TAG_8021Q_VLAN_ADD:
err = dsa_switch_tag_8021q_vlan_add(ds, info);
break;
case DSA_NOTIFIER_TAG_8021Q_VLAN_DEL:
err = dsa_switch_tag_8021q_vlan_del(ds, info);
break;
default:
err = -EOPNOTSUPP;
break;
@@ -17,7 +17,7 @@
*
* | 11 | 10 | 9 | 8 | 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0 |
* +-----------+-----+-----------------+-----------+-----------------------+
* | DIR | SVL | SWITCH_ID | SUBVLAN | PORT |
* | DIR | RSV | SWITCH_ID | RSV | PORT |
* +-----------+-----+-----------------+-----------+-----------------------+
*
* DIR - VID[11:10]:
@@ -27,24 +27,13 @@
* These values make the special VIDs of 0, 1 and 4095 to be left
* unused by this coding scheme.
*
* SVL/SUBVLAN - { VID[9], VID[5:4] }:
* Sub-VLAN encoding. Valid only when DIR indicates an RX VLAN.
* * 0 (0b000): Field does not encode a sub-VLAN, either because
* received traffic is untagged, PVID-tagged or because a second
* VLAN tag is present after this tag and not inside of it.
* * 1 (0b001): Received traffic is tagged with a VID value private
* to the host. This field encodes the index in the host's lookup
* table through which the value of the ingress VLAN ID can be
* recovered.
* * 2 (0b010): Field encodes a sub-VLAN.
* ...
* * 7 (0b111): Field encodes a sub-VLAN.
* When DIR indicates a TX VLAN, SUBVLAN must be transmitted as zero
* (by the host) and ignored on receive (by the switch).
*
* SWITCH_ID - VID[8:6]:
* Index of switch within DSA tree. Must be between 0 and 7.
*
* RSV - VID[5:4]:
* To be used for further expansion of PORT or for other purposes.
* Must be transmitted as zero and ignored on receive.
*
* PORT - VID[3:0]:
* Index of switch port. Must be between 0 and 15.
*/
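The bit layout documented above can be exercised with a small standalone sketch (hypothetical helper names, not the kernel's DSA_8021Q_* macros; in the kernel, DIR is 1 for RX VLANs and 2 for TX VLANs):

```c
#include <stdint.h>

/* Illustrative helpers mirroring the VID layout above: DIR in VID[11:10]
 * (1 = RX, 2 = TX), SWITCH_ID in VID[8:6], PORT in VID[3:0]; bits 9 and
 * 5:4 are now reserved and transmitted as zero. */
enum { VID_DIR_RX = 1, VID_DIR_TX = 2 };

static uint16_t vid_encode(unsigned int dir, unsigned int switch_id,
			   unsigned int port)
{
	return (uint16_t)((dir & 0x3) << 10 |
			  (switch_id & 0x7) << 6 |
			  (port & 0xf));
}

static unsigned int vid_switch_id(uint16_t vid)
{
	return (vid >> 6) & 0x7;
}

static unsigned int vid_source_port(uint16_t vid)
{
	return vid & 0xf;
}
```

For example, the RX VID of port 2 on switch 0 encodes to 1026, and every encoded VID falls inside 1024-3071, the range the sja1105 driver reserves for dsa_8021q operation.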
@@ -61,18 +50,6 @@
#define DSA_8021Q_SWITCH_ID(x) (((x) << DSA_8021Q_SWITCH_ID_SHIFT) & \
DSA_8021Q_SWITCH_ID_MASK)
#define DSA_8021Q_SUBVLAN_HI_SHIFT 9
#define DSA_8021Q_SUBVLAN_HI_MASK GENMASK(9, 9)
#define DSA_8021Q_SUBVLAN_LO_SHIFT 4
#define DSA_8021Q_SUBVLAN_LO_MASK GENMASK(5, 4)
#define DSA_8021Q_SUBVLAN_HI(x) (((x) & GENMASK(2, 2)) >> 2)
#define DSA_8021Q_SUBVLAN_LO(x) ((x) & GENMASK(1, 0))
#define DSA_8021Q_SUBVLAN(x) \
(((DSA_8021Q_SUBVLAN_LO(x) << DSA_8021Q_SUBVLAN_LO_SHIFT) & \
DSA_8021Q_SUBVLAN_LO_MASK) | \
((DSA_8021Q_SUBVLAN_HI(x) << DSA_8021Q_SUBVLAN_HI_SHIFT) & \
DSA_8021Q_SUBVLAN_HI_MASK))
#define DSA_8021Q_PORT_SHIFT 0
#define DSA_8021Q_PORT_MASK GENMASK(3, 0)
#define DSA_8021Q_PORT(x) (((x) << DSA_8021Q_PORT_SHIFT) & \
@@ -98,13 +75,6 @@ u16 dsa_8021q_rx_vid(struct dsa_switch *ds, int port)
}
EXPORT_SYMBOL_GPL(dsa_8021q_rx_vid);
u16 dsa_8021q_rx_vid_subvlan(struct dsa_switch *ds, int port, u16 subvlan)
{
return DSA_8021Q_DIR_RX | DSA_8021Q_SWITCH_ID(ds->index) |
DSA_8021Q_PORT(port) | DSA_8021Q_SUBVLAN(subvlan);
}
EXPORT_SYMBOL_GPL(dsa_8021q_rx_vid_subvlan);
/* Returns the decoded switch ID from the RX VID. */
int dsa_8021q_rx_switch_id(u16 vid)
{
@@ -119,20 +89,6 @@ int dsa_8021q_rx_source_port(u16 vid)
}
EXPORT_SYMBOL_GPL(dsa_8021q_rx_source_port);
/* Returns the decoded subvlan from the RX VID. */
u16 dsa_8021q_rx_subvlan(u16 vid)
{
u16 svl_hi, svl_lo;
svl_hi = (vid & DSA_8021Q_SUBVLAN_HI_MASK) >>
DSA_8021Q_SUBVLAN_HI_SHIFT;
svl_lo = (vid & DSA_8021Q_SUBVLAN_LO_MASK) >>
DSA_8021Q_SUBVLAN_LO_SHIFT;
return (svl_hi << 2) | svl_lo;
}
EXPORT_SYMBOL_GPL(dsa_8021q_rx_subvlan);
bool vid_is_dsa_8021q_rxvlan(u16 vid)
{
return (vid & DSA_8021Q_DIR_MASK) == DSA_8021Q_DIR_RX;
@@ -151,21 +107,152 @@ bool vid_is_dsa_8021q(u16 vid)
}
EXPORT_SYMBOL_GPL(vid_is_dsa_8021q);
/* If @enabled is true, installs @vid with @flags into the switch port's HW
* filter.
* If @enabled is false, deletes @vid (ignores @flags) from the port. Had the
* user explicitly configured this @vid through the bridge core, then the @vid
* is installed again, but this time with the flags from the bridge layer.
static struct dsa_tag_8021q_vlan *
dsa_tag_8021q_vlan_find(struct dsa_8021q_context *ctx, int port, u16 vid)
{
struct dsa_tag_8021q_vlan *v;
list_for_each_entry(v, &ctx->vlans, list)
if (v->vid == vid && v->port == port)
return v;
return NULL;
}
static int dsa_switch_do_tag_8021q_vlan_add(struct dsa_switch *ds, int port,
u16 vid, u16 flags)
{
struct dsa_8021q_context *ctx = ds->tag_8021q_ctx;
struct dsa_port *dp = dsa_to_port(ds, port);
struct dsa_tag_8021q_vlan *v;
int err;
/* No need to bother with refcounting for user ports */
if (!(dsa_port_is_cpu(dp) || dsa_port_is_dsa(dp)))
return ds->ops->tag_8021q_vlan_add(ds, port, vid, flags);
v = dsa_tag_8021q_vlan_find(ctx, port, vid);
if (v) {
refcount_inc(&v->refcount);
return 0;
}
v = kzalloc(sizeof(*v), GFP_KERNEL);
if (!v)
return -ENOMEM;
err = ds->ops->tag_8021q_vlan_add(ds, port, vid, flags);
if (err) {
kfree(v);
return err;
}
v->vid = vid;
v->port = port;
refcount_set(&v->refcount, 1);
list_add_tail(&v->list, &ctx->vlans);
return 0;
}
static int dsa_switch_do_tag_8021q_vlan_del(struct dsa_switch *ds, int port,
u16 vid)
{
struct dsa_8021q_context *ctx = ds->tag_8021q_ctx;
struct dsa_port *dp = dsa_to_port(ds, port);
struct dsa_tag_8021q_vlan *v;
int err;
/* No need to bother with refcounting for user ports */
if (!(dsa_port_is_cpu(dp) || dsa_port_is_dsa(dp)))
return ds->ops->tag_8021q_vlan_del(ds, port, vid);
v = dsa_tag_8021q_vlan_find(ctx, port, vid);
if (!v)
return -ENOENT;
if (!refcount_dec_and_test(&v->refcount))
return 0;
err = ds->ops->tag_8021q_vlan_del(ds, port, vid);
if (err) {
refcount_inc(&v->refcount);
return err;
}
list_del(&v->list);
kfree(v);
return 0;
}
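The add/del pair above refcounts each (port, vid) entry so that a VLAN shared by several users of a CPU or DSA port is only removed from hardware when the last user is gone. A minimal userspace sketch of the same bookkeeping (hypothetical types; a plain singly linked list and an int stand in for list_head and refcount_t):

```c
#include <stdlib.h>

/* Hypothetical stand-in for the refcounted dsa_tag_8021q_vlan list. */
struct tag_vlan {
	struct tag_vlan *next;
	int port;
	unsigned short vid;
	int refcount;
};

static struct tag_vlan *vlan_find(struct tag_vlan *head, int port,
				  unsigned short vid)
{
	for (; head; head = head->next)
		if (head->port == port && head->vid == vid)
			return head;
	return NULL;
}

/* Returns the entry's refcount after the add, or -1 on allocation failure. */
static int vlan_add(struct tag_vlan **head, int port, unsigned short vid)
{
	struct tag_vlan *v = vlan_find(*head, port, vid);

	if (v)
		return ++v->refcount;

	v = malloc(sizeof(*v));
	if (!v)
		return -1;
	v->port = port;
	v->vid = vid;
	v->refcount = 1;
	v->next = *head;
	*head = v;
	return 1;
}

/* Returns the remaining refcount (0 means the entry was freed), or -1 if
 * no such entry exists. */
static int vlan_del(struct tag_vlan **head, int port, unsigned short vid)
{
	struct tag_vlan **pv, *v;

	for (pv = head; (v = *pv); pv = &v->next) {
		if (v->port == port && v->vid == vid) {
			if (--v->refcount)
				return v->refcount;
			*pv = v->next;
			free(v);
			return 0;
		}
	}
	return -1;
}
```

As in the kernel code, two adds of the same (port, vid) require two dels before the entry actually disappears.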
static bool
dsa_switch_tag_8021q_vlan_match(struct dsa_switch *ds, int port,
struct dsa_notifier_tag_8021q_vlan_info *info)
{
if (dsa_is_dsa_port(ds, port) || dsa_is_cpu_port(ds, port))
return true;
if (ds->dst->index == info->tree_index && ds->index == info->sw_index)
return port == info->port;
return false;
}
int dsa_switch_tag_8021q_vlan_add(struct dsa_switch *ds,
struct dsa_notifier_tag_8021q_vlan_info *info)
{
int port, err;
/* Since we use dsa_broadcast(), there might be other switches in other
* trees which don't support tag_8021q, so don't return an error.
* Or they might even support tag_8021q but have not registered yet to
* use it (maybe they use another tagger currently).
*/
static int dsa_8021q_vid_apply(struct dsa_8021q_context *ctx, int port, u16 vid,
u16 flags, bool enabled)
if (!ds->ops->tag_8021q_vlan_add || !ds->tag_8021q_ctx)
return 0;
for (port = 0; port < ds->num_ports; port++) {
if (dsa_switch_tag_8021q_vlan_match(ds, port, info)) {
u16 flags = 0;
if (dsa_is_user_port(ds, port))
flags |= BRIDGE_VLAN_INFO_UNTAGGED;
if (vid_is_dsa_8021q_rxvlan(info->vid) &&
dsa_8021q_rx_switch_id(info->vid) == ds->index &&
dsa_8021q_rx_source_port(info->vid) == port)
flags |= BRIDGE_VLAN_INFO_PVID;
err = dsa_switch_do_tag_8021q_vlan_add(ds, port,
info->vid,
flags);
if (err)
return err;
}
}
return 0;
}
int dsa_switch_tag_8021q_vlan_del(struct dsa_switch *ds,
struct dsa_notifier_tag_8021q_vlan_info *info)
{
struct dsa_port *dp = dsa_to_port(ctx->ds, port);
int port, err;
if (enabled)
return ctx->ops->vlan_add(ctx->ds, dp->index, vid, flags);
if (!ds->ops->tag_8021q_vlan_del || !ds->tag_8021q_ctx)
return 0;
return ctx->ops->vlan_del(ctx->ds, dp->index, vid);
for (port = 0; port < ds->num_ports; port++) {
if (dsa_switch_tag_8021q_vlan_match(ds, port, info)) {
err = dsa_switch_do_tag_8021q_vlan_del(ds, port,
info->vid);
if (err)
return err;
}
}
return 0;
}
/* RX VLAN tagging (left) and TX VLAN tagging (right) setup shown for a single
@@ -181,12 +268,6 @@ static int dsa_8021q_vid_apply(struct dsa_8021q_context *ctx, int port, u16 vid,
* force all switched traffic to pass through the CPU. So we must also make
* the other front-panel ports members of this VID we're adding, albeit
* we're not making it their PVID (they'll still have their own).
* By the way - just because we're installing the same VID in multiple
* switch ports doesn't mean that they'll start to talk to one another, even
* while not bridged: the final forwarding decision is still an AND between
* the L2 forwarding information (which is limiting forwarding in this case)
* and the VLAN-based restrictions (of which there are none in this case,
* since all ports are members).
* - On TX (ingress from CPU and towards network) we are faced with a problem.
* If we were to tag traffic (from within DSA) with the port's pvid, all
* would be well, assuming the switch ports were standalone. Frames would
@@ -200,9 +281,10 @@ static int dsa_8021q_vid_apply(struct dsa_8021q_context *ctx, int port, u16 vid,
* a member of the VID we're tagging the traffic with - the desired one.
*
* So at the end, each front-panel port will have one RX VID (also the PVID),
* the RX VID of all other front-panel ports, and one TX VID. Whereas the CPU
* port will have the RX and TX VIDs of all front-panel ports, and on top of
* that, is also tagged-input and tagged-output (VLAN trunk).
* the RX VID of all other front-panel ports that are in the same bridge, and
* one TX VID. Whereas the CPU port will have the RX and TX VIDs of all
* front-panel ports, and on top of that, is also tagged-input and
* tagged-output (VLAN trunk).
*
* CPU port CPU port
* +-------------+-----+-------------+ +-------------+-----+-------------+
@@ -220,246 +302,225 @@ static int dsa_8021q_vid_apply(struct dsa_8021q_context *ctx, int port, u16 vid,
* +-+-----+-+-----+-+-----+-+-----+-+ +-+-----+-+-----+-+-----+-+-----+-+
* swp0 swp1 swp2 swp3 swp0 swp1 swp2 swp3
*/
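As a toy model of the membership rules described above (illustrative only, assuming a single bridge spanning all user ports): each user port is a member of every user port's RX VID plus its own TX VID, while the CPU port is a member of all RX and TX VIDs.

```c
#include <stdbool.h>

#define N_USER 4		/* user (front-panel) ports in this toy model */
#define CPU N_USER		/* index of the CPU port */
#define N_VIDS (2 * N_USER)	/* one RX and one TX VID per user port */

/* member[p][v]: port p is a member of VID index v. RX VIDs occupy
 * indexes 0..N_USER-1, TX VIDs occupy N_USER..2*N_USER-1. */
static bool member[N_USER + 1][N_VIDS];

/* Populate the membership implied by the comment above. */
static void setup_membership(void)
{
	for (int p = 0; p < N_USER; p++) {
		for (int other = 0; other < N_USER; other++)
			member[p][other] = true;	/* every RX VID */
		member[p][N_USER + p] = true;		/* own TX VID */
		member[CPU][p] = true;			/* CPU: all RX VIDs */
		member[CPU][N_USER + p] = true;		/* CPU: all TX VIDs */
	}
}

static int membership_count(int port)
{
	int n = 0;

	for (int v = 0; v < N_VIDS; v++)
		if (member[port][v])
			n++;
	return n;
}
```

With four user ports, each user port ends up in five VLANs (four RX VIDs plus its own TX VID) and the CPU port in all eight.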
static int dsa_8021q_setup_port(struct dsa_8021q_context *ctx, int port,
bool enabled)
static bool dsa_tag_8021q_bridge_match(struct dsa_switch *ds, int port,
struct dsa_notifier_bridge_info *info)
{
struct dsa_port *dp = dsa_to_port(ds, port);
/* Don't match on self */
if (ds->dst->index == info->tree_index &&
ds->index == info->sw_index &&
port == info->port)
return false;
if (dsa_port_is_user(dp))
return dp->bridge_dev == info->br;
return false;
}
int dsa_tag_8021q_bridge_join(struct dsa_switch *ds,
struct dsa_notifier_bridge_info *info)
{
struct dsa_switch *targeted_ds;
struct dsa_port *targeted_dp;
u16 targeted_rx_vid;
int err, port;
if (!ds->tag_8021q_ctx)
return 0;
targeted_ds = dsa_switch_find(info->tree_index, info->sw_index);
targeted_dp = dsa_to_port(targeted_ds, info->port);
targeted_rx_vid = dsa_8021q_rx_vid(targeted_ds, info->port);
for (port = 0; port < ds->num_ports; port++) {
struct dsa_port *dp = dsa_to_port(ds, port);
u16 rx_vid = dsa_8021q_rx_vid(ds, port);
if (!dsa_tag_8021q_bridge_match(ds, port, info))
continue;
/* Install the RX VID of the targeted port in our VLAN table */
err = dsa_port_tag_8021q_vlan_add(dp, targeted_rx_vid);
if (err)
return err;
/* Install our RX VID into the targeted port's VLAN table */
err = dsa_port_tag_8021q_vlan_add(targeted_dp, rx_vid);
if (err)
return err;
}
return 0;
}
int dsa_tag_8021q_bridge_leave(struct dsa_switch *ds,
struct dsa_notifier_bridge_info *info)
{
int upstream = dsa_upstream_port(ctx->ds, port);
u16 rx_vid = dsa_8021q_rx_vid(ctx->ds, port);
u16 tx_vid = dsa_8021q_tx_vid(ctx->ds, port);
struct dsa_switch *targeted_ds;
struct dsa_port *targeted_dp;
u16 targeted_rx_vid;
int port;
if (!ds->tag_8021q_ctx)
return 0;
targeted_ds = dsa_switch_find(info->tree_index, info->sw_index);
targeted_dp = dsa_to_port(targeted_ds, info->port);
targeted_rx_vid = dsa_8021q_rx_vid(targeted_ds, info->port);
for (port = 0; port < ds->num_ports; port++) {
struct dsa_port *dp = dsa_to_port(ds, port);
u16 rx_vid = dsa_8021q_rx_vid(ds, port);
if (!dsa_tag_8021q_bridge_match(ds, port, info))
continue;
/* Remove the RX VID of the targeted port from our VLAN table */
dsa_port_tag_8021q_vlan_del(dp, targeted_rx_vid);
/* Remove our RX VID from the targeted port's VLAN table */
dsa_port_tag_8021q_vlan_del(targeted_dp, rx_vid);
}
return 0;
}
/* Set up a port's tag_8021q RX and TX VLAN for standalone mode operation */
static int dsa_tag_8021q_port_setup(struct dsa_switch *ds, int port)
{
struct dsa_8021q_context *ctx = ds->tag_8021q_ctx;
struct dsa_port *dp = dsa_to_port(ds, port);
u16 rx_vid = dsa_8021q_rx_vid(ds, port);
u16 tx_vid = dsa_8021q_tx_vid(ds, port);
struct net_device *master;
int i, err, subvlan;
int err;
/* The CPU port is implicitly configured by
* configuring the front-panel ports
*/
if (!dsa_is_user_port(ctx->ds, port))
if (!dsa_port_is_user(dp))
return 0;
master = dsa_to_port(ctx->ds, port)->cpu_dp->master;
master = dp->cpu_dp->master;
/* Add this user port's RX VID to the membership list of all others
* (including itself). This is so that bridging will not be hindered.
* L2 forwarding rules still take precedence when there are no VLAN
* restrictions, so there are no concerns about leaking traffic.
*/
for (i = 0; i < ctx->ds->num_ports; i++) {
u16 flags;
if (i == upstream)
continue;
else if (i == port)
/* The RX VID is pvid on this port */
flags = BRIDGE_VLAN_INFO_UNTAGGED |
BRIDGE_VLAN_INFO_PVID;
else
/* The RX VID is a regular VLAN on all others */
flags = BRIDGE_VLAN_INFO_UNTAGGED;
err = dsa_8021q_vid_apply(ctx, i, rx_vid, flags, enabled);
err = dsa_port_tag_8021q_vlan_add(dp, rx_vid);
if (err) {
dev_err(ctx->ds->dev,
"Failed to apply RX VID %d to port %d: %d\n",
rx_vid, port, err);
dev_err(ds->dev,
"Failed to apply RX VID %d to port %d: %pe\n",
rx_vid, port, ERR_PTR(err));
return err;
}
}
/* CPU port needs to see this port's RX VID
* as tagged egress.
*/
err = dsa_8021q_vid_apply(ctx, upstream, rx_vid, 0, enabled);
if (err) {
dev_err(ctx->ds->dev,
"Failed to apply RX VID %d to port %d: %d\n",
rx_vid, port, err);
return err;
}
/* Add to the master's RX filter not only @rx_vid, but in fact
* the entire subvlan range, just in case this DSA switch might
* want to use sub-VLANs.
*/
for (subvlan = 0; subvlan < DSA_8021Q_N_SUBVLAN; subvlan++) {
u16 vid = dsa_8021q_rx_vid_subvlan(ctx->ds, port, subvlan);
if (enabled)
vlan_vid_add(master, ctx->proto, vid);
else
vlan_vid_del(master, ctx->proto, vid);
}
/* Add @rx_vid to the master's RX filter. */
vlan_vid_add(master, ctx->proto, rx_vid);
/* Finally apply the TX VID on this port and on the CPU port */
err = dsa_8021q_vid_apply(ctx, port, tx_vid, BRIDGE_VLAN_INFO_UNTAGGED,
enabled);
err = dsa_port_tag_8021q_vlan_add(dp, tx_vid);
if (err) {
dev_err(ctx->ds->dev,
"Failed to apply TX VID %d on port %d: %d\n",
tx_vid, port, err);
return err;
}
err = dsa_8021q_vid_apply(ctx, upstream, tx_vid, 0, enabled);
if (err) {
dev_err(ctx->ds->dev,
"Failed to apply TX VID %d on port %d: %d\n",
tx_vid, upstream, err);
dev_err(ds->dev,
"Failed to apply TX VID %d on port %d: %pe\n",
tx_vid, port, ERR_PTR(err));
return err;
}
return err;
}
-int dsa_8021q_setup(struct dsa_8021q_context *ctx, bool enabled)
-{
-	int rc, port;
-
-	ASSERT_RTNL();
-
-	for (port = 0; port < ctx->ds->num_ports; port++) {
-		rc = dsa_8021q_setup_port(ctx, port, enabled);
-		if (rc < 0) {
-			dev_err(ctx->ds->dev,
-				"Failed to setup VLAN tagging for port %d: %d\n",
-				port, rc);
-			return rc;
-		}
-	}
-
-	return 0;
-}
-EXPORT_SYMBOL_GPL(dsa_8021q_setup);
+static void dsa_tag_8021q_port_teardown(struct dsa_switch *ds, int port)
+{
+	struct dsa_8021q_context *ctx = ds->tag_8021q_ctx;
+	struct dsa_port *dp = dsa_to_port(ds, port);
+	u16 rx_vid = dsa_8021q_rx_vid(ds, port);
+	u16 tx_vid = dsa_8021q_tx_vid(ds, port);
+	struct net_device *master;
+
+	/* The CPU port is implicitly configured by
+	 * configuring the front-panel ports
+	 */
+	if (!dsa_port_is_user(dp))
+		return;
+
+	master = dp->cpu_dp->master;
+
+	dsa_port_tag_8021q_vlan_del(dp, rx_vid);
+
+	vlan_vid_del(master, ctx->proto, rx_vid);
+
+	dsa_port_tag_8021q_vlan_del(dp, tx_vid);
+}
 
-static int dsa_8021q_crosschip_link_apply(struct dsa_8021q_context *ctx,
-					  int port,
-					  struct dsa_8021q_context *other_ctx,
-					  int other_port, bool enabled)
-{
-	u16 rx_vid = dsa_8021q_rx_vid(ctx->ds, port);
-
-	/* @rx_vid of local @ds port @port goes to @other_port of
-	 * @other_ds
-	 */
-	return dsa_8021q_vid_apply(other_ctx, other_port, rx_vid,
-				   BRIDGE_VLAN_INFO_UNTAGGED, enabled);
-}
-static int dsa_8021q_crosschip_link_add(struct dsa_8021q_context *ctx, int port,
-					struct dsa_8021q_context *other_ctx,
-					int other_port)
-{
-	struct dsa_8021q_crosschip_link *c;
-
-	list_for_each_entry(c, &ctx->crosschip_links, list) {
-		if (c->port == port && c->other_ctx == other_ctx &&
-		    c->other_port == other_port) {
-			refcount_inc(&c->refcount);
-			return 0;
-		}
-	}
-
-	dev_dbg(ctx->ds->dev,
-		"adding crosschip link from port %d to %s port %d\n",
-		port, dev_name(other_ctx->ds->dev), other_port);
-
-	c = kzalloc(sizeof(*c), GFP_KERNEL);
-	if (!c)
-		return -ENOMEM;
-
-	c->port = port;
-	c->other_ctx = other_ctx;
-	c->other_port = other_port;
-	refcount_set(&c->refcount, 1);
-
-	list_add(&c->list, &ctx->crosschip_links);
-
-	return 0;
-}
+static int dsa_tag_8021q_setup(struct dsa_switch *ds)
+{
+	int err, port;
+
+	ASSERT_RTNL();
+
+	for (port = 0; port < ds->num_ports; port++) {
+		err = dsa_tag_8021q_port_setup(ds, port);
+		if (err < 0) {
+			dev_err(ds->dev,
+				"Failed to setup VLAN tagging for port %d: %pe\n",
+				port, ERR_PTR(err));
+			return err;
+		}
+	}
+
+	return 0;
+}
-static void dsa_8021q_crosschip_link_del(struct dsa_8021q_context *ctx,
-					 struct dsa_8021q_crosschip_link *c,
-					 bool *keep)
-{
-	*keep = !refcount_dec_and_test(&c->refcount);
-
-	if (*keep)
-		return;
-
-	dev_dbg(ctx->ds->dev,
-		"deleting crosschip link from port %d to %s port %d\n",
-		c->port, dev_name(c->other_ctx->ds->dev), c->other_port);
-
-	list_del(&c->list);
-
-	kfree(c);
-}
+static void dsa_tag_8021q_teardown(struct dsa_switch *ds)
+{
+	int port;
+
+	ASSERT_RTNL();
+
+	for (port = 0; port < ds->num_ports; port++)
+		dsa_tag_8021q_port_teardown(ds, port);
+}
-/* Make traffic from local port @port be received by remote port @other_port.
- * This means that our @rx_vid needs to be installed on @other_ds's upstream
- * and user ports. The user ports should be egress-untagged so that they can
- * pop the dsa_8021q VLAN. But the @other_upstream can be either egress-tagged
- * or untagged: it doesn't matter, since it should never egress a frame having
- * our @rx_vid.
- */
-int dsa_8021q_crosschip_bridge_join(struct dsa_8021q_context *ctx, int port,
-				    struct dsa_8021q_context *other_ctx,
-				    int other_port)
-{
-	/* @other_upstream is how @other_ds reaches us. If we are part
-	 * of disjoint trees, then we are probably connected through
-	 * our CPU ports. If we're part of the same tree though, we should
-	 * probably use dsa_towards_port.
-	 */
-	int other_upstream = dsa_upstream_port(other_ctx->ds, other_port);
-	int rc;
-
-	rc = dsa_8021q_crosschip_link_add(ctx, port, other_ctx, other_port);
-	if (rc)
-		return rc;
-
-	rc = dsa_8021q_crosschip_link_apply(ctx, port, other_ctx,
-					    other_port, true);
-	if (rc)
-		return rc;
-
-	rc = dsa_8021q_crosschip_link_add(ctx, port, other_ctx, other_upstream);
-	if (rc)
-		return rc;
-
-	return dsa_8021q_crosschip_link_apply(ctx, port, other_ctx,
-					      other_upstream, true);
-}
-EXPORT_SYMBOL_GPL(dsa_8021q_crosschip_bridge_join);
+int dsa_tag_8021q_register(struct dsa_switch *ds, __be16 proto)
+{
+	struct dsa_8021q_context *ctx;
+
+	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+	if (!ctx)
+		return -ENOMEM;
+
+	ctx->proto = proto;
+	ctx->ds = ds;
+
+	INIT_LIST_HEAD(&ctx->vlans);
+
+	ds->tag_8021q_ctx = ctx;
+
+	return dsa_tag_8021q_setup(ds);
+}
+EXPORT_SYMBOL_GPL(dsa_tag_8021q_register);
-int dsa_8021q_crosschip_bridge_leave(struct dsa_8021q_context *ctx, int port,
-				     struct dsa_8021q_context *other_ctx,
-				     int other_port)
-{
-	int other_upstream = dsa_upstream_port(other_ctx->ds, other_port);
-	struct dsa_8021q_crosschip_link *c, *n;
-
-	list_for_each_entry_safe(c, n, &ctx->crosschip_links, list) {
-		if (c->port == port && c->other_ctx == other_ctx &&
-		    (c->other_port == other_port ||
-		     c->other_port == other_upstream)) {
-			struct dsa_8021q_context *other_ctx = c->other_ctx;
-			int other_port = c->other_port;
-			bool keep;
-			int rc;
-
-			dsa_8021q_crosschip_link_del(ctx, c, &keep);
-			if (keep)
-				continue;
-
-			rc = dsa_8021q_crosschip_link_apply(ctx, port,
-							    other_ctx,
-							    other_port,
-							    false);
-			if (rc)
-				return rc;
-		}
-	}
-
-	return 0;
-}
-EXPORT_SYMBOL_GPL(dsa_8021q_crosschip_bridge_leave);
+void dsa_tag_8021q_unregister(struct dsa_switch *ds)
+{
+	struct dsa_8021q_context *ctx = ds->tag_8021q_ctx;
+	struct dsa_tag_8021q_vlan *v, *n;
+
+	dsa_tag_8021q_teardown(ds);
+
+	list_for_each_entry_safe(v, n, &ctx->vlans, list) {
+		list_del(&v->list);
+		kfree(v);
+	}
+
+	ds->tag_8021q_ctx = NULL;
+
+	kfree(ctx);
+}
+EXPORT_SYMBOL_GPL(dsa_tag_8021q_unregister);
 struct sk_buff *dsa_8021q_xmit(struct sk_buff *skb, struct net_device *netdev,
			       u16 tpid, u16 tci)
@@ -471,8 +532,7 @@ struct sk_buff *dsa_8021q_xmit(struct sk_buff *skb, struct net_device *netdev,
 }
 EXPORT_SYMBOL_GPL(dsa_8021q_xmit);
 
-void dsa_8021q_rcv(struct sk_buff *skb, int *source_port, int *switch_id,
-		   int *subvlan)
+void dsa_8021q_rcv(struct sk_buff *skb, int *source_port, int *switch_id)
 {
 	u16 vid, tci;
 
@@ -489,9 +549,6 @@ void dsa_8021q_rcv(struct sk_buff *skb, int *source_port, int *switch_id,
 	*source_port = dsa_8021q_rx_source_port(vid);
 	*switch_id = dsa_8021q_rx_switch_id(vid);
-	*subvlan = dsa_8021q_rx_subvlan(vid);
 
 	skb->priority = (tci & VLAN_PRIO_MASK) >> VLAN_PRIO_SHIFT;
 }
 EXPORT_SYMBOL_GPL(dsa_8021q_rcv);
MODULE_LICENSE("GPL v2");
diff --git a/net/dsa/tag_ocelot_8021q.c b/net/dsa/tag_ocelot_8021q.c
@@ -41,9 +41,9 @@ static struct sk_buff *ocelot_rcv(struct sk_buff *skb,
 				  struct net_device *netdev,
 				  struct packet_type *pt)
 {
-	int src_port, switch_id, subvlan;
+	int src_port, switch_id;
 
-	dsa_8021q_rcv(skb, &src_port, &switch_id, &subvlan);
+	dsa_8021q_rcv(skb, &src_port, &switch_id);
 
 	skb->dev = dsa_master_find_slave(netdev, switch_id, src_port);
 	if (!skb->dev)
diff --git a/net/dsa/tag_sja1105.c b/net/dsa/tag_sja1105.c
@@ -358,20 +358,6 @@ static struct sk_buff
 	return skb;
 }
 
-static void sja1105_decode_subvlan(struct sk_buff *skb, u16 subvlan)
-{
-	struct dsa_port *dp = dsa_slave_to_port(skb->dev);
-	struct sja1105_port *sp = dp->priv;
-	u16 vid = sp->subvlan_map[subvlan];
-	u16 vlan_tci;
-
-	if (vid == VLAN_N_VID)
-		return;
-
-	vlan_tci = (skb->priority << VLAN_PRIO_SHIFT) | vid;
-	__vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), vlan_tci);
-}
-
 static bool sja1105_skb_has_tag_8021q(const struct sk_buff *skb)
 {
 	u16 tpid = ntohs(eth_hdr(skb)->h_proto);
@@ -389,8 +375,8 @@ static struct sk_buff *sja1105_rcv(struct sk_buff *skb,
 				   struct net_device *netdev,
 				   struct packet_type *pt)
 {
-	int source_port, switch_id, subvlan = 0;
 	struct sja1105_meta meta = {0};
+	int source_port, switch_id;
 	struct ethhdr *hdr;
 	bool is_link_local;
 	bool is_meta;
@@ -403,7 +389,7 @@ static struct sk_buff *sja1105_rcv(struct sk_buff *skb,
 
 	if (sja1105_skb_has_tag_8021q(skb)) {
 		/* Normal traffic path. */
-		dsa_8021q_rcv(skb, &source_port, &switch_id, &subvlan);
+		dsa_8021q_rcv(skb, &source_port, &switch_id);
 	} else if (is_link_local) {
 		/* Management traffic path. Switch embeds the switch ID and
 		 * port ID into bytes of the destination MAC, courtesy of
@@ -428,9 +414,6 @@ static struct sk_buff *sja1105_rcv(struct sk_buff *skb,
 		return NULL;
 	}
 
-	if (subvlan)
-		sja1105_decode_subvlan(skb, subvlan);
-
 	return sja1105_rcv_meta_state_machine(skb, &meta, is_link_local,
 					      is_meta);
 }
@@ -538,7 +521,7 @@ static struct sk_buff *sja1110_rcv(struct sk_buff *skb,
 				   struct net_device *netdev,
 				   struct packet_type *pt)
 {
-	int source_port = -1, switch_id = -1, subvlan = 0;
+	int source_port = -1, switch_id = -1;
 
 	skb->offload_fwd_mark = 1;
 
@@ -551,7 +534,7 @@ static struct sk_buff *sja1110_rcv(struct sk_buff *skb,
 
 	/* Packets with in-band control extensions might still have RX VLANs */
 	if (likely(sja1105_skb_has_tag_8021q(skb)))
-		dsa_8021q_rcv(skb, &source_port, &switch_id, &subvlan);
+		dsa_8021q_rcv(skb, &source_port, &switch_id);
 
 	skb->dev = dsa_master_find_slave(netdev, switch_id, source_port);
 	if (!skb->dev) {
@@ -561,9 +544,6 @@ static struct sk_buff *sja1110_rcv(struct sk_buff *skb,
 		return NULL;
 	}
 
-	if (subvlan)
-		sja1105_decode_subvlan(skb, subvlan);
-
 	return skb;
 }