Commit 56435d91 authored by Jakub Kicinski

Merge branch 'tag_8021q-for-ocelot-switches'

Vladimir Oltean says:

====================
tag_8021q for Ocelot switches

The Felix switch inside LS1028A has an issue. It has a 2.5G CPU port,
and the external ports, in the majority of use cases, run at 1G. This
means that, when the CPU injects traffic into the switch, it is very
easy to run into congestion. This is not to say that it is impossible to
enter congestion even with all ports running at the same speed, just
that the default configuration is already very prone to that by design.

Normally, the way to deal with that is using Ethernet flow control
(PAUSE frames).

However, this functionality is not working today with the ENETC - Felix
switch pair. The hardware issue is undergoing documentation right now as
an erratum within NXP, but several customers have been requesting a
reasonable workaround for it.

In truth, the LS1028A has 2 internal port pairs. The lack of flow control
is an issue only when NPI mode (Node Processor Interface, aka the mode
where the "CPU port module", which carries DSA-style tagged packets, is
connected to a regular Ethernet port) is used, and NPI mode is supported
by Felix on a single port.

In past BSPs, we have had setups where both internal port pairs were
enabled. We were advertising the following setup:

"data port"     "control port"
  (2.5G)            (1G)

   eno2             eno3
    ^                ^
    |                |
    | regular        | DSA-tagged
    | frames         | frames
    |                |
    v                v
   swp4             swp5

This works but is highly impractical, because NXP shifts the task of
designing a functional system (choosing which port to use, depending on
the type of traffic required) onto the end user. The swpN interfaces
would have to be bridged with swp4 in order for the eno2 "data port" to
have access to the outside network. And the swpN interfaces would still
be capable of IP networking, so running a DHCP client would give us two
IP interfaces in the same subnet, one assigned to eno2 and the other to
swpN (N in 0, 1, 2, 3).

Also, the dual port design doesn't scale. When attaching another DSA
switch to a Felix port, the end result is that the "data port" cannot
carry any meaningful data to the external world, since it lacks the DSA
tags required to traverse the sja1105 switches below. All that traffic
needs to go through the "control port".

So in newer BSPs there was a desire to simplify that setup, and only
have one internal port pair:

   eno2            eno3
    ^
    |
    | DSA-tagged    x disabled
    | frames
    |
    v
   swp4            swp5

However, this setup only exacerbates the issue of not having flow
control on the NPI port, since that is the only port now. Also, there
are use cases that still require the "data port", such as IEEE 802.1CB
(TSN stream identification doesn't work over an NPI port), source
MAC address learning over NPI, etc.

Again, there is a desire to keep the simplicity of the single internal
port setup, while regaining the benefits of having a dedicated data port
as well. And this series attempts to deliver just that.

So the NPI functionality is disabled conditionally. Its purpose was:
- To ensure individually addressable ports on TX. This can be replaced
  by using some designated VLAN tags which are pushed by the DSA tagger
  code, then removed by the switch (so they are invisible to the outside
  world and to the user).
- To ensure source port identification on RX. Again, this can be
  replaced by using some designated VLAN tags to encapsulate all RX
  traffic (each VLAN uniquely identifies a source port). The DSA tagger
  determines which port the frame came from based on the VLAN number,
  then removes that header.
- To deliver PTP timestamps. This cannot be obtained through VLAN
  headers, so we need to take a step back and see how else we can do
  that. The Microchip Ocelot-1 (VSC7514 MIPS) driver performs manual
  injection/extraction from the CPU port module using register-based
  MMIO, and not over Ethernet. We will need to do the same from DSA,
  which makes this tagger a sort of hybrid between DSA and pure
  switchdev.
====================

Link: https://lore.kernel.org/r/20210129010009.3959398-1-olteanv@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
parents de1da8bc e21268ef
@@ -3,5 +3,12 @@ Date: August 2018
KernelVersion: 4.20
Contact: netdev@vger.kernel.org
Description:
String indicating the type of tagging protocol used by the
DSA slave network device.
On read, this file returns a string indicating the type of
tagging protocol used by the DSA network devices that are
attached to this master interface.
On write, this file changes the tagging protocol of the
attached DSA switches, if this operation is supported by the
driver. Changing the tagging protocol must be done with the DSA
interfaces and the master interface all administratively down.
See the "name" field of each registered struct dsa_device_ops
for a list of valid values.
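
Assuming the DSA master is named eno2 and the user ports swp0-swp3
(interface names here are illustrative), and that the new tagger
registers under the name "ocelot-8021q", exercising the writable sysfs
attribute described above might look like:

```shell
#!/bin/sh
# The tagging protocol can only be changed while the DSA interfaces
# and the master are all administratively down.
ip link set eno2 down
for p in swp0 swp1 swp2 swp3; do
	ip link set "$p" down
done

cat /sys/class/net/eno2/dsa/tagging    # current protocol, e.g. "ocelot"
echo ocelot-8021q > /sys/class/net/eno2/dsa/tagging

ip link set eno2 up
```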
@@ -12859,6 +12859,7 @@ F: drivers/net/dsa/ocelot/*
F: drivers/net/ethernet/mscc/
F: include/soc/mscc/ocelot*
F: net/dsa/tag_ocelot.c
F: net/dsa/tag_ocelot_8021q.c
F: tools/testing/selftests/drivers/net/ocelot/*
OCXL (Open Coherent Accelerator Processor Interface OpenCAPI) DRIVER
@@ -6,6 +6,7 @@ config NET_DSA_MSCC_FELIX
depends on NET_VENDOR_FREESCALE
depends on HAS_IOMEM
select MSCC_OCELOT_SWITCH_LIB
select NET_DSA_TAG_OCELOT_8021Q
select NET_DSA_TAG_OCELOT
select FSL_ENETC_MDIO
select PCS_LYNX
@@ -19,6 +20,7 @@ config NET_DSA_MSCC_SEVILLE
depends on NET_VENDOR_MICROSEMI
depends on HAS_IOMEM
select MSCC_OCELOT_SWITCH_LIB
select NET_DSA_TAG_OCELOT_8021Q
select NET_DSA_TAG_OCELOT
select PCS_LYNX
help
// SPDX-License-Identifier: GPL-2.0
/* Copyright 2019 NXP Semiconductors
/* Copyright 2019-2021 NXP Semiconductors
*
* This is an umbrella module for all network switches that are
* register-compatible with Ocelot and that perform I/O to their host CPU
@@ -13,6 +13,7 @@
#include <soc/mscc/ocelot_ana.h>
#include <soc/mscc/ocelot_ptp.h>
#include <soc/mscc/ocelot.h>
#include <linux/dsa/8021q.h>
#include <linux/platform_device.h>
#include <linux/packing.h>
#include <linux/module.h>
@@ -24,11 +25,474 @@
#include <net/dsa.h>
#include "felix.h"
static int felix_tag_8021q_rxvlan_add(struct felix *felix, int port, u16 vid,
bool pvid, bool untagged)
{
struct ocelot_vcap_filter *outer_tagging_rule;
struct ocelot *ocelot = &felix->ocelot;
struct dsa_switch *ds = felix->ds;
int key_length, upstream, err;
/* We don't need to install the rxvlan into the other ports' filtering
* tables, because we're just pushing the rxvlan when sending towards
* the CPU
*/
if (!pvid)
return 0;
key_length = ocelot->vcap[VCAP_ES0].keys[VCAP_ES0_IGR_PORT].length;
upstream = dsa_upstream_port(ds, port);
outer_tagging_rule = kzalloc(sizeof(struct ocelot_vcap_filter),
GFP_KERNEL);
if (!outer_tagging_rule)
return -ENOMEM;
outer_tagging_rule->key_type = OCELOT_VCAP_KEY_ANY;
outer_tagging_rule->prio = 1;
outer_tagging_rule->id.cookie = port;
outer_tagging_rule->id.tc_offload = false;
outer_tagging_rule->block_id = VCAP_ES0;
outer_tagging_rule->type = OCELOT_VCAP_FILTER_OFFLOAD;
outer_tagging_rule->lookup = 0;
outer_tagging_rule->ingress_port.value = port;
outer_tagging_rule->ingress_port.mask = GENMASK(key_length - 1, 0);
outer_tagging_rule->egress_port.value = upstream;
outer_tagging_rule->egress_port.mask = GENMASK(key_length - 1, 0);
outer_tagging_rule->action.push_outer_tag = OCELOT_ES0_TAG;
outer_tagging_rule->action.tag_a_tpid_sel = OCELOT_TAG_TPID_SEL_8021AD;
outer_tagging_rule->action.tag_a_vid_sel = 1;
outer_tagging_rule->action.vid_a_val = vid;
err = ocelot_vcap_filter_add(ocelot, outer_tagging_rule, NULL);
if (err)
kfree(outer_tagging_rule);
return err;
}
static int felix_tag_8021q_txvlan_add(struct felix *felix, int port, u16 vid,
bool pvid, bool untagged)
{
struct ocelot_vcap_filter *untagging_rule, *redirect_rule;
struct ocelot *ocelot = &felix->ocelot;
struct dsa_switch *ds = felix->ds;
int upstream, err;
/* tag_8021q.c assumes we are implementing this via port VLAN
* membership, which we aren't. So we don't need to add any VCAP filter
* for the CPU port.
*/
if (ocelot->ports[port]->is_dsa_8021q_cpu)
return 0;
untagging_rule = kzalloc(sizeof(struct ocelot_vcap_filter), GFP_KERNEL);
if (!untagging_rule)
return -ENOMEM;
redirect_rule = kzalloc(sizeof(struct ocelot_vcap_filter), GFP_KERNEL);
if (!redirect_rule) {
kfree(untagging_rule);
return -ENOMEM;
}
upstream = dsa_upstream_port(ds, port);
untagging_rule->key_type = OCELOT_VCAP_KEY_ANY;
untagging_rule->ingress_port_mask = BIT(upstream);
untagging_rule->vlan.vid.value = vid;
untagging_rule->vlan.vid.mask = VLAN_VID_MASK;
untagging_rule->prio = 1;
untagging_rule->id.cookie = port;
untagging_rule->id.tc_offload = false;
untagging_rule->block_id = VCAP_IS1;
untagging_rule->type = OCELOT_VCAP_FILTER_OFFLOAD;
untagging_rule->lookup = 0;
untagging_rule->action.vlan_pop_cnt_ena = true;
untagging_rule->action.vlan_pop_cnt = 1;
untagging_rule->action.pag_override_mask = 0xff;
untagging_rule->action.pag_val = port;
err = ocelot_vcap_filter_add(ocelot, untagging_rule, NULL);
if (err) {
kfree(untagging_rule);
kfree(redirect_rule);
return err;
}
redirect_rule->key_type = OCELOT_VCAP_KEY_ANY;
redirect_rule->ingress_port_mask = BIT(upstream);
redirect_rule->pag = port;
redirect_rule->prio = 1;
redirect_rule->id.cookie = port;
redirect_rule->id.tc_offload = false;
redirect_rule->block_id = VCAP_IS2;
redirect_rule->type = OCELOT_VCAP_FILTER_OFFLOAD;
redirect_rule->lookup = 0;
redirect_rule->action.mask_mode = OCELOT_MASK_MODE_REDIRECT;
redirect_rule->action.port_mask = BIT(port);
err = ocelot_vcap_filter_add(ocelot, redirect_rule, NULL);
if (err) {
ocelot_vcap_filter_del(ocelot, untagging_rule);
kfree(redirect_rule);
return err;
}
return 0;
}
static int felix_tag_8021q_vlan_add(struct dsa_switch *ds, int port, u16 vid,
u16 flags)
{
bool untagged = flags & BRIDGE_VLAN_INFO_UNTAGGED;
bool pvid = flags & BRIDGE_VLAN_INFO_PVID;
struct ocelot *ocelot = ds->priv;
if (vid_is_dsa_8021q_rxvlan(vid))
return felix_tag_8021q_rxvlan_add(ocelot_to_felix(ocelot),
port, vid, pvid, untagged);
if (vid_is_dsa_8021q_txvlan(vid))
return felix_tag_8021q_txvlan_add(ocelot_to_felix(ocelot),
port, vid, pvid, untagged);
return 0;
}
static int felix_tag_8021q_rxvlan_del(struct felix *felix, int port, u16 vid)
{
struct ocelot_vcap_filter *outer_tagging_rule;
struct ocelot_vcap_block *block_vcap_es0;
struct ocelot *ocelot = &felix->ocelot;
block_vcap_es0 = &ocelot->block[VCAP_ES0];
outer_tagging_rule = ocelot_vcap_block_find_filter_by_id(block_vcap_es0,
port, false);
/* In rxvlan_add, we had the "if (!pvid) return 0" logic to avoid
* installing outer tagging ES0 rules where they weren't needed.
* But in rxvlan_del, the API doesn't give us the "flags" anymore,
* so that forces us to be slightly sloppy here, and just assume that
* if we didn't find an outer_tagging_rule it means that there was
* none in the first place, i.e. rxvlan_del is called on a non-pvid
* port. This is most probably true though.
*/
if (!outer_tagging_rule)
return 0;
return ocelot_vcap_filter_del(ocelot, outer_tagging_rule);
}
static int felix_tag_8021q_txvlan_del(struct felix *felix, int port, u16 vid)
{
struct ocelot_vcap_filter *untagging_rule, *redirect_rule;
struct ocelot_vcap_block *block_vcap_is1;
struct ocelot_vcap_block *block_vcap_is2;
struct ocelot *ocelot = &felix->ocelot;
int err;
if (ocelot->ports[port]->is_dsa_8021q_cpu)
return 0;
block_vcap_is1 = &ocelot->block[VCAP_IS1];
block_vcap_is2 = &ocelot->block[VCAP_IS2];
untagging_rule = ocelot_vcap_block_find_filter_by_id(block_vcap_is1,
port, false);
if (!untagging_rule)
return 0;
err = ocelot_vcap_filter_del(ocelot, untagging_rule);
if (err)
return err;
redirect_rule = ocelot_vcap_block_find_filter_by_id(block_vcap_is2,
port, false);
if (!redirect_rule)
return 0;
return ocelot_vcap_filter_del(ocelot, redirect_rule);
}
static int felix_tag_8021q_vlan_del(struct dsa_switch *ds, int port, u16 vid)
{
struct ocelot *ocelot = ds->priv;
if (vid_is_dsa_8021q_rxvlan(vid))
return felix_tag_8021q_rxvlan_del(ocelot_to_felix(ocelot),
port, vid);
if (vid_is_dsa_8021q_txvlan(vid))
return felix_tag_8021q_txvlan_del(ocelot_to_felix(ocelot),
port, vid);
return 0;
}
static const struct dsa_8021q_ops felix_tag_8021q_ops = {
.vlan_add = felix_tag_8021q_vlan_add,
.vlan_del = felix_tag_8021q_vlan_del,
};
/* Alternatively to using the NPI functionality, that same hardware MAC
* connected internally to the enetc or fman DSA master can be configured to
* use the software-defined tag_8021q frame format. As far as the hardware is
* concerned, it thinks it is a "dumb switch" - the queues of the CPU port
* module are now disconnected from it, but can still be accessed through
* register-based MMIO.
*/
static void felix_8021q_cpu_port_init(struct ocelot *ocelot, int port)
{
ocelot->ports[port]->is_dsa_8021q_cpu = true;
ocelot->npi = -1;
/* Overwrite PGID_CPU with the non-tagging port */
ocelot_write_rix(ocelot, BIT(port), ANA_PGID_PGID, PGID_CPU);
ocelot_apply_bridge_fwd_mask(ocelot);
}
static void felix_8021q_cpu_port_deinit(struct ocelot *ocelot, int port)
{
ocelot->ports[port]->is_dsa_8021q_cpu = false;
/* Restore PGID_CPU */
ocelot_write_rix(ocelot, BIT(ocelot->num_phys_ports), ANA_PGID_PGID,
PGID_CPU);
ocelot_apply_bridge_fwd_mask(ocelot);
}
static int felix_setup_tag_8021q(struct dsa_switch *ds, int cpu)
{
struct ocelot *ocelot = ds->priv;
struct felix *felix = ocelot_to_felix(ocelot);
unsigned long cpu_flood;
int port, err;
felix_8021q_cpu_port_init(ocelot, cpu);
for (port = 0; port < ds->num_ports; port++) {
if (dsa_is_unused_port(ds, port))
continue;
/* This overwrites ocelot_init():
* Do not forward BPDU frames to the CPU port module,
* for 2 reasons:
* - When these packets are injected from the tag_8021q
* CPU port, we want them to go out, not loop back
* into the system.
* - STP traffic ingressing on a user port should go to
* the tag_8021q CPU port, not to the hardware CPU
* port module.
*/
ocelot_write_gix(ocelot,
ANA_PORT_CPU_FWD_BPDU_CFG_BPDU_REDIR_ENA(0),
ANA_PORT_CPU_FWD_BPDU_CFG, port);
}
/* In tag_8021q mode, the CPU port module is unused. So we
* want to disable flooding of any kind to the CPU port module,
* since packets going there will end in a black hole.
*/
cpu_flood = ANA_PGID_PGID_PGID(BIT(ocelot->num_phys_ports));
ocelot_rmw_rix(ocelot, 0, cpu_flood, ANA_PGID_PGID, PGID_UC);
ocelot_rmw_rix(ocelot, 0, cpu_flood, ANA_PGID_PGID, PGID_MC);
felix->dsa_8021q_ctx = kzalloc(sizeof(*felix->dsa_8021q_ctx),
GFP_KERNEL);
if (!felix->dsa_8021q_ctx)
return -ENOMEM;
felix->dsa_8021q_ctx->ops = &felix_tag_8021q_ops;
felix->dsa_8021q_ctx->proto = htons(ETH_P_8021AD);
felix->dsa_8021q_ctx->ds = ds;
err = dsa_8021q_setup(felix->dsa_8021q_ctx, true);
if (err)
goto out_free_dsa_8021_ctx;
return 0;
out_free_dsa_8021_ctx:
kfree(felix->dsa_8021q_ctx);
return err;
}
static void felix_teardown_tag_8021q(struct dsa_switch *ds, int cpu)
{
struct ocelot *ocelot = ds->priv;
struct felix *felix = ocelot_to_felix(ocelot);
int err, port;
err = dsa_8021q_setup(felix->dsa_8021q_ctx, false);
if (err)
dev_err(ds->dev, "dsa_8021q_setup returned %d", err);
kfree(felix->dsa_8021q_ctx);
for (port = 0; port < ds->num_ports; port++) {
if (dsa_is_unused_port(ds, port))
continue;
/* Restore the logic from ocelot_init:
* do not forward BPDU frames to the front ports.
*/
ocelot_write_gix(ocelot,
ANA_PORT_CPU_FWD_BPDU_CFG_BPDU_REDIR_ENA(0xffff),
ANA_PORT_CPU_FWD_BPDU_CFG,
port);
}
felix_8021q_cpu_port_deinit(ocelot, cpu);
}
/* The CPU port module is connected to the Node Processor Interface (NPI). This
* is the mode through which frames can be injected from and extracted to an
* external CPU, over Ethernet. In NXP SoCs, the "external CPU" is the ARM CPU
* running Linux, and this forms a DSA setup together with the enetc or fman
* DSA master.
*/
static void felix_npi_port_init(struct ocelot *ocelot, int port)
{
ocelot->npi = port;
ocelot_write(ocelot, QSYS_EXT_CPU_CFG_EXT_CPUQ_MSK_M |
QSYS_EXT_CPU_CFG_EXT_CPU_PORT(port),
QSYS_EXT_CPU_CFG);
/* NPI port Injection/Extraction configuration */
ocelot_fields_write(ocelot, port, SYS_PORT_MODE_INCL_XTR_HDR,
ocelot->npi_xtr_prefix);
ocelot_fields_write(ocelot, port, SYS_PORT_MODE_INCL_INJ_HDR,
ocelot->npi_inj_prefix);
/* Disable transmission of pause frames */
ocelot_fields_write(ocelot, port, SYS_PAUSE_CFG_PAUSE_ENA, 0);
}
static void felix_npi_port_deinit(struct ocelot *ocelot, int port)
{
/* Restore hardware defaults */
int unused_port = ocelot->num_phys_ports + 2;
ocelot->npi = -1;
ocelot_write(ocelot, QSYS_EXT_CPU_CFG_EXT_CPU_PORT(unused_port),
QSYS_EXT_CPU_CFG);
ocelot_fields_write(ocelot, port, SYS_PORT_MODE_INCL_XTR_HDR,
OCELOT_TAG_PREFIX_DISABLED);
ocelot_fields_write(ocelot, port, SYS_PORT_MODE_INCL_INJ_HDR,
OCELOT_TAG_PREFIX_DISABLED);
/* Enable transmission of pause frames */
ocelot_fields_write(ocelot, port, SYS_PAUSE_CFG_PAUSE_ENA, 1);
}
static int felix_setup_tag_npi(struct dsa_switch *ds, int cpu)
{
struct ocelot *ocelot = ds->priv;
unsigned long cpu_flood;
felix_npi_port_init(ocelot, cpu);
/* Include the CPU port module (and indirectly, the NPI port)
* in the forwarding mask for unknown unicast - the hardware
* default value for ANA_FLOODING_FLD_UNICAST excludes
* BIT(ocelot->num_phys_ports), and so does ocelot_init,
* since Ocelot relies on whitelisting MAC addresses towards
* PGID_CPU.
* We do this because DSA does not yet perform RX filtering,
* and the NPI port does not perform source address learning,
* so traffic sent to Linux is effectively unknown from the
* switch's perspective.
*/
cpu_flood = ANA_PGID_PGID_PGID(BIT(ocelot->num_phys_ports));
ocelot_rmw_rix(ocelot, cpu_flood, cpu_flood, ANA_PGID_PGID, PGID_UC);
return 0;
}
static void felix_teardown_tag_npi(struct dsa_switch *ds, int cpu)
{
struct ocelot *ocelot = ds->priv;
felix_npi_port_deinit(ocelot, cpu);
}
static int felix_set_tag_protocol(struct dsa_switch *ds, int cpu,
enum dsa_tag_protocol proto)
{
int err;
switch (proto) {
case DSA_TAG_PROTO_OCELOT:
err = felix_setup_tag_npi(ds, cpu);
break;
case DSA_TAG_PROTO_OCELOT_8021Q:
err = felix_setup_tag_8021q(ds, cpu);
break;
default:
err = -EPROTONOSUPPORT;
}
return err;
}
static void felix_del_tag_protocol(struct dsa_switch *ds, int cpu,
enum dsa_tag_protocol proto)
{
switch (proto) {
case DSA_TAG_PROTO_OCELOT:
felix_teardown_tag_npi(ds, cpu);
break;
case DSA_TAG_PROTO_OCELOT_8021Q:
felix_teardown_tag_8021q(ds, cpu);
break;
default:
break;
}
}
/* This always leaves the switch in a consistent state, because although the
* tag_8021q setup can fail, the NPI setup can't. So either the change is made,
* or the restoration is guaranteed to work.
*/
static int felix_change_tag_protocol(struct dsa_switch *ds, int cpu,
enum dsa_tag_protocol proto)
{
struct ocelot *ocelot = ds->priv;
struct felix *felix = ocelot_to_felix(ocelot);
enum dsa_tag_protocol old_proto = felix->tag_proto;
int err;
if (proto != DSA_TAG_PROTO_OCELOT &&
proto != DSA_TAG_PROTO_OCELOT_8021Q)
return -EPROTONOSUPPORT;
felix_del_tag_protocol(ds, cpu, old_proto);
err = felix_set_tag_protocol(ds, cpu, proto);
if (err) {
felix_set_tag_protocol(ds, cpu, old_proto);
return err;
}
felix->tag_proto = proto;
return 0;
}
static enum dsa_tag_protocol felix_get_tag_protocol(struct dsa_switch *ds,
int port,
enum dsa_tag_protocol mp)
{
return DSA_TAG_PROTO_OCELOT;
struct ocelot *ocelot = ds->priv;
struct felix *felix = ocelot_to_felix(ocelot);
return felix->tag_proto;
}
static int felix_set_ageing_time(struct dsa_switch *ds,
@@ -425,8 +889,8 @@ static int felix_init_structs(struct felix *felix, int num_phys_ports)
ocelot->num_mact_rows = felix->info->num_mact_rows;
ocelot->vcap = felix->info->vcap;
ocelot->ops = felix->info->ops;
ocelot->inj_prefix = OCELOT_TAG_PREFIX_SHORT;
ocelot->xtr_prefix = OCELOT_TAG_PREFIX_SHORT;
ocelot->npi_inj_prefix = OCELOT_TAG_PREFIX_SHORT;
ocelot->npi_xtr_prefix = OCELOT_TAG_PREFIX_SHORT;
ocelot->devlink = felix->ds->devlink;
port_phy_modes = kcalloc(num_phys_ports, sizeof(phy_interface_t),
@@ -527,28 +991,6 @@ static int felix_init_structs(struct felix *felix, int num_phys_ports)
return 0;
}
/* The CPU port module is connected to the Node Processor Interface (NPI). This
* is the mode through which frames can be injected from and extracted to an
* external CPU, over Ethernet.
*/
static void felix_npi_port_init(struct ocelot *ocelot, int port)
{
ocelot->npi = port;
ocelot_write(ocelot, QSYS_EXT_CPU_CFG_EXT_CPUQ_MSK_M |
QSYS_EXT_CPU_CFG_EXT_CPU_PORT(port),
QSYS_EXT_CPU_CFG);
/* NPI port Injection/Extraction configuration */
ocelot_fields_write(ocelot, port, SYS_PORT_MODE_INCL_XTR_HDR,
ocelot->xtr_prefix);
ocelot_fields_write(ocelot, port, SYS_PORT_MODE_INCL_INJ_HDR,
ocelot->inj_prefix);
/* Disable transmission of pause frames */
ocelot_fields_write(ocelot, port, SYS_PAUSE_CFG_PAUSE_ENA, 0);
}
/* Hardware initialization done here so that we can allocate structures with
* devm without fear of dsa_register_switch returning -EPROBE_DEFER and causing
* us to allocate structures twice (leak memory) and map PCI memory twice
@@ -578,10 +1020,10 @@ static int felix_setup(struct dsa_switch *ds)
}
for (port = 0; port < ds->num_ports; port++) {
ocelot_init_port(ocelot, port);
if (dsa_is_unused_port(ds, port))
continue;
if (dsa_is_cpu_port(ds, port))
felix_npi_port_init(ocelot, port);
ocelot_init_port(ocelot, port);
/* Set the default QoS Classification based on PCP and DEI
* bits of vlan tag.
@@ -593,14 +1035,15 @@ static int felix_setup(struct dsa_switch *ds)
if (err)
return err;
/* Include the CPU port module in the forwarding mask for unknown
* unicast - the hardware default value for ANA_FLOODING_FLD_UNICAST
* excludes BIT(ocelot->num_phys_ports), and so does ocelot_init, since
* Ocelot relies on whitelisting MAC addresses towards PGID_CPU.
for (port = 0; port < ds->num_ports; port++) {
if (!dsa_is_cpu_port(ds, port))
continue;
/* The initial tag protocol is NPI which always returns 0, so
* there's no real point in checking for errors.
*/
ocelot_write_rix(ocelot,
ANA_PGID_PGID_PGID(GENMASK(ocelot->num_phys_ports, 0)),
ANA_PGID_PGID, PGID_UC);
felix_set_tag_protocol(ds, port, felix->tag_proto);
}
ds->mtu_enforcement_ingress = true;
ds->assisted_learning_on_cpu_port = true;
@@ -614,6 +1057,13 @@ static void felix_teardown(struct dsa_switch *ds)
struct felix *felix = ocelot_to_felix(ocelot);
int port;
for (port = 0; port < ds->num_ports; port++) {
if (!dsa_is_cpu_port(ds, port))
continue;
felix_del_tag_protocol(ds, port, felix->tag_proto);
}
ocelot_devlink_sb_unregister(ocelot);
ocelot_deinit_timestamp(ocelot);
ocelot_deinit(ocelot);
@@ -860,6 +1310,7 @@ static int felix_sb_occ_tc_port_bind_get(struct dsa_switch *ds, int port,
const struct dsa_switch_ops felix_switch_ops = {
.get_tag_protocol = felix_get_tag_protocol,
.change_tag_protocol = felix_change_tag_protocol,
.setup = felix_setup,
.teardown = felix_teardown,
.set_ageing_time = felix_set_ageing_time,
@@ -48,6 +48,8 @@ struct felix {
struct lynx_pcs **pcs;
resource_size_t switch_base;
resource_size_t imdio_base;
struct dsa_8021q_context *dsa_8021q_ctx;
enum dsa_tag_protocol tag_proto;
};
struct net_device *felix_port_to_netdev(struct ocelot *ocelot, int port);
@@ -1467,6 +1467,7 @@ static int felix_pci_probe(struct pci_dev *pdev,
ds->ops = &felix_switch_ops;
ds->priv = ocelot;
felix->ds = ds;
felix->tag_proto = DSA_TAG_PROTO_OCELOT;
err = dsa_register_switch(ds);
if (err) {
@@ -1246,6 +1246,7 @@ static int seville_probe(struct platform_device *pdev)
ds->ops = &felix_switch_ops;
ds->priv = ocelot;
felix->ds = ds;
felix->tag_proto = DSA_TAG_PROTO_OCELOT;
err = dsa_register_switch(ds);
if (err) {
@@ -889,10 +889,87 @@ int ocelot_get_ts_info(struct ocelot *ocelot, int port,
}
EXPORT_SYMBOL(ocelot_get_ts_info);
static u32 ocelot_get_dsa_8021q_cpu_mask(struct ocelot *ocelot)
{
u32 mask = 0;
int port;
for (port = 0; port < ocelot->num_phys_ports; port++) {
struct ocelot_port *ocelot_port = ocelot->ports[port];
if (!ocelot_port)
continue;
if (ocelot_port->is_dsa_8021q_cpu)
mask |= BIT(port);
}
return mask;
}
void ocelot_apply_bridge_fwd_mask(struct ocelot *ocelot)
{
unsigned long cpu_fwd_mask;
int port;
/* If a DSA tag_8021q CPU exists, it needs to be included in the
* regular forwarding path of the front ports regardless of whether
* those are bridged or standalone.
* If DSA tag_8021q is not used, this returns 0, which is fine because
* the hardware-based CPU port module can be a destination for packets
* even if it isn't part of PGID_SRC.
*/
cpu_fwd_mask = ocelot_get_dsa_8021q_cpu_mask(ocelot);
/* Apply FWD mask. The loop is needed to add/remove the current port as
* a source for the other ports.
*/
for (port = 0; port < ocelot->num_phys_ports; port++) {
struct ocelot_port *ocelot_port = ocelot->ports[port];
unsigned long mask;
if (!ocelot_port) {
/* Unused ports can't send anywhere */
mask = 0;
} else if (ocelot_port->is_dsa_8021q_cpu) {
/* The DSA tag_8021q CPU ports need to be able to
* forward packets to all other ports except for
* themselves
*/
mask = GENMASK(ocelot->num_phys_ports - 1, 0);
mask &= ~cpu_fwd_mask;
} else if (ocelot->bridge_fwd_mask & BIT(port)) {
int lag;
mask = ocelot->bridge_fwd_mask & ~BIT(port);
for (lag = 0; lag < ocelot->num_phys_ports; lag++) {
unsigned long bond_mask = ocelot->lags[lag];
if (!bond_mask)
continue;
if (bond_mask & BIT(port)) {
mask &= ~bond_mask;
break;
}
}
} else {
/* Standalone ports forward only to DSA tag_8021q CPU
* ports (if those exist), or to the hardware CPU port
* module otherwise.
*/
mask = cpu_fwd_mask;
}
ocelot_write_rix(ocelot, mask, ANA_PGID_PGID, PGID_SRC + port);
}
}
EXPORT_SYMBOL(ocelot_apply_bridge_fwd_mask);
void ocelot_bridge_stp_state_set(struct ocelot *ocelot, int port, u8 state)
{
u32 port_cfg;
int p, i;
if (!(BIT(port) & ocelot->bridge_mask))
return;
@@ -915,32 +992,7 @@ void ocelot_bridge_stp_state_set(struct ocelot *ocelot, int port, u8 state)
ocelot_write_gix(ocelot, port_cfg, ANA_PORT_PORT_CFG, port);
/* Apply FWD mask. The loop is needed to add/remove the current port as
* a source for the other ports.
*/
for (p = 0; p < ocelot->num_phys_ports; p++) {
if (ocelot->bridge_fwd_mask & BIT(p)) {
unsigned long mask = ocelot->bridge_fwd_mask & ~BIT(p);
for (i = 0; i < ocelot->num_phys_ports; i++) {
unsigned long bond_mask = ocelot->lags[i];
if (!bond_mask)
continue;
if (bond_mask & BIT(p)) {
mask &= ~bond_mask;
break;
}
}
ocelot_write_rix(ocelot, mask,
ANA_PGID_PGID, PGID_SRC + p);
} else {
ocelot_write_rix(ocelot, 0,
ANA_PGID_PGID, PGID_SRC + p);
}
}
ocelot_apply_bridge_fwd_mask(ocelot);
}
EXPORT_SYMBOL(ocelot_bridge_stp_state_set);
@@ -1297,6 +1349,7 @@ int ocelot_port_lag_join(struct ocelot *ocelot, int port,
}
ocelot_setup_lag(ocelot, lag);
ocelot_apply_bridge_fwd_mask(ocelot);
ocelot_set_aggr_pgids(ocelot);
return 0;
@@ -1330,6 +1383,7 @@ void ocelot_port_lag_leave(struct ocelot *ocelot, int port,
ocelot_write_gix(ocelot, port_cfg | ANA_PORT_PORT_CFG_PORTID_VAL(port),
ANA_PORT_PORT_CFG, port);
ocelot_apply_bridge_fwd_mask(ocelot);
ocelot_set_aggr_pgids(ocelot);
}
EXPORT_SYMBOL(ocelot_port_lag_leave);
@@ -1350,9 +1404,9 @@ void ocelot_port_set_maxlen(struct ocelot *ocelot, int port, size_t sdu)
if (port == ocelot->npi) {
maxlen += OCELOT_TAG_LEN;
if (ocelot->inj_prefix == OCELOT_TAG_PREFIX_SHORT)
if (ocelot->npi_inj_prefix == OCELOT_TAG_PREFIX_SHORT)
maxlen += OCELOT_SHORT_PREFIX_LEN;
else if (ocelot->inj_prefix == OCELOT_TAG_PREFIX_LONG)
else if (ocelot->npi_inj_prefix == OCELOT_TAG_PREFIX_LONG)
maxlen += OCELOT_LONG_PREFIX_LEN;
}
@@ -1382,9 +1436,9 @@ int ocelot_get_max_mtu(struct ocelot *ocelot, int port)
if (port == ocelot->npi) {
max_mtu -= OCELOT_TAG_LEN;
if (ocelot->inj_prefix == OCELOT_TAG_PREFIX_SHORT)
if (ocelot->npi_inj_prefix == OCELOT_TAG_PREFIX_SHORT)
max_mtu -= OCELOT_SHORT_PREFIX_LEN;
else if (ocelot->inj_prefix == OCELOT_TAG_PREFIX_LONG)
else if (ocelot->npi_inj_prefix == OCELOT_TAG_PREFIX_LONG)
max_mtu -= OCELOT_LONG_PREFIX_LEN;
}
@@ -1469,9 +1523,9 @@ static void ocelot_cpu_port_init(struct ocelot *ocelot)
ocelot_fields_write(ocelot, cpu, QSYS_SWITCH_PORT_MODE_PORT_ENA, 1);
/* CPU port Injection/Extraction configuration */
ocelot_fields_write(ocelot, cpu, SYS_PORT_MODE_INCL_XTR_HDR,
ocelot->xtr_prefix);
OCELOT_TAG_PREFIX_NONE);
ocelot_fields_write(ocelot, cpu, SYS_PORT_MODE_INCL_INJ_HDR,
ocelot->inj_prefix);
OCELOT_TAG_PREFIX_NONE);
/* Configure the CPU port to be VLAN aware */
ocelot_write_gix(ocelot, ANA_PORT_VLAN_CFG_VLAN_VID(0) |
@@ -622,7 +622,8 @@ static int ocelot_flower_parse(struct ocelot *ocelot, int port, bool ingress,
int ret;
filter->prio = f->common.prio;
filter->id = f->cookie;
filter->id.cookie = f->cookie;
filter->id.tc_offload = true;
ret = ocelot_flower_parse_action(ocelot, port, ingress, f, filter);
if (ret)
@@ -717,7 +718,7 @@ int ocelot_cls_flower_destroy(struct ocelot *ocelot, int port,
block = &ocelot->block[block_id];
filter = ocelot_vcap_block_find_filter_by_id(block, f->cookie);
filter = ocelot_vcap_block_find_filter_by_id(block, f->cookie, true);
if (!filter)
return 0;
@@ -741,7 +742,7 @@ int ocelot_cls_flower_stats(struct ocelot *ocelot, int port,
block = &ocelot->block[block_id];
filter = ocelot_vcap_block_find_filter_by_id(block, f->cookie);
filter = ocelot_vcap_block_find_filter_by_id(block, f->cookie, true);
if (!filter || filter->type == OCELOT_VCAP_FILTER_DUMMY)
return 0;
@@ -9,6 +9,7 @@
*/
#include <linux/if_bridge.h>
#include <net/pkt_cls.h>
#include "ocelot.h"
#include "ocelot_vcap.h"
@@ -959,6 +959,12 @@ static void ocelot_vcap_filter_add_to_block(struct ocelot *ocelot,
list_add(&filter->list, pos->prev);
}
static bool ocelot_vcap_filter_equal(const struct ocelot_vcap_filter *a,
const struct ocelot_vcap_filter *b)
{
return !memcmp(&a->id, &b->id, sizeof(struct ocelot_vcap_id));
}
static int ocelot_vcap_block_get_filter_index(struct ocelot_vcap_block *block,
struct ocelot_vcap_filter *filter)
{
@@ -966,7 +972,7 @@ static int ocelot_vcap_block_get_filter_index(struct ocelot_vcap_block *block,
int index = 0;
list_for_each_entry(tmp, &block->rules, list) {
if (filter->id == tmp->id)
if (ocelot_vcap_filter_equal(filter, tmp))
return index;
index++;
}
@@ -991,16 +997,19 @@
}
struct ocelot_vcap_filter *
ocelot_vcap_block_find_filter_by_id(struct ocelot_vcap_block *block, int id)
ocelot_vcap_block_find_filter_by_id(struct ocelot_vcap_block *block, int cookie,
bool tc_offload)
{
struct ocelot_vcap_filter *filter;
list_for_each_entry(filter, &block->rules, list)
if (filter->id == id)
if (filter->id.tc_offload == tc_offload &&
filter->id.cookie == cookie)
return filter;
return NULL;
}
EXPORT_SYMBOL(ocelot_vcap_block_find_filter_by_id);
/* If @on=false, then SNAP, ARP, IP and OAM frames will not match on keys based
* on destination and source MAC addresses, but only on higher-level protocol
@@ -1150,6 +1159,7 @@ int ocelot_vcap_filter_add(struct ocelot *ocelot,
vcap_entry_set(ocelot, index, filter);
return 0;
}
EXPORT_SYMBOL(ocelot_vcap_filter_add);
static void ocelot_vcap_block_remove_filter(struct ocelot *ocelot,
struct ocelot_vcap_block *block,
@@ -1160,7 +1170,7 @@ static void ocelot_vcap_block_remove_filter(struct ocelot *ocelot,
list_for_each_safe(pos, q, &block->rules) {
tmp = list_entry(pos, struct ocelot_vcap_filter, list);
if (tmp->id == filter->id) {
if (ocelot_vcap_filter_equal(filter, tmp)) {
if (tmp->block_id == VCAP_IS2 &&
tmp->action.police_ena)
ocelot_vcap_policer_del(ocelot, block,
@@ -1204,6 +1214,7 @@ int ocelot_vcap_filter_del(struct ocelot *ocelot,
return 0;
}
EXPORT_SYMBOL(ocelot_vcap_filter_del);
int ocelot_vcap_filter_stats_update(struct ocelot *ocelot,
struct ocelot_vcap_filter *filter)
......
@@ -7,304 +7,13 @@
#define _MSCC_OCELOT_VCAP_H_
#include "ocelot.h"
#include "ocelot_police.h"
#include <net/sch_generic.h>
#include <net/pkt_cls.h>
#include <soc/mscc/ocelot_vcap.h>
#include <net/flow_offload.h>
#define OCELOT_POLICER_DISCARD 0x17f
struct ocelot_ipv4 {
u8 addr[4];
};
enum ocelot_vcap_bit {
OCELOT_VCAP_BIT_ANY,
OCELOT_VCAP_BIT_0,
OCELOT_VCAP_BIT_1
};
struct ocelot_vcap_u8 {
u8 value[1];
u8 mask[1];
};
struct ocelot_vcap_u16 {
u8 value[2];
u8 mask[2];
};
struct ocelot_vcap_u24 {
u8 value[3];
u8 mask[3];
};
struct ocelot_vcap_u32 {
u8 value[4];
u8 mask[4];
};
struct ocelot_vcap_u40 {
u8 value[5];
u8 mask[5];
};
struct ocelot_vcap_u48 {
u8 value[6];
u8 mask[6];
};
struct ocelot_vcap_u64 {
u8 value[8];
u8 mask[8];
};
struct ocelot_vcap_u128 {
u8 value[16];
u8 mask[16];
};
struct ocelot_vcap_vid {
u16 value;
u16 mask;
};
struct ocelot_vcap_ipv4 {
struct ocelot_ipv4 value;
struct ocelot_ipv4 mask;
};
struct ocelot_vcap_udp_tcp {
u16 value;
u16 mask;
};
struct ocelot_vcap_port {
u8 value;
u8 mask;
};
enum ocelot_vcap_key_type {
OCELOT_VCAP_KEY_ANY,
OCELOT_VCAP_KEY_ETYPE,
OCELOT_VCAP_KEY_LLC,
OCELOT_VCAP_KEY_SNAP,
OCELOT_VCAP_KEY_ARP,
OCELOT_VCAP_KEY_IPV4,
OCELOT_VCAP_KEY_IPV6
};
struct ocelot_vcap_key_vlan {
struct ocelot_vcap_vid vid; /* VLAN ID (12 bit) */
struct ocelot_vcap_u8 pcp; /* PCP (3 bit) */
enum ocelot_vcap_bit dei; /* DEI */
enum ocelot_vcap_bit tagged; /* Tagged/untagged frame */
};
struct ocelot_vcap_key_etype {
struct ocelot_vcap_u48 dmac;
struct ocelot_vcap_u48 smac;
struct ocelot_vcap_u16 etype;
struct ocelot_vcap_u16 data; /* MAC data */
};
struct ocelot_vcap_key_llc {
struct ocelot_vcap_u48 dmac;
struct ocelot_vcap_u48 smac;
/* LLC header: DSAP at byte 0, SSAP at byte 1, Control at byte 2 */
struct ocelot_vcap_u32 llc;
};
struct ocelot_vcap_key_snap {
struct ocelot_vcap_u48 dmac;
struct ocelot_vcap_u48 smac;
/* SNAP header: Organization Code at byte 0, Type at byte 3 */
struct ocelot_vcap_u40 snap;
};
struct ocelot_vcap_key_arp {
struct ocelot_vcap_u48 smac;
enum ocelot_vcap_bit arp; /* Opcode ARP/RARP */
enum ocelot_vcap_bit req; /* Opcode request/reply */
enum ocelot_vcap_bit unknown; /* Opcode unknown */
enum ocelot_vcap_bit smac_match; /* Sender MAC matches SMAC */
enum ocelot_vcap_bit dmac_match; /* Target MAC matches DMAC */
/**< Protocol addr. length 4, hardware length 6 */
enum ocelot_vcap_bit length;
enum ocelot_vcap_bit ip; /* Protocol address type IP */
enum ocelot_vcap_bit ethernet; /* Hardware address type Ethernet */
struct ocelot_vcap_ipv4 sip; /* Sender IP address */
struct ocelot_vcap_ipv4 dip; /* Target IP address */
};
struct ocelot_vcap_key_ipv4 {
enum ocelot_vcap_bit ttl; /* TTL zero */
enum ocelot_vcap_bit fragment; /* Fragment */
enum ocelot_vcap_bit options; /* Header options */
struct ocelot_vcap_u8 ds;
struct ocelot_vcap_u8 proto; /* Protocol */
struct ocelot_vcap_ipv4 sip; /* Source IP address */
struct ocelot_vcap_ipv4 dip; /* Destination IP address */
struct ocelot_vcap_u48 data; /* Not UDP/TCP: IP data */
struct ocelot_vcap_udp_tcp sport; /* UDP/TCP: Source port */
struct ocelot_vcap_udp_tcp dport; /* UDP/TCP: Destination port */
enum ocelot_vcap_bit tcp_fin;
enum ocelot_vcap_bit tcp_syn;
enum ocelot_vcap_bit tcp_rst;
enum ocelot_vcap_bit tcp_psh;
enum ocelot_vcap_bit tcp_ack;
enum ocelot_vcap_bit tcp_urg;
enum ocelot_vcap_bit sip_eq_dip; /* SIP equals DIP */
enum ocelot_vcap_bit sport_eq_dport; /* SPORT equals DPORT */
enum ocelot_vcap_bit seq_zero; /* TCP sequence number is zero */
};
struct ocelot_vcap_key_ipv6 {
struct ocelot_vcap_u8 proto; /* IPv6 protocol */
struct ocelot_vcap_u128 sip; /* IPv6 source (byte 0-7 ignored) */
struct ocelot_vcap_u128 dip; /* IPv6 destination (byte 0-7 ignored) */
enum ocelot_vcap_bit ttl; /* TTL zero */
struct ocelot_vcap_u8 ds;
struct ocelot_vcap_u48 data; /* Not UDP/TCP: IP data */
struct ocelot_vcap_udp_tcp sport;
struct ocelot_vcap_udp_tcp dport;
enum ocelot_vcap_bit tcp_fin;
enum ocelot_vcap_bit tcp_syn;
enum ocelot_vcap_bit tcp_rst;
enum ocelot_vcap_bit tcp_psh;
enum ocelot_vcap_bit tcp_ack;
enum ocelot_vcap_bit tcp_urg;
enum ocelot_vcap_bit sip_eq_dip; /* SIP equals DIP */
enum ocelot_vcap_bit sport_eq_dport; /* SPORT equals DPORT */
enum ocelot_vcap_bit seq_zero; /* TCP sequence number is zero */
};
enum ocelot_mask_mode {
OCELOT_MASK_MODE_NONE,
OCELOT_MASK_MODE_PERMIT_DENY,
OCELOT_MASK_MODE_POLICY,
OCELOT_MASK_MODE_REDIRECT,
};
enum ocelot_es0_tag {
OCELOT_NO_ES0_TAG,
OCELOT_ES0_TAG,
OCELOT_FORCE_PORT_TAG,
OCELOT_FORCE_UNTAG,
};
enum ocelot_tag_tpid_sel {
OCELOT_TAG_TPID_SEL_8021Q,
OCELOT_TAG_TPID_SEL_8021AD,
};
struct ocelot_vcap_action {
union {
/* VCAP ES0 */
struct {
enum ocelot_es0_tag push_outer_tag;
enum ocelot_es0_tag push_inner_tag;
enum ocelot_tag_tpid_sel tag_a_tpid_sel;
int tag_a_vid_sel;
int tag_a_pcp_sel;
u16 vid_a_val;
u8 pcp_a_val;
u8 dei_a_val;
enum ocelot_tag_tpid_sel tag_b_tpid_sel;
int tag_b_vid_sel;
int tag_b_pcp_sel;
u16 vid_b_val;
u8 pcp_b_val;
u8 dei_b_val;
};
/* VCAP IS1 */
struct {
bool vid_replace_ena;
u16 vid;
bool vlan_pop_cnt_ena;
int vlan_pop_cnt;
bool pcp_dei_ena;
u8 pcp;
u8 dei;
bool qos_ena;
u8 qos_val;
u8 pag_override_mask;
u8 pag_val;
};
/* VCAP IS2 */
struct {
bool cpu_copy_ena;
u8 cpu_qu_num;
enum ocelot_mask_mode mask_mode;
unsigned long port_mask;
bool police_ena;
struct ocelot_policer pol;
u32 pol_ix;
};
};
};
struct ocelot_vcap_stats {
u64 bytes;
u64 pkts;
u64 used;
};
enum ocelot_vcap_filter_type {
OCELOT_VCAP_FILTER_DUMMY,
OCELOT_VCAP_FILTER_PAG,
OCELOT_VCAP_FILTER_OFFLOAD,
};
struct ocelot_vcap_filter {
struct list_head list;
enum ocelot_vcap_filter_type type;
int block_id;
int goto_target;
int lookup;
u8 pag;
u16 prio;
u32 id;
struct ocelot_vcap_action action;
struct ocelot_vcap_stats stats;
/* For VCAP IS1 and IS2 */
unsigned long ingress_port_mask;
/* For VCAP ES0 */
struct ocelot_vcap_port ingress_port;
struct ocelot_vcap_port egress_port;
enum ocelot_vcap_bit dmac_mc;
enum ocelot_vcap_bit dmac_bc;
struct ocelot_vcap_key_vlan vlan;
enum ocelot_vcap_key_type key_type;
union {
/* OCELOT_VCAP_KEY_ANY: No specific fields */
struct ocelot_vcap_key_etype etype;
struct ocelot_vcap_key_llc llc;
struct ocelot_vcap_key_snap snap;
struct ocelot_vcap_key_arp arp;
struct ocelot_vcap_key_ipv4 ipv4;
struct ocelot_vcap_key_ipv6 ipv6;
} key;
};
int ocelot_vcap_filter_add(struct ocelot *ocelot,
struct ocelot_vcap_filter *rule,
struct netlink_ext_ack *extack);
int ocelot_vcap_filter_del(struct ocelot *ocelot,
struct ocelot_vcap_filter *rule);
int ocelot_vcap_filter_stats_update(struct ocelot *ocelot,
struct ocelot_vcap_filter *rule);
struct ocelot_vcap_filter *
ocelot_vcap_block_find_filter_by_id(struct ocelot_vcap_block *block, int id);
void ocelot_detect_vcap_constants(struct ocelot *ocelot);
int ocelot_vcap_init(struct ocelot *ocelot);
......
@@ -1347,8 +1347,6 @@ static int mscc_ocelot_probe(struct platform_device *pdev)
ocelot->num_flooding_pgids = 1;
ocelot->vcap = vsc7514_vcap_props;
ocelot->inj_prefix = OCELOT_TAG_PREFIX_NONE;
ocelot->xtr_prefix = OCELOT_TAG_PREFIX_NONE;
ocelot->npi = -1;
err = ocelot_init(ocelot);
......
@@ -64,6 +64,10 @@ int dsa_8021q_rx_source_port(u16 vid);
u16 dsa_8021q_rx_subvlan(u16 vid);
bool vid_is_dsa_8021q_rxvlan(u16 vid);
bool vid_is_dsa_8021q_txvlan(u16 vid);
bool vid_is_dsa_8021q(u16 vid);
#else
@@ -123,6 +127,16 @@ u16 dsa_8021q_rx_subvlan(u16 vid)
return 0;
}
bool vid_is_dsa_8021q_rxvlan(u16 vid)
{
return false;
}
bool vid_is_dsa_8021q_txvlan(u16 vid)
{
return false;
}
bool vid_is_dsa_8021q(u16 vid)
{
return false;
......
@@ -47,6 +47,7 @@ struct phylink_link_state;
#define DSA_TAG_PROTO_RTL4_A_VALUE 17
#define DSA_TAG_PROTO_HELLCREEK_VALUE 18
#define DSA_TAG_PROTO_XRS700X_VALUE 19
#define DSA_TAG_PROTO_OCELOT_8021Q_VALUE 20
enum dsa_tag_protocol {
DSA_TAG_PROTO_NONE = DSA_TAG_PROTO_NONE_VALUE,
@@ -69,6 +70,7 @@ enum dsa_tag_protocol {
DSA_TAG_PROTO_RTL4_A = DSA_TAG_PROTO_RTL4_A_VALUE,
DSA_TAG_PROTO_HELLCREEK = DSA_TAG_PROTO_HELLCREEK_VALUE,
DSA_TAG_PROTO_XRS700X = DSA_TAG_PROTO_XRS700X_VALUE,
DSA_TAG_PROTO_OCELOT_8021Q = DSA_TAG_PROTO_OCELOT_8021Q_VALUE,
};
struct packet_type;
@@ -140,6 +142,9 @@ struct dsa_switch_tree {
/* Has this tree been applied to the hardware? */
bool setup;
/* Tagging protocol operations */
const struct dsa_device_ops *tag_ops;
/*
* Configuration data for the platform device that owns
* this dsa switch tree instance.
@@ -225,7 +230,9 @@ struct dsa_port {
struct net_device *slave;
};
/* CPU port tagging operations used by master or slave devices */
/* Copy of the tagging protocol operations, for quicker access
* in the data path. Valid only for the CPU ports.
*/
const struct dsa_device_ops *tag_ops;
/* Copies for faster access in master receive hot path */
@@ -480,9 +487,18 @@ static inline bool dsa_port_is_vlan_filtering(const struct dsa_port *dp)
typedef int dsa_fdb_dump_cb_t(const unsigned char *addr, u16 vid,
bool is_static, void *data);
struct dsa_switch_ops {
/*
* Tagging protocol helpers called for the CPU ports and DSA links.
* @get_tag_protocol retrieves the initial tagging protocol and is
* mandatory. Switches which can operate using multiple tagging
* protocols should implement @change_tag_protocol and report in
* @get_tag_protocol the tagger in current use.
*/
enum dsa_tag_protocol (*get_tag_protocol)(struct dsa_switch *ds,
int port,
enum dsa_tag_protocol mprot);
int (*change_tag_protocol)(struct dsa_switch *ds, int port,
enum dsa_tag_protocol proto);
int (*setup)(struct dsa_switch *ds);
void (*teardown)(struct dsa_switch *ds);
......
@@ -610,6 +610,7 @@ struct ocelot_port {
phy_interface_t phy_mode;
u8 *xmit_template;
bool is_dsa_8021q_cpu;
};
struct ocelot {
@@ -651,8 +652,8 @@ struct ocelot {
int npi;
enum ocelot_tag_prefix inj_prefix;
enum ocelot_tag_prefix xtr_prefix;
enum ocelot_tag_prefix npi_inj_prefix;
enum ocelot_tag_prefix npi_xtr_prefix;
u32 *lags;
@@ -760,6 +761,7 @@ void ocelot_adjust_link(struct ocelot *ocelot, int port,
struct phy_device *phydev);
int ocelot_port_vlan_filtering(struct ocelot *ocelot, int port, bool enabled);
void ocelot_bridge_stp_state_set(struct ocelot *ocelot, int port, u8 state);
void ocelot_apply_bridge_fwd_mask(struct ocelot *ocelot);
int ocelot_port_bridge_join(struct ocelot *ocelot, int port,
struct net_device *bridge);
int ocelot_port_bridge_leave(struct ocelot *ocelot, int port,
......
@@ -400,4 +400,301 @@ enum vcap_es0_action_field {
VCAP_ES0_ACT_HIT_STICKY,
};
struct ocelot_ipv4 {
u8 addr[4];
};
enum ocelot_vcap_bit {
OCELOT_VCAP_BIT_ANY,
OCELOT_VCAP_BIT_0,
OCELOT_VCAP_BIT_1
};
struct ocelot_vcap_u8 {
u8 value[1];
u8 mask[1];
};
struct ocelot_vcap_u16 {
u8 value[2];
u8 mask[2];
};
struct ocelot_vcap_u24 {
u8 value[3];
u8 mask[3];
};
struct ocelot_vcap_u32 {
u8 value[4];
u8 mask[4];
};
struct ocelot_vcap_u40 {
u8 value[5];
u8 mask[5];
};
struct ocelot_vcap_u48 {
u8 value[6];
u8 mask[6];
};
struct ocelot_vcap_u64 {
u8 value[8];
u8 mask[8];
};
struct ocelot_vcap_u128 {
u8 value[16];
u8 mask[16];
};
struct ocelot_vcap_vid {
u16 value;
u16 mask;
};
struct ocelot_vcap_ipv4 {
struct ocelot_ipv4 value;
struct ocelot_ipv4 mask;
};
struct ocelot_vcap_udp_tcp {
u16 value;
u16 mask;
};
struct ocelot_vcap_port {
u8 value;
u8 mask;
};
enum ocelot_vcap_key_type {
OCELOT_VCAP_KEY_ANY,
OCELOT_VCAP_KEY_ETYPE,
OCELOT_VCAP_KEY_LLC,
OCELOT_VCAP_KEY_SNAP,
OCELOT_VCAP_KEY_ARP,
OCELOT_VCAP_KEY_IPV4,
OCELOT_VCAP_KEY_IPV6
};
struct ocelot_vcap_key_vlan {
struct ocelot_vcap_vid vid; /* VLAN ID (12 bit) */
struct ocelot_vcap_u8 pcp; /* PCP (3 bit) */
enum ocelot_vcap_bit dei; /* DEI */
enum ocelot_vcap_bit tagged; /* Tagged/untagged frame */
};
struct ocelot_vcap_key_etype {
struct ocelot_vcap_u48 dmac;
struct ocelot_vcap_u48 smac;
struct ocelot_vcap_u16 etype;
struct ocelot_vcap_u16 data; /* MAC data */
};
struct ocelot_vcap_key_llc {
struct ocelot_vcap_u48 dmac;
struct ocelot_vcap_u48 smac;
/* LLC header: DSAP at byte 0, SSAP at byte 1, Control at byte 2 */
struct ocelot_vcap_u32 llc;
};
struct ocelot_vcap_key_snap {
struct ocelot_vcap_u48 dmac;
struct ocelot_vcap_u48 smac;
/* SNAP header: Organization Code at byte 0, Type at byte 3 */
struct ocelot_vcap_u40 snap;
};
struct ocelot_vcap_key_arp {
struct ocelot_vcap_u48 smac;
enum ocelot_vcap_bit arp; /* Opcode ARP/RARP */
enum ocelot_vcap_bit req; /* Opcode request/reply */
enum ocelot_vcap_bit unknown; /* Opcode unknown */
enum ocelot_vcap_bit smac_match; /* Sender MAC matches SMAC */
enum ocelot_vcap_bit dmac_match; /* Target MAC matches DMAC */
/**< Protocol addr. length 4, hardware length 6 */
enum ocelot_vcap_bit length;
enum ocelot_vcap_bit ip; /* Protocol address type IP */
enum ocelot_vcap_bit ethernet; /* Hardware address type Ethernet */
struct ocelot_vcap_ipv4 sip; /* Sender IP address */
struct ocelot_vcap_ipv4 dip; /* Target IP address */
};
struct ocelot_vcap_key_ipv4 {
enum ocelot_vcap_bit ttl; /* TTL zero */
enum ocelot_vcap_bit fragment; /* Fragment */
enum ocelot_vcap_bit options; /* Header options */
struct ocelot_vcap_u8 ds;
struct ocelot_vcap_u8 proto; /* Protocol */
struct ocelot_vcap_ipv4 sip; /* Source IP address */
struct ocelot_vcap_ipv4 dip; /* Destination IP address */
struct ocelot_vcap_u48 data; /* Not UDP/TCP: IP data */
struct ocelot_vcap_udp_tcp sport; /* UDP/TCP: Source port */
struct ocelot_vcap_udp_tcp dport; /* UDP/TCP: Destination port */
enum ocelot_vcap_bit tcp_fin;
enum ocelot_vcap_bit tcp_syn;
enum ocelot_vcap_bit tcp_rst;
enum ocelot_vcap_bit tcp_psh;
enum ocelot_vcap_bit tcp_ack;
enum ocelot_vcap_bit tcp_urg;
enum ocelot_vcap_bit sip_eq_dip; /* SIP equals DIP */
enum ocelot_vcap_bit sport_eq_dport; /* SPORT equals DPORT */
enum ocelot_vcap_bit seq_zero; /* TCP sequence number is zero */
};
struct ocelot_vcap_key_ipv6 {
struct ocelot_vcap_u8 proto; /* IPv6 protocol */
struct ocelot_vcap_u128 sip; /* IPv6 source (byte 0-7 ignored) */
struct ocelot_vcap_u128 dip; /* IPv6 destination (byte 0-7 ignored) */
enum ocelot_vcap_bit ttl; /* TTL zero */
struct ocelot_vcap_u8 ds;
struct ocelot_vcap_u48 data; /* Not UDP/TCP: IP data */
struct ocelot_vcap_udp_tcp sport;
struct ocelot_vcap_udp_tcp dport;
enum ocelot_vcap_bit tcp_fin;
enum ocelot_vcap_bit tcp_syn;
enum ocelot_vcap_bit tcp_rst;
enum ocelot_vcap_bit tcp_psh;
enum ocelot_vcap_bit tcp_ack;
enum ocelot_vcap_bit tcp_urg;
enum ocelot_vcap_bit sip_eq_dip; /* SIP equals DIP */
enum ocelot_vcap_bit sport_eq_dport; /* SPORT equals DPORT */
enum ocelot_vcap_bit seq_zero; /* TCP sequence number is zero */
};
enum ocelot_mask_mode {
OCELOT_MASK_MODE_NONE,
OCELOT_MASK_MODE_PERMIT_DENY,
OCELOT_MASK_MODE_POLICY,
OCELOT_MASK_MODE_REDIRECT,
};
enum ocelot_es0_tag {
OCELOT_NO_ES0_TAG,
OCELOT_ES0_TAG,
OCELOT_FORCE_PORT_TAG,
OCELOT_FORCE_UNTAG,
};
enum ocelot_tag_tpid_sel {
OCELOT_TAG_TPID_SEL_8021Q,
OCELOT_TAG_TPID_SEL_8021AD,
};
struct ocelot_vcap_action {
union {
/* VCAP ES0 */
struct {
enum ocelot_es0_tag push_outer_tag;
enum ocelot_es0_tag push_inner_tag;
enum ocelot_tag_tpid_sel tag_a_tpid_sel;
int tag_a_vid_sel;
int tag_a_pcp_sel;
u16 vid_a_val;
u8 pcp_a_val;
u8 dei_a_val;
enum ocelot_tag_tpid_sel tag_b_tpid_sel;
int tag_b_vid_sel;
int tag_b_pcp_sel;
u16 vid_b_val;
u8 pcp_b_val;
u8 dei_b_val;
};
/* VCAP IS1 */
struct {
bool vid_replace_ena;
u16 vid;
bool vlan_pop_cnt_ena;
int vlan_pop_cnt;
bool pcp_dei_ena;
u8 pcp;
u8 dei;
bool qos_ena;
u8 qos_val;
u8 pag_override_mask;
u8 pag_val;
};
/* VCAP IS2 */
struct {
bool cpu_copy_ena;
u8 cpu_qu_num;
enum ocelot_mask_mode mask_mode;
unsigned long port_mask;
bool police_ena;
struct ocelot_policer pol;
u32 pol_ix;
};
};
};
struct ocelot_vcap_stats {
u64 bytes;
u64 pkts;
u64 used;
};
enum ocelot_vcap_filter_type {
OCELOT_VCAP_FILTER_DUMMY,
OCELOT_VCAP_FILTER_PAG,
OCELOT_VCAP_FILTER_OFFLOAD,
};
struct ocelot_vcap_id {
unsigned long cookie;
bool tc_offload;
};
struct ocelot_vcap_filter {
struct list_head list;
enum ocelot_vcap_filter_type type;
int block_id;
int goto_target;
int lookup;
u8 pag;
u16 prio;
struct ocelot_vcap_id id;
struct ocelot_vcap_action action;
struct ocelot_vcap_stats stats;
/* For VCAP IS1 and IS2 */
unsigned long ingress_port_mask;
/* For VCAP ES0 */
struct ocelot_vcap_port ingress_port;
struct ocelot_vcap_port egress_port;
enum ocelot_vcap_bit dmac_mc;
enum ocelot_vcap_bit dmac_bc;
struct ocelot_vcap_key_vlan vlan;
enum ocelot_vcap_key_type key_type;
union {
/* OCELOT_VCAP_KEY_ANY: No specific fields */
struct ocelot_vcap_key_etype etype;
struct ocelot_vcap_key_llc llc;
struct ocelot_vcap_key_snap snap;
struct ocelot_vcap_key_arp arp;
struct ocelot_vcap_key_ipv4 ipv4;
struct ocelot_vcap_key_ipv6 ipv6;
} key;
};
int ocelot_vcap_filter_add(struct ocelot *ocelot,
struct ocelot_vcap_filter *rule,
struct netlink_ext_ack *extack);
int ocelot_vcap_filter_del(struct ocelot *ocelot,
struct ocelot_vcap_filter *rule);
struct ocelot_vcap_filter *
ocelot_vcap_block_find_filter_by_id(struct ocelot_vcap_block *block, int id,
bool tc_offload);
#endif /* _OCELOT_VCAP_H_ */
@@ -105,11 +105,26 @@ config NET_DSA_TAG_RTL4_A
the Realtek RTL8366RB.
config NET_DSA_TAG_OCELOT
tristate "Tag driver for Ocelot family of switches"
tristate "Tag driver for Ocelot family of switches, using NPI port"
select PACKING
help
Say Y or M if you want to enable support for tagging frames for the
Ocelot switches (VSC7511, VSC7512, VSC7513, VSC7514, VSC9959).
Say Y or M if you want to enable NPI tagging for the Ocelot switches
(VSC7511, VSC7512, VSC7513, VSC7514, VSC9953, VSC9959). In this mode,
the frames over the Ethernet CPU port are prepended with a
hardware-defined injection/extraction frame header. Flow control
(PAUSE frames) over the CPU port is not supported when operating in
this mode.
config NET_DSA_TAG_OCELOT_8021Q
tristate "Tag driver for Ocelot family of switches, using VLAN"
select NET_DSA_TAG_8021Q
help
Say Y or M if you want to enable support for tagging frames with a
custom VLAN-based header. Frames that require timestamping, such as
PTP, are not delivered over Ethernet but over register-based MMIO.
Flow control over the CPU port is functional in this mode. When using
this mode, fewer TCAM resources (VCAP IS1, IS2, ES0) are available for
use with tc-flower.
config NET_DSA_TAG_QCA
tristate "Tag driver for Qualcomm Atheros QCA8K switches"
......
@@ -15,6 +15,7 @@ obj-$(CONFIG_NET_DSA_TAG_RTL4_A) += tag_rtl4_a.o
obj-$(CONFIG_NET_DSA_TAG_LAN9303) += tag_lan9303.o
obj-$(CONFIG_NET_DSA_TAG_MTK) += tag_mtk.o
obj-$(CONFIG_NET_DSA_TAG_OCELOT) += tag_ocelot.o
obj-$(CONFIG_NET_DSA_TAG_OCELOT_8021Q) += tag_ocelot_8021q.o
obj-$(CONFIG_NET_DSA_TAG_QCA) += tag_qca.o
obj-$(CONFIG_NET_DSA_TAG_SJA1105) += tag_sja1105.o
obj-$(CONFIG_NET_DSA_TAG_TRAILER) += tag_trailer.o
......
@@ -84,6 +84,32 @@ const char *dsa_tag_protocol_to_str(const struct dsa_device_ops *ops)
return ops->name;
};
/* Function takes a reference on the module owning the tagger,
* so dsa_tag_driver_put must be called afterwards.
*/
const struct dsa_device_ops *dsa_find_tagger_by_name(const char *buf)
{
const struct dsa_device_ops *ops = ERR_PTR(-ENOPROTOOPT);
struct dsa_tag_driver *dsa_tag_driver;
mutex_lock(&dsa_tag_drivers_lock);
list_for_each_entry(dsa_tag_driver, &dsa_tag_drivers_list, list) {
const struct dsa_device_ops *tmp = dsa_tag_driver->ops;
if (!sysfs_streq(buf, tmp->name))
continue;
if (!try_module_get(dsa_tag_driver->owner))
break;
ops = tmp;
break;
}
mutex_unlock(&dsa_tag_drivers_lock);
return ops;
}
const struct dsa_device_ops *dsa_tag_driver_get(int tag_protocol)
{
struct dsa_tag_driver *dsa_tag_driver;
......
@@ -21,6 +21,49 @@
static DEFINE_MUTEX(dsa2_mutex);
LIST_HEAD(dsa_tree_list);
/**
* dsa_tree_notify - Execute code for all switches in a DSA switch tree.
* @dst: collection of struct dsa_switch devices to notify.
* @e: event, must be of type DSA_NOTIFIER_*
* @v: event-specific value.
*
* Given a struct dsa_switch_tree, this can be used to run a function once for
* each member DSA switch. The other alternative of traversing the tree is only
* through its ports list, which does not uniquely list the switches.
*/
int dsa_tree_notify(struct dsa_switch_tree *dst, unsigned long e, void *v)
{
struct raw_notifier_head *nh = &dst->nh;
int err;
err = raw_notifier_call_chain(nh, e, v);
return notifier_to_errno(err);
}
/**
* dsa_broadcast - Notify all DSA trees in the system.
* @e: event, must be of type DSA_NOTIFIER_*
* @v: event-specific value.
*
* Can be used to notify the switching fabric of events such as cross-chip
* bridging between disjoint trees (such as islands of tagger-compatible
* switches bridged by an incompatible middle switch).
*/
int dsa_broadcast(unsigned long e, void *v)
{
struct dsa_switch_tree *dst;
int err = 0;
list_for_each_entry(dst, &dsa_tree_list, list) {
err = dsa_tree_notify(dst, e, v);
if (err)
break;
}
return err;
}
/**
* dsa_lag_map() - Map LAG netdev to a linear LAG ID
* @dst: Tree in which to record the mapping.
@@ -136,6 +179,8 @@ static struct dsa_switch_tree *dsa_tree_alloc(int index)
static void dsa_tree_free(struct dsa_switch_tree *dst)
{
if (dst->tag_ops)
dsa_tag_driver_put(dst->tag_ops);
list_del(&dst->list);
kfree(dst);
}
@@ -424,7 +469,6 @@ static void dsa_port_teardown(struct dsa_port *dp)
break;
case DSA_PORT_TYPE_CPU:
dsa_port_disable(dp);
dsa_tag_driver_put(dp->tag_ops);
dsa_port_link_unregister_of(dp);
break;
case DSA_PORT_TYPE_DSA:
@@ -898,6 +942,57 @@ static void dsa_tree_teardown(struct dsa_switch_tree *dst)
dst->setup = false;
}
/* Since the dsa/tagging sysfs device attribute is per master, the assumption
* is that all DSA switches within a tree share the same tagger, otherwise
* they would have formed disjoint trees (different "dsa,member" values).
*/
int dsa_tree_change_tag_proto(struct dsa_switch_tree *dst,
struct net_device *master,
const struct dsa_device_ops *tag_ops,
const struct dsa_device_ops *old_tag_ops)
{
struct dsa_notifier_tag_proto_info info;
struct dsa_port *dp;
int err = -EBUSY;
if (!rtnl_trylock())
return restart_syscall();
/* At the moment we don't allow changing the tag protocol under
* traffic. The rtnl_mutex also happens to serialize concurrent
* attempts to change the tagging protocol. If we ever lift the IFF_UP
* restriction, there needs to be another mutex which serializes this.
*/
if (master->flags & IFF_UP)
goto out_unlock;
list_for_each_entry(dp, &dst->ports, list) {
if (!dsa_is_user_port(dp->ds, dp->index))
continue;
if (dp->slave->flags & IFF_UP)
goto out_unlock;
}
info.tag_ops = tag_ops;
err = dsa_tree_notify(dst, DSA_NOTIFIER_TAG_PROTO, &info);
if (err)
goto out_unwind_tagger;
dst->tag_ops = tag_ops;
rtnl_unlock();
return 0;
out_unwind_tagger:
info.tag_ops = old_tag_ops;
dsa_tree_notify(dst, DSA_NOTIFIER_TAG_PROTO, &info);
out_unlock:
rtnl_unlock();
return err;
}
static struct dsa_port *dsa_port_touch(struct dsa_switch *ds, int index)
{
struct dsa_switch_tree *dst = ds->dst;
@@ -968,24 +1063,33 @@ static int dsa_port_parse_cpu(struct dsa_port *dp, struct net_device *master)
{
struct dsa_switch *ds = dp->ds;
struct dsa_switch_tree *dst = ds->dst;
const struct dsa_device_ops *tag_ops;
enum dsa_tag_protocol tag_protocol;
tag_protocol = dsa_get_tag_protocol(dp, master);
tag_ops = dsa_tag_driver_get(tag_protocol);
if (IS_ERR(tag_ops)) {
if (PTR_ERR(tag_ops) == -ENOPROTOOPT)
if (dst->tag_ops) {
if (dst->tag_ops->proto != tag_protocol) {
dev_err(ds->dev,
"A DSA switch tree can have only one tagging protocol\n");
return -EINVAL;
}
/* In the case of multiple CPU ports per switch, the tagging
* protocol is still reference-counted only per switch tree, so
* nothing to do here.
*/
} else {
dst->tag_ops = dsa_tag_driver_get(tag_protocol);
if (IS_ERR(dst->tag_ops)) {
if (PTR_ERR(dst->tag_ops) == -ENOPROTOOPT)
return -EPROBE_DEFER;
dev_warn(ds->dev, "No tagger for this switch\n");
dp->master = NULL;
return PTR_ERR(tag_ops);
return PTR_ERR(dst->tag_ops);
}
}
dp->master = master;
dp->type = DSA_PORT_TYPE_CPU;
dp->filter = tag_ops->filter;
dp->rcv = tag_ops->rcv;
dp->tag_ops = tag_ops;
dsa_port_set_tag_protocol(dp, dst->tag_ops);
dp->dst = dst;
return 0;
......
@@ -28,6 +28,7 @@ enum {
DSA_NOTIFIER_VLAN_ADD,
DSA_NOTIFIER_VLAN_DEL,
DSA_NOTIFIER_MTU,
DSA_NOTIFIER_TAG_PROTO,
};
/* DSA_NOTIFIER_AGEING_TIME */
@@ -82,6 +83,11 @@ struct dsa_notifier_mtu_info {
int mtu;
};
/* DSA_NOTIFIER_TAG_PROTO_* */
struct dsa_notifier_tag_proto_info {
const struct dsa_device_ops *tag_ops;
};
struct dsa_switchdev_event_work {
struct dsa_switch *ds;
int port;
@@ -115,6 +121,7 @@ struct dsa_slave_priv {
/* dsa.c */
const struct dsa_device_ops *dsa_tag_driver_get(int tag_protocol);
void dsa_tag_driver_put(const struct dsa_device_ops *ops);
const struct dsa_device_ops *dsa_find_tagger_by_name(const char *buf);
bool dsa_schedule_work(struct work_struct *work);
const char *dsa_tag_protocol_to_str(const struct dsa_device_ops *ops);
@@ -139,6 +146,8 @@ static inline struct net_device *dsa_master_find_slave(struct net_device *dev,
}
/* port.c */
void dsa_port_set_tag_protocol(struct dsa_port *cpu_dp,
const struct dsa_device_ops *tag_ops);
int dsa_port_set_state(struct dsa_port *dp, u8 state);
int dsa_port_enable_rt(struct dsa_port *dp, struct phy_device *phy);
int dsa_port_enable(struct dsa_port *dp, struct phy_device *phy);
@@ -201,6 +210,8 @@ int dsa_slave_suspend(struct net_device *slave_dev);
int dsa_slave_resume(struct net_device *slave_dev);
int dsa_slave_register_notifier(void);
void dsa_slave_unregister_notifier(void);
void dsa_slave_setup_tagger(struct net_device *slave);
int dsa_slave_change_mtu(struct net_device *dev, int new_mtu);
static inline struct dsa_port *dsa_slave_to_port(const struct net_device *dev)
{
@@ -283,6 +294,12 @@ void dsa_switch_unregister_notifier(struct dsa_switch *ds);
/* dsa2.c */
void dsa_lag_map(struct dsa_switch_tree *dst, struct net_device *lag);
void dsa_lag_unmap(struct dsa_switch_tree *dst, struct net_device *lag);
int dsa_tree_notify(struct dsa_switch_tree *dst, unsigned long e, void *v);
int dsa_broadcast(unsigned long e, void *v);
int dsa_tree_change_tag_proto(struct dsa_switch_tree *dst,
struct net_device *master,
const struct dsa_device_ops *tag_ops,
const struct dsa_device_ops *old_tag_ops);
extern struct list_head dsa_tree_list;
......
@@ -280,7 +280,44 @@ static ssize_t tagging_show(struct device *d, struct device_attribute *attr,
return sprintf(buf, "%s\n",
dsa_tag_protocol_to_str(cpu_dp->tag_ops));
}
static DEVICE_ATTR_RO(tagging);
static ssize_t tagging_store(struct device *d, struct device_attribute *attr,
const char *buf, size_t count)
{
const struct dsa_device_ops *new_tag_ops, *old_tag_ops;
struct net_device *dev = to_net_dev(d);
struct dsa_port *cpu_dp = dev->dsa_ptr;
int err;
old_tag_ops = cpu_dp->tag_ops;
new_tag_ops = dsa_find_tagger_by_name(buf);
/* Bad tagger name, or module is not loaded? */
if (IS_ERR(new_tag_ops))
return PTR_ERR(new_tag_ops);
if (new_tag_ops == old_tag_ops)
/* Drop the temporarily held duplicate reference, since
* the DSA switch tree uses this tagger.
*/
goto out;
err = dsa_tree_change_tag_proto(cpu_dp->ds->dst, dev, new_tag_ops,
old_tag_ops);
if (err) {
/* On failure the old tagger is restored, so we don't need the
* driver for the new one.
*/
dsa_tag_driver_put(new_tag_ops);
return err;
}
/* On success we no longer need the module for the old tagging protocol
*/
out:
dsa_tag_driver_put(old_tag_ops);
return count;
}
static DEVICE_ATTR_RW(tagging);
static struct attribute *dsa_slave_attrs[] = {
&dev_attr_tagging.attr,
......
@@ -13,31 +13,21 @@
#include "dsa_priv.h"
static int dsa_broadcast(unsigned long e, void *v)
{
struct dsa_switch_tree *dst;
int err = 0;
list_for_each_entry(dst, &dsa_tree_list, list) {
struct raw_notifier_head *nh = &dst->nh;
err = raw_notifier_call_chain(nh, e, v);
err = notifier_to_errno(err);
if (err)
break;
}
return err;
}
/**
* dsa_port_notify - Notify the switching fabric of changes to a port
* @dp: port on which change occurred
* @e: event, must be of type DSA_NOTIFIER_*
* @v: event-specific value.
*
* Notify all switches in the DSA tree that this port's switch belongs to,
* including this switch itself, of an event. Allows the other switches to
* reconfigure themselves for cross-chip operations. Can also be used to
* reconfigure ports without net_devices (CPU ports, DSA links) whenever
* a user port's state changes.
*/
static int dsa_port_notify(const struct dsa_port *dp, unsigned long e, void *v)
{
struct raw_notifier_head *nh = &dp->ds->dst->nh;
int err;
err = raw_notifier_call_chain(nh, e, v);
return notifier_to_errno(err);
return dsa_tree_notify(dp->ds->dst, e, v);
}
int dsa_port_set_state(struct dsa_port *dp, u8 state)
@@ -536,6 +526,14 @@ int dsa_port_vlan_del(struct dsa_port *dp,
return dsa_port_notify(dp, DSA_NOTIFIER_VLAN_DEL, &info);
}
void dsa_port_set_tag_protocol(struct dsa_port *cpu_dp,
const struct dsa_device_ops *tag_ops)
{
cpu_dp->filter = tag_ops->filter;
cpu_dp->rcv = tag_ops->rcv;
cpu_dp->tag_ops = tag_ops;
}
static struct phy_device *dsa_port_get_phy_device(struct dsa_port *dp)
{
struct device_node *phy_dn;
......
@@ -1430,7 +1430,7 @@ static void dsa_bridge_mtu_normalization(struct dsa_port *dp)
dsa_hw_port_list_free(&hw_port_list);
}
static int dsa_slave_change_mtu(struct net_device *dev, int new_mtu)
int dsa_slave_change_mtu(struct net_device *dev, int new_mtu)
{
struct net_device *master = dsa_slave_to_master(dev);
struct dsa_port *dp = dsa_slave_to_port(dev);
@@ -1708,6 +1708,27 @@ static int dsa_slave_phy_setup(struct net_device *slave_dev)
return ret;
}
void dsa_slave_setup_tagger(struct net_device *slave)
{
struct dsa_port *dp = dsa_slave_to_port(slave);
struct dsa_slave_priv *p = netdev_priv(slave);
const struct dsa_port *cpu_dp = dp->cpu_dp;
struct net_device *master = cpu_dp->master;
if (cpu_dp->tag_ops->tail_tag)
slave->needed_tailroom = cpu_dp->tag_ops->overhead;
else
slave->needed_headroom = cpu_dp->tag_ops->overhead;
/* Try to save one extra realloc later in the TX path (in the master)
* by also inheriting the master's needed headroom and tailroom.
* The 8021q driver also does this.
*/
slave->needed_headroom += master->needed_headroom;
slave->needed_tailroom += master->needed_tailroom;
p->xmit = cpu_dp->tag_ops->xmit;
}
static struct lock_class_key dsa_slave_netdev_xmit_lock_key;
static void dsa_slave_set_lockdep_class_one(struct net_device *dev,
struct netdev_queue *txq,
@@ -1782,16 +1803,6 @@ int dsa_slave_create(struct dsa_port *port)
slave_dev->netdev_ops = &dsa_slave_netdev_ops;
if (ds->ops->port_max_mtu)
slave_dev->max_mtu = ds->ops->port_max_mtu(ds, port->index);
if (cpu_dp->tag_ops->tail_tag)
slave_dev->needed_tailroom = cpu_dp->tag_ops->overhead;
else
slave_dev->needed_headroom = cpu_dp->tag_ops->overhead;
/* Try to save one extra realloc later in the TX path (in the master)
* by also inheriting the master's needed headroom and tailroom.
* The 8021q driver also does this.
*/
slave_dev->needed_headroom += master->needed_headroom;
slave_dev->needed_tailroom += master->needed_tailroom;
SET_NETDEV_DEVTYPE(slave_dev, &dsa_type);
netdev_for_each_tx_queue(slave_dev, dsa_slave_set_lockdep_class_one,
@@ -1814,8 +1825,8 @@ int dsa_slave_create(struct dsa_port *port)
p->dp = port;
INIT_LIST_HEAD(&p->mall_tc_list);
p->xmit = cpu_dp->tag_ops->xmit;
port->slave = slave_dev;
dsa_slave_setup_tagger(slave_dev);
rtnl_lock();
ret = dsa_slave_change_mtu(slave_dev, ETH_DATA_LEN);
......
@@ -297,6 +297,58 @@ static int dsa_switch_vlan_del(struct dsa_switch *ds,
return 0;
}
static bool dsa_switch_tag_proto_match(struct dsa_switch *ds, int port,
struct dsa_notifier_tag_proto_info *info)
{
if (dsa_is_cpu_port(ds, port) || dsa_is_dsa_port(ds, port))
return true;
return false;
}
static int dsa_switch_change_tag_proto(struct dsa_switch *ds,
struct dsa_notifier_tag_proto_info *info)
{
const struct dsa_device_ops *tag_ops = info->tag_ops;
int port, err;
if (!ds->ops->change_tag_protocol)
return -EOPNOTSUPP;
ASSERT_RTNL();
for (port = 0; port < ds->num_ports; port++) {
if (dsa_switch_tag_proto_match(ds, port, info)) {
err = ds->ops->change_tag_protocol(ds, port,
tag_ops->proto);
if (err)
return err;
if (dsa_is_cpu_port(ds, port))
dsa_port_set_tag_protocol(dsa_to_port(ds, port),
tag_ops);
}
}
/* Now that changing the tag protocol can no longer fail, let's update
* the remaining bits which are "duplicated for faster access", and the
* bits that depend on the tagger, such as the MTU.
*/
for (port = 0; port < ds->num_ports; port++) {
if (dsa_is_user_port(ds, port)) {
struct net_device *slave;
slave = dsa_to_port(ds, port)->slave;
dsa_slave_setup_tagger(slave);
/* rtnl_mutex is held in dsa_tree_change_tag_proto */
dsa_slave_change_mtu(slave, slave->mtu);
}
}
return 0;
}
static int dsa_switch_event(struct notifier_block *nb,
unsigned long event, void *info)
{
......@@ -343,6 +395,9 @@ static int dsa_switch_event(struct notifier_block *nb,
case DSA_NOTIFIER_MTU:
err = dsa_switch_mtu(ds, info);
break;
case DSA_NOTIFIER_TAG_PROTO:
err = dsa_switch_change_tag_proto(ds, info);
break;
default:
err = -EOPNOTSUPP;
break;
......
@@ -133,10 +133,21 @@ u16 dsa_8021q_rx_subvlan(u16 vid)
}
EXPORT_SYMBOL_GPL(dsa_8021q_rx_subvlan);

+bool vid_is_dsa_8021q_rxvlan(u16 vid)
+{
+	return (vid & DSA_8021Q_DIR_MASK) == DSA_8021Q_DIR_RX;
+}
+EXPORT_SYMBOL_GPL(vid_is_dsa_8021q_rxvlan);
+
+bool vid_is_dsa_8021q_txvlan(u16 vid)
+{
+	return (vid & DSA_8021Q_DIR_MASK) == DSA_8021Q_DIR_TX;
+}
+EXPORT_SYMBOL_GPL(vid_is_dsa_8021q_txvlan);
+
bool vid_is_dsa_8021q(u16 vid)
{
-	return ((vid & DSA_8021Q_DIR_MASK) == DSA_8021Q_DIR_RX ||
-		(vid & DSA_8021Q_DIR_MASK) == DSA_8021Q_DIR_TX);
+	return vid_is_dsa_8021q_rxvlan(vid) || vid_is_dsa_8021q_txvlan(vid);
}
EXPORT_SYMBOL_GPL(vid_is_dsa_8021q);
......
// SPDX-License-Identifier: GPL-2.0
/* Copyright 2020-2021 NXP Semiconductors
 *
 * An implementation of the software-defined tag_8021q.c tagger format, which
 * also preserves full functionality under a vlan_filtering bridge. It does
 * this by using the TCAM engines for:
 * - pushing the RX VLAN as a second, outer tag, on egress towards the CPU port
 * - redirecting towards the correct front port based on TX VLAN and popping
 *   that on egress
 */
#include <linux/dsa/8021q.h>

#include "dsa_priv.h"

static struct sk_buff *ocelot_xmit(struct sk_buff *skb,
				   struct net_device *netdev)
{
	struct dsa_port *dp = dsa_slave_to_port(netdev);
	u16 tx_vid = dsa_8021q_tx_vid(dp->ds, dp->index);
	u16 queue_mapping = skb_get_queue_mapping(skb);
	u8 pcp = netdev_txq_to_tc(netdev, queue_mapping);

	return dsa_8021q_xmit(skb, netdev, ETH_P_8021Q,
			      ((pcp << VLAN_PRIO_SHIFT) | tx_vid));
}

static struct sk_buff *ocelot_rcv(struct sk_buff *skb,
				  struct net_device *netdev,
				  struct packet_type *pt)
{
	int src_port, switch_id, qos_class;
	u16 vid, tci;

	skb_push_rcsum(skb, ETH_HLEN);
	if (skb_vlan_tag_present(skb)) {
		tci = skb_vlan_tag_get(skb);
		__vlan_hwaccel_clear_tag(skb);
	} else {
		__skb_vlan_pop(skb, &tci);
	}
	skb_pull_rcsum(skb, ETH_HLEN);

	vid = tci & VLAN_VID_MASK;
	src_port = dsa_8021q_rx_source_port(vid);
	switch_id = dsa_8021q_rx_switch_id(vid);
	qos_class = (tci & VLAN_PRIO_MASK) >> VLAN_PRIO_SHIFT;

	skb->dev = dsa_master_find_slave(netdev, switch_id, src_port);
	if (!skb->dev)
		return NULL;

	skb->offload_fwd_mark = 1;
	skb->priority = qos_class;

	return skb;
}

static const struct dsa_device_ops ocelot_8021q_netdev_ops = {
	.name			= "ocelot-8021q",
	.proto			= DSA_TAG_PROTO_OCELOT_8021Q,
	.xmit			= ocelot_xmit,
	.rcv			= ocelot_rcv,
	.overhead		= VLAN_HLEN,
};

MODULE_LICENSE("GPL v2");
MODULE_ALIAS_DSA_TAG_DRIVER(DSA_TAG_PROTO_OCELOT_8021Q);

module_dsa_tag_driver(ocelot_8021q_netdev_ops);