Commit 56435d91 authored by Jakub Kicinski

Merge branch 'tag_8021q-for-ocelot-switches'

Vladimir Oltean says:

====================
tag_8021q for Ocelot switches

The Felix switch inside the LS1028A has an issue. It has a 2.5G CPU
port, while the external ports, in the majority of use cases, run at 1G.
This means that, when the CPU injects traffic into the switch, it is
very easy to run into congestion. This is not to say that congestion is
impossible even with all ports running at the same speed, just that the
default configuration is already very prone to it by design.

Normally, the way to deal with that is using Ethernet flow control
(PAUSE frames).

However, this functionality does not work today with the ENETC - Felix
switch pair. The hardware issue is currently being documented as an
erratum within NXP, but several customers have been asking for a
reasonable workaround in the meantime.

In truth, the LS1028A has two internal port pairs. The lack of flow
control is an issue only in NPI mode (Node Processor Interface, i.e. the
mode where the "CPU port module", which carries DSA-style tagged
packets, is connected to a regular Ethernet port), and NPI mode is
supported by Felix on a single port.

In past BSPs, we have had setups where both internal port pairs were
enabled. We were advertising the following setup:

"data port"     "control port"
  (2.5G)            (1G)

   eno2             eno3
    ^                ^
    |                |
    | regular        | DSA-tagged
    | frames         | frames
    |                |
    v                v
   swp4             swp5

This works, but it is highly impractical, because it shifts the task of
designing a functional system (choosing which port to use, depending on
the type of traffic required) from NXP onto the end user. The swpN
interfaces would have to be bridged with swp4 in order for the eno2
"data port" to have access to the outside network. And the swpN
interfaces would still be capable of IP networking, so running a DHCP
client would yield two IP interfaces from the same subnet, one assigned
to eno2 and the other to swpN (N being 0, 1, 2 or 3).

Also, the dual-port design doesn't scale. When attaching another DSA
switch to a Felix port, the end result is that the "data port" cannot
carry any meaningful data to the external world, since it lacks the DSA
tags required to traverse the sja1105 switches below. All that traffic
needs to go through the "control port".

So in newer BSPs there was a desire to simplify that setup, and only
have one internal port pair:

   eno2            eno3
    ^
    |
    | DSA-tagged    x disabled
    | frames
    |
    v
   swp4            swp5

However, this setup only exacerbates the issue of not having flow
control on the NPI port, since that is the only port now. Also, there
are use cases that still require the "data port", such as IEEE 802.1CB
(TSN stream identification doesn't work over an NPI port), source
MAC address learning over NPI, etc.

Again, there is a desire to keep the simplicity of the single internal
port setup, while regaining the benefits of having a dedicated data port
as well. And this series attempts to deliver just that.

So the NPI functionality is disabled conditionally. Its purpose was:
- To ensure individually addressable ports on TX. This can be replaced
  by using some designated VLAN tags which are pushed by the DSA tagger
  code, then removed by the switch (so they are invisible to the outside
  world and to the user).
- To ensure source port identification on RX. Again, this can be
  replaced by using designated VLAN tags to encapsulate all RX traffic
  (each VLAN uniquely identifies a source port). The DSA tagger
  determines the source port based on the VLAN number, then strips that
  header (a sketch of such a VID layout follows this list).
- To deliver PTP timestamps. These cannot be carried in VLAN headers,
  so we need to take a step back and see how else we can do that. The
  Microchip Ocelot-1 (VSC7514 MIPS) driver performs manual
  injection/extraction from the CPU port module using register-based
  MMIO, not over Ethernet. We will need to do the same from DSA, which
  makes this tagger a sort of hybrid between DSA and pure switchdev.
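
For reference, here is a rough sketch (not part of this series) of how a
tag_8021q-style VID could encode the information that the NPI header used
to carry. The bit positions and helper names below are illustrative
assumptions, not the authoritative layout from net/dsa/tag_8021q.c; the
point is only that the direction, switch ID and port fit into the 12-bit
VLAN ID and can be recovered from it alone.

#include <linux/bitfield.h>
#include <linux/bits.h>
#include <linux/types.h>

/* Illustrative field layout -- widths and positions are assumptions. */
#define EX_8021Q_DIR_MASK     GENMASK(11, 10) /* e.g. 1 = TX, 2 = RX */
#define EX_8021Q_SWITCH_MASK  GENMASK(8, 6)   /* switch index in the tree */
#define EX_8021Q_PORT_MASK    GENMASK(3, 0)   /* source/destination port */

/* VID pushed by the tagger on TX: tells the switch where to send the frame. */
static inline u16 ex_8021q_tx_vid(int sw_index, int port)
{
        return FIELD_PREP(EX_8021Q_DIR_MASK, 1) |
               FIELD_PREP(EX_8021Q_SWITCH_MASK, sw_index) |
               FIELD_PREP(EX_8021Q_PORT_MASK, port);
}

/* On RX, the tagger recovers the source port from the VID alone. */
static inline int ex_8021q_rx_source_port(u16 vid)
{
        return FIELD_GET(EX_8021Q_PORT_MASK, vid);
}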
====================

Link: https://lore.kernel.org/r/20210129010009.3959398-1-olteanv@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
parents de1da8bc e21268ef
......@@ -3,5 +3,12 @@ Date: August 2018
KernelVersion: 4.20
Contact: netdev@vger.kernel.org
Description:
String indicating the type of tagging protocol used by the
DSA slave network device.
On read, this file returns a string indicating the type of
tagging protocol used by the DSA network devices that are
attached to this master interface.
On write, this file changes the tagging protocol of the
attached DSA switches, if this operation is supported by the
driver. Changing the tagging protocol must be done with the DSA
interfaces and the master interface all administratively down.
See the "name" field of each registered struct dsa_device_ops
for a list of valid values.
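
As a usage illustration (not part of this patch set): once the master
and all attached DSA interfaces are administratively down, the protocol
can be changed by writing a tagger name into this file. A minimal
userspace sketch, assuming the DSA master is called eno2 and the
"ocelot-8021q" tagger is available:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
        /* Path per this ABI entry; "eno2" is an assumed master name. */
        const char *path = "/sys/class/net/eno2/dsa/tagging";
        const char *proto = "ocelot-8021q"; /* "name" of a dsa_device_ops */
        int fd = open(path, O_WRONLY);

        if (fd < 0) {
                perror("open");
                return 1;
        }
        /* The kernel rejects this with EBUSY if any interface is still up. */
        if (write(fd, proto, strlen(proto)) < 0)
                perror("write");
        close(fd);
        return 0;
}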
......@@ -12859,6 +12859,7 @@ F: drivers/net/dsa/ocelot/*
F: drivers/net/ethernet/mscc/
F: include/soc/mscc/ocelot*
F: net/dsa/tag_ocelot.c
F: net/dsa/tag_ocelot_8021q.c
F: tools/testing/selftests/drivers/net/ocelot/*
OCXL (Open Coherent Accelerator Processor Interface OpenCAPI) DRIVER
......
......@@ -6,6 +6,7 @@ config NET_DSA_MSCC_FELIX
depends on NET_VENDOR_FREESCALE
depends on HAS_IOMEM
select MSCC_OCELOT_SWITCH_LIB
select NET_DSA_TAG_OCELOT_8021Q
select NET_DSA_TAG_OCELOT
select FSL_ENETC_MDIO
select PCS_LYNX
......@@ -19,6 +20,7 @@ config NET_DSA_MSCC_SEVILLE
depends on NET_VENDOR_MICROSEMI
depends on HAS_IOMEM
select MSCC_OCELOT_SWITCH_LIB
select NET_DSA_TAG_OCELOT_8021Q
select NET_DSA_TAG_OCELOT
select PCS_LYNX
help
......
......@@ -48,6 +48,8 @@ struct felix {
struct lynx_pcs **pcs;
resource_size_t switch_base;
resource_size_t imdio_base;
struct dsa_8021q_context *dsa_8021q_ctx;
enum dsa_tag_protocol tag_proto;
};
struct net_device *felix_port_to_netdev(struct ocelot *ocelot, int port);
......
......@@ -1467,6 +1467,7 @@ static int felix_pci_probe(struct pci_dev *pdev,
ds->ops = &felix_switch_ops;
ds->priv = ocelot;
felix->ds = ds;
felix->tag_proto = DSA_TAG_PROTO_OCELOT;
err = dsa_register_switch(ds);
if (err) {
......
......@@ -1246,6 +1246,7 @@ static int seville_probe(struct platform_device *pdev)
ds->ops = &felix_switch_ops;
ds->priv = ocelot;
felix->ds = ds;
felix->tag_proto = DSA_TAG_PROTO_OCELOT;
err = dsa_register_switch(ds);
if (err) {
......
......@@ -889,10 +889,87 @@ int ocelot_get_ts_info(struct ocelot *ocelot, int port,
}
EXPORT_SYMBOL(ocelot_get_ts_info);
static u32 ocelot_get_dsa_8021q_cpu_mask(struct ocelot *ocelot)
{
u32 mask = 0;
int port;
for (port = 0; port < ocelot->num_phys_ports; port++) {
struct ocelot_port *ocelot_port = ocelot->ports[port];
if (!ocelot_port)
continue;
if (ocelot_port->is_dsa_8021q_cpu)
mask |= BIT(port);
}
return mask;
}
void ocelot_apply_bridge_fwd_mask(struct ocelot *ocelot)
{
unsigned long cpu_fwd_mask;
int port;
/* If a DSA tag_8021q CPU exists, it needs to be included in the
* regular forwarding path of the front ports regardless of whether
* those are bridged or standalone.
* If DSA tag_8021q is not used, this returns 0, which is fine because
* the hardware-based CPU port module can be a destination for packets
* even if it isn't part of PGID_SRC.
*/
cpu_fwd_mask = ocelot_get_dsa_8021q_cpu_mask(ocelot);
/* Apply FWD mask. The loop is needed to add/remove the current port as
* a source for the other ports.
*/
for (port = 0; port < ocelot->num_phys_ports; port++) {
struct ocelot_port *ocelot_port = ocelot->ports[port];
unsigned long mask;
if (!ocelot_port) {
/* Unused ports can't send anywhere */
mask = 0;
} else if (ocelot_port->is_dsa_8021q_cpu) {
/* The DSA tag_8021q CPU ports need to be able to
* forward packets to all other ports except for
* themselves
*/
mask = GENMASK(ocelot->num_phys_ports - 1, 0);
mask &= ~cpu_fwd_mask;
} else if (ocelot->bridge_fwd_mask & BIT(port)) {
int lag;
mask = ocelot->bridge_fwd_mask & ~BIT(port);
for (lag = 0; lag < ocelot->num_phys_ports; lag++) {
unsigned long bond_mask = ocelot->lags[lag];
if (!bond_mask)
continue;
if (bond_mask & BIT(port)) {
mask &= ~bond_mask;
break;
}
}
} else {
/* Standalone ports forward only to DSA tag_8021q CPU
* ports (if those exist), or to the hardware CPU port
* module otherwise.
*/
mask = cpu_fwd_mask;
}
ocelot_write_rix(ocelot, mask, ANA_PGID_PGID, PGID_SRC + port);
}
}
EXPORT_SYMBOL(ocelot_apply_bridge_fwd_mask);
void ocelot_bridge_stp_state_set(struct ocelot *ocelot, int port, u8 state)
{
u32 port_cfg;
int p, i;
if (!(BIT(port) & ocelot->bridge_mask))
return;
......@@ -915,32 +992,7 @@ void ocelot_bridge_stp_state_set(struct ocelot *ocelot, int port, u8 state)
ocelot_write_gix(ocelot, port_cfg, ANA_PORT_PORT_CFG, port);
/* Apply FWD mask. The loop is needed to add/remove the current port as
* a source for the other ports.
*/
for (p = 0; p < ocelot->num_phys_ports; p++) {
if (ocelot->bridge_fwd_mask & BIT(p)) {
unsigned long mask = ocelot->bridge_fwd_mask & ~BIT(p);
for (i = 0; i < ocelot->num_phys_ports; i++) {
unsigned long bond_mask = ocelot->lags[i];
if (!bond_mask)
continue;
if (bond_mask & BIT(p)) {
mask &= ~bond_mask;
break;
}
}
ocelot_write_rix(ocelot, mask,
ANA_PGID_PGID, PGID_SRC + p);
} else {
ocelot_write_rix(ocelot, 0,
ANA_PGID_PGID, PGID_SRC + p);
}
}
ocelot_apply_bridge_fwd_mask(ocelot);
}
EXPORT_SYMBOL(ocelot_bridge_stp_state_set);
......@@ -1297,6 +1349,7 @@ int ocelot_port_lag_join(struct ocelot *ocelot, int port,
}
ocelot_setup_lag(ocelot, lag);
ocelot_apply_bridge_fwd_mask(ocelot);
ocelot_set_aggr_pgids(ocelot);
return 0;
......@@ -1330,6 +1383,7 @@ void ocelot_port_lag_leave(struct ocelot *ocelot, int port,
ocelot_write_gix(ocelot, port_cfg | ANA_PORT_PORT_CFG_PORTID_VAL(port),
ANA_PORT_PORT_CFG, port);
ocelot_apply_bridge_fwd_mask(ocelot);
ocelot_set_aggr_pgids(ocelot);
}
EXPORT_SYMBOL(ocelot_port_lag_leave);
......@@ -1350,9 +1404,9 @@ void ocelot_port_set_maxlen(struct ocelot *ocelot, int port, size_t sdu)
if (port == ocelot->npi) {
maxlen += OCELOT_TAG_LEN;
if (ocelot->inj_prefix == OCELOT_TAG_PREFIX_SHORT)
if (ocelot->npi_inj_prefix == OCELOT_TAG_PREFIX_SHORT)
maxlen += OCELOT_SHORT_PREFIX_LEN;
else if (ocelot->inj_prefix == OCELOT_TAG_PREFIX_LONG)
else if (ocelot->npi_inj_prefix == OCELOT_TAG_PREFIX_LONG)
maxlen += OCELOT_LONG_PREFIX_LEN;
}
......@@ -1382,9 +1436,9 @@ int ocelot_get_max_mtu(struct ocelot *ocelot, int port)
if (port == ocelot->npi) {
max_mtu -= OCELOT_TAG_LEN;
if (ocelot->inj_prefix == OCELOT_TAG_PREFIX_SHORT)
if (ocelot->npi_inj_prefix == OCELOT_TAG_PREFIX_SHORT)
max_mtu -= OCELOT_SHORT_PREFIX_LEN;
else if (ocelot->inj_prefix == OCELOT_TAG_PREFIX_LONG)
else if (ocelot->npi_inj_prefix == OCELOT_TAG_PREFIX_LONG)
max_mtu -= OCELOT_LONG_PREFIX_LEN;
}
......@@ -1469,9 +1523,9 @@ static void ocelot_cpu_port_init(struct ocelot *ocelot)
ocelot_fields_write(ocelot, cpu, QSYS_SWITCH_PORT_MODE_PORT_ENA, 1);
/* CPU port Injection/Extraction configuration */
ocelot_fields_write(ocelot, cpu, SYS_PORT_MODE_INCL_XTR_HDR,
ocelot->xtr_prefix);
OCELOT_TAG_PREFIX_NONE);
ocelot_fields_write(ocelot, cpu, SYS_PORT_MODE_INCL_INJ_HDR,
ocelot->inj_prefix);
OCELOT_TAG_PREFIX_NONE);
/* Configure the CPU port to be VLAN aware */
ocelot_write_gix(ocelot, ANA_PORT_VLAN_CFG_VLAN_VID(0) |
......
......@@ -622,7 +622,8 @@ static int ocelot_flower_parse(struct ocelot *ocelot, int port, bool ingress,
int ret;
filter->prio = f->common.prio;
filter->id = f->cookie;
filter->id.cookie = f->cookie;
filter->id.tc_offload = true;
ret = ocelot_flower_parse_action(ocelot, port, ingress, f, filter);
if (ret)
......@@ -717,7 +718,7 @@ int ocelot_cls_flower_destroy(struct ocelot *ocelot, int port,
block = &ocelot->block[block_id];
filter = ocelot_vcap_block_find_filter_by_id(block, f->cookie);
filter = ocelot_vcap_block_find_filter_by_id(block, f->cookie, true);
if (!filter)
return 0;
......@@ -741,7 +742,7 @@ int ocelot_cls_flower_stats(struct ocelot *ocelot, int port,
block = &ocelot->block[block_id];
filter = ocelot_vcap_block_find_filter_by_id(block, f->cookie);
filter = ocelot_vcap_block_find_filter_by_id(block, f->cookie, true);
if (!filter || filter->type == OCELOT_VCAP_FILTER_DUMMY)
return 0;
......
......@@ -9,6 +9,7 @@
*/
#include <linux/if_bridge.h>
#include <net/pkt_cls.h>
#include "ocelot.h"
#include "ocelot_vcap.h"
......
......@@ -959,6 +959,12 @@ static void ocelot_vcap_filter_add_to_block(struct ocelot *ocelot,
list_add(&filter->list, pos->prev);
}
static bool ocelot_vcap_filter_equal(const struct ocelot_vcap_filter *a,
const struct ocelot_vcap_filter *b)
{
return !memcmp(&a->id, &b->id, sizeof(struct ocelot_vcap_id));
}
static int ocelot_vcap_block_get_filter_index(struct ocelot_vcap_block *block,
struct ocelot_vcap_filter *filter)
{
......@@ -966,7 +972,7 @@ static int ocelot_vcap_block_get_filter_index(struct ocelot_vcap_block *block,
int index = 0;
list_for_each_entry(tmp, &block->rules, list) {
if (filter->id == tmp->id)
if (ocelot_vcap_filter_equal(filter, tmp))
return index;
index++;
}
......@@ -991,16 +997,19 @@ ocelot_vcap_block_find_filter_by_index(struct ocelot_vcap_block *block,
}
struct ocelot_vcap_filter *
ocelot_vcap_block_find_filter_by_id(struct ocelot_vcap_block *block, int id)
ocelot_vcap_block_find_filter_by_id(struct ocelot_vcap_block *block, int cookie,
bool tc_offload)
{
struct ocelot_vcap_filter *filter;
list_for_each_entry(filter, &block->rules, list)
if (filter->id == id)
if (filter->id.tc_offload == tc_offload &&
filter->id.cookie == cookie)
return filter;
return NULL;
}
EXPORT_SYMBOL(ocelot_vcap_block_find_filter_by_id);
/* If @on=false, then SNAP, ARP, IP and OAM frames will not match on keys based
* on destination and source MAC addresses, but only on higher-level protocol
......@@ -1150,6 +1159,7 @@ int ocelot_vcap_filter_add(struct ocelot *ocelot,
vcap_entry_set(ocelot, index, filter);
return 0;
}
EXPORT_SYMBOL(ocelot_vcap_filter_add);
static void ocelot_vcap_block_remove_filter(struct ocelot *ocelot,
struct ocelot_vcap_block *block,
......@@ -1160,7 +1170,7 @@ static void ocelot_vcap_block_remove_filter(struct ocelot *ocelot,
list_for_each_safe(pos, q, &block->rules) {
tmp = list_entry(pos, struct ocelot_vcap_filter, list);
if (tmp->id == filter->id) {
if (ocelot_vcap_filter_equal(filter, tmp)) {
if (tmp->block_id == VCAP_IS2 &&
tmp->action.police_ena)
ocelot_vcap_policer_del(ocelot, block,
......@@ -1204,6 +1214,7 @@ int ocelot_vcap_filter_del(struct ocelot *ocelot,
return 0;
}
EXPORT_SYMBOL(ocelot_vcap_filter_del);
int ocelot_vcap_filter_stats_update(struct ocelot *ocelot,
struct ocelot_vcap_filter *filter)
......
......@@ -7,304 +7,13 @@
#define _MSCC_OCELOT_VCAP_H_
#include "ocelot.h"
#include "ocelot_police.h"
#include <net/sch_generic.h>
#include <net/pkt_cls.h>
#include <soc/mscc/ocelot_vcap.h>
#include <net/flow_offload.h>
#define OCELOT_POLICER_DISCARD 0x17f
struct ocelot_ipv4 {
u8 addr[4];
};
enum ocelot_vcap_bit {
OCELOT_VCAP_BIT_ANY,
OCELOT_VCAP_BIT_0,
OCELOT_VCAP_BIT_1
};
struct ocelot_vcap_u8 {
u8 value[1];
u8 mask[1];
};
struct ocelot_vcap_u16 {
u8 value[2];
u8 mask[2];
};
struct ocelot_vcap_u24 {
u8 value[3];
u8 mask[3];
};
struct ocelot_vcap_u32 {
u8 value[4];
u8 mask[4];
};
struct ocelot_vcap_u40 {
u8 value[5];
u8 mask[5];
};
struct ocelot_vcap_u48 {
u8 value[6];
u8 mask[6];
};
struct ocelot_vcap_u64 {
u8 value[8];
u8 mask[8];
};
struct ocelot_vcap_u128 {
u8 value[16];
u8 mask[16];
};
struct ocelot_vcap_vid {
u16 value;
u16 mask;
};
struct ocelot_vcap_ipv4 {
struct ocelot_ipv4 value;
struct ocelot_ipv4 mask;
};
struct ocelot_vcap_udp_tcp {
u16 value;
u16 mask;
};
struct ocelot_vcap_port {
u8 value;
u8 mask;
};
enum ocelot_vcap_key_type {
OCELOT_VCAP_KEY_ANY,
OCELOT_VCAP_KEY_ETYPE,
OCELOT_VCAP_KEY_LLC,
OCELOT_VCAP_KEY_SNAP,
OCELOT_VCAP_KEY_ARP,
OCELOT_VCAP_KEY_IPV4,
OCELOT_VCAP_KEY_IPV6
};
struct ocelot_vcap_key_vlan {
struct ocelot_vcap_vid vid; /* VLAN ID (12 bit) */
struct ocelot_vcap_u8 pcp; /* PCP (3 bit) */
enum ocelot_vcap_bit dei; /* DEI */
enum ocelot_vcap_bit tagged; /* Tagged/untagged frame */
};
struct ocelot_vcap_key_etype {
struct ocelot_vcap_u48 dmac;
struct ocelot_vcap_u48 smac;
struct ocelot_vcap_u16 etype;
struct ocelot_vcap_u16 data; /* MAC data */
};
struct ocelot_vcap_key_llc {
struct ocelot_vcap_u48 dmac;
struct ocelot_vcap_u48 smac;
/* LLC header: DSAP at byte 0, SSAP at byte 1, Control at byte 2 */
struct ocelot_vcap_u32 llc;
};
struct ocelot_vcap_key_snap {
struct ocelot_vcap_u48 dmac;
struct ocelot_vcap_u48 smac;
/* SNAP header: Organization Code at byte 0, Type at byte 3 */
struct ocelot_vcap_u40 snap;
};
struct ocelot_vcap_key_arp {
struct ocelot_vcap_u48 smac;
enum ocelot_vcap_bit arp; /* Opcode ARP/RARP */
enum ocelot_vcap_bit req; /* Opcode request/reply */
enum ocelot_vcap_bit unknown; /* Opcode unknown */
enum ocelot_vcap_bit smac_match; /* Sender MAC matches SMAC */
enum ocelot_vcap_bit dmac_match; /* Target MAC matches DMAC */
/**< Protocol addr. length 4, hardware length 6 */
enum ocelot_vcap_bit length;
enum ocelot_vcap_bit ip; /* Protocol address type IP */
enum ocelot_vcap_bit ethernet; /* Hardware address type Ethernet */
struct ocelot_vcap_ipv4 sip; /* Sender IP address */
struct ocelot_vcap_ipv4 dip; /* Target IP address */
};
struct ocelot_vcap_key_ipv4 {
enum ocelot_vcap_bit ttl; /* TTL zero */
enum ocelot_vcap_bit fragment; /* Fragment */
enum ocelot_vcap_bit options; /* Header options */
struct ocelot_vcap_u8 ds;
struct ocelot_vcap_u8 proto; /* Protocol */
struct ocelot_vcap_ipv4 sip; /* Source IP address */
struct ocelot_vcap_ipv4 dip; /* Destination IP address */
struct ocelot_vcap_u48 data; /* Not UDP/TCP: IP data */
struct ocelot_vcap_udp_tcp sport; /* UDP/TCP: Source port */
struct ocelot_vcap_udp_tcp dport; /* UDP/TCP: Destination port */
enum ocelot_vcap_bit tcp_fin;
enum ocelot_vcap_bit tcp_syn;
enum ocelot_vcap_bit tcp_rst;
enum ocelot_vcap_bit tcp_psh;
enum ocelot_vcap_bit tcp_ack;
enum ocelot_vcap_bit tcp_urg;
enum ocelot_vcap_bit sip_eq_dip; /* SIP equals DIP */
enum ocelot_vcap_bit sport_eq_dport; /* SPORT equals DPORT */
enum ocelot_vcap_bit seq_zero; /* TCP sequence number is zero */
};
struct ocelot_vcap_key_ipv6 {
struct ocelot_vcap_u8 proto; /* IPv6 protocol */
struct ocelot_vcap_u128 sip; /* IPv6 source (byte 0-7 ignored) */
struct ocelot_vcap_u128 dip; /* IPv6 destination (byte 0-7 ignored) */
enum ocelot_vcap_bit ttl; /* TTL zero */
struct ocelot_vcap_u8 ds;
struct ocelot_vcap_u48 data; /* Not UDP/TCP: IP data */
struct ocelot_vcap_udp_tcp sport;
struct ocelot_vcap_udp_tcp dport;
enum ocelot_vcap_bit tcp_fin;
enum ocelot_vcap_bit tcp_syn;
enum ocelot_vcap_bit tcp_rst;
enum ocelot_vcap_bit tcp_psh;
enum ocelot_vcap_bit tcp_ack;
enum ocelot_vcap_bit tcp_urg;
enum ocelot_vcap_bit sip_eq_dip; /* SIP equals DIP */
enum ocelot_vcap_bit sport_eq_dport; /* SPORT equals DPORT */
enum ocelot_vcap_bit seq_zero; /* TCP sequence number is zero */
};
enum ocelot_mask_mode {
OCELOT_MASK_MODE_NONE,
OCELOT_MASK_MODE_PERMIT_DENY,
OCELOT_MASK_MODE_POLICY,
OCELOT_MASK_MODE_REDIRECT,
};
enum ocelot_es0_tag {
OCELOT_NO_ES0_TAG,
OCELOT_ES0_TAG,
OCELOT_FORCE_PORT_TAG,
OCELOT_FORCE_UNTAG,
};
enum ocelot_tag_tpid_sel {
OCELOT_TAG_TPID_SEL_8021Q,
OCELOT_TAG_TPID_SEL_8021AD,
};
struct ocelot_vcap_action {
union {
/* VCAP ES0 */
struct {
enum ocelot_es0_tag push_outer_tag;
enum ocelot_es0_tag push_inner_tag;
enum ocelot_tag_tpid_sel tag_a_tpid_sel;
int tag_a_vid_sel;
int tag_a_pcp_sel;
u16 vid_a_val;
u8 pcp_a_val;
u8 dei_a_val;
enum ocelot_tag_tpid_sel tag_b_tpid_sel;
int tag_b_vid_sel;
int tag_b_pcp_sel;
u16 vid_b_val;
u8 pcp_b_val;
u8 dei_b_val;
};
/* VCAP IS1 */
struct {
bool vid_replace_ena;
u16 vid;
bool vlan_pop_cnt_ena;
int vlan_pop_cnt;
bool pcp_dei_ena;
u8 pcp;
u8 dei;
bool qos_ena;
u8 qos_val;
u8 pag_override_mask;
u8 pag_val;
};
/* VCAP IS2 */
struct {
bool cpu_copy_ena;
u8 cpu_qu_num;
enum ocelot_mask_mode mask_mode;
unsigned long port_mask;
bool police_ena;
struct ocelot_policer pol;
u32 pol_ix;
};
};
};
struct ocelot_vcap_stats {
u64 bytes;
u64 pkts;
u64 used;
};
enum ocelot_vcap_filter_type {
OCELOT_VCAP_FILTER_DUMMY,
OCELOT_VCAP_FILTER_PAG,
OCELOT_VCAP_FILTER_OFFLOAD,
};
struct ocelot_vcap_filter {
struct list_head list;
enum ocelot_vcap_filter_type type;
int block_id;
int goto_target;
int lookup;
u8 pag;
u16 prio;
u32 id;
struct ocelot_vcap_action action;
struct ocelot_vcap_stats stats;
/* For VCAP IS1 and IS2 */
unsigned long ingress_port_mask;
/* For VCAP ES0 */
struct ocelot_vcap_port ingress_port;
struct ocelot_vcap_port egress_port;
enum ocelot_vcap_bit dmac_mc;
enum ocelot_vcap_bit dmac_bc;
struct ocelot_vcap_key_vlan vlan;
enum ocelot_vcap_key_type key_type;
union {
/* OCELOT_VCAP_KEY_ANY: No specific fields */
struct ocelot_vcap_key_etype etype;
struct ocelot_vcap_key_llc llc;
struct ocelot_vcap_key_snap snap;
struct ocelot_vcap_key_arp arp;
struct ocelot_vcap_key_ipv4 ipv4;
struct ocelot_vcap_key_ipv6 ipv6;
} key;
};
int ocelot_vcap_filter_add(struct ocelot *ocelot,
struct ocelot_vcap_filter *rule,
struct netlink_ext_ack *extack);
int ocelot_vcap_filter_del(struct ocelot *ocelot,
struct ocelot_vcap_filter *rule);
int ocelot_vcap_filter_stats_update(struct ocelot *ocelot,
struct ocelot_vcap_filter *rule);
struct ocelot_vcap_filter *
ocelot_vcap_block_find_filter_by_id(struct ocelot_vcap_block *block, int id);
void ocelot_detect_vcap_constants(struct ocelot *ocelot);
int ocelot_vcap_init(struct ocelot *ocelot);
......
......@@ -1347,8 +1347,6 @@ static int mscc_ocelot_probe(struct platform_device *pdev)
ocelot->num_flooding_pgids = 1;
ocelot->vcap = vsc7514_vcap_props;
ocelot->inj_prefix = OCELOT_TAG_PREFIX_NONE;
ocelot->xtr_prefix = OCELOT_TAG_PREFIX_NONE;
ocelot->npi = -1;
err = ocelot_init(ocelot);
......
......@@ -64,6 +64,10 @@ int dsa_8021q_rx_source_port(u16 vid);
u16 dsa_8021q_rx_subvlan(u16 vid);
bool vid_is_dsa_8021q_rxvlan(u16 vid);
bool vid_is_dsa_8021q_txvlan(u16 vid);
bool vid_is_dsa_8021q(u16 vid);
#else
......@@ -123,6 +127,16 @@ u16 dsa_8021q_rx_subvlan(u16 vid)
return 0;
}
bool vid_is_dsa_8021q_rxvlan(u16 vid)
{
return false;
}
bool vid_is_dsa_8021q_txvlan(u16 vid)
{
return false;
}
bool vid_is_dsa_8021q(u16 vid)
{
return false;
......
......@@ -47,6 +47,7 @@ struct phylink_link_state;
#define DSA_TAG_PROTO_RTL4_A_VALUE 17
#define DSA_TAG_PROTO_HELLCREEK_VALUE 18
#define DSA_TAG_PROTO_XRS700X_VALUE 19
#define DSA_TAG_PROTO_OCELOT_8021Q_VALUE 20
enum dsa_tag_protocol {
DSA_TAG_PROTO_NONE = DSA_TAG_PROTO_NONE_VALUE,
......@@ -69,6 +70,7 @@ enum dsa_tag_protocol {
DSA_TAG_PROTO_RTL4_A = DSA_TAG_PROTO_RTL4_A_VALUE,
DSA_TAG_PROTO_HELLCREEK = DSA_TAG_PROTO_HELLCREEK_VALUE,
DSA_TAG_PROTO_XRS700X = DSA_TAG_PROTO_XRS700X_VALUE,
DSA_TAG_PROTO_OCELOT_8021Q = DSA_TAG_PROTO_OCELOT_8021Q_VALUE,
};
struct packet_type;
......@@ -140,6 +142,9 @@ struct dsa_switch_tree {
/* Has this tree been applied to the hardware? */
bool setup;
/* Tagging protocol operations */
const struct dsa_device_ops *tag_ops;
/*
* Configuration data for the platform device that owns
* this dsa switch tree instance.
......@@ -225,7 +230,9 @@ struct dsa_port {
struct net_device *slave;
};
/* CPU port tagging operations used by master or slave devices */
/* Copy of the tagging protocol operations, for quicker access
* in the data path. Valid only for the CPU ports.
*/
const struct dsa_device_ops *tag_ops;
/* Copies for faster access in master receive hot path */
......@@ -480,9 +487,18 @@ static inline bool dsa_port_is_vlan_filtering(const struct dsa_port *dp)
typedef int dsa_fdb_dump_cb_t(const unsigned char *addr, u16 vid,
bool is_static, void *data);
struct dsa_switch_ops {
/*
* Tagging protocol helpers called for the CPU ports and DSA links.
* @get_tag_protocol retrieves the initial tagging protocol and is
* mandatory. Switches which can operate using multiple tagging
* protocols should implement @change_tag_protocol and report in
* @get_tag_protocol the tagger in current use.
*/
enum dsa_tag_protocol (*get_tag_protocol)(struct dsa_switch *ds,
int port,
enum dsa_tag_protocol mprot);
int (*change_tag_protocol)(struct dsa_switch *ds, int port,
enum dsa_tag_protocol proto);
int (*setup)(struct dsa_switch *ds);
void (*teardown)(struct dsa_switch *ds);
......
......@@ -610,6 +610,7 @@ struct ocelot_port {
phy_interface_t phy_mode;
u8 *xmit_template;
bool is_dsa_8021q_cpu;
};
struct ocelot {
......@@ -651,8 +652,8 @@ struct ocelot {
int npi;
enum ocelot_tag_prefix inj_prefix;
enum ocelot_tag_prefix xtr_prefix;
enum ocelot_tag_prefix npi_inj_prefix;
enum ocelot_tag_prefix npi_xtr_prefix;
u32 *lags;
......@@ -760,6 +761,7 @@ void ocelot_adjust_link(struct ocelot *ocelot, int port,
struct phy_device *phydev);
int ocelot_port_vlan_filtering(struct ocelot *ocelot, int port, bool enabled);
void ocelot_bridge_stp_state_set(struct ocelot *ocelot, int port, u8 state);
void ocelot_apply_bridge_fwd_mask(struct ocelot *ocelot);
int ocelot_port_bridge_join(struct ocelot *ocelot, int port,
struct net_device *bridge);
int ocelot_port_bridge_leave(struct ocelot *ocelot, int port,
......
......@@ -400,4 +400,301 @@ enum vcap_es0_action_field {
VCAP_ES0_ACT_HIT_STICKY,
};
struct ocelot_ipv4 {
u8 addr[4];
};
enum ocelot_vcap_bit {
OCELOT_VCAP_BIT_ANY,
OCELOT_VCAP_BIT_0,
OCELOT_VCAP_BIT_1
};
struct ocelot_vcap_u8 {
u8 value[1];
u8 mask[1];
};
struct ocelot_vcap_u16 {
u8 value[2];
u8 mask[2];
};
struct ocelot_vcap_u24 {
u8 value[3];
u8 mask[3];
};
struct ocelot_vcap_u32 {
u8 value[4];
u8 mask[4];
};
struct ocelot_vcap_u40 {
u8 value[5];
u8 mask[5];
};
struct ocelot_vcap_u48 {
u8 value[6];
u8 mask[6];
};
struct ocelot_vcap_u64 {
u8 value[8];
u8 mask[8];
};
struct ocelot_vcap_u128 {
u8 value[16];
u8 mask[16];
};
struct ocelot_vcap_vid {
u16 value;
u16 mask;
};
struct ocelot_vcap_ipv4 {
struct ocelot_ipv4 value;
struct ocelot_ipv4 mask;
};
struct ocelot_vcap_udp_tcp {
u16 value;
u16 mask;
};
struct ocelot_vcap_port {
u8 value;
u8 mask;
};
enum ocelot_vcap_key_type {
OCELOT_VCAP_KEY_ANY,
OCELOT_VCAP_KEY_ETYPE,
OCELOT_VCAP_KEY_LLC,
OCELOT_VCAP_KEY_SNAP,
OCELOT_VCAP_KEY_ARP,
OCELOT_VCAP_KEY_IPV4,
OCELOT_VCAP_KEY_IPV6
};
struct ocelot_vcap_key_vlan {
struct ocelot_vcap_vid vid; /* VLAN ID (12 bit) */
struct ocelot_vcap_u8 pcp; /* PCP (3 bit) */
enum ocelot_vcap_bit dei; /* DEI */
enum ocelot_vcap_bit tagged; /* Tagged/untagged frame */
};
struct ocelot_vcap_key_etype {
struct ocelot_vcap_u48 dmac;
struct ocelot_vcap_u48 smac;
struct ocelot_vcap_u16 etype;
struct ocelot_vcap_u16 data; /* MAC data */
};
struct ocelot_vcap_key_llc {
struct ocelot_vcap_u48 dmac;
struct ocelot_vcap_u48 smac;
/* LLC header: DSAP at byte 0, SSAP at byte 1, Control at byte 2 */
struct ocelot_vcap_u32 llc;
};
struct ocelot_vcap_key_snap {
struct ocelot_vcap_u48 dmac;
struct ocelot_vcap_u48 smac;
/* SNAP header: Organization Code at byte 0, Type at byte 3 */
struct ocelot_vcap_u40 snap;
};
struct ocelot_vcap_key_arp {
struct ocelot_vcap_u48 smac;
enum ocelot_vcap_bit arp; /* Opcode ARP/RARP */
enum ocelot_vcap_bit req; /* Opcode request/reply */
enum ocelot_vcap_bit unknown; /* Opcode unknown */
enum ocelot_vcap_bit smac_match; /* Sender MAC matches SMAC */
enum ocelot_vcap_bit dmac_match; /* Target MAC matches DMAC */
/**< Protocol addr. length 4, hardware length 6 */
enum ocelot_vcap_bit length;
enum ocelot_vcap_bit ip; /* Protocol address type IP */
enum ocelot_vcap_bit ethernet; /* Hardware address type Ethernet */
struct ocelot_vcap_ipv4 sip; /* Sender IP address */
struct ocelot_vcap_ipv4 dip; /* Target IP address */
};
struct ocelot_vcap_key_ipv4 {
enum ocelot_vcap_bit ttl; /* TTL zero */
enum ocelot_vcap_bit fragment; /* Fragment */
enum ocelot_vcap_bit options; /* Header options */
struct ocelot_vcap_u8 ds;
struct ocelot_vcap_u8 proto; /* Protocol */
struct ocelot_vcap_ipv4 sip; /* Source IP address */
struct ocelot_vcap_ipv4 dip; /* Destination IP address */
struct ocelot_vcap_u48 data; /* Not UDP/TCP: IP data */
struct ocelot_vcap_udp_tcp sport; /* UDP/TCP: Source port */
struct ocelot_vcap_udp_tcp dport; /* UDP/TCP: Destination port */
enum ocelot_vcap_bit tcp_fin;
enum ocelot_vcap_bit tcp_syn;
enum ocelot_vcap_bit tcp_rst;
enum ocelot_vcap_bit tcp_psh;
enum ocelot_vcap_bit tcp_ack;
enum ocelot_vcap_bit tcp_urg;
enum ocelot_vcap_bit sip_eq_dip; /* SIP equals DIP */
enum ocelot_vcap_bit sport_eq_dport; /* SPORT equals DPORT */
enum ocelot_vcap_bit seq_zero; /* TCP sequence number is zero */
};
struct ocelot_vcap_key_ipv6 {
struct ocelot_vcap_u8 proto; /* IPv6 protocol */
struct ocelot_vcap_u128 sip; /* IPv6 source (byte 0-7 ignored) */
struct ocelot_vcap_u128 dip; /* IPv6 destination (byte 0-7 ignored) */
enum ocelot_vcap_bit ttl; /* TTL zero */
struct ocelot_vcap_u8 ds;
struct ocelot_vcap_u48 data; /* Not UDP/TCP: IP data */
struct ocelot_vcap_udp_tcp sport;
struct ocelot_vcap_udp_tcp dport;
enum ocelot_vcap_bit tcp_fin;
enum ocelot_vcap_bit tcp_syn;
enum ocelot_vcap_bit tcp_rst;
enum ocelot_vcap_bit tcp_psh;
enum ocelot_vcap_bit tcp_ack;
enum ocelot_vcap_bit tcp_urg;
enum ocelot_vcap_bit sip_eq_dip; /* SIP equals DIP */
enum ocelot_vcap_bit sport_eq_dport; /* SPORT equals DPORT */
enum ocelot_vcap_bit seq_zero; /* TCP sequence number is zero */
};
enum ocelot_mask_mode {
OCELOT_MASK_MODE_NONE,
OCELOT_MASK_MODE_PERMIT_DENY,
OCELOT_MASK_MODE_POLICY,
OCELOT_MASK_MODE_REDIRECT,
};
enum ocelot_es0_tag {
OCELOT_NO_ES0_TAG,
OCELOT_ES0_TAG,
OCELOT_FORCE_PORT_TAG,
OCELOT_FORCE_UNTAG,
};
enum ocelot_tag_tpid_sel {
OCELOT_TAG_TPID_SEL_8021Q,
OCELOT_TAG_TPID_SEL_8021AD,
};
struct ocelot_vcap_action {
union {
/* VCAP ES0 */
struct {
enum ocelot_es0_tag push_outer_tag;
enum ocelot_es0_tag push_inner_tag;
enum ocelot_tag_tpid_sel tag_a_tpid_sel;
int tag_a_vid_sel;
int tag_a_pcp_sel;
u16 vid_a_val;
u8 pcp_a_val;
u8 dei_a_val;
enum ocelot_tag_tpid_sel tag_b_tpid_sel;
int tag_b_vid_sel;
int tag_b_pcp_sel;
u16 vid_b_val;
u8 pcp_b_val;
u8 dei_b_val;
};
/* VCAP IS1 */
struct {
bool vid_replace_ena;
u16 vid;
bool vlan_pop_cnt_ena;
int vlan_pop_cnt;
bool pcp_dei_ena;
u8 pcp;
u8 dei;
bool qos_ena;
u8 qos_val;
u8 pag_override_mask;
u8 pag_val;
};
/* VCAP IS2 */
struct {
bool cpu_copy_ena;
u8 cpu_qu_num;
enum ocelot_mask_mode mask_mode;
unsigned long port_mask;
bool police_ena;
struct ocelot_policer pol;
u32 pol_ix;
};
};
};
struct ocelot_vcap_stats {
u64 bytes;
u64 pkts;
u64 used;
};
enum ocelot_vcap_filter_type {
OCELOT_VCAP_FILTER_DUMMY,
OCELOT_VCAP_FILTER_PAG,
OCELOT_VCAP_FILTER_OFFLOAD,
};
struct ocelot_vcap_id {
unsigned long cookie;
bool tc_offload;
};
struct ocelot_vcap_filter {
struct list_head list;
enum ocelot_vcap_filter_type type;
int block_id;
int goto_target;
int lookup;
u8 pag;
u16 prio;
struct ocelot_vcap_id id;
struct ocelot_vcap_action action;
struct ocelot_vcap_stats stats;
/* For VCAP IS1 and IS2 */
unsigned long ingress_port_mask;
/* For VCAP ES0 */
struct ocelot_vcap_port ingress_port;
struct ocelot_vcap_port egress_port;
enum ocelot_vcap_bit dmac_mc;
enum ocelot_vcap_bit dmac_bc;
struct ocelot_vcap_key_vlan vlan;
enum ocelot_vcap_key_type key_type;
union {
/* OCELOT_VCAP_KEY_ANY: No specific fields */
struct ocelot_vcap_key_etype etype;
struct ocelot_vcap_key_llc llc;
struct ocelot_vcap_key_snap snap;
struct ocelot_vcap_key_arp arp;
struct ocelot_vcap_key_ipv4 ipv4;
struct ocelot_vcap_key_ipv6 ipv6;
} key;
};
int ocelot_vcap_filter_add(struct ocelot *ocelot,
struct ocelot_vcap_filter *rule,
struct netlink_ext_ack *extack);
int ocelot_vcap_filter_del(struct ocelot *ocelot,
struct ocelot_vcap_filter *rule);
struct ocelot_vcap_filter *
ocelot_vcap_block_find_filter_by_id(struct ocelot_vcap_block *block, int id,
bool tc_offload);
#endif /* _OCELOT_VCAP_H_ */
......@@ -105,11 +105,26 @@ config NET_DSA_TAG_RTL4_A
the Realtek RTL8366RB.
config NET_DSA_TAG_OCELOT
tristate "Tag driver for Ocelot family of switches"
tristate "Tag driver for Ocelot family of switches, using NPI port"
select PACKING
help
Say Y or M if you want to enable support for tagging frames for the
Ocelot switches (VSC7511, VSC7512, VSC7513, VSC7514, VSC9959).
Say Y or M if you want to enable NPI tagging for the Ocelot switches
(VSC7511, VSC7512, VSC7513, VSC7514, VSC9953, VSC9959). In this mode,
the frames over the Ethernet CPU port are prepended with a
hardware-defined injection/extraction frame header. Flow control
(PAUSE frames) over the CPU port is not supported when operating in
this mode.
config NET_DSA_TAG_OCELOT_8021Q
tristate "Tag driver for Ocelot family of switches, using VLAN"
select NET_DSA_TAG_8021Q
help
Say Y or M if you want to enable support for tagging frames with a
custom VLAN-based header. Frames that require timestamping, such as
PTP, are not delivered over Ethernet but over register-based MMIO.
Flow control over the CPU port is functional in this mode. When using
this mode, less TCAM resources (VCAP IS1, IS2, ES0) are available for
use with tc-flower.
config NET_DSA_TAG_QCA
tristate "Tag driver for Qualcomm Atheros QCA8K switches"
......
......@@ -15,6 +15,7 @@ obj-$(CONFIG_NET_DSA_TAG_RTL4_A) += tag_rtl4_a.o
obj-$(CONFIG_NET_DSA_TAG_LAN9303) += tag_lan9303.o
obj-$(CONFIG_NET_DSA_TAG_MTK) += tag_mtk.o
obj-$(CONFIG_NET_DSA_TAG_OCELOT) += tag_ocelot.o
obj-$(CONFIG_NET_DSA_TAG_OCELOT_8021Q) += tag_ocelot_8021q.o
obj-$(CONFIG_NET_DSA_TAG_QCA) += tag_qca.o
obj-$(CONFIG_NET_DSA_TAG_SJA1105) += tag_sja1105.o
obj-$(CONFIG_NET_DSA_TAG_TRAILER) += tag_trailer.o
......
......@@ -84,6 +84,32 @@ const char *dsa_tag_protocol_to_str(const struct dsa_device_ops *ops)
return ops->name;
};
/* Function takes a reference on the module owning the tagger,
* so dsa_tag_driver_put must be called afterwards.
*/
const struct dsa_device_ops *dsa_find_tagger_by_name(const char *buf)
{
const struct dsa_device_ops *ops = ERR_PTR(-ENOPROTOOPT);
struct dsa_tag_driver *dsa_tag_driver;
mutex_lock(&dsa_tag_drivers_lock);
list_for_each_entry(dsa_tag_driver, &dsa_tag_drivers_list, list) {
const struct dsa_device_ops *tmp = dsa_tag_driver->ops;
if (!sysfs_streq(buf, tmp->name))
continue;
if (!try_module_get(dsa_tag_driver->owner))
break;
ops = tmp;
break;
}
mutex_unlock(&dsa_tag_drivers_lock);
return ops;
}
const struct dsa_device_ops *dsa_tag_driver_get(int tag_protocol)
{
struct dsa_tag_driver *dsa_tag_driver;
......
......@@ -21,6 +21,49 @@
static DEFINE_MUTEX(dsa2_mutex);
LIST_HEAD(dsa_tree_list);
/**
* dsa_tree_notify - Execute code for all switches in a DSA switch tree.
* @dst: collection of struct dsa_switch devices to notify.
* @e: event, must be of type DSA_NOTIFIER_*
* @v: event-specific value.
*
* Given a struct dsa_switch_tree, this can be used to run a function once for
* each member DSA switch. The other alternative of traversing the tree is only
* through its ports list, which does not uniquely list the switches.
*/
int dsa_tree_notify(struct dsa_switch_tree *dst, unsigned long e, void *v)
{
struct raw_notifier_head *nh = &dst->nh;
int err;
err = raw_notifier_call_chain(nh, e, v);
return notifier_to_errno(err);
}
/**
* dsa_broadcast - Notify all DSA trees in the system.
* @e: event, must be of type DSA_NOTIFIER_*
* @v: event-specific value.
*
* Can be used to notify the switching fabric of events such as cross-chip
* bridging between disjoint trees (such as islands of tagger-compatible
* switches bridged by an incompatible middle switch).
*/
int dsa_broadcast(unsigned long e, void *v)
{
struct dsa_switch_tree *dst;
int err = 0;
list_for_each_entry(dst, &dsa_tree_list, list) {
err = dsa_tree_notify(dst, e, v);
if (err)
break;
}
return err;
}
/**
* dsa_lag_map() - Map LAG netdev to a linear LAG ID
* @dst: Tree in which to record the mapping.
......@@ -136,6 +179,8 @@ static struct dsa_switch_tree *dsa_tree_alloc(int index)
static void dsa_tree_free(struct dsa_switch_tree *dst)
{
if (dst->tag_ops)
dsa_tag_driver_put(dst->tag_ops);
list_del(&dst->list);
kfree(dst);
}
......@@ -424,7 +469,6 @@ static void dsa_port_teardown(struct dsa_port *dp)
break;
case DSA_PORT_TYPE_CPU:
dsa_port_disable(dp);
dsa_tag_driver_put(dp->tag_ops);
dsa_port_link_unregister_of(dp);
break;
case DSA_PORT_TYPE_DSA:
......@@ -898,6 +942,57 @@ static void dsa_tree_teardown(struct dsa_switch_tree *dst)
dst->setup = false;
}
/* Since the dsa/tagging sysfs device attribute is per master, the assumption
* is that all DSA switches within a tree share the same tagger, otherwise
* they would have formed disjoint trees (different "dsa,member" values).
*/
int dsa_tree_change_tag_proto(struct dsa_switch_tree *dst,
struct net_device *master,
const struct dsa_device_ops *tag_ops,
const struct dsa_device_ops *old_tag_ops)
{
struct dsa_notifier_tag_proto_info info;
struct dsa_port *dp;
int err = -EBUSY;
if (!rtnl_trylock())
return restart_syscall();
/* At the moment we don't allow changing the tag protocol under
* traffic. The rtnl_mutex also happens to serialize concurrent
* attempts to change the tagging protocol. If we ever lift the IFF_UP
* restriction, there needs to be another mutex which serializes this.
*/
if (master->flags & IFF_UP)
goto out_unlock;
list_for_each_entry(dp, &dst->ports, list) {
if (!dsa_is_user_port(dp->ds, dp->index))
continue;
if (dp->slave->flags & IFF_UP)
goto out_unlock;
}
info.tag_ops = tag_ops;
err = dsa_tree_notify(dst, DSA_NOTIFIER_TAG_PROTO, &info);
if (err)
goto out_unwind_tagger;
dst->tag_ops = tag_ops;
rtnl_unlock();
return 0;
out_unwind_tagger:
info.tag_ops = old_tag_ops;
dsa_tree_notify(dst, DSA_NOTIFIER_TAG_PROTO, &info);
out_unlock:
rtnl_unlock();
return err;
}
static struct dsa_port *dsa_port_touch(struct dsa_switch *ds, int index)
{
struct dsa_switch_tree *dst = ds->dst;
......@@ -968,24 +1063,33 @@ static int dsa_port_parse_cpu(struct dsa_port *dp, struct net_device *master)
{
struct dsa_switch *ds = dp->ds;
struct dsa_switch_tree *dst = ds->dst;
const struct dsa_device_ops *tag_ops;
enum dsa_tag_protocol tag_protocol;
tag_protocol = dsa_get_tag_protocol(dp, master);
tag_ops = dsa_tag_driver_get(tag_protocol);
if (IS_ERR(tag_ops)) {
if (PTR_ERR(tag_ops) == -ENOPROTOOPT)
return -EPROBE_DEFER;
dev_warn(ds->dev, "No tagger for this switch\n");
dp->master = NULL;
return PTR_ERR(tag_ops);
if (dst->tag_ops) {
if (dst->tag_ops->proto != tag_protocol) {
dev_err(ds->dev,
"A DSA switch tree can have only one tagging protocol\n");
return -EINVAL;
}
/* In the case of multiple CPU ports per switch, the tagging
* protocol is still reference-counted only per switch tree, so
* nothing to do here.
*/
} else {
dst->tag_ops = dsa_tag_driver_get(tag_protocol);
if (IS_ERR(dst->tag_ops)) {
if (PTR_ERR(dst->tag_ops) == -ENOPROTOOPT)
return -EPROBE_DEFER;
dev_warn(ds->dev, "No tagger for this switch\n");
dp->master = NULL;
return PTR_ERR(dst->tag_ops);
}
}
dp->master = master;
dp->type = DSA_PORT_TYPE_CPU;
dp->filter = tag_ops->filter;
dp->rcv = tag_ops->rcv;
dp->tag_ops = tag_ops;
dsa_port_set_tag_protocol(dp, dst->tag_ops);
dp->dst = dst;
return 0;
......
......@@ -28,6 +28,7 @@ enum {
DSA_NOTIFIER_VLAN_ADD,
DSA_NOTIFIER_VLAN_DEL,
DSA_NOTIFIER_MTU,
DSA_NOTIFIER_TAG_PROTO,
};
/* DSA_NOTIFIER_AGEING_TIME */
......@@ -82,6 +83,11 @@ struct dsa_notifier_mtu_info {
int mtu;
};
/* DSA_NOTIFIER_TAG_PROTO_* */
struct dsa_notifier_tag_proto_info {
const struct dsa_device_ops *tag_ops;
};
struct dsa_switchdev_event_work {
struct dsa_switch *ds;
int port;
......@@ -115,6 +121,7 @@ struct dsa_slave_priv {
/* dsa.c */
const struct dsa_device_ops *dsa_tag_driver_get(int tag_protocol);
void dsa_tag_driver_put(const struct dsa_device_ops *ops);
const struct dsa_device_ops *dsa_find_tagger_by_name(const char *buf);
bool dsa_schedule_work(struct work_struct *work);
const char *dsa_tag_protocol_to_str(const struct dsa_device_ops *ops);
......@@ -139,6 +146,8 @@ static inline struct net_device *dsa_master_find_slave(struct net_device *dev,
}
/* port.c */
void dsa_port_set_tag_protocol(struct dsa_port *cpu_dp,
const struct dsa_device_ops *tag_ops);
int dsa_port_set_state(struct dsa_port *dp, u8 state);
int dsa_port_enable_rt(struct dsa_port *dp, struct phy_device *phy);
int dsa_port_enable(struct dsa_port *dp, struct phy_device *phy);
......@@ -201,6 +210,8 @@ int dsa_slave_suspend(struct net_device *slave_dev);
int dsa_slave_resume(struct net_device *slave_dev);
int dsa_slave_register_notifier(void);
void dsa_slave_unregister_notifier(void);
void dsa_slave_setup_tagger(struct net_device *slave);
int dsa_slave_change_mtu(struct net_device *dev, int new_mtu);
static inline struct dsa_port *dsa_slave_to_port(const struct net_device *dev)
{
......@@ -283,6 +294,12 @@ void dsa_switch_unregister_notifier(struct dsa_switch *ds);
/* dsa2.c */
void dsa_lag_map(struct dsa_switch_tree *dst, struct net_device *lag);
void dsa_lag_unmap(struct dsa_switch_tree *dst, struct net_device *lag);
int dsa_tree_notify(struct dsa_switch_tree *dst, unsigned long e, void *v);
int dsa_broadcast(unsigned long e, void *v);
int dsa_tree_change_tag_proto(struct dsa_switch_tree *dst,
struct net_device *master,
const struct dsa_device_ops *tag_ops,
const struct dsa_device_ops *old_tag_ops);
extern struct list_head dsa_tree_list;
......
......@@ -280,7 +280,44 @@ static ssize_t tagging_show(struct device *d, struct device_attribute *attr,
return sprintf(buf, "%s\n",
dsa_tag_protocol_to_str(cpu_dp->tag_ops));
}
static DEVICE_ATTR_RO(tagging);
static ssize_t tagging_store(struct device *d, struct device_attribute *attr,
const char *buf, size_t count)
{
const struct dsa_device_ops *new_tag_ops, *old_tag_ops;
struct net_device *dev = to_net_dev(d);
struct dsa_port *cpu_dp = dev->dsa_ptr;
int err;
old_tag_ops = cpu_dp->tag_ops;
new_tag_ops = dsa_find_tagger_by_name(buf);
/* Bad tagger name, or module is not loaded? */
if (IS_ERR(new_tag_ops))
return PTR_ERR(new_tag_ops);
if (new_tag_ops == old_tag_ops)
/* Drop the temporarily held duplicate reference, since
* the DSA switch tree uses this tagger.
*/
goto out;
err = dsa_tree_change_tag_proto(cpu_dp->ds->dst, dev, new_tag_ops,
old_tag_ops);
if (err) {
/* On failure the old tagger is restored, so we don't need the
* driver for the new one.
*/
dsa_tag_driver_put(new_tag_ops);
return err;
}
/* On success we no longer need the module for the old tagging protocol
*/
out:
dsa_tag_driver_put(old_tag_ops);
return count;
}
static DEVICE_ATTR_RW(tagging);
static struct attribute *dsa_slave_attrs[] = {
&dev_attr_tagging.attr,
......
......@@ -13,31 +13,21 @@
#include "dsa_priv.h"
static int dsa_broadcast(unsigned long e, void *v)
{
struct dsa_switch_tree *dst;
int err = 0;
list_for_each_entry(dst, &dsa_tree_list, list) {
struct raw_notifier_head *nh = &dst->nh;
err = raw_notifier_call_chain(nh, e, v);
err = notifier_to_errno(err);
if (err)
break;
}
return err;
}
/**
* dsa_port_notify - Notify the switching fabric of changes to a port
* @dp: port on which change occurred
* @e: event, must be of type DSA_NOTIFIER_*
* @v: event-specific value.
*
* Notify all switches in the DSA tree that this port's switch belongs to,
* including this switch itself, of an event. Allows the other switches to
* reconfigure themselves for cross-chip operations. Can also be used to
* reconfigure ports without net_devices (CPU ports, DSA links) whenever
* a user port's state changes.
*/
static int dsa_port_notify(const struct dsa_port *dp, unsigned long e, void *v)
{
struct raw_notifier_head *nh = &dp->ds->dst->nh;
int err;
err = raw_notifier_call_chain(nh, e, v);
return notifier_to_errno(err);
return dsa_tree_notify(dp->ds->dst, e, v);
}
int dsa_port_set_state(struct dsa_port *dp, u8 state)
......@@ -536,6 +526,14 @@ int dsa_port_vlan_del(struct dsa_port *dp,
return dsa_port_notify(dp, DSA_NOTIFIER_VLAN_DEL, &info);
}
void dsa_port_set_tag_protocol(struct dsa_port *cpu_dp,
const struct dsa_device_ops *tag_ops)
{
cpu_dp->filter = tag_ops->filter;
cpu_dp->rcv = tag_ops->rcv;
cpu_dp->tag_ops = tag_ops;
}
static struct phy_device *dsa_port_get_phy_device(struct dsa_port *dp)
{
struct device_node *phy_dn;
......
......@@ -1430,7 +1430,7 @@ static void dsa_bridge_mtu_normalization(struct dsa_port *dp)
dsa_hw_port_list_free(&hw_port_list);
}
static int dsa_slave_change_mtu(struct net_device *dev, int new_mtu)
int dsa_slave_change_mtu(struct net_device *dev, int new_mtu)
{
struct net_device *master = dsa_slave_to_master(dev);
struct dsa_port *dp = dsa_slave_to_port(dev);
......@@ -1708,6 +1708,27 @@ static int dsa_slave_phy_setup(struct net_device *slave_dev)
return ret;
}
void dsa_slave_setup_tagger(struct net_device *slave)
{
struct dsa_port *dp = dsa_slave_to_port(slave);
struct dsa_slave_priv *p = netdev_priv(slave);
const struct dsa_port *cpu_dp = dp->cpu_dp;
struct net_device *master = cpu_dp->master;
if (cpu_dp->tag_ops->tail_tag)
slave->needed_tailroom = cpu_dp->tag_ops->overhead;
else
slave->needed_headroom = cpu_dp->tag_ops->overhead;
/* Try to save one extra realloc later in the TX path (in the master)
* by also inheriting the master's needed headroom and tailroom.
* The 8021q driver also does this.
*/
slave->needed_headroom += master->needed_headroom;
slave->needed_tailroom += master->needed_tailroom;
p->xmit = cpu_dp->tag_ops->xmit;
}
static struct lock_class_key dsa_slave_netdev_xmit_lock_key;
static void dsa_slave_set_lockdep_class_one(struct net_device *dev,
struct netdev_queue *txq,
......@@ -1782,16 +1803,6 @@ int dsa_slave_create(struct dsa_port *port)
slave_dev->netdev_ops = &dsa_slave_netdev_ops;
if (ds->ops->port_max_mtu)
slave_dev->max_mtu = ds->ops->port_max_mtu(ds, port->index);
if (cpu_dp->tag_ops->tail_tag)
slave_dev->needed_tailroom = cpu_dp->tag_ops->overhead;
else
slave_dev->needed_headroom = cpu_dp->tag_ops->overhead;
/* Try to save one extra realloc later in the TX path (in the master)
* by also inheriting the master's needed headroom and tailroom.
* The 8021q driver also does this.
*/
slave_dev->needed_headroom += master->needed_headroom;
slave_dev->needed_tailroom += master->needed_tailroom;
SET_NETDEV_DEVTYPE(slave_dev, &dsa_type);
netdev_for_each_tx_queue(slave_dev, dsa_slave_set_lockdep_class_one,
......@@ -1814,8 +1825,8 @@ int dsa_slave_create(struct dsa_port *port)
p->dp = port;
INIT_LIST_HEAD(&p->mall_tc_list);
p->xmit = cpu_dp->tag_ops->xmit;
port->slave = slave_dev;
dsa_slave_setup_tagger(slave_dev);
rtnl_lock();
ret = dsa_slave_change_mtu(slave_dev, ETH_DATA_LEN);
......
......@@ -297,6 +297,58 @@ static int dsa_switch_vlan_del(struct dsa_switch *ds,
return 0;
}
static bool dsa_switch_tag_proto_match(struct dsa_switch *ds, int port,
struct dsa_notifier_tag_proto_info *info)
{
if (dsa_is_cpu_port(ds, port) || dsa_is_dsa_port(ds, port))
return true;
return false;
}
static int dsa_switch_change_tag_proto(struct dsa_switch *ds,
struct dsa_notifier_tag_proto_info *info)
{
const struct dsa_device_ops *tag_ops = info->tag_ops;
int port, err;
if (!ds->ops->change_tag_protocol)
return -EOPNOTSUPP;
ASSERT_RTNL();
for (port = 0; port < ds->num_ports; port++) {
if (dsa_switch_tag_proto_match(ds, port, info)) {
err = ds->ops->change_tag_protocol(ds, port,
tag_ops->proto);
if (err)
return err;
if (dsa_is_cpu_port(ds, port))
dsa_port_set_tag_protocol(dsa_to_port(ds, port),
tag_ops);
}
}
/* Now that changing the tag protocol can no longer fail, let's update
* the remaining bits which are "duplicated for faster access", and the
* bits that depend on the tagger, such as the MTU.
*/
for (port = 0; port < ds->num_ports; port++) {
if (dsa_is_user_port(ds, port)) {
struct net_device *slave;
slave = dsa_to_port(ds, port)->slave;
dsa_slave_setup_tagger(slave);
/* rtnl_mutex is held in dsa_tree_change_tag_proto */
dsa_slave_change_mtu(slave, slave->mtu);
}
}
return 0;
}
static int dsa_switch_event(struct notifier_block *nb,
unsigned long event, void *info)
{
......@@ -343,6 +395,9 @@ static int dsa_switch_event(struct notifier_block *nb,
case DSA_NOTIFIER_MTU:
err = dsa_switch_mtu(ds, info);
break;
case DSA_NOTIFIER_TAG_PROTO:
err = dsa_switch_change_tag_proto(ds, info);
break;
default:
err = -EOPNOTSUPP;
break;
......
......@@ -133,10 +133,21 @@ u16 dsa_8021q_rx_subvlan(u16 vid)
}
EXPORT_SYMBOL_GPL(dsa_8021q_rx_subvlan);
bool vid_is_dsa_8021q_rxvlan(u16 vid)
{
return (vid & DSA_8021Q_DIR_MASK) == DSA_8021Q_DIR_RX;
}
EXPORT_SYMBOL_GPL(vid_is_dsa_8021q_rxvlan);
bool vid_is_dsa_8021q_txvlan(u16 vid)
{
return (vid & DSA_8021Q_DIR_MASK) == DSA_8021Q_DIR_TX;
}
EXPORT_SYMBOL_GPL(vid_is_dsa_8021q_txvlan);
bool vid_is_dsa_8021q(u16 vid)
{
return ((vid & DSA_8021Q_DIR_MASK) == DSA_8021Q_DIR_RX ||
(vid & DSA_8021Q_DIR_MASK) == DSA_8021Q_DIR_TX);
return vid_is_dsa_8021q_rxvlan(vid) || vid_is_dsa_8021q_txvlan(vid);
}
EXPORT_SYMBOL_GPL(vid_is_dsa_8021q);
......
// SPDX-License-Identifier: GPL-2.0
/* Copyright 2020-2021 NXP Semiconductors
*
* An implementation of the software-defined tag_8021q.c tagger format, which
* also preserves full functionality under a vlan_filtering bridge. It does
* this by using the TCAM engines for:
* - pushing the RX VLAN as a second, outer tag, on egress towards the CPU port
* - redirecting towards the correct front port based on TX VLAN and popping
* that on egress
*/
#include <linux/dsa/8021q.h>
#include "dsa_priv.h"
static struct sk_buff *ocelot_xmit(struct sk_buff *skb,
struct net_device *netdev)
{
struct dsa_port *dp = dsa_slave_to_port(netdev);
u16 tx_vid = dsa_8021q_tx_vid(dp->ds, dp->index);
u16 queue_mapping = skb_get_queue_mapping(skb);
u8 pcp = netdev_txq_to_tc(netdev, queue_mapping);
return dsa_8021q_xmit(skb, netdev, ETH_P_8021Q,
((pcp << VLAN_PRIO_SHIFT) | tx_vid));
}
static struct sk_buff *ocelot_rcv(struct sk_buff *skb,
struct net_device *netdev,
struct packet_type *pt)
{
int src_port, switch_id, qos_class;
u16 vid, tci;
skb_push_rcsum(skb, ETH_HLEN);
if (skb_vlan_tag_present(skb)) {
tci = skb_vlan_tag_get(skb);
__vlan_hwaccel_clear_tag(skb);
} else {
__skb_vlan_pop(skb, &tci);
}
skb_pull_rcsum(skb, ETH_HLEN);
vid = tci & VLAN_VID_MASK;
src_port = dsa_8021q_rx_source_port(vid);
switch_id = dsa_8021q_rx_switch_id(vid);
qos_class = (tci & VLAN_PRIO_MASK) >> VLAN_PRIO_SHIFT;
skb->dev = dsa_master_find_slave(netdev, switch_id, src_port);
if (!skb->dev)
return NULL;
skb->offload_fwd_mark = 1;
skb->priority = qos_class;
return skb;
}
static const struct dsa_device_ops ocelot_8021q_netdev_ops = {
.name = "ocelot-8021q",
.proto = DSA_TAG_PROTO_OCELOT_8021Q,
.xmit = ocelot_xmit,
.rcv = ocelot_rcv,
.overhead = VLAN_HLEN,
};
MODULE_LICENSE("GPL v2");
MODULE_ALIAS_DSA_TAG_DRIVER(DSA_TAG_PROTO_OCELOT_8021Q);
module_dsa_tag_driver(ocelot_8021q_netdev_ops);
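
To make the header comment of this tagger more concrete, below is a rough
sketch, built around the ocelot_vcap structures visible in this diff, of how
a VCAP ES0 rule that pushes a per-port RX VLAN as an outer tag towards the
CPU port might be composed. The block ID, selector values, cookie scheme and
port mask width are illustrative assumptions; the rules actually installed
by the felix driver may differ.

#include <linux/bits.h>
#include <linux/slab.h>
#include <soc/mscc/ocelot_vcap.h>

/* Sketch: tag frames going from front port @port to CPU port @cpu with an
 * outer 802.1Q header carrying @rx_vid, so that ocelot_rcv() can identify
 * the source port. Fields marked "assumed" are illustrative choices.
 */
static int ex_es0_push_rx_vlan(struct ocelot *ocelot, int port, int cpu,
                               u16 rx_vid)
{
        struct ocelot_vcap_filter *filter;
        int err;

        filter = kzalloc(sizeof(*filter), GFP_KERNEL);
        if (!filter)
                return -ENOMEM;

        filter->key_type = OCELOT_VCAP_KEY_ANY;
        filter->prio = 1;
        filter->id.cookie = port;               /* assumed cookie scheme */
        filter->id.tc_offload = false;
        filter->block_id = VCAP_ES0;
        filter->type = OCELOT_VCAP_FILTER_OFFLOAD;
        filter->ingress_port.value = port;
        filter->ingress_port.mask = GENMASK(7, 0);      /* assumed key width */
        filter->egress_port.value = cpu;
        filter->egress_port.mask = GENMASK(7, 0);
        filter->action.push_outer_tag = OCELOT_ES0_TAG;
        filter->action.tag_a_tpid_sel = OCELOT_TAG_TPID_SEL_8021Q;
        filter->action.tag_a_vid_sel = 1;       /* assumed: use vid_a_val */
        filter->action.vid_a_val = rx_vid;

        err = ocelot_vcap_filter_add(ocelot, filter, NULL);
        if (err)
                kfree(filter);

        return err;
}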