Commit eb55d7b6 authored by David S. Miller

Merge branch 'tc-gate-offload-for-SJA1105-DSA-switch'

Vladimir Oltean says:

====================
tc-gate offload for SJA1105 DSA switch

Expose the TTEthernet hardware features of the switch using standard
tc-flower actions: trap, drop, redirect and gate.

v1 was submitted at:
https://patchwork.ozlabs.org/project/netdev/cover/20200503211035.19363-1-olteanv@gmail.com/

v2 was submitted at:
https://patchwork.ozlabs.org/project/netdev/cover/20200503211035.19363-1-olteanv@gmail.com/

Changes in v3:
Made sure there are no compilation warnings when
CONFIG_NET_DSA_SJA1105_TAS or CONFIG_NET_DSA_SJA1105_VL are disabled.

Changes in v2:
Using a newly introduced dsa_port_from_netdev public helper.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
parents bb206a08 47cfa3af
......@@ -230,6 +230,122 @@ simultaneously on two ports. The driver checks the consistency of the schedules
against this restriction and errors out when appropriate. Schedule analysis is
needed to avoid this, which is outside the scope of the document.

Routing actions (redirect, trap, drop)
--------------------------------------
The switch is able to offload flow-based redirection of packets to a set of
destination ports specified by the user. Internally, this is implemented by
making use of Virtual Links, a TTEthernet concept.

The driver supports 2 types of keys for Virtual Links:

- VLAN-aware virtual links: these match on destination MAC address, VLAN ID and
  VLAN PCP.
- VLAN-unaware virtual links: these match on destination MAC address only.

The VLAN awareness state of the bridge (vlan_filtering) cannot be changed while
there are virtual link rules installed.
Composing multiple actions inside the same rule is supported. When only routing
actions are requested, the driver creates a "non-critical" virtual link. When
the action list also contains tc-gate (more details below), the virtual link
becomes "time-critical" (draws frame buffers from a reserved memory partition,
etc).

The 3 routing actions that are supported are "trap", "drop" and "redirect".
Example 1: send frames received on swp2 with a DA of 42:be:24:9b:76:20 to the
CPU and to swp3. This type of key (DA only) is supported when the port's VLAN
awareness state is off::

  tc qdisc add dev swp2 clsact
  tc filter add dev swp2 ingress flower skip_sw dst_mac 42:be:24:9b:76:20 \
        action mirred egress redirect dev swp3 \
        action trap

Example 2: drop frames received on swp2 with a DA of 42:be:24:9b:76:20, a VID
of 100 and a PCP of 0::

  tc filter add dev swp2 ingress protocol 802.1Q flower skip_sw \
        dst_mac 42:be:24:9b:76:20 vlan_id 100 vlan_prio 0 action drop

Time-based ingress policing
---------------------------

The TTEthernet hardware abilities of the switch can be constrained to act
similarly to the Per-Stream Filtering and Policing (PSFP) clause specified in
IEEE 802.1Q-2018 (formerly 802.1Qci). This means it can be used to perform
tight timing-based admission control for up to 1024 flows (identified by a
tuple composed of destination MAC address, VLAN ID and VLAN PCP). Packets which
are received outside their expected reception window are dropped.

This capability can be managed through the offload of the tc-gate action. As
routing actions are intrinsic to virtual links in TTEthernet (which performs
explicit routing of time-critical traffic and does not leave that in the hands
of the FDB, flooding etc), the tc-gate action may never appear alone when
asking sja1105 to offload it. One (or more) redirect or trap actions must also
follow along.

Example: create a tc-taprio schedule that is phase-aligned with a tc-gate
schedule (the clocks must be synchronized by a 1588 application stack, which is
outside the scope of this document). No packet delivered by the sender will be
dropped. Note that the reception window is larger than the transmission window
(and much more so, in this example) to compensate for the packet propagation
delay of the link (which can be determined by the 1588 application stack).

Receiver (sja1105)::

  tc qdisc add dev swp2 clsact
  now=$(phc_ctl /dev/ptp1 get | awk '/clock time is/ {print $5}') && \
  sec=$(echo $now | awk -F. '{print $1}') && \
  base_time="$(((sec + 2) * 1000000000))" && \
  echo "base time ${base_time}"
  tc filter add dev swp2 ingress flower skip_sw \
        dst_mac 42:be:24:9b:76:20 \
        action gate base-time ${base_time} \
        sched-entry OPEN 60000 -1 -1 \
        sched-entry CLOSE 40000 -1 -1 \
        action trap

Sender::

  now=$(phc_ctl /dev/ptp0 get | awk '/clock time is/ {print $5}') && \
  sec=$(echo $now | awk -F. '{print $1}') && \
  base_time="$(((sec + 2) * 1000000000))" && \
  echo "base time ${base_time}"
  tc qdisc add dev eno0 parent root taprio \
        num_tc 8 \
        map 0 1 2 3 4 5 6 7 \
        queues 1@0 1@1 1@2 1@3 1@4 1@5 1@6 1@7 \
        base-time ${base_time} \
        sched-entry S 01 50000 \
        sched-entry S 00 50000 \
        flags 2

The engine used to schedule the ingress gate operations is the same as the one
used for the tc-taprio offload. Therefore, the restriction that no two gate
actions (either tc-gate or tc-taprio gates) may fire at the same time (during
the same 200 ns slot) still applies.

As a convenience, it is possible to share time-triggered virtual links across
more than one ingress port, via flow blocks. In this case, the restriction of
firing at the same time does not apply because there is a single schedule in
the system, that of the shared virtual link::

  tc qdisc add dev swp2 ingress_block 1 clsact
  tc qdisc add dev swp3 ingress_block 1 clsact
  tc filter add block 1 flower skip_sw dst_mac 42:be:24:9b:76:20 \
        action gate index 2 \
        base-time 0 \
        sched-entry OPEN 50000000 -1 -1 \
        sched-entry CLOSE 50000000 -1 -1 \
        action trap

Hardware statistics for each flow are also available ("pkts" counts the number
of dropped frames, which is a sum of frames dropped due to timing violations,
lack of destination ports and MTU enforcement checks). Byte-level counters are
not available.

Device Tree bindings and board design
=====================================
......
......@@ -34,3 +34,12 @@ config NET_DSA_SJA1105_TAS
	  This enables support for the TTEthernet-based egress scheduling
	  engine in the SJA1105 DSA driver, which is controlled using a
	  hardware offload of the tc-taprio qdisc.

config NET_DSA_SJA1105_VL
	bool "Support for Virtual Links on NXP SJA1105"
	depends on NET_DSA_SJA1105_TAS
	help
	  This enables support for flow classification using capable devices
	  (SJA1105T, SJA1105Q, SJA1105S). The following actions are supported:
	  - redirect, trap, drop
	  - time-based ingress policing, via the tc-gate action
......@@ -17,3 +17,7 @@ endif
ifdef CONFIG_NET_DSA_SJA1105_TAS
sja1105-objs += sja1105_tas.o
endif
ifdef CONFIG_NET_DSA_SJA1105_VL
sja1105-objs += sja1105_vl.o
endif
......@@ -36,6 +36,7 @@ struct sja1105_regs {
u64 status;
u64 port_control;
u64 rgu;
u64 vl_status;
u64 config;
u64 sgmii;
u64 rmii_pll1;
......@@ -97,17 +98,52 @@ struct sja1105_info {
const char *name;
};
enum sja1105_key_type {
SJA1105_KEY_BCAST,
SJA1105_KEY_TC,
SJA1105_KEY_VLAN_UNAWARE_VL,
SJA1105_KEY_VLAN_AWARE_VL,
};
struct sja1105_key {
enum sja1105_key_type type;
union {
/* SJA1105_KEY_TC */
struct {
int pcp;
} tc;
/* SJA1105_KEY_VLAN_UNAWARE_VL */
/* SJA1105_KEY_VLAN_AWARE_VL */
struct {
u64 dmac;
u16 vid;
u16 pcp;
} vl;
};
};
enum sja1105_rule_type {
SJA1105_RULE_BCAST_POLICER,
SJA1105_RULE_TC_POLICER,
SJA1105_RULE_VL,
};
enum sja1105_vl_type {
SJA1105_VL_NONCRITICAL,
SJA1105_VL_RATE_CONSTRAINED,
SJA1105_VL_TIME_TRIGGERED,
};
struct sja1105_rule {
struct list_head list;
unsigned long cookie;
unsigned long port_mask;
struct sja1105_key key;
enum sja1105_rule_type type;
/* Action */
union {
/* SJA1105_RULE_BCAST_POLICER */
struct {
......@@ -117,14 +153,28 @@ struct sja1105_rule {
/* SJA1105_RULE_TC_POLICER */
struct {
int sharindx;
int tc;
} tc_pol;
/* SJA1105_RULE_VL */
struct {
enum sja1105_vl_type type;
unsigned long destports;
int sharindx;
int maxlen;
int ipv;
u64 base_time;
u64 cycle_time;
int num_entries;
struct action_gate_entry *entries;
struct flow_stats stats;
} vl;
};
};
struct sja1105_flow_block {
struct list_head rules;
bool l2_policer_used[SJA1105_NUM_L2_POLICERS];
int num_virtual_links;
};
struct sja1105_private {
......@@ -161,6 +211,7 @@ enum sja1105_reset_reason {
SJA1105_AGEING_TIME,
SJA1105_SCHEDULING,
SJA1105_BEST_EFFORT_POLICING,
SJA1105_VIRTUAL_LINKS,
};
int sja1105_static_config_reload(struct sja1105_private *priv,
......@@ -254,13 +305,19 @@ size_t sja1105pqrs_mac_config_entry_packing(void *buf, void *entry_ptr,
enum packing_op op);
size_t sja1105pqrs_avb_params_entry_packing(void *buf, void *entry_ptr,
enum packing_op op);
size_t sja1105_vl_lookup_entry_packing(void *buf, void *entry_ptr,
enum packing_op op);
/* From sja1105_flower.c */
int sja1105_cls_flower_del(struct dsa_switch *ds, int port,
struct flow_cls_offload *cls, bool ingress);
int sja1105_cls_flower_add(struct dsa_switch *ds, int port,
struct flow_cls_offload *cls, bool ingress);
int sja1105_cls_flower_stats(struct dsa_switch *ds, int port,
struct flow_cls_offload *cls, bool ingress);
void sja1105_flower_setup(struct dsa_switch *ds);
void sja1105_flower_teardown(struct dsa_switch *ds);
struct sja1105_rule *sja1105_rule_find(struct sja1105_private *priv,
unsigned long cookie);
#endif
......@@ -97,6 +97,12 @@
#define SJA1105_SIZE_DYN_CMD 4
#define SJA1105ET_SJA1105_SIZE_VL_LOOKUP_DYN_CMD \
SJA1105_SIZE_DYN_CMD
#define SJA1105PQRS_SJA1105_SIZE_VL_LOOKUP_DYN_CMD \
(SJA1105_SIZE_DYN_CMD + SJA1105_SIZE_VL_LOOKUP_ENTRY)
#define SJA1105ET_SIZE_MAC_CONFIG_DYN_ENTRY \
SJA1105_SIZE_DYN_CMD
......@@ -146,6 +152,29 @@ enum sja1105_hostcmd {
SJA1105_HOSTCMD_INVALIDATE = 4,
};
static void
sja1105_vl_lookup_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd,
enum packing_op op)
{
const int size = SJA1105_SIZE_DYN_CMD;
sja1105_packing(buf, &cmd->valid, 31, 31, size, op);
sja1105_packing(buf, &cmd->errors, 30, 30, size, op);
sja1105_packing(buf, &cmd->rdwrset, 29, 29, size, op);
sja1105_packing(buf, &cmd->index, 9, 0, size, op);
}
static size_t sja1105et_vl_lookup_entry_packing(void *buf, void *entry_ptr,
enum packing_op op)
{
struct sja1105_vl_lookup_entry *entry = entry_ptr;
const int size = SJA1105ET_SJA1105_SIZE_VL_LOOKUP_DYN_CMD;
sja1105_packing(buf, &entry->egrmirr, 21, 17, size, op);
sja1105_packing(buf, &entry->ingrmirr, 16, 16, size, op);
return size;
}
static void
sja1105pqrs_l2_lookup_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd,
enum packing_op op)
......@@ -505,6 +534,16 @@ sja1105pqrs_avb_params_cmd_packing(void *buf, struct sja1105_dyn_cmd *cmd,
struct sja1105_dynamic_table_ops sja1105et_dyn_ops[BLK_IDX_MAX_DYN] = {
[BLK_IDX_SCHEDULE] = {0},
[BLK_IDX_SCHEDULE_ENTRY_POINTS] = {0},
[BLK_IDX_VL_LOOKUP] = {
.entry_packing = sja1105et_vl_lookup_entry_packing,
.cmd_packing = sja1105_vl_lookup_cmd_packing,
.access = OP_WRITE,
.max_entry_count = SJA1105_MAX_VL_LOOKUP_COUNT,
.packed_size = SJA1105ET_SJA1105_SIZE_VL_LOOKUP_DYN_CMD,
.addr = 0x35,
},
[BLK_IDX_VL_POLICING] = {0},
[BLK_IDX_VL_FORWARDING] = {0},
[BLK_IDX_L2_LOOKUP] = {
.entry_packing = sja1105et_dyn_l2_lookup_entry_packing,
.cmd_packing = sja1105et_l2_lookup_cmd_packing,
......@@ -548,6 +587,7 @@ struct sja1105_dynamic_table_ops sja1105et_dyn_ops[BLK_IDX_MAX_DYN] = {
},
[BLK_IDX_SCHEDULE_PARAMS] = {0},
[BLK_IDX_SCHEDULE_ENTRY_POINTS_PARAMS] = {0},
[BLK_IDX_VL_FORWARDING_PARAMS] = {0},
[BLK_IDX_L2_LOOKUP_PARAMS] = {
.entry_packing = sja1105et_l2_lookup_params_entry_packing,
.cmd_packing = sja1105et_l2_lookup_params_cmd_packing,
......@@ -573,6 +613,16 @@ struct sja1105_dynamic_table_ops sja1105et_dyn_ops[BLK_IDX_MAX_DYN] = {
struct sja1105_dynamic_table_ops sja1105pqrs_dyn_ops[BLK_IDX_MAX_DYN] = {
[BLK_IDX_SCHEDULE] = {0},
[BLK_IDX_SCHEDULE_ENTRY_POINTS] = {0},
[BLK_IDX_VL_LOOKUP] = {
.entry_packing = sja1105_vl_lookup_entry_packing,
.cmd_packing = sja1105_vl_lookup_cmd_packing,
.access = (OP_READ | OP_WRITE),
.max_entry_count = SJA1105_MAX_VL_LOOKUP_COUNT,
.packed_size = SJA1105PQRS_SJA1105_SIZE_VL_LOOKUP_DYN_CMD,
.addr = 0x47,
},
[BLK_IDX_VL_POLICING] = {0},
[BLK_IDX_VL_FORWARDING] = {0},
[BLK_IDX_L2_LOOKUP] = {
.entry_packing = sja1105pqrs_dyn_l2_lookup_entry_packing,
.cmd_packing = sja1105pqrs_l2_lookup_cmd_packing,
......@@ -616,6 +666,7 @@ struct sja1105_dynamic_table_ops sja1105pqrs_dyn_ops[BLK_IDX_MAX_DYN] = {
},
[BLK_IDX_SCHEDULE_PARAMS] = {0},
[BLK_IDX_SCHEDULE_ENTRY_POINTS_PARAMS] = {0},
[BLK_IDX_VL_FORWARDING_PARAMS] = {0},
[BLK_IDX_L2_LOOKUP_PARAMS] = {
.entry_packing = sja1105et_l2_lookup_params_entry_packing,
.cmd_packing = sja1105et_l2_lookup_params_cmd_packing,
......
......@@ -2,8 +2,9 @@
/* Copyright 2020, NXP Semiconductors
*/
#include "sja1105.h"
#include "sja1105_vl.h"
static struct sja1105_rule *sja1105_rule_find(struct sja1105_private *priv,
struct sja1105_rule *sja1105_rule_find(struct sja1105_private *priv,
unsigned long cookie)
{
struct sja1105_rule *rule;
......@@ -46,6 +47,7 @@ static int sja1105_setup_bcast_policer(struct sja1105_private *priv,
rule->cookie = cookie;
rule->type = SJA1105_RULE_BCAST_POLICER;
rule->bcast_pol.sharindx = sja1105_find_free_l2_policer(priv);
rule->key.type = SJA1105_KEY_BCAST;
new_rule = true;
}
......@@ -117,7 +119,8 @@ static int sja1105_setup_tc_policer(struct sja1105_private *priv,
rule->cookie = cookie;
rule->type = SJA1105_RULE_TC_POLICER;
rule->tc_pol.sharindx = sja1105_find_free_l2_policer(priv);
rule->tc_pol.tc = tc;
rule->key.type = SJA1105_KEY_TC;
rule->key.tc.pcp = tc;
new_rule = true;
}
......@@ -169,14 +172,38 @@ static int sja1105_setup_tc_policer(struct sja1105_private *priv,
return rc;
}
static int sja1105_flower_parse_policer(struct sja1105_private *priv, int port,
static int sja1105_flower_policer(struct sja1105_private *priv, int port,
struct netlink_ext_ack *extack,
struct flow_cls_offload *cls,
unsigned long cookie,
struct sja1105_key *key,
u64 rate_bytes_per_sec,
s64 burst)
{
switch (key->type) {
case SJA1105_KEY_BCAST:
return sja1105_setup_bcast_policer(priv, extack, cookie, port,
rate_bytes_per_sec, burst);
case SJA1105_KEY_TC:
return sja1105_setup_tc_policer(priv, extack, cookie, port,
key->tc.pcp, rate_bytes_per_sec,
burst);
default:
NL_SET_ERR_MSG_MOD(extack, "Unknown keys for policing");
return -EOPNOTSUPP;
}
}
static int sja1105_flower_parse_key(struct sja1105_private *priv,
struct netlink_ext_ack *extack,
struct flow_cls_offload *cls,
struct sja1105_key *key)
{
struct flow_rule *rule = flow_cls_offload_flow_rule(cls);
struct flow_dissector *dissector = rule->match.dissector;
bool is_bcast_dmac = false;
u64 dmac = U64_MAX;
u16 vid = U16_MAX;
u16 pcp = U16_MAX;
if (dissector->used_keys &
~(BIT(FLOW_DISSECTOR_KEY_BASIC) |
......@@ -213,16 +240,14 @@ static int sja1105_flower_parse_policer(struct sja1105_private *priv, int port,
return -EOPNOTSUPP;
}
if (!ether_addr_equal_masked(match.key->dst, bcast,
match.mask->dst)) {
if (!ether_addr_equal(match.mask->dst, bcast)) {
NL_SET_ERR_MSG_MOD(extack,
"Only matching on broadcast DMAC is supported");
"Masked matching on MAC not supported");
return -EOPNOTSUPP;
}
return sja1105_setup_bcast_policer(priv, extack, cls->cookie,
port, rate_bytes_per_sec,
burst);
dmac = ether_addr_to_u64(match.key->dst);
is_bcast_dmac = ether_addr_equal(match.key->dst, bcast);
}
if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_VLAN)) {
......@@ -230,22 +255,46 @@ static int sja1105_flower_parse_policer(struct sja1105_private *priv, int port,
flow_rule_match_vlan(rule, &match);
if (match.key->vlan_id & match.mask->vlan_id) {
if (match.mask->vlan_id &&
match.mask->vlan_id != VLAN_VID_MASK) {
NL_SET_ERR_MSG_MOD(extack,
"Matching on VID is not supported");
"Masked matching on VID is not supported");
return -EOPNOTSUPP;
}
if (match.mask->vlan_priority != 0x7) {
if (match.mask->vlan_priority &&
match.mask->vlan_priority != 0x7) {
NL_SET_ERR_MSG_MOD(extack,
"Masked matching on PCP is not supported");
return -EOPNOTSUPP;
}
return sja1105_setup_tc_policer(priv, extack, cls->cookie, port,
match.key->vlan_priority,
rate_bytes_per_sec,
burst);
if (match.mask->vlan_id)
vid = match.key->vlan_id;
if (match.mask->vlan_priority)
pcp = match.key->vlan_priority;
}
if (is_bcast_dmac && vid == U16_MAX && pcp == U16_MAX) {
key->type = SJA1105_KEY_BCAST;
return 0;
}
if (dmac == U64_MAX && vid == U16_MAX && pcp != U16_MAX) {
key->type = SJA1105_KEY_TC;
key->tc.pcp = pcp;
return 0;
}
if (dmac != U64_MAX && vid != U16_MAX && pcp != U16_MAX) {
key->type = SJA1105_KEY_VLAN_AWARE_VL;
key->vl.dmac = dmac;
key->vl.vid = vid;
key->vl.pcp = pcp;
return 0;
}
if (dmac != U64_MAX) {
key->type = SJA1105_KEY_VLAN_UNAWARE_VL;
key->vl.dmac = dmac;
return 0;
}
NL_SET_ERR_MSG_MOD(extack, "Not matching on any known key");
......@@ -259,22 +308,110 @@ int sja1105_cls_flower_add(struct dsa_switch *ds, int port,
struct netlink_ext_ack *extack = cls->common.extack;
struct sja1105_private *priv = ds->priv;
const struct flow_action_entry *act;
int rc = -EOPNOTSUPP, i;
unsigned long cookie = cls->cookie;
bool routing_rule = false;
struct sja1105_key key;
bool gate_rule = false;
bool vl_rule = false;
int rc, i;
rc = sja1105_flower_parse_key(priv, extack, cls, &key);
if (rc)
return rc;
rc = -EOPNOTSUPP;
flow_action_for_each(i, act, &rule->action) {
switch (act->id) {
case FLOW_ACTION_POLICE:
rc = sja1105_flower_parse_policer(priv, port, extack, cls,
rc = sja1105_flower_policer(priv, port, extack, cookie,
&key,
act->police.rate_bytes_ps,
act->police.burst);
if (rc)
goto out;
break;
case FLOW_ACTION_TRAP: {
int cpu = dsa_upstream_port(ds, port);
routing_rule = true;
vl_rule = true;
rc = sja1105_vl_redirect(priv, port, extack, cookie,
&key, BIT(cpu), true);
if (rc)
goto out;
break;
}
case FLOW_ACTION_REDIRECT: {
struct dsa_port *to_dp;
to_dp = dsa_port_from_netdev(act->dev);
if (IS_ERR(to_dp)) {
NL_SET_ERR_MSG_MOD(extack,
"Destination not a switch port");
return -EOPNOTSUPP;
}
routing_rule = true;
vl_rule = true;
rc = sja1105_vl_redirect(priv, port, extack, cookie,
&key, BIT(to_dp->index), true);
if (rc)
goto out;
break;
}
case FLOW_ACTION_DROP:
vl_rule = true;
rc = sja1105_vl_redirect(priv, port, extack, cookie,
&key, 0, false);
if (rc)
goto out;
break;
case FLOW_ACTION_GATE:
gate_rule = true;
vl_rule = true;
rc = sja1105_vl_gate(priv, port, extack, cookie,
&key, act->gate.index,
act->gate.prio,
act->gate.basetime,
act->gate.cycletime,
act->gate.cycletimeext,
act->gate.num_entries,
act->gate.entries);
if (rc)
goto out;
break;
default:
NL_SET_ERR_MSG_MOD(extack,
"Action not supported");
break;
rc = -EOPNOTSUPP;
goto out;
}
}
if (vl_rule && !rc) {
/* Delay scheduling configuration until DESTPORTS has been
* populated by all other actions.
*/
if (gate_rule) {
if (!routing_rule) {
NL_SET_ERR_MSG_MOD(extack,
"Can only offload gate action together with redirect or trap");
return -EOPNOTSUPP;
}
rc = sja1105_init_scheduling(priv);
if (rc)
goto out;
}
rc = sja1105_static_config_reload(priv, SJA1105_VIRTUAL_LINKS);
}
out:
return rc;
}
......@@ -289,6 +426,9 @@ int sja1105_cls_flower_del(struct dsa_switch *ds, int port,
if (!rule)
return 0;
if (rule->type == SJA1105_RULE_VL)
return sja1105_vl_delete(priv, port, rule, cls->common.extack);
policing = priv->static_config.tables[BLK_IDX_L2_POLICING].entries;
if (rule->type == SJA1105_RULE_BCAST_POLICER) {
......@@ -297,7 +437,7 @@ int sja1105_cls_flower_del(struct dsa_switch *ds, int port,
old_sharindx = policing[bcast].sharindx;
policing[bcast].sharindx = port;
} else if (rule->type == SJA1105_RULE_TC_POLICER) {
int index = (port * SJA1105_NUM_TC) + rule->tc_pol.tc;
int index = (port * SJA1105_NUM_TC) + rule->key.tc.pcp;
old_sharindx = policing[index].sharindx;
policing[index].sharindx = port;
......@@ -315,6 +455,27 @@ int sja1105_cls_flower_del(struct dsa_switch *ds, int port,
return sja1105_static_config_reload(priv, SJA1105_BEST_EFFORT_POLICING);
}
int sja1105_cls_flower_stats(struct dsa_switch *ds, int port,
struct flow_cls_offload *cls, bool ingress)
{
struct sja1105_private *priv = ds->priv;
struct sja1105_rule *rule = sja1105_rule_find(priv, cls->cookie);
int rc;
if (!rule)
return 0;
if (rule->type != SJA1105_RULE_VL)
return 0;
rc = sja1105_vl_stats(priv, port, rule, &cls->stats,
cls->common.extack);
if (rc)
return rc;
return 0;
}
void sja1105_flower_setup(struct dsa_switch *ds)
{
struct sja1105_private *priv = ds->priv;
......
......@@ -445,7 +445,7 @@ static int sja1105_init_general_params(struct sja1105_private *priv)
*/
.casc_port = SJA1105_NUM_PORTS,
/* No TTEthernet */
.vllupformat = 0,
.vllupformat = SJA1105_VL_FORMAT_PSFP,
.vlmarker = 0,
.vlmask = 0,
/* Only update correctionField for 1-step PTP (L2 transport) */
......@@ -1589,6 +1589,7 @@ static const char * const sja1105_reset_reasons[] = {
[SJA1105_AGEING_TIME] = "Ageing time",
[SJA1105_SCHEDULING] = "Time-aware scheduling",
[SJA1105_BEST_EFFORT_POLICING] = "Best-effort policing",
[SJA1105_VIRTUAL_LINKS] = "Virtual links",
};
/* For situations where we need to change a setting at runtime that is only
......@@ -1831,9 +1832,18 @@ static int sja1105_vlan_filtering(struct dsa_switch *ds, int port, bool enabled)
struct sja1105_general_params_entry *general_params;
struct sja1105_private *priv = ds->priv;
struct sja1105_table *table;
struct sja1105_rule *rule;
u16 tpid, tpid2;
int rc;
list_for_each_entry(rule, &priv->flow_block.rules, list) {
if (rule->type == SJA1105_RULE_VL) {
dev_err(ds->dev,
"Cannot change VLAN filtering state while VL rules are active\n");
return -EBUSY;
}
}
if (enabled) {
/* Enable VLAN filtering. */
tpid = ETH_P_8021Q;
......@@ -2359,6 +2369,7 @@ static const struct dsa_switch_ops sja1105_switch_ops = {
.port_policer_del = sja1105_port_policer_del,
.cls_flower_add = sja1105_cls_flower_add,
.cls_flower_del = sja1105_cls_flower_del,
.cls_flower_stats = sja1105_cls_flower_stats,
};
static int sja1105_check_device_id(struct sja1105_private *priv)
......
......@@ -48,6 +48,19 @@ static inline s64 future_base_time(s64 base_time, s64 cycle_time, s64 now)
return base_time + n * cycle_time;
}
/* This is not a preprocessor macro because the "ns" argument may or may not be
* s64 at caller side. This ensures it is properly type-cast before div_s64.
*/
static inline s64 ns_to_sja1105_delta(s64 ns)
{
return div_s64(ns, 200);
}
static inline s64 sja1105_delta_to_ns(s64 delta)
{
return delta * 200;
}
struct sja1105_ptp_cmd {
u64 startptpcp; /* start toggling PTP_CLK pin */
u64 stopptpcp; /* stop toggling PTP_CLK pin */
......
......@@ -439,6 +439,7 @@ static struct sja1105_regs sja1105et_regs = {
.prod_id = 0x100BC3,
.status = 0x1,
.port_control = 0x11,
.vl_status = 0x10000,
.config = 0x020000,
.rgu = 0x100440,
/* UM10944.pdf, Table 86, ACU Register overview */
......@@ -472,6 +473,7 @@ static struct sja1105_regs sja1105pqrs_regs = {
.prod_id = 0x100BC3,
.status = 0x1,
.port_control = 0x12,
.vl_status = 0x10000,
.config = 0x020000,
.rgu = 0x100440,
/* UM10944.pdf, Table 86, ACU Register overview */
......
......@@ -432,6 +432,84 @@ static size_t sja1105_schedule_entry_packing(void *buf, void *entry_ptr,
return size;
}
static size_t
sja1105_vl_forwarding_params_entry_packing(void *buf, void *entry_ptr,
enum packing_op op)
{
struct sja1105_vl_forwarding_params_entry *entry = entry_ptr;
const size_t size = SJA1105_SIZE_VL_FORWARDING_PARAMS_ENTRY;
int offset, i;
for (i = 0, offset = 16; i < 8; i++, offset += 10)
sja1105_packing(buf, &entry->partspc[i],
offset + 9, offset + 0, size, op);
sja1105_packing(buf, &entry->debugen, 15, 15, size, op);
return size;
}
static size_t sja1105_vl_forwarding_entry_packing(void *buf, void *entry_ptr,
enum packing_op op)
{
struct sja1105_vl_forwarding_entry *entry = entry_ptr;
const size_t size = SJA1105_SIZE_VL_FORWARDING_ENTRY;
sja1105_packing(buf, &entry->type, 31, 31, size, op);
sja1105_packing(buf, &entry->priority, 30, 28, size, op);
sja1105_packing(buf, &entry->partition, 27, 25, size, op);
sja1105_packing(buf, &entry->destports, 24, 20, size, op);
return size;
}
size_t sja1105_vl_lookup_entry_packing(void *buf, void *entry_ptr,
enum packing_op op)
{
struct sja1105_vl_lookup_entry *entry = entry_ptr;
const size_t size = SJA1105_SIZE_VL_LOOKUP_ENTRY;
if (entry->format == SJA1105_VL_FORMAT_PSFP) {
/* Interpreting vllupformat as 0 */
sja1105_packing(buf, &entry->destports,
95, 91, size, op);
sja1105_packing(buf, &entry->iscritical,
90, 90, size, op);
sja1105_packing(buf, &entry->macaddr,
89, 42, size, op);
sja1105_packing(buf, &entry->vlanid,
41, 30, size, op);
sja1105_packing(buf, &entry->port,
29, 27, size, op);
sja1105_packing(buf, &entry->vlanprior,
26, 24, size, op);
} else {
/* Interpreting vllupformat as 1 */
sja1105_packing(buf, &entry->egrmirr,
95, 91, size, op);
sja1105_packing(buf, &entry->ingrmirr,
90, 90, size, op);
sja1105_packing(buf, &entry->vlid,
57, 42, size, op);
sja1105_packing(buf, &entry->port,
29, 27, size, op);
}
return size;
}
static size_t sja1105_vl_policing_entry_packing(void *buf, void *entry_ptr,
enum packing_op op)
{
struct sja1105_vl_policing_entry *entry = entry_ptr;
const size_t size = SJA1105_SIZE_VL_POLICING_ENTRY;
sja1105_packing(buf, &entry->type, 63, 63, size, op);
sja1105_packing(buf, &entry->maxlen, 62, 52, size, op);
sja1105_packing(buf, &entry->sharindx, 51, 42, size, op);
if (entry->type == 0) {
sja1105_packing(buf, &entry->bag, 41, 28, size, op);
sja1105_packing(buf, &entry->jitter, 27, 18, size, op);
}
return size;
}
size_t sja1105_vlan_lookup_entry_packing(void *buf, void *entry_ptr,
enum packing_op op)
{
......@@ -510,6 +588,9 @@ static void sja1105_table_write_crc(u8 *table_start, u8 *crc_ptr)
static u64 blk_id_map[BLK_IDX_MAX] = {
[BLK_IDX_SCHEDULE] = BLKID_SCHEDULE,
[BLK_IDX_SCHEDULE_ENTRY_POINTS] = BLKID_SCHEDULE_ENTRY_POINTS,
[BLK_IDX_VL_LOOKUP] = BLKID_VL_LOOKUP,
[BLK_IDX_VL_POLICING] = BLKID_VL_POLICING,
[BLK_IDX_VL_FORWARDING] = BLKID_VL_FORWARDING,
[BLK_IDX_L2_LOOKUP] = BLKID_L2_LOOKUP,
[BLK_IDX_L2_POLICING] = BLKID_L2_POLICING,
[BLK_IDX_VLAN_LOOKUP] = BLKID_VLAN_LOOKUP,
......@@ -517,6 +598,7 @@ static u64 blk_id_map[BLK_IDX_MAX] = {
[BLK_IDX_MAC_CONFIG] = BLKID_MAC_CONFIG,
[BLK_IDX_SCHEDULE_PARAMS] = BLKID_SCHEDULE_PARAMS,
[BLK_IDX_SCHEDULE_ENTRY_POINTS_PARAMS] = BLKID_SCHEDULE_ENTRY_POINTS_PARAMS,
[BLK_IDX_VL_FORWARDING_PARAMS] = BLKID_VL_FORWARDING_PARAMS,
[BLK_IDX_L2_LOOKUP_PARAMS] = BLKID_L2_LOOKUP_PARAMS,
[BLK_IDX_L2_FORWARDING_PARAMS] = BLKID_L2_FORWARDING_PARAMS,
[BLK_IDX_AVB_PARAMS] = BLKID_AVB_PARAMS,
......@@ -533,6 +615,9 @@ const char *sja1105_static_config_error_msg[] = {
"schedule-table present, but one of "
"schedule-entry-points-table, schedule-parameters-table or "
"schedule-entry-points-parameters table is empty",
[SJA1105_INCORRECT_VIRTUAL_LINK_CONFIGURATION] =
"vl-lookup-table present, but one of vl-policing-table, "
"vl-forwarding-table or vl-forwarding-parameters-table is empty",
[SJA1105_MISSING_L2_POLICING_TABLE] =
"l2-policing-table needs to have at least one entry",
[SJA1105_MISSING_L2_FORWARDING_TABLE] =
......@@ -560,6 +645,7 @@ static sja1105_config_valid_t
static_config_check_memory_size(const struct sja1105_table *tables)
{
const struct sja1105_l2_forwarding_params_entry *l2_fwd_params;
const struct sja1105_vl_forwarding_params_entry *vl_fwd_params;
int i, mem = 0;
l2_fwd_params = tables[BLK_IDX_L2_FORWARDING_PARAMS].entries;
......@@ -567,6 +653,12 @@ static_config_check_memory_size(const struct sja1105_table *tables)
for (i = 0; i < 8; i++)
mem += l2_fwd_params->part_spc[i];
if (tables[BLK_IDX_VL_FORWARDING_PARAMS].entry_count) {
vl_fwd_params = tables[BLK_IDX_VL_FORWARDING_PARAMS].entries;
for (i = 0; i < 8; i++)
mem += vl_fwd_params->partspc[i];
}
if (mem > SJA1105_MAX_FRAME_MEMORY)
return SJA1105_OVERCOMMITTED_FRAME_MEMORY;
......@@ -594,6 +686,32 @@ sja1105_static_config_check_valid(const struct sja1105_static_config *config)
if (!IS_FULL(BLK_IDX_SCHEDULE_ENTRY_POINTS_PARAMS))
return SJA1105_INCORRECT_TTETHERNET_CONFIGURATION;
}
if (tables[BLK_IDX_VL_LOOKUP].entry_count) {
struct sja1105_vl_lookup_entry *vl_lookup;
bool has_critical_links = false;
int i;
vl_lookup = tables[BLK_IDX_VL_LOOKUP].entries;
for (i = 0; i < tables[BLK_IDX_VL_LOOKUP].entry_count; i++) {
if (vl_lookup[i].iscritical) {
has_critical_links = true;
break;
}
}
if (tables[BLK_IDX_VL_POLICING].entry_count == 0 &&
has_critical_links)
return SJA1105_INCORRECT_VIRTUAL_LINK_CONFIGURATION;
if (tables[BLK_IDX_VL_FORWARDING].entry_count == 0 &&
has_critical_links)
return SJA1105_INCORRECT_VIRTUAL_LINK_CONFIGURATION;
if (tables[BLK_IDX_VL_FORWARDING_PARAMS].entry_count == 0 &&
has_critical_links)
return SJA1105_INCORRECT_VIRTUAL_LINK_CONFIGURATION;
}
if (tables[BLK_IDX_L2_POLICING].entry_count == 0)
return SJA1105_MISSING_L2_POLICING_TABLE;
......@@ -703,6 +821,9 @@ sja1105_static_config_get_length(const struct sja1105_static_config *config)
struct sja1105_table_ops sja1105e_table_ops[BLK_IDX_MAX] = {
[BLK_IDX_SCHEDULE] = {0},
[BLK_IDX_SCHEDULE_ENTRY_POINTS] = {0},
[BLK_IDX_VL_LOOKUP] = {0},
[BLK_IDX_VL_POLICING] = {0},
[BLK_IDX_VL_FORWARDING] = {0},
[BLK_IDX_L2_LOOKUP] = {
.packing = sja1105et_l2_lookup_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_l2_lookup_entry),
......@@ -735,6 +856,7 @@ struct sja1105_table_ops sja1105e_table_ops[BLK_IDX_MAX] = {
},
[BLK_IDX_SCHEDULE_PARAMS] = {0},
[BLK_IDX_SCHEDULE_ENTRY_POINTS_PARAMS] = {0},
[BLK_IDX_VL_FORWARDING_PARAMS] = {0},
[BLK_IDX_L2_LOOKUP_PARAMS] = {
.packing = sja1105et_l2_lookup_params_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_l2_lookup_params_entry),
......@@ -781,6 +903,24 @@ struct sja1105_table_ops sja1105t_table_ops[BLK_IDX_MAX] = {
.packed_entry_size = SJA1105_SIZE_SCHEDULE_ENTRY_POINTS_ENTRY,
.max_entry_count = SJA1105_MAX_SCHEDULE_ENTRY_POINTS_COUNT,
},
[BLK_IDX_VL_LOOKUP] = {
.packing = sja1105_vl_lookup_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_vl_lookup_entry),
.packed_entry_size = SJA1105_SIZE_VL_LOOKUP_ENTRY,
.max_entry_count = SJA1105_MAX_VL_LOOKUP_COUNT,
},
[BLK_IDX_VL_POLICING] = {
.packing = sja1105_vl_policing_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_vl_policing_entry),
.packed_entry_size = SJA1105_SIZE_VL_POLICING_ENTRY,
.max_entry_count = SJA1105_MAX_VL_POLICING_COUNT,
},
[BLK_IDX_VL_FORWARDING] = {
.packing = sja1105_vl_forwarding_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_vl_forwarding_entry),
.packed_entry_size = SJA1105_SIZE_VL_FORWARDING_ENTRY,
.max_entry_count = SJA1105_MAX_VL_FORWARDING_COUNT,
},
[BLK_IDX_L2_LOOKUP] = {
.packing = sja1105et_l2_lookup_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_l2_lookup_entry),
......@@ -823,6 +963,12 @@ struct sja1105_table_ops sja1105t_table_ops[BLK_IDX_MAX] = {
.packed_entry_size = SJA1105_SIZE_SCHEDULE_ENTRY_POINTS_PARAMS_ENTRY,
.max_entry_count = SJA1105_MAX_SCHEDULE_ENTRY_POINTS_PARAMS_COUNT,
},
[BLK_IDX_VL_FORWARDING_PARAMS] = {
.packing = sja1105_vl_forwarding_params_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_vl_forwarding_params_entry),
.packed_entry_size = SJA1105_SIZE_VL_FORWARDING_PARAMS_ENTRY,
.max_entry_count = SJA1105_MAX_VL_FORWARDING_PARAMS_COUNT,
},
[BLK_IDX_L2_LOOKUP_PARAMS] = {
.packing = sja1105et_l2_lookup_params_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_l2_lookup_params_entry),
......@@ -859,6 +1005,9 @@ struct sja1105_table_ops sja1105t_table_ops[BLK_IDX_MAX] = {
struct sja1105_table_ops sja1105p_table_ops[BLK_IDX_MAX] = {
[BLK_IDX_SCHEDULE] = {0},
[BLK_IDX_SCHEDULE_ENTRY_POINTS] = {0},
[BLK_IDX_VL_LOOKUP] = {0},
[BLK_IDX_VL_POLICING] = {0},
[BLK_IDX_VL_FORWARDING] = {0},
[BLK_IDX_L2_LOOKUP] = {
.packing = sja1105pqrs_l2_lookup_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_l2_lookup_entry),
......@@ -891,6 +1040,7 @@ struct sja1105_table_ops sja1105p_table_ops[BLK_IDX_MAX] = {
},
[BLK_IDX_SCHEDULE_PARAMS] = {0},
[BLK_IDX_SCHEDULE_ENTRY_POINTS_PARAMS] = {0},
[BLK_IDX_VL_FORWARDING_PARAMS] = {0},
[BLK_IDX_L2_LOOKUP_PARAMS] = {
.packing = sja1105pqrs_l2_lookup_params_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_l2_lookup_params_entry),
......@@ -937,6 +1087,24 @@ struct sja1105_table_ops sja1105q_table_ops[BLK_IDX_MAX] = {
.packed_entry_size = SJA1105_SIZE_SCHEDULE_ENTRY_POINTS_ENTRY,
.max_entry_count = SJA1105_MAX_SCHEDULE_ENTRY_POINTS_COUNT,
},
[BLK_IDX_VL_LOOKUP] = {
.packing = sja1105_vl_lookup_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_vl_lookup_entry),
.packed_entry_size = SJA1105_SIZE_VL_LOOKUP_ENTRY,
.max_entry_count = SJA1105_MAX_VL_LOOKUP_COUNT,
},
[BLK_IDX_VL_POLICING] = {
.packing = sja1105_vl_policing_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_vl_policing_entry),
.packed_entry_size = SJA1105_SIZE_VL_POLICING_ENTRY,
.max_entry_count = SJA1105_MAX_VL_POLICING_COUNT,
},
[BLK_IDX_VL_FORWARDING] = {
.packing = sja1105_vl_forwarding_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_vl_forwarding_entry),
.packed_entry_size = SJA1105_SIZE_VL_FORWARDING_ENTRY,
.max_entry_count = SJA1105_MAX_VL_FORWARDING_COUNT,
},
[BLK_IDX_L2_LOOKUP] = {
.packing = sja1105pqrs_l2_lookup_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_l2_lookup_entry),
@@ -979,6 +1147,12 @@ struct sja1105_table_ops sja1105q_table_ops[BLK_IDX_MAX] = {
.packed_entry_size = SJA1105_SIZE_SCHEDULE_ENTRY_POINTS_PARAMS_ENTRY,
.max_entry_count = SJA1105_MAX_SCHEDULE_ENTRY_POINTS_PARAMS_COUNT,
},
[BLK_IDX_VL_FORWARDING_PARAMS] = {
.packing = sja1105_vl_forwarding_params_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_vl_forwarding_params_entry),
.packed_entry_size = SJA1105_SIZE_VL_FORWARDING_PARAMS_ENTRY,
.max_entry_count = SJA1105_MAX_VL_FORWARDING_PARAMS_COUNT,
},
[BLK_IDX_L2_LOOKUP_PARAMS] = {
.packing = sja1105pqrs_l2_lookup_params_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_l2_lookup_params_entry),
@@ -1015,6 +1189,9 @@ struct sja1105_table_ops sja1105q_table_ops[BLK_IDX_MAX] = {
struct sja1105_table_ops sja1105r_table_ops[BLK_IDX_MAX] = {
[BLK_IDX_SCHEDULE] = {0},
[BLK_IDX_SCHEDULE_ENTRY_POINTS] = {0},
[BLK_IDX_VL_LOOKUP] = {0},
[BLK_IDX_VL_POLICING] = {0},
[BLK_IDX_VL_FORWARDING] = {0},
[BLK_IDX_L2_LOOKUP] = {
.packing = sja1105pqrs_l2_lookup_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_l2_lookup_entry),
@@ -1047,6 +1224,7 @@ struct sja1105_table_ops sja1105r_table_ops[BLK_IDX_MAX] = {
},
[BLK_IDX_SCHEDULE_PARAMS] = {0},
[BLK_IDX_SCHEDULE_ENTRY_POINTS_PARAMS] = {0},
[BLK_IDX_VL_FORWARDING_PARAMS] = {0},
[BLK_IDX_L2_LOOKUP_PARAMS] = {
.packing = sja1105pqrs_l2_lookup_params_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_l2_lookup_params_entry),
@@ -1093,6 +1271,24 @@ struct sja1105_table_ops sja1105s_table_ops[BLK_IDX_MAX] = {
.packed_entry_size = SJA1105_SIZE_SCHEDULE_ENTRY_POINTS_ENTRY,
.max_entry_count = SJA1105_MAX_SCHEDULE_ENTRY_POINTS_COUNT,
},
[BLK_IDX_VL_LOOKUP] = {
.packing = sja1105_vl_lookup_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_vl_lookup_entry),
.packed_entry_size = SJA1105_SIZE_VL_LOOKUP_ENTRY,
.max_entry_count = SJA1105_MAX_VL_LOOKUP_COUNT,
},
[BLK_IDX_VL_POLICING] = {
.packing = sja1105_vl_policing_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_vl_policing_entry),
.packed_entry_size = SJA1105_SIZE_VL_POLICING_ENTRY,
.max_entry_count = SJA1105_MAX_VL_POLICING_COUNT,
},
[BLK_IDX_VL_FORWARDING] = {
.packing = sja1105_vl_forwarding_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_vl_forwarding_entry),
.packed_entry_size = SJA1105_SIZE_VL_FORWARDING_ENTRY,
.max_entry_count = SJA1105_MAX_VL_FORWARDING_COUNT,
},
[BLK_IDX_L2_LOOKUP] = {
.packing = sja1105pqrs_l2_lookup_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_l2_lookup_entry),
@@ -1135,6 +1331,12 @@ struct sja1105_table_ops sja1105s_table_ops[BLK_IDX_MAX] = {
.packed_entry_size = SJA1105_SIZE_SCHEDULE_ENTRY_POINTS_PARAMS_ENTRY,
.max_entry_count = SJA1105_MAX_SCHEDULE_ENTRY_POINTS_PARAMS_COUNT,
},
[BLK_IDX_VL_FORWARDING_PARAMS] = {
.packing = sja1105_vl_forwarding_params_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_vl_forwarding_params_entry),
.packed_entry_size = SJA1105_SIZE_VL_FORWARDING_PARAMS_ENTRY,
.max_entry_count = SJA1105_MAX_VL_FORWARDING_PARAMS_COUNT,
},
[BLK_IDX_L2_LOOKUP_PARAMS] = {
.packing = sja1105pqrs_l2_lookup_params_entry_packing,
.unpacked_entry_size = sizeof(struct sja1105_l2_lookup_params_entry),
@@ -13,6 +9 @@
#define SJA1105_SIZE_TABLE_HEADER 12
#define SJA1105_SIZE_SCHEDULE_ENTRY 8
#define SJA1105_SIZE_SCHEDULE_ENTRY_POINTS_ENTRY 4
#define SJA1105_SIZE_VL_LOOKUP_ENTRY 12
#define SJA1105_SIZE_VL_POLICING_ENTRY 8
#define SJA1105_SIZE_VL_FORWARDING_ENTRY 4
#define SJA1105_SIZE_L2_POLICING_ENTRY 8
#define SJA1105_SIZE_VLAN_LOOKUP_ENTRY 8
#define SJA1105_SIZE_L2_FORWARDING_ENTRY 8
@@ -20,6 +23,7 @@
#define SJA1105_SIZE_XMII_PARAMS_ENTRY 4
#define SJA1105_SIZE_SCHEDULE_PARAMS_ENTRY 12
#define SJA1105_SIZE_SCHEDULE_ENTRY_POINTS_PARAMS_ENTRY 4
#define SJA1105_SIZE_VL_FORWARDING_PARAMS_ENTRY 12
#define SJA1105ET_SIZE_L2_LOOKUP_ENTRY 12
#define SJA1105ET_SIZE_MAC_CONFIG_ENTRY 28
#define SJA1105ET_SIZE_L2_LOOKUP_PARAMS_ENTRY 4
@@ -35,6 +39,9 @@
enum {
BLKID_SCHEDULE = 0x00,
BLKID_SCHEDULE_ENTRY_POINTS = 0x01,
BLKID_VL_LOOKUP = 0x02,
BLKID_VL_POLICING = 0x03,
BLKID_VL_FORWARDING = 0x04,
BLKID_L2_LOOKUP = 0x05,
BLKID_L2_POLICING = 0x06,
BLKID_VLAN_LOOKUP = 0x07,
@@ -42,6 +49,7 @@ enum {
BLKID_MAC_CONFIG = 0x09,
BLKID_SCHEDULE_PARAMS = 0x0A,
BLKID_SCHEDULE_ENTRY_POINTS_PARAMS = 0x0B,
BLKID_VL_FORWARDING_PARAMS = 0x0C,
BLKID_L2_LOOKUP_PARAMS = 0x0D,
BLKID_L2_FORWARDING_PARAMS = 0x0E,
BLKID_AVB_PARAMS = 0x10,
@@ -52,6 +60,9 @@ enum {
enum sja1105_blk_idx {
BLK_IDX_SCHEDULE = 0,
BLK_IDX_SCHEDULE_ENTRY_POINTS,
BLK_IDX_VL_LOOKUP,
BLK_IDX_VL_POLICING,
BLK_IDX_VL_FORWARDING,
BLK_IDX_L2_LOOKUP,
BLK_IDX_L2_POLICING,
BLK_IDX_VLAN_LOOKUP,
@@ -59,6 +70,7 @@ enum sja1105_blk_idx {
BLK_IDX_MAC_CONFIG,
BLK_IDX_SCHEDULE_PARAMS,
BLK_IDX_SCHEDULE_ENTRY_POINTS_PARAMS,
BLK_IDX_VL_FORWARDING_PARAMS,
BLK_IDX_L2_LOOKUP_PARAMS,
BLK_IDX_L2_FORWARDING_PARAMS,
BLK_IDX_AVB_PARAMS,
@@ -73,6 +85,9 @@ enum sja1105_blk_idx {
#define SJA1105_MAX_SCHEDULE_COUNT 1024
#define SJA1105_MAX_SCHEDULE_ENTRY_POINTS_COUNT 2048
#define SJA1105_MAX_VL_LOOKUP_COUNT 1024
#define SJA1105_MAX_VL_POLICING_COUNT 1024
#define SJA1105_MAX_VL_FORWARDING_COUNT 1024
#define SJA1105_MAX_L2_LOOKUP_COUNT 1024
#define SJA1105_MAX_L2_POLICING_COUNT 45
#define SJA1105_MAX_VLAN_LOOKUP_COUNT 4096
@@ -80,6 +95,7 @@ enum sja1105_blk_idx {
#define SJA1105_MAX_MAC_CONFIG_COUNT 5
#define SJA1105_MAX_SCHEDULE_PARAMS_COUNT 1
#define SJA1105_MAX_SCHEDULE_ENTRY_POINTS_PARAMS_COUNT 1
#define SJA1105_MAX_VL_FORWARDING_PARAMS_COUNT 1
#define SJA1105_MAX_L2_LOOKUP_PARAMS_COUNT 1
#define SJA1105_MAX_L2_FORWARDING_PARAMS_COUNT 1
#define SJA1105_MAX_GENERAL_PARAMS_COUNT 1
@@ -262,6 +278,54 @@ struct sja1105_xmii_params_entry {
u64 xmii_mode[5];
};
enum {
SJA1105_VL_FORMAT_PSFP = 0,
SJA1105_VL_FORMAT_ARINC664 = 1,
};
struct sja1105_vl_lookup_entry {
u64 format;
u64 port;
union {
/* SJA1105_VL_FORMAT_PSFP */
struct {
u64 destports;
u64 iscritical;
u64 macaddr;
u64 vlanid;
u64 vlanprior;
};
/* SJA1105_VL_FORMAT_ARINC664 */
struct {
u64 egrmirr;
u64 ingrmirr;
u64 vlid;
};
};
/* Not part of hardware structure */
unsigned long flow_cookie;
};
struct sja1105_vl_policing_entry {
u64 type;
u64 maxlen;
u64 sharindx;
u64 bag;
u64 jitter;
};
struct sja1105_vl_forwarding_entry {
u64 type;
u64 priority;
u64 partition;
u64 destports;
};
struct sja1105_vl_forwarding_params_entry {
u64 partspc[8];
u64 debugen;
};
struct sja1105_table_header {
u64 block_id;
u64 len;
@@ -303,6 +367,7 @@ typedef enum {
SJA1105_CONFIG_OK = 0,
SJA1105_TTETHERNET_NOT_SUPPORTED,
SJA1105_INCORRECT_TTETHERNET_CONFIGURATION,
SJA1105_INCORRECT_VIRTUAL_LINK_CONFIGURATION,
SJA1105_MISSING_L2_POLICING_TABLE,
SJA1105_MISSING_L2_FORWARDING_TABLE,
SJA1105_MISSING_L2_FORWARDING_PARAMS_TABLE,
@@ -7,7 +7,6 @@
#define SJA1105_TAS_CLKSRC_STANDALONE 1
#define SJA1105_TAS_CLKSRC_AS6802 2
#define SJA1105_TAS_CLKSRC_PTP 3
#define SJA1105_TAS_MAX_DELTA BIT(19)
#define SJA1105_GATE_MASK GENMASK_ULL(SJA1105_NUM_TC - 1, 0)
#define work_to_sja1105_tas(d) \
@@ -15,22 +14,10 @@
#define tas_to_sja1105(d) \
container_of((d), struct sja1105_private, tas_data)
/* This is not a preprocessor macro because the "ns" argument may or may not be
* s64 at caller side. This ensures it is properly type-cast before div_s64.
*/
static s64 ns_to_sja1105_delta(s64 ns)
{
return div_s64(ns, 200);
}
static s64 sja1105_delta_to_ns(s64 delta)
{
return delta * 200;
}
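The two helpers above convert between nanoseconds and the switch's 200 ns scheduling ticks ("deltas"). Since the maximum delta value is BIT(18) ticks, the largest representable interval works out to about 52.4 ms, which is where the "Maximum interval is 52 ms" check in the tc-gate code comes from. A quick sketch of the arithmetic (Python names are mine, not part of the driver; the kernel's div_s64 truncates toward zero, so this matches only for non-negative values):

```python
SJA1105_TICK_NS = 200            # one scheduling delta unit, in ns
SJA1105_TAS_MAX_DELTA = 1 << 18  # maximum delta value, in ticks

def ns_to_sja1105_delta(ns):
    # Truncating division for non-negative ns, like div_s64 in the kernel
    return ns // SJA1105_TICK_NS

def sja1105_delta_to_ns(delta):
    return delta * SJA1105_TICK_NS

# The largest representable interval: 262144 ticks * 200 ns = 52.4288 ms
print(sja1105_delta_to_ns(SJA1105_TAS_MAX_DELTA))  # 52428800
```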
static int sja1105_tas_set_runtime_params(struct sja1105_private *priv)
{
struct sja1105_tas_data *tas_data = &priv->tas_data;
struct sja1105_gating_config *gating_cfg = &tas_data->gating_cfg;
struct dsa_switch *ds = priv->ds;
s64 earliest_base_time = S64_MAX;
s64 latest_base_time = 0;
@@ -59,6 +46,19 @@ static int sja1105_tas_set_runtime_params(struct sja1105_private *priv)
}
}
if (!list_empty(&gating_cfg->entries)) {
tas_data->enabled = true;
if (max_cycle_time < gating_cfg->cycle_time)
max_cycle_time = gating_cfg->cycle_time;
if (latest_base_time < gating_cfg->base_time)
latest_base_time = gating_cfg->base_time;
if (earliest_base_time > gating_cfg->base_time) {
earliest_base_time = gating_cfg->base_time;
its_cycle_time = gating_cfg->cycle_time;
}
}
if (!tas_data->enabled)
return 0;
@@ -155,13 +155,14 @@ static int sja1105_tas_set_runtime_params(struct sja1105_private *priv)
* their "subschedule end index" (subscheind) equal to the last valid
* subschedule's end index (in this case 5).
*/
static int sja1105_init_scheduling(struct sja1105_private *priv)
int sja1105_init_scheduling(struct sja1105_private *priv)
{
struct sja1105_schedule_entry_points_entry *schedule_entry_points;
struct sja1105_schedule_entry_points_params_entry
*schedule_entry_points_params;
struct sja1105_schedule_params_entry *schedule_params;
struct sja1105_tas_data *tas_data = &priv->tas_data;
struct sja1105_gating_config *gating_cfg = &tas_data->gating_cfg;
struct sja1105_schedule_entry *schedule;
struct sja1105_table *table;
int schedule_start_idx;
@@ -213,6 +214,11 @@ static int sja1105_init_scheduling(struct sja1105_private *priv)
}
}
if (!list_empty(&gating_cfg->entries)) {
num_entries += gating_cfg->num_entries;
num_cycles++;
}
/* Nothing to do */
if (!num_cycles)
return 0;
@@ -312,6 +318,42 @@ static int sja1105_init_scheduling(struct sja1105_private *priv)
cycle++;
}
if (!list_empty(&gating_cfg->entries)) {
struct sja1105_gate_entry *e;
/* Relative base time */
s64 rbt;
schedule_start_idx = k;
schedule_end_idx = k + gating_cfg->num_entries - 1;
rbt = future_base_time(gating_cfg->base_time,
gating_cfg->cycle_time,
tas_data->earliest_base_time);
rbt -= tas_data->earliest_base_time;
entry_point_delta = ns_to_sja1105_delta(rbt) + 1;
schedule_entry_points[cycle].subschindx = cycle;
schedule_entry_points[cycle].delta = entry_point_delta;
schedule_entry_points[cycle].address = schedule_start_idx;
for (i = cycle; i < 8; i++)
schedule_params->subscheind[i] = schedule_end_idx;
list_for_each_entry(e, &gating_cfg->entries, list) {
schedule[k].delta = ns_to_sja1105_delta(e->interval);
schedule[k].destports = e->rule->vl.destports;
schedule[k].setvalid = true;
schedule[k].txen = true;
schedule[k].vlindex = e->rule->vl.sharindx;
schedule[k].winstindex = e->rule->vl.sharindx;
if (e->gate_state) /* Gate open */
schedule[k].winst = true;
else /* Gate closed */
schedule[k].winend = true;
k++;
}
}
return 0;
}
@@ -415,6 +457,54 @@ sja1105_tas_check_conflicts(struct sja1105_private *priv, int port,
return false;
}
/* Check the tc-taprio configuration on @port for conflicts with the tc-gate
* global subschedule. If @port is -1, check it against all ports.
* To reuse the sja1105_tas_check_conflicts logic without refactoring it,
* convert the gating configuration to a dummy tc-taprio offload structure.
*/
bool sja1105_gating_check_conflicts(struct sja1105_private *priv, int port,
struct netlink_ext_ack *extack)
{
struct sja1105_gating_config *gating_cfg = &priv->tas_data.gating_cfg;
size_t num_entries = gating_cfg->num_entries;
struct tc_taprio_qopt_offload *dummy;
struct sja1105_gate_entry *e;
bool conflict;
int i = 0;
if (list_empty(&gating_cfg->entries))
return false;
dummy = kzalloc(sizeof(struct tc_taprio_sched_entry) * num_entries +
sizeof(struct tc_taprio_qopt_offload), GFP_KERNEL);
if (!dummy) {
NL_SET_ERR_MSG_MOD(extack, "Failed to allocate memory");
return true;
}
dummy->num_entries = num_entries;
dummy->base_time = gating_cfg->base_time;
dummy->cycle_time = gating_cfg->cycle_time;
list_for_each_entry(e, &gating_cfg->entries, list)
dummy->entries[i++].interval = e->interval;
if (port != -1) {
conflict = sja1105_tas_check_conflicts(priv, port, dummy);
} else {
for (port = 0; port < SJA1105_NUM_PORTS; port++) {
conflict = sja1105_tas_check_conflicts(priv, port,
dummy);
if (conflict)
break;
}
}
kfree(dummy);
return conflict;
}
int sja1105_setup_tc_taprio(struct dsa_switch *ds, int port,
struct tc_taprio_qopt_offload *admin)
{
@@ -473,6 +563,11 @@ int sja1105_setup_tc_taprio(struct dsa_switch *ds, int port,
return -ERANGE;
}
if (sja1105_gating_check_conflicts(priv, port, NULL)) {
dev_err(ds->dev, "Conflict with tc-gate schedule\n");
return -ERANGE;
}
tas_data->offload[port] = taprio_offload_get(admin);
rc = sja1105_init_scheduling(priv);
@@ -779,6 +874,8 @@ void sja1105_tas_setup(struct dsa_switch *ds)
INIT_WORK(&tas_data->tas_work, sja1105_tas_state_machine);
tas_data->state = SJA1105_TAS_STATE_DISABLED;
tas_data->last_op = SJA1105_PTP_NONE;
INIT_LIST_HEAD(&tas_data->gating_cfg.entries);
}
void sja1105_tas_teardown(struct dsa_switch *ds)
@@ -6,6 +10 @@
#include <net/pkt_sched.h>
#define SJA1105_TAS_MAX_DELTA BIT(18)
struct sja1105_private;
#if IS_ENABLED(CONFIG_NET_DSA_SJA1105_TAS)
enum sja1105_tas_state {
@@ -20,8 +24,23 @@ enum sja1105_ptp_op {
SJA1105_PTP_ADJUSTFREQ,
};
struct sja1105_gate_entry {
struct list_head list;
struct sja1105_rule *rule;
s64 interval;
u8 gate_state;
};
struct sja1105_gating_config {
u64 cycle_time;
s64 base_time;
int num_entries;
struct list_head entries;
};
struct sja1105_tas_data {
struct tc_taprio_qopt_offload *offload[SJA1105_NUM_PORTS];
struct sja1105_gating_config gating_cfg;
enum sja1105_tas_state state;
enum sja1105_ptp_op last_op;
struct work_struct tas_work;
@@ -42,6 +61,11 @@ void sja1105_tas_clockstep(struct dsa_switch *ds);
void sja1105_tas_adjfreq(struct dsa_switch *ds);
bool sja1105_gating_check_conflicts(struct sja1105_private *priv, int port,
struct netlink_ext_ack *extack);
int sja1105_init_scheduling(struct sja1105_private *priv);
#else
/* C doesn't allow empty structures, bah! */
@@ -63,6 +87,18 @@ static inline void sja1105_tas_clockstep(struct dsa_switch *ds) { }
static inline void sja1105_tas_adjfreq(struct dsa_switch *ds) { }
static inline bool
sja1105_gating_check_conflicts(struct sja1105_private *priv, int port,
struct netlink_ext_ack *extack)
{
return true;
}
static inline int sja1105_init_scheduling(struct sja1105_private *priv)
{
return 0;
}
#endif /* IS_ENABLED(CONFIG_NET_DSA_SJA1105_TAS) */
#endif /* _SJA1105_TAS_H */
// SPDX-License-Identifier: GPL-2.0
/* Copyright 2020, NXP Semiconductors
*/
#include <net/tc_act/tc_gate.h>
#include <linux/dsa/8021q.h>
#include "sja1105.h"
#define SJA1105_VL_FRAME_MEMORY 100
#define SJA1105_SIZE_VL_STATUS 8
/* The switch flow classification core implements TTEthernet, which 'thinks' in
* terms of Virtual Links (VL), a concept borrowed from ARINC 664 part 7.
* However, it also has another operating mode (VLLUPFORMAT=0) where it acts
* somewhat closer to a pre-standard implementation of IEEE 802.1Qci
* (Per-Stream Filtering and Policing), which is what the driver is going to be
* implementing.
*
* VL Lookup
* Key = {DMAC && VLANID +---------+ Key = { (DMAC[47:16] & VLMASK ==
* && VLAN PCP | | VLMARKER)
* && INGRESS PORT} +---------+ (both fixed)
* (exact match, | && DMAC[15:0] == VLID
* all specified in rule) | (specified in rule)
* v && INGRESS PORT }
* ------------
* 0 (PSFP) / \ 1 (ARINC664)
* +-----------/ VLLUPFORMAT \----------+
* | \ (fixed) / |
* | \ / |
* 0 (forwarding) v ------------ |
* ------------ |
* / \ 1 (QoS classification) |
* +---/ ISCRITICAL \-----------+ |
* | \ (per rule) / | |
* | \ / VLID taken from VLID taken from
* v ------------ index of rule contents of rule
* select that matched that matched
* DESTPORTS | |
* | +---------+--------+
* | |
* | v
* | VL Forwarding
* | (indexed by VLID)
* | +---------+
* | +--------------| |
* | | select TYPE +---------+
* | v
* | 0 (rate ------------ 1 (time
* | constrained) / \ triggered)
* | +------/ TYPE \------------+
* | | \ (per VLID) / |
* | v \ / v
* | VL Policing ------------ VL Policing
* | (indexed by VLID) (indexed by VLID)
* | +---------+ +---------+
* | | TYPE=0 | | TYPE=1 |
* | +---------+ +---------+
* | select SHARINDX select SHARINDX to
* | to rate-limit re-enter VL Forwarding
* | groups of VL's with new VLID for egress
* | to same quota |
* | | |
* | select MAXLEN -> exceed => drop select MAXLEN -> exceed => drop
* | | |
* | v v
* | VL Forwarding VL Forwarding
* | (indexed by SHARINDX) (indexed by SHARINDX)
* | +---------+ +---------+
* | | TYPE=0 | | TYPE=1 |
* | +---------+ +---------+
* | select PRIORITY, select PRIORITY,
* | PARTITION, DESTPORTS PARTITION, DESTPORTS
* | | |
* | v v
* | VL Policing VL Policing
* | (indexed by SHARINDX) (indexed by SHARINDX)
* | +---------+ +---------+
* | | TYPE=0 | | TYPE=1 |
* | +---------+ +---------+
* | | |
* | v |
* | select BAG, -> exceed => drop |
* | JITTER v
* | | ----------------------------------------------
* | | / Reception Window is open for this VL \
* | | / (the Schedule Table executes an entry i \
* | | / M <= i < N, for which these conditions hold): \ no
* | | +----/ \-+
* | | |yes \ WINST[M] == 1 && WINSTINDEX[M] == VLID / |
* | | | \ WINEND[N] == 1 && WINSTINDEX[N] == VLID / |
* | | | \ / |
* | | | \ (the VL window has opened and not yet closed)/ |
* | | | ---------------------------------------------- |
* | | v v
* | | dispatch to DESTPORTS when the Schedule Table drop
* | | executes an entry i with TXEN == 1 && VLINDEX == i
* v v
* dispatch immediately to DESTPORTS
*
* The per-port classification key is always composed of {DMAC, VID, PCP} and
* is non-maskable. This 'looks like' the NULL stream identification function
* from IEEE 802.1CB clause 6, except for the extra VLAN PCP. When the switch
* ports operate as VLAN-unaware, we do allow the user to not specify the VLAN
* ID and PCP, and then the port-based defaults will be used.
*
* In TTEthernet, routing is something that needs to be done manually for each
* Virtual Link. So the flow action must always include one of:
* a. 'redirect', 'trap' or 'drop': select the egress port list
* Additionally, the following actions may be applied on a Virtual Link,
* turning it into 'critical' traffic:
* b. 'police': turn it into a rate-constrained VL, with bandwidth limitation
* given by the maximum frame length, bandwidth allocation gap (BAG) and
* maximum jitter.
* c. 'gate': turn it into a time-triggered VL, which can only be received
* and forwarded according to a given schedule.
*/
static bool sja1105_vl_key_lower(struct sja1105_vl_lookup_entry *a,
struct sja1105_vl_lookup_entry *b)
{
if (a->macaddr < b->macaddr)
return true;
if (a->macaddr > b->macaddr)
return false;
if (a->vlanid < b->vlanid)
return true;
if (a->vlanid > b->vlanid)
return false;
if (a->port < b->port)
return true;
if (a->port > b->port)
return false;
if (a->vlanprior < b->vlanprior)
return true;
if (a->vlanprior > b->vlanprior)
return false;
/* Keys are equal */
return false;
}
static int sja1105_init_virtual_links(struct sja1105_private *priv,
struct netlink_ext_ack *extack)
{
struct sja1105_l2_forwarding_params_entry *l2_fwd_params;
struct sja1105_vl_forwarding_params_entry *vl_fwd_params;
struct sja1105_vl_policing_entry *vl_policing;
struct sja1105_vl_forwarding_entry *vl_fwd;
struct sja1105_vl_lookup_entry *vl_lookup;
bool have_critical_virtual_links = false;
struct sja1105_table *table;
struct sja1105_rule *rule;
int num_virtual_links = 0;
int max_sharindx = 0;
int i, j, k;
table = &priv->static_config.tables[BLK_IDX_L2_FORWARDING_PARAMS];
l2_fwd_params = table->entries;
l2_fwd_params->part_spc[0] = SJA1105_MAX_FRAME_MEMORY;
/* Figure out the dimensioning of the problem */
list_for_each_entry(rule, &priv->flow_block.rules, list) {
if (rule->type != SJA1105_RULE_VL)
continue;
/* Each VL lookup entry matches on a single ingress port */
num_virtual_links += hweight_long(rule->port_mask);
if (rule->vl.type != SJA1105_VL_NONCRITICAL)
have_critical_virtual_links = true;
if (max_sharindx < rule->vl.sharindx)
max_sharindx = rule->vl.sharindx;
}
if (num_virtual_links > SJA1105_MAX_VL_LOOKUP_COUNT) {
NL_SET_ERR_MSG_MOD(extack, "Not enough VL entries available");
return -ENOSPC;
}
if (max_sharindx + 1 > SJA1105_MAX_VL_LOOKUP_COUNT) {
NL_SET_ERR_MSG_MOD(extack, "Policer index out of range");
return -ENOSPC;
}
max_sharindx = max_t(int, num_virtual_links, max_sharindx) + 1;
/* Discard previous VL Lookup Table */
table = &priv->static_config.tables[BLK_IDX_VL_LOOKUP];
if (table->entry_count) {
kfree(table->entries);
table->entry_count = 0;
}
/* Discard previous VL Policing Table */
table = &priv->static_config.tables[BLK_IDX_VL_POLICING];
if (table->entry_count) {
kfree(table->entries);
table->entry_count = 0;
}
/* Discard previous VL Forwarding Table */
table = &priv->static_config.tables[BLK_IDX_VL_FORWARDING];
if (table->entry_count) {
kfree(table->entries);
table->entry_count = 0;
}
/* Discard previous VL Forwarding Parameters Table */
table = &priv->static_config.tables[BLK_IDX_VL_FORWARDING_PARAMS];
if (table->entry_count) {
kfree(table->entries);
table->entry_count = 0;
}
/* Nothing to do */
if (!num_virtual_links)
return 0;
/* Pre-allocate space in the static config tables */
/* VL Lookup Table */
table = &priv->static_config.tables[BLK_IDX_VL_LOOKUP];
table->entries = kcalloc(num_virtual_links,
table->ops->unpacked_entry_size,
GFP_KERNEL);
if (!table->entries)
return -ENOMEM;
table->entry_count = num_virtual_links;
vl_lookup = table->entries;
k = 0;
list_for_each_entry(rule, &priv->flow_block.rules, list) {
unsigned long port;
if (rule->type != SJA1105_RULE_VL)
continue;
for_each_set_bit(port, &rule->port_mask, SJA1105_NUM_PORTS) {
vl_lookup[k].format = SJA1105_VL_FORMAT_PSFP;
vl_lookup[k].port = port;
vl_lookup[k].macaddr = rule->key.vl.dmac;
if (rule->key.type == SJA1105_KEY_VLAN_AWARE_VL) {
vl_lookup[k].vlanid = rule->key.vl.vid;
vl_lookup[k].vlanprior = rule->key.vl.pcp;
} else {
u16 vid = dsa_8021q_rx_vid(priv->ds, port);
vl_lookup[k].vlanid = vid;
vl_lookup[k].vlanprior = 0;
}
/* For critical VLs, the DESTPORTS mask is taken from
* the VL Forwarding Table, so no point in putting it
* in the VL Lookup Table
*/
if (rule->vl.type == SJA1105_VL_NONCRITICAL)
vl_lookup[k].destports = rule->vl.destports;
else
vl_lookup[k].iscritical = true;
vl_lookup[k].flow_cookie = rule->cookie;
k++;
}
}
/* UM10944.pdf chapter 4.2.3 VL Lookup table:
* "the entries in the VL Lookup table must be sorted in ascending
* order (i.e. the smallest value must be loaded first) according to
* the following sort order: MACADDR, VLANID, PORT, VLANPRIOR."
*/
for (i = 0; i < num_virtual_links; i++) {
struct sja1105_vl_lookup_entry *a = &vl_lookup[i];
for (j = i + 1; j < num_virtual_links; j++) {
struct sja1105_vl_lookup_entry *b = &vl_lookup[j];
if (sja1105_vl_key_lower(b, a)) {
struct sja1105_vl_lookup_entry tmp = *a;
*a = *b;
*b = tmp;
}
}
}
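The nested-loop sort above implements the ascending order mandated by UM10944 (MACADDR, then VLANID, then PORT, then VLANPRIOR), which is simply lexicographic tuple ordering on those four fields. A sketch with made-up data (field names mirror the driver's struct; this is illustration, not driver code):

```python
def vl_sort_key(entry):
    # UM10944 sort order: MACADDR, VLANID, PORT, VLANPRIOR
    return (entry["macaddr"], entry["vlanid"], entry["port"], entry["vlanprior"])

entries = [
    {"macaddr": 0x001094000002, "vlanid": 100, "port": 2, "vlanprior": 0},
    {"macaddr": 0x001094000001, "vlanid": 100, "port": 0, "vlanprior": 7},
    {"macaddr": 0x001094000001, "vlanid": 1,   "port": 3, "vlanprior": 0},
]
entries.sort(key=vl_sort_key)
# Smallest MACADDR first; ties broken by VLANID, then PORT, then VLANPRIOR.
print([e["vlanid"] for e in entries])  # [1, 100, 100]
```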
if (!have_critical_virtual_links)
return 0;
/* VL Policing Table */
table = &priv->static_config.tables[BLK_IDX_VL_POLICING];
table->entries = kcalloc(max_sharindx, table->ops->unpacked_entry_size,
GFP_KERNEL);
if (!table->entries)
return -ENOMEM;
table->entry_count = max_sharindx;
vl_policing = table->entries;
/* VL Forwarding Table */
table = &priv->static_config.tables[BLK_IDX_VL_FORWARDING];
table->entries = kcalloc(max_sharindx, table->ops->unpacked_entry_size,
GFP_KERNEL);
if (!table->entries)
return -ENOMEM;
table->entry_count = max_sharindx;
vl_fwd = table->entries;
/* VL Forwarding Parameters Table */
table = &priv->static_config.tables[BLK_IDX_VL_FORWARDING_PARAMS];
table->entries = kcalloc(1, table->ops->unpacked_entry_size,
GFP_KERNEL);
if (!table->entries)
return -ENOMEM;
table->entry_count = 1;
vl_fwd_params = table->entries;
/* Reserve some frame buffer memory for the critical-traffic virtual
* links (which draw from a memory partition separate from the one used
* by best-effort traffic). At the moment, hardcode the value at 100
* blocks of 128 bytes of memory each. This leaves 829 blocks remaining
* for best-effort traffic. TODO: figure out a more flexible way to
* perform the frame buffer partitioning.
*/
l2_fwd_params->part_spc[0] = SJA1105_MAX_FRAME_MEMORY -
SJA1105_VL_FRAME_MEMORY;
vl_fwd_params->partspc[0] = SJA1105_VL_FRAME_MEMORY;
for (i = 0; i < num_virtual_links; i++) {
unsigned long cookie = vl_lookup[i].flow_cookie;
struct sja1105_rule *rule = sja1105_rule_find(priv, cookie);
if (rule->vl.type == SJA1105_VL_NONCRITICAL)
continue;
if (rule->vl.type == SJA1105_VL_TIME_TRIGGERED) {
int sharindx = rule->vl.sharindx;
vl_policing[i].type = 1;
vl_policing[i].sharindx = sharindx;
vl_policing[i].maxlen = rule->vl.maxlen;
vl_policing[sharindx].type = 1;
vl_fwd[i].type = 1;
vl_fwd[sharindx].type = 1;
vl_fwd[sharindx].priority = rule->vl.ipv;
vl_fwd[sharindx].partition = 0;
vl_fwd[sharindx].destports = rule->vl.destports;
}
}
return 0;
}
int sja1105_vl_redirect(struct sja1105_private *priv, int port,
struct netlink_ext_ack *extack, unsigned long cookie,
struct sja1105_key *key, unsigned long destports,
bool append)
{
struct sja1105_rule *rule = sja1105_rule_find(priv, cookie);
int rc;
if (dsa_port_is_vlan_filtering(dsa_to_port(priv->ds, port)) &&
key->type != SJA1105_KEY_VLAN_AWARE_VL) {
NL_SET_ERR_MSG_MOD(extack,
"Can only redirect based on {DMAC, VID, PCP}");
return -EOPNOTSUPP;
} else if (!dsa_port_is_vlan_filtering(dsa_to_port(priv->ds, port)) &&
key->type != SJA1105_KEY_VLAN_UNAWARE_VL) {
NL_SET_ERR_MSG_MOD(extack,
"Can only redirect based on DMAC");
return -EOPNOTSUPP;
}
if (!rule) {
rule = kzalloc(sizeof(*rule), GFP_KERNEL);
if (!rule)
return -ENOMEM;
rule->cookie = cookie;
rule->type = SJA1105_RULE_VL;
rule->key = *key;
list_add(&rule->list, &priv->flow_block.rules);
}
rule->port_mask |= BIT(port);
if (append)
rule->vl.destports |= destports;
else
rule->vl.destports = destports;
rc = sja1105_init_virtual_links(priv, extack);
if (rc) {
rule->port_mask &= ~BIT(port);
if (!rule->port_mask) {
list_del(&rule->list);
kfree(rule);
}
}
return rc;
}
int sja1105_vl_delete(struct sja1105_private *priv, int port,
struct sja1105_rule *rule, struct netlink_ext_ack *extack)
{
int rc;
rule->port_mask &= ~BIT(port);
if (!rule->port_mask) {
list_del(&rule->list);
kfree(rule);
}
rc = sja1105_init_virtual_links(priv, extack);
if (rc)
return rc;
return sja1105_static_config_reload(priv, SJA1105_VIRTUAL_LINKS);
}
/* Insert into the global gate list, sorted by gate action time. */
static int sja1105_insert_gate_entry(struct sja1105_gating_config *gating_cfg,
struct sja1105_rule *rule,
u8 gate_state, s64 entry_time,
struct netlink_ext_ack *extack)
{
struct sja1105_gate_entry *e;
int rc;
e = kzalloc(sizeof(*e), GFP_KERNEL);
if (!e)
return -ENOMEM;
e->rule = rule;
e->gate_state = gate_state;
e->interval = entry_time;
if (list_empty(&gating_cfg->entries)) {
list_add(&e->list, &gating_cfg->entries);
} else {
struct sja1105_gate_entry *p;
list_for_each_entry(p, &gating_cfg->entries, list) {
if (p->interval == e->interval) {
NL_SET_ERR_MSG_MOD(extack,
"Gate conflict");
rc = -EBUSY;
goto err;
}
if (e->interval < p->interval)
break;
}
list_add(&e->list, p->list.prev);
}
gating_cfg->num_entries++;
return 0;
err:
kfree(e);
return rc;
}
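The sorted insertion above keeps the gate list ordered by action time and rejects two gate actions scheduled at the same instant ("Gate conflict"). Reduced to just the timestamps, the logic looks like this (a sketch, not driver code):

```python
import bisect

def insert_gate_entry(times, t):
    """Insert time t into the sorted list, rejecting exact duplicates."""
    i = bisect.bisect_left(times, t)
    if i < len(times) and times[i] == t:
        raise ValueError("Gate conflict")  # two actions at the same instant
    times.insert(i, t)
    return times

print(insert_gate_entry([0, 10, 20], 15))  # [0, 10, 15, 20]
```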
/* The gate entries contain absolute times in their e->interval field. Convert
* that to proper intervals (i.e. "0, 5, 10, 15" to "5, 5, 5, 5").
*/
static void
sja1105_gating_cfg_time_to_interval(struct sja1105_gating_config *gating_cfg,
u64 cycle_time)
{
struct sja1105_gate_entry *last_e;
struct sja1105_gate_entry *e;
struct list_head *prev;
list_for_each_entry(e, &gating_cfg->entries, list) {
struct sja1105_gate_entry *p;
prev = e->list.prev;
if (prev == &gating_cfg->entries)
continue;
p = list_entry(prev, struct sja1105_gate_entry, list);
p->interval = e->interval - p->interval;
}
last_e = list_last_entry(&gating_cfg->entries,
struct sja1105_gate_entry, list);
if (last_e->list.prev != &gating_cfg->entries)
last_e->interval = cycle_time - last_e->interval;
}
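The conversion performed above ("0, 5, 10, 15" becoming "5, 5, 5, 5") can be sketched on a plain list of absolute times: each entry's interval is the distance to the next entry, and the last one wraps around to the cycle time. This sketch assumes a schedule with more than one entry, matching the list-walk above:

```python
def times_to_intervals(times, cycle_time):
    """Convert sorted absolute gate times into back-to-back intervals."""
    intervals = []
    for i, t in enumerate(times):
        if i + 1 < len(times):
            intervals.append(times[i + 1] - t)   # distance to next entry
        else:
            intervals.append(cycle_time - t)     # last entry wraps around
    return intervals

print(times_to_intervals([0, 5, 10, 15], 20))  # [5, 5, 5, 5]
```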
static void sja1105_free_gating_config(struct sja1105_gating_config *gating_cfg)
{
struct sja1105_gate_entry *e, *n;
list_for_each_entry_safe(e, n, &gating_cfg->entries, list) {
list_del(&e->list);
kfree(e);
}
}
static int sja1105_compose_gating_subschedule(struct sja1105_private *priv,
struct netlink_ext_ack *extack)
{
struct sja1105_gating_config *gating_cfg = &priv->tas_data.gating_cfg;
struct sja1105_rule *rule;
s64 max_cycle_time = 0;
s64 its_base_time = 0;
int i, rc = 0;
list_for_each_entry(rule, &priv->flow_block.rules, list) {
if (rule->type != SJA1105_RULE_VL)
continue;
if (rule->vl.type != SJA1105_VL_TIME_TRIGGERED)
continue;
if (max_cycle_time < rule->vl.cycle_time) {
max_cycle_time = rule->vl.cycle_time;
its_base_time = rule->vl.base_time;
}
}
if (!max_cycle_time)
return 0;
dev_dbg(priv->ds->dev, "max_cycle_time %lld its_base_time %lld\n",
max_cycle_time, its_base_time);
sja1105_free_gating_config(gating_cfg);
gating_cfg->base_time = its_base_time;
gating_cfg->cycle_time = max_cycle_time;
gating_cfg->num_entries = 0;
list_for_each_entry(rule, &priv->flow_block.rules, list) {
s64 time;
s64 rbt;
if (rule->type != SJA1105_RULE_VL)
continue;
if (rule->vl.type != SJA1105_VL_TIME_TRIGGERED)
continue;
/* Calculate the difference between this gating schedule's
* base time, and the base time of the gating schedule with the
* longest cycle time. We call it the relative base time (rbt).
*/
rbt = future_base_time(rule->vl.base_time, rule->vl.cycle_time,
its_base_time);
rbt -= its_base_time;
time = rbt;
for (i = 0; i < rule->vl.num_entries; i++) {
u8 gate_state = rule->vl.entries[i].gate_state;
s64 entry_time = time;
while (entry_time < max_cycle_time) {
rc = sja1105_insert_gate_entry(gating_cfg, rule,
gate_state,
entry_time,
extack);
if (rc)
goto err;
entry_time += rule->vl.cycle_time;
}
time += rule->vl.entries[i].interval;
}
}
sja1105_gating_cfg_time_to_interval(gating_cfg, max_cycle_time);
return 0;
err:
sja1105_free_gating_config(gating_cfg);
return rc;
}
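The merge above leans on a future_base_time()-style helper to compute each rule's relative base time (rbt) against the reference schedule. Assuming that helper returns the first occurrence of base_time + n * cycle_time at or after a given instant (my reading of its use here, not a quote of the driver), the computation looks like:

```python
def future_base_time(base_time, cycle_time, now):
    """First time >= now of the form base_time + n * cycle_time, n >= 0."""
    if base_time >= now:
        return base_time
    n = -((base_time - now) // cycle_time)  # ceil((now - base_time) / cycle_time)
    return base_time + n * cycle_time

# A rule with base time 50 and cycle 400, referenced against a schedule
# whose earliest base time is 200: its cycle next starts at t=450, i.e.
# 250 ns after the reference.
earliest_base_time = 200
rbt = future_base_time(50, 400, earliest_base_time) - earliest_base_time
print(rbt)  # 250
```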
int sja1105_vl_gate(struct sja1105_private *priv, int port,
struct netlink_ext_ack *extack, unsigned long cookie,
struct sja1105_key *key, u32 index, s32 prio,
u64 base_time, u64 cycle_time, u64 cycle_time_ext,
u32 num_entries, struct action_gate_entry *entries)
{
struct sja1105_rule *rule = sja1105_rule_find(priv, cookie);
int ipv = -1;
int i, rc;
s32 rem;
if (cycle_time_ext) {
NL_SET_ERR_MSG_MOD(extack,
"Cycle time extension not supported");
return -EOPNOTSUPP;
}
div_s64_rem(base_time, sja1105_delta_to_ns(1), &rem);
if (rem) {
NL_SET_ERR_MSG_MOD(extack,
"Base time must be multiple of 200 ns");
return -ERANGE;
}
div_s64_rem(cycle_time, sja1105_delta_to_ns(1), &rem);
if (rem) {
NL_SET_ERR_MSG_MOD(extack,
"Cycle time must be multiple of 200 ns");
return -ERANGE;
}
if (dsa_port_is_vlan_filtering(dsa_to_port(priv->ds, port)) &&
key->type != SJA1105_KEY_VLAN_AWARE_VL) {
NL_SET_ERR_MSG_MOD(extack,
"Can only gate based on {DMAC, VID, PCP}");
return -EOPNOTSUPP;
} else if (!dsa_port_is_vlan_filtering(dsa_to_port(priv->ds, port)) &&
key->type != SJA1105_KEY_VLAN_UNAWARE_VL) {
NL_SET_ERR_MSG_MOD(extack,
"Can only gate based on DMAC");
return -EOPNOTSUPP;
}
if (!rule) {
rule = kzalloc(sizeof(*rule), GFP_KERNEL);
if (!rule)
return -ENOMEM;
list_add(&rule->list, &priv->flow_block.rules);
rule->cookie = cookie;
rule->type = SJA1105_RULE_VL;
rule->key = *key;
rule->vl.type = SJA1105_VL_TIME_TRIGGERED;
rule->vl.sharindx = index;
rule->vl.base_time = base_time;
rule->vl.cycle_time = cycle_time;
rule->vl.num_entries = num_entries;
rule->vl.entries = kcalloc(num_entries,
sizeof(struct action_gate_entry),
GFP_KERNEL);
if (!rule->vl.entries) {
rc = -ENOMEM;
goto out;
}
for (i = 0; i < num_entries; i++) {
div_s64_rem(entries[i].interval,
sja1105_delta_to_ns(1), &rem);
if (rem) {
NL_SET_ERR_MSG_MOD(extack,
"Interval must be multiple of 200 ns");
rc = -ERANGE;
goto out;
}
if (!entries[i].interval) {
NL_SET_ERR_MSG_MOD(extack,
"Interval cannot be zero");
rc = -ERANGE;
goto out;
}
if (ns_to_sja1105_delta(entries[i].interval) >
SJA1105_TAS_MAX_DELTA) {
NL_SET_ERR_MSG_MOD(extack,
"Maximum interval is 52 ms");
rc = -ERANGE;
goto out;
}
if (entries[i].maxoctets != -1) {
NL_SET_ERR_MSG_MOD(extack,
"Cannot offload IntervalOctetMax");
rc = -EOPNOTSUPP;
goto out;
}
if (ipv == -1) {
ipv = entries[i].ipv;
} else if (ipv != entries[i].ipv) {
NL_SET_ERR_MSG_MOD(extack,
"Only support a single IPV per VL");
rc = -EOPNOTSUPP;
goto out;
}
rule->vl.entries[i] = entries[i];
}
if (ipv == -1) {
if (key->type == SJA1105_KEY_VLAN_AWARE_VL)
ipv = key->vl.pcp;
else
ipv = 0;
}
/* TODO: support per-flow MTU */
rule->vl.maxlen = VLAN_ETH_FRAME_LEN + ETH_FCS_LEN;
rule->vl.ipv = ipv;
}
rule->port_mask |= BIT(port);
rc = sja1105_compose_gating_subschedule(priv, extack);
if (rc)
goto out;
rc = sja1105_init_virtual_links(priv, extack);
if (rc)
goto out;
if (sja1105_gating_check_conflicts(priv, -1, extack)) {
NL_SET_ERR_MSG_MOD(extack, "Conflict with tc-taprio schedule");
rc = -ERANGE;
goto out;
}
out:
if (rc) {
rule->port_mask &= ~BIT(port);
if (!rule->port_mask) {
list_del(&rule->list);
kfree(rule->vl.entries);
kfree(rule);
}
}
return rc;
}
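For reference, a tc-gate rule that would reach sja1105_vl_gate() might look like the following (interface name and MAC address are examples only; per the checks above, base-time and every sched-entry interval must be a multiple of 200 ns, at most 52 ms, non-zero, and all entries must share one internal priority value):

```shell
tc qdisc add dev swp2 clsact
tc filter add dev swp2 ingress flower skip_sw \
        dst_mac 42:be:24:9b:76:20 \
        action gate base-time 0 \
        sched-entry OPEN  1200 -1 -1 \
        sched-entry CLOSE 1200 -1 -1
```

The `-1 -1` fields leave the per-entry IPV and IntervalOctetMax unset; the code above rejects rules that try to set IntervalOctetMax, since the hardware cannot offload it.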
static int sja1105_find_vlid(struct sja1105_private *priv, int port,
struct sja1105_key *key)
{
struct sja1105_vl_lookup_entry *vl_lookup;
struct sja1105_table *table;
int i;
if (WARN_ON(key->type != SJA1105_KEY_VLAN_AWARE_VL &&
key->type != SJA1105_KEY_VLAN_UNAWARE_VL))
return -1;
table = &priv->static_config.tables[BLK_IDX_VL_LOOKUP];
vl_lookup = table->entries;
for (i = 0; i < table->entry_count; i++) {
if (key->type == SJA1105_KEY_VLAN_AWARE_VL) {
if (vl_lookup[i].port == port &&
vl_lookup[i].macaddr == key->vl.dmac &&
vl_lookup[i].vlanid == key->vl.vid &&
vl_lookup[i].vlanprior == key->vl.pcp)
return i;
} else {
if (vl_lookup[i].port == port &&
vl_lookup[i].macaddr == key->vl.dmac)
return i;
}
}
return -1;
}
int sja1105_vl_stats(struct sja1105_private *priv, int port,
struct sja1105_rule *rule, struct flow_stats *stats,
struct netlink_ext_ack *extack)
{
const struct sja1105_regs *regs = priv->info->regs;
u8 buf[SJA1105_SIZE_VL_STATUS] = {0};
u64 unreleased;
u64 timingerr;
u64 lengtherr;
int vlid, rc;
u64 pkts;
if (rule->vl.type != SJA1105_VL_TIME_TRIGGERED)
return 0;
vlid = sja1105_find_vlid(priv, port, &rule->key);
if (vlid < 0)
return 0;
rc = sja1105_xfer_buf(priv, SPI_READ, regs->vl_status + 2 * vlid, buf,
SJA1105_SIZE_VL_STATUS);
if (rc) {
NL_SET_ERR_MSG_MOD(extack, "SPI access failed");
return rc;
}
sja1105_unpack(buf, &timingerr, 31, 16, SJA1105_SIZE_VL_STATUS);
sja1105_unpack(buf, &unreleased, 15, 0, SJA1105_SIZE_VL_STATUS);
sja1105_unpack(buf, &lengtherr, 47, 32, SJA1105_SIZE_VL_STATUS);
pkts = timingerr + unreleased + lengtherr;
flow_stats_update(stats, 0, pkts - rule->vl.stats.pkts,
jiffies - rule->vl.stats.lastused,
FLOW_ACTION_HW_STATS_IMMEDIATE);
rule->vl.stats.pkts = pkts;
rule->vl.stats.lastused = jiffies;
return 0;
}
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright 2020, NXP Semiconductors
*/
#ifndef _SJA1105_VL_H
#define _SJA1105_VL_H
#if IS_ENABLED(CONFIG_NET_DSA_SJA1105_VL)
int sja1105_vl_redirect(struct sja1105_private *priv, int port,
struct netlink_ext_ack *extack, unsigned long cookie,
struct sja1105_key *key, unsigned long destports,
bool append);
int sja1105_vl_delete(struct sja1105_private *priv, int port,
struct sja1105_rule *rule,
struct netlink_ext_ack *extack);
int sja1105_vl_gate(struct sja1105_private *priv, int port,
struct netlink_ext_ack *extack, unsigned long cookie,
struct sja1105_key *key, u32 index, s32 prio,
u64 base_time, u64 cycle_time, u64 cycle_time_ext,
u32 num_entries, struct action_gate_entry *entries);
int sja1105_vl_stats(struct sja1105_private *priv, int port,
struct sja1105_rule *rule, struct flow_stats *stats,
struct netlink_ext_ack *extack);
#else
static inline int sja1105_vl_redirect(struct sja1105_private *priv, int port,
struct netlink_ext_ack *extack,
unsigned long cookie,
struct sja1105_key *key,
unsigned long destports,
bool append)
{
NL_SET_ERR_MSG_MOD(extack, "Virtual Links not compiled in");
return -EOPNOTSUPP;
}
static inline int sja1105_vl_delete(struct sja1105_private *priv,
int port, struct sja1105_rule *rule,
struct netlink_ext_ack *extack)
{
NL_SET_ERR_MSG_MOD(extack, "Virtual Links not compiled in");
return -EOPNOTSUPP;
}
static inline int sja1105_vl_gate(struct sja1105_private *priv, int port,
struct netlink_ext_ack *extack,
unsigned long cookie,
struct sja1105_key *key, u32 index, s32 prio,
u64 base_time, u64 cycle_time,
u64 cycle_time_ext, u32 num_entries,
struct action_gate_entry *entries)
{
NL_SET_ERR_MSG_MOD(extack, "Virtual Links not compiled in");
return -EOPNOTSUPP;
}
static inline int sja1105_vl_stats(struct sja1105_private *priv, int port,
struct sja1105_rule *rule,
struct flow_stats *stats,
struct netlink_ext_ack *extack)
{
NL_SET_ERR_MSG_MOD(extack, "Virtual Links not compiled in");
return -EOPNOTSUPP;
}
#endif /* IS_ENABLED(CONFIG_NET_DSA_SJA1105_VL) */
#endif /* _SJA1105_VL_H */
@@ -637,6 +637,7 @@ void dsa_devlink_resource_occ_get_register(struct dsa_switch *ds,
void *occ_get_priv);
void dsa_devlink_resource_occ_get_unregister(struct dsa_switch *ds,
u64 resource_id);
struct dsa_port *dsa_port_from_netdev(struct net_device *netdev);
struct dsa_devlink_priv {
struct dsa_switch *ds;
@@ -412,6 +412,15 @@ void dsa_devlink_resource_occ_get_unregister(struct dsa_switch *ds,
}
EXPORT_SYMBOL_GPL(dsa_devlink_resource_occ_get_unregister);
struct dsa_port *dsa_port_from_netdev(struct net_device *netdev)
{
if (!netdev || !dsa_slave_dev_check(netdev))
return ERR_PTR(-ENODEV);
return dsa_slave_to_port(netdev);
}
EXPORT_SYMBOL_GPL(dsa_port_from_netdev);
static int __init dsa_init_module(void)
{
int rc;