Commit ec22ab00 authored by David S. Miller

Merge branch 'macsec-hw-offload'

Antoine Tenart says:

====================
net: macsec: initial support for hardware offloading

This series adds support for offloading MACsec transformations to
capable hardware devices. It adds the necessary infrastructure for
offloading MACsec configurations to hardware drivers, in patches 1 to 5;
then introduces MACsec offloading support in the Microsemi MSCC PHY
driver, in patches 6 to 10.

The series can also be found at:
https://github.com/atenart/linux/tree/net-next/macsec

IProute2 modifications can be found at:
https://github.com/atenart/iproute2/tree/macsec

MACsec hardware offloading infrastructure
-----------------------------------------

Linux has a software implementation of the MACsec standard. There are
hardware engines supporting MACsec operations, such as the Intel ixgbe
NIC and some Microsemi PHYs (one of which is used in this series). This
means the MACsec offloading infrastructure should support both
networking PHY and MAC drivers. Note that preliminary MAC driver support
was part of earlier revisions of this series, but it was dropped and
should not be merged before we actually have a provider for it.

In this series we re-use the logic, netlink API and data structures of
the existing MACsec software implementation. This avoids duplicating
definitions and structures storing the same information, and allows the
same userspace tools to configure both software and hardware-offloaded
MACsec flows (with `ip macsec`).

When adding a new MACsec virtual interface, the existing logic is kept:
offloading is disabled by default. A user-driven configuration choice is
needed to switch to offloading mode (a patch in iproute2 is needed for
this). Only a single MACsec interface can be offloaded for now, and some
limitations apply: no flow can be moved from one implementation to the
other, so the decision needs to be made before configuring the
interface.

MACsec offloading ops are called in two steps: a preparation one and a
commit one. The first step is allowed to fail and should be used to
check whether a provided configuration is compatible with a given
MACsec-capable hardware. The second step is not allowed to fail and
should only be used to apply a configuration validated in the first
step.

One current limitation is that counters and statistics are not reported
back from the hardware to the software MACsec implementation. This isn't
an issue when using offloaded MACsec transformations, but reporting
should be added in the future so that the MACsec state can be shown to
the user (which would also improve debugging).

Microsemi PHY MACsec support
----------------------------

In order to add support for the MACsec offloading feature in the
Microsemi MSCC PHY driver, the __phy_read_page and __phy_write_page
helpers had to be exported. This is because the initialization of the
PHY is done while holding the MDIO bus lock, and we need to change the
page to configure the MACsec block.

The support itself is then added in three patches. The first adds
support for configuring the MACsec block within the PHY, so that it is
up, running and available for later configuration, but does not yet
modify the traffic passing through the PHY. The second implements the
phy_device MACsec ops in the Microsemi MSCC PHY driver and introduces
helpers to configure MACsec transformations and flows matching specific
packets. The last one adds support for PN rollover.

Thanks!
Antoine

Since v5:
  - Fixed a compilation issue due to an inclusion from a UAPI header.
  - Added an EXPORT_SYMBOL_GPL for the PN rollover helper, to fix module
    compilation issues.
  - Added a dependency for the MSCC driver on MACSEC || MACSEC=n.
  - Removed the patches adding the MAC offloading support, as they are
    not to be applied for now.

Since v4:
  - Reworked the MACsec read and write functions in the MSCC PHY driver
    to remove the conditional locking.

Since v3:
  - Fixed a check when enabling offloading that was too restrictive.
  - Fixed the propagation of the changelink event to the underlying
    device drivers.

Since v2:
  - Allow selecting the offloading mode from userspace, defaulting to
    the software implementation when adding a new MACsec interface. The
    offloading mode is now also reported through netlink.
  - Added support for letting MKA packets in and out when using MACsec
    (there are rules to let them bypass the MACsec h/w engine within the
    PHY).
  - Added support for PN rollover (following what's currently done in
    the software implementation: the flow is disabled).
  - Split patches to remove MAC offloading support for now, as there is
    no current provider for this (patches are still included).
  - Improved a few parts of the MACsec support within the MSCC PHY
    driver (e.g. default rules now block non-MACsec traffic, depending
    on the configuration).
  - Many cosmetic fixes & small improvements.

Since v1:
  - Reworked the MACsec offloading API, moving from a single helper
    called for all MACsec configuration operations, to a per-operation
    function that is provided by the underlying hardware drivers.
  - Those function names now contain a verb describing the
    configuration action they offload.
  - Improved the error handling in the MACsec genl helpers to revert
    the configuration to its previous state when the offloading call
    failed.
  - Reworked the file inclusions.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
parents 169af346 781449a4
@@ -11,16 +11,17 @@
#include <linux/module.h>
#include <crypto/aead.h>
#include <linux/etherdevice.h>
#include <linux/netdevice.h>
#include <linux/rtnetlink.h>
#include <linux/refcount.h>
#include <net/genetlink.h>
#include <net/sock.h>
#include <net/gro_cells.h>
#include <net/macsec.h>
#include <linux/phy.h>
#include <uapi/linux/if_macsec.h>
typedef u64 __bitwise sci_t;
#define MACSEC_SCI_LEN 8
/* SecTAG length = macsec_eth_header without the optional SCI */
@@ -58,8 +59,6 @@ struct macsec_eth_header {
#define GCM_AES_IV_LEN 12
#define DEFAULT_ICV_LEN 16
#define MACSEC_NUM_AN 4 /* 2 bits for the association number */
#define for_each_rxsc(secy, sc) \
for (sc = rcu_dereference_bh(secy->rx_sc); \
sc; \
@@ -77,49 +76,6 @@ struct gcm_iv {
__be32 pn;
};
/**
* struct macsec_key - SA key
* @id: user-provided key identifier
* @tfm: crypto struct, key storage
*/
struct macsec_key {
u8 id[MACSEC_KEYID_LEN];
struct crypto_aead *tfm;
};
struct macsec_rx_sc_stats {
__u64 InOctetsValidated;
__u64 InOctetsDecrypted;
__u64 InPktsUnchecked;
__u64 InPktsDelayed;
__u64 InPktsOK;
__u64 InPktsInvalid;
__u64 InPktsLate;
__u64 InPktsNotValid;
__u64 InPktsNotUsingSA;
__u64 InPktsUnusedSA;
};
struct macsec_rx_sa_stats {
__u32 InPktsOK;
__u32 InPktsInvalid;
__u32 InPktsNotValid;
__u32 InPktsNotUsingSA;
__u32 InPktsUnusedSA;
};
struct macsec_tx_sa_stats {
__u32 OutPktsProtected;
__u32 OutPktsEncrypted;
};
struct macsec_tx_sc_stats {
__u64 OutPktsProtected;
__u64 OutPktsEncrypted;
__u64 OutOctetsProtected;
__u64 OutOctetsEncrypted;
};
struct macsec_dev_stats {
__u64 OutPktsUntagged;
__u64 InPktsUntagged;
@@ -131,124 +87,8 @@ struct macsec_dev_stats {
__u64 InPktsOverrun;
};
/**
* struct macsec_rx_sa - receive secure association
* @active:
* @next_pn: packet number expected for the next packet
* @lock: protects next_pn manipulations
* @key: key structure
* @stats: per-SA stats
*/
struct macsec_rx_sa {
struct macsec_key key;
spinlock_t lock;
u32 next_pn;
refcount_t refcnt;
bool active;
struct macsec_rx_sa_stats __percpu *stats;
struct macsec_rx_sc *sc;
struct rcu_head rcu;
};
struct pcpu_rx_sc_stats {
struct macsec_rx_sc_stats stats;
struct u64_stats_sync syncp;
};
/**
* struct macsec_rx_sc - receive secure channel
* @sci: secure channel identifier for this SC
* @active: channel is active
* @sa: array of secure associations
* @stats: per-SC stats
*/
struct macsec_rx_sc {
struct macsec_rx_sc __rcu *next;
sci_t sci;
bool active;
struct macsec_rx_sa __rcu *sa[MACSEC_NUM_AN];
struct pcpu_rx_sc_stats __percpu *stats;
refcount_t refcnt;
struct rcu_head rcu_head;
};
/**
* struct macsec_tx_sa - transmit secure association
* @active:
* @next_pn: packet number to use for the next packet
* @lock: protects next_pn manipulations
* @key: key structure
* @stats: per-SA stats
*/
struct macsec_tx_sa {
struct macsec_key key;
spinlock_t lock;
u32 next_pn;
refcount_t refcnt;
bool active;
struct macsec_tx_sa_stats __percpu *stats;
struct rcu_head rcu;
};
struct pcpu_tx_sc_stats {
struct macsec_tx_sc_stats stats;
struct u64_stats_sync syncp;
};
/**
* struct macsec_tx_sc - transmit secure channel
* @active:
* @encoding_sa: association number of the SA currently in use
* @encrypt: encrypt packets on transmit, or authenticate only
* @send_sci: always include the SCI in the SecTAG
* @end_station:
* @scb: single copy broadcast flag
* @sa: array of secure associations
* @stats: stats for this TXSC
*/
struct macsec_tx_sc {
bool active;
u8 encoding_sa;
bool encrypt;
bool send_sci;
bool end_station;
bool scb;
struct macsec_tx_sa __rcu *sa[MACSEC_NUM_AN];
struct pcpu_tx_sc_stats __percpu *stats;
};
#define MACSEC_VALIDATE_DEFAULT MACSEC_VALIDATE_STRICT
/**
* struct macsec_secy - MACsec Security Entity
* @netdev: netdevice for this SecY
* @n_rx_sc: number of receive secure channels configured on this SecY
* @sci: secure channel identifier used for tx
* @key_len: length of keys used by the cipher suite
* @icv_len: length of ICV used by the cipher suite
* @validate_frames: validation mode
* @operational: MAC_Operational flag
* @protect_frames: enable protection for this SecY
* @replay_protect: enable packet number checks on receive
* @replay_window: size of the replay window
* @tx_sc: transmit secure channel
* @rx_sc: linked list of receive secure channels
*/
struct macsec_secy {
struct net_device *netdev;
unsigned int n_rx_sc;
sci_t sci;
u16 key_len;
u16 icv_len;
enum macsec_validation_type validate_frames;
bool operational;
bool protect_frames;
bool replay_protect;
u32 replay_window;
struct macsec_tx_sc tx_sc;
struct macsec_rx_sc __rcu *rx_sc;
};
struct pcpu_secy_stats {
struct macsec_dev_stats stats;
struct u64_stats_sync syncp;
@@ -260,6 +100,7 @@ struct pcpu_secy_stats {
* @real_dev: pointer to underlying netdevice
* @stats: MACsec device stats
* @secys: linked list of SecY's on the underlying device
* @offload: status of offloading on the MACsec device
*/
struct macsec_dev {
struct macsec_secy secy;
@@ -267,6 +108,7 @@ struct macsec_dev {
struct pcpu_secy_stats __percpu *stats;
struct list_head secys;
struct gro_cells gro_cells;
enum macsec_offload offload;
};
/**
@@ -480,6 +322,56 @@ static void macsec_set_shortlen(struct macsec_eth_header *h, size_t data_len)
h->short_length = data_len;
}
/* Checks if a MACsec interface is being offloaded to a hardware engine */
static bool macsec_is_offloaded(struct macsec_dev *macsec)
{
if (macsec->offload == MACSEC_OFFLOAD_PHY)
return true;
return false;
}
/* Checks if underlying layers implement MACsec offloading functions. */
static bool macsec_check_offload(enum macsec_offload offload,
struct macsec_dev *macsec)
{
if (!macsec || !macsec->real_dev)
return false;
if (offload == MACSEC_OFFLOAD_PHY)
return macsec->real_dev->phydev &&
macsec->real_dev->phydev->macsec_ops;
return false;
}
static const struct macsec_ops *__macsec_get_ops(enum macsec_offload offload,
struct macsec_dev *macsec,
struct macsec_context *ctx)
{
if (ctx) {
memset(ctx, 0, sizeof(*ctx));
ctx->offload = offload;
if (offload == MACSEC_OFFLOAD_PHY)
ctx->phydev = macsec->real_dev->phydev;
}
return macsec->real_dev->phydev->macsec_ops;
}
/* Returns a pointer to the MACsec ops struct if any and updates the MACsec
* context device reference if provided.
*/
static const struct macsec_ops *macsec_get_ops(struct macsec_dev *macsec,
struct macsec_context *ctx)
{
if (!macsec_check_offload(macsec->offload, macsec))
return NULL;
return __macsec_get_ops(macsec->offload, macsec, ctx);
}
/* validate MACsec packet according to IEEE 802.1AE-2006 9.12 */
static bool macsec_validate_skb(struct sk_buff *skb, u16 icv_len)
{
@@ -532,6 +424,23 @@ static struct macsec_eth_header *macsec_ethhdr(struct sk_buff *skb)
return (struct macsec_eth_header *)skb_mac_header(skb);
}
static void __macsec_pn_wrapped(struct macsec_secy *secy,
struct macsec_tx_sa *tx_sa)
{
pr_debug("PN wrapped, transitioning to !oper\n");
tx_sa->active = false;
if (secy->protect_frames)
secy->operational = false;
}
void macsec_pn_wrapped(struct macsec_secy *secy, struct macsec_tx_sa *tx_sa)
{
spin_lock_bh(&tx_sa->lock);
__macsec_pn_wrapped(secy, tx_sa);
spin_unlock_bh(&tx_sa->lock);
}
EXPORT_SYMBOL_GPL(macsec_pn_wrapped);
static u32 tx_sa_update_pn(struct macsec_tx_sa *tx_sa, struct macsec_secy *secy)
{
u32 pn;
@@ -540,12 +449,8 @@ static u32 tx_sa_update_pn(struct macsec_tx_sa *tx_sa, struct macsec_secy *secy)
pn = tx_sa->next_pn;
tx_sa->next_pn++;
if (tx_sa->next_pn == 0) {
pr_debug("PN wrapped, transitioning to !oper\n");
tx_sa->active = false;
if (secy->protect_frames)
secy->operational = false;
}
if (tx_sa->next_pn == 0)
__macsec_pn_wrapped(secy, tx_sa);
spin_unlock_bh(&tx_sa->lock);
return pn;
@@ -1029,8 +934,10 @@ static struct macsec_rx_sc *find_rx_sc_rtnl(struct macsec_secy *secy, sci_t sci)
return NULL;
}
static void handle_not_macsec(struct sk_buff *skb)
static enum rx_handler_result handle_not_macsec(struct sk_buff *skb)
{
/* Deliver to the uncontrolled port by default */
enum rx_handler_result ret = RX_HANDLER_PASS;
struct macsec_rxh_data *rxd;
struct macsec_dev *macsec;
@@ -1045,7 +952,8 @@ static void handle_not_macsec(struct sk_buff *skb)
struct sk_buff *nskb;
struct pcpu_secy_stats *secy_stats = this_cpu_ptr(macsec->stats);
if (macsec->secy.validate_frames == MACSEC_VALIDATE_STRICT) {
if (!macsec_is_offloaded(macsec) &&
macsec->secy.validate_frames == MACSEC_VALIDATE_STRICT) {
u64_stats_update_begin(&secy_stats->syncp);
secy_stats->stats.InPktsNoTag++;
u64_stats_update_end(&secy_stats->syncp);
@@ -1064,9 +972,17 @@ static void handle_not_macsec(struct sk_buff *skb)
secy_stats->stats.InPktsUntagged++;
u64_stats_update_end(&secy_stats->syncp);
}
if (netif_running(macsec->secy.netdev) &&
macsec_is_offloaded(macsec)) {
ret = RX_HANDLER_EXACT;
goto out;
}
}
out:
rcu_read_unlock();
return ret;
}
static rx_handler_result_t macsec_handle_frame(struct sk_buff **pskb)
@@ -1091,12 +1007,8 @@ static rx_handler_result_t macsec_handle_frame(struct sk_buff **pskb)
goto drop_direct;
hdr = macsec_ethhdr(skb);
if (hdr->eth.h_proto != htons(ETH_P_MACSEC)) {
handle_not_macsec(skb);
/* and deliver to the uncontrolled port */
return RX_HANDLER_PASS;
}
if (hdr->eth.h_proto != htons(ETH_P_MACSEC))
return handle_not_macsec(skb);
skb = skb_unshare(skb, GFP_ATOMIC);
*pskb = skb;
@@ -1585,6 +1497,7 @@ static const struct nla_policy macsec_genl_policy[NUM_MACSEC_ATTR] = {
[MACSEC_ATTR_IFINDEX] = { .type = NLA_U32 },
[MACSEC_ATTR_RXSC_CONFIG] = { .type = NLA_NESTED },
[MACSEC_ATTR_SA_CONFIG] = { .type = NLA_NESTED },
[MACSEC_ATTR_OFFLOAD] = { .type = NLA_NESTED },
};
static const struct nla_policy macsec_genl_rxsc_policy[NUM_MACSEC_RXSC_ATTR] = {
@@ -1602,6 +1515,44 @@ static const struct nla_policy macsec_genl_sa_policy[NUM_MACSEC_SA_ATTR] = {
.len = MACSEC_MAX_KEY_LEN, },
};
static const struct nla_policy macsec_genl_offload_policy[NUM_MACSEC_OFFLOAD_ATTR] = {
[MACSEC_OFFLOAD_ATTR_TYPE] = { .type = NLA_U8 },
};
/* Offloads an operation to a device driver */
static int macsec_offload(int (* const func)(struct macsec_context *),
struct macsec_context *ctx)
{
int ret;
if (unlikely(!func))
return 0;
if (ctx->offload == MACSEC_OFFLOAD_PHY)
mutex_lock(&ctx->phydev->lock);
/* Phase I: prepare. The driver should fail here if there are going to be
* issues in the commit phase.
*/
ctx->prepare = true;
ret = (*func)(ctx);
if (ret)
goto phy_unlock;
/* Phase II: commit. This step cannot fail. */
ctx->prepare = false;
ret = (*func)(ctx);
/* This should never happen: commit is not allowed to fail */
if (unlikely(ret))
WARN(1, "MACsec offloading commit failed (%d)\n", ret);
phy_unlock:
if (ctx->offload == MACSEC_OFFLOAD_PHY)
mutex_unlock(&ctx->phydev->lock);
return ret;
}
static int parse_sa_config(struct nlattr **attrs, struct nlattr **tb_sa)
{
if (!attrs[MACSEC_ATTR_SA_CONFIG])
@@ -1717,13 +1668,40 @@ static int macsec_add_rxsa(struct sk_buff *skb, struct genl_info *info)
if (tb_sa[MACSEC_SA_ATTR_ACTIVE])
rx_sa->active = !!nla_get_u8(tb_sa[MACSEC_SA_ATTR_ACTIVE]);
nla_memcpy(rx_sa->key.id, tb_sa[MACSEC_SA_ATTR_KEYID], MACSEC_KEYID_LEN);
rx_sa->sc = rx_sc;
/* If h/w offloading is available, propagate to the device */
if (macsec_is_offloaded(netdev_priv(dev))) {
const struct macsec_ops *ops;
struct macsec_context ctx;
ops = macsec_get_ops(netdev_priv(dev), &ctx);
if (!ops) {
err = -EOPNOTSUPP;
goto cleanup;
}
ctx.sa.assoc_num = assoc_num;
ctx.sa.rx_sa = rx_sa;
memcpy(ctx.sa.key, nla_data(tb_sa[MACSEC_SA_ATTR_KEY]),
secy->key_len);
err = macsec_offload(ops->mdo_add_rxsa, &ctx);
if (err)
goto cleanup;
}
nla_memcpy(rx_sa->key.id, tb_sa[MACSEC_SA_ATTR_KEYID], MACSEC_KEYID_LEN);
rcu_assign_pointer(rx_sc->sa[assoc_num], rx_sa);
rtnl_unlock();
return 0;
cleanup:
kfree(rx_sa);
rtnl_unlock();
return err;
}
static bool validate_add_rxsc(struct nlattr **attrs)
@@ -1746,6 +1724,8 @@ static int macsec_add_rxsc(struct sk_buff *skb, struct genl_info *info)
struct nlattr **attrs = info->attrs;
struct macsec_rx_sc *rx_sc;
struct nlattr *tb_rxsc[MACSEC_RXSC_ATTR_MAX + 1];
bool was_active;
int ret;
if (!attrs[MACSEC_ATTR_IFINDEX])
return -EINVAL;
@@ -1771,12 +1751,35 @@ static int macsec_add_rxsc(struct sk_buff *skb, struct genl_info *info)
return PTR_ERR(rx_sc);
}
was_active = rx_sc->active;
if (tb_rxsc[MACSEC_RXSC_ATTR_ACTIVE])
rx_sc->active = !!nla_get_u8(tb_rxsc[MACSEC_RXSC_ATTR_ACTIVE]);
if (macsec_is_offloaded(netdev_priv(dev))) {
const struct macsec_ops *ops;
struct macsec_context ctx;
ops = macsec_get_ops(netdev_priv(dev), &ctx);
if (!ops) {
ret = -EOPNOTSUPP;
goto cleanup;
}
ctx.rx_sc = rx_sc;
ret = macsec_offload(ops->mdo_add_rxsc, &ctx);
if (ret)
goto cleanup;
}
rtnl_unlock();
return 0;
cleanup:
rx_sc->active = was_active;
rtnl_unlock();
return ret;
}
static bool validate_add_txsa(struct nlattr **attrs)
@@ -1813,6 +1816,7 @@ static int macsec_add_txsa(struct sk_buff *skb, struct genl_info *info)
struct macsec_tx_sa *tx_sa;
unsigned char assoc_num;
struct nlattr *tb_sa[MACSEC_SA_ATTR_MAX + 1];
bool was_operational;
int err;
if (!attrs[MACSEC_ATTR_IFINDEX])
@@ -1863,8 +1867,6 @@ static int macsec_add_txsa(struct sk_buff *skb, struct genl_info *info)
return err;
}
nla_memcpy(tx_sa->key.id, tb_sa[MACSEC_SA_ATTR_KEYID], MACSEC_KEYID_LEN);
spin_lock_bh(&tx_sa->lock);
tx_sa->next_pn = nla_get_u32(tb_sa[MACSEC_SA_ATTR_PN]);
spin_unlock_bh(&tx_sa->lock);
@@ -1872,14 +1874,43 @@ static int macsec_add_txsa(struct sk_buff *skb, struct genl_info *info)
if (tb_sa[MACSEC_SA_ATTR_ACTIVE])
tx_sa->active = !!nla_get_u8(tb_sa[MACSEC_SA_ATTR_ACTIVE]);
was_operational = secy->operational;
if (assoc_num == tx_sc->encoding_sa && tx_sa->active)
secy->operational = true;
/* If h/w offloading is available, propagate to the device */
if (macsec_is_offloaded(netdev_priv(dev))) {
const struct macsec_ops *ops;
struct macsec_context ctx;
ops = macsec_get_ops(netdev_priv(dev), &ctx);
if (!ops) {
err = -EOPNOTSUPP;
goto cleanup;
}
ctx.sa.assoc_num = assoc_num;
ctx.sa.tx_sa = tx_sa;
memcpy(ctx.sa.key, nla_data(tb_sa[MACSEC_SA_ATTR_KEY]),
secy->key_len);
err = macsec_offload(ops->mdo_add_txsa, &ctx);
if (err)
goto cleanup;
}
nla_memcpy(tx_sa->key.id, tb_sa[MACSEC_SA_ATTR_KEYID], MACSEC_KEYID_LEN);
rcu_assign_pointer(tx_sc->sa[assoc_num], tx_sa);
rtnl_unlock();
return 0;
cleanup:
secy->operational = was_operational;
kfree(tx_sa);
rtnl_unlock();
return err;
}
static int macsec_del_rxsa(struct sk_buff *skb, struct genl_info *info)
@@ -1892,6 +1923,7 @@ static int macsec_del_rxsa(struct sk_buff *skb, struct genl_info *info)
u8 assoc_num;
struct nlattr *tb_rxsc[MACSEC_RXSC_ATTR_MAX + 1];
struct nlattr *tb_sa[MACSEC_SA_ATTR_MAX + 1];
int ret;
if (!attrs[MACSEC_ATTR_IFINDEX])
return -EINVAL;
@@ -1915,12 +1947,35 @@ static int macsec_del_rxsa(struct sk_buff *skb, struct genl_info *info)
return -EBUSY;
}
/* If h/w offloading is available, propagate to the device */
if (macsec_is_offloaded(netdev_priv(dev))) {
const struct macsec_ops *ops;
struct macsec_context ctx;
ops = macsec_get_ops(netdev_priv(dev), &ctx);
if (!ops) {
ret = -EOPNOTSUPP;
goto cleanup;
}
ctx.sa.assoc_num = assoc_num;
ctx.sa.rx_sa = rx_sa;
ret = macsec_offload(ops->mdo_del_rxsa, &ctx);
if (ret)
goto cleanup;
}
RCU_INIT_POINTER(rx_sc->sa[assoc_num], NULL);
clear_rx_sa(rx_sa);
rtnl_unlock();
return 0;
cleanup:
rtnl_unlock();
return ret;
}
static int macsec_del_rxsc(struct sk_buff *skb, struct genl_info *info)
@@ -1931,6 +1986,7 @@ static int macsec_del_rxsc(struct sk_buff *skb, struct genl_info *info)
struct macsec_rx_sc *rx_sc;
sci_t sci;
struct nlattr *tb_rxsc[MACSEC_RXSC_ATTR_MAX + 1];
int ret;
if (!attrs[MACSEC_ATTR_IFINDEX])
return -EINVAL;
@@ -1957,10 +2013,31 @@ static int macsec_del_rxsc(struct sk_buff *skb, struct genl_info *info)
return -ENODEV;
}
/* If h/w offloading is available, propagate to the device */
if (macsec_is_offloaded(netdev_priv(dev))) {
const struct macsec_ops *ops;
struct macsec_context ctx;
ops = macsec_get_ops(netdev_priv(dev), &ctx);
if (!ops) {
ret = -EOPNOTSUPP;
goto cleanup;
}
ctx.rx_sc = rx_sc;
ret = macsec_offload(ops->mdo_del_rxsc, &ctx);
if (ret)
goto cleanup;
}
free_rx_sc(rx_sc);
rtnl_unlock();
return 0;
cleanup:
rtnl_unlock();
return ret;
}
static int macsec_del_txsa(struct sk_buff *skb, struct genl_info *info)
@@ -1972,6 +2049,7 @@ static int macsec_del_txsa(struct sk_buff *skb, struct genl_info *info)
struct macsec_tx_sa *tx_sa;
u8 assoc_num;
struct nlattr *tb_sa[MACSEC_SA_ATTR_MAX + 1];
int ret;
if (!attrs[MACSEC_ATTR_IFINDEX])
return -EINVAL;
@@ -1992,12 +2070,35 @@ static int macsec_del_txsa(struct sk_buff *skb, struct genl_info *info)
return -EBUSY;
}
/* If h/w offloading is available, propagate to the device */
if (macsec_is_offloaded(netdev_priv(dev))) {
const struct macsec_ops *ops;
struct macsec_context ctx;
ops = macsec_get_ops(netdev_priv(dev), &ctx);
if (!ops) {
ret = -EOPNOTSUPP;
goto cleanup;
}
ctx.sa.assoc_num = assoc_num;
ctx.sa.tx_sa = tx_sa;
ret = macsec_offload(ops->mdo_del_txsa, &ctx);
if (ret)
goto cleanup;
}
RCU_INIT_POINTER(tx_sc->sa[assoc_num], NULL);
clear_tx_sa(tx_sa);
rtnl_unlock();
return 0;
cleanup:
rtnl_unlock();
return ret;
}
static bool validate_upd_sa(struct nlattr **attrs)
@@ -2030,6 +2131,9 @@ static int macsec_upd_txsa(struct sk_buff *skb, struct genl_info *info)
struct macsec_tx_sa *tx_sa;
u8 assoc_num;
struct nlattr *tb_sa[MACSEC_SA_ATTR_MAX + 1];
bool was_operational, was_active;
u32 prev_pn = 0;
int ret = 0;
if (!attrs[MACSEC_ATTR_IFINDEX])
return -EINVAL;
@@ -2050,19 +2154,52 @@ static int macsec_upd_txsa(struct sk_buff *skb, struct genl_info *info)
if (tb_sa[MACSEC_SA_ATTR_PN]) {
spin_lock_bh(&tx_sa->lock);
prev_pn = tx_sa->next_pn;
tx_sa->next_pn = nla_get_u32(tb_sa[MACSEC_SA_ATTR_PN]);
spin_unlock_bh(&tx_sa->lock);
}
was_active = tx_sa->active;
if (tb_sa[MACSEC_SA_ATTR_ACTIVE])
tx_sa->active = nla_get_u8(tb_sa[MACSEC_SA_ATTR_ACTIVE]);
was_operational = secy->operational;
if (assoc_num == tx_sc->encoding_sa)
secy->operational = tx_sa->active;
/* If h/w offloading is available, propagate to the device */
if (macsec_is_offloaded(netdev_priv(dev))) {
const struct macsec_ops *ops;
struct macsec_context ctx;
ops = macsec_get_ops(netdev_priv(dev), &ctx);
if (!ops) {
ret = -EOPNOTSUPP;
goto cleanup;
}
ctx.sa.assoc_num = assoc_num;
ctx.sa.tx_sa = tx_sa;
ret = macsec_offload(ops->mdo_upd_txsa, &ctx);
if (ret)
goto cleanup;
}
rtnl_unlock();
return 0;
cleanup:
if (tb_sa[MACSEC_SA_ATTR_PN]) {
spin_lock_bh(&tx_sa->lock);
tx_sa->next_pn = prev_pn;
spin_unlock_bh(&tx_sa->lock);
}
tx_sa->active = was_active;
secy->operational = was_operational;
rtnl_unlock();
return ret;
}
static int macsec_upd_rxsa(struct sk_buff *skb, struct genl_info *info)
@@ -2075,6 +2212,9 @@ static int macsec_upd_rxsa(struct sk_buff *skb, struct genl_info *info)
u8 assoc_num;
struct nlattr *tb_rxsc[MACSEC_RXSC_ATTR_MAX + 1];
struct nlattr *tb_sa[MACSEC_SA_ATTR_MAX + 1];
bool was_active;
u32 prev_pn = 0;
int ret = 0;
if (!attrs[MACSEC_ATTR_IFINDEX])
return -EINVAL;
@@ -2098,15 +2238,46 @@ static int macsec_upd_rxsa(struct sk_buff *skb, struct genl_info *info)
if (tb_sa[MACSEC_SA_ATTR_PN]) {
spin_lock_bh(&rx_sa->lock);
prev_pn = rx_sa->next_pn;
rx_sa->next_pn = nla_get_u32(tb_sa[MACSEC_SA_ATTR_PN]);
spin_unlock_bh(&rx_sa->lock);
}
was_active = rx_sa->active;
if (tb_sa[MACSEC_SA_ATTR_ACTIVE])
rx_sa->active = nla_get_u8(tb_sa[MACSEC_SA_ATTR_ACTIVE]);
/* If h/w offloading is available, propagate to the device */
if (macsec_is_offloaded(netdev_priv(dev))) {
const struct macsec_ops *ops;
struct macsec_context ctx;
ops = macsec_get_ops(netdev_priv(dev), &ctx);
if (!ops) {
ret = -EOPNOTSUPP;
goto cleanup;
}
ctx.sa.assoc_num = assoc_num;
ctx.sa.rx_sa = rx_sa;
ret = macsec_offload(ops->mdo_upd_rxsa, &ctx);
if (ret)
goto cleanup;
}
rtnl_unlock();
return 0;
cleanup:
if (tb_sa[MACSEC_SA_ATTR_PN]) {
spin_lock_bh(&rx_sa->lock);
rx_sa->next_pn = prev_pn;
spin_unlock_bh(&rx_sa->lock);
}
rx_sa->active = was_active;
rtnl_unlock();
return ret;
}
static int macsec_upd_rxsc(struct sk_buff *skb, struct genl_info *info)
@@ -2116,6 +2287,9 @@ static int macsec_upd_rxsc(struct sk_buff *skb, struct genl_info *info)
struct macsec_secy *secy;
struct macsec_rx_sc *rx_sc;
struct nlattr *tb_rxsc[MACSEC_RXSC_ATTR_MAX + 1];
unsigned int prev_n_rx_sc;
bool was_active;
int ret;
if (!attrs[MACSEC_ATTR_IFINDEX])
return -EINVAL;
@@ -2133,6 +2307,8 @@ static int macsec_upd_rxsc(struct sk_buff *skb, struct genl_info *info)
return PTR_ERR(rx_sc);
}
was_active = rx_sc->active;
prev_n_rx_sc = secy->n_rx_sc;
if (tb_rxsc[MACSEC_RXSC_ATTR_ACTIVE]) {
bool new = !!nla_get_u8(tb_rxsc[MACSEC_RXSC_ATTR_ACTIVE]);
@@ -2142,9 +2318,153 @@ static int macsec_upd_rxsc(struct sk_buff *skb, struct genl_info *info)
rx_sc->active = new;
}
/* If h/w offloading is available, propagate to the device */
if (macsec_is_offloaded(netdev_priv(dev))) {
const struct macsec_ops *ops;
struct macsec_context ctx;
ops = macsec_get_ops(netdev_priv(dev), &ctx);
if (!ops) {
ret = -EOPNOTSUPP;
goto cleanup;
}
ctx.rx_sc = rx_sc;
ret = macsec_offload(ops->mdo_upd_rxsc, &ctx);
if (ret)
goto cleanup;
}
rtnl_unlock();
return 0;
cleanup:
secy->n_rx_sc = prev_n_rx_sc;
rx_sc->active = was_active;
rtnl_unlock();
return ret;
}
static bool macsec_is_configured(struct macsec_dev *macsec)
{
struct macsec_secy *secy = &macsec->secy;
struct macsec_tx_sc *tx_sc = &secy->tx_sc;
int i;
if (secy->n_rx_sc > 0)
return true;
for (i = 0; i < MACSEC_NUM_AN; i++)
if (tx_sc->sa[i])
return true;
return false;
}
static int macsec_upd_offload(struct sk_buff *skb, struct genl_info *info)
{
struct nlattr *tb_offload[MACSEC_OFFLOAD_ATTR_MAX + 1];
enum macsec_offload offload, prev_offload;
int (*func)(struct macsec_context *ctx);
struct nlattr **attrs = info->attrs;
struct net_device *dev, *loop_dev;
const struct macsec_ops *ops;
struct macsec_context ctx;
struct macsec_dev *macsec;
struct net *loop_net;
int ret;
if (!attrs[MACSEC_ATTR_IFINDEX])
return -EINVAL;
if (!attrs[MACSEC_ATTR_OFFLOAD])
return -EINVAL;
if (nla_parse_nested_deprecated(tb_offload, MACSEC_OFFLOAD_ATTR_MAX,
attrs[MACSEC_ATTR_OFFLOAD],
macsec_genl_offload_policy, NULL))
return -EINVAL;
dev = get_dev_from_nl(genl_info_net(info), attrs);
if (IS_ERR(dev))
return PTR_ERR(dev);
macsec = macsec_priv(dev);
offload = nla_get_u8(tb_offload[MACSEC_OFFLOAD_ATTR_TYPE]);
if (macsec->offload == offload)
return 0;
/* Check if the offloading mode is supported by the underlying layers */
if (offload != MACSEC_OFFLOAD_OFF &&
!macsec_check_offload(offload, macsec))
return -EOPNOTSUPP;
if (offload == MACSEC_OFFLOAD_OFF)
goto skip_limitation;
/* Check the physical interface isn't offloading another interface
* first.
*/
for_each_net(loop_net) {
for_each_netdev(loop_net, loop_dev) {
struct macsec_dev *priv;
if (!netif_is_macsec(loop_dev))
continue;
priv = macsec_priv(loop_dev);
if (priv->real_dev == macsec->real_dev &&
priv->offload != MACSEC_OFFLOAD_OFF)
return -EBUSY;
}
}
skip_limitation:
/* Check if the net device is busy. */
if (netif_running(dev))
return -EBUSY;
rtnl_lock();
prev_offload = macsec->offload;
macsec->offload = offload;
/* Check if the device already has rules configured: we do not support
* rules migration.
*/
if (macsec_is_configured(macsec)) {
ret = -EBUSY;
goto rollback;
}
ops = __macsec_get_ops(offload == MACSEC_OFFLOAD_OFF ? prev_offload : offload,
macsec, &ctx);
if (!ops) {
ret = -EOPNOTSUPP;
goto rollback;
}
if (prev_offload == MACSEC_OFFLOAD_OFF)
func = ops->mdo_add_secy;
else
func = ops->mdo_del_secy;
ctx.secy = &macsec->secy;
ret = macsec_offload(func, &ctx);
if (ret)
goto rollback;
rtnl_unlock();
return 0;
rollback:
macsec->offload = prev_offload;
rtnl_unlock();
return ret;
}
static int copy_tx_sa_stats(struct sk_buff *skb,
@@ -2408,12 +2728,13 @@ static noinline_for_stack int
dump_secy(struct macsec_secy *secy, struct net_device *dev,
struct sk_buff *skb, struct netlink_callback *cb)
{
struct macsec_rx_sc *rx_sc;
struct macsec_dev *macsec = netdev_priv(dev);
struct macsec_tx_sc *tx_sc = &secy->tx_sc;
struct nlattr *txsa_list, *rxsc_list;
int i, j;
void *hdr;
struct macsec_rx_sc *rx_sc;
struct nlattr *attr;
void *hdr;
int i, j;
hdr = genlmsg_put(skb, NETLINK_CB(cb->skb).portid, cb->nlh->nlmsg_seq,
&macsec_fam, NLM_F_MULTI, MACSEC_CMD_GET_TXSC);
@@ -2425,6 +2746,13 @@ dump_secy(struct macsec_secy *secy, struct net_device *dev,
if (nla_put_u32(skb, MACSEC_ATTR_IFINDEX, dev->ifindex))
goto nla_put_failure;
attr = nla_nest_start_noflag(skb, MACSEC_ATTR_OFFLOAD);
if (!attr)
goto nla_put_failure;
if (nla_put_u8(skb, MACSEC_OFFLOAD_ATTR_TYPE, macsec->offload))
goto nla_put_failure;
nla_nest_end(skb, attr);
if (nla_put_secy(secy, skb))
goto nla_put_failure;
@@ -2690,6 +3018,12 @@ static const struct genl_ops macsec_genl_ops[] = {
.doit = macsec_upd_rxsa,
.flags = GENL_ADMIN_PERM,
},
{
.cmd = MACSEC_CMD_UPD_OFFLOAD,
.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
.doit = macsec_upd_offload,
.flags = GENL_ADMIN_PERM,
},
};
static struct genl_family macsec_fam __ro_after_init = {
@@ -2712,6 +3046,11 @@ static netdev_tx_t macsec_start_xmit(struct sk_buff *skb,
struct pcpu_secy_stats *secy_stats;
int ret, len;
if (macsec_is_offloaded(netdev_priv(dev))) {
skb->dev = macsec->real_dev;
return dev_queue_xmit(skb);
}
/* 10.5 */
if (!secy->protect_frames) {
secy_stats = this_cpu_ptr(macsec->stats);
@@ -2825,6 +3164,22 @@ static int macsec_dev_open(struct net_device *dev)
goto clear_allmulti;
}
/* If h/w offloading is available, propagate to the device */
if (macsec_is_offloaded(macsec)) {
const struct macsec_ops *ops;
struct macsec_context ctx;
ops = macsec_get_ops(netdev_priv(dev), &ctx);
if (!ops) {
err = -EOPNOTSUPP;
goto clear_allmulti;
}
err = macsec_offload(ops->mdo_dev_open, &ctx);
if (err)
goto clear_allmulti;
}
if (netif_carrier_ok(real_dev))
netif_carrier_on(dev);
@@ -2845,6 +3200,16 @@ static int macsec_dev_stop(struct net_device *dev)
netif_carrier_off(dev);
/* If h/w offloading is available, propagate to the device */
if (macsec_is_offloaded(macsec)) {
const struct macsec_ops *ops;
struct macsec_context ctx;
ops = macsec_get_ops(macsec, &ctx);
if (ops)
macsec_offload(ops->mdo_dev_stop, &ctx);
}
dev_mc_unsync(real_dev, dev);
dev_uc_unsync(real_dev, dev);
@@ -3076,6 +3441,11 @@ static int macsec_changelink(struct net_device *dev, struct nlattr *tb[],
struct nlattr *data[],
struct netlink_ext_ack *extack)
{
struct macsec_dev *macsec = macsec_priv(dev);
struct macsec_tx_sc tx_sc;
struct macsec_secy secy;
int ret;
if (!data)
return 0;
@@ -3085,7 +3455,41 @@ static int macsec_changelink(struct net_device *dev, struct nlattr *tb[],
data[IFLA_MACSEC_PORT])
return -EINVAL;
/* Keep a copy of unmodified secy and tx_sc, in case the offload
* propagation fails, to revert macsec_changelink_common.
*/
memcpy(&secy, &macsec->secy, sizeof(secy));
memcpy(&tx_sc, &macsec->secy.tx_sc, sizeof(tx_sc));
ret = macsec_changelink_common(dev, data);
if (ret)
return ret;
/* If h/w offloading is available, propagate to the device */
if (macsec_is_offloaded(macsec)) {
const struct macsec_ops *ops;
struct macsec_context ctx;
ops = macsec_get_ops(netdev_priv(dev), &ctx);
if (!ops) {
ret = -EOPNOTSUPP;
goto cleanup;
}
ctx.secy = &macsec->secy;
ret = macsec_offload(ops->mdo_upd_secy, &ctx);
if (ret)
goto cleanup;
}
return 0;
cleanup:
memcpy(&macsec->secy.tx_sc, &tx_sc, sizeof(tx_sc));
memcpy(&macsec->secy, &secy, sizeof(secy));
return ret;
}
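The changelink path above snapshots `secy` and `tx_sc` before calling `macsec_changelink_common()`, so a failed offload propagation can restore the unmodified state. A minimal stand-alone sketch of that snapshot/rollback pattern (the struct and function names here are illustrative, not driver code):

```c
#include <assert.h>
#include <string.h>

/* Illustrative stand-in for the secy/tx_sc state saved in macsec_changelink() */
struct cfg {
	int protect;
	int encrypt;
};

/* Apply a change; on failure (simulated here by 'fail'), restore the snapshot. */
static int change_cfg(struct cfg *c, int new_protect, int fail)
{
	struct cfg saved;

	memcpy(&saved, c, sizeof(saved));	/* keep a copy to revert to */
	c->protect = new_protect;		/* apply the change */
	if (fail) {
		memcpy(c, &saved, sizeof(saved));	/* revert on error */
		return -1;
	}
	return 0;
}
```

The driver does the same with two `memcpy()` calls per direction; getting the snapshot type right matters, since `sizeof(saved)` drives both the save and the restore.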
static void macsec_del_dev(struct macsec_dev *macsec)
@@ -3128,6 +3532,18 @@ static void macsec_dellink(struct net_device *dev, struct list_head *head)
struct net_device *real_dev = macsec->real_dev;
struct macsec_rxh_data *rxd = macsec_data_rtnl(real_dev);
/* If h/w offloading is available, propagate to the device */
if (macsec_is_offloaded(macsec)) {
const struct macsec_ops *ops;
struct macsec_context ctx;
ops = macsec_get_ops(netdev_priv(dev), &ctx);
if (ops) {
ctx.secy = &macsec->secy;
macsec_offload(ops->mdo_del_secy, &ctx);
}
}
macsec_common_dellink(dev, head);
if (list_empty(&rxd->secys)) {
@@ -3239,6 +3655,9 @@ static int macsec_newlink(struct net *net, struct net_device *dev,
macsec->real_dev = real_dev;
/* MACsec offloading is off by default */
macsec->offload = MACSEC_OFFLOAD_OFF;
if (data && data[IFLA_MACSEC_ICV_LEN])
icv_len = nla_get_u8(data[IFLA_MACSEC_ICV_LEN]);
dev->mtu = real_dev->mtu - icv_len - macsec_extra_len(true);
@@ -437,6 +437,9 @@ config MICROCHIP_T1_PHY
config MICROSEMI_PHY
tristate "Microsemi PHYs"
depends on MACSEC || MACSEC=n
select CRYPTO_AES
select CRYPTO_ECB
---help---
Currently supports VSC8514, VSC8530, VSC8531, VSC8540 and VSC8541 PHYs
@@ -18,6 +18,17 @@
#include <linux/netdevice.h>
#include <dt-bindings/net/mscc-phy-vsc8531.h>
#include <linux/scatterlist.h>
#include <crypto/skcipher.h>
#if IS_ENABLED(CONFIG_MACSEC)
#include <net/macsec.h>
#endif
#include "mscc_macsec.h"
#include "mscc_mac.h"
#include "mscc_fc_buffer.h"
enum rgmii_rx_clock_delay {
RGMII_RX_CLK_DELAY_0_2_NS = 0,
RGMII_RX_CLK_DELAY_0_8_NS = 1,
@@ -69,7 +80,7 @@ enum rgmii_rx_clock_delay {
#define MSCC_PHY_EXT_PHY_CNTL_2 24
#define MII_VSC85XX_INT_MASK 25
#define MII_VSC85XX_INT_MASK_MASK 0xa020
#define MII_VSC85XX_INT_MASK_WOL 0x0040
#define MII_VSC85XX_INT_STATUS 26
@@ -121,6 +132,26 @@ enum rgmii_rx_clock_delay {
#define PHY_S6G_PLL_FSM_CTRL_DATA_POS 8
#define PHY_S6G_PLL_FSM_ENA_POS 7
#define MSCC_EXT_PAGE_MACSEC_17 17
#define MSCC_EXT_PAGE_MACSEC_18 18
#define MSCC_EXT_PAGE_MACSEC_19 19
#define MSCC_PHY_MACSEC_19_REG_ADDR(x) (x)
#define MSCC_PHY_MACSEC_19_TARGET(x) ((x) << 12)
#define MSCC_PHY_MACSEC_19_READ BIT(14)
#define MSCC_PHY_MACSEC_19_CMD BIT(15)
#define MSCC_EXT_PAGE_MACSEC_20 20
#define MSCC_PHY_MACSEC_20_TARGET(x) (x)
enum macsec_bank {
FC_BUFFER = 0x04,
HOST_MAC = 0x05,
LINE_MAC = 0x06,
IP_1588 = 0x0e,
MACSEC_INGR = 0x38,
MACSEC_EGR = 0x3c,
};
#define MSCC_EXT_PAGE_ACCESS 31
#define MSCC_PHY_PAGE_STANDARD 0x0000 /* Standard registers */
#define MSCC_PHY_PAGE_EXTENDED 0x0001 /* Extended registers */
@@ -128,6 +159,7 @@ enum rgmii_rx_clock_delay {
#define MSCC_PHY_PAGE_EXTENDED_3 0x0003 /* Extended reg - page 3 */
#define MSCC_PHY_PAGE_EXTENDED_4 0x0004 /* Extended reg - page 4 */
#define MSCC_PHY_PAGE_CSR_CNTL MSCC_PHY_PAGE_EXTENDED_4
#define MSCC_PHY_PAGE_MACSEC MSCC_PHY_PAGE_EXTENDED_4
/* Extended reg - GPIO; this is a bank of registers that are shared for all PHYs
* in the same package.
*/
@@ -175,6 +207,9 @@ enum rgmii_rx_clock_delay {
#define SECURE_ON_ENABLE 0x8000
#define SECURE_ON_PASSWD_LEN_4 0x4000
#define MSCC_PHY_EXTENDED_INT 28
#define MSCC_PHY_EXTENDED_INT_MS_EGR BIT(9)
/* Extended Page 3 Registers */
#define MSCC_PHY_SERDES_TX_VALID_CNT 21
#define MSCC_PHY_SERDES_TX_CRC_ERR_CNT 22
@@ -411,6 +446,44 @@ static const struct vsc85xx_hw_stat vsc8584_hw_stats[] = {
},
};
#if IS_ENABLED(CONFIG_MACSEC)
struct macsec_flow {
struct list_head list;
enum mscc_macsec_destination_ports port;
enum macsec_bank bank;
u32 index;
int assoc_num;
bool has_transformation;
/* Highest takes precedence [0..15] */
u8 priority;
u8 key[MACSEC_KEYID_LEN];
union {
struct macsec_rx_sa *rx_sa;
struct macsec_tx_sa *tx_sa;
};
/* Matching */
struct {
u8 sci:1;
u8 tagged:1;
u8 untagged:1;
u8 etype:1;
} match;
u16 etype;
/* Action */
struct {
u8 bypass:1;
u8 drop:1;
} action;
};
#endif
struct vsc8531_private {
int rate_magic;
u16 supp_led_modes;
@@ -424,6 +497,19 @@ struct vsc8531_private {
* package.
*/
unsigned int base_addr;
#if IS_ENABLED(CONFIG_MACSEC)
/* MACsec fields:
* - One SecY per device (enforced at the s/w implementation level)
* - macsec_flows: list of h/w flows
* - ingr_flows: bitmap of ingress flows
* - egr_flows: bitmap of egress flows
*/
struct macsec_secy *secy;
struct list_head macsec_flows;
unsigned long ingr_flows;
unsigned long egr_flows;
#endif
};
#ifdef CONFIG_OF_MDIO
@@ -1584,6 +1670,978 @@ static int vsc8584_config_pre_init(struct phy_device *phydev)
return ret;
}
#if IS_ENABLED(CONFIG_MACSEC)
static u32 vsc8584_macsec_phy_read(struct phy_device *phydev,
enum macsec_bank bank, u32 reg)
{
u32 val, val_l = 0, val_h = 0;
unsigned long deadline;
int rc;
rc = phy_select_page(phydev, MSCC_PHY_PAGE_MACSEC);
if (rc < 0)
goto failed;
__phy_write(phydev, MSCC_EXT_PAGE_MACSEC_20,
MSCC_PHY_MACSEC_20_TARGET(bank >> 2));
if (bank >> 2 == 0x1)
/* non-MACsec access */
bank &= 0x3;
else
bank = 0;
__phy_write(phydev, MSCC_EXT_PAGE_MACSEC_19,
MSCC_PHY_MACSEC_19_CMD | MSCC_PHY_MACSEC_19_READ |
MSCC_PHY_MACSEC_19_REG_ADDR(reg) |
MSCC_PHY_MACSEC_19_TARGET(bank));
deadline = jiffies + msecs_to_jiffies(PROC_CMD_NCOMPLETED_TIMEOUT_MS);
do {
val = __phy_read(phydev, MSCC_EXT_PAGE_MACSEC_19);
} while (time_before(jiffies, deadline) && !(val & MSCC_PHY_MACSEC_19_CMD));
val_l = __phy_read(phydev, MSCC_EXT_PAGE_MACSEC_17);
val_h = __phy_read(phydev, MSCC_EXT_PAGE_MACSEC_18);
failed:
phy_restore_page(phydev, rc, rc);
return (val_h << 16) | val_l;
}
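The read above retrieves one 32-bit MACsec CSR value split across two 16-bit PHY registers (register 17 carries the low half, register 18 the high half) and recombines them on return. A trivial stand-alone sketch of that recombination (the helper name is illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative helper: recombine the two 16-bit MACsec CSR data
 * registers (low half, high half) into a single 32-bit value, as done
 * at the end of vsc8584_macsec_phy_read() above. */
static uint32_t csr_combine(uint16_t val_l, uint16_t val_h)
{
	return ((uint32_t)val_h << 16) | val_l;
}
```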
static void vsc8584_macsec_phy_write(struct phy_device *phydev,
enum macsec_bank bank, u32 reg, u32 val)
{
unsigned long deadline;
int rc;
rc = phy_select_page(phydev, MSCC_PHY_PAGE_MACSEC);
if (rc < 0)
goto failed;
__phy_write(phydev, MSCC_EXT_PAGE_MACSEC_20,
MSCC_PHY_MACSEC_20_TARGET(bank >> 2));
if ((bank >> 2 == 0x1) || (bank >> 2 == 0x3))
bank &= 0x3;
else
/* MACsec access */
bank = 0;
__phy_write(phydev, MSCC_EXT_PAGE_MACSEC_17, (u16)val);
__phy_write(phydev, MSCC_EXT_PAGE_MACSEC_18, (u16)(val >> 16));
__phy_write(phydev, MSCC_EXT_PAGE_MACSEC_19,
MSCC_PHY_MACSEC_19_CMD | MSCC_PHY_MACSEC_19_REG_ADDR(reg) |
MSCC_PHY_MACSEC_19_TARGET(bank));
deadline = jiffies + msecs_to_jiffies(PROC_CMD_NCOMPLETED_TIMEOUT_MS);
do {
val = __phy_read(phydev, MSCC_EXT_PAGE_MACSEC_19);
} while (time_before(jiffies, deadline) && !(val & MSCC_PHY_MACSEC_19_CMD));
failed:
phy_restore_page(phydev, rc, rc);
}
static void vsc8584_macsec_classification(struct phy_device *phydev,
enum macsec_bank bank)
{
/* enable VLAN tag parsing */
vsc8584_macsec_phy_write(phydev, bank, MSCC_MS_SAM_CP_TAG,
MSCC_MS_SAM_CP_TAG_PARSE_STAG |
MSCC_MS_SAM_CP_TAG_PARSE_QTAG |
MSCC_MS_SAM_CP_TAG_PARSE_QINQ);
}
static void vsc8584_macsec_flow_default_action(struct phy_device *phydev,
enum macsec_bank bank,
bool block)
{
u32 port = (bank == MACSEC_INGR) ?
MSCC_MS_PORT_UNCONTROLLED : MSCC_MS_PORT_COMMON;
u32 action = MSCC_MS_FLOW_BYPASS;
if (block)
action = MSCC_MS_FLOW_DROP;
vsc8584_macsec_phy_write(phydev, bank, MSCC_MS_SAM_NM_FLOW_NCP,
/* MACsec untagged */
MSCC_MS_SAM_NM_FLOW_NCP_UNTAGGED_FLOW_TYPE(action) |
MSCC_MS_SAM_NM_FLOW_NCP_UNTAGGED_DROP_ACTION(MSCC_MS_ACTION_DROP) |
MSCC_MS_SAM_NM_FLOW_NCP_UNTAGGED_DEST_PORT(port) |
/* MACsec tagged */
MSCC_MS_SAM_NM_FLOW_NCP_TAGGED_FLOW_TYPE(action) |
MSCC_MS_SAM_NM_FLOW_NCP_TAGGED_DROP_ACTION(MSCC_MS_ACTION_DROP) |
MSCC_MS_SAM_NM_FLOW_NCP_TAGGED_DEST_PORT(port) |
/* Bad tag */
MSCC_MS_SAM_NM_FLOW_NCP_BADTAG_FLOW_TYPE(action) |
MSCC_MS_SAM_NM_FLOW_NCP_BADTAG_DROP_ACTION(MSCC_MS_ACTION_DROP) |
MSCC_MS_SAM_NM_FLOW_NCP_BADTAG_DEST_PORT(port) |
/* Kay tag */
MSCC_MS_SAM_NM_FLOW_NCP_KAY_FLOW_TYPE(action) |
MSCC_MS_SAM_NM_FLOW_NCP_KAY_DROP_ACTION(MSCC_MS_ACTION_DROP) |
MSCC_MS_SAM_NM_FLOW_NCP_KAY_DEST_PORT(port));
vsc8584_macsec_phy_write(phydev, bank, MSCC_MS_SAM_NM_FLOW_CP,
/* MACsec untagged */
MSCC_MS_SAM_NM_FLOW_NCP_UNTAGGED_FLOW_TYPE(action) |
MSCC_MS_SAM_NM_FLOW_CP_UNTAGGED_DROP_ACTION(MSCC_MS_ACTION_DROP) |
MSCC_MS_SAM_NM_FLOW_CP_UNTAGGED_DEST_PORT(port) |
/* MACsec tagged */
MSCC_MS_SAM_NM_FLOW_NCP_TAGGED_FLOW_TYPE(action) |
MSCC_MS_SAM_NM_FLOW_CP_TAGGED_DROP_ACTION(MSCC_MS_ACTION_DROP) |
MSCC_MS_SAM_NM_FLOW_CP_TAGGED_DEST_PORT(port) |
/* Bad tag */
MSCC_MS_SAM_NM_FLOW_NCP_BADTAG_FLOW_TYPE(action) |
MSCC_MS_SAM_NM_FLOW_CP_BADTAG_DROP_ACTION(MSCC_MS_ACTION_DROP) |
MSCC_MS_SAM_NM_FLOW_CP_BADTAG_DEST_PORT(port) |
/* Kay tag */
MSCC_MS_SAM_NM_FLOW_NCP_KAY_FLOW_TYPE(action) |
MSCC_MS_SAM_NM_FLOW_CP_KAY_DROP_ACTION(MSCC_MS_ACTION_DROP) |
MSCC_MS_SAM_NM_FLOW_CP_KAY_DEST_PORT(port));
}
static void vsc8584_macsec_integrity_checks(struct phy_device *phydev,
enum macsec_bank bank)
{
u32 val;
if (bank != MACSEC_INGR)
return;
/* Set default rules to pass unmatched frames */
val = vsc8584_macsec_phy_read(phydev, bank,
MSCC_MS_PARAMS2_IG_CC_CONTROL);
val |= MSCC_MS_PARAMS2_IG_CC_CONTROL_NON_MATCH_CTRL_ACT |
MSCC_MS_PARAMS2_IG_CC_CONTROL_NON_MATCH_ACT;
vsc8584_macsec_phy_write(phydev, bank, MSCC_MS_PARAMS2_IG_CC_CONTROL,
val);
vsc8584_macsec_phy_write(phydev, bank, MSCC_MS_PARAMS2_IG_CP_TAG,
MSCC_MS_PARAMS2_IG_CP_TAG_PARSE_STAG |
MSCC_MS_PARAMS2_IG_CP_TAG_PARSE_QTAG |
MSCC_MS_PARAMS2_IG_CP_TAG_PARSE_QINQ);
}
static void vsc8584_macsec_block_init(struct phy_device *phydev,
enum macsec_bank bank)
{
u32 val;
int i;
vsc8584_macsec_phy_write(phydev, bank, MSCC_MS_ENA_CFG,
MSCC_MS_ENA_CFG_SW_RST |
MSCC_MS_ENA_CFG_MACSEC_BYPASS_ENA);
/* Set the MACsec block out of s/w reset and enable clocks */
vsc8584_macsec_phy_write(phydev, bank, MSCC_MS_ENA_CFG,
MSCC_MS_ENA_CFG_CLK_ENA);
vsc8584_macsec_phy_write(phydev, bank, MSCC_MS_STATUS_CONTEXT_CTRL,
bank == MACSEC_INGR ? 0xe5880214 : 0xe5880218);
vsc8584_macsec_phy_write(phydev, bank, MSCC_MS_MISC_CONTROL,
MSCC_MS_MISC_CONTROL_MC_LATENCY_FIX(bank == MACSEC_INGR ? 57 : 40) |
MSCC_MS_MISC_CONTROL_XFORM_REC_SIZE(bank == MACSEC_INGR ? 1 : 2));
/* Clear the counters */
val = vsc8584_macsec_phy_read(phydev, bank, MSCC_MS_COUNT_CONTROL);
val |= MSCC_MS_COUNT_CONTROL_AUTO_CNTR_RESET;
vsc8584_macsec_phy_write(phydev, bank, MSCC_MS_COUNT_CONTROL, val);
/* Enable octet increment mode */
vsc8584_macsec_phy_write(phydev, bank, MSCC_MS_PP_CTRL,
MSCC_MS_PP_CTRL_MACSEC_OCTET_INCR_MODE);
vsc8584_macsec_phy_write(phydev, bank, MSCC_MS_BLOCK_CTX_UPDATE, 0x3);
val = vsc8584_macsec_phy_read(phydev, bank, MSCC_MS_COUNT_CONTROL);
val |= MSCC_MS_COUNT_CONTROL_RESET_ALL;
vsc8584_macsec_phy_write(phydev, bank, MSCC_MS_COUNT_CONTROL, val);
/* Set the MTU */
vsc8584_macsec_phy_write(phydev, bank, MSCC_MS_NON_VLAN_MTU_CHECK,
MSCC_MS_NON_VLAN_MTU_CHECK_NV_MTU_COMPARE(32761) |
MSCC_MS_NON_VLAN_MTU_CHECK_NV_MTU_COMP_DROP);
for (i = 0; i < 8; i++)
vsc8584_macsec_phy_write(phydev, bank, MSCC_MS_VLAN_MTU_CHECK(i),
MSCC_MS_VLAN_MTU_CHECK_MTU_COMPARE(32761) |
MSCC_MS_VLAN_MTU_CHECK_MTU_COMP_DROP);
if (bank == MACSEC_EGR) {
val = vsc8584_macsec_phy_read(phydev, bank, MSCC_MS_INTR_CTRL_STATUS);
val &= ~MSCC_MS_INTR_CTRL_STATUS_INTR_ENABLE_M;
vsc8584_macsec_phy_write(phydev, bank, MSCC_MS_INTR_CTRL_STATUS, val);
vsc8584_macsec_phy_write(phydev, bank, MSCC_MS_FC_CFG,
MSCC_MS_FC_CFG_FCBUF_ENA |
MSCC_MS_FC_CFG_LOW_THRESH(0x1) |
MSCC_MS_FC_CFG_HIGH_THRESH(0x4) |
MSCC_MS_FC_CFG_LOW_BYTES_VAL(0x4) |
MSCC_MS_FC_CFG_HIGH_BYTES_VAL(0x6));
}
vsc8584_macsec_classification(phydev, bank);
vsc8584_macsec_flow_default_action(phydev, bank, false);
vsc8584_macsec_integrity_checks(phydev, bank);
/* Enable the MACsec block */
vsc8584_macsec_phy_write(phydev, bank, MSCC_MS_ENA_CFG,
MSCC_MS_ENA_CFG_CLK_ENA |
MSCC_MS_ENA_CFG_MACSEC_ENA |
MSCC_MS_ENA_CFG_MACSEC_SPEED_MODE(0x5));
}
static void vsc8584_macsec_mac_init(struct phy_device *phydev,
enum macsec_bank bank)
{
u32 val;
int i;
/* Clear host & line stats */
for (i = 0; i < 36; i++)
vsc8584_macsec_phy_write(phydev, bank, 0x1c + i, 0);
val = vsc8584_macsec_phy_read(phydev, bank,
MSCC_MAC_PAUSE_CFG_TX_FRAME_CTRL);
val &= ~MSCC_MAC_PAUSE_CFG_TX_FRAME_CTRL_PAUSE_MODE_M;
val |= MSCC_MAC_PAUSE_CFG_TX_FRAME_CTRL_PAUSE_MODE(2) |
MSCC_MAC_PAUSE_CFG_TX_FRAME_CTRL_PAUSE_VALUE(0xffff);
vsc8584_macsec_phy_write(phydev, bank,
MSCC_MAC_PAUSE_CFG_TX_FRAME_CTRL, val);
val = vsc8584_macsec_phy_read(phydev, bank,
MSCC_MAC_PAUSE_CFG_TX_FRAME_CTRL_2);
val |= 0xffff;
vsc8584_macsec_phy_write(phydev, bank,
MSCC_MAC_PAUSE_CFG_TX_FRAME_CTRL_2, val);
val = vsc8584_macsec_phy_read(phydev, bank,
MSCC_MAC_PAUSE_CFG_RX_FRAME_CTRL);
if (bank == HOST_MAC)
val |= MSCC_MAC_PAUSE_CFG_RX_FRAME_CTRL_PAUSE_TIMER_ENA |
MSCC_MAC_PAUSE_CFG_RX_FRAME_CTRL_PAUSE_FRAME_DROP_ENA;
else
val |= MSCC_MAC_PAUSE_CFG_RX_FRAME_CTRL_PAUSE_REACT_ENA |
MSCC_MAC_PAUSE_CFG_RX_FRAME_CTRL_PAUSE_FRAME_DROP_ENA |
MSCC_MAC_PAUSE_CFG_RX_FRAME_CTRL_PAUSE_MODE |
MSCC_MAC_PAUSE_CFG_RX_FRAME_CTRL_EARLY_PAUSE_DETECT_ENA;
vsc8584_macsec_phy_write(phydev, bank,
MSCC_MAC_PAUSE_CFG_RX_FRAME_CTRL, val);
vsc8584_macsec_phy_write(phydev, bank, MSCC_MAC_CFG_PKTINF_CFG,
MSCC_MAC_CFG_PKTINF_CFG_STRIP_FCS_ENA |
MSCC_MAC_CFG_PKTINF_CFG_INSERT_FCS_ENA |
MSCC_MAC_CFG_PKTINF_CFG_LPI_RELAY_ENA |
MSCC_MAC_CFG_PKTINF_CFG_STRIP_PREAMBLE_ENA |
MSCC_MAC_CFG_PKTINF_CFG_INSERT_PREAMBLE_ENA |
(bank == HOST_MAC ?
MSCC_MAC_CFG_PKTINF_CFG_ENABLE_TX_PADDING : 0));
val = vsc8584_macsec_phy_read(phydev, bank, MSCC_MAC_CFG_MODE_CFG);
val &= ~MSCC_MAC_CFG_MODE_CFG_DISABLE_DIC;
vsc8584_macsec_phy_write(phydev, bank, MSCC_MAC_CFG_MODE_CFG, val);
val = vsc8584_macsec_phy_read(phydev, bank, MSCC_MAC_CFG_MAXLEN_CFG);
val &= ~MSCC_MAC_CFG_MAXLEN_CFG_MAX_LEN_M;
val |= MSCC_MAC_CFG_MAXLEN_CFG_MAX_LEN(10240);
vsc8584_macsec_phy_write(phydev, bank, MSCC_MAC_CFG_MAXLEN_CFG, val);
vsc8584_macsec_phy_write(phydev, bank, MSCC_MAC_CFG_ADV_CHK_CFG,
MSCC_MAC_CFG_ADV_CHK_CFG_SFD_CHK_ENA |
MSCC_MAC_CFG_ADV_CHK_CFG_PRM_CHK_ENA |
MSCC_MAC_CFG_ADV_CHK_CFG_OOR_ERR_ENA |
MSCC_MAC_CFG_ADV_CHK_CFG_INR_ERR_ENA);
val = vsc8584_macsec_phy_read(phydev, bank, MSCC_MAC_CFG_LFS_CFG);
val &= ~MSCC_MAC_CFG_LFS_CFG_LFS_MODE_ENA;
vsc8584_macsec_phy_write(phydev, bank, MSCC_MAC_CFG_LFS_CFG, val);
vsc8584_macsec_phy_write(phydev, bank, MSCC_MAC_CFG_ENA_CFG,
MSCC_MAC_CFG_ENA_CFG_RX_CLK_ENA |
MSCC_MAC_CFG_ENA_CFG_TX_CLK_ENA |
MSCC_MAC_CFG_ENA_CFG_RX_ENA |
MSCC_MAC_CFG_ENA_CFG_TX_ENA);
}
/* Must be called with mdio_lock taken */
static int vsc8584_macsec_init(struct phy_device *phydev)
{
u32 val;
vsc8584_macsec_block_init(phydev, MACSEC_INGR);
vsc8584_macsec_block_init(phydev, MACSEC_EGR);
vsc8584_macsec_mac_init(phydev, HOST_MAC);
vsc8584_macsec_mac_init(phydev, LINE_MAC);
vsc8584_macsec_phy_write(phydev, FC_BUFFER,
MSCC_FCBUF_FC_READ_THRESH_CFG,
MSCC_FCBUF_FC_READ_THRESH_CFG_TX_THRESH(4) |
MSCC_FCBUF_FC_READ_THRESH_CFG_RX_THRESH(5));
val = vsc8584_macsec_phy_read(phydev, FC_BUFFER, MSCC_FCBUF_MODE_CFG);
val |= MSCC_FCBUF_MODE_CFG_PAUSE_GEN_ENA |
MSCC_FCBUF_MODE_CFG_RX_PPM_RATE_ADAPT_ENA |
MSCC_FCBUF_MODE_CFG_TX_PPM_RATE_ADAPT_ENA;
vsc8584_macsec_phy_write(phydev, FC_BUFFER, MSCC_FCBUF_MODE_CFG, val);
vsc8584_macsec_phy_write(phydev, FC_BUFFER, MSCC_FCBUF_PPM_RATE_ADAPT_THRESH_CFG,
MSCC_FCBUF_PPM_RATE_ADAPT_THRESH_CFG_TX_THRESH(8) |
MSCC_FCBUF_PPM_RATE_ADAPT_THRESH_CFG_TX_OFFSET(9));
val = vsc8584_macsec_phy_read(phydev, FC_BUFFER,
MSCC_FCBUF_TX_DATA_QUEUE_CFG);
val &= ~(MSCC_FCBUF_TX_DATA_QUEUE_CFG_START_M |
MSCC_FCBUF_TX_DATA_QUEUE_CFG_END_M);
val |= MSCC_FCBUF_TX_DATA_QUEUE_CFG_START(0) |
MSCC_FCBUF_TX_DATA_QUEUE_CFG_END(5119);
vsc8584_macsec_phy_write(phydev, FC_BUFFER,
MSCC_FCBUF_TX_DATA_QUEUE_CFG, val);
val = vsc8584_macsec_phy_read(phydev, FC_BUFFER, MSCC_FCBUF_ENA_CFG);
val |= MSCC_FCBUF_ENA_CFG_TX_ENA | MSCC_FCBUF_ENA_CFG_RX_ENA;
vsc8584_macsec_phy_write(phydev, FC_BUFFER, MSCC_FCBUF_ENA_CFG, val);
val = vsc8584_macsec_phy_read(phydev, IP_1588,
MSCC_PROC_0_IP_1588_TOP_CFG_STAT_MODE_CTL);
val &= ~MSCC_PROC_0_IP_1588_TOP_CFG_STAT_MODE_CTL_PROTOCOL_MODE_M;
val |= MSCC_PROC_0_IP_1588_TOP_CFG_STAT_MODE_CTL_PROTOCOL_MODE(4);
vsc8584_macsec_phy_write(phydev, IP_1588,
MSCC_PROC_0_IP_1588_TOP_CFG_STAT_MODE_CTL, val);
return 0;
}
static void vsc8584_macsec_flow(struct phy_device *phydev,
struct macsec_flow *flow)
{
struct vsc8531_private *priv = phydev->priv;
enum macsec_bank bank = flow->bank;
u32 val, match = 0, mask = 0, action = 0, idx = flow->index;
if (flow->match.tagged)
match |= MSCC_MS_SAM_MISC_MATCH_TAGGED;
if (flow->match.untagged)
match |= MSCC_MS_SAM_MISC_MATCH_UNTAGGED;
if (bank == MACSEC_INGR && flow->assoc_num >= 0) {
match |= MSCC_MS_SAM_MISC_MATCH_AN(flow->assoc_num);
mask |= MSCC_MS_SAM_MASK_AN_MASK(0x3);
}
if (bank == MACSEC_INGR && flow->match.sci && flow->rx_sa->sc->sci) {
match |= MSCC_MS_SAM_MISC_MATCH_TCI(BIT(3));
mask |= MSCC_MS_SAM_MASK_TCI_MASK(BIT(3)) |
MSCC_MS_SAM_MASK_SCI_MASK;
vsc8584_macsec_phy_write(phydev, bank, MSCC_MS_SAM_MATCH_SCI_LO(idx),
lower_32_bits(flow->rx_sa->sc->sci));
vsc8584_macsec_phy_write(phydev, bank, MSCC_MS_SAM_MATCH_SCI_HI(idx),
upper_32_bits(flow->rx_sa->sc->sci));
}
if (flow->match.etype) {
mask |= MSCC_MS_SAM_MASK_MAC_ETYPE_MASK;
vsc8584_macsec_phy_write(phydev, bank, MSCC_MS_SAM_MAC_SA_MATCH_HI(idx),
MSCC_MS_SAM_MAC_SA_MATCH_HI_ETYPE(htons(flow->etype)));
}
match |= MSCC_MS_SAM_MISC_MATCH_PRIORITY(flow->priority);
vsc8584_macsec_phy_write(phydev, bank, MSCC_MS_SAM_MISC_MATCH(idx), match);
vsc8584_macsec_phy_write(phydev, bank, MSCC_MS_SAM_MASK(idx), mask);
/* Action for matching packets */
if (flow->action.drop)
action = MSCC_MS_FLOW_DROP;
else if (flow->action.bypass || flow->port == MSCC_MS_PORT_UNCONTROLLED)
action = MSCC_MS_FLOW_BYPASS;
else
action = (bank == MACSEC_INGR) ?
MSCC_MS_FLOW_INGRESS : MSCC_MS_FLOW_EGRESS;
val = MSCC_MS_SAM_FLOW_CTRL_FLOW_TYPE(action) |
MSCC_MS_SAM_FLOW_CTRL_DROP_ACTION(MSCC_MS_ACTION_DROP) |
MSCC_MS_SAM_FLOW_CTRL_DEST_PORT(flow->port);
if (action == MSCC_MS_FLOW_BYPASS)
goto write_ctrl;
if (bank == MACSEC_INGR) {
if (priv->secy->replay_protect)
val |= MSCC_MS_SAM_FLOW_CTRL_REPLAY_PROTECT;
if (priv->secy->validate_frames == MACSEC_VALIDATE_STRICT)
val |= MSCC_MS_SAM_FLOW_CTRL_VALIDATE_FRAMES(MSCC_MS_VALIDATE_STRICT);
else if (priv->secy->validate_frames == MACSEC_VALIDATE_CHECK)
val |= MSCC_MS_SAM_FLOW_CTRL_VALIDATE_FRAMES(MSCC_MS_VALIDATE_CHECK);
} else if (bank == MACSEC_EGR) {
if (priv->secy->protect_frames)
val |= MSCC_MS_SAM_FLOW_CTRL_PROTECT_FRAME;
if (priv->secy->tx_sc.encrypt)
val |= MSCC_MS_SAM_FLOW_CTRL_CONF_PROTECT;
if (priv->secy->tx_sc.send_sci)
val |= MSCC_MS_SAM_FLOW_CTRL_INCLUDE_SCI;
}
write_ctrl:
vsc8584_macsec_phy_write(phydev, bank, MSCC_MS_SAM_FLOW_CTRL(idx), val);
}
static struct macsec_flow *vsc8584_macsec_find_flow(struct macsec_context *ctx,
enum macsec_bank bank)
{
struct vsc8531_private *priv = ctx->phydev->priv;
struct macsec_flow *pos, *tmp;
list_for_each_entry_safe(pos, tmp, &priv->macsec_flows, list)
if (pos->assoc_num == ctx->sa.assoc_num && pos->bank == bank)
return pos;
return ERR_PTR(-ENOENT);
}
static void vsc8584_macsec_flow_enable(struct phy_device *phydev,
struct macsec_flow *flow)
{
enum macsec_bank bank = flow->bank;
u32 val, idx = flow->index;
if ((flow->bank == MACSEC_INGR && flow->rx_sa && !flow->rx_sa->active) ||
(flow->bank == MACSEC_EGR && flow->tx_sa && !flow->tx_sa->active))
return;
/* Enable */
vsc8584_macsec_phy_write(phydev, bank, MSCC_MS_SAM_ENTRY_SET1, BIT(idx));
/* Set in-use */
val = vsc8584_macsec_phy_read(phydev, bank, MSCC_MS_SAM_FLOW_CTRL(idx));
val |= MSCC_MS_SAM_FLOW_CTRL_SA_IN_USE;
vsc8584_macsec_phy_write(phydev, bank, MSCC_MS_SAM_FLOW_CTRL(idx), val);
}
static void vsc8584_macsec_flow_disable(struct phy_device *phydev,
struct macsec_flow *flow)
{
enum macsec_bank bank = flow->bank;
u32 val, idx = flow->index;
/* Disable */
vsc8584_macsec_phy_write(phydev, bank, MSCC_MS_SAM_ENTRY_CLEAR1, BIT(idx));
/* Clear in-use */
val = vsc8584_macsec_phy_read(phydev, bank, MSCC_MS_SAM_FLOW_CTRL(idx));
val &= ~MSCC_MS_SAM_FLOW_CTRL_SA_IN_USE;
vsc8584_macsec_phy_write(phydev, bank, MSCC_MS_SAM_FLOW_CTRL(idx), val);
}
static u32 vsc8584_macsec_flow_context_id(struct macsec_flow *flow)
{
if (flow->bank == MACSEC_INGR)
return flow->index + MSCC_MS_MAX_FLOWS;
return flow->index;
}
/* Derive a hash authentication key from the AES key */
static int vsc8584_macsec_derive_key(const u8 key[MACSEC_KEYID_LEN],
u16 key_len, u8 hkey[16])
{
struct crypto_skcipher *tfm = crypto_alloc_skcipher("ecb(aes)", 0, 0);
struct skcipher_request *req = NULL;
struct scatterlist src, dst;
DECLARE_CRYPTO_WAIT(wait);
u32 input[4] = {0};
int ret;
if (IS_ERR(tfm))
return PTR_ERR(tfm);
req = skcipher_request_alloc(tfm, GFP_KERNEL);
if (!req) {
ret = -ENOMEM;
goto out;
}
skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG |
CRYPTO_TFM_REQ_MAY_SLEEP, crypto_req_done,
&wait);
ret = crypto_skcipher_setkey(tfm, key, key_len);
if (ret < 0)
goto out;
sg_init_one(&src, input, 16);
sg_init_one(&dst, hkey, 16);
skcipher_request_set_crypt(req, &src, &dst, 16, NULL);
ret = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
out:
skcipher_request_free(req);
crypto_free_skcipher(tfm);
return ret;
}
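What this derivation computes is the standard GCM hash subkey: the GHASH authentication key is the AES encryption of the all-zero block under the traffic key,

```latex
H = E_K\big(0^{128}\big)
```

which is why a single ECB encryption of the zeroed 16-byte `input` buffer yields the authentication key later programmed into the transform record.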
static int vsc8584_macsec_transformation(struct phy_device *phydev,
struct macsec_flow *flow)
{
struct vsc8531_private *priv = phydev->priv;
enum macsec_bank bank = flow->bank;
int i, ret, index = flow->index;
u32 rec = 0, control = 0;
u8 hkey[16];
sci_t sci;
ret = vsc8584_macsec_derive_key(flow->key, priv->secy->key_len, hkey);
if (ret)
return ret;
switch (priv->secy->key_len) {
case 16:
control |= CONTROL_CRYPTO_ALG(CTRYPTO_ALG_AES_CTR_128);
break;
case 32:
control |= CONTROL_CRYPTO_ALG(CTRYPTO_ALG_AES_CTR_256);
break;
default:
return -EINVAL;
}
control |= (bank == MACSEC_EGR) ?
(CONTROL_TYPE_EGRESS | CONTROL_AN(priv->secy->tx_sc.encoding_sa)) :
(CONTROL_TYPE_INGRESS | CONTROL_SEQ_MASK);
control |= CONTROL_UPDATE_SEQ | CONTROL_ENCRYPT_AUTH | CONTROL_KEY_IN_CTX |
CONTROL_IV0 | CONTROL_IV1 | CONTROL_IV_IN_SEQ |
CONTROL_DIGEST_TYPE(0x2) | CONTROL_SEQ_TYPE(0x1) |
CONTROL_AUTH_ALG(AUTH_ALG_AES_GHAS) | CONTROL_CONTEXT_ID;
/* Set the control word */
vsc8584_macsec_phy_write(phydev, bank, MSCC_MS_XFORM_REC(index, rec++),
control);
/* Set the context ID. Must be unique. */
vsc8584_macsec_phy_write(phydev, bank, MSCC_MS_XFORM_REC(index, rec++),
vsc8584_macsec_flow_context_id(flow));
/* Set the encryption/decryption key */
for (i = 0; i < priv->secy->key_len / sizeof(u32); i++)
vsc8584_macsec_phy_write(phydev, bank,
MSCC_MS_XFORM_REC(index, rec++),
((u32 *)flow->key)[i]);
/* Set the authentication key */
for (i = 0; i < 4; i++)
vsc8584_macsec_phy_write(phydev, bank,
MSCC_MS_XFORM_REC(index, rec++),
((u32 *)hkey)[i]);
/* Initial sequence number */
vsc8584_macsec_phy_write(phydev, bank, MSCC_MS_XFORM_REC(index, rec++),
bank == MACSEC_INGR ?
flow->rx_sa->next_pn : flow->tx_sa->next_pn);
if (bank == MACSEC_INGR)
/* Set the mask (replay window size) */
vsc8584_macsec_phy_write(phydev, bank,
MSCC_MS_XFORM_REC(index, rec++),
priv->secy->replay_window);
/* Set the input vectors */
sci = bank == MACSEC_INGR ? flow->rx_sa->sc->sci : priv->secy->sci;
vsc8584_macsec_phy_write(phydev, bank, MSCC_MS_XFORM_REC(index, rec++),
lower_32_bits(sci));
vsc8584_macsec_phy_write(phydev, bank, MSCC_MS_XFORM_REC(index, rec++),
upper_32_bits(sci));
while (rec < 20)
vsc8584_macsec_phy_write(phydev, bank, MSCC_MS_XFORM_REC(index, rec++),
0);
flow->has_transformation = true;
return 0;
}
static struct macsec_flow *vsc8584_macsec_alloc_flow(struct vsc8531_private *priv,
enum macsec_bank bank)
{
unsigned long *bitmap = bank == MACSEC_INGR ?
&priv->ingr_flows : &priv->egr_flows;
struct macsec_flow *flow;
int index;
index = find_first_zero_bit(bitmap, MSCC_MS_MAX_FLOWS);
if (index == MSCC_MS_MAX_FLOWS)
return ERR_PTR(-ENOMEM);
flow = kzalloc(sizeof(*flow), GFP_KERNEL);
if (!flow)
return ERR_PTR(-ENOMEM);
set_bit(index, bitmap);
flow->index = index;
flow->bank = bank;
flow->priority = 8;
flow->assoc_num = -1;
list_add_tail(&flow->list, &priv->macsec_flows);
return flow;
}
static void vsc8584_macsec_free_flow(struct vsc8531_private *priv,
struct macsec_flow *flow)
{
unsigned long *bitmap = flow->bank == MACSEC_INGR ?
&priv->ingr_flows : &priv->egr_flows;
list_del(&flow->list);
clear_bit(flow->index, bitmap);
kfree(flow);
}
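The allocator above hands out hardware flow indices from a per-bank bitmap (`find_first_zero_bit`/`set_bit` over `ingr_flows` or `egr_flows`). A self-contained sketch of that index allocation, using plain C in place of the kernel bitmap helpers (function names here are illustrative):

```c
#include <assert.h>

/* Find the first clear bit, mark it used, and return its index;
 * return -1 when all 'max' entries are in use. */
static int alloc_index(unsigned long *bitmap, int max)
{
	int i;

	for (i = 0; i < max; i++) {
		if (!(*bitmap & (1UL << i))) {
			*bitmap |= 1UL << i;
			return i;
		}
	}
	return -1;
}

/* Release an index so it can be handed out again. */
static void free_index(unsigned long *bitmap, int index)
{
	*bitmap &= ~(1UL << index);
}
```

Freed indices are reused lowest-first, matching the `find_first_zero_bit()` behaviour in the driver.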
static int vsc8584_macsec_add_flow(struct phy_device *phydev,
struct macsec_flow *flow, bool update)
{
int ret;
flow->port = MSCC_MS_PORT_CONTROLLED;
vsc8584_macsec_flow(phydev, flow);
if (update)
return 0;
ret = vsc8584_macsec_transformation(phydev, flow);
if (ret) {
vsc8584_macsec_free_flow(phydev->priv, flow);
return ret;
}
return 0;
}
static int vsc8584_macsec_default_flows(struct phy_device *phydev)
{
struct macsec_flow *flow;
/* Add a rule to let the MKA traffic go through, ingress */
flow = vsc8584_macsec_alloc_flow(phydev->priv, MACSEC_INGR);
if (IS_ERR(flow))
return PTR_ERR(flow);
flow->priority = 15;
flow->port = MSCC_MS_PORT_UNCONTROLLED;
flow->match.tagged = 1;
flow->match.untagged = 1;
flow->match.etype = 1;
flow->etype = ETH_P_PAE;
flow->action.bypass = 1;
vsc8584_macsec_flow(phydev, flow);
vsc8584_macsec_flow_enable(phydev, flow);
/* Add a rule to let the MKA traffic go through, egress */
flow = vsc8584_macsec_alloc_flow(phydev->priv, MACSEC_EGR);
if (IS_ERR(flow))
return PTR_ERR(flow);
flow->priority = 15;
flow->port = MSCC_MS_PORT_COMMON;
flow->match.untagged = 1;
flow->match.etype = 1;
flow->etype = ETH_P_PAE;
flow->action.bypass = 1;
vsc8584_macsec_flow(phydev, flow);
vsc8584_macsec_flow_enable(phydev, flow);
return 0;
}
static void vsc8584_macsec_del_flow(struct phy_device *phydev,
struct macsec_flow *flow)
{
vsc8584_macsec_flow_disable(phydev, flow);
vsc8584_macsec_free_flow(phydev->priv, flow);
}
static int __vsc8584_macsec_add_rxsa(struct macsec_context *ctx,
struct macsec_flow *flow, bool update)
{
struct phy_device *phydev = ctx->phydev;
struct vsc8531_private *priv = phydev->priv;
if (!flow) {
flow = vsc8584_macsec_alloc_flow(priv, MACSEC_INGR);
if (IS_ERR(flow))
return PTR_ERR(flow);
memcpy(flow->key, ctx->sa.key, priv->secy->key_len);
}
flow->assoc_num = ctx->sa.assoc_num;
flow->rx_sa = ctx->sa.rx_sa;
/* Always match tagged packets on ingress */
flow->match.tagged = 1;
flow->match.sci = 1;
if (priv->secy->validate_frames != MACSEC_VALIDATE_DISABLED)
flow->match.untagged = 1;
return vsc8584_macsec_add_flow(phydev, flow, update);
}
static int __vsc8584_macsec_add_txsa(struct macsec_context *ctx,
struct macsec_flow *flow, bool update)
{
struct phy_device *phydev = ctx->phydev;
struct vsc8531_private *priv = phydev->priv;
if (!flow) {
flow = vsc8584_macsec_alloc_flow(priv, MACSEC_EGR);
if (IS_ERR(flow))
return PTR_ERR(flow);
memcpy(flow->key, ctx->sa.key, priv->secy->key_len);
}
flow->assoc_num = ctx->sa.assoc_num;
flow->tx_sa = ctx->sa.tx_sa;
/* Always match untagged packets on egress */
flow->match.untagged = 1;
return vsc8584_macsec_add_flow(phydev, flow, update);
}
static int vsc8584_macsec_dev_open(struct macsec_context *ctx)
{
struct vsc8531_private *priv = ctx->phydev->priv;
struct macsec_flow *flow, *tmp;
/* No operation to perform before the commit step */
if (ctx->prepare)
return 0;
list_for_each_entry_safe(flow, tmp, &priv->macsec_flows, list)
vsc8584_macsec_flow_enable(ctx->phydev, flow);
return 0;
}
static int vsc8584_macsec_dev_stop(struct macsec_context *ctx)
{
struct vsc8531_private *priv = ctx->phydev->priv;
struct macsec_flow *flow, *tmp;
/* No operation to perform before the commit step */
if (ctx->prepare)
return 0;
list_for_each_entry_safe(flow, tmp, &priv->macsec_flows, list)
vsc8584_macsec_flow_disable(ctx->phydev, flow);
return 0;
}
static int vsc8584_macsec_add_secy(struct macsec_context *ctx)
{
struct vsc8531_private *priv = ctx->phydev->priv;
struct macsec_secy *secy = ctx->secy;
if (ctx->prepare) {
if (priv->secy)
return -EEXIST;
return 0;
}
priv->secy = secy;
vsc8584_macsec_flow_default_action(ctx->phydev, MACSEC_EGR,
secy->validate_frames != MACSEC_VALIDATE_DISABLED);
vsc8584_macsec_flow_default_action(ctx->phydev, MACSEC_INGR,
secy->validate_frames != MACSEC_VALIDATE_DISABLED);
return vsc8584_macsec_default_flows(ctx->phydev);
}
static int vsc8584_macsec_del_secy(struct macsec_context *ctx)
{
struct vsc8531_private *priv = ctx->phydev->priv;
struct macsec_flow *flow, *tmp;
/* No operation to perform before the commit step */
if (ctx->prepare)
return 0;
list_for_each_entry_safe(flow, tmp, &priv->macsec_flows, list)
vsc8584_macsec_del_flow(ctx->phydev, flow);
vsc8584_macsec_flow_default_action(ctx->phydev, MACSEC_EGR, false);
vsc8584_macsec_flow_default_action(ctx->phydev, MACSEC_INGR, false);
priv->secy = NULL;
return 0;
}
static int vsc8584_macsec_upd_secy(struct macsec_context *ctx)
{
/* No operation to perform before the commit step */
if (ctx->prepare)
return 0;
vsc8584_macsec_del_secy(ctx);
return vsc8584_macsec_add_secy(ctx);
}
static int vsc8584_macsec_add_rxsc(struct macsec_context *ctx)
{
/* Nothing to do */
return 0;
}
static int vsc8584_macsec_upd_rxsc(struct macsec_context *ctx)
{
return -EOPNOTSUPP;
}
static int vsc8584_macsec_del_rxsc(struct macsec_context *ctx)
{
	struct vsc8531_private *priv = ctx->phydev->priv;
	struct macsec_flow *flow, *tmp;

	/* No operation to perform before the commit step */
	if (ctx->prepare)
		return 0;

	list_for_each_entry_safe(flow, tmp, &priv->macsec_flows, list) {
		if (flow->bank == MACSEC_INGR && flow->rx_sa &&
		    flow->rx_sa->sc->sci == ctx->rx_sc->sci)
			vsc8584_macsec_del_flow(ctx->phydev, flow);
	}

	return 0;
}

static int vsc8584_macsec_add_rxsa(struct macsec_context *ctx)
{
	struct macsec_flow *flow = NULL;

	if (ctx->prepare)
		return __vsc8584_macsec_add_rxsa(ctx, flow, false);

	flow = vsc8584_macsec_find_flow(ctx, MACSEC_INGR);
	if (IS_ERR(flow))
		return PTR_ERR(flow);

	vsc8584_macsec_flow_enable(ctx->phydev, flow);
	return 0;
}

static int vsc8584_macsec_upd_rxsa(struct macsec_context *ctx)
{
	struct macsec_flow *flow;

	flow = vsc8584_macsec_find_flow(ctx, MACSEC_INGR);
	if (IS_ERR(flow))
		return PTR_ERR(flow);

	if (ctx->prepare) {
		/* Make sure the flow is disabled before updating it */
		vsc8584_macsec_flow_disable(ctx->phydev, flow);

		return __vsc8584_macsec_add_rxsa(ctx, flow, true);
	}

	vsc8584_macsec_flow_enable(ctx->phydev, flow);
	return 0;
}

static int vsc8584_macsec_del_rxsa(struct macsec_context *ctx)
{
	struct macsec_flow *flow;

	flow = vsc8584_macsec_find_flow(ctx, MACSEC_INGR);
	if (IS_ERR(flow))
		return PTR_ERR(flow);
	if (ctx->prepare)
		return 0;

	vsc8584_macsec_del_flow(ctx->phydev, flow);
	return 0;
}

static int vsc8584_macsec_add_txsa(struct macsec_context *ctx)
{
	struct macsec_flow *flow = NULL;

	if (ctx->prepare)
		return __vsc8584_macsec_add_txsa(ctx, flow, false);

	flow = vsc8584_macsec_find_flow(ctx, MACSEC_EGR);
	if (IS_ERR(flow))
		return PTR_ERR(flow);

	vsc8584_macsec_flow_enable(ctx->phydev, flow);
	return 0;
}

static int vsc8584_macsec_upd_txsa(struct macsec_context *ctx)
{
	struct macsec_flow *flow;

	flow = vsc8584_macsec_find_flow(ctx, MACSEC_EGR);
	if (IS_ERR(flow))
		return PTR_ERR(flow);

	if (ctx->prepare) {
		/* Make sure the flow is disabled before updating it */
		vsc8584_macsec_flow_disable(ctx->phydev, flow);

		return __vsc8584_macsec_add_txsa(ctx, flow, true);
	}

	vsc8584_macsec_flow_enable(ctx->phydev, flow);
	return 0;
}

static int vsc8584_macsec_del_txsa(struct macsec_context *ctx)
{
	struct macsec_flow *flow;

	flow = vsc8584_macsec_find_flow(ctx, MACSEC_EGR);
	if (IS_ERR(flow))
		return PTR_ERR(flow);
	if (ctx->prepare)
		return 0;

	vsc8584_macsec_del_flow(ctx->phydev, flow);
	return 0;
}

static const struct macsec_ops vsc8584_macsec_ops = {
	.mdo_dev_open = vsc8584_macsec_dev_open,
	.mdo_dev_stop = vsc8584_macsec_dev_stop,
	.mdo_add_secy = vsc8584_macsec_add_secy,
	.mdo_upd_secy = vsc8584_macsec_upd_secy,
	.mdo_del_secy = vsc8584_macsec_del_secy,
	.mdo_add_rxsc = vsc8584_macsec_add_rxsc,
	.mdo_upd_rxsc = vsc8584_macsec_upd_rxsc,
	.mdo_del_rxsc = vsc8584_macsec_del_rxsc,
	.mdo_add_rxsa = vsc8584_macsec_add_rxsa,
	.mdo_upd_rxsa = vsc8584_macsec_upd_rxsa,
	.mdo_del_rxsa = vsc8584_macsec_del_rxsa,
	.mdo_add_txsa = vsc8584_macsec_add_txsa,
	.mdo_upd_txsa = vsc8584_macsec_upd_txsa,
	.mdo_del_txsa = vsc8584_macsec_del_txsa,
};
#endif /* CONFIG_MACSEC */
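Each mdo_* callback above runs twice: the MACsec core first invokes it with ctx->prepare set (the driver may only validate and reserve resources, or bail out cleanly), then again with prepare cleared to commit the change to hardware. A minimal standalone sketch of that two-step dispatch; every `example_*` name here is hypothetical, not taken from the kernel:

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-in for macsec_context: only the prepare bit is modeled,
 * plus a flag recording whether the commit step ran. */
struct example_ctx {
	unsigned int prepare:1;
	int committed;
};

static int example_mdo_add_rxsc(struct example_ctx *ctx)
{
	if (ctx->prepare)
		return 0;	/* nothing to reserve in this toy driver */

	ctx->committed = 1;	/* commit step: would program the hardware */
	return 0;
}

/* Mirrors how the core drives one op: prepare first, and only on
 * success call the same op again to commit. */
static int example_offload(int (*mdo)(struct example_ctx *),
			   struct example_ctx *ctx)
{
	int ret;

	ctx->prepare = 1;
	ret = mdo(ctx);
	if (ret)
		return ret;

	ctx->prepare = 0;
	return mdo(ctx);
}
```

A failed prepare step leaves the hardware untouched, which is the point of the split: the core can abort a configuration change without any rollback.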

/* Check if one PHY has already done the init of the parts common to all PHYs
 * in the Quad PHY package.
 */
......@@ -1733,6 +2791,24 @@ static int vsc8584_config_init(struct phy_device *phydev)
	mutex_unlock(&phydev->mdio.bus->mdio_lock);

#if IS_ENABLED(CONFIG_MACSEC)
	/* MACsec */
	switch (phydev->phy_id & phydev->drv->phy_id_mask) {
	case PHY_ID_VSC856X:
	case PHY_ID_VSC8575:
	case PHY_ID_VSC8582:
	case PHY_ID_VSC8584:
		INIT_LIST_HEAD(&vsc8531->macsec_flows);
		vsc8531->secy = NULL;

		phydev->macsec_ops = &vsc8584_macsec_ops;

		ret = vsc8584_macsec_init(phydev);
		if (ret)
			goto err;
	}
#endif

	phy_write(phydev, MSCC_EXT_PAGE_ACCESS, MSCC_PHY_PAGE_STANDARD);

	val = phy_read(phydev, MSCC_PHY_EXT_PHY_CNTL_1);
......@@ -1758,6 +2834,43 @@ static int vsc8584_config_init(struct phy_device *phydev)
	return ret;
}

static int vsc8584_handle_interrupt(struct phy_device *phydev)
{
#if IS_ENABLED(CONFIG_MACSEC)
	struct vsc8531_private *priv = phydev->priv;
	struct macsec_flow *flow, *tmp;
	u32 cause, rec;

	/* Check MACsec PN rollover */
	cause = vsc8584_macsec_phy_read(phydev, MACSEC_EGR,
					MSCC_MS_INTR_CTRL_STATUS);
	cause &= MSCC_MS_INTR_CTRL_STATUS_INTR_CLR_STATUS_M;
	if (!(cause & MACSEC_INTR_CTRL_STATUS_ROLLOVER))
		goto skip_rollover;

	rec = 6 + priv->secy->key_len / sizeof(u32);
	list_for_each_entry_safe(flow, tmp, &priv->macsec_flows, list) {
		u32 val;

		if (flow->bank != MACSEC_EGR || !flow->has_transformation)
			continue;

		val = vsc8584_macsec_phy_read(phydev, MACSEC_EGR,
					      MSCC_MS_XFORM_REC(flow->index, rec));
		if (val == 0xffffffff) {
			vsc8584_macsec_flow_disable(phydev, flow);
			macsec_pn_wrapped(priv->secy, flow->tx_sa);
			break;
		}
	}

skip_rollover:
#endif

	phy_mac_interrupt(phydev);
	return 0;
}
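The rollover check above polls one word of the egress transform record per flow. `MSCC_MS_XFORM_REC` places word `y` of record `x` at `(x << 5) + y`, i.e. 32 words per record, and the handler computes the packet-number word as 6 fixed words plus the key length in words. A standalone sketch of that arithmetic; the layout interpretation is inferred from the handler's code, not from a datasheet:

```c
#include <assert.h>
#include <stdint.h>

/* Each transform record occupies 32 consecutive 32-bit words; word y of
 * record x lives at (x << 5) + y, mirroring MSCC_MS_XFORM_REC above. */
#define XFORM_REC(x, y)	(((x) << 5) + (y))

/* Offset of the packet-number word as the rollover handler computes it:
 * 6 fixed words followed by the key. That the leading words are
 * control/IV material is an assumption for illustration. */
static uint32_t pn_word(uint32_t flow_index, uint32_t key_len)
{
	uint32_t rec = 6 + key_len / sizeof(uint32_t);

	return XFORM_REC(flow_index, rec);
}
```

For a 16-byte (AES-128) key the packet number thus sits at word 10 of the record, and for a 32-byte (AES-256) key at word 14.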

static int vsc85xx_config_init(struct phy_device *phydev)
{
	int rc, i, phy_id;
......@@ -2201,6 +3314,20 @@ static int vsc85xx_config_intr(struct phy_device *phydev)
	int rc;

	if (phydev->interrupts == PHY_INTERRUPT_ENABLED) {
#if IS_ENABLED(CONFIG_MACSEC)
		phy_write(phydev, MSCC_EXT_PAGE_ACCESS,
			  MSCC_PHY_PAGE_EXTENDED_2);
		phy_write(phydev, MSCC_PHY_EXTENDED_INT,
			  MSCC_PHY_EXTENDED_INT_MS_EGR);
		phy_write(phydev, MSCC_EXT_PAGE_ACCESS,
			  MSCC_PHY_PAGE_STANDARD);

		vsc8584_macsec_phy_write(phydev, MACSEC_EGR,
					 MSCC_MS_AIC_CTRL, 0xf);
		vsc8584_macsec_phy_write(phydev, MACSEC_EGR,
			MSCC_MS_INTR_CTRL_STATUS,
			MSCC_MS_INTR_CTRL_STATUS_INTR_ENABLE(MACSEC_INTR_CTRL_STATUS_ROLLOVER));
#endif
		rc = phy_write(phydev, MII_VSC85XX_INT_MASK,
			       MII_VSC85XX_INT_MASK_MASK);
	} else {
......@@ -2550,6 +3677,7 @@ static struct phy_driver vsc85xx_driver[] = {
	.config_aneg	= &vsc85xx_config_aneg,
	.aneg_done	= &genphy_aneg_done,
	.read_status	= &vsc85xx_read_status,
	.handle_interrupt = &vsc8584_handle_interrupt,
	.ack_interrupt	= &vsc85xx_ack_interrupt,
	.config_intr	= &vsc85xx_config_intr,
	.did_interrupt	= &vsc8584_did_interrupt,
......@@ -2602,6 +3730,7 @@ static struct phy_driver vsc85xx_driver[] = {
	.config_aneg	= &vsc85xx_config_aneg,
	.aneg_done	= &genphy_aneg_done,
	.read_status	= &vsc85xx_read_status,
	.handle_interrupt = &vsc8584_handle_interrupt,
	.ack_interrupt	= &vsc85xx_ack_interrupt,
	.config_intr	= &vsc85xx_config_intr,
	.did_interrupt	= &vsc8584_did_interrupt,
......@@ -2626,6 +3755,7 @@ static struct phy_driver vsc85xx_driver[] = {
	.config_aneg	= &vsc85xx_config_aneg,
	.aneg_done	= &genphy_aneg_done,
	.read_status	= &vsc85xx_read_status,
	.handle_interrupt = &vsc8584_handle_interrupt,
	.ack_interrupt	= &vsc85xx_ack_interrupt,
	.config_intr	= &vsc85xx_config_intr,
	.did_interrupt	= &vsc8584_did_interrupt,
......@@ -2650,6 +3780,7 @@ static struct phy_driver vsc85xx_driver[] = {
	.config_aneg	= &vsc85xx_config_aneg,
	.aneg_done	= &genphy_aneg_done,
	.read_status	= &vsc85xx_read_status,
	.handle_interrupt = &vsc8584_handle_interrupt,
	.ack_interrupt	= &vsc85xx_ack_interrupt,
	.config_intr	= &vsc85xx_config_intr,
	.did_interrupt	= &vsc8584_did_interrupt,
......
/* SPDX-License-Identifier: (GPL-2.0 OR MIT) */
/*
* Microsemi Ocelot Switch driver
*
* Copyright (C) 2019 Microsemi Corporation
*/
#ifndef _MSCC_OCELOT_FC_BUFFER_H_
#define _MSCC_OCELOT_FC_BUFFER_H_
#define MSCC_FCBUF_ENA_CFG 0x00
#define MSCC_FCBUF_MODE_CFG 0x01
#define MSCC_FCBUF_PPM_RATE_ADAPT_THRESH_CFG 0x02
#define MSCC_FCBUF_TX_CTRL_QUEUE_CFG 0x03
#define MSCC_FCBUF_TX_DATA_QUEUE_CFG 0x04
#define MSCC_FCBUF_RX_DATA_QUEUE_CFG 0x05
#define MSCC_FCBUF_TX_BUFF_XON_XOFF_THRESH_CFG 0x06
#define MSCC_FCBUF_FC_READ_THRESH_CFG 0x07
#define MSCC_FCBUF_TX_FRM_GAP_COMP 0x08
#define MSCC_FCBUF_ENA_CFG_TX_ENA BIT(0)
#define MSCC_FCBUF_ENA_CFG_RX_ENA BIT(4)
#define MSCC_FCBUF_MODE_CFG_DROP_BEHAVIOUR BIT(4)
#define MSCC_FCBUF_MODE_CFG_PAUSE_REACT_ENA BIT(8)
#define MSCC_FCBUF_MODE_CFG_RX_PPM_RATE_ADAPT_ENA BIT(12)
#define MSCC_FCBUF_MODE_CFG_TX_PPM_RATE_ADAPT_ENA BIT(16)
#define MSCC_FCBUF_MODE_CFG_TX_CTRL_QUEUE_ENA BIT(20)
#define MSCC_FCBUF_MODE_CFG_PAUSE_GEN_ENA BIT(24)
#define MSCC_FCBUF_MODE_CFG_INCLUDE_PAUSE_RCVD_IN_PAUSE_GEN BIT(28)
#define MSCC_FCBUF_PPM_RATE_ADAPT_THRESH_CFG_TX_THRESH(x) (x)
#define MSCC_FCBUF_PPM_RATE_ADAPT_THRESH_CFG_TX_THRESH_M GENMASK(15, 0)
#define MSCC_FCBUF_PPM_RATE_ADAPT_THRESH_CFG_TX_OFFSET(x) ((x) << 16)
#define MSCC_FCBUF_PPM_RATE_ADAPT_THRESH_CFG_TX_OFFSET_M GENMASK(19, 16)
#define MSCC_FCBUF_PPM_RATE_ADAPT_THRESH_CFG_RX_THRESH(x) ((x) << 20)
#define MSCC_FCBUF_PPM_RATE_ADAPT_THRESH_CFG_RX_THRESH_M GENMASK(31, 20)
#define MSCC_FCBUF_TX_CTRL_QUEUE_CFG_START(x) (x)
#define MSCC_FCBUF_TX_CTRL_QUEUE_CFG_START_M GENMASK(15, 0)
#define MSCC_FCBUF_TX_CTRL_QUEUE_CFG_END(x) ((x) << 16)
#define MSCC_FCBUF_TX_CTRL_QUEUE_CFG_END_M GENMASK(31, 16)
#define MSCC_FCBUF_TX_DATA_QUEUE_CFG_START(x) (x)
#define MSCC_FCBUF_TX_DATA_QUEUE_CFG_START_M GENMASK(15, 0)
#define MSCC_FCBUF_TX_DATA_QUEUE_CFG_END(x) ((x) << 16)
#define MSCC_FCBUF_TX_DATA_QUEUE_CFG_END_M GENMASK(31, 16)
#define MSCC_FCBUF_RX_DATA_QUEUE_CFG_START(x) (x)
#define MSCC_FCBUF_RX_DATA_QUEUE_CFG_START_M GENMASK(15, 0)
#define MSCC_FCBUF_RX_DATA_QUEUE_CFG_END(x) ((x) << 16)
#define MSCC_FCBUF_RX_DATA_QUEUE_CFG_END_M GENMASK(31, 16)
#define MSCC_FCBUF_TX_BUFF_XON_XOFF_THRESH_CFG_XOFF_THRESH(x) (x)
#define MSCC_FCBUF_TX_BUFF_XON_XOFF_THRESH_CFG_XOFF_THRESH_M GENMASK(15, 0)
#define MSCC_FCBUF_TX_BUFF_XON_XOFF_THRESH_CFG_XON_THRESH(x) ((x) << 16)
#define MSCC_FCBUF_TX_BUFF_XON_XOFF_THRESH_CFG_XON_THRESH_M GENMASK(31, 16)
#define MSCC_FCBUF_FC_READ_THRESH_CFG_TX_THRESH(x) (x)
#define MSCC_FCBUF_FC_READ_THRESH_CFG_TX_THRESH_M GENMASK(15, 0)
#define MSCC_FCBUF_FC_READ_THRESH_CFG_RX_THRESH(x) ((x) << 16)
#define MSCC_FCBUF_FC_READ_THRESH_CFG_RX_THRESH_M GENMASK(31, 16)
#endif /* _MSCC_OCELOT_FC_BUFFER_H_ */
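The paired `*_THRESH(x)` shift macros and `*_THRESH_M` masks above follow the usual field-encode pattern; for instance the TX buffer XON and XOFF thresholds share one register, XOFF in bits 15:0 and XON in bits 31:16. A hedged sketch of packing both fields, with arbitrary example threshold values:

```c
#include <assert.h>
#include <stdint.h>

#define GENMASK(h, l)	(((~0U) << (l)) & (~0U >> (31 - (h))))

/* Mirrors MSCC_FCBUF_TX_BUFF_XON_XOFF_THRESH_CFG_* above: XOFF in
 * bits 15:0, XON in bits 31:16. */
#define XOFF_THRESH(x)	(x)
#define XOFF_THRESH_M	GENMASK(15, 0)
#define XON_THRESH(x)	((uint32_t)(x) << 16)
#define XON_THRESH_M	GENMASK(31, 16)

/* Encode both thresholds into one register value; masking keeps each
 * field from spilling into the other. */
static uint32_t fcbuf_xon_xoff(uint16_t xoff, uint16_t xon)
{
	return (XOFF_THRESH(xoff) & XOFF_THRESH_M) |
	       (XON_THRESH(xon) & XON_THRESH_M);
}
```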
/* SPDX-License-Identifier: (GPL-2.0 OR MIT) */
/*
* Microsemi Ocelot Switch driver
*
* Copyright (c) 2017 Microsemi Corporation
*/
#ifndef _MSCC_OCELOT_LINE_MAC_H_
#define _MSCC_OCELOT_LINE_MAC_H_
#define MSCC_MAC_CFG_ENA_CFG 0x00
#define MSCC_MAC_CFG_MODE_CFG 0x01
#define MSCC_MAC_CFG_MAXLEN_CFG 0x02
#define MSCC_MAC_CFG_NUM_TAGS_CFG 0x03
#define MSCC_MAC_CFG_TAGS_CFG 0x04
#define MSCC_MAC_CFG_ADV_CHK_CFG 0x07
#define MSCC_MAC_CFG_LFS_CFG 0x08
#define MSCC_MAC_CFG_LB_CFG 0x09
#define MSCC_MAC_CFG_PKTINF_CFG 0x0a
#define MSCC_MAC_PAUSE_CFG_TX_FRAME_CTRL 0x0b
#define MSCC_MAC_PAUSE_CFG_TX_FRAME_CTRL_2 0x0c
#define MSCC_MAC_PAUSE_CFG_RX_FRAME_CTRL 0x0d
#define MSCC_MAC_PAUSE_CFG_STATE 0x0e
#define MSCC_MAC_PAUSE_CFG_MAC_ADDRESS_LSB 0x0f
#define MSCC_MAC_PAUSE_CFG_MAC_ADDRESS_MSB 0x10
#define MSCC_MAC_STATUS_RX_LANE_STICKY_0 0x11
#define MSCC_MAC_STATUS_RX_LANE_STICKY_1 0x12
#define MSCC_MAC_STATUS_TX_MONITOR_STICKY 0x13
#define MSCC_MAC_STATUS_TX_MONITOR_STICKY_MASK 0x14
#define MSCC_MAC_STATUS_STICKY 0x15
#define MSCC_MAC_STATUS_STICKY_MASK 0x16
#define MSCC_MAC_STATS_32BIT_RX_HIH_CKSM_ERR_CNT 0x17
#define MSCC_MAC_STATS_32BIT_RX_XGMII_PROT_ERR_CNT 0x18
#define MSCC_MAC_STATS_32BIT_RX_SYMBOL_ERR_CNT 0x19
#define MSCC_MAC_STATS_32BIT_RX_PAUSE_CNT 0x1a
#define MSCC_MAC_STATS_32BIT_RX_UNSUP_OPCODE_CNT 0x1b
#define MSCC_MAC_STATS_32BIT_RX_UC_CNT 0x1c
#define MSCC_MAC_STATS_32BIT_RX_MC_CNT 0x1d
#define MSCC_MAC_STATS_32BIT_RX_BC_CNT 0x1e
#define MSCC_MAC_STATS_32BIT_RX_CRC_ERR_CNT 0x1f
#define MSCC_MAC_STATS_32BIT_RX_UNDERSIZE_CNT 0x20
#define MSCC_MAC_STATS_32BIT_RX_FRAGMENTS_CNT 0x21
#define MSCC_MAC_STATS_32BIT_RX_IN_RANGE_LEN_ERR_CNT 0x22
#define MSCC_MAC_STATS_32BIT_RX_OUT_OF_RANGE_LEN_ERR_CNT 0x23
#define MSCC_MAC_STATS_32BIT_RX_OVERSIZE_CNT 0x24
#define MSCC_MAC_STATS_32BIT_RX_JABBERS_CNT 0x25
#define MSCC_MAC_STATS_32BIT_RX_SIZE64_CNT 0x26
#define MSCC_MAC_STATS_32BIT_RX_SIZE65TO127_CNT 0x27
#define MSCC_MAC_STATS_32BIT_RX_SIZE128TO255_CNT 0x28
#define MSCC_MAC_STATS_32BIT_RX_SIZE256TO511_CNT 0x29
#define MSCC_MAC_STATS_32BIT_RX_SIZE512TO1023_CNT 0x2a
#define MSCC_MAC_STATS_32BIT_RX_SIZE1024TO1518_CNT 0x2b
#define MSCC_MAC_STATS_32BIT_RX_SIZE1519TOMAX_CNT 0x2c
#define MSCC_MAC_STATS_32BIT_RX_IPG_SHRINK_CNT 0x2d
#define MSCC_MAC_STATS_32BIT_TX_PAUSE_CNT 0x2e
#define MSCC_MAC_STATS_32BIT_TX_UC_CNT 0x2f
#define MSCC_MAC_STATS_32BIT_TX_MC_CNT 0x30
#define MSCC_MAC_STATS_32BIT_TX_BC_CNT 0x31
#define MSCC_MAC_STATS_32BIT_TX_SIZE64_CNT 0x32
#define MSCC_MAC_STATS_32BIT_TX_SIZE65TO127_CNT 0x33
#define MSCC_MAC_STATS_32BIT_TX_SIZE128TO255_CNT 0x34
#define MSCC_MAC_STATS_32BIT_TX_SIZE256TO511_CNT 0x35
#define MSCC_MAC_STATS_32BIT_TX_SIZE512TO1023_CNT 0x36
#define MSCC_MAC_STATS_32BIT_TX_SIZE1024TO1518_CNT 0x37
#define MSCC_MAC_STATS_32BIT_TX_SIZE1519TOMAX_CNT 0x38
#define MSCC_MAC_STATS_40BIT_RX_BAD_BYTES_CNT 0x39
#define MSCC_MAC_STATS_40BIT_RX_BAD_BYTES_MSB_CNT 0x3a
#define MSCC_MAC_STATS_40BIT_RX_OK_BYTES_CNT 0x3b
#define MSCC_MAC_STATS_40BIT_RX_OK_BYTES_MSB_CNT 0x3c
#define MSCC_MAC_STATS_40BIT_RX_IN_BYTES_CNT 0x3d
#define MSCC_MAC_STATS_40BIT_RX_IN_BYTES_MSB_CNT 0x3e
#define MSCC_MAC_STATS_40BIT_TX_OK_BYTES_CNT 0x3f
#define MSCC_MAC_STATS_40BIT_TX_OK_BYTES_MSB_CNT 0x40
#define MSCC_MAC_STATS_40BIT_TX_OUT_BYTES_CNT 0x41
#define MSCC_MAC_STATS_40BIT_TX_OUT_BYTES_MSB_CNT 0x42
#define MSCC_MAC_CFG_ENA_CFG_RX_CLK_ENA BIT(0)
#define MSCC_MAC_CFG_ENA_CFG_TX_CLK_ENA BIT(4)
#define MSCC_MAC_CFG_ENA_CFG_RX_SW_RST BIT(8)
#define MSCC_MAC_CFG_ENA_CFG_TX_SW_RST BIT(12)
#define MSCC_MAC_CFG_ENA_CFG_RX_ENA BIT(16)
#define MSCC_MAC_CFG_ENA_CFG_TX_ENA BIT(20)
#define MSCC_MAC_CFG_MODE_CFG_FORCE_CW_UPDATE_INTERVAL(x) ((x) << 20)
#define MSCC_MAC_CFG_MODE_CFG_FORCE_CW_UPDATE_INTERVAL_M GENMASK(29, 20)
#define MSCC_MAC_CFG_MODE_CFG_FORCE_CW_UPDATE BIT(16)
#define MSCC_MAC_CFG_MODE_CFG_TUNNEL_PAUSE_FRAMES BIT(14)
#define MSCC_MAC_CFG_MODE_CFG_MAC_PREAMBLE_CFG(x) ((x) << 10)
#define MSCC_MAC_CFG_MODE_CFG_MAC_PREAMBLE_CFG_M GENMASK(12, 10)
#define MSCC_MAC_CFG_MODE_CFG_MAC_IPG_CFG BIT(6)
#define MSCC_MAC_CFG_MODE_CFG_XGMII_GEN_MODE_ENA BIT(4)
#define MSCC_MAC_CFG_MODE_CFG_HIH_CRC_CHECK BIT(2)
#define MSCC_MAC_CFG_MODE_CFG_UNDERSIZED_FRAME_DROP_DIS BIT(1)
#define MSCC_MAC_CFG_MODE_CFG_DISABLE_DIC BIT(0)
#define MSCC_MAC_CFG_MAXLEN_CFG_MAX_LEN_TAG_CHK BIT(16)
#define MSCC_MAC_CFG_MAXLEN_CFG_MAX_LEN(x) (x)
#define MSCC_MAC_CFG_MAXLEN_CFG_MAX_LEN_M GENMASK(15, 0)
#define MSCC_MAC_CFG_TAGS_CFG_RSZ 0x4
#define MSCC_MAC_CFG_TAGS_CFG_TAG_ID(x) ((x) << 16)
#define MSCC_MAC_CFG_TAGS_CFG_TAG_ID_M GENMASK(31, 16)
#define MSCC_MAC_CFG_TAGS_CFG_TAG_ENA BIT(4)
#define MSCC_MAC_CFG_ADV_CHK_CFG_EXT_EOP_CHK_ENA BIT(24)
#define MSCC_MAC_CFG_ADV_CHK_CFG_EXT_SOP_CHK_ENA BIT(20)
#define MSCC_MAC_CFG_ADV_CHK_CFG_SFD_CHK_ENA BIT(16)
#define MSCC_MAC_CFG_ADV_CHK_CFG_PRM_SHK_CHK_DIS BIT(12)
#define MSCC_MAC_CFG_ADV_CHK_CFG_PRM_CHK_ENA BIT(8)
#define MSCC_MAC_CFG_ADV_CHK_CFG_OOR_ERR_ENA BIT(4)
#define MSCC_MAC_CFG_ADV_CHK_CFG_INR_ERR_ENA BIT(0)
#define MSCC_MAC_CFG_LFS_CFG_LFS_INH_TX BIT(8)
#define MSCC_MAC_CFG_LFS_CFG_LFS_DIS_TX BIT(4)
#define MSCC_MAC_CFG_LFS_CFG_LFS_UNIDIR_ENA BIT(3)
#define MSCC_MAC_CFG_LFS_CFG_USE_LEADING_EDGE_DETECT BIT(2)
#define MSCC_MAC_CFG_LFS_CFG_SPURIOUS_Q_DIS BIT(1)
#define MSCC_MAC_CFG_LFS_CFG_LFS_MODE_ENA BIT(0)
#define MSCC_MAC_CFG_LB_CFG_XGMII_HOST_LB_ENA BIT(4)
#define MSCC_MAC_CFG_LB_CFG_XGMII_PHY_LB_ENA BIT(0)
#define MSCC_MAC_CFG_PKTINF_CFG_STRIP_FCS_ENA BIT(0)
#define MSCC_MAC_CFG_PKTINF_CFG_INSERT_FCS_ENA BIT(4)
#define MSCC_MAC_CFG_PKTINF_CFG_STRIP_PREAMBLE_ENA BIT(8)
#define MSCC_MAC_CFG_PKTINF_CFG_INSERT_PREAMBLE_ENA BIT(12)
#define MSCC_MAC_CFG_PKTINF_CFG_LPI_RELAY_ENA BIT(16)
#define MSCC_MAC_CFG_PKTINF_CFG_LF_RELAY_ENA BIT(20)
#define MSCC_MAC_CFG_PKTINF_CFG_RF_RELAY_ENA BIT(24)
#define MSCC_MAC_CFG_PKTINF_CFG_ENABLE_TX_PADDING BIT(25)
#define MSCC_MAC_CFG_PKTINF_CFG_ENABLE_RX_PADDING BIT(26)
#define MSCC_MAC_CFG_PKTINF_CFG_ENABLE_4BYTE_PREAMBLE BIT(27)
#define MSCC_MAC_CFG_PKTINF_CFG_MACSEC_BYPASS_NUM_PTP_STALL_CLKS(x) ((x) << 28)
#define MSCC_MAC_CFG_PKTINF_CFG_MACSEC_BYPASS_NUM_PTP_STALL_CLKS_M GENMASK(30, 28)
#define MSCC_MAC_PAUSE_CFG_TX_FRAME_CTRL_PAUSE_VALUE(x) ((x) << 16)
#define MSCC_MAC_PAUSE_CFG_TX_FRAME_CTRL_PAUSE_VALUE_M GENMASK(31, 16)
#define MSCC_MAC_PAUSE_CFG_TX_FRAME_CTRL_WAIT_FOR_LPI_LOW BIT(12)
#define MSCC_MAC_PAUSE_CFG_TX_FRAME_CTRL_USE_PAUSE_STALL_ENA BIT(8)
#define MSCC_MAC_PAUSE_CFG_TX_FRAME_CTRL_PAUSE_REPL_MODE BIT(4)
#define MSCC_MAC_PAUSE_CFG_TX_FRAME_CTRL_PAUSE_FRC_FRAME BIT(2)
#define MSCC_MAC_PAUSE_CFG_TX_FRAME_CTRL_PAUSE_MODE(x) (x)
#define MSCC_MAC_PAUSE_CFG_TX_FRAME_CTRL_PAUSE_MODE_M GENMASK(1, 0)
#define MSCC_MAC_PAUSE_CFG_RX_FRAME_CTRL_EARLY_PAUSE_DETECT_ENA BIT(16)
#define MSCC_MAC_PAUSE_CFG_RX_FRAME_CTRL_PRE_CRC_MODE BIT(20)
#define MSCC_MAC_PAUSE_CFG_RX_FRAME_CTRL_PAUSE_TIMER_ENA BIT(12)
#define MSCC_MAC_PAUSE_CFG_RX_FRAME_CTRL_PAUSE_REACT_ENA BIT(8)
#define MSCC_MAC_PAUSE_CFG_RX_FRAME_CTRL_PAUSE_FRAME_DROP_ENA BIT(4)
#define MSCC_MAC_PAUSE_CFG_RX_FRAME_CTRL_PAUSE_MODE BIT(0)
#define MSCC_MAC_PAUSE_CFG_STATE_PAUSE_STATE BIT(0)
#define MSCC_MAC_PAUSE_CFG_STATE_MAC_TX_PAUSE_GEN BIT(4)
#define MSCC_PROC_0_IP_1588_TOP_CFG_STAT_MODE_CTL 0x2
#define MSCC_PROC_0_IP_1588_TOP_CFG_STAT_MODE_CTL_PROTOCOL_MODE(x) (x)
#define MSCC_PROC_0_IP_1588_TOP_CFG_STAT_MODE_CTL_PROTOCOL_MODE_M GENMASK(2, 0)
#endif /* _MSCC_OCELOT_LINE_MAC_H_ */
/* SPDX-License-Identifier: (GPL-2.0 OR MIT) */
/*
* Microsemi Ocelot Switch driver
*
* Copyright (c) 2018 Microsemi Corporation
*/
#ifndef _MSCC_OCELOT_MACSEC_H_
#define _MSCC_OCELOT_MACSEC_H_
#define MSCC_MS_MAX_FLOWS 16
#define CONTROL_TYPE_EGRESS 0x6
#define CONTROL_TYPE_INGRESS 0xf
#define CONTROL_IV0 BIT(5)
#define CONTROL_IV1 BIT(6)
#define CONTROL_IV2 BIT(7)
#define CONTROL_UPDATE_SEQ BIT(13)
#define CONTROL_IV_IN_SEQ BIT(14)
#define CONTROL_ENCRYPT_AUTH BIT(15)
#define CONTROL_KEY_IN_CTX BIT(16)
#define CONTROL_CRYPTO_ALG(x) ((x) << 17)
#define CTRYPTO_ALG_AES_CTR_128 0x5
#define CTRYPTO_ALG_AES_CTR_192 0x6
#define CTRYPTO_ALG_AES_CTR_256 0x7
#define CONTROL_DIGEST_TYPE(x) ((x) << 21)
#define CONTROL_AUTH_ALG(x) ((x) << 23)
#define AUTH_ALG_AES_GHAS 0x4
#define CONTROL_AN(x) ((x) << 26)
#define CONTROL_SEQ_TYPE(x) ((x) << 28)
#define CONTROL_SEQ_MASK BIT(30)
#define CONTROL_CONTEXT_ID BIT(31)
enum mscc_macsec_destination_ports {
	MSCC_MS_PORT_COMMON = 0,
	MSCC_MS_PORT_RSVD = 1,
	MSCC_MS_PORT_CONTROLLED = 2,
	MSCC_MS_PORT_UNCONTROLLED = 3,
};

enum mscc_macsec_drop_actions {
	MSCC_MS_ACTION_BYPASS_CRC = 0,
	MSCC_MS_ACTION_BYPASS_BAD = 1,
	MSCC_MS_ACTION_DROP = 2,
	MSCC_MS_ACTION_BYPASS = 3,
};

enum mscc_macsec_flow_types {
	MSCC_MS_FLOW_BYPASS = 0,
	MSCC_MS_FLOW_DROP = 1,
	MSCC_MS_FLOW_INGRESS = 2,
	MSCC_MS_FLOW_EGRESS = 3,
};

enum mscc_macsec_validate_levels {
	MSCC_MS_VALIDATE_DISABLED = 0,
	MSCC_MS_VALIDATE_CHECK = 1,
	MSCC_MS_VALIDATE_STRICT = 2,
};
#define MSCC_MS_XFORM_REC(x, y) (((x) << 5) + (y))
#define MSCC_MS_ENA_CFG 0x800
#define MSCC_MS_FC_CFG 0x804
#define MSCC_MS_SAM_MAC_SA_MATCH_LO(x) (0x1000 + ((x) << 4))
#define MSCC_MS_SAM_MAC_SA_MATCH_HI(x) (0x1001 + ((x) << 4))
#define MSCC_MS_SAM_MISC_MATCH(x) (0x1004 + ((x) << 4))
#define MSCC_MS_SAM_MATCH_SCI_LO(x) (0x1005 + ((x) << 4))
#define MSCC_MS_SAM_MATCH_SCI_HI(x) (0x1006 + ((x) << 4))
#define MSCC_MS_SAM_MASK(x) (0x1007 + ((x) << 4))
#define MSCC_MS_SAM_ENTRY_SET1 0x1808
#define MSCC_MS_SAM_ENTRY_CLEAR1 0x180c
#define MSCC_MS_SAM_FLOW_CTRL(x) (0x1c00 + (x))
#define MSCC_MS_SAM_CP_TAG 0x1e40
#define MSCC_MS_SAM_NM_FLOW_NCP 0x1e51
#define MSCC_MS_SAM_NM_FLOW_CP 0x1e52
#define MSCC_MS_MISC_CONTROL 0x1e5f
#define MSCC_MS_COUNT_CONTROL 0x3204
#define MSCC_MS_PARAMS2_IG_CC_CONTROL 0x3a10
#define MSCC_MS_PARAMS2_IG_CP_TAG 0x3a14
#define MSCC_MS_VLAN_MTU_CHECK(x) (0x3c40 + (x))
#define MSCC_MS_NON_VLAN_MTU_CHECK 0x3c48
#define MSCC_MS_PP_CTRL 0x3c4b
#define MSCC_MS_STATUS_CONTEXT_CTRL 0x3d02
#define MSCC_MS_INTR_CTRL_STATUS 0x3d04
#define MSCC_MS_BLOCK_CTX_UPDATE 0x3d0c
#define MSCC_MS_AIC_CTRL 0x3e02
/* MACSEC_ENA_CFG */
#define MSCC_MS_ENA_CFG_CLK_ENA BIT(0)
#define MSCC_MS_ENA_CFG_SW_RST BIT(1)
#define MSCC_MS_ENA_CFG_MACSEC_BYPASS_ENA BIT(8)
#define MSCC_MS_ENA_CFG_MACSEC_ENA BIT(9)
#define MSCC_MS_ENA_CFG_MACSEC_SPEED_MODE(x) ((x) << 10)
#define MSCC_MS_ENA_CFG_MACSEC_SPEED_MODE_M GENMASK(12, 10)
/* MACSEC_FC_CFG */
#define MSCC_MS_FC_CFG_FCBUF_ENA BIT(0)
#define MSCC_MS_FC_CFG_USE_PKT_EXPANSION_INDICATION BIT(1)
#define MSCC_MS_FC_CFG_LOW_THRESH(x) ((x) << 4)
#define MSCC_MS_FC_CFG_LOW_THRESH_M GENMASK(7, 4)
#define MSCC_MS_FC_CFG_HIGH_THRESH(x) ((x) << 8)
#define MSCC_MS_FC_CFG_HIGH_THRESH_M GENMASK(11, 8)
#define MSCC_MS_FC_CFG_LOW_BYTES_VAL(x) ((x) << 12)
#define MSCC_MS_FC_CFG_LOW_BYTES_VAL_M GENMASK(14, 12)
#define MSCC_MS_FC_CFG_HIGH_BYTES_VAL(x) ((x) << 16)
#define MSCC_MS_FC_CFG_HIGH_BYTES_VAL_M GENMASK(18, 16)
/* MSCC_MS_SAM_MAC_SA_MATCH_HI */
#define MSCC_MS_SAM_MAC_SA_MATCH_HI_ETYPE(x) ((x) << 16)
#define MSCC_MS_SAM_MAC_SA_MATCH_HI_ETYPE_M GENMASK(31, 16)
/* MACSEC_SAM_MISC_MATCH */
#define MSCC_MS_SAM_MISC_MATCH_VLAN_VALID BIT(0)
#define MSCC_MS_SAM_MISC_MATCH_QINQ_FOUND BIT(1)
#define MSCC_MS_SAM_MISC_MATCH_STAG_VALID BIT(2)
#define MSCC_MS_SAM_MISC_MATCH_QTAG_VALID BIT(3)
#define MSCC_MS_SAM_MISC_MATCH_VLAN_UP(x) ((x) << 4)
#define MSCC_MS_SAM_MISC_MATCH_VLAN_UP_M GENMASK(6, 4)
#define MSCC_MS_SAM_MISC_MATCH_CONTROL_PACKET BIT(7)
#define MSCC_MS_SAM_MISC_MATCH_UNTAGGED BIT(8)
#define MSCC_MS_SAM_MISC_MATCH_TAGGED BIT(9)
#define MSCC_MS_SAM_MISC_MATCH_BAD_TAG BIT(10)
#define MSCC_MS_SAM_MISC_MATCH_KAY_TAG BIT(11)
#define MSCC_MS_SAM_MISC_MATCH_SOURCE_PORT(x) ((x) << 12)
#define MSCC_MS_SAM_MISC_MATCH_SOURCE_PORT_M GENMASK(13, 12)
#define MSCC_MS_SAM_MISC_MATCH_PRIORITY(x) ((x) << 16)
#define MSCC_MS_SAM_MISC_MATCH_PRIORITY_M GENMASK(19, 16)
#define MSCC_MS_SAM_MISC_MATCH_AN(x) ((x) << 24)
#define MSCC_MS_SAM_MISC_MATCH_TCI(x) ((x) << 26)
/* MACSEC_SAM_MASK */
#define MSCC_MS_SAM_MASK_MAC_SA_MASK(x) (x)
#define MSCC_MS_SAM_MASK_MAC_SA_MASK_M GENMASK(5, 0)
#define MSCC_MS_SAM_MASK_MAC_DA_MASK(x) ((x) << 6)
#define MSCC_MS_SAM_MASK_MAC_DA_MASK_M GENMASK(11, 6)
#define MSCC_MS_SAM_MASK_MAC_ETYPE_MASK BIT(12)
#define MSCC_MS_SAM_MASK_VLAN_VLD_MASK BIT(13)
#define MSCC_MS_SAM_MASK_QINQ_FOUND_MASK BIT(14)
#define MSCC_MS_SAM_MASK_STAG_VLD_MASK BIT(15)
#define MSCC_MS_SAM_MASK_QTAG_VLD_MASK BIT(16)
#define MSCC_MS_SAM_MASK_VLAN_UP_MASK BIT(17)
#define MSCC_MS_SAM_MASK_VLAN_ID_MASK BIT(18)
#define MSCC_MS_SAM_MASK_SOURCE_PORT_MASK BIT(19)
#define MSCC_MS_SAM_MASK_CTL_PACKET_MASK BIT(20)
#define MSCC_MS_SAM_MASK_VLAN_UP_INNER_MASK BIT(21)
#define MSCC_MS_SAM_MASK_VLAN_ID_INNER_MASK BIT(22)
#define MSCC_MS_SAM_MASK_SCI_MASK BIT(23)
#define MSCC_MS_SAM_MASK_AN_MASK(x) ((x) << 24)
#define MSCC_MS_SAM_MASK_TCI_MASK(x) ((x) << 26)
/* MACSEC_SAM_FLOW_CTRL_EGR */
#define MSCC_MS_SAM_FLOW_CTRL_FLOW_TYPE(x) (x)
#define MSCC_MS_SAM_FLOW_CTRL_FLOW_TYPE_M GENMASK(1, 0)
#define MSCC_MS_SAM_FLOW_CTRL_DEST_PORT(x) ((x) << 2)
#define MSCC_MS_SAM_FLOW_CTRL_DEST_PORT_M GENMASK(3, 2)
#define MSCC_MS_SAM_FLOW_CTRL_RESV_4 BIT(4)
#define MSCC_MS_SAM_FLOW_CTRL_FLOW_CRYPT_AUTH BIT(5)
#define MSCC_MS_SAM_FLOW_CTRL_DROP_ACTION(x) ((x) << 6)
#define MSCC_MS_SAM_FLOW_CTRL_DROP_ACTION_M GENMASK(7, 6)
#define MSCC_MS_SAM_FLOW_CTRL_RESV_15_TO_8(x) ((x) << 8)
#define MSCC_MS_SAM_FLOW_CTRL_RESV_15_TO_8_M GENMASK(15, 8)
#define MSCC_MS_SAM_FLOW_CTRL_PROTECT_FRAME BIT(16)
#define MSCC_MS_SAM_FLOW_CTRL_REPLAY_PROTECT BIT(16)
#define MSCC_MS_SAM_FLOW_CTRL_SA_IN_USE BIT(17)
#define MSCC_MS_SAM_FLOW_CTRL_INCLUDE_SCI BIT(18)
#define MSCC_MS_SAM_FLOW_CTRL_USE_ES BIT(19)
#define MSCC_MS_SAM_FLOW_CTRL_USE_SCB BIT(20)
#define MSCC_MS_SAM_FLOW_CTRL_VALIDATE_FRAMES(x) ((x) << 19)
#define MSCC_MS_SAM_FLOW_CTRL_TAG_BYPASS_SIZE(x) ((x) << 21)
#define MSCC_MS_SAM_FLOW_CTRL_TAG_BYPASS_SIZE_M GENMASK(22, 21)
#define MSCC_MS_SAM_FLOW_CTRL_RESV_23 BIT(23)
#define MSCC_MS_SAM_FLOW_CTRL_CONFIDENTIALITY_OFFSET(x) ((x) << 24)
#define MSCC_MS_SAM_FLOW_CTRL_CONFIDENTIALITY_OFFSET_M GENMASK(30, 24)
#define MSCC_MS_SAM_FLOW_CTRL_CONF_PROTECT BIT(31)
/* MACSEC_SAM_CP_TAG */
#define MSCC_MS_SAM_CP_TAG_MAP_TBL(x) (x)
#define MSCC_MS_SAM_CP_TAG_MAP_TBL_M GENMASK(23, 0)
#define MSCC_MS_SAM_CP_TAG_DEF_UP(x) ((x) << 24)
#define MSCC_MS_SAM_CP_TAG_DEF_UP_M GENMASK(26, 24)
#define MSCC_MS_SAM_CP_TAG_STAG_UP_EN BIT(27)
#define MSCC_MS_SAM_CP_TAG_QTAG_UP_EN BIT(28)
#define MSCC_MS_SAM_CP_TAG_PARSE_QINQ BIT(29)
#define MSCC_MS_SAM_CP_TAG_PARSE_STAG BIT(30)
#define MSCC_MS_SAM_CP_TAG_PARSE_QTAG BIT(31)
/* MACSEC_SAM_NM_FLOW_NCP */
#define MSCC_MS_SAM_NM_FLOW_NCP_UNTAGGED_FLOW_TYPE(x) (x)
#define MSCC_MS_SAM_NM_FLOW_NCP_UNTAGGED_DEST_PORT(x) ((x) << 2)
#define MSCC_MS_SAM_NM_FLOW_NCP_UNTAGGED_DROP_ACTION(x) ((x) << 6)
#define MSCC_MS_SAM_NM_FLOW_NCP_TAGGED_FLOW_TYPE(x) ((x) << 8)
#define MSCC_MS_SAM_NM_FLOW_NCP_TAGGED_DEST_PORT(x) ((x) << 10)
#define MSCC_MS_SAM_NM_FLOW_NCP_TAGGED_DROP_ACTION(x) ((x) << 14)
#define MSCC_MS_SAM_NM_FLOW_NCP_BADTAG_FLOW_TYPE(x) ((x) << 16)
#define MSCC_MS_SAM_NM_FLOW_NCP_BADTAG_DEST_PORT(x) ((x) << 18)
#define MSCC_MS_SAM_NM_FLOW_NCP_BADTAG_DROP_ACTION(x) ((x) << 22)
#define MSCC_MS_SAM_NM_FLOW_NCP_KAY_FLOW_TYPE(x) ((x) << 24)
#define MSCC_MS_SAM_NM_FLOW_NCP_KAY_DEST_PORT(x) ((x) << 26)
#define MSCC_MS_SAM_NM_FLOW_NCP_KAY_DROP_ACTION(x) ((x) << 30)
/* MACSEC_SAM_NM_FLOW_CP */
#define MSCC_MS_SAM_NM_FLOW_CP_UNTAGGED_FLOW_TYPE(x) (x)
#define MSCC_MS_SAM_NM_FLOW_CP_UNTAGGED_DEST_PORT(x) ((x) << 2)
#define MSCC_MS_SAM_NM_FLOW_CP_UNTAGGED_DROP_ACTION(x) ((x) << 6)
#define MSCC_MS_SAM_NM_FLOW_CP_TAGGED_FLOW_TYPE(x) ((x) << 8)
#define MSCC_MS_SAM_NM_FLOW_CP_TAGGED_DEST_PORT(x) ((x) << 10)
#define MSCC_MS_SAM_NM_FLOW_CP_TAGGED_DROP_ACTION(x) ((x) << 14)
#define MSCC_MS_SAM_NM_FLOW_CP_BADTAG_FLOW_TYPE(x) ((x) << 16)
#define MSCC_MS_SAM_NM_FLOW_CP_BADTAG_DEST_PORT(x) ((x) << 18)
#define MSCC_MS_SAM_NM_FLOW_CP_BADTAG_DROP_ACTION(x) ((x) << 22)
#define MSCC_MS_SAM_NM_FLOW_CP_KAY_FLOW_TYPE(x) ((x) << 24)
#define MSCC_MS_SAM_NM_FLOW_CP_KAY_DEST_PORT(x) ((x) << 26)
#define MSCC_MS_SAM_NM_FLOW_CP_KAY_DROP_ACTION(x) ((x) << 30)
/* MACSEC_MISC_CONTROL */
#define MSCC_MS_MISC_CONTROL_MC_LATENCY_FIX(x) (x)
#define MSCC_MS_MISC_CONTROL_MC_LATENCY_FIX_M GENMASK(5, 0)
#define MSCC_MS_MISC_CONTROL_STATIC_BYPASS BIT(8)
#define MSCC_MS_MISC_CONTROL_NM_MACSEC_EN BIT(9)
#define MSCC_MS_MISC_CONTROL_VALIDATE_FRAMES(x) ((x) << 10)
#define MSCC_MS_MISC_CONTROL_VALIDATE_FRAMES_M GENMASK(11, 10)
#define MSCC_MS_MISC_CONTROL_XFORM_REC_SIZE(x) ((x) << 24)
#define MSCC_MS_MISC_CONTROL_XFORM_REC_SIZE_M GENMASK(25, 24)
/* MACSEC_COUNT_CONTROL */
#define MSCC_MS_COUNT_CONTROL_RESET_ALL BIT(0)
#define MSCC_MS_COUNT_CONTROL_DEBUG_ACCESS BIT(1)
#define MSCC_MS_COUNT_CONTROL_SATURATE_CNTRS BIT(2)
#define MSCC_MS_COUNT_CONTROL_AUTO_CNTR_RESET BIT(3)
/* MACSEC_PARAMS2_IG_CC_CONTROL */
#define MSCC_MS_PARAMS2_IG_CC_CONTROL_NON_MATCH_CTRL_ACT BIT(14)
#define MSCC_MS_PARAMS2_IG_CC_CONTROL_NON_MATCH_ACT BIT(15)
/* MACSEC_PARAMS2_IG_CP_TAG */
#define MSCC_MS_PARAMS2_IG_CP_TAG_MAP_TBL(x) (x)
#define MSCC_MS_PARAMS2_IG_CP_TAG_MAP_TBL_M GENMASK(23, 0)
#define MSCC_MS_PARAMS2_IG_CP_TAG_DEF_UP(x) ((x) << 24)
#define MSCC_MS_PARAMS2_IG_CP_TAG_DEF_UP_M GENMASK(26, 24)
#define MSCC_MS_PARAMS2_IG_CP_TAG_STAG_UP_EN BIT(27)
#define MSCC_MS_PARAMS2_IG_CP_TAG_QTAG_UP_EN BIT(28)
#define MSCC_MS_PARAMS2_IG_CP_TAG_PARSE_QINQ BIT(29)
#define MSCC_MS_PARAMS2_IG_CP_TAG_PARSE_STAG BIT(30)
#define MSCC_MS_PARAMS2_IG_CP_TAG_PARSE_QTAG BIT(31)
/* MACSEC_VLAN_MTU_CHECK */
#define MSCC_MS_VLAN_MTU_CHECK_MTU_COMPARE(x) (x)
#define MSCC_MS_VLAN_MTU_CHECK_MTU_COMPARE_M GENMASK(14, 0)
#define MSCC_MS_VLAN_MTU_CHECK_MTU_COMP_DROP BIT(15)
/* MACSEC_NON_VLAN_MTU_CHECK */
#define MSCC_MS_NON_VLAN_MTU_CHECK_NV_MTU_COMPARE(x) (x)
#define MSCC_MS_NON_VLAN_MTU_CHECK_NV_MTU_COMPARE_M GENMASK(14, 0)
#define MSCC_MS_NON_VLAN_MTU_CHECK_NV_MTU_COMP_DROP BIT(15)
/* MACSEC_PP_CTRL */
#define MSCC_MS_PP_CTRL_MACSEC_OCTET_INCR_MODE BIT(0)
/* MACSEC_INTR_CTRL_STATUS */
#define MSCC_MS_INTR_CTRL_STATUS_INTR_CLR_STATUS(x) (x)
#define MSCC_MS_INTR_CTRL_STATUS_INTR_CLR_STATUS_M GENMASK(15, 0)
#define MSCC_MS_INTR_CTRL_STATUS_INTR_ENABLE(x) ((x) << 16)
#define MSCC_MS_INTR_CTRL_STATUS_INTR_ENABLE_M GENMASK(31, 16)
#define MACSEC_INTR_CTRL_STATUS_ROLLOVER BIT(5)
#endif /* _MSCC_OCELOT_MACSEC_H_ */
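The flow-control word for a SAM entry is assembled from the `MSCC_MS_SAM_FLOW_CTRL_*` fields above. As an illustration only (this exact combination is not taken from the driver code), an egress flow directed at the controlled port, dropping on error, protecting frames and carrying the SCI could be encoded as:

```c
#include <assert.h>
#include <stdint.h>

#define BIT(n)		(1U << (n))

/* Subset of the MSCC_MS_SAM_FLOW_CTRL_* macros above. */
#define FLOW_TYPE(x)	(x)
#define DEST_PORT(x)	((x) << 2)
#define DROP_ACTION(x)	((x) << 6)
#define PROTECT_FRAME	BIT(16)
#define INCLUDE_SCI	BIT(18)

/* Values from the enums above: MSCC_MS_FLOW_EGRESS,
 * MSCC_MS_PORT_CONTROLLED, MSCC_MS_ACTION_DROP. */
enum { FLOW_EGRESS = 3, PORT_CONTROLLED = 2, ACTION_DROP = 2 };

/* An egress flow that protects frames and places the SCI in the SecTAG;
 * whether a real configuration sets exactly these bits is an assumption. */
static uint32_t egress_flow_ctrl(void)
{
	return FLOW_TYPE(FLOW_EGRESS) | DEST_PORT(PORT_CONTROLLED) |
	       DROP_ACTION(ACTION_DROP) | PROTECT_FRAME | INCLUDE_SCI;
}
```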
......@@ -332,6 +332,9 @@ struct phy_c45_device_ids {
	u32 device_ids[8];
};

struct macsec_context;
struct macsec_ops;

/* phy_device: An instance of a PHY
 *
 * drv: Pointer to the driver for this PHY instance
......@@ -354,6 +357,7 @@ struct phy_c45_device_ids {
 * attached_dev: The attached enet driver's device instance ptr
 * adjust_link: Callback for the enet controller to respond to
 *              changes in the link state.
 * macsec_ops: MACsec offloading ops.
 *
 * speed, duplex, pause, supported, advertising, lp_advertising,
 * and autoneg are used like in mii_if_info
......@@ -453,6 +457,11 @@ struct phy_device {
	void (*phy_link_change)(struct phy_device *, bool up, bool do_carrier);
	void (*adjust_link)(struct net_device *dev);

#if IS_ENABLED(CONFIG_MACSEC)
	/* MACsec management functions */
	const struct macsec_ops *macsec_ops;
#endif
};

#define to_phy_device(d) container_of(to_mdio_device(d),		\
				      struct phy_device, mdio)
......
/* SPDX-License-Identifier: GPL-2.0+ */
/*
* MACsec netdev header, used for h/w accelerated implementations.
*
* Copyright (c) 2015 Sabrina Dubroca <sd@queasysnail.net>
*/
#ifndef _NET_MACSEC_H_
#define _NET_MACSEC_H_
#include <linux/u64_stats_sync.h>
#include <uapi/linux/if_link.h>
#include <uapi/linux/if_macsec.h>
typedef u64 __bitwise sci_t;

#define MACSEC_NUM_AN 4 /* 2 bits for the association number */

/**
 * struct macsec_key - SA key
 * @id: user-provided key identifier
 * @tfm: crypto struct, key storage
 */
struct macsec_key {
	u8 id[MACSEC_KEYID_LEN];
	struct crypto_aead *tfm;
};
struct macsec_rx_sc_stats {
	__u64 InOctetsValidated;
	__u64 InOctetsDecrypted;
	__u64 InPktsUnchecked;
	__u64 InPktsDelayed;
	__u64 InPktsOK;
	__u64 InPktsInvalid;
	__u64 InPktsLate;
	__u64 InPktsNotValid;
	__u64 InPktsNotUsingSA;
	__u64 InPktsUnusedSA;
};

struct macsec_rx_sa_stats {
	__u32 InPktsOK;
	__u32 InPktsInvalid;
	__u32 InPktsNotValid;
	__u32 InPktsNotUsingSA;
	__u32 InPktsUnusedSA;
};

struct macsec_tx_sa_stats {
	__u32 OutPktsProtected;
	__u32 OutPktsEncrypted;
};

struct macsec_tx_sc_stats {
	__u64 OutPktsProtected;
	__u64 OutPktsEncrypted;
	__u64 OutOctetsProtected;
	__u64 OutOctetsEncrypted;
};
/**
 * struct macsec_rx_sa - receive secure association
 * @active: SA is usable for reception
 * @next_pn: packet number expected for the next packet
 * @lock: protects next_pn manipulations
 * @key: key structure
 * @stats: per-SA stats
 */
struct macsec_rx_sa {
	struct macsec_key key;
	spinlock_t lock;
	u32 next_pn;
	refcount_t refcnt;
	bool active;
	struct macsec_rx_sa_stats __percpu *stats;
	struct macsec_rx_sc *sc;
	struct rcu_head rcu;
};

struct pcpu_rx_sc_stats {
	struct macsec_rx_sc_stats stats;
	struct u64_stats_sync syncp;
};

struct pcpu_tx_sc_stats {
	struct macsec_tx_sc_stats stats;
	struct u64_stats_sync syncp;
};
/**
 * struct macsec_rx_sc - receive secure channel
 * @sci: secure channel identifier for this SC
 * @active: channel is active
 * @sa: array of secure associations
 * @stats: per-SC stats
 */
struct macsec_rx_sc {
	struct macsec_rx_sc __rcu *next;
	sci_t sci;
	bool active;
	struct macsec_rx_sa __rcu *sa[MACSEC_NUM_AN];
	struct pcpu_rx_sc_stats __percpu *stats;
	refcount_t refcnt;
	struct rcu_head rcu_head;
};
/**
 * struct macsec_tx_sa - transmit secure association
 * @active: SA is usable for transmission
 * @next_pn: packet number to use for the next packet
 * @lock: protects next_pn manipulations
 * @key: key structure
 * @stats: per-SA stats
 */
struct macsec_tx_sa {
	struct macsec_key key;
	spinlock_t lock;
	u32 next_pn;
	refcount_t refcnt;
	bool active;
	struct macsec_tx_sa_stats __percpu *stats;
	struct rcu_head rcu;
};
/**
 * struct macsec_tx_sc - transmit secure channel
 * @active: transmit SC is in use
 * @encoding_sa: association number of the SA currently in use
 * @encrypt: encrypt packets on transmit, or authenticate only
 * @send_sci: always include the SCI in the SecTAG
 * @end_station: end station bit to set in the SecTAG
 * @scb: single copy broadcast flag
 * @sa: array of secure associations
 * @stats: stats for this TXSC
 */
struct macsec_tx_sc {
	bool active;
	u8 encoding_sa;
	bool encrypt;
	bool send_sci;
	bool end_station;
	bool scb;
	struct macsec_tx_sa __rcu *sa[MACSEC_NUM_AN];
	struct pcpu_tx_sc_stats __percpu *stats;
};
/**
 * struct macsec_secy - MACsec Security Entity
 * @netdev: netdevice for this SecY
 * @n_rx_sc: number of receive secure channels configured on this SecY
 * @sci: secure channel identifier used for tx
 * @key_len: length of keys used by the cipher suite
 * @icv_len: length of ICV used by the cipher suite
 * @validate_frames: validation mode
 * @operational: MAC_Operational flag
 * @protect_frames: enable protection for this SecY
 * @replay_protect: enable packet number checks on receive
 * @replay_window: size of the replay window
 * @tx_sc: transmit secure channel
 * @rx_sc: linked list of receive secure channels
 */
struct macsec_secy {
	struct net_device *netdev;
	unsigned int n_rx_sc;
	sci_t sci;
	u16 key_len;
	u16 icv_len;
	enum macsec_validation_type validate_frames;
	bool operational;
	bool protect_frames;
	bool replay_protect;
	u32 replay_window;
	struct macsec_tx_sc tx_sc;
	struct macsec_rx_sc __rcu *rx_sc;
};
/**
 * struct macsec_context - MACsec context for hardware offloading
 */
struct macsec_context {
	struct phy_device *phydev;
	enum macsec_offload offload;

	struct macsec_secy *secy;
	struct macsec_rx_sc *rx_sc;
	struct {
		unsigned char assoc_num;
		u8 key[MACSEC_KEYID_LEN];
		union {
			struct macsec_rx_sa *rx_sa;
			struct macsec_tx_sa *tx_sa;
		};
	} sa;

	u8 prepare:1;
};
/**
 * struct macsec_ops - MACsec offloading operations
 */
struct macsec_ops {
	/* Device wide */
	int (*mdo_dev_open)(struct macsec_context *ctx);
	int (*mdo_dev_stop)(struct macsec_context *ctx);
	/* SecY */
	int (*mdo_add_secy)(struct macsec_context *ctx);
	int (*mdo_upd_secy)(struct macsec_context *ctx);
	int (*mdo_del_secy)(struct macsec_context *ctx);
	/* Security channels */
	int (*mdo_add_rxsc)(struct macsec_context *ctx);
	int (*mdo_upd_rxsc)(struct macsec_context *ctx);
	int (*mdo_del_rxsc)(struct macsec_context *ctx);
	/* Security associations */
	int (*mdo_add_rxsa)(struct macsec_context *ctx);
	int (*mdo_upd_rxsa)(struct macsec_context *ctx);
	int (*mdo_del_rxsa)(struct macsec_context *ctx);
	int (*mdo_add_txsa)(struct macsec_context *ctx);
	int (*mdo_upd_txsa)(struct macsec_context *ctx);
	int (*mdo_del_txsa)(struct macsec_context *ctx);
};

void macsec_pn_wrapped(struct macsec_secy *secy, struct macsec_tx_sa *tx_sa);

#endif /* _NET_MACSEC_H_ */
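Since offloading is only possible when the attached PHY actually provides MACsec ops, the core side needs a guard before dispatching any mdo_* callback. A hedged sketch of that lookup-and-guard pattern with toy stand-in types (none of these names are from the kernel):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Toy stand-ins for macsec_ops / phy_device; only the fields this
 * sketch touches are modeled. */
struct toy_ops {
	int (*mdo_add_rxsc)(void *ctx);
};

struct toy_phydev {
	const struct toy_ops *macsec_ops;
};

static int toy_add_rxsc_ok(void *ctx)
{
	(void)ctx;
	return 0;	/* pretend the hardware accepted the RXSC */
}

/* Core-side guard: offloading is only attempted when the PHY provides
 * MACsec ops; otherwise signal "fall back to software" (-EOPNOTSUPP is
 * an assumed convention here). */
static int toy_offload_add_rxsc(struct toy_phydev *phydev, void *ctx)
{
	if (!phydev->macsec_ops || !phydev->macsec_ops->mdo_add_rxsc)
		return -EOPNOTSUPP;

	return phydev->macsec_ops->mdo_add_rxsc(ctx);
}
```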
......@@ -486,6 +486,13 @@ enum macsec_validation_type {
	MACSEC_VALIDATE_MAX = __MACSEC_VALIDATE_END - 1,
};

enum macsec_offload {
	MACSEC_OFFLOAD_OFF = 0,
	MACSEC_OFFLOAD_PHY = 1,
	__MACSEC_OFFLOAD_END,
	MACSEC_OFFLOAD_MAX = __MACSEC_OFFLOAD_END - 1,
};

/* IPVLAN section */
enum {
IFLA_IPVLAN_UNSPEC,
......
......@@ -45,6 +45,7 @@ enum macsec_attrs {
	MACSEC_ATTR_RXSC_LIST, /* dump, nested, macsec_rxsc_attrs for each RXSC */
	MACSEC_ATTR_TXSC_STATS, /* dump, nested, macsec_txsc_stats_attr */
	MACSEC_ATTR_SECY_STATS, /* dump, nested, macsec_secy_stats_attr */
	MACSEC_ATTR_OFFLOAD, /* config, nested, macsec_offload_attrs */
	__MACSEC_ATTR_END,
	NUM_MACSEC_ATTR = __MACSEC_ATTR_END,
	MACSEC_ATTR_MAX = __MACSEC_ATTR_END - 1,
......@@ -97,6 +98,15 @@ enum macsec_sa_attrs {
	MACSEC_SA_ATTR_MAX = __MACSEC_SA_ATTR_END - 1,
};

enum macsec_offload_attrs {
	MACSEC_OFFLOAD_ATTR_UNSPEC,
	MACSEC_OFFLOAD_ATTR_TYPE, /* config/dump, u8 0..MACSEC_OFFLOAD_MAX */
	MACSEC_OFFLOAD_ATTR_PAD,
	__MACSEC_OFFLOAD_ATTR_END,
	NUM_MACSEC_OFFLOAD_ATTR = __MACSEC_OFFLOAD_ATTR_END,
	MACSEC_OFFLOAD_ATTR_MAX = __MACSEC_OFFLOAD_ATTR_END - 1,
};
enum macsec_nl_commands {
	MACSEC_CMD_GET_TXSC,
	MACSEC_CMD_ADD_RXSC,
......@@ -108,6 +118,7 @@ enum macsec_nl_commands {
	MACSEC_CMD_ADD_RXSA,
	MACSEC_CMD_DEL_RXSA,
	MACSEC_CMD_UPD_RXSA,
	MACSEC_CMD_UPD_OFFLOAD,
};
/* u64 per-RXSC stats */
......
......@@ -485,6 +485,13 @@ enum macsec_validation_type {
	MACSEC_VALIDATE_MAX = __MACSEC_VALIDATE_END - 1,
};

enum macsec_offload {
	MACSEC_OFFLOAD_OFF = 0,
	MACSEC_OFFLOAD_PHY = 1,
	__MACSEC_OFFLOAD_END,
	MACSEC_OFFLOAD_MAX = __MACSEC_OFFLOAD_END - 1,
};

/* IPVLAN section */
enum {
IFLA_IPVLAN_UNSPEC,
......