Commit 9d8c7e5a authored by David S. Miller

Merge branch 'dpaa_eth-rss'

Madalin Bucur says:

====================
Add RSS to DPAA 1.x Ethernet driver

This patch set introduces Receive Side Scaling for the DPAA Ethernet
driver. The documentation is updated with details on the new feature
and the limitations that apply. A small fix is also included.

v2: removed a C++ style comment
v3: moved struct fman to a header file to avoid exporting a function
v4: addressed compilation issues introduced in v3
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
parents 0df49584 52600dcc
@@ -13,6 +13,7 @@ Contents
- Configuring DPAA Ethernet in your kernel
- DPAA Ethernet Frame Processing
- DPAA Ethernet Features
- DPAA IRQ Affinity and Receive Side Scaling
- Debugging
DPAA Ethernet Overview
@@ -147,7 +148,10 @@ gradually.
The driver has Rx and Tx checksum offloading for UDP and TCP. Currently the Rx
checksum offload feature is enabled by default and cannot be controlled through
ethtool.
ethtool. Also, rx-flow-hash and rx-hashing were added. The addition of RSS
provides a big performance boost for the forwarding scenarios, allowing
different traffic flows received by one interface to be processed by different
CPUs in parallel.
The driver has support for multiple prioritized Tx traffic classes. Priorities
range from 0 (lowest) to 3 (highest). These are mapped to HW workqueues with
@@ -166,6 +170,68 @@ classes as follows:
tc qdisc add dev <int> root handle 1: \
mqprio num_tc 4 map 0 0 0 0 1 1 1 1 2 2 2 2 3 3 3 3 hw 1
DPAA IRQ Affinity and Receive Side Scaling
==========================================
Traffic coming in on the DPAA Rx queues or on the DPAA Tx confirmation
queues is seen by the CPU as ingress traffic on a certain portal.
The DPAA QMan portal interrupts are each affined to a certain CPU.
The same portal interrupt services all the QMan portal consumers.
By default the DPAA Ethernet driver enables RSS, making use of the
DPAA FMan Parser and Keygen blocks to distribute traffic on 128
hardware frame queues using a hash on the IPv4/v6 source and destination
addresses and the L4 source and destination ports, if present in the
received frame.
When RSS is disabled, all traffic received by a certain interface is
received on the default Rx frame queue. The default DPAA Rx frame
queues are configured to put the received traffic into a pool channel
that allows any available CPU portal to dequeue the ingress traffic.
The default frame queues have the HOLDACTIVE option set, ensuring that
traffic bursts from a certain queue are serviced by the same CPU.
This ensures a very low rate of frame reordering. A drawback of this
is that only one CPU at a time can service the traffic received by a
certain interface when RSS is not enabled.
To implement RSS, the DPAA Ethernet driver allocates an extra set of
128 Rx frame queues that are configured to dedicated channels, in a
round-robin manner. The mapping of the frame queues to CPUs is now
hardcoded; there is no indirection table to move traffic for a certain
FQ (hash result) to another CPU. The ingress traffic arriving on one
of these frame queues will arrive at the same portal and will always
be processed by the same CPU. This ensures intra-flow order preservation
and workload distribution for multiple traffic flows.
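As a conceptual sketch (illustrative names, not actual driver code), the
queue and CPU selection under RSS can be summarized as:

	/* the KeyGen hash picks one of the 128 PCD frame queues; each
	 * queue index is bound, round-robin, to an affine portal
	 * channel, so a given queue is always serviced by the same CPU
	 */
	u32 fq_idx = keygen_hash & (DPAA_ETH_PCD_RXQ_NUM - 1); /* 0..127 */
	u32 fqid = pcd_base_fqid + fq_idx;    /* base aligned to 128 */
	u16 channel = channels[fq_idx % num_portals]; /* fixed CPU */

This mirrors the round-robin assignment performed in dpaa_fq_setup() in
the diff below and is why the PCD FQID range must be aligned to the
queue count.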
RSS can be turned off for a certain interface using ethtool, e.g.:
# ethtool -N fm1-mac9 rx-flow-hash tcp4 ""
To turn it back on, one needs to set rx-flow-hash for tcp4/6 or udp4/6:
# ethtool -N fm1-mac9 rx-flow-hash udp4 sfdn
There is no independent control for individual protocols; any command
run for one of tcp4|udp4|ah4|esp4|sctp4|tcp6|udp6|ah6|esp6|sctp6 is
going to control the rx-flow-hashing for all protocols on that interface.
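For example (illustrative), a single command changes the reported
configuration for all of them:

	# ethtool -N fm1-mac9 rx-flow-hash udp6 sfdn
	# ethtool -n fm1-mac9 rx-flow-hash tcp4

The second command will report the IP and L4 hash fields as active even
though only udp6 was set.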
Besides using the FMan Keygen computed hash for spreading traffic on the
128 Rx FQs, the DPAA Ethernet driver also sets the skb hash value when
the NETIF_F_RXHASH feature is on (active by default). This can be turned
on or off through ethtool, e.g.:
# ethtool -K fm1-mac9 rx-hashing off
# ethtool -k fm1-mac9 | grep hash
receive-hashing: off
# ethtool -K fm1-mac9 rx-hashing on
Actual changes:
receive-hashing: on
# ethtool -k fm1-mac9 | grep hash
receive-hashing: on
Please note that Rx hashing depends upon the rx-flow-hashing being on
for that interface - turning off rx-flow-hashing will also disable the
rx-hashing (without ethtool reporting it as off as that depends on the
NETIF_F_RXHASH feature flag).
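For instance (illustrative), after clearing the rx-flow-hash fields the
skb hash is no longer computed, yet receive-hashing still reads as on:

	# ethtool -N fm1-mac9 rx-flow-hash tcp4 ""
	# ethtool -k fm1-mac9 | grep hash
	receive-hashing: on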
Debugging
=========
......
@@ -158,7 +158,7 @@ MODULE_PARM_DESC(tx_timeout, "The Tx timeout in ms");
#define DPAA_RX_PRIV_DATA_SIZE (u16)(DPAA_TX_PRIV_DATA_SIZE + \
dpaa_rx_extra_headroom)
#define DPAA_ETH_RX_QUEUES 128
#define DPAA_ETH_PCD_RXQ_NUM 128
#define DPAA_ENQUEUE_RETRIES 100000
@@ -169,6 +169,7 @@ struct fm_port_fqs {
struct dpaa_fq *tx_errq;
struct dpaa_fq *rx_defq;
struct dpaa_fq *rx_errq;
struct dpaa_fq *rx_pcdq;
};
/* All the dpa bps in use at any moment */
@@ -235,7 +236,7 @@ static int dpaa_netdev_init(struct net_device *net_dev,
net_dev->max_mtu = dpaa_get_max_mtu();
net_dev->hw_features |= (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
NETIF_F_LLTX);
NETIF_F_LLTX | NETIF_F_RXHASH);
net_dev->hw_features |= NETIF_F_SG | NETIF_F_HIGHDMA;
/* The kernel enables GSO automatically, if we declare NETIF_F_SG.
@@ -628,6 +629,7 @@ static inline void dpaa_assign_wq(struct dpaa_fq *fq, int idx)
fq->wq = 5;
break;
case FQ_TYPE_RX_DEFAULT:
case FQ_TYPE_RX_PCD:
fq->wq = 6;
break;
case FQ_TYPE_TX:
@@ -688,6 +690,7 @@ static int dpaa_alloc_all_fqs(struct device *dev, struct list_head *list,
struct fm_port_fqs *port_fqs)
{
struct dpaa_fq *dpaa_fq;
u32 fq_base, fq_base_aligned, i;
dpaa_fq = dpaa_fq_alloc(dev, 0, 1, list, FQ_TYPE_RX_ERROR);
if (!dpaa_fq)
@@ -701,6 +704,26 @@ static int dpaa_alloc_all_fqs(struct device *dev, struct list_head *list,
port_fqs->rx_defq = &dpaa_fq[0];
/* the PCD FQIDs range needs to be aligned for correct operation */
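/* allocating twice the required range guarantees that an aligned
 * window of DPAA_ETH_PCD_RXQ_NUM FQIDs exists inside it; the FQIDs
 * before and after that window are released below
 */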
if (qman_alloc_fqid_range(&fq_base, 2 * DPAA_ETH_PCD_RXQ_NUM))
goto fq_alloc_failed;
fq_base_aligned = ALIGN(fq_base, DPAA_ETH_PCD_RXQ_NUM);
for (i = fq_base; i < fq_base_aligned; i++)
qman_release_fqid(i);
for (i = fq_base_aligned + DPAA_ETH_PCD_RXQ_NUM;
i < (fq_base + 2 * DPAA_ETH_PCD_RXQ_NUM); i++)
qman_release_fqid(i);
dpaa_fq = dpaa_fq_alloc(dev, fq_base_aligned, DPAA_ETH_PCD_RXQ_NUM,
list, FQ_TYPE_RX_PCD);
if (!dpaa_fq)
goto fq_alloc_failed;
port_fqs->rx_pcdq = &dpaa_fq[0];
if (!dpaa_fq_alloc(dev, 0, DPAA_ETH_TXQ_NUM, list, FQ_TYPE_TX_CONF_MQ))
goto fq_alloc_failed;
@@ -870,13 +893,14 @@ static void dpaa_fq_setup(struct dpaa_priv *priv,
const struct dpaa_fq_cbs *fq_cbs,
struct fman_port *tx_port)
{
int egress_cnt = 0, conf_cnt = 0, num_portals = 0, cpu;
int egress_cnt = 0, conf_cnt = 0, num_portals = 0, portal_cnt = 0, cpu;
const cpumask_t *affine_cpus = qman_affine_cpus();
u16 portals[NR_CPUS];
u16 channels[NR_CPUS];
struct dpaa_fq *fq;
for_each_cpu(cpu, affine_cpus)
portals[num_portals++] = qman_affine_channel(cpu);
channels[num_portals++] = qman_affine_channel(cpu);
if (num_portals == 0)
dev_err(priv->net_dev->dev.parent,
"No Qman software (affine) channels found");
@@ -890,6 +914,12 @@ static void dpaa_fq_setup(struct dpaa_priv *priv,
case FQ_TYPE_RX_ERROR:
dpaa_setup_ingress(priv, fq, &fq_cbs->rx_errq);
break;
case FQ_TYPE_RX_PCD:
if (!num_portals)
continue;
dpaa_setup_ingress(priv, fq, &fq_cbs->rx_defq);
fq->channel = channels[portal_cnt++ % num_portals];
break;
case FQ_TYPE_TX:
dpaa_setup_egress(priv, fq, tx_port,
&fq_cbs->egress_ern);
@@ -1039,7 +1069,8 @@ static int dpaa_fq_init(struct dpaa_fq *dpaa_fq, bool td_enable)
/* Put all the ingress queues in our "ingress CGR". */
if (priv->use_ingress_cgr &&
(dpaa_fq->fq_type == FQ_TYPE_RX_DEFAULT ||
dpaa_fq->fq_type == FQ_TYPE_RX_ERROR)) {
dpaa_fq->fq_type == FQ_TYPE_RX_ERROR ||
dpaa_fq->fq_type == FQ_TYPE_RX_PCD)) {
initfq.we_mask |= cpu_to_be16(QM_INITFQ_WE_CGID);
initfq.fqd.fq_ctrl |= cpu_to_be16(QM_FQCTRL_CGE);
initfq.fqd.cgid = (u8)priv->ingress_cgr.cgrid;
@@ -1170,7 +1201,7 @@ static int dpaa_eth_init_tx_port(struct fman_port *port, struct dpaa_fq *errq,
static int dpaa_eth_init_rx_port(struct fman_port *port, struct dpaa_bp **bps,
size_t count, struct dpaa_fq *errq,
struct dpaa_fq *defq,
struct dpaa_fq *defq, struct dpaa_fq *pcdq,
struct dpaa_buffer_layout *buf_layout)
{
struct fman_buffer_prefix_content buf_prefix_content;
@@ -1190,6 +1221,10 @@ static int dpaa_eth_init_rx_port(struct fman_port *port, struct dpaa_bp **bps,
rx_p = &params.specific_params.rx_params;
rx_p->err_fqid = errq->fqid;
rx_p->dflt_fqid = defq->fqid;
if (pcdq) {
rx_p->pcd_base_fqid = pcdq->fqid;
rx_p->pcd_fqs_count = DPAA_ETH_PCD_RXQ_NUM;
}
count = min(ARRAY_SIZE(rx_p->ext_buf_pools.ext_buf_pool), count);
rx_p->ext_buf_pools.num_of_pools_used = (u8)count;
@@ -1234,7 +1269,8 @@ static int dpaa_eth_init_ports(struct mac_device *mac_dev,
return err;
err = dpaa_eth_init_rx_port(rxport, bps, count, port_fqs->rx_errq,
port_fqs->rx_defq, &buf_layout[RX]);
port_fqs->rx_defq, port_fqs->rx_pcdq,
&buf_layout[RX]);
return err;
}
@@ -2201,12 +2237,13 @@ static enum qman_cb_dqrr_result rx_default_dqrr(struct qman_portal *portal,
dma_addr_t addr = qm_fd_addr(fd);
enum qm_fd_format fd_format;
struct net_device *net_dev;
u32 fd_status;
u32 fd_status, hash_offset;
struct dpaa_bp *dpaa_bp;
struct dpaa_priv *priv;
unsigned int skb_len;
struct sk_buff *skb;
int *count_ptr;
void *vaddr;
fd_status = be32_to_cpu(fd->status);
fd_format = qm_fd_get_format(fd);
@@ -2252,7 +2289,8 @@ static enum qman_cb_dqrr_result rx_default_dqrr(struct qman_portal *portal,
dma_unmap_single(dpaa_bp->dev, addr, dpaa_bp->size, DMA_FROM_DEVICE);
/* prefetch the first 64 bytes of the frame or the SGT start */
prefetch(phys_to_virt(addr) + qm_fd_get_offset(fd));
vaddr = phys_to_virt(addr);
prefetch(vaddr + qm_fd_get_offset(fd));
fd_format = qm_fd_get_format(fd);
/* The only FD types that we may receive are contig and S/G */
@@ -2273,6 +2311,18 @@ static enum qman_cb_dqrr_result rx_default_dqrr(struct qman_portal *portal,
skb->protocol = eth_type_trans(skb, net_dev);
if (net_dev->features & NETIF_F_RXHASH && priv->keygen_in_use &&
!fman_port_get_hash_result_offset(priv->mac_dev->port[RX],
&hash_offset)) {
enum pkt_hash_types type;
/* if L4 exists, it was used in the hash generation */
type = be32_to_cpu(fd->status) & FM_FD_STAT_L4CV ?
PKT_HASH_TYPE_L4 : PKT_HASH_TYPE_L3;
skb_set_hash(skb, be32_to_cpu(*(u32 *)(vaddr + hash_offset)),
type);
}
skb_len = skb->len;
if (unlikely(netif_receive_skb(skb) == NET_RX_DROP))
@@ -2511,6 +2561,9 @@ static struct dpaa_bp *dpaa_bp_alloc(struct device *dev)
dpaa_bp->bpid = FSL_DPAA_BPID_INV;
dpaa_bp->percpu_count = devm_alloc_percpu(dev, *dpaa_bp->percpu_count);
if (!dpaa_bp->percpu_count)
return ERR_PTR(-ENOMEM);
dpaa_bp->config_count = FSL_DPAA_ETH_MAX_BUF_COUNT;
dpaa_bp->seed_cb = dpaa_bp_seed;
@@ -2738,6 +2791,9 @@ static int dpaa_eth_probe(struct platform_device *pdev)
if (err)
goto init_ports_failed;
/* Rx traffic distribution based on keygen hashing defaults to on */
priv->keygen_in_use = true;
priv->percpu_priv = devm_alloc_percpu(dev, *priv->percpu_priv);
if (!priv->percpu_priv) {
dev_err(dev, "devm_alloc_percpu() failed\n");
......
@@ -52,6 +52,7 @@
enum dpaa_fq_type {
FQ_TYPE_RX_DEFAULT = 1, /* Rx Default FQs */
FQ_TYPE_RX_ERROR, /* Rx Error FQs */
FQ_TYPE_RX_PCD, /* Rx Parse Classify Distribute FQs */
FQ_TYPE_TX, /* "Real" Tx FQs */
FQ_TYPE_TX_CONFIRM, /* Tx default Conf FQ (actually an Rx FQ) */
FQ_TYPE_TX_CONF_MQ, /* Tx conf FQs (one for each Tx FQ) */
@@ -158,6 +159,7 @@ struct dpaa_priv {
struct list_head dpaa_fq_list;
u8 num_tc;
bool keygen_in_use;
u32 msg_enable; /* net_device message level */
struct {
......
@@ -71,6 +71,9 @@ static ssize_t dpaa_eth_show_fqids(struct device *dev,
case FQ_TYPE_RX_ERROR:
str = "Rx error";
break;
case FQ_TYPE_RX_PCD:
str = "Rx PCD";
break;
case FQ_TYPE_TX_CONFIRM:
str = "Tx default confirmation";
break;
......
@@ -399,6 +399,122 @@ static void dpaa_get_strings(struct net_device *net_dev, u32 stringset,
memcpy(strings, dpaa_stats_global, size);
}
static int dpaa_get_hash_opts(struct net_device *dev,
struct ethtool_rxnfc *cmd)
{
struct dpaa_priv *priv = netdev_priv(dev);
cmd->data = 0;
switch (cmd->flow_type) {
case TCP_V4_FLOW:
case TCP_V6_FLOW:
case UDP_V4_FLOW:
case UDP_V6_FLOW:
if (priv->keygen_in_use)
cmd->data |= RXH_L4_B_0_1 | RXH_L4_B_2_3;
/* Fall through */
case IPV4_FLOW:
case IPV6_FLOW:
case SCTP_V4_FLOW:
case SCTP_V6_FLOW:
case AH_ESP_V4_FLOW:
case AH_ESP_V6_FLOW:
case AH_V4_FLOW:
case AH_V6_FLOW:
case ESP_V4_FLOW:
case ESP_V6_FLOW:
if (priv->keygen_in_use)
cmd->data |= RXH_IP_SRC | RXH_IP_DST;
break;
default:
cmd->data = 0;
break;
}
return 0;
}
static int dpaa_get_rxnfc(struct net_device *dev, struct ethtool_rxnfc *cmd,
u32 *unused)
{
int ret = -EOPNOTSUPP;
switch (cmd->cmd) {
case ETHTOOL_GRXFH:
ret = dpaa_get_hash_opts(dev, cmd);
break;
default:
break;
}
return ret;
}
static void dpaa_set_hash(struct net_device *net_dev, bool enable)
{
struct mac_device *mac_dev;
struct fman_port *rxport;
struct dpaa_priv *priv;
priv = netdev_priv(net_dev);
mac_dev = priv->mac_dev;
rxport = mac_dev->port[0];
fman_port_use_kg_hash(rxport, enable);
priv->keygen_in_use = enable;
}
static int dpaa_set_hash_opts(struct net_device *dev,
struct ethtool_rxnfc *nfc)
{
int ret = -EINVAL;
/* we support hashing on IPv4/v6 src/dest IP and L4 src/dest port */
if (nfc->data &
~(RXH_IP_SRC | RXH_IP_DST | RXH_L4_B_0_1 | RXH_L4_B_2_3))
return -EINVAL;
switch (nfc->flow_type) {
case TCP_V4_FLOW:
case TCP_V6_FLOW:
case UDP_V4_FLOW:
case UDP_V6_FLOW:
case IPV4_FLOW:
case IPV6_FLOW:
case SCTP_V4_FLOW:
case SCTP_V6_FLOW:
case AH_ESP_V4_FLOW:
case AH_ESP_V6_FLOW:
case AH_V4_FLOW:
case AH_V6_FLOW:
case ESP_V4_FLOW:
case ESP_V6_FLOW:
dpaa_set_hash(dev, !!nfc->data);
ret = 0;
break;
default:
break;
}
return ret;
}
static int dpaa_set_rxnfc(struct net_device *dev, struct ethtool_rxnfc *cmd)
{
int ret = -EOPNOTSUPP;
switch (cmd->cmd) {
case ETHTOOL_SRXFH:
ret = dpaa_set_hash_opts(dev, cmd);
break;
default:
break;
}
return ret;
}
const struct ethtool_ops dpaa_ethtool_ops = {
.get_drvinfo = dpaa_get_drvinfo,
.get_msglevel = dpaa_get_msglevel,
@@ -412,4 +528,6 @@ const struct ethtool_ops dpaa_ethtool_ops = {
.get_strings = dpaa_get_strings,
.get_link_ksettings = dpaa_get_link_ksettings,
.set_link_ksettings = dpaa_set_link_ksettings,
.get_rxnfc = dpaa_get_rxnfc,
.set_rxnfc = dpaa_set_rxnfc,
};
@@ -4,6 +4,6 @@ obj-$(CONFIG_FSL_FMAN) += fsl_fman.o
obj-$(CONFIG_FSL_FMAN) += fsl_fman_port.o
obj-$(CONFIG_FSL_FMAN) += fsl_mac.o
fsl_fman-objs := fman_muram.o fman.o fman_sp.o
fsl_fman-objs := fman_muram.o fman.o fman_sp.o fman_keygen.o
fsl_fman_port-objs := fman_port.o
fsl_mac-objs:= mac.o fman_dtsec.o fman_memac.o fman_tgec.o
...@@ -32,9 +32,6 @@ ...@@ -32,9 +32,6 @@
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include "fman.h"
#include "fman_muram.h"
#include <linux/fsl/guts.h> #include <linux/fsl/guts.h>
#include <linux/slab.h> #include <linux/slab.h>
#include <linux/delay.h> #include <linux/delay.h>
...@@ -46,6 +43,10 @@ ...@@ -46,6 +43,10 @@
#include <linux/interrupt.h> #include <linux/interrupt.h>
#include <linux/libfdt_env.h> #include <linux/libfdt_env.h>
#include "fman.h"
#include "fman_muram.h"
#include "fman_keygen.h"
/* General defines */ /* General defines */
#define FMAN_LIODN_TBL 64 /* size of LIODN table */ #define FMAN_LIODN_TBL 64 /* size of LIODN table */
#define MAX_NUM_OF_MACS 10 #define MAX_NUM_OF_MACS 10
...@@ -56,6 +57,7 @@ ...@@ -56,6 +57,7 @@
/* Modules registers offsets */ /* Modules registers offsets */
#define BMI_OFFSET 0x00080000 #define BMI_OFFSET 0x00080000
#define QMI_OFFSET 0x00080400 #define QMI_OFFSET 0x00080400
#define KG_OFFSET 0x000C1000
#define DMA_OFFSET 0x000C2000 #define DMA_OFFSET 0x000C2000
#define FPM_OFFSET 0x000C3000 #define FPM_OFFSET 0x000C3000
#define IMEM_OFFSET 0x000C4000 #define IMEM_OFFSET 0x000C4000
...@@ -564,80 +566,6 @@ struct fman_cfg { ...@@ -564,80 +566,6 @@ struct fman_cfg {
u32 qmi_def_tnums_thresh; u32 qmi_def_tnums_thresh;
}; };
/* Structure that holds information received from device tree */
struct fman_dts_params {
void __iomem *base_addr; /* FMan virtual address */
struct resource *res; /* FMan memory resource */
u8 id; /* FMan ID */
int err_irq; /* FMan Error IRQ */
u16 clk_freq; /* FMan clock freq (In Mhz) */
u32 qman_channel_base; /* QMan channels base */
u32 num_of_qman_channels; /* Number of QMan channels */
struct resource muram_res; /* MURAM resource */
};
/** fman_exceptions_cb
* fman - Pointer to FMan
* exception - The exception.
*
* Exceptions user callback routine, will be called upon an exception
* passing the exception identification.
*
* Return: irq status
*/
typedef irqreturn_t (fman_exceptions_cb)(struct fman *fman,
enum fman_exceptions exception);
/** fman_bus_error_cb
* fman - Pointer to FMan
* port_id - Port id
* addr - Address that caused the error
* tnum - Owner of error
* liodn - Logical IO device number
*
* Bus error user callback routine, will be called upon bus error,
* passing parameters describing the errors and the owner.
*
* Return: IRQ status
*/
typedef irqreturn_t (fman_bus_error_cb)(struct fman *fman, u8 port_id,
u64 addr, u8 tnum, u16 liodn);
struct fman {
struct device *dev;
void __iomem *base_addr;
struct fman_intr_src intr_mng[FMAN_EV_CNT];
struct fman_fpm_regs __iomem *fpm_regs;
struct fman_bmi_regs __iomem *bmi_regs;
struct fman_qmi_regs __iomem *qmi_regs;
struct fman_dma_regs __iomem *dma_regs;
struct fman_hwp_regs __iomem *hwp_regs;
fman_exceptions_cb *exception_cb;
fman_bus_error_cb *bus_error_cb;
/* Spinlock for FMan use */
spinlock_t spinlock;
struct fman_state_struct *state;
struct fman_cfg *cfg;
struct muram_info *muram;
/* cam section in muram */
unsigned long cam_offset;
size_t cam_size;
/* Fifo in MURAM */
unsigned long fifo_offset;
size_t fifo_size;
u32 liodn_base[64];
u32 liodn_offset[64];
struct fman_dts_params dts_params;
};
static irqreturn_t fman_exceptions(struct fman *fman,
enum fman_exceptions exception)
{
@@ -1811,6 +1739,7 @@ static int fman_config(struct fman *fman)
fman->qmi_regs = base_addr + QMI_OFFSET;
fman->dma_regs = base_addr + DMA_OFFSET;
fman->hwp_regs = base_addr + HWP_OFFSET;
fman->kg_regs = base_addr + KG_OFFSET;
fman->base_addr = base_addr;
spin_lock_init(&fman->spinlock);
@@ -2083,6 +2012,11 @@ static int fman_init(struct fman *fman)
/* Init HW Parser */
hwp_init(fman->hwp_regs);
/* Init KeyGen */
fman->keygen = keygen_init(fman->kg_regs);
if (!fman->keygen)
return -EINVAL;
err = enable(fman, cfg);
if (err != 0)
return err;
......
@@ -34,6 +34,8 @@
#define __FM_H
#include <linux/io.h>
#include <linux/interrupt.h>
#include <linux/of_irq.h>
/* FM Frame descriptor macros */
/* Frame queue Context Override */
@@ -274,6 +276,81 @@ struct fman_intr_src {
void *src_handle;
};
/** fman_exceptions_cb
* fman - Pointer to FMan
* exception - The exception.
*
* Exceptions user callback routine, will be called upon an exception
* passing the exception identification.
*
* Return: irq status
*/
typedef irqreturn_t (fman_exceptions_cb)(struct fman *fman,
enum fman_exceptions exception);
/** fman_bus_error_cb
* fman - Pointer to FMan
* port_id - Port id
* addr - Address that caused the error
* tnum - Owner of error
* liodn - Logical IO device number
*
* Bus error user callback routine, will be called upon bus error,
* passing parameters describing the errors and the owner.
*
* Return: IRQ status
*/
typedef irqreturn_t (fman_bus_error_cb)(struct fman *fman, u8 port_id,
u64 addr, u8 tnum, u16 liodn);
/* Structure that holds information received from device tree */
struct fman_dts_params {
void __iomem *base_addr; /* FMan virtual address */
struct resource *res; /* FMan memory resource */
u8 id; /* FMan ID */
int err_irq; /* FMan Error IRQ */
u16 clk_freq; /* FMan clock freq (In Mhz) */
u32 qman_channel_base; /* QMan channels base */
u32 num_of_qman_channels; /* Number of QMan channels */
struct resource muram_res; /* MURAM resource */
};
struct fman {
struct device *dev;
void __iomem *base_addr;
struct fman_intr_src intr_mng[FMAN_EV_CNT];
struct fman_fpm_regs __iomem *fpm_regs;
struct fman_bmi_regs __iomem *bmi_regs;
struct fman_qmi_regs __iomem *qmi_regs;
struct fman_dma_regs __iomem *dma_regs;
struct fman_hwp_regs __iomem *hwp_regs;
struct fman_kg_regs __iomem *kg_regs;
fman_exceptions_cb *exception_cb;
fman_bus_error_cb *bus_error_cb;
/* Spinlock for FMan use */
spinlock_t spinlock;
struct fman_state_struct *state;
struct fman_cfg *cfg;
struct muram_info *muram;
struct fman_keygen *keygen;
/* cam section in muram */
unsigned long cam_offset;
size_t cam_size;
/* Fifo in MURAM */
unsigned long fifo_offset;
size_t fifo_size;
u32 liodn_base[64];
u32 liodn_offset[64];
struct fman_dts_params dts_params;
};
/* Structure for port-FM communication during fman_port_init. */
struct fman_port_init_params {
u8 port_id; /* port Id */
......
/*
* Copyright 2017 NXP
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* * Neither the name of NXP nor the
* names of its contributors may be used to endorse or promote products
* derived from this software without specific prior written permission.
*
*
* ALTERNATIVELY, this software may be distributed under the terms of the
* GNU General Public License ("GPL") as published by the Free Software
* Foundation, either version 2 of that License or (at your option) any
* later version.
*
* THIS SOFTWARE IS PROVIDED BY NXP ``AS IS'' AND ANY
* EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
* WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
* DISCLAIMED. IN NO EVENT SHALL NXP BE LIABLE FOR ANY
* DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
* (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
* ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/slab.h>
#include "fman_keygen.h"
/* Maximum number of HW Ports */
#define FMAN_MAX_NUM_OF_HW_PORTS 64
/* Maximum number of KeyGen Schemes */
#define FM_KG_MAX_NUM_OF_SCHEMES 32
/* Number of generic KeyGen Generic Extract Command Registers */
#define FM_KG_NUM_OF_GENERIC_REGS 8
/* Dummy port ID */
#define DUMMY_PORT_ID 0
/* Select Scheme Value Register */
#define KG_SCH_DEF_USE_KGSE_DV_0 2
#define KG_SCH_DEF_USE_KGSE_DV_1 3
/* Registers Shifting values */
#define FM_KG_KGAR_NUM_SHIFT 16
#define KG_SCH_DEF_L4_PORT_SHIFT 8
#define KG_SCH_DEF_IP_ADDR_SHIFT 18
#define KG_SCH_HASH_CONFIG_SHIFT_SHIFT 24
/* KeyGen Registers bit field masks: */
/* Enable bit field mask for KeyGen General Configuration Register */
#define FM_KG_KGGCR_EN 0x80000000
/* KeyGen Global Registers bit field masks */
#define FM_KG_KGAR_GO 0x80000000
#define FM_KG_KGAR_READ 0x40000000
#define FM_KG_KGAR_WRITE 0x00000000
#define FM_KG_KGAR_SEL_SCHEME_ENTRY 0x00000000
#define FM_KG_KGAR_SCM_WSEL_UPDATE_CNT 0x00008000
#define FM_KG_KGAR_ERR 0x20000000
#define FM_KG_KGAR_SEL_CLS_PLAN_ENTRY 0x01000000
#define FM_KG_KGAR_SEL_PORT_ENTRY 0x02000000
#define FM_KG_KGAR_SEL_PORT_WSEL_SP 0x00008000
#define FM_KG_KGAR_SEL_PORT_WSEL_CPP 0x00004000
/* Error events exceptions */
#define FM_EX_KG_DOUBLE_ECC 0x80000000
#define FM_EX_KG_KEYSIZE_OVERFLOW 0x40000000
/* Scheme Registers bit field masks */
#define KG_SCH_MODE_EN 0x80000000
#define KG_SCH_VSP_NO_KSP_EN 0x80000000
#define KG_SCH_HASH_CONFIG_SYM 0x40000000
/* Known Protocol field codes */
#define KG_SCH_KN_PORT_ID 0x80000000
#define KG_SCH_KN_MACDST 0x40000000
#define KG_SCH_KN_MACSRC 0x20000000
#define KG_SCH_KN_TCI1 0x10000000
#define KG_SCH_KN_TCI2 0x08000000
#define KG_SCH_KN_ETYPE 0x04000000
#define KG_SCH_KN_PPPSID 0x02000000
#define KG_SCH_KN_PPPID 0x01000000
#define KG_SCH_KN_MPLS1 0x00800000
#define KG_SCH_KN_MPLS2 0x00400000
#define KG_SCH_KN_MPLS_LAST 0x00200000
#define KG_SCH_KN_IPSRC1 0x00100000
#define KG_SCH_KN_IPDST1 0x00080000
#define KG_SCH_KN_PTYPE1 0x00040000
#define KG_SCH_KN_IPTOS_TC1 0x00020000
#define KG_SCH_KN_IPV6FL1 0x00010000
#define KG_SCH_KN_IPSRC2 0x00008000
#define KG_SCH_KN_IPDST2 0x00004000
#define KG_SCH_KN_PTYPE2 0x00002000
#define KG_SCH_KN_IPTOS_TC2 0x00001000
#define KG_SCH_KN_IPV6FL2 0x00000800
#define KG_SCH_KN_GREPTYPE 0x00000400
#define KG_SCH_KN_IPSEC_SPI 0x00000200
#define KG_SCH_KN_IPSEC_NH 0x00000100
#define KG_SCH_KN_IPPID 0x00000080
#define KG_SCH_KN_L4PSRC 0x00000004
#define KG_SCH_KN_L4PDST 0x00000002
#define KG_SCH_KN_TFLG 0x00000001
/* NIA values */
#define NIA_ENG_BMI 0x00500000
#define NIA_BMI_AC_ENQ_FRAME 0x00000002
#define ENQUEUE_KG_DFLT_NIA (NIA_ENG_BMI | NIA_BMI_AC_ENQ_FRAME)
/* Hard-coded configuration:
* These values are used as hard-coded values for KeyGen configuration
* and they replace user selections for this hard-coded version
*/
/* Hash distribution shift */
#define DEFAULT_HASH_DIST_FQID_SHIFT 0
/* Hash shift */
#define DEFAULT_HASH_SHIFT 0
/* Symmetric hash usage:
* Warning:
* - the value for symmetric hash usage must be in accordance with hash
* key defined below
* - according to tests performed, spreading is not working if symmetric
* hash is set to true
* So ultimately symmetric hash functionality should always be disabled:
*/
#define DEFAULT_SYMMETRIC_HASH false
/* Hash Key extraction fields: */
#define DEFAULT_HASH_KEY_EXTRACT_FIELDS \
(KG_SCH_KN_IPSRC1 | KG_SCH_KN_IPDST1 | \
KG_SCH_KN_L4PSRC | KG_SCH_KN_L4PDST)
/* Default values to be used as hash key in case IPv4 or L4 (TCP, UDP)
* don't exist in the frame
*/
/* Default IPv4 address */
#define DEFAULT_HASH_KEY_IPv4_ADDR 0x0A0A0A0A
/* Default L4 port */
#define DEFAULT_HASH_KEY_L4_PORT 0x0B0B0B0B
/* KeyGen Memory Mapped Registers: */
/* Scheme Configuration RAM Registers */
struct fman_kg_scheme_regs {
u32 kgse_mode; /* 0x100: MODE */
u32 kgse_ekfc; /* 0x104: Extract Known Fields Command */
u32 kgse_ekdv; /* 0x108: Extract Known Default Value */
u32 kgse_bmch; /* 0x10C: Bit Mask Command High */
u32 kgse_bmcl; /* 0x110: Bit Mask Command Low */
u32 kgse_fqb; /* 0x114: Frame Queue Base */
u32 kgse_hc; /* 0x118: Hash Command */
u32 kgse_ppc; /* 0x11C: Policer Profile Command */
u32 kgse_gec[FM_KG_NUM_OF_GENERIC_REGS];
/* 0x120: Generic Extract Command */
u32 kgse_spc;
/* 0x140: KeyGen Scheme Entry Statistic Packet Counter */
u32 kgse_dv0; /* 0x144: KeyGen Scheme Entry Default Value 0 */
u32 kgse_dv1; /* 0x148: KeyGen Scheme Entry Default Value 1 */
u32 kgse_ccbs;
/* 0x14C: KeyGen Scheme Entry Coarse Classification Bit*/
u32 kgse_mv; /* 0x150: KeyGen Scheme Entry Match vector */
u32 kgse_om; /* 0x154: KeyGen Scheme Entry Operation Mode bits */
u32 kgse_vsp;
/* 0x158: KeyGen Scheme Entry Virtual Storage Profile */
};
/* Port Partition Configuration Registers */
struct fman_kg_pe_regs {
u32 fmkg_pe_sp; /* 0x100: KeyGen Port entry Scheme Partition */
u32 fmkg_pe_cpp;
/* 0x104: KeyGen Port Entry Classification Plan Partition */
};
/* General Configuration and Status Registers
* Global Statistic Counters
* KeyGen Global Registers
*/
struct fman_kg_regs {
u32 fmkg_gcr; /* 0x000: KeyGen General Configuration Register */
u32 res004; /* 0x004: Reserved */
u32 res008; /* 0x008: Reserved */
u32 fmkg_eer; /* 0x00C: KeyGen Error Event Register */
u32 fmkg_eeer; /* 0x010: KeyGen Error Event Enable Register */
u32 res014; /* 0x014: Reserved */
u32 res018; /* 0x018: Reserved */
u32 fmkg_seer; /* 0x01C: KeyGen Scheme Error Event Register */
u32 fmkg_seeer; /* 0x020: KeyGen Scheme Error Event Enable Register */
u32 fmkg_gsr; /* 0x024: KeyGen Global Status Register */
u32 fmkg_tpc; /* 0x028: Total Packet Counter Register */
u32 fmkg_serc; /* 0x02C: Soft Error Capture Register */
u32 res030[4]; /* 0x030: Reserved */
u32 fmkg_fdor; /* 0x040: Frame Data Offset Register */
u32 fmkg_gdv0r; /* 0x044: Global Default Value Register 0 */
u32 fmkg_gdv1r; /* 0x048: Global Default Value Register 1 */
u32 res04c[6]; /* 0x04C: Reserved */
u32 fmkg_feer; /* 0x064: Force Error Event Register */
u32 res068[38]; /* 0x068: Reserved */
union {
u32 fmkg_indirect[63]; /* 0x100: Indirect Access Registers */
struct fman_kg_scheme_regs fmkg_sch; /* Scheme Registers */
struct fman_kg_pe_regs fmkg_pe; /* Port Partition Registers */
};
u32 fmkg_ar; /* 0x1FC: KeyGen Action Register */
};
/* KeyGen Scheme data */
struct keygen_scheme {
bool used; /* Specifies if this scheme is used */
u8 hw_port_id;
/* Hardware port ID
* schemes sharing between multiple ports is not
* currently supported
* so we have only one port id bound to a scheme
*/
u32 base_fqid;
/* Base FQID:
* Must be between 1 and 2^24-1
* If hash is used and an even distribution is
* expected according to hash_fqid_count,
* base_fqid must be aligned to hash_fqid_count
*/
u32 hash_fqid_count;
/* FQ range for hash distribution:
* Must be a power of 2
* Represents the range of queues for spreading
*/
bool use_hashing; /* Usage of Hashing and spreading over FQ */
bool symmetric_hash; /* Symmetric Hash option usage */
u8 hashShift;
/* Hash result right shift.
* Select the 24 bits out of the 64 hash result.
* 0 means using the 24 LSB's, otherwise
* use the 24 LSB's after shifting right
*/
u32 match_vector; /* Match Vector */
};
/* KeyGen driver data */
struct fman_keygen {
struct keygen_scheme schemes[FM_KG_MAX_NUM_OF_SCHEMES];
/* Array of schemes */
struct fman_kg_regs __iomem *keygen_regs; /* KeyGen registers */
};
/* keygen_write_ar_wait
*
* Write Action Register with specified value, wait for GO bit field to be
* idle and then read the error
*
* regs: KeyGen registers
* fmkg_ar: Action Register value
*
* Return: Zero for success or error code in case of failure
*/
static int keygen_write_ar_wait(struct fman_kg_regs __iomem *regs, u32 fmkg_ar)
{
iowrite32be(fmkg_ar, &regs->fmkg_ar);
/* Wait for GO bit field to be idle */
while (fmkg_ar & FM_KG_KGAR_GO)
fmkg_ar = ioread32be(&regs->fmkg_ar);
if (fmkg_ar & FM_KG_KGAR_ERR)
return -EINVAL;
return 0;
}
/* build_ar_scheme
*
* Build Action Register value for scheme settings
*
* scheme_id: Scheme ID
* update_counter: update scheme counter
* write: true for action to write the scheme or false for read action
*
* Return: AR value
*/
static u32 build_ar_scheme(u8 scheme_id, bool update_counter, bool write)
{
u32 rw = (u32)(write ? FM_KG_KGAR_WRITE : FM_KG_KGAR_READ);
return (u32)(FM_KG_KGAR_GO |
rw |
FM_KG_KGAR_SEL_SCHEME_ENTRY |
DUMMY_PORT_ID |
((u32)scheme_id << FM_KG_KGAR_NUM_SHIFT) |
(update_counter ? FM_KG_KGAR_SCM_WSEL_UPDATE_CNT : 0));
}
/* build_ar_bind_scheme
*
* Build Action Register value for port binding to schemes
*
* hwport_id: HW Port ID
* write: true for action to write the bind or false for read action
*
* Return: AR value
*/
static u32 build_ar_bind_scheme(u8 hwport_id, bool write)
{
u32 rw = write ? (u32)FM_KG_KGAR_WRITE : (u32)FM_KG_KGAR_READ;
return (u32)(FM_KG_KGAR_GO |
rw |
FM_KG_KGAR_SEL_PORT_ENTRY |
hwport_id |
FM_KG_KGAR_SEL_PORT_WSEL_SP);
}
/* keygen_write_sp
*
* Write Scheme Partition Register with specified value
*
* regs: KeyGen Registers
* sp: Scheme Partition register value
* add: true to add a scheme partition or false to clear
*
* Return: none
*/
static void keygen_write_sp(struct fman_kg_regs __iomem *regs, u32 sp, bool add)
{
u32 tmp;
tmp = ioread32be(&regs->fmkg_pe.fmkg_pe_sp);
if (add)
tmp |= sp;
else
tmp &= ~sp;
iowrite32be(tmp, &regs->fmkg_pe.fmkg_pe_sp);
}
/* build_ar_bind_cls_plan
*
* Build Action Register value for Classification Plan
*
* hwport_id: HW Port ID
* write: true for action to write the CP or false for read action
*
* Return: AR value
*/
static u32 build_ar_bind_cls_plan(u8 hwport_id, bool write)
{
u32 rw = write ? (u32)FM_KG_KGAR_WRITE : (u32)FM_KG_KGAR_READ;
return (u32)(FM_KG_KGAR_GO |
rw |
FM_KG_KGAR_SEL_PORT_ENTRY |
hwport_id |
FM_KG_KGAR_SEL_PORT_WSEL_CPP);
}
/* keygen_write_cpp
*
* Write Classification Plan Partition Register with specified value
*
* regs: KeyGen Registers
* cpp: CPP register value
*
* Return: none
*/
static void keygen_write_cpp(struct fman_kg_regs __iomem *regs, u32 cpp)
{
iowrite32be(cpp, &regs->fmkg_pe.fmkg_pe_cpp);
}
/* keygen_write_scheme
*
* Write all Schemes Registers with specified values
*
* regs: KeyGen Registers
* scheme_id: Scheme ID
* scheme_regs: Scheme registers values desired to be written
* update_counter: update scheme counter
*
* Return: Zero for success or error code in case of failure
*/
static int keygen_write_scheme(struct fman_kg_regs __iomem *regs, u8 scheme_id,
struct fman_kg_scheme_regs *scheme_regs,
bool update_counter)
{
u32 ar_reg;
int err, i;
/* Write indirect scheme registers */
iowrite32be(scheme_regs->kgse_mode, &regs->fmkg_sch.kgse_mode);
iowrite32be(scheme_regs->kgse_ekfc, &regs->fmkg_sch.kgse_ekfc);
iowrite32be(scheme_regs->kgse_ekdv, &regs->fmkg_sch.kgse_ekdv);
iowrite32be(scheme_regs->kgse_bmch, &regs->fmkg_sch.kgse_bmch);
iowrite32be(scheme_regs->kgse_bmcl, &regs->fmkg_sch.kgse_bmcl);
iowrite32be(scheme_regs->kgse_fqb, &regs->fmkg_sch.kgse_fqb);
iowrite32be(scheme_regs->kgse_hc, &regs->fmkg_sch.kgse_hc);
iowrite32be(scheme_regs->kgse_ppc, &regs->fmkg_sch.kgse_ppc);
iowrite32be(scheme_regs->kgse_spc, &regs->fmkg_sch.kgse_spc);
iowrite32be(scheme_regs->kgse_dv0, &regs->fmkg_sch.kgse_dv0);
iowrite32be(scheme_regs->kgse_dv1, &regs->fmkg_sch.kgse_dv1);
iowrite32be(scheme_regs->kgse_ccbs, &regs->fmkg_sch.kgse_ccbs);
iowrite32be(scheme_regs->kgse_mv, &regs->fmkg_sch.kgse_mv);
iowrite32be(scheme_regs->kgse_om, &regs->fmkg_sch.kgse_om);
iowrite32be(scheme_regs->kgse_vsp, &regs->fmkg_sch.kgse_vsp);
for (i = 0 ; i < FM_KG_NUM_OF_GENERIC_REGS ; i++)
iowrite32be(scheme_regs->kgse_gec[i],
&regs->fmkg_sch.kgse_gec[i]);
/* Write AR (Action register) */
ar_reg = build_ar_scheme(scheme_id, update_counter, true);
err = keygen_write_ar_wait(regs, ar_reg);
if (err != 0) {
pr_err("Writing Action Register failed\n");
return err;
}
return err;
}
/* get_free_scheme_id
*
* Find the first free scheme available to be used
*
* keygen: KeyGen handle
* scheme_id: pointer to scheme id
*
* Return: 0 on success, -EINVAL when there are no available free schemes
*/
static int get_free_scheme_id(struct fman_keygen *keygen, u8 *scheme_id)
{
u8 i;
for (i = 0; i < FM_KG_MAX_NUM_OF_SCHEMES; i++)
if (!keygen->schemes[i].used) {
*scheme_id = i;
return 0;
}
return -EINVAL;
}
/* get_scheme
*
* Provides the scheme for specified ID
*
* keygen: KeyGen handle
* scheme_id: Scheme ID
*
* Return: handle to required scheme
*/
static struct keygen_scheme *get_scheme(struct fman_keygen *keygen,
u8 scheme_id)
{
if (scheme_id >= FM_KG_MAX_NUM_OF_SCHEMES)
return NULL;
return &keygen->schemes[scheme_id];
}
/* keygen_bind_port_to_schemes
*
* Bind the port to schemes
*
* keygen: KeyGen handle
* scheme_id: id of the scheme to bind to
* bind: true to bind the port or false to unbind it
*
* Return: Zero for success or error code in case of failure
*/
static int keygen_bind_port_to_schemes(struct fman_keygen *keygen,
u8 scheme_id,
bool bind)
{
struct fman_kg_regs __iomem *keygen_regs = keygen->keygen_regs;
struct keygen_scheme *scheme;
u32 ar_reg;
u32 schemes_vector = 0;
int err;
scheme = get_scheme(keygen, scheme_id);
if (!scheme) {
pr_err("Requested Scheme does not exist\n");
return -EINVAL;
}
if (!scheme->used) {
pr_err("Cannot bind port to an invalid scheme\n");
return -EINVAL;
}
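/* the scheme partition bitmap is MSB-first: scheme 0 maps to bit 31 */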
schemes_vector |= 1 << (31 - scheme_id);
ar_reg = build_ar_bind_scheme(scheme->hw_port_id, false);
err = keygen_write_ar_wait(keygen_regs, ar_reg);
if (err != 0) {
pr_err("Reading Action Register failed\n");
return err;
}
keygen_write_sp(keygen_regs, schemes_vector, bind);
ar_reg = build_ar_bind_scheme(scheme->hw_port_id, true);
err = keygen_write_ar_wait(keygen_regs, ar_reg);
if (err != 0) {
pr_err("Writing Action Register failed\n");
return err;
}
return 0;
}
/* keygen_scheme_setup
*
* Setup the scheme according to required configuration
*
* keygen: KeyGen handle
* scheme_id: scheme ID
* enable: true to enable scheme or false to disable it
*
* Return: Zero for success or error code in case of failure
*/
static int keygen_scheme_setup(struct fman_keygen *keygen, u8 scheme_id,
bool enable)
{
struct fman_kg_regs __iomem *keygen_regs = keygen->keygen_regs;
struct fman_kg_scheme_regs scheme_regs;
struct keygen_scheme *scheme;
u32 tmp_reg;
int err;
scheme = get_scheme(keygen, scheme_id);
if (!scheme) {
pr_err("Requested Scheme does not exist\n");
return -EINVAL;
}
if (enable && scheme->used) {
pr_err("The requested Scheme is already used\n");
return -EINVAL;
}
/* Clear scheme registers */
memset(&scheme_regs, 0, sizeof(struct fman_kg_scheme_regs));
/* Setup all scheme registers: */
tmp_reg = 0;
if (enable) {
/* Enable Scheme */
tmp_reg |= KG_SCH_MODE_EN;
/* Enqueue frame NIA */
tmp_reg |= ENQUEUE_KG_DFLT_NIA;
}
scheme_regs.kgse_mode = tmp_reg;
scheme_regs.kgse_mv = scheme->match_vector;
/* Scheme doesn't override StorageProfile:
* valid only for DPAA_VERSION >= 11
*/
scheme_regs.kgse_vsp = KG_SCH_VSP_NO_KSP_EN;
/* Configure Hard-Coded Rx Hashing: */
if (scheme->use_hashing) {
/* configure kgse_ekfc */
scheme_regs.kgse_ekfc = DEFAULT_HASH_KEY_EXTRACT_FIELDS;
/* configure kgse_ekdv */
tmp_reg = 0;
tmp_reg |= (KG_SCH_DEF_USE_KGSE_DV_0 <<
KG_SCH_DEF_IP_ADDR_SHIFT);
tmp_reg |= (KG_SCH_DEF_USE_KGSE_DV_1 <<
KG_SCH_DEF_L4_PORT_SHIFT);
scheme_regs.kgse_ekdv = tmp_reg;
/* configure kgse_dv0 */
scheme_regs.kgse_dv0 = DEFAULT_HASH_KEY_IPv4_ADDR;
/* configure kgse_dv1 */
scheme_regs.kgse_dv1 = DEFAULT_HASH_KEY_L4_PORT;
/* configure kgse_hc */
tmp_reg = 0;
tmp_reg |= ((scheme->hash_fqid_count - 1) <<
DEFAULT_HASH_DIST_FQID_SHIFT);
tmp_reg |= scheme->hashShift << KG_SCH_HASH_CONFIG_SHIFT_SHIFT;
if (scheme->symmetric_hash) {
/* Normally extraction key should be verified if
* complies with symmetric hash
* But because extraction is hard-coded, we are sure
* the key is symmetric
*/
tmp_reg |= KG_SCH_HASH_CONFIG_SYM;
}
scheme_regs.kgse_hc = tmp_reg;
} else {
scheme_regs.kgse_ekfc = 0;
scheme_regs.kgse_hc = 0;
scheme_regs.kgse_ekdv = 0;
scheme_regs.kgse_dv0 = 0;
scheme_regs.kgse_dv1 = 0;
}
/* configure kgse_fqb: Scheme FQID base */
tmp_reg = 0;
tmp_reg |= scheme->base_fqid;
scheme_regs.kgse_fqb = tmp_reg;
/* features not used by hard-coded configuration */
scheme_regs.kgse_bmch = 0;
scheme_regs.kgse_bmcl = 0;
scheme_regs.kgse_spc = 0;
/* Write scheme registers */
err = keygen_write_scheme(keygen_regs, scheme_id, &scheme_regs, true);
if (err != 0) {
pr_err("Writing scheme registers failed\n");
return err;
}
/* Update used field for Scheme */
scheme->used = enable;
return 0;
}
/* keygen_init
*
* KeyGen initialization:
* Initializes and enables KeyGen, allocate driver memory, setup registers,
* clear port bindings, invalidate all schemes
*
* keygen_regs: KeyGen registers base address
*
* Return: Handle to KeyGen driver
*/
struct fman_keygen *keygen_init(struct fman_kg_regs __iomem *keygen_regs)
{
struct fman_keygen *keygen;
u32 ar;
int i;
/* Allocate memory for KeyGen driver */
keygen = kzalloc(sizeof(*keygen), GFP_KERNEL);
if (!keygen)
return NULL;
keygen->keygen_regs = keygen_regs;
/* KeyGen initialization (for Master partition):
* Setup KeyGen registers
*/
iowrite32be(ENQUEUE_KG_DFLT_NIA, &keygen_regs->fmkg_gcr);
iowrite32be(FM_EX_KG_DOUBLE_ECC | FM_EX_KG_KEYSIZE_OVERFLOW,
&keygen_regs->fmkg_eer);
iowrite32be(0, &keygen_regs->fmkg_fdor);
iowrite32be(0, &keygen_regs->fmkg_gdv0r);
iowrite32be(0, &keygen_regs->fmkg_gdv1r);
/* Clear binding between ports to schemes and classification plans
* so that all ports are not bound to any scheme/classification plan
*/
for (i = 0; i < FMAN_MAX_NUM_OF_HW_PORTS; i++) {
/* Clear all pe sp schemes registers */
keygen_write_sp(keygen_regs, 0xffffffff, false);
ar = build_ar_bind_scheme(i, true);
keygen_write_ar_wait(keygen_regs, ar);
/* Clear all pe cpp classification plans registers */
keygen_write_cpp(keygen_regs, 0);
ar = build_ar_bind_cls_plan(i, true);
keygen_write_ar_wait(keygen_regs, ar);
}
/* Enable all scheme interrupts */
iowrite32be(0xFFFFFFFF, &keygen_regs->fmkg_seer);
iowrite32be(0xFFFFFFFF, &keygen_regs->fmkg_seeer);
/* Enable KeyGen */
iowrite32be(ioread32be(&keygen_regs->fmkg_gcr) | FM_KG_KGGCR_EN,
&keygen_regs->fmkg_gcr);
return keygen;
}
EXPORT_SYMBOL(keygen_init);
/* keygen_port_hashing_init
*
* Initializes a port for Rx Hashing with specified configuration parameters
*
* keygen: KeyGen handle
* hw_port_id: HW Port ID
* hash_base_fqid: Hashing Base FQID used for spreading
* hash_size: Hashing size
*
* Return: Zero for success or error code in case of failure
*/
int keygen_port_hashing_init(struct fman_keygen *keygen, u8 hw_port_id,
u32 hash_base_fqid, u32 hash_size)
{
struct keygen_scheme *scheme;
u8 scheme_id;
int err;
/* Validate Scheme configuration parameters */
if (hash_base_fqid == 0 || (hash_base_fqid & ~0x00FFFFFF)) {
pr_err("Base FQID must be between 1 and 2^24-1\n");
return -EINVAL;
}
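/* x & (x - 1) is zero only when x is a power of two */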
if (hash_size == 0 || (hash_size & (hash_size - 1)) != 0) {
pr_err("Hash size must be power of two\n");
return -EINVAL;
}
/* Find a free scheme */
err = get_free_scheme_id(keygen, &scheme_id);
if (err) {
pr_err("The maximum number of available Schemes has been exceeded\n");
return -EINVAL;
}
/* Create and configure Hard-Coded Scheme: */
scheme = get_scheme(keygen, scheme_id);
if (!scheme) {
pr_err("Requested Scheme does not exist\n");
return -EINVAL;
}
if (scheme->used) {
pr_err("The requested Scheme is already used\n");
return -EINVAL;
}
/* Clear all scheme fields because the scheme may have been
* previously used
*/
memset(scheme, 0, sizeof(struct keygen_scheme));
/* Setup scheme: */
scheme->hw_port_id = hw_port_id;
scheme->use_hashing = true;
scheme->base_fqid = hash_base_fqid;
scheme->hash_fqid_count = hash_size;
scheme->symmetric_hash = DEFAULT_SYMMETRIC_HASH;
scheme->hashShift = DEFAULT_HASH_SHIFT;
/* All Schemes in hard-coded configuration
* are Indirect Schemes
*/
scheme->match_vector = 0;
err = keygen_scheme_setup(keygen, scheme_id, true);
if (err != 0) {
pr_err("Scheme setup failed\n");
return err;
}
/* Bind Rx port to Scheme */
err = keygen_bind_port_to_schemes(keygen, scheme_id, true);
if (err != 0) {
pr_err("Binding port to schemes failed\n");
return err;
}
return 0;
}
EXPORT_SYMBOL(keygen_port_hashing_init);
/*
* Copyright 2017 NXP
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* * Neither the name of NXP nor the
* names of its contributors may be used to endorse or promote products
* derived from this software without specific prior written permission.
*
*
* ALTERNATIVELY, this software may be distributed under the terms of the
* GNU General Public License ("GPL") as published by the Free Software
* Foundation, either version 2 of that License or (at your option) any
* later version.
*
* THIS SOFTWARE IS PROVIDED BY NXP ``AS IS'' AND ANY
* EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
* WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
* DISCLAIMED. IN NO EVENT SHALL NXP BE LIABLE FOR ANY
* DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
* (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
* ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef __KEYGEN_H
#define __KEYGEN_H
#include <linux/io.h>
struct fman_keygen;
struct fman_kg_regs;
struct fman_keygen *keygen_init(struct fman_kg_regs __iomem *keygen_regs);
int keygen_port_hashing_init(struct fman_keygen *keygen, u8 hw_port_id,
u32 hash_base_fqid, u32 hash_size);
#endif /* __KEYGEN_H */
...@@ -32,10 +32,6 @@ ...@@ -32,10 +32,6 @@
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include "fman_port.h"
#include "fman.h"
#include "fman_sp.h"
#include <linux/io.h> #include <linux/io.h>
#include <linux/slab.h> #include <linux/slab.h>
#include <linux/module.h> #include <linux/module.h>
@@ -45,6 +41,11 @@
#include <linux/delay.h>
#include <linux/libfdt_env.h>
#include "fman.h"
#include "fman_port.h"
#include "fman_sp.h"
#include "fman_keygen.h"
/* Queue ID */
#define DFLT_FQ_ID 0x00FFFFFF
@@ -184,6 +185,7 @@
#define NIA_ENG_QMI_ENQ 0x00540000
#define NIA_ENG_QMI_DEQ 0x00580000
#define NIA_ENG_HWP 0x00440000
#define NIA_ENG_HWK 0x00480000
#define NIA_BMI_AC_ENQ_FRAME 0x00000002
#define NIA_BMI_AC_TX_RELEASE 0x000002C0
#define NIA_BMI_AC_RELEASE 0x000000C0
@@ -394,6 +396,8 @@ struct fman_port_bpools {
struct fman_port_cfg {
u32 dflt_fqid;
u32 err_fqid;
u32 pcd_base_fqid;
u32 pcd_fqs_count;
u8 deq_sp;
bool deq_high_priority;
enum fman_port_deq_type deq_type;
@@ -1271,6 +1275,10 @@ static void set_rx_dflt_cfg(struct fman_port *port,
port_params->specific_params.rx_params.err_fqid;
port->cfg->dflt_fqid =
port_params->specific_params.rx_params.dflt_fqid;
port->cfg->pcd_base_fqid =
port_params->specific_params.rx_params.pcd_base_fqid;
port->cfg->pcd_fqs_count =
port_params->specific_params.rx_params.pcd_fqs_count;
}
static void set_tx_dflt_cfg(struct fman_port *port,
@@ -1397,6 +1405,24 @@ int fman_port_config(struct fman_port *port, struct fman_port_params *params)
}
EXPORT_SYMBOL(fman_port_config);
/**
* fman_port_use_kg_hash
* port: A pointer to a FM Port module.
* Sets the HW KeyGen or the BMI as HW Parser next engine, enabling
* or bypassing the KeyGen hashing of Rx traffic
*/
void fman_port_use_kg_hash(struct fman_port *port, bool enable)
{
if (enable)
/* After the Parser frames go to KeyGen */
iowrite32be(NIA_ENG_HWK, &port->bmi_regs->rx.fmbm_rfpne);
else
/* After the Parser frames go to BMI */
iowrite32be(NIA_ENG_BMI | NIA_BMI_AC_ENQ_FRAME,
&port->bmi_regs->rx.fmbm_rfpne);
}
EXPORT_SYMBOL(fman_port_use_kg_hash);
/**
* fman_port_init
* port: A pointer to a FM Port module.
@@ -1407,9 +1433,10 @@ EXPORT_SYMBOL(fman_port_config);
*/
int fman_port_init(struct fman_port *port)
{
struct fman_port_init_params params;
struct fman_keygen *keygen;
struct fman_port_cfg *cfg;
int err;
struct fman_port_init_params params;
if (is_init_done(port->cfg))
return -EINVAL;
@@ -1472,6 +1499,17 @@ int fman_port_init(struct fman_port *port)
if (err)
return err;
if (port->cfg->pcd_fqs_count) {
keygen = port->dts_params.fman->keygen;
err = keygen_port_hashing_init(keygen, port->port_id,
port->cfg->pcd_base_fqid,
port->cfg->pcd_fqs_count);
if (err)
return err;
fman_port_use_kg_hash(port, true);
}
kfree(port->cfg);
port->cfg = NULL;
@@ -1682,6 +1720,17 @@ u32 fman_port_get_qman_channel_id(struct fman_port *port)
}
EXPORT_SYMBOL(fman_port_get_qman_channel_id);
int fman_port_get_hash_result_offset(struct fman_port *port, u32 *offset)
{
if (port->buffer_offsets.hash_result_offset == ILLEGAL_BASE)
return -EINVAL;
*offset = port->buffer_offsets.hash_result_offset;
return 0;
}
EXPORT_SYMBOL(fman_port_get_hash_result_offset);
static int fman_port_probe(struct platform_device *of_dev)
{
struct fman_port *port;
......
@@ -100,6 +100,9 @@ struct fman_port;
struct fman_port_rx_params {
u32 err_fqid; /* Error Queue Id. */
u32 dflt_fqid; /* Default Queue Id. */
u32 pcd_base_fqid; /* PCD base Queue Id. */
u32 pcd_fqs_count; /* Number of PCD FQs. */
/* Which external buffer pools are used
* (up to FMAN_PORT_MAX_EXT_POOLS_NUM), and their sizes.
*/
@@ -134,6 +137,8 @@ struct fman_port_params {
int fman_port_config(struct fman_port *port, struct fman_port_params *params);
void fman_port_use_kg_hash(struct fman_port *port, bool enable);
int fman_port_init(struct fman_port *port);
int fman_port_cfg_buf_prefix_content(struct fman_port *port,
@@ -146,6 +151,8 @@ int fman_port_enable(struct fman_port *port);
u32 fman_port_get_qman_channel_id(struct fman_port *port);
int fman_port_get_hash_result_offset(struct fman_port *port, u32 *offset);
struct fman_port *fman_port_bind(struct device *dev);
#endif /* __FMAN_PORT_H */