Commit 513334e1 authored by David S. Miller

Merge branch 'mlx5-next'

Saeed Mahameed says:

====================
Mellanox 100G SRIOV E-Switch offload and VF representors

We are happy to announce SRIOV E-Switch offload and VF netdev representors.

Or Gerlitz says:

Currently, the way SR-IOV embedded switches are dealt with in Linux is limited
in its expressiveness and flexibility, but this is not necessarily due to
hardware limitations. The kernel software model for controlling the SR-IOV
switch simply does not allow the configuration of anything more complex than
MAC/VLAN based forwarding.

Hence the benefits brought by SRIOV come at the price of management flexibility,
when compared to software virtual switches which are used in Para-Virtual (PV)
schemes and allow implementing complex policies and virtual topologies. Such
SW switching typically involves complex per-packet processing within the host
kernel, using subsystems such as TC, Bridge, Netfilter and Open vSwitch.

We'd like to change that and get the best of both worlds: the performance of SR-IOV
with the management flexibility of software switches. This will eventually include
a richer model for controlling the SR-IOV switch for flow-based switching and
tunneling. Under this model, the e-switch is configured dynamically and a fallback
to software exists in case the hardware is unable to offload all required flows.

This series from Hadar Hen-Zion and myself is the 1st step in that direction.
Specifically, it provides full host software control over the SRIOV embedded
switch and paves the way to offloading switching rules and policies with downstream
patches.

To allow for host-based SW control over the SRIOV HW switch, we introduce a per-VF
representor host netdevice. The VF representor plays the same role as TAP devices
in a PV setup. A packet sent through the VF representor on the host arrives at
the VF, and a packet sent through the VF is received by its representor. The
administrator can hook the representor netdev into a kernel switching component;
once they do that, packets from the VF are subject to the steering (matching and
actions) of that software component.

Doing so indeed hurts the performance benefits of SRIOV, as it forces all the
traffic to go through the hypervisor. However, this SW representation is what
would eventually allow us to introduce a hybrid model, where we offload steering
for some of the VF/VM traffic to the HW while letting other VM traffic go
through the hypervisor. Examples of the latter are the first packets of flows,
which SW switches need for learning and/or matching against a policy database,
and types of traffic for which offloading is not desired or not supported by the
current HW e-switch generation.

The embedded switch is managed through a PCI device driver. As such, we introduce
a devlink/PCI based scheme for setting the mode of the e-switch. The current mode
(where steering is done based on MAC/VLAN, etc.) is referred to as "legacy" and the
new mode as "offloads".
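
To make the devlink plumbing concrete, here is a minimal sketch of the get/set
hooks this scheme boils down to (simplified from the series: esw_offloads_start()
and esw_offloads_stop() stand in for the helpers that tear down and re-create the
FDB in the requested mode; locking and SRIOV-enabled checks are omitted, and the
uapi value reflects the v2 rename to "switchdev"):

static int mlx5_devlink_eswitch_mode_set(struct devlink *devlink, u16 mode)
{
	struct mlx5_core_dev *dev = devlink_priv(devlink);

	/* re-create the e-switch FDB in the requested mode */
	if (mode == DEVLINK_ESWITCH_MODE_SWITCHDEV)
		return esw_offloads_start(dev->priv.eswitch);

	return esw_offloads_stop(dev->priv.eswitch);
}

static int mlx5_devlink_eswitch_mode_get(struct devlink *devlink, u16 *mode)
{
	struct mlx5_core_dev *dev = devlink_priv(devlink);

	/* translate the driver-internal SRIOV_LEGACY/SRIOV_OFFLOADS
	 * state into the devlink uapi values
	 */
	*mode = dev->priv.eswitch->mode == SRIOV_OFFLOADS ?
		DEVLINK_ESWITCH_MODE_SWITCHDEV : DEVLINK_ESWITCH_MODE_LEGACY;
	return 0;
}

static const struct devlink_ops mlx5_devlink_ops = {
	.eswitch_mode_set = mlx5_devlink_eswitch_mode_set,
	.eswitch_mode_get = mlx5_devlink_eswitch_mode_get,
};

With an iproute2 that has devlink e-switch support, the mode is then flipped with
something like "devlink dev eswitch set pci/0000:06:00.0 mode switchdev" (the PCI
address here is a placeholder).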

For the mlx5 driver / ConnectX-4 HW case, the VF representors implement a functional
subset of the mlx5e Ethernet netdevice using their own profile. This design buys us a
robust implementation with code reuse and sharing.

The representors are created by the host PCI driver when (1) SRIOV is enabled and (2) the
e-switch is set to offloads mode. Currently, in mlx5 the e-switch management is done
through the PF vport (0), and hence the VF representors, along with the existing PF
netdev which represents the uplink, share the PCI PF device instance.

The series is built from two major components: the first relates to the e-switch
management and the second to the VF representors.

We start with a refactoring that treats the existing SRIOV e-switch code as operating
in legacy mode. Next, we add the code for the offloads mode, which programs the e-switch
to operate in a way that serves software-based switching:

1. a miss rule which matches all packets that do not match any other HW switching rule
and forwards them to the e-switch management port (0) for further processing.

2. infrastructure for send-to-vport rules, which conceptually bypass the other "normal"
steering rules present in the e-switch datapath; such rules apply only to packets that
originate in the e-switch manager vport (0). A sketch of such a rule appears after the
next paragraph.

Since all the VF reps run over the same e-switch port, we add more logic in the host PCI
driver to do HW steering of missed packets into the HW queue opened by the respective VF
representor. Finally, we add the devlink APIs to configure the e-switch mode.
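
To make the two rule types above concrete, here is roughly how a send-to-vport rule
is expressed over the driver's flow steering API (a sketch along the lines of the
series' eswitch_offloads.c; the function name is shortened and some error handling
is trimmed):

struct mlx5_flow_rule *
esw_add_send_to_vport_rule(struct mlx5_eswitch *esw, int vport, u32 sqn)
{
	struct mlx5_flow_destination dest;
	struct mlx5_flow_rule *rule;
	u32 *match_v, *match_c;
	void *misc;

	match_v = mlx5_vzalloc(MLX5_ST_SZ_BYTES(fte_match_param));
	match_c = mlx5_vzalloc(MLX5_ST_SZ_BYTES(fte_match_param));
	if (!match_v || !match_c) {
		rule = ERR_PTR(-ENOMEM);
		goto out;
	}

	/* match on the sending SQ and on the e-switch manager vport (0) */
	misc = MLX5_ADDR_OF(fte_match_param, match_v, misc_parameters);
	MLX5_SET(fte_match_set_misc, misc, source_sqn, sqn);
	MLX5_SET(fte_match_set_misc, misc, source_port, 0x0);

	misc = MLX5_ADDR_OF(fte_match_param, match_c, misc_parameters);
	MLX5_SET_TO_ONES(fte_match_set_misc, misc, source_sqn);
	MLX5_SET_TO_ONES(fte_match_set_misc, misc, source_port);

	/* forward straight to the VF vport, bypassing the normal FDB rules */
	dest.type = MLX5_FLOW_DESTINATION_TYPE_VPORT;
	dest.vport_num = vport;

	rule = mlx5_add_flow_rule(esw->fdb_table.fdb,
				  MLX5_MATCH_MISC_PARAMETERS,
				  match_c, match_v,
				  MLX5_FLOW_CONTEXT_ACTION_FWD_DEST,
				  0, &dest);
out:
	kvfree(match_v);
	kvfree(match_c);
	return rule;
}

The miss rule is the mirror image: an empty match (criteria enable 0) at the lowest
priority whose destination is vport 0, so anything the offloaded rules do not claim
lands at the e-switch manager and, from there, in the respective representor's RQ.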

The second part, from Hadar, starts with some refactoring work which allows multiple
mlx5e NIC instances to be created over the same PCI function, share common resources
and avoid unwanted loopbacks.

Next comes the heart of the change: a profile definition which practically allows the
"conventional" mlx5e NIC use cases, such as native mode (non-SRIOV), VF and PF, and the VF
representor to share the Ethernet driver code. This is done by a small surgery that ended up
with a few internal callbacks to be implemented by each profile instance. The profile
for the conventional NIC is implemented so as to preserve the existing functionality.
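
The resulting profile is a plain structure of callbacks (struct mlx5e_profile in the
diff below); as an illustration, a VF representor instance only fills in the subset it
needs, roughly like this (the mlx5e_*rep* callbacks name the series' en_rep.c
implementations, and the single-channel choice is an assumption of this sketch):

static int mlx5e_get_rep_max_num_channels(struct mlx5_core_dev *mdev)
{
	return 1;	/* a representor gets by with one channel */
}

static const struct mlx5e_profile mlx5e_rep_profile = {
	.init		= mlx5e_init_rep,	/* minimal netdev/priv setup */
	.init_rx	= mlx5e_init_rep_rx,	/* direct RQT/TIR + rep rx rule */
	.cleanup_rx	= mlx5e_cleanup_rep_rx,
	.init_tx	= mlx5e_init_rep_tx,	/* TISes only, no DCB */
	.cleanup_tx	= mlx5e_cleanup_nic_tx,
	.update_stats	= mlx5e_update_sw_rep_counters,
	.max_nch	= mlx5e_get_rep_max_num_channels,
	.max_tc		= 1,
};

Callbacks a profile doesn't need (.enable, .disable and .cleanup here) may stay NULL;
the common code checks before invoking them.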

The last two patches add an e-switch registration API for the VF representors and the
implementation of the VF representor netdevice profile. Being an mlx5e instance, the
VF representor uses HW send/recv queues, completion queues and such. It currently doesn't
support NIC offloads, but some of them could be added later on. The VF representor has
switchdev ops, where currently the only supported API is the one that returns the HW ID,
which is needed to identify multiple representors belonging to the same e-switch.
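
Concretely, that HW ID query is the switchdev SWITCHDEV_ATTR_ID_PORT_PARENT_ID
attribute; a minimal sketch of the shared mlx5e_attr_get() hook follows, assuming
(as this series does) that the ID is derived from the e-switch manager vport's MAC
address:

static int mlx5e_attr_get(struct net_device *dev, struct switchdev_attr *attr)
{
	struct mlx5e_priv *priv = netdev_priv(dev);
	u8 mac[ETH_ALEN];

	switch (attr->id) {
	case SWITCHDEV_ATTR_ID_PORT_PARENT_ID:
		/* every netdev hanging off the same e-switch reports
		 * the same parent ID
		 */
		mlx5_query_nic_vport_mac_address(priv->mdev, 0, mac);
		attr->u.ppid.id_len = ETH_ALEN;
		memcpy(attr->u.ppid.id, mac, ETH_ALEN);
		break;
	default:
		return -EOPNOTSUPP;
	}

	return 0;
}

User space sees this as the netdev's phys_switch_id (e.g. via "ip -d link show"),
which is what lets tools group the uplink and all its representors as ports of one
switch.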

The architecture + solution (software and firmware) work was done by a team consisting
of Ilya Lesokhin, Haggai Eran, Rony Efraim, Tal Anker, Natan Oppenheimer, Saeed Mahameed,
Hadar and Or. Thank you all!

v1 --> v2 fixes:
* removed unneeded variable (patch #3)
* removed unused value DEVLINK_ESWITCH_MODE_NONE (patch #8)
* changed the devlink mode name from "offloads" to "switchdev", which
   better describes what we are referring to here, using a known concept (patch #8)
* correctly refer to devlink e-switch modes (patch #10)
* use the correct mlx5e way to define the VF rep statistics (patch #16)

v2 --> v3 fixes:
* Rebased on top of 6fde0e63 'be2net: signedness bug in be_msix_enable()'
* Handled a compilation error introduced by rebasing on top of "f5074d0c Merge branch 'mlx5-100G-fixes'"
* This series applies perfectly even with 'mlx5 resiliency and xmit path fixes' merged to net-next
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
parents 3ea00443 cb67b832
...@@ -4,6 +4,7 @@ ...@@ -4,6 +4,7 @@
config MLX5_CORE config MLX5_CORE
tristate "Mellanox Technologies ConnectX-4 and Connect-IB core driver" tristate "Mellanox Technologies ConnectX-4 and Connect-IB core driver"
depends on MAY_USE_DEVLINK
depends on PCI depends on PCI
default n default n
---help--- ---help---
......
...@@ -5,9 +5,9 @@ mlx5_core-y := main.o cmd.o debugfs.o fw.o eq.o uar.o pagealloc.o \ ...@@ -5,9 +5,9 @@ mlx5_core-y := main.o cmd.o debugfs.o fw.o eq.o uar.o pagealloc.o \
mad.o transobj.o vport.o sriov.o fs_cmd.o fs_core.o \ mad.o transobj.o vport.o sriov.o fs_cmd.o fs_core.o \
fs_counters.o rl.o fs_counters.o rl.o
mlx5_core-$(CONFIG_MLX5_CORE_EN) += wq.o eswitch.o \ mlx5_core-$(CONFIG_MLX5_CORE_EN) += wq.o eswitch.o eswitch_offloads.o \
en_main.o en_fs.o en_ethtool.o en_tx.o en_rx.o \ en_main.o en_common.o en_fs.o en_ethtool.o en_tx.o \
en_rx_am.o en_txrx.o en_clock.o vxlan.o en_tc.o \ en_rx.o en_rx_am.o en_txrx.o en_clock.o vxlan.o \
en_arfs.o en_tc.o en_arfs.o en_rep.o
mlx5_core-$(CONFIG_MLX5_CORE_EN_DCB) += en_dcbnl.o mlx5_core-$(CONFIG_MLX5_CORE_EN_DCB) += en_dcbnl.o
...@@ -44,6 +44,7 @@ ...@@ -44,6 +44,7 @@
#include <linux/mlx5/vport.h> #include <linux/mlx5/vport.h>
#include <linux/mlx5/transobj.h> #include <linux/mlx5/transobj.h>
#include <linux/rhashtable.h> #include <linux/rhashtable.h>
#include <net/switchdev.h>
#include "wq.h" #include "wq.h"
#include "mlx5_core.h" #include "mlx5_core.h"
#include "en_stats.h" #include "en_stats.h"
...@@ -552,9 +553,15 @@ struct mlx5e_flow_steering { ...@@ -552,9 +553,15 @@ struct mlx5e_flow_steering {
struct mlx5e_arfs_tables arfs; struct mlx5e_arfs_tables arfs;
}; };
struct mlx5e_direct_tir { struct mlx5e_rqt {
u32 tirn;
u32 rqtn; u32 rqtn;
bool enabled;
};
struct mlx5e_tir {
u32 tirn;
struct mlx5e_rqt rqt;
struct list_head list;
}; };
enum { enum {
...@@ -562,6 +569,22 @@ enum { ...@@ -562,6 +569,22 @@ enum {
MLX5E_NIC_PRIO MLX5E_NIC_PRIO
}; };
struct mlx5e_profile {
void (*init)(struct mlx5_core_dev *mdev,
struct net_device *netdev,
const struct mlx5e_profile *profile, void *ppriv);
void (*cleanup)(struct mlx5e_priv *priv);
int (*init_rx)(struct mlx5e_priv *priv);
void (*cleanup_rx)(struct mlx5e_priv *priv);
int (*init_tx)(struct mlx5e_priv *priv);
void (*cleanup_tx)(struct mlx5e_priv *priv);
void (*enable)(struct mlx5e_priv *priv);
void (*disable)(struct mlx5e_priv *priv);
void (*update_stats)(struct mlx5e_priv *priv);
int (*max_nch)(struct mlx5_core_dev *mdev);
int max_tc;
};
struct mlx5e_priv { struct mlx5e_priv {
/* priv data path fields - start */ /* priv data path fields - start */
struct mlx5e_sq **txq_to_sq_map; struct mlx5e_sq **txq_to_sq_map;
...@@ -570,18 +593,14 @@ struct mlx5e_priv { ...@@ -570,18 +593,14 @@ struct mlx5e_priv {
unsigned long state; unsigned long state;
struct mutex state_lock; /* Protects Interface state */ struct mutex state_lock; /* Protects Interface state */
struct mlx5_uar cq_uar;
u32 pdn;
u32 tdn;
struct mlx5_core_mkey mkey;
struct mlx5_core_mkey umr_mkey; struct mlx5_core_mkey umr_mkey;
struct mlx5e_rq drop_rq; struct mlx5e_rq drop_rq;
struct mlx5e_channel **channel; struct mlx5e_channel **channel;
u32 tisn[MLX5E_MAX_NUM_TC]; u32 tisn[MLX5E_MAX_NUM_TC];
u32 indir_rqtn; struct mlx5e_rqt indir_rqt;
u32 indir_tirn[MLX5E_NUM_INDIR_TIRS]; struct mlx5e_tir indir_tir[MLX5E_NUM_INDIR_TIRS];
struct mlx5e_direct_tir direct_tir[MLX5E_MAX_NUM_CHANNELS]; struct mlx5e_tir direct_tir[MLX5E_MAX_NUM_CHANNELS];
u32 tx_rates[MLX5E_MAX_NUM_SQS]; u32 tx_rates[MLX5E_MAX_NUM_SQS];
struct mlx5e_flow_steering fs; struct mlx5e_flow_steering fs;
...@@ -599,6 +618,8 @@ struct mlx5e_priv { ...@@ -599,6 +618,8 @@ struct mlx5e_priv {
struct mlx5e_stats stats; struct mlx5e_stats stats;
struct mlx5e_tstamp tstamp; struct mlx5e_tstamp tstamp;
u16 q_counter; u16 q_counter;
const struct mlx5e_profile *profile;
void *ppriv;
}; };
enum mlx5e_link_mode { enum mlx5e_link_mode {
...@@ -788,5 +809,39 @@ int mlx5e_rx_flow_steer(struct net_device *dev, const struct sk_buff *skb, ...@@ -788,5 +809,39 @@ int mlx5e_rx_flow_steer(struct net_device *dev, const struct sk_buff *skb,
#endif #endif
u16 mlx5e_get_max_inline_cap(struct mlx5_core_dev *mdev); u16 mlx5e_get_max_inline_cap(struct mlx5_core_dev *mdev);
int mlx5e_create_tir(struct mlx5_core_dev *mdev,
struct mlx5e_tir *tir, u32 *in, int inlen);
void mlx5e_destroy_tir(struct mlx5_core_dev *mdev,
struct mlx5e_tir *tir);
int mlx5e_create_mdev_resources(struct mlx5_core_dev *mdev);
void mlx5e_destroy_mdev_resources(struct mlx5_core_dev *mdev);
int mlx5e_refresh_tirs_self_loopback_enable(struct mlx5_core_dev *mdev);
struct mlx5_eswitch_rep;
int mlx5e_vport_rep_load(struct mlx5_eswitch *esw,
struct mlx5_eswitch_rep *rep);
void mlx5e_vport_rep_unload(struct mlx5_eswitch *esw,
struct mlx5_eswitch_rep *rep);
int mlx5e_nic_rep_load(struct mlx5_eswitch *esw, struct mlx5_eswitch_rep *rep);
void mlx5e_nic_rep_unload(struct mlx5_eswitch *esw,
struct mlx5_eswitch_rep *rep);
int mlx5e_add_sqs_fwd_rules(struct mlx5e_priv *priv);
void mlx5e_remove_sqs_fwd_rules(struct mlx5e_priv *priv);
int mlx5e_attr_get(struct net_device *dev, struct switchdev_attr *attr);
int mlx5e_create_direct_rqts(struct mlx5e_priv *priv);
void mlx5e_destroy_rqt(struct mlx5e_priv *priv, struct mlx5e_rqt *rqt);
int mlx5e_create_direct_tirs(struct mlx5e_priv *priv);
void mlx5e_destroy_direct_tirs(struct mlx5e_priv *priv);
int mlx5e_create_tises(struct mlx5e_priv *priv);
void mlx5e_cleanup_nic_tx(struct mlx5e_priv *priv);
int mlx5e_close(struct net_device *netdev);
int mlx5e_open(struct net_device *netdev);
void mlx5e_update_stats_work(struct work_struct *work);
void *mlx5e_create_netdev(struct mlx5_core_dev *mdev,
const struct mlx5e_profile *profile, void *ppriv);
void mlx5e_destroy_netdev(struct mlx5_core_dev *mdev, struct mlx5e_priv *priv);
struct rtnl_link_stats64 *
mlx5e_get_stats(struct net_device *dev, struct rtnl_link_stats64 *stats);
#endif /* __MLX5_EN_H__ */ #endif /* __MLX5_EN_H__ */
...@@ -93,14 +93,14 @@ static enum mlx5e_traffic_types arfs_get_tt(enum arfs_type type) ...@@ -93,14 +93,14 @@ static enum mlx5e_traffic_types arfs_get_tt(enum arfs_type type)
static int arfs_disable(struct mlx5e_priv *priv) static int arfs_disable(struct mlx5e_priv *priv)
{ {
struct mlx5_flow_destination dest; struct mlx5_flow_destination dest;
u32 *tirn = priv->indir_tirn; struct mlx5e_tir *tir = priv->indir_tir;
int err = 0; int err = 0;
int tt; int tt;
int i; int i;
dest.type = MLX5_FLOW_DESTINATION_TYPE_TIR; dest.type = MLX5_FLOW_DESTINATION_TYPE_TIR;
for (i = 0; i < ARFS_NUM_TYPES; i++) { for (i = 0; i < ARFS_NUM_TYPES; i++) {
dest.tir_num = tirn[i]; dest.tir_num = tir[i].tirn;
tt = arfs_get_tt(i); tt = arfs_get_tt(i);
/* Modify ttc rules destination to bypass the aRFS tables*/ /* Modify ttc rules destination to bypass the aRFS tables*/
err = mlx5_modify_rule_destination(priv->fs.ttc.rules[tt], err = mlx5_modify_rule_destination(priv->fs.ttc.rules[tt],
...@@ -176,7 +176,7 @@ static int arfs_add_default_rule(struct mlx5e_priv *priv, ...@@ -176,7 +176,7 @@ static int arfs_add_default_rule(struct mlx5e_priv *priv,
struct arfs_table *arfs_t = &priv->fs.arfs.arfs_tables[type]; struct arfs_table *arfs_t = &priv->fs.arfs.arfs_tables[type];
struct mlx5_flow_destination dest; struct mlx5_flow_destination dest;
u8 match_criteria_enable = 0; u8 match_criteria_enable = 0;
u32 *tirn = priv->indir_tirn; struct mlx5e_tir *tir = priv->indir_tir;
u32 *match_criteria; u32 *match_criteria;
u32 *match_value; u32 *match_value;
int err = 0; int err = 0;
...@@ -192,16 +192,16 @@ static int arfs_add_default_rule(struct mlx5e_priv *priv, ...@@ -192,16 +192,16 @@ static int arfs_add_default_rule(struct mlx5e_priv *priv,
dest.type = MLX5_FLOW_DESTINATION_TYPE_TIR; dest.type = MLX5_FLOW_DESTINATION_TYPE_TIR;
switch (type) { switch (type) {
case ARFS_IPV4_TCP: case ARFS_IPV4_TCP:
dest.tir_num = tirn[MLX5E_TT_IPV4_TCP]; dest.tir_num = tir[MLX5E_TT_IPV4_TCP].tirn;
break; break;
case ARFS_IPV4_UDP: case ARFS_IPV4_UDP:
dest.tir_num = tirn[MLX5E_TT_IPV4_UDP]; dest.tir_num = tir[MLX5E_TT_IPV4_UDP].tirn;
break; break;
case ARFS_IPV6_TCP: case ARFS_IPV6_TCP:
dest.tir_num = tirn[MLX5E_TT_IPV6_TCP]; dest.tir_num = tir[MLX5E_TT_IPV6_TCP].tirn;
break; break;
case ARFS_IPV6_UDP: case ARFS_IPV6_UDP:
dest.tir_num = tirn[MLX5E_TT_IPV6_UDP]; dest.tir_num = tir[MLX5E_TT_IPV6_UDP].tirn;
break; break;
default: default:
err = -EINVAL; err = -EINVAL;
......
/*
* Copyright (c) 2016, Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#include "en.h"
/* mlx5e global resources should be placed in this file.
 * Global resources are common to all the netdevices created on the same NIC.
*/
int mlx5e_create_tir(struct mlx5_core_dev *mdev,
struct mlx5e_tir *tir, u32 *in, int inlen)
{
int err;
err = mlx5_core_create_tir(mdev, in, inlen, &tir->tirn);
if (err)
return err;
list_add(&tir->list, &mdev->mlx5e_res.td.tirs_list);
return 0;
}
void mlx5e_destroy_tir(struct mlx5_core_dev *mdev,
struct mlx5e_tir *tir)
{
mlx5_core_destroy_tir(mdev, tir->tirn);
list_del(&tir->list);
}
static int mlx5e_create_mkey(struct mlx5_core_dev *mdev, u32 pdn,
struct mlx5_core_mkey *mkey)
{
struct mlx5_create_mkey_mbox_in *in;
int err;
in = mlx5_vzalloc(sizeof(*in));
if (!in)
return -ENOMEM;
in->seg.flags = MLX5_PERM_LOCAL_WRITE |
MLX5_PERM_LOCAL_READ |
MLX5_ACCESS_MODE_PA;
in->seg.flags_pd = cpu_to_be32(pdn | MLX5_MKEY_LEN64);
in->seg.qpn_mkey7_0 = cpu_to_be32(0xffffff << 8);
err = mlx5_core_create_mkey(mdev, mkey, in, sizeof(*in), NULL, NULL,
NULL);
kvfree(in);
return err;
}
int mlx5e_create_mdev_resources(struct mlx5_core_dev *mdev)
{
struct mlx5e_resources *res = &mdev->mlx5e_res;
int err;
err = mlx5_alloc_map_uar(mdev, &res->cq_uar, false);
if (err) {
mlx5_core_err(mdev, "alloc_map uar failed, %d\n", err);
return err;
}
err = mlx5_core_alloc_pd(mdev, &res->pdn);
if (err) {
mlx5_core_err(mdev, "alloc pd failed, %d\n", err);
goto err_unmap_free_uar;
}
err = mlx5_core_alloc_transport_domain(mdev, &res->td.tdn);
if (err) {
mlx5_core_err(mdev, "alloc td failed, %d\n", err);
goto err_dealloc_pd;
}
err = mlx5e_create_mkey(mdev, res->pdn, &res->mkey);
if (err) {
mlx5_core_err(mdev, "create mkey failed, %d\n", err);
goto err_dealloc_transport_domain;
}
INIT_LIST_HEAD(&mdev->mlx5e_res.td.tirs_list);
return 0;
err_dealloc_transport_domain:
mlx5_core_dealloc_transport_domain(mdev, res->td.tdn);
err_dealloc_pd:
mlx5_core_dealloc_pd(mdev, res->pdn);
err_unmap_free_uar:
mlx5_unmap_free_uar(mdev, &res->cq_uar);
return err;
}
void mlx5e_destroy_mdev_resources(struct mlx5_core_dev *mdev)
{
struct mlx5e_resources *res = &mdev->mlx5e_res;
mlx5_core_destroy_mkey(mdev, &res->mkey);
mlx5_core_dealloc_transport_domain(mdev, res->td.tdn);
mlx5_core_dealloc_pd(mdev, res->pdn);
mlx5_unmap_free_uar(mdev, &res->cq_uar);
}
int mlx5e_refresh_tirs_self_loopback_enable(struct mlx5_core_dev *mdev)
{
struct mlx5e_tir *tir;
void *in;
int inlen;
int err;
inlen = MLX5_ST_SZ_BYTES(modify_tir_in);
in = mlx5_vzalloc(inlen);
if (!in)
return -ENOMEM;
MLX5_SET(modify_tir_in, in, bitmask.self_lb_en, 1);
list_for_each_entry(tir, &mdev->mlx5e_res.td.tirs_list, list) {
err = mlx5_core_modify_tir(mdev, tir->tirn, in, inlen);
if (err) {
kvfree(in);	/* don't leak the command buffer on error */
return err;
}
}
kvfree(in);
return 0;
}
...@@ -876,7 +876,7 @@ static void mlx5e_modify_tirs_hash(struct mlx5e_priv *priv, void *in, int inlen) ...@@ -876,7 +876,7 @@ static void mlx5e_modify_tirs_hash(struct mlx5e_priv *priv, void *in, int inlen)
mlx5e_build_tir_ctx_hash(tirc, priv); mlx5e_build_tir_ctx_hash(tirc, priv);
for (i = 0; i < MLX5E_NUM_INDIR_TIRS; i++) for (i = 0; i < MLX5E_NUM_INDIR_TIRS; i++)
mlx5_core_modify_tir(mdev, priv->indir_tirn[i], in, inlen); mlx5_core_modify_tir(mdev, priv->indir_tir[i].tirn, in, inlen);
} }
static int mlx5e_set_rxfh(struct net_device *dev, const u32 *indir, static int mlx5e_set_rxfh(struct net_device *dev, const u32 *indir,
...@@ -898,7 +898,7 @@ static int mlx5e_set_rxfh(struct net_device *dev, const u32 *indir, ...@@ -898,7 +898,7 @@ static int mlx5e_set_rxfh(struct net_device *dev, const u32 *indir,
mutex_lock(&priv->state_lock); mutex_lock(&priv->state_lock);
if (indir) { if (indir) {
u32 rqtn = priv->indir_rqtn; u32 rqtn = priv->indir_rqt.rqtn;
memcpy(priv->params.indirection_rqt, indir, memcpy(priv->params.indirection_rqt, indir,
sizeof(priv->params.indirection_rqt)); sizeof(priv->params.indirection_rqt));
......
...@@ -655,7 +655,7 @@ static int mlx5e_generate_ttc_table_rules(struct mlx5e_priv *priv) ...@@ -655,7 +655,7 @@ static int mlx5e_generate_ttc_table_rules(struct mlx5e_priv *priv)
if (tt == MLX5E_TT_ANY) if (tt == MLX5E_TT_ANY)
dest.tir_num = priv->direct_tir[0].tirn; dest.tir_num = priv->direct_tir[0].tirn;
else else
dest.tir_num = priv->indir_tirn[tt]; dest.tir_num = priv->indir_tir[tt].tirn;
rules[tt] = mlx5e_generate_ttc_rule(priv, ft, &dest, rules[tt] = mlx5e_generate_ttc_rule(priv, ft, &dest,
ttc_rules[tt].etype, ttc_rules[tt].etype,
ttc_rules[tt].proto); ttc_rules[tt].proto);
......
...@@ -226,14 +226,14 @@ void mlx5e_update_stats(struct mlx5e_priv *priv) ...@@ -226,14 +226,14 @@ void mlx5e_update_stats(struct mlx5e_priv *priv)
mlx5e_update_sw_counters(priv); mlx5e_update_sw_counters(priv);
} }
static void mlx5e_update_stats_work(struct work_struct *work) void mlx5e_update_stats_work(struct work_struct *work)
{ {
struct delayed_work *dwork = to_delayed_work(work); struct delayed_work *dwork = to_delayed_work(work);
struct mlx5e_priv *priv = container_of(dwork, struct mlx5e_priv, struct mlx5e_priv *priv = container_of(dwork, struct mlx5e_priv,
update_stats_work); update_stats_work);
mutex_lock(&priv->state_lock); mutex_lock(&priv->state_lock);
if (test_bit(MLX5E_STATE_OPENED, &priv->state)) { if (test_bit(MLX5E_STATE_OPENED, &priv->state)) {
mlx5e_update_stats(priv); priv->profile->update_stats(priv);
queue_delayed_work(priv->wq, dwork, queue_delayed_work(priv->wq, dwork,
msecs_to_jiffies(MLX5E_UPDATE_STATS_INTERVAL)); msecs_to_jiffies(MLX5E_UPDATE_STATS_INTERVAL));
} }
...@@ -858,7 +858,7 @@ static int mlx5e_create_cq(struct mlx5e_channel *c, ...@@ -858,7 +858,7 @@ static int mlx5e_create_cq(struct mlx5e_channel *c,
mcq->comp = mlx5e_completion_event; mcq->comp = mlx5e_completion_event;
mcq->event = mlx5e_cq_error_event; mcq->event = mlx5e_cq_error_event;
mcq->irqn = irqn; mcq->irqn = irqn;
mcq->uar = &priv->cq_uar; mcq->uar = &mdev->mlx5e_res.cq_uar;
for (i = 0; i < mlx5_cqwq_get_size(&cq->wq); i++) { for (i = 0; i < mlx5_cqwq_get_size(&cq->wq); i++) {
struct mlx5_cqe64 *cqe = mlx5_cqwq_get_wqe(&cq->wq, i); struct mlx5_cqe64 *cqe = mlx5_cqwq_get_wqe(&cq->wq, i);
...@@ -1036,7 +1036,7 @@ static void mlx5e_build_channeltc_to_txq_map(struct mlx5e_priv *priv, int ix) ...@@ -1036,7 +1036,7 @@ static void mlx5e_build_channeltc_to_txq_map(struct mlx5e_priv *priv, int ix)
{ {
int i; int i;
for (i = 0; i < MLX5E_MAX_NUM_TC; i++) for (i = 0; i < priv->profile->max_tc; i++)
priv->channeltc_to_txq_map[ix][i] = priv->channeltc_to_txq_map[ix][i] =
ix + i * priv->params.num_channels; ix + i * priv->params.num_channels;
} }
...@@ -1136,7 +1136,7 @@ static int mlx5e_open_channel(struct mlx5e_priv *priv, int ix, ...@@ -1136,7 +1136,7 @@ static int mlx5e_open_channel(struct mlx5e_priv *priv, int ix,
c->cpu = cpu; c->cpu = cpu;
c->pdev = &priv->mdev->pdev->dev; c->pdev = &priv->mdev->pdev->dev;
c->netdev = priv->netdev; c->netdev = priv->netdev;
c->mkey_be = cpu_to_be32(priv->mkey.key); c->mkey_be = cpu_to_be32(priv->mdev->mlx5e_res.mkey.key);
c->num_tc = priv->params.num_tc; c->num_tc = priv->params.num_tc;
if (priv->params.rx_am_enabled) if (priv->params.rx_am_enabled)
...@@ -1252,7 +1252,7 @@ static void mlx5e_build_rq_param(struct mlx5e_priv *priv, ...@@ -1252,7 +1252,7 @@ static void mlx5e_build_rq_param(struct mlx5e_priv *priv,
MLX5_SET(wq, wq, end_padding_mode, MLX5_WQ_END_PAD_MODE_ALIGN); MLX5_SET(wq, wq, end_padding_mode, MLX5_WQ_END_PAD_MODE_ALIGN);
MLX5_SET(wq, wq, log_wq_stride, ilog2(sizeof(struct mlx5e_rx_wqe))); MLX5_SET(wq, wq, log_wq_stride, ilog2(sizeof(struct mlx5e_rx_wqe)));
MLX5_SET(wq, wq, log_wq_sz, priv->params.log_rq_size); MLX5_SET(wq, wq, log_wq_sz, priv->params.log_rq_size);
MLX5_SET(wq, wq, pd, priv->pdn); MLX5_SET(wq, wq, pd, priv->mdev->mlx5e_res.pdn);
MLX5_SET(rqc, rqc, counter_set_id, priv->q_counter); MLX5_SET(rqc, rqc, counter_set_id, priv->q_counter);
param->wq.buf_numa_node = dev_to_node(&priv->mdev->pdev->dev); param->wq.buf_numa_node = dev_to_node(&priv->mdev->pdev->dev);
...@@ -1277,7 +1277,7 @@ static void mlx5e_build_sq_param_common(struct mlx5e_priv *priv, ...@@ -1277,7 +1277,7 @@ static void mlx5e_build_sq_param_common(struct mlx5e_priv *priv,
void *wq = MLX5_ADDR_OF(sqc, sqc, wq); void *wq = MLX5_ADDR_OF(sqc, sqc, wq);
MLX5_SET(wq, wq, log_wq_stride, ilog2(MLX5_SEND_WQE_BB)); MLX5_SET(wq, wq, log_wq_stride, ilog2(MLX5_SEND_WQE_BB));
MLX5_SET(wq, wq, pd, priv->pdn); MLX5_SET(wq, wq, pd, priv->mdev->mlx5e_res.pdn);
param->wq.buf_numa_node = dev_to_node(&priv->mdev->pdev->dev); param->wq.buf_numa_node = dev_to_node(&priv->mdev->pdev->dev);
} }
...@@ -1299,7 +1299,7 @@ static void mlx5e_build_common_cq_param(struct mlx5e_priv *priv, ...@@ -1299,7 +1299,7 @@ static void mlx5e_build_common_cq_param(struct mlx5e_priv *priv,
{ {
void *cqc = param->cqc; void *cqc = param->cqc;
MLX5_SET(cqc, cqc, uar_page, priv->cq_uar.index); MLX5_SET(cqc, cqc, uar_page, priv->mdev->mlx5e_res.cq_uar.index);
} }
static void mlx5e_build_rx_cq_param(struct mlx5e_priv *priv, static void mlx5e_build_rx_cq_param(struct mlx5e_priv *priv,
...@@ -1486,7 +1486,8 @@ static void mlx5e_fill_direct_rqt_rqn(struct mlx5e_priv *priv, void *rqtc, ...@@ -1486,7 +1486,8 @@ static void mlx5e_fill_direct_rqt_rqn(struct mlx5e_priv *priv, void *rqtc,
MLX5_SET(rqtc, rqtc, rq_num[0], rqn); MLX5_SET(rqtc, rqtc, rq_num[0], rqn);
} }
static int mlx5e_create_rqt(struct mlx5e_priv *priv, int sz, int ix, u32 *rqtn) static int mlx5e_create_rqt(struct mlx5e_priv *priv, int sz,
int ix, struct mlx5e_rqt *rqt)
{ {
struct mlx5_core_dev *mdev = priv->mdev; struct mlx5_core_dev *mdev = priv->mdev;
void *rqtc; void *rqtc;
...@@ -1509,34 +1510,36 @@ static int mlx5e_create_rqt(struct mlx5e_priv *priv, int sz, int ix, u32 *rqtn) ...@@ -1509,34 +1510,36 @@ static int mlx5e_create_rqt(struct mlx5e_priv *priv, int sz, int ix, u32 *rqtn)
else else
mlx5e_fill_direct_rqt_rqn(priv, rqtc, ix); mlx5e_fill_direct_rqt_rqn(priv, rqtc, ix);
err = mlx5_core_create_rqt(mdev, in, inlen, rqtn); err = mlx5_core_create_rqt(mdev, in, inlen, &rqt->rqtn);
if (!err)
rqt->enabled = true;
kvfree(in); kvfree(in);
return err; return err;
} }
static void mlx5e_destroy_rqt(struct mlx5e_priv *priv, u32 rqtn) void mlx5e_destroy_rqt(struct mlx5e_priv *priv, struct mlx5e_rqt *rqt)
{ {
mlx5_core_destroy_rqt(priv->mdev, rqtn); rqt->enabled = false;
mlx5_core_destroy_rqt(priv->mdev, rqt->rqtn);
}
static int mlx5e_create_indirect_rqts(struct mlx5e_priv *priv)
{
struct mlx5e_rqt *rqt = &priv->indir_rqt;
return mlx5e_create_rqt(priv, MLX5E_INDIR_RQT_SIZE, 0, rqt);
} }
static int mlx5e_create_rqts(struct mlx5e_priv *priv) int mlx5e_create_direct_rqts(struct mlx5e_priv *priv)
{ {
int nch = mlx5e_get_max_num_channels(priv->mdev); struct mlx5e_rqt *rqt;
u32 *rqtn;
int err; int err;
int ix; int ix;
/* Indirect RQT */ for (ix = 0; ix < priv->profile->max_nch(priv->mdev); ix++) {
rqtn = &priv->indir_rqtn; rqt = &priv->direct_tir[ix].rqt;
err = mlx5e_create_rqt(priv, MLX5E_INDIR_RQT_SIZE, 0, rqtn); err = mlx5e_create_rqt(priv, 1 /*size */, ix, rqt);
if (err)
return err;
/* Direct RQTs */
for (ix = 0; ix < nch; ix++) {
rqtn = &priv->direct_tir[ix].rqtn;
err = mlx5e_create_rqt(priv, 1 /*size */, ix, rqtn);
if (err) if (err)
goto err_destroy_rqts; goto err_destroy_rqts;
} }
...@@ -1545,24 +1548,11 @@ static int mlx5e_create_rqts(struct mlx5e_priv *priv) ...@@ -1545,24 +1548,11 @@ static int mlx5e_create_rqts(struct mlx5e_priv *priv)
err_destroy_rqts: err_destroy_rqts:
for (ix--; ix >= 0; ix--) for (ix--; ix >= 0; ix--)
mlx5e_destroy_rqt(priv, priv->direct_tir[ix].rqtn); mlx5e_destroy_rqt(priv, &priv->direct_tir[ix].rqt);
mlx5e_destroy_rqt(priv, priv->indir_rqtn);
return err; return err;
} }
static void mlx5e_destroy_rqts(struct mlx5e_priv *priv)
{
int nch = mlx5e_get_max_num_channels(priv->mdev);
int i;
for (i = 0; i < nch; i++)
mlx5e_destroy_rqt(priv, priv->direct_tir[i].rqtn);
mlx5e_destroy_rqt(priv, priv->indir_rqtn);
}
int mlx5e_redirect_rqt(struct mlx5e_priv *priv, u32 rqtn, int sz, int ix) int mlx5e_redirect_rqt(struct mlx5e_priv *priv, u32 rqtn, int sz, int ix)
{ {
struct mlx5_core_dev *mdev = priv->mdev; struct mlx5_core_dev *mdev = priv->mdev;
...@@ -1598,10 +1588,15 @@ static void mlx5e_redirect_rqts(struct mlx5e_priv *priv) ...@@ -1598,10 +1588,15 @@ static void mlx5e_redirect_rqts(struct mlx5e_priv *priv)
u32 rqtn; u32 rqtn;
int ix; int ix;
rqtn = priv->indir_rqtn; if (priv->indir_rqt.enabled) {
rqtn = priv->indir_rqt.rqtn;
mlx5e_redirect_rqt(priv, rqtn, MLX5E_INDIR_RQT_SIZE, 0); mlx5e_redirect_rqt(priv, rqtn, MLX5E_INDIR_RQT_SIZE, 0);
}
for (ix = 0; ix < priv->params.num_channels; ix++) { for (ix = 0; ix < priv->params.num_channels; ix++) {
rqtn = priv->direct_tir[ix].rqtn; if (!priv->direct_tir[ix].rqt.enabled)
continue;
rqtn = priv->direct_tir[ix].rqt.rqtn;
mlx5e_redirect_rqt(priv, rqtn, 1, ix); mlx5e_redirect_rqt(priv, rqtn, 1, ix);
} }
} }
...@@ -1661,13 +1656,13 @@ static int mlx5e_modify_tirs_lro(struct mlx5e_priv *priv) ...@@ -1661,13 +1656,13 @@ static int mlx5e_modify_tirs_lro(struct mlx5e_priv *priv)
mlx5e_build_tir_ctx_lro(tirc, priv); mlx5e_build_tir_ctx_lro(tirc, priv);
for (tt = 0; tt < MLX5E_NUM_INDIR_TIRS; tt++) { for (tt = 0; tt < MLX5E_NUM_INDIR_TIRS; tt++) {
err = mlx5_core_modify_tir(mdev, priv->indir_tirn[tt], in, err = mlx5_core_modify_tir(mdev, priv->indir_tir[tt].tirn, in,
inlen); inlen);
if (err) if (err)
goto free_in; goto free_in;
} }
for (ix = 0; ix < mlx5e_get_max_num_channels(mdev); ix++) { for (ix = 0; ix < priv->profile->max_nch(priv->mdev); ix++) {
err = mlx5_core_modify_tir(mdev, priv->direct_tir[ix].tirn, err = mlx5_core_modify_tir(mdev, priv->direct_tir[ix].tirn,
in, inlen); in, inlen);
if (err) if (err)
...@@ -1680,40 +1675,6 @@ static int mlx5e_modify_tirs_lro(struct mlx5e_priv *priv) ...@@ -1680,40 +1675,6 @@ static int mlx5e_modify_tirs_lro(struct mlx5e_priv *priv)
return err; return err;
} }
static int mlx5e_refresh_tirs_self_loopback_enable(struct mlx5e_priv *priv)
{
void *in;
int inlen;
int err;
int i;
inlen = MLX5_ST_SZ_BYTES(modify_tir_in);
in = mlx5_vzalloc(inlen);
if (!in)
return -ENOMEM;
MLX5_SET(modify_tir_in, in, bitmask.self_lb_en, 1);
for (i = 0; i < MLX5E_NUM_INDIR_TIRS; i++) {
err = mlx5_core_modify_tir(priv->mdev, priv->indir_tirn[i], in,
inlen);
if (err)
return err;
}
for (i = 0; i < priv->params.num_channels; i++) {
err = mlx5_core_modify_tir(priv->mdev,
priv->direct_tir[i].tirn, in,
inlen);
if (err)
return err;
}
kvfree(in);
return 0;
}
static int mlx5e_set_mtu(struct mlx5e_priv *priv, u16 mtu) static int mlx5e_set_mtu(struct mlx5e_priv *priv, u16 mtu)
{ {
struct mlx5_core_dev *mdev = priv->mdev; struct mlx5_core_dev *mdev = priv->mdev;
...@@ -1782,6 +1743,7 @@ static void mlx5e_netdev_set_tcs(struct net_device *netdev) ...@@ -1782,6 +1743,7 @@ static void mlx5e_netdev_set_tcs(struct net_device *netdev)
int mlx5e_open_locked(struct net_device *netdev) int mlx5e_open_locked(struct net_device *netdev)
{ {
struct mlx5e_priv *priv = netdev_priv(netdev); struct mlx5e_priv *priv = netdev_priv(netdev);
struct mlx5_core_dev *mdev = priv->mdev;
int num_txqs; int num_txqs;
int err; int err;
...@@ -1804,7 +1766,7 @@ int mlx5e_open_locked(struct net_device *netdev) ...@@ -1804,7 +1766,7 @@ int mlx5e_open_locked(struct net_device *netdev)
goto err_clear_state_opened_flag; goto err_clear_state_opened_flag;
} }
err = mlx5e_refresh_tirs_self_loopback_enable(priv); err = mlx5e_refresh_tirs_self_loopback_enable(priv->mdev);
if (err) { if (err) {
netdev_err(netdev, "%s: mlx5e_refresh_tirs_self_loopback_enable failed, %d\n", netdev_err(netdev, "%s: mlx5e_refresh_tirs_self_loopback_enable failed, %d\n",
__func__, err); __func__, err);
...@@ -1817,9 +1779,14 @@ int mlx5e_open_locked(struct net_device *netdev) ...@@ -1817,9 +1779,14 @@ int mlx5e_open_locked(struct net_device *netdev)
#ifdef CONFIG_RFS_ACCEL #ifdef CONFIG_RFS_ACCEL
priv->netdev->rx_cpu_rmap = priv->mdev->rmap; priv->netdev->rx_cpu_rmap = priv->mdev->rmap;
#endif #endif
if (priv->profile->update_stats)
queue_delayed_work(priv->wq, &priv->update_stats_work, 0); queue_delayed_work(priv->wq, &priv->update_stats_work, 0);
if (MLX5_CAP_GEN(mdev, vport_group_manager)) {
err = mlx5e_add_sqs_fwd_rules(priv);
if (err)
goto err_close_channels;
}
return 0; return 0;
err_close_channels: err_close_channels:
...@@ -1829,7 +1796,7 @@ int mlx5e_open_locked(struct net_device *netdev) ...@@ -1829,7 +1796,7 @@ int mlx5e_open_locked(struct net_device *netdev)
return err; return err;
} }
static int mlx5e_open(struct net_device *netdev) int mlx5e_open(struct net_device *netdev)
{ {
struct mlx5e_priv *priv = netdev_priv(netdev); struct mlx5e_priv *priv = netdev_priv(netdev);
int err; int err;
...@@ -1844,6 +1811,7 @@ static int mlx5e_open(struct net_device *netdev) ...@@ -1844,6 +1811,7 @@ static int mlx5e_open(struct net_device *netdev)
int mlx5e_close_locked(struct net_device *netdev) int mlx5e_close_locked(struct net_device *netdev)
{ {
struct mlx5e_priv *priv = netdev_priv(netdev); struct mlx5e_priv *priv = netdev_priv(netdev);
struct mlx5_core_dev *mdev = priv->mdev;
/* May already be CLOSED in case a previous configuration operation /* May already be CLOSED in case a previous configuration operation
* (e.g RX/TX queue size change) that involves close&open failed. * (e.g RX/TX queue size change) that involves close&open failed.
...@@ -1853,6 +1821,9 @@ int mlx5e_close_locked(struct net_device *netdev) ...@@ -1853,6 +1821,9 @@ int mlx5e_close_locked(struct net_device *netdev)
clear_bit(MLX5E_STATE_OPENED, &priv->state); clear_bit(MLX5E_STATE_OPENED, &priv->state);
if (MLX5_CAP_GEN(mdev, vport_group_manager))
mlx5e_remove_sqs_fwd_rules(priv);
mlx5e_timestamp_cleanup(priv); mlx5e_timestamp_cleanup(priv);
netif_carrier_off(priv->netdev); netif_carrier_off(priv->netdev);
mlx5e_redirect_rqts(priv); mlx5e_redirect_rqts(priv);
...@@ -1861,7 +1832,7 @@ int mlx5e_close_locked(struct net_device *netdev) ...@@ -1861,7 +1832,7 @@ int mlx5e_close_locked(struct net_device *netdev)
return 0; return 0;
} }
static int mlx5e_close(struct net_device *netdev) int mlx5e_close(struct net_device *netdev)
{ {
struct mlx5e_priv *priv = netdev_priv(netdev); struct mlx5e_priv *priv = netdev_priv(netdev);
int err; int err;
...@@ -1920,7 +1891,7 @@ static int mlx5e_create_drop_cq(struct mlx5e_priv *priv, ...@@ -1920,7 +1891,7 @@ static int mlx5e_create_drop_cq(struct mlx5e_priv *priv,
mcq->comp = mlx5e_completion_event; mcq->comp = mlx5e_completion_event;
mcq->event = mlx5e_cq_error_event; mcq->event = mlx5e_cq_error_event;
mcq->irqn = irqn; mcq->irqn = irqn;
mcq->uar = &priv->cq_uar; mcq->uar = &mdev->mlx5e_res.cq_uar;
cq->priv = priv; cq->priv = priv;
...@@ -1986,7 +1957,7 @@ static int mlx5e_create_tis(struct mlx5e_priv *priv, int tc) ...@@ -1986,7 +1957,7 @@ static int mlx5e_create_tis(struct mlx5e_priv *priv, int tc)
memset(in, 0, sizeof(in)); memset(in, 0, sizeof(in));
MLX5_SET(tisc, tisc, prio, tc << 1); MLX5_SET(tisc, tisc, prio, tc << 1);
MLX5_SET(tisc, tisc, transport_domain, priv->tdn); MLX5_SET(tisc, tisc, transport_domain, mdev->mlx5e_res.td.tdn);
return mlx5_core_create_tis(mdev, in, sizeof(in), &priv->tisn[tc]); return mlx5_core_create_tis(mdev, in, sizeof(in), &priv->tisn[tc]);
} }
...@@ -1996,12 +1967,12 @@ static void mlx5e_destroy_tis(struct mlx5e_priv *priv, int tc) ...@@ -1996,12 +1967,12 @@ static void mlx5e_destroy_tis(struct mlx5e_priv *priv, int tc)
mlx5_core_destroy_tis(priv->mdev, priv->tisn[tc]); mlx5_core_destroy_tis(priv->mdev, priv->tisn[tc]);
} }
static int mlx5e_create_tises(struct mlx5e_priv *priv) int mlx5e_create_tises(struct mlx5e_priv *priv)
{ {
int err; int err;
int tc; int tc;
for (tc = 0; tc < MLX5E_MAX_NUM_TC; tc++) { for (tc = 0; tc < priv->profile->max_tc; tc++) {
err = mlx5e_create_tis(priv, tc); err = mlx5e_create_tis(priv, tc);
if (err) if (err)
goto err_close_tises; goto err_close_tises;
...@@ -2016,11 +1987,11 @@ static int mlx5e_create_tises(struct mlx5e_priv *priv) ...@@ -2016,11 +1987,11 @@ static int mlx5e_create_tises(struct mlx5e_priv *priv)
return err; return err;
} }
static void mlx5e_destroy_tises(struct mlx5e_priv *priv) void mlx5e_cleanup_nic_tx(struct mlx5e_priv *priv)
{ {
int tc; int tc;
for (tc = 0; tc < MLX5E_MAX_NUM_TC; tc++) for (tc = 0; tc < priv->profile->max_tc; tc++)
mlx5e_destroy_tis(priv, tc); mlx5e_destroy_tis(priv, tc);
} }
...@@ -2029,7 +2000,7 @@ static void mlx5e_build_indir_tir_ctx(struct mlx5e_priv *priv, u32 *tirc, ...@@ -2029,7 +2000,7 @@ static void mlx5e_build_indir_tir_ctx(struct mlx5e_priv *priv, u32 *tirc,
{ {
void *hfso = MLX5_ADDR_OF(tirc, tirc, rx_hash_field_selector_outer); void *hfso = MLX5_ADDR_OF(tirc, tirc, rx_hash_field_selector_outer);
MLX5_SET(tirc, tirc, transport_domain, priv->tdn); MLX5_SET(tirc, tirc, transport_domain, priv->mdev->mlx5e_res.td.tdn);
#define MLX5_HASH_IP (MLX5_HASH_FIELD_SEL_SRC_IP |\ #define MLX5_HASH_IP (MLX5_HASH_FIELD_SEL_SRC_IP |\
MLX5_HASH_FIELD_SEL_DST_IP) MLX5_HASH_FIELD_SEL_DST_IP)
...@@ -2046,7 +2017,7 @@ static void mlx5e_build_indir_tir_ctx(struct mlx5e_priv *priv, u32 *tirc, ...@@ -2046,7 +2017,7 @@ static void mlx5e_build_indir_tir_ctx(struct mlx5e_priv *priv, u32 *tirc,
mlx5e_build_tir_ctx_lro(tirc, priv); mlx5e_build_tir_ctx_lro(tirc, priv);
MLX5_SET(tirc, tirc, disp_type, MLX5_TIRC_DISP_TYPE_INDIRECT); MLX5_SET(tirc, tirc, disp_type, MLX5_TIRC_DISP_TYPE_INDIRECT);
MLX5_SET(tirc, tirc, indirect_table, priv->indir_rqtn); MLX5_SET(tirc, tirc, indirect_table, priv->indir_rqt.rqtn);
mlx5e_build_tir_ctx_hash(tirc, priv); mlx5e_build_tir_ctx_hash(tirc, priv);
switch (tt) { switch (tt) {
...@@ -2136,7 +2107,7 @@ static void mlx5e_build_indir_tir_ctx(struct mlx5e_priv *priv, u32 *tirc, ...@@ -2136,7 +2107,7 @@ static void mlx5e_build_indir_tir_ctx(struct mlx5e_priv *priv, u32 *tirc,
static void mlx5e_build_direct_tir_ctx(struct mlx5e_priv *priv, u32 *tirc, static void mlx5e_build_direct_tir_ctx(struct mlx5e_priv *priv, u32 *tirc,
u32 rqtn) u32 rqtn)
{ {
MLX5_SET(tirc, tirc, transport_domain, priv->tdn); MLX5_SET(tirc, tirc, transport_domain, priv->mdev->mlx5e_res.td.tdn);
mlx5e_build_tir_ctx_lro(tirc, priv); mlx5e_build_tir_ctx_lro(tirc, priv);
...@@ -2145,15 +2116,13 @@ static void mlx5e_build_direct_tir_ctx(struct mlx5e_priv *priv, u32 *tirc, ...@@ -2145,15 +2116,13 @@ static void mlx5e_build_direct_tir_ctx(struct mlx5e_priv *priv, u32 *tirc,
MLX5_SET(tirc, tirc, rx_hash_fn, MLX5_RX_HASH_FN_INVERTED_XOR8); MLX5_SET(tirc, tirc, rx_hash_fn, MLX5_RX_HASH_FN_INVERTED_XOR8);
} }
static int mlx5e_create_tirs(struct mlx5e_priv *priv) static int mlx5e_create_indirect_tirs(struct mlx5e_priv *priv)
{ {
int nch = mlx5e_get_max_num_channels(priv->mdev); struct mlx5e_tir *tir;
void *tirc; void *tirc;
int inlen; int inlen;
u32 *tirn;
int err; int err;
u32 *in; u32 *in;
int ix;
int tt; int tt;
inlen = MLX5_ST_SZ_BYTES(create_tir_in); inlen = MLX5_ST_SZ_BYTES(create_tir_in);
...@@ -2161,25 +2130,51 @@ static int mlx5e_create_tirs(struct mlx5e_priv *priv) ...@@ -2161,25 +2130,51 @@ static int mlx5e_create_tirs(struct mlx5e_priv *priv)
if (!in) if (!in)
return -ENOMEM; return -ENOMEM;
/* indirect tirs */
for (tt = 0; tt < MLX5E_NUM_INDIR_TIRS; tt++) { for (tt = 0; tt < MLX5E_NUM_INDIR_TIRS; tt++) {
memset(in, 0, inlen); memset(in, 0, inlen);
tirn = &priv->indir_tirn[tt]; tir = &priv->indir_tir[tt];
tirc = MLX5_ADDR_OF(create_tir_in, in, ctx); tirc = MLX5_ADDR_OF(create_tir_in, in, ctx);
mlx5e_build_indir_tir_ctx(priv, tirc, tt); mlx5e_build_indir_tir_ctx(priv, tirc, tt);
err = mlx5_core_create_tir(priv->mdev, in, inlen, tirn); err = mlx5e_create_tir(priv->mdev, tir, in, inlen);
if (err) if (err)
goto err_destroy_tirs; goto err_destroy_tirs;
} }
/* direct tirs */ kvfree(in);
return 0;
err_destroy_tirs:
for (tt--; tt >= 0; tt--)
mlx5e_destroy_tir(priv->mdev, &priv->indir_tir[tt]);
kvfree(in);
return err;
}
int mlx5e_create_direct_tirs(struct mlx5e_priv *priv)
{
int nch = priv->profile->max_nch(priv->mdev);
struct mlx5e_tir *tir;
void *tirc;
int inlen;
int err;
u32 *in;
int ix;
inlen = MLX5_ST_SZ_BYTES(create_tir_in);
in = mlx5_vzalloc(inlen);
if (!in)
return -ENOMEM;
for (ix = 0; ix < nch; ix++) { for (ix = 0; ix < nch; ix++) {
memset(in, 0, inlen); memset(in, 0, inlen);
tirn = &priv->direct_tir[ix].tirn; tir = &priv->direct_tir[ix];
tirc = MLX5_ADDR_OF(create_tir_in, in, ctx); tirc = MLX5_ADDR_OF(create_tir_in, in, ctx);
mlx5e_build_direct_tir_ctx(priv, tirc, mlx5e_build_direct_tir_ctx(priv, tirc,
priv->direct_tir[ix].rqtn); priv->direct_tir[ix].rqt.rqtn);
err = mlx5_core_create_tir(priv->mdev, in, inlen, tirn); err = mlx5e_create_tir(priv->mdev, tir, in, inlen);
if (err) if (err)
goto err_destroy_ch_tirs; goto err_destroy_ch_tirs;
} }
...@@ -2190,27 +2185,28 @@ static int mlx5e_create_tirs(struct mlx5e_priv *priv) ...@@ -2190,27 +2185,28 @@ static int mlx5e_create_tirs(struct mlx5e_priv *priv)
err_destroy_ch_tirs: err_destroy_ch_tirs:
for (ix--; ix >= 0; ix--) for (ix--; ix >= 0; ix--)
mlx5_core_destroy_tir(priv->mdev, priv->direct_tir[ix].tirn); mlx5e_destroy_tir(priv->mdev, &priv->direct_tir[ix]);
err_destroy_tirs:
for (tt--; tt >= 0; tt--)
mlx5_core_destroy_tir(priv->mdev, priv->indir_tirn[tt]);
kvfree(in); kvfree(in);
return err; return err;
} }
static void mlx5e_destroy_tirs(struct mlx5e_priv *priv) static void mlx5e_destroy_indirect_tirs(struct mlx5e_priv *priv)
{ {
int nch = mlx5e_get_max_num_channels(priv->mdev);
int i; int i;
for (i = 0; i < nch; i++)
mlx5_core_destroy_tir(priv->mdev, priv->direct_tir[i].tirn);
for (i = 0; i < MLX5E_NUM_INDIR_TIRS; i++) for (i = 0; i < MLX5E_NUM_INDIR_TIRS; i++)
mlx5_core_destroy_tir(priv->mdev, priv->indir_tirn[i]); mlx5e_destroy_tir(priv->mdev, &priv->indir_tir[i]);
}
void mlx5e_destroy_direct_tirs(struct mlx5e_priv *priv)
{
int nch = priv->profile->max_nch(priv->mdev);
int i;
for (i = 0; i < nch; i++)
mlx5e_destroy_tir(priv->mdev, &priv->direct_tir[i]);
} }
int mlx5e_modify_rqs_vsd(struct mlx5e_priv *priv, bool vsd) int mlx5e_modify_rqs_vsd(struct mlx5e_priv *priv, bool vsd)
...@@ -2284,7 +2280,7 @@ static int mlx5e_ndo_setup_tc(struct net_device *dev, u32 handle, ...@@ -2284,7 +2280,7 @@ static int mlx5e_ndo_setup_tc(struct net_device *dev, u32 handle,
return mlx5e_setup_tc(dev, tc->tc); return mlx5e_setup_tc(dev, tc->tc);
} }
static struct rtnl_link_stats64 * struct rtnl_link_stats64 *
mlx5e_get_stats(struct net_device *dev, struct rtnl_link_stats64 *stats) mlx5e_get_stats(struct net_device *dev, struct rtnl_link_stats64 *stats)
{ {
struct mlx5e_priv *priv = netdev_priv(dev); struct mlx5e_priv *priv = netdev_priv(dev);
...@@ -2892,9 +2888,10 @@ void mlx5e_set_rx_cq_mode_params(struct mlx5e_params *params, u8 cq_period_mode) ...@@ -2892,9 +2888,10 @@ void mlx5e_set_rx_cq_mode_params(struct mlx5e_params *params, u8 cq_period_mode)
MLX5E_PARAMS_DEFAULT_RX_CQ_MODERATION_USEC_FROM_CQE; MLX5E_PARAMS_DEFAULT_RX_CQ_MODERATION_USEC_FROM_CQE;
} }
static void mlx5e_build_netdev_priv(struct mlx5_core_dev *mdev, static void mlx5e_build_nic_netdev_priv(struct mlx5_core_dev *mdev,
struct net_device *netdev, struct net_device *netdev,
int num_channels) const struct mlx5e_profile *profile,
void *ppriv)
{ {
struct mlx5e_priv *priv = netdev_priv(netdev); struct mlx5e_priv *priv = netdev_priv(netdev);
u32 link_speed = 0; u32 link_speed = 0;
...@@ -2963,7 +2960,7 @@ static void mlx5e_build_netdev_priv(struct mlx5_core_dev *mdev, ...@@ -2963,7 +2960,7 @@ static void mlx5e_build_netdev_priv(struct mlx5_core_dev *mdev,
sizeof(priv->params.toeplitz_hash_key)); sizeof(priv->params.toeplitz_hash_key));
mlx5e_build_default_indir_rqt(mdev, priv->params.indirection_rqt, mlx5e_build_default_indir_rqt(mdev, priv->params.indirection_rqt,
MLX5E_INDIR_RQT_SIZE, num_channels); MLX5E_INDIR_RQT_SIZE, profile->max_nch(mdev));
priv->params.lro_wqe_sz = priv->params.lro_wqe_sz =
MLX5E_PARAMS_DEFAULT_LRO_WQE_SZ; MLX5E_PARAMS_DEFAULT_LRO_WQE_SZ;
...@@ -2974,7 +2971,9 @@ static void mlx5e_build_netdev_priv(struct mlx5_core_dev *mdev, ...@@ -2974,7 +2971,9 @@ static void mlx5e_build_netdev_priv(struct mlx5_core_dev *mdev,
priv->mdev = mdev; priv->mdev = mdev;
priv->netdev = netdev; priv->netdev = netdev;
priv->params.num_channels = num_channels; priv->params.num_channels = profile->max_nch(mdev);
priv->profile = profile;
priv->ppriv = ppriv;
#ifdef CONFIG_MLX5_CORE_EN_DCB #ifdef CONFIG_MLX5_CORE_EN_DCB
mlx5e_ets_init(priv); mlx5e_ets_init(priv);
...@@ -2999,7 +2998,11 @@ static void mlx5e_set_netdev_dev_addr(struct net_device *netdev) ...@@ -2999,7 +2998,11 @@ static void mlx5e_set_netdev_dev_addr(struct net_device *netdev)
} }
} }
static void mlx5e_build_netdev(struct net_device *netdev) static const struct switchdev_ops mlx5e_switchdev_ops = {
.switchdev_port_attr_get = mlx5e_attr_get,
};
static void mlx5e_build_nic_netdev(struct net_device *netdev)
{ {
struct mlx5e_priv *priv = netdev_priv(netdev); struct mlx5e_priv *priv = netdev_priv(netdev);
struct mlx5_core_dev *mdev = priv->mdev; struct mlx5_core_dev *mdev = priv->mdev;
...@@ -3080,31 +3083,11 @@ static void mlx5e_build_netdev(struct net_device *netdev) ...@@ -3080,31 +3083,11 @@ static void mlx5e_build_netdev(struct net_device *netdev)
netdev->priv_flags |= IFF_UNICAST_FLT; netdev->priv_flags |= IFF_UNICAST_FLT;
mlx5e_set_netdev_dev_addr(netdev); mlx5e_set_netdev_dev_addr(netdev);
}
static int mlx5e_create_mkey(struct mlx5e_priv *priv, u32 pdn,
struct mlx5_core_mkey *mkey)
{
struct mlx5_core_dev *mdev = priv->mdev;
struct mlx5_create_mkey_mbox_in *in;
int err;
in = mlx5_vzalloc(sizeof(*in)); #ifdef CONFIG_NET_SWITCHDEV
if (!in) if (MLX5_CAP_GEN(mdev, vport_group_manager))
return -ENOMEM; netdev->switchdev_ops = &mlx5e_switchdev_ops;
#endif
in->seg.flags = MLX5_PERM_LOCAL_WRITE |
MLX5_PERM_LOCAL_READ |
MLX5_ACCESS_MODE_PA;
in->seg.flags_pd = cpu_to_be32(pdn | MLX5_MKEY_LEN64);
in->seg.qpn_mkey7_0 = cpu_to_be32(0xffffff << 8);
err = mlx5_core_create_mkey(mdev, mkey, in, sizeof(*in), NULL, NULL,
NULL);
kvfree(in);
return err;
} }
static void mlx5e_create_q_counter(struct mlx5e_priv *priv) static void mlx5e_create_q_counter(struct mlx5e_priv *priv)
...@@ -3134,7 +3117,7 @@ static int mlx5e_create_umr_mkey(struct mlx5e_priv *priv) ...@@ -3134,7 +3117,7 @@ static int mlx5e_create_umr_mkey(struct mlx5e_priv *priv)
struct mlx5_mkey_seg *mkc; struct mlx5_mkey_seg *mkc;
int inlen = sizeof(*in); int inlen = sizeof(*in);
u64 npages = u64 npages =
mlx5e_get_max_num_channels(mdev) * MLX5_CHANNEL_MAX_NUM_MTTS; priv->profile->max_nch(mdev) * MLX5_CHANNEL_MAX_NUM_MTTS;
int err; int err;
in = mlx5_vzalloc(inlen); in = mlx5_vzalloc(inlen);
...@@ -3149,7 +3132,7 @@ static int mlx5e_create_umr_mkey(struct mlx5e_priv *priv) ...@@ -3149,7 +3132,7 @@ static int mlx5e_create_umr_mkey(struct mlx5e_priv *priv)
MLX5_ACCESS_MODE_MTT; MLX5_ACCESS_MODE_MTT;
mkc->qpn_mkey7_0 = cpu_to_be32(0xffffff << 8); mkc->qpn_mkey7_0 = cpu_to_be32(0xffffff << 8);
mkc->flags_pd = cpu_to_be32(priv->pdn); mkc->flags_pd = cpu_to_be32(mdev->mlx5e_res.pdn);
mkc->len = cpu_to_be64(npages << PAGE_SHIFT); mkc->len = cpu_to_be64(npages << PAGE_SHIFT);
mkc->xlt_oct_size = cpu_to_be32(mlx5e_get_mtt_octw(npages)); mkc->xlt_oct_size = cpu_to_be32(mlx5e_get_mtt_octw(npages));
mkc->log2_page_size = PAGE_SHIFT; mkc->log2_page_size = PAGE_SHIFT;
...@@ -3162,160 +3145,233 @@ static int mlx5e_create_umr_mkey(struct mlx5e_priv *priv) ...@@ -3162,160 +3145,233 @@ static int mlx5e_create_umr_mkey(struct mlx5e_priv *priv)
return err; return err;
} }
static void *mlx5e_create_netdev(struct mlx5_core_dev *mdev) static void mlx5e_nic_init(struct mlx5_core_dev *mdev,
struct net_device *netdev,
const struct mlx5e_profile *profile,
void *ppriv)
{ {
struct net_device *netdev; struct mlx5e_priv *priv = netdev_priv(netdev);
struct mlx5e_priv *priv;
int nch = mlx5e_get_max_num_channels(mdev);
int err;
if (mlx5e_check_required_hca_cap(mdev))
return NULL;
netdev = alloc_etherdev_mqs(sizeof(struct mlx5e_priv), mlx5e_build_nic_netdev_priv(mdev, netdev, profile, ppriv);
nch * MLX5E_MAX_NUM_TC, mlx5e_build_nic_netdev(netdev);
nch); mlx5e_vxlan_init(priv);
if (!netdev) { }
mlx5_core_err(mdev, "alloc_etherdev_mqs() failed\n");
return NULL;
}
mlx5e_build_netdev_priv(mdev, netdev, nch); static void mlx5e_nic_cleanup(struct mlx5e_priv *priv)
mlx5e_build_netdev(netdev); {
struct mlx5_core_dev *mdev = priv->mdev;
struct mlx5_eswitch *esw = mdev->priv.eswitch;
netif_carrier_off(netdev); mlx5e_vxlan_cleanup(priv);
priv = netdev_priv(netdev); if (MLX5_CAP_GEN(mdev, vport_group_manager))
mlx5_eswitch_unregister_vport_rep(esw, 0);
}
priv->wq = create_singlethread_workqueue("mlx5e"); static int mlx5e_init_nic_rx(struct mlx5e_priv *priv)
if (!priv->wq) {
goto err_free_netdev; struct mlx5_core_dev *mdev = priv->mdev;
int err;
int i;
err = mlx5_alloc_map_uar(mdev, &priv->cq_uar, false); err = mlx5e_create_indirect_rqts(priv);
if (err) { if (err) {
mlx5_core_err(mdev, "alloc_map uar failed, %d\n", err); mlx5_core_warn(mdev, "create indirect rqts failed, %d\n", err);
goto err_destroy_wq; return err;
} }
err = mlx5_core_alloc_pd(mdev, &priv->pdn); err = mlx5e_create_direct_rqts(priv);
if (err) { if (err) {
mlx5_core_err(mdev, "alloc pd failed, %d\n", err); mlx5_core_warn(mdev, "create direct rqts failed, %d\n", err);
goto err_unmap_free_uar; goto err_destroy_indirect_rqts;
} }
err = mlx5_core_alloc_transport_domain(mdev, &priv->tdn); err = mlx5e_create_indirect_tirs(priv);
if (err) { if (err) {
mlx5_core_err(mdev, "alloc td failed, %d\n", err); mlx5_core_warn(mdev, "create indirect tirs failed, %d\n", err);
goto err_dealloc_pd; goto err_destroy_direct_rqts;
} }
err = mlx5e_create_mkey(priv, priv->pdn, &priv->mkey); err = mlx5e_create_direct_tirs(priv);
if (err) { if (err) {
mlx5_core_err(mdev, "create mkey failed, %d\n", err); mlx5_core_warn(mdev, "create direct tirs failed, %d\n", err);
goto err_dealloc_transport_domain; goto err_destroy_indirect_tirs;
} }
err = mlx5e_create_umr_mkey(priv); err = mlx5e_create_flow_steering(priv);
if (err) { if (err) {
mlx5_core_err(mdev, "create umr mkey failed, %d\n", err); mlx5_core_warn(mdev, "create flow steering failed, %d\n", err);
goto err_destroy_mkey; goto err_destroy_direct_tirs;
} }
err = mlx5e_tc_init(priv);
if (err)
goto err_destroy_flow_steering;
return 0;
err_destroy_flow_steering:
mlx5e_destroy_flow_steering(priv);
err_destroy_direct_tirs:
mlx5e_destroy_direct_tirs(priv);
err_destroy_indirect_tirs:
mlx5e_destroy_indirect_tirs(priv);
err_destroy_direct_rqts:
for (i = 0; i < priv->profile->max_nch(mdev); i++)
mlx5e_destroy_rqt(priv, &priv->direct_tir[i].rqt);
err_destroy_indirect_rqts:
mlx5e_destroy_rqt(priv, &priv->indir_rqt);
return err;
}
static void mlx5e_cleanup_nic_rx(struct mlx5e_priv *priv)
{
int i;
mlx5e_tc_cleanup(priv);
mlx5e_destroy_flow_steering(priv);
mlx5e_destroy_direct_tirs(priv);
mlx5e_destroy_indirect_tirs(priv);
for (i = 0; i < priv->profile->max_nch(priv->mdev); i++)
mlx5e_destroy_rqt(priv, &priv->direct_tir[i].rqt);
mlx5e_destroy_rqt(priv, &priv->indir_rqt);
}
static int mlx5e_init_nic_tx(struct mlx5e_priv *priv)
{
int err;
err = mlx5e_create_tises(priv); err = mlx5e_create_tises(priv);
if (err) { if (err) {
mlx5_core_warn(mdev, "create tises failed, %d\n", err); mlx5_core_warn(priv->mdev, "create tises failed, %d\n", err);
goto err_destroy_umr_mkey; return err;
} }
err = mlx5e_open_drop_rq(priv); #ifdef CONFIG_MLX5_CORE_EN_DCB
if (err) { mlx5e_dcbnl_ieee_setets_core(priv, &priv->params.ets);
mlx5_core_err(mdev, "open drop rq failed, %d\n", err); #endif
goto err_destroy_tises; return 0;
}
static void mlx5e_nic_enable(struct mlx5e_priv *priv)
{
struct net_device *netdev = priv->netdev;
struct mlx5_core_dev *mdev = priv->mdev;
struct mlx5_eswitch *esw = mdev->priv.eswitch;
struct mlx5_eswitch_rep rep;
if (mlx5e_vxlan_allowed(mdev)) {
rtnl_lock();
udp_tunnel_get_rx_info(netdev);
rtnl_unlock();
} }
err = mlx5e_create_rqts(priv); mlx5e_enable_async_events(priv);
if (err) { queue_work(priv->wq, &priv->set_rx_mode_work);
mlx5_core_warn(mdev, "create rqts failed, %d\n", err);
goto err_close_drop_rq; if (MLX5_CAP_GEN(mdev, vport_group_manager)) {
rep.load = mlx5e_nic_rep_load;
rep.unload = mlx5e_nic_rep_unload;
rep.vport = 0;
rep.priv_data = priv;
mlx5_eswitch_register_vport_rep(esw, &rep);
} }
}
err = mlx5e_create_tirs(priv); static void mlx5e_nic_disable(struct mlx5e_priv *priv)
if (err) { {
mlx5_core_warn(mdev, "create tirs failed, %d\n", err); queue_work(priv->wq, &priv->set_rx_mode_work);
goto err_destroy_rqts; mlx5e_disable_async_events(priv);
}
static const struct mlx5e_profile mlx5e_nic_profile = {
.init = mlx5e_nic_init,
.cleanup = mlx5e_nic_cleanup,
.init_rx = mlx5e_init_nic_rx,
.cleanup_rx = mlx5e_cleanup_nic_rx,
.init_tx = mlx5e_init_nic_tx,
.cleanup_tx = mlx5e_cleanup_nic_tx,
.enable = mlx5e_nic_enable,
.disable = mlx5e_nic_disable,
.update_stats = mlx5e_update_stats,
.max_nch = mlx5e_get_max_num_channels,
.max_tc = MLX5E_MAX_NUM_TC,
};
void *mlx5e_create_netdev(struct mlx5_core_dev *mdev,
const struct mlx5e_profile *profile, void *ppriv)
{
struct net_device *netdev;
struct mlx5e_priv *priv;
int nch = profile->max_nch(mdev);
int err;
netdev = alloc_etherdev_mqs(sizeof(struct mlx5e_priv),
nch * profile->max_tc,
nch);
if (!netdev) {
mlx5_core_err(mdev, "alloc_etherdev_mqs() failed\n");
return NULL;
} }
err = mlx5e_create_flow_steering(priv); profile->init(mdev, netdev, profile, ppriv);
netif_carrier_off(netdev);
priv = netdev_priv(netdev);
priv->wq = create_singlethread_workqueue("mlx5e");
if (!priv->wq)
goto err_free_netdev;
err = mlx5e_create_umr_mkey(priv);
if (err) { if (err) {
mlx5_core_warn(mdev, "create flow steering failed, %d\n", err); mlx5_core_err(mdev, "create umr mkey failed, %d\n", err);
goto err_destroy_tirs; goto err_destroy_wq;
} }
mlx5e_create_q_counter(priv); err = profile->init_tx(priv);
if (err)
mlx5e_init_l2_addr(priv); goto err_destroy_umr_mkey;
mlx5e_vxlan_init(priv); err = mlx5e_open_drop_rq(priv);
if (err) {
mlx5_core_err(mdev, "open drop rq failed, %d\n", err);
goto err_cleanup_tx;
}
err = mlx5e_tc_init(priv); err = profile->init_rx(priv);
if (err) if (err)
goto err_dealloc_q_counters; goto err_close_drop_rq;
#ifdef CONFIG_MLX5_CORE_EN_DCB mlx5e_create_q_counter(priv);
mlx5e_dcbnl_ieee_setets_core(priv, &priv->params.ets);
#endif mlx5e_init_l2_addr(priv);
err = register_netdev(netdev); err = register_netdev(netdev);
if (err) { if (err) {
mlx5_core_err(mdev, "register_netdev failed, %d\n", err); mlx5_core_err(mdev, "register_netdev failed, %d\n", err);
goto err_tc_cleanup; goto err_dealloc_q_counters;
}
if (mlx5e_vxlan_allowed(mdev)) {
rtnl_lock();
udp_tunnel_get_rx_info(netdev);
rtnl_unlock();
} }
mlx5e_enable_async_events(priv); if (profile->enable)
queue_work(priv->wq, &priv->set_rx_mode_work); profile->enable(priv);
return priv; return priv;
err_tc_cleanup:
mlx5e_tc_cleanup(priv);
err_dealloc_q_counters: err_dealloc_q_counters:
mlx5e_destroy_q_counter(priv); mlx5e_destroy_q_counter(priv);
mlx5e_destroy_flow_steering(priv); profile->cleanup_rx(priv);
err_destroy_tirs:
mlx5e_destroy_tirs(priv);
err_destroy_rqts:
mlx5e_destroy_rqts(priv);
err_close_drop_rq: err_close_drop_rq:
mlx5e_close_drop_rq(priv); mlx5e_close_drop_rq(priv);
err_destroy_tises: err_cleanup_tx:
mlx5e_destroy_tises(priv); profile->cleanup_tx(priv);
err_destroy_umr_mkey: err_destroy_umr_mkey:
mlx5_core_destroy_mkey(mdev, &priv->umr_mkey); mlx5_core_destroy_mkey(mdev, &priv->umr_mkey);
err_destroy_mkey:
mlx5_core_destroy_mkey(mdev, &priv->mkey);
err_dealloc_transport_domain:
mlx5_core_dealloc_transport_domain(mdev, priv->tdn);
err_dealloc_pd:
mlx5_core_dealloc_pd(mdev, priv->pdn);
err_unmap_free_uar:
mlx5_unmap_free_uar(mdev, &priv->cq_uar);
err_destroy_wq: err_destroy_wq:
destroy_workqueue(priv->wq); destroy_workqueue(priv->wq);
...@@ -3325,15 +3381,59 @@ static void *mlx5e_create_netdev(struct mlx5_core_dev *mdev) ...@@ -3325,15 +3381,59 @@ static void *mlx5e_create_netdev(struct mlx5_core_dev *mdev)
return NULL; return NULL;
} }
static void mlx5e_register_vport_rep(struct mlx5_core_dev *mdev)
{
	struct mlx5_eswitch *esw = mdev->priv.eswitch;
	int total_vfs = MLX5_TOTAL_VPORTS(mdev);
	int vport;

	if (!MLX5_CAP_GEN(mdev, vport_group_manager))
		return;

	for (vport = 1; vport < total_vfs; vport++) {
		struct mlx5_eswitch_rep rep;

		rep.load = mlx5e_vport_rep_load;
		rep.unload = mlx5e_vport_rep_unload;
		rep.vport = vport;
		mlx5_eswitch_register_vport_rep(esw, &rep);
	}
}

static void *mlx5e_add(struct mlx5_core_dev *mdev)
{
	struct mlx5_eswitch *esw = mdev->priv.eswitch;
	void *ppriv = NULL;
	void *ret;

	if (mlx5e_check_required_hca_cap(mdev))
		return NULL;

	if (mlx5e_create_mdev_resources(mdev))
		return NULL;

	mlx5e_register_vport_rep(mdev);

	if (MLX5_CAP_GEN(mdev, vport_group_manager))
		ppriv = &esw->offloads.vport_reps[0];

	ret = mlx5e_create_netdev(mdev, &mlx5e_nic_profile, ppriv);
	if (!ret) {
		mlx5e_destroy_mdev_resources(mdev);
		return NULL;
	}
	return ret;
}

void mlx5e_destroy_netdev(struct mlx5_core_dev *mdev, struct mlx5e_priv *priv)
{
	const struct mlx5e_profile *profile = priv->profile;
	struct net_device *netdev = priv->netdev;

	set_bit(MLX5E_STATE_DESTROYING, &priv->state);
	if (profile->disable)
		profile->disable(priv);

	flush_workqueue(priv->wq);
	if (test_bit(MLX5_INTERFACE_STATE_SHUTDOWN, &mdev->intf_state)) {
		netif_device_detach(netdev);

@@ -3342,26 +3442,35 @@ static void mlx5e_destroy_netdev(struct mlx5_core_dev *mdev, void *vpriv)
		unregister_netdev(netdev);
	}

	mlx5e_destroy_q_counter(priv);
	profile->cleanup_rx(priv);
	mlx5e_close_drop_rq(priv);
	profile->cleanup_tx(priv);
	mlx5_core_destroy_mkey(priv->mdev, &priv->umr_mkey);
	cancel_delayed_work_sync(&priv->update_stats_work);
	destroy_workqueue(priv->wq);
	if (profile->cleanup)
		profile->cleanup(priv);

	if (!test_bit(MLX5_INTERFACE_STATE_SHUTDOWN, &mdev->intf_state))
		free_netdev(netdev);
}

static void mlx5e_remove(struct mlx5_core_dev *mdev, void *vpriv)
{
	struct mlx5_eswitch *esw = mdev->priv.eswitch;
	int total_vfs = MLX5_TOTAL_VPORTS(mdev);
	struct mlx5e_priv *priv = vpriv;
	int vport;

	mlx5e_destroy_netdev(mdev, priv);

	for (vport = 1; vport < total_vfs; vport++)
		mlx5_eswitch_unregister_vport_rep(esw, vport);

	mlx5e_destroy_mdev_resources(mdev);
}

static void *mlx5e_get_netdev(void *vpriv)
{
	struct mlx5e_priv *priv = vpriv;

@@ -3370,8 +3479,8 @@ static void *mlx5e_get_netdev(void *vpriv)
}

static struct mlx5_interface mlx5e_interface = {
	.add	  = mlx5e_add,
	.remove	  = mlx5e_remove,
	.event	  = mlx5e_async_event,
	.protocol = MLX5_INTERFACE_PROTOCOL_ETH,
	.get_dev  = mlx5e_get_netdev,
......
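To make the new abstraction concrete: every step mlx5e used to hard-code is now reached through a struct mlx5e_profile callback, which is what lets the NIC netdev and the VF representors share one create/destroy path. The sketch below is illustrative only and not part of this series; the my_* names are hypothetical, it reuses only mlx5e helpers that appear elsewhere in this diff, and a real profile would add error handling and stats.

/*
 * Hypothetical profile sketch (my_* names are not in the patch).
 * mlx5e_create_netdev() drives the callbacks in this order:
 *   init -> init_tx -> init_rx -> register_netdev -> enable
 * and unwinds in reverse on error or in mlx5e_destroy_netdev().
 */
static void my_init(struct mlx5_core_dev *mdev, struct net_device *netdev,
		    const struct mlx5e_profile *profile, void *ppriv)
{
	/* fill priv->params and set netdev ops, as mlx5e_init_rep() does */
}

static int my_init_tx(struct mlx5e_priv *priv)
{
	return mlx5e_create_tises(priv);	/* TX HW objects */
}

static void my_cleanup_tx(struct mlx5e_priv *priv)
{
	mlx5e_destroy_tises(priv);
}

static int my_init_rx(struct mlx5e_priv *priv)
{
	return mlx5e_create_direct_rqts(priv);	/* RX HW objects */
}

static void my_cleanup_rx(struct mlx5e_priv *priv)
{
	int i;

	for (i = 0; i < priv->params.num_channels; i++)
		mlx5e_destroy_rqt(priv, &priv->direct_tir[i].rqt);
}

static int my_max_nch(struct mlx5_core_dev *mdev)
{
	return 1;	/* single channel, like the representor profile */
}

static const struct mlx5e_profile my_profile = {
	.init		= my_init,
	.init_tx	= my_init_tx,
	.cleanup_tx	= my_cleanup_tx,
	.init_rx	= my_init_rx,
	.cleanup_rx	= my_cleanup_rx,
	.max_nch	= my_max_nch,
	.max_tc		= 1,
};

A caller would then instantiate it exactly as mlx5e_add() does for the NIC profile: priv = mlx5e_create_netdev(mdev, &my_profile, ppriv).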
/*
* Copyright (c) 2016, Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#include <generated/utsrelease.h>
#include <linux/mlx5/fs.h>
#include <net/switchdev.h>
#include "eswitch.h"
#include "en.h"
static const char mlx5e_rep_driver_name[] = "mlx5e_rep";
static void mlx5e_rep_get_drvinfo(struct net_device *dev,
struct ethtool_drvinfo *drvinfo)
{
strlcpy(drvinfo->driver, mlx5e_rep_driver_name,
sizeof(drvinfo->driver));
strlcpy(drvinfo->version, UTS_RELEASE, sizeof(drvinfo->version));
}
static const struct counter_desc sw_rep_stats_desc[] = {
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_packets) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_bytes) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_packets) },
{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_bytes) },
};
#define NUM_VPORT_REP_COUNTERS ARRAY_SIZE(sw_rep_stats_desc)
static void mlx5e_rep_get_strings(struct net_device *dev,
u32 stringset, uint8_t *data)
{
int i;
switch (stringset) {
case ETH_SS_STATS:
for (i = 0; i < NUM_VPORT_REP_COUNTERS; i++)
strcpy(data + (i * ETH_GSTRING_LEN),
sw_rep_stats_desc[i].format);
break;
}
}
static void mlx5e_update_sw_rep_counters(struct mlx5e_priv *priv)
{
struct mlx5e_sw_stats *s = &priv->stats.sw;
struct mlx5e_rq_stats *rq_stats;
struct mlx5e_sq_stats *sq_stats;
int i, j;
memset(s, 0, sizeof(*s));
for (i = 0; i < priv->params.num_channels; i++) {
rq_stats = &priv->channel[i]->rq.stats;
s->rx_packets += rq_stats->packets;
s->rx_bytes += rq_stats->bytes;
for (j = 0; j < priv->params.num_tc; j++) {
sq_stats = &priv->channel[i]->sq[j].stats;
s->tx_packets += sq_stats->packets;
s->tx_bytes += sq_stats->bytes;
}
}
}
static void mlx5e_rep_get_ethtool_stats(struct net_device *dev,
struct ethtool_stats *stats, u64 *data)
{
struct mlx5e_priv *priv = netdev_priv(dev);
int i;
if (!data)
return;
mutex_lock(&priv->state_lock);
if (test_bit(MLX5E_STATE_OPENED, &priv->state))
mlx5e_update_sw_rep_counters(priv);
mutex_unlock(&priv->state_lock);
for (i = 0; i < NUM_VPORT_REP_COUNTERS; i++)
data[i] = MLX5E_READ_CTR64_CPU(&priv->stats.sw,
sw_rep_stats_desc, i);
}
static int mlx5e_rep_get_sset_count(struct net_device *dev, int sset)
{
switch (sset) {
case ETH_SS_STATS:
return NUM_VPORT_REP_COUNTERS;
default:
return -EOPNOTSUPP;
}
}
static const struct ethtool_ops mlx5e_rep_ethtool_ops = {
.get_drvinfo = mlx5e_rep_get_drvinfo,
.get_link = ethtool_op_get_link,
.get_strings = mlx5e_rep_get_strings,
.get_sset_count = mlx5e_rep_get_sset_count,
.get_ethtool_stats = mlx5e_rep_get_ethtool_stats,
};
int mlx5e_attr_get(struct net_device *dev, struct switchdev_attr *attr)
{
struct mlx5e_priv *priv = netdev_priv(dev);
struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
u8 mac[ETH_ALEN];
if (esw->mode == SRIOV_NONE)
return -EOPNOTSUPP;
switch (attr->id) {
case SWITCHDEV_ATTR_ID_PORT_PARENT_ID:
mlx5_query_nic_vport_mac_address(priv->mdev, 0, mac);
attr->u.ppid.id_len = ETH_ALEN;
memcpy(&attr->u.ppid.id, &mac, ETH_ALEN);
break;
default:
return -EOPNOTSUPP;
}
return 0;
}
int mlx5e_add_sqs_fwd_rules(struct mlx5e_priv *priv)
{
struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
struct mlx5_eswitch_rep *rep = priv->ppriv;
struct mlx5e_channel *c;
int n, tc, err, num_sqs = 0;
u16 *sqs;
sqs = kcalloc(priv->params.num_channels * priv->params.num_tc, sizeof(u16), GFP_KERNEL);
if (!sqs)
return -ENOMEM;
for (n = 0; n < priv->params.num_channels; n++) {
c = priv->channel[n];
for (tc = 0; tc < c->num_tc; tc++)
sqs[num_sqs++] = c->sq[tc].sqn;
}
err = mlx5_eswitch_sqs2vport_start(esw, rep, sqs, num_sqs);
kfree(sqs);
return err;
}
int mlx5e_nic_rep_load(struct mlx5_eswitch *esw, struct mlx5_eswitch_rep *rep)
{
struct mlx5e_priv *priv = rep->priv_data;
if (test_bit(MLX5E_STATE_OPENED, &priv->state))
return mlx5e_add_sqs_fwd_rules(priv);
return 0;
}
void mlx5e_remove_sqs_fwd_rules(struct mlx5e_priv *priv)
{
struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
struct mlx5_eswitch_rep *rep = priv->ppriv;
mlx5_eswitch_sqs2vport_stop(esw, rep);
}
void mlx5e_nic_rep_unload(struct mlx5_eswitch *esw,
struct mlx5_eswitch_rep *rep)
{
struct mlx5e_priv *priv = rep->priv_data;
if (test_bit(MLX5E_STATE_OPENED, &priv->state))
mlx5e_remove_sqs_fwd_rules(priv);
}
static int mlx5e_rep_get_phys_port_name(struct net_device *dev,
char *buf, size_t len)
{
struct mlx5e_priv *priv = netdev_priv(dev);
struct mlx5_eswitch_rep *rep = priv->ppriv;
int ret;
ret = snprintf(buf, len, "%d", rep->vport - 1);
if (ret >= len)
return -EOPNOTSUPP;
return 0;
}
static const struct switchdev_ops mlx5e_rep_switchdev_ops = {
.switchdev_port_attr_get = mlx5e_attr_get,
};
static const struct net_device_ops mlx5e_netdev_ops_rep = {
.ndo_open = mlx5e_open,
.ndo_stop = mlx5e_close,
.ndo_start_xmit = mlx5e_xmit,
.ndo_get_phys_port_name = mlx5e_rep_get_phys_port_name,
.ndo_get_stats64 = mlx5e_get_stats,
};
static void mlx5e_build_rep_netdev_priv(struct mlx5_core_dev *mdev,
struct net_device *netdev,
const struct mlx5e_profile *profile,
void *ppriv)
{
struct mlx5e_priv *priv = netdev_priv(netdev);
u8 cq_period_mode = MLX5_CAP_GEN(mdev, cq_period_start_from_cqe) ?
MLX5_CQ_PERIOD_MODE_START_FROM_CQE :
MLX5_CQ_PERIOD_MODE_START_FROM_EQE;
priv->params.log_sq_size =
MLX5E_PARAMS_MINIMUM_LOG_SQ_SIZE;
priv->params.rq_wq_type = MLX5_WQ_TYPE_LINKED_LIST;
priv->params.log_rq_size = MLX5E_PARAMS_MINIMUM_LOG_RQ_SIZE;
priv->params.min_rx_wqes = mlx5_min_rx_wqes(priv->params.rq_wq_type,
BIT(priv->params.log_rq_size));
priv->params.rx_am_enabled = MLX5_CAP_GEN(mdev, cq_moderation);
mlx5e_set_rx_cq_mode_params(&priv->params, cq_period_mode);
priv->params.tx_max_inline = mlx5e_get_max_inline_cap(mdev);
priv->params.num_tc = 1;
priv->params.lro_wqe_sz =
MLX5E_PARAMS_DEFAULT_LRO_WQE_SZ;
priv->mdev = mdev;
priv->netdev = netdev;
priv->params.num_channels = profile->max_nch(mdev);
priv->profile = profile;
priv->ppriv = ppriv;
mutex_init(&priv->state_lock);
INIT_DELAYED_WORK(&priv->update_stats_work, mlx5e_update_stats_work);
}
static void mlx5e_build_rep_netdev(struct net_device *netdev)
{
netdev->netdev_ops = &mlx5e_netdev_ops_rep;
netdev->watchdog_timeo = 15 * HZ;
netdev->ethtool_ops = &mlx5e_rep_ethtool_ops;
#ifdef CONFIG_NET_SWITCHDEV
netdev->switchdev_ops = &mlx5e_rep_switchdev_ops;
#endif
netdev->features |= NETIF_F_VLAN_CHALLENGED;
eth_hw_addr_random(netdev);
}
static void mlx5e_init_rep(struct mlx5_core_dev *mdev,
struct net_device *netdev,
const struct mlx5e_profile *profile,
void *ppriv)
{
mlx5e_build_rep_netdev_priv(mdev, netdev, profile, ppriv);
mlx5e_build_rep_netdev(netdev);
}
static int mlx5e_init_rep_rx(struct mlx5e_priv *priv)
{
struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
struct mlx5_eswitch_rep *rep = priv->ppriv;
struct mlx5_core_dev *mdev = priv->mdev;
struct mlx5_flow_rule *flow_rule;
int err;
int i;
err = mlx5e_create_direct_rqts(priv);
if (err) {
mlx5_core_warn(mdev, "create direct rqts failed, %d\n", err);
return err;
}
err = mlx5e_create_direct_tirs(priv);
if (err) {
mlx5_core_warn(mdev, "create direct tirs failed, %d\n", err);
goto err_destroy_direct_rqts;
}
flow_rule = mlx5_eswitch_create_vport_rx_rule(esw,
rep->vport,
priv->direct_tir[0].tirn);
if (IS_ERR(flow_rule)) {
err = PTR_ERR(flow_rule);
goto err_destroy_direct_tirs;
}
rep->vport_rx_rule = flow_rule;
return 0;
err_destroy_direct_tirs:
mlx5e_destroy_direct_tirs(priv);
err_destroy_direct_rqts:
for (i = 0; i < priv->params.num_channels; i++)
mlx5e_destroy_rqt(priv, &priv->direct_tir[i].rqt);
return err;
}
static void mlx5e_cleanup_rep_rx(struct mlx5e_priv *priv)
{
struct mlx5_eswitch_rep *rep = priv->ppriv;
int i;
mlx5_del_flow_rule(rep->vport_rx_rule);
mlx5e_destroy_direct_tirs(priv);
for (i = 0; i < priv->params.num_channels; i++)
mlx5e_destroy_rqt(priv, &priv->direct_tir[i].rqt);
}
static int mlx5e_init_rep_tx(struct mlx5e_priv *priv)
{
int err;
err = mlx5e_create_tises(priv);
if (err) {
mlx5_core_warn(priv->mdev, "create tises failed, %d\n", err);
return err;
}
return 0;
}
static int mlx5e_get_rep_max_num_channels(struct mlx5_core_dev *mdev)
{
#define MLX5E_PORT_REPRESENTOR_NCH 1
return MLX5E_PORT_REPRESENTOR_NCH;
}
static struct mlx5e_profile mlx5e_rep_profile = {
.init = mlx5e_init_rep,
.init_rx = mlx5e_init_rep_rx,
.cleanup_rx = mlx5e_cleanup_rep_rx,
.init_tx = mlx5e_init_rep_tx,
.cleanup_tx = mlx5e_cleanup_nic_tx,
.update_stats = mlx5e_update_sw_rep_counters,
.max_nch = mlx5e_get_rep_max_num_channels,
.max_tc = 1,
};
int mlx5e_vport_rep_load(struct mlx5_eswitch *esw,
struct mlx5_eswitch_rep *rep)
{
rep->priv_data = mlx5e_create_netdev(esw->dev, &mlx5e_rep_profile, rep);
if (!rep->priv_data) {
pr_warn("Failed to create representor for vport %d\n",
rep->vport);
return -EINVAL;
}
return 0;
}
void mlx5e_vport_rep_unload(struct mlx5_eswitch *esw,
struct mlx5_eswitch_rep *rep)
{
struct mlx5e_priv *priv = rep->priv_data;
mlx5e_destroy_netdev(esw->dev, priv);
}
@@ -40,17 +40,6 @@
#define UPLINK_VPORT 0xFFFF

enum {
	MLX5_ACTION_NONE = 0,
	MLX5_ACTION_ADD  = 1,

@@ -92,6 +81,9 @@ enum {
			    MC_ADDR_CHANGE | \
			    PROMISC_CHANGE)

int esw_offloads_init(struct mlx5_eswitch *esw, int nvports);
void esw_offloads_cleanup(struct mlx5_eswitch *esw, int nvports);

static int arm_vport_context_events_cmd(struct mlx5_core_dev *dev, u16 vport,
					u32 events_mask)
{

@@ -428,7 +420,7 @@ esw_fdb_set_vport_promisc_rule(struct mlx5_eswitch *esw, u32 vport)
	return __esw_fdb_set_vport_rule(esw, vport, true, mac_c, mac_v);
}

static int esw_create_legacy_fdb_table(struct mlx5_eswitch *esw, int nvports)
{
	int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in);
	struct mlx5_core_dev *dev = esw->dev;

@@ -479,7 +471,7 @@ static int esw_create_fdb_table(struct mlx5_eswitch *esw, int nvports)
		esw_warn(dev, "Failed to create flow group err(%d)\n", err);
		goto out;
	}
	esw->fdb_table.legacy.addr_grp = g;

	/* Allmulti group : One rule that forwards any mcast traffic */
	MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable,

@@ -494,7 +486,7 @@ static int esw_create_fdb_table(struct mlx5_eswitch *esw, int nvports)
		esw_warn(dev, "Failed to create allmulti flow group err(%d)\n", err);
		goto out;
	}
	esw->fdb_table.legacy.allmulti_grp = g;

	/* Promiscuous group :
	 * One rule that forwards all unmatched traffic from previous groups
	 */

@@ -511,17 +503,17 @@ static int esw_create_fdb_table(struct mlx5_eswitch *esw, int nvports)
		esw_warn(dev, "Failed to create promisc flow group err(%d)\n", err);
		goto out;
	}
	esw->fdb_table.legacy.promisc_grp = g;

out:
	if (err) {
		if (!IS_ERR_OR_NULL(esw->fdb_table.legacy.allmulti_grp)) {
			mlx5_destroy_flow_group(esw->fdb_table.legacy.allmulti_grp);
			esw->fdb_table.legacy.allmulti_grp = NULL;
		}
		if (!IS_ERR_OR_NULL(esw->fdb_table.legacy.addr_grp)) {
			mlx5_destroy_flow_group(esw->fdb_table.legacy.addr_grp);
			esw->fdb_table.legacy.addr_grp = NULL;
		}
		if (!IS_ERR_OR_NULL(esw->fdb_table.fdb)) {
			mlx5_destroy_flow_table(esw->fdb_table.fdb);

@@ -533,20 +525,20 @@ static int esw_create_fdb_table(struct mlx5_eswitch *esw, int nvports)
	return err;
}

static void esw_destroy_legacy_fdb_table(struct mlx5_eswitch *esw)
{
	if (!esw->fdb_table.fdb)
		return;

	esw_debug(esw->dev, "Destroy FDB Table\n");
	mlx5_destroy_flow_group(esw->fdb_table.legacy.promisc_grp);
	mlx5_destroy_flow_group(esw->fdb_table.legacy.allmulti_grp);
	mlx5_destroy_flow_group(esw->fdb_table.legacy.addr_grp);
	mlx5_destroy_flow_table(esw->fdb_table.fdb);
	esw->fdb_table.fdb = NULL;
	esw->fdb_table.legacy.addr_grp = NULL;
	esw->fdb_table.legacy.allmulti_grp = NULL;
	esw->fdb_table.legacy.promisc_grp = NULL;
}

/* E-Switch vport UC/MC lists management */

@@ -578,7 +570,8 @@ static int esw_add_uc_addr(struct mlx5_eswitch *esw, struct vport_addr *vaddr)
	if (err)
		goto abort;

	/* SRIOV is enabled: Forward UC MAC to vport */
	if (esw->fdb_table.fdb && esw->mode == SRIOV_LEGACY)
		vaddr->flow_rule = esw_fdb_set_vport_rule(esw, mac, vport);

	esw_debug(esw->dev, "\tADDED UC MAC: vport[%d] %pM index:%d fr(%p)\n",

@@ -1540,10 +1533,10 @@ static void esw_disable_vport(struct mlx5_eswitch *esw, int vport_num)
}

/* Public E-Switch API */
int mlx5_eswitch_enable_sriov(struct mlx5_eswitch *esw, int nvfs, int mode)
{
	int err;
	int i, enabled_events;

	if (!esw || !MLX5_CAP_GEN(esw->dev, vport_group_manager) ||
	    MLX5_CAP_GEN(esw->dev, port_type) != MLX5_CAP_PORT_TYPE_ETH)

@@ -1561,16 +1554,20 @@ int mlx5_eswitch_enable_sriov(struct mlx5_eswitch *esw, int nvfs)
	if (!MLX5_CAP_ESW_EGRESS_ACL(esw->dev, ft_support))
		esw_warn(esw->dev, "E-Switch egress ACL is not supported by FW\n");

	esw_info(esw->dev, "E-Switch enable SRIOV: nvfs(%d) mode (%d)\n", nvfs, mode);

	esw->mode = mode;
	esw_disable_vport(esw, 0);

	if (mode == SRIOV_LEGACY)
		err = esw_create_legacy_fdb_table(esw, nvfs + 1);
	else
		err = esw_offloads_init(esw, nvfs + 1);
	if (err)
		goto abort;

	enabled_events = (mode == SRIOV_LEGACY) ? SRIOV_VPORT_EVENTS : UC_ADDR_CHANGE;
	for (i = 0; i <= nvfs; i++)
		esw_enable_vport(esw, i, enabled_events);

	esw_info(esw->dev, "SRIOV enabled: active vports(%d)\n",
		 esw->enabled_vports);

@@ -1584,16 +1581,18 @@ int mlx5_eswitch_enable_sriov(struct mlx5_eswitch *esw, int nvfs)
void mlx5_eswitch_disable_sriov(struct mlx5_eswitch *esw)
{
	struct esw_mc_addr *mc_promisc;
	int nvports;
	int i;

	if (!esw || !MLX5_CAP_GEN(esw->dev, vport_group_manager) ||
	    MLX5_CAP_GEN(esw->dev, port_type) != MLX5_CAP_PORT_TYPE_ETH)
		return;

	esw_info(esw->dev, "disable SRIOV: active vports(%d) mode(%d)\n",
		 esw->enabled_vports, esw->mode);

	mc_promisc = esw->mc_promisc;
	nvports = esw->enabled_vports;

	for (i = 0; i < esw->total_vports; i++)
		esw_disable_vport(esw, i);

@@ -1601,8 +1600,12 @@ void mlx5_eswitch_disable_sriov(struct mlx5_eswitch *esw)
	if (mc_promisc && mc_promisc->uplink_rule)
		mlx5_del_flow_rule(mc_promisc->uplink_rule);

	if (esw->mode == SRIOV_LEGACY)
		esw_destroy_legacy_fdb_table(esw);
	else if (esw->mode == SRIOV_OFFLOADS)
		esw_offloads_cleanup(esw, nvports);

	esw->mode = SRIOV_NONE;
	/* VPORT 0 (PF) must be enabled back with non-sriov configuration */
	esw_enable_vport(esw, 0, UC_ADDR_CHANGE);
}

@@ -1660,6 +1663,14 @@ int mlx5_eswitch_init(struct mlx5_core_dev *dev)
		goto abort;
	}

	esw->offloads.vport_reps =
		kzalloc(total_vports * sizeof(struct mlx5_eswitch_rep),
			GFP_KERNEL);
	if (!esw->offloads.vport_reps) {
		err = -ENOMEM;
		goto abort;
	}

	mutex_init(&esw->state_lock);

	for (vport_num = 0; vport_num < total_vports; vport_num++) {

@@ -1673,6 +1684,7 @@ int mlx5_eswitch_init(struct mlx5_core_dev *dev)
	esw->total_vports = total_vports;
	esw->enabled_vports = 0;
	esw->mode = SRIOV_NONE;

	dev->priv.eswitch = esw;
	esw_enable_vport(esw, 0, UC_ADDR_CHANGE);

@@ -1683,6 +1695,7 @@ int mlx5_eswitch_init(struct mlx5_core_dev *dev)
	destroy_workqueue(esw->work_queue);
	kfree(esw->l2_table.bitmap);
	kfree(esw->vports);
	kfree(esw->offloads.vport_reps);
	kfree(esw);
	return err;
}

@@ -1700,6 +1713,7 @@ void mlx5_eswitch_cleanup(struct mlx5_eswitch *esw)
	destroy_workqueue(esw->work_queue);
	kfree(esw->l2_table.bitmap);
	kfree(esw->mc_promisc);
	kfree(esw->offloads.vport_reps);
	kfree(esw->vports);
	kfree(esw);
}
......
@@ -35,6 +35,7 @@
#include <linux/if_ether.h>
#include <linux/if_link.h>
#include <net/devlink.h>
#include <linux/mlx5/device.h>

#define MLX5_MAX_UC_PER_VPORT(dev) \

@@ -46,6 +47,8 @@
#define MLX5_L2_ADDR_HASH_SIZE (BIT(BITS_PER_BYTE))
#define MLX5_L2_ADDR_HASH(addr) (addr[5])

#define FDB_UPLINK_VPORT 0xffff

/* L2 -mac address based- hash helpers */
struct l2addr_node {
	struct hlist_node hlist;

@@ -134,9 +137,48 @@ struct mlx5_l2_table {
struct mlx5_eswitch_fdb {
	void *fdb;
	union {
		struct legacy_fdb {
			struct mlx5_flow_group *addr_grp;
			struct mlx5_flow_group *allmulti_grp;
			struct mlx5_flow_group *promisc_grp;
		} legacy;

		struct offloads_fdb {
			struct mlx5_flow_group *send_to_vport_grp;
			struct mlx5_flow_group *miss_grp;
			struct mlx5_flow_rule  *miss_rule;
		} offloads;
	};
};

enum {
	SRIOV_NONE,
	SRIOV_LEGACY,
	SRIOV_OFFLOADS
};

struct mlx5_esw_sq {
	struct mlx5_flow_rule	*send_to_vport_rule;
	struct list_head	 list;
};

struct mlx5_eswitch_rep {
	int		       (*load)(struct mlx5_eswitch *esw,
				       struct mlx5_eswitch_rep *rep);
	void		       (*unload)(struct mlx5_eswitch *esw,
					 struct mlx5_eswitch_rep *rep);
	u16			vport;
	struct mlx5_flow_rule  *vport_rx_rule;
	void		       *priv_data;
	struct list_head	vport_sqs_list;
	bool			valid;
};

struct mlx5_esw_offload {
	struct mlx5_flow_table *ft_offloads;
	struct mlx5_flow_group *vport_rx_group;
	struct mlx5_eswitch_rep *vport_reps;
};

struct mlx5_eswitch {

@@ -153,13 +195,15 @@ struct mlx5_eswitch {
	 */
	struct mutex		state_lock;
	struct esw_mc_addr	*mc_promisc;

	struct mlx5_esw_offload offloads;
	int			mode;
};

/* E-Switch API */
int mlx5_eswitch_init(struct mlx5_core_dev *dev);
void mlx5_eswitch_cleanup(struct mlx5_eswitch *esw);
void mlx5_eswitch_vport_event(struct mlx5_eswitch *esw, struct mlx5_eqe *eqe);
int mlx5_eswitch_enable_sriov(struct mlx5_eswitch *esw, int nvfs, int mode);
void mlx5_eswitch_disable_sriov(struct mlx5_eswitch *esw);
int mlx5_eswitch_set_vport_mac(struct mlx5_eswitch *esw,
			       int vport, u8 mac[ETH_ALEN]);

@@ -177,4 +221,30 @@ int mlx5_eswitch_get_vport_stats(struct mlx5_eswitch *esw,
				 int vport,
				 struct ifla_vf_stats *vf_stats);

struct mlx5_flow_rule *
mlx5_eswitch_create_vport_rx_rule(struct mlx5_eswitch *esw, int vport, u32 tirn);

int mlx5_eswitch_sqs2vport_start(struct mlx5_eswitch *esw,
				 struct mlx5_eswitch_rep *rep,
				 u16 *sqns_array, int sqns_num);
void mlx5_eswitch_sqs2vport_stop(struct mlx5_eswitch *esw,
				 struct mlx5_eswitch_rep *rep);

int mlx5_devlink_eswitch_mode_set(struct devlink *devlink, u16 mode);
int mlx5_devlink_eswitch_mode_get(struct devlink *devlink, u16 *mode);

void mlx5_eswitch_register_vport_rep(struct mlx5_eswitch *esw,
				     struct mlx5_eswitch_rep *rep);
void mlx5_eswitch_unregister_vport_rep(struct mlx5_eswitch *esw,
				       int vport);

#define MLX5_DEBUG_ESWITCH_MASK BIT(3)

#define esw_info(dev, format, ...)				\
	pr_info("(%s): E-Switch: " format, (dev)->priv.name, ##__VA_ARGS__)

#define esw_warn(dev, format, ...)				\
	pr_warn("(%s): E-Switch: " format, (dev)->priv.name, ##__VA_ARGS__)

#define esw_debug(dev, format, ...)				\
	mlx5_core_dbg_mask(dev, MLX5_DEBUG_ESWITCH_MASK, format, ##__VA_ARGS__)

#endif /* __MLX5_ESWITCH_H__ */
/*
* Copyright (c) 2016, Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#include <linux/etherdevice.h>
#include <linux/mlx5/driver.h>
#include <linux/mlx5/mlx5_ifc.h>
#include <linux/mlx5/vport.h>
#include <linux/mlx5/fs.h>
#include "mlx5_core.h"
#include "eswitch.h"
static struct mlx5_flow_rule *
mlx5_eswitch_add_send_to_vport_rule(struct mlx5_eswitch *esw, int vport, u32 sqn)
{
struct mlx5_flow_destination dest;
struct mlx5_flow_rule *flow_rule;
int match_header = MLX5_MATCH_MISC_PARAMETERS;
u32 *match_v, *match_c;
void *misc;
match_v = kzalloc(MLX5_ST_SZ_BYTES(fte_match_param), GFP_KERNEL);
match_c = kzalloc(MLX5_ST_SZ_BYTES(fte_match_param), GFP_KERNEL);
if (!match_v || !match_c) {
esw_warn(esw->dev, "FDB: Failed to alloc match parameters\n");
flow_rule = ERR_PTR(-ENOMEM);
goto out;
}
misc = MLX5_ADDR_OF(fte_match_param, match_v, misc_parameters);
MLX5_SET(fte_match_set_misc, misc, source_sqn, sqn);
MLX5_SET(fte_match_set_misc, misc, source_port, 0x0); /* source vport is 0 */
misc = MLX5_ADDR_OF(fte_match_param, match_c, misc_parameters);
MLX5_SET_TO_ONES(fte_match_set_misc, misc, source_sqn);
MLX5_SET_TO_ONES(fte_match_set_misc, misc, source_port);
dest.type = MLX5_FLOW_DESTINATION_TYPE_VPORT;
dest.vport_num = vport;
flow_rule = mlx5_add_flow_rule(esw->fdb_table.fdb, match_header, match_c,
match_v, MLX5_FLOW_CONTEXT_ACTION_FWD_DEST,
0, &dest);
if (IS_ERR(flow_rule))
esw_warn(esw->dev, "FDB: Failed to add send to vport rule err %ld\n", PTR_ERR(flow_rule));
out:
kfree(match_v);
kfree(match_c);
return flow_rule;
}
void mlx5_eswitch_sqs2vport_stop(struct mlx5_eswitch *esw,
struct mlx5_eswitch_rep *rep)
{
struct mlx5_esw_sq *esw_sq, *tmp;
if (esw->mode != SRIOV_OFFLOADS)
return;
list_for_each_entry_safe(esw_sq, tmp, &rep->vport_sqs_list, list) {
mlx5_del_flow_rule(esw_sq->send_to_vport_rule);
list_del(&esw_sq->list);
kfree(esw_sq);
}
}
int mlx5_eswitch_sqs2vport_start(struct mlx5_eswitch *esw,
struct mlx5_eswitch_rep *rep,
u16 *sqns_array, int sqns_num)
{
struct mlx5_flow_rule *flow_rule;
struct mlx5_esw_sq *esw_sq;
int vport;
int err;
int i;
if (esw->mode != SRIOV_OFFLOADS)
return 0;
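/* SQs of the PF representor (vport 0) map to the uplink vport in the FDB */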
vport = rep->vport == 0 ?
FDB_UPLINK_VPORT : rep->vport;
for (i = 0; i < sqns_num; i++) {
esw_sq = kzalloc(sizeof(*esw_sq), GFP_KERNEL);
if (!esw_sq) {
err = -ENOMEM;
goto out_err;
}
/* Add re-inject rule to the PF/representor sqs */
flow_rule = mlx5_eswitch_add_send_to_vport_rule(esw,
vport,
sqns_array[i]);
if (IS_ERR(flow_rule)) {
err = PTR_ERR(flow_rule);
kfree(esw_sq);
goto out_err;
}
esw_sq->send_to_vport_rule = flow_rule;
list_add(&esw_sq->list, &rep->vport_sqs_list);
}
return 0;
out_err:
mlx5_eswitch_sqs2vport_stop(esw, rep);
return err;
}
static int esw_add_fdb_miss_rule(struct mlx5_eswitch *esw)
{
struct mlx5_flow_destination dest;
struct mlx5_flow_rule *flow_rule = NULL;
u32 *match_v, *match_c;
int err = 0;
match_v = kzalloc(MLX5_ST_SZ_BYTES(fte_match_param), GFP_KERNEL);
match_c = kzalloc(MLX5_ST_SZ_BYTES(fte_match_param), GFP_KERNEL);
if (!match_v || !match_c) {
esw_warn(esw->dev, "FDB: Failed to alloc match parameters\n");
err = -ENOMEM;
goto out;
}
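/* forward unmatched ("miss") packets to the PF vport (0), whose netdev
 * serves as the uplink/host representor, so software can handle them */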
dest.type = MLX5_FLOW_DESTINATION_TYPE_VPORT;
dest.vport_num = 0;
flow_rule = mlx5_add_flow_rule(esw->fdb_table.fdb, 0, match_c, match_v,
MLX5_FLOW_CONTEXT_ACTION_FWD_DEST, 0, &dest);
if (IS_ERR(flow_rule)) {
err = PTR_ERR(flow_rule);
esw_warn(esw->dev, "FDB: Failed to add miss flow rule err %d\n", err);
goto out;
}
esw->fdb_table.offloads.miss_rule = flow_rule;
out:
kfree(match_v);
kfree(match_c);
return err;
}
#define MAX_PF_SQ 256
static int esw_create_offloads_fdb_table(struct mlx5_eswitch *esw, int nvports)
{
int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in);
struct mlx5_core_dev *dev = esw->dev;
struct mlx5_flow_namespace *root_ns;
struct mlx5_flow_table *fdb = NULL;
struct mlx5_flow_group *g;
u32 *flow_group_in;
void *match_criteria;
int table_size, ix, err = 0;
flow_group_in = mlx5_vzalloc(inlen);
if (!flow_group_in)
return -ENOMEM;
root_ns = mlx5_get_flow_namespace(dev, MLX5_FLOW_NAMESPACE_FDB);
if (!root_ns) {
esw_warn(dev, "Failed to get FDB flow namespace\n");
err = -EOPNOTSUPP;
goto ns_err;
}
esw_debug(dev, "Create offloads FDB table, log_max_size(%d)\n",
MLX5_CAP_ESW_FLOWTABLE_FDB(dev, log_max_ft_size));
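/* room for the send-to-vport rules (nvports + MAX_PF_SQ) plus the miss rule */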
table_size = nvports + MAX_PF_SQ + 1;
fdb = mlx5_create_flow_table(root_ns, 0, table_size, 0);
if (IS_ERR(fdb)) {
err = PTR_ERR(fdb);
esw_warn(dev, "Failed to create FDB Table err %d\n", err);
goto fdb_err;
}
esw->fdb_table.fdb = fdb;
/* create send-to-vport group */
memset(flow_group_in, 0, inlen);
MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable,
MLX5_MATCH_MISC_PARAMETERS);
match_criteria = MLX5_ADDR_OF(create_flow_group_in, flow_group_in, match_criteria);
MLX5_SET_TO_ONES(fte_match_param, match_criteria, misc_parameters.source_sqn);
MLX5_SET_TO_ONES(fte_match_param, match_criteria, misc_parameters.source_port);
ix = nvports + MAX_PF_SQ;
MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, 0);
MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, ix - 1);
g = mlx5_create_flow_group(fdb, flow_group_in);
if (IS_ERR(g)) {
err = PTR_ERR(g);
esw_warn(dev, "Failed to create send-to-vport flow group err(%d)\n", err);
goto send_vport_err;
}
esw->fdb_table.offloads.send_to_vport_grp = g;
/* create miss group */
memset(flow_group_in, 0, inlen);
MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable, 0);
MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, ix);
MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, ix + 1);
g = mlx5_create_flow_group(fdb, flow_group_in);
if (IS_ERR(g)) {
err = PTR_ERR(g);
esw_warn(dev, "Failed to create miss flow group err(%d)\n", err);
goto miss_err;
}
esw->fdb_table.offloads.miss_grp = g;
err = esw_add_fdb_miss_rule(esw);
if (err)
goto miss_rule_err;
return 0;
miss_rule_err:
mlx5_destroy_flow_group(esw->fdb_table.offloads.miss_grp);
miss_err:
mlx5_destroy_flow_group(esw->fdb_table.offloads.send_to_vport_grp);
send_vport_err:
mlx5_destroy_flow_table(fdb);
fdb_err:
ns_err:
kvfree(flow_group_in);
return err;
}
static void esw_destroy_offloads_fdb_table(struct mlx5_eswitch *esw)
{
if (!esw->fdb_table.fdb)
return;
esw_debug(esw->dev, "Destroy offloads FDB Table\n");
mlx5_del_flow_rule(esw->fdb_table.offloads.miss_rule);
mlx5_destroy_flow_group(esw->fdb_table.offloads.send_to_vport_grp);
mlx5_destroy_flow_group(esw->fdb_table.offloads.miss_grp);
mlx5_destroy_flow_table(esw->fdb_table.fdb);
}
static int esw_create_offloads_table(struct mlx5_eswitch *esw)
{
struct mlx5_flow_namespace *ns;
struct mlx5_flow_table *ft_offloads;
struct mlx5_core_dev *dev = esw->dev;
int err = 0;
ns = mlx5_get_flow_namespace(dev, MLX5_FLOW_NAMESPACE_OFFLOADS);
if (!ns) {
esw_warn(esw->dev, "Failed to get offloads flow namespace\n");
return -ENOMEM;
}
ft_offloads = mlx5_create_flow_table(ns, 0, dev->priv.sriov.num_vfs + 2, 0);
if (IS_ERR(ft_offloads)) {
err = PTR_ERR(ft_offloads);
esw_warn(esw->dev, "Failed to create offloads table, err %d\n", err);
return err;
}
esw->offloads.ft_offloads = ft_offloads;
return 0;
}
static void esw_destroy_offloads_table(struct mlx5_eswitch *esw)
{
struct mlx5_esw_offload *offloads = &esw->offloads;
mlx5_destroy_flow_table(offloads->ft_offloads);
}
static int esw_create_vport_rx_group(struct mlx5_eswitch *esw)
{
int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in);
struct mlx5_flow_group *g;
struct mlx5_priv *priv = &esw->dev->priv;
u32 *flow_group_in;
void *match_criteria, *misc;
int err = 0;
int nvports = priv->sriov.num_vfs + 2;
flow_group_in = mlx5_vzalloc(inlen);
if (!flow_group_in)
return -ENOMEM;
/* create vport rx group */
memset(flow_group_in, 0, inlen);
MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable,
MLX5_MATCH_MISC_PARAMETERS);
match_criteria = MLX5_ADDR_OF(create_flow_group_in, flow_group_in, match_criteria);
misc = MLX5_ADDR_OF(fte_match_param, match_criteria, misc_parameters);
MLX5_SET_TO_ONES(fte_match_set_misc, misc, source_port);
MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, 0);
MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, nvports - 1);
g = mlx5_create_flow_group(esw->offloads.ft_offloads, flow_group_in);
if (IS_ERR(g)) {
err = PTR_ERR(g);
mlx5_core_warn(esw->dev, "Failed to create vport rx group err %d\n", err);
goto out;
}
esw->offloads.vport_rx_group = g;
out:
kvfree(flow_group_in);
return err;
}
static void esw_destroy_vport_rx_group(struct mlx5_eswitch *esw)
{
mlx5_destroy_flow_group(esw->offloads.vport_rx_group);
}
struct mlx5_flow_rule *
mlx5_eswitch_create_vport_rx_rule(struct mlx5_eswitch *esw, int vport, u32 tirn)
{
struct mlx5_flow_destination dest;
struct mlx5_flow_rule *flow_rule;
int match_header = MLX5_MATCH_MISC_PARAMETERS;
u32 *match_v, *match_c;
void *misc;
match_v = kzalloc(MLX5_ST_SZ_BYTES(fte_match_param), GFP_KERNEL);
match_c = kzalloc(MLX5_ST_SZ_BYTES(fte_match_param), GFP_KERNEL);
if (!match_v || !match_c) {
esw_warn(esw->dev, "Failed to alloc match parameters\n");
flow_rule = ERR_PTR(-ENOMEM);
goto out;
}
misc = MLX5_ADDR_OF(fte_match_param, match_v, misc_parameters);
MLX5_SET(fte_match_set_misc, misc, source_port, vport);
misc = MLX5_ADDR_OF(fte_match_param, match_c, misc_parameters);
MLX5_SET_TO_ONES(fte_match_set_misc, misc, source_port);
dest.type = MLX5_FLOW_DESTINATION_TYPE_TIR;
dest.tir_num = tirn;
flow_rule = mlx5_add_flow_rule(esw->offloads.ft_offloads, match_header, match_c,
match_v, MLX5_FLOW_CONTEXT_ACTION_FWD_DEST,
0, &dest);
if (IS_ERR(flow_rule)) {
esw_warn(esw->dev, "fs offloads: Failed to add vport rx rule err %ld\n", PTR_ERR(flow_rule));
goto out;
}
out:
kfree(match_v);
kfree(match_c);
return flow_rule;
}
static int esw_offloads_start(struct mlx5_eswitch *esw)
{
int err, num_vfs = esw->dev->priv.sriov.num_vfs;
if (esw->mode != SRIOV_LEGACY) {
esw_warn(esw->dev, "Can't set offloads mode, SRIOV legacy not enabled\n");
return -EINVAL;
}
mlx5_eswitch_disable_sriov(esw);
err = mlx5_eswitch_enable_sriov(esw, num_vfs, SRIOV_OFFLOADS);
if (err)
esw_warn(esw->dev, "Failed set eswitch to offloads, err %d\n", err);
return err;
}
int esw_offloads_init(struct mlx5_eswitch *esw, int nvports)
{
struct mlx5_eswitch_rep *rep;
int vport;
int err;
err = esw_create_offloads_fdb_table(esw, nvports);
if (err)
return err;
err = esw_create_offloads_table(esw);
if (err)
goto create_ft_err;
err = esw_create_vport_rx_group(esw);
if (err)
goto create_fg_err;
for (vport = 0; vport < nvports; vport++) {
rep = &esw->offloads.vport_reps[vport];
if (!rep->valid)
continue;
err = rep->load(esw, rep);
if (err)
goto err_reps;
}
return 0;
err_reps:
for (vport--; vport >= 0; vport--) {
rep = &esw->offloads.vport_reps[vport];
if (!rep->valid)
continue;
rep->unload(esw, rep);
}
esw_destroy_vport_rx_group(esw);
create_fg_err:
esw_destroy_offloads_table(esw);
create_ft_err:
esw_destroy_offloads_fdb_table(esw);
return err;
}
static int esw_offloads_stop(struct mlx5_eswitch *esw)
{
int err, num_vfs = esw->dev->priv.sriov.num_vfs;
mlx5_eswitch_disable_sriov(esw);
err = mlx5_eswitch_enable_sriov(esw, num_vfs, SRIOV_LEGACY);
if (err)
esw_warn(esw->dev, "Failed set eswitch legacy mode. err %d\n", err);
return err;
}
void esw_offloads_cleanup(struct mlx5_eswitch *esw, int nvports)
{
struct mlx5_eswitch_rep *rep;
int vport;
for (vport = 0; vport < nvports; vport++) {
rep = &esw->offloads.vport_reps[vport];
if (!rep->valid)
continue;
rep->unload(esw, rep);
}
esw_destroy_vport_rx_group(esw);
esw_destroy_offloads_table(esw);
esw_destroy_offloads_fdb_table(esw);
}
static int mlx5_esw_mode_from_devlink(u16 mode, u16 *mlx5_mode)
{
switch (mode) {
case DEVLINK_ESWITCH_MODE_LEGACY:
*mlx5_mode = SRIOV_LEGACY;
break;
case DEVLINK_ESWITCH_MODE_SWITCHDEV:
*mlx5_mode = SRIOV_OFFLOADS;
break;
default:
return -EINVAL;
}
return 0;
}
int mlx5_devlink_eswitch_mode_set(struct devlink *devlink, u16 mode)
{
struct mlx5_core_dev *dev;
u16 cur_mlx5_mode, mlx5_mode = 0;
dev = devlink_priv(devlink);
if (!MLX5_CAP_GEN(dev, vport_group_manager))
return -EOPNOTSUPP;
cur_mlx5_mode = dev->priv.eswitch->mode;
if (cur_mlx5_mode == SRIOV_NONE)
return -EOPNOTSUPP;
if (mlx5_esw_mode_from_devlink(mode, &mlx5_mode))
return -EINVAL;
if (cur_mlx5_mode == mlx5_mode)
return 0;
if (mode == DEVLINK_ESWITCH_MODE_SWITCHDEV)
return esw_offloads_start(dev->priv.eswitch);
else if (mode == DEVLINK_ESWITCH_MODE_LEGACY)
return esw_offloads_stop(dev->priv.eswitch);
else
return -EINVAL;
}
int mlx5_devlink_eswitch_mode_get(struct devlink *devlink, u16 *mode)
{
struct mlx5_core_dev *dev;
dev = devlink_priv(devlink);
if (!MLX5_CAP_GEN(dev, vport_group_manager))
return -EOPNOTSUPP;
if (dev->priv.eswitch->mode == SRIOV_NONE)
return -EOPNOTSUPP;
*mode = dev->priv.eswitch->mode;
return 0;
}
void mlx5_eswitch_register_vport_rep(struct mlx5_eswitch *esw,
struct mlx5_eswitch_rep *rep)
{
struct mlx5_esw_offload *offloads = &esw->offloads;
memcpy(&offloads->vport_reps[rep->vport], rep,
sizeof(struct mlx5_eswitch_rep));
INIT_LIST_HEAD(&offloads->vport_reps[rep->vport].vport_sqs_list);
offloads->vport_reps[rep->vport].valid = true;
}
void mlx5_eswitch_unregister_vport_rep(struct mlx5_eswitch *esw,
int vport)
{
struct mlx5_esw_offload *offloads = &esw->offloads;
struct mlx5_eswitch_rep *rep;
rep = &offloads->vport_reps[vport];
if (esw->mode == SRIOV_OFFLOADS && esw->vports[vport].enabled)
rep->unload(esw, rep);
offloads->vport_reps[vport].valid = false;
}
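To make the registration contract concrete, here is a hedged sketch of a rep provider (the bar_* names are hypothetical; it mirrors mlx5e_register_vport_rep() from earlier in this series). load() is invoked from esw_offloads_init() for every valid rep, and unload() from esw_offloads_cleanup() or mlx5_eswitch_unregister_vport_rep().

static int bar_rep_load(struct mlx5_eswitch *esw, struct mlx5_eswitch_rep *rep)
{
	/* create the per-vport rep netdev; called when the e-switch
	 * enters (or already runs in) offloads mode */
	return 0;
}

static void bar_rep_unload(struct mlx5_eswitch *esw, struct mlx5_eswitch_rep *rep)
{
	/* tear the rep netdev down; called on mode change or unregister */
}

static void bar_register_reps(struct mlx5_eswitch *esw, int nvports)
{
	int vport;

	for (vport = 1; vport < nvports; vport++) {
		struct mlx5_eswitch_rep rep = {
			.load	= bar_rep_load,
			.unload	= bar_rep_unload,
			.vport	= vport,
		};

		/* the e-switch copies the rep, so a stack variable is fine */
		mlx5_eswitch_register_vport_rep(esw, &rep);
	}
}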
@@ -83,6 +83,11 @@
#define ANCHOR_NUM_LEVELS 1
#define ANCHOR_NUM_PRIOS 1
#define ANCHOR_MIN_LEVEL (BY_PASS_MIN_LEVEL + 1)

#define OFFLOADS_MAX_FT 1
#define OFFLOADS_NUM_PRIOS 1
#define OFFLOADS_MIN_LEVEL (ANCHOR_MIN_LEVEL + 1)

struct node_caps {
	size_t	arr_sz;
	long	*caps;

@@ -98,7 +103,7 @@ static struct init_tree_node {
	int num_levels;
} root_fs = {
	.type = FS_TYPE_NAMESPACE,
	.ar_size = 5,
	.children = (struct init_tree_node[]) {
		ADD_PRIO(0, BY_PASS_MIN_LEVEL, 0,
			 FS_REQUIRED_CAPS(FS_CAP(flow_table_properties_nic_receive.flow_modify_en),

@@ -107,6 +112,9 @@ static struct init_tree_node {
				  FS_CAP(flow_table_properties_nic_receive.flow_table_modify)),
			 ADD_NS(ADD_MULTIPLE_PRIO(MLX5_BY_PASS_NUM_PRIOS,
						  BY_PASS_PRIO_NUM_LEVELS))),
		ADD_PRIO(0, OFFLOADS_MIN_LEVEL, 0, {},
			 ADD_NS(ADD_MULTIPLE_PRIO(OFFLOADS_NUM_PRIOS, OFFLOADS_MAX_FT))),
		ADD_PRIO(0, KERNEL_MIN_LEVEL, 0, {},
			 ADD_NS(ADD_MULTIPLE_PRIO(1, 1),
				ADD_MULTIPLE_PRIO(KERNEL_NIC_NUM_PRIOS,

@@ -1369,6 +1377,7 @@ struct mlx5_flow_namespace *mlx5_get_flow_namespace(struct mlx5_core_dev *dev,
	switch (type) {
	case MLX5_FLOW_NAMESPACE_BYPASS:
	case MLX5_FLOW_NAMESPACE_OFFLOADS:
	case MLX5_FLOW_NAMESPACE_KERNEL:
	case MLX5_FLOW_NAMESPACE_LEFTOVERS:
	case MLX5_FLOW_NAMESPACE_ANCHOR:
......
@@ -51,6 +51,7 @@
#ifdef CONFIG_RFS_ACCEL
#include <linux/cpu_rmap.h>
#endif
#include <net/devlink.h>
#include "mlx5_core.h"
#include "fs_core.h"
#ifdef CONFIG_MLX5_CORE_EN

@@ -1315,19 +1316,28 @@ struct mlx5_core_event_handler {
		      void *data);
};

static const struct devlink_ops mlx5_devlink_ops = {
#ifdef CONFIG_MLX5_CORE_EN
	.eswitch_mode_set = mlx5_devlink_eswitch_mode_set,
	.eswitch_mode_get = mlx5_devlink_eswitch_mode_get,
#endif
};

static int init_one(struct pci_dev *pdev,
		    const struct pci_device_id *id)
{
	struct mlx5_core_dev *dev;
	struct devlink *devlink;
	struct mlx5_priv *priv;
	int err;

	devlink = devlink_alloc(&mlx5_devlink_ops, sizeof(*dev));
	if (!devlink) {
		dev_err(&pdev->dev, "kzalloc failed\n");
		return -ENOMEM;
	}

	dev = devlink_priv(devlink);
	priv = &dev->priv;
	priv->pci_dev_data = id->driver_data;

@@ -1364,15 +1374,21 @@ static int init_one(struct pci_dev *pdev,
		goto clean_health;
	}

	err = devlink_register(devlink, &pdev->dev);
	if (err)
		goto clean_load;

	return 0;

clean_load:
	mlx5_unload_one(dev, priv);
clean_health:
	mlx5_health_cleanup(dev);
close_pci:
	mlx5_pci_close(dev, priv);
clean_dev:
	pci_set_drvdata(pdev, NULL);
	devlink_free(devlink);

	return err;
}

@@ -1380,8 +1396,10 @@ static int init_one(struct pci_dev *pdev,
static void remove_one(struct pci_dev *pdev)
{
	struct mlx5_core_dev *dev = pci_get_drvdata(pdev);
	struct devlink *devlink = priv_to_devlink(dev);
	struct mlx5_priv *priv = &dev->priv;

	devlink_unregister(devlink);
	if (mlx5_unload_one(dev, priv)) {
		dev_err(&dev->pdev->dev, "mlx5_unload_one failed\n");
		mlx5_health_cleanup(dev);

@@ -1390,7 +1408,7 @@ static void remove_one(struct pci_dev *pdev)
	mlx5_health_cleanup(dev);
	mlx5_pci_close(dev, priv);
	pci_set_drvdata(pdev, NULL);
	devlink_free(devlink);
}

static pci_ers_result_t mlx5_pci_err_detected(struct pci_dev *pdev,
......
@@ -167,7 +167,7 @@ int mlx5_core_sriov_configure(struct pci_dev *pdev, int num_vfs)
	mlx5_core_init_vfs(dev, num_vfs);

#ifdef CONFIG_MLX5_CORE_EN
	mlx5_eswitch_enable_sriov(dev->priv.eswitch, num_vfs, SRIOV_LEGACY);
#endif

	return num_vfs;

@@ -209,7 +209,8 @@ int mlx5_sriov_init(struct mlx5_core_dev *dev)
	mlx5_core_init_vfs(dev, cur_vfs);
#ifdef CONFIG_MLX5_CORE_EN
	if (cur_vfs)
		mlx5_eswitch_enable_sriov(dev->priv.eswitch, cur_vfs,
					  SRIOV_LEGACY);
#endif

	enable_vfs(dev, cur_vfs);
......
@@ -578,6 +578,18 @@ enum mlx5_pci_status {
	MLX5_PCI_STATUS_ENABLED,
};

struct mlx5_td {
	struct list_head	tirs_list;
	u32			tdn;
};

struct mlx5e_resources {
	struct mlx5_uar		cq_uar;
	u32			pdn;
	struct mlx5_td		td;
	struct mlx5_core_mkey	mkey;
};

struct mlx5_core_dev {
	struct pci_dev	       *pdev;
	/* sync pci state */

@@ -602,6 +614,7 @@ struct mlx5_core_dev {
	struct mlx5_profile	*profile;
	atomic_t		num_qps;
	u32			issi;
	struct mlx5e_resources  mlx5e_res;
#ifdef CONFIG_RFS_ACCEL
	struct cpu_rmap		*rmap;
#endif
......
@@ -54,6 +54,7 @@ static inline void build_leftovers_ft_param(int *priority,
enum mlx5_flow_namespace_type {
	MLX5_FLOW_NAMESPACE_BYPASS,
	MLX5_FLOW_NAMESPACE_OFFLOADS,
	MLX5_FLOW_NAMESPACE_KERNEL,
	MLX5_FLOW_NAMESPACE_LEFTOVERS,
	MLX5_FLOW_NAMESPACE_ANCHOR,
......
@@ -90,6 +90,9 @@ struct devlink_ops {
			     u16 tc_index,
			     enum devlink_sb_pool_type pool_type,
			     u32 *p_cur, u32 *p_max);
	int (*eswitch_mode_get)(struct devlink *devlink, u16 *p_mode);
	int (*eswitch_mode_set)(struct devlink *devlink, u16 mode);
};

static inline void *devlink_priv(struct devlink *devlink)
......
@@ -57,6 +57,8 @@ enum devlink_command {
	DEVLINK_CMD_SB_OCC_SNAPSHOT,
	DEVLINK_CMD_SB_OCC_MAX_CLEAR,

	DEVLINK_CMD_ESWITCH_MODE_GET,
	DEVLINK_CMD_ESWITCH_MODE_SET,

	/* add new commands above here */
	__DEVLINK_CMD_MAX,

@@ -95,6 +97,11 @@ enum devlink_sb_threshold_type {
#define DEVLINK_SB_THRESHOLD_TO_ALPHA_MAX 20

enum devlink_eswitch_mode {
	DEVLINK_ESWITCH_MODE_LEGACY,
	DEVLINK_ESWITCH_MODE_SWITCHDEV,
};

enum devlink_attr {
	/* don't change the order or add anything between, this is ABI! */
	DEVLINK_ATTR_UNSPEC,

@@ -125,6 +132,7 @@ enum devlink_attr {
	DEVLINK_ATTR_SB_TC_INDEX,	/* u16 */
	DEVLINK_ATTR_SB_OCC_CUR,	/* u32 */
	DEVLINK_ATTR_SB_OCC_MAX,	/* u32 */
	DEVLINK_ATTR_ESWITCH_MODE,	/* u16 */

	/* add new attributes above here, update the policy in devlink.c */
......
@@ -1394,6 +1394,78 @@ static int devlink_nl_cmd_sb_occ_max_clear_doit(struct sk_buff *skb,
	return -EOPNOTSUPP;
}
static int devlink_eswitch_fill(struct sk_buff *msg, struct devlink *devlink,
enum devlink_command cmd, u32 portid,
u32 seq, int flags, u16 mode)
{
void *hdr;
hdr = genlmsg_put(msg, portid, seq, &devlink_nl_family, flags, cmd);
if (!hdr)
return -EMSGSIZE;
if (devlink_nl_put_handle(msg, devlink))
goto nla_put_failure;
if (nla_put_u16(msg, DEVLINK_ATTR_ESWITCH_MODE, mode))
goto nla_put_failure;
genlmsg_end(msg, hdr);
return 0;
nla_put_failure:
genlmsg_cancel(msg, hdr);
return -EMSGSIZE;
}
static int devlink_nl_cmd_eswitch_mode_get_doit(struct sk_buff *skb,
struct genl_info *info)
{
struct devlink *devlink = info->user_ptr[0];
const struct devlink_ops *ops = devlink->ops;
struct sk_buff *msg;
u16 mode;
int err;
if (!ops || !ops->eswitch_mode_get)
return -EOPNOTSUPP;
err = ops->eswitch_mode_get(devlink, &mode);
if (err)
return err;
msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);
if (!msg)
return -ENOMEM;
err = devlink_eswitch_fill(msg, devlink, DEVLINK_CMD_ESWITCH_MODE_GET,
info->snd_portid, info->snd_seq, 0, mode);
if (err) {
nlmsg_free(msg);
return err;
}
return genlmsg_reply(msg, info);
}
static int devlink_nl_cmd_eswitch_mode_set_doit(struct sk_buff *skb,
struct genl_info *info)
{
struct devlink *devlink = info->user_ptr[0];
const struct devlink_ops *ops = devlink->ops;
u16 mode;
if (!info->attrs[DEVLINK_ATTR_ESWITCH_MODE])
return -EINVAL;
mode = nla_get_u16(info->attrs[DEVLINK_ATTR_ESWITCH_MODE]);
if (ops && ops->eswitch_mode_set)
return ops->eswitch_mode_set(devlink, mode);
return -EOPNOTSUPP;
}
static const struct nla_policy devlink_nl_policy[DEVLINK_ATTR_MAX + 1] = {
	[DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING },
	[DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING },

@@ -1407,6 +1479,7 @@ static const struct nla_policy devlink_nl_policy[DEVLINK_ATTR_MAX + 1] = {
	[DEVLINK_ATTR_SB_POOL_THRESHOLD_TYPE] = { .type = NLA_U8 },
	[DEVLINK_ATTR_SB_THRESHOLD] = { .type = NLA_U32 },
	[DEVLINK_ATTR_SB_TC_INDEX] = { .type = NLA_U16 },
	[DEVLINK_ATTR_ESWITCH_MODE] = { .type = NLA_U16 },
};

static const struct genl_ops devlink_nl_ops[] = {

@@ -1525,6 +1598,20 @@ static const struct genl_ops devlink_nl_ops[] = {
				  DEVLINK_NL_FLAG_NEED_SB |
				  DEVLINK_NL_FLAG_LOCK_PORTS,
	},
	{
		.cmd = DEVLINK_CMD_ESWITCH_MODE_GET,
		.doit = devlink_nl_cmd_eswitch_mode_get_doit,
		.policy = devlink_nl_policy,
		.flags = GENL_ADMIN_PERM,
		.internal_flags = DEVLINK_NL_FLAG_NEED_DEVLINK,
	},
	{
		.cmd = DEVLINK_CMD_ESWITCH_MODE_SET,
		.doit = devlink_nl_cmd_eswitch_mode_set_doit,
		.policy = devlink_nl_policy,
		.flags = GENL_ADMIN_PERM,
		.internal_flags = DEVLINK_NL_FLAG_NEED_DEVLINK,
	},
};

/**
......
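For driver authors, the two devlink hooks added above are small to adopt. A hedged sketch of a hypothetical foo driver wiring them up, assuming only devlink_alloc()/devlink_priv() and the new ops; a real driver would reprogram its embedded switch in the set callback, as mlx5 does via esw_offloads_start()/esw_offloads_stop():

#include <net/devlink.h>

struct foo_dev {
	u16 esw_mode;		/* one of DEVLINK_ESWITCH_MODE_* */
};

static int foo_eswitch_mode_get(struct devlink *devlink, u16 *mode)
{
	struct foo_dev *fdev = devlink_priv(devlink);

	*mode = fdev->esw_mode;
	return 0;
}

static int foo_eswitch_mode_set(struct devlink *devlink, u16 mode)
{
	struct foo_dev *fdev = devlink_priv(devlink);

	if (mode != DEVLINK_ESWITCH_MODE_LEGACY &&
	    mode != DEVLINK_ESWITCH_MODE_SWITCHDEV)
		return -EINVAL;

	fdev->esw_mode = mode;	/* a real driver reprograms HW here */
	return 0;
}

static const struct devlink_ops foo_devlink_ops = {
	.eswitch_mode_get = foo_eswitch_mode_get,
	.eswitch_mode_set = foo_eswitch_mode_set,
};

/* allocated at probe time with:
 * devlink_alloc(&foo_devlink_ops, sizeof(struct foo_dev)) */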