Commit 70d40b36 authored by David S. Miller

Merge branch 'mlx5-RDMA-netdevice'

Saeed Mahameed says:

====================
Mellanox, mlx5 RDMA net device support

This series provides the lower level mlx5 support for the RDMA netdevice
creation API [1], suggested and introduced by Intel's HFI OPA VNIC
netdevice driver [2], to enable mlx5 IPoIB RDMA netdevice creation.

The mlx5 IPoIB RDMA netdev will serve as an acceleration netdevice for the current
generic IPoIB ULP netdevice, providing:
	- mlx5 RSS support.
	- mlx5 HW RX/TX offloads (checksum, TSO, LRO, etc.).
	- Full mlx5 HW features, transparent to the ULP itself.

The idea here is to reuse and benefit from the already implemented mlx5e netdevice
management and channels API for both Ethernet and RDMA netdevices. Since IPoIB
and Ethernet netdevices share the same common mlx5 HW resources (with some small
exceptions) and share most of the control/data path logic, it is more natural to
have them share the same code.
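
Concretely, the generic netdev management API that this series exports from en.h
(see the en.h hunk below) lets a non-Ethernet consumer drive the whole lifecycle.
A minimal sketch, assuming the IPoIB profile and private data (mlx5i_nic_profile,
ipriv) introduced later in the series:

	struct net_device *netdev;
	struct mlx5e_priv *epriv;

	netdev = mlx5e_create_netdev(mdev, &mlx5i_nic_profile, ipriv);
	epriv  = mlx5i_epriv(netdev); /* the mlx5e_priv embedded in the IPoIB netdev */

	err = mlx5e_attach_netdev(epriv);

	/* teardown mirrors creation (error handling omitted in this sketch) */
	mlx5e_detach_netdev(epriv);
	mlx5e_destroy_netdev(epriv);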

The differences between IPoIB and Ethernet netdevices can be summarized as follows:

Steering:
In mlx5, IPoIB traffic is sent and received through a special underlay QP, while in Ethernet
the traffic is handled by vports, and vport steering is managed by the e-switch or FW.

For IPoIB traffic to be steered correctly, all we need to do is create RSS HW contexts
for RX and TX HW contexts for TX (as in mlx5e), with the underlay QP attached to
them (the underlay QP is 0 in the Ethernet case).
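
The refactoring threads the underlay QPN through the new mlx5_flow_table_attr
(see the fs.h and en_fs.c hunks below), so creating a steering table becomes the
same call for both link types:

	struct mlx5_flow_table_attr ft_attr = {};

	ft_attr.max_fte      = MLX5E_TTC_TABLE_SIZE;
	ft_attr.level        = MLX5E_TTC_FT_LEVEL;
	ft_attr.prio         = MLX5E_NIC_PRIO;
	ft_attr.underlay_qpn = underlay_qpn; /* 0 in the Ethernet case */

	ft->t = mlx5_create_flow_table(priv->fs.ns, &ft_attr);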

RX/TX:
Since IPoIB traffic is different, slightly modified RX and TX handlers are required;
still, we reuse data path code via common helper functions.
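
For example, in the IPoIB RX handler added in the en_rx.c hunk below, the
completion path reuses the generic helpers and only layers the IPoIB-specific
GRH/encap handling on top:

	skb = skb_from_cqe(rq, cqe, wqe_counter, cqe_bcnt); /* shared with mlx5e */
	if (!skb)
		goto wq_ll_pop;

	mlx5i_complete_rx_cqe(rq, cqe, cqe_bcnt, skb); /* IPoIB-specific part */
	napi_gro_receive(rq->cq.napi, skb);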

All other generic netdevice and mlx5 aspects are shared between the mlx5 Ethernet
and IPoIB netdevices, e.g.
	- Channel creation and handling (RQs, SQs, CQs, NAPI, interrupt moderation, etc.)
	- Offloads: checksum, GRO, LRO, TSO, and more.
	- netdevice logic and non-Ethernet-specific ndos (open/close, etc.)

To achieve this:

In patches 1-3, Erez added support for the underlay QP in mlx5_ifc, refactored
the mlx5 steering code to accept the underlay QP as a parameter when creating steering
objects, and enabled flow steering for the IB link.

Then we use the mlx5e netdevice profile, which is already used to separate the
NIC and VF representor netdevices, to create a new IPoIB netdevice profile type.

For that, one small refactoring is required to make mlx5e netdevice profile management
more generic and agnostic to the link type; this is done in patch #4.
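
With the profile agnostic to the link type, the IPoIB profile is just another
instance of struct mlx5e_profile. A sketch of the skeleton from patch #5
(ipoib.c itself is collapsed in this view, so the handler names here follow the
series' naming and should be read as illustrative):

	static const struct mlx5e_profile mlx5i_nic_profile = {
		.init		= mlx5i_init,
		.cleanup	= mlx5i_cleanup,
		.init_tx	= mlx5i_init_tx,
		.cleanup_tx	= mlx5i_cleanup_tx,
		.init_rx	= mlx5i_init_rx,
		.cleanup_rx	= mlx5i_cleanup_rx,
		.max_nch	= mlx5e_get_max_num_channels,
		.rx_handlers.handle_rx_cqe	 = mlx5i_handle_rx_cqe,
		.rx_handlers.handle_rx_cqe_mpwqe = NULL, /* Not supported */
		.max_tc		= MLX5I_MAX_NUM_TC,
	};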

In patch #5, we introduce ipoib.c to host all mlx5 IPoIB (mlx5i) specific logic and a
skeleton of the IPoIB mlx5 netdevice profile, which we fill in over the next patches
using existing mlx5e APIs.

Patches #6 and #7 implement the init/cleanup RX mlx5i netdev profile handlers to create
the mlx5 RSS resources, same as mlx5e but without the vlan and L2 steering tables.

Patch #8 implements the init/cleanup TX mlx5i netdev profile handlers to create the TX
resources, same as mlx5e but with only one TC (tc = 0) supported.
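
The TX side goes through the newly exported TIS helper (declared in the en.h hunk
below), which programs the underlay QPN into the TIS context field added in
mlx5_ifc.h; roughly:

	struct mlx5i_priv *ipriv = priv->ppriv;

	err = mlx5e_create_tis(priv->mdev, 0 /* tc */,
			       ipriv->qp.qpn, &priv->tisn[0]);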

Patch #9 implements the mlx5i open/close ndos, where we reuse the mlx5e channels API to start/stop the TX/RX channels.

Patch #10 creates the underlay QP and attaches it to the mlx5i RSS and TX HW contexts.
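
A condensed sketch of the underlay QP creation, with field names as used by the
series (the full flow lives in the collapsed ipoib.c, so treat this as
illustrative rather than exact):

	qpc = MLX5_ADDR_OF(create_qp_in, in, qpc);
	MLX5_SET(qpc, qpc, st, MLX5_QP_ST_UD);
	MLX5_SET(qpc, qpc, ulp_stateless_offload_mode,
		 MLX5_QP_ENHANCED_ULP_STATELESS_MODE);

	err = mlx5_core_create_qp(mdev, &ipriv->qp, in, inlen);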

Patches #11 and #12 break down the mlx5e xmit flow into smaller helper functions and
implement the mlx5i IPoIB xmit routine.
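
mlx5i_sq_xmit() and mlx5i_epriv() are declared in the new ipoib.h below, and
to_mah() is moved into include/linux/mlx5/qp.h for exactly this purpose; a sketch
of how an RDMA netdev xmit callback would funnel into them (the actual wrapper is
in the collapsed ipoib.c, so names here are illustrative):

	struct mlx5e_priv *epriv = mlx5i_epriv(dev);
	struct mlx5e_txqsq *sq   = epriv->txq2sq[skb_get_queue_mapping(skb)];
	struct mlx5_ib_ah *mah   = to_mah(address);

	return mlx5i_sq_xmit(sq, skb, &mah->av, dqpn, dqkey);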

Patches #13 and #14 move to an RX handler per netdevice profile. Before this series
we already did this, in a non-clean way, to separate the NIC netdev and VF representor
RX handlers; patch #13 makes the RX handler generic and bound to a profile, and
patch #14 implements the IPoIB RX handlers.
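
After patch #13 the RQ picks its completion handler from the owning profile at
init time, along these lines (a sketch of the en_main.c side, which is not shown
in this view):

	rq->handle_rx_cqe = c->priv->profile->rx_handlers.handle_rx_cqe;
	if (!rq->handle_rx_cqe) {
		err = -EINVAL;
		netdev_err(c->netdev, "RX handler of RQ is not set\n");
		goto err_rq_wq_destroy;
	}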

Patch #15 is a small cleanup to avoid the e-switch with an IPoIB netdev.

In order to enable mlx5 IPoIB, a merge between the IPoIB RDMA netdev offload support [3]
- which was already submitted to the rdma mailing list - and this series is required,
plus an extra small patch [4] which connects the two sides and actually enables the offload.

Once both patch sets are merged into Linux, we will submit the extra small patch [4] to enable
the feature.

Thanks,
Saeed.

[1] https://patchwork.kernel.org/patch/9676637/

[2] https://lwn.net/Articles/715453/
    https://patchwork.kernel.org/patch/9587815/

[3] https://patchwork.kernel.org/patch/9672069/
[4] https://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux.git/commit/?id=0141db6a686e32294dee015b7d07706162ba48d8
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
parents f72860af 93d576af
diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
@@ -729,16 +729,6 @@ static inline struct mlx5_ib_mw *to_mmw(struct ib_mw *ibmw)
 	return container_of(ibmw, struct mlx5_ib_mw, ibmw);
 }
 
-struct mlx5_ib_ah {
-	struct ib_ah		ibah;
-	struct mlx5_av		av;
-};
-
-static inline struct mlx5_ib_ah *to_mah(struct ib_ah *ibah)
-{
-	return container_of(ibah, struct mlx5_ib_ah, ibah);
-}
-
 int mlx5_ib_db_map_user(struct mlx5_ib_ucontext *context, unsigned long virt,
 			struct mlx5_db *db);
 void mlx5_ib_db_unmap_user(struct mlx5_ib_ucontext *context, struct mlx5_db *db);
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
@@ -897,6 +897,7 @@ static int create_kernel_qp(struct mlx5_ib_dev *dev,
 	if (init_attr->create_flags & ~(IB_QP_CREATE_SIGNATURE_EN |
 					IB_QP_CREATE_BLOCK_MULTICAST_LOOPBACK |
 					IB_QP_CREATE_IPOIB_UD_LSO |
+					IB_QP_CREATE_NETIF_QP |
 					mlx5_ib_create_qp_sqpn_qp1()))
 		return -EINVAL;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/Kconfig b/drivers/net/ethernet/mellanox/mlx5/core/Kconfig
@@ -31,3 +31,10 @@ config MLX5_CORE_EN_DCB
 	  This flag is depended on the kernel's DCB support.
 
 	  If unsure, set to Y
+
+config MLX5_CORE_IPOIB
+	bool "Mellanox Technologies ConnectX-4 IPoIB offloads support"
+	depends on MLX5_CORE_EN
+	default y
+	---help---
+	  MLX5 IPoIB offloads & acceleration support.
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/Makefile b/drivers/net/ethernet/mellanox/mlx5/core/Makefile
@@ -11,3 +11,5 @@ mlx5_core-$(CONFIG_MLX5_CORE_EN) += wq.o eswitch.o eswitch_offloads.o \
 		en_tc.o en_arfs.o en_rep.o en_fs_ethtool.o en_selftest.o
 
 mlx5_core-$(CONFIG_MLX5_CORE_EN_DCB) +=  en_dcbnl.o
+
+mlx5_core-$(CONFIG_MLX5_CORE_IPOIB) += ipoib.o
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -37,6 +37,7 @@
 #include <linux/timecounter.h>
 #include <linux/net_tstamp.h>
 #include <linux/ptp_clock_kernel.h>
+#include <linux/crash_dump.h>
 #include <linux/mlx5/driver.h>
 #include <linux/mlx5/qp.h>
 #include <linux/mlx5/cq.h>
@@ -153,6 +154,14 @@ static inline int mlx5_max_log_rq_size(int wq_type)
 	}
 }
 
+static inline int mlx5e_get_max_num_channels(struct mlx5_core_dev *mdev)
+{
+	return is_kdump_kernel() ?
+		MLX5E_MIN_NUM_CHANNELS :
+		min_t(int, mdev->priv.eq_table.num_comp_vectors,
+		      MLX5E_MAX_NUM_CHANNELS);
+}
+
 struct mlx5e_tx_wqe {
 	struct mlx5_wqe_ctrl_seg ctrl;
 	struct mlx5_wqe_eth_seg  eth;
@@ -295,6 +304,7 @@ struct mlx5e_cq {
 } ____cacheline_aligned_in_smp;
 
 struct mlx5e_tx_wqe_info {
+	struct sk_buff *skb;
 	u32 num_bytes;
 	u8  num_wqebbs;
 	u8  num_dma;
@@ -336,7 +346,6 @@ struct mlx5e_txqsq {
 
 	/* write@xmit, read@completion */
 	struct {
-		struct sk_buff           **skb;
 		struct mlx5e_sq_dma       *dma_fifo;
 		struct mlx5e_tx_wqe_info  *wqe_info;
 	} db;
@@ -770,6 +779,10 @@ struct mlx5e_profile {
 	void	(*disable)(struct mlx5e_priv *priv);
 	void	(*update_stats)(struct mlx5e_priv *priv);
 	int	(*max_nch)(struct mlx5_core_dev *mdev);
+	struct {
+		mlx5e_fp_handle_rx_cqe handle_rx_cqe;
+		mlx5e_fp_handle_rx_cqe handle_rx_cqe_mpwqe;
+	} rx_handlers;
 	int	max_tc;
 };
@@ -874,6 +887,8 @@ typedef int (*mlx5e_fp_hw_modify)(struct mlx5e_priv *priv);
 void mlx5e_switch_priv_channels(struct mlx5e_priv *priv,
 				struct mlx5e_channels *new_chs,
 				mlx5e_fp_hw_modify hw_modify);
+void mlx5e_activate_priv_channels(struct mlx5e_priv *priv);
+void mlx5e_deactivate_priv_channels(struct mlx5e_priv *priv);
 
 void mlx5e_build_default_indir_rqt(struct mlx5_core_dev *mdev,
 				   u32 *indirection_rqt, int len,
@@ -990,21 +1005,30 @@ int mlx5e_attr_get(struct net_device *dev, struct switchdev_attr *attr);
 void mlx5e_handle_rx_cqe_rep(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe);
 void mlx5e_update_hw_rep_counters(struct mlx5e_priv *priv);
 
+/* common netdev helpers */
+int mlx5e_create_indirect_rqt(struct mlx5e_priv *priv);
+
+int mlx5e_create_indirect_tirs(struct mlx5e_priv *priv);
+void mlx5e_destroy_indirect_tirs(struct mlx5e_priv *priv);
+
 int mlx5e_create_direct_rqts(struct mlx5e_priv *priv);
-void mlx5e_destroy_rqt(struct mlx5e_priv *priv, struct mlx5e_rqt *rqt);
+void mlx5e_destroy_direct_rqts(struct mlx5e_priv *priv);
 int mlx5e_create_direct_tirs(struct mlx5e_priv *priv);
 void mlx5e_destroy_direct_tirs(struct mlx5e_priv *priv);
+void mlx5e_destroy_rqt(struct mlx5e_priv *priv, struct mlx5e_rqt *rqt);
+
+int mlx5e_create_ttc_table(struct mlx5e_priv *priv, u32 underlay_qpn);
+void mlx5e_destroy_ttc_table(struct mlx5e_priv *priv);
+
+int mlx5e_create_tis(struct mlx5_core_dev *mdev, int tc,
+		     u32 underlay_qpn, u32 *tisn);
+void mlx5e_destroy_tis(struct mlx5_core_dev *mdev, u32 tisn);
+
 int mlx5e_create_tises(struct mlx5e_priv *priv);
 void mlx5e_cleanup_nic_tx(struct mlx5e_priv *priv);
 int mlx5e_close(struct net_device *netdev);
 int mlx5e_open(struct net_device *netdev);
 void mlx5e_update_stats_work(struct work_struct *work);
-struct net_device *mlx5e_create_netdev(struct mlx5_core_dev *mdev,
-				       const struct mlx5e_profile *profile,
-				       void *ppriv);
-void mlx5e_destroy_netdev(struct mlx5_core_dev *mdev, struct mlx5e_priv *priv);
-int mlx5e_attach_netdev(struct mlx5_core_dev *mdev, struct net_device *netdev);
-void mlx5e_detach_netdev(struct mlx5_core_dev *mdev, struct net_device *netdev);
 
 u32 mlx5e_choose_lro_timeout(struct mlx5_core_dev *mdev, u32 wanted_timeout);
 int mlx5e_get_offload_stats(int attr_id, const struct net_device *dev,
@@ -1012,5 +1036,16 @@ int mlx5e_get_offload_stats(int attr_id, const struct net_device *dev,
 bool mlx5e_has_offload_stats(const struct net_device *dev, int attr_id);
 bool mlx5e_is_uplink_rep(struct mlx5e_priv *priv);
-bool mlx5e_is_vf_vport_rep(struct mlx5e_priv *priv);
+
+/* mlx5e generic netdev management API */
+struct net_device*
+mlx5e_create_netdev(struct mlx5_core_dev *mdev, const struct mlx5e_profile *profile,
+		    void *ppriv);
+int mlx5e_attach_netdev(struct mlx5e_priv *priv);
+void mlx5e_detach_netdev(struct mlx5e_priv *priv);
+void mlx5e_destroy_netdev(struct mlx5e_priv *priv);
+void mlx5e_build_nic_params(struct mlx5_core_dev *mdev,
+			    struct mlx5e_params *params,
+			    u16 max_channels);
 
 #endif /* __MLX5_EN_H__ */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_arfs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_arfs.c
@@ -321,10 +321,16 @@ static int arfs_create_table(struct mlx5e_priv *priv,
 {
 	struct mlx5e_arfs_tables *arfs = &priv->fs.arfs;
 	struct mlx5e_flow_table *ft = &arfs->arfs_tables[type].ft;
+	struct mlx5_flow_table_attr ft_attr = {};
 	int err;
 
-	ft->t = mlx5_create_flow_table(priv->fs.ns, MLX5E_NIC_PRIO,
-				       MLX5E_ARFS_TABLE_SIZE, MLX5E_ARFS_FT_LEVEL, 0);
+	ft->num_groups = 0;
+
+	ft_attr.max_fte = MLX5E_ARFS_TABLE_SIZE;
+	ft_attr.level = MLX5E_ARFS_FT_LEVEL;
+	ft_attr.prio = MLX5E_NIC_PRIO;
+
+	ft->t = mlx5_create_flow_table(priv->fs.ns, &ft_attr);
 	if (IS_ERR(ft->t)) {
 		err = PTR_ERR(ft->t);
 		ft->t = NULL;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
@@ -792,7 +792,7 @@ static int mlx5e_create_ttc_table_groups(struct mlx5e_ttc_table *ttc)
 	return err;
 }
 
-static void mlx5e_destroy_ttc_table(struct mlx5e_priv *priv)
+void mlx5e_destroy_ttc_table(struct mlx5e_priv *priv)
 {
 	struct mlx5e_ttc_table *ttc = &priv->fs.ttc;
 
@@ -800,14 +800,19 @@ static void mlx5e_destroy_ttc_table(struct mlx5e_priv *priv)
 	mlx5e_destroy_flow_table(&ttc->ft);
 }
 
-static int mlx5e_create_ttc_table(struct mlx5e_priv *priv)
+int mlx5e_create_ttc_table(struct mlx5e_priv *priv, u32 underlay_qpn)
 {
 	struct mlx5e_ttc_table *ttc = &priv->fs.ttc;
+	struct mlx5_flow_table_attr ft_attr = {};
 	struct mlx5e_flow_table *ft = &ttc->ft;
 	int err;
 
-	ft->t = mlx5_create_flow_table(priv->fs.ns, MLX5E_NIC_PRIO,
-				       MLX5E_TTC_TABLE_SIZE, MLX5E_TTC_FT_LEVEL, 0);
+	ft_attr.max_fte = MLX5E_TTC_TABLE_SIZE;
+	ft_attr.level = MLX5E_TTC_FT_LEVEL;
+	ft_attr.prio = MLX5E_NIC_PRIO;
+	ft_attr.underlay_qpn = underlay_qpn;
+
+	ft->t = mlx5_create_flow_table(priv->fs.ns, &ft_attr);
 	if (IS_ERR(ft->t)) {
 		err = PTR_ERR(ft->t);
 		ft->t = NULL;
@@ -973,12 +978,16 @@ static int mlx5e_create_l2_table(struct mlx5e_priv *priv)
 {
 	struct mlx5e_l2_table *l2_table = &priv->fs.l2;
 	struct mlx5e_flow_table *ft = &l2_table->ft;
+	struct mlx5_flow_table_attr ft_attr = {};
 	int err;
 
 	ft->num_groups = 0;
-	ft->t = mlx5_create_flow_table(priv->fs.ns, MLX5E_NIC_PRIO,
-				       MLX5E_L2_TABLE_SIZE, MLX5E_L2_FT_LEVEL, 0);
 
+	ft_attr.max_fte = MLX5E_L2_TABLE_SIZE;
+	ft_attr.level = MLX5E_L2_FT_LEVEL;
+	ft_attr.prio = MLX5E_NIC_PRIO;
+
+	ft->t = mlx5_create_flow_table(priv->fs.ns, &ft_attr);
 	if (IS_ERR(ft->t)) {
 		err = PTR_ERR(ft->t);
 		ft->t = NULL;
@@ -1076,11 +1085,16 @@ static int mlx5e_create_vlan_table_groups(struct mlx5e_flow_table *ft)
 static int mlx5e_create_vlan_table(struct mlx5e_priv *priv)
 {
 	struct mlx5e_flow_table *ft = &priv->fs.vlan.ft;
+	struct mlx5_flow_table_attr ft_attr = {};
 	int err;
 
 	ft->num_groups = 0;
-	ft->t = mlx5_create_flow_table(priv->fs.ns, MLX5E_NIC_PRIO,
-				       MLX5E_VLAN_TABLE_SIZE, MLX5E_VLAN_FT_LEVEL, 0);
+
+	ft_attr.max_fte = MLX5E_VLAN_TABLE_SIZE;
+	ft_attr.level = MLX5E_VLAN_FT_LEVEL;
+	ft_attr.prio = MLX5E_NIC_PRIO;
+
+	ft->t = mlx5_create_flow_table(priv->fs.ns, &ft_attr);
 	if (IS_ERR(ft->t)) {
 		err = PTR_ERR(ft->t);
@@ -1133,7 +1147,7 @@ int mlx5e_create_flow_steering(struct mlx5e_priv *priv)
 		priv->netdev->hw_features &= ~NETIF_F_NTUPLE;
 	}
 
-	err = mlx5e_create_ttc_table(priv);
+	err = mlx5e_create_ttc_table(priv, 0);
 	if (err) {
 		netdev_err(priv->netdev, "Failed to create ttc table, err=%d\n",
 			   err);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
@@ -329,7 +329,7 @@ bool mlx5e_is_uplink_rep(struct mlx5e_priv *priv)
 	return false;
 }
 
-bool mlx5e_is_vf_vport_rep(struct mlx5e_priv *priv)
+static bool mlx5e_is_vf_vport_rep(struct mlx5e_priv *priv)
 {
 	struct mlx5_eswitch_rep *rep = (struct mlx5_eswitch_rep *)priv->ppriv;
 
@@ -465,22 +465,18 @@ static int mlx5e_init_rep_rx(struct mlx5e_priv *priv)
 {
 	struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
 	struct mlx5_eswitch_rep *rep = priv->ppriv;
-	struct mlx5_core_dev *mdev = priv->mdev;
 	struct mlx5_flow_handle *flow_rule;
 	int err;
-	int i;
+
+	mlx5e_init_l2_addr(priv);
 
 	err = mlx5e_create_direct_rqts(priv);
-	if (err) {
-		mlx5_core_warn(mdev, "create direct rqts failed, %d\n", err);
+	if (err)
 		return err;
-	}
 
 	err = mlx5e_create_direct_tirs(priv);
-	if (err) {
-		mlx5_core_warn(mdev, "create direct tirs failed, %d\n", err);
+	if (err)
 		goto err_destroy_direct_rqts;
-	}
 
 	flow_rule = mlx5_eswitch_create_vport_rx_rule(esw,
 						      rep->vport,
@@ -502,21 +498,18 @@ static int mlx5e_init_rep_rx(struct mlx5e_priv *priv)
 err_destroy_direct_tirs:
 	mlx5e_destroy_direct_tirs(priv);
 err_destroy_direct_rqts:
-	for (i = 0; i < priv->channels.params.num_channels; i++)
-		mlx5e_destroy_rqt(priv, &priv->direct_tir[i].rqt);
+	mlx5e_destroy_direct_rqts(priv);
 	return err;
 }
 
 static void mlx5e_cleanup_rep_rx(struct mlx5e_priv *priv)
 {
 	struct mlx5_eswitch_rep *rep = priv->ppriv;
-	int i;
 
 	mlx5e_tc_cleanup(priv);
 	mlx5_del_flow_rules(rep->vport_rx_rule);
 	mlx5e_destroy_direct_tirs(priv);
-	for (i = 0; i < priv->channels.params.num_channels; i++)
-		mlx5e_destroy_rqt(priv, &priv->direct_tir[i].rqt);
+	mlx5e_destroy_direct_rqts(priv);
 }
 
@@ -545,6 +538,8 @@ static struct mlx5e_profile mlx5e_rep_profile = {
 	.cleanup_tx		= mlx5e_cleanup_nic_tx,
 	.update_stats           = mlx5e_rep_update_stats,
 	.max_nch		= mlx5e_get_rep_max_num_channels,
+	.rx_handlers.handle_rx_cqe       = mlx5e_handle_rx_cqe_rep,
+	.rx_handlers.handle_rx_cqe_mpwqe = NULL /* Not supported */,
 	.max_tc			= 1,
 };
 
@@ -563,7 +558,7 @@ int mlx5e_vport_rep_load(struct mlx5_eswitch *esw,
 
 	rep->netdev = netdev;
 
-	err = mlx5e_attach_netdev(esw->dev, netdev);
+	err = mlx5e_attach_netdev(netdev_priv(netdev));
 	if (err) {
 		pr_warn("Failed to attach representor netdev for vport %d\n",
 			rep->vport);
@@ -580,10 +575,10 @@ int mlx5e_vport_rep_load(struct mlx5_eswitch *esw,
 	return 0;
 
 err_detach_netdev:
-	mlx5e_detach_netdev(esw->dev, netdev);
+	mlx5e_detach_netdev(netdev_priv(netdev));
 
 err_destroy_netdev:
-	mlx5e_destroy_netdev(esw->dev, netdev_priv(netdev));
+	mlx5e_destroy_netdev(netdev_priv(netdev));
 
 	return err;
@@ -595,6 +590,6 @@ void mlx5e_vport_rep_unload(struct mlx5_eswitch *esw,
 	struct net_device *netdev = rep->netdev;
 
 	unregister_netdev(netdev);
-	mlx5e_detach_netdev(esw->dev, netdev);
-	mlx5e_destroy_netdev(esw->dev, netdev_priv(netdev));
+	mlx5e_detach_netdev(netdev_priv(netdev));
+	mlx5e_destroy_netdev(netdev_priv(netdev));
 }
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -1031,3 +1031,81 @@ void mlx5e_free_xdpsq_descs(struct mlx5e_xdpsq *sq)
 			mlx5e_page_release(rq, di, false);
 	}
 }
+
+#ifdef CONFIG_MLX5_CORE_IPOIB
+
+#define MLX5_IB_GRH_DGID_OFFSET 24
+#define MLX5_IB_GRH_BYTES       40
+#define MLX5_IPOIB_ENCAP_LEN    4
+#define MLX5_GID_SIZE           16
+
+static inline void mlx5i_complete_rx_cqe(struct mlx5e_rq *rq,
+					 struct mlx5_cqe64 *cqe,
+					 u32 cqe_bcnt,
+					 struct sk_buff *skb)
+{
+	struct net_device *netdev = rq->netdev;
+	u8 *dgid;
+	u8 g;
+
+	g = (be32_to_cpu(cqe->flags_rqpn) >> 28) & 3;
+	dgid = skb->data + MLX5_IB_GRH_DGID_OFFSET;
+	if ((!g) || dgid[0] != 0xff)
+		skb->pkt_type = PACKET_HOST;
+	else if (memcmp(dgid, netdev->broadcast + 4, MLX5_GID_SIZE) == 0)
+		skb->pkt_type = PACKET_BROADCAST;
+	else
+		skb->pkt_type = PACKET_MULTICAST;
+
+	/* TODO: IB/ipoib: Allow mcast packets from other VFs
+	 * 68996a6e760e5c74654723eeb57bf65628ae87f4
+	 */
+
+	skb_pull(skb, MLX5_IB_GRH_BYTES);
+
+	skb->protocol = *((__be16 *)(skb->data));
+
+	skb->ip_summed = CHECKSUM_COMPLETE;
+	skb->csum = csum_unfold((__force __sum16)cqe->check_sum);
+
+	skb_record_rx_queue(skb, rq->ix);
+
+	if (likely(netdev->features & NETIF_F_RXHASH))
+		mlx5e_skb_set_hash(cqe, skb);
+
+	skb_reset_mac_header(skb);
+	skb_pull(skb, MLX5_IPOIB_ENCAP_LEN);
+
+	skb->dev = netdev;
+
+	rq->stats.csum_complete++;
+	rq->stats.packets++;
+	rq->stats.bytes += cqe_bcnt;
+}
+
+void mlx5i_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
+{
+	struct mlx5e_rx_wqe *wqe;
+	__be16 wqe_counter_be;
+	struct sk_buff *skb;
+	u16 wqe_counter;
+	u32 cqe_bcnt;
+
+	wqe_counter_be = cqe->wqe_counter;
+	wqe_counter    = be16_to_cpu(wqe_counter_be);
+	wqe            = mlx5_wq_ll_get_wqe(&rq->wq, wqe_counter);
+	cqe_bcnt       = be32_to_cpu(cqe->byte_cnt);
+
+	skb = skb_from_cqe(rq, cqe, wqe_counter, cqe_bcnt);
+	if (!skb)
+		goto wq_ll_pop;
+
+	mlx5i_complete_rx_cqe(rq, cqe, cqe_bcnt, skb);
+	napi_gro_receive(rq->cq.napi, skb);
+
+wq_ll_pop:
+	mlx5_wq_ll_pop(&rq->wq, wqe_counter_be,
+		       &wqe->next.next_wqe_index);
+}
+
+#endif /* CONFIG_MLX5_CORE_IPOIB */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
@@ -337,6 +337,7 @@ esw_fdb_set_vport_promisc_rule(struct mlx5_eswitch *esw, u32 vport)
 static int esw_create_legacy_fdb_table(struct mlx5_eswitch *esw, int nvports)
 {
 	int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in);
+	struct mlx5_flow_table_attr ft_attr = {};
 	struct mlx5_core_dev *dev = esw->dev;
 	struct mlx5_flow_namespace *root_ns;
 	struct mlx5_flow_table *fdb;
@@ -362,7 +363,9 @@ static int esw_create_legacy_fdb_table(struct mlx5_eswitch *esw, int nvports)
 	memset(flow_group_in, 0, inlen);
 
 	table_size = BIT(MLX5_CAP_ESW_FLOWTABLE_FDB(dev, log_max_ft_size));
-	fdb = mlx5_create_flow_table(root_ns, 0, table_size, 0, 0);
+
+	ft_attr.max_fte = table_size;
+	fdb = mlx5_create_flow_table(root_ns, &ft_attr);
 	if (IS_ERR(fdb)) {
 		err = PTR_ERR(fdb);
 		esw_warn(dev, "Failed to create FDB Table err %d\n", err);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
@@ -432,6 +432,7 @@ static int esw_add_fdb_miss_rule(struct mlx5_eswitch *esw)
 static int esw_create_offloads_fdb_table(struct mlx5_eswitch *esw, int nvports)
 {
 	int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in);
+	struct mlx5_flow_table_attr ft_attr = {};
 	int table_size, ix, esw_size, err = 0;
 	struct mlx5_core_dev *dev = esw->dev;
 	struct mlx5_flow_namespace *root_ns;
@@ -475,7 +476,11 @@ static int esw_create_offloads_fdb_table(struct mlx5_eswitch *esw, int nvports)
 	esw->fdb_table.fdb = fdb;
 
 	table_size = nvports + MAX_PF_SQ + 1;
-	fdb = mlx5_create_flow_table(root_ns, FDB_SLOW_PATH, table_size, 0, 0);
+
+	ft_attr.max_fte = table_size;
+	ft_attr.prio = FDB_SLOW_PATH;
+
+	fdb = mlx5_create_flow_table(root_ns, &ft_attr);
 	if (IS_ERR(fdb)) {
 		err = PTR_ERR(fdb);
 		esw_warn(dev, "Failed to create slow path FDB Table err %d\n", err);
@@ -556,9 +561,10 @@ static void esw_destroy_offloads_fdb_table(struct mlx5_eswitch *esw)
 
 static int esw_create_offloads_table(struct mlx5_eswitch *esw)
 {
-	struct mlx5_flow_namespace *ns;
-	struct mlx5_flow_table *ft_offloads;
+	struct mlx5_flow_table_attr ft_attr = {};
 	struct mlx5_core_dev *dev = esw->dev;
+	struct mlx5_flow_table *ft_offloads;
+	struct mlx5_flow_namespace *ns;
 	int err = 0;
 
 	ns = mlx5_get_flow_namespace(dev, MLX5_FLOW_NAMESPACE_OFFLOADS);
@@ -567,7 +573,9 @@ static int esw_create_offloads_table(struct mlx5_eswitch *esw)
 		return -EOPNOTSUPP;
 	}
 
-	ft_offloads = mlx5_create_flow_table(ns, 0, dev->priv.sriov.num_vfs + 2, 0, 0);
+	ft_attr.max_fte = dev->priv.sriov.num_vfs + 2;
+
+	ft_offloads = mlx5_create_flow_table(ns, &ft_attr);
 	if (IS_ERR(ft_offloads)) {
 		err = PTR_ERR(ft_offloads);
 		esw_warn(esw->dev, "Failed to create offloads table, err %d\n", err);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c
@@ -45,6 +45,10 @@ int mlx5_cmd_update_root_ft(struct mlx5_core_dev *dev,
 	u32 in[MLX5_ST_SZ_DW(set_flow_table_root_in)]   = {0};
 	u32 out[MLX5_ST_SZ_DW(set_flow_table_root_out)] = {0};
 
+	if ((MLX5_CAP_GEN(dev, port_type) == MLX5_CAP_PORT_TYPE_IB) &&
+	    ft->underlay_qpn == 0)
+		return 0;
+
 	MLX5_SET(set_flow_table_root_in, in, opcode,
 		 MLX5_CMD_OP_SET_FLOW_TABLE_ROOT);
 	MLX5_SET(set_flow_table_root_in, in, table_type, ft->type);
@@ -54,6 +58,10 @@ int mlx5_cmd_update_root_ft(struct mlx5_core_dev *dev,
 		MLX5_SET(set_flow_table_root_in, in, other_vport, 1);
 	}
 
+	if ((MLX5_CAP_GEN(dev, port_type) == MLX5_CAP_PORT_TYPE_IB) &&
+	    ft->underlay_qpn != 0)
+		MLX5_SET(set_flow_table_root_in, in, underlay_qpn, ft->underlay_qpn);
+
 	return mlx5_cmd_exec(dev, in, sizeof(in), out, sizeof(out));
 }
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
@@ -778,18 +778,16 @@ static void list_add_flow_table(struct mlx5_flow_table *ft,
 }
 
 static struct mlx5_flow_table *__mlx5_create_flow_table(struct mlx5_flow_namespace *ns,
+							struct mlx5_flow_table_attr *ft_attr,
 							enum fs_flow_table_op_mod op_mod,
-							u16 vport, int prio,
-							int max_fte, u32 level,
-							u32 flags)
+							u16 vport)
 {
+	struct mlx5_flow_root_namespace *root = find_root(&ns->node);
 	struct mlx5_flow_table *next_ft = NULL;
+	struct fs_prio *fs_prio = NULL;
 	struct mlx5_flow_table *ft;
-	int err;
 	int log_table_sz;
-	struct mlx5_flow_root_namespace *root =
-		find_root(&ns->node);
-	struct fs_prio *fs_prio = NULL;
+	int err;
 
 	if (!root) {
 		pr_err("mlx5: flow steering failed to find root of namespace\n");
@@ -797,29 +795,31 @@ static struct mlx5_flow_table *__mlx5_create_flow_table(struct mlx5_flow_namespa
 	}
 
 	mutex_lock(&root->chain_lock);
-	fs_prio = find_prio(ns, prio);
+	fs_prio = find_prio(ns, ft_attr->prio);
 	if (!fs_prio) {
 		err = -EINVAL;
 		goto unlock_root;
 	}
-	if (level >= fs_prio->num_levels) {
+	if (ft_attr->level >= fs_prio->num_levels) {
 		err = -ENOSPC;
 		goto unlock_root;
 	}
 	/* The level is related to the
 	 * priority level range.
 	 */
-	level += fs_prio->start_level;
-	ft = alloc_flow_table(level,
+	ft_attr->level += fs_prio->start_level;
+	ft = alloc_flow_table(ft_attr->level,
 			      vport,
-			      max_fte ? roundup_pow_of_two(max_fte) : 0,
+			      ft_attr->max_fte ? roundup_pow_of_two(ft_attr->max_fte) : 0,
 			      root->table_type,
-			      op_mod, flags);
+			      op_mod, ft_attr->flags);
 	if (!ft) {
 		err = -ENOMEM;
 		goto unlock_root;
 	}
 
+	ft->underlay_qpn = ft_attr->underlay_qpn;
+
 	tree_init_node(&ft->node, 1, del_flow_table);
 	log_table_sz = ft->max_fte ? ilog2(ft->max_fte) : 0;
 	next_ft = find_next_chained_ft(fs_prio);
@@ -849,44 +849,56 @@ static struct mlx5_flow_table *__mlx5_create_flow_table(struct mlx5_flow_namespa
 }
 
 struct mlx5_flow_table *mlx5_create_flow_table(struct mlx5_flow_namespace *ns,
-					       int prio, int max_fte,
-					       u32 level,
-					       u32 flags)
+					       struct mlx5_flow_table_attr *ft_attr)
 {
-	return __mlx5_create_flow_table(ns, FS_FT_OP_MOD_NORMAL, 0, prio,
-					max_fte, level, flags);
+	return __mlx5_create_flow_table(ns, ft_attr, FS_FT_OP_MOD_NORMAL, 0);
 }
 
 struct mlx5_flow_table *mlx5_create_vport_flow_table(struct mlx5_flow_namespace *ns,
						     int prio, int max_fte,
						     u32 level, u16 vport)
 {
-	return __mlx5_create_flow_table(ns, FS_FT_OP_MOD_NORMAL, vport, prio,
-					max_fte, level, 0);
+	struct mlx5_flow_table_attr ft_attr = {};
+
+	ft_attr.max_fte = max_fte;
+	ft_attr.level   = level;
+	ft_attr.prio    = prio;
+
+	return __mlx5_create_flow_table(ns, &ft_attr, FS_FT_OP_MOD_NORMAL, 0);
 }
 
-struct mlx5_flow_table *mlx5_create_lag_demux_flow_table(
-					       struct mlx5_flow_namespace *ns,
-					       int prio, u32 level)
+struct mlx5_flow_table*
+mlx5_create_lag_demux_flow_table(struct mlx5_flow_namespace *ns,
+				 int prio, u32 level)
 {
-	return __mlx5_create_flow_table(ns, FS_FT_OP_MOD_LAG_DEMUX, 0, prio, 0,
-					level, 0);
+	struct mlx5_flow_table_attr ft_attr = {};
+
+	ft_attr.level = level;
+	ft_attr.prio  = prio;
+	return __mlx5_create_flow_table(ns, &ft_attr, FS_FT_OP_MOD_LAG_DEMUX, 0);
 }
 EXPORT_SYMBOL(mlx5_create_lag_demux_flow_table);
 
-struct mlx5_flow_table *mlx5_create_auto_grouped_flow_table(struct mlx5_flow_namespace *ns,
-							    int prio,
-							    int num_flow_table_entries,
-							    int max_num_groups,
-							    u32 level,
-							    u32 flags)
+struct mlx5_flow_table*
+mlx5_create_auto_grouped_flow_table(struct mlx5_flow_namespace *ns,
+				    int prio,
+				    int num_flow_table_entries,
+				    int max_num_groups,
+				    u32 level,
+				    u32 flags)
 {
+	struct mlx5_flow_table_attr ft_attr = {};
 	struct mlx5_flow_table *ft;
 
 	if (max_num_groups > num_flow_table_entries)
 		return ERR_PTR(-EINVAL);
 
-	ft = mlx5_create_flow_table(ns, prio, num_flow_table_entries, level, flags);
+	ft_attr.max_fte = num_flow_table_entries;
+	ft_attr.prio    = prio;
+	ft_attr.level   = level;
+	ft_attr.flags   = flags;
+
+	ft = mlx5_create_flow_table(ns, &ft_attr);
 	if (IS_ERR(ft))
 		return ft;
@@ -1828,12 +1840,18 @@ static void set_prio_attrs(struct mlx5_flow_root_namespace *root_ns)
 static int create_anchor_flow_table(struct mlx5_flow_steering *steering)
 {
 	struct mlx5_flow_namespace *ns = NULL;
+	struct mlx5_flow_table_attr ft_attr = {};
 	struct mlx5_flow_table *ft;
 
 	ns = mlx5_get_flow_namespace(steering->dev, MLX5_FLOW_NAMESPACE_ANCHOR);
 	if (WARN_ON(!ns))
 		return -EINVAL;
-	ft = mlx5_create_flow_table(ns, ANCHOR_PRIO, ANCHOR_SIZE, ANCHOR_LEVEL, 0);
+
+	ft_attr.max_fte = ANCHOR_SIZE;
+	ft_attr.level   = ANCHOR_LEVEL;
+	ft_attr.prio    = ANCHOR_PRIO;
+
+	ft = mlx5_create_flow_table(ns, &ft_attr);
 	if (IS_ERR(ft)) {
 		mlx5_core_err(steering->dev, "Failed to create last anchor flow table");
 		return PTR_ERR(ft);
@@ -1887,9 +1905,6 @@ void mlx5_cleanup_fs(struct mlx5_core_dev *dev)
 {
 	struct mlx5_flow_steering *steering = dev->priv.steering;
 
-	if (MLX5_CAP_GEN(dev, port_type) != MLX5_CAP_PORT_TYPE_ETH)
-		return;
-
 	cleanup_root_ns(steering->root_ns);
 	cleanup_root_ns(steering->esw_egress_root_ns);
 	cleanup_root_ns(steering->esw_ingress_root_ns);
@@ -1992,9 +2007,6 @@ int mlx5_init_fs(struct mlx5_core_dev *dev)
 	struct mlx5_flow_steering *steering;
 	int err = 0;
 
-	if (MLX5_CAP_GEN(dev, port_type) != MLX5_CAP_PORT_TYPE_ETH)
-		return 0;
-
 	err = mlx5_init_fc_stats(dev);
 	if (err)
 		return err;
@@ -2005,7 +2017,10 @@ int mlx5_init_fs(struct mlx5_core_dev *dev)
 	steering->dev = dev;
 	dev->priv.steering = steering;
 
-	if (MLX5_CAP_GEN(dev, nic_flow_table) &&
+	if ((((MLX5_CAP_GEN(dev, port_type) == MLX5_CAP_PORT_TYPE_ETH) &&
+	      (MLX5_CAP_GEN(dev, nic_flow_table))) ||
+	     ((MLX5_CAP_GEN(dev, port_type) == MLX5_CAP_PORT_TYPE_IB) &&
+	      MLX5_CAP_GEN(dev, ipoib_enhanced_offloads))) &&
 	    MLX5_CAP_FLOWTABLE_NIC_RX(dev, ft_support)) {
 		err = init_root_ns(steering);
 		if (err)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
@@ -118,6 +118,7 @@ struct mlx5_flow_table {
 	/* FWD rules that point on this flow table */
 	struct list_head		fwd_rules;
 	u32				flags;
+	u32				underlay_qpn;
 };
 
 struct mlx5_fc_cache {
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fw.c b/drivers/net/ethernet/mellanox/mlx5/core/fw.c
@@ -137,7 +137,8 @@ int mlx5_query_hca_caps(struct mlx5_core_dev *dev)
 			return err;
 	}
 
-	if (MLX5_CAP_GEN(dev, nic_flow_table)) {
+	if (MLX5_CAP_GEN(dev, nic_flow_table) ||
+	    MLX5_CAP_GEN(dev, ipoib_enhanced_offloads)) {
 		err = mlx5_core_get_caps(dev, MLX5_CAP_FLOW_TABLE);
 		if (err)
 			return err;
[diff collapsed by the viewer: by position in the file list, drivers/net/ethernet/mellanox/mlx5/core/ipoib.c (new file)]
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/ipoib.h b/drivers/net/ethernet/mellanox/mlx5/core/ipoib.h
[new file, shown in full]

/*
* Copyright (c) 2017, Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#ifndef __MLX5E_IPOB_H__
#define __MLX5E_IPOB_H__

#include <linux/mlx5/fs.h>
#include "en.h"

#define MLX5I_MAX_NUM_TC 1

/* ipoib rdma netdev's private data structure */
struct mlx5i_priv {
	struct mlx5_core_qp qp;
	char *mlx5e_priv[0];
};

/* Extract mlx5e_priv from IPoIB netdev */
#define mlx5i_epriv(netdev) ((void *)(((struct mlx5i_priv *)netdev_priv(netdev))->mlx5e_priv))

netdev_tx_t mlx5i_sq_xmit(struct mlx5e_txqsq *sq, struct sk_buff *skb,
			  struct mlx5_av *av, u32 dqpn, u32 dqkey);
void mlx5i_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe);

#endif /* __MLX5E_IPOB_H__ */
diff --git a/include/linux/mlx5/fs.h b/include/linux/mlx5/fs.h
@@ -104,12 +104,18 @@ mlx5_create_auto_grouped_flow_table(struct mlx5_flow_namespace *ns,
 				    u32 level,
 				    u32 flags);
 
+struct mlx5_flow_table_attr {
+	int prio;
+	int max_fte;
+	u32 level;
+	u32 flags;
+	u32 underlay_qpn;
+};
+
 struct mlx5_flow_table *
 mlx5_create_flow_table(struct mlx5_flow_namespace *ns,
-		       int prio,
-		       int num_flow_table_entries,
-		       u32 level,
-		       u32 flags);
+		       struct mlx5_flow_table_attr *ft_attr);
+
 struct mlx5_flow_table *
 mlx5_create_vport_flow_table(struct mlx5_flow_namespace *ns,
 			     int prio,
diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
@@ -872,7 +872,8 @@ struct mlx5_ifc_cmd_hca_cap_bits {
 	u8         compact_address_vector[0x1];
 	u8         striding_rq[0x1];
-	u8         reserved_at_202[0x2];
+	u8         reserved_at_202[0x1];
+	u8         ipoib_enhanced_offloads[0x1];
 	u8         ipoib_basic_offloads[0x1];
 	u8         reserved_at_205[0xa];
 	u8         drain_sigerr[0x1];
@@ -2293,7 +2294,9 @@ struct mlx5_ifc_tisc_bits {
 	u8         reserved_at_120[0x8];
 	u8         transport_domain[0x18];
 
-	u8         reserved_at_140[0x3c0];
+	u8         reserved_at_140[0x8];
+	u8         underlay_qpn[0x18];
+	u8         reserved_at_160[0x3a0];
 };
 
 enum {
@@ -8218,7 +8221,9 @@ struct mlx5_ifc_set_flow_table_root_in_bits {
 	u8         reserved_at_a0[0x8];
 	u8         table_id[0x18];
 
-	u8         reserved_at_c0[0x140];
+	u8         reserved_at_c0[0x8];
+	u8         underlay_qpn[0x18];
+	u8         reserved_at_e0[0x120];
 };
 
 enum {
diff --git a/include/linux/mlx5/qp.h b/include/linux/mlx5/qp.h
@@ -295,6 +295,16 @@ struct mlx5_av {
 	u8	rgid[16];
 };
 
+struct mlx5_ib_ah {
+	struct ib_ah		ibah;
+	struct mlx5_av		av;
+};
+
+static inline struct mlx5_ib_ah *to_mah(struct ib_ah *ibah)
+{
+	return container_of(ibah, struct mlx5_ib_ah, ibah);
+}
+
 struct mlx5_wqe_datagram_seg {
 	struct mlx5_av	av;
 };