Commit eda1bc65 authored by David S. Miller

Merge branch 'QED-NVMeTCP-Offload'

Shai Malin says:

====================
QED NVMeTCP Offload

Intro:
======
This is the qed part of Marvell’s NVMeTCP offload series, shared as the
RFC series "NVMeTCP Offload ULP and QEDN Device Driver".
This part is a standalone series, and is not dependent on other parts
of the RFC.
The overall goal is to add qedn as the offload driver for NVMeTCP,
alongside the existing offload drivers (qedr, qedi and qedf for rdma,
iscsi and fcoe respectively).

In this series we are making the necessary changes to qed to enable this
by exposing APIs for FW/HW initializations.

The qedn series (and required changes to NVMe stack) will be sent to the
linux-nvme mailing list.
I have included more details on the upstream plan in the section of the
same name below.

The Series Patches:
===================
1. qed: Add TCP_ULP FW resource layout – replacing the iSCSI-specific
   layout with a TCP_ULP layout shared with NVMeTCP.
2. qed: Add NVMeTCP Offload PF Level FW and HW HSI.
3. qed: Add NVMeTCP Offload Connection Level FW and HW HSI.
4. qed: Add support of HW filter block – enables redirecting NVMeTCP
   traffic to the dedicated PF.
5. qed: Add NVMeTCP Offload IO Level FW and HW HSI.
6. qed: Add NVMeTCP Offload IO Level FW Initializations.
7. qed: Add IP services APIs support – VLAN, IP routing and reserving
   TCP ports for the offload device.

The NVMeTCP Offload:
====================
With the goal of enabling a generic infrastructure that allows NVMe/TCP
offload devices like NICs to seamlessly plug into the NVMe-oF stack, this
patch series introduces the nvme-tcp-offload ULP host layer, which will
be a new transport type called "tcp-offload" and will serve as an
abstraction layer to work with vendor specific nvme-tcp offload drivers.

NVMeTCP offload is a full offload of the NVMeTCP protocol; this includes
both the TCP level and the NVMeTCP level.

The nvme-tcp-offload transport can co-exist with the existing tcp and
other transports. The tcp offload was designed so that stack changes are
kept to a bare minimum: only registering new transports.
All other APIs, ops etc. are identical to the regular tcp transport.
Representing the TCP offload as a new transport allows clear and manageable
differentiation between connections that should use the offload path
and those that are not offloaded (even on the same device).

The nvme-tcp-offload layers and API compared to nvme-tcp and nvme-rdma:

* NVMe layer: *

       [ nvme/nvme-fabrics/blk-mq ]
             |
        (nvme API and blk-mq API)
             |
             |
* Vendor agnostic transport layer: *

      [ nvme-rdma ] [ nvme-tcp ] [ nvme-tcp-offload ]
             |        |             |
           (Verbs)
             |        |             |
             |     (Socket)
             |        |             |
             |        |        (nvme-tcp-offload API)
             |        |             |
             |        |             |
* Vendor Specific Driver: *

             |        |             |
           [ qedr ]
                      |             |
                   [ qede ]
                                    |
                                  [ qedn ]

Performance:
============
With this implementation on top of the Marvell qedn driver (using the
Marvell FastLinQ NIC), we were able to demonstrate the following CPU
utilization improvement:

On AMD EPYC 7402, 2.80GHz, 28 cores:
- For 16K queued read IOs, 16 jobs, 4 qd (50Gbps line rate):
  Improved the CPU utilization from 15.1% with NVMeTCP SW to 4.7% with
  NVMeTCP offload.

On Intel(R) Xeon(R) Gold 5122 CPU, 3.60GHz, 16 cores:
- For 512K queued read IOs, 16 jobs, 4 qd (25Gbps line rate):
  Improved the CPU utilization from 16.3% with NVMeTCP SW to 1.1% with
  NVMeTCP offload.

In addition, we were able to demonstrate the following latency improvement:
- For 200K read IOPS (16 jobs, 16 qd, with fio rate limiter):
  Improved the average latency from 105 usec with NVMeTCP SW to 39 usec
  with NVMeTCP offload.

  Improved the 99.99% tail latency from 570 usec with NVMeTCP SW to 91 usec
  with NVMeTCP offload.

The end-to-end offload latency was measured from fio while running against
a null-device back end.

The Marvell FastLinQ NIC HW engine:
====================================
The Marvell NIC HW engine is capable of offloading the entire TCP/IP
stack and managing up to 64K connections per PF. Already-implemented and
upstream use cases for this include iWARP (by the Marvell qedr driver)
and iSCSI (by the Marvell qedi driver).
In addition, the Marvell NIC HW engine offloads the NVMeTCP queue layer
and is able to manage the IO level even in the case of TCP
re-transmissions and out-of-order (OOO) events.
The HW engine enables direct data placement (including the data digest CRC
calculation and validation) and direct data transmission (including data
digest CRC calculation).

The Marvell qedn driver:
========================
The new driver will be added under "drivers/nvme/hw" and will be enabled
by the Kconfig option "Marvell NVM Express over Fabrics TCP offload".
As part of the qedn init, the driver will register as a PCI device driver
and will work with the Marvell FastLinQ NIC.
As part of the probe, the driver will register to the nvme_tcp_offload
(ULP) and to the qed module (qed_nvmetcp_ops) - similar to other
"qed_*_ops" which are used by the qede, qedr, qedf and qedi device
drivers.

Upstream Plan:
==============
The RFC series "NVMeTCP Offload ULP and QEDN Device Driver"
https://lore.kernel.org/netdev/20210531225222.16992-1-smalin@marvell.com/
was designed in a modular way so that part 1 (nvme-tcp-offload) and
part 2 (qed) are independent and part 3 (qedn) depends on both parts 1+2.

- Part 1 (RFC patches 1-8): NVMeTCP Offload ULP
  The nvme-tcp-offload patches will be sent to
  'linux-nvme@lists.infradead.org'.

- Part 2 (RFC patches 9-15): QED NVMeTCP Offload
  The qed infrastructure patches will be sent to 'netdev@vger.kernel.org'.

Once part 1 and 2 are accepted:

- Part 3 (RFC patches 16-27): QEDN NVMeTCP Offload
  The qedn patches will be sent to 'linux-nvme@lists.infradead.org'.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
parents 2c95e6c7 806ee7f8
......@@ -110,6 +110,9 @@ config QED_RDMA
config QED_ISCSI
bool
config QED_NVMETCP
bool
config QED_FCOE
bool
......
......@@ -28,6 +28,11 @@ qed-$(CONFIG_QED_ISCSI) += qed_iscsi.o
qed-$(CONFIG_QED_LL2) += qed_ll2.o
qed-$(CONFIG_QED_OOO) += qed_ooo.o
qed-$(CONFIG_QED_NVMETCP) += \
qed_nvmetcp.o \
qed_nvmetcp_fw_funcs.o \
qed_nvmetcp_ip_services.o
qed-$(CONFIG_QED_RDMA) += \
qed_iwarp.o \
qed_rdma.o \
......
......@@ -49,6 +49,8 @@ extern const struct qed_common_ops qed_common_ops_pass;
#define QED_MIN_WIDS (4)
#define QED_PF_DEMS_SIZE (4)
#define QED_LLH_DONT_CARE 0
/* cau states */
enum qed_coalescing_mode {
QED_COAL_MODE_DISABLE,
......@@ -200,6 +202,7 @@ enum qed_pci_personality {
QED_PCI_ETH,
QED_PCI_FCOE,
QED_PCI_ISCSI,
QED_PCI_NVMETCP,
QED_PCI_ETH_ROCE,
QED_PCI_ETH_IWARP,
QED_PCI_ETH_RDMA,
......@@ -239,6 +242,7 @@ enum QED_FEATURE {
QED_PF_L2_QUE,
QED_VF,
QED_RDMA_CNQ,
QED_NVMETCP_CQ,
QED_ISCSI_CQ,
QED_FCOE_CQ,
QED_VF_L2_QUE,
......@@ -284,6 +288,8 @@ struct qed_hw_info {
((dev)->hw_info.personality == QED_PCI_FCOE)
#define QED_IS_ISCSI_PERSONALITY(dev) \
((dev)->hw_info.personality == QED_PCI_ISCSI)
#define QED_IS_NVMETCP_PERSONALITY(dev) \
((dev)->hw_info.personality == QED_PCI_NVMETCP)
/* Resource Allocation scheme results */
u32 resc_start[QED_MAX_RESC];
......@@ -592,6 +598,7 @@ struct qed_hwfn {
struct qed_ooo_info *p_ooo_info;
struct qed_rdma_info *p_rdma_info;
struct qed_iscsi_info *p_iscsi_info;
struct qed_nvmetcp_info *p_nvmetcp_info;
struct qed_fcoe_info *p_fcoe_info;
struct qed_pf_params pf_params;
......@@ -828,6 +835,7 @@ struct qed_dev {
struct qed_eth_cb_ops *eth;
struct qed_fcoe_cb_ops *fcoe;
struct qed_iscsi_cb_ops *iscsi;
struct qed_nvmetcp_cb_ops *nvmetcp;
} protocol_ops;
void *ops_cookie;
......@@ -999,4 +1007,10 @@ int qed_mfw_fill_tlv_data(struct qed_hwfn *hwfn,
void qed_hw_info_set_offload_tc(struct qed_hw_info *p_info, u8 tc);
void qed_periodic_db_rec_start(struct qed_hwfn *p_hwfn);
int qed_llh_add_src_tcp_port_filter(struct qed_dev *cdev, u16 src_port);
int qed_llh_add_dst_tcp_port_filter(struct qed_dev *cdev, u16 dest_port);
void qed_llh_remove_src_tcp_port_filter(struct qed_dev *cdev, u16 src_port);
void qed_llh_remove_dst_tcp_port_filter(struct qed_dev *cdev, u16 src_port);
void qed_llh_clear_all_filters(struct qed_dev *cdev);
#endif /* _QED_H */
......@@ -94,14 +94,14 @@ struct src_ent {
static bool src_proto(enum protocol_type type)
{
return type == PROTOCOLID_ISCSI ||
return type == PROTOCOLID_TCP_ULP ||
type == PROTOCOLID_FCOE ||
type == PROTOCOLID_IWARP;
}
static bool tm_cid_proto(enum protocol_type type)
{
return type == PROTOCOLID_ISCSI ||
return type == PROTOCOLID_TCP_ULP ||
type == PROTOCOLID_FCOE ||
type == PROTOCOLID_ROCE ||
type == PROTOCOLID_IWARP;
......@@ -2072,7 +2072,6 @@ int qed_cxt_set_pf_params(struct qed_hwfn *p_hwfn, u32 rdma_tasks)
PROTOCOLID_FCOE,
p_params->num_cons,
0);
qed_cxt_set_proto_tid_count(p_hwfn, PROTOCOLID_FCOE,
QED_CXT_FCOE_TID_SEG, 0,
p_params->num_tasks, true);
......@@ -2090,13 +2089,12 @@ int qed_cxt_set_pf_params(struct qed_hwfn *p_hwfn, u32 rdma_tasks)
if (p_params->num_cons && p_params->num_tasks) {
qed_cxt_set_proto_cid_count(p_hwfn,
PROTOCOLID_ISCSI,
PROTOCOLID_TCP_ULP,
p_params->num_cons,
0);
qed_cxt_set_proto_tid_count(p_hwfn,
PROTOCOLID_ISCSI,
QED_CXT_ISCSI_TID_SEG,
PROTOCOLID_TCP_ULP,
QED_CXT_TCP_ULP_TID_SEG,
0,
p_params->num_tasks,
true);
......@@ -2106,6 +2104,29 @@ int qed_cxt_set_pf_params(struct qed_hwfn *p_hwfn, u32 rdma_tasks)
}
break;
}
case QED_PCI_NVMETCP:
{
struct qed_nvmetcp_pf_params *p_params;
p_params = &p_hwfn->pf_params.nvmetcp_pf_params;
if (p_params->num_cons && p_params->num_tasks) {
qed_cxt_set_proto_cid_count(p_hwfn,
PROTOCOLID_TCP_ULP,
p_params->num_cons,
0);
qed_cxt_set_proto_tid_count(p_hwfn,
PROTOCOLID_TCP_ULP,
QED_CXT_TCP_ULP_TID_SEG,
0,
p_params->num_tasks,
true);
} else {
DP_INFO(p_hwfn->cdev,
"NvmeTCP personality used without setting params!\n");
}
break;
}
default:
return -EINVAL;
}
......@@ -2129,8 +2150,9 @@ int qed_cxt_get_tid_mem_info(struct qed_hwfn *p_hwfn,
seg = QED_CXT_FCOE_TID_SEG;
break;
case QED_PCI_ISCSI:
proto = PROTOCOLID_ISCSI;
seg = QED_CXT_ISCSI_TID_SEG;
case QED_PCI_NVMETCP:
proto = PROTOCOLID_TCP_ULP;
seg = QED_CXT_TCP_ULP_TID_SEG;
break;
default:
return -EINVAL;
......@@ -2455,8 +2477,9 @@ int qed_cxt_get_task_ctx(struct qed_hwfn *p_hwfn,
seg = QED_CXT_FCOE_TID_SEG;
break;
case QED_PCI_ISCSI:
proto = PROTOCOLID_ISCSI;
seg = QED_CXT_ISCSI_TID_SEG;
case QED_PCI_NVMETCP:
proto = PROTOCOLID_TCP_ULP;
seg = QED_CXT_TCP_ULP_TID_SEG;
break;
default:
return -EINVAL;
......
......@@ -50,7 +50,7 @@ int qed_cxt_get_cid_info(struct qed_hwfn *p_hwfn,
int qed_cxt_get_tid_mem_info(struct qed_hwfn *p_hwfn,
struct qed_tid_mem *p_info);
#define QED_CXT_ISCSI_TID_SEG PROTOCOLID_ISCSI
#define QED_CXT_TCP_ULP_TID_SEG PROTOCOLID_TCP_ULP
#define QED_CXT_ROCE_TID_SEG PROTOCOLID_ROCE
#define QED_CXT_FCOE_TID_SEG PROTOCOLID_FCOE
enum qed_cxt_elem_type {
......
......@@ -37,6 +37,7 @@
#include "qed_sriov.h"
#include "qed_vf.h"
#include "qed_rdma.h"
#include "qed_nvmetcp.h"
static DEFINE_SPINLOCK(qm_lock);
......@@ -667,7 +668,8 @@ qed_llh_set_engine_affin(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
}
/* Storage PF is bound to a single engine while L2 PF uses both */
if (QED_IS_FCOE_PERSONALITY(p_hwfn) || QED_IS_ISCSI_PERSONALITY(p_hwfn))
if (QED_IS_FCOE_PERSONALITY(p_hwfn) || QED_IS_ISCSI_PERSONALITY(p_hwfn) ||
QED_IS_NVMETCP_PERSONALITY(p_hwfn))
eng = cdev->fir_affin ? QED_ENG1 : QED_ENG0;
else /* L2_PERSONALITY */
eng = QED_BOTH_ENG;
......@@ -1164,6 +1166,9 @@ void qed_llh_remove_mac_filter(struct qed_dev *cdev,
if (!test_bit(QED_MF_LLH_MAC_CLSS, &cdev->mf_bits))
goto out;
if (QED_IS_NVMETCP_PERSONALITY(p_hwfn))
return;
ether_addr_copy(filter.mac.addr, mac_addr);
rc = qed_llh_shadow_remove_filter(cdev, ppfid, &filter, &filter_idx,
&ref_cnt);
......@@ -1381,6 +1386,11 @@ void qed_resc_free(struct qed_dev *cdev)
qed_ooo_free(p_hwfn);
}
if (p_hwfn->hw_info.personality == QED_PCI_NVMETCP) {
qed_nvmetcp_free(p_hwfn);
qed_ooo_free(p_hwfn);
}
if (QED_IS_RDMA_PERSONALITY(p_hwfn) && rdma_info) {
qed_spq_unregister_async_cb(p_hwfn, rdma_info->proto);
qed_rdma_info_free(p_hwfn);
......@@ -1423,6 +1433,7 @@ static u32 qed_get_pq_flags(struct qed_hwfn *p_hwfn)
flags |= PQ_FLAGS_OFLD;
break;
case QED_PCI_ISCSI:
case QED_PCI_NVMETCP:
flags |= PQ_FLAGS_ACK | PQ_FLAGS_OOO | PQ_FLAGS_OFLD;
break;
case QED_PCI_ETH_ROCE:
......@@ -2263,10 +2274,11 @@ int qed_resc_alloc(struct qed_dev *cdev)
* at the same time
*/
n_eqes += num_cons + 2 * MAX_NUM_VFS_BB + n_srq;
} else if (p_hwfn->hw_info.personality == QED_PCI_ISCSI) {
} else if (p_hwfn->hw_info.personality == QED_PCI_ISCSI ||
p_hwfn->hw_info.personality == QED_PCI_NVMETCP) {
num_cons =
qed_cxt_get_proto_cid_count(p_hwfn,
PROTOCOLID_ISCSI,
PROTOCOLID_TCP_ULP,
NULL);
n_eqes += 2 * num_cons;
}
......@@ -2313,6 +2325,15 @@ int qed_resc_alloc(struct qed_dev *cdev)
goto alloc_err;
}
if (p_hwfn->hw_info.personality == QED_PCI_NVMETCP) {
rc = qed_nvmetcp_alloc(p_hwfn);
if (rc)
goto alloc_err;
rc = qed_ooo_alloc(p_hwfn);
if (rc)
goto alloc_err;
}
if (QED_IS_RDMA_PERSONALITY(p_hwfn)) {
rc = qed_rdma_info_alloc(p_hwfn);
if (rc)
......@@ -2393,6 +2414,11 @@ void qed_resc_setup(struct qed_dev *cdev)
qed_iscsi_setup(p_hwfn);
qed_ooo_setup(p_hwfn);
}
if (p_hwfn->hw_info.personality == QED_PCI_NVMETCP) {
qed_nvmetcp_setup(p_hwfn);
qed_ooo_setup(p_hwfn);
}
}
}
......@@ -2854,7 +2880,8 @@ static int qed_hw_init_pf(struct qed_hwfn *p_hwfn,
/* Protocol Configuration */
STORE_RT_REG(p_hwfn, PRS_REG_SEARCH_TCP_RT_OFFSET,
(p_hwfn->hw_info.personality == QED_PCI_ISCSI) ? 1 : 0);
((p_hwfn->hw_info.personality == QED_PCI_ISCSI) ||
(p_hwfn->hw_info.personality == QED_PCI_NVMETCP)) ? 1 : 0);
STORE_RT_REG(p_hwfn, PRS_REG_SEARCH_FCOE_RT_OFFSET,
(p_hwfn->hw_info.personality == QED_PCI_FCOE) ? 1 : 0);
STORE_RT_REG(p_hwfn, PRS_REG_SEARCH_ROCE_RT_OFFSET, 0);
......@@ -3535,14 +3562,21 @@ static void qed_hw_set_feat(struct qed_hwfn *p_hwfn)
feat_num[QED_ISCSI_CQ] = min_t(u32, sb_cnt.cnt,
RESC_NUM(p_hwfn,
QED_CMDQS_CQS));
if (QED_IS_NVMETCP_PERSONALITY(p_hwfn))
feat_num[QED_NVMETCP_CQ] = min_t(u32, sb_cnt.cnt,
RESC_NUM(p_hwfn,
QED_CMDQS_CQS));
DP_VERBOSE(p_hwfn,
NETIF_MSG_PROBE,
"#PF_L2_QUEUES=%d VF_L2_QUEUES=%d #ROCE_CNQ=%d FCOE_CQ=%d ISCSI_CQ=%d #SBS=%d\n",
"#PF_L2_QUEUES=%d VF_L2_QUEUES=%d #ROCE_CNQ=%d FCOE_CQ=%d ISCSI_CQ=%d NVMETCP_CQ=%d #SBS=%d\n",
(int)FEAT_NUM(p_hwfn, QED_PF_L2_QUE),
(int)FEAT_NUM(p_hwfn, QED_VF_L2_QUE),
(int)FEAT_NUM(p_hwfn, QED_RDMA_CNQ),
(int)FEAT_NUM(p_hwfn, QED_FCOE_CQ),
(int)FEAT_NUM(p_hwfn, QED_ISCSI_CQ),
(int)FEAT_NUM(p_hwfn, QED_NVMETCP_CQ),
(int)sb_cnt.cnt);
}
......@@ -3734,7 +3768,8 @@ int qed_hw_get_dflt_resc(struct qed_hwfn *p_hwfn,
break;
case QED_BDQ:
if (p_hwfn->hw_info.personality != QED_PCI_ISCSI &&
p_hwfn->hw_info.personality != QED_PCI_FCOE)
p_hwfn->hw_info.personality != QED_PCI_FCOE &&
p_hwfn->hw_info.personality != QED_PCI_NVMETCP)
*p_resc_num = 0;
else
*p_resc_num = 1;
......@@ -3755,7 +3790,8 @@ int qed_hw_get_dflt_resc(struct qed_hwfn *p_hwfn,
*p_resc_start = 0;
else if (p_hwfn->cdev->num_ports_in_engine == 4)
*p_resc_start = p_hwfn->port_id;
else if (p_hwfn->hw_info.personality == QED_PCI_ISCSI)
else if (p_hwfn->hw_info.personality == QED_PCI_ISCSI ||
p_hwfn->hw_info.personality == QED_PCI_NVMETCP)
*p_resc_start = p_hwfn->port_id;
else if (p_hwfn->hw_info.personality == QED_PCI_FCOE)
*p_resc_start = p_hwfn->port_id + 2;
......@@ -5326,3 +5362,93 @@ void qed_set_fw_mac_addr(__le16 *fw_msb,
((u8 *)fw_lsb)[0] = mac[5];
((u8 *)fw_lsb)[1] = mac[4];
}
static int qed_llh_shadow_remove_all_filters(struct qed_dev *cdev, u8 ppfid)
{
struct qed_llh_info *p_llh_info = cdev->p_llh_info;
struct qed_llh_filter_info *p_filters;
int rc;
rc = qed_llh_shadow_sanity(cdev, ppfid, 0, "remove_all");
if (rc)
return rc;
p_filters = p_llh_info->pp_filters[ppfid];
memset(p_filters, 0, NIG_REG_LLH_FUNC_FILTER_EN_SIZE *
sizeof(*p_filters));
return 0;
}
static void qed_llh_clear_ppfid_filters(struct qed_dev *cdev, u8 ppfid)
{
struct qed_hwfn *p_hwfn = QED_LEADING_HWFN(cdev);
struct qed_ptt *p_ptt = qed_ptt_acquire(p_hwfn);
u8 filter_idx, abs_ppfid;
int rc = 0;
if (!p_ptt)
return;
if (!test_bit(QED_MF_LLH_PROTO_CLSS, &cdev->mf_bits) &&
!test_bit(QED_MF_LLH_MAC_CLSS, &cdev->mf_bits))
goto out;
rc = qed_llh_abs_ppfid(cdev, ppfid, &abs_ppfid);
if (rc)
goto out;
rc = qed_llh_shadow_remove_all_filters(cdev, ppfid);
if (rc)
goto out;
for (filter_idx = 0; filter_idx < NIG_REG_LLH_FUNC_FILTER_EN_SIZE;
filter_idx++) {
rc = qed_llh_remove_filter(p_hwfn, p_ptt,
abs_ppfid, filter_idx);
if (rc)
goto out;
}
out:
qed_ptt_release(p_hwfn, p_ptt);
}
int qed_llh_add_src_tcp_port_filter(struct qed_dev *cdev, u16 src_port)
{
return qed_llh_add_protocol_filter(cdev, 0,
QED_LLH_FILTER_TCP_SRC_PORT,
src_port, QED_LLH_DONT_CARE);
}
void qed_llh_remove_src_tcp_port_filter(struct qed_dev *cdev, u16 src_port)
{
qed_llh_remove_protocol_filter(cdev, 0,
QED_LLH_FILTER_TCP_SRC_PORT,
src_port, QED_LLH_DONT_CARE);
}
int qed_llh_add_dst_tcp_port_filter(struct qed_dev *cdev, u16 dest_port)
{
return qed_llh_add_protocol_filter(cdev, 0,
QED_LLH_FILTER_TCP_DEST_PORT,
QED_LLH_DONT_CARE, dest_port);
}
void qed_llh_remove_dst_tcp_port_filter(struct qed_dev *cdev, u16 dest_port)
{
qed_llh_remove_protocol_filter(cdev, 0,
QED_LLH_FILTER_TCP_DEST_PORT,
QED_LLH_DONT_CARE, dest_port);
}
void qed_llh_clear_all_filters(struct qed_dev *cdev)
{
u8 ppfid;
if (!test_bit(QED_MF_LLH_PROTO_CLSS, &cdev->mf_bits) &&
!test_bit(QED_MF_LLH_MAC_CLSS, &cdev->mf_bits))
return;
for (ppfid = 0; ppfid < cdev->p_llh_info->num_ppfid; ppfid++)
qed_llh_clear_ppfid_filters(cdev, ppfid);
}
......@@ -20,6 +20,7 @@
#include <linux/qed/fcoe_common.h>
#include <linux/qed/eth_common.h>
#include <linux/qed/iscsi_common.h>
#include <linux/qed/nvmetcp_common.h>
#include <linux/qed/iwarp_common.h>
#include <linux/qed/rdma_common.h>
#include <linux/qed/roce_common.h>
......@@ -1118,7 +1119,7 @@ struct outer_tag_config_struct {
/* personality per PF */
enum personality_type {
BAD_PERSONALITY_TYP,
PERSONALITY_ISCSI,
PERSONALITY_TCP_ULP,
PERSONALITY_FCOE,
PERSONALITY_RDMA_AND_ETH,
PERSONALITY_RDMA,
......@@ -12147,7 +12148,8 @@ struct public_func {
#define FUNC_MF_CFG_PROTOCOL_ISCSI 0x00000010
#define FUNC_MF_CFG_PROTOCOL_FCOE 0x00000020
#define FUNC_MF_CFG_PROTOCOL_ROCE 0x00000030
#define FUNC_MF_CFG_PROTOCOL_MAX 0x00000030
#define FUNC_MF_CFG_PROTOCOL_NVMETCP 0x00000040
#define FUNC_MF_CFG_PROTOCOL_MAX 0x00000040
#define FUNC_MF_CFG_MIN_BW_MASK 0x0000ff00
#define FUNC_MF_CFG_MIN_BW_SHIFT 8
......
......@@ -158,7 +158,7 @@ qed_sp_iscsi_func_start(struct qed_hwfn *p_hwfn,
rc = qed_sp_init_request(p_hwfn, &p_ent,
ISCSI_RAMROD_CMD_ID_INIT_FUNC,
PROTOCOLID_ISCSI, &init_data);
PROTOCOLID_TCP_ULP, &init_data);
if (rc)
return rc;
......@@ -250,7 +250,7 @@ qed_sp_iscsi_func_start(struct qed_hwfn *p_hwfn,
p_hwfn->p_iscsi_info->event_context = event_context;
p_hwfn->p_iscsi_info->event_cb = async_event_cb;
qed_spq_register_async_cb(p_hwfn, PROTOCOLID_ISCSI,
qed_spq_register_async_cb(p_hwfn, PROTOCOLID_TCP_ULP,
qed_iscsi_async_event);
return qed_spq_post(p_hwfn, p_ent, NULL);
......@@ -286,7 +286,7 @@ static int qed_sp_iscsi_conn_offload(struct qed_hwfn *p_hwfn,
rc = qed_sp_init_request(p_hwfn, &p_ent,
ISCSI_RAMROD_CMD_ID_OFFLOAD_CONN,
PROTOCOLID_ISCSI, &init_data);
PROTOCOLID_TCP_ULP, &init_data);
if (rc)
return rc;
......@@ -465,7 +465,7 @@ static int qed_sp_iscsi_conn_update(struct qed_hwfn *p_hwfn,
rc = qed_sp_init_request(p_hwfn, &p_ent,
ISCSI_RAMROD_CMD_ID_UPDATE_CONN,
PROTOCOLID_ISCSI, &init_data);
PROTOCOLID_TCP_ULP, &init_data);
if (rc)
return rc;
......@@ -506,7 +506,7 @@ qed_sp_iscsi_mac_update(struct qed_hwfn *p_hwfn,
rc = qed_sp_init_request(p_hwfn, &p_ent,
ISCSI_RAMROD_CMD_ID_MAC_UPDATE,
PROTOCOLID_ISCSI, &init_data);
PROTOCOLID_TCP_ULP, &init_data);
if (rc)
return rc;
......@@ -548,7 +548,7 @@ static int qed_sp_iscsi_conn_terminate(struct qed_hwfn *p_hwfn,
rc = qed_sp_init_request(p_hwfn, &p_ent,
ISCSI_RAMROD_CMD_ID_TERMINATION_CONN,
PROTOCOLID_ISCSI, &init_data);
PROTOCOLID_TCP_ULP, &init_data);
if (rc)
return rc;
......@@ -582,7 +582,7 @@ static int qed_sp_iscsi_conn_clear_sq(struct qed_hwfn *p_hwfn,
rc = qed_sp_init_request(p_hwfn, &p_ent,
ISCSI_RAMROD_CMD_ID_CLEAR_SQ,
PROTOCOLID_ISCSI, &init_data);
PROTOCOLID_TCP_ULP, &init_data);
if (rc)
return rc;
......@@ -606,13 +606,13 @@ static int qed_sp_iscsi_func_stop(struct qed_hwfn *p_hwfn,
rc = qed_sp_init_request(p_hwfn, &p_ent,
ISCSI_RAMROD_CMD_ID_DESTROY_FUNC,
PROTOCOLID_ISCSI, &init_data);
PROTOCOLID_TCP_ULP, &init_data);
if (rc)
return rc;
rc = qed_spq_post(p_hwfn, p_ent, NULL);
qed_spq_unregister_async_cb(p_hwfn, PROTOCOLID_ISCSI);
qed_spq_unregister_async_cb(p_hwfn, PROTOCOLID_TCP_ULP);
return rc;
}
......@@ -786,7 +786,7 @@ static int qed_iscsi_acquire_connection(struct qed_hwfn *p_hwfn,
u32 icid;
spin_lock_bh(&p_hwfn->p_iscsi_info->lock);
rc = qed_cxt_acquire_cid(p_hwfn, PROTOCOLID_ISCSI, &icid);
rc = qed_cxt_acquire_cid(p_hwfn, PROTOCOLID_TCP_ULP, &icid);
spin_unlock_bh(&p_hwfn->p_iscsi_info->lock);
if (rc)
return rc;
......
......@@ -960,7 +960,8 @@ static int qed_sp_ll2_rx_queue_start(struct qed_hwfn *p_hwfn,
if (test_bit(QED_MF_LL2_NON_UNICAST, &p_hwfn->cdev->mf_bits) &&
p_ramrod->main_func_queue && conn_type != QED_LL2_TYPE_ROCE &&
conn_type != QED_LL2_TYPE_IWARP) {
conn_type != QED_LL2_TYPE_IWARP &&
(!QED_IS_NVMETCP_PERSONALITY(p_hwfn))) {
p_ramrod->mf_si_bcast_accept_all = 1;
p_ramrod->mf_si_mcast_accept_all = 1;
} else {
......@@ -1037,8 +1038,8 @@ static int qed_sp_ll2_tx_queue_start(struct qed_hwfn *p_hwfn,
case QED_LL2_TYPE_FCOE:
p_ramrod->conn_type = PROTOCOLID_FCOE;
break;
case QED_LL2_TYPE_ISCSI:
p_ramrod->conn_type = PROTOCOLID_ISCSI;
case QED_LL2_TYPE_TCP_ULP:
p_ramrod->conn_type = PROTOCOLID_TCP_ULP;
break;
case QED_LL2_TYPE_ROCE:
p_ramrod->conn_type = PROTOCOLID_ROCE;
......@@ -1047,8 +1048,9 @@ static int qed_sp_ll2_tx_queue_start(struct qed_hwfn *p_hwfn,
p_ramrod->conn_type = PROTOCOLID_IWARP;
break;
case QED_LL2_TYPE_OOO:
if (p_hwfn->hw_info.personality == QED_PCI_ISCSI)
p_ramrod->conn_type = PROTOCOLID_ISCSI;
if (p_hwfn->hw_info.personality == QED_PCI_ISCSI ||
p_hwfn->hw_info.personality == QED_PCI_NVMETCP)
p_ramrod->conn_type = PROTOCOLID_TCP_ULP;
else
p_ramrod->conn_type = PROTOCOLID_IWARP;
break;
......@@ -1634,7 +1636,8 @@ int qed_ll2_establish_connection(void *cxt, u8 connection_handle)
if (rc)
goto out;
if (!QED_IS_RDMA_PERSONALITY(p_hwfn))
if (!QED_IS_RDMA_PERSONALITY(p_hwfn) &&
!QED_IS_NVMETCP_PERSONALITY(p_hwfn))
qed_wr(p_hwfn, p_ptt, PRS_REG_USE_LIGHT_L2, 1);
qed_ll2_establish_connection_ooo(p_hwfn, p_ll2_conn);
......@@ -2376,7 +2379,8 @@ static int qed_ll2_start_ooo(struct qed_hwfn *p_hwfn,
static bool qed_ll2_is_storage_eng1(struct qed_dev *cdev)
{
return (QED_IS_FCOE_PERSONALITY(QED_LEADING_HWFN(cdev)) ||
QED_IS_ISCSI_PERSONALITY(QED_LEADING_HWFN(cdev))) &&
QED_IS_ISCSI_PERSONALITY(QED_LEADING_HWFN(cdev)) ||
QED_IS_NVMETCP_PERSONALITY(QED_LEADING_HWFN(cdev))) &&
(QED_AFFIN_HWFN(cdev) != QED_LEADING_HWFN(cdev));
}
......@@ -2402,11 +2406,13 @@ static int qed_ll2_stop(struct qed_dev *cdev)
if (cdev->ll2->handle == QED_LL2_UNUSED_HANDLE)
return 0;
if (!QED_IS_NVMETCP_PERSONALITY(p_hwfn))
qed_llh_remove_mac_filter(cdev, 0, cdev->ll2_mac_address);
qed_llh_remove_mac_filter(cdev, 0, cdev->ll2_mac_address);
eth_zero_addr(cdev->ll2_mac_address);
if (QED_IS_ISCSI_PERSONALITY(p_hwfn))
if (QED_IS_ISCSI_PERSONALITY(p_hwfn) || QED_IS_NVMETCP_PERSONALITY(p_hwfn))
qed_ll2_stop_ooo(p_hwfn);
/* In CMT mode, LL2 is always started on engine 0 for a storage PF */
......@@ -2442,7 +2448,8 @@ static int __qed_ll2_start(struct qed_hwfn *p_hwfn,
conn_type = QED_LL2_TYPE_FCOE;
break;
case QED_PCI_ISCSI:
conn_type = QED_LL2_TYPE_ISCSI;
case QED_PCI_NVMETCP:
conn_type = QED_LL2_TYPE_TCP_ULP;
break;
case QED_PCI_ETH_ROCE:
conn_type = QED_LL2_TYPE_ROCE;
......@@ -2567,7 +2574,7 @@ static int qed_ll2_start(struct qed_dev *cdev, struct qed_ll2_params *params)
}
}
if (QED_IS_ISCSI_PERSONALITY(p_hwfn)) {
if (QED_IS_ISCSI_PERSONALITY(p_hwfn) || QED_IS_NVMETCP_PERSONALITY(p_hwfn)) {
DP_VERBOSE(cdev, QED_MSG_STORAGE, "Starting OOO LL2 queue\n");
rc = qed_ll2_start_ooo(p_hwfn, params);
if (rc) {
......@@ -2576,18 +2583,21 @@ static int qed_ll2_start(struct qed_dev *cdev, struct qed_ll2_params *params)
}
}
if (!QED_IS_NVMETCP_PERSONALITY(p_hwfn)) {
rc = qed_llh_add_mac_filter(cdev, 0, params->ll2_mac_address);
if (rc) {
DP_NOTICE(cdev, "Failed to add an LLH filter\n");
goto err3;
}
}
ether_addr_copy(cdev->ll2_mac_address, params->ll2_mac_address);
return 0;
err3:
if (QED_IS_ISCSI_PERSONALITY(p_hwfn))
if (QED_IS_ISCSI_PERSONALITY(p_hwfn) || QED_IS_NVMETCP_PERSONALITY(p_hwfn))
qed_ll2_stop_ooo(p_hwfn);
err2:
if (b_is_storage_eng1)
......
......@@ -2446,6 +2446,9 @@ qed_mcp_get_shmem_proto(struct qed_hwfn *p_hwfn,
case FUNC_MF_CFG_PROTOCOL_ISCSI:
*p_proto = QED_PCI_ISCSI;
break;
case FUNC_MF_CFG_PROTOCOL_NVMETCP:
*p_proto = QED_PCI_NVMETCP;
break;
case FUNC_MF_CFG_PROTOCOL_FCOE:
*p_proto = QED_PCI_FCOE;
break;
......
......@@ -1306,7 +1306,8 @@ int qed_mfw_process_tlv_req(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
}
if ((tlv_group & QED_MFW_TLV_ISCSI) &&
p_hwfn->hw_info.personality != QED_PCI_ISCSI) {
p_hwfn->hw_info.personality != QED_PCI_ISCSI &&
p_hwfn->hw_info.personality != QED_PCI_NVMETCP) {
DP_VERBOSE(p_hwfn, QED_MSG_SP,
"Skipping iSCSI TLVs for non-iSCSI function\n");
tlv_group &= ~QED_MFW_TLV_ISCSI;
......
/* SPDX-License-Identifier: (GPL-2.0-only OR BSD-3-Clause) */
/* Copyright 2021 Marvell. All rights reserved. */
#ifndef _QED_NVMETCP_H
#define _QED_NVMETCP_H
#include <linux/types.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/qed/tcp_common.h>
#include <linux/qed/qed_nvmetcp_if.h>
#include <linux/qed/qed_chain.h>
#include "qed.h"
#include "qed_hsi.h"
#include "qed_mcp.h"
#include "qed_sp.h"
#define QED_NVMETCP_FW_CQ_SIZE (4 * 1024)
/* tcp parameters */
#define QED_TCP_FLOW_LABEL 0
#define QED_TCP_TWO_MSL_TIMER 4000
#define QED_TCP_HALF_WAY_CLOSE_TIMEOUT 10
#define QED_TCP_MAX_FIN_RT 2
#define QED_TCP_SWS_TIMER 5000
struct qed_nvmetcp_info {
spinlock_t lock; /* Connection resources. */
struct list_head free_list;
u16 max_num_outstanding_tasks;
void *event_context;
nvmetcp_event_cb_t event_cb;
};
struct qed_hash_nvmetcp_con {
struct hlist_node node;
struct qed_nvmetcp_conn *con;
};
struct qed_nvmetcp_conn {
struct list_head list_entry;
bool free_on_delete;
u16 conn_id;
u32 icid;
u32 fw_cid;
u8 layer_code;
u8 offl_flags;
u8 connect_mode;
dma_addr_t sq_pbl_addr;
struct qed_chain r2tq;
struct qed_chain xhq;
struct qed_chain uhq;
u8 local_mac[6];
u8 remote_mac[6];
u8 ip_version;
u8 ka_max_probe_cnt;
u16 vlan_id;
u16 tcp_flags;
u32 remote_ip[4];
u32 local_ip[4];
u32 flow_label;
u32 ka_timeout;
u32 ka_interval;
u32 max_rt_time;
u8 ttl;
u8 tos_or_tc;
u16 remote_port;
u16 local_port;
u16 mss;
u8 rcv_wnd_scale;
u32 rcv_wnd;
u32 cwnd;
u8 update_flag;
u8 default_cq;
u8 abortive_dsconnect;
u32 max_seq_size;
u32 max_recv_pdu_length;
u32 max_send_pdu_length;
u32 first_seq_length;
u16 physical_q0;
u16 physical_q1;
u16 nvmetcp_cccid_max_range;
dma_addr_t nvmetcp_cccid_itid_table_addr;
};
#if IS_ENABLED(CONFIG_QED_NVMETCP)
int qed_nvmetcp_alloc(struct qed_hwfn *p_hwfn);
void qed_nvmetcp_setup(struct qed_hwfn *p_hwfn);
void qed_nvmetcp_free(struct qed_hwfn *p_hwfn);
#else /* IS_ENABLED(CONFIG_QED_NVMETCP) */
static inline int qed_nvmetcp_alloc(struct qed_hwfn *p_hwfn)
{
return -EINVAL;
}
static inline void qed_nvmetcp_setup(struct qed_hwfn *p_hwfn) {}
static inline void qed_nvmetcp_free(struct qed_hwfn *p_hwfn) {}
#endif /* IS_ENABLED(CONFIG_QED_NVMETCP) */
#endif
/* SPDX-License-Identifier: (GPL-2.0-only OR BSD-3-Clause) */
/* Copyright 2021 Marvell. All rights reserved. */
#ifndef _QED_NVMETCP_FW_FUNCS_H
#define _QED_NVMETCP_FW_FUNCS_H
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/mm.h>
#include <linux/types.h>
#include <asm/byteorder.h>
#include <linux/qed/common_hsi.h>
#include <linux/qed/storage_common.h>
#include <linux/qed/nvmetcp_common.h>
#include <linux/qed/qed_nvmetcp_if.h>
#if IS_ENABLED(CONFIG_QED_NVMETCP)
void init_nvmetcp_host_read_task(struct nvmetcp_task_params *task_params,
struct nvme_tcp_cmd_pdu *cmd_pdu_header,
struct nvme_command *nvme_cmd,
struct storage_sgl_task_params *sgl_task_params);
void init_nvmetcp_host_write_task(struct nvmetcp_task_params *task_params,
struct nvme_tcp_cmd_pdu *cmd_pdu_header,
struct nvme_command *nvme_cmd,
struct storage_sgl_task_params *sgl_task_params);
void init_nvmetcp_init_conn_req_task(struct nvmetcp_task_params *task_params,
struct nvme_tcp_icreq_pdu *init_conn_req_pdu_hdr,
struct storage_sgl_task_params *tx_sgl_task_params,
struct storage_sgl_task_params *rx_sgl_task_params);
void init_cleanup_task_nvmetcp(struct nvmetcp_task_params *task_params);
#else /* IS_ENABLED(CONFIG_QED_NVMETCP) */
#endif /* IS_ENABLED(CONFIG_QED_NVMETCP) */
#endif /* _QED_NVMETCP_FW_FUNCS_H */
// SPDX-License-Identifier: (GPL-2.0-only OR BSD-3-Clause)
/*
* Copyright 2021 Marvell. All rights reserved.
*/
#include <linux/types.h>
#include <asm/byteorder.h>
#include <asm/param.h>
#include <linux/delay.h>
#include <linux/pci.h>
#include <linux/dma-mapping.h>
#include <linux/etherdevice.h>
#include <linux/kernel.h>
#include <linux/stddef.h>
#include <linux/errno.h>
#include <net/tcp.h>
#include <linux/qed/qed_nvmetcp_ip_services_if.h>
#define QED_IP_RESOL_TIMEOUT 4
int qed_route_ipv4(struct sockaddr_storage *local_addr,
struct sockaddr_storage *remote_addr,
struct sockaddr *hardware_address,
struct net_device **ndev)
{
struct neighbour *neigh = NULL;
__be32 *loc_ip, *rem_ip;
struct rtable *rt;
int rc = -ENXIO;
int retry;
loc_ip = &((struct sockaddr_in *)local_addr)->sin_addr.s_addr;
rem_ip = &((struct sockaddr_in *)remote_addr)->sin_addr.s_addr;
*ndev = NULL;
rt = ip_route_output(&init_net, *rem_ip, *loc_ip, 0/*tos*/, 0/*oif*/);
if (IS_ERR(rt)) {
pr_err("lookup route failed\n");
rc = PTR_ERR(rt);
goto return_err;
}
neigh = dst_neigh_lookup(&rt->dst, rem_ip);
if (!neigh) {
rc = -ENOMEM;
ip_rt_put(rt);
goto return_err;
}
*ndev = rt->dst.dev;
ip_rt_put(rt);
/* If not resolved, kick-off state machine towards resolution */
if (!(neigh->nud_state & NUD_VALID))
neigh_event_send(neigh, NULL);
/* query neighbor until resolved or timeout */
retry = QED_IP_RESOL_TIMEOUT;
while (!(neigh->nud_state & NUD_VALID) && retry > 0) {
msleep(1000);
retry--;
}
if (neigh->nud_state & NUD_VALID) {
/* copy resolved MAC address */
neigh_ha_snapshot(hardware_address->sa_data, neigh, *ndev);
hardware_address->sa_family = (*ndev)->type;
rc = 0;
}
neigh_release(neigh);
if (!(*loc_ip)) {
*loc_ip = inet_select_addr(*ndev, *rem_ip, RT_SCOPE_UNIVERSE);
local_addr->ss_family = AF_INET;
}
return_err:
return rc;
}
EXPORT_SYMBOL(qed_route_ipv4);
int qed_route_ipv6(struct sockaddr_storage *local_addr,
		   struct sockaddr_storage *remote_addr,
		   struct sockaddr *hardware_address,
		   struct net_device **ndev)
{
	struct neighbour *neigh = NULL;
	struct dst_entry *dst;
	struct flowi6 fl6;
	int rc = -ENXIO;
	int retry;

	memset(&fl6, 0, sizeof(fl6));
	fl6.saddr = ((struct sockaddr_in6 *)local_addr)->sin6_addr;
	fl6.daddr = ((struct sockaddr_in6 *)remote_addr)->sin6_addr;
	dst = ip6_route_output(&init_net, NULL, &fl6);
	if (!dst || dst->error) {
		if (dst) {
			/* read the error code before releasing the entry */
			pr_err("lookup route failed %d\n", dst->error);
			dst_release(dst);
		}

		goto out;
	}

	neigh = dst_neigh_lookup(dst, &fl6.daddr);
	if (neigh) {
		*ndev = ip6_dst_idev(dst)->dev;

		/* If not resolved, kick-off state machine towards resolution */
		if (!(neigh->nud_state & NUD_VALID))
			neigh_event_send(neigh, NULL);

		/* query neighbor until resolved or timeout */
		retry = QED_IP_RESOL_TIMEOUT;
		while (!(neigh->nud_state & NUD_VALID) && retry > 0) {
			msleep(1000);
			retry--;
		}

		if (neigh->nud_state & NUD_VALID) {
			neigh_ha_snapshot((u8 *)hardware_address->sa_data,
					  neigh, *ndev);
			hardware_address->sa_family = (*ndev)->type;
			rc = 0;
		}

		neigh_release(neigh);

		if (ipv6_addr_any(&fl6.saddr)) {
			if (ipv6_dev_get_saddr(dev_net(*ndev), *ndev,
					       &fl6.daddr, 0, &fl6.saddr)) {
				pr_err("Unable to find source IP address\n");
				rc = -ENXIO;
				dst_release(dst);
				goto out;
			}

			local_addr->ss_family = AF_INET6;
			((struct sockaddr_in6 *)local_addr)->sin6_addr =
								fl6.saddr;
		}
	}

	dst_release(dst);

out:
	return rc;
}
EXPORT_SYMBOL(qed_route_ipv6);
void qed_vlan_get_ndev(struct net_device **ndev, u16 *vlan_id)
{
	if (is_vlan_dev(*ndev)) {
		*vlan_id = vlan_dev_vlan_id(*ndev);
		*ndev = vlan_dev_real_dev(*ndev);
	}
}
EXPORT_SYMBOL(qed_vlan_get_ndev);
struct pci_dev *qed_validate_ndev(struct net_device *ndev)
{
	struct pci_dev *pdev = NULL;
	struct net_device *upper;

	for_each_pci_dev(pdev) {
		if (pdev && pdev->driver &&
		    !strcmp(pdev->driver->name, "qede")) {
			upper = pci_get_drvdata(pdev);
			if (upper->ifindex == ndev->ifindex)
				return pdev;
		}
	}

	return NULL;
}
EXPORT_SYMBOL(qed_validate_ndev);
__be16 qed_get_in_port(struct sockaddr_storage *sa)
{
	return sa->ss_family == AF_INET
		? ((struct sockaddr_in *)sa)->sin_port
		: ((struct sockaddr_in6 *)sa)->sin6_port;
}
EXPORT_SYMBOL(qed_get_in_port);
int qed_fetch_tcp_port(struct sockaddr_storage local_ip_addr,
		       struct socket **sock, u16 *port)
{
	struct sockaddr_storage sa;
	int rc = 0;

	rc = sock_create(local_ip_addr.ss_family, SOCK_STREAM, IPPROTO_TCP,
			 sock);
	if (rc) {
		pr_warn("failed to create socket: %d\n", rc);
		goto err;
	}

	(*sock)->sk->sk_allocation = GFP_KERNEL;
	sk_set_memalloc((*sock)->sk);

	rc = kernel_bind(*sock, (struct sockaddr *)&local_ip_addr,
			 sizeof(local_ip_addr));
	if (rc) {
		pr_warn("failed to bind socket: %d\n", rc);
		goto err_sock;
	}

	rc = kernel_getsockname(*sock, (struct sockaddr *)&sa);
	if (rc < 0) {
		pr_warn("getsockname() failed: %d\n", rc);
		goto err_sock;
	}

	*port = ntohs(qed_get_in_port(&sa));

	return 0;

err_sock:
	sock_release(*sock);
	*sock = NULL;
err:
	return rc;
}
EXPORT_SYMBOL(qed_fetch_tcp_port);
void qed_return_tcp_port(struct socket *sock)
{
	if (sock && sock->sk) {
		tcp_set_state(sock->sk, TCP_CLOSE);
		sock_release(sock);
	}
}
EXPORT_SYMBOL(qed_return_tcp_port);
@@ -16,7 +16,7 @@
 #include "qed_ll2.h"
 #include "qed_ooo.h"
 #include "qed_cxt.h"
+#include "qed_nvmetcp.h"

 static struct qed_ooo_archipelago
 *qed_ooo_seek_archipelago(struct qed_hwfn *p_hwfn,
			   struct qed_ooo_info
@@ -83,7 +83,8 @@ int qed_ooo_alloc(struct qed_hwfn *p_hwfn)
 	switch (p_hwfn->hw_info.personality) {
 	case QED_PCI_ISCSI:
-		proto = PROTOCOLID_ISCSI;
+	case QED_PCI_NVMETCP:
+		proto = PROTOCOLID_TCP_ULP;
 		break;
 	case QED_PCI_ETH_RDMA:
 	case QED_PCI_ETH_IWARP:
@@ -100,6 +100,11 @@ union ramrod_data {
 	struct iscsi_spe_conn_mac_update iscsi_conn_mac_update;
 	struct iscsi_spe_conn_termination iscsi_conn_terminate;
+	struct nvmetcp_init_ramrod_params nvmetcp_init;
+	struct nvmetcp_spe_conn_offload nvmetcp_conn_offload;
+	struct nvmetcp_conn_update_ramrod_params nvmetcp_conn_update;
+	struct nvmetcp_spe_conn_termination nvmetcp_conn_terminate;
 	struct vf_start_ramrod_data vf_start;
 	struct vf_stop_ramrod_data vf_stop;
 };
@@ -385,7 +385,8 @@ int qed_sp_pf_start(struct qed_hwfn *p_hwfn,
 		p_ramrod->personality = PERSONALITY_FCOE;
 		break;
 	case QED_PCI_ISCSI:
-		p_ramrod->personality = PERSONALITY_ISCSI;
+	case QED_PCI_NVMETCP:
+		p_ramrod->personality = PERSONALITY_TCP_ULP;
 		break;
 	case QED_PCI_ETH_ROCE:
 	case QED_PCI_ETH_IWARP:
@@ -702,7 +702,7 @@ enum mf_mode {
 /* Per-protocol connection types */
 enum protocol_type {
-	PROTOCOLID_ISCSI,
+	PROTOCOLID_TCP_ULP,
 	PROTOCOLID_FCOE,
 	PROTOCOLID_ROCE,
 	PROTOCOLID_CORE,
@@ -542,6 +542,22 @@ struct qed_iscsi_pf_params {
 	u8 bdq_pbl_num_entries[3];
 };

+struct qed_nvmetcp_pf_params {
+	u64 glbl_q_params_addr;
+	u16 cq_num_entries;
+	u16 num_cons;
+	u16 num_tasks;
+	u8 num_sq_pages_in_ring;
+	u8 num_r2tq_pages_in_ring;
+	u8 num_uhq_pages_in_ring;
+	u8 num_queues;
+	u8 gl_rq_pi;
+	u8 gl_cmd_pi;
+	u8 debug_mode;
+	u8 ll2_ooo_queue_id;
+	u16 min_rto;
+};
+
 struct qed_rdma_pf_params {
 	/* Supplied to QED during resource allocation (may affect the ILT and
 	 * the doorbell BAR).
@@ -560,6 +576,7 @@ struct qed_pf_params {
 	struct qed_eth_pf_params eth_pf_params;
 	struct qed_fcoe_pf_params fcoe_pf_params;
 	struct qed_iscsi_pf_params iscsi_pf_params;
+	struct qed_nvmetcp_pf_params nvmetcp_pf_params;
 	struct qed_rdma_pf_params rdma_pf_params;
 };
@@ -662,6 +679,7 @@ enum qed_sb_type {
 enum qed_protocol {
 	QED_PROTOCOL_ETH,
 	QED_PROTOCOL_ISCSI,
+	QED_PROTOCOL_NVMETCP = QED_PROTOCOL_ISCSI,
 	QED_PROTOCOL_FCOE,
 };
@@ -18,7 +18,7 @@
 enum qed_ll2_conn_type {
 	QED_LL2_TYPE_FCOE,
-	QED_LL2_TYPE_ISCSI,
+	QED_LL2_TYPE_TCP_ULP,
 	QED_LL2_TYPE_TEST,
 	QED_LL2_TYPE_OOO,
 	QED_LL2_TYPE_RESERVED2,
/* SPDX-License-Identifier: (GPL-2.0-only OR BSD-3-Clause) */
/* Copyright 2021 Marvell. All rights reserved. */

#ifndef _QED_NVMETCP_IF_H
#define _QED_NVMETCP_IF_H

#include <linux/types.h>
#include <linux/qed/qed_if.h>
#include <linux/qed/storage_common.h>
#include <linux/qed/nvmetcp_common.h>

#define QED_NVMETCP_MAX_IO_SIZE		0x800000
#define QED_NVMETCP_CMN_HDR_SIZE	(sizeof(struct nvme_tcp_hdr))
#define QED_NVMETCP_CMD_HDR_SIZE	(sizeof(struct nvme_tcp_cmd_pdu))
#define QED_NVMETCP_NON_IO_HDR_SIZE	((QED_NVMETCP_CMN_HDR_SIZE + 16))

typedef int (*nvmetcp_event_cb_t) (void *context,
				   u8 fw_event_code, void *fw_handle);

struct qed_dev_nvmetcp_info {
	struct qed_dev_info common;
	u8 port_id;  /* Physical port */
	u8 num_cqs;
};

#define MAX_TID_BLOCKS_NVMETCP (512)
struct qed_nvmetcp_tid {
	u32 size;		/* In bytes per task */
	u32 num_tids_per_block;
	u8 *blocks[MAX_TID_BLOCKS_NVMETCP];
};

struct qed_nvmetcp_id_params {
	u8 mac[ETH_ALEN];
	u32 ip[4];
	u16 port;
};
struct qed_nvmetcp_params_offload {
	/* FW initializations */
	dma_addr_t sq_pbl_addr;
	dma_addr_t nvmetcp_cccid_itid_table_addr;
	u16 nvmetcp_cccid_max_range;
	u8 default_cq;

	/* Networking and TCP stack initializations */
	struct qed_nvmetcp_id_params src;
	struct qed_nvmetcp_id_params dst;
	u32 ka_timeout;
	u32 ka_interval;
	u32 max_rt_time;
	u32 cwnd;
	u16 mss;
	u16 vlan_id;
	bool timestamp_en;
	bool delayed_ack_en;
	bool tcp_keep_alive_en;
	bool ecn_en;
	u8 ip_version;
	u8 ka_max_probe_cnt;
	u8 ttl;
	u8 tos_or_tc;
	u8 rcv_wnd_scale;
};

struct qed_nvmetcp_params_update {
	u32 max_io_size;
	u32 max_recv_pdu_length;
	u32 max_send_pdu_length;

	/* Placeholder: pfv, cpda, hpda */

	bool hdr_digest_en;
	bool data_digest_en;
};

struct qed_nvmetcp_cb_ops {
	struct qed_common_cb_ops common;
};

struct nvmetcp_sge {
	struct regpair sge_addr;	/* SGE address */
	__le32 sge_len;			/* SGE length */
	__le32 reserved;
};

/* IO path HSI function SGL params */
struct storage_sgl_task_params {
	struct nvmetcp_sge *sgl;
	struct regpair sgl_phys_addr;
	u32 total_buffer_size;
	u16 num_sges;
	bool small_mid_sge;
};

/* IO path HSI function FW task context params */
struct nvmetcp_task_params {
	void *context;		/* Output parameter - set/filled by the HSI function */
	struct nvmetcp_wqe *sqe;
	u32 tx_io_size;		/* in bytes (Without DIF, if exists) */
	u32 rx_io_size;		/* in bytes (Without DIF, if exists) */
	u16 conn_icid;
	u16 itid;
	struct regpair opq;	/* qedn_task_ctx address */
	u16 host_cccid;
	u8 cq_rss_number;
	bool send_write_incapsule;
};
/**
 * struct qed_nvmetcp_ops - qed NVMeTCP operations.
 * @common: common operations pointer
 * @ll2: light L2 operations pointer
 * @fill_dev_info: fills NVMeTCP specific information
 * @param cdev
 * @param info
 * @return 0 on success, otherwise error value.
 * @register_ops: register nvmetcp operations
 * @param cdev
 * @param ops - specified using qed_nvmetcp_cb_ops
 * @param cookie - driver private
 * @start: nvmetcp in FW
 * @param cdev
 * @param tasks - qed will fill information about tasks
 * @return 0 on success, otherwise error value.
 * @stop: nvmetcp in FW
 * @param cdev
 * @return 0 on success, otherwise error value.
 * @acquire_conn: acquire a new nvmetcp connection
 * @param cdev
 * @param handle - qed will fill handle that should be
 *		used henceforth as identifier of the
 *		connection.
 * @param p_doorbell - qed will fill the address of the
 *		doorbell.
 * @return 0 on success, otherwise error value.
 * @release_conn: release a previously acquired nvmetcp connection
 * @param cdev
 * @param handle - the connection handle.
 * @return 0 on success, otherwise error value.
 * @offload_conn: configures an offloaded connection
 * @param cdev
 * @param handle - the connection handle.
 * @param conn_info - the configuration to use for the
 *		offload.
 * @return 0 on success, otherwise error value.
 * @update_conn: updates an offloaded connection
 * @param cdev
 * @param handle - the connection handle.
 * @param conn_info - the configuration to use for the
 *		offload.
 * @return 0 on success, otherwise error value.
 * @destroy_conn: stops an offloaded connection
 * @param cdev
 * @param handle - the connection handle.
 * @return 0 on success, otherwise error value.
 * @clear_sq: clear all tasks in the SQ
 * @param cdev
 * @param handle - the connection handle.
 * @return 0 on success, otherwise error value.
 * @add_src_tcp_port_filter: Add source tcp port filter
 * @param cdev
 * @param src_port
 * @remove_src_tcp_port_filter: Remove source tcp port filter
 * @param cdev
 * @param src_port
 * @add_dst_tcp_port_filter: Add destination tcp port filter
 * @param cdev
 * @param dest_port
 * @remove_dst_tcp_port_filter: Remove destination tcp port filter
 * @param cdev
 * @param dest_port
 * @clear_all_filters: Clear all filters.
 * @param cdev
 */
struct qed_nvmetcp_ops {
	const struct qed_common_ops *common;

	const struct qed_ll2_ops *ll2;

	int (*fill_dev_info)(struct qed_dev *cdev,
			     struct qed_dev_nvmetcp_info *info);

	void (*register_ops)(struct qed_dev *cdev,
			     struct qed_nvmetcp_cb_ops *ops, void *cookie);

	int (*start)(struct qed_dev *cdev,
		     struct qed_nvmetcp_tid *tasks,
		     void *event_context, nvmetcp_event_cb_t async_event_cb);

	int (*stop)(struct qed_dev *cdev);

	int (*acquire_conn)(struct qed_dev *cdev,
			    u32 *handle,
			    u32 *fw_cid, void __iomem **p_doorbell);

	int (*release_conn)(struct qed_dev *cdev, u32 handle);

	int (*offload_conn)(struct qed_dev *cdev,
			    u32 handle,
			    struct qed_nvmetcp_params_offload *conn_info);

	int (*update_conn)(struct qed_dev *cdev,
			   u32 handle,
			   struct qed_nvmetcp_params_update *conn_info);

	int (*destroy_conn)(struct qed_dev *cdev, u32 handle, u8 abrt_conn);

	int (*clear_sq)(struct qed_dev *cdev, u32 handle);

	int (*add_src_tcp_port_filter)(struct qed_dev *cdev, u16 src_port);

	void (*remove_src_tcp_port_filter)(struct qed_dev *cdev, u16 src_port);

	int (*add_dst_tcp_port_filter)(struct qed_dev *cdev, u16 dest_port);

	void (*remove_dst_tcp_port_filter)(struct qed_dev *cdev, u16 dest_port);

	void (*clear_all_filters)(struct qed_dev *cdev);

	void (*init_read_io)(struct nvmetcp_task_params *task_params,
			     struct nvme_tcp_cmd_pdu *cmd_pdu_header,
			     struct nvme_command *nvme_cmd,
			     struct storage_sgl_task_params *sgl_task_params);

	void (*init_write_io)(struct nvmetcp_task_params *task_params,
			      struct nvme_tcp_cmd_pdu *cmd_pdu_header,
			      struct nvme_command *nvme_cmd,
			      struct storage_sgl_task_params *sgl_task_params);

	void (*init_icreq_exchange)(struct nvmetcp_task_params *task_params,
				    struct nvme_tcp_icreq_pdu *init_conn_req_pdu_hdr,
				    struct storage_sgl_task_params *tx_sgl_task_params,
				    struct storage_sgl_task_params *rx_sgl_task_params);

	void (*init_task_cleanup)(struct nvmetcp_task_params *task_params);
};

const struct qed_nvmetcp_ops *qed_get_nvmetcp_ops(void);
void qed_put_nvmetcp_ops(void);
#endif
/* SPDX-License-Identifier: (GPL-2.0-only OR BSD-3-Clause) */
/*
 * Copyright 2021 Marvell. All rights reserved.
 */

#ifndef _QED_IP_SERVICES_IF_H
#define _QED_IP_SERVICES_IF_H

#include <linux/types.h>
#include <net/route.h>
#include <net/ip6_route.h>
#include <linux/inetdevice.h>

int qed_route_ipv4(struct sockaddr_storage *local_addr,
		   struct sockaddr_storage *remote_addr,
		   struct sockaddr *hardware_address,
		   struct net_device **ndev);
int qed_route_ipv6(struct sockaddr_storage *local_addr,
		   struct sockaddr_storage *remote_addr,
		   struct sockaddr *hardware_address,
		   struct net_device **ndev);
void qed_vlan_get_ndev(struct net_device **ndev, u16 *vlan_id);
struct pci_dev *qed_validate_ndev(struct net_device *ndev);
void qed_return_tcp_port(struct socket *sock);
int qed_fetch_tcp_port(struct sockaddr_storage local_ip_addr,
		       struct socket **sock, u16 *port);
__be16 qed_get_in_port(struct sockaddr_storage *sa);

#endif /* _QED_IP_SERVICES_IF_H */