Commit eda1bc65 authored by David S. Miller

Merge branch 'QED-NVMeTCP-Offload'

Shai Malin says:

====================
QED NVMeTCP Offload

Intro:
======
This is the qed part of Marvell’s NVMeTCP offload series, shared as
RFC series "NVMeTCP Offload ULP and QEDN Device Driver".
This part is a standalone series, and is not dependent on other parts
of the RFC.
The overall goal is to add qedn as the offload driver for NVMeTCP,
alongside the existing offload drivers (qedr, qedi and qedf for rdma,
iscsi and fcoe respectively).

In this series we are making the necessary changes to qed to enable this
by exposing APIs for FW/HW initializations.

The qedn series (and required changes to NVMe stack) will be sent to the
linux-nvme mailing list.
More details on the upstream plan are included in the section of the
same name below.

The Series Patches:
===================
1. qed: Add TCP_ULP FW resource layout – replacing the iSCSI layout with
   a common one shared with NVMeTCP.
2. qed: Add NVMeTCP Offload PF Level FW and HW HSI.
3. qed: Add NVMeTCP Offload Connection Level FW and HW HSI.
4. qed: Add support of HW filter block – enables redirecting NVMeTCP
   traffic to the dedicated PF.
5. qed: Add NVMeTCP Offload IO Level FW and HW HSI.
6. qed: Add NVMeTCP Offload IO Level FW Initializations.
7. qed: Add IP services APIs support – VLAN, IP routing and reserving
   TCP ports for the offload device.

The NVMeTCP Offload:
====================
With the goal of enabling a generic infrastructure that allows NVMe/TCP
offload devices like NICs to seamlessly plug into the NVMe-oF stack, this
patch series introduces the nvme-tcp-offload ULP host layer, which will
be a new transport type called "tcp-offload" and will serve as an
abstraction layer to work with vendor specific nvme-tcp offload drivers.

NVMeTCP offload is a full offload of the NVMeTCP protocol; it covers
both the TCP level and the NVMeTCP level.

The nvme-tcp-offload transport can co-exist with the existing tcp and
other transports. The tcp offload was designed so that stack changes are
kept to a bare minimum: only registering new transports.
All other APIs, ops etc. are identical to the regular tcp transport.
Representing the TCP offload as a new transport allows clear and manageable
differentiation between connections that should use the offload path
and those that are not offloaded (even on the same device).

The nvme-tcp-offload layers and API compared to nvme-tcp and nvme-rdma:

* NVMe layer: *

       [ nvme/nvme-fabrics/blk-mq ]
             |
        (nvme API and blk-mq API)
             |
             |
* Vendor agnostic transport layer: *

      [ nvme-rdma ] [ nvme-tcp ] [ nvme-tcp-offload ]
             |        |             |
           (Verbs)
             |        |             |
             |     (Socket)
             |        |             |
             |        |        (nvme-tcp-offload API)
             |        |             |
             |        |             |
* Vendor Specific Driver: *

             |        |             |
           [ qedr ]
                      |             |
                   [ qede ]
                                    |
                                  [ qedn ]

Performance:
============
With this implementation on top of the Marvell qedn driver (using the
Marvell FastLinQ NIC), we were able to demonstrate the following CPU
utilization improvement:

On AMD EPYC 7402, 2.80GHz, 28 cores:
- For 16K queued read IOs, 16jobs, 4qd (50Gbps line rate):
  Improved the CPU utilization from 15.1% with NVMeTCP SW to 4.7% with
  NVMeTCP offload.

On Intel(R) Xeon(R) Gold 5122 CPU, 3.60GHz, 16 cores:
- For 512K queued read IOs, 16jobs, 4qd (25Gbps line rate):
  Improved the CPU utilization from 16.3% with NVMeTCP SW to 1.1% with
  NVMeTCP offload.

In addition, we were able to demonstrate the following latency improvement:
- For 200K read IOPS (16 jobs, 16 qd, with fio rate limiter):
  Improved the average latency from 105 usec with NVMeTCP SW to 39 usec
  with NVMeTCP offload.

  Improved the 99.99% tail latency from 570 usec with NVMeTCP SW to 91 usec
  with NVMeTCP offload.

The end-to-end offload latency was measured with fio running against a
null-device back end.

The Marvell FastLinQ NIC HW engine:
====================================
The Marvell NIC HW engine is capable of offloading the entire TCP/IP
stack and managing up to 64K connections per PF. Already-implemented and
upstream use cases for this engine include iWARP (via the Marvell qedr
driver) and iSCSI (via the Marvell qedi driver).
In addition, the Marvell NIC HW engine offloads the NVMeTCP queue layer
and can manage the IO level even across TCP re-transmissions and
out-of-order (OOO) events.
The HW engine enables direct data placement (including the data digest CRC
calculation and validation) and direct data transmission (including data
digest CRC calculation).

The Marvell qedn driver:
========================
The new driver will be added under "drivers/nvme/hw" and will be enabled
by the Kconfig "Marvell NVM Express over Fabrics TCP offload".
As part of the qedn init, the driver will register as a PCI device driver
and will work with the Marvell FastLinQ NIC.
As part of the probe, the driver will register to the nvme_tcp_offload
(ULP) and to the qed module (qed_nvmetcp_ops) - similar to other
"qed_*_ops" which are used by the qede, qedr, qedf and qedi device
drivers.

Upstream Plan:
==============
The RFC series "NVMeTCP Offload ULP and QEDN Device Driver"
https://lore.kernel.org/netdev/20210531225222.16992-1-smalin@marvell.com/
was designed in a modular way so that part 1 (nvme-tcp-offload) and
part 2 (qed) are independent and part 3 (qedn) depends on both parts 1+2.

- Part 1 (RFC patch 1-8): NVMeTCP Offload ULP
  The nvme-tcp-offload patches will be sent to
  'linux-nvme@lists.infradead.org'.

- Part 2 (RFC patches 9-15): QED NVMeTCP Offload
  The qed infrastructure will be sent to 'netdev@vger.kernel.org'.

Once part 1 and 2 are accepted:

- Part 3 (RFC patches 16-27): QEDN NVMeTCP Offload
  The qedn patches will be sent to 'linux-nvme@lists.infradead.org'.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
parents 2c95e6c7 806ee7f8
@@ -110,6 +110,9 @@ config QED_RDMA
 config QED_ISCSI
 	bool
 
+config QED_NVMETCP
+	bool
+
 config QED_FCOE
 	bool
...
@@ -28,6 +28,11 @@ qed-$(CONFIG_QED_ISCSI) += qed_iscsi.o
 qed-$(CONFIG_QED_LL2) += qed_ll2.o
 qed-$(CONFIG_QED_OOO) += qed_ooo.o
+qed-$(CONFIG_QED_NVMETCP) += \
+	qed_nvmetcp.o \
+	qed_nvmetcp_fw_funcs.o \
+	qed_nvmetcp_ip_services.o
 qed-$(CONFIG_QED_RDMA) += \
 	qed_iwarp.o \
 	qed_rdma.o \
...
@@ -49,6 +49,8 @@ extern const struct qed_common_ops qed_common_ops_pass;
 #define QED_MIN_WIDS	(4)
 #define QED_PF_DEMS_SIZE	(4)
 
+#define QED_LLH_DONT_CARE 0
+
 /* cau states */
 enum qed_coalescing_mode {
 	QED_COAL_MODE_DISABLE,
@@ -200,6 +202,7 @@ enum qed_pci_personality {
 	QED_PCI_ETH,
 	QED_PCI_FCOE,
 	QED_PCI_ISCSI,
+	QED_PCI_NVMETCP,
 	QED_PCI_ETH_ROCE,
 	QED_PCI_ETH_IWARP,
 	QED_PCI_ETH_RDMA,
@@ -239,6 +242,7 @@ enum QED_FEATURE {
 	QED_PF_L2_QUE,
 	QED_VF,
 	QED_RDMA_CNQ,
+	QED_NVMETCP_CQ,
 	QED_ISCSI_CQ,
 	QED_FCOE_CQ,
 	QED_VF_L2_QUE,
@@ -284,6 +288,8 @@ struct qed_hw_info {
 	((dev)->hw_info.personality == QED_PCI_FCOE)
 #define QED_IS_ISCSI_PERSONALITY(dev) \
 	((dev)->hw_info.personality == QED_PCI_ISCSI)
+#define QED_IS_NVMETCP_PERSONALITY(dev) \
+	((dev)->hw_info.personality == QED_PCI_NVMETCP)
 
 	/* Resource Allocation scheme results */
 	u32 resc_start[QED_MAX_RESC];
@@ -592,6 +598,7 @@ struct qed_hwfn {
 	struct qed_ooo_info *p_ooo_info;
 	struct qed_rdma_info *p_rdma_info;
 	struct qed_iscsi_info *p_iscsi_info;
+	struct qed_nvmetcp_info *p_nvmetcp_info;
 	struct qed_fcoe_info *p_fcoe_info;
 	struct qed_pf_params pf_params;
@@ -828,6 +835,7 @@ struct qed_dev {
 		struct qed_eth_cb_ops *eth;
 		struct qed_fcoe_cb_ops *fcoe;
 		struct qed_iscsi_cb_ops *iscsi;
+		struct qed_nvmetcp_cb_ops *nvmetcp;
 	} protocol_ops;
 	void *ops_cookie;
@@ -999,4 +1007,10 @@ int qed_mfw_fill_tlv_data(struct qed_hwfn *hwfn,
 void qed_hw_info_set_offload_tc(struct qed_hw_info *p_info, u8 tc);
 void qed_periodic_db_rec_start(struct qed_hwfn *p_hwfn);
 
+int qed_llh_add_src_tcp_port_filter(struct qed_dev *cdev, u16 src_port);
+int qed_llh_add_dst_tcp_port_filter(struct qed_dev *cdev, u16 dest_port);
+void qed_llh_remove_src_tcp_port_filter(struct qed_dev *cdev, u16 src_port);
+void qed_llh_remove_dst_tcp_port_filter(struct qed_dev *cdev, u16 src_port);
+void qed_llh_clear_all_filters(struct qed_dev *cdev);
+
 #endif /* _QED_H */
@@ -94,14 +94,14 @@ struct src_ent {
 
 static bool src_proto(enum protocol_type type)
 {
-	return type == PROTOCOLID_ISCSI ||
+	return type == PROTOCOLID_TCP_ULP ||
 	       type == PROTOCOLID_FCOE ||
 	       type == PROTOCOLID_IWARP;
 }
 
 static bool tm_cid_proto(enum protocol_type type)
 {
-	return type == PROTOCOLID_ISCSI ||
+	return type == PROTOCOLID_TCP_ULP ||
 	       type == PROTOCOLID_FCOE ||
 	       type == PROTOCOLID_ROCE ||
 	       type == PROTOCOLID_IWARP;
@@ -2072,7 +2072,6 @@ int qed_cxt_set_pf_params(struct qed_hwfn *p_hwfn, u32 rdma_tasks)
 					    PROTOCOLID_FCOE,
 					    p_params->num_cons,
 					    0);
-
 		qed_cxt_set_proto_tid_count(p_hwfn, PROTOCOLID_FCOE,
 					    QED_CXT_FCOE_TID_SEG, 0,
 					    p_params->num_tasks, true);
@@ -2090,13 +2089,12 @@ int qed_cxt_set_pf_params(struct qed_hwfn *p_hwfn, u32 rdma_tasks)
 		if (p_params->num_cons && p_params->num_tasks) {
 			qed_cxt_set_proto_cid_count(p_hwfn,
-						    PROTOCOLID_ISCSI,
+						    PROTOCOLID_TCP_ULP,
 						    p_params->num_cons,
 						    0);
-
 			qed_cxt_set_proto_tid_count(p_hwfn,
-						    PROTOCOLID_ISCSI,
-						    QED_CXT_ISCSI_TID_SEG,
+						    PROTOCOLID_TCP_ULP,
+						    QED_CXT_TCP_ULP_TID_SEG,
 						    0,
 						    p_params->num_tasks,
 						    true);
@@ -2106,6 +2104,29 @@ int qed_cxt_set_pf_params(struct qed_hwfn *p_hwfn, u32 rdma_tasks)
 		}
 		break;
 	}
+	case QED_PCI_NVMETCP:
+	{
+		struct qed_nvmetcp_pf_params *p_params;
+
+		p_params = &p_hwfn->pf_params.nvmetcp_pf_params;
+
+		if (p_params->num_cons && p_params->num_tasks) {
+			qed_cxt_set_proto_cid_count(p_hwfn,
+						    PROTOCOLID_TCP_ULP,
+						    p_params->num_cons,
+						    0);
+			qed_cxt_set_proto_tid_count(p_hwfn,
+						    PROTOCOLID_TCP_ULP,
+						    QED_CXT_TCP_ULP_TID_SEG,
+						    0,
+						    p_params->num_tasks,
+						    true);
+		} else {
+			DP_INFO(p_hwfn->cdev,
+				"NvmeTCP personality used without setting params!\n");
+		}
+		break;
+	}
 	default:
 		return -EINVAL;
 	}
@@ -2129,8 +2150,9 @@ int qed_cxt_get_tid_mem_info(struct qed_hwfn *p_hwfn,
 		seg = QED_CXT_FCOE_TID_SEG;
 		break;
 	case QED_PCI_ISCSI:
-		proto = PROTOCOLID_ISCSI;
-		seg = QED_CXT_ISCSI_TID_SEG;
+	case QED_PCI_NVMETCP:
+		proto = PROTOCOLID_TCP_ULP;
+		seg = QED_CXT_TCP_ULP_TID_SEG;
 		break;
 	default:
 		return -EINVAL;
@@ -2455,8 +2477,9 @@ int qed_cxt_get_task_ctx(struct qed_hwfn *p_hwfn,
 		seg = QED_CXT_FCOE_TID_SEG;
 		break;
 	case QED_PCI_ISCSI:
-		proto = PROTOCOLID_ISCSI;
-		seg = QED_CXT_ISCSI_TID_SEG;
+	case QED_PCI_NVMETCP:
+		proto = PROTOCOLID_TCP_ULP;
+		seg = QED_CXT_TCP_ULP_TID_SEG;
 		break;
 	default:
 		return -EINVAL;
...
@@ -50,7 +50,7 @@ int qed_cxt_get_cid_info(struct qed_hwfn *p_hwfn,
 int qed_cxt_get_tid_mem_info(struct qed_hwfn *p_hwfn,
 			     struct qed_tid_mem *p_info);
 
-#define QED_CXT_ISCSI_TID_SEG	PROTOCOLID_ISCSI
+#define QED_CXT_TCP_ULP_TID_SEG	PROTOCOLID_TCP_ULP
 #define QED_CXT_ROCE_TID_SEG	PROTOCOLID_ROCE
 #define QED_CXT_FCOE_TID_SEG	PROTOCOLID_FCOE
 enum qed_cxt_elem_type {
...
@@ -37,6 +37,7 @@
 #include "qed_sriov.h"
 #include "qed_vf.h"
 #include "qed_rdma.h"
+#include "qed_nvmetcp.h"
 
 static DEFINE_SPINLOCK(qm_lock);
 
@@ -667,7 +668,8 @@ qed_llh_set_engine_affin(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
 	}
 
 	/* Storage PF is bound to a single engine while L2 PF uses both */
-	if (QED_IS_FCOE_PERSONALITY(p_hwfn) || QED_IS_ISCSI_PERSONALITY(p_hwfn))
+	if (QED_IS_FCOE_PERSONALITY(p_hwfn) || QED_IS_ISCSI_PERSONALITY(p_hwfn) ||
+	    QED_IS_NVMETCP_PERSONALITY(p_hwfn))
 		eng = cdev->fir_affin ? QED_ENG1 : QED_ENG0;
 	else			/* L2_PERSONALITY */
 		eng = QED_BOTH_ENG;
@@ -1164,6 +1166,9 @@ void qed_llh_remove_mac_filter(struct qed_dev *cdev,
 	if (!test_bit(QED_MF_LLH_MAC_CLSS, &cdev->mf_bits))
 		goto out;
 
+	if (QED_IS_NVMETCP_PERSONALITY(p_hwfn))
+		return;
+
 	ether_addr_copy(filter.mac.addr, mac_addr);
 	rc = qed_llh_shadow_remove_filter(cdev, ppfid, &filter, &filter_idx,
 					  &ref_cnt);
@@ -1381,6 +1386,11 @@ void qed_resc_free(struct qed_dev *cdev)
 			qed_ooo_free(p_hwfn);
 		}
 
+		if (p_hwfn->hw_info.personality == QED_PCI_NVMETCP) {
+			qed_nvmetcp_free(p_hwfn);
+			qed_ooo_free(p_hwfn);
+		}
+
 		if (QED_IS_RDMA_PERSONALITY(p_hwfn) && rdma_info) {
 			qed_spq_unregister_async_cb(p_hwfn, rdma_info->proto);
 			qed_rdma_info_free(p_hwfn);
@@ -1423,6 +1433,7 @@ static u32 qed_get_pq_flags(struct qed_hwfn *p_hwfn)
 		flags |= PQ_FLAGS_OFLD;
 		break;
 	case QED_PCI_ISCSI:
+	case QED_PCI_NVMETCP:
 		flags |= PQ_FLAGS_ACK | PQ_FLAGS_OOO | PQ_FLAGS_OFLD;
 		break;
 	case QED_PCI_ETH_ROCE:
@@ -2263,10 +2274,11 @@ int qed_resc_alloc(struct qed_dev *cdev)
 			 * at the same time
 			 */
 			n_eqes += num_cons + 2 * MAX_NUM_VFS_BB + n_srq;
-		} else if (p_hwfn->hw_info.personality == QED_PCI_ISCSI) {
+		} else if (p_hwfn->hw_info.personality == QED_PCI_ISCSI ||
+			   p_hwfn->hw_info.personality == QED_PCI_NVMETCP) {
 			num_cons =
 			    qed_cxt_get_proto_cid_count(p_hwfn,
-							PROTOCOLID_ISCSI,
+							PROTOCOLID_TCP_ULP,
 							NULL);
 			n_eqes += 2 * num_cons;
 		}
@@ -2313,6 +2325,15 @@ int qed_resc_alloc(struct qed_dev *cdev)
 				goto alloc_err;
 		}
 
+		if (p_hwfn->hw_info.personality == QED_PCI_NVMETCP) {
+			rc = qed_nvmetcp_alloc(p_hwfn);
+			if (rc)
+				goto alloc_err;
+			rc = qed_ooo_alloc(p_hwfn);
+			if (rc)
+				goto alloc_err;
+		}
+
 		if (QED_IS_RDMA_PERSONALITY(p_hwfn)) {
 			rc = qed_rdma_info_alloc(p_hwfn);
 			if (rc)
@@ -2393,6 +2414,11 @@ void qed_resc_setup(struct qed_dev *cdev)
 			qed_iscsi_setup(p_hwfn);
 			qed_ooo_setup(p_hwfn);
 		}
+
+		if (p_hwfn->hw_info.personality == QED_PCI_NVMETCP) {
+			qed_nvmetcp_setup(p_hwfn);
+			qed_ooo_setup(p_hwfn);
+		}
 	}
 }
 
@@ -2854,7 +2880,8 @@ static int qed_hw_init_pf(struct qed_hwfn *p_hwfn,
 
 	/* Protocol Configuration */
 	STORE_RT_REG(p_hwfn, PRS_REG_SEARCH_TCP_RT_OFFSET,
-		     (p_hwfn->hw_info.personality == QED_PCI_ISCSI) ? 1 : 0);
+		     ((p_hwfn->hw_info.personality == QED_PCI_ISCSI) ||
+		      (p_hwfn->hw_info.personality == QED_PCI_NVMETCP)) ? 1 : 0);
 	STORE_RT_REG(p_hwfn, PRS_REG_SEARCH_FCOE_RT_OFFSET,
 		     (p_hwfn->hw_info.personality == QED_PCI_FCOE) ? 1 : 0);
 	STORE_RT_REG(p_hwfn, PRS_REG_SEARCH_ROCE_RT_OFFSET, 0);
@@ -3535,14 +3562,21 @@ static void qed_hw_set_feat(struct qed_hwfn *p_hwfn)
 		feat_num[QED_ISCSI_CQ] = min_t(u32, sb_cnt.cnt,
 					       RESC_NUM(p_hwfn,
 							QED_CMDQS_CQS));
+
+	if (QED_IS_NVMETCP_PERSONALITY(p_hwfn))
+		feat_num[QED_NVMETCP_CQ] = min_t(u32, sb_cnt.cnt,
+						 RESC_NUM(p_hwfn,
+							  QED_CMDQS_CQS));
+
 	DP_VERBOSE(p_hwfn,
 		   NETIF_MSG_PROBE,
-		   "#PF_L2_QUEUES=%d VF_L2_QUEUES=%d #ROCE_CNQ=%d FCOE_CQ=%d ISCSI_CQ=%d #SBS=%d\n",
+		   "#PF_L2_QUEUES=%d VF_L2_QUEUES=%d #ROCE_CNQ=%d FCOE_CQ=%d ISCSI_CQ=%d NVMETCP_CQ=%d #SBS=%d\n",
 		   (int)FEAT_NUM(p_hwfn, QED_PF_L2_QUE),
 		   (int)FEAT_NUM(p_hwfn, QED_VF_L2_QUE),
 		   (int)FEAT_NUM(p_hwfn, QED_RDMA_CNQ),
 		   (int)FEAT_NUM(p_hwfn, QED_FCOE_CQ),
 		   (int)FEAT_NUM(p_hwfn, QED_ISCSI_CQ),
+		   (int)FEAT_NUM(p_hwfn, QED_NVMETCP_CQ),
 		   (int)sb_cnt.cnt);
 }
 
@@ -3734,7 +3768,8 @@ int qed_hw_get_dflt_resc(struct qed_hwfn *p_hwfn,
 		break;
 	case QED_BDQ:
 		if (p_hwfn->hw_info.personality != QED_PCI_ISCSI &&
-		    p_hwfn->hw_info.personality != QED_PCI_FCOE)
+		    p_hwfn->hw_info.personality != QED_PCI_FCOE &&
+		    p_hwfn->hw_info.personality != QED_PCI_NVMETCP)
 			*p_resc_num = 0;
 		else
 			*p_resc_num = 1;
@@ -3755,7 +3790,8 @@ int qed_hw_get_dflt_resc(struct qed_hwfn *p_hwfn,
 			*p_resc_start = 0;
 		else if (p_hwfn->cdev->num_ports_in_engine == 4)
 			*p_resc_start = p_hwfn->port_id;
-		else if (p_hwfn->hw_info.personality == QED_PCI_ISCSI)
+		else if (p_hwfn->hw_info.personality == QED_PCI_ISCSI ||
+			 p_hwfn->hw_info.personality == QED_PCI_NVMETCP)
 			*p_resc_start = p_hwfn->port_id;
 		else if (p_hwfn->hw_info.personality == QED_PCI_FCOE)
 			*p_resc_start = p_hwfn->port_id + 2;
@@ -5326,3 +5362,93 @@ void qed_set_fw_mac_addr(__le16 *fw_msb,
 	((u8 *)fw_lsb)[0] = mac[5];
 	((u8 *)fw_lsb)[1] = mac[4];
 }
+
+static int qed_llh_shadow_remove_all_filters(struct qed_dev *cdev, u8 ppfid)
+{
+	struct qed_llh_info *p_llh_info = cdev->p_llh_info;
+	struct qed_llh_filter_info *p_filters;
+	int rc;
+
+	rc = qed_llh_shadow_sanity(cdev, ppfid, 0, "remove_all");
+	if (rc)
+		return rc;
+
+	p_filters = p_llh_info->pp_filters[ppfid];
+	memset(p_filters, 0, NIG_REG_LLH_FUNC_FILTER_EN_SIZE *
+	       sizeof(*p_filters));
+
+	return 0;
+}
+
+static void qed_llh_clear_ppfid_filters(struct qed_dev *cdev, u8 ppfid)
+{
+	struct qed_hwfn *p_hwfn = QED_LEADING_HWFN(cdev);
+	struct qed_ptt *p_ptt = qed_ptt_acquire(p_hwfn);
+	u8 filter_idx, abs_ppfid;
+	int rc = 0;
+
+	if (!p_ptt)
+		return;
+
+	if (!test_bit(QED_MF_LLH_PROTO_CLSS, &cdev->mf_bits) &&
+	    !test_bit(QED_MF_LLH_MAC_CLSS, &cdev->mf_bits))
+		goto out;
+
+	rc = qed_llh_abs_ppfid(cdev, ppfid, &abs_ppfid);
+	if (rc)
+		goto out;
+
+	rc = qed_llh_shadow_remove_all_filters(cdev, ppfid);
+	if (rc)
+		goto out;
+
+	for (filter_idx = 0; filter_idx < NIG_REG_LLH_FUNC_FILTER_EN_SIZE;
+	     filter_idx++) {
+		rc = qed_llh_remove_filter(p_hwfn, p_ptt,
+					   abs_ppfid, filter_idx);
+		if (rc)
+			goto out;
+	}
+
+out:
+	qed_ptt_release(p_hwfn, p_ptt);
+}
+
+int qed_llh_add_src_tcp_port_filter(struct qed_dev *cdev, u16 src_port)
+{
+	return qed_llh_add_protocol_filter(cdev, 0,
+					   QED_LLH_FILTER_TCP_SRC_PORT,
+					   src_port, QED_LLH_DONT_CARE);
+}
+
+void qed_llh_remove_src_tcp_port_filter(struct qed_dev *cdev, u16 src_port)
+{
+	qed_llh_remove_protocol_filter(cdev, 0,
+				       QED_LLH_FILTER_TCP_SRC_PORT,
+				       src_port, QED_LLH_DONT_CARE);
+}
+
+int qed_llh_add_dst_tcp_port_filter(struct qed_dev *cdev, u16 dest_port)
+{
+	return qed_llh_add_protocol_filter(cdev, 0,
+					   QED_LLH_FILTER_TCP_DEST_PORT,
+					   QED_LLH_DONT_CARE, dest_port);
+}
+
+void qed_llh_remove_dst_tcp_port_filter(struct qed_dev *cdev, u16 dest_port)
+{
+	qed_llh_remove_protocol_filter(cdev, 0,
+				       QED_LLH_FILTER_TCP_DEST_PORT,
+				       QED_LLH_DONT_CARE, dest_port);
+}
+
+void qed_llh_clear_all_filters(struct qed_dev *cdev)
+{
+	u8 ppfid;
+
+	if (!test_bit(QED_MF_LLH_PROTO_CLSS, &cdev->mf_bits) &&
+	    !test_bit(QED_MF_LLH_MAC_CLSS, &cdev->mf_bits))
+		return;
+
+	for (ppfid = 0; ppfid < cdev->p_llh_info->num_ppfid; ppfid++)
+		qed_llh_clear_ppfid_filters(cdev, ppfid);
+}
@@ -20,6 +20,7 @@
 #include <linux/qed/fcoe_common.h>
 #include <linux/qed/eth_common.h>
 #include <linux/qed/iscsi_common.h>
+#include <linux/qed/nvmetcp_common.h>
 #include <linux/qed/iwarp_common.h>
 #include <linux/qed/rdma_common.h>
 #include <linux/qed/roce_common.h>
@@ -1118,7 +1119,7 @@ struct outer_tag_config_struct {
 /* personality per PF */
 enum personality_type {
 	BAD_PERSONALITY_TYP,
-	PERSONALITY_ISCSI,
+	PERSONALITY_TCP_ULP,
 	PERSONALITY_FCOE,
 	PERSONALITY_RDMA_AND_ETH,
 	PERSONALITY_RDMA,
@@ -12147,7 +12148,8 @@ struct public_func {
 #define FUNC_MF_CFG_PROTOCOL_ISCSI	0x00000010
 #define FUNC_MF_CFG_PROTOCOL_FCOE	0x00000020
 #define FUNC_MF_CFG_PROTOCOL_ROCE	0x00000030
-#define FUNC_MF_CFG_PROTOCOL_MAX	0x00000030
+#define FUNC_MF_CFG_PROTOCOL_NVMETCP	0x00000040
+#define FUNC_MF_CFG_PROTOCOL_MAX	0x00000040
 #define FUNC_MF_CFG_MIN_BW_MASK		0x0000ff00
 #define FUNC_MF_CFG_MIN_BW_SHIFT	8
...
@@ -158,7 +158,7 @@ qed_sp_iscsi_func_start(struct qed_hwfn *p_hwfn,
 	rc = qed_sp_init_request(p_hwfn, &p_ent,
 				 ISCSI_RAMROD_CMD_ID_INIT_FUNC,
-				 PROTOCOLID_ISCSI, &init_data);
+				 PROTOCOLID_TCP_ULP, &init_data);
 	if (rc)
 		return rc;
@@ -250,7 +250,7 @@ qed_sp_iscsi_func_start(struct qed_hwfn *p_hwfn,
 	p_hwfn->p_iscsi_info->event_context = event_context;
 	p_hwfn->p_iscsi_info->event_cb = async_event_cb;
 
-	qed_spq_register_async_cb(p_hwfn, PROTOCOLID_ISCSI,
+	qed_spq_register_async_cb(p_hwfn, PROTOCOLID_TCP_ULP,
 				  qed_iscsi_async_event);
 
 	return qed_spq_post(p_hwfn, p_ent, NULL);
@@ -286,7 +286,7 @@ static int qed_sp_iscsi_conn_offload(struct qed_hwfn *p_hwfn,
 	rc = qed_sp_init_request(p_hwfn, &p_ent,
 				 ISCSI_RAMROD_CMD_ID_OFFLOAD_CONN,
-				 PROTOCOLID_ISCSI, &init_data);
+				 PROTOCOLID_TCP_ULP, &init_data);
 	if (rc)
 		return rc;
@@ -465,7 +465,7 @@ static int qed_sp_iscsi_conn_update(struct qed_hwfn *p_hwfn,
 	rc = qed_sp_init_request(p_hwfn, &p_ent,
 				 ISCSI_RAMROD_CMD_ID_UPDATE_CONN,
-				 PROTOCOLID_ISCSI, &init_data);
+				 PROTOCOLID_TCP_ULP, &init_data);
 	if (rc)
 		return rc;
@@ -506,7 +506,7 @@ qed_sp_iscsi_mac_update(struct qed_hwfn *p_hwfn,
 	rc = qed_sp_init_request(p_hwfn, &p_ent,
 				 ISCSI_RAMROD_CMD_ID_MAC_UPDATE,
-				 PROTOCOLID_ISCSI, &init_data);
+				 PROTOCOLID_TCP_ULP, &init_data);
 	if (rc)
 		return rc;
@@ -548,7 +548,7 @@ static int qed_sp_iscsi_conn_terminate(struct qed_hwfn *p_hwfn,
 	rc = qed_sp_init_request(p_hwfn, &p_ent,
 				 ISCSI_RAMROD_CMD_ID_TERMINATION_CONN,
-				 PROTOCOLID_ISCSI, &init_data);
+				 PROTOCOLID_TCP_ULP, &init_data);
 	if (rc)
 		return rc;
@@ -582,7 +582,7 @@ static int qed_sp_iscsi_conn_clear_sq(struct qed_hwfn *p_hwfn,
 	rc = qed_sp_init_request(p_hwfn, &p_ent,
 				 ISCSI_RAMROD_CMD_ID_CLEAR_SQ,
-				 PROTOCOLID_ISCSI, &init_data);
+				 PROTOCOLID_TCP_ULP, &init_data);
 	if (rc)
 		return rc;
@@ -606,13 +606,13 @@ static int qed_sp_iscsi_func_stop(struct qed_hwfn *p_hwfn,
 	rc = qed_sp_init_request(p_hwfn, &p_ent,
 				 ISCSI_RAMROD_CMD_ID_DESTROY_FUNC,
-				 PROTOCOLID_ISCSI, &init_data);
+				 PROTOCOLID_TCP_ULP, &init_data);
 	if (rc)
 		return rc;
 
 	rc = qed_spq_post(p_hwfn, p_ent, NULL);
 
-	qed_spq_unregister_async_cb(p_hwfn, PROTOCOLID_ISCSI);
+	qed_spq_unregister_async_cb(p_hwfn, PROTOCOLID_TCP_ULP);
 
 	return rc;
 }
@@ -786,7 +786,7 @@ static int qed_iscsi_acquire_connection(struct qed_hwfn *p_hwfn,
 	u32 icid;
 
 	spin_lock_bh(&p_hwfn->p_iscsi_info->lock);
-	rc = qed_cxt_acquire_cid(p_hwfn, PROTOCOLID_ISCSI, &icid);
+	rc = qed_cxt_acquire_cid(p_hwfn, PROTOCOLID_TCP_ULP, &icid);
 	spin_unlock_bh(&p_hwfn->p_iscsi_info->lock);
 	if (rc)
 		return rc;
...
@@ -960,7 +960,8 @@ static int qed_sp_ll2_rx_queue_start(struct qed_hwfn *p_hwfn,
 	if (test_bit(QED_MF_LL2_NON_UNICAST, &p_hwfn->cdev->mf_bits) &&
 	    p_ramrod->main_func_queue && conn_type != QED_LL2_TYPE_ROCE &&
-	    conn_type != QED_LL2_TYPE_IWARP) {
+	    conn_type != QED_LL2_TYPE_IWARP &&
+	    (!QED_IS_NVMETCP_PERSONALITY(p_hwfn))) {
 		p_ramrod->mf_si_bcast_accept_all = 1;
 		p_ramrod->mf_si_mcast_accept_all = 1;
 	} else {
@@ -1037,8 +1038,8 @@ static int qed_sp_ll2_tx_queue_start(struct qed_hwfn *p_hwfn,
 	case QED_LL2_TYPE_FCOE:
 		p_ramrod->conn_type = PROTOCOLID_FCOE;
 		break;
-	case QED_LL2_TYPE_ISCSI:
-		p_ramrod->conn_type = PROTOCOLID_ISCSI;
+	case QED_LL2_TYPE_TCP_ULP:
+		p_ramrod->conn_type = PROTOCOLID_TCP_ULP;
 		break;
 	case QED_LL2_TYPE_ROCE:
 		p_ramrod->conn_type = PROTOCOLID_ROCE;
@@ -1047,8 +1048,9 @@ static int qed_sp_ll2_tx_queue_start(struct qed_hwfn *p_hwfn,
 		p_ramrod->conn_type = PROTOCOLID_IWARP;
 		break;
 	case QED_LL2_TYPE_OOO:
-		if (p_hwfn->hw_info.personality == QED_PCI_ISCSI)
-			p_ramrod->conn_type = PROTOCOLID_ISCSI;
+		if (p_hwfn->hw_info.personality == QED_PCI_ISCSI ||
+		    p_hwfn->hw_info.personality == QED_PCI_NVMETCP)
+			p_ramrod->conn_type = PROTOCOLID_TCP_ULP;
 		else
 			p_ramrod->conn_type = PROTOCOLID_IWARP;
 		break;
@@ -1634,7 +1636,8 @@ int qed_ll2_establish_connection(void *cxt, u8 connection_handle)
 	if (rc)
 		goto out;
 
-	if (!QED_IS_RDMA_PERSONALITY(p_hwfn))
+	if (!QED_IS_RDMA_PERSONALITY(p_hwfn) &&
+	    !QED_IS_NVMETCP_PERSONALITY(p_hwfn))
 		qed_wr(p_hwfn, p_ptt, PRS_REG_USE_LIGHT_L2, 1);
 
 	qed_ll2_establish_connection_ooo(p_hwfn, p_ll2_conn);
@@ -2376,7 +2379,8 @@ static int qed_ll2_start_ooo(struct qed_hwfn *p_hwfn,
 static bool qed_ll2_is_storage_eng1(struct qed_dev *cdev)
 {
 	return (QED_IS_FCOE_PERSONALITY(QED_LEADING_HWFN(cdev)) ||
-		QED_IS_ISCSI_PERSONALITY(QED_LEADING_HWFN(cdev))) &&
+		QED_IS_ISCSI_PERSONALITY(QED_LEADING_HWFN(cdev)) ||
+		QED_IS_NVMETCP_PERSONALITY(QED_LEADING_HWFN(cdev))) &&
 		(QED_AFFIN_HWFN(cdev) != QED_LEADING_HWFN(cdev));
 }
 
@@ -2402,11 +2406,13 @@ static int qed_ll2_stop(struct qed_dev *cdev)
 	if (cdev->ll2->handle == QED_LL2_UNUSED_HANDLE)
 		return 0;
 
+	if (!QED_IS_NVMETCP_PERSONALITY(p_hwfn))
+		qed_llh_remove_mac_filter(cdev, 0, cdev->ll2_mac_address);
+
 	qed_llh_remove_mac_filter(cdev, 0, cdev->ll2_mac_address);
 	eth_zero_addr(cdev->ll2_mac_address);
 
-	if (QED_IS_ISCSI_PERSONALITY(p_hwfn))
+	if (QED_IS_ISCSI_PERSONALITY(p_hwfn) || QED_IS_NVMETCP_PERSONALITY(p_hwfn))
 		qed_ll2_stop_ooo(p_hwfn);
 
 	/* In CMT mode, LL2 is always started on engine 0 for a storage PF */
@@ -2442,7 +2448,8 @@ static int __qed_ll2_start(struct qed_hwfn *p_hwfn,
 		conn_type = QED_LL2_TYPE_FCOE;
 		break;
 	case QED_PCI_ISCSI:
-		conn_type = QED_LL2_TYPE_ISCSI;
+	case QED_PCI_NVMETCP:
+		conn_type = QED_LL2_TYPE_TCP_ULP;
 		break;
case QED_PCI_ETH_ROCE: case QED_PCI_ETH_ROCE:
conn_type = QED_LL2_TYPE_ROCE; conn_type = QED_LL2_TYPE_ROCE;
...@@ -2567,7 +2574,7 @@ static int qed_ll2_start(struct qed_dev *cdev, struct qed_ll2_params *params) ...@@ -2567,7 +2574,7 @@ static int qed_ll2_start(struct qed_dev *cdev, struct qed_ll2_params *params)
} }
} }
if (QED_IS_ISCSI_PERSONALITY(p_hwfn)) { if (QED_IS_ISCSI_PERSONALITY(p_hwfn) || QED_IS_NVMETCP_PERSONALITY(p_hwfn)) {
DP_VERBOSE(cdev, QED_MSG_STORAGE, "Starting OOO LL2 queue\n"); DP_VERBOSE(cdev, QED_MSG_STORAGE, "Starting OOO LL2 queue\n");
rc = qed_ll2_start_ooo(p_hwfn, params); rc = qed_ll2_start_ooo(p_hwfn, params);
if (rc) { if (rc) {
...@@ -2576,10 +2583,13 @@ static int qed_ll2_start(struct qed_dev *cdev, struct qed_ll2_params *params) ...@@ -2576,10 +2583,13 @@ static int qed_ll2_start(struct qed_dev *cdev, struct qed_ll2_params *params)
} }
} }
rc = qed_llh_add_mac_filter(cdev, 0, params->ll2_mac_address); if (!QED_IS_NVMETCP_PERSONALITY(p_hwfn)) {
if (rc) { rc = qed_llh_add_mac_filter(cdev, 0, params->ll2_mac_address);
DP_NOTICE(cdev, "Failed to add an LLH filter\n"); if (rc) {
goto err3; DP_NOTICE(cdev, "Failed to add an LLH filter\n");
goto err3;
}
} }
ether_addr_copy(cdev->ll2_mac_address, params->ll2_mac_address); ether_addr_copy(cdev->ll2_mac_address, params->ll2_mac_address);
...@@ -2587,7 +2597,7 @@ static int qed_ll2_start(struct qed_dev *cdev, struct qed_ll2_params *params) ...@@ -2587,7 +2597,7 @@ static int qed_ll2_start(struct qed_dev *cdev, struct qed_ll2_params *params)
return 0; return 0;
err3: err3:
if (QED_IS_ISCSI_PERSONALITY(p_hwfn)) if (QED_IS_ISCSI_PERSONALITY(p_hwfn) || QED_IS_NVMETCP_PERSONALITY(p_hwfn))
qed_ll2_stop_ooo(p_hwfn); qed_ll2_stop_ooo(p_hwfn);
err2: err2:
if (b_is_storage_eng1) if (b_is_storage_eng1)
......
...@@ -2446,6 +2446,9 @@ qed_mcp_get_shmem_proto(struct qed_hwfn *p_hwfn, ...@@ -2446,6 +2446,9 @@ qed_mcp_get_shmem_proto(struct qed_hwfn *p_hwfn,
case FUNC_MF_CFG_PROTOCOL_ISCSI: case FUNC_MF_CFG_PROTOCOL_ISCSI:
*p_proto = QED_PCI_ISCSI; *p_proto = QED_PCI_ISCSI;
break; break;
case FUNC_MF_CFG_PROTOCOL_NVMETCP:
*p_proto = QED_PCI_NVMETCP;
break;
case FUNC_MF_CFG_PROTOCOL_FCOE: case FUNC_MF_CFG_PROTOCOL_FCOE:
*p_proto = QED_PCI_FCOE; *p_proto = QED_PCI_FCOE;
break; break;
......
...@@ -1306,7 +1306,8 @@ int qed_mfw_process_tlv_req(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt) ...@@ -1306,7 +1306,8 @@ int qed_mfw_process_tlv_req(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
} }
if ((tlv_group & QED_MFW_TLV_ISCSI) && if ((tlv_group & QED_MFW_TLV_ISCSI) &&
p_hwfn->hw_info.personality != QED_PCI_ISCSI) { p_hwfn->hw_info.personality != QED_PCI_ISCSI &&
p_hwfn->hw_info.personality != QED_PCI_NVMETCP) {
DP_VERBOSE(p_hwfn, QED_MSG_SP, DP_VERBOSE(p_hwfn, QED_MSG_SP,
"Skipping iSCSI TLVs for non-iSCSI function\n"); "Skipping iSCSI TLVs for non-iSCSI function\n");
tlv_group &= ~QED_MFW_TLV_ISCSI; tlv_group &= ~QED_MFW_TLV_ISCSI;
......
// SPDX-License-Identifier: (GPL-2.0-only OR BSD-3-Clause)
/* Copyright 2021 Marvell. All rights reserved. */
#include <linux/types.h>
#include <asm/byteorder.h>
#include <asm/param.h>
#include <linux/delay.h>
#include <linux/dma-mapping.h>
#include <linux/etherdevice.h>
#include <linux/kernel.h>
#include <linux/log2.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/stddef.h>
#include <linux/string.h>
#include <linux/errno.h>
#include <linux/list.h>
#include <linux/qed/qed_nvmetcp_if.h>
#include "qed.h"
#include "qed_cxt.h"
#include "qed_dev_api.h"
#include "qed_hsi.h"
#include "qed_hw.h"
#include "qed_int.h"
#include "qed_nvmetcp.h"
#include "qed_ll2.h"
#include "qed_mcp.h"
#include "qed_sp.h"
#include "qed_reg_addr.h"
#include "qed_nvmetcp_fw_funcs.h"
static int qed_nvmetcp_async_event(struct qed_hwfn *p_hwfn, u8 fw_event_code,
u16 echo, union event_ring_data *data,
u8 fw_return_code)
{
if (p_hwfn->p_nvmetcp_info->event_cb) {
struct qed_nvmetcp_info *p_nvmetcp = p_hwfn->p_nvmetcp_info;
return p_nvmetcp->event_cb(p_nvmetcp->event_context,
fw_event_code, data);
} else {
DP_NOTICE(p_hwfn, "nvmetcp async completion is not set\n");
return -EINVAL;
}
}
static int qed_sp_nvmetcp_func_start(struct qed_hwfn *p_hwfn,
enum spq_mode comp_mode,
struct qed_spq_comp_cb *p_comp_addr,
void *event_context,
nvmetcp_event_cb_t async_event_cb)
{
struct nvmetcp_init_ramrod_params *p_ramrod = NULL;
struct qed_nvmetcp_pf_params *p_params = NULL;
struct scsi_init_func_queues *p_queue = NULL;
struct nvmetcp_spe_func_init *p_init = NULL;
struct qed_sp_init_data init_data = {};
struct qed_spq_entry *p_ent = NULL;
int rc = 0;
u16 val;
u8 i;
/* Get SPQ entry */
init_data.cid = qed_spq_get_cid(p_hwfn);
init_data.opaque_fid = p_hwfn->hw_info.opaque_fid;
init_data.comp_mode = comp_mode;
init_data.p_comp_data = p_comp_addr;
rc = qed_sp_init_request(p_hwfn, &p_ent,
NVMETCP_RAMROD_CMD_ID_INIT_FUNC,
PROTOCOLID_TCP_ULP, &init_data);
if (rc)
return rc;
p_ramrod = &p_ent->ramrod.nvmetcp_init;
p_init = &p_ramrod->nvmetcp_init_spe;
p_params = &p_hwfn->pf_params.nvmetcp_pf_params;
p_queue = &p_init->q_params;
p_init->num_sq_pages_in_ring = p_params->num_sq_pages_in_ring;
p_init->num_r2tq_pages_in_ring = p_params->num_r2tq_pages_in_ring;
p_init->num_uhq_pages_in_ring = p_params->num_uhq_pages_in_ring;
p_init->ll2_rx_queue_id = RESC_START(p_hwfn, QED_LL2_RAM_QUEUE) +
p_params->ll2_ooo_queue_id;
SET_FIELD(p_init->flags, NVMETCP_SPE_FUNC_INIT_NVMETCP_MODE, 1);
p_init->func_params.log_page_size = ilog2(PAGE_SIZE);
p_init->func_params.num_tasks = cpu_to_le16(p_params->num_tasks);
p_init->debug_flags = p_params->debug_mode;
DMA_REGPAIR_LE(p_queue->glbl_q_params_addr,
p_params->glbl_q_params_addr);
p_queue->cq_num_entries = cpu_to_le16(QED_NVMETCP_FW_CQ_SIZE);
p_queue->num_queues = p_params->num_queues;
val = RESC_START(p_hwfn, QED_CMDQS_CQS);
p_queue->queue_relative_offset = cpu_to_le16((u16)val);
p_queue->cq_sb_pi = p_params->gl_rq_pi;
for (i = 0; i < p_params->num_queues; i++) {
val = qed_get_igu_sb_id(p_hwfn, i);
p_queue->cq_cmdq_sb_num_arr[i] = cpu_to_le16(val);
}
SET_FIELD(p_queue->q_validity,
SCSI_INIT_FUNC_QUEUES_CMD_VALID, 0);
p_queue->cmdq_num_entries = 0;
p_queue->bdq_resource_id = (u8)RESC_START(p_hwfn, QED_BDQ);
p_ramrod->tcp_init.two_msl_timer = cpu_to_le32(QED_TCP_TWO_MSL_TIMER);
p_ramrod->tcp_init.tx_sws_timer = cpu_to_le16(QED_TCP_SWS_TIMER);
p_init->half_way_close_timeout = cpu_to_le16(QED_TCP_HALF_WAY_CLOSE_TIMEOUT);
p_ramrod->tcp_init.max_fin_rt = QED_TCP_MAX_FIN_RT;
SET_FIELD(p_ramrod->nvmetcp_init_spe.params,
NVMETCP_SPE_FUNC_INIT_MAX_SYN_RT, QED_TCP_MAX_FIN_RT);
p_hwfn->p_nvmetcp_info->event_context = event_context;
p_hwfn->p_nvmetcp_info->event_cb = async_event_cb;
qed_spq_register_async_cb(p_hwfn, PROTOCOLID_TCP_ULP,
qed_nvmetcp_async_event);
return qed_spq_post(p_hwfn, p_ent, NULL);
}
static int qed_sp_nvmetcp_func_stop(struct qed_hwfn *p_hwfn,
enum spq_mode comp_mode,
struct qed_spq_comp_cb *p_comp_addr)
{
struct qed_spq_entry *p_ent = NULL;
struct qed_sp_init_data init_data;
int rc;
/* Get SPQ entry */
memset(&init_data, 0, sizeof(init_data));
init_data.cid = qed_spq_get_cid(p_hwfn);
init_data.opaque_fid = p_hwfn->hw_info.opaque_fid;
init_data.comp_mode = comp_mode;
init_data.p_comp_data = p_comp_addr;
rc = qed_sp_init_request(p_hwfn, &p_ent,
NVMETCP_RAMROD_CMD_ID_DESTROY_FUNC,
PROTOCOLID_TCP_ULP, &init_data);
if (rc)
return rc;
rc = qed_spq_post(p_hwfn, p_ent, NULL);
qed_spq_unregister_async_cb(p_hwfn, PROTOCOLID_TCP_ULP);
return rc;
}
static int qed_fill_nvmetcp_dev_info(struct qed_dev *cdev,
struct qed_dev_nvmetcp_info *info)
{
struct qed_hwfn *hwfn = QED_AFFIN_HWFN(cdev);
int rc;
memset(info, 0, sizeof(*info));
rc = qed_fill_dev_info(cdev, &info->common);
info->port_id = MFW_PORT(hwfn);
info->num_cqs = FEAT_NUM(hwfn, QED_NVMETCP_CQ);
return rc;
}
static void qed_register_nvmetcp_ops(struct qed_dev *cdev,
struct qed_nvmetcp_cb_ops *ops,
void *cookie)
{
cdev->protocol_ops.nvmetcp = ops;
cdev->ops_cookie = cookie;
}
static int qed_nvmetcp_stop(struct qed_dev *cdev)
{
int rc;
if (!(cdev->flags & QED_FLAG_STORAGE_STARTED)) {
DP_NOTICE(cdev, "nvmetcp already stopped\n");
return 0;
}
if (!hash_empty(cdev->connections)) {
DP_NOTICE(cdev,
"Can't stop nvmetcp - not all connections were returned\n");
return -EINVAL;
}
/* Stop the nvmetcp */
rc = qed_sp_nvmetcp_func_stop(QED_AFFIN_HWFN(cdev), QED_SPQ_MODE_EBLOCK,
NULL);
cdev->flags &= ~QED_FLAG_STORAGE_STARTED;
return rc;
}
static int qed_nvmetcp_start(struct qed_dev *cdev,
struct qed_nvmetcp_tid *tasks,
void *event_context,
nvmetcp_event_cb_t async_event_cb)
{
struct qed_tid_mem *tid_info;
int rc;
if (cdev->flags & QED_FLAG_STORAGE_STARTED) {
		DP_NOTICE(cdev, "nvmetcp already started\n");
return 0;
}
rc = qed_sp_nvmetcp_func_start(QED_AFFIN_HWFN(cdev),
QED_SPQ_MODE_EBLOCK, NULL,
event_context, async_event_cb);
if (rc) {
DP_NOTICE(cdev, "Failed to start nvmetcp\n");
return rc;
}
cdev->flags |= QED_FLAG_STORAGE_STARTED;
hash_init(cdev->connections);
if (!tasks)
return 0;
tid_info = kzalloc(sizeof(*tid_info), GFP_KERNEL);
if (!tid_info) {
qed_nvmetcp_stop(cdev);
return -ENOMEM;
}
rc = qed_cxt_get_tid_mem_info(QED_AFFIN_HWFN(cdev), tid_info);
if (rc) {
DP_NOTICE(cdev, "Failed to gather task information\n");
qed_nvmetcp_stop(cdev);
kfree(tid_info);
return rc;
}
/* Fill task information */
tasks->size = tid_info->tid_size;
tasks->num_tids_per_block = tid_info->num_tids_per_block;
memcpy(tasks->blocks, tid_info->blocks,
MAX_TID_BLOCKS_NVMETCP * sizeof(u8 *));
kfree(tid_info);
return 0;
}
static struct qed_hash_nvmetcp_con *qed_nvmetcp_get_hash(struct qed_dev *cdev,
u32 handle)
{
struct qed_hash_nvmetcp_con *hash_con = NULL;
if (!(cdev->flags & QED_FLAG_STORAGE_STARTED))
return NULL;
hash_for_each_possible(cdev->connections, hash_con, node, handle) {
if (hash_con->con->icid == handle)
break;
}
if (!hash_con || hash_con->con->icid != handle)
return NULL;
return hash_con;
}
static int qed_sp_nvmetcp_conn_offload(struct qed_hwfn *p_hwfn,
struct qed_nvmetcp_conn *p_conn,
enum spq_mode comp_mode,
struct qed_spq_comp_cb *p_comp_addr)
{
struct nvmetcp_spe_conn_offload *p_ramrod = NULL;
struct tcp_offload_params_opt2 *p_tcp = NULL;
struct qed_sp_init_data init_data = { 0 };
struct qed_spq_entry *p_ent = NULL;
dma_addr_t r2tq_pbl_addr;
dma_addr_t xhq_pbl_addr;
dma_addr_t uhq_pbl_addr;
u16 physical_q;
int rc = 0;
u8 i;
/* Get SPQ entry */
init_data.cid = p_conn->icid;
init_data.opaque_fid = p_hwfn->hw_info.opaque_fid;
init_data.comp_mode = comp_mode;
init_data.p_comp_data = p_comp_addr;
rc = qed_sp_init_request(p_hwfn, &p_ent,
NVMETCP_RAMROD_CMD_ID_OFFLOAD_CONN,
PROTOCOLID_TCP_ULP, &init_data);
if (rc)
return rc;
p_ramrod = &p_ent->ramrod.nvmetcp_conn_offload;
/* Transmission PQ is the first of the PF */
physical_q = qed_get_cm_pq_idx(p_hwfn, PQ_FLAGS_OFLD);
p_conn->physical_q0 = cpu_to_le16(physical_q);
p_ramrod->nvmetcp.physical_q0 = cpu_to_le16(physical_q);
/* nvmetcp Pure-ACK PQ */
physical_q = qed_get_cm_pq_idx(p_hwfn, PQ_FLAGS_ACK);
p_conn->physical_q1 = cpu_to_le16(physical_q);
p_ramrod->nvmetcp.physical_q1 = cpu_to_le16(physical_q);
p_ramrod->conn_id = cpu_to_le16(p_conn->conn_id);
DMA_REGPAIR_LE(p_ramrod->nvmetcp.sq_pbl_addr, p_conn->sq_pbl_addr);
r2tq_pbl_addr = qed_chain_get_pbl_phys(&p_conn->r2tq);
DMA_REGPAIR_LE(p_ramrod->nvmetcp.r2tq_pbl_addr, r2tq_pbl_addr);
xhq_pbl_addr = qed_chain_get_pbl_phys(&p_conn->xhq);
DMA_REGPAIR_LE(p_ramrod->nvmetcp.xhq_pbl_addr, xhq_pbl_addr);
uhq_pbl_addr = qed_chain_get_pbl_phys(&p_conn->uhq);
DMA_REGPAIR_LE(p_ramrod->nvmetcp.uhq_pbl_addr, uhq_pbl_addr);
p_ramrod->nvmetcp.flags = p_conn->offl_flags;
p_ramrod->nvmetcp.default_cq = p_conn->default_cq;
p_ramrod->nvmetcp.initial_ack = 0;
DMA_REGPAIR_LE(p_ramrod->nvmetcp.nvmetcp.cccid_itid_table_addr,
p_conn->nvmetcp_cccid_itid_table_addr);
p_ramrod->nvmetcp.nvmetcp.cccid_max_range =
cpu_to_le16(p_conn->nvmetcp_cccid_max_range);
p_tcp = &p_ramrod->tcp;
qed_set_fw_mac_addr(&p_tcp->remote_mac_addr_hi,
&p_tcp->remote_mac_addr_mid,
&p_tcp->remote_mac_addr_lo, p_conn->remote_mac);
qed_set_fw_mac_addr(&p_tcp->local_mac_addr_hi,
&p_tcp->local_mac_addr_mid,
&p_tcp->local_mac_addr_lo, p_conn->local_mac);
p_tcp->vlan_id = cpu_to_le16(p_conn->vlan_id);
p_tcp->flags = cpu_to_le16(p_conn->tcp_flags);
p_tcp->ip_version = p_conn->ip_version;
if (p_tcp->ip_version == TCP_IPV6) {
for (i = 0; i < 4; i++) {
p_tcp->remote_ip[i] = cpu_to_le32(p_conn->remote_ip[i]);
p_tcp->local_ip[i] = cpu_to_le32(p_conn->local_ip[i]);
}
} else {
p_tcp->remote_ip[0] = cpu_to_le32(p_conn->remote_ip[0]);
p_tcp->local_ip[0] = cpu_to_le32(p_conn->local_ip[0]);
}
p_tcp->flow_label = cpu_to_le32(p_conn->flow_label);
p_tcp->ttl = p_conn->ttl;
p_tcp->tos_or_tc = p_conn->tos_or_tc;
p_tcp->remote_port = cpu_to_le16(p_conn->remote_port);
p_tcp->local_port = cpu_to_le16(p_conn->local_port);
p_tcp->mss = cpu_to_le16(p_conn->mss);
p_tcp->rcv_wnd_scale = p_conn->rcv_wnd_scale;
p_tcp->connect_mode = p_conn->connect_mode;
p_tcp->cwnd = cpu_to_le32(p_conn->cwnd);
p_tcp->ka_max_probe_cnt = p_conn->ka_max_probe_cnt;
p_tcp->ka_timeout = cpu_to_le32(p_conn->ka_timeout);
p_tcp->max_rt_time = cpu_to_le32(p_conn->max_rt_time);
p_tcp->ka_interval = cpu_to_le32(p_conn->ka_interval);
return qed_spq_post(p_hwfn, p_ent, NULL);
}
static int qed_sp_nvmetcp_conn_update(struct qed_hwfn *p_hwfn,
struct qed_nvmetcp_conn *p_conn,
enum spq_mode comp_mode,
struct qed_spq_comp_cb *p_comp_addr)
{
struct nvmetcp_conn_update_ramrod_params *p_ramrod = NULL;
struct qed_spq_entry *p_ent = NULL;
struct qed_sp_init_data init_data;
int rc = -EINVAL;
u32 dval;
/* Get SPQ entry */
memset(&init_data, 0, sizeof(init_data));
init_data.cid = p_conn->icid;
init_data.opaque_fid = p_hwfn->hw_info.opaque_fid;
init_data.comp_mode = comp_mode;
init_data.p_comp_data = p_comp_addr;
rc = qed_sp_init_request(p_hwfn, &p_ent,
NVMETCP_RAMROD_CMD_ID_UPDATE_CONN,
PROTOCOLID_TCP_ULP, &init_data);
if (rc)
return rc;
p_ramrod = &p_ent->ramrod.nvmetcp_conn_update;
p_ramrod->conn_id = cpu_to_le16(p_conn->conn_id);
p_ramrod->flags = p_conn->update_flag;
p_ramrod->max_seq_size = cpu_to_le32(p_conn->max_seq_size);
dval = p_conn->max_recv_pdu_length;
p_ramrod->max_recv_pdu_length = cpu_to_le32(dval);
dval = p_conn->max_send_pdu_length;
p_ramrod->max_send_pdu_length = cpu_to_le32(dval);
p_ramrod->first_seq_length = cpu_to_le32(p_conn->first_seq_length);
return qed_spq_post(p_hwfn, p_ent, NULL);
}
static int qed_sp_nvmetcp_conn_terminate(struct qed_hwfn *p_hwfn,
struct qed_nvmetcp_conn *p_conn,
enum spq_mode comp_mode,
struct qed_spq_comp_cb *p_comp_addr)
{
struct nvmetcp_spe_conn_termination *p_ramrod = NULL;
struct qed_spq_entry *p_ent = NULL;
struct qed_sp_init_data init_data;
int rc = -EINVAL;
/* Get SPQ entry */
memset(&init_data, 0, sizeof(init_data));
init_data.cid = p_conn->icid;
init_data.opaque_fid = p_hwfn->hw_info.opaque_fid;
init_data.comp_mode = comp_mode;
init_data.p_comp_data = p_comp_addr;
rc = qed_sp_init_request(p_hwfn, &p_ent,
NVMETCP_RAMROD_CMD_ID_TERMINATION_CONN,
PROTOCOLID_TCP_ULP, &init_data);
if (rc)
return rc;
p_ramrod = &p_ent->ramrod.nvmetcp_conn_terminate;
p_ramrod->conn_id = cpu_to_le16(p_conn->conn_id);
p_ramrod->abortive = p_conn->abortive_dsconnect;
return qed_spq_post(p_hwfn, p_ent, NULL);
}
static int qed_sp_nvmetcp_conn_clear_sq(struct qed_hwfn *p_hwfn,
struct qed_nvmetcp_conn *p_conn,
enum spq_mode comp_mode,
struct qed_spq_comp_cb *p_comp_addr)
{
struct qed_spq_entry *p_ent = NULL;
struct qed_sp_init_data init_data;
int rc = -EINVAL;
/* Get SPQ entry */
memset(&init_data, 0, sizeof(init_data));
init_data.cid = p_conn->icid;
init_data.opaque_fid = p_hwfn->hw_info.opaque_fid;
init_data.comp_mode = comp_mode;
init_data.p_comp_data = p_comp_addr;
rc = qed_sp_init_request(p_hwfn, &p_ent,
NVMETCP_RAMROD_CMD_ID_CLEAR_SQ,
PROTOCOLID_TCP_ULP, &init_data);
if (rc)
return rc;
return qed_spq_post(p_hwfn, p_ent, NULL);
}
static void __iomem *qed_nvmetcp_get_db_addr(struct qed_hwfn *p_hwfn, u32 cid)
{
return (u8 __iomem *)p_hwfn->doorbells +
qed_db_addr(cid, DQ_DEMS_LEGACY);
}
static int qed_nvmetcp_allocate_connection(struct qed_hwfn *p_hwfn,
struct qed_nvmetcp_conn **p_out_conn)
{
struct qed_chain_init_params params = {
.mode = QED_CHAIN_MODE_PBL,
.intended_use = QED_CHAIN_USE_TO_CONSUME_PRODUCE,
.cnt_type = QED_CHAIN_CNT_TYPE_U16,
};
struct qed_nvmetcp_pf_params *p_params = NULL;
struct qed_nvmetcp_conn *p_conn = NULL;
int rc = 0;
/* Try finding a free connection that can be used */
spin_lock_bh(&p_hwfn->p_nvmetcp_info->lock);
if (!list_empty(&p_hwfn->p_nvmetcp_info->free_list))
p_conn = list_first_entry(&p_hwfn->p_nvmetcp_info->free_list,
struct qed_nvmetcp_conn, list_entry);
if (p_conn) {
list_del(&p_conn->list_entry);
spin_unlock_bh(&p_hwfn->p_nvmetcp_info->lock);
*p_out_conn = p_conn;
return 0;
}
spin_unlock_bh(&p_hwfn->p_nvmetcp_info->lock);
/* Need to allocate a new connection */
p_params = &p_hwfn->pf_params.nvmetcp_pf_params;
p_conn = kzalloc(sizeof(*p_conn), GFP_KERNEL);
if (!p_conn)
return -ENOMEM;
params.num_elems = p_params->num_r2tq_pages_in_ring *
QED_CHAIN_PAGE_SIZE / sizeof(struct nvmetcp_wqe);
params.elem_size = sizeof(struct nvmetcp_wqe);
rc = qed_chain_alloc(p_hwfn->cdev, &p_conn->r2tq, &params);
if (rc)
goto nomem_r2tq;
params.num_elems = p_params->num_uhq_pages_in_ring *
QED_CHAIN_PAGE_SIZE / sizeof(struct iscsi_uhqe);
params.elem_size = sizeof(struct iscsi_uhqe);
rc = qed_chain_alloc(p_hwfn->cdev, &p_conn->uhq, &params);
if (rc)
goto nomem_uhq;
params.elem_size = sizeof(struct iscsi_xhqe);
rc = qed_chain_alloc(p_hwfn->cdev, &p_conn->xhq, &params);
if (rc)
goto nomem;
p_conn->free_on_delete = true;
*p_out_conn = p_conn;
return 0;
nomem:
qed_chain_free(p_hwfn->cdev, &p_conn->uhq);
nomem_uhq:
qed_chain_free(p_hwfn->cdev, &p_conn->r2tq);
nomem_r2tq:
kfree(p_conn);
return -ENOMEM;
}
static int qed_nvmetcp_acquire_connection(struct qed_hwfn *p_hwfn,
struct qed_nvmetcp_conn **p_out_conn)
{
struct qed_nvmetcp_conn *p_conn = NULL;
int rc = 0;
u32 icid;
spin_lock_bh(&p_hwfn->p_nvmetcp_info->lock);
rc = qed_cxt_acquire_cid(p_hwfn, PROTOCOLID_TCP_ULP, &icid);
spin_unlock_bh(&p_hwfn->p_nvmetcp_info->lock);
if (rc)
return rc;
rc = qed_nvmetcp_allocate_connection(p_hwfn, &p_conn);
if (rc) {
spin_lock_bh(&p_hwfn->p_nvmetcp_info->lock);
qed_cxt_release_cid(p_hwfn, icid);
spin_unlock_bh(&p_hwfn->p_nvmetcp_info->lock);
return rc;
}
p_conn->icid = icid;
p_conn->conn_id = (u16)icid;
p_conn->fw_cid = (p_hwfn->hw_info.opaque_fid << 16) | icid;
*p_out_conn = p_conn;
return rc;
}
static void qed_nvmetcp_release_connection(struct qed_hwfn *p_hwfn,
struct qed_nvmetcp_conn *p_conn)
{
spin_lock_bh(&p_hwfn->p_nvmetcp_info->lock);
list_add_tail(&p_conn->list_entry, &p_hwfn->p_nvmetcp_info->free_list);
qed_cxt_release_cid(p_hwfn, p_conn->icid);
spin_unlock_bh(&p_hwfn->p_nvmetcp_info->lock);
}
static void qed_nvmetcp_free_connection(struct qed_hwfn *p_hwfn,
struct qed_nvmetcp_conn *p_conn)
{
qed_chain_free(p_hwfn->cdev, &p_conn->xhq);
qed_chain_free(p_hwfn->cdev, &p_conn->uhq);
qed_chain_free(p_hwfn->cdev, &p_conn->r2tq);
kfree(p_conn);
}
int qed_nvmetcp_alloc(struct qed_hwfn *p_hwfn)
{
struct qed_nvmetcp_info *p_nvmetcp_info;
p_nvmetcp_info = kzalloc(sizeof(*p_nvmetcp_info), GFP_KERNEL);
if (!p_nvmetcp_info)
return -ENOMEM;
INIT_LIST_HEAD(&p_nvmetcp_info->free_list);
p_hwfn->p_nvmetcp_info = p_nvmetcp_info;
return 0;
}
void qed_nvmetcp_setup(struct qed_hwfn *p_hwfn)
{
spin_lock_init(&p_hwfn->p_nvmetcp_info->lock);
}
void qed_nvmetcp_free(struct qed_hwfn *p_hwfn)
{
struct qed_nvmetcp_conn *p_conn = NULL;
if (!p_hwfn->p_nvmetcp_info)
return;
while (!list_empty(&p_hwfn->p_nvmetcp_info->free_list)) {
p_conn = list_first_entry(&p_hwfn->p_nvmetcp_info->free_list,
struct qed_nvmetcp_conn, list_entry);
if (p_conn) {
list_del(&p_conn->list_entry);
qed_nvmetcp_free_connection(p_hwfn, p_conn);
}
}
kfree(p_hwfn->p_nvmetcp_info);
p_hwfn->p_nvmetcp_info = NULL;
}
static int qed_nvmetcp_acquire_conn(struct qed_dev *cdev,
u32 *handle,
u32 *fw_cid, void __iomem **p_doorbell)
{
struct qed_hash_nvmetcp_con *hash_con;
int rc;
/* Allocate a hashed connection */
hash_con = kzalloc(sizeof(*hash_con), GFP_ATOMIC);
if (!hash_con)
return -ENOMEM;
/* Acquire the connection */
rc = qed_nvmetcp_acquire_connection(QED_AFFIN_HWFN(cdev),
&hash_con->con);
if (rc) {
DP_NOTICE(cdev, "Failed to acquire Connection\n");
kfree(hash_con);
return rc;
}
	/* Add the connection to the hash table */
*handle = hash_con->con->icid;
*fw_cid = hash_con->con->fw_cid;
hash_add(cdev->connections, &hash_con->node, *handle);
if (p_doorbell)
*p_doorbell = qed_nvmetcp_get_db_addr(QED_AFFIN_HWFN(cdev),
*handle);
return 0;
}
static int qed_nvmetcp_release_conn(struct qed_dev *cdev, u32 handle)
{
struct qed_hash_nvmetcp_con *hash_con;
hash_con = qed_nvmetcp_get_hash(cdev, handle);
if (!hash_con) {
DP_NOTICE(cdev, "Failed to find connection for handle %d\n",
handle);
return -EINVAL;
}
hlist_del(&hash_con->node);
qed_nvmetcp_release_connection(QED_AFFIN_HWFN(cdev), hash_con->con);
kfree(hash_con);
return 0;
}
static int qed_nvmetcp_offload_conn(struct qed_dev *cdev, u32 handle,
struct qed_nvmetcp_params_offload *conn_info)
{
struct qed_hash_nvmetcp_con *hash_con;
struct qed_nvmetcp_conn *con;
hash_con = qed_nvmetcp_get_hash(cdev, handle);
if (!hash_con) {
DP_NOTICE(cdev, "Failed to find connection for handle %d\n",
handle);
return -EINVAL;
}
/* Update the connection with information from the params */
con = hash_con->con;
/* FW initializations */
con->layer_code = NVMETCP_SLOW_PATH_LAYER_CODE;
con->sq_pbl_addr = conn_info->sq_pbl_addr;
con->nvmetcp_cccid_max_range = conn_info->nvmetcp_cccid_max_range;
con->nvmetcp_cccid_itid_table_addr = conn_info->nvmetcp_cccid_itid_table_addr;
con->default_cq = conn_info->default_cq;
SET_FIELD(con->offl_flags, NVMETCP_CONN_OFFLOAD_PARAMS_TARGET_MODE, 0);
SET_FIELD(con->offl_flags, NVMETCP_CONN_OFFLOAD_PARAMS_NVMETCP_MODE, 1);
SET_FIELD(con->offl_flags, NVMETCP_CONN_OFFLOAD_PARAMS_TCP_ON_CHIP_1B, 1);
/* Networking and TCP stack initializations */
ether_addr_copy(con->local_mac, conn_info->src.mac);
ether_addr_copy(con->remote_mac, conn_info->dst.mac);
memcpy(con->local_ip, conn_info->src.ip, sizeof(con->local_ip));
memcpy(con->remote_ip, conn_info->dst.ip, sizeof(con->remote_ip));
con->local_port = conn_info->src.port;
con->remote_port = conn_info->dst.port;
con->vlan_id = conn_info->vlan_id;
if (conn_info->timestamp_en)
SET_FIELD(con->tcp_flags, TCP_OFFLOAD_PARAMS_OPT2_TS_EN, 1);
if (conn_info->delayed_ack_en)
SET_FIELD(con->tcp_flags, TCP_OFFLOAD_PARAMS_OPT2_DA_EN, 1);
if (conn_info->tcp_keep_alive_en)
SET_FIELD(con->tcp_flags, TCP_OFFLOAD_PARAMS_OPT2_KA_EN, 1);
if (conn_info->ecn_en)
SET_FIELD(con->tcp_flags, TCP_OFFLOAD_PARAMS_OPT2_ECN_EN, 1);
con->ip_version = conn_info->ip_version;
con->flow_label = QED_TCP_FLOW_LABEL;
con->ka_max_probe_cnt = conn_info->ka_max_probe_cnt;
con->ka_timeout = conn_info->ka_timeout;
con->ka_interval = conn_info->ka_interval;
con->max_rt_time = conn_info->max_rt_time;
con->ttl = conn_info->ttl;
con->tos_or_tc = conn_info->tos_or_tc;
con->mss = conn_info->mss;
con->cwnd = conn_info->cwnd;
con->rcv_wnd_scale = conn_info->rcv_wnd_scale;
con->connect_mode = 0;
return qed_sp_nvmetcp_conn_offload(QED_AFFIN_HWFN(cdev), con,
QED_SPQ_MODE_EBLOCK, NULL);
}
static int qed_nvmetcp_update_conn(struct qed_dev *cdev,
u32 handle,
struct qed_nvmetcp_params_update *conn_info)
{
struct qed_hash_nvmetcp_con *hash_con;
struct qed_nvmetcp_conn *con;
hash_con = qed_nvmetcp_get_hash(cdev, handle);
if (!hash_con) {
DP_NOTICE(cdev, "Failed to find connection for handle %d\n",
handle);
return -EINVAL;
}
/* Update the connection with information from the params */
con = hash_con->con;
SET_FIELD(con->update_flag,
ISCSI_CONN_UPDATE_RAMROD_PARAMS_INITIAL_R2T, 0);
SET_FIELD(con->update_flag,
ISCSI_CONN_UPDATE_RAMROD_PARAMS_IMMEDIATE_DATA, 1);
if (conn_info->hdr_digest_en)
SET_FIELD(con->update_flag, ISCSI_CONN_UPDATE_RAMROD_PARAMS_HD_EN, 1);
if (conn_info->data_digest_en)
SET_FIELD(con->update_flag, ISCSI_CONN_UPDATE_RAMROD_PARAMS_DD_EN, 1);
/* Placeholder - initialize pfv, cpda, hpda */
con->max_seq_size = conn_info->max_io_size;
con->max_recv_pdu_length = conn_info->max_recv_pdu_length;
con->max_send_pdu_length = conn_info->max_send_pdu_length;
con->first_seq_length = conn_info->max_io_size;
return qed_sp_nvmetcp_conn_update(QED_AFFIN_HWFN(cdev), con,
QED_SPQ_MODE_EBLOCK, NULL);
}
static int qed_nvmetcp_clear_conn_sq(struct qed_dev *cdev, u32 handle)
{
struct qed_hash_nvmetcp_con *hash_con;
hash_con = qed_nvmetcp_get_hash(cdev, handle);
if (!hash_con) {
DP_NOTICE(cdev, "Failed to find connection for handle %d\n",
handle);
return -EINVAL;
}
return qed_sp_nvmetcp_conn_clear_sq(QED_AFFIN_HWFN(cdev), hash_con->con,
QED_SPQ_MODE_EBLOCK, NULL);
}
static int qed_nvmetcp_destroy_conn(struct qed_dev *cdev,
u32 handle, u8 abrt_conn)
{
struct qed_hash_nvmetcp_con *hash_con;
hash_con = qed_nvmetcp_get_hash(cdev, handle);
if (!hash_con) {
DP_NOTICE(cdev, "Failed to find connection for handle %d\n",
handle);
return -EINVAL;
}
hash_con->con->abortive_dsconnect = abrt_conn;
return qed_sp_nvmetcp_conn_terminate(QED_AFFIN_HWFN(cdev), hash_con->con,
QED_SPQ_MODE_EBLOCK, NULL);
}
static const struct qed_nvmetcp_ops qed_nvmetcp_ops_pass = {
.common = &qed_common_ops_pass,
.ll2 = &qed_ll2_ops_pass,
.fill_dev_info = &qed_fill_nvmetcp_dev_info,
.register_ops = &qed_register_nvmetcp_ops,
.start = &qed_nvmetcp_start,
.stop = &qed_nvmetcp_stop,
.acquire_conn = &qed_nvmetcp_acquire_conn,
.release_conn = &qed_nvmetcp_release_conn,
.offload_conn = &qed_nvmetcp_offload_conn,
.update_conn = &qed_nvmetcp_update_conn,
.destroy_conn = &qed_nvmetcp_destroy_conn,
.clear_sq = &qed_nvmetcp_clear_conn_sq,
.add_src_tcp_port_filter = &qed_llh_add_src_tcp_port_filter,
.remove_src_tcp_port_filter = &qed_llh_remove_src_tcp_port_filter,
.add_dst_tcp_port_filter = &qed_llh_add_dst_tcp_port_filter,
.remove_dst_tcp_port_filter = &qed_llh_remove_dst_tcp_port_filter,
.clear_all_filters = &qed_llh_clear_all_filters,
.init_read_io = &init_nvmetcp_host_read_task,
.init_write_io = &init_nvmetcp_host_write_task,
.init_icreq_exchange = &init_nvmetcp_init_conn_req_task,
.init_task_cleanup = &init_cleanup_task_nvmetcp
};
const struct qed_nvmetcp_ops *qed_get_nvmetcp_ops(void)
{
return &qed_nvmetcp_ops_pass;
}
EXPORT_SYMBOL(qed_get_nvmetcp_ops);
void qed_put_nvmetcp_ops(void)
{
}
EXPORT_SYMBOL(qed_put_nvmetcp_ops);
/* SPDX-License-Identifier: (GPL-2.0-only OR BSD-3-Clause) */
/* Copyright 2021 Marvell. All rights reserved. */
#ifndef _QED_NVMETCP_H
#define _QED_NVMETCP_H
#include <linux/types.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/qed/tcp_common.h>
#include <linux/qed/qed_nvmetcp_if.h>
#include <linux/qed/qed_chain.h>
#include "qed.h"
#include "qed_hsi.h"
#include "qed_mcp.h"
#include "qed_sp.h"
#define QED_NVMETCP_FW_CQ_SIZE (4 * 1024)
/* tcp parameters */
#define QED_TCP_FLOW_LABEL 0
#define QED_TCP_TWO_MSL_TIMER 4000
#define QED_TCP_HALF_WAY_CLOSE_TIMEOUT 10
#define QED_TCP_MAX_FIN_RT 2
#define QED_TCP_SWS_TIMER 5000
struct qed_nvmetcp_info {
spinlock_t lock; /* Connection resources. */
struct list_head free_list;
u16 max_num_outstanding_tasks;
void *event_context;
nvmetcp_event_cb_t event_cb;
};
struct qed_hash_nvmetcp_con {
struct hlist_node node;
struct qed_nvmetcp_conn *con;
};
struct qed_nvmetcp_conn {
struct list_head list_entry;
bool free_on_delete;
u16 conn_id;
u32 icid;
u32 fw_cid;
u8 layer_code;
u8 offl_flags;
u8 connect_mode;
dma_addr_t sq_pbl_addr;
struct qed_chain r2tq;
struct qed_chain xhq;
struct qed_chain uhq;
u8 local_mac[6];
u8 remote_mac[6];
u8 ip_version;
u8 ka_max_probe_cnt;
u16 vlan_id;
u16 tcp_flags;
u32 remote_ip[4];
u32 local_ip[4];
u32 flow_label;
u32 ka_timeout;
u32 ka_interval;
u32 max_rt_time;
u8 ttl;
u8 tos_or_tc;
u16 remote_port;
u16 local_port;
u16 mss;
u8 rcv_wnd_scale;
u32 rcv_wnd;
u32 cwnd;
u8 update_flag;
u8 default_cq;
u8 abortive_dsconnect;
u32 max_seq_size;
u32 max_recv_pdu_length;
u32 max_send_pdu_length;
u32 first_seq_length;
u16 physical_q0;
u16 physical_q1;
u16 nvmetcp_cccid_max_range;
dma_addr_t nvmetcp_cccid_itid_table_addr;
};
#if IS_ENABLED(CONFIG_QED_NVMETCP)
int qed_nvmetcp_alloc(struct qed_hwfn *p_hwfn);
void qed_nvmetcp_setup(struct qed_hwfn *p_hwfn);
void qed_nvmetcp_free(struct qed_hwfn *p_hwfn);
#else /* IS_ENABLED(CONFIG_QED_NVMETCP) */
static inline int qed_nvmetcp_alloc(struct qed_hwfn *p_hwfn)
{
return -EINVAL;
}
static inline void qed_nvmetcp_setup(struct qed_hwfn *p_hwfn) {}
static inline void qed_nvmetcp_free(struct qed_hwfn *p_hwfn) {}
#endif /* IS_ENABLED(CONFIG_QED_NVMETCP) */
#endif
// SPDX-License-Identifier: (GPL-2.0-only OR BSD-3-Clause)
/* Copyright 2021 Marvell. All rights reserved. */
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/list.h>
#include <linux/mm.h>
#include <linux/types.h>
#include <asm/byteorder.h>
#include <linux/qed/common_hsi.h>
#include <linux/qed/storage_common.h>
#include <linux/qed/nvmetcp_common.h>
#include <linux/qed/qed_nvmetcp_if.h>
#include "qed_nvmetcp_fw_funcs.h"
#define NVMETCP_NUM_SGES_IN_CACHE 0x4
bool nvmetcp_is_slow_sgl(u16 num_sges, bool small_mid_sge)
{
return (num_sges > SCSI_NUM_SGES_SLOW_SGL_THR && small_mid_sge);
}
void init_scsi_sgl_context(struct scsi_sgl_params *ctx_sgl_params,
struct scsi_cached_sges *ctx_data_desc,
struct storage_sgl_task_params *sgl_params)
{
u8 num_sges_to_init = (u8)(sgl_params->num_sges > NVMETCP_NUM_SGES_IN_CACHE ?
NVMETCP_NUM_SGES_IN_CACHE : sgl_params->num_sges);
u8 sge_index;
/* sgl params */
ctx_sgl_params->sgl_addr.lo = cpu_to_le32(sgl_params->sgl_phys_addr.lo);
ctx_sgl_params->sgl_addr.hi = cpu_to_le32(sgl_params->sgl_phys_addr.hi);
ctx_sgl_params->sgl_total_length = cpu_to_le32(sgl_params->total_buffer_size);
ctx_sgl_params->sgl_num_sges = cpu_to_le16(sgl_params->num_sges);
for (sge_index = 0; sge_index < num_sges_to_init; sge_index++) {
ctx_data_desc->sge[sge_index].sge_addr.lo =
cpu_to_le32(sgl_params->sgl[sge_index].sge_addr.lo);
ctx_data_desc->sge[sge_index].sge_addr.hi =
cpu_to_le32(sgl_params->sgl[sge_index].sge_addr.hi);
ctx_data_desc->sge[sge_index].sge_len =
cpu_to_le32(sgl_params->sgl[sge_index].sge_len);
}
}
static inline u32 calc_rw_task_size(struct nvmetcp_task_params *task_params,
enum nvmetcp_task_type task_type)
{
u32 io_size;
if (task_type == NVMETCP_TASK_TYPE_HOST_WRITE)
io_size = task_params->tx_io_size;
else
io_size = task_params->rx_io_size;
if (unlikely(!io_size))
return 0;
return io_size;
}
static inline void init_sqe(struct nvmetcp_task_params *task_params,
struct storage_sgl_task_params *sgl_task_params,
enum nvmetcp_task_type task_type)
{
if (!task_params->sqe)
return;
memset(task_params->sqe, 0, sizeof(*task_params->sqe));
task_params->sqe->task_id = cpu_to_le16(task_params->itid);
switch (task_type) {
case NVMETCP_TASK_TYPE_HOST_WRITE: {
u32 buf_size = 0;
u32 num_sges = 0;
SET_FIELD(task_params->sqe->contlen_cdbsize,
NVMETCP_WQE_CDB_SIZE_OR_NVMETCP_CMD, 1);
SET_FIELD(task_params->sqe->flags, NVMETCP_WQE_WQE_TYPE,
NVMETCP_WQE_TYPE_NORMAL);
if (task_params->tx_io_size) {
if (task_params->send_write_incapsule)
buf_size = calc_rw_task_size(task_params, task_type);
if (nvmetcp_is_slow_sgl(sgl_task_params->num_sges,
sgl_task_params->small_mid_sge))
num_sges = NVMETCP_WQE_NUM_SGES_SLOWIO;
else
num_sges = min((u16)sgl_task_params->num_sges,
(u16)SCSI_NUM_SGES_SLOW_SGL_THR);
}
SET_FIELD(task_params->sqe->flags, NVMETCP_WQE_NUM_SGES, num_sges);
SET_FIELD(task_params->sqe->contlen_cdbsize, NVMETCP_WQE_CONT_LEN, buf_size);
} break;
case NVMETCP_TASK_TYPE_HOST_READ: {
SET_FIELD(task_params->sqe->flags, NVMETCP_WQE_WQE_TYPE,
NVMETCP_WQE_TYPE_NORMAL);
SET_FIELD(task_params->sqe->contlen_cdbsize,
NVMETCP_WQE_CDB_SIZE_OR_NVMETCP_CMD, 1);
} break;
case NVMETCP_TASK_TYPE_INIT_CONN_REQUEST: {
SET_FIELD(task_params->sqe->flags, NVMETCP_WQE_WQE_TYPE,
NVMETCP_WQE_TYPE_MIDDLE_PATH);
if (task_params->tx_io_size) {
SET_FIELD(task_params->sqe->contlen_cdbsize, NVMETCP_WQE_CONT_LEN,
task_params->tx_io_size);
SET_FIELD(task_params->sqe->flags, NVMETCP_WQE_NUM_SGES,
min((u16)sgl_task_params->num_sges,
(u16)SCSI_NUM_SGES_SLOW_SGL_THR));
}
} break;
case NVMETCP_TASK_TYPE_CLEANUP:
SET_FIELD(task_params->sqe->flags, NVMETCP_WQE_WQE_TYPE,
NVMETCP_WQE_TYPE_TASK_CLEANUP);
break;
default:
break;
}
}
/* The following function initializes the NVMeTCP task params */
static inline void
init_nvmetcp_task_params(struct e5_nvmetcp_task_context *context,
struct nvmetcp_task_params *task_params,
enum nvmetcp_task_type task_type)
{
context->ystorm_st_context.state.cccid = task_params->host_cccid;
SET_FIELD(context->ustorm_st_context.error_flags, USTORM_NVMETCP_TASK_ST_CTX_NVME_TCP, 1);
context->ustorm_st_context.nvme_tcp_opaque_lo = cpu_to_le32(task_params->opq.lo);
context->ustorm_st_context.nvme_tcp_opaque_hi = cpu_to_le32(task_params->opq.hi);
}
/* The following function initializes default values to all tasks */
static inline void
init_default_nvmetcp_task(struct nvmetcp_task_params *task_params,
void *pdu_header, void *nvme_cmd,
enum nvmetcp_task_type task_type)
{
struct e5_nvmetcp_task_context *context = task_params->context;
const u8 val_byte = context->mstorm_ag_context.cdu_validation;
u8 dw_index;
memset(context, 0, sizeof(*context));
init_nvmetcp_task_params(context, task_params,
(enum nvmetcp_task_type)task_type);
/* Swapping requirements used below, will be removed in future FW versions */
if (task_type == NVMETCP_TASK_TYPE_HOST_WRITE ||
task_type == NVMETCP_TASK_TYPE_HOST_READ) {
for (dw_index = 0;
dw_index < QED_NVMETCP_CMN_HDR_SIZE / sizeof(u32);
dw_index++)
context->ystorm_st_context.pdu_hdr.task_hdr.reg[dw_index] =
cpu_to_le32(__swab32(((u32 *)pdu_header)[dw_index]));
for (dw_index = QED_NVMETCP_CMN_HDR_SIZE / sizeof(u32);
dw_index < QED_NVMETCP_CMD_HDR_SIZE / sizeof(u32);
dw_index++)
context->ystorm_st_context.pdu_hdr.task_hdr.reg[dw_index] =
cpu_to_le32(__swab32(((u32 *)nvme_cmd)[dw_index - 2]));
} else {
for (dw_index = 0;
dw_index < QED_NVMETCP_NON_IO_HDR_SIZE / sizeof(u32);
dw_index++)
context->ystorm_st_context.pdu_hdr.task_hdr.reg[dw_index] =
cpu_to_le32(__swab32(((u32 *)pdu_header)[dw_index]));
}
/* M-Storm Context: */
context->mstorm_ag_context.cdu_validation = val_byte;
context->mstorm_st_context.task_type = (u8)(task_type);
context->mstorm_ag_context.task_cid = cpu_to_le16(task_params->conn_icid);
/* Ustorm Context: */
SET_FIELD(context->ustorm_ag_context.flags1, E5_USTORM_NVMETCP_TASK_AG_CTX_R2T2RECV, 1);
context->ustorm_st_context.task_type = (u8)(task_type);
context->ustorm_st_context.cq_rss_number = task_params->cq_rss_number;
context->ustorm_ag_context.icid = cpu_to_le16(task_params->conn_icid);
}
/* The following function initializes the U-Storm Task Contexts */
static inline void
init_ustorm_task_contexts(struct ustorm_nvmetcp_task_st_ctx *ustorm_st_context,
struct e5_ustorm_nvmetcp_task_ag_ctx *ustorm_ag_context,
u32 remaining_recv_len,
u32 expected_data_transfer_len, u8 num_sges,
bool tx_dif_conn_err_en)
{
/* Remaining data to be received in bytes. Used in validations */
ustorm_st_context->rem_rcv_len = cpu_to_le32(remaining_recv_len);
ustorm_ag_context->exp_data_acked = cpu_to_le32(expected_data_transfer_len);
ustorm_st_context->exp_data_transfer_len = cpu_to_le32(expected_data_transfer_len);
SET_FIELD(ustorm_st_context->reg1_map, REG1_NUM_SGES, num_sges);
SET_FIELD(ustorm_ag_context->flags2, E5_USTORM_NVMETCP_TASK_AG_CTX_DIF_ERROR_CF_EN,
tx_dif_conn_err_en ? 1 : 0);
}
/* The following function initializes Local Completion Contexts: */
static inline void
set_local_completion_context(struct e5_nvmetcp_task_context *context)
{
SET_FIELD(context->ystorm_st_context.state.flags,
YSTORM_NVMETCP_TASK_STATE_LOCAL_COMP, 1);
SET_FIELD(context->ustorm_st_context.flags,
USTORM_NVMETCP_TASK_ST_CTX_LOCAL_COMP, 1);
}
/* Common Fastpath task init function: */
static inline void
init_rw_nvmetcp_task(struct nvmetcp_task_params *task_params,
enum nvmetcp_task_type task_type,
void *pdu_header, void *nvme_cmd,
struct storage_sgl_task_params *sgl_task_params)
{
struct e5_nvmetcp_task_context *context = task_params->context;
u32 task_size = calc_rw_task_size(task_params, task_type);
bool slow_io = false;
u8 num_sges = 0;
init_default_nvmetcp_task(task_params, pdu_header, nvme_cmd, task_type);
/* Tx/Rx: */
if (task_params->tx_io_size) {
/* if data to transmit: */
init_scsi_sgl_context(&context->ystorm_st_context.state.sgl_params,
&context->ystorm_st_context.state.data_desc,
sgl_task_params);
slow_io = nvmetcp_is_slow_sgl(sgl_task_params->num_sges,
sgl_task_params->small_mid_sge);
num_sges =
(u8)(!slow_io ? min((u32)sgl_task_params->num_sges,
(u32)SCSI_NUM_SGES_SLOW_SGL_THR) :
NVMETCP_WQE_NUM_SGES_SLOWIO);
if (slow_io) {
SET_FIELD(context->ystorm_st_context.state.flags,
YSTORM_NVMETCP_TASK_STATE_SLOW_IO, 1);
}
} else if (task_params->rx_io_size) {
/* if data to receive: */
init_scsi_sgl_context(&context->mstorm_st_context.sgl_params,
&context->mstorm_st_context.data_desc,
sgl_task_params);
num_sges =
(u8)(!nvmetcp_is_slow_sgl(sgl_task_params->num_sges,
sgl_task_params->small_mid_sge) ?
min((u32)sgl_task_params->num_sges,
(u32)SCSI_NUM_SGES_SLOW_SGL_THR) :
NVMETCP_WQE_NUM_SGES_SLOWIO);
context->mstorm_st_context.rem_task_size = cpu_to_le32(task_size);
}
/* Ustorm context: */
init_ustorm_task_contexts(&context->ustorm_st_context,
&context->ustorm_ag_context,
/* Remaining Receive length is the Task Size */
task_size,
/* The size of the transmitted task */
task_size,
/* num_sges */
num_sges,
false);
/* Set exp_data_acked */
if (task_type == NVMETCP_TASK_TYPE_HOST_WRITE) {
if (task_params->send_write_incapsule)
context->ustorm_ag_context.exp_data_acked = task_size;
else
context->ustorm_ag_context.exp_data_acked = 0;
} else if (task_type == NVMETCP_TASK_TYPE_HOST_READ) {
context->ustorm_ag_context.exp_data_acked = 0;
}
context->ustorm_ag_context.exp_cont_len = 0;
init_sqe(task_params, sgl_task_params, task_type);
}
static void
init_common_initiator_read_task(struct nvmetcp_task_params *task_params,
struct nvme_tcp_cmd_pdu *cmd_pdu_header,
struct nvme_command *nvme_cmd,
struct storage_sgl_task_params *sgl_task_params)
{
init_rw_nvmetcp_task(task_params, NVMETCP_TASK_TYPE_HOST_READ,
cmd_pdu_header, nvme_cmd, sgl_task_params);
}
void init_nvmetcp_host_read_task(struct nvmetcp_task_params *task_params,
struct nvme_tcp_cmd_pdu *cmd_pdu_header,
struct nvme_command *nvme_cmd,
struct storage_sgl_task_params *sgl_task_params)
{
init_common_initiator_read_task(task_params, (void *)cmd_pdu_header,
(void *)nvme_cmd, sgl_task_params);
}
static void
init_common_initiator_write_task(struct nvmetcp_task_params *task_params,
struct nvme_tcp_cmd_pdu *cmd_pdu_header,
struct nvme_command *nvme_cmd,
struct storage_sgl_task_params *sgl_task_params)
{
init_rw_nvmetcp_task(task_params, NVMETCP_TASK_TYPE_HOST_WRITE,
cmd_pdu_header, nvme_cmd, sgl_task_params);
}
void init_nvmetcp_host_write_task(struct nvmetcp_task_params *task_params,
struct nvme_tcp_cmd_pdu *cmd_pdu_header,
struct nvme_command *nvme_cmd,
struct storage_sgl_task_params *sgl_task_params)
{
init_common_initiator_write_task(task_params, (void *)cmd_pdu_header,
(void *)nvme_cmd, sgl_task_params);
}
static void
init_common_login_request_task(struct nvmetcp_task_params *task_params,
void *login_req_pdu_header,
struct storage_sgl_task_params *tx_sgl_task_params,
struct storage_sgl_task_params *rx_sgl_task_params)
{
struct e5_nvmetcp_task_context *context = task_params->context;
init_default_nvmetcp_task(task_params, (void *)login_req_pdu_header, NULL,
NVMETCP_TASK_TYPE_INIT_CONN_REQUEST);
/* Ustorm Context: */
init_ustorm_task_contexts(&context->ustorm_st_context,
&context->ustorm_ag_context,
/* Remaining Receive length is the Task Size */
task_params->rx_io_size ?
rx_sgl_task_params->total_buffer_size : 0,
/* The size of the transmitted task */
task_params->tx_io_size ?
tx_sgl_task_params->total_buffer_size : 0,
0, /* num_sges */
0); /* tx_dif_conn_err_en */
/* SGL context: */
if (task_params->tx_io_size)
init_scsi_sgl_context(&context->ystorm_st_context.state.sgl_params,
&context->ystorm_st_context.state.data_desc,
tx_sgl_task_params);
if (task_params->rx_io_size)
init_scsi_sgl_context(&context->mstorm_st_context.sgl_params,
&context->mstorm_st_context.data_desc,
rx_sgl_task_params);
context->mstorm_st_context.rem_task_size =
cpu_to_le32(task_params->rx_io_size ?
rx_sgl_task_params->total_buffer_size : 0);
init_sqe(task_params, tx_sgl_task_params, NVMETCP_TASK_TYPE_INIT_CONN_REQUEST);
}
/* The following function initializes Login task in Host mode: */
void init_nvmetcp_init_conn_req_task(struct nvmetcp_task_params *task_params,
struct nvme_tcp_icreq_pdu *init_conn_req_pdu_hdr,
struct storage_sgl_task_params *tx_sgl_task_params,
struct storage_sgl_task_params *rx_sgl_task_params)
{
init_common_login_request_task(task_params, init_conn_req_pdu_hdr,
tx_sgl_task_params, rx_sgl_task_params);
}
void init_cleanup_task_nvmetcp(struct nvmetcp_task_params *task_params)
{
init_sqe(task_params, NULL, NVMETCP_TASK_TYPE_CLEANUP);
}
/* SPDX-License-Identifier: (GPL-2.0-only OR BSD-3-Clause) */
/* Copyright 2021 Marvell. All rights reserved. */
#ifndef _QED_NVMETCP_FW_FUNCS_H
#define _QED_NVMETCP_FW_FUNCS_H
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/list.h>
#include <linux/mm.h>
#include <linux/types.h>
#include <asm/byteorder.h>
#include <linux/qed/common_hsi.h>
#include <linux/qed/storage_common.h>
#include <linux/qed/nvmetcp_common.h>
#include <linux/qed/qed_nvmetcp_if.h>
#if IS_ENABLED(CONFIG_QED_NVMETCP)
void init_nvmetcp_host_read_task(struct nvmetcp_task_params *task_params,
struct nvme_tcp_cmd_pdu *cmd_pdu_header,
struct nvme_command *nvme_cmd,
struct storage_sgl_task_params *sgl_task_params);
void init_nvmetcp_host_write_task(struct nvmetcp_task_params *task_params,
struct nvme_tcp_cmd_pdu *cmd_pdu_header,
struct nvme_command *nvme_cmd,
struct storage_sgl_task_params *sgl_task_params);
void init_nvmetcp_init_conn_req_task(struct nvmetcp_task_params *task_params,
struct nvme_tcp_icreq_pdu *init_conn_req_pdu_hdr,
struct storage_sgl_task_params *tx_sgl_task_params,
struct storage_sgl_task_params *rx_sgl_task_params);
void init_cleanup_task_nvmetcp(struct nvmetcp_task_params *task_params);
#else /* IS_ENABLED(CONFIG_QED_NVMETCP) */
#endif /* IS_ENABLED(CONFIG_QED_NVMETCP) */
#endif /* _QED_NVMETCP_FW_FUNCS_H */
// SPDX-License-Identifier: (GPL-2.0-only OR BSD-3-Clause)
/*
* Copyright 2021 Marvell. All rights reserved.
*/
#include <linux/types.h>
#include <asm/byteorder.h>
#include <asm/param.h>
#include <linux/delay.h>
#include <linux/pci.h>
#include <linux/dma-mapping.h>
#include <linux/etherdevice.h>
#include <linux/kernel.h>
#include <linux/stddef.h>
#include <linux/errno.h>
#include <net/tcp.h>
#include <linux/qed/qed_nvmetcp_ip_services_if.h>
#define QED_IP_RESOL_TIMEOUT 4
int qed_route_ipv4(struct sockaddr_storage *local_addr,
struct sockaddr_storage *remote_addr,
struct sockaddr *hardware_address,
struct net_device **ndev)
{
struct neighbour *neigh = NULL;
__be32 *loc_ip, *rem_ip;
struct rtable *rt;
int rc = -ENXIO;
int retry;
loc_ip = &((struct sockaddr_in *)local_addr)->sin_addr.s_addr;
rem_ip = &((struct sockaddr_in *)remote_addr)->sin_addr.s_addr;
*ndev = NULL;
rt = ip_route_output(&init_net, *rem_ip, *loc_ip, 0/*tos*/, 0/*oif*/);
if (IS_ERR(rt)) {
pr_err("lookup route failed\n");
rc = PTR_ERR(rt);
goto return_err;
}
neigh = dst_neigh_lookup(&rt->dst, rem_ip);
if (!neigh) {
rc = -ENOMEM;
ip_rt_put(rt);
goto return_err;
}
*ndev = rt->dst.dev;
ip_rt_put(rt);
/* If not resolved, kick-off state machine towards resolution */
if (!(neigh->nud_state & NUD_VALID))
neigh_event_send(neigh, NULL);
/* query neighbor until resolved or timeout */
retry = QED_IP_RESOL_TIMEOUT;
while (!(neigh->nud_state & NUD_VALID) && retry > 0) {
msleep(1000);
retry--;
}
if (neigh->nud_state & NUD_VALID) {
/* copy resolved MAC address */
neigh_ha_snapshot(hardware_address->sa_data, neigh, *ndev);
hardware_address->sa_family = (*ndev)->type;
rc = 0;
}
neigh_release(neigh);
if (!(*loc_ip)) {
*loc_ip = inet_select_addr(*ndev, *rem_ip, RT_SCOPE_UNIVERSE);
local_addr->ss_family = AF_INET;
}
return_err:
return rc;
}
EXPORT_SYMBOL(qed_route_ipv4);
int qed_route_ipv6(struct sockaddr_storage *local_addr,
struct sockaddr_storage *remote_addr,
struct sockaddr *hardware_address,
struct net_device **ndev)
{
struct neighbour *neigh = NULL;
struct dst_entry *dst;
struct flowi6 fl6;
int rc = -ENXIO;
int retry;
memset(&fl6, 0, sizeof(fl6));
fl6.saddr = ((struct sockaddr_in6 *)local_addr)->sin6_addr;
fl6.daddr = ((struct sockaddr_in6 *)remote_addr)->sin6_addr;
dst = ip6_route_output(&init_net, NULL, &fl6);
if (!dst || dst->error) {
if (dst) {
dst_release(dst);
pr_err("lookup route failed %d\n", dst->error);
}
goto out;
}
neigh = dst_neigh_lookup(dst, &fl6.daddr);
if (neigh) {
*ndev = ip6_dst_idev(dst)->dev;
/* If not resolved, kick-off state machine towards resolution */
if (!(neigh->nud_state & NUD_VALID))
neigh_event_send(neigh, NULL);
/* query neighbor until resolved or timeout */
retry = QED_IP_RESOL_TIMEOUT;
while (!(neigh->nud_state & NUD_VALID) && retry > 0) {
msleep(1000);
retry--;
}
if (neigh->nud_state & NUD_VALID) {
neigh_ha_snapshot((u8 *)hardware_address->sa_data,
neigh, *ndev);
hardware_address->sa_family = (*ndev)->type;
rc = 0;
}
neigh_release(neigh);
if (ipv6_addr_any(&fl6.saddr)) {
if (ipv6_dev_get_saddr(dev_net(*ndev), *ndev,
&fl6.daddr, 0, &fl6.saddr)) {
pr_err("Unable to find source IP address\n");
goto out;
}
local_addr->ss_family = AF_INET6;
((struct sockaddr_in6 *)local_addr)->sin6_addr =
fl6.saddr;
}
}
dst_release(dst);
out:
return rc;
}
EXPORT_SYMBOL(qed_route_ipv6);
void qed_vlan_get_ndev(struct net_device **ndev, u16 *vlan_id)
{
if (is_vlan_dev(*ndev)) {
*vlan_id = vlan_dev_vlan_id(*ndev);
*ndev = vlan_dev_real_dev(*ndev);
}
}
EXPORT_SYMBOL(qed_vlan_get_ndev);
struct pci_dev *qed_validate_ndev(struct net_device *ndev)
{
struct pci_dev *pdev = NULL;
struct net_device *upper;
for_each_pci_dev(pdev) {
if (pdev && pdev->driver &&
!strcmp(pdev->driver->name, "qede")) {
upper = pci_get_drvdata(pdev);
if (upper->ifindex == ndev->ifindex)
return pdev;
}
}
return NULL;
}
EXPORT_SYMBOL(qed_validate_ndev);
__be16 qed_get_in_port(struct sockaddr_storage *sa)
{
return sa->ss_family == AF_INET
? ((struct sockaddr_in *)sa)->sin_port
: ((struct sockaddr_in6 *)sa)->sin6_port;
}
EXPORT_SYMBOL(qed_get_in_port);
int qed_fetch_tcp_port(struct sockaddr_storage local_ip_addr,
struct socket **sock, u16 *port)
{
struct sockaddr_storage sa;
int rc = 0;
rc = sock_create(local_ip_addr.ss_family, SOCK_STREAM, IPPROTO_TCP,
sock);
if (rc) {
pr_warn("failed to create socket: %d\n", rc);
goto err;
}
(*sock)->sk->sk_allocation = GFP_KERNEL;
sk_set_memalloc((*sock)->sk);
rc = kernel_bind(*sock, (struct sockaddr *)&local_ip_addr,
sizeof(local_ip_addr));
if (rc) {
pr_warn("failed to bind socket: %d\n", rc);
goto err_sock;
}
rc = kernel_getsockname(*sock, (struct sockaddr *)&sa);
if (rc < 0) {
pr_warn("getsockname() failed: %d\n", rc);
goto err_sock;
}
*port = ntohs(qed_get_in_port(&sa));
return 0;
err_sock:
sock_release(*sock);
*sock = NULL;
err:
return rc;
}
EXPORT_SYMBOL(qed_fetch_tcp_port);
void qed_return_tcp_port(struct socket *sock)
{
if (sock && sock->sk) {
tcp_set_state(sock->sk, TCP_CLOSE);
sock_release(sock);
}
}
EXPORT_SYMBOL(qed_return_tcp_port);
@@ -16,7 +16,7 @@
 #include "qed_ll2.h"
 #include "qed_ooo.h"
 #include "qed_cxt.h"
+#include "qed_nvmetcp.h"
 static struct qed_ooo_archipelago
 *qed_ooo_seek_archipelago(struct qed_hwfn *p_hwfn,
 struct qed_ooo_info
@@ -83,7 +83,8 @@ int qed_ooo_alloc(struct qed_hwfn *p_hwfn)
 switch (p_hwfn->hw_info.personality) {
 case QED_PCI_ISCSI:
-proto = PROTOCOLID_ISCSI;
+case QED_PCI_NVMETCP:
+proto = PROTOCOLID_TCP_ULP;
 break;
 case QED_PCI_ETH_RDMA:
 case QED_PCI_ETH_IWARP:
...
@@ -100,6 +100,11 @@ union ramrod_data {
 struct iscsi_spe_conn_mac_update iscsi_conn_mac_update;
 struct iscsi_spe_conn_termination iscsi_conn_terminate;
+struct nvmetcp_init_ramrod_params nvmetcp_init;
+struct nvmetcp_spe_conn_offload nvmetcp_conn_offload;
+struct nvmetcp_conn_update_ramrod_params nvmetcp_conn_update;
+struct nvmetcp_spe_conn_termination nvmetcp_conn_terminate;
 struct vf_start_ramrod_data vf_start;
 struct vf_stop_ramrod_data vf_stop;
 };
...
@@ -385,7 +385,8 @@ int qed_sp_pf_start(struct qed_hwfn *p_hwfn,
 p_ramrod->personality = PERSONALITY_FCOE;
 break;
 case QED_PCI_ISCSI:
-p_ramrod->personality = PERSONALITY_ISCSI;
+case QED_PCI_NVMETCP:
+p_ramrod->personality = PERSONALITY_TCP_ULP;
 break;
 case QED_PCI_ETH_ROCE:
 case QED_PCI_ETH_IWARP:
...
@@ -702,7 +702,7 @@ enum mf_mode {
 /* Per-protocol connection types */
 enum protocol_type {
-PROTOCOLID_ISCSI,
+PROTOCOLID_TCP_ULP,
 PROTOCOLID_FCOE,
 PROTOCOLID_ROCE,
 PROTOCOLID_CORE,
...
/* SPDX-License-Identifier: (GPL-2.0-only OR BSD-3-Clause) */
/* Copyright 2021 Marvell. All rights reserved. */
#ifndef __NVMETCP_COMMON__
#define __NVMETCP_COMMON__
#include "tcp_common.h"
#include <linux/nvme-tcp.h>
#define NVMETCP_SLOW_PATH_LAYER_CODE (6)
#define NVMETCP_WQE_NUM_SGES_SLOWIO (0xf)
/* NVMeTCP firmware function init parameters */
struct nvmetcp_spe_func_init {
__le16 half_way_close_timeout;
u8 num_sq_pages_in_ring;
u8 num_r2tq_pages_in_ring;
u8 num_uhq_pages_in_ring;
u8 ll2_rx_queue_id;
u8 flags;
#define NVMETCP_SPE_FUNC_INIT_COUNTERS_EN_MASK 0x1
#define NVMETCP_SPE_FUNC_INIT_COUNTERS_EN_SHIFT 0
#define NVMETCP_SPE_FUNC_INIT_NVMETCP_MODE_MASK 0x1
#define NVMETCP_SPE_FUNC_INIT_NVMETCP_MODE_SHIFT 1
#define NVMETCP_SPE_FUNC_INIT_RESERVED0_MASK 0x3F
#define NVMETCP_SPE_FUNC_INIT_RESERVED0_SHIFT 2
u8 debug_flags;
__le16 reserved1;
u8 params;
#define NVMETCP_SPE_FUNC_INIT_MAX_SYN_RT_MASK 0xF
#define NVMETCP_SPE_FUNC_INIT_MAX_SYN_RT_SHIFT 0
#define NVMETCP_SPE_FUNC_INIT_RESERVED1_MASK 0xF
#define NVMETCP_SPE_FUNC_INIT_RESERVED1_SHIFT 4
u8 reserved2[5];
struct scsi_init_func_params func_params;
struct scsi_init_func_queues q_params;
};
/* NVMeTCP init params passed by driver to FW in NVMeTCP init ramrod. */
struct nvmetcp_init_ramrod_params {
struct nvmetcp_spe_func_init nvmetcp_init_spe;
struct tcp_init_params tcp_init;
};
/* NVMeTCP Ramrod Command IDs */
enum nvmetcp_ramrod_cmd_id {
NVMETCP_RAMROD_CMD_ID_UNUSED = 0,
NVMETCP_RAMROD_CMD_ID_INIT_FUNC = 1,
NVMETCP_RAMROD_CMD_ID_DESTROY_FUNC = 2,
NVMETCP_RAMROD_CMD_ID_OFFLOAD_CONN = 3,
NVMETCP_RAMROD_CMD_ID_UPDATE_CONN = 4,
NVMETCP_RAMROD_CMD_ID_TERMINATION_CONN = 5,
NVMETCP_RAMROD_CMD_ID_CLEAR_SQ = 6,
MAX_NVMETCP_RAMROD_CMD_ID
};
struct nvmetcp_glbl_queue_entry {
struct regpair cq_pbl_addr;
struct regpair reserved;
};
/* NVMeTCP conn level EQEs */
enum nvmetcp_eqe_opcode {
NVMETCP_EVENT_TYPE_INIT_FUNC = 0, /* Response after init Ramrod */
NVMETCP_EVENT_TYPE_DESTROY_FUNC, /* Response after destroy Ramrod */
NVMETCP_EVENT_TYPE_OFFLOAD_CONN,/* Response after option 2 offload Ramrod */
NVMETCP_EVENT_TYPE_UPDATE_CONN, /* Response after update Ramrod */
NVMETCP_EVENT_TYPE_CLEAR_SQ, /* Response after clear sq Ramrod */
NVMETCP_EVENT_TYPE_TERMINATE_CONN, /* Response after termination Ramrod */
NVMETCP_EVENT_TYPE_RESERVED0,
NVMETCP_EVENT_TYPE_RESERVED1,
NVMETCP_EVENT_TYPE_ASYN_CONNECT_COMPLETE, /* Connect completed (A-syn EQE) */
NVMETCP_EVENT_TYPE_ASYN_TERMINATE_DONE, /* Termination completed (A-syn EQE) */
NVMETCP_EVENT_TYPE_START_OF_ERROR_TYPES = 10, /* Separate EQs from err EQs */
NVMETCP_EVENT_TYPE_ASYN_ABORT_RCVD, /* TCP RST packet receive (A-syn EQE) */
NVMETCP_EVENT_TYPE_ASYN_CLOSE_RCVD, /* TCP FIN packet receive (A-syn EQE) */
NVMETCP_EVENT_TYPE_ASYN_SYN_RCVD, /* TCP SYN+ACK packet receive (A-syn EQE) */
NVMETCP_EVENT_TYPE_ASYN_MAX_RT_TIME, /* TCP max retransmit time (A-syn EQE) */
NVMETCP_EVENT_TYPE_ASYN_MAX_RT_CNT, /* TCP max retransmit count (A-syn EQE) */
NVMETCP_EVENT_TYPE_ASYN_MAX_KA_PROBES_CNT, /* TCP ka probes count (A-syn EQE) */
NVMETCP_EVENT_TYPE_ASYN_FIN_WAIT2, /* TCP fin wait 2 (A-syn EQE) */
NVMETCP_EVENT_TYPE_NVMETCP_CONN_ERROR, /* NVMeTCP error response (A-syn EQE) */
NVMETCP_EVENT_TYPE_TCP_CONN_ERROR, /* NVMeTCP error - tcp error (A-syn EQE) */
MAX_NVMETCP_EQE_OPCODE
};
struct nvmetcp_conn_offload_section {
struct regpair cccid_itid_table_addr; /* CCCID to iTID table address */
__le16 cccid_max_range; /* CCCID max value - used for validation */
__le16 reserved[3];
};
/* NVMe TCP connection offload params passed by driver to FW in NVMeTCP offload ramrod */
struct nvmetcp_conn_offload_params {
struct regpair sq_pbl_addr;
struct regpair r2tq_pbl_addr;
struct regpair xhq_pbl_addr;
struct regpair uhq_pbl_addr;
__le16 physical_q0;
__le16 physical_q1;
u8 flags;
#define NVMETCP_CONN_OFFLOAD_PARAMS_TCP_ON_CHIP_1B_MASK 0x1
#define NVMETCP_CONN_OFFLOAD_PARAMS_TCP_ON_CHIP_1B_SHIFT 0
#define NVMETCP_CONN_OFFLOAD_PARAMS_TARGET_MODE_MASK 0x1
#define NVMETCP_CONN_OFFLOAD_PARAMS_TARGET_MODE_SHIFT 1
#define NVMETCP_CONN_OFFLOAD_PARAMS_RESTRICTED_MODE_MASK 0x1
#define NVMETCP_CONN_OFFLOAD_PARAMS_RESTRICTED_MODE_SHIFT 2
#define NVMETCP_CONN_OFFLOAD_PARAMS_NVMETCP_MODE_MASK 0x1
#define NVMETCP_CONN_OFFLOAD_PARAMS_NVMETCP_MODE_SHIFT 3
#define NVMETCP_CONN_OFFLOAD_PARAMS_RESERVED1_MASK 0xF
#define NVMETCP_CONN_OFFLOAD_PARAMS_RESERVED1_SHIFT 4
u8 default_cq;
__le16 reserved0;
__le32 reserved1;
__le32 initial_ack;
struct nvmetcp_conn_offload_section nvmetcp; /* NVMe/TCP section */
};
/* NVMe TCP and TCP connection offload params passed by driver to FW in NVMeTCP offload ramrod. */
struct nvmetcp_spe_conn_offload {
__le16 reserved;
__le16 conn_id;
__le32 fw_cid;
struct nvmetcp_conn_offload_params nvmetcp;
struct tcp_offload_params_opt2 tcp;
};
/* NVMeTCP connection update params passed by driver to FW in NVMETCP update ramrod. */
struct nvmetcp_conn_update_ramrod_params {
__le16 reserved0;
__le16 conn_id;
__le32 reserved1;
u8 flags;
#define NVMETCP_CONN_UPDATE_RAMROD_PARAMS_HD_EN_MASK 0x1
#define NVMETCP_CONN_UPDATE_RAMROD_PARAMS_HD_EN_SHIFT 0
#define NVMETCP_CONN_UPDATE_RAMROD_PARAMS_DD_EN_MASK 0x1
#define NVMETCP_CONN_UPDATE_RAMROD_PARAMS_DD_EN_SHIFT 1
#define NVMETCP_CONN_UPDATE_RAMROD_PARAMS_RESERVED0_MASK 0x1
#define NVMETCP_CONN_UPDATE_RAMROD_PARAMS_RESERVED0_SHIFT 2
#define NVMETCP_CONN_UPDATE_RAMROD_PARAMS_RESERVED1_MASK 0x1
#define NVMETCP_CONN_UPDATE_RAMROD_PARAMS_RESERVED1_SHIFT 3
#define NVMETCP_CONN_UPDATE_RAMROD_PARAMS_RESERVED2_MASK 0x1
#define NVMETCP_CONN_UPDATE_RAMROD_PARAMS_RESERVED2_SHIFT 4
#define NVMETCP_CONN_UPDATE_RAMROD_PARAMS_RESERVED3_MASK 0x1
#define NVMETCP_CONN_UPDATE_RAMROD_PARAMS_RESERVED3_SHIFT 5
#define NVMETCP_CONN_UPDATE_RAMROD_PARAMS_RESERVED4_MASK 0x1
#define NVMETCP_CONN_UPDATE_RAMROD_PARAMS_RESERVED4_SHIFT 6
#define NVMETCP_CONN_UPDATE_RAMROD_PARAMS_RESERVED5_MASK 0x1
#define NVMETCP_CONN_UPDATE_RAMROD_PARAMS_RESERVED5_SHIFT 7
u8 reserved3[3];
__le32 max_seq_size;
__le32 max_send_pdu_length;
__le32 max_recv_pdu_length;
__le32 first_seq_length;
__le32 reserved4[5];
};
/* NVMeTCP connection termination request */
struct nvmetcp_spe_conn_termination {
__le16 reserved0;
__le16 conn_id;
__le32 reserved1;
u8 abortive;
u8 reserved2[7];
struct regpair reserved3;
struct regpair reserved4;
};
struct nvmetcp_dif_flags {
u8 flags;
};
enum nvmetcp_wqe_type {
NVMETCP_WQE_TYPE_NORMAL,
NVMETCP_WQE_TYPE_TASK_CLEANUP,
NVMETCP_WQE_TYPE_MIDDLE_PATH,
NVMETCP_WQE_TYPE_IC,
MAX_NVMETCP_WQE_TYPE
};
struct nvmetcp_wqe {
__le16 task_id;
u8 flags;
#define NVMETCP_WQE_WQE_TYPE_MASK 0x7 /* [use nvmetcp_wqe_type] */
#define NVMETCP_WQE_WQE_TYPE_SHIFT 0
#define NVMETCP_WQE_NUM_SGES_MASK 0xF
#define NVMETCP_WQE_NUM_SGES_SHIFT 3
#define NVMETCP_WQE_RESPONSE_MASK 0x1
#define NVMETCP_WQE_RESPONSE_SHIFT 7
struct nvmetcp_dif_flags prot_flags;
__le32 contlen_cdbsize;
#define NVMETCP_WQE_CONT_LEN_MASK 0xFFFFFF
#define NVMETCP_WQE_CONT_LEN_SHIFT 0
#define NVMETCP_WQE_CDB_SIZE_OR_NVMETCP_CMD_MASK 0xFF
#define NVMETCP_WQE_CDB_SIZE_OR_NVMETCP_CMD_SHIFT 24
};
struct nvmetcp_host_cccid_itid_entry {
__le16 itid;
};
struct nvmetcp_connect_done_results {
__le16 icid;
__le16 conn_id;
struct tcp_ulp_connect_done_params params;
};
struct nvmetcp_eqe_data {
__le16 icid;
__le16 conn_id;
__le16 reserved;
u8 error_code;
u8 error_pdu_opcode_reserved;
#define NVMETCP_EQE_DATA_ERROR_PDU_OPCODE_MASK 0x3F
#define NVMETCP_EQE_DATA_ERROR_PDU_OPCODE_SHIFT 0
#define NVMETCP_EQE_DATA_ERROR_PDU_OPCODE_VALID_MASK 0x1
#define NVMETCP_EQE_DATA_ERROR_PDU_OPCODE_VALID_SHIFT 6
#define NVMETCP_EQE_DATA_RESERVED0_MASK 0x1
#define NVMETCP_EQE_DATA_RESERVED0_SHIFT 7
};
enum nvmetcp_task_type {
NVMETCP_TASK_TYPE_HOST_WRITE,
NVMETCP_TASK_TYPE_HOST_READ,
NVMETCP_TASK_TYPE_INIT_CONN_REQUEST,
NVMETCP_TASK_TYPE_RESERVED0,
NVMETCP_TASK_TYPE_CLEANUP,
NVMETCP_TASK_TYPE_HOST_READ_NO_CQE,
MAX_NVMETCP_TASK_TYPE
};
struct nvmetcp_db_data {
u8 params;
#define NVMETCP_DB_DATA_DEST_MASK 0x3 /* destination of doorbell (use enum db_dest) */
#define NVMETCP_DB_DATA_DEST_SHIFT 0
#define NVMETCP_DB_DATA_AGG_CMD_MASK 0x3 /* aggregative command to CM (use enum db_agg_cmd_sel) */
#define NVMETCP_DB_DATA_AGG_CMD_SHIFT 2
#define NVMETCP_DB_DATA_BYPASS_EN_MASK 0x1 /* enable QM bypass */
#define NVMETCP_DB_DATA_BYPASS_EN_SHIFT 4
#define NVMETCP_DB_DATA_RESERVED_MASK 0x1
#define NVMETCP_DB_DATA_RESERVED_SHIFT 5
#define NVMETCP_DB_DATA_AGG_VAL_SEL_MASK 0x3 /* aggregative value selection */
#define NVMETCP_DB_DATA_AGG_VAL_SEL_SHIFT 6
u8 agg_flags; /* bit for every DQ counter flags in CM context that DQ can increment */
__le16 sq_prod;
};
struct nvmetcp_fw_nvmf_cqe {
__le32 reserved[4];
};
struct nvmetcp_icresp_mdata {
u8 digest;
u8 cpda;
__le16 pfv;
__le32 maxdata;
__le16 rsvd[4];
};
union nvmetcp_fw_cqe_data {
struct nvmetcp_fw_nvmf_cqe nvme_cqe;
struct nvmetcp_icresp_mdata icresp_mdata;
};
struct nvmetcp_fw_cqe {
__le16 conn_id;
u8 cqe_type;
u8 cqe_error_status_bits;
#define CQE_ERROR_BITMAP_DIF_ERR_BITS_MASK 0x7
#define CQE_ERROR_BITMAP_DIF_ERR_BITS_SHIFT 0
#define CQE_ERROR_BITMAP_DATA_DIGEST_ERR_MASK 0x1
#define CQE_ERROR_BITMAP_DATA_DIGEST_ERR_SHIFT 3
#define CQE_ERROR_BITMAP_RCV_ON_INVALID_CONN_MASK 0x1
#define CQE_ERROR_BITMAP_RCV_ON_INVALID_CONN_SHIFT 4
__le16 itid;
u8 task_type;
u8 fw_dbg_field;
u8 caused_conn_err;
u8 reserved0[3];
__le32 reserved1;
union nvmetcp_fw_cqe_data cqe_data;
struct regpair task_opaque;
__le32 reserved[6];
};
enum nvmetcp_fw_cqes_type {
NVMETCP_FW_CQE_TYPE_NORMAL = 1,
NVMETCP_FW_CQE_TYPE_RESERVED0,
NVMETCP_FW_CQE_TYPE_RESERVED1,
NVMETCP_FW_CQE_TYPE_CLEANUP,
NVMETCP_FW_CQE_TYPE_DUMMY,
MAX_NVMETCP_FW_CQES_TYPE
};
struct ystorm_nvmetcp_task_state {
struct scsi_cached_sges data_desc;
struct scsi_sgl_params sgl_params;
__le32 reserved0;
__le32 buffer_offset;
__le16 cccid;
struct nvmetcp_dif_flags dif_flags;
u8 flags;
#define YSTORM_NVMETCP_TASK_STATE_LOCAL_COMP_MASK 0x1
#define YSTORM_NVMETCP_TASK_STATE_LOCAL_COMP_SHIFT 0
#define YSTORM_NVMETCP_TASK_STATE_SLOW_IO_MASK 0x1
#define YSTORM_NVMETCP_TASK_STATE_SLOW_IO_SHIFT 1
#define YSTORM_NVMETCP_TASK_STATE_SET_DIF_OFFSET_MASK 0x1
#define YSTORM_NVMETCP_TASK_STATE_SET_DIF_OFFSET_SHIFT 2
#define YSTORM_NVMETCP_TASK_STATE_SEND_W_RSP_MASK 0x1
#define YSTORM_NVMETCP_TASK_STATE_SEND_W_RSP_SHIFT 3
};
struct ystorm_nvmetcp_task_rxmit_opt {
__le32 reserved[4];
};
struct nvmetcp_task_hdr {
__le32 reg[18];
};
struct nvmetcp_task_hdr_aligned {
struct nvmetcp_task_hdr task_hdr;
__le32 reserved[2]; /* HSI_COMMENT: Align to QREG */
};
struct e5_tdif_task_context {
__le32 reserved[16];
};
struct e5_rdif_task_context {
__le32 reserved[12];
};
struct ystorm_nvmetcp_task_st_ctx {
struct ystorm_nvmetcp_task_state state;
struct ystorm_nvmetcp_task_rxmit_opt rxmit_opt;
struct nvmetcp_task_hdr_aligned pdu_hdr;
};
struct mstorm_nvmetcp_task_st_ctx {
struct scsi_cached_sges data_desc;
struct scsi_sgl_params sgl_params;
__le32 rem_task_size;
__le32 data_buffer_offset;
u8 task_type;
struct nvmetcp_dif_flags dif_flags;
__le16 dif_task_icid;
struct regpair reserved0;
__le32 expected_itt;
__le32 reserved1;
};
struct ustorm_nvmetcp_task_st_ctx {
__le32 rem_rcv_len;
__le32 exp_data_transfer_len;
__le32 exp_data_sn;
struct regpair reserved0;
__le32 reg1_map;
#define REG1_NUM_SGES_MASK 0xF
#define REG1_NUM_SGES_SHIFT 0
#define REG1_RESERVED1_MASK 0xFFFFFFF
#define REG1_RESERVED1_SHIFT 4
u8 flags2;
#define USTORM_NVMETCP_TASK_ST_CTX_AHS_EXIST_MASK 0x1
#define USTORM_NVMETCP_TASK_ST_CTX_AHS_EXIST_SHIFT 0
#define USTORM_NVMETCP_TASK_ST_CTX_RESERVED1_MASK 0x7F
#define USTORM_NVMETCP_TASK_ST_CTX_RESERVED1_SHIFT 1
struct nvmetcp_dif_flags dif_flags;
__le16 reserved3;
__le16 tqe_opaque[2];
__le32 reserved5;
__le32 nvme_tcp_opaque_lo;
__le32 nvme_tcp_opaque_hi;
u8 task_type;
u8 error_flags;
#define USTORM_NVMETCP_TASK_ST_CTX_DATA_DIGEST_ERROR_MASK 0x1
#define USTORM_NVMETCP_TASK_ST_CTX_DATA_DIGEST_ERROR_SHIFT 0
#define USTORM_NVMETCP_TASK_ST_CTX_DATA_TRUNCATED_ERROR_MASK 0x1
#define USTORM_NVMETCP_TASK_ST_CTX_DATA_TRUNCATED_ERROR_SHIFT 1
#define USTORM_NVMETCP_TASK_ST_CTX_UNDER_RUN_ERROR_MASK 0x1
#define USTORM_NVMETCP_TASK_ST_CTX_UNDER_RUN_ERROR_SHIFT 2
#define USTORM_NVMETCP_TASK_ST_CTX_NVME_TCP_MASK 0x1
#define USTORM_NVMETCP_TASK_ST_CTX_NVME_TCP_SHIFT 3
u8 flags;
#define USTORM_NVMETCP_TASK_ST_CTX_CQE_WRITE_MASK 0x3
#define USTORM_NVMETCP_TASK_ST_CTX_CQE_WRITE_SHIFT 0
#define USTORM_NVMETCP_TASK_ST_CTX_LOCAL_COMP_MASK 0x1
#define USTORM_NVMETCP_TASK_ST_CTX_LOCAL_COMP_SHIFT 2
#define USTORM_NVMETCP_TASK_ST_CTX_Q0_R2TQE_WRITE_MASK 0x1
#define USTORM_NVMETCP_TASK_ST_CTX_Q0_R2TQE_WRITE_SHIFT 3
#define USTORM_NVMETCP_TASK_ST_CTX_TOTAL_DATA_ACKED_DONE_MASK 0x1
#define USTORM_NVMETCP_TASK_ST_CTX_TOTAL_DATA_ACKED_DONE_SHIFT 4
#define USTORM_NVMETCP_TASK_ST_CTX_HQ_SCANNED_DONE_MASK 0x1
#define USTORM_NVMETCP_TASK_ST_CTX_HQ_SCANNED_DONE_SHIFT 5
#define USTORM_NVMETCP_TASK_ST_CTX_R2T2RECV_DONE_MASK 0x1
#define USTORM_NVMETCP_TASK_ST_CTX_R2T2RECV_DONE_SHIFT 6
u8 cq_rss_number;
};
struct e5_ystorm_nvmetcp_task_ag_ctx {
u8 reserved /* cdu_validation */;
u8 byte1 /* state_and_core_id */;
__le16 word0 /* icid */;
u8 flags0;
u8 flags1;
u8 flags2;
u8 flags3;
__le32 TTT;
u8 byte2;
u8 byte3;
u8 byte4;
u8 e4_reserved7;
};
struct e5_mstorm_nvmetcp_task_ag_ctx {
u8 cdu_validation;
u8 byte1;
__le16 task_cid;
u8 flags0;
#define E5_MSTORM_NVMETCP_TASK_AG_CTX_CONNECTION_TYPE_MASK 0xF
#define E5_MSTORM_NVMETCP_TASK_AG_CTX_CONNECTION_TYPE_SHIFT 0
#define E5_MSTORM_NVMETCP_TASK_AG_CTX_EXIST_IN_QM0_MASK 0x1
#define E5_MSTORM_NVMETCP_TASK_AG_CTX_EXIST_IN_QM0_SHIFT 4
#define E5_MSTORM_NVMETCP_TASK_AG_CTX_CONN_CLEAR_SQ_FLAG_MASK 0x1
#define E5_MSTORM_NVMETCP_TASK_AG_CTX_CONN_CLEAR_SQ_FLAG_SHIFT 5
#define E5_MSTORM_NVMETCP_TASK_AG_CTX_VALID_MASK 0x1
#define E5_MSTORM_NVMETCP_TASK_AG_CTX_VALID_SHIFT 6
#define E5_MSTORM_NVMETCP_TASK_AG_CTX_TASK_CLEANUP_FLAG_MASK 0x1
#define E5_MSTORM_NVMETCP_TASK_AG_CTX_TASK_CLEANUP_FLAG_SHIFT 7
u8 flags1;
#define E5_MSTORM_NVMETCP_TASK_AG_CTX_TASK_CLEANUP_CF_MASK 0x3
#define E5_MSTORM_NVMETCP_TASK_AG_CTX_TASK_CLEANUP_CF_SHIFT 0
#define E5_MSTORM_NVMETCP_TASK_AG_CTX_CF1_MASK 0x3
#define E5_MSTORM_NVMETCP_TASK_AG_CTX_CF1_SHIFT 2
#define E5_MSTORM_NVMETCP_TASK_AG_CTX_CF2_MASK 0x3
#define E5_MSTORM_NVMETCP_TASK_AG_CTX_CF2_SHIFT 4
#define E5_MSTORM_NVMETCP_TASK_AG_CTX_TASK_CLEANUP_CF_EN_MASK 0x1
#define E5_MSTORM_NVMETCP_TASK_AG_CTX_TASK_CLEANUP_CF_EN_SHIFT 6
#define E5_MSTORM_NVMETCP_TASK_AG_CTX_CF1EN_MASK 0x1
#define E5_MSTORM_NVMETCP_TASK_AG_CTX_CF1EN_SHIFT 7
u8 flags2;
u8 flags3;
__le32 reg0;
u8 byte2;
u8 byte3;
u8 byte4;
u8 e4_reserved7;
};
struct e5_ustorm_nvmetcp_task_ag_ctx {
u8 reserved;
u8 state_and_core_id;
__le16 icid;
u8 flags0;
#define E5_USTORM_NVMETCP_TASK_AG_CTX_CONNECTION_TYPE_MASK 0xF
#define E5_USTORM_NVMETCP_TASK_AG_CTX_CONNECTION_TYPE_SHIFT 0
#define E5_USTORM_NVMETCP_TASK_AG_CTX_EXIST_IN_QM0_MASK 0x1
#define E5_USTORM_NVMETCP_TASK_AG_CTX_EXIST_IN_QM0_SHIFT 4
#define E5_USTORM_NVMETCP_TASK_AG_CTX_CONN_CLEAR_SQ_FLAG_MASK 0x1
#define E5_USTORM_NVMETCP_TASK_AG_CTX_CONN_CLEAR_SQ_FLAG_SHIFT 5
#define E5_USTORM_NVMETCP_TASK_AG_CTX_HQ_SCANNED_CF_MASK 0x3
#define E5_USTORM_NVMETCP_TASK_AG_CTX_HQ_SCANNED_CF_SHIFT 6
u8 flags1;
#define E5_USTORM_NVMETCP_TASK_AG_CTX_RESERVED1_MASK 0x3
#define E5_USTORM_NVMETCP_TASK_AG_CTX_RESERVED1_SHIFT 0
#define E5_USTORM_NVMETCP_TASK_AG_CTX_R2T2RECV_MASK 0x3
#define E5_USTORM_NVMETCP_TASK_AG_CTX_R2T2RECV_SHIFT 2
#define E5_USTORM_NVMETCP_TASK_AG_CTX_CF3_MASK 0x3
#define E5_USTORM_NVMETCP_TASK_AG_CTX_CF3_SHIFT 4
#define E5_USTORM_NVMETCP_TASK_AG_CTX_DIF_ERROR_CF_MASK 0x3
#define E5_USTORM_NVMETCP_TASK_AG_CTX_DIF_ERROR_CF_SHIFT 6
u8 flags2;
#define E5_USTORM_NVMETCP_TASK_AG_CTX_HQ_SCANNED_CF_EN_MASK 0x1
#define E5_USTORM_NVMETCP_TASK_AG_CTX_HQ_SCANNED_CF_EN_SHIFT 0
#define E5_USTORM_NVMETCP_TASK_AG_CTX_DISABLE_DATA_ACKED_MASK 0x1
#define E5_USTORM_NVMETCP_TASK_AG_CTX_DISABLE_DATA_ACKED_SHIFT 1
#define E5_USTORM_NVMETCP_TASK_AG_CTX_R2T2RECV_EN_MASK 0x1
#define E5_USTORM_NVMETCP_TASK_AG_CTX_R2T2RECV_EN_SHIFT 2
#define E5_USTORM_NVMETCP_TASK_AG_CTX_CF3EN_MASK 0x1
#define E5_USTORM_NVMETCP_TASK_AG_CTX_CF3EN_SHIFT 3
#define E5_USTORM_NVMETCP_TASK_AG_CTX_DIF_ERROR_CF_EN_MASK 0x1
#define E5_USTORM_NVMETCP_TASK_AG_CTX_DIF_ERROR_CF_EN_SHIFT 4
#define E5_USTORM_NVMETCP_TASK_AG_CTX_CMP_DATA_TOTAL_EXP_EN_MASK 0x1
#define E5_USTORM_NVMETCP_TASK_AG_CTX_CMP_DATA_TOTAL_EXP_EN_SHIFT 5
#define E5_USTORM_NVMETCP_TASK_AG_CTX_RULE1EN_MASK 0x1
#define E5_USTORM_NVMETCP_TASK_AG_CTX_RULE1EN_SHIFT 6
#define E5_USTORM_NVMETCP_TASK_AG_CTX_CMP_CONT_RCV_EXP_EN_MASK 0x1
#define E5_USTORM_NVMETCP_TASK_AG_CTX_CMP_CONT_RCV_EXP_EN_SHIFT 7
u8 flags3;
u8 flags4;
#define E5_USTORM_NVMETCP_TASK_AG_CTX_E4_RESERVED5_MASK 0x3
#define E5_USTORM_NVMETCP_TASK_AG_CTX_E4_RESERVED5_SHIFT 0
#define E5_USTORM_NVMETCP_TASK_AG_CTX_E4_RESERVED6_MASK 0x1
#define E5_USTORM_NVMETCP_TASK_AG_CTX_E4_RESERVED6_SHIFT 2
#define E5_USTORM_NVMETCP_TASK_AG_CTX_E4_RESERVED7_MASK 0x1
#define E5_USTORM_NVMETCP_TASK_AG_CTX_E4_RESERVED7_SHIFT 3
#define E5_USTORM_NVMETCP_TASK_AG_CTX_DIF_ERROR_TYPE_MASK 0xF
#define E5_USTORM_NVMETCP_TASK_AG_CTX_DIF_ERROR_TYPE_SHIFT 4
u8 byte2;
u8 byte3;
u8 e4_reserved8;
__le32 dif_err_intervals;
__le32 dif_error_1st_interval;
__le32 rcv_cont_len;
__le32 exp_cont_len;
__le32 total_data_acked;
__le32 exp_data_acked;
__le16 word1;
__le16 next_tid;
__le32 hdr_residual_count;
__le32 exp_r2t_sn;
};
struct e5_nvmetcp_task_context {
struct ystorm_nvmetcp_task_st_ctx ystorm_st_context;
struct e5_ystorm_nvmetcp_task_ag_ctx ystorm_ag_context;
struct regpair ystorm_ag_padding[2];
struct e5_tdif_task_context tdif_context;
struct e5_mstorm_nvmetcp_task_ag_ctx mstorm_ag_context;
struct regpair mstorm_ag_padding[2];
struct e5_ustorm_nvmetcp_task_ag_ctx ustorm_ag_context;
struct regpair ustorm_ag_padding[2];
struct mstorm_nvmetcp_task_st_ctx mstorm_st_context;
struct regpair mstorm_st_padding[2];
struct ustorm_nvmetcp_task_st_ctx ustorm_st_context;
struct regpair ustorm_st_padding[2];
struct e5_rdif_task_context rdif_context;
};
#endif /* __NVMETCP_COMMON__*/
...@@ -542,6 +542,22 @@ struct qed_iscsi_pf_params {
u8 bdq_pbl_num_entries[3];
};
struct qed_nvmetcp_pf_params {
u64 glbl_q_params_addr;
u16 cq_num_entries;
u16 num_cons;
u16 num_tasks;
u8 num_sq_pages_in_ring;
u8 num_r2tq_pages_in_ring;
u8 num_uhq_pages_in_ring;
u8 num_queues;
u8 gl_rq_pi;
u8 gl_cmd_pi;
u8 debug_mode;
u8 ll2_ooo_queue_id;
u16 min_rto;
};
struct qed_rdma_pf_params {
/* Supplied to QED during resource allocation (may affect the ILT and
 * the doorbell BAR).
...@@ -560,6 +576,7 @@ struct qed_pf_params {
struct qed_eth_pf_params eth_pf_params;
struct qed_fcoe_pf_params fcoe_pf_params;
struct qed_iscsi_pf_params iscsi_pf_params;
struct qed_nvmetcp_pf_params nvmetcp_pf_params;
struct qed_rdma_pf_params rdma_pf_params;
};
...@@ -662,6 +679,7 @@ enum qed_sb_type {
enum qed_protocol {
QED_PROTOCOL_ETH,
QED_PROTOCOL_ISCSI,
QED_PROTOCOL_NVMETCP = QED_PROTOCOL_ISCSI,
QED_PROTOCOL_FCOE,
};
...
...@@ -18,7 +18,7 @@
enum qed_ll2_conn_type {
QED_LL2_TYPE_FCOE,
-QED_LL2_TYPE_ISCSI,
+QED_LL2_TYPE_TCP_ULP,
QED_LL2_TYPE_TEST,
QED_LL2_TYPE_OOO,
QED_LL2_TYPE_RESERVED2,
...
/* SPDX-License-Identifier: (GPL-2.0-only OR BSD-3-Clause) */
/* Copyright 2021 Marvell. All rights reserved. */
#ifndef _QED_NVMETCP_IF_H
#define _QED_NVMETCP_IF_H
#include <linux/types.h>
#include <linux/qed/qed_if.h>
#include <linux/qed/storage_common.h>
#include <linux/qed/nvmetcp_common.h>
#define QED_NVMETCP_MAX_IO_SIZE 0x800000
#define QED_NVMETCP_CMN_HDR_SIZE (sizeof(struct nvme_tcp_hdr))
#define QED_NVMETCP_CMD_HDR_SIZE (sizeof(struct nvme_tcp_cmd_pdu))
#define QED_NVMETCP_NON_IO_HDR_SIZE ((QED_NVMETCP_CMN_HDR_SIZE + 16))
typedef int (*nvmetcp_event_cb_t) (void *context,
u8 fw_event_code, void *fw_handle);
struct qed_dev_nvmetcp_info {
struct qed_dev_info common;
u8 port_id; /* Physical port */
u8 num_cqs;
};
#define MAX_TID_BLOCKS_NVMETCP (512)
struct qed_nvmetcp_tid {
u32 size; /* In bytes per task */
u32 num_tids_per_block;
u8 *blocks[MAX_TID_BLOCKS_NVMETCP];
};
struct qed_nvmetcp_id_params {
u8 mac[ETH_ALEN];
u32 ip[4];
u16 port;
};
struct qed_nvmetcp_params_offload {
/* FW initializations */
dma_addr_t sq_pbl_addr;
dma_addr_t nvmetcp_cccid_itid_table_addr;
u16 nvmetcp_cccid_max_range;
u8 default_cq;
/* Networking and TCP stack initializations */
struct qed_nvmetcp_id_params src;
struct qed_nvmetcp_id_params dst;
u32 ka_timeout;
u32 ka_interval;
u32 max_rt_time;
u32 cwnd;
u16 mss;
u16 vlan_id;
bool timestamp_en;
bool delayed_ack_en;
bool tcp_keep_alive_en;
bool ecn_en;
u8 ip_version;
u8 ka_max_probe_cnt;
u8 ttl;
u8 tos_or_tc;
u8 rcv_wnd_scale;
};
struct qed_nvmetcp_params_update {
u32 max_io_size;
u32 max_recv_pdu_length;
u32 max_send_pdu_length;
/* Placeholder: pfv, cpda, hpda */
bool hdr_digest_en;
bool data_digest_en;
};
struct qed_nvmetcp_cb_ops {
struct qed_common_cb_ops common;
};
struct nvmetcp_sge {
struct regpair sge_addr; /* SGE address */
__le32 sge_len; /* SGE length */
__le32 reserved;
};
/* IO path HSI function SGL params */
struct storage_sgl_task_params {
struct nvmetcp_sge *sgl;
struct regpair sgl_phys_addr;
u32 total_buffer_size;
u16 num_sges;
bool small_mid_sge;
};
/* IO path HSI function FW task context params */
struct nvmetcp_task_params {
void *context; /* Output parameter - set/filled by the HSI function */
struct nvmetcp_wqe *sqe;
u32 tx_io_size; /* in bytes (Without DIF, if exists) */
u32 rx_io_size; /* in bytes (Without DIF, if exists) */
u16 conn_icid;
u16 itid;
struct regpair opq; /* qedn_task_ctx address */
u16 host_cccid;
u8 cq_rss_number;
bool send_write_incapsule;
};
/**
* struct qed_nvmetcp_ops - qed NVMeTCP operations.
* @common: common operations pointer
* @ll2: light L2 operations pointer
* @fill_dev_info: fills NVMeTCP specific information
* @param cdev
* @param info
* @return 0 on success, otherwise error value.
* @register_ops: register nvmetcp operations
* @param cdev
* @param ops - specified using qed_nvmetcp_cb_ops
* @param cookie - driver private
* @start: nvmetcp in FW
* @param cdev
* @param tasks - qed will fill information about tasks
 * @return 0 on success, otherwise error value.
* @stop: nvmetcp in FW
* @param cdev
 * @return 0 on success, otherwise error value.
* @acquire_conn: acquire a new nvmetcp connection
* @param cdev
* @param handle - qed will fill handle that should be
* used henceforth as identifier of the
* connection.
* @param p_doorbell - qed will fill the address of the
* doorbell.
 * @return 0 on success, otherwise error value.
* @release_conn: release a previously acquired nvmetcp connection
* @param cdev
* @param handle - the connection handle.
* @return 0 on success, otherwise error value.
* @offload_conn: configures an offloaded connection
* @param cdev
* @param handle - the connection handle.
* @param conn_info - the configuration to use for the
* offload.
* @return 0 on success, otherwise error value.
* @update_conn: updates an offloaded connection
* @param cdev
* @param handle - the connection handle.
* @param conn_info - the configuration to use for the
* offload.
* @return 0 on success, otherwise error value.
* @destroy_conn: stops an offloaded connection
* @param cdev
* @param handle - the connection handle.
* @return 0 on success, otherwise error value.
 * @clear_sq: clear all tasks in the SQ
* @param cdev
* @param handle - the connection handle.
* @return 0 on success, otherwise error value.
* @add_src_tcp_port_filter: Add source tcp port filter
* @param cdev
* @param src_port
* @remove_src_tcp_port_filter: Remove source tcp port filter
* @param cdev
* @param src_port
* @add_dst_tcp_port_filter: Add destination tcp port filter
* @param cdev
* @param dest_port
* @remove_dst_tcp_port_filter: Remove destination tcp port filter
* @param cdev
* @param dest_port
* @clear_all_filters: Clear all filters.
* @param cdev
*/
struct qed_nvmetcp_ops {
const struct qed_common_ops *common;
const struct qed_ll2_ops *ll2;
int (*fill_dev_info)(struct qed_dev *cdev,
struct qed_dev_nvmetcp_info *info);
void (*register_ops)(struct qed_dev *cdev,
struct qed_nvmetcp_cb_ops *ops, void *cookie);
int (*start)(struct qed_dev *cdev,
struct qed_nvmetcp_tid *tasks,
void *event_context, nvmetcp_event_cb_t async_event_cb);
int (*stop)(struct qed_dev *cdev);
int (*acquire_conn)(struct qed_dev *cdev,
u32 *handle,
u32 *fw_cid, void __iomem **p_doorbell);
int (*release_conn)(struct qed_dev *cdev, u32 handle);
int (*offload_conn)(struct qed_dev *cdev,
u32 handle,
struct qed_nvmetcp_params_offload *conn_info);
int (*update_conn)(struct qed_dev *cdev,
u32 handle,
struct qed_nvmetcp_params_update *conn_info);
int (*destroy_conn)(struct qed_dev *cdev, u32 handle, u8 abrt_conn);
int (*clear_sq)(struct qed_dev *cdev, u32 handle);
int (*add_src_tcp_port_filter)(struct qed_dev *cdev, u16 src_port);
void (*remove_src_tcp_port_filter)(struct qed_dev *cdev, u16 src_port);
int (*add_dst_tcp_port_filter)(struct qed_dev *cdev, u16 dest_port);
void (*remove_dst_tcp_port_filter)(struct qed_dev *cdev, u16 dest_port);
void (*clear_all_filters)(struct qed_dev *cdev);
void (*init_read_io)(struct nvmetcp_task_params *task_params,
struct nvme_tcp_cmd_pdu *cmd_pdu_header,
struct nvme_command *nvme_cmd,
struct storage_sgl_task_params *sgl_task_params);
void (*init_write_io)(struct nvmetcp_task_params *task_params,
struct nvme_tcp_cmd_pdu *cmd_pdu_header,
struct nvme_command *nvme_cmd,
struct storage_sgl_task_params *sgl_task_params);
void (*init_icreq_exchange)(struct nvmetcp_task_params *task_params,
struct nvme_tcp_icreq_pdu *init_conn_req_pdu_hdr,
struct storage_sgl_task_params *tx_sgl_task_params,
struct storage_sgl_task_params *rx_sgl_task_params);
void (*init_task_cleanup)(struct nvmetcp_task_params *task_params);
};
const struct qed_nvmetcp_ops *qed_get_nvmetcp_ops(void);
void qed_put_nvmetcp_ops(void);
#endif
/* SPDX-License-Identifier: (GPL-2.0-only OR BSD-3-Clause) */
/*
* Copyright 2021 Marvell. All rights reserved.
*/
#ifndef _QED_IP_SERVICES_IF_H
#define _QED_IP_SERVICES_IF_H
#include <linux/types.h>
#include <net/route.h>
#include <net/ip6_route.h>
#include <linux/inetdevice.h>
int qed_route_ipv4(struct sockaddr_storage *local_addr,
struct sockaddr_storage *remote_addr,
struct sockaddr *hardware_address,
struct net_device **ndev);
int qed_route_ipv6(struct sockaddr_storage *local_addr,
struct sockaddr_storage *remote_addr,
struct sockaddr *hardware_address,
struct net_device **ndev);
void qed_vlan_get_ndev(struct net_device **ndev, u16 *vlan_id);
struct pci_dev *qed_validate_ndev(struct net_device *ndev);
void qed_return_tcp_port(struct socket *sock);
int qed_fetch_tcp_port(struct sockaddr_storage local_ip_addr,
struct socket **sock, u16 *port);
__be16 qed_get_in_port(struct sockaddr_storage *sa);
#endif /* _QED_IP_SERVICES_IF_H */