Commit 06552f8e authored by David S. Miller

Merge branch 'Huawei-HiNIC-Ethernet-Driver'

Aviad Krawczyk says:

====================
Huawei HiNIC Ethernet Driver

This patch set adds support for the HiNIC Ethernet driver for the
HiNIC family of PCIe Network Interface Cards.

Huawei's PCIe HiNIC card is a new Ethernet card, hence the need for a
new driver.

The current driver supports the Physical Function (PF); Virtual
Function (VF) support and more features will follow once the basic
PF driver has been accepted.

Changes V7 -> V8:
1. Remove unnecessary cast from void * - Stephen Hemminger comment
	https://lkml.org/lkml/2017/8/17/1008

Changes V6 -> V7:
1. Separate netpoll and MAINTAINERS patch - Sergei Shtylyov comment
	https://lkml.org/lkml/2017/8/17/479

Changes V5 -> V6:
1. Fix cover letter Message-Id

Changes V4 -> V5:
1. Remove select_queue NOP - David Miller comment
        https://lkml.org/lkml/2017/8/16/625

Changes V3 -> V4:
1. Reverse christmas tree order - David Miller comment
        https://lkml.org/lkml/2017/8/3/862

Changes V2 -> V3:
1. Replace dev_ functions by netif_ functions - Joe Perches comment
        https://lkml.org/lkml/2017/7/19/424
2. Fix the driver directory in MAINTAINERS file - Sergei Shtylyov comment
        https://lkml.org/lkml/2017/7/19/615
3. Add a newline at the end of Makefile - David Miller comment
        https://lkml.org/lkml/2017/7/19/1345
4. Return a pointer as a val instead of in arg - Francois Romieu comment
        https://lkml.org/lkml/2017/7/19/1319
5. Change the error labels to err_xyz - Francois Romieu comment
        https://lkml.org/lkml/2017/7/19/1319
6. Remove check of Func Type in free function - Francois Romieu comment
        https://lkml.org/lkml/2017/7/19/1319
7. Remove !netdev check in remove function - Francois Romieu comment
        https://lkml.org/lkml/2017/7/19/1319
8. Use module_pci_driver - Francois Romieu comment
        https://lkml.org/lkml/2017/7/19/1319
9. Move the PCI device ID to the .c file - Francois Romieu comment
        https://lkml.org/lkml/2017/7/19/1319
10. Remove void * to avoid passing wrong ptr - Francois Romieu comment
        https://lkml.org/lkml/2017/7/19/1319

Changes V1 -> V2:
1. remove driver version - Andrew Lunn comment
        https://lkml.org/lkml/2017/7/12/372
2. replace kzalloc with devm_kzalloc for shorter cleanup - Andrew Lunn comment
        https://lkml.org/lkml/2017/7/12/372
3. replace pr_ functions by dev_ functions - Andrew Lunn comment
        https://lkml.org/lkml/2017/7/12/375
4. separate last patch by moving ops to a new patch - Andrew Lunn comment
        https://lkml.org/lkml/2017/7/12/377
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
parents 89c9c163 4d3b6327
Linux Kernel Driver for Huawei Intelligent NIC(HiNIC) family
============================================================
Overview:
=========
HiNIC is a network interface card for the Data Center Area.
The driver supports a range of link-speed devices (10GbE, 25GbE, 40GbE, etc.).
The driver also supports a negotiated and extendable feature set.
Some HiNIC devices support SR-IOV. This driver is used for Physical Function
(PF).
HiNIC devices support an MSI-X interrupt vector for each Tx/Rx queue and
adaptive interrupt moderation.
HiNIC devices also support various offload features such as checksum offload,
TCP Transmit Segmentation Offload (TSO), Receive-Side Scaling (RSS) and
Large Receive Offload (LRO).
Supported PCI vendor ID/device IDs:
===================================
19e5:1822 - HiNIC PF
Driver Architecture and Source Code:
====================================
hinic_dev - Implements a Logical Network device that is independent of
specific HW details, such as HW data structure formats.
hinic_hwdev - Implements the HW details of the device and includes the
components for accessing the PCI NIC.
hinic_hwdev contains the following components:
===============================================
HW Interface:
=============
The interface for accessing the pci device (DMA memory and PCI BARs).
(hinic_hw_if.c, hinic_hw_if.h)
Configuration Status Registers Area that describes the HW Registers on the
configuration and status BAR0. (hinic_hw_csr.h)
MGMT components:
================
Asynchronous Event Queues(AEQs) - The event queues for receiving messages from
the MGMT modules on the cards. (hinic_hw_eqs.c, hinic_hw_eqs.h)
Application Programmable Interface commands(API CMD) - Interface for sending
MGMT commands to the card. (hinic_hw_api_cmd.c, hinic_hw_api_cmd.h)
Management (MGMT) - the PF to MGMT channel that uses API CMD for sending MGMT
commands to the card and receives notifications from the MGMT modules on the
card via AEQs. Also sets the addresses of the IO CMDQs in HW.
(hinic_hw_mgmt.c, hinic_hw_mgmt.h)
IO components:
==============
Completion Event Queues(CEQs) - The completion Event Queues that describe IO
tasks that are finished. (hinic_hw_eqs.c, hinic_hw_eqs.h)
Work Queues(WQ) - Contain the memory and operations for use by CMD queues and
the Queue Pairs. The WQ is a Memory Block in a Page. The Block contains
pointers to Memory Areas that are the Memory for the Work Queue Elements(WQEs).
(hinic_hw_wq.c, hinic_hw_wq.h)
Command Queues(CMDQ) - The queues for sending commands for IO management; also
used to set the QP addresses in HW. The command completion events are
accumulated on the CEQ that is configured to receive the CMDQ completion events.
(hinic_hw_cmdq.c, hinic_hw_cmdq.h)
Queue Pairs(QPs) - The HW Receive and Send queues for Receiving and Transmitting
Data. (hinic_hw_qp.c, hinic_hw_qp.h, hinic_hw_qp_ctxt.h)
IO - de/constructs all the IO components. (hinic_hw_io.c, hinic_hw_io.h)
HW device:
==========
HW device - de/constructs the HW Interface and the MGMT components on driver
initialization, and the IO components on Interface UP/DOWN events.
(hinic_hw_dev.c, hinic_hw_dev.h)
hinic_dev contains the following components:
============================================
PCI ID table - Contains the supported PCI Vendor/Device IDs.
(hinic_pci_tbl.h)
Port Commands - Send commands to the HW device for port management
(MAC, Vlan, MTU, ...). (hinic_port.c, hinic_port.h)
Tx Queues - Logical Tx Queues that use the HW Send Queues for transmit.
The Logical Tx queue is not dependent on the format of the HW Send Queue.
(hinic_tx.c, hinic_tx.h)
Rx Queues - Logical Rx Queues that use the HW Receive Queues for receive.
The Logical Rx queue is not dependent on the format of the HW Receive Queue.
(hinic_rx.c, hinic_rx.h)
hinic_dev - de/constructs the Logical Tx and Rx Queues.
(hinic_main.c, hinic_dev.h)
Miscellaneous:
==============
Common functions that are used by HW and Logical Device.
(hinic_common.c, hinic_common.h)
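As an illustration, a minimal usage sketch (the dma_addr and len values
here are assumptions) of the scatter gather helpers declared in
hinic_common.h:

	struct hinic_sge sge;

	/* record a mapped DMA buffer in the HW scatter gather entry */
	hinic_set_sge(&sge, dma_addr, len);

	/* later, recover the DMA address from the entry */
	dma_addr = hinic_sge_to_dma(&sge);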
Support
=======
If an issue is identified with the released source code on the supported kernel
with a supported adapter, email the specific information related to the issue to
aviad.krawczyk@huawei.com.
@@ -6240,6 +6240,13 @@ L: linux-input@vger.kernel.org
S: Maintained
F: drivers/input/touchscreen/htcpen.c
HUAWEI ETHERNET DRIVER
M: Aviad Krawczyk <aviad.krawczyk@huawei.com>
L: netdev@vger.kernel.org
S: Supported
F: Documentation/networking/hinic.txt
F: drivers/net/ethernet/huawei/hinic/
HUGETLB FILESYSTEM
M: Nadia Yvette Chambers <nyc@holomorphy.com>
S: Maintained
@@ -78,6 +78,7 @@ source "drivers/net/ethernet/freescale/Kconfig"
source "drivers/net/ethernet/fujitsu/Kconfig"
source "drivers/net/ethernet/hisilicon/Kconfig"
source "drivers/net/ethernet/hp/Kconfig"
source "drivers/net/ethernet/huawei/Kconfig"
source "drivers/net/ethernet/ibm/Kconfig"
source "drivers/net/ethernet/intel/Kconfig"
source "drivers/net/ethernet/i825xx/Kconfig"
@@ -41,6 +41,7 @@ obj-$(CONFIG_NET_VENDOR_FREESCALE) += freescale/
obj-$(CONFIG_NET_VENDOR_FUJITSU) += fujitsu/
obj-$(CONFIG_NET_VENDOR_HISILICON) += hisilicon/
obj-$(CONFIG_NET_VENDOR_HP) += hp/
obj-$(CONFIG_NET_VENDOR_HUAWEI) += huawei/
obj-$(CONFIG_NET_VENDOR_IBM) += ibm/
obj-$(CONFIG_NET_VENDOR_INTEL) += intel/
obj-$(CONFIG_NET_VENDOR_I825XX) += i825xx/
#
# Huawei driver configuration
#
config NET_VENDOR_HUAWEI
bool "Huawei devices"
default y
---help---
If you have a network (Ethernet) card belonging to this class, say Y.
Note that the answer to this question doesn't directly affect the
kernel: saying N will just cause the configurator to skip all
the questions about Huawei cards. If you say Y, you will be asked
for your specific card in the following questions.
if NET_VENDOR_HUAWEI
source "drivers/net/ethernet/huawei/hinic/Kconfig"
endif # NET_VENDOR_HUAWEI
#
# Makefile for the Huawei device drivers.
#
obj-$(CONFIG_HINIC) += hinic/
#
# Huawei driver configuration
#
config HINIC
tristate "Huawei Intelligent PCIE Network Interface Card"
depends on (PCI_MSI && X86)
default m
---help---
This driver supports HiNIC PCIE Ethernet cards.
To compile this driver as part of the kernel, choose Y here.
If unsure, choose N.
The default is to compile the driver as a module.
obj-$(CONFIG_HINIC) += hinic.o
hinic-y := hinic_main.o hinic_tx.o hinic_rx.o hinic_port.o hinic_hw_dev.o \
hinic_hw_io.o hinic_hw_qp.o hinic_hw_cmdq.o hinic_hw_wq.o \
hinic_hw_mgmt.o hinic_hw_api_cmd.o hinic_hw_eqs.o hinic_hw_if.o \
hinic_common.o
/*
* Huawei HiNIC PCI Express Linux driver
* Copyright(c) 2017 Huawei Technologies Co., Ltd
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* for more details.
*
*/
#include <linux/kernel.h>
#include <linux/types.h>
#include <asm/byteorder.h>
#include "hinic_common.h"
/**
* hinic_cpu_to_be32 - convert data to big endian 32 bit format
* @data: the data to convert
* @len: length of data to convert
**/
void hinic_cpu_to_be32(void *data, int len)
{
u32 *mem = data;
int i;
len = len / sizeof(u32);
for (i = 0; i < len; i++) {
*mem = cpu_to_be32(*mem);
mem++;
}
}
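/* Usage sketch (illustrative): the HW expects big endian 32 bit words, so
 * callers convert a command in place before writing it to the HW, e.g.
 * hinic_cpu_to_be32(&cmdq_wqe, WQE_LCMD_SIZE) in the CMDQ code, and undo
 * the conversion on HW-written data with hinic_be32_to_cpu().
 */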
/**
* hinic_be32_to_cpu - convert data from big endian 32 bit format
* @data: the data to convert
* @len: length of data to convert
**/
void hinic_be32_to_cpu(void *data, int len)
{
u32 *mem = data;
int i;
len = len / sizeof(u32);
for (i = 0; i < len; i++) {
*mem = be32_to_cpu(*mem);
mem++;
}
}
/**
* hinic_set_sge - set dma area in scatter gather entry
* @sge: scatter gather entry
* @addr: dma address
* @len: length of relevant data in the dma address
**/
void hinic_set_sge(struct hinic_sge *sge, dma_addr_t addr, int len)
{
sge->hi_addr = upper_32_bits(addr);
sge->lo_addr = lower_32_bits(addr);
sge->len = len;
}
/**
* hinic_sge_to_dma - get dma address from scatter gather entry
* @sge: scatter gather entry
*
* Return dma address of sg entry
**/
dma_addr_t hinic_sge_to_dma(struct hinic_sge *sge)
{
return (dma_addr_t)((((u64)sge->hi_addr) << 32) | sge->lo_addr);
}
/*
* Huawei HiNIC PCI Express Linux driver
* Copyright(c) 2017 Huawei Technologies Co., Ltd
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* for more details.
*
*/
#ifndef HINIC_COMMON_H
#define HINIC_COMMON_H
#include <linux/types.h>
#define UPPER_8_BITS(data) (((data) >> 8) & 0xFF)
#define LOWER_8_BITS(data) ((data) & 0xFF)
struct hinic_sge {
u32 hi_addr;
u32 lo_addr;
u32 len;
};
void hinic_cpu_to_be32(void *data, int len);
void hinic_be32_to_cpu(void *data, int len);
void hinic_set_sge(struct hinic_sge *sge, dma_addr_t addr, int len);
dma_addr_t hinic_sge_to_dma(struct hinic_sge *sge);
#endif
/*
* Huawei HiNIC PCI Express Linux driver
* Copyright(c) 2017 Huawei Technologies Co., Ltd
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* for more details.
*
*/
#ifndef HINIC_DEV_H
#define HINIC_DEV_H
#include <linux/netdevice.h>
#include <linux/types.h>
#include <linux/semaphore.h>
#include <linux/workqueue.h>
#include <linux/bitops.h>
#include "hinic_hw_dev.h"
#include "hinic_tx.h"
#include "hinic_rx.h"
#define HINIC_DRV_NAME "hinic"
enum hinic_flags {
HINIC_LINK_UP = BIT(0),
HINIC_INTF_UP = BIT(1),
};
struct hinic_rx_mode_work {
struct work_struct work;
u32 rx_mode;
};
struct hinic_dev {
struct net_device *netdev;
struct hinic_hwdev *hwdev;
u32 msg_enable;
unsigned int tx_weight;
unsigned int rx_weight;
unsigned int flags;
struct semaphore mgmt_lock;
unsigned long *vlan_bitmap;
struct hinic_rx_mode_work rx_mode_work;
struct workqueue_struct *workq;
struct hinic_txq *txqs;
struct hinic_rxq *rxqs;
struct hinic_txq_stats tx_stats;
struct hinic_rxq_stats rx_stats;
};
#endif
/*
* Huawei HiNIC PCI Express Linux driver
* Copyright(c) 2017 Huawei Technologies Co., Ltd
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* for more details.
*
*/
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/errno.h>
#include <linux/pci.h>
#include <linux/device.h>
#include <linux/slab.h>
#include <linux/dma-mapping.h>
#include <linux/bitops.h>
#include <linux/err.h>
#include <linux/jiffies.h>
#include <linux/delay.h>
#include <linux/log2.h>
#include <linux/semaphore.h>
#include <asm/byteorder.h>
#include <asm/barrier.h>
#include "hinic_hw_csr.h"
#include "hinic_hw_if.h"
#include "hinic_hw_api_cmd.h"
#define API_CHAIN_NUM_CELLS 32
#define API_CMD_CELL_SIZE_SHIFT 6
#define API_CMD_CELL_SIZE_MIN (BIT(API_CMD_CELL_SIZE_SHIFT))
#define API_CMD_CELL_SIZE(cell_size) \
(((cell_size) >= API_CMD_CELL_SIZE_MIN) ? \
(1 << (fls(cell_size - 1))) : API_CMD_CELL_SIZE_MIN)
#define API_CMD_CELL_SIZE_VAL(size) \
ilog2((size) >> API_CMD_CELL_SIZE_SHIFT)
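/* Worked example: with the cell layout in hinic_hw_api_cmd.h the HW cell is
 * 40 bytes, so API_CMD_CELL_SIZE() rounds it up to the 64 byte minimum and
 * API_CMD_CELL_SIZE_VAL(64) == ilog2(64 >> 6) == 0 is what is written to HW.
 */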
#define API_CMD_BUF_SIZE 2048
/* Sizes of the members in hinic_api_cmd_cell */
#define API_CMD_CELL_DESC_SIZE 8
#define API_CMD_CELL_DATA_ADDR_SIZE 8
#define API_CMD_CELL_ALIGNMENT 8
#define API_CMD_TIMEOUT 1000
#define MASKED_IDX(chain, idx) ((idx) & ((chain)->num_cells - 1))
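/* num_cells is validated to be a power of 2, so the mask wraps ring indices
 * cheaply, e.g. with 32 cells MASKED_IDX(chain, 32) == 0.
 */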
#define SIZE_8BYTES(size) (ALIGN((size), 8) >> 3)
#define SIZE_4BYTES(size) (ALIGN((size), 4) >> 2)
#define RD_DMA_ATTR_DEFAULT 0
#define WR_DMA_ATTR_DEFAULT 0
enum api_cmd_data_format {
SGE_DATA = 1, /* cell data is passed by hw address */
};
enum api_cmd_type {
API_CMD_WRITE = 0,
};
enum api_cmd_bypass {
NO_BYPASS = 0,
BYPASS = 1,
};
enum api_cmd_xor_chk_level {
XOR_CHK_DIS = 0,
XOR_CHK_ALL = 3,
};
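/* XOR-fold the low 7 bytes of a 64 bit ctrl/desc word; the result is stored
 * in the high (XOR_CHKSUM) byte for the HW to verify.
 */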
static u8 xor_chksum_set(void *data)
{
int idx;
u8 *val, checksum = 0;
val = data;
for (idx = 0; idx < 7; idx++)
checksum ^= val[idx];
return checksum;
}
static void set_prod_idx(struct hinic_api_cmd_chain *chain)
{
enum hinic_api_cmd_chain_type chain_type = chain->chain_type;
struct hinic_hwif *hwif = chain->hwif;
u32 addr, prod_idx;
addr = HINIC_CSR_API_CMD_CHAIN_PI_ADDR(chain_type);
prod_idx = hinic_hwif_read_reg(hwif, addr);
prod_idx = HINIC_API_CMD_PI_CLEAR(prod_idx, IDX);
prod_idx |= HINIC_API_CMD_PI_SET(chain->prod_idx, IDX);
hinic_hwif_write_reg(hwif, addr, prod_idx);
}
static u32 get_hw_cons_idx(struct hinic_api_cmd_chain *chain)
{
u32 addr, val;
addr = HINIC_CSR_API_CMD_STATUS_ADDR(chain->chain_type);
val = hinic_hwif_read_reg(chain->hwif, addr);
return HINIC_API_CMD_STATUS_GET(val, CONS_IDX);
}
/**
* chain_busy - check if the chain is still processing last requests
* @chain: chain to check
*
* Return 0 - Success, negative - Failure
**/
static int chain_busy(struct hinic_api_cmd_chain *chain)
{
struct hinic_hwif *hwif = chain->hwif;
struct pci_dev *pdev = hwif->pdev;
u32 prod_idx;
switch (chain->chain_type) {
case HINIC_API_CMD_WRITE_TO_MGMT_CPU:
chain->cons_idx = get_hw_cons_idx(chain);
prod_idx = chain->prod_idx;
/* check for space for a new command */
if (chain->cons_idx == MASKED_IDX(chain, prod_idx + 1)) {
dev_err(&pdev->dev, "API CMD chain %d is busy\n",
chain->chain_type);
return -EBUSY;
}
break;
default:
dev_err(&pdev->dev, "Unknown API CMD Chain type\n");
break;
}
return 0;
}
/**
* get_cell_data_size - get the data size of a specific cell type
* @type: chain type
*
* Return the data(Desc + Address) size in the cell
**/
static u8 get_cell_data_size(enum hinic_api_cmd_chain_type type)
{
u8 cell_data_size = 0;
switch (type) {
case HINIC_API_CMD_WRITE_TO_MGMT_CPU:
cell_data_size = ALIGN(API_CMD_CELL_DESC_SIZE +
API_CMD_CELL_DATA_ADDR_SIZE,
API_CMD_CELL_ALIGNMENT);
break;
default:
break;
}
return cell_data_size;
}
/**
* prepare_cell_ctrl - prepare the ctrl of the cell for the command
* @cell_ctrl: the control of the cell to set the control value into it
* @data_size: the size of the data in the cell
**/
static void prepare_cell_ctrl(u64 *cell_ctrl, u16 data_size)
{
u8 chksum;
u64 ctrl;
ctrl = HINIC_API_CMD_CELL_CTRL_SET(SIZE_8BYTES(data_size), DATA_SZ) |
HINIC_API_CMD_CELL_CTRL_SET(RD_DMA_ATTR_DEFAULT, RD_DMA_ATTR) |
HINIC_API_CMD_CELL_CTRL_SET(WR_DMA_ATTR_DEFAULT, WR_DMA_ATTR);
chksum = xor_chksum_set(&ctrl);
ctrl |= HINIC_API_CMD_CELL_CTRL_SET(chksum, XOR_CHKSUM);
/* The data in the HW should be in Big Endian Format */
*cell_ctrl = cpu_to_be64(ctrl);
}
/**
* prepare_api_cmd - prepare API CMD command
* @chain: chain for the command
* @dest: destination node on the card that will receive the command
* @cmd: command data
* @cmd_size: the command size
**/
static void prepare_api_cmd(struct hinic_api_cmd_chain *chain,
enum hinic_node_id dest,
void *cmd, u16 cmd_size)
{
struct hinic_api_cmd_cell *cell = chain->curr_node;
struct hinic_api_cmd_cell_ctxt *cell_ctxt;
struct hinic_hwif *hwif = chain->hwif;
struct pci_dev *pdev = hwif->pdev;
cell_ctxt = &chain->cell_ctxt[chain->prod_idx];
switch (chain->chain_type) {
case HINIC_API_CMD_WRITE_TO_MGMT_CPU:
cell->desc = HINIC_API_CMD_DESC_SET(SGE_DATA, API_TYPE) |
HINIC_API_CMD_DESC_SET(API_CMD_WRITE, RD_WR) |
HINIC_API_CMD_DESC_SET(NO_BYPASS, MGMT_BYPASS);
break;
default:
dev_err(&pdev->dev, "unknown Chain type\n");
return;
}
cell->desc |= HINIC_API_CMD_DESC_SET(dest, DEST) |
HINIC_API_CMD_DESC_SET(SIZE_4BYTES(cmd_size), SIZE);
cell->desc |= HINIC_API_CMD_DESC_SET(xor_chksum_set(&cell->desc),
XOR_CHKSUM);
/* The data in the HW should be in Big Endian Format */
cell->desc = cpu_to_be64(cell->desc);
memcpy(cell_ctxt->api_cmd_vaddr, cmd, cmd_size);
}
/**
* prepare_cell - prepare cell ctrl and cmd in the current cell
* @chain: chain for the command
* @dest: destination node on the card that will receive the command
* @cmd: command data
* @cmd_size: the command size
**/
static void prepare_cell(struct hinic_api_cmd_chain *chain,
enum hinic_node_id dest,
void *cmd, u16 cmd_size)
{
struct hinic_api_cmd_cell *curr_node = chain->curr_node;
u16 data_size = get_cell_data_size(chain->chain_type);
prepare_cell_ctrl(&curr_node->ctrl, data_size);
prepare_api_cmd(chain, dest, cmd, cmd_size);
}
static inline void cmd_chain_prod_idx_inc(struct hinic_api_cmd_chain *chain)
{
chain->prod_idx = MASKED_IDX(chain, chain->prod_idx + 1);
}
/**
* api_cmd_status_update - update the status in the chain struct
* @chain: chain to update
**/
static void api_cmd_status_update(struct hinic_api_cmd_chain *chain)
{
enum hinic_api_cmd_chain_type chain_type;
struct hinic_api_cmd_status *wb_status;
struct hinic_hwif *hwif = chain->hwif;
struct pci_dev *pdev = hwif->pdev;
u64 status_header;
u32 status;
wb_status = chain->wb_status;
status_header = be64_to_cpu(wb_status->header);
status = be32_to_cpu(wb_status->status);
if (HINIC_API_CMD_STATUS_GET(status, CHKSUM_ERR)) {
dev_err(&pdev->dev, "API CMD status: Xor check error\n");
return;
}
chain_type = HINIC_API_CMD_STATUS_HEADER_GET(status_header, CHAIN_ID);
if (chain_type >= HINIC_API_CMD_MAX) {
dev_err(&pdev->dev, "unknown API CMD Chain %d\n", chain_type);
return;
}
chain->cons_idx = HINIC_API_CMD_STATUS_GET(status, CONS_IDX);
}
/**
* wait_for_status_poll - wait for write to api cmd command to complete
* @chain: the chain of the command
*
* Return 0 - Success, negative - Failure
**/
static int wait_for_status_poll(struct hinic_api_cmd_chain *chain)
{
int err = -ETIMEDOUT;
unsigned long end;
end = jiffies + msecs_to_jiffies(API_CMD_TIMEOUT);
do {
api_cmd_status_update(chain);
/* wait for CI to be updated - sign for completion */
if (chain->cons_idx == chain->prod_idx) {
err = 0;
break;
}
msleep(20);
} while (time_before(jiffies, end));
return err;
}
/**
* wait_for_api_cmd_completion - wait for command to complete
* @chain: chain for the command
*
* Return 0 - Success, negative - Failure
**/
static int wait_for_api_cmd_completion(struct hinic_api_cmd_chain *chain)
{
struct hinic_hwif *hwif = chain->hwif;
struct pci_dev *pdev = hwif->pdev;
int err;
switch (chain->chain_type) {
case HINIC_API_CMD_WRITE_TO_MGMT_CPU:
err = wait_for_status_poll(chain);
if (err) {
dev_err(&pdev->dev, "API CMD Poll status timeout\n");
break;
}
break;
default:
dev_err(&pdev->dev, "unknown API CMD Chain type\n");
err = -EINVAL;
break;
}
return err;
}
/**
* api_cmd - API CMD command
* @chain: chain for the command
* @dest: destination node on the card that will receive the command
* @cmd: command data
* @cmd_size: the command size
*
* Return 0 - Success, negative - Failure
**/
static int api_cmd(struct hinic_api_cmd_chain *chain,
enum hinic_node_id dest, u8 *cmd, u16 cmd_size)
{
struct hinic_api_cmd_cell_ctxt *ctxt;
int err;
down(&chain->sem);
if (chain_busy(chain)) {
up(&chain->sem);
return -EBUSY;
}
prepare_cell(chain, dest, cmd, cmd_size);
cmd_chain_prod_idx_inc(chain);
wmb(); /* inc pi before issue the command */
set_prod_idx(chain); /* issue the command */
ctxt = &chain->cell_ctxt[chain->prod_idx];
chain->curr_node = ctxt->cell_vaddr;
err = wait_for_api_cmd_completion(chain);
up(&chain->sem);
return err;
}
/**
* hinic_api_cmd_write - Write API CMD command
* @chain: chain for write command
* @dest: destination node on the card that will receive the command
* @cmd: command data
* @size: the command size
*
* Return 0 - Success, negative - Failure
**/
int hinic_api_cmd_write(struct hinic_api_cmd_chain *chain,
enum hinic_node_id dest, u8 *cmd, u16 size)
{
/* Verify the chain type */
if (chain->chain_type == HINIC_API_CMD_WRITE_TO_MGMT_CPU)
return api_cmd(chain, dest, cmd, size);
return -EINVAL;
}
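/* Usage sketch (illustrative; the MGMT channel is the intended caller, and
 * the cmd buffer and size here are assumptions):
 *
 *	err = hinic_api_cmd_write(chain, HINIC_NODE_ID_MGMT, cmd, cmd_size);
 */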
/**
* api_cmd_hw_restart - restart the chain in the HW
* @chain: the API CMD specific chain to restart
*
* Return 0 - Success, negative - Failure
**/
static int api_cmd_hw_restart(struct hinic_api_cmd_chain *chain)
{
struct hinic_hwif *hwif = chain->hwif;
int err = -ETIMEDOUT;
unsigned long end;
u32 reg_addr, val;
/* Read Modify Write */
reg_addr = HINIC_CSR_API_CMD_CHAIN_REQ_ADDR(chain->chain_type);
val = hinic_hwif_read_reg(hwif, reg_addr);
val = HINIC_API_CMD_CHAIN_REQ_CLEAR(val, RESTART);
val |= HINIC_API_CMD_CHAIN_REQ_SET(1, RESTART);
hinic_hwif_write_reg(hwif, reg_addr, val);
end = jiffies + msecs_to_jiffies(API_CMD_TIMEOUT);
do {
val = hinic_hwif_read_reg(hwif, reg_addr);
if (!HINIC_API_CMD_CHAIN_REQ_GET(val, RESTART)) {
err = 0;
break;
}
msleep(20);
} while (time_before(jiffies, end));
return err;
}
/**
* api_cmd_ctrl_init - set the control register of a chain
* @chain: the API CMD specific chain to set control register for
**/
static void api_cmd_ctrl_init(struct hinic_api_cmd_chain *chain)
{
struct hinic_hwif *hwif = chain->hwif;
u32 addr, ctrl;
u16 cell_size;
/* Read Modify Write */
addr = HINIC_CSR_API_CMD_CHAIN_CTRL_ADDR(chain->chain_type);
cell_size = API_CMD_CELL_SIZE_VAL(chain->cell_size);
ctrl = hinic_hwif_read_reg(hwif, addr);
ctrl = HINIC_API_CMD_CHAIN_CTRL_CLEAR(ctrl, RESTART_WB_STAT) &
HINIC_API_CMD_CHAIN_CTRL_CLEAR(ctrl, XOR_ERR) &
HINIC_API_CMD_CHAIN_CTRL_CLEAR(ctrl, AEQE_EN) &
HINIC_API_CMD_CHAIN_CTRL_CLEAR(ctrl, XOR_CHK_EN) &
HINIC_API_CMD_CHAIN_CTRL_CLEAR(ctrl, CELL_SIZE);
ctrl |= HINIC_API_CMD_CHAIN_CTRL_SET(1, XOR_ERR) |
HINIC_API_CMD_CHAIN_CTRL_SET(XOR_CHK_ALL, XOR_CHK_EN) |
HINIC_API_CMD_CHAIN_CTRL_SET(cell_size, CELL_SIZE);
hinic_hwif_write_reg(hwif, addr, ctrl);
}
/**
* api_cmd_set_status_addr - set the status address of a chain in the HW
* @chain: the API CMD specific chain to set in HW status address for
**/
static void api_cmd_set_status_addr(struct hinic_api_cmd_chain *chain)
{
struct hinic_hwif *hwif = chain->hwif;
u32 addr, val;
addr = HINIC_CSR_API_CMD_STATUS_HI_ADDR(chain->chain_type);
val = upper_32_bits(chain->wb_status_paddr);
hinic_hwif_write_reg(hwif, addr, val);
addr = HINIC_CSR_API_CMD_STATUS_LO_ADDR(chain->chain_type);
val = lower_32_bits(chain->wb_status_paddr);
hinic_hwif_write_reg(hwif, addr, val);
}
/**
* api_cmd_set_num_cells - set the number cells of a chain in the HW
* @chain: the API CMD specific chain to set in HW the number of cells for
**/
static void api_cmd_set_num_cells(struct hinic_api_cmd_chain *chain)
{
struct hinic_hwif *hwif = chain->hwif;
u32 addr, val;
addr = HINIC_CSR_API_CMD_CHAIN_NUM_CELLS_ADDR(chain->chain_type);
val = chain->num_cells;
hinic_hwif_write_reg(hwif, addr, val);
}
/**
* api_cmd_head_init - set the head of a chain in the HW
* @chain: the API CMD specific chain to set in HW the head for
**/
static void api_cmd_head_init(struct hinic_api_cmd_chain *chain)
{
struct hinic_hwif *hwif = chain->hwif;
u32 addr, val;
addr = HINIC_CSR_API_CMD_CHAIN_HEAD_HI_ADDR(chain->chain_type);
val = upper_32_bits(chain->head_cell_paddr);
hinic_hwif_write_reg(hwif, addr, val);
addr = HINIC_CSR_API_CMD_CHAIN_HEAD_LO_ADDR(chain->chain_type);
val = lower_32_bits(chain->head_cell_paddr);
hinic_hwif_write_reg(hwif, addr, val);
}
/**
* api_cmd_chain_hw_clean - clean the HW
* @chain: the API CMD specific chain
**/
static void api_cmd_chain_hw_clean(struct hinic_api_cmd_chain *chain)
{
struct hinic_hwif *hwif = chain->hwif;
u32 addr, ctrl;
addr = HINIC_CSR_API_CMD_CHAIN_CTRL_ADDR(chain->chain_type);
ctrl = hinic_hwif_read_reg(hwif, addr);
ctrl = HINIC_API_CMD_CHAIN_CTRL_CLEAR(ctrl, RESTART_WB_STAT) &
HINIC_API_CMD_CHAIN_CTRL_CLEAR(ctrl, XOR_ERR) &
HINIC_API_CMD_CHAIN_CTRL_CLEAR(ctrl, AEQE_EN) &
HINIC_API_CMD_CHAIN_CTRL_CLEAR(ctrl, XOR_CHK_EN) &
HINIC_API_CMD_CHAIN_CTRL_CLEAR(ctrl, CELL_SIZE);
hinic_hwif_write_reg(hwif, addr, ctrl);
}
/**
* api_cmd_chain_hw_init - initialize the chain in the HW
* @chain: the API CMD specific chain to initialize in HW
*
* Return 0 - Success, negative - Failure
**/
static int api_cmd_chain_hw_init(struct hinic_api_cmd_chain *chain)
{
struct hinic_hwif *hwif = chain->hwif;
struct pci_dev *pdev = hwif->pdev;
int err;
api_cmd_chain_hw_clean(chain);
api_cmd_set_status_addr(chain);
err = api_cmd_hw_restart(chain);
if (err) {
dev_err(&pdev->dev, "Failed to restart API CMD HW\n");
return err;
}
api_cmd_ctrl_init(chain);
api_cmd_set_num_cells(chain);
api_cmd_head_init(chain);
return 0;
}
/**
* free_cmd_buf - free the dma buffer of API CMD command
* @chain: the API CMD specific chain of the cmd
* @cell_idx: the cell index of the cmd
**/
static void free_cmd_buf(struct hinic_api_cmd_chain *chain, int cell_idx)
{
struct hinic_api_cmd_cell_ctxt *cell_ctxt;
struct hinic_hwif *hwif = chain->hwif;
struct pci_dev *pdev = hwif->pdev;
cell_ctxt = &chain->cell_ctxt[cell_idx];
dma_free_coherent(&pdev->dev, API_CMD_BUF_SIZE,
cell_ctxt->api_cmd_vaddr,
cell_ctxt->api_cmd_paddr);
}
/**
* alloc_cmd_buf - allocate a dma buffer for API CMD command
* @chain: the API CMD specific chain for the cmd
* @cell: the cell in the HW for the cmd
* @cell_idx: the index of the cell
*
* Return 0 - Success, negative - Failure
**/
static int alloc_cmd_buf(struct hinic_api_cmd_chain *chain,
struct hinic_api_cmd_cell *cell, int cell_idx)
{
struct hinic_api_cmd_cell_ctxt *cell_ctxt;
struct hinic_hwif *hwif = chain->hwif;
struct pci_dev *pdev = hwif->pdev;
dma_addr_t cmd_paddr;
u8 *cmd_vaddr;
int err = 0;
cmd_vaddr = dma_zalloc_coherent(&pdev->dev, API_CMD_BUF_SIZE,
&cmd_paddr, GFP_KERNEL);
if (!cmd_vaddr) {
dev_err(&pdev->dev, "Failed to allocate API CMD DMA memory\n");
return -ENOMEM;
}
cell_ctxt = &chain->cell_ctxt[cell_idx];
cell_ctxt->api_cmd_vaddr = cmd_vaddr;
cell_ctxt->api_cmd_paddr = cmd_paddr;
/* set the cmd DMA address in the cell */
switch (chain->chain_type) {
case HINIC_API_CMD_WRITE_TO_MGMT_CPU:
/* The data in the HW should be in Big Endian Format */
cell->write.hw_cmd_paddr = cpu_to_be64(cmd_paddr);
break;
default:
dev_err(&pdev->dev, "Unsupported API CMD chain type\n");
free_cmd_buf(chain, cell_idx);
err = -EINVAL;
break;
}
return err;
}
/**
* api_cmd_create_cell - create API CMD cell for specific chain
* @chain: the API CMD specific chain to create its cell
* @cell_idx: the index of the cell to create
* @pre_node: previous cell
* @node_vaddr: the returned virt addr of the cell
*
* Return 0 - Success, negative - Failure
**/
static int api_cmd_create_cell(struct hinic_api_cmd_chain *chain,
int cell_idx,
struct hinic_api_cmd_cell *pre_node,
struct hinic_api_cmd_cell **node_vaddr)
{
struct hinic_api_cmd_cell_ctxt *cell_ctxt;
struct hinic_hwif *hwif = chain->hwif;
struct pci_dev *pdev = hwif->pdev;
struct hinic_api_cmd_cell *node;
dma_addr_t node_paddr;
int err;
node = dma_zalloc_coherent(&pdev->dev, chain->cell_size,
&node_paddr, GFP_KERNEL);
if (!node) {
dev_err(&pdev->dev, "Failed to allocate dma API CMD cell\n");
return -ENOMEM;
}
node->read.hw_wb_resp_paddr = 0;
cell_ctxt = &chain->cell_ctxt[cell_idx];
cell_ctxt->cell_vaddr = node;
cell_ctxt->cell_paddr = node_paddr;
if (!pre_node) {
chain->head_cell_paddr = node_paddr;
chain->head_node = node;
} else {
/* The data in the HW should be in Big Endian Format */
pre_node->next_cell_paddr = cpu_to_be64(node_paddr);
}
switch (chain->chain_type) {
case HINIC_API_CMD_WRITE_TO_MGMT_CPU:
err = alloc_cmd_buf(chain, node, cell_idx);
if (err) {
dev_err(&pdev->dev, "Failed to allocate cmd buffer\n");
goto err_alloc_cmd_buf;
}
break;
default:
dev_err(&pdev->dev, "Unsupported API CMD chain type\n");
err = -EINVAL;
goto err_alloc_cmd_buf;
}
*node_vaddr = node;
return 0;
err_alloc_cmd_buf:
dma_free_coherent(&pdev->dev, chain->cell_size, node, node_paddr);
return err;
}
/**
* api_cmd_destroy_cell - destroy API CMD cell of specific chain
* @chain: the API CMD specific chain to destroy its cell
* @cell_idx: the cell to destroy
**/
static void api_cmd_destroy_cell(struct hinic_api_cmd_chain *chain,
int cell_idx)
{
struct hinic_api_cmd_cell_ctxt *cell_ctxt;
struct hinic_hwif *hwif = chain->hwif;
struct pci_dev *pdev = hwif->pdev;
struct hinic_api_cmd_cell *node;
dma_addr_t node_paddr;
size_t node_size;
cell_ctxt = &chain->cell_ctxt[cell_idx];
node = cell_ctxt->cell_vaddr;
node_paddr = cell_ctxt->cell_paddr;
node_size = chain->cell_size;
if (cell_ctxt->api_cmd_vaddr) {
switch (chain->chain_type) {
case HINIC_API_CMD_WRITE_TO_MGMT_CPU:
free_cmd_buf(chain, cell_idx);
break;
default:
dev_err(&pdev->dev, "Unsupported API CMD chain type\n");
break;
}
dma_free_coherent(&pdev->dev, node_size, node,
node_paddr);
}
}
/**
* api_cmd_destroy_cells - destroy API CMD cells of specific chain
* @chain: the API CMD specific chain to destroy its cells
* @num_cells: number of cells to destroy
**/
static void api_cmd_destroy_cells(struct hinic_api_cmd_chain *chain,
int num_cells)
{
int cell_idx;
for (cell_idx = 0; cell_idx < num_cells; cell_idx++)
api_cmd_destroy_cell(chain, cell_idx);
}
/**
* api_cmd_create_cells - create API CMD cells for specific chain
* @chain: the API CMD specific chain
*
* Return 0 - Success, negative - Failure
**/
static int api_cmd_create_cells(struct hinic_api_cmd_chain *chain)
{
struct hinic_api_cmd_cell *node = NULL, *pre_node = NULL;
struct hinic_hwif *hwif = chain->hwif;
struct pci_dev *pdev = hwif->pdev;
int err, cell_idx;
for (cell_idx = 0; cell_idx < chain->num_cells; cell_idx++) {
err = api_cmd_create_cell(chain, cell_idx, pre_node, &node);
if (err) {
dev_err(&pdev->dev, "Failed to create API CMD cell\n");
goto err_create_cell;
}
pre_node = node;
}
/* set the final node to point to the start of the chain */
node->next_cell_paddr = cpu_to_be64(chain->head_cell_paddr);
/* set the current node to be the head */
chain->curr_node = chain->head_node;
return 0;
err_create_cell:
api_cmd_destroy_cells(chain, cell_idx);
return err;
}
/**
* api_chain_init - initialize API CMD specific chain
* @chain: the API CMD specific chain to initialize
* @attr: attributes to set in the chain
*
* Return 0 - Success, negative - Failure
**/
static int api_chain_init(struct hinic_api_cmd_chain *chain,
struct hinic_api_cmd_chain_attr *attr)
{
struct hinic_hwif *hwif = attr->hwif;
struct pci_dev *pdev = hwif->pdev;
size_t cell_ctxt_size;
chain->hwif = hwif;
chain->chain_type = attr->chain_type;
chain->num_cells = attr->num_cells;
chain->cell_size = attr->cell_size;
chain->prod_idx = 0;
chain->cons_idx = 0;
sema_init(&chain->sem, 1);
cell_ctxt_size = chain->num_cells * sizeof(*chain->cell_ctxt);
chain->cell_ctxt = devm_kzalloc(&pdev->dev, cell_ctxt_size, GFP_KERNEL);
if (!chain->cell_ctxt)
return -ENOMEM;
chain->wb_status = dma_zalloc_coherent(&pdev->dev,
sizeof(*chain->wb_status),
&chain->wb_status_paddr,
GFP_KERNEL);
if (!chain->wb_status) {
dev_err(&pdev->dev, "Failed to allocate DMA wb status\n");
return -ENOMEM;
}
return 0;
}
/**
* api_chain_free - free API CMD specific chain
* @chain: the API CMD specific chain to free
**/
static void api_chain_free(struct hinic_api_cmd_chain *chain)
{
struct hinic_hwif *hwif = chain->hwif;
struct pci_dev *pdev = hwif->pdev;
dma_free_coherent(&pdev->dev, sizeof(*chain->wb_status),
chain->wb_status, chain->wb_status_paddr);
}
/**
* api_cmd_create_chain - create API CMD specific chain
* @attr: attributes to set the chain
*
* Return the created chain
**/
static struct hinic_api_cmd_chain *
api_cmd_create_chain(struct hinic_api_cmd_chain_attr *attr)
{
struct hinic_hwif *hwif = attr->hwif;
struct pci_dev *pdev = hwif->pdev;
struct hinic_api_cmd_chain *chain;
int err;
if (attr->num_cells & (attr->num_cells - 1)) {
dev_err(&pdev->dev, "Invalid number of cells, must be power of 2\n");
return ERR_PTR(-EINVAL);
}
chain = devm_kzalloc(&pdev->dev, sizeof(*chain), GFP_KERNEL);
if (!chain)
return ERR_PTR(-ENOMEM);
err = api_chain_init(chain, attr);
if (err) {
dev_err(&pdev->dev, "Failed to initialize chain\n");
return ERR_PTR(err);
}
err = api_cmd_create_cells(chain);
if (err) {
dev_err(&pdev->dev, "Failed to create cells for API CMD chain\n");
goto err_create_cells;
}
err = api_cmd_chain_hw_init(chain);
if (err) {
dev_err(&pdev->dev, "Failed to initialize chain HW\n");
goto err_chain_hw_init;
}
return chain;
err_chain_hw_init:
api_cmd_destroy_cells(chain, chain->num_cells);
err_create_cells:
api_chain_free(chain);
return ERR_PTR(err);
}
/**
* api_cmd_destroy_chain - destroy API CMD specific chain
* @chain: the API CMD specific chain to destroy
**/
static void api_cmd_destroy_chain(struct hinic_api_cmd_chain *chain)
{
api_cmd_chain_hw_clean(chain);
api_cmd_destroy_cells(chain, chain->num_cells);
api_chain_free(chain);
}
/**
* hinic_api_cmd_init - Initialize all the API CMD chains
* @chain: the API CMD chains that are initialized
* @hwif: the hardware interface of a pci function device
*
* Return 0 - Success, negative - Failure
**/
int hinic_api_cmd_init(struct hinic_api_cmd_chain **chain,
struct hinic_hwif *hwif)
{
enum hinic_api_cmd_chain_type type, chain_type;
struct hinic_api_cmd_chain_attr attr;
struct pci_dev *pdev = hwif->pdev;
size_t hw_cell_sz;
int err;
hw_cell_sz = sizeof(struct hinic_api_cmd_cell);
attr.hwif = hwif;
attr.num_cells = API_CHAIN_NUM_CELLS;
attr.cell_size = API_CMD_CELL_SIZE(hw_cell_sz);
chain_type = HINIC_API_CMD_WRITE_TO_MGMT_CPU;
for ( ; chain_type < HINIC_API_CMD_MAX; chain_type++) {
attr.chain_type = chain_type;
if (chain_type != HINIC_API_CMD_WRITE_TO_MGMT_CPU)
continue;
chain[chain_type] = api_cmd_create_chain(&attr);
if (IS_ERR(chain[chain_type])) {
dev_err(&pdev->dev, "Failed to create chain %d\n",
chain_type);
goto err_create_chain;
}
}
return 0;
err_create_chain:
type = HINIC_API_CMD_WRITE_TO_MGMT_CPU;
for ( ; type < chain_type; type++) {
if (type != HINIC_API_CMD_WRITE_TO_MGMT_CPU)
continue;
api_cmd_destroy_chain(chain[type]);
}
return err;
}
/**
* hinic_api_cmd_free - free the API CMD chains
* @chain: the API CMD chains that are freed
**/
void hinic_api_cmd_free(struct hinic_api_cmd_chain **chain)
{
enum hinic_api_cmd_chain_type chain_type;
chain_type = HINIC_API_CMD_WRITE_TO_MGMT_CPU;
for ( ; chain_type < HINIC_API_CMD_MAX; chain_type++) {
if (chain_type != HINIC_API_CMD_WRITE_TO_MGMT_CPU)
continue;
api_cmd_destroy_chain(chain[chain_type]);
}
}
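/* Usage sketch (illustrative): the chains are created once when the
 * management channel is brought up and are torn down symmetrically:
 *
 *	struct hinic_api_cmd_chain *chains[HINIC_API_CMD_MAX];
 *
 *	err = hinic_api_cmd_init(chains, hwif);
 *	...
 *	hinic_api_cmd_free(chains);
 */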
/*
* Huawei HiNIC PCI Express Linux driver
* Copyright(c) 2017 Huawei Technologies Co., Ltd
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* for more details.
*
*/
#ifndef HINIC_HW_API_CMD_H
#define HINIC_HW_API_CMD_H
#include <linux/types.h>
#include <linux/semaphore.h>
#include "hinic_hw_if.h"
#define HINIC_API_CMD_PI_IDX_SHIFT 0
#define HINIC_API_CMD_PI_IDX_MASK 0xFFFFFF
#define HINIC_API_CMD_PI_SET(val, member) \
(((u32)(val) & HINIC_API_CMD_PI_##member##_MASK) << \
HINIC_API_CMD_PI_##member##_SHIFT)
#define HINIC_API_CMD_PI_CLEAR(val, member) \
((val) & (~(HINIC_API_CMD_PI_##member##_MASK \
<< HINIC_API_CMD_PI_##member##_SHIFT)))
#define HINIC_API_CMD_CHAIN_REQ_RESTART_SHIFT 1
#define HINIC_API_CMD_CHAIN_REQ_RESTART_MASK 0x1
#define HINIC_API_CMD_CHAIN_REQ_SET(val, member) \
(((u32)(val) & HINIC_API_CMD_CHAIN_REQ_##member##_MASK) << \
HINIC_API_CMD_CHAIN_REQ_##member##_SHIFT)
#define HINIC_API_CMD_CHAIN_REQ_GET(val, member) \
(((val) >> HINIC_API_CMD_CHAIN_REQ_##member##_SHIFT) & \
HINIC_API_CMD_CHAIN_REQ_##member##_MASK)
#define HINIC_API_CMD_CHAIN_REQ_CLEAR(val, member) \
((val) & (~(HINIC_API_CMD_CHAIN_REQ_##member##_MASK \
<< HINIC_API_CMD_CHAIN_REQ_##member##_SHIFT)))
#define HINIC_API_CMD_CHAIN_CTRL_RESTART_WB_STAT_SHIFT 1
#define HINIC_API_CMD_CHAIN_CTRL_XOR_ERR_SHIFT 2
#define HINIC_API_CMD_CHAIN_CTRL_AEQE_EN_SHIFT 4
#define HINIC_API_CMD_CHAIN_CTRL_AEQ_ID_SHIFT 8
#define HINIC_API_CMD_CHAIN_CTRL_XOR_CHK_EN_SHIFT 28
#define HINIC_API_CMD_CHAIN_CTRL_CELL_SIZE_SHIFT 30
#define HINIC_API_CMD_CHAIN_CTRL_RESTART_WB_STAT_MASK 0x1
#define HINIC_API_CMD_CHAIN_CTRL_XOR_ERR_MASK 0x1
#define HINIC_API_CMD_CHAIN_CTRL_AEQE_EN_MASK 0x1
#define HINIC_API_CMD_CHAIN_CTRL_AEQ_ID_MASK 0x3
#define HINIC_API_CMD_CHAIN_CTRL_XOR_CHK_EN_MASK 0x3
#define HINIC_API_CMD_CHAIN_CTRL_CELL_SIZE_MASK 0x3
#define HINIC_API_CMD_CHAIN_CTRL_SET(val, member) \
(((u32)(val) & HINIC_API_CMD_CHAIN_CTRL_##member##_MASK) << \
HINIC_API_CMD_CHAIN_CTRL_##member##_SHIFT)
#define HINIC_API_CMD_CHAIN_CTRL_CLEAR(val, member) \
((val) & (~(HINIC_API_CMD_CHAIN_CTRL_##member##_MASK \
<< HINIC_API_CMD_CHAIN_CTRL_##member##_SHIFT)))
#define HINIC_API_CMD_CELL_CTRL_DATA_SZ_SHIFT 0
#define HINIC_API_CMD_CELL_CTRL_RD_DMA_ATTR_SHIFT 16
#define HINIC_API_CMD_CELL_CTRL_WR_DMA_ATTR_SHIFT 24
#define HINIC_API_CMD_CELL_CTRL_XOR_CHKSUM_SHIFT 56
#define HINIC_API_CMD_CELL_CTRL_DATA_SZ_MASK 0x3F
#define HINIC_API_CMD_CELL_CTRL_RD_DMA_ATTR_MASK 0x3F
#define HINIC_API_CMD_CELL_CTRL_WR_DMA_ATTR_MASK 0x3F
#define HINIC_API_CMD_CELL_CTRL_XOR_CHKSUM_MASK 0xFF
#define HINIC_API_CMD_CELL_CTRL_SET(val, member) \
((((u64)val) & HINIC_API_CMD_CELL_CTRL_##member##_MASK) << \
HINIC_API_CMD_CELL_CTRL_##member##_SHIFT)
#define HINIC_API_CMD_DESC_API_TYPE_SHIFT 0
#define HINIC_API_CMD_DESC_RD_WR_SHIFT 1
#define HINIC_API_CMD_DESC_MGMT_BYPASS_SHIFT 2
#define HINIC_API_CMD_DESC_DEST_SHIFT 32
#define HINIC_API_CMD_DESC_SIZE_SHIFT 40
#define HINIC_API_CMD_DESC_XOR_CHKSUM_SHIFT 56
#define HINIC_API_CMD_DESC_API_TYPE_MASK 0x1
#define HINIC_API_CMD_DESC_RD_WR_MASK 0x1
#define HINIC_API_CMD_DESC_MGMT_BYPASS_MASK 0x1
#define HINIC_API_CMD_DESC_DEST_MASK 0x1F
#define HINIC_API_CMD_DESC_SIZE_MASK 0x7FF
#define HINIC_API_CMD_DESC_XOR_CHKSUM_MASK 0xFF
#define HINIC_API_CMD_DESC_SET(val, member) \
((((u64)val) & HINIC_API_CMD_DESC_##member##_MASK) << \
HINIC_API_CMD_DESC_##member##_SHIFT)
#define HINIC_API_CMD_STATUS_HEADER_CHAIN_ID_SHIFT 16
#define HINIC_API_CMD_STATUS_HEADER_CHAIN_ID_MASK 0xFF
#define HINIC_API_CMD_STATUS_HEADER_GET(val, member) \
(((val) >> HINIC_API_CMD_STATUS_HEADER_##member##_SHIFT) & \
HINIC_API_CMD_STATUS_HEADER_##member##_MASK)
#define HINIC_API_CMD_STATUS_CONS_IDX_SHIFT 0
#define HINIC_API_CMD_STATUS_CHKSUM_ERR_SHIFT 28
#define HINIC_API_CMD_STATUS_CONS_IDX_MASK 0xFFFFFF
#define HINIC_API_CMD_STATUS_CHKSUM_ERR_MASK 0x3
#define HINIC_API_CMD_STATUS_GET(val, member) \
(((val) >> HINIC_API_CMD_STATUS_##member##_SHIFT) & \
HINIC_API_CMD_STATUS_##member##_MASK)
enum hinic_api_cmd_chain_type {
HINIC_API_CMD_WRITE_TO_MGMT_CPU = 2,
HINIC_API_CMD_MAX,
};
struct hinic_api_cmd_chain_attr {
struct hinic_hwif *hwif;
enum hinic_api_cmd_chain_type chain_type;
u32 num_cells;
u16 cell_size;
};
struct hinic_api_cmd_status {
u64 header;
u32 status;
u32 rsvd0;
u32 rsvd1;
u32 rsvd2;
u64 rsvd3;
};
/* HW struct */
struct hinic_api_cmd_cell {
u64 ctrl;
/* address is 64 bit in HW struct */
u64 next_cell_paddr;
u64 desc;
/* HW struct */
union {
struct {
u64 hw_cmd_paddr;
} write;
struct {
u64 hw_wb_resp_paddr;
u64 hw_cmd_paddr;
} read;
};
};
struct hinic_api_cmd_cell_ctxt {
dma_addr_t cell_paddr;
struct hinic_api_cmd_cell *cell_vaddr;
dma_addr_t api_cmd_paddr;
u8 *api_cmd_vaddr;
};
struct hinic_api_cmd_chain {
struct hinic_hwif *hwif;
enum hinic_api_cmd_chain_type chain_type;
u32 num_cells;
u16 cell_size;
/* HW members in 24 bit format */
u32 prod_idx;
u32 cons_idx;
struct semaphore sem;
struct hinic_api_cmd_cell_ctxt *cell_ctxt;
dma_addr_t wb_status_paddr;
struct hinic_api_cmd_status *wb_status;
dma_addr_t head_cell_paddr;
struct hinic_api_cmd_cell *head_node;
struct hinic_api_cmd_cell *curr_node;
};
int hinic_api_cmd_write(struct hinic_api_cmd_chain *chain,
enum hinic_node_id dest, u8 *cmd, u16 size);
int hinic_api_cmd_init(struct hinic_api_cmd_chain **chain,
struct hinic_hwif *hwif);
void hinic_api_cmd_free(struct hinic_api_cmd_chain **chain);
#endif
/*
* Huawei HiNIC PCI Express Linux driver
* Copyright(c) 2017 Huawei Technologies Co., Ltd
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* for more details.
*
*/
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/errno.h>
#include <linux/pci.h>
#include <linux/device.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>
#include <linux/spinlock.h>
#include <linux/sizes.h>
#include <linux/atomic.h>
#include <linux/log2.h>
#include <linux/io.h>
#include <linux/completion.h>
#include <linux/err.h>
#include <asm/byteorder.h>
#include <asm/barrier.h>
#include "hinic_common.h"
#include "hinic_hw_if.h"
#include "hinic_hw_eqs.h"
#include "hinic_hw_mgmt.h"
#include "hinic_hw_wqe.h"
#include "hinic_hw_wq.h"
#include "hinic_hw_cmdq.h"
#include "hinic_hw_io.h"
#include "hinic_hw_dev.h"
#define CMDQ_CEQE_TYPE_SHIFT 0
#define CMDQ_CEQE_TYPE_MASK 0x7
#define CMDQ_CEQE_GET(val, member) \
(((val) >> CMDQ_CEQE_##member##_SHIFT) \
& CMDQ_CEQE_##member##_MASK)
#define CMDQ_WQE_ERRCODE_VAL_SHIFT 20
#define CMDQ_WQE_ERRCODE_VAL_MASK 0xF
#define CMDQ_WQE_ERRCODE_GET(val, member) \
(((val) >> CMDQ_WQE_ERRCODE_##member##_SHIFT) \
& CMDQ_WQE_ERRCODE_##member##_MASK)
#define CMDQ_DB_PI_OFF(pi) (((u16)LOWER_8_BITS(pi)) << 3)
#define CMDQ_DB_ADDR(db_base, pi) ((db_base) + CMDQ_DB_PI_OFF(pi))
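/* Example (illustrative): prod_idx 0x123 has low byte 0x23, giving a byte
 * offset of 0x23 << 3 = 0x118 from the cmdq doorbell base address.
 */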
#define CMDQ_WQE_HEADER(wqe) ((struct hinic_cmdq_header *)(wqe))
#define CMDQ_WQE_COMPLETED(ctrl_info) \
HINIC_CMDQ_CTRL_GET(ctrl_info, HW_BUSY_BIT)
#define FIRST_DATA_TO_WRITE_LAST sizeof(u64)
#define CMDQ_DB_OFF SZ_2K
#define CMDQ_WQEBB_SIZE 64
#define CMDQ_WQE_SIZE 64
#define CMDQ_DEPTH SZ_4K
#define CMDQ_WQ_PAGE_SIZE SZ_4K
#define WQE_LCMD_SIZE 64
#define WQE_SCMD_SIZE 64
#define COMPLETE_LEN 3
#define CMDQ_TIMEOUT 1000
#define CMDQ_PFN(addr, page_size) ((addr) >> (ilog2(page_size)))
#define cmdq_to_cmdqs(cmdq) container_of((cmdq) - (cmdq)->cmdq_type, \
struct hinic_cmdqs, cmdq[0])
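/* The cmdqs live contiguously in the cmdq[] array of hinic_cmdqs, so
 * stepping back cmdq_type elements lands on cmdq[0], from which
 * container_of() recovers the parent hinic_cmdqs.
 */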
#define cmdqs_to_func_to_io(cmdqs) container_of(cmdqs, \
struct hinic_func_to_io, \
cmdqs)
enum cmdq_wqe_type {
WQE_LCMD_TYPE = 0,
WQE_SCMD_TYPE = 1,
};
enum completion_format {
COMPLETE_DIRECT = 0,
COMPLETE_SGE = 1,
};
enum data_format {
DATA_SGE = 0,
DATA_DIRECT = 1,
};
enum bufdesc_len {
BUFDESC_LCMD_LEN = 2, /* 16 bytes - 2(8 byte unit) */
BUFDESC_SCMD_LEN = 3, /* 24 bytes - 3(8 byte unit) */
};
enum ctrl_sect_len {
CTRL_SECT_LEN = 1, /* 4 bytes (ctrl) - 1(8 byte unit) */
CTRL_DIRECT_SECT_LEN = 2, /* 12 bytes (ctrl + rsvd) - 2(8 byte unit) */
};
enum cmdq_scmd_type {
CMDQ_SET_ARM_CMD = 2,
};
enum cmdq_cmd_type {
CMDQ_CMD_SYNC_DIRECT_RESP = 0,
CMDQ_CMD_SYNC_SGE_RESP = 1,
};
enum completion_request {
NO_CEQ = 0,
CEQ_SET = 1,
};
/**
* hinic_alloc_cmdq_buf - alloc buffer for sending command
* @cmdqs: the cmdqs
* @cmdq_buf: the buffer returned in this struct
*
* Return 0 - Success, negative - Failure
**/
int hinic_alloc_cmdq_buf(struct hinic_cmdqs *cmdqs,
struct hinic_cmdq_buf *cmdq_buf)
{
struct hinic_hwif *hwif = cmdqs->hwif;
struct pci_dev *pdev = hwif->pdev;
cmdq_buf->buf = pci_pool_alloc(cmdqs->cmdq_buf_pool, GFP_KERNEL,
&cmdq_buf->dma_addr);
if (!cmdq_buf->buf) {
dev_err(&pdev->dev, "Failed to allocate cmd from the pool\n");
return -ENOMEM;
}
return 0;
}
/**
* hinic_free_cmdq_buf - free buffer
* @cmdqs: the cmdqs
* @cmdq_buf: the buffer to free that is in this struct
**/
void hinic_free_cmdq_buf(struct hinic_cmdqs *cmdqs,
struct hinic_cmdq_buf *cmdq_buf)
{
pci_pool_free(cmdqs->cmdq_buf_pool, cmdq_buf->buf, cmdq_buf->dma_addr);
}
static unsigned int cmdq_wqe_size_from_bdlen(enum bufdesc_len len)
{
unsigned int wqe_size = 0;
switch (len) {
case BUFDESC_LCMD_LEN:
wqe_size = WQE_LCMD_SIZE;
break;
case BUFDESC_SCMD_LEN:
wqe_size = WQE_SCMD_SIZE;
break;
}
return wqe_size;
}
static void cmdq_set_sge_completion(struct hinic_cmdq_completion *completion,
struct hinic_cmdq_buf *buf_out)
{
struct hinic_sge_resp *sge_resp = &completion->sge_resp;
hinic_set_sge(&sge_resp->sge, buf_out->dma_addr, buf_out->size);
}
static void cmdq_prepare_wqe_ctrl(struct hinic_cmdq_wqe *wqe, int wrapped,
enum hinic_cmd_ack_type ack_type,
enum hinic_mod_type mod, u8 cmd, u16 prod_idx,
enum completion_format complete_format,
enum data_format data_format,
enum bufdesc_len buf_len)
{
struct hinic_cmdq_wqe_lcmd *wqe_lcmd;
struct hinic_cmdq_wqe_scmd *wqe_scmd;
enum ctrl_sect_len ctrl_len;
struct hinic_ctrl *ctrl;
u32 saved_data;
if (data_format == DATA_SGE) {
wqe_lcmd = &wqe->wqe_lcmd;
wqe_lcmd->status.status_info = 0;
ctrl = &wqe_lcmd->ctrl;
ctrl_len = CTRL_SECT_LEN;
} else {
wqe_scmd = &wqe->direct_wqe.wqe_scmd;
wqe_scmd->status.status_info = 0;
ctrl = &wqe_scmd->ctrl;
ctrl_len = CTRL_DIRECT_SECT_LEN;
}
ctrl->ctrl_info = HINIC_CMDQ_CTRL_SET(prod_idx, PI) |
HINIC_CMDQ_CTRL_SET(cmd, CMD) |
HINIC_CMDQ_CTRL_SET(mod, MOD) |
HINIC_CMDQ_CTRL_SET(ack_type, ACK_TYPE);
CMDQ_WQE_HEADER(wqe)->header_info =
HINIC_CMDQ_WQE_HEADER_SET(buf_len, BUFDESC_LEN) |
HINIC_CMDQ_WQE_HEADER_SET(complete_format, COMPLETE_FMT) |
HINIC_CMDQ_WQE_HEADER_SET(data_format, DATA_FMT) |
HINIC_CMDQ_WQE_HEADER_SET(CEQ_SET, COMPLETE_REQ) |
HINIC_CMDQ_WQE_HEADER_SET(COMPLETE_LEN, COMPLETE_SECT_LEN) |
HINIC_CMDQ_WQE_HEADER_SET(ctrl_len, CTRL_LEN) |
HINIC_CMDQ_WQE_HEADER_SET(wrapped, TOGGLED_WRAPPED);
saved_data = CMDQ_WQE_HEADER(wqe)->saved_data;
saved_data = HINIC_SAVED_DATA_CLEAR(saved_data, ARM);
if ((cmd == CMDQ_SET_ARM_CMD) && (mod == HINIC_MOD_COMM))
CMDQ_WQE_HEADER(wqe)->saved_data |=
HINIC_SAVED_DATA_SET(1, ARM);
else
CMDQ_WQE_HEADER(wqe)->saved_data = saved_data;
}
static void cmdq_set_lcmd_bufdesc(struct hinic_cmdq_wqe_lcmd *wqe_lcmd,
struct hinic_cmdq_buf *buf_in)
{
hinic_set_sge(&wqe_lcmd->buf_desc.sge, buf_in->dma_addr, buf_in->size);
}
static void cmdq_set_direct_wqe_data(struct hinic_cmdq_direct_wqe *wqe,
void *buf_in, u32 in_size)
{
struct hinic_cmdq_wqe_scmd *wqe_scmd = &wqe->wqe_scmd;
wqe_scmd->buf_desc.buf_len = in_size;
memcpy(wqe_scmd->buf_desc.data, buf_in, in_size);
}
static void cmdq_set_lcmd_wqe(struct hinic_cmdq_wqe *wqe,
enum cmdq_cmd_type cmd_type,
struct hinic_cmdq_buf *buf_in,
struct hinic_cmdq_buf *buf_out, int wrapped,
enum hinic_cmd_ack_type ack_type,
enum hinic_mod_type mod, u8 cmd, u16 prod_idx)
{
struct hinic_cmdq_wqe_lcmd *wqe_lcmd = &wqe->wqe_lcmd;
enum completion_format complete_format;
switch (cmd_type) {
case CMDQ_CMD_SYNC_SGE_RESP:
complete_format = COMPLETE_SGE;
cmdq_set_sge_completion(&wqe_lcmd->completion, buf_out);
break;
case CMDQ_CMD_SYNC_DIRECT_RESP:
complete_format = COMPLETE_DIRECT;
wqe_lcmd->completion.direct_resp = 0;
break;
}
cmdq_prepare_wqe_ctrl(wqe, wrapped, ack_type, mod, cmd,
prod_idx, complete_format, DATA_SGE,
BUFDESC_LCMD_LEN);
cmdq_set_lcmd_bufdesc(wqe_lcmd, buf_in);
}
static void cmdq_set_direct_wqe(struct hinic_cmdq_wqe *wqe,
enum cmdq_cmd_type cmd_type,
void *buf_in, u16 in_size,
struct hinic_cmdq_buf *buf_out, int wrapped,
enum hinic_cmd_ack_type ack_type,
enum hinic_mod_type mod, u8 cmd, u16 prod_idx)
{
struct hinic_cmdq_direct_wqe *direct_wqe = &wqe->direct_wqe;
enum completion_format complete_format;
struct hinic_cmdq_wqe_scmd *wqe_scmd;
wqe_scmd = &direct_wqe->wqe_scmd;
switch (cmd_type) {
case CMDQ_CMD_SYNC_SGE_RESP:
complete_format = COMPLETE_SGE;
cmdq_set_sge_completion(&wqe_scmd->completion, buf_out);
break;
case CMDQ_CMD_SYNC_DIRECT_RESP:
complete_format = COMPLETE_DIRECT;
wqe_scmd->completion.direct_resp = 0;
break;
}
cmdq_prepare_wqe_ctrl(wqe, wrapped, ack_type, mod, cmd, prod_idx,
complete_format, DATA_DIRECT, BUFDESC_SCMD_LEN);
cmdq_set_direct_wqe_data(direct_wqe, buf_in, in_size);
}
static void cmdq_wqe_fill(void *dst, void *src)
{
memcpy(dst + FIRST_DATA_TO_WRITE_LAST, src + FIRST_DATA_TO_WRITE_LAST,
CMDQ_WQE_SIZE - FIRST_DATA_TO_WRITE_LAST);
wmb(); /* The first 8 bytes should be written last */
*(u64 *)dst = *(u64 *)src;
}
static void cmdq_fill_db(u32 *db_info,
enum hinic_cmdq_type cmdq_type, u16 prod_idx)
{
*db_info = HINIC_CMDQ_DB_INFO_SET(UPPER_8_BITS(prod_idx), HI_PROD_IDX) |
HINIC_CMDQ_DB_INFO_SET(HINIC_CTRL_PATH, PATH) |
HINIC_CMDQ_DB_INFO_SET(cmdq_type, CMDQ_TYPE) |
HINIC_CMDQ_DB_INFO_SET(HINIC_DB_CMDQ_TYPE, DB_TYPE);
}
static void cmdq_set_db(struct hinic_cmdq *cmdq,
enum hinic_cmdq_type cmdq_type, u16 prod_idx)
{
u32 db_info;
cmdq_fill_db(&db_info, cmdq_type, prod_idx);
/* The data that is written to HW should be in Big Endian Format */
db_info = cpu_to_be32(db_info);
wmb(); /* write all before the doorbell */
writel(db_info, CMDQ_DB_ADDR(cmdq->db_base, prod_idx));
}
static int cmdq_sync_cmd_direct_resp(struct hinic_cmdq *cmdq,
enum hinic_mod_type mod, u8 cmd,
struct hinic_cmdq_buf *buf_in,
u64 *resp)
{
struct hinic_cmdq_wqe *curr_cmdq_wqe, cmdq_wqe;
u16 curr_prod_idx, next_prod_idx;
int errcode, wrapped, num_wqebbs;
struct hinic_wq *wq = cmdq->wq;
struct hinic_hw_wqe *hw_wqe;
struct completion done;
/* Keep doorbell index correct. bh - for tasklet(ceq). */
spin_lock_bh(&cmdq->cmdq_lock);
/* WQE_SIZE = WQEBB_SIZE, we will get the wq element and not the shadow */
hw_wqe = hinic_get_wqe(wq, WQE_LCMD_SIZE, &curr_prod_idx);
if (IS_ERR(hw_wqe)) {
spin_unlock_bh(&cmdq->cmdq_lock);
return -EBUSY;
}
curr_cmdq_wqe = &hw_wqe->cmdq_wqe;
wrapped = cmdq->wrapped;
num_wqebbs = ALIGN(WQE_LCMD_SIZE, wq->wqebb_size) / wq->wqebb_size;
next_prod_idx = curr_prod_idx + num_wqebbs;
if (next_prod_idx >= wq->q_depth) {
cmdq->wrapped = !cmdq->wrapped;
next_prod_idx -= wq->q_depth;
}
cmdq->errcode[curr_prod_idx] = &errcode;
init_completion(&done);
cmdq->done[curr_prod_idx] = &done;
cmdq_set_lcmd_wqe(&cmdq_wqe, CMDQ_CMD_SYNC_DIRECT_RESP, buf_in, NULL,
wrapped, HINIC_CMD_ACK_TYPE_CMDQ, mod, cmd,
curr_prod_idx);
/* The data that is written to HW should be in Big Endian Format */
hinic_cpu_to_be32(&cmdq_wqe, WQE_LCMD_SIZE);
/* CMDQ WQE is not shadow, therefore wqe will be written to wq */
cmdq_wqe_fill(curr_cmdq_wqe, &cmdq_wqe);
cmdq_set_db(cmdq, HINIC_CMDQ_SYNC, next_prod_idx);
spin_unlock_bh(&cmdq->cmdq_lock);
if (!wait_for_completion_timeout(&done, msecs_to_jiffies(CMDQ_TIMEOUT))) {
spin_lock_bh(&cmdq->cmdq_lock);
if (cmdq->errcode[curr_prod_idx] == &errcode)
cmdq->errcode[curr_prod_idx] = NULL;
if (cmdq->done[curr_prod_idx] == &done)
cmdq->done[curr_prod_idx] = NULL;
spin_unlock_bh(&cmdq->cmdq_lock);
return -ETIMEDOUT;
}
smp_rmb(); /* read error code after completion */
if (resp) {
struct hinic_cmdq_wqe_lcmd *wqe_lcmd = &curr_cmdq_wqe->wqe_lcmd;
*resp = cpu_to_be64(wqe_lcmd->completion.direct_resp);
}
if (errcode != 0)
return -EFAULT;
return 0;
}
static int cmdq_set_arm_bit(struct hinic_cmdq *cmdq, void *buf_in,
u16 in_size)
{
struct hinic_cmdq_wqe *curr_cmdq_wqe, cmdq_wqe;
u16 curr_prod_idx, next_prod_idx;
struct hinic_wq *wq = cmdq->wq;
struct hinic_hw_wqe *hw_wqe;
int wrapped, num_wqebbs;
/* Keep doorbell index correct */
spin_lock(&cmdq->cmdq_lock);
/* WQE_SIZE = WQEBB_SIZE, we will get the wq element and not the shadow */
hw_wqe = hinic_get_wqe(wq, WQE_SCMD_SIZE, &curr_prod_idx);
if (IS_ERR(hw_wqe)) {
spin_unlock(&cmdq->cmdq_lock);
return -EBUSY;
}
curr_cmdq_wqe = &hw_wqe->cmdq_wqe;
wrapped = cmdq->wrapped;
num_wqebbs = ALIGN(WQE_SCMD_SIZE, wq->wqebb_size) / wq->wqebb_size;
next_prod_idx = curr_prod_idx + num_wqebbs;
if (next_prod_idx >= wq->q_depth) {
cmdq->wrapped = !cmdq->wrapped;
next_prod_idx -= wq->q_depth;
}
cmdq_set_direct_wqe(&cmdq_wqe, CMDQ_CMD_SYNC_DIRECT_RESP, buf_in,
in_size, NULL, wrapped, HINIC_CMD_ACK_TYPE_CMDQ,
HINIC_MOD_COMM, CMDQ_SET_ARM_CMD, curr_prod_idx);
/* The data that is written to HW should be in Big Endian Format */
hinic_cpu_to_be32(&cmdq_wqe, WQE_SCMD_SIZE);
/* cmdq wqe is not shadow, therefore wqe will be written to wq */
cmdq_wqe_fill(curr_cmdq_wqe, &cmdq_wqe);
cmdq_set_db(cmdq, HINIC_CMDQ_SYNC, next_prod_idx);
spin_unlock(&cmdq->cmdq_lock);
return 0;
}
static int cmdq_params_valid(struct hinic_cmdq_buf *buf_in)
{
if (buf_in->size > HINIC_CMDQ_MAX_DATA_SIZE)
return -EINVAL;
return 0;
}
/**
* hinic_cmdq_direct_resp - send command with direct data as resp
* @cmdqs: the cmdqs
* @mod: module on the card that will handle the command
* @cmd: the command
* @buf_in: the buffer for the command
* @resp: the response to return
*
* Return 0 - Success, negative - Failure
**/
int hinic_cmdq_direct_resp(struct hinic_cmdqs *cmdqs,
enum hinic_mod_type mod, u8 cmd,
struct hinic_cmdq_buf *buf_in, u64 *resp)
{
struct hinic_hwif *hwif = cmdqs->hwif;
struct pci_dev *pdev = hwif->pdev;
int err;
err = cmdq_params_valid(buf_in);
if (err) {
dev_err(&pdev->dev, "Invalid CMDQ parameters\n");
return err;
}
return cmdq_sync_cmd_direct_resp(&cmdqs->cmdq[HINIC_CMDQ_SYNC],
mod, cmd, buf_in, resp);
}
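/* Usage sketch (illustrative; the mod/cmd values are assumptions):
 *
 *	u64 resp;
 *
 *	err = hinic_cmdq_direct_resp(cmdqs, HINIC_MOD_L2NIC, cmd,
 *				     &cmdq_buf, &resp);
 */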
/**
* hinic_set_arm_bit - set arm bit for enable interrupt again
* @cmdqs: the cmdqs
* @q_type: type of queue to set the arm bit for
* @q_id: the queue number
*
* Return 0 - Success, negative - Failure
**/
int hinic_set_arm_bit(struct hinic_cmdqs *cmdqs,
enum hinic_set_arm_qtype q_type, u32 q_id)
{
struct hinic_cmdq *cmdq = &cmdqs->cmdq[HINIC_CMDQ_SYNC];
struct hinic_hwif *hwif = cmdqs->hwif;
struct pci_dev *pdev = hwif->pdev;
struct hinic_cmdq_arm_bit arm_bit;
int err;
arm_bit.q_type = q_type;
arm_bit.q_id = q_id;
err = cmdq_set_arm_bit(cmdq, &arm_bit, sizeof(arm_bit));
if (err) {
dev_err(&pdev->dev, "Failed to set arm for qid %d\n", q_id);
return err;
}
return 0;
}
static void clear_wqe_complete_bit(struct hinic_cmdq *cmdq,
struct hinic_cmdq_wqe *wqe)
{
u32 header_info = be32_to_cpu(CMDQ_WQE_HEADER(wqe)->header_info);
unsigned int bufdesc_len, wqe_size;
struct hinic_ctrl *ctrl;
bufdesc_len = HINIC_CMDQ_WQE_HEADER_GET(header_info, BUFDESC_LEN);
wqe_size = cmdq_wqe_size_from_bdlen(bufdesc_len);
if (wqe_size == WQE_LCMD_SIZE) {
struct hinic_cmdq_wqe_lcmd *wqe_lcmd = &wqe->wqe_lcmd;
ctrl = &wqe_lcmd->ctrl;
} else {
struct hinic_cmdq_direct_wqe *direct_wqe = &wqe->direct_wqe;
struct hinic_cmdq_wqe_scmd *wqe_scmd;
wqe_scmd = &direct_wqe->wqe_scmd;
ctrl = &wqe_scmd->ctrl;
}
/* clear HW busy bit */
ctrl->ctrl_info = 0;
wmb(); /* make sure the busy bit is cleared before the wqe is reused */
}
/**
* cmdq_arm_ceq_handler - cmdq completion event handler for arm command
* @cmdq: the cmdq of the arm command
* @wqe: the wqe of the arm command
*
* Return 0 - Success, negative - Failure
**/
static int cmdq_arm_ceq_handler(struct hinic_cmdq *cmdq,
struct hinic_cmdq_wqe *wqe)
{
struct hinic_cmdq_direct_wqe *direct_wqe = &wqe->direct_wqe;
struct hinic_cmdq_wqe_scmd *wqe_scmd;
struct hinic_ctrl *ctrl;
u32 ctrl_info;
wqe_scmd = &direct_wqe->wqe_scmd;
ctrl = &wqe_scmd->ctrl;
ctrl_info = be32_to_cpu(ctrl->ctrl_info);
/* HW should toggle the HW BUSY BIT */
if (!CMDQ_WQE_COMPLETED(ctrl_info))
return -EBUSY;
clear_wqe_complete_bit(cmdq, wqe);
hinic_put_wqe(cmdq->wq, WQE_SCMD_SIZE);
return 0;
}
static void cmdq_update_errcode(struct hinic_cmdq *cmdq, u16 prod_idx,
int errcode)
{
if (cmdq->errcode[prod_idx])
*cmdq->errcode[prod_idx] = errcode;
}
/**
* cmdq_sync_cmd_handler - cmdq completion event handler for sync command
* @cmdq: the cmdq of the command
* @cons_idx: the consumer index to update the error code for
* @errcode: the error code
**/
static void cmdq_sync_cmd_handler(struct hinic_cmdq *cmdq, u16 cons_idx,
int errcode)
{
u16 prod_idx = cons_idx;
spin_lock(&cmdq->cmdq_lock);
cmdq_update_errcode(cmdq, prod_idx, errcode);
wmb(); /* ensure the errcode is written before completing the request */
if (cmdq->done[prod_idx])
complete(cmdq->done[prod_idx]);
spin_unlock(&cmdq->cmdq_lock);
}
static int cmdq_cmd_ceq_handler(struct hinic_cmdq *cmdq, u16 ci,
struct hinic_cmdq_wqe *cmdq_wqe)
{
struct hinic_cmdq_wqe_lcmd *wqe_lcmd = &cmdq_wqe->wqe_lcmd;
struct hinic_status *status = &wqe_lcmd->status;
struct hinic_ctrl *ctrl = &wqe_lcmd->ctrl;
int errcode;
if (!CMDQ_WQE_COMPLETED(be32_to_cpu(ctrl->ctrl_info)))
return -EBUSY;
errcode = CMDQ_WQE_ERRCODE_GET(be32_to_cpu(status->status_info), VAL);
cmdq_sync_cmd_handler(cmdq, ci, errcode);
clear_wqe_complete_bit(cmdq, cmdq_wqe);
hinic_put_wqe(cmdq->wq, WQE_LCMD_SIZE);
return 0;
}
/**
* cmdq_ceq_handler - cmdq completion event handler
* @handle: private data for the handler (cmdqs)
* @ceqe_data: ceq element data
**/
static void cmdq_ceq_handler(void *handle, u32 ceqe_data)
{
enum hinic_cmdq_type cmdq_type = CMDQ_CEQE_GET(ceqe_data, TYPE);
struct hinic_cmdqs *cmdqs = handle;
struct hinic_cmdq *cmdq = &cmdqs->cmdq[cmdq_type];
struct hinic_cmdq_header *header;
struct hinic_hw_wqe *hw_wqe;
int err, set_arm = 0;
u32 saved_data;
u16 ci;
/* Read with the smallest WQE size first in order to determine the real WQE size */
while ((hw_wqe = hinic_read_wqe(cmdq->wq, WQE_SCMD_SIZE, &ci))) {
if (IS_ERR(hw_wqe))
break;
header = CMDQ_WQE_HEADER(&hw_wqe->cmdq_wqe);
saved_data = be32_to_cpu(header->saved_data);
if (HINIC_SAVED_DATA_GET(saved_data, ARM)) {
/* this WQE is an arm command, so the queue was armed up to here */
set_arm = 0;
if (cmdq_arm_ceq_handler(cmdq, &hw_wqe->cmdq_wqe))
break;
} else {
set_arm = 1;
hw_wqe = hinic_read_wqe(cmdq->wq, WQE_LCMD_SIZE, &ci);
if (IS_ERR(hw_wqe))
break;
if (cmdq_cmd_ceq_handler(cmdq, ci, &hw_wqe->cmdq_wqe))
break;
}
}
if (set_arm) {
struct hinic_hwif *hwif = cmdqs->hwif;
struct pci_dev *pdev = hwif->pdev;
err = hinic_set_arm_bit(cmdqs, HINIC_SET_ARM_CMDQ, cmdq_type);
if (err)
dev_err(&pdev->dev, "Failed to set arm for CMDQ\n");
}
}
/**
* cmdq_init_queue_ctxt - init the queue ctxt of a cmdq
* @cmdq_ctxt: cmdq ctxt to initialize
* @cmdq: the cmdq
* @cmdq_pages: the memory of the queue
**/
static void cmdq_init_queue_ctxt(struct hinic_cmdq_ctxt *cmdq_ctxt,
struct hinic_cmdq *cmdq,
struct hinic_cmdq_pages *cmdq_pages)
{
struct hinic_cmdq_ctxt_info *ctxt_info = &cmdq_ctxt->ctxt_info;
u64 wq_first_page_paddr, cmdq_first_block_paddr, pfn;
struct hinic_cmdqs *cmdqs = cmdq_to_cmdqs(cmdq);
struct hinic_wq *wq = cmdq->wq;
/* The data in the HW is in Big Endian Format */
wq_first_page_paddr = be64_to_cpu(*wq->block_vaddr);
pfn = CMDQ_PFN(wq_first_page_paddr, wq->wq_page_size);
ctxt_info->curr_wqe_page_pfn =
HINIC_CMDQ_CTXT_PAGE_INFO_SET(pfn, CURR_WQE_PAGE_PFN) |
HINIC_CMDQ_CTXT_PAGE_INFO_SET(HINIC_CEQ_ID_CMDQ, EQ_ID) |
HINIC_CMDQ_CTXT_PAGE_INFO_SET(1, CEQ_ARM) |
HINIC_CMDQ_CTXT_PAGE_INFO_SET(1, CEQ_EN) |
HINIC_CMDQ_CTXT_PAGE_INFO_SET(cmdq->wrapped, WRAPPED);
/* block PFN - Read Modify Write */
cmdq_first_block_paddr = cmdq_pages->page_paddr;
pfn = CMDQ_PFN(cmdq_first_block_paddr, wq->wq_page_size);
ctxt_info->wq_block_pfn =
HINIC_CMDQ_CTXT_BLOCK_INFO_SET(pfn, WQ_BLOCK_PFN) |
HINIC_CMDQ_CTXT_BLOCK_INFO_SET(atomic_read(&wq->cons_idx), CI);
cmdq_ctxt->func_idx = HINIC_HWIF_FUNC_IDX(cmdqs->hwif);
cmdq_ctxt->cmdq_type = cmdq->cmdq_type;
}
/**
* init_cmdq - initialize cmdq
* @cmdq: the cmdq
* @wq: the wq attached to the cmdq
* @q_type: the cmdq type of the cmdq
* @db_area: doorbell area for the cmdq
*
* Return 0 - Success, negative - Failure
**/
static int init_cmdq(struct hinic_cmdq *cmdq, struct hinic_wq *wq,
enum hinic_cmdq_type q_type, void __iomem *db_area)
{
int err;
cmdq->wq = wq;
cmdq->cmdq_type = q_type;
cmdq->wrapped = 1;
spin_lock_init(&cmdq->cmdq_lock);
cmdq->done = vzalloc(wq->q_depth * sizeof(*cmdq->done));
if (!cmdq->done)
return -ENOMEM;
cmdq->errcode = vzalloc(wq->q_depth * sizeof(*cmdq->errcode));
if (!cmdq->errcode) {
err = -ENOMEM;
goto err_errcode;
}
cmdq->db_base = db_area + CMDQ_DB_OFF;
return 0;
err_errcode:
vfree(cmdq->done);
return err;
}
/**
* free_cmdq - Free cmdq
* @cmdq: the cmdq to free
**/
static void free_cmdq(struct hinic_cmdq *cmdq)
{
vfree(cmdq->errcode);
vfree(cmdq->done);
}
/**
* init_cmdqs_ctxt - write the cmdq ctxt to HW after init all cmdq
* @hwdev: the NIC HW device
* @cmdqs: cmdqs to write the ctxts for
* @db_area: doorbell area for all the cmdqs
*
* Return 0 - Success, negative - Failure
**/
static int init_cmdqs_ctxt(struct hinic_hwdev *hwdev,
struct hinic_cmdqs *cmdqs, void __iomem **db_area)
{
struct hinic_hwif *hwif = hwdev->hwif;
enum hinic_cmdq_type type, cmdq_type;
struct hinic_cmdq_ctxt *cmdq_ctxts;
struct pci_dev *pdev = hwif->pdev;
struct hinic_pfhwdev *pfhwdev;
size_t cmdq_ctxts_size;
int err;
if (!HINIC_IS_PF(hwif) && !HINIC_IS_PPF(hwif)) {
dev_err(&pdev->dev, "Unsupported PCI function type\n");
return -EINVAL;
}
cmdq_ctxts_size = HINIC_MAX_CMDQ_TYPES * sizeof(*cmdq_ctxts);
cmdq_ctxts = devm_kzalloc(&pdev->dev, cmdq_ctxts_size, GFP_KERNEL);
if (!cmdq_ctxts)
return -ENOMEM;
pfhwdev = container_of(hwdev, struct hinic_pfhwdev, hwdev);
cmdq_type = HINIC_CMDQ_SYNC;
for (; cmdq_type < HINIC_MAX_CMDQ_TYPES; cmdq_type++) {
err = init_cmdq(&cmdqs->cmdq[cmdq_type],
&cmdqs->saved_wqs[cmdq_type], cmdq_type,
db_area[cmdq_type]);
if (err) {
dev_err(&pdev->dev, "Failed to initialize cmdq\n");
goto err_init_cmdq;
}
cmdq_init_queue_ctxt(&cmdq_ctxts[cmdq_type],
&cmdqs->cmdq[cmdq_type],
&cmdqs->cmdq_pages);
}
/* Write the CMDQ ctxts */
cmdq_type = HINIC_CMDQ_SYNC;
for (; cmdq_type < HINIC_MAX_CMDQ_TYPES; cmdq_type++) {
err = hinic_msg_to_mgmt(&pfhwdev->pf_to_mgmt, HINIC_MOD_COMM,
HINIC_COMM_CMD_CMDQ_CTXT_SET,
&cmdq_ctxts[cmdq_type],
sizeof(cmdq_ctxts[cmdq_type]),
NULL, NULL, HINIC_MGMT_MSG_SYNC);
if (err) {
dev_err(&pdev->dev, "Failed to set CMDQ CTXT type = %d\n",
cmdq_type);
goto err_write_cmdq_ctxt;
}
}
devm_kfree(&pdev->dev, cmdq_ctxts);
return 0;
err_write_cmdq_ctxt:
cmdq_type = HINIC_MAX_CMDQ_TYPES;
err_init_cmdq:
for (type = HINIC_CMDQ_SYNC; type < cmdq_type; type++)
free_cmdq(&cmdqs->cmdq[type]);
devm_kfree(&pdev->dev, cmdq_ctxts);
return err;
}
/**
* hinic_init_cmdqs - init all cmdqs
* @cmdqs: cmdqs to init
* @hwif: HW interface for accessing cmdqs
* @db_area: doorbell areas for all the cmdqs
*
* Return 0 - Success, negative - Failure
**/
int hinic_init_cmdqs(struct hinic_cmdqs *cmdqs, struct hinic_hwif *hwif,
void __iomem **db_area)
{
struct hinic_func_to_io *func_to_io = cmdqs_to_func_to_io(cmdqs);
struct pci_dev *pdev = hwif->pdev;
struct hinic_hwdev *hwdev;
size_t saved_wqs_size;
u16 max_wqe_size;
int err;
cmdqs->hwif = hwif;
cmdqs->cmdq_buf_pool = pci_pool_create("hinic_cmdq", pdev,
HINIC_CMDQ_BUF_SIZE,
HINIC_CMDQ_BUF_SIZE, 0);
if (!cmdqs->cmdq_buf_pool)
return -ENOMEM;
saved_wqs_size = HINIC_MAX_CMDQ_TYPES * sizeof(struct hinic_wq);
cmdqs->saved_wqs = devm_kzalloc(&pdev->dev, saved_wqs_size, GFP_KERNEL);
if (!cmdqs->saved_wqs) {
err = -ENOMEM;
goto err_saved_wqs;
}
max_wqe_size = WQE_LCMD_SIZE;
err = hinic_wqs_cmdq_alloc(&cmdqs->cmdq_pages, cmdqs->saved_wqs, hwif,
HINIC_MAX_CMDQ_TYPES, CMDQ_WQEBB_SIZE,
CMDQ_WQ_PAGE_SIZE, CMDQ_DEPTH, max_wqe_size);
if (err) {
dev_err(&pdev->dev, "Failed to allocate CMDQ wqs\n");
goto err_cmdq_wqs;
}
hwdev = container_of(func_to_io, struct hinic_hwdev, func_to_io);
err = init_cmdqs_ctxt(hwdev, cmdqs, db_area);
if (err) {
dev_err(&pdev->dev, "Failed to write cmdq ctxt\n");
goto err_cmdq_ctxt;
}
hinic_ceq_register_cb(&func_to_io->ceqs, HINIC_CEQ_CMDQ, cmdqs,
cmdq_ceq_handler);
return 0;
err_cmdq_ctxt:
hinic_wqs_cmdq_free(&cmdqs->cmdq_pages, cmdqs->saved_wqs,
HINIC_MAX_CMDQ_TYPES);
err_cmdq_wqs:
devm_kfree(&pdev->dev, cmdqs->saved_wqs);
err_saved_wqs:
pci_pool_destroy(cmdqs->cmdq_buf_pool);
return err;
}
/**
* hinic_free_cmdqs - free all cmdqs
* @cmdqs: cmdqs to free
**/
void hinic_free_cmdqs(struct hinic_cmdqs *cmdqs)
{
struct hinic_func_to_io *func_to_io = cmdqs_to_func_to_io(cmdqs);
struct hinic_hwif *hwif = cmdqs->hwif;
struct pci_dev *pdev = hwif->pdev;
enum hinic_cmdq_type cmdq_type;
hinic_ceq_unregister_cb(&func_to_io->ceqs, HINIC_CEQ_CMDQ);
cmdq_type = HINIC_CMDQ_SYNC;
for (; cmdq_type < HINIC_MAX_CMDQ_TYPES; cmdq_type++)
free_cmdq(&cmdqs->cmdq[cmdq_type]);
hinic_wqs_cmdq_free(&cmdqs->cmdq_pages, cmdqs->saved_wqs,
HINIC_MAX_CMDQ_TYPES);
devm_kfree(&pdev->dev, cmdqs->saved_wqs);
pci_pool_destroy(cmdqs->cmdq_buf_pool);
}
/*
* Huawei HiNIC PCI Express Linux driver
* Copyright(c) 2017 Huawei Technologies Co., Ltd
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* for more details.
*
*/
#ifndef HINIC_CMDQ_H
#define HINIC_CMDQ_H
#include <linux/types.h>
#include <linux/spinlock.h>
#include <linux/completion.h>
#include <linux/pci.h>
#include "hinic_hw_if.h"
#include "hinic_hw_wq.h"
#define HINIC_CMDQ_CTXT_CURR_WQE_PAGE_PFN_SHIFT 0
#define HINIC_CMDQ_CTXT_EQ_ID_SHIFT 56
#define HINIC_CMDQ_CTXT_CEQ_ARM_SHIFT 61
#define HINIC_CMDQ_CTXT_CEQ_EN_SHIFT 62
#define HINIC_CMDQ_CTXT_WRAPPED_SHIFT 63
#define HINIC_CMDQ_CTXT_CURR_WQE_PAGE_PFN_MASK 0xFFFFFFFFFFFFF
#define HINIC_CMDQ_CTXT_EQ_ID_MASK 0x1F
#define HINIC_CMDQ_CTXT_CEQ_ARM_MASK 0x1
#define HINIC_CMDQ_CTXT_CEQ_EN_MASK 0x1
#define HINIC_CMDQ_CTXT_WRAPPED_MASK 0x1
#define HINIC_CMDQ_CTXT_PAGE_INFO_SET(val, member) \
(((u64)(val) & HINIC_CMDQ_CTXT_##member##_MASK) \
<< HINIC_CMDQ_CTXT_##member##_SHIFT)
#define HINIC_CMDQ_CTXT_PAGE_INFO_CLEAR(val, member) \
((val) & (~((u64)HINIC_CMDQ_CTXT_##member##_MASK \
<< HINIC_CMDQ_CTXT_##member##_SHIFT)))
#define HINIC_CMDQ_CTXT_WQ_BLOCK_PFN_SHIFT 0
#define HINIC_CMDQ_CTXT_CI_SHIFT 52
#define HINIC_CMDQ_CTXT_WQ_BLOCK_PFN_MASK 0xFFFFFFFFFFFFF
#define HINIC_CMDQ_CTXT_CI_MASK 0xFFF
#define HINIC_CMDQ_CTXT_BLOCK_INFO_SET(val, member) \
(((u64)(val) & HINIC_CMDQ_CTXT_##member##_MASK) \
<< HINIC_CMDQ_CTXT_##member##_SHIFT)
#define HINIC_CMDQ_CTXT_BLOCK_INFO_CLEAR(val, member) \
((val) & (~((u64)HINIC_CMDQ_CTXT_##member##_MASK \
<< HINIC_CMDQ_CTXT_##member##_SHIFT)))
#define HINIC_SAVED_DATA_ARM_SHIFT 31
#define HINIC_SAVED_DATA_ARM_MASK 0x1
#define HINIC_SAVED_DATA_SET(val, member) \
(((u32)(val) & HINIC_SAVED_DATA_##member##_MASK) \
<< HINIC_SAVED_DATA_##member##_SHIFT)
#define HINIC_SAVED_DATA_GET(val, member) \
(((val) >> HINIC_SAVED_DATA_##member##_SHIFT) \
& HINIC_SAVED_DATA_##member##_MASK)
#define HINIC_SAVED_DATA_CLEAR(val, member) \
((val) & (~(HINIC_SAVED_DATA_##member##_MASK \
<< HINIC_SAVED_DATA_##member##_SHIFT)))
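/*
* The SET/GET/CLEAR macros follow one pattern: mask the value, then
* shift it into (or out of) its field. Worked example, for illustration
* only:
*
*	u32 saved_data = HINIC_SAVED_DATA_SET(1, ARM);	yields 1 << 31
*
*	HINIC_SAVED_DATA_GET(saved_data, ARM)		yields 1
*	HINIC_SAVED_DATA_CLEAR(saved_data, ARM)		yields 0
*/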
#define HINIC_CMDQ_DB_INFO_HI_PROD_IDX_SHIFT 0
#define HINIC_CMDQ_DB_INFO_PATH_SHIFT 23
#define HINIC_CMDQ_DB_INFO_CMDQ_TYPE_SHIFT 24
#define HINIC_CMDQ_DB_INFO_DB_TYPE_SHIFT 27
#define HINIC_CMDQ_DB_INFO_HI_PROD_IDX_MASK 0xFF
#define HINIC_CMDQ_DB_INFO_PATH_MASK 0x1
#define HINIC_CMDQ_DB_INFO_CMDQ_TYPE_MASK 0x7
#define HINIC_CMDQ_DB_INFO_DB_TYPE_MASK 0x1F
#define HINIC_CMDQ_DB_INFO_SET(val, member) \
(((u32)(val) & HINIC_CMDQ_DB_INFO_##member##_MASK) \
<< HINIC_CMDQ_DB_INFO_##member##_SHIFT)
#define HINIC_CMDQ_BUF_SIZE 2048
#define HINIC_CMDQ_BUF_HW_RSVD 8
#define HINIC_CMDQ_MAX_DATA_SIZE (HINIC_CMDQ_BUF_SIZE - \
HINIC_CMDQ_BUF_HW_RSVD)
enum hinic_cmdq_type {
HINIC_CMDQ_SYNC,
HINIC_MAX_CMDQ_TYPES,
};
enum hinic_set_arm_qtype {
HINIC_SET_ARM_CMDQ,
};
enum hinic_cmd_ack_type {
HINIC_CMD_ACK_TYPE_CMDQ,
};
struct hinic_cmdq_buf {
void *buf;
dma_addr_t dma_addr;
size_t size;
};
struct hinic_cmdq_arm_bit {
u32 q_type;
u32 q_id;
};
struct hinic_cmdq_ctxt_info {
u64 curr_wqe_page_pfn;
u64 wq_block_pfn;
};
struct hinic_cmdq_ctxt {
u8 status;
u8 version;
u8 rsvd0[6];
u16 func_idx;
u8 cmdq_type;
u8 rsvd1[1];
u8 rsvd2[4];
struct hinic_cmdq_ctxt_info ctxt_info;
};
struct hinic_cmdq {
struct hinic_wq *wq;
enum hinic_cmdq_type cmdq_type;
int wrapped;
/* Lock for keeping the doorbell order */
spinlock_t cmdq_lock;
struct completion **done;
int **errcode;
/* doorbell area */
void __iomem *db_base;
};
struct hinic_cmdqs {
struct hinic_hwif *hwif;
struct pci_pool *cmdq_buf_pool;
struct hinic_wq *saved_wqs;
struct hinic_cmdq_pages cmdq_pages;
struct hinic_cmdq cmdq[HINIC_MAX_CMDQ_TYPES];
};
int hinic_alloc_cmdq_buf(struct hinic_cmdqs *cmdqs,
struct hinic_cmdq_buf *cmdq_buf);
void hinic_free_cmdq_buf(struct hinic_cmdqs *cmdqs,
struct hinic_cmdq_buf *cmdq_buf);
int hinic_cmdq_direct_resp(struct hinic_cmdqs *cmdqs,
enum hinic_mod_type mod, u8 cmd,
struct hinic_cmdq_buf *buf_in, u64 *out_param);
int hinic_set_arm_bit(struct hinic_cmdqs *cmdqs,
enum hinic_set_arm_qtype q_type, u32 q_id);
int hinic_init_cmdqs(struct hinic_cmdqs *cmdqs, struct hinic_hwif *hwif,
void __iomem **db_area);
void hinic_free_cmdqs(struct hinic_cmdqs *cmdqs);
#endif
/*
* Huawei HiNIC PCI Express Linux driver
* Copyright(c) 2017 Huawei Technologies Co., Ltd
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* for more details.
*
*/
#ifndef HINIC_HW_CSR_H
#define HINIC_HW_CSR_H
/* HW interface registers */
#define HINIC_CSR_FUNC_ATTR0_ADDR 0x0
#define HINIC_CSR_FUNC_ATTR1_ADDR 0x4
#define HINIC_CSR_FUNC_ATTR4_ADDR 0x10
#define HINIC_CSR_FUNC_ATTR5_ADDR 0x14
#define HINIC_DMA_ATTR_BASE 0xC80
#define HINIC_ELECTION_BASE 0x4200
#define HINIC_DMA_ATTR_STRIDE 0x4
#define HINIC_CSR_DMA_ATTR_ADDR(idx) \
(HINIC_DMA_ATTR_BASE + (idx) * HINIC_DMA_ATTR_STRIDE)
#define HINIC_PPF_ELECTION_STRIDE 0x4
#define HINIC_CSR_MAX_PORTS 4
#define HINIC_CSR_PPF_ELECTION_ADDR(idx) \
(HINIC_ELECTION_BASE + (idx) * HINIC_PPF_ELECTION_STRIDE)
/* API CMD registers */
#define HINIC_CSR_API_CMD_BASE 0xF000
#define HINIC_CSR_API_CMD_STRIDE 0x100
#define HINIC_CSR_API_CMD_CHAIN_HEAD_HI_ADDR(idx) \
(HINIC_CSR_API_CMD_BASE + 0x0 + (idx) * HINIC_CSR_API_CMD_STRIDE)
#define HINIC_CSR_API_CMD_CHAIN_HEAD_LO_ADDR(idx) \
(HINIC_CSR_API_CMD_BASE + 0x4 + (idx) * HINIC_CSR_API_CMD_STRIDE)
#define HINIC_CSR_API_CMD_STATUS_HI_ADDR(idx) \
(HINIC_CSR_API_CMD_BASE + 0x8 + (idx) * HINIC_CSR_API_CMD_STRIDE)
#define HINIC_CSR_API_CMD_STATUS_LO_ADDR(idx) \
(HINIC_CSR_API_CMD_BASE + 0xC + (idx) * HINIC_CSR_API_CMD_STRIDE)
#define HINIC_CSR_API_CMD_CHAIN_NUM_CELLS_ADDR(idx) \
(HINIC_CSR_API_CMD_BASE + 0x10 + (idx) * HINIC_CSR_API_CMD_STRIDE)
#define HINIC_CSR_API_CMD_CHAIN_CTRL_ADDR(idx) \
(HINIC_CSR_API_CMD_BASE + 0x14 + (idx) * HINIC_CSR_API_CMD_STRIDE)
#define HINIC_CSR_API_CMD_CHAIN_PI_ADDR(idx) \
(HINIC_CSR_API_CMD_BASE + 0x1C + (idx) * HINIC_CSR_API_CMD_STRIDE)
#define HINIC_CSR_API_CMD_CHAIN_REQ_ADDR(idx) \
(HINIC_CSR_API_CMD_BASE + 0x20 + (idx) * HINIC_CSR_API_CMD_STRIDE)
#define HINIC_CSR_API_CMD_STATUS_ADDR(idx) \
(HINIC_CSR_API_CMD_BASE + 0x30 + (idx) * HINIC_CSR_API_CMD_STRIDE)
/* MSI-X registers */
#define HINIC_CSR_MSIX_CTRL_BASE 0x2000
#define HINIC_CSR_MSIX_CNT_BASE 0x2004
#define HINIC_CSR_MSIX_STRIDE 0x8
#define HINIC_CSR_MSIX_CTRL_ADDR(idx) \
(HINIC_CSR_MSIX_CTRL_BASE + (idx) * HINIC_CSR_MSIX_STRIDE)
#define HINIC_CSR_MSIX_CNT_ADDR(idx) \
(HINIC_CSR_MSIX_CNT_BASE + (idx) * HINIC_CSR_MSIX_STRIDE)
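/*
* Example of the stride-based addressing, for illustration: the control
* register of MSI-X entry 3 is at 0x2000 + 3 * 0x8 = 0x2018 and its
* counter register at 0x2004 + 3 * 0x8 = 0x201C.
*/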
/* EQ registers */
#define HINIC_AEQ_MTT_OFF_BASE_ADDR 0x200
#define HINIC_CEQ_MTT_OFF_BASE_ADDR 0x400
#define HINIC_EQ_MTT_OFF_STRIDE 0x40
#define HINIC_CSR_AEQ_MTT_OFF(id) \
(HINIC_AEQ_MTT_OFF_BASE_ADDR + (id) * HINIC_EQ_MTT_OFF_STRIDE)
#define HINIC_CSR_CEQ_MTT_OFF(id) \
(HINIC_CEQ_MTT_OFF_BASE_ADDR + (id) * HINIC_EQ_MTT_OFF_STRIDE)
#define HINIC_CSR_EQ_PAGE_OFF_STRIDE 8
#define HINIC_CSR_AEQ_HI_PHYS_ADDR_REG(q_id, pg_num) \
(HINIC_CSR_AEQ_MTT_OFF(q_id) + \
(pg_num) * HINIC_CSR_EQ_PAGE_OFF_STRIDE)
#define HINIC_CSR_CEQ_HI_PHYS_ADDR_REG(q_id, pg_num) \
(HINIC_CSR_CEQ_MTT_OFF(q_id) + \
(pg_num) * HINIC_CSR_EQ_PAGE_OFF_STRIDE)
#define HINIC_CSR_AEQ_LO_PHYS_ADDR_REG(q_id, pg_num) \
(HINIC_CSR_AEQ_MTT_OFF(q_id) + \
(pg_num) * HINIC_CSR_EQ_PAGE_OFF_STRIDE + 4)
#define HINIC_CSR_CEQ_LO_PHYS_ADDR_REG(q_id, pg_num) \
(HINIC_CSR_CEQ_MTT_OFF(q_id) + \
(pg_num) * HINIC_CSR_EQ_PAGE_OFF_STRIDE + 4)
#define HINIC_AEQ_CTRL_0_ADDR_BASE 0xE00
#define HINIC_AEQ_CTRL_1_ADDR_BASE 0xE04
#define HINIC_AEQ_CONS_IDX_ADDR_BASE 0xE08
#define HINIC_AEQ_PROD_IDX_ADDR_BASE 0xE0C
#define HINIC_CEQ_CTRL_0_ADDR_BASE 0x1000
#define HINIC_CEQ_CTRL_1_ADDR_BASE 0x1004
#define HINIC_CEQ_CONS_IDX_ADDR_BASE 0x1008
#define HINIC_CEQ_PROD_IDX_ADDR_BASE 0x100C
#define HINIC_EQ_OFF_STRIDE 0x80
#define HINIC_CSR_AEQ_CTRL_0_ADDR(idx) \
(HINIC_AEQ_CTRL_0_ADDR_BASE + (idx) * HINIC_EQ_OFF_STRIDE)
#define HINIC_CSR_AEQ_CTRL_1_ADDR(idx) \
(HINIC_AEQ_CTRL_1_ADDR_BASE + (idx) * HINIC_EQ_OFF_STRIDE)
#define HINIC_CSR_AEQ_CONS_IDX_ADDR(idx) \
(HINIC_AEQ_CONS_IDX_ADDR_BASE + (idx) * HINIC_EQ_OFF_STRIDE)
#define HINIC_CSR_AEQ_PROD_IDX_ADDR(idx) \
(HINIC_AEQ_PROD_IDX_ADDR_BASE + (idx) * HINIC_EQ_OFF_STRIDE)
#define HINIC_CSR_CEQ_CTRL_0_ADDR(idx) \
(HINIC_CEQ_CTRL_0_ADDR_BASE + (idx) * HINIC_EQ_OFF_STRIDE)
#define HINIC_CSR_CEQ_CTRL_1_ADDR(idx) \
(HINIC_CEQ_CTRL_1_ADDR_BASE + (idx) * HINIC_EQ_OFF_STRIDE)
#define HINIC_CSR_CEQ_CONS_IDX_ADDR(idx) \
(HINIC_CEQ_CONS_IDX_ADDR_BASE + (idx) * HINIC_EQ_OFF_STRIDE)
#define HINIC_CSR_CEQ_PROD_IDX_ADDR(idx) \
(HINIC_CEQ_PROD_IDX_ADDR_BASE + (idx) * HINIC_EQ_OFF_STRIDE)
#endif
/*
* Huawei HiNIC PCI Express Linux driver
* Copyright(c) 2017 Huawei Technologies Co., Ltd
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* for more details.
*
*/
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/pci.h>
#include <linux/device.h>
#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/bitops.h>
#include <linux/delay.h>
#include <linux/jiffies.h>
#include <linux/log2.h>
#include <linux/err.h>
#include "hinic_hw_if.h"
#include "hinic_hw_eqs.h"
#include "hinic_hw_mgmt.h"
#include "hinic_hw_qp_ctxt.h"
#include "hinic_hw_qp.h"
#include "hinic_hw_io.h"
#include "hinic_hw_dev.h"
#define IO_STATUS_TIMEOUT 100
#define OUTBOUND_STATE_TIMEOUT 100
#define DB_STATE_TIMEOUT 100
#define MAX_IRQS(max_qps, num_aeqs, num_ceqs) \
(2 * (max_qps) + (num_aeqs) + (num_ceqs))
#define ADDR_IN_4BYTES(addr) ((addr) >> 2)
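/*
* MAX_IRQS reflects one MSI-X vector per SQ and one per RQ (2 per QP),
* plus one per AEQ and per CEQ. Worked example (numbers are
* hypothetical): 32 QPs with 4 AEQs and 4 CEQs need at most
* 2 * 32 + 4 + 4 = 72 vectors.
*/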
enum intr_type {
INTR_MSIX_TYPE,
};
enum io_status {
IO_STOPPED = 0,
IO_RUNNING = 1,
};
enum hw_ioctxt_set_cmdq_depth {
HW_IOCTXT_SET_CMDQ_DEPTH_DEFAULT,
};
/* HW struct */
struct hinic_dev_cap {
u8 status;
u8 version;
u8 rsvd0[6];
u8 rsvd1[5];
u8 intr_type;
u8 rsvd2[66];
u16 max_sqs;
u16 max_rqs;
u8 rsvd3[208];
};
struct rx_buf_sz {
int idx;
size_t sz;
};
static struct rx_buf_sz rx_buf_sz_table[] = {
{0, 32},
{1, 64},
{2, 96},
{3, 128},
{4, 192},
{5, 256},
{6, 384},
{7, 512},
{8, 768},
{9, 1024},
{10, 1536},
{11, 2048},
{12, 3072},
{13, 4096},
{14, 8192},
{15, 16384},
{-1, -1},
};
/**
* get_capability - convert device capabilities to NIC capabilities
* @hwdev: the HW device to set and convert device capabilities for
* @dev_cap: device capabilities from FW
*
* Return 0 - Success, negative - Failure
**/
static int get_capability(struct hinic_hwdev *hwdev,
struct hinic_dev_cap *dev_cap)
{
struct hinic_cap *nic_cap = &hwdev->nic_cap;
int num_aeqs, num_ceqs, num_irqs;
if (!HINIC_IS_PF(hwdev->hwif) && !HINIC_IS_PPF(hwdev->hwif))
return -EINVAL;
if (dev_cap->intr_type != INTR_MSIX_TYPE)
return -EFAULT;
num_aeqs = HINIC_HWIF_NUM_AEQS(hwdev->hwif);
num_ceqs = HINIC_HWIF_NUM_CEQS(hwdev->hwif);
num_irqs = HINIC_HWIF_NUM_IRQS(hwdev->hwif);
/* Each QP has its own (SQ + RQ) interrupts */
nic_cap->num_qps = (num_irqs - (num_aeqs + num_ceqs)) / 2;
if (nic_cap->num_qps > HINIC_Q_CTXT_MAX)
nic_cap->num_qps = HINIC_Q_CTXT_MAX;
/* num_qps must be power of 2 */
nic_cap->num_qps = BIT(fls(nic_cap->num_qps) - 1);
nic_cap->max_qps = dev_cap->max_sqs + 1;
if (nic_cap->max_qps != (dev_cap->max_rqs + 1))
return -EFAULT;
if (nic_cap->num_qps > nic_cap->max_qps)
nic_cap->num_qps = nic_cap->max_qps;
return 0;
}
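/*
* Worked example of the QP math above (numbers are hypothetical): with
* num_irqs = 32, num_aeqs = 4 and num_ceqs = 4, there are
* (32 - 8) / 2 = 12 possible SQ/RQ interrupt pairs; rounding down to a
* power of 2 gives num_qps = 8, which is then clamped by
* HINIC_Q_CTXT_MAX and by the FW-reported max_qps.
*/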
/**
* get_cap_from_fw - get device capabilities from FW
* @pfhwdev: the PF HW device to get capabilities for
*
* Return 0 - Success, negative - Failure
**/
static int get_cap_from_fw(struct hinic_pfhwdev *pfhwdev)
{
struct hinic_hwdev *hwdev = &pfhwdev->hwdev;
struct hinic_hwif *hwif = hwdev->hwif;
struct pci_dev *pdev = hwif->pdev;
struct hinic_dev_cap dev_cap;
u16 in_len, out_len;
int err;
in_len = 0;
out_len = sizeof(dev_cap);
err = hinic_msg_to_mgmt(&pfhwdev->pf_to_mgmt, HINIC_MOD_CFGM,
HINIC_CFG_NIC_CAP, &dev_cap, in_len, &dev_cap,
&out_len, HINIC_MGMT_MSG_SYNC);
if (err) {
dev_err(&pdev->dev, "Failed to get capability from FW\n");
return err;
}
return get_capability(hwdev, &dev_cap);
}
/**
* get_dev_cap - get device capabilities
* @hwdev: the NIC HW device to get capabilities for
*
* Return 0 - Success, negative - Failure
**/
static int get_dev_cap(struct hinic_hwdev *hwdev)
{
struct hinic_hwif *hwif = hwdev->hwif;
struct pci_dev *pdev = hwif->pdev;
struct hinic_pfhwdev *pfhwdev;
int err;
switch (HINIC_FUNC_TYPE(hwif)) {
case HINIC_PPF:
case HINIC_PF:
pfhwdev = container_of(hwdev, struct hinic_pfhwdev, hwdev);
err = get_cap_from_fw(pfhwdev);
if (err) {
dev_err(&pdev->dev, "Failed to get capability from FW\n");
return err;
}
break;
default:
dev_err(&pdev->dev, "Unsupported PCI Function type\n");
return -EINVAL;
}
return 0;
}
/**
* init_msix - enable the msix and save the entries
* @hwdev: the NIC HW device
*
* Return 0 - Success, negative - Failure
**/
static int init_msix(struct hinic_hwdev *hwdev)
{
struct hinic_hwif *hwif = hwdev->hwif;
struct pci_dev *pdev = hwif->pdev;
int nr_irqs, num_aeqs, num_ceqs;
size_t msix_entries_size;
int i, err;
num_aeqs = HINIC_HWIF_NUM_AEQS(hwif);
num_ceqs = HINIC_HWIF_NUM_CEQS(hwif);
nr_irqs = MAX_IRQS(HINIC_MAX_QPS, num_aeqs, num_ceqs);
if (nr_irqs > HINIC_HWIF_NUM_IRQS(hwif))
nr_irqs = HINIC_HWIF_NUM_IRQS(hwif);
msix_entries_size = nr_irqs * sizeof(*hwdev->msix_entries);
hwdev->msix_entries = devm_kzalloc(&pdev->dev, msix_entries_size,
GFP_KERNEL);
if (!hwdev->msix_entries)
return -ENOMEM;
for (i = 0; i < nr_irqs; i++)
hwdev->msix_entries[i].entry = i;
err = pci_enable_msix_exact(pdev, hwdev->msix_entries, nr_irqs);
if (err) {
dev_err(&pdev->dev, "Failed to enable pci msix\n");
return err;
}
return 0;
}
/**
* disable_msix - disable the msix
* @hwdev: the NIC HW device
**/
static void disable_msix(struct hinic_hwdev *hwdev)
{
struct hinic_hwif *hwif = hwdev->hwif;
struct pci_dev *pdev = hwif->pdev;
pci_disable_msix(pdev);
}
/**
* hinic_port_msg_cmd - send port msg to mgmt
* @hwdev: the NIC HW device
* @cmd: the port command
* @buf_in: input buffer
* @in_size: input size
* @buf_out: output buffer
* @out_size: returned output size
*
* Return 0 - Success, negative - Failure
**/
int hinic_port_msg_cmd(struct hinic_hwdev *hwdev, enum hinic_port_cmd cmd,
void *buf_in, u16 in_size, void *buf_out, u16 *out_size)
{
struct hinic_hwif *hwif = hwdev->hwif;
struct pci_dev *pdev = hwif->pdev;
struct hinic_pfhwdev *pfhwdev;
if (!HINIC_IS_PF(hwif) && !HINIC_IS_PPF(hwif)) {
dev_err(&pdev->dev, "unsupported PCI Function type\n");
return -EINVAL;
}
pfhwdev = container_of(hwdev, struct hinic_pfhwdev, hwdev);
return hinic_msg_to_mgmt(&pfhwdev->pf_to_mgmt, HINIC_MOD_L2NIC, cmd,
buf_in, in_size, buf_out, out_size,
HINIC_MGMT_MSG_SYNC);
}
/**
* init_fw_ctxt - init firmware tables before network mgmt and IO operations
* @hwdev: the NIC HW device
*
* Return 0 - Success, negative - Failure
**/
static int init_fw_ctxt(struct hinic_hwdev *hwdev)
{
struct hinic_hwif *hwif = hwdev->hwif;
struct pci_dev *pdev = hwif->pdev;
struct hinic_cmd_fw_ctxt fw_ctxt;
struct hinic_pfhwdev *pfhwdev;
u16 out_size;
int err;
if (!HINIC_IS_PF(hwif) && !HINIC_IS_PPF(hwif)) {
dev_err(&pdev->dev, "Unsupported PCI Function type\n");
return -EINVAL;
}
fw_ctxt.func_idx = HINIC_HWIF_FUNC_IDX(hwif);
fw_ctxt.rx_buf_sz = HINIC_RX_BUF_SZ;
pfhwdev = container_of(hwdev, struct hinic_pfhwdev, hwdev);
err = hinic_port_msg_cmd(hwdev, HINIC_PORT_CMD_FWCTXT_INIT,
&fw_ctxt, sizeof(fw_ctxt),
&fw_ctxt, &out_size);
if (err || (out_size != sizeof(fw_ctxt)) || fw_ctxt.status) {
dev_err(&pdev->dev, "Failed to init FW ctxt, ret = %d\n",
fw_ctxt.status);
return -EFAULT;
}
return 0;
}
/**
* set_hw_ioctxt - set the shape of the IO queues in FW
* @hwdev: the NIC HW device
* @rq_depth: rq depth
* @sq_depth: sq depth
*
* Return 0 - Success, negative - Failure
**/
static int set_hw_ioctxt(struct hinic_hwdev *hwdev, unsigned int rq_depth,
unsigned int sq_depth)
{
struct hinic_hwif *hwif = hwdev->hwif;
struct hinic_cmd_hw_ioctxt hw_ioctxt;
struct pci_dev *pdev = hwif->pdev;
struct hinic_pfhwdev *pfhwdev;
int i;
if (!HINIC_IS_PF(hwif) && !HINIC_IS_PPF(hwif)) {
dev_err(&pdev->dev, "Unsupported PCI Function type\n");
return -EINVAL;
}
hw_ioctxt.func_idx = HINIC_HWIF_FUNC_IDX(hwif);
hw_ioctxt.set_cmdq_depth = HW_IOCTXT_SET_CMDQ_DEPTH_DEFAULT;
hw_ioctxt.cmdq_depth = 0;
hw_ioctxt.rq_depth = ilog2(rq_depth);
for (i = 0; ; i++) {
if ((rx_buf_sz_table[i].sz == HINIC_RX_BUF_SZ) ||
(rx_buf_sz_table[i].idx == -1))
break;
}
/* idx == -1 marks the table sentinel: the rx buf size is unsupported.
* Compare the signed idx rather than the u16 rx_buf_sz_idx field,
* which could never compare equal to -1.
*/
if (rx_buf_sz_table[i].idx == -1)
return -EINVAL;
hw_ioctxt.rx_buf_sz_idx = rx_buf_sz_table[i].idx;
hw_ioctxt.sq_depth = ilog2(sq_depth);
pfhwdev = container_of(hwdev, struct hinic_pfhwdev, hwdev);
return hinic_msg_to_mgmt(&pfhwdev->pf_to_mgmt, HINIC_MOD_COMM,
HINIC_COMM_CMD_HWCTXT_SET,
&hw_ioctxt, sizeof(hw_ioctxt), NULL,
NULL, HINIC_MGMT_MSG_SYNC);
}
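/*
* The depths are reported to the FW as log2 values, so rq_depth and
* sq_depth must be powers of 2; e.g. (illustration only) a 4096-entry
* RQ is encoded as ilog2(4096) = 12.
*/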
static int wait_for_outbound_state(struct hinic_hwdev *hwdev)
{
enum hinic_outbound_state outbound_state;
struct hinic_hwif *hwif = hwdev->hwif;
struct pci_dev *pdev = hwif->pdev;
unsigned long end;
end = jiffies + msecs_to_jiffies(OUTBOUND_STATE_TIMEOUT);
do {
outbound_state = hinic_outbound_state_get(hwif);
if (outbound_state == HINIC_OUTBOUND_ENABLE)
return 0;
msleep(20);
} while (time_before(jiffies, end));
dev_err(&pdev->dev, "Wait for OUTBOUND - Timeout\n");
return -EFAULT;
}
static int wait_for_db_state(struct hinic_hwdev *hwdev)
{
struct hinic_hwif *hwif = hwdev->hwif;
struct pci_dev *pdev = hwif->pdev;
enum hinic_db_state db_state;
unsigned long end;
end = jiffies + msecs_to_jiffies(DB_STATE_TIMEOUT);
do {
db_state = hinic_db_state_get(hwif);
if (db_state == HINIC_DB_ENABLE)
return 0;
msleep(20);
} while (time_before(jiffies, end));
dev_err(&pdev->dev, "Wait for DB - Timeout\n");
return -EFAULT;
}
static int wait_for_io_stopped(struct hinic_hwdev *hwdev)
{
struct hinic_cmd_io_status cmd_io_status;
struct hinic_hwif *hwif = hwdev->hwif;
struct pci_dev *pdev = hwif->pdev;
struct hinic_pfhwdev *pfhwdev;
unsigned long end;
u16 out_size;
int err;
if (!HINIC_IS_PF(hwif) && !HINIC_IS_PPF(hwif)) {
dev_err(&pdev->dev, "Unsupported PCI Function type\n");
return -EINVAL;
}
pfhwdev = container_of(hwdev, struct hinic_pfhwdev, hwdev);
cmd_io_status.func_idx = HINIC_HWIF_FUNC_IDX(hwif);
end = jiffies + msecs_to_jiffies(IO_STATUS_TIMEOUT);
do {
err = hinic_msg_to_mgmt(&pfhwdev->pf_to_mgmt, HINIC_MOD_COMM,
HINIC_COMM_CMD_IO_STATUS_GET,
&cmd_io_status, sizeof(cmd_io_status),
&cmd_io_status, &out_size,
HINIC_MGMT_MSG_SYNC);
if (err || (out_size != sizeof(cmd_io_status))) {
dev_err(&pdev->dev, "Failed to get IO status, ret = %d\n",
err);
return err;
}
if (cmd_io_status.status == IO_STOPPED) {
dev_info(&pdev->dev, "IO stopped\n");
return 0;
}
msleep(20);
} while (time_before(jiffies, end));
dev_err(&pdev->dev, "Wait for IO stopped - Timeout\n");
return -ETIMEDOUT;
}
/**
* clear_io_resources - set the IO resources as not active in the NIC
* @hwdev: the NIC HW device
*
* Return 0 - Success, negative - Failure
**/
static int clear_io_resources(struct hinic_hwdev *hwdev)
{
struct hinic_cmd_clear_io_res cmd_clear_io_res;
struct hinic_hwif *hwif = hwdev->hwif;
struct pci_dev *pdev = hwif->pdev;
struct hinic_pfhwdev *pfhwdev;
int err;
if (!HINIC_IS_PF(hwif) && !HINIC_IS_PPF(hwif)) {
dev_err(&pdev->dev, "Unsupported PCI Function type\n");
return -EINVAL;
}
err = wait_for_io_stopped(hwdev);
if (err) {
dev_err(&pdev->dev, "IO has not stopped yet\n");
return err;
}
cmd_clear_io_res.func_idx = HINIC_HWIF_FUNC_IDX(hwif);
pfhwdev = container_of(hwdev, struct hinic_pfhwdev, hwdev);
err = hinic_msg_to_mgmt(&pfhwdev->pf_to_mgmt, HINIC_MOD_COMM,
HINIC_COMM_CMD_IO_RES_CLEAR, &cmd_clear_io_res,
sizeof(cmd_clear_io_res), NULL, NULL,
HINIC_MGMT_MSG_SYNC);
if (err) {
dev_err(&pdev->dev, "Failed to clear IO resources\n");
return err;
}
return 0;
}
/**
* set_resources_state - set the state of the resources in the NIC
* @hwdev: the NIC HW device
* @state: the state to set
*
* Return 0 - Success, negative - Failure
**/
static int set_resources_state(struct hinic_hwdev *hwdev,
enum hinic_res_state state)
{
struct hinic_cmd_set_res_state res_state;
struct hinic_hwif *hwif = hwdev->hwif;
struct pci_dev *pdev = hwif->pdev;
struct hinic_pfhwdev *pfhwdev;
if (!HINIC_IS_PF(hwif) && !HINIC_IS_PPF(hwif)) {
dev_err(&pdev->dev, "Unsupported PCI Function type\n");
return -EINVAL;
}
res_state.func_idx = HINIC_HWIF_FUNC_IDX(hwif);
res_state.state = state;
pfhwdev = container_of(hwdev, struct hinic_pfhwdev, hwdev);
return hinic_msg_to_mgmt(&pfhwdev->pf_to_mgmt,
HINIC_MOD_COMM,
HINIC_COMM_CMD_RES_STATE_SET,
&res_state, sizeof(res_state), NULL,
NULL, HINIC_MGMT_MSG_SYNC);
}
/**
* get_base_qpn - get the first qp number
* @hwdev: the NIC HW device
* @base_qpn: returned qp number
*
* Return 0 - Success, negative - Failure
**/
static int get_base_qpn(struct hinic_hwdev *hwdev, u16 *base_qpn)
{
struct hinic_cmd_base_qpn cmd_base_qpn;
struct hinic_hwif *hwif = hwdev->hwif;
struct pci_dev *pdev = hwif->pdev;
u16 out_size;
int err;
cmd_base_qpn.func_idx = HINIC_HWIF_FUNC_IDX(hwif);
err = hinic_port_msg_cmd(hwdev, HINIC_PORT_CMD_GET_GLOBAL_QPN,
&cmd_base_qpn, sizeof(cmd_base_qpn),
&cmd_base_qpn, &out_size);
if (err || (out_size != sizeof(cmd_base_qpn)) || cmd_base_qpn.status) {
dev_err(&pdev->dev, "Failed to get base qpn, status = %d\n",
cmd_base_qpn.status);
return -EFAULT;
}
*base_qpn = cmd_base_qpn.qpn;
return 0;
}
/**
* hinic_hwdev_ifup - prepare the HW for passing IO
* @hwdev: the NIC HW device
*
* Return 0 - Success, negative - Failure
**/
int hinic_hwdev_ifup(struct hinic_hwdev *hwdev)
{
struct hinic_func_to_io *func_to_io = &hwdev->func_to_io;
struct hinic_cap *nic_cap = &hwdev->nic_cap;
struct hinic_hwif *hwif = hwdev->hwif;
int err, num_aeqs, num_ceqs, num_qps;
struct msix_entry *ceq_msix_entries;
struct msix_entry *sq_msix_entries;
struct msix_entry *rq_msix_entries;
struct pci_dev *pdev = hwif->pdev;
u16 base_qpn;
err = get_base_qpn(hwdev, &base_qpn);
if (err) {
dev_err(&pdev->dev, "Failed to get global base qp number\n");
return err;
}
num_aeqs = HINIC_HWIF_NUM_AEQS(hwif);
num_ceqs = HINIC_HWIF_NUM_CEQS(hwif);
ceq_msix_entries = &hwdev->msix_entries[num_aeqs];
err = hinic_io_init(func_to_io, hwif, nic_cap->max_qps, num_ceqs,
ceq_msix_entries);
if (err) {
dev_err(&pdev->dev, "Failed to init IO channel\n");
return err;
}
num_qps = nic_cap->num_qps;
sq_msix_entries = &hwdev->msix_entries[num_aeqs + num_ceqs];
rq_msix_entries = &hwdev->msix_entries[num_aeqs + num_ceqs + num_qps];
err = hinic_io_create_qps(func_to_io, base_qpn, num_qps,
sq_msix_entries, rq_msix_entries);
if (err) {
dev_err(&pdev->dev, "Failed to create QPs\n");
goto err_create_qps;
}
err = wait_for_db_state(hwdev);
if (err) {
dev_warn(&pdev->dev, "db - disabled, try again\n");
hinic_db_state_set(hwif, HINIC_DB_ENABLE);
}
/* note: set_hw_ioctxt() takes (rq_depth, sq_depth) */
err = set_hw_ioctxt(hwdev, HINIC_RQ_DEPTH, HINIC_SQ_DEPTH);
if (err) {
dev_err(&pdev->dev, "Failed to set HW IO ctxt\n");
goto err_hw_ioctxt;
}
return 0;
err_hw_ioctxt:
hinic_io_destroy_qps(func_to_io, num_qps);
err_create_qps:
hinic_io_free(func_to_io);
return err;
}
/**
* hinic_hwdev_ifdown - stop the HW from passing IO
* @hwdev: the NIC HW device
*
**/
void hinic_hwdev_ifdown(struct hinic_hwdev *hwdev)
{
struct hinic_func_to_io *func_to_io = &hwdev->func_to_io;
struct hinic_cap *nic_cap = &hwdev->nic_cap;
clear_io_resources(hwdev);
hinic_io_destroy_qps(func_to_io, nic_cap->num_qps);
hinic_io_free(func_to_io);
}
/**
* hinic_hwdev_cb_register - register callback handler for MGMT events
* @hwdev: the NIC HW device
* @cmd: the mgmt event
* @handle: private data for the handler
* @handler: event handler
**/
void hinic_hwdev_cb_register(struct hinic_hwdev *hwdev,
enum hinic_mgmt_msg_cmd cmd, void *handle,
void (*handler)(void *handle, void *buf_in,
u16 in_size, void *buf_out,
u16 *out_size))
{
struct hinic_hwif *hwif = hwdev->hwif;
struct pci_dev *pdev = hwif->pdev;
struct hinic_pfhwdev *pfhwdev;
struct hinic_nic_cb *nic_cb;
u8 cmd_cb;
if (!HINIC_IS_PF(hwif) && !HINIC_IS_PPF(hwif)) {
dev_err(&pdev->dev, "unsupported PCI Function type\n");
return;
}
pfhwdev = container_of(hwdev, struct hinic_pfhwdev, hwdev);
cmd_cb = cmd - HINIC_MGMT_MSG_CMD_BASE;
nic_cb = &pfhwdev->nic_cb[cmd_cb];
nic_cb->handler = handler;
nic_cb->handle = handle;
nic_cb->cb_state = HINIC_CB_ENABLED;
}
/**
* hinic_hwdev_cb_unregister - unregister callback handler for MGMT events
* @hwdev: the NIC HW device
* @cmd: the mgmt event
**/
void hinic_hwdev_cb_unregister(struct hinic_hwdev *hwdev,
enum hinic_mgmt_msg_cmd cmd)
{
struct hinic_hwif *hwif = hwdev->hwif;
struct pci_dev *pdev = hwif->pdev;
struct hinic_pfhwdev *pfhwdev;
struct hinic_nic_cb *nic_cb;
u8 cmd_cb;
if (!HINIC_IS_PF(hwif) && !HINIC_IS_PPF(hwif)) {
dev_err(&pdev->dev, "unsupported PCI Function type\n");
return;
}
pfhwdev = container_of(hwdev, struct hinic_pfhwdev, hwdev);
cmd_cb = cmd - HINIC_MGMT_MSG_CMD_BASE;
nic_cb = &pfhwdev->nic_cb[cmd_cb];
nic_cb->cb_state &= ~HINIC_CB_ENABLED;
while (nic_cb->cb_state & HINIC_CB_RUNNING)
schedule();
nic_cb->handler = NULL;
}
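/*
* Illustrative sketch of the MGMT callback API (the handler name and
* ctx pointer are placeholders, not part of the driver):
*
*	static void link_status_handler(void *handle, void *buf_in,
*					u16 in_size, void *buf_out,
*					u16 *out_size)
*	{
*		... inspect buf_in, optionally fill buf_out and *out_size ...
*	}
*
*	hinic_hwdev_cb_register(hwdev, HINIC_MGMT_MSG_CMD_LINK_STATUS,
*				ctx, link_status_handler);
*	...
*	hinic_hwdev_cb_unregister(hwdev, HINIC_MGMT_MSG_CMD_LINK_STATUS);
*/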
/**
* nic_mgmt_msg_handler - nic mgmt event handler
* @handle: private data for the handler
* @cmd: the L2NIC mgmt command
* @buf_in: input buffer
* @in_size: input size
* @buf_out: output buffer
* @out_size: returned output size
**/
static void nic_mgmt_msg_handler(void *handle, u8 cmd, void *buf_in,
u16 in_size, void *buf_out, u16 *out_size)
{
struct hinic_pfhwdev *pfhwdev = handle;
enum hinic_cb_state cb_state;
struct hinic_nic_cb *nic_cb;
struct hinic_hwdev *hwdev;
struct hinic_hwif *hwif;
struct pci_dev *pdev;
u8 cmd_cb;
hwdev = &pfhwdev->hwdev;
hwif = hwdev->hwif;
pdev = hwif->pdev;
if ((cmd < HINIC_MGMT_MSG_CMD_BASE) ||
(cmd >= HINIC_MGMT_MSG_CMD_MAX)) {
dev_err(&pdev->dev, "unknown L2NIC event, cmd = %d\n", cmd);
return;
}
cmd_cb = cmd - HINIC_MGMT_MSG_CMD_BASE;
nic_cb = &pfhwdev->nic_cb[cmd_cb];
cb_state = cmpxchg(&nic_cb->cb_state,
HINIC_CB_ENABLED,
HINIC_CB_ENABLED | HINIC_CB_RUNNING);
if ((cb_state == HINIC_CB_ENABLED) && (nic_cb->handler))
nic_cb->handler(nic_cb->handle, buf_in,
in_size, buf_out, out_size);
else
dev_err(&pdev->dev, "Unhandled NIC Event %d\n", cmd);
nic_cb->cb_state &= ~HINIC_CB_RUNNING;
}
/**
* init_pfhwdev - Initialize the extended components of PF
* @pfhwdev: the HW device for PF
*
* Return 0 - success, negative - failure
**/
static int init_pfhwdev(struct hinic_pfhwdev *pfhwdev)
{
struct hinic_hwdev *hwdev = &pfhwdev->hwdev;
struct hinic_hwif *hwif = hwdev->hwif;
struct pci_dev *pdev = hwif->pdev;
int err;
err = hinic_pf_to_mgmt_init(&pfhwdev->pf_to_mgmt, hwif);
if (err) {
dev_err(&pdev->dev, "Failed to initialize PF to MGMT channel\n");
return err;
}
hinic_register_mgmt_msg_cb(&pfhwdev->pf_to_mgmt, HINIC_MOD_L2NIC,
pfhwdev, nic_mgmt_msg_handler);
hinic_set_pf_action(hwif, HINIC_PF_MGMT_ACTIVE);
return 0;
}
/**
* free_pfhwdev - Free the extended components of PF
* @pfhwdev: the HW device for PF
**/
static void free_pfhwdev(struct hinic_pfhwdev *pfhwdev)
{
struct hinic_hwdev *hwdev = &pfhwdev->hwdev;
hinic_set_pf_action(hwdev->hwif, HINIC_PF_MGMT_INIT);
hinic_unregister_mgmt_msg_cb(&pfhwdev->pf_to_mgmt, HINIC_MOD_L2NIC);
hinic_pf_to_mgmt_free(&pfhwdev->pf_to_mgmt);
}
/**
* hinic_init_hwdev - Initialize the NIC HW
* @pdev: the NIC pci device
*
* Return initialized NIC HW device
*
* Initialize the NIC HW device and return a pointer to it
**/
struct hinic_hwdev *hinic_init_hwdev(struct pci_dev *pdev)
{
struct hinic_pfhwdev *pfhwdev;
struct hinic_hwdev *hwdev;
struct hinic_hwif *hwif;
int err, num_aeqs;
hwif = devm_kzalloc(&pdev->dev, sizeof(*hwif), GFP_KERNEL);
if (!hwif)
return ERR_PTR(-ENOMEM);
err = hinic_init_hwif(hwif, pdev);
if (err) {
dev_err(&pdev->dev, "Failed to init HW interface\n");
return ERR_PTR(err);
}
if (!HINIC_IS_PF(hwif) && !HINIC_IS_PPF(hwif)) {
dev_err(&pdev->dev, "Unsupported PCI Function type\n");
err = -EFAULT;
goto err_func_type;
}
pfhwdev = devm_kzalloc(&pdev->dev, sizeof(*pfhwdev), GFP_KERNEL);
if (!pfhwdev) {
err = -ENOMEM;
goto err_pfhwdev_alloc;
}
hwdev = &pfhwdev->hwdev;
hwdev->hwif = hwif;
err = init_msix(hwdev);
if (err) {
dev_err(&pdev->dev, "Failed to init msix\n");
goto err_init_msix;
}
err = wait_for_outbound_state(hwdev);
if (err) {
dev_warn(&pdev->dev, "outbound - disabled, try again\n");
hinic_outbound_state_set(hwif, HINIC_OUTBOUND_ENABLE);
}
num_aeqs = HINIC_HWIF_NUM_AEQS(hwif);
err = hinic_aeqs_init(&hwdev->aeqs, hwif, num_aeqs,
HINIC_DEFAULT_AEQ_LEN, HINIC_EQ_PAGE_SIZE,
hwdev->msix_entries);
if (err) {
dev_err(&pdev->dev, "Failed to init async event queues\n");
goto err_aeqs_init;
}
err = init_pfhwdev(pfhwdev);
if (err) {
dev_err(&pdev->dev, "Failed to init PF HW device\n");
goto err_init_pfhwdev;
}
err = get_dev_cap(hwdev);
if (err) {
dev_err(&pdev->dev, "Failed to get device capabilities\n");
goto err_dev_cap;
}
err = init_fw_ctxt(hwdev);
if (err) {
dev_err(&pdev->dev, "Failed to init function table\n");
goto err_init_fw_ctxt;
}
err = set_resources_state(hwdev, HINIC_RES_ACTIVE);
if (err) {
dev_err(&pdev->dev, "Failed to set resources state\n");
goto err_resources_state;
}
return hwdev;
err_resources_state:
err_init_fw_ctxt:
err_dev_cap:
free_pfhwdev(pfhwdev);
err_init_pfhwdev:
hinic_aeqs_free(&hwdev->aeqs);
err_aeqs_init:
disable_msix(hwdev);
err_init_msix:
err_pfhwdev_alloc:
err_func_type:
hinic_free_hwif(hwif);
return ERR_PTR(err);
}
/**
* hinic_free_hwdev - Free the NIC HW device
* @hwdev: the NIC HW device
**/
void hinic_free_hwdev(struct hinic_hwdev *hwdev)
{
struct hinic_pfhwdev *pfhwdev = container_of(hwdev,
struct hinic_pfhwdev,
hwdev);
set_resources_state(hwdev, HINIC_RES_CLEAN);
free_pfhwdev(pfhwdev);
hinic_aeqs_free(&hwdev->aeqs);
disable_msix(hwdev);
hinic_free_hwif(hwdev->hwif);
}
/**
* hinic_hwdev_num_qps - return the number of QPs available for use
* @hwdev: the NIC HW device
*
* Return number of QPs available for use
**/
int hinic_hwdev_num_qps(struct hinic_hwdev *hwdev)
{
struct hinic_cap *nic_cap = &hwdev->nic_cap;
return nic_cap->num_qps;
}
/**
* hinic_hwdev_get_sq - get SQ
* @hwdev: the NIC HW device
* @i: the position of the SQ
*
* Return: the SQ in the i position
**/
struct hinic_sq *hinic_hwdev_get_sq(struct hinic_hwdev *hwdev, int i)
{
struct hinic_func_to_io *func_to_io = &hwdev->func_to_io;
struct hinic_qp *qp = &func_to_io->qps[i];
if (i >= hinic_hwdev_num_qps(hwdev))
return NULL;
return &qp->sq;
}
/**
* hinic_hwdev_get_rq - get RQ
* @hwdev: the NIC HW device
* @i: the position of the RQ
*
* Return: the RQ in the i position
**/
struct hinic_rq *hinic_hwdev_get_rq(struct hinic_hwdev *hwdev, int i)
{
struct hinic_func_to_io *func_to_io = &hwdev->func_to_io;
struct hinic_qp *qp = &func_to_io->qps[i];
if (i >= hinic_hwdev_num_qps(hwdev))
return NULL;
return &qp->rq;
}
/**
* hinic_hwdev_msix_cnt_set - clear message attribute counters for msix entry
* @hwdev: the NIC HW device
* @msix_index: msix_index
*
* Return 0 - Success, negative - Failure
**/
int hinic_hwdev_msix_cnt_set(struct hinic_hwdev *hwdev, u16 msix_index)
{
return hinic_msix_attr_cnt_clear(hwdev->hwif, msix_index);
}
/**
* hinic_hwdev_msix_set - set message attribute for msix entry
* @hwdev: the NIC HW device
* @msix_index: msix_index
* @pending_limit: the maximum pending interrupt events (unit 8)
* @coalesc_timer: coalesc period for interrupt (unit 8 us)
* @lli_timer_cfg: replenishing period for low latency credit (unit 8 us)
* @lli_credit_limit: maximum credits for low latency msix messages (unit 8)
* @resend_timer: maximum wait for resending msix (unit coalesc period)
*
* Return 0 - Success, negative - Failure
**/
int hinic_hwdev_msix_set(struct hinic_hwdev *hwdev, u16 msix_index,
u8 pending_limit, u8 coalesc_timer,
u8 lli_timer_cfg, u8 lli_credit_limit,
u8 resend_timer)
{
return hinic_msix_attr_set(hwdev->hwif, msix_index,
pending_limit, coalesc_timer,
lli_timer_cfg, lli_credit_limit,
resend_timer);
}
/**
* hinic_hwdev_hw_ci_addr_set - set cons idx addr and attributes in HW for sq
* @hwdev: the NIC HW device
* @sq: send queue
* @pending_limit: the maximum pending update ci events (unit 8)
* @coalesc_timer: coalesc period for update ci (unit 8 us)
*
* Return 0 - Success, negative - Failure
**/
int hinic_hwdev_hw_ci_addr_set(struct hinic_hwdev *hwdev, struct hinic_sq *sq,
u8 pending_limit, u8 coalesc_timer)
{
struct hinic_qp *qp = container_of(sq, struct hinic_qp, sq);
struct hinic_hwif *hwif = hwdev->hwif;
struct pci_dev *pdev = hwif->pdev;
struct hinic_pfhwdev *pfhwdev;
struct hinic_cmd_hw_ci hw_ci;
if (!HINIC_IS_PF(hwif) && !HINIC_IS_PPF(hwif)) {
dev_err(&pdev->dev, "Unsupported PCI Function type\n");
return -EINVAL;
}
hw_ci.dma_attr_off = 0;
hw_ci.pending_limit = pending_limit;
hw_ci.coalesc_timer = coalesc_timer;
hw_ci.msix_en = 1;
hw_ci.msix_entry_idx = sq->msix_entry;
hw_ci.func_idx = HINIC_HWIF_FUNC_IDX(hwif);
hw_ci.sq_id = qp->q_id;
hw_ci.ci_addr = ADDR_IN_4BYTES(sq->hw_ci_dma_addr);
pfhwdev = container_of(hwdev, struct hinic_pfhwdev, hwdev);
return hinic_msg_to_mgmt(&pfhwdev->pf_to_mgmt,
HINIC_MOD_COMM,
HINIC_COMM_CMD_SQ_HI_CI_SET,
&hw_ci, sizeof(hw_ci), NULL,
NULL, HINIC_MGMT_MSG_SYNC);
}
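/*
* Note on ADDR_IN_4BYTES(): the CI DMA address is handed to the HW in
* 4-byte units (shifted right by 2), which assumes the address is at
* least 4-byte aligned.
*/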
/*
* Huawei HiNIC PCI Express Linux driver
* Copyright(c) 2017 Huawei Technologies Co., Ltd
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* for more details.
*
*/
#ifndef HINIC_HW_DEV_H
#define HINIC_HW_DEV_H
#include <linux/pci.h>
#include <linux/types.h>
#include <linux/bitops.h>
#include "hinic_hw_if.h"
#include "hinic_hw_eqs.h"
#include "hinic_hw_mgmt.h"
#include "hinic_hw_qp.h"
#include "hinic_hw_io.h"
#define HINIC_MAX_QPS 32
#define HINIC_MGMT_NUM_MSG_CMD (HINIC_MGMT_MSG_CMD_MAX - \
HINIC_MGMT_MSG_CMD_BASE)
struct hinic_cap {
u16 max_qps;
u16 num_qps;
};
enum hinic_port_cmd {
HINIC_PORT_CMD_CHANGE_MTU = 2,
HINIC_PORT_CMD_ADD_VLAN = 3,
HINIC_PORT_CMD_DEL_VLAN = 4,
HINIC_PORT_CMD_SET_MAC = 9,
HINIC_PORT_CMD_GET_MAC = 10,
HINIC_PORT_CMD_DEL_MAC = 11,
HINIC_PORT_CMD_SET_RX_MODE = 12,
HINIC_PORT_CMD_GET_LINK_STATE = 24,
HINIC_PORT_CMD_SET_PORT_STATE = 41,
HINIC_PORT_CMD_FWCTXT_INIT = 69,
HINIC_PORT_CMD_SET_FUNC_STATE = 93,
HINIC_PORT_CMD_GET_GLOBAL_QPN = 102,
HINIC_PORT_CMD_GET_CAP = 170,
};
enum hinic_mgmt_msg_cmd {
HINIC_MGMT_MSG_CMD_BASE = 160,
HINIC_MGMT_MSG_CMD_LINK_STATUS = 160,
HINIC_MGMT_MSG_CMD_MAX,
};
enum hinic_cb_state {
HINIC_CB_ENABLED = BIT(0),
HINIC_CB_RUNNING = BIT(1),
};
enum hinic_res_state {
HINIC_RES_CLEAN = 0,
HINIC_RES_ACTIVE = 1,
};
struct hinic_cmd_fw_ctxt {
u8 status;
u8 version;
u8 rsvd0[6];
u16 func_idx;
u16 rx_buf_sz;
u32 rsvd1;
};
struct hinic_cmd_hw_ioctxt {
u8 status;
u8 version;
u8 rsvd0[6];
u16 func_idx;
u16 rsvd1;
u8 set_cmdq_depth;
u8 cmdq_depth;
u8 rsvd2;
u8 rsvd3;
u8 rsvd4;
u8 rsvd5;
u16 rq_depth;
u16 rx_buf_sz_idx;
u16 sq_depth;
};
struct hinic_cmd_io_status {
u8 status;
u8 version;
u8 rsvd0[6];
u16 func_idx;
u8 rsvd1;
u8 rsvd2;
u32 io_status;
};
struct hinic_cmd_clear_io_res {
u8 status;
u8 version;
u8 rsvd0[6];
u16 func_idx;
u8 rsvd1;
u8 rsvd2;
};
struct hinic_cmd_set_res_state {
u8 status;
u8 version;
u8 rsvd0[6];
u16 func_idx;
u8 state;
u8 rsvd1;
u32 rsvd2;
};
struct hinic_cmd_base_qpn {
u8 status;
u8 version;
u8 rsvd0[6];
u16 func_idx;
u16 qpn;
};
struct hinic_cmd_hw_ci {
u8 status;
u8 version;
u8 rsvd0[6];
u16 func_idx;
u8 dma_attr_off;
u8 pending_limit;
u8 coalesc_timer;
u8 msix_en;
u16 msix_entry_idx;
u32 sq_id;
u32 rsvd1;
u64 ci_addr;
};
struct hinic_hwdev {
struct hinic_hwif *hwif;
struct msix_entry *msix_entries;
struct hinic_aeqs aeqs;
struct hinic_func_to_io func_to_io;
struct hinic_cap nic_cap;
};
struct hinic_nic_cb {
void (*handler)(void *handle, void *buf_in,
u16 in_size, void *buf_out,
u16 *out_size);
void *handle;
unsigned long cb_state;
};
struct hinic_pfhwdev {
struct hinic_hwdev hwdev;
struct hinic_pf_to_mgmt pf_to_mgmt;
struct hinic_nic_cb nic_cb[HINIC_MGMT_NUM_MSG_CMD];
};
void hinic_hwdev_cb_register(struct hinic_hwdev *hwdev,
enum hinic_mgmt_msg_cmd cmd, void *handle,
void (*handler)(void *handle, void *buf_in,
u16 in_size, void *buf_out,
u16 *out_size));
void hinic_hwdev_cb_unregister(struct hinic_hwdev *hwdev,
enum hinic_mgmt_msg_cmd cmd);
int hinic_port_msg_cmd(struct hinic_hwdev *hwdev, enum hinic_port_cmd cmd,
void *buf_in, u16 in_size, void *buf_out,
u16 *out_size);
int hinic_hwdev_ifup(struct hinic_hwdev *hwdev);
void hinic_hwdev_ifdown(struct hinic_hwdev *hwdev);
struct hinic_hwdev *hinic_init_hwdev(struct pci_dev *pdev);
void hinic_free_hwdev(struct hinic_hwdev *hwdev);
int hinic_hwdev_num_qps(struct hinic_hwdev *hwdev);
struct hinic_sq *hinic_hwdev_get_sq(struct hinic_hwdev *hwdev, int i);
struct hinic_rq *hinic_hwdev_get_rq(struct hinic_hwdev *hwdev, int i);
int hinic_hwdev_msix_cnt_set(struct hinic_hwdev *hwdev, u16 msix_index);
int hinic_hwdev_msix_set(struct hinic_hwdev *hwdev, u16 msix_index,
u8 pending_limit, u8 coalesc_timer,
u8 lli_timer_cfg, u8 lli_credit_limit,
u8 resend_timer);
int hinic_hwdev_hw_ci_addr_set(struct hinic_hwdev *hwdev, struct hinic_sq *sq,
u8 pending_limit, u8 coalesc_timer);
#endif
/*
* Huawei HiNIC PCI Express Linux driver
* Copyright(c) 2017 Huawei Technologies Co., Ltd
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* for more details.
*
*/
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/errno.h>
#include <linux/pci.h>
#include <linux/device.h>
#include <linux/workqueue.h>
#include <linux/interrupt.h>
#include <linux/slab.h>
#include <linux/dma-mapping.h>
#include <linux/log2.h>
#include <asm/byteorder.h>
#include <asm/barrier.h>
#include "hinic_hw_csr.h"
#include "hinic_hw_if.h"
#include "hinic_hw_eqs.h"
#define HINIC_EQS_WQ_NAME "hinic_eqs"
#define GET_EQ_NUM_PAGES(eq, pg_size) \
(ALIGN((eq)->q_len * (eq)->elem_size, pg_size) / (pg_size))
#define GET_EQ_NUM_ELEMS_IN_PG(eq, pg_size) ((pg_size) / (eq)->elem_size)
#define EQ_CONS_IDX_REG_ADDR(eq) (((eq)->type == HINIC_AEQ) ? \
HINIC_CSR_AEQ_CONS_IDX_ADDR((eq)->q_id) : \
HINIC_CSR_CEQ_CONS_IDX_ADDR((eq)->q_id))
#define EQ_PROD_IDX_REG_ADDR(eq) (((eq)->type == HINIC_AEQ) ? \
HINIC_CSR_AEQ_PROD_IDX_ADDR((eq)->q_id) : \
HINIC_CSR_CEQ_PROD_IDX_ADDR((eq)->q_id))
#define EQ_HI_PHYS_ADDR_REG(eq, pg_num) (((eq)->type == HINIC_AEQ) ? \
HINIC_CSR_AEQ_HI_PHYS_ADDR_REG((eq)->q_id, pg_num) : \
HINIC_CSR_CEQ_HI_PHYS_ADDR_REG((eq)->q_id, pg_num))
#define EQ_LO_PHYS_ADDR_REG(eq, pg_num) (((eq)->type == HINIC_AEQ) ? \
HINIC_CSR_AEQ_LO_PHYS_ADDR_REG((eq)->q_id, pg_num) : \
HINIC_CSR_CEQ_LO_PHYS_ADDR_REG((eq)->q_id, pg_num))
#define GET_EQ_ELEMENT(eq, idx) \
((eq)->virt_addr[(idx) / (eq)->num_elem_in_pg] + \
(((idx) & ((eq)->num_elem_in_pg - 1)) * (eq)->elem_size))
#define GET_AEQ_ELEM(eq, idx) ((struct hinic_aeq_elem *) \
GET_EQ_ELEMENT(eq, idx))
#define GET_CEQ_ELEM(eq, idx) ((u32 *) \
GET_EQ_ELEMENT(eq, idx))
#define GET_CURR_AEQ_ELEM(eq) GET_AEQ_ELEM(eq, (eq)->cons_idx)
#define GET_CURR_CEQ_ELEM(eq) GET_CEQ_ELEM(eq, (eq)->cons_idx)
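/*
* Illustration of the element addressing above (numbers are
* hypothetical): with 4 elements per page, element 9 lives in virtual
* page 9 / 4 = 2 at byte offset (9 & 3) * elem_size. num_elem_in_pg is
* a power of 2, so the AND acts as a modulo.
*/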
#define PAGE_IN_4K(page_size) ((page_size) >> 12)
#define EQ_SET_HW_PAGE_SIZE_VAL(eq) (ilog2(PAGE_IN_4K((eq)->page_size)))
#define ELEMENT_SIZE_IN_32B(eq) (((eq)->elem_size) >> 5)
#define EQ_SET_HW_ELEM_SIZE_VAL(eq) (ilog2(ELEMENT_SIZE_IN_32B(eq)))
#define EQ_MAX_PAGES 8
#define CEQE_TYPE_SHIFT 23
#define CEQE_TYPE_MASK 0x7
#define CEQE_TYPE(ceqe) (((ceqe) >> CEQE_TYPE_SHIFT) & \
CEQE_TYPE_MASK)
#define CEQE_DATA_MASK 0x3FFFFFF
#define CEQE_DATA(ceqe) ((ceqe) & CEQE_DATA_MASK)
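/*
* CEQE layout as defined above: the event type sits in bits 25:23 and
* CEQE_DATA() keeps the low 26 bits, which the per-event handler
* decodes further (see CMDQ_CEQE_GET in the cmdq code). E.g. for
* ceqe = 0x5 the type is 0 and the data is 0x5.
*/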
#define aeq_to_aeqs(eq) \
container_of((eq) - (eq)->q_id, struct hinic_aeqs, aeq[0])
#define ceq_to_ceqs(eq) \
container_of((eq) - (eq)->q_id, struct hinic_ceqs, ceq[0])
#define work_to_aeq_work(work) \
container_of(work, struct hinic_eq_work, work)
#define DMA_ATTR_AEQ_DEFAULT 0
#define DMA_ATTR_CEQ_DEFAULT 0
/* No coalescence */
#define THRESH_CEQ_DEFAULT 0
enum eq_int_mode {
EQ_INT_MODE_ARMED,
EQ_INT_MODE_ALWAYS
};
enum eq_arm_state {
EQ_NOT_ARMED,
EQ_ARMED
};
/**
* hinic_aeq_register_hw_cb - register AEQ callback for specific event
* @aeqs: pointer to Async eqs of the chip
* @event: aeq event to register callback for it
* @handle: private data will be used by the callback
* @hwe_handler: callback function
**/
void hinic_aeq_register_hw_cb(struct hinic_aeqs *aeqs,
enum hinic_aeq_type event, void *handle,
void (*hwe_handler)(void *handle, void *data,
u8 size))
{
struct hinic_hw_event_cb *hwe_cb = &aeqs->hwe_cb[event];
hwe_cb->hwe_handler = hwe_handler;
hwe_cb->handle = handle;
hwe_cb->hwe_state = HINIC_EQE_ENABLED;
}
/**
* hinic_aeq_unregister_hw_cb - unregister the AEQ callback for specific event
* @aeqs: pointer to Async eqs of the chip
* @event: aeq event to unregister callback for it
**/
void hinic_aeq_unregister_hw_cb(struct hinic_aeqs *aeqs,
enum hinic_aeq_type event)
{
struct hinic_hw_event_cb *hwe_cb = &aeqs->hwe_cb[event];
hwe_cb->hwe_state &= ~HINIC_EQE_ENABLED;
while (hwe_cb->hwe_state & HINIC_EQE_RUNNING)
schedule();
hwe_cb->hwe_handler = NULL;
}
/**
* hinic_ceq_register_cb - register CEQ callback for specific event
* @ceqs: pointer to Completion eqs part of the chip
* @event: ceq event to register callback for it
* @handle: private data will be used by the callback
* @handler: callback function
**/
void hinic_ceq_register_cb(struct hinic_ceqs *ceqs,
enum hinic_ceq_type event, void *handle,
void (*handler)(void *handle, u32 ceqe_data))
{
struct hinic_ceq_cb *ceq_cb = &ceqs->ceq_cb[event];
ceq_cb->handler = handler;
ceq_cb->handle = handle;
ceq_cb->ceqe_state = HINIC_EQE_ENABLED;
}
/**
* hinic_ceq_unregister_cb - unregister the CEQ callback for specific event
* @ceqs: pointer to Completion eqs part of the chip
* @event: ceq event to unregister callback for it
**/
void hinic_ceq_unregister_cb(struct hinic_ceqs *ceqs,
enum hinic_ceq_type event)
{
struct hinic_ceq_cb *ceq_cb = &ceqs->ceq_cb[event];
ceq_cb->ceqe_state &= ~HINIC_EQE_ENABLED;
while (ceq_cb->ceqe_state & HINIC_EQE_RUNNING)
schedule();
ceq_cb->handler = NULL;
}
static u8 eq_cons_idx_checksum_set(u32 val)
{
u8 checksum = 0;
int idx;
for (idx = 0; idx < 32; idx += 4)
checksum ^= ((val >> idx) & 0xF);
return (checksum & 0xF);
}
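/*
* Worked example of the nibble XOR above, for illustration: for
* val = 0x12345678 the checksum is 1 ^ 2 ^ 3 ^ 4 ^ 5 ^ 6 ^ 7 ^ 8 = 0x8.
*/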
/**
* eq_update_ci - update the HW cons idx of event queue
* @eq: the event queue to update the cons idx for
**/
static void eq_update_ci(struct hinic_eq *eq)
{
u32 val, addr = EQ_CONS_IDX_REG_ADDR(eq);
/* Read Modify Write */
val = hinic_hwif_read_reg(eq->hwif, addr);
val = HINIC_EQ_CI_CLEAR(val, IDX) &
HINIC_EQ_CI_CLEAR(val, WRAPPED) &
HINIC_EQ_CI_CLEAR(val, INT_ARMED) &
HINIC_EQ_CI_CLEAR(val, XOR_CHKSUM);
val |= HINIC_EQ_CI_SET(eq->cons_idx, IDX) |
HINIC_EQ_CI_SET(eq->wrapped, WRAPPED) |
HINIC_EQ_CI_SET(EQ_ARMED, INT_ARMED);
val |= HINIC_EQ_CI_SET(eq_cons_idx_checksum_set(val), XOR_CHKSUM);
hinic_hwif_write_reg(eq->hwif, addr, val);
}
/**
* aeq_irq_handler - handler for the AEQ event
* @eq: the Async Event Queue that received the event
**/
static void aeq_irq_handler(struct hinic_eq *eq)
{
struct hinic_aeqs *aeqs = aeq_to_aeqs(eq);
struct hinic_hwif *hwif = aeqs->hwif;
struct pci_dev *pdev = hwif->pdev;
struct hinic_aeq_elem *aeqe_curr;
struct hinic_hw_event_cb *hwe_cb;
enum hinic_aeq_type event;
unsigned long eqe_state;
u32 aeqe_desc;
int i, size;
for (i = 0; i < eq->q_len; i++) {
aeqe_curr = GET_CURR_AEQ_ELEM(eq);
/* Data in HW is in Big endian Format */
aeqe_desc = be32_to_cpu(aeqe_curr->desc);
/* HW toggles the wrapped bit when it adds an eq element */
if (HINIC_EQ_ELEM_DESC_GET(aeqe_desc, WRAPPED) == eq->wrapped)
break;
event = HINIC_EQ_ELEM_DESC_GET(aeqe_desc, TYPE);
if (event >= HINIC_MAX_AEQ_EVENTS) {
dev_err(&pdev->dev, "Unknown AEQ Event %d\n", event);
return;
}
if (!HINIC_EQ_ELEM_DESC_GET(aeqe_desc, SRC)) {
hwe_cb = &aeqs->hwe_cb[event];
size = HINIC_EQ_ELEM_DESC_GET(aeqe_desc, SIZE);
eqe_state = cmpxchg(&hwe_cb->hwe_state,
HINIC_EQE_ENABLED,
HINIC_EQE_ENABLED |
HINIC_EQE_RUNNING);
if ((eqe_state == HINIC_EQE_ENABLED) &&
(hwe_cb->hwe_handler))
hwe_cb->hwe_handler(hwe_cb->handle,
aeqe_curr->data, size);
else
dev_err(&pdev->dev, "Unhandled AEQ Event %d\n",
event);
hwe_cb->hwe_state &= ~HINIC_EQE_RUNNING;
}
eq->cons_idx++;
if (eq->cons_idx == eq->q_len) {
eq->cons_idx = 0;
eq->wrapped = !eq->wrapped;
}
}
}
/**
* ceq_event_handler - handler for the ceq events
* @ceqs: ceqs part of the chip
* @ceqe: ceq element that describes the event
**/
static void ceq_event_handler(struct hinic_ceqs *ceqs, u32 ceqe)
{
struct hinic_hwif *hwif = ceqs->hwif;
struct pci_dev *pdev = hwif->pdev;
struct hinic_ceq_cb *ceq_cb;
enum hinic_ceq_type event;
unsigned long eqe_state;
event = CEQE_TYPE(ceqe);
if (event >= HINIC_MAX_CEQ_EVENTS) {
dev_err(&pdev->dev, "Unknown CEQ event, event = %d\n", event);
return;
}
ceq_cb = &ceqs->ceq_cb[event];
eqe_state = cmpxchg(&ceq_cb->ceqe_state,
HINIC_EQE_ENABLED,
HINIC_EQE_ENABLED | HINIC_EQE_RUNNING);
if ((eqe_state == HINIC_EQE_ENABLED) && (ceq_cb->handler))
ceq_cb->handler(ceq_cb->handle, CEQE_DATA(ceqe));
else
dev_err(&pdev->dev, "Unhandled CEQ Event %d\n", event);
ceq_cb->ceqe_state &= ~HINIC_EQE_RUNNING;
}
/**
* ceq_irq_handler - handler for the CEQ event
* @eq: the Completion Event Queue that received the event
**/
static void ceq_irq_handler(struct hinic_eq *eq)
{
struct hinic_ceqs *ceqs = ceq_to_ceqs(eq);
u32 ceqe;
int i;
for (i = 0; i < eq->q_len; i++) {
ceqe = *(GET_CURR_CEQ_ELEM(eq));
/* Data in HW is in big-endian format */
ceqe = be32_to_cpu(ceqe);
/* HW toggles the wrapped bit when it adds an eq element */
if (HINIC_EQ_ELEM_DESC_GET(ceqe, WRAPPED) == eq->wrapped)
break;
ceq_event_handler(ceqs, ceqe);
eq->cons_idx++;
if (eq->cons_idx == eq->q_len) {
eq->cons_idx = 0;
eq->wrapped = !eq->wrapped;
}
}
}
/**
* eq_irq_handler - handler for the EQ event
* @data: the Event Queue that received the event
**/
static void eq_irq_handler(void *data)
{
struct hinic_eq *eq = data;
if (eq->type == HINIC_AEQ)
aeq_irq_handler(eq);
else if (eq->type == HINIC_CEQ)
ceq_irq_handler(eq);
eq_update_ci(eq);
}
/**
* eq_irq_work - the work of the EQ that received the event
* @work: the work struct that is associated with the EQ
**/
static void eq_irq_work(struct work_struct *work)
{
struct hinic_eq_work *aeq_work = work_to_aeq_work(work);
struct hinic_eq *aeq;
aeq = aeq_work->data;
eq_irq_handler(aeq);
}
/**
* ceq_tasklet - the tasklet of the EQ that received the event
* @ceq_data: the Completion Event Queue, passed as an unsigned long
**/
static void ceq_tasklet(unsigned long ceq_data)
{
struct hinic_eq *ceq = (struct hinic_eq *)ceq_data;
eq_irq_handler(ceq);
}
/**
* aeq_interrupt - aeq interrupt handler
* @irq: irq number
* @data: the Async Event Queue that collected the event
**/
static irqreturn_t aeq_interrupt(int irq, void *data)
{
struct hinic_eq_work *aeq_work;
struct hinic_eq *aeq = data;
struct hinic_aeqs *aeqs;
/* clear resend timer cnt register */
hinic_msix_attr_cnt_clear(aeq->hwif, aeq->msix_entry.entry);
aeq_work = &aeq->aeq_work;
aeq_work->data = aeq;
aeqs = aeq_to_aeqs(aeq);
queue_work(aeqs->workq, &aeq_work->work);
return IRQ_HANDLED;
}
/**
* ceq_interrupt - ceq interrupt handler
* @irq: irq number
* @data: the Completion Event Queue that collected the event
**/
static irqreturn_t ceq_interrupt(int irq, void *data)
{
struct hinic_eq *ceq = data;
/* clear resend timer cnt register */
hinic_msix_attr_cnt_clear(ceq->hwif, ceq->msix_entry.entry);
tasklet_schedule(&ceq->ceq_tasklet);
return IRQ_HANDLED;
}
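/*
 * The two interrupt handlers above defer to different contexts: AEQ
 * events are queued to a workqueue and handled in process context, so
 * their callbacks may sleep, while CEQ events are handled in a tasklet
 * in softirq context on the completion path.
 */
/**
 * set_ctrl0 - set the Ctrl0 register of an event queue
 * @eq: the event queue to set the register for
 **/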
static void set_ctrl0(struct hinic_eq *eq)
{
struct msix_entry *msix_entry = &eq->msix_entry;
enum hinic_eq_type type = eq->type;
u32 addr, val, ctrl0;
if (type == HINIC_AEQ) {
/* RMW Ctrl0 */
addr = HINIC_CSR_AEQ_CTRL_0_ADDR(eq->q_id);
val = hinic_hwif_read_reg(eq->hwif, addr);
val = HINIC_AEQ_CTRL_0_CLEAR(val, INT_IDX) &
HINIC_AEQ_CTRL_0_CLEAR(val, DMA_ATTR) &
HINIC_AEQ_CTRL_0_CLEAR(val, PCI_INTF_IDX) &
HINIC_AEQ_CTRL_0_CLEAR(val, INT_MODE);
ctrl0 = HINIC_AEQ_CTRL_0_SET(msix_entry->entry, INT_IDX) |
HINIC_AEQ_CTRL_0_SET(DMA_ATTR_AEQ_DEFAULT, DMA_ATTR) |
HINIC_AEQ_CTRL_0_SET(HINIC_HWIF_PCI_INTF(eq->hwif),
PCI_INTF_IDX) |
HINIC_AEQ_CTRL_0_SET(EQ_INT_MODE_ARMED, INT_MODE);
val |= ctrl0;
hinic_hwif_write_reg(eq->hwif, addr, val);
} else if (type == HINIC_CEQ) {
/* RMW Ctrl0 */
addr = HINIC_CSR_CEQ_CTRL_0_ADDR(eq->q_id);
val = hinic_hwif_read_reg(eq->hwif, addr);
val = HINIC_CEQ_CTRL_0_CLEAR(val, INTR_IDX) &
HINIC_CEQ_CTRL_0_CLEAR(val, DMA_ATTR) &
HINIC_CEQ_CTRL_0_CLEAR(val, KICK_THRESH) &
HINIC_CEQ_CTRL_0_CLEAR(val, PCI_INTF_IDX) &
HINIC_CEQ_CTRL_0_CLEAR(val, INTR_MODE);
ctrl0 = HINIC_CEQ_CTRL_0_SET(msix_entry->entry, INTR_IDX) |
HINIC_CEQ_CTRL_0_SET(DMA_ATTR_CEQ_DEFAULT, DMA_ATTR) |
HINIC_CEQ_CTRL_0_SET(THRESH_CEQ_DEFAULT, KICK_THRESH) |
HINIC_CEQ_CTRL_0_SET(HINIC_HWIF_PCI_INTF(eq->hwif),
PCI_INTF_IDX) |
HINIC_CEQ_CTRL_0_SET(EQ_INT_MODE_ARMED, INTR_MODE);
val |= ctrl0;
hinic_hwif_write_reg(eq->hwif, addr, val);
}
}
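/**
 * set_ctrl1 - set the Ctrl1 register of an event queue
 * @eq: the event queue to set the register for
 **/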
static void set_ctrl1(struct hinic_eq *eq)
{
enum hinic_eq_type type = eq->type;
u32 page_size_val, elem_size;
u32 addr, val, ctrl1;
if (type == HINIC_AEQ) {
/* RMW Ctrl1 */
addr = HINIC_CSR_AEQ_CTRL_1_ADDR(eq->q_id);
page_size_val = EQ_SET_HW_PAGE_SIZE_VAL(eq);
elem_size = EQ_SET_HW_ELEM_SIZE_VAL(eq);
val = hinic_hwif_read_reg(eq->hwif, addr);
val = HINIC_AEQ_CTRL_1_CLEAR(val, LEN) &
HINIC_AEQ_CTRL_1_CLEAR(val, ELEM_SIZE) &
HINIC_AEQ_CTRL_1_CLEAR(val, PAGE_SIZE);
ctrl1 = HINIC_AEQ_CTRL_1_SET(eq->q_len, LEN) |
HINIC_AEQ_CTRL_1_SET(elem_size, ELEM_SIZE) |
HINIC_AEQ_CTRL_1_SET(page_size_val, PAGE_SIZE);
val |= ctrl1;
hinic_hwif_write_reg(eq->hwif, addr, val);
} else if (type == HINIC_CEQ) {
/* RMW Ctrl1 */
addr = HINIC_CSR_CEQ_CTRL_1_ADDR(eq->q_id);
page_size_val = EQ_SET_HW_PAGE_SIZE_VAL(eq);
val = hinic_hwif_read_reg(eq->hwif, addr);
val = HINIC_CEQ_CTRL_1_CLEAR(val, LEN) &
HINIC_CEQ_CTRL_1_CLEAR(val, PAGE_SIZE);
ctrl1 = HINIC_CEQ_CTRL_1_SET(eq->q_len, LEN) |
HINIC_CEQ_CTRL_1_SET(page_size_val, PAGE_SIZE);
val |= ctrl1;
hinic_hwif_write_reg(eq->hwif, addr, val);
}
}
/**
* set_eq_ctrls - set the ctrl registers of an event queue
* @eq: the Event Queue for setting
**/
static void set_eq_ctrls(struct hinic_eq *eq)
{
set_ctrl0(eq);
set_ctrl1(eq);
}
/**
* aeq_elements_init - initialize all the elements in the aeq
* @eq: the Async Event Queue
* @init_val: value to initialize the elements with
**/
static void aeq_elements_init(struct hinic_eq *eq, u32 init_val)
{
struct hinic_aeq_elem *aeqe;
int i;
for (i = 0; i < eq->q_len; i++) {
aeqe = GET_AEQ_ELEM(eq, i);
aeqe->desc = cpu_to_be32(init_val);
}
wmb(); /* Write the initialization values */
}
/**
* ceq_elements_init - Initialize all the elements in the ceq
* @eq: the event queue
* @init_val: value to initialize the elements with
**/
static void ceq_elements_init(struct hinic_eq *eq, u32 init_val)
{
u32 *ceqe;
int i;
for (i = 0; i < eq->q_len; i++) {
ceqe = GET_CEQ_ELEM(eq, i);
*(ceqe) = cpu_to_be32(init_val);
}
wmb(); /* Write the initialization values */
}
/**
* alloc_eq_pages - allocate the pages for the queue
* @eq: the event queue
*
* Return 0 - Success, Negative - Failure
**/
static int alloc_eq_pages(struct hinic_eq *eq)
{
struct hinic_hwif *hwif = eq->hwif;
struct pci_dev *pdev = hwif->pdev;
u32 init_val, addr, val;
size_t addr_size;
int err, pg;
addr_size = eq->num_pages * sizeof(*eq->dma_addr);
eq->dma_addr = devm_kzalloc(&pdev->dev, addr_size, GFP_KERNEL);
if (!eq->dma_addr)
return -ENOMEM;
addr_size = eq->num_pages * sizeof(*eq->virt_addr);
eq->virt_addr = devm_kzalloc(&pdev->dev, addr_size, GFP_KERNEL);
if (!eq->virt_addr) {
err = -ENOMEM;
goto err_virt_addr_alloc;
}
for (pg = 0; pg < eq->num_pages; pg++) {
eq->virt_addr[pg] = dma_zalloc_coherent(&pdev->dev,
eq->page_size,
&eq->dma_addr[pg],
GFP_KERNEL);
if (!eq->virt_addr[pg]) {
err = -ENOMEM;
goto err_dma_alloc;
}
addr = EQ_HI_PHYS_ADDR_REG(eq, pg);
val = upper_32_bits(eq->dma_addr[pg]);
hinic_hwif_write_reg(hwif, addr, val);
addr = EQ_LO_PHYS_ADDR_REG(eq, pg);
val = lower_32_bits(eq->dma_addr[pg]);
hinic_hwif_write_reg(hwif, addr, val);
}
init_val = HINIC_EQ_ELEM_DESC_SET(eq->wrapped, WRAPPED);
if (eq->type == HINIC_AEQ)
aeq_elements_init(eq, init_val);
else if (eq->type == HINIC_CEQ)
ceq_elements_init(eq, init_val);
return 0;
err_dma_alloc:
while (--pg >= 0)
dma_free_coherent(&pdev->dev, eq->page_size,
eq->virt_addr[pg],
eq->dma_addr[pg]);
devm_kfree(&pdev->dev, eq->virt_addr);
err_virt_addr_alloc:
devm_kfree(&pdev->dev, eq->dma_addr);
return err;
}
/**
* free_eq_pages - free the pages of the queue
* @eq: the Event Queue
**/
static void free_eq_pages(struct hinic_eq *eq)
{
struct hinic_hwif *hwif = eq->hwif;
struct pci_dev *pdev = hwif->pdev;
int pg;
for (pg = 0; pg < eq->num_pages; pg++)
dma_free_coherent(&pdev->dev, eq->page_size,
eq->virt_addr[pg],
eq->dma_addr[pg]);
devm_kfree(&pdev->dev, eq->virt_addr);
devm_kfree(&pdev->dev, eq->dma_addr);
}
/**
* init_eq - initialize Event Queue
* @eq: the event queue
* @hwif: the HW interface of a PCI function device
* @type: the type of the event queue, aeq or ceq
* @q_id: Queue id number
* @q_len: the number of EQ elements
* @page_size: the page size of the pages in the event queue
* @entry: msix entry associated with the event queue
*
* Return 0 - Success, Negative - Failure
**/
static int init_eq(struct hinic_eq *eq, struct hinic_hwif *hwif,
enum hinic_eq_type type, int q_id, u32 q_len, u32 page_size,
struct msix_entry entry)
{
struct pci_dev *pdev = hwif->pdev;
int err;
eq->hwif = hwif;
eq->type = type;
eq->q_id = q_id;
eq->q_len = q_len;
eq->page_size = page_size;
/* Clear PI and CI, also clear the ARM bit */
hinic_hwif_write_reg(eq->hwif, EQ_CONS_IDX_REG_ADDR(eq), 0);
hinic_hwif_write_reg(eq->hwif, EQ_PROD_IDX_REG_ADDR(eq), 0);
eq->cons_idx = 0;
eq->wrapped = 0;
if (type == HINIC_AEQ) {
eq->elem_size = HINIC_AEQE_SIZE;
} else if (type == HINIC_CEQ) {
eq->elem_size = HINIC_CEQE_SIZE;
} else {
dev_err(&pdev->dev, "Invalid EQ type\n");
return -EINVAL;
}
eq->num_pages = GET_EQ_NUM_PAGES(eq, page_size);
eq->num_elem_in_pg = GET_EQ_NUM_ELEMS_IN_PG(eq, page_size);
eq->msix_entry = entry;
if (eq->num_elem_in_pg & (eq->num_elem_in_pg - 1)) {
dev_err(&pdev->dev, "num elements in eq page != power of 2\n");
return -EINVAL;
}
if (eq->num_pages > EQ_MAX_PAGES) {
dev_err(&pdev->dev, "too many pages for eq\n");
return -EINVAL;
}
set_eq_ctrls(eq);
eq_update_ci(eq);
err = alloc_eq_pages(eq);
if (err) {
dev_err(&pdev->dev, "Failed to allocate pages for eq\n");
return err;
}
if (type == HINIC_AEQ) {
struct hinic_eq_work *aeq_work = &eq->aeq_work;
INIT_WORK(&aeq_work->work, eq_irq_work);
} else if (type == HINIC_CEQ) {
tasklet_init(&eq->ceq_tasklet, ceq_tasklet,
(unsigned long)eq);
}
/* set the attributes of the msix entry */
hinic_msix_attr_set(eq->hwif, eq->msix_entry.entry,
HINIC_EQ_MSIX_PENDING_LIMIT_DEFAULT,
HINIC_EQ_MSIX_COALESC_TIMER_DEFAULT,
HINIC_EQ_MSIX_LLI_TIMER_DEFAULT,
HINIC_EQ_MSIX_LLI_CREDIT_LIMIT_DEFAULT,
HINIC_EQ_MSIX_RESEND_TIMER_DEFAULT);
if (type == HINIC_AEQ)
err = request_irq(entry.vector, aeq_interrupt, 0,
"hinic_aeq", eq);
else if (type == HINIC_CEQ)
err = request_irq(entry.vector, ceq_interrupt, 0,
"hinic_ceq", eq);
if (err) {
dev_err(&pdev->dev, "Failed to request irq for the EQ\n");
goto err_req_irq;
}
return 0;
err_req_irq:
free_eq_pages(eq);
return err;
}
/**
* remove_eq - remove Event Queue
* @eq: the event queue
**/
static void remove_eq(struct hinic_eq *eq)
{
struct msix_entry *entry = &eq->msix_entry;
free_irq(entry->vector, eq);
if (eq->type == HINIC_AEQ) {
struct hinic_eq_work *aeq_work = &eq->aeq_work;
cancel_work_sync(&aeq_work->work);
} else if (eq->type == HINIC_CEQ) {
tasklet_kill(&eq->ceq_tasklet);
}
free_eq_pages(eq);
}
/**
* hinic_aeqs_init - initialize all the aeqs
* @aeqs: pointer to Async eqs of the chip
* @hwif: the HW interface of a PCI function device
* @num_aeqs: number of AEQs
* @q_len: number of EQ elements
* @page_size: the page size of the pages in the event queue
* @msix_entries: msix entries associated with the event queues
*
* Return 0 - Success, negative - Failure
**/
int hinic_aeqs_init(struct hinic_aeqs *aeqs, struct hinic_hwif *hwif,
int num_aeqs, u32 q_len, u32 page_size,
struct msix_entry *msix_entries)
{
struct pci_dev *pdev = hwif->pdev;
int err, i, q_id;
aeqs->workq = create_singlethread_workqueue(HINIC_EQS_WQ_NAME);
if (!aeqs->workq)
return -ENOMEM;
aeqs->hwif = hwif;
aeqs->num_aeqs = num_aeqs;
for (q_id = 0; q_id < num_aeqs; q_id++) {
err = init_eq(&aeqs->aeq[q_id], hwif, HINIC_AEQ, q_id, q_len,
page_size, msix_entries[q_id]);
if (err) {
dev_err(&pdev->dev, "Failed to init aeq %d\n", q_id);
goto err_init_aeq;
}
}
return 0;
err_init_aeq:
for (i = 0; i < q_id; i++)
remove_eq(&aeqs->aeq[i]);
destroy_workqueue(aeqs->workq);
return err;
}
/**
* hinic_aeqs_free - free all the aeqs
* @aeqs: pointer to Async eqs of the chip
**/
void hinic_aeqs_free(struct hinic_aeqs *aeqs)
{
int q_id;
for (q_id = 0; q_id < aeqs->num_aeqs; q_id++)
remove_eq(&aeqs->aeq[q_id]);
destroy_workqueue(aeqs->workq);
}
/**
* hinic_ceqs_init - init all the ceqs
* @ceqs: ceqs part of the chip
* @hwif: the hardware interface of a pci function device
* @num_ceqs: number of CEQs
* @q_len: number of EQ elements
* @page_size: the page size of the event queue
* @msix_entries: msix entries associated with the event queues
*
* Return 0 - Success, Negative - Failure
**/
int hinic_ceqs_init(struct hinic_ceqs *ceqs, struct hinic_hwif *hwif,
int num_ceqs, u32 q_len, u32 page_size,
struct msix_entry *msix_entries)
{
struct pci_dev *pdev = hwif->pdev;
int i, q_id, err;
ceqs->hwif = hwif;
ceqs->num_ceqs = num_ceqs;
for (q_id = 0; q_id < num_ceqs; q_id++) {
err = init_eq(&ceqs->ceq[q_id], hwif, HINIC_CEQ, q_id, q_len,
page_size, msix_entries[q_id]);
if (err) {
dev_err(&pdev->dev, "Failed to init ceq %d\n", q_id);
goto err_init_ceq;
}
}
return 0;
err_init_ceq:
for (i = 0; i < q_id; i++)
remove_eq(&ceqs->ceq[i]);
return err;
}
/**
* hinic_ceqs_free - free all the ceqs
* @ceqs: ceqs part of the chip
**/
void hinic_ceqs_free(struct hinic_ceqs *ceqs)
{
int q_id;
for (q_id = 0; q_id < ceqs->num_ceqs; q_id++)
remove_eq(&ceqs->ceq[q_id]);
}
/*
* Huawei HiNIC PCI Express Linux driver
* Copyright(c) 2017 Huawei Technologies Co., Ltd
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* for more details.
*
*/
#ifndef HINIC_HW_EQS_H
#define HINIC_HW_EQS_H
#include <linux/types.h>
#include <linux/workqueue.h>
#include <linux/pci.h>
#include <linux/sizes.h>
#include <linux/bitops.h>
#include <linux/interrupt.h>
#include "hinic_hw_if.h"
#define HINIC_AEQ_CTRL_0_INT_IDX_SHIFT 0
#define HINIC_AEQ_CTRL_0_DMA_ATTR_SHIFT 12
#define HINIC_AEQ_CTRL_0_PCI_INTF_IDX_SHIFT 20
#define HINIC_AEQ_CTRL_0_INT_MODE_SHIFT 31
#define HINIC_AEQ_CTRL_0_INT_IDX_MASK 0x3FF
#define HINIC_AEQ_CTRL_0_DMA_ATTR_MASK 0x3F
#define HINIC_AEQ_CTRL_0_PCI_INTF_IDX_MASK 0x3
#define HINIC_AEQ_CTRL_0_INT_MODE_MASK 0x1
#define HINIC_AEQ_CTRL_0_SET(val, member) \
(((u32)(val) & HINIC_AEQ_CTRL_0_##member##_MASK) << \
HINIC_AEQ_CTRL_0_##member##_SHIFT)
#define HINIC_AEQ_CTRL_0_CLEAR(val, member) \
((val) & (~(HINIC_AEQ_CTRL_0_##member##_MASK \
<< HINIC_AEQ_CTRL_0_##member##_SHIFT)))
#define HINIC_AEQ_CTRL_1_LEN_SHIFT 0
#define HINIC_AEQ_CTRL_1_ELEM_SIZE_SHIFT 24
#define HINIC_AEQ_CTRL_1_PAGE_SIZE_SHIFT 28
#define HINIC_AEQ_CTRL_1_LEN_MASK 0x1FFFFF
#define HINIC_AEQ_CTRL_1_ELEM_SIZE_MASK 0x3
#define HINIC_AEQ_CTRL_1_PAGE_SIZE_MASK 0xF
#define HINIC_AEQ_CTRL_1_SET(val, member) \
(((u32)(val) & HINIC_AEQ_CTRL_1_##member##_MASK) << \
HINIC_AEQ_CTRL_1_##member##_SHIFT)
#define HINIC_AEQ_CTRL_1_CLEAR(val, member) \
((val) & (~(HINIC_AEQ_CTRL_1_##member##_MASK \
<< HINIC_AEQ_CTRL_1_##member##_SHIFT)))
#define HINIC_CEQ_CTRL_0_INTR_IDX_SHIFT 0
#define HINIC_CEQ_CTRL_0_DMA_ATTR_SHIFT 12
#define HINIC_CEQ_CTRL_0_KICK_THRESH_SHIFT 20
#define HINIC_CEQ_CTRL_0_PCI_INTF_IDX_SHIFT 24
#define HINIC_CEQ_CTRL_0_INTR_MODE_SHIFT 31
#define HINIC_CEQ_CTRL_0_INTR_IDX_MASK 0x3FF
#define HINIC_CEQ_CTRL_0_DMA_ATTR_MASK 0x3F
#define HINIC_CEQ_CTRL_0_KICK_THRESH_MASK 0xF
#define HINIC_CEQ_CTRL_0_PCI_INTF_IDX_MASK 0x3
#define HINIC_CEQ_CTRL_0_INTR_MODE_MASK 0x1
#define HINIC_CEQ_CTRL_0_SET(val, member) \
(((u32)(val) & HINIC_CEQ_CTRL_0_##member##_MASK) << \
HINIC_CEQ_CTRL_0_##member##_SHIFT)
#define HINIC_CEQ_CTRL_0_CLEAR(val, member) \
((val) & (~(HINIC_CEQ_CTRL_0_##member##_MASK \
<< HINIC_CEQ_CTRL_0_##member##_SHIFT)))
#define HINIC_CEQ_CTRL_1_LEN_SHIFT 0
#define HINIC_CEQ_CTRL_1_PAGE_SIZE_SHIFT 28
#define HINIC_CEQ_CTRL_1_LEN_MASK 0x1FFFFF
#define HINIC_CEQ_CTRL_1_PAGE_SIZE_MASK 0xF
#define HINIC_CEQ_CTRL_1_SET(val, member) \
(((u32)(val) & HINIC_CEQ_CTRL_1_##member##_MASK) << \
HINIC_CEQ_CTRL_1_##member##_SHIFT)
#define HINIC_CEQ_CTRL_1_CLEAR(val, member) \
((val) & (~(HINIC_CEQ_CTRL_1_##member##_MASK \
<< HINIC_CEQ_CTRL_1_##member##_SHIFT)))
#define HINIC_EQ_ELEM_DESC_TYPE_SHIFT 0
#define HINIC_EQ_ELEM_DESC_SRC_SHIFT 7
#define HINIC_EQ_ELEM_DESC_SIZE_SHIFT 8
#define HINIC_EQ_ELEM_DESC_WRAPPED_SHIFT 31
#define HINIC_EQ_ELEM_DESC_TYPE_MASK 0x7F
#define HINIC_EQ_ELEM_DESC_SRC_MASK 0x1
#define HINIC_EQ_ELEM_DESC_SIZE_MASK 0xFF
#define HINIC_EQ_ELEM_DESC_WRAPPED_MASK 0x1
#define HINIC_EQ_ELEM_DESC_SET(val, member) \
(((u32)(val) & HINIC_EQ_ELEM_DESC_##member##_MASK) << \
HINIC_EQ_ELEM_DESC_##member##_SHIFT)
#define HINIC_EQ_ELEM_DESC_GET(val, member) \
(((val) >> HINIC_EQ_ELEM_DESC_##member##_SHIFT) & \
HINIC_EQ_ELEM_DESC_##member##_MASK)
#define HINIC_EQ_CI_IDX_SHIFT 0
#define HINIC_EQ_CI_WRAPPED_SHIFT 20
#define HINIC_EQ_CI_XOR_CHKSUM_SHIFT 24
#define HINIC_EQ_CI_INT_ARMED_SHIFT 31
#define HINIC_EQ_CI_IDX_MASK 0xFFFFF
#define HINIC_EQ_CI_WRAPPED_MASK 0x1
#define HINIC_EQ_CI_XOR_CHKSUM_MASK 0xF
#define HINIC_EQ_CI_INT_ARMED_MASK 0x1
#define HINIC_EQ_CI_SET(val, member) \
(((u32)(val) & HINIC_EQ_CI_##member##_MASK) << \
HINIC_EQ_CI_##member##_SHIFT)
#define HINIC_EQ_CI_CLEAR(val, member) \
((val) & (~(HINIC_EQ_CI_##member##_MASK \
<< HINIC_EQ_CI_##member##_SHIFT)))
#define HINIC_MAX_AEQS 4
#define HINIC_MAX_CEQS 32
#define HINIC_AEQE_SIZE 64
#define HINIC_CEQE_SIZE 4
#define HINIC_AEQE_DESC_SIZE 4
#define HINIC_AEQE_DATA_SIZE \
(HINIC_AEQE_SIZE - HINIC_AEQE_DESC_SIZE)
#define HINIC_DEFAULT_AEQ_LEN 64
#define HINIC_DEFAULT_CEQ_LEN 1024
#define HINIC_EQ_PAGE_SIZE SZ_4K
#define HINIC_CEQ_ID_CMDQ 0
enum hinic_eq_type {
HINIC_AEQ,
HINIC_CEQ,
};
enum hinic_aeq_type {
HINIC_MSG_FROM_MGMT_CPU = 2,
HINIC_MAX_AEQ_EVENTS,
};
enum hinic_ceq_type {
HINIC_CEQ_CMDQ = 3,
HINIC_MAX_CEQ_EVENTS,
};
enum hinic_eqe_state {
HINIC_EQE_ENABLED = BIT(0),
HINIC_EQE_RUNNING = BIT(1),
};
struct hinic_aeq_elem {
u8 data[HINIC_AEQE_DATA_SIZE];
u32 desc;
};
struct hinic_eq_work {
struct work_struct work;
void *data;
};
struct hinic_eq {
struct hinic_hwif *hwif;
enum hinic_eq_type type;
int q_id;
u32 q_len;
u32 page_size;
u32 cons_idx;
int wrapped;
size_t elem_size;
int num_pages;
int num_elem_in_pg;
struct msix_entry msix_entry;
dma_addr_t *dma_addr;
void **virt_addr;
struct hinic_eq_work aeq_work;
struct tasklet_struct ceq_tasklet;
};
struct hinic_hw_event_cb {
void (*hwe_handler)(void *handle, void *data, u8 size);
void *handle;
unsigned long hwe_state;
};
struct hinic_aeqs {
struct hinic_hwif *hwif;
struct hinic_eq aeq[HINIC_MAX_AEQS];
int num_aeqs;
struct hinic_hw_event_cb hwe_cb[HINIC_MAX_AEQ_EVENTS];
struct workqueue_struct *workq;
};
struct hinic_ceq_cb {
void (*handler)(void *handle, u32 ceqe_data);
void *handle;
enum hinic_eqe_state ceqe_state;
};
struct hinic_ceqs {
struct hinic_hwif *hwif;
struct hinic_eq ceq[HINIC_MAX_CEQS];
int num_ceqs;
struct hinic_ceq_cb ceq_cb[HINIC_MAX_CEQ_EVENTS];
};
void hinic_aeq_register_hw_cb(struct hinic_aeqs *aeqs,
enum hinic_aeq_type event, void *handle,
void (*hwe_handler)(void *handle, void *data,
u8 size));
void hinic_aeq_unregister_hw_cb(struct hinic_aeqs *aeqs,
enum hinic_aeq_type event);
void hinic_ceq_register_cb(struct hinic_ceqs *ceqs,
enum hinic_ceq_type event, void *handle,
void (*ceq_cb)(void *handle, u32 ceqe_data));
void hinic_ceq_unregister_cb(struct hinic_ceqs *ceqs,
enum hinic_ceq_type event);
int hinic_aeqs_init(struct hinic_aeqs *aeqs, struct hinic_hwif *hwif,
int num_aeqs, u32 q_len, u32 page_size,
struct msix_entry *msix_entries);
void hinic_aeqs_free(struct hinic_aeqs *aeqs);
int hinic_ceqs_init(struct hinic_ceqs *ceqs, struct hinic_hwif *hwif,
int num_ceqs, u32 q_len, u32 page_size,
struct msix_entry *msix_entries);
void hinic_ceqs_free(struct hinic_ceqs *ceqs);
#endif
/*
* Huawei HiNIC PCI Express Linux driver
* Copyright(c) 2017 Huawei Technologies Co., Ltd
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* for more details.
*
*/
#include <linux/pci.h>
#include <linux/device.h>
#include <linux/errno.h>
#include <linux/io.h>
#include <linux/types.h>
#include <linux/bitops.h>
#include "hinic_hw_csr.h"
#include "hinic_hw_if.h"
#define PCIE_ATTR_ENTRY 0
#define VALID_MSIX_IDX(attr, msix_index) ((msix_index) < (attr)->num_irqs)
/**
* hinic_msix_attr_set - set message attribute for msix entry
* @hwif: the HW interface of a pci function device
* @msix_index: msix_index
* @pending_limit: the maximum pending interrupt events (unit 8)
* @coalesc_timer: coalesc period for interrupt (unit 8 us)
* @lli_timer: replenishing period for low latency credit (unit 8 us)
* @lli_credit_limit: maximum credits for low latency msix messages (unit 8)
* @resend_timer: maximum wait for resending msix (unit coalesc period)
*
* Return 0 - Success, negative - Failure
**/
int hinic_msix_attr_set(struct hinic_hwif *hwif, u16 msix_index,
u8 pending_limit, u8 coalesc_timer,
u8 lli_timer, u8 lli_credit_limit,
u8 resend_timer)
{
u32 msix_ctrl, addr;
if (!VALID_MSIX_IDX(&hwif->attr, msix_index))
return -EINVAL;
msix_ctrl = HINIC_MSIX_ATTR_SET(pending_limit, PENDING_LIMIT) |
HINIC_MSIX_ATTR_SET(coalesc_timer, COALESC_TIMER) |
HINIC_MSIX_ATTR_SET(lli_timer, LLI_TIMER) |
HINIC_MSIX_ATTR_SET(lli_credit_limit, LLI_CREDIT) |
HINIC_MSIX_ATTR_SET(resend_timer, RESEND_TIMER);
addr = HINIC_CSR_MSIX_CTRL_ADDR(msix_index);
hinic_hwif_write_reg(hwif, addr, msix_ctrl);
return 0;
}
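/*
 * Example (illustrative, using the HINIC_EQ_MSIX_*_DEFAULT values from
 * hinic_hw_if.h): the EQs are configured with pending_limit = 0 and
 * lli_timer = 0 (both disabled), coalesc_timer = 0xFF - i.e. 255 units
 * of 8 us, roughly 2 ms - and resend_timer = 7, the maximum number of
 * coalescing periods to wait before resending the msix.
 */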
/**
* hinic_msix_attr_get - get message attribute of msix entry
* @hwif: the HW interface of a pci function device
* @msix_index: msix_index
* @pending_limit: the maximum pending interrupt events (unit 8)
* @coalesc_timer: coalesc period for interrupt (unit 8 us)
* @lli_timer: replenishing period for low latency credit (unit 8 us)
* @lli_credit_limit: maximum credits for low latency msix messages (unit 8)
* @resend_timer: maximum wait for resending msix (unit coalesc period)
*
* Return 0 - Success, negative - Failure
**/
int hinic_msix_attr_get(struct hinic_hwif *hwif, u16 msix_index,
u8 *pending_limit, u8 *coalesc_timer,
u8 *lli_timer, u8 *lli_credit_limit,
u8 *resend_timer)
{
u32 addr, val;
if (!VALID_MSIX_IDX(&hwif->attr, msix_index))
return -EINVAL;
addr = HINIC_CSR_MSIX_CTRL_ADDR(msix_index);
val = hinic_hwif_read_reg(hwif, addr);
*pending_limit = HINIC_MSIX_ATTR_GET(val, PENDING_LIMIT);
*coalesc_timer = HINIC_MSIX_ATTR_GET(val, COALESC_TIMER);
*lli_timer = HINIC_MSIX_ATTR_GET(val, LLI_TIMER);
*lli_credit_limit = HINIC_MSIX_ATTR_GET(val, LLI_CREDIT);
*resend_timer = HINIC_MSIX_ATTR_GET(val, RESEND_TIMER);
return 0;
}
/**
* hinic_msix_attr_cnt_clear - clear message attribute counters for msix entry
* @hwif: the HW interface of a pci function device
* @msix_index: msix_index
*
* Return 0 - Success, negative - Failure
**/
int hinic_msix_attr_cnt_clear(struct hinic_hwif *hwif, u16 msix_index)
{
u32 msix_ctrl, addr;
if (!VALID_MSIX_IDX(&hwif->attr, msix_index))
return -EINVAL;
msix_ctrl = HINIC_MSIX_CNT_SET(1, RESEND_TIMER);
addr = HINIC_CSR_MSIX_CNT_ADDR(msix_index);
hinic_hwif_write_reg(hwif, addr, msix_ctrl);
return 0;
}
/**
* hinic_set_pf_action - set action on pf channel
* @hwif: the HW interface of a pci function device
* @action: action on pf channel
**/
void hinic_set_pf_action(struct hinic_hwif *hwif, enum hinic_pf_action action)
{
u32 attr5 = hinic_hwif_read_reg(hwif, HINIC_CSR_FUNC_ATTR5_ADDR);
attr5 = HINIC_FA5_CLEAR(attr5, PF_ACTION);
attr5 |= HINIC_FA5_SET(action, PF_ACTION);
hinic_hwif_write_reg(hwif, HINIC_CSR_FUNC_ATTR5_ADDR, attr5);
}
enum hinic_outbound_state hinic_outbound_state_get(struct hinic_hwif *hwif)
{
u32 attr4 = hinic_hwif_read_reg(hwif, HINIC_CSR_FUNC_ATTR4_ADDR);
return HINIC_FA4_GET(attr4, OUTBOUND_STATE);
}
void hinic_outbound_state_set(struct hinic_hwif *hwif,
enum hinic_outbound_state outbound_state)
{
u32 attr4 = hinic_hwif_read_reg(hwif, HINIC_CSR_FUNC_ATTR4_ADDR);
attr4 = HINIC_FA4_CLEAR(attr4, OUTBOUND_STATE);
attr4 |= HINIC_FA4_SET(outbound_state, OUTBOUND_STATE);
hinic_hwif_write_reg(hwif, HINIC_CSR_FUNC_ATTR4_ADDR, attr4);
}
enum hinic_db_state hinic_db_state_get(struct hinic_hwif *hwif)
{
u32 attr4 = hinic_hwif_read_reg(hwif, HINIC_CSR_FUNC_ATTR4_ADDR);
return HINIC_FA4_GET(attr4, DB_STATE);
}
void hinic_db_state_set(struct hinic_hwif *hwif,
enum hinic_db_state db_state)
{
u32 attr4 = hinic_hwif_read_reg(hwif, HINIC_CSR_FUNC_ATTR4_ADDR);
attr4 = HINIC_FA4_CLEAR(attr4, DB_STATE);
attr4 |= HINIC_FA4_SET(db_state, DB_STATE);
hinic_hwif_write_reg(hwif, HINIC_CSR_FUNC_ATTR4_ADDR, attr4);
}
/**
* hwif_ready - test if the HW is ready for use
* @hwif: the HW interface of a pci function device
*
* Return 0 - Success, negative - Failure
**/
static int hwif_ready(struct hinic_hwif *hwif)
{
struct pci_dev *pdev = hwif->pdev;
u32 addr, attr1;
addr = HINIC_CSR_FUNC_ATTR1_ADDR;
attr1 = hinic_hwif_read_reg(hwif, addr);
if (!HINIC_FA1_GET(attr1, INIT_STATUS)) {
dev_err(&pdev->dev, "hwif status is not ready\n");
return -EFAULT;
}
return 0;
}
/**
* set_hwif_attr - set the attributes in the relevant members in hwif
* @hwif: the HW interface of a pci function device
* @attr0: the first attribute that was read from the hw
* @attr1: the second attribute that was read from the hw
**/
static void set_hwif_attr(struct hinic_hwif *hwif, u32 attr0, u32 attr1)
{
hwif->attr.func_idx = HINIC_FA0_GET(attr0, FUNC_IDX);
hwif->attr.pf_idx = HINIC_FA0_GET(attr0, PF_IDX);
hwif->attr.pci_intf_idx = HINIC_FA0_GET(attr0, PCI_INTF_IDX);
hwif->attr.func_type = HINIC_FA0_GET(attr0, FUNC_TYPE);
hwif->attr.num_aeqs = BIT(HINIC_FA1_GET(attr1, AEQS_PER_FUNC));
hwif->attr.num_ceqs = BIT(HINIC_FA1_GET(attr1, CEQS_PER_FUNC));
hwif->attr.num_irqs = BIT(HINIC_FA1_GET(attr1, IRQS_PER_FUNC));
hwif->attr.num_dma_attr = BIT(HINIC_FA1_GET(attr1, DMA_ATTR_PER_FUNC));
}
/**
* read_hwif_attr - read the attributes and set members in hwif
* @hwif: the HW interface of a pci function device
**/
static void read_hwif_attr(struct hinic_hwif *hwif)
{
u32 addr, attr0, attr1;
addr = HINIC_CSR_FUNC_ATTR0_ADDR;
attr0 = hinic_hwif_read_reg(hwif, addr);
addr = HINIC_CSR_FUNC_ATTR1_ADDR;
attr1 = hinic_hwif_read_reg(hwif, addr);
set_hwif_attr(hwif, attr0, attr1);
}
/**
* set_ppf - try to set hwif as ppf and set the type of hwif in this case
* @hwif: the HW interface of a pci function device
**/
static void set_ppf(struct hinic_hwif *hwif)
{
struct hinic_func_attr *attr = &hwif->attr;
u32 addr, val, ppf_election;
/* Read Modify Write */
addr = HINIC_CSR_PPF_ELECTION_ADDR(HINIC_HWIF_PCI_INTF(hwif));
val = hinic_hwif_read_reg(hwif, addr);
val = HINIC_PPF_ELECTION_CLEAR(val, IDX);
ppf_election = HINIC_PPF_ELECTION_SET(HINIC_HWIF_FUNC_IDX(hwif), IDX);
val |= ppf_election;
hinic_hwif_write_reg(hwif, addr, val);
/* check PPF */
val = hinic_hwif_read_reg(hwif, addr);
attr->ppf_idx = HINIC_PPF_ELECTION_GET(val, IDX);
if (attr->ppf_idx == HINIC_HWIF_FUNC_IDX(hwif))
attr->func_type = HINIC_PPF;
}
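/*
 * The election above is a read-modify-write against a shared register:
 * each PF writes its own function index into the PPF_ELECTION IDX field
 * and then reads the register back; whichever index the HW retained
 * identifies the PPF, and only the matching function upgrades its
 * func_type.  How the HW arbitrates concurrent writers is not visible
 * in this code.
 */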
/**
* set_dma_attr - set the dma attributes in the HW
* @hwif: the HW interface of a pci function device
* @entry_idx: the entry index in the dma table
* @st: PCIE TLP steering tag
* @at: PCIE TLP AT field
* @ph: PCIE TLP Processing Hint field
* @no_snooping: PCIE TLP No snooping
* @tph_en: PCIE TLP Processing Hint Enable
**/
static void set_dma_attr(struct hinic_hwif *hwif, u32 entry_idx,
u8 st, u8 at, u8 ph,
enum hinic_pcie_nosnoop no_snooping,
enum hinic_pcie_tph tph_en)
{
u32 addr, val, dma_attr_entry;
/* Read Modify Write */
addr = HINIC_CSR_DMA_ATTR_ADDR(entry_idx);
val = hinic_hwif_read_reg(hwif, addr);
val = HINIC_DMA_ATTR_CLEAR(val, ST) &
HINIC_DMA_ATTR_CLEAR(val, AT) &
HINIC_DMA_ATTR_CLEAR(val, PH) &
HINIC_DMA_ATTR_CLEAR(val, NO_SNOOPING) &
HINIC_DMA_ATTR_CLEAR(val, TPH_EN);
dma_attr_entry = HINIC_DMA_ATTR_SET(st, ST) |
HINIC_DMA_ATTR_SET(at, AT) |
HINIC_DMA_ATTR_SET(ph, PH) |
HINIC_DMA_ATTR_SET(no_snooping, NO_SNOOPING) |
HINIC_DMA_ATTR_SET(tph_en, TPH_EN);
val |= dma_attr_entry;
hinic_hwif_write_reg(hwif, addr, val);
}
/**
* dma_attr_init - initialize the default dma attributes
* @hwif: the HW interface of a pci function device
**/
static void dma_attr_init(struct hinic_hwif *hwif)
{
set_dma_attr(hwif, PCIE_ATTR_ENTRY, HINIC_PCIE_ST_DISABLE,
HINIC_PCIE_AT_DISABLE, HINIC_PCIE_PH_DISABLE,
HINIC_PCIE_SNOOP, HINIC_PCIE_TPH_DISABLE);
}
/**
* hinic_init_hwif - initialize the hw interface
* @hwif: the HW interface of a pci function device
* @pdev: the pci device for accessing PCI resources
*
* Return 0 - Success, negative - Failure
**/
int hinic_init_hwif(struct hinic_hwif *hwif, struct pci_dev *pdev)
{
int err;
hwif->pdev = pdev;
hwif->cfg_regs_bar = pci_ioremap_bar(pdev, HINIC_PCI_CFG_REGS_BAR);
if (!hwif->cfg_regs_bar) {
dev_err(&pdev->dev, "Failed to map configuration regs\n");
return -ENOMEM;
}
err = hwif_ready(hwif);
if (err) {
dev_err(&pdev->dev, "HW interface is not ready\n");
goto err_hwif_ready;
}
read_hwif_attr(hwif);
if (HINIC_IS_PF(hwif))
set_ppf(hwif);
/* No transactions before DMA is initialized */
dma_attr_init(hwif);
return 0;
err_hwif_ready:
iounmap(hwif->cfg_regs_bar);
return err;
}
/**
* hinic_free_hwif - free the HW interface
* @hwif: the HW interface of a pci function device
**/
void hinic_free_hwif(struct hinic_hwif *hwif)
{
iounmap(hwif->cfg_regs_bar);
}
/*
* Huawei HiNIC PCI Express Linux driver
* Copyright(c) 2017 Huawei Technologies Co., Ltd
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* for more details.
*
*/
#ifndef HINIC_HW_IF_H
#define HINIC_HW_IF_H
#include <linux/pci.h>
#include <linux/io.h>
#include <linux/types.h>
#include <asm/byteorder.h>
#define HINIC_DMA_ATTR_ST_SHIFT 0
#define HINIC_DMA_ATTR_AT_SHIFT 8
#define HINIC_DMA_ATTR_PH_SHIFT 10
#define HINIC_DMA_ATTR_NO_SNOOPING_SHIFT 12
#define HINIC_DMA_ATTR_TPH_EN_SHIFT 13
#define HINIC_DMA_ATTR_ST_MASK 0xFF
#define HINIC_DMA_ATTR_AT_MASK 0x3
#define HINIC_DMA_ATTR_PH_MASK 0x3
#define HINIC_DMA_ATTR_NO_SNOOPING_MASK 0x1
#define HINIC_DMA_ATTR_TPH_EN_MASK 0x1
#define HINIC_DMA_ATTR_SET(val, member) \
(((u32)(val) & HINIC_DMA_ATTR_##member##_MASK) << \
HINIC_DMA_ATTR_##member##_SHIFT)
#define HINIC_DMA_ATTR_CLEAR(val, member) \
((val) & (~(HINIC_DMA_ATTR_##member##_MASK \
<< HINIC_DMA_ATTR_##member##_SHIFT)))
#define HINIC_FA0_FUNC_IDX_SHIFT 0
#define HINIC_FA0_PF_IDX_SHIFT 10
#define HINIC_FA0_PCI_INTF_IDX_SHIFT 14
/* reserved members - off 16 */
#define HINIC_FA0_FUNC_TYPE_SHIFT 24
#define HINIC_FA0_FUNC_IDX_MASK 0x3FF
#define HINIC_FA0_PF_IDX_MASK 0xF
#define HINIC_FA0_PCI_INTF_IDX_MASK 0x3
#define HINIC_FA0_FUNC_TYPE_MASK 0x1
#define HINIC_FA0_GET(val, member) \
(((val) >> HINIC_FA0_##member##_SHIFT) & HINIC_FA0_##member##_MASK)
#define HINIC_FA1_AEQS_PER_FUNC_SHIFT 8
/* reserved members - off 10 */
#define HINIC_FA1_CEQS_PER_FUNC_SHIFT 12
/* reserved members - off 15 */
#define HINIC_FA1_IRQS_PER_FUNC_SHIFT 20
#define HINIC_FA1_DMA_ATTR_PER_FUNC_SHIFT 24
/* reserved members - off 27 */
#define HINIC_FA1_INIT_STATUS_SHIFT 30
#define HINIC_FA1_AEQS_PER_FUNC_MASK 0x3
#define HINIC_FA1_CEQS_PER_FUNC_MASK 0x7
#define HINIC_FA1_IRQS_PER_FUNC_MASK 0xF
#define HINIC_FA1_DMA_ATTR_PER_FUNC_MASK 0x7
#define HINIC_FA1_INIT_STATUS_MASK 0x1
#define HINIC_FA1_GET(val, member) \
(((val) >> HINIC_FA1_##member##_SHIFT) & HINIC_FA1_##member##_MASK)
#define HINIC_FA4_OUTBOUND_STATE_SHIFT 0
#define HINIC_FA4_DB_STATE_SHIFT 1
#define HINIC_FA4_OUTBOUND_STATE_MASK 0x1
#define HINIC_FA4_DB_STATE_MASK 0x1
#define HINIC_FA4_GET(val, member) \
(((val) >> HINIC_FA4_##member##_SHIFT) & HINIC_FA4_##member##_MASK)
#define HINIC_FA4_SET(val, member) \
(((u32)(val) & HINIC_FA4_##member##_MASK) << HINIC_FA4_##member##_SHIFT)
#define HINIC_FA4_CLEAR(val, member) \
((val) & (~(HINIC_FA4_##member##_MASK << HINIC_FA4_##member##_SHIFT)))
#define HINIC_FA5_PF_ACTION_SHIFT 0
#define HINIC_FA5_PF_ACTION_MASK 0xFFFF
#define HINIC_FA5_SET(val, member) \
(((u32)(val) & HINIC_FA5_##member##_MASK) << HINIC_FA5_##member##_SHIFT)
#define HINIC_FA5_CLEAR(val, member) \
((val) & (~(HINIC_FA5_##member##_MASK << HINIC_FA5_##member##_SHIFT)))
#define HINIC_PPF_ELECTION_IDX_SHIFT 0
#define HINIC_PPF_ELECTION_IDX_MASK 0x1F
#define HINIC_PPF_ELECTION_SET(val, member) \
(((u32)(val) & HINIC_PPF_ELECTION_##member##_MASK) << \
HINIC_PPF_ELECTION_##member##_SHIFT)
#define HINIC_PPF_ELECTION_GET(val, member) \
(((val) >> HINIC_PPF_ELECTION_##member##_SHIFT) & \
HINIC_PPF_ELECTION_##member##_MASK)
#define HINIC_PPF_ELECTION_CLEAR(val, member) \
((val) & (~(HINIC_PPF_ELECTION_##member##_MASK \
<< HINIC_PPF_ELECTION_##member##_SHIFT)))
#define HINIC_MSIX_PENDING_LIMIT_SHIFT 0
#define HINIC_MSIX_COALESC_TIMER_SHIFT 8
#define HINIC_MSIX_LLI_TIMER_SHIFT 16
#define HINIC_MSIX_LLI_CREDIT_SHIFT 24
#define HINIC_MSIX_RESEND_TIMER_SHIFT 29
#define HINIC_MSIX_PENDING_LIMIT_MASK 0xFF
#define HINIC_MSIX_COALESC_TIMER_MASK 0xFF
#define HINIC_MSIX_LLI_TIMER_MASK 0xFF
#define HINIC_MSIX_LLI_CREDIT_MASK 0x1F
#define HINIC_MSIX_RESEND_TIMER_MASK 0x7
#define HINIC_MSIX_ATTR_SET(val, member) \
(((u32)(val) & HINIC_MSIX_##member##_MASK) << \
HINIC_MSIX_##member##_SHIFT)
#define HINIC_MSIX_ATTR_GET(val, member) \
(((val) >> HINIC_MSIX_##member##_SHIFT) & \
HINIC_MSIX_##member##_MASK)
#define HINIC_MSIX_CNT_RESEND_TIMER_SHIFT 29
#define HINIC_MSIX_CNT_RESEND_TIMER_MASK 0x1
#define HINIC_MSIX_CNT_SET(val, member) \
(((u32)(val) & HINIC_MSIX_CNT_##member##_MASK) << \
HINIC_MSIX_CNT_##member##_SHIFT)
#define HINIC_HWIF_NUM_AEQS(hwif) ((hwif)->attr.num_aeqs)
#define HINIC_HWIF_NUM_CEQS(hwif) ((hwif)->attr.num_ceqs)
#define HINIC_HWIF_NUM_IRQS(hwif) ((hwif)->attr.num_irqs)
#define HINIC_HWIF_FUNC_IDX(hwif) ((hwif)->attr.func_idx)
#define HINIC_HWIF_PCI_INTF(hwif) ((hwif)->attr.pci_intf_idx)
#define HINIC_HWIF_PF_IDX(hwif) ((hwif)->attr.pf_idx)
#define HINIC_FUNC_TYPE(hwif) ((hwif)->attr.func_type)
#define HINIC_IS_PF(hwif) (HINIC_FUNC_TYPE(hwif) == HINIC_PF)
#define HINIC_IS_PPF(hwif) (HINIC_FUNC_TYPE(hwif) == HINIC_PPF)
#define HINIC_PCI_CFG_REGS_BAR 0
#define HINIC_PCI_DB_BAR 4
#define HINIC_PCIE_ST_DISABLE 0
#define HINIC_PCIE_AT_DISABLE 0
#define HINIC_PCIE_PH_DISABLE 0
#define HINIC_EQ_MSIX_PENDING_LIMIT_DEFAULT 0 /* Disabled */
#define HINIC_EQ_MSIX_COALESC_TIMER_DEFAULT 0xFF /* max */
#define HINIC_EQ_MSIX_LLI_TIMER_DEFAULT 0 /* Disabled */
#define HINIC_EQ_MSIX_LLI_CREDIT_LIMIT_DEFAULT 0 /* Disabled */
#define HINIC_EQ_MSIX_RESEND_TIMER_DEFAULT 7 /* max */
enum hinic_pcie_nosnoop {
HINIC_PCIE_SNOOP = 0,
HINIC_PCIE_NO_SNOOP = 1,
};
enum hinic_pcie_tph {
HINIC_PCIE_TPH_DISABLE = 0,
HINIC_PCIE_TPH_ENABLE = 1,
};
enum hinic_func_type {
HINIC_PF = 0,
HINIC_PPF = 2,
};
enum hinic_mod_type {
HINIC_MOD_COMM = 0, /* HW communication module */
HINIC_MOD_L2NIC = 1, /* L2NIC module */
HINIC_MOD_CFGM = 7, /* Configuration module */
HINIC_MOD_MAX = 15
};
enum hinic_node_id {
HINIC_NODE_ID_MGMT = 21,
};
enum hinic_pf_action {
HINIC_PF_MGMT_INIT = 0x0,
HINIC_PF_MGMT_ACTIVE = 0x11,
};
enum hinic_outbound_state {
HINIC_OUTBOUND_ENABLE = 0,
HINIC_OUTBOUND_DISABLE = 1,
};
enum hinic_db_state {
HINIC_DB_ENABLE = 0,
HINIC_DB_DISABLE = 1,
};
struct hinic_func_attr {
u16 func_idx;
u8 pf_idx;
u8 pci_intf_idx;
enum hinic_func_type func_type;
u8 ppf_idx;
u16 num_irqs;
u8 num_aeqs;
u8 num_ceqs;
u8 num_dma_attr;
};
struct hinic_hwif {
struct pci_dev *pdev;
void __iomem *cfg_regs_bar;
struct hinic_func_attr attr;
};
static inline u32 hinic_hwif_read_reg(struct hinic_hwif *hwif, u32 reg)
{
return be32_to_cpu(readl(hwif->cfg_regs_bar + reg));
}
static inline void hinic_hwif_write_reg(struct hinic_hwif *hwif, u32 reg,
u32 val)
{
writel(cpu_to_be32(val), hwif->cfg_regs_bar + reg);
}
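/*
 * Note that both accessors byte-swap with be32_to_cpu()/cpu_to_be32():
 * the configuration registers hold big-endian values, so on a
 * little-endian host every readl()/writel() result is swapped and all
 * callers work with plain CPU-order values.
 */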
int hinic_msix_attr_set(struct hinic_hwif *hwif, u16 msix_index,
u8 pending_limit, u8 coalesc_timer,
u8 lli_timer_cfg, u8 lli_credit_limit,
u8 resend_timer);
int hinic_msix_attr_get(struct hinic_hwif *hwif, u16 msix_index,
u8 *pending_limit, u8 *coalesc_timer_cfg,
u8 *lli_timer, u8 *lli_credit_limit,
u8 *resend_timer);
int hinic_msix_attr_cnt_clear(struct hinic_hwif *hwif, u16 msix_index);
void hinic_set_pf_action(struct hinic_hwif *hwif, enum hinic_pf_action action);
enum hinic_outbound_state hinic_outbound_state_get(struct hinic_hwif *hwif);
void hinic_outbound_state_set(struct hinic_hwif *hwif,
enum hinic_outbound_state outbound_state);
enum hinic_db_state hinic_db_state_get(struct hinic_hwif *hwif);
void hinic_db_state_set(struct hinic_hwif *hwif,
enum hinic_db_state db_state);
int hinic_init_hwif(struct hinic_hwif *hwif, struct pci_dev *pdev);
void hinic_free_hwif(struct hinic_hwif *hwif);
#endif
/*
* Huawei HiNIC PCI Express Linux driver
* Copyright(c) 2017 Huawei Technologies Co., Ltd
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* for more details.
*
*/
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/pci.h>
#include <linux/device.h>
#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/semaphore.h>
#include <linux/dma-mapping.h>
#include <linux/io.h>
#include <linux/err.h>
#include "hinic_hw_if.h"
#include "hinic_hw_eqs.h"
#include "hinic_hw_wqe.h"
#include "hinic_hw_wq.h"
#include "hinic_hw_cmdq.h"
#include "hinic_hw_qp_ctxt.h"
#include "hinic_hw_qp.h"
#include "hinic_hw_io.h"
#define CI_Q_ADDR_SIZE sizeof(u32)
#define CI_ADDR(base_addr, q_id) ((base_addr) + \
(q_id) * CI_Q_ADDR_SIZE)
#define CI_TABLE_SIZE(num_qps) ((num_qps) * CI_Q_ADDR_SIZE)
#define DB_IDX(db, db_base) \
(((unsigned long)(db) - (unsigned long)(db_base)) / HINIC_DB_PAGE_SIZE)
enum io_cmd {
IO_CMD_MODIFY_QUEUE_CTXT = 0,
};
static void init_db_area_idx(struct hinic_free_db_area *free_db_area)
{
int i;
for (i = 0; i < HINIC_DB_MAX_AREAS; i++)
free_db_area->db_idx[i] = i;
free_db_area->alloc_pos = 0;
free_db_area->return_pos = HINIC_DB_MAX_AREAS;
free_db_area->num_free = HINIC_DB_MAX_AREAS;
sema_init(&free_db_area->idx_lock, 1);
}
static void __iomem *get_db_area(struct hinic_func_to_io *func_to_io)
{
struct hinic_free_db_area *free_db_area = &func_to_io->free_db_area;
int pos, idx;
down(&free_db_area->idx_lock);
free_db_area->num_free--;
if (free_db_area->num_free < 0) {
free_db_area->num_free++;
up(&free_db_area->idx_lock);
return ERR_PTR(-ENOMEM);
}
pos = free_db_area->alloc_pos++;
pos &= HINIC_DB_MAX_AREAS - 1;
idx = free_db_area->db_idx[pos];
free_db_area->db_idx[pos] = -1;
up(&free_db_area->idx_lock);
return func_to_io->db_base + idx * HINIC_DB_PAGE_SIZE;
}
static void return_db_area(struct hinic_func_to_io *func_to_io,
void __iomem *db_base)
{
struct hinic_free_db_area *free_db_area = &func_to_io->free_db_area;
int pos, idx = DB_IDX(db_base, func_to_io->db_base);
down(&free_db_area->idx_lock);
pos = free_db_area->return_pos++;
pos &= HINIC_DB_MAX_AREAS - 1;
free_db_area->db_idx[pos] = idx;
free_db_area->num_free++;
up(&free_db_area->idx_lock);
}
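/*
 * The two functions above implement a ring of free doorbell page
 * indices.  With HINIC_DB_MAX_AREAS = 4 MB / 4 KB = 1024 (a power of
 * two), alloc_pos and return_pos can grow monotonically and are masked
 * with HINIC_DB_MAX_AREAS - 1 on each use.  For example, the first
 * get_db_area() call takes db_idx[0], marks that slot -1 and returns
 * db_base; returning that area later stores index 0 at return_pos
 * (initially 1024, i.e. slot 0 after masking) for reuse.
 */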
static int write_sq_ctxts(struct hinic_func_to_io *func_to_io, u16 base_qpn,
u16 num_sqs)
{
struct hinic_hwif *hwif = func_to_io->hwif;
struct hinic_sq_ctxt_block *sq_ctxt_block;
struct pci_dev *pdev = hwif->pdev;
struct hinic_cmdq_buf cmdq_buf;
struct hinic_sq_ctxt *sq_ctxt;
struct hinic_qp *qp;
u64 out_param;
int err, i;
err = hinic_alloc_cmdq_buf(&func_to_io->cmdqs, &cmdq_buf);
if (err) {
dev_err(&pdev->dev, "Failed to allocate cmdq buf\n");
return err;
}
sq_ctxt_block = cmdq_buf.buf;
sq_ctxt = sq_ctxt_block->sq_ctxt;
hinic_qp_prepare_header(&sq_ctxt_block->hdr, HINIC_QP_CTXT_TYPE_SQ,
num_sqs, func_to_io->max_qps);
for (i = 0; i < num_sqs; i++) {
qp = &func_to_io->qps[i];
hinic_sq_prepare_ctxt(&sq_ctxt[i], &qp->sq,
base_qpn + qp->q_id);
}
cmdq_buf.size = HINIC_SQ_CTXT_SIZE(num_sqs);
err = hinic_cmdq_direct_resp(&func_to_io->cmdqs, HINIC_MOD_L2NIC,
IO_CMD_MODIFY_QUEUE_CTXT, &cmdq_buf,
&out_param);
if ((err) || (out_param != 0)) {
dev_err(&pdev->dev, "Failed to set SQ ctxts\n");
err = -EFAULT;
}
hinic_free_cmdq_buf(&func_to_io->cmdqs, &cmdq_buf);
return err;
}
static int write_rq_ctxts(struct hinic_func_to_io *func_to_io, u16 base_qpn,
u16 num_rqs)
{
struct hinic_hwif *hwif = func_to_io->hwif;
struct hinic_rq_ctxt_block *rq_ctxt_block;
struct pci_dev *pdev = hwif->pdev;
struct hinic_cmdq_buf cmdq_buf;
struct hinic_rq_ctxt *rq_ctxt;
struct hinic_qp *qp;
u64 out_param;
int err, i;
err = hinic_alloc_cmdq_buf(&func_to_io->cmdqs, &cmdq_buf);
if (err) {
dev_err(&pdev->dev, "Failed to allocate cmdq buf\n");
return err;
}
rq_ctxt_block = cmdq_buf.buf;
rq_ctxt = rq_ctxt_block->rq_ctxt;
hinic_qp_prepare_header(&rq_ctxt_block->hdr, HINIC_QP_CTXT_TYPE_RQ,
num_rqs, func_to_io->max_qps);
for (i = 0; i < num_rqs; i++) {
qp = &func_to_io->qps[i];
hinic_rq_prepare_ctxt(&rq_ctxt[i], &qp->rq,
base_qpn + qp->q_id);
}
cmdq_buf.size = HINIC_RQ_CTXT_SIZE(num_rqs);
err = hinic_cmdq_direct_resp(&func_to_io->cmdqs, HINIC_MOD_L2NIC,
IO_CMD_MODIFY_QUEUE_CTXT, &cmdq_buf,
&out_param);
if ((err) || (out_param != 0)) {
dev_err(&pdev->dev, "Failed to set RQ ctxts\n");
err = -EFAULT;
}
hinic_free_cmdq_buf(&func_to_io->cmdqs, &cmdq_buf);
return err;
}
/**
* write_qp_ctxts - write the qp ctxt to HW
* @func_to_io: func to io channel that holds the IO components
* @base_qpn: first qp number
* @num_qps: number of qps to write
*
* Return 0 - Success, negative - Failure
**/
static int write_qp_ctxts(struct hinic_func_to_io *func_to_io, u16 base_qpn,
u16 num_qps)
{
return (write_sq_ctxts(func_to_io, base_qpn, num_qps) ||
write_rq_ctxts(func_to_io, base_qpn, num_qps));
}
/**
* init_qp - Initialize a Queue Pair
* @func_to_io: func to io channel that holds the IO components
* @qp: pointer to the qp to initialize
* @q_id: the id of the qp
* @sq_msix_entry: msix entry for sq
* @rq_msix_entry: msix entry for rq
*
* Return 0 - Success, negative - Failure
**/
static int init_qp(struct hinic_func_to_io *func_to_io,
struct hinic_qp *qp, int q_id,
struct msix_entry *sq_msix_entry,
struct msix_entry *rq_msix_entry)
{
struct hinic_hwif *hwif = func_to_io->hwif;
struct pci_dev *pdev = hwif->pdev;
void __iomem *db_base;
int err;
qp->q_id = q_id;
err = hinic_wq_allocate(&func_to_io->wqs, &func_to_io->sq_wq[q_id],
HINIC_SQ_WQEBB_SIZE, HINIC_SQ_PAGE_SIZE,
HINIC_SQ_DEPTH, HINIC_SQ_WQE_MAX_SIZE);
if (err) {
dev_err(&pdev->dev, "Failed to allocate WQ for SQ\n");
return err;
}
err = hinic_wq_allocate(&func_to_io->wqs, &func_to_io->rq_wq[q_id],
HINIC_RQ_WQEBB_SIZE, HINIC_RQ_PAGE_SIZE,
HINIC_RQ_DEPTH, HINIC_RQ_WQE_SIZE);
if (err) {
dev_err(&pdev->dev, "Failed to allocate WQ for RQ\n");
goto err_rq_alloc;
}
db_base = get_db_area(func_to_io);
if (IS_ERR(db_base)) {
dev_err(&pdev->dev, "Failed to get DB area for SQ\n");
err = PTR_ERR(db_base);
goto err_get_db;
}
func_to_io->sq_db[q_id] = db_base;
err = hinic_init_sq(&qp->sq, hwif, &func_to_io->sq_wq[q_id],
sq_msix_entry,
CI_ADDR(func_to_io->ci_addr_base, q_id),
CI_ADDR(func_to_io->ci_dma_base, q_id), db_base);
if (err) {
dev_err(&pdev->dev, "Failed to init SQ\n");
goto err_sq_init;
}
err = hinic_init_rq(&qp->rq, hwif, &func_to_io->rq_wq[q_id],
rq_msix_entry);
if (err) {
dev_err(&pdev->dev, "Failed to init RQ\n");
goto err_rq_init;
}
return 0;
err_rq_init:
hinic_clean_sq(&qp->sq);
err_sq_init:
return_db_area(func_to_io, db_base);
err_get_db:
hinic_wq_free(&func_to_io->wqs, &func_to_io->rq_wq[q_id]);
err_rq_alloc:
hinic_wq_free(&func_to_io->wqs, &func_to_io->sq_wq[q_id]);
return err;
}
/**
* destroy_qp - Clean the resources of a Queue Pair
* @func_to_io: func to io channel that holds the IO components
* @qp: pointer to the qp to clean
**/
static void destroy_qp(struct hinic_func_to_io *func_to_io,
struct hinic_qp *qp)
{
int q_id = qp->q_id;
hinic_clean_rq(&qp->rq);
hinic_clean_sq(&qp->sq);
return_db_area(func_to_io, func_to_io->sq_db[q_id]);
hinic_wq_free(&func_to_io->wqs, &func_to_io->rq_wq[q_id]);
hinic_wq_free(&func_to_io->wqs, &func_to_io->sq_wq[q_id]);
}
/**
* hinic_io_create_qps - Create Queue Pairs
* @func_to_io: func to io channel that holds the IO components
* @base_qpn: base qp number
* @num_qps: number of queue pairs to create
* @sq_msix_entries: msix entries for the sqs
* @rq_msix_entries: msix entries for the rqs
*
* Return 0 - Success, negative - Failure
**/
int hinic_io_create_qps(struct hinic_func_to_io *func_to_io,
u16 base_qpn, int num_qps,
struct msix_entry *sq_msix_entries,
struct msix_entry *rq_msix_entries)
{
struct hinic_hwif *hwif = func_to_io->hwif;
struct pci_dev *pdev = hwif->pdev;
size_t qps_size, wq_size, db_size;
void *ci_addr_base;
int i, j, err;
qps_size = num_qps * sizeof(*func_to_io->qps);
func_to_io->qps = devm_kzalloc(&pdev->dev, qps_size, GFP_KERNEL);
if (!func_to_io->qps)
return -ENOMEM;
wq_size = num_qps * sizeof(*func_to_io->sq_wq);
func_to_io->sq_wq = devm_kzalloc(&pdev->dev, wq_size, GFP_KERNEL);
if (!func_to_io->sq_wq) {
err = -ENOMEM;
goto err_sq_wq;
}
wq_size = num_qps * sizeof(*func_to_io->rq_wq);
func_to_io->rq_wq = devm_kzalloc(&pdev->dev, wq_size, GFP_KERNEL);
if (!func_to_io->rq_wq) {
err = -ENOMEM;
goto err_rq_wq;
}
db_size = num_qps * sizeof(*func_to_io->sq_db);
func_to_io->sq_db = devm_kzalloc(&pdev->dev, db_size, GFP_KERNEL);
if (!func_to_io->sq_db) {
err = -ENOMEM;
goto err_sq_db;
}
ci_addr_base = dma_zalloc_coherent(&pdev->dev, CI_TABLE_SIZE(num_qps),
&func_to_io->ci_dma_base,
GFP_KERNEL);
if (!ci_addr_base) {
dev_err(&pdev->dev, "Failed to allocate CI area\n");
err = -ENOMEM;
goto err_ci_base;
}
func_to_io->ci_addr_base = ci_addr_base;
for (i = 0; i < num_qps; i++) {
err = init_qp(func_to_io, &func_to_io->qps[i], i,
&sq_msix_entries[i], &rq_msix_entries[i]);
if (err) {
dev_err(&pdev->dev, "Failed to create QP %d\n", i);
goto err_init_qp;
}
}
err = write_qp_ctxts(func_to_io, base_qpn, num_qps);
if (err) {
dev_err(&pdev->dev, "Failed to init QP ctxts\n");
goto err_write_qp_ctxts;
}
return 0;
err_write_qp_ctxts:
err_init_qp:
for (j = 0; j < i; j++)
destroy_qp(func_to_io, &func_to_io->qps[j]);
dma_free_coherent(&pdev->dev, CI_TABLE_SIZE(num_qps),
func_to_io->ci_addr_base, func_to_io->ci_dma_base);
err_ci_base:
devm_kfree(&pdev->dev, func_to_io->sq_db);
err_sq_db:
devm_kfree(&pdev->dev, func_to_io->rq_wq);
err_rq_wq:
devm_kfree(&pdev->dev, func_to_io->sq_wq);
err_sq_wq:
devm_kfree(&pdev->dev, func_to_io->qps);
return err;
}
/**
* hinic_io_destroy_qps - Destroy the IO Queue Pairs
* @func_to_io: func to io channel that holds the IO components
* @num_qps: number of queue pairs to destroy
**/
void hinic_io_destroy_qps(struct hinic_func_to_io *func_to_io, int num_qps)
{
struct hinic_hwif *hwif = func_to_io->hwif;
struct pci_dev *pdev = hwif->pdev;
size_t ci_table_size;
int i;
ci_table_size = CI_TABLE_SIZE(num_qps);
for (i = 0; i < num_qps; i++)
destroy_qp(func_to_io, &func_to_io->qps[i]);
dma_free_coherent(&pdev->dev, ci_table_size, func_to_io->ci_addr_base,
func_to_io->ci_dma_base);
devm_kfree(&pdev->dev, func_to_io->sq_db);
devm_kfree(&pdev->dev, func_to_io->rq_wq);
devm_kfree(&pdev->dev, func_to_io->sq_wq);
devm_kfree(&pdev->dev, func_to_io->qps);
}
/**
* hinic_io_init - Initialize the IO components
* @func_to_io: func to io channel that holds the IO components
* @hwif: HW interface for accessing IO
* @max_qps: maximum QPs in HW
* @num_ceqs: number of completion event queues
* @ceq_msix_entries: msix entries for ceqs
*
* Return 0 - Success, negative - Failure
**/
int hinic_io_init(struct hinic_func_to_io *func_to_io,
struct hinic_hwif *hwif, u16 max_qps, int num_ceqs,
struct msix_entry *ceq_msix_entries)
{
struct pci_dev *pdev = hwif->pdev;
enum hinic_cmdq_type cmdq, type;
void __iomem *db_area;
int err;
func_to_io->hwif = hwif;
func_to_io->qps = NULL;
func_to_io->max_qps = max_qps;
err = hinic_ceqs_init(&func_to_io->ceqs, hwif, num_ceqs,
HINIC_DEFAULT_CEQ_LEN, HINIC_EQ_PAGE_SIZE,
ceq_msix_entries);
if (err) {
dev_err(&pdev->dev, "Failed to init CEQs\n");
return err;
}
err = hinic_wqs_alloc(&func_to_io->wqs, 2 * max_qps, hwif);
if (err) {
dev_err(&pdev->dev, "Failed to allocate WQS for IO\n");
goto err_wqs_alloc;
}
func_to_io->db_base = pci_ioremap_bar(pdev, HINIC_PCI_DB_BAR);
if (!func_to_io->db_base) {
dev_err(&pdev->dev, "Failed to remap IO DB area\n");
err = -ENOMEM;
goto err_db_ioremap;
}
init_db_area_idx(&func_to_io->free_db_area);
for (cmdq = HINIC_CMDQ_SYNC; cmdq < HINIC_MAX_CMDQ_TYPES; cmdq++) {
db_area = get_db_area(func_to_io);
if (IS_ERR(db_area)) {
dev_err(&pdev->dev, "Failed to get cmdq db area\n");
err = PTR_ERR(db_area);
goto err_db_area;
}
func_to_io->cmdq_db_area[cmdq] = db_area;
}
err = hinic_init_cmdqs(&func_to_io->cmdqs, hwif,
func_to_io->cmdq_db_area);
if (err) {
dev_err(&pdev->dev, "Failed to initialize cmdqs\n");
goto err_init_cmdqs;
}
return 0;
err_init_cmdqs:
err_db_area:
for (type = HINIC_CMDQ_SYNC; type < cmdq; type++)
return_db_area(func_to_io, func_to_io->cmdq_db_area[type]);
iounmap(func_to_io->db_base);
err_db_ioremap:
hinic_wqs_free(&func_to_io->wqs);
err_wqs_alloc:
hinic_ceqs_free(&func_to_io->ceqs);
return err;
}
/**
* hinic_io_free - Free the IO components
* @func_to_io: func to io channel that holds the IO components
**/
void hinic_io_free(struct hinic_func_to_io *func_to_io)
{
enum hinic_cmdq_type cmdq;
hinic_free_cmdqs(&func_to_io->cmdqs);
for (cmdq = HINIC_CMDQ_SYNC; cmdq < HINIC_MAX_CMDQ_TYPES; cmdq++)
return_db_area(func_to_io, func_to_io->cmdq_db_area[cmdq]);
iounmap(func_to_io->db_base);
hinic_wqs_free(&func_to_io->wqs);
hinic_ceqs_free(&func_to_io->ceqs);
}
/*
* Huawei HiNIC PCI Express Linux driver
* Copyright(c) 2017 Huawei Technologies Co., Ltd
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* for more details.
*
*/
#ifndef HINIC_HW_IO_H
#define HINIC_HW_IO_H
#include <linux/types.h>
#include <linux/pci.h>
#include <linux/semaphore.h>
#include <linux/sizes.h>
#include "hinic_hw_if.h"
#include "hinic_hw_eqs.h"
#include "hinic_hw_wq.h"
#include "hinic_hw_cmdq.h"
#include "hinic_hw_qp.h"
#define HINIC_DB_PAGE_SIZE SZ_4K
#define HINIC_DB_SIZE SZ_4M
#define HINIC_DB_MAX_AREAS (HINIC_DB_SIZE / HINIC_DB_PAGE_SIZE)
enum hinic_db_type {
HINIC_DB_CMDQ_TYPE,
HINIC_DB_SQ_TYPE,
};
enum hinic_io_path {
HINIC_CTRL_PATH,
HINIC_DATA_PATH,
};
struct hinic_free_db_area {
int db_idx[HINIC_DB_MAX_AREAS];
int alloc_pos;
int return_pos;
int num_free;
/* Lock for getting db area */
struct semaphore idx_lock;
};
struct hinic_func_to_io {
struct hinic_hwif *hwif;
struct hinic_ceqs ceqs;
struct hinic_wqs wqs;
struct hinic_wq *sq_wq;
struct hinic_wq *rq_wq;
struct hinic_qp *qps;
u16 max_qps;
void __iomem **sq_db;
void __iomem *db_base;
void *ci_addr_base;
dma_addr_t ci_dma_base;
struct hinic_free_db_area free_db_area;
void __iomem *cmdq_db_area[HINIC_MAX_CMDQ_TYPES];
struct hinic_cmdqs cmdqs;
};
int hinic_io_create_qps(struct hinic_func_to_io *func_to_io,
u16 base_qpn, int num_qps,
struct msix_entry *sq_msix_entries,
struct msix_entry *rq_msix_entries);
void hinic_io_destroy_qps(struct hinic_func_to_io *func_to_io,
int num_qps);
int hinic_io_init(struct hinic_func_to_io *func_to_io,
struct hinic_hwif *hwif, u16 max_qps, int num_ceqs,
struct msix_entry *ceq_msix_entries);
void hinic_io_free(struct hinic_func_to_io *func_to_io);
#endif
/*
* Huawei HiNIC PCI Express Linux driver
* Copyright(c) 2017 Huawei Technologies Co., Ltd
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* for more details.
*
*/
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/errno.h>
#include <linux/pci.h>
#include <linux/device.h>
#include <linux/semaphore.h>
#include <linux/completion.h>
#include <linux/slab.h>
#include <asm/barrier.h>
#include "hinic_hw_if.h"
#include "hinic_hw_eqs.h"
#include "hinic_hw_api_cmd.h"
#include "hinic_hw_mgmt.h"
#include "hinic_hw_dev.h"
#define SYNC_MSG_ID_MASK 0x1FF
#define SYNC_MSG_ID(pf_to_mgmt) ((pf_to_mgmt)->sync_msg_id)
#define SYNC_MSG_ID_INC(pf_to_mgmt) (SYNC_MSG_ID(pf_to_mgmt) = \
((SYNC_MSG_ID(pf_to_mgmt) + 1) & \
SYNC_MSG_ID_MASK))
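/* The sync message id is 9 bits wide: ids run 0..511 and then wrap,
 * e.g. SYNC_MSG_ID_INC() takes 511 back to 0.
 */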
#define MSG_SZ_IS_VALID(in_size) ((in_size) <= MAX_MSG_LEN)
#define MGMT_MSG_LEN_MIN 20
#define MGMT_MSG_LEN_STEP 16
#define MGMT_MSG_RSVD_FOR_DEV 8
#define SEGMENT_LEN 48
#define MAX_PF_MGMT_BUF_SIZE 2048
/* Data should be SEGMENT_LEN aligned */
#define MAX_MSG_LEN 2016
#define MSG_NOT_RESP 0xFFFF
#define MGMT_MSG_TIMEOUT 1000 /* ms */
#define mgmt_to_pfhwdev(pf_mgmt) \
container_of(pf_mgmt, struct hinic_pfhwdev, pf_to_mgmt)
enum msg_segment_type {
NOT_LAST_SEGMENT = 0,
LAST_SEGMENT = 1,
};
enum mgmt_direction_type {
MGMT_DIRECT_SEND = 0,
MGMT_RESP = 1,
};
enum msg_ack_type {
MSG_ACK = 0,
MSG_NO_ACK = 1,
};
/**
* hinic_register_mgmt_msg_cb - register msg handler for a msg from a module
* @pf_to_mgmt: PF to MGMT channel
* @mod: module in the chip whose messages this handler will handle
* @handle: private data for the callback
* @callback: the handler that will handle messages
**/
void hinic_register_mgmt_msg_cb(struct hinic_pf_to_mgmt *pf_to_mgmt,
enum hinic_mod_type mod,
void *handle,
void (*callback)(void *handle,
u8 cmd, void *buf_in,
u16 in_size, void *buf_out,
u16 *out_size))
{
struct hinic_mgmt_cb *mgmt_cb = &pf_to_mgmt->mgmt_cb[mod];
mgmt_cb->cb = callback;
mgmt_cb->handle = handle;
mgmt_cb->state = HINIC_MGMT_CB_ENABLED;
}
/**
* hinic_unregister_mgmt_msg_cb - unregister msg handler for a msg from a module
* @pf_to_mgmt: PF to MGMT channel
* @mod: module in the chip whose messages this handler handles
**/
void hinic_unregister_mgmt_msg_cb(struct hinic_pf_to_mgmt *pf_to_mgmt,
enum hinic_mod_type mod)
{
struct hinic_mgmt_cb *mgmt_cb = &pf_to_mgmt->mgmt_cb[mod];
mgmt_cb->state &= ~HINIC_MGMT_CB_ENABLED;
while (mgmt_cb->state & HINIC_MGMT_CB_RUNNING)
schedule();
mgmt_cb->cb = NULL;
}
/**
* prepare_header - prepare the header of the message
* @pf_to_mgmt: PF to MGMT channel
* @msg_len: the length of the message
* @mod: module in the chip that will get the message
* @ack_type: whether a response is requested for the message
* @direction: the direction of the message
* @cmd: command of the message
* @msg_id: message id
*
* Return the prepared header value
**/
static u64 prepare_header(struct hinic_pf_to_mgmt *pf_to_mgmt,
u16 msg_len, enum hinic_mod_type mod,
enum msg_ack_type ack_type,
enum mgmt_direction_type direction,
u16 cmd, u16 msg_id)
{
struct hinic_hwif *hwif = pf_to_mgmt->hwif;
return HINIC_MSG_HEADER_SET(msg_len, MSG_LEN) |
HINIC_MSG_HEADER_SET(mod, MODULE) |
HINIC_MSG_HEADER_SET(SEGMENT_LEN, SEG_LEN) |
HINIC_MSG_HEADER_SET(ack_type, NO_ACK) |
HINIC_MSG_HEADER_SET(0, ASYNC_MGMT_TO_PF) |
HINIC_MSG_HEADER_SET(0, SEQID) |
HINIC_MSG_HEADER_SET(LAST_SEGMENT, LAST) |
HINIC_MSG_HEADER_SET(direction, DIRECTION) |
HINIC_MSG_HEADER_SET(cmd, CMD) |
HINIC_MSG_HEADER_SET(HINIC_HWIF_PCI_INTF(hwif), PCI_INTF) |
HINIC_MSG_HEADER_SET(HINIC_HWIF_PF_IDX(hwif), PF_IDX) |
HINIC_MSG_HEADER_SET(msg_id, MSG_ID);
}
/**
* prepare_mgmt_cmd - prepare the mgmt command
* @mgmt_cmd: pointer to the command to prepare
* @header: pointer of the header for the message
* @msg: the data of the message
* @msg_len: the length of the message
**/
static void prepare_mgmt_cmd(u8 *mgmt_cmd, u64 *header, u8 *msg, u16 msg_len)
{
memset(mgmt_cmd, 0, MGMT_MSG_RSVD_FOR_DEV);
mgmt_cmd += MGMT_MSG_RSVD_FOR_DEV;
memcpy(mgmt_cmd, header, sizeof(*header));
mgmt_cmd += sizeof(*header);
memcpy(mgmt_cmd, msg, msg_len);
}
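/*
 * Resulting buffer layout (sizes from the defines above; the header is
 * a u64, i.e. 8 bytes):
 *
 *	[0  .. 7 ]		reserved for the device, zeroed
 *	[8  .. 15]		message header
 *	[16 .. 16 + msg_len - 1]	message data
 */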
/**
* mgmt_msg_len - calculate the total message length
* @msg_data_len: the length of the message data
*
* Return the total message length
**/
static u16 mgmt_msg_len(u16 msg_data_len)
{
/* RSVD + HEADER_SIZE + DATA_LEN */
u16 msg_len = MGMT_MSG_RSVD_FOR_DEV + sizeof(u64) + msg_data_len;
if (msg_len > MGMT_MSG_LEN_MIN)
msg_len = MGMT_MSG_LEN_MIN +
ALIGN((msg_len - MGMT_MSG_LEN_MIN),
MGMT_MSG_LEN_STEP);
else
msg_len = MGMT_MSG_LEN_MIN;
return msg_len;
}
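/*
 * Worked example for mgmt_msg_len(), assuming MGMT_MSG_RSVD_FOR_DEV = 8,
 * MGMT_MSG_LEN_MIN = 20 and MGMT_MSG_LEN_STEP = 16 (the constants are
 * defined outside this excerpt, so these values are stated as assumptions):
 *
 *	msg_data_len = 0: msg_len = 8 + 8 + 0 = 16 <= 20 -> 20
 *	msg_data_len = 5: msg_len = 8 + 8 + 5 = 21 >  20
 *			  -> 20 + ALIGN(21 - 20, 16) = 36
 *
 * Anything beyond the minimum is padded up in 16-byte steps.
 */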
/**
* send_msg_to_mgmt - send message to mgmt by API CMD
* @pf_to_mgmt: PF to MGMT channel
* @mod: module in the chip that will get the message
* @cmd: command of the message
* @data: the msg data
* @data_len: the msg data length
 * @ack_type: whether an ack is required for the message
* @direction: the direction of the original message
* @resp_msg_id: msg id to response for
*
* Return 0 - Success, negative - Failure
**/
static int send_msg_to_mgmt(struct hinic_pf_to_mgmt *pf_to_mgmt,
enum hinic_mod_type mod, u8 cmd,
u8 *data, u16 data_len,
enum msg_ack_type ack_type,
enum mgmt_direction_type direction,
u16 resp_msg_id)
{
struct hinic_api_cmd_chain *chain;
u64 header;
u16 msg_id;
msg_id = SYNC_MSG_ID(pf_to_mgmt);
if (direction == MGMT_RESP) {
header = prepare_header(pf_to_mgmt, data_len, mod, ack_type,
direction, cmd, resp_msg_id);
} else {
SYNC_MSG_ID_INC(pf_to_mgmt);
header = prepare_header(pf_to_mgmt, data_len, mod, ack_type,
direction, cmd, msg_id);
}
prepare_mgmt_cmd(pf_to_mgmt->sync_msg_buf, &header, data, data_len);
chain = pf_to_mgmt->cmd_chain[HINIC_API_CMD_WRITE_TO_MGMT_CPU];
return hinic_api_cmd_write(chain, HINIC_NODE_ID_MGMT,
pf_to_mgmt->sync_msg_buf,
mgmt_msg_len(data_len));
}
/**
* msg_to_mgmt_sync - send sync message to mgmt
* @pf_to_mgmt: PF to MGMT channel
* @mod: module in the chip that will get the message
* @cmd: command of the message
* @buf_in: the msg data
* @in_size: the msg data length
* @buf_out: response
* @out_size: response length
* @direction: the direction of the original message
* @resp_msg_id: msg id to response for
*
* Return 0 - Success, negative - Failure
**/
static int msg_to_mgmt_sync(struct hinic_pf_to_mgmt *pf_to_mgmt,
enum hinic_mod_type mod, u8 cmd,
u8 *buf_in, u16 in_size,
u8 *buf_out, u16 *out_size,
enum mgmt_direction_type direction,
u16 resp_msg_id)
{
struct hinic_hwif *hwif = pf_to_mgmt->hwif;
struct pci_dev *pdev = hwif->pdev;
struct hinic_recv_msg *recv_msg;
struct completion *recv_done;
u16 msg_id;
int err;
/* Lock the sync_msg_buf */
down(&pf_to_mgmt->sync_msg_lock);
recv_msg = &pf_to_mgmt->recv_resp_msg_from_mgmt;
recv_done = &recv_msg->recv_done;
if (resp_msg_id == MSG_NOT_RESP)
msg_id = SYNC_MSG_ID(pf_to_mgmt);
else
msg_id = resp_msg_id;
init_completion(recv_done);
err = send_msg_to_mgmt(pf_to_mgmt, mod, cmd, buf_in, in_size,
MSG_ACK, direction, resp_msg_id);
if (err) {
dev_err(&pdev->dev, "Failed to send sync msg to mgmt\n");
goto unlock_sync_msg;
}
	if (!wait_for_completion_timeout(recv_done,
					 msecs_to_jiffies(MGMT_MSG_TIMEOUT))) {
dev_err(&pdev->dev, "MGMT timeout, MSG id = %d\n", msg_id);
err = -ETIMEDOUT;
goto unlock_sync_msg;
}
	smp_rmb(); /* order the read of the response after the completion */
if (recv_msg->msg_id != msg_id) {
dev_err(&pdev->dev, "incorrect MSG for id = %d\n", msg_id);
err = -EFAULT;
goto unlock_sync_msg;
}
if ((buf_out) && (recv_msg->msg_len <= MAX_PF_MGMT_BUF_SIZE)) {
memcpy(buf_out, recv_msg->msg, recv_msg->msg_len);
*out_size = recv_msg->msg_len;
}
unlock_sync_msg:
up(&pf_to_mgmt->sync_msg_lock);
return err;
}
/**
* msg_to_mgmt_async - send message to mgmt without response
* @pf_to_mgmt: PF to MGMT channel
* @mod: module in the chip that will get the message
* @cmd: command of the message
* @buf_in: the msg data
* @in_size: the msg data length
* @direction: the direction of the original message
* @resp_msg_id: msg id to response for
*
* Return 0 - Success, negative - Failure
**/
static int msg_to_mgmt_async(struct hinic_pf_to_mgmt *pf_to_mgmt,
enum hinic_mod_type mod, u8 cmd,
u8 *buf_in, u16 in_size,
enum mgmt_direction_type direction,
u16 resp_msg_id)
{
int err;
/* Lock the sync_msg_buf */
down(&pf_to_mgmt->sync_msg_lock);
err = send_msg_to_mgmt(pf_to_mgmt, mod, cmd, buf_in, in_size,
MSG_NO_ACK, direction, resp_msg_id);
up(&pf_to_mgmt->sync_msg_lock);
return err;
}
/**
* hinic_msg_to_mgmt - send message to mgmt
* @pf_to_mgmt: PF to MGMT channel
* @mod: module in the chip that will get the message
* @cmd: command of the message
* @buf_in: the msg data
* @in_size: the msg data length
* @buf_out: response
* @out_size: returned response length
* @sync: sync msg or async msg
*
* Return 0 - Success, negative - Failure
**/
int hinic_msg_to_mgmt(struct hinic_pf_to_mgmt *pf_to_mgmt,
enum hinic_mod_type mod, u8 cmd,
void *buf_in, u16 in_size, void *buf_out, u16 *out_size,
enum hinic_mgmt_msg_type sync)
{
struct hinic_hwif *hwif = pf_to_mgmt->hwif;
struct pci_dev *pdev = hwif->pdev;
if (sync != HINIC_MGMT_MSG_SYNC) {
dev_err(&pdev->dev, "Invalid MGMT msg type\n");
return -EINVAL;
}
if (!MSG_SZ_IS_VALID(in_size)) {
dev_err(&pdev->dev, "Invalid MGMT msg buffer size\n");
return -EINVAL;
}
return msg_to_mgmt_sync(pf_to_mgmt, mod, cmd, buf_in, in_size,
buf_out, out_size, MGMT_DIRECT_SEND,
MSG_NOT_RESP);
}
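/*
 * Usage sketch (illustrative only, not part of this patch): issuing a
 * synchronous MGMT command that reuses one buffer for request and response,
 * as a capability query would. HINIC_MOD_CFGM is assumed to be an
 * enum hinic_mod_type value; HINIC_CFG_NIC_CAP is declared in
 * hinic_hw_mgmt.h below.
 */
static int example_query_nic_cap(struct hinic_pf_to_mgmt *pf_to_mgmt,
				 void *cap, u16 cap_size)
{
	u16 out_size = cap_size;

	return hinic_msg_to_mgmt(pf_to_mgmt, HINIC_MOD_CFGM,
				 HINIC_CFG_NIC_CAP, cap, cap_size,
				 cap, &out_size, HINIC_MGMT_MSG_SYNC);
}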
/**
* mgmt_recv_msg_handler - handler for message from mgmt cpu
* @pf_to_mgmt: PF to MGMT channel
* @recv_msg: received message details
**/
static void mgmt_recv_msg_handler(struct hinic_pf_to_mgmt *pf_to_mgmt,
struct hinic_recv_msg *recv_msg)
{
struct hinic_hwif *hwif = pf_to_mgmt->hwif;
struct pci_dev *pdev = hwif->pdev;
u8 *buf_out = recv_msg->buf_out;
struct hinic_mgmt_cb *mgmt_cb;
unsigned long cb_state;
u16 out_size = 0;
if (recv_msg->mod >= HINIC_MOD_MAX) {
dev_err(&pdev->dev, "Unknown MGMT MSG module = %d\n",
recv_msg->mod);
return;
}
mgmt_cb = &pf_to_mgmt->mgmt_cb[recv_msg->mod];
cb_state = cmpxchg(&mgmt_cb->state,
HINIC_MGMT_CB_ENABLED,
HINIC_MGMT_CB_ENABLED | HINIC_MGMT_CB_RUNNING);
if ((cb_state == HINIC_MGMT_CB_ENABLED) && (mgmt_cb->cb))
mgmt_cb->cb(mgmt_cb->handle, recv_msg->cmd,
recv_msg->msg, recv_msg->msg_len,
buf_out, &out_size);
else
dev_err(&pdev->dev, "No MGMT msg handler, mod = %d\n",
recv_msg->mod);
mgmt_cb->state &= ~HINIC_MGMT_CB_RUNNING;
if (!recv_msg->async_mgmt_to_pf)
/* MGMT sent sync msg, send the response */
msg_to_mgmt_async(pf_to_mgmt, recv_msg->mod, recv_msg->cmd,
buf_out, out_size, MGMT_RESP,
recv_msg->msg_id);
}
/**
* mgmt_resp_msg_handler - handler for a response message from mgmt cpu
* @pf_to_mgmt: PF to MGMT channel
* @recv_msg: received message details
**/
static void mgmt_resp_msg_handler(struct hinic_pf_to_mgmt *pf_to_mgmt,
struct hinic_recv_msg *recv_msg)
{
	wmb(); /* ensure the message is fully written before completing */
complete(&recv_msg->recv_done);
}
/**
* recv_mgmt_msg_handler - handler for a message from mgmt cpu
* @pf_to_mgmt: PF to MGMT channel
* @header: the header of the message
* @recv_msg: received message details
**/
static void recv_mgmt_msg_handler(struct hinic_pf_to_mgmt *pf_to_mgmt,
u64 *header, struct hinic_recv_msg *recv_msg)
{
struct hinic_hwif *hwif = pf_to_mgmt->hwif;
struct pci_dev *pdev = hwif->pdev;
int seq_id, seg_len;
u8 *msg_body;
seq_id = HINIC_MSG_HEADER_GET(*header, SEQID);
seg_len = HINIC_MSG_HEADER_GET(*header, SEG_LEN);
if (seq_id >= (MAX_MSG_LEN / SEGMENT_LEN)) {
dev_err(&pdev->dev, "recv big mgmt msg\n");
return;
}
msg_body = (u8 *)header + sizeof(*header);
memcpy(recv_msg->msg + seq_id * SEGMENT_LEN, msg_body, seg_len);
if (!HINIC_MSG_HEADER_GET(*header, LAST))
return;
recv_msg->cmd = HINIC_MSG_HEADER_GET(*header, CMD);
recv_msg->mod = HINIC_MSG_HEADER_GET(*header, MODULE);
recv_msg->async_mgmt_to_pf = HINIC_MSG_HEADER_GET(*header,
ASYNC_MGMT_TO_PF);
recv_msg->msg_len = HINIC_MSG_HEADER_GET(*header, MSG_LEN);
recv_msg->msg_id = HINIC_MSG_HEADER_GET(*header, MSG_ID);
if (HINIC_MSG_HEADER_GET(*header, DIRECTION) == MGMT_RESP)
mgmt_resp_msg_handler(pf_to_mgmt, recv_msg);
else
mgmt_recv_msg_handler(pf_to_mgmt, recv_msg);
}
/**
* mgmt_msg_aeqe_handler - handler for a mgmt message event
* @handle: PF to MGMT channel
* @data: the header of the message
* @size: unused
**/
static void mgmt_msg_aeqe_handler(void *handle, void *data, u8 size)
{
struct hinic_pf_to_mgmt *pf_to_mgmt = handle;
struct hinic_recv_msg *recv_msg;
u64 *header = (u64 *)data;
recv_msg = HINIC_MSG_HEADER_GET(*header, DIRECTION) ==
MGMT_DIRECT_SEND ?
&pf_to_mgmt->recv_msg_from_mgmt :
&pf_to_mgmt->recv_resp_msg_from_mgmt;
recv_mgmt_msg_handler(pf_to_mgmt, header, recv_msg);
}
/**
* alloc_recv_msg - allocate receive message memory
* @pf_to_mgmt: PF to MGMT channel
* @recv_msg: pointer that will hold the allocated data
*
* Return 0 - Success, negative - Failure
**/
static int alloc_recv_msg(struct hinic_pf_to_mgmt *pf_to_mgmt,
struct hinic_recv_msg *recv_msg)
{
struct hinic_hwif *hwif = pf_to_mgmt->hwif;
struct pci_dev *pdev = hwif->pdev;
recv_msg->msg = devm_kzalloc(&pdev->dev, MAX_PF_MGMT_BUF_SIZE,
GFP_KERNEL);
if (!recv_msg->msg)
return -ENOMEM;
recv_msg->buf_out = devm_kzalloc(&pdev->dev, MAX_PF_MGMT_BUF_SIZE,
GFP_KERNEL);
if (!recv_msg->buf_out)
return -ENOMEM;
return 0;
}
/**
* alloc_msg_buf - allocate all the message buffers of PF to MGMT channel
* @pf_to_mgmt: PF to MGMT channel
*
* Return 0 - Success, negative - Failure
**/
static int alloc_msg_buf(struct hinic_pf_to_mgmt *pf_to_mgmt)
{
struct hinic_hwif *hwif = pf_to_mgmt->hwif;
struct pci_dev *pdev = hwif->pdev;
int err;
err = alloc_recv_msg(pf_to_mgmt,
&pf_to_mgmt->recv_msg_from_mgmt);
if (err) {
dev_err(&pdev->dev, "Failed to allocate recv msg\n");
return err;
}
err = alloc_recv_msg(pf_to_mgmt,
&pf_to_mgmt->recv_resp_msg_from_mgmt);
if (err) {
dev_err(&pdev->dev, "Failed to allocate resp recv msg\n");
return err;
}
pf_to_mgmt->sync_msg_buf = devm_kzalloc(&pdev->dev,
MAX_PF_MGMT_BUF_SIZE,
GFP_KERNEL);
if (!pf_to_mgmt->sync_msg_buf)
return -ENOMEM;
return 0;
}
/**
* hinic_pf_to_mgmt_init - initialize PF to MGMT channel
* @pf_to_mgmt: PF to MGMT channel
* @hwif: HW interface the PF to MGMT will use for accessing HW
*
* Return 0 - Success, negative - Failure
**/
int hinic_pf_to_mgmt_init(struct hinic_pf_to_mgmt *pf_to_mgmt,
struct hinic_hwif *hwif)
{
struct hinic_pfhwdev *pfhwdev = mgmt_to_pfhwdev(pf_to_mgmt);
struct hinic_hwdev *hwdev = &pfhwdev->hwdev;
struct pci_dev *pdev = hwif->pdev;
int err;
pf_to_mgmt->hwif = hwif;
sema_init(&pf_to_mgmt->sync_msg_lock, 1);
pf_to_mgmt->sync_msg_id = 0;
err = alloc_msg_buf(pf_to_mgmt);
if (err) {
dev_err(&pdev->dev, "Failed to allocate msg buffers\n");
return err;
}
err = hinic_api_cmd_init(pf_to_mgmt->cmd_chain, hwif);
if (err) {
dev_err(&pdev->dev, "Failed to initialize cmd chains\n");
return err;
}
hinic_aeq_register_hw_cb(&hwdev->aeqs, HINIC_MSG_FROM_MGMT_CPU,
pf_to_mgmt,
mgmt_msg_aeqe_handler);
return 0;
}
/**
* hinic_pf_to_mgmt_free - free PF to MGMT channel
* @pf_to_mgmt: PF to MGMT channel
**/
void hinic_pf_to_mgmt_free(struct hinic_pf_to_mgmt *pf_to_mgmt)
{
struct hinic_pfhwdev *pfhwdev = mgmt_to_pfhwdev(pf_to_mgmt);
struct hinic_hwdev *hwdev = &pfhwdev->hwdev;
hinic_aeq_unregister_hw_cb(&hwdev->aeqs, HINIC_MSG_FROM_MGMT_CPU);
hinic_api_cmd_free(pf_to_mgmt->cmd_chain);
}
/*
* Huawei HiNIC PCI Express Linux driver
* Copyright(c) 2017 Huawei Technologies Co., Ltd
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* for more details.
*
*/
#ifndef HINIC_HW_MGMT_H
#define HINIC_HW_MGMT_H
#include <linux/types.h>
#include <linux/semaphore.h>
#include <linux/completion.h>
#include <linux/bitops.h>
#include "hinic_hw_if.h"
#include "hinic_hw_api_cmd.h"
#define HINIC_MSG_HEADER_MSG_LEN_SHIFT 0
#define HINIC_MSG_HEADER_MODULE_SHIFT 11
#define HINIC_MSG_HEADER_SEG_LEN_SHIFT 16
#define HINIC_MSG_HEADER_NO_ACK_SHIFT 22
#define HINIC_MSG_HEADER_ASYNC_MGMT_TO_PF_SHIFT 23
#define HINIC_MSG_HEADER_SEQID_SHIFT 24
#define HINIC_MSG_HEADER_LAST_SHIFT 30
#define HINIC_MSG_HEADER_DIRECTION_SHIFT 31
#define HINIC_MSG_HEADER_CMD_SHIFT 32
#define HINIC_MSG_HEADER_ZEROS_SHIFT 40
#define HINIC_MSG_HEADER_PCI_INTF_SHIFT 48
#define HINIC_MSG_HEADER_PF_IDX_SHIFT 50
#define HINIC_MSG_HEADER_MSG_ID_SHIFT 54
#define HINIC_MSG_HEADER_MSG_LEN_MASK 0x7FF
#define HINIC_MSG_HEADER_MODULE_MASK 0x1F
#define HINIC_MSG_HEADER_SEG_LEN_MASK 0x3F
#define HINIC_MSG_HEADER_NO_ACK_MASK 0x1
#define HINIC_MSG_HEADER_ASYNC_MGMT_TO_PF_MASK 0x1
#define HINIC_MSG_HEADER_SEQID_MASK 0x3F
#define HINIC_MSG_HEADER_LAST_MASK 0x1
#define HINIC_MSG_HEADER_DIRECTION_MASK 0x1
#define HINIC_MSG_HEADER_CMD_MASK 0xFF
#define HINIC_MSG_HEADER_ZEROS_MASK 0xFF
#define HINIC_MSG_HEADER_PCI_INTF_MASK 0x3
#define HINIC_MSG_HEADER_PF_IDX_MASK 0xF
#define HINIC_MSG_HEADER_MSG_ID_MASK 0x3FF
#define HINIC_MSG_HEADER_SET(val, member) \
((u64)((val) & HINIC_MSG_HEADER_##member##_MASK) << \
HINIC_MSG_HEADER_##member##_SHIFT)
#define HINIC_MSG_HEADER_GET(val, member) \
(((val) >> HINIC_MSG_HEADER_##member##_SHIFT) & \
HINIC_MSG_HEADER_##member##_MASK)
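/*
 * Worked example of the SET/GET pair above: independent fields are packed
 * into a single u64 header and recovered unchanged, e.g.
 *
 *	u64 hdr = HINIC_MSG_HEADER_SET(100, MSG_LEN) |
 *		  HINIC_MSG_HEADER_SET(3, MODULE)    |
 *		  HINIC_MSG_HEADER_SET(7, MSG_ID);
 *
 *	HINIC_MSG_HEADER_GET(hdr, MSG_LEN) == 100
 *	HINIC_MSG_HEADER_GET(hdr, MODULE)  == 3
 *	HINIC_MSG_HEADER_GET(hdr, MSG_ID)  == 7
 *
 * SET silently truncates values wider than the field mask, so callers must
 * respect the mask widths above.
 */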
enum hinic_mgmt_msg_type {
HINIC_MGMT_MSG_SYNC = 1,
};
enum hinic_cfg_cmd {
HINIC_CFG_NIC_CAP = 0,
};
enum hinic_comm_cmd {
HINIC_COMM_CMD_IO_STATUS_GET = 0x3,
HINIC_COMM_CMD_CMDQ_CTXT_SET = 0x10,
HINIC_COMM_CMD_CMDQ_CTXT_GET = 0x11,
HINIC_COMM_CMD_HWCTXT_SET = 0x12,
HINIC_COMM_CMD_HWCTXT_GET = 0x13,
HINIC_COMM_CMD_SQ_HI_CI_SET = 0x14,
HINIC_COMM_CMD_RES_STATE_SET = 0x24,
HINIC_COMM_CMD_IO_RES_CLEAR = 0x29,
HINIC_COMM_CMD_MAX = 0x32,
};
enum hinic_mgmt_cb_state {
HINIC_MGMT_CB_ENABLED = BIT(0),
HINIC_MGMT_CB_RUNNING = BIT(1),
};
struct hinic_recv_msg {
u8 *msg;
u8 *buf_out;
struct completion recv_done;
u16 cmd;
enum hinic_mod_type mod;
int async_mgmt_to_pf;
u16 msg_len;
u16 msg_id;
};
struct hinic_mgmt_cb {
void (*cb)(void *handle, u8 cmd,
void *buf_in, u16 in_size,
void *buf_out, u16 *out_size);
void *handle;
unsigned long state;
};
struct hinic_pf_to_mgmt {
struct hinic_hwif *hwif;
struct semaphore sync_msg_lock;
u16 sync_msg_id;
u8 *sync_msg_buf;
struct hinic_recv_msg recv_resp_msg_from_mgmt;
struct hinic_recv_msg recv_msg_from_mgmt;
struct hinic_api_cmd_chain *cmd_chain[HINIC_API_CMD_MAX];
struct hinic_mgmt_cb mgmt_cb[HINIC_MOD_MAX];
};
void hinic_register_mgmt_msg_cb(struct hinic_pf_to_mgmt *pf_to_mgmt,
enum hinic_mod_type mod,
void *handle,
void (*callback)(void *handle,
u8 cmd, void *buf_in,
u16 in_size, void *buf_out,
u16 *out_size));
void hinic_unregister_mgmt_msg_cb(struct hinic_pf_to_mgmt *pf_to_mgmt,
enum hinic_mod_type mod);
int hinic_msg_to_mgmt(struct hinic_pf_to_mgmt *pf_to_mgmt,
enum hinic_mod_type mod, u8 cmd,
void *buf_in, u16 in_size, void *buf_out, u16 *out_size,
enum hinic_mgmt_msg_type sync);
int hinic_pf_to_mgmt_init(struct hinic_pf_to_mgmt *pf_to_mgmt,
struct hinic_hwif *hwif);
void hinic_pf_to_mgmt_free(struct hinic_pf_to_mgmt *pf_to_mgmt);
#endif
/*
* Huawei HiNIC PCI Express Linux driver
* Copyright(c) 2017 Huawei Technologies Co., Ltd
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* for more details.
*
*/
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/pci.h>
#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/vmalloc.h>
#include <linux/errno.h>
#include <linux/sizes.h>
#include <linux/atomic.h>
#include <linux/skbuff.h>
#include <linux/io.h>
#include <asm/barrier.h>
#include <asm/byteorder.h>
#include "hinic_common.h"
#include "hinic_hw_if.h"
#include "hinic_hw_wqe.h"
#include "hinic_hw_wq.h"
#include "hinic_hw_qp_ctxt.h"
#include "hinic_hw_qp.h"
#include "hinic_hw_io.h"
#define SQ_DB_OFF SZ_2K
/* The number of cache lines to prefetch until reaching the threshold state */
#define WQ_PREFETCH_MAX 2
/* The number of cache lines to prefetch after reaching the threshold state */
#define WQ_PREFETCH_MIN 1
/* Threshold state */
#define WQ_PREFETCH_THRESHOLD 256
/* sizes of the SQ/RQ ctxt */
#define Q_CTXT_SIZE 48
#define CTXT_RSVD 240
#define SQ_CTXT_OFFSET(max_sqs, max_rqs, q_id) \
(((max_rqs) + (max_sqs)) * CTXT_RSVD + (q_id) * Q_CTXT_SIZE)
#define RQ_CTXT_OFFSET(max_sqs, max_rqs, q_id) \
		(((max_rqs) + (max_sqs)) * CTXT_RSVD + \
		 ((max_sqs) + (q_id)) * Q_CTXT_SIZE)
#define SIZE_16BYTES(size) (ALIGN(size, 16) >> 4)
#define SIZE_8BYTES(size) (ALIGN(size, 8) >> 3)
#define SECT_SIZE_FROM_8BYTES(size) ((size) << 3)
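/*
 * Worked example of the context offset macros above, with illustrative
 * values max_sqs = max_rqs = 16 and q_id = 2:
 *
 *	SQ_CTXT_OFFSET(16, 16, 2) = (16 + 16) * 240 + 2 * 48        = 7776
 *	RQ_CTXT_OFFSET(16, 16, 2) = (16 + 16) * 240 + (16 + 2) * 48 = 8544
 *
 * All SQ contexts precede all RQ contexts after the shared reserved area;
 * SIZE_16BYTES() then converts such byte offsets into the 16-byte units the
 * HW expects.
 */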
#define SQ_DB_PI_HI_SHIFT 8
#define SQ_DB_PI_HI(prod_idx) ((prod_idx) >> SQ_DB_PI_HI_SHIFT)
#define SQ_DB_PI_LOW_MASK 0xFF
#define SQ_DB_PI_LOW(prod_idx) ((prod_idx) & SQ_DB_PI_LOW_MASK)
#define SQ_DB_ADDR(sq, pi) ((u64 *)((sq)->db_base) + SQ_DB_PI_LOW(pi))
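/*
 * Worked example of the doorbell PI split above: for a masked prod_idx of
 * 0x1A3, SQ_DB_PI_HI() yields 0x01 and SQ_DB_PI_LOW() yields 0xA3. The low
 * byte indexes the doorbell address through SQ_DB_ADDR() (in 8-byte steps
 * from db_base), while the high part travels in the doorbell payload built
 * by sq_prepare_db() below.
 */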
#define SQ_MASKED_IDX(sq, idx) ((idx) & (sq)->wq->mask)
#define RQ_MASKED_IDX(rq, idx) ((idx) & (rq)->wq->mask)
#define TX_MAX_MSS_DEFAULT 0x3E00
enum sq_wqe_type {
SQ_NORMAL_WQE = 0,
};
enum rq_completion_fmt {
RQ_COMPLETE_SGE = 1
};
void hinic_qp_prepare_header(struct hinic_qp_ctxt_header *qp_ctxt_hdr,
enum hinic_qp_ctxt_type ctxt_type,
u16 num_queues, u16 max_queues)
{
u16 max_sqs = max_queues;
u16 max_rqs = max_queues;
qp_ctxt_hdr->num_queues = num_queues;
qp_ctxt_hdr->queue_type = ctxt_type;
if (ctxt_type == HINIC_QP_CTXT_TYPE_SQ)
qp_ctxt_hdr->addr_offset = SQ_CTXT_OFFSET(max_sqs, max_rqs, 0);
else
qp_ctxt_hdr->addr_offset = RQ_CTXT_OFFSET(max_sqs, max_rqs, 0);
qp_ctxt_hdr->addr_offset = SIZE_16BYTES(qp_ctxt_hdr->addr_offset);
hinic_cpu_to_be32(qp_ctxt_hdr, sizeof(*qp_ctxt_hdr));
}
void hinic_sq_prepare_ctxt(struct hinic_sq_ctxt *sq_ctxt,
struct hinic_sq *sq, u16 global_qid)
{
u32 wq_page_pfn_hi, wq_page_pfn_lo, wq_block_pfn_hi, wq_block_pfn_lo;
u64 wq_page_addr, wq_page_pfn, wq_block_pfn;
u16 pi_start, ci_start;
struct hinic_wq *wq;
wq = sq->wq;
ci_start = atomic_read(&wq->cons_idx);
pi_start = atomic_read(&wq->prod_idx);
/* Read the first page paddr from the WQ page paddr ptrs */
wq_page_addr = be64_to_cpu(*wq->block_vaddr);
wq_page_pfn = HINIC_WQ_PAGE_PFN(wq_page_addr);
wq_page_pfn_hi = upper_32_bits(wq_page_pfn);
wq_page_pfn_lo = lower_32_bits(wq_page_pfn);
wq_block_pfn = HINIC_WQ_BLOCK_PFN(wq->block_paddr);
wq_block_pfn_hi = upper_32_bits(wq_block_pfn);
wq_block_pfn_lo = lower_32_bits(wq_block_pfn);
sq_ctxt->ceq_attr = HINIC_SQ_CTXT_CEQ_ATTR_SET(global_qid,
GLOBAL_SQ_ID) |
HINIC_SQ_CTXT_CEQ_ATTR_SET(0, EN);
sq_ctxt->ci_wrapped = HINIC_SQ_CTXT_CI_SET(ci_start, IDX) |
HINIC_SQ_CTXT_CI_SET(1, WRAPPED);
sq_ctxt->wq_hi_pfn_pi =
HINIC_SQ_CTXT_WQ_PAGE_SET(wq_page_pfn_hi, HI_PFN) |
HINIC_SQ_CTXT_WQ_PAGE_SET(pi_start, PI);
sq_ctxt->wq_lo_pfn = wq_page_pfn_lo;
sq_ctxt->pref_cache =
HINIC_SQ_CTXT_PREF_SET(WQ_PREFETCH_MIN, CACHE_MIN) |
HINIC_SQ_CTXT_PREF_SET(WQ_PREFETCH_MAX, CACHE_MAX) |
HINIC_SQ_CTXT_PREF_SET(WQ_PREFETCH_THRESHOLD, CACHE_THRESHOLD);
sq_ctxt->pref_wrapped = 1;
sq_ctxt->pref_wq_hi_pfn_ci =
HINIC_SQ_CTXT_PREF_SET(ci_start, CI) |
HINIC_SQ_CTXT_PREF_SET(wq_page_pfn_hi, WQ_HI_PFN);
sq_ctxt->pref_wq_lo_pfn = wq_page_pfn_lo;
sq_ctxt->wq_block_hi_pfn =
HINIC_SQ_CTXT_WQ_BLOCK_SET(wq_block_pfn_hi, HI_PFN);
sq_ctxt->wq_block_lo_pfn = wq_block_pfn_lo;
hinic_cpu_to_be32(sq_ctxt, sizeof(*sq_ctxt));
}
void hinic_rq_prepare_ctxt(struct hinic_rq_ctxt *rq_ctxt,
struct hinic_rq *rq, u16 global_qid)
{
u32 wq_page_pfn_hi, wq_page_pfn_lo, wq_block_pfn_hi, wq_block_pfn_lo;
u64 wq_page_addr, wq_page_pfn, wq_block_pfn;
u16 pi_start, ci_start;
struct hinic_wq *wq;
wq = rq->wq;
ci_start = atomic_read(&wq->cons_idx);
pi_start = atomic_read(&wq->prod_idx);
/* Read the first page paddr from the WQ page paddr ptrs */
wq_page_addr = be64_to_cpu(*wq->block_vaddr);
wq_page_pfn = HINIC_WQ_PAGE_PFN(wq_page_addr);
wq_page_pfn_hi = upper_32_bits(wq_page_pfn);
wq_page_pfn_lo = lower_32_bits(wq_page_pfn);
wq_block_pfn = HINIC_WQ_BLOCK_PFN(wq->block_paddr);
wq_block_pfn_hi = upper_32_bits(wq_block_pfn);
wq_block_pfn_lo = lower_32_bits(wq_block_pfn);
rq_ctxt->ceq_attr = HINIC_RQ_CTXT_CEQ_ATTR_SET(0, EN) |
HINIC_RQ_CTXT_CEQ_ATTR_SET(1, WRAPPED);
rq_ctxt->pi_intr_attr = HINIC_RQ_CTXT_PI_SET(pi_start, IDX) |
HINIC_RQ_CTXT_PI_SET(rq->msix_entry, INTR);
rq_ctxt->wq_hi_pfn_ci = HINIC_RQ_CTXT_WQ_PAGE_SET(wq_page_pfn_hi,
HI_PFN) |
HINIC_RQ_CTXT_WQ_PAGE_SET(ci_start, CI);
rq_ctxt->wq_lo_pfn = wq_page_pfn_lo;
rq_ctxt->pref_cache =
HINIC_RQ_CTXT_PREF_SET(WQ_PREFETCH_MIN, CACHE_MIN) |
HINIC_RQ_CTXT_PREF_SET(WQ_PREFETCH_MAX, CACHE_MAX) |
HINIC_RQ_CTXT_PREF_SET(WQ_PREFETCH_THRESHOLD, CACHE_THRESHOLD);
rq_ctxt->pref_wrapped = 1;
rq_ctxt->pref_wq_hi_pfn_ci =
HINIC_RQ_CTXT_PREF_SET(wq_page_pfn_hi, WQ_HI_PFN) |
HINIC_RQ_CTXT_PREF_SET(ci_start, CI);
rq_ctxt->pref_wq_lo_pfn = wq_page_pfn_lo;
rq_ctxt->pi_paddr_hi = upper_32_bits(rq->pi_dma_addr);
rq_ctxt->pi_paddr_lo = lower_32_bits(rq->pi_dma_addr);
rq_ctxt->wq_block_hi_pfn =
HINIC_RQ_CTXT_WQ_BLOCK_SET(wq_block_pfn_hi, HI_PFN);
rq_ctxt->wq_block_lo_pfn = wq_block_pfn_lo;
hinic_cpu_to_be32(rq_ctxt, sizeof(*rq_ctxt));
}
/**
* alloc_sq_skb_arr - allocate sq array for saved skb
* @sq: HW Send Queue
*
* Return 0 - Success, negative - Failure
**/
static int alloc_sq_skb_arr(struct hinic_sq *sq)
{
struct hinic_wq *wq = sq->wq;
size_t skb_arr_size;
skb_arr_size = wq->q_depth * sizeof(*sq->saved_skb);
sq->saved_skb = vzalloc(skb_arr_size);
if (!sq->saved_skb)
return -ENOMEM;
return 0;
}
/**
* free_sq_skb_arr - free sq array for saved skb
* @sq: HW Send Queue
**/
static void free_sq_skb_arr(struct hinic_sq *sq)
{
vfree(sq->saved_skb);
}
/**
* alloc_rq_skb_arr - allocate rq array for saved skb
* @rq: HW Receive Queue
*
* Return 0 - Success, negative - Failure
**/
static int alloc_rq_skb_arr(struct hinic_rq *rq)
{
struct hinic_wq *wq = rq->wq;
size_t skb_arr_size;
skb_arr_size = wq->q_depth * sizeof(*rq->saved_skb);
rq->saved_skb = vzalloc(skb_arr_size);
if (!rq->saved_skb)
return -ENOMEM;
return 0;
}
/**
* free_rq_skb_arr - free rq array for saved skb
* @rq: HW Receive Queue
**/
static void free_rq_skb_arr(struct hinic_rq *rq)
{
vfree(rq->saved_skb);
}
/**
* hinic_init_sq - Initialize HW Send Queue
* @sq: HW Send Queue
* @hwif: HW Interface for accessing HW
* @wq: Work Queue for the data of the SQ
* @entry: msix entry for sq
* @ci_addr: address for reading the current HW consumer index
* @ci_dma_addr: dma address for reading the current HW consumer index
* @db_base: doorbell base address
*
* Return 0 - Success, negative - Failure
**/
int hinic_init_sq(struct hinic_sq *sq, struct hinic_hwif *hwif,
struct hinic_wq *wq, struct msix_entry *entry,
void *ci_addr, dma_addr_t ci_dma_addr,
void __iomem *db_base)
{
sq->hwif = hwif;
sq->wq = wq;
sq->irq = entry->vector;
sq->msix_entry = entry->entry;
sq->hw_ci_addr = ci_addr;
sq->hw_ci_dma_addr = ci_dma_addr;
sq->db_base = db_base + SQ_DB_OFF;
return alloc_sq_skb_arr(sq);
}
/**
* hinic_clean_sq - Clean HW Send Queue's Resources
* @sq: Send Queue
**/
void hinic_clean_sq(struct hinic_sq *sq)
{
free_sq_skb_arr(sq);
}
/**
* alloc_rq_cqe - allocate rq completion queue elements
* @rq: HW Receive Queue
*
* Return 0 - Success, negative - Failure
**/
static int alloc_rq_cqe(struct hinic_rq *rq)
{
struct hinic_hwif *hwif = rq->hwif;
struct pci_dev *pdev = hwif->pdev;
size_t cqe_dma_size, cqe_size;
struct hinic_wq *wq = rq->wq;
int j, i;
cqe_size = wq->q_depth * sizeof(*rq->cqe);
rq->cqe = vzalloc(cqe_size);
if (!rq->cqe)
return -ENOMEM;
cqe_dma_size = wq->q_depth * sizeof(*rq->cqe_dma);
rq->cqe_dma = vzalloc(cqe_dma_size);
if (!rq->cqe_dma)
goto err_cqe_dma_arr_alloc;
for (i = 0; i < wq->q_depth; i++) {
rq->cqe[i] = dma_zalloc_coherent(&pdev->dev,
sizeof(*rq->cqe[i]),
&rq->cqe_dma[i], GFP_KERNEL);
if (!rq->cqe[i])
goto err_cqe_alloc;
}
return 0;
err_cqe_alloc:
for (j = 0; j < i; j++)
dma_free_coherent(&pdev->dev, sizeof(*rq->cqe[j]), rq->cqe[j],
rq->cqe_dma[j]);
vfree(rq->cqe_dma);
err_cqe_dma_arr_alloc:
vfree(rq->cqe);
return -ENOMEM;
}
/**
* free_rq_cqe - free rq completion queue elements
* @rq: HW Receive Queue
**/
static void free_rq_cqe(struct hinic_rq *rq)
{
struct hinic_hwif *hwif = rq->hwif;
struct pci_dev *pdev = hwif->pdev;
struct hinic_wq *wq = rq->wq;
int i;
for (i = 0; i < wq->q_depth; i++)
dma_free_coherent(&pdev->dev, sizeof(*rq->cqe[i]), rq->cqe[i],
rq->cqe_dma[i]);
vfree(rq->cqe_dma);
vfree(rq->cqe);
}
/**
* hinic_init_rq - Initialize HW Receive Queue
* @rq: HW Receive Queue
* @hwif: HW Interface for accessing HW
* @wq: Work Queue for the data of the RQ
* @entry: msix entry for rq
*
* Return 0 - Success, negative - Failure
**/
int hinic_init_rq(struct hinic_rq *rq, struct hinic_hwif *hwif,
struct hinic_wq *wq, struct msix_entry *entry)
{
struct pci_dev *pdev = hwif->pdev;
size_t pi_size;
int err;
rq->hwif = hwif;
rq->wq = wq;
rq->irq = entry->vector;
rq->msix_entry = entry->entry;
rq->buf_sz = HINIC_RX_BUF_SZ;
err = alloc_rq_skb_arr(rq);
if (err) {
dev_err(&pdev->dev, "Failed to allocate rq priv data\n");
return err;
}
err = alloc_rq_cqe(rq);
if (err) {
dev_err(&pdev->dev, "Failed to allocate rq cqe\n");
goto err_alloc_rq_cqe;
}
	/* HW requirement: the PI area must be at least 32 bits wide */
pi_size = ALIGN(sizeof(*rq->pi_virt_addr), sizeof(u32));
rq->pi_virt_addr = dma_zalloc_coherent(&pdev->dev, pi_size,
&rq->pi_dma_addr, GFP_KERNEL);
if (!rq->pi_virt_addr) {
dev_err(&pdev->dev, "Failed to allocate PI address\n");
err = -ENOMEM;
goto err_pi_virt;
}
return 0;
err_pi_virt:
free_rq_cqe(rq);
err_alloc_rq_cqe:
free_rq_skb_arr(rq);
return err;
}
/**
* hinic_clean_rq - Clean HW Receive Queue's Resources
* @rq: HW Receive Queue
**/
void hinic_clean_rq(struct hinic_rq *rq)
{
struct hinic_hwif *hwif = rq->hwif;
struct pci_dev *pdev = hwif->pdev;
size_t pi_size;
pi_size = ALIGN(sizeof(*rq->pi_virt_addr), sizeof(u32));
dma_free_coherent(&pdev->dev, pi_size, rq->pi_virt_addr,
rq->pi_dma_addr);
free_rq_cqe(rq);
free_rq_skb_arr(rq);
}
/**
* hinic_get_sq_free_wqebbs - return number of free wqebbs for use
* @sq: send queue
*
* Return number of free wqebbs
**/
int hinic_get_sq_free_wqebbs(struct hinic_sq *sq)
{
struct hinic_wq *wq = sq->wq;
return atomic_read(&wq->delta) - 1;
}
/**
* hinic_get_rq_free_wqebbs - return number of free wqebbs for use
* @rq: recv queue
*
* Return number of free wqebbs
**/
int hinic_get_rq_free_wqebbs(struct hinic_rq *rq)
{
struct hinic_wq *wq = rq->wq;
return atomic_read(&wq->delta) - 1;
}
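/*
 * Worked example for the two helpers above (explanatory note, not from the
 * patch): wq->delta counts unused wqebbs and starts at q_depth, but one
 * wqebb is kept in reserve so a full queue stays distinguishable from an
 * empty one. For HINIC_SQ_DEPTH = 4096, an idle SQ therefore reports 4095
 * free wqebbs.
 */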
static void sq_prepare_ctrl(struct hinic_sq_ctrl *ctrl, u16 prod_idx,
int nr_descs)
{
u32 ctrl_size, task_size, bufdesc_size;
ctrl_size = SIZE_8BYTES(sizeof(struct hinic_sq_ctrl));
task_size = SIZE_8BYTES(sizeof(struct hinic_sq_task));
bufdesc_size = nr_descs * sizeof(struct hinic_sq_bufdesc);
bufdesc_size = SIZE_8BYTES(bufdesc_size);
ctrl->ctrl_info = HINIC_SQ_CTRL_SET(bufdesc_size, BUFDESC_SECT_LEN) |
HINIC_SQ_CTRL_SET(task_size, TASKSECT_LEN) |
HINIC_SQ_CTRL_SET(SQ_NORMAL_WQE, DATA_FORMAT) |
HINIC_SQ_CTRL_SET(ctrl_size, LEN);
ctrl->queue_info = HINIC_SQ_CTRL_SET(TX_MAX_MSS_DEFAULT,
QUEUE_INFO_MSS);
}
static void sq_prepare_task(struct hinic_sq_task *task)
{
task->pkt_info0 =
HINIC_SQ_TASK_INFO0_SET(0, L2HDR_LEN) |
HINIC_SQ_TASK_INFO0_SET(HINIC_L4_OFF_DISABLE, L4_OFFLOAD) |
HINIC_SQ_TASK_INFO0_SET(HINIC_OUTER_L3TYPE_UNKNOWN,
INNER_L3TYPE) |
HINIC_SQ_TASK_INFO0_SET(HINIC_VLAN_OFF_DISABLE,
VLAN_OFFLOAD) |
HINIC_SQ_TASK_INFO0_SET(HINIC_PKT_NOT_PARSED, PARSE_FLAG);
task->pkt_info1 =
HINIC_SQ_TASK_INFO1_SET(HINIC_MEDIA_UNKNOWN, MEDIA_TYPE) |
HINIC_SQ_TASK_INFO1_SET(0, INNER_L4_LEN) |
HINIC_SQ_TASK_INFO1_SET(0, INNER_L3_LEN);
task->pkt_info2 =
HINIC_SQ_TASK_INFO2_SET(0, TUNNEL_L4_LEN) |
HINIC_SQ_TASK_INFO2_SET(0, OUTER_L3_LEN) |
HINIC_SQ_TASK_INFO2_SET(HINIC_TUNNEL_L4TYPE_UNKNOWN,
TUNNEL_L4TYPE) |
HINIC_SQ_TASK_INFO2_SET(HINIC_OUTER_L3TYPE_UNKNOWN,
OUTER_L3TYPE);
task->ufo_v6_identify = 0;
task->pkt_info4 = HINIC_SQ_TASK_INFO4_SET(HINIC_L2TYPE_ETH, L2TYPE);
task->zero_pad = 0;
}
/**
* hinic_sq_prepare_wqe - prepare wqe before insert to the queue
* @sq: send queue
* @prod_idx: pi value
* @sq_wqe: wqe to prepare
* @sges: sges for use by the wqe for send for buf addresses
* @nr_sges: number of sges
**/
void hinic_sq_prepare_wqe(struct hinic_sq *sq, u16 prod_idx,
struct hinic_sq_wqe *sq_wqe, struct hinic_sge *sges,
int nr_sges)
{
int i;
sq_prepare_ctrl(&sq_wqe->ctrl, prod_idx, nr_sges);
sq_prepare_task(&sq_wqe->task);
for (i = 0; i < nr_sges; i++)
sq_wqe->buf_descs[i].sge = sges[i];
}
/**
* sq_prepare_db - prepare doorbell to write
* @sq: send queue
* @prod_idx: pi value for the doorbell
* @cos: cos of the doorbell
*
* Return db value
**/
static u32 sq_prepare_db(struct hinic_sq *sq, u16 prod_idx, unsigned int cos)
{
struct hinic_qp *qp = container_of(sq, struct hinic_qp, sq);
u8 hi_prod_idx = SQ_DB_PI_HI(SQ_MASKED_IDX(sq, prod_idx));
/* Data should be written to HW in Big Endian Format */
return cpu_to_be32(HINIC_SQ_DB_INFO_SET(hi_prod_idx, PI_HI) |
HINIC_SQ_DB_INFO_SET(HINIC_DB_SQ_TYPE, TYPE) |
HINIC_SQ_DB_INFO_SET(HINIC_DATA_PATH, PATH) |
HINIC_SQ_DB_INFO_SET(cos, COS) |
HINIC_SQ_DB_INFO_SET(qp->q_id, QID));
}
/**
 * hinic_sq_write_db - write doorbell
* @sq: send queue
* @prod_idx: pi value for the doorbell
* @wqe_size: wqe size
* @cos: cos of the wqe
**/
void hinic_sq_write_db(struct hinic_sq *sq, u16 prod_idx, unsigned int wqe_size,
unsigned int cos)
{
struct hinic_wq *wq = sq->wq;
	/* advance prod_idx past the wqe that is being committed */
prod_idx += ALIGN(wqe_size, wq->wqebb_size) / wq->wqebb_size;
wmb(); /* Write all before the doorbell */
writel(sq_prepare_db(sq, prod_idx, cos), SQ_DB_ADDR(sq, prod_idx));
}
/**
* hinic_sq_get_wqe - get wqe ptr in the current pi and update the pi
* @sq: sq to get wqe from
* @wqe_size: wqe size
* @prod_idx: returned pi
*
* Return wqe pointer
**/
struct hinic_sq_wqe *hinic_sq_get_wqe(struct hinic_sq *sq,
unsigned int wqe_size, u16 *prod_idx)
{
struct hinic_hw_wqe *hw_wqe = hinic_get_wqe(sq->wq, wqe_size,
prod_idx);
if (IS_ERR(hw_wqe))
return NULL;
return &hw_wqe->sq_wqe;
}
/**
* hinic_sq_write_wqe - write the wqe to the sq
* @sq: send queue
* @prod_idx: pi of the wqe
* @sq_wqe: the wqe to write
* @skb: skb to save
* @wqe_size: the size of the wqe
**/
void hinic_sq_write_wqe(struct hinic_sq *sq, u16 prod_idx,
struct hinic_sq_wqe *sq_wqe,
struct sk_buff *skb, unsigned int wqe_size)
{
struct hinic_hw_wqe *hw_wqe = (struct hinic_hw_wqe *)sq_wqe;
sq->saved_skb[prod_idx] = skb;
/* The data in the HW should be in Big Endian Format */
hinic_cpu_to_be32(sq_wqe, wqe_size);
hinic_write_wqe(sq->wq, hw_wqe, wqe_size);
}
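/*
 * TX submission sketch (illustrative only, not part of this patch) tying the
 * SQ helpers together. Error handling and sge setup are elided, and
 * HINIC_SQ_WQE_SIZE() is assumed to come from hinic_hw_wqe.h.
 */
static int example_sq_xmit(struct hinic_sq *sq, struct sk_buff *skb,
			   struct hinic_sge *sges, int nr_sges,
			   unsigned int cos)
{
	unsigned int wqe_size = HINIC_SQ_WQE_SIZE(nr_sges);
	struct hinic_sq_wqe *sq_wqe;
	u16 prod_idx;

	sq_wqe = hinic_sq_get_wqe(sq, wqe_size, &prod_idx);
	if (!sq_wqe)
		return -EBUSY;	/* queue is full; caller should stop the txq */

	hinic_sq_prepare_wqe(sq, prod_idx, sq_wqe, sges, nr_sges);
	hinic_sq_write_wqe(sq, prod_idx, sq_wqe, skb, wqe_size);
	hinic_sq_write_db(sq, prod_idx, wqe_size, cos);
	return 0;
}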
/**
* hinic_sq_read_wqe - read wqe ptr in the current ci and update the ci
* @sq: send queue
* @skb: return skb that was saved
* @wqe_size: the size of the wqe
* @cons_idx: consumer index of the wqe
*
* Return wqe in ci position
**/
struct hinic_sq_wqe *hinic_sq_read_wqe(struct hinic_sq *sq,
struct sk_buff **skb,
unsigned int *wqe_size, u16 *cons_idx)
{
struct hinic_hw_wqe *hw_wqe;
struct hinic_sq_wqe *sq_wqe;
struct hinic_sq_ctrl *ctrl;
unsigned int buf_sect_len;
u32 ctrl_info;
	/* read the ctrl section to get the wqe size */
hw_wqe = hinic_read_wqe(sq->wq, sizeof(*ctrl), cons_idx);
if (IS_ERR(hw_wqe))
return NULL;
sq_wqe = &hw_wqe->sq_wqe;
ctrl = &sq_wqe->ctrl;
ctrl_info = be32_to_cpu(ctrl->ctrl_info);
buf_sect_len = HINIC_SQ_CTRL_GET(ctrl_info, BUFDESC_SECT_LEN);
*wqe_size = sizeof(*ctrl) + sizeof(sq_wqe->task);
*wqe_size += SECT_SIZE_FROM_8BYTES(buf_sect_len);
*skb = sq->saved_skb[*cons_idx];
	/* re-read the wqe using its real size */
hw_wqe = hinic_read_wqe(sq->wq, *wqe_size, cons_idx);
return &hw_wqe->sq_wqe;
}
/**
* hinic_sq_put_wqe - release the ci for new wqes
* @sq: send queue
* @wqe_size: the size of the wqe
**/
void hinic_sq_put_wqe(struct hinic_sq *sq, unsigned int wqe_size)
{
hinic_put_wqe(sq->wq, wqe_size);
}
/**
* hinic_sq_get_sges - get sges from the wqe
* @sq_wqe: wqe to get the sges from its buffer addresses
* @sges: returned sges
 * @nr_sges: number of sges to return
**/
void hinic_sq_get_sges(struct hinic_sq_wqe *sq_wqe, struct hinic_sge *sges,
int nr_sges)
{
int i;
for (i = 0; i < nr_sges && i < HINIC_MAX_SQ_BUFDESCS; i++) {
sges[i] = sq_wqe->buf_descs[i].sge;
hinic_be32_to_cpu(&sges[i], sizeof(sges[i]));
}
}
/**
* hinic_rq_get_wqe - get wqe ptr in the current pi and update the pi
* @rq: rq to get wqe from
* @wqe_size: wqe size
* @prod_idx: returned pi
*
* Return wqe pointer
**/
struct hinic_rq_wqe *hinic_rq_get_wqe(struct hinic_rq *rq,
unsigned int wqe_size, u16 *prod_idx)
{
struct hinic_hw_wqe *hw_wqe = hinic_get_wqe(rq->wq, wqe_size,
prod_idx);
if (IS_ERR(hw_wqe))
return NULL;
return &hw_wqe->rq_wqe;
}
/**
* hinic_rq_write_wqe - write the wqe to the rq
* @rq: recv queue
* @prod_idx: pi of the wqe
* @rq_wqe: the wqe to write
* @skb: skb to save
**/
void hinic_rq_write_wqe(struct hinic_rq *rq, u16 prod_idx,
struct hinic_rq_wqe *rq_wqe, struct sk_buff *skb)
{
struct hinic_hw_wqe *hw_wqe = (struct hinic_hw_wqe *)rq_wqe;
rq->saved_skb[prod_idx] = skb;
/* The data in the HW should be in Big Endian Format */
hinic_cpu_to_be32(rq_wqe, sizeof(*rq_wqe));
hinic_write_wqe(rq->wq, hw_wqe, sizeof(*rq_wqe));
}
/**
* hinic_rq_read_wqe - read wqe ptr in the current ci and update the ci
* @rq: recv queue
* @wqe_size: the size of the wqe
* @skb: return saved skb
* @cons_idx: consumer index of the wqe
*
* Return wqe in ci position
**/
struct hinic_rq_wqe *hinic_rq_read_wqe(struct hinic_rq *rq,
unsigned int wqe_size,
struct sk_buff **skb, u16 *cons_idx)
{
struct hinic_hw_wqe *hw_wqe;
struct hinic_rq_cqe *cqe;
int rx_done;
u32 status;
hw_wqe = hinic_read_wqe(rq->wq, wqe_size, cons_idx);
if (IS_ERR(hw_wqe))
return NULL;
cqe = rq->cqe[*cons_idx];
status = be32_to_cpu(cqe->status);
rx_done = HINIC_RQ_CQE_STATUS_GET(status, RXDONE);
if (!rx_done)
return NULL;
*skb = rq->saved_skb[*cons_idx];
return &hw_wqe->rq_wqe;
}
/**
* hinic_rq_read_next_wqe - increment ci and read the wqe in ci position
* @rq: recv queue
* @wqe_size: the size of the wqe
* @skb: return saved skb
* @cons_idx: consumer index in the wq
*
* Return wqe in incremented ci position
**/
struct hinic_rq_wqe *hinic_rq_read_next_wqe(struct hinic_rq *rq,
unsigned int wqe_size,
struct sk_buff **skb,
u16 *cons_idx)
{
struct hinic_wq *wq = rq->wq;
struct hinic_hw_wqe *hw_wqe;
unsigned int num_wqebbs;
wqe_size = ALIGN(wqe_size, wq->wqebb_size);
num_wqebbs = wqe_size / wq->wqebb_size;
*cons_idx = RQ_MASKED_IDX(rq, *cons_idx + num_wqebbs);
*skb = rq->saved_skb[*cons_idx];
hw_wqe = hinic_read_wqe_direct(wq, *cons_idx);
return &hw_wqe->rq_wqe;
}
/**
 * hinic_rq_put_wqe - release the ci for new wqes
* @rq: recv queue
* @cons_idx: consumer index of the wqe
* @wqe_size: the size of the wqe
**/
void hinic_rq_put_wqe(struct hinic_rq *rq, u16 cons_idx,
unsigned int wqe_size)
{
struct hinic_rq_cqe *cqe = rq->cqe[cons_idx];
u32 status = be32_to_cpu(cqe->status);
status = HINIC_RQ_CQE_STATUS_CLEAR(status, RXDONE);
	/* Rx WQE size is 1 WQEBB, no wq shadow */
cqe->status = cpu_to_be32(status);
	wmb(); /* ensure the RXDONE clear is visible before releasing the wqe */
hinic_put_wqe(rq->wq, wqe_size);
}
/**
* hinic_rq_get_sge - get sge from the wqe
* @rq: recv queue
* @rq_wqe: wqe to get the sge from its buf address
* @cons_idx: consumer index
* @sge: returned sge
**/
void hinic_rq_get_sge(struct hinic_rq *rq, struct hinic_rq_wqe *rq_wqe,
u16 cons_idx, struct hinic_sge *sge)
{
struct hinic_rq_cqe *cqe = rq->cqe[cons_idx];
u32 len = be32_to_cpu(cqe->len);
sge->hi_addr = be32_to_cpu(rq_wqe->buf_desc.hi_addr);
sge->lo_addr = be32_to_cpu(rq_wqe->buf_desc.lo_addr);
sge->len = HINIC_RQ_CQE_SGE_GET(len, LEN);
}
/**
* hinic_rq_prepare_wqe - prepare wqe before insert to the queue
* @rq: recv queue
* @prod_idx: pi value
* @rq_wqe: the wqe
* @sge: sge for use by the wqe for recv buf address
**/
void hinic_rq_prepare_wqe(struct hinic_rq *rq, u16 prod_idx,
struct hinic_rq_wqe *rq_wqe, struct hinic_sge *sge)
{
struct hinic_rq_cqe_sect *cqe_sect = &rq_wqe->cqe_sect;
struct hinic_rq_bufdesc *buf_desc = &rq_wqe->buf_desc;
struct hinic_rq_cqe *cqe = rq->cqe[prod_idx];
struct hinic_rq_ctrl *ctrl = &rq_wqe->ctrl;
dma_addr_t cqe_dma = rq->cqe_dma[prod_idx];
ctrl->ctrl_info =
HINIC_RQ_CTRL_SET(SIZE_8BYTES(sizeof(*ctrl)), LEN) |
HINIC_RQ_CTRL_SET(SIZE_8BYTES(sizeof(*cqe_sect)),
COMPLETE_LEN) |
HINIC_RQ_CTRL_SET(SIZE_8BYTES(sizeof(*buf_desc)),
BUFDESC_SECT_LEN) |
HINIC_RQ_CTRL_SET(RQ_COMPLETE_SGE, COMPLETE_FORMAT);
hinic_set_sge(&cqe_sect->sge, cqe_dma, sizeof(*cqe));
buf_desc->hi_addr = sge->hi_addr;
buf_desc->lo_addr = sge->lo_addr;
}
/**
* hinic_rq_update - update pi of the rq
* @rq: recv queue
* @prod_idx: pi value
**/
void hinic_rq_update(struct hinic_rq *rq, u16 prod_idx)
{
*rq->pi_virt_addr = cpu_to_be16(RQ_MASKED_IDX(rq, prod_idx + 1));
}
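/*
 * RX buffer post sketch (illustrative only, not part of this patch) tying
 * the RQ helpers together. The skb/dma mapping into the sge is elided, and
 * HINIC_RQ_WQE_SIZE is assumed to come from hinic_hw_wqe.h.
 */
static int example_rq_post_buf(struct hinic_rq *rq, struct sk_buff *skb,
			       struct hinic_sge *sge)
{
	struct hinic_rq_wqe *rq_wqe;
	u16 prod_idx;

	rq_wqe = hinic_rq_get_wqe(rq, HINIC_RQ_WQE_SIZE, &prod_idx);
	if (!rq_wqe)
		return -EBUSY;

	hinic_rq_prepare_wqe(rq, prod_idx, rq_wqe, sge);
	hinic_rq_write_wqe(rq, prod_idx, rq_wqe, skb);
	hinic_rq_update(rq, prod_idx);
	return 0;
}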
/*
* Huawei HiNIC PCI Express Linux driver
* Copyright(c) 2017 Huawei Technologies Co., Ltd
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* for more details.
*
*/
#ifndef HINIC_HW_QP_H
#define HINIC_HW_QP_H
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/sizes.h>
#include <linux/pci.h>
#include <linux/skbuff.h>
#include "hinic_common.h"
#include "hinic_hw_if.h"
#include "hinic_hw_wqe.h"
#include "hinic_hw_wq.h"
#include "hinic_hw_qp_ctxt.h"
#define HINIC_SQ_DB_INFO_PI_HI_SHIFT 0
#define HINIC_SQ_DB_INFO_QID_SHIFT 8
#define HINIC_SQ_DB_INFO_PATH_SHIFT 23
#define HINIC_SQ_DB_INFO_COS_SHIFT 24
#define HINIC_SQ_DB_INFO_TYPE_SHIFT 27
#define HINIC_SQ_DB_INFO_PI_HI_MASK 0xFF
#define HINIC_SQ_DB_INFO_QID_MASK 0x3FF
#define HINIC_SQ_DB_INFO_PATH_MASK 0x1
#define HINIC_SQ_DB_INFO_COS_MASK 0x7
#define HINIC_SQ_DB_INFO_TYPE_MASK 0x1F
#define HINIC_SQ_DB_INFO_SET(val, member) \
(((u32)(val) & HINIC_SQ_DB_INFO_##member##_MASK) \
<< HINIC_SQ_DB_INFO_##member##_SHIFT)
#define HINIC_SQ_WQEBB_SIZE 64
#define HINIC_RQ_WQEBB_SIZE 32
#define HINIC_SQ_PAGE_SIZE SZ_4K
#define HINIC_RQ_PAGE_SIZE SZ_4K
#define HINIC_SQ_DEPTH SZ_4K
#define HINIC_RQ_DEPTH SZ_4K
#define HINIC_RX_BUF_SZ 2048
#define HINIC_MIN_TX_WQE_SIZE(wq) \
ALIGN(HINIC_SQ_WQE_SIZE(1), (wq)->wqebb_size)
#define HINIC_MIN_TX_NUM_WQEBBS(sq) \
(HINIC_MIN_TX_WQE_SIZE((sq)->wq) / (sq)->wq->wqebb_size)
struct hinic_sq {
struct hinic_hwif *hwif;
struct hinic_wq *wq;
u32 irq;
u16 msix_entry;
void *hw_ci_addr;
dma_addr_t hw_ci_dma_addr;
void __iomem *db_base;
struct sk_buff **saved_skb;
};
struct hinic_rq {
struct hinic_hwif *hwif;
struct hinic_wq *wq;
u32 irq;
u16 msix_entry;
size_t buf_sz;
struct sk_buff **saved_skb;
struct hinic_rq_cqe **cqe;
dma_addr_t *cqe_dma;
u16 *pi_virt_addr;
dma_addr_t pi_dma_addr;
};
struct hinic_qp {
struct hinic_sq sq;
struct hinic_rq rq;
u16 q_id;
};
void hinic_qp_prepare_header(struct hinic_qp_ctxt_header *qp_ctxt_hdr,
enum hinic_qp_ctxt_type ctxt_type,
u16 num_queues, u16 max_queues);
void hinic_sq_prepare_ctxt(struct hinic_sq_ctxt *sq_ctxt,
struct hinic_sq *sq, u16 global_qid);
void hinic_rq_prepare_ctxt(struct hinic_rq_ctxt *rq_ctxt,
struct hinic_rq *rq, u16 global_qid);
int hinic_init_sq(struct hinic_sq *sq, struct hinic_hwif *hwif,
struct hinic_wq *wq, struct msix_entry *entry, void *ci_addr,
dma_addr_t ci_dma_addr, void __iomem *db_base);
void hinic_clean_sq(struct hinic_sq *sq);
int hinic_init_rq(struct hinic_rq *rq, struct hinic_hwif *hwif,
struct hinic_wq *wq, struct msix_entry *entry);
void hinic_clean_rq(struct hinic_rq *rq);
int hinic_get_sq_free_wqebbs(struct hinic_sq *sq);
int hinic_get_rq_free_wqebbs(struct hinic_rq *rq);
void hinic_sq_prepare_wqe(struct hinic_sq *sq, u16 prod_idx,
struct hinic_sq_wqe *wqe, struct hinic_sge *sges,
int nr_sges);
void hinic_sq_write_db(struct hinic_sq *sq, u16 prod_idx, unsigned int wqe_size,
unsigned int cos);
struct hinic_sq_wqe *hinic_sq_get_wqe(struct hinic_sq *sq,
unsigned int wqe_size, u16 *prod_idx);
void hinic_sq_write_wqe(struct hinic_sq *sq, u16 prod_idx,
struct hinic_sq_wqe *wqe, struct sk_buff *skb,
unsigned int wqe_size);
struct hinic_sq_wqe *hinic_sq_read_wqe(struct hinic_sq *sq,
struct sk_buff **skb,
unsigned int *wqe_size, u16 *cons_idx);
void hinic_sq_put_wqe(struct hinic_sq *sq, unsigned int wqe_size);
void hinic_sq_get_sges(struct hinic_sq_wqe *wqe, struct hinic_sge *sges,
int nr_sges);
struct hinic_rq_wqe *hinic_rq_get_wqe(struct hinic_rq *rq,
unsigned int wqe_size, u16 *prod_idx);
void hinic_rq_write_wqe(struct hinic_rq *rq, u16 prod_idx,
struct hinic_rq_wqe *wqe, struct sk_buff *skb);
struct hinic_rq_wqe *hinic_rq_read_wqe(struct hinic_rq *rq,
unsigned int wqe_size,
struct sk_buff **skb, u16 *cons_idx);
struct hinic_rq_wqe *hinic_rq_read_next_wqe(struct hinic_rq *rq,
unsigned int wqe_size,
struct sk_buff **skb,
u16 *cons_idx);
void hinic_rq_put_wqe(struct hinic_rq *rq, u16 cons_idx,
unsigned int wqe_size);
void hinic_rq_get_sge(struct hinic_rq *rq, struct hinic_rq_wqe *wqe,
u16 cons_idx, struct hinic_sge *sge);
void hinic_rq_prepare_wqe(struct hinic_rq *rq, u16 prod_idx,
struct hinic_rq_wqe *wqe, struct hinic_sge *sge);
void hinic_rq_update(struct hinic_rq *rq, u16 prod_idx);
#endif
/*
* Huawei HiNIC PCI Express Linux driver
* Copyright(c) 2017 Huawei Technologies Co., Ltd
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* for more details.
*
*/
#ifndef HINIC_HW_QP_CTXT_H
#define HINIC_HW_QP_CTXT_H
#include <linux/types.h>
#include "hinic_hw_cmdq.h"
#define HINIC_SQ_CTXT_CEQ_ATTR_GLOBAL_SQ_ID_SHIFT 13
#define HINIC_SQ_CTXT_CEQ_ATTR_EN_SHIFT 23
#define HINIC_SQ_CTXT_CEQ_ATTR_GLOBAL_SQ_ID_MASK 0x3FF
#define HINIC_SQ_CTXT_CEQ_ATTR_EN_MASK 0x1
#define HINIC_SQ_CTXT_CEQ_ATTR_SET(val, member) \
(((u32)(val) & HINIC_SQ_CTXT_CEQ_ATTR_##member##_MASK) \
<< HINIC_SQ_CTXT_CEQ_ATTR_##member##_SHIFT)
#define HINIC_SQ_CTXT_CI_IDX_SHIFT 11
#define HINIC_SQ_CTXT_CI_WRAPPED_SHIFT 23
#define HINIC_SQ_CTXT_CI_IDX_MASK 0xFFF
#define HINIC_SQ_CTXT_CI_WRAPPED_MASK 0x1
#define HINIC_SQ_CTXT_CI_SET(val, member) \
(((u32)(val) & HINIC_SQ_CTXT_CI_##member##_MASK) \
<< HINIC_SQ_CTXT_CI_##member##_SHIFT)
#define HINIC_SQ_CTXT_WQ_PAGE_HI_PFN_SHIFT 0
#define HINIC_SQ_CTXT_WQ_PAGE_PI_SHIFT 20
#define HINIC_SQ_CTXT_WQ_PAGE_HI_PFN_MASK 0xFFFFF
#define HINIC_SQ_CTXT_WQ_PAGE_PI_MASK 0xFFF
#define HINIC_SQ_CTXT_WQ_PAGE_SET(val, member) \
(((u32)(val) & HINIC_SQ_CTXT_WQ_PAGE_##member##_MASK) \
<< HINIC_SQ_CTXT_WQ_PAGE_##member##_SHIFT)
#define HINIC_SQ_CTXT_PREF_CACHE_THRESHOLD_SHIFT 0
#define HINIC_SQ_CTXT_PREF_CACHE_MAX_SHIFT 14
#define HINIC_SQ_CTXT_PREF_CACHE_MIN_SHIFT 25
#define HINIC_SQ_CTXT_PREF_CACHE_THRESHOLD_MASK 0x3FFF
#define HINIC_SQ_CTXT_PREF_CACHE_MAX_MASK 0x7FF
#define HINIC_SQ_CTXT_PREF_CACHE_MIN_MASK 0x7F
#define HINIC_SQ_CTXT_PREF_WQ_HI_PFN_SHIFT 0
#define HINIC_SQ_CTXT_PREF_CI_SHIFT 20
#define HINIC_SQ_CTXT_PREF_WQ_HI_PFN_MASK 0xFFFFF
#define HINIC_SQ_CTXT_PREF_CI_MASK 0xFFF
#define HINIC_SQ_CTXT_PREF_SET(val, member) \
(((u32)(val) & HINIC_SQ_CTXT_PREF_##member##_MASK) \
<< HINIC_SQ_CTXT_PREF_##member##_SHIFT)
#define HINIC_SQ_CTXT_WQ_BLOCK_HI_PFN_SHIFT 0
#define HINIC_SQ_CTXT_WQ_BLOCK_HI_PFN_MASK 0x7FFFFF
#define HINIC_SQ_CTXT_WQ_BLOCK_SET(val, member) \
(((u32)(val) & HINIC_SQ_CTXT_WQ_BLOCK_##member##_MASK) \
<< HINIC_SQ_CTXT_WQ_BLOCK_##member##_SHIFT)
#define HINIC_RQ_CTXT_CEQ_ATTR_EN_SHIFT 0
#define HINIC_RQ_CTXT_CEQ_ATTR_WRAPPED_SHIFT 1
#define HINIC_RQ_CTXT_CEQ_ATTR_EN_MASK 0x1
#define HINIC_RQ_CTXT_CEQ_ATTR_WRAPPED_MASK 0x1
#define HINIC_RQ_CTXT_CEQ_ATTR_SET(val, member) \
(((u32)(val) & HINIC_RQ_CTXT_CEQ_ATTR_##member##_MASK) \
<< HINIC_RQ_CTXT_CEQ_ATTR_##member##_SHIFT)
#define HINIC_RQ_CTXT_PI_IDX_SHIFT 0
#define HINIC_RQ_CTXT_PI_INTR_SHIFT 22
#define HINIC_RQ_CTXT_PI_IDX_MASK 0xFFF
#define HINIC_RQ_CTXT_PI_INTR_MASK 0x3FF
#define HINIC_RQ_CTXT_PI_SET(val, member) \
(((u32)(val) & HINIC_RQ_CTXT_PI_##member##_MASK) << \
HINIC_RQ_CTXT_PI_##member##_SHIFT)
#define HINIC_RQ_CTXT_WQ_PAGE_HI_PFN_SHIFT 0
#define HINIC_RQ_CTXT_WQ_PAGE_CI_SHIFT 20
#define HINIC_RQ_CTXT_WQ_PAGE_HI_PFN_MASK 0xFFFFF
#define HINIC_RQ_CTXT_WQ_PAGE_CI_MASK 0xFFF
#define HINIC_RQ_CTXT_WQ_PAGE_SET(val, member) \
(((u32)(val) & HINIC_RQ_CTXT_WQ_PAGE_##member##_MASK) << \
HINIC_RQ_CTXT_WQ_PAGE_##member##_SHIFT)
#define HINIC_RQ_CTXT_PREF_CACHE_THRESHOLD_SHIFT 0
#define HINIC_RQ_CTXT_PREF_CACHE_MAX_SHIFT 14
#define HINIC_RQ_CTXT_PREF_CACHE_MIN_SHIFT 25
#define HINIC_RQ_CTXT_PREF_CACHE_THRESHOLD_MASK 0x3FFF
#define HINIC_RQ_CTXT_PREF_CACHE_MAX_MASK 0x7FF
#define HINIC_RQ_CTXT_PREF_CACHE_MIN_MASK 0x7F
#define HINIC_RQ_CTXT_PREF_WQ_HI_PFN_SHIFT 0
#define HINIC_RQ_CTXT_PREF_CI_SHIFT 20
#define HINIC_RQ_CTXT_PREF_WQ_HI_PFN_MASK 0xFFFFF
#define HINIC_RQ_CTXT_PREF_CI_MASK 0xFFF
#define HINIC_RQ_CTXT_PREF_SET(val, member) \
(((u32)(val) & HINIC_RQ_CTXT_PREF_##member##_MASK) << \
HINIC_RQ_CTXT_PREF_##member##_SHIFT)
#define HINIC_RQ_CTXT_WQ_BLOCK_HI_PFN_SHIFT 0
#define HINIC_RQ_CTXT_WQ_BLOCK_HI_PFN_MASK 0x7FFFFF
#define HINIC_RQ_CTXT_WQ_BLOCK_SET(val, member) \
(((u32)(val) & HINIC_RQ_CTXT_WQ_BLOCK_##member##_MASK) << \
HINIC_RQ_CTXT_WQ_BLOCK_##member##_SHIFT)
#define HINIC_SQ_CTXT_SIZE(num_sqs) (sizeof(struct hinic_qp_ctxt_header) \
+ (num_sqs) * sizeof(struct hinic_sq_ctxt))
#define HINIC_RQ_CTXT_SIZE(num_rqs) (sizeof(struct hinic_qp_ctxt_header) \
+ (num_rqs) * sizeof(struct hinic_rq_ctxt))
#define HINIC_WQ_PAGE_PFN_SHIFT 12
#define HINIC_WQ_BLOCK_PFN_SHIFT 9
#define HINIC_WQ_PAGE_PFN(page_addr) ((page_addr) >> HINIC_WQ_PAGE_PFN_SHIFT)
#define HINIC_WQ_BLOCK_PFN(page_addr) ((page_addr) >> \
HINIC_WQ_BLOCK_PFN_SHIFT)
#define HINIC_Q_CTXT_MAX \
((HINIC_CMDQ_BUF_SIZE - sizeof(struct hinic_qp_ctxt_header)) \
/ sizeof(struct hinic_sq_ctxt))
enum hinic_qp_ctxt_type {
HINIC_QP_CTXT_TYPE_SQ,
HINIC_QP_CTXT_TYPE_RQ
};
struct hinic_qp_ctxt_header {
u16 num_queues;
u16 queue_type;
u32 addr_offset;
};
struct hinic_sq_ctxt {
u32 ceq_attr;
u32 ci_wrapped;
u32 wq_hi_pfn_pi;
u32 wq_lo_pfn;
u32 pref_cache;
u32 pref_wrapped;
u32 pref_wq_hi_pfn_ci;
u32 pref_wq_lo_pfn;
u32 rsvd0;
u32 rsvd1;
u32 wq_block_hi_pfn;
u32 wq_block_lo_pfn;
};
struct hinic_rq_ctxt {
u32 ceq_attr;
u32 pi_intr_attr;
u32 wq_hi_pfn_ci;
u32 wq_lo_pfn;
u32 pref_cache;
u32 pref_wrapped;
u32 pref_wq_hi_pfn_ci;
u32 pref_wq_lo_pfn;
u32 pi_paddr_hi;
u32 pi_paddr_lo;
u32 wq_block_hi_pfn;
u32 wq_block_lo_pfn;
};
struct hinic_sq_ctxt_block {
struct hinic_qp_ctxt_header hdr;
struct hinic_sq_ctxt sq_ctxt[HINIC_Q_CTXT_MAX];
};
struct hinic_rq_ctxt_block {
struct hinic_qp_ctxt_header hdr;
struct hinic_rq_ctxt rq_ctxt[HINIC_Q_CTXT_MAX];
};
#endif
/*
* Huawei HiNIC PCI Express Linux driver
* Copyright(c) 2017 Huawei Technologies Co., Ltd
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* for more details.
*
*/
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/pci.h>
#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/slab.h>
#include <linux/atomic.h>
#include <linux/semaphore.h>
#include <linux/errno.h>
#include <linux/vmalloc.h>
#include <linux/err.h>
#include <asm/byteorder.h>
#include "hinic_hw_if.h"
#include "hinic_hw_wqe.h"
#include "hinic_hw_wq.h"
#include "hinic_hw_cmdq.h"
#define WQS_BLOCKS_PER_PAGE 4
#define WQ_BLOCK_SIZE 4096
#define WQS_PAGE_SIZE (WQS_BLOCKS_PER_PAGE * WQ_BLOCK_SIZE)
#define WQS_MAX_NUM_BLOCKS 128
#define WQS_FREE_BLOCKS_SIZE(wqs) (WQS_MAX_NUM_BLOCKS * \
sizeof((wqs)->free_blocks[0]))
#define WQ_SIZE(wq) ((wq)->q_depth * (wq)->wqebb_size)
#define WQ_PAGE_ADDR_SIZE sizeof(u64)
#define WQ_MAX_PAGES (WQ_BLOCK_SIZE / WQ_PAGE_ADDR_SIZE)
#define CMDQ_BLOCK_SIZE 512
#define CMDQ_PAGE_SIZE 4096
#define CMDQ_WQ_MAX_PAGES (CMDQ_BLOCK_SIZE / WQ_PAGE_ADDR_SIZE)
#define WQ_BASE_VADDR(wqs, wq) \
((void *)((wqs)->page_vaddr[(wq)->page_idx]) \
+ (wq)->block_idx * WQ_BLOCK_SIZE)
#define WQ_BASE_PADDR(wqs, wq) \
((wqs)->page_paddr[(wq)->page_idx] \
+ (wq)->block_idx * WQ_BLOCK_SIZE)
#define WQ_BASE_ADDR(wqs, wq) \
((void *)((wqs)->shadow_page_vaddr[(wq)->page_idx]) \
+ (wq)->block_idx * WQ_BLOCK_SIZE)
#define CMDQ_BASE_VADDR(cmdq_pages, wq) \
((void *)((cmdq_pages)->page_vaddr) \
+ (wq)->block_idx * CMDQ_BLOCK_SIZE)
#define CMDQ_BASE_PADDR(cmdq_pages, wq) \
((cmdq_pages)->page_paddr \
+ (wq)->block_idx * CMDQ_BLOCK_SIZE)
#define CMDQ_BASE_ADDR(cmdq_pages, wq) \
((void *)((cmdq_pages)->shadow_page_vaddr) \
+ (wq)->block_idx * CMDQ_BLOCK_SIZE)
#define WQE_PAGE_OFF(wq, idx) (((idx) & ((wq)->num_wqebbs_per_page - 1)) * \
(wq)->wqebb_size)
#define WQE_PAGE_NUM(wq, idx) (((idx) / ((wq)->num_wqebbs_per_page)) \
& ((wq)->num_q_pages - 1))
#define WQ_PAGE_ADDR(wq, idx) \
((wq)->shadow_block_vaddr[WQE_PAGE_NUM(wq, idx)])
#define MASKED_WQE_IDX(wq, idx) ((idx) & (wq)->mask)
#define WQE_IN_RANGE(wqe, start, end) \
(((unsigned long)(wqe) >= (unsigned long)(start)) && \
((unsigned long)(wqe) < (unsigned long)(end)))
#define WQE_SHADOW_PAGE(wq, wqe) \
(((unsigned long)(wqe) - (unsigned long)(wq)->shadow_wqe) \
/ (wq)->max_wqe_size)
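/*
 * Worked example of the WQE indexing macros above: with a 64 B wqebb, a
 * 4096 B wq page (so num_wqebbs_per_page = 64) and 4 queue pages,
 * wqebb index 130 maps to
 *
 *	WQE_PAGE_NUM(wq, 130) = (130 / 64) & (4 - 1) = 2
 *	WQE_PAGE_OFF(wq, 130) = (130 & 63) * 64      = 128
 *
 * i.e. byte offset 128 inside the third queue page (illustrative numbers
 * only).
 */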
/**
* queue_alloc_page - allocate page for Queue
* @hwif: HW interface for allocating DMA
 * @vaddr: the allocated virtual address is returned here
 * @paddr: the allocated physical address is returned here
 * @shadow_vaddr: VM area for holding WQ page addresses is returned here
* @page_sz: page size of each WQ page
*
* Return 0 - Success, negative - Failure
**/
static int queue_alloc_page(struct hinic_hwif *hwif, u64 **vaddr, u64 *paddr,
void ***shadow_vaddr, size_t page_sz)
{
struct pci_dev *pdev = hwif->pdev;
dma_addr_t dma_addr;
*vaddr = dma_zalloc_coherent(&pdev->dev, page_sz, &dma_addr,
GFP_KERNEL);
if (!*vaddr) {
dev_err(&pdev->dev, "Failed to allocate dma for wqs page\n");
return -ENOMEM;
}
*paddr = (u64)dma_addr;
	/* use vzalloc for the potentially large shadow area */
*shadow_vaddr = vzalloc(page_sz);
if (!*shadow_vaddr)
goto err_shadow_vaddr;
return 0;
err_shadow_vaddr:
dma_free_coherent(&pdev->dev, page_sz, *vaddr, dma_addr);
return -ENOMEM;
}
/**
* wqs_allocate_page - allocate page for WQ set
* @wqs: Work Queue Set
 * @page_idx: index of the page to allocate
*
* Return 0 - Success, negative - Failure
**/
static int wqs_allocate_page(struct hinic_wqs *wqs, int page_idx)
{
return queue_alloc_page(wqs->hwif, &wqs->page_vaddr[page_idx],
&wqs->page_paddr[page_idx],
&wqs->shadow_page_vaddr[page_idx],
WQS_PAGE_SIZE);
}
/**
* wqs_free_page - free page of WQ set
* @wqs: Work Queue Set
 * @page_idx: index of the page to free
**/
static void wqs_free_page(struct hinic_wqs *wqs, int page_idx)
{
struct hinic_hwif *hwif = wqs->hwif;
struct pci_dev *pdev = hwif->pdev;
dma_free_coherent(&pdev->dev, WQS_PAGE_SIZE,
wqs->page_vaddr[page_idx],
(dma_addr_t)wqs->page_paddr[page_idx]);
vfree(wqs->shadow_page_vaddr[page_idx]);
}
/**
* cmdq_allocate_page - allocate page for cmdq
 * @cmdq_pages: the cmdq pages struct that will hold the allocated page
*
* Return 0 - Success, negative - Failure
**/
static int cmdq_allocate_page(struct hinic_cmdq_pages *cmdq_pages)
{
return queue_alloc_page(cmdq_pages->hwif, &cmdq_pages->page_vaddr,
&cmdq_pages->page_paddr,
&cmdq_pages->shadow_page_vaddr,
CMDQ_PAGE_SIZE);
}
/**
* cmdq_free_page - free page from cmdq
 * @cmdq_pages: the cmdq pages struct that holds the page
 **/
static void cmdq_free_page(struct hinic_cmdq_pages *cmdq_pages)
{
struct hinic_hwif *hwif = cmdq_pages->hwif;
struct pci_dev *pdev = hwif->pdev;
dma_free_coherent(&pdev->dev, CMDQ_PAGE_SIZE,
cmdq_pages->page_vaddr,
(dma_addr_t)cmdq_pages->page_paddr);
vfree(cmdq_pages->shadow_page_vaddr);
}
static int alloc_page_arrays(struct hinic_wqs *wqs)
{
struct hinic_hwif *hwif = wqs->hwif;
struct pci_dev *pdev = hwif->pdev;
size_t size;
size = wqs->num_pages * sizeof(*wqs->page_paddr);
wqs->page_paddr = devm_kzalloc(&pdev->dev, size, GFP_KERNEL);
if (!wqs->page_paddr)
return -ENOMEM;
size = wqs->num_pages * sizeof(*wqs->page_vaddr);
wqs->page_vaddr = devm_kzalloc(&pdev->dev, size, GFP_KERNEL);
if (!wqs->page_vaddr)
goto err_page_vaddr;
size = wqs->num_pages * sizeof(*wqs->shadow_page_vaddr);
wqs->shadow_page_vaddr = devm_kzalloc(&pdev->dev, size, GFP_KERNEL);
if (!wqs->shadow_page_vaddr)
goto err_page_shadow_vaddr;
return 0;
err_page_shadow_vaddr:
devm_kfree(&pdev->dev, wqs->page_vaddr);
err_page_vaddr:
devm_kfree(&pdev->dev, wqs->page_paddr);
return -ENOMEM;
}
static void free_page_arrays(struct hinic_wqs *wqs)
{
struct hinic_hwif *hwif = wqs->hwif;
struct pci_dev *pdev = hwif->pdev;
devm_kfree(&pdev->dev, wqs->shadow_page_vaddr);
devm_kfree(&pdev->dev, wqs->page_vaddr);
devm_kfree(&pdev->dev, wqs->page_paddr);
}
static int wqs_next_block(struct hinic_wqs *wqs, int *page_idx,
int *block_idx)
{
int pos;
down(&wqs->alloc_blocks_lock);
wqs->num_free_blks--;
if (wqs->num_free_blks < 0) {
wqs->num_free_blks++;
up(&wqs->alloc_blocks_lock);
return -ENOMEM;
}
pos = wqs->alloc_blk_pos++;
pos &= WQS_MAX_NUM_BLOCKS - 1;
*page_idx = wqs->free_blocks[pos].page_idx;
*block_idx = wqs->free_blocks[pos].block_idx;
wqs->free_blocks[pos].page_idx = -1;
wqs->free_blocks[pos].block_idx = -1;
up(&wqs->alloc_blocks_lock);
return 0;
}
static void wqs_return_block(struct hinic_wqs *wqs, int page_idx,
int block_idx)
{
int pos;
down(&wqs->alloc_blocks_lock);
pos = wqs->return_blk_pos++;
pos &= WQS_MAX_NUM_BLOCKS - 1;
wqs->free_blocks[pos].page_idx = page_idx;
wqs->free_blocks[pos].block_idx = block_idx;
wqs->num_free_blks++;
up(&wqs->alloc_blocks_lock);
}
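/*
 * Worked example for the two helpers above (explanatory note, not from the
 * patch): free_blocks[] acts as a circular free list of WQS_MAX_NUM_BLOCKS
 * entries. alloc_blk_pos and return_blk_pos only ever increase and are
 * masked with WQS_MAX_NUM_BLOCKS - 1. Starting full with 128 blocks
 * (alloc_blk_pos = 0, return_blk_pos = 128), three allocations drain slots
 * 0..2; the next return stores into slot 128 & 127 = 0, refilling slots in
 * the order they were drained. num_free_blks alone decides full vs empty.
 */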
static void init_wqs_blocks_arr(struct hinic_wqs *wqs)
{
int page_idx, blk_idx, pos = 0;
for (page_idx = 0; page_idx < wqs->num_pages; page_idx++) {
for (blk_idx = 0; blk_idx < WQS_BLOCKS_PER_PAGE; blk_idx++) {
wqs->free_blocks[pos].page_idx = page_idx;
wqs->free_blocks[pos].block_idx = blk_idx;
pos++;
}
}
wqs->alloc_blk_pos = 0;
wqs->return_blk_pos = pos;
wqs->num_free_blks = pos;
sema_init(&wqs->alloc_blocks_lock, 1);
}
/**
* hinic_wqs_alloc - allocate Work Queues set
* @wqs: Work Queue Set
* @max_wqs: maximum wqs to allocate
* @hwif: HW interface for use for the allocation
*
* Return 0 - Success, negative - Failure
**/
int hinic_wqs_alloc(struct hinic_wqs *wqs, int max_wqs,
struct hinic_hwif *hwif)
{
struct pci_dev *pdev = hwif->pdev;
int err, i, page_idx;
max_wqs = ALIGN(max_wqs, WQS_BLOCKS_PER_PAGE);
if (max_wqs > WQS_MAX_NUM_BLOCKS) {
dev_err(&pdev->dev, "Invalid max_wqs = %d\n", max_wqs);
return -EINVAL;
}
wqs->hwif = hwif;
wqs->num_pages = max_wqs / WQS_BLOCKS_PER_PAGE;
if (alloc_page_arrays(wqs)) {
dev_err(&pdev->dev,
"Failed to allocate mem for page addresses\n");
return -ENOMEM;
}
for (page_idx = 0; page_idx < wqs->num_pages; page_idx++) {
err = wqs_allocate_page(wqs, page_idx);
if (err) {
dev_err(&pdev->dev, "Failed wq page allocation\n");
goto err_wq_allocate_page;
}
}
wqs->free_blocks = devm_kzalloc(&pdev->dev, WQS_FREE_BLOCKS_SIZE(wqs),
GFP_KERNEL);
if (!wqs->free_blocks) {
err = -ENOMEM;
goto err_alloc_blocks;
}
init_wqs_blocks_arr(wqs);
return 0;
err_alloc_blocks:
err_wq_allocate_page:
for (i = 0; i < page_idx; i++)
wqs_free_page(wqs, i);
free_page_arrays(wqs);
return err;
}
/**
* hinic_wqs_free - free Work Queues set
* @wqs: Work Queue Set
**/
void hinic_wqs_free(struct hinic_wqs *wqs)
{
struct hinic_hwif *hwif = wqs->hwif;
struct pci_dev *pdev = hwif->pdev;
int page_idx;
devm_kfree(&pdev->dev, wqs->free_blocks);
for (page_idx = 0; page_idx < wqs->num_pages; page_idx++)
wqs_free_page(wqs, page_idx);
free_page_arrays(wqs);
}
/**
* alloc_wqes_shadow - allocate WQE shadows for WQ
* @wq: WQ to allocate shadows for
*
* Return 0 - Success, negative - Failure
**/
static int alloc_wqes_shadow(struct hinic_wq *wq)
{
struct hinic_hwif *hwif = wq->hwif;
struct pci_dev *pdev = hwif->pdev;
size_t size;
size = wq->num_q_pages * wq->max_wqe_size;
wq->shadow_wqe = devm_kzalloc(&pdev->dev, size, GFP_KERNEL);
if (!wq->shadow_wqe)
return -ENOMEM;
size = wq->num_q_pages * sizeof(wq->prod_idx);
wq->shadow_idx = devm_kzalloc(&pdev->dev, size, GFP_KERNEL);
if (!wq->shadow_idx)
goto err_shadow_idx;
return 0;
err_shadow_idx:
devm_kfree(&pdev->dev, wq->shadow_wqe);
return -ENOMEM;
}
/**
* free_wqes_shadow - free WQE shadows of WQ
* @wq: WQ to free shadows from
**/
static void free_wqes_shadow(struct hinic_wq *wq)
{
struct hinic_hwif *hwif = wq->hwif;
struct pci_dev *pdev = hwif->pdev;
devm_kfree(&pdev->dev, wq->shadow_idx);
devm_kfree(&pdev->dev, wq->shadow_wqe);
}
/**
* free_wq_pages - free pages of WQ
 * @wq: WQ to free pages from
 * @hwif: HW interface for releasing dma addresses
 * @num_q_pages: number of pages to free
**/
static void free_wq_pages(struct hinic_wq *wq, struct hinic_hwif *hwif,
int num_q_pages)
{
struct pci_dev *pdev = hwif->pdev;
int i;
for (i = 0; i < num_q_pages; i++) {
void **vaddr = &wq->shadow_block_vaddr[i];
u64 *paddr = &wq->block_vaddr[i];
dma_addr_t dma_addr;
dma_addr = (dma_addr_t)be64_to_cpu(*paddr);
dma_free_coherent(&pdev->dev, wq->wq_page_size, *vaddr,
dma_addr);
}
free_wqes_shadow(wq);
}
/**
* alloc_wq_pages - alloc pages for WQ
* @wq: WQ to allocate pages for
* @hwif: HW interface for allocating dma addresses
* @max_pages: maximum pages allowed
*
* Return 0 - Success, negative - Failure
**/
static int alloc_wq_pages(struct hinic_wq *wq, struct hinic_hwif *hwif,
int max_pages)
{
struct pci_dev *pdev = hwif->pdev;
int i, err, num_q_pages;
num_q_pages = ALIGN(WQ_SIZE(wq), wq->wq_page_size) / wq->wq_page_size;
if (num_q_pages > max_pages) {
dev_err(&pdev->dev, "Number wq pages exceeds the limit\n");
return -EINVAL;
}
if (num_q_pages & (num_q_pages - 1)) {
dev_err(&pdev->dev, "Number wq pages must be power of 2\n");
return -EINVAL;
}
wq->num_q_pages = num_q_pages;
err = alloc_wqes_shadow(wq);
if (err) {
dev_err(&pdev->dev, "Failed to allocate wqe shadow\n");
return err;
}
for (i = 0; i < num_q_pages; i++) {
void **vaddr = &wq->shadow_block_vaddr[i];
u64 *paddr = &wq->block_vaddr[i];
dma_addr_t dma_addr;
*vaddr = dma_zalloc_coherent(&pdev->dev, wq->wq_page_size,
&dma_addr, GFP_KERNEL);
if (!*vaddr) {
dev_err(&pdev->dev, "Failed to allocate wq page\n");
goto err_alloc_wq_pages;
}
/* HW uses Big Endian Format */
*paddr = cpu_to_be64(dma_addr);
}
return 0;
err_alloc_wq_pages:
free_wq_pages(wq, hwif, i);
return -ENOMEM;
}
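/* Worked example (illustrative): assuming WQ_SIZE(wq) expands to
 * wqebb_size * q_depth (the macro body is not shown here), a queue with
 * wqebb_size = 64, q_depth = 4096 and wq_page_size = 4096 needs
 *
 *	num_q_pages = ALIGN(64 * 4096, 4096) / 4096 = 64 pages,
 *
 * which is a power of 2 and therefore passes both checks above.
 */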
/**
* hinic_wq_allocate - Allocate the WQ resources from the WQS
* @wqs: WQ set from which to allocate the WQ resources
* @wq: WQ to allocate resources for it from the WQ set
* @wqebb_size: Work Queue Block Byte Size
* @wq_page_size: the page size in the Work Queue
* @q_depth: number of wqebbs in WQ
* @max_wqe_size: maximum WQE size that will be used in the WQ
*
* Return 0 - Success, negative - Failure
**/
int hinic_wq_allocate(struct hinic_wqs *wqs, struct hinic_wq *wq,
u16 wqebb_size, u16 wq_page_size, u16 q_depth,
u16 max_wqe_size)
{
struct hinic_hwif *hwif = wqs->hwif;
struct pci_dev *pdev = hwif->pdev;
u16 num_wqebbs_per_page;
int err;
if (wqebb_size == 0) {
dev_err(&pdev->dev, "wqebb_size must be > 0\n");
return -EINVAL;
}
if (wq_page_size == 0) {
dev_err(&pdev->dev, "wq_page_size must be > 0\n");
return -EINVAL;
}
if (q_depth & (q_depth - 1)) {
dev_err(&pdev->dev, "WQ q_depth must be power of 2\n");
return -EINVAL;
}
num_wqebbs_per_page = ALIGN(wq_page_size, wqebb_size) / wqebb_size;
if (num_wqebbs_per_page & (num_wqebbs_per_page - 1)) {
dev_err(&pdev->dev, "num wqebbs per page must be power of 2\n");
return -EINVAL;
}
wq->hwif = hwif;
err = wqs_next_block(wqs, &wq->page_idx, &wq->block_idx);
if (err) {
dev_err(&pdev->dev, "Failed to get free wqs next block\n");
return err;
}
wq->wqebb_size = wqebb_size;
wq->wq_page_size = wq_page_size;
wq->q_depth = q_depth;
wq->max_wqe_size = max_wqe_size;
wq->num_wqebbs_per_page = num_wqebbs_per_page;
wq->block_vaddr = WQ_BASE_VADDR(wqs, wq);
wq->shadow_block_vaddr = WQ_BASE_ADDR(wqs, wq);
wq->block_paddr = WQ_BASE_PADDR(wqs, wq);
err = alloc_wq_pages(wq, wqs->hwif, WQ_MAX_PAGES);
if (err) {
dev_err(&pdev->dev, "Failed to allocate wq pages\n");
goto err_alloc_wq_pages;
}
atomic_set(&wq->cons_idx, 0);
atomic_set(&wq->prod_idx, 0);
atomic_set(&wq->delta, q_depth);
wq->mask = q_depth - 1;
return 0;
err_alloc_wq_pages:
wqs_return_block(wqs, wq->page_idx, wq->block_idx);
return err;
}
/**
* hinic_wq_free - Free the WQ resources to the WQS
* @wqs: WQ set to free the WQ resources to it
* @wq: WQ to free its resources to the WQ set resources
**/
void hinic_wq_free(struct hinic_wqs *wqs, struct hinic_wq *wq)
{
free_wq_pages(wq, wqs->hwif, wq->num_q_pages);
wqs_return_block(wqs, wq->page_idx, wq->block_idx);
}
/**
* hinic_wqs_cmdq_alloc - Allocate wqs for cmdqs
* @cmdq_pages: will hold the pages of the cmdq
* @wq: returned wqs
* @hwif: HW interface
* @cmdq_blocks: number of cmdq blocks/wq to allocate
* @wqebb_size: Work Queue Block Byte Size
* @wq_page_size: the page size in the Work Queue
* @q_depth: number of wqebbs in WQ
* @max_wqe_size: maximum WQE size that will be used in the WQ
*
* Return 0 - Success, negative - Failure
**/
int hinic_wqs_cmdq_alloc(struct hinic_cmdq_pages *cmdq_pages,
struct hinic_wq *wq, struct hinic_hwif *hwif,
int cmdq_blocks, u16 wqebb_size, u16 wq_page_size,
u16 q_depth, u16 max_wqe_size)
{
struct pci_dev *pdev = hwif->pdev;
u16 num_wqebbs_per_page;
int i, j, err = -ENOMEM;
if (wqebb_size == 0) {
dev_err(&pdev->dev, "wqebb_size must be > 0\n");
return -EINVAL;
}
if (wq_page_size == 0) {
dev_err(&pdev->dev, "wq_page_size must be > 0\n");
return -EINVAL;
}
if (q_depth & (q_depth - 1)) {
dev_err(&pdev->dev, "WQ q_depth must be power of 2\n");
return -EINVAL;
}
num_wqebbs_per_page = ALIGN(wq_page_size, wqebb_size) / wqebb_size;
if (num_wqebbs_per_page & (num_wqebbs_per_page - 1)) {
dev_err(&pdev->dev, "num wqebbs per page must be power of 2\n");
return -EINVAL;
}
cmdq_pages->hwif = hwif;
err = cmdq_allocate_page(cmdq_pages);
if (err) {
dev_err(&pdev->dev, "Failed to allocate CMDQ page\n");
return err;
}
for (i = 0; i < cmdq_blocks; i++) {
wq[i].hwif = hwif;
wq[i].page_idx = 0;
wq[i].block_idx = i;
wq[i].wqebb_size = wqebb_size;
wq[i].wq_page_size = wq_page_size;
wq[i].q_depth = q_depth;
wq[i].max_wqe_size = max_wqe_size;
wq[i].num_wqebbs_per_page = num_wqebbs_per_page;
wq[i].block_vaddr = CMDQ_BASE_VADDR(cmdq_pages, &wq[i]);
wq[i].shadow_block_vaddr = CMDQ_BASE_ADDR(cmdq_pages, &wq[i]);
wq[i].block_paddr = CMDQ_BASE_PADDR(cmdq_pages, &wq[i]);
err = alloc_wq_pages(&wq[i], cmdq_pages->hwif,
CMDQ_WQ_MAX_PAGES);
if (err) {
dev_err(&pdev->dev, "Failed to alloc CMDQ blocks\n");
goto err_cmdq_block;
}
atomic_set(&wq[i].cons_idx, 0);
atomic_set(&wq[i].prod_idx, 0);
atomic_set(&wq[i].delta, q_depth);
wq[i].mask = q_depth - 1;
}
return 0;
err_cmdq_block:
for (j = 0; j < i; j++)
free_wq_pages(&wq[j], cmdq_pages->hwif, wq[j].num_q_pages);
cmdq_free_page(cmdq_pages);
return err;
}
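/* Usage sketch (illustrative; NUM_CMDQS and the sizes are placeholders):
 * command-queue WQs are allocated as an array and released with the
 * matching free routine:
 *
 *	struct hinic_cmdq_pages cmdq_pages;
 *	struct hinic_wq cmdq_wqs[NUM_CMDQS];
 *
 *	err = hinic_wqs_cmdq_alloc(&cmdq_pages, cmdq_wqs, hwif, NUM_CMDQS,
 *				   wqebb_size, wq_page_size, q_depth,
 *				   max_wqe_size);
 *	...
 *	hinic_wqs_cmdq_free(&cmdq_pages, cmdq_wqs, NUM_CMDQS);
 */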
/**
* hinic_wqs_cmdq_free - Free wqs from cmdqs
* @cmdq_pages: hold the pages of the cmdq
* @wq: wqs to free
* @cmdq_blocks: number of wqs to free
**/
void hinic_wqs_cmdq_free(struct hinic_cmdq_pages *cmdq_pages,
struct hinic_wq *wq, int cmdq_blocks)
{
int i;
for (i = 0; i < cmdq_blocks; i++)
free_wq_pages(&wq[i], cmdq_pages->hwif, wq[i].num_q_pages);
cmdq_free_page(cmdq_pages);
}
static void copy_wqe_to_shadow(struct hinic_wq *wq, void *shadow_addr,
int num_wqebbs, u16 idx)
{
void *wqebb_addr;
int i;
for (i = 0; i < num_wqebbs; i++, idx++) {
idx = MASKED_WQE_IDX(wq, idx);
wqebb_addr = WQ_PAGE_ADDR(wq, idx) +
WQE_PAGE_OFF(wq, idx);
memcpy(shadow_addr, wqebb_addr, wq->wqebb_size);
shadow_addr += wq->wqebb_size;
}
}
static void copy_wqe_from_shadow(struct hinic_wq *wq, void *shadow_addr,
int num_wqebbs, u16 idx)
{
void *wqebb_addr;
int i;
for (i = 0; i < num_wqebbs; i++, idx++) {
idx = MASKED_WQE_IDX(wq, idx);
wqebb_addr = WQ_PAGE_ADDR(wq, idx) +
WQE_PAGE_OFF(wq, idx);
memcpy(wqebb_addr, shadow_addr, wq->wqebb_size);
shadow_addr += wq->wqebb_size;
}
}
/**
* hinic_get_wqe - get wqe ptr in the current pi and update the pi
* @wq: wq to get wqe from
* @wqe_size: wqe size
* @prod_idx: returned pi
*
* Return wqe pointer
**/
struct hinic_hw_wqe *hinic_get_wqe(struct hinic_wq *wq, unsigned int wqe_size,
u16 *prod_idx)
{
int curr_pg, end_pg, num_wqebbs;
u16 curr_prod_idx, end_prod_idx;
*prod_idx = MASKED_WQE_IDX(wq, atomic_read(&wq->prod_idx));
num_wqebbs = ALIGN(wqe_size, wq->wqebb_size) / wq->wqebb_size;
if (atomic_sub_return(num_wqebbs, &wq->delta) <= 0) {
atomic_add(num_wqebbs, &wq->delta);
return ERR_PTR(-EBUSY);
}
end_prod_idx = atomic_add_return(num_wqebbs, &wq->prod_idx);
end_prod_idx = MASKED_WQE_IDX(wq, end_prod_idx);
curr_prod_idx = end_prod_idx - num_wqebbs;
curr_prod_idx = MASKED_WQE_IDX(wq, curr_prod_idx);
/* end prod index points to the next wqebb, therefore minus 1 */
end_prod_idx = MASKED_WQE_IDX(wq, end_prod_idx - 1);
curr_pg = WQE_PAGE_NUM(wq, curr_prod_idx);
end_pg = WQE_PAGE_NUM(wq, end_prod_idx);
*prod_idx = curr_prod_idx;
if (curr_pg != end_pg) {
void *shadow_addr = &wq->shadow_wqe[curr_pg * wq->max_wqe_size];
copy_wqe_to_shadow(wq, shadow_addr, num_wqebbs, *prod_idx);
wq->shadow_idx[curr_pg] = *prod_idx;
return shadow_addr;
}
return WQ_PAGE_ADDR(wq, *prod_idx) + WQE_PAGE_OFF(wq, *prod_idx);
}
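/* Producer-side sketch (illustrative): when the requested wqebbs cross a
 * page boundary, hinic_get_wqe() returns a pointer into the shadow area,
 * so the caller must commit the WQE back with hinic_write_wqe():
 *
 *	struct hinic_hw_wqe *wqe;
 *	u16 pi;
 *
 *	wqe = hinic_get_wqe(wq, wqe_size, &pi);
 *	if (IS_ERR(wqe))
 *		return PTR_ERR(wqe);        (-EBUSY: queue is full)
 *	... fill in the WQE fields ...
 *	hinic_write_wqe(wq, wqe, wqe_size); (a no-op for non-shadow WQEs)
 */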
/**
* hinic_put_wqe - release the wqebbs of a wqe so the space can be reused
* @wq: wq to return the wqe to
* @wqe_size: wqe size
**/
void hinic_put_wqe(struct hinic_wq *wq, unsigned int wqe_size)
{
int num_wqebbs = ALIGN(wqe_size, wq->wqebb_size) / wq->wqebb_size;
atomic_add(num_wqebbs, &wq->cons_idx);
atomic_add(num_wqebbs, &wq->delta);
}
/**
* hinic_read_wqe - read wqe ptr in the current ci
* @wq: wq to read from
* @wqe_size: wqe size
* @cons_idx: returned ci
*
* Return wqe pointer
**/
struct hinic_hw_wqe *hinic_read_wqe(struct hinic_wq *wq, unsigned int wqe_size,
u16 *cons_idx)
{
int num_wqebbs = ALIGN(wqe_size, wq->wqebb_size) / wq->wqebb_size;
u16 curr_cons_idx, end_cons_idx;
int curr_pg, end_pg;
if ((atomic_read(&wq->delta) + num_wqebbs) > wq->q_depth)
return ERR_PTR(-EBUSY);
curr_cons_idx = atomic_read(&wq->cons_idx);
curr_cons_idx = MASKED_WQE_IDX(wq, curr_cons_idx);
end_cons_idx = MASKED_WQE_IDX(wq, curr_cons_idx + num_wqebbs - 1);
curr_pg = WQE_PAGE_NUM(wq, curr_cons_idx);
end_pg = WQE_PAGE_NUM(wq, end_cons_idx);
*cons_idx = curr_cons_idx;
if (curr_pg != end_pg) {
void *shadow_addr = &wq->shadow_wqe[curr_pg * wq->max_wqe_size];
copy_wqe_to_shadow(wq, shadow_addr, num_wqebbs, *cons_idx);
return shadow_addr;
}
return WQ_PAGE_ADDR(wq, *cons_idx) + WQE_PAGE_OFF(wq, *cons_idx);
}
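/* Consumer-side sketch (illustrative): a completion is read at the current
 * ci and its wqebbs are released with hinic_put_wqe() once processed:
 *
 *	struct hinic_hw_wqe *wqe;
 *	u16 ci;
 *
 *	wqe = hinic_read_wqe(wq, wqe_size, &ci);
 *	if (IS_ERR(wqe))
 *		return;                     (-EBUSY: nothing to consume)
 *	... process the completed WQE ...
 *	hinic_put_wqe(wq, wqe_size);        (advances ci, frees the wqebbs)
 */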
/**
* hinic_read_wqe_direct - read wqe directly from ci position
* @wq: wq
* @cons_idx: ci position
*
* Return wqe
**/
struct hinic_hw_wqe *hinic_read_wqe_direct(struct hinic_wq *wq, u16 cons_idx)
{
return WQ_PAGE_ADDR(wq, cons_idx) + WQE_PAGE_OFF(wq, cons_idx);
}
/**
* wqe_shadow - check if a wqe is shadow
* @wq: wq of the wqe
* @wqe: the wqe for shadow checking
*
* Return true - shadow wqe, false - not a shadow wqe
**/
static inline bool wqe_shadow(struct hinic_wq *wq, struct hinic_hw_wqe *wqe)
{
size_t wqe_shadow_size = wq->num_q_pages * wq->max_wqe_size;
return WQE_IN_RANGE(wqe, wq->shadow_wqe,
&wq->shadow_wqe[wqe_shadow_size]);
}
/**
* hinic_write_wqe - write the wqe to the wq
* @wq: wq to write wqe to
* @wqe: wqe to write
* @wqe_size: wqe size
**/
void hinic_write_wqe(struct hinic_wq *wq, struct hinic_hw_wqe *wqe,
unsigned int wqe_size)
{
int curr_pg, num_wqebbs;
void *shadow_addr;
u16 prod_idx;
if (wqe_shadow(wq, wqe)) {
curr_pg = WQE_SHADOW_PAGE(wq, wqe);
prod_idx = wq->shadow_idx[curr_pg];
num_wqebbs = ALIGN(wqe_size, wq->wqebb_size) / wq->wqebb_size;
shadow_addr = &wq->shadow_wqe[curr_pg * wq->max_wqe_size];
copy_wqe_from_shadow(wq, shadow_addr, num_wqebbs, prod_idx);
}
}
/*
* Huawei HiNIC PCI Express Linux driver
* Copyright(c) 2017 Huawei Technologies Co., Ltd
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* for more details.
*
*/
#ifndef HINIC_HW_WQ_H
#define HINIC_HW_WQ_H
#include <linux/types.h>
#include <linux/semaphore.h>
#include <linux/atomic.h>
#include "hinic_hw_if.h"
#include "hinic_hw_wqe.h"
struct hinic_free_block {
int page_idx;
int block_idx;
};
struct hinic_wq {
struct hinic_hwif *hwif;
int page_idx;
int block_idx;
u16 wqebb_size;
u16 wq_page_size;
u16 q_depth;
u16 max_wqe_size;
u16 num_wqebbs_per_page;
/* The addresses are 64 bit in the HW */
u64 block_paddr;
void **shadow_block_vaddr;
u64 *block_vaddr;
int num_q_pages;
u8 *shadow_wqe;
u16 *shadow_idx;
atomic_t cons_idx;
atomic_t prod_idx;
atomic_t delta;
u16 mask;
};
struct hinic_wqs {
struct hinic_hwif *hwif;
int num_pages;
/* The addresses are 64 bit in the HW */
u64 *page_paddr;
u64 **page_vaddr;
void ***shadow_page_vaddr;
struct hinic_free_block *free_blocks;
int alloc_blk_pos;
int return_blk_pos;
int num_free_blks;
/* Lock for getting a free block from the WQ set */
struct semaphore alloc_blocks_lock;
};
struct hinic_cmdq_pages {
/* The addresses are 64 bit in the HW */
u64 page_paddr;
u64 *page_vaddr;
void **shadow_page_vaddr;
struct hinic_hwif *hwif;
};
int hinic_wqs_cmdq_alloc(struct hinic_cmdq_pages *cmdq_pages,
struct hinic_wq *wq, struct hinic_hwif *hwif,
int cmdq_blocks, u16 wqebb_size, u16 wq_page_size,
u16 q_depth, u16 max_wqe_size);
void hinic_wqs_cmdq_free(struct hinic_cmdq_pages *cmdq_pages,
struct hinic_wq *wq, int cmdq_blocks);
int hinic_wqs_alloc(struct hinic_wqs *wqs, int num_wqs,
struct hinic_hwif *hwif);
void hinic_wqs_free(struct hinic_wqs *wqs);
int hinic_wq_allocate(struct hinic_wqs *wqs, struct hinic_wq *wq,
u16 wqebb_size, u16 wq_page_size, u16 q_depth,
u16 max_wqe_size);
void hinic_wq_free(struct hinic_wqs *wqs, struct hinic_wq *wq);
struct hinic_hw_wqe *hinic_get_wqe(struct hinic_wq *wq, unsigned int wqe_size,
u16 *prod_idx);
void hinic_put_wqe(struct hinic_wq *wq, unsigned int wqe_size);
struct hinic_hw_wqe *hinic_read_wqe(struct hinic_wq *wq, unsigned int wqe_size,
u16 *cons_idx);
struct hinic_hw_wqe *hinic_read_wqe_direct(struct hinic_wq *wq, u16 cons_idx);
void hinic_write_wqe(struct hinic_wq *wq, struct hinic_hw_wqe *wqe,
unsigned int wqe_size);
#endif
/*
* Huawei HiNIC PCI Express Linux driver
* Copyright(c) 2017 Huawei Technologies Co., Ltd
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* for more details.
*
*/
#ifndef HINIC_HW_WQE_H
#define HINIC_HW_WQE_H
#include "hinic_common.h"
#define HINIC_CMDQ_CTRL_PI_SHIFT 0
#define HINIC_CMDQ_CTRL_CMD_SHIFT 16
#define HINIC_CMDQ_CTRL_MOD_SHIFT 24
#define HINIC_CMDQ_CTRL_ACK_TYPE_SHIFT 29
#define HINIC_CMDQ_CTRL_HW_BUSY_BIT_SHIFT 31
#define HINIC_CMDQ_CTRL_PI_MASK 0xFFFF
#define HINIC_CMDQ_CTRL_CMD_MASK 0xFF
#define HINIC_CMDQ_CTRL_MOD_MASK 0x1F
#define HINIC_CMDQ_CTRL_ACK_TYPE_MASK 0x3
#define HINIC_CMDQ_CTRL_HW_BUSY_BIT_MASK 0x1
#define HINIC_CMDQ_CTRL_SET(val, member) \
(((u32)(val) & HINIC_CMDQ_CTRL_##member##_MASK) \
<< HINIC_CMDQ_CTRL_##member##_SHIFT)
#define HINIC_CMDQ_CTRL_GET(val, member) \
(((val) >> HINIC_CMDQ_CTRL_##member##_SHIFT) \
& HINIC_CMDQ_CTRL_##member##_MASK)
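/* Illustrative use of the SET/GET helpers (values are placeholders): a
 * 32-bit ctrl word is composed by masking each field and shifting it into
 * place, and decomposed the same way in reverse:
 *
 *	u32 ctrl = HINIC_CMDQ_CTRL_SET(prod_idx, PI) |
 *		   HINIC_CMDQ_CTRL_SET(cmd, CMD) |
 *		   HINIC_CMDQ_CTRL_SET(mod, MOD);
 *
 *	u16 pi = HINIC_CMDQ_CTRL_GET(ctrl, PI);
 */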
#define HINIC_CMDQ_WQE_HEADER_BUFDESC_LEN_SHIFT 0
#define HINIC_CMDQ_WQE_HEADER_COMPLETE_FMT_SHIFT 15
#define HINIC_CMDQ_WQE_HEADER_DATA_FMT_SHIFT 22
#define HINIC_CMDQ_WQE_HEADER_COMPLETE_REQ_SHIFT 23
#define HINIC_CMDQ_WQE_HEADER_COMPLETE_SECT_LEN_SHIFT 27
#define HINIC_CMDQ_WQE_HEADER_CTRL_LEN_SHIFT 29
#define HINIC_CMDQ_WQE_HEADER_TOGGLED_WRAPPED_SHIFT 31
#define HINIC_CMDQ_WQE_HEADER_BUFDESC_LEN_MASK 0xFF
#define HINIC_CMDQ_WQE_HEADER_COMPLETE_FMT_MASK 0x1
#define HINIC_CMDQ_WQE_HEADER_DATA_FMT_MASK 0x1
#define HINIC_CMDQ_WQE_HEADER_COMPLETE_REQ_MASK 0x1
#define HINIC_CMDQ_WQE_HEADER_COMPLETE_SECT_LEN_MASK 0x3
#define HINIC_CMDQ_WQE_HEADER_CTRL_LEN_MASK 0x3
#define HINIC_CMDQ_WQE_HEADER_TOGGLED_WRAPPED_MASK 0x1
#define HINIC_CMDQ_WQE_HEADER_SET(val, member) \
(((u32)(val) & HINIC_CMDQ_WQE_HEADER_##member##_MASK) \
<< HINIC_CMDQ_WQE_HEADER_##member##_SHIFT)
#define HINIC_CMDQ_WQE_HEADER_GET(val, member) \
(((val) >> HINIC_CMDQ_WQE_HEADER_##member##_SHIFT) \
& HINIC_CMDQ_WQE_HEADER_##member##_MASK)
#define HINIC_SQ_CTRL_BUFDESC_SECT_LEN_SHIFT 0
#define HINIC_SQ_CTRL_TASKSECT_LEN_SHIFT 16
#define HINIC_SQ_CTRL_DATA_FORMAT_SHIFT 22
#define HINIC_SQ_CTRL_LEN_SHIFT 29
#define HINIC_SQ_CTRL_BUFDESC_SECT_LEN_MASK 0xFF
#define HINIC_SQ_CTRL_TASKSECT_LEN_MASK 0x1F
#define HINIC_SQ_CTRL_DATA_FORMAT_MASK 0x1
#define HINIC_SQ_CTRL_LEN_MASK 0x3
#define HINIC_SQ_CTRL_QUEUE_INFO_MSS_SHIFT 13
#define HINIC_SQ_CTRL_QUEUE_INFO_MSS_MASK 0x3FFF
#define HINIC_SQ_CTRL_SET(val, member) \
(((u32)(val) & HINIC_SQ_CTRL_##member##_MASK) \
<< HINIC_SQ_CTRL_##member##_SHIFT)
#define HINIC_SQ_CTRL_GET(val, member) \
(((val) >> HINIC_SQ_CTRL_##member##_SHIFT) \
& HINIC_SQ_CTRL_##member##_MASK)
#define HINIC_SQ_TASK_INFO0_L2HDR_LEN_SHIFT 0
#define HINIC_SQ_TASK_INFO0_L4_OFFLOAD_SHIFT 8
#define HINIC_SQ_TASK_INFO0_INNER_L3TYPE_SHIFT 10
#define HINIC_SQ_TASK_INFO0_VLAN_OFFLOAD_SHIFT 12
#define HINIC_SQ_TASK_INFO0_PARSE_FLAG_SHIFT 13
/* 1 bit reserved */
#define HINIC_SQ_TASK_INFO0_TSO_FLAG_SHIFT 15
#define HINIC_SQ_TASK_INFO0_VLAN_TAG_SHIFT 16
#define HINIC_SQ_TASK_INFO0_L2HDR_LEN_MASK 0xFF
#define HINIC_SQ_TASK_INFO0_L4_OFFLOAD_MASK 0x3
#define HINIC_SQ_TASK_INFO0_INNER_L3TYPE_MASK 0x3
#define HINIC_SQ_TASK_INFO0_VLAN_OFFLOAD_MASK 0x1
#define HINIC_SQ_TASK_INFO0_PARSE_FLAG_MASK 0x1
/* 1 bit reserved */
#define HINIC_SQ_TASK_INFO0_TSO_FLAG_MASK 0x1
#define HINIC_SQ_TASK_INFO0_VLAN_TAG_MASK 0xFFFF
#define HINIC_SQ_TASK_INFO0_SET(val, member) \
(((u32)(val) & HINIC_SQ_TASK_INFO0_##member##_MASK) << \
HINIC_SQ_TASK_INFO0_##member##_SHIFT)
/* 8 bits reserved */
#define HINIC_SQ_TASK_INFO1_MEDIA_TYPE_SHIFT 8
#define HINIC_SQ_TASK_INFO1_INNER_L4_LEN_SHIFT 16
#define HINIC_SQ_TASK_INFO1_INNER_L3_LEN_SHIFT 24
/* 8 bits reserved */
#define HINIC_SQ_TASK_INFO1_MEDIA_TYPE_MASK 0xFF
#define HINIC_SQ_TASK_INFO1_INNER_L4_LEN_MASK 0xFF
#define HINIC_SQ_TASK_INFO1_INNER_L3_LEN_MASK 0xFF
#define HINIC_SQ_TASK_INFO1_SET(val, member) \
(((u32)(val) & HINIC_SQ_TASK_INFO1_##member##_MASK) << \
HINIC_SQ_TASK_INFO1_##member##_SHIFT)
#define HINIC_SQ_TASK_INFO2_TUNNEL_L4_LEN_SHIFT 0
#define HINIC_SQ_TASK_INFO2_OUTER_L3_LEN_SHIFT 12
#define HINIC_SQ_TASK_INFO2_TUNNEL_L4TYPE_SHIFT 19
/* 1 bit reserved */
#define HINIC_SQ_TASK_INFO2_OUTER_L3TYPE_SHIFT 22
/* 8 bits reserved */
#define HINIC_SQ_TASK_INFO2_TUNNEL_L4_LEN_MASK 0xFFF
#define HINIC_SQ_TASK_INFO2_OUTER_L3_LEN_MASK 0x7F
#define HINIC_SQ_TASK_INFO2_TUNNEL_L4TYPE_MASK 0x3
/* 1 bit reserved */
#define HINIC_SQ_TASK_INFO2_OUTER_L3TYPE_MASK 0x3
/* 8 bits reserved */
#define HINIC_SQ_TASK_INFO2_SET(val, member) \
(((u32)(val) & HINIC_SQ_TASK_INFO2_##member##_MASK) << \
HINIC_SQ_TASK_INFO2_##member##_SHIFT)
/* 31 bits reserved */
#define HINIC_SQ_TASK_INFO4_L2TYPE_SHIFT 31
/* 31 bits reserved */
#define HINIC_SQ_TASK_INFO4_L2TYPE_MASK 0x1
#define HINIC_SQ_TASK_INFO4_SET(val, member) \
(((u32)(val) & HINIC_SQ_TASK_INFO4_##member##_MASK) << \
HINIC_SQ_TASK_INFO4_##member##_SHIFT)
#define HINIC_RQ_CQE_STATUS_RXDONE_SHIFT 31
#define HINIC_RQ_CQE_STATUS_RXDONE_MASK 0x1
#define HINIC_RQ_CQE_STATUS_GET(val, member) \
(((val) >> HINIC_RQ_CQE_STATUS_##member##_SHIFT) & \
HINIC_RQ_CQE_STATUS_##member##_MASK)
#define HINIC_RQ_CQE_STATUS_CLEAR(val, member) \
((val) & (~(HINIC_RQ_CQE_STATUS_##member##_MASK << \
HINIC_RQ_CQE_STATUS_##member##_SHIFT)))
#define HINIC_RQ_CQE_SGE_LEN_SHIFT 16
#define HINIC_RQ_CQE_SGE_LEN_MASK 0xFFFF
#define HINIC_RQ_CQE_SGE_GET(val, member) \
(((val) >> HINIC_RQ_CQE_SGE_##member##_SHIFT) & \
HINIC_RQ_CQE_SGE_##member##_MASK)
#define HINIC_RQ_CTRL_BUFDESC_SECT_LEN_SHIFT 0
#define HINIC_RQ_CTRL_COMPLETE_FORMAT_SHIFT 15
#define HINIC_RQ_CTRL_COMPLETE_LEN_SHIFT 27
#define HINIC_RQ_CTRL_LEN_SHIFT 29
#define HINIC_RQ_CTRL_BUFDESC_SECT_LEN_MASK 0xFF
#define HINIC_RQ_CTRL_COMPLETE_FORMAT_MASK 0x1
#define HINIC_RQ_CTRL_COMPLETE_LEN_MASK 0x3
#define HINIC_RQ_CTRL_LEN_MASK 0x3
#define HINIC_RQ_CTRL_SET(val, member) \
(((u32)(val) & HINIC_RQ_CTRL_##member##_MASK) << \
HINIC_RQ_CTRL_##member##_SHIFT)
#define HINIC_SQ_WQE_SIZE(nr_sges) \
(sizeof(struct hinic_sq_ctrl) + \
sizeof(struct hinic_sq_task) + \
(nr_sges) * sizeof(struct hinic_sq_bufdesc))
#define HINIC_SCMD_DATA_LEN 16
#define HINIC_MAX_SQ_BUFDESCS 17
#define HINIC_SQ_WQE_MAX_SIZE 320
#define HINIC_RQ_WQE_SIZE 32
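/* Worked example (illustrative): assuming struct hinic_sge from
 * hinic_common.h is three u32 words (12 bytes), the structs below give
 * sizeof(struct hinic_sq_ctrl) = 8, sizeof(struct hinic_sq_task) = 24 and
 * sizeof(struct hinic_sq_bufdesc) = 16, so
 *
 *	HINIC_SQ_WQE_SIZE(1)  = 8 + 24 +  1 * 16 =  48 bytes
 *	HINIC_SQ_WQE_SIZE(17) = 8 + 24 + 17 * 16 = 304 bytes
 *
 * and the largest WQE fits within HINIC_SQ_WQE_MAX_SIZE (320).
 */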
enum hinic_l4offload_type {
HINIC_L4_OFF_DISABLE = 0,
HINIC_TCP_OFFLOAD_ENABLE = 1,
HINIC_SCTP_OFFLOAD_ENABLE = 2,
HINIC_UDP_OFFLOAD_ENABLE = 3,
};
enum hinic_vlan_offload {
HINIC_VLAN_OFF_DISABLE = 0,
HINIC_VLAN_OFF_ENABLE = 1,
};
enum hinic_pkt_parsed {
HINIC_PKT_NOT_PARSED = 0,
HINIC_PKT_PARSED = 1,
};
enum hinic_outer_l3type {
HINIC_OUTER_L3TYPE_UNKNOWN = 0,
HINIC_OUTER_L3TYPE_IPV6 = 1,
HINIC_OUTER_L3TYPE_IPV4_NO_CHKSUM = 2,
HINIC_OUTER_L3TYPE_IPV4_CHKSUM = 3,
};
enum hinic_media_type {
HINIC_MEDIA_UNKNOWN = 0,
};
enum hinic_l2type {
HINIC_L2TYPE_ETH = 0,
};
enum hinic_tunnel_l4type {
HINIC_TUNNEL_L4TYPE_UNKNOWN = 0,
};
struct hinic_cmdq_header {
u32 header_info;
u32 saved_data;
};
struct hinic_status {
u32 status_info;
};
struct hinic_ctrl {
u32 ctrl_info;
};
struct hinic_sge_resp {
struct hinic_sge sge;
u32 rsvd;
};
struct hinic_cmdq_completion {
/* HW Format */
union {
struct hinic_sge_resp sge_resp;
u64 direct_resp;
};
};
struct hinic_scmd_bufdesc {
u32 buf_len;
u32 rsvd;
u8 data[HINIC_SCMD_DATA_LEN];
};
struct hinic_lcmd_bufdesc {
struct hinic_sge sge;
u32 rsvd1;
u64 rsvd2;
u64 rsvd3;
};
struct hinic_cmdq_wqe_scmd {
struct hinic_cmdq_header header;
u64 rsvd;
struct hinic_status status;
struct hinic_ctrl ctrl;
struct hinic_cmdq_completion completion;
struct hinic_scmd_bufdesc buf_desc;
};
struct hinic_cmdq_wqe_lcmd {
struct hinic_cmdq_header header;
struct hinic_status status;
struct hinic_ctrl ctrl;
struct hinic_cmdq_completion completion;
struct hinic_lcmd_bufdesc buf_desc;
};
struct hinic_cmdq_direct_wqe {
struct hinic_cmdq_wqe_scmd wqe_scmd;
};
struct hinic_cmdq_wqe {
/* HW Format */
union {
struct hinic_cmdq_direct_wqe direct_wqe;
struct hinic_cmdq_wqe_lcmd wqe_lcmd;
};
};
struct hinic_sq_ctrl {
u32 ctrl_info;
u32 queue_info;
};
struct hinic_sq_task {
u32 pkt_info0;
u32 pkt_info1;
u32 pkt_info2;
u32 ufo_v6_identify;
u32 pkt_info4;
u32 zero_pad;
};
struct hinic_sq_bufdesc {
struct hinic_sge sge;
u32 rsvd;
};
struct hinic_sq_wqe {
struct hinic_sq_ctrl ctrl;
struct hinic_sq_task task;
struct hinic_sq_bufdesc buf_descs[HINIC_MAX_SQ_BUFDESCS];
};
struct hinic_rq_cqe {
u32 status;
u32 len;
u32 rsvd2;
u32 rsvd3;
u32 rsvd4;
u32 rsvd5;
u32 rsvd6;
u32 rsvd7;
};
struct hinic_rq_ctrl {
u32 ctrl_info;
};
struct hinic_rq_cqe_sect {
struct hinic_sge sge;
u32 rsvd;
};
struct hinic_rq_bufdesc {
u32 hi_addr;
u32 lo_addr;
};
struct hinic_rq_wqe {
struct hinic_rq_ctrl ctrl;
u32 rsvd;
struct hinic_rq_cqe_sect cqe_sect;
struct hinic_rq_bufdesc buf_desc;
};
struct hinic_hw_wqe {
/* HW Format */
union {
struct hinic_cmdq_wqe cmdq_wqe;
struct hinic_sq_wqe sq_wqe;
struct hinic_rq_wqe rq_wqe;
};
};
#endif
/*
* Huawei HiNIC PCI Express Linux driver
* Copyright(c) 2017 Huawei Technologies Co., Ltd
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* for more details.
*
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
#include <linux/pci.h>
#include <linux/device.h>
#include <linux/errno.h>
#include <linux/types.h>
#include <linux/etherdevice.h>
#include <linux/netdevice.h>
#include <linux/slab.h>
#include <linux/if_vlan.h>
#include <linux/semaphore.h>
#include <linux/workqueue.h>
#include <net/ip.h>
#include <linux/bitops.h>
#include <linux/bitmap.h>
#include <linux/delay.h>
#include <linux/err.h>
#include "hinic_hw_qp.h"
#include "hinic_hw_dev.h"
#include "hinic_port.h"
#include "hinic_tx.h"
#include "hinic_rx.h"
#include "hinic_dev.h"
MODULE_AUTHOR("Huawei Technologies CO., Ltd");
MODULE_DESCRIPTION("Huawei Intelligent NIC driver");
MODULE_LICENSE("GPL");
static unsigned int tx_weight = 64;
module_param(tx_weight, uint, 0644);
MODULE_PARM_DESC(tx_weight, "Number of Tx packets for NAPI budget (default=64)");
static unsigned int rx_weight = 64;
module_param(rx_weight, uint, 0644);
MODULE_PARM_DESC(rx_weight, "Number of Rx packets for NAPI budget (default=64)");
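/* Example (illustrative): the NAPI weights are plain module parameters, so
 * they can be set when the module is loaded:
 *
 *	modprobe hinic tx_weight=128 rx_weight=128
 *
 * The values are copied into the device in nic_dev_init(), so a later
 * sysfs write only affects devices probed afterwards.
 */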
#define PCI_DEVICE_ID_HI1822_PF 0x1822
#define HINIC_WQ_NAME "hinic_dev"
#define MSG_ENABLE_DEFAULT (NETIF_MSG_DRV | NETIF_MSG_PROBE | \
NETIF_MSG_IFUP | \
NETIF_MSG_TX_ERR | NETIF_MSG_RX_ERR)
#define VLAN_BITMAP_SIZE(nic_dev) (ALIGN(VLAN_N_VID, 8) / 8)
#define work_to_rx_mode_work(work) \
container_of(work, struct hinic_rx_mode_work, work)
#define rx_mode_work_to_nic_dev(rx_mode_work) \
container_of(rx_mode_work, struct hinic_dev, rx_mode_work)
static int change_mac_addr(struct net_device *netdev, const u8 *addr);
static void set_link_speed(struct ethtool_link_ksettings *link_ksettings,
enum hinic_speed speed)
{
switch (speed) {
case HINIC_SPEED_10MB_LINK:
link_ksettings->base.speed = SPEED_10;
break;
case HINIC_SPEED_100MB_LINK:
link_ksettings->base.speed = SPEED_100;
break;
case HINIC_SPEED_1000MB_LINK:
link_ksettings->base.speed = SPEED_1000;
break;
case HINIC_SPEED_10GB_LINK:
link_ksettings->base.speed = SPEED_10000;
break;
case HINIC_SPEED_25GB_LINK:
link_ksettings->base.speed = SPEED_25000;
break;
case HINIC_SPEED_40GB_LINK:
link_ksettings->base.speed = SPEED_40000;
break;
case HINIC_SPEED_100GB_LINK:
link_ksettings->base.speed = SPEED_100000;
break;
default:
link_ksettings->base.speed = SPEED_UNKNOWN;
break;
}
}
static int hinic_get_link_ksettings(struct net_device *netdev,
struct ethtool_link_ksettings
*link_ksettings)
{
struct hinic_dev *nic_dev = netdev_priv(netdev);
enum hinic_port_link_state link_state;
struct hinic_port_cap port_cap;
int err;
ethtool_link_ksettings_zero_link_mode(link_ksettings, advertising);
ethtool_link_ksettings_add_link_mode(link_ksettings, supported,
Autoneg);
link_ksettings->base.speed = SPEED_UNKNOWN;
link_ksettings->base.autoneg = AUTONEG_DISABLE;
link_ksettings->base.duplex = DUPLEX_UNKNOWN;
err = hinic_port_get_cap(nic_dev, &port_cap);
if (err) {
netif_err(nic_dev, drv, netdev,
"Failed to get port capabilities\n");
return err;
}
err = hinic_port_link_state(nic_dev, &link_state);
if (err) {
netif_err(nic_dev, drv, netdev,
"Failed to get port link state\n");
return err;
}
if (link_state != HINIC_LINK_STATE_UP) {
netif_info(nic_dev, drv, netdev, "No link\n");
return err;
}
set_link_speed(link_ksettings, port_cap.speed);
if (!!(port_cap.autoneg_cap & HINIC_AUTONEG_SUPPORTED))
ethtool_link_ksettings_add_link_mode(link_ksettings,
advertising, Autoneg);
if (port_cap.autoneg_state == HINIC_AUTONEG_ACTIVE)
link_ksettings->base.autoneg = AUTONEG_ENABLE;
link_ksettings->base.duplex = (port_cap.duplex == HINIC_DUPLEX_FULL) ?
DUPLEX_FULL : DUPLEX_HALF;
return 0;
}
static void hinic_get_drvinfo(struct net_device *netdev,
struct ethtool_drvinfo *info)
{
struct hinic_dev *nic_dev = netdev_priv(netdev);
struct hinic_hwdev *hwdev = nic_dev->hwdev;
struct hinic_hwif *hwif = hwdev->hwif;
strlcpy(info->driver, HINIC_DRV_NAME, sizeof(info->driver));
strlcpy(info->bus_info, pci_name(hwif->pdev), sizeof(info->bus_info));
}
static void hinic_get_ringparam(struct net_device *netdev,
struct ethtool_ringparam *ring)
{
ring->rx_max_pending = HINIC_RQ_DEPTH;
ring->tx_max_pending = HINIC_SQ_DEPTH;
ring->rx_pending = HINIC_RQ_DEPTH;
ring->tx_pending = HINIC_SQ_DEPTH;
}
static void hinic_get_channels(struct net_device *netdev,
struct ethtool_channels *channels)
{
struct hinic_dev *nic_dev = netdev_priv(netdev);
struct hinic_hwdev *hwdev = nic_dev->hwdev;
channels->max_rx = hwdev->nic_cap.max_qps;
channels->max_tx = hwdev->nic_cap.max_qps;
channels->max_other = 0;
channels->max_combined = 0;
channels->rx_count = hinic_hwdev_num_qps(hwdev);
channels->tx_count = hinic_hwdev_num_qps(hwdev);
channels->other_count = 0;
channels->combined_count = 0;
}
static const struct ethtool_ops hinic_ethtool_ops = {
.get_link_ksettings = hinic_get_link_ksettings,
.get_drvinfo = hinic_get_drvinfo,
.get_link = ethtool_op_get_link,
.get_ringparam = hinic_get_ringparam,
.get_channels = hinic_get_channels,
};
static void update_rx_stats(struct hinic_dev *nic_dev, struct hinic_rxq *rxq)
{
struct hinic_rxq_stats *nic_rx_stats = &nic_dev->rx_stats;
struct hinic_rxq_stats rx_stats;
u64_stats_init(&rx_stats.syncp);
hinic_rxq_get_stats(rxq, &rx_stats);
u64_stats_update_begin(&nic_rx_stats->syncp);
nic_rx_stats->bytes += rx_stats.bytes;
nic_rx_stats->pkts += rx_stats.pkts;
u64_stats_update_end(&nic_rx_stats->syncp);
hinic_rxq_clean_stats(rxq);
}
static void update_tx_stats(struct hinic_dev *nic_dev, struct hinic_txq *txq)
{
struct hinic_txq_stats *nic_tx_stats = &nic_dev->tx_stats;
struct hinic_txq_stats tx_stats;
u64_stats_init(&tx_stats.syncp);
hinic_txq_get_stats(txq, &tx_stats);
u64_stats_update_begin(&nic_tx_stats->syncp);
nic_tx_stats->bytes += tx_stats.bytes;
nic_tx_stats->pkts += tx_stats.pkts;
nic_tx_stats->tx_busy += tx_stats.tx_busy;
nic_tx_stats->tx_wake += tx_stats.tx_wake;
nic_tx_stats->tx_dropped += tx_stats.tx_dropped;
u64_stats_update_end(&nic_tx_stats->syncp);
hinic_txq_clean_stats(txq);
}
static void update_nic_stats(struct hinic_dev *nic_dev)
{
int i, num_qps = hinic_hwdev_num_qps(nic_dev->hwdev);
for (i = 0; i < num_qps; i++)
update_rx_stats(nic_dev, &nic_dev->rxqs[i]);
for (i = 0; i < num_qps; i++)
update_tx_stats(nic_dev, &nic_dev->txqs[i]);
}
/**
* create_txqs - Create the Logical Tx Queues of specific NIC device
* @nic_dev: the specific NIC device
*
* Return 0 - Success, negative - Failure
**/
static int create_txqs(struct hinic_dev *nic_dev)
{
int err, i, j, num_txqs = hinic_hwdev_num_qps(nic_dev->hwdev);
struct net_device *netdev = nic_dev->netdev;
size_t txq_size;
if (nic_dev->txqs)
return -EINVAL;
txq_size = num_txqs * sizeof(*nic_dev->txqs);
nic_dev->txqs = devm_kzalloc(&netdev->dev, txq_size, GFP_KERNEL);
if (!nic_dev->txqs)
return -ENOMEM;
for (i = 0; i < num_txqs; i++) {
struct hinic_sq *sq = hinic_hwdev_get_sq(nic_dev->hwdev, i);
err = hinic_init_txq(&nic_dev->txqs[i], sq, netdev);
if (err) {
netif_err(nic_dev, drv, netdev,
"Failed to init Txq\n");
goto err_init_txq;
}
}
return 0;
err_init_txq:
for (j = 0; j < i; j++)
hinic_clean_txq(&nic_dev->txqs[j]);
devm_kfree(&netdev->dev, nic_dev->txqs);
return err;
}
/**
* free_txqs - Free the Logical Tx Queues of specific NIC device
* @nic_dev: the specific NIC device
**/
static void free_txqs(struct hinic_dev *nic_dev)
{
int i, num_txqs = hinic_hwdev_num_qps(nic_dev->hwdev);
struct net_device *netdev = nic_dev->netdev;
if (!nic_dev->txqs)
return;
for (i = 0; i < num_txqs; i++)
hinic_clean_txq(&nic_dev->txqs[i]);
devm_kfree(&netdev->dev, nic_dev->txqs);
nic_dev->txqs = NULL;
}
/**
* create_rxqs - Create the Logical Rx Queues of specific NIC device
* @nic_dev: the specific NIC device
*
* Return 0 - Success, negative - Failure
**/
static int create_rxqs(struct hinic_dev *nic_dev)
{
int err, i, j, num_rxqs = hinic_hwdev_num_qps(nic_dev->hwdev);
struct net_device *netdev = nic_dev->netdev;
size_t rxq_size;
if (nic_dev->rxqs)
return -EINVAL;
rxq_size = num_rxqs * sizeof(*nic_dev->rxqs);
nic_dev->rxqs = devm_kzalloc(&netdev->dev, rxq_size, GFP_KERNEL);
if (!nic_dev->rxqs)
return -ENOMEM;
for (i = 0; i < num_rxqs; i++) {
struct hinic_rq *rq = hinic_hwdev_get_rq(nic_dev->hwdev, i);
err = hinic_init_rxq(&nic_dev->rxqs[i], rq, netdev);
if (err) {
netif_err(nic_dev, drv, netdev,
"Failed to init rxq\n");
goto err_init_rxq;
}
}
return 0;
err_init_rxq:
for (j = 0; j < i; j++)
hinic_clean_rxq(&nic_dev->rxqs[j]);
devm_kfree(&netdev->dev, nic_dev->rxqs);
return err;
}
/**
* free_rxqs - Free the Logical Rx Queues of specific NIC device
* @nic_dev: the specific NIC device
**/
static void free_rxqs(struct hinic_dev *nic_dev)
{
int i, num_rxqs = hinic_hwdev_num_qps(nic_dev->hwdev);
struct net_device *netdev = nic_dev->netdev;
if (!nic_dev->rxqs)
return;
for (i = 0; i < num_rxqs; i++)
hinic_clean_rxq(&nic_dev->rxqs[i]);
devm_kfree(&netdev->dev, nic_dev->rxqs);
nic_dev->rxqs = NULL;
}
static int hinic_open(struct net_device *netdev)
{
struct hinic_dev *nic_dev = netdev_priv(netdev);
enum hinic_port_link_state link_state;
int err, ret, num_qps;
if (!(nic_dev->flags & HINIC_INTF_UP)) {
err = hinic_hwdev_ifup(nic_dev->hwdev);
if (err) {
netif_err(nic_dev, drv, netdev,
"Failed - HW interface up\n");
return err;
}
}
err = create_txqs(nic_dev);
if (err) {
netif_err(nic_dev, drv, netdev,
"Failed to create Tx queues\n");
goto err_create_txqs;
}
err = create_rxqs(nic_dev);
if (err) {
netif_err(nic_dev, drv, netdev,
"Failed to create Rx queues\n");
goto err_create_rxqs;
}
num_qps = hinic_hwdev_num_qps(nic_dev->hwdev);
netif_set_real_num_tx_queues(netdev, num_qps);
netif_set_real_num_rx_queues(netdev, num_qps);
err = hinic_port_set_state(nic_dev, HINIC_PORT_ENABLE);
if (err) {
netif_err(nic_dev, drv, netdev,
"Failed to set port state\n");
goto err_port_state;
}
err = hinic_port_set_func_state(nic_dev, HINIC_FUNC_PORT_ENABLE);
if (err) {
netif_err(nic_dev, drv, netdev,
"Failed to set func port state\n");
goto err_func_port_state;
}
/* Wait up to 3 sec for the link state to settle after port enable */
msleep(3000);
down(&nic_dev->mgmt_lock);
err = hinic_port_link_state(nic_dev, &link_state);
if (err) {
netif_err(nic_dev, drv, netdev, "Failed to get link state\n");
goto err_port_link;
}
if (link_state == HINIC_LINK_STATE_UP)
nic_dev->flags |= HINIC_LINK_UP;
nic_dev->flags |= HINIC_INTF_UP;
if ((nic_dev->flags & (HINIC_LINK_UP | HINIC_INTF_UP)) ==
(HINIC_LINK_UP | HINIC_INTF_UP)) {
netif_info(nic_dev, drv, netdev, "link + intf UP\n");
netif_carrier_on(netdev);
netif_tx_wake_all_queues(netdev);
}
up(&nic_dev->mgmt_lock);
netif_info(nic_dev, drv, netdev, "HINIC_INTF is UP\n");
return 0;
err_port_link:
up(&nic_dev->mgmt_lock);
ret = hinic_port_set_func_state(nic_dev, HINIC_FUNC_PORT_DISABLE);
if (ret)
netif_warn(nic_dev, drv, netdev,
"Failed to revert func port state\n");
err_func_port_state:
ret = hinic_port_set_state(nic_dev, HINIC_PORT_DISABLE);
if (ret)
netif_warn(nic_dev, drv, netdev,
"Failed to revert port state\n");
err_port_state:
free_rxqs(nic_dev);
err_create_rxqs:
free_txqs(nic_dev);
err_create_txqs:
if (!(nic_dev->flags & HINIC_INTF_UP))
hinic_hwdev_ifdown(nic_dev->hwdev);
return err;
}
static int hinic_close(struct net_device *netdev)
{
struct hinic_dev *nic_dev = netdev_priv(netdev);
unsigned int flags;
int err;
down(&nic_dev->mgmt_lock);
flags = nic_dev->flags;
nic_dev->flags &= ~HINIC_INTF_UP;
netif_carrier_off(netdev);
netif_tx_disable(netdev);
update_nic_stats(nic_dev);
up(&nic_dev->mgmt_lock);
err = hinic_port_set_func_state(nic_dev, HINIC_FUNC_PORT_DISABLE);
if (err) {
netif_err(nic_dev, drv, netdev,
"Failed to set func port state\n");
nic_dev->flags |= (flags & HINIC_INTF_UP);
return err;
}
err = hinic_port_set_state(nic_dev, HINIC_PORT_DISABLE);
if (err) {
netif_err(nic_dev, drv, netdev, "Failed to set port state\n");
nic_dev->flags |= (flags & HINIC_INTF_UP);
return err;
}
free_rxqs(nic_dev);
free_txqs(nic_dev);
if (flags & HINIC_INTF_UP)
hinic_hwdev_ifdown(nic_dev->hwdev);
netif_info(nic_dev, drv, netdev, "HINIC_INTF is DOWN\n");
return 0;
}
static int hinic_change_mtu(struct net_device *netdev, int new_mtu)
{
struct hinic_dev *nic_dev = netdev_priv(netdev);
int err;
netif_info(nic_dev, drv, netdev, "set_mtu = %d\n", new_mtu);
err = hinic_port_set_mtu(nic_dev, new_mtu);
if (err)
netif_err(nic_dev, drv, netdev, "Failed to set port mtu\n");
else
netdev->mtu = new_mtu;
return err;
}
/**
* change_mac_addr - change the main mac address of network device
* @netdev: network device
* @addr: mac address to set
*
* Return 0 - Success, negative - Failure
**/
static int change_mac_addr(struct net_device *netdev, const u8 *addr)
{
struct hinic_dev *nic_dev = netdev_priv(netdev);
u16 vid = 0;
int err;
if (!is_valid_ether_addr(addr))
return -EADDRNOTAVAIL;
netif_info(nic_dev, drv, netdev, "change mac addr = %02x %02x %02x %02x %02x %02x\n",
addr[0], addr[1], addr[2], addr[3], addr[4], addr[5]);
down(&nic_dev->mgmt_lock);
do {
err = hinic_port_del_mac(nic_dev, netdev->dev_addr, vid);
if (err) {
netif_err(nic_dev, drv, netdev,
"Failed to delete mac\n");
break;
}
err = hinic_port_add_mac(nic_dev, addr, vid);
if (err) {
netif_err(nic_dev, drv, netdev, "Failed to add mac\n");
break;
}
vid = find_next_bit(nic_dev->vlan_bitmap, VLAN_N_VID, vid + 1);
} while (vid != VLAN_N_VID);
up(&nic_dev->mgmt_lock);
return err;
}
static int hinic_set_mac_addr(struct net_device *netdev, void *addr)
{
unsigned char new_mac[ETH_ALEN];
struct sockaddr *saddr = addr;
int err;
memcpy(new_mac, saddr->sa_data, ETH_ALEN);
err = change_mac_addr(netdev, new_mac);
if (!err)
memcpy(netdev->dev_addr, new_mac, ETH_ALEN);
return err;
}
/**
* add_mac_addr - add mac address to network device
* @netdev: network device
* @addr: mac address to add
*
* Return 0 - Success, negative - Failure
**/
static int add_mac_addr(struct net_device *netdev, const u8 *addr)
{
struct hinic_dev *nic_dev = netdev_priv(netdev);
u16 vid = 0;
int err;
if (!is_valid_ether_addr(addr))
return -EADDRNOTAVAIL;
netif_info(nic_dev, drv, netdev, "set mac addr = %02x %02x %02x %02x %02x %02x\n",
addr[0], addr[1], addr[2], addr[3], addr[4], addr[5]);
down(&nic_dev->mgmt_lock);
do {
err = hinic_port_add_mac(nic_dev, addr, vid);
if (err) {
netif_err(nic_dev, drv, netdev, "Failed to add mac\n");
break;
}
vid = find_next_bit(nic_dev->vlan_bitmap, VLAN_N_VID, vid + 1);
} while (vid != VLAN_N_VID);
up(&nic_dev->mgmt_lock);
return err;
}
/**
* remove_mac_addr - remove mac address from network device
* @netdev: network device
* @addr: mac address to remove
*
* Return 0 - Success, negative - Failure
**/
static int remove_mac_addr(struct net_device *netdev, const u8 *addr)
{
struct hinic_dev *nic_dev = netdev_priv(netdev);
u16 vid = 0;
int err;
if (!is_valid_ether_addr(addr))
return -EADDRNOTAVAIL;
netif_info(nic_dev, drv, netdev, "remove mac addr = %02x %02x %02x %02x %02x %02x\n",
addr[0], addr[1], addr[2], addr[3], addr[4], addr[5]);
down(&nic_dev->mgmt_lock);
do {
err = hinic_port_del_mac(nic_dev, addr, vid);
if (err) {
netif_err(nic_dev, drv, netdev,
"Failed to delete mac\n");
break;
}
vid = find_next_bit(nic_dev->vlan_bitmap, VLAN_N_VID, vid + 1);
} while (vid != VLAN_N_VID);
up(&nic_dev->mgmt_lock);
return err;
}
static int hinic_vlan_rx_add_vid(struct net_device *netdev,
__always_unused __be16 proto, u16 vid)
{
struct hinic_dev *nic_dev = netdev_priv(netdev);
int ret, err;
netif_info(nic_dev, drv, netdev, "add vid = %d\n", vid);
down(&nic_dev->mgmt_lock);
err = hinic_port_add_vlan(nic_dev, vid);
if (err) {
netif_err(nic_dev, drv, netdev, "Failed to add vlan\n");
goto err_vlan_add;
}
err = hinic_port_add_mac(nic_dev, netdev->dev_addr, vid);
if (err) {
netif_err(nic_dev, drv, netdev, "Failed to set mac\n");
goto err_add_mac;
}
bitmap_set(nic_dev->vlan_bitmap, vid, 1);
up(&nic_dev->mgmt_lock);
return 0;
err_add_mac:
ret = hinic_port_del_vlan(nic_dev, vid);
if (ret)
netif_err(nic_dev, drv, netdev,
"Failed to revert by removing vlan\n");
err_vlan_add:
up(&nic_dev->mgmt_lock);
return err;
}
static int hinic_vlan_rx_kill_vid(struct net_device *netdev,
__always_unused __be16 proto, u16 vid)
{
struct hinic_dev *nic_dev = netdev_priv(netdev);
int err;
netif_info(nic_dev, drv, netdev, "remove vid = %d\n", vid);
down(&nic_dev->mgmt_lock);
err = hinic_port_del_vlan(nic_dev, vid);
if (err) {
netif_err(nic_dev, drv, netdev, "Failed to delete vlan\n");
goto err_del_vlan;
}
bitmap_clear(nic_dev->vlan_bitmap, vid, 1);
up(&nic_dev->mgmt_lock);
return 0;
err_del_vlan:
up(&nic_dev->mgmt_lock);
return err;
}
static void set_rx_mode(struct work_struct *work)
{
struct hinic_rx_mode_work *rx_mode_work = work_to_rx_mode_work(work);
struct hinic_dev *nic_dev = rx_mode_work_to_nic_dev(rx_mode_work);
netif_info(nic_dev, drv, nic_dev->netdev, "set rx mode work\n");
hinic_port_set_rx_mode(nic_dev, rx_mode_work->rx_mode);
__dev_uc_sync(nic_dev->netdev, add_mac_addr, remove_mac_addr);
__dev_mc_sync(nic_dev->netdev, add_mac_addr, remove_mac_addr);
}
static void hinic_set_rx_mode(struct net_device *netdev)
{
struct hinic_dev *nic_dev = netdev_priv(netdev);
struct hinic_rx_mode_work *rx_mode_work;
u32 rx_mode;
rx_mode_work = &nic_dev->rx_mode_work;
rx_mode = HINIC_RX_MODE_UC |
HINIC_RX_MODE_MC |
HINIC_RX_MODE_BC;
if (netdev->flags & IFF_PROMISC)
rx_mode |= HINIC_RX_MODE_PROMISC;
else if (netdev->flags & IFF_ALLMULTI)
rx_mode |= HINIC_RX_MODE_MC_ALL;
rx_mode_work->rx_mode = rx_mode;
queue_work(nic_dev->workq, &rx_mode_work->work);
}
static void hinic_tx_timeout(struct net_device *netdev)
{
struct hinic_dev *nic_dev = netdev_priv(netdev);
netif_err(nic_dev, drv, netdev, "Tx timeout\n");
}
static void hinic_get_stats64(struct net_device *netdev,
struct rtnl_link_stats64 *stats)
{
struct hinic_dev *nic_dev = netdev_priv(netdev);
struct hinic_rxq_stats *nic_rx_stats;
struct hinic_txq_stats *nic_tx_stats;
nic_rx_stats = &nic_dev->rx_stats;
nic_tx_stats = &nic_dev->tx_stats;
down(&nic_dev->mgmt_lock);
if (nic_dev->flags & HINIC_INTF_UP)
update_nic_stats(nic_dev);
up(&nic_dev->mgmt_lock);
stats->rx_bytes = nic_rx_stats->bytes;
stats->rx_packets = nic_rx_stats->pkts;
stats->tx_bytes = nic_tx_stats->bytes;
stats->tx_packets = nic_tx_stats->pkts;
stats->tx_errors = nic_tx_stats->tx_dropped;
}
#ifdef CONFIG_NET_POLL_CONTROLLER
static void hinic_netpoll(struct net_device *netdev)
{
struct hinic_dev *nic_dev = netdev_priv(netdev);
int i, num_qps;
num_qps = hinic_hwdev_num_qps(nic_dev->hwdev);
for (i = 0; i < num_qps; i++) {
struct hinic_txq *txq = &nic_dev->txqs[i];
struct hinic_rxq *rxq = &nic_dev->rxqs[i];
napi_schedule(&txq->napi);
napi_schedule(&rxq->napi);
}
}
#endif
static const struct net_device_ops hinic_netdev_ops = {
.ndo_open = hinic_open,
.ndo_stop = hinic_close,
.ndo_change_mtu = hinic_change_mtu,
.ndo_set_mac_address = hinic_set_mac_addr,
.ndo_validate_addr = eth_validate_addr,
.ndo_vlan_rx_add_vid = hinic_vlan_rx_add_vid,
.ndo_vlan_rx_kill_vid = hinic_vlan_rx_kill_vid,
.ndo_set_rx_mode = hinic_set_rx_mode,
.ndo_start_xmit = hinic_xmit_frame,
.ndo_tx_timeout = hinic_tx_timeout,
.ndo_get_stats64 = hinic_get_stats64,
#ifdef CONFIG_NET_POLL_CONTROLLER
.ndo_poll_controller = hinic_netpoll,
#endif
};
static void netdev_features_init(struct net_device *netdev)
{
netdev->hw_features = NETIF_F_SG | NETIF_F_HIGHDMA;
netdev->vlan_features = netdev->hw_features;
netdev->features = netdev->hw_features | NETIF_F_HW_VLAN_CTAG_FILTER;
}
/**
* link_status_event_handler - link event handler
* @handle: nic device for the handler
* @buf_in: input buffer
* @in_size: input size
* @buf_out: output buffer
* @out_size: returned output size
**/
static void link_status_event_handler(void *handle, void *buf_in, u16 in_size,
void *buf_out, u16 *out_size)
{
struct hinic_port_link_status *link_status, *ret_link_status;
struct hinic_dev *nic_dev = handle;
link_status = buf_in;
if (link_status->link == HINIC_LINK_STATE_UP) {
down(&nic_dev->mgmt_lock);
nic_dev->flags |= HINIC_LINK_UP;
if ((nic_dev->flags & (HINIC_LINK_UP | HINIC_INTF_UP)) ==
(HINIC_LINK_UP | HINIC_INTF_UP)) {
netif_carrier_on(nic_dev->netdev);
netif_tx_wake_all_queues(nic_dev->netdev);
}
up(&nic_dev->mgmt_lock);
netif_info(nic_dev, drv, nic_dev->netdev, "HINIC_Link is UP\n");
} else {
down(&nic_dev->mgmt_lock);
nic_dev->flags &= ~HINIC_LINK_UP;
netif_carrier_off(nic_dev->netdev);
netif_tx_disable(nic_dev->netdev);
up(&nic_dev->mgmt_lock);
netif_info(nic_dev, drv, nic_dev->netdev, "HINIC_Link is DOWN\n");
}
ret_link_status = buf_out;
ret_link_status->status = 0;
*out_size = sizeof(*ret_link_status);
}
/**
* nic_dev_init - Initialize the NIC device
* @pdev: the NIC pci device
*
* Return 0 - Success, negative - Failure
**/
static int nic_dev_init(struct pci_dev *pdev)
{
struct hinic_rx_mode_work *rx_mode_work;
struct hinic_txq_stats *tx_stats;
struct hinic_rxq_stats *rx_stats;
struct hinic_dev *nic_dev;
struct net_device *netdev;
struct hinic_hwdev *hwdev;
int err, num_qps;
hwdev = hinic_init_hwdev(pdev);
if (IS_ERR(hwdev)) {
dev_err(&pdev->dev, "Failed to initialize HW device\n");
return PTR_ERR(hwdev);
}
num_qps = hinic_hwdev_num_qps(hwdev);
if (num_qps <= 0) {
dev_err(&pdev->dev, "Invalid number of QPS\n");
err = -EINVAL;
goto err_num_qps;
}
netdev = alloc_etherdev_mq(sizeof(*nic_dev), num_qps);
if (!netdev) {
dev_err(&pdev->dev, "Failed to allocate Ethernet device\n");
err = -ENOMEM;
goto err_alloc_etherdev;
}
netdev->netdev_ops = &hinic_netdev_ops;
netdev->ethtool_ops = &hinic_ethtool_ops;
nic_dev = netdev_priv(netdev);
nic_dev->netdev = netdev;
nic_dev->hwdev = hwdev;
nic_dev->msg_enable = MSG_ENABLE_DEFAULT;
nic_dev->flags = 0;
nic_dev->txqs = NULL;
nic_dev->rxqs = NULL;
nic_dev->tx_weight = tx_weight;
nic_dev->rx_weight = rx_weight;
sema_init(&nic_dev->mgmt_lock, 1);
tx_stats = &nic_dev->tx_stats;
rx_stats = &nic_dev->rx_stats;
u64_stats_init(&tx_stats->syncp);
u64_stats_init(&rx_stats->syncp);
nic_dev->vlan_bitmap = devm_kzalloc(&pdev->dev,
VLAN_BITMAP_SIZE(nic_dev),
GFP_KERNEL);
if (!nic_dev->vlan_bitmap) {
err = -ENOMEM;
goto err_vlan_bitmap;
}
nic_dev->workq = create_singlethread_workqueue(HINIC_WQ_NAME);
if (!nic_dev->workq) {
err = -ENOMEM;
goto err_workq;
}
pci_set_drvdata(pdev, netdev);
err = hinic_port_get_mac(nic_dev, netdev->dev_addr);
if (err)
dev_warn(&pdev->dev, "Failed to get mac address\n");
err = hinic_port_add_mac(nic_dev, netdev->dev_addr, 0);
if (err) {
dev_err(&pdev->dev, "Failed to add mac\n");
goto err_add_mac;
}
err = hinic_port_set_mtu(nic_dev, netdev->mtu);
if (err) {
dev_err(&pdev->dev, "Failed to set mtu\n");
goto err_set_mtu;
}
rx_mode_work = &nic_dev->rx_mode_work;
INIT_WORK(&rx_mode_work->work, set_rx_mode);
netdev_features_init(netdev);
netif_carrier_off(netdev);
hinic_hwdev_cb_register(nic_dev->hwdev, HINIC_MGMT_MSG_CMD_LINK_STATUS,
nic_dev, link_status_event_handler);
err = register_netdev(netdev);
if (err) {
dev_err(&pdev->dev, "Failed to register netdev\n");
goto err_reg_netdev;
}
return 0;
err_reg_netdev:
hinic_hwdev_cb_unregister(nic_dev->hwdev,
HINIC_MGMT_MSG_CMD_LINK_STATUS);
cancel_work_sync(&rx_mode_work->work);
err_set_mtu:
err_add_mac:
pci_set_drvdata(pdev, NULL);
destroy_workqueue(nic_dev->workq);
err_workq:
err_vlan_bitmap:
free_netdev(netdev);
err_alloc_etherdev:
err_num_qps:
hinic_free_hwdev(hwdev);
return err;
}
static int hinic_probe(struct pci_dev *pdev,
const struct pci_device_id *id)
{
int err = pci_enable_device(pdev);
if (err) {
dev_err(&pdev->dev, "Failed to enable PCI device\n");
return err;
}
err = pci_request_regions(pdev, HINIC_DRV_NAME);
if (err) {
dev_err(&pdev->dev, "Failed to request PCI regions\n");
goto err_pci_regions;
}
pci_set_master(pdev);
err = pci_set_dma_mask(pdev, DMA_BIT_MASK(64));
if (err) {
dev_warn(&pdev->dev, "Couldn't set 64-bit DMA mask\n");
err = pci_set_dma_mask(pdev, DMA_BIT_MASK(32));
if (err) {
dev_err(&pdev->dev, "Failed to set DMA mask\n");
goto err_dma_mask;
}
}
err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64));
if (err) {
dev_warn(&pdev->dev,
"Couldn't set 64-bit consistent DMA mask\n");
err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32));
if (err) {
dev_err(&pdev->dev,
"Failed to set consistent DMA mask\n");
goto err_dma_consistent_mask;
}
}
err = nic_dev_init(pdev);
if (err) {
dev_err(&pdev->dev, "Failed to initialize NIC device\n");
goto err_nic_dev_init;
}
dev_info(&pdev->dev, "HiNIC driver - probed\n");
return 0;
err_nic_dev_init:
err_dma_consistent_mask:
err_dma_mask:
pci_release_regions(pdev);
err_pci_regions:
pci_disable_device(pdev);
return err;
}
static void hinic_remove(struct pci_dev *pdev)
{
struct net_device *netdev = pci_get_drvdata(pdev);
struct hinic_dev *nic_dev = netdev_priv(netdev);
struct hinic_rx_mode_work *rx_mode_work;
unregister_netdev(netdev);
hinic_hwdev_cb_unregister(nic_dev->hwdev,
HINIC_MGMT_MSG_CMD_LINK_STATUS);
rx_mode_work = &nic_dev->rx_mode_work;
cancel_work_sync(&rx_mode_work->work);
pci_set_drvdata(pdev, NULL);
destroy_workqueue(nic_dev->workq);
hinic_free_hwdev(nic_dev->hwdev);
free_netdev(netdev);
pci_release_regions(pdev);
pci_disable_device(pdev);
dev_info(&pdev->dev, "HiNIC driver - removed\n");
}
static const struct pci_device_id hinic_pci_table[] = {
{ PCI_VDEVICE(HUAWEI, PCI_DEVICE_ID_HI1822_PF), 0},
{ 0, 0}
};
MODULE_DEVICE_TABLE(pci, hinic_pci_table);
static struct pci_driver hinic_driver = {
.name = HINIC_DRV_NAME,
.id_table = hinic_pci_table,
.probe = hinic_probe,
.remove = hinic_remove,
};
module_pci_driver(hinic_driver);
/*
* Huawei HiNIC PCI Express Linux driver
* Copyright(c) 2017 Huawei Technologies Co., Ltd
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* for more details.
*
*/
#include <linux/types.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/if_vlan.h>
#include <linux/pci.h>
#include <linux/device.h>
#include <linux/errno.h>
#include "hinic_hw_if.h"
#include "hinic_hw_dev.h"
#include "hinic_port.h"
#include "hinic_dev.h"
#define HINIC_MIN_MTU_SIZE 256
#define HINIC_MAX_JUMBO_FRAME_SIZE 15872
enum mac_op {
MAC_DEL,
MAC_SET,
};
/**
* change_mac - change (add or delete) mac address
* @nic_dev: nic device
* @addr: mac address
* @vlan_id: vlan number to set with the mac
* @op: add or delete the mac
*
* Return 0 - Success, negative - Failure
**/
static int change_mac(struct hinic_dev *nic_dev, const u8 *addr,
u16 vlan_id, enum mac_op op)
{
struct net_device *netdev = nic_dev->netdev;
struct hinic_hwdev *hwdev = nic_dev->hwdev;
struct hinic_port_mac_cmd port_mac_cmd;
struct hinic_hwif *hwif = hwdev->hwif;
struct pci_dev *pdev = hwif->pdev;
enum hinic_port_cmd cmd;
u16 out_size;
int err;
if (vlan_id >= VLAN_N_VID) {
netif_err(nic_dev, drv, netdev, "Invalid VLAN number\n");
return -EINVAL;
}
if (op == MAC_SET)
cmd = HINIC_PORT_CMD_SET_MAC;
else
cmd = HINIC_PORT_CMD_DEL_MAC;
port_mac_cmd.func_idx = HINIC_HWIF_FUNC_IDX(hwif);
port_mac_cmd.vlan_id = vlan_id;
memcpy(port_mac_cmd.mac, addr, ETH_ALEN);
err = hinic_port_msg_cmd(hwdev, cmd, &port_mac_cmd,
sizeof(port_mac_cmd),
&port_mac_cmd, &out_size);
if (err || (out_size != sizeof(port_mac_cmd)) || port_mac_cmd.status) {
dev_err(&pdev->dev, "Failed to change MAC, ret = %d\n",
port_mac_cmd.status);
return -EFAULT;
}
return 0;
}
/**
* hinic_port_add_mac - add mac address
* @nic_dev: nic device
* @addr: mac address
* @vlan_id: vlan number to set with the mac
*
* Return 0 - Success, negative - Failure
**/
int hinic_port_add_mac(struct hinic_dev *nic_dev,
const u8 *addr, u16 vlan_id)
{
return change_mac(nic_dev, addr, vlan_id, MAC_SET);
}
/**
* hinic_port_del_mac - remove mac address
* @nic_dev: nic device
* @addr: mac address
* @vlan_id: vlan number that is connected to the mac
*
* Return 0 - Success, negative - Failure
**/
int hinic_port_del_mac(struct hinic_dev *nic_dev, const u8 *addr,
u16 vlan_id)
{
return change_mac(nic_dev, addr, vlan_id, MAC_DEL);
}
/**
* hinic_port_get_mac - get the mac address of the nic device
* @nic_dev: nic device
* @addr: returned mac address
*
* Return 0 - Success, negative - Failure
**/
int hinic_port_get_mac(struct hinic_dev *nic_dev, u8 *addr)
{
struct hinic_hwdev *hwdev = nic_dev->hwdev;
struct hinic_port_mac_cmd port_mac_cmd;
struct hinic_hwif *hwif = hwdev->hwif;
struct pci_dev *pdev = hwif->pdev;
u16 out_size;
int err;
port_mac_cmd.func_idx = HINIC_HWIF_FUNC_IDX(hwif);
err = hinic_port_msg_cmd(hwdev, HINIC_PORT_CMD_GET_MAC,
&port_mac_cmd, sizeof(port_mac_cmd),
&port_mac_cmd, &out_size);
if (err || (out_size != sizeof(port_mac_cmd)) || port_mac_cmd.status) {
dev_err(&pdev->dev, "Failed to get mac, ret = %d\n",
port_mac_cmd.status);
return -EFAULT;
}
memcpy(addr, port_mac_cmd.mac, ETH_ALEN);
return 0;
}
/**
* hinic_port_set_mtu - set mtu
* @nic_dev: nic device
* @new_mtu: new mtu
*
* Return 0 - Success, negative - Failure
**/
int hinic_port_set_mtu(struct hinic_dev *nic_dev, int new_mtu)
{
struct net_device *netdev = nic_dev->netdev;
struct hinic_hwdev *hwdev = nic_dev->hwdev;
struct hinic_port_mtu_cmd port_mtu_cmd;
struct hinic_hwif *hwif = hwdev->hwif;
struct pci_dev *pdev = hwif->pdev;
int err, max_frame;
u16 out_size;
if (new_mtu < HINIC_MIN_MTU_SIZE) {
netif_err(nic_dev, drv, netdev, "mtu < MIN MTU size");
return -EINVAL;
}
max_frame = new_mtu + ETH_HLEN + ETH_FCS_LEN;
if (max_frame > HINIC_MAX_JUMBO_FRAME_SIZE) {
netif_err(nic_dev, drv, netdev, "mtu > MAX MTU size");
return -EINVAL;
}
port_mtu_cmd.func_idx = HINIC_HWIF_FUNC_IDX(hwif);
port_mtu_cmd.mtu = new_mtu;
err = hinic_port_msg_cmd(hwdev, HINIC_PORT_CMD_CHANGE_MTU,
&port_mtu_cmd, sizeof(port_mtu_cmd),
&port_mtu_cmd, &out_size);
if (err || (out_size != sizeof(port_mtu_cmd)) || port_mtu_cmd.status) {
dev_err(&pdev->dev, "Failed to set mtu, ret = %d\n",
port_mtu_cmd.status);
return -EFAULT;
}
return 0;
}
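/* Worked bounds (illustrative): with ETH_HLEN = 14 and ETH_FCS_LEN = 4,
 * the largest MTU accepted above is
 *
 *	HINIC_MAX_JUMBO_FRAME_SIZE - ETH_HLEN - ETH_FCS_LEN
 *		= 15872 - 14 - 4 = 15854 bytes,
 *
 * while anything below HINIC_MIN_MTU_SIZE (256) is rejected.
 */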
/**
* hinic_port_add_vlan - add vlan to the nic device
* @nic_dev: nic device
* @vlan_id: the vlan number to add
*
* Return 0 - Success, negative - Failure
**/
int hinic_port_add_vlan(struct hinic_dev *nic_dev, u16 vlan_id)
{
struct hinic_hwdev *hwdev = nic_dev->hwdev;
struct hinic_port_vlan_cmd port_vlan_cmd;
port_vlan_cmd.func_idx = HINIC_HWIF_FUNC_IDX(hwdev->hwif);
port_vlan_cmd.vlan_id = vlan_id;
return hinic_port_msg_cmd(hwdev, HINIC_PORT_CMD_ADD_VLAN,
&port_vlan_cmd, sizeof(port_vlan_cmd),
NULL, NULL);
}
/**
* hinic_port_del_vlan - delete vlan from the nic device
* @nic_dev: nic device
* @vlan_id: the vlan number to delete
*
* Return 0 - Success, negative - Failure
**/
int hinic_port_del_vlan(struct hinic_dev *nic_dev, u16 vlan_id)
{
struct hinic_hwdev *hwdev = nic_dev->hwdev;
struct hinic_port_vlan_cmd port_vlan_cmd;
port_vlan_cmd.func_idx = HINIC_HWIF_FUNC_IDX(hwdev->hwif);
port_vlan_cmd.vlan_id = vlan_id;
return hinic_port_msg_cmd(hwdev, HINIC_PORT_CMD_DEL_VLAN,
&port_vlan_cmd, sizeof(port_vlan_cmd),
NULL, NULL);
}
/**
* hinic_port_set_rx_mode - set rx mode in the nic device
* @nic_dev: nic device
* @rx_mode: the rx mode to set
*
* Return 0 - Success, negative - Failure
**/
int hinic_port_set_rx_mode(struct hinic_dev *nic_dev, u32 rx_mode)
{
struct hinic_hwdev *hwdev = nic_dev->hwdev;
struct hinic_port_rx_mode_cmd rx_mode_cmd;
rx_mode_cmd.func_idx = HINIC_HWIF_FUNC_IDX(hwdev->hwif);
rx_mode_cmd.rx_mode = rx_mode;
return hinic_port_msg_cmd(hwdev, HINIC_PORT_CMD_SET_RX_MODE,
&rx_mode_cmd, sizeof(rx_mode_cmd),
NULL, NULL);
}
/**
* hinic_port_link_state - get the link state
* @nic_dev: nic device
* @link_state: the returned link state
*
* Return 0 - Success, negative - Failure
**/
int hinic_port_link_state(struct hinic_dev *nic_dev,
enum hinic_port_link_state *link_state)
{
struct hinic_hwdev *hwdev = nic_dev->hwdev;
struct hinic_hwif *hwif = hwdev->hwif;
struct hinic_port_link_cmd link_cmd;
struct pci_dev *pdev = hwif->pdev;
u16 out_size;
int err;
if (!HINIC_IS_PF(hwif) && !HINIC_IS_PPF(hwif)) {
dev_err(&pdev->dev, "unsupported PCI Function type\n");
return -EINVAL;
}
link_cmd.func_idx = HINIC_HWIF_FUNC_IDX(hwif);
err = hinic_port_msg_cmd(hwdev, HINIC_PORT_CMD_GET_LINK_STATE,
&link_cmd, sizeof(link_cmd),
&link_cmd, &out_size);
if (err || (out_size != sizeof(link_cmd)) || link_cmd.status) {
dev_err(&pdev->dev, "Failed to get link state, ret = %d\n",
link_cmd.status);
return -EINVAL;
}
*link_state = link_cmd.state;
return 0;
}
/**
* hinic_port_set_state - set port state
* @nic_dev: nic device
* @state: the state to set
*
* Return 0 - Success, negative - Failure
**/
int hinic_port_set_state(struct hinic_dev *nic_dev, enum hinic_port_state state)
{
struct hinic_hwdev *hwdev = nic_dev->hwdev;
struct hinic_port_state_cmd port_state;
struct hinic_hwif *hwif = hwdev->hwif;
struct pci_dev *pdev = hwif->pdev;
u16 out_size;
int err;
if (!HINIC_IS_PF(hwif) && !HINIC_IS_PPF(hwif)) {
dev_err(&pdev->dev, "unsupported PCI Function type\n");
return -EINVAL;
}
port_state.state = state;
err = hinic_port_msg_cmd(hwdev, HINIC_PORT_CMD_SET_PORT_STATE,
&port_state, sizeof(port_state),
&port_state, &out_size);
if (err || (out_size != sizeof(port_state)) || port_state.status) {
dev_err(&pdev->dev, "Failed to set port state, ret = %d\n",
port_state.status);
return -EFAULT;
}
return 0;
}
/**
* hinic_port_set_func_state - set func device state
* @nic_dev: nic device
* @state: the state to set
*
* Return 0 - Success, negative - Failure
**/
int hinic_port_set_func_state(struct hinic_dev *nic_dev,
enum hinic_func_port_state state)
{
struct hinic_port_func_state_cmd func_state;
struct hinic_hwdev *hwdev = nic_dev->hwdev;
struct hinic_hwif *hwif = hwdev->hwif;
struct pci_dev *pdev = hwif->pdev;
u16 out_size;
int err;
func_state.func_idx = HINIC_HWIF_FUNC_IDX(hwif);
func_state.state = state;
err = hinic_port_msg_cmd(hwdev, HINIC_PORT_CMD_SET_FUNC_STATE,
&func_state, sizeof(func_state),
&func_state, &out_size);
if (err || (out_size != sizeof(func_state)) || func_state.status) {
dev_err(&pdev->dev, "Failed to set port func state, ret = %d\n",
func_state.status);
return -EFAULT;
}
return 0;
}
/**
* hinic_port_get_cap - get port capabilities
* @nic_dev: nic device
* @port_cap: returned port capabilities
*
* Return 0 - Success, negative - Failure
**/
int hinic_port_get_cap(struct hinic_dev *nic_dev,
struct hinic_port_cap *port_cap)
{
struct hinic_hwdev *hwdev = nic_dev->hwdev;
struct hinic_hwif *hwif = hwdev->hwif;
struct pci_dev *pdev = hwif->pdev;
u16 out_size;
int err;
port_cap->func_idx = HINIC_HWIF_FUNC_IDX(hwif);
err = hinic_port_msg_cmd(hwdev, HINIC_PORT_CMD_GET_CAP,
port_cap, sizeof(*port_cap),
port_cap, &out_size);
if (err || (out_size != sizeof(*port_cap)) || port_cap->status) {
dev_err(&pdev->dev,
"Failed to get port capabilities, ret = %d\n",
port_cap->status);
return -EINVAL;
}
return 0;
}
/*
* Huawei HiNIC PCI Express Linux driver
* Copyright(c) 2017 Huawei Technologies Co., Ltd
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* for more details.
*
*/
#ifndef HINIC_PORT_H
#define HINIC_PORT_H
#include <linux/types.h>
#include <linux/etherdevice.h>
#include <linux/bitops.h>
#include "hinic_dev.h"
enum hinic_rx_mode {
HINIC_RX_MODE_UC = BIT(0),
HINIC_RX_MODE_MC = BIT(1),
HINIC_RX_MODE_BC = BIT(2),
HINIC_RX_MODE_MC_ALL = BIT(3),
HINIC_RX_MODE_PROMISC = BIT(4),
};
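/*
 * The rx mode values are single-bit flags intended to be OR-ed together and
 * passed to hinic_port_set_rx_mode(). A typical non-promiscuous mask could
 * look like the sketch below (illustrative only; the actual default is
 * chosen by the driver's rx-mode handling code):
 */
#define HINIC_RX_MODE_EXAMPLE_DEFAULT	(HINIC_RX_MODE_UC | \
					 HINIC_RX_MODE_MC | \
					 HINIC_RX_MODE_BC)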
enum hinic_port_link_state {
HINIC_LINK_STATE_DOWN,
HINIC_LINK_STATE_UP,
};
enum hinic_port_state {
HINIC_PORT_DISABLE = 0,
HINIC_PORT_ENABLE = 3,
};
enum hinic_func_port_state {
HINIC_FUNC_PORT_DISABLE = 0,
HINIC_FUNC_PORT_ENABLE = 2,
};
enum hinic_autoneg_cap {
HINIC_AUTONEG_UNSUPPORTED,
HINIC_AUTONEG_SUPPORTED,
};
enum hinic_autoneg_state {
HINIC_AUTONEG_DISABLED,
HINIC_AUTONEG_ACTIVE,
};
enum hinic_duplex {
HINIC_DUPLEX_HALF,
HINIC_DUPLEX_FULL,
};
enum hinic_speed {
HINIC_SPEED_10MB_LINK = 0,
HINIC_SPEED_100MB_LINK,
HINIC_SPEED_1000MB_LINK,
HINIC_SPEED_10GB_LINK,
HINIC_SPEED_25GB_LINK,
HINIC_SPEED_40GB_LINK,
HINIC_SPEED_100GB_LINK,
HINIC_SPEED_UNKNOWN = 0xFF,
};
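/*
 * Illustrative helper (not part of the driver): map hinic_speed onto the
 * generic ethtool SPEED_* values, e.g. for get_link_ksettings. Assumes
 * <linux/ethtool.h> is included by the user of this sketch.
 */
static inline u32 example_hinic_speed_to_ethtool(enum hinic_speed speed)
{
	switch (speed) {
	case HINIC_SPEED_10MB_LINK:	return SPEED_10;
	case HINIC_SPEED_100MB_LINK:	return SPEED_100;
	case HINIC_SPEED_1000MB_LINK:	return SPEED_1000;
	case HINIC_SPEED_10GB_LINK:	return SPEED_10000;
	case HINIC_SPEED_25GB_LINK:	return SPEED_25000;
	case HINIC_SPEED_40GB_LINK:	return SPEED_40000;
	case HINIC_SPEED_100GB_LINK:	return SPEED_100000;
	default:			return SPEED_UNKNOWN;
	}
}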
struct hinic_port_mac_cmd {
u8 status;
u8 version;
u8 rsvd0[6];
u16 func_idx;
u16 vlan_id;
u16 rsvd1;
unsigned char mac[ETH_ALEN];
};
struct hinic_port_mtu_cmd {
u8 status;
u8 version;
u8 rsvd0[6];
u16 func_idx;
u16 rsvd1;
u32 mtu;
};
struct hinic_port_vlan_cmd {
u8 status;
u8 version;
u8 rsvd0[6];
u16 func_idx;
u16 vlan_id;
};
struct hinic_port_rx_mode_cmd {
u8 status;
u8 version;
u8 rsvd0[6];
u16 func_idx;
u16 rsvd;
u32 rx_mode;
};
struct hinic_port_link_cmd {
u8 status;
u8 version;
u8 rsvd0[6];
u16 func_idx;
u8 state;
u8 rsvd1;
};
struct hinic_port_state_cmd {
u8 status;
u8 version;
u8 rsvd0[6];
u8 state;
u8 rsvd1[3];
};
struct hinic_port_link_status {
u8 status;
u8 version;
u8 rsvd0[6];
u16 rsvd1;
u8 link;
u8 rsvd2;
};
struct hinic_port_func_state_cmd {
u8 status;
u8 version;
u8 rsvd0[6];
u16 func_idx;
u16 rsvd1;
u8 state;
u8 rsvd2[3];
};
struct hinic_port_cap {
u8 status;
u8 version;
u8 rsvd0[6];
u16 func_idx;
u16 rsvd1;
u8 port_type;
u8 autoneg_cap;
u8 autoneg_state;
u8 duplex;
u8 speed;
u8 rsvd2[3];
};
int hinic_port_add_mac(struct hinic_dev *nic_dev, const u8 *addr,
u16 vlan_id);
int hinic_port_del_mac(struct hinic_dev *nic_dev, const u8 *addr,
u16 vlan_id);
int hinic_port_get_mac(struct hinic_dev *nic_dev, u8 *addr);
int hinic_port_set_mtu(struct hinic_dev *nic_dev, int new_mtu);
int hinic_port_add_vlan(struct hinic_dev *nic_dev, u16 vlan_id);
int hinic_port_del_vlan(struct hinic_dev *nic_dev, u16 vlan_id);
int hinic_port_set_rx_mode(struct hinic_dev *nic_dev, u32 rx_mode);
int hinic_port_link_state(struct hinic_dev *nic_dev,
enum hinic_port_link_state *link_state);
int hinic_port_set_state(struct hinic_dev *nic_dev,
enum hinic_port_state state);
int hinic_port_set_func_state(struct hinic_dev *nic_dev,
enum hinic_func_port_state state);
int hinic_port_get_cap(struct hinic_dev *nic_dev,
struct hinic_port_cap *port_cap);
#endif
/*
* Huawei HiNIC PCI Express Linux driver
* Copyright(c) 2017 Huawei Technologies Co., Ltd
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* for more details.
*
*/
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/errno.h>
#include <linux/pci.h>
#include <linux/device.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/u64_stats_sync.h>
#include <linux/slab.h>
#include <linux/interrupt.h>
#include <linux/skbuff.h>
#include <linux/dma-mapping.h>
#include <linux/prefetch.h>
#include <asm/barrier.h>
#include "hinic_common.h"
#include "hinic_hw_if.h"
#include "hinic_hw_wqe.h"
#include "hinic_hw_wq.h"
#include "hinic_hw_qp.h"
#include "hinic_hw_dev.h"
#include "hinic_rx.h"
#include "hinic_dev.h"
#define RX_IRQ_NO_PENDING 0
#define RX_IRQ_NO_COALESC 0
#define RX_IRQ_NO_LLI_TIMER 0
#define RX_IRQ_NO_CREDIT 0
#define RX_IRQ_NO_RESEND_TIMER 0
/**
* hinic_rxq_clean_stats - Clean the statistics of a specific queue
* @rxq: Logical Rx Queue
**/
void hinic_rxq_clean_stats(struct hinic_rxq *rxq)
{
struct hinic_rxq_stats *rxq_stats = &rxq->rxq_stats;
u64_stats_update_begin(&rxq_stats->syncp);
rxq_stats->pkts = 0;
rxq_stats->bytes = 0;
u64_stats_update_end(&rxq_stats->syncp);
}
/**
* hinic_rxq_get_stats - get statistics of Rx Queue
* @rxq: Logical Rx Queue
* @stats: return updated stats here
**/
void hinic_rxq_get_stats(struct hinic_rxq *rxq, struct hinic_rxq_stats *stats)
{
struct hinic_rxq_stats *rxq_stats = &rxq->rxq_stats;
unsigned int start;
u64_stats_update_begin(&stats->syncp);
do {
start = u64_stats_fetch_begin(&rxq_stats->syncp);
stats->pkts = rxq_stats->pkts;
stats->bytes = rxq_stats->bytes;
} while (u64_stats_fetch_retry(&rxq_stats->syncp, start));
u64_stats_update_end(&stats->syncp);
}
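/*
 * Illustrative sketch (not part of this file): hinic_rxq_get_stats() is meant
 * to be called per queue and the snapshots summed, e.g. for ndo_get_stats64.
 * The rxqs array and num_rxqs count are hypothetical names used only for
 * this example.
 */
static inline void example_total_rx_stats(struct hinic_rxq *rxqs, int num_rxqs,
					  u64 *pkts, u64 *bytes)
{
	struct hinic_rxq_stats stats;
	int i;

	u64_stats_init(&stats.syncp);	/* the output struct needs a valid syncp */

	*pkts = 0;
	*bytes = 0;
	for (i = 0; i < num_rxqs; i++) {
		/* each call takes a consistent snapshot of one queue */
		hinic_rxq_get_stats(&rxqs[i], &stats);
		*pkts += stats.pkts;
		*bytes += stats.bytes;
	}
}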
/**
* rxq_stats_init - Initialize the statistics of a specific queue
* @rxq: Logical Rx Queue
**/
static void rxq_stats_init(struct hinic_rxq *rxq)
{
struct hinic_rxq_stats *rxq_stats = &rxq->rxq_stats;
u64_stats_init(&rxq_stats->syncp);
hinic_rxq_clean_stats(rxq);
}
/**
* rx_alloc_skb - allocate skb and map it to dma address
* @rxq: rx queue
* @dma_addr: returned dma address for the skb
*
* Return the allocated skb on success, NULL on failure
**/
static struct sk_buff *rx_alloc_skb(struct hinic_rxq *rxq,
dma_addr_t *dma_addr)
{
struct hinic_dev *nic_dev = netdev_priv(rxq->netdev);
struct hinic_hwdev *hwdev = nic_dev->hwdev;
struct hinic_hwif *hwif = hwdev->hwif;
struct pci_dev *pdev = hwif->pdev;
struct sk_buff *skb;
dma_addr_t addr;
int err;
skb = netdev_alloc_skb_ip_align(rxq->netdev, rxq->rq->buf_sz);
if (!skb) {
netdev_err(rxq->netdev, "Failed to allocate Rx SKB\n");
return NULL;
}
addr = dma_map_single(&pdev->dev, skb->data, rxq->rq->buf_sz,
DMA_FROM_DEVICE);
err = dma_mapping_error(&pdev->dev, addr);
if (err) {
dev_err(&pdev->dev, "Failed to map Rx DMA, err = %d\n", err);
goto err_rx_map;
}
*dma_addr = addr;
return skb;
err_rx_map:
dev_kfree_skb_any(skb);
return NULL;
}
/**
* rx_unmap_skb - unmap the dma address of the skb
* @rxq: rx queue
* @dma_addr: dma address of the skb
**/
static void rx_unmap_skb(struct hinic_rxq *rxq, dma_addr_t dma_addr)
{
struct hinic_dev *nic_dev = netdev_priv(rxq->netdev);
struct hinic_hwdev *hwdev = nic_dev->hwdev;
struct hinic_hwif *hwif = hwdev->hwif;
struct pci_dev *pdev = hwif->pdev;
dma_unmap_single(&pdev->dev, dma_addr, rxq->rq->buf_sz,
DMA_FROM_DEVICE);
}
/**
* rx_free_skb - unmap and free skb
* @rxq: rx queue
* @skb: skb to free
* @dma_addr: dma address of the skb
**/
static void rx_free_skb(struct hinic_rxq *rxq, struct sk_buff *skb,
dma_addr_t dma_addr)
{
rx_unmap_skb(rxq, dma_addr);
dev_kfree_skb_any(skb);
}
/**
* rx_alloc_pkts - allocate pkts in rx queue
* @rxq: rx queue
*
* Return number of skbs allocated
**/
static int rx_alloc_pkts(struct hinic_rxq *rxq)
{
struct hinic_dev *nic_dev = netdev_priv(rxq->netdev);
struct hinic_rq_wqe *rq_wqe;
unsigned int free_wqebbs;
struct hinic_sge sge;
dma_addr_t dma_addr;
struct sk_buff *skb;
int i, alloc_more;
u16 prod_idx;
free_wqebbs = hinic_get_rq_free_wqebbs(rxq->rq);
alloc_more = 0;
/* Limit the allocation chunks */
if (free_wqebbs > nic_dev->rx_weight)
free_wqebbs = nic_dev->rx_weight;
for (i = 0; i < free_wqebbs; i++) {
skb = rx_alloc_skb(rxq, &dma_addr);
if (!skb) {
netdev_err(rxq->netdev, "Failed to alloc Rx skb\n");
alloc_more = 1;
goto skb_out;
}
hinic_set_sge(&sge, dma_addr, skb->len);
rq_wqe = hinic_rq_get_wqe(rxq->rq, HINIC_RQ_WQE_SIZE,
&prod_idx);
if (!rq_wqe) {
rx_free_skb(rxq, skb, dma_addr);
alloc_more = 1;
goto skb_out;
}
hinic_rq_prepare_wqe(rxq->rq, prod_idx, rq_wqe, &sge);
hinic_rq_write_wqe(rxq->rq, prod_idx, rq_wqe, skb);
}
skb_out:
if (i) {
wmb(); /* write all the wqes before updating the producer index */
hinic_rq_update(rxq->rq, prod_idx);
}
if (alloc_more)
tasklet_schedule(&rxq->rx_task);
return i;
}
/**
* free_all_rx_skbs - free all skbs in rx queue
* @rxq: rx queue
**/
static void free_all_rx_skbs(struct hinic_rxq *rxq)
{
struct hinic_rq *rq = rxq->rq;
struct hinic_hw_wqe *hw_wqe;
struct hinic_sge sge;
u16 ci;
while ((hw_wqe = hinic_read_wqe(rq->wq, HINIC_RQ_WQE_SIZE, &ci))) {
if (IS_ERR(hw_wqe))
break;
hinic_rq_get_sge(rq, &hw_wqe->rq_wqe, ci, &sge);
hinic_put_wqe(rq->wq, HINIC_RQ_WQE_SIZE);
rx_free_skb(rxq, rq->saved_skb[ci], hinic_sge_to_dma(&sge));
}
}
/**
* rx_alloc_task - tasklet for allocating rx buffers for the queue
* @data: rx queue
**/
static void rx_alloc_task(unsigned long data)
{
struct hinic_rxq *rxq = (struct hinic_rxq *)data;
(void)rx_alloc_pkts(rxq);
}
/**
* rx_recv_jumbo_pkt - Rx handler for jumbo pkt
* @rxq: rx queue
* @head_skb: the first skb in the list
* @left_pkt_len: the remaining length of the pkt, excluding the head skb
* @ci: consumer index
*
* Return number of wqes that were used for the rest of the pkt
**/
static int rx_recv_jumbo_pkt(struct hinic_rxq *rxq, struct sk_buff *head_skb,
unsigned int left_pkt_len, u16 ci)
{
struct sk_buff *skb, *curr_skb = head_skb;
struct hinic_rq_wqe *rq_wqe;
unsigned int curr_len;
struct hinic_sge sge;
int num_wqes = 0;
while (left_pkt_len > 0) {
rq_wqe = hinic_rq_read_next_wqe(rxq->rq, HINIC_RQ_WQE_SIZE,
&skb, &ci);
num_wqes++;
hinic_rq_get_sge(rxq->rq, rq_wqe, ci, &sge);
rx_unmap_skb(rxq, hinic_sge_to_dma(&sge));
prefetch(skb->data);
curr_len = (left_pkt_len > HINIC_RX_BUF_SZ) ? HINIC_RX_BUF_SZ :
left_pkt_len;
left_pkt_len -= curr_len;
__skb_put(skb, curr_len);
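/* chain the first continuation buffer off the head skb's frag_list;
 * subsequent buffers are linked via skb->next
 */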
if (curr_skb == head_skb)
skb_shinfo(head_skb)->frag_list = skb;
else
curr_skb->next = skb;
head_skb->len += skb->len;
head_skb->data_len += skb->len;
head_skb->truesize += skb->truesize;
curr_skb = skb;
}
return num_wqes;
}
/**
* rxq_recv - Rx handler
* @rxq: rx queue
* @budget: maximum pkts to process
*
* Return number of pkts received
**/
static int rxq_recv(struct hinic_rxq *rxq, int budget)
{
struct hinic_qp *qp = container_of(rxq->rq, struct hinic_qp, rq);
u64 pkt_len = 0, rx_bytes = 0;
struct hinic_rq_wqe *rq_wqe;
int num_wqes, pkts = 0;
struct hinic_sge sge;
struct sk_buff *skb;
u16 ci;
while (pkts < budget) {
num_wqes = 0;
rq_wqe = hinic_rq_read_wqe(rxq->rq, HINIC_RQ_WQE_SIZE, &skb,
&ci);
if (!rq_wqe)
break;
hinic_rq_get_sge(rxq->rq, rq_wqe, ci, &sge);
rx_unmap_skb(rxq, hinic_sge_to_dma(&sge));
prefetch(skb->data);
pkt_len = sge.len;
if (pkt_len <= HINIC_RX_BUF_SZ) {
__skb_put(skb, pkt_len);
} else {
__skb_put(skb, HINIC_RX_BUF_SZ);
num_wqes = rx_recv_jumbo_pkt(rxq, skb, pkt_len -
HINIC_RX_BUF_SZ, ci);
}
hinic_rq_put_wqe(rxq->rq, ci,
(num_wqes + 1) * HINIC_RQ_WQE_SIZE);
skb_record_rx_queue(skb, qp->q_id);
skb->protocol = eth_type_trans(skb, rxq->netdev);
napi_gro_receive(&rxq->napi, skb);
pkts++;
rx_bytes += pkt_len;
}
if (pkts)
tasklet_schedule(&rxq->rx_task); /* refill rx buffers via rx_alloc_task */
u64_stats_update_begin(&rxq->rxq_stats.syncp);
rxq->rxq_stats.pkts += pkts;
rxq->rxq_stats.bytes += rx_bytes;
u64_stats_update_end(&rxq->rxq_stats.syncp);
return pkts;
}
static int rx_poll(struct napi_struct *napi, int budget)
{
struct hinic_rxq *rxq = container_of(napi, struct hinic_rxq, napi);
struct hinic_rq *rq = rxq->rq;
int pkts;
pkts = rxq_recv(rxq, budget);
if (pkts >= budget)
return budget;
napi_complete(napi);
enable_irq(rq->irq);
return pkts;
}
static void rx_add_napi(struct hinic_rxq *rxq)
{
struct hinic_dev *nic_dev = netdev_priv(rxq->netdev);
netif_napi_add(rxq->netdev, &rxq->napi, rx_poll, nic_dev->rx_weight);
napi_enable(&rxq->napi);
}
static void rx_del_napi(struct hinic_rxq *rxq)
{
napi_disable(&rxq->napi);
netif_napi_del(&rxq->napi);
}
static irqreturn_t rx_irq(int irq, void *data)
{
struct hinic_rxq *rxq = (struct hinic_rxq *)data;
struct hinic_rq *rq = rxq->rq;
struct hinic_dev *nic_dev;
/* Disable the interrupt until the napi poll is completed */
disable_irq_nosync(rq->irq);
nic_dev = netdev_priv(rxq->netdev);
hinic_hwdev_msix_cnt_set(nic_dev->hwdev, rq->msix_entry);
napi_schedule(&rxq->napi);
return IRQ_HANDLED;
}
static int rx_request_irq(struct hinic_rxq *rxq)
{
struct hinic_dev *nic_dev = netdev_priv(rxq->netdev);
struct hinic_hwdev *hwdev = nic_dev->hwdev;
struct hinic_rq *rq = rxq->rq;
int err;
rx_add_napi(rxq);
hinic_hwdev_msix_set(hwdev, rq->msix_entry,
RX_IRQ_NO_PENDING, RX_IRQ_NO_COALESC,
RX_IRQ_NO_LLI_TIMER, RX_IRQ_NO_CREDIT,
RX_IRQ_NO_RESEND_TIMER);
err = request_irq(rq->irq, rx_irq, 0, rxq->irq_name, rxq);
if (err) {
rx_del_napi(rxq);
return err;
}
return 0;
}
static void rx_free_irq(struct hinic_rxq *rxq)
{
struct hinic_rq *rq = rxq->rq;
free_irq(rq->irq, rxq);
rx_del_napi(rxq);
}
/**
* hinic_init_rxq - Initialize the Rx Queue
* @rxq: Logical Rx Queue
* @rq: Hardware Rx Queue to connect the Logical queue with
* @netdev: network device to connect the Logical queue with
*
* Return 0 - Success, negative - Failure
**/
int hinic_init_rxq(struct hinic_rxq *rxq, struct hinic_rq *rq,
struct net_device *netdev)
{
struct hinic_qp *qp = container_of(rq, struct hinic_qp, rq);
int err, pkts, irqname_len;
rxq->netdev = netdev;
rxq->rq = rq;
rxq_stats_init(rxq);
irqname_len = snprintf(NULL, 0, "hinic_rxq%d", qp->q_id) + 1;
rxq->irq_name = devm_kzalloc(&netdev->dev, irqname_len, GFP_KERNEL);
if (!rxq->irq_name)
return -ENOMEM;
sprintf(rxq->irq_name, "hinic_rxq%d", qp->q_id);
tasklet_init(&rxq->rx_task, rx_alloc_task, (unsigned long)rxq);
pkts = rx_alloc_pkts(rxq);
if (!pkts) {
err = -ENOMEM;
goto err_rx_pkts;
}
err = rx_request_irq(rxq);
if (err) {
netdev_err(netdev, "Failed to request Rx irq\n");
goto err_req_rx_irq;
}
return 0;
err_req_rx_irq:
err_rx_pkts:
tasklet_kill(&rxq->rx_task);
free_all_rx_skbs(rxq);
devm_kfree(&netdev->dev, rxq->irq_name);
return err;
}
/**
* hinic_clean_rxq - Clean the Rx Queue
* @rxq: Logical Rx Queue
**/
void hinic_clean_rxq(struct hinic_rxq *rxq)
{
struct net_device *netdev = rxq->netdev;
rx_free_irq(rxq);
tasklet_kill(&rxq->rx_task);
free_all_rx_skbs(rxq);
devm_kfree(&netdev->dev, rxq->irq_name);
}
/*
* Huawei HiNIC PCI Express Linux driver
* Copyright(c) 2017 Huawei Technologies Co., Ltd
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* for more details.
*
*/
#ifndef HINIC_RX_H
#define HINIC_RX_H
#include <linux/types.h>
#include <linux/netdevice.h>
#include <linux/u64_stats_sync.h>
#include <linux/interrupt.h>
#include "hinic_hw_qp.h"
struct hinic_rxq_stats {
u64 pkts;
u64 bytes;
struct u64_stats_sync syncp;
};
struct hinic_rxq {
struct net_device *netdev;
struct hinic_rq *rq;
struct hinic_rxq_stats rxq_stats;
char *irq_name;
struct tasklet_struct rx_task;
struct napi_struct napi;
};
void hinic_rxq_clean_stats(struct hinic_rxq *rxq);
void hinic_rxq_get_stats(struct hinic_rxq *rxq, struct hinic_rxq_stats *stats);
int hinic_init_rxq(struct hinic_rxq *rxq, struct hinic_rq *rq,
struct net_device *netdev);
void hinic_clean_rxq(struct hinic_rxq *rxq);
#endif
/*
* Huawei HiNIC PCI Express Linux driver
* Copyright(c) 2017 Huawei Technologies Co., Ltd
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* for more details.
*
*/
#include <linux/kernel.h>
#include <linux/netdevice.h>
#include <linux/u64_stats_sync.h>
#include <linux/errno.h>
#include <linux/types.h>
#include <linux/pci.h>
#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/slab.h>
#include <linux/interrupt.h>
#include <linux/skbuff.h>
#include <linux/smp.h>
#include <asm/byteorder.h>
#include "hinic_common.h"
#include "hinic_hw_if.h"
#include "hinic_hw_wqe.h"
#include "hinic_hw_wq.h"
#include "hinic_hw_qp.h"
#include "hinic_hw_dev.h"
#include "hinic_dev.h"
#include "hinic_tx.h"
#define TX_IRQ_NO_PENDING 0
#define TX_IRQ_NO_COALESC 0
#define TX_IRQ_NO_LLI_TIMER 0
#define TX_IRQ_NO_CREDIT 0
#define TX_IRQ_NO_RESEND_TIMER 0
#define CI_UPDATE_NO_PENDING 0
#define CI_UPDATE_NO_COALESC 0
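/* the hardware reports the consumer index as a big-endian u16 at hw_ci_addr */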
#define HW_CONS_IDX(sq) be16_to_cpu(*(u16 *)((sq)->hw_ci_addr))
#define MIN_SKB_LEN 64
/**
* hinic_txq_clean_stats - Clean the statistics of a specific queue
* @txq: Logical Tx Queue
**/
void hinic_txq_clean_stats(struct hinic_txq *txq)
{
struct hinic_txq_stats *txq_stats = &txq->txq_stats;
u64_stats_update_begin(&txq_stats->syncp);
txq_stats->pkts = 0;
txq_stats->bytes = 0;
txq_stats->tx_busy = 0;
txq_stats->tx_wake = 0;
txq_stats->tx_dropped = 0;
u64_stats_update_end(&txq_stats->syncp);
}
/**
* hinic_txq_get_stats - get statistics of Tx Queue
* @txq: Logical Tx Queue
* @stats: return updated stats here
**/
void hinic_txq_get_stats(struct hinic_txq *txq, struct hinic_txq_stats *stats)
{
struct hinic_txq_stats *txq_stats = &txq->txq_stats;
unsigned int start;
u64_stats_update_begin(&stats->syncp);
do {
start = u64_stats_fetch_begin(&txq_stats->syncp);
stats->pkts = txq_stats->pkts;
stats->bytes = txq_stats->bytes;
stats->tx_busy = txq_stats->tx_busy;
stats->tx_wake = txq_stats->tx_wake;
stats->tx_dropped = txq_stats->tx_dropped;
} while (u64_stats_fetch_retry(&txq_stats->syncp, start));
u64_stats_update_end(&stats->syncp);
}
/**
* txq_stats_init - Initialize the statistics of a specific queue
* @txq: Logical Tx Queue
**/
static void txq_stats_init(struct hinic_txq *txq)
{
struct hinic_txq_stats *txq_stats = &txq->txq_stats;
u64_stats_init(&txq_stats->syncp);
hinic_txq_clean_stats(txq);
}
/**
* tx_map_skb - dma mapping for skb and return sges
* @nic_dev: nic device
* @skb: the skb
* @sges: returned sges
*
* Return 0 - Success, negative - Failure
**/
static int tx_map_skb(struct hinic_dev *nic_dev, struct sk_buff *skb,
struct hinic_sge *sges)
{
struct hinic_hwdev *hwdev = nic_dev->hwdev;
struct hinic_hwif *hwif = hwdev->hwif;
struct pci_dev *pdev = hwif->pdev;
struct skb_frag_struct *frag;
dma_addr_t dma_addr;
int i, j;
dma_addr = dma_map_single(&pdev->dev, skb->data, skb_headlen(skb),
DMA_TO_DEVICE);
if (dma_mapping_error(&pdev->dev, dma_addr)) {
dev_err(&pdev->dev, "Failed to map Tx skb data\n");
return -EFAULT;
}
hinic_set_sge(&sges[0], dma_addr, skb_headlen(skb));
for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
frag = &skb_shinfo(skb)->frags[i];
dma_addr = skb_frag_dma_map(&pdev->dev, frag, 0,
skb_frag_size(frag),
DMA_TO_DEVICE);
if (dma_mapping_error(&pdev->dev, dma_addr)) {
dev_err(&pdev->dev, "Failed to map Tx skb frag\n");
goto err_tx_map;
}
hinic_set_sge(&sges[i + 1], dma_addr, skb_frag_size(frag));
}
return 0;
err_tx_map:
for (j = 0; j < i; j++)
dma_unmap_page(&pdev->dev, hinic_sge_to_dma(&sges[j + 1]),
sges[j + 1].len, DMA_TO_DEVICE);
dma_unmap_single(&pdev->dev, hinic_sge_to_dma(&sges[0]), sges[0].len,
DMA_TO_DEVICE);
return -EFAULT;
}
/**
* tx_unmap_skb - unmap the dma address of the skb
* @nic_dev: nic device
* @skb: the skb
* @sges: the sges that are connected to the skb
**/
static void tx_unmap_skb(struct hinic_dev *nic_dev, struct sk_buff *skb,
struct hinic_sge *sges)
{
struct hinic_hwdev *hwdev = nic_dev->hwdev;
struct hinic_hwif *hwif = hwdev->hwif;
struct pci_dev *pdev = hwif->pdev;
int i;
for (i = 0; i < skb_shinfo(skb)->nr_frags; i++)
dma_unmap_page(&pdev->dev, hinic_sge_to_dma(&sges[i + 1]),
sges[i + 1].len, DMA_TO_DEVICE);
dma_unmap_single(&pdev->dev, hinic_sge_to_dma(&sges[0]), sges[0].len,
DMA_TO_DEVICE);
}
netdev_tx_t hinic_xmit_frame(struct sk_buff *skb, struct net_device *netdev)
{
struct hinic_dev *nic_dev = netdev_priv(netdev);
struct netdev_queue *netdev_txq;
int nr_sges, err = NETDEV_TX_OK;
struct hinic_sq_wqe *sq_wqe;
unsigned int wqe_size;
struct hinic_txq *txq;
struct hinic_qp *qp;
u16 prod_idx;
txq = &nic_dev->txqs[skb->queue_mapping];
qp = container_of(txq->sq, struct hinic_qp, sq);
if (skb->len < MIN_SKB_LEN) {
if (skb_pad(skb, MIN_SKB_LEN - skb->len)) {
netdev_err(netdev, "Failed to pad skb\n");
goto skb_error;
}
skb->len = MIN_SKB_LEN;
}
nr_sges = skb_shinfo(skb)->nr_frags + 1;
if (nr_sges > txq->max_sges) {
netdev_err(netdev, "Too many Tx sges\n");
goto skb_error;
}
err = tx_map_skb(nic_dev, skb, txq->sges);
if (err)
goto skb_error;
wqe_size = HINIC_SQ_WQE_SIZE(nr_sges);
sq_wqe = hinic_sq_get_wqe(txq->sq, wqe_size, &prod_idx);
if (!sq_wqe) {
tx_unmap_skb(nic_dev, skb, txq->sges);
netif_stop_subqueue(netdev, qp->q_id);
u64_stats_update_begin(&txq->txq_stats.syncp);
txq->txq_stats.tx_busy++;
u64_stats_update_end(&txq->txq_stats.syncp);
err = NETDEV_TX_BUSY;
goto flush_skbs;
}
hinic_sq_prepare_wqe(txq->sq, prod_idx, sq_wqe, txq->sges, nr_sges);
hinic_sq_write_wqe(txq->sq, prod_idx, sq_wqe, skb, wqe_size);
flush_skbs:
netdev_txq = netdev_get_tx_queue(netdev, skb->queue_mapping);
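/* ring the doorbell only at the end of a batch (no xmit_more) or when the
 * queue was stopped, so consecutive skbs share a single doorbell write
 */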
if ((!skb->xmit_more) || (netif_xmit_stopped(netdev_txq)))
hinic_sq_write_db(txq->sq, prod_idx, wqe_size, 0);
return err;
skb_error:
dev_kfree_skb_any(skb);
u64_stats_update_begin(&txq->txq_stats.syncp);
txq->txq_stats.tx_dropped++;
u64_stats_update_end(&txq->txq_stats.syncp);
return err;
}
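/*
 * Illustrative sketch: hinic_xmit_frame() has the standard ndo_start_xmit
 * signature, so it is plugged into the stack through net_device_ops. Only
 * the relevant field is shown; the real ops table lives elsewhere in the
 * driver, not in this file.
 */
static const struct net_device_ops example_hinic_netdev_ops = {
	.ndo_start_xmit	= hinic_xmit_frame,
};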
/**
* tx_free_skb - unmap and free skb
* @nic_dev: nic device
* @skb: the skb
* @sges: the sges that are connected to the skb
**/
static void tx_free_skb(struct hinic_dev *nic_dev, struct sk_buff *skb,
struct hinic_sge *sges)
{
tx_unmap_skb(nic_dev, skb, sges);
dev_kfree_skb_any(skb);
}
/**
* free_all_tx_skbs - free all skbs in tx queue
* @txq: tx queue
**/
static void free_all_tx_skbs(struct hinic_txq *txq)
{
struct hinic_dev *nic_dev = netdev_priv(txq->netdev);
struct hinic_sq *sq = txq->sq;
struct hinic_sq_wqe *sq_wqe;
unsigned int wqe_size;
struct sk_buff *skb;
int nr_sges;
u16 ci;
while ((sq_wqe = hinic_sq_read_wqe(sq, &skb, &wqe_size, &ci))) {
nr_sges = skb_shinfo(skb)->nr_frags + 1;
hinic_sq_get_sges(sq_wqe, txq->free_sges, nr_sges);
hinic_sq_put_wqe(sq, wqe_size);
tx_free_skb(nic_dev, skb, txq->free_sges);
}
}
/**
* free_tx_poll - free the completed tx skbs in the tx queue that is connected to the napi
* @napi: napi
* @budget: maximum number of tx completions to process
*
* Return number of pkts that were processed
**/
static int free_tx_poll(struct napi_struct *napi, int budget)
{
struct hinic_txq *txq = container_of(napi, struct hinic_txq, napi);
struct hinic_qp *qp = container_of(txq->sq, struct hinic_qp, sq);
struct hinic_dev *nic_dev = netdev_priv(txq->netdev);
struct netdev_queue *netdev_txq;
struct hinic_sq *sq = txq->sq;
struct hinic_wq *wq = sq->wq;
struct hinic_sq_wqe *sq_wqe;
unsigned int wqe_size;
int nr_sges, pkts = 0;
struct sk_buff *skb;
u64 tx_bytes = 0;
u16 hw_ci, sw_ci;
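/* process completions until the budget is exhausted, no wqe is left, or
 * the hardware consumer index shows the next wqe is not fully consumed yet
 */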
do {
hw_ci = HW_CONS_IDX(sq) & wq->mask;
sq_wqe = hinic_sq_read_wqe(sq, &skb, &wqe_size, &sw_ci);
if ((!sq_wqe) ||
(((hw_ci - sw_ci) & wq->mask) * wq->wqebb_size < wqe_size))
break;
tx_bytes += skb->len;
pkts++;
nr_sges = skb_shinfo(skb)->nr_frags + 1;
hinic_sq_get_sges(sq_wqe, txq->free_sges, nr_sges);
hinic_sq_put_wqe(sq, wqe_size);
tx_free_skb(nic_dev, skb, txq->free_sges);
} while (pkts < budget);
if (__netif_subqueue_stopped(nic_dev->netdev, qp->q_id) &&
hinic_get_sq_free_wqebbs(sq) >= HINIC_MIN_TX_NUM_WQEBBS(sq)) {
netdev_txq = netdev_get_tx_queue(txq->netdev, qp->q_id);
__netif_tx_lock(netdev_txq, smp_processor_id());
netif_wake_subqueue(nic_dev->netdev, qp->q_id);
__netif_tx_unlock(netdev_txq);
u64_stats_update_begin(&txq->txq_stats.syncp);
txq->txq_stats.tx_wake++;
u64_stats_update_end(&txq->txq_stats.syncp);
}
u64_stats_update_begin(&txq->txq_stats.syncp);
txq->txq_stats.bytes += tx_bytes;
txq->txq_stats.pkts += pkts;
u64_stats_update_end(&txq->txq_stats.syncp);
if (pkts < budget) {
napi_complete(napi);
enable_irq(sq->irq);
return pkts;
}
return budget;
}
static void tx_napi_add(struct hinic_txq *txq, int weight)
{
netif_napi_add(txq->netdev, &txq->napi, free_tx_poll, weight);
napi_enable(&txq->napi);
}
static void tx_napi_del(struct hinic_txq *txq)
{
napi_disable(&txq->napi);
netif_napi_del(&txq->napi);
}
static irqreturn_t tx_irq(int irq, void *data)
{
struct hinic_txq *txq = data;
struct hinic_dev *nic_dev;
nic_dev = netdev_priv(txq->netdev);
/* Disable the interrupt until the napi poll is completed */
disable_irq_nosync(txq->sq->irq);
hinic_hwdev_msix_cnt_set(nic_dev->hwdev, txq->sq->msix_entry);
napi_schedule(&txq->napi);
return IRQ_HANDLED;
}
static int tx_request_irq(struct hinic_txq *txq)
{
struct hinic_dev *nic_dev = netdev_priv(txq->netdev);
struct hinic_hwdev *hwdev = nic_dev->hwdev;
struct hinic_hwif *hwif = hwdev->hwif;
struct pci_dev *pdev = hwif->pdev;
struct hinic_sq *sq = txq->sq;
int err;
tx_napi_add(txq, nic_dev->tx_weight);
hinic_hwdev_msix_set(nic_dev->hwdev, sq->msix_entry,
TX_IRQ_NO_PENDING, TX_IRQ_NO_COALESC,
TX_IRQ_NO_LLI_TIMER, TX_IRQ_NO_CREDIT,
TX_IRQ_NO_RESEND_TIMER);
err = request_irq(sq->irq, tx_irq, 0, txq->irq_name, txq);
if (err) {
dev_err(&pdev->dev, "Failed to request Tx irq\n");
tx_napi_del(txq);
return err;
}
return 0;
}
static void tx_free_irq(struct hinic_txq *txq)
{
struct hinic_sq *sq = txq->sq;
free_irq(sq->irq, txq);
tx_napi_del(txq);
}
/**
* hinic_init_txq - Initialize the Tx Queue
* @txq: Logical Tx Queue
* @sq: Hardware Tx Queue to connect the Logical queue with
* @netdev: network device to connect the Logical queue with
*
* Return 0 - Success, negative - Failure
**/
int hinic_init_txq(struct hinic_txq *txq, struct hinic_sq *sq,
struct net_device *netdev)
{
struct hinic_qp *qp = container_of(sq, struct hinic_qp, sq);
struct hinic_dev *nic_dev = netdev_priv(netdev);
struct hinic_hwdev *hwdev = nic_dev->hwdev;
int err, irqname_len;
size_t sges_size;
txq->netdev = netdev;
txq->sq = sq;
txq_stats_init(txq);
txq->max_sges = HINIC_MAX_SQ_BUFDESCS;
sges_size = txq->max_sges * sizeof(*txq->sges);
txq->sges = devm_kzalloc(&netdev->dev, sges_size, GFP_KERNEL);
if (!txq->sges)
return -ENOMEM;
sges_size = txq->max_sges * sizeof(*txq->free_sges);
txq->free_sges = devm_kzalloc(&netdev->dev, sges_size, GFP_KERNEL);
if (!txq->free_sges) {
err = -ENOMEM;
goto err_alloc_free_sges;
}
irqname_len = snprintf(NULL, 0, "hinic_txq%d", qp->q_id) + 1;
txq->irq_name = devm_kzalloc(&netdev->dev, irqname_len, GFP_KERNEL);
if (!txq->irq_name) {
err = -ENOMEM;
goto err_alloc_irqname;
}
sprintf(txq->irq_name, "hinic_txq%d", qp->q_id);
err = hinic_hwdev_hw_ci_addr_set(hwdev, sq, CI_UPDATE_NO_PENDING,
CI_UPDATE_NO_COALESC);
if (err)
goto err_hw_ci;
err = tx_request_irq(txq);
if (err) {
netdev_err(netdev, "Failed to request Tx irq\n");
goto err_req_tx_irq;
}
return 0;
err_req_tx_irq:
err_hw_ci:
devm_kfree(&netdev->dev, txq->irq_name);
err_alloc_irqname:
devm_kfree(&netdev->dev, txq->free_sges);
err_alloc_free_sges:
devm_kfree(&netdev->dev, txq->sges);
return err;
}
/**
* hinic_clean_txq - Clean the Tx Queue
* @txq: Logical Tx Queue
**/
void hinic_clean_txq(struct hinic_txq *txq)
{
struct net_device *netdev = txq->netdev;
tx_free_irq(txq);
free_all_tx_skbs(txq);
devm_kfree(&netdev->dev, txq->irq_name);
devm_kfree(&netdev->dev, txq->free_sges);
devm_kfree(&netdev->dev, txq->sges);
}
/*
* Huawei HiNIC PCI Express Linux driver
* Copyright(c) 2017 Huawei Technologies Co., Ltd
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* for more details.
*
*/
#ifndef HINIC_TX_H
#define HINIC_TX_H
#include <linux/types.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/u64_stats_sync.h>
#include "hinic_common.h"
#include "hinic_hw_qp.h"
struct hinic_txq_stats {
u64 pkts;
u64 bytes;
u64 tx_busy;
u64 tx_wake;
u64 tx_dropped;
struct u64_stats_sync syncp;
};
struct hinic_txq {
struct net_device *netdev;
struct hinic_sq *sq;
struct hinic_txq_stats txq_stats;
int max_sges;
struct hinic_sge *sges;
struct hinic_sge *free_sges;
char *irq_name;
struct napi_struct napi;
};
void hinic_txq_clean_stats(struct hinic_txq *txq);
void hinic_txq_get_stats(struct hinic_txq *txq, struct hinic_txq_stats *stats);
netdev_tx_t hinic_xmit_frame(struct sk_buff *skb, struct net_device *netdev);
int hinic_init_txq(struct hinic_txq *txq, struct hinic_sq *sq,
struct net_device *netdev);
void hinic_clean_txq(struct hinic_txq *txq);
#endif