Commit f4b2a420 authored by David S. Miller

Merge branch 'hns3-ethernet-driver'

Salil Mehta says:

====================
Hisilicon Network Subsystem 3 Ethernet Driver

This patch set adds support for the HNS3 (Hisilicon Network Subsystem 3)
Ethernet driver for the hip08 family of SoCs and future upcoming SoCs.

Hisilicon's new hip08 SoCs have integrated Ethernet based on PCI Express,
hence a new driver was needed alongside the previous HNS driver, which is
already part of the Linux mainline. This new driver is NOT backward
compatible with HNS.

This driver is meant to control the Physical Function; support for a separate
Virtual Function driver will follow once this base PF driver has been
accepted. The driver is under ongoing development, and the HNS3 Ethernet
driver will be incrementally enhanced with more new features.

High Level Architecture:

        [ Ethtool ]
           ^  |
           |  |
     [Ethernet Client]  [ODP/UIO Client] . . . [ RoCE Client ]
                         |                            |
                   [ HNAE Device ]                    |
                         |                            |
    ---------------------------------------------     |
                         |                            |
     [ HNAE3 Framework (Register/unregister) ]        |
                         |                            |
    ---------------------------------------------     |
                         |                            |
                   [ HCLGE Layer]                     |
         ________________|_________________           |
        |                |                 |          |
    [ MDIO ]    [ Scheduler/Shaper ]  [ Debugfs* ]    |
        |                |                 |          |
        |________________|_________________|          |
                         |                            |
             [ IMP command Interface ]                |
    ---------------------------------------------     |
              HIP08  H A R D W A R E                  *

The current patch set broadly adds support for the following PF functionality:
 1. Basic Rx and Tx functionality
 2. TSO support
 3. Ethtool support
 4. * Debugfs support -> this patch has been dropped for now.
 5. HNAE framework and hardware compatibility layer
 6. Scheduler and Shaper support in the transmit path
 7. MDIO support

Change Log:
V5->V6: Addressed below comments:
        * Andrew Lunn: Comments on MDIO and ethtool link mode
        * Leon Romanovsky: Some comments on HNAE layer tidy-up
        * Internal comments on redundant code removal, fixing error types etc.
V4->V5: Addressed below concerns:
        * Florian Fainelli: Miscellaneous comments on ethtool & enet layer
        * Stephen Hemminger: Comment on netdev stats in ethtool layer
        * Leon Romanovsky: Comments on driver version string, naming & Kconfig
        * Richard Cochran: Redundant function prototype
V3->V4: Addressed below comments:
        * Andrew Lunn: Various comments on MDIO, ethtool, ENET driver etc.
        * Stephen Hemminger: Change access and update of statistics to 64-bit
        * Bo You: Some spelling mistakes and checkpatch.pl errors.
V2->V3: Addressed comments
        * Yuval Mintz: Removal of redundant userprio-to-tc code
        * Stephen Hemminger: Ethtool & interrupt enable
        * Andrew Lunn: On C45/C22 PHY support, HNAE, ethtool
        * Florian Fainelli: C45/C22 and phy_connect/attach
        * Intel kbuild errors
V1->V2: Addressed some comments by kbuild, Yuval Mintz, Andrew Lunn &
        Florian Fainelli in the following patches:
        * Add support of HNS3 Ethernet Driver for hip08 SoC
        * Add MDIO support to HNS3 Ethernet driver for hip08 SoC
        * Add support of debugfs interface to HNS3 driver
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
parents 5477f7f3 15e8e5ff
@@ -6148,6 +6148,14 @@ S: Maintained
F: drivers/net/ethernet/hisilicon/
F: Documentation/devicetree/bindings/net/hisilicon*.txt

HISILICON NETWORK SUBSYSTEM 3 DRIVER (HNS3)
M: Yisen Zhuang <yisen.zhuang@huawei.com>
M: Salil Mehta <salil.mehta@huawei.com>
L: netdev@vger.kernel.org
W: http://www.hisilicon.com
S: Maintained
F: drivers/net/ethernet/hisilicon/hns3/

HISILICON ROCE DRIVER
M: Lijun Ou <oulijun@huawei.com>
M: Wei Hu(Xavier) <xavier.huwei@huawei.com>
@@ -76,4 +76,31 @@ config HNS_ENET
          This selects the general ethernet driver for HNS. This module makes
          use of any HNS AE driver, such as HNS_DSAF

config HNS3
        tristate "Hisilicon Network Subsystem Support HNS3 (Framework)"
        depends on PCI
        ---help---
          This selects the framework support for Hisilicon Network Subsystem 3.
          This layer facilitates clients like ENET, RoCE and user-space ethernet
          drivers (like ODP) to register with HNAE devices and their associated
          operations.

config HNS3_HCLGE
        tristate "Hisilicon HNS3 HCLGE Acceleration Engine & Compatibility Layer Support"
        depends on PCI_MSI
        depends on HNS3
        ---help---
          This selects the HNS3_HCLGE network acceleration engine and its hardware
          compatibility layer. The engine is used in the Hisilicon hip08 family of
          SoCs and further upcoming SoCs.

config HNS3_ENET
        tristate "Hisilicon HNS3 Ethernet Device Support"
        depends on 64BIT && PCI
        depends on HNS3 && HNS3_HCLGE
        ---help---
          This selects the Ethernet Driver for Hisilicon Network Subsystem 3 for the
          hip08 family of SoCs. This module depends upon the HNAE3 driver to access
          the HNAE3 devices and their associated operations.

endif # NET_VENDOR_HISILICON
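
For reference, a configuration fragment that builds all three pieces as
loadable modules might look as follows (a hypothetical .config excerpt;
HNS3_HCLGE and HNS3_ENET both depend on HNS3, as the entries above show):

CONFIG_HNS3=m
CONFIG_HNS3_HCLGE=m
CONFIG_HNS3_ENET=m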
@@ -6,4 +6,5 @@ obj-$(CONFIG_HIX5HD2_GMAC) += hix5hd2_gmac.o
obj-$(CONFIG_HIP04_ETH) += hip04_eth.o
obj-$(CONFIG_HNS_MDIO) += hns_mdio.o
obj-$(CONFIG_HNS) += hns/
obj-$(CONFIG_HNS3) += hns3/
obj-$(CONFIG_HISI_FEMAC) += hisi_femac.o

#
# Makefile for the HISILICON network device drivers.
#
obj-$(CONFIG_HNS3) += hns3pf/
obj-$(CONFIG_HNS3) += hnae3.o
/*
 * Copyright (c) 2016-2017 Hisilicon Limited.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 */

#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

#include "hnae3.h"

static LIST_HEAD(hnae3_ae_algo_list);
static LIST_HEAD(hnae3_client_list);
static LIST_HEAD(hnae3_ae_dev_list);

/* we are keeping things simple and using a single lock for all the
 * lists. This is non-critical code, so other updates, if they happen
 * in parallel, can wait.
 */
static DEFINE_MUTEX(hnae3_common_lock);
static bool hnae3_client_match(enum hnae3_client_type client_type,
                               enum hnae3_dev_type dev_type)
{
        if ((dev_type == HNAE3_DEV_KNIC) && (client_type == HNAE3_CLIENT_KNIC ||
                                             client_type == HNAE3_CLIENT_ROCE))
                return true;

        if (dev_type == HNAE3_DEV_UNIC && client_type == HNAE3_CLIENT_UNIC)
                return true;

        return false;
}

static int hnae3_match_n_instantiate(struct hnae3_client *client,
                                     struct hnae3_ae_dev *ae_dev,
                                     bool is_reg, bool *matched)
{
        int ret;

        *matched = false;

        /* check if this client matches the type of ae_dev */
        if (!(hnae3_client_match(client->type, ae_dev->dev_type) &&
              hnae_get_bit(ae_dev->flag, HNAE3_DEV_INITED_B))) {
                return 0;
        }
        /* there is a match of client and dev */
        *matched = true;

        /* now, (un-)instantiate client by calling lower layer */
        if (is_reg) {
                ret = ae_dev->ops->init_client_instance(client, ae_dev);
                if (ret)
                        dev_err(&ae_dev->pdev->dev,
                                "fail to instantiate client\n");
                return ret;
        }

        ae_dev->ops->uninit_client_instance(client, ae_dev);
        return 0;
}
int hnae3_register_client(struct hnae3_client *client)
{
        struct hnae3_client *client_tmp;
        struct hnae3_ae_dev *ae_dev;
        bool matched;
        int ret = 0;

        mutex_lock(&hnae3_common_lock);
        /* one system should only have one client for every type */
        list_for_each_entry(client_tmp, &hnae3_client_list, node) {
                if (client_tmp->type == client->type)
                        goto exit;
        }

        list_add_tail(&client->node, &hnae3_client_list);

        /* initialize the client on every matched port */
        list_for_each_entry(ae_dev, &hnae3_ae_dev_list, node) {
                /* if the client could not be initialized on the current port,
                 * for any error reason, move on to the next available port
                 */
                ret = hnae3_match_n_instantiate(client, ae_dev, true, &matched);
                if (ret)
                        dev_err(&ae_dev->pdev->dev,
                                "match and instantiation failed for port\n");
        }

exit:
        mutex_unlock(&hnae3_common_lock);

        return ret;
}
EXPORT_SYMBOL(hnae3_register_client);

void hnae3_unregister_client(struct hnae3_client *client)
{
        struct hnae3_ae_dev *ae_dev;
        bool matched;

        mutex_lock(&hnae3_common_lock);
        /* un-initialize the client on every matched port */
        list_for_each_entry(ae_dev, &hnae3_ae_dev_list, node) {
                hnae3_match_n_instantiate(client, ae_dev, false, &matched);
        }

        list_del(&client->node);
        mutex_unlock(&hnae3_common_lock);
}
EXPORT_SYMBOL(hnae3_unregister_client);
/* hnae3_register_ae_algo - register an AE algorithm with the hnae3 framework
 * @ae_algo: AE algorithm
 * NOTE: duplicated names are not checked
 */
int hnae3_register_ae_algo(struct hnae3_ae_algo *ae_algo)
{
        const struct pci_device_id *id;
        struct hnae3_ae_dev *ae_dev;
        struct hnae3_client *client;
        bool matched;
        int ret = 0;

        mutex_lock(&hnae3_common_lock);

        list_add_tail(&ae_algo->node, &hnae3_ae_algo_list);

        /* Check if this algo/ops matches the list of ae_devs */
        list_for_each_entry(ae_dev, &hnae3_ae_dev_list, node) {
                id = pci_match_id(ae_algo->pdev_id_table, ae_dev->pdev);
                if (!id)
                        continue;

                /* ae_dev init should set flag */
                ae_dev->ops = ae_algo->ops;
                ret = ae_algo->ops->init_ae_dev(ae_dev);
                if (ret) {
                        dev_err(&ae_dev->pdev->dev, "init ae_dev error\n");
                        continue;
                }

                hnae_set_bit(ae_dev->flag, HNAE3_DEV_INITED_B, 1);

                /* check the client list for a match with this ae_dev type
                 * and initialize the matched client instance
                 */
                list_for_each_entry(client, &hnae3_client_list, node) {
                        ret = hnae3_match_n_instantiate(client, ae_dev, true,
                                                        &matched);
                        if (ret)
                                dev_err(&ae_dev->pdev->dev,
                                        "match and instantiation failed\n");
                        if (matched)
                                break;
                }
        }

        mutex_unlock(&hnae3_common_lock);

        return ret;
}
EXPORT_SYMBOL(hnae3_register_ae_algo);

/* hnae3_unregister_ae_algo - unregisters an AE algorithm
 * @ae_algo: the AE algorithm to unregister
 */
void hnae3_unregister_ae_algo(struct hnae3_ae_algo *ae_algo)
{
        const struct pci_device_id *id;
        struct hnae3_ae_dev *ae_dev;
        struct hnae3_client *client;
        bool matched;

        mutex_lock(&hnae3_common_lock);
        /* Check if there is a matched ae_dev */
        list_for_each_entry(ae_dev, &hnae3_ae_dev_list, node) {
                id = pci_match_id(ae_algo->pdev_id_table, ae_dev->pdev);
                if (!id)
                        continue;

                /* check the client list for a match with this ae_dev type
                 * and un-initialize the matched client instance
                 */
                list_for_each_entry(client, &hnae3_client_list, node) {
                        hnae3_match_n_instantiate(client, ae_dev, false,
                                                  &matched);
                        if (matched)
                                break;
                }

                ae_algo->ops->uninit_ae_dev(ae_dev);
                hnae_set_bit(ae_dev->flag, HNAE3_DEV_INITED_B, 0);
        }

        list_del(&ae_algo->node);
        mutex_unlock(&hnae3_common_lock);
}
EXPORT_SYMBOL(hnae3_unregister_ae_algo);
/* hnae3_register_ae_dev - registers an AE device to the hnae3 framework
 * @ae_dev: the AE device
 * NOTE: duplicated names are not checked
 */
int hnae3_register_ae_dev(struct hnae3_ae_dev *ae_dev)
{
        const struct pci_device_id *id;
        struct hnae3_ae_algo *ae_algo;
        struct hnae3_client *client;
        bool matched;
        int ret = 0;

        mutex_lock(&hnae3_common_lock);

        list_add_tail(&ae_dev->node, &hnae3_ae_dev_list);

        /* Check if there is a matched ae_algo */
        list_for_each_entry(ae_algo, &hnae3_ae_algo_list, node) {
                id = pci_match_id(ae_algo->pdev_id_table, ae_dev->pdev);
                if (!id)
                        continue;

                ae_dev->ops = ae_algo->ops;
                if (!ae_dev->ops) {
                        dev_err(&ae_dev->pdev->dev, "ae_dev ops are null\n");
                        /* return an error rather than silently succeeding */
                        ret = -EOPNOTSUPP;
                        goto out_err;
                }

                /* ae_dev init should set flag */
                ret = ae_dev->ops->init_ae_dev(ae_dev);
                if (ret) {
                        dev_err(&ae_dev->pdev->dev, "init ae_dev error\n");
                        goto out_err;
                }

                hnae_set_bit(ae_dev->flag, HNAE3_DEV_INITED_B, 1);
                break;
        }

        /* check the client list for a match with this ae_dev type and
         * initialize the matched client instance
         */
        list_for_each_entry(client, &hnae3_client_list, node) {
                ret = hnae3_match_n_instantiate(client, ae_dev, true,
                                                &matched);
                if (ret)
                        dev_err(&ae_dev->pdev->dev,
                                "match and instantiation failed\n");
                if (matched)
                        break;
        }

out_err:
        mutex_unlock(&hnae3_common_lock);

        return ret;
}
EXPORT_SYMBOL(hnae3_register_ae_dev);

/* hnae3_unregister_ae_dev - unregisters an AE device
 * @ae_dev: the AE device to unregister
 */
void hnae3_unregister_ae_dev(struct hnae3_ae_dev *ae_dev)
{
        const struct pci_device_id *id;
        struct hnae3_ae_algo *ae_algo;
        struct hnae3_client *client;
        bool matched;

        mutex_lock(&hnae3_common_lock);
        /* Check if there is a matched ae_algo */
        list_for_each_entry(ae_algo, &hnae3_ae_algo_list, node) {
                id = pci_match_id(ae_algo->pdev_id_table, ae_dev->pdev);
                if (!id)
                        continue;

                list_for_each_entry(client, &hnae3_client_list, node) {
                        hnae3_match_n_instantiate(client, ae_dev, false,
                                                  &matched);
                        if (matched)
                                break;
                }

                ae_algo->ops->uninit_ae_dev(ae_dev);
                hnae_set_bit(ae_dev->flag, HNAE3_DEV_INITED_B, 0);
        }

        list_del(&ae_dev->node);
        mutex_unlock(&hnae3_common_lock);
}
EXPORT_SYMBOL(hnae3_unregister_ae_dev);

MODULE_AUTHOR("Huawei Tech. Co., Ltd.");
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("HNAE3 (Hisilicon Network Acceleration Engine) Framework");
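
To make the registration flow above concrete, here is a minimal, hypothetical
sketch of a KNIC client module hooking into the framework. The .type and .node
fields follow their use in hnae3.c; the .name field and all identifiers here
are illustrative assumptions only (the real client lives in the ENET patch):

#include <linux/module.h>
#include "hnae3.h"

/* illustrative only: a HNAE3_CLIENT_KNIC client, which per
 * hnae3_client_match() above is instantiated on HNAE3_DEV_KNIC devices
 */
static struct hnae3_client example_client = {
        .name = "example_knic",         /* assumed field, see hnae3.h */
        .type = HNAE3_CLIENT_KNIC,
};

static int __init example_client_module_init(void)
{
        /* instantiates the client on every already-registered ae_dev */
        return hnae3_register_client(&example_client);
}

static void __exit example_client_module_exit(void)
{
        /* un-instantiates the client from every matched ae_dev */
        hnae3_unregister_client(&example_client);
}

module_init(example_client_module_init);
module_exit(example_client_module_exit);
MODULE_LICENSE("GPL");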
#
# Makefile for the HISILICON network device drivers.
#
ccflags-y := -Idrivers/net/ethernet/hisilicon/hns3
obj-$(CONFIG_HNS3_HCLGE) += hclge.o
hclge-objs = hclge_main.o hclge_cmd.o hclge_mdio.o hclge_tm.o
obj-$(CONFIG_HNS3_ENET) += hns3.o
hns3-objs = hns3_enet.o hns3_ethtool.o
/*
 * Copyright (c) 2016~2017 Hisilicon Limited.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 */

#include <linux/dma-mapping.h>
#include <linux/slab.h>
#include <linux/pci.h>
#include <linux/device.h>
#include <linux/err.h>
#include <linux/dma-direction.h>
#include "hclge_cmd.h"
#include "hnae3.h"
#include "hclge_main.h"

#define hclge_is_csq(ring) ((ring)->flag & HCLGE_TYPE_CSQ)
#define hclge_ring_to_dma_dir(ring) (hclge_is_csq(ring) ? \
                                     DMA_TO_DEVICE : DMA_FROM_DEVICE)
#define cmq_ring_to_dev(ring) (&(ring)->dev->pdev->dev)
static int hclge_ring_space(struct hclge_cmq_ring *ring)
{
        int ntu = ring->next_to_use;
        int ntc = ring->next_to_clean;
        int used = (ntu - ntc + ring->desc_num) % ring->desc_num;

        /* one slot is kept unused so a full ring can be told apart from an
         * empty one; e.g. with desc_num = 8, ntu = 2, ntc = 6:
         * used = (2 - 6 + 8) % 8 = 4, space = 8 - 4 - 1 = 3
         */
        return ring->desc_num - used - 1;
}
static int hclge_alloc_cmd_desc(struct hclge_cmq_ring *ring)
{
        int size = ring->desc_num * sizeof(struct hclge_desc);

        ring->desc = kzalloc(size, GFP_KERNEL);
        if (!ring->desc)
                return -ENOMEM;

        ring->desc_dma_addr = dma_map_single(cmq_ring_to_dev(ring), ring->desc,
                                             size, DMA_BIDIRECTIONAL);
        if (dma_mapping_error(cmq_ring_to_dev(ring), ring->desc_dma_addr)) {
                ring->desc_dma_addr = 0;
                kfree(ring->desc);
                ring->desc = NULL;
                return -ENOMEM;
        }

        return 0;
}

static void hclge_free_cmd_desc(struct hclge_cmq_ring *ring)
{
        dma_unmap_single(cmq_ring_to_dev(ring), ring->desc_dma_addr,
                         ring->desc_num * sizeof(ring->desc[0]),
                         DMA_BIDIRECTIONAL);

        ring->desc_dma_addr = 0;
        kfree(ring->desc);
        ring->desc = NULL;
}
static int hclge_init_cmd_queue(struct hclge_dev *hdev, int ring_type)
{
        struct hclge_hw *hw = &hdev->hw;
        struct hclge_cmq_ring *ring =
                (ring_type == HCLGE_TYPE_CSQ) ? &hw->cmq.csq : &hw->cmq.crq;
        int ret;

        ring->flag = ring_type;
        ring->dev = hdev;

        ret = hclge_alloc_cmd_desc(ring);
        if (ret) {
                dev_err(&hdev->pdev->dev, "descriptor %s alloc error %d\n",
                        (ring_type == HCLGE_TYPE_CSQ) ? "CSQ" : "CRQ", ret);
                return ret;
        }

        ring->next_to_clean = 0;
        ring->next_to_use = 0;

        return 0;
}
void hclge_cmd_setup_basic_desc(struct hclge_desc *desc,
                                enum hclge_opcode_type opcode, bool is_read)
{
        memset((void *)desc, 0, sizeof(struct hclge_desc));
        desc->opcode = cpu_to_le16(opcode);
        desc->flag = cpu_to_le16(HCLGE_CMD_FLAG_NO_INTR | HCLGE_CMD_FLAG_IN);

        if (is_read)
                desc->flag |= cpu_to_le16(HCLGE_CMD_FLAG_WR);
        else
                desc->flag &= cpu_to_le16(~HCLGE_CMD_FLAG_WR);
}
static void hclge_cmd_config_regs(struct hclge_cmq_ring *ring)
{
        dma_addr_t dma = ring->desc_dma_addr;
        struct hclge_dev *hdev = ring->dev;
        struct hclge_hw *hw = &hdev->hw;

        if (ring->flag == HCLGE_TYPE_CSQ) {
                hclge_write_dev(hw, HCLGE_NIC_CSQ_BASEADDR_L_REG,
                                (u32)dma);
                /* the two shifts keep the expression well-defined even
                 * when dma_addr_t is only 32 bits wide
                 */
                hclge_write_dev(hw, HCLGE_NIC_CSQ_BASEADDR_H_REG,
                                (u32)((dma >> 31) >> 1));
                hclge_write_dev(hw, HCLGE_NIC_CSQ_DEPTH_REG,
                                (ring->desc_num >> HCLGE_NIC_CMQ_DESC_NUM_S) |
                                HCLGE_NIC_CMQ_ENABLE);
                hclge_write_dev(hw, HCLGE_NIC_CSQ_TAIL_REG, 0);
                hclge_write_dev(hw, HCLGE_NIC_CSQ_HEAD_REG, 0);
        } else {
                hclge_write_dev(hw, HCLGE_NIC_CRQ_BASEADDR_L_REG,
                                (u32)dma);
                hclge_write_dev(hw, HCLGE_NIC_CRQ_BASEADDR_H_REG,
                                (u32)((dma >> 31) >> 1));
                hclge_write_dev(hw, HCLGE_NIC_CRQ_DEPTH_REG,
                                (ring->desc_num >> HCLGE_NIC_CMQ_DESC_NUM_S) |
                                HCLGE_NIC_CMQ_ENABLE);
                hclge_write_dev(hw, HCLGE_NIC_CRQ_TAIL_REG, 0);
                hclge_write_dev(hw, HCLGE_NIC_CRQ_HEAD_REG, 0);
        }
}

static void hclge_cmd_init_regs(struct hclge_hw *hw)
{
        hclge_cmd_config_regs(&hw->cmq.csq);
        hclge_cmd_config_regs(&hw->cmq.crq);
}
static int hclge_cmd_csq_clean(struct hclge_hw *hw)
{
        struct hclge_cmq_ring *csq = &hw->cmq.csq;
        u16 ntc = csq->next_to_clean;
        struct hclge_desc *desc;
        int clean = 0;
        u32 head;

        desc = &csq->desc[ntc];
        head = hclge_read_dev(hw, HCLGE_NIC_CSQ_HEAD_REG);

        while (head != ntc) {
                memset(desc, 0, sizeof(*desc));
                ntc++;
                if (ntc == csq->desc_num)
                        ntc = 0;
                desc = &csq->desc[ntc];
                clean++;
        }
        csq->next_to_clean = ntc;

        return clean;
}

static int hclge_cmd_csq_done(struct hclge_hw *hw)
{
        u32 head = hclge_read_dev(hw, HCLGE_NIC_CSQ_HEAD_REG);

        return head == hw->cmq.csq.next_to_use;
}
static bool hclge_is_special_opcode(u16 opcode)
{
        u16 spec_opcode[3] = {0x0030, 0x0031, 0x0032};
        int i;

        for (i = 0; i < ARRAY_SIZE(spec_opcode); i++) {
                if (spec_opcode[i] == opcode)
                        return true;
        }

        return false;
}
/**
 * hclge_cmd_send - send command to command queue
 * @hw: pointer to the hw struct
 * @desc: prefilled descriptor for describing the command
 * @num : the number of descriptors to be sent
 *
 * This is the main function for sending commands to the command queue; it
 * posts the descriptors, optionally waits for the firmware write-back, and
 * cleans the queue afterwards.
 **/
int hclge_cmd_send(struct hclge_hw *hw, struct hclge_desc *desc, int num)
{
        struct hclge_dev *hdev = (struct hclge_dev *)hw->back;
        struct hclge_desc *desc_to_use;
        bool complete = false;
        u32 timeout = 0;
        int handle = 0;
        int retval = 0;
        u16 opcode, desc_ret;
        int ntc;

        spin_lock_bh(&hw->cmq.csq.lock);

        if (num > hclge_ring_space(&hw->cmq.csq)) {
                spin_unlock_bh(&hw->cmq.csq.lock);
                return -EBUSY;
        }

        /* Record the location of desc in the ring for this time, which
         * will be used by hardware for the write-back
         */
        ntc = hw->cmq.csq.next_to_use;
        opcode = le16_to_cpu(desc[0].opcode);
        while (handle < num) {
                desc_to_use = &hw->cmq.csq.desc[hw->cmq.csq.next_to_use];
                *desc_to_use = desc[handle];
                (hw->cmq.csq.next_to_use)++;
                if (hw->cmq.csq.next_to_use == hw->cmq.csq.desc_num)
                        hw->cmq.csq.next_to_use = 0;
                handle++;
        }

        /* Write to hardware */
        hclge_write_dev(hw, HCLGE_NIC_CSQ_TAIL_REG, hw->cmq.csq.next_to_use);

        /* If the command is sync, wait for the firmware to write back;
         * if multiple descriptors are to be sent, use the first to check
         */
        if (HCLGE_SEND_SYNC(desc->flag)) {
                do {
                        if (hclge_cmd_csq_done(hw))
                                break;
                        udelay(1);
                        timeout++;
                } while (timeout < hw->cmq.tx_timeout);
        }

        if (hclge_cmd_csq_done(hw)) {
                complete = true;
                handle = 0;
                while (handle < num) {
                        /* Get the result of the hardware write-back */
                        desc_to_use = &hw->cmq.csq.desc[ntc];
                        desc[handle] = *desc_to_use;
                        pr_debug("Get cmd desc:\n");

                        if (likely(!hclge_is_special_opcode(opcode)))
                                desc_ret = le16_to_cpu(desc[handle].retval);
                        else
                                desc_ret = le16_to_cpu(desc[0].retval);

                        if ((enum hclge_cmd_return_status)desc_ret ==
                            HCLGE_CMD_EXEC_SUCCESS)
                                retval = 0;
                        else
                                retval = -EIO;
                        hw->cmq.last_status = (enum hclge_cmd_status)desc_ret;
                        ntc++;
                        handle++;
                        if (ntc == hw->cmq.csq.desc_num)
                                ntc = 0;
                }
        }

        if (!complete)
                retval = -EAGAIN;

        /* Clean the command send queue */
        handle = hclge_cmd_csq_clean(hw);
        if (handle != num) {
                dev_warn(&hdev->pdev->dev,
                         "cleaned %d, need to clean %d\n", handle, num);
        }

        spin_unlock_bh(&hw->cmq.csq.lock);

        return retval;
}
enum hclge_cmd_status hclge_cmd_query_firmware_version(struct hclge_hw *hw,
                                                       u32 *version)
{
        struct hclge_query_version *resp;
        struct hclge_desc desc;
        int ret;

        hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_QUERY_FW_VER, true);
        resp = (struct hclge_query_version *)desc.data;

        ret = hclge_cmd_send(hw, &desc, 1);
        if (!ret)
                *version = le32_to_cpu(resp->firmware);

        return ret;
}
int hclge_cmd_init(struct hclge_dev *hdev)
{
        u32 version;
        int ret;

        /* Setup the queue entries for command queue use */
        hdev->hw.cmq.csq.desc_num = HCLGE_NIC_CMQ_DESC_NUM;
        hdev->hw.cmq.crq.desc_num = HCLGE_NIC_CMQ_DESC_NUM;

        /* Setup the locks for the command queues */
        spin_lock_init(&hdev->hw.cmq.csq.lock);
        spin_lock_init(&hdev->hw.cmq.crq.lock);

        /* Setup Tx write back timeout */
        hdev->hw.cmq.tx_timeout = HCLGE_CMDQ_TX_TIMEOUT;

        /* Setup queue rings */
        ret = hclge_init_cmd_queue(hdev, HCLGE_TYPE_CSQ);
        if (ret) {
                dev_err(&hdev->pdev->dev,
                        "CSQ ring setup error %d\n", ret);
                return ret;
        }

        ret = hclge_init_cmd_queue(hdev, HCLGE_TYPE_CRQ);
        if (ret) {
                dev_err(&hdev->pdev->dev,
                        "CRQ ring setup error %d\n", ret);
                goto err_csq;
        }

        hclge_cmd_init_regs(&hdev->hw);

        ret = hclge_cmd_query_firmware_version(&hdev->hw, &version);
        if (ret) {
                dev_err(&hdev->pdev->dev,
                        "firmware version query failed %d\n", ret);
                /* free both rings rather than leaking the descriptors
                 * on this error path
                 */
                hclge_free_cmd_desc(&hdev->hw.cmq.crq);
                goto err_csq;
        }
        hdev->fw_version = version;

        dev_info(&hdev->pdev->dev, "The firmware version is %08x\n", version);

        return 0;

err_csq:
        hclge_free_cmd_desc(&hdev->hw.cmq.csq);
        return ret;
}

static void hclge_destroy_queue(struct hclge_cmq_ring *ring)
{
        spin_lock_bh(&ring->lock);
        hclge_free_cmd_desc(ring);
        spin_unlock_bh(&ring->lock);
}

void hclge_destroy_cmd_queue(struct hclge_hw *hw)
{
        hclge_destroy_queue(&hw->cmq.csq);
        hclge_destroy_queue(&hw->cmq.crq);
}
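
The firmware-version query above is the canonical caller of hclge_cmd_send();
reduced to its skeleton, a synchronous single-descriptor query looks like the
sketch below. The opcode reuses HCLGE_OPC_QUERY_FW_VER purely as a placeholder,
and the payload layout is an assumption made for illustration:

/* hypothetical skeleton, not part of this patch */
static int hclge_query_example(struct hclge_hw *hw, u32 *out)
{
        struct hclge_desc desc;
        int ret;

        /* is_read = true: the WR flag is set so firmware writes data back */
        hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_QUERY_FW_VER, true);

        ret = hclge_cmd_send(hw, &desc, 1);     /* one descriptor, sync */
        if (ret)
                return ret;                     /* -EBUSY, -EAGAIN or -EIO */

        /* assumes the first payload word is a __le32 value */
        *out = le32_to_cpu(*(__le32 *)desc.data);
        return 0;
}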
/*
 * Copyright (c) 2016~2017 Hisilicon Limited.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 */

#include <linux/etherdevice.h>
#include <linux/kernel.h>

#include "hclge_cmd.h"
#include "hclge_main.h"
#include "hclge_mdio.h"

enum hclge_mdio_c22_op_seq {
        HCLGE_MDIO_C22_WRITE = 1,
        HCLGE_MDIO_C22_READ = 2
};

#define HCLGE_MDIO_CTRL_START_B 0
#define HCLGE_MDIO_CTRL_ST_S    1
#define HCLGE_MDIO_CTRL_ST_M    (0x3 << HCLGE_MDIO_CTRL_ST_S)
#define HCLGE_MDIO_CTRL_OP_S    3
#define HCLGE_MDIO_CTRL_OP_M    (0x3 << HCLGE_MDIO_CTRL_OP_S)

#define HCLGE_MDIO_PHYID_S      0
#define HCLGE_MDIO_PHYID_M      (0x1f << HCLGE_MDIO_PHYID_S)

#define HCLGE_MDIO_PHYREG_S     0
#define HCLGE_MDIO_PHYREG_M     (0x1f << HCLGE_MDIO_PHYREG_S)

#define HCLGE_MDIO_STA_B        0

struct hclge_mdio_cfg_cmd {
        u8 ctrl_bit;
        u8 phyid;
        u8 phyad;
        u8 rsvd;
        __le16 reserve;
        __le16 data_wr;
        __le16 data_rd;
        __le16 sta;
};
static int hclge_mdio_write(struct mii_bus *bus, int phyid, int regnum,
                            u16 data)
{
        struct hclge_mdio_cfg_cmd *mdio_cmd;
        struct hclge_dev *hdev = bus->priv;
        struct hclge_desc desc;
        int ret;

        hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_MDIO_CONFIG, false);

        mdio_cmd = (struct hclge_mdio_cfg_cmd *)desc.data;

        hnae_set_field(mdio_cmd->phyid, HCLGE_MDIO_PHYID_M,
                       HCLGE_MDIO_PHYID_S, phyid);
        hnae_set_field(mdio_cmd->phyad, HCLGE_MDIO_PHYREG_M,
                       HCLGE_MDIO_PHYREG_S, regnum);
        hnae_set_bit(mdio_cmd->ctrl_bit, HCLGE_MDIO_CTRL_START_B, 1);
        hnae_set_field(mdio_cmd->ctrl_bit, HCLGE_MDIO_CTRL_ST_M,
                       HCLGE_MDIO_CTRL_ST_S, 1);
        hnae_set_field(mdio_cmd->ctrl_bit, HCLGE_MDIO_CTRL_OP_M,
                       HCLGE_MDIO_CTRL_OP_S, HCLGE_MDIO_C22_WRITE);

        mdio_cmd->data_wr = cpu_to_le16(data);

        ret = hclge_cmd_send(&hdev->hw, &desc, 1);
        if (ret) {
                dev_err(&hdev->pdev->dev,
                        "mdio write fail when sending cmd, status is %d.\n",
                        ret);
                return ret;
        }

        return 0;
}
static int hclge_mdio_read(struct mii_bus *bus, int phyid, int regnum)
{
        struct hclge_mdio_cfg_cmd *mdio_cmd;
        struct hclge_dev *hdev = bus->priv;
        struct hclge_desc desc;
        int ret;

        hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_MDIO_CONFIG, true);

        mdio_cmd = (struct hclge_mdio_cfg_cmd *)desc.data;

        hnae_set_field(mdio_cmd->phyid, HCLGE_MDIO_PHYID_M,
                       HCLGE_MDIO_PHYID_S, phyid);
        hnae_set_field(mdio_cmd->phyad, HCLGE_MDIO_PHYREG_M,
                       HCLGE_MDIO_PHYREG_S, regnum);
        hnae_set_bit(mdio_cmd->ctrl_bit, HCLGE_MDIO_CTRL_START_B, 1);
        hnae_set_field(mdio_cmd->ctrl_bit, HCLGE_MDIO_CTRL_ST_M,
                       HCLGE_MDIO_CTRL_ST_S, 1);
        hnae_set_field(mdio_cmd->ctrl_bit, HCLGE_MDIO_CTRL_OP_M,
                       HCLGE_MDIO_CTRL_OP_S, HCLGE_MDIO_C22_READ);

        /* Read out phy data */
        ret = hclge_cmd_send(&hdev->hw, &desc, 1);
        if (ret) {
                dev_err(&hdev->pdev->dev,
                        "mdio read fail when getting data, status is %d.\n",
                        ret);
                return ret;
        }

        if (hnae_get_bit(le16_to_cpu(mdio_cmd->sta), HCLGE_MDIO_STA_B)) {
                dev_err(&hdev->pdev->dev, "mdio read data error\n");
                return -EIO;
        }

        return le16_to_cpu(mdio_cmd->data_rd);
}
int hclge_mac_mdio_config(struct hclge_dev *hdev)
{
        struct hclge_mac *mac = &hdev->hw.mac;
        struct phy_device *phydev;
        struct mii_bus *mdio_bus;
        int ret;

        if (hdev->hw.mac.phy_addr >= PHY_MAX_ADDR)
                return 0;

        mdio_bus = devm_mdiobus_alloc(&hdev->pdev->dev);
        if (!mdio_bus)
                return -ENOMEM;

        mdio_bus->name = "hisilicon MII bus";
        mdio_bus->read = hclge_mdio_read;
        mdio_bus->write = hclge_mdio_write;
        snprintf(mdio_bus->id, MII_BUS_ID_SIZE, "%s-%s", "mii",
                 dev_name(&hdev->pdev->dev));

        mdio_bus->parent = &hdev->pdev->dev;
        mdio_bus->priv = hdev;
        mdio_bus->phy_mask = ~(1 << mac->phy_addr);
        ret = mdiobus_register(mdio_bus);
        if (ret) {
                dev_err(mdio_bus->parent,
                        "Failed to register MDIO bus ret = %#x\n", ret);
                return ret;
        }

        phydev = mdiobus_get_phy(mdio_bus, mac->phy_addr);
        if (!phydev || IS_ERR(phydev)) {
                dev_err(mdio_bus->parent, "Failed to get phy device\n");
                mdiobus_unregister(mdio_bus);
                return -EIO;
        }

        mac->phydev = phydev;
        mac->mdio_bus = mdio_bus;

        return 0;
}
static void hclge_mac_adjust_link(struct net_device *netdev)
{
        struct hnae3_handle *h = *((void **)netdev_priv(netdev));
        struct hclge_vport *vport = hclge_get_vport(h);
        struct hclge_dev *hdev = vport->back;
        int duplex, speed;
        int ret;

        speed = netdev->phydev->speed;
        duplex = netdev->phydev->duplex;

        ret = hclge_cfg_mac_speed_dup(hdev, speed, duplex);
        if (ret)
                netdev_err(netdev, "failed to adjust link.\n");
}

int hclge_mac_start_phy(struct hclge_dev *hdev)
{
        struct net_device *netdev = hdev->vport[0].nic.netdev;
        struct phy_device *phydev = hdev->hw.mac.phydev;
        int ret;

        if (!phydev)
                return 0;

        ret = phy_connect_direct(netdev, phydev,
                                 hclge_mac_adjust_link,
                                 PHY_INTERFACE_MODE_SGMII);
        if (ret) {
                netdev_err(netdev, "phy_connect_direct err.\n");
                return ret;
        }

        phy_start(phydev);

        return 0;
}

void hclge_mac_stop_phy(struct hclge_dev *hdev)
{
        struct net_device *netdev = hdev->vport[0].nic.netdev;
        struct phy_device *phydev = netdev->phydev;

        if (!phydev)
                return;

        phy_stop(phydev);
        phy_disconnect(phydev);
}
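
For orientation, the expected call sequence of these helpers, sketched as
hypothetical glue code (the real call sites live in the collapsed
hclge_main.c and ENET patches): the MII bus is registered once at probe
time, the PHY is started on interface open, and stopped on close.

/* hypothetical glue, not part of this patch */
static int hclge_phy_bringup_sketch(struct hclge_dev *hdev)
{
        int ret;

        /* probe time: allocate + register the MII bus, cache mac->phydev */
        ret = hclge_mac_mdio_config(hdev);
        if (ret)
                return ret;

        /* ndo_open time: connect hclge_mac_adjust_link() and start the PHY;
         * ndo_stop would later call hclge_mac_stop_phy(hdev)
         */
        return hclge_mac_start_phy(hdev);
}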
/*
 * Copyright (c) 2016-2017 Hisilicon Limited.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 */

#ifndef __HCLGE_MDIO_H
#define __HCLGE_MDIO_H

int hclge_mac_mdio_config(struct hclge_dev *hdev);
int hclge_mac_start_phy(struct hclge_dev *hdev);
void hclge_mac_stop_phy(struct hclge_dev *hdev);

#endif
/*
 * Copyright (c) 2016~2017 Hisilicon Limited.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 */

#ifndef __HCLGE_TM_H
#define __HCLGE_TM_H

#include <linux/types.h>

/* MAC Pause */
#define HCLGE_TX_MAC_PAUSE_EN_MSK       BIT(0)
#define HCLGE_RX_MAC_PAUSE_EN_MSK       BIT(1)

#define HCLGE_TM_PORT_BASE_MODE_MSK     BIT(0)

/* SP or DWRR */
#define HCLGE_TM_TX_SCHD_DWRR_MSK       BIT(0)
#define HCLGE_TM_TX_SCHD_SP_MSK         (0xFE)

struct hclge_pg_to_pri_link_cmd {
        u8 pg_id;
        u8 rsvd1[3];
        u8 pri_bit_map;
};

struct hclge_qs_to_pri_link_cmd {
        __le16 qs_id;
        __le16 rsvd;
        u8 priority;
#define HCLGE_TM_QS_PRI_LINK_VLD_MSK    BIT(0)
        u8 link_vld;
};

struct hclge_nq_to_qs_link_cmd {
        __le16 nq_id;
        __le16 rsvd;
#define HCLGE_TM_Q_QS_LINK_VLD_MSK      BIT(10)
        __le16 qset_id;
};

struct hclge_pg_weight_cmd {
        u8 pg_id;
        u8 dwrr;
};

struct hclge_priority_weight_cmd {
        u8 pri_id;
        u8 dwrr;
};

struct hclge_qs_weight_cmd {
        __le16 qs_id;
        u8 dwrr;
};

#define HCLGE_TM_SHAP_IR_B_MSK  GENMASK(7, 0)
#define HCLGE_TM_SHAP_IR_B_LSH  0
#define HCLGE_TM_SHAP_IR_U_MSK  GENMASK(11, 8)
#define HCLGE_TM_SHAP_IR_U_LSH  8
#define HCLGE_TM_SHAP_IR_S_MSK  GENMASK(15, 12)
#define HCLGE_TM_SHAP_IR_S_LSH  12
#define HCLGE_TM_SHAP_BS_B_MSK  GENMASK(20, 16)
#define HCLGE_TM_SHAP_BS_B_LSH  16
#define HCLGE_TM_SHAP_BS_S_MSK  GENMASK(25, 21)
#define HCLGE_TM_SHAP_BS_S_LSH  21

enum hclge_shap_bucket {
        HCLGE_TM_SHAP_C_BUCKET = 0,
        HCLGE_TM_SHAP_P_BUCKET,
};

struct hclge_pri_shapping_cmd {
        u8 pri_id;
        u8 rsvd[3];
        __le32 pri_shapping_para;
};

struct hclge_pg_shapping_cmd {
        u8 pg_id;
        u8 rsvd[3];
        __le32 pg_shapping_para;
};

struct hclge_bp_to_qs_map_cmd {
        u8 tc_id;
        u8 rsvd[2];
        u8 qs_group_id;
        __le32 qs_bit_map;
        u32 rsvd1;
};

#define hclge_tm_set_feild(dest, string, val) \
        hnae_set_field((dest), (HCLGE_TM_SHAP_##string##_MSK), \
                       (HCLGE_TM_SHAP_##string##_LSH), val)
#define hclge_tm_get_feild(src, string) \
        hnae_get_field((src), (HCLGE_TM_SHAP_##string##_MSK), \
                       (HCLGE_TM_SHAP_##string##_LSH))

int hclge_tm_schd_init(struct hclge_dev *hdev);
int hclge_pause_setup_hw(struct hclge_dev *hdev);

#endif
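
The *_MSK/*_LSH pairs above split a 26-bit shaper word into rate fields
(IR_B/IR_U/IR_S) and bucket-size fields (BS_B/BS_S). A hypothetical
illustration of packing such a word with the helper macro, assuming
hnae_set_field() modifies its first argument in place; the numeric values
are arbitrary examples:

/* hypothetical illustration, not part of this patch */
static u32 hclge_tm_pack_shaper_example(void)
{
        u32 shapping_para = 0;

        hclge_tm_set_feild(shapping_para, IR_B, 0x7f);  /* bits 7:0   */
        hclge_tm_set_feild(shapping_para, IR_U, 0x3);   /* bits 11:8  */
        hclge_tm_set_feild(shapping_para, IR_S, 0x2);   /* bits 15:12 */
        hclge_tm_set_feild(shapping_para, BS_B, 0x1f);  /* bits 20:16 */
        hclge_tm_set_feild(shapping_para, BS_S, 0x14);  /* bits 25:21 */

        /* would be stored into pri_shapping_para / pg_shapping_para */
        return shapping_para;
}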