Commit a6deaa99 authored by David S. Miller

Merge branch 'octeontx2-af-Add-RVU-Admin-Function-driver'

Sunil Goutham says:

====================
octeontx2-af: Add RVU Admin Function driver

Resource virtualization unit (RVU) on Marvell's OcteonTX2 SoC maps HW
resources from the network, crypto and other functional blocks into
PCI-compatible physical and virtual functions. Each functional block in
turn has multiple local functions (LFs) for provisioning to PCI devices.
RVU supports multiple PCIe SRIOV physical functions (PFs) and virtual
functions (VFs). PF0 is called the administrative / admin function (AF)
and has privileges to provision the RVU functional blocks' LFs to each of
the PFs/VFs.

RVU managed networking functional blocks
 - Network pool allocator (NPA)
 - Network interface controller (NIX)
 - Network parser CAM (NPC)
 - Schedule/Synchronize/Order unit (SSO)

RVU managed non-networking functional blocks
 - Crypto accelerator (CPT)
 - Scheduled timers unit (TIM)
 - Schedule/Synchronize/Order unit (SSO)
   Used for both networking and non-networking use cases
 - Compression (upcoming in future variants of the silicons)

Resource provisioning examples
 - A PF/VF with NIX-LF & NPA-LF resources works as a pure network device.
 - A PF/VF with a CPT-LF resource works as a pure crypto offload device.

This admin function driver neither receives nor processes any data, i.e.
it does no I/O; it is a configuration-only driver.

PF/VFs communicate with the AF via a shared memory region (mailbox). Upon
receiving a request from a PF/VF, the AF does resource provisioning and
other HW configuration. The AF is always attached to the host, but PF/VFs
may be used by the host kernel itself, attached to VMs, or handed to
userspace applications like DPDK. So the AF has to handle
provisioning/configuration requests sent by any device from any domain.

This patch series adds logic for the following
 - RVU AF driver with functional blocks provisioning support.
 - Mailbox infrastructure for communication between AF and PFs.
 - CGX (MAC controller) driver which communicates with firmware for
   managing physical ethernet interfaces. AF collects info from this
   driver and forwards it to the PF/VFs using these interfaces.

This is the first set of patches out of 80+ patches.

Changes from v8:
 1 Removed unnecessary typecasts in entire series
   - Suggested by David Miller
 2 Added COMPILE_TEST to AF driver
   - Suggested by Arnd Bergmann
 3 Changed udelay() to usleep_range() in rvu_poll_reg
   - Suggested by Arnd Bergmann
 4 MSIX vector base IOMMU mapping is done using dma_map_resource()
   API instead of dma_map_single() as it accepts physical address.
   - Issue pointed by Arnd Bergmann

Changes from v7:
 1 Removed unnecessary typecasts in mbox infra code.
   - Suggested by David Miller
 2 Fixed MAINTAINERS patch
   - Suggested by Joe Perches

Changes from v6:
 Fixed ordering of local variables from longest to shortest line.
   - Suggested by David Miller

Changes from v5:
 Modified bitfield based command structures to bitmasks for communication
 with firmware, to address endianness issues.
   - Suggested by Arnd Bergmann

Changes from v4:
 1 Removed module author/version/description from CGX driver as it's now
   merged with AF driver module.
   - Suggested by Arnd Bergmann
 2 Added big-endian bitfields for CGX's kernel <=> firmware communication
   command structures.
   - Suggested by Arnd Bergmann

Changes from v3:
 Moved driver from drivers/soc to drivers/net/ethernet
   - Suggested by Arnd Bergmann
 https://patchwork.kernel.org/cover/10587635/

Changes from v2:
 No changes; resubmitted with the netdev mailing list in the loop.
   - Suggested by Arnd Bergmann and Andrew Lunn

Changes from v1:
 1 Merged RVU admin function and CGX drivers into a single module
   - Suggested by Arnd Bergmann
 2 Pulled mbox communication APIs into a separate module to remove
   admin function driver dependency in a VM where AF is not attached.
   - Suggested by Arnd Bergmann
====================
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
parents e40a826a 1f2cf1b3
@@ -8847,6 +8847,15 @@ S: Supported
F: drivers/mmc/host/sdhci-xenon*
F: Documentation/devicetree/bindings/mmc/marvell,xenon-sdhci.txt
MARVELL OCTEONTX2 RVU ADMIN FUNCTION DRIVER
M: Sunil Goutham <sgoutham@marvell.com>
M: Linu Cherian <lcherian@marvell.com>
M: Geetha sowjanya <gakula@marvell.com>
M: Jerin Jacob <jerinj@marvell.com>
L: netdev@vger.kernel.org
S: Supported
F: drivers/net/ethernet/marvell/octeontx2/af/
MATROX FRAMEBUFFER DRIVER
L: linux-fbdev@vger.kernel.org
S: Orphan
@@ -167,4 +167,7 @@ config SKY2_DEBUG
If unsure, say N.
source "drivers/net/ethernet/marvell/octeontx2/Kconfig"
endif # NET_VENDOR_MARVELL
@@ -11,3 +11,4 @@ obj-$(CONFIG_MVPP2) += mvpp2/
obj-$(CONFIG_PXA168_ETH) += pxa168_eth.o
obj-$(CONFIG_SKGE) += skge.o
obj-$(CONFIG_SKY2) += sky2.o
obj-y += octeontx2/
#
# Marvell OcteonTX2 drivers configuration
#
config OCTEONTX2_MBOX
	tristate

config OCTEONTX2_AF
	tristate "Marvell OcteonTX2 RVU Admin Function driver"
	select OCTEONTX2_MBOX
	depends on (64BIT && COMPILE_TEST) || ARM64
	depends on PCI
	help
	  This driver supports Marvell's OcteonTX2 Resource Virtualization
	  Unit's admin function manager which manages all RVU HW resources
	  and provides a medium to other PF/VFs to configure HW. Should be
	  enabled for other RVU device drivers to work.
# SPDX-License-Identifier: GPL-2.0
#
# Makefile for Marvell OcteonTX2 device drivers.
#
obj-$(CONFIG_OCTEONTX2_AF) += af/
# SPDX-License-Identifier: GPL-2.0
#
# Makefile for Marvell's OcteonTX2 RVU Admin Function driver
#
obj-$(CONFIG_OCTEONTX2_MBOX) += octeontx2_mbox.o
obj-$(CONFIG_OCTEONTX2_AF) += octeontx2_af.o
octeontx2_mbox-y := mbox.o
octeontx2_af-y := cgx.o rvu.o rvu_cgx.o
/* SPDX-License-Identifier: GPL-2.0
* Marvell OcteonTx2 CGX driver
*
* Copyright (C) 2018 Marvell International Ltd.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef CGX_H
#define CGX_H
#include "cgx_fw_if.h"
/* PCI device IDs */
#define PCI_DEVID_OCTEONTX2_CGX 0xA059
/* PCI BAR nos */
#define PCI_CFG_REG_BAR_NUM 0
#define MAX_CGX 3
#define MAX_LMAC_PER_CGX 4
#define CGX_OFFSET(x) ((x) * MAX_LMAC_PER_CGX)
/* Registers */
#define CGXX_CMRX_INT 0x040
#define FW_CGX_INT BIT_ULL(1)
#define CGXX_CMRX_INT_ENA_W1S 0x058
#define CGXX_CMRX_RX_ID_MAP 0x060
#define CGXX_CMRX_RX_LMACS 0x128
#define CGXX_SCRATCH0_REG 0x1050
#define CGXX_SCRATCH1_REG 0x1058
#define CGX_CONST 0x2000
#define CGX_COMMAND_REG CGXX_SCRATCH1_REG
#define CGX_EVENT_REG CGXX_SCRATCH0_REG
#define CGX_CMD_TIMEOUT 2200 /* msecs */
#define CGX_NVEC 37
#define CGX_LMAC_FWI 0
struct cgx_link_event {
struct cgx_lnk_sts lstat;
u8 cgx_id;
u8 lmac_id;
};
/**
* struct cgx_event_cb
* @notify_link_chg: callback for link change notification
* @data: data passed to callback function
*/
struct cgx_event_cb {
int (*notify_link_chg)(struct cgx_link_event *event, void *data);
void *data;
};
extern struct pci_driver cgx_driver;
int cgx_get_cgx_cnt(void);
int cgx_get_lmac_cnt(void *cgxd);
void *cgx_get_pdata(int cgx_id);
int cgx_lmac_evh_register(struct cgx_event_cb *cb, void *cgxd, int lmac_id);
#endif /* CGX_H */
/* SPDX-License-Identifier: GPL-2.0
* Marvell OcteonTx2 CGX driver
*
* Copyright (C) 2018 Marvell International Ltd.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef __CGX_FW_INTF_H__
#define __CGX_FW_INTF_H__
#include <linux/bitops.h>
#include <linux/bitfield.h>
#define CGX_FIRMWARE_MAJOR_VER 1
#define CGX_FIRMWARE_MINOR_VER 0
#define CGX_EVENT_ACK 1UL
/* CGX error types. Set for cmd response status as CGX_STAT_FAIL */
enum cgx_error_type {
CGX_ERR_NONE,
CGX_ERR_LMAC_NOT_ENABLED,
CGX_ERR_LMAC_MODE_INVALID,
CGX_ERR_REQUEST_ID_INVALID,
CGX_ERR_PREV_ACK_NOT_CLEAR,
CGX_ERR_PHY_LINK_DOWN,
CGX_ERR_PCS_RESET_FAIL,
CGX_ERR_AN_CPT_FAIL,
CGX_ERR_TX_NOT_IDLE,
CGX_ERR_RX_NOT_IDLE,
CGX_ERR_SPUX_BR_BLKLOCK_FAIL,
CGX_ERR_SPUX_RX_ALIGN_FAIL,
CGX_ERR_SPUX_TX_FAULT,
CGX_ERR_SPUX_RX_FAULT,
CGX_ERR_SPUX_RESET_FAIL,
CGX_ERR_SPUX_AN_RESET_FAIL,
CGX_ERR_SPUX_USX_AN_RESET_FAIL,
CGX_ERR_SMUX_RX_LINK_NOT_OK,
CGX_ERR_PCS_RECV_LINK_FAIL,
CGX_ERR_TRAINING_FAIL,
CGX_ERR_RX_EQU_FAIL,
CGX_ERR_SPUX_BER_FAIL,
CGX_ERR_SPUX_RSFEC_ALGN_FAIL, /* = 22 */
};
/* LINK speed types */
enum cgx_link_speed {
CGX_LINK_NONE,
CGX_LINK_10M,
CGX_LINK_100M,
CGX_LINK_1G,
CGX_LINK_2HG,
CGX_LINK_5G,
CGX_LINK_10G,
CGX_LINK_20G,
CGX_LINK_25G,
CGX_LINK_40G,
CGX_LINK_50G,
CGX_LINK_100G,
CGX_LINK_SPEED_MAX,
};
/* REQUEST ID types. Input to firmware */
enum cgx_cmd_id {
CGX_CMD_NONE,
CGX_CMD_GET_FW_VER,
CGX_CMD_GET_MAC_ADDR,
CGX_CMD_SET_MTU,
CGX_CMD_GET_LINK_STS, /* optional to user */
CGX_CMD_LINK_BRING_UP,
CGX_CMD_LINK_BRING_DOWN,
CGX_CMD_INTERNAL_LBK,
CGX_CMD_EXTERNAL_LBK,
CGX_CMD_HIGIG,
CGX_CMD_LINK_STATE_CHANGE,
CGX_CMD_MODE_CHANGE, /* hot plug support */
CGX_CMD_INTF_SHUTDOWN,
CGX_CMD_IRQ_ENABLE,
CGX_CMD_IRQ_DISABLE,
};
/* async event ids */
enum cgx_evt_id {
CGX_EVT_NONE,
CGX_EVT_LINK_CHANGE,
};
/* event types - cause of interrupt */
enum cgx_evt_type {
CGX_EVT_ASYNC,
CGX_EVT_CMD_RESP
};
enum cgx_stat {
CGX_STAT_SUCCESS,
CGX_STAT_FAIL
};
enum cgx_cmd_own {
CGX_CMD_OWN_NS,
CGX_CMD_OWN_FIRMWARE,
};
/* m - bit mask
* y - value to be written in the bitrange
* x - input value whose bitrange to be modified
*/
#define FIELD_SET(m, y, x) \
(((x) & ~(m)) | \
FIELD_PREP((m), (y)))
/* scratchx(0) CSR used for ATF->non-secure SW communication.
* This acts as the status register
* Provides details on command ack/status, command response, error details
*/
#define EVTREG_ACK BIT_ULL(0)
#define EVTREG_EVT_TYPE BIT_ULL(1)
#define EVTREG_STAT BIT_ULL(2)
#define EVTREG_ID GENMASK_ULL(8, 3)
/* Response to command IDs with command status as CGX_STAT_FAIL
*
* Not applicable for commands :
* CGX_CMD_LINK_BRING_UP/DOWN/CGX_EVT_LINK_CHANGE
*/
#define EVTREG_ERRTYPE GENMASK_ULL(18, 9)
/* Response to cmd ID as CGX_CMD_GET_FW_VER with cmd status as
* CGX_STAT_SUCCESS
*/
#define RESP_MAJOR_VER GENMASK_ULL(12, 9)
#define RESP_MINOR_VER GENMASK_ULL(16, 13)
/* Response to cmd ID as CGX_CMD_GET_MAC_ADDR with cmd status as
* CGX_STAT_SUCCESS
*/
#define RESP_MAC_ADDR GENMASK_ULL(56, 9)
/* Response to cmd ID - CGX_CMD_LINK_BRING_UP/DOWN, event ID CGX_EVT_LINK_CHANGE
* status can be either CGX_STAT_FAIL or CGX_STAT_SUCCESS
*
* In case of CGX_STAT_FAIL, it indicates CGX configuration failed
* when processing link up/down/change command.
* Both err_type and current link status will be updated
*
* In case of CGX_STAT_SUCCESS, err_type will be CGX_ERR_NONE and current
* link status will be updated
*/
struct cgx_lnk_sts {
uint64_t reserved1:9;
uint64_t link_up:1;
uint64_t full_duplex:1;
uint64_t speed:4; /* cgx_link_speed */
uint64_t err_type:10;
uint64_t reserved2:39;
};
#define RESP_LINKSTAT_UP GENMASK_ULL(9, 9)
#define RESP_LINKSTAT_FDUPLEX GENMASK_ULL(10, 10)
#define RESP_LINKSTAT_SPEED GENMASK_ULL(14, 11)
#define RESP_LINKSTAT_ERRTYPE GENMASK_ULL(24, 15)
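/* Illustrative sketch, not part of the patch: unpacking a link-status word
 * into struct cgx_lnk_sts with the RESP_LINKSTAT_ masks above (they mirror
 * the bitfield layout), using FIELD_GET() from <linux/bitfield.h> which
 * this header includes. The helper name is hypothetical.
 */
static inline void example_link_sts_decode(u64 event, struct cgx_lnk_sts *sts)
{
	sts->link_up = FIELD_GET(RESP_LINKSTAT_UP, event);
	sts->full_duplex = FIELD_GET(RESP_LINKSTAT_FDUPLEX, event);
	sts->speed = FIELD_GET(RESP_LINKSTAT_SPEED, event); /* cgx_link_speed */
	sts->err_type = FIELD_GET(RESP_LINKSTAT_ERRTYPE, event);
}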
/* scratchx(1) CSR used for non-secure SW->ATF communication
* This CSR acts as a command register
*/
#define CMDREG_OWN BIT_ULL(0)
#define CMDREG_ID GENMASK_ULL(7, 2)
/* Any command using enable/disable as an argument need
* to set this bitfield.
* Ex: Loopback, HiGig...
*/
#define CMDREG_ENABLE BIT_ULL(8)
/* command argument to be passed for cmd ID - CGX_CMD_SET_MTU */
#define CMDMTU_SIZE GENMASK_ULL(23, 8)
/* command argument to be passed for cmd ID - CGX_CMD_LINK_CHANGE */
#define CMDLINKCHANGE_LINKUP BIT_ULL(8)
#define CMDLINKCHANGE_FULLDPLX BIT_ULL(9)
#define CMDLINKCHANGE_SPEED GENMASK_ULL(13, 10)
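/* Illustrative sketch, not part of the patch: composing a CGX_CMD_SET_MTU
 * command word with FIELD_SET() and the CMDREG_ and CMDMTU_SIZE masks
 * above. The helper name and starting from the current CSR value are
 * assumptions made for illustration only.
 */
static inline u64 example_cgx_fw_cmd_set_mtu(u64 cmd, u16 mtu)
{
	cmd = FIELD_SET(CMDREG_ID, CGX_CMD_SET_MTU, cmd);
	cmd = FIELD_SET(CMDMTU_SIZE, mtu, cmd);
	/* Hand ownership to firmware last, per enum cgx_cmd_own */
	return FIELD_SET(CMDREG_OWN, CGX_CMD_OWN_FIRMWARE, cmd);
}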
#endif /* __CGX_FW_INTF_H__ */
// SPDX-License-Identifier: GPL-2.0
/* Marvell OcteonTx2 RVU Admin Function driver
*
* Copyright (C) 2018 Marvell International Ltd.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/module.h>
#include <linux/interrupt.h>
#include <linux/pci.h>
#include "rvu_reg.h"
#include "mbox.h"
static const u16 msgs_offset = ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
void otx2_mbox_reset(struct otx2_mbox *mbox, int devid)
{
struct otx2_mbox_dev *mdev = &mbox->dev[devid];
struct mbox_hdr *tx_hdr, *rx_hdr;
tx_hdr = mdev->mbase + mbox->tx_start;
rx_hdr = mdev->mbase + mbox->rx_start;
spin_lock(&mdev->mbox_lock);
mdev->msg_size = 0;
mdev->rsp_size = 0;
tx_hdr->num_msgs = 0;
rx_hdr->num_msgs = 0;
spin_unlock(&mdev->mbox_lock);
}
EXPORT_SYMBOL(otx2_mbox_reset);
void otx2_mbox_destroy(struct otx2_mbox *mbox)
{
mbox->reg_base = NULL;
mbox->hwbase = NULL;
kfree(mbox->dev);
mbox->dev = NULL;
}
EXPORT_SYMBOL(otx2_mbox_destroy);
int otx2_mbox_init(struct otx2_mbox *mbox, void *hwbase, struct pci_dev *pdev,
void *reg_base, int direction, int ndevs)
{
struct otx2_mbox_dev *mdev;
int devid;
switch (direction) {
case MBOX_DIR_AFPF:
case MBOX_DIR_PFVF:
mbox->tx_start = MBOX_DOWN_TX_START;
mbox->rx_start = MBOX_DOWN_RX_START;
mbox->tx_size = MBOX_DOWN_TX_SIZE;
mbox->rx_size = MBOX_DOWN_RX_SIZE;
break;
case MBOX_DIR_PFAF:
case MBOX_DIR_VFPF:
mbox->tx_start = MBOX_DOWN_RX_START;
mbox->rx_start = MBOX_DOWN_TX_START;
mbox->tx_size = MBOX_DOWN_RX_SIZE;
mbox->rx_size = MBOX_DOWN_TX_SIZE;
break;
case MBOX_DIR_AFPF_UP:
case MBOX_DIR_PFVF_UP:
mbox->tx_start = MBOX_UP_TX_START;
mbox->rx_start = MBOX_UP_RX_START;
mbox->tx_size = MBOX_UP_TX_SIZE;
mbox->rx_size = MBOX_UP_RX_SIZE;
break;
case MBOX_DIR_PFAF_UP:
case MBOX_DIR_VFPF_UP:
mbox->tx_start = MBOX_UP_RX_START;
mbox->rx_start = MBOX_UP_TX_START;
mbox->tx_size = MBOX_UP_RX_SIZE;
mbox->rx_size = MBOX_UP_TX_SIZE;
break;
default:
return -ENODEV;
}
switch (direction) {
case MBOX_DIR_AFPF:
case MBOX_DIR_AFPF_UP:
mbox->trigger = RVU_AF_AFPF_MBOX0;
mbox->tr_shift = 4;
break;
case MBOX_DIR_PFAF:
case MBOX_DIR_PFAF_UP:
mbox->trigger = RVU_PF_PFAF_MBOX1;
mbox->tr_shift = 0;
break;
case MBOX_DIR_PFVF:
case MBOX_DIR_PFVF_UP:
mbox->trigger = RVU_PF_VFX_PFVF_MBOX0;
mbox->tr_shift = 12;
break;
case MBOX_DIR_VFPF:
case MBOX_DIR_VFPF_UP:
mbox->trigger = RVU_VF_VFPF_MBOX1;
mbox->tr_shift = 0;
break;
default:
return -ENODEV;
}
mbox->reg_base = reg_base;
mbox->hwbase = hwbase;
mbox->pdev = pdev;
mbox->dev = kcalloc(ndevs, sizeof(struct otx2_mbox_dev), GFP_KERNEL);
if (!mbox->dev) {
otx2_mbox_destroy(mbox);
return -ENOMEM;
}
mbox->ndevs = ndevs;
for (devid = 0; devid < ndevs; devid++) {
mdev = &mbox->dev[devid];
mdev->mbase = mbox->hwbase + (devid * MBOX_SIZE);
spin_lock_init(&mdev->mbox_lock);
/* Init header to reset value */
otx2_mbox_reset(mbox, devid);
}
return 0;
}
EXPORT_SYMBOL(otx2_mbox_init);
int otx2_mbox_wait_for_rsp(struct otx2_mbox *mbox, int devid)
{
struct otx2_mbox_dev *mdev = &mbox->dev[devid];
int timeout = 0, sleep = 1;
while (mdev->num_msgs != mdev->msgs_acked) {
msleep(sleep);
timeout += sleep;
if (timeout >= MBOX_RSP_TIMEOUT)
return -EIO;
}
return 0;
}
EXPORT_SYMBOL(otx2_mbox_wait_for_rsp);
int otx2_mbox_busy_poll_for_rsp(struct otx2_mbox *mbox, int devid)
{
struct otx2_mbox_dev *mdev = &mbox->dev[devid];
unsigned long timeout = jiffies + 1 * HZ;
while (!time_after(jiffies, timeout)) {
if (mdev->num_msgs == mdev->msgs_acked)
return 0;
cpu_relax();
}
return -EIO;
}
EXPORT_SYMBOL(otx2_mbox_busy_poll_for_rsp);
void otx2_mbox_msg_send(struct otx2_mbox *mbox, int devid)
{
struct otx2_mbox_dev *mdev = &mbox->dev[devid];
struct mbox_hdr *tx_hdr, *rx_hdr;
tx_hdr = mdev->mbase + mbox->tx_start;
rx_hdr = mdev->mbase + mbox->rx_start;
spin_lock(&mdev->mbox_lock);
/* Reset header for next messages */
mdev->msg_size = 0;
mdev->rsp_size = 0;
mdev->msgs_acked = 0;
/* Sync mbox data into memory */
smp_wmb();
/* num_msgs != 0 signals to the peer that the buffer has a number of
* messages. So this should be written after writing all the messages
* to the shared memory.
*/
tx_hdr->num_msgs = mdev->num_msgs;
rx_hdr->num_msgs = 0;
spin_unlock(&mdev->mbox_lock);
/* The interrupt should be fired after num_msgs is written
* to the shared memory
*/
writeq(1, (void __iomem *)mbox->reg_base +
(mbox->trigger | (devid << mbox->tr_shift)));
}
EXPORT_SYMBOL(otx2_mbox_msg_send);
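/* Worked example of the trigger computation above (an illustration, not
 * code from the patch): for MBOX_DIR_AFPF, otx2_mbox_init() above sets
 * trigger = RVU_AF_AFPF_MBOX0 and tr_shift = 4, so devid 3 writes to
 * RVU_AF_AFPF_MBOX0 | (3 << 4), i.e. a per-device (PF3) offset within the
 * trigger CSR range.
 */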
struct mbox_msghdr *otx2_mbox_alloc_msg_rsp(struct otx2_mbox *mbox, int devid,
int size, int size_rsp)
{
struct otx2_mbox_dev *mdev = &mbox->dev[devid];
struct mbox_msghdr *msghdr = NULL;
spin_lock(&mdev->mbox_lock);
size = ALIGN(size, MBOX_MSG_ALIGN);
size_rsp = ALIGN(size_rsp, MBOX_MSG_ALIGN);
/* Check if there is space in mailbox */
if ((mdev->msg_size + size) > mbox->tx_size - msgs_offset)
goto exit;
if ((mdev->rsp_size + size_rsp) > mbox->rx_size - msgs_offset)
goto exit;
if (mdev->msg_size == 0)
mdev->num_msgs = 0;
mdev->num_msgs++;
msghdr = mdev->mbase + mbox->tx_start + msgs_offset + mdev->msg_size;
/* Clear the whole msg region */
memset(msghdr, 0, sizeof(*msghdr) + size);
/* Init message header with reset values */
msghdr->ver = OTX2_MBOX_VERSION;
mdev->msg_size += size;
mdev->rsp_size += size_rsp;
msghdr->next_msgoff = mdev->msg_size + msgs_offset;
exit:
spin_unlock(&mdev->mbox_lock);
return msghdr;
}
EXPORT_SYMBOL(otx2_mbox_alloc_msg_rsp);
struct mbox_msghdr *otx2_mbox_get_rsp(struct otx2_mbox *mbox, int devid,
struct mbox_msghdr *msg)
{
unsigned long imsg = mbox->tx_start + msgs_offset;
unsigned long irsp = mbox->rx_start + msgs_offset;
struct otx2_mbox_dev *mdev = &mbox->dev[devid];
u16 msgs;
if (mdev->num_msgs != mdev->msgs_acked)
return ERR_PTR(-ENODEV);
for (msgs = 0; msgs < mdev->msgs_acked; msgs++) {
struct mbox_msghdr *pmsg = mdev->mbase + imsg;
struct mbox_msghdr *prsp = mdev->mbase + irsp;
if (msg == pmsg) {
if (pmsg->id != prsp->id)
return ERR_PTR(-ENODEV);
return prsp;
}
imsg = pmsg->next_msgoff;
irsp = prsp->next_msgoff;
}
return ERR_PTR(-ENODEV);
}
EXPORT_SYMBOL(otx2_mbox_get_rsp);
int
otx2_reply_invalid_msg(struct otx2_mbox *mbox, int devid, u16 pcifunc, u16 id)
{
struct msg_rsp *rsp;
rsp = (struct msg_rsp *)
otx2_mbox_alloc_msg(mbox, devid, sizeof(*rsp));
if (!rsp)
return -ENOMEM;
rsp->hdr.id = id;
rsp->hdr.sig = OTX2_MBOX_RSP_SIG;
rsp->hdr.rc = MBOX_MSG_INVALID;
rsp->hdr.pcifunc = pcifunc;
return 0;
}
EXPORT_SYMBOL(otx2_reply_invalid_msg);
bool otx2_mbox_nonempty(struct otx2_mbox *mbox, int devid)
{
struct otx2_mbox_dev *mdev = &mbox->dev[devid];
bool ret;
spin_lock(&mdev->mbox_lock);
ret = mdev->num_msgs != 0;
spin_unlock(&mdev->mbox_lock);
return ret;
}
EXPORT_SYMBOL(otx2_mbox_nonempty);
const char *otx2_mbox_id2name(u16 id)
{
switch (id) {
#define M(_name, _id, _1, _2) case _id: return # _name;
MBOX_MESSAGES
#undef M
default:
return "INVALID ID";
}
}
EXPORT_SYMBOL(otx2_mbox_id2name);
MODULE_AUTHOR("Marvell International Ltd.");
MODULE_LICENSE("GPL v2");
/* SPDX-License-Identifier: GPL-2.0
* Marvell OcteonTx2 RVU Admin Function driver
*
* Copyright (C) 2018 Marvell International Ltd.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef MBOX_H
#define MBOX_H
#include <linux/etherdevice.h>
#include <linux/sizes.h>
#include "rvu_struct.h"
#define MBOX_SIZE SZ_64K
/* AF/PF: PF initiated, PF/VF: VF initiated */
#define MBOX_DOWN_RX_START 0
#define MBOX_DOWN_RX_SIZE (46 * SZ_1K)
#define MBOX_DOWN_TX_START (MBOX_DOWN_RX_START + MBOX_DOWN_RX_SIZE)
#define MBOX_DOWN_TX_SIZE (16 * SZ_1K)
/* AF/PF: AF initiated, PF/VF PF initiated */
#define MBOX_UP_RX_START (MBOX_DOWN_TX_START + MBOX_DOWN_TX_SIZE)
#define MBOX_UP_RX_SIZE SZ_1K
#define MBOX_UP_TX_START (MBOX_UP_RX_START + MBOX_UP_RX_SIZE)
#define MBOX_UP_TX_SIZE SZ_1K
#if MBOX_UP_TX_SIZE + MBOX_UP_TX_START != MBOX_SIZE
# error "incorrect mailbox area sizes"
#endif
#define INTR_MASK(pfvfs) ((pfvfs < 64) ? (BIT_ULL(pfvfs) - 1) : (~0ull))
#define MBOX_RSP_TIMEOUT 1000 /* in ms, Time to wait for mbox response */
#define MBOX_MSG_ALIGN 16 /* Align mbox msg start to 16bytes */
/* Mailbox directions */
#define MBOX_DIR_AFPF 0 /* AF replies to PF */
#define MBOX_DIR_PFAF 1 /* PF sends messages to AF */
#define MBOX_DIR_PFVF 2 /* PF replies to VF */
#define MBOX_DIR_VFPF 3 /* VF sends messages to PF */
#define MBOX_DIR_AFPF_UP 4 /* AF sends messages to PF */
#define MBOX_DIR_PFAF_UP 5 /* PF replies to AF */
#define MBOX_DIR_PFVF_UP 6 /* PF sends messages to VF */
#define MBOX_DIR_VFPF_UP 7 /* VF replies to PF */
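/* A reading aid, not part of the patch: combining the offsets above with
 * the direction table in otx2_mbox_init(), the 64KB region of an AF<->PF
 * mailbox, seen from the AF side (MBOX_DIR_AFPF), lays out as:
 *
 *   0K .. 46K  down Rx - requests from the PF (the PF's down Tx)
 *  46K .. 62K  down Tx - responses to the PF  (the PF's down Rx)
 *  62K .. 63K  up Rx   - PF replies to AF-initiated messages
 *  63K .. 64K  up Tx   - AF-initiated (up) messages
 *
 * The PFAF/VFPF directions simply swap the Tx and Rx roles.
 */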
struct otx2_mbox_dev {
void *mbase; /* This dev's mbox region */
spinlock_t mbox_lock;
u16 msg_size; /* Total msg size to be sent */
u16 rsp_size; /* Total expected rsp size, to check replies fit the Rx region */
u16 num_msgs; /* No of msgs sent or waiting for response */
u16 msgs_acked; /* No of msgs for which response is received */
};
struct otx2_mbox {
struct pci_dev *pdev;
void *hwbase; /* Mbox region advertised by HW */
void *reg_base;/* CSR base for this dev */
u64 trigger; /* Trigger mbox notification */
u16 tr_shift; /* Mbox trigger shift */
u64 rx_start; /* Offset of Rx region in mbox memory */
u64 tx_start; /* Offset of Tx region in mbox memory */
u16 rx_size; /* Size of Rx region */
u16 tx_size; /* Size of Tx region */
u16 ndevs; /* The number of peers */
struct otx2_mbox_dev *dev;
};
/* Header which precedes all mbox messages */
struct mbox_hdr {
u16 num_msgs; /* No of msgs embedded */
};
/* Header which precedes every msg and is also part of it */
struct mbox_msghdr {
u16 pcifunc; /* Who's sending this msg */
u16 id; /* Mbox message ID */
#define OTX2_MBOX_REQ_SIG (0xdead)
#define OTX2_MBOX_RSP_SIG (0xbeef)
u16 sig; /* Signature, for validating corrupted msgs */
#define OTX2_MBOX_VERSION (0x0001)
u16 ver; /* Version of msg's structure for this ID */
u16 next_msgoff; /* Offset of next msg within mailbox region */
int rc; /* Msg processed response code */
};
void otx2_mbox_reset(struct otx2_mbox *mbox, int devid);
void otx2_mbox_destroy(struct otx2_mbox *mbox);
int otx2_mbox_init(struct otx2_mbox *mbox, void __force *hwbase,
struct pci_dev *pdev, void __force *reg_base,
int direction, int ndevs);
void otx2_mbox_msg_send(struct otx2_mbox *mbox, int devid);
int otx2_mbox_wait_for_rsp(struct otx2_mbox *mbox, int devid);
int otx2_mbox_busy_poll_for_rsp(struct otx2_mbox *mbox, int devid);
struct mbox_msghdr *otx2_mbox_alloc_msg_rsp(struct otx2_mbox *mbox, int devid,
int size, int size_rsp);
struct mbox_msghdr *otx2_mbox_get_rsp(struct otx2_mbox *mbox, int devid,
struct mbox_msghdr *msg);
int otx2_reply_invalid_msg(struct otx2_mbox *mbox, int devid,
u16 pcifunc, u16 id);
bool otx2_mbox_nonempty(struct otx2_mbox *mbox, int devid);
const char *otx2_mbox_id2name(u16 id);
static inline struct mbox_msghdr *otx2_mbox_alloc_msg(struct otx2_mbox *mbox,
int devid, int size)
{
return otx2_mbox_alloc_msg_rsp(mbox, devid, size, 0);
}
/* Mailbox message types */
#define MBOX_MSG_MASK 0xFFFF
#define MBOX_MSG_INVALID 0xFFFE
#define MBOX_MSG_MAX 0xFFFF
#define MBOX_MESSAGES \
/* Generic mbox IDs (range 0x000 - 0x1FF) */ \
M(READY, 0x001, msg_req, ready_msg_rsp) \
M(ATTACH_RESOURCES, 0x002, rsrc_attach, msg_rsp) \
M(DETACH_RESOURCES, 0x003, rsrc_detach, msg_rsp) \
M(MSIX_OFFSET, 0x004, msg_req, msix_offset_rsp) \
/* CGX mbox IDs (range 0x200 - 0x3FF) */ \
/* NPA mbox IDs (range 0x400 - 0x5FF) */ \
/* SSO/SSOW mbox IDs (range 0x600 - 0x7FF) */ \
/* TIM mbox IDs (range 0x800 - 0x9FF) */ \
/* CPT mbox IDs (range 0xA00 - 0xBFF) */ \
/* NPC mbox IDs (range 0x6000 - 0x7FFF) */ \
/* NIX mbox IDs (range 0x8000 - 0xFFFF) */ \

enum {
#define M(_name, _id, _1, _2) MBOX_MSG_ ## _name = _id,
MBOX_MESSAGES
#undef M
};
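/* For instance, M(READY, 0x001, msg_req, ready_msg_rsp) expands here to
 * MBOX_MSG_READY = 0x001, and in otx2_mbox_id2name() to
 * "case 0x001: return "READY";" via the same MBOX_MESSAGES X-macro.
 */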
/* Mailbox message formats */
/* Generic request msg used for those mbox messages which
* don't send any data in the request.
*/
struct msg_req {
struct mbox_msghdr hdr;
};
/* Generic response msg used as an ack or response for those mbox
 * messages which don't have a specific rsp msg format.
 */
struct msg_rsp {
struct mbox_msghdr hdr;
};
struct ready_msg_rsp {
struct mbox_msghdr hdr;
u16 sclk_feq; /* SCLK frequency */
};
/* Structure for requesting resource provisioning.
 * 'modify' flag to be used when either requesting more resources
 * or detaching some of a certain resource type.
 * Rest of the fields specify how many of which type are to
 * be attached.
 */
struct rsrc_attach {
struct mbox_msghdr hdr;
u8 modify:1;
u8 npalf:1;
u8 nixlf:1;
u16 sso;
u16 ssow;
u16 timlfs;
u16 cptlfs;
};
/* Structure for relinquishing resources.
* 'partial' flag to be used when relinquishing all resources
* but only of a certain type. If not set, all resources of all
* types provisioned to the RVU function will be detached.
*/
struct rsrc_detach {
struct mbox_msghdr hdr;
u8 partial:1;
u8 npalf:1;
u8 nixlf:1;
u8 sso:1;
u8 ssow:1;
u8 timlfs:1;
u8 cptlfs:1;
};
#define MSIX_VECTOR_INVALID 0xFFFF
#define MAX_RVU_BLKLF_CNT 256
struct msix_offset_rsp {
struct mbox_msghdr hdr;
u16 npa_msixoff;
u16 nix_msixoff;
u8 sso;
u8 ssow;
u8 timlfs;
u8 cptlfs;
u16 sso_msixoff[MAX_RVU_BLKLF_CNT];
u16 ssow_msixoff[MAX_RVU_BLKLF_CNT];
u16 timlf_msixoff[MAX_RVU_BLKLF_CNT];
u16 cptlf_msixoff[MAX_RVU_BLKLF_CNT];
};
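/* End-to-end usage sketch (assumed, not code from this series): a PF
 * sending a READY request to the AF on a mailbox initialized with
 * otx2_mbox_init(..., MBOX_DIR_PFAF, 1). msgs_acked is assumed to be
 * bumped by the PF's AF-mbox interrupt handler, which is not shown here.
 */
static inline int example_pf_send_ready(struct otx2_mbox *mbox)
{
	struct ready_msg_rsp *rsp;
	struct msg_req *req;

	req = (struct msg_req *)otx2_mbox_alloc_msg(mbox, 0, sizeof(*req));
	if (!req)
		return -ENOMEM;
	req->hdr.id = MBOX_MSG_READY;
	req->hdr.sig = OTX2_MBOX_REQ_SIG;

	otx2_mbox_msg_send(mbox, 0);
	if (otx2_mbox_wait_for_rsp(mbox, 0))
		return -EIO;

	rsp = (struct ready_msg_rsp *)otx2_mbox_get_rsp(mbox, 0, &req->hdr);
	if (IS_ERR(rsp))
		return PTR_ERR(rsp);
	return rsp->hdr.rc;
}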
#endif /* MBOX_H */
/* SPDX-License-Identifier: GPL-2.0
* Marvell OcteonTx2 RVU Admin Function driver
*
* Copyright (C) 2018 Marvell International Ltd.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef RVU_H
#define RVU_H
#include "rvu_struct.h"
#include "mbox.h"
/* PCI device IDs */
#define PCI_DEVID_OCTEONTX2_RVU_AF 0xA065
/* PCI BAR nos */
#define PCI_AF_REG_BAR_NUM 0
#define PCI_PF_REG_BAR_NUM 2
#define PCI_MBOX_BAR_NUM 4
#define NAME_SIZE 32
/* PF_FUNC */
#define RVU_PFVF_PF_SHIFT 10
#define RVU_PFVF_PF_MASK 0x3F
#define RVU_PFVF_FUNC_SHIFT 0
#define RVU_PFVF_FUNC_MASK 0x3FF
struct rvu_work {
struct work_struct work;
struct rvu *rvu;
};
struct rsrc_bmap {
unsigned long *bmap; /* Pointer to resource bitmap */
u16 max; /* Max resource id or count */
};
struct rvu_block {
struct rsrc_bmap lf;
u16 *fn_map; /* LF to pcifunc mapping */
bool multislot;
bool implemented;
u8 addr; /* RVU_BLOCK_ADDR_E */
u8 type; /* RVU_BLOCK_TYPE_E */
u8 lfshift;
u64 lookup_reg;
u64 pf_lfcnt_reg;
u64 vf_lfcnt_reg;
u64 lfcfg_reg;
u64 msixcfg_reg;
u64 lfreset_reg;
unsigned char name[NAME_SIZE];
};
/* Structure for per RVU func info, i.e. PF/VF */
struct rvu_pfvf {
bool npalf; /* Only one NPALF per RVU_FUNC */
bool nixlf; /* Only one NIXLF per RVU_FUNC */
u16 sso;
u16 ssow;
u16 cptlfs;
u16 timlfs;
/* Block LF's MSIX vector info */
struct rsrc_bmap msix; /* Bitmap for MSIX vector alloc */
#define MSIX_BLKLF(blkaddr, lf) (((blkaddr) << 8) | ((lf) & 0xFF))
u16 *msix_lfmap; /* Vector to block LF mapping */
};
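/* Hedged helper sketch (assumed, mirrors the MSIX_BLKLF() encoding above):
 * recover (block address, LF) for a given MSIX vector from msix_lfmap.
 */
static inline void example_msix_vec_to_blklf(struct rvu_pfvf *pfvf, int vec,
					     u8 *blkaddr, u8 *lf)
{
	u16 entry = pfvf->msix_lfmap[vec];

	*blkaddr = entry >> 8;	/* RVU_BLOCK_ADDR_E */
	*lf = entry & 0xFF;
}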
struct rvu_hwinfo {
u8 total_pfs; /* MAX RVU PFs HW supports */
u16 total_vfs; /* Max RVU VFs HW supports */
u16 max_vfs_per_pf; /* Max VFs that can be attached to a PF */
struct rvu_block block[BLK_COUNT]; /* Block info */
};
struct rvu {
void __iomem *afreg_base;
void __iomem *pfreg_base;
struct pci_dev *pdev;
struct device *dev;
struct rvu_hwinfo *hw;
struct rvu_pfvf *pf;
struct rvu_pfvf *hwvf;
spinlock_t rsrc_lock; /* Serialize resource alloc/free */
/* Mbox */
struct otx2_mbox mbox;
struct rvu_work *mbox_wrk;
struct workqueue_struct *mbox_wq;
/* MSI-X */
u16 num_vec;
char *irq_name;
bool *irq_allocated;
dma_addr_t msix_base_iova;
/* CGX */
#define PF_CGXMAP_BASE 1 /* PF 0 is reserved for the RVU AF */
u8 cgx_mapped_pfs;
u8 cgx_cnt; /* available cgx ports */
u8 *pf2cgxlmac_map; /* pf to cgx_lmac map */
u16 *cgxlmac2pf_map; /* bitmap of mapped pfs for
* every cgx lmac port
*/
void **cgx_idmap; /* cgx id to cgx data map table */
struct work_struct cgx_evh_work;
struct workqueue_struct *cgx_evh_wq;
spinlock_t cgx_evq_lock; /* cgx event queue lock */
struct list_head cgx_evq_head; /* cgx event queue head */
};
static inline void rvu_write64(struct rvu *rvu, u64 block, u64 offset, u64 val)
{
writeq(val, rvu->afreg_base + ((block << 28) | offset));
}
static inline u64 rvu_read64(struct rvu *rvu, u64 block, u64 offset)
{
return readq(rvu->afreg_base + ((block << 28) | offset));
}
static inline void rvupf_write64(struct rvu *rvu, u64 offset, u64 val)
{
writeq(val, rvu->pfreg_base + offset);
}
static inline u64 rvupf_read64(struct rvu *rvu, u64 offset)
{
return readq(rvu->pfreg_base + offset);
}
/* Function Prototypes
* RVU
*/
int rvu_alloc_bitmap(struct rsrc_bmap *rsrc);
int rvu_alloc_rsrc(struct rsrc_bmap *rsrc);
void rvu_free_rsrc(struct rsrc_bmap *rsrc, int id);
int rvu_rsrc_free_count(struct rsrc_bmap *rsrc);
int rvu_get_pf(u16 pcifunc);
struct rvu_pfvf *rvu_get_pfvf(struct rvu *rvu, int pcifunc);
void rvu_get_pf_numvfs(struct rvu *rvu, int pf, int *numvfs, int *hwvf);
bool is_block_implemented(struct rvu_hwinfo *hw, int blkaddr);
int rvu_get_lf(struct rvu *rvu, struct rvu_block *block, u16 pcifunc, u16 slot);
int rvu_get_blkaddr(struct rvu *rvu, int blktype, u16 pcifunc);
int rvu_poll_reg(struct rvu *rvu, u64 block, u64 offset, u64 mask, bool zero);
/* CGX APIs */
int rvu_cgx_probe(struct rvu *rvu);
void rvu_cgx_wq_destroy(struct rvu *rvu);
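/* Hedged sketch of what rvu_poll_reg() is expected to do; the real
 * implementation lives in rvu.c, not shown here. Per the v8 changelog it
 * sleeps with usleep_range() rather than udelay(); the timeout, sleep
 * range and error code below are assumptions, and <linux/delay.h> and
 * <linux/jiffies.h> are assumed available.
 */
static inline int example_rvu_poll_reg(struct rvu *rvu, u64 block, u64 offset,
				       u64 mask, bool zero)
{
	unsigned long timeout = jiffies + usecs_to_jiffies(10000);
	u64 reg;

	while (time_before(jiffies, timeout)) {
		reg = rvu_read64(rvu, block, offset);
		if (zero ? !(reg & mask) : !!(reg & mask))
			return 0;
		usleep_range(1, 5);
	}
	return -EBUSY;
}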
#endif /* RVU_H */
// SPDX-License-Identifier: GPL-2.0
/* Marvell OcteonTx2 RVU Admin Function driver
*
* Copyright (C) 2018 Marvell International Ltd.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/types.h>
#include <linux/module.h>
#include <linux/pci.h>
#include "rvu.h"
#include "cgx.h"
struct cgx_evq_entry {
struct list_head evq_node;
struct cgx_link_event link_event;
};
static inline u8 cgxlmac_id_to_bmap(u8 cgx_id, u8 lmac_id)
{
return ((cgx_id & 0xF) << 4) | (lmac_id & 0xF);
}
static void *rvu_cgx_pdata(u8 cgx_id, struct rvu *rvu)
{
if (cgx_id >= rvu->cgx_cnt)
return NULL;
return rvu->cgx_idmap[cgx_id];
}
static int rvu_map_cgx_lmac_pf(struct rvu *rvu)
{
int cgx_cnt = rvu->cgx_cnt;
int cgx, lmac_cnt, lmac;
int pf = PF_CGXMAP_BASE;
int size;
if (!cgx_cnt)
return 0;
if (cgx_cnt > 0xF || MAX_LMAC_PER_CGX > 0xF)
return -EINVAL;
/* Alloc map table
* An additional entry is required since PF id starts from 1 and
* hence entry at offset 0 is invalid.
*/
size = (cgx_cnt * MAX_LMAC_PER_CGX + 1) * sizeof(u8);
rvu->pf2cgxlmac_map = devm_kzalloc(rvu->dev, size, GFP_KERNEL);
if (!rvu->pf2cgxlmac_map)
return -ENOMEM;
/* Initialize offset 0 with an invalid cgx and lmac id */
rvu->pf2cgxlmac_map[0] = 0xFF;
/* Reverse map table */
rvu->cgxlmac2pf_map = devm_kzalloc(rvu->dev,
cgx_cnt * MAX_LMAC_PER_CGX * sizeof(u16),
GFP_KERNEL);
if (!rvu->cgxlmac2pf_map)
return -ENOMEM;
rvu->cgx_mapped_pfs = 0;
for (cgx = 0; cgx < cgx_cnt; cgx++) {
lmac_cnt = cgx_get_lmac_cnt(rvu_cgx_pdata(cgx, rvu));
for (lmac = 0; lmac < lmac_cnt; lmac++, pf++) {
rvu->pf2cgxlmac_map[pf] = cgxlmac_id_to_bmap(cgx, lmac);
rvu->cgxlmac2pf_map[CGX_OFFSET(cgx) + lmac] = 1 << pf;
rvu->cgx_mapped_pfs++;
}
}
return 0;
}
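/* Hedged helper sketch (assumed): undo cgxlmac_id_to_bmap() above to
 * recover the cgx and lmac ids for a mapped PF. Only entry 0 is seeded
 * with 0xFF, so a real implementation would also bound-check pf against
 * the mapped PF count.
 */
static inline int example_pf_to_cgxlmac(struct rvu *rvu, int pf,
					u8 *cgx_id, u8 *lmac_id)
{
	u8 map = rvu->pf2cgxlmac_map[pf];

	if (map == 0xFF)
		return -ENODEV;
	*cgx_id = (map >> 4) & 0xF;
	*lmac_id = map & 0xF;
	return 0;
}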
/* This is called from interrupt context and is expected to be atomic */
static int cgx_lmac_postevent(struct cgx_link_event *event, void *data)
{
struct cgx_evq_entry *qentry;
struct rvu *rvu = data;
/* post event to the event queue */
qentry = kmalloc(sizeof(*qentry), GFP_ATOMIC);
if (!qentry)
return -ENOMEM;
qentry->link_event = *event;
spin_lock(&rvu->cgx_evq_lock);
list_add_tail(&qentry->evq_node, &rvu->cgx_evq_head);
spin_unlock(&rvu->cgx_evq_lock);
/* start worker to process the events */
queue_work(rvu->cgx_evh_wq, &rvu->cgx_evh_work);
return 0;
}
static void cgx_evhandler_task(struct work_struct *work)
{
struct rvu *rvu = container_of(work, struct rvu, cgx_evh_work);
struct cgx_evq_entry *qentry;
struct cgx_link_event *event;
unsigned long flags;
do {
/* Dequeue an event */
spin_lock_irqsave(&rvu->cgx_evq_lock, flags);
qentry = list_first_entry_or_null(&rvu->cgx_evq_head,
struct cgx_evq_entry,
evq_node);
if (qentry)
list_del(&qentry->evq_node);
spin_unlock_irqrestore(&rvu->cgx_evq_lock, flags);
if (!qentry)
break; /* nothing more to process */
event = &qentry->link_event;
/* Do nothing for now */
kfree(qentry);
} while (1);
}
static void cgx_lmac_event_handler_init(struct rvu *rvu)
{
struct cgx_event_cb cb;
int cgx, lmac, err;
void *cgxd;
spin_lock_init(&rvu->cgx_evq_lock);
INIT_LIST_HEAD(&rvu->cgx_evq_head);
INIT_WORK(&rvu->cgx_evh_work, cgx_evhandler_task);
rvu->cgx_evh_wq = alloc_workqueue("rvu_evh_wq", 0, 0);
if (!rvu->cgx_evh_wq) {
dev_err(rvu->dev, "alloc workqueue failed");
return;
}
cb.notify_link_chg = cgx_lmac_postevent; /* link change call back */
cb.data = rvu;
for (cgx = 0; cgx < rvu->cgx_cnt; cgx++) {
cgxd = rvu_cgx_pdata(cgx, rvu);
for (lmac = 0; lmac < cgx_get_lmac_cnt(cgxd); lmac++) {
err = cgx_lmac_evh_register(&cb, cgxd, lmac);
if (err)
dev_err(rvu->dev,
"%d:%d handler register failed\n",
cgx, lmac);
}
}
}
void rvu_cgx_wq_destroy(struct rvu *rvu)
{
if (rvu->cgx_evh_wq) {
flush_workqueue(rvu->cgx_evh_wq);
destroy_workqueue(rvu->cgx_evh_wq);
rvu->cgx_evh_wq = NULL;
}
}
int rvu_cgx_probe(struct rvu *rvu)
{
int i, err;
/* find available cgx ports */
rvu->cgx_cnt = cgx_get_cgx_cnt();
if (!rvu->cgx_cnt) {
dev_info(rvu->dev, "No CGX devices found!\n");
return -ENODEV;
}
rvu->cgx_idmap = devm_kzalloc(rvu->dev, rvu->cgx_cnt * sizeof(void *),
GFP_KERNEL);
if (!rvu->cgx_idmap)
return -ENOMEM;
/* Initialize the cgxdata table */
for (i = 0; i < rvu->cgx_cnt; i++)
rvu->cgx_idmap[i] = cgx_get_pdata(i);
/* Map CGX LMAC interfaces to RVU PFs */
err = rvu_map_cgx_lmac_pf(rvu);
if (err)
return err;
/* Register for CGX events */
cgx_lmac_event_handler_init(rvu);
return 0;
}
/* SPDX-License-Identifier: GPL-2.0
* Marvell OcteonTx2 RVU Admin Function driver
*
* Copyright (C) 2018 Marvell International Ltd.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef RVU_STRUCT_H
#define RVU_STRUCT_H
/* RVU Block Address Enumeration */
enum rvu_block_addr_e {
BLKADDR_RVUM = 0x0ULL,
BLKADDR_LMT = 0x1ULL,
BLKADDR_MSIX = 0x2ULL,
BLKADDR_NPA = 0x3ULL,
BLKADDR_NIX0 = 0x4ULL,
BLKADDR_NIX1 = 0x5ULL,
BLKADDR_NPC = 0x6ULL,
BLKADDR_SSO = 0x7ULL,
BLKADDR_SSOW = 0x8ULL,
BLKADDR_TIM = 0x9ULL,
BLKADDR_CPT0 = 0xaULL,
BLKADDR_CPT1 = 0xbULL,
BLKADDR_NDC0 = 0xcULL,
BLKADDR_NDC1 = 0xdULL,
BLKADDR_NDC2 = 0xeULL,
BLK_COUNT = 0xfULL,
};
/* RVU Block Type Enumeration */
enum rvu_block_type_e {
BLKTYPE_RVUM = 0x0,
BLKTYPE_MSIX = 0x1,
BLKTYPE_LMT = 0x2,
BLKTYPE_NIX = 0x3,
BLKTYPE_NPA = 0x4,
BLKTYPE_NPC = 0x5,
BLKTYPE_SSO = 0x6,
BLKTYPE_SSOW = 0x7,
BLKTYPE_TIM = 0x8,
BLKTYPE_CPT = 0x9,
BLKTYPE_NDC = 0xa,
BLKTYPE_MAX = 0xa,
};
/* RVU Admin function Interrupt Vector Enumeration */
enum rvu_af_int_vec_e {
RVU_AF_INT_VEC_POISON = 0x0,
RVU_AF_INT_VEC_PFFLR = 0x1,
RVU_AF_INT_VEC_PFME = 0x2,
RVU_AF_INT_VEC_GEN = 0x3,
RVU_AF_INT_VEC_MBOX = 0x4,
RVU_AF_INT_VEC_CNT = 0x5,
};
/**
* RVU PF Interrupt Vector Enumeration
*/
enum rvu_pf_int_vec_e {
RVU_PF_INT_VEC_VFFLR0 = 0x0,
RVU_PF_INT_VEC_VFFLR1 = 0x1,
RVU_PF_INT_VEC_VFME0 = 0x2,
RVU_PF_INT_VEC_VFME1 = 0x3,
RVU_PF_INT_VEC_VFPF_MBOX0 = 0x4,
RVU_PF_INT_VEC_VFPF_MBOX1 = 0x5,
RVU_PF_INT_VEC_AFPF_MBOX = 0x6,
RVU_PF_INT_VEC_CNT = 0x7,
};
#endif /* RVU_STRUCT_H */