Commit a6deaa99 authored by David S. Miller

Merge branch 'octeontx2-af-Add-RVU-Admin-Function-driver'

Sunil Goutham says:

====================
octeontx2-af: Add RVU Admin Function driver

Resource virtualization unit (RVU) on Marvell's OcteonTX2 SOC maps HW
resources from the network, crypto and other functional blocks into
PCI-compatible physical and virtual functions. Each functional block in
turn has multiple local functions (LFs) that can be provisioned to PCI
devices. RVU supports multiple PCIe SRIOV physical functions (PFs) and
virtual functions (VFs). PF0 is called the administrative / admin function
(AF) and has the privilege to provision the RVU functional blocks' LFs to
each of the PF/VFs.

RVU managed networking functional blocks
 - Network pool allocator (NPA)
 - Network interface controller (NIX)
 - Network parser CAM (NPC)
 - Schedule/Synchronize/Order unit (SSO)

RVU managed non-networking functional blocks
 - Crypto accelerator (CPT)
 - Scheduled timers unit (TIM)
 - Schedule/Synchronize/Order unit (SSO)
   Used for both networking and non-networking use cases
 - Compression (upcoming in future variants of the silicon)

Resource provisioning examples
 - A PF/VF with NIX-LF & NPA-LF resources works as a pure network device
 - A PF/VF with CPT-LF resource works as a pure crypto offload device.

This admin function driver neither receives nor processes any data, i.e.
it does no I/O; it is a configuration-only driver.

PF/VFs communicate with the AF via a shared memory region (mailbox). Upon
receiving requests from a PF/VF, the AF does resource provisioning and other
HW configuration. The AF is always attached to the host, but PF/VFs may be
used by the host kernel itself, attached to VMs, or given to userspace
applications like DPDK etc. So the AF has to handle provisioning/configuration
requests sent by any device from any domain.
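
As a rough sketch of that flow (using the mbox APIs added in this series;
the 'mbox' handle and devid 0 are illustrative, and the PF-side driver
itself comes in later patches), a PF sending a READY request to the AF and
reading the response looks like:

	struct ready_msg_rsp *rsp;
	struct msg_req *req;
	int err;

	req = (struct msg_req *)otx2_mbox_alloc_msg(mbox, 0, sizeof(*req));
	if (!req)
		return -ENOMEM;
	req->hdr.id = MBOX_MSG_READY;
	req->hdr.sig = OTX2_MBOX_REQ_SIG;

	otx2_mbox_msg_send(mbox, 0);
	err = otx2_mbox_wait_for_rsp(mbox, 0);
	if (err)
		return err;
	rsp = (struct ready_msg_rsp *)otx2_mbox_get_rsp(mbox, 0, &req->hdr);
	if (IS_ERR(rsp))
		return PTR_ERR(rsp);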

This patch series adds logic for the following
 - RVU AF driver with functional blocks provisioning support.
 - Mailbox infrastructure for communication between AF and PFs.
 - CGX (MAC controller) driver which communicates with firmware for
   managing physical ethernet interfaces. AF collects info from this
   driver and forwards the same to the PF/VFs using these interfaces.

This is the first set of patches out of 80+ patches.

Changes from v8:
 1 Removed unnecessary typecasts in entire series
   - Suggested by David Miller
 2 Added COMPILE_TEST to AF driver
   - Suggested by Arnd Bergmann
 3 Changed udelay() to usleep_range() in rvu_poll_reg
   - Suggested by Arnd Bergmann
 4 MSIX vector base IOMMU mapping is done using dma_map_resource()
   API instead of dma_map_single() as it accepts physical address.
   - Issue pointed by Arnd Bergmann

Changes from v7:
 1 Removed unnecessary typecasts in mbox infra code.
   - Suggested by David Miller
 2 Fixed MAINTAINERS patch
   - Suggested by Joe Perches

Changes from v6:
 Fixed ordering of local variables from longest to shortest line.
   - Suggested by David Miller

Changes from v5:
 Modified bitfield based command structures to bitmasks for communication
 with firmware, to address endianness issues.
   - Suggested by Arnd Bergmann

Changes from v4:
 1 Removed module author/version/description from CGX driver as it's now
   merged with AF driver module.
   - Suggested by Arnd Bergmann
 2 Added big-endian bitfields for CGX's kernel <=> firmware communication
   command structures.
   - Suggested by Arnd Bergmann

Changes from v3:
 Moved driver from drivers/soc to drivers/net/ethernet
   - Suggested by Arnd Bergmann
 https://patchwork.kernel.org/cover/10587635/

Changes from v2:
 No changes, submitted again with netdev mailing list in loop.
   - Suggested by Arnd Bergmann and Andrew Lunn

Changes from v1:
 1 Merged RVU admin function and CGX drivers into a single module
   - Suggested by Arnd Bergmann
 2 Pulled mbox communication APIs into a separate module to remove
   admin function driver dependency in a VM where AF is not attached.
   - Suggested by Arnd Bergmann
====================
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
parents e40a826a 1f2cf1b3
@@ -8847,6 +8847,15 @@ S: Supported
F: drivers/mmc/host/sdhci-xenon*
F: Documentation/devicetree/bindings/mmc/marvell,xenon-sdhci.txt
MARVELL OCTEONTX2 RVU ADMIN FUNCTION DRIVER
M: Sunil Goutham <sgoutham@marvell.com>
M: Linu Cherian <lcherian@marvell.com>
M: Geetha sowjanya <gakula@marvell.com>
M: Jerin Jacob <jerinj@marvell.com>
L: netdev@vger.kernel.org
S: Supported
F: drivers/net/ethernet/marvell/octeontx2/af/
MATROX FRAMEBUFFER DRIVER
L: linux-fbdev@vger.kernel.org
S: Orphan
@@ -167,4 +167,7 @@ config SKY2_DEBUG
If unsure, say N.
source "drivers/net/ethernet/marvell/octeontx2/Kconfig"
endif # NET_VENDOR_MARVELL
@@ -11,3 +11,4 @@ obj-$(CONFIG_MVPP2) += mvpp2/
obj-$(CONFIG_PXA168_ETH) += pxa168_eth.o
obj-$(CONFIG_SKGE) += skge.o
obj-$(CONFIG_SKY2) += sky2.o
obj-y += octeontx2/
#
# Marvell OcteonTX2 drivers configuration
#
config OCTEONTX2_MBOX
tristate
config OCTEONTX2_AF
tristate "Marvell OcteonTX2 RVU Admin Function driver"
select OCTEONTX2_MBOX
depends on (64BIT && COMPILE_TEST) || ARM64
depends on PCI
help
This driver supports Marvell's OcteonTX2 Resource Virtualization
Unit's admin function manager which manages all RVU HW resources
and provides a medium to other PF/VFs to configure HW. Should be
enabled for other RVU device drivers to work.
# SPDX-License-Identifier: GPL-2.0
#
# Makefile for Marvell OcteonTX2 device drivers.
#
obj-$(CONFIG_OCTEONTX2_AF) += af/
# SPDX-License-Identifier: GPL-2.0
#
# Makefile for Marvell's OcteonTX2 RVU Admin Function driver
#
obj-$(CONFIG_OCTEONTX2_MBOX) += octeontx2_mbox.o
obj-$(CONFIG_OCTEONTX2_AF) += octeontx2_af.o
octeontx2_mbox-y := mbox.o
octeontx2_af-y := cgx.o rvu.o rvu_cgx.o
// SPDX-License-Identifier: GPL-2.0
/* Marvell OcteonTx2 CGX driver
*
* Copyright (C) 2018 Marvell International Ltd.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/acpi.h>
#include <linux/module.h>
#include <linux/interrupt.h>
#include <linux/pci.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/phy.h>
#include <linux/of.h>
#include <linux/of_mdio.h>
#include <linux/of_net.h>
#include "cgx.h"
#define DRV_NAME "octeontx2-cgx"
#define DRV_STRING "Marvell OcteonTX2 CGX/MAC Driver"
/**
* struct lmac
* @wq_cmd_cmplt: waitq to keep the process blocked until cmd completion
* @cmd_lock: Lock to serialize the command interface
* @resp: command response
* @event_cb: callback for linkchange events
* @cmd_pend: flag set before new command is started
* flag cleared after command response is received
* @cgx: parent cgx port
* @lmac_id: lmac port id
* @name: lmac port name
*/
struct lmac {
wait_queue_head_t wq_cmd_cmplt;
struct mutex cmd_lock;
u64 resp;
struct cgx_event_cb event_cb;
bool cmd_pend;
struct cgx *cgx;
u8 lmac_id;
char *name;
};
struct cgx {
void __iomem *reg_base;
struct pci_dev *pdev;
u8 cgx_id;
u8 lmac_count;
struct lmac *lmac_idmap[MAX_LMAC_PER_CGX];
struct list_head cgx_list;
};
static LIST_HEAD(cgx_list);
/* CGX PHY management internal APIs */
static int cgx_fwi_link_change(struct cgx *cgx, int lmac_id, bool en);
/* Supported devices */
static const struct pci_device_id cgx_id_table[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_CGX) },
{ 0, } /* end of table */
};
MODULE_DEVICE_TABLE(pci, cgx_id_table);
static void cgx_write(struct cgx *cgx, u64 lmac, u64 offset, u64 val)
{
writeq(val, cgx->reg_base + (lmac << 18) + offset);
}
static u64 cgx_read(struct cgx *cgx, u64 lmac, u64 offset)
{
return readq(cgx->reg_base + (lmac << 18) + offset);
}
static inline struct lmac *lmac_pdata(u8 lmac_id, struct cgx *cgx)
{
if (!cgx || lmac_id >= MAX_LMAC_PER_CGX)
return NULL;
return cgx->lmac_idmap[lmac_id];
}
int cgx_get_cgx_cnt(void)
{
struct cgx *cgx_dev;
int count = 0;
list_for_each_entry(cgx_dev, &cgx_list, cgx_list)
count++;
return count;
}
EXPORT_SYMBOL(cgx_get_cgx_cnt);
int cgx_get_lmac_cnt(void *cgxd)
{
struct cgx *cgx = cgxd;
if (!cgx)
return -ENODEV;
return cgx->lmac_count;
}
EXPORT_SYMBOL(cgx_get_lmac_cnt);
void *cgx_get_pdata(int cgx_id)
{
struct cgx *cgx_dev;
list_for_each_entry(cgx_dev, &cgx_list, cgx_list) {
if (cgx_dev->cgx_id == cgx_id)
return cgx_dev;
}
return NULL;
}
EXPORT_SYMBOL(cgx_get_pdata);
/* CGX Firmware interface low level support */
static int cgx_fwi_cmd_send(u64 req, u64 *resp, struct lmac *lmac)
{
struct cgx *cgx = lmac->cgx;
struct device *dev;
int err = 0;
u64 cmd;
/* Ensure no other command is in progress */
err = mutex_lock_interruptible(&lmac->cmd_lock);
if (err)
return err;
/* Ensure command register is free */
cmd = cgx_read(cgx, lmac->lmac_id, CGX_COMMAND_REG);
if (FIELD_GET(CMDREG_OWN, cmd) != CGX_CMD_OWN_NS) {
err = -EBUSY;
goto unlock;
}
/* Update ownership in command request */
req = FIELD_SET(CMDREG_OWN, CGX_CMD_OWN_FIRMWARE, req);
/* Mark this lmac as pending, before we start */
lmac->cmd_pend = true;
/* Start command in hardware */
cgx_write(cgx, lmac->lmac_id, CGX_COMMAND_REG, req);
/* Ensure command is completed without errors */
if (!wait_event_timeout(lmac->wq_cmd_cmplt, !lmac->cmd_pend,
msecs_to_jiffies(CGX_CMD_TIMEOUT))) {
dev = &cgx->pdev->dev;
dev_err(dev, "cgx port %d:%d cmd timeout\n",
cgx->cgx_id, lmac->lmac_id);
err = -EIO;
goto unlock;
}
/* we have a valid command response */
smp_rmb(); /* Ensure the latest updates are visible */
*resp = lmac->resp;
unlock:
mutex_unlock(&lmac->cmd_lock);
return err;
}
static inline int cgx_fwi_cmd_generic(u64 req, u64 *resp,
struct cgx *cgx, int lmac_id)
{
struct lmac *lmac;
int err;
lmac = lmac_pdata(lmac_id, cgx);
if (!lmac)
return -ENODEV;
err = cgx_fwi_cmd_send(req, resp, lmac);
/* Check for valid response */
if (!err) {
if (FIELD_GET(EVTREG_STAT, *resp) == CGX_STAT_FAIL)
return -EIO;
else
return 0;
}
return err;
}
/* Hardware event handlers */
static inline void cgx_link_change_handler(u64 lstat,
struct lmac *lmac)
{
struct cgx *cgx = lmac->cgx;
struct cgx_link_event event;
struct device *dev;
dev = &cgx->pdev->dev;
event.lstat.link_up = FIELD_GET(RESP_LINKSTAT_UP, lstat);
event.lstat.full_duplex = FIELD_GET(RESP_LINKSTAT_FDUPLEX, lstat);
event.lstat.speed = FIELD_GET(RESP_LINKSTAT_SPEED, lstat);
event.lstat.err_type = FIELD_GET(RESP_LINKSTAT_ERRTYPE, lstat);
event.cgx_id = cgx->cgx_id;
event.lmac_id = lmac->lmac_id;
if (!lmac->event_cb.notify_link_chg) {
dev_dbg(dev, "cgx port %d:%d Link change handler null",
cgx->cgx_id, lmac->lmac_id);
if (event.lstat.err_type != CGX_ERR_NONE) {
dev_err(dev, "cgx port %d:%d Link error %d\n",
cgx->cgx_id, lmac->lmac_id,
event.lstat.err_type);
}
dev_info(dev, "cgx port %d:%d Link status %s, speed %x\n",
cgx->cgx_id, lmac->lmac_id,
event.lstat.link_up ? "UP" : "DOWN",
event.lstat.speed);
return;
}
if (lmac->event_cb.notify_link_chg(&event, lmac->event_cb.data))
dev_err(dev, "event notification failure\n");
}
static inline bool cgx_cmdresp_is_linkevent(u64 event)
{
u8 id;
id = FIELD_GET(EVTREG_ID, event);
if (id == CGX_CMD_LINK_BRING_UP ||
id == CGX_CMD_LINK_BRING_DOWN)
return true;
else
return false;
}
static inline bool cgx_event_is_linkevent(u64 event)
{
if (FIELD_GET(EVTREG_ID, event) == CGX_EVT_LINK_CHANGE)
return true;
else
return false;
}
static irqreturn_t cgx_fwi_event_handler(int irq, void *data)
{
struct lmac *lmac = data;
struct device *dev;
struct cgx *cgx;
u64 event;
cgx = lmac->cgx;
event = cgx_read(cgx, lmac->lmac_id, CGX_EVENT_REG);
if (!FIELD_GET(EVTREG_ACK, event))
return IRQ_NONE;
dev = &cgx->pdev->dev;
switch (FIELD_GET(EVTREG_EVT_TYPE, event)) {
case CGX_EVT_CMD_RESP:
/* Copy the response. Since only one command is active at a
* time, there is no way a response can get overwritten
*/
lmac->resp = event;
/* Ensure response is updated before thread context starts */
smp_wmb();
/* There won't be separate events for link change initiated from
* software; hence report the command responses as events
*/
if (cgx_cmdresp_is_linkevent(event))
cgx_link_change_handler(event, lmac);
/* Release thread waiting for completion */
lmac->cmd_pend = false;
wake_up_interruptible(&lmac->wq_cmd_cmplt);
break;
case CGX_EVT_ASYNC:
if (cgx_event_is_linkevent(event))
cgx_link_change_handler(event, lmac);
break;
}
/* Any new event or command response will be posted by firmware
* only after the current status is acked.
* Ack the interrupt register as well.
*/
cgx_write(lmac->cgx, lmac->lmac_id, CGX_EVENT_REG, 0);
cgx_write(lmac->cgx, lmac->lmac_id, CGXX_CMRX_INT, FW_CGX_INT);
return IRQ_HANDLED;
}
/* APIs for PHY management using CGX firmware interface */
/* callback registration for hardware events like link change */
int cgx_lmac_evh_register(struct cgx_event_cb *cb, void *cgxd, int lmac_id)
{
struct cgx *cgx = cgxd;
struct lmac *lmac;
lmac = lmac_pdata(lmac_id, cgx);
if (!lmac)
return -ENODEV;
lmac->event_cb = *cb;
return 0;
}
EXPORT_SYMBOL(cgx_lmac_evh_register);
static int cgx_fwi_link_change(struct cgx *cgx, int lmac_id, bool enable)
{
u64 req = 0;
u64 resp;
if (enable)
req = FIELD_SET(CMDREG_ID, CGX_CMD_LINK_BRING_UP, req);
else
req = FIELD_SET(CMDREG_ID, CGX_CMD_LINK_BRING_DOWN, req);
return cgx_fwi_cmd_generic(req, &resp, cgx, lmac_id);
}
EXPORT_SYMBOL(cgx_fwi_link_change);
static inline int cgx_fwi_read_version(u64 *resp, struct cgx *cgx)
{
u64 req = 0;
req = FIELD_SET(CMDREG_ID, CGX_CMD_GET_FW_VER, req);
return cgx_fwi_cmd_generic(req, resp, cgx, 0);
}
static int cgx_lmac_verify_fwi_version(struct cgx *cgx)
{
struct device *dev = &cgx->pdev->dev;
int major_ver, minor_ver;
u64 resp;
int err;
if (!cgx->lmac_count)
return 0;
err = cgx_fwi_read_version(&resp, cgx);
if (err)
return err;
major_ver = FIELD_GET(RESP_MAJOR_VER, resp);
minor_ver = FIELD_GET(RESP_MINOR_VER, resp);
dev_dbg(dev, "Firmware command interface version = %d.%d\n",
major_ver, minor_ver);
if (major_ver != CGX_FIRMWARE_MAJOR_VER ||
minor_ver != CGX_FIRMWARE_MINOR_VER)
return -EIO;
else
return 0;
}
static int cgx_lmac_init(struct cgx *cgx)
{
struct lmac *lmac;
int i, err;
cgx->lmac_count = cgx_read(cgx, 0, CGXX_CMRX_RX_LMACS) & 0x7;
if (cgx->lmac_count > MAX_LMAC_PER_CGX)
cgx->lmac_count = MAX_LMAC_PER_CGX;
for (i = 0; i < cgx->lmac_count; i++) {
lmac = kcalloc(1, sizeof(struct lmac), GFP_KERNEL);
if (!lmac)
return -ENOMEM;
lmac->name = kcalloc(1, sizeof("cgx_fwi_xxx_yyy"), GFP_KERNEL);
if (!lmac->name)
return -ENOMEM;
sprintf(lmac->name, "cgx_fwi_%d_%d", cgx->cgx_id, i);
lmac->lmac_id = i;
lmac->cgx = cgx;
init_waitqueue_head(&lmac->wq_cmd_cmplt);
mutex_init(&lmac->cmd_lock);
err = request_irq(pci_irq_vector(cgx->pdev,
CGX_LMAC_FWI + i * 9),
cgx_fwi_event_handler, 0, lmac->name, lmac);
if (err)
return err;
/* Enable interrupt */
cgx_write(cgx, lmac->lmac_id, CGXX_CMRX_INT_ENA_W1S,
FW_CGX_INT);
/* Add reference */
cgx->lmac_idmap[i] = lmac;
}
return cgx_lmac_verify_fwi_version(cgx);
}
static int cgx_lmac_exit(struct cgx *cgx)
{
struct lmac *lmac;
int i;
/* Free all lmac related resources */
for (i = 0; i < cgx->lmac_count; i++) {
lmac = cgx->lmac_idmap[i];
if (!lmac)
continue;
free_irq(pci_irq_vector(cgx->pdev, CGX_LMAC_FWI + i * 9), lmac);
kfree(lmac->name);
kfree(lmac);
}
return 0;
}
static int cgx_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
struct device *dev = &pdev->dev;
struct cgx *cgx;
int err, nvec;
cgx = devm_kzalloc(dev, sizeof(*cgx), GFP_KERNEL);
if (!cgx)
return -ENOMEM;
cgx->pdev = pdev;
pci_set_drvdata(pdev, cgx);
err = pci_enable_device(pdev);
if (err) {
dev_err(dev, "Failed to enable PCI device\n");
pci_set_drvdata(pdev, NULL);
return err;
}
err = pci_request_regions(pdev, DRV_NAME);
if (err) {
dev_err(dev, "PCI request regions failed 0x%x\n", err);
goto err_disable_device;
}
/* MAP configuration registers */
cgx->reg_base = pcim_iomap(pdev, PCI_CFG_REG_BAR_NUM, 0);
if (!cgx->reg_base) {
dev_err(dev, "CGX: Cannot map CSR memory space, aborting\n");
err = -ENOMEM;
goto err_release_regions;
}
nvec = CGX_NVEC;
err = pci_alloc_irq_vectors(pdev, nvec, nvec, PCI_IRQ_MSIX);
if (err < 0 || err != nvec) {
dev_err(dev, "Request for %d msix vectors failed, err %d\n",
nvec, err);
goto err_release_regions;
}
list_add(&cgx->cgx_list, &cgx_list);
cgx->cgx_id = cgx_get_cgx_cnt() - 1;
err = cgx_lmac_init(cgx);
if (err)
goto err_release_lmac;
return 0;
err_release_lmac:
cgx_lmac_exit(cgx);
list_del(&cgx->cgx_list);
err_release_regions:
pci_release_regions(pdev);
err_disable_device:
pci_disable_device(pdev);
pci_set_drvdata(pdev, NULL);
return err;
}
static void cgx_remove(struct pci_dev *pdev)
{
struct cgx *cgx = pci_get_drvdata(pdev);
cgx_lmac_exit(cgx);
list_del(&cgx->cgx_list);
pci_free_irq_vectors(pdev);
pci_release_regions(pdev);
pci_disable_device(pdev);
pci_set_drvdata(pdev, NULL);
}
struct pci_driver cgx_driver = {
.name = DRV_NAME,
.id_table = cgx_id_table,
.probe = cgx_probe,
.remove = cgx_remove,
};
/* SPDX-License-Identifier: GPL-2.0
* Marvell OcteonTx2 CGX driver
*
* Copyright (C) 2018 Marvell International Ltd.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef CGX_H
#define CGX_H
#include "cgx_fw_if.h"
/* PCI device IDs */
#define PCI_DEVID_OCTEONTX2_CGX 0xA059
/* PCI BAR nos */
#define PCI_CFG_REG_BAR_NUM 0
#define MAX_CGX 3
#define MAX_LMAC_PER_CGX 4
#define CGX_OFFSET(x) ((x) * MAX_LMAC_PER_CGX)
/* Registers */
#define CGXX_CMRX_INT 0x040
#define FW_CGX_INT BIT_ULL(1)
#define CGXX_CMRX_INT_ENA_W1S 0x058
#define CGXX_CMRX_RX_ID_MAP 0x060
#define CGXX_CMRX_RX_LMACS 0x128
#define CGXX_SCRATCH0_REG 0x1050
#define CGXX_SCRATCH1_REG 0x1058
#define CGX_CONST 0x2000
#define CGX_COMMAND_REG CGXX_SCRATCH1_REG
#define CGX_EVENT_REG CGXX_SCRATCH0_REG
#define CGX_CMD_TIMEOUT 2200 /* msecs */
#define CGX_NVEC 37
#define CGX_LMAC_FWI 0
struct cgx_link_event {
struct cgx_lnk_sts lstat;
u8 cgx_id;
u8 lmac_id;
};
/**
* struct cgx_event_cb
* @notify_link_chg: callback for link change notification
* @data: data passed to callback function
*/
struct cgx_event_cb {
int (*notify_link_chg)(struct cgx_link_event *event, void *data);
void *data;
};
extern struct pci_driver cgx_driver;
int cgx_get_cgx_cnt(void);
int cgx_get_lmac_cnt(void *cgxd);
void *cgx_get_pdata(int cgx_id);
int cgx_lmac_evh_register(struct cgx_event_cb *cb, void *cgxd, int lmac_id);
#endif /* CGX_H */
/* SPDX-License-Identifier: GPL-2.0
* Marvell OcteonTx2 CGX driver
*
* Copyright (C) 2018 Marvell International Ltd.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef __CGX_FW_INTF_H__
#define __CGX_FW_INTF_H__
#include <linux/bitops.h>
#include <linux/bitfield.h>
#define CGX_FIRMWARE_MAJOR_VER 1
#define CGX_FIRMWARE_MINOR_VER 0
#define CGX_EVENT_ACK 1UL
/* CGX error types. Set when cmd response status is CGX_STAT_FAIL */
enum cgx_error_type {
CGX_ERR_NONE,
CGX_ERR_LMAC_NOT_ENABLED,
CGX_ERR_LMAC_MODE_INVALID,
CGX_ERR_REQUEST_ID_INVALID,
CGX_ERR_PREV_ACK_NOT_CLEAR,
CGX_ERR_PHY_LINK_DOWN,
CGX_ERR_PCS_RESET_FAIL,
CGX_ERR_AN_CPT_FAIL,
CGX_ERR_TX_NOT_IDLE,
CGX_ERR_RX_NOT_IDLE,
CGX_ERR_SPUX_BR_BLKLOCK_FAIL,
CGX_ERR_SPUX_RX_ALIGN_FAIL,
CGX_ERR_SPUX_TX_FAULT,
CGX_ERR_SPUX_RX_FAULT,
CGX_ERR_SPUX_RESET_FAIL,
CGX_ERR_SPUX_AN_RESET_FAIL,
CGX_ERR_SPUX_USX_AN_RESET_FAIL,
CGX_ERR_SMUX_RX_LINK_NOT_OK,
CGX_ERR_PCS_RECV_LINK_FAIL,
CGX_ERR_TRAINING_FAIL,
CGX_ERR_RX_EQU_FAIL,
CGX_ERR_SPUX_BER_FAIL,
CGX_ERR_SPUX_RSFEC_ALGN_FAIL, /* = 22 */
};
/* LINK speed types */
enum cgx_link_speed {
CGX_LINK_NONE,
CGX_LINK_10M,
CGX_LINK_100M,
CGX_LINK_1G,
CGX_LINK_2HG,
CGX_LINK_5G,
CGX_LINK_10G,
CGX_LINK_20G,
CGX_LINK_25G,
CGX_LINK_40G,
CGX_LINK_50G,
CGX_LINK_100G,
CGX_LINK_SPEED_MAX,
};
/* REQUEST ID types. Input to firmware */
enum cgx_cmd_id {
CGX_CMD_NONE,
CGX_CMD_GET_FW_VER,
CGX_CMD_GET_MAC_ADDR,
CGX_CMD_SET_MTU,
CGX_CMD_GET_LINK_STS, /* optional to user */
CGX_CMD_LINK_BRING_UP,
CGX_CMD_LINK_BRING_DOWN,
CGX_CMD_INTERNAL_LBK,
CGX_CMD_EXTERNAL_LBK,
CGX_CMD_HIGIG,
CGX_CMD_LINK_STATE_CHANGE,
CGX_CMD_MODE_CHANGE, /* hot plug support */
CGX_CMD_INTF_SHUTDOWN,
CGX_CMD_IRQ_ENABLE,
CGX_CMD_IRQ_DISABLE,
};
/* async event ids */
enum cgx_evt_id {
CGX_EVT_NONE,
CGX_EVT_LINK_CHANGE,
};
/* event types - cause of interrupt */
enum cgx_evt_type {
CGX_EVT_ASYNC,
CGX_EVT_CMD_RESP
};
enum cgx_stat {
CGX_STAT_SUCCESS,
CGX_STAT_FAIL
};
enum cgx_cmd_own {
CGX_CMD_OWN_NS,
CGX_CMD_OWN_FIRMWARE,
};
/* m - bit mask
* y - value to be written in the bitrange
* x - input value whose bitrange to be modified
*/
#define FIELD_SET(m, y, x) \
(((x) & ~(m)) | \
FIELD_PREP((m), (y)))
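/* Example usage (as in cgx.c of this patch): set the command ID bits of a
* request word without disturbing the other fields:
*
*	req = FIELD_SET(CMDREG_ID, CGX_CMD_GET_FW_VER, req);
*/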
/* scratchx(0) CSR used for ATF->non-secure SW communication.
* This acts as the status register and provides details on command
* ack/status, command response and error details.
*/
#define EVTREG_ACK BIT_ULL(0)
#define EVTREG_EVT_TYPE BIT_ULL(1)
#define EVTREG_STAT BIT_ULL(2)
#define EVTREG_ID GENMASK_ULL(8, 3)
/* Response to command IDs with command status as CGX_STAT_FAIL
*
* Not applicable for commands :
* CGX_CMD_LINK_BRING_UP/DOWN/CGX_EVT_LINK_CHANGE
*/
#define EVTREG_ERRTYPE GENMASK_ULL(18, 9)
/* Response to cmd ID as CGX_CMD_GET_FW_VER with cmd status as
* CGX_STAT_SUCCESS
*/
#define RESP_MAJOR_VER GENMASK_ULL(12, 9)
#define RESP_MINOR_VER GENMASK_ULL(16, 13)
/* Response to cmd ID as CGX_CMD_GET_MAC_ADDR with cmd status as
* CGX_STAT_SUCCESS
*/
#define RESP_MAC_ADDR GENMASK_ULL(56, 9)
/* Response to cmd ID - CGX_CMD_LINK_BRING_UP/DOWN, event ID CGX_EVT_LINK_CHANGE
* status can be either CGX_STAT_FAIL or CGX_STAT_SUCCESS
*
* In case of CGX_STAT_FAIL, it indicates CGX configuration failed
* when processing link up/down/change command.
* Both err_type and current link status will be updated
*
* In case of CGX_STAT_SUCCESS, err_type will be CGX_ERR_NONE and current
* link status will be updated
*/
struct cgx_lnk_sts {
uint64_t reserved1:9;
uint64_t link_up:1;
uint64_t full_duplex:1;
uint64_t speed:4; /* cgx_link_speed */
uint64_t err_type:10;
uint64_t reserved2:39;
};
#define RESP_LINKSTAT_UP GENMASK_ULL(9, 9)
#define RESP_LINKSTAT_FDUPLEX GENMASK_ULL(10, 10)
#define RESP_LINKSTAT_SPEED GENMASK_ULL(14, 11)
#define RESP_LINKSTAT_ERRTYPE GENMASK_ULL(24, 15)
/* scratchx(1) CSR used for non-secure SW->ATF communication
* This CSR acts as a command register
*/
#define CMDREG_OWN BIT_ULL(0)
#define CMDREG_ID GENMASK_ULL(7, 2)
/* Any command using enable/disable as an argument need
* to set this bitfield.
* Ex: Loopback, HiGig...
*/
#define CMDREG_ENABLE BIT_ULL(8)
/* command argument to be passed for cmd ID - CGX_CMD_SET_MTU */
#define CMDMTU_SIZE GENMASK_ULL(23, 8)
/* command argument to be passed for cmd ID - CGX_CMD_LINK_CHANGE */
#define CMDLINKCHANGE_LINKUP BIT_ULL(8)
#define CMDLINKCHANGE_FULLDPLX BIT_ULL(9)
#define CMDLINKCHANGE_SPEED GENMASK_ULL(13, 10)
#endif /* __CGX_FW_INTF_H__ */
// SPDX-License-Identifier: GPL-2.0
/* Marvell OcteonTx2 RVU Admin Function driver
*
* Copyright (C) 2018 Marvell International Ltd.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/module.h>
#include <linux/interrupt.h>
#include <linux/pci.h>
#include "rvu_reg.h"
#include "mbox.h"
static const u16 msgs_offset = ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
void otx2_mbox_reset(struct otx2_mbox *mbox, int devid)
{
struct otx2_mbox_dev *mdev = &mbox->dev[devid];
struct mbox_hdr *tx_hdr, *rx_hdr;
tx_hdr = mdev->mbase + mbox->tx_start;
rx_hdr = mdev->mbase + mbox->rx_start;
spin_lock(&mdev->mbox_lock);
mdev->msg_size = 0;
mdev->rsp_size = 0;
tx_hdr->num_msgs = 0;
rx_hdr->num_msgs = 0;
spin_unlock(&mdev->mbox_lock);
}
EXPORT_SYMBOL(otx2_mbox_reset);
void otx2_mbox_destroy(struct otx2_mbox *mbox)
{
mbox->reg_base = NULL;
mbox->hwbase = NULL;
kfree(mbox->dev);
mbox->dev = NULL;
}
EXPORT_SYMBOL(otx2_mbox_destroy);
int otx2_mbox_init(struct otx2_mbox *mbox, void *hwbase, struct pci_dev *pdev,
void *reg_base, int direction, int ndevs)
{
struct otx2_mbox_dev *mdev;
int devid;
switch (direction) {
case MBOX_DIR_AFPF:
case MBOX_DIR_PFVF:
mbox->tx_start = MBOX_DOWN_TX_START;
mbox->rx_start = MBOX_DOWN_RX_START;
mbox->tx_size = MBOX_DOWN_TX_SIZE;
mbox->rx_size = MBOX_DOWN_RX_SIZE;
break;
case MBOX_DIR_PFAF:
case MBOX_DIR_VFPF:
mbox->tx_start = MBOX_DOWN_RX_START;
mbox->rx_start = MBOX_DOWN_TX_START;
mbox->tx_size = MBOX_DOWN_RX_SIZE;
mbox->rx_size = MBOX_DOWN_TX_SIZE;
break;
case MBOX_DIR_AFPF_UP:
case MBOX_DIR_PFVF_UP:
mbox->tx_start = MBOX_UP_TX_START;
mbox->rx_start = MBOX_UP_RX_START;
mbox->tx_size = MBOX_UP_TX_SIZE;
mbox->rx_size = MBOX_UP_RX_SIZE;
break;
case MBOX_DIR_PFAF_UP:
case MBOX_DIR_VFPF_UP:
mbox->tx_start = MBOX_UP_RX_START;
mbox->rx_start = MBOX_UP_TX_START;
mbox->tx_size = MBOX_UP_RX_SIZE;
mbox->rx_size = MBOX_UP_TX_SIZE;
break;
default:
return -ENODEV;
}
switch (direction) {
case MBOX_DIR_AFPF:
case MBOX_DIR_AFPF_UP:
mbox->trigger = RVU_AF_AFPF_MBOX0;
mbox->tr_shift = 4;
break;
case MBOX_DIR_PFAF:
case MBOX_DIR_PFAF_UP:
mbox->trigger = RVU_PF_PFAF_MBOX1;
mbox->tr_shift = 0;
break;
case MBOX_DIR_PFVF:
case MBOX_DIR_PFVF_UP:
mbox->trigger = RVU_PF_VFX_PFVF_MBOX0;
mbox->tr_shift = 12;
break;
case MBOX_DIR_VFPF:
case MBOX_DIR_VFPF_UP:
mbox->trigger = RVU_VF_VFPF_MBOX1;
mbox->tr_shift = 0;
break;
default:
return -ENODEV;
}
mbox->reg_base = reg_base;
mbox->hwbase = hwbase;
mbox->pdev = pdev;
mbox->dev = kcalloc(ndevs, sizeof(struct otx2_mbox_dev), GFP_KERNEL);
if (!mbox->dev) {
otx2_mbox_destroy(mbox);
return -ENOMEM;
}
mbox->ndevs = ndevs;
for (devid = 0; devid < ndevs; devid++) {
mdev = &mbox->dev[devid];
mdev->mbase = mbox->hwbase + (devid * MBOX_SIZE);
spin_lock_init(&mdev->mbox_lock);
/* Init header to reset value */
otx2_mbox_reset(mbox, devid);
}
return 0;
}
EXPORT_SYMBOL(otx2_mbox_init);
int otx2_mbox_wait_for_rsp(struct otx2_mbox *mbox, int devid)
{
struct otx2_mbox_dev *mdev = &mbox->dev[devid];
int timeout = 0, sleep = 1;
while (mdev->num_msgs != mdev->msgs_acked) {
msleep(sleep);
timeout += sleep;
if (timeout >= MBOX_RSP_TIMEOUT)
return -EIO;
}
return 0;
}
EXPORT_SYMBOL(otx2_mbox_wait_for_rsp);
int otx2_mbox_busy_poll_for_rsp(struct otx2_mbox *mbox, int devid)
{
struct otx2_mbox_dev *mdev = &mbox->dev[devid];
unsigned long timeout = jiffies + 1 * HZ;
while (!time_after(jiffies, timeout)) {
if (mdev->num_msgs == mdev->msgs_acked)
return 0;
cpu_relax();
}
return -EIO;
}
EXPORT_SYMBOL(otx2_mbox_busy_poll_for_rsp);
void otx2_mbox_msg_send(struct otx2_mbox *mbox, int devid)
{
struct otx2_mbox_dev *mdev = &mbox->dev[devid];
struct mbox_hdr *tx_hdr, *rx_hdr;
tx_hdr = mdev->mbase + mbox->tx_start;
rx_hdr = mdev->mbase + mbox->rx_start;
spin_lock(&mdev->mbox_lock);
/* Reset header for next messages */
mdev->msg_size = 0;
mdev->rsp_size = 0;
mdev->msgs_acked = 0;
/* Sync mbox data into memory */
smp_wmb();
/* num_msgs != 0 signals to the peer that the buffer has a number of
* messages. So this should be written after writing all the messages
* to the shared memory.
*/
tx_hdr->num_msgs = mdev->num_msgs;
rx_hdr->num_msgs = 0;
spin_unlock(&mdev->mbox_lock);
/* The interrupt should be fired after num_msgs is written
* to the shared memory
*/
writeq(1, (void __iomem *)mbox->reg_base +
(mbox->trigger | (devid << mbox->tr_shift)));
}
EXPORT_SYMBOL(otx2_mbox_msg_send);
struct mbox_msghdr *otx2_mbox_alloc_msg_rsp(struct otx2_mbox *mbox, int devid,
int size, int size_rsp)
{
struct otx2_mbox_dev *mdev = &mbox->dev[devid];
struct mbox_msghdr *msghdr = NULL;
spin_lock(&mdev->mbox_lock);
size = ALIGN(size, MBOX_MSG_ALIGN);
size_rsp = ALIGN(size_rsp, MBOX_MSG_ALIGN);
/* Check if there is space in mailbox */
if ((mdev->msg_size + size) > mbox->tx_size - msgs_offset)
goto exit;
if ((mdev->rsp_size + size_rsp) > mbox->rx_size - msgs_offset)
goto exit;
if (mdev->msg_size == 0)
mdev->num_msgs = 0;
mdev->num_msgs++;
msghdr = mdev->mbase + mbox->tx_start + msgs_offset + mdev->msg_size;
/* Clear the whole msg region */
memset(msghdr, 0, sizeof(*msghdr) + size);
/* Init message header with reset values */
msghdr->ver = OTX2_MBOX_VERSION;
mdev->msg_size += size;
mdev->rsp_size += size_rsp;
msghdr->next_msgoff = mdev->msg_size + msgs_offset;
exit:
spin_unlock(&mdev->mbox_lock);
return msghdr;
}
EXPORT_SYMBOL(otx2_mbox_alloc_msg_rsp);
struct mbox_msghdr *otx2_mbox_get_rsp(struct otx2_mbox *mbox, int devid,
struct mbox_msghdr *msg)
{
unsigned long imsg = mbox->tx_start + msgs_offset;
unsigned long irsp = mbox->rx_start + msgs_offset;
struct otx2_mbox_dev *mdev = &mbox->dev[devid];
u16 msgs;
if (mdev->num_msgs != mdev->msgs_acked)
return ERR_PTR(-ENODEV);
for (msgs = 0; msgs < mdev->msgs_acked; msgs++) {
struct mbox_msghdr *pmsg = mdev->mbase + imsg;
struct mbox_msghdr *prsp = mdev->mbase + irsp;
if (msg == pmsg) {
if (pmsg->id != prsp->id)
return ERR_PTR(-ENODEV);
return prsp;
}
imsg = pmsg->next_msgoff;
irsp = prsp->next_msgoff;
}
return ERR_PTR(-ENODEV);
}
EXPORT_SYMBOL(otx2_mbox_get_rsp);
int
otx2_reply_invalid_msg(struct otx2_mbox *mbox, int devid, u16 pcifunc, u16 id)
{
struct msg_rsp *rsp;
rsp = (struct msg_rsp *)
otx2_mbox_alloc_msg(mbox, devid, sizeof(*rsp));
if (!rsp)
return -ENOMEM;
rsp->hdr.id = id;
rsp->hdr.sig = OTX2_MBOX_RSP_SIG;
rsp->hdr.rc = MBOX_MSG_INVALID;
rsp->hdr.pcifunc = pcifunc;
return 0;
}
EXPORT_SYMBOL(otx2_reply_invalid_msg);
bool otx2_mbox_nonempty(struct otx2_mbox *mbox, int devid)
{
struct otx2_mbox_dev *mdev = &mbox->dev[devid];
bool ret;
spin_lock(&mdev->mbox_lock);
ret = mdev->num_msgs != 0;
spin_unlock(&mdev->mbox_lock);
return ret;
}
EXPORT_SYMBOL(otx2_mbox_nonempty);
const char *otx2_mbox_id2name(u16 id)
{
switch (id) {
#define M(_name, _id, _1, _2) case _id: return # _name;
MBOX_MESSAGES
#undef M
default:
return "INVALID ID";
}
}
EXPORT_SYMBOL(otx2_mbox_id2name);
MODULE_AUTHOR("Marvell International Ltd.");
MODULE_LICENSE("GPL v2");
/* SPDX-License-Identifier: GPL-2.0
* Marvell OcteonTx2 RVU Admin Function driver
*
* Copyright (C) 2018 Marvell International Ltd.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef MBOX_H
#define MBOX_H
#include <linux/etherdevice.h>
#include <linux/sizes.h>
#include "rvu_struct.h"
#define MBOX_SIZE SZ_64K
/* AF/PF: PF initiated, PF/VF VF initiated */
#define MBOX_DOWN_RX_START 0
#define MBOX_DOWN_RX_SIZE (46 * SZ_1K)
#define MBOX_DOWN_TX_START (MBOX_DOWN_RX_START + MBOX_DOWN_RX_SIZE)
#define MBOX_DOWN_TX_SIZE (16 * SZ_1K)
/* AF/PF: AF initiated, PF/VF PF initiated */
#define MBOX_UP_RX_START (MBOX_DOWN_TX_START + MBOX_DOWN_TX_SIZE)
#define MBOX_UP_RX_SIZE SZ_1K
#define MBOX_UP_TX_START (MBOX_UP_RX_START + MBOX_UP_RX_SIZE)
#define MBOX_UP_TX_SIZE SZ_1K
#if MBOX_UP_TX_SIZE + MBOX_UP_TX_START != MBOX_SIZE
# error "incorrect mailbox area sizes"
#endif
#define INTR_MASK(pfvfs) ((pfvfs < 64) ? (BIT_ULL(pfvfs) - 1) : (~0ull))
#define MBOX_RSP_TIMEOUT 1000 /* in ms, Time to wait for mbox response */
#define MBOX_MSG_ALIGN 16 /* Align mbox msg start to 16bytes */
/* Mailbox directions */
#define MBOX_DIR_AFPF 0 /* AF replies to PF */
#define MBOX_DIR_PFAF 1 /* PF sends messages to AF */
#define MBOX_DIR_PFVF 2 /* PF replies to VF */
#define MBOX_DIR_VFPF 3 /* VF sends messages to PF */
#define MBOX_DIR_AFPF_UP 4 /* AF sends messages to PF */
#define MBOX_DIR_PFAF_UP 5 /* PF replies to AF */
#define MBOX_DIR_PFVF_UP 6 /* PF sends messages to VF */
#define MBOX_DIR_VFPF_UP 7 /* VF replies to PF */
struct otx2_mbox_dev {
void *mbase; /* This dev's mbox region */
spinlock_t mbox_lock;
u16 msg_size; /* Total msg size to be sent */
u16 rsp_size; /* Total rsp size to be sure the reply is ok */
u16 num_msgs; /* No of msgs sent or waiting for response */
u16 msgs_acked; /* No of msgs for which response is received */
};
struct otx2_mbox {
struct pci_dev *pdev;
void *hwbase; /* Mbox region advertised by HW */
void *reg_base;/* CSR base for this dev */
u64 trigger; /* Trigger mbox notification */
u16 tr_shift; /* Mbox trigger shift */
u64 rx_start; /* Offset of Rx region in mbox memory */
u64 tx_start; /* Offset of Tx region in mbox memory */
u16 rx_size; /* Size of Rx region */
u16 tx_size; /* Size of Tx region */
u16 ndevs; /* The number of peers */
struct otx2_mbox_dev *dev;
};
/* Header which precedes all mbox messages */
struct mbox_hdr {
u16 num_msgs; /* No of msgs embedded */
};
/* Header which precedes every msg and is also part of it */
struct mbox_msghdr {
u16 pcifunc; /* Who's sending this msg */
u16 id; /* Mbox message ID */
#define OTX2_MBOX_REQ_SIG (0xdead)
#define OTX2_MBOX_RSP_SIG (0xbeef)
u16 sig; /* Signature, for validating corrupted msgs */
#define OTX2_MBOX_VERSION (0x0001)
u16 ver; /* Version of msg's structure for this ID */
u16 next_msgoff; /* Offset of next msg within mailbox region */
int rc; /* Msg process'ed response code */
};
void otx2_mbox_reset(struct otx2_mbox *mbox, int devid);
void otx2_mbox_destroy(struct otx2_mbox *mbox);
int otx2_mbox_init(struct otx2_mbox *mbox, void __force *hwbase,
struct pci_dev *pdev, void __force *reg_base,
int direction, int ndevs);
void otx2_mbox_msg_send(struct otx2_mbox *mbox, int devid);
int otx2_mbox_wait_for_rsp(struct otx2_mbox *mbox, int devid);
int otx2_mbox_busy_poll_for_rsp(struct otx2_mbox *mbox, int devid);
struct mbox_msghdr *otx2_mbox_alloc_msg_rsp(struct otx2_mbox *mbox, int devid,
int size, int size_rsp);
struct mbox_msghdr *otx2_mbox_get_rsp(struct otx2_mbox *mbox, int devid,
struct mbox_msghdr *msg);
int otx2_reply_invalid_msg(struct otx2_mbox *mbox, int devid,
u16 pcifunc, u16 id);
bool otx2_mbox_nonempty(struct otx2_mbox *mbox, int devid);
const char *otx2_mbox_id2name(u16 id);
static inline struct mbox_msghdr *otx2_mbox_alloc_msg(struct otx2_mbox *mbox,
int devid, int size)
{
return otx2_mbox_alloc_msg_rsp(mbox, devid, size, 0);
}
/* Mailbox message types */
#define MBOX_MSG_MASK 0xFFFF
#define MBOX_MSG_INVALID 0xFFFE
#define MBOX_MSG_MAX 0xFFFF
#define MBOX_MESSAGES \
/* Generic mbox IDs (range 0x000 - 0x1FF) */ \
M(READY, 0x001, msg_req, ready_msg_rsp) \
M(ATTACH_RESOURCES, 0x002, rsrc_attach, msg_rsp) \
M(DETACH_RESOURCES, 0x003, rsrc_detach, msg_rsp) \
M(MSIX_OFFSET, 0x004, msg_req, msix_offset_rsp) \
/* CGX mbox IDs (range 0x200 - 0x3FF) */ \
/* NPA mbox IDs (range 0x400 - 0x5FF) */ \
/* SSO/SSOW mbox IDs (range 0x600 - 0x7FF) */ \
/* TIM mbox IDs (range 0x800 - 0x9FF) */ \
/* CPT mbox IDs (range 0xA00 - 0xBFF) */ \
/* NPC mbox IDs (range 0x6000 - 0x7FFF) */ \
/* NIX mbox IDs (range 0x8000 - 0xFFFF) */ \
enum {
#define M(_name, _id, _1, _2) MBOX_MSG_ ## _name = _id,
MBOX_MESSAGES
#undef M
};
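/* For reference, the M() expansion above turns
*	M(READY, 0x001, msg_req, ready_msg_rsp)
* into 'MBOX_MSG_READY = 0x001' in this enum, and into
* 'case 0x001: return "READY";' inside otx2_mbox_id2name().
*/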
/* Mailbox message formats */
/* Generic request msg used for those mbox messages which
* don't send any data in the request.
*/
struct msg_req {
struct mbox_msghdr hdr;
};
/* Generic response msg used as an ack or response for those mbox
* messages which don't have a specific rsp msg format.
*/
struct msg_rsp {
struct mbox_msghdr hdr;
};
struct ready_msg_rsp {
struct mbox_msghdr hdr;
u16 sclk_feq; /* SCLK frequency */
};
/* Structure for requesting resource provisioning.
* The 'modify' flag is to be used when requesting more of, or detaching
* part of, a certain resource type. The rest of the fields specify how
* many LFs of each type are to be attached.
*/
struct rsrc_attach {
struct mbox_msghdr hdr;
u8 modify:1;
u8 npalf:1;
u8 nixlf:1;
u16 sso;
u16 ssow;
u16 timlfs;
u16 cptlfs;
};
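/* Illustrative example: a PF asking the AF for one NPA LF, one NIX LF and
* four SSO LFs would fill an ATTACH_RESOURCES request roughly as:
*
*	attach->npalf = 1;
*	attach->nixlf = 1;
*	attach->sso = 4;
*/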
/* Structure for relinquishing resources.
* 'partial' flag to be used when relinquishing all resources
* but only of a certain type. If not set, all resources of all
* types provisioned to the RVU function will be detached.
*/
struct rsrc_detach {
struct mbox_msghdr hdr;
u8 partial:1;
u8 npalf:1;
u8 nixlf:1;
u8 sso:1;
u8 ssow:1;
u8 timlfs:1;
u8 cptlfs:1;
};
#define MSIX_VECTOR_INVALID 0xFFFF
#define MAX_RVU_BLKLF_CNT 256
struct msix_offset_rsp {
struct mbox_msghdr hdr;
u16 npa_msixoff;
u16 nix_msixoff;
u8 sso;
u8 ssow;
u8 timlfs;
u8 cptlfs;
u16 sso_msixoff[MAX_RVU_BLKLF_CNT];
u16 ssow_msixoff[MAX_RVU_BLKLF_CNT];
u16 timlf_msixoff[MAX_RVU_BLKLF_CNT];
u16 cptlf_msixoff[MAX_RVU_BLKLF_CNT];
};
#endif /* MBOX_H */
// SPDX-License-Identifier: GPL-2.0
/* Marvell OcteonTx2 RVU Admin Function driver
*
* Copyright (C) 2018 Marvell International Ltd.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/module.h>
#include <linux/interrupt.h>
#include <linux/delay.h>
#include <linux/irq.h>
#include <linux/pci.h>
#include <linux/sysfs.h>
#include "cgx.h"
#include "rvu.h"
#include "rvu_reg.h"
#define DRV_NAME "octeontx2-af"
#define DRV_STRING "Marvell OcteonTX2 RVU Admin Function Driver"
#define DRV_VERSION "1.0"
static int rvu_get_hwvf(struct rvu *rvu, int pcifunc);
static void rvu_set_msix_offset(struct rvu *rvu, struct rvu_pfvf *pfvf,
struct rvu_block *block, int lf);
static void rvu_clear_msix_offset(struct rvu *rvu, struct rvu_pfvf *pfvf,
struct rvu_block *block, int lf);
/* Supported devices */
static const struct pci_device_id rvu_id_table[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_RVU_AF) },
{ 0, } /* end of table */
};
MODULE_AUTHOR("Marvell International Ltd.");
MODULE_DESCRIPTION(DRV_STRING);
MODULE_LICENSE("GPL v2");
MODULE_VERSION(DRV_VERSION);
MODULE_DEVICE_TABLE(pci, rvu_id_table);
/* Poll an RVU block's register 'offset' for a 'zero' or
* 'nonzero' value at the bits specified by 'mask'.
*/
int rvu_poll_reg(struct rvu *rvu, u64 block, u64 offset, u64 mask, bool zero)
{
void __iomem *reg;
int timeout = 100;
u64 reg_val;
reg = rvu->afreg_base + ((block << 28) | offset);
while (timeout) {
reg_val = readq(reg);
if (zero && !(reg_val & mask))
return 0;
if (!zero && (reg_val & mask))
return 0;
usleep_range(1, 2);
timeout--;
}
return -EBUSY;
}
int rvu_alloc_rsrc(struct rsrc_bmap *rsrc)
{
int id;
if (!rsrc->bmap)
return -EINVAL;
id = find_first_zero_bit(rsrc->bmap, rsrc->max);
if (id >= rsrc->max)
return -ENOSPC;
__set_bit(id, rsrc->bmap);
return id;
}
static int rvu_alloc_rsrc_contig(struct rsrc_bmap *rsrc, int nrsrc)
{
int start;
if (!rsrc->bmap)
return -EINVAL;
start = bitmap_find_next_zero_area(rsrc->bmap, rsrc->max, 0, nrsrc, 0);
if (start >= rsrc->max)
return -ENOSPC;
bitmap_set(rsrc->bmap, start, nrsrc);
return start;
}
static void rvu_free_rsrc_contig(struct rsrc_bmap *rsrc, int nrsrc, int start)
{
if (!rsrc->bmap)
return;
if (start >= rsrc->max)
return;
bitmap_clear(rsrc->bmap, start, nrsrc);
}
static bool rvu_rsrc_check_contig(struct rsrc_bmap *rsrc, int nrsrc)
{
int start;
if (!rsrc->bmap)
return false;
start = bitmap_find_next_zero_area(rsrc->bmap, rsrc->max, 0, nrsrc, 0);
if (start >= rsrc->max)
return false;
return true;
}
void rvu_free_rsrc(struct rsrc_bmap *rsrc, int id)
{
if (!rsrc->bmap)
return;
__clear_bit(id, rsrc->bmap);
}
int rvu_rsrc_free_count(struct rsrc_bmap *rsrc)
{
int used;
if (!rsrc->bmap)
return 0;
used = bitmap_weight(rsrc->bmap, rsrc->max);
return (rsrc->max - used);
}
int rvu_alloc_bitmap(struct rsrc_bmap *rsrc)
{
rsrc->bmap = kcalloc(BITS_TO_LONGS(rsrc->max),
sizeof(long), GFP_KERNEL);
if (!rsrc->bmap)
return -ENOMEM;
return 0;
}
/* Get block LF's HW index from a PF_FUNC's block slot number */
int rvu_get_lf(struct rvu *rvu, struct rvu_block *block, u16 pcifunc, u16 slot)
{
u16 match = 0;
int lf;
spin_lock(&rvu->rsrc_lock);
for (lf = 0; lf < block->lf.max; lf++) {
if (block->fn_map[lf] == pcifunc) {
if (slot == match) {
spin_unlock(&rvu->rsrc_lock);
return lf;
}
match++;
}
}
spin_unlock(&rvu->rsrc_lock);
return -ENODEV;
}
/* Convert BLOCK_TYPE_E to a BLOCK_ADDR_E.
* Some silicon variants of OcteonTX2 support
* multiple blocks of the same type.
*
* @pcifunc has to be zero when no LF is yet attached.
*/
int rvu_get_blkaddr(struct rvu *rvu, int blktype, u16 pcifunc)
{
int devnum, blkaddr = -ENODEV;
u64 cfg, reg;
bool is_pf;
switch (blktype) {
case BLKTYPE_NPA:
blkaddr = BLKADDR_NPA;
goto exit;
case BLKTYPE_NIX:
/* For now assume NIX0 */
if (!pcifunc) {
blkaddr = BLKADDR_NIX0;
goto exit;
}
break;
case BLKTYPE_SSO:
blkaddr = BLKADDR_SSO;
goto exit;
case BLKTYPE_SSOW:
blkaddr = BLKADDR_SSOW;
goto exit;
case BLKTYPE_TIM:
blkaddr = BLKADDR_TIM;
goto exit;
case BLKTYPE_CPT:
/* For now assume CPT0 */
if (!pcifunc) {
blkaddr = BLKADDR_CPT0;
goto exit;
}
break;
}
/* Check if this is a RVU PF or VF */
if (pcifunc & RVU_PFVF_FUNC_MASK) {
is_pf = false;
devnum = rvu_get_hwvf(rvu, pcifunc);
} else {
is_pf = true;
devnum = rvu_get_pf(pcifunc);
}
/* Check if the 'pcifunc' has a NIX LF from 'BLKADDR_NIX0' */
if (blktype == BLKTYPE_NIX) {
reg = is_pf ? RVU_PRIV_PFX_NIX0_CFG : RVU_PRIV_HWVFX_NIX0_CFG;
cfg = rvu_read64(rvu, BLKADDR_RVUM, reg | (devnum << 16));
if (cfg)
blkaddr = BLKADDR_NIX0;
}
/* Check if the 'pcifunc' has a CPT LF from 'BLKADDR_CPT0' */
if (blktype == BLKTYPE_CPT) {
reg = is_pf ? RVU_PRIV_PFX_CPT0_CFG : RVU_PRIV_HWVFX_CPT0_CFG;
cfg = rvu_read64(rvu, BLKADDR_RVUM, reg | (devnum << 16));
if (cfg)
blkaddr = BLKADDR_CPT0;
}
exit:
if (is_block_implemented(rvu->hw, blkaddr))
return blkaddr;
return -ENODEV;
}
static void rvu_update_rsrc_map(struct rvu *rvu, struct rvu_pfvf *pfvf,
struct rvu_block *block, u16 pcifunc,
u16 lf, bool attach)
{
int devnum, num_lfs = 0;
bool is_pf;
u64 reg;
if (lf >= block->lf.max) {
dev_err(&rvu->pdev->dev,
"%s: FATAL: LF %d is >= %s's max lfs i.e %d\n",
__func__, lf, block->name, block->lf.max);
return;
}
/* Check if this is for a RVU PF or VF */
if (pcifunc & RVU_PFVF_FUNC_MASK) {
is_pf = false;
devnum = rvu_get_hwvf(rvu, pcifunc);
} else {
is_pf = true;
devnum = rvu_get_pf(pcifunc);
}
block->fn_map[lf] = attach ? pcifunc : 0;
switch (block->type) {
case BLKTYPE_NPA:
pfvf->npalf = attach ? true : false;
num_lfs = pfvf->npalf;
break;
case BLKTYPE_NIX:
pfvf->nixlf = attach ? true : false;
num_lfs = pfvf->nixlf;
break;
case BLKTYPE_SSO:
attach ? pfvf->sso++ : pfvf->sso--;
num_lfs = pfvf->sso;
break;
case BLKTYPE_SSOW:
attach ? pfvf->ssow++ : pfvf->ssow--;
num_lfs = pfvf->ssow;
break;
case BLKTYPE_TIM:
attach ? pfvf->timlfs++ : pfvf->timlfs--;
num_lfs = pfvf->timlfs;
break;
case BLKTYPE_CPT:
attach ? pfvf->cptlfs++ : pfvf->cptlfs--;
num_lfs = pfvf->cptlfs;
break;
}
reg = is_pf ? block->pf_lfcnt_reg : block->vf_lfcnt_reg;
rvu_write64(rvu, BLKADDR_RVUM, reg | (devnum << 16), num_lfs);
}
inline int rvu_get_pf(u16 pcifunc)
{
return (pcifunc >> RVU_PFVF_PF_SHIFT) & RVU_PFVF_PF_MASK;
}
void rvu_get_pf_numvfs(struct rvu *rvu, int pf, int *numvfs, int *hwvf)
{
u64 cfg;
/* Get numVFs attached to this PF and first HWVF */
cfg = rvu_read64(rvu, BLKADDR_RVUM, RVU_PRIV_PFX_CFG(pf));
*numvfs = (cfg >> 12) & 0xFF;
*hwvf = cfg & 0xFFF;
}
static int rvu_get_hwvf(struct rvu *rvu, int pcifunc)
{
int pf, func;
u64 cfg;
pf = rvu_get_pf(pcifunc);
func = pcifunc & RVU_PFVF_FUNC_MASK;
/* Get first HWVF attached to this PF */
cfg = rvu_read64(rvu, BLKADDR_RVUM, RVU_PRIV_PFX_CFG(pf));
return ((cfg & 0xFFF) + func - 1);
}
struct rvu_pfvf *rvu_get_pfvf(struct rvu *rvu, int pcifunc)
{
/* Check if it is a PF or VF */
if (pcifunc & RVU_PFVF_FUNC_MASK)
return &rvu->hwvf[rvu_get_hwvf(rvu, pcifunc)];
else
return &rvu->pf[rvu_get_pf(pcifunc)];
}
bool is_block_implemented(struct rvu_hwinfo *hw, int blkaddr)
{
struct rvu_block *block;
if (blkaddr < BLKADDR_RVUM || blkaddr >= BLK_COUNT)
return false;
block = &hw->block[blkaddr];
return block->implemented;
}
static void rvu_check_block_implemented(struct rvu *rvu)
{
struct rvu_hwinfo *hw = rvu->hw;
struct rvu_block *block;
int blkid;
u64 cfg;
/* For each block check if 'implemented' bit is set */
for (blkid = 0; blkid < BLK_COUNT; blkid++) {
block = &hw->block[blkid];
cfg = rvupf_read64(rvu, RVU_PF_BLOCK_ADDRX_DISC(blkid));
if (cfg & BIT_ULL(11))
block->implemented = true;
}
}
static void rvu_block_reset(struct rvu *rvu, int blkaddr, u64 rst_reg)
{
struct rvu_block *block = &rvu->hw->block[blkaddr];
if (!block->implemented)
return;
rvu_write64(rvu, blkaddr, rst_reg, BIT_ULL(0));
rvu_poll_reg(rvu, blkaddr, rst_reg, BIT_ULL(63), true);
}
static void rvu_reset_all_blocks(struct rvu *rvu)
{
/* Do a HW reset of all RVU blocks */
rvu_block_reset(rvu, BLKADDR_NPA, NPA_AF_BLK_RST);
rvu_block_reset(rvu, BLKADDR_NIX0, NIX_AF_BLK_RST);
rvu_block_reset(rvu, BLKADDR_NPC, NPC_AF_BLK_RST);
rvu_block_reset(rvu, BLKADDR_SSO, SSO_AF_BLK_RST);
rvu_block_reset(rvu, BLKADDR_TIM, TIM_AF_BLK_RST);
rvu_block_reset(rvu, BLKADDR_CPT0, CPT_AF_BLK_RST);
rvu_block_reset(rvu, BLKADDR_NDC0, NDC_AF_BLK_RST);
rvu_block_reset(rvu, BLKADDR_NDC1, NDC_AF_BLK_RST);
rvu_block_reset(rvu, BLKADDR_NDC2, NDC_AF_BLK_RST);
}
static void rvu_scan_block(struct rvu *rvu, struct rvu_block *block)
{
struct rvu_pfvf *pfvf;
u64 cfg;
int lf;
for (lf = 0; lf < block->lf.max; lf++) {
cfg = rvu_read64(rvu, block->addr,
block->lfcfg_reg | (lf << block->lfshift));
if (!(cfg & BIT_ULL(63)))
continue;
/* Set this resource as being used */
__set_bit(lf, block->lf.bmap);
/* Get, to whom this LF is attached */
pfvf = rvu_get_pfvf(rvu, (cfg >> 8) & 0xFFFF);
rvu_update_rsrc_map(rvu, pfvf, block,
(cfg >> 8) & 0xFFFF, lf, true);
/* Set start MSIX vector for this LF within this PF/VF */
rvu_set_msix_offset(rvu, pfvf, block, lf);
}
}
static void rvu_check_min_msix_vec(struct rvu *rvu, int nvecs, int pf, int vf)
{
int min_vecs;
if (!vf)
goto check_pf;
if (!nvecs) {
dev_warn(rvu->dev,
"PF%d:VF%d is configured with zero msix vectors, %d\n",
pf, vf - 1, nvecs);
}
return;
check_pf:
if (pf == 0)
min_vecs = RVU_AF_INT_VEC_CNT + RVU_PF_INT_VEC_CNT;
else
min_vecs = RVU_PF_INT_VEC_CNT;
if (!(nvecs < min_vecs))
return;
dev_warn(rvu->dev,
"PF%d is configured with too few vectors, %d, min is %d\n",
pf, nvecs, min_vecs);
}
static int rvu_setup_msix_resources(struct rvu *rvu)
{
struct rvu_hwinfo *hw = rvu->hw;
int pf, vf, numvfs, hwvf, err;
int nvecs, offset, max_msix;
struct rvu_pfvf *pfvf;
u64 cfg, phy_addr;
dma_addr_t iova;
for (pf = 0; pf < hw->total_pfs; pf++) {
cfg = rvu_read64(rvu, BLKADDR_RVUM, RVU_PRIV_PFX_CFG(pf));
/* If PF is not enabled, nothing to do */
if (!((cfg >> 20) & 0x01))
continue;
rvu_get_pf_numvfs(rvu, pf, &numvfs, &hwvf);
pfvf = &rvu->pf[pf];
/* Get num of MSIX vectors attached to this PF */
cfg = rvu_read64(rvu, BLKADDR_RVUM, RVU_PRIV_PFX_MSIX_CFG(pf));
pfvf->msix.max = ((cfg >> 32) & 0xFFF) + 1;
rvu_check_min_msix_vec(rvu, pfvf->msix.max, pf, 0);
/* Alloc msix bitmap for this PF */
err = rvu_alloc_bitmap(&pfvf->msix);
if (err)
return err;
/* Allocate memory for MSIX vector to RVU block LF mapping */
pfvf->msix_lfmap = devm_kcalloc(rvu->dev, pfvf->msix.max,
sizeof(u16), GFP_KERNEL);
if (!pfvf->msix_lfmap)
return -ENOMEM;
/* For PF0 (AF) firmware will set msix vector offsets for
* AF, block AF and PF0_INT vectors, so jump to VFs.
*/
if (!pf)
goto setup_vfmsix;
/* Set MSIX offset for PF's 'RVU_PF_INT_VEC' vectors.
* These are allocated on driver init and never freed,
* so no need to set 'msix_lfmap' for these.
*/
cfg = rvu_read64(rvu, BLKADDR_RVUM, RVU_PRIV_PFX_INT_CFG(pf));
nvecs = (cfg >> 12) & 0xFF;
cfg &= ~0x7FFULL;
offset = rvu_alloc_rsrc_contig(&pfvf->msix, nvecs);
rvu_write64(rvu, BLKADDR_RVUM,
RVU_PRIV_PFX_INT_CFG(pf), cfg | offset);
setup_vfmsix:
/* Alloc msix bitmap for VFs */
for (vf = 0; vf < numvfs; vf++) {
pfvf = &rvu->hwvf[hwvf + vf];
/* Get num of MSIX vectors attached to this VF */
cfg = rvu_read64(rvu, BLKADDR_RVUM,
RVU_PRIV_PFX_MSIX_CFG(pf));
pfvf->msix.max = (cfg & 0xFFF) + 1;
rvu_check_min_msix_vec(rvu, pfvf->msix.max, pf, vf + 1);
/* Alloc msix bitmap for this VF */
err = rvu_alloc_bitmap(&pfvf->msix);
if (err)
return err;
pfvf->msix_lfmap =
devm_kcalloc(rvu->dev, pfvf->msix.max,
sizeof(u16), GFP_KERNEL);
if (!pfvf->msix_lfmap)
return -ENOMEM;
/* Set MSIX offset for HWVF's 'RVU_VF_INT_VEC' vectors.
* These are allocated on driver init and never freed,
* so no need to set 'msix_lfmap' for these.
*/
cfg = rvu_read64(rvu, BLKADDR_RVUM,
RVU_PRIV_HWVFX_INT_CFG(hwvf + vf));
nvecs = (cfg >> 12) & 0xFF;
cfg &= ~0x7FFULL;
offset = rvu_alloc_rsrc_contig(&pfvf->msix, nvecs);
rvu_write64(rvu, BLKADDR_RVUM,
RVU_PRIV_HWVFX_INT_CFG(hwvf + vf),
cfg | offset);
}
}
/* HW interprets RVU_AF_MSIXTR_BASE address as an IOVA, hence
* create an IOMMU mapping for the physical address configured by
* firmware and reconfigure RVU_AF_MSIXTR_BASE with the IOVA.
*/
cfg = rvu_read64(rvu, BLKADDR_RVUM, RVU_PRIV_CONST);
max_msix = cfg & 0xFFFFF;
phy_addr = rvu_read64(rvu, BLKADDR_RVUM, RVU_AF_MSIXTR_BASE);
iova = dma_map_resource(rvu->dev, phy_addr,
max_msix * PCI_MSIX_ENTRY_SIZE,
DMA_BIDIRECTIONAL, 0);
if (dma_mapping_error(rvu->dev, iova))
return -ENOMEM;
rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_MSIXTR_BASE, (u64)iova);
rvu->msix_base_iova = iova;
return 0;
}
static void rvu_free_hw_resources(struct rvu *rvu)
{
struct rvu_hwinfo *hw = rvu->hw;
struct rvu_block *block;
struct rvu_pfvf *pfvf;
int id, max_msix;
u64 cfg;
/* Free block LF bitmaps */
for (id = 0; id < BLK_COUNT; id++) {
block = &hw->block[id];
kfree(block->lf.bmap);
}
/* Free MSIX bitmaps */
for (id = 0; id < hw->total_pfs; id++) {
pfvf = &rvu->pf[id];
kfree(pfvf->msix.bmap);
}
for (id = 0; id < hw->total_vfs; id++) {
pfvf = &rvu->hwvf[id];
kfree(pfvf->msix.bmap);
}
/* Unmap MSIX vector base IOVA mapping */
if (!rvu->msix_base_iova)
return;
cfg = rvu_read64(rvu, BLKADDR_RVUM, RVU_PRIV_CONST);
max_msix = cfg & 0xFFFFF;
dma_unmap_resource(rvu->dev, rvu->msix_base_iova,
max_msix * PCI_MSIX_ENTRY_SIZE,
DMA_BIDIRECTIONAL, 0);
}
static int rvu_setup_hw_resources(struct rvu *rvu)
{
struct rvu_hwinfo *hw = rvu->hw;
struct rvu_block *block;
int blkid, err;
u64 cfg;
/* Get HW supported max RVU PF & VF count */
cfg = rvu_read64(rvu, BLKADDR_RVUM, RVU_PRIV_CONST);
hw->total_pfs = (cfg >> 32) & 0xFF;
hw->total_vfs = (cfg >> 20) & 0xFFF;
hw->max_vfs_per_pf = (cfg >> 40) & 0xFF;
/* Init NPA LF's bitmap */
block = &hw->block[BLKADDR_NPA];
if (!block->implemented)
goto nix;
cfg = rvu_read64(rvu, BLKADDR_NPA, NPA_AF_CONST);
block->lf.max = (cfg >> 16) & 0xFFF;
block->addr = BLKADDR_NPA;
block->type = BLKTYPE_NPA;
block->lfshift = 8;
block->lookup_reg = NPA_AF_RVU_LF_CFG_DEBUG;
block->pf_lfcnt_reg = RVU_PRIV_PFX_NPA_CFG;
block->vf_lfcnt_reg = RVU_PRIV_HWVFX_NPA_CFG;
block->lfcfg_reg = NPA_PRIV_LFX_CFG;
block->msixcfg_reg = NPA_PRIV_LFX_INT_CFG;
block->lfreset_reg = NPA_AF_LF_RST;
sprintf(block->name, "NPA");
err = rvu_alloc_bitmap(&block->lf);
if (err)
return err;
nix:
/* Init NIX LF's bitmap */
block = &hw->block[BLKADDR_NIX0];
if (!block->implemented)
goto sso;
cfg = rvu_read64(rvu, BLKADDR_NIX0, NIX_AF_CONST2);
block->lf.max = cfg & 0xFFF;
block->addr = BLKADDR_NIX0;
block->type = BLKTYPE_NIX;
block->lfshift = 8;
block->lookup_reg = NIX_AF_RVU_LF_CFG_DEBUG;
block->pf_lfcnt_reg = RVU_PRIV_PFX_NIX0_CFG;
block->vf_lfcnt_reg = RVU_PRIV_HWVFX_NIX0_CFG;
block->lfcfg_reg = NIX_PRIV_LFX_CFG;
block->msixcfg_reg = NIX_PRIV_LFX_INT_CFG;
block->lfreset_reg = NIX_AF_LF_RST;
sprintf(block->name, "NIX");
err = rvu_alloc_bitmap(&block->lf);
if (err)
return err;
sso:
/* Init SSO group's bitmap */
block = &hw->block[BLKADDR_SSO];
if (!block->implemented)
goto ssow;
cfg = rvu_read64(rvu, BLKADDR_SSO, SSO_AF_CONST);
block->lf.max = cfg & 0xFFFF;
block->addr = BLKADDR_SSO;
block->type = BLKTYPE_SSO;
block->multislot = true;
block->lfshift = 3;
block->lookup_reg = SSO_AF_RVU_LF_CFG_DEBUG;
block->pf_lfcnt_reg = RVU_PRIV_PFX_SSO_CFG;
block->vf_lfcnt_reg = RVU_PRIV_HWVFX_SSO_CFG;
block->lfcfg_reg = SSO_PRIV_LFX_HWGRP_CFG;
block->msixcfg_reg = SSO_PRIV_LFX_HWGRP_INT_CFG;
block->lfreset_reg = SSO_AF_LF_HWGRP_RST;
sprintf(block->name, "SSO GROUP");
err = rvu_alloc_bitmap(&block->lf);
if (err)
return err;
ssow:
/* Init SSO workslot's bitmap */
block = &hw->block[BLKADDR_SSOW];
if (!block->implemented)
goto tim;
block->lf.max = (cfg >> 56) & 0xFF;
block->addr = BLKADDR_SSOW;
block->type = BLKTYPE_SSOW;
block->multislot = true;
block->lfshift = 3;
block->lookup_reg = SSOW_AF_RVU_LF_HWS_CFG_DEBUG;
block->pf_lfcnt_reg = RVU_PRIV_PFX_SSOW_CFG;
block->vf_lfcnt_reg = RVU_PRIV_HWVFX_SSOW_CFG;
block->lfcfg_reg = SSOW_PRIV_LFX_HWS_CFG;
block->msixcfg_reg = SSOW_PRIV_LFX_HWS_INT_CFG;
block->lfreset_reg = SSOW_AF_LF_HWS_RST;
sprintf(block->name, "SSOWS");
err = rvu_alloc_bitmap(&block->lf);
if (err)
return err;
tim:
/* Init TIM LF's bitmap */
block = &hw->block[BLKADDR_TIM];
if (!block->implemented)
goto cpt;
cfg = rvu_read64(rvu, BLKADDR_TIM, TIM_AF_CONST);
block->lf.max = cfg & 0xFFFF;
block->addr = BLKADDR_TIM;
block->type = BLKTYPE_TIM;
block->multislot = true;
block->lfshift = 3;
block->lookup_reg = TIM_AF_RVU_LF_CFG_DEBUG;
block->pf_lfcnt_reg = RVU_PRIV_PFX_TIM_CFG;
block->vf_lfcnt_reg = RVU_PRIV_HWVFX_TIM_CFG;
block->lfcfg_reg = TIM_PRIV_LFX_CFG;
block->msixcfg_reg = TIM_PRIV_LFX_INT_CFG;
block->lfreset_reg = TIM_AF_LF_RST;
sprintf(block->name, "TIM");
err = rvu_alloc_bitmap(&block->lf);
if (err)
return err;
cpt:
/* Init CPT LF's bitmap */
block = &hw->block[BLKADDR_CPT0];
if (!block->implemented)
goto init;
cfg = rvu_read64(rvu, BLKADDR_CPT0, CPT_AF_CONSTANTS0);
block->lf.max = cfg & 0xFF;
block->addr = BLKADDR_CPT0;
block->type = BLKTYPE_CPT;
block->multislot = true;
block->lfshift = 3;
block->lookup_reg = CPT_AF_RVU_LF_CFG_DEBUG;
block->pf_lfcnt_reg = RVU_PRIV_PFX_CPT0_CFG;
block->vf_lfcnt_reg = RVU_PRIV_HWVFX_CPT0_CFG;
block->lfcfg_reg = CPT_PRIV_LFX_CFG;
block->msixcfg_reg = CPT_PRIV_LFX_INT_CFG;
block->lfreset_reg = CPT_AF_LF_RST;
sprintf(block->name, "CPT");
err = rvu_alloc_bitmap(&block->lf);
if (err)
return err;
init:
/* Allocate memory for PFVF data */
rvu->pf = devm_kcalloc(rvu->dev, hw->total_pfs,
sizeof(struct rvu_pfvf), GFP_KERNEL);
if (!rvu->pf)
return -ENOMEM;
rvu->hwvf = devm_kcalloc(rvu->dev, hw->total_vfs,
sizeof(struct rvu_pfvf), GFP_KERNEL);
if (!rvu->hwvf)
return -ENOMEM;
spin_lock_init(&rvu->rsrc_lock);
err = rvu_setup_msix_resources(rvu);
if (err)
return err;
for (blkid = 0; blkid < BLK_COUNT; blkid++) {
block = &hw->block[blkid];
if (!block->lf.bmap)
continue;
/* Allocate memory for block LF/slot to pcifunc mapping info */
block->fn_map = devm_kcalloc(rvu->dev, block->lf.max,
sizeof(u16), GFP_KERNEL);
if (!block->fn_map)
return -ENOMEM;
/* Scan all blocks to check if low level firmware has
* already provisioned any of the resources to a PF/VF.
*/
rvu_scan_block(rvu, block);
}
return 0;
}
static int rvu_mbox_handler_READY(struct rvu *rvu, struct msg_req *req,
struct ready_msg_rsp *rsp)
{
return 0;
}
/* Get current count of a RVU block's LF/slots
* provisioned to a given RVU func.
*/
static u16 rvu_get_rsrc_mapcount(struct rvu_pfvf *pfvf, int blktype)
{
switch (blktype) {
case BLKTYPE_NPA:
return pfvf->npalf ? 1 : 0;
case BLKTYPE_NIX:
return pfvf->nixlf ? 1 : 0;
case BLKTYPE_SSO:
return pfvf->sso;
case BLKTYPE_SSOW:
return pfvf->ssow;
case BLKTYPE_TIM:
return pfvf->timlfs;
case BLKTYPE_CPT:
return pfvf->cptlfs;
}
return 0;
}
static int rvu_lookup_rsrc(struct rvu *rvu, struct rvu_block *block,
int pcifunc, int slot)
{
u64 val;
val = ((u64)pcifunc << 24) | (slot << 16) | (1ULL << 13);
rvu_write64(rvu, block->addr, block->lookup_reg, val);
/* Wait for the lookup to finish */
/* TODO: put some timeout here */
while (rvu_read64(rvu, block->addr, block->lookup_reg) & (1ULL << 13))
;
val = rvu_read64(rvu, block->addr, block->lookup_reg);
/* Check LF valid bit */
if (!(val & (1ULL << 12)))
return -1;
return (val & 0xFFF);
}
static void rvu_detach_block(struct rvu *rvu, int pcifunc, int blktype)
{
struct rvu_pfvf *pfvf = rvu_get_pfvf(rvu, pcifunc);
struct rvu_hwinfo *hw = rvu->hw;
struct rvu_block *block;
int slot, lf, num_lfs;
int blkaddr;
blkaddr = rvu_get_blkaddr(rvu, blktype, pcifunc);
if (blkaddr < 0)
return;
block = &hw->block[blkaddr];
num_lfs = rvu_get_rsrc_mapcount(pfvf, block->type);
if (!num_lfs)
return;
for (slot = 0; slot < num_lfs; slot++) {
lf = rvu_lookup_rsrc(rvu, block, pcifunc, slot);
if (lf < 0) /* This should never happen */
continue;
/* Disable the LF */
rvu_write64(rvu, blkaddr, block->lfcfg_reg |
(lf << block->lfshift), 0x00ULL);
/* Update SW maintained mapping info as well */
rvu_update_rsrc_map(rvu, pfvf, block,
pcifunc, lf, false);
/* Free the resource */
rvu_free_rsrc(&block->lf, lf);
/* Clear MSIX vector offset for this LF */
rvu_clear_msix_offset(rvu, pfvf, block, lf);
}
}
static int rvu_detach_rsrcs(struct rvu *rvu, struct rsrc_detach *detach,
u16 pcifunc)
{
struct rvu_hwinfo *hw = rvu->hw;
bool is_pf, detach_all = true;
struct rvu_block *block;
int devnum, blkid;
/* Check if this is for a RVU PF or VF */
if (pcifunc & RVU_PFVF_FUNC_MASK) {
is_pf = false;
devnum = rvu_get_hwvf(rvu, pcifunc);
} else {
is_pf = true;
devnum = rvu_get_pf(pcifunc);
}
spin_lock(&rvu->rsrc_lock);
/* Check for partial resource detach */
if (detach && detach->partial)
detach_all = false;
/* Check for RVU block's LFs attached to this func,
* if so, detach them.
*/
for (blkid = 0; blkid < BLK_COUNT; blkid++) {
block = &hw->block[blkid];
if (!block->lf.bmap)
continue;
if (!detach_all && detach) {
if (blkid == BLKADDR_NPA && !detach->npalf)
continue;
else if ((blkid == BLKADDR_NIX0) && !detach->nixlf)
continue;
else if ((blkid == BLKADDR_SSO) && !detach->sso)
continue;
else if ((blkid == BLKADDR_SSOW) && !detach->ssow)
continue;
else if ((blkid == BLKADDR_TIM) && !detach->timlfs)
continue;
else if ((blkid == BLKADDR_CPT0) && !detach->cptlfs)
continue;
}
rvu_detach_block(rvu, pcifunc, block->type);
}
spin_unlock(&rvu->rsrc_lock);
return 0;
}
static int rvu_mbox_handler_DETACH_RESOURCES(struct rvu *rvu,
struct rsrc_detach *detach,
struct msg_rsp *rsp)
{
return rvu_detach_rsrcs(rvu, detach, detach->hdr.pcifunc);
}
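/* A note on LF provisioning as done by rvu_attach_block() and
* rvu_detach_block(): attaching an LF programs the block's per-LF
* config register (block->lfcfg_reg indexed by lf << block->lfshift)
* with bit 63 as the enable/valid bit, the owning pcifunc in
* bits [23:8] and the function-local slot number in bits [7:0];
* detaching clears the same register. For example, attaching LF 5 of a
* block to pcifunc 0x400 at slot 0 writes (1ULL << 63) | (0x400 << 8) | 0.
* Field positions are inferred from the code below; widths are per the
* HW spec.
*/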
static void rvu_attach_block(struct rvu *rvu, int pcifunc,
int blktype, int num_lfs)
{
struct rvu_pfvf *pfvf = rvu_get_pfvf(rvu, pcifunc);
struct rvu_hwinfo *hw = rvu->hw;
struct rvu_block *block;
int slot, lf;
int blkaddr;
u64 cfg;
if (!num_lfs)
return;
blkaddr = rvu_get_blkaddr(rvu, blktype, 0);
if (blkaddr < 0)
return;
block = &hw->block[blkaddr];
if (!block->lf.bmap)
return;
for (slot = 0; slot < num_lfs; slot++) {
/* Allocate the resource */
lf = rvu_alloc_rsrc(&block->lf);
if (lf < 0)
return;
cfg = (1ULL << 63) | (pcifunc << 8) | slot;
rvu_write64(rvu, blkaddr, block->lfcfg_reg |
(lf << block->lfshift), cfg);
rvu_update_rsrc_map(rvu, pfvf, block,
pcifunc, lf, true);
/* Set start MSIX vector for this LF within this PF/VF */
rvu_set_msix_offset(rvu, pfvf, block, lf);
}
}
static int rvu_check_rsrc_availability(struct rvu *rvu,
struct rsrc_attach *req, u16 pcifunc)
{
struct rvu_pfvf *pfvf = rvu_get_pfvf(rvu, pcifunc);
struct rvu_hwinfo *hw = rvu->hw;
struct rvu_block *block;
int free_lfs, mappedlfs;
/* Only one NPA LF can be attached */
if (req->npalf && !rvu_get_rsrc_mapcount(pfvf, BLKTYPE_NPA)) {
block = &hw->block[BLKADDR_NPA];
free_lfs = rvu_rsrc_free_count(&block->lf);
if (!free_lfs)
goto fail;
} else if (req->npalf) {
dev_err(&rvu->pdev->dev,
"Func 0x%x: Invalid req, already has NPA\n",
pcifunc);
return -EINVAL;
}
/* Only one NIX LF can be attached */
if (req->nixlf && !rvu_get_rsrc_mapcount(pfvf, BLKTYPE_NIX)) {
block = &hw->block[BLKADDR_NIX0];
free_lfs = rvu_rsrc_free_count(&block->lf);
if (!free_lfs)
goto fail;
} else if (req->nixlf) {
dev_err(&rvu->pdev->dev,
"Func 0x%x: Invalid req, already has NIX\n",
pcifunc);
return -EINVAL;
}
if (req->sso) {
block = &hw->block[BLKADDR_SSO];
/* Is request within limits? */
if (req->sso > block->lf.max) {
dev_err(&rvu->pdev->dev,
"Func 0x%x: Invalid SSO req, %d > max %d\n",
pcifunc, req->sso, block->lf.max);
return -EINVAL;
}
mappedlfs = rvu_get_rsrc_mapcount(pfvf, block->type);
free_lfs = rvu_rsrc_free_count(&block->lf);
/* Check if additional resources are available */
if (req->sso > mappedlfs &&
((req->sso - mappedlfs) > free_lfs))
goto fail;
}
if (req->ssow) {
block = &hw->block[BLKADDR_SSOW];
if (req->ssow > block->lf.max) {
dev_err(&rvu->pdev->dev,
"Func 0x%x: Invalid SSOW req, %d > max %d\n",
pcifunc, req->ssow, block->lf.max);
return -EINVAL;
}
mappedlfs = rvu_get_rsrc_mapcount(pfvf, block->type);
free_lfs = rvu_rsrc_free_count(&block->lf);
if (req->ssow > mappedlfs &&
((req->ssow - mappedlfs) > free_lfs))
goto fail;
}
if (req->timlfs) {
block = &hw->block[BLKADDR_TIM];
if (req->timlfs > block->lf.max) {
dev_err(&rvu->pdev->dev,
"Func 0x%x: Invalid TIMLF req, %d > max %d\n",
pcifunc, req->timlfs, block->lf.max);
return -EINVAL;
}
mappedlfs = rvu_get_rsrc_mapcount(pfvf, block->type);
free_lfs = rvu_rsrc_free_count(&block->lf);
if (req->timlfs > mappedlfs &&
((req->timlfs - mappedlfs) > free_lfs))
goto fail;
}
if (req->cptlfs) {
block = &hw->block[BLKADDR_CPT0];
if (req->cptlfs > block->lf.max) {
dev_err(&rvu->pdev->dev,
"Func 0x%x: Invalid CPTLF req, %d > max %d\n",
pcifunc, req->cptlfs, block->lf.max);
return -EINVAL;
}
mappedlfs = rvu_get_rsrc_mapcount(pfvf, block->type);
free_lfs = rvu_rsrc_free_count(&block->lf);
if (req->cptlfs > mappedlfs &&
((req->cptlfs - mappedlfs) > free_lfs))
goto fail;
}
return 0;
fail:
dev_info(rvu->dev, "Request for %s failed\n", block->name);
return -ENOSPC;
}
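/* For illustration: a PF that needs one NPA LF, one NIX LF and two SSO
* LFs sends an ATTACH_RESOURCES request with npalf = true, nixlf = true
* and sso = 2. The handler below first detaches any previously attached
* LFs (unless 'modify' is set), verifies under rsrc_lock that the
* request fits within each block's free LF count, and only then
* programs the LFs and their MSIX offsets.
*/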
static int rvu_mbox_handler_ATTACH_RESOURCES(struct rvu *rvu,
struct rsrc_attach *attach,
struct msg_rsp *rsp)
{
u16 pcifunc = attach->hdr.pcifunc;
int devnum, err;
bool is_pf;
/* If first request, detach all existing attached resources */
if (!attach->modify)
rvu_detach_rsrcs(rvu, NULL, pcifunc);
/* Check if this is for a RVU PF or VF */
if (pcifunc & RVU_PFVF_FUNC_MASK) {
is_pf = false;
devnum = rvu_get_hwvf(rvu, pcifunc);
} else {
is_pf = true;
devnum = rvu_get_pf(pcifunc);
}
spin_lock(&rvu->rsrc_lock);
/* Check if the request can be accommodated */
err = rvu_check_rsrc_availability(rvu, attach, pcifunc);
if (err)
goto exit;
/* Now attach the requested resources */
if (attach->npalf)
rvu_attach_block(rvu, pcifunc, BLKTYPE_NPA, 1);
if (attach->nixlf)
rvu_attach_block(rvu, pcifunc, BLKTYPE_NIX, 1);
if (attach->sso) {
/* An RVU func doesn't know which exact LF or slot is attached
* to it; it only sees its LFs as slots 0, 1, 2 and so on. So for
* a 'modify' request, simply detach all existing attached
* LFs/slots and attach them afresh.
*/
if (attach->modify)
rvu_detach_block(rvu, pcifunc, BLKTYPE_SSO);
rvu_attach_block(rvu, pcifunc, BLKTYPE_SSO, attach->sso);
}
if (attach->ssow) {
if (attach->modify)
rvu_detach_block(rvu, pcifunc, BLKTYPE_SSOW);
rvu_attach_block(rvu, pcifunc, BLKTYPE_SSOW, attach->ssow);
}
if (attach->timlfs) {
if (attach->modify)
rvu_detach_block(rvu, pcifunc, BLKTYPE_TIM);
rvu_attach_block(rvu, pcifunc, BLKTYPE_TIM, attach->timlfs);
}
if (attach->cptlfs) {
if (attach->modify)
rvu_detach_block(rvu, pcifunc, BLKTYPE_CPT);
rvu_attach_block(rvu, pcifunc, BLKTYPE_CPT, attach->cptlfs);
}
exit:
spin_unlock(&rvu->rsrc_lock);
return err;
}
static u16 rvu_get_msix_offset(struct rvu *rvu, struct rvu_pfvf *pfvf,
int blkaddr, int lf)
{
u16 vec;
if (lf < 0)
return MSIX_VECTOR_INVALID;
for (vec = 0; vec < pfvf->msix.max; vec++) {
if (pfvf->msix_lfmap[vec] == MSIX_BLKLF(blkaddr, lf))
return vec;
}
return MSIX_VECTOR_INVALID;
}
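/* A note on the per-LF MSIX config register used below: bits [19:12]
* carry the number of MSIX vectors the LF needs (read-only here) and
* bits [10:0] carry the starting vector offset within the PF/VF's MSIX
* table, which the AF allocates contiguously from pfvf->msix and writes
* back. Field positions are inferred from the shifts and masks in
* rvu_set_msix_offset()/rvu_clear_msix_offset().
*/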
static void rvu_set_msix_offset(struct rvu *rvu, struct rvu_pfvf *pfvf,
struct rvu_block *block, int lf)
{
u16 nvecs, vec, offset;
u64 cfg;
cfg = rvu_read64(rvu, block->addr, block->msixcfg_reg |
(lf << block->lfshift));
nvecs = (cfg >> 12) & 0xFF;
/* Check and alloc MSIX vectors, must be contiguous */
if (!rvu_rsrc_check_contig(&pfvf->msix, nvecs))
return;
offset = rvu_alloc_rsrc_contig(&pfvf->msix, nvecs);
/* Config MSIX offset in LF */
rvu_write64(rvu, block->addr, block->msixcfg_reg |
(lf << block->lfshift), (cfg & ~0x7FFULL) | offset);
/* Update the bitmap as well */
for (vec = 0; vec < nvecs; vec++)
pfvf->msix_lfmap[offset + vec] = MSIX_BLKLF(block->addr, lf);
}
static void rvu_clear_msix_offset(struct rvu *rvu, struct rvu_pfvf *pfvf,
struct rvu_block *block, int lf)
{
u16 nvecs, vec, offset;
u64 cfg;
cfg = rvu_read64(rvu, block->addr, block->msixcfg_reg |
(lf << block->lfshift));
nvecs = (cfg >> 12) & 0xFF;
/* Clear MSIX offset in LF */
rvu_write64(rvu, block->addr, block->msixcfg_reg |
(lf << block->lfshift), cfg & ~0x7FFULL);
offset = rvu_get_msix_offset(rvu, pfvf, block->addr, lf);
/* Update the mapping */
for (vec = 0; vec < nvecs; vec++)
pfvf->msix_lfmap[offset + vec] = 0;
/* Free the same in MSIX bitmap */
rvu_free_rsrc_contig(&pfvf->msix, nvecs, offset);
}
static int rvu_mbox_handler_MSIX_OFFSET(struct rvu *rvu, struct msg_req *req,
struct msix_offset_rsp *rsp)
{
struct rvu_hwinfo *hw = rvu->hw;
u16 pcifunc = req->hdr.pcifunc;
struct rvu_pfvf *pfvf;
int lf, slot;
pfvf = rvu_get_pfvf(rvu, pcifunc);
if (!pfvf->msix.bmap)
return 0;
/* Set MSIX offsets for each block's LFs attached to this PF/VF */
lf = rvu_get_lf(rvu, &hw->block[BLKADDR_NPA], pcifunc, 0);
rsp->npa_msixoff = rvu_get_msix_offset(rvu, pfvf, BLKADDR_NPA, lf);
lf = rvu_get_lf(rvu, &hw->block[BLKADDR_NIX0], pcifunc, 0);
rsp->nix_msixoff = rvu_get_msix_offset(rvu, pfvf, BLKADDR_NIX0, lf);
rsp->sso = pfvf->sso;
for (slot = 0; slot < rsp->sso; slot++) {
lf = rvu_get_lf(rvu, &hw->block[BLKADDR_SSO], pcifunc, slot);
rsp->sso_msixoff[slot] =
rvu_get_msix_offset(rvu, pfvf, BLKADDR_SSO, lf);
}
rsp->ssow = pfvf->ssow;
for (slot = 0; slot < rsp->ssow; slot++) {
lf = rvu_get_lf(rvu, &hw->block[BLKADDR_SSOW], pcifunc, slot);
rsp->ssow_msixoff[slot] =
rvu_get_msix_offset(rvu, pfvf, BLKADDR_SSOW, lf);
}
rsp->timlfs = pfvf->timlfs;
for (slot = 0; slot < rsp->timlfs; slot++) {
lf = rvu_get_lf(rvu, &hw->block[BLKADDR_TIM], pcifunc, slot);
rsp->timlf_msixoff[slot] =
rvu_get_msix_offset(rvu, pfvf, BLKADDR_TIM, lf);
}
rsp->cptlfs = pfvf->cptlfs;
for (slot = 0; slot < rsp->cptlfs; slot++) {
lf = rvu_get_lf(rvu, &hw->block[BLKADDR_CPT0], pcifunc, slot);
rsp->cptlf_msixoff[slot] =
rvu_get_msix_offset(rvu, pfvf, BLKADDR_CPT0, lf);
}
return 0;
}
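/* The switch in rvu_process_mbox_msg() below is generated by expanding
* the M() macro once per entry of the MBOX_MESSAGES list from mbox.h.
* As a sketch, assuming that list contains an entry like
*   M(READY, 0x001, msg_req, ready_msg_rsp)
* the expansion becomes 'case 0x001:' which allocates a
* struct ready_msg_rsp in the AF->PF mailbox, fills its header
* (id/sig/pcifunc/rc) and dispatches to rvu_mbox_handler_READY(). The
* message ID value above is only an assumption for illustration.
*/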
static int rvu_process_mbox_msg(struct rvu *rvu, int devid,
struct mbox_msghdr *req)
{
/* Check if valid, if not reply with an invalid msg */
if (req->sig != OTX2_MBOX_REQ_SIG)
goto bad_message;
switch (req->id) {
#define M(_name, _id, _req_type, _rsp_type) \
case _id: { \
struct _rsp_type *rsp; \
int err; \
\
rsp = (struct _rsp_type *)otx2_mbox_alloc_msg( \
&rvu->mbox, devid, \
sizeof(struct _rsp_type)); \
if (rsp) { \
rsp->hdr.id = _id; \
rsp->hdr.sig = OTX2_MBOX_RSP_SIG; \
rsp->hdr.pcifunc = req->pcifunc; \
rsp->hdr.rc = 0; \
} \
\
err = rvu_mbox_handler_ ## _name(rvu, \
(struct _req_type *)req, \
rsp); \
if (rsp && err) \
rsp->hdr.rc = err; \
\
return rsp ? err : -ENOMEM; \
}
MBOX_MESSAGES
#undef M
break;
bad_message:
default:
otx2_reply_invalid_msg(&rvu->mbox, devid, req->pcifunc,
req->id);
return -ENODEV;
}
}
static void rvu_mbox_handler(struct work_struct *work)
{
struct rvu_work *mwork = container_of(work, struct rvu_work, work);
struct rvu *rvu = mwork->rvu;
struct otx2_mbox_dev *mdev;
struct mbox_hdr *req_hdr;
struct mbox_msghdr *msg;
struct otx2_mbox *mbox;
int offset, id, err;
u16 pf;
mbox = &rvu->mbox;
pf = mwork - rvu->mbox_wrk;
mdev = &mbox->dev[pf];
/* Process received mbox messages */
req_hdr = mdev->mbase + mbox->rx_start;
if (req_hdr->num_msgs == 0)
return;
offset = mbox->rx_start + ALIGN(sizeof(*req_hdr), MBOX_MSG_ALIGN);
for (id = 0; id < req_hdr->num_msgs; id++) {
msg = mdev->mbase + offset;
/* Set which PF sent this message based on mbox IRQ */
msg->pcifunc &= ~(RVU_PFVF_PF_MASK << RVU_PFVF_PF_SHIFT);
msg->pcifunc |= (pf << RVU_PFVF_PF_SHIFT);
err = rvu_process_mbox_msg(rvu, pf, msg);
if (!err) {
offset = mbox->rx_start + msg->next_msgoff;
continue;
}
if (msg->pcifunc & RVU_PFVF_FUNC_MASK)
dev_warn(rvu->dev, "Error %d when processing message %s (0x%x) from PF%d:VF%d\n",
err, otx2_mbox_id2name(msg->id), msg->id, pf,
(msg->pcifunc & RVU_PFVF_FUNC_MASK) - 1);
else
dev_warn(rvu->dev, "Error %d when processing message %s (0x%x) from PF%d\n",
err, otx2_mbox_id2name(msg->id), msg->id, pf);
}
/* Send mbox responses to PF */
otx2_mbox_msg_send(mbox, pf);
}
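/* AF<->PF mailbox layout as set up by rvu_mbox_init() below: the AF's
* BAR4 points to a reserved RAM region holding one MBOX_SIZE'd mailbox
* per PF. It is mapped write-combining (not as device memory) so that
* unaligned message accesses are allowed, and otx2_mbox_init() carves
* it into per-PF otx2_mbox_dev regions which the work handler above
* indexes by PF number.
*/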
static int rvu_mbox_init(struct rvu *rvu)
{
struct rvu_hwinfo *hw = rvu->hw;
void __iomem *hwbase = NULL;
struct rvu_work *mwork;
u64 bar4_addr;
int err, pf;
rvu->mbox_wq = alloc_workqueue("rvu_afpf_mailbox",
WQ_UNBOUND | WQ_HIGHPRI | WQ_MEM_RECLAIM,
hw->total_pfs);
if (!rvu->mbox_wq)
return -ENOMEM;
rvu->mbox_wrk = devm_kcalloc(rvu->dev, hw->total_pfs,
sizeof(struct rvu_work), GFP_KERNEL);
if (!rvu->mbox_wrk) {
err = -ENOMEM;
goto exit;
}
/* Map mbox region shared with PFs */
bar4_addr = rvu_read64(rvu, BLKADDR_RVUM, RVU_AF_PF_BAR4_ADDR);
/* Mailbox is a reserved memory (in RAM) region shared between
* RVU devices, shouldn't be mapped as device memory to allow
* unaligned accesses.
*/
hwbase = ioremap_wc(bar4_addr, MBOX_SIZE * hw->total_pfs);
if (!hwbase) {
dev_err(rvu->dev, "Unable to map mailbox region\n");
err = -ENOMEM;
goto exit;
}
err = otx2_mbox_init(&rvu->mbox, hwbase, rvu->pdev, rvu->afreg_base,
MBOX_DIR_AFPF, hw->total_pfs);
if (err)
goto exit;
for (pf = 0; pf < hw->total_pfs; pf++) {
mwork = &rvu->mbox_wrk[pf];
mwork->rvu = rvu;
INIT_WORK(&mwork->work, rvu_mbox_handler);
}
return 0;
exit:
if (hwbase)
iounmap(hwbase);
destroy_workqueue(rvu->mbox_wq);
return err;
}
static void rvu_mbox_destroy(struct rvu *rvu)
{
if (rvu->mbox_wq) {
flush_workqueue(rvu->mbox_wq);
destroy_workqueue(rvu->mbox_wq);
rvu->mbox_wq = NULL;
}
if (rvu->mbox.hwbase)
iounmap((void __iomem *)rvu->mbox.hwbase);
otx2_mbox_destroy(&rvu->mbox);
}
static irqreturn_t rvu_mbox_intr_handler(int irq, void *rvu_irq)
{
struct rvu *rvu = (struct rvu *)rvu_irq;
struct otx2_mbox_dev *mdev;
struct otx2_mbox *mbox;
struct mbox_hdr *hdr;
u64 intr;
u8 pf;
intr = rvu_read64(rvu, BLKADDR_RVUM, RVU_AF_PFAF_MBOX_INT);
/* Clear interrupts */
rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_PFAF_MBOX_INT, intr);
/* Sync with mbox memory region */
smp_wmb();
for (pf = 0; pf < rvu->hw->total_pfs; pf++) {
if (intr & (1ULL << pf)) {
mbox = &rvu->mbox;
mdev = &mbox->dev[pf];
hdr = mdev->mbase + mbox->rx_start;
if (hdr->num_msgs)
queue_work(rvu->mbox_wq,
&rvu->mbox_wrk[pf].work);
}
}
return IRQ_HANDLED;
}
static void rvu_enable_mbox_intr(struct rvu *rvu)
{
struct rvu_hwinfo *hw = rvu->hw;
/* Clear spurious irqs, if any */
rvu_write64(rvu, BLKADDR_RVUM,
RVU_AF_PFAF_MBOX_INT, INTR_MASK(hw->total_pfs));
/* Enable mailbox interrupt for all PFs except PF0 i.e. AF itself */
rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_PFAF_MBOX_INT_ENA_W1S,
INTR_MASK(hw->total_pfs) & ~1ULL);
}
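/* The mailbox interrupt registers used above and below carry one bit
* per PF (bit i for PF i, as rvu_mbox_intr_handler() also assumes), so
* masking out bit 0 keeps the AF (PF0) from raising a mailbox interrupt
* to itself. INTR_MASK() is assumed to build an all-ones mask covering
* hw->total_pfs bits.
*/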
static void rvu_unregister_interrupts(struct rvu *rvu)
{
int irq;
/* Disable the Mbox interrupt */
rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_PFAF_MBOX_INT_ENA_W1C,
INTR_MASK(rvu->hw->total_pfs) & ~1ULL);
for (irq = 0; irq < rvu->num_vec; irq++) {
if (rvu->irq_allocated[irq])
free_irq(pci_irq_vector(rvu->pdev, irq), rvu);
}
pci_free_irq_vectors(rvu->pdev);
rvu->num_vec = 0;
}
static int rvu_register_interrupts(struct rvu *rvu)
{
int ret;
rvu->num_vec = pci_msix_vec_count(rvu->pdev);
rvu->irq_name = devm_kmalloc_array(rvu->dev, rvu->num_vec,
NAME_SIZE, GFP_KERNEL);
if (!rvu->irq_name)
return -ENOMEM;
rvu->irq_allocated = devm_kcalloc(rvu->dev, rvu->num_vec,
sizeof(bool), GFP_KERNEL);
if (!rvu->irq_allocated)
return -ENOMEM;
/* Enable MSI-X */
ret = pci_alloc_irq_vectors(rvu->pdev, rvu->num_vec,
rvu->num_vec, PCI_IRQ_MSIX);
if (ret < 0) {
dev_err(rvu->dev,
"RVUAF: Request for %d msix vectors failed, ret %d\n",
rvu->num_vec, ret);
return ret;
}
/* Register mailbox interrupt handler */
sprintf(&rvu->irq_name[RVU_AF_INT_VEC_MBOX * NAME_SIZE], "RVUAF Mbox");
ret = request_irq(pci_irq_vector(rvu->pdev, RVU_AF_INT_VEC_MBOX),
rvu_mbox_intr_handler, 0,
&rvu->irq_name[RVU_AF_INT_VEC_MBOX * NAME_SIZE], rvu);
if (ret) {
dev_err(rvu->dev,
"RVUAF: IRQ registration failed for mbox irq\n");
goto fail;
}
rvu->irq_allocated[RVU_AF_INT_VEC_MBOX] = true;
/* Enable mailbox interrupts from all PFs */
rvu_enable_mbox_intr(rvu);
return 0;
fail:
pci_free_irq_vectors(rvu->pdev);
return ret;
}
static int rvu_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
struct device *dev = &pdev->dev;
struct rvu *rvu;
int err;
rvu = devm_kzalloc(dev, sizeof(*rvu), GFP_KERNEL);
if (!rvu)
return -ENOMEM;
rvu->hw = devm_kzalloc(dev, sizeof(struct rvu_hwinfo), GFP_KERNEL);
if (!rvu->hw) {
devm_kfree(dev, rvu);
return -ENOMEM;
}
pci_set_drvdata(pdev, rvu);
rvu->pdev = pdev;
rvu->dev = &pdev->dev;
err = pci_enable_device(pdev);
if (err) {
dev_err(dev, "Failed to enable PCI device\n");
goto err_freemem;
}
err = pci_request_regions(pdev, DRV_NAME);
if (err) {
dev_err(dev, "PCI request regions failed 0x%x\n", err);
goto err_disable_device;
}
err = pci_set_dma_mask(pdev, DMA_BIT_MASK(48));
if (err) {
dev_err(dev, "Unable to set DMA mask\n");
goto err_release_regions;
}
err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(48));
if (err) {
dev_err(dev, "Unable to set consistent DMA mask\n");
goto err_release_regions;
}
/* Map Admin function CSRs */
rvu->afreg_base = pcim_iomap(pdev, PCI_AF_REG_BAR_NUM, 0);
rvu->pfreg_base = pcim_iomap(pdev, PCI_PF_REG_BAR_NUM, 0);
if (!rvu->afreg_base || !rvu->pfreg_base) {
dev_err(dev, "Unable to map admin function CSRs, aborting\n");
err = -ENOMEM;
goto err_release_regions;
}
/* Check which blocks the HW supports */
rvu_check_block_implemented(rvu);
rvu_reset_all_blocks(rvu);
err = rvu_setup_hw_resources(rvu);
if (err)
goto err_release_regions;
err = rvu_mbox_init(rvu);
if (err)
goto err_hwsetup;
err = rvu_cgx_probe(rvu);
if (err)
goto err_mbox;
err = rvu_register_interrupts(rvu);
if (err)
goto err_cgx;
return 0;
err_cgx:
rvu_cgx_wq_destroy(rvu);
err_mbox:
rvu_mbox_destroy(rvu);
err_hwsetup:
rvu_reset_all_blocks(rvu);
rvu_free_hw_resources(rvu);
err_release_regions:
pci_release_regions(pdev);
err_disable_device:
pci_disable_device(pdev);
err_freemem:
pci_set_drvdata(pdev, NULL);
devm_kfree(&pdev->dev, rvu->hw);
devm_kfree(dev, rvu);
return err;
}
static void rvu_remove(struct pci_dev *pdev)
{
struct rvu *rvu = pci_get_drvdata(pdev);
rvu_unregister_interrupts(rvu);
rvu_cgx_wq_destroy(rvu);
rvu_mbox_destroy(rvu);
rvu_reset_all_blocks(rvu);
rvu_free_hw_resources(rvu);
pci_release_regions(pdev);
pci_disable_device(pdev);
pci_set_drvdata(pdev, NULL);
devm_kfree(&pdev->dev, rvu->hw);
devm_kfree(&pdev->dev, rvu);
}
static struct pci_driver rvu_driver = {
.name = DRV_NAME,
.id_table = rvu_id_table,
.probe = rvu_probe,
.remove = rvu_remove,
};
static int __init rvu_init_module(void)
{
int err;
pr_info("%s: %s\n", DRV_NAME, DRV_STRING);
err = pci_register_driver(&cgx_driver);
if (err < 0)
return err;
err = pci_register_driver(&rvu_driver);
if (err < 0)
pci_unregister_driver(&cgx_driver);
return err;
}
static void __exit rvu_cleanup_module(void)
{
pci_unregister_driver(&rvu_driver);
pci_unregister_driver(&cgx_driver);
}
module_init(rvu_init_module);
module_exit(rvu_cleanup_module);
/* SPDX-License-Identifier: GPL-2.0
* Marvell OcteonTx2 RVU Admin Function driver
*
* Copyright (C) 2018 Marvell International Ltd.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef RVU_H
#define RVU_H
#include "rvu_struct.h"
#include "mbox.h"
/* PCI device IDs */
#define PCI_DEVID_OCTEONTX2_RVU_AF 0xA065
/* PCI BAR nos */
#define PCI_AF_REG_BAR_NUM 0
#define PCI_PF_REG_BAR_NUM 2
#define PCI_MBOX_BAR_NUM 4
#define NAME_SIZE 32
/* PF_FUNC */
#define RVU_PFVF_PF_SHIFT 10
#define RVU_PFVF_PF_MASK 0x3F
#define RVU_PFVF_FUNC_SHIFT 0
#define RVU_PFVF_FUNC_MASK 0x3FF
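/* pcifunc encoding used throughout the AF: bits [15:10] carry the PF
* number and bits [9:0] the function number within that PF, where
* func 0 is the PF itself and func N (N > 0) is VF N-1. For example,
* pcifunc 0x0400 is PF1 itself and 0x0401 is PF1's VF0.
*/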
struct rvu_work {
struct work_struct work;
struct rvu *rvu;
};
struct rsrc_bmap {
unsigned long *bmap; /* Pointer to resource bitmap */
u16 max; /* Max resource id or count */
};
struct rvu_block {
struct rsrc_bmap lf;
u16 *fn_map; /* LF to pcifunc mapping */
bool multislot;
bool implemented;
u8 addr; /* RVU_BLOCK_ADDR_E */
u8 type; /* RVU_BLOCK_TYPE_E */
u8 lfshift;
u64 lookup_reg;
u64 pf_lfcnt_reg;
u64 vf_lfcnt_reg;
u64 lfcfg_reg;
u64 msixcfg_reg;
u64 lfreset_reg;
unsigned char name[NAME_SIZE];
};
/* Structure for per RVU func info, i.e. PF/VF */
struct rvu_pfvf {
bool npalf; /* Only one NPALF per RVU_FUNC */
bool nixlf; /* Only one NIXLF per RVU_FUNC */
u16 sso;
u16 ssow;
u16 cptlfs;
u16 timlfs;
/* Block LF's MSIX vector info */
struct rsrc_bmap msix; /* Bitmap for MSIX vector alloc */
#define MSIX_BLKLF(blkaddr, lf) (((blkaddr) << 8) | ((lf) & 0xFF))
u16 *msix_lfmap; /* Vector to block LF mapping */
};
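/* Example of the msix_lfmap encoding: MSIX_BLKLF(BLKADDR_NPA, 0)
* evaluates to (0x3 << 8) | 0 = 0x300, so each entry records which
* block's LF a given MSIX vector of this PF/VF has been assigned to.
*/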
struct rvu_hwinfo {
u8 total_pfs; /* MAX RVU PFs HW supports */
u16 total_vfs; /* Max RVU VFs HW supports */
u16 max_vfs_per_pf; /* Max VFs that can be attached to a PF */
struct rvu_block block[BLK_COUNT]; /* Block info */
};
struct rvu {
void __iomem *afreg_base;
void __iomem *pfreg_base;
struct pci_dev *pdev;
struct device *dev;
struct rvu_hwinfo *hw;
struct rvu_pfvf *pf;
struct rvu_pfvf *hwvf;
spinlock_t rsrc_lock; /* Serialize resource alloc/free */
/* Mbox */
struct otx2_mbox mbox;
struct rvu_work *mbox_wrk;
struct workqueue_struct *mbox_wq;
/* MSI-X */
u16 num_vec;
char *irq_name;
bool *irq_allocated;
dma_addr_t msix_base_iova;
/* CGX */
#define PF_CGXMAP_BASE 1 /* PF 0 is reserved for RVU Admin Function */
u8 cgx_mapped_pfs;
u8 cgx_cnt; /* available cgx ports */
u8 *pf2cgxlmac_map; /* pf to cgx_lmac map */
u16 *cgxlmac2pf_map; /* bitmap of mapped pfs for
* every cgx lmac port
*/
void **cgx_idmap; /* cgx id to cgx data map table */
struct work_struct cgx_evh_work;
struct workqueue_struct *cgx_evh_wq;
spinlock_t cgx_evq_lock; /* cgx event queue lock */
struct list_head cgx_evq_head; /* cgx event queue head */
};
static inline void rvu_write64(struct rvu *rvu, u64 block, u64 offset, u64 val)
{
writeq(val, rvu->afreg_base + ((block << 28) | offset));
}
static inline u64 rvu_read64(struct rvu *rvu, u64 block, u64 offset)
{
return readq(rvu->afreg_base + ((block << 28) | offset));
}
static inline void rvupf_write64(struct rvu *rvu, u64 offset, u64 val)
{
writeq(val, rvu->pfreg_base + offset);
}
static inline u64 rvupf_read64(struct rvu *rvu, u64 offset)
{
return readq(rvu->pfreg_base + offset);
}
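/* CSR addressing convention for the helpers above: within the AF BAR
* (afreg_base) every RVU block gets its own 256MB window selected by
* 'block << 28', with the register offset added inside that window.
* For example, rvu_read64(rvu, BLKADDR_NPA, NPA_AF_CONST) reads from
* afreg_base + (0x3ULL << 28) + 0x10. The rvupf_* helpers access the
* AF's own PF BAR (pfreg_base) directly by offset.
*/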
/* Function Prototypes
* RVU
*/
int rvu_alloc_bitmap(struct rsrc_bmap *rsrc);
int rvu_alloc_rsrc(struct rsrc_bmap *rsrc);
void rvu_free_rsrc(struct rsrc_bmap *rsrc, int id);
int rvu_rsrc_free_count(struct rsrc_bmap *rsrc);
int rvu_get_pf(u16 pcifunc);
struct rvu_pfvf *rvu_get_pfvf(struct rvu *rvu, int pcifunc);
void rvu_get_pf_numvfs(struct rvu *rvu, int pf, int *numvfs, int *hwvf);
bool is_block_implemented(struct rvu_hwinfo *hw, int blkaddr);
int rvu_get_lf(struct rvu *rvu, struct rvu_block *block, u16 pcifunc, u16 slot);
int rvu_get_blkaddr(struct rvu *rvu, int blktype, u16 pcifunc);
int rvu_poll_reg(struct rvu *rvu, u64 block, u64 offset, u64 mask, bool zero);
/* CGX APIs */
int rvu_cgx_probe(struct rvu *rvu);
void rvu_cgx_wq_destroy(struct rvu *rvu);
#endif /* RVU_H */
// SPDX-License-Identifier: GPL-2.0
/* Marvell OcteonTx2 RVU Admin Function driver
*
* Copyright (C) 2018 Marvell International Ltd.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/types.h>
#include <linux/module.h>
#include <linux/pci.h>
#include "rvu.h"
#include "cgx.h"
struct cgx_evq_entry {
struct list_head evq_node;
struct cgx_link_event link_event;
};
static inline u8 cgxlmac_id_to_bmap(u8 cgx_id, u8 lmac_id)
{
return ((cgx_id & 0xF) << 4) | (lmac_id & 0xF);
}
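/* pf2cgxlmac_map entries pack the CGX id in the upper nibble and the
* LMAC id in the lower nibble, e.g. cgx 1 / lmac 2 is stored as 0x12;
* the 0xFF written at offset 0 marks the unused AF slot as invalid.
*/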
static void *rvu_cgx_pdata(u8 cgx_id, struct rvu *rvu)
{
if (cgx_id >= rvu->cgx_cnt)
return NULL;
return rvu->cgx_idmap[cgx_id];
}
static int rvu_map_cgx_lmac_pf(struct rvu *rvu)
{
int cgx_cnt = rvu->cgx_cnt;
int cgx, lmac_cnt, lmac;
int pf = PF_CGXMAP_BASE;
int size;
if (!cgx_cnt)
return 0;
if (cgx_cnt > 0xF || MAX_LMAC_PER_CGX > 0xF)
return -EINVAL;
/* Alloc map table
* An additional entry is required since PF id starts from 1 and
* hence entry at offset 0 is invalid.
*/
size = (cgx_cnt * MAX_LMAC_PER_CGX + 1) * sizeof(u8);
rvu->pf2cgxlmac_map = devm_kzalloc(rvu->dev, size, GFP_KERNEL);
if (!rvu->pf2cgxlmac_map)
return -ENOMEM;
/* Initialize offset 0 with an invalid cgx and lmac id */
rvu->pf2cgxlmac_map[0] = 0xFF;
/* Reverse map table */
rvu->cgxlmac2pf_map = devm_kzalloc(rvu->dev,
cgx_cnt * MAX_LMAC_PER_CGX * sizeof(u16),
GFP_KERNEL);
if (!rvu->cgxlmac2pf_map)
return -ENOMEM;
rvu->cgx_mapped_pfs = 0;
for (cgx = 0; cgx < cgx_cnt; cgx++) {
lmac_cnt = cgx_get_lmac_cnt(rvu_cgx_pdata(cgx, rvu));
for (lmac = 0; lmac < lmac_cnt; lmac++, pf++) {
rvu->pf2cgxlmac_map[pf] = cgxlmac_id_to_bmap(cgx, lmac);
rvu->cgxlmac2pf_map[CGX_OFFSET(cgx) + lmac] = 1 << pf;
rvu->cgx_mapped_pfs++;
}
}
return 0;
}
/* This is called from interrupt context and is expected to be atomic */
static int cgx_lmac_postevent(struct cgx_link_event *event, void *data)
{
struct cgx_evq_entry *qentry;
struct rvu *rvu = data;
/* post event to the event queue */
qentry = kmalloc(sizeof(*qentry), GFP_ATOMIC);
if (!qentry)
return -ENOMEM;
qentry->link_event = *event;
spin_lock(&rvu->cgx_evq_lock);
list_add_tail(&qentry->evq_node, &rvu->cgx_evq_head);
spin_unlock(&rvu->cgx_evq_lock);
/* start worker to process the events */
queue_work(rvu->cgx_evh_wq, &rvu->cgx_evh_work);
return 0;
}
static void cgx_evhandler_task(struct work_struct *work)
{
struct rvu *rvu = container_of(work, struct rvu, cgx_evh_work);
struct cgx_evq_entry *qentry;
struct cgx_link_event *event;
unsigned long flags;
do {
/* Dequeue an event */
spin_lock_irqsave(&rvu->cgx_evq_lock, flags);
qentry = list_first_entry_or_null(&rvu->cgx_evq_head,
struct cgx_evq_entry,
evq_node);
if (qentry)
list_del(&qentry->evq_node);
spin_unlock_irqrestore(&rvu->cgx_evq_lock, flags);
if (!qentry)
break; /* nothing more to process */
event = &qentry->link_event;
/* Do nothing for now */
kfree(qentry);
} while (1);
}
static void cgx_lmac_event_handler_init(struct rvu *rvu)
{
struct cgx_event_cb cb;
int cgx, lmac, err;
void *cgxd;
spin_lock_init(&rvu->cgx_evq_lock);
INIT_LIST_HEAD(&rvu->cgx_evq_head);
INIT_WORK(&rvu->cgx_evh_work, cgx_evhandler_task);
rvu->cgx_evh_wq = alloc_workqueue("rvu_evh_wq", 0, 0);
if (!rvu->cgx_evh_wq) {
dev_err(rvu->dev, "alloc workqueue failed");
return;
}
cb.notify_link_chg = cgx_lmac_postevent; /* link change call back */
cb.data = rvu;
for (cgx = 0; cgx < rvu->cgx_cnt; cgx++) {
cgxd = rvu_cgx_pdata(cgx, rvu);
for (lmac = 0; lmac < cgx_get_lmac_cnt(cgxd); lmac++) {
err = cgx_lmac_evh_register(&cb, cgxd, lmac);
if (err)
dev_err(rvu->dev,
"%d:%d handler register failed\n",
cgx, lmac);
}
}
}
void rvu_cgx_wq_destroy(struct rvu *rvu)
{
if (rvu->cgx_evh_wq) {
flush_workqueue(rvu->cgx_evh_wq);
destroy_workqueue(rvu->cgx_evh_wq);
rvu->cgx_evh_wq = NULL;
}
}
int rvu_cgx_probe(struct rvu *rvu)
{
int i, err;
/* find available cgx ports */
rvu->cgx_cnt = cgx_get_cgx_cnt();
if (!rvu->cgx_cnt) {
dev_info(rvu->dev, "No CGX devices found!\n");
return -ENODEV;
}
rvu->cgx_idmap = devm_kzalloc(rvu->dev, rvu->cgx_cnt * sizeof(void *),
GFP_KERNEL);
if (!rvu->cgx_idmap)
return -ENOMEM;
/* Initialize the cgxdata table */
for (i = 0; i < rvu->cgx_cnt; i++)
rvu->cgx_idmap[i] = cgx_get_pdata(i);
/* Map CGX LMAC interfaces to RVU PFs */
err = rvu_map_cgx_lmac_pf(rvu);
if (err)
return err;
/* Register for CGX events */
cgx_lmac_event_handler_init(rvu);
return 0;
}
/* SPDX-License-Identifier: GPL-2.0
* Marvell OcteonTx2 RVU Admin Function driver
*
* Copyright (C) 2018 Marvell International Ltd.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef RVU_REG_H
#define RVU_REG_H
/* Admin function registers */
#define RVU_AF_MSIXTR_BASE (0x10)
#define RVU_AF_ECO (0x20)
#define RVU_AF_BLK_RST (0x30)
#define RVU_AF_PF_BAR4_ADDR (0x40)
#define RVU_AF_RAS (0x100)
#define RVU_AF_RAS_W1S (0x108)
#define RVU_AF_RAS_ENA_W1S (0x110)
#define RVU_AF_RAS_ENA_W1C (0x118)
#define RVU_AF_GEN_INT (0x120)
#define RVU_AF_GEN_INT_W1S (0x128)
#define RVU_AF_GEN_INT_ENA_W1S (0x130)
#define RVU_AF_GEN_INT_ENA_W1C (0x138)
#define RVU_AF_AFPF_MBOX0 (0x02000)
#define RVU_AF_AFPF_MBOX1 (0x02008)
#define RVU_AF_AFPFX_MBOXX(a, b) (0x2000 | (a) << 4 | (b) << 3)
#define RVU_AF_PFME_STATUS (0x2800)
#define RVU_AF_PFTRPEND (0x2810)
#define RVU_AF_PFTRPEND_W1S (0x2820)
#define RVU_AF_PF_RST (0x2840)
#define RVU_AF_HWVF_RST (0x2850)
#define RVU_AF_PFAF_MBOX_INT (0x2880)
#define RVU_AF_PFAF_MBOX_INT_W1S (0x2888)
#define RVU_AF_PFAF_MBOX_INT_ENA_W1S (0x2890)
#define RVU_AF_PFAF_MBOX_INT_ENA_W1C (0x2898)
#define RVU_AF_PFFLR_INT (0x28a0)
#define RVU_AF_PFFLR_INT_W1S (0x28a8)
#define RVU_AF_PFFLR_INT_ENA_W1S (0x28b0)
#define RVU_AF_PFFLR_INT_ENA_W1C (0x28b8)
#define RVU_AF_PFME_INT (0x28c0)
#define RVU_AF_PFME_INT_W1S (0x28c8)
#define RVU_AF_PFME_INT_ENA_W1S (0x28d0)
#define RVU_AF_PFME_INT_ENA_W1C (0x28d8)
/* Admin function's privileged PF/VF registers */
#define RVU_PRIV_CONST (0x8000000)
#define RVU_PRIV_GEN_CFG (0x8000010)
#define RVU_PRIV_CLK_CFG (0x8000020)
#define RVU_PRIV_ACTIVE_PC (0x8000030)
#define RVU_PRIV_PFX_CFG(a) (0x8000100 | (a) << 16)
#define RVU_PRIV_PFX_MSIX_CFG(a) (0x8000110 | (a) << 16)
#define RVU_PRIV_PFX_ID_CFG(a) (0x8000120 | (a) << 16)
#define RVU_PRIV_PFX_INT_CFG(a) (0x8000200 | (a) << 16)
#define RVU_PRIV_PFX_NIX0_CFG (0x8000300)
#define RVU_PRIV_PFX_NPA_CFG (0x8000310)
#define RVU_PRIV_PFX_SSO_CFG (0x8000320)
#define RVU_PRIV_PFX_SSOW_CFG (0x8000330)
#define RVU_PRIV_PFX_TIM_CFG (0x8000340)
#define RVU_PRIV_PFX_CPT0_CFG (0x8000350)
#define RVU_PRIV_BLOCK_TYPEX_REV(a) (0x8000400 | (a) << 3)
#define RVU_PRIV_HWVFX_INT_CFG(a) (0x8001280 | (a) << 16)
#define RVU_PRIV_HWVFX_NIX0_CFG (0x8001300)
#define RVU_PRIV_HWVFX_NPA_CFG (0x8001310)
#define RVU_PRIV_HWVFX_SSO_CFG (0x8001320)
#define RVU_PRIV_HWVFX_SSOW_CFG (0x8001330)
#define RVU_PRIV_HWVFX_TIM_CFG (0x8001340)
#define RVU_PRIV_HWVFX_CPT0_CFG (0x8001350)
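/* The per-PF (PFX) and per-HWVF (HWVFX) privileged registers above are
* replicated with a 64KB stride, i.e. the instance for PF/HWVF 'n' sits
* at the base offset OR'ed with (n << 16), as the parameterized macros
* such as RVU_PRIV_PFX_CFG(a) show. Callers of the non-parameterized
* NIX0/NPA/SSO/SSOW/TIM/CPT0 variants are assumed to OR in the index
* the same way.
*/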
/* RVU PF registers */
#define RVU_PF_VFX_PFVF_MBOX0 (0x00000)
#define RVU_PF_VFX_PFVF_MBOX1 (0x00008)
#define RVU_PF_VFX_PFVF_MBOXX(a, b) (0x0 | (a) << 12 | (b) << 3)
#define RVU_PF_VF_BAR4_ADDR (0x10)
#define RVU_PF_BLOCK_ADDRX_DISC(a) (0x200 | (a) << 3)
#define RVU_PF_VFME_STATUSX(a) (0x800 | (a) << 3)
#define RVU_PF_VFTRPENDX(a) (0x820 | (a) << 3)
#define RVU_PF_VFTRPEND_W1SX(a) (0x840 | (a) << 3)
#define RVU_PF_VFPF_MBOX_INTX(a) (0x880 | (a) << 3)
#define RVU_PF_VFPF_MBOX_INT_W1SX(a) (0x8A0 | (a) << 3)
#define RVU_PF_VFPF_MBOX_INT_ENA_W1SX(a) (0x8C0 | (a) << 3)
#define RVU_PF_VFPF_MBOX_INT_ENA_W1CX(a) (0x8E0 | (a) << 3)
#define RVU_PF_VFFLR_INTX(a) (0x900 | (a) << 3)
#define RVU_PF_VFFLR_INT_W1SX(a) (0x920 | (a) << 3)
#define RVU_PF_VFFLR_INT_ENA_W1SX(a) (0x940 | (a) << 3)
#define RVU_PF_VFFLR_INT_ENA_W1CX(a) (0x960 | (a) << 3)
#define RVU_PF_VFME_INTX(a) (0x980 | (a) << 3)
#define RVU_PF_VFME_INT_W1SX(a) (0x9A0 | (a) << 3)
#define RVU_PF_VFME_INT_ENA_W1SX(a) (0x9C0 | (a) << 3)
#define RVU_PF_VFME_INT_ENA_W1CX(a) (0x9E0 | (a) << 3)
#define RVU_PF_PFAF_MBOX0 (0xC00)
#define RVU_PF_PFAF_MBOX1 (0xC08)
#define RVU_PF_PFAF_MBOXX(a) (0xC00 | (a) << 3)
#define RVU_PF_INT (0xc20)
#define RVU_PF_INT_W1S (0xc28)
#define RVU_PF_INT_ENA_W1S (0xc30)
#define RVU_PF_INT_ENA_W1C (0xc38)
#define RVU_PF_MSIX_VECX_ADDR(a) (0x000 | (a) << 4)
#define RVU_PF_MSIX_VECX_CTL(a) (0x008 | (a) << 4)
#define RVU_PF_MSIX_PBAX(a) (0xF0000 | (a) << 3)
/* RVU VF registers */
#define RVU_VF_VFPF_MBOX0 (0x00000)
#define RVU_VF_VFPF_MBOX1 (0x00008)
/* NPA block's admin function registers */
#define NPA_AF_BLK_RST (0x0000)
#define NPA_AF_CONST (0x0010)
#define NPA_AF_CONST1 (0x0018)
#define NPA_AF_LF_RST (0x0020)
#define NPA_AF_GEN_CFG (0x0030)
#define NPA_AF_NDC_CFG (0x0040)
#define NPA_AF_INP_CTL (0x00D0)
#define NPA_AF_ACTIVE_CYCLES_PC (0x00F0)
#define NPA_AF_AVG_DELAY (0x0100)
#define NPA_AF_GEN_INT (0x0140)
#define NPA_AF_GEN_INT_W1S (0x0148)
#define NPA_AF_GEN_INT_ENA_W1S (0x0150)
#define NPA_AF_GEN_INT_ENA_W1C (0x0158)
#define NPA_AF_RVU_INT (0x0160)
#define NPA_AF_RVU_INT_W1S (0x0168)
#define NPA_AF_RVU_INT_ENA_W1S (0x0170)
#define NPA_AF_RVU_INT_ENA_W1C (0x0178)
#define NPA_AF_ERR_INT (0x0180)
#define NPA_AF_ERR_INT_W1S (0x0188)
#define NPA_AF_ERR_INT_ENA_W1S (0x0190)
#define NPA_AF_ERR_INT_ENA_W1C (0x0198)
#define NPA_AF_RAS (0x01A0)
#define NPA_AF_RAS_W1S (0x01A8)
#define NPA_AF_RAS_ENA_W1S (0x01B0)
#define NPA_AF_RAS_ENA_W1C (0x01B8)
#define NPA_AF_BP_TEST (0x0200)
#define NPA_AF_ECO (0x0300)
#define NPA_AF_AQ_CFG (0x0600)
#define NPA_AF_AQ_BASE (0x0610)
#define NPA_AF_AQ_STATUS (0x0620)
#define NPA_AF_AQ_DOOR (0x0630)
#define NPA_AF_AQ_DONE_WAIT (0x0640)
#define NPA_AF_AQ_DONE (0x0650)
#define NPA_AF_AQ_DONE_ACK (0x0660)
#define NPA_AF_AQ_DONE_INT (0x0680)
#define NPA_AF_AQ_DONE_INT_W1S (0x0688)
#define NPA_AF_AQ_DONE_ENA_W1S (0x0690)
#define NPA_AF_AQ_DONE_ENA_W1C (0x0698)
#define NPA_AF_LFX_AURAS_CFG(a) (0x4000 | (a) << 18)
#define NPA_AF_LFX_LOC_AURAS_BASE(a) (0x4010 | (a) << 18)
#define NPA_AF_LFX_QINTS_CFG(a) (0x4100 | (a) << 18)
#define NPA_AF_LFX_QINTS_BASE(a) (0x4110 | (a) << 18)
#define NPA_PRIV_AF_INT_CFG (0x10000)
#define NPA_PRIV_LFX_CFG (0x10010)
#define NPA_PRIV_LFX_INT_CFG (0x10020)
#define NPA_AF_RVU_LF_CFG_DEBUG (0x10030)
/* NIX block's admin function registers */
#define NIX_AF_CFG (0x0000)
#define NIX_AF_STATUS (0x0010)
#define NIX_AF_NDC_CFG (0x0018)
#define NIX_AF_CONST (0x0020)
#define NIX_AF_CONST1 (0x0028)
#define NIX_AF_CONST2 (0x0030)
#define NIX_AF_CONST3 (0x0038)
#define NIX_AF_SQ_CONST (0x0040)
#define NIX_AF_CQ_CONST (0x0048)
#define NIX_AF_RQ_CONST (0x0050)
#define NIX_AF_PSE_CONST (0x0060)
#define NIX_AF_TL1_CONST (0x0070)
#define NIX_AF_TL2_CONST (0x0078)
#define NIX_AF_TL3_CONST (0x0080)
#define NIX_AF_TL4_CONST (0x0088)
#define NIX_AF_MDQ_CONST (0x0090)
#define NIX_AF_MC_MIRROR_CONST (0x0098)
#define NIX_AF_LSO_CFG (0x00A8)
#define NIX_AF_BLK_RST (0x00B0)
#define NIX_AF_TX_TSTMP_CFG (0x00C0)
#define NIX_AF_RX_CFG (0x00D0)
#define NIX_AF_AVG_DELAY (0x00E0)
#define NIX_AF_CINT_DELAY (0x00F0)
#define NIX_AF_RX_MCAST_BASE (0x0100)
#define NIX_AF_RX_MCAST_CFG (0x0110)
#define NIX_AF_RX_MCAST_BUF_BASE (0x0120)
#define NIX_AF_RX_MCAST_BUF_CFG (0x0130)
#define NIX_AF_RX_MIRROR_BUF_BASE (0x0140)
#define NIX_AF_RX_MIRROR_BUF_CFG (0x0148)
#define NIX_AF_LF_RST (0x0150)
#define NIX_AF_GEN_INT (0x0160)
#define NIX_AF_GEN_INT_W1S (0x0168)
#define NIX_AF_GEN_INT_ENA_W1S (0x0170)
#define NIX_AF_GEN_INT_ENA_W1C (0x0178)
#define NIX_AF_ERR_INT (0x0180)
#define NIX_AF_ERR_INT_W1S (0x0188)
#define NIX_AF_ERR_INT_ENA_W1S (0x0190)
#define NIX_AF_ERR_INT_ENA_W1C (0x0198)
#define NIX_AF_RAS (0x01A0)
#define NIX_AF_RAS_W1S (0x01A8)
#define NIX_AF_RAS_ENA_W1S (0x01B0)
#define NIX_AF_RAS_ENA_W1C (0x01B8)
#define NIX_AF_RVU_INT (0x01C0)
#define NIX_AF_RVU_INT_W1S (0x01C8)
#define NIX_AF_RVU_INT_ENA_W1S (0x01D0)
#define NIX_AF_RVU_INT_ENA_W1C (0x01D8)
#define NIX_AF_TCP_TIMER (0x01E0)
#define NIX_AF_RX_WQE_TAG_CTL (0x01F0)
#define NIX_AF_RX_DEF_OL2 (0x0200)
#define NIX_AF_RX_DEF_OIP4 (0x0210)
#define NIX_AF_RX_DEF_IIP4 (0x0220)
#define NIX_AF_RX_DEF_OIP6 (0x0230)
#define NIX_AF_RX_DEF_IIP6 (0x0240)
#define NIX_AF_RX_DEF_OTCP (0x0250)
#define NIX_AF_RX_DEF_ITCP (0x0260)
#define NIX_AF_RX_DEF_OUDP (0x0270)
#define NIX_AF_RX_DEF_IUDP (0x0280)
#define NIX_AF_RX_DEF_OSCTP (0x0290)
#define NIX_AF_RX_DEF_ISCTP (0x02A0)
#define NIX_AF_RX_DEF_IPSECX (0x02B0)
#define NIX_AF_RX_IPSEC_GEN_CFG (0x0300)
#define NIX_AF_RX_CPTX_INST_ADDR (0x0310)
#define NIX_AF_NDC_TX_SYNC (0x03F0)
#define NIX_AF_AQ_CFG (0x0400)
#define NIX_AF_AQ_BASE (0x0410)
#define NIX_AF_AQ_STATUS (0x0420)
#define NIX_AF_AQ_DOOR (0x0430)
#define NIX_AF_AQ_DONE_WAIT (0x0440)
#define NIX_AF_AQ_DONE (0x0450)
#define NIX_AF_AQ_DONE_ACK (0x0460)
#define NIX_AF_AQ_DONE_TIMER (0x0470)
#define NIX_AF_AQ_DONE_INT (0x0480)
#define NIX_AF_AQ_DONE_INT_W1S (0x0488)
#define NIX_AF_AQ_DONE_ENA_W1S (0x0490)
#define NIX_AF_AQ_DONE_ENA_W1C (0x0498)
#define NIX_AF_RX_LINKX_SLX_SPKT_CNT (0x0500)
#define NIX_AF_RX_LINKX_SLX_SXQE_CNT (0x0510)
#define NIX_AF_RX_MCAST_JOBSX_SW_CNT (0x0520)
#define NIX_AF_RX_MIRROR_JOBSX_SW_CNT (0x0530)
#define NIX_AF_RX_LINKX_CFG(a) (0x0540 | (a) << 16)
#define NIX_AF_RX_SW_SYNC (0x0550)
#define NIX_AF_RX_SW_SYNC_DONE (0x0560)
#define NIX_AF_SEB_ECO (0x0600)
#define NIX_AF_SEB_TEST_BP (0x0610)
#define NIX_AF_NORM_TX_FIFO_STATUS (0x0620)
#define NIX_AF_EXPR_TX_FIFO_STATUS (0x0630)
#define NIX_AF_SDP_TX_FIFO_STATUS (0x0640)
#define NIX_AF_TX_NPC_CAPTURE_CONFIG (0x0660)
#define NIX_AF_TX_NPC_CAPTURE_INFO (0x0670)
#define NIX_AF_DEBUG_NPC_RESP_DATAX(a) (0x680 | (a) << 3)
#define NIX_AF_SMQX_CFG(a) (0x700 | (a) << 16)
#define NIX_AF_PSE_CHANNEL_LEVEL (0x800)
#define NIX_AF_PSE_SHAPER_CFG (0x810)
#define NIX_AF_TX_EXPR_CREDIT (0x830)
#define NIX_AF_MARK_FORMATX_CTL(a) (0x900 | (a) << 18)
#define NIX_AF_TX_LINKX_NORM_CREDIT(a) (0xA00 | (a) << 16)
#define NIX_AF_TX_LINKX_EXPR_CREDIT(a) (0xA10 | (a) << 16)
#define NIX_AF_TX_LINKX_SW_XOFF(a) (0xA20 | (a) << 16)
#define NIX_AF_TX_LINKX_HW_XOFF(a) (0xA30 | (a) << 16)
#define NIX_AF_SDP_LINK_CREDIT (0xa40)
#define NIX_AF_SDP_SW_XOFFX(a) (0xA60 | (a) << 3)
#define NIX_AF_SDP_HW_XOFFX(a) (0xAC0 | (a) << 3)
#define NIX_AF_TL4X_BP_STATUS(a) (0xB00 | (a) << 16)
#define NIX_AF_TL4X_SDP_LINK_CFG(a) (0xB10 | (a) << 16)
#define NIX_AF_TL1X_SCHEDULE(a) (0xC00 | (a) << 16)
#define NIX_AF_TL1X_SHAPE(a) (0xC10 | (a) << 16)
#define NIX_AF_TL1X_CIR(a) (0xC20 | (a) << 16)
#define NIX_AF_TL1X_SHAPE_STATE(a) (0xC50 | (a) << 16)
#define NIX_AF_TL1X_SW_XOFF(a) (0xC70 | (a) << 16)
#define NIX_AF_TL1X_TOPOLOGY(a) (0xC80 | (a) << 16)
#define NIX_AF_TL1X_GREEN(a) (0xC90 | (a) << 16)
#define NIX_AF_TL1X_YELLOW(a) (0xCA0 | (a) << 16)
#define NIX_AF_TL1X_RED(a) (0xCB0 | (a) << 16)
#define NIX_AF_TL1X_MD_DEBUG0(a) (0xCC0 | (a) << 16)
#define NIX_AF_TL1X_MD_DEBUG1(a) (0xCC8 | (a) << 16)
#define NIX_AF_TL1X_MD_DEBUG2(a) (0xCD0 | (a) << 16)
#define NIX_AF_TL1X_MD_DEBUG3(a) (0xCD8 | (a) << 16)
#define NIX_AF_TL1A_DEBUG (0xce0)
#define NIX_AF_TL1B_DEBUG (0xcf0)
#define NIX_AF_TL1_DEBUG_GREEN (0xd00)
#define NIX_AF_TL1_DEBUG_NODE (0xd10)
#define NIX_AF_TL1X_DROPPED_PACKETS(a) (0xD20 | (a) << 16)
#define NIX_AF_TL1X_DROPPED_BYTES(a) (0xD30 | (a) << 16)
#define NIX_AF_TL1X_RED_PACKETS(a) (0xD40 | (a) << 16)
#define NIX_AF_TL1X_RED_BYTES(a) (0xD50 | (a) << 16)
#define NIX_AF_TL1X_YELLOW_PACKETS(a) (0xD60 | (a) << 16)
#define NIX_AF_TL1X_YELLOW_BYTES(a) (0xD70 | (a) << 16)
#define NIX_AF_TL1X_GREEN_PACKETS(a) (0xD80 | (a) << 16)
#define NIX_AF_TL1X_GREEN_BYTES(a) (0xD90 | (a) << 16)
#define NIX_AF_TL2X_SCHEDULE(a) (0xE00 | (a) << 16)
#define NIX_AF_TL2X_SHAPE(a) (0xE10 | (a) << 16)
#define NIX_AF_TL2X_CIR(a) (0xE20 | (a) << 16)
#define NIX_AF_TL2X_PIR(a) (0xE30 | (a) << 16)
#define NIX_AF_TL2X_SCHED_STATE(a) (0xE40 | (a) << 16)
#define NIX_AF_TL2X_SHAPE_STATE(a) (0xE50 | (a) << 16)
#define NIX_AF_TL2X_POINTERS(a) (0xE60 | (a) << 16)
#define NIX_AF_TL2X_SW_XOFF(a) (0xE70 | (a) << 16)
#define NIX_AF_TL2X_TOPOLOGY(a) (0xE80 | (a) << 16)
#define NIX_AF_TL2X_PARENT(a) (0xE88 | (a) << 16)
#define NIX_AF_TL2X_GREEN(a) (0xE90 | (a) << 16)
#define NIX_AF_TL2X_YELLOW(a) (0xEA0 | (a) << 16)
#define NIX_AF_TL2X_RED(a) (0xEB0 | (a) << 16)
#define NIX_AF_TL2X_MD_DEBUG0(a) (0xEC0 | (a) << 16)
#define NIX_AF_TL2X_MD_DEBUG1(a) (0xEC8 | (a) << 16)
#define NIX_AF_TL2X_MD_DEBUG2(a) (0xED0 | (a) << 16)
#define NIX_AF_TL2X_MD_DEBUG3(a) (0xED8 | (a) << 16)
#define NIX_AF_TL2A_DEBUG (0xee0)
#define NIX_AF_TL2B_DEBUG (0xef0)
#define NIX_AF_TL3X_SCHEDULE(a) (0x1000 | (a) << 16)
#define NIX_AF_TL3X_SHAPE(a) (0x1010 | (a) << 16)
#define NIX_AF_TL3X_CIR(a) (0x1020 | (a) << 16)
#define NIX_AF_TL3X_PIR(a) (0x1030 | (a) << 16)
#define NIX_AF_TL3X_SCHED_STATE(a) (0x1040 | (a) << 16)
#define NIX_AF_TL3X_SHAPE_STATE(a) (0x1050 | (a) << 16)
#define NIX_AF_TL3X_POINTERS(a) (0x1060 | (a) << 16)
#define NIX_AF_TL3X_SW_XOFF(a) (0x1070 | (a) << 16)
#define NIX_AF_TL3X_TOPOLOGY(a) (0x1080 | (a) << 16)
#define NIX_AF_TL3X_PARENT(a) (0x1088 | (a) << 16)
#define NIX_AF_TL3X_GREEN(a) (0x1090 | (a) << 16)
#define NIX_AF_TL3X_YELLOW(a) (0x10A0 | (a) << 16)
#define NIX_AF_TL3X_RED(a) (0x10B0 | (a) << 16)
#define NIX_AF_TL3X_MD_DEBUG0(a) (0x10C0 | (a) << 16)
#define NIX_AF_TL3X_MD_DEBUG1(a) (0x10C8 | (a) << 16)
#define NIX_AF_TL3X_MD_DEBUG2(a) (0x10D0 | (a) << 16)
#define NIX_AF_TL3X_MD_DEBUG3(a) (0x10D8 | (a) << 16)
#define NIX_AF_TL3A_DEBUG (0x10e0)
#define NIX_AF_TL3B_DEBUG (0x10f0)
#define NIX_AF_TL4X_SCHEDULE(a) (0x1200 | (a) << 16)
#define NIX_AF_TL4X_SHAPE(a) (0x1210 | (a) << 16)
#define NIX_AF_TL4X_CIR(a) (0x1220 | (a) << 16)
#define NIX_AF_TL4X_PIR(a) (0x1230 | (a) << 16)
#define NIX_AF_TL4X_SCHED_STATE(a) (0x1240 | (a) << 16)
#define NIX_AF_TL4X_SHAPE_STATE(a) (0x1250 | (a) << 16)
#define NIX_AF_TL4X_POINTERS(a) (0x1260 | (a) << 16)
#define NIX_AF_TL4X_SW_XOFF(a) (0x1270 | (a) << 16)
#define NIX_AF_TL4X_TOPOLOGY(a) (0x1280 | (a) << 16)
#define NIX_AF_TL4X_PARENT(a) (0x1288 | (a) << 16)
#define NIX_AF_TL4X_GREEN(a) (0x1290 | (a) << 16)
#define NIX_AF_TL4X_YELLOW(a) (0x12A0 | (a) << 16)
#define NIX_AF_TL4X_RED(a) (0x12B0 | (a) << 16)
#define NIX_AF_TL4X_MD_DEBUG0(a) (0x12C0 | (a) << 16)
#define NIX_AF_TL4X_MD_DEBUG1(a) (0x12C8 | (a) << 16)
#define NIX_AF_TL4X_MD_DEBUG2(a) (0x12D0 | (a) << 16)
#define NIX_AF_TL4X_MD_DEBUG3(a) (0x12D8 | (a) << 16)
#define NIX_AF_TL4A_DEBUG (0x12e0)
#define NIX_AF_TL4B_DEBUG (0x12f0)
#define NIX_AF_MDQX_SCHEDULE(a) (0x1400 | (a) << 16)
#define NIX_AF_MDQX_SHAPE(a) (0x1410 | (a) << 16)
#define NIX_AF_MDQX_CIR(a) (0x1420 | (a) << 16)
#define NIX_AF_MDQX_PIR(a) (0x1430 | (a) << 16)
#define NIX_AF_MDQX_SCHED_STATE(a) (0x1440 | (a) << 16)
#define NIX_AF_MDQX_SHAPE_STATE(a) (0x1450 | (a) << 16)
#define NIX_AF_MDQX_POINTERS(a) (0x1460 | (a) << 16)
#define NIX_AF_MDQX_SW_XOFF(a) (0x1470 | (a) << 16)
#define NIX_AF_MDQX_PARENT(a) (0x1480 | (a) << 16)
#define NIX_AF_MDQX_MD_DEBUG(a) (0x14C0 | (a) << 16)
#define NIX_AF_MDQX_PTR_FIFO(a) (0x14D0 | (a) << 16)
#define NIX_AF_MDQA_DEBUG (0x14e0)
#define NIX_AF_MDQB_DEBUG (0x14f0)
#define NIX_AF_TL3_TL2X_CFG(a) (0x1600 | (a) << 18)
#define NIX_AF_TL3_TL2X_BP_STATUS(a) (0x1610 | (a) << 16)
#define NIX_AF_TL3_TL2X_LINKX_CFG(a, b) (0x1700 | (a) << 16 | (b) << 3)
#define NIX_AF_RX_FLOW_KEY_ALGX_FIELDX(a, b) (0x1800 | (a) << 18 | (b) << 3)
#define NIX_AF_TX_MCASTX(a) (0x1900 | (a) << 15)
#define NIX_AF_TX_VTAG_DEFX_CTL(a) (0x1A00 | (a) << 16)
#define NIX_AF_TX_VTAG_DEFX_DATA(a) (0x1A10 | (a) << 16)
#define NIX_AF_RX_BPIDX_STATUS(a) (0x1A20 | (a) << 17)
#define NIX_AF_RX_CHANX_CFG(a) (0x1A30 | (a) << 15)
#define NIX_AF_CINT_TIMERX(a) (0x1A40 | (a) << 18)
#define NIX_AF_LSO_FORMATX_FIELDX(a, b) (0x1B00 | (a) << 16 | (b) << 3)
#define NIX_AF_LFX_CFG(a) (0x4000 | (a) << 17)
#define NIX_AF_LFX_SQS_CFG(a) (0x4020 | (a) << 17)
#define NIX_AF_LFX_TX_CFG2(a) (0x4028 | (a) << 17)
#define NIX_AF_LFX_SQS_BASE(a) (0x4030 | (a) << 17)
#define NIX_AF_LFX_RQS_CFG(a) (0x4040 | (a) << 17)
#define NIX_AF_LFX_RQS_BASE(a) (0x4050 | (a) << 17)
#define NIX_AF_LFX_CQS_CFG(a) (0x4060 | (a) << 17)
#define NIX_AF_LFX_CQS_BASE(a) (0x4070 | (a) << 17)
#define NIX_AF_LFX_TX_CFG(a) (0x4080 | (a) << 17)
#define NIX_AF_LFX_TX_PARSE_CFG(a) (0x4090 | (a) << 17)
#define NIX_AF_LFX_RX_CFG(a) (0x40A0 | (a) << 17)
#define NIX_AF_LFX_RSS_CFG(a) (0x40C0 | (a) << 17)
#define NIX_AF_LFX_RSS_BASE(a) (0x40D0 | (a) << 17)
#define NIX_AF_LFX_QINTS_CFG(a) (0x4100 | (a) << 17)
#define NIX_AF_LFX_QINTS_BASE(a) (0x4110 | (a) << 17)
#define NIX_AF_LFX_CINTS_CFG(a) (0x4120 | (a) << 17)
#define NIX_AF_LFX_CINTS_BASE(a) (0x4130 | (a) << 17)
#define NIX_AF_LFX_RX_IPSEC_CFG0(a) (0x4140 | (a) << 17)
#define NIX_AF_LFX_RX_IPSEC_CFG1(a) (0x4148 | (a) << 17)
#define NIX_AF_LFX_RX_IPSEC_DYNO_CFG(a) (0x4150 | (a) << 17)
#define NIX_AF_LFX_RX_IPSEC_DYNO_BASE(a) (0x4158 | (a) << 17)
#define NIX_AF_LFX_RX_IPSEC_SA_BASE(a) (0x4170 | (a) << 17)
#define NIX_AF_LFX_TX_STATUS(a) (0x4180 | (a) << 17)
#define NIX_AF_LFX_RX_VTAG_TYPEX(a, b) (0x4200 | (a) << 17 | (b) << 3)
#define NIX_AF_LFX_LOCKX(a, b) (0x4300 | (a) << 17 | (b) << 3)
#define NIX_AF_LFX_TX_STATX(a, b) (0x4400 | (a) << 17 | (b) << 3)
#define NIX_AF_LFX_RX_STATX(a, b) (0x4500 | (a) << 17 | (b) << 3)
#define NIX_AF_LFX_RSS_GRPX(a, b) (0x4600 | (a) << 17 | (b) << 3)
#define NIX_AF_RX_NPC_MC_RCV (0x4700)
#define NIX_AF_RX_NPC_MC_DROP (0x4710)
#define NIX_AF_RX_NPC_MIRROR_RCV (0x4720)
#define NIX_AF_RX_NPC_MIRROR_DROP (0x4730)
#define NIX_AF_RX_ACTIVE_CYCLES_PCX(a) (0x4800 | (a) << 16)
#define NIX_PRIV_AF_INT_CFG (0x8000000)
#define NIX_PRIV_LFX_CFG (0x8000010)
#define NIX_PRIV_LFX_INT_CFG (0x8000020)
#define NIX_AF_RVU_LF_CFG_DEBUG (0x8000030)
/* SSO */
#define SSO_AF_CONST (0x1000)
#define SSO_AF_CONST1 (0x1008)
#define SSO_AF_BLK_RST (0x10f8)
#define SSO_AF_LF_HWGRP_RST (0x10e0)
#define SSO_AF_RVU_LF_CFG_DEBUG (0x3800)
#define SSO_PRIV_LFX_HWGRP_CFG (0x10000)
#define SSO_PRIV_LFX_HWGRP_INT_CFG (0x20000)
/* SSOW */
#define SSOW_AF_RVU_LF_HWS_CFG_DEBUG (0x0010)
#define SSOW_AF_LF_HWS_RST (0x0030)
#define SSOW_PRIV_LFX_HWS_CFG (0x1000)
#define SSOW_PRIV_LFX_HWS_INT_CFG (0x2000)
/* TIM */
#define TIM_AF_CONST (0x90)
#define TIM_PRIV_LFX_CFG (0x20000)
#define TIM_PRIV_LFX_INT_CFG (0x24000)
#define TIM_AF_RVU_LF_CFG_DEBUG (0x30000)
#define TIM_AF_BLK_RST (0x10)
#define TIM_AF_LF_RST (0x20)
/* CPT */
#define CPT_AF_CONSTANTS0 (0x0000)
#define CPT_PRIV_LFX_CFG (0x41000)
#define CPT_PRIV_LFX_INT_CFG (0x43000)
#define CPT_AF_RVU_LF_CFG_DEBUG (0x45000)
#define CPT_AF_LF_RST (0x44000)
#define CPT_AF_BLK_RST (0x46000)
#define NDC_AF_BLK_RST (0x002F0)
#define NPC_AF_BLK_RST (0x00040)
#endif /* RVU_REG_H */
/* SPDX-License-Identifier: GPL-2.0
* Marvell OcteonTx2 RVU Admin Function driver
*
* Copyright (C) 2018 Marvell International Ltd.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef RVU_STRUCT_H
#define RVU_STRUCT_H
/* RVU Block Address Enumeration */
enum rvu_block_addr_e {
BLKADDR_RVUM = 0x0ULL,
BLKADDR_LMT = 0x1ULL,
BLKADDR_MSIX = 0x2ULL,
BLKADDR_NPA = 0x3ULL,
BLKADDR_NIX0 = 0x4ULL,
BLKADDR_NIX1 = 0x5ULL,
BLKADDR_NPC = 0x6ULL,
BLKADDR_SSO = 0x7ULL,
BLKADDR_SSOW = 0x8ULL,
BLKADDR_TIM = 0x9ULL,
BLKADDR_CPT0 = 0xaULL,
BLKADDR_CPT1 = 0xbULL,
BLKADDR_NDC0 = 0xcULL,
BLKADDR_NDC1 = 0xdULL,
BLKADDR_NDC2 = 0xeULL,
BLK_COUNT = 0xfULL,
};
/* RVU Block Type Enumeration */
enum rvu_block_type_e {
BLKTYPE_RVUM = 0x0,
BLKTYPE_MSIX = 0x1,
BLKTYPE_LMT = 0x2,
BLKTYPE_NIX = 0x3,
BLKTYPE_NPA = 0x4,
BLKTYPE_NPC = 0x5,
BLKTYPE_SSO = 0x6,
BLKTYPE_SSOW = 0x7,
BLKTYPE_TIM = 0x8,
BLKTYPE_CPT = 0x9,
BLKTYPE_NDC = 0xa,
BLKTYPE_MAX = 0xa,
};
/* RVU Admin function Interrupt Vector Enumeration */
enum rvu_af_int_vec_e {
RVU_AF_INT_VEC_POISON = 0x0,
RVU_AF_INT_VEC_PFFLR = 0x1,
RVU_AF_INT_VEC_PFME = 0x2,
RVU_AF_INT_VEC_GEN = 0x3,
RVU_AF_INT_VEC_MBOX = 0x4,
RVU_AF_INT_VEC_CNT = 0x5,
};
/**
* RVU PF Interrupt Vector Enumeration
*/
enum rvu_pf_int_vec_e {
RVU_PF_INT_VEC_VFFLR0 = 0x0,
RVU_PF_INT_VEC_VFFLR1 = 0x1,
RVU_PF_INT_VEC_VFME0 = 0x2,
RVU_PF_INT_VEC_VFME1 = 0x3,
RVU_PF_INT_VEC_VFPF_MBOX0 = 0x4,
RVU_PF_INT_VEC_VFPF_MBOX1 = 0x5,
RVU_PF_INT_VEC_AFPF_MBOX = 0x6,
RVU_PF_INT_VEC_CNT = 0x7,
};
#endif /* RVU_STRUCT_H */