Commit d8bb3824 authored by David S. Miller

Merge branch 'pds_core'

Shannon Nelson says:

====================
pds_core driver

Summary:
--------
This patchset implements a new driver for use with the AMD/Pensando
Distributed Services Card (DSC), intended to provide core configuration
services through the auxiliary_bus and through a couple of EXPORTed
functions for use initially in VFIO and vDPA feature-specific drivers.

To keep this patchset to a manageable size, the pds_vdpa and pds_vfio
drivers have been split out into their own patchsets to be reviewed
separately.

Detail:
-------
AMD/Pensando is making available a new set of devices for supporting vDPA,
VFIO, and potentially other features in the Distributed Services Card
(DSC).  These features are implemented through a PF that serves as a Core
device for controlling and configuring its VF devices.  These VF devices
have separate drivers that use the auxiliary_bus to work through the Core
device as the control path.

Currently, the DSC supports standard Ethernet operations using the
ionic driver.  This is not replaced by the Core-based devices - these
new devices are in addition to the existing Ethernet device.  Typical DSC
configurations will include both PDS devices and Ionic Eth devices.
However, there is a potential future path for Ethernet services to come
through this device as well.

The Core device is a new PCI PF/VF device managed by a new driver
'pds_core'.  The PF device has access to an admin queue for configuring
the services used by the VFs, and sets up auxiliary_bus devices for each
vDPA VF for communicating with the drivers for the vDPA devices.  The VFs
may be for VFIO or vDPA, and other services in the future; these VF types
are selected as part of the DSC internal FW configurations, which is out
of the scope of this patchset.

When the vDPA support set is enabled in the core PF through its devlink
param, auxiliary_bus devices are created for each VF that supports the
feature.  The vDPA driver then connects to and uses this auxiliary_device
to do control path configuration through the PF device.  This can then be
used with the vdpa kernel module to provide devices for virtio_vdpa kernel
module for host interfaces, or vhost_vdpa kernel module for interfaces
exported into your favorite VM.

A cheap ASCII diagram of a vDPA instance looks something like this:

                                ,----------.
                                |   vdpa   |
                                '----------'
                                  |     ||
                                 ctl   data
                                  |     ||
                          .----------.  ||
                          | pds_vdpa |  ||
                          '----------'  ||
                               |        ||
                       pds_core.vDPA.1  ||
                               |        ||
                    .---------------.   ||
                    |   pds_core    |   ||
                    '---------------'   ||
                        ||         ||   ||
                      09:00.0      09:00.1
        == PCI ============================================
                        ||            ||
                   .----------.   .----------.
            ,------|    PF    |---|    VF    |-------,
            |      '----------'   '----------'       |
            |                  DSC                   |
            |                                        |
            ------------------------------------------

Changes:
  v11:
 - change strncpy to strscpy
Reported-by: kernel test robot <lkp@intel.com>
     Link: https://lore.kernel.org/oe-kbuild-all/202304181137.WaZTYyAa-lkp@intel.com/

  v10:
Link: https://lore.kernel.org/netdev/20230418003228.28234-1-shannon.nelson@amd.com/
 - remove CONFIG_DEBUG_FS guard static inline stuff
 - remove unnecessary 0 and null initializations
 - verify in driver load that PDS_CORE_DRV_NAME matches KBUILD_MODNAME
 - remove debugfs irqs_show(), redundant with /proc
 - return -ENOMEM if intr_info = kcalloc() fails
 - move the status code enum into pds_core_if.h as part of API definition
 - fix up one place in pdsc_devcmd_wait() where we were using the status codes instead of the errno
 - remove redundant calls to flush_workqueue()
 - grab config_lock before testing state bits in pdsc_fw_reporter_diagnose()
 - change pdsc_color_match() to return bool
 - remove useless VIF setup loop and just setup vDPA services for now
 - remove pf pointer from struct padev and have clients use pci_physfn()
 - drop use of "vf" in auxdev.c function names, make more generic
 - remove last of client ops struct and simply export the functions
 - drop drivers@pensando.io from MAINTAINERS and add new include dir
 - include dynamic_debug.h in adminq.c to protect dynamic_hex_dump()
 - fixed fw_slot type from u8 to int for handling error returns
 - fixed comment spelling
 - changed void arg in pdsc_adminq_post() to struct pdsc *

  v9:
Link: https://lore.kernel.org/netdev/20230406234143.11318-1-shannon.nelson@amd.com/
 - change pdsc field name id to uid to clarify the unique id used for aux device
 - remove unnecessary pf->state and other checks in aux device creation
 - hardcode fw slotnames for devlink info, don't use strings from FW
 - handle errors from PDS_CORE_CMD_INIT devcmd call
 - tighten up health thread use of config_lock
 - remove pdsc_queue_health_check() layer over queuing health check
 - start pds_core.rst file in first patch, add to it incrementally
 - give more user interaction info in commit messages
 - removed a few more extraneous includes

  v8:
Link: https://lore.kernel.org/netdev/20230330234628.14627-1-shannon.nelson@amd.com/
 - fixed deadlock problem, use devl_health_reporter_destroy() when devlink is locked
 - don't clear client_id until after auxiliary_device_uninit()

  v7:
Link: https://lore.kernel.org/netdev/20230330192313.62018-1-shannon.nelson@amd.com/
 - use explicit devlink locking and devl_* APIs
 - move some of devlink setup logic into probe and remove
 - use debugfs_create_u{type}() for state and queue head and tail
 - add include for linux/vmalloc.h
Reported-by: kernel test robot <lkp@intel.com>
     Link: https://lore.kernel.org/oe-kbuild-all/202303260420.Tgq0qobF-lkp@intel.com/

  v6:
Link: https://lore.kernel.org/netdev/20230324190243.27722-1-shannon.nelson@amd.com/
 - removed version.h include noticed by kernel test robot's version check
Reported-by: kernel test robot <lkp@intel.com>
     Link: https://lore.kernel.org/oe-kbuild-all/202303230742.pX3ply0t-lkp@intel.com/
 - fixed up the more egregious checkpatch line length complaints
 - make sure pdsc_auxbus_dev_register() checks padev pointer errcode

  v5:
Link: https://lore.kernel.org/netdev/20230322185626.38758-1-shannon.nelson@amd.com/
 - added devlink health reporter for FW issues
 - removed asic_type, asic_rev, serial_num, fw_version from debugfs as
   they are available through other means
 - trimmed OS info in pdsc_identify(); we don't need to send that much info to the FW
 - removed reg/unreg from auxbus client API, they are now in the core when VF
   is started
 - removed need for pdsc definition in client by simplifying the padev to only carry
   struct pci_dev pointers rather than full struct pdsc to the pf and vf
 - removed the unused pdsc argument in pdsc_notify()
 - moved include/linux/pds/pds_core.h to driver/../pds_core/core.h
 - restored a few pds_core_if.h interface values and structs that are shared
   with FW source
 - moved final config_lock unlock to before tear down of timer and workqueue
   to be sure there are no deadlocks while waiting for any stragglers
 - changed use of PAGE_SIZE to local PDS_PAGE_SIZE to keep with FW layout needs
   without regard to kernel PAGE_SIZE configuration
 - removed the redundant *adminqcq argument from pdsc_adminq_post()

  v4:
Link: https://lore.kernel.org/netdev/20230308051310.12544-1-shannon.nelson@amd.com/
 - reworked to attach to both Core PF and vDPA VF PCI devices
 - now creates auxiliary_device as part of each VF PCI probe, removes them on PCI remove
 - auxiliary devices now use simple unique id rather than PCI address for identifier
 - replaced home-grown event publishing with kernel-based notifier service
 - dropped live_migration parameter, not needed when not creating aux device for it
 - replaced devm_* functions with traditional interfaces
 - added MAINTAINERS entry
 - removed lingering traces of set/get_vf attribute adminq commands
 - trimmed some include lists
 - cleaned a kernel test robot complaint about a stray unused variable
        Link: https://lore.kernel.org/oe-kbuild-all/202302181049.yeUQMeWY-lkp@intel.com/

  v3:
Link: https://lore.kernel.org/netdev/20230217225558.19837-1-shannon.nelson@amd.com/
 - changed names from "pensando" to "amd" and updated copyright strings
 - dropped the DEVLINK_PARAM_GENERIC_ID_FW_BANK for future development
 - changed the auxiliary device creation to be triggered by the
   PCI bus event BOUND_DRIVER, and torn down at UNBIND_DRIVER in order
   to properly handle users using the sysfs bind/unbind functions
 - dropped some noisy log messages
 - rebased to current net-next

  RFC to v2:
Link: https://lore.kernel.org/netdev/20221207004443.33779-1-shannon.nelson@amd.com/
 - added separate devlink param patches for DEVLINK_PARAM_GENERIC_ID_ENABLE_MIGRATION
   and DEVLINK_PARAM_GENERIC_ID_FW_BANK, and dropped the driver specific implementations
 - updated descriptions for the new devlink parameters
 - dropped netdev support
 - dropped vDPA patches, will followup later
 - separated fw update and fw bank select into their own patches

  RFC:
Link: https://lore.kernel.org/netdev/20221118225656.48309-1-snelson@pensando.io/
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
parents 25c800b2 ddbcb220
.. SPDX-License-Identifier: GPL-2.0+
========================================================
Linux Driver for the AMD/Pensando(R) DSC adapter family
========================================================
Copyright(c) 2023 Advanced Micro Devices, Inc
Identifying the Adapter
=======================
To find if one or more AMD/Pensando PCI Core devices are installed on the
host, check for the PCI devices::
# lspci -d 1dd8:100c
b5:00.0 Processing accelerators: Pensando Systems Device 100c
b6:00.0 Processing accelerators: Pensando Systems Device 100c
If such devices are listed as above, then the pds_core.ko driver should find
and configure them for use. There should be log entries in the kernel
messages such as these::
$ dmesg | grep pds_core
pds_core 0000:b5:00.0: 252.048 Gb/s available PCIe bandwidth (16.0 GT/s PCIe x16 link)
pds_core 0000:b5:00.0: FW: 1.60.0-73
pds_core 0000:b6:00.0: 252.048 Gb/s available PCIe bandwidth (16.0 GT/s PCIe x16 link)
pds_core 0000:b6:00.0: FW: 1.60.0-73
Driver and firmware version information can be gathered with devlink::
$ devlink dev info pci/0000:b5:00.0
pci/0000:b5:00.0:
driver pds_core
serial_number FLM18420073
versions:
fixed:
asic.id 0x0
asic.rev 0x0
running:
fw 1.51.0-73
stored:
fw.goldfw 1.15.9-C-22
fw.mainfwa 1.60.0-73
fw.mainfwb 1.60.0-57
Info versions
=============
The ``pds_core`` driver reports the following versions
.. list-table:: devlink info versions implemented
:widths: 5 5 90
* - Name
- Type
- Description
* - ``fw``
- running
- Version of firmware running on the device
* - ``fw.goldfw``
- stored
- Version of firmware stored in the goldfw slot
* - ``fw.mainfwa``
- stored
- Version of firmware stored in the mainfwa slot
* - ``fw.mainfwb``
- stored
- Version of firmware stored in the mainfwb slot
* - ``asic.id``
- fixed
- The ASIC type for this device
* - ``asic.rev``
- fixed
- The revision of the ASIC for this device
Parameters
==========
The ``pds_core`` driver implements the following generic
parameters for controlling the functionality to be made available
as auxiliary_bus devices.
.. list-table:: Generic parameters implemented
:widths: 5 5 8 82
* - Name
- Mode
- Type
- Description
* - ``enable_vnet``
- runtime
- Boolean
- Enables vDPA functionality through an auxiliary_bus device
Firmware Management
===================
The ``flash`` command can update the DSC firmware. The downloaded firmware
will be saved into either firmware bank 1 or bank 2, whichever is not
currently in use, and that bank will be used for the next boot::
# devlink dev flash pci/0000:b5:00.0 \
file pensando/dsc_fw_1.63.0-22.tar
Health Reporters
================
The driver supports a devlink health reporter for FW status::
# devlink health show pci/0000:2b:00.0 reporter fw
pci/0000:2b:00.0:
reporter fw
state healthy error 0 recover 0
# devlink health diagnose pci/0000:2b:00.0 reporter fw
Status: healthy State: 1 Generation: 0 Recoveries: 0
Enabling the driver
===================
The driver is enabled via the standard kernel configuration system,
using the make command::
make oldconfig/menuconfig/etc.
The driver is located in the menu structure at:
-> Device Drivers
-> Network device support (NETDEVICES [=y])
-> Ethernet driver support
-> AMD devices
-> AMD/Pensando Ethernet PDS_CORE Support
Support
=======
For general Linux networking support, please use the netdev mailing
list, which is monitored by AMD/Pensando personnel::
netdev@vger.kernel.org
@@ -14,6 +14,7 @@ Contents:
3com/vortex
amazon/ena
altera/altera_tse
amd/pds_core
aquantia/atlantic
chelsio/cxgb
cirrus/cs89x0
@@ -1041,6 +1041,15 @@ F: drivers/gpu/drm/amd/include/vi_structs.h
F: include/uapi/linux/kfd_ioctl.h
F: include/uapi/linux/kfd_sysfs.h
AMD PDS CORE DRIVER
M: Shannon Nelson <shannon.nelson@amd.com>
M: Brett Creeley <brett.creeley@amd.com>
L: netdev@vger.kernel.org
S: Supported
F: Documentation/networking/device_drivers/ethernet/amd/pds_core.rst
F: drivers/net/ethernet/amd/pds_core/
F: include/linux/pds/
AMD SPI DRIVER
M: Sanjay R Mehta <sanju.mehta@amd.com>
S: Maintained
@@ -186,4 +186,16 @@ config AMD_XGBE_HAVE_ECC
bool
default n
config PDS_CORE
tristate "AMD/Pensando Data Systems Core Device Support"
depends on 64BIT && PCI
help
This enables the support for the AMD/Pensando Core device family of
adapters. More specific information on this driver can be
found in
<file:Documentation/networking/device_drivers/ethernet/amd/pds_core.rst>.
To compile this driver as a module, choose M here. The module
will be called pds_core.
endif # NET_VENDOR_AMD
@@ -17,3 +17,4 @@ obj-$(CONFIG_PCNET32) += pcnet32.o
obj-$(CONFIG_SUN3LANCE) += sun3lance.o
obj-$(CONFIG_SUNLANCE) += sunlance.o
obj-$(CONFIG_AMD_XGBE) += xgbe/
obj-$(CONFIG_PDS_CORE) += pds_core/
# SPDX-License-Identifier: GPL-2.0
# Copyright (c) 2023 Advanced Micro Devices, Inc.
obj-$(CONFIG_PDS_CORE) := pds_core.o
pds_core-y := main.o \
devlink.o \
auxbus.o \
dev.o \
adminq.o \
core.o \
fw.o
pds_core-$(CONFIG_DEBUG_FS) += debugfs.o
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2023 Advanced Micro Devices, Inc */
#include <linux/dynamic_debug.h>
#include "core.h"
struct pdsc_wait_context {
struct pdsc_qcq *qcq;
struct completion wait_completion;
};
static int pdsc_process_notifyq(struct pdsc_qcq *qcq)
{
union pds_core_notifyq_comp *comp;
struct pdsc *pdsc = qcq->pdsc;
struct pdsc_cq *cq = &qcq->cq;
struct pdsc_cq_info *cq_info;
int nq_work = 0;
u64 eid;
cq_info = &cq->info[cq->tail_idx];
comp = cq_info->comp;
eid = le64_to_cpu(comp->event.eid);
while (eid > pdsc->last_eid) {
u16 ecode = le16_to_cpu(comp->event.ecode);
switch (ecode) {
case PDS_EVENT_LINK_CHANGE:
dev_info(pdsc->dev, "NotifyQ LINK_CHANGE ecode %d eid %lld\n",
ecode, eid);
pdsc_notify(PDS_EVENT_LINK_CHANGE, comp);
break;
case PDS_EVENT_RESET:
dev_info(pdsc->dev, "NotifyQ RESET ecode %d eid %lld\n",
ecode, eid);
pdsc_notify(PDS_EVENT_RESET, comp);
break;
case PDS_EVENT_XCVR:
dev_info(pdsc->dev, "NotifyQ XCVR ecode %d eid %lld\n",
ecode, eid);
break;
default:
dev_info(pdsc->dev, "NotifyQ ecode %d eid %lld\n",
ecode, eid);
break;
}
pdsc->last_eid = eid;
cq->tail_idx = (cq->tail_idx + 1) & (cq->num_descs - 1);
cq_info = &cq->info[cq->tail_idx];
comp = cq_info->comp;
eid = le64_to_cpu(comp->event.eid);
nq_work++;
}
qcq->accum_work += nq_work;
return nq_work;
}
void pdsc_process_adminq(struct pdsc_qcq *qcq)
{
union pds_core_adminq_comp *comp;
struct pdsc_queue *q = &qcq->q;
struct pdsc *pdsc = qcq->pdsc;
struct pdsc_cq *cq = &qcq->cq;
struct pdsc_q_info *q_info;
unsigned long irqflags;
int nq_work = 0;
int aq_work = 0;
int credits;
/* Don't process AdminQ when shutting down */
if (pdsc->state & BIT_ULL(PDSC_S_STOPPING_DRIVER)) {
dev_err(pdsc->dev, "%s: called while PDSC_S_STOPPING_DRIVER\n",
__func__);
return;
}
/* Check for NotifyQ event */
nq_work = pdsc_process_notifyq(&pdsc->notifyqcq);
/* Check for empty queue, which can happen if the interrupt was
* for a NotifyQ event and there are no new AdminQ completions.
*/
if (q->tail_idx == q->head_idx)
goto credits;
/* Find the first completion to clean,
* run the callback in the related q_info,
* and continue while we still match done color
*/
spin_lock_irqsave(&pdsc->adminq_lock, irqflags);
comp = cq->info[cq->tail_idx].comp;
while (pdsc_color_match(comp->color, cq->done_color)) {
q_info = &q->info[q->tail_idx];
q->tail_idx = (q->tail_idx + 1) & (q->num_descs - 1);
/* Copy out the completion data */
memcpy(q_info->dest, comp, sizeof(*comp));
complete_all(&q_info->wc->wait_completion);
if (cq->tail_idx == cq->num_descs - 1)
cq->done_color = !cq->done_color;
cq->tail_idx = (cq->tail_idx + 1) & (cq->num_descs - 1);
comp = cq->info[cq->tail_idx].comp;
aq_work++;
}
spin_unlock_irqrestore(&pdsc->adminq_lock, irqflags);
qcq->accum_work += aq_work;
credits:
/* Return the interrupt credits, one for each completion */
credits = nq_work + aq_work;
if (credits)
pds_core_intr_credits(&pdsc->intr_ctrl[qcq->intx],
credits,
PDS_CORE_INTR_CRED_REARM);
}
void pdsc_work_thread(struct work_struct *work)
{
struct pdsc_qcq *qcq = container_of(work, struct pdsc_qcq, work);
pdsc_process_adminq(qcq);
}
irqreturn_t pdsc_adminq_isr(int irq, void *data)
{
struct pdsc_qcq *qcq = data;
struct pdsc *pdsc = qcq->pdsc;
/* Don't process AdminQ when shutting down */
if (pdsc->state & BIT_ULL(PDSC_S_STOPPING_DRIVER)) {
dev_err(pdsc->dev, "%s: called while PDSC_S_STOPPING_DRIVER\n",
__func__);
return IRQ_HANDLED;
}
queue_work(pdsc->wq, &qcq->work);
pds_core_intr_mask(&pdsc->intr_ctrl[irq], PDS_CORE_INTR_MASK_CLEAR);
return IRQ_HANDLED;
}
static int __pdsc_adminq_post(struct pdsc *pdsc,
struct pdsc_qcq *qcq,
union pds_core_adminq_cmd *cmd,
union pds_core_adminq_comp *comp,
struct pdsc_wait_context *wc)
{
struct pdsc_queue *q = &qcq->q;
struct pdsc_q_info *q_info;
unsigned long irqflags;
unsigned int avail;
int index;
int ret;
spin_lock_irqsave(&pdsc->adminq_lock, irqflags);
/* Check for space in the queue */
avail = q->tail_idx;
if (q->head_idx >= avail)
avail += q->num_descs - q->head_idx - 1;
else
avail -= q->head_idx + 1;
if (!avail) {
ret = -ENOSPC;
goto err_out_unlock;
}
/* Check that the FW is running */
if (!pdsc_is_fw_running(pdsc)) {
u8 fw_status = ioread8(&pdsc->info_regs->fw_status);
dev_info(pdsc->dev, "%s: post failed - fw not running %#02x:\n",
__func__, fw_status);
ret = -ENXIO;
goto err_out_unlock;
}
/* Post the request */
index = q->head_idx;
q_info = &q->info[index];
q_info->wc = wc;
q_info->dest = comp;
memcpy(q_info->desc, cmd, sizeof(*cmd));
dev_dbg(pdsc->dev, "head_idx %d tail_idx %d\n",
q->head_idx, q->tail_idx);
dev_dbg(pdsc->dev, "post admin queue command:\n");
dynamic_hex_dump("cmd ", DUMP_PREFIX_OFFSET, 16, 1,
cmd, sizeof(*cmd), true);
q->head_idx = (q->head_idx + 1) & (q->num_descs - 1);
pds_core_dbell_ring(pdsc->kern_dbpage,
q->hw_type, q->dbval | q->head_idx);
ret = index;
err_out_unlock:
spin_unlock_irqrestore(&pdsc->adminq_lock, irqflags);
return ret;
}
int pdsc_adminq_post(struct pdsc *pdsc,
union pds_core_adminq_cmd *cmd,
union pds_core_adminq_comp *comp,
bool fast_poll)
{
struct pdsc_wait_context wc = {
.wait_completion =
COMPLETION_INITIALIZER_ONSTACK(wc.wait_completion),
};
unsigned long poll_interval = 1;
unsigned long poll_jiffies;
unsigned long time_limit;
unsigned long time_start;
unsigned long time_done;
unsigned long remaining;
int err = 0;
int index;
wc.qcq = &pdsc->adminqcq;
index = __pdsc_adminq_post(pdsc, &pdsc->adminqcq, cmd, comp, &wc);
if (index < 0) {
err = index;
goto err_out;
}
time_start = jiffies;
time_limit = time_start + HZ * pdsc->devcmd_timeout;
do {
/* Timeslice the actual wait to catch IO errors etc early */
poll_jiffies = msecs_to_jiffies(poll_interval);
remaining = wait_for_completion_timeout(&wc.wait_completion,
poll_jiffies);
if (remaining)
break;
if (!pdsc_is_fw_running(pdsc)) {
u8 fw_status = ioread8(&pdsc->info_regs->fw_status);
dev_dbg(pdsc->dev, "%s: post wait failed - fw not running %#02x:\n",
__func__, fw_status);
err = -ENXIO;
break;
}
/* When fast_poll is not requested, prevent aggressive polling
* on failures due to timeouts by doing exponential back off.
*/
if (!fast_poll && poll_interval < PDSC_ADMINQ_MAX_POLL_INTERVAL)
poll_interval <<= 1;
} while (time_before(jiffies, time_limit));
time_done = jiffies;
dev_dbg(pdsc->dev, "%s: elapsed %d msecs\n",
__func__, jiffies_to_msecs(time_done - time_start));
/* Check the results */
if (time_after_eq(time_done, time_limit))
err = -ETIMEDOUT;
dev_dbg(pdsc->dev, "read admin queue completion idx %d:\n", index);
dynamic_hex_dump("comp ", DUMP_PREFIX_OFFSET, 16, 1,
comp, sizeof(*comp), true);
if (remaining && comp->status)
err = pdsc_err_to_errno(comp->status);
err_out:
if (err) {
dev_dbg(pdsc->dev, "%s: opcode %d status %d err %pe\n",
__func__, cmd->opcode, comp->status, ERR_PTR(err));
if (err == -ENXIO || err == -ETIMEDOUT)
queue_work(pdsc->wq, &pdsc->health_work);
}
return err;
}
EXPORT_SYMBOL_GPL(pdsc_adminq_post);
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2023 Advanced Micro Devices, Inc */
#include <linux/pci.h>
#include "core.h"
#include <linux/pds/pds_auxbus.h>
/**
* pds_client_register - Link the client to the firmware
* @pf_pdev: ptr to the PF driver struct
* @devname: name that includes service info, e.g. pds_core.vDPA
*
* Return: positive client ID on success, or
* negative for error
*/
int pds_client_register(struct pci_dev *pf_pdev, char *devname)
{
union pds_core_adminq_comp comp = {};
union pds_core_adminq_cmd cmd = {};
struct pdsc *pf;
int err;
u16 ci;
pf = pci_get_drvdata(pf_pdev);
if (pf->state)
return -ENXIO;
cmd.client_reg.opcode = PDS_AQ_CMD_CLIENT_REG;
strscpy(cmd.client_reg.devname, devname,
sizeof(cmd.client_reg.devname));
err = pdsc_adminq_post(pf, &cmd, &comp, false);
if (err) {
dev_info(pf->dev, "register dev_name %s with DSC failed, status %d: %pe\n",
devname, comp.status, ERR_PTR(err));
return err;
}
ci = le16_to_cpu(comp.client_reg.client_id);
if (!ci) {
dev_err(pf->dev, "%s: device returned null client_id\n",
__func__);
return -EIO;
}
dev_dbg(pf->dev, "%s: device returned client_id %d for %s\n",
__func__, ci, devname);
return ci;
}
EXPORT_SYMBOL_GPL(pds_client_register);
/**
* pds_client_unregister - Unlink the client from the firmware
* @pf_pdev: ptr to the PF driver struct
* @client_id: id returned from pds_client_register()
*
* Return: 0 on success, or
* negative for error
*/
int pds_client_unregister(struct pci_dev *pf_pdev, u16 client_id)
{
union pds_core_adminq_comp comp = {};
union pds_core_adminq_cmd cmd = {};
struct pdsc *pf;
int err;
pf = pci_get_drvdata(pf_pdev);
if (pf->state)
return -ENXIO;
cmd.client_unreg.opcode = PDS_AQ_CMD_CLIENT_UNREG;
cmd.client_unreg.client_id = cpu_to_le16(client_id);
err = pdsc_adminq_post(pf, &cmd, &comp, false);
if (err)
dev_info(pf->dev, "unregister client_id %d failed, status %d: %pe\n",
client_id, comp.status, ERR_PTR(err));
return err;
}
EXPORT_SYMBOL_GPL(pds_client_unregister);
/**
* pds_client_adminq_cmd - Process an adminq request for the client
* @padev: ptr to the client device
* @req: ptr to buffer with request
* @req_len: length of actual struct used for request
* @resp: ptr to buffer where answer is to be copied
* @flags: optional flags from pds_core_adminq_flags
*
* Return: 0 on success, or
* negative for error
*
* Client sends pointers to request and response buffers
* Core copies request data into pds_core_client_request_cmd
* Core sets other fields as needed
* Core posts to AdminQ
* Core copies completion data into response buffer
*/
int pds_client_adminq_cmd(struct pds_auxiliary_dev *padev,
union pds_core_adminq_cmd *req,
size_t req_len,
union pds_core_adminq_comp *resp,
u64 flags)
{
union pds_core_adminq_cmd cmd = {};
struct pci_dev *pf_pdev;
struct pdsc *pf;
size_t cp_len;
int err;
pf_pdev = pci_physfn(padev->vf_pdev);
pf = pci_get_drvdata(pf_pdev);
dev_dbg(pf->dev, "%s: %s opcode %d\n",
__func__, dev_name(&padev->aux_dev.dev), req->opcode);
if (pf->state)
return -ENXIO;
/* Wrap the client's request */
cmd.client_request.opcode = PDS_AQ_CMD_CLIENT_CMD;
cmd.client_request.client_id = cpu_to_le16(padev->client_id);
cp_len = min_t(size_t, req_len, sizeof(cmd.client_request.client_cmd));
memcpy(cmd.client_request.client_cmd, req, cp_len);
err = pdsc_adminq_post(pf, &cmd, resp,
!!(flags & PDS_AQ_FLAG_FASTPOLL));
if (err && err != -EAGAIN)
dev_info(pf->dev, "client admin cmd failed: %pe\n",
ERR_PTR(err));
return err;
}
EXPORT_SYMBOL_GPL(pds_client_adminq_cmd);
static void pdsc_auxbus_dev_release(struct device *dev)
{
struct pds_auxiliary_dev *padev =
container_of(dev, struct pds_auxiliary_dev, aux_dev.dev);
kfree(padev);
}
static struct pds_auxiliary_dev *pdsc_auxbus_dev_register(struct pdsc *cf,
struct pdsc *pf,
u16 client_id,
char *name)
{
struct auxiliary_device *aux_dev;
struct pds_auxiliary_dev *padev;
int err;
padev = kzalloc(sizeof(*padev), GFP_KERNEL);
if (!padev)
return ERR_PTR(-ENOMEM);
padev->vf_pdev = cf->pdev;
padev->client_id = client_id;
aux_dev = &padev->aux_dev;
aux_dev->name = name;
aux_dev->id = cf->uid;
aux_dev->dev.parent = cf->dev;
aux_dev->dev.release = pdsc_auxbus_dev_release;
err = auxiliary_device_init(aux_dev);
if (err < 0) {
dev_warn(cf->dev, "auxiliary_device_init of %s failed: %pe\n",
name, ERR_PTR(err));
goto err_out;
}
err = auxiliary_device_add(aux_dev);
if (err) {
dev_warn(cf->dev, "auxiliary_device_add of %s failed: %pe\n",
name, ERR_PTR(err));
goto err_out_uninit;
}
return padev;
err_out_uninit:
auxiliary_device_uninit(aux_dev);
err_out:
kfree(padev);
return ERR_PTR(err);
}
int pdsc_auxbus_dev_del(struct pdsc *cf, struct pdsc *pf)
{
struct pds_auxiliary_dev *padev;
int err = 0;
mutex_lock(&pf->config_lock);
padev = pf->vfs[cf->vf_id].padev;
if (padev) {
pds_client_unregister(pf->pdev, padev->client_id);
auxiliary_device_delete(&padev->aux_dev);
auxiliary_device_uninit(&padev->aux_dev);
padev->client_id = 0;
}
pf->vfs[cf->vf_id].padev = NULL;
mutex_unlock(&pf->config_lock);
return err;
}
int pdsc_auxbus_dev_add(struct pdsc *cf, struct pdsc *pf)
{
struct pds_auxiliary_dev *padev;
enum pds_core_vif_types vt;
char devname[PDS_DEVNAME_LEN];
u16 vt_support;
int client_id;
int err = 0;
mutex_lock(&pf->config_lock);
/* We only support vDPA so far, so it is the only VIF type
* we check for being available in the Core device and
* enabled in the devlink param. In the future this might
* become a loop over several VIF types.
*/
/* Verify that the type is supported and enabled. It is not
* an error if there is no auxbus device support for this
* VF, it just means something else needs to happen with it.
*/
vt = PDS_DEV_TYPE_VDPA;
vt_support = !!le16_to_cpu(pf->dev_ident.vif_types[vt]);
if (!(vt_support &&
pf->viftype_status[vt].supported &&
pf->viftype_status[vt].enabled))
goto out_unlock;
/* Need to register with FW and get the client_id before
* creating the aux device so that the aux client can run
* adminq commands as part of its probe
*/
snprintf(devname, sizeof(devname), "%s.%s.%d",
PDS_CORE_DRV_NAME, pf->viftype_status[vt].name, cf->uid);
client_id = pds_client_register(pf->pdev, devname);
if (client_id < 0) {
err = client_id;
goto out_unlock;
}
padev = pdsc_auxbus_dev_register(cf, pf, client_id,
pf->viftype_status[vt].name);
if (IS_ERR(padev)) {
pds_client_unregister(pf->pdev, client_id);
err = PTR_ERR(padev);
goto out_unlock;
}
pf->vfs[cf->vf_id].padev = padev;
out_unlock:
mutex_unlock(&pf->config_lock);
return err;
}
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2023 Advanced Micro Devices, Inc */
#include <linux/pci.h>
#include <linux/vmalloc.h>
#include "core.h"
static BLOCKING_NOTIFIER_HEAD(pds_notify_chain);
int pdsc_register_notify(struct notifier_block *nb)
{
return blocking_notifier_chain_register(&pds_notify_chain, nb);
}
EXPORT_SYMBOL_GPL(pdsc_register_notify);
void pdsc_unregister_notify(struct notifier_block *nb)
{
blocking_notifier_chain_unregister(&pds_notify_chain, nb);
}
EXPORT_SYMBOL_GPL(pdsc_unregister_notify);
void pdsc_notify(unsigned long event, void *data)
{
blocking_notifier_call_chain(&pds_notify_chain, event, data);
}
void pdsc_intr_free(struct pdsc *pdsc, int index)
{
struct pdsc_intr_info *intr_info;
if (index >= pdsc->nintrs || index < 0) {
WARN(true, "bad intr index %d\n", index);
return;
}
intr_info = &pdsc->intr_info[index];
if (!intr_info->vector)
return;
dev_dbg(pdsc->dev, "%s: idx %d vec %d name %s\n",
__func__, index, intr_info->vector, intr_info->name);
pds_core_intr_mask(&pdsc->intr_ctrl[index], PDS_CORE_INTR_MASK_SET);
pds_core_intr_clean(&pdsc->intr_ctrl[index]);
free_irq(intr_info->vector, intr_info->data);
memset(intr_info, 0, sizeof(*intr_info));
}
int pdsc_intr_alloc(struct pdsc *pdsc, char *name,
irq_handler_t handler, void *data)
{
struct pdsc_intr_info *intr_info;
unsigned int index;
int err;
/* Find the first available interrupt */
for (index = 0; index < pdsc->nintrs; index++)
if (!pdsc->intr_info[index].vector)
break;
if (index >= pdsc->nintrs) {
dev_warn(pdsc->dev, "%s: no intr, index=%d nintrs=%d\n",
__func__, index, pdsc->nintrs);
return -ENOSPC;
}
pds_core_intr_clean_flags(&pdsc->intr_ctrl[index],
PDS_CORE_INTR_CRED_RESET_COALESCE);
intr_info = &pdsc->intr_info[index];
intr_info->index = index;
intr_info->data = data;
strscpy(intr_info->name, name, sizeof(intr_info->name));
/* Get the OS vector number for the interrupt */
err = pci_irq_vector(pdsc->pdev, index);
if (err < 0) {
dev_err(pdsc->dev, "failed to get intr vector index %d: %pe\n",
index, ERR_PTR(err));
goto err_out_free_intr;
}
intr_info->vector = err;
/* Init the device's intr mask */
pds_core_intr_clean(&pdsc->intr_ctrl[index]);
pds_core_intr_mask_assert(&pdsc->intr_ctrl[index], 1);
pds_core_intr_mask(&pdsc->intr_ctrl[index], PDS_CORE_INTR_MASK_SET);
/* Register the isr with a name */
err = request_irq(intr_info->vector, handler, 0, intr_info->name, data);
if (err) {
dev_err(pdsc->dev, "failed to get intr irq vector %d: %pe\n",
intr_info->vector, ERR_PTR(err));
goto err_out_free_intr;
}
return index;
err_out_free_intr:
pdsc_intr_free(pdsc, index);
return err;
}
static void pdsc_qcq_intr_free(struct pdsc *pdsc, struct pdsc_qcq *qcq)
{
if (!(qcq->flags & PDS_CORE_QCQ_F_INTR) ||
qcq->intx == PDS_CORE_INTR_INDEX_NOT_ASSIGNED)
return;
pdsc_intr_free(pdsc, qcq->intx);
qcq->intx = PDS_CORE_INTR_INDEX_NOT_ASSIGNED;
}
static int pdsc_qcq_intr_alloc(struct pdsc *pdsc, struct pdsc_qcq *qcq)
{
char name[PDSC_INTR_NAME_MAX_SZ];
int index;
if (!(qcq->flags & PDS_CORE_QCQ_F_INTR)) {
qcq->intx = PDS_CORE_INTR_INDEX_NOT_ASSIGNED;
return 0;
}
snprintf(name, sizeof(name), "%s-%d-%s",
PDS_CORE_DRV_NAME, pdsc->pdev->bus->number, qcq->q.name);
index = pdsc_intr_alloc(pdsc, name, pdsc_adminq_isr, qcq);
if (index < 0)
return index;
qcq->intx = index;
return 0;
}
void pdsc_qcq_free(struct pdsc *pdsc, struct pdsc_qcq *qcq)
{
struct device *dev = pdsc->dev;
if (!(qcq && qcq->pdsc))
return;
pdsc_debugfs_del_qcq(qcq);
pdsc_qcq_intr_free(pdsc, qcq);
if (qcq->q_base)
dma_free_coherent(dev, qcq->q_size,
qcq->q_base, qcq->q_base_pa);
if (qcq->cq_base)
dma_free_coherent(dev, qcq->cq_size,
qcq->cq_base, qcq->cq_base_pa);
if (qcq->cq.info)
vfree(qcq->cq.info);
if (qcq->q.info)
vfree(qcq->q.info);
memset(qcq, 0, sizeof(*qcq));
}
static void pdsc_q_map(struct pdsc_queue *q, void *base, dma_addr_t base_pa)
{
struct pdsc_q_info *cur;
unsigned int i;
q->base = base;
q->base_pa = base_pa;
for (i = 0, cur = q->info; i < q->num_descs; i++, cur++)
cur->desc = base + (i * q->desc_size);
}
static void pdsc_cq_map(struct pdsc_cq *cq, void *base, dma_addr_t base_pa)
{
struct pdsc_cq_info *cur;
unsigned int i;
cq->base = base;
cq->base_pa = base_pa;
for (i = 0, cur = cq->info; i < cq->num_descs; i++, cur++)
cur->comp = base + (i * cq->desc_size);
}
int pdsc_qcq_alloc(struct pdsc *pdsc, unsigned int type, unsigned int index,
const char *name, unsigned int flags, unsigned int num_descs,
unsigned int desc_size, unsigned int cq_desc_size,
unsigned int pid, struct pdsc_qcq *qcq)
{
struct device *dev = pdsc->dev;
void *q_base, *cq_base;
dma_addr_t cq_base_pa;
dma_addr_t q_base_pa;
int err;
qcq->q.info = vzalloc(num_descs * sizeof(*qcq->q.info));
if (!qcq->q.info) {
err = -ENOMEM;
goto err_out;
}
qcq->pdsc = pdsc;
qcq->flags = flags;
INIT_WORK(&qcq->work, pdsc_work_thread);
qcq->q.type = type;
qcq->q.index = index;
qcq->q.num_descs = num_descs;
qcq->q.desc_size = desc_size;
qcq->q.tail_idx = 0;
qcq->q.head_idx = 0;
qcq->q.pid = pid;
snprintf(qcq->q.name, sizeof(qcq->q.name), "%s%u", name, index);
err = pdsc_qcq_intr_alloc(pdsc, qcq);
if (err)
goto err_out_free_q_info;
qcq->cq.info = vzalloc(num_descs * sizeof(*qcq->cq.info));
if (!qcq->cq.info) {
err = -ENOMEM;
goto err_out_free_irq;
}
if (qcq->intx != PDS_CORE_INTR_INDEX_NOT_ASSIGNED)
qcq->cq.bound_intr = &pdsc->intr_info[qcq->intx];
qcq->cq.num_descs = num_descs;
qcq->cq.desc_size = cq_desc_size;
qcq->cq.tail_idx = 0;
qcq->cq.done_color = 1;
if (flags & PDS_CORE_QCQ_F_NOTIFYQ) {
/* q & cq need to be contiguous in case of notifyq */
qcq->q_size = PDS_PAGE_SIZE +
ALIGN(num_descs * desc_size, PDS_PAGE_SIZE) +
ALIGN(num_descs * cq_desc_size, PDS_PAGE_SIZE);
qcq->q_base = dma_alloc_coherent(dev,
qcq->q_size + qcq->cq_size,
&qcq->q_base_pa,
GFP_KERNEL);
if (!qcq->q_base) {
err = -ENOMEM;
goto err_out_free_cq_info;
}
q_base = PTR_ALIGN(qcq->q_base, PDS_PAGE_SIZE);
q_base_pa = ALIGN(qcq->q_base_pa, PDS_PAGE_SIZE);
pdsc_q_map(&qcq->q, q_base, q_base_pa);
cq_base = PTR_ALIGN(q_base +
ALIGN(num_descs * desc_size, PDS_PAGE_SIZE),
PDS_PAGE_SIZE);
cq_base_pa = ALIGN(qcq->q_base_pa +
ALIGN(num_descs * desc_size, PDS_PAGE_SIZE),
PDS_PAGE_SIZE);
} else {
/* q DMA descriptors */
qcq->q_size = PDS_PAGE_SIZE + (num_descs * desc_size);
qcq->q_base = dma_alloc_coherent(dev, qcq->q_size,
&qcq->q_base_pa,
GFP_KERNEL);
if (!qcq->q_base) {
err = -ENOMEM;
goto err_out_free_cq_info;
}
q_base = PTR_ALIGN(qcq->q_base, PDS_PAGE_SIZE);
q_base_pa = ALIGN(qcq->q_base_pa, PDS_PAGE_SIZE);
pdsc_q_map(&qcq->q, q_base, q_base_pa);
/* cq DMA descriptors */
qcq->cq_size = PDS_PAGE_SIZE + (num_descs * cq_desc_size);
qcq->cq_base = dma_alloc_coherent(dev, qcq->cq_size,
&qcq->cq_base_pa,
GFP_KERNEL);
if (!qcq->cq_base) {
err = -ENOMEM;
goto err_out_free_q;
}
cq_base = PTR_ALIGN(qcq->cq_base, PDS_PAGE_SIZE);
cq_base_pa = ALIGN(qcq->cq_base_pa, PDS_PAGE_SIZE);
}
pdsc_cq_map(&qcq->cq, cq_base, cq_base_pa);
qcq->cq.bound_q = &qcq->q;
pdsc_debugfs_add_qcq(pdsc, qcq);
return 0;
err_out_free_q:
dma_free_coherent(dev, qcq->q_size, qcq->q_base, qcq->q_base_pa);
err_out_free_cq_info:
vfree(qcq->cq.info);
err_out_free_irq:
pdsc_qcq_intr_free(pdsc, qcq);
err_out_free_q_info:
vfree(qcq->q.info);
memset(qcq, 0, sizeof(*qcq));
err_out:
dev_err(dev, "qcq alloc of %s%d failed %d\n", name, index, err);
return err;
}
static int pdsc_core_init(struct pdsc *pdsc)
{
union pds_core_dev_comp comp = {};
union pds_core_dev_cmd cmd = {
.init.opcode = PDS_CORE_CMD_INIT,
};
struct pds_core_dev_init_data_out cido;
struct pds_core_dev_init_data_in cidi;
u32 dbid_count;
u32 dbpage_num;
size_t sz;
int err;
cidi.adminq_q_base = cpu_to_le64(pdsc->adminqcq.q_base_pa);
cidi.adminq_cq_base = cpu_to_le64(pdsc->adminqcq.cq_base_pa);
cidi.notifyq_cq_base = cpu_to_le64(pdsc->notifyqcq.cq.base_pa);
cidi.flags = cpu_to_le32(PDS_CORE_QINIT_F_IRQ | PDS_CORE_QINIT_F_ENA);
cidi.intr_index = cpu_to_le16(pdsc->adminqcq.intx);
cidi.adminq_ring_size = ilog2(pdsc->adminqcq.q.num_descs);
cidi.notifyq_ring_size = ilog2(pdsc->notifyqcq.q.num_descs);
mutex_lock(&pdsc->devcmd_lock);
sz = min_t(size_t, sizeof(cidi), sizeof(pdsc->cmd_regs->data));
memcpy_toio(&pdsc->cmd_regs->data, &cidi, sz);
err = pdsc_devcmd_locked(pdsc, &cmd, &comp, pdsc->devcmd_timeout);
if (!err) {
sz = min_t(size_t, sizeof(cido), sizeof(pdsc->cmd_regs->data));
memcpy_fromio(&cido, &pdsc->cmd_regs->data, sz);
}
mutex_unlock(&pdsc->devcmd_lock);
if (err) {
dev_err(pdsc->dev, "Device init command failed: %pe\n",
ERR_PTR(err));
return err;
}
pdsc->hw_index = le32_to_cpu(cido.core_hw_index);
dbid_count = le32_to_cpu(pdsc->dev_ident.ndbpgs_per_lif);
dbpage_num = pdsc->hw_index * dbid_count;
pdsc->kern_dbpage = pdsc_map_dbpage(pdsc, dbpage_num);
if (!pdsc->kern_dbpage) {
dev_err(pdsc->dev, "Cannot map dbpage, aborting\n");
return -ENOMEM;
}
pdsc->adminqcq.q.hw_type = cido.adminq_hw_type;
pdsc->adminqcq.q.hw_index = le32_to_cpu(cido.adminq_hw_index);
pdsc->adminqcq.q.dbval = PDS_CORE_DBELL_QID(pdsc->adminqcq.q.hw_index);
pdsc->notifyqcq.q.hw_type = cido.notifyq_hw_type;
pdsc->notifyqcq.q.hw_index = le32_to_cpu(cido.notifyq_hw_index);
pdsc->notifyqcq.q.dbval = PDS_CORE_DBELL_QID(pdsc->notifyqcq.q.hw_index);
pdsc->last_eid = 0;
return err;
}
static struct pdsc_viftype pdsc_viftype_defaults[] = {
[PDS_DEV_TYPE_VDPA] = { .name = PDS_DEV_TYPE_VDPA_STR,
.vif_id = PDS_DEV_TYPE_VDPA,
.dl_id = DEVLINK_PARAM_GENERIC_ID_ENABLE_VNET },
[PDS_DEV_TYPE_MAX] = {}
};
static int pdsc_viftypes_init(struct pdsc *pdsc)
{
enum pds_core_vif_types vt;
pdsc->viftype_status = kzalloc(sizeof(pdsc_viftype_defaults),
GFP_KERNEL);
if (!pdsc->viftype_status)
return -ENOMEM;
for (vt = 0; vt < PDS_DEV_TYPE_MAX; vt++) {
bool vt_support;
if (!pdsc_viftype_defaults[vt].name)
continue;
/* Grab the defaults */
pdsc->viftype_status[vt] = pdsc_viftype_defaults[vt];
/* See what the Core device has for support */
vt_support = !!le16_to_cpu(pdsc->dev_ident.vif_types[vt]);
dev_dbg(pdsc->dev, "VIF %s is %ssupported\n",
pdsc->viftype_status[vt].name,
vt_support ? "" : "not ");
pdsc->viftype_status[vt].supported = vt_support;
}
return 0;
}
int pdsc_setup(struct pdsc *pdsc, bool init)
{
int numdescs;
int err;
if (init)
err = pdsc_dev_init(pdsc);
else
err = pdsc_dev_reinit(pdsc);
if (err)
return err;
/* Scale the descriptor ring length based on number of CPUs and VFs */
numdescs = max_t(int, PDSC_ADMINQ_MIN_LENGTH, num_online_cpus());
numdescs += 2 * pci_sriov_get_totalvfs(pdsc->pdev);
numdescs = roundup_pow_of_two(numdescs);
err = pdsc_qcq_alloc(pdsc, PDS_CORE_QTYPE_ADMINQ, 0, "adminq",
PDS_CORE_QCQ_F_CORE | PDS_CORE_QCQ_F_INTR,
numdescs,
sizeof(union pds_core_adminq_cmd),
sizeof(union pds_core_adminq_comp),
0, &pdsc->adminqcq);
if (err)
goto err_out_teardown;
err = pdsc_qcq_alloc(pdsc, PDS_CORE_QTYPE_NOTIFYQ, 0, "notifyq",
PDS_CORE_QCQ_F_NOTIFYQ,
PDSC_NOTIFYQ_LENGTH,
sizeof(struct pds_core_notifyq_cmd),
sizeof(union pds_core_notifyq_comp),
0, &pdsc->notifyqcq);
if (err)
goto err_out_teardown;
/* NotifyQ rides on the AdminQ interrupt */
pdsc->notifyqcq.intx = pdsc->adminqcq.intx;
/* Set up the Core with the AdminQ and NotifyQ info */
err = pdsc_core_init(pdsc);
if (err)
goto err_out_teardown;
/* Set up the VIFs */
err = pdsc_viftypes_init(pdsc);
if (err)
goto err_out_teardown;
if (init)
pdsc_debugfs_add_viftype(pdsc);
clear_bit(PDSC_S_FW_DEAD, &pdsc->state);
return 0;
err_out_teardown:
pdsc_teardown(pdsc, init);
return err;
}
void pdsc_teardown(struct pdsc *pdsc, bool removing)
{
int i;
pdsc_devcmd_reset(pdsc);
pdsc_qcq_free(pdsc, &pdsc->notifyqcq);
pdsc_qcq_free(pdsc, &pdsc->adminqcq);
kfree(pdsc->viftype_status);
pdsc->viftype_status = NULL;
if (pdsc->intr_info) {
for (i = 0; i < pdsc->nintrs; i++)
pdsc_intr_free(pdsc, i);
if (removing) {
kfree(pdsc->intr_info);
pdsc->intr_info = NULL;
}
}
if (pdsc->kern_dbpage) {
iounmap(pdsc->kern_dbpage);
pdsc->kern_dbpage = NULL;
}
set_bit(PDSC_S_FW_DEAD, &pdsc->state);
}
int pdsc_start(struct pdsc *pdsc)
{
pds_core_intr_mask(&pdsc->intr_ctrl[pdsc->adminqcq.intx],
PDS_CORE_INTR_MASK_CLEAR);
return 0;
}
void pdsc_stop(struct pdsc *pdsc)
{
int i;
if (!pdsc->intr_info)
return;
/* Mask interrupts that are in use */
for (i = 0; i < pdsc->nintrs; i++)
if (pdsc->intr_info[i].vector)
pds_core_intr_mask(&pdsc->intr_ctrl[i],
PDS_CORE_INTR_MASK_SET);
}
static void pdsc_fw_down(struct pdsc *pdsc)
{
union pds_core_notifyq_comp reset_event = {
.reset.ecode = cpu_to_le16(PDS_EVENT_RESET),
.reset.state = 0,
};
if (test_and_set_bit(PDSC_S_FW_DEAD, &pdsc->state)) {
dev_err(pdsc->dev, "%s: already happening\n", __func__);
return;
}
/* Notify clients of fw_down */
devlink_health_report(pdsc->fw_reporter, "FW down reported", pdsc);
pdsc_notify(PDS_EVENT_RESET, &reset_event);
pdsc_stop(pdsc);
pdsc_teardown(pdsc, PDSC_TEARDOWN_RECOVERY);
}
static void pdsc_fw_up(struct pdsc *pdsc)
{
union pds_core_notifyq_comp reset_event = {
.reset.ecode = cpu_to_le16(PDS_EVENT_RESET),
.reset.state = 1,
};
int err;
if (!test_bit(PDSC_S_FW_DEAD, &pdsc->state)) {
dev_err(pdsc->dev, "%s: fw not dead\n", __func__);
return;
}
err = pdsc_setup(pdsc, PDSC_SETUP_RECOVERY);
if (err)
goto err_out;
err = pdsc_start(pdsc);
if (err)
goto err_out;
/* Notify clients of fw_up */
pdsc->fw_recoveries++;
devlink_health_reporter_state_update(pdsc->fw_reporter,
DEVLINK_HEALTH_REPORTER_STATE_HEALTHY);
pdsc_notify(PDS_EVENT_RESET, &reset_event);
return;
err_out:
pdsc_teardown(pdsc, PDSC_TEARDOWN_RECOVERY);
}
void pdsc_health_thread(struct work_struct *work)
{
struct pdsc *pdsc = container_of(work, struct pdsc, health_work);
unsigned long mask;
bool healthy;
mutex_lock(&pdsc->config_lock);
/* Don't do a check when in a transition state */
mask = BIT_ULL(PDSC_S_INITING_DRIVER) |
BIT_ULL(PDSC_S_STOPPING_DRIVER);
if (pdsc->state & mask)
goto out_unlock;
healthy = pdsc_is_fw_good(pdsc);
dev_dbg(pdsc->dev, "%s: health %d fw_status %#02x fw_heartbeat %d\n",
__func__, healthy, pdsc->fw_status, pdsc->last_hb);
if (test_bit(PDSC_S_FW_DEAD, &pdsc->state)) {
if (healthy)
pdsc_fw_up(pdsc);
} else {
if (!healthy)
pdsc_fw_down(pdsc);
}
pdsc->fw_generation = pdsc->fw_status & PDS_CORE_FW_STS_F_GENERATION;
out_unlock:
mutex_unlock(&pdsc->config_lock);
}
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2023 Advanced Micro Devices, Inc */
#ifndef _PDSC_H_
#define _PDSC_H_
#include <linux/debugfs.h>
#include <net/devlink.h>
#include <linux/pds/pds_common.h>
#include <linux/pds/pds_core_if.h>
#include <linux/pds/pds_adminq.h>
#include <linux/pds/pds_intr.h>
#define PDSC_DRV_DESCRIPTION "AMD/Pensando Core Driver"
#define PDSC_WATCHDOG_SECS 5
#define PDSC_QUEUE_NAME_MAX_SZ 32
#define PDSC_ADMINQ_MIN_LENGTH 16 /* must be a power of two */
#define PDSC_NOTIFYQ_LENGTH 64 /* must be a power of two */
#define PDSC_TEARDOWN_RECOVERY false
#define PDSC_TEARDOWN_REMOVING true
#define PDSC_SETUP_RECOVERY false
#define PDSC_SETUP_INIT true
struct pdsc_dev_bar {
void __iomem *vaddr;
phys_addr_t bus_addr;
unsigned long len;
int res_index;
};
struct pdsc;
struct pdsc_vf {
struct pds_auxiliary_dev *padev;
struct pdsc *vf;
u16 index;
__le16 vif_types[PDS_DEV_TYPE_MAX];
};
struct pdsc_devinfo {
u8 asic_type;
u8 asic_rev;
char fw_version[PDS_CORE_DEVINFO_FWVERS_BUFLEN + 1];
char serial_num[PDS_CORE_DEVINFO_SERIAL_BUFLEN + 1];
};
struct pdsc_queue {
struct pdsc_q_info *info;
u64 dbval;
u16 head_idx;
u16 tail_idx;
u8 hw_type;
unsigned int index;
unsigned int num_descs;
u64 dbell_count;
u64 features;
unsigned int type;
unsigned int hw_index;
union {
void *base;
struct pds_core_admin_cmd *adminq;
};
dma_addr_t base_pa; /* must be page aligned */
unsigned int desc_size;
unsigned int pid;
char name[PDSC_QUEUE_NAME_MAX_SZ];
};
#define PDSC_INTR_NAME_MAX_SZ 32
struct pdsc_intr_info {
char name[PDSC_INTR_NAME_MAX_SZ];
unsigned int index;
unsigned int vector;
void *data;
};
struct pdsc_cq_info {
void *comp;
};
struct pdsc_buf_info {
struct page *page;
dma_addr_t dma_addr;
u32 page_offset;
u32 len;
};
struct pdsc_q_info {
union {
void *desc;
struct pds_core_admin_cmd *adminq_desc;
};
unsigned int bytes;
unsigned int nbufs;
struct pdsc_buf_info bufs[PDS_CORE_MAX_FRAGS];
struct pdsc_wait_context *wc;
void *dest;
};
struct pdsc_cq {
struct pdsc_cq_info *info;
struct pdsc_queue *bound_q;
struct pdsc_intr_info *bound_intr;
u16 tail_idx;
bool done_color;
unsigned int num_descs;
unsigned int desc_size;
void *base;
dma_addr_t base_pa; /* must be page aligned */
} ____cacheline_aligned_in_smp;
struct pdsc_qcq {
struct pdsc *pdsc;
void *q_base;
dma_addr_t q_base_pa; /* might not be page aligned */
void *cq_base;
dma_addr_t cq_base_pa; /* might not be page aligned */
u32 q_size;
u32 cq_size;
bool armed;
unsigned int flags;
struct work_struct work;
struct pdsc_queue q;
struct pdsc_cq cq;
int intx;
u32 accum_work;
struct dentry *dentry;
};
struct pdsc_viftype {
char *name;
bool supported;
bool enabled;
int dl_id;
int vif_id;
struct pds_auxiliary_dev *padev;
};
/* No state flags set means we are in a steady running state */
enum pdsc_state_flags {
PDSC_S_FW_DEAD, /* stopped, wait on startup or recovery */
PDSC_S_INITING_DRIVER, /* initial startup from probe */
PDSC_S_STOPPING_DRIVER, /* driver remove */
/* leave this as last */
PDSC_S_STATE_SIZE
};
struct pdsc {
struct pci_dev *pdev;
struct dentry *dentry;
struct device *dev;
struct pdsc_dev_bar bars[PDS_CORE_BARS_MAX];
struct pdsc_vf *vfs;
int num_vfs;
int vf_id;
int hw_index;
int uid;
unsigned long state;
u8 fw_status;
u8 fw_generation;
unsigned long last_fw_time;
u32 last_hb;
struct timer_list wdtimer;
unsigned int wdtimer_period;
struct work_struct health_work;
struct devlink_health_reporter *fw_reporter;
u32 fw_recoveries;
struct pdsc_devinfo dev_info;
struct pds_core_dev_identity dev_ident;
unsigned int nintrs;
struct pdsc_intr_info *intr_info; /* array of nintrs elements */
struct workqueue_struct *wq;
unsigned int devcmd_timeout;
struct mutex devcmd_lock; /* lock for dev_cmd operations */
struct mutex config_lock; /* lock for configuration operations */
spinlock_t adminq_lock; /* lock for adminq operations */
struct pds_core_dev_info_regs __iomem *info_regs;
struct pds_core_dev_cmd_regs __iomem *cmd_regs;
struct pds_core_intr __iomem *intr_ctrl;
u64 __iomem *intr_status;
u64 __iomem *db_pages;
dma_addr_t phy_db_pages;
u64 __iomem *kern_dbpage;
struct pdsc_qcq adminqcq;
struct pdsc_qcq notifyqcq;
u64 last_eid;
struct pdsc_viftype *viftype_status;
};
/**
* enum pds_core_dbell_bits - bitwise composition of dbell values.
*
* @PDS_CORE_DBELL_QID_MASK: unshifted mask of valid queue id bits.
* @PDS_CORE_DBELL_QID_SHIFT: queue id shift amount in dbell value.
* @PDS_CORE_DBELL_QID: macro to build QID component of dbell value.
*
* @PDS_CORE_DBELL_RING_MASK: unshifted mask of valid ring bits.
* @PDS_CORE_DBELL_RING_SHIFT: ring shift amount in dbell value.
* @PDS_CORE_DBELL_RING: macro to build ring component of dbell value.
*
* @PDS_CORE_DBELL_RING_0: ring zero dbell component value.
* @PDS_CORE_DBELL_RING_1: ring one dbell component value.
* @PDS_CORE_DBELL_RING_2: ring two dbell component value.
* @PDS_CORE_DBELL_RING_3: ring three dbell component value.
*
* @PDS_CORE_DBELL_INDEX_MASK: bit mask of valid index bits, no shift needed.
*/
enum pds_core_dbell_bits {
PDS_CORE_DBELL_QID_MASK = 0xffffff,
PDS_CORE_DBELL_QID_SHIFT = 24,
#define PDS_CORE_DBELL_QID(n) \
(((u64)(n) & PDS_CORE_DBELL_QID_MASK) << PDS_CORE_DBELL_QID_SHIFT)
PDS_CORE_DBELL_RING_MASK = 0x7,
PDS_CORE_DBELL_RING_SHIFT = 16,
#define PDS_CORE_DBELL_RING(n) \
(((u64)(n) & PDS_CORE_DBELL_RING_MASK) << PDS_CORE_DBELL_RING_SHIFT)
PDS_CORE_DBELL_RING_0 = 0,
PDS_CORE_DBELL_RING_1 = PDS_CORE_DBELL_RING(1),
PDS_CORE_DBELL_RING_2 = PDS_CORE_DBELL_RING(2),
PDS_CORE_DBELL_RING_3 = PDS_CORE_DBELL_RING(3),
PDS_CORE_DBELL_INDEX_MASK = 0xffff,
};
static inline void pds_core_dbell_ring(u64 __iomem *db_page,
enum pds_core_logical_qtype qtype,
u64 val)
{
writeq(val, &db_page[qtype]);
}
int pdsc_fw_reporter_diagnose(struct devlink_health_reporter *reporter,
struct devlink_fmsg *fmsg,
struct netlink_ext_ack *extack);
int pdsc_dl_info_get(struct devlink *dl, struct devlink_info_req *req,
struct netlink_ext_ack *extack);
int pdsc_dl_flash_update(struct devlink *dl,
struct devlink_flash_update_params *params,
struct netlink_ext_ack *extack);
int pdsc_dl_enable_get(struct devlink *dl, u32 id,
struct devlink_param_gset_ctx *ctx);
int pdsc_dl_enable_set(struct devlink *dl, u32 id,
struct devlink_param_gset_ctx *ctx);
int pdsc_dl_enable_validate(struct devlink *dl, u32 id,
union devlink_param_value val,
struct netlink_ext_ack *extack);
void __iomem *pdsc_map_dbpage(struct pdsc *pdsc, int page_num);
void pdsc_debugfs_create(void);
void pdsc_debugfs_destroy(void);
void pdsc_debugfs_add_dev(struct pdsc *pdsc);
void pdsc_debugfs_del_dev(struct pdsc *pdsc);
void pdsc_debugfs_add_ident(struct pdsc *pdsc);
void pdsc_debugfs_add_viftype(struct pdsc *pdsc);
void pdsc_debugfs_add_irqs(struct pdsc *pdsc);
void pdsc_debugfs_add_qcq(struct pdsc *pdsc, struct pdsc_qcq *qcq);
void pdsc_debugfs_del_qcq(struct pdsc_qcq *qcq);
int pdsc_err_to_errno(enum pds_core_status_code code);
bool pdsc_is_fw_running(struct pdsc *pdsc);
bool pdsc_is_fw_good(struct pdsc *pdsc);
int pdsc_devcmd(struct pdsc *pdsc, union pds_core_dev_cmd *cmd,
union pds_core_dev_comp *comp, int max_seconds);
int pdsc_devcmd_locked(struct pdsc *pdsc, union pds_core_dev_cmd *cmd,
union pds_core_dev_comp *comp, int max_seconds);
int pdsc_devcmd_init(struct pdsc *pdsc);
int pdsc_devcmd_reset(struct pdsc *pdsc);
int pdsc_dev_reinit(struct pdsc *pdsc);
int pdsc_dev_init(struct pdsc *pdsc);
int pdsc_intr_alloc(struct pdsc *pdsc, char *name,
irq_handler_t handler, void *data);
void pdsc_intr_free(struct pdsc *pdsc, int index);
void pdsc_qcq_free(struct pdsc *pdsc, struct pdsc_qcq *qcq);
int pdsc_qcq_alloc(struct pdsc *pdsc, unsigned int type, unsigned int index,
const char *name, unsigned int flags, unsigned int num_descs,
unsigned int desc_size, unsigned int cq_desc_size,
unsigned int pid, struct pdsc_qcq *qcq);
int pdsc_setup(struct pdsc *pdsc, bool init);
void pdsc_teardown(struct pdsc *pdsc, bool removing);
int pdsc_start(struct pdsc *pdsc);
void pdsc_stop(struct pdsc *pdsc);
void pdsc_health_thread(struct work_struct *work);
int pdsc_register_notify(struct notifier_block *nb);
void pdsc_unregister_notify(struct notifier_block *nb);
void pdsc_notify(unsigned long event, void *data);
int pdsc_auxbus_dev_add(struct pdsc *cf, struct pdsc *pf);
int pdsc_auxbus_dev_del(struct pdsc *cf, struct pdsc *pf);
void pdsc_process_adminq(struct pdsc_qcq *qcq);
void pdsc_work_thread(struct work_struct *work);
irqreturn_t pdsc_adminq_isr(int irq, void *data);
int pdsc_firmware_update(struct pdsc *pdsc, const struct firmware *fw,
struct netlink_ext_ack *extack);
#endif /* _PDSC_H_ */
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2023 Advanced Micro Devices, Inc */
#include <linux/pci.h>
#include "core.h"
static struct dentry *pdsc_dir;
void pdsc_debugfs_create(void)
{
pdsc_dir = debugfs_create_dir(PDS_CORE_DRV_NAME, NULL);
}
void pdsc_debugfs_destroy(void)
{
debugfs_remove_recursive(pdsc_dir);
}
void pdsc_debugfs_add_dev(struct pdsc *pdsc)
{
pdsc->dentry = debugfs_create_dir(pci_name(pdsc->pdev), pdsc_dir);
debugfs_create_ulong("state", 0400, pdsc->dentry, &pdsc->state);
}
void pdsc_debugfs_del_dev(struct pdsc *pdsc)
{
debugfs_remove_recursive(pdsc->dentry);
pdsc->dentry = NULL;
}
static int identity_show(struct seq_file *seq, void *v)
{
struct pdsc *pdsc = seq->private;
struct pds_core_dev_identity *ident;
int vt;
ident = &pdsc->dev_ident;
seq_printf(seq, "fw_heartbeat: 0x%x\n",
ioread32(&pdsc->info_regs->fw_heartbeat));
seq_printf(seq, "nlifs: %d\n",
le32_to_cpu(ident->nlifs));
seq_printf(seq, "nintrs: %d\n",
le32_to_cpu(ident->nintrs));
seq_printf(seq, "ndbpgs_per_lif: %d\n",
le32_to_cpu(ident->ndbpgs_per_lif));
seq_printf(seq, "intr_coal_mult: %d\n",
le32_to_cpu(ident->intr_coal_mult));
seq_printf(seq, "intr_coal_div: %d\n",
le32_to_cpu(ident->intr_coal_div));
seq_puts(seq, "vif_types: ");
for (vt = 0; vt < PDS_DEV_TYPE_MAX; vt++)
seq_printf(seq, "%d ",
le16_to_cpu(pdsc->dev_ident.vif_types[vt]));
seq_puts(seq, "\n");
return 0;
}
DEFINE_SHOW_ATTRIBUTE(identity);
void pdsc_debugfs_add_ident(struct pdsc *pdsc)
{
debugfs_create_file("identity", 0400, pdsc->dentry,
pdsc, &identity_fops);
}
static int viftype_show(struct seq_file *seq, void *v)
{
struct pdsc *pdsc = seq->private;
int vt;
for (vt = 0; vt < PDS_DEV_TYPE_MAX; vt++) {
if (!pdsc->viftype_status[vt].name)
continue;
seq_printf(seq, "%s\t%d supported %d enabled\n",
pdsc->viftype_status[vt].name,
pdsc->viftype_status[vt].supported,
pdsc->viftype_status[vt].enabled);
}
return 0;
}
DEFINE_SHOW_ATTRIBUTE(viftype);
void pdsc_debugfs_add_viftype(struct pdsc *pdsc)
{
debugfs_create_file("viftypes", 0400, pdsc->dentry,
pdsc, &viftype_fops);
}
static const struct debugfs_reg32 intr_ctrl_regs[] = {
{ .name = "coal_init", .offset = 0, },
{ .name = "mask", .offset = 4, },
{ .name = "credits", .offset = 8, },
{ .name = "mask_on_assert", .offset = 12, },
{ .name = "coal_timer", .offset = 16, },
};
void pdsc_debugfs_add_qcq(struct pdsc *pdsc, struct pdsc_qcq *qcq)
{
struct dentry *qcq_dentry, *q_dentry, *cq_dentry;
struct dentry *intr_dentry;
struct debugfs_regset32 *intr_ctrl_regset;
struct pdsc_intr_info *intr = &pdsc->intr_info[qcq->intx];
struct pdsc_queue *q = &qcq->q;
struct pdsc_cq *cq = &qcq->cq;
qcq_dentry = debugfs_create_dir(q->name, pdsc->dentry);
if (IS_ERR_OR_NULL(qcq_dentry))
return;
qcq->dentry = qcq_dentry;
debugfs_create_x64("q_base_pa", 0400, qcq_dentry, &qcq->q_base_pa);
debugfs_create_x32("q_size", 0400, qcq_dentry, &qcq->q_size);
debugfs_create_x64("cq_base_pa", 0400, qcq_dentry, &qcq->cq_base_pa);
debugfs_create_x32("cq_size", 0400, qcq_dentry, &qcq->cq_size);
debugfs_create_x32("accum_work", 0400, qcq_dentry, &qcq->accum_work);
q_dentry = debugfs_create_dir("q", qcq->dentry);
if (IS_ERR_OR_NULL(q_dentry))
return;
debugfs_create_u32("index", 0400, q_dentry, &q->index);
debugfs_create_u32("num_descs", 0400, q_dentry, &q->num_descs);
debugfs_create_u32("desc_size", 0400, q_dentry, &q->desc_size);
debugfs_create_u32("pid", 0400, q_dentry, &q->pid);
debugfs_create_u16("tail", 0400, q_dentry, &q->tail_idx);
debugfs_create_u16("head", 0400, q_dentry, &q->head_idx);
cq_dentry = debugfs_create_dir("cq", qcq->dentry);
if (IS_ERR_OR_NULL(cq_dentry))
return;
debugfs_create_x64("base_pa", 0400, cq_dentry, &cq->base_pa);
debugfs_create_u32("num_descs", 0400, cq_dentry, &cq->num_descs);
debugfs_create_u32("desc_size", 0400, cq_dentry, &cq->desc_size);
debugfs_create_bool("done_color", 0400, cq_dentry, &cq->done_color);
debugfs_create_u16("tail", 0400, cq_dentry, &cq->tail_idx);
if (qcq->flags & PDS_CORE_QCQ_F_INTR) {
intr_dentry = debugfs_create_dir("intr", qcq->dentry);
if (IS_ERR_OR_NULL(intr_dentry))
return;
debugfs_create_u32("index", 0400, intr_dentry, &intr->index);
debugfs_create_u32("vector", 0400, intr_dentry, &intr->vector);
intr_ctrl_regset = kzalloc(sizeof(*intr_ctrl_regset),
GFP_KERNEL);
if (!intr_ctrl_regset)
return;
intr_ctrl_regset->regs = intr_ctrl_regs;
intr_ctrl_regset->nregs = ARRAY_SIZE(intr_ctrl_regs);
intr_ctrl_regset->base = &pdsc->intr_ctrl[intr->index];
debugfs_create_regset32("intr_ctrl", 0400, intr_dentry,
intr_ctrl_regset);
}
}
void pdsc_debugfs_del_qcq(struct pdsc_qcq *qcq)
{
debugfs_remove_recursive(qcq->dentry);
qcq->dentry = NULL;
}
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2023 Advanced Micro Devices, Inc */
#include <linux/errno.h>
#include <linux/pci.h>
#include <linux/utsname.h>
#include "core.h"
int pdsc_err_to_errno(enum pds_core_status_code code)
{
switch (code) {
case PDS_RC_SUCCESS:
return 0;
case PDS_RC_EVERSION:
case PDS_RC_EQTYPE:
case PDS_RC_EQID:
case PDS_RC_EINVAL:
case PDS_RC_ENOSUPP:
return -EINVAL;
case PDS_RC_EPERM:
return -EPERM;
case PDS_RC_ENOENT:
return -ENOENT;
case PDS_RC_EAGAIN:
return -EAGAIN;
case PDS_RC_ENOMEM:
return -ENOMEM;
case PDS_RC_EFAULT:
return -EFAULT;
case PDS_RC_EBUSY:
return -EBUSY;
case PDS_RC_EEXIST:
return -EEXIST;
case PDS_RC_EVFID:
return -ENODEV;
case PDS_RC_ECLIENT:
return -ECHILD;
case PDS_RC_ENOSPC:
return -ENOSPC;
case PDS_RC_ERANGE:
return -ERANGE;
case PDS_RC_BAD_ADDR:
return -EFAULT;
case PDS_RC_EOPCODE:
case PDS_RC_EINTR:
case PDS_RC_DEV_CMD:
case PDS_RC_ERROR:
case PDS_RC_ERDMA:
case PDS_RC_EIO:
default:
return -EIO;
}
}
bool pdsc_is_fw_running(struct pdsc *pdsc)
{
pdsc->fw_status = ioread8(&pdsc->info_regs->fw_status);
pdsc->last_fw_time = jiffies;
pdsc->last_hb = ioread32(&pdsc->info_regs->fw_heartbeat);
/* Firmware is useful only if the running bit is set and
* fw_status != 0xff (bad PCI read)
*/
return (pdsc->fw_status != 0xff) &&
(pdsc->fw_status & PDS_CORE_FW_STS_F_RUNNING);
}
bool pdsc_is_fw_good(struct pdsc *pdsc)
{
u8 gen = pdsc->fw_status & PDS_CORE_FW_STS_F_GENERATION;
return pdsc_is_fw_running(pdsc) && gen == pdsc->fw_generation;
}
static u8 pdsc_devcmd_status(struct pdsc *pdsc)
{
return ioread8(&pdsc->cmd_regs->comp.status);
}
static bool pdsc_devcmd_done(struct pdsc *pdsc)
{
return ioread32(&pdsc->cmd_regs->done) & PDS_CORE_DEV_CMD_DONE;
}
static void pdsc_devcmd_dbell(struct pdsc *pdsc)
{
iowrite32(0, &pdsc->cmd_regs->done);
iowrite32(1, &pdsc->cmd_regs->doorbell);
}
static void pdsc_devcmd_clean(struct pdsc *pdsc)
{
iowrite32(0, &pdsc->cmd_regs->doorbell);
memset_io(&pdsc->cmd_regs->cmd, 0, sizeof(pdsc->cmd_regs->cmd));
}
static const char *pdsc_devcmd_str(int opcode)
{
switch (opcode) {
case PDS_CORE_CMD_NOP:
return "PDS_CORE_CMD_NOP";
case PDS_CORE_CMD_IDENTIFY:
return "PDS_CORE_CMD_IDENTIFY";
case PDS_CORE_CMD_RESET:
return "PDS_CORE_CMD_RESET";
case PDS_CORE_CMD_INIT:
return "PDS_CORE_CMD_INIT";
case PDS_CORE_CMD_FW_DOWNLOAD:
return "PDS_CORE_CMD_FW_DOWNLOAD";
case PDS_CORE_CMD_FW_CONTROL:
return "PDS_CORE_CMD_FW_CONTROL";
default:
return "PDS_CORE_CMD_UNKNOWN";
}
}
static int pdsc_devcmd_wait(struct pdsc *pdsc, int max_seconds)
{
struct device *dev = pdsc->dev;
unsigned long start_time;
unsigned long max_wait;
unsigned long duration;
int timeout = 0;
int done = 0;
int err = 0;
int status;
int opcode;
opcode = ioread8(&pdsc->cmd_regs->cmd.opcode);
start_time = jiffies;
max_wait = start_time + (max_seconds * HZ);
while (!done && !timeout) {
done = pdsc_devcmd_done(pdsc);
if (done)
break;
timeout = time_after(jiffies, max_wait);
if (timeout)
break;
usleep_range(100, 200);
}
duration = jiffies - start_time;
if (done && duration > HZ)
dev_dbg(dev, "DEVCMD %d %s after %ld secs\n",
opcode, pdsc_devcmd_str(opcode), duration / HZ);
if (!done || timeout) {
dev_err(dev, "DEVCMD %d %s timeout, done %d timeout %d max_seconds=%d\n",
opcode, pdsc_devcmd_str(opcode), done, timeout,
max_seconds);
pdsc_devcmd_clean(pdsc);
return -ETIMEDOUT;
}
status = pdsc_devcmd_status(pdsc);
err = pdsc_err_to_errno(status);
if (err && err != -EAGAIN)
dev_err(dev, "DEVCMD %d %s failed, status=%d err %d %pe\n",
opcode, pdsc_devcmd_str(opcode), status, err,
ERR_PTR(err));
return err;
}
int pdsc_devcmd_locked(struct pdsc *pdsc, union pds_core_dev_cmd *cmd,
union pds_core_dev_comp *comp, int max_seconds)
{
int err;
memcpy_toio(&pdsc->cmd_regs->cmd, cmd, sizeof(*cmd));
pdsc_devcmd_dbell(pdsc);
err = pdsc_devcmd_wait(pdsc, max_seconds);
memcpy_fromio(comp, &pdsc->cmd_regs->comp, sizeof(*comp));
if (err == -ENXIO || err == -ETIMEDOUT)
queue_work(pdsc->wq, &pdsc->health_work);
return err;
}
int pdsc_devcmd(struct pdsc *pdsc, union pds_core_dev_cmd *cmd,
union pds_core_dev_comp *comp, int max_seconds)
{
int err;
mutex_lock(&pdsc->devcmd_lock);
err = pdsc_devcmd_locked(pdsc, cmd, comp, max_seconds);
mutex_unlock(&pdsc->devcmd_lock);
return err;
}
int pdsc_devcmd_init(struct pdsc *pdsc)
{
union pds_core_dev_comp comp = {};
union pds_core_dev_cmd cmd = {
.opcode = PDS_CORE_CMD_INIT,
};
return pdsc_devcmd(pdsc, &cmd, &comp, pdsc->devcmd_timeout);
}
int pdsc_devcmd_reset(struct pdsc *pdsc)
{
union pds_core_dev_comp comp = {};
union pds_core_dev_cmd cmd = {
.reset.opcode = PDS_CORE_CMD_RESET,
};
return pdsc_devcmd(pdsc, &cmd, &comp, pdsc->devcmd_timeout);
}
static int pdsc_devcmd_identify_locked(struct pdsc *pdsc)
{
union pds_core_dev_comp comp = {};
union pds_core_dev_cmd cmd = {
.identify.opcode = PDS_CORE_CMD_IDENTIFY,
.identify.ver = PDS_CORE_IDENTITY_VERSION_1,
};
return pdsc_devcmd_locked(pdsc, &cmd, &comp, pdsc->devcmd_timeout);
}
static void pdsc_init_devinfo(struct pdsc *pdsc)
{
pdsc->dev_info.asic_type = ioread8(&pdsc->info_regs->asic_type);
pdsc->dev_info.asic_rev = ioread8(&pdsc->info_regs->asic_rev);
pdsc->fw_generation = PDS_CORE_FW_STS_F_GENERATION &
ioread8(&pdsc->info_regs->fw_status);
memcpy_fromio(pdsc->dev_info.fw_version,
pdsc->info_regs->fw_version,
PDS_CORE_DEVINFO_FWVERS_BUFLEN);
pdsc->dev_info.fw_version[PDS_CORE_DEVINFO_FWVERS_BUFLEN] = 0;
memcpy_fromio(pdsc->dev_info.serial_num,
pdsc->info_regs->serial_num,
PDS_CORE_DEVINFO_SERIAL_BUFLEN);
pdsc->dev_info.serial_num[PDS_CORE_DEVINFO_SERIAL_BUFLEN] = 0;
dev_dbg(pdsc->dev, "fw_version %s\n", pdsc->dev_info.fw_version);
}
static int pdsc_identify(struct pdsc *pdsc)
{
struct pds_core_drv_identity drv = {};
size_t sz;
int err;
drv.drv_type = cpu_to_le32(PDS_DRIVER_LINUX);
snprintf(drv.driver_ver_str, sizeof(drv.driver_ver_str),
"%s %s", PDS_CORE_DRV_NAME, utsname()->release);
/* Next let's get some info about the device
* We use the devcmd_lock at this level in order to
* get safe access to the cmd_regs->data before anyone
* else can mess it up
*/
mutex_lock(&pdsc->devcmd_lock);
sz = min_t(size_t, sizeof(drv), sizeof(pdsc->cmd_regs->data));
memcpy_toio(&pdsc->cmd_regs->data, &drv, sz);
err = pdsc_devcmd_identify_locked(pdsc);
if (!err) {
sz = min_t(size_t, sizeof(pdsc->dev_ident),
sizeof(pdsc->cmd_regs->data));
memcpy_fromio(&pdsc->dev_ident, &pdsc->cmd_regs->data, sz);
}
mutex_unlock(&pdsc->devcmd_lock);
if (err) {
dev_err(pdsc->dev, "Cannot identify device: %pe\n",
ERR_PTR(err));
return err;
}
if (isprint(pdsc->dev_info.fw_version[0]) &&
isascii(pdsc->dev_info.fw_version[0]))
dev_info(pdsc->dev, "FW: %.*s\n",
(int)(sizeof(pdsc->dev_info.fw_version) - 1),
pdsc->dev_info.fw_version);
else
dev_info(pdsc->dev, "FW: (invalid string) 0x%02x 0x%02x 0x%02x 0x%02x ...\n",
(u8)pdsc->dev_info.fw_version[0],
(u8)pdsc->dev_info.fw_version[1],
(u8)pdsc->dev_info.fw_version[2],
(u8)pdsc->dev_info.fw_version[3]);
return 0;
}
int pdsc_dev_reinit(struct pdsc *pdsc)
{
pdsc_init_devinfo(pdsc);
return pdsc_identify(pdsc);
}
int pdsc_dev_init(struct pdsc *pdsc)
{
unsigned int nintrs;
int err;
/* Initial init and reset of device */
pdsc_init_devinfo(pdsc);
pdsc->devcmd_timeout = PDS_CORE_DEVCMD_TIMEOUT;
err = pdsc_devcmd_reset(pdsc);
if (err)
return err;
err = pdsc_identify(pdsc);
if (err)
return err;
pdsc_debugfs_add_ident(pdsc);
/* Now we can reserve interrupts */
nintrs = le32_to_cpu(pdsc->dev_ident.nintrs);
nintrs = min_t(unsigned int, num_online_cpus(), nintrs);
/* Get intr_info struct array for tracking */
pdsc->intr_info = kcalloc(nintrs, sizeof(*pdsc->intr_info), GFP_KERNEL);
if (!pdsc->intr_info) {
err = -ENOMEM;
goto err_out;
}
err = pci_alloc_irq_vectors(pdsc->pdev, nintrs, nintrs, PCI_IRQ_MSIX);
if (err != nintrs) {
dev_err(pdsc->dev, "Can't get %d intrs from OS: %pe\n",
nintrs, ERR_PTR(err));
err = -ENOSPC;
goto err_out;
}
pdsc->nintrs = nintrs;
return 0;
err_out:
kfree(pdsc->intr_info);
pdsc->intr_info = NULL;
return err;
}
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2023 Advanced Micro Devices, Inc */
#include "core.h"
#include <linux/pds/pds_auxbus.h>
static struct
pdsc_viftype *pdsc_dl_find_viftype_by_id(struct pdsc *pdsc,
enum devlink_param_type dl_id)
{
int vt;
for (vt = 0; vt < PDS_DEV_TYPE_MAX; vt++) {
if (pdsc->viftype_status[vt].dl_id == dl_id)
return &pdsc->viftype_status[vt];
}
return NULL;
}
int pdsc_dl_enable_get(struct devlink *dl, u32 id,
struct devlink_param_gset_ctx *ctx)
{
struct pdsc *pdsc = devlink_priv(dl);
struct pdsc_viftype *vt_entry;
vt_entry = pdsc_dl_find_viftype_by_id(pdsc, id);
if (!vt_entry)
return -ENOENT;
ctx->val.vbool = vt_entry->enabled;
return 0;
}
int pdsc_dl_enable_set(struct devlink *dl, u32 id,
struct devlink_param_gset_ctx *ctx)
{
struct pdsc *pdsc = devlink_priv(dl);
struct pdsc_viftype *vt_entry;
int err = 0;
int vf_id;
vt_entry = pdsc_dl_find_viftype_by_id(pdsc, id);
if (!vt_entry || !vt_entry->supported)
return -EOPNOTSUPP;
if (vt_entry->enabled == ctx->val.vbool)
return 0;
vt_entry->enabled = ctx->val.vbool;
for (vf_id = 0; vf_id < pdsc->num_vfs; vf_id++) {
struct pdsc *vf = pdsc->vfs[vf_id].vf;
err = ctx->val.vbool ? pdsc_auxbus_dev_add(vf, pdsc) :
pdsc_auxbus_dev_del(vf, pdsc);
}
return err;
}
int pdsc_dl_enable_validate(struct devlink *dl, u32 id,
union devlink_param_value val,
struct netlink_ext_ack *extack)
{
struct pdsc *pdsc = devlink_priv(dl);
struct pdsc_viftype *vt_entry;
vt_entry = pdsc_dl_find_viftype_by_id(pdsc, id);
if (!vt_entry || !vt_entry->supported)
return -EOPNOTSUPP;
if (!pdsc->viftype_status[vt_entry->vif_id].supported)
return -ENODEV;
return 0;
}
int pdsc_dl_flash_update(struct devlink *dl,
struct devlink_flash_update_params *params,
struct netlink_ext_ack *extack)
{
struct pdsc *pdsc = devlink_priv(dl);
return pdsc_firmware_update(pdsc, params->fw, extack);
}
static char *fw_slotnames[] = {
"fw.goldfw",
"fw.mainfwa",
"fw.mainfwb",
};
int pdsc_dl_info_get(struct devlink *dl, struct devlink_info_req *req,
struct netlink_ext_ack *extack)
{
union pds_core_dev_cmd cmd = {
.fw_control.opcode = PDS_CORE_CMD_FW_CONTROL,
.fw_control.oper = PDS_CORE_FW_GET_LIST,
};
struct pds_core_fw_list_info fw_list;
struct pdsc *pdsc = devlink_priv(dl);
union pds_core_dev_comp comp;
char buf[16];
int listlen;
int err;
int i;
mutex_lock(&pdsc->devcmd_lock);
err = pdsc_devcmd_locked(pdsc, &cmd, &comp, pdsc->devcmd_timeout * 2);
memcpy_fromio(&fw_list, pdsc->cmd_regs->data, sizeof(fw_list));
mutex_unlock(&pdsc->devcmd_lock);
if (err && err != -EIO)
return err;
listlen = min_t(int, fw_list.num_fw_slots, ARRAY_SIZE(fw_list.fw_names));
for (i = 0; i < listlen; i++) {
if (i < ARRAY_SIZE(fw_slotnames))
strscpy(buf, fw_slotnames[i], sizeof(buf));
else
snprintf(buf, sizeof(buf), "fw.slot_%d", i);
err = devlink_info_version_stored_put(req, buf,
fw_list.fw_names[i].fw_version);
if (err)
return err;
}
err = devlink_info_version_running_put(req,
DEVLINK_INFO_VERSION_GENERIC_FW,
pdsc->dev_info.fw_version);
if (err)
return err;
snprintf(buf, sizeof(buf), "0x%x", pdsc->dev_info.asic_type);
err = devlink_info_version_fixed_put(req,
DEVLINK_INFO_VERSION_GENERIC_ASIC_ID,
buf);
if (err)
return err;
snprintf(buf, sizeof(buf), "0x%x", pdsc->dev_info.asic_rev);
err = devlink_info_version_fixed_put(req,
DEVLINK_INFO_VERSION_GENERIC_ASIC_REV,
buf);
if (err)
return err;
return devlink_info_serial_number_put(req, pdsc->dev_info.serial_num);
}
int pdsc_fw_reporter_diagnose(struct devlink_health_reporter *reporter,
struct devlink_fmsg *fmsg,
struct netlink_ext_ack *extack)
{
struct pdsc *pdsc = devlink_health_reporter_priv(reporter);
int err;
mutex_lock(&pdsc->config_lock);
if (test_bit(PDSC_S_FW_DEAD, &pdsc->state))
err = devlink_fmsg_string_pair_put(fmsg, "Status", "dead");
else if (!pdsc_is_fw_good(pdsc))
err = devlink_fmsg_string_pair_put(fmsg, "Status", "unhealthy");
else
err = devlink_fmsg_string_pair_put(fmsg, "Status", "healthy");
mutex_unlock(&pdsc->config_lock);
if (err)
return err;
err = devlink_fmsg_u32_pair_put(fmsg, "State",
pdsc->fw_status &
~PDS_CORE_FW_STS_F_GENERATION);
if (err)
return err;
err = devlink_fmsg_u32_pair_put(fmsg, "Generation",
pdsc->fw_generation >> 4);
if (err)
return err;
return devlink_fmsg_u32_pair_put(fmsg, "Recoveries",
pdsc->fw_recoveries);
}
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2023 Advanced Micro Devices, Inc */
#include "core.h"
/* The worst case wait for the install activity is about 25 minutes when
* installing a new CPLD, which is very seldom. Normal is about 30-35
* seconds. Since the driver can't tell if a CPLD update will happen we
* set the timeout for the ugly case.
*/
#define PDSC_FW_INSTALL_TIMEOUT (25 * 60)
#define PDSC_FW_SELECT_TIMEOUT 30
/* Number of periodic log updates during fw file download */
#define PDSC_FW_INTERVAL_FRACTION 32
static int pdsc_devcmd_fw_download_locked(struct pdsc *pdsc, u64 addr,
u32 offset, u32 length)
{
union pds_core_dev_cmd cmd = {
.fw_download.opcode = PDS_CORE_CMD_FW_DOWNLOAD,
.fw_download.offset = cpu_to_le32(offset),
.fw_download.addr = cpu_to_le64(addr),
.fw_download.length = cpu_to_le32(length),
};
union pds_core_dev_comp comp;
return pdsc_devcmd_locked(pdsc, &cmd, &comp, pdsc->devcmd_timeout);
}
static int pdsc_devcmd_fw_install(struct pdsc *pdsc)
{
union pds_core_dev_cmd cmd = {
.fw_control.opcode = PDS_CORE_CMD_FW_CONTROL,
.fw_control.oper = PDS_CORE_FW_INSTALL_ASYNC
};
union pds_core_dev_comp comp;
int err;
err = pdsc_devcmd(pdsc, &cmd, &comp, pdsc->devcmd_timeout);
if (err < 0)
return err;
return comp.fw_control.slot;
}
static int pdsc_devcmd_fw_activate(struct pdsc *pdsc,
enum pds_core_fw_slot slot)
{
union pds_core_dev_cmd cmd = {
.fw_control.opcode = PDS_CORE_CMD_FW_CONTROL,
.fw_control.oper = PDS_CORE_FW_ACTIVATE_ASYNC,
.fw_control.slot = slot
};
union pds_core_dev_comp comp;
return pdsc_devcmd(pdsc, &cmd, &comp, pdsc->devcmd_timeout);
}
static int pdsc_fw_status_long_wait(struct pdsc *pdsc,
const char *label,
unsigned long timeout,
u8 fw_cmd,
struct netlink_ext_ack *extack)
{
union pds_core_dev_cmd cmd = {
.fw_control.opcode = PDS_CORE_CMD_FW_CONTROL,
.fw_control.oper = fw_cmd,
};
union pds_core_dev_comp comp;
unsigned long start_time;
unsigned long end_time;
int err;
/* Ping on the status of the long running async install
* command. We get EAGAIN while the command is still
* running, else we get the final command status.
*/
start_time = jiffies;
end_time = start_time + (timeout * HZ);
do {
err = pdsc_devcmd(pdsc, &cmd, &comp, pdsc->devcmd_timeout);
msleep(20);
} while (time_before(jiffies, end_time) &&
(err == -EAGAIN || err == -ETIMEDOUT));
if (err == -EAGAIN || err == -ETIMEDOUT) {
NL_SET_ERR_MSG_MOD(extack, "Firmware wait timed out");
dev_err(pdsc->dev, "DEV_CMD firmware wait %s timed out\n",
label);
} else if (err) {
NL_SET_ERR_MSG_MOD(extack, "Firmware wait failed");
}
return err;
}
int pdsc_firmware_update(struct pdsc *pdsc, const struct firmware *fw,
struct netlink_ext_ack *extack)
{
u32 buf_sz, copy_sz, offset;
struct devlink *dl;
int next_interval;
u64 data_addr;
int err = 0;
int fw_slot;
dev_info(pdsc->dev, "Installing firmware\n");
dl = priv_to_devlink(pdsc);
devlink_flash_update_status_notify(dl, "Preparing to flash",
NULL, 0, 0);
buf_sz = sizeof(pdsc->cmd_regs->data);
dev_dbg(pdsc->dev,
"downloading firmware - size %d part_sz %d nparts %lu\n",
(int)fw->size, buf_sz, DIV_ROUND_UP(fw->size, buf_sz));
offset = 0;
next_interval = 0;
data_addr = offsetof(struct pds_core_dev_cmd_regs, data);
while (offset < fw->size) {
if (offset >= next_interval) {
devlink_flash_update_status_notify(dl, "Downloading",
NULL, offset,
fw->size);
next_interval = offset +
(fw->size / PDSC_FW_INTERVAL_FRACTION);
}
copy_sz = min_t(unsigned int, buf_sz, fw->size - offset);
mutex_lock(&pdsc->devcmd_lock);
memcpy_toio(&pdsc->cmd_regs->data, fw->data + offset, copy_sz);
err = pdsc_devcmd_fw_download_locked(pdsc, data_addr,
offset, copy_sz);
mutex_unlock(&pdsc->devcmd_lock);
if (err) {
dev_err(pdsc->dev,
"download failed offset 0x%x addr 0x%llx len 0x%x: %pe\n",
offset, data_addr, copy_sz, ERR_PTR(err));
NL_SET_ERR_MSG_MOD(extack, "Segment download failed");
goto err_out;
}
offset += copy_sz;
}
devlink_flash_update_status_notify(dl, "Downloading", NULL,
fw->size, fw->size);
devlink_flash_update_timeout_notify(dl, "Installing", NULL,
PDSC_FW_INSTALL_TIMEOUT);
fw_slot = pdsc_devcmd_fw_install(pdsc);
if (fw_slot < 0) {
err = fw_slot;
dev_err(pdsc->dev, "install failed: %pe\n", ERR_PTR(err));
NL_SET_ERR_MSG_MOD(extack, "Failed to start firmware install");
goto err_out;
}
err = pdsc_fw_status_long_wait(pdsc, "Installing",
PDSC_FW_INSTALL_TIMEOUT,
PDS_CORE_FW_INSTALL_STATUS,
extack);
if (err)
goto err_out;
devlink_flash_update_timeout_notify(dl, "Selecting", NULL,
PDSC_FW_SELECT_TIMEOUT);
err = pdsc_devcmd_fw_activate(pdsc, fw_slot);
if (err) {
NL_SET_ERR_MSG_MOD(extack, "Failed to start firmware select");
goto err_out;
}
err = pdsc_fw_status_long_wait(pdsc, "Selecting",
PDSC_FW_SELECT_TIMEOUT,
PDS_CORE_FW_ACTIVATE_STATUS,
extack);
if (err)
goto err_out;
dev_info(pdsc->dev, "Firmware update completed, slot %d\n", fw_slot);
err_out:
if (err)
devlink_flash_update_status_notify(dl, "Flash failed",
NULL, 0, 0);
else
devlink_flash_update_status_notify(dl, "Flash done",
NULL, 0, 0);
return err;
}
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2023 Advanced Micro Devices, Inc */
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/pci.h>
#include <linux/pds/pds_common.h>
#include "core.h"
MODULE_DESCRIPTION(PDSC_DRV_DESCRIPTION);
MODULE_AUTHOR("Advanced Micro Devices, Inc");
MODULE_LICENSE("GPL");
/* Supported devices */
static const struct pci_device_id pdsc_id_table[] = {
{ PCI_VDEVICE(PENSANDO, PCI_DEVICE_ID_PENSANDO_CORE_PF) },
{ PCI_VDEVICE(PENSANDO, PCI_DEVICE_ID_PENSANDO_VDPA_VF) },
{ 0, } /* end of table */
};
MODULE_DEVICE_TABLE(pci, pdsc_id_table);
static void pdsc_wdtimer_cb(struct timer_list *t)
{
struct pdsc *pdsc = from_timer(pdsc, t, wdtimer);
dev_dbg(pdsc->dev, "%s: jiffies %ld\n", __func__, jiffies);
mod_timer(&pdsc->wdtimer,
round_jiffies(jiffies + pdsc->wdtimer_period));
queue_work(pdsc->wq, &pdsc->health_work);
}
static void pdsc_unmap_bars(struct pdsc *pdsc)
{
struct pdsc_dev_bar *bars = pdsc->bars;
unsigned int i;
for (i = 0; i < PDS_CORE_BARS_MAX; i++) {
if (bars[i].vaddr)
pci_iounmap(pdsc->pdev, bars[i].vaddr);
}
}
static int pdsc_map_bars(struct pdsc *pdsc)
{
struct pdsc_dev_bar *bar = pdsc->bars;
struct pci_dev *pdev = pdsc->pdev;
struct device *dev = pdsc->dev;
struct pdsc_dev_bar *bars;
unsigned int i, j;
int num_bars = 0;
int err;
u32 sig;
bars = pdsc->bars;
/* Since the PCI interface in the hardware is configurable,
* we need to poke into all the bars to find the set we're
* expecting.
*/
for (i = 0, j = 0; i < PDS_CORE_BARS_MAX; i++) {
if (!(pci_resource_flags(pdev, i) & IORESOURCE_MEM))
continue;
bars[j].len = pci_resource_len(pdev, i);
bars[j].bus_addr = pci_resource_start(pdev, i);
bars[j].res_index = i;
/* only map the whole bar 0 */
if (j > 0) {
bars[j].vaddr = NULL;
} else {
bars[j].vaddr = pci_iomap(pdev, i, bars[j].len);
if (!bars[j].vaddr) {
dev_err(dev, "Cannot map BAR %d, aborting\n", i);
return -ENODEV;
}
}
j++;
}
num_bars = j;
/* BAR0: dev_cmd and interrupts */
if (num_bars < 1) {
dev_err(dev, "No bars found\n");
err = -EFAULT;
goto err_out;
}
if (bar->len < PDS_CORE_BAR0_SIZE) {
dev_err(dev, "Resource bar size %lu too small\n", bar->len);
err = -EFAULT;
goto err_out;
}
pdsc->info_regs = bar->vaddr + PDS_CORE_BAR0_DEV_INFO_REGS_OFFSET;
pdsc->cmd_regs = bar->vaddr + PDS_CORE_BAR0_DEV_CMD_REGS_OFFSET;
pdsc->intr_status = bar->vaddr + PDS_CORE_BAR0_INTR_STATUS_OFFSET;
pdsc->intr_ctrl = bar->vaddr + PDS_CORE_BAR0_INTR_CTRL_OFFSET;
sig = ioread32(&pdsc->info_regs->signature);
if (sig != PDS_CORE_DEV_INFO_SIGNATURE) {
dev_err(dev, "Incompatible firmware signature %x", sig);
err = -EFAULT;
goto err_out;
}
/* BAR1: doorbells */
bar++;
if (num_bars < 2) {
dev_err(dev, "Doorbell bar missing\n");
err = -EFAULT;
goto err_out;
}
pdsc->db_pages = bar->vaddr;
pdsc->phy_db_pages = bar->bus_addr;
return 0;
err_out:
pdsc_unmap_bars(pdsc);
return err;
}
void __iomem *pdsc_map_dbpage(struct pdsc *pdsc, int page_num)
{
return pci_iomap_range(pdsc->pdev,
pdsc->bars[PDS_CORE_PCI_BAR_DBELL].res_index,
(u64)page_num << PAGE_SHIFT, PAGE_SIZE);
}
static int pdsc_sriov_configure(struct pci_dev *pdev, int num_vfs)
{
struct pdsc *pdsc = pci_get_drvdata(pdev);
struct device *dev = pdsc->dev;
int ret = 0;
if (num_vfs > 0) {
pdsc->vfs = kcalloc(num_vfs, sizeof(struct pdsc_vf),
GFP_KERNEL);
if (!pdsc->vfs)
return -ENOMEM;
pdsc->num_vfs = num_vfs;
ret = pci_enable_sriov(pdev, num_vfs);
if (ret) {
dev_err(dev, "Cannot enable SRIOV: %pe\n",
ERR_PTR(ret));
goto no_vfs;
}
return num_vfs;
}
no_vfs:
pci_disable_sriov(pdev);
kfree(pdsc->vfs);
pdsc->vfs = NULL;
pdsc->num_vfs = 0;
return ret;
}
static int pdsc_init_vf(struct pdsc *vf)
{
struct devlink *dl;
struct pdsc *pf;
int err;
pf = pdsc_get_pf_struct(vf->pdev);
if (IS_ERR_OR_NULL(pf))
return PTR_ERR(pf) ?: -1;
vf->vf_id = pci_iov_vf_id(vf->pdev);
dl = priv_to_devlink(vf);
devl_lock(dl);
devl_register(dl);
devl_unlock(dl);
pf->vfs[vf->vf_id].vf = vf;
err = pdsc_auxbus_dev_add(vf, pf);
if (err) {
devl_lock(dl);
devl_unregister(dl);
devl_unlock(dl);
}
return err;
}
static const struct devlink_health_reporter_ops pdsc_fw_reporter_ops = {
.name = "fw",
.diagnose = pdsc_fw_reporter_diagnose,
};
static const struct devlink_param pdsc_dl_params[] = {
DEVLINK_PARAM_GENERIC(ENABLE_VNET,
BIT(DEVLINK_PARAM_CMODE_RUNTIME),
pdsc_dl_enable_get,
pdsc_dl_enable_set,
pdsc_dl_enable_validate),
};
#define PDSC_WQ_NAME_LEN 24
static int pdsc_init_pf(struct pdsc *pdsc)
{
struct devlink_health_reporter *hr;
char wq_name[PDSC_WQ_NAME_LEN];
struct devlink *dl;
int err;
pcie_print_link_status(pdsc->pdev);
err = pci_request_regions(pdsc->pdev, PDS_CORE_DRV_NAME);
if (err) {
dev_err(pdsc->dev, "Cannot request PCI regions: %pe\n",
ERR_PTR(err));
return err;
}
err = pdsc_map_bars(pdsc);
if (err)
goto err_out_release_regions;
/* General workqueue and timer, but don't start timer yet */
snprintf(wq_name, sizeof(wq_name), "%s.%d", PDS_CORE_DRV_NAME, pdsc->uid);
pdsc->wq = create_singlethread_workqueue(wq_name);
INIT_WORK(&pdsc->health_work, pdsc_health_thread);
timer_setup(&pdsc->wdtimer, pdsc_wdtimer_cb, 0);
pdsc->wdtimer_period = PDSC_WATCHDOG_SECS * HZ;
mutex_init(&pdsc->devcmd_lock);
mutex_init(&pdsc->config_lock);
spin_lock_init(&pdsc->adminq_lock);
mutex_lock(&pdsc->config_lock);
set_bit(PDSC_S_FW_DEAD, &pdsc->state);
err = pdsc_setup(pdsc, PDSC_SETUP_INIT);
if (err)
goto err_out_unmap_bars;
err = pdsc_start(pdsc);
if (err)
goto err_out_teardown;
mutex_unlock(&pdsc->config_lock);
dl = priv_to_devlink(pdsc);
devl_lock(dl);
err = devl_params_register(dl, pdsc_dl_params,
ARRAY_SIZE(pdsc_dl_params));
if (err) {
dev_warn(pdsc->dev, "Failed to register devlink params: %pe\n",
ERR_PTR(err));
goto err_out_unlock_dl;
}
hr = devl_health_reporter_create(dl, &pdsc_fw_reporter_ops, 0, pdsc);
if (IS_ERR(hr)) {
dev_warn(pdsc->dev, "Failed to create fw reporter: %pe\n", hr);
err = PTR_ERR(hr);
goto err_out_unreg_params;
}
pdsc->fw_reporter = hr;
devl_register(dl);
devl_unlock(dl);
/* Lastly, start the health check timer */
mod_timer(&pdsc->wdtimer, round_jiffies(jiffies + pdsc->wdtimer_period));
return 0;
err_out_unreg_params:
devl_params_unregister(dl, pdsc_dl_params,
ARRAY_SIZE(pdsc_dl_params));
err_out_unlock_dl:
devl_unlock(dl);
pdsc_stop(pdsc);
err_out_teardown:
pdsc_teardown(pdsc, PDSC_TEARDOWN_REMOVING);
err_out_unmap_bars:
mutex_unlock(&pdsc->config_lock);
del_timer_sync(&pdsc->wdtimer);
if (pdsc->wq)
destroy_workqueue(pdsc->wq);
mutex_destroy(&pdsc->config_lock);
mutex_destroy(&pdsc->devcmd_lock);
pci_free_irq_vectors(pdsc->pdev);
pdsc_unmap_bars(pdsc);
err_out_release_regions:
pci_release_regions(pdsc->pdev);
return err;
}
static const struct devlink_ops pdsc_dl_ops = {
.info_get = pdsc_dl_info_get,
.flash_update = pdsc_dl_flash_update,
};
static const struct devlink_ops pdsc_dl_vf_ops = {
};
static DEFINE_IDA(pdsc_ida);
static int pdsc_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
{
struct device *dev = &pdev->dev;
const struct devlink_ops *ops;
struct devlink *dl;
struct pdsc *pdsc;
bool is_pf;
int err;
is_pf = !pdev->is_virtfn;
ops = is_pf ? &pdsc_dl_ops : &pdsc_dl_vf_ops;
dl = devlink_alloc(ops, sizeof(struct pdsc), dev);
if (!dl)
return -ENOMEM;
pdsc = devlink_priv(dl);
pdsc->pdev = pdev;
pdsc->dev = &pdev->dev;
set_bit(PDSC_S_INITING_DRIVER, &pdsc->state);
pci_set_drvdata(pdev, pdsc);
pdsc_debugfs_add_dev(pdsc);
err = ida_alloc(&pdsc_ida, GFP_KERNEL);
if (err < 0) {
dev_err(pdsc->dev, "%s: id alloc failed: %pe\n",
__func__, ERR_PTR(err));
goto err_out_free_devlink;
}
pdsc->uid = err;
/* Query system for DMA addressing limitation for the device. */
err = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(PDS_CORE_ADDR_LEN));
if (err) {
dev_err(dev, "Unable to obtain 64-bit DMA for consistent allocations, aborting: %pe\n",
ERR_PTR(err));
goto err_out_free_ida;
}
err = pci_enable_device(pdev);
if (err) {
dev_err(dev, "Cannot enable PCI device: %pe\n", ERR_PTR(err));
goto err_out_free_ida;
}
pci_set_master(pdev);
if (is_pf)
err = pdsc_init_pf(pdsc);
else
err = pdsc_init_vf(pdsc);
if (err) {
dev_err(dev, "Cannot init device: %pe\n", ERR_PTR(err));
goto err_out_clear_master;
}
clear_bit(PDSC_S_INITING_DRIVER, &pdsc->state);
return 0;
err_out_clear_master:
pci_clear_master(pdev);
pci_disable_device(pdev);
err_out_free_ida:
ida_free(&pdsc_ida, pdsc->uid);
err_out_free_devlink:
pdsc_debugfs_del_dev(pdsc);
devlink_free(dl);
return err;
}
static void pdsc_remove(struct pci_dev *pdev)
{
struct pdsc *pdsc = pci_get_drvdata(pdev);
struct devlink *dl;
/* Unhook the registrations first to be sure there
* are no requests while we're stopping.
*/
dl = priv_to_devlink(pdsc);
devl_lock(dl);
devl_unregister(dl);
if (!pdev->is_virtfn) {
if (pdsc->fw_reporter) {
devl_health_reporter_destroy(pdsc->fw_reporter);
pdsc->fw_reporter = NULL;
}
devl_params_unregister(dl, pdsc_dl_params,
ARRAY_SIZE(pdsc_dl_params));
}
devl_unlock(dl);
if (pdev->is_virtfn) {
struct pdsc *pf;
pf = pdsc_get_pf_struct(pdsc->pdev);
if (!IS_ERR(pf)) {
pdsc_auxbus_dev_del(pdsc, pf);
pf->vfs[pdsc->vf_id].vf = NULL;
}
} else {
/* Remove the VFs and their aux_bus connections before other
* cleanup so that the clients can use the AdminQ to cleanly
* shut themselves down.
*/
pdsc_sriov_configure(pdev, 0);
del_timer_sync(&pdsc->wdtimer);
if (pdsc->wq)
destroy_workqueue(pdsc->wq);
mutex_lock(&pdsc->config_lock);
set_bit(PDSC_S_STOPPING_DRIVER, &pdsc->state);
pdsc_stop(pdsc);
pdsc_teardown(pdsc, PDSC_TEARDOWN_REMOVING);
mutex_unlock(&pdsc->config_lock);
mutex_destroy(&pdsc->config_lock);
mutex_destroy(&pdsc->devcmd_lock);
pci_free_irq_vectors(pdev);
pdsc_unmap_bars(pdsc);
pci_release_regions(pdev);
}
pci_clear_master(pdev);
pci_disable_device(pdev);
ida_free(&pdsc_ida, pdsc->uid);
pdsc_debugfs_del_dev(pdsc);
devlink_free(dl);
}
static struct pci_driver pdsc_driver = {
.name = PDS_CORE_DRV_NAME,
.id_table = pdsc_id_table,
.probe = pdsc_probe,
.remove = pdsc_remove,
.sriov_configure = pdsc_sriov_configure,
};
void *pdsc_get_pf_struct(struct pci_dev *vf_pdev)
{
return pci_iov_get_pf_drvdata(vf_pdev, &pdsc_driver);
}
EXPORT_SYMBOL_GPL(pdsc_get_pf_struct);
static int __init pdsc_init_module(void)
{
if (strcmp(KBUILD_MODNAME, PDS_CORE_DRV_NAME))
return -EINVAL;
pdsc_debugfs_create();
return pci_register_driver(&pdsc_driver);
}
static void __exit pdsc_cleanup_module(void)
{
pci_unregister_driver(&pdsc_driver);
pdsc_debugfs_destroy();
}
module_init(pdsc_init_module);
module_exit(pdsc_cleanup_module);
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2023 Advanced Micro Devices, Inc */
#ifndef _PDS_CORE_ADMINQ_H_
#define _PDS_CORE_ADMINQ_H_
#define PDSC_ADMINQ_MAX_POLL_INTERVAL 256
enum pds_core_adminq_flags {
PDS_AQ_FLAG_FASTPOLL = BIT(1), /* completion poll at 1ms */
};
/*
* enum pds_core_adminq_opcode - AdminQ command opcodes
* These commands are only processed on AdminQ, not available in devcmd
*/
enum pds_core_adminq_opcode {
PDS_AQ_CMD_NOP = 0,
/* Client control */
PDS_AQ_CMD_CLIENT_REG = 6,
PDS_AQ_CMD_CLIENT_UNREG = 7,
PDS_AQ_CMD_CLIENT_CMD = 8,
/* LIF commands */
PDS_AQ_CMD_LIF_IDENTIFY = 20,
PDS_AQ_CMD_LIF_INIT = 21,
PDS_AQ_CMD_LIF_RESET = 22,
PDS_AQ_CMD_LIF_GETATTR = 23,
PDS_AQ_CMD_LIF_SETATTR = 24,
PDS_AQ_CMD_LIF_SETPHC = 25,
PDS_AQ_CMD_RX_MODE_SET = 30,
PDS_AQ_CMD_RX_FILTER_ADD = 31,
PDS_AQ_CMD_RX_FILTER_DEL = 32,
/* Queue commands */
PDS_AQ_CMD_Q_IDENTIFY = 39,
PDS_AQ_CMD_Q_INIT = 40,
PDS_AQ_CMD_Q_CONTROL = 41,
/* SR/IOV commands */
PDS_AQ_CMD_VF_GETATTR = 60,
PDS_AQ_CMD_VF_SETATTR = 61,
};
/*
* enum pds_core_notifyq_opcode - NotifyQ event codes
*/
enum pds_core_notifyq_opcode {
PDS_EVENT_LINK_CHANGE = 1,
PDS_EVENT_RESET = 2,
PDS_EVENT_XCVR = 5,
PDS_EVENT_CLIENT = 6,
};
#define PDS_COMP_COLOR_MASK 0x80
/**
* struct pds_core_notifyq_event - Generic event reporting structure
* @eid: event number
* @ecode: event code
*
* This is the generic event report struct from which the other
* actual events will be formed.
*/
struct pds_core_notifyq_event {
__le64 eid;
__le16 ecode;
};
/**
* struct pds_core_link_change_event - Link change event notification
* @eid: event number
* @ecode: event code = PDS_EVENT_LINK_CHANGE
* @link_status: link up/down, with error bits
* @link_speed: speed of the network link
*
* Sent when the network link state changes between UP and DOWN
*/
struct pds_core_link_change_event {
__le64 eid;
__le16 ecode;
__le16 link_status;
__le32 link_speed; /* units of 1Mbps: e.g. 10000 = 10Gbps */
};
/**
* struct pds_core_reset_event - Reset event notification
* @eid: event number
* @ecode: event code = PDS_EVENT_RESET
* @reset_code: reset type
* @state: 0=pending, 1=complete, 2=error
*
* Sent when the NIC or some subsystem is going to be or
* has been reset.
*/
struct pds_core_reset_event {
__le64 eid;
__le16 ecode;
u8 reset_code;
u8 state;
};
/**
* struct pds_core_client_event - Client event notification
* @eid: event number
* @ecode: event code = PDS_EVENT_CLIENT
* @client_id: client to send the event to
* @client_event: wrapped event struct for the client
*
* Sent when an event needs to be passed on to a client
*/
struct pds_core_client_event {
__le64 eid;
__le16 ecode;
__le16 client_id;
u8 client_event[54];
};
/**
* struct pds_core_notifyq_cmd - Placeholder for building qcq
* @data: anonymous field for building the qcq
*/
struct pds_core_notifyq_cmd {
__le32 data; /* Not used but needed for qcq structure */
};
/*
* union pds_core_notifyq_comp - Overlay of notifyq event structures
*/
union pds_core_notifyq_comp {
struct {
__le64 eid;
__le16 ecode;
};
struct pds_core_notifyq_event event;
struct pds_core_link_change_event link_change;
struct pds_core_reset_event reset;
u8 data[64];
};
#define PDS_DEVNAME_LEN 32
/**
* struct pds_core_client_reg_cmd - Register a new client with DSC
* @opcode: opcode PDS_AQ_CMD_CLIENT_REG
* @rsvd: word boundary padding
* @devname: text name of client device
* @vif_type: what type of device (enum pds_core_vif_types)
*
* Tell the DSC of the new client, and receive a client_id from DSC.
*/
struct pds_core_client_reg_cmd {
u8 opcode;
u8 rsvd[3];
char devname[PDS_DEVNAME_LEN];
u8 vif_type;
};
/**
* struct pds_core_client_reg_comp - Client registration completion
* @status: Status of the command (enum pds_core_status_code)
* @rsvd: Word boundary padding
* @comp_index: Index in the descriptor ring for which this is the completion
* @client_id: New id assigned by DSC
* @rsvd1: Word boundary padding
* @color: Color bit
*/
struct pds_core_client_reg_comp {
u8 status;
u8 rsvd;
__le16 comp_index;
__le16 client_id;
u8 rsvd1[9];
u8 color;
};
/**
* struct pds_core_client_unreg_cmd - Unregister a client from DSC
* @opcode: opcode PDS_AQ_CMD_CLIENT_UNREG
* @rsvd: word boundary padding
* @client_id: id of client being removed
*
* Tell the DSC this client is going away and remove its context
* This uses the generic completion.
*/
struct pds_core_client_unreg_cmd {
u8 opcode;
u8 rsvd;
__le16 client_id;
};
/**
* struct pds_core_client_request_cmd - Pass along a wrapped client AdminQ cmd
* @opcode: opcode PDS_AQ_CMD_CLIENT_CMD
* @rsvd: word boundary padding
* @client_id: id of the client the command is sent to
* @client_cmd: the wrapped client command
*
* Proxy post an adminq command for the client.
* This uses the generic completion.
*/
struct pds_core_client_request_cmd {
u8 opcode;
u8 rsvd;
__le16 client_id;
u8 client_cmd[60];
};
#define PDS_CORE_MAX_FRAGS 16
#define PDS_CORE_QCQ_F_INITED BIT(0)
#define PDS_CORE_QCQ_F_SG BIT(1)
#define PDS_CORE_QCQ_F_INTR BIT(2)
#define PDS_CORE_QCQ_F_TX_STATS BIT(3)
#define PDS_CORE_QCQ_F_RX_STATS BIT(4)
#define PDS_CORE_QCQ_F_NOTIFYQ BIT(5)
#define PDS_CORE_QCQ_F_CMB_RINGS BIT(6)
#define PDS_CORE_QCQ_F_CORE BIT(7)
enum pds_core_lif_type {
PDS_CORE_LIF_TYPE_DEFAULT = 0,
};
/**
* union pds_core_lif_config - LIF configuration
* @state: LIF state (enum pds_core_lif_state)
* @rsvd: Word boundary padding
* @name: LIF name
* @rsvd2: Word boundary padding
* @features: LIF features active (enum pds_core_hw_features)
* @queue_count: Queue counts per queue-type
* @words: Full union buffer size
*/
union pds_core_lif_config {
struct {
u8 state;
u8 rsvd[3];
char name[PDS_CORE_IFNAMSIZ];
u8 rsvd2[12];
__le64 features;
__le32 queue_count[PDS_CORE_QTYPE_MAX];
} __packed;
__le32 words[64];
};
/**
* struct pds_core_lif_status - LIF status register
* @eid: most recent NotifyQ event id
* @rsvd: Padding to fill out the 64-byte struct
*/
struct pds_core_lif_status {
__le64 eid;
u8 rsvd[56];
};
/**
* struct pds_core_lif_info - LIF info structure
* @config: LIF configuration structure
* @status: LIF status structure
*/
struct pds_core_lif_info {
union pds_core_lif_config config;
struct pds_core_lif_status status;
};
/**
* struct pds_core_lif_identity - LIF identity information (type-specific)
* @features: LIF features (see enum pds_core_hw_features)
* @version: Identify structure version
* @hw_index: LIF hardware index
* @rsvd: Word boundary padding
* @max_nb_sessions: Maximum number of sessions supported
* @rsvd2: buffer padding
* @config: LIF config struct with features, q counts
*/
struct pds_core_lif_identity {
__le64 features;
u8 version;
u8 hw_index;
u8 rsvd[2];
__le32 max_nb_sessions;
u8 rsvd2[120];
union pds_core_lif_config config;
};
/**
* struct pds_core_lif_identify_cmd - Get LIF identity info command
* @opcode: Opcode PDS_AQ_CMD_LIF_IDENTIFY
* @type: LIF type (enum pds_core_lif_type)
* @client_id: Client identifier
* @ver: Version of identify returned by device
* @rsvd: Word boundary padding
* @ident_pa: DMA address to receive identity info
*
* Firmware will copy LIF identity data (struct pds_core_lif_identity)
* into the buffer address given.
*/
struct pds_core_lif_identify_cmd {
u8 opcode;
u8 type;
__le16 client_id;
u8 ver;
u8 rsvd[3];
__le64 ident_pa;
};
/**
* struct pds_core_lif_identify_comp - LIF identify command completion
* @status: Status of the command (enum pds_core_status_code)
* @ver: Version of identify returned by device
* @bytes: Bytes copied into the buffer
* @rsvd: Word boundary padding
* @color: Color bit
*/
struct pds_core_lif_identify_comp {
u8 status;
u8 ver;
__le16 bytes;
u8 rsvd[11];
u8 color;
};
/**
* struct pds_core_lif_init_cmd - LIF init command
* @opcode: Opcode PDS_AQ_CMD_LIF_INIT
* @type: LIF type (enum pds_core_lif_type)
* @client_id: Client identifier
* @rsvd: Word boundary padding
* @info_pa: Destination address for LIF info (struct pds_core_lif_info)
*/
struct pds_core_lif_init_cmd {
u8 opcode;
u8 type;
__le16 client_id;
__le32 rsvd;
__le64 info_pa;
};
/**
* struct pds_core_lif_init_comp - LIF init command completion
* @status: Status of the command (enum pds_core_status_code)
* @rsvd: Word boundary padding
* @hw_index: Hardware index of the initialized LIF
* @rsvd1: Word boundary padding
* @color: Color bit
*/
struct pds_core_lif_init_comp {
u8 status;
u8 rsvd;
__le16 hw_index;
u8 rsvd1[11];
u8 color;
};
/**
* struct pds_core_lif_reset_cmd - LIF reset command
* Will reset only the specified LIF.
* @opcode: Opcode PDS_AQ_CMD_LIF_RESET
* @rsvd: Word boundary padding
* @client_id: Client identifier
*/
struct pds_core_lif_reset_cmd {
u8 opcode;
u8 rsvd;
__le16 client_id;
};
/**
* enum pds_core_lif_attr - List of LIF attributes
* @PDS_CORE_LIF_ATTR_STATE: LIF state attribute
* @PDS_CORE_LIF_ATTR_NAME: LIF name attribute
* @PDS_CORE_LIF_ATTR_FEATURES: LIF features attribute
* @PDS_CORE_LIF_ATTR_STATS_CTRL: LIF statistics control attribute
*/
enum pds_core_lif_attr {
PDS_CORE_LIF_ATTR_STATE = 0,
PDS_CORE_LIF_ATTR_NAME = 1,
PDS_CORE_LIF_ATTR_FEATURES = 4,
PDS_CORE_LIF_ATTR_STATS_CTRL = 6,
};
/**
* struct pds_core_lif_setattr_cmd - Set LIF attributes on the NIC
* @opcode: Opcode PDS_AQ_CMD_LIF_SETATTR
* @attr: Attribute type (enum pds_core_lif_attr)
* @client_id: Client identifier
* @state: LIF state (enum pds_core_lif_state)
* @name: The name string, 0 terminated
* @features: Features (enum pds_core_hw_features)
* @stats_ctl: Stats control commands (enum pds_core_stats_ctl_cmd)
* @rsvd: Command Buffer padding
*/
struct pds_core_lif_setattr_cmd {
u8 opcode;
u8 attr;
__le16 client_id;
union {
u8 state;
char name[PDS_CORE_IFNAMSIZ];
__le64 features;
u8 stats_ctl;
u8 rsvd[60];
} __packed;
};
/**
* struct pds_core_lif_setattr_comp - LIF set attr command completion
* @status: Status of the command (enum pds_core_status_code)
* @rsvd: Word boundary padding
* @comp_index: Index in the descriptor ring for which this is the completion
* @features: Features (enum pds_core_hw_features)
* @rsvd2: Word boundary padding
* @color: Color bit
*/
struct pds_core_lif_setattr_comp {
u8 status;
u8 rsvd;
__le16 comp_index;
union {
__le64 features;
u8 rsvd2[11];
} __packed;
u8 color;
};
/**
* struct pds_core_lif_getattr_cmd - Get LIF attributes from the NIC
* @opcode: Opcode PDS_AQ_CMD_LIF_GETATTR
* @attr: Attribute type (enum pds_core_lif_attr)
* @client_id: Client identifier
*/
struct pds_core_lif_getattr_cmd {
u8 opcode;
u8 attr;
__le16 client_id;
};
/**
* struct pds_core_lif_getattr_comp - LIF get attr command completion
* @status: Status of the command (enum pds_core_status_code)
* @rsvd: Word boundary padding
* @comp_index: Index in the descriptor ring for which this is the completion
* @state: LIF state (enum pds_core_lif_state)
* @name: LIF name string, 0 terminated
* @features: Features (enum pds_core_hw_features)
* @rsvd2: Word boundary padding
* @color: Color bit
*/
struct pds_core_lif_getattr_comp {
u8 status;
u8 rsvd;
__le16 comp_index;
union {
u8 state;
__le64 features;
u8 rsvd2[11];
} __packed;
u8 color;
};
/**
* struct pds_core_q_identity - Queue identity information
* @version: Queue type version that can be used with FW
* @supported: Bitfield of queue versions, first bit = ver 0
* @rsvd: Word boundary padding
* @features: Queue features
* @desc_sz: Descriptor size
* @comp_sz: Completion descriptor size
* @rsvd2: Word boundary padding
*/
struct pds_core_q_identity {
u8 version;
u8 supported;
u8 rsvd[6];
#define PDS_CORE_QIDENT_F_CQ 0x01 /* queue has completion ring */
__le64 features;
__le16 desc_sz;
__le16 comp_sz;
u8 rsvd2[6];
};
/**
* struct pds_core_q_identify_cmd - queue identify command
* @opcode: Opcode PDS_AQ_CMD_Q_IDENTIFY
* @type: Logical queue type (enum pds_core_logical_qtype)
* @client_id: Client identifier
* @ver: Highest queue type version that the driver supports
* @rsvd: Word boundary padding
* @ident_pa: DMA address to receive the data (struct pds_core_q_identity)
*/
struct pds_core_q_identify_cmd {
u8 opcode;
u8 type;
__le16 client_id;
u8 ver;
u8 rsvd[3];
__le64 ident_pa;
};
/**
* struct pds_core_q_identify_comp - queue identify command completion
* @status: Status of the command (enum pds_core_status_code)
* @rsvd: Word boundary padding
* @comp_index: Index in the descriptor ring for which this is the completion
* @ver: Queue type version that can be used with FW
* @rsvd1: Word boundary padding
* @color: Color bit
*/
struct pds_core_q_identify_comp {
u8 status;
u8 rsvd;
__le16 comp_index;
u8 ver;
u8 rsvd1[10];
u8 color;
};
/**
* struct pds_core_q_init_cmd - Queue init command
* @opcode: Opcode PDS_AQ_CMD_Q_INIT
* @type: Logical queue type
* @client_id: Client identifier
* @ver: Queue type version
* @rsvd: Word boundary padding
* @index: (LIF, qtype) relative admin queue index
* @intr_index: Interrupt control register index, or Event queue index
* @pid: Process ID
* @flags:
* IRQ: Interrupt requested on completion
* ENA: Enable the queue. If ENA=0 the queue is initialized
* but remains disabled, to be later enabled with the
* Queue Enable command. If ENA=1, then queue is
* initialized and then enabled.
* @cos: Class of service for this queue
* @ring_size: Queue ring size, encoded as a log2(size), in
* number of descriptors. The actual ring size is
* (1 << ring_size). For example, to select a ring size
* of 64 descriptors write ring_size = 6. The minimum
* ring_size value is 2 for a ring of 4 descriptors.
* The maximum ring_size value is 12 for a ring of 4k
* descriptors. Values of ring_size <2 and >12 are
* reserved.
* @ring_base: Queue ring base address
* @cq_ring_base: Completion queue ring base address
*/
struct pds_core_q_init_cmd {
u8 opcode;
u8 type;
__le16 client_id;
u8 ver;
u8 rsvd[3];
__le32 index;
__le16 pid;
__le16 intr_index;
__le16 flags;
#define PDS_CORE_QINIT_F_IRQ 0x01 /* Request interrupt on completion */
#define PDS_CORE_QINIT_F_ENA 0x02 /* Enable the queue */
u8 cos;
#define PDS_CORE_QSIZE_MIN_LG2 2
#define PDS_CORE_QSIZE_MAX_LG2 12
u8 ring_size;
__le64 ring_base;
__le64 cq_ring_base;
} __packed;
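The log2 ring-size encoding described for @ring_size can be sketched with a small host-side helper (`pds_ring_size_encode` is a hypothetical name used here for illustration):

```c
#include <assert.h>
#include <stdint.h>

#define PDS_CORE_QSIZE_MIN_LG2 2
#define PDS_CORE_QSIZE_MAX_LG2 12

/* Encode a descriptor count as the log2 value expected in
 * pds_core_q_init_cmd.ring_size; returns -1 if the count is not a
 * power of two or is outside the 4..4096 descriptor range.
 */
static int pds_ring_size_encode(uint32_t ndescs)
{
	int lg2 = 0;

	if (ndescs == 0 || (ndescs & (ndescs - 1)))
		return -1;		/* must be a power of two */

	while ((1u << lg2) < ndescs)
		lg2++;

	if (lg2 < PDS_CORE_QSIZE_MIN_LG2 || lg2 > PDS_CORE_QSIZE_MAX_LG2)
		return -1;		/* 4..4096 descriptors only */

	return lg2;
}
```

For example, a 64-descriptor ring encodes as ring_size = 6.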
/**
* struct pds_core_q_init_comp - Queue init command completion
* @status: Status of the command (enum pds_core_status_code)
* @rsvd: Word boundary padding
* @comp_index: Index in the descriptor ring for which this is the completion
* @hw_index: Hardware Queue ID
* @hw_type: Hardware Queue type
* @rsvd2: Word boundary padding
* @color: Color
*/
struct pds_core_q_init_comp {
u8 status;
u8 rsvd;
__le16 comp_index;
__le32 hw_index;
u8 hw_type;
u8 rsvd2[6];
u8 color;
};
union pds_core_adminq_cmd {
u8 opcode;
u8 bytes[64];
struct pds_core_client_reg_cmd client_reg;
struct pds_core_client_unreg_cmd client_unreg;
struct pds_core_client_request_cmd client_request;
struct pds_core_lif_identify_cmd lif_ident;
struct pds_core_lif_init_cmd lif_init;
struct pds_core_lif_reset_cmd lif_reset;
struct pds_core_lif_setattr_cmd lif_setattr;
struct pds_core_lif_getattr_cmd lif_getattr;
struct pds_core_q_identify_cmd q_ident;
struct pds_core_q_init_cmd q_init;
};
union pds_core_adminq_comp {
struct {
u8 status;
u8 rsvd;
__le16 comp_index;
u8 rsvd2[11];
u8 color;
};
u32 words[4];
struct pds_core_client_reg_comp client_reg;
struct pds_core_lif_identify_comp lif_ident;
struct pds_core_lif_init_comp lif_init;
struct pds_core_lif_setattr_comp lif_setattr;
struct pds_core_lif_getattr_comp lif_getattr;
struct pds_core_q_identify_comp q_ident;
struct pds_core_q_init_comp q_init;
};
#ifndef __CHECKER__
static_assert(sizeof(union pds_core_adminq_cmd) == 64);
static_assert(sizeof(union pds_core_adminq_comp) == 16);
static_assert(sizeof(union pds_core_notifyq_comp) == 64);
#endif /* __CHECKER__ */
/* The color bit is a 'done' bit for the completion descriptors
* where the meaning alternates between '1' and '0' for alternating
* passes through the completion descriptor ring.
*/
static inline bool pdsc_color_match(u8 color, bool done_color)
{
return (!!(color & PDS_COMP_COLOR_MASK)) == done_color;
}
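A host-side sketch of how a completion-ring consumer uses pdsc_color_match(): entries are consumed while their color bit matches the expected "done" color, and the expectation flips each time the consumer index wraps. The `consume_completions` helper is hypothetical, and the color mask here assumes the driver's PDS_COMP_COLOR_MASK value of 0x80 (the high bit of the color byte):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PDS_COMP_COLOR_MASK 0x80	/* assumed high-bit color mask */

static inline bool pdsc_color_match(uint8_t color, bool done_color)
{
	return (!!(color & PDS_COMP_COLOR_MASK)) == done_color;
}

/* Walk a completion ring: consume entries whose color bit matches the
 * expected done_color, and flip the expectation when the consumer
 * index wraps past the end of the ring.
 */
static int consume_completions(const uint8_t *colors, int nentries,
			       int *ci, bool *done_color)
{
	int handled = 0;

	while (pdsc_color_match(colors[*ci], *done_color)) {
		handled++;
		if (++(*ci) == nentries) {
			*ci = 0;
			*done_color = !*done_color; /* meaning flips each pass */
		}
	}
	return handled;
}
```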
struct pdsc;
int pdsc_adminq_post(struct pdsc *pdsc,
union pds_core_adminq_cmd *cmd,
union pds_core_adminq_comp *comp,
bool fast_poll);
#endif /* _PDS_CORE_ADMINQ_H_ */
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2023 Advanced Micro Devices, Inc */
#ifndef _PDSC_AUXBUS_H_
#define _PDSC_AUXBUS_H_
#include <linux/auxiliary_bus.h>
struct pds_auxiliary_dev {
struct auxiliary_device aux_dev;
struct pci_dev *vf_pdev;
u16 client_id;
};
int pds_client_adminq_cmd(struct pds_auxiliary_dev *padev,
union pds_core_adminq_cmd *req,
size_t req_len,
union pds_core_adminq_comp *resp,
u64 flags);
#endif /* _PDSC_AUXBUS_H_ */
/* SPDX-License-Identifier: (GPL-2.0 OR Linux-OpenIB) OR BSD-2-Clause */
/* Copyright(c) 2023 Advanced Micro Devices, Inc. */
#ifndef _PDS_COMMON_H_
#define _PDS_COMMON_H_
#define PDS_CORE_DRV_NAME "pds_core"
/* the device's internal addressing uses up to 52 bits */
#define PDS_CORE_ADDR_LEN 52
#define PDS_CORE_ADDR_MASK (BIT_ULL(PDS_CORE_ADDR_LEN) - 1)
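The 52-bit internal addressing limit means addresses handed to the device must fit in the low 52 bits. A host-side sketch of the mask (the `pds_addr_trim` helper is hypothetical):

```c
#include <assert.h>
#include <stdint.h>

#define PDS_CORE_ADDR_LEN 52
#define PDS_CORE_ADDR_MASK ((1ULL << PDS_CORE_ADDR_LEN) - 1)

/* Keep only the low 52 bits of a DMA address, mirroring the device's
 * internal addressing limit.
 */
static inline uint64_t pds_addr_trim(uint64_t pa)
{
	return pa & PDS_CORE_ADDR_MASK;
}
```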
#define PDS_PAGE_SIZE 4096
enum pds_core_driver_type {
PDS_DRIVER_LINUX = 1,
PDS_DRIVER_WIN = 2,
PDS_DRIVER_DPDK = 3,
PDS_DRIVER_FREEBSD = 4,
PDS_DRIVER_IPXE = 5,
PDS_DRIVER_ESXI = 6,
};
enum pds_core_vif_types {
PDS_DEV_TYPE_CORE = 0,
PDS_DEV_TYPE_VDPA = 1,
PDS_DEV_TYPE_VFIO = 2,
PDS_DEV_TYPE_ETH = 3,
PDS_DEV_TYPE_RDMA = 4,
PDS_DEV_TYPE_LM = 5,
/* new ones added before this line */
PDS_DEV_TYPE_MAX = 16 /* don't change - used in struct size */
};
#define PDS_DEV_TYPE_CORE_STR "Core"
#define PDS_DEV_TYPE_VDPA_STR "vDPA"
#define PDS_DEV_TYPE_VFIO_STR "VFio"
#define PDS_DEV_TYPE_ETH_STR "Eth"
#define PDS_DEV_TYPE_RDMA_STR "RDMA"
#define PDS_DEV_TYPE_LM_STR "LM"
#define PDS_CORE_IFNAMSIZ 16
/**
* enum pds_core_logical_qtype - Logical Queue Types
* @PDS_CORE_QTYPE_ADMINQ: Administrative Queue
* @PDS_CORE_QTYPE_NOTIFYQ: Notify Queue
* @PDS_CORE_QTYPE_RXQ: Receive Queue
* @PDS_CORE_QTYPE_TXQ: Transmit Queue
* @PDS_CORE_QTYPE_EQ: Event Queue
* @PDS_CORE_QTYPE_MAX: Max queue type supported
*/
enum pds_core_logical_qtype {
PDS_CORE_QTYPE_ADMINQ = 0,
PDS_CORE_QTYPE_NOTIFYQ = 1,
PDS_CORE_QTYPE_RXQ = 2,
PDS_CORE_QTYPE_TXQ = 3,
PDS_CORE_QTYPE_EQ = 4,
PDS_CORE_QTYPE_MAX = 16 /* don't change - used in struct size */
};
int pdsc_register_notify(struct notifier_block *nb);
void pdsc_unregister_notify(struct notifier_block *nb);
void *pdsc_get_pf_struct(struct pci_dev *vf_pdev);
int pds_client_register(struct pci_dev *pf_pdev, char *devname);
int pds_client_unregister(struct pci_dev *pf_pdev, u16 client_id);
#endif /* _PDS_COMMON_H_ */
/* SPDX-License-Identifier: (GPL-2.0 OR Linux-OpenIB) OR BSD-2-Clause */
/* Copyright(c) 2023 Advanced Micro Devices, Inc. */
#ifndef _PDS_CORE_IF_H_
#define _PDS_CORE_IF_H_
#define PCI_VENDOR_ID_PENSANDO 0x1dd8
#define PCI_DEVICE_ID_PENSANDO_CORE_PF 0x100c
#define PCI_DEVICE_ID_VIRTIO_NET_TRANS 0x1000
#define PCI_DEVICE_ID_PENSANDO_IONIC_ETH_VF 0x1003
#define PCI_DEVICE_ID_PENSANDO_VDPA_VF 0x100b
#define PDS_CORE_BARS_MAX 4
#define PDS_CORE_PCI_BAR_DBELL 1
/* Bar0 */
#define PDS_CORE_DEV_INFO_SIGNATURE 0x44455649 /* 'DEVI' */
#define PDS_CORE_BAR0_SIZE 0x8000
#define PDS_CORE_BAR0_DEV_INFO_REGS_OFFSET 0x0000
#define PDS_CORE_BAR0_DEV_CMD_REGS_OFFSET 0x0800
#define PDS_CORE_BAR0_DEV_CMD_DATA_REGS_OFFSET 0x0c00
#define PDS_CORE_BAR0_INTR_STATUS_OFFSET 0x1000
#define PDS_CORE_BAR0_INTR_CTRL_OFFSET 0x2000
#define PDS_CORE_DEV_CMD_DONE 0x00000001
#define PDS_CORE_DEVCMD_TIMEOUT 5
#define PDS_CORE_CLIENT_ID 0
#define PDS_CORE_ASIC_TYPE_CAPRI 0
/*
* enum pds_core_cmd_opcode - Device commands
*/
enum pds_core_cmd_opcode {
/* Core init */
PDS_CORE_CMD_NOP = 0,
PDS_CORE_CMD_IDENTIFY = 1,
PDS_CORE_CMD_RESET = 2,
PDS_CORE_CMD_INIT = 3,
PDS_CORE_CMD_FW_DOWNLOAD = 4,
PDS_CORE_CMD_FW_CONTROL = 5,
/* SR/IOV commands */
PDS_CORE_CMD_VF_GETATTR = 60,
PDS_CORE_CMD_VF_SETATTR = 61,
PDS_CORE_CMD_VF_CTRL = 62,
/* Add commands before this line */
PDS_CORE_CMD_MAX,
PDS_CORE_CMD_COUNT
};
/*
* enum pds_core_status_code - Device command return codes
*/
enum pds_core_status_code {
PDS_RC_SUCCESS = 0, /* Success */
PDS_RC_EVERSION = 1, /* Incorrect version for request */
PDS_RC_EOPCODE = 2, /* Invalid cmd opcode */
PDS_RC_EIO = 3, /* I/O error */
PDS_RC_EPERM = 4, /* Permission denied */
PDS_RC_EQID = 5, /* Bad qid */
PDS_RC_EQTYPE = 6, /* Bad qtype */
PDS_RC_ENOENT = 7, /* No such element */
PDS_RC_EINTR = 8, /* operation interrupted */
PDS_RC_EAGAIN = 9, /* Try again */
PDS_RC_ENOMEM = 10, /* Out of memory */
PDS_RC_EFAULT = 11, /* Bad address */
PDS_RC_EBUSY = 12, /* Device or resource busy */
PDS_RC_EEXIST = 13, /* object already exists */
PDS_RC_EINVAL = 14, /* Invalid argument */
PDS_RC_ENOSPC = 15, /* No space left or alloc failure */
PDS_RC_ERANGE = 16, /* Parameter out of range */
PDS_RC_BAD_ADDR = 17, /* Descriptor contains a bad ptr */
PDS_RC_DEV_CMD = 18, /* Device cmd attempted on AdminQ */
PDS_RC_ENOSUPP = 19, /* Operation not supported */
PDS_RC_ERROR = 29, /* Generic error */
PDS_RC_ERDMA = 30, /* Generic RDMA error */
PDS_RC_EVFID = 31, /* VF ID does not exist */
PDS_RC_BAD_FW = 32, /* FW file is invalid or corrupted */
PDS_RC_ECLIENT = 33, /* No such client id */
};
/**
* struct pds_core_drv_identity - Driver identity information
* @drv_type: Driver type (enum pds_core_driver_type)
* @os_dist: OS distribution, numeric format
* @os_dist_str: OS distribution, string format
* @kernel_ver: Kernel version, numeric format
* @kernel_ver_str: Kernel version, string format
* @driver_ver_str: Driver version, string format
*/
struct pds_core_drv_identity {
__le32 drv_type;
__le32 os_dist;
char os_dist_str[128];
__le32 kernel_ver;
char kernel_ver_str[32];
char driver_ver_str[32];
};
#define PDS_DEV_TYPE_MAX 16
/**
* struct pds_core_dev_identity - Device identity information
* @version: Version of device identify
* @type: Identify type (0 for now)
* @state: Device state
* @rsvd: Word boundary padding
* @nlifs: Number of LIFs provisioned
* @nintrs: Number of interrupts provisioned
* @ndbpgs_per_lif: Number of doorbell pages per LIF
* @intr_coal_mult: Interrupt coalescing multiplication factor
* Scale user-supplied interrupt coalescing
* value in usecs to device units using:
* device units = usecs * mult / div
* @intr_coal_div: Interrupt coalescing division factor
* Scale user-supplied interrupt coalescing
* value in usecs to device units using:
* device units = usecs * mult / div
* @vif_types: How many of each VIF device type is supported
*/
struct pds_core_dev_identity {
u8 version;
u8 type;
u8 state;
u8 rsvd;
__le32 nlifs;
__le32 nintrs;
__le32 ndbpgs_per_lif;
__le32 intr_coal_mult;
__le32 intr_coal_div;
__le16 vif_types[PDS_DEV_TYPE_MAX];
};
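The @intr_coal_mult / @intr_coal_div pair scales a user-supplied coalescing interval into device units, per the formula in the kernel-doc above. A host-side sketch (the helper name is hypothetical; a zero divisor is treated as "no scaling info from the device"):

```c
#include <assert.h>
#include <stdint.h>

/* Convert a coalescing interval in usecs to device units:
 *     device units = usecs * mult / div
 * Widen to 64 bits for the multiply to avoid overflow.
 */
static uint32_t pds_coal_usecs_to_units(uint32_t usecs,
					uint32_t mult, uint32_t div)
{
	if (!div)
		return 0;
	return (uint32_t)(((uint64_t)usecs * mult) / div);
}
```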
#define PDS_CORE_IDENTITY_VERSION_1 1
/**
* struct pds_core_dev_identify_cmd - Driver/device identify command
* @opcode: Opcode PDS_CORE_CMD_IDENTIFY
* @ver: Highest version of identify supported by driver
*
* Expects to find driver identification info (struct pds_core_drv_identity)
* in cmd_regs->data. Driver should keep the devcmd interface locked
* while preparing the driver info.
*/
struct pds_core_dev_identify_cmd {
u8 opcode;
u8 ver;
};
/**
* struct pds_core_dev_identify_comp - Device identify command completion
* @status: Status of the command (enum pds_core_status_code)
* @ver: Version of identify returned by device
*
* Device identification info (struct pds_core_dev_identity) can be found
* in cmd_regs->data. Driver should keep the devcmd interface locked
* while reading the results.
*/
struct pds_core_dev_identify_comp {
u8 status;
u8 ver;
};
/**
* struct pds_core_dev_reset_cmd - Device reset command
* @opcode: Opcode PDS_CORE_CMD_RESET
*
* Resets and clears all LIFs, VDevs, and VIFs on the device.
*/
struct pds_core_dev_reset_cmd {
u8 opcode;
};
/**
* struct pds_core_dev_reset_comp - Reset command completion
* @status: Status of the command (enum pds_core_status_code)
*/
struct pds_core_dev_reset_comp {
u8 status;
};
/*
* struct pds_core_dev_init_data - Pointers and info needed for the Core
* initialization PDS_CORE_CMD_INIT command. The in and out structs are
* overlays on the pds_core_dev_cmd_regs.data space for passing data down
* to the firmware on init, and then returning initialization results.
*/
struct pds_core_dev_init_data_in {
__le64 adminq_q_base;
__le64 adminq_cq_base;
__le64 notifyq_cq_base;
__le32 flags;
__le16 intr_index;
u8 adminq_ring_size;
u8 notifyq_ring_size;
};
struct pds_core_dev_init_data_out {
__le32 core_hw_index;
__le32 adminq_hw_index;
__le32 notifyq_hw_index;
u8 adminq_hw_type;
u8 notifyq_hw_type;
};
/**
* struct pds_core_dev_init_cmd - Core device initialize
* @opcode: opcode PDS_CORE_CMD_INIT
*
* Initializes the core device and sets up the AdminQ and NotifyQ.
* Expects to find initialization data (struct pds_core_dev_init_data_in)
* in cmd_regs->data. Driver should keep the devcmd interface locked
* while preparing the driver info.
*/
struct pds_core_dev_init_cmd {
u8 opcode;
};
/**
* struct pds_core_dev_init_comp - Core init completion
* @status: Status of the command (enum pds_core_status_code)
*
* Initialization result data (struct pds_core_dev_init_data_out)
* is found in cmd_regs->data.
*/
struct pds_core_dev_init_comp {
u8 status;
};
/**
* struct pds_core_fw_download_cmd - Firmware download command
* @opcode: opcode
* @rsvd: Word boundary padding
* @offset: offset of the firmware buffer within the full image
* @addr: DMA address of the firmware buffer
* @length: number of valid bytes in the firmware buffer
*/
struct pds_core_fw_download_cmd {
u8 opcode;
u8 rsvd[3];
__le32 offset;
__le64 addr;
__le32 length;
};
/**
* struct pds_core_fw_download_comp - Firmware download completion
* @status: Status of the command (enum pds_core_status_code)
*/
struct pds_core_fw_download_comp {
u8 status;
};
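Because a firmware image can be larger than the transfer buffer, it is pushed down in pieces using the @offset and @length fields of pds_core_fw_download_cmd. A host-side sketch of just the offset bookkeeping (the `pds_fw_push` and `send_chunk` names are hypothetical; `send_chunk` stands in for building and posting the actual devcmd):

```c
#include <assert.h>
#include <stdint.h>

/* Push an image of image_len bytes in buf_len-sized chunks; each call
 * to send_chunk corresponds to one PDS_CORE_CMD_FW_DOWNLOAD devcmd.
 */
static int pds_fw_push(uint32_t image_len, uint32_t buf_len,
		       int (*send_chunk)(uint32_t offset, uint32_t length))
{
	uint32_t offset;

	for (offset = 0; offset < image_len; offset += buf_len) {
		uint32_t length = image_len - offset;

		if (length > buf_len)
			length = buf_len;
		if (send_chunk(offset, length))
			return -1;
	}
	return 0;
}

/* Example callback that just records the chunks it was given. */
static uint32_t last_off, last_len, nchunks;
static int record_chunk(uint32_t offset, uint32_t length)
{
	last_off = offset;
	last_len = length;
	nchunks++;
	return 0;
}
```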
/**
* enum pds_core_fw_control_oper - FW control operations
* @PDS_CORE_FW_INSTALL_ASYNC: Install firmware asynchronously
* @PDS_CORE_FW_INSTALL_STATUS: Firmware installation status
* @PDS_CORE_FW_ACTIVATE_ASYNC: Activate firmware asynchronously
* @PDS_CORE_FW_ACTIVATE_STATUS: Firmware activate status
* @PDS_CORE_FW_UPDATE_CLEANUP: Cleanup any firmware update leftovers
* @PDS_CORE_FW_GET_BOOT: Return current active firmware slot
* @PDS_CORE_FW_SET_BOOT: Set active firmware slot for next boot
* @PDS_CORE_FW_GET_LIST: Return list of installed firmware images
*/
enum pds_core_fw_control_oper {
PDS_CORE_FW_INSTALL_ASYNC = 0,
PDS_CORE_FW_INSTALL_STATUS = 1,
PDS_CORE_FW_ACTIVATE_ASYNC = 2,
PDS_CORE_FW_ACTIVATE_STATUS = 3,
PDS_CORE_FW_UPDATE_CLEANUP = 4,
PDS_CORE_FW_GET_BOOT = 5,
PDS_CORE_FW_SET_BOOT = 6,
PDS_CORE_FW_GET_LIST = 7,
};
enum pds_core_fw_slot {
PDS_CORE_FW_SLOT_INVALID = 0,
PDS_CORE_FW_SLOT_A = 1,
PDS_CORE_FW_SLOT_B = 2,
PDS_CORE_FW_SLOT_GOLD = 3,
};
/**
* struct pds_core_fw_control_cmd - Firmware control command
* @opcode: opcode
* @rsvd: Word boundary padding
* @oper: firmware control operation (enum pds_core_fw_control_oper)
* @slot: slot to operate on (enum pds_core_fw_slot)
*/
struct pds_core_fw_control_cmd {
u8 opcode;
u8 rsvd[3];
u8 oper;
u8 slot;
};
/**
* struct pds_core_fw_control_comp - Firmware control completion
* @status: Status of the command (enum pds_core_status_code)
* @rsvd: Word alignment space
* @slot: Slot number (enum pds_core_fw_slot)
* @rsvd1: Struct padding
* @color: Color bit
*/
struct pds_core_fw_control_comp {
u8 status;
u8 rsvd[3];
u8 slot;
u8 rsvd1[10];
u8 color;
};
struct pds_core_fw_name_info {
#define PDS_CORE_FWSLOT_BUFLEN 8
#define PDS_CORE_FWVERS_BUFLEN 32
char slotname[PDS_CORE_FWSLOT_BUFLEN];
char fw_version[PDS_CORE_FWVERS_BUFLEN];
};
struct pds_core_fw_list_info {
#define PDS_CORE_FWVERS_LIST_LEN 16
u8 num_fw_slots;
struct pds_core_fw_name_info fw_names[PDS_CORE_FWVERS_LIST_LEN];
} __packed;
enum pds_core_vf_attr {
PDS_CORE_VF_ATTR_SPOOFCHK = 1,
PDS_CORE_VF_ATTR_TRUST = 2,
PDS_CORE_VF_ATTR_MAC = 3,
PDS_CORE_VF_ATTR_LINKSTATE = 4,
PDS_CORE_VF_ATTR_VLAN = 5,
PDS_CORE_VF_ATTR_RATE = 6,
PDS_CORE_VF_ATTR_STATSADDR = 7,
};
/**
* enum pds_core_vf_link_status - Virtual Function link status
* @PDS_CORE_VF_LINK_STATUS_AUTO: Use link state of the uplink
* @PDS_CORE_VF_LINK_STATUS_UP: Link always up
* @PDS_CORE_VF_LINK_STATUS_DOWN: Link always down
*/
enum pds_core_vf_link_status {
PDS_CORE_VF_LINK_STATUS_AUTO = 0,
PDS_CORE_VF_LINK_STATUS_UP = 1,
PDS_CORE_VF_LINK_STATUS_DOWN = 2,
};
/**
* struct pds_core_vf_setattr_cmd - Set VF attributes on the NIC
* @opcode: Opcode
* @attr: Attribute type (enum pds_core_vf_attr)
* @vf_index: VF index
* @macaddr: mac address
* @vlanid: vlan ID
* @maxrate: max Tx rate in Mbps
* @spoofchk: enable address spoof checking
* @trust: enable VF trust
* @linkstate: set link up or down
* @stats: stats addr struct
* @stats.pa: set DMA address for VF stats
* @stats.len: length of VF stats space
* @pad: force union to specific size
*/
struct pds_core_vf_setattr_cmd {
u8 opcode;
u8 attr;
__le16 vf_index;
union {
u8 macaddr[6];
__le16 vlanid;
__le32 maxrate;
u8 spoofchk;
u8 trust;
u8 linkstate;
struct {
__le64 pa;
__le32 len;
} stats;
u8 pad[60];
} __packed;
};
struct pds_core_vf_setattr_comp {
u8 status;
u8 attr;
__le16 vf_index;
__le16 comp_index;
u8 rsvd[9];
u8 color;
};
/**
* struct pds_core_vf_getattr_cmd - Get VF attributes from the NIC
* @opcode: Opcode
* @attr: Attribute type (enum pds_core_vf_attr)
* @vf_index: VF index
*/
struct pds_core_vf_getattr_cmd {
u8 opcode;
u8 attr;
__le16 vf_index;
};
struct pds_core_vf_getattr_comp {
u8 status;
u8 attr;
__le16 vf_index;
union {
u8 macaddr[6];
__le16 vlanid;
__le32 maxrate;
u8 spoofchk;
u8 trust;
u8 linkstate;
__le64 stats_pa;
u8 pad[11];
} __packed;
u8 color;
};
enum pds_core_vf_ctrl_opcode {
PDS_CORE_VF_CTRL_START_ALL = 0,
PDS_CORE_VF_CTRL_START = 1,
};
/**
* struct pds_core_vf_ctrl_cmd - VF control command
* @opcode: Opcode for the command
* @ctrl_opcode: VF control operation type
* @vf_index: VF index; unused when @ctrl_opcode is PDS_CORE_VF_CTRL_START_ALL
*/
struct pds_core_vf_ctrl_cmd {
u8 opcode;
u8 ctrl_opcode;
__le16 vf_index;
};
/**
* struct pds_core_vf_ctrl_comp - VF_CTRL command completion.
* @status: Status of the command (enum pds_core_status_code)
*/
struct pds_core_vf_ctrl_comp {
u8 status;
};
/*
* union pds_core_dev_cmd - Overlay of core device command structures
*/
union pds_core_dev_cmd {
u8 opcode;
u32 words[16];
struct pds_core_dev_identify_cmd identify;
struct pds_core_dev_init_cmd init;
struct pds_core_dev_reset_cmd reset;
struct pds_core_fw_download_cmd fw_download;
struct pds_core_fw_control_cmd fw_control;
struct pds_core_vf_setattr_cmd vf_setattr;
struct pds_core_vf_getattr_cmd vf_getattr;
struct pds_core_vf_ctrl_cmd vf_ctrl;
};
/*
* union pds_core_dev_comp - Overlay of core device completion structures
*/
union pds_core_dev_comp {
u8 status;
u8 bytes[16];
struct pds_core_dev_identify_comp identify;
struct pds_core_dev_reset_comp reset;
struct pds_core_dev_init_comp init;
struct pds_core_fw_download_comp fw_download;
struct pds_core_fw_control_comp fw_control;
struct pds_core_vf_setattr_comp vf_setattr;
struct pds_core_vf_getattr_comp vf_getattr;
struct pds_core_vf_ctrl_comp vf_ctrl;
};
/**
* struct pds_core_dev_hwstamp_regs - Hardware current timestamp registers
* @tick_low: Low 32 bits of hardware timestamp
* @tick_high: High 32 bits of hardware timestamp
*/
struct pds_core_dev_hwstamp_regs {
u32 tick_low;
u32 tick_high;
};
/**
* struct pds_core_dev_info_regs - Device info register format (read-only)
* @signature: Signature value of 0x44455649 ('DEVI')
* @version: Current version of info
* @asic_type: Asic type
* @asic_rev: Asic revision
* @fw_status: Firmware status
* bit 0 - 1 = fw running
* bit 4-7 - 4 bit generation number, changes on fw restart
* @fw_heartbeat: Firmware heartbeat counter
* @serial_num: Serial number
* @fw_version: Firmware version
* @oprom_regs: oprom_regs to store oprom debug enable/disable and bmp
* @rsvd_pad1024: Struct padding
* @hwstamp: Hardware current timestamp registers
* @rsvd_pad2048: Struct padding
*/
struct pds_core_dev_info_regs {
#define PDS_CORE_DEVINFO_FWVERS_BUFLEN 32
#define PDS_CORE_DEVINFO_SERIAL_BUFLEN 32
u32 signature;
u8 version;
u8 asic_type;
u8 asic_rev;
#define PDS_CORE_FW_STS_F_STOPPED 0x00
#define PDS_CORE_FW_STS_F_RUNNING 0x01
#define PDS_CORE_FW_STS_F_GENERATION 0xF0
u8 fw_status;
__le32 fw_heartbeat;
char fw_version[PDS_CORE_DEVINFO_FWVERS_BUFLEN];
char serial_num[PDS_CORE_DEVINFO_SERIAL_BUFLEN];
u8 oprom_regs[32]; /* reserved */
u8 rsvd_pad1024[916];
struct pds_core_dev_hwstamp_regs hwstamp; /* on 1k boundary */
u8 rsvd_pad2048[1016];
} __packed;
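The @fw_status encoding above (bit 0 = running, bits 4-7 = generation that changes on FW restart) can be decoded with two small helpers. A host-side sketch (helper names are hypothetical):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PDS_CORE_FW_STS_F_RUNNING    0x01
#define PDS_CORE_FW_STS_F_GENERATION 0xF0

/* Bit 0 of fw_status is the running flag. */
static inline bool pds_fw_running(uint8_t fw_status)
{
	return fw_status & PDS_CORE_FW_STS_F_RUNNING;
}

/* Bits 4-7 hold a generation number that changes when FW restarts;
 * a driver can compare generations across polls to detect a restart.
 */
static inline uint8_t pds_fw_generation(uint8_t fw_status)
{
	return (fw_status & PDS_CORE_FW_STS_F_GENERATION) >> 4;
}
```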
/**
* struct pds_core_dev_cmd_regs - Device command register format (read-write)
* @doorbell: Device Cmd Doorbell, write-only
* Write a 1 to signal device to process cmd
* @done: Command completed indicator, poll for completion
* bit 0 == 1 when command is complete
* @cmd: Opcode-specific command bytes
* @comp: Opcode-specific response bytes
* @rsvd: Struct padding
* @data: Opcode-specific side-data
*/
struct pds_core_dev_cmd_regs {
u32 doorbell;
u32 done;
union pds_core_dev_cmd cmd;
union pds_core_dev_comp comp;
u8 rsvd[48];
u32 data[478];
} __packed;
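The devcmd handshake implied by @doorbell and @done is: write the command, write 1 to the doorbell, then poll bit 0 of done. A minimal host-memory sketch of that logic (plain loads/stores stand in for the driver's ioread32/iowrite32 on mapped BAR0 registers; the struct, function, and poll budget here are hypothetical, and the real driver clears done before ringing the doorbell and bounds the wait with PDS_CORE_DEVCMD_TIMEOUT):

```c
#include <assert.h>
#include <stdint.h>

#define PDS_CORE_DEV_CMD_DONE 0x00000001

/* Host-memory stand-in for the doorbell/done pair at the start of
 * pds_core_dev_cmd_regs.
 */
struct fake_cmd_regs {
	uint32_t doorbell;
	uint32_t done;
};

static int pds_devcmd_post(struct fake_cmd_regs *regs, int max_polls)
{
	int i;

	regs->doorbell = 1;	/* a 1 tells the device to process cmd */

	for (i = 0; i < max_polls; i++) {
		if (regs->done & PDS_CORE_DEV_CMD_DONE)
			return 0;	/* command completed */
	}
	return -1;		/* timed out */
}
```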
/**
* struct pds_core_dev_regs - Device register format for bar 0 page 0
* @info: Device info registers
* @devcmd: Device command registers
*/
struct pds_core_dev_regs {
struct pds_core_dev_info_regs info;
struct pds_core_dev_cmd_regs devcmd;
} __packed;
#ifndef __CHECKER__
static_assert(sizeof(struct pds_core_drv_identity) <= 1912);
static_assert(sizeof(struct pds_core_dev_identity) <= 1912);
static_assert(sizeof(union pds_core_dev_cmd) == 64);
static_assert(sizeof(union pds_core_dev_comp) == 16);
static_assert(sizeof(struct pds_core_dev_info_regs) == 2048);
static_assert(sizeof(struct pds_core_dev_cmd_regs) == 2048);
static_assert(sizeof(struct pds_core_dev_regs) == 4096);
#endif /* __CHECKER__ */
#endif /* _PDS_CORE_IF_H_ */
/* SPDX-License-Identifier: (GPL-2.0 OR Linux-OpenIB) OR BSD-2-Clause */
/* Copyright(c) 2023 Advanced Micro Devices, Inc. */
#ifndef _PDS_INTR_H_
#define _PDS_INTR_H_
/*
* Interrupt control register
* @coal_init: Coalescing timer initial value, in
* device units. Use @identity->intr_coal_mult
* and @identity->intr_coal_div to convert from
* usecs to device units:
*
* coal_init = coal_usecs * coal_mult / coal_div
*
* When an interrupt is sent the interrupt
* coalescing timer current value
* (@coalescing_curr) is initialized with this
* value and begins counting down. No more
* interrupts are sent until the coalescing
timer reaches 0. When @coal_init=0
* interrupt coalescing is effectively disabled
* and every interrupt assert results in an
* interrupt. Reset value: 0
* @mask: Interrupt mask. When @mask=1 the interrupt
* resource will not send an interrupt. When
* @mask=0 the interrupt resource will send an
* interrupt if an interrupt event is pending
* or on the next interrupt assertion event.
* Reset value: 1
* @credits: Interrupt credits. This register indicates
* how many interrupt events the hardware has
* sent. When written by software this
register atomically decrements @credits
by the value written. When @credits
* becomes 0 then the "pending interrupt" bit
* in the Interrupt Status register is cleared
* by the hardware and any pending but unsent
* interrupts are cleared.
* !!!IMPORTANT!!! This is a signed register.
* @flags: Interrupt control flags
* @unmask -- When this bit is written with a 1
* the interrupt resource will set mask=0.
* @coal_timer_reset -- When this
* bit is written with a 1 the
* @coalescing_curr will be reloaded with
@coal_init to reset the coalescing
* timer.
* @mask_on_assert: Automatically mask on assertion. When
* @mask_on_assert=1 the interrupt resource
* will set @mask=1 whenever an interrupt is
* sent. When using interrupts in Legacy
* Interrupt mode the driver must select
* @mask_on_assert=0 for proper interrupt
* operation.
* @coalescing_curr: Coalescing timer current value, in
* microseconds. When this value reaches 0
* the interrupt resource is again eligible to
* send an interrupt. If an interrupt event
* is already pending when @coalescing_curr
* reaches 0 the pending interrupt will be
* sent, otherwise an interrupt will be sent
* on the next interrupt assertion event.
*/
struct pds_core_intr {
u32 coal_init;
u32 mask;
u16 credits;
u16 flags;
#define PDS_CORE_INTR_F_UNMASK 0x0001
#define PDS_CORE_INTR_F_TIMER_RESET 0x0002
u32 mask_on_assert;
u32 coalescing_curr;
u32 rsvd6[3];
};
#ifndef __CHECKER__
static_assert(sizeof(struct pds_core_intr) == 32);
#endif /* __CHECKER__ */
#define PDS_CORE_INTR_CTRL_REGS_MAX 2048
#define PDS_CORE_INTR_CTRL_COAL_MAX 0x3F
#define PDS_CORE_INTR_INDEX_NOT_ASSIGNED -1
struct pds_core_intr_status {
u32 status[2];
};
/**
* enum pds_core_intr_mask_vals - valid values for mask and mask_assert.
* @PDS_CORE_INTR_MASK_CLEAR: unmask interrupt.
* @PDS_CORE_INTR_MASK_SET: mask interrupt.
*/
enum pds_core_intr_mask_vals {
PDS_CORE_INTR_MASK_CLEAR = 0,
PDS_CORE_INTR_MASK_SET = 1,
};
/**
* enum pds_core_intr_credits_bits - Bitwise composition of credits values.
* @PDS_CORE_INTR_CRED_COUNT: bit mask of credit count, no shift needed.
* @PDS_CORE_INTR_CRED_COUNT_SIGNED: bit mask of credit count, including sign bit.
* @PDS_CORE_INTR_CRED_UNMASK: unmask the interrupt.
* @PDS_CORE_INTR_CRED_RESET_COALESCE: reset the coalesce timer.
* @PDS_CORE_INTR_CRED_REARM: unmask the interrupt and reset the coalescing timer.
*/
enum pds_core_intr_credits_bits {
PDS_CORE_INTR_CRED_COUNT = 0x7fffu,
PDS_CORE_INTR_CRED_COUNT_SIGNED = 0xffffu,
PDS_CORE_INTR_CRED_UNMASK = 0x10000u,
PDS_CORE_INTR_CRED_RESET_COALESCE = 0x20000u,
PDS_CORE_INTR_CRED_REARM = (PDS_CORE_INTR_CRED_UNMASK |
PDS_CORE_INTR_CRED_RESET_COALESCE),
};
static inline void
pds_core_intr_coal_init(struct pds_core_intr __iomem *intr_ctrl, u32 coal)
{
iowrite32(coal, &intr_ctrl->coal_init);
}
static inline void
pds_core_intr_mask(struct pds_core_intr __iomem *intr_ctrl, u32 mask)
{
iowrite32(mask, &intr_ctrl->mask);
}
static inline void
pds_core_intr_credits(struct pds_core_intr __iomem *intr_ctrl,
u32 cred, u32 flags)
{
if (WARN_ON_ONCE(cred > PDS_CORE_INTR_CRED_COUNT)) {
cred = ioread32(&intr_ctrl->credits);
cred &= PDS_CORE_INTR_CRED_COUNT_SIGNED;
}
iowrite32(cred | flags, &intr_ctrl->credits);
}
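The value pds_core_intr_credits() writes combines the handled-completion count in the low bits with the UNMASK/RESET_COALESCE flag bits. A host-side sketch of composing that value (the helper name is hypothetical, and where the real function re-reads the register on an out-of-range count, this sketch simply clamps):

```c
#include <assert.h>
#include <stdint.h>

#define PDS_CORE_INTR_CRED_COUNT          0x7fffu
#define PDS_CORE_INTR_CRED_UNMASK         0x10000u
#define PDS_CORE_INTR_CRED_RESET_COALESCE 0x20000u
#define PDS_CORE_INTR_CRED_REARM (PDS_CORE_INTR_CRED_UNMASK | \
				  PDS_CORE_INTR_CRED_RESET_COALESCE)

/* Compose the credits register value: completions handled in the low
 * bits, control flags above them; clamp the count so an out-of-range
 * value never reaches the flag bits.
 */
static uint32_t pds_intr_credits_val(uint32_t handled, uint32_t flags)
{
	if (handled > PDS_CORE_INTR_CRED_COUNT)
		handled = PDS_CORE_INTR_CRED_COUNT;
	return handled | flags;
}
```

For example, rearming after handling 4 completions writes 4 | PDS_CORE_INTR_CRED_REARM.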
static inline void
pds_core_intr_clean_flags(struct pds_core_intr __iomem *intr_ctrl, u32 flags)
{
u32 cred;
cred = ioread32(&intr_ctrl->credits);
cred &= PDS_CORE_INTR_CRED_COUNT_SIGNED;
cred |= flags;
iowrite32(cred, &intr_ctrl->credits);
}
static inline void
pds_core_intr_clean(struct pds_core_intr __iomem *intr_ctrl)
{
pds_core_intr_clean_flags(intr_ctrl, PDS_CORE_INTR_CRED_RESET_COALESCE);
}
static inline void
pds_core_intr_mask_assert(struct pds_core_intr __iomem *intr_ctrl, u32 mask)
{
iowrite32(mask, &intr_ctrl->mask_on_assert);
}
#endif /* _PDS_INTR_H_ */