Commit d8bb3824 authored by David S. Miller

Merge branch 'pds_core'

Shannon Nelson says:

====================
pds_core driver

Summary:
--------
This patchset implements a new driver for use with the AMD/Pensando
Distributed Services Card (DSC), intended to provide core configuration
services through the auxiliary_bus and through a couple of EXPORTed
functions, initially for use in the VFIO and vDPA feature-specific drivers.

To keep this patchset to a manageable size, the pds_vdpa and pds_vfio
drivers have been split out into their own patchsets to be reviewed
separately.

Detail:
-------
AMD/Pensando is making available a new set of devices for supporting vDPA,
VFIO, and potentially other features in the Distributed Services Card
(DSC).  These features are implemented through a PF that serves as a Core
device for controlling and configuring its VF devices.  These VF devices
have separate drivers that use the auxiliary_bus to work through the Core
device as the control path.

Currently, the DSC supports standard ethernet operations using the
ionic driver.  This is not replaced by the Core-based devices - these
new devices are in addition to the existing Ethernet device.  Typical DSC
configurations will include both PDS devices and Ionic Eth devices.
However, there is a potential future path for ethernet services to come
through this device as well.

The Core device is a new PCI PF/VF device managed by a new driver
'pds_core'.  The PF device has access to an admin queue for configuring
the services used by the VFs, and sets up auxiliary_bus devices for each
vDPA VF for communicating with the drivers for the vDPA devices.  The VFs
may be for VFIO or vDPA, or other services in the future; these VF types
are selected as part of the DSC internal FW configuration, which is outside
the scope of this patchset.

When the vDPA support set is enabled in the core PF through its devlink
param, auxiliary_bus devices are created for each VF that supports the
feature.  The vDPA driver then connects to and uses this auxiliary_device
to do control path configuration through the PF device.  This can then be
used with the vdpa kernel module to provide devices for the virtio_vdpa
kernel module (host interfaces) or the vhost_vdpa kernel module (interfaces
exported into your favorite VM).

A cheap ASCII diagram of a vDPA instance looks something like this:

                                ,----------.
                                |   vdpa   |
                                '----------'
                                  |     ||
                                 ctl   data
                                  |     ||
                          .----------.  ||
                          | pds_vdpa |  ||
                          '----------'  ||
                               |        ||
                       pds_core.vDPA.1  ||
                               |        ||
                    .---------------.   ||
                    |   pds_core    |   ||
                    '---------------'   ||
                        ||         ||   ||
                      09:00.0      09:00.1
        == PCI ============================================
                        ||            ||
                   .----------.   .----------.
            ,------|    PF    |---|    VF    |-------,
            |      '----------'   '----------'       |
            |                  DSC                   |
            |                                        |
            ------------------------------------------

Changes:
  v11:
 - change strncpy to strscpy
Reported-by: kernel test robot <lkp@intel.com>
     Link: https://lore.kernel.org/oe-kbuild-all/202304181137.WaZTYyAa-lkp@intel.com/

  v10:
Link: https://lore.kernel.org/netdev/20230418003228.28234-1-shannon.nelson@amd.com/
 - remove CONFIG_DEBUG_FS guard static inline stuff
 - remove unnecessary 0 and null initializations
 - verify in driver load that PDS_CORE_DRV_NAME matches KBUILD_MODNAME
 - remove debugfs irqs_show(), redundant with /proc
 - return -ENOMEM if intr_info = kcalloc() fails
 - move the status code enum into pds_core_if.h as part of API definition
 - fix up one place in pdsc_devcmd_wait() where we were using the status code instead of the errno
 - remove redundant calls to flush_workqueue()
 - grab config_lock before testing state bits in pdsc_fw_reporter_diagnose()
 - change pdsc_color_match() to return bool
 - remove useless VIF setup loop and just setup vDPA services for now
 - remove pf pointer from struct padev and have clients use pci_physfn()
 - drop use of "vf" in auxdev.c function names, make more generic
 - remove last of client ops struct and simply export the functions
 - drop drivers@pensando.io from MAINTAINERS and add new include dir
 - include dynamic_debug.h in adminq.c to protect dynamic_hex_dump()
 - fixed fw_slot type from u8 to int for handling error returns
 - fixed comment spelling
 - changed void arg in pdsc_adminq_post() to struct pdsc *

  v9:
Link: https://lore.kernel.org/netdev/20230406234143.11318-1-shannon.nelson@amd.com/
 - change pdsc field name id to uid to clarify the unique id used for aux device
 - remove unnecessary pf->state and other checks in aux device creation
 - hardcode fw slotnames for devlink info, don't use strings from FW
 - handle errors from PDS_CORE_CMD_INIT devcmd call
 - tighten up health thread use of config_lock
 - remove pdsc_queue_health_check() layer over queuing health check
 - start pds_core.rst file in first patch, add to it incrementally
 - give more user interaction info in commit messages
 - removed a few more extraneous includes

  v8:
Link: https://lore.kernel.org/netdev/20230330234628.14627-1-shannon.nelson@amd.com/
 - fixed deadlock problem, use devl_health_reporter_destroy() when devlink is locked
 - don't clear client_id until after auxiliary_device_uninit()

  v7:
Link: https://lore.kernel.org/netdev/20230330192313.62018-1-shannon.nelson@amd.com/
 - use explicit devlink locking and devl_* APIs
 - move some of devlink setup logic into probe and remove
 - use debugfs_create_u{type}() for state and queue head and tail
 - add include for linux/vmalloc.h
Reported-by: kernel test robot <lkp@intel.com>
     Link: https://lore.kernel.org/oe-kbuild-all/202303260420.Tgq0qobF-lkp@intel.com/

  v6:
Link: https://lore.kernel.org/netdev/20230324190243.27722-1-shannon.nelson@amd.com/
 - removed version.h include noticed by kernel test robot's version check
Reported-by: kernel test robot <lkp@intel.com>
     Link: https://lore.kernel.org/oe-kbuild-all/202303230742.pX3ply0t-lkp@intel.com/
 - fixed up the more egregious checkpatch line length complaints
 - make sure pdsc_auxbus_dev_register() checks padev pointer errcode

  v5:
Link: https://lore.kernel.org/netdev/20230322185626.38758-1-shannon.nelson@amd.com/
 - added devlink health reporter for FW issues
 - removed asic_type, asic_rev, serial_num, fw_version from debugfs as
   they are available through other means
 - trimmed OS info in pdsc_identify(), we don't need to send that much info to the FW
 - removed reg/unreg from auxbus client API, they are now in the core when VF
   is started
 - removed need for pdsc definition in client by simplifying the padev to only carry
   struct pci_dev pointers rather than full struct pdsc to the pf and vf
 - removed the unused pdsc argument in pdsc_notify()
 - moved include/linux/pds/pds_core.h to driver/../pds_core/core.h
 - restored a few pds_core_if.h interface values and structs that are shared
   with FW source
 - moved final config_lock unlock to before tear down of timer and workqueue
   to be sure there are no deadlocks while waiting for any stragglers
 - changed use of PAGE_SIZE to local PDS_PAGE_SIZE to keep with FW layout needs
   without regard to kernel PAGE_SIZE configuration
 - removed the redundant *adminqcq argument from pdsc_adminq_post()

  v4:
Link: https://lore.kernel.org/netdev/20230308051310.12544-1-shannon.nelson@amd.com/
 - reworked to attach to both Core PF and vDPA VF PCI devices
 - now creates auxiliary_device as part of each VF PCI probe, removes them on PCI remove
 - auxiliary devices now use simple unique id rather than PCI address for identifier
 - replaced home-grown event publishing with kernel-based notifier service
 - dropped live_migration parameter, not needed when not creating aux device for it
 - replaced devm_* functions with traditional interfaces
 - added MAINTAINERS entry
 - removed lingering traces of set/get_vf attribute adminq commands
 - trimmed some include lists
 - cleaned a kernel test robot complaint about a stray unused variable
        Link: https://lore.kernel.org/oe-kbuild-all/202302181049.yeUQMeWY-lkp@intel.com/

  v3:
Link: https://lore.kernel.org/netdev/20230217225558.19837-1-shannon.nelson@amd.com/
 - changed names from "pensando" to "amd" and updated copyright strings
 - dropped the DEVLINK_PARAM_GENERIC_ID_FW_BANK for future development
 - changed the auxiliary device creation to be triggered by the
   PCI bus event BOUND_DRIVER, and torn down at UNBIND_DRIVER in order
   to properly handle users using the sysfs bind/unbind functions
 - dropped some noisy log messages
 - rebased to current net-next

  RFC to v2:
Link: https://lore.kernel.org/netdev/20221207004443.33779-1-shannon.nelson@amd.com/
 - added separate devlink param patches for DEVLINK_PARAM_GENERIC_ID_ENABLE_MIGRATION
   and DEVLINK_PARAM_GENERIC_ID_FW_BANK, and dropped the driver specific implementations
 - updated descriptions for the new devlink parameters
 - dropped netdev support
 - dropped vDPA patches, will followup later
 - separated fw update and fw bank select into their own patches

  RFC:
Link: https://lore.kernel.org/netdev/20221118225656.48309-1-snelson@pensando.io/
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
parents 25c800b2 ddbcb220
.. SPDX-License-Identifier: GPL-2.0+
========================================================
Linux Driver for the AMD/Pensando(R) DSC adapter family
========================================================
Copyright(c) 2023 Advanced Micro Devices, Inc
Identifying the Adapter
=======================
To find if one or more AMD/Pensando PCI Core devices are installed on the
host, check for the PCI devices::
# lspci -d 1dd8:100c
b5:00.0 Processing accelerators: Pensando Systems Device 100c
b6:00.0 Processing accelerators: Pensando Systems Device 100c
If such devices are listed as above, then the pds_core.ko driver should find
and configure them for use. There should be log entries in the kernel
messages such as these::
$ dmesg | grep pds_core
pds_core 0000:b5:00.0: 252.048 Gb/s available PCIe bandwidth (16.0 GT/s PCIe x16 link)
pds_core 0000:b5:00.0: FW: 1.60.0-73
pds_core 0000:b6:00.0: 252.048 Gb/s available PCIe bandwidth (16.0 GT/s PCIe x16 link)
pds_core 0000:b6:00.0: FW: 1.60.0-73
Driver and firmware version information can be gathered with devlink::
$ devlink dev info pci/0000:b5:00.0
pci/0000:b5:00.0:
driver pds_core
serial_number FLM18420073
versions:
fixed:
asic.id 0x0
asic.rev 0x0
running:
fw 1.51.0-73
stored:
fw.goldfw 1.15.9-C-22
fw.mainfwa 1.60.0-73
fw.mainfwb 1.60.0-57
Info versions
=============
The ``pds_core`` driver reports the following versions
.. list-table:: devlink info versions implemented
:widths: 5 5 90
* - Name
- Type
- Description
* - ``fw``
- running
- Version of firmware running on the device
* - ``fw.goldfw``
- stored
- Version of firmware stored in the goldfw slot
* - ``fw.mainfwa``
- stored
- Version of firmware stored in the mainfwa slot
* - ``fw.mainfwb``
- stored
- Version of firmware stored in the mainfwb slot
* - ``asic.id``
- fixed
- The ASIC type for this device
* - ``asic.rev``
- fixed
- The revision of the ASIC for this device
Parameters
==========
The ``pds_core`` driver implements the following generic
parameters for controlling the functionality to be made available
as auxiliary_bus devices.
.. list-table:: Generic parameters implemented
:widths: 5 5 8 82
* - Name
- Mode
- Type
- Description
* - ``enable_vnet``
- runtime
- Boolean
- Enables vDPA functionality through an auxiliary_bus device
Firmware Management
===================
The ``flash`` command can update the DSC firmware. The downloaded firmware
will be saved into either firmware bank 1 or bank 2, whichever is not
currently in use, and that bank will then be used for the next boot::
# devlink dev flash pci/0000:b5:00.0 \
file pensando/dsc_fw_1.63.0-22.tar
Health Reporters
================
The driver supports a devlink health reporter for FW status::
# devlink health show pci/0000:2b:00.0 reporter fw
pci/0000:2b:00.0:
reporter fw
state healthy error 0 recover 0
# devlink health diagnose pci/0000:2b:00.0 reporter fw
Status: healthy State: 1 Generation: 0 Recoveries: 0
Enabling the driver
===================
The driver is enabled via the standard kernel configuration system,
using the make command::
make oldconfig/menuconfig/etc.
The driver is located in the menu structure at:
-> Device Drivers
-> Network device support (NETDEVICES [=y])
-> Ethernet driver support
-> AMD devices
-> AMD/Pensando Ethernet PDS_CORE Support
Support
=======
For general Linux networking support, please use the netdev mailing
list, which is monitored by AMD/Pensando personnel::
netdev@vger.kernel.org
@@ -14,6 +14,7 @@ Contents:
3com/vortex
amazon/ena
altera/altera_tse
amd/pds_core
aquantia/atlantic
chelsio/cxgb
cirrus/cs89x0
@@ -1041,6 +1041,15 @@ F: drivers/gpu/drm/amd/include/vi_structs.h
F: include/uapi/linux/kfd_ioctl.h
F: include/uapi/linux/kfd_sysfs.h
AMD PDS CORE DRIVER
M: Shannon Nelson <shannon.nelson@amd.com>
M: Brett Creeley <brett.creeley@amd.com>
L: netdev@vger.kernel.org
S: Supported
F: Documentation/networking/device_drivers/ethernet/amd/pds_core.rst
F: drivers/net/ethernet/amd/pds_core/
F: include/linux/pds/
AMD SPI DRIVER
M: Sanjay R Mehta <sanju.mehta@amd.com>
S: Maintained
@@ -186,4 +186,16 @@ config AMD_XGBE_HAVE_ECC
bool
default n
config PDS_CORE
tristate "AMD/Pensando Data Systems Core Device Support"
depends on 64BIT && PCI
help
This enables the support for the AMD/Pensando Core device family of
adapters. More specific information on this driver can be
found in
<file:Documentation/networking/device_drivers/ethernet/amd/pds_core.rst>.
To compile this driver as a module, choose M here. The module
will be called pds_core.
endif # NET_VENDOR_AMD
@@ -17,3 +17,4 @@ obj-$(CONFIG_PCNET32) += pcnet32.o
obj-$(CONFIG_SUN3LANCE) += sun3lance.o
obj-$(CONFIG_SUNLANCE) += sunlance.o
obj-$(CONFIG_AMD_XGBE) += xgbe/
obj-$(CONFIG_PDS_CORE) += pds_core/
# SPDX-License-Identifier: GPL-2.0
# Copyright (c) 2023 Advanced Micro Devices, Inc.
obj-$(CONFIG_PDS_CORE) := pds_core.o
pds_core-y := main.o \
devlink.o \
auxbus.o \
dev.o \
adminq.o \
core.o \
fw.o
pds_core-$(CONFIG_DEBUG_FS) += debugfs.o
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2023 Advanced Micro Devices, Inc */
#include <linux/dynamic_debug.h>
#include "core.h"
struct pdsc_wait_context {
struct pdsc_qcq *qcq;
struct completion wait_completion;
};
static int pdsc_process_notifyq(struct pdsc_qcq *qcq)
{
union pds_core_notifyq_comp *comp;
struct pdsc *pdsc = qcq->pdsc;
struct pdsc_cq *cq = &qcq->cq;
struct pdsc_cq_info *cq_info;
int nq_work = 0;
u64 eid;
cq_info = &cq->info[cq->tail_idx];
comp = cq_info->comp;
eid = le64_to_cpu(comp->event.eid);
while (eid > pdsc->last_eid) {
u16 ecode = le16_to_cpu(comp->event.ecode);
switch (ecode) {
case PDS_EVENT_LINK_CHANGE:
dev_info(pdsc->dev, "NotifyQ LINK_CHANGE ecode %d eid %lld\n",
ecode, eid);
pdsc_notify(PDS_EVENT_LINK_CHANGE, comp);
break;
case PDS_EVENT_RESET:
dev_info(pdsc->dev, "NotifyQ RESET ecode %d eid %lld\n",
ecode, eid);
pdsc_notify(PDS_EVENT_RESET, comp);
break;
case PDS_EVENT_XCVR:
dev_info(pdsc->dev, "NotifyQ XCVR ecode %d eid %lld\n",
ecode, eid);
break;
default:
dev_info(pdsc->dev, "NotifyQ ecode %d eid %lld\n",
ecode, eid);
break;
}
pdsc->last_eid = eid;
cq->tail_idx = (cq->tail_idx + 1) & (cq->num_descs - 1);
cq_info = &cq->info[cq->tail_idx];
comp = cq_info->comp;
eid = le64_to_cpu(comp->event.eid);
nq_work++;
}
qcq->accum_work += nq_work;
return nq_work;
}
void pdsc_process_adminq(struct pdsc_qcq *qcq)
{
union pds_core_adminq_comp *comp;
struct pdsc_queue *q = &qcq->q;
struct pdsc *pdsc = qcq->pdsc;
struct pdsc_cq *cq = &qcq->cq;
struct pdsc_q_info *q_info;
unsigned long irqflags;
int nq_work = 0;
int aq_work = 0;
int credits;
/* Don't process AdminQ when shutting down */
if (pdsc->state & BIT_ULL(PDSC_S_STOPPING_DRIVER)) {
dev_err(pdsc->dev, "%s: called while PDSC_S_STOPPING_DRIVER\n",
__func__);
return;
}
/* Check for NotifyQ event */
nq_work = pdsc_process_notifyq(&pdsc->notifyqcq);
/* Check for empty queue, which can happen if the interrupt was
* for a NotifyQ event and there are no new AdminQ completions.
*/
if (q->tail_idx == q->head_idx)
goto credits;
/* Find the first completion to clean,
* run the callback in the related q_info,
* and continue while we still match done color
*/
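/* The completion's color bit is written by the device; the driver flips
 * done_color each time the completion ring wraps, so a color mismatch
 * means that entry has not been written by the device yet.
 */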
spin_lock_irqsave(&pdsc->adminq_lock, irqflags);
comp = cq->info[cq->tail_idx].comp;
while (pdsc_color_match(comp->color, cq->done_color)) {
q_info = &q->info[q->tail_idx];
q->tail_idx = (q->tail_idx + 1) & (q->num_descs - 1);
/* Copy out the completion data */
memcpy(q_info->dest, comp, sizeof(*comp));
complete_all(&q_info->wc->wait_completion);
if (cq->tail_idx == cq->num_descs - 1)
cq->done_color = !cq->done_color;
cq->tail_idx = (cq->tail_idx + 1) & (cq->num_descs - 1);
comp = cq->info[cq->tail_idx].comp;
aq_work++;
}
spin_unlock_irqrestore(&pdsc->adminq_lock, irqflags);
qcq->accum_work += aq_work;
credits:
/* Return the interrupt credits, one for each completion */
credits = nq_work + aq_work;
if (credits)
pds_core_intr_credits(&pdsc->intr_ctrl[qcq->intx],
credits,
PDS_CORE_INTR_CRED_REARM);
}
void pdsc_work_thread(struct work_struct *work)
{
struct pdsc_qcq *qcq = container_of(work, struct pdsc_qcq, work);
pdsc_process_adminq(qcq);
}
irqreturn_t pdsc_adminq_isr(int irq, void *data)
{
struct pdsc_qcq *qcq = data;
struct pdsc *pdsc = qcq->pdsc;
/* Don't process AdminQ when shutting down */
if (pdsc->state & BIT_ULL(PDSC_S_STOPPING_DRIVER)) {
dev_err(pdsc->dev, "%s: called while PDSC_S_STOPPING_DRIVER\n",
__func__);
return IRQ_HANDLED;
}
queue_work(pdsc->wq, &qcq->work);
pds_core_intr_mask(&pdsc->intr_ctrl[irq], PDS_CORE_INTR_MASK_CLEAR);
return IRQ_HANDLED;
}
static int __pdsc_adminq_post(struct pdsc *pdsc,
struct pdsc_qcq *qcq,
union pds_core_adminq_cmd *cmd,
union pds_core_adminq_comp *comp,
struct pdsc_wait_context *wc)
{
struct pdsc_queue *q = &qcq->q;
struct pdsc_q_info *q_info;
unsigned long irqflags;
unsigned int avail;
int index;
int ret;
spin_lock_irqsave(&pdsc->adminq_lock, irqflags);
/* Check for space in the queue */
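/* One descriptor slot is intentionally left unused so that a full ring
 * (head one slot behind tail) can be told apart from an empty ring
 * (head == tail).
 */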
avail = q->tail_idx;
if (q->head_idx >= avail)
avail += q->num_descs - q->head_idx - 1;
else
avail -= q->head_idx + 1;
if (!avail) {
ret = -ENOSPC;
goto err_out_unlock;
}
/* Check that the FW is running */
if (!pdsc_is_fw_running(pdsc)) {
u8 fw_status = ioread8(&pdsc->info_regs->fw_status);
dev_info(pdsc->dev, "%s: post failed - fw not running %#02x:\n",
__func__, fw_status);
ret = -ENXIO;
goto err_out_unlock;
}
/* Post the request */
index = q->head_idx;
q_info = &q->info[index];
q_info->wc = wc;
q_info->dest = comp;
memcpy(q_info->desc, cmd, sizeof(*cmd));
dev_dbg(pdsc->dev, "head_idx %d tail_idx %d\n",
q->head_idx, q->tail_idx);
dev_dbg(pdsc->dev, "post admin queue command:\n");
dynamic_hex_dump("cmd ", DUMP_PREFIX_OFFSET, 16, 1,
cmd, sizeof(*cmd), true);
q->head_idx = (q->head_idx + 1) & (q->num_descs - 1);
pds_core_dbell_ring(pdsc->kern_dbpage,
q->hw_type, q->dbval | q->head_idx);
ret = index;
err_out_unlock:
spin_unlock_irqrestore(&pdsc->adminq_lock, irqflags);
return ret;
}
int pdsc_adminq_post(struct pdsc *pdsc,
union pds_core_adminq_cmd *cmd,
union pds_core_adminq_comp *comp,
bool fast_poll)
{
struct pdsc_wait_context wc = {
.wait_completion =
COMPLETION_INITIALIZER_ONSTACK(wc.wait_completion),
};
unsigned long poll_interval = 1;
unsigned long poll_jiffies;
unsigned long time_limit;
unsigned long time_start;
unsigned long time_done;
unsigned long remaining;
int err = 0;
int index;
wc.qcq = &pdsc->adminqcq;
index = __pdsc_adminq_post(pdsc, &pdsc->adminqcq, cmd, comp, &wc);
if (index < 0) {
err = index;
goto err_out;
}
time_start = jiffies;
time_limit = time_start + HZ * pdsc->devcmd_timeout;
do {
/* Timeslice the actual wait to catch IO errors etc early */
poll_jiffies = msecs_to_jiffies(poll_interval);
remaining = wait_for_completion_timeout(&wc.wait_completion,
poll_jiffies);
if (remaining)
break;
if (!pdsc_is_fw_running(pdsc)) {
u8 fw_status = ioread8(&pdsc->info_regs->fw_status);
dev_dbg(pdsc->dev, "%s: post wait failed - fw not running %#02x:\n",
__func__, fw_status);
err = -ENXIO;
break;
}
/* When fast_poll is not requested, prevent aggressive polling
* on failures due to timeouts by doing exponential back off.
*/
if (!fast_poll && poll_interval < PDSC_ADMINQ_MAX_POLL_INTERVAL)
poll_interval <<= 1;
} while (time_before(jiffies, time_limit));
time_done = jiffies;
dev_dbg(pdsc->dev, "%s: elapsed %d msecs\n",
__func__, jiffies_to_msecs(time_done - time_start));
/* Check the results */
if (time_after_eq(time_done, time_limit))
err = -ETIMEDOUT;
dev_dbg(pdsc->dev, "read admin queue completion idx %d:\n", index);
dynamic_hex_dump("comp ", DUMP_PREFIX_OFFSET, 16, 1,
comp, sizeof(*comp), true);
if (remaining && comp->status)
err = pdsc_err_to_errno(comp->status);
err_out:
if (err) {
dev_dbg(pdsc->dev, "%s: opcode %d status %d err %pe\n",
__func__, cmd->opcode, comp->status, ERR_PTR(err));
if (err == -ENXIO || err == -ETIMEDOUT)
queue_work(pdsc->wq, &pdsc->health_work);
}
return err;
}
EXPORT_SYMBOL_GPL(pdsc_adminq_post);
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2023 Advanced Micro Devices, Inc */
#include <linux/pci.h>
#include "core.h"
#include <linux/pds/pds_auxbus.h>
/**
* pds_client_register - Link the client to the firmware
* @pf_pdev: ptr to the PF driver struct
* @devname: name that includes service info, e.g. pds_core.vDPA
*
* Return: positive client_id on success, or
* negative for error
*/
int pds_client_register(struct pci_dev *pf_pdev, char *devname)
{
union pds_core_adminq_comp comp = {};
union pds_core_adminq_cmd cmd = {};
struct pdsc *pf;
int err;
u16 ci;
pf = pci_get_drvdata(pf_pdev);
if (pf->state)
return -ENXIO;
cmd.client_reg.opcode = PDS_AQ_CMD_CLIENT_REG;
strscpy(cmd.client_reg.devname, devname,
sizeof(cmd.client_reg.devname));
err = pdsc_adminq_post(pf, &cmd, &comp, false);
if (err) {
dev_info(pf->dev, "register dev_name %s with DSC failed, status %d: %pe\n",
devname, comp.status, ERR_PTR(err));
return err;
}
ci = le16_to_cpu(comp.client_reg.client_id);
if (!ci) {
dev_err(pf->dev, "%s: device returned null client_id\n",
__func__);
return -EIO;
}
dev_dbg(pf->dev, "%s: device returned client_id %d for %s\n",
__func__, ci, devname);
return ci;
}
EXPORT_SYMBOL_GPL(pds_client_register);
/**
* pds_client_unregister - Unlink the client from the firmware
* @pf_pdev: ptr to the PF driver struct
* @client_id: id returned from pds_client_register()
*
* Return: 0 on success, or
* negative for error
*/
int pds_client_unregister(struct pci_dev *pf_pdev, u16 client_id)
{
union pds_core_adminq_comp comp = {};
union pds_core_adminq_cmd cmd = {};
struct pdsc *pf;
int err;
pf = pci_get_drvdata(pf_pdev);
if (pf->state)
return -ENXIO;
cmd.client_unreg.opcode = PDS_AQ_CMD_CLIENT_UNREG;
cmd.client_unreg.client_id = cpu_to_le16(client_id);
err = pdsc_adminq_post(pf, &cmd, &comp, false);
if (err)
dev_info(pf->dev, "unregister client_id %d failed, status %d: %pe\n",
client_id, comp.status, ERR_PTR(err));
return err;
}
EXPORT_SYMBOL_GPL(pds_client_unregister);
/**
* pds_client_adminq_cmd - Process an adminq request for the client
* @padev: ptr to the client device
* @req: ptr to buffer with request
* @req_len: length of actual struct used for request
* @resp: ptr to buffer where answer is to be copied
* @flags: optional flags from pds_core_adminq_flags
*
* Return: 0 on success, or
* negative for error
*
* Client sends pointers to request and response buffers
* Core copies request data into pds_core_client_request_cmd
* Core sets other fields as needed
* Core posts to AdminQ
* Core copies completion data into response buffer
*/
int pds_client_adminq_cmd(struct pds_auxiliary_dev *padev,
union pds_core_adminq_cmd *req,
size_t req_len,
union pds_core_adminq_comp *resp,
u64 flags)
{
union pds_core_adminq_cmd cmd = {};
struct pci_dev *pf_pdev;
struct pdsc *pf;
size_t cp_len;
int err;
pf_pdev = pci_physfn(padev->vf_pdev);
pf = pci_get_drvdata(pf_pdev);
dev_dbg(pf->dev, "%s: %s opcode %d\n",
__func__, dev_name(&padev->aux_dev.dev), req->opcode);
if (pf->state)
return -ENXIO;
/* Wrap the client's request */
cmd.client_request.opcode = PDS_AQ_CMD_CLIENT_CMD;
cmd.client_request.client_id = cpu_to_le16(padev->client_id);
cp_len = min_t(size_t, req_len, sizeof(cmd.client_request.client_cmd));
memcpy(cmd.client_request.client_cmd, req, cp_len);
err = pdsc_adminq_post(pf, &cmd, resp,
!!(flags & PDS_AQ_FLAG_FASTPOLL));
if (err && err != -EAGAIN)
dev_info(pf->dev, "client admin cmd failed: %pe\n",
ERR_PTR(err));
return err;
}
EXPORT_SYMBOL_GPL(pds_client_adminq_cmd);
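/* Usage sketch (illustrative, not part of this file): a client driver
 * would typically fill the adminq command union with its service-specific
 * request and post it through this helper, e.g.
 *
 *	union pds_core_adminq_comp comp = {};
 *	union pds_core_adminq_cmd cmd = { .opcode = MY_SERVICE_OPCODE };
 *	int err;
 *
 *	err = pds_client_adminq_cmd(padev, &cmd, sizeof(cmd), &comp,
 *				    PDS_AQ_FLAG_FASTPOLL);
 *
 * MY_SERVICE_OPCODE is a placeholder; only the call signature and
 * PDS_AQ_FLAG_FASTPOLL come from this patchset.
 */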
static void pdsc_auxbus_dev_release(struct device *dev)
{
struct pds_auxiliary_dev *padev =
container_of(dev, struct pds_auxiliary_dev, aux_dev.dev);
kfree(padev);
}
static struct pds_auxiliary_dev *pdsc_auxbus_dev_register(struct pdsc *cf,
struct pdsc *pf,
u16 client_id,
char *name)
{
struct auxiliary_device *aux_dev;
struct pds_auxiliary_dev *padev;
int err;
padev = kzalloc(sizeof(*padev), GFP_KERNEL);
if (!padev)
return ERR_PTR(-ENOMEM);
padev->vf_pdev = cf->pdev;
padev->client_id = client_id;
aux_dev = &padev->aux_dev;
aux_dev->name = name;
aux_dev->id = cf->uid;
aux_dev->dev.parent = cf->dev;
aux_dev->dev.release = pdsc_auxbus_dev_release;
err = auxiliary_device_init(aux_dev);
if (err < 0) {
dev_warn(cf->dev, "auxiliary_device_init of %s failed: %pe\n",
name, ERR_PTR(err));
goto err_out;
}
err = auxiliary_device_add(aux_dev);
if (err) {
dev_warn(cf->dev, "auxiliary_device_add of %s failed: %pe\n",
name, ERR_PTR(err));
goto err_out_uninit;
}
return padev;
err_out_uninit:
auxiliary_device_uninit(aux_dev);
err_out:
kfree(padev);
return ERR_PTR(err);
}
int pdsc_auxbus_dev_del(struct pdsc *cf, struct pdsc *pf)
{
struct pds_auxiliary_dev *padev;
int err = 0;
mutex_lock(&pf->config_lock);
padev = pf->vfs[cf->vf_id].padev;
if (padev) {
pds_client_unregister(pf->pdev, padev->client_id);
auxiliary_device_delete(&padev->aux_dev);
auxiliary_device_uninit(&padev->aux_dev);
padev->client_id = 0;
}
pf->vfs[cf->vf_id].padev = NULL;
mutex_unlock(&pf->config_lock);
return err;
}
int pdsc_auxbus_dev_add(struct pdsc *cf, struct pdsc *pf)
{
struct pds_auxiliary_dev *padev;
enum pds_core_vif_types vt;
char devname[PDS_DEVNAME_LEN];
u16 vt_support;
int client_id;
int err = 0;
mutex_lock(&pf->config_lock);
/* We only support vDPA so far, so it is the only one to
* be verified that it is available in the Core device and
* enabled in the devlink param. In the future this might
* become a loop for several VIF types.
*/
/* Verify that the type is supported and enabled. It is not
* an error if there is no auxbus device support for this
* VF, it just means something else needs to happen with it.
*/
vt = PDS_DEV_TYPE_VDPA;
vt_support = !!le16_to_cpu(pf->dev_ident.vif_types[vt]);
if (!(vt_support &&
pf->viftype_status[vt].supported &&
pf->viftype_status[vt].enabled))
goto out_unlock;
/* Need to register with FW and get the client_id before
* creating the aux device so that the aux client can run
* adminq commands as part of its probe
*/
snprintf(devname, sizeof(devname), "%s.%s.%d",
PDS_CORE_DRV_NAME, pf->viftype_status[vt].name, cf->uid);
client_id = pds_client_register(pf->pdev, devname);
if (client_id < 0) {
err = client_id;
goto out_unlock;
}
padev = pdsc_auxbus_dev_register(cf, pf, client_id,
pf->viftype_status[vt].name);
if (IS_ERR(padev)) {
pds_client_unregister(pf->pdev, client_id);
err = PTR_ERR(padev);
goto out_unlock;
}
pf->vfs[cf->vf_id].padev = padev;
out_unlock:
mutex_unlock(&pf->config_lock);
return err;
}
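/* Illustrative sketch (an assumption, not part of this patchset): a client
 * such as the separately posted pds_vdpa driver would be expected to match
 * the auxiliary devices created above with an id_table naming the
 * "<PDS_CORE_DRV_NAME>.<viftype name>" string, and to recover the
 * struct pds_auxiliary_dev in its probe, roughly:
 *
 *	static const struct auxiliary_device_id pds_vdpa_id_table[] = {
 *		{ .name = PDS_CORE_DRV_NAME ".vDPA" },
 *		{},
 *	};
 *
 *	static int pds_vdpa_probe(struct auxiliary_device *aux_dev,
 *				  const struct auxiliary_device_id *id)
 *	{
 *		struct pds_auxiliary_dev *padev =
 *			container_of(aux_dev, struct pds_auxiliary_dev,
 *				     aux_dev);
 *		...
 *	}
 *
 * The exact viftype name string is defined elsewhere in the core driver.
 */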
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2023 Advanced Micro Devices, Inc */
#ifndef _PDSC_H_
#define _PDSC_H_
#include <linux/debugfs.h>
#include <net/devlink.h>
#include <linux/pds/pds_common.h>
#include <linux/pds/pds_core_if.h>
#include <linux/pds/pds_adminq.h>
#include <linux/pds/pds_intr.h>
#define PDSC_DRV_DESCRIPTION "AMD/Pensando Core Driver"
#define PDSC_WATCHDOG_SECS 5
#define PDSC_QUEUE_NAME_MAX_SZ 32
#define PDSC_ADMINQ_MIN_LENGTH 16 /* must be a power of two */
#define PDSC_NOTIFYQ_LENGTH 64 /* must be a power of two */
#define PDSC_TEARDOWN_RECOVERY false
#define PDSC_TEARDOWN_REMOVING true
#define PDSC_SETUP_RECOVERY false
#define PDSC_SETUP_INIT true
struct pdsc_dev_bar {
void __iomem *vaddr;
phys_addr_t bus_addr;
unsigned long len;
int res_index;
};
struct pdsc;
struct pdsc_vf {
struct pds_auxiliary_dev *padev;
struct pdsc *vf;
u16 index;
__le16 vif_types[PDS_DEV_TYPE_MAX];
};
struct pdsc_devinfo {
u8 asic_type;
u8 asic_rev;
char fw_version[PDS_CORE_DEVINFO_FWVERS_BUFLEN + 1];
char serial_num[PDS_CORE_DEVINFO_SERIAL_BUFLEN + 1];
};
struct pdsc_queue {
struct pdsc_q_info *info;
u64 dbval;
u16 head_idx;
u16 tail_idx;
u8 hw_type;
unsigned int index;
unsigned int num_descs;
u64 dbell_count;
u64 features;
unsigned int type;
unsigned int hw_index;
union {
void *base;
struct pds_core_admin_cmd *adminq;
};
dma_addr_t base_pa; /* must be page aligned */
unsigned int desc_size;
unsigned int pid;
char name[PDSC_QUEUE_NAME_MAX_SZ];
};
#define PDSC_INTR_NAME_MAX_SZ 32
struct pdsc_intr_info {
char name[PDSC_INTR_NAME_MAX_SZ];
unsigned int index;
unsigned int vector;
void *data;
};
struct pdsc_cq_info {
void *comp;
};
struct pdsc_buf_info {
struct page *page;
dma_addr_t dma_addr;
u32 page_offset;
u32 len;
};
struct pdsc_q_info {
union {
void *desc;
struct pdsc_admin_cmd *adminq_desc;
};
unsigned int bytes;
unsigned int nbufs;
struct pdsc_buf_info bufs[PDS_CORE_MAX_FRAGS];
struct pdsc_wait_context *wc;
void *dest;
};
struct pdsc_cq {
struct pdsc_cq_info *info;
struct pdsc_queue *bound_q;
struct pdsc_intr_info *bound_intr;
u16 tail_idx;
bool done_color;
unsigned int num_descs;
unsigned int desc_size;
void *base;
dma_addr_t base_pa; /* must be page aligned */
} ____cacheline_aligned_in_smp;
struct pdsc_qcq {
struct pdsc *pdsc;
void *q_base;
dma_addr_t q_base_pa; /* might not be page aligned */
void *cq_base;
dma_addr_t cq_base_pa; /* might not be page aligned */
u32 q_size;
u32 cq_size;
bool armed;
unsigned int flags;
struct work_struct work;
struct pdsc_queue q;
struct pdsc_cq cq;
int intx;
u32 accum_work;
struct dentry *dentry;
};
struct pdsc_viftype {
char *name;
bool supported;
bool enabled;
int dl_id;
int vif_id;
struct pds_auxiliary_dev *padev;
};
/* No state flags set means we are in a steady running state */
enum pdsc_state_flags {
PDSC_S_FW_DEAD, /* stopped, wait on startup or recovery */
PDSC_S_INITING_DRIVER, /* initial startup from probe */
PDSC_S_STOPPING_DRIVER, /* driver remove */
/* leave this as last */
PDSC_S_STATE_SIZE
};
struct pdsc {
struct pci_dev *pdev;
struct dentry *dentry;
struct device *dev;
struct pdsc_dev_bar bars[PDS_CORE_BARS_MAX];
struct pdsc_vf *vfs;
int num_vfs;
int vf_id;
int hw_index;
int uid;
unsigned long state;
u8 fw_status;
u8 fw_generation;
unsigned long last_fw_time;
u32 last_hb;
struct timer_list wdtimer;
unsigned int wdtimer_period;
struct work_struct health_work;
struct devlink_health_reporter *fw_reporter;
u32 fw_recoveries;
struct pdsc_devinfo dev_info;
struct pds_core_dev_identity dev_ident;
unsigned int nintrs;
struct pdsc_intr_info *intr_info; /* array of nintrs elements */
struct workqueue_struct *wq;
unsigned int devcmd_timeout;
struct mutex devcmd_lock; /* lock for dev_cmd operations */
struct mutex config_lock; /* lock for configuration operations */
spinlock_t adminq_lock; /* lock for adminq operations */
struct pds_core_dev_info_regs __iomem *info_regs;
struct pds_core_dev_cmd_regs __iomem *cmd_regs;
struct pds_core_intr __iomem *intr_ctrl;
u64 __iomem *intr_status;
u64 __iomem *db_pages;
dma_addr_t phy_db_pages;
u64 __iomem *kern_dbpage;
struct pdsc_qcq adminqcq;
struct pdsc_qcq notifyqcq;
u64 last_eid;
struct pdsc_viftype *viftype_status;
};
/** enum pds_core_dbell_bits - bitwise composition of dbell values.
*
* @PDS_CORE_DBELL_QID_MASK: unshifted mask of valid queue id bits.
* @PDS_CORE_DBELL_QID_SHIFT: queue id shift amount in dbell value.
* @PDS_CORE_DBELL_QID: macro to build QID component of dbell value.
*
* @PDS_CORE_DBELL_RING_MASK: unshifted mask of valid ring bits.
* @PDS_CORE_DBELL_RING_SHIFT: ring shift amount in dbell value.
* @PDS_CORE_DBELL_RING: macro to build ring component of dbell value.
*
* @PDS_CORE_DBELL_RING_0: ring zero dbell component value.
* @PDS_CORE_DBELL_RING_1: ring one dbell component value.
* @PDS_CORE_DBELL_RING_2: ring two dbell component value.
* @PDS_CORE_DBELL_RING_3: ring three dbell component value.
*
* @PDS_CORE_DBELL_INDEX_MASK: bit mask of valid index bits, no shift needed.
*/
enum pds_core_dbell_bits {
PDS_CORE_DBELL_QID_MASK = 0xffffff,
PDS_CORE_DBELL_QID_SHIFT = 24,
#define PDS_CORE_DBELL_QID(n) \
(((u64)(n) & PDS_CORE_DBELL_QID_MASK) << PDS_CORE_DBELL_QID_SHIFT)
PDS_CORE_DBELL_RING_MASK = 0x7,
PDS_CORE_DBELL_RING_SHIFT = 16,
#define PDS_CORE_DBELL_RING(n) \
(((u64)(n) & PDS_CORE_DBELL_RING_MASK) << PDS_CORE_DBELL_RING_SHIFT)
PDS_CORE_DBELL_RING_0 = 0,
PDS_CORE_DBELL_RING_1 = PDS_CORE_DBELL_RING(1),
PDS_CORE_DBELL_RING_2 = PDS_CORE_DBELL_RING(2),
PDS_CORE_DBELL_RING_3 = PDS_CORE_DBELL_RING(3),
PDS_CORE_DBELL_INDEX_MASK = 0xffff,
};
static inline void pds_core_dbell_ring(u64 __iomem *db_page,
enum pds_core_logical_qtype qtype,
u64 val)
{
writeq(val, &db_page[qtype]);
}
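/* For reference, the adminq posts its doorbell in __pdsc_adminq_post() as
 *
 *	pds_core_dbell_ring(pdsc->kern_dbpage, q->hw_type,
 *			    q->dbval | q->head_idx);
 *
 * where q->dbval is presumably composed from the macros above, e.g.
 * PDS_CORE_DBELL_QID(hw_index) | PDS_CORE_DBELL_RING_0 (an assumption;
 * the queue setup code that builds dbval is not shown in this diff).
 */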
int pdsc_fw_reporter_diagnose(struct devlink_health_reporter *reporter,
struct devlink_fmsg *fmsg,
struct netlink_ext_ack *extack);
int pdsc_dl_info_get(struct devlink *dl, struct devlink_info_req *req,
struct netlink_ext_ack *extack);
int pdsc_dl_flash_update(struct devlink *dl,
struct devlink_flash_update_params *params,
struct netlink_ext_ack *extack);
int pdsc_dl_enable_get(struct devlink *dl, u32 id,
struct devlink_param_gset_ctx *ctx);
int pdsc_dl_enable_set(struct devlink *dl, u32 id,
struct devlink_param_gset_ctx *ctx);
int pdsc_dl_enable_validate(struct devlink *dl, u32 id,
union devlink_param_value val,
struct netlink_ext_ack *extack);
void __iomem *pdsc_map_dbpage(struct pdsc *pdsc, int page_num);
void pdsc_debugfs_create(void);
void pdsc_debugfs_destroy(void);
void pdsc_debugfs_add_dev(struct pdsc *pdsc);
void pdsc_debugfs_del_dev(struct pdsc *pdsc);
void pdsc_debugfs_add_ident(struct pdsc *pdsc);
void pdsc_debugfs_add_viftype(struct pdsc *pdsc);
void pdsc_debugfs_add_irqs(struct pdsc *pdsc);
void pdsc_debugfs_add_qcq(struct pdsc *pdsc, struct pdsc_qcq *qcq);
void pdsc_debugfs_del_qcq(struct pdsc_qcq *qcq);
int pdsc_err_to_errno(enum pds_core_status_code code);
bool pdsc_is_fw_running(struct pdsc *pdsc);
bool pdsc_is_fw_good(struct pdsc *pdsc);
int pdsc_devcmd(struct pdsc *pdsc, union pds_core_dev_cmd *cmd,
union pds_core_dev_comp *comp, int max_seconds);
int pdsc_devcmd_locked(struct pdsc *pdsc, union pds_core_dev_cmd *cmd,
union pds_core_dev_comp *comp, int max_seconds);
int pdsc_devcmd_init(struct pdsc *pdsc);
int pdsc_devcmd_reset(struct pdsc *pdsc);
int pdsc_dev_reinit(struct pdsc *pdsc);
int pdsc_dev_init(struct pdsc *pdsc);
int pdsc_intr_alloc(struct pdsc *pdsc, char *name,
irq_handler_t handler, void *data);
void pdsc_intr_free(struct pdsc *pdsc, int index);
void pdsc_qcq_free(struct pdsc *pdsc, struct pdsc_qcq *qcq);
int pdsc_qcq_alloc(struct pdsc *pdsc, unsigned int type, unsigned int index,
const char *name, unsigned int flags, unsigned int num_descs,
unsigned int desc_size, unsigned int cq_desc_size,
unsigned int pid, struct pdsc_qcq *qcq);
int pdsc_setup(struct pdsc *pdsc, bool init);
void pdsc_teardown(struct pdsc *pdsc, bool removing);
int pdsc_start(struct pdsc *pdsc);
void pdsc_stop(struct pdsc *pdsc);
void pdsc_health_thread(struct work_struct *work);
int pdsc_register_notify(struct notifier_block *nb);
void pdsc_unregister_notify(struct notifier_block *nb);
void pdsc_notify(unsigned long event, void *data);
int pdsc_auxbus_dev_add(struct pdsc *cf, struct pdsc *pf);
int pdsc_auxbus_dev_del(struct pdsc *cf, struct pdsc *pf);
void pdsc_process_adminq(struct pdsc_qcq *qcq);
void pdsc_work_thread(struct work_struct *work);
irqreturn_t pdsc_adminq_isr(int irq, void *data);
int pdsc_firmware_update(struct pdsc *pdsc, const struct firmware *fw,
struct netlink_ext_ack *extack);
#endif /* _PDSC_H_ */
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2023 Advanced Micro Devices, Inc */
#include <linux/pci.h>
#include "core.h"
static struct dentry *pdsc_dir;
void pdsc_debugfs_create(void)
{
pdsc_dir = debugfs_create_dir(PDS_CORE_DRV_NAME, NULL);
}
void pdsc_debugfs_destroy(void)
{
debugfs_remove_recursive(pdsc_dir);
}
void pdsc_debugfs_add_dev(struct pdsc *pdsc)
{
pdsc->dentry = debugfs_create_dir(pci_name(pdsc->pdev), pdsc_dir);
debugfs_create_ulong("state", 0400, pdsc->dentry, &pdsc->state);
}
void pdsc_debugfs_del_dev(struct pdsc *pdsc)
{
debugfs_remove_recursive(pdsc->dentry);
pdsc->dentry = NULL;
}
static int identity_show(struct seq_file *seq, void *v)
{
struct pdsc *pdsc = seq->private;
struct pds_core_dev_identity *ident;
int vt;
ident = &pdsc->dev_ident;
seq_printf(seq, "fw_heartbeat: 0x%x\n",
ioread32(&pdsc->info_regs->fw_heartbeat));
seq_printf(seq, "nlifs: %d\n",
le32_to_cpu(ident->nlifs));
seq_printf(seq, "nintrs: %d\n",
le32_to_cpu(ident->nintrs));
seq_printf(seq, "ndbpgs_per_lif: %d\n",
le32_to_cpu(ident->ndbpgs_per_lif));
seq_printf(seq, "intr_coal_mult: %d\n",
le32_to_cpu(ident->intr_coal_mult));
seq_printf(seq, "intr_coal_div: %d\n",
le32_to_cpu(ident->intr_coal_div));
seq_puts(seq, "vif_types: ");
for (vt = 0; vt < PDS_DEV_TYPE_MAX; vt++)
seq_printf(seq, "%d ",
le16_to_cpu(pdsc->dev_ident.vif_types[vt]));
seq_puts(seq, "\n");
return 0;
}
DEFINE_SHOW_ATTRIBUTE(identity);
void pdsc_debugfs_add_ident(struct pdsc *pdsc)
{
debugfs_create_file("identity", 0400, pdsc->dentry,
pdsc, &identity_fops);
}
static int viftype_show(struct seq_file *seq, void *v)
{
struct pdsc *pdsc = seq->private;
int vt;
for (vt = 0; vt < PDS_DEV_TYPE_MAX; vt++) {
if (!pdsc->viftype_status[vt].name)
continue;
seq_printf(seq, "%s\t%d supported %d enabled\n",
pdsc->viftype_status[vt].name,
pdsc->viftype_status[vt].supported,
pdsc->viftype_status[vt].enabled);
}
return 0;
}
DEFINE_SHOW_ATTRIBUTE(viftype);
void pdsc_debugfs_add_viftype(struct pdsc *pdsc)
{
debugfs_create_file("viftypes", 0400, pdsc->dentry,
pdsc, &viftype_fops);
}
static const struct debugfs_reg32 intr_ctrl_regs[] = {
{ .name = "coal_init", .offset = 0, },
{ .name = "mask", .offset = 4, },
{ .name = "credits", .offset = 8, },
{ .name = "mask_on_assert", .offset = 12, },
{ .name = "coal_timer", .offset = 16, },
};
void pdsc_debugfs_add_qcq(struct pdsc *pdsc, struct pdsc_qcq *qcq)
{
struct dentry *qcq_dentry, *q_dentry, *cq_dentry;
struct dentry *intr_dentry;
struct debugfs_regset32 *intr_ctrl_regset;
struct pdsc_intr_info *intr = &pdsc->intr_info[qcq->intx];
struct pdsc_queue *q = &qcq->q;
struct pdsc_cq *cq = &qcq->cq;
qcq_dentry = debugfs_create_dir(q->name, pdsc->dentry);
if (IS_ERR_OR_NULL(qcq_dentry))
return;
qcq->dentry = qcq_dentry;
debugfs_create_x64("q_base_pa", 0400, qcq_dentry, &qcq->q_base_pa);
debugfs_create_x32("q_size", 0400, qcq_dentry, &qcq->q_size);
debugfs_create_x64("cq_base_pa", 0400, qcq_dentry, &qcq->cq_base_pa);
debugfs_create_x32("cq_size", 0400, qcq_dentry, &qcq->cq_size);
debugfs_create_x32("accum_work", 0400, qcq_dentry, &qcq->accum_work);
q_dentry = debugfs_create_dir("q", qcq->dentry);
if (IS_ERR_OR_NULL(q_dentry))
return;
debugfs_create_u32("index", 0400, q_dentry, &q->index);
debugfs_create_u32("num_descs", 0400, q_dentry, &q->num_descs);
debugfs_create_u32("desc_size", 0400, q_dentry, &q->desc_size);
debugfs_create_u32("pid", 0400, q_dentry, &q->pid);
debugfs_create_u16("tail", 0400, q_dentry, &q->tail_idx);
debugfs_create_u16("head", 0400, q_dentry, &q->head_idx);
cq_dentry = debugfs_create_dir("cq", qcq->dentry);
if (IS_ERR_OR_NULL(cq_dentry))
return;
debugfs_create_x64("base_pa", 0400, cq_dentry, &cq->base_pa);
debugfs_create_u32("num_descs", 0400, cq_dentry, &cq->num_descs);
debugfs_create_u32("desc_size", 0400, cq_dentry, &cq->desc_size);
debugfs_create_bool("done_color", 0400, cq_dentry, &cq->done_color);
debugfs_create_u16("tail", 0400, cq_dentry, &cq->tail_idx);
if (qcq->flags & PDS_CORE_QCQ_F_INTR) {
intr_dentry = debugfs_create_dir("intr", qcq->dentry);
if (IS_ERR_OR_NULL(intr_dentry))
return;
debugfs_create_u32("index", 0400, intr_dentry, &intr->index);
debugfs_create_u32("vector", 0400, intr_dentry, &intr->vector);
intr_ctrl_regset = kzalloc(sizeof(*intr_ctrl_regset),
GFP_KERNEL);
if (!intr_ctrl_regset)
return;
intr_ctrl_regset->regs = intr_ctrl_regs;
intr_ctrl_regset->nregs = ARRAY_SIZE(intr_ctrl_regs);
intr_ctrl_regset->base = &pdsc->intr_ctrl[intr->index];
debugfs_create_regset32("intr_ctrl", 0400, intr_dentry,
intr_ctrl_regset);
}
}
void pdsc_debugfs_del_qcq(struct pdsc_qcq *qcq)
{
debugfs_remove_recursive(qcq->dentry);
qcq->dentry = NULL;
}
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2023 Advanced Micro Devices, Inc */
#include <linux/errno.h>
#include <linux/pci.h>
#include <linux/utsname.h>
#include "core.h"
int pdsc_err_to_errno(enum pds_core_status_code code)
{
switch (code) {
case PDS_RC_SUCCESS:
return 0;
case PDS_RC_EVERSION:
case PDS_RC_EQTYPE:
case PDS_RC_EQID:
case PDS_RC_EINVAL:
case PDS_RC_ENOSUPP:
return -EINVAL;
case PDS_RC_EPERM:
return -EPERM;
case PDS_RC_ENOENT:
return -ENOENT;
case PDS_RC_EAGAIN:
return -EAGAIN;
case PDS_RC_ENOMEM:
return -ENOMEM;
case PDS_RC_EFAULT:
return -EFAULT;
case PDS_RC_EBUSY:
return -EBUSY;
case PDS_RC_EEXIST:
return -EEXIST;
case PDS_RC_EVFID:
return -ENODEV;
case PDS_RC_ECLIENT:
return -ECHILD;
case PDS_RC_ENOSPC:
return -ENOSPC;
case PDS_RC_ERANGE:
return -ERANGE;
case PDS_RC_BAD_ADDR:
return -EFAULT;
case PDS_RC_EOPCODE:
case PDS_RC_EINTR:
case PDS_RC_DEV_CMD:
case PDS_RC_ERROR:
case PDS_RC_ERDMA:
case PDS_RC_EIO:
default:
return -EIO;
}
}
bool pdsc_is_fw_running(struct pdsc *pdsc)
{
pdsc->fw_status = ioread8(&pdsc->info_regs->fw_status);
pdsc->last_fw_time = jiffies;
pdsc->last_hb = ioread32(&pdsc->info_regs->fw_heartbeat);
/* Firmware is useful only if the running bit is set and
* fw_status != 0xff (bad PCI read)
*/
return (pdsc->fw_status != 0xff) &&
(pdsc->fw_status & PDS_CORE_FW_STS_F_RUNNING);
}
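/* A mismatch between the generation bits currently in fw_status and the
 * fw_generation captured at init time suggests the firmware has restarted
 * since the driver last configured it, so pdsc_is_fw_good() reports
 * "not good" even if the RUNNING bit is set again.
 */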
bool pdsc_is_fw_good(struct pdsc *pdsc)
{
u8 gen = pdsc->fw_status & PDS_CORE_FW_STS_F_GENERATION;
return pdsc_is_fw_running(pdsc) && gen == pdsc->fw_generation;
}
static u8 pdsc_devcmd_status(struct pdsc *pdsc)
{
return ioread8(&pdsc->cmd_regs->comp.status);
}
static bool pdsc_devcmd_done(struct pdsc *pdsc)
{
return ioread32(&pdsc->cmd_regs->done) & PDS_CORE_DEV_CMD_DONE;
}
static void pdsc_devcmd_dbell(struct pdsc *pdsc)
{
iowrite32(0, &pdsc->cmd_regs->done);
iowrite32(1, &pdsc->cmd_regs->doorbell);
}
static void pdsc_devcmd_clean(struct pdsc *pdsc)
{
iowrite32(0, &pdsc->cmd_regs->doorbell);
memset_io(&pdsc->cmd_regs->cmd, 0, sizeof(pdsc->cmd_regs->cmd));
}
static const char *pdsc_devcmd_str(int opcode)
{
switch (opcode) {
case PDS_CORE_CMD_NOP:
return "PDS_CORE_CMD_NOP";
case PDS_CORE_CMD_IDENTIFY:
return "PDS_CORE_CMD_IDENTIFY";
case PDS_CORE_CMD_RESET:
return "PDS_CORE_CMD_RESET";
case PDS_CORE_CMD_INIT:
return "PDS_CORE_CMD_INIT";
case PDS_CORE_CMD_FW_DOWNLOAD:
return "PDS_CORE_CMD_FW_DOWNLOAD";
case PDS_CORE_CMD_FW_CONTROL:
return "PDS_CORE_CMD_FW_CONTROL";
default:
return "PDS_CORE_CMD_UNKNOWN";
}
}
static int pdsc_devcmd_wait(struct pdsc *pdsc, int max_seconds)
{
struct device *dev = pdsc->dev;
unsigned long start_time;
unsigned long max_wait;
unsigned long duration;
int timeout = 0;
int done = 0;
int err = 0;
int status;
int opcode;
opcode = ioread8(&pdsc->cmd_regs->cmd.opcode);
start_time = jiffies;
max_wait = start_time + (max_seconds * HZ);
while (!done && !timeout) {
done = pdsc_devcmd_done(pdsc);
if (done)
break;
timeout = time_after(jiffies, max_wait);
if (timeout)
break;
usleep_range(100, 200);
}
duration = jiffies - start_time;
if (done && duration > HZ)
dev_dbg(dev, "DEVCMD %d %s after %ld secs\n",
opcode, pdsc_devcmd_str(opcode), duration / HZ);
if (!done || timeout) {
dev_err(dev, "DEVCMD %d %s timeout, done %d timeout %d max_seconds=%d\n",
opcode, pdsc_devcmd_str(opcode), done, timeout,
max_seconds);
err = -ETIMEDOUT;
pdsc_devcmd_clean(pdsc);
}
status = pdsc_devcmd_status(pdsc);
err = pdsc_err_to_errno(status);
if (err && err != -EAGAIN)
dev_err(dev, "DEVCMD %d %s failed, status=%d err %d %pe\n",
opcode, pdsc_devcmd_str(opcode), status, err,
ERR_PTR(err));
return err;
}
int pdsc_devcmd_locked(struct pdsc *pdsc, union pds_core_dev_cmd *cmd,
union pds_core_dev_comp *comp, int max_seconds)
{
int err;
memcpy_toio(&pdsc->cmd_regs->cmd, cmd, sizeof(*cmd));
pdsc_devcmd_dbell(pdsc);
err = pdsc_devcmd_wait(pdsc, max_seconds);
memcpy_fromio(comp, &pdsc->cmd_regs->comp, sizeof(*comp));
if (err == -ENXIO || err == -ETIMEDOUT)
queue_work(pdsc->wq, &pdsc->health_work);
return err;
}
int pdsc_devcmd(struct pdsc *pdsc, union pds_core_dev_cmd *cmd,
union pds_core_dev_comp *comp, int max_seconds)
{
int err;
mutex_lock(&pdsc->devcmd_lock);
err = pdsc_devcmd_locked(pdsc, cmd, comp, max_seconds);
mutex_unlock(&pdsc->devcmd_lock);
return err;
}
int pdsc_devcmd_init(struct pdsc *pdsc)
{
union pds_core_dev_comp comp = {};
union pds_core_dev_cmd cmd = {
.opcode = PDS_CORE_CMD_INIT,
};
return pdsc_devcmd(pdsc, &cmd, &comp, pdsc->devcmd_timeout);
}
int pdsc_devcmd_reset(struct pdsc *pdsc)
{
union pds_core_dev_comp comp = {};
union pds_core_dev_cmd cmd = {
.reset.opcode = PDS_CORE_CMD_RESET,
};
return pdsc_devcmd(pdsc, &cmd, &comp, pdsc->devcmd_timeout);
}
static int pdsc_devcmd_identify_locked(struct pdsc *pdsc)
{
union pds_core_dev_comp comp = {};
union pds_core_dev_cmd cmd = {
.identify.opcode = PDS_CORE_CMD_IDENTIFY,
.identify.ver = PDS_CORE_IDENTITY_VERSION_1,
};
return pdsc_devcmd_locked(pdsc, &cmd, &comp, pdsc->devcmd_timeout);
}
static void pdsc_init_devinfo(struct pdsc *pdsc)
{
pdsc->dev_info.asic_type = ioread8(&pdsc->info_regs->asic_type);
pdsc->dev_info.asic_rev = ioread8(&pdsc->info_regs->asic_rev);
pdsc->fw_generation = PDS_CORE_FW_STS_F_GENERATION &
ioread8(&pdsc->info_regs->fw_status);
memcpy_fromio(pdsc->dev_info.fw_version,
pdsc->info_regs->fw_version,
PDS_CORE_DEVINFO_FWVERS_BUFLEN);
pdsc->dev_info.fw_version[PDS_CORE_DEVINFO_FWVERS_BUFLEN] = 0;
memcpy_fromio(pdsc->dev_info.serial_num,
pdsc->info_regs->serial_num,
PDS_CORE_DEVINFO_SERIAL_BUFLEN);
pdsc->dev_info.serial_num[PDS_CORE_DEVINFO_SERIAL_BUFLEN] = 0;
dev_dbg(pdsc->dev, "fw_version %s\n", pdsc->dev_info.fw_version);
}
static int pdsc_identify(struct pdsc *pdsc)
{
struct pds_core_drv_identity drv = {};
size_t sz;
int err;
drv.drv_type = cpu_to_le32(PDS_DRIVER_LINUX);
snprintf(drv.driver_ver_str, sizeof(drv.driver_ver_str),
"%s %s", PDS_CORE_DRV_NAME, utsname()->release);
/* Next let's get some info about the device
* We use the devcmd_lock at this level in order to
* get safe access to the cmd_regs->data before anyone
* else can mess it up
*/
mutex_lock(&pdsc->devcmd_lock);
sz = min_t(size_t, sizeof(drv), sizeof(pdsc->cmd_regs->data));
memcpy_toio(&pdsc->cmd_regs->data, &drv, sz);
err = pdsc_devcmd_identify_locked(pdsc);
if (!err) {
sz = min_t(size_t, sizeof(pdsc->dev_ident),
sizeof(pdsc->cmd_regs->data));
memcpy_fromio(&pdsc->dev_ident, &pdsc->cmd_regs->data, sz);
}
mutex_unlock(&pdsc->devcmd_lock);
if (err) {
dev_err(pdsc->dev, "Cannot identify device: %pe\n",
ERR_PTR(err));
return err;
}
if (isprint(pdsc->dev_info.fw_version[0]) &&
isascii(pdsc->dev_info.fw_version[0]))
dev_info(pdsc->dev, "FW: %.*s\n",
(int)(sizeof(pdsc->dev_info.fw_version) - 1),
pdsc->dev_info.fw_version);
else
dev_info(pdsc->dev, "FW: (invalid string) 0x%02x 0x%02x 0x%02x 0x%02x ...\n",
(u8)pdsc->dev_info.fw_version[0],
(u8)pdsc->dev_info.fw_version[1],
(u8)pdsc->dev_info.fw_version[2],
(u8)pdsc->dev_info.fw_version[3]);
return 0;
}
int pdsc_dev_reinit(struct pdsc *pdsc)
{
pdsc_init_devinfo(pdsc);
return pdsc_identify(pdsc);
}
int pdsc_dev_init(struct pdsc *pdsc)
{
unsigned int nintrs;
int err;
/* Initial init and reset of device */
pdsc_init_devinfo(pdsc);
pdsc->devcmd_timeout = PDS_CORE_DEVCMD_TIMEOUT;
err = pdsc_devcmd_reset(pdsc);
if (err)
return err;
err = pdsc_identify(pdsc);
if (err)
return err;
pdsc_debugfs_add_ident(pdsc);
/* Now we can reserve interrupts */
nintrs = le32_to_cpu(pdsc->dev_ident.nintrs);
nintrs = min_t(unsigned int, num_online_cpus(), nintrs);
/* Get intr_info struct array for tracking */
pdsc->intr_info = kcalloc(nintrs, sizeof(*pdsc->intr_info), GFP_KERNEL);
if (!pdsc->intr_info) {
err = -ENOMEM;
goto err_out;
}
err = pci_alloc_irq_vectors(pdsc->pdev, nintrs, nintrs, PCI_IRQ_MSIX);
if (err != nintrs) {
dev_err(pdsc->dev, "Can't get %d intrs from OS: %pe\n",
nintrs, ERR_PTR(err));
err = -ENOSPC;
goto err_out;
}
pdsc->nintrs = nintrs;
return 0;
err_out:
kfree(pdsc->intr_info);
pdsc->intr_info = NULL;
return err;
}
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2023 Advanced Micro Devices, Inc */
#include "core.h"
#include <linux/pds/pds_auxbus.h>
static struct
pdsc_viftype *pdsc_dl_find_viftype_by_id(struct pdsc *pdsc,
enum devlink_param_type dl_id)
{
int vt;
for (vt = 0; vt < PDS_DEV_TYPE_MAX; vt++) {
if (pdsc->viftype_status[vt].dl_id == dl_id)
return &pdsc->viftype_status[vt];
}
return NULL;
}
int pdsc_dl_enable_get(struct devlink *dl, u32 id,
struct devlink_param_gset_ctx *ctx)
{
struct pdsc *pdsc = devlink_priv(dl);
struct pdsc_viftype *vt_entry;
vt_entry = pdsc_dl_find_viftype_by_id(pdsc, id);
if (!vt_entry)
return -ENOENT;
ctx->val.vbool = vt_entry->enabled;
return 0;
}
int pdsc_dl_enable_set(struct devlink *dl, u32 id,
struct devlink_param_gset_ctx *ctx)
{
struct pdsc *pdsc = devlink_priv(dl);
struct pdsc_viftype *vt_entry;
int err = 0;
int vf_id;
vt_entry = pdsc_dl_find_viftype_by_id(pdsc, id);
if (!vt_entry || !vt_entry->supported)
return -EOPNOTSUPP;
if (vt_entry->enabled == ctx->val.vbool)
return 0;
vt_entry->enabled = ctx->val.vbool;
for (vf_id = 0; vf_id < pdsc->num_vfs; vf_id++) {
struct pdsc *vf = pdsc->vfs[vf_id].vf;
err = ctx->val.vbool ? pdsc_auxbus_dev_add(vf, pdsc) :
pdsc_auxbus_dev_del(vf, pdsc);
}
return err;
}
int pdsc_dl_enable_validate(struct devlink *dl, u32 id,
union devlink_param_value val,
struct netlink_ext_ack *extack)
{
struct pdsc *pdsc = devlink_priv(dl);
struct pdsc_viftype *vt_entry;
vt_entry = pdsc_dl_find_viftype_by_id(pdsc, id);
if (!vt_entry || !vt_entry->supported)
return -EOPNOTSUPP;
if (!pdsc->viftype_status[vt_entry->vif_id].supported)
return -ENODEV;
return 0;
}
int pdsc_dl_flash_update(struct devlink *dl,
struct devlink_flash_update_params *params,
struct netlink_ext_ack *extack)
{
struct pdsc *pdsc = devlink_priv(dl);
return pdsc_firmware_update(pdsc, params->fw, extack);
}
static char *fw_slotnames[] = {
"fw.goldfw",
"fw.mainfwa",
"fw.mainfwb",
};
int pdsc_dl_info_get(struct devlink *dl, struct devlink_info_req *req,
struct netlink_ext_ack *extack)
{
union pds_core_dev_cmd cmd = {
.fw_control.opcode = PDS_CORE_CMD_FW_CONTROL,
.fw_control.oper = PDS_CORE_FW_GET_LIST,
};
struct pds_core_fw_list_info fw_list;
struct pdsc *pdsc = devlink_priv(dl);
union pds_core_dev_comp comp;
char buf[16];
int listlen;
int err;
int i;
mutex_lock(&pdsc->devcmd_lock);
err = pdsc_devcmd_locked(pdsc, &cmd, &comp, pdsc->devcmd_timeout * 2);
memcpy_fromio(&fw_list, pdsc->cmd_regs->data, sizeof(fw_list));
mutex_unlock(&pdsc->devcmd_lock);
if (err && err != -EIO)
return err;
listlen = fw_list.num_fw_slots;
for (i = 0; i < listlen; i++) {
if (i < ARRAY_SIZE(fw_slotnames))
strscpy(buf, fw_slotnames[i], sizeof(buf));
else
snprintf(buf, sizeof(buf), "fw.slot_%d", i);
err = devlink_info_version_stored_put(req, buf,
fw_list.fw_names[i].fw_version);
}
err = devlink_info_version_running_put(req,
DEVLINK_INFO_VERSION_GENERIC_FW,
pdsc->dev_info.fw_version);
if (err)
return err;
snprintf(buf, sizeof(buf), "0x%x", pdsc->dev_info.asic_type);
err = devlink_info_version_fixed_put(req,
DEVLINK_INFO_VERSION_GENERIC_ASIC_ID,
buf);
if (err)
return err;
snprintf(buf, sizeof(buf), "0x%x", pdsc->dev_info.asic_rev);
err = devlink_info_version_fixed_put(req,
DEVLINK_INFO_VERSION_GENERIC_ASIC_REV,
buf);
if (err)
return err;
return devlink_info_serial_number_put(req, pdsc->dev_info.serial_num);
}
int pdsc_fw_reporter_diagnose(struct devlink_health_reporter *reporter,
struct devlink_fmsg *fmsg,
struct netlink_ext_ack *extack)
{
struct pdsc *pdsc = devlink_health_reporter_priv(reporter);
int err;
mutex_lock(&pdsc->config_lock);
if (test_bit(PDSC_S_FW_DEAD, &pdsc->state))
err = devlink_fmsg_string_pair_put(fmsg, "Status", "dead");
else if (!pdsc_is_fw_good(pdsc))
err = devlink_fmsg_string_pair_put(fmsg, "Status", "unhealthy");
else
err = devlink_fmsg_string_pair_put(fmsg, "Status", "healthy");
mutex_unlock(&pdsc->config_lock);
if (err)
return err;
err = devlink_fmsg_u32_pair_put(fmsg, "State",
pdsc->fw_status &
~PDS_CORE_FW_STS_F_GENERATION);
if (err)
return err;
err = devlink_fmsg_u32_pair_put(fmsg, "Generation",
pdsc->fw_generation >> 4);
if (err)
return err;
return devlink_fmsg_u32_pair_put(fmsg, "Recoveries",
pdsc->fw_recoveries);
}
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2023 Advanced Micro Devices, Inc */
#include "core.h"
/* The worst case wait for the install activity is about 25 minutes when
* installing a new CPLD, which is very seldom. Normal is about 30-35
* seconds. Since the driver can't tell if a CPLD update will happen we
* set the timeout for the ugly case.
*/
#define PDSC_FW_INSTALL_TIMEOUT (25 * 60)
#define PDSC_FW_SELECT_TIMEOUT 30
/* Number of periodic log updates during fw file download */
#define PDSC_FW_INTERVAL_FRACTION 32
static int pdsc_devcmd_fw_download_locked(struct pdsc *pdsc, u64 addr,
					  u32 offset, u32 length)
{
	union pds_core_dev_cmd cmd = {
		.fw_download.opcode = PDS_CORE_CMD_FW_DOWNLOAD,
		.fw_download.offset = cpu_to_le32(offset),
		.fw_download.addr = cpu_to_le64(addr),
		.fw_download.length = cpu_to_le32(length),
	};
	union pds_core_dev_comp comp;

	return pdsc_devcmd_locked(pdsc, &cmd, &comp, pdsc->devcmd_timeout);
}

static int pdsc_devcmd_fw_install(struct pdsc *pdsc)
{
	union pds_core_dev_cmd cmd = {
		.fw_control.opcode = PDS_CORE_CMD_FW_CONTROL,
		.fw_control.oper = PDS_CORE_FW_INSTALL_ASYNC
	};
	union pds_core_dev_comp comp;
	int err;

	err = pdsc_devcmd(pdsc, &cmd, &comp, pdsc->devcmd_timeout);
	if (err < 0)
		return err;

	return comp.fw_control.slot;
}

static int pdsc_devcmd_fw_activate(struct pdsc *pdsc,
				   enum pds_core_fw_slot slot)
{
	union pds_core_dev_cmd cmd = {
		.fw_control.opcode = PDS_CORE_CMD_FW_CONTROL,
		.fw_control.oper = PDS_CORE_FW_ACTIVATE_ASYNC,
		.fw_control.slot = slot
	};
	union pds_core_dev_comp comp;

	return pdsc_devcmd(pdsc, &cmd, &comp, pdsc->devcmd_timeout);
}

static int pdsc_fw_status_long_wait(struct pdsc *pdsc,
				    const char *label,
				    unsigned long timeout,
				    u8 fw_cmd,
				    struct netlink_ext_ack *extack)
{
	union pds_core_dev_cmd cmd = {
		.fw_control.opcode = PDS_CORE_CMD_FW_CONTROL,
		.fw_control.oper = fw_cmd,
	};
	union pds_core_dev_comp comp;
	unsigned long start_time;
	unsigned long end_time;
	int err;

	/* Ping on the status of the long running async install
	 * command.  We get EAGAIN while the command is still
	 * running, else we get the final command status.
	 */
	start_time = jiffies;
	end_time = start_time + (timeout * HZ);
	do {
		err = pdsc_devcmd(pdsc, &cmd, &comp, pdsc->devcmd_timeout);
		msleep(20);
	} while (time_before(jiffies, end_time) &&
		 (err == -EAGAIN || err == -ETIMEDOUT));

	if (err == -EAGAIN || err == -ETIMEDOUT) {
		NL_SET_ERR_MSG_MOD(extack, "Firmware wait timed out");
		dev_err(pdsc->dev, "DEV_CMD firmware wait %s timed out\n",
			label);
	} else if (err) {
		NL_SET_ERR_MSG_MOD(extack, "Firmware wait failed");
	}

	return err;
}

int pdsc_firmware_update(struct pdsc *pdsc, const struct firmware *fw,
			 struct netlink_ext_ack *extack)
{
	u32 buf_sz, copy_sz, offset;
	struct devlink *dl;
	int next_interval;
	u64 data_addr;
	int err = 0;
	int fw_slot;

	dev_info(pdsc->dev, "Installing firmware\n");

	dl = priv_to_devlink(pdsc);
	devlink_flash_update_status_notify(dl, "Preparing to flash",
					   NULL, 0, 0);

	buf_sz = sizeof(pdsc->cmd_regs->data);

	dev_dbg(pdsc->dev,
		"downloading firmware - size %d part_sz %d nparts %lu\n",
		(int)fw->size, buf_sz, DIV_ROUND_UP(fw->size, buf_sz));

	offset = 0;
	next_interval = 0;
	data_addr = offsetof(struct pds_core_dev_cmd_regs, data);
	while (offset < fw->size) {
		if (offset >= next_interval) {
			devlink_flash_update_status_notify(dl, "Downloading",
							   NULL, offset,
							   fw->size);
			next_interval = offset +
					(fw->size / PDSC_FW_INTERVAL_FRACTION);
		}

		copy_sz = min_t(unsigned int, buf_sz, fw->size - offset);
		mutex_lock(&pdsc->devcmd_lock);
		memcpy_toio(&pdsc->cmd_regs->data, fw->data + offset, copy_sz);
		err = pdsc_devcmd_fw_download_locked(pdsc, data_addr,
						     offset, copy_sz);
		mutex_unlock(&pdsc->devcmd_lock);
		if (err) {
			dev_err(pdsc->dev,
				"download failed offset 0x%x addr 0x%llx len 0x%x: %pe\n",
				offset, data_addr, copy_sz, ERR_PTR(err));
			NL_SET_ERR_MSG_MOD(extack, "Segment download failed");
			goto err_out;
		}
		offset += copy_sz;
	}
	devlink_flash_update_status_notify(dl, "Downloading", NULL,
					   fw->size, fw->size);

	devlink_flash_update_timeout_notify(dl, "Installing", NULL,
					    PDSC_FW_INSTALL_TIMEOUT);

	fw_slot = pdsc_devcmd_fw_install(pdsc);
	if (fw_slot < 0) {
		err = fw_slot;
		dev_err(pdsc->dev, "install failed: %pe\n", ERR_PTR(err));
		NL_SET_ERR_MSG_MOD(extack, "Failed to start firmware install");
		goto err_out;
	}

	err = pdsc_fw_status_long_wait(pdsc, "Installing",
				       PDSC_FW_INSTALL_TIMEOUT,
				       PDS_CORE_FW_INSTALL_STATUS,
				       extack);
	if (err)
		goto err_out;

	devlink_flash_update_timeout_notify(dl, "Selecting", NULL,
					    PDSC_FW_SELECT_TIMEOUT);

	err = pdsc_devcmd_fw_activate(pdsc, fw_slot);
	if (err) {
		NL_SET_ERR_MSG_MOD(extack, "Failed to start firmware select");
		goto err_out;
	}

	err = pdsc_fw_status_long_wait(pdsc, "Selecting",
				       PDSC_FW_SELECT_TIMEOUT,
				       PDS_CORE_FW_ACTIVATE_STATUS,
				       extack);
	if (err)
		goto err_out;

	dev_info(pdsc->dev, "Firmware update completed, slot %d\n", fw_slot);

err_out:
	if (err)
		devlink_flash_update_status_notify(dl, "Flash failed",
						   NULL, 0, 0);
	else
		devlink_flash_update_status_notify(dl, "Flash done",
						   NULL, 0, 0);
	return err;
}
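
For context on how this entry point gets exercised: the devlink core fetches the firmware image and hands it to the driver's .flash_update callback, which then only needs to forward it here. A minimal sketch of such a callback follows; the function name is illustrative and not necessarily the driver's exact one.

static int pdsc_dl_flash_update(struct devlink *dl,
				struct devlink_flash_update_params *params,
				struct netlink_ext_ack *extack)
{
	struct pdsc *pdsc = devlink_priv(dl);

	/* devlink core has already fetched the image; params->fw is valid */
	return pdsc_firmware_update(pdsc, params->fw, extack);
}

From userspace this would be driven with something like `devlink dev flash pci/0000:09:00.0 file pds_fw.bin`, where the file name is only an example.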
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2023 Advanced Micro Devices, Inc */

#ifndef _PDSC_AUXBUS_H_
#define _PDSC_AUXBUS_H_

#include <linux/auxiliary_bus.h>

struct pds_auxiliary_dev {
	struct auxiliary_device aux_dev;
	struct pci_dev *vf_pdev;
	u16 client_id;
};

int pds_client_adminq_cmd(struct pds_auxiliary_dev *padev,
			  union pds_core_adminq_cmd *req,
			  size_t req_len,
			  union pds_core_adminq_comp *resp,
			  u64 flags);
#endif /* _PDSC_AUXBUS_H_ */
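
To show how a client is expected to use this handle: the auxiliary driver bound to the pds_core-created device recovers the pds_auxiliary_dev from the auxiliary_device and sends its control-path requests through pds_client_adminq_cmd(). The sketch below is illustrative only; a real client (pds_vdpa, pds_vfio) fills in its own feature-specific command from the adminq definitions rather than the empty placeholder used here.

static int pds_client_example_probe(struct auxiliary_device *aux_dev,
				    const struct auxiliary_device_id *id)
{
	struct pds_auxiliary_dev *padev =
		container_of(aux_dev, struct pds_auxiliary_dev, aux_dev);
	union pds_core_adminq_cmd cmd = {};	/* placeholder request */
	union pds_core_adminq_comp comp = {};
	int err;

	/* a real client builds its feature-specific request here ... */

	err = pds_client_adminq_cmd(padev, &cmd, sizeof(cmd), &comp, 0);
	if (err)
		dev_err(&aux_dev->dev, "adminq cmd failed: %pe\n", ERR_PTR(err));

	return err;
}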
/* SPDX-License-Identifier: (GPL-2.0 OR Linux-OpenIB) OR BSD-2-Clause */
/* Copyright(c) 2023 Advanced Micro Devices, Inc. */

#ifndef _PDS_COMMON_H_
#define _PDS_COMMON_H_

#define PDS_CORE_DRV_NAME	"pds_core"

/* the device's internal addressing uses up to 52 bits */
#define PDS_CORE_ADDR_LEN	52
#define PDS_CORE_ADDR_MASK	(BIT_ULL(PDS_CORE_ADDR_LEN) - 1)
#define PDS_PAGE_SIZE		4096
enum pds_core_driver_type {
	PDS_DRIVER_LINUX   = 1,
	PDS_DRIVER_WIN     = 2,
	PDS_DRIVER_DPDK    = 3,
	PDS_DRIVER_FREEBSD = 4,
	PDS_DRIVER_IPXE    = 5,
	PDS_DRIVER_ESXI    = 6,
};

enum pds_core_vif_types {
	PDS_DEV_TYPE_CORE	= 0,
	PDS_DEV_TYPE_VDPA	= 1,
	PDS_DEV_TYPE_VFIO	= 2,
	PDS_DEV_TYPE_ETH	= 3,
	PDS_DEV_TYPE_RDMA	= 4,
	PDS_DEV_TYPE_LM		= 5,

	/* new ones added before this line */
	PDS_DEV_TYPE_MAX	= 16	/* don't change - used in struct size */
};

#define PDS_DEV_TYPE_CORE_STR	"Core"
#define PDS_DEV_TYPE_VDPA_STR	"vDPA"
#define PDS_DEV_TYPE_VFIO_STR	"VFio"
#define PDS_DEV_TYPE_ETH_STR	"Eth"
#define PDS_DEV_TYPE_RDMA_STR	"RDMA"
#define PDS_DEV_TYPE_LM_STR	"LM"

#define PDS_CORE_IFNAMSIZ	16

/**
 * enum pds_core_logical_qtype - Logical Queue Types
 * @PDS_CORE_QTYPE_ADMINQ:  Administrative Queue
 * @PDS_CORE_QTYPE_NOTIFYQ: Notify Queue
 * @PDS_CORE_QTYPE_RXQ:     Receive Queue
 * @PDS_CORE_QTYPE_TXQ:     Transmit Queue
 * @PDS_CORE_QTYPE_EQ:      Event Queue
 * @PDS_CORE_QTYPE_MAX:     Max queue type supported
 */
enum pds_core_logical_qtype {
	PDS_CORE_QTYPE_ADMINQ  = 0,
	PDS_CORE_QTYPE_NOTIFYQ = 1,
	PDS_CORE_QTYPE_RXQ     = 2,
	PDS_CORE_QTYPE_TXQ     = 3,
	PDS_CORE_QTYPE_EQ      = 4,
	PDS_CORE_QTYPE_MAX     = 16	/* don't change - used in struct size */
};

int pdsc_register_notify(struct notifier_block *nb);
void pdsc_unregister_notify(struct notifier_block *nb);
void *pdsc_get_pf_struct(struct pci_dev *vf_pdev);
int pds_client_register(struct pci_dev *pf_pdev, char *devname);
int pds_client_unregister(struct pci_dev *pf_pdev, u16 client_id);
#endif /* _PDS_COMMON_H_ */
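
As a rough illustration of the registration flow declared above: a VF feature driver locates its parent PF, registers itself with pds_core under a device name, and gets back a client_id to use on the adminq. Everything in this sketch (the function, the name format, the use of pci_iov_vf_id()) is illustrative rather than lifted from the driver.

static int pds_example_vf_attach(struct pci_dev *vf_pdev)
{
	struct pci_dev *pf_pdev = pci_physfn(vf_pdev);
	char devname[PDS_CORE_IFNAMSIZ + 8];
	int client_id;

	/* e.g. "pds_core.vDPA.<vf index>"; the format here is illustrative */
	snprintf(devname, sizeof(devname), "%s.%s.%d",
		 PDS_CORE_DRV_NAME, PDS_DEV_TYPE_VDPA_STR,
		 pci_iov_vf_id(vf_pdev));

	client_id = pds_client_register(pf_pdev, devname);
	if (client_id < 0)
		return client_id;

	/* ... adminq traffic uses client_id; on teardown:
	 * pds_client_unregister(pf_pdev, client_id);
	 */
	return 0;
}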
/* SPDX-License-Identifier: (GPL-2.0 OR Linux-OpenIB) OR BSD-2-Clause */
/* Copyright(c) 2023 Advanced Micro Devices, Inc. */

#ifndef _PDS_INTR_H_
#define _PDS_INTR_H_

/*
 * Interrupt control register
 * @coal_init:        Coalescing timer initial value, in
 *                    device units.  Use @identity->intr_coal_mult
 *                    and @identity->intr_coal_div to convert from
 *                    usecs to device units:
 *
 *                      coal_init = coal_usecs * coal_mult / coal_div
 *
 *                    When an interrupt is sent the interrupt
 *                    coalescing timer current value
 *                    (@coalescing_curr) is initialized with this
 *                    value and begins counting down.  No more
 *                    interrupts are sent until the coalescing
 *                    timer reaches 0.  When @coalescing_init=0
 *                    interrupt coalescing is effectively disabled
 *                    and every interrupt assert results in an
 *                    interrupt.  Reset value: 0
 * @mask:             Interrupt mask.  When @mask=1 the interrupt
 *                    resource will not send an interrupt.  When
 *                    @mask=0 the interrupt resource will send an
 *                    interrupt if an interrupt event is pending
 *                    or on the next interrupt assertion event.
 *                    Reset value: 1
 * @credits:          Interrupt credits.  This register indicates
 *                    how many interrupt events the hardware has
 *                    sent.  When written by software this
 *                    register atomically decrements @int_credits
 *                    by the value written.  When @int_credits
 *                    becomes 0 then the "pending interrupt" bit
 *                    in the Interrupt Status register is cleared
 *                    by the hardware and any pending but unsent
 *                    interrupts are cleared.
 *                    !!!IMPORTANT!!!  This is a signed register.
 * @flags:            Interrupt control flags
 *                    @unmask -- When this bit is written with a 1
 *                    the interrupt resource will set mask=0.
 *                    @coal_timer_reset -- When this
 *                    bit is written with a 1 the
 *                    @coalescing_curr will be reloaded with
 *                    @coalescing_init to reset the coalescing
 *                    timer.
 * @mask_on_assert:   Automatically mask on assertion.  When
 *                    @mask_on_assert=1 the interrupt resource
 *                    will set @mask=1 whenever an interrupt is
 *                    sent.  When using interrupts in Legacy
 *                    Interrupt mode the driver must select
 *                    @mask_on_assert=0 for proper interrupt
 *                    operation.
 * @coalescing_curr:  Coalescing timer current value, in
 *                    microseconds.  When this value reaches 0
 *                    the interrupt resource is again eligible to
 *                    send an interrupt.  If an interrupt event
 *                    is already pending when @coalescing_curr
 *                    reaches 0 the pending interrupt will be
 *                    sent, otherwise an interrupt will be sent
 *                    on the next interrupt assertion event.
 */
struct pds_core_intr {
	u32 coal_init;
	u32 mask;
	u16 credits;
	u16 flags;
#define PDS_CORE_INTR_F_UNMASK		0x0001
#define PDS_CORE_INTR_F_TIMER_RESET	0x0002
	u32 mask_on_assert;
	u32 coalescing_curr;
	u32 rsvd6[3];
};
#ifndef __CHECKER__
static_assert(sizeof(struct pds_core_intr) == 32);
#endif /* __CHECKER__ */

#define PDS_CORE_INTR_CTRL_REGS_MAX		2048
#define PDS_CORE_INTR_CTRL_COAL_MAX		0x3F
#define PDS_CORE_INTR_INDEX_NOT_ASSIGNED	-1

struct pds_core_intr_status {
	u32 status[2];
};
/**
 * enum pds_core_intr_mask_vals - valid values for mask and mask_assert.
 * @PDS_CORE_INTR_MASK_CLEAR:	unmask interrupt.
 * @PDS_CORE_INTR_MASK_SET:	mask interrupt.
 */
enum pds_core_intr_mask_vals {
	PDS_CORE_INTR_MASK_CLEAR	= 0,
	PDS_CORE_INTR_MASK_SET		= 1,
};

/**
 * enum pds_core_intr_credits_bits - Bitwise composition of credits values.
 * @PDS_CORE_INTR_CRED_COUNT:		bit mask of credit count, no shift needed.
 * @PDS_CORE_INTR_CRED_COUNT_SIGNED:	bit mask of credit count, including sign bit.
 * @PDS_CORE_INTR_CRED_UNMASK:		unmask the interrupt.
 * @PDS_CORE_INTR_CRED_RESET_COALESCE:	reset the coalesce timer.
 * @PDS_CORE_INTR_CRED_REARM:		unmask the interrupt and reset the timer.
 */
enum pds_core_intr_credits_bits {
	PDS_CORE_INTR_CRED_COUNT		= 0x7fffu,
	PDS_CORE_INTR_CRED_COUNT_SIGNED		= 0xffffu,
	PDS_CORE_INTR_CRED_UNMASK		= 0x10000u,
	PDS_CORE_INTR_CRED_RESET_COALESCE	= 0x20000u,
	PDS_CORE_INTR_CRED_REARM		= (PDS_CORE_INTR_CRED_UNMASK |
						   PDS_CORE_INTR_CRED_RESET_COALESCE),
};
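
/* For example (illustrative): writing (2 | PDS_CORE_INTR_CRED_REARM) to the
 * credits register returns two handled events, unmasks the interrupt, and
 * restarts the coalescing timer -- the pattern pds_core_intr_credits()
 * below performs when called with PDS_CORE_INTR_CRED_REARM as flags.
 */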
static inline void
pds_core_intr_coal_init(struct pds_core_intr __iomem *intr_ctrl, u32 coal)
{
	iowrite32(coal, &intr_ctrl->coal_init);
}

static inline void
pds_core_intr_mask(struct pds_core_intr __iomem *intr_ctrl, u32 mask)
{
	iowrite32(mask, &intr_ctrl->mask);
}

static inline void
pds_core_intr_credits(struct pds_core_intr __iomem *intr_ctrl,
		      u32 cred, u32 flags)
{
	if (WARN_ON_ONCE(cred > PDS_CORE_INTR_CRED_COUNT)) {
		cred = ioread32(&intr_ctrl->credits);
		cred &= PDS_CORE_INTR_CRED_COUNT_SIGNED;
	}

	iowrite32(cred | flags, &intr_ctrl->credits);
}

static inline void
pds_core_intr_clean_flags(struct pds_core_intr __iomem *intr_ctrl, u32 flags)
{
	u32 cred;

	cred = ioread32(&intr_ctrl->credits);
	cred &= PDS_CORE_INTR_CRED_COUNT_SIGNED;
	cred |= flags;
	iowrite32(cred, &intr_ctrl->credits);
}

static inline void
pds_core_intr_clean(struct pds_core_intr __iomem *intr_ctrl)
{
	pds_core_intr_clean_flags(intr_ctrl, PDS_CORE_INTR_CRED_RESET_COALESCE);
}

static inline void
pds_core_intr_mask_assert(struct pds_core_intr __iomem *intr_ctrl, u32 mask)
{
	iowrite32(mask, &intr_ctrl->mask_on_assert);
}

#endif /* _PDS_INTR_H_ */
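
To make the intended usage of these helpers concrete, here is a minimal interrupt-handler sketch. It is not part of the driver; it assumes <linux/interrupt.h> and that the handler's data pointer is the mapped control register block for its vector.

static irqreturn_t example_pds_isr(int irq, void *data)
{
	struct pds_core_intr __iomem *intr_ctrl = data;

	/* mask further assertions while this event is handled */
	pds_core_intr_mask(intr_ctrl, PDS_CORE_INTR_MASK_SET);

	/* ... process the queue work for this vector here ... */

	/* return one credit, unmask, and restart the coalescing timer */
	pds_core_intr_credits(intr_ctrl, 1, PDS_CORE_INTR_CRED_REARM);

	return IRQ_HANDLED;
}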