Commit 033af36d authored by Linus Torvalds

Merge tag 'cxl-for-6.12' of git://git.kernel.org/pub/scm/linux/kernel/git/cxl/cxl

Pull compute express link (cxl) updates from Dave Jiang:
 "Major changes address HDM decoder initialization from DVSEC ranges,
  refactoring the code related to cxl mailboxes to be independent of the
  memory devices, and adding support for shared upstream link
  access_coordinate calculation, as well as a change to remove locking
  from memory notifier callback.

  In addition, a number of misc cleanups and refactoring of the code are
  also included.

  Address HDM decoder initialization from DVSEC ranges:
   - Only register non-zero DVSEC ranges
   - Remove duplicate implementation of waiting for memory_info_valid
   - Simplify the checking of mem_enabled in cxl_hdm_decode_init()

  Refactor the code related to cxl mailboxes to be independent of the memory devices:
   - Move cxl headers in include/linux/ to include/cxl
   - Move all mailbox related data to 'struct cxl_mailbox'
   - Refactor mailbox APIs with 'struct cxl_mailbox' as input instead of
     memory device state

  Add support for shared upstream link access_coordinate calculation for
  configurations that have multiple targets under a switch or a root
  port, where the aggregated bandwidth can be greater than that of the
  switch/RP upstream link:
   - Preserve the CDAT access_coordinate from an endpoint
   - Add the support for shared upstream link access_coordinate calculation
   - Add documentation to explain how the calculations are done

  Remove locking from memory notifier callback.

  Misc cleanups:
   - Convert devm_cxl_add_root() to return using ERR_CAST()
   - cxl_test use dev_is_platform() instead of open coding
   - Remove duplicate include of header core.h in core/cdat.c
   - use scoped resource management to drop put_device() for cxl_port
   - Use scoped_guard to drop device_lock() for cxl_port
   - Refactor __devm_cxl_add_port() to drop gotos
   - Rename cxl_setup_parent_dport to cxl_dport_init_aer and
     cxl_dport_map_regs() to cxl_dport_map_ras()
   - Refactor cxl_dport_init_aer() to be more concise
   - Remove duplicate host_bridge->native_aer checking in
     cxl_dport_init_ras_reporting()
   - Fix comment for cxl_query_cmd()"

* tag 'cxl-for-6.12' of git://git.kernel.org/pub/scm/linux/kernel/git/cxl/cxl: (21 commits)
  cxl: Add documentation to explain the shared link bandwidth calculation
  cxl: Calculate region bandwidth of targets with shared upstream link
  cxl: Preserve the CDAT access_coordinate for an endpoint
  cxl: Fix comment regarding cxl_query_cmd() return data
  cxl: Convert cxl_internal_send_cmd() to use 'struct cxl_mailbox' as input
  cxl: Move mailbox related bits to the same context
  cxl: move cxl headers to new include/cxl/ directory
  cxl/region: Remove lock from memory notifier callback
  cxl/pci: simplify the check of mem_enabled in cxl_hdm_decode_init()
  cxl/pci: Check Mem_info_valid bit for each applicable DVSEC
  cxl/pci: Remove duplicated implementation of waiting for memory_info_valid
  cxl/pci: Fix to record only non-zero ranges
  cxl/pci: Remove duplicate host_bridge->native_aer checking
  cxl/pci: cxl_dport_map_rch_aer() cleanup
  cxl/pci: Rename cxl_setup_parent_dport() and cxl_dport_map_regs()
  cxl/port: Refactor __devm_cxl_add_port() to drop goto pattern
  cxl/port: Use scoped_guard()/guard() to drop device_lock() for cxl_port
  cxl/port: Use __free() to drop put_device() for cxl_port
  cxl: Remove duplicate included header file core.h
  tools/testing/cxl: Use dev_is_platform()
  ...
parents eee28084 2c70677d
.. SPDX-License-Identifier: GPL-2.0
.. include:: <isonum.txt>
==================================
CXL Access Coordinates Computation
==================================
Shared Upstream Link Calculation
================================
For certain CXL region constructions with endpoints behind CXL switches (SW) or
Root Ports (RP), the total bandwidth of all the endpoints behind a switch can
exceed the bandwidth of the switch's upstream link. A similar situation can
occur within the host, upstream of the root ports. After all the targets for a
region have arrived, the CXL driver performs an additional pass to recalculate
the bandwidths with the possibility of an upstream link being the limiting
factor in mind.
The algorithm assumes the configuration is a symmetric topology, as that
maximizes performance. When an asymmetric topology is detected, the calculation
is aborted. Asymmetry is detected during the topology walk when the number of
RPs detected as grandparents does not equal the number of devices iterated over
in the same loop. The assumption is that subtle asymmetry in properties does
not happen and that all paths to EPs are equal.
There can be multiple switches under an RP. There can be multiple RPs under
a CXL Host Bridge (HB). There can be multiple HBs under a CXL Fixed Memory
Window Structure (CFMWS).
An example hierarchy::

                  CFMWS 0
                    |
           _________|_________
          |                   |
    ACPI0017-0            ACPI0017-1
    GP0/HB0/ACPI0016-0    GP1/HB1/ACPI0016-1
      |        |            |        |
     RP0      RP1          RP2      RP3
      |        |            |        |
     SW 0     SW 1         SW 2     SW 3
     |  |     |  |         |  |     |  |
    EP0 EP1  EP2 EP3      EP4 EP5  EP6 EP7
Computation for the example hierarchy::

  Min (GP0 to CPU BW,
       Min(SW 0 Upstream Link to RP0 BW,
           Min(SW0SSLBIS for SW0DSP0 (EP0), EP0 DSLBIS, EP0 Upstream Link) +
           Min(SW0SSLBIS for SW0DSP1 (EP1), EP1 DSLBIS, EP1 Upstream Link)) +
       Min(SW 1 Upstream Link to RP1 BW,
           Min(SW1SSLBIS for SW1DSP0 (EP2), EP2 DSLBIS, EP2 Upstream Link) +
           Min(SW1SSLBIS for SW1DSP1 (EP3), EP3 DSLBIS, EP3 Upstream Link))) +
  Min (GP1 to CPU BW,
       Min(SW 2 Upstream Link to RP2 BW,
           Min(SW2SSLBIS for SW2DSP0 (EP4), EP4 DSLBIS, EP4 Upstream Link) +
           Min(SW2SSLBIS for SW2DSP1 (EP5), EP5 DSLBIS, EP5 Upstream Link)) +
       Min(SW 3 Upstream Link to RP3 BW,
           Min(SW3SSLBIS for SW3DSP0 (EP6), EP6 DSLBIS, EP6 Upstream Link) +
           Min(SW3SSLBIS for SW3DSP1 (EP7), EP7 DSLBIS, EP7 Upstream Link))))
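
Purely as an illustration of the arithmetic (these numbers are hypothetical and
not taken from the source): assume every endpoint-level value (each EP's
DSLBIS, its upstream link, and the SSLBIS for its switch downstream port) is
16 GB/s, every switch upstream link to its RP is 24 GB/s, and every GP-to-CPU
path is 40 GB/s. Each inner Min() is then 16 GB/s, each switch term is
Min(24, 16 + 16) = 24 GB/s, each GP term is Min(40, 24 + 24) = 40 GB/s, and the
resulting region bandwidth is 40 + 40 = 80 GB/s.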
The calculation starts at cxl_region_shared_upstream_bandwidth_update(). An
xarray is created to collect all the endpoint bandwidths via the
cxl_endpoint_gather_bandwidth() function. The min() of the bandwidth from the
endpoint CDAT and the upstream link bandwidth is calculated. If the endpoint
has a CXL switch as a parent, then the min() of the calculated bandwidth and
the bandwidth from the SSLBIS for the switch downstream port associated with
the endpoint is calculated. The final bandwidth is stored in a
'struct cxl_perf_ctx' in the xarray, indexed by a device pointer. If the
endpoint is directly attached to a root port (RP), the device pointer is the
RP device. If the endpoint is behind a switch, the device pointer is the
upstream device of the parent switch.
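
As a rough, standalone sketch of the endpoint stage described above (this is
not the kernel implementation; struct ep_perf, ep_min_bandwidth() and the field
names are invented for the example), the per-endpoint contribution can be
modeled as::

  #include <stdbool.h>

  /* Hypothetical per-endpoint inputs, all in MB/s (invented for this sketch). */
  struct ep_perf {
          unsigned int cdat_bw;     /* bandwidth from the endpoint CDAT (DSLBIS) */
          unsigned int link_bw;     /* endpoint upstream link bandwidth */
          bool behind_switch;       /* true if the parent is a CXL switch */
          unsigned int sslbis_bw;   /* switch SSLBIS for the matching DSP */
  };

  static unsigned int min_u(unsigned int a, unsigned int b)
  {
          return a < b ? a : b;
  }

  /* min() chain for one endpoint, mirroring the description above */
  static unsigned int ep_min_bandwidth(const struct ep_perf *ep)
  {
          unsigned int bw = min_u(ep->cdat_bw, ep->link_bw);

          if (ep->behind_switch)
                  bw = min_u(bw, ep->sslbis_bw);
          return bw;
  }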
At the next stage, the code walks through one or more switches if they exist
in the topology. For endpoints directly attached to RPs, this step is skipped.
If there is another switch upstream, the code takes the min() of the currently
gathered bandwidth and the upstream link bandwidth; if that upstream device is
itself a switch, the bandwidth from the SSLBIS of the upstream switch is also
factored into the min().
Once the topology walk reaches the RP, whether from directly attached endpoints
or from walking through the switch(es), cxl_rp_gather_bandwidth() is called. At
this point all the bandwidths are aggregated per host bridge, which is also the
index for the resulting xarray.
The next step is to take the min() of the per-host-bridge bandwidth and the
bandwidth from the Generic Port (GP). The bandwidth for the GP is retrieved
from the ACPI SRAT/HMAT tables. The min bandwidths are then aggregated under
the same ACPI0017 device to form a new xarray.
Finally, cxl_region_update_bandwidth() is called, and the aggregated bandwidth
from all the members of the last xarray is used to update the access
coordinates residing in the cxl region (cxlr) context.
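
A second fragment in the same hypothetical sketch (again not the kernel code;
the fixed-size array stands in for the xarray keyed by a device pointer) shows
the aggregation pattern that repeats at the switch/RP level and again at the
host-bridge/GP level: sum the contributions from the level below, then clamp
the sum to the shared upstream bandwidth::

  /* A set of lower-level results that share one upstream link (hypothetical). */
  struct bw_group {
          unsigned int upstream_bw;   /* switch-to-RP link, or GP-to-CPU bandwidth */
          unsigned int member_bw[8];  /* bandwidths gathered from the level below */
          int nr_members;
  };

  /* Sum the members, then clamp to the shared upstream link. */
  static unsigned int group_min_bandwidth(const struct bw_group *g)
  {
          unsigned int sum = 0;

          for (int i = 0; i < g->nr_members; i++)
                  sum += g->member_bw[i];
          return sum < g->upstream_bw ? sum : g->upstream_bw;
  }

  /*
   * Region bandwidth for the example hierarchy: group_min_bandwidth() is
   * applied once per switch (members are endpoint results) and once per GP
   * (members are the per-RP results); the per-GP results are then summed
   * under the CFMWS to produce the value used to update the region's
   * access_coordinate.
   */
  static unsigned int region_bandwidth(const struct bw_group *gps, int nr_gps)
  {
          unsigned int total = 0;

          for (int i = 0; i < nr_gps; i++)
                  total += group_min_bandwidth(&gps[i]);
          return total;
  }

In the driver the same two-level pattern is realized with xarrays indexed by
the parent device pointer rather than fixed arrays.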
......@@ -8,6 +8,7 @@ Compute Express Link
:maxdepth: 1
memory-devices
access-coordinates
maturity-map
......
......@@ -5728,8 +5728,7 @@ L: linux-cxl@vger.kernel.org
S: Maintained
F: Documentation/driver-api/cxl
F: drivers/cxl/
F: include/linux/einj-cxl.h
F: include/linux/cxl-event.h
F: include/cxl/
F: include/uapi/linux/cxl_mem.h
F: tools/testing/cxl/
......
......@@ -7,9 +7,9 @@
*
* Author: Ben Cheatham <benjamin.cheatham@amd.com>
*/
#include <linux/einj-cxl.h>
#include <linux/seq_file.h>
#include <linux/pci.h>
#include <cxl/einj.h>
#include "apei-internal.h"
......
......@@ -27,7 +27,6 @@
#include <linux/timer.h>
#include <linux/cper.h>
#include <linux/cleanup.h>
#include <linux/cxl-event.h>
#include <linux/platform_device.h>
#include <linux/mutex.h>
#include <linux/ratelimit.h>
......@@ -50,6 +49,7 @@
#include <acpi/apei.h>
#include <asm/fixmap.h>
#include <asm/tlbflush.h>
#include <cxl/event.h>
#include <ras/ras_event.h>
#include "apei-internal.h"
......
This diff is collapsed.
......@@ -103,9 +103,11 @@ enum cxl_poison_trace_type {
};
long cxl_pci_get_latency(struct pci_dev *pdev);
int cxl_pci_get_bandwidth(struct pci_dev *pdev, struct access_coordinate *c);
int cxl_update_hmat_access_coordinates(int nid, struct cxl_region *cxlr,
enum access_coordinate_class access);
bool cxl_need_node_perf_attrs_update(int nid);
int cxl_port_get_switch_dport_bandwidth(struct cxl_port *port,
struct access_coordinate *c);
#endif /* __CXL_CORE_H__ */
This diff is collapsed.
......@@ -58,7 +58,7 @@ static ssize_t payload_max_show(struct device *dev,
if (!mds)
return sysfs_emit(buf, "\n");
return sysfs_emit(buf, "%zu\n", mds->payload_size);
return sysfs_emit(buf, "%zu\n", cxlds->cxl_mbox.payload_size);
}
static DEVICE_ATTR_RO(payload_max);
......@@ -124,15 +124,16 @@ static ssize_t security_state_show(struct device *dev,
{
struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
struct cxl_dev_state *cxlds = cxlmd->cxlds;
struct cxl_mailbox *cxl_mbox = &cxlds->cxl_mbox;
struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds);
unsigned long state = mds->security.state;
int rc = 0;
/* sync with latest submission state */
mutex_lock(&mds->mbox_mutex);
mutex_lock(&cxl_mbox->mbox_mutex);
if (mds->security.sanitize_active)
rc = sysfs_emit(buf, "sanitize\n");
mutex_unlock(&mds->mbox_mutex);
mutex_unlock(&cxl_mbox->mbox_mutex);
if (rc)
return rc;
......@@ -277,7 +278,7 @@ static int cxl_validate_poison_dpa(struct cxl_memdev *cxlmd, u64 dpa)
int cxl_inject_poison(struct cxl_memdev *cxlmd, u64 dpa)
{
struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds);
struct cxl_mailbox *cxl_mbox = &cxlmd->cxlds->cxl_mbox;
struct cxl_mbox_inject_poison inject;
struct cxl_poison_record record;
struct cxl_mbox_cmd mbox_cmd;
......@@ -307,13 +308,13 @@ int cxl_inject_poison(struct cxl_memdev *cxlmd, u64 dpa)
.size_in = sizeof(inject),
.payload_in = &inject,
};
rc = cxl_internal_send_cmd(mds, &mbox_cmd);
rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
if (rc)
goto out;
cxlr = cxl_dpa_to_region(cxlmd, dpa);
if (cxlr)
dev_warn_once(mds->cxlds.dev,
dev_warn_once(cxl_mbox->host,
"poison inject dpa:%#llx region: %s\n", dpa,
dev_name(&cxlr->dev));
......@@ -332,7 +333,7 @@ EXPORT_SYMBOL_NS_GPL(cxl_inject_poison, CXL);
int cxl_clear_poison(struct cxl_memdev *cxlmd, u64 dpa)
{
struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds);
struct cxl_mailbox *cxl_mbox = &cxlmd->cxlds->cxl_mbox;
struct cxl_mbox_clear_poison clear;
struct cxl_poison_record record;
struct cxl_mbox_cmd mbox_cmd;
......@@ -371,13 +372,13 @@ int cxl_clear_poison(struct cxl_memdev *cxlmd, u64 dpa)
.payload_in = &clear,
};
rc = cxl_internal_send_cmd(mds, &mbox_cmd);
rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
if (rc)
goto out;
cxlr = cxl_dpa_to_region(cxlmd, dpa);
if (cxlr)
dev_warn_once(mds->cxlds.dev,
dev_warn_once(cxl_mbox->host,
"poison clear dpa:%#llx region: %s\n", dpa,
dev_name(&cxlr->dev));
......@@ -714,6 +715,7 @@ static int cxl_memdev_release_file(struct inode *inode, struct file *file)
*/
static int cxl_mem_get_fw_info(struct cxl_memdev_state *mds)
{
struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
struct cxl_mbox_get_fw_info info;
struct cxl_mbox_cmd mbox_cmd;
int rc;
......@@ -724,7 +726,7 @@ static int cxl_mem_get_fw_info(struct cxl_memdev_state *mds)
.payload_out = &info,
};
rc = cxl_internal_send_cmd(mds, &mbox_cmd);
rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
if (rc < 0)
return rc;
......@@ -748,6 +750,7 @@ static int cxl_mem_get_fw_info(struct cxl_memdev_state *mds)
*/
static int cxl_mem_activate_fw(struct cxl_memdev_state *mds, int slot)
{
struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
struct cxl_mbox_activate_fw activate;
struct cxl_mbox_cmd mbox_cmd;
......@@ -764,7 +767,7 @@ static int cxl_mem_activate_fw(struct cxl_memdev_state *mds, int slot)
activate.action = CXL_FW_ACTIVATE_OFFLINE;
activate.slot = slot;
return cxl_internal_send_cmd(mds, &mbox_cmd);
return cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
}
/**
......@@ -779,6 +782,7 @@ static int cxl_mem_activate_fw(struct cxl_memdev_state *mds, int slot)
*/
static int cxl_mem_abort_fw_xfer(struct cxl_memdev_state *mds)
{
struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
struct cxl_mbox_transfer_fw *transfer;
struct cxl_mbox_cmd mbox_cmd;
int rc;
......@@ -798,7 +802,7 @@ static int cxl_mem_abort_fw_xfer(struct cxl_memdev_state *mds)
transfer->action = CXL_FW_TRANSFER_ACTION_ABORT;
rc = cxl_internal_send_cmd(mds, &mbox_cmd);
rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
kfree(transfer);
return rc;
}
......@@ -829,12 +833,13 @@ static enum fw_upload_err cxl_fw_prepare(struct fw_upload *fwl, const u8 *data,
{
struct cxl_memdev_state *mds = fwl->dd_handle;
struct cxl_mbox_transfer_fw *transfer;
struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
if (!size)
return FW_UPLOAD_ERR_INVALID_SIZE;
mds->fw.oneshot = struct_size(transfer, data, size) <
mds->payload_size;
cxl_mbox->payload_size;
if (cxl_mem_get_fw_info(mds))
return FW_UPLOAD_ERR_HW_ERROR;
......@@ -854,6 +859,7 @@ static enum fw_upload_err cxl_fw_write(struct fw_upload *fwl, const u8 *data,
{
struct cxl_memdev_state *mds = fwl->dd_handle;
struct cxl_dev_state *cxlds = &mds->cxlds;
struct cxl_mailbox *cxl_mbox = &cxlds->cxl_mbox;
struct cxl_memdev *cxlmd = cxlds->cxlmd;
struct cxl_mbox_transfer_fw *transfer;
struct cxl_mbox_cmd mbox_cmd;
......@@ -877,7 +883,7 @@ static enum fw_upload_err cxl_fw_write(struct fw_upload *fwl, const u8 *data,
* sizeof(*transfer) is 128. These constraints imply that @cur_size
* will always be 128b aligned.
*/
cur_size = min_t(size_t, size, mds->payload_size - sizeof(*transfer));
cur_size = min_t(size_t, size, cxl_mbox->payload_size - sizeof(*transfer));
remaining = size - cur_size;
size_in = struct_size(transfer, data, cur_size);
......@@ -921,7 +927,7 @@ static enum fw_upload_err cxl_fw_write(struct fw_upload *fwl, const u8 *data,
.poll_count = 30,
};
rc = cxl_internal_send_cmd(mds, &mbox_cmd);
rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
if (rc < 0) {
rc = FW_UPLOAD_ERR_RW_ERROR;
goto out_free;
......@@ -1059,16 +1065,17 @@ EXPORT_SYMBOL_NS_GPL(devm_cxl_add_memdev, CXL);
static void sanitize_teardown_notifier(void *data)
{
struct cxl_memdev_state *mds = data;
struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
struct kernfs_node *state;
/*
* Prevent new irq triggered invocations of the workqueue and
* flush inflight invocations.
*/
mutex_lock(&mds->mbox_mutex);
mutex_lock(&cxl_mbox->mbox_mutex);
state = mds->security.sanitize_node;
mds->security.sanitize_node = NULL;
mutex_unlock(&mds->mbox_mutex);
mutex_unlock(&cxl_mbox->mbox_mutex);
cancel_delayed_work_sync(&mds->security.poll_dwork);
sysfs_put(state);
......
......@@ -211,37 +211,6 @@ int cxl_await_media_ready(struct cxl_dev_state *cxlds)
}
EXPORT_SYMBOL_NS_GPL(cxl_await_media_ready, CXL);
static int wait_for_valid(struct pci_dev *pdev, int d)
{
u32 val;
int rc;
/*
* Memory_Info_Valid: When set, indicates that the CXL Range 1 Size high
* and Size Low registers are valid. Must be set within 1 second of
* deassertion of reset to CXL device. Likely it is already set by the
* time this runs, but otherwise give a 1.5 second timeout in case of
* clock skew.
*/
rc = pci_read_config_dword(pdev, d + CXL_DVSEC_RANGE_SIZE_LOW(0), &val);
if (rc)
return rc;
if (val & CXL_DVSEC_MEM_INFO_VALID)
return 0;
msleep(1500);
rc = pci_read_config_dword(pdev, d + CXL_DVSEC_RANGE_SIZE_LOW(0), &val);
if (rc)
return rc;
if (val & CXL_DVSEC_MEM_INFO_VALID)
return 0;
return -ETIMEDOUT;
}
static int cxl_set_mem_enable(struct cxl_dev_state *cxlds, u16 val)
{
struct pci_dev *pdev = to_pci_dev(cxlds->dev);
......@@ -322,11 +291,13 @@ static int devm_cxl_enable_hdm(struct device *host, struct cxl_hdm *cxlhdm)
return devm_add_action_or_reset(host, disable_hdm, cxlhdm);
}
int cxl_dvsec_rr_decode(struct device *dev, int d,
int cxl_dvsec_rr_decode(struct device *dev, struct cxl_port *port,
struct cxl_endpoint_dvsec_info *info)
{
struct pci_dev *pdev = to_pci_dev(dev);
struct cxl_dev_state *cxlds = pci_get_drvdata(pdev);
int hdm_count, rc, i, ranges = 0;
int d = cxlds->cxl_dvsec;
u16 cap, ctrl;
if (!d) {
......@@ -353,12 +324,6 @@ int cxl_dvsec_rr_decode(struct device *dev, int d,
if (!hdm_count || hdm_count > 2)
return -EINVAL;
rc = wait_for_valid(pdev, d);
if (rc) {
dev_dbg(dev, "Failure awaiting MEM_INFO_VALID (%d)\n", rc);
return rc;
}
/*
* The current DVSEC values are moot if the memory capability is
* disabled, and they will remain moot after the HDM Decoder
......@@ -376,6 +341,10 @@ int cxl_dvsec_rr_decode(struct device *dev, int d,
u64 base, size;
u32 temp;
rc = cxl_dvsec_mem_range_valid(cxlds, i);
if (rc)
return rc;
rc = pci_read_config_dword(
pdev, d + CXL_DVSEC_RANGE_SIZE_HIGH(i), &temp);
if (rc)
......@@ -390,10 +359,6 @@ int cxl_dvsec_rr_decode(struct device *dev, int d,
size |= temp & CXL_DVSEC_MEM_SIZE_LOW_MASK;
if (!size) {
info->dvsec_range[i] = (struct range) {
.start = 0,
.end = CXL_RESOURCE_NONE,
};
continue;
}
......@@ -411,12 +376,10 @@ int cxl_dvsec_rr_decode(struct device *dev, int d,
base |= temp & CXL_DVSEC_MEM_BASE_LOW_MASK;
info->dvsec_range[i] = (struct range) {
info->dvsec_range[ranges++] = (struct range) {
.start = base,
.end = base + size - 1
};
ranges++;
}
info->ranges = ranges;
......@@ -463,7 +426,15 @@ int cxl_hdm_decode_init(struct cxl_dev_state *cxlds, struct cxl_hdm *cxlhdm,
return -ENODEV;
}
for (i = 0, allowed = 0; info->mem_enabled && i < info->ranges; i++) {
if (!info->mem_enabled) {
rc = devm_cxl_enable_hdm(&port->dev, cxlhdm);
if (rc)
return rc;
return devm_cxl_enable_mem(&port->dev, cxlds);
}
for (i = 0, allowed = 0; i < info->ranges; i++) {
struct device *cxld_dev;
cxld_dev = device_find_child(&root->dev, &info->dvsec_range[i],
......@@ -477,7 +448,7 @@ int cxl_hdm_decode_init(struct cxl_dev_state *cxlds, struct cxl_hdm *cxlhdm,
allowed++;
}
if (!allowed && info->mem_enabled) {
if (!allowed) {
dev_err(dev, "Range register decodes outside platform defined CXL ranges.\n");
return -ENXIO;
}
......@@ -491,14 +462,7 @@ int cxl_hdm_decode_init(struct cxl_dev_state *cxlds, struct cxl_hdm *cxlhdm,
* match. If at least one DVSEC range is enabled and allowed, skip HDM
* Decoder Capability Enable.
*/
if (info->mem_enabled)
return 0;
rc = devm_cxl_enable_hdm(&port->dev, cxlhdm);
if (rc)
return rc;
return devm_cxl_enable_mem(&port->dev, cxlds);
return 0;
}
EXPORT_SYMBOL_NS_GPL(cxl_hdm_decode_init, CXL);
......@@ -772,22 +736,20 @@ static bool cxl_handle_endpoint_ras(struct cxl_dev_state *cxlds)
static void cxl_dport_map_rch_aer(struct cxl_dport *dport)
{
struct cxl_rcrb_info *ri = &dport->rcrb;
void __iomem *dport_aer = NULL;
resource_size_t aer_phys;
struct device *host;
u16 aer_cap;
if (dport->rch && ri->aer_cap) {
aer_cap = cxl_rcrb_to_aer(dport->dport_dev, dport->rcrb.base);
if (aer_cap) {
host = dport->reg_map.host;
aer_phys = ri->aer_cap + ri->base;
dport_aer = devm_cxl_iomap_block(host, aer_phys,
sizeof(struct aer_capability_regs));
aer_phys = aer_cap + dport->rcrb.base;
dport->regs.dport_aer = devm_cxl_iomap_block(host, aer_phys,
sizeof(struct aer_capability_regs));
}
dport->regs.dport_aer = dport_aer;
}
static void cxl_dport_map_regs(struct cxl_dport *dport)
static void cxl_dport_map_ras(struct cxl_dport *dport)
{
struct cxl_register_map *map = &dport->reg_map;
struct device *dev = dport->dport_dev;
......@@ -797,22 +759,16 @@ static void cxl_dport_map_regs(struct cxl_dport *dport)
else if (cxl_map_component_regs(map, &dport->regs.component,
BIT(CXL_CM_CAP_CAP_ID_RAS)))
dev_dbg(dev, "Failed to map RAS capability.\n");
if (dport->rch)
cxl_dport_map_rch_aer(dport);
}
static void cxl_disable_rch_root_ints(struct cxl_dport *dport)
{
void __iomem *aer_base = dport->regs.dport_aer;
struct pci_host_bridge *bridge;
u32 aer_cmd_mask, aer_cmd;
if (!aer_base)
return;
bridge = to_pci_host_bridge(dport->dport_dev);
/*
* Disable RCH root port command interrupts.
* CXL 3.0 12.2.1.1 - RCH Downstream Port-detected Errors
......@@ -821,34 +777,35 @@ static void cxl_disable_rch_root_ints(struct cxl_dport *dport)
* the root cmd register's interrupts is required. But, PCI spec
* shows these are disabled by default on reset.
*/
if (bridge->native_aer) {
aer_cmd_mask = (PCI_ERR_ROOT_CMD_COR_EN |
PCI_ERR_ROOT_CMD_NONFATAL_EN |
PCI_ERR_ROOT_CMD_FATAL_EN);
aer_cmd = readl(aer_base + PCI_ERR_ROOT_COMMAND);
aer_cmd &= ~aer_cmd_mask;
writel(aer_cmd, aer_base + PCI_ERR_ROOT_COMMAND);
}
aer_cmd_mask = (PCI_ERR_ROOT_CMD_COR_EN |
PCI_ERR_ROOT_CMD_NONFATAL_EN |
PCI_ERR_ROOT_CMD_FATAL_EN);
aer_cmd = readl(aer_base + PCI_ERR_ROOT_COMMAND);
aer_cmd &= ~aer_cmd_mask;
writel(aer_cmd, aer_base + PCI_ERR_ROOT_COMMAND);
}
void cxl_setup_parent_dport(struct device *host, struct cxl_dport *dport)
/**
* cxl_dport_init_ras_reporting - Setup CXL RAS report on this dport
* @dport: the cxl_dport that needs to be initialized
* @host: host device for devm operations
*/
void cxl_dport_init_ras_reporting(struct cxl_dport *dport, struct device *host)
{
struct device *dport_dev = dport->dport_dev;
dport->reg_map.host = host;
cxl_dport_map_ras(dport);
if (dport->rch) {
struct pci_host_bridge *host_bridge = to_pci_host_bridge(dport_dev);
if (host_bridge->native_aer)
dport->rcrb.aer_cap = cxl_rcrb_to_aer(dport_dev, dport->rcrb.base);
}
struct pci_host_bridge *host_bridge = to_pci_host_bridge(dport->dport_dev);
dport->reg_map.host = host;
cxl_dport_map_regs(dport);
if (!host_bridge->native_aer)
return;
if (dport->rch)
cxl_dport_map_rch_aer(dport);
cxl_disable_rch_root_ints(dport);
}
}
EXPORT_SYMBOL_NS_GPL(cxl_setup_parent_dport, CXL);
EXPORT_SYMBOL_NS_GPL(cxl_dport_init_ras_reporting, CXL);
static void cxl_handle_rdport_cor_ras(struct cxl_dev_state *cxlds,
struct cxl_dport *dport)
......@@ -915,15 +872,13 @@ static void cxl_handle_rdport_errors(struct cxl_dev_state *cxlds)
struct pci_dev *pdev = to_pci_dev(cxlds->dev);
struct aer_capability_regs aer_regs;
struct cxl_dport *dport;
struct cxl_port *port;
int severity;
port = cxl_pci_find_port(pdev, &dport);
struct cxl_port *port __free(put_cxl_port) =
cxl_pci_find_port(pdev, &dport);
if (!port)
return;
put_device(&port->dev);
if (!cxl_rch_get_aer_info(dport->regs.dport_aer, &aer_regs))
return;
......@@ -1076,3 +1031,26 @@ bool cxl_endpoint_decoder_reset_detected(struct cxl_port *port)
__cxl_endpoint_decoder_reset_detected);
}
EXPORT_SYMBOL_NS_GPL(cxl_endpoint_decoder_reset_detected, CXL);
int cxl_pci_get_bandwidth(struct pci_dev *pdev, struct access_coordinate *c)
{
int speed, bw;
u16 lnksta;
u32 width;
speed = pcie_link_speed_mbps(pdev);
if (speed < 0)
return speed;
speed /= BITS_PER_BYTE;
pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnksta);
width = FIELD_GET(PCI_EXP_LNKSTA_NLW, lnksta);
bw = speed * width;
for (int i = 0; i < ACCESS_COORDINATE_MAX; i++) {
c[i].read_bandwidth = bw;
c[i].write_bandwidth = bw;
}
return 0;
}
This diff is collapsed.
......@@ -1983,6 +1983,7 @@ static int cxl_region_attach(struct cxl_region *cxlr,
* then the region is already committed.
*/
p->state = CXL_CONFIG_COMMIT;
cxl_region_shared_upstream_bandwidth_update(cxlr);
return 0;
}
......@@ -2004,6 +2005,7 @@ static int cxl_region_attach(struct cxl_region *cxlr,
if (rc)
return rc;
p->state = CXL_CONFIG_ACTIVE;
cxl_region_shared_upstream_bandwidth_update(cxlr);
}
cxled->cxld.interleave_ways = p->interleave_ways;
......@@ -2313,8 +2315,6 @@ static void unregister_region(void *_cxlr)
struct cxl_region_params *p = &cxlr->params;
int i;
unregister_memory_notifier(&cxlr->memory_notifier);
unregister_mt_adistance_algorithm(&cxlr->adist_notifier);
device_del(&cxlr->dev);
/*
......@@ -2391,18 +2391,6 @@ static bool cxl_region_update_coordinates(struct cxl_region *cxlr, int nid)
return true;
}
static int cxl_region_nid(struct cxl_region *cxlr)
{
struct cxl_region_params *p = &cxlr->params;
struct resource *res;
guard(rwsem_read)(&cxl_region_rwsem);
res = p->res;
if (!res)
return NUMA_NO_NODE;
return phys_to_target_node(res->start);
}
static int cxl_region_perf_attrs_callback(struct notifier_block *nb,
unsigned long action, void *arg)
{
......@@ -2415,7 +2403,11 @@ static int cxl_region_perf_attrs_callback(struct notifier_block *nb,
if (nid == NUMA_NO_NODE || action != MEM_ONLINE)
return NOTIFY_DONE;
region_nid = cxl_region_nid(cxlr);
/*
* No need to hold cxl_region_rwsem; region parameters are stable
* within the cxl_region driver.
*/
region_nid = phys_to_target_node(cxlr->params.res->start);
if (nid != region_nid)
return NOTIFY_DONE;
......@@ -2434,7 +2426,11 @@ static int cxl_region_calculate_adistance(struct notifier_block *nb,
int *adist = data;
int region_nid;
region_nid = cxl_region_nid(cxlr);
/*
* No need to hold cxl_region_rwsem; region parameters are stable
* within the cxl_region driver.
*/
region_nid = phys_to_target_node(cxlr->params.res->start);
if (nid != region_nid)
return NOTIFY_OK;
......@@ -2484,14 +2480,6 @@ static struct cxl_region *devm_cxl_add_region(struct cxl_root_decoder *cxlrd,
if (rc)
goto err;
cxlr->memory_notifier.notifier_call = cxl_region_perf_attrs_callback;
cxlr->memory_notifier.priority = CXL_CALLBACK_PRI;
register_memory_notifier(&cxlr->memory_notifier);
cxlr->adist_notifier.notifier_call = cxl_region_calculate_adistance;
cxlr->adist_notifier.priority = 100;
register_mt_adistance_algorithm(&cxlr->adist_notifier);
rc = devm_add_action_or_reset(port->uport_dev, unregister_region, cxlr);
if (rc)
return ERR_PTR(rc);
......@@ -3094,11 +3082,11 @@ static void cxlr_release_nvdimm(void *_cxlr)
struct cxl_region *cxlr = _cxlr;
struct cxl_nvdimm_bridge *cxl_nvb = cxlr->cxl_nvb;
device_lock(&cxl_nvb->dev);
if (cxlr->cxlr_pmem)
devm_release_action(&cxl_nvb->dev, cxlr_pmem_unregister,
cxlr->cxlr_pmem);
device_unlock(&cxl_nvb->dev);
scoped_guard(device, &cxl_nvb->dev) {
if (cxlr->cxlr_pmem)
devm_release_action(&cxl_nvb->dev, cxlr_pmem_unregister,
cxlr->cxlr_pmem);
}
cxlr->cxl_nvb = NULL;
put_device(&cxl_nvb->dev);
}
......@@ -3134,13 +3122,14 @@ static int devm_cxl_add_pmem_region(struct cxl_region *cxlr)
dev_dbg(&cxlr->dev, "%s: register %s\n", dev_name(dev->parent),
dev_name(dev));
device_lock(&cxl_nvb->dev);
if (cxl_nvb->dev.driver)
rc = devm_add_action_or_reset(&cxl_nvb->dev,
cxlr_pmem_unregister, cxlr_pmem);
else
rc = -ENXIO;
device_unlock(&cxl_nvb->dev);
scoped_guard(device, &cxl_nvb->dev) {
if (cxl_nvb->dev.driver)
rc = devm_add_action_or_reset(&cxl_nvb->dev,
cxlr_pmem_unregister,
cxlr_pmem);
else
rc = -ENXIO;
}
if (rc)
goto err_bridge;
......@@ -3386,6 +3375,14 @@ static int is_system_ram(struct resource *res, void *arg)
return 1;
}
static void shutdown_notifiers(void *_cxlr)
{
struct cxl_region *cxlr = _cxlr;
unregister_memory_notifier(&cxlr->memory_notifier);
unregister_mt_adistance_algorithm(&cxlr->adist_notifier);
}
static int cxl_region_probe(struct device *dev)
{
struct cxl_region *cxlr = to_cxl_region(dev);
......@@ -3418,6 +3415,18 @@ static int cxl_region_probe(struct device *dev)
out:
up_read(&cxl_region_rwsem);
if (rc)
return rc;
cxlr->memory_notifier.notifier_call = cxl_region_perf_attrs_callback;
cxlr->memory_notifier.priority = CXL_CALLBACK_PRI;
register_memory_notifier(&cxlr->memory_notifier);
cxlr->adist_notifier.notifier_call = cxl_region_calculate_adistance;
cxlr->adist_notifier.priority = 100;
register_mt_adistance_algorithm(&cxlr->adist_notifier);
rc = devm_add_action_or_reset(&cxlr->dev, shutdown_notifiers, cxlr);
if (rc)
return rc;
......
......@@ -744,6 +744,7 @@ struct cxl_root *find_cxl_root(struct cxl_port *port);
void put_cxl_root(struct cxl_root *cxl_root);
DEFINE_FREE(put_cxl_root, struct cxl_root *, if (_T) put_cxl_root(_T))
DEFINE_FREE(put_cxl_port, struct cxl_port *, if (!IS_ERR_OR_NULL(_T)) put_device(&_T->dev))
int devm_cxl_enumerate_ports(struct cxl_memdev *cxlmd);
void cxl_bus_rescan(void);
void cxl_bus_drain(void);
......@@ -762,9 +763,10 @@ struct cxl_dport *devm_cxl_add_rch_dport(struct cxl_port *port,
#ifdef CONFIG_PCIEAER_CXL
void cxl_setup_parent_dport(struct device *host, struct cxl_dport *dport);
void cxl_dport_init_ras_reporting(struct cxl_dport *dport, struct device *host);
#else
static inline void cxl_setup_parent_dport(struct device *host,
struct cxl_dport *dport) { }
static inline void cxl_dport_init_ras_reporting(struct cxl_dport *dport,
struct device *host) { }
#endif
struct cxl_decoder *to_cxl_decoder(struct device *dev);
......@@ -809,7 +811,7 @@ struct cxl_hdm *devm_cxl_setup_hdm(struct cxl_port *port,
int devm_cxl_enumerate_decoders(struct cxl_hdm *cxlhdm,
struct cxl_endpoint_dvsec_info *info);
int devm_cxl_add_passthrough_decoder(struct cxl_port *port);
int cxl_dvsec_rr_decode(struct device *dev, int dvsec,
int cxl_dvsec_rr_decode(struct device *dev, struct cxl_port *port,
struct cxl_endpoint_dvsec_info *info);
bool is_cxl_region(struct device *dev);
......@@ -889,6 +891,7 @@ int cxl_endpoint_get_perf_coordinates(struct cxl_port *port,
struct access_coordinate *coord);
void cxl_region_perf_data_calculate(struct cxl_region *cxlr,
struct cxl_endpoint_decoder *cxled);
void cxl_region_shared_upstream_bandwidth_update(struct cxl_region *cxlr);
void cxl_memdev_update_perf(struct cxl_memdev *cxlmd);
......
......@@ -3,11 +3,12 @@
#ifndef __CXL_MEM_H__
#define __CXL_MEM_H__
#include <uapi/linux/cxl_mem.h>
#include <linux/pci.h>
#include <linux/cdev.h>
#include <linux/uuid.h>
#include <linux/rcuwait.h>
#include <linux/cxl-event.h>
#include <linux/node.h>
#include <cxl/event.h>
#include <cxl/mailbox.h>
#include "cxl.h"
/* CXL 2.0 8.2.8.5.1.1 Memory Device Status Register */
......@@ -397,11 +398,13 @@ enum cxl_devtype {
* struct cxl_dpa_perf - DPA performance property entry
* @dpa_range: range for DPA address
* @coord: QoS performance data (i.e. latency, bandwidth)
* @cdat_coord: raw QoS performance data from CDAT
* @qos_class: QoS Class cookies
*/
struct cxl_dpa_perf {
struct range dpa_range;
struct access_coordinate coord[ACCESS_COORDINATE_MAX];
struct access_coordinate cdat_coord[ACCESS_COORDINATE_MAX];
int qos_class;
};
......@@ -424,6 +427,7 @@ struct cxl_dpa_perf {
* @ram_res: Active Volatile memory capacity configuration
* @serial: PCIe Device Serial Number
* @type: Generic Memory Class device or Vendor Specific Memory device
* @cxl_mbox: CXL mailbox context
*/
struct cxl_dev_state {
struct device *dev;
......@@ -438,8 +442,14 @@ struct cxl_dev_state {
struct resource ram_res;
u64 serial;
enum cxl_devtype type;
struct cxl_mailbox cxl_mbox;
};
static inline struct cxl_dev_state *mbox_to_cxlds(struct cxl_mailbox *cxl_mbox)
{
return dev_get_drvdata(cxl_mbox->host);
}
/**
* struct cxl_memdev_state - Generic Type-3 Memory Device Class driver data
*
......@@ -448,11 +458,8 @@ struct cxl_dev_state {
* the functionality related to that like Identify Memory Device and Get
* Partition Info
* @cxlds: Core driver state common across Type-2 and Type-3 devices
* @payload_size: Size of space for payload
* (CXL 2.0 8.2.8.4.3 Mailbox Capabilities Register)
* @lsa_size: Size of Label Storage Area
* (CXL 2.0 8.2.9.5.1.1 Identify Memory Device)
* @mbox_mutex: Mutex to synchronize mailbox access.
* @firmware_version: Firmware version for the memory device.
* @enabled_cmds: Hardware commands found enabled in CEL.
* @exclusive_cmds: Commands that are kernel-internal only
......@@ -470,17 +477,13 @@ struct cxl_dev_state {
* @poison: poison driver state info
* @security: security driver state info
* @fw: firmware upload / activation state
* @mbox_wait: RCU wait for mbox send completely
* @mbox_send: @dev specific transport for transmitting mailbox commands
*
* See CXL 3.0 8.2.9.8.2 Capacity Configuration and Label Storage for
* details on capacity parameters.
*/
struct cxl_memdev_state {
struct cxl_dev_state cxlds;
size_t payload_size;
size_t lsa_size;
struct mutex mbox_mutex; /* Protects device mailbox and firmware */
char firmware_version[0x10];
DECLARE_BITMAP(enabled_cmds, CXL_MEM_COMMAND_ID_MAX);
DECLARE_BITMAP(exclusive_cmds, CXL_MEM_COMMAND_ID_MAX);
......@@ -500,10 +503,6 @@ struct cxl_memdev_state {
struct cxl_poison_state poison;
struct cxl_security_state security;
struct cxl_fw_state fw;
struct rcuwait mbox_wait;
int (*mbox_send)(struct cxl_memdev_state *mds,
struct cxl_mbox_cmd *cmd);
};
static inline struct cxl_memdev_state *
......@@ -814,7 +813,7 @@ enum {
CXL_PMEM_SEC_PASS_USER,
};
int cxl_internal_send_cmd(struct cxl_memdev_state *mds,
int cxl_internal_send_cmd(struct cxl_mailbox *cxl_mbox,
struct cxl_mbox_cmd *cmd);
int cxl_dev_state_identify(struct cxl_memdev_state *mds);
int cxl_await_media_ready(struct cxl_dev_state *cxlds);
......
......@@ -109,7 +109,6 @@ static int cxl_mem_probe(struct device *dev)
struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds);
struct cxl_dev_state *cxlds = cxlmd->cxlds;
struct device *endpoint_parent;
struct cxl_port *parent_port;
struct cxl_dport *dport;
struct dentry *dentry;
int rc;
......@@ -146,7 +145,8 @@ static int cxl_mem_probe(struct device *dev)
if (rc)
return rc;
parent_port = cxl_mem_find_port(cxlmd, &dport);
struct cxl_port *parent_port __free(put_cxl_port) =
cxl_mem_find_port(cxlmd, &dport);
if (!parent_port) {
dev_err(dev, "CXL port topology not found\n");
return -ENXIO;
......@@ -166,22 +166,19 @@ static int cxl_mem_probe(struct device *dev)
else
endpoint_parent = &parent_port->dev;
cxl_setup_parent_dport(dev, dport);
cxl_dport_init_ras_reporting(dport, dev);
device_lock(endpoint_parent);
if (!endpoint_parent->driver) {
dev_err(dev, "CXL port topology %s not enabled\n",
dev_name(endpoint_parent));
rc = -ENXIO;
goto unlock;
}
scoped_guard(device, endpoint_parent) {
if (!endpoint_parent->driver) {
dev_err(dev, "CXL port topology %s not enabled\n",
dev_name(endpoint_parent));
return -ENXIO;
}
rc = devm_cxl_add_endpoint(endpoint_parent, cxlmd, dport);
unlock:
device_unlock(endpoint_parent);
put_device(&parent_port->dev);
if (rc)
return rc;
rc = devm_cxl_add_endpoint(endpoint_parent, cxlmd, dport);
if (rc)
return rc;
}
/*
* The kernel may be operating out of CXL memory on this device,
......
......@@ -11,6 +11,7 @@
#include <linux/pci.h>
#include <linux/aer.h>
#include <linux/io.h>
#include <cxl/mailbox.h>
#include "cxlmem.h"
#include "cxlpci.h"
#include "cxl.h"
......@@ -124,6 +125,7 @@ static irqreturn_t cxl_pci_mbox_irq(int irq, void *id)
u16 opcode;
struct cxl_dev_id *dev_id = id;
struct cxl_dev_state *cxlds = dev_id->cxlds;
struct cxl_mailbox *cxl_mbox = &cxlds->cxl_mbox;
struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds);
if (!cxl_mbox_background_complete(cxlds))
......@@ -132,13 +134,13 @@ static irqreturn_t cxl_pci_mbox_irq(int irq, void *id)
reg = readq(cxlds->regs.mbox + CXLDEV_MBOX_BG_CMD_STATUS_OFFSET);
opcode = FIELD_GET(CXLDEV_MBOX_BG_CMD_COMMAND_OPCODE_MASK, reg);
if (opcode == CXL_MBOX_OP_SANITIZE) {
mutex_lock(&mds->mbox_mutex);
mutex_lock(&cxl_mbox->mbox_mutex);
if (mds->security.sanitize_node)
mod_delayed_work(system_wq, &mds->security.poll_dwork, 0);
mutex_unlock(&mds->mbox_mutex);
mutex_unlock(&cxl_mbox->mbox_mutex);
} else {
/* short-circuit the wait in __cxl_pci_mbox_send_cmd() */
rcuwait_wake_up(&mds->mbox_wait);
rcuwait_wake_up(&cxl_mbox->mbox_wait);
}
return IRQ_HANDLED;
......@@ -152,8 +154,9 @@ static void cxl_mbox_sanitize_work(struct work_struct *work)
struct cxl_memdev_state *mds =
container_of(work, typeof(*mds), security.poll_dwork.work);
struct cxl_dev_state *cxlds = &mds->cxlds;
struct cxl_mailbox *cxl_mbox = &cxlds->cxl_mbox;
mutex_lock(&mds->mbox_mutex);
mutex_lock(&cxl_mbox->mbox_mutex);
if (cxl_mbox_background_complete(cxlds)) {
mds->security.poll_tmo_secs = 0;
if (mds->security.sanitize_node)
......@@ -167,12 +170,12 @@ static void cxl_mbox_sanitize_work(struct work_struct *work)
mds->security.poll_tmo_secs = min(15 * 60, timeout);
schedule_delayed_work(&mds->security.poll_dwork, timeout * HZ);
}
mutex_unlock(&mds->mbox_mutex);
mutex_unlock(&cxl_mbox->mbox_mutex);
}
/**
* __cxl_pci_mbox_send_cmd() - Execute a mailbox command
* @mds: The memory device driver data
* @cxl_mbox: CXL mailbox context
* @mbox_cmd: Command to send to the memory device.
*
* Context: Any context. Expects mbox_mutex to be held.
......@@ -192,17 +195,18 @@ static void cxl_mbox_sanitize_work(struct work_struct *work)
* not need to coordinate with each other. The driver only uses the primary
* mailbox.
*/
static int __cxl_pci_mbox_send_cmd(struct cxl_memdev_state *mds,
static int __cxl_pci_mbox_send_cmd(struct cxl_mailbox *cxl_mbox,
struct cxl_mbox_cmd *mbox_cmd)
{
struct cxl_dev_state *cxlds = &mds->cxlds;
struct cxl_dev_state *cxlds = mbox_to_cxlds(cxl_mbox);
struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds);
void __iomem *payload = cxlds->regs.mbox + CXLDEV_MBOX_PAYLOAD_OFFSET;
struct device *dev = cxlds->dev;
u64 cmd_reg, status_reg;
size_t out_len;
int rc;
lockdep_assert_held(&mds->mbox_mutex);
lockdep_assert_held(&cxl_mbox->mbox_mutex);
/*
* Here are the steps from 8.2.8.4 of the CXL 2.0 spec.
......@@ -315,10 +319,10 @@ static int __cxl_pci_mbox_send_cmd(struct cxl_memdev_state *mds,
timeout = mbox_cmd->poll_interval_ms;
for (i = 0; i < mbox_cmd->poll_count; i++) {
if (rcuwait_wait_event_timeout(&mds->mbox_wait,
cxl_mbox_background_complete(cxlds),
TASK_UNINTERRUPTIBLE,
msecs_to_jiffies(timeout)) > 0)
if (rcuwait_wait_event_timeout(&cxl_mbox->mbox_wait,
cxl_mbox_background_complete(cxlds),
TASK_UNINTERRUPTIBLE,
msecs_to_jiffies(timeout)) > 0)
break;
}
......@@ -360,7 +364,7 @@ static int __cxl_pci_mbox_send_cmd(struct cxl_memdev_state *mds,
*/
size_t n;
n = min3(mbox_cmd->size_out, mds->payload_size, out_len);
n = min3(mbox_cmd->size_out, cxl_mbox->payload_size, out_len);
memcpy_fromio(mbox_cmd->payload_out, payload, n);
mbox_cmd->size_out = n;
} else {
......@@ -370,14 +374,14 @@ static int __cxl_pci_mbox_send_cmd(struct cxl_memdev_state *mds,
return 0;
}
static int cxl_pci_mbox_send(struct cxl_memdev_state *mds,
static int cxl_pci_mbox_send(struct cxl_mailbox *cxl_mbox,
struct cxl_mbox_cmd *cmd)
{
int rc;
mutex_lock_io(&mds->mbox_mutex);
rc = __cxl_pci_mbox_send_cmd(mds, cmd);
mutex_unlock(&mds->mbox_mutex);
mutex_lock_io(&cxl_mbox->mbox_mutex);
rc = __cxl_pci_mbox_send_cmd(cxl_mbox, cmd);
mutex_unlock(&cxl_mbox->mbox_mutex);
return rc;
}
......@@ -385,6 +389,7 @@ static int cxl_pci_mbox_send(struct cxl_memdev_state *mds,
static int cxl_pci_setup_mailbox(struct cxl_memdev_state *mds, bool irq_avail)
{
struct cxl_dev_state *cxlds = &mds->cxlds;
struct cxl_mailbox *cxl_mbox = &cxlds->cxl_mbox;
const int cap = readl(cxlds->regs.mbox + CXLDEV_MBOX_CAPS_OFFSET);
struct device *dev = cxlds->dev;
unsigned long timeout;
......@@ -417,8 +422,8 @@ static int cxl_pci_setup_mailbox(struct cxl_memdev_state *mds, bool irq_avail)
return -ETIMEDOUT;
}
mds->mbox_send = cxl_pci_mbox_send;
mds->payload_size =
cxl_mbox->mbox_send = cxl_pci_mbox_send;
cxl_mbox->payload_size =
1 << FIELD_GET(CXLDEV_MBOX_CAP_PAYLOAD_SIZE_MASK, cap);
/*
......@@ -428,16 +433,15 @@ static int cxl_pci_setup_mailbox(struct cxl_memdev_state *mds, bool irq_avail)
* there's no point in going forward. If the size is too large, there's
* no harm is soft limiting it.
*/
mds->payload_size = min_t(size_t, mds->payload_size, SZ_1M);
if (mds->payload_size < 256) {
cxl_mbox->payload_size = min_t(size_t, cxl_mbox->payload_size, SZ_1M);
if (cxl_mbox->payload_size < 256) {
dev_err(dev, "Mailbox is too small (%zub)",
mds->payload_size);
cxl_mbox->payload_size);
return -ENXIO;
}
dev_dbg(dev, "Mailbox payload sized %zu", mds->payload_size);
dev_dbg(dev, "Mailbox payload sized %zu", cxl_mbox->payload_size);
rcuwait_init(&mds->mbox_wait);
INIT_DELAYED_WORK(&mds->security.poll_dwork, cxl_mbox_sanitize_work);
/* background command interrupts are optional */
......@@ -473,7 +477,6 @@ static bool is_cxl_restricted(struct pci_dev *pdev)
static int cxl_rcrb_get_comp_regs(struct pci_dev *pdev,
struct cxl_register_map *map)
{
struct cxl_port *port;
struct cxl_dport *dport;
resource_size_t component_reg_phys;
......@@ -482,14 +485,12 @@ static int cxl_rcrb_get_comp_regs(struct pci_dev *pdev,
.resource = CXL_RESOURCE_NONE,
};
port = cxl_pci_find_port(pdev, &dport);
struct cxl_port *port __free(put_cxl_port) =
cxl_pci_find_port(pdev, &dport);
if (!port)
return -EPROBE_DEFER;
component_reg_phys = cxl_rcd_component_reg_phys(&pdev->dev, dport);
put_device(&port->dev);
if (component_reg_phys == CXL_RESOURCE_NONE)
return -ENXIO;
......@@ -578,9 +579,10 @@ static void free_event_buf(void *buf)
*/
static int cxl_mem_alloc_event_buf(struct cxl_memdev_state *mds)
{
struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
struct cxl_get_event_payload *buf;
buf = kvmalloc(mds->payload_size, GFP_KERNEL);
buf = kvmalloc(cxl_mbox->payload_size, GFP_KERNEL);
if (!buf)
return -ENOMEM;
mds->event.buf = buf;
......@@ -653,6 +655,7 @@ static int cxl_event_req_irq(struct cxl_dev_state *cxlds, u8 setting)
static int cxl_event_get_int_policy(struct cxl_memdev_state *mds,
struct cxl_event_interrupt_policy *policy)
{
struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
struct cxl_mbox_cmd mbox_cmd = {
.opcode = CXL_MBOX_OP_GET_EVT_INT_POLICY,
.payload_out = policy,
......@@ -660,7 +663,7 @@ static int cxl_event_get_int_policy(struct cxl_memdev_state *mds,
};
int rc;
rc = cxl_internal_send_cmd(mds, &mbox_cmd);
rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
if (rc < 0)
dev_err(mds->cxlds.dev,
"Failed to get event interrupt policy : %d", rc);
......@@ -671,6 +674,7 @@ static int cxl_event_get_int_policy(struct cxl_memdev_state *mds,
static int cxl_event_config_msgnums(struct cxl_memdev_state *mds,
struct cxl_event_interrupt_policy *policy)
{
struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
struct cxl_mbox_cmd mbox_cmd;
int rc;
......@@ -687,7 +691,7 @@ static int cxl_event_config_msgnums(struct cxl_memdev_state *mds,
.size_in = sizeof(*policy),
};
rc = cxl_internal_send_cmd(mds, &mbox_cmd);
rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
if (rc < 0) {
dev_err(mds->cxlds.dev, "Failed to set event interrupt policy : %d",
rc);
......@@ -786,6 +790,23 @@ static int cxl_event_config(struct pci_host_bridge *host_bridge,
return 0;
}
static int cxl_pci_type3_init_mailbox(struct cxl_dev_state *cxlds)
{
int rc;
/*
* Fail the init if there's no mailbox. For a type3 this is out of spec.
*/
if (!cxlds->reg_map.device_map.mbox.valid)
return -ENODEV;
rc = cxl_mailbox_init(&cxlds->cxl_mbox, cxlds->dev);
if (rc)
return rc;
return 0;
}
static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
struct pci_host_bridge *host_bridge = pci_find_host_bridge(pdev->bus);
......@@ -846,6 +867,10 @@ static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
if (rc)
dev_dbg(&pdev->dev, "Failed to map RAS capability.\n");
rc = cxl_pci_type3_init_mailbox(cxlds);
if (rc)
return rc;
rc = cxl_await_media_ready(cxlds);
if (rc == 0)
cxlds->media_ready = true;
......
......@@ -102,13 +102,15 @@ static int cxl_pmem_get_config_size(struct cxl_memdev_state *mds,
struct nd_cmd_get_config_size *cmd,
unsigned int buf_len)
{
struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
if (sizeof(*cmd) > buf_len)
return -EINVAL;
*cmd = (struct nd_cmd_get_config_size){
.config_size = mds->lsa_size,
.max_xfer =
mds->payload_size - sizeof(struct cxl_mbox_set_lsa),
cxl_mbox->payload_size - sizeof(struct cxl_mbox_set_lsa),
};
return 0;
......@@ -118,6 +120,7 @@ static int cxl_pmem_get_config_data(struct cxl_memdev_state *mds,
struct nd_cmd_get_config_data_hdr *cmd,
unsigned int buf_len)
{
struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
struct cxl_mbox_get_lsa get_lsa;
struct cxl_mbox_cmd mbox_cmd;
int rc;
......@@ -139,7 +142,7 @@ static int cxl_pmem_get_config_data(struct cxl_memdev_state *mds,
.payload_out = cmd->out_buf,
};
rc = cxl_internal_send_cmd(mds, &mbox_cmd);
rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
cmd->status = 0;
return rc;
......@@ -149,6 +152,7 @@ static int cxl_pmem_set_config_data(struct cxl_memdev_state *mds,
struct nd_cmd_set_config_hdr *cmd,
unsigned int buf_len)
{
struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
struct cxl_mbox_set_lsa *set_lsa;
struct cxl_mbox_cmd mbox_cmd;
int rc;
......@@ -175,7 +179,7 @@ static int cxl_pmem_set_config_data(struct cxl_memdev_state *mds,
.size_in = struct_size(set_lsa, data, cmd->in_length),
};
rc = cxl_internal_send_cmd(mds, &mbox_cmd);
rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
/*
* Set "firmware" status (4-packed bytes at the end of the input
......@@ -233,15 +237,13 @@ static int detach_nvdimm(struct device *dev, void *data)
if (!is_cxl_nvdimm(dev))
return 0;
device_lock(dev);
if (!dev->driver)
goto out;
cxl_nvd = to_cxl_nvdimm(dev);
if (cxl_nvd->cxlmd && cxl_nvd->cxlmd->cxl_nvb == data)
release = true;
out:
device_unlock(dev);
scoped_guard(device, dev) {
if (dev->driver) {
cxl_nvd = to_cxl_nvdimm(dev);
if (cxl_nvd->cxlmd && cxl_nvd->cxlmd->cxl_nvb == data)
release = true;
}
}
if (release)
device_release_driver(dev);
return 0;
......
......@@ -98,7 +98,7 @@ static int cxl_endpoint_port_probe(struct cxl_port *port)
struct cxl_port *root;
int rc;
rc = cxl_dvsec_rr_decode(cxlds->dev, cxlds->cxl_dvsec, &info);
rc = cxl_dvsec_rr_decode(cxlds->dev, port, &info);
if (rc < 0)
return rc;
......
......@@ -14,6 +14,7 @@ static unsigned long cxl_pmem_get_security_flags(struct nvdimm *nvdimm,
{
struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
struct cxl_mailbox *cxl_mbox = &cxlmd->cxlds->cxl_mbox;
struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds);
unsigned long security_flags = 0;
struct cxl_get_security_output {
......@@ -29,7 +30,7 @@ static unsigned long cxl_pmem_get_security_flags(struct nvdimm *nvdimm,
.payload_out = &out,
};
rc = cxl_internal_send_cmd(mds, &mbox_cmd);
rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
if (rc < 0)
return 0;
......@@ -70,7 +71,7 @@ static int cxl_pmem_security_change_key(struct nvdimm *nvdimm,
{
struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds);
struct cxl_mailbox *cxl_mbox = &cxlmd->cxlds->cxl_mbox;
struct cxl_mbox_cmd mbox_cmd;
struct cxl_set_pass set_pass;
......@@ -87,7 +88,7 @@ static int cxl_pmem_security_change_key(struct nvdimm *nvdimm,
.payload_in = &set_pass,
};
return cxl_internal_send_cmd(mds, &mbox_cmd);
return cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
}
static int __cxl_pmem_security_disable(struct nvdimm *nvdimm,
......@@ -96,7 +97,7 @@ static int __cxl_pmem_security_disable(struct nvdimm *nvdimm,
{
struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds);
struct cxl_mailbox *cxl_mbox = &cxlmd->cxlds->cxl_mbox;
struct cxl_disable_pass dis_pass;
struct cxl_mbox_cmd mbox_cmd;
......@@ -112,7 +113,7 @@ static int __cxl_pmem_security_disable(struct nvdimm *nvdimm,
.payload_in = &dis_pass,
};
return cxl_internal_send_cmd(mds, &mbox_cmd);
return cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
}
static int cxl_pmem_security_disable(struct nvdimm *nvdimm,
......@@ -131,12 +132,12 @@ static int cxl_pmem_security_freeze(struct nvdimm *nvdimm)
{
struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds);
struct cxl_mailbox *cxl_mbox = &cxlmd->cxlds->cxl_mbox;
struct cxl_mbox_cmd mbox_cmd = {
.opcode = CXL_MBOX_OP_FREEZE_SECURITY,
};
return cxl_internal_send_cmd(mds, &mbox_cmd);
return cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
}
static int cxl_pmem_security_unlock(struct nvdimm *nvdimm,
......@@ -144,7 +145,7 @@ static int cxl_pmem_security_unlock(struct nvdimm *nvdimm,
{
struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds);
struct cxl_mailbox *cxl_mbox = &cxlmd->cxlds->cxl_mbox;
u8 pass[NVDIMM_PASSPHRASE_LEN];
struct cxl_mbox_cmd mbox_cmd;
int rc;
......@@ -156,7 +157,7 @@ static int cxl_pmem_security_unlock(struct nvdimm *nvdimm,
.payload_in = pass,
};
rc = cxl_internal_send_cmd(mds, &mbox_cmd);
rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
if (rc < 0)
return rc;
......@@ -169,7 +170,7 @@ static int cxl_pmem_security_passphrase_erase(struct nvdimm *nvdimm,
{
struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds);
struct cxl_mailbox *cxl_mbox = &cxlmd->cxlds->cxl_mbox;
struct cxl_mbox_cmd mbox_cmd;
struct cxl_pass_erase erase;
int rc;
......@@ -185,7 +186,7 @@ static int cxl_pmem_security_passphrase_erase(struct nvdimm *nvdimm,
.payload_in = &erase,
};
rc = cxl_internal_send_cmd(mds, &mbox_cmd);
rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
if (rc < 0)
return rc;
......
/* SPDX-License-Identifier: GPL-2.0-only */
/* Copyright(c) 2024 Intel Corporation. */
#ifndef __CXL_MBOX_H__
#define __CXL_MBOX_H__
#include <linux/rcuwait.h>
struct cxl_mbox_cmd;
/**
* struct cxl_mailbox - context for CXL mailbox operations
* @host: device that hosts the mailbox
* @payload_size: Size of space for payload
* (CXL 3.1 8.2.8.4.3 Mailbox Capabilities Register)
* @mbox_mutex: mutex protects device mailbox and firmware
* @mbox_wait: rcuwait for mailbox
* @mbox_send: @dev specific transport for transmitting mailbox commands
*/
struct cxl_mailbox {
struct device *host;
size_t payload_size;
struct mutex mbox_mutex; /* lock to protect mailbox context */
struct rcuwait mbox_wait;
int (*mbox_send)(struct cxl_mailbox *cxl_mbox, struct cxl_mbox_cmd *cmd);
};
int cxl_mailbox_init(struct cxl_mailbox *cxl_mbox, struct device *host);
#endif
......@@ -14,7 +14,7 @@ ldflags-y += --wrap=cxl_dvsec_rr_decode
ldflags-y += --wrap=devm_cxl_add_rch_dport
ldflags-y += --wrap=cxl_rcd_component_reg_phys
ldflags-y += --wrap=cxl_endpoint_parse_cdat
ldflags-y += --wrap=cxl_setup_parent_dport
ldflags-y += --wrap=cxl_dport_init_ras_reporting
DRIVERS := ../../../drivers
CXL_SRC := $(DRIVERS)/cxl
......
......@@ -18,7 +18,7 @@ struct acpi_device *to_cxl_host_bridge(struct device *host, struct device *dev)
goto out;
}
if (dev->bus == &platform_bus_type)
if (dev_is_platform(dev))
goto out;
adev = to_acpi_device(dev);
......
......@@ -8,6 +8,7 @@
#include <linux/delay.h>
#include <linux/sizes.h>
#include <linux/bits.h>
#include <cxl/mailbox.h>
#include <asm/unaligned.h>
#include <crypto/sha2.h>
#include <cxlmem.h>
......@@ -534,6 +535,7 @@ static int mock_gsl(struct cxl_mbox_cmd *cmd)
static int mock_get_log(struct cxl_memdev_state *mds, struct cxl_mbox_cmd *cmd)
{
struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
struct cxl_mbox_get_log *gl = cmd->payload_in;
u32 offset = le32_to_cpu(gl->offset);
u32 length = le32_to_cpu(gl->length);
......@@ -542,7 +544,7 @@ static int mock_get_log(struct cxl_memdev_state *mds, struct cxl_mbox_cmd *cmd)
if (cmd->size_in < sizeof(*gl))
return -EINVAL;
if (length > mds->payload_size)
if (length > cxl_mbox->payload_size)
return -EINVAL;
if (offset + length > sizeof(mock_cel))
return -EINVAL;
......@@ -617,12 +619,13 @@ void cxl_mockmem_sanitize_work(struct work_struct *work)
{
struct cxl_memdev_state *mds =
container_of(work, typeof(*mds), security.poll_dwork.work);
struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
mutex_lock(&mds->mbox_mutex);
mutex_lock(&cxl_mbox->mbox_mutex);
if (mds->security.sanitize_node)
sysfs_notify_dirent(mds->security.sanitize_node);
mds->security.sanitize_active = false;
mutex_unlock(&mds->mbox_mutex);
mutex_unlock(&cxl_mbox->mbox_mutex);
dev_dbg(mds->cxlds.dev, "sanitize complete\n");
}
......@@ -631,6 +634,7 @@ static int mock_sanitize(struct cxl_mockmem_data *mdata,
struct cxl_mbox_cmd *cmd)
{
struct cxl_memdev_state *mds = mdata->mds;
struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
int rc = 0;
if (cmd->size_in != 0)
......@@ -648,14 +652,14 @@ static int mock_sanitize(struct cxl_mockmem_data *mdata,
return -ENXIO;
}
mutex_lock(&mds->mbox_mutex);
mutex_lock(&cxl_mbox->mbox_mutex);
if (schedule_delayed_work(&mds->security.poll_dwork,
msecs_to_jiffies(mdata->sanitize_timeout))) {
mds->security.sanitize_active = true;
dev_dbg(mds->cxlds.dev, "sanitize issued\n");
} else
rc = -EBUSY;
mutex_unlock(&mds->mbox_mutex);
mutex_unlock(&cxl_mbox->mbox_mutex);
return rc;
}
......@@ -1333,12 +1337,13 @@ static int mock_activate_fw(struct cxl_mockmem_data *mdata,
return -EINVAL;
}
static int cxl_mock_mbox_send(struct cxl_memdev_state *mds,
static int cxl_mock_mbox_send(struct cxl_mailbox *cxl_mbox,
struct cxl_mbox_cmd *cmd)
{
struct cxl_dev_state *cxlds = &mds->cxlds;
struct device *dev = cxlds->dev;
struct device *dev = cxl_mbox->host;
struct cxl_mockmem_data *mdata = dev_get_drvdata(dev);
struct cxl_memdev_state *mds = mdata->mds;
struct cxl_dev_state *cxlds = &mds->cxlds;
int rc = -EIO;
switch (cmd->opcode) {
......@@ -1453,6 +1458,17 @@ static ssize_t event_trigger_store(struct device *dev,
}
static DEVICE_ATTR_WO(event_trigger);
static int cxl_mock_mailbox_create(struct cxl_dev_state *cxlds)
{
int rc;
rc = cxl_mailbox_init(&cxlds->cxl_mbox, cxlds->dev);
if (rc)
return rc;
return 0;
}
static int cxl_mock_mem_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
......@@ -1460,6 +1476,7 @@ static int cxl_mock_mem_probe(struct platform_device *pdev)
struct cxl_memdev_state *mds;
struct cxl_dev_state *cxlds;
struct cxl_mockmem_data *mdata;
struct cxl_mailbox *cxl_mbox;
int rc;
mdata = devm_kzalloc(dev, sizeof(*mdata), GFP_KERNEL);
......@@ -1487,13 +1504,18 @@ static int cxl_mock_mem_probe(struct platform_device *pdev)
if (IS_ERR(mds))
return PTR_ERR(mds);
cxlds = &mds->cxlds;
rc = cxl_mock_mailbox_create(cxlds);
if (rc)
return rc;
cxl_mbox = &mds->cxlds.cxl_mbox;
mdata->mds = mds;
mds->mbox_send = cxl_mock_mbox_send;
mds->payload_size = SZ_4K;
cxl_mbox->mbox_send = cxl_mock_mbox_send;
cxl_mbox->payload_size = SZ_4K;
mds->event.buf = (struct cxl_get_event_payload *) mdata->event_buf;
INIT_DELAYED_WORK(&mds->security.poll_dwork, cxl_mockmem_sanitize_work);
cxlds = &mds->cxlds;
cxlds->serial = pdev->id;
if (is_rcd(pdev))
cxlds->rcd = true;
......
......@@ -228,7 +228,7 @@ int __wrap_cxl_hdm_decode_init(struct cxl_dev_state *cxlds,
}
EXPORT_SYMBOL_NS_GPL(__wrap_cxl_hdm_decode_init, CXL);
int __wrap_cxl_dvsec_rr_decode(struct device *dev, int dvsec,
int __wrap_cxl_dvsec_rr_decode(struct device *dev, struct cxl_port *port,
struct cxl_endpoint_dvsec_info *info)
{
int rc = 0, index;
......@@ -237,7 +237,7 @@ int __wrap_cxl_dvsec_rr_decode(struct device *dev, int dvsec,
if (ops && ops->is_mock_dev(dev))
rc = 0;
else
rc = cxl_dvsec_rr_decode(dev, dvsec, info);
rc = cxl_dvsec_rr_decode(dev, port, info);
put_cxl_mock_ops(index);
return rc;
......@@ -299,17 +299,17 @@ void __wrap_cxl_endpoint_parse_cdat(struct cxl_port *port)
}
EXPORT_SYMBOL_NS_GPL(__wrap_cxl_endpoint_parse_cdat, CXL);
void __wrap_cxl_setup_parent_dport(struct device *host, struct cxl_dport *dport)
void __wrap_cxl_dport_init_ras_reporting(struct cxl_dport *dport, struct device *host)
{
int index;
struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
if (!ops || !ops->is_mock_port(dport->dport_dev))
cxl_setup_parent_dport(host, dport);
cxl_dport_init_ras_reporting(dport, host);
put_cxl_mock_ops(index);
}
EXPORT_SYMBOL_NS_GPL(__wrap_cxl_setup_parent_dport, CXL);
EXPORT_SYMBOL_NS_GPL(__wrap_cxl_dport_init_ras_reporting, CXL);
MODULE_LICENSE("GPL v2");
MODULE_IMPORT_NS(ACPI);
......