Commit 6c9a9a8d authored by Greg Kroah-Hartman

Merge tag 'thunderbolt-for-v5.9' of...

Merge tag 'thunderbolt-for-v5.9' of git://git.kernel.org/pub/scm/linux/kernel/git/westeri/thunderbolt into usb-next

Mika writes:

thunderbolt: Changes for v5.9 merge window

This includes the following Thunderbolt/USB4 changes for the v5.9 merge window:

  * Improvements around NHI (Native Host Interface) HopID allocation

  * Improvements to tunneling and USB3 bandwidth management support

  * Add KUnit tests for path walking and tunneling

  * Initial support for USB4 retimer firmware upgrade

  * Implement a Thunderbolt device firmware upgrade mechanism that runs
    the NVM image authentication when the device is disconnected.

  * A couple of small non-critical fixes

* tag 'thunderbolt-for-v5.9' of git://git.kernel.org/pub/scm/linux/kernel/git/westeri/thunderbolt: (32 commits)
  thunderbolt: Fix old style declaration warning
  thunderbolt: Add support for authenticate on disconnect
  thunderbolt: Add support for separating the flush to SPI and authenticate
  thunderbolt: Ensure left shift of 512 does not overflow a 32 bit int
  thunderbolt: Add support for on-board retimers
  thunderbolt: Implement USB4 port sideband operations for retimer access
  thunderbolt: Retry USB4 block read operation
  thunderbolt: Generalize usb4_switch_do_[read|write]_data()
  thunderbolt: Split common NVM functionality into a separate file
  thunderbolt: Add Intel USB-IF ID to the NVM upgrade supported list
  thunderbolt: Add KUnit tests for tunneling
  thunderbolt: Add USB3 bandwidth management
  thunderbolt: Make tb_port_get_link_speed() available to other files
  thunderbolt: Implement USB3 bandwidth negotiation routines
  thunderbolt: Increase DP DPRX wait timeout
  thunderbolt: Report consumed bandwidth in both directions
  thunderbolt: Make usb4_switch_map_pcie_down() also return enabled ports
  thunderbolt: Make usb4_switch_map_usb3_down() also return enabled ports
  thunderbolt: Do not tunnel USB3 if link is not USB4
  thunderbolt: Add DP IN resources for all routers
  ...
parents 15d157e8 ef7e1207
...@@ -178,11 +178,18 @@ KernelVersion: 4.13
Contact: thunderbolt-software@lists.01.org
Description: When new NVM image is written to the non-active NVM
area (through non_activeX NVMem device), the
authentication procedure is started by writing to
this file.
If everything goes well, the device is
restarted with the new NVM firmware. If the image
verification fails an error code is returned instead.
This file will accept writing values "1" or "2"
- Writing "1" will flush the image to the storage
area and authenticate the image in one action.
- Writing "2" will run some basic validation on the image
and flush it to the storage area.
When read holds status of the last authentication
operation if an error occurred during the process. This
is directly the status value from the DMA configuration
...@@ -236,3 +243,49 @@ KernelVersion: 4.15
Contact: thunderbolt-software@lists.01.org
Description: This contains XDomain service specific settings as
bitmask. Format: %x
What: /sys/bus/thunderbolt/devices/<device>:<port>.<index>/device
Date: Oct 2020
KernelVersion: v5.9
Contact: Mika Westerberg <mika.westerberg@linux.intel.com>
Description: Retimer device identifier read from the hardware.
What: /sys/bus/thunderbolt/devices/<device>:<port>.<index>/nvm_authenticate
Date: Oct 2020
KernelVersion: v5.9
Contact: Mika Westerberg <mika.westerberg@linux.intel.com>
Description: When new NVM image is written to the non-active NVM
area (through non_activeX NVMem device), the
authentication procedure is started by writing 1 to
this file. If everything goes well, the device is
restarted with the new NVM firmware. If the image
verification fails an error code is returned instead.
When read holds status of the last authentication
operation if an error occurred during the process.
Format: %x.
What: /sys/bus/thunderbolt/devices/<device>:<port>.<index>/nvm_version
Date: Oct 2020
KernelVersion: v5.9
Contact: Mika Westerberg <mika.westerberg@linux.intel.com>
Description: Holds retimer NVM version number. Format: %x.%x, major.minor.
What: /sys/bus/thunderbolt/devices/<device>:<port>.<index>/vendor
Date: Oct 2020
KernelVersion: v5.9
Contact: Mika Westerberg <mika.westerberg@linux.intel.com>
Description: Retimer vendor identifier read from the hardware.
What: /sys/bus/thunderbolt/devices/.../nvm_authenticate_on_disconnect
Date: Oct 2020
KernelVersion: v5.9
Contact: Mario Limonciello <mario.limonciello@dell.com>
Description: For supported devices, automatically authenticate the new Thunderbolt
image when the device is disconnected from the host system.
This file will accept writing values "1" or "2"
- Writing "1" will flush the image to the storage
area and prepare the device for authentication on disconnect.
- Writing "2" will run some basic validation on the image
and flush it to the storage area.
...@@ -173,8 +173,8 @@ following ``udev`` rule::
ACTION=="add", SUBSYSTEM=="thunderbolt", ATTRS{iommu_dma_protection}=="1", ATTR{authorized}=="0", ATTR{authorized}="1"
Upgrading NVM on Thunderbolt device, host or retimer
----------------------------------------------------
Since most of the functionality is handled in firmware running on a
host controller or a device, it is important that the firmware can be
upgraded to the latest where possible bugs in it have been fixed.
...@@ -185,9 +185,10 @@ for some machines:
`Thunderbolt Updates <https://thunderbolttechnology.net/updates>`_
Before you upgrade firmware on a device, host or retimer, please make
sure it is a suitable upgrade. Failing to do that may render the device
in a state where it cannot be used properly anymore without special
tools!
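As a sketch of the typical flow (the device name ``0-1`` and the first
non-active NVMem instance below are only an example; the exact sysfs paths
depend on your topology), the new image is written to the non-active NVMem
device and the authentication is then triggered::

  # dd if=firmware.bin of=/sys/bus/thunderbolt/devices/0-1/nvm_non_active0/nvmem
  # echo 1 > /sys/bus/thunderbolt/devices/0-1/nvm_authenticate

If everything goes well the device restarts with the new NVM firmware;
reading ``nvm_authenticate`` afterwards should report 0x0, and a non-zero
value is the status of the failed authentication.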
Host NVM upgrade on Apple Macs is not supported.
......
...@@ -866,8 +866,8 @@ static int tbnet_open(struct net_device *dev) ...@@ -866,8 +866,8 @@ static int tbnet_open(struct net_device *dev)
eof_mask = BIT(TBIP_PDF_FRAME_END); eof_mask = BIT(TBIP_PDF_FRAME_END);
ring = tb_ring_alloc_rx(xd->tb->nhi, -1, TBNET_RING_SIZE, ring = tb_ring_alloc_rx(xd->tb->nhi, -1, TBNET_RING_SIZE,
RING_FLAG_FRAME | RING_FLAG_E2E, sof_mask, RING_FLAG_FRAME, sof_mask, eof_mask,
eof_mask, tbnet_start_poll, net); tbnet_start_poll, net);
if (!ring) { if (!ring) {
netdev_err(dev, "failed to allocate Rx ring\n"); netdev_err(dev, "failed to allocate Rx ring\n");
tb_ring_free(net->tx_ring.ring); tb_ring_free(net->tx_ring.ring);
......
...@@ -8,10 +8,15 @@ menuconfig USB4
select CRYPTO_HASH
select NVMEM
help
USB4 and Thunderbolt driver. USB4 is the public specification
based on the Thunderbolt 3 protocol. This driver is required if
you want to hotplug Thunderbolt and USB4 compliant devices on
Apple hardware or on PCs with Intel Falcon Ridge or newer.
To compile this driver as a module, choose M here. The module will be
called thunderbolt.
config USB4_KUNIT_TEST
bool "KUnit tests"
depends on KUNIT=y
depends on USB4=y
...@@ -2,3 +2,6 @@ ...@@ -2,3 +2,6 @@
obj-${CONFIG_USB4} := thunderbolt.o obj-${CONFIG_USB4} := thunderbolt.o
thunderbolt-objs := nhi.o nhi_ops.o ctl.o tb.o switch.o cap.o path.o tunnel.o eeprom.o thunderbolt-objs := nhi.o nhi_ops.o ctl.o tb.o switch.o cap.o path.o tunnel.o eeprom.o
thunderbolt-objs += domain.o dma_port.o icm.o property.o xdomain.o lc.o tmu.o usb4.o thunderbolt-objs += domain.o dma_port.o icm.o property.o xdomain.o lc.o tmu.o usb4.o
thunderbolt-objs += nvm.o retimer.o quirks.o
obj-${CONFIG_USB4_KUNIT_TEST} += test.o
...@@ -812,6 +812,6 @@ void tb_domain_exit(void) ...@@ -812,6 +812,6 @@ void tb_domain_exit(void)
{ {
bus_unregister(&tb_bus_type); bus_unregister(&tb_bus_type);
ida_destroy(&tb_domain_ida); ida_destroy(&tb_domain_ida);
tb_switch_exit(); tb_nvm_exit();
tb_xdomain_exit(); tb_xdomain_exit();
} }
...@@ -599,6 +599,7 @@ int tb_drom_read(struct tb_switch *sw) ...@@ -599,6 +599,7 @@ int tb_drom_read(struct tb_switch *sw)
sw->uid = header->uid; sw->uid = header->uid;
sw->vendor = header->vendor_id; sw->vendor = header->vendor_id;
sw->device = header->model_id; sw->device = header->model_id;
tb_check_quirks(sw);
crc = tb_crc32(sw->drom + TB_DROM_DATA_START, header->data_len); crc = tb_crc32(sw->drom + TB_DROM_DATA_START, header->data_len);
if (crc != header->data_crc32) { if (crc != header->data_crc32) {
......
...@@ -366,3 +366,17 @@ int tb_lc_dp_sink_dealloc(struct tb_switch *sw, struct tb_port *in) ...@@ -366,3 +366,17 @@ int tb_lc_dp_sink_dealloc(struct tb_switch *sw, struct tb_port *in)
tb_port_dbg(in, "sink %d de-allocated\n", sink); tb_port_dbg(in, "sink %d de-allocated\n", sink);
return 0; return 0;
} }
/**
* tb_lc_force_power() - Forces LC to be powered on
* @sw: Thunderbolt switch
*
* This is useful to let the authentication cycle pass even without
* a Thunderbolt link present.
*/
int tb_lc_force_power(struct tb_switch *sw)
{
u32 in = 0xffff;
return tb_sw_write(sw, &in, TB_CFG_SWITCH, TB_LC_POWER, 1);
}
...@@ -24,12 +24,7 @@ ...@@ -24,12 +24,7 @@
#define RING_TYPE(ring) ((ring)->is_tx ? "TX ring" : "RX ring") #define RING_TYPE(ring) ((ring)->is_tx ? "TX ring" : "RX ring")
/* #define RING_FIRST_USABLE_HOPID 1
* Used to enable end-to-end workaround for missing RX packets. Do not
* use this ring for anything else.
*/
#define RING_E2E_UNUSED_HOPID 2
#define RING_FIRST_USABLE_HOPID TB_PATH_MIN_HOPID
/* /*
* Minimal number of vectors when we use MSI-X. Two for control channel * Minimal number of vectors when we use MSI-X. Two for control channel
...@@ -440,7 +435,7 @@ static int nhi_alloc_hop(struct tb_nhi *nhi, struct tb_ring *ring) ...@@ -440,7 +435,7 @@ static int nhi_alloc_hop(struct tb_nhi *nhi, struct tb_ring *ring)
/* /*
* Automatically allocate HopID from the non-reserved * Automatically allocate HopID from the non-reserved
* range 8 .. hop_count - 1. * range 1 .. hop_count - 1.
*/ */
for (i = RING_FIRST_USABLE_HOPID; i < nhi->hop_count; i++) { for (i = RING_FIRST_USABLE_HOPID; i < nhi->hop_count; i++) {
if (ring->is_tx) { if (ring->is_tx) {
...@@ -496,10 +491,6 @@ static struct tb_ring *tb_ring_alloc(struct tb_nhi *nhi, u32 hop, int size, ...@@ -496,10 +491,6 @@ static struct tb_ring *tb_ring_alloc(struct tb_nhi *nhi, u32 hop, int size,
dev_dbg(&nhi->pdev->dev, "allocating %s ring %d of size %d\n", dev_dbg(&nhi->pdev->dev, "allocating %s ring %d of size %d\n",
transmit ? "TX" : "RX", hop, size); transmit ? "TX" : "RX", hop, size);
/* Tx Ring 2 is reserved for E2E workaround */
if (transmit && hop == RING_E2E_UNUSED_HOPID)
return NULL;
ring = kzalloc(sizeof(*ring), GFP_KERNEL); ring = kzalloc(sizeof(*ring), GFP_KERNEL);
if (!ring) if (!ring)
return NULL; return NULL;
...@@ -614,19 +605,6 @@ void tb_ring_start(struct tb_ring *ring) ...@@ -614,19 +605,6 @@ void tb_ring_start(struct tb_ring *ring)
flags = RING_FLAG_ENABLE | RING_FLAG_RAW; flags = RING_FLAG_ENABLE | RING_FLAG_RAW;
} }
if (ring->flags & RING_FLAG_E2E && !ring->is_tx) {
u32 hop;
/*
* In order not to lose Rx packets we enable end-to-end
* workaround which transfers Rx credits to an unused Tx
* HopID.
*/
hop = RING_E2E_UNUSED_HOPID << REG_RX_OPTIONS_E2E_HOP_SHIFT;
hop &= REG_RX_OPTIONS_E2E_HOP_MASK;
flags |= hop | RING_FLAG_E2E_FLOW_CONTROL;
}
ring_iowrite64desc(ring, ring->descriptors_dma, 0); ring_iowrite64desc(ring, ring->descriptors_dma, 0);
if (ring->is_tx) { if (ring->is_tx) {
ring_iowrite32desc(ring, ring->size, 12); ring_iowrite32desc(ring, ring->size, 12);
...@@ -1123,9 +1101,7 @@ static int nhi_probe(struct pci_dev *pdev, const struct pci_device_id *id) ...@@ -1123,9 +1101,7 @@ static int nhi_probe(struct pci_dev *pdev, const struct pci_device_id *id)
/* cannot fail - table is allocated bin pcim_iomap_regions */ /* cannot fail - table is allocated bin pcim_iomap_regions */
nhi->iobase = pcim_iomap_table(pdev)[0]; nhi->iobase = pcim_iomap_table(pdev)[0];
nhi->hop_count = ioread32(nhi->iobase + REG_HOP_COUNT) & 0x3ff; nhi->hop_count = ioread32(nhi->iobase + REG_HOP_COUNT) & 0x3ff;
if (nhi->hop_count != 12 && nhi->hop_count != 32) dev_dbg(&pdev->dev, "total paths: %d\n", nhi->hop_count);
dev_warn(&pdev->dev, "unexpected hop count: %d\n",
nhi->hop_count);
nhi->tx_rings = devm_kcalloc(&pdev->dev, nhi->hop_count, nhi->tx_rings = devm_kcalloc(&pdev->dev, nhi->hop_count,
sizeof(*nhi->tx_rings), GFP_KERNEL); sizeof(*nhi->tx_rings), GFP_KERNEL);
......
// SPDX-License-Identifier: GPL-2.0
/*
* NVM helpers
*
* Copyright (C) 2020, Intel Corporation
* Author: Mika Westerberg <mika.westerberg@linux.intel.com>
*/
#include <linux/idr.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>
#include "tb.h"
static DEFINE_IDA(nvm_ida);
/**
* tb_nvm_alloc() - Allocate new NVM structure
* @dev: Device owning the NVM
*
* Allocates new NVM structure with unique @id and returns it. In case
* of error returns ERR_PTR().
*/
struct tb_nvm *tb_nvm_alloc(struct device *dev)
{
struct tb_nvm *nvm;
int ret;
nvm = kzalloc(sizeof(*nvm), GFP_KERNEL);
if (!nvm)
return ERR_PTR(-ENOMEM);
ret = ida_simple_get(&nvm_ida, 0, 0, GFP_KERNEL);
if (ret < 0) {
kfree(nvm);
return ERR_PTR(ret);
}
nvm->id = ret;
nvm->dev = dev;
return nvm;
}
/**
* tb_nvm_add_active() - Adds active NVMem device to NVM
* @nvm: NVM structure
* @size: Size of the active NVM in bytes
* @reg_read: Pointer to the function to read the NVM (passed directly to the
* NVMem device)
*
* Registers new active NVMem device for @nvm. The @reg_read is called
* directly from NVMem so it must handle possible concurrent access if
* needed. The first parameter passed to @reg_read is the @nvm structure.
* Returns %0 on success and negative errno otherwise.
*/
int tb_nvm_add_active(struct tb_nvm *nvm, size_t size, nvmem_reg_read_t reg_read)
{
struct nvmem_config config;
struct nvmem_device *nvmem;
memset(&config, 0, sizeof(config));
config.name = "nvm_active";
config.reg_read = reg_read;
config.read_only = true;
config.id = nvm->id;
config.stride = 4;
config.word_size = 4;
config.size = size;
config.dev = nvm->dev;
config.owner = THIS_MODULE;
config.priv = nvm;
nvmem = nvmem_register(&config);
if (IS_ERR(nvmem))
return PTR_ERR(nvmem);
nvm->active = nvmem;
return 0;
}
/**
* tb_nvm_write_buf() - Write data to @nvm buffer
* @nvm: NVM structure
* @offset: Offset where to write the data
* @val: Data buffer to write
* @bytes: Number of bytes to write
*
* Helper function to cache the new NVM image before it is actually
* written to the flash. Copies @bytes from @val to @nvm->buf starting
* from @offset.
*/
int tb_nvm_write_buf(struct tb_nvm *nvm, unsigned int offset, void *val,
size_t bytes)
{
if (!nvm->buf) {
nvm->buf = vmalloc(NVM_MAX_SIZE);
if (!nvm->buf)
return -ENOMEM;
}
nvm->flushed = false;
nvm->buf_data_size = offset + bytes;
memcpy(nvm->buf + offset, val, bytes);
return 0;
}
/**
* tb_nvm_add_non_active() - Adds non-active NVMem device to NVM
* @nvm: NVM structure
* @size: Size of the non-active NVM in bytes
* @reg_write: Pointer to the function to write the NVM (passed directly
* to the NVMem device)
*
* Registers new non-active NVMem device for @nvm. The @reg_write is called
* directly from NVMem so it must handle possible concurrent access if
* needed. The first parameter passed to @reg_write is the @nvm structure.
* Returns %0 on success and negative errno otherwise.
*/
int tb_nvm_add_non_active(struct tb_nvm *nvm, size_t size,
nvmem_reg_write_t reg_write)
{
struct nvmem_config config;
struct nvmem_device *nvmem;
memset(&config, 0, sizeof(config));
config.name = "nvm_non_active";
config.reg_write = reg_write;
config.root_only = true;
config.id = nvm->id;
config.stride = 4;
config.word_size = 4;
config.size = size;
config.dev = nvm->dev;
config.owner = THIS_MODULE;
config.priv = nvm;
nvmem = nvmem_register(&config);
if (IS_ERR(nvmem))
return PTR_ERR(nvmem);
nvm->non_active = nvmem;
return 0;
}
/**
* tb_nvm_free() - Release NVM and its resources
* @nvm: NVM structure to release
*
* Releases NVM and the NVMem devices if they were registered.
*/
void tb_nvm_free(struct tb_nvm *nvm)
{
if (nvm) {
if (nvm->non_active)
nvmem_unregister(nvm->non_active);
if (nvm->active)
nvmem_unregister(nvm->active);
vfree(nvm->buf);
ida_simple_remove(&nvm_ida, nvm->id);
}
kfree(nvm);
}
void tb_nvm_exit(void)
{
ida_destroy(&nvm_ida);
}
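/*
 * A minimal usage sketch of the helpers above; the switch and retimer
 * conversions later in this diff follow the same pattern. The function
 * name and the read/write callbacks here are placeholders.
 */
static struct tb_nvm *example_nvm_setup(struct device *dev, size_t active_size,
					nvmem_reg_read_t rd, nvmem_reg_write_t wr)
{
	struct tb_nvm *nvm;
	int ret;

	nvm = tb_nvm_alloc(dev);	/* reserves a unique id used for the NVMem devices */
	if (IS_ERR(nvm))
		return nvm;

	/* Read-only view of the currently active image */
	ret = tb_nvm_add_active(nvm, active_size, rd);
	if (ret)
		goto err_free;

	/* Writable staging area; the write callback typically caches into nvm->buf */
	ret = tb_nvm_add_non_active(nvm, NVM_MAX_SIZE, wr);
	if (ret)
		goto err_free;

	return nvm;

err_free:
	tb_nvm_free(nvm);	/* unregisters whatever was registered above */
	return ERR_PTR(ret);
}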
...@@ -229,7 +229,7 @@ struct tb_path *tb_path_alloc(struct tb *tb, struct tb_port *src, int src_hopid, ...@@ -229,7 +229,7 @@ struct tb_path *tb_path_alloc(struct tb *tb, struct tb_port *src, int src_hopid,
struct tb_port *dst, int dst_hopid, int link_nr, struct tb_port *dst, int dst_hopid, int link_nr,
const char *name) const char *name)
{ {
struct tb_port *in_port, *out_port; struct tb_port *in_port, *out_port, *first_port, *last_port;
int in_hopid, out_hopid; int in_hopid, out_hopid;
struct tb_path *path; struct tb_path *path;
size_t num_hops; size_t num_hops;
...@@ -239,12 +239,23 @@ struct tb_path *tb_path_alloc(struct tb *tb, struct tb_port *src, int src_hopid, ...@@ -239,12 +239,23 @@ struct tb_path *tb_path_alloc(struct tb *tb, struct tb_port *src, int src_hopid,
if (!path) if (!path)
return NULL; return NULL;
/* first_port = last_port = NULL;
* Number of hops on a path is the distance between the two i = 0;
* switches plus the source adapter port. tb_for_each_port_on_path(src, dst, in_port) {
*/ if (!first_port)
num_hops = abs(tb_route_length(tb_route(src->sw)) - first_port = in_port;
tb_route_length(tb_route(dst->sw))) + 1; last_port = in_port;
i++;
}
/* Check that src and dst are reachable */
if (first_port != src || last_port != dst) {
kfree(path);
return NULL;
}
/* Each hop takes two ports */
num_hops = i / 2;
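/*
 * e.g. a path between adapters on two directly connected routers walks
 * four ports (source adapter, downstream lane, upstream lane, destination
 * adapter), which makes two hops, one per router.
 */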
path->hops = kcalloc(num_hops, sizeof(*path->hops), GFP_KERNEL); path->hops = kcalloc(num_hops, sizeof(*path->hops), GFP_KERNEL);
if (!path->hops) { if (!path->hops) {
...@@ -559,21 +570,20 @@ bool tb_path_is_invalid(struct tb_path *path) ...@@ -559,21 +570,20 @@ bool tb_path_is_invalid(struct tb_path *path)
} }
/** /**
* tb_path_switch_on_path() - Does the path go through certain switch * tb_path_port_on_path() - Does the path go through certain port
* @path: Path to check * @path: Path to check
* @sw: Switch to check * @port: Switch to check
* *
* Goes over all hops on path and checks if @sw is any of them. * Goes over all hops on path and checks if @port is any of them.
* Direction does not matter. * Direction does not matter.
*/ */
bool tb_path_switch_on_path(const struct tb_path *path, bool tb_path_port_on_path(const struct tb_path *path, const struct tb_port *port)
const struct tb_switch *sw)
{ {
int i; int i;
for (i = 0; i < path->path_length; i++) { for (i = 0; i < path->path_length; i++) {
if (path->hops[i].in_port->sw == sw || if (path->hops[i].in_port == port ||
path->hops[i].out_port->sw == sw) path->hops[i].out_port == port)
return true; return true;
} }
......
// SPDX-License-Identifier: GPL-2.0
/*
* Thunderbolt driver - quirks
*
* Copyright (c) 2020 Mario Limonciello <mario.limonciello@dell.com>
*/
#include "tb.h"
static void quirk_force_power_link(struct tb_switch *sw)
{
sw->quirks |= QUIRK_FORCE_POWER_LINK_CONTROLLER;
}
struct tb_quirk {
u16 vendor;
u16 device;
void (*hook)(struct tb_switch *sw);
};
static const struct tb_quirk tb_quirks[] = {
/* Dell WD19TB supports self-authentication on unplug */
{ 0x00d4, 0xb070, quirk_force_power_link },
};
/**
* tb_check_quirks() - Check for quirks to apply
* @sw: Thunderbolt switch
*
* Apply any quirks for the Thunderbolt controller
*/
void tb_check_quirks(struct tb_switch *sw)
{
int i;
for (i = 0; i < ARRAY_SIZE(tb_quirks); i++) {
const struct tb_quirk *q = &tb_quirks[i];
if (sw->device == q->device && sw->vendor == q->vendor)
q->hook(sw);
}
}
// SPDX-License-Identifier: GPL-2.0
/*
* Thunderbolt/USB4 retimer support.
*
* Copyright (C) 2020, Intel Corporation
* Authors: Kranthi Kuntala <kranthi.kuntala@intel.com>
* Mika Westerberg <mika.westerberg@linux.intel.com>
*/
#include <linux/delay.h>
#include <linux/pm_runtime.h>
#include <linux/sched/signal.h>
#include "sb_regs.h"
#include "tb.h"
#define TB_MAX_RETIMER_INDEX 6
static int tb_retimer_nvm_read(void *priv, unsigned int offset, void *val,
size_t bytes)
{
struct tb_nvm *nvm = priv;
struct tb_retimer *rt = tb_to_retimer(nvm->dev);
int ret;
pm_runtime_get_sync(&rt->dev);
if (!mutex_trylock(&rt->tb->lock)) {
ret = restart_syscall();
goto out;
}
ret = usb4_port_retimer_nvm_read(rt->port, rt->index, offset, val, bytes);
mutex_unlock(&rt->tb->lock);
out:
pm_runtime_mark_last_busy(&rt->dev);
pm_runtime_put_autosuspend(&rt->dev);
return ret;
}
static int tb_retimer_nvm_write(void *priv, unsigned int offset, void *val,
size_t bytes)
{
struct tb_nvm *nvm = priv;
struct tb_retimer *rt = tb_to_retimer(nvm->dev);
int ret = 0;
if (!mutex_trylock(&rt->tb->lock))
return restart_syscall();
ret = tb_nvm_write_buf(nvm, offset, val, bytes);
mutex_unlock(&rt->tb->lock);
return ret;
}
static int tb_retimer_nvm_add(struct tb_retimer *rt)
{
struct tb_nvm *nvm;
u32 val, nvm_size;
int ret;
nvm = tb_nvm_alloc(&rt->dev);
if (IS_ERR(nvm))
return PTR_ERR(nvm);
ret = usb4_port_retimer_nvm_read(rt->port, rt->index, NVM_VERSION, &val,
sizeof(val));
if (ret)
goto err_nvm;
nvm->major = val >> 16;
nvm->minor = val >> 8;
ret = usb4_port_retimer_nvm_read(rt->port, rt->index, NVM_FLASH_SIZE,
&val, sizeof(val));
if (ret)
goto err_nvm;
nvm_size = (SZ_1M << (val & 7)) / 8;
nvm_size = (nvm_size - SZ_16K) / 2;
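/*
 * The low bits of NVM_FLASH_SIZE encode the total flash size as a power
 * of two in Mbit, so e.g. (val & 7) == 1 means 2 Mbit = 256 KiB. The
 * 16 KiB header is dropped and the remainder presumably holds two equally
 * sized image regions, hence the final division by two.
 */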
ret = tb_nvm_add_active(nvm, nvm_size, tb_retimer_nvm_read);
if (ret)
goto err_nvm;
ret = tb_nvm_add_non_active(nvm, NVM_MAX_SIZE, tb_retimer_nvm_write);
if (ret)
goto err_nvm;
rt->nvm = nvm;
return 0;
err_nvm:
tb_nvm_free(nvm);
return ret;
}
static int tb_retimer_nvm_validate_and_write(struct tb_retimer *rt)
{
unsigned int image_size, hdr_size;
const u8 *buf = rt->nvm->buf;
u16 ds_size, device;
image_size = rt->nvm->buf_data_size;
if (image_size < NVM_MIN_SIZE || image_size > NVM_MAX_SIZE)
return -EINVAL;
/*
* FARB pointer must point inside the image and must at least
* contain parts of the digital section we will be reading here.
*/
hdr_size = (*(u32 *)buf) & 0xffffff;
if (hdr_size + NVM_DEVID + 2 >= image_size)
return -EINVAL;
/* Digital section start should be aligned to 4k page */
if (!IS_ALIGNED(hdr_size, SZ_4K))
return -EINVAL;
/*
* Read digital section size and check that it also fits inside
* the image.
*/
ds_size = *(u16 *)(buf + hdr_size);
if (ds_size >= image_size)
return -EINVAL;
/*
* Make sure the device ID in the image matches the retimer
* hardware.
*/
device = *(u16 *)(buf + hdr_size + NVM_DEVID);
if (device != rt->device)
return -EINVAL;
/* Skip headers in the image */
buf += hdr_size;
image_size -= hdr_size;
return usb4_port_retimer_nvm_write(rt->port, rt->index, 0, buf,
image_size);
}
static ssize_t device_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct tb_retimer *rt = tb_to_retimer(dev);
return sprintf(buf, "%#x\n", rt->device);
}
static DEVICE_ATTR_RO(device);
static ssize_t nvm_authenticate_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct tb_retimer *rt = tb_to_retimer(dev);
int ret;
if (!mutex_trylock(&rt->tb->lock))
return restart_syscall();
if (!rt->nvm)
ret = -EAGAIN;
else
ret = sprintf(buf, "%#x\n", rt->auth_status);
mutex_unlock(&rt->tb->lock);
return ret;
}
static ssize_t nvm_authenticate_store(struct device *dev,
struct device_attribute *attr, const char *buf, size_t count)
{
struct tb_retimer *rt = tb_to_retimer(dev);
bool val;
int ret;
pm_runtime_get_sync(&rt->dev);
if (!mutex_trylock(&rt->tb->lock)) {
ret = restart_syscall();
goto exit_rpm;
}
if (!rt->nvm) {
ret = -EAGAIN;
goto exit_unlock;
}
ret = kstrtobool(buf, &val);
if (ret)
goto exit_unlock;
/* Always clear status */
rt->auth_status = 0;
if (val) {
if (!rt->nvm->buf) {
ret = -EINVAL;
goto exit_unlock;
}
ret = tb_retimer_nvm_validate_and_write(rt);
if (ret)
goto exit_unlock;
ret = usb4_port_retimer_nvm_authenticate(rt->port, rt->index);
}
exit_unlock:
mutex_unlock(&rt->tb->lock);
exit_rpm:
pm_runtime_mark_last_busy(&rt->dev);
pm_runtime_put_autosuspend(&rt->dev);
if (ret)
return ret;
return count;
}
static DEVICE_ATTR_RW(nvm_authenticate);
static ssize_t nvm_version_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct tb_retimer *rt = tb_to_retimer(dev);
int ret;
if (!mutex_trylock(&rt->tb->lock))
return restart_syscall();
if (!rt->nvm)
ret = -EAGAIN;
else
ret = sprintf(buf, "%x.%x\n", rt->nvm->major, rt->nvm->minor);
mutex_unlock(&rt->tb->lock);
return ret;
}
static DEVICE_ATTR_RO(nvm_version);
static ssize_t vendor_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct tb_retimer *rt = tb_to_retimer(dev);
return sprintf(buf, "%#x\n", rt->vendor);
}
static DEVICE_ATTR_RO(vendor);
static struct attribute *retimer_attrs[] = {
&dev_attr_device.attr,
&dev_attr_nvm_authenticate.attr,
&dev_attr_nvm_version.attr,
&dev_attr_vendor.attr,
NULL
};
static const struct attribute_group retimer_group = {
.attrs = retimer_attrs,
};
static const struct attribute_group *retimer_groups[] = {
&retimer_group,
NULL
};
static void tb_retimer_release(struct device *dev)
{
struct tb_retimer *rt = tb_to_retimer(dev);
kfree(rt);
}
struct device_type tb_retimer_type = {
.name = "thunderbolt_retimer",
.groups = retimer_groups,
.release = tb_retimer_release,
};
static int tb_retimer_add(struct tb_port *port, u8 index, u32 auth_status)
{
struct tb_retimer *rt;
u32 vendor, device;
int ret;
if (!port->cap_usb4)
return -EINVAL;
ret = usb4_port_retimer_read(port, index, USB4_SB_VENDOR_ID, &vendor,
sizeof(vendor));
if (ret) {
if (ret != -ENODEV)
tb_port_warn(port, "failed read retimer VendorId: %d\n", ret);
return ret;
}
ret = usb4_port_retimer_read(port, index, USB4_SB_PRODUCT_ID, &device,
sizeof(device));
if (ret) {
if (ret != -ENODEV)
tb_port_warn(port, "failed read retimer ProductId: %d\n", ret);
return ret;
}
if (vendor != PCI_VENDOR_ID_INTEL && vendor != 0x8087) {
tb_port_info(port, "retimer NVM format of vendor %#x is not supported\n",
vendor);
return -EOPNOTSUPP;
}
/*
* Check that it supports NVM operations. If not then don't add
* the device at all.
*/
ret = usb4_port_retimer_nvm_sector_size(port, index);
if (ret < 0)
return ret;
rt = kzalloc(sizeof(*rt), GFP_KERNEL);
if (!rt)
return -ENOMEM;
rt->index = index;
rt->vendor = vendor;
rt->device = device;
rt->auth_status = auth_status;
rt->port = port;
rt->tb = port->sw->tb;
rt->dev.parent = &port->sw->dev;
rt->dev.bus = &tb_bus_type;
rt->dev.type = &tb_retimer_type;
dev_set_name(&rt->dev, "%s:%u.%u", dev_name(&port->sw->dev),
port->port, index);
ret = device_register(&rt->dev);
if (ret) {
dev_err(&rt->dev, "failed to register retimer: %d\n", ret);
put_device(&rt->dev);
return ret;
}
ret = tb_retimer_nvm_add(rt);
if (ret) {
dev_err(&rt->dev, "failed to add NVM devices: %d\n", ret);
device_del(&rt->dev);
return ret;
}
dev_info(&rt->dev, "new retimer found, vendor=%#x device=%#x\n",
rt->vendor, rt->device);
pm_runtime_no_callbacks(&rt->dev);
pm_runtime_set_active(&rt->dev);
pm_runtime_enable(&rt->dev);
pm_runtime_set_autosuspend_delay(&rt->dev, TB_AUTOSUSPEND_DELAY);
pm_runtime_mark_last_busy(&rt->dev);
pm_runtime_use_autosuspend(&rt->dev);
return 0;
}
static void tb_retimer_remove(struct tb_retimer *rt)
{
dev_info(&rt->dev, "retimer disconnected\n");
tb_nvm_free(rt->nvm);
device_unregister(&rt->dev);
}
struct tb_retimer_lookup {
const struct tb_port *port;
u8 index;
};
static int retimer_match(struct device *dev, void *data)
{
const struct tb_retimer_lookup *lookup = data;
struct tb_retimer *rt = tb_to_retimer(dev);
return rt && rt->port == lookup->port && rt->index == lookup->index;
}
static struct tb_retimer *tb_port_find_retimer(struct tb_port *port, u8 index)
{
struct tb_retimer_lookup lookup = { .port = port, .index = index };
struct device *dev;
dev = device_find_child(&port->sw->dev, &lookup, retimer_match);
if (dev)
return tb_to_retimer(dev);
return NULL;
}
/**
* tb_retimer_scan() - Scan for on-board retimers under port
* @port: USB4 port to scan
*
* Tries to enumerate on-board retimers connected to @port. Found
* retimers are registered as children of @port. Does not scan for cable
* retimers for now.
*/
int tb_retimer_scan(struct tb_port *port)
{
u32 status[TB_MAX_RETIMER_INDEX] = {};
int ret, i, last_idx = 0;
if (!port->cap_usb4)
return 0;
/*
* Send broadcast RT to make sure retimer indices facing this
* port are set.
*/
ret = usb4_port_enumerate_retimers(port);
if (ret)
return ret;
/*
* Before doing anything else, read the authentication status.
* If the retimer has it set, store it for the new retimer
* device instance.
*/
for (i = 1; i <= TB_MAX_RETIMER_INDEX; i++)
usb4_port_retimer_nvm_authenticate_status(port, i, &status[i]);
for (i = 1; i <= TB_MAX_RETIMER_INDEX; i++) {
/*
* Last retimer is true only for the last on-board
* retimer (the one connected directly to the Type-C
* port).
*/
ret = usb4_port_retimer_is_last(port, i);
if (ret > 0)
last_idx = i;
else if (ret < 0)
break;
}
if (!last_idx)
return 0;
/* Add on-board retimers if they do not exist already */
for (i = 1; i <= last_idx; i++) {
struct tb_retimer *rt;
rt = tb_port_find_retimer(port, i);
if (rt) {
put_device(&rt->dev);
} else {
ret = tb_retimer_add(port, i, status[i]);
if (ret && ret != -EOPNOTSUPP)
return ret;
}
}
return 0;
}
static int remove_retimer(struct device *dev, void *data)
{
struct tb_retimer *rt = tb_to_retimer(dev);
struct tb_port *port = data;
if (rt && rt->port == port)
tb_retimer_remove(rt);
return 0;
}
/**
* tb_retimer_remove_all() - Remove all retimers under port
* @port: USB4 port whose retimers to remove
*
* This removes all previously added retimers under @port.
*/
void tb_retimer_remove_all(struct tb_port *port)
{
if (port->cap_usb4)
device_for_each_child_reverse(&port->sw->dev, port,
remove_retimer);
}
/* SPDX-License-Identifier: GPL-2.0 */
/*
* USB4 port sideband registers found on routers and retimers
*
* Copyright (C) 2020, Intel Corporation
* Authors: Mika Westerberg <mika.westerberg@linux.intel.com>
* Rajmohan Mani <rajmohan.mani@intel.com>
*/
#ifndef _SB_REGS
#define _SB_REGS
#define USB4_SB_VENDOR_ID 0x00
#define USB4_SB_PRODUCT_ID 0x01
#define USB4_SB_OPCODE 0x08
enum usb4_sb_opcode {
USB4_SB_OPCODE_ERR = 0x20525245, /* "ERR " */
USB4_SB_OPCODE_ONS = 0x444d4321, /* "!CMD" */
USB4_SB_OPCODE_ENUMERATE_RETIMERS = 0x4d554e45, /* "ENUM" */
USB4_SB_OPCODE_QUERY_LAST_RETIMER = 0x5453414c, /* "LAST" */
USB4_SB_OPCODE_GET_NVM_SECTOR_SIZE = 0x53534e47, /* "GNSS" */
USB4_SB_OPCODE_NVM_SET_OFFSET = 0x53504f42, /* "BOPS" */
USB4_SB_OPCODE_NVM_BLOCK_WRITE = 0x574b4c42, /* "BLKW" */
USB4_SB_OPCODE_NVM_AUTH_WRITE = 0x48545541, /* "AUTH" */
USB4_SB_OPCODE_NVM_READ = 0x52524641, /* "AFRR" */
};
#define USB4_SB_METADATA 0x09
#define USB4_SB_METADATA_NVM_AUTH_WRITE_MASK GENMASK(5, 0)
#define USB4_SB_DATA 0x12
#endif
...@@ -13,21 +13,12 @@ ...@@ -13,21 +13,12 @@
#include <linux/sched/signal.h> #include <linux/sched/signal.h>
#include <linux/sizes.h> #include <linux/sizes.h>
#include <linux/slab.h> #include <linux/slab.h>
#include <linux/vmalloc.h>
#include "tb.h" #include "tb.h"
/* Switch NVM support */ /* Switch NVM support */
#define NVM_DEVID 0x05
#define NVM_VERSION 0x08
#define NVM_CSS 0x10 #define NVM_CSS 0x10
#define NVM_FLASH_SIZE 0x45
#define NVM_MIN_SIZE SZ_32K
#define NVM_MAX_SIZE SZ_512K
static DEFINE_IDA(nvm_ida);
struct nvm_auth_status { struct nvm_auth_status {
struct list_head list; struct list_head list;
...@@ -35,6 +26,11 @@ struct nvm_auth_status { ...@@ -35,6 +26,11 @@ struct nvm_auth_status {
u32 status; u32 status;
}; };
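/*
 * Values user space writes to the nvm_authenticate and
 * nvm_authenticate_on_disconnect attributes (see the ABI documentation
 * update earlier in this series).
 */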
enum nvm_write_ops {
WRITE_AND_AUTHENTICATE = 1,
WRITE_ONLY = 2,
};
/* /*
* Hold NVM authentication failure status per switch This information * Hold NVM authentication failure status per switch This information
* needs to stay around even when the switch gets power cycled so we * needs to stay around even when the switch gets power cycled so we
...@@ -164,8 +160,12 @@ static int nvm_validate_and_write(struct tb_switch *sw) ...@@ -164,8 +160,12 @@ static int nvm_validate_and_write(struct tb_switch *sw)
} }
if (tb_switch_is_usb4(sw)) if (tb_switch_is_usb4(sw))
return usb4_switch_nvm_write(sw, 0, buf, image_size); ret = usb4_switch_nvm_write(sw, 0, buf, image_size);
return dma_port_flash_write(sw->dma_port, 0, buf, image_size); else
ret = dma_port_flash_write(sw->dma_port, 0, buf, image_size);
if (!ret)
sw->nvm->flushed = true;
return ret;
} }
static int nvm_authenticate_host_dma_port(struct tb_switch *sw) static int nvm_authenticate_host_dma_port(struct tb_switch *sw)
...@@ -328,7 +328,8 @@ static int nvm_authenticate(struct tb_switch *sw) ...@@ -328,7 +328,8 @@ static int nvm_authenticate(struct tb_switch *sw)
static int tb_switch_nvm_read(void *priv, unsigned int offset, void *val, static int tb_switch_nvm_read(void *priv, unsigned int offset, void *val,
size_t bytes) size_t bytes)
{ {
struct tb_switch *sw = priv; struct tb_nvm *nvm = priv;
struct tb_switch *sw = tb_to_switch(nvm->dev);
int ret; int ret;
pm_runtime_get_sync(&sw->dev); pm_runtime_get_sync(&sw->dev);
...@@ -351,8 +352,9 @@ static int tb_switch_nvm_read(void *priv, unsigned int offset, void *val, ...@@ -351,8 +352,9 @@ static int tb_switch_nvm_read(void *priv, unsigned int offset, void *val,
static int tb_switch_nvm_write(void *priv, unsigned int offset, void *val, static int tb_switch_nvm_write(void *priv, unsigned int offset, void *val,
size_t bytes) size_t bytes)
{ {
struct tb_switch *sw = priv; struct tb_nvm *nvm = priv;
int ret = 0; struct tb_switch *sw = tb_to_switch(nvm->dev);
int ret;
if (!mutex_trylock(&sw->tb->lock)) if (!mutex_trylock(&sw->tb->lock))
return restart_syscall(); return restart_syscall();
...@@ -363,55 +365,15 @@ static int tb_switch_nvm_write(void *priv, unsigned int offset, void *val, ...@@ -363,55 +365,15 @@ static int tb_switch_nvm_write(void *priv, unsigned int offset, void *val,
* locally here and handle the special cases when the user asks * locally here and handle the special cases when the user asks
* us to authenticate the image. * us to authenticate the image.
*/ */
if (!sw->nvm->buf) { ret = tb_nvm_write_buf(nvm, offset, val, bytes);
sw->nvm->buf = vmalloc(NVM_MAX_SIZE);
if (!sw->nvm->buf) {
ret = -ENOMEM;
goto unlock;
}
}
sw->nvm->buf_data_size = offset + bytes;
memcpy(sw->nvm->buf + offset, val, bytes);
unlock:
mutex_unlock(&sw->tb->lock); mutex_unlock(&sw->tb->lock);
return ret; return ret;
} }
static struct nvmem_device *register_nvmem(struct tb_switch *sw, int id,
size_t size, bool active)
{
struct nvmem_config config;
memset(&config, 0, sizeof(config));
if (active) {
config.name = "nvm_active";
config.reg_read = tb_switch_nvm_read;
config.read_only = true;
} else {
config.name = "nvm_non_active";
config.reg_write = tb_switch_nvm_write;
config.root_only = true;
}
config.id = id;
config.stride = 4;
config.word_size = 4;
config.size = size;
config.dev = &sw->dev;
config.owner = THIS_MODULE;
config.priv = sw;
return nvmem_register(&config);
}
static int tb_switch_nvm_add(struct tb_switch *sw) static int tb_switch_nvm_add(struct tb_switch *sw)
{ {
struct nvmem_device *nvm_dev; struct tb_nvm *nvm;
struct tb_switch_nvm *nvm;
u32 val; u32 val;
int ret; int ret;
...@@ -423,18 +385,17 @@ static int tb_switch_nvm_add(struct tb_switch *sw) ...@@ -423,18 +385,17 @@ static int tb_switch_nvm_add(struct tb_switch *sw)
* currently restrict NVM upgrade for Intel hardware. We may * currently restrict NVM upgrade for Intel hardware. We may
* relax this in the future when we learn other NVM formats. * relax this in the future when we learn other NVM formats.
*/ */
if (sw->config.vendor_id != PCI_VENDOR_ID_INTEL) { if (sw->config.vendor_id != PCI_VENDOR_ID_INTEL &&
sw->config.vendor_id != 0x8087) {
dev_info(&sw->dev, dev_info(&sw->dev,
"NVM format of vendor %#x is not known, disabling NVM upgrade\n", "NVM format of vendor %#x is not known, disabling NVM upgrade\n",
sw->config.vendor_id); sw->config.vendor_id);
return 0; return 0;
} }
nvm = kzalloc(sizeof(*nvm), GFP_KERNEL); nvm = tb_nvm_alloc(&sw->dev);
if (!nvm) if (IS_ERR(nvm))
return -ENOMEM; return PTR_ERR(nvm);
nvm->id = ida_simple_get(&nvm_ida, 0, 0, GFP_KERNEL);
/* /*
* If the switch is in safe-mode the only accessible portion of * If the switch is in safe-mode the only accessible portion of
...@@ -446,7 +407,7 @@ static int tb_switch_nvm_add(struct tb_switch *sw) ...@@ -446,7 +407,7 @@ static int tb_switch_nvm_add(struct tb_switch *sw)
ret = nvm_read(sw, NVM_FLASH_SIZE, &val, sizeof(val)); ret = nvm_read(sw, NVM_FLASH_SIZE, &val, sizeof(val));
if (ret) if (ret)
goto err_ida; goto err_nvm;
hdr_size = sw->generation < 3 ? SZ_8K : SZ_16K; hdr_size = sw->generation < 3 ? SZ_8K : SZ_16K;
nvm_size = (SZ_1M << (val & 7)) / 8; nvm_size = (SZ_1M << (val & 7)) / 8;
...@@ -454,44 +415,34 @@ static int tb_switch_nvm_add(struct tb_switch *sw) ...@@ -454,44 +415,34 @@ static int tb_switch_nvm_add(struct tb_switch *sw)
ret = nvm_read(sw, NVM_VERSION, &val, sizeof(val)); ret = nvm_read(sw, NVM_VERSION, &val, sizeof(val));
if (ret) if (ret)
goto err_ida; goto err_nvm;
nvm->major = val >> 16; nvm->major = val >> 16;
nvm->minor = val >> 8; nvm->minor = val >> 8;
nvm_dev = register_nvmem(sw, nvm->id, nvm_size, true); ret = tb_nvm_add_active(nvm, nvm_size, tb_switch_nvm_read);
if (IS_ERR(nvm_dev)) { if (ret)
ret = PTR_ERR(nvm_dev); goto err_nvm;
goto err_ida;
}
nvm->active = nvm_dev;
} }
if (!sw->no_nvm_upgrade) { if (!sw->no_nvm_upgrade) {
nvm_dev = register_nvmem(sw, nvm->id, NVM_MAX_SIZE, false); ret = tb_nvm_add_non_active(nvm, NVM_MAX_SIZE,
if (IS_ERR(nvm_dev)) { tb_switch_nvm_write);
ret = PTR_ERR(nvm_dev); if (ret)
goto err_nvm_active; goto err_nvm;
}
nvm->non_active = nvm_dev;
} }
sw->nvm = nvm; sw->nvm = nvm;
return 0; return 0;
err_nvm_active: err_nvm:
if (nvm->active) tb_nvm_free(nvm);
nvmem_unregister(nvm->active);
err_ida:
ida_simple_remove(&nvm_ida, nvm->id);
kfree(nvm);
return ret; return ret;
} }
static void tb_switch_nvm_remove(struct tb_switch *sw) static void tb_switch_nvm_remove(struct tb_switch *sw)
{ {
struct tb_switch_nvm *nvm; struct tb_nvm *nvm;
nvm = sw->nvm; nvm = sw->nvm;
sw->nvm = NULL; sw->nvm = NULL;
...@@ -503,13 +454,7 @@ static void tb_switch_nvm_remove(struct tb_switch *sw) ...@@ -503,13 +454,7 @@ static void tb_switch_nvm_remove(struct tb_switch *sw)
if (!nvm->authenticating) if (!nvm->authenticating)
nvm_clear_auth_status(sw); nvm_clear_auth_status(sw);
if (nvm->non_active) tb_nvm_free(nvm);
nvmem_unregister(nvm->non_active);
if (nvm->active)
nvmem_unregister(nvm->active);
ida_simple_remove(&nvm_ida, nvm->id);
vfree(nvm->buf);
kfree(nvm);
} }
/* port utility functions */ /* port utility functions */
...@@ -789,8 +734,11 @@ static int tb_port_alloc_hopid(struct tb_port *port, bool in, int min_hopid, ...@@ -789,8 +734,11 @@ static int tb_port_alloc_hopid(struct tb_port *port, bool in, int min_hopid,
ida = &port->out_hopids; ida = &port->out_hopids;
} }
/* HopIDs 0-7 are reserved */ /*
if (min_hopid < TB_PATH_MIN_HOPID) * NHI can use HopIDs 1-max for other adapters HopIDs 0-7 are
* reserved.
*/
if (port->config.type != TB_TYPE_NHI && min_hopid < TB_PATH_MIN_HOPID)
min_hopid = TB_PATH_MIN_HOPID; min_hopid = TB_PATH_MIN_HOPID;
if (max_hopid < 0 || max_hopid > port_max_hopid) if (max_hopid < 0 || max_hopid > port_max_hopid)
...@@ -847,6 +795,13 @@ void tb_port_release_out_hopid(struct tb_port *port, int hopid) ...@@ -847,6 +795,13 @@ void tb_port_release_out_hopid(struct tb_port *port, int hopid)
ida_simple_remove(&port->out_hopids, hopid); ida_simple_remove(&port->out_hopids, hopid);
} }
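/*
 * Route strings store one byte per depth level (the downstream port number
 * of each hop), so masking with (1ULL << depth * 8) - 1 keeps the part of
 * the route up to @parent's depth. The masked routes are equal when @sw is
 * @parent itself or lies in the subtree below it.
 */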
static inline bool tb_switch_is_reachable(const struct tb_switch *parent,
const struct tb_switch *sw)
{
u64 mask = (1ULL << parent->config.depth * 8) - 1;
return (tb_route(parent) & mask) == (tb_route(sw) & mask);
}
/** /**
* tb_next_port_on_path() - Return next port for given port on a path * tb_next_port_on_path() - Return next port for given port on a path
* @start: Start port of the walk * @start: Start port of the walk
...@@ -876,12 +831,12 @@ struct tb_port *tb_next_port_on_path(struct tb_port *start, struct tb_port *end, ...@@ -876,12 +831,12 @@ struct tb_port *tb_next_port_on_path(struct tb_port *start, struct tb_port *end,
return end; return end;
} }
if (start->sw->config.depth < end->sw->config.depth) { if (tb_switch_is_reachable(prev->sw, end->sw)) {
next = tb_port_at(tb_route(end->sw), prev->sw);
/* Walk down the topology if next == prev */
if (prev->remote && if (prev->remote &&
prev->remote->sw->config.depth > prev->sw->config.depth) (next == prev || next->dual_link_port == prev))
next = prev->remote; next = prev->remote;
else
next = tb_port_at(tb_route(end->sw), prev->sw);
} else { } else {
if (tb_is_upstream_port(prev)) { if (tb_is_upstream_port(prev)) {
next = prev->remote; next = prev->remote;
...@@ -898,10 +853,16 @@ struct tb_port *tb_next_port_on_path(struct tb_port *start, struct tb_port *end, ...@@ -898,10 +853,16 @@ struct tb_port *tb_next_port_on_path(struct tb_port *start, struct tb_port *end,
} }
} }
return next; return next != prev ? next : NULL;
} }
static int tb_port_get_link_speed(struct tb_port *port) /**
* tb_port_get_link_speed() - Get current link speed
* @port: Port to check (USB4 or CIO)
*
* Returns link speed in Gb/s or negative errno in case of failure.
*/
int tb_port_get_link_speed(struct tb_port *port)
{ {
u32 val, speed; u32 val, speed;
int ret; int ret;
...@@ -1532,11 +1493,11 @@ static ssize_t nvm_authenticate_show(struct device *dev, ...@@ -1532,11 +1493,11 @@ static ssize_t nvm_authenticate_show(struct device *dev,
return sprintf(buf, "%#x\n", status); return sprintf(buf, "%#x\n", status);
} }
static ssize_t nvm_authenticate_store(struct device *dev, static ssize_t nvm_authenticate_sysfs(struct device *dev, const char *buf,
struct device_attribute *attr, const char *buf, size_t count) bool disconnect)
{ {
struct tb_switch *sw = tb_to_switch(dev); struct tb_switch *sw = tb_to_switch(dev);
bool val; int val;
int ret; int ret;
pm_runtime_get_sync(&sw->dev); pm_runtime_get_sync(&sw->dev);
...@@ -1552,26 +1513,33 @@ static ssize_t nvm_authenticate_store(struct device *dev, ...@@ -1552,26 +1513,33 @@ static ssize_t nvm_authenticate_store(struct device *dev,
goto exit_unlock; goto exit_unlock;
} }
ret = kstrtobool(buf, &val); ret = kstrtoint(buf, 10, &val);
if (ret) if (ret)
goto exit_unlock; goto exit_unlock;
/* Always clear the authentication status */ /* Always clear the authentication status */
nvm_clear_auth_status(sw); nvm_clear_auth_status(sw);
if (val) { if (val > 0) {
if (!sw->nvm->flushed) {
if (!sw->nvm->buf) { if (!sw->nvm->buf) {
ret = -EINVAL; ret = -EINVAL;
goto exit_unlock; goto exit_unlock;
} }
ret = nvm_validate_and_write(sw); ret = nvm_validate_and_write(sw);
if (ret) if (ret || val == WRITE_ONLY)
goto exit_unlock; goto exit_unlock;
}
if (val == WRITE_AND_AUTHENTICATE) {
if (disconnect) {
ret = tb_lc_force_power(sw);
} else {
sw->nvm->authenticating = true; sw->nvm->authenticating = true;
ret = nvm_authenticate(sw); ret = nvm_authenticate(sw);
} }
}
}
exit_unlock: exit_unlock:
mutex_unlock(&sw->tb->lock); mutex_unlock(&sw->tb->lock);
...@@ -1579,12 +1547,35 @@ static ssize_t nvm_authenticate_store(struct device *dev, ...@@ -1579,12 +1547,35 @@ static ssize_t nvm_authenticate_store(struct device *dev,
pm_runtime_mark_last_busy(&sw->dev); pm_runtime_mark_last_busy(&sw->dev);
pm_runtime_put_autosuspend(&sw->dev); pm_runtime_put_autosuspend(&sw->dev);
return ret;
}
static ssize_t nvm_authenticate_store(struct device *dev,
struct device_attribute *attr, const char *buf, size_t count)
{
int ret = nvm_authenticate_sysfs(dev, buf, false);
if (ret) if (ret)
return ret; return ret;
return count; return count;
} }
static DEVICE_ATTR_RW(nvm_authenticate); static DEVICE_ATTR_RW(nvm_authenticate);
static ssize_t nvm_authenticate_on_disconnect_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
return nvm_authenticate_show(dev, attr, buf);
}
static ssize_t nvm_authenticate_on_disconnect_store(struct device *dev,
struct device_attribute *attr, const char *buf, size_t count)
{
int ret;
ret = nvm_authenticate_sysfs(dev, buf, true);
return ret ? ret : count;
}
static DEVICE_ATTR_RW(nvm_authenticate_on_disconnect);
static ssize_t nvm_version_show(struct device *dev, static ssize_t nvm_version_show(struct device *dev,
struct device_attribute *attr, char *buf) struct device_attribute *attr, char *buf)
{ {
...@@ -1642,6 +1633,7 @@ static struct attribute *switch_attrs[] = { ...@@ -1642,6 +1633,7 @@ static struct attribute *switch_attrs[] = {
&dev_attr_generation.attr, &dev_attr_generation.attr,
&dev_attr_key.attr, &dev_attr_key.attr,
&dev_attr_nvm_authenticate.attr, &dev_attr_nvm_authenticate.attr,
&dev_attr_nvm_authenticate_on_disconnect.attr,
&dev_attr_nvm_version.attr, &dev_attr_nvm_version.attr,
&dev_attr_rx_speed.attr, &dev_attr_rx_speed.attr,
&dev_attr_rx_lanes.attr, &dev_attr_rx_lanes.attr,
...@@ -1696,6 +1688,10 @@ static umode_t switch_attr_is_visible(struct kobject *kobj, ...@@ -1696,6 +1688,10 @@ static umode_t switch_attr_is_visible(struct kobject *kobj,
if (tb_route(sw)) if (tb_route(sw))
return attr->mode; return attr->mode;
return 0; return 0;
} else if (attr == &dev_attr_nvm_authenticate_on_disconnect.attr) {
if (sw->quirks & QUIRK_FORCE_POWER_LINK_CONTROLLER)
return attr->mode;
return 0;
} }
return sw->safe_mode ? 0 : attr->mode; return sw->safe_mode ? 0 : attr->mode;
...@@ -2440,6 +2436,9 @@ void tb_switch_remove(struct tb_switch *sw) ...@@ -2440,6 +2436,9 @@ void tb_switch_remove(struct tb_switch *sw)
tb_xdomain_remove(port->xdomain); tb_xdomain_remove(port->xdomain);
port->xdomain = NULL; port->xdomain = NULL;
} }
/* Remove any downstream retimers */
tb_retimer_remove_all(port);
} }
if (!sw->is_unplugged) if (!sw->is_unplugged)
...@@ -2755,8 +2754,3 @@ struct tb_port *tb_switch_find_port(struct tb_switch *sw, ...@@ -2755,8 +2754,3 @@ struct tb_port *tb_switch_find_port(struct tb_switch *sw,
return NULL; return NULL;
} }
void tb_switch_exit(void)
{
ida_destroy(&nvm_ida);
}
...@@ -211,22 +211,192 @@ static struct tb_port *tb_find_usb3_down(struct tb_switch *sw, ...@@ -211,22 +211,192 @@ static struct tb_port *tb_find_usb3_down(struct tb_switch *sw,
struct tb_port *down; struct tb_port *down;
down = usb4_switch_map_usb3_down(sw, port); down = usb4_switch_map_usb3_down(sw, port);
if (down) { if (down && !tb_usb3_port_is_enabled(down))
if (WARN_ON(!tb_port_is_usb3_down(down)))
goto out;
if (WARN_ON(tb_usb3_port_is_enabled(down)))
goto out;
return down; return down;
return NULL;
}
static struct tb_tunnel *tb_find_tunnel(struct tb *tb, enum tb_tunnel_type type,
struct tb_port *src_port,
struct tb_port *dst_port)
{
struct tb_cm *tcm = tb_priv(tb);
struct tb_tunnel *tunnel;
list_for_each_entry(tunnel, &tcm->tunnel_list, list) {
if (tunnel->type == type &&
((src_port && src_port == tunnel->src_port) ||
(dst_port && dst_port == tunnel->dst_port))) {
return tunnel;
}
} }
out: return NULL;
return tb_find_unused_port(sw, TB_TYPE_USB3_DOWN); }
static struct tb_tunnel *tb_find_first_usb3_tunnel(struct tb *tb,
struct tb_port *src_port,
struct tb_port *dst_port)
{
struct tb_port *port, *usb3_down;
struct tb_switch *sw;
/* Pick the router that is deepest in the topology */
if (dst_port->sw->config.depth > src_port->sw->config.depth)
sw = dst_port->sw;
else
sw = src_port->sw;
/* Can't be the host router */
if (sw == tb->root_switch)
return NULL;
/* Find the downstream USB4 port that leads to this router */
port = tb_port_at(tb_route(sw), tb->root_switch);
/* Find the corresponding host router USB3 downstream port */
usb3_down = usb4_switch_map_usb3_down(tb->root_switch, port);
if (!usb3_down)
return NULL;
return tb_find_tunnel(tb, TB_TUNNEL_USB3, usb3_down, NULL);
}
static int tb_available_bandwidth(struct tb *tb, struct tb_port *src_port,
struct tb_port *dst_port, int *available_up, int *available_down)
{
int usb3_consumed_up, usb3_consumed_down, ret;
struct tb_cm *tcm = tb_priv(tb);
struct tb_tunnel *tunnel;
struct tb_port *port;
tb_port_dbg(dst_port, "calculating available bandwidth\n");
tunnel = tb_find_first_usb3_tunnel(tb, src_port, dst_port);
if (tunnel) {
ret = tb_tunnel_consumed_bandwidth(tunnel, &usb3_consumed_up,
&usb3_consumed_down);
if (ret)
return ret;
} else {
usb3_consumed_up = 0;
usb3_consumed_down = 0;
}
*available_up = *available_down = 40000;
/* Find the minimum available bandwidth over all links */
tb_for_each_port_on_path(src_port, dst_port, port) {
int link_speed, link_width, up_bw, down_bw;
if (!tb_port_is_null(port))
continue;
if (tb_is_upstream_port(port)) {
link_speed = port->sw->link_speed;
} else {
link_speed = tb_port_get_link_speed(port);
if (link_speed < 0)
return link_speed;
}
link_width = port->bonded ? 2 : 1;
up_bw = link_speed * link_width * 1000; /* Mb/s */
/* Leave 10% guard band */
up_bw -= up_bw / 10;
down_bw = up_bw;
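/*
 * e.g. a bonded 20 Gb/s link gives 20 * 2 * 1000 = 40000 Mb/s and the
 * 10% guard band leaves 36000 Mb/s usable in each direction.
 */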
tb_port_dbg(port, "link total bandwidth %d Mb/s\n", up_bw);
/*
* Find all DP tunnels that cross the port and reduce
* their consumed bandwidth from the available.
*/
list_for_each_entry(tunnel, &tcm->tunnel_list, list) {
int dp_consumed_up, dp_consumed_down;
if (!tb_tunnel_is_dp(tunnel))
continue;
if (!tb_tunnel_port_on_path(tunnel, port))
continue;
ret = tb_tunnel_consumed_bandwidth(tunnel,
&dp_consumed_up,
&dp_consumed_down);
if (ret)
return ret;
up_bw -= dp_consumed_up;
down_bw -= dp_consumed_down;
}
/*
* If USB3 is tunneled from the host router down to the
* branch leading to the port we need to take USB3 consumed
* bandwidth into account regardless of whether it actually
* crosses the port.
*/
up_bw -= usb3_consumed_up;
down_bw -= usb3_consumed_down;
if (up_bw < *available_up)
*available_up = up_bw;
if (down_bw < *available_down)
*available_down = down_bw;
}
if (*available_up < 0)
*available_up = 0;
if (*available_down < 0)
*available_down = 0;
return 0;
}
static int tb_release_unused_usb3_bandwidth(struct tb *tb,
struct tb_port *src_port,
struct tb_port *dst_port)
{
struct tb_tunnel *tunnel;
tunnel = tb_find_first_usb3_tunnel(tb, src_port, dst_port);
return tunnel ? tb_tunnel_release_unused_bandwidth(tunnel) : 0;
}
static void tb_reclaim_usb3_bandwidth(struct tb *tb, struct tb_port *src_port,
struct tb_port *dst_port)
{
int ret, available_up, available_down;
struct tb_tunnel *tunnel;
tunnel = tb_find_first_usb3_tunnel(tb, src_port, dst_port);
if (!tunnel)
return;
tb_dbg(tb, "reclaiming unused bandwidth for USB3\n");
/*
* Calculate available bandwidth for the first hop USB3 tunnel.
* That determines the whole USB3 bandwidth for this branch.
*/
ret = tb_available_bandwidth(tb, tunnel->src_port, tunnel->dst_port,
&available_up, &available_down);
if (ret) {
tb_warn(tb, "failed to calculate available bandwidth\n");
return;
}
tb_dbg(tb, "available bandwidth for USB3 %d/%d Mb/s\n",
available_up, available_down);
tb_tunnel_reclaim_available_bandwidth(tunnel, &available_up, &available_down);
} }
static int tb_tunnel_usb3(struct tb *tb, struct tb_switch *sw) static int tb_tunnel_usb3(struct tb *tb, struct tb_switch *sw)
{ {
struct tb_switch *parent = tb_switch_parent(sw); struct tb_switch *parent = tb_switch_parent(sw);
int ret, available_up, available_down;
struct tb_port *up, *down, *port; struct tb_port *up, *down, *port;
struct tb_cm *tcm = tb_priv(tb); struct tb_cm *tcm = tb_priv(tb);
struct tb_tunnel *tunnel; struct tb_tunnel *tunnel;
...@@ -235,6 +405,9 @@ static int tb_tunnel_usb3(struct tb *tb, struct tb_switch *sw) ...@@ -235,6 +405,9 @@ static int tb_tunnel_usb3(struct tb *tb, struct tb_switch *sw)
if (!up) if (!up)
return 0; return 0;
if (!sw->link_usb4)
return 0;
/* /*
* Look up available down port. Since we are chaining it should * Look up available down port. Since we are chaining it should
* be found right above this switch. * be found right above this switch.
...@@ -254,21 +427,48 @@ static int tb_tunnel_usb3(struct tb *tb, struct tb_switch *sw) ...@@ -254,21 +427,48 @@ static int tb_tunnel_usb3(struct tb *tb, struct tb_switch *sw)
parent_up = tb_switch_find_port(parent, TB_TYPE_USB3_UP); parent_up = tb_switch_find_port(parent, TB_TYPE_USB3_UP);
if (!parent_up || !tb_port_is_enabled(parent_up)) if (!parent_up || !tb_port_is_enabled(parent_up))
return 0; return 0;
/* Make all unused bandwidth available for the new tunnel */
ret = tb_release_unused_usb3_bandwidth(tb, down, up);
if (ret)
return ret;
} }
tunnel = tb_tunnel_alloc_usb3(tb, up, down); ret = tb_available_bandwidth(tb, down, up, &available_up,
if (!tunnel) &available_down);
return -ENOMEM; if (ret)
goto err_reclaim;
tb_port_dbg(up, "available bandwidth for new USB3 tunnel %d/%d Mb/s\n",
available_up, available_down);
tunnel = tb_tunnel_alloc_usb3(tb, up, down, available_up,
available_down);
if (!tunnel) {
ret = -ENOMEM;
goto err_reclaim;
}
if (tb_tunnel_activate(tunnel)) { if (tb_tunnel_activate(tunnel)) {
tb_port_info(up, tb_port_info(up,
"USB3 tunnel activation failed, aborting\n"); "USB3 tunnel activation failed, aborting\n");
tb_tunnel_free(tunnel); ret = -EIO;
return -EIO; goto err_free;
} }
list_add_tail(&tunnel->list, &tcm->tunnel_list); list_add_tail(&tunnel->list, &tcm->tunnel_list);
if (tb_route(parent))
tb_reclaim_usb3_bandwidth(tb, down, up);
return 0; return 0;
err_free:
tb_tunnel_free(tunnel);
err_reclaim:
if (tb_route(parent))
tb_reclaim_usb3_bandwidth(tb, down, up);
return ret;
} }
static int tb_create_usb3_tunnels(struct tb_switch *sw) static int tb_create_usb3_tunnels(struct tb_switch *sw)
...@@ -339,6 +539,9 @@ static void tb_scan_port(struct tb_port *port) ...@@ -339,6 +539,9 @@ static void tb_scan_port(struct tb_port *port)
tb_port_dbg(port, "port already has a remote\n"); tb_port_dbg(port, "port already has a remote\n");
return; return;
} }
tb_retimer_scan(port);
sw = tb_switch_alloc(port->sw->tb, &port->sw->dev, sw = tb_switch_alloc(port->sw->tb, &port->sw->dev,
tb_downstream_route(port)); tb_downstream_route(port));
if (IS_ERR(sw)) { if (IS_ERR(sw)) {
...@@ -395,6 +598,9 @@ static void tb_scan_port(struct tb_port *port) ...@@ -395,6 +598,9 @@ static void tb_scan_port(struct tb_port *port)
if (tb_enable_tmu(sw)) if (tb_enable_tmu(sw))
tb_sw_warn(sw, "failed to enable TMU\n"); tb_sw_warn(sw, "failed to enable TMU\n");
/* Scan upstream retimers */
tb_retimer_scan(upstream_port);
/* /*
* Create USB 3.x tunnels only when the switch is plugged to the * Create USB 3.x tunnels only when the switch is plugged to the
* domain. This is because we scan the domain also during discovery * domain. This is because we scan the domain also during discovery
...@@ -404,43 +610,44 @@ static void tb_scan_port(struct tb_port *port) ...@@ -404,43 +610,44 @@ static void tb_scan_port(struct tb_port *port)
if (tcm->hotplug_active && tb_tunnel_usb3(sw->tb, sw)) if (tcm->hotplug_active && tb_tunnel_usb3(sw->tb, sw))
tb_sw_warn(sw, "USB3 tunnel creation failed\n"); tb_sw_warn(sw, "USB3 tunnel creation failed\n");
tb_add_dp_resources(sw);
tb_scan_switch(sw); tb_scan_switch(sw);
} }
static struct tb_tunnel *tb_find_tunnel(struct tb *tb, enum tb_tunnel_type type,
struct tb_port *src_port,
struct tb_port *dst_port)
{
struct tb_cm *tcm = tb_priv(tb);
struct tb_tunnel *tunnel;
list_for_each_entry(tunnel, &tcm->tunnel_list, list) {
if (tunnel->type == type &&
((src_port && src_port == tunnel->src_port) ||
(dst_port && dst_port == tunnel->dst_port))) {
return tunnel;
}
}
return NULL;
}
static void tb_deactivate_and_free_tunnel(struct tb_tunnel *tunnel)
{
+	struct tb_port *src_port, *dst_port;
+	struct tb *tb;
+
	if (!tunnel)
		return;

	tb_tunnel_deactivate(tunnel);
	list_del(&tunnel->list);

+	tb = tunnel->tb;
+	src_port = tunnel->src_port;
+	dst_port = tunnel->dst_port;
+
-	/*
-	 * In case of DP tunnel make sure the DP IN resource is deallocated
-	 * properly.
-	 */
-	if (tb_tunnel_is_dp(tunnel)) {
-		struct tb_port *in = tunnel->src_port;
-		tb_switch_dealloc_dp_resource(in->sw, in);
-	}
+	switch (tunnel->type) {
+	case TB_TUNNEL_DP:
+		/*
+		 * In case of DP tunnel make sure the DP IN resource is
+		 * deallocated properly.
+		 */
+		tb_switch_dealloc_dp_resource(src_port->sw, src_port);
+		fallthrough;
+
+	case TB_TUNNEL_USB3:
+		tb_reclaim_usb3_bandwidth(tb, src_port, dst_port);
+		break;
+
+	default:
+		/*
+		 * PCIe and DMA tunnels do not consume guaranteed
+		 * bandwidth.
+		 */
+		break;
+	}

	tb_tunnel_free(tunnel);
...@@ -473,6 +680,7 @@ static void tb_free_unplugged_children(struct tb_switch *sw) ...@@ -473,6 +680,7 @@ static void tb_free_unplugged_children(struct tb_switch *sw)
continue; continue;
if (port->remote->sw->is_unplugged) { if (port->remote->sw->is_unplugged) {
tb_retimer_remove_all(port);
tb_remove_dp_resources(port->remote->sw); tb_remove_dp_resources(port->remote->sw);
tb_switch_lane_bonding_disable(port->remote->sw); tb_switch_lane_bonding_disable(port->remote->sw);
tb_switch_remove(port->remote->sw); tb_switch_remove(port->remote->sw);
...@@ -524,7 +732,7 @@ static struct tb_port *tb_find_pcie_down(struct tb_switch *sw, ...@@ -524,7 +732,7 @@ static struct tb_port *tb_find_pcie_down(struct tb_switch *sw,
if (down) { if (down) {
if (WARN_ON(!tb_port_is_pcie_down(down))) if (WARN_ON(!tb_port_is_pcie_down(down)))
goto out; goto out;
-		if (WARN_ON(tb_pci_port_is_enabled(down)))
+		if (tb_pci_port_is_enabled(down))
goto out; goto out;
return down; return down;
...@@ -534,51 +742,49 @@ static struct tb_port *tb_find_pcie_down(struct tb_switch *sw, ...@@ -534,51 +742,49 @@ static struct tb_port *tb_find_pcie_down(struct tb_switch *sw,
return tb_find_unused_port(sw, TB_TYPE_PCIE_DOWN); return tb_find_unused_port(sw, TB_TYPE_PCIE_DOWN);
} }
-static int tb_available_bw(struct tb_cm *tcm, struct tb_port *in,
-			   struct tb_port *out)
-{
-	struct tb_switch *sw = out->sw;
-	struct tb_tunnel *tunnel;
-	int bw, available_bw = 40000;
-
-	while (sw && sw != in->sw) {
-		bw = sw->link_speed * sw->link_width * 1000; /* Mb/s */
-		/* Leave 10% guard band */
-		bw -= bw / 10;
-
-		/*
-		 * Check for any active DP tunnels that go through this
-		 * switch and reduce their consumed bandwidth from
-		 * available.
-		 */
-		list_for_each_entry(tunnel, &tcm->tunnel_list, list) {
-			int consumed_bw;
-
-			if (!tb_tunnel_switch_on_path(tunnel, sw))
-				continue;
-
-			consumed_bw = tb_tunnel_consumed_bandwidth(tunnel);
-			if (consumed_bw < 0)
-				return consumed_bw;
-
-			bw -= consumed_bw;
-		}
-
-		if (bw < available_bw)
-			available_bw = bw;
-
-		sw = tb_switch_parent(sw);
-	}
-
-	return available_bw;
-}
+static struct tb_port *tb_find_dp_out(struct tb *tb, struct tb_port *in)
+{
+	struct tb_port *host_port, *port;
+	struct tb_cm *tcm = tb_priv(tb);
+
+	host_port = tb_route(in->sw) ?
+		tb_port_at(tb_route(in->sw), tb->root_switch) : NULL;
+
+	list_for_each_entry(port, &tcm->dp_resources, list) {
+		if (!tb_port_is_dpout(port))
+			continue;
+
+		if (tb_port_is_enabled(port)) {
+			tb_port_dbg(port, "in use\n");
+			continue;
+		}
+
+		tb_port_dbg(port, "DP OUT available\n");
+
+		/*
+		 * Keep the DP tunnel under the topology starting from
+		 * the same host router downstream port.
+		 */
+		if (host_port && tb_route(port->sw)) {
+			struct tb_port *p;
+
+			p = tb_port_at(tb_route(port->sw), tb->root_switch);
+			if (p != host_port)
+				continue;
+		}
+
+		return port;
+	}
+
+	return NULL;
+}
static void tb_tunnel_dp(struct tb *tb)
{
+	int available_up, available_down, ret;
	struct tb_cm *tcm = tb_priv(tb);
	struct tb_port *port, *in, *out;
	struct tb_tunnel *tunnel;
-	int available_bw;

	/*
	 * Find pair of inactive DP IN and DP OUT adapters and then
@@ -589,17 +795,21 @@ static void tb_tunnel_dp(struct tb *tb)
	in = NULL;
	out = NULL;
	list_for_each_entry(port, &tcm->dp_resources, list) {
+		if (!tb_port_is_dpin(port))
+			continue;
+
		if (tb_port_is_enabled(port)) {
			tb_port_dbg(port, "in use\n");
			continue;
		}

-		tb_port_dbg(port, "available\n");
-		if (!in && tb_port_is_dpin(port))
-			in = port;
-		else if (!out && tb_port_is_dpout(port))
-			out = port;
+		tb_port_dbg(port, "DP IN available\n");
+
+		out = tb_find_dp_out(tb, port);
+		if (out) {
+			in = port;
+			break;
+		}
	}
	if (!in) {
@@ -616,32 +826,41 @@ static void tb_tunnel_dp(struct tb *tb)
		return;
	}

-	/* Calculate available bandwidth between in and out */
-	available_bw = tb_available_bw(tcm, in, out);
-	if (available_bw < 0) {
-		tb_warn(tb, "failed to determine available bandwidth\n");
-		return;
+	/* Make all unused USB3 bandwidth available for the new DP tunnel */
+	ret = tb_release_unused_usb3_bandwidth(tb, in, out);
+	if (ret) {
+		tb_warn(tb, "failed to release unused bandwidth\n");
+		goto err_dealloc_dp;
	}

-	tb_dbg(tb, "available bandwidth for new DP tunnel %u Mb/s\n",
-	       available_bw);
+	ret = tb_available_bandwidth(tb, in, out, &available_up,
+				     &available_down);
+	if (ret)
+		goto err_reclaim;
+
+	tb_dbg(tb, "available bandwidth for new DP tunnel %u/%u Mb/s\n",
+	       available_up, available_down);

-	tunnel = tb_tunnel_alloc_dp(tb, in, out, available_bw);
+	tunnel = tb_tunnel_alloc_dp(tb, in, out, available_up, available_down);
	if (!tunnel) {
		tb_port_dbg(out, "could not allocate DP tunnel\n");
-		goto dealloc_dp;
+		goto err_reclaim;
	}

	if (tb_tunnel_activate(tunnel)) {
		tb_port_info(out, "DP tunnel activation failed, aborting\n");
-		tb_tunnel_free(tunnel);
-		goto dealloc_dp;
+		goto err_free;
	}

	list_add_tail(&tunnel->list, &tcm->tunnel_list);
+	tb_reclaim_usb3_bandwidth(tb, in, out);
	return;

-dealloc_dp:
+err_free:
+	tb_tunnel_free(tunnel);
+err_reclaim:
+	tb_reclaim_usb3_bandwidth(tb, in, out);
+err_dealloc_dp:
	tb_switch_dealloc_dp_resource(in->sw, in);
}
...@@ -827,6 +1046,8 @@ static void tb_handle_hotplug(struct work_struct *work) ...@@ -827,6 +1046,8 @@ static void tb_handle_hotplug(struct work_struct *work)
goto put_sw; goto put_sw;
} }
if (ev->unplug) { if (ev->unplug) {
tb_retimer_remove_all(port);
if (tb_port_has_remote(port)) { if (tb_port_has_remote(port)) {
tb_port_dbg(port, "switch unplugged\n"); tb_port_dbg(port, "switch unplugged\n");
tb_sw_set_unplugged(port->remote->sw); tb_sw_set_unplugged(port->remote->sw);
...@@ -1071,6 +1292,7 @@ static int tb_free_unplugged_xdomains(struct tb_switch *sw) ...@@ -1071,6 +1292,7 @@ static int tb_free_unplugged_xdomains(struct tb_switch *sw)
if (tb_is_upstream_port(port)) if (tb_is_upstream_port(port))
continue; continue;
if (port->xdomain && port->xdomain->is_unplugged) { if (port->xdomain && port->xdomain->is_unplugged) {
tb_retimer_remove_all(port);
tb_xdomain_remove(port->xdomain); tb_xdomain_remove(port->xdomain);
port->xdomain = NULL; port->xdomain = NULL;
ret++; ret++;
......
@@ -18,8 +18,17 @@
#include "ctl.h"
#include "dma_port.h"

+#define NVM_MIN_SIZE		SZ_32K
+#define NVM_MAX_SIZE		SZ_512K
+
+/* Intel specific NVM offsets */
+#define NVM_DEVID		0x05
+#define NVM_VERSION		0x08
+#define NVM_FLASH_SIZE		0x45
+
/**
- * struct tb_switch_nvm - Structure holding switch NVM information
+ * struct tb_nvm - Structure holding NVM information
+ * @dev: Owner of the NVM
 * @major: Major version number of the active NVM portion
 * @minor: Minor version number of the active NVM portion
 * @id: Identifier used with both NVM portions
@@ -29,9 +38,14 @@
 *	    the actual NVM flash device
 * @buf_data_size: Number of bytes actually consumed by the new NVM
 *		   image
- * @authenticating: The switch is authenticating the new NVM
+ * @authenticating: The device is authenticating the new NVM
+ * @flushed: The image has been flushed to the storage area
+ *
+ * The user of this structure needs to handle serialization of possible
+ * concurrent access.
 */
-struct tb_switch_nvm {
+struct tb_nvm {
+	struct device *dev;
	u8 major;
	u8 minor;
	int id;
@@ -40,6 +54,7 @@ struct tb_switch_nvm {
	void *buf;
	size_t buf_data_size;
	bool authenticating;
+	bool flushed;
};
#define TB_SWITCH_KEY_SIZE 32 #define TB_SWITCH_KEY_SIZE 32
...@@ -97,6 +112,7 @@ struct tb_switch_tmu { ...@@ -97,6 +112,7 @@ struct tb_switch_tmu {
* @device_name: Name of the device (or %NULL if not known) * @device_name: Name of the device (or %NULL if not known)
* @link_speed: Speed of the link in Gb/s * @link_speed: Speed of the link in Gb/s
* @link_width: Width of the link (1 or 2) * @link_width: Width of the link (1 or 2)
* @link_usb4: Upstream link is USB4
* @generation: Switch Thunderbolt generation * @generation: Switch Thunderbolt generation
* @cap_plug_events: Offset to the plug events capability (%0 if not found) * @cap_plug_events: Offset to the plug events capability (%0 if not found)
* @cap_lc: Offset to the link controller capability (%0 if not found) * @cap_lc: Offset to the link controller capability (%0 if not found)
...@@ -117,6 +133,7 @@ struct tb_switch_tmu { ...@@ -117,6 +133,7 @@ struct tb_switch_tmu {
* @depth: Depth in the chain this switch is connected (ICM only) * @depth: Depth in the chain this switch is connected (ICM only)
* @rpm_complete: Completion used to wait for runtime resume to * @rpm_complete: Completion used to wait for runtime resume to
* complete (ICM only) * complete (ICM only)
* @quirks: Quirks used for this Thunderbolt switch
* *
* When the switch is being added or removed to the domain (other * When the switch is being added or removed to the domain (other
* switches) you need to have domain lock held. * switches) you need to have domain lock held.
...@@ -136,12 +153,13 @@ struct tb_switch { ...@@ -136,12 +153,13 @@ struct tb_switch {
const char *device_name; const char *device_name;
unsigned int link_speed; unsigned int link_speed;
unsigned int link_width; unsigned int link_width;
bool link_usb4;
unsigned int generation; unsigned int generation;
int cap_plug_events; int cap_plug_events;
int cap_lc; int cap_lc;
bool is_unplugged; bool is_unplugged;
u8 *drom; u8 *drom;
-	struct tb_switch_nvm *nvm;
+	struct tb_nvm *nvm;
bool no_nvm_upgrade; bool no_nvm_upgrade;
bool safe_mode; bool safe_mode;
bool boot; bool boot;
...@@ -154,6 +172,7 @@ struct tb_switch { ...@@ -154,6 +172,7 @@ struct tb_switch {
u8 link; u8 link;
u8 depth; u8 depth;
struct completion rpm_complete; struct completion rpm_complete;
unsigned long quirks;
}; };
/** /**
...@@ -195,6 +214,28 @@ struct tb_port { ...@@ -195,6 +214,28 @@ struct tb_port {
struct list_head list; struct list_head list;
}; };
/**
* tb_retimer: Thunderbolt retimer
* @dev: Device for the retimer
* @tb: Pointer to the domain the retimer belongs to
* @index: Retimer index facing the router USB4 port
* @vendor: Vendor ID of the retimer
* @device: Device ID of the retimer
* @port: Pointer to the lane 0 adapter
* @nvm: Pointer to the NVM if the retimer has one (%NULL otherwise)
* @auth_status: Status of last NVM authentication
*/
struct tb_retimer {
struct device dev;
struct tb *tb;
u8 index;
u32 vendor;
u32 device;
struct tb_port *port;
struct tb_nvm *nvm;
u32 auth_status;
};
/**
 * struct tb_path_hop - routing information for a tb_path
 * @in_port: Ingress port of a switch
@@ -286,7 +327,11 @@ struct tb_path {
/* HopIDs 0-7 are reserved by the Thunderbolt protocol */
#define TB_PATH_MIN_HOPID	8
-#define TB_PATH_MAX_HOPS	7
+/*
+ * Support paths from the farthest (depth 6) router to the host and back
+ * to the same level (not necessarily to the same router).
+ */
+#define TB_PATH_MAX_HOPS	(7 * 2)
/** /**
* struct tb_cm_ops - Connection manager specific operations vector * struct tb_cm_ops - Connection manager specific operations vector
...@@ -534,11 +579,11 @@ struct tb *icm_probe(struct tb_nhi *nhi); ...@@ -534,11 +579,11 @@ struct tb *icm_probe(struct tb_nhi *nhi);
struct tb *tb_probe(struct tb_nhi *nhi); struct tb *tb_probe(struct tb_nhi *nhi);
extern struct device_type tb_domain_type; extern struct device_type tb_domain_type;
extern struct device_type tb_retimer_type;
extern struct device_type tb_switch_type; extern struct device_type tb_switch_type;
int tb_domain_init(void); int tb_domain_init(void);
void tb_domain_exit(void); void tb_domain_exit(void);
void tb_switch_exit(void);
int tb_xdomain_init(void); int tb_xdomain_init(void);
void tb_xdomain_exit(void); void tb_xdomain_exit(void);
...@@ -571,6 +616,15 @@ static inline void tb_domain_put(struct tb *tb) ...@@ -571,6 +616,15 @@ static inline void tb_domain_put(struct tb *tb)
put_device(&tb->dev); put_device(&tb->dev);
} }
struct tb_nvm *tb_nvm_alloc(struct device *dev);
int tb_nvm_add_active(struct tb_nvm *nvm, size_t size, nvmem_reg_read_t reg_read);
int tb_nvm_write_buf(struct tb_nvm *nvm, unsigned int offset, void *val,
size_t bytes);
int tb_nvm_add_non_active(struct tb_nvm *nvm, size_t size,
nvmem_reg_write_t reg_write);
void tb_nvm_free(struct tb_nvm *nvm);
void tb_nvm_exit(void);
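/*
 * Illustrative sketch (not part of this patch): the intended flow for a
 * driver using the shared NVM helpers above is to allocate a tb_nvm and
 * then register the active portion read-only and the non-active portion
 * write-only. The caller, the callbacks and the ERR_PTR() return
 * convention are assumptions; only the tb_nvm_* helpers and NVM_MAX_SIZE
 * come from this series.
 */
static inline int example_register_nvm(struct device *dev, size_t active_size,
				       nvmem_reg_read_t reg_read,
				       nvmem_reg_write_t reg_write)
{
	struct tb_nvm *nvm;
	int ret;

	nvm = tb_nvm_alloc(dev);
	if (IS_ERR(nvm))
		return PTR_ERR(nvm);

	/* Expose the currently active NVM image for reading */
	ret = tb_nvm_add_active(nvm, active_size, reg_read);
	if (ret)
		goto err_free;

	/* Expose the non-active area that accepts the new image */
	ret = tb_nvm_add_non_active(nvm, NVM_MAX_SIZE, reg_write);
	if (ret)
		goto err_free;

	return 0;

err_free:
	tb_nvm_free(nvm);
	return ret;
}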
struct tb_switch *tb_switch_alloc(struct tb *tb, struct device *parent,
				  u64 route);
struct tb_switch *tb_switch_alloc_safe_mode(struct tb *tb,
@@ -741,6 +795,20 @@ void tb_port_release_out_hopid(struct tb_port *port, int hopid);
struct tb_port *tb_next_port_on_path(struct tb_port *start, struct tb_port *end,
				     struct tb_port *prev);
/**
* tb_for_each_port_on_path() - Iterate over each port on path
* @src: Source port
* @dst: Destination port
* @p: Port used as iterator
*
* Walks over each port on path from @src to @dst.
*/
#define tb_for_each_port_on_path(src, dst, p) \
for ((p) = tb_next_port_on_path((src), (dst), NULL); (p); \
(p) = tb_next_port_on_path((src), (dst), (p)))
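/*
 * Illustrative sketch (not part of this patch): walking every port
 * between two adapters with the iterator above. The function itself is
 * hypothetical; only the macro and tb_port_dbg() are from the driver.
 */
static inline void example_dump_ports_on_path(struct tb_port *src,
					      struct tb_port *dst)
{
	struct tb_port *p;

	tb_for_each_port_on_path(src, dst, p)
		tb_port_dbg(p, "on path\n");
}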
int tb_port_get_link_speed(struct tb_port *port);
int tb_switch_find_vse_cap(struct tb_switch *sw, enum tb_switch_vse_cap vsec);
int tb_switch_find_cap(struct tb_switch *sw, enum tb_switch_cap cap);
int tb_port_find_cap(struct tb_port *port, enum tb_port_cap cap);
@@ -769,8 +837,8 @@ void tb_path_free(struct tb_path *path);
int tb_path_activate(struct tb_path *path);
void tb_path_deactivate(struct tb_path *path);
bool tb_path_is_invalid(struct tb_path *path);
-bool tb_path_switch_on_path(const struct tb_path *path,
-			    const struct tb_switch *sw);
+bool tb_path_port_on_path(const struct tb_path *path,
+			  const struct tb_port *port);

int tb_drom_read(struct tb_switch *sw);
int tb_drom_read_uid_only(struct tb_switch *sw, u64 *uid);
@@ -783,6 +851,7 @@ bool tb_lc_lane_bonding_possible(struct tb_switch *sw);
bool tb_lc_dp_sink_query(struct tb_switch *sw, struct tb_port *in);
int tb_lc_dp_sink_alloc(struct tb_switch *sw, struct tb_port *in);
int tb_lc_dp_sink_dealloc(struct tb_switch *sw, struct tb_port *in);
+int tb_lc_force_power(struct tb_switch *sw);

static inline int tb_route_length(u64 route)
{
@@ -812,6 +881,21 @@ void tb_xdomain_remove(struct tb_xdomain *xd);
struct tb_xdomain *tb_xdomain_find_by_link_depth(struct tb *tb, u8 link,
						 u8 depth);
int tb_retimer_scan(struct tb_port *port);
void tb_retimer_remove_all(struct tb_port *port);
static inline bool tb_is_retimer(const struct device *dev)
{
return dev->type == &tb_retimer_type;
}
static inline struct tb_retimer *tb_to_retimer(struct device *dev)
{
if (tb_is_retimer(dev))
return container_of(dev, struct tb_retimer, dev);
return NULL;
}
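/*
 * Illustrative sketch (not part of this patch): a device-iteration
 * callback that only acts on retimer devices by using the helpers above.
 * The callback and its matching logic are hypothetical.
 */
static inline int example_match_retimer(struct device *dev, void *data)
{
	struct tb_retimer *rt = tb_to_retimer(dev);

	if (!rt)
		return 0;	/* not a retimer, keep iterating */

	return rt->port == data;
}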
int usb4_switch_setup(struct tb_switch *sw);
int usb4_switch_read_uid(struct tb_switch *sw, u64 *uid);
int usb4_switch_drom_read(struct tb_switch *sw, unsigned int address, void *buf,
@@ -835,4 +919,35 @@ struct tb_port *usb4_switch_map_usb3_down(struct tb_switch *sw,
					  const struct tb_port *port);
int usb4_port_unlock(struct tb_port *port);
int usb4_port_enumerate_retimers(struct tb_port *port);
int usb4_port_retimer_read(struct tb_port *port, u8 index, u8 reg, void *buf,
u8 size);
int usb4_port_retimer_write(struct tb_port *port, u8 index, u8 reg,
const void *buf, u8 size);
int usb4_port_retimer_is_last(struct tb_port *port, u8 index);
int usb4_port_retimer_nvm_sector_size(struct tb_port *port, u8 index);
int usb4_port_retimer_nvm_write(struct tb_port *port, u8 index,
unsigned int address, const void *buf,
size_t size);
int usb4_port_retimer_nvm_authenticate(struct tb_port *port, u8 index);
int usb4_port_retimer_nvm_authenticate_status(struct tb_port *port, u8 index,
u32 *status);
int usb4_port_retimer_nvm_read(struct tb_port *port, u8 index,
unsigned int address, void *buf, size_t size);
int usb4_usb3_port_max_link_rate(struct tb_port *port);
int usb4_usb3_port_actual_link_rate(struct tb_port *port);
int usb4_usb3_port_allocated_bandwidth(struct tb_port *port, int *upstream_bw,
int *downstream_bw);
int usb4_usb3_port_allocate_bandwidth(struct tb_port *port, int *upstream_bw,
int *downstream_bw);
int usb4_usb3_port_release_bandwidth(struct tb_port *port, int *upstream_bw,
int *downstream_bw);
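/*
 * Illustrative sketch (not part of this patch): reading back the USB3
 * bandwidth currently allocated for a USB4 port with the helper above.
 * The caller, the 0-on-success convention and the Mb/s unit are
 * assumptions made for the example.
 */
static inline void example_log_usb3_allocation(struct tb_port *port)
{
	int up_bw, down_bw;

	if (!usb4_usb3_port_allocated_bandwidth(port, &up_bw, &down_bw))
		tb_port_dbg(port, "allocated %d/%d Mb/s\n", up_bw, down_bw);
}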
/* keep link controller awake during update */
#define QUIRK_FORCE_POWER_LINK_CONTROLLER BIT(0)
void tb_check_quirks(struct tb_switch *sw);
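/*
 * Illustrative sketch (not part of this patch): how a quirk flag set by
 * tb_check_quirks() could be consumed later. The call site is
 * hypothetical; the flag, sw->quirks and tb_lc_force_power() are from
 * this series.
 */
static inline int example_force_power_if_quirked(struct tb_switch *sw)
{
	if (sw->quirks & QUIRK_FORCE_POWER_LINK_CONTROLLER)
		return tb_lc_force_power(sw);

	return 0;
}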
#endif
@@ -288,8 +288,19 @@ struct tb_regs_port_header {
#define LANE_ADP_CS_1_CURRENT_WIDTH_SHIFT	20

/* USB4 port registers */
#define PORT_CS_1 0x01
#define PORT_CS_1_LENGTH_SHIFT 8
#define PORT_CS_1_TARGET_MASK GENMASK(18, 16)
#define PORT_CS_1_TARGET_SHIFT 16
#define PORT_CS_1_RETIMER_INDEX_SHIFT 20
#define PORT_CS_1_WNR_WRITE BIT(24)
#define PORT_CS_1_NR BIT(25)
#define PORT_CS_1_RC BIT(26)
#define PORT_CS_1_PND BIT(31)
#define PORT_CS_2 0x02
#define PORT_CS_18 0x12 #define PORT_CS_18 0x12
#define PORT_CS_18_BE BIT(8) #define PORT_CS_18_BE BIT(8)
#define PORT_CS_18_TCM BIT(9)
#define PORT_CS_19 0x13 #define PORT_CS_19 0x13
#define PORT_CS_19_PC BIT(3) #define PORT_CS_19_PC BIT(3)
...@@ -337,6 +348,25 @@ struct tb_regs_port_header { ...@@ -337,6 +348,25 @@ struct tb_regs_port_header {
#define ADP_USB3_CS_0 0x00 #define ADP_USB3_CS_0 0x00
#define ADP_USB3_CS_0_V BIT(30) #define ADP_USB3_CS_0_V BIT(30)
#define ADP_USB3_CS_0_PE BIT(31) #define ADP_USB3_CS_0_PE BIT(31)
#define ADP_USB3_CS_1 0x01
#define ADP_USB3_CS_1_CUBW_MASK GENMASK(11, 0)
#define ADP_USB3_CS_1_CDBW_MASK GENMASK(23, 12)
#define ADP_USB3_CS_1_CDBW_SHIFT 12
#define ADP_USB3_CS_1_HCA BIT(31)
#define ADP_USB3_CS_2 0x02
#define ADP_USB3_CS_2_AUBW_MASK GENMASK(11, 0)
#define ADP_USB3_CS_2_ADBW_MASK GENMASK(23, 12)
#define ADP_USB3_CS_2_ADBW_SHIFT 12
#define ADP_USB3_CS_2_CMR BIT(31)
#define ADP_USB3_CS_3 0x03
#define ADP_USB3_CS_3_SCALE_MASK GENMASK(5, 0)
#define ADP_USB3_CS_4 0x04
#define ADP_USB3_CS_4_ALR_MASK GENMASK(6, 0)
#define ADP_USB3_CS_4_ALR_20G 0x1
#define ADP_USB3_CS_4_ULV BIT(7)
#define ADP_USB3_CS_4_MSLR_MASK GENMASK(18, 12)
#define ADP_USB3_CS_4_MSLR_SHIFT 12
#define ADP_USB3_CS_4_MSLR_20G 0x1
/* Hop register from TB_CFG_HOPS. 8 byte per entry. */ /* Hop register from TB_CFG_HOPS. 8 byte per entry. */
struct tb_regs_hop { struct tb_regs_hop {
...@@ -379,6 +409,7 @@ struct tb_regs_hop { ...@@ -379,6 +409,7 @@ struct tb_regs_hop {
#define TB_LC_SNK_ALLOCATION_SNK1_SHIFT 4 #define TB_LC_SNK_ALLOCATION_SNK1_SHIFT 4
#define TB_LC_SNK_ALLOCATION_SNK1_MASK GENMASK(7, 4) #define TB_LC_SNK_ALLOCATION_SNK1_MASK GENMASK(7, 4)
#define TB_LC_SNK_ALLOCATION_SNK1_CM 0x1 #define TB_LC_SNK_ALLOCATION_SNK1_CM 0x1
#define TB_LC_POWER 0x740
/* Link controller registers */ /* Link controller registers */
#define TB_LC_PORT_ATTR 0x8d #define TB_LC_PORT_ATTR 0x8d
......
// SPDX-License-Identifier: GPL-2.0
/*
* KUnit tests
*
* Copyright (C) 2020, Intel Corporation
* Author: Mika Westerberg <mika.westerberg@linux.intel.com>
*/
#include <kunit/test.h>
#include <linux/idr.h>
#include "tb.h"
#include "tunnel.h"
static int __ida_init(struct kunit_resource *res, void *context)
{
struct ida *ida = context;
ida_init(ida);
res->allocation = ida;
return 0;
}
static void __ida_destroy(struct kunit_resource *res)
{
struct ida *ida = res->allocation;
ida_destroy(ida);
}
static void kunit_ida_init(struct kunit *test, struct ida *ida)
{
kunit_alloc_resource(test, __ida_init, __ida_destroy, GFP_KERNEL, ida);
}
static struct tb_switch *alloc_switch(struct kunit *test, u64 route,
u8 upstream_port, u8 max_port_number)
{
struct tb_switch *sw;
size_t size;
int i;
sw = kunit_kzalloc(test, sizeof(*sw), GFP_KERNEL);
if (!sw)
return NULL;
sw->config.upstream_port_number = upstream_port;
sw->config.depth = tb_route_length(route);
sw->config.route_hi = upper_32_bits(route);
sw->config.route_lo = lower_32_bits(route);
sw->config.enabled = 0;
sw->config.max_port_number = max_port_number;
size = (sw->config.max_port_number + 1) * sizeof(*sw->ports);
sw->ports = kunit_kzalloc(test, size, GFP_KERNEL);
if (!sw->ports)
return NULL;
for (i = 0; i <= sw->config.max_port_number; i++) {
sw->ports[i].sw = sw;
sw->ports[i].port = i;
sw->ports[i].config.port_number = i;
if (i) {
kunit_ida_init(test, &sw->ports[i].in_hopids);
kunit_ida_init(test, &sw->ports[i].out_hopids);
}
}
return sw;
}
static struct tb_switch *alloc_host(struct kunit *test)
{
struct tb_switch *sw;
sw = alloc_switch(test, 0, 7, 13);
if (!sw)
return NULL;
sw->config.vendor_id = 0x8086;
sw->config.device_id = 0x9a1b;
sw->ports[0].config.type = TB_TYPE_PORT;
sw->ports[0].config.max_in_hop_id = 7;
sw->ports[0].config.max_out_hop_id = 7;
sw->ports[1].config.type = TB_TYPE_PORT;
sw->ports[1].config.max_in_hop_id = 19;
sw->ports[1].config.max_out_hop_id = 19;
sw->ports[1].dual_link_port = &sw->ports[2];
sw->ports[2].config.type = TB_TYPE_PORT;
sw->ports[2].config.max_in_hop_id = 19;
sw->ports[2].config.max_out_hop_id = 19;
sw->ports[2].dual_link_port = &sw->ports[1];
sw->ports[2].link_nr = 1;
sw->ports[3].config.type = TB_TYPE_PORT;
sw->ports[3].config.max_in_hop_id = 19;
sw->ports[3].config.max_out_hop_id = 19;
sw->ports[3].dual_link_port = &sw->ports[4];
sw->ports[4].config.type = TB_TYPE_PORT;
sw->ports[4].config.max_in_hop_id = 19;
sw->ports[4].config.max_out_hop_id = 19;
sw->ports[4].dual_link_port = &sw->ports[3];
sw->ports[4].link_nr = 1;
sw->ports[5].config.type = TB_TYPE_DP_HDMI_IN;
sw->ports[5].config.max_in_hop_id = 9;
sw->ports[5].config.max_out_hop_id = 9;
sw->ports[5].cap_adap = -1;
sw->ports[6].config.type = TB_TYPE_DP_HDMI_IN;
sw->ports[6].config.max_in_hop_id = 9;
sw->ports[6].config.max_out_hop_id = 9;
sw->ports[6].cap_adap = -1;
sw->ports[7].config.type = TB_TYPE_NHI;
sw->ports[7].config.max_in_hop_id = 11;
sw->ports[7].config.max_out_hop_id = 11;
sw->ports[8].config.type = TB_TYPE_PCIE_DOWN;
sw->ports[8].config.max_in_hop_id = 8;
sw->ports[8].config.max_out_hop_id = 8;
sw->ports[9].config.type = TB_TYPE_PCIE_DOWN;
sw->ports[9].config.max_in_hop_id = 8;
sw->ports[9].config.max_out_hop_id = 8;
sw->ports[10].disabled = true;
sw->ports[11].disabled = true;
sw->ports[12].config.type = TB_TYPE_USB3_DOWN;
sw->ports[12].config.max_in_hop_id = 8;
sw->ports[12].config.max_out_hop_id = 8;
sw->ports[13].config.type = TB_TYPE_USB3_DOWN;
sw->ports[13].config.max_in_hop_id = 8;
sw->ports[13].config.max_out_hop_id = 8;
return sw;
}
static struct tb_switch *alloc_dev_default(struct kunit *test,
struct tb_switch *parent,
u64 route, bool bonded)
{
struct tb_port *port, *upstream_port;
struct tb_switch *sw;
sw = alloc_switch(test, route, 1, 19);
if (!sw)
return NULL;
sw->config.vendor_id = 0x8086;
sw->config.device_id = 0x15ef;
sw->ports[0].config.type = TB_TYPE_PORT;
sw->ports[0].config.max_in_hop_id = 8;
sw->ports[0].config.max_out_hop_id = 8;
sw->ports[1].config.type = TB_TYPE_PORT;
sw->ports[1].config.max_in_hop_id = 19;
sw->ports[1].config.max_out_hop_id = 19;
sw->ports[1].dual_link_port = &sw->ports[2];
sw->ports[2].config.type = TB_TYPE_PORT;
sw->ports[2].config.max_in_hop_id = 19;
sw->ports[2].config.max_out_hop_id = 19;
sw->ports[2].dual_link_port = &sw->ports[1];
sw->ports[2].link_nr = 1;
sw->ports[3].config.type = TB_TYPE_PORT;
sw->ports[3].config.max_in_hop_id = 19;
sw->ports[3].config.max_out_hop_id = 19;
sw->ports[3].dual_link_port = &sw->ports[4];
sw->ports[4].config.type = TB_TYPE_PORT;
sw->ports[4].config.max_in_hop_id = 19;
sw->ports[4].config.max_out_hop_id = 19;
sw->ports[4].dual_link_port = &sw->ports[3];
sw->ports[4].link_nr = 1;
sw->ports[5].config.type = TB_TYPE_PORT;
sw->ports[5].config.max_in_hop_id = 19;
sw->ports[5].config.max_out_hop_id = 19;
sw->ports[5].dual_link_port = &sw->ports[6];
sw->ports[6].config.type = TB_TYPE_PORT;
sw->ports[6].config.max_in_hop_id = 19;
sw->ports[6].config.max_out_hop_id = 19;
sw->ports[6].dual_link_port = &sw->ports[5];
sw->ports[6].link_nr = 1;
sw->ports[7].config.type = TB_TYPE_PORT;
sw->ports[7].config.max_in_hop_id = 19;
sw->ports[7].config.max_out_hop_id = 19;
sw->ports[7].dual_link_port = &sw->ports[8];
sw->ports[8].config.type = TB_TYPE_PORT;
sw->ports[8].config.max_in_hop_id = 19;
sw->ports[8].config.max_out_hop_id = 19;
sw->ports[8].dual_link_port = &sw->ports[7];
sw->ports[8].link_nr = 1;
sw->ports[9].config.type = TB_TYPE_PCIE_UP;
sw->ports[9].config.max_in_hop_id = 8;
sw->ports[9].config.max_out_hop_id = 8;
sw->ports[10].config.type = TB_TYPE_PCIE_DOWN;
sw->ports[10].config.max_in_hop_id = 8;
sw->ports[10].config.max_out_hop_id = 8;
sw->ports[11].config.type = TB_TYPE_PCIE_DOWN;
sw->ports[11].config.max_in_hop_id = 8;
sw->ports[11].config.max_out_hop_id = 8;
sw->ports[12].config.type = TB_TYPE_PCIE_DOWN;
sw->ports[12].config.max_in_hop_id = 8;
sw->ports[12].config.max_out_hop_id = 8;
sw->ports[13].config.type = TB_TYPE_DP_HDMI_OUT;
sw->ports[13].config.max_in_hop_id = 9;
sw->ports[13].config.max_out_hop_id = 9;
sw->ports[13].cap_adap = -1;
sw->ports[14].config.type = TB_TYPE_DP_HDMI_OUT;
sw->ports[14].config.max_in_hop_id = 9;
sw->ports[14].config.max_out_hop_id = 9;
sw->ports[14].cap_adap = -1;
sw->ports[15].disabled = true;
sw->ports[16].config.type = TB_TYPE_USB3_UP;
sw->ports[16].config.max_in_hop_id = 8;
sw->ports[16].config.max_out_hop_id = 8;
sw->ports[17].config.type = TB_TYPE_USB3_DOWN;
sw->ports[17].config.max_in_hop_id = 8;
sw->ports[17].config.max_out_hop_id = 8;
sw->ports[18].config.type = TB_TYPE_USB3_DOWN;
sw->ports[18].config.max_in_hop_id = 8;
sw->ports[18].config.max_out_hop_id = 8;
sw->ports[19].config.type = TB_TYPE_USB3_DOWN;
sw->ports[19].config.max_in_hop_id = 8;
sw->ports[19].config.max_out_hop_id = 8;
if (!parent)
return sw;
/* Link them */
upstream_port = tb_upstream_port(sw);
port = tb_port_at(route, parent);
port->remote = upstream_port;
upstream_port->remote = port;
if (port->dual_link_port && upstream_port->dual_link_port) {
port->dual_link_port->remote = upstream_port->dual_link_port;
upstream_port->dual_link_port->remote = port->dual_link_port;
}
if (bonded) {
/* Bonding is used */
port->bonded = true;
port->dual_link_port->bonded = true;
upstream_port->bonded = true;
upstream_port->dual_link_port->bonded = true;
}
return sw;
}
static struct tb_switch *alloc_dev_with_dpin(struct kunit *test,
struct tb_switch *parent,
u64 route, bool bonded)
{
struct tb_switch *sw;
sw = alloc_dev_default(test, parent, route, bonded);
if (!sw)
return NULL;
sw->ports[13].config.type = TB_TYPE_DP_HDMI_IN;
sw->ports[13].config.max_in_hop_id = 9;
sw->ports[13].config.max_out_hop_id = 9;
sw->ports[14].config.type = TB_TYPE_DP_HDMI_IN;
sw->ports[14].config.max_in_hop_id = 9;
sw->ports[14].config.max_out_hop_id = 9;
return sw;
}
static void tb_test_path_basic(struct kunit *test)
{
struct tb_port *src_port, *dst_port, *p;
struct tb_switch *host;
host = alloc_host(test);
src_port = &host->ports[5];
dst_port = src_port;
p = tb_next_port_on_path(src_port, dst_port, NULL);
KUNIT_EXPECT_PTR_EQ(test, p, dst_port);
p = tb_next_port_on_path(src_port, dst_port, p);
KUNIT_EXPECT_TRUE(test, !p);
}
static void tb_test_path_not_connected_walk(struct kunit *test)
{
struct tb_port *src_port, *dst_port, *p;
struct tb_switch *host, *dev;
host = alloc_host(test);
/* No connection between host and dev */
dev = alloc_dev_default(test, NULL, 3, true);
src_port = &host->ports[12];
dst_port = &dev->ports[16];
p = tb_next_port_on_path(src_port, dst_port, NULL);
KUNIT_EXPECT_PTR_EQ(test, p, src_port);
p = tb_next_port_on_path(src_port, dst_port, p);
KUNIT_EXPECT_PTR_EQ(test, p, &host->ports[3]);
p = tb_next_port_on_path(src_port, dst_port, p);
KUNIT_EXPECT_TRUE(test, !p);
/* Other direction */
p = tb_next_port_on_path(dst_port, src_port, NULL);
KUNIT_EXPECT_PTR_EQ(test, p, dst_port);
p = tb_next_port_on_path(dst_port, src_port, p);
KUNIT_EXPECT_PTR_EQ(test, p, &dev->ports[1]);
p = tb_next_port_on_path(dst_port, src_port, p);
KUNIT_EXPECT_TRUE(test, !p);
}
struct port_expectation {
u64 route;
u8 port;
enum tb_port_type type;
};
static void tb_test_path_single_hop_walk(struct kunit *test)
{
/*
* Walks from Host PCIe downstream port to Device #1 PCIe
* upstream port.
*
* [Host]
* 1 |
* 1 |
* [Device]
*/
static const struct port_expectation test_data[] = {
{ .route = 0x0, .port = 8, .type = TB_TYPE_PCIE_DOWN },
{ .route = 0x0, .port = 1, .type = TB_TYPE_PORT },
{ .route = 0x1, .port = 1, .type = TB_TYPE_PORT },
{ .route = 0x1, .port = 9, .type = TB_TYPE_PCIE_UP },
};
struct tb_port *src_port, *dst_port, *p;
struct tb_switch *host, *dev;
int i;
host = alloc_host(test);
dev = alloc_dev_default(test, host, 1, true);
src_port = &host->ports[8];
dst_port = &dev->ports[9];
/* Walk both directions */
i = 0;
tb_for_each_port_on_path(src_port, dst_port, p) {
KUNIT_EXPECT_TRUE(test, i < ARRAY_SIZE(test_data));
KUNIT_EXPECT_EQ(test, tb_route(p->sw), test_data[i].route);
KUNIT_EXPECT_EQ(test, p->port, test_data[i].port);
KUNIT_EXPECT_EQ(test, (enum tb_port_type)p->config.type,
test_data[i].type);
i++;
}
KUNIT_EXPECT_EQ(test, i, (int)ARRAY_SIZE(test_data));
i = ARRAY_SIZE(test_data) - 1;
tb_for_each_port_on_path(dst_port, src_port, p) {
KUNIT_EXPECT_TRUE(test, i < ARRAY_SIZE(test_data));
KUNIT_EXPECT_EQ(test, tb_route(p->sw), test_data[i].route);
KUNIT_EXPECT_EQ(test, p->port, test_data[i].port);
KUNIT_EXPECT_EQ(test, (enum tb_port_type)p->config.type,
test_data[i].type);
i--;
}
KUNIT_EXPECT_EQ(test, i, -1);
}
static void tb_test_path_daisy_chain_walk(struct kunit *test)
{
/*
* Walks from Host DP IN to Device #2 DP OUT.
*
* [Host]
* 1 |
* 1 |
* [Device #1]
* 3 /
* 1 /
* [Device #2]
*/
static const struct port_expectation test_data[] = {
{ .route = 0x0, .port = 5, .type = TB_TYPE_DP_HDMI_IN },
{ .route = 0x0, .port = 1, .type = TB_TYPE_PORT },
{ .route = 0x1, .port = 1, .type = TB_TYPE_PORT },
{ .route = 0x1, .port = 3, .type = TB_TYPE_PORT },
{ .route = 0x301, .port = 1, .type = TB_TYPE_PORT },
{ .route = 0x301, .port = 13, .type = TB_TYPE_DP_HDMI_OUT },
};
struct tb_port *src_port, *dst_port, *p;
struct tb_switch *host, *dev1, *dev2;
int i;
host = alloc_host(test);
dev1 = alloc_dev_default(test, host, 0x1, true);
dev2 = alloc_dev_default(test, dev1, 0x301, true);
src_port = &host->ports[5];
dst_port = &dev2->ports[13];
/* Walk both directions */
i = 0;
tb_for_each_port_on_path(src_port, dst_port, p) {
KUNIT_EXPECT_TRUE(test, i < ARRAY_SIZE(test_data));
KUNIT_EXPECT_EQ(test, tb_route(p->sw), test_data[i].route);
KUNIT_EXPECT_EQ(test, p->port, test_data[i].port);
KUNIT_EXPECT_EQ(test, (enum tb_port_type)p->config.type,
test_data[i].type);
i++;
}
KUNIT_EXPECT_EQ(test, i, (int)ARRAY_SIZE(test_data));
i = ARRAY_SIZE(test_data) - 1;
tb_for_each_port_on_path(dst_port, src_port, p) {
KUNIT_EXPECT_TRUE(test, i < ARRAY_SIZE(test_data));
KUNIT_EXPECT_EQ(test, tb_route(p->sw), test_data[i].route);
KUNIT_EXPECT_EQ(test, p->port, test_data[i].port);
KUNIT_EXPECT_EQ(test, (enum tb_port_type)p->config.type,
test_data[i].type);
i--;
}
KUNIT_EXPECT_EQ(test, i, -1);
}
static void tb_test_path_simple_tree_walk(struct kunit *test)
{
/*
* Walks from Host DP IN to Device #3 DP OUT.
*
* [Host]
* 1 |
* 1 |
* [Device #1]
* 3 / | 5 \ 7
* 1 / | \ 1
* [Device #2] | [Device #4]
* | 1
* [Device #3]
*/
static const struct port_expectation test_data[] = {
{ .route = 0x0, .port = 5, .type = TB_TYPE_DP_HDMI_IN },
{ .route = 0x0, .port = 1, .type = TB_TYPE_PORT },
{ .route = 0x1, .port = 1, .type = TB_TYPE_PORT },
{ .route = 0x1, .port = 5, .type = TB_TYPE_PORT },
{ .route = 0x501, .port = 1, .type = TB_TYPE_PORT },
{ .route = 0x501, .port = 13, .type = TB_TYPE_DP_HDMI_OUT },
};
struct tb_port *src_port, *dst_port, *p;
struct tb_switch *host, *dev1, *dev3;
int i;
host = alloc_host(test);
dev1 = alloc_dev_default(test, host, 0x1, true);
alloc_dev_default(test, dev1, 0x301, true);
dev3 = alloc_dev_default(test, dev1, 0x501, true);
alloc_dev_default(test, dev1, 0x701, true);
src_port = &host->ports[5];
dst_port = &dev3->ports[13];
/* Walk both directions */
i = 0;
tb_for_each_port_on_path(src_port, dst_port, p) {
KUNIT_EXPECT_TRUE(test, i < ARRAY_SIZE(test_data));
KUNIT_EXPECT_EQ(test, tb_route(p->sw), test_data[i].route);
KUNIT_EXPECT_EQ(test, p->port, test_data[i].port);
KUNIT_EXPECT_EQ(test, (enum tb_port_type)p->config.type,
test_data[i].type);
i++;
}
KUNIT_EXPECT_EQ(test, i, (int)ARRAY_SIZE(test_data));
i = ARRAY_SIZE(test_data) - 1;
tb_for_each_port_on_path(dst_port, src_port, p) {
KUNIT_EXPECT_TRUE(test, i < ARRAY_SIZE(test_data));
KUNIT_EXPECT_EQ(test, tb_route(p->sw), test_data[i].route);
KUNIT_EXPECT_EQ(test, p->port, test_data[i].port);
KUNIT_EXPECT_EQ(test, (enum tb_port_type)p->config.type,
test_data[i].type);
i--;
}
KUNIT_EXPECT_EQ(test, i, -1);
}
static void tb_test_path_complex_tree_walk(struct kunit *test)
{
/*
* Walks from Device #3 DP IN to Device #9 DP OUT.
*
* [Host]
* 1 |
* 1 |
* [Device #1]
* 3 / | 5 \ 7
* 1 / | \ 1
* [Device #2] | [Device #5]
* 5 | | 1 \ 7
* 1 | [Device #4] \ 1
* [Device #3] [Device #6]
* 3 /
* 1 /
* [Device #7]
* 3 / | 5
* 1 / |
* [Device #8] | 1
* [Device #9]
*/
static const struct port_expectation test_data[] = {
{ .route = 0x50301, .port = 13, .type = TB_TYPE_DP_HDMI_IN },
{ .route = 0x50301, .port = 1, .type = TB_TYPE_PORT },
{ .route = 0x301, .port = 5, .type = TB_TYPE_PORT },
{ .route = 0x301, .port = 1, .type = TB_TYPE_PORT },
{ .route = 0x1, .port = 3, .type = TB_TYPE_PORT },
{ .route = 0x1, .port = 7, .type = TB_TYPE_PORT },
{ .route = 0x701, .port = 1, .type = TB_TYPE_PORT },
{ .route = 0x701, .port = 7, .type = TB_TYPE_PORT },
{ .route = 0x70701, .port = 1, .type = TB_TYPE_PORT },
{ .route = 0x70701, .port = 3, .type = TB_TYPE_PORT },
{ .route = 0x3070701, .port = 1, .type = TB_TYPE_PORT },
{ .route = 0x3070701, .port = 5, .type = TB_TYPE_PORT },
{ .route = 0x503070701, .port = 1, .type = TB_TYPE_PORT },
{ .route = 0x503070701, .port = 14, .type = TB_TYPE_DP_HDMI_OUT },
};
struct tb_switch *host, *dev1, *dev2, *dev3, *dev5, *dev6, *dev7, *dev9;
struct tb_port *src_port, *dst_port, *p;
int i;
host = alloc_host(test);
dev1 = alloc_dev_default(test, host, 0x1, true);
dev2 = alloc_dev_default(test, dev1, 0x301, true);
dev3 = alloc_dev_with_dpin(test, dev2, 0x50301, true);
alloc_dev_default(test, dev1, 0x501, true);
dev5 = alloc_dev_default(test, dev1, 0x701, true);
dev6 = alloc_dev_default(test, dev5, 0x70701, true);
dev7 = alloc_dev_default(test, dev6, 0x3070701, true);
alloc_dev_default(test, dev7, 0x303070701, true);
dev9 = alloc_dev_default(test, dev7, 0x503070701, true);
src_port = &dev3->ports[13];
dst_port = &dev9->ports[14];
/* Walk both directions */
i = 0;
tb_for_each_port_on_path(src_port, dst_port, p) {
KUNIT_EXPECT_TRUE(test, i < ARRAY_SIZE(test_data));
KUNIT_EXPECT_EQ(test, tb_route(p->sw), test_data[i].route);
KUNIT_EXPECT_EQ(test, p->port, test_data[i].port);
KUNIT_EXPECT_EQ(test, (enum tb_port_type)p->config.type,
test_data[i].type);
i++;
}
KUNIT_EXPECT_EQ(test, i, (int)ARRAY_SIZE(test_data));
i = ARRAY_SIZE(test_data) - 1;
tb_for_each_port_on_path(dst_port, src_port, p) {
KUNIT_EXPECT_TRUE(test, i < ARRAY_SIZE(test_data));
KUNIT_EXPECT_EQ(test, tb_route(p->sw), test_data[i].route);
KUNIT_EXPECT_EQ(test, p->port, test_data[i].port);
KUNIT_EXPECT_EQ(test, (enum tb_port_type)p->config.type,
test_data[i].type);
i--;
}
KUNIT_EXPECT_EQ(test, i, -1);
}
static void tb_test_path_max_length_walk(struct kunit *test)
{
struct tb_switch *host, *dev1, *dev2, *dev3, *dev4, *dev5, *dev6;
struct tb_switch *dev7, *dev8, *dev9, *dev10, *dev11, *dev12;
struct tb_port *src_port, *dst_port, *p;
int i;
/*
* Walks from Device #6 DP IN to Device #12 DP OUT.
*
* [Host]
* 1 / \ 3
* 1 / \ 1
* [Device #1] [Device #7]
* 3 | | 3
* 1 | | 1
* [Device #2] [Device #8]
* 3 | | 3
* 1 | | 1
* [Device #3] [Device #9]
* 3 | | 3
* 1 | | 1
* [Device #4] [Device #10]
* 3 | | 3
* 1 | | 1
* [Device #5] [Device #11]
* 3 | | 3
* 1 | | 1
* [Device #6] [Device #12]
*/
static const struct port_expectation test_data[] = {
{ .route = 0x30303030301, .port = 13, .type = TB_TYPE_DP_HDMI_IN },
{ .route = 0x30303030301, .port = 1, .type = TB_TYPE_PORT },
{ .route = 0x303030301, .port = 3, .type = TB_TYPE_PORT },
{ .route = 0x303030301, .port = 1, .type = TB_TYPE_PORT },
{ .route = 0x3030301, .port = 3, .type = TB_TYPE_PORT },
{ .route = 0x3030301, .port = 1, .type = TB_TYPE_PORT },
{ .route = 0x30301, .port = 3, .type = TB_TYPE_PORT },
{ .route = 0x30301, .port = 1, .type = TB_TYPE_PORT },
{ .route = 0x301, .port = 3, .type = TB_TYPE_PORT },
{ .route = 0x301, .port = 1, .type = TB_TYPE_PORT },
{ .route = 0x1, .port = 3, .type = TB_TYPE_PORT },
{ .route = 0x1, .port = 1, .type = TB_TYPE_PORT },
{ .route = 0x0, .port = 1, .type = TB_TYPE_PORT },
{ .route = 0x0, .port = 3, .type = TB_TYPE_PORT },
{ .route = 0x3, .port = 1, .type = TB_TYPE_PORT },
{ .route = 0x3, .port = 3, .type = TB_TYPE_PORT },
{ .route = 0x303, .port = 1, .type = TB_TYPE_PORT },
{ .route = 0x303, .port = 3, .type = TB_TYPE_PORT },
{ .route = 0x30303, .port = 1, .type = TB_TYPE_PORT },
{ .route = 0x30303, .port = 3, .type = TB_TYPE_PORT },
{ .route = 0x3030303, .port = 1, .type = TB_TYPE_PORT },
{ .route = 0x3030303, .port = 3, .type = TB_TYPE_PORT },
{ .route = 0x303030303, .port = 1, .type = TB_TYPE_PORT },
{ .route = 0x303030303, .port = 3, .type = TB_TYPE_PORT },
{ .route = 0x30303030303, .port = 1, .type = TB_TYPE_PORT },
{ .route = 0x30303030303, .port = 13, .type = TB_TYPE_DP_HDMI_OUT },
};
host = alloc_host(test);
dev1 = alloc_dev_default(test, host, 0x1, true);
dev2 = alloc_dev_default(test, dev1, 0x301, true);
dev3 = alloc_dev_default(test, dev2, 0x30301, true);
dev4 = alloc_dev_default(test, dev3, 0x3030301, true);
dev5 = alloc_dev_default(test, dev4, 0x303030301, true);
dev6 = alloc_dev_with_dpin(test, dev5, 0x30303030301, true);
dev7 = alloc_dev_default(test, host, 0x3, true);
dev8 = alloc_dev_default(test, dev7, 0x303, true);
dev9 = alloc_dev_default(test, dev8, 0x30303, true);
dev10 = alloc_dev_default(test, dev9, 0x3030303, true);
dev11 = alloc_dev_default(test, dev10, 0x303030303, true);
dev12 = alloc_dev_default(test, dev11, 0x30303030303, true);
src_port = &dev6->ports[13];
dst_port = &dev12->ports[13];
/* Walk both directions */
i = 0;
tb_for_each_port_on_path(src_port, dst_port, p) {
KUNIT_EXPECT_TRUE(test, i < ARRAY_SIZE(test_data));
KUNIT_EXPECT_EQ(test, tb_route(p->sw), test_data[i].route);
KUNIT_EXPECT_EQ(test, p->port, test_data[i].port);
KUNIT_EXPECT_EQ(test, (enum tb_port_type)p->config.type,
test_data[i].type);
i++;
}
KUNIT_EXPECT_EQ(test, i, (int)ARRAY_SIZE(test_data));
i = ARRAY_SIZE(test_data) - 1;
tb_for_each_port_on_path(dst_port, src_port, p) {
KUNIT_EXPECT_TRUE(test, i < ARRAY_SIZE(test_data));
KUNIT_EXPECT_EQ(test, tb_route(p->sw), test_data[i].route);
KUNIT_EXPECT_EQ(test, p->port, test_data[i].port);
KUNIT_EXPECT_EQ(test, (enum tb_port_type)p->config.type,
test_data[i].type);
i--;
}
KUNIT_EXPECT_EQ(test, i, -1);
}
static void tb_test_path_not_connected(struct kunit *test)
{
struct tb_switch *host, *dev1, *dev2;
struct tb_port *down, *up;
struct tb_path *path;
host = alloc_host(test);
dev1 = alloc_dev_default(test, host, 0x3, false);
/* Not connected to anything */
dev2 = alloc_dev_default(test, NULL, 0x303, false);
down = &dev1->ports[10];
up = &dev2->ports[9];
path = tb_path_alloc(NULL, down, 8, up, 8, 0, "PCIe Down");
KUNIT_ASSERT_TRUE(test, path == NULL);
path = tb_path_alloc(NULL, down, 8, up, 8, 1, "PCIe Down");
KUNIT_ASSERT_TRUE(test, path == NULL);
}
struct hop_expectation {
u64 route;
u8 in_port;
enum tb_port_type in_type;
u8 out_port;
enum tb_port_type out_type;
};
static void tb_test_path_not_bonded_lane0(struct kunit *test)
{
/*
* PCIe path from host to device using lane 0.
*
* [Host]
* 3 |: 4
* 1 |: 2
* [Device]
*/
static const struct hop_expectation test_data[] = {
{
.route = 0x0,
.in_port = 9,
.in_type = TB_TYPE_PCIE_DOWN,
.out_port = 3,
.out_type = TB_TYPE_PORT,
},
{
.route = 0x3,
.in_port = 1,
.in_type = TB_TYPE_PORT,
.out_port = 9,
.out_type = TB_TYPE_PCIE_UP,
},
};
struct tb_switch *host, *dev;
struct tb_port *down, *up;
struct tb_path *path;
int i;
host = alloc_host(test);
dev = alloc_dev_default(test, host, 0x3, false);
down = &host->ports[9];
up = &dev->ports[9];
path = tb_path_alloc(NULL, down, 8, up, 8, 0, "PCIe Down");
KUNIT_ASSERT_TRUE(test, path != NULL);
KUNIT_ASSERT_EQ(test, path->path_length, (int)ARRAY_SIZE(test_data));
for (i = 0; i < ARRAY_SIZE(test_data); i++) {
const struct tb_port *in_port, *out_port;
in_port = path->hops[i].in_port;
out_port = path->hops[i].out_port;
KUNIT_EXPECT_EQ(test, tb_route(in_port->sw), test_data[i].route);
KUNIT_EXPECT_EQ(test, in_port->port, test_data[i].in_port);
KUNIT_EXPECT_EQ(test, (enum tb_port_type)in_port->config.type,
test_data[i].in_type);
KUNIT_EXPECT_EQ(test, tb_route(out_port->sw), test_data[i].route);
KUNIT_EXPECT_EQ(test, out_port->port, test_data[i].out_port);
KUNIT_EXPECT_EQ(test, (enum tb_port_type)out_port->config.type,
test_data[i].out_type);
}
tb_path_free(path);
}
static void tb_test_path_not_bonded_lane1(struct kunit *test)
{
/*
* DP Video path from host to device using lane 1. Paths like
* these are only used with Thunderbolt 1 devices where lane
* bonding is not possible. USB4 specifically does not allow
* paths like this (you either use lane 0 where lane 1 is
* disabled or both lanes are bonded).
*
* [Host]
* 1 :| 2
* 1 :| 2
* [Device]
*/
static const struct hop_expectation test_data[] = {
{
.route = 0x0,
.in_port = 5,
.in_type = TB_TYPE_DP_HDMI_IN,
.out_port = 2,
.out_type = TB_TYPE_PORT,
},
{
.route = 0x1,
.in_port = 2,
.in_type = TB_TYPE_PORT,
.out_port = 13,
.out_type = TB_TYPE_DP_HDMI_OUT,
},
};
struct tb_switch *host, *dev;
struct tb_port *in, *out;
struct tb_path *path;
int i;
host = alloc_host(test);
dev = alloc_dev_default(test, host, 0x1, false);
in = &host->ports[5];
out = &dev->ports[13];
path = tb_path_alloc(NULL, in, 9, out, 9, 1, "Video");
KUNIT_ASSERT_TRUE(test, path != NULL);
KUNIT_ASSERT_EQ(test, path->path_length, (int)ARRAY_SIZE(test_data));
for (i = 0; i < ARRAY_SIZE(test_data); i++) {
const struct tb_port *in_port, *out_port;
in_port = path->hops[i].in_port;
out_port = path->hops[i].out_port;
KUNIT_EXPECT_EQ(test, tb_route(in_port->sw), test_data[i].route);
KUNIT_EXPECT_EQ(test, in_port->port, test_data[i].in_port);
KUNIT_EXPECT_EQ(test, (enum tb_port_type)in_port->config.type,
test_data[i].in_type);
KUNIT_EXPECT_EQ(test, tb_route(out_port->sw), test_data[i].route);
KUNIT_EXPECT_EQ(test, out_port->port, test_data[i].out_port);
KUNIT_EXPECT_EQ(test, (enum tb_port_type)out_port->config.type,
test_data[i].out_type);
}
tb_path_free(path);
}
static void tb_test_path_not_bonded_lane1_chain(struct kunit *test)
{
/*
* DP Video path from host to device 3 using lane 1.
*
* [Host]
* 1 :| 2
* 1 :| 2
* [Device #1]
* 7 :| 8
* 1 :| 2
* [Device #2]
* 5 :| 6
* 1 :| 2
* [Device #3]
*/
static const struct hop_expectation test_data[] = {
{
.route = 0x0,
.in_port = 5,
.in_type = TB_TYPE_DP_HDMI_IN,
.out_port = 2,
.out_type = TB_TYPE_PORT,
},
{
.route = 0x1,
.in_port = 2,
.in_type = TB_TYPE_PORT,
.out_port = 8,
.out_type = TB_TYPE_PORT,
},
{
.route = 0x701,
.in_port = 2,
.in_type = TB_TYPE_PORT,
.out_port = 6,
.out_type = TB_TYPE_PORT,
},
{
.route = 0x50701,
.in_port = 2,
.in_type = TB_TYPE_PORT,
.out_port = 13,
.out_type = TB_TYPE_DP_HDMI_OUT,
},
};
struct tb_switch *host, *dev1, *dev2, *dev3;
struct tb_port *in, *out;
struct tb_path *path;
int i;
host = alloc_host(test);
dev1 = alloc_dev_default(test, host, 0x1, false);
dev2 = alloc_dev_default(test, dev1, 0x701, false);
dev3 = alloc_dev_default(test, dev2, 0x50701, false);
in = &host->ports[5];
out = &dev3->ports[13];
path = tb_path_alloc(NULL, in, 9, out, 9, 1, "Video");
KUNIT_ASSERT_TRUE(test, path != NULL);
KUNIT_ASSERT_EQ(test, path->path_length, (int)ARRAY_SIZE(test_data));
for (i = 0; i < ARRAY_SIZE(test_data); i++) {
const struct tb_port *in_port, *out_port;
in_port = path->hops[i].in_port;
out_port = path->hops[i].out_port;
KUNIT_EXPECT_EQ(test, tb_route(in_port->sw), test_data[i].route);
KUNIT_EXPECT_EQ(test, in_port->port, test_data[i].in_port);
KUNIT_EXPECT_EQ(test, (enum tb_port_type)in_port->config.type,
test_data[i].in_type);
KUNIT_EXPECT_EQ(test, tb_route(out_port->sw), test_data[i].route);
KUNIT_EXPECT_EQ(test, out_port->port, test_data[i].out_port);
KUNIT_EXPECT_EQ(test, (enum tb_port_type)out_port->config.type,
test_data[i].out_type);
}
tb_path_free(path);
}
static void tb_test_path_not_bonded_lane1_chain_reverse(struct kunit *test)
{
/*
* DP Video path from device 3 to host using lane 1.
*
* [Host]
* 1 :| 2
* 1 :| 2
* [Device #1]
* 7 :| 8
* 1 :| 2
* [Device #2]
* 5 :| 6
* 1 :| 2
* [Device #3]
*/
static const struct hop_expectation test_data[] = {
{
.route = 0x50701,
.in_port = 13,
.in_type = TB_TYPE_DP_HDMI_IN,
.out_port = 2,
.out_type = TB_TYPE_PORT,
},
{
.route = 0x701,
.in_port = 6,
.in_type = TB_TYPE_PORT,
.out_port = 2,
.out_type = TB_TYPE_PORT,
},
{
.route = 0x1,
.in_port = 8,
.in_type = TB_TYPE_PORT,
.out_port = 2,
.out_type = TB_TYPE_PORT,
},
{
.route = 0x0,
.in_port = 2,
.in_type = TB_TYPE_PORT,
.out_port = 5,
.out_type = TB_TYPE_DP_HDMI_IN,
},
};
struct tb_switch *host, *dev1, *dev2, *dev3;
struct tb_port *in, *out;
struct tb_path *path;
int i;
host = alloc_host(test);
dev1 = alloc_dev_default(test, host, 0x1, false);
dev2 = alloc_dev_default(test, dev1, 0x701, false);
dev3 = alloc_dev_with_dpin(test, dev2, 0x50701, false);
in = &dev3->ports[13];
out = &host->ports[5];
path = tb_path_alloc(NULL, in, 9, out, 9, 1, "Video");
KUNIT_ASSERT_TRUE(test, path != NULL);
KUNIT_ASSERT_EQ(test, path->path_length, (int)ARRAY_SIZE(test_data));
for (i = 0; i < ARRAY_SIZE(test_data); i++) {
const struct tb_port *in_port, *out_port;
in_port = path->hops[i].in_port;
out_port = path->hops[i].out_port;
KUNIT_EXPECT_EQ(test, tb_route(in_port->sw), test_data[i].route);
KUNIT_EXPECT_EQ(test, in_port->port, test_data[i].in_port);
KUNIT_EXPECT_EQ(test, (enum tb_port_type)in_port->config.type,
test_data[i].in_type);
KUNIT_EXPECT_EQ(test, tb_route(out_port->sw), test_data[i].route);
KUNIT_EXPECT_EQ(test, out_port->port, test_data[i].out_port);
KUNIT_EXPECT_EQ(test, (enum tb_port_type)out_port->config.type,
test_data[i].out_type);
}
tb_path_free(path);
}
static void tb_test_path_mixed_chain(struct kunit *test)
{
/*
* DP Video path from host to device 4 where first and last link
* is bonded.
*
* [Host]
* 1 |
* 1 |
* [Device #1]
* 7 :| 8
* 1 :| 2
* [Device #2]
* 5 :| 6
* 1 :| 2
* [Device #3]
* 3 |
* 1 |
* [Device #4]
*/
static const struct hop_expectation test_data[] = {
{
.route = 0x0,
.in_port = 5,
.in_type = TB_TYPE_DP_HDMI_IN,
.out_port = 1,
.out_type = TB_TYPE_PORT,
},
{
.route = 0x1,
.in_port = 1,
.in_type = TB_TYPE_PORT,
.out_port = 8,
.out_type = TB_TYPE_PORT,
},
{
.route = 0x701,
.in_port = 2,
.in_type = TB_TYPE_PORT,
.out_port = 6,
.out_type = TB_TYPE_PORT,
},
{
.route = 0x50701,
.in_port = 2,
.in_type = TB_TYPE_PORT,
.out_port = 3,
.out_type = TB_TYPE_PORT,
},
{
.route = 0x3050701,
.in_port = 1,
.in_type = TB_TYPE_PORT,
.out_port = 13,
.out_type = TB_TYPE_DP_HDMI_OUT,
},
};
struct tb_switch *host, *dev1, *dev2, *dev3, *dev4;
struct tb_port *in, *out;
struct tb_path *path;
int i;
host = alloc_host(test);
dev1 = alloc_dev_default(test, host, 0x1, true);
dev2 = alloc_dev_default(test, dev1, 0x701, false);
dev3 = alloc_dev_default(test, dev2, 0x50701, false);
dev4 = alloc_dev_default(test, dev3, 0x3050701, true);
in = &host->ports[5];
out = &dev4->ports[13];
path = tb_path_alloc(NULL, in, 9, out, 9, 1, "Video");
KUNIT_ASSERT_TRUE(test, path != NULL);
KUNIT_ASSERT_EQ(test, path->path_length, (int)ARRAY_SIZE(test_data));
for (i = 0; i < ARRAY_SIZE(test_data); i++) {
const struct tb_port *in_port, *out_port;
in_port = path->hops[i].in_port;
out_port = path->hops[i].out_port;
KUNIT_EXPECT_EQ(test, tb_route(in_port->sw), test_data[i].route);
KUNIT_EXPECT_EQ(test, in_port->port, test_data[i].in_port);
KUNIT_EXPECT_EQ(test, (enum tb_port_type)in_port->config.type,
test_data[i].in_type);
KUNIT_EXPECT_EQ(test, tb_route(out_port->sw), test_data[i].route);
KUNIT_EXPECT_EQ(test, out_port->port, test_data[i].out_port);
KUNIT_EXPECT_EQ(test, (enum tb_port_type)out_port->config.type,
test_data[i].out_type);
}
tb_path_free(path);
}
static void tb_test_path_mixed_chain_reverse(struct kunit *test)
{
/*
* DP Video path from device 4 to host where first and last link
* is bonded.
*
* [Host]
* 1 |
* 1 |
* [Device #1]
* 7 :| 8
* 1 :| 2
* [Device #2]
* 5 :| 6
* 1 :| 2
* [Device #3]
* 3 |
* 1 |
* [Device #4]
*/
static const struct hop_expectation test_data[] = {
{
.route = 0x3050701,
.in_port = 13,
.in_type = TB_TYPE_DP_HDMI_OUT,
.out_port = 1,
.out_type = TB_TYPE_PORT,
},
{
.route = 0x50701,
.in_port = 3,
.in_type = TB_TYPE_PORT,
.out_port = 2,
.out_type = TB_TYPE_PORT,
},
{
.route = 0x701,
.in_port = 6,
.in_type = TB_TYPE_PORT,
.out_port = 2,
.out_type = TB_TYPE_PORT,
},
{
.route = 0x1,
.in_port = 8,
.in_type = TB_TYPE_PORT,
.out_port = 1,
.out_type = TB_TYPE_PORT,
},
{
.route = 0x0,
.in_port = 1,
.in_type = TB_TYPE_PORT,
.out_port = 5,
.out_type = TB_TYPE_DP_HDMI_IN,
},
};
struct tb_switch *host, *dev1, *dev2, *dev3, *dev4;
struct tb_port *in, *out;
struct tb_path *path;
int i;
host = alloc_host(test);
dev1 = alloc_dev_default(test, host, 0x1, true);
dev2 = alloc_dev_default(test, dev1, 0x701, false);
dev3 = alloc_dev_default(test, dev2, 0x50701, false);
dev4 = alloc_dev_default(test, dev3, 0x3050701, true);
in = &dev4->ports[13];
out = &host->ports[5];
path = tb_path_alloc(NULL, in, 9, out, 9, 1, "Video");
KUNIT_ASSERT_TRUE(test, path != NULL);
KUNIT_ASSERT_EQ(test, path->path_length, (int)ARRAY_SIZE(test_data));
for (i = 0; i < ARRAY_SIZE(test_data); i++) {
const struct tb_port *in_port, *out_port;
in_port = path->hops[i].in_port;
out_port = path->hops[i].out_port;
KUNIT_EXPECT_EQ(test, tb_route(in_port->sw), test_data[i].route);
KUNIT_EXPECT_EQ(test, in_port->port, test_data[i].in_port);
KUNIT_EXPECT_EQ(test, (enum tb_port_type)in_port->config.type,
test_data[i].in_type);
KUNIT_EXPECT_EQ(test, tb_route(out_port->sw), test_data[i].route);
KUNIT_EXPECT_EQ(test, out_port->port, test_data[i].out_port);
KUNIT_EXPECT_EQ(test, (enum tb_port_type)out_port->config.type,
test_data[i].out_type);
}
tb_path_free(path);
}
static void tb_test_tunnel_pcie(struct kunit *test)
{
struct tb_switch *host, *dev1, *dev2;
struct tb_tunnel *tunnel1, *tunnel2;
struct tb_port *down, *up;
/*
* Create PCIe tunnel between host and two devices.
*
* [Host]
* 1 |
* 1 |
* [Device #1]
* 5 |
* 1 |
* [Device #2]
*/
host = alloc_host(test);
dev1 = alloc_dev_default(test, host, 0x1, true);
dev2 = alloc_dev_default(test, dev1, 0x501, true);
down = &host->ports[8];
up = &dev1->ports[9];
tunnel1 = tb_tunnel_alloc_pci(NULL, up, down);
KUNIT_ASSERT_TRUE(test, tunnel1 != NULL);
KUNIT_EXPECT_EQ(test, tunnel1->type, (enum tb_tunnel_type)TB_TUNNEL_PCI);
KUNIT_EXPECT_PTR_EQ(test, tunnel1->src_port, down);
KUNIT_EXPECT_PTR_EQ(test, tunnel1->dst_port, up);
KUNIT_ASSERT_EQ(test, tunnel1->npaths, (size_t)2);
KUNIT_ASSERT_EQ(test, tunnel1->paths[0]->path_length, 2);
KUNIT_EXPECT_PTR_EQ(test, tunnel1->paths[0]->hops[0].in_port, down);
KUNIT_EXPECT_PTR_EQ(test, tunnel1->paths[0]->hops[1].out_port, up);
KUNIT_ASSERT_EQ(test, tunnel1->paths[1]->path_length, 2);
KUNIT_EXPECT_PTR_EQ(test, tunnel1->paths[1]->hops[0].in_port, up);
KUNIT_EXPECT_PTR_EQ(test, tunnel1->paths[1]->hops[1].out_port, down);
down = &dev1->ports[10];
up = &dev2->ports[9];
tunnel2 = tb_tunnel_alloc_pci(NULL, up, down);
KUNIT_ASSERT_TRUE(test, tunnel2 != NULL);
KUNIT_EXPECT_EQ(test, tunnel2->type, (enum tb_tunnel_type)TB_TUNNEL_PCI);
KUNIT_EXPECT_PTR_EQ(test, tunnel2->src_port, down);
KUNIT_EXPECT_PTR_EQ(test, tunnel2->dst_port, up);
KUNIT_ASSERT_EQ(test, tunnel2->npaths, (size_t)2);
KUNIT_ASSERT_EQ(test, tunnel2->paths[0]->path_length, 2);
KUNIT_EXPECT_PTR_EQ(test, tunnel2->paths[0]->hops[0].in_port, down);
KUNIT_EXPECT_PTR_EQ(test, tunnel2->paths[0]->hops[1].out_port, up);
KUNIT_ASSERT_EQ(test, tunnel2->paths[1]->path_length, 2);
KUNIT_EXPECT_PTR_EQ(test, tunnel2->paths[1]->hops[0].in_port, up);
KUNIT_EXPECT_PTR_EQ(test, tunnel2->paths[1]->hops[1].out_port, down);
tb_tunnel_free(tunnel2);
tb_tunnel_free(tunnel1);
}
static void tb_test_tunnel_dp(struct kunit *test)
{
struct tb_switch *host, *dev;
struct tb_port *in, *out;
struct tb_tunnel *tunnel;
/*
* Create DP tunnel between Host and Device
*
* [Host]
* 1 |
* 1 |
* [Device]
*/
host = alloc_host(test);
dev = alloc_dev_default(test, host, 0x3, true);
in = &host->ports[5];
out = &dev->ports[13];
tunnel = tb_tunnel_alloc_dp(NULL, in, out, 0, 0);
KUNIT_ASSERT_TRUE(test, tunnel != NULL);
KUNIT_EXPECT_EQ(test, tunnel->type, (enum tb_tunnel_type)TB_TUNNEL_DP);
KUNIT_EXPECT_PTR_EQ(test, tunnel->src_port, in);
KUNIT_EXPECT_PTR_EQ(test, tunnel->dst_port, out);
KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)3);
KUNIT_ASSERT_EQ(test, tunnel->paths[0]->path_length, 2);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[0].in_port, in);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[1].out_port, out);
KUNIT_ASSERT_EQ(test, tunnel->paths[1]->path_length, 2);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[0].in_port, in);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[1].out_port, out);
KUNIT_ASSERT_EQ(test, tunnel->paths[2]->path_length, 2);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[2]->hops[0].in_port, out);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[2]->hops[1].out_port, in);
tb_tunnel_free(tunnel);
}
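The three paths asserted above are the DP video path plus the two AUX paths, one per direction, which is why paths[2] runs from out back to in.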
static void tb_test_tunnel_dp_chain(struct kunit *test)
{
struct tb_switch *host, *dev1, *dev4;
struct tb_port *in, *out;
struct tb_tunnel *tunnel;
/*
* Create DP tunnel from Host DP IN to Device #4 DP OUT.
*
* [Host]
* 1 |
* 1 |
* [Device #1]
* 3 / | 5 \ 7
* 1 / | \ 1
* [Device #2] | [Device #4]
* | 1
* [Device #3]
*/
host = alloc_host(test);
dev1 = alloc_dev_default(test, host, 0x1, true);
alloc_dev_default(test, dev1, 0x301, true);
alloc_dev_default(test, dev1, 0x501, true);
dev4 = alloc_dev_default(test, dev1, 0x701, true);
in = &host->ports[5];
out = &dev4->ports[14];
tunnel = tb_tunnel_alloc_dp(NULL, in, out, 0, 0);
KUNIT_ASSERT_TRUE(test, tunnel != NULL);
KUNIT_EXPECT_EQ(test, tunnel->type, (enum tb_tunnel_type)TB_TUNNEL_DP);
KUNIT_EXPECT_PTR_EQ(test, tunnel->src_port, in);
KUNIT_EXPECT_PTR_EQ(test, tunnel->dst_port, out);
KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)3);
KUNIT_ASSERT_EQ(test, tunnel->paths[0]->path_length, 3);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[0].in_port, in);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[2].out_port, out);
KUNIT_ASSERT_EQ(test, tunnel->paths[1]->path_length, 3);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[0].in_port, in);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[2].out_port, out);
KUNIT_ASSERT_EQ(test, tunnel->paths[2]->path_length, 3);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[2]->hops[0].in_port, out);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[2]->hops[2].out_port, in);
tb_tunnel_free(tunnel);
}
static void tb_test_tunnel_dp_tree(struct kunit *test)
{
struct tb_switch *host, *dev1, *dev2, *dev3, *dev5;
struct tb_port *in, *out;
struct tb_tunnel *tunnel;
/*
* Create DP tunnel from Device #2 DP IN to Device #5 DP OUT.
*
* [Host]
* 3 |
* 1 |
* [Device #1]
* 3 / | 5 \ 7
* 1 / | \ 1
* [Device #2] | [Device #4]
* | 1
* [Device #3]
* | 5
* | 1
* [Device #5]
*/
host = alloc_host(test);
dev1 = alloc_dev_default(test, host, 0x3, true);
dev2 = alloc_dev_with_dpin(test, dev1, 0x303, true);
dev3 = alloc_dev_default(test, dev1, 0x503, true);
alloc_dev_default(test, dev1, 0x703, true);
dev5 = alloc_dev_default(test, dev3, 0x50503, true);
in = &dev2->ports[13];
out = &dev5->ports[13];
tunnel = tb_tunnel_alloc_dp(NULL, in, out, 0, 0);
KUNIT_ASSERT_TRUE(test, tunnel != NULL);
KUNIT_EXPECT_EQ(test, tunnel->type, (enum tb_tunnel_type)TB_TUNNEL_DP);
KUNIT_EXPECT_PTR_EQ(test, tunnel->src_port, in);
KUNIT_EXPECT_PTR_EQ(test, tunnel->dst_port, out);
KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)3);
KUNIT_ASSERT_EQ(test, tunnel->paths[0]->path_length, 4);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[0].in_port, in);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[3].out_port, out);
KUNIT_ASSERT_EQ(test, tunnel->paths[1]->path_length, 4);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[0].in_port, in);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[3].out_port, out);
KUNIT_ASSERT_EQ(test, tunnel->paths[2]->path_length, 4);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[2]->hops[0].in_port, out);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[2]->hops[3].out_port, in);
tb_tunnel_free(tunnel);
}
static void tb_test_tunnel_dp_max_length(struct kunit *test)
{
struct tb_switch *host, *dev1, *dev2, *dev3, *dev4, *dev5, *dev6;
struct tb_switch *dev7, *dev8, *dev9, *dev10, *dev11, *dev12;
struct tb_port *in, *out;
struct tb_tunnel *tunnel;
/*
* Creates DP tunnel from Device #6 to Device #12.
*
* [Host]
* 1 / \ 3
* 1 / \ 1
* [Device #1] [Device #7]
* 3 | | 3
* 1 | | 1
* [Device #2] [Device #8]
* 3 | | 3
* 1 | | 1
* [Device #3] [Device #9]
* 3 | | 3
* 1 | | 1
* [Device #4] [Device #10]
* 3 | | 3
* 1 | | 1
* [Device #5] [Device #11]
* 3 | | 3
* 1 | | 1
* [Device #6] [Device #12]
*/
host = alloc_host(test);
dev1 = alloc_dev_default(test, host, 0x1, true);
dev2 = alloc_dev_default(test, dev1, 0x301, true);
dev3 = alloc_dev_default(test, dev2, 0x30301, true);
dev4 = alloc_dev_default(test, dev3, 0x3030301, true);
dev5 = alloc_dev_default(test, dev4, 0x303030301, true);
dev6 = alloc_dev_with_dpin(test, dev5, 0x30303030301, true);
dev7 = alloc_dev_default(test, host, 0x3, true);
dev8 = alloc_dev_default(test, dev7, 0x303, true);
dev9 = alloc_dev_default(test, dev8, 0x30303, true);
dev10 = alloc_dev_default(test, dev9, 0x3030303, true);
dev11 = alloc_dev_default(test, dev10, 0x303030303, true);
dev12 = alloc_dev_default(test, dev11, 0x30303030303, true);
in = &dev6->ports[13];
out = &dev12->ports[13];
tunnel = tb_tunnel_alloc_dp(NULL, in, out, 0, 0);
KUNIT_ASSERT_TRUE(test, tunnel != NULL);
KUNIT_EXPECT_EQ(test, tunnel->type, (enum tb_tunnel_type)TB_TUNNEL_DP);
KUNIT_EXPECT_PTR_EQ(test, tunnel->src_port, in);
KUNIT_EXPECT_PTR_EQ(test, tunnel->dst_port, out);
KUNIT_ASSERT_EQ(test, tunnel->npaths, (size_t)3);
KUNIT_ASSERT_EQ(test, tunnel->paths[0]->path_length, 13);
/* First hop */
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[0].in_port, in);
/* Middle */
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[6].in_port,
&host->ports[1]);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[6].out_port,
&host->ports[3]);
/* Last */
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[0]->hops[12].out_port, out);
KUNIT_ASSERT_EQ(test, tunnel->paths[1]->path_length, 13);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[0].in_port, in);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[6].in_port,
&host->ports[1]);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[6].out_port,
&host->ports[3]);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[1]->hops[12].out_port, out);
KUNIT_ASSERT_EQ(test, tunnel->paths[2]->path_length, 13);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[2]->hops[0].in_port, out);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[2]->hops[6].in_port,
&host->ports[3]);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[2]->hops[6].out_port,
&host->ports[1]);
KUNIT_EXPECT_PTR_EQ(test, tunnel->paths[2]->hops[12].out_port, in);
tb_tunnel_free(tunnel);
}
static void tb_test_tunnel_usb3(struct kunit *test)
{
struct tb_switch *host, *dev1, *dev2;
struct tb_tunnel *tunnel1, *tunnel2;
struct tb_port *down, *up;
/*
* Create USB3 tunnel between host and two devices.
*
* [Host]
* 1 |
* 1 |
* [Device #1]
* \ 7
* \ 1
* [Device #2]
*/
host = alloc_host(test);
dev1 = alloc_dev_default(test, host, 0x1, true);
dev2 = alloc_dev_default(test, dev1, 0x701, true);
down = &host->ports[12];
up = &dev1->ports[16];
tunnel1 = tb_tunnel_alloc_usb3(NULL, up, down, 0, 0);
KUNIT_ASSERT_TRUE(test, tunnel1 != NULL);
KUNIT_EXPECT_EQ(test, tunnel1->type, (enum tb_tunnel_type)TB_TUNNEL_USB3);
KUNIT_EXPECT_PTR_EQ(test, tunnel1->src_port, down);
KUNIT_EXPECT_PTR_EQ(test, tunnel1->dst_port, up);
KUNIT_ASSERT_EQ(test, tunnel1->npaths, (size_t)2);
KUNIT_ASSERT_EQ(test, tunnel1->paths[0]->path_length, 2);
KUNIT_EXPECT_PTR_EQ(test, tunnel1->paths[0]->hops[0].in_port, down);
KUNIT_EXPECT_PTR_EQ(test, tunnel1->paths[0]->hops[1].out_port, up);
KUNIT_ASSERT_EQ(test, tunnel1->paths[1]->path_length, 2);
KUNIT_EXPECT_PTR_EQ(test, tunnel1->paths[1]->hops[0].in_port, up);
KUNIT_EXPECT_PTR_EQ(test, tunnel1->paths[1]->hops[1].out_port, down);
down = &dev1->ports[17];
up = &dev2->ports[16];
tunnel2 = tb_tunnel_alloc_usb3(NULL, up, down, 0, 0);
KUNIT_ASSERT_TRUE(test, tunnel2 != NULL);
KUNIT_EXPECT_EQ(test, tunnel2->type, (enum tb_tunnel_type)TB_TUNNEL_USB3);
KUNIT_EXPECT_PTR_EQ(test, tunnel2->src_port, down);
KUNIT_EXPECT_PTR_EQ(test, tunnel2->dst_port, up);
KUNIT_ASSERT_EQ(test, tunnel2->npaths, (size_t)2);
KUNIT_ASSERT_EQ(test, tunnel2->paths[0]->path_length, 2);
KUNIT_EXPECT_PTR_EQ(test, tunnel2->paths[0]->hops[0].in_port, down);
KUNIT_EXPECT_PTR_EQ(test, tunnel2->paths[0]->hops[1].out_port, up);
KUNIT_ASSERT_EQ(test, tunnel2->paths[1]->path_length, 2);
KUNIT_EXPECT_PTR_EQ(test, tunnel2->paths[1]->hops[0].in_port, up);
KUNIT_EXPECT_PTR_EQ(test, tunnel2->paths[1]->hops[1].out_port, down);
tb_tunnel_free(tunnel2);
tb_tunnel_free(tunnel1);
}
static void tb_test_tunnel_port_on_path(struct kunit *test)
{
struct tb_switch *host, *dev1, *dev2, *dev3, *dev4, *dev5;
struct tb_port *in, *out, *port;
struct tb_tunnel *dp_tunnel;
/*
* [Host]
* 3 |
* 1 |
* [Device #1]
* 3 / | 5 \ 7
* 1 / | \ 1
* [Device #2] | [Device #4]
* | 1
* [Device #3]
* | 5
* | 1
* [Device #5]
*/
host = alloc_host(test);
dev1 = alloc_dev_default(test, host, 0x3, true);
dev2 = alloc_dev_with_dpin(test, dev1, 0x303, true);
dev3 = alloc_dev_default(test, dev1, 0x503, true);
dev4 = alloc_dev_default(test, dev1, 0x703, true);
dev5 = alloc_dev_default(test, dev3, 0x50503, true);
in = &dev2->ports[13];
out = &dev5->ports[13];
dp_tunnel = tb_tunnel_alloc_dp(NULL, in, out, 0, 0);
KUNIT_ASSERT_TRUE(test, dp_tunnel != NULL);
KUNIT_EXPECT_TRUE(test, tb_tunnel_port_on_path(dp_tunnel, in));
KUNIT_EXPECT_TRUE(test, tb_tunnel_port_on_path(dp_tunnel, out));
port = &host->ports[8];
KUNIT_EXPECT_FALSE(test, tb_tunnel_port_on_path(dp_tunnel, port));
port = &host->ports[3];
KUNIT_EXPECT_FALSE(test, tb_tunnel_port_on_path(dp_tunnel, port));
port = &dev1->ports[1];
KUNIT_EXPECT_FALSE(test, tb_tunnel_port_on_path(dp_tunnel, port));
port = &dev1->ports[3];
KUNIT_EXPECT_TRUE(test, tb_tunnel_port_on_path(dp_tunnel, port));
port = &dev1->ports[5];
KUNIT_EXPECT_TRUE(test, tb_tunnel_port_on_path(dp_tunnel, port));
port = &dev1->ports[7];
KUNIT_EXPECT_FALSE(test, tb_tunnel_port_on_path(dp_tunnel, port));
port = &dev3->ports[1];
KUNIT_EXPECT_TRUE(test, tb_tunnel_port_on_path(dp_tunnel, port));
port = &dev5->ports[1];
KUNIT_EXPECT_TRUE(test, tb_tunnel_port_on_path(dp_tunnel, port));
port = &dev4->ports[1];
KUNIT_EXPECT_FALSE(test, tb_tunnel_port_on_path(dp_tunnel, port));
tb_tunnel_free(dp_tunnel);
}
static struct kunit_case tb_test_cases[] = {
KUNIT_CASE(tb_test_path_basic),
KUNIT_CASE(tb_test_path_not_connected_walk),
KUNIT_CASE(tb_test_path_single_hop_walk),
KUNIT_CASE(tb_test_path_daisy_chain_walk),
KUNIT_CASE(tb_test_path_simple_tree_walk),
KUNIT_CASE(tb_test_path_complex_tree_walk),
KUNIT_CASE(tb_test_path_max_length_walk),
KUNIT_CASE(tb_test_path_not_connected),
KUNIT_CASE(tb_test_path_not_bonded_lane0),
KUNIT_CASE(tb_test_path_not_bonded_lane1),
KUNIT_CASE(tb_test_path_not_bonded_lane1_chain),
KUNIT_CASE(tb_test_path_not_bonded_lane1_chain_reverse),
KUNIT_CASE(tb_test_path_mixed_chain),
KUNIT_CASE(tb_test_path_mixed_chain_reverse),
KUNIT_CASE(tb_test_tunnel_pcie),
KUNIT_CASE(tb_test_tunnel_dp),
KUNIT_CASE(tb_test_tunnel_dp_chain),
KUNIT_CASE(tb_test_tunnel_dp_tree),
KUNIT_CASE(tb_test_tunnel_dp_max_length),
KUNIT_CASE(tb_test_tunnel_port_on_path),
KUNIT_CASE(tb_test_tunnel_usb3),
{ }
};
static struct kunit_suite tb_test_suite = {
.name = "thunderbolt",
.test_cases = tb_test_cases,
};
kunit_test_suite(tb_test_suite);
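To run this suite with the KUnit wrapper one would typically enable the driver's test option in a .kunitconfig and invoke the tool; roughly (config symbol names assumed from the driver's Kconfig):

	CONFIG_KUNIT=y
	CONFIG_USB4=y
	CONFIG_USB4_KUNIT_TEST=y

and then ./tools/testing/kunit/kunit.py run from the top of the kernel tree.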
...@@ -124,6 +124,7 @@ static void tb_pci_init_path(struct tb_path *path) ...@@ -124,6 +124,7 @@ static void tb_pci_init_path(struct tb_path *path)
path->drop_packages = 0; path->drop_packages = 0;
path->nfc_credits = 0; path->nfc_credits = 0;
path->hops[0].initial_credits = 7; path->hops[0].initial_credits = 7;
if (path->path_length > 1)
path->hops[1].initial_credits = path->hops[1].initial_credits =
tb_initial_credits(path->hops[1].in_port->sw); tb_initial_credits(path->hops[1].in_port->sw);
} }
...@@ -422,7 +423,7 @@ static int tb_dp_xchg_caps(struct tb_tunnel *tunnel) ...@@ -422,7 +423,7 @@ static int tb_dp_xchg_caps(struct tb_tunnel *tunnel)
u32 out_dp_cap, out_rate, out_lanes, in_dp_cap, in_rate, in_lanes, bw; u32 out_dp_cap, out_rate, out_lanes, in_dp_cap, in_rate, in_lanes, bw;
struct tb_port *out = tunnel->dst_port; struct tb_port *out = tunnel->dst_port;
struct tb_port *in = tunnel->src_port; struct tb_port *in = tunnel->src_port;
int ret; int ret, max_bw;
/* /*
* Copy DP_LOCAL_CAP register to DP_REMOTE_CAP register for * Copy DP_LOCAL_CAP register to DP_REMOTE_CAP register for
...@@ -471,10 +472,15 @@ static int tb_dp_xchg_caps(struct tb_tunnel *tunnel) ...@@ -471,10 +472,15 @@ static int tb_dp_xchg_caps(struct tb_tunnel *tunnel)
tb_port_dbg(out, "maximum supported bandwidth %u Mb/s x%u = %u Mb/s\n", tb_port_dbg(out, "maximum supported bandwidth %u Mb/s x%u = %u Mb/s\n",
out_rate, out_lanes, bw); out_rate, out_lanes, bw);
if (tunnel->max_bw && bw > tunnel->max_bw) { if (in->sw->config.depth < out->sw->config.depth)
max_bw = tunnel->max_down;
else
max_bw = tunnel->max_up;
if (max_bw && bw > max_bw) {
u32 new_rate, new_lanes, new_bw; u32 new_rate, new_lanes, new_bw;
ret = tb_dp_reduce_bandwidth(tunnel->max_bw, in_rate, in_lanes, ret = tb_dp_reduce_bandwidth(max_bw, in_rate, in_lanes,
out_rate, out_lanes, &new_rate, out_rate, out_lanes, &new_rate,
&new_lanes); &new_lanes);
if (ret) { if (ret) {
...@@ -535,7 +541,8 @@ static int tb_dp_activate(struct tb_tunnel *tunnel, bool active) ...@@ -535,7 +541,8 @@ static int tb_dp_activate(struct tb_tunnel *tunnel, bool active)
return 0; return 0;
} }
static int tb_dp_consumed_bandwidth(struct tb_tunnel *tunnel) static int tb_dp_consumed_bandwidth(struct tb_tunnel *tunnel, int *consumed_up,
int *consumed_down)
{ {
struct tb_port *in = tunnel->src_port; struct tb_port *in = tunnel->src_port;
const struct tb_switch *sw = in->sw; const struct tb_switch *sw = in->sw;
...@@ -543,7 +550,7 @@ static int tb_dp_consumed_bandwidth(struct tb_tunnel *tunnel) ...@@ -543,7 +550,7 @@ static int tb_dp_consumed_bandwidth(struct tb_tunnel *tunnel)
int ret; int ret;
if (tb_dp_is_usb4(sw)) { if (tb_dp_is_usb4(sw)) {
int timeout = 10; int timeout = 20;
/* /*
* Wait for DPRX done. Normally it should be already set * Wait for DPRX done. Normally it should be already set
...@@ -579,10 +586,20 @@ static int tb_dp_consumed_bandwidth(struct tb_tunnel *tunnel) ...@@ -579,10 +586,20 @@ static int tb_dp_consumed_bandwidth(struct tb_tunnel *tunnel)
lanes = tb_dp_cap_get_lanes(val); lanes = tb_dp_cap_get_lanes(val);
} else { } else {
/* No bandwidth management for legacy devices */ /* No bandwidth management for legacy devices */
*consumed_up = 0;
*consumed_down = 0;
return 0; return 0;
} }
return tb_dp_bandwidth(rate, lanes); if (in->sw->config.depth < tunnel->dst_port->sw->config.depth) {
*consumed_up = 0;
*consumed_down = tb_dp_bandwidth(rate, lanes);
} else {
*consumed_up = tb_dp_bandwidth(rate, lanes);
*consumed_down = 0;
}
return 0;
} }
static void tb_dp_init_aux_path(struct tb_path *path) static void tb_dp_init_aux_path(struct tb_path *path)
...@@ -708,7 +725,10 @@ struct tb_tunnel *tb_tunnel_discover_dp(struct tb *tb, struct tb_port *in) ...@@ -708,7 +725,10 @@ struct tb_tunnel *tb_tunnel_discover_dp(struct tb *tb, struct tb_port *in)
* @tb: Pointer to the domain structure * @tb: Pointer to the domain structure
* @in: DP in adapter port * @in: DP in adapter port
* @out: DP out adapter port * @out: DP out adapter port
* @max_bw: Maximum available bandwidth for the DP tunnel (%0 if not limited) * @max_up: Maximum available upstream bandwidth for the DP tunnel (%0
* if not limited)
* @max_down: Maximum available downstream bandwidth for the DP tunnel
* (%0 if not limited)
* *
* Allocates a tunnel between @in and @out that is capable of tunneling * Allocates a tunnel between @in and @out that is capable of tunneling
* Display Port traffic. * Display Port traffic.
...@@ -716,7 +736,8 @@ struct tb_tunnel *tb_tunnel_discover_dp(struct tb *tb, struct tb_port *in) ...@@ -716,7 +736,8 @@ struct tb_tunnel *tb_tunnel_discover_dp(struct tb *tb, struct tb_port *in)
* Return: Returns a tb_tunnel on success or NULL on failure. * Return: Returns a tb_tunnel on success or NULL on failure.
*/ */
struct tb_tunnel *tb_tunnel_alloc_dp(struct tb *tb, struct tb_port *in, struct tb_tunnel *tb_tunnel_alloc_dp(struct tb *tb, struct tb_port *in,
struct tb_port *out, int max_bw) struct tb_port *out, int max_up,
int max_down)
{ {
struct tb_tunnel *tunnel; struct tb_tunnel *tunnel;
struct tb_path **paths; struct tb_path **paths;
...@@ -734,7 +755,8 @@ struct tb_tunnel *tb_tunnel_alloc_dp(struct tb *tb, struct tb_port *in, ...@@ -734,7 +755,8 @@ struct tb_tunnel *tb_tunnel_alloc_dp(struct tb *tb, struct tb_port *in,
tunnel->consumed_bandwidth = tb_dp_consumed_bandwidth; tunnel->consumed_bandwidth = tb_dp_consumed_bandwidth;
tunnel->src_port = in; tunnel->src_port = in;
tunnel->dst_port = out; tunnel->dst_port = out;
tunnel->max_bw = max_bw; tunnel->max_up = max_up;
tunnel->max_down = max_down;
paths = tunnel->paths; paths = tunnel->paths;
...@@ -854,6 +876,33 @@ struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi, ...@@ -854,6 +876,33 @@ struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi,
return tunnel; return tunnel;
} }
static int tb_usb3_max_link_rate(struct tb_port *up, struct tb_port *down)
{
int ret, up_max_rate, down_max_rate;
ret = usb4_usb3_port_max_link_rate(up);
if (ret < 0)
return ret;
up_max_rate = ret;
ret = usb4_usb3_port_max_link_rate(down);
if (ret < 0)
return ret;
down_max_rate = ret;
return min(up_max_rate, down_max_rate);
}
static int tb_usb3_init(struct tb_tunnel *tunnel)
{
tb_tunnel_dbg(tunnel, "allocating initial bandwidth %d/%d Mb/s\n",
tunnel->allocated_up, tunnel->allocated_down);
return usb4_usb3_port_allocate_bandwidth(tunnel->src_port,
&tunnel->allocated_up,
&tunnel->allocated_down);
}
static int tb_usb3_activate(struct tb_tunnel *tunnel, bool activate) static int tb_usb3_activate(struct tb_tunnel *tunnel, bool activate)
{ {
int res; int res;
...@@ -868,6 +917,86 @@ static int tb_usb3_activate(struct tb_tunnel *tunnel, bool activate) ...@@ -868,6 +917,86 @@ static int tb_usb3_activate(struct tb_tunnel *tunnel, bool activate)
return 0; return 0;
} }
static int tb_usb3_consumed_bandwidth(struct tb_tunnel *tunnel,
int *consumed_up, int *consumed_down)
{
/*
* PCIe tunneling affects the USB3 bandwidth so take that
* into account here.
*/
*consumed_up = tunnel->allocated_up * (3 + 1) / 3;
*consumed_down = tunnel->allocated_down * (3 + 1) / 3;
return 0;
}
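A quick worked example of the (3 + 1) / 3 scaling above (illustrative numbers): with 900 Mb/s allocated upstream and 3000 Mb/s downstream, the reported consumption becomes 900 * 4 / 3 = 1200 Mb/s and 3000 * 4 / 3 = 4000 Mb/s, i.e. a third is added on top of the USB3 allocation to leave room for PCIe traffic sharing the link.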
static int tb_usb3_release_unused_bandwidth(struct tb_tunnel *tunnel)
{
int ret;
ret = usb4_usb3_port_release_bandwidth(tunnel->src_port,
&tunnel->allocated_up,
&tunnel->allocated_down);
if (ret)
return ret;
tb_tunnel_dbg(tunnel, "decreased bandwidth allocation to %d/%d Mb/s\n",
tunnel->allocated_up, tunnel->allocated_down);
return 0;
}
static void tb_usb3_reclaim_available_bandwidth(struct tb_tunnel *tunnel,
int *available_up,
int *available_down)
{
int ret, max_rate, allocate_up, allocate_down;
ret = usb4_usb3_port_actual_link_rate(tunnel->src_port);
if (ret <= 0) {
tb_tunnel_warn(tunnel, "tunnel is not up\n");
return;
}
/*
* 90% of the max rate can be allocated for isochronous
* transfers.
*/
max_rate = ret * 90 / 100;
/* No need to reclaim if already at maximum */
if (tunnel->allocated_up >= max_rate &&
tunnel->allocated_down >= max_rate)
return;
/* Don't go lower than what is already allocated */
allocate_up = min(max_rate, *available_up);
if (allocate_up < tunnel->allocated_up)
allocate_up = tunnel->allocated_up;
allocate_down = min(max_rate, *available_down);
if (allocate_down < tunnel->allocated_down)
allocate_down = tunnel->allocated_down;
/* If no changes no need to do more */
if (allocate_up == tunnel->allocated_up &&
allocate_down == tunnel->allocated_down)
return;
ret = usb4_usb3_port_allocate_bandwidth(tunnel->src_port, &allocate_up,
&allocate_down);
if (ret) {
tb_tunnel_info(tunnel, "failed to allocate bandwidth\n");
return;
}
tunnel->allocated_up = allocate_up;
*available_up -= tunnel->allocated_up;
tunnel->allocated_down = allocate_down;
*available_down -= tunnel->allocated_down;
tb_tunnel_dbg(tunnel, "increased bandwidth allocation to %d/%d Mb/s\n",
tunnel->allocated_up, tunnel->allocated_down);
}
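Worked example of the reclaim logic above (illustrative numbers): with an actual link rate of 10000 Mb/s the cap is max_rate = 9000 Mb/s. If the tunnel currently has 0/0 Mb/s allocated and the caller reports 4000 Mb/s available in each direction, the tunnel asks for min(9000, 4000) = 4000 Mb/s both ways; on success allocated_up/allocated_down become 4000 and *available_up/*available_down drop to 0.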
static void tb_usb3_init_path(struct tb_path *path) static void tb_usb3_init_path(struct tb_path *path)
{ {
path->egress_fc_enable = TB_PATH_SOURCE | TB_PATH_INTERNAL; path->egress_fc_enable = TB_PATH_SOURCE | TB_PATH_INTERNAL;
...@@ -879,6 +1008,7 @@ static void tb_usb3_init_path(struct tb_path *path) ...@@ -879,6 +1008,7 @@ static void tb_usb3_init_path(struct tb_path *path)
path->drop_packages = 0; path->drop_packages = 0;
path->nfc_credits = 0; path->nfc_credits = 0;
path->hops[0].initial_credits = 7; path->hops[0].initial_credits = 7;
if (path->path_length > 1)
path->hops[1].initial_credits = path->hops[1].initial_credits =
tb_initial_credits(path->hops[1].in_port->sw); tb_initial_credits(path->hops[1].in_port->sw);
} }
...@@ -947,6 +1077,29 @@ struct tb_tunnel *tb_tunnel_discover_usb3(struct tb *tb, struct tb_port *down) ...@@ -947,6 +1077,29 @@ struct tb_tunnel *tb_tunnel_discover_usb3(struct tb *tb, struct tb_port *down)
goto err_deactivate; goto err_deactivate;
} }
if (!tb_route(down->sw)) {
int ret;
/*
* Read the initial bandwidth allocation for the first
* hop tunnel.
*/
ret = usb4_usb3_port_allocated_bandwidth(down,
&tunnel->allocated_up, &tunnel->allocated_down);
if (ret)
goto err_deactivate;
tb_tunnel_dbg(tunnel, "currently allocated bandwidth %d/%d Mb/s\n",
tunnel->allocated_up, tunnel->allocated_down);
tunnel->init = tb_usb3_init;
tunnel->consumed_bandwidth = tb_usb3_consumed_bandwidth;
tunnel->release_unused_bandwidth =
tb_usb3_release_unused_bandwidth;
tunnel->reclaim_available_bandwidth =
tb_usb3_reclaim_available_bandwidth;
}
tb_tunnel_dbg(tunnel, "discovered\n"); tb_tunnel_dbg(tunnel, "discovered\n");
return tunnel; return tunnel;
...@@ -963,6 +1116,10 @@ struct tb_tunnel *tb_tunnel_discover_usb3(struct tb *tb, struct tb_port *down) ...@@ -963,6 +1116,10 @@ struct tb_tunnel *tb_tunnel_discover_usb3(struct tb *tb, struct tb_port *down)
* @tb: Pointer to the domain structure * @tb: Pointer to the domain structure
* @up: USB3 upstream adapter port * @up: USB3 upstream adapter port
* @down: USB3 downstream adapter port * @down: USB3 downstream adapter port
* @max_up: Maximum available upstream bandwidth for the USB3 tunnel (%0
* if not limited).
* @max_down: Maximum available downstream bandwidth for the USB3 tunnel
* (%0 if not limited).
* *
* Allocate an USB3 tunnel. The ports must be of type @TB_TYPE_USB3_UP and * Allocate an USB3 tunnel. The ports must be of type @TB_TYPE_USB3_UP and
* @TB_TYPE_USB3_DOWN. * @TB_TYPE_USB3_DOWN.
...@@ -970,10 +1127,32 @@ struct tb_tunnel *tb_tunnel_discover_usb3(struct tb *tb, struct tb_port *down) ...@@ -970,10 +1127,32 @@ struct tb_tunnel *tb_tunnel_discover_usb3(struct tb *tb, struct tb_port *down)
* Return: Returns a tb_tunnel on success or %NULL on failure. * Return: Returns a tb_tunnel on success or %NULL on failure.
*/ */
struct tb_tunnel *tb_tunnel_alloc_usb3(struct tb *tb, struct tb_port *up, struct tb_tunnel *tb_tunnel_alloc_usb3(struct tb *tb, struct tb_port *up,
struct tb_port *down) struct tb_port *down, int max_up,
int max_down)
{ {
struct tb_tunnel *tunnel; struct tb_tunnel *tunnel;
struct tb_path *path; struct tb_path *path;
int max_rate = 0;
/*
* Check that we have enough bandwidth available for the new
* USB3 tunnel.
*/
if (max_up > 0 || max_down > 0) {
max_rate = tb_usb3_max_link_rate(down, up);
if (max_rate < 0)
return NULL;
/* Only 90% can be allocated for USB3 isochronous transfers */
max_rate = max_rate * 90 / 100;
tb_port_dbg(up, "required bandwidth for USB3 tunnel %d Mb/s\n",
max_rate);
if (max_rate > max_up || max_rate > max_down) {
tb_port_warn(up, "not enough bandwidth for USB3 tunnel\n");
return NULL;
}
}
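	/*
	 * Illustrative numbers for the check above: a 20 Gb/s USB3 link
	 * needs max_rate = 20000 * 90 / 100 = 18000 Mb/s. If the caller
	 * can only offer max_up = max_down = 10000 Mb/s, the check above
	 * rejects the tunnel and NULL is returned.
	 */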
tunnel = tb_tunnel_alloc(tb, 2, TB_TUNNEL_USB3); tunnel = tb_tunnel_alloc(tb, 2, TB_TUNNEL_USB3);
if (!tunnel) if (!tunnel)
...@@ -982,6 +1161,8 @@ struct tb_tunnel *tb_tunnel_alloc_usb3(struct tb *tb, struct tb_port *up, ...@@ -982,6 +1161,8 @@ struct tb_tunnel *tb_tunnel_alloc_usb3(struct tb *tb, struct tb_port *up,
tunnel->activate = tb_usb3_activate; tunnel->activate = tb_usb3_activate;
tunnel->src_port = down; tunnel->src_port = down;
tunnel->dst_port = up; tunnel->dst_port = up;
tunnel->max_up = max_up;
tunnel->max_down = max_down;
path = tb_path_alloc(tb, down, TB_USB3_HOPID, up, TB_USB3_HOPID, 0, path = tb_path_alloc(tb, down, TB_USB3_HOPID, up, TB_USB3_HOPID, 0,
"USB3 Down"); "USB3 Down");
...@@ -1001,6 +1182,18 @@ struct tb_tunnel *tb_tunnel_alloc_usb3(struct tb *tb, struct tb_port *up, ...@@ -1001,6 +1182,18 @@ struct tb_tunnel *tb_tunnel_alloc_usb3(struct tb *tb, struct tb_port *up,
tb_usb3_init_path(path); tb_usb3_init_path(path);
tunnel->paths[TB_USB3_PATH_UP] = path; tunnel->paths[TB_USB3_PATH_UP] = path;
if (!tb_route(down->sw)) {
tunnel->allocated_up = max_rate;
tunnel->allocated_down = max_rate;
tunnel->init = tb_usb3_init;
tunnel->consumed_bandwidth = tb_usb3_consumed_bandwidth;
tunnel->release_unused_bandwidth =
tb_usb3_release_unused_bandwidth;
tunnel->reclaim_available_bandwidth =
tb_usb3_reclaim_available_bandwidth;
}
return tunnel; return tunnel;
} }
...@@ -1133,22 +1326,23 @@ void tb_tunnel_deactivate(struct tb_tunnel *tunnel) ...@@ -1133,22 +1326,23 @@ void tb_tunnel_deactivate(struct tb_tunnel *tunnel)
} }
/** /**
* tb_tunnel_switch_on_path() - Does the tunnel go through switch * tb_tunnel_port_on_path() - Does the tunnel go through port
* @tunnel: Tunnel to check * @tunnel: Tunnel to check
* @sw: Switch to check * @port: Port to check
* *
* Returns true if @tunnel goes through @sw (direction does not matter), * Returns true if @tunnel goes through @port (direction does not matter),
* false otherwise. * false otherwise.
*/ */
bool tb_tunnel_switch_on_path(const struct tb_tunnel *tunnel, bool tb_tunnel_port_on_path(const struct tb_tunnel *tunnel,
const struct tb_switch *sw) const struct tb_port *port)
{ {
int i; int i;
for (i = 0; i < tunnel->npaths; i++) { for (i = 0; i < tunnel->npaths; i++) {
if (!tunnel->paths[i]) if (!tunnel->paths[i])
continue; continue;
if (tb_path_switch_on_path(tunnel->paths[i], sw))
if (tb_path_port_on_path(tunnel->paths[i], port))
return true; return true;
} }
...@@ -1172,21 +1366,87 @@ static bool tb_tunnel_is_active(const struct tb_tunnel *tunnel) ...@@ -1172,21 +1366,87 @@ static bool tb_tunnel_is_active(const struct tb_tunnel *tunnel)
/** /**
* tb_tunnel_consumed_bandwidth() - Return bandwidth consumed by the tunnel * tb_tunnel_consumed_bandwidth() - Return bandwidth consumed by the tunnel
* @tunnel: Tunnel to check * @tunnel: Tunnel to check
* @consumed_up: Consumed bandwidth in Mb/s from @dst_port to @src_port.
* Can be %NULL.
* @consumed_down: Consumed bandwidth in Mb/s from @src_port to @dst_port.
* Can be %NULL.
* *
* Returns bandwidth currently consumed by @tunnel and %0 if the @tunnel * Stores the amount of isochronous bandwidth @tunnel consumes in
* is not active or does consume bandwidth. * @consumed_up and @consumed_down. In case of success returns %0,
* negative errno otherwise.
*/ */
int tb_tunnel_consumed_bandwidth(struct tb_tunnel *tunnel) int tb_tunnel_consumed_bandwidth(struct tb_tunnel *tunnel, int *consumed_up,
int *consumed_down)
{ {
int up_bw = 0, down_bw = 0;
if (!tb_tunnel_is_active(tunnel)) if (!tb_tunnel_is_active(tunnel))
return 0; goto out;
if (tunnel->consumed_bandwidth) { if (tunnel->consumed_bandwidth) {
int ret = tunnel->consumed_bandwidth(tunnel); int ret;
tb_tunnel_dbg(tunnel, "consumed bandwidth %d Mb/s\n", ret); ret = tunnel->consumed_bandwidth(tunnel, &up_bw, &down_bw);
if (ret)
return ret; return ret;
tb_tunnel_dbg(tunnel, "consumed bandwidth %d/%d Mb/s\n", up_bw,
down_bw);
} }
out:
if (consumed_up)
*consumed_up = up_bw;
if (consumed_down)
*consumed_down = down_bw;
return 0;
}
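As a usage sketch (hypothetical helper, not part of this series), a caller that keeps its tunnels on a list linked through the @list member could total the consumption in both directions like this:

static int tb_total_consumed_bandwidth(struct list_head *tunnels,
				       int *total_up, int *total_down)
{
	struct tb_tunnel *tunnel;

	*total_up = *total_down = 0;

	list_for_each_entry(tunnel, tunnels, list) {
		int up, down, ret;

		ret = tb_tunnel_consumed_bandwidth(tunnel, &up, &down);
		if (ret)
			return ret;

		*total_up += up;
		*total_down += down;
	}

	return 0;
}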
/**
* tb_tunnel_release_unused_bandwidth() - Release unused bandwidth
* @tunnel: Tunnel whose unused bandwidth to release
*
* If the tunnel supports dynamic bandwidth management (USB3 tunnels at
* the moment) this function makes it release all the unused bandwidth.
*
* Returns %0 in case of success and negative errno otherwise.
*/
int tb_tunnel_release_unused_bandwidth(struct tb_tunnel *tunnel)
{
if (!tb_tunnel_is_active(tunnel))
return 0; return 0;
if (tunnel->release_unused_bandwidth) {
int ret;
ret = tunnel->release_unused_bandwidth(tunnel);
if (ret)
return ret;
}
return 0;
}
/**
* tb_tunnel_reclaim_available_bandwidth() - Reclaim available bandwidth
* @tunnel: Tunnel reclaiming available bandwidth
* @available_up: Available upstream bandwidth (in Mb/s)
* @available_down: Available downstream bandwidth (in Mb/s)
*
* Reclaims bandwidth from @available_up and @available_down and updates
* the variables accordingly (e.g. decreases both according to what was
* reclaimed by the tunnel). If nothing was reclaimed the values are
* kept as is.
*/
void tb_tunnel_reclaim_available_bandwidth(struct tb_tunnel *tunnel,
int *available_up,
int *available_down)
{
if (!tb_tunnel_is_active(tunnel))
return;
if (tunnel->reclaim_available_bandwidth)
tunnel->reclaim_available_bandwidth(tunnel, available_up,
available_down);
} }
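The two bandwidth helpers above are meant to be used as a pair; a hypothetical rebalancing pass would first release what every tunnel no longer needs, recompute what is available on the link, and only then let the tunnels grow back into the remainder:

static void tb_rebalance_tunnels(struct list_head *tunnels,
				 int available_up, int available_down)
{
	struct tb_tunnel *tunnel;

	list_for_each_entry(tunnel, tunnels, list)
		tb_tunnel_release_unused_bandwidth(tunnel);

	/* available_up/available_down would normally be recomputed here */

	list_for_each_entry(tunnel, tunnels, list)
		tb_tunnel_reclaim_available_bandwidth(tunnel, &available_up,
						      &available_down);
}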
...@@ -29,10 +29,16 @@ enum tb_tunnel_type { ...@@ -29,10 +29,16 @@ enum tb_tunnel_type {
* @init: Optional tunnel specific initialization * @init: Optional tunnel specific initialization
* @activate: Optional tunnel specific activation/deactivation * @activate: Optional tunnel specific activation/deactivation
* @consumed_bandwidth: Return how much bandwidth the tunnel consumes * @consumed_bandwidth: Return how much bandwidth the tunnel consumes
* @release_unused_bandwidth: Release all unused bandwidth
* @reclaim_available_bandwidth: Reclaim back available bandwidth
* @list: Tunnels are linked using this field * @list: Tunnels are linked using this field
* @type: Type of the tunnel * @type: Type of the tunnel
* @max_bw: Maximum bandwidth (Mb/s) available for the tunnel (only for DP). * @max_up: Maximum upstream bandwidth (Mb/s) available for the tunnel.
* Only set if the bandwidth needs to be limited. * Only set if the bandwidth needs to be limited.
* @max_down: Maximum downstream bandwidth (Mb/s) available for the tunnel.
* Only set if the bandwidth needs to be limited.
* @allocated_up: Allocated upstream bandwidth (only for USB3)
* @allocated_down: Allocated downstream bandwidth (only for USB3)
*/ */
struct tb_tunnel { struct tb_tunnel {
struct tb *tb; struct tb *tb;
...@@ -42,10 +48,18 @@ struct tb_tunnel { ...@@ -42,10 +48,18 @@ struct tb_tunnel {
size_t npaths; size_t npaths;
int (*init)(struct tb_tunnel *tunnel); int (*init)(struct tb_tunnel *tunnel);
int (*activate)(struct tb_tunnel *tunnel, bool activate); int (*activate)(struct tb_tunnel *tunnel, bool activate);
int (*consumed_bandwidth)(struct tb_tunnel *tunnel); int (*consumed_bandwidth)(struct tb_tunnel *tunnel, int *consumed_up,
int *consumed_down);
int (*release_unused_bandwidth)(struct tb_tunnel *tunnel);
void (*reclaim_available_bandwidth)(struct tb_tunnel *tunnel,
int *available_up,
int *available_down);
struct list_head list; struct list_head list;
enum tb_tunnel_type type; enum tb_tunnel_type type;
unsigned int max_bw; int max_up;
int max_down;
int allocated_up;
int allocated_down;
}; };
struct tb_tunnel *tb_tunnel_discover_pci(struct tb *tb, struct tb_port *down); struct tb_tunnel *tb_tunnel_discover_pci(struct tb *tb, struct tb_port *down);
...@@ -53,23 +67,30 @@ struct tb_tunnel *tb_tunnel_alloc_pci(struct tb *tb, struct tb_port *up, ...@@ -53,23 +67,30 @@ struct tb_tunnel *tb_tunnel_alloc_pci(struct tb *tb, struct tb_port *up,
struct tb_port *down); struct tb_port *down);
struct tb_tunnel *tb_tunnel_discover_dp(struct tb *tb, struct tb_port *in); struct tb_tunnel *tb_tunnel_discover_dp(struct tb *tb, struct tb_port *in);
struct tb_tunnel *tb_tunnel_alloc_dp(struct tb *tb, struct tb_port *in, struct tb_tunnel *tb_tunnel_alloc_dp(struct tb *tb, struct tb_port *in,
struct tb_port *out, int max_bw); struct tb_port *out, int max_up,
int max_down);
struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi, struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi,
struct tb_port *dst, int transmit_ring, struct tb_port *dst, int transmit_ring,
int transmit_path, int receive_ring, int transmit_path, int receive_ring,
int receive_path); int receive_path);
struct tb_tunnel *tb_tunnel_discover_usb3(struct tb *tb, struct tb_port *down); struct tb_tunnel *tb_tunnel_discover_usb3(struct tb *tb, struct tb_port *down);
struct tb_tunnel *tb_tunnel_alloc_usb3(struct tb *tb, struct tb_port *up, struct tb_tunnel *tb_tunnel_alloc_usb3(struct tb *tb, struct tb_port *up,
struct tb_port *down); struct tb_port *down, int max_up,
int max_down);
void tb_tunnel_free(struct tb_tunnel *tunnel); void tb_tunnel_free(struct tb_tunnel *tunnel);
int tb_tunnel_activate(struct tb_tunnel *tunnel); int tb_tunnel_activate(struct tb_tunnel *tunnel);
int tb_tunnel_restart(struct tb_tunnel *tunnel); int tb_tunnel_restart(struct tb_tunnel *tunnel);
void tb_tunnel_deactivate(struct tb_tunnel *tunnel); void tb_tunnel_deactivate(struct tb_tunnel *tunnel);
bool tb_tunnel_is_invalid(struct tb_tunnel *tunnel); bool tb_tunnel_is_invalid(struct tb_tunnel *tunnel);
bool tb_tunnel_switch_on_path(const struct tb_tunnel *tunnel, bool tb_tunnel_port_on_path(const struct tb_tunnel *tunnel,
const struct tb_switch *sw); const struct tb_port *port);
int tb_tunnel_consumed_bandwidth(struct tb_tunnel *tunnel); int tb_tunnel_consumed_bandwidth(struct tb_tunnel *tunnel, int *consumed_up,
int *consumed_down);
int tb_tunnel_release_unused_bandwidth(struct tb_tunnel *tunnel);
void tb_tunnel_reclaim_available_bandwidth(struct tb_tunnel *tunnel,
int *available_up,
int *available_down);
static inline bool tb_tunnel_is_pci(const struct tb_tunnel *tunnel) static inline bool tb_tunnel_is_pci(const struct tb_tunnel *tunnel)
{ {
......
...@@ -10,6 +10,7 @@ ...@@ -10,6 +10,7 @@
#include <linux/delay.h> #include <linux/delay.h>
#include <linux/ktime.h> #include <linux/ktime.h>
#include "sb_regs.h"
#include "tb.h" #include "tb.h"
#define USB4_DATA_DWORDS 16 #define USB4_DATA_DWORDS 16
...@@ -27,6 +28,12 @@ enum usb4_switch_op { ...@@ -27,6 +28,12 @@ enum usb4_switch_op {
USB4_SWITCH_OP_NVM_SECTOR_SIZE = 0x25, USB4_SWITCH_OP_NVM_SECTOR_SIZE = 0x25,
}; };
enum usb4_sb_target {
USB4_SB_TARGET_ROUTER,
USB4_SB_TARGET_PARTNER,
USB4_SB_TARGET_RETIMER,
};
#define USB4_NVM_READ_OFFSET_MASK GENMASK(23, 2) #define USB4_NVM_READ_OFFSET_MASK GENMASK(23, 2)
#define USB4_NVM_READ_OFFSET_SHIFT 2 #define USB4_NVM_READ_OFFSET_SHIFT 2
#define USB4_NVM_READ_LENGTH_MASK GENMASK(27, 24) #define USB4_NVM_READ_LENGTH_MASK GENMASK(27, 24)
...@@ -42,8 +49,8 @@ enum usb4_switch_op { ...@@ -42,8 +49,8 @@ enum usb4_switch_op {
#define USB4_NVM_SECTOR_SIZE_MASK GENMASK(23, 0) #define USB4_NVM_SECTOR_SIZE_MASK GENMASK(23, 0)
typedef int (*read_block_fn)(struct tb_switch *, unsigned int, void *, size_t); typedef int (*read_block_fn)(void *, unsigned int, void *, size_t);
typedef int (*write_block_fn)(struct tb_switch *, const void *, size_t); typedef int (*write_block_fn)(void *, const void *, size_t);
static int usb4_switch_wait_for_bit(struct tb_switch *sw, u32 offset, u32 bit, static int usb4_switch_wait_for_bit(struct tb_switch *sw, u32 offset, u32 bit,
u32 value, int timeout_msec) u32 value, int timeout_msec)
...@@ -95,8 +102,8 @@ static int usb4_switch_op_write_metadata(struct tb_switch *sw, u32 metadata) ...@@ -95,8 +102,8 @@ static int usb4_switch_op_write_metadata(struct tb_switch *sw, u32 metadata)
return tb_sw_write(sw, &metadata, TB_CFG_SWITCH, ROUTER_CS_25, 1); return tb_sw_write(sw, &metadata, TB_CFG_SWITCH, ROUTER_CS_25, 1);
} }
static int usb4_switch_do_read_data(struct tb_switch *sw, u16 address, static int usb4_do_read_data(u16 address, void *buf, size_t size,
void *buf, size_t size, read_block_fn read_block) read_block_fn read_block, void *read_block_data)
{ {
unsigned int retries = USB4_DATA_RETRIES; unsigned int retries = USB4_DATA_RETRIES;
unsigned int offset; unsigned int offset;
...@@ -113,13 +120,10 @@ static int usb4_switch_do_read_data(struct tb_switch *sw, u16 address, ...@@ -113,13 +120,10 @@ static int usb4_switch_do_read_data(struct tb_switch *sw, u16 address,
dwaddress = address / 4; dwaddress = address / 4;
dwords = ALIGN(nbytes, 4) / 4; dwords = ALIGN(nbytes, 4) / 4;
ret = read_block(sw, dwaddress, data, dwords); ret = read_block(read_block_data, dwaddress, data, dwords);
if (ret) { if (ret) {
if (ret == -ETIMEDOUT) { if (ret != -ENODEV && retries--)
if (retries--)
continue; continue;
ret = -EIO;
}
return ret; return ret;
} }
...@@ -133,8 +137,8 @@ static int usb4_switch_do_read_data(struct tb_switch *sw, u16 address, ...@@ -133,8 +137,8 @@ static int usb4_switch_do_read_data(struct tb_switch *sw, u16 address,
return 0; return 0;
} }
static int usb4_switch_do_write_data(struct tb_switch *sw, u16 address, static int usb4_do_write_data(unsigned int address, const void *buf, size_t size,
const void *buf, size_t size, write_block_fn write_next_block) write_block_fn write_next_block, void *write_block_data)
{ {
unsigned int retries = USB4_DATA_RETRIES; unsigned int retries = USB4_DATA_RETRIES;
unsigned int offset; unsigned int offset;
...@@ -149,7 +153,7 @@ static int usb4_switch_do_write_data(struct tb_switch *sw, u16 address, ...@@ -149,7 +153,7 @@ static int usb4_switch_do_write_data(struct tb_switch *sw, u16 address,
memcpy(data + offset, buf, nbytes); memcpy(data + offset, buf, nbytes);
ret = write_next_block(sw, data, nbytes / 4); ret = write_next_block(write_block_data, data, nbytes / 4);
if (ret) { if (ret) {
if (ret == -ETIMEDOUT) { if (ret == -ETIMEDOUT) {
if (retries--) if (retries--)
...@@ -192,6 +196,20 @@ static int usb4_switch_op(struct tb_switch *sw, u16 opcode, u8 *status) ...@@ -192,6 +196,20 @@ static int usb4_switch_op(struct tb_switch *sw, u16 opcode, u8 *status)
return 0; return 0;
} }
static bool link_is_usb4(struct tb_port *port)
{
u32 val;
if (!port->cap_usb4)
return false;
if (tb_port_read(port, &val, TB_CFG_PORT,
port->cap_usb4 + PORT_CS_18, 1))
return false;
return !(val & PORT_CS_18_TCM);
}
/** /**
* usb4_switch_setup() - Additional setup for USB4 device * usb4_switch_setup() - Additional setup for USB4 device
* @sw: USB4 router to setup * @sw: USB4 router to setup
...@@ -205,6 +223,7 @@ static int usb4_switch_op(struct tb_switch *sw, u16 opcode, u8 *status) ...@@ -205,6 +223,7 @@ static int usb4_switch_op(struct tb_switch *sw, u16 opcode, u8 *status)
*/ */
int usb4_switch_setup(struct tb_switch *sw) int usb4_switch_setup(struct tb_switch *sw)
{ {
struct tb_port *downstream_port;
struct tb_switch *parent; struct tb_switch *parent;
bool tbt3, xhci; bool tbt3, xhci;
u32 val = 0; u32 val = 0;
...@@ -217,6 +236,11 @@ int usb4_switch_setup(struct tb_switch *sw) ...@@ -217,6 +236,11 @@ int usb4_switch_setup(struct tb_switch *sw)
if (ret) if (ret)
return ret; return ret;
parent = tb_switch_parent(sw);
downstream_port = tb_port_at(tb_route(sw), parent);
sw->link_usb4 = link_is_usb4(downstream_port);
tb_sw_dbg(sw, "link: %s\n", sw->link_usb4 ? "USB4" : "TBT3");
xhci = val & ROUTER_CS_6_HCI; xhci = val & ROUTER_CS_6_HCI;
tbt3 = !(val & ROUTER_CS_6_TNS); tbt3 = !(val & ROUTER_CS_6_TNS);
...@@ -227,9 +251,7 @@ int usb4_switch_setup(struct tb_switch *sw) ...@@ -227,9 +251,7 @@ int usb4_switch_setup(struct tb_switch *sw)
if (ret) if (ret)
return ret; return ret;
parent = tb_switch_parent(sw); if (sw->link_usb4 && tb_switch_find_port(parent, TB_TYPE_USB3_DOWN)) {
if (tb_switch_find_port(parent, TB_TYPE_USB3_DOWN)) {
val |= ROUTER_CS_5_UTO; val |= ROUTER_CS_5_UTO;
xhci = false; xhci = false;
} }
...@@ -271,10 +293,11 @@ int usb4_switch_read_uid(struct tb_switch *sw, u64 *uid) ...@@ -271,10 +293,11 @@ int usb4_switch_read_uid(struct tb_switch *sw, u64 *uid)
return tb_sw_read(sw, uid, TB_CFG_SWITCH, ROUTER_CS_7, 2); return tb_sw_read(sw, uid, TB_CFG_SWITCH, ROUTER_CS_7, 2);
} }
static int usb4_switch_drom_read_block(struct tb_switch *sw, static int usb4_switch_drom_read_block(void *data,
unsigned int dwaddress, void *buf, unsigned int dwaddress, void *buf,
size_t dwords) size_t dwords)
{ {
struct tb_switch *sw = data;
u8 status = 0; u8 status = 0;
u32 metadata; u32 metadata;
int ret; int ret;
...@@ -311,8 +334,8 @@ static int usb4_switch_drom_read_block(struct tb_switch *sw, ...@@ -311,8 +334,8 @@ static int usb4_switch_drom_read_block(struct tb_switch *sw,
int usb4_switch_drom_read(struct tb_switch *sw, unsigned int address, void *buf, int usb4_switch_drom_read(struct tb_switch *sw, unsigned int address, void *buf,
size_t size) size_t size)
{ {
return usb4_switch_do_read_data(sw, address, buf, size, return usb4_do_read_data(address, buf, size,
usb4_switch_drom_read_block); usb4_switch_drom_read_block, sw);
} }
static int usb4_set_port_configured(struct tb_port *port, bool configured) static int usb4_set_port_configured(struct tb_port *port, bool configured)
...@@ -445,9 +468,10 @@ int usb4_switch_nvm_sector_size(struct tb_switch *sw) ...@@ -445,9 +468,10 @@ int usb4_switch_nvm_sector_size(struct tb_switch *sw)
return metadata & USB4_NVM_SECTOR_SIZE_MASK; return metadata & USB4_NVM_SECTOR_SIZE_MASK;
} }
static int usb4_switch_nvm_read_block(struct tb_switch *sw, static int usb4_switch_nvm_read_block(void *data,
unsigned int dwaddress, void *buf, size_t dwords) unsigned int dwaddress, void *buf, size_t dwords)
{ {
struct tb_switch *sw = data;
u8 status = 0; u8 status = 0;
u32 metadata; u32 metadata;
int ret; int ret;
...@@ -484,8 +508,8 @@ static int usb4_switch_nvm_read_block(struct tb_switch *sw, ...@@ -484,8 +508,8 @@ static int usb4_switch_nvm_read_block(struct tb_switch *sw,
int usb4_switch_nvm_read(struct tb_switch *sw, unsigned int address, void *buf, int usb4_switch_nvm_read(struct tb_switch *sw, unsigned int address, void *buf,
size_t size) size_t size)
{ {
return usb4_switch_do_read_data(sw, address, buf, size, return usb4_do_read_data(address, buf, size,
usb4_switch_nvm_read_block); usb4_switch_nvm_read_block, sw);
} }
static int usb4_switch_nvm_set_offset(struct tb_switch *sw, static int usb4_switch_nvm_set_offset(struct tb_switch *sw,
...@@ -510,9 +534,10 @@ static int usb4_switch_nvm_set_offset(struct tb_switch *sw, ...@@ -510,9 +534,10 @@ static int usb4_switch_nvm_set_offset(struct tb_switch *sw,
return status ? -EIO : 0; return status ? -EIO : 0;
} }
static int usb4_switch_nvm_write_next_block(struct tb_switch *sw, static int usb4_switch_nvm_write_next_block(void *data, const void *buf,
const void *buf, size_t dwords) size_t dwords)
{ {
struct tb_switch *sw = data;
u8 status; u8 status;
int ret; int ret;
...@@ -546,8 +571,8 @@ int usb4_switch_nvm_write(struct tb_switch *sw, unsigned int address, ...@@ -546,8 +571,8 @@ int usb4_switch_nvm_write(struct tb_switch *sw, unsigned int address,
if (ret) if (ret)
return ret; return ret;
return usb4_switch_do_write_data(sw, address, buf, size, return usb4_do_write_data(address, buf, size,
usb4_switch_nvm_write_next_block); usb4_switch_nvm_write_next_block, sw);
} }
/** /**
...@@ -710,7 +735,7 @@ struct tb_port *usb4_switch_map_pcie_down(struct tb_switch *sw, ...@@ -710,7 +735,7 @@ struct tb_port *usb4_switch_map_pcie_down(struct tb_switch *sw,
if (!tb_port_is_pcie_down(p)) if (!tb_port_is_pcie_down(p))
continue; continue;
if (pcie_idx == usb4_idx && !tb_pci_port_is_enabled(p)) if (pcie_idx == usb4_idx)
return p; return p;
pcie_idx++; pcie_idx++;
...@@ -741,7 +766,7 @@ struct tb_port *usb4_switch_map_usb3_down(struct tb_switch *sw, ...@@ -741,7 +766,7 @@ struct tb_port *usb4_switch_map_usb3_down(struct tb_switch *sw,
if (!tb_port_is_usb3_down(p)) if (!tb_port_is_usb3_down(p))
continue; continue;
if (usb_idx == usb4_idx && !tb_usb3_port_is_enabled(p)) if (usb_idx == usb4_idx)
return p; return p;
usb_idx++; usb_idx++;
...@@ -769,3 +794,796 @@ int usb4_port_unlock(struct tb_port *port) ...@@ -769,3 +794,796 @@ int usb4_port_unlock(struct tb_port *port)
val &= ~ADP_CS_4_LCK; val &= ~ADP_CS_4_LCK;
return tb_port_write(port, &val, TB_CFG_PORT, ADP_CS_4, 1); return tb_port_write(port, &val, TB_CFG_PORT, ADP_CS_4, 1);
} }
static int usb4_port_wait_for_bit(struct tb_port *port, u32 offset, u32 bit,
u32 value, int timeout_msec)
{
ktime_t timeout = ktime_add_ms(ktime_get(), timeout_msec);
do {
u32 val;
int ret;
ret = tb_port_read(port, &val, TB_CFG_PORT, offset, 1);
if (ret)
return ret;
if ((val & bit) == value)
return 0;
usleep_range(50, 100);
} while (ktime_before(ktime_get(), timeout));
return -ETIMEDOUT;
}
static int usb4_port_read_data(struct tb_port *port, void *data, size_t dwords)
{
if (dwords > USB4_DATA_DWORDS)
return -EINVAL;
return tb_port_read(port, data, TB_CFG_PORT, port->cap_usb4 + PORT_CS_2,
dwords);
}
static int usb4_port_write_data(struct tb_port *port, const void *data,
size_t dwords)
{
if (dwords > USB4_DATA_DWORDS)
return -EINVAL;
return tb_port_write(port, data, TB_CFG_PORT, port->cap_usb4 + PORT_CS_2,
dwords);
}
static int usb4_port_sb_read(struct tb_port *port, enum usb4_sb_target target,
u8 index, u8 reg, void *buf, u8 size)
{
size_t dwords = DIV_ROUND_UP(size, 4);
int ret;
u32 val;
if (!port->cap_usb4)
return -EINVAL;
val = reg;
val |= size << PORT_CS_1_LENGTH_SHIFT;
val |= (target << PORT_CS_1_TARGET_SHIFT) & PORT_CS_1_TARGET_MASK;
if (target == USB4_SB_TARGET_RETIMER)
val |= (index << PORT_CS_1_RETIMER_INDEX_SHIFT);
val |= PORT_CS_1_PND;
ret = tb_port_write(port, &val, TB_CFG_PORT,
port->cap_usb4 + PORT_CS_1, 1);
if (ret)
return ret;
ret = usb4_port_wait_for_bit(port, port->cap_usb4 + PORT_CS_1,
PORT_CS_1_PND, 0, 500);
if (ret)
return ret;
ret = tb_port_read(port, &val, TB_CFG_PORT,
port->cap_usb4 + PORT_CS_1, 1);
if (ret)
return ret;
if (val & PORT_CS_1_NR)
return -ENODEV;
if (val & PORT_CS_1_RC)
return -EIO;
return buf ? usb4_port_read_data(port, buf, dwords) : 0;
}
static int usb4_port_sb_write(struct tb_port *port, enum usb4_sb_target target,
u8 index, u8 reg, const void *buf, u8 size)
{
size_t dwords = DIV_ROUND_UP(size, 4);
int ret;
u32 val;
if (!port->cap_usb4)
return -EINVAL;
if (buf) {
ret = usb4_port_write_data(port, buf, dwords);
if (ret)
return ret;
}
val = reg;
val |= size << PORT_CS_1_LENGTH_SHIFT;
val |= PORT_CS_1_WNR_WRITE;
val |= (target << PORT_CS_1_TARGET_SHIFT) & PORT_CS_1_TARGET_MASK;
if (target == USB4_SB_TARGET_RETIMER)
val |= (index << PORT_CS_1_RETIMER_INDEX_SHIFT);
val |= PORT_CS_1_PND;
ret = tb_port_write(port, &val, TB_CFG_PORT,
port->cap_usb4 + PORT_CS_1, 1);
if (ret)
return ret;
ret = usb4_port_wait_for_bit(port, port->cap_usb4 + PORT_CS_1,
PORT_CS_1_PND, 0, 500);
if (ret)
return ret;
ret = tb_port_read(port, &val, TB_CFG_PORT,
port->cap_usb4 + PORT_CS_1, 1);
if (ret)
return ret;
if (val & PORT_CS_1_NR)
return -ENODEV;
if (val & PORT_CS_1_RC)
return -EIO;
return 0;
}
static int usb4_port_sb_op(struct tb_port *port, enum usb4_sb_target target,
u8 index, enum usb4_sb_opcode opcode, int timeout_msec)
{
ktime_t timeout;
u32 val;
int ret;
val = opcode;
ret = usb4_port_sb_write(port, target, index, USB4_SB_OPCODE, &val,
sizeof(val));
if (ret)
return ret;
timeout = ktime_add_ms(ktime_get(), timeout_msec);
do {
/* Check results */
ret = usb4_port_sb_read(port, target, index, USB4_SB_OPCODE,
&val, sizeof(val));
if (ret)
return ret;
switch (val) {
case 0:
return 0;
case USB4_SB_OPCODE_ERR:
return -EAGAIN;
case USB4_SB_OPCODE_ONS:
return -EOPNOTSUPP;
default:
if (val != opcode)
return -EIO;
break;
}
} while (ktime_before(ktime_get(), timeout));
return -ETIMEDOUT;
}
/**
* usb4_port_enumerate_retimers() - Send RT broadcast transaction
* @port: USB4 port
*
* This forces the USB4 port to send a broadcast RT transaction which
* makes the retimers on the link assign an index to themselves. Returns
* %0 in case of success and negative errno if there was an error.
*/
int usb4_port_enumerate_retimers(struct tb_port *port)
{
u32 val;
val = USB4_SB_OPCODE_ENUMERATE_RETIMERS;
return usb4_port_sb_write(port, USB4_SB_TARGET_ROUTER, 0,
USB4_SB_OPCODE, &val, sizeof(val));
}
static inline int usb4_port_retimer_op(struct tb_port *port, u8 index,
enum usb4_sb_opcode opcode,
int timeout_msec)
{
return usb4_port_sb_op(port, USB4_SB_TARGET_RETIMER, index, opcode,
timeout_msec);
}
/**
* usb4_port_retimer_read() - Read from retimer sideband registers
* @port: USB4 port
* @index: Retimer index
* @reg: Sideband register to read
* @buf: Data from @reg is stored here
* @size: Number of bytes to read
*
* Function reads retimer sideband registers starting from @reg. The
* retimer is connected to @port at @index. Returns %0 in case of
* success, and read data is copied to @buf. If there is no retimer
* present at the given @index, returns %-ENODEV. On any other failure,
* returns negative errno.
*/
int usb4_port_retimer_read(struct tb_port *port, u8 index, u8 reg, void *buf,
u8 size)
{
return usb4_port_sb_read(port, USB4_SB_TARGET_RETIMER, index, reg, buf,
size);
}
/**
* usb4_port_retimer_write() - Write to retimer sideband registers
* @port: USB4 port
* @index: Retimer index
* @reg: Sideband register to write
* @buf: Data that is written starting from @reg
* @size: Number of bytes to write
*
* Writes retimer sideband registers starting from @reg. The retimer is
* connected to @port at @index. Returns %0 in case of success. If there
* is no retimer present at the given @index, returns %-ENODEV. On any
* other failure, returns negative errno.
*/
int usb4_port_retimer_write(struct tb_port *port, u8 index, u8 reg,
const void *buf, u8 size)
{
return usb4_port_sb_write(port, USB4_SB_TARGET_RETIMER, index, reg, buf,
size);
}
/**
* usb4_port_retimer_is_last() - Is the retimer last on-board retimer
* @port: USB4 port
* @index: Retimer index
*
* If the retimer at @index is the last one (connected directly to the
* Type-C port) this function returns %1. If it is not, returns %0. If
* the retimer is not present, returns %-ENODEV. Otherwise returns
* negative errno.
*/
int usb4_port_retimer_is_last(struct tb_port *port, u8 index)
{
u32 metadata;
int ret;
ret = usb4_port_retimer_op(port, index, USB4_SB_OPCODE_QUERY_LAST_RETIMER,
500);
if (ret)
return ret;
ret = usb4_port_retimer_read(port, index, USB4_SB_METADATA, &metadata,
sizeof(metadata));
return ret ? ret : metadata & 1;
}
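A hypothetical way to combine the enumeration and query helpers above to count the on-board retimers on a link (retimer indices are assumed to start from 1; the upper bound is a defensive guess):

static int count_onboard_retimers(struct tb_port *port)
{
	int index, ret;

	ret = usb4_port_enumerate_retimers(port);
	if (ret)
		return ret;

	for (index = 1; index <= 6; index++) {
		ret = usb4_port_retimer_is_last(port, index);
		if (ret == -ENODEV)
			break;		/* no retimer at this index */
		if (ret < 0)
			return ret;
		if (ret == 1)
			return index;	/* last one before the Type-C port */
	}

	return index - 1;
}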
/**
* usb4_port_retimer_nvm_sector_size() - Read retimer NVM sector size
* @port: USB4 port
* @index: Retimer index
*
* Reads NVM sector size (in bytes) of a retimer at @index. This
* operation can be used, for example, to determine whether the retimer
* supports NVM upgrade. Returns sector size in bytes or negative errno
* in case of error. Specifically returns %-ENODEV if there is no
* retimer at @index.
*/
int usb4_port_retimer_nvm_sector_size(struct tb_port *port, u8 index)
{
u32 metadata;
int ret;
ret = usb4_port_retimer_op(port, index, USB4_SB_OPCODE_GET_NVM_SECTOR_SIZE,
500);
if (ret)
return ret;
ret = usb4_port_retimer_read(port, index, USB4_SB_METADATA, &metadata,
sizeof(metadata));
return ret ? ret : metadata & USB4_NVM_SECTOR_SIZE_MASK;
}
static int usb4_port_retimer_nvm_set_offset(struct tb_port *port, u8 index,
unsigned int address)
{
u32 metadata, dwaddress;
int ret;
dwaddress = address / 4;
metadata = (dwaddress << USB4_NVM_SET_OFFSET_SHIFT) &
USB4_NVM_SET_OFFSET_MASK;
ret = usb4_port_retimer_write(port, index, USB4_SB_METADATA, &metadata,
sizeof(metadata));
if (ret)
return ret;
return usb4_port_retimer_op(port, index, USB4_SB_OPCODE_NVM_SET_OFFSET,
500);
}
struct retimer_info {
struct tb_port *port;
u8 index;
};
static int usb4_port_retimer_nvm_write_next_block(void *data, const void *buf,
size_t dwords)
{
const struct retimer_info *info = data;
struct tb_port *port = info->port;
u8 index = info->index;
int ret;
ret = usb4_port_retimer_write(port, index, USB4_SB_DATA,
buf, dwords * 4);
if (ret)
return ret;
return usb4_port_retimer_op(port, index,
USB4_SB_OPCODE_NVM_BLOCK_WRITE, 1000);
}
/**
* usb4_port_retimer_nvm_write() - Write to retimer NVM
* @port: USB4 port
* @index: Retimer index
* @address: Byte address where to start the write
* @buf: Data to write
* @size: Size in bytes how much to write
*
* Writes @size bytes from @buf to the retimer NVM. Used for NVM
* upgrade. Returns %0 if the data was written successfully and negative
* errno in case of failure. Specifically returns %-ENODEV if there is
* no retimer at @index.
*/
int usb4_port_retimer_nvm_write(struct tb_port *port, u8 index, unsigned int address,
const void *buf, size_t size)
{
struct retimer_info info = { .port = port, .index = index };
int ret;
ret = usb4_port_retimer_nvm_set_offset(port, index, address);
if (ret)
return ret;
return usb4_do_write_data(address, buf, size,
usb4_port_retimer_nvm_write_next_block, &info);
}
/**
* usb4_port_retimer_nvm_authenticate() - Start retimer NVM upgrade
* @port: USB4 port
* @index: Retimer index
*
* After the new NVM image has been written via usb4_port_retimer_nvm_write()
* this function can be used to trigger the NVM upgrade process. If
* successful, the retimer restarts with the new NVM and may not have its
* index set, so one needs to call usb4_port_enumerate_retimers() to
* force an index to be assigned.
*/
int usb4_port_retimer_nvm_authenticate(struct tb_port *port, u8 index)
{
u32 val;
/*
* We need to use the raw operation here because once the
* authentication completes the retimer index is not set anymore
* so we do not get back the status now.
*/
val = USB4_SB_OPCODE_NVM_AUTH_WRITE;
return usb4_port_sb_write(port, USB4_SB_TARGET_RETIMER, index,
USB4_SB_OPCODE, &val, sizeof(val));
}
/**
* usb4_port_retimer_nvm_authenticate_status() - Read status of NVM upgrade
* @port: USB4 port
* @index: Retimer index
* @status: Raw status code read from metadata
*
* This can be called after usb4_port_retimer_nvm_authenticate() and
* usb4_port_enumerate_retimers() to fetch status of the NVM upgrade.
*
* Returns %0 if the authentication status was successfully read. The
* completion metadata (the result) is then stored into @status. If
* reading the status fails, returns negative errno.
*/
int usb4_port_retimer_nvm_authenticate_status(struct tb_port *port, u8 index,
u32 *status)
{
u32 metadata, val;
int ret;
ret = usb4_port_retimer_read(port, index, USB4_SB_OPCODE, &val,
sizeof(val));
if (ret)
return ret;
switch (val) {
case 0:
*status = 0;
return 0;
case USB4_SB_OPCODE_ERR:
ret = usb4_port_retimer_read(port, index, USB4_SB_METADATA,
&metadata, sizeof(metadata));
if (ret)
return ret;
*status = metadata & USB4_SB_METADATA_NVM_AUTH_WRITE_MASK;
return 0;
case USB4_SB_OPCODE_ONS:
return -EOPNOTSUPP;
default:
return -EIO;
}
}
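Putting the retimer NVM helpers together, a hypothetical upgrade sequence could look like the sketch below (image layout, settle delays after authentication and error reporting are simplified):

static int retimer_nvm_upgrade(struct tb_port *port, u8 index,
			       const void *image, size_t size)
{
	u32 status;
	int ret;

	ret = usb4_port_retimer_nvm_write(port, index, 0, image, size);
	if (ret)
		return ret;

	ret = usb4_port_retimer_nvm_authenticate(port, index);
	if (ret)
		return ret;

	/* The retimer may lose its index while authenticating */
	ret = usb4_port_enumerate_retimers(port);
	if (ret)
		return ret;

	ret = usb4_port_retimer_nvm_authenticate_status(port, index, &status);
	if (ret)
		return ret;

	return status ? -EIO : 0;
}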
static int usb4_port_retimer_nvm_read_block(void *data, unsigned int dwaddress,
void *buf, size_t dwords)
{
const struct retimer_info *info = data;
struct tb_port *port = info->port;
u8 index = info->index;
u32 metadata;
int ret;
metadata = dwaddress << USB4_NVM_READ_OFFSET_SHIFT;
if (dwords < USB4_DATA_DWORDS)
metadata |= dwords << USB4_NVM_READ_LENGTH_SHIFT;
ret = usb4_port_retimer_write(port, index, USB4_SB_METADATA, &metadata,
sizeof(metadata));
if (ret)
return ret;
ret = usb4_port_retimer_op(port, index, USB4_SB_OPCODE_NVM_READ, 500);
if (ret)
return ret;
return usb4_port_retimer_read(port, index, USB4_SB_DATA, buf,
dwords * 4);
}
/**
* usb4_port_retimer_nvm_read() - Read contents of retimer NVM
* @port: USB4 port
* @index: Retimer index
* @address: NVM address (in bytes) to start reading
* @buf: Data read from NVM is stored here
* @size: Number of bytes to read
*
* Reads retimer NVM and copies the contents to @buf. Returns %0 if the
* read was successful and negative errno in case of failure.
* Specifically returns %-ENODEV if there is no retimer at @index.
*/
int usb4_port_retimer_nvm_read(struct tb_port *port, u8 index,
unsigned int address, void *buf, size_t size)
{
struct retimer_info info = { .port = port, .index = index };
return usb4_do_read_data(address, buf, size,
usb4_port_retimer_nvm_read_block, &info);
}
/**
* usb4_usb3_port_max_link_rate() - Maximum supported USB3 link rate
* @port: USB3 adapter port
*
* Return maximum supported link rate of a USB3 adapter in Mb/s.
* Negative errno in case of error.
*/
int usb4_usb3_port_max_link_rate(struct tb_port *port)
{
int ret, lr;
u32 val;
if (!tb_port_is_usb3_down(port) && !tb_port_is_usb3_up(port))
return -EINVAL;
ret = tb_port_read(port, &val, TB_CFG_PORT,
port->cap_adap + ADP_USB3_CS_4, 1);
if (ret)
return ret;
lr = (val & ADP_USB3_CS_4_MSLR_MASK) >> ADP_USB3_CS_4_MSLR_SHIFT;
return lr == ADP_USB3_CS_4_MSLR_20G ? 20000 : 10000;
}
/**
* usb4_usb3_port_actual_link_rate() - Established USB3 link rate
* @port: USB3 adapter port
*
* Return the actual established link rate of a USB3 adapter in Mb/s. If
* the link is not up, returns %0. Negative errno in case of failure.
*/
int usb4_usb3_port_actual_link_rate(struct tb_port *port)
{
int ret, lr;
u32 val;
if (!tb_port_is_usb3_down(port) && !tb_port_is_usb3_up(port))
return -EINVAL;
ret = tb_port_read(port, &val, TB_CFG_PORT,
port->cap_adap + ADP_USB3_CS_4, 1);
if (ret)
return ret;
if (!(val & ADP_USB3_CS_4_ULV))
return 0;
lr = val & ADP_USB3_CS_4_ALR_MASK;
return lr == ADP_USB3_CS_4_ALR_20G ? 20000 : 10000;
}
static int usb4_usb3_port_cm_request(struct tb_port *port, bool request)
{
int ret;
u32 val;
if (!tb_port_is_usb3_down(port))
return -EINVAL;
if (tb_route(port->sw))
return -EINVAL;
ret = tb_port_read(port, &val, TB_CFG_PORT,
port->cap_adap + ADP_USB3_CS_2, 1);
if (ret)
return ret;
if (request)
val |= ADP_USB3_CS_2_CMR;
else
val &= ~ADP_USB3_CS_2_CMR;
ret = tb_port_write(port, &val, TB_CFG_PORT,
port->cap_adap + ADP_USB3_CS_2, 1);
if (ret)
return ret;
/*
* We can use val here directly as the CMR bit is in the same place
* as HCA. Just mask out others.
*/
val &= ADP_USB3_CS_2_CMR;
return usb4_port_wait_for_bit(port, port->cap_adap + ADP_USB3_CS_1,
ADP_USB3_CS_1_HCA, val, 1500);
}
static inline int usb4_usb3_port_set_cm_request(struct tb_port *port)
{
return usb4_usb3_port_cm_request(port, true);
}
static inline int usb4_usb3_port_clear_cm_request(struct tb_port *port)
{
return usb4_usb3_port_cm_request(port, false);
}
static unsigned int usb3_bw_to_mbps(u32 bw, u8 scale)
{
unsigned long uframes;
uframes = bw * 512UL << scale;
return DIV_ROUND_CLOSEST(uframes * 8000, 1000 * 1000);
}
static u32 mbps_to_usb3_bw(unsigned int mbps, u8 scale)
{
unsigned long uframes;
/* 1 uframe is 1/8 ms (125 us) -> 1 / 8000 s */
uframes = ((unsigned long)mbps * 1000 * 1000) / 8000;
return DIV_ROUND_UP(uframes, 512UL << scale);
}
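
Not part of this commit: a worked example of the two conversion helpers above, assuming the hardware default scale of 0, to show how the rounding behaves.

/*
 * Worked example (illustrative, scale = 0):
 *
 *   usb3_bw_to_mbps(90, 0)
 *     = DIV_ROUND_CLOSEST(90 * 512 * 8000, 1000 * 1000) = 369
 *
 *   mbps_to_usb3_bw(369, 0)
 *     = DIV_ROUND_UP((369 * 1000 * 1000) / 8000, 512) = 91
 *
 * mbps_to_usb3_bw() rounds up, so the value written back to the adapter
 * never allocates less than the requested Mb/s figure.
 */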
static int usb4_usb3_port_read_allocated_bandwidth(struct tb_port *port,
int *upstream_bw,
int *downstream_bw)
{
u32 val, bw, scale;
int ret;
ret = tb_port_read(port, &val, TB_CFG_PORT,
port->cap_adap + ADP_USB3_CS_2, 1);
if (ret)
return ret;
ret = tb_port_read(port, &scale, TB_CFG_PORT,
port->cap_adap + ADP_USB3_CS_3, 1);
if (ret)
return ret;
scale &= ADP_USB3_CS_3_SCALE_MASK;
bw = val & ADP_USB3_CS_2_AUBW_MASK;
*upstream_bw = usb3_bw_to_mbps(bw, scale);
bw = (val & ADP_USB3_CS_2_ADBW_MASK) >> ADP_USB3_CS_2_ADBW_SHIFT;
*downstream_bw = usb3_bw_to_mbps(bw, scale);
return 0;
}
/**
* usb4_usb3_port_allocated_bandwidth() - Bandwidth allocated for USB3
* @port: USB3 adapter port
* @upstream_bw: Allocated upstream bandwidth is stored here
* @downstream_bw: Allocated downstream bandwidth is stored here
*
* Stores currently allocated USB3 bandwidth into @upstream_bw and
* @downstream_bw in Mb/s. Returns %0 in case of success and negative
* errno in case of failure.
*/
int usb4_usb3_port_allocated_bandwidth(struct tb_port *port, int *upstream_bw,
int *downstream_bw)
{
int ret;
ret = usb4_usb3_port_set_cm_request(port);
if (ret)
return ret;
ret = usb4_usb3_port_read_allocated_bandwidth(port, upstream_bw,
downstream_bw);
usb4_usb3_port_clear_cm_request(port);
return ret;
}
static int usb4_usb3_port_read_consumed_bandwidth(struct tb_port *port,
int *upstream_bw,
int *downstream_bw)
{
u32 val, bw, scale;
int ret;
ret = tb_port_read(port, &val, TB_CFG_PORT,
port->cap_adap + ADP_USB3_CS_1, 1);
if (ret)
return ret;
ret = tb_port_read(port, &scale, TB_CFG_PORT,
port->cap_adap + ADP_USB3_CS_3, 1);
if (ret)
return ret;
scale &= ADP_USB3_CS_3_SCALE_MASK;
bw = val & ADP_USB3_CS_1_CUBW_MASK;
*upstream_bw = usb3_bw_to_mbps(bw, scale);
bw = (val & ADP_USB3_CS_1_CDBW_MASK) >> ADP_USB3_CS_1_CDBW_SHIFT;
*downstream_bw = usb3_bw_to_mbps(bw, scale);
return 0;
}
static int usb4_usb3_port_write_allocated_bandwidth(struct tb_port *port,
int upstream_bw,
int downstream_bw)
{
u32 val, ubw, dbw, scale;
int ret;
/* Read the used scale, hardware default is 0 */
ret = tb_port_read(port, &scale, TB_CFG_PORT,
port->cap_adap + ADP_USB3_CS_3, 1);
if (ret)
return ret;
scale &= ADP_USB3_CS_3_SCALE_MASK;
ubw = mbps_to_usb3_bw(upstream_bw, scale);
dbw = mbps_to_usb3_bw(downstream_bw, scale);
ret = tb_port_read(port, &val, TB_CFG_PORT,
port->cap_adap + ADP_USB3_CS_2, 1);
if (ret)
return ret;
val &= ~(ADP_USB3_CS_2_AUBW_MASK | ADP_USB3_CS_2_ADBW_MASK);
val |= dbw << ADP_USB3_CS_2_ADBW_SHIFT;
val |= ubw;
return tb_port_write(port, &val, TB_CFG_PORT,
port->cap_adap + ADP_USB3_CS_2, 1);
}
/**
* usb4_usb3_port_allocate_bandwidth() - Allocate bandwidth for USB3
* @port: USB3 adapter port
* @upstream_bw: New upstream bandwidth
* @downstream_bw: New downstream bandwidth
*
* This can be used to set how much bandwidth is allocated for the USB3
* tunneled isochronous traffic. @upstream_bw and @downstream_bw are the
* new values programmed to the USB3 adapter allocation registers. If
* the values are lower than what is currently consumed, the allocation
* is set to what is currently consumed instead (consumed bandwidth
* cannot be taken away by the CM). The actual new values are returned in
* @upstream_bw and @downstream_bw.
*
* Returns %0 in case of success and negative errno if there was a
* failure.
*/
int usb4_usb3_port_allocate_bandwidth(struct tb_port *port, int *upstream_bw,
int *downstream_bw)
{
int ret, consumed_up, consumed_down, allocate_up, allocate_down;
ret = usb4_usb3_port_set_cm_request(port);
if (ret)
return ret;
ret = usb4_usb3_port_read_consumed_bandwidth(port, &consumed_up,
&consumed_down);
if (ret)
goto err_request;
/* Don't allow it to go lower than what is consumed */
allocate_up = max(*upstream_bw, consumed_up);
allocate_down = max(*downstream_bw, consumed_down);
ret = usb4_usb3_port_write_allocated_bandwidth(port, allocate_up,
allocate_down);
if (ret)
goto err_request;
*upstream_bw = allocate_up;
*downstream_bw = allocate_down;
err_request:
usb4_usb3_port_clear_cm_request(port);
return ret;
}
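
Not part of this commit: a caller sketch for usb4_usb3_port_allocate_bandwidth(), assuming @port is the host router's USB3 downstream adapter; the helper name and the requested figures are illustrative.

/* Hypothetical caller: try to grow a USB3 allocation and accept the adjusted result. */
static int example_grow_usb3_allocation(struct tb_port *port)
{
	int up = 2000, down = 5000;	/* requested Mb/s, illustrative values */
	int ret;

	ret = usb4_usb3_port_allocate_bandwidth(port, &up, &down);
	if (ret)
		return ret;

	/* up/down now hold what was actually programmed */
	tb_port_dbg(port, "allocated %d/%d Mb/s for USB3\n", up, down);
	return 0;
}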
/**
* usb4_usb3_port_release_bandwidth() - Release allocated USB3 bandwidth
* @port: USB3 adapter port
* @upstream_bw: New allocated upstream bandwidth
* @downstream_bw: New allocated downstream bandwidth
*
* Releases USB3 allocated bandwidth down to what is actually consumed.
* The new bandwidth is returned in @upstream_bw and @downstream_bw.
*
* Returns %0 in case of success and negative errno in case of failure.
*/
int usb4_usb3_port_release_bandwidth(struct tb_port *port, int *upstream_bw,
int *downstream_bw)
{
int ret, consumed_up, consumed_down;
ret = usb4_usb3_port_set_cm_request(port);
if (ret)
return ret;
ret = usb4_usb3_port_read_consumed_bandwidth(port, &consumed_up,
&consumed_down);
if (ret)
goto err_request;
/*
* Always keep 1000 Mb/s to make sure xHCI has at least some
* bandwidth available for isochronous traffic.
*/
if (consumed_up < 1000)
consumed_up = 1000;
if (consumed_down < 1000)
consumed_down = 1000;
ret = usb4_usb3_port_write_allocated_bandwidth(port, consumed_up,
consumed_down);
if (ret)
goto err_request;
*upstream_bw = consumed_up;
*downstream_bw = consumed_down;
err_request:
usb4_usb3_port_clear_cm_request(port);
return ret;
}
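
Not part of this commit: the matching release-side sketch; as above, the helper name is hypothetical and the port is assumed to be the host router's USB3 downstream adapter.

/* Hypothetical caller: drop a USB3 allocation back down to what is consumed. */
static void example_trim_usb3_allocation(struct tb_port *port)
{
	int up, down;

	if (!usb4_usb3_port_release_bandwidth(port, &up, &down))
		tb_port_dbg(port, "released USB3 bandwidth down to %d/%d Mb/s\n",
			    up, down);
}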
...@@ -501,6 +501,55 @@ void tb_unregister_protocol_handler(struct tb_protocol_handler *handler)
}
EXPORT_SYMBOL_GPL(tb_unregister_protocol_handler);
static int rebuild_property_block(void)
{
u32 *block, len;
int ret;
ret = tb_property_format_dir(xdomain_property_dir, NULL, 0);
if (ret < 0)
return ret;
len = ret;
block = kcalloc(len, sizeof(u32), GFP_KERNEL);
if (!block)
return -ENOMEM;
ret = tb_property_format_dir(xdomain_property_dir, block, len);
if (ret) {
kfree(block);
return ret;
}
kfree(xdomain_property_block);
xdomain_property_block = block;
xdomain_property_block_len = len;
xdomain_property_block_gen++;
return 0;
}
static void finalize_property_block(void)
{
const struct tb_property *nodename;
/*
* On the first XDomain connection we set up the system
* nodename. This is delayed here because userspace may not have it
* set when the driver is first probed.
*/
mutex_lock(&xdomain_lock);
nodename = tb_property_find(xdomain_property_dir, "deviceid",
TB_PROPERTY_TYPE_TEXT);
if (!nodename) {
tb_property_add_text(xdomain_property_dir, "deviceid",
utsname()->nodename);
rebuild_property_block();
}
mutex_unlock(&xdomain_lock);
}
static void tb_xdp_handle_request(struct work_struct *work)
{
struct xdomain_request_work *xw = container_of(work, typeof(*xw), work);
...@@ -529,6 +578,8 @@ static void tb_xdp_handle_request(struct work_struct *work)
goto out;
}
finalize_property_block();
switch (pkg->type) {
case PROPERTIES_REQUEST:
ret = tb_xdp_properties_response(tb, ctl, route, sequence, uuid,
...@@ -1569,35 +1620,6 @@ bool tb_xdomain_handle_request(struct tb *tb, enum tb_cfg_pkg_type type,
return ret > 0;
}
static int rebuild_property_block(void)
{
u32 *block, len;
int ret;
ret = tb_property_format_dir(xdomain_property_dir, NULL, 0);
if (ret < 0)
return ret;
len = ret;
block = kcalloc(len, sizeof(u32), GFP_KERNEL);
if (!block)
return -ENOMEM;
ret = tb_property_format_dir(xdomain_property_dir, block, len);
if (ret) {
kfree(block);
return ret;
}
kfree(xdomain_property_block);
xdomain_property_block = block;
xdomain_property_block_len = len;
xdomain_property_block_gen++;
return 0;
}
static int update_xdomain(struct device *dev, void *data)
{
struct tb_xdomain *xd;
...@@ -1702,8 +1724,6 @@ EXPORT_SYMBOL_GPL(tb_unregister_property_dir);
int tb_xdomain_init(void)
{
int ret;
xdomain_property_dir = tb_property_create_dir(NULL);
if (!xdomain_property_dir)
return -ENOMEM;
...@@ -1712,22 +1732,16 @@ int tb_xdomain_init(void)
* Initialize standard set of properties without any service
* directories. Those will be added by service drivers
* themselves when they are loaded.
*
* We also add node name later when first connection is made.
*/
tb_property_add_immediate(xdomain_property_dir, "vendorid",
PCI_VENDOR_ID_INTEL);
tb_property_add_text(xdomain_property_dir, "vendorid", "Intel Corp.");
tb_property_add_immediate(xdomain_property_dir, "deviceid", 0x1);
tb_property_add_text(xdomain_property_dir, "deviceid",
utsname()->nodename);
tb_property_add_immediate(xdomain_property_dir, "devicerv", 0x80000100); tb_property_add_immediate(xdomain_property_dir, "devicerv", 0x80000100);
ret = rebuild_property_block();
return 0;
if (ret) {
tb_property_free_dir(xdomain_property_dir);
xdomain_property_dir = NULL;
}
return ret;
}
void tb_xdomain_exit(void)
...
...@@ -504,8 +504,6 @@ struct tb_ring {
#define RING_FLAG_NO_SUSPEND BIT(0)
/* Configure the ring to be in frame mode */
#define RING_FLAG_FRAME BIT(1)
/* Enable end-to-end flow control */
#define RING_FLAG_E2E BIT(2)
struct ring_frame;
typedef void (*ring_cb)(struct tb_ring *, struct ring_frame *, bool canceled);
...