Commit f67cf491 authored by Mika Westerberg, committed by Greg Kroah-Hartman

thunderbolt: Add support for Internal Connection Manager (ICM)

Starting from Intel Falcon Ridge, the internal connection manager
running on the Thunderbolt host controller has supported four security
levels. One reason for this is to prevent DMA attacks by only allowing
connections from devices the user trusts.

The internal connection manager (ICM) is the preferred way of connecting
Thunderbolt devices over the software-only implementation typically used
on Macs. The driver communicates with the ICM using special Thunderbolt
ring 0 (control channel) messages, so we add support for these ICM
messages to the control channel.

The security levels are as follows:

  none - No security, all tunnels are created automatically
  user - User needs to approve the device before tunnels are created
  secure - User needs to approve the device before tunnels are created.
	   The device is sent a challenge on future connects to be able
	   to verify it is actually the approved device.
  dponly - Only Display Port and USB tunnels can be created and those
           are created automatically.

The security levels are typically configurable from the system BIOS and
by default it is set to "user" on many systems.

In this patch each Thunderbolt device will have either one or two new
sysfs attributes: authorized and key. The latter appears for devices
that support secure connect.

In order to identify the device the user can read identification
information, including the UUID and name of the device, from sysfs and
based on that make a decision to authorize the device. The device is
authorized by simply writing 1 to the "authorized" sysfs attribute.
This follows the USB bus device authorization mechanism. The secure
connect requires an additional challenge step (writing 2 to the
"authorized" attribute) on future connects when the key has already
been stored to the NVM of the device.
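
As an illustration (not part of the patch), a minimal user-space sketch
of this "user" level flow; the device path 0-1 under domain0 and the
plain POSIX I/O are assumptions for the example:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* Hypothetical device: first switch behind domain0 */
	const char *base = "/sys/bus/thunderbolt/devices/0-1";
	char path[256], buf[64];
	ssize_t n;
	int fd;

	/* Read identification info before deciding to authorize */
	snprintf(path, sizeof(path), "%s/device_name", base);
	fd = open(path, O_RDONLY);
	if (fd >= 0) {
		n = read(fd, buf, sizeof(buf) - 1);
		if (n > 0) {
			buf[n] = '\0';
			printf("device: %s", buf);
		}
		close(fd);
	}

	/* Approve the device by writing 1 to "authorized" */
	snprintf(path, sizeof(path), "%s/authorized", base);
	fd = open(path, O_WRONLY);
	if (fd < 0)
		return 1;
	if (write(fd, "1", 1) != 1)
		perror("authorize");
	close(fd);
	return 0;
}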

Non-ICM systems (before Alpine Ridge) continue to use the existing
functionality and the security level is set to none. For systems with
Alpine Ridge, even on Apple hardware, we will use ICM.

This code is based on the work done by Amir Levy and Michael Jamet.
Signed-off-by: Michael Jamet <michael.jamet@intel.com>
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Reviewed-by: Yehezkel Bernat <yehezkel.bernat@intel.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Andreas Noever <andreas.noever@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
parent bdccf295
What: /sys/bus/thunderbolt/devices/.../domainX/security
Date: Sep 2017
KernelVersion: 4.13
Contact: thunderbolt-software@lists.01.org
Description: This attribute holds the current Thunderbolt security level
set by the system BIOS. Possible values are:
none: All devices are automatically authorized
user: Devices are only authorized based on writing
appropriate value to the authorized attribute
secure: Require devices that support secure connect at
minimum. User needs to authorize each device.
dponly: Automatically tunnel Display port (and USB). No
PCIe tunnels are created.
What: /sys/bus/thunderbolt/devices/.../authorized
Date: Sep 2017
KernelVersion: 4.13
Contact: thunderbolt-software@lists.01.org
Description: This attribute is used to authorize Thunderbolt devices
after they have been connected. If the device is not
authorized, no tunnels such as PCIe and Display Port are
made available to the system.
Contents of this attribute will be 0 when the device is not
yet authorized.
The following values are supported:
1: The device will be authorized and connected
When the key attribute contains a 32-byte hex string the
possible values are:
1: The 32-byte hex string is added to the device NVM and
the device is authorized.
2: Send a challenge based on the 32-byte hex string. If the
challenge response from the device is valid, the device is
authorized. In case of failure errno will be ENOKEY if
the device did not contain a key at all, and
EKEYREJECTED if the challenge response did not match.
What: /sys/bus/thunderbolt/devices/.../key
Date: Sep 2017
KernelVersion: 4.13
Contact: thunderbolt-software@lists.01.org
Description: When a device supports Thunderbolt secure connect it will
have this attribute. Writing a 32-byte hex string changes
authorization to use the secure connection method instead.
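
A sketch of the secure connect flow these attributes enable, under the
same assumptions as the example above (hypothetical device path, plain
POSIX I/O). On the first connect the key is written and the device
approved with 1; on later connects the same key is written again and
the challenge is triggered with 2:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Write a buffer to a sysfs attribute, returning 0 on success */
static int sysfs_write(const char *path, const char *val, size_t len)
{
	int fd = open(path, O_WRONLY);
	ssize_t n;

	if (fd < 0)
		return -1;
	n = write(fd, val, len);
	close(fd);
	return n == (ssize_t)len ? 0 : -1;
}

int main(void)
{
	/* 32 bytes as 64 hex characters; a real key should be random
	 * and stored securely by the user-space tool. */
	const char key[] = "0123456789abcdef0123456789abcdef"
			   "0123456789abcdef0123456789abcdef";

	if (sysfs_write("/sys/bus/thunderbolt/devices/0-1/key",
			key, sizeof(key) - 1))
		return 1;

	/* First connect: "1" adds the key to the device NVM and
	 * approves it. Later connects: "2" challenges the device
	 * instead; the write fails with ENOKEY or EKEYREJECTED if
	 * the challenge cannot be verified. */
	if (sysfs_write("/sys/bus/thunderbolt/devices/0-1/authorized",
			"2", 1))
		perror("challenge");
	return 0;
}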
What: /sys/bus/thunderbolt/devices/.../device
Date: Sep 2017
KernelVersion: 4.13
...
menuconfig THUNDERBOLT
-	tristate "Thunderbolt support for Apple devices"
	tristate "Thunderbolt support"
	depends on PCI
	depends on X86 || COMPILE_TEST
	select APPLE_PROPERTIES if EFI_STUB && X86
	select CRC32
	select CRYPTO
	select CRYPTO_HASH
	help
-	  Cactus Ridge Thunderbolt Controller driver
-	  This driver is required if you want to hotplug Thunderbolt devices on
-	  Apple hardware.
-	  Device chaining is currently not supported.
	  Thunderbolt Controller driver. This driver is required if you
	  want to hotplug Thunderbolt devices on Apple hardware or on PCs
	  with Intel Falcon Ridge or newer.

	  To compile this driver a module, choose M here. The module will be
	  called thunderbolt.

obj-${CONFIG_THUNDERBOLT} := thunderbolt.o
thunderbolt-objs := nhi.o ctl.o tb.o switch.o cap.o path.o tunnel_pci.o eeprom.o
-thunderbolt-objs += domain.o dma_port.o
thunderbolt-objs += domain.o dma_port.o icm.o
@@ -463,6 +463,8 @@ static void tb_ctl_rx_callback(struct tb_ring *ring, struct ring_frame *frame,
				 "RX: checksum mismatch, dropping packet\n");
			goto rx;
		}
		/* Fall through */
	case TB_CFG_PKG_ICM_EVENT:
		tb_ctl_handle_event(pkg->ctl, frame->eof, pkg, frame->size);
		goto rx;
...
@@ -13,11 +13,43 @@
#include <linux/idr.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/random.h>
#include <crypto/hash.h>

#include "tb.h"

static DEFINE_IDA(tb_domain_ida);
static const char * const tb_security_names[] = {
[TB_SECURITY_NONE] = "none",
[TB_SECURITY_USER] = "user",
[TB_SECURITY_SECURE] = "secure",
[TB_SECURITY_DPONLY] = "dponly",
};
static ssize_t security_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct tb *tb = container_of(dev, struct tb, dev);
return sprintf(buf, "%s\n", tb_security_names[tb->security_level]);
}
static DEVICE_ATTR_RO(security);
static struct attribute *domain_attrs[] = {
&dev_attr_security.attr,
NULL,
};
static struct attribute_group domain_attr_group = {
.attrs = domain_attrs,
};
static const struct attribute_group *domain_attr_groups[] = {
&domain_attr_group,
NULL,
};
struct bus_type tb_bus_type = {
	.name = "thunderbolt",
};
@@ -82,6 +114,7 @@ struct tb *tb_domain_alloc(struct tb_nhi *nhi, size_t privsize)
	tb->dev.parent = &nhi->pdev->dev;
	tb->dev.bus = &tb_bus_type;
	tb->dev.type = &tb_domain_type;
	tb->dev.groups = domain_attr_groups;
	dev_set_name(&tb->dev, "domain%d", tb->index);
	device_initialize(&tb->dev);
@@ -140,6 +173,12 @@ int tb_domain_add(struct tb *tb)
	 */
	tb_ctl_start(tb->ctl);
if (tb->cm_ops->driver_ready) {
ret = tb->cm_ops->driver_ready(tb);
if (ret)
goto err_ctl_stop;
}
	ret = device_add(&tb->dev);
	if (ret)
		goto err_ctl_stop;
@@ -231,6 +270,162 @@ int tb_domain_resume_noirq(struct tb *tb)
	return ret;
}
int tb_domain_suspend(struct tb *tb)
{
int ret;
mutex_lock(&tb->lock);
if (tb->cm_ops->suspend) {
ret = tb->cm_ops->suspend(tb);
if (ret) {
mutex_unlock(&tb->lock);
return ret;
}
}
mutex_unlock(&tb->lock);
return 0;
}
void tb_domain_complete(struct tb *tb)
{
mutex_lock(&tb->lock);
if (tb->cm_ops->complete)
tb->cm_ops->complete(tb);
mutex_unlock(&tb->lock);
}
/**
* tb_domain_approve_switch() - Approve switch
* @tb: Domain the switch belongs to
* @sw: Switch to approve
*
 * This will approve the switch by connection manager specific means. In
* case of success the connection manager will create tunnels for all
* supported protocols.
*/
int tb_domain_approve_switch(struct tb *tb, struct tb_switch *sw)
{
struct tb_switch *parent_sw;
if (!tb->cm_ops->approve_switch)
return -EPERM;
/* The parent switch must be authorized before this one */
parent_sw = tb_to_switch(sw->dev.parent);
if (!parent_sw || !parent_sw->authorized)
return -EINVAL;
return tb->cm_ops->approve_switch(tb, sw);
}
/**
* tb_domain_approve_switch_key() - Approve switch and add key
* @tb: Domain the switch belongs to
* @sw: Switch to approve
*
 * For switches that support secure connect, this function first adds the
 * key to the switch NVM using connection manager specific means. If
* adding the key is successful, the switch is approved and connected.
*
* Return: %0 on success and negative errno in case of failure.
*/
int tb_domain_approve_switch_key(struct tb *tb, struct tb_switch *sw)
{
struct tb_switch *parent_sw;
int ret;
if (!tb->cm_ops->approve_switch || !tb->cm_ops->add_switch_key)
return -EPERM;
/* The parent switch must be authorized before this one */
parent_sw = tb_to_switch(sw->dev.parent);
if (!parent_sw || !parent_sw->authorized)
return -EINVAL;
ret = tb->cm_ops->add_switch_key(tb, sw);
if (ret)
return ret;
return tb->cm_ops->approve_switch(tb, sw);
}
/**
* tb_domain_challenge_switch_key() - Challenge and approve switch
* @tb: Domain the switch belongs to
* @sw: Switch to approve
*
 * For switches that support secure connect, this function generates a
 * random challenge and sends it to the switch. The switch responds with
 * an HMAC-SHA256 computed from the challenge and the stored key; if it
 * matches the HMAC we compute locally, the switch is approved and
 * connected.
*
* Return: %0 on success and negative errno in case of failure.
*/
int tb_domain_challenge_switch_key(struct tb *tb, struct tb_switch *sw)
{
u8 challenge[TB_SWITCH_KEY_SIZE];
u8 response[TB_SWITCH_KEY_SIZE];
u8 hmac[TB_SWITCH_KEY_SIZE];
struct tb_switch *parent_sw;
struct crypto_shash *tfm;
struct shash_desc *shash;
int ret;
if (!tb->cm_ops->approve_switch || !tb->cm_ops->challenge_switch_key)
return -EPERM;
/* The parent switch must be authorized before this one */
parent_sw = tb_to_switch(sw->dev.parent);
if (!parent_sw || !parent_sw->authorized)
return -EINVAL;
get_random_bytes(challenge, sizeof(challenge));
ret = tb->cm_ops->challenge_switch_key(tb, sw, challenge, response);
if (ret)
return ret;
tfm = crypto_alloc_shash("hmac(sha256)", 0, 0);
if (IS_ERR(tfm))
return PTR_ERR(tfm);
ret = crypto_shash_setkey(tfm, sw->key, TB_SWITCH_KEY_SIZE);
if (ret)
goto err_free_tfm;
shash = kzalloc(sizeof(*shash) + crypto_shash_descsize(tfm),
GFP_KERNEL);
if (!shash) {
ret = -ENOMEM;
goto err_free_tfm;
}
shash->tfm = tfm;
shash->flags = CRYPTO_TFM_REQ_MAY_SLEEP;
memset(hmac, 0, sizeof(hmac));
ret = crypto_shash_digest(shash, challenge, sizeof(hmac), hmac);
if (ret)
goto err_free_shash;
/* The returned HMAC must match the one we calculated */
if (memcmp(response, hmac, sizeof(hmac))) {
ret = -EKEYREJECTED;
goto err_free_shash;
}
crypto_free_shash(tfm);
kfree(shash);
return tb->cm_ops->approve_switch(tb, sw);
err_free_shash:
kfree(shash);
err_free_tfm:
crypto_free_shash(tfm);
return ret;
}
int tb_domain_init(void)
{
	return bus_register(&tb_bus_type);
...
/*
* Internal Thunderbolt Connection Manager. This is a firmware running on
* the Thunderbolt host controller performing most of the low-level
* handling.
*
* Copyright (C) 2017, Intel Corporation
* Authors: Michael Jamet <michael.jamet@intel.com>
* Mika Westerberg <mika.westerberg@linux.intel.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/delay.h>
#include <linux/dmi.h>
#include <linux/mutex.h>
#include <linux/pci.h>
#include <linux/sizes.h>
#include <linux/slab.h>
#include <linux/workqueue.h>
#include "ctl.h"
#include "nhi_regs.h"
#include "tb.h"
#define PCIE2CIO_CMD 0x30
#define PCIE2CIO_CMD_TIMEOUT BIT(31)
#define PCIE2CIO_CMD_START BIT(30)
#define PCIE2CIO_CMD_WRITE BIT(21)
#define PCIE2CIO_CMD_CS_MASK GENMASK(20, 19)
#define PCIE2CIO_CMD_CS_SHIFT 19
#define PCIE2CIO_CMD_PORT_MASK GENMASK(18, 13)
#define PCIE2CIO_CMD_PORT_SHIFT 13
#define PCIE2CIO_WRDATA 0x34
#define PCIE2CIO_RDDATA 0x38
#define PHY_PORT_CS1 0x37
#define PHY_PORT_CS1_LINK_DISABLE BIT(14)
#define PHY_PORT_CS1_LINK_STATE_MASK GENMASK(29, 26)
#define PHY_PORT_CS1_LINK_STATE_SHIFT 26
#define ICM_TIMEOUT 5000 /* ms */
#define ICM_MAX_LINK 4
#define ICM_MAX_DEPTH 6
/**
* struct icm - Internal connection manager private data
 * @request_lock: Makes sure only one message is sent to ICM at a time
 * @rescan_work: Work used to rescan the surviving switches after resume
 * @upstream_port: Pointer to the PCIe upstream port this host
 *		   controller is connected to. This is only set for systems
 *		   where ICM needs to be started manually
* @vnd_cap: Vendor defined capability where PCIe2CIO mailbox resides
* (only set when @upstream_port is not %NULL)
* @is_supported: Checks if we can support ICM on this controller
* @get_mode: Read and return the ICM firmware mode (optional)
* @get_route: Find a route string for given switch
* @device_connected: Handle device connected ICM message
* @device_disconnected: Handle device disconnected ICM message
*/
struct icm {
struct mutex request_lock;
struct delayed_work rescan_work;
struct pci_dev *upstream_port;
int vnd_cap;
bool (*is_supported)(struct tb *tb);
int (*get_mode)(struct tb *tb);
int (*get_route)(struct tb *tb, u8 link, u8 depth, u64 *route);
void (*device_connected)(struct tb *tb,
const struct icm_pkg_header *hdr);
void (*device_disconnected)(struct tb *tb,
const struct icm_pkg_header *hdr);
};
struct icm_notification {
struct work_struct work;
struct icm_pkg_header *pkg;
struct tb *tb;
};
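
/*
 * Note that struct icm is allocated as the connection manager private
 * data directly after struct tb (tb_domain_alloc() is called with
 * sizeof(struct icm) and the data lands in tb->privdata[]), which is
 * what makes the pointer arithmetic below valid.
 */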
static inline struct tb *icm_to_tb(struct icm *icm)
{
return ((void *)icm - sizeof(struct tb));
}
static inline u8 phy_port_from_route(u64 route, u8 depth)
{
return tb_switch_phy_port_from_link(route >> ((depth - 1) * 8));
}
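
/*
 * A route string stores one link number per byte, one byte per hop:
 * the byte at position (depth - 1) selects the link used to reach the
 * switch at that depth. Each physical port exposes two links, so links
 * 1 and 2 map to physical port 0, links 3 and 4 to physical port 1,
 * and so on (see tb_switch_phy_port_from_link()). The helper below
 * returns the other link of the same dual-link pair: 1 <-> 2, 3 <-> 4.
 */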
static inline u8 dual_link_from_link(u8 link)
{
return link ? ((link - 1) ^ 0x01) + 1 : 0;
}
static inline u64 get_route(u32 route_hi, u32 route_lo)
{
return (u64)route_hi << 32 | route_lo;
}
static inline bool is_apple(void)
{
return dmi_match(DMI_BOARD_VENDOR, "Apple Inc.");
}
static bool icm_match(const struct tb_cfg_request *req,
const struct ctl_pkg *pkg)
{
const struct icm_pkg_header *res_hdr = pkg->buffer;
const struct icm_pkg_header *req_hdr = req->request;
if (pkg->frame.eof != req->response_type)
return false;
if (res_hdr->code != req_hdr->code)
return false;
return true;
}
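
/*
 * ICM responses may span several control channel packets (for example
 * the get topology response arrives in ICM_GET_TOPOLOGY_PACKETS
 * packets). Each packet carries its index in hdr->packet_id, so
 * icm_copy() below places it at the right offset in the response
 * buffer and reports completion once the last packet
 * (total_packets - 1) has been received.
 */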
static bool icm_copy(struct tb_cfg_request *req, const struct ctl_pkg *pkg)
{
const struct icm_pkg_header *hdr = pkg->buffer;
if (hdr->packet_id < req->npackets) {
size_t offset = hdr->packet_id * req->response_size;
memcpy(req->response + offset, pkg->buffer, req->response_size);
}
return hdr->packet_id == hdr->total_packets - 1;
}
static int icm_request(struct tb *tb, const void *request, size_t request_size,
void *response, size_t response_size, size_t npackets,
unsigned int timeout_msec)
{
struct icm *icm = tb_priv(tb);
int retries = 3;
do {
struct tb_cfg_request *req;
struct tb_cfg_result res;
req = tb_cfg_request_alloc();
if (!req)
return -ENOMEM;
req->match = icm_match;
req->copy = icm_copy;
req->request = request;
req->request_size = request_size;
req->request_type = TB_CFG_PKG_ICM_CMD;
req->response = response;
req->npackets = npackets;
req->response_size = response_size;
req->response_type = TB_CFG_PKG_ICM_RESP;
mutex_lock(&icm->request_lock);
res = tb_cfg_request_sync(tb->ctl, req, timeout_msec);
mutex_unlock(&icm->request_lock);
tb_cfg_request_put(req);
if (res.err != -ETIMEDOUT)
return res.err == 1 ? -EIO : res.err;
usleep_range(20, 50);
} while (retries--);
return -ETIMEDOUT;
}
static bool icm_fr_is_supported(struct tb *tb)
{
return !is_apple();
}
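
/*
 * Each port entry in the topology response encodes the port type in
 * the low 24 bits (ICM_PORT_TYPE_MASK) and the index of the switch
 * reachable through that port in the topmost byte
 * (ICM_PORT_INDEX_SHIFT). An index of 0xff is treated as no switch
 * being connected.
 */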
static inline int icm_fr_get_switch_index(u32 port)
{
int index;
if ((port & ICM_PORT_TYPE_MASK) != TB_TYPE_PORT)
return 0;
index = port >> ICM_PORT_INDEX_SHIFT;
return index != 0xff ? index : 0;
}
static int icm_fr_get_route(struct tb *tb, u8 link, u8 depth, u64 *route)
{
struct icm_fr_pkg_get_topology_response *switches, *sw;
struct icm_fr_pkg_get_topology request = {
.hdr = { .code = ICM_GET_TOPOLOGY },
};
size_t npackets = ICM_GET_TOPOLOGY_PACKETS;
int ret, index;
u8 i;
switches = kcalloc(npackets, sizeof(*switches), GFP_KERNEL);
if (!switches)
return -ENOMEM;
ret = icm_request(tb, &request, sizeof(request), switches,
sizeof(*switches), npackets, ICM_TIMEOUT);
if (ret)
goto err_free;
sw = &switches[0];
index = icm_fr_get_switch_index(sw->ports[link]);
if (!index) {
ret = -ENODEV;
goto err_free;
}
sw = &switches[index];
for (i = 1; i < depth; i++) {
unsigned int j;
if (!(sw->first_data & ICM_SWITCH_USED)) {
ret = -ENODEV;
goto err_free;
}
for (j = 0; j < ARRAY_SIZE(sw->ports); j++) {
index = icm_fr_get_switch_index(sw->ports[j]);
if (index > sw->switch_index) {
sw = &switches[index];
break;
}
}
}
*route = get_route(sw->route_hi, sw->route_lo);
err_free:
kfree(switches);
return ret;
}
static int icm_fr_approve_switch(struct tb *tb, struct tb_switch *sw)
{
struct icm_fr_pkg_approve_device request;
struct icm_fr_pkg_approve_device reply;
int ret;
memset(&request, 0, sizeof(request));
memcpy(&request.ep_uuid, sw->uuid, sizeof(request.ep_uuid));
request.hdr.code = ICM_APPROVE_DEVICE;
request.connection_id = sw->connection_id;
request.connection_key = sw->connection_key;
memset(&reply, 0, sizeof(reply));
/* Use larger timeout as establishing tunnels can take some time */
ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply),
1, 10000);
if (ret)
return ret;
if (reply.hdr.flags & ICM_FLAGS_ERROR) {
tb_warn(tb, "PCIe tunnel creation failed\n");
return -EIO;
}
return 0;
}
static int icm_fr_add_switch_key(struct tb *tb, struct tb_switch *sw)
{
struct icm_fr_pkg_add_device_key request;
struct icm_fr_pkg_add_device_key_response reply;
int ret;
memset(&request, 0, sizeof(request));
memcpy(&request.ep_uuid, sw->uuid, sizeof(request.ep_uuid));
request.hdr.code = ICM_ADD_DEVICE_KEY;
request.connection_id = sw->connection_id;
request.connection_key = sw->connection_key;
memcpy(request.key, sw->key, TB_SWITCH_KEY_SIZE);
memset(&reply, 0, sizeof(reply));
ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply),
1, ICM_TIMEOUT);
if (ret)
return ret;
if (reply.hdr.flags & ICM_FLAGS_ERROR) {
tb_warn(tb, "Adding key to switch failed\n");
return -EIO;
}
return 0;
}
static int icm_fr_challenge_switch_key(struct tb *tb, struct tb_switch *sw,
const u8 *challenge, u8 *response)
{
struct icm_fr_pkg_challenge_device request;
struct icm_fr_pkg_challenge_device_response reply;
int ret;
memset(&request, 0, sizeof(request));
memcpy(&request.ep_uuid, sw->uuid, sizeof(request.ep_uuid));
request.hdr.code = ICM_CHALLENGE_DEVICE;
request.connection_id = sw->connection_id;
request.connection_key = sw->connection_key;
memcpy(request.challenge, challenge, TB_SWITCH_KEY_SIZE);
memset(&reply, 0, sizeof(reply));
ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply),
1, ICM_TIMEOUT);
if (ret)
return ret;
if (reply.hdr.flags & ICM_FLAGS_ERROR)
return -EKEYREJECTED;
if (reply.hdr.flags & ICM_FLAGS_NO_KEY)
return -ENOKEY;
memcpy(response, reply.response, TB_SWITCH_KEY_SIZE);
return 0;
}
static void remove_switch(struct tb_switch *sw)
{
struct tb_switch *parent_sw;
parent_sw = tb_to_switch(sw->dev.parent);
tb_port_at(tb_route(sw), parent_sw)->remote = NULL;
tb_switch_remove(sw);
}
static void
icm_fr_device_connected(struct tb *tb, const struct icm_pkg_header *hdr)
{
const struct icm_fr_event_device_connected *pkg =
(const struct icm_fr_event_device_connected *)hdr;
struct tb_switch *sw, *parent_sw;
struct icm *icm = tb_priv(tb);
bool authorized = false;
u8 link, depth;
u64 route;
int ret;
link = pkg->link_info & ICM_LINK_INFO_LINK_MASK;
depth = (pkg->link_info & ICM_LINK_INFO_DEPTH_MASK) >>
ICM_LINK_INFO_DEPTH_SHIFT;
authorized = pkg->link_info & ICM_LINK_INFO_APPROVED;
ret = icm->get_route(tb, link, depth, &route);
if (ret) {
tb_err(tb, "failed to find route string for switch at %u.%u\n",
link, depth);
return;
}
sw = tb_switch_find_by_uuid(tb, &pkg->ep_uuid);
if (sw) {
u8 phy_port, sw_phy_port;
parent_sw = tb_to_switch(sw->dev.parent);
sw_phy_port = phy_port_from_route(tb_route(sw), sw->depth);
phy_port = phy_port_from_route(route, depth);
/*
* On resume ICM will send us connected events for the
* devices that still are present. However, that
* information might have changed for example by the
* fact that a switch on a dual-link connection might
* have been enumerated using the other link now. Make
* sure our book keeping matches that.
*/
if (sw->depth == depth && sw_phy_port == phy_port &&
!!sw->authorized == authorized) {
tb_port_at(tb_route(sw), parent_sw)->remote = NULL;
tb_port_at(route, parent_sw)->remote =
tb_upstream_port(sw);
sw->config.route_hi = upper_32_bits(route);
sw->config.route_lo = lower_32_bits(route);
sw->connection_id = pkg->connection_id;
sw->connection_key = pkg->connection_key;
sw->link = link;
sw->depth = depth;
sw->is_unplugged = false;
tb_switch_put(sw);
return;
}
/*
* User connected the same switch to another physical
* port or to another part of the topology. Remove the
* existing switch now before adding the new one.
*/
remove_switch(sw);
tb_switch_put(sw);
}
/*
* If the switch was not found by UUID, look for a switch on
* same physical port (taking possible link aggregation into
* account) and depth. If we found one it is definitely a stale
* one so remove it first.
*/
sw = tb_switch_find_by_link_depth(tb, link, depth);
if (!sw) {
u8 dual_link;
dual_link = dual_link_from_link(link);
if (dual_link)
sw = tb_switch_find_by_link_depth(tb, dual_link, depth);
}
if (sw) {
remove_switch(sw);
tb_switch_put(sw);
}
parent_sw = tb_switch_find_by_link_depth(tb, link, depth - 1);
if (!parent_sw) {
tb_err(tb, "failed to find parent switch for %u.%u\n",
link, depth);
return;
}
sw = tb_switch_alloc(tb, &parent_sw->dev, route);
if (!sw) {
tb_switch_put(parent_sw);
return;
}
sw->uuid = kmemdup(&pkg->ep_uuid, sizeof(pkg->ep_uuid), GFP_KERNEL);
sw->connection_id = pkg->connection_id;
sw->connection_key = pkg->connection_key;
sw->link = link;
sw->depth = depth;
sw->authorized = authorized;
sw->security_level = (pkg->hdr.flags & ICM_FLAGS_SLEVEL_MASK) >>
ICM_FLAGS_SLEVEL_SHIFT;
/* Link the two switches now */
tb_port_at(route, parent_sw)->remote = tb_upstream_port(sw);
tb_upstream_port(sw)->remote = tb_port_at(route, parent_sw);
ret = tb_switch_add(sw);
if (ret) {
tb_port_at(tb_route(sw), parent_sw)->remote = NULL;
tb_switch_put(sw);
}
tb_switch_put(parent_sw);
}
static void
icm_fr_device_disconnected(struct tb *tb, const struct icm_pkg_header *hdr)
{
const struct icm_fr_event_device_disconnected *pkg =
(const struct icm_fr_event_device_disconnected *)hdr;
struct tb_switch *sw;
u8 link, depth;
link = pkg->link_info & ICM_LINK_INFO_LINK_MASK;
depth = (pkg->link_info & ICM_LINK_INFO_DEPTH_MASK) >>
ICM_LINK_INFO_DEPTH_SHIFT;
if (link > ICM_MAX_LINK || depth > ICM_MAX_DEPTH) {
tb_warn(tb, "invalid topology %u.%u, ignoring\n", link, depth);
return;
}
sw = tb_switch_find_by_link_depth(tb, link, depth);
if (!sw) {
tb_warn(tb, "no switch exists at %u.%u, ignoring\n", link,
depth);
return;
}
remove_switch(sw);
tb_switch_put(sw);
}
static struct pci_dev *get_upstream_port(struct pci_dev *pdev)
{
struct pci_dev *parent;
parent = pci_upstream_bridge(pdev);
while (parent) {
if (!pci_is_pcie(parent))
return NULL;
if (pci_pcie_type(parent) == PCI_EXP_TYPE_UPSTREAM)
break;
parent = pci_upstream_bridge(parent);
}
if (!parent)
return NULL;
switch (parent->device) {
case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_2C_BRIDGE:
case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_4C_BRIDGE:
case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_LP_BRIDGE:
case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_4C_BRIDGE:
case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_2C_BRIDGE:
return parent;
}
return NULL;
}
static bool icm_ar_is_supported(struct tb *tb)
{
struct pci_dev *upstream_port;
struct icm *icm = tb_priv(tb);
/*
* Starting from Alpine Ridge we can use ICM on Apple machines
* as well. We just need to reset and re-enable it first.
*/
if (!is_apple())
return true;
/*
* Find the upstream PCIe port in case we need to do reset
* through its vendor specific registers.
*/
upstream_port = get_upstream_port(tb->nhi->pdev);
if (upstream_port) {
int cap;
cap = pci_find_ext_capability(upstream_port,
PCI_EXT_CAP_ID_VNDR);
if (cap > 0) {
icm->upstream_port = upstream_port;
icm->vnd_cap = cap;
return true;
}
}
return false;
}
static int icm_ar_get_mode(struct tb *tb)
{
struct tb_nhi *nhi = tb->nhi;
int retries = 5;
u32 val;
do {
val = ioread32(nhi->iobase + REG_FW_STS);
if (val & REG_FW_STS_NVM_AUTH_DONE)
break;
msleep(30);
} while (--retries);
if (!retries) {
dev_err(&nhi->pdev->dev, "ICM firmware not authenticated\n");
return -ENODEV;
}
return nhi_mailbox_mode(nhi);
}
static int icm_ar_get_route(struct tb *tb, u8 link, u8 depth, u64 *route)
{
struct icm_ar_pkg_get_route_response reply;
struct icm_ar_pkg_get_route request = {
.hdr = { .code = ICM_GET_ROUTE },
.link_info = depth << ICM_LINK_INFO_DEPTH_SHIFT | link,
};
int ret;
memset(&reply, 0, sizeof(reply));
ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply),
1, ICM_TIMEOUT);
if (ret)
return ret;
if (reply.hdr.flags & ICM_FLAGS_ERROR)
return -EIO;
*route = get_route(reply.route_hi, reply.route_lo);
return 0;
}
static void icm_handle_notification(struct work_struct *work)
{
struct icm_notification *n = container_of(work, typeof(*n), work);
struct tb *tb = n->tb;
struct icm *icm = tb_priv(tb);
mutex_lock(&tb->lock);
switch (n->pkg->code) {
case ICM_EVENT_DEVICE_CONNECTED:
icm->device_connected(tb, n->pkg);
break;
case ICM_EVENT_DEVICE_DISCONNECTED:
icm->device_disconnected(tb, n->pkg);
break;
}
mutex_unlock(&tb->lock);
kfree(n->pkg);
kfree(n);
}
static void icm_handle_event(struct tb *tb, enum tb_cfg_pkg_type type,
const void *buf, size_t size)
{
struct icm_notification *n;
n = kmalloc(sizeof(*n), GFP_KERNEL);
if (!n)
return;
INIT_WORK(&n->work, icm_handle_notification);
n->pkg = kmemdup(buf, size, GFP_KERNEL);
n->tb = tb;
queue_work(tb->wq, &n->work);
}
static int
__icm_driver_ready(struct tb *tb, enum tb_security_level *security_level)
{
struct icm_pkg_driver_ready_response reply;
struct icm_pkg_driver_ready request = {
.hdr.code = ICM_DRIVER_READY,
};
unsigned int retries = 10;
int ret;
memset(&reply, 0, sizeof(reply));
ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply),
1, ICM_TIMEOUT);
if (ret)
return ret;
if (security_level)
*security_level = reply.security_level & 0xf;
/*
* Hold on here until the switch config space is accessible so
* that we can read root switch config successfully.
*/
do {
struct tb_cfg_result res;
u32 tmp;
res = tb_cfg_read_raw(tb->ctl, &tmp, 0, 0, TB_CFG_SWITCH,
0, 1, 100);
if (!res.err)
return 0;
msleep(50);
} while (--retries);
return -ETIMEDOUT;
}
static int pci2cio_wait_completion(struct icm *icm, unsigned long timeout_msec)
{
unsigned long end = jiffies + msecs_to_jiffies(timeout_msec);
u32 cmd;
do {
pci_read_config_dword(icm->upstream_port,
icm->vnd_cap + PCIE2CIO_CMD, &cmd);
if (!(cmd & PCIE2CIO_CMD_START)) {
if (cmd & PCIE2CIO_CMD_TIMEOUT)
break;
return 0;
}
msleep(50);
} while (time_before(jiffies, end));
return -ETIMEDOUT;
}
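
/*
 * PCIe2CIO is a mailbox in the upstream bridge's vendor specific PCI
 * config space that allows issuing config space reads and writes on
 * the Thunderbolt fabric before the ICM firmware is running: the
 * command dword encodes the config space, port and offset, the START
 * bit kicks off the transaction, and completion is polled for above.
 */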
static int pcie2cio_read(struct icm *icm, enum tb_cfg_space cs,
unsigned int port, unsigned int index, u32 *data)
{
struct pci_dev *pdev = icm->upstream_port;
int ret, vnd_cap = icm->vnd_cap;
u32 cmd;
cmd = index;
cmd |= (port << PCIE2CIO_CMD_PORT_SHIFT) & PCIE2CIO_CMD_PORT_MASK;
cmd |= (cs << PCIE2CIO_CMD_CS_SHIFT) & PCIE2CIO_CMD_CS_MASK;
cmd |= PCIE2CIO_CMD_START;
pci_write_config_dword(pdev, vnd_cap + PCIE2CIO_CMD, cmd);
ret = pci2cio_wait_completion(icm, 5000);
if (ret)
return ret;
pci_read_config_dword(pdev, vnd_cap + PCIE2CIO_RDDATA, data);
return 0;
}
static int pcie2cio_write(struct icm *icm, enum tb_cfg_space cs,
unsigned int port, unsigned int index, u32 data)
{
struct pci_dev *pdev = icm->upstream_port;
int vnd_cap = icm->vnd_cap;
u32 cmd;
pci_write_config_dword(pdev, vnd_cap + PCIE2CIO_WRDATA, data);
cmd = index;
cmd |= (port << PCIE2CIO_CMD_PORT_SHIFT) & PCIE2CIO_CMD_PORT_MASK;
cmd |= (cs << PCIE2CIO_CMD_CS_SHIFT) & PCIE2CIO_CMD_CS_MASK;
cmd |= PCIE2CIO_CMD_WRITE | PCIE2CIO_CMD_START;
pci_write_config_dword(pdev, vnd_cap + PCIE2CIO_CMD, cmd);
return pci2cio_wait_completion(icm, 5000);
}
static int icm_firmware_reset(struct tb *tb, struct tb_nhi *nhi)
{
struct icm *icm = tb_priv(tb);
u32 val;
/* Put ARC to wait for CIO reset event to happen */
val = ioread32(nhi->iobase + REG_FW_STS);
val |= REG_FW_STS_CIO_RESET_REQ;
iowrite32(val, nhi->iobase + REG_FW_STS);
/* Re-start ARC */
val = ioread32(nhi->iobase + REG_FW_STS);
val |= REG_FW_STS_ICM_EN_INVERT;
val |= REG_FW_STS_ICM_EN_CPU;
iowrite32(val, nhi->iobase + REG_FW_STS);
/* Trigger CIO reset now */
return pcie2cio_write(icm, TB_CFG_SWITCH, 0, 0x50, BIT(9));
}
static int icm_firmware_start(struct tb *tb, struct tb_nhi *nhi)
{
unsigned int retries = 10;
int ret;
u32 val;
/* Check if the ICM firmware is already running */
val = ioread32(nhi->iobase + REG_FW_STS);
if (val & REG_FW_STS_ICM_EN)
return 0;
dev_info(&nhi->pdev->dev, "starting ICM firmware\n");
ret = icm_firmware_reset(tb, nhi);
if (ret)
return ret;
/* Wait until the ICM firmware tells us it is up and running */
do {
/* Check that the ICM firmware is running */
val = ioread32(nhi->iobase + REG_FW_STS);
if (val & REG_FW_STS_NVM_AUTH_DONE)
return 0;
msleep(300);
} while (--retries);
return -ETIMEDOUT;
}
static int icm_reset_phy_port(struct tb *tb, int phy_port)
{
struct icm *icm = tb_priv(tb);
u32 state0, state1;
int port0, port1;
u32 val0, val1;
int ret;
if (!icm->upstream_port)
return 0;
if (phy_port) {
port0 = 3;
port1 = 4;
} else {
port0 = 1;
port1 = 2;
}
/*
* Read link status of both null ports belonging to a single
* physical port.
*/
ret = pcie2cio_read(icm, TB_CFG_PORT, port0, PHY_PORT_CS1, &val0);
if (ret)
return ret;
ret = pcie2cio_read(icm, TB_CFG_PORT, port1, PHY_PORT_CS1, &val1);
if (ret)
return ret;
state0 = val0 & PHY_PORT_CS1_LINK_STATE_MASK;
state0 >>= PHY_PORT_CS1_LINK_STATE_SHIFT;
state1 = val1 & PHY_PORT_CS1_LINK_STATE_MASK;
state1 >>= PHY_PORT_CS1_LINK_STATE_SHIFT;
/* If they are both up we need to reset them now */
if (state0 != TB_PORT_UP || state1 != TB_PORT_UP)
return 0;
val0 |= PHY_PORT_CS1_LINK_DISABLE;
ret = pcie2cio_write(icm, TB_CFG_PORT, port0, PHY_PORT_CS1, val0);
if (ret)
return ret;
val1 |= PHY_PORT_CS1_LINK_DISABLE;
ret = pcie2cio_write(icm, TB_CFG_PORT, port1, PHY_PORT_CS1, val1);
if (ret)
return ret;
/* Wait a bit and then re-enable both ports */
usleep_range(10, 100);
ret = pcie2cio_read(icm, TB_CFG_PORT, port0, PHY_PORT_CS1, &val0);
if (ret)
return ret;
ret = pcie2cio_read(icm, TB_CFG_PORT, port1, PHY_PORT_CS1, &val1);
if (ret)
return ret;
val0 &= ~PHY_PORT_CS1_LINK_DISABLE;
ret = pcie2cio_write(icm, TB_CFG_PORT, port0, PHY_PORT_CS1, val0);
if (ret)
return ret;
val1 &= ~PHY_PORT_CS1_LINK_DISABLE;
return pcie2cio_write(icm, TB_CFG_PORT, port1, PHY_PORT_CS1, val1);
}
static int icm_firmware_init(struct tb *tb)
{
struct icm *icm = tb_priv(tb);
struct tb_nhi *nhi = tb->nhi;
int ret;
ret = icm_firmware_start(tb, nhi);
if (ret) {
dev_err(&nhi->pdev->dev, "could not start ICM firmware\n");
return ret;
}
if (icm->get_mode) {
ret = icm->get_mode(tb);
switch (ret) {
case NHI_FW_CM_MODE:
/* Ask ICM to accept all Thunderbolt devices */
nhi_mailbox_cmd(nhi, NHI_MAILBOX_ALLOW_ALL_DEVS, 0);
break;
default:
tb_err(tb, "ICM firmware is in wrong mode: %u\n", ret);
return -ENODEV;
}
}
/*
* Reset both physical ports if there is anything connected to
* them already.
*/
ret = icm_reset_phy_port(tb, 0);
if (ret)
dev_warn(&nhi->pdev->dev, "failed to reset links on port0\n");
ret = icm_reset_phy_port(tb, 1);
if (ret)
dev_warn(&nhi->pdev->dev, "failed to reset links on port1\n");
return 0;
}
static int icm_driver_ready(struct tb *tb)
{
int ret;
ret = icm_firmware_init(tb);
if (ret)
return ret;
return __icm_driver_ready(tb, &tb->security_level);
}
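
/*
 * NHI_MAILBOX_SAVE_DEVS below presumably asks the firmware to preserve
 * the currently connected devices across suspend; after resume,
 * icm_complete() re-sends the driver ready message and the firmware
 * replays connected events for the devices still present.
 */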
static int icm_suspend(struct tb *tb)
{
return nhi_mailbox_cmd(tb->nhi, NHI_MAILBOX_SAVE_DEVS, 0);
}
/*
* Mark all switches (except root switch) below this one unplugged. ICM
 * firmware will send us an updated list of switches after we have sent
 * it the driver ready command. If a switch is not in that list it will be
* removed when we perform rescan.
*/
static void icm_unplug_children(struct tb_switch *sw)
{
unsigned int i;
if (tb_route(sw))
sw->is_unplugged = true;
for (i = 1; i <= sw->config.max_port_number; i++) {
struct tb_port *port = &sw->ports[i];
if (tb_is_upstream_port(port))
continue;
if (!port->remote)
continue;
icm_unplug_children(port->remote->sw);
}
}
static void icm_free_unplugged_children(struct tb_switch *sw)
{
unsigned int i;
for (i = 1; i <= sw->config.max_port_number; i++) {
struct tb_port *port = &sw->ports[i];
if (tb_is_upstream_port(port))
continue;
if (!port->remote)
continue;
if (port->remote->sw->is_unplugged) {
tb_switch_remove(port->remote->sw);
port->remote = NULL;
} else {
icm_free_unplugged_children(port->remote->sw);
}
}
}
static void icm_rescan_work(struct work_struct *work)
{
struct icm *icm = container_of(work, struct icm, rescan_work.work);
struct tb *tb = icm_to_tb(icm);
mutex_lock(&tb->lock);
if (tb->root_switch)
icm_free_unplugged_children(tb->root_switch);
mutex_unlock(&tb->lock);
}
static void icm_complete(struct tb *tb)
{
struct icm *icm = tb_priv(tb);
if (tb->nhi->going_away)
return;
icm_unplug_children(tb->root_switch);
/*
* Now all existing children should be resumed, start events
* from ICM to get updated status.
*/
__icm_driver_ready(tb, NULL);
/*
* We do not get notifications of devices that have been
* unplugged during suspend so schedule rescan to clean them up
* if any.
*/
queue_delayed_work(tb->wq, &icm->rescan_work, msecs_to_jiffies(500));
}
static int icm_start(struct tb *tb)
{
int ret;
tb->root_switch = tb_switch_alloc(tb, &tb->dev, 0);
if (!tb->root_switch)
return -ENODEV;
ret = tb_switch_add(tb->root_switch);
if (ret)
tb_switch_put(tb->root_switch);
return ret;
}
static void icm_stop(struct tb *tb)
{
struct icm *icm = tb_priv(tb);
cancel_delayed_work(&icm->rescan_work);
tb_switch_remove(tb->root_switch);
tb->root_switch = NULL;
nhi_mailbox_cmd(tb->nhi, NHI_MAILBOX_DRV_UNLOADS, 0);
}
/* Falcon Ridge and Alpine Ridge */
static const struct tb_cm_ops icm_fr_ops = {
.driver_ready = icm_driver_ready,
.start = icm_start,
.stop = icm_stop,
.suspend = icm_suspend,
.complete = icm_complete,
.handle_event = icm_handle_event,
.approve_switch = icm_fr_approve_switch,
.add_switch_key = icm_fr_add_switch_key,
.challenge_switch_key = icm_fr_challenge_switch_key,
};
struct tb *icm_probe(struct tb_nhi *nhi)
{
struct icm *icm;
struct tb *tb;
tb = tb_domain_alloc(nhi, sizeof(struct icm));
if (!tb)
return NULL;
icm = tb_priv(tb);
INIT_DELAYED_WORK(&icm->rescan_work, icm_rescan_work);
mutex_init(&icm->request_lock);
switch (nhi->pdev->device) {
case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_NHI:
case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI:
icm->is_supported = icm_fr_is_supported;
icm->get_route = icm_fr_get_route;
icm->device_connected = icm_fr_device_connected;
icm->device_disconnected = icm_fr_device_disconnected;
tb->cm_ops = &icm_fr_ops;
break;
case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_2C_NHI:
case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_4C_NHI:
case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_LP_NHI:
case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_4C_NHI:
case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_2C_NHI:
icm->is_supported = icm_ar_is_supported;
icm->get_mode = icm_ar_get_mode;
icm->get_route = icm_ar_get_route;
icm->device_connected = icm_fr_device_connected;
icm->device_disconnected = icm_fr_device_disconnected;
tb->cm_ops = &icm_fr_ops;
break;
}
if (!icm->is_supported || !icm->is_supported(tb)) {
dev_dbg(&nhi->pdev->dev, "ICM not supported on this controller\n");
tb_domain_put(tb);
return NULL;
}
return tb;
}
@@ -13,7 +13,6 @@
#include <linux/pci.h>
#include <linux/interrupt.h>
#include <linux/module.h>
-#include <linux/dmi.h>
#include <linux/delay.h>

#include "nhi.h"

@@ -668,6 +667,22 @@ static int nhi_resume_noirq(struct device *dev)
	return tb_domain_resume_noirq(tb);
}
static int nhi_suspend(struct device *dev)
{
struct pci_dev *pdev = to_pci_dev(dev);
struct tb *tb = pci_get_drvdata(pdev);
return tb_domain_suspend(tb);
}
static void nhi_complete(struct device *dev)
{
struct pci_dev *pdev = to_pci_dev(dev);
struct tb *tb = pci_get_drvdata(pdev);
tb_domain_complete(tb);
}
static void nhi_shutdown(struct tb_nhi *nhi)
{
	int i;
@@ -784,10 +799,16 @@ static int nhi_probe(struct pci_dev *pdev, const struct pci_device_id *id)
	/* magic value - clock related? */
	iowrite32(3906250 / 10000, nhi->iobase + 0x38c00);

-	dev_info(&nhi->pdev->dev, "NHI initialized, starting thunderbolt\n");
-	tb = tb_probe(nhi);
	tb = icm_probe(nhi);
	if (!tb)
		tb = tb_probe(nhi);
	if (!tb) {
		dev_err(&nhi->pdev->dev,
			"failed to determine connection manager, aborting\n");
		return -ENODEV;
	}

	dev_info(&nhi->pdev->dev, "NHI initialized, starting thunderbolt\n");

	res = tb_domain_add(tb);
	if (res) {
@@ -826,6 +847,10 @@ static const struct dev_pm_ops nhi_pm_ops = {
	 * pci-tunnels stay alive.
	 */
	.restore_noirq = nhi_resume_noirq,
.suspend = nhi_suspend,
.freeze = nhi_suspend,
.poweroff = nhi_suspend,
.complete = nhi_complete,
};

static struct pci_device_id nhi_ids[] = {

@@ -886,8 +911,6 @@ static int __init nhi_init(void)
{
	int ret;

-	if (!dmi_match(DMI_BOARD_VENDOR, "Apple Inc."))
-		return -ENOSYS;

	ret = tb_domain_init();
	if (ret)
		return ret;
...
@@ -118,4 +118,11 @@ struct ring_desc {
#define REG_OUTMAIL_CMD_OPMODE_SHIFT	8
#define REG_OUTMAIL_CMD_OPMODE_MASK	GENMASK(11, 8)
#define REG_FW_STS 0x39944
#define REG_FW_STS_NVM_AUTH_DONE BIT(31)
#define REG_FW_STS_CIO_RESET_REQ BIT(30)
#define REG_FW_STS_ICM_EN_CPU BIT(2)
#define REG_FW_STS_ICM_EN_INVERT BIT(1)
#define REG_FW_STS_ICM_EN BIT(0)
#endif

@@ -9,6 +9,9 @@
#include "tb.h"

/* Switch authorization from userspace is serialized by this lock */
static DEFINE_MUTEX(switch_lock);

/* port utility functions */

static const char *tb_port_type(struct tb_regs_port_header *port)

@@ -310,6 +313,75 @@ static int tb_plug_events_active(struct tb_switch *sw, bool active)
			sw->cap_plug_events + 1, 1);
}
static ssize_t authorized_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct tb_switch *sw = tb_to_switch(dev);
return sprintf(buf, "%u\n", sw->authorized);
}
static int tb_switch_set_authorized(struct tb_switch *sw, unsigned int val)
{
int ret = -EINVAL;
if (mutex_lock_interruptible(&switch_lock))
return -ERESTARTSYS;
if (sw->authorized)
goto unlock;
switch (val) {
/* Approve switch */
case 1:
if (sw->key)
ret = tb_domain_approve_switch_key(sw->tb, sw);
else
ret = tb_domain_approve_switch(sw->tb, sw);
break;
/* Challenge switch */
case 2:
if (sw->key)
ret = tb_domain_challenge_switch_key(sw->tb, sw);
break;
default:
break;
}
if (!ret) {
sw->authorized = val;
/* Notify status change to the userspace */
kobject_uevent(&sw->dev.kobj, KOBJ_CHANGE);
}
unlock:
mutex_unlock(&switch_lock);
return ret;
}
static ssize_t authorized_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
struct tb_switch *sw = tb_to_switch(dev);
unsigned int val;
ssize_t ret;
ret = kstrtouint(buf, 0, &val);
if (ret)
return ret;
if (val > 2)
return -EINVAL;
ret = tb_switch_set_authorized(sw, val);
return ret ? ret : count;
}
static DEVICE_ATTR_RW(authorized);
static ssize_t device_show(struct device *dev, struct device_attribute *attr,
			   char *buf)
{

@@ -328,6 +400,54 @@ device_name_show(struct device *dev, struct device_attribute *attr, char *buf)
}
static DEVICE_ATTR_RO(device_name);
static ssize_t key_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct tb_switch *sw = tb_to_switch(dev);
ssize_t ret;
if (mutex_lock_interruptible(&switch_lock))
return -ERESTARTSYS;
if (sw->key)
ret = sprintf(buf, "%*phN\n", TB_SWITCH_KEY_SIZE, sw->key);
else
ret = sprintf(buf, "\n");
mutex_unlock(&switch_lock);
return ret;
}
static ssize_t key_store(struct device *dev, struct device_attribute *attr,
const char *buf, size_t count)
{
struct tb_switch *sw = tb_to_switch(dev);
u8 key[TB_SWITCH_KEY_SIZE];
ssize_t ret = count;
if (count < 64)
return -EINVAL;
if (hex2bin(key, buf, sizeof(key)))
return -EINVAL;
if (mutex_lock_interruptible(&switch_lock))
return -ERESTARTSYS;
if (sw->authorized) {
ret = -EBUSY;
} else {
kfree(sw->key);
sw->key = kmemdup(key, sizeof(key), GFP_KERNEL);
if (!sw->key)
ret = -ENOMEM;
}
mutex_unlock(&switch_lock);
return ret;
}
static DEVICE_ATTR_RW(key);
static ssize_t vendor_show(struct device *dev, struct device_attribute *attr,
			   char *buf)
{

@@ -356,15 +476,35 @@ static ssize_t unique_id_show(struct device *dev, struct device_attribute *attr,
static DEVICE_ATTR_RO(unique_id);

static struct attribute *switch_attrs[] = {
	&dev_attr_authorized.attr,
	&dev_attr_device.attr,
	&dev_attr_device_name.attr,
	&dev_attr_key.attr,
	&dev_attr_vendor.attr,
	&dev_attr_vendor_name.attr,
	&dev_attr_unique_id.attr,
	NULL,
};
static umode_t switch_attr_is_visible(struct kobject *kobj,
struct attribute *attr, int n)
{
struct device *dev = container_of(kobj, struct device, kobj);
struct tb_switch *sw = tb_to_switch(dev);
if (attr == &dev_attr_key.attr) {
if (tb_route(sw) &&
sw->tb->security_level == TB_SECURITY_SECURE &&
sw->security_level == TB_SECURITY_SECURE)
return attr->mode;
return 0;
}
return attr->mode;
}
static struct attribute_group switch_group = {
	.is_visible = switch_attr_is_visible,
	.attrs = switch_attrs,
};

@@ -384,6 +524,7 @@ static void tb_switch_release(struct device *dev)
	kfree(sw->vendor_name);
	kfree(sw->ports);
	kfree(sw->drom);
	kfree(sw->key);
	kfree(sw);
}
@@ -490,6 +631,10 @@ struct tb_switch *tb_switch_alloc(struct tb *tb, struct device *parent,
	}
	sw->cap_plug_events = cap;

	/* Root switch is always authorized */
	if (!route)
		sw->authorized = true;

	device_initialize(&sw->dev);
	sw->dev.parent = parent;
	sw->dev.bus = &tb_bus_type;

@@ -754,3 +899,80 @@ void tb_switch_suspend(struct tb_switch *sw)
	 * effect?
	 */
}
struct tb_sw_lookup {
struct tb *tb;
u8 link;
u8 depth;
const uuid_be *uuid;
};
static int tb_switch_match(struct device *dev, void *data)
{
struct tb_switch *sw = tb_to_switch(dev);
struct tb_sw_lookup *lookup = data;
if (!sw)
return 0;
if (sw->tb != lookup->tb)
return 0;
if (lookup->uuid)
return !memcmp(sw->uuid, lookup->uuid, sizeof(*lookup->uuid));
/* Root switch is matched only by depth */
if (!lookup->depth)
return !sw->depth;
return sw->link == lookup->link && sw->depth == lookup->depth;
}
/**
* tb_switch_find_by_link_depth() - Find switch by link and depth
 * @tb: Domain the switch belongs to
 * @link: Link number the switch is connected to
 * @depth: Depth of the switch in the link
*
* Returned switch has reference count increased so the caller needs to
* call tb_switch_put() when done with the switch.
*/
struct tb_switch *tb_switch_find_by_link_depth(struct tb *tb, u8 link, u8 depth)
{
struct tb_sw_lookup lookup;
struct device *dev;
memset(&lookup, 0, sizeof(lookup));
lookup.tb = tb;
lookup.link = link;
lookup.depth = depth;
dev = bus_find_device(&tb_bus_type, NULL, &lookup, tb_switch_match);
if (dev)
return tb_to_switch(dev);
return NULL;
}
/**
 * tb_switch_find_by_uuid() - Find switch by UUID
 * @tb: Domain the switch belongs to
* @uuid: UUID to look for
*
* Returned switch has reference count increased so the caller needs to
* call tb_switch_put() when done with the switch.
*/
struct tb_switch *tb_switch_find_by_uuid(struct tb *tb, const uuid_be *uuid)
{
struct tb_sw_lookup lookup;
struct device *dev;
memset(&lookup, 0, sizeof(lookup));
lookup.tb = tb;
lookup.uuid = uuid;
dev = bus_find_device(&tb_bus_type, NULL, &lookup, tb_switch_match);
if (dev)
return tb_to_switch(dev);
return NULL;
}
@@ -7,6 +7,7 @@
#include <linux/slab.h>
#include <linux/errno.h>
#include <linux/delay.h>
#include <linux/dmi.h>

#include "tb.h"
#include "tb_regs.h"

@@ -71,6 +72,8 @@ static void tb_scan_port(struct tb_port *port)
		return;
	}

	sw->authorized = true;

	if (tb_switch_add(sw)) {
		tb_switch_put(sw);
		return;

@@ -443,10 +446,14 @@ struct tb *tb_probe(struct tb_nhi *nhi)
	struct tb_cm *tcm;
	struct tb *tb;

	if (!dmi_match(DMI_BOARD_VENDOR, "Apple Inc."))
		return NULL;

	tb = tb_domain_alloc(nhi, sizeof(*tcm));
	if (!tb)
		return NULL;

	tb->security_level = TB_SECURITY_NONE;
	tb->cm_ops = &tb_cm_ops;

	tcm = tb_priv(tb);
...
@@ -14,6 +14,24 @@
#include "ctl.h"
#include "dma_port.h"
/**
* enum tb_security_level - Thunderbolt security level
* @TB_SECURITY_NONE: No security, legacy mode
* @TB_SECURITY_USER: User approval required at minimum
* @TB_SECURITY_SECURE: One time saved key required at minimum
* @TB_SECURITY_DPONLY: Only tunnel Display port (and USB)
*/
enum tb_security_level {
TB_SECURITY_NONE,
TB_SECURITY_USER,
TB_SECURITY_SECURE,
TB_SECURITY_DPONLY,
};
#define TB_SWITCH_KEY_SIZE 32
/* Each physical port contains 2 links on modern controllers */
#define TB_SWITCH_LINKS_PER_PHY_PORT 2
/**
 * struct tb_switch - a thunderbolt switch
 * @dev: Device for the switch

@@ -33,6 +51,19 @@
 * @cap_plug_events: Offset to the plug events capability (%0 if not found)
 * @is_unplugged: The switch is going away
 * @drom: DROM of the switch (%NULL if not found)
* @authorized: Whether the switch is authorized by user or policy
* @work: Work used to automatically authorize a switch
* @security_level: Switch supported security level
* @key: Contains the key used to challenge the device or %NULL if not
* supported. Size of the key is %TB_SWITCH_KEY_SIZE.
* @connection_id: Connection ID used with ICM messaging
* @connection_key: Connection key used with ICM messaging
 * @link: Root switch link this switch is connected to (ICM only)
 * @depth: Depth in the chain this switch is connected at (ICM only)
 *
 * When the switch is being added to or removed from the domain (other
 * switches) you need to have the domain lock held. For switch
 * authorization the internal switch_lock is enough.
 */
struct tb_switch {
	struct device dev;
@@ -50,6 +81,14 @@ struct tb_switch {
	int cap_plug_events;
	bool is_unplugged;
	u8 *drom;
unsigned int authorized;
struct work_struct work;
enum tb_security_level security_level;
u8 *key;
u8 connection_id;
u8 connection_key;
u8 link;
u8 depth;
};

/**

@@ -121,19 +160,33 @@ struct tb_path {

/**
 * struct tb_cm_ops - Connection manager specific operations vector
* @driver_ready: Called right after control channel is started. Used by
* ICM to send driver ready message to the firmware.
 * @start: Starts the domain
 * @stop: Stops the domain
 * @suspend_noirq: Connection manager specific suspend_noirq
 * @resume_noirq: Connection manager specific resume_noirq
* @suspend: Connection manager specific suspend
* @complete: Connection manager specific complete
 * @handle_event: Handle thunderbolt event
* @approve_switch: Approve switch
* @add_switch_key: Add key to switch
* @challenge_switch_key: Challenge switch using key
 */
struct tb_cm_ops {
int (*driver_ready)(struct tb *tb);
	int (*start)(struct tb *tb);
	void (*stop)(struct tb *tb);
	int (*suspend_noirq)(struct tb *tb);
	int (*resume_noirq)(struct tb *tb);
int (*suspend)(struct tb *tb);
void (*complete)(struct tb *tb);
	void (*handle_event)(struct tb *tb, enum tb_cfg_pkg_type,
			     const void *buf, size_t size);
int (*approve_switch)(struct tb *tb, struct tb_switch *sw);
int (*add_switch_key)(struct tb *tb, struct tb_switch *sw);
int (*challenge_switch_key)(struct tb *tb, struct tb_switch *sw,
const u8 *challenge, u8 *response);
};

/**

@@ -147,6 +200,7 @@ struct tb_cm_ops {
 * @root_switch: Root switch of this domain
 * @cm_ops: Connection manager specific operations vector
 * @index: Linux assigned domain number
* @security_level: Current security level
 * @privdata: Private connection manager specific data
 */
struct tb {

@@ -158,6 +212,7 @@ struct tb {
	struct tb_switch *root_switch;
	const struct tb_cm_ops *cm_ops;
	int index;
enum tb_security_level security_level;
	unsigned long privdata[0];
};

@@ -188,6 +243,16 @@ static inline u64 tb_route(struct tb_switch *sw)
	return ((u64) sw->config.route_hi) << 32 | sw->config.route_lo;
}
static inline struct tb_port *tb_port_at(u64 route, struct tb_switch *sw)
{
u8 port;
port = route >> (sw->config.depth * 8);
if (WARN_ON(port > sw->config.max_port_number))
return NULL;
return &sw->ports[port];
}
static inline int tb_sw_read(struct tb_switch *sw, void *buffer,
			     enum tb_cfg_space space, u32 offset, u32 length)
{

@@ -266,6 +331,7 @@ static inline int tb_port_write(struct tb_port *port, const void *buffer,
#define tb_port_info(port, fmt, arg...) \
	__TB_PORT_PRINT(tb_info, port, fmt, ##arg)
struct tb *icm_probe(struct tb_nhi *nhi);
struct tb *tb_probe(struct tb_nhi *nhi);

extern struct bus_type tb_bus_type;

@@ -280,6 +346,11 @@ int tb_domain_add(struct tb *tb);
void tb_domain_remove(struct tb *tb);
int tb_domain_suspend_noirq(struct tb *tb);
int tb_domain_resume_noirq(struct tb *tb);
int tb_domain_suspend(struct tb *tb);
void tb_domain_complete(struct tb *tb);
int tb_domain_approve_switch(struct tb *tb, struct tb_switch *sw);
int tb_domain_approve_switch_key(struct tb *tb, struct tb_switch *sw);
int tb_domain_challenge_switch_key(struct tb *tb, struct tb_switch *sw);
static inline void tb_domain_put(struct tb *tb)
{

@@ -296,6 +367,14 @@ int tb_switch_resume(struct tb_switch *sw);
int tb_switch_reset(struct tb *tb, u64 route);
void tb_sw_set_unplugged(struct tb_switch *sw);
struct tb_switch *get_switch_at_route(struct tb_switch *sw, u64 route);
struct tb_switch *tb_switch_find_by_link_depth(struct tb *tb, u8 link,
u8 depth);
struct tb_switch *tb_switch_find_by_uuid(struct tb *tb, const uuid_be *uuid);
static inline unsigned int tb_switch_phy_port_from_link(unsigned int link)
{
return (link - 1) / TB_SWITCH_LINKS_PER_PHY_PORT;
}
static inline void tb_switch_put(struct tb_switch *sw)
{
...

@@ -13,6 +13,7 @@
#define _TB_MSGS

#include <linux/types.h>
#include <linux/uuid.h>
enum tb_cfg_pkg_type {
	TB_CFG_PKG_READ = 1,

@@ -24,6 +25,9 @@ enum tb_cfg_pkg_type {
	TB_CFG_PKG_XDOMAIN_RESP = 7,
	TB_CFG_PKG_OVERRIDE = 8,
	TB_CFG_PKG_RESET = 9,
TB_CFG_PKG_ICM_EVENT = 10,
TB_CFG_PKG_ICM_CMD = 11,
TB_CFG_PKG_ICM_RESP = 12,
	TB_CFG_PKG_PREPARE_TO_SLEEP = 0xd,
};

@@ -105,4 +109,152 @@ struct cfg_pts_pkg {
	u32 data;
} __packed;
/* ICM messages */
enum icm_pkg_code {
ICM_GET_TOPOLOGY = 0x1,
ICM_DRIVER_READY = 0x3,
ICM_APPROVE_DEVICE = 0x4,
ICM_CHALLENGE_DEVICE = 0x5,
ICM_ADD_DEVICE_KEY = 0x6,
ICM_GET_ROUTE = 0xa,
};
enum icm_event_code {
ICM_EVENT_DEVICE_CONNECTED = 3,
ICM_EVENT_DEVICE_DISCONNECTED = 4,
};
struct icm_pkg_header {
u8 code;
u8 flags;
u8 packet_id;
u8 total_packets;
} __packed;
#define ICM_FLAGS_ERROR BIT(0)
#define ICM_FLAGS_NO_KEY BIT(1)
#define ICM_FLAGS_SLEVEL_SHIFT 3
#define ICM_FLAGS_SLEVEL_MASK GENMASK(4, 3)
struct icm_pkg_driver_ready {
struct icm_pkg_header hdr;
} __packed;
struct icm_pkg_driver_ready_response {
struct icm_pkg_header hdr;
u8 romver;
u8 ramver;
u16 security_level;
} __packed;
/* Falcon Ridge & Alpine Ridge common messages */
struct icm_fr_pkg_get_topology {
struct icm_pkg_header hdr;
} __packed;
#define ICM_GET_TOPOLOGY_PACKETS 14
struct icm_fr_pkg_get_topology_response {
struct icm_pkg_header hdr;
u32 route_lo;
u32 route_hi;
u8 first_data;
u8 second_data;
u8 drom_i2c_address_index;
u8 switch_index;
u32 reserved[2];
u32 ports[16];
u32 port_hop_info[16];
} __packed;
#define ICM_SWITCH_USED BIT(0)
#define ICM_SWITCH_UPSTREAM_PORT_MASK GENMASK(7, 1)
#define ICM_SWITCH_UPSTREAM_PORT_SHIFT 1
#define ICM_PORT_TYPE_MASK GENMASK(23, 0)
#define ICM_PORT_INDEX_SHIFT 24
#define ICM_PORT_INDEX_MASK GENMASK(31, 24)
struct icm_fr_event_device_connected {
struct icm_pkg_header hdr;
uuid_be ep_uuid;
u8 connection_key;
u8 connection_id;
u16 link_info;
u32 ep_name[55];
} __packed;
#define ICM_LINK_INFO_LINK_MASK 0x7
#define ICM_LINK_INFO_DEPTH_SHIFT 4
#define ICM_LINK_INFO_DEPTH_MASK GENMASK(7, 4)
#define ICM_LINK_INFO_APPROVED BIT(8)
struct icm_fr_pkg_approve_device {
struct icm_pkg_header hdr;
uuid_be ep_uuid;
u8 connection_key;
u8 connection_id;
u16 reserved;
} __packed;
struct icm_fr_event_device_disconnected {
struct icm_pkg_header hdr;
u16 reserved;
u16 link_info;
} __packed;
struct icm_fr_pkg_add_device_key {
struct icm_pkg_header hdr;
uuid_be ep_uuid;
u8 connection_key;
u8 connection_id;
u16 reserved;
u32 key[8];
} __packed;
struct icm_fr_pkg_add_device_key_response {
struct icm_pkg_header hdr;
uuid_be ep_uuid;
u8 connection_key;
u8 connection_id;
u16 reserved;
} __packed;
struct icm_fr_pkg_challenge_device {
struct icm_pkg_header hdr;
uuid_be ep_uuid;
u8 connection_key;
u8 connection_id;
u16 reserved;
u32 challenge[8];
} __packed;
struct icm_fr_pkg_challenge_device_response {
struct icm_pkg_header hdr;
uuid_be ep_uuid;
u8 connection_key;
u8 connection_id;
u16 reserved;
u32 challenge[8];
u32 response[8];
} __packed;
/* Alpine Ridge only messages */
struct icm_ar_pkg_get_route {
struct icm_pkg_header hdr;
u16 reserved;
u16 link_info;
} __packed;
struct icm_ar_pkg_get_route_response {
struct icm_pkg_header hdr;
u16 reserved;
u16 link_info;
u32 route_hi;
u32 route_lo;
} __packed;
#endif