Commit a212d9f3 authored by David S. Miller

Merge branch 'iosm-driver'

M Chetan Kumar says:

====================
net: iosm: PCIe Driver for Intel M.2 Modem

The IOSM (IPC over Shared Memory) driver is a PCIe host driver implemented
for Linux and Chrome platforms for data exchange over the PCIe interface
between the host platform and an Intel M.2 modem. The driver exposes an
interface conforming to the MBIM protocol. Any front-end application
(e.g. Modem Manager) can manage the MBIM interface to enable data
communication towards WWAN.

The Intel M.2 modem uses 2 BAR regions. The first region is dedicated to the
doorbell registers for IRQs and the second region is used as a scratchpad area
for bookkeeping the modem execution stage details along with the host system
shared memory region context details. The upper edge of the driver exposes the
control and data channels for user space application interaction. At the lower
edge these data and control channels are associated with pipes. The pipes are
the lowest level interfaces used over PCIe as logical channels for message
exchange. A single channel maps to a UL and a DL pipe, which are initialized
on device open.

On the UL path, the driver copies application-sent data into SKBs, associates
each with a transfer descriptor and places it on the ring buffer for DMA
transfer. Once the information has been updated in the shared memory region,
the host rings a doorbell so the modem performs the DMA, and the modem uses an
MSI to signal completion back to the host. For receiving data on the DL path,
SKBs are pre-allocated during pipe open and the transfer descriptors are given
to the modem for DMA transfer.
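
Sketched roughly, the UL hand-off looks like the fragment below. The struct
name, fields and the doorbell value are illustrative only, not the driver's
actual symbols; the real descriptor and doorbell handling live in
iosm_ipc_protocol_ops.c and iosm_ipc_irq.c of this series.

    #include <linux/dma-mapping.h>
    #include <linux/io.h>
    #include <linux/skbuff.h>

    struct example_td {                 /* transfer descriptor in shared memory */
            __le64 address;
            __le32 size;
    } __packed;

    static int example_ul_push(struct device *dev, void __iomem *doorbell,
                               struct example_td *td, struct sk_buff *skb)
    {
            dma_addr_t mapping;

            mapping = dma_map_single(dev, skb->data, skb->len, DMA_TO_DEVICE);
            if (dma_mapping_error(dev, mapping))
                    return -ENOMEM;

            td->address = cpu_to_le64(mapping);
            td->size = cpu_to_le32(skb->len);

            /* Ring the doorbell so the modem starts the DMA transfer. */
            iowrite32(1, doorbell);     /* doorbell value illustrative */
            return 0;
    }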

The driver exposes two types of ports: "wwan0mbim0", a char device node which
is used for MBIM control operations, and "wwan0-X" (X = 0,1,...,7) network
interfaces for IP data communication.
1) MBIM Control Interface:
This node exposes an interface between the modem and an application, using the
char device exposed by the "IOSM" driver to establish and manage MBIM data
communication with PCIe based Intel M.2 modems.

2) MBIM Data Interface:
The IOSM driver exposes an IP link interface "wwan0-X" of type "wwan" for IP
traffic. The iproute2 network utility is used for creating a "wwan0-X" network
interface and for associating it with an MBIM IP session. The driver supports
up to 8 IP sessions for simultaneous IP communication.

This applies on top of the WWAN core rtnetlink series posted here:
https://lore.kernel.org/netdev/1623486057-13075-1-git-send-email-loic.poulain@linaro.org/

The driver has also been compiled and tested on top of the netdev net-next tree:
https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git/
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
parents ffbbc5e5 f7af616c
...@@ -18,6 +18,7 @@ Contents:
qlogic/index
wan/index
wifi/index
wwan/index
.. only:: subproject and html
......
.. SPDX-License-Identifier: GPL-2.0-only
WWAN Device Drivers
===================
Contents:
.. toctree::
:maxdepth: 2
iosm
.. only:: subproject and html
Indices
=======
* :ref:`genindex`
.. SPDX-License-Identifier: GPL-2.0-only
.. Copyright (C) 2020-21 Intel Corporation
.. _iosm_driver_doc:
===========================================
IOSM Driver for Intel M.2 PCIe based Modems
===========================================
The IOSM (IPC over Shared Memory) driver is a WWAN PCIe host driver developed
for Linux and Chrome platforms for data exchange over the PCIe interface
between the host platform and an Intel M.2 modem. The driver exposes an
interface conforming to the MBIM protocol [1]. Any front-end application
(e.g. Modem Manager) can manage the MBIM interface to enable data
communication towards WWAN.
Basic usage
===========
MBIM functions are inactive when unmanaged. The IOSM driver only provides a
userspace interface, an MBIM "WWAN PORT" representing the MBIM control channel,
and does not play any role in managing the functionality. It is the job of a
userspace application to detect port enumeration and enable MBIM functionality.
Examples of such userspace applications are:
- mbimcli (included with the libmbim [2] library), and
- Modem Manager [3]
A management application has to carry out the following actions to establish
an MBIM IP session:
- open the MBIM control channel
- configure network connection settings
- connect to network
- configure IP network interface
Management application development
==================================
The driver and userspace interfaces are described below. The MBIM protocol is
described in [1] Mobile Broadband Interface Model v1.0 Errata-1.
MBIM control channel userspace ABI
----------------------------------
/dev/wwan0mbim0 character device
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The driver exposes an MBIM interface to the MBIM function by implementing
MBIM WWAN Port. The userspace end of the control channel pipe is a
/dev/wwan0mbim0 character device. The application shall use this interface for
MBIM protocol communication.
Fragmentation
~~~~~~~~~~~~~
The userspace application is responsible for all control message fragmentation
and defragmentation as per MBIM specification.
/dev/wwan0mbim0 write()
~~~~~~~~~~~~~~~~~~~~~~~
The MBIM control messages from the management application must not exceed the
negotiated control message size.
/dev/wwan0mbim0 read()
~~~~~~~~~~~~~~~~~~~~~~
The management application must accept control messages of up to the negotiated
control message size.
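As a minimal sketch (not part of the driver), a client can open the character
device and issue the initial MBIM_OPEN_MSG defined in [1]; the transaction id
and maximum control transfer size below are arbitrary example values::

  #include <fcntl.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <unistd.h>

  struct mbim_open_msg {
          uint32_t message_type;           /* MBIM_OPEN_MSG = 0x00000001 */
          uint32_t message_length;         /* total message length */
          uint32_t transaction_id;
          uint32_t max_control_transfer;   /* max control message size */
  };

  int main(void)
  {
          struct mbim_open_msg open_msg = {
                  .message_type = 0x00000001,
                  .message_length = sizeof(open_msg),
                  .transaction_id = 1,
                  .max_control_transfer = 4096,
          };
          char resp[4096];
          ssize_t n;
          int fd = open("/dev/wwan0mbim0", O_RDWR);

          if (fd < 0)
                  return 1;
          if (write(fd, &open_msg, sizeof(open_msg)) != sizeof(open_msg))
                  return 1;
          /* MBIM_OPEN_DONE is expected back on the same channel. */
          n = read(fd, resp, sizeof(resp));
          printf("read %zd bytes of MBIM_OPEN_DONE\n", n);
          close(fd);
          return 0;
  }

Note that MBIM fields are little-endian on the wire; the sketch assumes a
little-endian host.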
MBIM data channel userspace ABI
-------------------------------
wwan0-X network device
~~~~~~~~~~~~~~~~~~~~~~
The IOSM driver exposes an IP link interface "wwan0-X" of type "wwan" for IP
traffic. The iproute2 network utility is used for creating a "wwan0-X" network
interface and for associating it with an MBIM IP session. The driver supports
up to 8 IP sessions for simultaneous IP communication.
The userspace management application is responsible for creating a new IP link
prior to establishing an MBIM IP session where the SessionId is greater than 0.
For example, creating a new IP link for an MBIM IP session with SessionId 1:
ip link add dev wwan0-1 parentdev-name wwan0 type wwan linkid 1
The driver will automatically map the "wwan0-1" network device to MBIM IP
session 1.
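Once the MBIM IP session has been disconnected, the link can be removed again
with the iproute2 utility, for example "ip link del dev wwan0-1".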
References
==========
[1] "MBIM (Mobile Broadband Interface Model) Errata-1"
- https://www.usb.org/document-library/
[2] libmbim - "a glib-based library for talking to WWAN modems and
devices which speak the Mobile Interface Broadband Model (MBIM)
protocol"
- http://www.freedesktop.org/wiki/Software/libmbim/
[3] Modem Manager - "a DBus-activated daemon which controls mobile
broadband (2G/3G/4G) devices and connections"
- http://www.freedesktop.org/wiki/Software/ModemManager/
...@@ -9453,6 +9453,13 @@ L: Dell.Client.Kernel@dell.com
S: Maintained
F: drivers/platform/x86/intel-wmi-thunderbolt.c
INTEL WWAN IOSM DRIVER
M: M Chetan Kumar <m.chetan.kumar@intel.com>
M: Intel Corporation <linuxwwan@intel.com>
L: netdev@vger.kernel.org
S: Maintained
F: drivers/net/wwan/iosm/
INTEL(R) TRACE HUB
M: Alexander Shishkin <alexander.shishkin@linux.intel.com>
S: Supported
......
...@@ -44,4 +44,16 @@ config MHI_WWAN_CTRL
To compile this driver as a module, choose M here: the module will be
called mhi_wwan_ctrl.
config IOSM
tristate "IOSM Driver for Intel M.2 WWAN Device"
select WWAN_CORE
depends on INTEL_IOMMU
help
This driver enables Intel M.2 WWAN Device communication.
If you have one of those Intel M.2 WWAN Modules and wish to use it in
Linux say Y/M here.
If unsure, say N.
endif # WWAN
...@@ -9,3 +9,4 @@ wwan-objs += wwan_core.o
obj-$(CONFIG_WWAN_HWSIM) += wwan_hwsim.o
obj-$(CONFIG_MHI_WWAN_CTRL) += mhi_wwan_ctrl.o
obj-$(CONFIG_IOSM) += iosm/
# SPDX-License-Identifier: (GPL-2.0-only)
#
# Copyright (C) 2020-21 Intel Corporation.
#
iosm-y = \
iosm_ipc_task_queue.o \
iosm_ipc_imem.o \
iosm_ipc_imem_ops.o \
iosm_ipc_mmio.o \
iosm_ipc_port.o \
iosm_ipc_wwan.o \
iosm_ipc_uevent.o \
iosm_ipc_pm.o \
iosm_ipc_pcie.o \
iosm_ipc_irq.o \
iosm_ipc_chnl_cfg.o \
iosm_ipc_protocol.o \
iosm_ipc_protocol_ops.o \
iosm_ipc_mux.o \
iosm_ipc_mux_codec.o
obj-$(CONFIG_IOSM) := iosm.o
# compilation flags
ccflags-y += -DDEBUG
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (C) 2020-21 Intel Corporation.
*/
#include <linux/wwan.h>
#include "iosm_ipc_chnl_cfg.h"
/* Max. sizes of downlink buffers */
#define IPC_MEM_MAX_DL_FLASH_BUF_SIZE (16 * 1024)
#define IPC_MEM_MAX_DL_LOOPBACK_SIZE (1 * 1024 * 1024)
#define IPC_MEM_MAX_DL_AT_BUF_SIZE 2048
#define IPC_MEM_MAX_DL_RPC_BUF_SIZE (32 * 1024)
#define IPC_MEM_MAX_DL_MBIM_BUF_SIZE IPC_MEM_MAX_DL_RPC_BUF_SIZE
/* Max. transfer descriptors for a pipe. */
#define IPC_MEM_MAX_TDS_FLASH_DL 3
#define IPC_MEM_MAX_TDS_FLASH_UL 6
#define IPC_MEM_MAX_TDS_AT 4
#define IPC_MEM_MAX_TDS_RPC 4
#define IPC_MEM_MAX_TDS_MBIM IPC_MEM_MAX_TDS_RPC
#define IPC_MEM_MAX_TDS_LOOPBACK 11
/* Accumulation backoff usec */
#define IRQ_ACC_BACKOFF_OFF 0
/* MUX acc backoff 1ms */
#define IRQ_ACC_BACKOFF_MUX 1000
/* Modem channel configuration table
* Always reserve element zero for flash channel.
*/
static struct ipc_chnl_cfg modem_cfg[] = {
/* IP Mux */
{ IPC_MEM_IP_CHL_ID_0, IPC_MEM_PIPE_0, IPC_MEM_PIPE_1,
IPC_MEM_MAX_TDS_MUX_LITE_UL, IPC_MEM_MAX_TDS_MUX_LITE_DL,
IPC_MEM_MAX_DL_MUX_LITE_BUF_SIZE, WWAN_PORT_UNKNOWN },
/* RPC - 0 */
{ IPC_MEM_CTRL_CHL_ID_1, IPC_MEM_PIPE_2, IPC_MEM_PIPE_3,
IPC_MEM_MAX_TDS_RPC, IPC_MEM_MAX_TDS_RPC,
IPC_MEM_MAX_DL_RPC_BUF_SIZE, WWAN_PORT_UNKNOWN },
/* IAT0 */
{ IPC_MEM_CTRL_CHL_ID_2, IPC_MEM_PIPE_4, IPC_MEM_PIPE_5,
IPC_MEM_MAX_TDS_AT, IPC_MEM_MAX_TDS_AT, IPC_MEM_MAX_DL_AT_BUF_SIZE,
WWAN_PORT_AT },
/* Trace */
{ IPC_MEM_CTRL_CHL_ID_3, IPC_MEM_PIPE_6, IPC_MEM_PIPE_7,
IPC_MEM_TDS_TRC, IPC_MEM_TDS_TRC, IPC_MEM_MAX_DL_TRC_BUF_SIZE,
WWAN_PORT_UNKNOWN },
/* IAT1 */
{ IPC_MEM_CTRL_CHL_ID_4, IPC_MEM_PIPE_8, IPC_MEM_PIPE_9,
IPC_MEM_MAX_TDS_AT, IPC_MEM_MAX_TDS_AT, IPC_MEM_MAX_DL_AT_BUF_SIZE,
WWAN_PORT_AT },
/* Loopback */
{ IPC_MEM_CTRL_CHL_ID_5, IPC_MEM_PIPE_10, IPC_MEM_PIPE_11,
IPC_MEM_MAX_TDS_LOOPBACK, IPC_MEM_MAX_TDS_LOOPBACK,
IPC_MEM_MAX_DL_LOOPBACK_SIZE, WWAN_PORT_UNKNOWN },
/* MBIM Channel */
{ IPC_MEM_CTRL_CHL_ID_6, IPC_MEM_PIPE_12, IPC_MEM_PIPE_13,
IPC_MEM_MAX_TDS_MBIM, IPC_MEM_MAX_TDS_MBIM,
IPC_MEM_MAX_DL_MBIM_BUF_SIZE, WWAN_PORT_MBIM },
};
int ipc_chnl_cfg_get(struct ipc_chnl_cfg *chnl_cfg, int index)
{
int array_size = ARRAY_SIZE(modem_cfg);
if (index >= array_size) {
pr_err("index: %d and array_size %d", index, array_size);
return -ECHRNG;
}
if (index == IPC_MEM_MUX_IP_CH_IF_ID)
chnl_cfg->accumulation_backoff = IRQ_ACC_BACKOFF_MUX;
else
chnl_cfg->accumulation_backoff = IRQ_ACC_BACKOFF_OFF;
chnl_cfg->ul_nr_of_entries = modem_cfg[index].ul_nr_of_entries;
chnl_cfg->dl_nr_of_entries = modem_cfg[index].dl_nr_of_entries;
chnl_cfg->dl_buf_size = modem_cfg[index].dl_buf_size;
chnl_cfg->id = modem_cfg[index].id;
chnl_cfg->ul_pipe = modem_cfg[index].ul_pipe;
chnl_cfg->dl_pipe = modem_cfg[index].dl_pipe;
chnl_cfg->wwan_port_type = modem_cfg[index].wwan_port_type;
return 0;
}
/* SPDX-License-Identifier: GPL-2.0-only
*
* Copyright (C) 2020-21 Intel Corporation
*/
#ifndef IOSM_IPC_CHNL_CFG_H
#define IOSM_IPC_CHNL_CFG_H
#include "iosm_ipc_mux.h"
/* Number of TDs on the trace channel */
#define IPC_MEM_TDS_TRC 32
/* Trace channel TD buffer size. */
#define IPC_MEM_MAX_DL_TRC_BUF_SIZE 8192
/* Channel ID */
enum ipc_channel_id {
IPC_MEM_IP_CHL_ID_0 = 0,
IPC_MEM_CTRL_CHL_ID_1,
IPC_MEM_CTRL_CHL_ID_2,
IPC_MEM_CTRL_CHL_ID_3,
IPC_MEM_CTRL_CHL_ID_4,
IPC_MEM_CTRL_CHL_ID_5,
IPC_MEM_CTRL_CHL_ID_6,
};
/**
* struct ipc_chnl_cfg - IPC channel configuration structure
* @id: Interface ID
* @ul_pipe: Uplink datastream
* @dl_pipe: Downlink datastream
* @ul_nr_of_entries: Number of transfer descriptors on the uplink pipe
* @dl_nr_of_entries: Number of transfer descriptors on the downlink pipe
* @dl_buf_size: Downlink buffer size
* @wwan_port_type: WWAN subsystem port type
* @accumulation_backoff: Time in usec for data accumulation
*/
struct ipc_chnl_cfg {
u32 id;
u32 ul_pipe;
u32 dl_pipe;
u32 ul_nr_of_entries;
u32 dl_nr_of_entries;
u32 dl_buf_size;
u32 wwan_port_type;
u32 accumulation_backoff;
};
/**
* ipc_chnl_cfg_get - Get pipe configuration.
* @chnl_cfg: Channel configuration struct to be filled in
* @index: Channel index (up to MAX_CHANNELS)
*
* Return: 0 on success and failure value on error
*/
int ipc_chnl_cfg_get(struct ipc_chnl_cfg *chnl_cfg, int index);
#endif
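
A minimal sketch of how a caller might consume this table through
ipc_chnl_cfg_get() (illustrative only; the driver's real callers live in
iosm_ipc_imem.c, which is collapsed in this view):

    #include <linux/device.h>

    #include "iosm_ipc_chnl_cfg.h"

    /* Fetch the MBIM entry of the static channel table and dump its pipes. */
    static void example_show_mbim_chnl(struct device *dev)
    {
            struct ipc_chnl_cfg cfg;

            /* IPC_MEM_CTRL_CHL_ID_6 indexes the MBIM entry in modem_cfg[]. */
            if (ipc_chnl_cfg_get(&cfg, IPC_MEM_CTRL_CHL_ID_6))
                    return;

            dev_dbg(dev, "MBIM: ul_pipe %u dl_pipe %u tds %u/%u dl_buf %u\n",
                    cfg.ul_pipe, cfg.dl_pipe, cfg.ul_nr_of_entries,
                    cfg.dl_nr_of_entries, cfg.dl_buf_size);
    }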
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (C) 2020-21 Intel Corporation.
*/
#include <linux/delay.h>
#include "iosm_ipc_chnl_cfg.h"
#include "iosm_ipc_imem.h"
#include "iosm_ipc_imem_ops.h"
#include "iosm_ipc_port.h"
#include "iosm_ipc_task_queue.h"
/* Open a packet data online channel between the network layer and CP. */
int ipc_imem_sys_wwan_open(struct iosm_imem *ipc_imem, int if_id)
{
dev_dbg(ipc_imem->dev, "%s if id: %d",
ipc_imem_phase_get_string(ipc_imem->phase), if_id);
/* The network interface is only supported in the runtime phase. */
if (ipc_imem_phase_update(ipc_imem) != IPC_P_RUN) {
dev_err(ipc_imem->dev, "net:%d : refused phase %s", if_id,
ipc_imem_phase_get_string(ipc_imem->phase));
return -EIO;
}
/* Check the interface id:
* if if_id is in the range 1 to 8, open an IP MUX channel session.
* MUX sessions are counted from 0 while network interface ids start
* from 1, so map the session as if_id = if_id - 1.
*/
if (if_id >= IP_MUX_SESSION_START && if_id <= IP_MUX_SESSION_END)
return ipc_mux_open_session(ipc_imem->mux, if_id - 1);
return -EINVAL;
}
/* Release a net link to CP. */
void ipc_imem_sys_wwan_close(struct iosm_imem *ipc_imem, int if_id,
int channel_id)
{
if (ipc_imem->mux && if_id >= IP_MUX_SESSION_START &&
if_id <= IP_MUX_SESSION_END)
ipc_mux_close_session(ipc_imem->mux, if_id - 1);
}
/* Tasklet call to do uplink transfer. */
static int ipc_imem_tq_cdev_write(struct iosm_imem *ipc_imem, int arg,
void *msg, size_t size)
{
ipc_imem->ev_cdev_write_pending = false;
ipc_imem_ul_send(ipc_imem);
return 0;
}
/* Through tasklet to do sio write. */
static int ipc_imem_call_cdev_write(struct iosm_imem *ipc_imem)
{
if (ipc_imem->ev_cdev_write_pending)
return -1;
ipc_imem->ev_cdev_write_pending = true;
return ipc_task_queue_send_task(ipc_imem, ipc_imem_tq_cdev_write, 0,
NULL, 0, false);
}
/* Function to transfer UL data */
int ipc_imem_sys_wwan_transmit(struct iosm_imem *ipc_imem,
int if_id, int channel_id, struct sk_buff *skb)
{
int ret = -EINVAL;
if (!ipc_imem || channel_id < 0)
goto out;
/* Is CP Running? */
if (ipc_imem->phase != IPC_P_RUN) {
dev_dbg(ipc_imem->dev, "phase %s transmit",
ipc_imem_phase_get_string(ipc_imem->phase));
ret = -EIO;
goto out;
}
if (if_id >= IP_MUX_SESSION_START && if_id <= IP_MUX_SESSION_END)
/* Route the UL packet through IP MUX Layer */
ret = ipc_mux_ul_trigger_encode(ipc_imem->mux,
if_id - 1, skb);
else
dev_err(ipc_imem->dev,
"invalid if_id %d: ", if_id);
out:
return ret;
}
/* Initialize wwan channel */
void ipc_imem_wwan_channel_init(struct iosm_imem *ipc_imem,
enum ipc_mux_protocol mux_type)
{
struct ipc_chnl_cfg chnl_cfg = { 0 };
ipc_imem->cp_version = ipc_mmio_get_cp_version(ipc_imem->mmio);
/* If modem version is invalid (0xffffffff), do not initialize WWAN. */
if (ipc_imem->cp_version == -1) {
dev_err(ipc_imem->dev, "invalid CP version");
return;
}
ipc_chnl_cfg_get(&chnl_cfg, ipc_imem->nr_of_channels);
ipc_imem_channel_init(ipc_imem, IPC_CTYPE_WWAN, chnl_cfg,
IRQ_MOD_OFF);
/* WWAN registration. */
ipc_imem->wwan = ipc_wwan_init(ipc_imem, ipc_imem->dev);
if (!ipc_imem->wwan)
dev_err(ipc_imem->dev,
"failed to register the ipc_wwan interfaces");
}
/* Map SKB to DMA for transfer */
static int ipc_imem_map_skb_to_dma(struct iosm_imem *ipc_imem,
struct sk_buff *skb)
{
struct iosm_pcie *ipc_pcie = ipc_imem->pcie;
char *buf = skb->data;
int len = skb->len;
dma_addr_t mapping;
int ret;
ret = ipc_pcie_addr_map(ipc_pcie, buf, len, &mapping, DMA_TO_DEVICE);
if (ret)
goto err;
BUILD_BUG_ON(sizeof(*IPC_CB(skb)) > sizeof(skb->cb));
IPC_CB(skb)->mapping = mapping;
IPC_CB(skb)->direction = DMA_TO_DEVICE;
IPC_CB(skb)->len = len;
IPC_CB(skb)->op_type = (u8)UL_DEFAULT;
err:
return ret;
}
/* return true if channel is ready for use */
static bool ipc_imem_is_channel_active(struct iosm_imem *ipc_imem,
struct ipc_mem_channel *channel)
{
enum ipc_phase phase;
/* Update the current operation phase. */
phase = ipc_imem->phase;
/* Select the operation depending on the execution stage. */
switch (phase) {
case IPC_P_RUN:
case IPC_P_PSI:
case IPC_P_EBL:
break;
case IPC_P_ROM:
/* Prepare the PSI image for the CP ROM driver and
* suspend the flash app.
*/
if (channel->state != IMEM_CHANNEL_RESERVED) {
dev_err(ipc_imem->dev,
"ch[%d]:invalid channel state %d,expected %d",
channel->channel_id, channel->state,
IMEM_CHANNEL_RESERVED);
goto channel_unavailable;
}
goto channel_available;
default:
/* Ignore uplink actions in all other phases. */
dev_err(ipc_imem->dev, "ch[%d]: confused phase %d",
channel->channel_id, phase);
goto channel_unavailable;
}
/* Check the full availability of the channel. */
if (channel->state != IMEM_CHANNEL_ACTIVE) {
dev_err(ipc_imem->dev, "ch[%d]: confused channel state %d",
channel->channel_id, channel->state);
goto channel_unavailable;
}
channel_available:
return true;
channel_unavailable:
return false;
}
/* Release a sio link to CP. */
void ipc_imem_sys_cdev_close(struct iosm_cdev *ipc_cdev)
{
struct iosm_imem *ipc_imem = ipc_cdev->ipc_imem;
struct ipc_mem_channel *channel = ipc_cdev->channel;
enum ipc_phase curr_phase;
int status = 0;
u32 tail = 0;
curr_phase = ipc_imem->phase;
/* If the current phase is IPC_P_OFF or the SIO ID is negative then
* the channel is already freed. Nothing to do.
*/
if (curr_phase == IPC_P_OFF) {
dev_err(ipc_imem->dev,
"nothing to do. Current Phase: %s",
ipc_imem_phase_get_string(curr_phase));
return;
}
if (channel->state == IMEM_CHANNEL_FREE) {
dev_err(ipc_imem->dev, "ch[%d]: invalid channel state %d",
channel->channel_id, channel->state);
return;
}
/* If there are any pending TDs then wait for Timeout/Completion before
* closing pipe.
*/
if (channel->ul_pipe.old_tail != channel->ul_pipe.old_head) {
ipc_imem->app_notify_ul_pend = 1;
/* Suspend the user app and wait a certain time for processing
* UL Data.
*/
status = wait_for_completion_interruptible_timeout
(&ipc_imem->ul_pend_sem,
msecs_to_jiffies(IPC_PEND_DATA_TIMEOUT));
if (status == 0) {
dev_dbg(ipc_imem->dev,
"Pend data Timeout UL-Pipe:%d Head:%d Tail:%d",
channel->ul_pipe.pipe_nr,
channel->ul_pipe.old_head,
channel->ul_pipe.old_tail);
}
ipc_imem->app_notify_ul_pend = 0;
}
/* If there are any pending TDs then wait for Timeout/Completion before
* closing pipe.
*/
ipc_protocol_get_head_tail_index(ipc_imem->ipc_protocol,
&channel->dl_pipe, NULL, &tail);
if (tail != channel->dl_pipe.old_tail) {
ipc_imem->app_notify_dl_pend = 1;
/* Suspend the user app and wait a certain time for processing
* DL Data.
*/
status = wait_for_completion_interruptible_timeout
(&ipc_imem->dl_pend_sem,
msecs_to_jiffies(IPC_PEND_DATA_TIMEOUT));
if (status == 0) {
dev_dbg(ipc_imem->dev,
"Pend data Timeout DL-Pipe:%d Head:%d Tail:%d",
channel->dl_pipe.pipe_nr,
channel->dl_pipe.old_head,
channel->dl_pipe.old_tail);
}
ipc_imem->app_notify_dl_pend = 0;
}
/* Due to the wait for completion in messages, there is a small window
* between closing the pipe and updating the channel state to closed. In this
* small window there could be an HP update from the host driver. Hence update
* the channel state to CLOSING to avoid an unnecessary interrupt
* towards CP.
*/
channel->state = IMEM_CHANNEL_CLOSING;
ipc_imem_pipe_close(ipc_imem, &channel->ul_pipe);
ipc_imem_pipe_close(ipc_imem, &channel->dl_pipe);
ipc_imem_channel_free(channel);
}
/* Open a PORT link to CP and return the channel */
struct ipc_mem_channel *ipc_imem_sys_port_open(struct iosm_imem *ipc_imem,
int chl_id, int hp_id)
{
struct ipc_mem_channel *channel;
int ch_id;
/* The PORT interface is only supported in the runtime phase. */
if (ipc_imem_phase_update(ipc_imem) != IPC_P_RUN) {
dev_err(ipc_imem->dev, "PORT open refused, phase %s",
ipc_imem_phase_get_string(ipc_imem->phase));
return NULL;
}
ch_id = ipc_imem_channel_alloc(ipc_imem, chl_id, IPC_CTYPE_CTRL);
if (ch_id < 0) {
dev_err(ipc_imem->dev, "reservation of an PORT chnl id failed");
return NULL;
}
channel = ipc_imem_channel_open(ipc_imem, ch_id, hp_id);
if (!channel) {
dev_err(ipc_imem->dev, "PORT channel id open failed");
return NULL;
}
return channel;
}
/* transfer skb to modem */
int ipc_imem_sys_cdev_write(struct iosm_cdev *ipc_cdev, struct sk_buff *skb)
{
struct ipc_mem_channel *channel = ipc_cdev->channel;
struct iosm_imem *ipc_imem = ipc_cdev->ipc_imem;
int ret = -EIO;
if (!ipc_imem_is_channel_active(ipc_imem, channel) ||
ipc_imem->phase == IPC_P_OFF_REQ)
goto out;
ret = ipc_imem_map_skb_to_dma(ipc_imem, skb);
if (ret)
goto out;
/* Add skb to the uplink skbuf accumulator. */
skb_queue_tail(&channel->ul_list, skb);
ret = ipc_imem_call_cdev_write(ipc_imem);
if (ret) {
skb_dequeue_tail(&channel->ul_list);
dev_err(ipc_cdev->dev, "channel id[%d] write failed\n",
ipc_cdev->channel->channel_id);
}
out:
return ret;
}
/* SPDX-License-Identifier: GPL-2.0-only
*
* Copyright (C) 2020-21 Intel Corporation.
*/
#ifndef IOSM_IPC_IMEM_OPS_H
#define IOSM_IPC_IMEM_OPS_H
#include "iosm_ipc_mux_codec.h"
/* Maximum wait time for blocking read */
#define IPC_READ_TIMEOUT 500
/* The delay in ms for deferring the unregister */
#define SIO_UNREGISTER_DEFER_DELAY_MS 1
/* Default delay till CP PSI image is running and modem updates the
* execution stage.
* unit : milliseconds
*/
#define PSI_START_DEFAULT_TIMEOUT 3000
/* Default time out when closing SIO, till the modem is in
* running state.
* unit : milliseconds
*/
#define BOOT_CHECK_DEFAULT_TIMEOUT 400
/* IP MUX channel range */
#define IP_MUX_SESSION_START 1
#define IP_MUX_SESSION_END 8
/**
* ipc_imem_sys_port_open - Open a port link to CP.
* @ipc_imem: Imem instance.
* @chl_id: Channel Identifier.
* @hp_id: HP Identifier.
*
* Return: channel instance on success, NULL for failure
*/
struct ipc_mem_channel *ipc_imem_sys_port_open(struct iosm_imem *ipc_imem,
int chl_id, int hp_id);
/**
* ipc_imem_sys_cdev_close - Release a sio link to CP.
* @ipc_cdev: iosm sio instance.
*/
void ipc_imem_sys_cdev_close(struct iosm_cdev *ipc_cdev);
/**
* ipc_imem_sys_cdev_write - Route the uplink buffer to CP.
* @ipc_cdev: iosm_cdev instance.
* @skb: Pointer to skb.
*
* Return: 0 on success and failure value on error
*/
int ipc_imem_sys_cdev_write(struct iosm_cdev *ipc_cdev, struct sk_buff *skb);
/**
* ipc_imem_sys_wwan_open - Open packet data online channel between network
* layer and CP.
* @ipc_imem: Imem instance.
* @if_id: ip link tag of the net device.
*
* Return: Channel ID on success and failure value on error
*/
int ipc_imem_sys_wwan_open(struct iosm_imem *ipc_imem, int if_id);
/**
* ipc_imem_sys_wwan_close - Close packet data online channel between network
* layer and CP.
* @ipc_imem: Imem instance.
* @if_id: IP link id net device.
* @channel_id: Channel ID to be closed.
*/
void ipc_imem_sys_wwan_close(struct iosm_imem *ipc_imem, int if_id,
int channel_id);
/**
* ipc_imem_sys_wwan_transmit - Function for transfer UL data
* @ipc_imem: Imem instance.
* @if_id: link ID of the device.
* @channel_id: Channel ID used
* @skb: Pointer to sk buffer
*
* Return: 0 on success and failure value on error
*/
int ipc_imem_sys_wwan_transmit(struct iosm_imem *ipc_imem, int if_id,
int channel_id, struct sk_buff *skb);
/**
* ipc_imem_wwan_channel_init - Initializes WWAN channels and the channel for
* MUX.
* @ipc_imem: Pointer to iosm_imem struct.
* @mux_type: Type of mux protocol.
*/
void ipc_imem_wwan_channel_init(struct iosm_imem *ipc_imem,
enum ipc_mux_protocol mux_type);
#endif
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (C) 2020-21 Intel Corporation.
*/
#include "iosm_ipc_pcie.h"
#include "iosm_ipc_protocol.h"
static void ipc_write_dbell_reg(struct iosm_pcie *ipc_pcie, int irq_n, u32 data)
{
void __iomem *write_reg;
/* Select the first doorbell register, which is only currently needed
* by CP.
*/
write_reg = (void __iomem *)((u8 __iomem *)ipc_pcie->ipc_regs +
ipc_pcie->doorbell_write +
(irq_n * ipc_pcie->doorbell_reg_offset));
/* Fire the doorbell irq by writing data on the doorbell write pointer
* register.
*/
iowrite32(data, write_reg);
}
void ipc_doorbell_fire(struct iosm_pcie *ipc_pcie, int irq_n, u32 data)
{
ipc_write_dbell_reg(ipc_pcie, irq_n, data);
}
/* Threaded Interrupt handler for MSI interrupts */
static irqreturn_t ipc_msi_interrupt(int irq, void *dev_id)
{
struct iosm_pcie *ipc_pcie = dev_id;
int instance = irq - ipc_pcie->pci->irq;
/* Shift the MSI irq actions to the IPC tasklet. IRQ_NONE means the
* irq was not from the IPC device or could not be served.
*/
if (instance >= ipc_pcie->nvec)
return IRQ_NONE;
if (!test_bit(0, &ipc_pcie->suspend))
ipc_imem_irq_process(ipc_pcie->imem, instance);
return IRQ_HANDLED;
}
void ipc_release_irq(struct iosm_pcie *ipc_pcie)
{
struct pci_dev *pdev = ipc_pcie->pci;
if (pdev->msi_enabled) {
while (--ipc_pcie->nvec >= 0)
free_irq(pdev->irq + ipc_pcie->nvec, ipc_pcie);
}
pci_free_irq_vectors(pdev);
}
int ipc_acquire_irq(struct iosm_pcie *ipc_pcie)
{
struct pci_dev *pdev = ipc_pcie->pci;
int i, rc = -EINVAL;
ipc_pcie->nvec = pci_alloc_irq_vectors(pdev, IPC_MSI_VECTORS,
IPC_MSI_VECTORS, PCI_IRQ_MSI);
if (ipc_pcie->nvec < 0) {
rc = ipc_pcie->nvec;
goto error;
}
if (!pdev->msi_enabled)
goto error;
for (i = 0; i < ipc_pcie->nvec; ++i) {
rc = request_threaded_irq(pdev->irq + i, NULL,
ipc_msi_interrupt, IRQF_ONESHOT,
KBUILD_MODNAME, ipc_pcie);
if (rc) {
dev_err(ipc_pcie->dev, "unable to grab IRQ, rc=%d", rc);
ipc_pcie->nvec = i;
ipc_release_irq(ipc_pcie);
goto error;
}
}
error:
return rc;
}
/* SPDX-License-Identifier: GPL-2.0-only
*
* Copyright (C) 2020-21 Intel Corporation.
*/
#ifndef IOSM_IPC_IRQ_H
#define IOSM_IPC_IRQ_H
struct iosm_pcie;
/**
* ipc_doorbell_fire - fire doorbell to CP
* @ipc_pcie: Pointer to iosm_pcie
* @irq_n: Doorbell type
* @data: ipc state
*/
void ipc_doorbell_fire(struct iosm_pcie *ipc_pcie, int irq_n, u32 data);
/**
* ipc_release_irq - Release the IRQ handler.
* @ipc_pcie: Pointer to iosm_pcie struct
*/
void ipc_release_irq(struct iosm_pcie *ipc_pcie);
/**
* ipc_acquire_irq - acquire IRQ & register IRQ handler.
* @ipc_pcie: Pointer to iosm_pcie struct
*
* Return: 0 on success and failure value on error
*/
int ipc_acquire_irq(struct iosm_pcie *ipc_pcie);
#endif
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (C) 2020-21 Intel Corporation.
*/
#include <linux/delay.h>
#include <linux/device.h>
#include <linux/io.h>
#include <linux/io-64-nonatomic-lo-hi.h>
#include <linux/slab.h>
#include "iosm_ipc_mmio.h"
/* Definition of MMIO offsets
* note that MMIO_CI offsets are relative to end of chip info structure
*/
/* MMIO chip info size in bytes */
#define MMIO_CHIP_INFO_SIZE 60
/* CP execution stage */
#define MMIO_OFFSET_EXECUTION_STAGE 0x00
/* Boot ROM Chip Info struct */
#define MMIO_OFFSET_CHIP_INFO 0x04
#define MMIO_OFFSET_ROM_EXIT_CODE 0x40
#define MMIO_OFFSET_PSI_ADDRESS 0x54
#define MMIO_OFFSET_PSI_SIZE 0x5C
#define MMIO_OFFSET_IPC_STATUS 0x60
#define MMIO_OFFSET_CONTEXT_INFO 0x64
#define MMIO_OFFSET_BASE_ADDR 0x6C
#define MMIO_OFFSET_END_ADDR 0x74
#define MMIO_OFFSET_CP_VERSION 0xF0
#define MMIO_OFFSET_CP_CAPABILITIES 0xF4
/* Number of 20 ms retries to wait for the modem boot code to write a valid
* execution stage into the mmio area
*/
#define IPC_MMIO_EXEC_STAGE_TIMEOUT 50
/* check if exec stage has one of the valid values */
static bool ipc_mmio_is_valid_exec_stage(enum ipc_mem_exec_stage stage)
{
switch (stage) {
case IPC_MEM_EXEC_STAGE_BOOT:
case IPC_MEM_EXEC_STAGE_PSI:
case IPC_MEM_EXEC_STAGE_EBL:
case IPC_MEM_EXEC_STAGE_RUN:
case IPC_MEM_EXEC_STAGE_CRASH:
case IPC_MEM_EXEC_STAGE_CD_READY:
return true;
default:
return false;
}
}
void ipc_mmio_update_cp_capability(struct iosm_mmio *ipc_mmio)
{
u32 cp_cap;
unsigned int ver;
ver = ipc_mmio_get_cp_version(ipc_mmio);
cp_cap = readl(ipc_mmio->base + ipc_mmio->offset.cp_capability);
ipc_mmio->has_mux_lite = (ver >= IOSM_CP_VERSION) &&
!(cp_cap & DL_AGGR) && !(cp_cap & UL_AGGR);
ipc_mmio->has_ul_flow_credit =
(ver >= IOSM_CP_VERSION) && (cp_cap & UL_FLOW_CREDIT);
}
struct iosm_mmio *ipc_mmio_init(void __iomem *mmio, struct device *dev)
{
struct iosm_mmio *ipc_mmio = kzalloc(sizeof(*ipc_mmio), GFP_KERNEL);
int retries = IPC_MMIO_EXEC_STAGE_TIMEOUT;
enum ipc_mem_exec_stage stage;
if (!ipc_mmio)
return NULL;
ipc_mmio->dev = dev;
ipc_mmio->base = mmio;
ipc_mmio->offset.exec_stage = MMIO_OFFSET_EXECUTION_STAGE;
/* Check for a valid execution stage to make sure that the boot code
* has correctly initialized the MMIO area.
*/
do {
stage = ipc_mmio_get_exec_stage(ipc_mmio);
if (ipc_mmio_is_valid_exec_stage(stage))
break;
msleep(20);
} while (retries-- > 0);
if (!ipc_mmio_is_valid_exec_stage(stage)) {
dev_err(ipc_mmio->dev, "invalid exec stage %X", stage);
goto init_fail;
}
ipc_mmio->offset.chip_info = MMIO_OFFSET_CHIP_INFO;
/* read chip info size and version from chip info structure */
ipc_mmio->chip_info_version =
ioread8(ipc_mmio->base + ipc_mmio->offset.chip_info);
/* Increment of 2 is needed as the size value in the chip info
* excludes the version and size field, which are always present
*/
ipc_mmio->chip_info_size =
ioread8(ipc_mmio->base + ipc_mmio->offset.chip_info + 1) + 2;
if (ipc_mmio->chip_info_size != MMIO_CHIP_INFO_SIZE) {
dev_err(ipc_mmio->dev, "Unexpected Chip Info");
goto init_fail;
}
ipc_mmio->offset.rom_exit_code = MMIO_OFFSET_ROM_EXIT_CODE;
ipc_mmio->offset.psi_address = MMIO_OFFSET_PSI_ADDRESS;
ipc_mmio->offset.psi_size = MMIO_OFFSET_PSI_SIZE;
ipc_mmio->offset.ipc_status = MMIO_OFFSET_IPC_STATUS;
ipc_mmio->offset.context_info = MMIO_OFFSET_CONTEXT_INFO;
ipc_mmio->offset.ap_win_base = MMIO_OFFSET_BASE_ADDR;
ipc_mmio->offset.ap_win_end = MMIO_OFFSET_END_ADDR;
ipc_mmio->offset.cp_version = MMIO_OFFSET_CP_VERSION;
ipc_mmio->offset.cp_capability = MMIO_OFFSET_CP_CAPABILITIES;
return ipc_mmio;
init_fail:
kfree(ipc_mmio);
return NULL;
}
enum ipc_mem_exec_stage ipc_mmio_get_exec_stage(struct iosm_mmio *ipc_mmio)
{
if (!ipc_mmio)
return IPC_MEM_EXEC_STAGE_INVALID;
return (enum ipc_mem_exec_stage)readl(ipc_mmio->base +
ipc_mmio->offset.exec_stage);
}
void ipc_mmio_copy_chip_info(struct iosm_mmio *ipc_mmio, void *dest,
size_t size)
{
if (ipc_mmio && dest)
memcpy_fromio(dest, ipc_mmio->base + ipc_mmio->offset.chip_info,
size);
}
enum ipc_mem_device_ipc_state ipc_mmio_get_ipc_state(struct iosm_mmio *ipc_mmio)
{
if (!ipc_mmio)
return IPC_MEM_DEVICE_IPC_INVALID;
return (enum ipc_mem_device_ipc_state)
readl(ipc_mmio->base + ipc_mmio->offset.ipc_status);
}
enum rom_exit_code ipc_mmio_get_rom_exit_code(struct iosm_mmio *ipc_mmio)
{
if (!ipc_mmio)
return IMEM_ROM_EXIT_FAIL;
return (enum rom_exit_code)readl(ipc_mmio->base +
ipc_mmio->offset.rom_exit_code);
}
void ipc_mmio_config(struct iosm_mmio *ipc_mmio)
{
if (!ipc_mmio)
return;
/* AP memory window (full window is open and active so that modem checks
* each AP address) 0 means don't check on modem side.
*/
iowrite64_lo_hi(0, ipc_mmio->base + ipc_mmio->offset.ap_win_base);
iowrite64_lo_hi(0, ipc_mmio->base + ipc_mmio->offset.ap_win_end);
iowrite64_lo_hi(ipc_mmio->context_info_addr,
ipc_mmio->base + ipc_mmio->offset.context_info);
}
void ipc_mmio_set_psi_addr_and_size(struct iosm_mmio *ipc_mmio, dma_addr_t addr,
u32 size)
{
if (!ipc_mmio)
return;
iowrite64_lo_hi(addr, ipc_mmio->base + ipc_mmio->offset.psi_address);
writel(size, ipc_mmio->base + ipc_mmio->offset.psi_size);
}
void ipc_mmio_set_contex_info_addr(struct iosm_mmio *ipc_mmio, phys_addr_t addr)
{
if (!ipc_mmio)
return;
/* store context_info address. This will be stored in the mmio area
* during IPC_MEM_DEVICE_IPC_INIT state via ipc_mmio_config()
*/
ipc_mmio->context_info_addr = addr;
}
int ipc_mmio_get_cp_version(struct iosm_mmio *ipc_mmio)
{
return ipc_mmio ? readl(ipc_mmio->base + ipc_mmio->offset.cp_version) :
-EFAULT;
}
/* SPDX-License-Identifier: GPL-2.0-only
*
* Copyright (C) 2020-21 Intel Corporation.
*/
#ifndef IOSM_IPC_MMIO_H
#define IOSM_IPC_MMIO_H
/* Minimal IOSM CP VERSION which has valid CP_CAPABILITIES field */
#define IOSM_CP_VERSION 0x0100UL
/* DL dir Aggregation support mask */
#define DL_AGGR BIT(23)
/* UL dir Aggregation support mask */
#define UL_AGGR BIT(22)
/* UL flow credit support mask */
#define UL_FLOW_CREDIT BIT(21)
/* Possible states of the IPC finite state machine. */
enum ipc_mem_device_ipc_state {
IPC_MEM_DEVICE_IPC_UNINIT,
IPC_MEM_DEVICE_IPC_INIT,
IPC_MEM_DEVICE_IPC_RUNNING,
IPC_MEM_DEVICE_IPC_RECOVERY,
IPC_MEM_DEVICE_IPC_ERROR,
IPC_MEM_DEVICE_IPC_DONT_CARE,
IPC_MEM_DEVICE_IPC_INVALID = -1
};
/* Boot ROM exit status. */
enum rom_exit_code {
IMEM_ROM_EXIT_OPEN_EXT = 0x01,
IMEM_ROM_EXIT_OPEN_MEM = 0x02,
IMEM_ROM_EXIT_CERT_EXT = 0x10,
IMEM_ROM_EXIT_CERT_MEM = 0x20,
IMEM_ROM_EXIT_FAIL = 0xFF
};
/* Boot stages */
enum ipc_mem_exec_stage {
IPC_MEM_EXEC_STAGE_RUN = 0x600DF00D,
IPC_MEM_EXEC_STAGE_CRASH = 0x8BADF00D,
IPC_MEM_EXEC_STAGE_CD_READY = 0xBADC0DED,
IPC_MEM_EXEC_STAGE_BOOT = 0xFEEDB007,
IPC_MEM_EXEC_STAGE_PSI = 0xFEEDBEEF,
IPC_MEM_EXEC_STAGE_EBL = 0xFEEDCAFE,
IPC_MEM_EXEC_STAGE_INVALID = 0xFFFFFFFF
};
/* mmio scratchpad info */
struct mmio_offset {
int exec_stage;
int chip_info;
int rom_exit_code;
int psi_address;
int psi_size;
int ipc_status;
int context_info;
int ap_win_base;
int ap_win_end;
int cp_version;
int cp_capability;
};
/**
* struct iosm_mmio - MMIO region mapped to the doorbell scratchpad.
* @base: Base address of MMIO region
* @dev: Pointer to device structure
* @offset: Start offset
* @context_info_addr: Physical base address of context info structure
* @chip_info_version: Version of chip info structure
* @chip_info_size: Size of chip info structure
* @has_mux_lite: Modem supports only MUX lite (no MUX aggregation)
* @has_ul_flow_credit: UL flow credit support
* @has_slp_no_prot: Device sleep no protocol support
* @has_mcr_support: Usage of mcr support
*/
struct iosm_mmio {
unsigned char __iomem *base;
struct device *dev;
struct mmio_offset offset;
phys_addr_t context_info_addr;
unsigned int chip_info_version;
unsigned int chip_info_size;
u8 has_mux_lite:1,
has_ul_flow_credit:1,
has_slp_no_prot:1,
has_mcr_support:1;
};
/**
* ipc_mmio_init - Allocate mmio instance data
* @mmio_addr: Mapped AP base address of the MMIO area.
* @dev: Pointer to device structure
*
* Returns: address of mmio instance data or NULL on failure.
*/
struct iosm_mmio *ipc_mmio_init(void __iomem *mmio_addr, struct device *dev);
/**
* ipc_mmio_set_psi_addr_and_size - Set start address and size of the
* primary system image (PSI) for the
FW download.
* @ipc_mmio: Pointer to mmio instance
* @addr: PSI address
* @size: PSI image size
*/
void ipc_mmio_set_psi_addr_and_size(struct iosm_mmio *ipc_mmio, dma_addr_t addr,
u32 size);
/**
* ipc_mmio_set_contex_info_addr - Stores the Context Info Address in
* MMIO instance to share it with CP during
* mmio_init.
* @ipc_mmio: Pointer to mmio instance
* @addr: 64-bit address of AP context information.
*/
void ipc_mmio_set_contex_info_addr(struct iosm_mmio *ipc_mmio,
phys_addr_t addr);
/**
* ipc_mmio_get_cp_version - Get the CP IPC version
* @ipc_mmio: Pointer to mmio instance
*
* Returns: version number on success and failure value on error.
*/
int ipc_mmio_get_cp_version(struct iosm_mmio *ipc_mmio);
/**
* ipc_mmio_get_rom_exit_code - Get exit code from CP boot rom download app
* @ipc_mmio: Pointer to mmio instance
*
* Returns: exit code from CP boot rom download APP
*/
enum rom_exit_code ipc_mmio_get_rom_exit_code(struct iosm_mmio *ipc_mmio);
/**
* ipc_mmio_get_exec_stage - Query CP execution stage
* @ipc_mmio: Pointer to mmio instance
*
* Returns: CP execution stage
*/
enum ipc_mem_exec_stage ipc_mmio_get_exec_stage(struct iosm_mmio *ipc_mmio);
/**
* ipc_mmio_get_ipc_state - Query CP IPC state
* @ipc_mmio: Pointer to mmio instance
*
* Returns: CP IPC state
*/
enum ipc_mem_device_ipc_state
ipc_mmio_get_ipc_state(struct iosm_mmio *ipc_mmio);
/**
* ipc_mmio_copy_chip_info - Copy size bytes of CP chip info structure
* into caller provided buffer
* @ipc_mmio: Pointer to mmio instance
* @dest: Pointer to caller provided buff
* @size: Number of bytes to copy
*/
void ipc_mmio_copy_chip_info(struct iosm_mmio *ipc_mmio, void *dest,
size_t size);
/**
* ipc_mmio_config - Write context info and AP memory range addresses.
* This needs to be called when CP is in
* IPC_MEM_DEVICE_IPC_INIT state
*
* @ipc_mmio: Pointer to mmio instance
*/
void ipc_mmio_config(struct iosm_mmio *ipc_mmio);
/**
* ipc_mmio_update_cp_capability - Read and update modem capability, from mmio
* capability offset
*
* @ipc_mmio: Pointer to mmio instance
*/
void ipc_mmio_update_cp_capability(struct iosm_mmio *ipc_mmio);
#endif