Commit 5417197d authored by David S. Miller

Merge branch 'wwan-t7xx-fw-flashing-and-coredump-support'

M Chetan Kumar says:

====================
net: wwan: t7xx: fw flashing & coredump support

This patch series brings in support for FM350 WWAN device firmware
flashing & coredump collection using the devlink interface.

Below is the high level description of individual patches.
Refer to individual patch commit message for details.

PATCH1: Enables AP CLDMA communication for firmware flashing &
coredump collection.

PATCH2: Enables the infrastructure & queue configuration required
for early port enumeration.

PATCH3: Implements device reset and rescan logic required to enter
or exit fastboot mode.

PATCH4: Implements the devlink interface & uses the fastboot protocol
for fw flashing and coredump collection.

PATCH5: t7xx devlink commands documentation.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
parents 0630f64d b0bc1709
@@ -67,3 +67,4 @@ parameters, info versions, and other features it supports.
prestera
iosm
octeontx2
t7xx
.. SPDX-License-Identifier: GPL-2.0
====================
t7xx devlink support
====================
This document describes the devlink features implemented by the ``t7xx``
device driver.
Flash Update
============
The ``t7xx`` driver implements the flash update using the ``devlink-flash``
interface.
The driver uses DEVLINK_SUPPORT_FLASH_UPDATE_COMPONENT to identify the type of
firmware image that needs to be programmed upon request by a user space
application. The supported firmware image types are listed below.
.. list-table:: Firmware Image types
:widths: 15 85
* - Name
- Description
* - ``preloader``
- The first-stage bootloader image
* - ``loader_ext1``
- Preloader extension image
* - ``tee1``
- ARM trusted firmware and TEE (Trusted Execution Environment) image
* - ``lk``
- The second-stage bootloader image
* - ``spmfw``
- MediaTek in-house ASIC for power management image
* - ``sspm_1``
- MediaTek in-house ASIC for power management under secure world image
* - ``mcupm_1``
- MediaTek in-house ASIC for cpu power management image
* - ``dpm_1``
- MediaTek in-house ASIC for dram power management image
* - ``boot``
- The kernel and dtb image
* - ``rootfs``
- Root filesystem image
* - ``md1img``
- Modem image
* - ``md1dsp``
- Modem DSP image
* - ``mcf1``
- Modem OTA image (Modem Configuration Framework) for operators
* - ``mcf2``
- Modem OTA image (Modem Configuration Framework) for OEM vendors
* - ``mcf3``
- Modem OTA image (other usage) for OEM configurations
The ``t7xx`` driver uses the fastboot protocol for firmware flashing. During
the flashing procedure, fastboot commands and responses are exchanged between
the driver and the WWAN device.
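For illustration, flashing a single partition reduces to an exchange like the
following (a sketch of the standard fastboot wire protocol, built from the
"download"/"flash" commands and DATA/OKAY responses defined in the driver
sources; the byte count shown is an arbitrary example):

  host:   download:0001c200        (announce a 0x1c200-byte transfer)
  device: DATA0001c200             (device is ready to receive)
  host:   <0x1c200 bytes of image data>
  device: OKAY
  host:   flash:preloader
  device: OKAY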
The WWAN device is put into fastboot mode via the devlink reload command, by
passing the "driver_reinit" action.
$ devlink dev reload pci/0000:$bdf action driver_reinit
Upon completion of firmware flashing or coredump collection, the WWAN device
is reset to normal mode using the devlink reload command, by passing the
"fw_activate" action.
$ devlink dev reload pci/0000:$bdf action fw_activate
Flash Commands
==============
$ devlink dev flash pci/0000:$bdf file preloader_k6880v1_mdot2_datacard.bin component "preloader"
$ devlink dev flash pci/0000:$bdf file loader_ext-verified.img component "loader_ext1"
$ devlink dev flash pci/0000:$bdf file tee-verified.img component "tee1"
$ devlink dev flash pci/0000:$bdf file lk-verified.img component "lk"
$ devlink dev flash pci/0000:$bdf file spmfw-verified.img component "spmfw"
$ devlink dev flash pci/0000:$bdf file sspm-verified.img component "sspm_1"
$ devlink dev flash pci/0000:$bdf file mcupm-verified.img component "mcupm_1"
$ devlink dev flash pci/0000:$bdf file dpm-verified.img component "dpm_1"
$ devlink dev flash pci/0000:$bdf file boot-verified.img component "boot"
$ devlink dev flash pci/0000:$bdf file root.squashfs component "rootfs"
$ devlink dev flash pci/0000:$bdf file modem-verified.img component "md1img"
$ devlink dev flash pci/0000:$bdf file dsp-verified.bin component "md1dsp"
$ devlink dev flash pci/0000:$bdf file OP_OTA.img component "mcf1"
$ devlink dev flash pci/0000:$bdf file OEM_OTA.img component "mcf2"
$ devlink dev flash pci/0000:$bdf file DEV_OTA.img component "mcf3"
Note: the component value represents the partition type to be programmed.
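Putting it together, a complete flashing session would look like this
(illustrative; ``$bdf`` is the PCI bus/device/function of the modem):

$ devlink dev reload pci/0000:$bdf action driver_reinit
$ devlink dev flash pci/0000:$bdf file preloader_k6880v1_mdot2_datacard.bin component "preloader"
$ devlink dev reload pci/0000:$bdf action fw_activate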
Regions
=======
The ``t7xx`` driver supports coredump collection when the device encounters
an exception. On an exception, the driver takes a snapshot of the device's
internal data using fastboot commands.
The following regions are accessed for device internal data.
.. list-table:: Regions implemented
:widths: 15 85
* - Name
- Description
* - ``mr_dump``
- Detailed modem component logs are captured in this region
* - ``lk_dump``
- This region dumps the current snapshot of LK (the second-stage bootloader)
Region commands
===============
$ devlink region show
$ devlink region new mr_dump
$ devlink region read mr_dump snapshot 0 address 0 length $len
$ devlink region del mr_dump snapshot 0
$ devlink region new lk_dump
$ devlink region read lk_dump snapshot 0 address 0 length $len
$ devlink region del lk_dump snapshot 0
Note: ``$len`` is the actual length of the dump, as reported by the driver
(for example, in the size field of the dump-ready uevent).
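Putting it together, a typical coredump collection session would look like
this (illustrative; the device returns to normal mode with the final reload):

$ devlink region new mr_dump
$ devlink region read mr_dump snapshot 0 address 0 length $len
$ devlink region del mr_dump snapshot 0
$ devlink dev reload pci/0000:$bdf action fw_activate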
@@ -108,6 +108,7 @@ config IOSM
config MTK_T7XX
tristate "MediaTek PCIe 5G WWAN modem T7xx device"
depends on PCI
select NET_DEVLINK
help
Enables MediaTek PCIe based 5G WWAN modem (T7xx series) device.
Adapts WWAN framework and provides network interface like wwan0
......
@@ -17,4 +17,7 @@ mtk_t7xx-y:= t7xx_pci.o \
t7xx_hif_dpmaif_tx.o \
t7xx_hif_dpmaif_rx.o \
t7xx_dpmaif.o \
t7xx_netdev.o \
t7xx_pci_rescan.o \
t7xx_uevent.o \
t7xx_port_devlink.o
@@ -57,8 +57,6 @@
#define CHECK_Q_STOP_TIMEOUT_US 1000000
#define CHECK_Q_STOP_STEP_US 10000
#define CLDMA_JUMBO_BUFF_SZ (63 * 1024 + sizeof(struct ccci_header))
static void md_cd_queue_struct_reset(struct cldma_queue *queue, struct cldma_ctrl *md_ctrl,
enum mtk_txrx tx_rx, unsigned int index)
{
@@ -993,6 +991,34 @@ int t7xx_cldma_send_skb(struct cldma_ctrl *md_ctrl, int qno, struct sk_buff *skb)
return ret;
}
static void t7xx_cldma_adjust_config(struct cldma_ctrl *md_ctrl, enum cldma_cfg cfg_id)
{
int qno;
for (qno = 0; qno < CLDMA_RXQ_NUM; qno++) {
md_ctrl->rx_ring[qno].pkt_size = CLDMA_SHARED_Q_BUFF_SZ;
md_ctrl->rxq[qno].q_type = CLDMA_SHARED_Q;
}
md_ctrl->rx_ring[CLDMA_RXQ_NUM - 1].pkt_size = CLDMA_JUMBO_BUFF_SZ;
for (qno = 0; qno < CLDMA_TXQ_NUM; qno++) {
md_ctrl->tx_ring[qno].pkt_size = CLDMA_SHARED_Q_BUFF_SZ;
md_ctrl->txq[qno].q_type = CLDMA_SHARED_Q;
}
if (cfg_id == CLDMA_DEDICATED_Q_CFG) {
md_ctrl->rxq[DOWNLOAD_PORT_ID].q_type = CLDMA_DEDICATED_Q;
md_ctrl->txq[DOWNLOAD_PORT_ID].q_type = CLDMA_DEDICATED_Q;
md_ctrl->tx_ring[DOWNLOAD_PORT_ID].pkt_size = CLDMA_DEDICATED_Q_BUFF_SZ;
md_ctrl->rx_ring[DOWNLOAD_PORT_ID].pkt_size = CLDMA_DEDICATED_Q_BUFF_SZ;
md_ctrl->rxq[DUMP_PORT_ID].q_type = CLDMA_DEDICATED_Q;
md_ctrl->txq[DUMP_PORT_ID].q_type = CLDMA_DEDICATED_Q;
md_ctrl->tx_ring[DUMP_PORT_ID].pkt_size = CLDMA_DEDICATED_Q_BUFF_SZ;
md_ctrl->rx_ring[DUMP_PORT_ID].pkt_size = CLDMA_DEDICATED_Q_BUFF_SZ;
}
}
static int t7xx_cldma_late_init(struct cldma_ctrl *md_ctrl)
{
char dma_pool_name[32];
@@ -1021,11 +1047,6 @@ static int t7xx_cldma_late_init(struct cldma_ctrl *md_ctrl)
}
for (j = 0; j < CLDMA_RXQ_NUM; j++) {
md_ctrl->rx_ring[j].pkt_size = CLDMA_MTU;
if (j == CLDMA_RXQ_NUM - 1)
md_ctrl->rx_ring[j].pkt_size = CLDMA_JUMBO_BUFF_SZ;
ret = t7xx_cldma_rx_ring_init(md_ctrl, &md_ctrl->rx_ring[j]);
if (ret) {
dev_err(md_ctrl->dev, "Control RX ring init fail\n");
@@ -1064,13 +1085,18 @@ static void t7xx_hw_info_init(struct cldma_ctrl *md_ctrl)
struct t7xx_cldma_hw *hw_info = &md_ctrl->hw_info;
u32 phy_ao_base, phy_pd_base;
hw_info->hw_mode = MODE_BIT_64;
if (md_ctrl->hif_id == CLDMA_ID_MD) {
phy_ao_base = CLDMA1_AO_BASE;
phy_pd_base = CLDMA1_PD_BASE;
hw_info->phy_interrupt_id = CLDMA1_INT;
} else {
phy_ao_base = CLDMA0_AO_BASE;
phy_pd_base = CLDMA0_PD_BASE;
hw_info->phy_interrupt_id = CLDMA0_INT;
}
hw_info->ap_ao_base = t7xx_pcie_addr_transfer(pbase->pcie_ext_reg_base,
pbase->pcie_dev_reg_trsl_addr, phy_ao_base);
hw_info->ap_pdn_base = t7xx_pcie_addr_transfer(pbase->pcie_ext_reg_base,
@@ -1324,9 +1350,10 @@ int t7xx_cldma_init(struct cldma_ctrl *md_ctrl)
return -ENOMEM;
}
void t7xx_cldma_switch_cfg(struct cldma_ctrl *md_ctrl, enum cldma_cfg cfg_id)
{
t7xx_cldma_late_release(md_ctrl);
t7xx_cldma_adjust_config(md_ctrl, cfg_id);
t7xx_cldma_late_init(md_ctrl);
}
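For orientation, a minimal usage sketch of the extended API (the stop/switch/start
bracketing mirrors the handshake workers later in this series; driving the
dedicated-queue layout from the fastboot path is an assumption for illustration,
not code from this patch):

/* Sketch only: reconfigure one CLDMA instance for fastboot download/dump
 * traffic. CLDMA_DEDICATED_Q_CFG marks the DOWNLOAD_PORT_ID and DUMP_PORT_ID
 * queues as dedicated and sizes their buffers to CLDMA_DEDICATED_Q_BUFF_SZ;
 * CLDMA_SHARED_Q_CFG restores the default shared layout.
 */
t7xx_cldma_stop(md_ctrl);
t7xx_cldma_switch_cfg(md_ctrl, CLDMA_DEDICATED_Q_CFG);
t7xx_cldma_start(md_ctrl);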
......
@@ -31,10 +31,14 @@
#include "t7xx_cldma.h"
#include "t7xx_pci.h"
#define CLDMA_JUMBO_BUFF_SZ (63 * 1024 + sizeof(struct ccci_header))
#define CLDMA_SHARED_Q_BUFF_SZ 3584
#define CLDMA_DEDICATED_Q_BUFF_SZ 2048
/**
 * enum cldma_id - Identifiers for CLDMA HW units.
 * @CLDMA_ID_MD: Modem control channel.
 * @CLDMA_ID_AP: Application Processor control channel.
 * @CLDMA_NUM: Number of CLDMA HW units available.
 */
enum cldma_id {
@@ -55,6 +59,16 @@ struct cldma_gpd {
__le16 not_used2;
};
enum cldma_queue_type {
CLDMA_SHARED_Q,
CLDMA_DEDICATED_Q,
};
enum cldma_cfg {
CLDMA_SHARED_Q_CFG,
CLDMA_DEDICATED_Q_CFG,
};
struct cldma_request {
struct cldma_gpd *gpd; /* Virtual address for CPU */
dma_addr_t gpd_addr; /* Physical address for DMA */
@@ -77,6 +91,7 @@ struct cldma_queue {
struct cldma_request *tr_done;
struct cldma_request *rx_refill;
struct cldma_request *tx_next;
enum cldma_queue_type q_type;
int budget; /* Same as ring buffer size by default */
spinlock_t ring_lock;
wait_queue_head_t req_wq; /* Only for TX */
@@ -104,17 +119,20 @@ struct cldma_ctrl {
int (*recv_skb)(struct cldma_queue *queue, struct sk_buff *skb);
};
enum cldma_txq_rxq_port_id {
DOWNLOAD_PORT_ID = 0,
DUMP_PORT_ID = 1
};
#define GPD_FLAGS_HWO BIT(0)
#define GPD_FLAGS_IOC BIT(7)
#define GPD_DMAPOOL_ALIGN 16
#define CLDMA_MTU 3584 /* 3.5kB */
int t7xx_cldma_alloc(enum cldma_id hif_id, struct t7xx_pci_dev *t7xx_dev);
void t7xx_cldma_hif_hw_init(struct cldma_ctrl *md_ctrl);
int t7xx_cldma_init(struct cldma_ctrl *md_ctrl);
void t7xx_cldma_exit(struct cldma_ctrl *md_ctrl);
void t7xx_cldma_switch_cfg(struct cldma_ctrl *md_ctrl, enum cldma_cfg cfg_id);
void t7xx_cldma_start(struct cldma_ctrl *md_ctrl);
int t7xx_cldma_stop(struct cldma_ctrl *md_ctrl);
void t7xx_cldma_reset(struct cldma_ctrl *md_ctrl);
......
@@ -25,6 +25,7 @@
D2H_INT_EXCEPTION_CLEARQ_DONE | \
D2H_INT_EXCEPTION_ALLQ_RESET | \
D2H_INT_PORT_ENUM | \
D2H_INT_ASYNC_AP_HK | \
D2H_INT_ASYNC_MD_HK)
void t7xx_mhccif_mask_set(struct t7xx_pci_dev *t7xx_dev, u32 val);
......
@@ -37,6 +37,7 @@
#include "t7xx_modem_ops.h"
#include "t7xx_netdev.h"
#include "t7xx_pci.h"
#include "t7xx_pci_rescan.h"
#include "t7xx_pcie_mac.h"
#include "t7xx_port.h"
#include "t7xx_port_proxy.h"
@@ -44,6 +45,7 @@
#include "t7xx_state_monitor.h"
#define RT_ID_MD_PORT_ENUM 0
#define RT_ID_AP_PORT_ENUM 1
/* Modem feature query identification code - "ICCC" */
#define MD_FEATURE_QUERY_ID 0x49434343
@@ -191,6 +193,10 @@ static irqreturn_t t7xx_rgu_isr_thread(int irq, void *data)
msleep(RGU_RESET_DELAY_MS);
t7xx_reset_device_via_pmic(t7xx_dev);
if (!t7xx_dev->hp_enable)
t7xx_rescan_queue_work(t7xx_dev->pdev);
return IRQ_HANDLED;
}
@@ -296,6 +302,7 @@ static void t7xx_md_exception(struct t7xx_modem *md, enum hif_ex_stage stage)
}
t7xx_cldma_exception(md->md_ctrl[CLDMA_ID_MD], stage);
t7xx_cldma_exception(md->md_ctrl[CLDMA_ID_AP], stage);
if (stage == HIF_EX_INIT)
t7xx_mhccif_h2d_swint_trigger(t7xx_dev, H2D_CH_EXCEPTION_ACK);
@@ -424,7 +431,7 @@ static int t7xx_parse_host_rt_data(struct t7xx_fsm_ctl *ctl, struct t7xx_sys_info
if (ft_spt_st != MTK_FEATURE_MUST_BE_SUPPORTED)
return -EINVAL;
if (i == RT_ID_MD_PORT_ENUM || i == RT_ID_AP_PORT_ENUM)
t7xx_port_enum_msg_handler(ctl->md, rt_feature->data);
}
@@ -454,12 +461,12 @@ static int t7xx_core_reset(struct t7xx_modem *md)
return 0;
}
static void t7xx_core_hk_handler(struct t7xx_modem *md, struct t7xx_sys_info *core_info,
struct t7xx_fsm_ctl *ctl,
enum t7xx_fsm_event_state event_id,
enum t7xx_fsm_event_state err_detect)
{
struct t7xx_fsm_event *event = NULL, *event_next;
struct device *dev = &md->t7xx_dev->pdev->dev;
unsigned long flags;
int ret;
@@ -525,23 +532,37 @@ static void t7xx_md_hk_wq(struct work_struct *work)
/* Clear the HS2 EXIT event appended in core_reset() */
t7xx_fsm_clr_event(ctl, FSM_EVENT_MD_HS2_EXIT);
t7xx_cldma_switch_cfg(md->md_ctrl[CLDMA_ID_MD], CLDMA_SHARED_Q_CFG);
t7xx_cldma_start(md->md_ctrl[CLDMA_ID_MD]);
t7xx_fsm_broadcast_state(ctl, MD_STATE_WAITING_FOR_HS2);
md->core_md.handshake_ongoing = true;
t7xx_core_hk_handler(md, &md->core_md, ctl, FSM_EVENT_MD_HS2, FSM_EVENT_MD_HS2_EXIT);
}
static void t7xx_ap_hk_wq(struct work_struct *work)
{
struct t7xx_modem *md = container_of(work, struct t7xx_modem, ap_handshake_work);
struct t7xx_fsm_ctl *ctl = md->fsm_ctl;
/* Clear the HS2 EXIT event appended in t7xx_core_reset(). */
t7xx_fsm_clr_event(ctl, FSM_EVENT_AP_HS2_EXIT);
t7xx_cldma_stop(md->md_ctrl[CLDMA_ID_AP]);
t7xx_cldma_switch_cfg(md->md_ctrl[CLDMA_ID_AP], CLDMA_SHARED_Q_CFG);
t7xx_cldma_start(md->md_ctrl[CLDMA_ID_AP]);
md->core_ap.handshake_ongoing = true;
t7xx_core_hk_handler(md, &md->core_ap, ctl, FSM_EVENT_AP_HS2, FSM_EVENT_AP_HS2_EXIT);
}
void t7xx_md_event_notify(struct t7xx_modem *md, enum md_event_id evt_id)
{
struct t7xx_fsm_ctl *ctl = md->fsm_ctl;
unsigned int int_sta;
unsigned long flags;
switch (evt_id) {
case FSM_PRE_START:
t7xx_mhccif_mask_clr(md->t7xx_dev, D2H_INT_PORT_ENUM | D2H_INT_ASYNC_MD_HK |
D2H_INT_ASYNC_AP_HK);
break;
case FSM_START:
@@ -554,16 +575,26 @@ void t7xx_md_event_notify(struct t7xx_modem *md, enum md_event_id evt_id)
ctl->exp_flg = true;
md->exp_id &= ~D2H_INT_EXCEPTION_INIT;
md->exp_id &= ~D2H_INT_ASYNC_MD_HK;
md->exp_id &= ~D2H_INT_ASYNC_AP_HK;
} else if (ctl->exp_flg) {
md->exp_id &= ~D2H_INT_ASYNC_MD_HK;
md->exp_id &= ~D2H_INT_ASYNC_AP_HK;
} else {
void __iomem *mhccif_base = md->t7xx_dev->base_addr.mhccif_rc_base;
if (md->exp_id & D2H_INT_ASYNC_MD_HK) {
queue_work(md->handshake_wq, &md->handshake_work);
md->exp_id &= ~D2H_INT_ASYNC_MD_HK;
iowrite32(D2H_INT_ASYNC_MD_HK, mhccif_base + REG_EP2RC_SW_INT_ACK);
t7xx_mhccif_mask_set(md->t7xx_dev, D2H_INT_ASYNC_MD_HK);
}
if (md->exp_id & D2H_INT_ASYNC_AP_HK) {
queue_work(md->ap_handshake_wq, &md->ap_handshake_work);
md->exp_id &= ~D2H_INT_ASYNC_AP_HK;
iowrite32(D2H_INT_ASYNC_AP_HK, mhccif_base + REG_EP2RC_SW_INT_ACK);
t7xx_mhccif_mask_set(md->t7xx_dev, D2H_INT_ASYNC_AP_HK);
}
}
spin_unlock_irqrestore(&md->exp_lock, flags);
@@ -576,6 +607,7 @@ void t7xx_md_event_notify(struct t7xx_modem *md, enum md_event_id evt_id)
case FSM_READY:
t7xx_mhccif_mask_set(md->t7xx_dev, D2H_INT_ASYNC_MD_HK);
t7xx_mhccif_mask_set(md->t7xx_dev, D2H_INT_ASYNC_AP_HK);
break;
default:
@@ -627,6 +659,19 @@ static struct t7xx_modem *t7xx_md_alloc(struct t7xx_pci_dev *t7xx_dev)
md->core_md.feature_set[RT_ID_MD_PORT_ENUM] &= ~FEATURE_MSK;
md->core_md.feature_set[RT_ID_MD_PORT_ENUM] |=
FIELD_PREP(FEATURE_MSK, MTK_FEATURE_MUST_BE_SUPPORTED);
md->ap_handshake_wq = alloc_workqueue("%s", WQ_UNBOUND | WQ_MEM_RECLAIM | WQ_HIGHPRI,
0, "ap_hk_wq");
if (!md->ap_handshake_wq) {
destroy_workqueue(md->handshake_wq);
return NULL;
}
INIT_WORK(&md->ap_handshake_work, t7xx_ap_hk_wq);
md->core_ap.feature_set[RT_ID_AP_PORT_ENUM] &= ~FEATURE_MSK;
md->core_ap.feature_set[RT_ID_AP_PORT_ENUM] |=
FIELD_PREP(FEATURE_MSK, MTK_FEATURE_MUST_BE_SUPPORTED);
return md;
}
@@ -638,6 +683,7 @@ int t7xx_md_reset(struct t7xx_pci_dev *t7xx_dev)
md->exp_id = 0;
t7xx_fsm_reset(md);
t7xx_cldma_reset(md->md_ctrl[CLDMA_ID_MD]);
t7xx_cldma_reset(md->md_ctrl[CLDMA_ID_AP]);
t7xx_port_proxy_reset(md->port_prox);
md->md_init_finish = true;
return t7xx_core_reset(md);
@@ -667,6 +713,10 @@ int t7xx_md_init(struct t7xx_pci_dev *t7xx_dev)
if (ret)
goto err_destroy_hswq;
ret = t7xx_cldma_alloc(CLDMA_ID_AP, t7xx_dev);
if (ret)
goto err_destroy_hswq;
ret = t7xx_fsm_init(md);
if (ret)
goto err_destroy_hswq;
@@ -679,12 +729,16 @@ int t7xx_md_init(struct t7xx_pci_dev *t7xx_dev)
if (ret)
goto err_uninit_ccmni;
ret = t7xx_cldma_init(md->md_ctrl[CLDMA_ID_AP]);
if (ret)
goto err_uninit_md_cldma;
ret = t7xx_port_proxy_init(md);
if (ret)
goto err_uninit_ap_cldma;
ret = t7xx_fsm_append_cmd(md->fsm_ctl, FSM_CMD_START, 0);
if (ret) /* t7xx_fsm_uninit() flushes cmd queue */
goto err_uninit_proxy;
t7xx_md_sys_sw_init(t7xx_dev);
@@ -694,6 +748,9 @@ int t7xx_md_init(struct t7xx_pci_dev *t7xx_dev)
err_uninit_proxy:
t7xx_port_proxy_uninit(md->port_prox);
err_uninit_ap_cldma:
t7xx_cldma_exit(md->md_ctrl[CLDMA_ID_AP]);
err_uninit_md_cldma:
t7xx_cldma_exit(md->md_ctrl[CLDMA_ID_MD]);
@@ -705,6 +762,7 @@ int t7xx_md_init(struct t7xx_pci_dev *t7xx_dev)
err_destroy_hswq:
destroy_workqueue(md->handshake_wq);
destroy_workqueue(md->ap_handshake_wq);
dev_err(&t7xx_dev->pdev->dev, "Modem init failed\n");
return ret;
}
@@ -720,8 +778,10 @@ void t7xx_md_exit(struct t7xx_pci_dev *t7xx_dev)
t7xx_fsm_append_cmd(md->fsm_ctl, FSM_CMD_PRE_STOP, FSM_CMD_FLAG_WAIT_FOR_COMPLETION);
t7xx_port_proxy_uninit(md->port_prox);
t7xx_cldma_exit(md->md_ctrl[CLDMA_ID_AP]);
t7xx_cldma_exit(md->md_ctrl[CLDMA_ID_MD]);
t7xx_ccmni_exit(t7xx_dev);
t7xx_fsm_uninit(md);
destroy_workqueue(md->handshake_wq);
destroy_workqueue(md->ap_handshake_wq);
}
@@ -66,10 +66,13 @@ struct t7xx_modem {
struct cldma_ctrl *md_ctrl[CLDMA_NUM];
struct t7xx_pci_dev *t7xx_dev;
struct t7xx_sys_info core_md;
struct t7xx_sys_info core_ap;
bool md_init_finish;
bool rgu_irq_asserted;
struct workqueue_struct *handshake_wq;
struct work_struct handshake_work;
struct workqueue_struct *ap_handshake_wq;
struct work_struct ap_handshake_work;
struct t7xx_fsm_ctl *fsm_ctl;
struct port_proxy *port_prox;
unsigned int exp_id;
......
@@ -38,7 +38,9 @@
#include "t7xx_mhccif.h"
#include "t7xx_modem_ops.h"
#include "t7xx_pci.h"
#include "t7xx_pci_rescan.h"
#include "t7xx_pcie_mac.h"
#include "t7xx_port_devlink.h"
#include "t7xx_reg.h"
#include "t7xx_state_monitor.h"
@@ -703,22 +705,33 @@ static int t7xx_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
t7xx_pci_infracfg_ao_calc(t7xx_dev);
t7xx_mhccif_init(t7xx_dev);
ret = t7xx_devlink_register(t7xx_dev);
if (ret)
return ret;
ret = t7xx_md_init(t7xx_dev);
if (ret)
goto err_devlink_unregister;
t7xx_pcie_mac_interrupts_dis(t7xx_dev);
ret = t7xx_interrupt_init(t7xx_dev);
if (ret) {
t7xx_md_exit(t7xx_dev);
goto err_devlink_unregister;
}
t7xx_rescan_done();
t7xx_pcie_mac_set_int(t7xx_dev, MHCCIF_INT);
t7xx_pcie_mac_interrupts_en(t7xx_dev);
if (!t7xx_dev->hp_enable)
pci_ignore_hotplug(pdev);
return 0;
err_devlink_unregister:
t7xx_devlink_unregister(t7xx_dev);
return ret;
}
static void t7xx_pci_remove(struct pci_dev *pdev)
@@ -728,6 +741,7 @@ static void t7xx_pci_remove(struct pci_dev *pdev)
t7xx_dev = pci_get_drvdata(pdev);
t7xx_md_exit(t7xx_dev);
t7xx_devlink_unregister(t7xx_dev);
for (i = 0; i < EXT_INT_NUM; i++) {
if (!t7xx_dev->intr_handler[i])
@@ -754,7 +768,52 @@ static struct pci_driver t7xx_pci_driver = {
.shutdown = t7xx_pci_shutdown,
};
static int __init t7xx_pci_init(void)
{
int ret;
t7xx_pci_dev_rescan();
ret = t7xx_rescan_init();
if (ret) {
pr_err("Failed to init t7xx rescan work\n");
return ret;
}
return pci_register_driver(&t7xx_pci_driver);
}
module_init(t7xx_pci_init);
static int t7xx_always_match(struct device *dev, const void *data)
{
return dev->parent->fwnode == data;
}
static void __exit t7xx_pci_cleanup(void)
{
int remove_flag = 0;
struct device *dev;
dev = driver_find_device(&t7xx_pci_driver.driver, NULL, NULL, t7xx_always_match);
if (dev) {
pr_debug("unregister t7xx PCIe driver while device is still exist.\n");
put_device(dev);
remove_flag = 1;
} else {
pr_debug("no t7xx PCIe driver found.\n");
}
pci_lock_rescan_remove();
pci_unregister_driver(&t7xx_pci_driver);
pci_unlock_rescan_remove();
t7xx_rescan_deinit();
if (remove_flag) {
pr_debug("remove t7xx PCI device\n");
pci_stop_and_remove_bus_device_locked(to_pci_dev(dev));
}
}
module_exit(t7xx_pci_cleanup);
MODULE_AUTHOR("MediaTek Inc");
MODULE_DESCRIPTION("MediaTek PCIe 5G WWAN modem T7xx driver");
......
@@ -59,6 +59,7 @@ typedef irqreturn_t (*t7xx_intr_callback)(int irq, void *param);
 * @md_pm_lock: protects PCIe sleep lock
 * @sleep_disable_count: PCIe L1.2 lock counter
 * @sleep_lock_acquire: indicates that sleep has been disabled
 * @dl: devlink struct
 */
struct t7xx_pci_dev {
t7xx_intr_callback intr_handler[EXT_INT_NUM];
@@ -69,6 +70,7 @@ struct t7xx_pci_dev {
struct t7xx_modem *md;
struct t7xx_ccmni_ctrl *ccmni_ctlb;
bool rgu_pci_irq_en;
bool hp_enable;
/* Low Power Items */
struct list_head md_pm_entities;
@@ -78,6 +80,7 @@ struct t7xx_pci_dev {
spinlock_t md_pm_lock; /* Protects PCI resource lock */
unsigned int sleep_disable_count;
struct completion sleep_lock_acquire;
struct t7xx_devlink *dl;
};
enum t7xx_pm_id {
......
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (c) 2021, MediaTek Inc.
* Copyright (c) 2021-2022, Intel Corporation.
*/
#define pr_fmt(fmt) KBUILD_MODNAME ":t7xx:%s: " fmt, __func__
#define dev_fmt(fmt) "t7xx: " fmt
#include <linux/delay.h>
#include <linux/pci.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>
#include "t7xx_pci.h"
#include "t7xx_pci_rescan.h"
static struct remove_rescan_context g_mtk_rescan_context;
void t7xx_pci_dev_rescan(void)
{
struct pci_bus *b = NULL;
pci_lock_rescan_remove();
while ((b = pci_find_next_bus(b)))
pci_rescan_bus(b);
pci_unlock_rescan_remove();
}
void t7xx_rescan_done(void)
{
unsigned long flags;
spin_lock_irqsave(&g_mtk_rescan_context.dev_lock, flags);
if (g_mtk_rescan_context.rescan_done == 0) {
pr_debug("this is a rescan probe\n");
g_mtk_rescan_context.rescan_done = 1;
} else {
pr_debug("this is a init probe\n");
}
spin_unlock_irqrestore(&g_mtk_rescan_context.dev_lock, flags);
}
static void t7xx_remove_rescan(struct work_struct *work)
{
struct pci_dev *pdev;
int num_retries = RESCAN_RETRIES;
unsigned long flags;
spin_lock_irqsave(&g_mtk_rescan_context.dev_lock, flags);
g_mtk_rescan_context.rescan_done = 0;
pdev = g_mtk_rescan_context.dev;
spin_unlock_irqrestore(&g_mtk_rescan_context.dev_lock, flags);
if (pdev) {
pci_stop_and_remove_bus_device_locked(pdev);
pr_debug("start remove and rescan flow\n");
}
do {
t7xx_pci_dev_rescan();
spin_lock_irqsave(&g_mtk_rescan_context.dev_lock, flags);
if (g_mtk_rescan_context.rescan_done) {
spin_unlock_irqrestore(&g_mtk_rescan_context.dev_lock, flags);
break;
}
spin_unlock_irqrestore(&g_mtk_rescan_context.dev_lock, flags);
msleep(DELAY_RESCAN_MTIME);
} while (num_retries--);
}
void t7xx_rescan_queue_work(struct pci_dev *pdev)
{
unsigned long flags;
dev_info(&pdev->dev, "start queue_mtk_rescan_work\n");
spin_lock_irqsave(&g_mtk_rescan_context.dev_lock, flags);
if (!g_mtk_rescan_context.rescan_done) {
dev_err(&pdev->dev, "rescan failed because last rescan undone\n");
spin_unlock_irqrestore(&g_mtk_rescan_context.dev_lock, flags);
return;
}
g_mtk_rescan_context.dev = pdev;
spin_unlock_irqrestore(&g_mtk_rescan_context.dev_lock, flags);
queue_work(g_mtk_rescan_context.pcie_rescan_wq, &g_mtk_rescan_context.service_task);
}
int t7xx_rescan_init(void)
{
spin_lock_init(&g_mtk_rescan_context.dev_lock);
g_mtk_rescan_context.rescan_done = 1;
g_mtk_rescan_context.dev = NULL;
g_mtk_rescan_context.pcie_rescan_wq = create_singlethread_workqueue(MTK_RESCAN_WQ);
if (!g_mtk_rescan_context.pcie_rescan_wq) {
pr_err("Failed to create workqueue: %s\n", MTK_RESCAN_WQ);
return -ENOMEM;
}
INIT_WORK(&g_mtk_rescan_context.service_task, t7xx_remove_rescan);
return 0;
}
void t7xx_rescan_deinit(void)
{
unsigned long flags;
spin_lock_irqsave(&g_mtk_rescan_context.dev_lock, flags);
g_mtk_rescan_context.rescan_done = 0;
g_mtk_rescan_context.dev = NULL;
spin_unlock_irqrestore(&g_mtk_rescan_context.dev_lock, flags);
cancel_work_sync(&g_mtk_rescan_context.service_task);
destroy_workqueue(g_mtk_rescan_context.pcie_rescan_wq);
}
/* SPDX-License-Identifier: GPL-2.0-only
*
* Copyright (c) 2021, MediaTek Inc.
* Copyright (c) 2021-2022, Intel Corporation.
*/
#ifndef __T7XX_PCI_RESCAN_H__
#define __T7XX_PCI_RESCAN_H__
#define MTK_RESCAN_WQ "mtk_rescan_wq"
#define DELAY_RESCAN_MTIME 1000
#define RESCAN_RETRIES 35
struct remove_rescan_context {
struct work_struct service_task;
struct workqueue_struct *pcie_rescan_wq;
spinlock_t dev_lock; /* protects device */
struct pci_dev *dev;
int rescan_done;
};
void t7xx_pci_dev_rescan(void);
void t7xx_rescan_queue_work(struct pci_dev *pdev);
int t7xx_rescan_init(void);
void t7xx_rescan_deinit(void);
void t7xx_rescan_done(void);
#endif /* __T7XX_PCI_RESCAN_H__ */
@@ -36,9 +36,15 @@
/* Channel ID and Message ID definitions.
 * The channel number consists of peer_id(15:12) , channel_id(11:0)
 * peer_id:
 * 0:reserved, 1: to AP, 2: to MD
 */
enum port_ch {
/* to AP */
PORT_CH_AP_CONTROL_RX = 0x1000,
PORT_CH_AP_CONTROL_TX = 0x1001,
PORT_CH_AP_LOG_RX = 0x1008,
PORT_CH_AP_LOG_TX = 0x1009,
/* to MD */
PORT_CH_CONTROL_RX = 0x2000,
PORT_CH_CONTROL_TX = 0x2001,
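As a worked example of the encoding above, a hedged helper sketch
(GENMASK()/FIELD_GET() are standard kernel helpers; the mask names here are
hypothetical, for illustration only, not part of the driver):

#include <linux/bitfield.h>

#define PORT_CH_PEER_ID_MASK	GENMASK(15, 12)	/* hypothetical */
#define PORT_CH_CHANNEL_ID_MASK	GENMASK(11, 0)	/* hypothetical */

/* PORT_CH_AP_CONTROL_RX (0x1000) -> peer_id 1 (AP), channel_id 0;
 * PORT_CH_CONTROL_RX (0x2000) -> peer_id 2 (MD), channel_id 0.
 */
static inline u16 port_ch_peer_id(u16 ch)
{
	return FIELD_GET(PORT_CH_PEER_ID_MASK, ch);
}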
@@ -94,6 +100,7 @@ struct t7xx_port_conf {
struct port_ops *ops;
char *name;
enum wwan_port_type port_type;
bool is_early_port;
};
struct t7xx_port {
@@ -122,11 +129,14 @@ struct t7xx_port {
int rx_length_th;
bool chan_enable;
struct task_struct *thread;
struct t7xx_devlink *dl;
};
int t7xx_get_port_mtu(struct t7xx_port *port);
struct sk_buff *t7xx_port_alloc_skb(int payload);
struct sk_buff *t7xx_ctrl_alloc_skb(int payload);
int t7xx_port_enqueue_skb(struct t7xx_port *port, struct sk_buff *skb);
int t7xx_port_send_raw_skb(struct t7xx_port *port, struct sk_buff *skb);
int t7xx_port_send_skb(struct t7xx_port *port, struct sk_buff *skb, unsigned int pkt_header,
unsigned int ex_msg);
int t7xx_port_send_ctl_skb(struct t7xx_port *port, struct sk_buff *skb, unsigned int msg,
......
@@ -167,8 +167,12 @@ static int control_msg_handler(struct t7xx_port *port, struct sk_buff *skb)
case CTL_ID_HS2_MSG:
skb_pull(skb, sizeof(*ctrl_msg_h));
if (port_conf->rx_ch == PORT_CH_CONTROL_RX ||
    port_conf->rx_ch == PORT_CH_AP_CONTROL_RX) {
int event = port_conf->rx_ch == PORT_CH_CONTROL_RX ?
	    FSM_EVENT_MD_HS2 : FSM_EVENT_AP_HS2;
ret = t7xx_fsm_append_event(ctl, event, skb->data,
le32_to_cpu(ctrl_msg_h->data_length));
if (ret)
dev_err(port->dev, "Failed to append Handshake 2 event");
......
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (c) 2022, Intel Corporation.
*/
#include <linux/bitfield.h>
#include <linux/debugfs.h>
#include <linux/vmalloc.h>
#include "t7xx_hif_cldma.h"
#include "t7xx_pci_rescan.h"
#include "t7xx_port_devlink.h"
#include "t7xx_port_proxy.h"
#include "t7xx_state_monitor.h"
#include "t7xx_uevent.h"
static struct t7xx_devlink_region_info t7xx_devlink_region_list[T7XX_TOTAL_REGIONS] = {
{"mr_dump", T7XX_MRDUMP_SIZE},
{"lk_dump", T7XX_LKDUMP_SIZE},
};
static int t7xx_devlink_port_read(struct t7xx_port *port, char *buf, size_t count)
{
int ret = 0, read_len;
struct sk_buff *skb;
spin_lock_irq(&port->rx_wq.lock);
if (skb_queue_empty(&port->rx_skb_list)) {
ret = wait_event_interruptible_locked_irq(port->rx_wq,
!skb_queue_empty(&port->rx_skb_list));
if (ret == -ERESTARTSYS) {
spin_unlock_irq(&port->rx_wq.lock);
return -EINTR;
}
}
skb = skb_dequeue(&port->rx_skb_list);
spin_unlock_irq(&port->rx_wq.lock);
read_len = count > skb->len ? skb->len : count;
memcpy(buf, skb->data, read_len);
dev_kfree_skb(skb);
return ret ? ret : read_len;
}
static int t7xx_devlink_port_write(struct t7xx_port *port, const char *buf, size_t count)
{
const struct t7xx_port_conf *port_conf = port->port_conf;
size_t actual_count;
struct sk_buff *skb;
int ret, txq_mtu;
txq_mtu = t7xx_get_port_mtu(port);
if (txq_mtu < 0)
return -EINVAL;
actual_count = count > txq_mtu ? txq_mtu : count;
skb = __dev_alloc_skb(actual_count, GFP_KERNEL);
if (!skb)
return -ENOMEM;
skb_put_data(skb, buf, actual_count);
ret = t7xx_port_send_raw_skb(port, skb);
if (ret) {
dev_err(port->dev, "write error on %s, size: %zu, ret: %d\n",
port_conf->name, actual_count, ret);
dev_kfree_skb(skb);
return ret;
}
return actual_count;
}
static int t7xx_devlink_fb_handle_response(struct t7xx_port *port, int *data)
{
int ret = 0, index = 0, return_data = 0, read_bytes;
char status[T7XX_FB_RESPONSE_SIZE + 1];
while (index < T7XX_FB_RESP_COUNT) {
index++;
read_bytes = t7xx_devlink_port_read(port, status, T7XX_FB_RESPONSE_SIZE);
if (read_bytes < 0) {
dev_err(port->dev, "status read failed");
ret = -EIO;
break;
}
status[read_bytes] = '\0';
if (!strncmp(status, T7XX_FB_RESP_INFO, strlen(T7XX_FB_RESP_INFO))) {
break;
} else if (!strncmp(status, T7XX_FB_RESP_OKAY, strlen(T7XX_FB_RESP_OKAY))) {
break;
} else if (!strncmp(status, T7XX_FB_RESP_FAIL, strlen(T7XX_FB_RESP_FAIL))) {
ret = -EPROTO;
break;
} else if (!strncmp(status, T7XX_FB_RESP_DATA, strlen(T7XX_FB_RESP_DATA))) {
if (data) {
if (!kstrtoint(status + strlen(T7XX_FB_RESP_DATA), 16,
&return_data)) {
*data = return_data;
} else {
dev_err(port->dev, "kstrtoint error!\n");
ret = -EPROTO;
}
}
break;
}
}
return ret;
}
static int t7xx_devlink_fb_raw_command(char *cmd, struct t7xx_port *port, int *data)
{
int ret, cmd_size = strlen(cmd);
if (cmd_size > T7XX_FB_COMMAND_SIZE) {
dev_err(port->dev, "command length %d is long\n", cmd_size);
return -EINVAL;
}
if (cmd_size != t7xx_devlink_port_write(port, cmd, cmd_size)) {
dev_err(port->dev, "raw command = %s write failed\n", cmd);
return -EIO;
}
dev_dbg(port->dev, "raw command = %s written to the device\n", cmd);
ret = t7xx_devlink_fb_handle_response(port, data);
if (ret)
dev_err(port->dev, "raw command = %s response FAILURE:%d\n", cmd, ret);
return ret;
}
static int t7xx_devlink_fb_send_buffer(struct t7xx_port *port, const u8 *buf, size_t size)
{
size_t remaining = size, offset = 0, len;
int write_done;
if (!size)
return -EINVAL;
while (remaining) {
len = min_t(size_t, remaining, CLDMA_DEDICATED_Q_BUFF_SZ);
write_done = t7xx_devlink_port_write(port, buf + offset, len);
if (write_done < 0) {
dev_err(port->dev, "write to device failed in %s", __func__);
return -EIO;
} else if (write_done != len) {
dev_err(port->dev, "write Error. Only %d/%zu bytes written",
write_done, len);
return -EIO;
}
remaining -= len;
offset += len;
}
return 0;
}
static int t7xx_devlink_fb_download_command(struct t7xx_port *port, size_t size)
{
char download_command[T7XX_FB_COMMAND_SIZE];
snprintf(download_command, sizeof(download_command), "%s:%08zx",
T7XX_FB_CMD_DOWNLOAD, size);
return t7xx_devlink_fb_raw_command(download_command, port, NULL);
}
static int t7xx_devlink_fb_download(struct t7xx_port *port, const u8 *buf, size_t size)
{
int ret;
if (!size) {
dev_err(port->dev, "no data to download");
return -EINVAL;
}
ret = t7xx_devlink_fb_download_command(port, size);
if (ret)
return ret;
ret = t7xx_devlink_fb_send_buffer(port, buf, size);
if (ret)
return ret;
return t7xx_devlink_fb_handle_response(port, NULL);
}
static int t7xx_devlink_fb_flash(const char *cmd, struct t7xx_port *port)
{
char flash_command[T7XX_FB_COMMAND_SIZE];
snprintf(flash_command, sizeof(flash_command), "%s:%s", T7XX_FB_CMD_FLASH, cmd);
return t7xx_devlink_fb_raw_command(flash_command, port, NULL);
}
static int t7xx_devlink_fb_flash_partition(const char *partition, const u8 *buf,
struct t7xx_port *port, size_t size)
{
int ret;
ret = t7xx_devlink_fb_download(port, buf, size);
if (ret)
return ret;
return t7xx_devlink_fb_flash(partition, port);
}
static int t7xx_devlink_fb_get_core(struct t7xx_port *port)
{
struct t7xx_devlink_region_info *mrdump_region;
char mrdump_complete_event[T7XX_FB_EVENT_SIZE];
u32 mrd_mb = T7XX_MRDUMP_SIZE / (1024 * 1024);
struct t7xx_devlink *dl = port->dl;
int clen, dlen = 0, result = 0;
unsigned long long zipsize = 0;
char mcmd[T7XX_FB_MCMD_SIZE];
size_t offset_dlen = 0;
char *mdata;
set_bit(T7XX_MRDUMP_STATUS, &dl->status);
mdata = kmalloc(T7XX_FB_MDATA_SIZE, GFP_KERNEL);
if (!mdata) {
result = -ENOMEM;
goto get_core_exit;
}
mrdump_region = dl->dl_region_info[T7XX_MRDUMP_INDEX];
mrdump_region->dump = vmalloc(mrdump_region->default_size);
if (!mrdump_region->dump) {
kfree(mdata);
result = -ENOMEM;
goto get_core_exit;
}
result = t7xx_devlink_fb_raw_command(T7XX_FB_CMD_OEM_MRDUMP, port, NULL);
if (result) {
dev_err(port->dev, "%s command failed\n", T7XX_FB_CMD_OEM_MRDUMP);
vfree(mrdump_region->dump);
kfree(mdata);
goto get_core_exit;
}
while (mrdump_region->default_size > offset_dlen) {
clen = t7xx_devlink_port_read(port, mcmd, sizeof(mcmd));
if (clen == strlen(T7XX_FB_CMD_RTS) &&
(!strncmp(mcmd, T7XX_FB_CMD_RTS, strlen(T7XX_FB_CMD_RTS)))) {
memset(mdata, 0, T7XX_FB_MDATA_SIZE);
dlen = 0;
memset(mcmd, 0, sizeof(mcmd));
clen = snprintf(mcmd, sizeof(mcmd), "%s", T7XX_FB_CMD_CTS);
if (t7xx_devlink_port_write(port, mcmd, clen) != clen) {
dev_err(port->dev, "write for _CTS failed:%d\n", clen);
goto get_core_free_mem;
}
dlen = t7xx_devlink_port_read(port, mdata, T7XX_FB_MDATA_SIZE);
if (dlen <= 0) {
dev_err(port->dev, "read data error(%d)\n", dlen);
goto get_core_free_mem;
}
zipsize += (unsigned long long)(dlen);
memcpy(mrdump_region->dump + offset_dlen, mdata, dlen);
offset_dlen += dlen;
memset(mcmd, 0, sizeof(mcmd));
clen = snprintf(mcmd, sizeof(mcmd), "%s", T7XX_FB_CMD_FIN);
if (t7xx_devlink_port_write(port, mcmd, clen) != clen) {
dev_err(port->dev, "%s: _FIN failed, (Read %05d:%05llu)\n",
__func__, clen, zipsize);
goto get_core_free_mem;
}
} else if ((clen == strlen(T7XX_FB_RESP_MRDUMP_DONE)) &&
(!strncmp(mcmd, T7XX_FB_RESP_MRDUMP_DONE,
strlen(T7XX_FB_RESP_MRDUMP_DONE)))) {
dev_dbg(port->dev, "%s! size:%zd\n", T7XX_FB_RESP_MRDUMP_DONE, offset_dlen);
mrdump_region->actual_size = offset_dlen;
snprintf(mrdump_complete_event, sizeof(mrdump_complete_event),
"%s size=%zu", T7XX_UEVENT_MRDUMP_READY, offset_dlen);
t7xx_uevent_send(dl->dev, mrdump_complete_event);
kfree(mdata);
result = 0;
goto get_core_exit;
} else {
dev_err(port->dev, "getcore protocol error (read len %05d)\n", clen);
goto get_core_free_mem;
}
}
dev_err(port->dev, "mrdump exceeds %uMB size. Discarded!", mrd_mb);
t7xx_uevent_send(port->dev, T7XX_UEVENT_MRD_DISCD);
get_core_free_mem:
kfree(mdata);
vfree(mrdump_region->dump);
clear_bit(T7XX_MRDUMP_STATUS, &dl->status);
return -EPROTO;
get_core_exit:
clear_bit(T7XX_MRDUMP_STATUS, &dl->status);
return result;
}
static int t7xx_devlink_fb_dump_log(struct t7xx_port *port)
{
struct t7xx_devlink_region_info *lkdump_region;
char lkdump_complete_event[T7XX_FB_EVENT_SIZE];
struct t7xx_devlink *dl = port->dl;
int dlen, datasize = 0, result;
size_t offset_dlen = 0;
u8 *data;
set_bit(T7XX_LKDUMP_STATUS, &dl->status);
result = t7xx_devlink_fb_raw_command(T7XX_FB_CMD_OEM_LKDUMP, port, &datasize);
if (result) {
dev_err(port->dev, "%s command returns failure\n", T7XX_FB_CMD_OEM_LKDUMP);
goto lkdump_exit;
}
lkdump_region = dl->dl_region_info[T7XX_LKDUMP_INDEX];
if (datasize > lkdump_region->default_size) {
dev_err(port->dev, "lkdump size is more than %dKB. Discarded!",
T7XX_LKDUMP_SIZE / 1024);
t7xx_uevent_send(dl->dev, T7XX_UEVENT_LKD_DISCD);
result = -EPROTO;
goto lkdump_exit;
}
data = kzalloc(datasize, GFP_KERNEL);
if (!data) {
result = -ENOMEM;
goto lkdump_exit;
}
lkdump_region->dump = vmalloc(lkdump_region->default_size);
if (!lkdump_region->dump) {
kfree(data);
result = -ENOMEM;
goto lkdump_exit;
}
while (datasize > 0) {
dlen = t7xx_devlink_port_read(port, data, datasize);
if (dlen <= 0) {
dev_err(port->dev, "lkdump read error ret = %d", dlen);
kfree(data);
result = -EPROTO;
goto lkdump_exit;
}
memcpy(lkdump_region->dump + offset_dlen, data, dlen);
datasize -= dlen;
offset_dlen += dlen;
}
dev_dbg(port->dev, "LKDUMP DONE! size:%zd\n", offset_dlen);
lkdump_region->actual_size = offset_dlen;
snprintf(lkdump_complete_event, sizeof(lkdump_complete_event), "%s size=%zu",
T7XX_UEVENT_LKDUMP_READY, offset_dlen);
t7xx_uevent_send(dl->dev, lkdump_complete_event);
kfree(data);
clear_bit(T7XX_LKDUMP_STATUS, &dl->status);
return t7xx_devlink_fb_handle_response(port, NULL);
lkdump_exit:
clear_bit(T7XX_LKDUMP_STATUS, &dl->status);
return result;
}
static int t7xx_devlink_flash_update(struct devlink *devlink,
struct devlink_flash_update_params *params,
struct netlink_ext_ack *extack)
{
struct t7xx_devlink *dl = devlink_priv(devlink);
const char *component = params->component;
const struct firmware *fw = params->fw;
char flash_event[T7XX_FB_EVENT_SIZE];
struct t7xx_port *port;
int ret;
port = dl->port;
if (port->dl->mode != T7XX_FB_DL_MODE) {
dev_err(port->dev, "Modem is not in fastboot download mode!");
ret = -EPERM;
goto err_out;
}
if (dl->status != T7XX_DEVLINK_IDLE) {
dev_err(port->dev, "Modem is busy!");
ret = -EBUSY;
goto err_out;
}
if (!component || !fw->data) {
ret = -EINVAL;
goto err_out;
}
set_bit(T7XX_FLASH_STATUS, &dl->status);
dev_dbg(port->dev, "flash partition name:%s binary size:%zu\n", component, fw->size);
ret = t7xx_devlink_fb_flash_partition(component, fw->data, port, fw->size);
if (ret) {
devlink_flash_update_status_notify(devlink, "flashing failure!",
params->component, 0, 0);
snprintf(flash_event, sizeof(flash_event), "%s for [%s]",
T7XX_UEVENT_FLASHING_FAILURE, params->component);
} else {
devlink_flash_update_status_notify(devlink, "flashing success!",
params->component, 0, 0);
snprintf(flash_event, sizeof(flash_event), "%s for [%s]",
T7XX_UEVENT_FLASHING_SUCCESS, params->component);
}
t7xx_uevent_send(dl->dev, flash_event);
err_out:
clear_bit(T7XX_FLASH_STATUS, &dl->status);
return ret;
}
static int t7xx_devlink_reload_down(struct devlink *devlink, bool netns_change,
enum devlink_reload_action action,
enum devlink_reload_limit limit,
struct netlink_ext_ack *extack)
{
struct t7xx_devlink *dl = devlink_priv(devlink);
switch (action) {
case DEVLINK_RELOAD_ACTION_DRIVER_REINIT:
dl->set_fastboot_dl = 1;
return 0;
case DEVLINK_RELOAD_ACTION_FW_ACTIVATE:
return t7xx_devlink_fb_raw_command(T7XX_FB_CMD_REBOOT, dl->port, NULL);
default:
/* Unsupported action should not get to this function */
return -EOPNOTSUPP;
}
}
static int t7xx_devlink_reload_up(struct devlink *devlink,
enum devlink_reload_action action,
enum devlink_reload_limit limit,
u32 *actions_performed,
struct netlink_ext_ack *extack)
{
struct t7xx_devlink *dl = devlink_priv(devlink);
*actions_performed = BIT(action);
switch (action) {
case DEVLINK_RELOAD_ACTION_DRIVER_REINIT:
case DEVLINK_RELOAD_ACTION_FW_ACTIVATE:
t7xx_rescan_queue_work(dl->mtk_dev->pdev);
return 0;
default:
/* Unsupported action should not get to this function */
return -EOPNOTSUPP;
}
}
/* Call back function for devlink ops */
static const struct devlink_ops devlink_flash_ops = {
.supported_flash_update_params = DEVLINK_SUPPORT_FLASH_UPDATE_COMPONENT,
.flash_update = t7xx_devlink_flash_update,
.reload_actions = BIT(DEVLINK_RELOAD_ACTION_DRIVER_REINIT) |
BIT(DEVLINK_RELOAD_ACTION_FW_ACTIVATE),
.reload_down = t7xx_devlink_reload_down,
.reload_up = t7xx_devlink_reload_up,
};
static int t7xx_devlink_region_snapshot(struct devlink *dl, const struct devlink_region_ops *ops,
struct netlink_ext_ack *extack, u8 **data)
{
struct t7xx_devlink_region_info *region_info = ops->priv;
struct t7xx_devlink *t7xx_dl = devlink_priv(dl);
u8 *snapshot_mem;
if (t7xx_dl->status != T7XX_DEVLINK_IDLE) {
dev_err(t7xx_dl->dev, "Modem is busy!");
return -EBUSY;
}
dev_dbg(t7xx_dl->dev, "accessed devlink region:%s index:%d", ops->name, region_info->entry);
if (!strncmp(ops->name, "mr_dump", strlen("mr_dump"))) {
if (!region_info->dump) {
dev_err(t7xx_dl->dev, "devlink region:%s dump memory is not valid!",
region_info->region_name);
return -ENOMEM;
}
snapshot_mem = vmalloc(region_info->default_size);
if (!snapshot_mem)
return -ENOMEM;
memcpy(snapshot_mem, region_info->dump, region_info->default_size);
*data = snapshot_mem;
} else if (!strncmp(ops->name, "lk_dump", strlen("lk_dump"))) {
int ret;
ret = t7xx_devlink_fb_dump_log(t7xx_dl->port);
if (ret)
return ret;
*data = region_info->dump;
}
return 0;
}
/* To create regions for dump files */
static int t7xx_devlink_create_region(struct t7xx_devlink *dl)
{
struct devlink_region_ops *region_ops;
int rc, i;
region_ops = dl->dl_region_ops;
for (i = 0; i < T7XX_TOTAL_REGIONS; i++) {
region_ops[i].name = t7xx_devlink_region_list[i].region_name;
region_ops[i].snapshot = t7xx_devlink_region_snapshot;
region_ops[i].destructor = vfree;
dl->dl_region[i] =
devlink_region_create(dl->dl_ctx, &region_ops[i], T7XX_MAX_SNAPSHOTS,
t7xx_devlink_region_list[i].default_size);
if (IS_ERR(dl->dl_region[i])) {
rc = PTR_ERR(dl->dl_region[i]);
dev_err(dl->dev, "devlink region fail,err %d", rc);
for (i--; i >= 0; i--)
devlink_region_destroy(dl->dl_region[i]);
return rc;
}
t7xx_devlink_region_list[i].entry = i;
region_ops[i].priv = t7xx_devlink_region_list + i;
}
return 0;
}
/* To Destroy devlink regions */
static void t7xx_devlink_destroy_region(struct t7xx_devlink *dl)
{
u8 i;
for (i = 0; i < T7XX_TOTAL_REGIONS; i++)
devlink_region_destroy(dl->dl_region[i]);
}
int t7xx_devlink_register(struct t7xx_pci_dev *t7xx_dev)
{
struct devlink *dl_ctx;
dl_ctx = devlink_alloc(&devlink_flash_ops, sizeof(struct t7xx_devlink),
&t7xx_dev->pdev->dev);
if (!dl_ctx)
return -ENOMEM;
devlink_set_features(dl_ctx, DEVLINK_F_RELOAD);
devlink_register(dl_ctx);
t7xx_dev->dl = devlink_priv(dl_ctx);
t7xx_dev->dl->dl_ctx = dl_ctx;
return 0;
}
void t7xx_devlink_unregister(struct t7xx_pci_dev *t7xx_dev)
{
struct devlink *dl_ctx = priv_to_devlink(t7xx_dev->dl);
devlink_unregister(dl_ctx);
devlink_free(dl_ctx);
}
/**
* t7xx_devlink_region_init - Initialize/register devlink to t7xx driver
* @port: Pointer to port structure
* @dw: Pointer to devlink work structure
* @wq: Pointer to devlink workqueue structure
*
* Returns: Pointer to t7xx_devlink on success and NULL on failure
*/
static struct t7xx_devlink *t7xx_devlink_region_init(struct t7xx_port *port,
struct t7xx_devlink_work *dw,
struct workqueue_struct *wq)
{
struct t7xx_pci_dev *mtk_dev = port->t7xx_dev;
struct t7xx_devlink *dl = mtk_dev->dl;
int rc, i;
dl->dl_ctx = mtk_dev->dl->dl_ctx;
dl->mtk_dev = mtk_dev;
dl->dev = &mtk_dev->pdev->dev;
dl->mode = T7XX_FB_NO_MODE;
dl->status = T7XX_DEVLINK_IDLE;
dl->dl_work = dw;
dl->dl_wq = wq;
for (i = 0; i < T7XX_TOTAL_REGIONS; i++) {
dl->dl_region_info[i] = &t7xx_devlink_region_list[i];
dl->dl_region_info[i]->dump = NULL;
}
dl->port = port;
port->dl = dl;
rc = t7xx_devlink_create_region(dl);
if (rc) {
dev_err(dl->dev, "devlink region creation failed, rc %d", rc);
return NULL;
}
return dl;
}
/**
 * t7xx_devlink_region_deinit - Uninitialize the devlink regions from the t7xx driver.
* @dl: Devlink instance
*/
static void t7xx_devlink_region_deinit(struct t7xx_devlink *dl)
{
dl->mode = T7XX_FB_NO_MODE;
t7xx_devlink_destroy_region(dl);
}
static void t7xx_devlink_work_handler(struct work_struct *data)
{
struct t7xx_devlink_work *dl_work;
dl_work = container_of(data, struct t7xx_devlink_work, work);
t7xx_devlink_fb_get_core(dl_work->port);
}
static int t7xx_devlink_init(struct t7xx_port *port)
{
struct t7xx_devlink_work *dl_work;
struct workqueue_struct *wq;
dl_work = kmalloc(sizeof(*dl_work), GFP_KERNEL);
if (!dl_work)
return -ENOMEM;
wq = create_workqueue("t7xx_devlink");
if (!wq) {
kfree(dl_work);
dev_err(port->dev, "create_workqueue failed\n");
return -ENODATA;
}
INIT_WORK(&dl_work->work, t7xx_devlink_work_handler);
dl_work->port = port;
port->rx_length_th = T7XX_MAX_QUEUE_LENGTH;
if (!t7xx_devlink_region_init(port, dl_work, wq))
return -ENOMEM;
return 0;
}
static void t7xx_devlink_uninit(struct t7xx_port *port)
{
struct t7xx_devlink *dl = port->dl;
struct sk_buff *skb;
unsigned long flags;
vfree(dl->dl_region_info[T7XX_MRDUMP_INDEX]->dump);
if (dl->dl_wq)
destroy_workqueue(dl->dl_wq);
kfree(dl->dl_work);
t7xx_devlink_region_deinit(port->dl);
spin_lock_irqsave(&port->rx_skb_list.lock, flags);
while ((skb = __skb_dequeue(&port->rx_skb_list)) != NULL)
dev_kfree_skb(skb);
spin_unlock_irqrestore(&port->rx_skb_list.lock, flags);
}
static int t7xx_devlink_enable_chl(struct t7xx_port *port)
{
spin_lock(&port->port_update_lock);
port->chan_enable = true;
spin_unlock(&port->port_update_lock);
if (port->dl->dl_wq && port->dl->mode == T7XX_FB_DUMP_MODE)
queue_work(port->dl->dl_wq, &port->dl->dl_work->work);
return 0;
}
static int t7xx_devlink_disable_chl(struct t7xx_port *port)
{
spin_lock(&port->port_update_lock);
port->chan_enable = false;
spin_unlock(&port->port_update_lock);
return 0;
}
struct port_ops devlink_port_ops = {
.init = &t7xx_devlink_init,
.recv_skb = &t7xx_port_enqueue_skb,
.uninit = &t7xx_devlink_uninit,
.enable_chl = &t7xx_devlink_enable_chl,
.disable_chl = &t7xx_devlink_disable_chl,
};
/* SPDX-License-Identifier: GPL-2.0-only
*
* Copyright (c) 2022, Intel Corporation.
*/
#ifndef __T7XX_PORT_DEVLINK_H__
#define __T7XX_PORT_DEVLINK_H__
#include <net/devlink.h>
#include "t7xx_pci.h"
#define T7XX_MAX_QUEUE_LENGTH 32
#define T7XX_FB_COMMAND_SIZE 64
#define T7XX_FB_RESPONSE_SIZE 64
#define T7XX_FB_MCMD_SIZE 64
#define T7XX_FB_MDATA_SIZE 1024
#define T7XX_FB_RESP_COUNT 30
#define T7XX_FB_CMD_RTS "_RTS"
#define T7XX_FB_CMD_CTS "_CTS"
#define T7XX_FB_CMD_FIN "_FIN"
#define T7XX_FB_CMD_OEM_MRDUMP "oem mrdump"
#define T7XX_FB_CMD_OEM_LKDUMP "oem dump_pllk_log"
#define T7XX_FB_CMD_DOWNLOAD "download"
#define T7XX_FB_CMD_FLASH "flash"
#define T7XX_FB_CMD_REBOOT "reboot"
#define T7XX_FB_RESP_MRDUMP_DONE "MRDUMP08_DONE"
#define T7XX_FB_RESP_OKAY "OKAY"
#define T7XX_FB_RESP_FAIL "FAIL"
#define T7XX_FB_RESP_DATA "DATA"
#define T7XX_FB_RESP_INFO "INFO"
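/*
 * Editor's sketch of the wire exchange (illustrative; the framing follows
 * the public fastboot protocol, which this driver speaks): a flash is a
 * "download" announcing the payload size, the raw payload, then a
 * "flash" naming the partition:
 *
 *	host:   download:<size-hex>    (T7XX_FB_CMD_DOWNLOAD)
 *	device: DATA<size-hex>         (T7XX_FB_RESP_DATA)
 *	host:   <payload bytes>
 *	device: OKAY                   (or FAIL / INFO<text>)
 *	host:   flash:<partition>      (T7XX_FB_CMD_FLASH)
 *	device: OKAY
 *
 * The "oem mrdump" command appears to stream dump data until the device
 * sends T7XX_FB_RESP_MRDUMP_DONE ("MRDUMP08_DONE").
 */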
#define T7XX_FB_EVENT_SIZE 50
#define T7XX_MAX_SNAPSHOTS 1
#define T7XX_MAX_REGION_NAME_LENGTH 20
#define T7XX_MRDUMP_SIZE (160 * 1024 * 1024)
#define T7XX_LKDUMP_SIZE (256 * 1024)
#define T7XX_TOTAL_REGIONS 2
#define T7XX_FLASH_STATUS 0
#define T7XX_MRDUMP_STATUS 1
#define T7XX_LKDUMP_STATUS 2
#define T7XX_DEVLINK_IDLE 0
#define T7XX_FB_NO_MODE 0
#define T7XX_FB_DL_MODE 1
#define T7XX_FB_DUMP_MODE 2
#define T7XX_MRDUMP_INDEX 0
#define T7XX_LKDUMP_INDEX 1
struct t7xx_devlink_work {
struct work_struct work;
struct t7xx_port *port;
};
struct t7xx_devlink_region_info {
char region_name[T7XX_MAX_REGION_NAME_LENGTH];
u32 default_size;
u32 actual_size;
u32 entry;
u8 *dump;
};
struct t7xx_devlink {
struct t7xx_pci_dev *mtk_dev;
struct t7xx_port *port;
struct device *dev;
struct devlink *dl_ctx;
struct t7xx_devlink_work *dl_work;
struct workqueue_struct *dl_wq;
struct t7xx_devlink_region_info *dl_region_info[T7XX_TOTAL_REGIONS];
struct devlink_region_ops dl_region_ops[T7XX_TOTAL_REGIONS];
struct devlink_region *dl_region[T7XX_TOTAL_REGIONS];
u8 mode;
unsigned long status;
int set_fastboot_dl;
};
int t7xx_devlink_register(struct t7xx_pci_dev *t7xx_dev);
void t7xx_devlink_unregister(struct t7xx_pci_dev *t7xx_dev);
#endif /* __T7XX_PORT_DEVLINK_H__ */
@@ -77,6 +77,29 @@ static const struct t7xx_port_conf t7xx_md_port_conf[] = {
.path_id = CLDMA_ID_MD,
.ops = &ctl_port_ops,
.name = "t7xx_ctrl",
}, {
.tx_ch = PORT_CH_AP_CONTROL_TX,
.rx_ch = PORT_CH_AP_CONTROL_RX,
.txq_index = Q_IDX_CTRL,
.rxq_index = Q_IDX_CTRL,
.path_id = CLDMA_ID_AP,
.ops = &ctl_port_ops,
.name = "t7xx_ap_ctrl",
},
};
static struct t7xx_port_conf t7xx_early_port_conf[] = {
{
.tx_ch = 0xffff,
.rx_ch = 0xffff,
.txq_index = 1,
.rxq_index = 1,
.txq_exp_index = 1,
.rxq_exp_index = 1,
.path_id = CLDMA_ID_AP,
.is_early_port = true,
.ops = &devlink_port_ops,
.name = "ttyDUMP",
},
};
@@ -194,7 +217,17 @@ int t7xx_port_enqueue_skb(struct t7xx_port *port, struct sk_buff *skb)
return 0;
}
int t7xx_get_port_mtu(struct t7xx_port *port)
{
enum cldma_id path_id = port->port_conf->path_id;
int tx_qno = t7xx_port_get_queue_no(port);
struct cldma_ctrl *md_ctrl;
md_ctrl = port->t7xx_dev->md->md_ctrl[path_id];
return md_ctrl->tx_ring[tx_qno].pkt_size;
}
int t7xx_port_send_raw_skb(struct t7xx_port *port, struct sk_buff *skb)
{
enum cldma_id path_id = port->port_conf->path_id;
struct cldma_ctrl *md_ctrl;
@@ -309,6 +342,26 @@ static void t7xx_proxy_setup_ch_mapping(struct port_proxy *port_prox)
}
}
static int t7xx_port_proxy_recv_skb_from_queue(struct t7xx_pci_dev *t7xx_dev,
struct cldma_queue *queue, struct sk_buff *skb)
{
struct port_proxy *port_prox = t7xx_dev->md->port_prox;
const struct t7xx_port_conf *port_conf;
struct t7xx_port *port;
int ret;
port = port_prox->ports;
port_conf = port->port_conf;
ret = port_conf->ops->recv_skb(port, skb);
if (ret < 0 && ret != -ENOBUFS) {
dev_err(port->dev, "drop on RX ch %d, %d\n", port_conf->rx_ch, ret);
dev_kfree_skb_any(skb);
}
return ret;
}
static struct t7xx_port *t7xx_port_proxy_find_port(struct t7xx_pci_dev *t7xx_dev,
struct cldma_queue *queue, u16 channel)
{
@@ -330,6 +383,22 @@ static struct t7xx_port *t7xx_port_proxy_find_port(struct t7xx_pci_dev *t7xx_dev
return NULL;
}
struct t7xx_port *t7xx_port_proxy_get_port_by_name(struct port_proxy *port_prox, char *port_name)
{
const struct t7xx_port_conf *port_conf;
struct t7xx_port *port;
int i;
for_each_proxy_port(i, port, port_prox) {
port_conf = port->port_conf;
if (!strncmp(port_conf->name, port_name, strlen(port_conf->name)))
return port;
}
return NULL;
}
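/*
 * Usage sketch (editor's illustration): the LK stage handler looks the
 * early dump port up by its configured name before enabling it:
 *
 *	port = t7xx_port_proxy_get_port_by_name(md->port_prox, "ttyDUMP");
 *	if (port)
 *		port->port_conf->ops->enable_chl(port);
 */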
/**
* t7xx_port_proxy_recv_skb() - Dispatch received skb.
* @queue: CLDMA queue.
@@ -350,6 +419,9 @@ static int t7xx_port_proxy_recv_skb(struct cldma_queue *queue, struct sk_buff *s
u16 seq_num, channel;
int ret;
if (queue->q_type == CLDMA_DEDICATED_Q)
return t7xx_port_proxy_recv_skb_from_queue(t7xx_dev, queue, skb);
channel = FIELD_GET(CCCI_H_CHN_FLD, le32_to_cpu(ccci_h->status));
if (t7xx_fsm_get_md_state(ctl) == MD_STATE_INVALID) {
dev_err_ratelimited(dev, "Packet drop on channel 0x%x, modem not ready\n", channel);
@@ -364,6 +436,7 @@ static int t7xx_port_proxy_recv_skb(struct cldma_queue *queue, struct sk_buff *s
seq_num = t7xx_port_next_rx_seq_num(port, ccci_h);
port_conf = port->port_conf;
if (!port->port_conf->is_early_port)
skb_pull(skb, sizeof(*ccci_h));
ret = port_conf->ops->recv_skb(port, skb);
@@ -416,8 +489,12 @@ static void t7xx_proxy_init_all_ports(struct t7xx_modem *md)
if (port_conf->tx_ch == PORT_CH_CONTROL_TX)
md->core_md.ctl_port = port;
if (port_conf->tx_ch == PORT_CH_AP_CONTROL_TX)
md->core_ap.ctl_port = port;
port->t7xx_dev = md->t7xx_dev;
port->dev = &md->t7xx_dev->pdev->dev;
port->dl = md->t7xx_dev->dl;
spin_lock_init(&port->port_update_lock);
port->chan_enable = false;
@@ -428,26 +505,58 @@ static void t7xx_proxy_init_all_ports(struct t7xx_modem *md)
t7xx_proxy_setup_ch_mapping(port_prox);
}
void t7xx_port_proxy_set_cfg(struct t7xx_modem *md, enum port_cfg_id cfg_id)
{
struct port_proxy *port_prox = md->port_prox;
const struct t7xx_port_conf *port_conf;
struct device *dev = port_prox->dev;
unsigned int port_count;
struct t7xx_port *port;
int i;
if (port_prox->cfg_id == cfg_id)
return;
if (port_prox->cfg_id != PORT_CFG_ID_INVALID) {
for_each_proxy_port(i, port, port_prox)
port->port_conf->ops->uninit(port);
devm_kfree(dev, port_prox->ports);
}
if (cfg_id == PORT_CFG_ID_EARLY) {
port_conf = t7xx_early_port_conf;
port_count = ARRAY_SIZE(t7xx_early_port_conf);
} else {
port_conf = t7xx_md_port_conf;
port_count = ARRAY_SIZE(t7xx_md_port_conf);
}
port_prox->ports = devm_kcalloc(dev, port_count, sizeof(struct t7xx_port), GFP_KERNEL);
if (!port_prox->ports)
return;
for (i = 0; i < port_count; i++)
port_prox->ports[i].port_conf = &port_conf[i];
port_prox->cfg_id = cfg_id;
port_prox->port_count = port_count;
t7xx_proxy_init_all_ports(md);
}
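/*
 * Usage sketch (editor's illustration): probe starts from the early
 * (fastboot) port table and the FSM switches to the normal table once
 * the device reaches Linux stage:
 *
 *	t7xx_port_proxy_set_cfg(md, PORT_CFG_ID_EARLY);    from t7xx_proxy_alloc()
 *	t7xx_port_proxy_set_cfg(md, PORT_CFG_ID_NORMAL);   from fsm_routine_start(), LINUX_STAGE
 */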
static int t7xx_proxy_alloc(struct t7xx_modem *md)
{
struct device *dev = &md->t7xx_dev->pdev->dev;
struct port_proxy *port_prox;
port_prox = devm_kzalloc(dev, sizeof(*port_prox), GFP_KERNEL);
if (!port_prox)
return -ENOMEM;
md->port_prox = port_prox;
port_prox->dev = dev;
t7xx_port_proxy_set_cfg(md, PORT_CFG_ID_EARLY);
return 0;
}
@@ -469,6 +578,7 @@ int t7xx_port_proxy_init(struct t7xx_modem *md)
if (ret)
return ret;
t7xx_cldma_set_recv_skb(md->md_ctrl[CLDMA_ID_AP], t7xx_port_proxy_recv_skb);
t7xx_cldma_set_recv_skb(md->md_ctrl[CLDMA_ID_MD], t7xx_port_proxy_recv_skb);
return 0;
}
@@ -31,12 +31,19 @@
#define RX_QUEUE_MAXLEN 32
#define CTRL_QUEUE_MAXLEN 16
enum port_cfg_id {
PORT_CFG_ID_INVALID,
PORT_CFG_ID_NORMAL,
PORT_CFG_ID_EARLY,
};
struct port_proxy {
int port_count;
struct list_head rx_ch_ports[PORT_CH_ID_MASK + 1];
struct list_head queue_ports[CLDMA_NUM][MTK_QUEUES];
struct device *dev;
enum port_cfg_id cfg_id;
struct t7xx_port *ports;
};
struct ccci_header {
@@ -86,6 +93,7 @@ struct ctrl_msg_header {
/* Port operations mapping */
extern struct port_ops wwan_sub_port_ops;
extern struct port_ops ctl_port_ops;
extern struct port_ops devlink_port_ops;
void t7xx_port_proxy_reset(struct port_proxy *port_prox);
void t7xx_port_proxy_uninit(struct port_proxy *port_prox);
@@ -94,5 +102,7 @@ void t7xx_port_proxy_md_status_notify(struct port_proxy *port_prox, unsigned int
int t7xx_port_enum_msg_handler(struct t7xx_modem *md, void *msg);
int t7xx_port_proxy_chl_enable_disable(struct port_proxy *port_prox, unsigned int ch_id,
bool en_flag);
struct t7xx_port *t7xx_port_proxy_get_port_by_name(struct port_proxy *port_prox, char *port_name);
void t7xx_port_proxy_set_cfg(struct t7xx_modem *md, enum port_cfg_id cfg_id);
#endif /* __T7XX_PORT_PROXY_H__ */
@@ -54,7 +54,7 @@ static void t7xx_port_ctrl_stop(struct wwan_port *port)
static int t7xx_port_ctrl_tx(struct wwan_port *port, struct sk_buff *skb)
{
struct t7xx_port *port_private = wwan_port_get_drvdata(port);
size_t len, offset, chunk_len = 0, txq_mtu;
const struct t7xx_port_conf *port_conf;
struct t7xx_fsm_ctl *ctl;
enum md_state md_state;
@@ -72,6 +72,7 @@ static int t7xx_port_ctrl_tx(struct wwan_port *port, struct sk_buff *skb)
return -ENODEV;
}
txq_mtu = t7xx_get_port_mtu(port_private);
for (offset = 0; offset < len; offset += chunk_len) {
struct sk_buff *skb_ccci;
int ret;
@@ -155,6 +156,12 @@ static void t7xx_port_wwan_md_state_notify(struct t7xx_port *port, unsigned int
{
const struct t7xx_port_conf *port_conf = port->port_conf;
if (state == MD_STATE_EXCEPTION) {
if (port->wwan_port)
wwan_port_txoff(port->wwan_port);
return;
}
if (state != MD_STATE_READY)
return;
@@ -56,7 +56,7 @@
#define D2H_INT_RESUME_ACK BIT(12)
#define D2H_INT_SUSPEND_ACK_AP BIT(13)
#define D2H_INT_RESUME_ACK_AP BIT(14)
#define D2H_INT_ASYNC_AP_HK BIT(15)
#define D2H_INT_ASYNC_MD_HK BIT(16)
/* Register base */
@@ -101,11 +101,34 @@ enum t7xx_pm_resume_state {
PM_RESUME_REG_STATE_L2_EXP,
};
enum host_event_e {
HOST_EVENT_INIT = 0,
FASTBOOT_DL_NOTY = 0x3,
};
#define T7XX_PCIE_MISC_DEV_STATUS 0x0d1c
#define MISC_RESET_TYPE_FLDR BIT(27)
#define MISC_RESET_TYPE_PLDR BIT(26)
#define MISC_DEV_STATUS_MASK GENMASK(15, 0)
#define LK_EVENT_MASK GENMASK(11, 8)
#define HOST_EVENT_MASK GENMASK(31, 28)
enum lk_event_id {
LK_EVENT_NORMAL = 0,
LK_EVENT_CREATE_PD_PORT = 1,
LK_EVENT_CREATE_POST_DL_PORT = 2,
LK_EVENT_RESET = 7,
};
#define MISC_STAGE_MASK GENMASK(2, 0)
enum t7xx_device_stage {
INIT_STAGE = 0,
PRE_BROM_STAGE = 1,
POST_BROM_STAGE = 2,
LK_STAGE = 3,
LINUX_STAGE = 4,
};
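/*
 * Decode sketch (editor's illustration): stage, LK event and host event
 * all live in T7XX_PCIE_MISC_DEV_STATUS and are unpacked with FIELD_GET():
 *
 *	u32 status = ioread32(IREG_BASE(t7xx_dev) + T7XX_PCIE_MISC_DEV_STATUS);
 *	unsigned int stage = FIELD_GET(MISC_STAGE_MASK, status);   enum t7xx_device_stage
 *	unsigned int event = FIELD_GET(LK_EVENT_MASK, status);     enum lk_event_id
 */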
#define T7XX_PCIE_RESOURCE_STATUS 0x0d28
#define T7XX_PCIE_RESOURCE_STS_MSK GENMASK(4, 0)
@@ -35,11 +35,15 @@
#include "t7xx_hif_cldma.h"
#include "t7xx_mhccif.h"
#include "t7xx_modem_ops.h"
#include "t7xx_netdev.h"
#include "t7xx_pci.h"
#include "t7xx_pcie_mac.h"
#include "t7xx_port_devlink.h"
#include "t7xx_port_proxy.h"
#include "t7xx_pci_rescan.h"
#include "t7xx_reg.h"
#include "t7xx_state_monitor.h"
#include "t7xx_uevent.h"
#define FSM_DRM_DISABLE_DELAY_MS 200
#define FSM_EVENT_POLL_INTERVAL_MS 20
@@ -47,6 +51,10 @@
#define FSM_MD_EX_PASS_TIMEOUT_MS 45000
#define FSM_CMD_TIMEOUT_MS 2000
/* As per MTK, AP to MD handshake time is ~15s */
#define DEVICE_STAGE_POLL_INTERVAL_MS 100
#define DEVICE_STAGE_POLL_COUNT 150
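/*
 * 150 polls at 100 ms intervals gives the ~15 s handshake budget noted
 * above: DEVICE_STAGE_POLL_INTERVAL_MS * DEVICE_STAGE_POLL_COUNT = 15000 ms.
 */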
void t7xx_fsm_notifier_register(struct t7xx_modem *md, struct t7xx_fsm_notifier *notifier)
{
struct t7xx_fsm_ctl *ctl = md->fsm_ctl;
@@ -206,6 +214,65 @@ static void fsm_routine_exception(struct t7xx_fsm_ctl *ctl, struct t7xx_fsm_comm
fsm_finish_command(ctl, cmd, 0);
}
static void t7xx_host_event_notify(struct t7xx_modem *md, unsigned int event_id)
{
u32 value;
value = ioread32(IREG_BASE(md->t7xx_dev) + T7XX_PCIE_MISC_DEV_STATUS);
value &= ~HOST_EVENT_MASK;
value |= FIELD_PREP(HOST_EVENT_MASK, event_id);
iowrite32(value, IREG_BASE(md->t7xx_dev) + T7XX_PCIE_MISC_DEV_STATUS);
}
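/*
 * LK (second-stage bootloader) stage events, decoded from LK_EVENT_MASK:
 * LK_EVENT_CREATE_PD_PORT selects coredump mode and
 * LK_EVENT_CREATE_POST_DL_PORT selects fastboot download mode; both make
 * the host reinitialize the AP CLDMA on its dedicated queue, enable the
 * early "ttyDUMP" port in the matching mode, and notify user space with
 * a T7XX_UEVENT_MODEM_FASTBOOT_* uevent.
 */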
static void t7xx_lk_stage_event_handling(struct t7xx_fsm_ctl *ctl, unsigned int dev_status)
{
struct t7xx_modem *md = ctl->md;
struct cldma_ctrl *md_ctrl;
enum lk_event_id lk_event;
struct t7xx_port *port;
struct device *dev;
dev = &md->t7xx_dev->pdev->dev;
lk_event = FIELD_GET(LK_EVENT_MASK, dev_status);
dev_info(dev, "Device enter next stage from LK stage/n");
switch (lk_event) {
case LK_EVENT_NORMAL:
break;
case LK_EVENT_CREATE_PD_PORT:
case LK_EVENT_CREATE_POST_DL_PORT:
md_ctrl = md->md_ctrl[CLDMA_ID_AP];
t7xx_cldma_hif_hw_init(md_ctrl);
t7xx_cldma_stop(md_ctrl);
t7xx_cldma_switch_cfg(md_ctrl, CLDMA_DEDICATED_Q_CFG);
dev_info(dev, "creating the ttyDUMP port\n");
port = t7xx_port_proxy_get_port_by_name(md->port_prox, "ttyDUMP");
if (!port) {
dev_err(dev, "ttyDUMP port not found\n");
return;
}
if (lk_event == LK_EVENT_CREATE_PD_PORT)
port->dl->mode = T7XX_FB_DUMP_MODE;
else
port->dl->mode = T7XX_FB_DL_MODE;
port->port_conf->ops->enable_chl(port);
t7xx_cldma_start(md_ctrl);
if (lk_event == LK_EVENT_CREATE_PD_PORT)
t7xx_uevent_send(dev, T7XX_UEVENT_MODEM_FASTBOOT_DUMP_MODE);
else
t7xx_uevent_send(dev, T7XX_UEVENT_MODEM_FASTBOOT_DL_MODE);
break;
case LK_EVENT_RESET:
break;
default:
dev_err(dev, "Invalid BROM event\n");
break;
}
}
static int fsm_stopped_handler(struct t7xx_fsm_ctl *ctl)
{
ctl->curr_state = FSM_STATE_STOPPED;
@@ -243,14 +310,24 @@ static void fsm_routine_stopping(struct t7xx_fsm_ctl *ctl, struct t7xx_fsm_comma
t7xx_cldma_stop(md_ctrl);
if (!ctl->md->rgu_irq_asserted) {
if (t7xx_dev->dl->set_fastboot_dl)
t7xx_host_event_notify(ctl->md, FASTBOOT_DL_NOTY);
t7xx_mhccif_h2d_swint_trigger(t7xx_dev, H2D_CH_DRM_DISABLE_AP);
/* Wait for the DRM disable to take effect */
msleep(FSM_DRM_DISABLE_DELAY_MS);
if (t7xx_dev->dl->set_fastboot_dl) {
/* Do not try fldr because device will always wait for
* MHCCIF bit 13 in fastboot download flow.
*/
t7xx_mhccif_h2d_swint_trigger(t7xx_dev, H2D_CH_DEVICE_RESET);
} else {
err = t7xx_acpi_fldr_func(t7xx_dev);
if (err)
t7xx_mhccif_h2d_swint_trigger(t7xx_dev, H2D_CH_DEVICE_RESET);
}
}
fsm_finish_command(ctl, cmd, fsm_stopped_handler(ctl));
}
@@ -272,6 +349,7 @@ static void fsm_routine_ready(struct t7xx_fsm_ctl *ctl)
ctl->curr_state = FSM_STATE_READY;
t7xx_fsm_broadcast_ready_state(ctl);
t7xx_uevent_send(&md->t7xx_dev->pdev->dev, T7XX_UEVENT_MODEM_READY);
t7xx_md_event_notify(md, FSM_READY);
}
@@ -285,8 +363,9 @@ static int fsm_routine_starting(struct t7xx_fsm_ctl *ctl)
t7xx_fsm_broadcast_state(ctl, MD_STATE_WAITING_FOR_HS1);
t7xx_md_event_notify(md, FSM_START);
wait_event_interruptible_timeout(ctl->async_hk_wq,
(md->core_md.ready && md->core_ap.ready) ||
ctl->exp_flg, HZ * 60);
dev = &md->t7xx_dev->pdev->dev;
if (ctl->exp_flg)
@@ -297,6 +376,13 @@ static int fsm_routine_starting(struct t7xx_fsm_ctl *ctl)
if (md->core_md.handshake_ongoing)
t7xx_fsm_append_event(ctl, FSM_EVENT_MD_HS2_EXIT, NULL, 0);
fsm_routine_exception(ctl, NULL, EXCEPTION_HS_TIMEOUT);
return -ETIMEDOUT;
} else if (!md->core_ap.ready) {
dev_err(dev, "AP handshake timeout\n");
if (md->core_ap.handshake_ongoing)
t7xx_fsm_append_event(ctl, FSM_EVENT_AP_HS2_EXIT, NULL, 0);
fsm_routine_exception(ctl, NULL, EXCEPTION_HS_TIMEOUT);
return -ETIMEDOUT;
}
...@@ -309,8 +395,10 @@ static int fsm_routine_starting(struct t7xx_fsm_ctl *ctl) ...@@ -309,8 +395,10 @@ static int fsm_routine_starting(struct t7xx_fsm_ctl *ctl)
static void fsm_routine_start(struct t7xx_fsm_ctl *ctl, struct t7xx_fsm_command *cmd) static void fsm_routine_start(struct t7xx_fsm_ctl *ctl, struct t7xx_fsm_command *cmd)
{ {
struct t7xx_modem *md = ctl->md; struct t7xx_modem *md = ctl->md;
unsigned int device_stage;
struct device *dev;
u32 dev_status; u32 dev_status;
int ret; int ret = 0;
if (!md) if (!md)
return; return;
...@@ -321,22 +409,60 @@ static void fsm_routine_start(struct t7xx_fsm_ctl *ctl, struct t7xx_fsm_command ...@@ -321,22 +409,60 @@ static void fsm_routine_start(struct t7xx_fsm_ctl *ctl, struct t7xx_fsm_command
return; return;
} }
dev = &md->t7xx_dev->pdev->dev;
dev_status = ioread32(IREG_BASE(md->t7xx_dev) + T7XX_PCIE_MISC_DEV_STATUS);
dev_status &= MISC_DEV_STATUS_MASK;
dev_dbg(dev, "dev_status = %x modem state = %d\n", dev_status, ctl->md_state);
if (dev_status == MISC_DEV_STATUS_MASK) {
dev_err(dev, "invalid device status\n");
ret = -EINVAL;
goto finish_command;
}
ctl->curr_state = FSM_STATE_PRE_START;
t7xx_md_event_notify(md, FSM_PRE_START);
device_stage = FIELD_GET(MISC_STAGE_MASK, dev_status);
if (dev_status == ctl->prev_dev_status) {
if (ctl->device_stage_check_cnt++ >= DEVICE_STAGE_POLL_COUNT) {
dev_err(dev, "Timeout at device stage 0x%x\n", device_stage);
ctl->device_stage_check_cnt = 0;
ret = -ETIMEDOUT;
} else {
msleep(DEVICE_STAGE_POLL_INTERVAL_MS);
ret = t7xx_fsm_append_cmd(ctl, FSM_CMD_START, 0);
}
goto finish_command;
}
switch (device_stage) {
case INIT_STAGE:
case PRE_BROM_STAGE:
case POST_BROM_STAGE:
ret = t7xx_fsm_append_cmd(ctl, FSM_CMD_START, 0);
break;
case LK_STAGE:
dev_info(dev, "LK_STAGE Entered");
t7xx_lk_stage_event_handling(ctl, dev_status);
break;
case LINUX_STAGE:
t7xx_cldma_hif_hw_init(md->md_ctrl[CLDMA_ID_AP]);
t7xx_cldma_hif_hw_init(md->md_ctrl[CLDMA_ID_MD]);
t7xx_port_proxy_set_cfg(md, PORT_CFG_ID_NORMAL);
ret = fsm_routine_starting(ctl);
break;
default:
break;
}
finish_command:
ctl->prev_dev_status = dev_status;
fsm_finish_command(ctl, cmd, ret);
}
static int fsm_main_thread(void *data)
@@ -507,6 +633,8 @@ void t7xx_fsm_reset(struct t7xx_modem *md)
fsm_flush_event_cmd_qs(ctl);
ctl->curr_state = FSM_STATE_STOPPED;
ctl->exp_flg = false;
ctl->prev_dev_status = 0;
ctl->device_stage_check_cnt = 0;
}
int t7xx_fsm_init(struct t7xx_modem *md)
@@ -38,10 +38,12 @@ enum t7xx_fsm_state {
enum t7xx_fsm_event_state {
FSM_EVENT_INVALID,
FSM_EVENT_MD_HS2,
FSM_EVENT_AP_HS2,
FSM_EVENT_MD_EX,
FSM_EVENT_MD_EX_REC_OK,
FSM_EVENT_MD_EX_PASS,
FSM_EVENT_MD_HS2_EXIT,
FSM_EVENT_AP_HS2_EXIT,
FSM_EVENT_MAX
};
@@ -94,6 +96,8 @@ struct t7xx_fsm_ctl {
bool exp_flg;
spinlock_t notifier_lock; /* Protects notifier list */
struct list_head notifier_list;
u32 prev_dev_status;
unsigned int device_stage_check_cnt;
};
struct t7xx_fsm_event {
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (c) 2022, Intel Corporation.
*/
#include <linux/slab.h>
#include "t7xx_uevent.h"
/* Update the uevent in work queue context */
static void t7xx_uevent_work(struct work_struct *data)
{
struct t7xx_uevent_info *info;
char *envp[2] = { NULL, NULL };
info = container_of(data, struct t7xx_uevent_info, work);
envp[0] = info->uevent;
if (kobject_uevent_env(&info->dev->kobj, KOBJ_CHANGE, envp))
pr_err("uevent %s failed to sent", info->uevent);
kfree(info);
}
/**
* t7xx_uevent_send - Send modem event to user space.
* @dev: Generic device pointer
* @uevent: Uevent information
*/
void t7xx_uevent_send(struct device *dev, char *uevent)
{
struct t7xx_uevent_info *info = kzalloc(sizeof(*info), GFP_ATOMIC);
if (!info)
return;
INIT_WORK(&info->work, t7xx_uevent_work);
info->dev = dev;
snprintf(info->uevent, T7XX_MAX_UEVENT_LEN, "T7XX_EVENT=%s", uevent);
schedule_work(&info->work);
}
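/*
 * Usage sketch (editor's illustration): callers pass one of the
 * T7XX_UEVENT_* strings from t7xx_uevent.h, e.g. once the modem reaches
 * READY state:
 *
 *	t7xx_uevent_send(&md->t7xx_dev->pdev->dev, T7XX_UEVENT_MODEM_READY);
 *
 * User space then receives a KOBJ_CHANGE event carrying
 * "T7XX_EVENT=T7XX_MODEM_READY".
 */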
/* SPDX-License-Identifier: GPL-2.0-only
*
* Copyright (c) 2022, Intel Corporation.
*/
#ifndef __T7XX_UEVENT_H__
#define __T7XX_UEVENT_H__
#include <linux/device.h>
#include <linux/kobject.h>
/* Maximum length of user events */
#define T7XX_MAX_UEVENT_LEN 64
/* T7XX Host driver uevents */
#define T7XX_UEVENT_MODEM_READY "T7XX_MODEM_READY"
#define T7XX_UEVENT_MODEM_FASTBOOT_DL_MODE "T7XX_MODEM_FASTBOOT_DL_MODE"
#define T7XX_UEVENT_MODEM_FASTBOOT_DUMP_MODE "T7XX_MODEM_FASTBOOT_DUMP_MODE"
#define T7XX_UEVENT_MRDUMP_READY "T7XX_MRDUMP_READY"
#define T7XX_UEVENT_LKDUMP_READY "T7XX_LKDUMP_READY"
#define T7XX_UEVENT_MRD_DISCD "T7XX_MRDUMP_DISCARDED"
#define T7XX_UEVENT_LKD_DISCD "T7XX_LKDUMP_DISCARDED"
#define T7XX_UEVENT_FLASHING_SUCCESS "T7XX_FLASHING_SUCCESS"
#define T7XX_UEVENT_FLASHING_FAILURE "T7XX_FLASHING_FAILURE"
/**
* struct t7xx_uevent_info - Uevent information structure.
* @dev: Pointer to device structure
* @uevent: Uevent information
* @work: Uevent work struct
*/
struct t7xx_uevent_info {
struct device *dev;
char uevent[T7XX_MAX_UEVENT_LEN];
struct work_struct work;
};
void t7xx_uevent_send(struct device *dev, char *uevent);
#endif /* __T7XX_UEVENT_H__ */