Commit d9351ea1 authored by Linus Torvalds

Merge branch 'irq-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull IRQ chip updates from Ingo Molnar:
 "A late irqchips update:

   - New TI INTR/INTA set of drivers

   - Rewrite of the stm32mp1-exti driver as a platform driver

   - Update the IOMMU MSI mapping API to be RT friendly

   - A number of cleanups and other low impact fixes"

* 'irq-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (34 commits)
  iommu/dma-iommu: Remove iommu_dma_map_msi_msg()
  irqchip/gic-v3-mbi: Don't map the MSI page in mbi_compose_m{b, s}i_msg()
  irqchip/ls-scfg-msi: Don't map the MSI page in ls_scfg_msi_compose_msg()
  irqchip/gic-v3-its: Don't map the MSI page in its_irq_compose_msi_msg()
  irqchip/gicv2m: Don't map the MSI page in gicv2m_compose_msi_msg()
  iommu/dma-iommu: Split iommu_dma_map_msi_msg() in two parts
  genirq/msi: Add a new field in msi_desc to store an IOMMU cookie
  arm64: arch_k3: Enable interrupt controller drivers
  irqchip/ti-sci-inta: Add msi domain support
  soc: ti: Add MSI domain bus support for Interrupt Aggregator
  irqchip/ti-sci-inta: Add support for Interrupt Aggregator driver
  dt-bindings: irqchip: Introduce TISCI Interrupt Aggregator bindings
  irqchip/ti-sci-intr: Add support for Interrupt Router driver
  dt-bindings: irqchip: Introduce TISCI Interrupt router bindings
  gpio: thunderx: Use the default parent apis for {request,release}_resources
  genirq: Introduce irq_chip_{request,release}_resource_parent() apis
  firmware: ti_sci: Add helper apis to manage resources
  firmware: ti_sci: Add RM mapping table for am654
  firmware: ti_sci: Add support for IRQ management
  firmware: ti_sci: Add support for RM core ops
  ...
parents 39feaa3f fb4e0592
...@@ -24,7 +24,8 @@ relationship between the TI-SCI parent node to the child node.
Required properties:
-------------------
- compatible:	should be "ti,k2g-sci" for TI 66AK2G SoC
		should be "ti,am654-sci" for TI AM654 SoC
- mbox-names:
	"rx" - Mailbox corresponding to receive path
	"tx" - Mailbox corresponding to transmit path
...
Texas Instruments K3 Interrupt Aggregator
=========================================
The Interrupt Aggregator (INTA) provides a centralized machine
which handles the termination of system events so that they can
be coherently processed by the host(s) in the system. A maximum
of 64 events can be mapped to a single interrupt.
Interrupt Aggregator
+-----------------------------------------+
| Intmap VINT |
| +--------------+ +------------+ |
m ------>| | vint | bit | | 0 |.....|63| vint0 |
. | +--------------+ +------------+ | +------+
. | . . | | HOST |
Globalevents ------>| . . |------>| IRQ |
. | . . | | CTRL |
. | . . | +------+
n ------>| +--------------+ +------------+ |
| | vint | bit | | 0 |.....|63| vintx |
| +--------------+ +------------+ |
| |
+-----------------------------------------+
Configuration of the Intmap registers that map global events to vints is done
by a system controller (like the Device Memory and Security Controller on K3
AM654 SoC). The driver should request the range of global events and vints
assigned to the requesting host from the system controller. Management of
these requested resources is handled by the driver, which asks the system
controller to map a specific global event to a (vint, bit) pair.
Communication between the host processor running an OS and the system
controller happens through a protocol called TI System Control Interface
(TISCI protocol). For more details refer:
Documentation/devicetree/bindings/arm/keystone/ti,sci.txt
TISCI Interrupt Aggregator Node:
-------------------------------
- compatible: Must be "ti,sci-inta".
- reg: Should contain registers location and length.
- interrupt-controller: Identifies the node as an interrupt controller
- msi-controller: Identifies the node as an MSI controller.
- interrupt-parent: phandle of irq parent.
- ti,sci: Phandle to TI-SCI compatible System controller node.
- ti,sci-dev-id: TISCI device ID of the Interrupt Aggregator.
- ti,sci-rm-range-vint: Array of TISCI subtype ids representing vints(inta
outputs) range within this INTA, assigned to the
requesting host context.
- ti,sci-rm-range-global-event: Array of TISCI subtype ids representing the
global events range reaching this IA and are assigned
to the requesting host context.
Example:
--------
main_udmass_inta: interrupt-controller@33d00000 {
compatible = "ti,sci-inta";
reg = <0x0 0x33d00000 0x0 0x100000>;
interrupt-controller;
msi-controller;
interrupt-parent = <&main_navss_intr>;
ti,sci = <&dmsc>;
ti,sci-dev-id = <179>;
ti,sci-rm-range-vint = <0x0>;
ti,sci-rm-range-global-event = <0x1>;
};
Texas Instruments K3 Interrupt Router
=====================================
The Interrupt Router (INTR) module provides a mechanism to mux M
interrupt inputs to N interrupt outputs, where any of the M inputs can be
routed to any of the N outputs. An Interrupt Router handles either edge
triggered or level triggered interrupts; which one is fixed in hardware.
Interrupt Router
+----------------------+
| Inputs Outputs |
+-------+ | +------+ +-----+ |
| GPIO |----------->| | irq0 | | 0 | | Host IRQ
+-------+ | +------+ +-----+ | controller
| . . | +-------+
+-------+ | . . |----->| IRQ |
| INTA |----------->| . . | +-------+
+-------+ | . +-----+ |
| +------+ | N | |
| | irqM | +-----+ |
| +------+ |
| |
+----------------------+
There is one register per output (MUXCNTL_N) that controls the selection.
Configuration of these MUXCNTL_N registers is done by a system controller
(like the Device Memory and Security Controller on K3 AM654 SoC). The system
controller keeps track of the used and unused registers within the Router.
The driver should request the range of GIC IRQs assigned to the requesting
host from the system controller. It is the driver's responsibility to keep
track of Host IRQs.
Communication between the host processor running an OS and the system
controller happens through a protocol called TI System Control Interface
(TISCI protocol). For more details refer:
Documentation/devicetree/bindings/arm/keystone/ti,sci.txt
TISCI Interrupt Router Node:
----------------------------
Required Properties:
- compatible: Must be "ti,sci-intr".
- ti,intr-trigger-type: Should be one of the following:
1: If intr supports edge triggered interrupts.
4: If intr supports level triggered interrupts.
- interrupt-controller: Identifies the node as an interrupt controller
- #interrupt-cells: Specifies the number of cells needed to encode an
interrupt source. The value should be 2.
First cell should contain the TISCI device ID of source
Second cell should contain the interrupt source offset
within the device.
- ti,sci: Phandle to TI-SCI compatible System controller node.
- ti,sci-dst-id: TISCI device ID of the destination IRQ controller.
- ti,sci-rm-range-girq: Array of TISCI subtype ids representing the host irqs
assigned to this interrupt router. Each subtype id
corresponds to a range of host irqs.
For more details on TISCI IRQ resource management refer:
http://downloads.ti.com/tisci/esd/latest/2_tisci_msgs/rm/rm_irq.html
Example:
--------
The following example demonstrates both the interrupt router node and a
consumer node (main_gpio0) on the AM654 SoC:
main_intr: interrupt-controller0 {
compatible = "ti,sci-intr";
ti,intr-trigger-type = <1>;
interrupt-controller;
interrupt-parent = <&gic500>;
#interrupt-cells = <2>;
ti,sci = <&dmsc>;
ti,sci-dst-id = <56>;
ti,sci-rm-range-girq = <0x1>;
};
main_gpio0: gpio@600000 {
...
interrupt-parent = <&main_intr>;
interrupts = <57 256>, <57 257>, <57 258>,
<57 259>, <57 260>, <57 261>;
...
};
...@@ -15547,6 +15547,12 @@ F: Documentation/devicetree/bindings/reset/ti,sci-reset.txt
F:	Documentation/devicetree/bindings/clock/ti,sci-clk.txt
F:	drivers/clk/keystone/sci-clk.c
F:	drivers/reset/reset-ti-sci.c
F: Documentation/devicetree/bindings/interrupt-controller/ti,sci-intr.txt
F: Documentation/devicetree/bindings/interrupt-controller/ti,sci-inta.txt
F: drivers/irqchip/irq-ti-sci-intr.c
F: drivers/irqchip/irq-ti-sci-inta.c
F: include/linux/soc/ti/ti_sci_inta_msi.h
F: drivers/soc/ti/ti_sci_inta_msi.c
Texas Instruments ASoC drivers
M:	Peter Ujfalusi <peter.ujfalusi@ti.com>
...
...@@ -87,6 +87,11 @@ config ARCH_EXYNOS
config ARCH_K3
	bool "Texas Instruments Inc. K3 multicore SoC architecture"
	select PM_GENERIC_DOMAINS if PM
select MAILBOX
select TI_MESSAGE_MANAGER
select TI_SCI_PROTOCOL
select TI_SCI_INTR_IRQCHIP
select TI_SCI_INTA_IRQCHIP
	help
	  This enables support for Texas Instruments' K3 multicore SoC
	  architecture.
...
...@@ -64,6 +64,22 @@ struct ti_sci_xfers_info {
	spinlock_t xfer_lock;
};
/**
* struct ti_sci_rm_type_map - Structure representing TISCI Resource
* management representation of dev_ids.
* @dev_id: TISCI device ID
* @type: Corresponding id as identified by TISCI RM.
*
 * Note: This is used only as a workaround for using RM range APIs
 * on the AM654 SoC. For future SoCs, dev_id will be used as the type
 * for RM range APIs. In order to maintain ABI backward compatibility,
 * the type is not being changed for the AM654 SoC.
 */
struct ti_sci_rm_type_map {
u32 dev_id;
u16 type;
};
/**
 * struct ti_sci_desc - Description of SoC integration
 * @default_host_id:	Host identifier representing the compute entity
...@@ -71,12 +87,14 @@ struct ti_sci_xfers_info {
 * @max_msgs:		Maximum number of messages that can be pending
 *			simultaneously in the system
 * @max_msg_size:	Maximum size of data per message that can be handled.
 * @rm_type_map:	RM resource type mapping structure.
 */
struct ti_sci_desc {
	u8 default_host_id;
	int max_rx_timeout_ms;
	int max_msgs;
	int max_msg_size;
	struct ti_sci_rm_type_map *rm_type_map;
};

/**
...@@ -1600,6 +1618,392 @@ static int ti_sci_cmd_core_reboot(const struct ti_sci_handle *handle)
	return ret;
}
static int ti_sci_get_resource_type(struct ti_sci_info *info, u16 dev_id,
u16 *type)
{
struct ti_sci_rm_type_map *rm_type_map = info->desc->rm_type_map;
bool found = false;
int i;
/* If map is not provided then assume dev_id is used as type */
if (!rm_type_map) {
*type = dev_id;
return 0;
}
for (i = 0; rm_type_map[i].dev_id; i++) {
if (rm_type_map[i].dev_id == dev_id) {
*type = rm_type_map[i].type;
found = true;
break;
}
}
if (!found)
return -EINVAL;
return 0;
}
/**
* ti_sci_get_resource_range - Helper to get a range of resources assigned
* to a host. Resource is uniquely identified by
* type and subtype.
* @handle: Pointer to TISCI handle.
* @dev_id: TISCI device ID.
* @subtype: Resource assignment subtype that is being requested
* from the given device.
* @s_host: Host processor ID to which the resources are allocated
* @range_start: Start index of the resource range
* @range_num: Number of resources in the range
*
* Return: 0 if all went fine, else return appropriate error.
*/
static int ti_sci_get_resource_range(const struct ti_sci_handle *handle,
u32 dev_id, u8 subtype, u8 s_host,
u16 *range_start, u16 *range_num)
{
struct ti_sci_msg_resp_get_resource_range *resp;
struct ti_sci_msg_req_get_resource_range *req;
struct ti_sci_xfer *xfer;
struct ti_sci_info *info;
struct device *dev;
u16 type;
int ret = 0;
if (IS_ERR(handle))
return PTR_ERR(handle);
if (!handle)
return -EINVAL;
info = handle_to_ti_sci_info(handle);
dev = info->dev;
xfer = ti_sci_get_one_xfer(info, TI_SCI_MSG_GET_RESOURCE_RANGE,
TI_SCI_FLAG_REQ_ACK_ON_PROCESSED,
sizeof(*req), sizeof(*resp));
if (IS_ERR(xfer)) {
ret = PTR_ERR(xfer);
dev_err(dev, "Message alloc failed(%d)\n", ret);
return ret;
}
ret = ti_sci_get_resource_type(info, dev_id, &type);
if (ret) {
dev_err(dev, "rm type lookup failed for %u\n", dev_id);
goto fail;
}
req = (struct ti_sci_msg_req_get_resource_range *)xfer->xfer_buf;
req->secondary_host = s_host;
req->type = type & MSG_RM_RESOURCE_TYPE_MASK;
req->subtype = subtype & MSG_RM_RESOURCE_SUBTYPE_MASK;
ret = ti_sci_do_xfer(info, xfer);
if (ret) {
dev_err(dev, "Mbox send fail %d\n", ret);
goto fail;
}
resp = (struct ti_sci_msg_resp_get_resource_range *)xfer->xfer_buf;
if (!ti_sci_is_response_ack(resp)) {
ret = -ENODEV;
} else if (!resp->range_start && !resp->range_num) {
ret = -ENODEV;
} else {
*range_start = resp->range_start;
*range_num = resp->range_num;
	}
fail:
ti_sci_put_one_xfer(&info->minfo, xfer);
return ret;
}
/**
 * ti_sci_cmd_get_resource_range - Get a range of resources assigned to the
 *				   same host that owns this TI SCI interface.
* @handle: Pointer to TISCI handle.
* @dev_id: TISCI device ID.
* @subtype: Resource assignment subtype that is being requested
* from the given device.
* @range_start: Start index of the resource range
* @range_num: Number of resources in the range
*
* Return: 0 if all went fine, else return appropriate error.
*/
static int ti_sci_cmd_get_resource_range(const struct ti_sci_handle *handle,
u32 dev_id, u8 subtype,
u16 *range_start, u16 *range_num)
{
return ti_sci_get_resource_range(handle, dev_id, subtype,
TI_SCI_IRQ_SECONDARY_HOST_INVALID,
range_start, range_num);
}
/**
* ti_sci_cmd_get_resource_range_from_shost - Get a range of resources
* assigned to a specified host.
* @handle: Pointer to TISCI handle.
* @dev_id: TISCI device ID.
* @subtype: Resource assignment subtype that is being requested
* from the given device.
* @s_host: Host processor ID to which the resources are allocated
* @range_start: Start index of the resource range
* @range_num: Number of resources in the range
*
* Return: 0 if all went fine, else return appropriate error.
*/
static
int ti_sci_cmd_get_resource_range_from_shost(const struct ti_sci_handle *handle,
u32 dev_id, u8 subtype, u8 s_host,
u16 *range_start, u16 *range_num)
{
return ti_sci_get_resource_range(handle, dev_id, subtype, s_host,
range_start, range_num);
}
/**
* ti_sci_manage_irq() - Helper api to configure/release the irq route between
* the requested source and destination
* @handle: Pointer to TISCI handle.
* @valid_params: Bit fields defining the validity of certain params
* @src_id: Device ID of the IRQ source
* @src_index: IRQ source index within the source device
* @dst_id: Device ID of the IRQ destination
* @dst_host_irq: IRQ number of the destination device
* @ia_id: Device ID of the IA, if the IRQ flows through this IA
* @vint: Virtual interrupt to be used within the IA
* @global_event: Global event number to be used for the requesting event
* @vint_status_bit: Virtual interrupt status bit to be used for the event
* @s_host: Secondary host ID to which the irq/event is being
* requested for.
* @type: Request type irq set or release.
*
* Return: 0 if all went fine, else return appropriate error.
*/
static int ti_sci_manage_irq(const struct ti_sci_handle *handle,
u32 valid_params, u16 src_id, u16 src_index,
u16 dst_id, u16 dst_host_irq, u16 ia_id, u16 vint,
u16 global_event, u8 vint_status_bit, u8 s_host,
u16 type)
{
struct ti_sci_msg_req_manage_irq *req;
struct ti_sci_msg_hdr *resp;
struct ti_sci_xfer *xfer;
struct ti_sci_info *info;
struct device *dev;
int ret = 0;
if (IS_ERR(handle))
return PTR_ERR(handle);
if (!handle)
return -EINVAL;
info = handle_to_ti_sci_info(handle);
dev = info->dev;
xfer = ti_sci_get_one_xfer(info, type, TI_SCI_FLAG_REQ_ACK_ON_PROCESSED,
sizeof(*req), sizeof(*resp));
if (IS_ERR(xfer)) {
ret = PTR_ERR(xfer);
dev_err(dev, "Message alloc failed(%d)\n", ret);
return ret;
}
req = (struct ti_sci_msg_req_manage_irq *)xfer->xfer_buf;
req->valid_params = valid_params;
req->src_id = src_id;
req->src_index = src_index;
req->dst_id = dst_id;
req->dst_host_irq = dst_host_irq;
req->ia_id = ia_id;
req->vint = vint;
req->global_event = global_event;
req->vint_status_bit = vint_status_bit;
req->secondary_host = s_host;
ret = ti_sci_do_xfer(info, xfer);
if (ret) {
dev_err(dev, "Mbox send fail %d\n", ret);
goto fail;
}
resp = (struct ti_sci_msg_hdr *)xfer->xfer_buf;
ret = ti_sci_is_response_ack(resp) ? 0 : -ENODEV;
fail:
ti_sci_put_one_xfer(&info->minfo, xfer);
return ret;
}
/**
* ti_sci_set_irq() - Helper api to configure the irq route between the
* requested source and destination
* @handle: Pointer to TISCI handle.
* @valid_params: Bit fields defining the validity of certain params
* @src_id: Device ID of the IRQ source
* @src_index: IRQ source index within the source device
* @dst_id: Device ID of the IRQ destination
* @dst_host_irq: IRQ number of the destination device
* @ia_id: Device ID of the IA, if the IRQ flows through this IA
* @vint: Virtual interrupt to be used within the IA
* @global_event: Global event number to be used for the requesting event
* @vint_status_bit: Virtual interrupt status bit to be used for the event
* @s_host: Secondary host ID to which the irq/event is being
* requested for.
*
* Return: 0 if all went fine, else return appropriate error.
*/
static int ti_sci_set_irq(const struct ti_sci_handle *handle, u32 valid_params,
u16 src_id, u16 src_index, u16 dst_id,
u16 dst_host_irq, u16 ia_id, u16 vint,
u16 global_event, u8 vint_status_bit, u8 s_host)
{
	pr_debug("%s: IRQ set with valid_params = 0x%x from src = %d, index = %d, to dst = %d, irq = %d, via ia_id = %d, vint = %d, global event = %d, status_bit = %d\n",
__func__, valid_params, src_id, src_index,
dst_id, dst_host_irq, ia_id, vint, global_event,
vint_status_bit);
return ti_sci_manage_irq(handle, valid_params, src_id, src_index,
dst_id, dst_host_irq, ia_id, vint,
global_event, vint_status_bit, s_host,
TI_SCI_MSG_SET_IRQ);
}
/**
* ti_sci_free_irq() - Helper api to free the irq route between the
* requested source and destination
* @handle: Pointer to TISCI handle.
* @valid_params: Bit fields defining the validity of certain params
* @src_id: Device ID of the IRQ source
* @src_index: IRQ source index within the source device
* @dst_id: Device ID of the IRQ destination
* @dst_host_irq: IRQ number of the destination device
* @ia_id: Device ID of the IA, if the IRQ flows through this IA
* @vint: Virtual interrupt to be used within the IA
* @global_event: Global event number to be used for the requesting event
* @vint_status_bit: Virtual interrupt status bit to be used for the event
* @s_host: Secondary host ID to which the irq/event is being
* requested for.
*
* Return: 0 if all went fine, else return appropriate error.
*/
static int ti_sci_free_irq(const struct ti_sci_handle *handle, u32 valid_params,
u16 src_id, u16 src_index, u16 dst_id,
u16 dst_host_irq, u16 ia_id, u16 vint,
u16 global_event, u8 vint_status_bit, u8 s_host)
{
	pr_debug("%s: IRQ release with valid_params = 0x%x from src = %d, index = %d, to dst = %d, irq = %d, via ia_id = %d, vint = %d, global event = %d, status_bit = %d\n",
__func__, valid_params, src_id, src_index,
dst_id, dst_host_irq, ia_id, vint, global_event,
vint_status_bit);
return ti_sci_manage_irq(handle, valid_params, src_id, src_index,
dst_id, dst_host_irq, ia_id, vint,
global_event, vint_status_bit, s_host,
TI_SCI_MSG_FREE_IRQ);
}
/**
* ti_sci_cmd_set_irq() - Configure a host irq route between the requested
* source and destination.
* @handle: Pointer to TISCI handle.
* @src_id: Device ID of the IRQ source
* @src_index: IRQ source index within the source device
* @dst_id: Device ID of the IRQ destination
* @dst_host_irq: IRQ number of the destination device
*
* Return: 0 if all went fine, else return appropriate error.
*/
static int ti_sci_cmd_set_irq(const struct ti_sci_handle *handle, u16 src_id,
u16 src_index, u16 dst_id, u16 dst_host_irq)
{
u32 valid_params = MSG_FLAG_DST_ID_VALID | MSG_FLAG_DST_HOST_IRQ_VALID;
return ti_sci_set_irq(handle, valid_params, src_id, src_index, dst_id,
dst_host_irq, 0, 0, 0, 0, 0);
}
/**
* ti_sci_cmd_set_event_map() - Configure an event based irq route between the
* requested source and Interrupt Aggregator.
* @handle: Pointer to TISCI handle.
* @src_id: Device ID of the IRQ source
* @src_index: IRQ source index within the source device
* @ia_id: Device ID of the IA, if the IRQ flows through this IA
* @vint: Virtual interrupt to be used within the IA
* @global_event: Global event number to be used for the requesting event
* @vint_status_bit: Virtual interrupt status bit to be used for the event
*
* Return: 0 if all went fine, else return appropriate error.
*/
static int ti_sci_cmd_set_event_map(const struct ti_sci_handle *handle,
u16 src_id, u16 src_index, u16 ia_id,
u16 vint, u16 global_event,
u8 vint_status_bit)
{
u32 valid_params = MSG_FLAG_IA_ID_VALID | MSG_FLAG_VINT_VALID |
MSG_FLAG_GLB_EVNT_VALID |
MSG_FLAG_VINT_STS_BIT_VALID;
return ti_sci_set_irq(handle, valid_params, src_id, src_index, 0, 0,
ia_id, vint, global_event, vint_status_bit, 0);
}
/**
 * ti_sci_cmd_free_irq() - Free a host irq route between the requested
 *			   source and destination.
* @handle: Pointer to TISCI handle.
* @src_id: Device ID of the IRQ source
* @src_index: IRQ source index within the source device
* @dst_id: Device ID of the IRQ destination
* @dst_host_irq: IRQ number of the destination device
*
* Return: 0 if all went fine, else return appropriate error.
*/
static int ti_sci_cmd_free_irq(const struct ti_sci_handle *handle, u16 src_id,
u16 src_index, u16 dst_id, u16 dst_host_irq)
{
u32 valid_params = MSG_FLAG_DST_ID_VALID | MSG_FLAG_DST_HOST_IRQ_VALID;
return ti_sci_free_irq(handle, valid_params, src_id, src_index, dst_id,
dst_host_irq, 0, 0, 0, 0, 0);
}
/**
* ti_sci_cmd_free_event_map() - Free an event map between the requested source
* and Interrupt Aggregator.
* @handle: Pointer to TISCI handle.
* @src_id: Device ID of the IRQ source
* @src_index: IRQ source index within the source device
* @ia_id: Device ID of the IA, if the IRQ flows through this IA
* @vint: Virtual interrupt to be used within the IA
* @global_event: Global event number to be used for the requesting event
* @vint_status_bit: Virtual interrupt status bit to be used for the event
*
* Return: 0 if all went fine, else return appropriate error.
*/
static int ti_sci_cmd_free_event_map(const struct ti_sci_handle *handle,
u16 src_id, u16 src_index, u16 ia_id,
u16 vint, u16 global_event,
u8 vint_status_bit)
{
u32 valid_params = MSG_FLAG_IA_ID_VALID |
MSG_FLAG_VINT_VALID | MSG_FLAG_GLB_EVNT_VALID |
MSG_FLAG_VINT_STS_BIT_VALID;
return ti_sci_free_irq(handle, valid_params, src_id, src_index, 0, 0,
ia_id, vint, global_event, vint_status_bit, 0);
}
/*
 * ti_sci_setup_ops() - Setup the operations structures
 * @info:	pointer to TISCI pointer
...@@ -1610,6 +2014,8 @@ static void ti_sci_setup_ops(struct ti_sci_info *info)
	struct ti_sci_core_ops *core_ops = &ops->core_ops;
	struct ti_sci_dev_ops *dops = &ops->dev_ops;
	struct ti_sci_clk_ops *cops = &ops->clk_ops;
	struct ti_sci_rm_core_ops *rm_core_ops = &ops->rm_core_ops;
	struct ti_sci_rm_irq_ops *iops = &ops->rm_irq_ops;

	core_ops->reboot_device = ti_sci_cmd_core_reboot;
...@@ -1640,6 +2046,15 @@ static void ti_sci_setup_ops(struct ti_sci_info *info)
	cops->get_best_match_freq = ti_sci_cmd_clk_get_match_freq;
	cops->set_freq = ti_sci_cmd_clk_set_freq;
	cops->get_freq = ti_sci_cmd_clk_get_freq;
rm_core_ops->get_range = ti_sci_cmd_get_resource_range;
rm_core_ops->get_range_from_shost =
ti_sci_cmd_get_resource_range_from_shost;
iops->set_irq = ti_sci_cmd_set_irq;
iops->set_event_map = ti_sci_cmd_set_event_map;
iops->free_irq = ti_sci_cmd_free_irq;
iops->free_event_map = ti_sci_cmd_free_event_map;
}

/**
...@@ -1764,6 +2179,219 @@ const struct ti_sci_handle *devm_ti_sci_get_handle(struct device *dev)
}
EXPORT_SYMBOL_GPL(devm_ti_sci_get_handle);
/**
* ti_sci_get_by_phandle() - Get the TI SCI handle using DT phandle
* @np: device node
* @property: property name containing phandle on TISCI node
*
* NOTE: The function does not track individual clients of the framework
* and is expected to be maintained by caller of TI SCI protocol library.
* ti_sci_put_handle must be balanced with successful ti_sci_get_by_phandle
* Return: pointer to handle if successful, else:
* -EPROBE_DEFER if the instance is not ready
* -ENODEV if the required node handler is missing
* -EINVAL if invalid conditions are encountered.
*/
const struct ti_sci_handle *ti_sci_get_by_phandle(struct device_node *np,
const char *property)
{
struct ti_sci_handle *handle = NULL;
struct device_node *ti_sci_np;
struct ti_sci_info *info;
struct list_head *p;
if (!np) {
pr_err("I need a device pointer\n");
return ERR_PTR(-EINVAL);
}
ti_sci_np = of_parse_phandle(np, property, 0);
if (!ti_sci_np)
return ERR_PTR(-ENODEV);
mutex_lock(&ti_sci_list_mutex);
list_for_each(p, &ti_sci_list) {
info = list_entry(p, struct ti_sci_info, node);
if (ti_sci_np == info->dev->of_node) {
handle = &info->handle;
info->users++;
break;
}
}
mutex_unlock(&ti_sci_list_mutex);
of_node_put(ti_sci_np);
if (!handle)
return ERR_PTR(-EPROBE_DEFER);
return handle;
}
EXPORT_SYMBOL_GPL(ti_sci_get_by_phandle);
/**
* devm_ti_sci_get_by_phandle() - Managed get handle using phandle
* @dev: Device pointer requesting TISCI handle
* @property: property name containing phandle on TISCI node
*
* NOTE: This releases the handle once the device resources are
* no longer needed. MUST NOT BE released with ti_sci_put_handle.
* The function does not track individual clients of the framework
* and is expected to be maintained by caller of TI SCI protocol library.
*
* Return: 0 if all went fine, else corresponding error.
*/
const struct ti_sci_handle *devm_ti_sci_get_by_phandle(struct device *dev,
const char *property)
{
const struct ti_sci_handle *handle;
const struct ti_sci_handle **ptr;
ptr = devres_alloc(devm_ti_sci_release, sizeof(*ptr), GFP_KERNEL);
if (!ptr)
return ERR_PTR(-ENOMEM);
handle = ti_sci_get_by_phandle(dev_of_node(dev), property);
if (!IS_ERR(handle)) {
*ptr = handle;
devres_add(dev, ptr);
} else {
devres_free(ptr);
}
return handle;
}
EXPORT_SYMBOL_GPL(devm_ti_sci_get_by_phandle);
/**
* ti_sci_get_free_resource() - Get a free resource from TISCI resource.
* @res: Pointer to the TISCI resource
*
* Return: resource num if all went ok else TI_SCI_RESOURCE_NULL.
*/
u16 ti_sci_get_free_resource(struct ti_sci_resource *res)
{
unsigned long flags;
u16 set, free_bit;
raw_spin_lock_irqsave(&res->lock, flags);
for (set = 0; set < res->sets; set++) {
free_bit = find_first_zero_bit(res->desc[set].res_map,
res->desc[set].num);
if (free_bit != res->desc[set].num) {
set_bit(free_bit, res->desc[set].res_map);
raw_spin_unlock_irqrestore(&res->lock, flags);
return res->desc[set].start + free_bit;
}
}
raw_spin_unlock_irqrestore(&res->lock, flags);
return TI_SCI_RESOURCE_NULL;
}
EXPORT_SYMBOL_GPL(ti_sci_get_free_resource);
/**
* ti_sci_release_resource() - Release a resource from TISCI resource.
* @res: Pointer to the TISCI resource
* @id: Resource id to be released.
*/
void ti_sci_release_resource(struct ti_sci_resource *res, u16 id)
{
unsigned long flags;
u16 set;
raw_spin_lock_irqsave(&res->lock, flags);
for (set = 0; set < res->sets; set++) {
if (res->desc[set].start <= id &&
(res->desc[set].num + res->desc[set].start) > id)
clear_bit(id - res->desc[set].start,
res->desc[set].res_map);
}
raw_spin_unlock_irqrestore(&res->lock, flags);
}
EXPORT_SYMBOL_GPL(ti_sci_release_resource);
/**
* ti_sci_get_num_resources() - Get the number of resources in TISCI resource
* @res: Pointer to the TISCI resource
*
* Return: Total number of available resources.
*/
u32 ti_sci_get_num_resources(struct ti_sci_resource *res)
{
u32 set, count = 0;
for (set = 0; set < res->sets; set++)
count += res->desc[set].num;
return count;
}
EXPORT_SYMBOL_GPL(ti_sci_get_num_resources);
/**
* devm_ti_sci_get_of_resource() - Get a TISCI resource assigned to a device
* @handle: TISCI handle
* @dev: Device pointer to which the resource is assigned
* @dev_id: TISCI device id to which the resource is assigned
* @of_prop: property name by which the resource are represented
*
* Return: Pointer to ti_sci_resource if all went well else appropriate
* error pointer.
*/
struct ti_sci_resource *
devm_ti_sci_get_of_resource(const struct ti_sci_handle *handle,
struct device *dev, u32 dev_id, char *of_prop)
{
struct ti_sci_resource *res;
u32 resource_subtype;
int i, ret;
res = devm_kzalloc(dev, sizeof(*res), GFP_KERNEL);
if (!res)
return ERR_PTR(-ENOMEM);
res->sets = of_property_count_elems_of_size(dev_of_node(dev), of_prop,
sizeof(u32));
if (res->sets < 0) {
dev_err(dev, "%s resource type ids not available\n", of_prop);
return ERR_PTR(res->sets);
}
res->desc = devm_kcalloc(dev, res->sets, sizeof(*res->desc),
GFP_KERNEL);
if (!res->desc)
return ERR_PTR(-ENOMEM);
for (i = 0; i < res->sets; i++) {
ret = of_property_read_u32_index(dev_of_node(dev), of_prop, i,
&resource_subtype);
if (ret)
return ERR_PTR(-EINVAL);
ret = handle->ops.rm_core_ops.get_range(handle, dev_id,
resource_subtype,
&res->desc[i].start,
&res->desc[i].num);
if (ret) {
dev_err(dev, "dev = %d subtype %d not allocated for this host\n",
dev_id, resource_subtype);
return ERR_PTR(ret);
}
dev_dbg(dev, "dev = %d, subtype = %d, start = %d, num = %d\n",
dev_id, resource_subtype, res->desc[i].start,
res->desc[i].num);
res->desc[i].res_map =
devm_kzalloc(dev, BITS_TO_LONGS(res->desc[i].num) *
sizeof(*res->desc[i].res_map), GFP_KERNEL);
if (!res->desc[i].res_map)
return ERR_PTR(-ENOMEM);
}
raw_spin_lock_init(&res->lock);
return res;
}
static int tisci_reboot_handler(struct notifier_block *nb, unsigned long mode,
				void *cmd)
{
...@@ -1784,10 +2412,33 @@ static const struct ti_sci_desc ti_sci_pmmc_k2g_desc = {
	/* Limited by MBOX_TX_QUEUE_LEN. K2G can handle up to 128 messages! */
	.max_msgs = 20,
	.max_msg_size = 64,
.rm_type_map = NULL,
};
static struct ti_sci_rm_type_map ti_sci_am654_rm_type_map[] = {
{.dev_id = 56, .type = 0x00b}, /* GIC_IRQ */
{.dev_id = 179, .type = 0x000}, /* MAIN_NAV_UDMASS_IA0 */
{.dev_id = 187, .type = 0x009}, /* MAIN_NAV_RA */
{.dev_id = 188, .type = 0x006}, /* MAIN_NAV_UDMAP */
{.dev_id = 194, .type = 0x007}, /* MCU_NAV_UDMAP */
{.dev_id = 195, .type = 0x00a}, /* MCU_NAV_RA */
{.dev_id = 0, .type = 0x000}, /* end of table */
};
/* Description for AM654 */
static const struct ti_sci_desc ti_sci_pmmc_am654_desc = {
.default_host_id = 12,
/* Conservative duration */
.max_rx_timeout_ms = 10000,
/* Limited by MBOX_TX_QUEUE_LEN. K2G can handle upto 128 messages! */
.max_msgs = 20,
.max_msg_size = 60,
.rm_type_map = ti_sci_am654_rm_type_map,
};

static const struct of_device_id ti_sci_of_match[] = {
	{.compatible = "ti,k2g-sci", .data = &ti_sci_pmmc_k2g_desc},
	{.compatible = "ti,am654-sci", .data = &ti_sci_pmmc_am654_desc},
	{ /* Sentinel */ },
};
MODULE_DEVICE_TABLE(of, ti_sci_of_match);
...
...@@ -35,6 +35,13 @@
#define TI_SCI_MSG_QUERY_CLOCK_FREQ	0x010d
#define TI_SCI_MSG_GET_CLOCK_FREQ	0x010e
/* Resource Management Requests */
#define TI_SCI_MSG_GET_RESOURCE_RANGE 0x1500
/* IRQ requests */
#define TI_SCI_MSG_SET_IRQ 0x1000
#define TI_SCI_MSG_FREE_IRQ 0x1001
/**
 * struct ti_sci_msg_hdr - Generic Message Header for All messages and responses
 * @type:	Type of messages: One of TI_SCI_MSG* values
...@@ -461,4 +468,99 @@ struct ti_sci_msg_resp_get_clock_freq {
	u64 freq_hz;
} __packed;
#define TI_SCI_IRQ_SECONDARY_HOST_INVALID 0xff
/**
* struct ti_sci_msg_req_get_resource_range - Request to get a host's assigned
* range of resources.
* @hdr: Generic Header
* @type: Unique resource assignment type
* @subtype: Resource assignment subtype within the resource type.
* @secondary_host: Host processing entity to which the resources are
* allocated. This is required only when the destination
 *			host id is different from ti sci interface host id,
* else TI_SCI_IRQ_SECONDARY_HOST_INVALID can be passed.
*
* Request type is TI_SCI_MSG_GET_RESOURCE_RANGE. Responded with requested
* resource range which is of type TI_SCI_MSG_GET_RESOURCE_RANGE.
*/
struct ti_sci_msg_req_get_resource_range {
struct ti_sci_msg_hdr hdr;
#define MSG_RM_RESOURCE_TYPE_MASK GENMASK(9, 0)
#define MSG_RM_RESOURCE_SUBTYPE_MASK GENMASK(5, 0)
u16 type;
u8 subtype;
u8 secondary_host;
} __packed;
/**
* struct ti_sci_msg_resp_get_resource_range - Response to resource get range.
* @hdr: Generic Header
* @range_start: Start index of the resource range.
* @range_num: Number of resources in the range.
*
* Response to request TI_SCI_MSG_GET_RESOURCE_RANGE.
*/
struct ti_sci_msg_resp_get_resource_range {
struct ti_sci_msg_hdr hdr;
u16 range_start;
u16 range_num;
} __packed;
/**
* struct ti_sci_msg_req_manage_irq - Request to configure/release the route
* between the dev and the host.
* @hdr: Generic Header
* @valid_params: Bit fields defining the validity of interrupt source
* parameters. If a bit is not set, then corresponding
* field is not valid and will not be used for route set.
* Bit field definitions:
* 0 - Valid bit for @dst_id
* 1 - Valid bit for @dst_host_irq
* 2 - Valid bit for @ia_id
* 3 - Valid bit for @vint
* 4 - Valid bit for @global_event
* 5 - Valid bit for @vint_status_bit_index
* 31 - Valid bit for @secondary_host
* @src_id: IRQ source peripheral ID.
* @src_index: IRQ source index within the peripheral
* @dst_id: IRQ Destination ID. Based on the architecture it can be
* IRQ controller or host processor ID.
* @dst_host_irq: IRQ number of the destination host IRQ controller
* @ia_id: Device ID of the interrupt aggregator in which the
* vint resides.
* @vint: Virtual interrupt number if the interrupt route
* is through an interrupt aggregator.
* @global_event: Global event that is to be mapped to interrupt
* aggregator virtual interrupt status bit.
* @vint_status_bit: Virtual interrupt status bit if the interrupt route
* utilizes an interrupt aggregator status bit.
* @secondary_host: Host ID of the IRQ destination computing entity. This is
* required only when destination host id is different
* from ti sci interface host id.
*
* Request type is TI_SCI_MSG_SET/RELEASE_IRQ.
* Response is generic ACK / NACK message.
*/
struct ti_sci_msg_req_manage_irq {
struct ti_sci_msg_hdr hdr;
#define MSG_FLAG_DST_ID_VALID TI_SCI_MSG_FLAG(0)
#define MSG_FLAG_DST_HOST_IRQ_VALID TI_SCI_MSG_FLAG(1)
#define MSG_FLAG_IA_ID_VALID TI_SCI_MSG_FLAG(2)
#define MSG_FLAG_VINT_VALID TI_SCI_MSG_FLAG(3)
#define MSG_FLAG_GLB_EVNT_VALID TI_SCI_MSG_FLAG(4)
#define MSG_FLAG_VINT_STS_BIT_VALID TI_SCI_MSG_FLAG(5)
#define MSG_FLAG_SHOST_VALID TI_SCI_MSG_FLAG(31)
u32 valid_params;
u16 src_id;
u16 src_index;
u16 dst_id;
u16 dst_host_irq;
u16 ia_id;
u16 vint;
u16 global_event;
u8 vint_status_bit;
u8 secondary_host;
} __packed;
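The `MSG_RM_RESOURCE_TYPE_MASK`/`MSG_RM_RESOURCE_SUBTYPE_MASK` definitions above bound the wire fields of the get-range request. A small userspace sketch of how such GENMASK-style field masks behave (the `GENMASK_` macro here is a local 32-bit rendering for illustration, not the kernel header):

```c
#include <assert.h>
#include <stdint.h>

/* Userspace rendering of the kernel's GENMASK() for 32-bit values. */
#define GENMASK_(h, l) (((~0u) << (l)) & (~0u >> (31 - (h))))

#define MSG_RM_RESOURCE_TYPE_MASK	GENMASK_(9, 0)	/* 10-bit type field */
#define MSG_RM_RESOURCE_SUBTYPE_MASK	GENMASK_(5, 0)	/* 6-bit subtype field */

/* Clamp raw values to what fits in the request's wire fields. */
static uint16_t rm_pack_type(uint16_t type)
{
	return type & MSG_RM_RESOURCE_TYPE_MASK;
}

static uint8_t rm_pack_subtype(uint8_t subtype)
{
	return subtype & MSG_RM_RESOURCE_SUBTYPE_MASK;
}
```

With 10 type bits and 6 subtype bits, the masks come out to 0x3ff and 0x3f, which matches the `type`/`subtype` member widths (`u16`/`u8`) in the struct above.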
#endif /* __TI_SCI_H */
...@@ -363,22 +363,16 @@ static int thunderx_gpio_irq_request_resources(struct irq_data *data)
{
	struct thunderx_line *txline = irq_data_get_irq_chip_data(data);
	struct thunderx_gpio *txgpio = txline->txgpio;
	struct irq_data *parent_data = data->parent_data;
	int r;

	r = gpiochip_lock_as_irq(&txgpio->chip, txline->line);
	if (r)
		return r;

	if (parent_data && parent_data->chip->irq_request_resources) {
		r = parent_data->chip->irq_request_resources(parent_data);
		if (r)
			goto error;
	}

	return 0;
error:
	gpiochip_unlock_as_irq(&txgpio->chip, txline->line);

	r = irq_chip_request_resources_parent(data);
	if (r)
		gpiochip_unlock_as_irq(&txgpio->chip, txline->line);

	return r;
}
...@@ -386,10 +380,8 @@ static void thunderx_gpio_irq_release_resources(struct irq_data *data)
{
	struct thunderx_line *txline = irq_data_get_irq_chip_data(data);
	struct thunderx_gpio *txgpio = txline->txgpio;
	struct irq_data *parent_data = data->parent_data;

	if (parent_data && parent_data->chip->irq_release_resources)
		parent_data->chip->irq_release_resources(parent_data);

	irq_chip_release_resources_parent(data);

	gpiochip_unlock_as_irq(&txgpio->chip, txline->line);
}
...
...@@ -94,6 +94,7 @@ config IOMMU_DMA
	bool
	select IOMMU_API
	select IOMMU_IOVA
	select IRQ_MSI_IOMMU
	select NEED_SG_DMA_LENGTH

config FSL_PAMU
...
...@@ -907,17 +907,18 @@ static struct iommu_dma_msi_page *iommu_dma_get_msi_page(struct device *dev,
	return NULL;
}

void iommu_dma_map_msi_msg(int irq, struct msi_msg *msg)
int iommu_dma_prepare_msi(struct msi_desc *desc, phys_addr_t msi_addr)
{
	struct device *dev = msi_desc_to_dev(irq_get_msi_desc(irq));
	struct device *dev = msi_desc_to_dev(desc);
	struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
	struct iommu_dma_cookie *cookie;
	struct iommu_dma_msi_page *msi_page;
	phys_addr_t msi_addr = (u64)msg->address_hi << 32 | msg->address_lo;
	unsigned long flags;

	if (!domain || !domain->iova_cookie)
		return;
	if (!domain || !domain->iova_cookie) {
		desc->iommu_cookie = NULL;
		return 0;
	}

	cookie = domain->iova_cookie;
...@@ -930,19 +931,26 @@ void iommu_dma_map_msi_msg(int irq, struct msi_msg *msg)
	msi_page = iommu_dma_get_msi_page(dev, msi_addr, domain);
	spin_unlock_irqrestore(&cookie->msi_lock, flags);

	if (WARN_ON(!msi_page)) {
		/*
		 * We're called from a void callback, so the best we can do is
		 * 'fail' by filling the message with obviously bogus values.
		 * Since we got this far due to an IOMMU being present, it's
		 * not like the existing address would have worked anyway...
		 */
		msg->address_hi = ~0U;
		msg->address_lo = ~0U;
		msg->data = ~0U;
	} else {
		msg->address_hi = upper_32_bits(msi_page->iova);
		msg->address_lo &= cookie_msi_granule(cookie) - 1;
		msg->address_lo += lower_32_bits(msi_page->iova);
	}

	msi_desc_set_iommu_cookie(desc, msi_page);

	if (!msi_page)
		return -ENOMEM;
	return 0;
}

void iommu_dma_compose_msi_msg(struct msi_desc *desc,
			       struct msi_msg *msg)
{
	struct device *dev = msi_desc_to_dev(desc);
	const struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
	const struct iommu_dma_msi_page *msi_page;

	msi_page = msi_desc_get_iommu_cookie(desc);

	if (!domain || !domain->iova_cookie || WARN_ON(!msi_page))
		return;

	msg->address_hi = upper_32_bits(msi_page->iova);
	msg->address_lo &= cookie_msi_granule(domain->iova_cookie) - 1;
	msg->address_lo += lower_32_bits(msi_page->iova);
}
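The split above moves the work that may sleep (allocating and mapping the doorbell page) into a fallible prepare step at allocation time, and leaves the compose step, which can run in atomic context, to only read a cookie cached on the descriptor. A hedged userspace mock of that contract (mock names only; this is not the kernel API):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Mock descriptor: prepare stores a cookie, compose only reads it. */
struct mock_msi_desc {
	void *iommu_cookie;
};

struct mock_msi_page {
	uint64_t iova;	/* remapped doorbell address */
};

/* Fallible, "sleeping" half: may report allocation failure. */
static int mock_prepare_msi(struct mock_msi_desc *desc,
			    struct mock_msi_page *page, uint64_t iova)
{
	if (!page)
		return -1;	/* failure is reported here, not in compose */
	page->iova = iova;
	desc->iommu_cookie = page;
	return 0;
}

/* Infallible, "atomic" half: just formats what prepare cached. */
static void mock_compose_msi_msg(struct mock_msi_desc *desc,
				 uint32_t *address_hi, uint32_t *address_lo)
{
	struct mock_msi_page *page = desc->iommu_cookie;

	if (!page)
		return;		/* nothing prepared: leave message untouched */
	*address_hi = (uint32_t)(page->iova >> 32);
	*address_lo = (uint32_t)page->iova;
}
```

This is why the old `WARN_ON` path that stuffed `~0U` into the message disappears: failure now surfaces as an error code from the prepare phase instead of a bogus address composed too late to do anything about it.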
...@@ -6,7 +6,6 @@ config IRQCHIP
config ARM_GIC
	bool
	select IRQ_DOMAIN
	select IRQ_DOMAIN_HIERARCHY
	select GENERIC_IRQ_MULTI_HANDLER
	select GENERIC_IRQ_EFFECTIVE_AFF_MASK
...@@ -33,7 +32,6 @@ config GIC_NON_BANKED
config ARM_GIC_V3
	bool
	select IRQ_DOMAIN
	select GENERIC_IRQ_MULTI_HANDLER
	select IRQ_DOMAIN_HIERARCHY
	select PARTITION_PERCPU
...@@ -59,7 +57,6 @@ config ARM_GIC_V3_ITS_FSL_MC
config ARM_NVIC
	bool
	select IRQ_DOMAIN
	select IRQ_DOMAIN_HIERARCHY
	select GENERIC_IRQ_CHIP
...@@ -358,7 +355,6 @@ config STM32_EXTI
config QCOM_IRQ_COMBINER
	bool "QCOM IRQ combiner support"
	depends on ARCH_QCOM && ACPI
	select IRQ_DOMAIN
	select IRQ_DOMAIN_HIERARCHY
	help
	  Say yes here to add support for the IRQ combiner devices embedded
...@@ -375,7 +371,6 @@ config IRQ_UNIPHIER_AIDET
config MESON_IRQ_GPIO
	bool "Meson GPIO Interrupt Multiplexer"
	depends on ARCH_MESON
	select IRQ_DOMAIN
	select IRQ_DOMAIN_HIERARCHY
	help
	  Support Meson SoC Family GPIO Interrupt Multiplexer
...@@ -391,7 +386,6 @@ config GOLDFISH_PIC
config QCOM_PDC
	bool "QCOM PDC"
	depends on ARCH_QCOM
	select IRQ_DOMAIN
	select IRQ_DOMAIN_HIERARCHY
	help
	  Power Domain Controller driver to manage and configure wakeup
...@@ -431,6 +425,27 @@ config LS1X_IRQ
	help
	  Support for the Loongson-1 platform Interrupt Controller.
config TI_SCI_INTR_IRQCHIP
	bool
	depends on TI_SCI_PROTOCOL
	select IRQ_DOMAIN_HIERARCHY
	help
	  This enables the irqchip driver support for the K3 Interrupt Router
	  over the TI System Control Interface, available on some of TI's new
	  SoCs. If you wish to use interrupt router irq resources managed by
	  the TI System Controller, say Y here. Otherwise, say N.

config TI_SCI_INTA_IRQCHIP
	bool
	depends on TI_SCI_PROTOCOL
	select IRQ_DOMAIN_HIERARCHY
	select TI_SCI_INTA_MSI_DOMAIN
	help
	  This enables the irqchip driver support for the K3 Interrupt
	  Aggregator over the TI System Control Interface, available on some
	  of TI's new SoCs. If you wish to use interrupt aggregator irq
	  resources managed by the TI System Controller, say Y here.
	  Otherwise, say N.
endmenu

config SIFIVE_PLIC
...
...@@ -98,3 +98,5 @@ obj-$(CONFIG_SIFIVE_PLIC)		+= irq-sifive-plic.o
obj-$(CONFIG_IMX_IRQSTEER)		+= irq-imx-irqsteer.o
obj-$(CONFIG_MADERA_IRQ)		+= irq-madera.o
obj-$(CONFIG_LS1X_IRQ)			+= irq-ls1x.o
obj-$(CONFIG_TI_SCI_INTR_IRQCHIP) += irq-ti-sci-intr.o
obj-$(CONFIG_TI_SCI_INTA_IRQCHIP) += irq-ti-sci-inta.o
...@@ -343,6 +343,9 @@ int __init bcm7038_l1_of_init(struct device_node *dn,
		goto out_unmap;
	}

	pr_info("registered BCM7038 L1 intc (%pOF, IRQs: %d)\n",
		dn, IRQS_PER_WORD * intc->n_words);

	return 0;

out_unmap:
...
...@@ -318,6 +318,9 @@ static int __init bcm7120_l2_intc_probe(struct device_node *dn,
		}
	}

	pr_info("registered %s intc (%pOF, parent IRQ(s): %d)\n",
		intc_name, dn, data->num_parent_irqs);

	return 0;

out_free_domain:
...
...@@ -264,6 +264,8 @@ static int __init brcmstb_l2_intc_of_init(struct device_node *np,
		ct->chip.irq_set_wake = irq_gc_set_wake;
	}

	pr_info("registered L2 intc (%pOF, parent irq: %d)\n", np, parent_irq);

	return 0;

out_free_domain:
...
...@@ -19,7 +19,6 @@
#include <linux/of_irq.h>
#include <linux/irqchip/arm-gic.h>
#include <linux/platform_device.h>
#include <linux/pm_clock.h>
#include <linux/pm_runtime.h>
#include <linux/slab.h>
...@@ -28,17 +27,27 @@ struct gic_clk_data {
	const char *const *clocks;
};

struct gic_chip_pm {
	struct gic_chip_data *chip_data;
	const struct gic_clk_data *clk_data;
	struct clk_bulk_data *clks;
};
static int gic_runtime_resume(struct device *dev)
{
	struct gic_chip_data *gic = dev_get_drvdata(dev);
	struct gic_chip_pm *chip_pm = dev_get_drvdata(dev);
	struct gic_chip_data *gic = chip_pm->chip_data;
	const struct gic_clk_data *data = chip_pm->clk_data;
	int ret;

	ret = pm_clk_resume(dev);
	if (ret)
		return ret;

	ret = clk_bulk_prepare_enable(data->num_clocks, chip_pm->clks);
	if (ret) {
		dev_err(dev, "clk_enable failed: %d\n", ret);
		return ret;
	}

	/*
	 * On the very first resume, the pointer to the driver data
	 * On the very first resume, the pointer to chip_pm->chip_data
	 * will be NULL and this is intentional, because we do not
	 * want to restore the GIC on the very first resume. So if
	 * the pointer is not valid just return.
...@@ -54,35 +63,14 @@ static int gic_runtime_resume(struct device *dev)
static int gic_runtime_suspend(struct device *dev)
{
	struct gic_chip_data *gic = dev_get_drvdata(dev);
	struct gic_chip_pm *chip_pm = dev_get_drvdata(dev);
	struct gic_chip_data *gic = chip_pm->chip_data;
	const struct gic_clk_data *data = chip_pm->clk_data;

	gic_dist_save(gic);
	gic_cpu_save(gic);

	return pm_clk_suspend(dev);
	clk_bulk_disable_unprepare(data->num_clocks, chip_pm->clks);
}

static int gic_get_clocks(struct device *dev, const struct gic_clk_data *data)
{
	unsigned int i;
	int ret;

	if (!dev || !data)
		return -EINVAL;

	ret = pm_clk_create(dev);
	if (ret)
		return ret;

	for (i = 0; i < data->num_clocks; i++) {
		ret = of_pm_clk_add_clk(dev, data->clocks[i]);
		if (ret) {
			dev_err(dev, "failed to add clock %s\n",
				data->clocks[i]);
			pm_clk_destroy(dev);
			return ret;
		}
	}

	return 0;
}
...@@ -91,8 +79,8 @@ static int gic_probe(struct platform_device *pdev)
{
	struct device *dev = &pdev->dev;
	const struct gic_clk_data *data;
	struct gic_chip_data *gic;
	struct gic_chip_pm *chip_pm;
	int ret, irq;
	int ret, irq, i;

	data = of_device_get_match_data(&pdev->dev);
	if (!data) {
...@@ -100,28 +88,41 @@ static int gic_probe(struct platform_device *pdev)
		return -ENODEV;
	}

	chip_pm = devm_kzalloc(dev, sizeof(*chip_pm), GFP_KERNEL);
	if (!chip_pm)
		return -ENOMEM;

	irq = irq_of_parse_and_map(dev->of_node, 0);
	if (!irq) {
		dev_err(dev, "no parent interrupt found!\n");
		return -EINVAL;
	}

	ret = gic_get_clocks(dev, data);
	chip_pm->clks = devm_kcalloc(dev, data->num_clocks,
				     sizeof(*chip_pm->clks), GFP_KERNEL);
	if (!chip_pm->clks)
		return -ENOMEM;

	for (i = 0; i < data->num_clocks; i++)
		chip_pm->clks[i].id = data->clocks[i];

	ret = devm_clk_bulk_get(dev, data->num_clocks, chip_pm->clks);
	if (ret)
		goto irq_dispose;

	chip_pm->clk_data = data;
	dev_set_drvdata(dev, chip_pm);

	pm_runtime_enable(dev);

	ret = pm_runtime_get_sync(dev);
	if (ret < 0)
		goto rpm_disable;

	ret = gic_of_init_child(dev, &gic, irq);
	ret = gic_of_init_child(dev, &chip_pm->chip_data, irq);
	if (ret)
		goto rpm_put;

	platform_set_drvdata(pdev, gic);

	pm_runtime_put(dev);

	dev_info(dev, "GIC IRQ controller registered\n");
...@@ -132,7 +133,6 @@ static int gic_probe(struct platform_device *pdev)
	pm_runtime_put_sync(dev);
rpm_disable:
	pm_runtime_disable(dev);
	pm_clk_destroy(dev);
irq_dispose:
	irq_dispose_mapping(irq);
...@@ -142,6 +142,8 @@ static int gic_probe(struct platform_device *pdev)
static const struct dev_pm_ops gic_pm_ops = {
	SET_RUNTIME_PM_OPS(gic_runtime_suspend,
			   gic_runtime_resume, NULL)
	SET_LATE_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
				     pm_runtime_force_resume)
};

static const char * const gic400_clocks[] = {
...
...@@ -110,7 +110,7 @@ static void gicv2m_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
	if (v2m->flags & GICV2M_NEEDS_SPI_OFFSET)
		msg->data -= v2m->spi_offset;

	iommu_dma_map_msi_msg(data->irq, msg);
	iommu_dma_compose_msi_msg(irq_data_get_msi_desc(data), msg);
}

static struct irq_chip gicv2m_irq_chip = {
...@@ -167,6 +167,7 @@ static void gicv2m_unalloc_msi(struct v2m_data *v2m, unsigned int hwirq,
static int gicv2m_irq_domain_alloc(struct irq_domain *domain, unsigned int virq,
				   unsigned int nr_irqs, void *args)
{
	msi_alloc_info_t *info = args;
	struct v2m_data *v2m = NULL, *tmp;
	int hwirq, offset, i, err = 0;
...@@ -186,6 +187,11 @@ static int gicv2m_irq_domain_alloc(struct irq_domain *domain, unsigned int virq,
	hwirq = v2m->spi_start + offset;

	err = iommu_dma_prepare_msi(info->desc,
				    v2m->res.start + V2M_MSI_SETSPI_NS);
	if (err)
		return err;

	for (i = 0; i < nr_irqs; i++) {
		err = gicv2m_irq_gic_domain_alloc(domain, virq + i, hwirq + i);
		if (err)
...
...@@ -26,7 +26,6 @@
#include <linux/interrupt.h>
#include <linux/irqdomain.h>
#include <linux/list.h>
#include <linux/list_sort.h>
#include <linux/log2.h>
#include <linux/memblock.h>
#include <linux/mm.h>
...@@ -1179,7 +1178,7 @@ static void its_irq_compose_msi_msg(struct irq_data *d, struct msi_msg *msg)
	msg->address_hi = upper_32_bits(addr);
	msg->data = its_get_event_id(d);

	iommu_dma_map_msi_msg(d->irq, msg);
	iommu_dma_compose_msi_msg(irq_data_get_msi_desc(d), msg);
}

static int its_irq_set_irqchip_state(struct irq_data *d,
...@@ -1465,9 +1464,8 @@ static struct lpi_range *mk_lpi_range(u32 base, u32 span)
{
	struct lpi_range *range;

	range = kzalloc(sizeof(*range), GFP_KERNEL);
	range = kmalloc(sizeof(*range), GFP_KERNEL);
	if (range) {
		INIT_LIST_HEAD(&range->entry);
		range->base_id = base;
		range->span = span;
	}
...@@ -1475,31 +1473,6 @@ static struct lpi_range *mk_lpi_range(u32 base, u32 span)
	return range;
}
static int lpi_range_cmp(void *priv, struct list_head *a, struct list_head *b)
{
struct lpi_range *ra, *rb;
ra = container_of(a, struct lpi_range, entry);
rb = container_of(b, struct lpi_range, entry);
return ra->base_id - rb->base_id;
}
static void merge_lpi_ranges(void)
{
struct lpi_range *range, *tmp;
list_for_each_entry_safe(range, tmp, &lpi_range_list, entry) {
if (!list_is_last(&range->entry, &lpi_range_list) &&
(tmp->base_id == (range->base_id + range->span))) {
tmp->base_id = range->base_id;
tmp->span += range->span;
list_del(&range->entry);
kfree(range);
}
}
}
static int alloc_lpi_range(u32 nr_lpis, u32 *base)
{
	struct lpi_range *range, *tmp;
...@@ -1529,25 +1502,49 @@ static int alloc_lpi_range(u32 nr_lpis, u32 *base)
	return err;
}
static void merge_lpi_ranges(struct lpi_range *a, struct lpi_range *b)
{
if (&a->entry == &lpi_range_list || &b->entry == &lpi_range_list)
return;
if (a->base_id + a->span != b->base_id)
return;
b->base_id = a->base_id;
b->span += a->span;
list_del(&a->entry);
kfree(a);
}
static int free_lpi_range(u32 base, u32 nr_lpis)
{
	struct lpi_range *new;
	struct lpi_range *new, *old;
	int err = 0;

	new = mk_lpi_range(base, nr_lpis);
	if (!new)
		return -ENOMEM;

	mutex_lock(&lpi_range_lock);

	new = mk_lpi_range(base, nr_lpis);
	if (!new) {
		err = -ENOMEM;
		goto out;
	}

	list_for_each_entry_reverse(old, &lpi_range_list, entry) {
		if (old->base_id < base)
			break;
	}
/*
* old is the last element with ->base_id smaller than base,
* so new goes right after it. If there are no elements with
* ->base_id smaller than base, &old->entry ends up pointing
* at the head of the list, and inserting new it the start of
* the list is the right thing to do in that case as well.
*/
list_add(&new->entry, &old->entry);
/*
* Now check if we can merge with the preceding and/or
* following ranges.
*/
merge_lpi_ranges(old, new);
merge_lpi_ranges(new, list_next_entry(new, entry));
list_add(&new->entry, &lpi_range_list);
list_sort(NULL, &lpi_range_list, lpi_range_cmp);
merge_lpi_ranges();
out:
	mutex_unlock(&lpi_range_lock);

	return err;
	return 0;
}
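The rewritten free path keeps `lpi_range_list` sorted by walking backwards to the last element with a smaller `base_id`, inserting there, and then merging the new range with its immediate neighbours, replacing the old full `list_sort()` pass. A hedged userspace sketch of the same idea over a plain array (illustrative only, not the kernel code):

```c
#include <assert.h>
#include <string.h>

/* Array-based sketch of sorted insert plus neighbour merge, the same
 * shape free_lpi_range() now applies to the kernel's linked list. */
struct range {
	unsigned int base;
	unsigned int span;
};

/* Insert {base, span} keeping r[] sorted by base; merge with adjacent
 * ranges when they touch. Returns the new element count. */
static int ranges_insert(struct range *r, int n,
			 unsigned int base, unsigned int span)
{
	int i = 0;

	while (i < n && r[i].base < base)	/* find sorted position */
		i++;
	memmove(&r[i + 1], &r[i], (n - i) * sizeof(*r));
	r[i] = (struct range){ base, span };
	n++;

	/* merge with the successor, then with the predecessor */
	if (i + 1 < n && r[i].base + r[i].span == r[i + 1].base) {
		r[i].span += r[i + 1].span;
		memmove(&r[i + 1], &r[i + 2], (n - i - 2) * sizeof(*r));
		n--;
	}
	if (i > 0 && r[i - 1].base + r[i - 1].span == r[i].base) {
		r[i - 1].span += r[i].span;
		memmove(&r[i], &r[i + 1], (n - i - 1) * sizeof(*r));
		n--;
	}
	return n;
}
```

Compared with sort-then-scan, each free is now O(list length) with at most two merges, and the allocation is done before taking the lock, which is what lets the error path drop the old `out:` label.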
static int __init its_lpi_init(u32 id_bits)
...@@ -2487,7 +2484,7 @@ static int its_msi_prepare(struct irq_domain *domain, struct device *dev,
	int err = 0;

	/*
	 * We ignore "dev" entierely, and rely on the dev_id that has
	 * We ignore "dev" entirely, and rely on the dev_id that has
	 * been passed via the scratchpad. This limits this domain's
	 * usefulness to upper layers that definitely know that they
	 * are built on top of the ITS.
...@@ -2566,6 +2563,7 @@ static int its_irq_domain_alloc(struct irq_domain *domain, unsigned int virq,
{
	msi_alloc_info_t *info = args;
	struct its_device *its_dev = info->scratchpad[0].ptr;
	struct its_node *its = its_dev->its;
	irq_hw_number_t hwirq;
	int err;
	int i;
...@@ -2574,6 +2572,10 @@ static int its_irq_domain_alloc(struct irq_domain *domain, unsigned int virq,
	if (err)
		return err;
err = iommu_dma_prepare_msi(info->desc, its->get_msi_base(its_dev));
if (err)
return err;
	for (i = 0; i < nr_irqs; i++) {
		err = its_irq_gic_domain_alloc(domain, virq + i, hwirq + i);
		if (err)
...
...@@ -84,6 +84,7 @@ static void mbi_free_msi(struct mbi_range *mbi, unsigned int hwirq,
static int mbi_irq_domain_alloc(struct irq_domain *domain, unsigned int virq,
				unsigned int nr_irqs, void *args)
{
	msi_alloc_info_t *info = args;
	struct mbi_range *mbi = NULL;
	int hwirq, offset, i, err = 0;
...@@ -104,6 +105,11 @@ static int mbi_irq_domain_alloc(struct irq_domain *domain, unsigned int virq,
	hwirq = mbi->spi_start + offset;

	err = iommu_dma_prepare_msi(info->desc,
				    mbi_phys_base + GICD_SETSPI_NSR);
	if (err)
		return err;

	for (i = 0; i < nr_irqs; i++) {
		err = mbi_irq_gic_domain_alloc(domain, virq + i, hwirq + i);
		if (err)
...@@ -142,7 +148,7 @@ static void mbi_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
	msg[0].address_lo = lower_32_bits(mbi_phys_base + GICD_SETSPI_NSR);
	msg[0].data = data->parent_data->hwirq;

	iommu_dma_map_msi_msg(data->irq, msg);
	iommu_dma_compose_msi_msg(irq_data_get_msi_desc(data), msg);
}

#ifdef CONFIG_PCI_MSI
...@@ -202,7 +208,7 @@ static void mbi_compose_mbi_msg(struct irq_data *data, struct msi_msg *msg)
	msg[1].address_lo = lower_32_bits(mbi_phys_base + GICD_CLRSPI_NSR);
	msg[1].data = data->parent_data->hwirq;

	iommu_dma_map_msi_msg(data->irq, &msg[1]);
	iommu_dma_compose_msi_msg(irq_data_get_msi_desc(data), &msg[1]);
}

/* Platform-MSI specific irqchip */
...
...@@ -144,7 +144,6 @@ static int imx_irqsteer_probe(struct platform_device *pdev)
{
	struct device_node *np = pdev->dev.of_node;
	struct irqsteer_data *data;
	struct resource *res;
	u32 irqs_num;
	int i, ret;
...@@ -152,8 +151,7 @@ static int imx_irqsteer_probe(struct platform_device *pdev)
	if (!data)
		return -ENOMEM;

	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	data->regs = devm_ioremap_resource(&pdev->dev, res);
	data->regs = devm_platform_ioremap_resource(pdev, 0);
	if (IS_ERR(data->regs)) {
		dev_err(&pdev->dev, "failed to initialize reg\n");
		return PTR_ERR(data->regs);
...
...@@ -100,7 +100,7 @@ static void ls_scfg_msi_compose_msg(struct irq_data *data, struct msi_msg *msg)
		msg->data |= cpumask_first(mask);
	}

	iommu_dma_map_msi_msg(data->irq, msg);
	iommu_dma_compose_msi_msg(irq_data_get_msi_desc(data), msg);
}

static int ls_scfg_msi_set_affinity(struct irq_data *irq_data,
...@@ -141,6 +141,7 @@ static int ls_scfg_msi_domain_irq_alloc(struct irq_domain *domain,
					unsigned int nr_irqs,
					void *args)
{
	msi_alloc_info_t *info = args;
	struct ls_scfg_msi *msi_data = domain->host_data;
	int pos, err = 0;
...@@ -154,6 +155,10 @@ static int ls_scfg_msi_domain_irq_alloc(struct irq_domain *domain,
		err = -ENOSPC;
	spin_unlock(&msi_data->lock);

	if (err)
		return err;

	err = iommu_dma_prepare_msi(info->desc, msi_data->msiir_addr);
	if (err)
		return err;
...
@@ -389,10 +389,8 @@ static int intc_irqpin_probe(struct platform_device *pdev)
 	int k;
 	p = devm_kzalloc(dev, sizeof(*p), GFP_KERNEL);
-	if (!p) {
-		dev_err(dev, "failed to allocate driver data\n");
+	if (!p)
 		return -ENOMEM;
-	}
 	/* deal with driver instance configuration */
 	of_property_read_u32(dev->of_node, "sense-bitfield-width",
...
@@ -14,8 +14,10 @@
 #include <linux/irqchip.h>
 #include <linux/irqchip/chained_irq.h>
 #include <linux/irqdomain.h>
+#include <linux/module.h>
 #include <linux/of_address.h>
 #include <linux/of_irq.h>
+#include <linux/of_platform.h>
 #include <linux/syscore_ops.h>
 #include <dt-bindings/interrupt-controller/arm-gic.h>
@@ -37,12 +39,6 @@ struct stm32_exti_bank {
 #define UNDEF_REG ~0
-enum stm32_exti_hwspinlock {
-	HWSPINLOCK_UNKNOWN,
-	HWSPINLOCK_NONE,
-	HWSPINLOCK_READY,
-};
-
 struct stm32_desc_irq {
 	u32 exti;
 	u32 irq_parent;
@@ -69,8 +65,6 @@ struct stm32_exti_host_data {
 	void __iomem *base;
 	struct stm32_exti_chip_data *chips_data;
 	const struct stm32_exti_drv_data *drv_data;
-	struct device_node *node;
-	enum stm32_exti_hwspinlock hwlock_state;
 	struct hwspinlock *hwlock;
 };
@@ -285,49 +279,27 @@ static int stm32_exti_set_type(struct irq_data *d,
 static int stm32_exti_hwspin_lock(struct stm32_exti_chip_data *chip_data)
 {
-	struct stm32_exti_host_data *host_data = chip_data->host_data;
-	struct hwspinlock *hwlock;
-	int id, ret = 0, timeout = 0;
-
-	/* first time, check for hwspinlock availability */
-	if (unlikely(host_data->hwlock_state == HWSPINLOCK_UNKNOWN)) {
-		id = of_hwspin_lock_get_id(host_data->node, 0);
-		if (id >= 0) {
-			hwlock = hwspin_lock_request_specific(id);
-			if (hwlock) {
-				/* found valid hwspinlock */
-				host_data->hwlock_state = HWSPINLOCK_READY;
-				host_data->hwlock = hwlock;
-				pr_debug("%s hwspinlock = %d\n", __func__, id);
-			} else {
-				host_data->hwlock_state = HWSPINLOCK_NONE;
-			}
-		} else if (id != -EPROBE_DEFER) {
-			host_data->hwlock_state = HWSPINLOCK_NONE;
-		} else {
-			/* hwspinlock driver shall be ready at that stage */
-			ret = -EPROBE_DEFER;
-		}
-	}
-
-	if (likely(host_data->hwlock_state == HWSPINLOCK_READY)) {
-		/*
-		 * Use the x_raw API since we are under spin_lock protection.
-		 * Do not use the x_timeout API because we are under irq_disable
-		 * mode (see __setup_irq())
-		 */
-		do {
-			ret = hwspin_trylock_raw(host_data->hwlock);
-			if (!ret)
-				return 0;
-
-			udelay(HWSPNLCK_RETRY_DELAY);
-			timeout += HWSPNLCK_RETRY_DELAY;
-		} while (timeout < HWSPNLCK_TIMEOUT);
-
-		if (ret == -EBUSY)
-			ret = -ETIMEDOUT;
-	}
+	int ret, timeout = 0;
+
+	if (!chip_data->host_data->hwlock)
+		return 0;
+
+	/*
+	 * Use the x_raw API since we are under spin_lock protection.
+	 * Do not use the x_timeout API because we are under irq_disable
+	 * mode (see __setup_irq())
+	 */
+	do {
+		ret = hwspin_trylock_raw(chip_data->host_data->hwlock);
+		if (!ret)
+			return 0;
+
+		udelay(HWSPNLCK_RETRY_DELAY);
+		timeout += HWSPNLCK_RETRY_DELAY;
+	} while (timeout < HWSPNLCK_TIMEOUT);
+
+	if (ret == -EBUSY)
+		ret = -ETIMEDOUT;
 	if (ret)
 		pr_err("%s can't get hwspinlock (%d)\n", __func__, ret);
@@ -337,7 +309,7 @@ static int stm32_exti_hwspin_lock(struct stm32_exti_chip_data *chip_data)
 static void stm32_exti_hwspin_unlock(struct stm32_exti_chip_data *chip_data)
 {
-	if (likely(chip_data->host_data->hwlock_state == HWSPINLOCK_READY))
+	if (chip_data->host_data->hwlock)
 		hwspin_unlock_raw(chip_data->host_data->hwlock);
 }
@@ -586,8 +558,7 @@ static int stm32_exti_h_set_affinity(struct irq_data *d,
 	return -EINVAL;
 }
-#ifdef CONFIG_PM
-static int stm32_exti_h_suspend(void)
+static int __maybe_unused stm32_exti_h_suspend(void)
 {
 	struct stm32_exti_chip_data *chip_data;
 	int i;
@@ -602,7 +573,7 @@ static int stm32_exti_h_suspend(void)
 	return 0;
 }
-static void stm32_exti_h_resume(void)
+static void __maybe_unused stm32_exti_h_resume(void)
 {
 	struct stm32_exti_chip_data *chip_data;
 	int i;
@@ -616,17 +587,22 @@ static void stm32_exti_h_resume(void)
 }
 static struct syscore_ops stm32_exti_h_syscore_ops = {
+#ifdef CONFIG_PM_SLEEP
 	.suspend	= stm32_exti_h_suspend,
 	.resume		= stm32_exti_h_resume,
+#endif
 };
-static void stm32_exti_h_syscore_init(void)
+static void stm32_exti_h_syscore_init(struct stm32_exti_host_data *host_data)
 {
+	stm32_host_data = host_data;
 	register_syscore_ops(&stm32_exti_h_syscore_ops);
 }
-#else
-static inline void stm32_exti_h_syscore_init(void) {}
-#endif
+
+static void stm32_exti_h_syscore_deinit(void)
+{
+	unregister_syscore_ops(&stm32_exti_h_syscore_ops);
+}
 static struct irq_chip stm32_exti_h_chip = {
 	.name		= "stm32-exti-h",
@@ -683,8 +659,6 @@ stm32_exti_host_data *stm32_exti_host_init(const struct stm32_exti_drv_data *dd,
 		return NULL;
 	host_data->drv_data = dd;
-	host_data->node = node;
-	host_data->hwlock_state = HWSPINLOCK_UNKNOWN;
 	host_data->chips_data = kcalloc(dd->bank_nr,
 					sizeof(struct stm32_exti_chip_data),
 					GFP_KERNEL);
@@ -711,7 +685,8 @@ stm32_exti_host_data *stm32_exti_host_init(const struct stm32_exti_drv_data *dd,
 static struct
 stm32_exti_chip_data *stm32_exti_chip_init(struct stm32_exti_host_data *h_data,
-					   u32 bank_idx)
+					   u32 bank_idx,
+					   struct device_node *node)
 {
 	const struct stm32_exti_bank *stm32_bank;
 	struct stm32_exti_chip_data *chip_data;
@@ -731,7 +706,7 @@ stm32_exti_chip_data *stm32_exti_chip_init(struct stm32_exti_host_data *h_data,
 	writel_relaxed(0, base + stm32_bank->imr_ofst);
 	writel_relaxed(0, base + stm32_bank->emr_ofst);
-	pr_info("%pOF: bank%d\n", h_data->node, bank_idx);
+	pr_info("%pOF: bank%d\n", node, bank_idx);
 	return chip_data;
 }
@@ -771,7 +746,7 @@ static int __init stm32_exti_init(const struct stm32_exti_drv_data *drv_data,
 		struct stm32_exti_chip_data *chip_data;
 		stm32_bank = drv_data->exti_banks[i];
-		chip_data = stm32_exti_chip_init(host_data, i);
+		chip_data = stm32_exti_chip_init(host_data, i, node);
 		gc = irq_get_domain_generic_chip(domain, i * IRQS_PER_BANK);
@@ -815,50 +790,130 @@ static const struct irq_domain_ops stm32_exti_h_domain_ops = {
 	.xlate	= irq_domain_xlate_twocell,
 };
-static int
-__init stm32_exti_hierarchy_init(const struct stm32_exti_drv_data *drv_data,
-				 struct device_node *node,
-				 struct device_node *parent)
-{
-	struct irq_domain *parent_domain, *domain;
-	struct stm32_exti_host_data *host_data;
-	int ret, i;
-
-	parent_domain = irq_find_host(parent);
-	if (!parent_domain) {
-		pr_err("interrupt-parent not found\n");
-		return -EINVAL;
-	}
-
-	host_data = stm32_exti_host_init(drv_data, node);
-	if (!host_data)
-		return -ENOMEM;
-
-	for (i = 0; i < drv_data->bank_nr; i++)
-		stm32_exti_chip_init(host_data, i);
-
-	domain = irq_domain_add_hierarchy(parent_domain, 0,
-					  drv_data->bank_nr * IRQS_PER_BANK,
-					  node, &stm32_exti_h_domain_ops,
-					  host_data);
-	if (!domain) {
-		pr_err("%pOFn: Could not register exti domain.\n", node);
-		ret = -ENOMEM;
-		goto out_unmap;
-	}
-
-	stm32_exti_h_syscore_init();
-
-	return 0;
-
-out_unmap:
-	iounmap(host_data->base);
-	kfree(host_data->chips_data);
-	kfree(host_data);
-	return ret;
-}
+static void stm32_exti_remove_irq(void *data)
+{
+	struct irq_domain *domain = data;
+
+	irq_domain_remove(domain);
+}
+
+static int stm32_exti_remove(struct platform_device *pdev)
+{
+	stm32_exti_h_syscore_deinit();
+	return 0;
+}
+
+static int stm32_exti_probe(struct platform_device *pdev)
+{
+	int ret, i;
+	struct device *dev = &pdev->dev;
+	struct device_node *np = dev->of_node;
+	struct irq_domain *parent_domain, *domain;
+	struct stm32_exti_host_data *host_data;
+	const struct stm32_exti_drv_data *drv_data;
+	struct resource *res;
+
+	host_data = devm_kzalloc(dev, sizeof(*host_data), GFP_KERNEL);
+	if (!host_data)
+		return -ENOMEM;
+
+	/* check for optional hwspinlock which may be not available yet */
+	ret = of_hwspin_lock_get_id(np, 0);
+	if (ret == -EPROBE_DEFER)
+		/* hwspinlock framework not yet ready */
+		return ret;
+
+	if (ret >= 0) {
+		host_data->hwlock = devm_hwspin_lock_request_specific(dev, ret);
+		if (!host_data->hwlock) {
+			dev_err(dev, "Failed to request hwspinlock\n");
+			return -EINVAL;
+		}
+	} else if (ret != -ENOENT) {
+		/* note: ENOENT is a valid case (means 'no hwspinlock') */
+		dev_err(dev, "Failed to get hwspinlock\n");
+		return ret;
+	}
+
+	/* initialize host_data */
+	drv_data = of_device_get_match_data(dev);
+	if (!drv_data) {
+		dev_err(dev, "no of match data\n");
+		return -ENODEV;
+	}
+	host_data->drv_data = drv_data;
+
+	host_data->chips_data = devm_kcalloc(dev, drv_data->bank_nr,
+					     sizeof(*host_data->chips_data),
+					     GFP_KERNEL);
+	if (!host_data->chips_data)
+		return -ENOMEM;
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	host_data->base = devm_ioremap_resource(dev, res);
+	if (IS_ERR(host_data->base)) {
+		dev_err(dev, "Unable to map registers\n");
+		return PTR_ERR(host_data->base);
+	}
+
+	for (i = 0; i < drv_data->bank_nr; i++)
+		stm32_exti_chip_init(host_data, i, np);
+
+	parent_domain = irq_find_host(of_irq_find_parent(np));
+	if (!parent_domain) {
+		dev_err(dev, "GIC interrupt-parent not found\n");
+		return -EINVAL;
+	}
+
+	domain = irq_domain_add_hierarchy(parent_domain, 0,
+					  drv_data->bank_nr * IRQS_PER_BANK,
+					  np, &stm32_exti_h_domain_ops,
+					  host_data);
+	if (!domain) {
+		dev_err(dev, "Could not register exti domain\n");
+		return -ENOMEM;
+	}
+
+	ret = devm_add_action_or_reset(dev, stm32_exti_remove_irq, domain);
+	if (ret)
+		return ret;
+
+	stm32_exti_h_syscore_init(host_data);
+
+	return 0;
+}
+
+/* platform driver only for MP1 */
+static const struct of_device_id stm32_exti_ids[] = {
+	{ .compatible = "st,stm32mp1-exti", .data = &stm32mp1_drv_data},
+	{},
+};
+MODULE_DEVICE_TABLE(of, stm32_exti_ids);
+
+static struct platform_driver stm32_exti_driver = {
+	.probe		= stm32_exti_probe,
+	.remove		= stm32_exti_remove,
+	.driver		= {
+		.name		= "stm32_exti",
+		.of_match_table	= stm32_exti_ids,
+	},
+};
+
+static int __init stm32_exti_arch_init(void)
+{
+	return platform_driver_register(&stm32_exti_driver);
+}
+
+static void __exit stm32_exti_arch_exit(void)
+{
+	return platform_driver_unregister(&stm32_exti_driver);
+}
+
+arch_initcall(stm32_exti_arch_init);
+module_exit(stm32_exti_arch_exit);
+
+/* no platform driver for F4 and H7 */
 static int __init stm32f4_exti_of_init(struct device_node *np,
 				       struct device_node *parent)
 {
@@ -874,11 +929,3 @@ static int __init stm32h7_exti_of_init(struct device_node *np,
 }
 IRQCHIP_DECLARE(stm32h7_exti, "st,stm32h7-exti", stm32h7_exti_of_init);
-
-static int __init stm32mp1_exti_of_init(struct device_node *np,
-					struct device_node *parent)
-{
-	return stm32_exti_hierarchy_init(&stm32mp1_drv_data, np, parent);
-}
-IRQCHIP_DECLARE(stm32mp1_exti, "st,stm32mp1-exti", stm32mp1_exti_of_init);
// SPDX-License-Identifier: GPL-2.0
/*
* Texas Instruments' K3 Interrupt Aggregator irqchip driver
*
* Copyright (C) 2018-2019 Texas Instruments Incorporated - http://www.ti.com/
* Lokesh Vutla <lokeshvutla@ti.com>
*/
#include <linux/err.h>
#include <linux/io.h>
#include <linux/irqchip.h>
#include <linux/irqdomain.h>
#include <linux/interrupt.h>
#include <linux/msi.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
#include <linux/of_platform.h>
#include <linux/irqchip/chained_irq.h>
#include <linux/soc/ti/ti_sci_inta_msi.h>
#include <linux/soc/ti/ti_sci_protocol.h>
#include <asm-generic/msi.h>
#define TI_SCI_DEV_ID_MASK 0xffff
#define TI_SCI_DEV_ID_SHIFT 16
#define TI_SCI_IRQ_ID_MASK 0xffff
#define TI_SCI_IRQ_ID_SHIFT 0
#define HWIRQ_TO_DEVID(hwirq) (((hwirq) >> (TI_SCI_DEV_ID_SHIFT)) & \
(TI_SCI_DEV_ID_MASK))
#define HWIRQ_TO_IRQID(hwirq) ((hwirq) & (TI_SCI_IRQ_ID_MASK))
#define TO_HWIRQ(dev, index) ((((dev) & TI_SCI_DEV_ID_MASK) << \
TI_SCI_DEV_ID_SHIFT) | \
((index) & TI_SCI_IRQ_ID_MASK))
#define MAX_EVENTS_PER_VINT 64
#define VINT_ENABLE_SET_OFFSET 0x0
#define VINT_ENABLE_CLR_OFFSET 0x8
#define VINT_STATUS_OFFSET 0x18
/**
* struct ti_sci_inta_event_desc - Description of an event coming to
* Interrupt Aggregator. This serves
* as a mapping table for global event,
* hwirq and vint bit.
* @global_event: Global event number corresponding to this event
* @hwirq: Hwirq of the incoming interrupt
* @vint_bit: Corresponding vint bit to which this event is attached.
*/
struct ti_sci_inta_event_desc {
u16 global_event;
u32 hwirq;
u8 vint_bit;
};
/**
* struct ti_sci_inta_vint_desc - Description of a virtual interrupt coming out
* of Interrupt Aggregator.
* @domain: Pointer to IRQ domain to which this vint belongs.
* @list: List entry for the vint list
* @event_map: Bitmap to manage the allocation of events to vint.
* @events: Array of event descriptors assigned to this vint.
* @parent_virq: Linux IRQ number that gets attached to parent
* @vint_id: TISCI vint ID
*/
struct ti_sci_inta_vint_desc {
struct irq_domain *domain;
struct list_head list;
DECLARE_BITMAP(event_map, MAX_EVENTS_PER_VINT);
struct ti_sci_inta_event_desc events[MAX_EVENTS_PER_VINT];
unsigned int parent_virq;
u16 vint_id;
};
/**
* struct ti_sci_inta_irq_domain - Structure representing a TISCI based
* Interrupt Aggregator IRQ domain.
* @sci: Pointer to TISCI handle
 * @vint: TISCI resource pointer representing IA interrupts.
* @global_event: TISCI resource pointer representing global events.
* @vint_list: List of the vints active in the system
* @vint_mutex: Mutex to protect vint_list
* @base: Base address of the memory mapped IO registers
* @pdev: Pointer to platform device.
*/
struct ti_sci_inta_irq_domain {
const struct ti_sci_handle *sci;
struct ti_sci_resource *vint;
struct ti_sci_resource *global_event;
struct list_head vint_list;
/* Mutex to protect vint list */
struct mutex vint_mutex;
void __iomem *base;
struct platform_device *pdev;
};
#define to_vint_desc(e, i) container_of(e, struct ti_sci_inta_vint_desc, \
events[i])
/**
* ti_sci_inta_irq_handler() - Chained IRQ handler for the vint irqs
* @desc: Pointer to irq_desc corresponding to the irq
*/
static void ti_sci_inta_irq_handler(struct irq_desc *desc)
{
struct ti_sci_inta_vint_desc *vint_desc;
struct ti_sci_inta_irq_domain *inta;
struct irq_domain *domain;
unsigned int virq, bit;
unsigned long val;
vint_desc = irq_desc_get_handler_data(desc);
domain = vint_desc->domain;
inta = domain->host_data;
chained_irq_enter(irq_desc_get_chip(desc), desc);
val = readq_relaxed(inta->base + vint_desc->vint_id * 0x1000 +
VINT_STATUS_OFFSET);
for_each_set_bit(bit, &val, MAX_EVENTS_PER_VINT) {
virq = irq_find_mapping(domain, vint_desc->events[bit].hwirq);
if (virq)
generic_handle_irq(virq);
}
chained_irq_exit(irq_desc_get_chip(desc), desc);
}
/**
* ti_sci_inta_alloc_parent_irq() - Allocate parent irq to Interrupt aggregator
* @domain: IRQ domain corresponding to Interrupt Aggregator
*
 * Return pointer to vint_desc if all went well else corresponding ERR_PTR value.
*/
static struct ti_sci_inta_vint_desc *ti_sci_inta_alloc_parent_irq(struct irq_domain *domain)
{
struct ti_sci_inta_irq_domain *inta = domain->host_data;
struct ti_sci_inta_vint_desc *vint_desc;
struct irq_fwspec parent_fwspec;
unsigned int parent_virq;
u16 vint_id;
vint_id = ti_sci_get_free_resource(inta->vint);
if (vint_id == TI_SCI_RESOURCE_NULL)
return ERR_PTR(-EINVAL);
vint_desc = kzalloc(sizeof(*vint_desc), GFP_KERNEL);
if (!vint_desc)
return ERR_PTR(-ENOMEM);
vint_desc->domain = domain;
vint_desc->vint_id = vint_id;
INIT_LIST_HEAD(&vint_desc->list);
parent_fwspec.fwnode = of_node_to_fwnode(of_irq_find_parent(dev_of_node(&inta->pdev->dev)));
parent_fwspec.param_count = 2;
parent_fwspec.param[0] = inta->pdev->id;
parent_fwspec.param[1] = vint_desc->vint_id;
parent_virq = irq_create_fwspec_mapping(&parent_fwspec);
	if (parent_virq == 0) {
		kfree(vint_desc);
		/* irq_create_fwspec_mapping() returns 0 (not an errno) on failure */
		return ERR_PTR(-EINVAL);
	}
vint_desc->parent_virq = parent_virq;
list_add_tail(&vint_desc->list, &inta->vint_list);
irq_set_chained_handler_and_data(vint_desc->parent_virq,
ti_sci_inta_irq_handler, vint_desc);
return vint_desc;
}
/**
* ti_sci_inta_alloc_event() - Attach an event to a IA vint.
* @vint_desc: Pointer to vint_desc to which the event gets attached
* @free_bit: Bit inside vint to which event gets attached
* @hwirq: hwirq of the input event
*
* Return event_desc pointer if all went ok else appropriate error value.
*/
static struct ti_sci_inta_event_desc *ti_sci_inta_alloc_event(struct ti_sci_inta_vint_desc *vint_desc,
u16 free_bit,
u32 hwirq)
{
struct ti_sci_inta_irq_domain *inta = vint_desc->domain->host_data;
struct ti_sci_inta_event_desc *event_desc;
u16 dev_id, dev_index;
int err;
dev_id = HWIRQ_TO_DEVID(hwirq);
dev_index = HWIRQ_TO_IRQID(hwirq);
event_desc = &vint_desc->events[free_bit];
event_desc->hwirq = hwirq;
event_desc->vint_bit = free_bit;
event_desc->global_event = ti_sci_get_free_resource(inta->global_event);
if (event_desc->global_event == TI_SCI_RESOURCE_NULL)
return ERR_PTR(-EINVAL);
err = inta->sci->ops.rm_irq_ops.set_event_map(inta->sci,
dev_id, dev_index,
inta->pdev->id,
vint_desc->vint_id,
event_desc->global_event,
free_bit);
if (err)
goto free_global_event;
return event_desc;
free_global_event:
ti_sci_release_resource(inta->global_event, event_desc->global_event);
return ERR_PTR(err);
}
/**
* ti_sci_inta_alloc_irq() - Allocate an irq within INTA domain
* @domain: irq_domain pointer corresponding to INTA
* @hwirq: hwirq of the input event
*
* Note: Allocation happens in the following manner:
* - Find a free bit available in any of the vints available in the list.
* - If not found, allocate a vint from the vint pool
* - Attach the free bit to input hwirq.
* Return event_desc if all went ok else appropriate error value.
*/
static struct ti_sci_inta_event_desc *ti_sci_inta_alloc_irq(struct irq_domain *domain,
u32 hwirq)
{
struct ti_sci_inta_irq_domain *inta = domain->host_data;
struct ti_sci_inta_vint_desc *vint_desc = NULL;
struct ti_sci_inta_event_desc *event_desc;
u16 free_bit;
mutex_lock(&inta->vint_mutex);
list_for_each_entry(vint_desc, &inta->vint_list, list) {
free_bit = find_first_zero_bit(vint_desc->event_map,
MAX_EVENTS_PER_VINT);
if (free_bit != MAX_EVENTS_PER_VINT) {
set_bit(free_bit, vint_desc->event_map);
goto alloc_event;
}
}
/* No free bits available. Allocate a new vint */
vint_desc = ti_sci_inta_alloc_parent_irq(domain);
if (IS_ERR(vint_desc)) {
mutex_unlock(&inta->vint_mutex);
return ERR_PTR(PTR_ERR(vint_desc));
}
free_bit = find_first_zero_bit(vint_desc->event_map,
MAX_EVENTS_PER_VINT);
set_bit(free_bit, vint_desc->event_map);
alloc_event:
event_desc = ti_sci_inta_alloc_event(vint_desc, free_bit, hwirq);
if (IS_ERR(event_desc))
clear_bit(free_bit, vint_desc->event_map);
mutex_unlock(&inta->vint_mutex);
return event_desc;
}
/**
* ti_sci_inta_free_parent_irq() - Free a parent irq to INTA
* @inta: Pointer to inta domain.
* @vint_desc: Pointer to vint_desc that needs to be freed.
*/
static void ti_sci_inta_free_parent_irq(struct ti_sci_inta_irq_domain *inta,
struct ti_sci_inta_vint_desc *vint_desc)
{
if (find_first_bit(vint_desc->event_map, MAX_EVENTS_PER_VINT) == MAX_EVENTS_PER_VINT) {
list_del(&vint_desc->list);
ti_sci_release_resource(inta->vint, vint_desc->vint_id);
irq_dispose_mapping(vint_desc->parent_virq);
kfree(vint_desc);
}
}
/**
* ti_sci_inta_free_irq() - Free an IRQ within INTA domain
* @event_desc: Pointer to event_desc that needs to be freed.
* @hwirq: Hwirq number within INTA domain that needs to be freed
*/
static void ti_sci_inta_free_irq(struct ti_sci_inta_event_desc *event_desc,
u32 hwirq)
{
struct ti_sci_inta_vint_desc *vint_desc;
struct ti_sci_inta_irq_domain *inta;
vint_desc = to_vint_desc(event_desc, event_desc->vint_bit);
inta = vint_desc->domain->host_data;
/* free event irq */
mutex_lock(&inta->vint_mutex);
inta->sci->ops.rm_irq_ops.free_event_map(inta->sci,
HWIRQ_TO_DEVID(hwirq),
HWIRQ_TO_IRQID(hwirq),
inta->pdev->id,
vint_desc->vint_id,
event_desc->global_event,
event_desc->vint_bit);
clear_bit(event_desc->vint_bit, vint_desc->event_map);
ti_sci_release_resource(inta->global_event, event_desc->global_event);
event_desc->global_event = TI_SCI_RESOURCE_NULL;
event_desc->hwirq = 0;
ti_sci_inta_free_parent_irq(inta, vint_desc);
mutex_unlock(&inta->vint_mutex);
}
/**
* ti_sci_inta_request_resources() - Allocate resources for input irq
* @data: Pointer to corresponding irq_data
*
* Note: This is the core api where the actual allocation happens for input
* hwirq. This allocation involves creating a parent irq for vint.
* If this is done in irq_domain_ops.alloc() then a deadlock is reached
* for allocation. So this allocation is being done in request_resources()
*
* Return: 0 if all went well else corresponding error.
*/
static int ti_sci_inta_request_resources(struct irq_data *data)
{
struct ti_sci_inta_event_desc *event_desc;
event_desc = ti_sci_inta_alloc_irq(data->domain, data->hwirq);
if (IS_ERR(event_desc))
return PTR_ERR(event_desc);
data->chip_data = event_desc;
return 0;
}
/**
 * ti_sci_inta_release_resources() - Release resources for input irq
* @data: Pointer to corresponding irq_data
*
* Note: Corresponding to request_resources(), all the unmapping and deletion
* of parent vint irqs happens in this api.
*/
static void ti_sci_inta_release_resources(struct irq_data *data)
{
struct ti_sci_inta_event_desc *event_desc;
event_desc = irq_data_get_irq_chip_data(data);
ti_sci_inta_free_irq(event_desc, data->hwirq);
}
/**
* ti_sci_inta_manage_event() - Control the event based on the offset
* @data: Pointer to corresponding irq_data
* @offset: register offset using which event is controlled.
*/
static void ti_sci_inta_manage_event(struct irq_data *data, u32 offset)
{
struct ti_sci_inta_event_desc *event_desc;
struct ti_sci_inta_vint_desc *vint_desc;
struct ti_sci_inta_irq_domain *inta;
event_desc = irq_data_get_irq_chip_data(data);
vint_desc = to_vint_desc(event_desc, event_desc->vint_bit);
inta = data->domain->host_data;
writeq_relaxed(BIT(event_desc->vint_bit),
inta->base + vint_desc->vint_id * 0x1000 + offset);
}
/**
* ti_sci_inta_mask_irq() - Mask an event
* @data: Pointer to corresponding irq_data
*/
static void ti_sci_inta_mask_irq(struct irq_data *data)
{
ti_sci_inta_manage_event(data, VINT_ENABLE_CLR_OFFSET);
}
/**
* ti_sci_inta_unmask_irq() - Unmask an event
* @data: Pointer to corresponding irq_data
*/
static void ti_sci_inta_unmask_irq(struct irq_data *data)
{
ti_sci_inta_manage_event(data, VINT_ENABLE_SET_OFFSET);
}
/**
* ti_sci_inta_ack_irq() - Ack an event
* @data: Pointer to corresponding irq_data
*/
static void ti_sci_inta_ack_irq(struct irq_data *data)
{
/*
* Do not clear the event if hardware is capable of sending
* a down event.
*/
if (irqd_get_trigger_type(data) != IRQF_TRIGGER_HIGH)
ti_sci_inta_manage_event(data, VINT_STATUS_OFFSET);
}
static int ti_sci_inta_set_affinity(struct irq_data *d,
const struct cpumask *mask_val, bool force)
{
return -EINVAL;
}
/**
* ti_sci_inta_set_type() - Update the trigger type of the irq.
* @data: Pointer to corresponding irq_data
* @type: Trigger type as specified by user
*
* Note: This updates the handle_irq callback for level msi.
*
* Return 0 if all went well else appropriate error.
*/
static int ti_sci_inta_set_type(struct irq_data *data, unsigned int type)
{
/*
* .alloc default sets handle_edge_irq. But if the user specifies
* that IRQ is level MSI, then update the handle to handle_level_irq
*/
switch (type & IRQ_TYPE_SENSE_MASK) {
case IRQF_TRIGGER_HIGH:
irq_set_handler_locked(data, handle_level_irq);
return 0;
case IRQF_TRIGGER_RISING:
return 0;
default:
return -EINVAL;
}
return -EINVAL;
}
static struct irq_chip ti_sci_inta_irq_chip = {
.name = "INTA",
.irq_ack = ti_sci_inta_ack_irq,
.irq_mask = ti_sci_inta_mask_irq,
.irq_set_type = ti_sci_inta_set_type,
.irq_unmask = ti_sci_inta_unmask_irq,
.irq_set_affinity = ti_sci_inta_set_affinity,
.irq_request_resources = ti_sci_inta_request_resources,
.irq_release_resources = ti_sci_inta_release_resources,
};
/**
* ti_sci_inta_irq_domain_free() - Free an IRQ from the IRQ domain
* @domain: Domain to which the irqs belong
* @virq: base linux virtual IRQ to be freed.
* @nr_irqs: Number of continuous irqs to be freed
*/
static void ti_sci_inta_irq_domain_free(struct irq_domain *domain,
unsigned int virq, unsigned int nr_irqs)
{
struct irq_data *data = irq_domain_get_irq_data(domain, virq);
irq_domain_reset_irq_data(data);
}
/**
* ti_sci_inta_irq_domain_alloc() - Allocate Interrupt aggregator IRQs
* @domain: Point to the interrupt aggregator IRQ domain
* @virq: Corresponding Linux virtual IRQ number
* @nr_irqs: Continuous irqs to be allocated
* @data: Pointer to firmware specifier
*
* No actual allocation happens here.
*
* Return 0 if all went well else appropriate error value.
*/
static int ti_sci_inta_irq_domain_alloc(struct irq_domain *domain,
unsigned int virq, unsigned int nr_irqs,
void *data)
{
msi_alloc_info_t *arg = data;
irq_domain_set_info(domain, virq, arg->hwirq, &ti_sci_inta_irq_chip,
NULL, handle_edge_irq, NULL, NULL);
return 0;
}
static const struct irq_domain_ops ti_sci_inta_irq_domain_ops = {
.free = ti_sci_inta_irq_domain_free,
.alloc = ti_sci_inta_irq_domain_alloc,
};
static struct irq_chip ti_sci_inta_msi_irq_chip = {
.name = "MSI-INTA",
.flags = IRQCHIP_SUPPORTS_LEVEL_MSI,
};
static void ti_sci_inta_msi_set_desc(msi_alloc_info_t *arg,
struct msi_desc *desc)
{
struct platform_device *pdev = to_platform_device(desc->dev);
arg->desc = desc;
arg->hwirq = TO_HWIRQ(pdev->id, desc->inta.dev_index);
}
static struct msi_domain_ops ti_sci_inta_msi_ops = {
.set_desc = ti_sci_inta_msi_set_desc,
};
static struct msi_domain_info ti_sci_inta_msi_domain_info = {
.flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
MSI_FLAG_LEVEL_CAPABLE),
.ops = &ti_sci_inta_msi_ops,
.chip = &ti_sci_inta_msi_irq_chip,
};
static int ti_sci_inta_irq_domain_probe(struct platform_device *pdev)
{
struct irq_domain *parent_domain, *domain, *msi_domain;
struct device_node *parent_node, *node;
struct ti_sci_inta_irq_domain *inta;
struct device *dev = &pdev->dev;
struct resource *res;
int ret;
node = dev_of_node(dev);
parent_node = of_irq_find_parent(node);
if (!parent_node) {
dev_err(dev, "Failed to get IRQ parent node\n");
return -ENODEV;
}
parent_domain = irq_find_host(parent_node);
if (!parent_domain)
return -EPROBE_DEFER;
inta = devm_kzalloc(dev, sizeof(*inta), GFP_KERNEL);
if (!inta)
return -ENOMEM;
inta->pdev = pdev;
inta->sci = devm_ti_sci_get_by_phandle(dev, "ti,sci");
if (IS_ERR(inta->sci)) {
ret = PTR_ERR(inta->sci);
if (ret != -EPROBE_DEFER)
dev_err(dev, "ti,sci read fail %d\n", ret);
inta->sci = NULL;
return ret;
}
ret = of_property_read_u32(dev->of_node, "ti,sci-dev-id", &pdev->id);
if (ret) {
dev_err(dev, "missing 'ti,sci-dev-id' property\n");
return -EINVAL;
}
inta->vint = devm_ti_sci_get_of_resource(inta->sci, dev, pdev->id,
"ti,sci-rm-range-vint");
if (IS_ERR(inta->vint)) {
dev_err(dev, "VINT resource allocation failed\n");
return PTR_ERR(inta->vint);
}
inta->global_event = devm_ti_sci_get_of_resource(inta->sci, dev, pdev->id,
"ti,sci-rm-range-global-event");
if (IS_ERR(inta->global_event)) {
dev_err(dev, "Global event resource allocation failed\n");
return PTR_ERR(inta->global_event);
}
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
inta->base = devm_ioremap_resource(dev, res);
if (IS_ERR(inta->base))
return -ENODEV;
domain = irq_domain_add_linear(dev_of_node(dev),
ti_sci_get_num_resources(inta->vint),
&ti_sci_inta_irq_domain_ops, inta);
if (!domain) {
dev_err(dev, "Failed to allocate IRQ domain\n");
return -ENOMEM;
}
msi_domain = ti_sci_inta_msi_create_irq_domain(of_node_to_fwnode(node),
&ti_sci_inta_msi_domain_info,
domain);
if (!msi_domain) {
irq_domain_remove(domain);
dev_err(dev, "Failed to allocate msi domain\n");
return -ENOMEM;
}
INIT_LIST_HEAD(&inta->vint_list);
mutex_init(&inta->vint_mutex);
return 0;
}
static const struct of_device_id ti_sci_inta_irq_domain_of_match[] = {
{ .compatible = "ti,sci-inta", },
{ /* sentinel */ },
};
MODULE_DEVICE_TABLE(of, ti_sci_inta_irq_domain_of_match);
static struct platform_driver ti_sci_inta_irq_domain_driver = {
.probe = ti_sci_inta_irq_domain_probe,
.driver = {
.name = "ti-sci-inta",
.of_match_table = ti_sci_inta_irq_domain_of_match,
},
};
module_platform_driver(ti_sci_inta_irq_domain_driver);
MODULE_AUTHOR("Lokesh Vutla <lokeshvutla@ti.com>");
MODULE_DESCRIPTION("K3 Interrupt Aggregator driver over TI SCI protocol");
MODULE_LICENSE("GPL v2");
// SPDX-License-Identifier: GPL-2.0
/*
* Texas Instruments' K3 Interrupt Router irqchip driver
*
* Copyright (C) 2018-2019 Texas Instruments Incorporated - http://www.ti.com/
* Lokesh Vutla <lokeshvutla@ti.com>
*/
#include <linux/err.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
#include <linux/io.h>
#include <linux/irqchip.h>
#include <linux/irqdomain.h>
#include <linux/of_platform.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
#include <linux/soc/ti/ti_sci_protocol.h>
#define TI_SCI_DEV_ID_MASK 0xffff
#define TI_SCI_DEV_ID_SHIFT 16
#define TI_SCI_IRQ_ID_MASK 0xffff
#define TI_SCI_IRQ_ID_SHIFT 0
#define HWIRQ_TO_DEVID(hwirq) (((hwirq) >> (TI_SCI_DEV_ID_SHIFT)) & \
(TI_SCI_DEV_ID_MASK))
#define HWIRQ_TO_IRQID(hwirq) ((hwirq) & (TI_SCI_IRQ_ID_MASK))
#define TO_HWIRQ(dev, index) ((((dev) & TI_SCI_DEV_ID_MASK) << \
TI_SCI_DEV_ID_SHIFT) | \
((index) & TI_SCI_IRQ_ID_MASK))
/**
* struct ti_sci_intr_irq_domain - Structure representing a TISCI based
* Interrupt Router IRQ domain.
* @sci: Pointer to TISCI handle
* @dst_irq: TISCI resource pointer representing GIC irq controller.
* @dst_id: TISCI device ID of the GIC irq controller.
* @type: Specifies the trigger type supported by this Interrupt Router
*/
struct ti_sci_intr_irq_domain {
const struct ti_sci_handle *sci;
struct ti_sci_resource *dst_irq;
u32 dst_id;
u32 type;
};
static struct irq_chip ti_sci_intr_irq_chip = {
.name = "INTR",
.irq_eoi = irq_chip_eoi_parent,
.irq_mask = irq_chip_mask_parent,
.irq_unmask = irq_chip_unmask_parent,
.irq_set_type = irq_chip_set_type_parent,
.irq_retrigger = irq_chip_retrigger_hierarchy,
.irq_set_affinity = irq_chip_set_affinity_parent,
};
/**
* ti_sci_intr_irq_domain_translate() - Retrieve hwirq and type from
* IRQ firmware specific handler.
* @domain: Pointer to IRQ domain
* @fwspec: Pointer to IRQ specific firmware structure
* @hwirq: IRQ number identified by hardware
* @type: IRQ type
*
 * Return 0 on success, else an appropriate error code.
*/
static int ti_sci_intr_irq_domain_translate(struct irq_domain *domain,
struct irq_fwspec *fwspec,
unsigned long *hwirq,
unsigned int *type)
{
struct ti_sci_intr_irq_domain *intr = domain->host_data;
if (fwspec->param_count != 2)
return -EINVAL;
*hwirq = TO_HWIRQ(fwspec->param[0], fwspec->param[1]);
*type = intr->type;
return 0;
}
/**
* ti_sci_intr_irq_domain_free() - Free the specified IRQs from the domain.
* @domain: Domain to which the irqs belong
* @virq: Linux virtual IRQ to be freed.
* @nr_irqs: Number of continuous irqs to be freed
*/
static void ti_sci_intr_irq_domain_free(struct irq_domain *domain,
unsigned int virq, unsigned int nr_irqs)
{
struct ti_sci_intr_irq_domain *intr = domain->host_data;
struct irq_data *data, *parent_data;
u16 dev_id, irq_index;
parent_data = irq_domain_get_irq_data(domain->parent, virq);
data = irq_domain_get_irq_data(domain, virq);
irq_index = HWIRQ_TO_IRQID(data->hwirq);
dev_id = HWIRQ_TO_DEVID(data->hwirq);
intr->sci->ops.rm_irq_ops.free_irq(intr->sci, dev_id, irq_index,
intr->dst_id, parent_data->hwirq);
ti_sci_release_resource(intr->dst_irq, parent_data->hwirq);
irq_domain_free_irqs_parent(domain, virq, 1);
irq_domain_reset_irq_data(data);
}
/**
* ti_sci_intr_alloc_gic_irq() - Allocate GIC specific IRQ
* @domain: Pointer to the interrupt router IRQ domain
* @virq: Corresponding Linux virtual IRQ number
* @hwirq: Corresponding hwirq for the IRQ within this IRQ domain
*
 * Returns 0 if all went well, else an appropriate error value.
*/
static int ti_sci_intr_alloc_gic_irq(struct irq_domain *domain,
unsigned int virq, u32 hwirq)
{
struct ti_sci_intr_irq_domain *intr = domain->host_data;
struct irq_fwspec fwspec;
u16 dev_id, irq_index;
u16 dst_irq;
int err;
dev_id = HWIRQ_TO_DEVID(hwirq);
irq_index = HWIRQ_TO_IRQID(hwirq);
dst_irq = ti_sci_get_free_resource(intr->dst_irq);
if (dst_irq == TI_SCI_RESOURCE_NULL)
return -EINVAL;
fwspec.fwnode = domain->parent->fwnode;
fwspec.param_count = 3;
fwspec.param[0] = 0; /* SPI */
fwspec.param[1] = dst_irq - 32; /* SPI offset */
fwspec.param[2] = intr->type;
err = irq_domain_alloc_irqs_parent(domain, virq, 1, &fwspec);
if (err)
goto err_irqs;
err = intr->sci->ops.rm_irq_ops.set_irq(intr->sci, dev_id, irq_index,
intr->dst_id, dst_irq);
if (err)
goto err_msg;
return 0;
err_msg:
irq_domain_free_irqs_parent(domain, virq, 1);
err_irqs:
ti_sci_release_resource(intr->dst_irq, dst_irq);
return err;
}
/**
* ti_sci_intr_irq_domain_alloc() - Allocate Interrupt router IRQs
 * @domain: Pointer to the interrupt router IRQ domain
* @virq: Corresponding Linux virtual IRQ number
* @nr_irqs: Continuous irqs to be allocated
* @data: Pointer to firmware specifier
*
* Return 0 if all went well else appropriate error value.
*/
static int ti_sci_intr_irq_domain_alloc(struct irq_domain *domain,
unsigned int virq, unsigned int nr_irqs,
void *data)
{
struct irq_fwspec *fwspec = data;
unsigned long hwirq;
unsigned int flags;
int err;
err = ti_sci_intr_irq_domain_translate(domain, fwspec, &hwirq, &flags);
if (err)
return err;
err = ti_sci_intr_alloc_gic_irq(domain, virq, hwirq);
if (err)
return err;
irq_domain_set_hwirq_and_chip(domain, virq, hwirq,
&ti_sci_intr_irq_chip, NULL);
return 0;
}
static const struct irq_domain_ops ti_sci_intr_irq_domain_ops = {
.free = ti_sci_intr_irq_domain_free,
.alloc = ti_sci_intr_irq_domain_alloc,
.translate = ti_sci_intr_irq_domain_translate,
};
static int ti_sci_intr_irq_domain_probe(struct platform_device *pdev)
{
struct irq_domain *parent_domain, *domain;
struct ti_sci_intr_irq_domain *intr;
struct device_node *parent_node;
struct device *dev = &pdev->dev;
int ret;
parent_node = of_irq_find_parent(dev_of_node(dev));
if (!parent_node) {
dev_err(dev, "Failed to get IRQ parent node\n");
return -ENODEV;
}
parent_domain = irq_find_host(parent_node);
if (!parent_domain) {
dev_err(dev, "Failed to find IRQ parent domain\n");
return -ENODEV;
}
intr = devm_kzalloc(dev, sizeof(*intr), GFP_KERNEL);
if (!intr)
return -ENOMEM;
ret = of_property_read_u32(dev_of_node(dev), "ti,intr-trigger-type",
&intr->type);
if (ret) {
dev_err(dev, "missing ti,intr-trigger-type property\n");
return -EINVAL;
}
intr->sci = devm_ti_sci_get_by_phandle(dev, "ti,sci");
if (IS_ERR(intr->sci)) {
ret = PTR_ERR(intr->sci);
if (ret != -EPROBE_DEFER)
dev_err(dev, "ti,sci read fail %d\n", ret);
intr->sci = NULL;
return ret;
}
ret = of_property_read_u32(dev_of_node(dev), "ti,sci-dst-id",
&intr->dst_id);
if (ret) {
dev_err(dev, "missing 'ti,sci-dst-id' property\n");
return -EINVAL;
}
intr->dst_irq = devm_ti_sci_get_of_resource(intr->sci, dev,
intr->dst_id,
"ti,sci-rm-range-girq");
if (IS_ERR(intr->dst_irq)) {
dev_err(dev, "Destination irq resource allocation failed\n");
return PTR_ERR(intr->dst_irq);
}
domain = irq_domain_add_hierarchy(parent_domain, 0, 0, dev_of_node(dev),
&ti_sci_intr_irq_domain_ops, intr);
if (!domain) {
dev_err(dev, "Failed to allocate IRQ domain\n");
return -ENOMEM;
}
return 0;
}
static const struct of_device_id ti_sci_intr_irq_domain_of_match[] = {
{ .compatible = "ti,sci-intr", },
{ /* sentinel */ },
};
MODULE_DEVICE_TABLE(of, ti_sci_intr_irq_domain_of_match);
static struct platform_driver ti_sci_intr_irq_domain_driver = {
.probe = ti_sci_intr_irq_domain_probe,
.driver = {
.name = "ti-sci-intr",
.of_match_table = ti_sci_intr_irq_domain_of_match,
},
};
module_platform_driver(ti_sci_intr_irq_domain_driver);
MODULE_AUTHOR("Lokesh Vutla <lokeshvutla@ti.com>");
MODULE_DESCRIPTION("K3 Interrupt Router driver over TI SCI protocol");
MODULE_LICENSE("GPL v2");
@@ -74,4 +74,10 @@ config TI_SCI_PM_DOMAINS
called ti_sci_pm_domains. Note this is needed early in boot before
rootfs may be available.
config TI_SCI_INTA_MSI_DOMAIN
bool
select GENERIC_MSI_IRQ_DOMAIN
help
Driver to enable Interrupt Aggregator specific MSI Domain.
endif # SOC_TI
@@ -8,3 +8,4 @@ obj-$(CONFIG_KEYSTONE_NAVIGATOR_DMA) += knav_dma.o
obj-$(CONFIG_AMX3_PM) += pm33xx.o
obj-$(CONFIG_WKUP_M3_IPC) += wkup_m3_ipc.o
obj-$(CONFIG_TI_SCI_PM_DOMAINS) += ti_sci_pm_domains.o
obj-$(CONFIG_TI_SCI_INTA_MSI_DOMAIN) += ti_sci_inta_msi.o
// SPDX-License-Identifier: GPL-2.0
/*
* Texas Instruments' K3 Interrupt Aggregator MSI bus
*
* Copyright (C) 2018-2019 Texas Instruments Incorporated - http://www.ti.com/
* Lokesh Vutla <lokeshvutla@ti.com>
*/
#include <linux/irq.h>
#include <linux/irqdomain.h>
#include <linux/msi.h>
#include <linux/of_address.h>
#include <linux/of_device.h>
#include <linux/of_irq.h>
#include <linux/soc/ti/ti_sci_inta_msi.h>
#include <linux/soc/ti/ti_sci_protocol.h>
static void ti_sci_inta_msi_write_msg(struct irq_data *data,
struct msi_msg *msg)
{
/* Nothing to do */
}
static void ti_sci_inta_msi_compose_msi_msg(struct irq_data *data,
struct msi_msg *msg)
{
/* Nothing to do */
}
static void ti_sci_inta_msi_update_chip_ops(struct msi_domain_info *info)
{
struct irq_chip *chip = info->chip;
if (WARN_ON(!chip))
return;
chip->irq_request_resources = irq_chip_request_resources_parent;
chip->irq_release_resources = irq_chip_release_resources_parent;
chip->irq_compose_msi_msg = ti_sci_inta_msi_compose_msi_msg;
chip->irq_write_msi_msg = ti_sci_inta_msi_write_msg;
chip->irq_set_type = irq_chip_set_type_parent;
chip->irq_unmask = irq_chip_unmask_parent;
chip->irq_mask = irq_chip_mask_parent;
chip->irq_ack = irq_chip_ack_parent;
}
struct irq_domain *ti_sci_inta_msi_create_irq_domain(struct fwnode_handle *fwnode,
struct msi_domain_info *info,
struct irq_domain *parent)
{
struct irq_domain *domain;
ti_sci_inta_msi_update_chip_ops(info);
domain = msi_create_irq_domain(fwnode, info, parent);
if (domain)
irq_domain_update_bus_token(domain, DOMAIN_BUS_TI_SCI_INTA_MSI);
return domain;
}
EXPORT_SYMBOL_GPL(ti_sci_inta_msi_create_irq_domain);
static void ti_sci_inta_msi_free_descs(struct device *dev)
{
struct msi_desc *desc, *tmp;
list_for_each_entry_safe(desc, tmp, dev_to_msi_list(dev), list) {
list_del(&desc->list);
free_msi_entry(desc);
}
}
static int ti_sci_inta_msi_alloc_descs(struct device *dev,
struct ti_sci_resource *res)
{
struct msi_desc *msi_desc;
int set, i, count = 0;
for (set = 0; set < res->sets; set++) {
for (i = 0; i < res->desc[set].num; i++) {
msi_desc = alloc_msi_entry(dev, 1, NULL);
if (!msi_desc) {
ti_sci_inta_msi_free_descs(dev);
return -ENOMEM;
}
msi_desc->inta.dev_index = res->desc[set].start + i;
INIT_LIST_HEAD(&msi_desc->list);
list_add_tail(&msi_desc->list, dev_to_msi_list(dev));
count++;
}
}
return count;
}
int ti_sci_inta_msi_domain_alloc_irqs(struct device *dev,
struct ti_sci_resource *res)
{
struct platform_device *pdev = to_platform_device(dev);
struct irq_domain *msi_domain;
int ret, nvec;
msi_domain = dev_get_msi_domain(dev);
if (!msi_domain)
return -EINVAL;
if (pdev->id < 0)
return -ENODEV;
nvec = ti_sci_inta_msi_alloc_descs(dev, res);
if (nvec <= 0)
return nvec;
ret = msi_domain_alloc_irqs(msi_domain, dev, nvec);
if (ret) {
dev_err(dev, "Failed to allocate IRQs %d\n", ret);
goto cleanup;
}
return 0;
cleanup:
ti_sci_inta_msi_free_descs(&pdev->dev);
return ret;
}
EXPORT_SYMBOL_GPL(ti_sci_inta_msi_domain_alloc_irqs);
void ti_sci_inta_msi_domain_free_irqs(struct device *dev)
{
msi_domain_free_irqs(dev->msi_domain, dev);
ti_sci_inta_msi_free_descs(dev);
}
EXPORT_SYMBOL_GPL(ti_sci_inta_msi_domain_free_irqs);
unsigned int ti_sci_inta_msi_get_virq(struct device *dev, u32 dev_index)
{
struct msi_desc *desc;
for_each_msi_entry(desc, dev)
if (desc->inta.dev_index == dev_index)
return desc->irq;
return -ENODEV;
}
EXPORT_SYMBOL_GPL(ti_sci_inta_msi_get_virq);
@@ -71,12 +71,25 @@ void iommu_dma_unmap_resource(struct device *dev, dma_addr_t handle,
size_t size, enum dma_data_direction dir, unsigned long attrs);
/* The DMA API isn't _quite_ the whole story, though... */
/*
* iommu_dma_prepare_msi() - Map the MSI page in the IOMMU device
*
* The MSI page will be stored in @desc.
*
* Return: 0 on success otherwise an error describing the failure.
*/
int iommu_dma_prepare_msi(struct msi_desc *desc, phys_addr_t msi_addr);
/* Update the MSI message if required. */
void iommu_dma_compose_msi_msg(struct msi_desc *desc,
struct msi_msg *msg);
void iommu_dma_get_resv_regions(struct device *dev, struct list_head *list);
#else
struct iommu_domain;
struct msi_desc;
struct msi_msg;
struct device;
@@ -99,7 +112,14 @@ static inline void iommu_put_dma_cookie(struct iommu_domain *domain)
{
}
static inline int iommu_dma_prepare_msi(struct msi_desc *desc,
phys_addr_t msi_addr)
{
return 0;
}
static inline void iommu_dma_compose_msi_msg(struct msi_desc *desc,
struct msi_msg *msg)
{
}
...
@@ -625,6 +625,8 @@ extern int irq_chip_set_wake_parent(struct irq_data *data, unsigned int on);
extern int irq_chip_set_vcpu_affinity_parent(struct irq_data *data,
void *vcpu_info);
extern int irq_chip_set_type_parent(struct irq_data *data, unsigned int type);
extern int irq_chip_request_resources_parent(struct irq_data *data);
extern void irq_chip_release_resources_parent(struct irq_data *data);
#endif
/* Handling of unhandled and spurious interrupts: */
...
@@ -165,7 +165,7 @@
#define GICR_PROPBASER_nCnB GIC_BASER_CACHEABILITY(GICR_PROPBASER, INNER, nCnB)
#define GICR_PROPBASER_nC GIC_BASER_CACHEABILITY(GICR_PROPBASER, INNER, nC)
#define GICR_PROPBASER_RaWt GIC_BASER_CACHEABILITY(GICR_PROPBASER, INNER, RaWt)
#define GICR_PROPBASER_RaWb GIC_BASER_CACHEABILITY(GICR_PROPBASER, INNER, RaWb)
#define GICR_PROPBASER_WaWt GIC_BASER_CACHEABILITY(GICR_PROPBASER, INNER, WaWt)
#define GICR_PROPBASER_WaWb GIC_BASER_CACHEABILITY(GICR_PROPBASER, INNER, WaWb)
#define GICR_PROPBASER_RaWaWt GIC_BASER_CACHEABILITY(GICR_PROPBASER, INNER, RaWaWt)
@@ -192,7 +192,7 @@
#define GICR_PENDBASER_nCnB GIC_BASER_CACHEABILITY(GICR_PENDBASER, INNER, nCnB)
#define GICR_PENDBASER_nC GIC_BASER_CACHEABILITY(GICR_PENDBASER, INNER, nC)
#define GICR_PENDBASER_RaWt GIC_BASER_CACHEABILITY(GICR_PENDBASER, INNER, RaWt)
#define GICR_PENDBASER_RaWb GIC_BASER_CACHEABILITY(GICR_PENDBASER, INNER, RaWb)
#define GICR_PENDBASER_WaWt GIC_BASER_CACHEABILITY(GICR_PENDBASER, INNER, WaWt)
#define GICR_PENDBASER_WaWb GIC_BASER_CACHEABILITY(GICR_PENDBASER, INNER, WaWb)
#define GICR_PENDBASER_RaWaWt GIC_BASER_CACHEABILITY(GICR_PENDBASER, INNER, RaWaWt)
@@ -251,7 +251,7 @@
#define GICR_VPROPBASER_nCnB GIC_BASER_CACHEABILITY(GICR_VPROPBASER, INNER, nCnB)
#define GICR_VPROPBASER_nC GIC_BASER_CACHEABILITY(GICR_VPROPBASER, INNER, nC)
#define GICR_VPROPBASER_RaWt GIC_BASER_CACHEABILITY(GICR_VPROPBASER, INNER, RaWt)
#define GICR_VPROPBASER_RaWb GIC_BASER_CACHEABILITY(GICR_VPROPBASER, INNER, RaWb)
#define GICR_VPROPBASER_WaWt GIC_BASER_CACHEABILITY(GICR_VPROPBASER, INNER, WaWt)
#define GICR_VPROPBASER_WaWb GIC_BASER_CACHEABILITY(GICR_VPROPBASER, INNER, WaWb)
#define GICR_VPROPBASER_RaWaWt GIC_BASER_CACHEABILITY(GICR_VPROPBASER, INNER, RaWaWt)
@@ -277,7 +277,7 @@
#define GICR_VPENDBASER_nCnB GIC_BASER_CACHEABILITY(GICR_VPENDBASER, INNER, nCnB)
#define GICR_VPENDBASER_nC GIC_BASER_CACHEABILITY(GICR_VPENDBASER, INNER, nC)
#define GICR_VPENDBASER_RaWt GIC_BASER_CACHEABILITY(GICR_VPENDBASER, INNER, RaWt)
#define GICR_VPENDBASER_RaWb GIC_BASER_CACHEABILITY(GICR_VPENDBASER, INNER, RaWb)
#define GICR_VPENDBASER_WaWt GIC_BASER_CACHEABILITY(GICR_VPENDBASER, INNER, WaWt)
#define GICR_VPENDBASER_WaWb GIC_BASER_CACHEABILITY(GICR_VPENDBASER, INNER, WaWb)
#define GICR_VPENDBASER_RaWaWt GIC_BASER_CACHEABILITY(GICR_VPENDBASER, INNER, RaWaWt)
@@ -351,7 +351,7 @@
#define GITS_CBASER_nCnB GIC_BASER_CACHEABILITY(GITS_CBASER, INNER, nCnB)
#define GITS_CBASER_nC GIC_BASER_CACHEABILITY(GITS_CBASER, INNER, nC)
#define GITS_CBASER_RaWt GIC_BASER_CACHEABILITY(GITS_CBASER, INNER, RaWt)
#define GITS_CBASER_RaWb GIC_BASER_CACHEABILITY(GITS_CBASER, INNER, RaWb)
#define GITS_CBASER_WaWt GIC_BASER_CACHEABILITY(GITS_CBASER, INNER, WaWt)
#define GITS_CBASER_WaWb GIC_BASER_CACHEABILITY(GITS_CBASER, INNER, WaWb)
#define GITS_CBASER_RaWaWt GIC_BASER_CACHEABILITY(GITS_CBASER, INNER, RaWaWt)
@@ -377,7 +377,7 @@
#define GITS_BASER_nCnB GIC_BASER_CACHEABILITY(GITS_BASER, INNER, nCnB)
#define GITS_BASER_nC GIC_BASER_CACHEABILITY(GITS_BASER, INNER, nC)
#define GITS_BASER_RaWt GIC_BASER_CACHEABILITY(GITS_BASER, INNER, RaWt)
#define GITS_BASER_RaWb GIC_BASER_CACHEABILITY(GITS_BASER, INNER, RaWb)
#define GITS_BASER_WaWt GIC_BASER_CACHEABILITY(GITS_BASER, INNER, WaWt)
#define GITS_BASER_WaWb GIC_BASER_CACHEABILITY(GITS_BASER, INNER, WaWb)
#define GITS_BASER_RaWaWt GIC_BASER_CACHEABILITY(GITS_BASER, INNER, RaWaWt)
...
@@ -82,6 +82,7 @@ enum irq_domain_bus_token {
DOMAIN_BUS_NEXUS,
DOMAIN_BUS_IPI,
DOMAIN_BUS_FSL_MC_MSI,
DOMAIN_BUS_TI_SCI_INTA_MSI,
};
/**
...
@@ -47,6 +47,14 @@ struct fsl_mc_msi_desc {
u16 msi_index;
};
/**
 * ti_sci_inta_msi_desc - TISCI based INTA specific msi descriptor data
 * @dev_index: TISCI device index
 */
struct ti_sci_inta_msi_desc {
u16 dev_index;
};
/**
 * struct msi_desc - Descriptor structure for MSI based interrupts
 * @list: List head for management
@@ -68,6 +76,7 @@ struct fsl_mc_msi_desc {
 * @mask_base: [PCI MSI-X] Mask register base address
 * @platform: [platform] Platform device specific msi descriptor data
 * @fsl_mc: [fsl-mc] FSL MC device specific msi descriptor data
 * @inta: [INTA] TISCI based INTA specific msi descriptor data
 */
struct msi_desc {
/* Shared device/bus type independent data */
@@ -77,6 +86,9 @@ struct msi_desc {
struct device *dev;
struct msi_msg msg;
struct irq_affinity_desc *affinity;
#ifdef CONFIG_IRQ_MSI_IOMMU
const void *iommu_cookie;
#endif
union {
/* PCI MSI/X specific data */
@@ -106,6 +118,7 @@ struct msi_desc {
 */
struct platform_msi_desc platform;
struct fsl_mc_msi_desc fsl_mc;
struct ti_sci_inta_msi_desc inta;
};
};
@@ -119,6 +132,29 @@ struct msi_desc {
#define for_each_msi_entry_safe(desc, tmp, dev) \
list_for_each_entry_safe((desc), (tmp), dev_to_msi_list((dev)), list)
#ifdef CONFIG_IRQ_MSI_IOMMU
static inline const void *msi_desc_get_iommu_cookie(struct msi_desc *desc)
{
return desc->iommu_cookie;
}
static inline void msi_desc_set_iommu_cookie(struct msi_desc *desc,
const void *iommu_cookie)
{
desc->iommu_cookie = iommu_cookie;
}
#else
static inline const void *msi_desc_get_iommu_cookie(struct msi_desc *desc)
{
return NULL;
}
static inline void msi_desc_set_iommu_cookie(struct msi_desc *desc,
const void *iommu_cookie)
{
}
#endif
#ifdef CONFIG_PCI_MSI
#define first_pci_msi_entry(pdev) first_msi_entry(&(pdev)->dev)
#define for_each_pci_msi_entry(desc, pdev) \
...
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Texas Instruments' K3 TI SCI INTA MSI helper
*
* Copyright (C) 2018-2019 Texas Instruments Incorporated - http://www.ti.com/
* Lokesh Vutla <lokeshvutla@ti.com>
*/
#ifndef __INCLUDE_LINUX_TI_SCI_INTA_MSI_H
#define __INCLUDE_LINUX_TI_SCI_INTA_MSI_H
#include <linux/msi.h>
#include <linux/soc/ti/ti_sci_protocol.h>
struct irq_domain
*ti_sci_inta_msi_create_irq_domain(struct fwnode_handle *fwnode,
struct msi_domain_info *info,
struct irq_domain *parent);
int ti_sci_inta_msi_domain_alloc_irqs(struct device *dev,
struct ti_sci_resource *res);
unsigned int ti_sci_inta_msi_get_virq(struct device *dev, u32 index);
void ti_sci_inta_msi_domain_free_irqs(struct device *dev);
#endif /* __INCLUDE_LINUX_TI_SCI_INTA_MSI_H */
@@ -192,15 +192,68 @@ struct ti_sci_clk_ops {
u64 *current_freq);
};
/**
* struct ti_sci_rm_core_ops - Resource management core operations
* @get_range: Get a range of resources belonging to ti sci host.
 * @get_range_from_shost: Get a range of resources belonging to
* specified host id.
* - s_host: Host processing entity to which the
* resources are allocated
*
* NOTE: for these functions, all the parameters are consolidated and defined
* as below:
* - handle: Pointer to TISCI handle as retrieved by *ti_sci_get_handle
* - dev_id: TISCI device ID.
* - subtype: Resource assignment subtype that is being requested
* from the given device.
* - range_start: Start index of the resource range
 * - range_num: Number of resources in the range
*/
struct ti_sci_rm_core_ops {
int (*get_range)(const struct ti_sci_handle *handle, u32 dev_id,
u8 subtype, u16 *range_start, u16 *range_num);
int (*get_range_from_shost)(const struct ti_sci_handle *handle,
u32 dev_id, u8 subtype, u8 s_host,
u16 *range_start, u16 *range_num);
};
/**
* struct ti_sci_rm_irq_ops: IRQ management operations
* @set_irq: Set an IRQ route between the requested source
* and destination
* @set_event_map: Set an Event based peripheral irq to Interrupt
* Aggregator.
 * @free_irq: Free an IRQ route between the requested source
 * and destination.
* @free_event_map: Free an event based peripheral irq to Interrupt
* Aggregator.
*/
struct ti_sci_rm_irq_ops {
int (*set_irq)(const struct ti_sci_handle *handle, u16 src_id,
u16 src_index, u16 dst_id, u16 dst_host_irq);
int (*set_event_map)(const struct ti_sci_handle *handle, u16 src_id,
u16 src_index, u16 ia_id, u16 vint,
u16 global_event, u8 vint_status_bit);
int (*free_irq)(const struct ti_sci_handle *handle, u16 src_id,
u16 src_index, u16 dst_id, u16 dst_host_irq);
int (*free_event_map)(const struct ti_sci_handle *handle, u16 src_id,
u16 src_index, u16 ia_id, u16 vint,
u16 global_event, u8 vint_status_bit);
};
/**
 * struct ti_sci_ops - Function support for TI SCI
 * @dev_ops: Device specific operations
 * @clk_ops: Clock specific operations
 * @rm_core_ops: Resource management core operations.
 * @rm_irq_ops: IRQ management specific operations
 */
struct ti_sci_ops {
struct ti_sci_core_ops core_ops;
struct ti_sci_dev_ops dev_ops;
struct ti_sci_clk_ops clk_ops;
struct ti_sci_rm_core_ops rm_core_ops;
struct ti_sci_rm_irq_ops rm_irq_ops;
};
/**
@@ -213,10 +266,47 @@ struct ti_sci_handle {
struct ti_sci_ops ops;
};
#define TI_SCI_RESOURCE_NULL 0xffff
/**
* struct ti_sci_resource_desc - Description of TI SCI resource instance range.
* @start: Start index of the resource.
* @num: Number of resources.
* @res_map: Bitmap to manage the allocation of these resources.
*/
struct ti_sci_resource_desc {
u16 start;
u16 num;
unsigned long *res_map;
};
/**
* struct ti_sci_resource - Structure representing a resource assigned
* to a device.
* @sets: Number of sets available from this resource type
* @lock: Lock to guard the res map in each set.
* @desc: Array of resource descriptors.
*/
struct ti_sci_resource {
u16 sets;
raw_spinlock_t lock;
struct ti_sci_resource_desc *desc;
};
#if IS_ENABLED(CONFIG_TI_SCI_PROTOCOL)
const struct ti_sci_handle *ti_sci_get_handle(struct device *dev);
int ti_sci_put_handle(const struct ti_sci_handle *handle);
const struct ti_sci_handle *devm_ti_sci_get_handle(struct device *dev);
const struct ti_sci_handle *ti_sci_get_by_phandle(struct device_node *np,
const char *property);
const struct ti_sci_handle *devm_ti_sci_get_by_phandle(struct device *dev,
const char *property);
u16 ti_sci_get_free_resource(struct ti_sci_resource *res);
void ti_sci_release_resource(struct ti_sci_resource *res, u16 id);
u32 ti_sci_get_num_resources(struct ti_sci_resource *res);
struct ti_sci_resource *
devm_ti_sci_get_of_resource(const struct ti_sci_handle *handle,
struct device *dev, u32 dev_id, char *of_prop);
#else /* CONFIG_TI_SCI_PROTOCOL */
@@ -236,6 +326,40 @@ const struct ti_sci_handle *devm_ti_sci_get_handle(struct device *dev)
return ERR_PTR(-EINVAL);
}
static inline
const struct ti_sci_handle *ti_sci_get_by_phandle(struct device_node *np,
const char *property)
{
return ERR_PTR(-EINVAL);
}
static inline
const struct ti_sci_handle *devm_ti_sci_get_by_phandle(struct device *dev,
const char *property)
{
return ERR_PTR(-EINVAL);
}
static inline u16 ti_sci_get_free_resource(struct ti_sci_resource *res)
{
return TI_SCI_RESOURCE_NULL;
}
static inline void ti_sci_release_resource(struct ti_sci_resource *res, u16 id)
{
}
static inline u32 ti_sci_get_num_resources(struct ti_sci_resource *res)
{
return 0;
}
static inline struct ti_sci_resource *
devm_ti_sci_get_of_resource(const struct ti_sci_handle *handle,
struct device *dev, u32 dev_id, char *of_prop)
{
return ERR_PTR(-EINVAL);
}
#endif /* CONFIG_TI_SCI_PROTOCOL */
#endif /* __TISCI_PROTOCOL_H */
@@ -91,6 +91,9 @@ config GENERIC_MSI_IRQ_DOMAIN
select IRQ_DOMAIN_HIERARCHY
select GENERIC_MSI_IRQ
config IRQ_MSI_IOMMU
bool
config HANDLE_DOMAIN_IRQ
bool
...
@@ -1459,6 +1459,33 @@ int irq_chip_set_wake_parent(struct irq_data *data, unsigned int on)
return -ENOSYS;
}
EXPORT_SYMBOL_GPL(irq_chip_set_wake_parent);
/**
* irq_chip_request_resources_parent - Request resources on the parent interrupt
* @data: Pointer to interrupt specific data
*/
int irq_chip_request_resources_parent(struct irq_data *data)
{
data = data->parent_data;
if (data->chip->irq_request_resources)
return data->chip->irq_request_resources(data);
return -ENOSYS;
}
EXPORT_SYMBOL_GPL(irq_chip_request_resources_parent);
/**
* irq_chip_release_resources_parent - Release resources on the parent interrupt
* @data: Pointer to interrupt specific data
*/
void irq_chip_release_resources_parent(struct irq_data *data)
{
data = data->parent_data;
if (data->chip->irq_release_resources)
data->chip->irq_release_resources(data);
}
EXPORT_SYMBOL_GPL(irq_chip_release_resources_parent);
#endif
/**
...
@@ -1297,7 +1297,7 @@ int irq_domain_alloc_irqs_hierarchy(struct irq_domain *domain,
/**
 * __irq_domain_alloc_irqs - Allocate IRQs from domain
 * @domain: domain to allocate from
 * @irq_base: allocate specified IRQ number if irq_base >= 0
 * @nr_irqs: number of IRQs to allocate
 * @node: NUMA node id for memory allocation
 * @arg: domain specific argument
...