Commit e4b99d41 authored by Linus Torvalds

Merge branch 'irq-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull irq updates from Thomas Gleixner:
 "The interrupt department provides:

  Core updates:

   - Better spreading to NUMA nodes in the affinity management

   - Support for spreading out more than one set of interrupts, which
     allows separate queues for the separate functions of a single
     device.

   - Decouple the non-queue interrupts from being managed. Those are
     usually general interrupts for error handling etc., and those
     should never be shut down. This is also preparation to utilize
     the spreading mechanism for the initial spreading of non-managed
     interrupts later.

   - Make the single CPU target selection in the matrix allocator more
     balanced so interrupts won't accumulate on single CPUs in certain
     situations.

   - A large spell checking patch so we don't end up fixing single typos
     over and over.

  Driver updates:

   - A bunch of new irqchip drivers (RDA8810PL, Madera, imx-irqsteer)

   - Updates for the i.MX8MQ and F1C100s platform drivers

   - A number of SPDX cleanups

   - A workaround for a very broken GICv3 implementation on msm8996
     which sports a botched register set.

   - A platform-msi fix to prevent memory leakage

   - Various cleanups"

* 'irq-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (37 commits)
  genirq/affinity: Add is_managed to struct irq_affinity_desc
  genirq/core: Introduce struct irq_affinity_desc
  genirq/affinity: Remove excess indentation
  irqchip/stm32: protect configuration registers with hwspinlock
  dt-bindings: interrupt-controller: stm32: Document hwlock properties
  irqchip: Add driver for imx-irqsteer controller
  dt-bindings/irq: Add binding for Freescale IRQSTEER multiplexer
  irqchip: Add driver for Cirrus Logic Madera codecs
  genirq: Fix various typos in comments
  irqchip/irq-imx-gpcv2: Add IRQCHIP_DECLARE for i.MX8MQ compatible
  irqchip/irq-rda-intc: Fix return value check in rda8810_intc_init()
  irqchip/irq-imx-gpcv2: Silence "fall through" warning
  irqchip/gic-v3: Add quirk for msm8996 broken registers
  irqchip/gic: Add support to device tree based quirks
  dt-bindings/gic-v3: Add msm8996 compatible string
  irqchip/sun4i: Add support for Allwinner ARMv5 F1C100s
  irqchip/sun4i: Move IC specific register offsets to struct
  irqchip/sun4i: Add a struct to hold global variables
  dt-bindings: interrupt-controller: Add suniv interrupt-controller
  irqchip: Add RDA8810PL interrupt driver
  ...
parents d8924c0d c410abbb
......@@ -2,7 +2,9 @@ Allwinner Sunxi Interrupt Controller
Required properties:
- compatible : should be "allwinner,sun4i-a10-ic"
- compatible : should be one of the following:
"allwinner,sun4i-a10-ic"
"allwinner,suniv-f1c100s-ic"
- reg : Specifies base physical address and size of the registers.
- interrupt-controller : Identifies the node as an interrupt controller
- #interrupt-cells : Specifies the number of cells needed to encode an
......
......@@ -7,7 +7,9 @@ Interrupts (LPI).
Main node required properties:
- compatible : should at least contain "arm,gic-v3".
- compatible : should at least contain "arm,gic-v3" or either
"qcom,msm8996-gic-v3", "arm,gic-v3" for msm8996 SoCs
to address SoC specific bugs/quirks
- interrupt-controller : Identifies the node as an interrupt controller
- #interrupt-cells : Specifies the number of cells needed to encode an
interrupt source. Must be a single cell with a value of at least 3.
......
Freescale IRQSTEER Interrupt multiplexer
Required properties:
- compatible: should be:
- "fsl,imx8m-irqsteer"
- "fsl,imx-irqsteer"
- reg: Physical base address and size of registers.
- interrupts: Should contain the parent interrupt line used to multiplex the
input interrupts.
- clocks: Should contain one clock for entry in clock-names
see Documentation/devicetree/bindings/clock/clock-bindings.txt
- clock-names:
- "ipg": main logic clock
- interrupt-controller: Identifies the node as an interrupt controller.
- #interrupt-cells: Specifies the number of cells needed to encode an
interrupt source. The value must be 1.
- fsl,channel: The output channel that all input IRQs should be steered into.
- fsl,irq-groups: Number of IRQ groups managed by this controller instance.
Each group manages 64 input interrupts.
Example:
interrupt-controller@32e2d000 {
compatible = "fsl,imx8m-irqsteer", "fsl,imx-irqsteer";
reg = <0x32e2d000 0x1000>;
interrupts = <GIC_SPI 18 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk IMX8MQ_CLK_DISP_APB_ROOT>;
clock-names = "ipg";
fsl,channel = <0>;
fsl,irq-groups = <1>;
interrupt-controller;
#interrupt-cells = <1>;
};
RDA Micro RDA8810PL Interrupt Controller
The interrupt controller in RDA8810PL SoC is a custom interrupt controller
which supports up to 32 interrupts.
Required properties:
- compatible: Should be "rda,8810pl-intc".
- reg: Specifies base physical address of the registers set.
- interrupt-controller: Identifies the node as an interrupt controller.
- #interrupt-cells: Specifies the number of cells needed to encode an
interrupt source. The value shall be 2.
The interrupt sources are as follows:
ID Name
------------
0: PULSE_DUMMY
1: I2C
2: NAND_NFSC
3: SDMMC1
4: SDMMC2
5: SDMMC3
6: SPI1
7: SPI2
8: SPI3
9: UART1
10: UART2
11: UART3
12: GPIO1
13: GPIO2
14: GPIO3
15: KEYPAD
16: TIMER
17: TIMEROS
18: COMREG0
19: COMREG1
20: USB
21: DMC
22: DMA
23: CAMERA
24: GOUDA
25: GPU
26: VPU_JPG
27: VPU_HOST
28: VOC
29: AUIFC0
30: AUIFC1
31: L2CC
Example:
apb@20800000 {
compatible = "simple-bus";
...
intc: interrupt-controller@0 {
compatible = "rda,8810pl-intc";
reg = <0x0 0x1000>;
interrupt-controller;
#interrupt-cells = <2>;
};
};
......@@ -14,6 +14,10 @@ Required properties:
(only needed for exti controller with multiple exti under
same parent interrupt: st,stm32-exti and st,stm32h7-exti)
Optional properties:
- hwlocks: reference to a phandle of a hardware spinlock provider node.
Example:
exti: interrupt-controller@40013c00 {
......
......@@ -3700,8 +3700,10 @@ W: https://github.com/CirrusLogic/linux-drivers/wiki
S: Supported
F: Documentation/devicetree/bindings/mfd/madera.txt
F: Documentation/devicetree/bindings/pinctrl/cirrus,madera-pinctrl.txt
F: include/linux/irqchip/irq-madera*
F: include/linux/mfd/madera/*
F: drivers/gpio/gpio-madera*
F: drivers/irqchip/irq-madera*
F: drivers/mfd/madera*
F: drivers/mfd/cs47l*
F: drivers/pinctrl/cirrus/*
......
......@@ -368,14 +368,16 @@ void platform_msi_domain_free(struct irq_domain *domain, unsigned int virq,
unsigned int nvec)
{
struct platform_msi_priv_data *data = domain->host_data;
struct msi_desc *desc;
for_each_msi_entry(desc, data->dev) {
struct msi_desc *desc, *tmp;
for_each_msi_entry_safe(desc, tmp, data->dev) {
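/*
 * Entries may be unlinked and freed inside this loop, so the _safe
 * iterator's lookahead pointer (tmp) is what keeps the walk valid.
 */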
if (WARN_ON(!desc->irq || desc->nvec_used != 1))
return;
if (!(desc->irq >= virq && desc->irq < (virq + nvec)))
continue;
irq_domain_free_irqs_common(domain, desc->irq, 1);
list_del(&desc->list);
free_msi_entry(desc);
}
}
......
......@@ -150,6 +150,9 @@ config IMGPDC_IRQ
select GENERIC_IRQ_CHIP
select IRQ_DOMAIN
config MADERA_IRQ
tristate
config IRQ_MIPS_CPU
bool
select GENERIC_IRQ_CHIP
......@@ -195,6 +198,10 @@ config JCORE_AIC
help
Support for the J-Core integrated AIC.
config RDA_INTC
bool
select IRQ_DOMAIN
config RENESAS_INTC_IRQPIN
bool
select IRQ_DOMAIN
......@@ -391,6 +398,14 @@ config CSKY_APB_INTC
by C-SKY single core SOC system. It use mmio map apb-bus to visit
the controller's register.
config IMX_IRQSTEER
bool "i.MX IRQSTEER support"
depends on ARCH_MXC || COMPILE_TEST
default ARCH_MXC
select IRQ_DOMAIN
help
Support for the i.MX IRQSTEER interrupt multiplexer/remapper.
endmenu
config SIFIVE_PLIC
......
......@@ -43,6 +43,7 @@ obj-$(CONFIG_IMGPDC_IRQ) += irq-imgpdc.o
obj-$(CONFIG_IRQ_MIPS_CPU) += irq-mips-cpu.o
obj-$(CONFIG_SIRF_IRQ) += irq-sirfsoc.o
obj-$(CONFIG_JCORE_AIC) += irq-jcore-aic.o
obj-$(CONFIG_RDA_INTC) += irq-rda-intc.o
obj-$(CONFIG_RENESAS_INTC_IRQPIN) += irq-renesas-intc-irqpin.o
obj-$(CONFIG_RENESAS_IRQC) += irq-renesas-irqc.o
obj-$(CONFIG_VERSATILE_FPGA_IRQ) += irq-versatile-fpga.o
......@@ -91,3 +92,5 @@ obj-$(CONFIG_QCOM_PDC) += qcom-pdc.o
obj-$(CONFIG_CSKY_MPINTC) += irq-csky-mpintc.o
obj-$(CONFIG_CSKY_APB_INTC) += irq-csky-apb-intc.o
obj-$(CONFIG_SIFIVE_PLIC) += irq-sifive-plic.o
obj-$(CONFIG_IMX_IRQSTEER) += irq-imx-irqsteer.o
obj-$(CONFIG_MADERA_IRQ) += irq-madera.o
// SPDX-License-Identifier: GPL-2.0+
/*
* Copyright 2010 Broadcom
* Copyright 2012 Simon Arlott, Chris Boot, Stephen Warren
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* Quirk 1: Shortcut interrupts don't set the bank 1/2 register pending bits
*
* If an interrupt fires on bank 1 that isn't in the shortcuts list, bit 8
......
// SPDX-License-Identifier: GPL-2.0+
/*
* Root interrupt controller for the BCM2836 (Raspberry Pi 2).
*
* Copyright 2015 Broadcom
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/cpu.h>
......
......@@ -105,7 +105,7 @@ static int __init dw_apb_ictl_init(struct device_node *np,
* DW IP can be configured to allow 2-64 irqs. We can determine
* the number of irqs supported by writing into enable register
* and look for bits not set, as corresponding flip-flops will
* have been removed by sythesis tool.
* have been removed by synthesis tool.
*/
/* mask and enable all interrupts */
......
......@@ -36,6 +36,18 @@ void gic_set_kvm_info(const struct gic_kvm_info *info)
gic_kvm_info = info;
}
void gic_enable_of_quirks(const struct device_node *np,
const struct gic_quirk *quirks, void *data)
{
for (; quirks->desc; quirks++) {
if (!of_device_is_compatible(np, quirks->compatible))
continue;
if (quirks->init(data))
pr_info("GIC: enabling workaround for %s\n",
quirks->desc);
}
}
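/*
 * Unlike gic_enable_quirks() below, which matches on the IIDR register
 * value, this variant matches on the device tree compatible string, so
 * it can enable workarounds for implementations whose ID registers are
 * themselves unreliable (as on msm8996).
 */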
void gic_enable_quirks(u32 iidr, const struct gic_quirk *quirks,
void *data)
{
......
......@@ -23,6 +23,7 @@
struct gic_quirk {
const char *desc;
const char *compatible;
bool (*init)(void *data);
u32 iidr;
u32 mask;
......@@ -35,6 +36,8 @@ void gic_dist_config(void __iomem *base, int gic_irqs,
void gic_cpu_config(void __iomem *base, void (*sync_access)(void));
void gic_enable_quirks(u32 iidr, const struct gic_quirk *quirks,
void *data);
void gic_enable_of_quirks(const struct device_node *np,
const struct gic_quirk *quirks, void *data);
void gic_set_kvm_info(const struct gic_kvm_info *info);
......
......@@ -41,6 +41,8 @@
#include "irq-gic-common.h"
#define FLAGS_WORKAROUND_GICR_WAKER_MSM8996 (1ULL << 0)
struct redist_region {
void __iomem *redist_base;
phys_addr_t phys_base;
......@@ -55,6 +57,7 @@ struct gic_chip_data {
struct irq_domain *domain;
u64 redist_stride;
u32 nr_redist_regions;
u64 flags;
bool has_rss;
unsigned int irq_nr;
struct partition_desc *ppi_descs[16];
......@@ -139,6 +142,9 @@ static void gic_enable_redist(bool enable)
u32 count = 1000000; /* 1s! */
u32 val;
if (gic_data.flags & FLAGS_WORKAROUND_GICR_WAKER_MSM8996)
return;
rbase = gic_data_rdist_rd_base();
val = readl_relaxed(rbase + GICR_WAKER);
......@@ -1067,6 +1073,15 @@ static const struct irq_domain_ops partition_domain_ops = {
.select = gic_irq_domain_select,
};
static bool gic_enable_quirk_msm8996(void *data)
{
struct gic_chip_data *d = data;
d->flags |= FLAGS_WORKAROUND_GICR_WAKER_MSM8996;
return true;
}
static int __init gic_init_bases(void __iomem *dist_base,
struct redist_region *rdist_regs,
u32 nr_redist_regions,
......@@ -1271,6 +1286,16 @@ static void __init gic_of_setup_kvm_info(struct device_node *node)
gic_set_kvm_info(&gic_v3_kvm_info);
}
static const struct gic_quirk gic_quirks[] = {
{
.desc = "GICv3: Qualcomm MSM8996 broken firmware",
.compatible = "qcom,msm8996-gic-v3",
.init = gic_enable_quirk_msm8996,
},
{
}
};
static int __init gic_of_init(struct device_node *node, struct device_node *parent)
{
void __iomem *dist_base;
......@@ -1318,6 +1343,8 @@ static int __init gic_of_init(struct device_node *node, struct device_node *pare
if (of_property_read_u64(node, "redistributor-stride", &redist_stride))
redist_stride = 0;
gic_enable_of_quirks(node, gic_quirks, &gic_data);
err = gic_init_bases(dist_base, rdist_regs, nr_redist_regions,
redist_stride, &node->fwnode);
if (err)
......
......@@ -604,8 +604,8 @@ void gic_dist_save(struct gic_chip_data *gic)
/*
* Restores the GIC distributor registers during resume or when coming out of
* idle. Must be called before enabling interrupts. If a level interrupt
* that occured while the GIC was suspended is still present, it will be
* handled normally, but any edge interrupts that occured will not be seen by
* that occurred while the GIC was suspended is still present, it will be
* handled normally, but any edge interrupts that occurred will not be seen by
* the GIC and need to be handled by the platform-specific wakeup source.
*/
void gic_dist_restore(struct gic_chip_data *gic)
......@@ -899,7 +899,7 @@ void gic_migrate_target(unsigned int new_cpu_id)
gic_cpu_map[cpu] = 1 << new_cpu_id;
/*
* Find all the peripheral interrupts targetting the current
* Find all the peripheral interrupts targeting the current
* CPU interface and migrate them to the new CPU interface.
* We skip DIST_TARGET 0 to 7 as they are read-only.
*/
......
......@@ -17,6 +17,9 @@
#define GPC_IMR1_CORE0 0x30
#define GPC_IMR1_CORE1 0x40
#define GPC_IMR1_CORE2 0x1c0
#define GPC_IMR1_CORE3 0x1d0
struct gpcv2_irqchip_data {
struct raw_spinlock rlock;
......@@ -28,6 +31,11 @@ struct gpcv2_irqchip_data {
static struct gpcv2_irqchip_data *imx_gpcv2_instance;
static void __iomem *gpcv2_idx_to_reg(struct gpcv2_irqchip_data *cd, int i)
{
return cd->gpc_base + cd->cpu2wakeup + i * 4;
}
static int gpcv2_wakeup_source_save(void)
{
struct gpcv2_irqchip_data *cd;
......@@ -39,7 +47,7 @@ static int gpcv2_wakeup_source_save(void)
return 0;
for (i = 0; i < IMR_NUM; i++) {
reg = cd->gpc_base + cd->cpu2wakeup + i * 4;
reg = gpcv2_idx_to_reg(cd, i);
cd->saved_irq_mask[i] = readl_relaxed(reg);
writel_relaxed(cd->wakeup_sources[i], reg);
}
......@@ -50,17 +58,14 @@ static int gpcv2_wakeup_source_save(void)
static void gpcv2_wakeup_source_restore(void)
{
struct gpcv2_irqchip_data *cd;
void __iomem *reg;
int i;
cd = imx_gpcv2_instance;
if (!cd)
return;
for (i = 0; i < IMR_NUM; i++) {
reg = cd->gpc_base + cd->cpu2wakeup + i * 4;
writel_relaxed(cd->saved_irq_mask[i], reg);
}
for (i = 0; i < IMR_NUM; i++)
writel_relaxed(cd->saved_irq_mask[i], gpcv2_idx_to_reg(cd, i));
}
static struct syscore_ops imx_gpcv2_syscore_ops = {
......@@ -73,12 +78,10 @@ static int imx_gpcv2_irq_set_wake(struct irq_data *d, unsigned int on)
struct gpcv2_irqchip_data *cd = d->chip_data;
unsigned int idx = d->hwirq / 32;
unsigned long flags;
void __iomem *reg;
u32 mask, val;
raw_spin_lock_irqsave(&cd->rlock, flags);
reg = cd->gpc_base + cd->cpu2wakeup + idx * 4;
mask = 1 << d->hwirq % 32;
mask = BIT(d->hwirq % 32);
val = cd->wakeup_sources[idx];
cd->wakeup_sources[idx] = on ? (val & ~mask) : (val | mask);
......@@ -99,9 +102,9 @@ static void imx_gpcv2_irq_unmask(struct irq_data *d)
u32 val;
raw_spin_lock(&cd->rlock);
reg = cd->gpc_base + cd->cpu2wakeup + d->hwirq / 32 * 4;
reg = gpcv2_idx_to_reg(cd, d->hwirq / 32);
val = readl_relaxed(reg);
val &= ~(1 << d->hwirq % 32);
val &= ~BIT(d->hwirq % 32);
writel_relaxed(val, reg);
raw_spin_unlock(&cd->rlock);
......@@ -115,9 +118,9 @@ static void imx_gpcv2_irq_mask(struct irq_data *d)
u32 val;
raw_spin_lock(&cd->rlock);
reg = cd->gpc_base + cd->cpu2wakeup + d->hwirq / 32 * 4;
reg = gpcv2_idx_to_reg(cd, d->hwirq / 32);
val = readl_relaxed(reg);
val |= 1 << (d->hwirq % 32);
val |= BIT(d->hwirq % 32);
writel_relaxed(val, reg);
raw_spin_unlock(&cd->rlock);
......@@ -192,11 +195,19 @@ static const struct irq_domain_ops gpcv2_irqchip_data_domain_ops = {
.free = irq_domain_free_irqs_common,
};
static const struct of_device_id gpcv2_of_match[] = {
{ .compatible = "fsl,imx7d-gpc", .data = (const void *) 2 },
{ .compatible = "fsl,imx8mq-gpc", .data = (const void *) 4 },
{ /* END */ }
};
static int __init imx_gpcv2_irqchip_init(struct device_node *node,
struct device_node *parent)
{
struct irq_domain *parent_domain, *domain;
struct gpcv2_irqchip_data *cd;
const struct of_device_id *id;
unsigned long core_num;
int i;
if (!parent) {
......@@ -204,6 +215,14 @@ static int __init imx_gpcv2_irqchip_init(struct device_node *node,
return -ENODEV;
}
id = of_match_node(gpcv2_of_match, node);
if (!id) {
pr_err("%pOF: unknown compatibility string\n", node);
return -ENODEV;
}
core_num = (unsigned long)id->data;
parent_domain = irq_find_host(parent);
if (!parent_domain) {
pr_err("%pOF: unable to get parent domain\n", node);
......@@ -212,7 +231,7 @@ static int __init imx_gpcv2_irqchip_init(struct device_node *node,
cd = kzalloc(sizeof(struct gpcv2_irqchip_data), GFP_KERNEL);
if (!cd) {
pr_err("kzalloc failed!\n");
pr_err("%pOF: kzalloc failed!\n", node);
return -ENOMEM;
}
......@@ -220,7 +239,7 @@ static int __init imx_gpcv2_irqchip_init(struct device_node *node,
cd->gpc_base = of_iomap(node, 0);
if (!cd->gpc_base) {
pr_err("fsl-gpcv2: unable to map gpc registers\n");
pr_err("%pOF: unable to map gpc registers\n", node);
kfree(cd);
return -ENOMEM;
}
......@@ -236,8 +255,17 @@ static int __init imx_gpcv2_irqchip_init(struct device_node *node,
/* Initially mask all interrupts */
for (i = 0; i < IMR_NUM; i++) {
writel_relaxed(~0, cd->gpc_base + GPC_IMR1_CORE0 + i * 4);
writel_relaxed(~0, cd->gpc_base + GPC_IMR1_CORE1 + i * 4);
void __iomem *reg = cd->gpc_base + i * 4;
switch (core_num) {
case 4:
writel_relaxed(~0, reg + GPC_IMR1_CORE2);
writel_relaxed(~0, reg + GPC_IMR1_CORE3);
/* fall through */
case 2:
writel_relaxed(~0, reg + GPC_IMR1_CORE0);
writel_relaxed(~0, reg + GPC_IMR1_CORE1);
}
cd->wakeup_sources[i] = ~0;
}
......@@ -262,4 +290,5 @@ static int __init imx_gpcv2_irqchip_init(struct device_node *node,
return 0;
}
IRQCHIP_DECLARE(imx_gpcv2, "fsl,imx7d-gpc", imx_gpcv2_irqchip_init);
IRQCHIP_DECLARE(imx_gpcv2_imx7d, "fsl,imx7d-gpc", imx_gpcv2_irqchip_init);
IRQCHIP_DECLARE(imx_gpcv2_imx8mq, "fsl,imx8mq-gpc", imx_gpcv2_irqchip_init);
// SPDX-License-Identifier: GPL-2.0+
/*
* Copyright 2017 NXP
* Copyright (C) 2018 Pengutronix, Lucas Stach <kernel@pengutronix.de>
*/
#include <linux/clk.h>
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/irqchip/chained_irq.h>
#include <linux/irqdomain.h>
#include <linux/kernel.h>
#include <linux/of_platform.h>
#include <linux/spinlock.h>
#define CTRL_STRIDE_OFF(_t, _r) (_t * 8 * _r)
#define CHANCTRL 0x0
#define CHANMASK(n, t) (CTRL_STRIDE_OFF(t, 0) + 0x4 * (n) + 0x4)
#define CHANSET(n, t) (CTRL_STRIDE_OFF(t, 1) + 0x4 * (n) + 0x4)
#define CHANSTATUS(n, t) (CTRL_STRIDE_OFF(t, 2) + 0x4 * (n) + 0x4)
#define CHAN_MINTDIS(t) (CTRL_STRIDE_OFF(t, 3) + 0x4)
#define CHAN_MASTRSTAT(t) (CTRL_STRIDE_OFF(t, 3) + 0x8)
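/*
 * Each IRQ group contributes two 32-bit registers (8 bytes) per
 * register bank, so for t groups the MASK, SET and STATUS banks are
 * each t * 8 bytes wide, laid out back to back after CHANCTRL.
 * With one group: CHANCTRL 0x00, CHANMASK 0x04/0x08, CHANSET
 * 0x0c/0x10, CHANSTATUS 0x14/0x18, CHAN_MINTDIS 0x1c and
 * CHAN_MASTRSTAT 0x20.
 */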
struct irqsteer_data {
void __iomem *regs;
struct clk *ipg_clk;
int irq;
raw_spinlock_t lock;
int irq_groups;
int channel;
struct irq_domain *domain;
u32 *saved_reg;
};
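/*
 * The mask/status registers are ordered with the highest interrupts in
 * the lowest-numbered register: idx = irq_groups * 2 - hwirq / 32 - 1,
 * so with one group hwirq 0..31 lives at index 1 and hwirq 32..63 at
 * index 0.
 */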
static int imx_irqsteer_get_reg_index(struct irqsteer_data *data,
unsigned long irqnum)
{
return (data->irq_groups * 2 - irqnum / 32 - 1);
}
static void imx_irqsteer_irq_unmask(struct irq_data *d)
{
struct irqsteer_data *data = d->chip_data;
int idx = imx_irqsteer_get_reg_index(data, d->hwirq);
unsigned long flags;
u32 val;
raw_spin_lock_irqsave(&data->lock, flags);
val = readl_relaxed(data->regs + CHANMASK(idx, data->irq_groups));
val |= BIT(d->hwirq % 32);
writel_relaxed(val, data->regs + CHANMASK(idx, data->irq_groups));
raw_spin_unlock_irqrestore(&data->lock, flags);
}
static void imx_irqsteer_irq_mask(struct irq_data *d)
{
struct irqsteer_data *data = d->chip_data;
int idx = imx_irqsteer_get_reg_index(data, d->hwirq);
unsigned long flags;
u32 val;
raw_spin_lock_irqsave(&data->lock, flags);
val = readl_relaxed(data->regs + CHANMASK(idx, data->irq_groups));
val &= ~BIT(d->hwirq % 32);
writel_relaxed(val, data->regs + CHANMASK(idx, data->irq_groups));
raw_spin_unlock_irqrestore(&data->lock, flags);
}
static struct irq_chip imx_irqsteer_irq_chip = {
.name = "irqsteer",
.irq_mask = imx_irqsteer_irq_mask,
.irq_unmask = imx_irqsteer_irq_unmask,
};
static int imx_irqsteer_irq_map(struct irq_domain *h, unsigned int irq,
irq_hw_number_t hwirq)
{
irq_set_status_flags(irq, IRQ_LEVEL);
irq_set_chip_data(irq, h->host_data);
irq_set_chip_and_handler(irq, &imx_irqsteer_irq_chip, handle_level_irq);
return 0;
}
static const struct irq_domain_ops imx_irqsteer_domain_ops = {
.map = imx_irqsteer_irq_map,
.xlate = irq_domain_xlate_onecell,
};
static void imx_irqsteer_irq_handler(struct irq_desc *desc)
{
struct irqsteer_data *data = irq_desc_get_handler_data(desc);
int i;
chained_irq_enter(irq_desc_get_chip(desc), desc);
for (i = 0; i < data->irq_groups * 64; i += 32) {
int idx = imx_irqsteer_get_reg_index(data, i);
unsigned long irqmap;
int pos, virq;
irqmap = readl_relaxed(data->regs +
CHANSTATUS(idx, data->irq_groups));
for_each_set_bit(pos, &irqmap, 32) {
virq = irq_find_mapping(data->domain, pos + i);
if (virq)
generic_handle_irq(virq);
}
}
chained_irq_exit(irq_desc_get_chip(desc), desc);
}
static int imx_irqsteer_probe(struct platform_device *pdev)
{
struct device_node *np = pdev->dev.of_node;
struct irqsteer_data *data;
struct resource *res;
int ret;
data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL);
if (!data)
return -ENOMEM;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
data->regs = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(data->regs)) {
dev_err(&pdev->dev, "failed to initialize reg\n");
return PTR_ERR(data->regs);
}
data->irq = platform_get_irq(pdev, 0);
if (data->irq <= 0) {
dev_err(&pdev->dev, "failed to get irq\n");
return -ENODEV;
}
data->ipg_clk = devm_clk_get(&pdev->dev, "ipg");
if (IS_ERR(data->ipg_clk)) {
ret = PTR_ERR(data->ipg_clk);
if (ret != -EPROBE_DEFER)
dev_err(&pdev->dev, "failed to get ipg clk: %d\n", ret);
return ret;
}
raw_spin_lock_init(&data->lock);
of_property_read_u32(np, "fsl,irq-groups", &data->irq_groups);
of_property_read_u32(np, "fsl,channel", &data->channel);
if (IS_ENABLED(CONFIG_PM_SLEEP)) {
data->saved_reg = devm_kzalloc(&pdev->dev,
sizeof(u32) * data->irq_groups * 2,
GFP_KERNEL);
if (!data->saved_reg)
return -ENOMEM;
}
ret = clk_prepare_enable(data->ipg_clk);
if (ret) {
dev_err(&pdev->dev, "failed to enable ipg clk: %d\n", ret);
return ret;
}
/* steer all IRQs into configured channel */
writel_relaxed(BIT(data->channel), data->regs + CHANCTRL);
data->domain = irq_domain_add_linear(np, data->irq_groups * 64,
&imx_irqsteer_domain_ops, data);
if (!data->domain) {
dev_err(&pdev->dev, "failed to create IRQ domain\n");
clk_disable_unprepare(data->ipg_clk);
return -ENOMEM;
}
irq_set_chained_handler_and_data(data->irq, imx_irqsteer_irq_handler,
data);
platform_set_drvdata(pdev, data);
return 0;
}
static int imx_irqsteer_remove(struct platform_device *pdev)
{
struct irqsteer_data *irqsteer_data = platform_get_drvdata(pdev);
irq_set_chained_handler_and_data(irqsteer_data->irq, NULL, NULL);
irq_domain_remove(irqsteer_data->domain);
clk_disable_unprepare(irqsteer_data->ipg_clk);
return 0;
}
#ifdef CONFIG_PM_SLEEP
static void imx_irqsteer_save_regs(struct irqsteer_data *data)
{
int i;
for (i = 0; i < data->irq_groups * 2; i++)
data->saved_reg[i] = readl_relaxed(data->regs +
CHANMASK(i, data->irq_groups));
}
static void imx_irqsteer_restore_regs(struct irqsteer_data *data)
{
int i;
writel_relaxed(BIT(data->channel), data->regs + CHANCTRL);
for (i = 0; i < data->irq_groups * 2; i++)
writel_relaxed(data->saved_reg[i],
data->regs + CHANMASK(i, data->irq_groups));
}
static int imx_irqsteer_suspend(struct device *dev)
{
struct irqsteer_data *irqsteer_data = dev_get_drvdata(dev);
imx_irqsteer_save_regs(irqsteer_data);
clk_disable_unprepare(irqsteer_data->ipg_clk);
return 0;
}
static int imx_irqsteer_resume(struct device *dev)
{
struct irqsteer_data *irqsteer_data = dev_get_drvdata(dev);
int ret;
ret = clk_prepare_enable(irqsteer_data->ipg_clk);
if (ret) {
dev_err(dev, "failed to enable ipg clk: %d\n", ret);
return ret;
}
imx_irqsteer_restore_regs(irqsteer_data);
return 0;
}
#endif
static const struct dev_pm_ops imx_irqsteer_pm_ops = {
SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(imx_irqsteer_suspend, imx_irqsteer_resume)
};
static const struct of_device_id imx_irqsteer_dt_ids[] = {
{ .compatible = "fsl,imx-irqsteer", },
{},
};
static struct platform_driver imx_irqsteer_driver = {
.driver = {
.name = "imx-irqsteer",
.of_match_table = imx_irqsteer_dt_ids,
.pm = &imx_irqsteer_pm_ops,
},
.probe = imx_irqsteer_probe,
.remove = imx_irqsteer_remove,
};
builtin_platform_driver(imx_irqsteer_driver);
// SPDX-License-Identifier: GPL-2.0
/*
* Interrupt support for Cirrus Logic Madera codecs
*
* Copyright (C) 2015-2018 Cirrus Logic, Inc. and
* Cirrus Logic International Semiconductor Ltd.
*/
#include <linux/module.h>
#include <linux/gpio.h>
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/irqdomain.h>
#include <linux/pm_runtime.h>
#include <linux/regmap.h>
#include <linux/slab.h>
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/of_gpio.h>
#include <linux/of_irq.h>
#include <linux/irqchip/irq-madera.h>
#include <linux/mfd/madera/core.h>
#include <linux/mfd/madera/pdata.h>
#include <linux/mfd/madera/registers.h>
#define MADERA_IRQ(_irq, _reg) \
[MADERA_IRQ_ ## _irq] = { \
.reg_offset = (_reg) - MADERA_IRQ1_STATUS_2, \
.mask = MADERA_ ## _irq ## _EINT1 \
}
/* Mappings are the same for all Madera codecs */
static const struct regmap_irq madera_irqs[MADERA_NUM_IRQ] = {
MADERA_IRQ(FLL1_LOCK, MADERA_IRQ1_STATUS_2),
MADERA_IRQ(FLL2_LOCK, MADERA_IRQ1_STATUS_2),
MADERA_IRQ(FLL3_LOCK, MADERA_IRQ1_STATUS_2),
MADERA_IRQ(FLLAO_LOCK, MADERA_IRQ1_STATUS_2),
MADERA_IRQ(MICDET1, MADERA_IRQ1_STATUS_6),
MADERA_IRQ(MICDET2, MADERA_IRQ1_STATUS_6),
MADERA_IRQ(HPDET, MADERA_IRQ1_STATUS_6),
MADERA_IRQ(MICD_CLAMP_RISE, MADERA_IRQ1_STATUS_7),
MADERA_IRQ(MICD_CLAMP_FALL, MADERA_IRQ1_STATUS_7),
MADERA_IRQ(JD1_RISE, MADERA_IRQ1_STATUS_7),
MADERA_IRQ(JD1_FALL, MADERA_IRQ1_STATUS_7),
MADERA_IRQ(ASRC2_IN1_LOCK, MADERA_IRQ1_STATUS_9),
MADERA_IRQ(ASRC2_IN2_LOCK, MADERA_IRQ1_STATUS_9),
MADERA_IRQ(ASRC1_IN1_LOCK, MADERA_IRQ1_STATUS_9),
MADERA_IRQ(ASRC1_IN2_LOCK, MADERA_IRQ1_STATUS_9),
MADERA_IRQ(DRC2_SIG_DET, MADERA_IRQ1_STATUS_9),
MADERA_IRQ(DRC1_SIG_DET, MADERA_IRQ1_STATUS_9),
MADERA_IRQ(DSP_IRQ1, MADERA_IRQ1_STATUS_11),
MADERA_IRQ(DSP_IRQ2, MADERA_IRQ1_STATUS_11),
MADERA_IRQ(DSP_IRQ3, MADERA_IRQ1_STATUS_11),
MADERA_IRQ(DSP_IRQ4, MADERA_IRQ1_STATUS_11),
MADERA_IRQ(DSP_IRQ5, MADERA_IRQ1_STATUS_11),
MADERA_IRQ(DSP_IRQ6, MADERA_IRQ1_STATUS_11),
MADERA_IRQ(DSP_IRQ7, MADERA_IRQ1_STATUS_11),
MADERA_IRQ(DSP_IRQ8, MADERA_IRQ1_STATUS_11),
MADERA_IRQ(DSP_IRQ9, MADERA_IRQ1_STATUS_11),
MADERA_IRQ(DSP_IRQ10, MADERA_IRQ1_STATUS_11),
MADERA_IRQ(DSP_IRQ11, MADERA_IRQ1_STATUS_11),
MADERA_IRQ(DSP_IRQ12, MADERA_IRQ1_STATUS_11),
MADERA_IRQ(DSP_IRQ13, MADERA_IRQ1_STATUS_11),
MADERA_IRQ(DSP_IRQ14, MADERA_IRQ1_STATUS_11),
MADERA_IRQ(DSP_IRQ15, MADERA_IRQ1_STATUS_11),
MADERA_IRQ(DSP_IRQ16, MADERA_IRQ1_STATUS_11),
MADERA_IRQ(HP3R_SC, MADERA_IRQ1_STATUS_12),
MADERA_IRQ(HP3L_SC, MADERA_IRQ1_STATUS_12),
MADERA_IRQ(HP2R_SC, MADERA_IRQ1_STATUS_12),
MADERA_IRQ(HP2L_SC, MADERA_IRQ1_STATUS_12),
MADERA_IRQ(HP1R_SC, MADERA_IRQ1_STATUS_12),
MADERA_IRQ(HP1L_SC, MADERA_IRQ1_STATUS_12),
MADERA_IRQ(SPK_OVERHEAT_WARN, MADERA_IRQ1_STATUS_15),
MADERA_IRQ(SPK_OVERHEAT, MADERA_IRQ1_STATUS_15),
MADERA_IRQ(DSP1_BUS_ERR, MADERA_IRQ1_STATUS_33),
MADERA_IRQ(DSP2_BUS_ERR, MADERA_IRQ1_STATUS_33),
MADERA_IRQ(DSP3_BUS_ERR, MADERA_IRQ1_STATUS_33),
MADERA_IRQ(DSP4_BUS_ERR, MADERA_IRQ1_STATUS_33),
MADERA_IRQ(DSP5_BUS_ERR, MADERA_IRQ1_STATUS_33),
MADERA_IRQ(DSP6_BUS_ERR, MADERA_IRQ1_STATUS_33),
MADERA_IRQ(DSP7_BUS_ERR, MADERA_IRQ1_STATUS_33),
};
static const struct regmap_irq_chip madera_irq_chip = {
.name = "madera IRQ",
.status_base = MADERA_IRQ1_STATUS_2,
.mask_base = MADERA_IRQ1_MASK_2,
.ack_base = MADERA_IRQ1_STATUS_2,
.runtime_pm = true,
.num_regs = 32,
.irqs = madera_irqs,
.num_irqs = ARRAY_SIZE(madera_irqs),
};
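/*
 * All of the status-read/ack/mask sequencing is handled by the
 * regmap-irq core; the chip description above (status registers from
 * MADERA_IRQ1_STATUS_2, mask registers from MADERA_IRQ1_MASK_2, 32
 * registers each, with runtime PM held across bus accesses) plus the
 * per-IRQ table is all the hardware knowledge this driver carries.
 */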
#ifdef CONFIG_PM_SLEEP
static int madera_suspend(struct device *dev)
{
struct madera *madera = dev_get_drvdata(dev->parent);
dev_dbg(madera->irq_dev, "Suspend, disabling IRQ\n");
/*
* A runtime resume would be needed to access the chip interrupt
* controller but runtime pm doesn't function during suspend.
* Temporarily disable interrupts until we reach suspend_noirq state.
*/
disable_irq(madera->irq);
return 0;
}
static int madera_suspend_noirq(struct device *dev)
{
struct madera *madera = dev_get_drvdata(dev->parent);
dev_dbg(madera->irq_dev, "No IRQ suspend, reenabling IRQ\n");
/* Re-enable interrupts to service wakeup interrupts from the chip */
enable_irq(madera->irq);
return 0;
}
static int madera_resume_noirq(struct device *dev)
{
struct madera *madera = dev_get_drvdata(dev->parent);
dev_dbg(madera->irq_dev, "No IRQ resume, disabling IRQ\n");
/*
* We can't handle interrupts until runtime pm is available again.
* Disable them temporarily.
*/
disable_irq(madera->irq);
return 0;
}
static int madera_resume(struct device *dev)
{
struct madera *madera = dev_get_drvdata(dev->parent);
dev_dbg(madera->irq_dev, "Resume, reenabling IRQ\n");
/* Interrupts can now be handled */
enable_irq(madera->irq);
return 0;
}
#endif
static const struct dev_pm_ops madera_irq_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(madera_suspend, madera_resume)
SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(madera_suspend_noirq,
madera_resume_noirq)
};
static int madera_irq_probe(struct platform_device *pdev)
{
struct madera *madera = dev_get_drvdata(pdev->dev.parent);
struct irq_data *irq_data;
unsigned int irq_flags = 0;
int ret;
dev_dbg(&pdev->dev, "probe\n");
/*
* Read the flags from the interrupt controller if not specified
* by pdata
*/
irq_flags = madera->pdata.irq_flags;
if (!irq_flags) {
irq_data = irq_get_irq_data(madera->irq);
if (!irq_data) {
dev_err(&pdev->dev, "Invalid IRQ: %d\n", madera->irq);
return -EINVAL;
}
irq_flags = irqd_get_trigger_type(irq_data);
/* Codec defaults to trigger low, use this if no flags given */
if (irq_flags == IRQ_TYPE_NONE)
irq_flags = IRQF_TRIGGER_LOW;
}
if (irq_flags & (IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING)) {
dev_err(&pdev->dev, "Host interrupt not level-triggered\n");
return -EINVAL;
}
/*
* The silicon always starts at active-low, check if we need to
* switch to active-high.
*/
if (irq_flags & IRQF_TRIGGER_HIGH) {
ret = regmap_update_bits(madera->regmap, MADERA_IRQ1_CTRL,
MADERA_IRQ_POL_MASK, 0);
if (ret) {
dev_err(&pdev->dev,
"Failed to set IRQ polarity: %d\n", ret);
return ret;
}
}
/*
* NOTE: regmap registers this against the OF node of the parent of
* the regmap - that is, against the mfd driver
*/
ret = regmap_add_irq_chip(madera->regmap, madera->irq, IRQF_ONESHOT, 0,
&madera_irq_chip, &madera->irq_data);
if (ret) {
dev_err(&pdev->dev, "add_irq_chip failed: %d\n", ret);
return ret;
}
/* Save dev in parent MFD struct so it is accessible to siblings */
madera->irq_dev = &pdev->dev;
return 0;
}
static int madera_irq_remove(struct platform_device *pdev)
{
struct madera *madera = dev_get_drvdata(pdev->dev.parent);
/*
* The IRQ is disabled by the parent MFD driver before
* it starts cleaning up all child drivers
*/
madera->irq_dev = NULL;
regmap_del_irq_chip(madera->irq, madera->irq_data);
return 0;
}
static struct platform_driver madera_irq_driver = {
.probe = &madera_irq_probe,
.remove = &madera_irq_remove,
.driver = {
.name = "madera-irq",
.pm = &madera_irq_pm_ops,
}
};
module_platform_driver(madera_irq_driver);
MODULE_SOFTDEP("pre: madera");
MODULE_DESCRIPTION("Madera IRQ driver");
MODULE_AUTHOR("Richard Fitzgerald <rf@opensource.cirrus.com>");
MODULE_LICENSE("GPL v2");
......@@ -72,7 +72,7 @@ static int __init ocelot_irq_init(struct device_node *node,
domain = irq_domain_add_linear(node, OCELOT_NR_IRQ,
&irq_generic_chip_ops, NULL);
if (!domain) {
pr_err("%s: unable to add irq domain\n", node->name);
pr_err("%pOFn: unable to add irq domain\n", node);
return -ENOMEM;
}
......@@ -80,14 +80,14 @@ static int __init ocelot_irq_init(struct device_node *node,
"icpu", handle_level_irq,
0, 0, 0);
if (ret) {
pr_err("%s: unable to alloc irq domain gc\n", node->name);
pr_err("%pOFn: unable to alloc irq domain gc\n", node);
goto err_domain_remove;
}
gc = irq_get_domain_generic_chip(domain, 0);
gc->reg_base = of_iomap(node, 0);
if (!gc->reg_base) {
pr_err("%s: unable to map resource\n", node->name);
pr_err("%pOFn: unable to map resource\n", node);
ret = -ENOMEM;
goto err_gc_free;
}
......
// SPDX-License-Identifier: GPL-2.0+
/*
* RDA8810PL SoC irqchip driver
*
* Copyright RDA Microelectronics Company Limited
* Copyright (c) 2017 Andreas Färber
* Copyright (c) 2018 Manivannan Sadhasivam
*/
#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/irqchip.h>
#include <linux/irqdomain.h>
#include <linux/of_address.h>
#include <asm/exception.h>
#define RDA_INTC_FINALSTATUS 0x00
#define RDA_INTC_MASK_SET 0x08
#define RDA_INTC_MASK_CLR 0x0c
#define RDA_IRQ_MASK_ALL 0xFFFFFFFF
#define RDA_NR_IRQS 32
static void __iomem *rda_intc_base;
static struct irq_domain *rda_irq_domain;
static void rda_intc_mask_irq(struct irq_data *d)
{
writel_relaxed(BIT(d->hwirq), rda_intc_base + RDA_INTC_MASK_CLR);
}
static void rda_intc_unmask_irq(struct irq_data *d)
{
writel_relaxed(BIT(d->hwirq), rda_intc_base + RDA_INTC_MASK_SET);
}
static int rda_intc_set_type(struct irq_data *data, unsigned int flow_type)
{
/* Hardware supports only level triggered interrupts */
if ((flow_type & (IRQF_TRIGGER_HIGH | IRQF_TRIGGER_LOW)) == flow_type)
return 0;
return -EINVAL;
}
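/*
 * FINALSTATUS is read once and then drained bit by bit: __fls() picks
 * the highest-numbered pending interrupt, it is dispatched through the
 * domain, and its bit is cleared from the local copy until none remain.
 */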
static void __exception_irq_entry rda_handle_irq(struct pt_regs *regs)
{
u32 stat = readl_relaxed(rda_intc_base + RDA_INTC_FINALSTATUS);
u32 hwirq;
while (stat) {
hwirq = __fls(stat);
handle_domain_irq(rda_irq_domain, hwirq, regs);
stat &= ~BIT(hwirq);
}
}
static struct irq_chip rda_irq_chip = {
.name = "rda-intc",
.irq_mask = rda_intc_mask_irq,
.irq_unmask = rda_intc_unmask_irq,
.irq_set_type = rda_intc_set_type,
};
static int rda_irq_map(struct irq_domain *d,
unsigned int virq, irq_hw_number_t hw)
{
irq_set_status_flags(virq, IRQ_LEVEL);
irq_set_chip_and_handler(virq, &rda_irq_chip, handle_level_irq);
irq_set_chip_data(virq, d->host_data);
irq_set_probe(virq);
return 0;
}
static const struct irq_domain_ops rda_irq_domain_ops = {
.map = rda_irq_map,
.xlate = irq_domain_xlate_onecell,
};
static int __init rda8810_intc_init(struct device_node *node,
struct device_node *parent)
{
rda_intc_base = of_io_request_and_map(node, 0, "rda-intc");
if (IS_ERR(rda_intc_base))
return PTR_ERR(rda_intc_base);
/* Mask all interrupt sources */
writel_relaxed(RDA_IRQ_MASK_ALL, rda_intc_base + RDA_INTC_MASK_CLR);
rda_irq_domain = irq_domain_create_linear(&node->fwnode, RDA_NR_IRQS,
&rda_irq_domain_ops,
rda_intc_base);
if (!rda_irq_domain) {
iounmap(rda_intc_base);
return -ENOMEM;
}
set_handle_irq(rda_handle_irq);
return 0;
}
IRQCHIP_DECLARE(rda_intc, "rda,8810pl-intc", rda8810_intc_init);
// SPDX-License-Identifier: GPL-2.0
/*
* H8S interrupt contoller driver
* H8S interrupt controller driver
*
* Copyright 2015 Yoshinori Sato <ysato@users.sourceforge.jp>
*/
......
// SPDX-License-Identifier: GPL-2.0
/*
* Renesas INTC External IRQ Pin Driver
*
* Copyright (C) 2013 Magnus Damm
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/
#include <linux/init.h>
......
// SPDX-License-Identifier: GPL-2.0
/*
* Renesas IRQC Driver
*
* Copyright (C) 2013 Magnus Damm
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/
#include <linux/init.h>
......
......@@ -58,7 +58,7 @@ struct s3c_irq_data {
};
/*
* Sructure holding the controller data
* Structure holding the controller data
* @reg_pending register holding pending irqs
* @reg_intpnd special register intpnd in main intc
* @reg_mask mask register
......
......@@ -6,6 +6,8 @@
*/
#include <linux/bitops.h>
#include <linux/delay.h>
#include <linux/hwspinlock.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/irq.h>
......@@ -20,6 +22,9 @@
#define IRQS_PER_BANK 32
#define HWSPNLCK_TIMEOUT 1000 /* usec */
#define HWSPNLCK_RETRY_DELAY 100 /* usec */
struct stm32_exti_bank {
u32 imr_ofst;
u32 emr_ofst;
......@@ -32,6 +37,12 @@ struct stm32_exti_bank {
#define UNDEF_REG ~0
enum stm32_exti_hwspinlock {
HWSPINLOCK_UNKNOWN,
HWSPINLOCK_NONE,
HWSPINLOCK_READY,
};
struct stm32_desc_irq {
u32 exti;
u32 irq_parent;
......@@ -58,6 +69,9 @@ struct stm32_exti_host_data {
void __iomem *base;
struct stm32_exti_chip_data *chips_data;
const struct stm32_exti_drv_data *drv_data;
struct device_node *node;
enum stm32_exti_hwspinlock hwlock_state;
struct hwspinlock *hwlock;
};
static struct stm32_exti_host_data *stm32_host_data;
......@@ -269,6 +283,64 @@ static int stm32_exti_set_type(struct irq_data *d,
return 0;
}
static int stm32_exti_hwspin_lock(struct stm32_exti_chip_data *chip_data)
{
struct stm32_exti_host_data *host_data = chip_data->host_data;
struct hwspinlock *hwlock;
int id, ret = 0, timeout = 0;
/* first time, check for hwspinlock availability */
if (unlikely(host_data->hwlock_state == HWSPINLOCK_UNKNOWN)) {
id = of_hwspin_lock_get_id(host_data->node, 0);
if (id >= 0) {
hwlock = hwspin_lock_request_specific(id);
if (hwlock) {
/* found valid hwspinlock */
host_data->hwlock_state = HWSPINLOCK_READY;
host_data->hwlock = hwlock;
pr_debug("%s hwspinlock = %d\n", __func__, id);
} else {
host_data->hwlock_state = HWSPINLOCK_NONE;
}
} else if (id != -EPROBE_DEFER) {
host_data->hwlock_state = HWSPINLOCK_NONE;
} else {
/* hwspinlock driver shall be ready at that stage */
ret = -EPROBE_DEFER;
}
}
if (likely(host_data->hwlock_state == HWSPINLOCK_READY)) {
/*
* Use the x_raw API since we are under spin_lock protection.
* Do not use the x_timeout API because we are under irq_disable
* mode (see __setup_irq())
*/
do {
ret = hwspin_trylock_raw(host_data->hwlock);
if (!ret)
return 0;
udelay(HWSPNLCK_RETRY_DELAY);
timeout += HWSPNLCK_RETRY_DELAY;
} while (timeout < HWSPNLCK_TIMEOUT);
if (ret == -EBUSY)
ret = -ETIMEDOUT;
}
if (ret)
pr_err("%s can't get hwspinlock (%d)\n", __func__, ret);
return ret;
}
static void stm32_exti_hwspin_unlock(struct stm32_exti_chip_data *chip_data)
{
if (likely(chip_data->host_data->hwlock_state == HWSPINLOCK_READY))
hwspin_unlock_raw(chip_data->host_data->hwlock);
}
static int stm32_irq_set_type(struct irq_data *d, unsigned int type)
{
struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
......@@ -279,21 +351,26 @@ static int stm32_irq_set_type(struct irq_data *d, unsigned int type)
irq_gc_lock(gc);
err = stm32_exti_hwspin_lock(chip_data);
if (err)
goto unlock;
rtsr = irq_reg_readl(gc, stm32_bank->rtsr_ofst);
ftsr = irq_reg_readl(gc, stm32_bank->ftsr_ofst);
err = stm32_exti_set_type(d, type, &rtsr, &ftsr);
if (err) {
irq_gc_unlock(gc);
return err;
}
if (err)
goto unspinlock;
irq_reg_writel(gc, rtsr, stm32_bank->rtsr_ofst);
irq_reg_writel(gc, ftsr, stm32_bank->ftsr_ofst);
unspinlock:
stm32_exti_hwspin_unlock(chip_data);
unlock:
irq_gc_unlock(gc);
return 0;
return err;
}
static void stm32_chip_suspend(struct stm32_exti_chip_data *chip_data,
......@@ -460,20 +537,27 @@ static int stm32_exti_h_set_type(struct irq_data *d, unsigned int type)
int err;
raw_spin_lock(&chip_data->rlock);
err = stm32_exti_hwspin_lock(chip_data);
if (err)
goto unlock;
rtsr = readl_relaxed(base + stm32_bank->rtsr_ofst);
ftsr = readl_relaxed(base + stm32_bank->ftsr_ofst);
err = stm32_exti_set_type(d, type, &rtsr, &ftsr);
if (err) {
raw_spin_unlock(&chip_data->rlock);
return err;
}
if (err)
goto unspinlock;
writel_relaxed(rtsr, base + stm32_bank->rtsr_ofst);
writel_relaxed(ftsr, base + stm32_bank->ftsr_ofst);
unspinlock:
stm32_exti_hwspin_unlock(chip_data);
unlock:
raw_spin_unlock(&chip_data->rlock);
return 0;
return err;
}
static int stm32_exti_h_set_wake(struct irq_data *d, unsigned int on)
......@@ -599,6 +683,8 @@ stm32_exti_host_data *stm32_exti_host_init(const struct stm32_exti_drv_data *dd,
return NULL;
host_data->drv_data = dd;
host_data->node = node;
host_data->hwlock_state = HWSPINLOCK_UNKNOWN;
host_data->chips_data = kcalloc(dd->bank_nr,
sizeof(struct stm32_exti_chip_data),
GFP_KERNEL);
......@@ -625,8 +711,7 @@ stm32_exti_host_data *stm32_exti_host_init(const struct stm32_exti_drv_data *dd,
static struct
stm32_exti_chip_data *stm32_exti_chip_init(struct stm32_exti_host_data *h_data,
u32 bank_idx,
struct device_node *node)
u32 bank_idx)
{
const struct stm32_exti_bank *stm32_bank;
struct stm32_exti_chip_data *chip_data;
......@@ -656,8 +741,7 @@ stm32_exti_chip_data *stm32_exti_chip_init(struct stm32_exti_host_data *h_data,
if (stm32_bank->fpr_ofst != UNDEF_REG)
writel_relaxed(~0UL, base + stm32_bank->fpr_ofst);
pr_info("%s: bank%d, External IRQs available:%#x\n",
node->full_name, bank_idx, irqs_mask);
pr_info("%pOF: bank%d\n", h_data->node, bank_idx);
return chip_data;
}
......@@ -678,8 +762,8 @@ static int __init stm32_exti_init(const struct stm32_exti_drv_data *drv_data,
domain = irq_domain_add_linear(node, drv_data->bank_nr * IRQS_PER_BANK,
&irq_exti_domain_ops, NULL);
if (!domain) {
pr_err("%s: Could not register interrupt domain.\n",
node->name);
pr_err("%pOFn: Could not register interrupt domain.\n",
node);
ret = -ENOMEM;
goto out_unmap;
}
......@@ -697,7 +781,7 @@ static int __init stm32_exti_init(const struct stm32_exti_drv_data *drv_data,
struct stm32_exti_chip_data *chip_data;
stm32_bank = drv_data->exti_banks[i];
chip_data = stm32_exti_chip_init(host_data, i, node);
chip_data = stm32_exti_chip_init(host_data, i);
gc = irq_get_domain_generic_chip(domain, i * IRQS_PER_BANK);
......@@ -760,7 +844,7 @@ __init stm32_exti_hierarchy_init(const struct stm32_exti_drv_data *drv_data,
return -ENOMEM;
for (i = 0; i < drv_data->bank_nr; i++)
stm32_exti_chip_init(host_data, i, node);
stm32_exti_chip_init(host_data, i);
domain = irq_domain_add_hierarchy(parent_domain, 0,
drv_data->bank_nr * IRQS_PER_BANK,
......@@ -768,7 +852,7 @@ __init stm32_exti_hierarchy_init(const struct stm32_exti_drv_data *drv_data,
host_data);
if (!domain) {
pr_err("%s: Could not register exti domain.\n", node->name);
pr_err("%pOFn: Could not register exti domain.\n", node);
ret = -ENOMEM;
goto out_unmap;
}
......
......@@ -28,11 +28,21 @@
#define SUN4I_IRQ_NMI_CTRL_REG 0x0c
#define SUN4I_IRQ_PENDING_REG(x) (0x10 + 0x4 * x)
#define SUN4I_IRQ_FIQ_PENDING_REG(x) (0x20 + 0x4 * x)
#define SUN4I_IRQ_ENABLE_REG(x) (0x40 + 0x4 * x)
#define SUN4I_IRQ_MASK_REG(x) (0x50 + 0x4 * x)
#define SUN4I_IRQ_ENABLE_REG(data, x) ((data)->enable_reg_offset + 0x4 * x)
#define SUN4I_IRQ_MASK_REG(data, x) ((data)->mask_reg_offset + 0x4 * x)
#define SUN4I_IRQ_ENABLE_REG_OFFSET 0x40
#define SUN4I_IRQ_MASK_REG_OFFSET 0x50
#define SUNIV_IRQ_ENABLE_REG_OFFSET 0x20
#define SUNIV_IRQ_MASK_REG_OFFSET 0x30
struct sun4i_irq_chip_data {
void __iomem *irq_base;
struct irq_domain *irq_domain;
u32 enable_reg_offset;
u32 mask_reg_offset;
};
static void __iomem *sun4i_irq_base;
static struct irq_domain *sun4i_irq_domain;
static struct sun4i_irq_chip_data *irq_ic_data;
static void __exception_irq_entry sun4i_handle_irq(struct pt_regs *regs);
......@@ -43,7 +53,7 @@ static void sun4i_irq_ack(struct irq_data *irqd)
if (irq != 0)
return; /* Only IRQ 0 / the ENMI needs to be acked */
writel(BIT(0), sun4i_irq_base + SUN4I_IRQ_PENDING_REG(0));
writel(BIT(0), irq_ic_data->irq_base + SUN4I_IRQ_PENDING_REG(0));
}
static void sun4i_irq_mask(struct irq_data *irqd)
......@@ -53,9 +63,10 @@ static void sun4i_irq_mask(struct irq_data *irqd)
int reg = irq / 32;
u32 val;
val = readl(sun4i_irq_base + SUN4I_IRQ_ENABLE_REG(reg));
val = readl(irq_ic_data->irq_base +
SUN4I_IRQ_ENABLE_REG(irq_ic_data, reg));
writel(val & ~(1 << irq_off),
sun4i_irq_base + SUN4I_IRQ_ENABLE_REG(reg));
irq_ic_data->irq_base + SUN4I_IRQ_ENABLE_REG(irq_ic_data, reg));
}
static void sun4i_irq_unmask(struct irq_data *irqd)
......@@ -65,9 +76,10 @@ static void sun4i_irq_unmask(struct irq_data *irqd)
int reg = irq / 32;
u32 val;
val = readl(sun4i_irq_base + SUN4I_IRQ_ENABLE_REG(reg));
val = readl(irq_ic_data->irq_base +
SUN4I_IRQ_ENABLE_REG(irq_ic_data, reg));
writel(val | (1 << irq_off),
sun4i_irq_base + SUN4I_IRQ_ENABLE_REG(reg));
irq_ic_data->irq_base + SUN4I_IRQ_ENABLE_REG(irq_ic_data, reg));
}
static struct irq_chip sun4i_irq_chip = {
......@@ -95,42 +107,76 @@ static const struct irq_domain_ops sun4i_irq_ops = {
static int __init sun4i_of_init(struct device_node *node,
struct device_node *parent)
{
sun4i_irq_base = of_iomap(node, 0);
if (!sun4i_irq_base)
irq_ic_data->irq_base = of_iomap(node, 0);
if (!irq_ic_data->irq_base)
panic("%pOF: unable to map IC registers\n",
node);
/* Disable all interrupts */
writel(0, sun4i_irq_base + SUN4I_IRQ_ENABLE_REG(0));
writel(0, sun4i_irq_base + SUN4I_IRQ_ENABLE_REG(1));
writel(0, sun4i_irq_base + SUN4I_IRQ_ENABLE_REG(2));
writel(0, irq_ic_data->irq_base + SUN4I_IRQ_ENABLE_REG(irq_ic_data, 0));
writel(0, irq_ic_data->irq_base + SUN4I_IRQ_ENABLE_REG(irq_ic_data, 1));
writel(0, irq_ic_data->irq_base + SUN4I_IRQ_ENABLE_REG(irq_ic_data, 2));
/* Unmask all the interrupts, ENABLE_REG(x) is used for masking */
writel(0, sun4i_irq_base + SUN4I_IRQ_MASK_REG(0));
writel(0, sun4i_irq_base + SUN4I_IRQ_MASK_REG(1));
writel(0, sun4i_irq_base + SUN4I_IRQ_MASK_REG(2));
writel(0, irq_ic_data->irq_base + SUN4I_IRQ_MASK_REG(irq_ic_data, 0));
writel(0, irq_ic_data->irq_base + SUN4I_IRQ_MASK_REG(irq_ic_data, 1));
writel(0, irq_ic_data->irq_base + SUN4I_IRQ_MASK_REG(irq_ic_data, 2));
/* Clear all the pending interrupts */
writel(0xffffffff, sun4i_irq_base + SUN4I_IRQ_PENDING_REG(0));
writel(0xffffffff, sun4i_irq_base + SUN4I_IRQ_PENDING_REG(1));
writel(0xffffffff, sun4i_irq_base + SUN4I_IRQ_PENDING_REG(2));
writel(0xffffffff, irq_ic_data->irq_base + SUN4I_IRQ_PENDING_REG(0));
writel(0xffffffff, irq_ic_data->irq_base + SUN4I_IRQ_PENDING_REG(1));
writel(0xffffffff, irq_ic_data->irq_base + SUN4I_IRQ_PENDING_REG(2));
/* Enable protection mode */
writel(0x01, sun4i_irq_base + SUN4I_IRQ_PROTECTION_REG);
writel(0x01, irq_ic_data->irq_base + SUN4I_IRQ_PROTECTION_REG);
/* Configure the external interrupt source type */
writel(0x00, sun4i_irq_base + SUN4I_IRQ_NMI_CTRL_REG);
writel(0x00, irq_ic_data->irq_base + SUN4I_IRQ_NMI_CTRL_REG);
sun4i_irq_domain = irq_domain_add_linear(node, 3 * 32,
irq_ic_data->irq_domain = irq_domain_add_linear(node, 3 * 32,
&sun4i_irq_ops, NULL);
if (!sun4i_irq_domain)
if (!irq_ic_data->irq_domain)
panic("%pOF: unable to create IRQ domain\n", node);
set_handle_irq(sun4i_handle_irq);
return 0;
}
IRQCHIP_DECLARE(allwinner_sun4i_ic, "allwinner,sun4i-a10-ic", sun4i_of_init);
static int __init sun4i_ic_of_init(struct device_node *node,
struct device_node *parent)
{
irq_ic_data = kzalloc(sizeof(struct sun4i_irq_chip_data), GFP_KERNEL);
if (!irq_ic_data) {
pr_err("kzalloc failed!\n");
return -ENOMEM;
}
irq_ic_data->enable_reg_offset = SUN4I_IRQ_ENABLE_REG_OFFSET;
irq_ic_data->mask_reg_offset = SUN4I_IRQ_MASK_REG_OFFSET;
return sun4i_of_init(node, parent);
}
IRQCHIP_DECLARE(allwinner_sun4i_ic, "allwinner,sun4i-a10-ic", sun4i_ic_of_init);
static int __init suniv_ic_of_init(struct device_node *node,
struct device_node *parent)
{
irq_ic_data = kzalloc(sizeof(struct sun4i_irq_chip_data), GFP_KERNEL);
if (!irq_ic_data) {
pr_err("kzalloc failed!\n");
return -ENOMEM;
}
irq_ic_data->enable_reg_offset = SUNIV_IRQ_ENABLE_REG_OFFSET;
irq_ic_data->mask_reg_offset = SUNIV_IRQ_MASK_REG_OFFSET;
return sun4i_of_init(node, parent);
}
IRQCHIP_DECLARE(allwinner_sunvi_ic, "allwinner,suniv-f1c100s-ic",
suniv_ic_of_init);
static void __exception_irq_entry sun4i_handle_irq(struct pt_regs *regs)
{
......@@ -146,13 +192,15 @@ static void __exception_irq_entry sun4i_handle_irq(struct pt_regs *regs)
* the extra check in the common case of 1 hapening after having
* read the vector-reg once.
*/
hwirq = readl(sun4i_irq_base + SUN4I_IRQ_VECTOR_REG) >> 2;
hwirq = readl(irq_ic_data->irq_base + SUN4I_IRQ_VECTOR_REG) >> 2;
if (hwirq == 0 &&
!(readl(sun4i_irq_base + SUN4I_IRQ_PENDING_REG(0)) & BIT(0)))
!(readl(irq_ic_data->irq_base + SUN4I_IRQ_PENDING_REG(0)) &
BIT(0)))
return;
do {
handle_domain_irq(sun4i_irq_domain, hwirq, regs);
hwirq = readl(sun4i_irq_base + SUN4I_IRQ_VECTOR_REG) >> 2;
handle_domain_irq(irq_ic_data->irq_domain, hwirq, regs);
hwirq = readl(irq_ic_data->irq_base +
SUN4I_IRQ_VECTOR_REG) >> 2;
} while (hwirq != 0);
}
......@@ -184,11 +184,11 @@ static int __init tangox_irq_init(void __iomem *base, struct resource *baseres,
irq = irq_of_parse_and_map(node, 0);
if (!irq)
panic("%s: failed to get IRQ", node->name);
panic("%pOFn: failed to get IRQ", node);
err = of_address_to_resource(node, 0, &res);
if (err)
panic("%s: failed to get address", node->name);
panic("%pOFn: failed to get address", node);
chip = kzalloc(sizeof(*chip), GFP_KERNEL);
chip->ctl = res.start - baseres->start;
......@@ -196,12 +196,12 @@ static int __init tangox_irq_init(void __iomem *base, struct resource *baseres,
dom = irq_domain_add_linear(node, 64, &irq_generic_chip_ops, chip);
if (!dom)
panic("%s: failed to create irqdomain", node->name);
panic("%pOFn: failed to create irqdomain", node);
err = irq_alloc_domain_generic_chips(dom, 32, 2, node->name,
handle_level_irq, 0, 0, 0);
if (err)
panic("%s: failed to allocate irqchip", node->name);
panic("%pOFn: failed to allocate irqchip", node);
tangox_irq_domain_init(dom);
......@@ -219,7 +219,7 @@ static int __init tangox_of_irq_init(struct device_node *node,
base = of_iomap(node, 0);
if (!base)
panic("%s: of_iomap failed", node->name);
panic("%pOFn: of_iomap failed", node);
of_address_to_resource(node, 0, &res);
......
......@@ -534,14 +534,13 @@ static int populate_msi_sysfs(struct pci_dev *pdev)
static struct msi_desc *
msi_setup_entry(struct pci_dev *dev, int nvec, const struct irq_affinity *affd)
{
struct cpumask *masks = NULL;
struct irq_affinity_desc *masks = NULL;
struct msi_desc *entry;
u16 control;
if (affd)
masks = irq_create_affinity_masks(nvec, affd);
/* MSI Entry Initialization */
entry = alloc_msi_entry(&dev->dev, nvec, masks);
if (!entry)
......@@ -672,7 +671,7 @@ static int msix_setup_entries(struct pci_dev *dev, void __iomem *base,
struct msix_entry *entries, int nvec,
const struct irq_affinity *affd)
{
struct cpumask *curmsk, *masks = NULL;
struct irq_affinity_desc *curmsk, *masks = NULL;
struct msi_desc *entry;
int ret, i;
......@@ -1036,6 +1035,13 @@ static int __pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec,
if (maxvec < minvec)
return -ERANGE;
/*
* If the caller is passing in sets, we can't support a range of
* vectors. The caller needs to handle that.
*/
if (affd && affd->nr_sets && minvec != maxvec)
return -EINVAL;
if (WARN_ON_ONCE(dev->msi_enabled))
return -EINVAL;
......@@ -1087,6 +1093,13 @@ static int __pci_enable_msix_range(struct pci_dev *dev,
if (maxvec < minvec)
return -ERANGE;
/*
* If the caller is passing in sets, we can't support a range of
* supported vectors. The caller needs to handle that.
*/
if (affd && affd->nr_sets && minvec != maxvec)
return -EINVAL;
if (WARN_ON_ONCE(dev->msix_enabled))
return -EINVAL;
......@@ -1250,7 +1263,7 @@ const struct cpumask *pci_irq_get_affinity(struct pci_dev *dev, int nr)
for_each_pci_msi_entry(entry, dev) {
if (i == nr)
return entry->affinity;
return &entry->affinity->mask;
i++;
}
WARN_ON_ONCE(1);
......@@ -1262,7 +1275,7 @@ const struct cpumask *pci_irq_get_affinity(struct pci_dev *dev, int nr)
nr >= entry->nvec_used))
return NULL;
return &entry->affinity[nr];
return &entry->affinity[nr].mask;
} else {
return cpu_possible_mask;
}
......
......@@ -247,10 +247,23 @@ struct irq_affinity_notify {
* the MSI(-X) vector space
* @post_vectors: Don't apply affinity to @post_vectors at end of
* the MSI(-X) vector space
* @nr_sets: Length of passed in *sets array
* @sets: Number of affinitized sets
*/
struct irq_affinity {
int pre_vectors;
int post_vectors;
int nr_sets;
int *sets;
};
/**
* struct irq_affinity_desc - Interrupt affinity descriptor
* @mask: cpumask to hold the affinity assignment
*/
struct irq_affinity_desc {
struct cpumask mask;
unsigned int is_managed : 1;
};
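/*
 * A minimal usage sketch (hypothetical driver, not part of this
 * merge): a device wanting 8 read-queue and 4 write-queue vectors,
 * plus 2 non-managed vectors for error handling, would describe that
 * as:
 *
 *	int sets[2] = { 8, 4 };
 *	struct irq_affinity affd = {
 *		.pre_vectors	= 2,
 *		.nr_sets	= ARRAY_SIZE(sets),
 *		.sets		= sets,
 *	};
 *
 * Note that when sets are used the caller must request an exact vector
 * count (minvec == maxvec), since the core cannot shrink a
 * caller-defined set layout (see the checks added in
 * __pci_enable_msi_range()/__pci_enable_msix_range()).
 */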
#if defined(CONFIG_SMP)
......@@ -299,7 +312,9 @@ extern int irq_set_affinity_hint(unsigned int irq, const struct cpumask *m);
extern int
irq_set_affinity_notifier(unsigned int irq, struct irq_affinity_notify *notify);
struct cpumask *irq_create_affinity_masks(int nvec, const struct irq_affinity *affd);
struct irq_affinity_desc *
irq_create_affinity_masks(int nvec, const struct irq_affinity *affd);
int irq_calc_affinity_vectors(int minvec, int maxvec, const struct irq_affinity *affd);
#else /* CONFIG_SMP */
......@@ -333,7 +348,7 @@ irq_set_affinity_notifier(unsigned int irq, struct irq_affinity_notify *notify)
return 0;
}
static inline struct cpumask *
static inline struct irq_affinity_desc *
irq_create_affinity_masks(int nvec, const struct irq_affinity *affd)
{
return NULL;
......
......@@ -27,6 +27,7 @@
struct seq_file;
struct module;
struct msi_msg;
struct irq_affinity_desc;
enum irqchip_irq_state;
/*
......@@ -834,11 +835,12 @@ struct cpumask *irq_data_get_effective_affinity_mask(struct irq_data *d)
unsigned int arch_dynirq_lower_bound(unsigned int from);
int __irq_alloc_descs(int irq, unsigned int from, unsigned int cnt, int node,
struct module *owner, const struct cpumask *affinity);
struct module *owner,
const struct irq_affinity_desc *affinity);
int __devm_irq_alloc_descs(struct device *dev, int irq, unsigned int from,
unsigned int cnt, int node, struct module *owner,
const struct cpumask *affinity);
const struct irq_affinity_desc *affinity);
/* use macros to avoid needing export.h for THIS_MODULE */
#define irq_alloc_descs(irq, from, cnt, node) \
......
......@@ -16,7 +16,7 @@
struct irq_sim_work_ctx {
struct irq_work work;
int irq;
unsigned long *pending;
};
struct irq_sim_irq_ctx {
......
......@@ -19,7 +19,7 @@
* the association between their DT compatible string and their
* initialization function.
*
* @name: name that must be unique accross all IRQCHIP_DECLARE of the
* @name: name that must be unique across all IRQCHIP_DECLARE of the
* same file.
* @compstr: compatible string of the irqchip driver
* @fn: initialization function
......@@ -30,7 +30,7 @@
* This macro must be used by the different irqchip drivers to declare
* the association between their version and their initialization function.
*
* @name: name that must be unique accross all IRQCHIP_ACPI_DECLARE of the
* @name: name that must be unique across all IRQCHIP_ACPI_DECLARE of the
* same file.
* @subtable: Subtable to be identified in MADT
* @validate: Function to be called on that subtable to check its validity.
......
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Interrupt support for Cirrus Logic Madera codecs
*
* Copyright (C) 2016-2018 Cirrus Logic, Inc. and
* Cirrus Logic International Semiconductor Ltd.
*/
#ifndef IRQCHIP_MADERA_H
#define IRQCHIP_MADERA_H
#include <linux/interrupt.h>
#include <linux/mfd/madera/core.h>
#define MADERA_IRQ_FLL1_LOCK 0
#define MADERA_IRQ_FLL2_LOCK 1
#define MADERA_IRQ_FLL3_LOCK 2
#define MADERA_IRQ_FLLAO_LOCK 3
#define MADERA_IRQ_CLK_SYS_ERR 4
#define MADERA_IRQ_CLK_ASYNC_ERR 5
#define MADERA_IRQ_CLK_DSP_ERR 6
#define MADERA_IRQ_HPDET 7
#define MADERA_IRQ_MICDET1 8
#define MADERA_IRQ_MICDET2 9
#define MADERA_IRQ_JD1_RISE 10
#define MADERA_IRQ_JD1_FALL 11
#define MADERA_IRQ_JD2_RISE 12
#define MADERA_IRQ_JD2_FALL 13
#define MADERA_IRQ_MICD_CLAMP_RISE 14
#define MADERA_IRQ_MICD_CLAMP_FALL 15
#define MADERA_IRQ_DRC2_SIG_DET 16
#define MADERA_IRQ_DRC1_SIG_DET 17
#define MADERA_IRQ_ASRC1_IN1_LOCK 18
#define MADERA_IRQ_ASRC1_IN2_LOCK 19
#define MADERA_IRQ_ASRC2_IN1_LOCK 20
#define MADERA_IRQ_ASRC2_IN2_LOCK 21
#define MADERA_IRQ_DSP_IRQ1 22
#define MADERA_IRQ_DSP_IRQ2 23
#define MADERA_IRQ_DSP_IRQ3 24
#define MADERA_IRQ_DSP_IRQ4 25
#define MADERA_IRQ_DSP_IRQ5 26
#define MADERA_IRQ_DSP_IRQ6 27
#define MADERA_IRQ_DSP_IRQ7 28
#define MADERA_IRQ_DSP_IRQ8 29
#define MADERA_IRQ_DSP_IRQ9 30
#define MADERA_IRQ_DSP_IRQ10 31
#define MADERA_IRQ_DSP_IRQ11 32
#define MADERA_IRQ_DSP_IRQ12 33
#define MADERA_IRQ_DSP_IRQ13 34
#define MADERA_IRQ_DSP_IRQ14 35
#define MADERA_IRQ_DSP_IRQ15 36
#define MADERA_IRQ_DSP_IRQ16 37
#define MADERA_IRQ_HP1L_SC 38
#define MADERA_IRQ_HP1R_SC 39
#define MADERA_IRQ_HP2L_SC 40
#define MADERA_IRQ_HP2R_SC 41
#define MADERA_IRQ_HP3L_SC 42
#define MADERA_IRQ_HP3R_SC 43
#define MADERA_IRQ_SPKOUTL_SC 44
#define MADERA_IRQ_SPKOUTR_SC 45
#define MADERA_IRQ_HP1L_ENABLE_DONE 46
#define MADERA_IRQ_HP1R_ENABLE_DONE 47
#define MADERA_IRQ_HP2L_ENABLE_DONE 48
#define MADERA_IRQ_HP2R_ENABLE_DONE 49
#define MADERA_IRQ_HP3L_ENABLE_DONE 50
#define MADERA_IRQ_HP3R_ENABLE_DONE 51
#define MADERA_IRQ_SPKOUTL_ENABLE_DONE 52
#define MADERA_IRQ_SPKOUTR_ENABLE_DONE 53
#define MADERA_IRQ_SPK_SHUTDOWN 54
#define MADERA_IRQ_SPK_OVERHEAT 55
#define MADERA_IRQ_SPK_OVERHEAT_WARN 56
#define MADERA_IRQ_GPIO1 57
#define MADERA_IRQ_GPIO2 58
#define MADERA_IRQ_GPIO3 59
#define MADERA_IRQ_GPIO4 60
#define MADERA_IRQ_GPIO5 61
#define MADERA_IRQ_GPIO6 62
#define MADERA_IRQ_GPIO7 63
#define MADERA_IRQ_GPIO8 64
#define MADERA_IRQ_DSP1_BUS_ERR 65
#define MADERA_IRQ_DSP2_BUS_ERR 66
#define MADERA_IRQ_DSP3_BUS_ERR 67
#define MADERA_IRQ_DSP4_BUS_ERR 68
#define MADERA_IRQ_DSP5_BUS_ERR 69
#define MADERA_IRQ_DSP6_BUS_ERR 70
#define MADERA_IRQ_DSP7_BUS_ERR 71
#define MADERA_NUM_IRQ 72
/*
* These wrapper functions are for use by other child drivers of the
* same parent MFD.
*/
static inline int madera_get_irq_mapping(struct madera *madera, int irq)
{
if (!madera->irq_dev)
return -ENODEV;
return regmap_irq_get_virq(madera->irq_data, irq);
}
static inline int madera_request_irq(struct madera *madera, int irq,
const char *name,
irq_handler_t handler, void *data)
{
irq = madera_get_irq_mapping(madera, irq);
if (irq < 0)
return irq;
return request_threaded_irq(irq, NULL, handler, IRQF_ONESHOT, name,
data);
}
static inline void madera_free_irq(struct madera *madera, int irq, void *data)
{
irq = madera_get_irq_mapping(madera, irq);
if (irq < 0)
return;
free_irq(irq, data);
}
static inline int madera_set_irq_wake(struct madera *madera, int irq, int on)
{
irq = madera_get_irq_mapping(madera, irq);
if (irq < 0)
return irq;
return irq_set_irq_wake(irq, on);
}
#endif
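A hedged sketch of how a sibling child driver of the same MFD might use these wrappers (the handler and names are illustrative, not from this merge):

/* Illustrative only */
static irqreturn_t example_hpdet_handler(int irq, void *data)
{
	/* data is the struct madera passed to madera_request_irq() */
	return IRQ_HANDLED;
}

static int example_enable_hpdet(struct madera *madera)
{
	int ret;

	ret = madera_request_irq(madera, MADERA_IRQ_HPDET, "example-hpdet",
				 example_hpdet_handler, madera);
	if (ret)
		return ret;

	return madera_set_irq_wake(madera, MADERA_IRQ_HPDET, 1);
}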
......@@ -43,6 +43,7 @@ struct irq_chip;
struct irq_data;
struct cpumask;
struct seq_file;
struct irq_affinity_desc;
/* Number of irqs reserved for a legacy isa controller */
#define NUM_ISA_INTERRUPTS 16
......@@ -266,7 +267,7 @@ extern bool irq_domain_check_msi_remap(void);
extern void irq_set_default_host(struct irq_domain *host);
extern int irq_domain_alloc_descs(int virq, unsigned int nr_irqs,
irq_hw_number_t hwirq, int node,
const struct cpumask *affinity);
const struct irq_affinity_desc *affinity);
static inline struct fwnode_handle *of_node_to_fwnode(struct device_node *node)
{
......@@ -449,7 +450,8 @@ static inline struct irq_domain *irq_domain_add_hierarchy(struct irq_domain *par
extern int __irq_domain_alloc_irqs(struct irq_domain *domain, int irq_base,
unsigned int nr_irqs, int node, void *arg,
bool realloc, const struct cpumask *affinity);
bool realloc,
const struct irq_affinity_desc *affinity);
extern void irq_domain_free_irqs(unsigned int virq, unsigned int nr_irqs);
extern int irq_domain_activate_irq(struct irq_data *irq_data, bool early);
extern void irq_domain_deactivate_irq(struct irq_data *irq_data);
......
......@@ -76,7 +76,7 @@ struct msi_desc {
unsigned int nvec_used;
struct device *dev;
struct msi_msg msg;
struct cpumask *affinity;
struct irq_affinity_desc *affinity;
union {
/* PCI MSI/X specific data */
......@@ -116,6 +116,8 @@ struct msi_desc {
list_first_entry(dev_to_msi_list((dev)), struct msi_desc, list)
#define for_each_msi_entry(desc, dev) \
list_for_each_entry((desc), dev_to_msi_list((dev)), list)
#define for_each_msi_entry_safe(desc, tmp, dev) \
list_for_each_entry_safe((desc), (tmp), dev_to_msi_list((dev)), list)
#ifdef CONFIG_PCI_MSI
#define first_pci_msi_entry(pdev) first_msi_entry(&(pdev)->dev)
......@@ -136,7 +138,7 @@ static inline void pci_write_msi_msg(unsigned int irq, struct msi_msg *msg)
#endif /* CONFIG_PCI_MSI */
struct msi_desc *alloc_msi_entry(struct device *dev, int nvec,
const struct cpumask *affinity);
const struct irq_affinity_desc *affinity);
void free_msi_entry(struct msi_desc *entry);
void __pci_read_msi_msg(struct msi_desc *entry, struct msi_msg *msg);
void __pci_write_msi_msg(struct msi_desc *entry, struct msi_msg *msg);
......
......@@ -94,15 +94,15 @@ static int get_nodes_in_cpumask(cpumask_var_t *node_to_cpumask,
return nodes;
}
static int irq_build_affinity_masks(const struct irq_affinity *affd,
int startvec, int numvecs,
cpumask_var_t *node_to_cpumask,
const struct cpumask *cpu_mask,
struct cpumask *nmsk,
struct cpumask *masks)
static int __irq_build_affinity_masks(const struct irq_affinity *affd,
int startvec, int numvecs, int firstvec,
cpumask_var_t *node_to_cpumask,
const struct cpumask *cpu_mask,
struct cpumask *nmsk,
struct irq_affinity_desc *masks)
{
int n, nodes, cpus_per_vec, extra_vecs, done = 0;
int last_affv = affd->pre_vectors + numvecs;
int last_affv = firstvec + numvecs;
int curvec = startvec;
nodemask_t nodemsk = NODE_MASK_NONE;
......@@ -117,12 +117,13 @@ static int irq_build_affinity_masks(const struct irq_affinity *affd,
*/
if (numvecs <= nodes) {
for_each_node_mask(n, nodemsk) {
cpumask_copy(masks + curvec, node_to_cpumask[n]);
if (++done == numvecs)
break;
cpumask_or(&masks[curvec].mask,
&masks[curvec].mask,
node_to_cpumask[n]);
if (++curvec == last_affv)
curvec = affd->pre_vectors;
curvec = firstvec;
}
done = numvecs;
goto out;
}
......@@ -130,7 +131,7 @@ static int irq_build_affinity_masks(const struct irq_affinity *affd,
int ncpus, v, vecs_to_assign, vecs_per_node;
/* Spread the vectors per node */
vecs_per_node = (numvecs - (curvec - affd->pre_vectors)) / nodes;
vecs_per_node = (numvecs - (curvec - firstvec)) / nodes;
/* Get the cpus on this node which are in the mask */
cpumask_and(nmsk, cpu_mask, node_to_cpumask[n]);
......@@ -151,14 +152,15 @@ static int irq_build_affinity_masks(const struct irq_affinity *affd,
cpus_per_vec++;
--extra_vecs;
}
irq_spread_init_one(masks + curvec, nmsk, cpus_per_vec);
irq_spread_init_one(&masks[curvec].mask, nmsk,
cpus_per_vec);
}
done += v;
if (done >= numvecs)
break;
if (curvec >= last_affv)
curvec = affd->pre_vectors;
curvec = firstvec;
--nodes;
}
......@@ -166,20 +168,77 @@ static int irq_build_affinity_masks(const struct irq_affinity *affd,
return done;
}
/*
 * Build the affinity masks in two stages:
 *  1) spread present CPUs on these vectors
 *  2) spread other possible CPUs on these vectors
*/
static int irq_build_affinity_masks(const struct irq_affinity *affd,
int startvec, int numvecs, int firstvec,
cpumask_var_t *node_to_cpumask,
struct irq_affinity_desc *masks)
{
int curvec = startvec, nr_present, nr_others;
int ret = -ENOMEM;
cpumask_var_t nmsk, npresmsk;
if (!zalloc_cpumask_var(&nmsk, GFP_KERNEL))
return ret;
if (!zalloc_cpumask_var(&npresmsk, GFP_KERNEL))
goto fail;
ret = 0;
/* Stabilize the cpumasks */
get_online_cpus();
build_node_to_cpumask(node_to_cpumask);
/* Spread on present CPUs starting from @startvec */
nr_present = __irq_build_affinity_masks(affd, curvec, numvecs,
firstvec, node_to_cpumask,
cpu_present_mask, nmsk, masks);
/*
* Spread on non present CPUs starting from the next vector to be
* handled. If the spreading of present CPUs already exhausted the
* vector space, assign the non present CPUs to the already spread
* out vectors.
*/
if (nr_present >= numvecs)
curvec = firstvec;
else
curvec = firstvec + nr_present;
cpumask_andnot(npresmsk, cpu_possible_mask, cpu_present_mask);
nr_others = __irq_build_affinity_masks(affd, curvec, numvecs,
firstvec, node_to_cpumask,
npresmsk, nmsk, masks);
put_online_cpus();
if (nr_present < numvecs)
WARN_ON(nr_present + nr_others < numvecs);
free_cpumask_var(npresmsk);
fail:
free_cpumask_var(nmsk);
return ret;
}
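As a concrete illustration, assume 8 possible CPUs of which 4 are present, with numvecs = 4 and firstvec = 1: the first __irq_build_affinity_masks() pass spreads the 4 present CPUs over vectors 1-4 (nr_present = 4), so curvec wraps back to firstvec and the second pass distributes the 4 not-yet-present CPUs across the same vectors. Once those CPUs are hotplugged in, they already have vectors assigned.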
/**
* irq_create_affinity_masks - Create affinity masks for multiqueue spreading
* @nvecs: The total number of vectors
* @affd: Description of the affinity requirements
*
* Returns the masks pointer or NULL if allocation failed.
* Returns the irq_affinity_desc pointer or NULL if allocation failed.
*/
struct cpumask *
struct irq_affinity_desc *
irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
{
int affvecs = nvecs - affd->pre_vectors - affd->post_vectors;
int curvec, usedvecs;
cpumask_var_t nmsk, npresmsk, *node_to_cpumask;
struct cpumask *masks = NULL;
cpumask_var_t *node_to_cpumask;
struct irq_affinity_desc *masks = NULL;
int i, nr_sets;
/*
* If there aren't any vectors left after applying the pre/post
......@@ -188,15 +247,9 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
if (nvecs == affd->pre_vectors + affd->post_vectors)
return NULL;
if (!zalloc_cpumask_var(&nmsk, GFP_KERNEL))
return NULL;
if (!zalloc_cpumask_var(&npresmsk, GFP_KERNEL))
goto outcpumsk;
node_to_cpumask = alloc_node_to_cpumask();
if (!node_to_cpumask)
goto outnpresmsk;
return NULL;
masks = kcalloc(nvecs, sizeof(*masks), GFP_KERNEL);
if (!masks)
......@@ -204,32 +257,29 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
/* Fill out vectors at the beginning that don't need affinity */
for (curvec = 0; curvec < affd->pre_vectors; curvec++)
cpumask_copy(masks + curvec, irq_default_affinity);
/* Stabilize the cpumasks */
get_online_cpus();
build_node_to_cpumask(node_to_cpumask);
/* Spread on present CPUs starting from affd->pre_vectors */
usedvecs = irq_build_affinity_masks(affd, curvec, affvecs,
node_to_cpumask, cpu_present_mask,
nmsk, masks);
cpumask_copy(&masks[curvec].mask, irq_default_affinity);
/*
* Spread on non present CPUs starting from the next vector to be
* handled. If the spreading of present CPUs already exhausted the
* vector space, assign the non present CPUs to the already spread
* out vectors.
* Spread on present CPUs starting from affd->pre_vectors. If we
 * have multiple sets, build each set's affinity mask separately.
*/
if (usedvecs >= affvecs)
curvec = affd->pre_vectors;
else
curvec = affd->pre_vectors + usedvecs;
cpumask_andnot(npresmsk, cpu_possible_mask, cpu_present_mask);
usedvecs += irq_build_affinity_masks(affd, curvec, affvecs,
node_to_cpumask, npresmsk,
nmsk, masks);
put_online_cpus();
nr_sets = affd->nr_sets;
if (!nr_sets)
nr_sets = 1;
for (i = 0, usedvecs = 0; i < nr_sets; i++) {
int this_vecs = affd->sets ? affd->sets[i] : affvecs;
int ret;
ret = irq_build_affinity_masks(affd, curvec, this_vecs,
curvec, node_to_cpumask, masks);
if (ret) {
kfree(masks);
masks = NULL;
goto outnodemsk;
}
curvec += this_vecs;
usedvecs += this_vecs;
}
/* Fill out vectors at the end that don't need affinity */
if (usedvecs >= affvecs)
......@@ -237,14 +287,14 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
else
curvec = affd->pre_vectors + usedvecs;
for (; curvec < nvecs; curvec++)
cpumask_copy(masks + curvec, irq_default_affinity);
cpumask_copy(&masks[curvec].mask, irq_default_affinity);
/* Mark the managed interrupts */
for (i = affd->pre_vectors; i < nvecs - affd->post_vectors; i++)
masks[i].is_managed = 1;
outnodemsk:
free_node_to_cpumask(node_to_cpumask);
outnpresmsk:
free_cpumask_var(npresmsk);
outcpumsk:
free_cpumask_var(nmsk);
return masks;
}
......@@ -258,13 +308,21 @@ int irq_calc_affinity_vectors(int minvec, int maxvec, const struct irq_affinity
{
int resv = affd->pre_vectors + affd->post_vectors;
int vecs = maxvec - resv;
int ret;
int set_vecs;
if (resv > minvec)
return 0;
get_online_cpus();
ret = min_t(int, cpumask_weight(cpu_possible_mask), vecs) + resv;
put_online_cpus();
return ret;
if (affd->nr_sets) {
int i;
for (i = 0, set_vecs = 0; i < affd->nr_sets; i++)
set_vecs += affd->sets[i];
} else {
get_online_cpus();
set_vecs = cpumask_weight(cpu_possible_mask);
put_online_cpus();
}
return resv + min(set_vecs, vecs);
}
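A worked example of the new set-aware calculation: with pre_vectors = 1, post_vectors = 0, minvec = 2, maxvec = 32 and sets = {4, 2}, we get resv = 1, vecs = 31 and set_vecs = 4 + 2 = 6, so the function returns 1 + min(6, 31) = 7, independent of the number of possible CPUs.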
......@@ -929,7 +929,7 @@ __irq_do_set_handler(struct irq_desc *desc, irq_flow_handler_t handle,
break;
/*
* Bail out if the outer chip is not set up
* and the interrrupt supposed to be started
 * and the interrupt is supposed to be started
* right away.
*/
if (WARN_ON(is_chained))
......
......@@ -169,7 +169,7 @@ static void devm_irq_desc_release(struct device *dev, void *res)
* @cnt: Number of consecutive irqs to allocate
* @node: Preferred node on which the irq descriptor should be allocated
* @owner: Owning module (can be NULL)
* @affinity: Optional pointer to an affinity mask array of size @cnt
* @affinity: Optional pointer to an irq_affinity_desc array of size @cnt
* which hints where the irq descriptors should be allocated
* and which default affinities to use
*
......@@ -179,7 +179,7 @@ static void devm_irq_desc_release(struct device *dev, void *res)
*/
int __devm_irq_alloc_descs(struct device *dev, int irq, unsigned int from,
unsigned int cnt, int node, struct module *owner,
const struct cpumask *affinity)
const struct irq_affinity_desc *affinity)
{
struct irq_desc_devres *dr;
int base;
......
......@@ -56,7 +56,7 @@ int irq_reserve_ipi(struct irq_domain *domain,
unsigned int next;
/*
* The IPI requires a seperate HW irq on each CPU. We require
* The IPI requires a separate HW irq on each CPU. We require
* that the destination mask is consecutive. If an
* implementation needs to support holes, it can reserve
* several IPI ranges.
......@@ -172,7 +172,7 @@ irq_hw_number_t ipi_get_hwirq(unsigned int irq, unsigned int cpu)
/*
* Get the real hardware irq number if the underlying implementation
* uses a seperate irq per cpu. If the underlying implementation uses
* uses a separate irq per cpu. If the underlying implementation uses
* a single hardware irq for all cpus then the IPI send mechanism
* needs to take care of the cpu destinations.
*/
......
......@@ -34,9 +34,20 @@ static struct irq_chip irq_sim_irqchip = {
static void irq_sim_handle_irq(struct irq_work *work)
{
struct irq_sim_work_ctx *work_ctx;
unsigned int offset = 0;
struct irq_sim *sim;
int irqnum;
work_ctx = container_of(work, struct irq_sim_work_ctx, work);
handle_simple_irq(irq_to_desc(work_ctx->irq));
sim = container_of(work_ctx, struct irq_sim, work_ctx);
while (!bitmap_empty(work_ctx->pending, sim->irq_count)) {
offset = find_next_bit(work_ctx->pending,
sim->irq_count, offset);
clear_bit(offset, work_ctx->pending);
irqnum = irq_sim_irqnum(sim, offset);
handle_simple_irq(irq_to_desc(irqnum));
}
}
/**
......@@ -63,6 +74,13 @@ int irq_sim_init(struct irq_sim *sim, unsigned int num_irqs)
return sim->irq_base;
}
sim->work_ctx.pending = bitmap_zalloc(num_irqs, GFP_KERNEL);
if (!sim->work_ctx.pending) {
kfree(sim->irqs);
irq_free_descs(sim->irq_base, num_irqs);
return -ENOMEM;
}
for (i = 0; i < num_irqs; i++) {
sim->irqs[i].irqnum = sim->irq_base + i;
sim->irqs[i].enabled = false;
......@@ -89,6 +107,7 @@ EXPORT_SYMBOL_GPL(irq_sim_init);
void irq_sim_fini(struct irq_sim *sim)
{
irq_work_sync(&sim->work_ctx.work);
bitmap_free(sim->work_ctx.pending);
irq_free_descs(sim->irq_base, sim->irq_count);
kfree(sim->irqs);
}
......@@ -143,7 +162,7 @@ EXPORT_SYMBOL_GPL(devm_irq_sim_init);
void irq_sim_fire(struct irq_sim *sim, unsigned int offset)
{
if (sim->irqs[offset].enabled) {
sim->work_ctx.irq = irq_sim_irqnum(sim, offset);
set_bit(offset, sim->work_ctx.pending);
irq_work_queue(&sim->work_ctx.work);
}
}
......
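A hedged sketch of the resulting behaviour from a test driver's point of view (names invented): with the pending bitmap, two back-to-back fires on different lines are both delivered instead of the second overwriting the first in the single work_ctx->irq slot.

/* Illustrative only */
static struct irq_sim example_sim;

static int example_init(void)
{
	int base = irq_sim_init(&example_sim, 4);	/* 4 simulated lines */

	if (base < 0)
		return base;

	irq_sim_fire(&example_sim, 0);	/* latched in the pending bitmap */
	irq_sim_fire(&example_sim, 2);	/* latched as well, not lost */
	return 0;
}

static void example_exit(void)
{
	irq_sim_fini(&example_sim);
}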
......@@ -449,30 +449,34 @@ static void free_desc(unsigned int irq)
}
static int alloc_descs(unsigned int start, unsigned int cnt, int node,
const struct cpumask *affinity, struct module *owner)
const struct irq_affinity_desc *affinity,
struct module *owner)
{
const struct cpumask *mask = NULL;
struct irq_desc *desc;
unsigned int flags;
int i;
/* Validate affinity mask(s) */
if (affinity) {
for (i = 0, mask = affinity; i < cnt; i++, mask++) {
if (cpumask_empty(mask))
for (i = 0; i < cnt; i++) {
if (cpumask_empty(&affinity[i].mask))
return -EINVAL;
}
}
flags = affinity ? IRQD_AFFINITY_MANAGED | IRQD_MANAGED_SHUTDOWN : 0;
mask = NULL;
for (i = 0; i < cnt; i++) {
const struct cpumask *mask = NULL;
unsigned int flags = 0;
if (affinity) {
node = cpu_to_node(cpumask_first(affinity));
mask = affinity;
if (affinity->is_managed) {
flags = IRQD_AFFINITY_MANAGED |
IRQD_MANAGED_SHUTDOWN;
}
mask = &affinity->mask;
node = cpu_to_node(cpumask_first(mask));
affinity++;
}
desc = alloc_desc(start + i, node, flags, mask, owner);
if (!desc)
goto err;
......@@ -575,7 +579,7 @@ static void free_desc(unsigned int irq)
}
static inline int alloc_descs(unsigned int start, unsigned int cnt, int node,
const struct cpumask *affinity,
const struct irq_affinity_desc *affinity,
struct module *owner)
{
u32 i;
......@@ -705,7 +709,7 @@ EXPORT_SYMBOL_GPL(irq_free_descs);
*/
int __ref
__irq_alloc_descs(int irq, unsigned int from, unsigned int cnt, int node,
struct module *owner, const struct cpumask *affinity)
struct module *owner, const struct irq_affinity_desc *affinity)
{
int start, ret;
......
......@@ -969,7 +969,7 @@ const struct irq_domain_ops irq_domain_simple_ops = {
EXPORT_SYMBOL_GPL(irq_domain_simple_ops);
int irq_domain_alloc_descs(int virq, unsigned int cnt, irq_hw_number_t hwirq,
int node, const struct cpumask *affinity)
int node, const struct irq_affinity_desc *affinity)
{
unsigned int hint;
......@@ -1281,7 +1281,7 @@ int irq_domain_alloc_irqs_hierarchy(struct irq_domain *domain,
*/
int __irq_domain_alloc_irqs(struct irq_domain *domain, int irq_base,
unsigned int nr_irqs, int node, void *arg,
bool realloc, const struct cpumask *affinity)
bool realloc, const struct irq_affinity_desc *affinity)
{
int i, ret, virq;
......
......@@ -915,7 +915,7 @@ irq_thread_check_affinity(struct irq_desc *desc, struct irqaction *action) { }
#endif
/*
* Interrupts which are not explicitely requested as threaded
* Interrupts which are not explicitly requested as threaded
* interrupts rely on the implicit bh/preempt disable of the hard irq
* context. So we need to disable bh here to avoid deadlocks and other
* side effects.
......
......@@ -14,6 +14,7 @@ struct cpumap {
unsigned int available;
unsigned int allocated;
unsigned int managed;
unsigned int managed_allocated;
bool initialized;
bool online;
unsigned long alloc_map[IRQ_MATRIX_SIZE];
......@@ -145,6 +146,27 @@ static unsigned int matrix_find_best_cpu(struct irq_matrix *m,
return best_cpu;
}
/* Find the best CPU which has the lowest number of managed IRQs allocated */
static unsigned int matrix_find_best_cpu_managed(struct irq_matrix *m,
const struct cpumask *msk)
{
unsigned int cpu, best_cpu, allocated = UINT_MAX;
struct cpumap *cm;
best_cpu = UINT_MAX;
for_each_cpu(cpu, msk) {
cm = per_cpu_ptr(m->maps, cpu);
if (!cm->online || cm->managed_allocated > allocated)
continue;
best_cpu = cpu;
allocated = cm->managed_allocated;
}
return best_cpu;
}
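For example, if CPU0 already has 3 managed vectors allocated and CPU1 has only 1, irq_matrix_alloc_managed() now targets CPU1 even when CPU0 has more total vectors available, so managed interrupts no longer pile up on the first CPUs in the mask.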
/**
* irq_matrix_assign_system - Assign system wide entry in the matrix
* @m: Matrix pointer
......@@ -269,7 +291,7 @@ int irq_matrix_alloc_managed(struct irq_matrix *m, const struct cpumask *msk,
if (cpumask_empty(msk))
return -EINVAL;
cpu = matrix_find_best_cpu(m, msk);
cpu = matrix_find_best_cpu_managed(m, msk);
if (cpu == UINT_MAX)
return -ENOSPC;
......@@ -282,6 +304,7 @@ int irq_matrix_alloc_managed(struct irq_matrix *m, const struct cpumask *msk,
return -ENOSPC;
set_bit(bit, cm->alloc_map);
cm->allocated++;
cm->managed_allocated++;
m->total_allocated++;
*mapped_cpu = cpu;
trace_irq_matrix_alloc_managed(bit, cpu, m, cm);
......@@ -395,6 +418,8 @@ void irq_matrix_free(struct irq_matrix *m, unsigned int cpu,
clear_bit(bit, cm->alloc_map);
cm->allocated--;
if (managed)
cm->managed_allocated--;
if (cm->online)
m->total_allocated--;
......@@ -464,13 +489,14 @@ void irq_matrix_debug_show(struct seq_file *sf, struct irq_matrix *m, int ind)
seq_printf(sf, "Total allocated: %6u\n", m->total_allocated);
seq_printf(sf, "System: %u: %*pbl\n", nsys, m->matrix_bits,
m->system_map);
seq_printf(sf, "%*s| CPU | avl | man | act | vectors\n", ind, " ");
seq_printf(sf, "%*s| CPU | avl | man | mac | act | vectors\n", ind, " ");
cpus_read_lock();
for_each_online_cpu(cpu) {
struct cpumap *cm = per_cpu_ptr(m->maps, cpu);
seq_printf(sf, "%*s %4d %4u %4u %4u %*pbl\n", ind, " ",
cpu, cm->available, cm->managed, cm->allocated,
seq_printf(sf, "%*s %4d %4u %4u %4u %4u %*pbl\n", ind, " ",
cpu, cm->available, cm->managed,
cm->managed_allocated, cm->allocated,
m->matrix_bits, cm->alloc_map);
}
cpus_read_unlock();
......
......@@ -23,11 +23,11 @@
* @nvec: The number of vectors used in this entry
 * @affinity: Optional pointer to an affinity mask array of size @nvec
*
* If @affinity is not NULL then a an affinity array[@nvec] is allocated
* and the affinity masks from @affinity are copied.
* If @affinity is not NULL then an affinity array[@nvec] is allocated
* and the affinity masks and flags from @affinity are copied.
*/
struct msi_desc *
alloc_msi_entry(struct device *dev, int nvec, const struct cpumask *affinity)
struct msi_desc *alloc_msi_entry(struct device *dev, int nvec,
const struct irq_affinity_desc *affinity)
{
struct msi_desc *desc;
......
......@@ -66,7 +66,7 @@ static int try_one_irq(struct irq_desc *desc, bool force)
raw_spin_lock(&desc->lock);
/*
* PER_CPU, nested thread interrupts and interrupts explicitely
* PER_CPU, nested thread interrupts and interrupts explicitly
* marked polled are excluded from polling.
*/
if (irq_settings_is_per_cpu(desc) ||
......@@ -76,7 +76,7 @@ static int try_one_irq(struct irq_desc *desc, bool force)
/*
* Do not poll disabled interrupts unless the spurious
* disabled poller asks explicitely.
* disabled poller asks explicitly.
*/
if (irqd_irq_disabled(&desc->irq_data) && !force)
goto out;
......@@ -292,7 +292,7 @@ void note_interrupt(struct irq_desc *desc, irqreturn_t action_ret)
* So in case a thread is woken, we just note the fact and
* defer the analysis to the next hardware interrupt.
*
* The threaded handlers store whether they sucessfully
* The threaded handlers store whether they successfully
* handled an interrupt and we check whether that number
* changed versus the last invocation.
*
......