Commit 45e981b8 authored by Bjorn Helgaas

Merge branch 'pci/controller/qcom'

- Drop redundant masking of global IRQ events in the endpoint driver
  (Manivannan Sadhasivam)

- Clarify unknown global IRQ message and only log it once to avoid a flood
  (Manivannan Sadhasivam)

- Add Manivannan Sadhasivam as a maintainer of the PCI endpoint DT bindings
  (Manivannan Sadhasivam)

- Add 'linux,pci-domain' property to endpoint DT binding (Manivannan
  Sadhasivam)

- Assign PCI domain number for endpoint controllers (Manivannan Sadhasivam)

- Add 'qcom_pcie_ep' and the PCI domain number to IRQ names for endpoint
  controller (Manivannan Sadhasivam)

- Add global SPI interrupt for PCIe link events to DT binding (Manivannan
  Sadhasivam)

- Add global RC interrupt handler to handle 'Link up' events and
  automatically enumerate hot-added devices (Manivannan Sadhasivam)

- Avoid mirroring of DBI and iATU register space so it doesn't overlap BAR
  MMIO space (Prudhvi Yarlagadda)

- Enable controller resources like PHY only after PERST# is deasserted to
  partially avoid the problem that the endpoint SoC crashes when its registers
  are accessed while Refclk is absent (Manivannan Sadhasivam)

- Rename dw_pcie.link_gen to max_link_speed to avoid ambiguity (Manivannan
  Sadhasivam)

- Cache maximum link speed value in dw_pcie.max_link_speed for use by
  vendor drivers (Manivannan Sadhasivam)

- Add 16.0 GT/s equalization and RX lane margining settings (Shashank Babu
  Chinta Venkata)

- Pass domain number to pci_bus_release_domain_nr() explicitly to avoid a
  NULL pointer dereference (Manivannan Sadhasivam)

* pci/controller/qcom:
  PCI: Pass domain number to pci_bus_release_domain_nr() explicitly
  PCI: qcom: Add RX lane margining settings for 16.0 GT/s
  PCI: qcom: Add equalization settings for 16.0 GT/s
  PCI: dwc: Always cache the maximum link speed value in dw_pcie::max_link_speed
  PCI: dwc: Rename 'dw_pcie::link_gen' to 'dw_pcie::max_link_speed'
  PCI: qcom-ep: Enable controller resources like PHY only after refclk is available
  PCI: qcom: Disable mirroring of DBI and iATU register space in BAR region
  PCI: qcom: Enumerate endpoints based on Link up event in 'global_irq' interrupt
  dt-bindings: PCI: qcom,pcie-sm8450: Add 'global' interrupt
  PCI: qcom-ep: Modify 'global_irq' and 'perst_irq' IRQ device names
  PCI: endpoint: Assign PCI domain number for endpoint controllers
  dt-bindings: PCI: pci-ep: Document 'linux,pci-domain' property
  dt-bindings: PCI: pci-ep: Update Maintainers
  PCI: qcom-ep: Reword the error message for receiving unknown global IRQ event
  PCI: qcom-ep: Drop the redundant masking of global IRQ events
parents 1bcf2331 0cca961a
@@ -10,7 +10,8 @@ description: |
   Common properties for PCI Endpoint Controller Nodes.
 
 maintainers:
-  - Kishon Vijay Abraham I <kishon@ti.com>
+  - Kishon Vijay Abraham I <kishon@kernel.org>
+  - Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
 
 properties:
   $nodename:
@@ -41,6 +42,17 @@ properties:
     default: 1
     maximum: 16
 
+  linux,pci-domain:
+    description:
+      If present this property assigns a fixed PCI domain number to a PCI
+      Endpoint Controller, otherwise an unstable (across boots) unique number
+      will be assigned. It is required to either not set this property at all
+      or set it for all PCI endpoint controllers in the system, otherwise
+      potentially conflicting domain numbers may be assigned to endpoint
+      controllers. The domain number for each endpoint controller in the system
+      must be unique.
+    $ref: /schemas/types.yaml#/definitions/uint32
+
 required:
   - compatible
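For reference, a minimal DTS sketch of how the new 'linux,pci-domain' property
could be used in an endpoint controller node (the node name, unit address, and
compatible string below are illustrative placeholders, not taken from this
series):

    pcie-ep@40000000 {
        compatible = "vendor,soc-pcie-ep";  /* placeholder compatible */
        linux,pci-domain = <1>;             /* fixed, system-wide unique domain */
    };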
@@ -21,11 +21,11 @@ properties:
   interrupts:
     minItems: 1
-    maxItems: 8
+    maxItems: 9
 
   interrupt-names:
     minItems: 1
-    maxItems: 8
+    maxItems: 9
 
   iommu-map:
     minItems: 1
@@ -280,4 +280,5 @@ examples:
         phy-names = "pciephy";
         max-link-speed = <3>;
         num-lanes = <2>;
+        linux,pci-domain = <0>;
     };
@@ -55,8 +55,8 @@ properties:
       - const: aggre1 # Aggre NoC PCIe1 AXI clock
 
   interrupts:
-    minItems: 8
-    maxItems: 8
+    minItems: 9
+    maxItems: 9
 
   interrupt-names:
     items:
@@ -68,6 +68,7 @@ properties:
       - const: msi5
       - const: msi6
       - const: msi7
+      - const: global
 
   operating-points-v2: true
   opp-table:
@@ -149,9 +150,10 @@ examples:
                      <GIC_SPI 145 IRQ_TYPE_LEVEL_HIGH>,
                      <GIC_SPI 146 IRQ_TYPE_LEVEL_HIGH>,
                      <GIC_SPI 147 IRQ_TYPE_LEVEL_HIGH>,
-                     <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>;
+                     <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 140 IRQ_TYPE_LEVEL_HIGH>;
         interrupt-names = "msi0", "msi1", "msi2", "msi3",
-                          "msi4", "msi5", "msi6", "msi7";
+                          "msi4", "msi5", "msi6", "msi7", "global";
         #interrupt-cells = <1>;
         interrupt-map-mask = <0 0 0 0x7>;
         interrupt-map = <0 0 0 1 &intc 0 0 0 149 IRQ_TYPE_LEVEL_HIGH>, /* int_a */
@@ -2728,7 +2728,7 @@ F: drivers/iommu/msm*
 F: drivers/mfd/ssbi.c
 F: drivers/mmc/host/mmci_qcom*
 F: drivers/mmc/host/sdhci-msm.c
-F: drivers/pci/controller/dwc/pcie-qcom.c
+F: drivers/pci/controller/dwc/pcie-qcom*
 F: drivers/phy/qualcomm/
 F: drivers/power/*/msm*
 F: drivers/reset/reset-qcom-*
@@ -17754,6 +17754,7 @@ M: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
 L: linux-pci@vger.kernel.org
 L: linux-arm-msm@vger.kernel.org
 S: Maintained
+F: drivers/pci/controller/dwc/pcie-qcom-common.c
 F: drivers/pci/controller/dwc/pcie-qcom.c
 
 PCIE DRIVER FOR ROCKCHIP
@@ -17790,6 +17791,7 @@ L: linux-pci@vger.kernel.org
 L: linux-arm-msm@vger.kernel.org
 S: Maintained
 F: Documentation/devicetree/bindings/pci/qcom,pcie-ep.yaml
+F: drivers/pci/controller/dwc/pcie-qcom-common.c
 F: drivers/pci/controller/dwc/pcie-qcom-ep.c
 
 PCMCIA SUBSYSTEM
@@ -265,12 +265,16 @@ config PCIE_DW_PLAT_EP
           order to enable device-specific features PCI_DW_PLAT_EP must be
           selected.
 
+config PCIE_QCOM_COMMON
+        bool
+
 config PCIE_QCOM
         bool "Qualcomm PCIe controller (host mode)"
         depends on OF && (ARCH_QCOM || COMPILE_TEST)
         depends on PCI_MSI
         select PCIE_DW_HOST
         select CRC8
+        select PCIE_QCOM_COMMON
         help
           Say Y here to enable PCIe controller support on Qualcomm SoCs. The
           PCIe controller uses the DesignWare core plus Qualcomm-specific
@@ -281,6 +285,7 @@ config PCIE_QCOM_EP
         depends on OF && (ARCH_QCOM || COMPILE_TEST)
         depends on PCI_ENDPOINT
         select PCIE_DW_EP
+        select PCIE_QCOM_COMMON
         help
           Say Y here to enable support for the PCIe controllers on Qualcomm SoCs
           to work in endpoint mode. The PCIe controller uses the DesignWare core
@@ -12,6 +12,7 @@ obj-$(CONFIG_PCIE_SPEAR13XX) += pcie-spear13xx.o
 obj-$(CONFIG_PCI_KEYSTONE) += pci-keystone.o
 obj-$(CONFIG_PCI_LAYERSCAPE) += pci-layerscape.o
 obj-$(CONFIG_PCI_LAYERSCAPE_EP) += pci-layerscape-ep.o
+obj-$(CONFIG_PCIE_QCOM_COMMON) += pcie-qcom-common.o
 obj-$(CONFIG_PCIE_QCOM) += pcie-qcom.o
 obj-$(CONFIG_PCIE_QCOM_EP) += pcie-qcom-ep.o
 obj-$(CONFIG_PCIE_ARMADA_8K) += pcie-armada8k.o
@@ -860,12 +860,12 @@ static int imx_pcie_start_link(struct dw_pcie *pci)
         if (ret)
                 goto err_reset_phy;
 
-        if (pci->link_gen > 1) {
+        if (pci->max_link_speed > 1) {
                 /* Allow faster modes after the link is up */
                 dw_pcie_dbi_ro_wr_en(pci);
                 tmp = dw_pcie_readl_dbi(pci, offset + PCI_EXP_LNKCAP);
                 tmp &= ~PCI_EXP_LNKCAP_SLS;
-                tmp |= pci->link_gen;
+                tmp |= pci->max_link_speed;
                 dw_pcie_writel_dbi(pci, offset + PCI_EXP_LNKCAP, tmp);
 
                 /*
@@ -1423,8 +1423,8 @@ static int imx_pcie_probe(struct platform_device *pdev)
                         imx_pcie->tx_swing_low = 127;
 
         /* Limit link speed */
-        pci->link_gen = 1;
-        of_property_read_u32(node, "fsl,max-link-speed", &pci->link_gen);
+        pci->max_link_speed = 1;
+        of_property_read_u32(node, "fsl,max-link-speed", &pci->max_link_speed);
 
         imx_pcie->vpcie = devm_regulator_get_optional(&pdev->dev, "vpcie");
         if (IS_ERR(imx_pcie->vpcie)) {
@@ -112,6 +112,7 @@ int dw_pcie_get_resources(struct dw_pcie *pci)
                 pci->dbi_base = devm_pci_remap_cfg_resource(pci->dev, res);
                 if (IS_ERR(pci->dbi_base))
                         return PTR_ERR(pci->dbi_base);
+                pci->dbi_phys_addr = res->start;
         }
 
         /* DBI2 is mainly useful for the endpoint controller */
@@ -134,6 +135,7 @@ int dw_pcie_get_resources(struct dw_pcie *pci)
                 pci->atu_base = devm_ioremap_resource(pci->dev, res);
                 if (IS_ERR(pci->atu_base))
                         return PTR_ERR(pci->atu_base);
+                pci->atu_phys_addr = res->start;
         } else {
                 pci->atu_base = pci->dbi_base + DEFAULT_DBI_ATU_OFFSET;
         }
@@ -166,8 +168,8 @@ int dw_pcie_get_resources(struct dw_pcie *pci)
                         return ret;
         }
 
-        if (pci->link_gen < 1)
-                pci->link_gen = of_pci_get_max_link_speed(np);
+        if (pci->max_link_speed < 1)
+                pci->max_link_speed = of_pci_get_max_link_speed(np);
 
         of_property_read_u32(np, "num-lanes", &pci->num_lanes);
@@ -687,16 +689,27 @@ void dw_pcie_upconfig_setup(struct dw_pcie *pci)
 }
 EXPORT_SYMBOL_GPL(dw_pcie_upconfig_setup);
 
-static void dw_pcie_link_set_max_speed(struct dw_pcie *pci, u32 link_gen)
+static void dw_pcie_link_set_max_speed(struct dw_pcie *pci)
 {
         u32 cap, ctrl2, link_speed;
         u8 offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
 
         cap = dw_pcie_readl_dbi(pci, offset + PCI_EXP_LNKCAP);
+
+        /*
+         * Even if the platform doesn't want to limit the maximum link speed,
+         * just cache the hardware default value so that the vendor drivers can
+         * use it to do any link specific configuration.
+         */
+        if (pci->max_link_speed < 1) {
+                pci->max_link_speed = FIELD_GET(PCI_EXP_LNKCAP_SLS, cap);
+                return;
+        }
+
         ctrl2 = dw_pcie_readl_dbi(pci, offset + PCI_EXP_LNKCTL2);
         ctrl2 &= ~PCI_EXP_LNKCTL2_TLS;
 
-        switch (pcie_link_speed[link_gen]) {
+        switch (pcie_link_speed[pci->max_link_speed]) {
         case PCIE_SPEED_2_5GT:
                 link_speed = PCI_EXP_LNKCTL2_TLS_2_5GT;
                 break;
@@ -1058,8 +1071,7 @@ void dw_pcie_setup(struct dw_pcie *pci)
 {
         u32 val;
 
-        if (pci->link_gen > 0)
-                dw_pcie_link_set_max_speed(pci, pci->link_gen);
+        dw_pcie_link_set_max_speed(pci);
 
         /* Configure Gen1 N_FTS */
         if (pci->n_fts[0]) {
@@ -125,6 +125,19 @@
 #define GEN3_RELATED_OFF_GEN3_EQ_DISABLE        BIT(16)
 #define GEN3_RELATED_OFF_RATE_SHADOW_SEL_SHIFT  24
 #define GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK   GENMASK(25, 24)
+#define GEN3_RELATED_OFF_RATE_SHADOW_SEL_16_0GT 0x1
+
+#define GEN3_EQ_CONTROL_OFF                     0x8A8
+#define GEN3_EQ_CONTROL_OFF_FB_MODE             GENMASK(3, 0)
+#define GEN3_EQ_CONTROL_OFF_PHASE23_EXIT_MODE   BIT(4)
+#define GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC        GENMASK(23, 8)
+#define GEN3_EQ_CONTROL_OFF_FOM_INC_INITIAL_EVAL BIT(24)
+
+#define GEN3_EQ_FB_MODE_DIR_CHANGE_OFF          0x8AC
+#define GEN3_EQ_FMDC_T_MIN_PHASE23              GENMASK(4, 0)
+#define GEN3_EQ_FMDC_N_EVALS                    GENMASK(9, 5)
+#define GEN3_EQ_FMDC_MAX_PRE_CUSROR_DELTA       GENMASK(13, 10)
+#define GEN3_EQ_FMDC_MAX_POST_CUSROR_DELTA      GENMASK(17, 14)
 
 #define PCIE_PORT_MULTI_LANE_CTRL               0x8C0
 #define PORT_MLTI_UPCFG_SUPPORT                 BIT(7)
@@ -197,6 +210,24 @@
 #define PCIE_PL_CHK_REG_ERR_ADDR                0xB28
 
+/*
+ * 16.0 GT/s (Gen 4) lane margining register definitions
+ */
+#define GEN4_LANE_MARGINING_1_OFF               0xB80
+#define MARGINING_MAX_VOLTAGE_OFFSET            GENMASK(29, 24)
+#define MARGINING_NUM_VOLTAGE_STEPS             GENMASK(22, 16)
+#define MARGINING_MAX_TIMING_OFFSET             GENMASK(13, 8)
+#define MARGINING_NUM_TIMING_STEPS              GENMASK(5, 0)
+
+#define GEN4_LANE_MARGINING_2_OFF               0xB84
+#define MARGINING_IND_ERROR_SAMPLER             BIT(28)
+#define MARGINING_SAMPLE_REPORTING_METHOD       BIT(27)
+#define MARGINING_IND_LEFT_RIGHT_TIMING         BIT(26)
+#define MARGINING_IND_UP_DOWN_VOLTAGE           BIT(25)
+#define MARGINING_VOLTAGE_SUPPORTED             BIT(24)
+#define MARGINING_MAXLANES                      GENMASK(20, 16)
+#define MARGINING_SAMPLE_RATE_TIMING            GENMASK(13, 8)
+#define MARGINING_SAMPLE_RATE_VOLTAGE           GENMASK(5, 0)
+
 /*
  * iATU Unroll-specific register definitions
  * From 4.80 core version the address translation will be made by unroll
@@ -407,8 +438,10 @@ struct dw_pcie_ops {
 struct dw_pcie {
         struct device           *dev;
         void __iomem            *dbi_base;
+        resource_size_t         dbi_phys_addr;
         void __iomem            *dbi_base2;
         void __iomem            *atu_base;
+        resource_size_t         atu_phys_addr;
         size_t                  atu_size;
         u32                     num_ib_windows;
         u32                     num_ob_windows;
@@ -421,7 +454,7 @@ struct dw_pcie {
         u32                     type;
         unsigned long           caps;
         int                     num_lanes;
-        int                     link_gen;
+        int                     max_link_speed;
         u8                      n_fts[2];
         struct dw_edma_chip     edma;
         struct clk_bulk_data    app_clks[DW_PCIE_NUM_APP_CLKS];
@@ -132,7 +132,7 @@ static void intel_pcie_link_setup(struct intel_pcie *pcie)
 static void intel_pcie_init_n_fts(struct dw_pcie *pci)
 {
-        switch (pci->link_gen) {
+        switch (pci->max_link_speed) {
         case 3:
                 pci->n_fts[1] = PORT_AFR_N_FTS_GEN3;
                 break;
@@ -252,7 +252,7 @@ static int intel_pcie_wait_l2(struct intel_pcie *pcie)
         int ret;
         struct dw_pcie *pci = &pcie->pci;
 
-        if (pci->link_gen < 3)
+        if (pci->max_link_speed < 3)
                 return 0;
 
         /* Send PME_TURN_OFF message */
// SPDX-License-Identifier: GPL-2.0
/*
 * Copyright (c) 2024 Qualcomm Innovation Center, Inc. All rights reserved.
 */

#include <linux/pci.h>

#include "pcie-designware.h"
#include "pcie-qcom-common.h"

void qcom_pcie_common_set_16gt_equalization(struct dw_pcie *pci)
{
        u32 reg;

        /*
         * GEN3_RELATED_OFF register is repurposed to apply equalization
         * settings at various data transmission rates through registers namely
         * GEN3_EQ_*. The RATE_SHADOW_SEL bit field of GEN3_RELATED_OFF
         * determines the data rate for which these equalization settings are
         * applied.
         */
        reg = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF);
        reg &= ~GEN3_RELATED_OFF_GEN3_ZRXDC_NONCOMPL;
        reg &= ~GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK;
        reg |= FIELD_PREP(GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK,
                          GEN3_RELATED_OFF_RATE_SHADOW_SEL_16_0GT);
        dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, reg);

        reg = dw_pcie_readl_dbi(pci, GEN3_EQ_FB_MODE_DIR_CHANGE_OFF);
        reg &= ~(GEN3_EQ_FMDC_T_MIN_PHASE23 |
                GEN3_EQ_FMDC_N_EVALS |
                GEN3_EQ_FMDC_MAX_PRE_CUSROR_DELTA |
                GEN3_EQ_FMDC_MAX_POST_CUSROR_DELTA);
        reg |= FIELD_PREP(GEN3_EQ_FMDC_T_MIN_PHASE23, 0x1) |
                FIELD_PREP(GEN3_EQ_FMDC_N_EVALS, 0xd) |
                FIELD_PREP(GEN3_EQ_FMDC_MAX_PRE_CUSROR_DELTA, 0x5) |
                FIELD_PREP(GEN3_EQ_FMDC_MAX_POST_CUSROR_DELTA, 0x5);
        dw_pcie_writel_dbi(pci, GEN3_EQ_FB_MODE_DIR_CHANGE_OFF, reg);

        reg = dw_pcie_readl_dbi(pci, GEN3_EQ_CONTROL_OFF);
        reg &= ~(GEN3_EQ_CONTROL_OFF_FB_MODE |
                GEN3_EQ_CONTROL_OFF_PHASE23_EXIT_MODE |
                GEN3_EQ_CONTROL_OFF_FOM_INC_INITIAL_EVAL |
                GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC);
        dw_pcie_writel_dbi(pci, GEN3_EQ_CONTROL_OFF, reg);
}
EXPORT_SYMBOL_GPL(qcom_pcie_common_set_16gt_equalization);

void qcom_pcie_common_set_16gt_lane_margining(struct dw_pcie *pci)
{
        u32 reg;

        reg = dw_pcie_readl_dbi(pci, GEN4_LANE_MARGINING_1_OFF);
        reg &= ~(MARGINING_MAX_VOLTAGE_OFFSET |
                MARGINING_NUM_VOLTAGE_STEPS |
                MARGINING_MAX_TIMING_OFFSET |
                MARGINING_NUM_TIMING_STEPS);
        reg |= FIELD_PREP(MARGINING_MAX_VOLTAGE_OFFSET, 0x24) |
                FIELD_PREP(MARGINING_NUM_VOLTAGE_STEPS, 0x78) |
                FIELD_PREP(MARGINING_MAX_TIMING_OFFSET, 0x32) |
                FIELD_PREP(MARGINING_NUM_TIMING_STEPS, 0x10);
        dw_pcie_writel_dbi(pci, GEN4_LANE_MARGINING_1_OFF, reg);

        reg = dw_pcie_readl_dbi(pci, GEN4_LANE_MARGINING_2_OFF);
        reg |= MARGINING_IND_ERROR_SAMPLER |
                MARGINING_SAMPLE_REPORTING_METHOD |
                MARGINING_IND_LEFT_RIGHT_TIMING |
                MARGINING_VOLTAGE_SUPPORTED;
        reg &= ~(MARGINING_IND_UP_DOWN_VOLTAGE |
                MARGINING_MAXLANES |
                MARGINING_SAMPLE_RATE_TIMING |
                MARGINING_SAMPLE_RATE_VOLTAGE);
        reg |= FIELD_PREP(MARGINING_MAXLANES, pci->num_lanes) |
                FIELD_PREP(MARGINING_SAMPLE_RATE_TIMING, 0x3f) |
                FIELD_PREP(MARGINING_SAMPLE_RATE_VOLTAGE, 0x3f);
        dw_pcie_writel_dbi(pci, GEN4_LANE_MARGINING_2_OFF, reg);
}
EXPORT_SYMBOL_GPL(qcom_pcie_common_set_16gt_lane_margining);
/* SPDX-License-Identifier: GPL-2.0 */
/*
 * Copyright (c) 2024 Qualcomm Innovation Center, Inc. All rights reserved.
 */

#ifndef _PCIE_QCOM_COMMON_H
#define _PCIE_QCOM_COMMON_H

struct dw_pcie;

void qcom_pcie_common_set_16gt_equalization(struct dw_pcie *pci);
void qcom_pcie_common_set_16gt_lane_margining(struct dw_pcie *pci);

#endif
@@ -25,6 +25,7 @@
 #include "../../pci.h"
 #include "pcie-designware.h"
+#include "pcie-qcom-common.h"
 
 /* PARF registers */
 #define PARF_SYS_CTRL                           0x00
@@ -486,6 +487,11 @@ static int qcom_pcie_perst_deassert(struct dw_pcie *pci)
                 goto err_disable_resources;
         }
 
+        if (pcie_link_speed[pci->max_link_speed] == PCIE_SPEED_16_0GT) {
+                qcom_pcie_common_set_16gt_equalization(pci);
+                qcom_pcie_common_set_16gt_lane_margining(pci);
+        }
+
         /*
          * The physical address of the MMIO region which is exposed as the BAR
          * should be written to MHI BASE registers.
@@ -647,11 +653,9 @@ static irqreturn_t qcom_pcie_ep_global_irq_thread(int irq, void *data)
         struct dw_pcie *pci = &pcie_ep->pci;
         struct device *dev = pci->dev;
         u32 status = readl_relaxed(pcie_ep->parf + PARF_INT_ALL_STATUS);
-        u32 mask = readl_relaxed(pcie_ep->parf + PARF_INT_ALL_MASK);
         u32 dstate, val;
 
         writel_relaxed(status, pcie_ep->parf + PARF_INT_ALL_CLEAR);
-        status &= mask;
 
         if (FIELD_GET(PARF_INT_ALL_LINK_DOWN, status)) {
                 dev_dbg(dev, "Received Linkdown event\n");
@@ -681,7 +685,8 @@ static irqreturn_t qcom_pcie_ep_global_irq_thread(int irq, void *data)
                 dw_pcie_ep_linkup(&pci->ep);
                 pcie_ep->link_status = QCOM_PCIE_EP_LINK_UP;
         } else {
-                dev_err(dev, "Received unknown event: %d\n", status);
+                dev_WARN_ONCE(dev, 1, "Received unknown event. INT_STATUS: 0x%08x\n",
+                              status);
         }
 
         return IRQ_HANDLED;
@@ -712,8 +717,15 @@ static irqreturn_t qcom_pcie_ep_perst_irq_thread(int irq, void *data)
 static int qcom_pcie_ep_enable_irq_resources(struct platform_device *pdev,
                                              struct qcom_pcie_ep *pcie_ep)
 {
+        struct device *dev = pcie_ep->pci.dev;
+        char *name;
         int ret;
 
+        name = devm_kasprintf(dev, GFP_KERNEL, "qcom_pcie_ep_global_irq%d",
+                              pcie_ep->pci.ep.epc->domain_nr);
+        if (!name)
+                return -ENOMEM;
+
         pcie_ep->global_irq = platform_get_irq_byname(pdev, "global");
         if (pcie_ep->global_irq < 0)
                 return pcie_ep->global_irq;
@@ -721,18 +733,23 @@ static int qcom_pcie_ep_enable_irq_resources(struct platform_device *pdev,
         ret = devm_request_threaded_irq(&pdev->dev, pcie_ep->global_irq, NULL,
                                         qcom_pcie_ep_global_irq_thread,
                                         IRQF_ONESHOT,
-                                        "global_irq", pcie_ep);
+                                        name, pcie_ep);
         if (ret) {
                 dev_err(&pdev->dev, "Failed to request Global IRQ\n");
                 return ret;
         }
 
+        name = devm_kasprintf(dev, GFP_KERNEL, "qcom_pcie_ep_perst_irq%d",
+                              pcie_ep->pci.ep.epc->domain_nr);
+        if (!name)
+                return -ENOMEM;
+
         pcie_ep->perst_irq = gpiod_to_irq(pcie_ep->reset);
         irq_set_status_flags(pcie_ep->perst_irq, IRQ_NOAUTOEN);
         ret = devm_request_threaded_irq(&pdev->dev, pcie_ep->perst_irq, NULL,
                                         qcom_pcie_ep_perst_irq_thread,
                                         IRQF_TRIGGER_HIGH | IRQF_ONESHOT,
-                                        "perst_irq", pcie_ep);
+                                        name, pcie_ep);
         if (ret) {
                 dev_err(&pdev->dev, "Failed to request PERST IRQ\n");
                 disable_irq(pcie_ep->global_irq);
@@ -846,21 +863,15 @@ static int qcom_pcie_ep_probe(struct platform_device *pdev)
         if (ret)
                 return ret;
 
-        ret = qcom_pcie_enable_resources(pcie_ep);
-        if (ret) {
-                dev_err(dev, "Failed to enable resources: %d\n", ret);
-                return ret;
-        }
-
         ret = dw_pcie_ep_init(&pcie_ep->pci.ep);
         if (ret) {
                 dev_err(dev, "Failed to initialize endpoint: %d\n", ret);
-                goto err_disable_resources;
+                return ret;
         }
 
         ret = qcom_pcie_ep_enable_irq_resources(pdev, pcie_ep);
         if (ret)
-                goto err_disable_resources;
+                goto err_ep_deinit;
 
         name = devm_kasprintf(dev, GFP_KERNEL, "%pOFP", dev->of_node);
         if (!name) {
@@ -877,8 +888,8 @@ static int qcom_pcie_ep_probe(struct platform_device *pdev)
         disable_irq(pcie_ep->global_irq);
         disable_irq(pcie_ep->perst_irq);
 
-err_disable_resources:
-        qcom_pcie_disable_resources(pcie_ep);
+err_ep_deinit:
+        dw_pcie_ep_deinit(&pcie_ep->pci.ep);
 
         return ret;
 }
@@ -35,6 +35,7 @@
 #include "../../pci.h"
 #include "pcie-designware.h"
+#include "pcie-qcom-common.h"
 
 /* PARF registers */
 #define PARF_SYS_CTRL                           0x00
@@ -45,15 +46,24 @@
 #define PARF_PHY_REFCLK                         0x4c
 #define PARF_CONFIG_BITS                        0x50
 #define PARF_DBI_BASE_ADDR                      0x168
+#define PARF_SLV_ADDR_SPACE_SIZE                0x16c
 #define PARF_MHI_CLOCK_RESET_CTRL               0x174
 #define PARF_AXI_MSTR_WR_ADDR_HALT              0x178
 #define PARF_AXI_MSTR_WR_ADDR_HALT_V2           0x1a8
 #define PARF_Q2A_FLUSH                          0x1ac
 #define PARF_LTSSM                              0x1b0
+#define PARF_INT_ALL_STATUS                     0x224
+#define PARF_INT_ALL_CLEAR                      0x228
+#define PARF_INT_ALL_MASK                       0x22c
 #define PARF_SID_OFFSET                         0x234
 #define PARF_BDF_TRANSLATE_CFG                  0x24c
-#define PARF_SLV_ADDR_SPACE_SIZE                0x358
+#define PARF_DBI_BASE_ADDR_V2                   0x350
+#define PARF_DBI_BASE_ADDR_V2_HI                0x354
+#define PARF_SLV_ADDR_SPACE_SIZE_V2             0x358
+#define PARF_SLV_ADDR_SPACE_SIZE_V2_HI          0x35c
 #define PARF_NO_SNOOP_OVERIDE                   0x3d4
+#define PARF_ATU_BASE_ADDR                      0x634
+#define PARF_ATU_BASE_ADDR_HI                   0x638
 #define PARF_DEVICE_TYPE                        0x1000
 #define PARF_BDF_TO_SID_TABLE_N                 0x2000
 #define PARF_BDF_TO_SID_CFG                     0x2c00
@@ -108,7 +118,7 @@
 #define PHY_RX0_EQ(x)                           FIELD_PREP(GENMASK(26, 24), x)
 
 /* PARF_SLV_ADDR_SPACE_SIZE register value */
-#define SLV_ADDR_SPACE_SZ                       0x10000000
+#define SLV_ADDR_SPACE_SZ                       0x80000000
 
 /* PARF_MHI_CLOCK_RESET_CTRL register fields */
 #define AHB_CLK_EN                              BIT(0)
@@ -121,6 +131,9 @@
 /* PARF_LTSSM register fields */
 #define LTSSM_EN                                BIT(8)
 
+/* PARF_INT_ALL_{STATUS/CLEAR/MASK} register fields */
+#define PARF_INT_ALL_LINK_UP                    BIT(13)
+
 /* PARF_NO_SNOOP_OVERIDE register fields */
 #define WR_NO_SNOOP_OVERIDE_EN                  BIT(1)
 #define RD_NO_SNOOP_OVERIDE_EN                  BIT(3)
@@ -283,6 +296,11 @@ static int qcom_pcie_start_link(struct dw_pcie *pci)
 {
         struct qcom_pcie *pcie = to_qcom_pcie(pci);
 
+        if (pcie_link_speed[pci->max_link_speed] == PCIE_SPEED_16_0GT) {
+                qcom_pcie_common_set_16gt_equalization(pci);
+                qcom_pcie_common_set_16gt_lane_margining(pci);
+        }
+
         /* Enable Link Training state machine */
         if (pcie->cfg->ops->ltssm_enable)
                 pcie->cfg->ops->ltssm_enable(pcie);
@@ -324,6 +342,50 @@ static void qcom_pcie_clear_hpc(struct dw_pcie *pci)
         dw_pcie_dbi_ro_wr_dis(pci);
 }
 
+static void qcom_pcie_configure_dbi_base(struct qcom_pcie *pcie)
+{
+        struct dw_pcie *pci = pcie->pci;
+
+        if (pci->dbi_phys_addr) {
+                /*
+                 * PARF_DBI_BASE_ADDR register is in CPU domain and require to
+                 * be programmed with CPU physical address.
+                 */
+                writel(lower_32_bits(pci->dbi_phys_addr), pcie->parf +
+                                                        PARF_DBI_BASE_ADDR);
+                writel(SLV_ADDR_SPACE_SZ, pcie->parf +
+                                                PARF_SLV_ADDR_SPACE_SIZE);
+        }
+}
+
+static void qcom_pcie_configure_dbi_atu_base(struct qcom_pcie *pcie)
+{
+        struct dw_pcie *pci = pcie->pci;
+
+        if (pci->dbi_phys_addr) {
+                /*
+                 * PARF_DBI_BASE_ADDR_V2 and PARF_ATU_BASE_ADDR registers are
+                 * in CPU domain and require to be programmed with CPU
+                 * physical addresses.
+                 */
+                writel(lower_32_bits(pci->dbi_phys_addr), pcie->parf +
+                                                        PARF_DBI_BASE_ADDR_V2);
+                writel(upper_32_bits(pci->dbi_phys_addr), pcie->parf +
+                                                PARF_DBI_BASE_ADDR_V2_HI);
+
+                if (pci->atu_phys_addr) {
+                        writel(lower_32_bits(pci->atu_phys_addr), pcie->parf +
+                                                        PARF_ATU_BASE_ADDR);
+                        writel(upper_32_bits(pci->atu_phys_addr), pcie->parf +
+                                                        PARF_ATU_BASE_ADDR_HI);
+                }
+
+                writel(0x0, pcie->parf + PARF_SLV_ADDR_SPACE_SIZE_V2);
+                writel(SLV_ADDR_SPACE_SZ, pcie->parf +
+                                                PARF_SLV_ADDR_SPACE_SIZE_V2_HI);
+        }
+}
+
 static void qcom_pcie_2_1_0_ltssm_enable(struct qcom_pcie *pcie)
 {
         u32 val;
@@ -540,8 +602,7 @@ static int qcom_pcie_init_1_0_0(struct qcom_pcie *pcie)
 
 static int qcom_pcie_post_init_1_0_0(struct qcom_pcie *pcie)
 {
-        /* change DBI base address */
-        writel(0, pcie->parf + PARF_DBI_BASE_ADDR);
+        qcom_pcie_configure_dbi_base(pcie);
 
         if (IS_ENABLED(CONFIG_PCI_MSI)) {
                 u32 val = readl(pcie->parf + PARF_AXI_MSTR_WR_ADDR_HALT);
@@ -628,8 +689,7 @@ static int qcom_pcie_post_init_2_3_2(struct qcom_pcie *pcie)
         val &= ~PHY_TEST_PWR_DOWN;
         writel(val, pcie->parf + PARF_PHY_CTRL);
 
-        /* change DBI base address */
-        writel(0, pcie->parf + PARF_DBI_BASE_ADDR);
+        qcom_pcie_configure_dbi_base(pcie);
 
         /* MAC PHY_POWERDOWN MUX DISABLE  */
         val = readl(pcie->parf + PARF_SYS_CTRL);
@@ -811,13 +871,11 @@ static int qcom_pcie_post_init_2_3_3(struct qcom_pcie *pcie)
         u16 offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
         u32 val;
 
-        writel(SLV_ADDR_SPACE_SZ, pcie->parf + PARF_SLV_ADDR_SPACE_SIZE);
-
         val = readl(pcie->parf + PARF_PHY_CTRL);
         val &= ~PHY_TEST_PWR_DOWN;
         writel(val, pcie->parf + PARF_PHY_CTRL);
 
-        writel(0, pcie->parf + PARF_DBI_BASE_ADDR);
+        qcom_pcie_configure_dbi_atu_base(pcie);
 
         writel(MST_WAKEUP_EN | SLV_WAKEUP_EN | MSTR_ACLK_CGC_DIS
                 | SLV_ACLK_CGC_DIS | CORE_CLK_CGC_DIS |
@@ -913,8 +971,7 @@ static int qcom_pcie_init_2_7_0(struct qcom_pcie *pcie)
         val &= ~PHY_TEST_PWR_DOWN;
         writel(val, pcie->parf + PARF_PHY_CTRL);
 
-        /* change DBI base address */
-        writel(0, pcie->parf + PARF_DBI_BASE_ADDR);
+        qcom_pcie_configure_dbi_atu_base(pcie);
 
         /* MAC PHY_POWERDOWN MUX DISABLE  */
         val = readl(pcie->parf + PARF_SYS_CTRL);
@@ -1123,14 +1180,11 @@ static int qcom_pcie_post_init_2_9_0(struct qcom_pcie *pcie)
         u32 val;
         int i;
 
-        writel(SLV_ADDR_SPACE_SZ,
-               pcie->parf + PARF_SLV_ADDR_SPACE_SIZE);
-
         val = readl(pcie->parf + PARF_PHY_CTRL);
         val &= ~PHY_TEST_PWR_DOWN;
         writel(val, pcie->parf + PARF_PHY_CTRL);
 
-        writel(0, pcie->parf + PARF_DBI_BASE_ADDR);
+        qcom_pcie_configure_dbi_atu_base(pcie);
 
         writel(DEVICE_TYPE_RC, pcie->parf + PARF_DEVICE_TYPE);
         writel(BYPASS | MSTR_AXI_CLK_EN | AHB_CLK_EN,
@@ -1488,6 +1542,29 @@ static void qcom_pcie_init_debugfs(struct qcom_pcie *pcie)
                                     qcom_pcie_link_transition_count);
 }
 
+static irqreturn_t qcom_pcie_global_irq_thread(int irq, void *data)
+{
+        struct qcom_pcie *pcie = data;
+        struct dw_pcie_rp *pp = &pcie->pci->pp;
+        struct device *dev = pcie->pci->dev;
+        u32 status = readl_relaxed(pcie->parf + PARF_INT_ALL_STATUS);
+
+        writel_relaxed(status, pcie->parf + PARF_INT_ALL_CLEAR);
+
+        if (FIELD_GET(PARF_INT_ALL_LINK_UP, status)) {
+                dev_dbg(dev, "Received Link up event. Starting enumeration!\n");
+                /* Rescan the bus to enumerate endpoint devices */
+                pci_lock_rescan_remove();
+                pci_rescan_bus(pp->bridge->bus);
+                pci_unlock_rescan_remove();
+        } else {
+                dev_WARN_ONCE(dev, 1, "Received unknown event. INT_STATUS: 0x%08x\n",
+                              status);
+        }
+
+        return IRQ_HANDLED;
+}
+
 static int qcom_pcie_probe(struct platform_device *pdev)
 {
         const struct qcom_pcie_cfg *pcie_cfg;
@@ -1498,7 +1575,8 @@ static int qcom_pcie_probe(struct platform_device *pdev)
         struct dw_pcie_rp *pp;
         struct resource *res;
         struct dw_pcie *pci;
-        int ret;
+        int ret, irq;
+        char *name;
 
         pcie_cfg = of_device_get_match_data(dev);
         if (!pcie_cfg || !pcie_cfg->ops) {
@@ -1617,6 +1695,27 @@ static int qcom_pcie_probe(struct platform_device *pdev)
                 goto err_phy_exit;
         }
 
+        name = devm_kasprintf(dev, GFP_KERNEL, "qcom_pcie_global_irq%d",
+                              pci_domain_nr(pp->bridge->bus));
+        if (!name) {
+                ret = -ENOMEM;
+                goto err_host_deinit;
+        }
+
+        irq = platform_get_irq_byname_optional(pdev, "global");
+        if (irq > 0) {
+                ret = devm_request_threaded_irq(&pdev->dev, irq, NULL,
+                                                qcom_pcie_global_irq_thread,
+                                                IRQF_ONESHOT, name, pcie);
+                if (ret) {
+                        dev_err_probe(&pdev->dev, ret,
+                                      "Failed to request Global IRQ\n");
+                        goto err_host_deinit;
+                }
+
+                writel_relaxed(PARF_INT_ALL_LINK_UP, pcie->parf + PARF_INT_ALL_MASK);
+        }
+
         qcom_pcie_icc_opp_update(pcie);
 
         if (pcie->mhi)
@@ -1624,6 +1723,8 @@ static int qcom_pcie_probe(struct platform_device *pdev)
 
         return 0;
 
+err_host_deinit:
+        dw_pcie_host_deinit(pp);
 err_phy_exit:
         phy_exit(pcie->phy);
 err_pm_runtime_put:
@@ -141,10 +141,10 @@ static int rcar_gen4_pcie_start_link(struct dw_pcie *dw)
         }
 
         /*
-         * Require direct speed change with retrying here if the link_gen is
-         * PCIe Gen2 or higher.
+         * Require direct speed change with retrying here if the max_link_speed
+         * is PCIe Gen2 or higher.
          */
-        changes = min_not_zero(dw->link_gen, RCAR_MAX_LINK_SPEED) - 1;
+        changes = min_not_zero(dw->max_link_speed, RCAR_MAX_LINK_SPEED) - 1;
 
         /*
          * Since dw_pcie_setup_rc() sets it once, PCIe Gen2 will be trained.
@@ -233,7 +233,7 @@ static int spear13xx_pcie_probe(struct platform_device *pdev)
         }
 
         if (of_property_read_bool(np, "st,pcie-is-gen1"))
-                pci->link_gen = 1;
+                pci->max_link_speed = 1;
 
         platform_set_drvdata(pdev, spear13xx_pcie);
@@ -177,11 +177,6 @@
 #define N_FTS_VAL                               52
 #define FTS_VAL                                 52
 
-#define GEN3_EQ_CONTROL_OFF                     0x8a8
-#define GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_SHIFT  8
-#define GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_MASK   GENMASK(23, 8)
-#define GEN3_EQ_CONTROL_OFF_FB_MODE_MASK        GENMASK(3, 0)
-
 #define PORT_LOGIC_AMBA_ERROR_RESPONSE_DEFAULT  0x8D0
 #define AMBA_ERROR_RESPONSE_RRS_SHIFT           3
 #define AMBA_ERROR_RESPONSE_RRS_MASK            GENMASK(1, 0)
@@ -861,9 +856,9 @@ static void config_gen3_gen4_eq_presets(struct tegra_pcie_dw *pcie)
         dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, val);
 
         val = dw_pcie_readl_dbi(pci, GEN3_EQ_CONTROL_OFF);
-        val &= ~GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_MASK;
-        val |= (0x3ff << GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_SHIFT);
-        val &= ~GEN3_EQ_CONTROL_OFF_FB_MODE_MASK;
+        val &= ~GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC;
+        val |= FIELD_PREP(GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC, 0x3ff);
+        val &= ~GEN3_EQ_CONTROL_OFF_FB_MODE;
         dw_pcie_writel_dbi(pci, GEN3_EQ_CONTROL_OFF, val);
 
         val = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF);
@@ -872,10 +867,10 @@ static void config_gen3_gen4_eq_presets(struct tegra_pcie_dw *pcie)
         dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, val);
 
         val = dw_pcie_readl_dbi(pci, GEN3_EQ_CONTROL_OFF);
-        val &= ~GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_MASK;
-        val |= (pcie->of_data->gen4_preset_vec <<
-                GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_SHIFT);
-        val &= ~GEN3_EQ_CONTROL_OFF_FB_MODE_MASK;
+        val &= ~GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC;
+        val |= FIELD_PREP(GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC,
+                          pcie->of_data->gen4_preset_vec);
+        val &= ~GEN3_EQ_CONTROL_OFF_FB_MODE;
         dw_pcie_writel_dbi(pci, GEN3_EQ_CONTROL_OFF, val);
 
         val = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF);
@@ -838,6 +838,10 @@ void pci_epc_destroy(struct pci_epc *epc)
 {
         pci_ep_cfs_remove_epc_group(epc->group);
         device_unregister(&epc->dev);
+
+#ifdef CONFIG_PCI_DOMAINS_GENERIC
+        pci_bus_release_domain_nr(&epc->dev, epc->domain_nr);
+#endif
 }
 EXPORT_SYMBOL_GPL(pci_epc_destroy);
@@ -900,6 +904,16 @@ __pci_epc_create(struct device *dev, const struct pci_epc_ops *ops,
         epc->dev.release = pci_epc_release;
         epc->ops = ops;
 
+#ifdef CONFIG_PCI_DOMAINS_GENERIC
+        epc->domain_nr = pci_bus_find_domain_nr(NULL, dev);
+#else
+        /*
+         * TODO: If the architecture doesn't support generic PCI
+         * domains, then a custom implementation has to be used.
+         */
+        WARN_ONCE(1, "This architecture doesn't support generic PCI domains\n");
+#endif
+
         ret = dev_set_name(&epc->dev, "%s", dev_name(dev));
         if (ret)
                 goto put_dev;
@@ -6828,16 +6828,16 @@ static int of_pci_bus_find_domain_nr(struct device *parent)
         return ida_alloc(&pci_domain_nr_dynamic_ida, GFP_KERNEL);
 }
 
-static void of_pci_bus_release_domain_nr(struct pci_bus *bus, struct device *parent)
+static void of_pci_bus_release_domain_nr(struct device *parent, int domain_nr)
 {
-        if (bus->domain_nr < 0)
+        if (domain_nr < 0)
                 return;
 
         /* Release domain from IDA where it was allocated. */
-        if (of_get_pci_domain_nr(parent->of_node) == bus->domain_nr)
-                ida_free(&pci_domain_nr_static_ida, bus->domain_nr);
+        if (of_get_pci_domain_nr(parent->of_node) == domain_nr)
+                ida_free(&pci_domain_nr_static_ida, domain_nr);
         else
-                ida_free(&pci_domain_nr_dynamic_ida, bus->domain_nr);
+                ida_free(&pci_domain_nr_dynamic_ida, domain_nr);
 }
 
 int pci_bus_find_domain_nr(struct pci_bus *bus, struct device *parent)
@@ -6846,11 +6846,11 @@ int pci_bus_find_domain_nr(struct pci_bus *bus, struct device *parent)
                 acpi_pci_bus_find_domain_nr(bus);
 }
 
-void pci_bus_release_domain_nr(struct pci_bus *bus, struct device *parent)
+void pci_bus_release_domain_nr(struct device *parent, int domain_nr)
 {
         if (!acpi_disabled)
                 return;
 
-        of_pci_bus_release_domain_nr(bus, parent);
+        of_pci_bus_release_domain_nr(parent, domain_nr);
 }
 #endif
@@ -1061,7 +1061,7 @@ static int pci_register_host_bridge(struct pci_host_bridge *bridge)
 
 free:
 #ifdef CONFIG_PCI_DOMAINS_GENERIC
-        pci_bus_release_domain_nr(bus, parent);
+        pci_bus_release_domain_nr(parent, bus->domain_nr);
 #endif
         kfree(bus);
         return err;
@@ -165,7 +165,7 @@ void pci_remove_root_bus(struct pci_bus *bus)
 #ifdef CONFIG_PCI_DOMAINS_GENERIC
         /* Release domain_nr if it was dynamically allocated */
         if (host_bridge->domain_nr == PCI_DOMAIN_NR_NOT_SET)
-                pci_bus_release_domain_nr(bus, host_bridge->dev.parent);
+                pci_bus_release_domain_nr(host_bridge->dev.parent, bus->domain_nr);
 #endif
 
         pci_remove_bus(bus);
@@ -128,6 +128,7 @@ struct pci_epc_mem {
  * @group: configfs group representing the PCI EPC device
  * @lock: mutex to protect pci_epc ops
  * @function_num_map: bitmap to manage physical function number
+ * @domain_nr: PCI domain number of the endpoint controller
  * @init_complete: flag to indicate whether the EPC initialization is complete
  *   or not
  */
@@ -145,6 +146,7 @@ struct pci_epc {
         /* mutex to protect against concurrent access of EP controller */
         struct mutex lock;
         unsigned long function_num_map;
+        int domain_nr;
         bool init_complete;
 };
@@ -1888,7 +1888,7 @@ static inline int acpi_pci_bus_find_domain_nr(struct pci_bus *bus)
 { return 0; }
 #endif
 int pci_bus_find_domain_nr(struct pci_bus *bus, struct device *parent);
-void pci_bus_release_domain_nr(struct pci_bus *bus, struct device *parent);
+void pci_bus_release_domain_nr(struct device *parent, int domain_nr);
 #endif
 
 /* Some architectures require additional setup to direct VGA traffic */