Commit 3efa7f1f authored by Bjorn Helgaas

Merge branch 'lorenzo/pci/tegra'

  - Fix Tegra OF node reference leak (Nishka Dasgupta)

  - Add #defines for PCIe Data Link Feature and Physical Layer 16.0 GT/s
    features (Vidya Sagar)

  - Disable MSI for Tegra Root Ports since they don't support using MSI for
    all Root Port events (Vidya Sagar)

  - Group DesignWare write-protected register writes together (Vidya Sagar)

  - Move DesignWare capability search interfaces so they can be used by
    both host and endpoint drivers (Vidya Sagar)

  - Add DesignWare extended capability search interfaces (Vidya Sagar)

  - Export dw_pcie_wait_for_link() so drivers can be modules (Vidya Sagar)

  - Add "snps,enable-cdm-check" DT binding for Configuration Dependent
    Module (CDM) register checking (Vidya Sagar)

  - Add DesignWare support for "snps,enable-cdm-check" CDM checking (Vidya
    Sagar)

  - Add "supports-clkreq" DT binding for host drivers to decide whether to
    advertise low power features (Vidya Sagar)

  - Add DT binding for Tegra194 (Vidya Sagar)

  - Add DT binding for Tegra194 P2U (PIPE to UPHY) block (Vidya Sagar)

  - Add support for Tegra194 P2U (PIPE to UPHY) (Vidya Sagar)

  - Add support for Tegra194 host controller (Vidya Sagar)

  - Add Tegra support for sideband PERST# and CLKREQ# for C5 (Vidya Sagar)

  - Add Tegra support for slot regulators for p2972-0000 platform (Vidya
    Sagar)

* lorenzo/pci/tegra:
  arm64: tegra: Add PCIe slot supply information in p2972-0000 platform
  arm64: tegra: Add configuration for PCIe C5 sideband signals
  PCI: tegra: Add support to enable slot regulators
  PCI: tegra: Add support to configure sideband pins
  dt-bindings: PCI: tegra: Add PCIe slot supplies regulator entries
  dt-bindings: PCI: tegra: Add sideband pins configuration entries
  PCI: tegra: Add Tegra194 PCIe support
  phy: tegra: Add PCIe PIPE2UPHY support
  dt-bindings: PHY: P2U: Add Tegra194 P2U block
  dt-bindings: PCI: tegra: Add device tree support for Tegra194
  dt-bindings: Add PCIe supports-clkreq property
  PCI: dwc: Add support to enable CDM register check
  dt-bindings: PCI: designware: Add binding for CDM register check
  PCI: dwc: Export dw_pcie_wait_for_link() API
  PCI: dwc: Add extended configuration space capability search API
  PCI: dwc: Move config space capability search API
  PCI: dwc: Group DBI registers writes requiring unlocking
  PCI: Disable MSI for Tegra root ports
  PCI: Add #defines for some of PCIe spec r4.0 features
  PCI: tegra: Fix OF node reference leak
parents 4597905e 09a0774a
......@@ -33,6 +33,11 @@ Optional properties:
- clock-names: Must include the following entries:
- "pcie"
- "pcie_bus"
- snps,enable-cdm-check: This is a boolean property and if present enables
automatic checking of CDM (Configuration Dependent Module) registers
for data corruption. CDM registers include standard PCIe configuration
space registers, Port Logic registers, DMA and iATU (internal Address
Translation Unit) registers.
RC mode:
- num-viewport: number of view ports configured in hardware. If a platform
does not specify it, the driver assumes 2.
......
NVIDIA Tegra PCIe controller (Synopsys DesignWare Core based)
This PCIe host controller is based on the Synopsys DesignWare PCIe IP
and thus inherits all the common properties defined in designware-pcie.txt.
Required properties:
- compatible: For Tegra19x, must contain "nvidia,tegra194-pcie".
- device_type: Must be "pci"
- power-domains: A phandle to the node that controls power to the respective
PCIe controller and a specifier name for the PCIe controller. Following are
the specifiers for the different PCIe controllers
TEGRA194_POWER_DOMAIN_PCIEX8B: C0
TEGRA194_POWER_DOMAIN_PCIEX1A: C1
TEGRA194_POWER_DOMAIN_PCIEX1A: C2
TEGRA194_POWER_DOMAIN_PCIEX1A: C3
TEGRA194_POWER_DOMAIN_PCIEX4A: C4
TEGRA194_POWER_DOMAIN_PCIEX8A: C5
These specifiers are defined in the
"include/dt-bindings/power/tegra194-powergate.h" file.
- reg: A list of physical base address and length pairs for each set of
controller registers. Must contain an entry for each entry in the reg-names
property.
- reg-names: Must include the following entries:
"appl": Controller's application logic registers
"config": As per the definition in designware-pcie.txt
"atu_dma": iATU and DMA registers. This is where the iATU (internal Address
Translation Unit) registers of the PCIe core are made available
for SW access.
"dbi": The aperture where root port's own configuration registers are
available
- interrupts: A list of interrupt outputs of the controller. Must contain an
entry for each entry in the interrupt-names property.
- interrupt-names: Must include the following entries:
"intr": The Tegra interrupt that is asserted for controller interrupts
"msi": The Tegra interrupt that is asserted when an MSI is received
- bus-range: Range of bus numbers associated with this controller
- #address-cells: Address representation for root ports (must be 3)
- cell 0 specifies the bus and device numbers of the root port:
[23:16]: bus number
[15:11]: device number
- cell 1 denotes the upper 32 address bits and should be 0
- cell 2 contains the lower 32 address bits and is used to translate to the
CPU address space
- #size-cells: Size representation for root ports (must be 2)
- ranges: Describes the translation of addresses for root ports and standard
PCI regions. The entries must be 7 cells each, where the first three cells
correspond to the address as described for the #address-cells property
above, the fourth and fifth cells are for the physical CPU address to
translate to and the sixth and seventh cells are as described for the
#size-cells property above.
- Entries set up the mapping for the standard I/O, memory and
prefetchable PCI regions. The first cell determines the type of region
that is set up:
- 0x81000000: I/O memory region
- 0x82000000: non-prefetchable memory region
- 0xc2000000: prefetchable memory region
Please refer to the standard PCI bus binding document for a more detailed
explanation.
- #interrupt-cells: Size representation for interrupts (must be 1)
- interrupt-map-mask and interrupt-map: Standard PCI IRQ mapping properties
Please refer to the standard PCI bus binding document for a more detailed
explanation.
- clocks: Must contain an entry for each entry in clock-names.
See ../clocks/clock-bindings.txt for details.
- clock-names: Must include the following entries:
- core
- resets: Must contain an entry for each entry in reset-names.
See ../reset/reset.txt for details.
- reset-names: Must include the following entries:
- apb
- core
- phys: Must contain a phandle to P2U PHY for each entry in phy-names.
- phy-names: Must include an entry for each active lane.
"p2u-N": where N ranges from 0 to one less than the total number of lanes
- nvidia,bpmp: Must contain a pair of phandle to BPMP controller node followed
by controller-id. Following are the controller ids for each controller.
0: C0
1: C1
2: C2
3: C3
4: C4
5: C5
- vddio-pex-ctl-supply: Regulator supply for PCIe side band signals
Optional properties:
- pinctrl-names: A list of pinctrl state names.
It is mandatory for C5 controller and optional for other controllers.
- "default": Configures PCIe I/O for proper operation.
- pinctrl-0: phandle for the 'default' state of pin configuration.
It is mandatory for C5 controller and optional for other controllers.
- supports-clkreq: Refer to Documentation/devicetree/bindings/pci/pci.txt
- nvidia,update-fc-fixup: This is a boolean property and needs to be present to
improve performance when a platform is designed in such a way that it
satisfies at least one of the following conditions, thereby enabling the root
port to exchange the optimum number of FC (Flow Control) credits with
downstream devices:
1. If C0/C4/C5 run at x1/x2 link widths (irrespective of speed and MPS)
2. If C0/C1/C2/C3/C4/C5 operate at their respective max link widths and
a) speed is Gen-2 and MPS is 256B
b) speed is >= Gen-3 with any MPS
- nvidia,aspm-cmrt-us: Common Mode Restore Time for proper operation of ASPM
to be specified in microseconds
- nvidia,aspm-pwr-on-t-us: Power On time for proper operation of ASPM to be
specified in microseconds
- nvidia,aspm-l0s-entrance-latency-us: ASPM L0s entrance latency to be
specified in microseconds
- vpcie3v3-supply: A phandle to the regulator node that supplies 3.3V to the slot
if the platform has one such slot (e.g. the x16 slot owned by the C5 controller
on the p2972-0000 platform).
- vpcie12v-supply: A phandle to the regulator node that supplies 12V to the slot
if the platform has one such slot (e.g. the x16 slot owned by the C5 controller
on the p2972-0000 platform).
Examples:
=========
Tegra194:
--------
pcie@14180000 {
compatible = "nvidia,tegra194-pcie", "snps,dw-pcie";
power-domains = <&bpmp TEGRA194_POWER_DOMAIN_PCIEX8B>;
reg = <0x00 0x14180000 0x0 0x00020000 /* appl registers (128K) */
0x00 0x38000000 0x0 0x00040000 /* configuration space (256K) */
0x00 0x38040000 0x0 0x00040000>; /* iATU_DMA reg space (256K) */
reg-names = "appl", "config", "atu_dma";
#address-cells = <3>;
#size-cells = <2>;
device_type = "pci";
num-lanes = <8>;
linux,pci-domain = <0>;
pinctrl-names = "default";
pinctrl-0 = <&pex_rst_c5_out_state>, <&clkreq_c5_bi_dir_state>;
clocks = <&bpmp TEGRA194_CLK_PEX0_CORE_0>;
clock-names = "core";
resets = <&bpmp TEGRA194_RESET_PEX0_CORE_0_APB>,
<&bpmp TEGRA194_RESET_PEX0_CORE_0>;
reset-names = "apb", "core";
interrupts = <GIC_SPI 72 IRQ_TYPE_LEVEL_HIGH>, /* controller interrupt */
<GIC_SPI 73 IRQ_TYPE_LEVEL_HIGH>; /* MSI interrupt */
interrupt-names = "intr", "msi";
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 0>;
interrupt-map = <0 0 0 0 &gic GIC_SPI 72 IRQ_TYPE_LEVEL_HIGH>;
nvidia,bpmp = <&bpmp 0>;
supports-clkreq;
nvidia,aspm-cmrt-us = <60>;
nvidia,aspm-pwr-on-t-us = <20>;
nvidia,aspm-l0s-entrance-latency-us = <3>;
bus-range = <0x0 0xff>;
ranges = <0x81000000 0x0 0x38100000 0x0 0x38100000 0x0 0x00100000 /* downstream I/O (1MB) */
0x82000000 0x0 0x38200000 0x0 0x38200000 0x0 0x01E00000 /* non-prefetchable memory (30MB) */
0xc2000000 0x18 0x00000000 0x18 0x00000000 0x4 0x00000000>; /* prefetchable memory (16GB) */
vddio-pex-ctl-supply = <&vdd_1v8ao>;
vpcie3v3-supply = <&vdd_3v3_pcie>;
vpcie12v-supply = <&vdd_12v_pcie>;
phys = <&p2u_hsio_2>, <&p2u_hsio_3>, <&p2u_hsio_4>,
<&p2u_hsio_5>;
phy-names = "p2u-0", "p2u-1", "p2u-2", "p2u-3";
};
......@@ -27,6 +27,11 @@ driver implementation may support the following properties:
- reset-gpios:
If present, this property specifies the PERST# GPIO. Host drivers can parse the
GPIO and use it to apply a fundamental reset to endpoints.
- supports-clkreq:
If present, this property specifies that CLKREQ signal routing exists from the
root port to the downstream device, and host bridge drivers can perform
programming that depends on the presence of the CLKREQ signal; for example,
programming the root port not to advertise ASPM L1 Sub-States support if there
is no CLKREQ signal.
PCI-PCI Bridge properties
-------------------------
......
NVIDIA Tegra194 P2U binding
Tegra194 has two PHY bricks, namely HSIO (High Speed IO) and NVHS (NVIDIA High
Speed), interfacing with 12 and 8 P2U instances respectively.
A P2U instance is the glue logic between the Synopsys DesignWare Core PCIe IP's
PIPE interface and the PHY of the HSIO/NVHS bricks. Each P2U instance represents
one PCIe lane.
Required properties:
- compatible: For Tegra19x, must contain "nvidia,tegra194-p2u".
- reg: Should be the physical address space and length of the respective P2U
instance.
- reg-names: Must include the entry "ctl".
Required properties for PHY port node:
- #phy-cells: Defined by generic PHY bindings. Must be 0.
Refer to phy/phy-bindings.txt for the generic PHY binding properties.
Example:
p2u_hsio_0: phy@3e10000 {
compatible = "nvidia,tegra194-p2u";
reg = <0x03e10000 0x10000>;
reg-names = "ctl";
#phy-cells = <0>;
};
......@@ -289,5 +289,29 @@ vdd_hdmi: regulator@1 {
gpio = <&gpio TEGRA194_MAIN_GPIO(A, 3) GPIO_ACTIVE_HIGH>;
enable-active-high;
};
vdd_3v3_pcie: regulator@2 {
compatible = "regulator-fixed";
reg = <2>;
regulator-name = "PEX_3V3";
regulator-min-microvolt = <3300000>;
regulator-max-microvolt = <3300000>;
gpio = <&gpio TEGRA194_MAIN_GPIO(Z, 2) GPIO_ACTIVE_HIGH>;
regulator-boot-on;
enable-active-high;
};
vdd_12v_pcie: regulator@3 {
compatible = "regulator-fixed";
reg = <3>;
regulator-name = "VDD_12V";
regulator-min-microvolt = <1200000>;
regulator-max-microvolt = <1200000>;
gpio = <&gpio TEGRA194_MAIN_GPIO(A, 1) GPIO_ACTIVE_LOW>;
regulator-boot-on;
enable-active-low;
};
};
};
......@@ -93,9 +93,11 @@ pcie@14180000 {
};
pcie@141a0000 {
status = "disabled";
status = "okay";
vddio-pex-ctl-supply = <&vdd_1v8ao>;
vpcie3v3-supply = <&vdd_3v3_pcie>;
vpcie12v-supply = <&vdd_12v_pcie>;
phys = <&p2u_nvhs_0>, <&p2u_nvhs_1>, <&p2u_nvhs_2>,
<&p2u_nvhs_3>, <&p2u_nvhs_4>, <&p2u_nvhs_5>,
......
......@@ -3,8 +3,9 @@
#include <dt-bindings/gpio/tegra194-gpio.h>
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/mailbox/tegra186-hsp.h>
#include <dt-bindings/reset/tegra194-reset.h>
#include <dt-bindings/pinctrl/pinctrl-tegra.h>
#include <dt-bindings/power/tegra194-powergate.h>
#include <dt-bindings/reset/tegra194-reset.h>
#include <dt-bindings/thermal/tegra194-bpmp-thermal.h>
/ {
......@@ -130,6 +131,38 @@ agic: interrupt-controller@2a40000 {
};
};
pinmux: pinmux@2430000 {
compatible = "nvidia,tegra194-pinmux";
reg = <0x2430000 0x17000
0xc300000 0x4000>;
status = "okay";
pex_rst_c5_out_state: pex_rst_c5_out {
pex_rst {
nvidia,pins = "pex_l5_rst_n_pgg1";
nvidia,schmitt = <TEGRA_PIN_DISABLE>;
nvidia,lpdr = <TEGRA_PIN_ENABLE>;
nvidia,enable-input = <TEGRA_PIN_DISABLE>;
nvidia,io-high-voltage = <TEGRA_PIN_ENABLE>;
nvidia,tristate = <TEGRA_PIN_DISABLE>;
nvidia,pull = <TEGRA_PIN_PULL_NONE>;
};
};
clkreq_c5_bi_dir_state: clkreq_c5_bi_dir {
clkreq {
nvidia,pins = "pex_l5_clkreq_n_pgg0";
nvidia,schmitt = <TEGRA_PIN_DISABLE>;
nvidia,lpdr = <TEGRA_PIN_ENABLE>;
nvidia,enable-input = <TEGRA_PIN_ENABLE>;
nvidia,io-high-voltage = <TEGRA_PIN_ENABLE>;
nvidia,tristate = <TEGRA_PIN_DISABLE>;
nvidia,pull = <TEGRA_PIN_PULL_NONE>;
};
};
};
uarta: serial@3100000 {
compatible = "nvidia,tegra194-uart", "nvidia,tegra20-uart";
reg = <0x03100000 0x40>;
......@@ -1365,6 +1398,9 @@ pcie@141a0000 {
num-viewport = <8>;
linux,pci-domain = <5>;
pinctrl-names = "default";
pinctrl-0 = <&pex_rst_c5_out_state>, <&clkreq_c5_bi_dir_state>;
clocks = <&bpmp TEGRA194_CLK_PEX1_CORE_5>,
<&bpmp TEGRA194_CLK_PEX1_CORE_5M>;
clock-names = "core", "core_m";
......
......@@ -236,6 +236,16 @@ config PCI_MESON
and therefore the driver re-uses the DesignWare core functions to
implement the driver.
config PCIE_TEGRA194
tristate "NVIDIA Tegra194 (and later) PCIe controller"
depends on ARCH_TEGRA_194_SOC || COMPILE_TEST
depends on PCI_MSI_IRQ_DOMAIN
select PCIE_DW_HOST
select PHY_TEGRA194_P2U
help
Say Y here if you want support for the DesignWare core based PCIe host
controller found in the NVIDIA Tegra194 SoC.
config PCIE_UNIPHIER
bool "Socionext UniPhier PCIe controllers"
depends on ARCH_UNIPHIER || COMPILE_TEST
......
......@@ -16,6 +16,7 @@ obj-$(CONFIG_PCIE_ARTPEC6) += pcie-artpec6.o
obj-$(CONFIG_PCIE_KIRIN) += pcie-kirin.o
obj-$(CONFIG_PCIE_HISI_STB) += pcie-histb.o
obj-$(CONFIG_PCI_MESON) += pci-meson.o
obj-$(CONFIG_PCIE_TEGRA194) += pcie-tegra194.o
obj-$(CONFIG_PCIE_UNIPHIER) += pcie-uniphier.o
# The following drivers are for devices that use the generic ACPI
......
......@@ -40,39 +40,6 @@ void dw_pcie_ep_reset_bar(struct dw_pcie *pci, enum pci_barno bar)
__dw_pcie_ep_reset_bar(pci, bar, 0);
}
static u8 __dw_pcie_ep_find_next_cap(struct dw_pcie *pci, u8 cap_ptr,
u8 cap)
{
u8 cap_id, next_cap_ptr;
u16 reg;
if (!cap_ptr)
return 0;
reg = dw_pcie_readw_dbi(pci, cap_ptr);
cap_id = (reg & 0x00ff);
if (cap_id > PCI_CAP_ID_MAX)
return 0;
if (cap_id == cap)
return cap_ptr;
next_cap_ptr = (reg & 0xff00) >> 8;
return __dw_pcie_ep_find_next_cap(pci, next_cap_ptr, cap);
}
static u8 dw_pcie_ep_find_capability(struct dw_pcie *pci, u8 cap)
{
u8 next_cap_ptr;
u16 reg;
reg = dw_pcie_readw_dbi(pci, PCI_CAPABILITY_LIST);
next_cap_ptr = (reg & 0x00ff);
return __dw_pcie_ep_find_next_cap(pci, next_cap_ptr, cap);
}
static int dw_pcie_ep_write_header(struct pci_epc *epc, u8 func_no,
struct pci_epf_header *hdr)
{
......@@ -620,9 +587,9 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep)
dev_err(dev, "Failed to reserve memory for MSI/MSI-X\n");
return -ENOMEM;
}
ep->msi_cap = dw_pcie_ep_find_capability(pci, PCI_CAP_ID_MSI);
ep->msi_cap = dw_pcie_find_capability(pci, PCI_CAP_ID_MSI);
ep->msix_cap = dw_pcie_ep_find_capability(pci, PCI_CAP_ID_MSIX);
ep->msix_cap = dw_pcie_find_capability(pci, PCI_CAP_ID_MSIX);
offset = dw_pcie_ep_find_ext_capability(pci, PCI_EXT_CAP_ID_REBAR);
if (offset) {
......
......@@ -644,6 +644,12 @@ void dw_pcie_setup_rc(struct pcie_port *pp)
u32 val, ctrl, num_ctrls;
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
/*
* Enable DBI read-only registers for writing/updating configuration.
* Write permission gets disabled towards the end of this function.
*/
dw_pcie_dbi_ro_wr_en(pci);
dw_pcie_setup(pci);
if (!pp->ops->msi_host_init) {
......@@ -666,12 +672,10 @@ void dw_pcie_setup_rc(struct pcie_port *pp)
dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_1, 0x00000000);
/* Setup interrupt pins */
dw_pcie_dbi_ro_wr_en(pci);
val = dw_pcie_readl_dbi(pci, PCI_INTERRUPT_LINE);
val &= 0xffff00ff;
val |= 0x00000100;
dw_pcie_writel_dbi(pci, PCI_INTERRUPT_LINE, val);
dw_pcie_dbi_ro_wr_dis(pci);
/* Setup bus numbers */
val = dw_pcie_readl_dbi(pci, PCI_PRIMARY_BUS);
......@@ -703,15 +707,13 @@ void dw_pcie_setup_rc(struct pcie_port *pp)
dw_pcie_wr_own_conf(pp, PCI_BASE_ADDRESS_0, 4, 0);
/* Enable write permission for the DBI read-only register */
dw_pcie_dbi_ro_wr_en(pci);
/* Program correct class for RC */
dw_pcie_wr_own_conf(pp, PCI_CLASS_DEVICE, 2, PCI_CLASS_BRIDGE_PCI);
/* Better disable write permission right after the update */
dw_pcie_dbi_ro_wr_dis(pci);
dw_pcie_rd_own_conf(pp, PCIE_LINK_WIDTH_SPEED_CONTROL, 4, &val);
val |= PORT_LOGIC_SPEED_CHANGE;
dw_pcie_wr_own_conf(pp, PCIE_LINK_WIDTH_SPEED_CONTROL, 4, val);
dw_pcie_dbi_ro_wr_dis(pci);
}
EXPORT_SYMBOL_GPL(dw_pcie_setup_rc);
......@@ -14,6 +14,86 @@
#include "pcie-designware.h"
/*
* These interfaces resemble the pci_find_*capability() interfaces, but these
* are for configuring host controllers, which are bridges *to* PCI devices but
* are not PCI devices themselves.
*/
static u8 __dw_pcie_find_next_cap(struct dw_pcie *pci, u8 cap_ptr,
u8 cap)
{
u8 cap_id, next_cap_ptr;
u16 reg;
if (!cap_ptr)
return 0;
reg = dw_pcie_readw_dbi(pci, cap_ptr);
cap_id = (reg & 0x00ff);
if (cap_id > PCI_CAP_ID_MAX)
return 0;
if (cap_id == cap)
return cap_ptr;
next_cap_ptr = (reg & 0xff00) >> 8;
return __dw_pcie_find_next_cap(pci, next_cap_ptr, cap);
}
u8 dw_pcie_find_capability(struct dw_pcie *pci, u8 cap)
{
u8 next_cap_ptr;
u16 reg;
reg = dw_pcie_readw_dbi(pci, PCI_CAPABILITY_LIST);
next_cap_ptr = (reg & 0x00ff);
return __dw_pcie_find_next_cap(pci, next_cap_ptr, cap);
}
EXPORT_SYMBOL_GPL(dw_pcie_find_capability);
static u16 dw_pcie_find_next_ext_capability(struct dw_pcie *pci, u16 start,
u8 cap)
{
u32 header;
int ttl;
int pos = PCI_CFG_SPACE_SIZE;
/* minimum 8 bytes per capability */
ttl = (PCI_CFG_SPACE_EXP_SIZE - PCI_CFG_SPACE_SIZE) / 8;
if (start)
pos = start;
header = dw_pcie_readl_dbi(pci, pos);
/*
* If we have no capabilities, this is indicated by cap ID,
* cap version and next pointer all being 0.
*/
if (header == 0)
return 0;
while (ttl-- > 0) {
if (PCI_EXT_CAP_ID(header) == cap && pos != start)
return pos;
pos = PCI_EXT_CAP_NEXT(header);
if (pos < PCI_CFG_SPACE_SIZE)
break;
header = dw_pcie_readl_dbi(pci, pos);
}
return 0;
}
u16 dw_pcie_find_ext_capability(struct dw_pcie *pci, u8 cap)
{
return dw_pcie_find_next_ext_capability(pci, 0, cap);
}
EXPORT_SYMBOL_GPL(dw_pcie_find_ext_capability);
int dw_pcie_read(void __iomem *addr, int size, u32 *val)
{
if (!IS_ALIGNED((uintptr_t)addr, size)) {
......@@ -376,10 +456,11 @@ int dw_pcie_wait_for_link(struct dw_pcie *pci)
usleep_range(LINK_WAIT_USLEEP_MIN, LINK_WAIT_USLEEP_MAX);
}
dev_err(pci->dev, "Phy link never came up\n");
dev_info(pci->dev, "Phy link never came up\n");
return -ETIMEDOUT;
}
EXPORT_SYMBOL_GPL(dw_pcie_wait_for_link);
int dw_pcie_link_up(struct dw_pcie *pci)
{
......@@ -468,4 +549,11 @@ void dw_pcie_setup(struct dw_pcie *pci)
break;
}
dw_pcie_writel_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL, val);
if (of_property_read_bool(np, "snps,enable-cdm-check")) {
val = dw_pcie_readl_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS);
val |= PCIE_PL_CHK_REG_CHK_REG_CONTINUOUS |
PCIE_PL_CHK_REG_CHK_REG_START;
dw_pcie_writel_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS, val);
}
}
......@@ -86,6 +86,15 @@
#define PCIE_MISC_CONTROL_1_OFF 0x8BC
#define PCIE_DBI_RO_WR_EN BIT(0)
#define PCIE_PL_CHK_REG_CONTROL_STATUS 0xB20
#define PCIE_PL_CHK_REG_CHK_REG_START BIT(0)
#define PCIE_PL_CHK_REG_CHK_REG_CONTINUOUS BIT(1)
#define PCIE_PL_CHK_REG_CHK_REG_COMPARISON_ERROR BIT(16)
#define PCIE_PL_CHK_REG_CHK_REG_LOGIC_ERROR BIT(17)
#define PCIE_PL_CHK_REG_CHK_REG_COMPLETE BIT(18)
#define PCIE_PL_CHK_REG_ERR_ADDR 0xB28
/*
* iATU Unroll-specific register definitions
* From 4.80 core version the address translation will be made by unroll
......@@ -251,6 +260,9 @@ struct dw_pcie {
#define to_dw_pcie_from_ep(endpoint) \
container_of((endpoint), struct dw_pcie, ep)
u8 dw_pcie_find_capability(struct dw_pcie *pci, u8 cap);
u16 dw_pcie_find_ext_capability(struct dw_pcie *pci, u8 cap);
int dw_pcie_read(void __iomem *addr, int size, u32 *val);
int dw_pcie_write(void __iomem *addr, int size, u32 val);
......
// SPDX-License-Identifier: GPL-2.0+
/*
* PCIe host controller driver for Tegra194 SoC
*
* Copyright (C) 2019 NVIDIA Corporation.
*
* Author: Vidya Sagar <vidyas@nvidia.com>
*/
#include <linux/clk.h>
#include <linux/debugfs.h>
#include <linux/delay.h>
#include <linux/gpio.h>
#include <linux/interrupt.h>
#include <linux/iopoll.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/of_gpio.h>
#include <linux/of_irq.h>
#include <linux/of_pci.h>
#include <linux/pci.h>
#include <linux/phy/phy.h>
#include <linux/pinctrl/consumer.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
#include <linux/random.h>
#include <linux/reset.h>
#include <linux/resource.h>
#include <linux/types.h>
#include "pcie-designware.h"
#include <soc/tegra/bpmp.h>
#include <soc/tegra/bpmp-abi.h>
#include "../../pci.h"
#define APPL_PINMUX 0x0
#define APPL_PINMUX_PEX_RST BIT(0)
#define APPL_PINMUX_CLKREQ_OVERRIDE_EN BIT(2)
#define APPL_PINMUX_CLKREQ_OVERRIDE BIT(3)
#define APPL_PINMUX_CLK_OUTPUT_IN_OVERRIDE_EN BIT(4)
#define APPL_PINMUX_CLK_OUTPUT_IN_OVERRIDE BIT(5)
#define APPL_PINMUX_CLKREQ_OUT_OVRD_EN BIT(9)
#define APPL_PINMUX_CLKREQ_OUT_OVRD BIT(10)
#define APPL_CTRL 0x4
#define APPL_CTRL_SYS_PRE_DET_STATE BIT(6)
#define APPL_CTRL_LTSSM_EN BIT(7)
#define APPL_CTRL_HW_HOT_RST_EN BIT(20)
#define APPL_CTRL_HW_HOT_RST_MODE_MASK GENMASK(1, 0)
#define APPL_CTRL_HW_HOT_RST_MODE_SHIFT 22
#define APPL_CTRL_HW_HOT_RST_MODE_IMDT_RST 0x1
#define APPL_INTR_EN_L0_0 0x8
#define APPL_INTR_EN_L0_0_LINK_STATE_INT_EN BIT(0)
#define APPL_INTR_EN_L0_0_MSI_RCV_INT_EN BIT(4)
#define APPL_INTR_EN_L0_0_INT_INT_EN BIT(8)
#define APPL_INTR_EN_L0_0_CDM_REG_CHK_INT_EN BIT(19)
#define APPL_INTR_EN_L0_0_SYS_INTR_EN BIT(30)
#define APPL_INTR_EN_L0_0_SYS_MSI_INTR_EN BIT(31)
#define APPL_INTR_STATUS_L0 0xC
#define APPL_INTR_STATUS_L0_LINK_STATE_INT BIT(0)
#define APPL_INTR_STATUS_L0_INT_INT BIT(8)
#define APPL_INTR_STATUS_L0_CDM_REG_CHK_INT BIT(18)
#define APPL_INTR_EN_L1_0_0 0x1C
#define APPL_INTR_EN_L1_0_0_LINK_REQ_RST_NOT_INT_EN BIT(1)
#define APPL_INTR_STATUS_L1_0_0 0x20
#define APPL_INTR_STATUS_L1_0_0_LINK_REQ_RST_NOT_CHGED BIT(1)
#define APPL_INTR_STATUS_L1_1 0x2C
#define APPL_INTR_STATUS_L1_2 0x30
#define APPL_INTR_STATUS_L1_3 0x34
#define APPL_INTR_STATUS_L1_6 0x3C
#define APPL_INTR_STATUS_L1_7 0x40
#define APPL_INTR_EN_L1_8_0 0x44
#define APPL_INTR_EN_L1_8_BW_MGT_INT_EN BIT(2)
#define APPL_INTR_EN_L1_8_AUTO_BW_INT_EN BIT(3)
#define APPL_INTR_EN_L1_8_INTX_EN BIT(11)
#define APPL_INTR_EN_L1_8_AER_INT_EN BIT(15)
#define APPL_INTR_STATUS_L1_8_0 0x4C
#define APPL_INTR_STATUS_L1_8_0_EDMA_INT_MASK GENMASK(11, 6)
#define APPL_INTR_STATUS_L1_8_0_BW_MGT_INT_STS BIT(2)
#define APPL_INTR_STATUS_L1_8_0_AUTO_BW_INT_STS BIT(3)
#define APPL_INTR_STATUS_L1_9 0x54
#define APPL_INTR_STATUS_L1_10 0x58
#define APPL_INTR_STATUS_L1_11 0x64
#define APPL_INTR_STATUS_L1_13 0x74
#define APPL_INTR_STATUS_L1_14 0x78
#define APPL_INTR_STATUS_L1_15 0x7C
#define APPL_INTR_STATUS_L1_17 0x88
#define APPL_INTR_EN_L1_18 0x90
#define APPL_INTR_EN_L1_18_CDM_REG_CHK_CMPLT BIT(2)
#define APPL_INTR_EN_L1_18_CDM_REG_CHK_CMP_ERR BIT(1)
#define APPL_INTR_EN_L1_18_CDM_REG_CHK_LOGIC_ERR BIT(0)
#define APPL_INTR_STATUS_L1_18 0x94
#define APPL_INTR_STATUS_L1_18_CDM_REG_CHK_CMPLT BIT(2)
#define APPL_INTR_STATUS_L1_18_CDM_REG_CHK_CMP_ERR BIT(1)
#define APPL_INTR_STATUS_L1_18_CDM_REG_CHK_LOGIC_ERR BIT(0)
#define APPL_MSI_CTRL_2 0xB0
#define APPL_LTR_MSG_1 0xC4
#define LTR_MSG_REQ BIT(15)
#define LTR_MST_NO_SNOOP_SHIFT 16
#define APPL_LTR_MSG_2 0xC8
#define APPL_LTR_MSG_2_LTR_MSG_REQ_STATE BIT(3)
#define APPL_LINK_STATUS 0xCC
#define APPL_LINK_STATUS_RDLH_LINK_UP BIT(0)
#define APPL_DEBUG 0xD0
#define APPL_DEBUG_PM_LINKST_IN_L2_LAT BIT(21)
#define APPL_DEBUG_PM_LINKST_IN_L0 0x11
#define APPL_DEBUG_LTSSM_STATE_MASK GENMASK(8, 3)
#define APPL_DEBUG_LTSSM_STATE_SHIFT 3
#define LTSSM_STATE_PRE_DETECT 5
#define APPL_RADM_STATUS 0xE4
#define APPL_PM_XMT_TURNOFF_STATE BIT(0)
#define APPL_DM_TYPE 0x100
#define APPL_DM_TYPE_MASK GENMASK(3, 0)
#define APPL_DM_TYPE_RP 0x4
#define APPL_DM_TYPE_EP 0x0
#define APPL_CFG_BASE_ADDR 0x104
#define APPL_CFG_BASE_ADDR_MASK GENMASK(31, 12)
#define APPL_CFG_IATU_DMA_BASE_ADDR 0x108
#define APPL_CFG_IATU_DMA_BASE_ADDR_MASK GENMASK(31, 18)
#define APPL_CFG_MISC 0x110
#define APPL_CFG_MISC_SLV_EP_MODE BIT(14)
#define APPL_CFG_MISC_ARCACHE_MASK GENMASK(13, 10)
#define APPL_CFG_MISC_ARCACHE_SHIFT 10
#define APPL_CFG_MISC_ARCACHE_VAL 3
#define APPL_CFG_SLCG_OVERRIDE 0x114
#define APPL_CFG_SLCG_OVERRIDE_SLCG_EN_MASTER BIT(0)
#define APPL_CAR_RESET_OVRD 0x12C
#define APPL_CAR_RESET_OVRD_CYA_OVERRIDE_CORE_RST_N BIT(0)
#define IO_BASE_IO_DECODE BIT(0)
#define IO_BASE_IO_DECODE_BIT8 BIT(8)
#define CFG_PREF_MEM_LIMIT_BASE_MEM_DECODE BIT(0)
#define CFG_PREF_MEM_LIMIT_BASE_MEM_LIMIT_DECODE BIT(16)
#define CFG_TIMER_CTRL_MAX_FUNC_NUM_OFF 0x718
#define CFG_TIMER_CTRL_ACK_NAK_SHIFT (19)
#define EVENT_COUNTER_ALL_CLEAR 0x3
#define EVENT_COUNTER_ENABLE_ALL 0x7
#define EVENT_COUNTER_ENABLE_SHIFT 2
#define EVENT_COUNTER_EVENT_SEL_MASK GENMASK(7, 0)
#define EVENT_COUNTER_EVENT_SEL_SHIFT 16
#define EVENT_COUNTER_EVENT_Tx_L0S 0x2
#define EVENT_COUNTER_EVENT_Rx_L0S 0x3
#define EVENT_COUNTER_EVENT_L1 0x5
#define EVENT_COUNTER_EVENT_L1_1 0x7
#define EVENT_COUNTER_EVENT_L1_2 0x8
#define EVENT_COUNTER_GROUP_SEL_SHIFT 24
#define EVENT_COUNTER_GROUP_5 0x5
#define PORT_LOGIC_ACK_F_ASPM_CTRL 0x70C
#define ENTER_ASPM BIT(30)
#define L0S_ENTRANCE_LAT_SHIFT 24
#define L0S_ENTRANCE_LAT_MASK GENMASK(26, 24)
#define L1_ENTRANCE_LAT_SHIFT 27
#define L1_ENTRANCE_LAT_MASK GENMASK(29, 27)
#define N_FTS_SHIFT 8
#define N_FTS_MASK GENMASK(7, 0)
#define N_FTS_VAL 52
#define PORT_LOGIC_GEN2_CTRL 0x80C
#define PORT_LOGIC_GEN2_CTRL_DIRECT_SPEED_CHANGE BIT(17)
#define FTS_MASK GENMASK(7, 0)
#define FTS_VAL 52
#define PORT_LOGIC_MSI_CTRL_INT_0_EN 0x828
#define GEN3_EQ_CONTROL_OFF 0x8a8
#define GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_SHIFT 8
#define GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_MASK GENMASK(23, 8)
#define GEN3_EQ_CONTROL_OFF_FB_MODE_MASK GENMASK(3, 0)
#define GEN3_RELATED_OFF 0x890
#define GEN3_RELATED_OFF_GEN3_ZRXDC_NONCOMPL BIT(0)
#define GEN3_RELATED_OFF_GEN3_EQ_DISABLE BIT(16)
#define GEN3_RELATED_OFF_RATE_SHADOW_SEL_SHIFT 24
#define GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK GENMASK(25, 24)
#define PORT_LOGIC_AMBA_ERROR_RESPONSE_DEFAULT 0x8D0
#define AMBA_ERROR_RESPONSE_CRS_SHIFT 3
#define AMBA_ERROR_RESPONSE_CRS_MASK GENMASK(1, 0)
#define AMBA_ERROR_RESPONSE_CRS_OKAY 0
#define AMBA_ERROR_RESPONSE_CRS_OKAY_FFFFFFFF 1
#define AMBA_ERROR_RESPONSE_CRS_OKAY_FFFF0001 2
#define PORT_LOGIC_MSIX_DOORBELL 0x948
#define CAP_SPCIE_CAP_OFF 0x154
#define CAP_SPCIE_CAP_OFF_DSP_TX_PRESET0_MASK GENMASK(3, 0)
#define CAP_SPCIE_CAP_OFF_USP_TX_PRESET0_MASK GENMASK(11, 8)
#define CAP_SPCIE_CAP_OFF_USP_TX_PRESET0_SHIFT 8
#define PME_ACK_TIMEOUT 10000
#define LTSSM_TIMEOUT 50000 /* 50ms */
#define GEN3_GEN4_EQ_PRESET_INIT 5
#define GEN1_CORE_CLK_FREQ 62500000
#define GEN2_CORE_CLK_FREQ 125000000
#define GEN3_CORE_CLK_FREQ 250000000
#define GEN4_CORE_CLK_FREQ 500000000
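/* Core clock frequency for each link speed, indexed by (current link speed - 1) */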
static const unsigned int pcie_gen_freq[] = {
GEN1_CORE_CLK_FREQ,
GEN2_CORE_CLK_FREQ,
GEN3_CORE_CLK_FREQ,
GEN4_CORE_CLK_FREQ
};
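/*
* DBI offsets of the event counter control and data registers; the offsets
* differ per controller instance, so the arrays are indexed by the
* controller ID (pcie->cid).
*/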
static const u32 event_cntr_ctrl_offset[] = {
0x1d8,
0x1a8,
0x1a8,
0x1a8,
0x1c4,
0x1d8
};
static const u32 event_cntr_data_offset[] = {
0x1dc,
0x1ac,
0x1ac,
0x1ac,
0x1c8,
0x1dc
};
struct tegra_pcie_dw {
struct device *dev;
struct resource *appl_res;
struct resource *dbi_res;
struct resource *atu_dma_res;
void __iomem *appl_base;
struct clk *core_clk;
struct reset_control *core_apb_rst;
struct reset_control *core_rst;
struct dw_pcie pci;
struct tegra_bpmp *bpmp;
bool supports_clkreq;
bool enable_cdm_check;
bool link_state;
bool update_fc_fixup;
u8 init_link_width;
u32 msi_ctrl_int;
u32 num_lanes;
u32 max_speed;
u32 cid;
u32 cfg_link_cap_l1sub;
u32 pcie_cap_base;
u32 aspm_cmrt;
u32 aspm_pwr_on_t;
u32 aspm_l0s_enter_lat;
struct regulator *pex_ctl_supply;
struct regulator *slot_ctl_3v3;
struct regulator *slot_ctl_12v;
unsigned int phy_count;
struct phy **phys;
struct dentry *debugfs;
};
static inline struct tegra_pcie_dw *to_tegra_pcie(struct dw_pcie *pci)
{
return container_of(pci, struct tegra_pcie_dw, pci);
}
static inline void appl_writel(struct tegra_pcie_dw *pcie, const u32 value,
const u32 reg)
{
writel_relaxed(value, pcie->appl_base + reg);
}
static inline u32 appl_readl(struct tegra_pcie_dw *pcie, const u32 reg)
{
return readl_relaxed(pcie->appl_base + reg);
}
struct tegra_pcie_soc {
enum dw_pcie_device_mode mode;
};
static void apply_bad_link_workaround(struct pcie_port *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
u32 current_link_width;
u16 val;
/*
* NOTE: Since this scenario is uncommon and the link as such is not
* stable anyway, don't wait to confirm whether the link is really
* transitioning to Gen-2 speed.
*/
val = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKSTA);
if (val & PCI_EXP_LNKSTA_LBMS) {
current_link_width = (val & PCI_EXP_LNKSTA_NLW) >>
PCI_EXP_LNKSTA_NLW_SHIFT;
if (pcie->init_link_width > current_link_width) {
dev_warn(pci->dev, "PCIe link is bad, width reduced\n");
val = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base +
PCI_EXP_LNKCTL2);
val &= ~PCI_EXP_LNKCTL2_TLS;
val |= PCI_EXP_LNKCTL2_TLS_2_5GT;
dw_pcie_writew_dbi(pci, pcie->pcie_cap_base +
PCI_EXP_LNKCTL2, val);
val = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base +
PCI_EXP_LNKCTL);
val |= PCI_EXP_LNKCTL_RL;
dw_pcie_writew_dbi(pci, pcie->pcie_cap_base +
PCI_EXP_LNKCTL, val);
}
}
}
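/*
* Root Port interrupt handler: services link-state change, bandwidth
* management / autonomous bandwidth and CDM register check events signalled
* through the APPL interrupt status registers.
*/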
static irqreturn_t tegra_pcie_rp_irq_handler(struct tegra_pcie_dw *pcie)
{
struct dw_pcie *pci = &pcie->pci;
struct pcie_port *pp = &pci->pp;
u32 val, tmp;
u16 val_w;
val = appl_readl(pcie, APPL_INTR_STATUS_L0);
if (val & APPL_INTR_STATUS_L0_LINK_STATE_INT) {
val = appl_readl(pcie, APPL_INTR_STATUS_L1_0_0);
if (val & APPL_INTR_STATUS_L1_0_0_LINK_REQ_RST_NOT_CHGED) {
appl_writel(pcie, val, APPL_INTR_STATUS_L1_0_0);
/* SBR & Surprise Link Down WAR */
val = appl_readl(pcie, APPL_CAR_RESET_OVRD);
val &= ~APPL_CAR_RESET_OVRD_CYA_OVERRIDE_CORE_RST_N;
appl_writel(pcie, val, APPL_CAR_RESET_OVRD);
udelay(1);
val = appl_readl(pcie, APPL_CAR_RESET_OVRD);
val |= APPL_CAR_RESET_OVRD_CYA_OVERRIDE_CORE_RST_N;
appl_writel(pcie, val, APPL_CAR_RESET_OVRD);
val = dw_pcie_readl_dbi(pci, PORT_LOGIC_GEN2_CTRL);
val |= PORT_LOGIC_GEN2_CTRL_DIRECT_SPEED_CHANGE;
dw_pcie_writel_dbi(pci, PORT_LOGIC_GEN2_CTRL, val);
}
}
if (val & APPL_INTR_STATUS_L0_INT_INT) {
val = appl_readl(pcie, APPL_INTR_STATUS_L1_8_0);
if (val & APPL_INTR_STATUS_L1_8_0_AUTO_BW_INT_STS) {
appl_writel(pcie,
APPL_INTR_STATUS_L1_8_0_AUTO_BW_INT_STS,
APPL_INTR_STATUS_L1_8_0);
apply_bad_link_workaround(pp);
}
if (val & APPL_INTR_STATUS_L1_8_0_BW_MGT_INT_STS) {
appl_writel(pcie,
APPL_INTR_STATUS_L1_8_0_BW_MGT_INT_STS,
APPL_INTR_STATUS_L1_8_0);
val_w = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base +
PCI_EXP_LNKSTA);
dev_dbg(pci->dev, "Link Speed : Gen-%u\n", val_w &
PCI_EXP_LNKSTA_CLS);
}
}
val = appl_readl(pcie, APPL_INTR_STATUS_L0);
if (val & APPL_INTR_STATUS_L0_CDM_REG_CHK_INT) {
val = appl_readl(pcie, APPL_INTR_STATUS_L1_18);
tmp = dw_pcie_readl_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS);
if (val & APPL_INTR_STATUS_L1_18_CDM_REG_CHK_CMPLT) {
dev_info(pci->dev, "CDM check complete\n");
tmp |= PCIE_PL_CHK_REG_CHK_REG_COMPLETE;
}
if (val & APPL_INTR_STATUS_L1_18_CDM_REG_CHK_CMP_ERR) {
dev_err(pci->dev, "CDM comparison mismatch\n");
tmp |= PCIE_PL_CHK_REG_CHK_REG_COMPARISON_ERROR;
}
if (val & APPL_INTR_STATUS_L1_18_CDM_REG_CHK_LOGIC_ERR) {
dev_err(pci->dev, "CDM Logic error\n");
tmp |= PCIE_PL_CHK_REG_CHK_REG_LOGIC_ERROR;
}
dw_pcie_writel_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS, tmp);
tmp = dw_pcie_readl_dbi(pci, PCIE_PL_CHK_REG_ERR_ADDR);
dev_err(pci->dev, "CDM Error Address Offset = 0x%08X\n", tmp);
}
return IRQ_HANDLED;
}
static irqreturn_t tegra_pcie_irq_handler(int irq, void *arg)
{
struct tegra_pcie_dw *pcie = arg;
return tegra_pcie_rp_irq_handler(pcie);
}
static int tegra_pcie_dw_rd_own_conf(struct pcie_port *pp, int where, int size,
u32 *val)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
/*
* This is an endpoint-mode-specific register that happens to appear
* even when the controller is operating in root port mode, and the
* system hangs when it is accessed with the link in the ASPM L1
* state. So skip accessing it altogether.
*/
if (where == PORT_LOGIC_MSIX_DOORBELL) {
*val = 0x00000000;
return PCIBIOS_SUCCESSFUL;
}
return dw_pcie_read(pci->dbi_base + where, size, val);
}
static int tegra_pcie_dw_wr_own_conf(struct pcie_port *pp, int where, int size,
u32 val)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
/*
* This is an endpoint-mode-specific register that happens to appear
* even when the controller is operating in root port mode, and the
* system hangs when it is accessed with the link in the ASPM L1
* state. So skip accessing it altogether.
*/
if (where == PORT_LOGIC_MSIX_DOORBELL)
return PCIBIOS_SUCCESSFUL;
return dw_pcie_write(pci->dbi_base + where, size, val);
}
#if defined(CONFIG_PCIEASPM)
static void disable_aspm_l11(struct tegra_pcie_dw *pcie)
{
u32 val;
val = dw_pcie_readl_dbi(&pcie->pci, pcie->cfg_link_cap_l1sub);
val &= ~PCI_L1SS_CAP_ASPM_L1_1;
dw_pcie_writel_dbi(&pcie->pci, pcie->cfg_link_cap_l1sub, val);
}
static void disable_aspm_l12(struct tegra_pcie_dw *pcie)
{
u32 val;
val = dw_pcie_readl_dbi(&pcie->pci, pcie->cfg_link_cap_l1sub);
val &= ~PCI_L1SS_CAP_ASPM_L1_2;
dw_pcie_writel_dbi(&pcie->pci, pcie->cfg_link_cap_l1sub, val);
}
static inline u32 event_counter_prog(struct tegra_pcie_dw *pcie, u32 event)
{
u32 val;
val = dw_pcie_readl_dbi(&pcie->pci, event_cntr_ctrl_offset[pcie->cid]);
val &= ~(EVENT_COUNTER_EVENT_SEL_MASK << EVENT_COUNTER_EVENT_SEL_SHIFT);
val |= EVENT_COUNTER_GROUP_5 << EVENT_COUNTER_GROUP_SEL_SHIFT;
val |= event << EVENT_COUNTER_EVENT_SEL_SHIFT;
val |= EVENT_COUNTER_ENABLE_ALL << EVENT_COUNTER_ENABLE_SHIFT;
dw_pcie_writel_dbi(&pcie->pci, event_cntr_ctrl_offset[pcie->cid], val);
val = dw_pcie_readl_dbi(&pcie->pci, event_cntr_data_offset[pcie->cid]);
return val;
}
static int aspm_state_cnt(struct seq_file *s, void *data)
{
struct tegra_pcie_dw *pcie = (struct tegra_pcie_dw *)
dev_get_drvdata(s->private);
u32 val;
seq_printf(s, "Tx L0s entry count : %u\n",
event_counter_prog(pcie, EVENT_COUNTER_EVENT_Tx_L0S));
seq_printf(s, "Rx L0s entry count : %u\n",
event_counter_prog(pcie, EVENT_COUNTER_EVENT_Rx_L0S));
seq_printf(s, "Link L1 entry count : %u\n",
event_counter_prog(pcie, EVENT_COUNTER_EVENT_L1));
seq_printf(s, "Link L1.1 entry count : %u\n",
event_counter_prog(pcie, EVENT_COUNTER_EVENT_L1_1));
seq_printf(s, "Link L1.2 entry count : %u\n",
event_counter_prog(pcie, EVENT_COUNTER_EVENT_L1_2));
/* Clear all counters */
dw_pcie_writel_dbi(&pcie->pci, event_cntr_ctrl_offset[pcie->cid],
EVENT_COUNTER_ALL_CLEAR);
/* Re-enable counting */
val = EVENT_COUNTER_ENABLE_ALL << EVENT_COUNTER_ENABLE_SHIFT;
val |= EVENT_COUNTER_GROUP_5 << EVENT_COUNTER_GROUP_SEL_SHIFT;
dw_pcie_writel_dbi(&pcie->pci, event_cntr_ctrl_offset[pcie->cid], val);
return 0;
}
static void init_host_aspm(struct tegra_pcie_dw *pcie)
{
struct dw_pcie *pci = &pcie->pci;
u32 val;
val = dw_pcie_find_ext_capability(pci, PCI_EXT_CAP_ID_L1SS);
pcie->cfg_link_cap_l1sub = val + PCI_L1SS_CAP;
/* Enable ASPM counters */
val = EVENT_COUNTER_ENABLE_ALL << EVENT_COUNTER_ENABLE_SHIFT;
val |= EVENT_COUNTER_GROUP_5 << EVENT_COUNTER_GROUP_SEL_SHIFT;
dw_pcie_writel_dbi(pci, event_cntr_ctrl_offset[pcie->cid], val);
/* Program T_cmrt and T_pwr_on values */
val = dw_pcie_readl_dbi(pci, pcie->cfg_link_cap_l1sub);
val &= ~(PCI_L1SS_CAP_CM_RESTORE_TIME | PCI_L1SS_CAP_P_PWR_ON_VALUE);
val |= (pcie->aspm_cmrt << 8);
val |= (pcie->aspm_pwr_on_t << 19);
dw_pcie_writel_dbi(pci, pcie->cfg_link_cap_l1sub, val);
/* Program L0s and L1 entrance latencies */
val = dw_pcie_readl_dbi(pci, PORT_LOGIC_ACK_F_ASPM_CTRL);
val &= ~L0S_ENTRANCE_LAT_MASK;
val |= (pcie->aspm_l0s_enter_lat << L0S_ENTRANCE_LAT_SHIFT);
val |= ENTER_ASPM;
dw_pcie_writel_dbi(pci, PORT_LOGIC_ACK_F_ASPM_CTRL, val);
}
static int init_debugfs(struct tegra_pcie_dw *pcie)
{
struct dentry *d;
d = debugfs_create_devm_seqfile(pcie->dev, "aspm_state_cnt",
pcie->debugfs, aspm_state_cnt);
if (IS_ERR_OR_NULL(d))
dev_err(pcie->dev,
"Failed to create debugfs file \"aspm_state_cnt\"\n");
return 0;
}
#else
static inline void disable_aspm_l12(struct tegra_pcie_dw *pcie) { return; }
static inline void disable_aspm_l11(struct tegra_pcie_dw *pcie) { return; }
static inline void init_host_aspm(struct tegra_pcie_dw *pcie) { return; }
static inline int init_debugfs(struct tegra_pcie_dw *pcie) { return 0; }
#endif
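/*
* Enable link-state change and, if configured, CDM register check interrupts;
* also latch the initial link width and enable link bandwidth management
* notifications used by the bad-link workaround.
*/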
static void tegra_pcie_enable_system_interrupts(struct pcie_port *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
u32 val;
u16 val_w;
val = appl_readl(pcie, APPL_INTR_EN_L0_0);
val |= APPL_INTR_EN_L0_0_LINK_STATE_INT_EN;
appl_writel(pcie, val, APPL_INTR_EN_L0_0);
val = appl_readl(pcie, APPL_INTR_EN_L1_0_0);
val |= APPL_INTR_EN_L1_0_0_LINK_REQ_RST_NOT_INT_EN;
appl_writel(pcie, val, APPL_INTR_EN_L1_0_0);
if (pcie->enable_cdm_check) {
val = appl_readl(pcie, APPL_INTR_EN_L0_0);
val |= APPL_INTR_EN_L0_0_CDM_REG_CHK_INT_EN;
appl_writel(pcie, val, APPL_INTR_EN_L0_0);
val = appl_readl(pcie, APPL_INTR_EN_L1_18);
val |= APPL_INTR_EN_L1_18_CDM_REG_CHK_CMP_ERR;
val |= APPL_INTR_EN_L1_18_CDM_REG_CHK_LOGIC_ERR;
appl_writel(pcie, val, APPL_INTR_EN_L1_18);
}
val_w = dw_pcie_readw_dbi(&pcie->pci, pcie->pcie_cap_base +
PCI_EXP_LNKSTA);
pcie->init_link_width = (val_w & PCI_EXP_LNKSTA_NLW) >>
PCI_EXP_LNKSTA_NLW_SHIFT;
val_w = dw_pcie_readw_dbi(&pcie->pci, pcie->pcie_cap_base +
PCI_EXP_LNKCTL);
val_w |= PCI_EXP_LNKCTL_LBMIE;
dw_pcie_writew_dbi(&pcie->pci, pcie->pcie_cap_base + PCI_EXP_LNKCTL,
val_w);
}
static void tegra_pcie_enable_legacy_interrupts(struct pcie_port *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
u32 val;
/* Enable legacy interrupt generation */
val = appl_readl(pcie, APPL_INTR_EN_L0_0);
val |= APPL_INTR_EN_L0_0_SYS_INTR_EN;
val |= APPL_INTR_EN_L0_0_INT_INT_EN;
appl_writel(pcie, val, APPL_INTR_EN_L0_0);
val = appl_readl(pcie, APPL_INTR_EN_L1_8_0);
val |= APPL_INTR_EN_L1_8_INTX_EN;
val |= APPL_INTR_EN_L1_8_AUTO_BW_INT_EN;
val |= APPL_INTR_EN_L1_8_BW_MGT_INT_EN;
if (IS_ENABLED(CONFIG_PCIEAER))
val |= APPL_INTR_EN_L1_8_AER_INT_EN;
appl_writel(pcie, val, APPL_INTR_EN_L1_8_0);
}
static void tegra_pcie_enable_msi_interrupts(struct pcie_port *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
u32 val;
dw_pcie_msi_init(pp);
/* Enable MSI interrupt generation */
val = appl_readl(pcie, APPL_INTR_EN_L0_0);
val |= APPL_INTR_EN_L0_0_SYS_MSI_INTR_EN;
val |= APPL_INTR_EN_L0_0_MSI_RCV_INT_EN;
appl_writel(pcie, val, APPL_INTR_EN_L0_0);
}
static void tegra_pcie_enable_interrupts(struct pcie_port *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
/* Clear interrupt statuses before enabling interrupts */
appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L0);
appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_0_0);
appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_1);
appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_2);
appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_3);
appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_6);
appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_7);
appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_8_0);
appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_9);
appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_10);
appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_11);
appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_13);
appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_14);
appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_15);
appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_17);
tegra_pcie_enable_system_interrupts(pp);
tegra_pcie_enable_legacy_interrupts(pp);
if (IS_ENABLED(CONFIG_PCI_MSI))
tegra_pcie_enable_msi_interrupts(pp);
}
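/*
* Program the per-lane transmitter preset values used during 8.0 GT/s and
* 16.0 GT/s link equalization.
*/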
static void config_gen3_gen4_eq_presets(struct tegra_pcie_dw *pcie)
{
struct dw_pcie *pci = &pcie->pci;
u32 val, offset, i;
/* Program init preset */
for (i = 0; i < pcie->num_lanes; i++) {
dw_pcie_read(pci->dbi_base + CAP_SPCIE_CAP_OFF
+ (i * 2), 2, &val);
val &= ~CAP_SPCIE_CAP_OFF_DSP_TX_PRESET0_MASK;
val |= GEN3_GEN4_EQ_PRESET_INIT;
val &= ~CAP_SPCIE_CAP_OFF_USP_TX_PRESET0_MASK;
val |= (GEN3_GEN4_EQ_PRESET_INIT <<
CAP_SPCIE_CAP_OFF_USP_TX_PRESET0_SHIFT);
dw_pcie_write(pci->dbi_base + CAP_SPCIE_CAP_OFF
+ (i * 2), 2, val);
offset = dw_pcie_find_ext_capability(pci,
PCI_EXT_CAP_ID_PL_16GT) +
PCI_PL_16GT_LE_CTRL;
dw_pcie_read(pci->dbi_base + offset + i, 1, &val);
val &= ~PCI_PL_16GT_LE_CTRL_DSP_TX_PRESET_MASK;
val |= GEN3_GEN4_EQ_PRESET_INIT;
val &= ~PCI_PL_16GT_LE_CTRL_USP_TX_PRESET_MASK;
val |= (GEN3_GEN4_EQ_PRESET_INIT <<
PCI_PL_16GT_LE_CTRL_USP_TX_PRESET_SHIFT);
dw_pcie_write(pci->dbi_base + offset + i, 1, val);
}
val = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF);
val &= ~GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK;
dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, val);
val = dw_pcie_readl_dbi(pci, GEN3_EQ_CONTROL_OFF);
val &= ~GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_MASK;
val |= (0x3ff << GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_SHIFT);
val &= ~GEN3_EQ_CONTROL_OFF_FB_MODE_MASK;
dw_pcie_writel_dbi(pci, GEN3_EQ_CONTROL_OFF, val);
val = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF);
val &= ~GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK;
val |= (0x1 << GEN3_RELATED_OFF_RATE_SHADOW_SEL_SHIFT);
dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, val);
val = dw_pcie_readl_dbi(pci, GEN3_EQ_CONTROL_OFF);
val &= ~GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_MASK;
val |= (0x360 << GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_SHIFT);
val &= ~GEN3_EQ_CONTROL_OFF_FB_MODE_MASK;
dw_pcie_writel_dbi(pci, GEN3_EQ_CONTROL_OFF, val);
val = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF);
val &= ~GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK;
dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, val);
}
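/*
* Program the root port's DBI and application registers, then assert PERST#,
* enable the LTSSM and de-assert PERST# to start link training.
*/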
static void tegra_pcie_prepare_host(struct pcie_port *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
u32 val;
val = dw_pcie_readl_dbi(pci, PCI_IO_BASE);
val &= ~(IO_BASE_IO_DECODE | IO_BASE_IO_DECODE_BIT8);
dw_pcie_writel_dbi(pci, PCI_IO_BASE, val);
val = dw_pcie_readl_dbi(pci, PCI_PREF_MEMORY_BASE);
val |= CFG_PREF_MEM_LIMIT_BASE_MEM_DECODE;
val |= CFG_PREF_MEM_LIMIT_BASE_MEM_LIMIT_DECODE;
dw_pcie_writel_dbi(pci, PCI_PREF_MEMORY_BASE, val);
dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, 0);
/* Configure FTS */
val = dw_pcie_readl_dbi(pci, PORT_LOGIC_ACK_F_ASPM_CTRL);
val &= ~(N_FTS_MASK << N_FTS_SHIFT);
val |= N_FTS_VAL << N_FTS_SHIFT;
dw_pcie_writel_dbi(pci, PORT_LOGIC_ACK_F_ASPM_CTRL, val);
val = dw_pcie_readl_dbi(pci, PORT_LOGIC_GEN2_CTRL);
val &= ~FTS_MASK;
val |= FTS_VAL;
dw_pcie_writel_dbi(pci, PORT_LOGIC_GEN2_CTRL, val);
/* Enable as 0xFFFF0001 response for CRS */
val = dw_pcie_readl_dbi(pci, PORT_LOGIC_AMBA_ERROR_RESPONSE_DEFAULT);
val &= ~(AMBA_ERROR_RESPONSE_CRS_MASK << AMBA_ERROR_RESPONSE_CRS_SHIFT);
val |= (AMBA_ERROR_RESPONSE_CRS_OKAY_FFFF0001 <<
AMBA_ERROR_RESPONSE_CRS_SHIFT);
dw_pcie_writel_dbi(pci, PORT_LOGIC_AMBA_ERROR_RESPONSE_DEFAULT, val);
/* Configure Max Speed from DT */
if (pcie->max_speed && pcie->max_speed != -EINVAL) {
val = dw_pcie_readl_dbi(pci, pcie->pcie_cap_base +
PCI_EXP_LNKCAP);
val &= ~PCI_EXP_LNKCAP_SLS;
val |= pcie->max_speed;
dw_pcie_writel_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKCAP,
val);
}
/* Configure Max lane width from DT */
val = dw_pcie_readl_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKCAP);
val &= ~PCI_EXP_LNKCAP_MLW;
val |= (pcie->num_lanes << PCI_EXP_LNKSTA_NLW_SHIFT);
dw_pcie_writel_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKCAP, val);
config_gen3_gen4_eq_presets(pcie);
init_host_aspm(pcie);
val = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF);
val &= ~GEN3_RELATED_OFF_GEN3_ZRXDC_NONCOMPL;
dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, val);
if (pcie->update_fc_fixup) {
val = dw_pcie_readl_dbi(pci, CFG_TIMER_CTRL_MAX_FUNC_NUM_OFF);
val |= 0x1 << CFG_TIMER_CTRL_ACK_NAK_SHIFT;
dw_pcie_writel_dbi(pci, CFG_TIMER_CTRL_MAX_FUNC_NUM_OFF, val);
}
dw_pcie_setup_rc(pp);
clk_set_rate(pcie->core_clk, GEN4_CORE_CLK_FREQ);
/* Assert RST */
val = appl_readl(pcie, APPL_PINMUX);
val &= ~APPL_PINMUX_PEX_RST;
appl_writel(pcie, val, APPL_PINMUX);
usleep_range(100, 200);
/* Enable LTSSM */
val = appl_readl(pcie, APPL_CTRL);
val |= APPL_CTRL_LTSSM_EN;
appl_writel(pcie, val, APPL_CTRL);
/* De-assert RST */
val = appl_readl(pcie, APPL_PINMUX);
val |= APPL_PINMUX_PEX_RST;
appl_writel(pcie, val, APPL_PINMUX);
msleep(100);
}
static int tegra_pcie_dw_host_init(struct pcie_port *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
u32 val, tmp, offset, speed;
tegra_pcie_prepare_host(pp);
if (dw_pcie_wait_for_link(pci)) {
/*
* There are some endpoints which can't get the link up if the
* root port has the Data Link Feature (DLF) enabled.
* Refer to spec r4.0 v1.0, sec 3.4.2 & 7.7.4 for more info on
* Scaled Flow Control and DLF.
* So confirm that this is indeed the case here and attempt link
* up once again with DLF disabled.
*/
val = appl_readl(pcie, APPL_DEBUG);
val &= APPL_DEBUG_LTSSM_STATE_MASK;
val >>= APPL_DEBUG_LTSSM_STATE_SHIFT;
tmp = appl_readl(pcie, APPL_LINK_STATUS);
tmp &= APPL_LINK_STATUS_RDLH_LINK_UP;
if (!(val == 0x11 && !tmp)) {
/* Link is down for all good reasons */
return 0;
}
dev_info(pci->dev, "Link is down in DLL");
dev_info(pci->dev, "Trying again with DLFE disabled\n");
/* Disable LTSSM */
val = appl_readl(pcie, APPL_CTRL);
val &= ~APPL_CTRL_LTSSM_EN;
appl_writel(pcie, val, APPL_CTRL);
reset_control_assert(pcie->core_rst);
reset_control_deassert(pcie->core_rst);
offset = dw_pcie_find_ext_capability(pci, PCI_EXT_CAP_ID_DLF);
val = dw_pcie_readl_dbi(pci, offset + PCI_DLF_CAP);
val &= ~PCI_DLF_EXCHANGE_ENABLE;
dw_pcie_writel_dbi(pci, offset, val);
tegra_pcie_prepare_host(pp);
if (dw_pcie_wait_for_link(pci))
return 0;
}
speed = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKSTA) &
PCI_EXP_LNKSTA_CLS;
clk_set_rate(pcie->core_clk, pcie_gen_freq[speed - 1]);
tegra_pcie_enable_interrupts(pp);
return 0;
}
static int tegra_pcie_dw_link_up(struct dw_pcie *pci)
{
struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
u32 val = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKSTA);
return !!(val & PCI_EXP_LNKSTA_DLLLA);
}
static void tegra_pcie_set_msi_vec_num(struct pcie_port *pp)
{
pp->num_vectors = MAX_MSI_IRQS;
}
static const struct dw_pcie_ops tegra_dw_pcie_ops = {
.link_up = tegra_pcie_dw_link_up,
};
static struct dw_pcie_host_ops tegra_pcie_dw_host_ops = {
.rd_own_conf = tegra_pcie_dw_rd_own_conf,
.wr_own_conf = tegra_pcie_dw_wr_own_conf,
.host_init = tegra_pcie_dw_host_init,
.set_num_vectors = tegra_pcie_set_msi_vec_num,
};
static void tegra_pcie_disable_phy(struct tegra_pcie_dw *pcie)
{
unsigned int phy_count = pcie->phy_count;
while (phy_count--) {
phy_power_off(pcie->phys[phy_count]);
phy_exit(pcie->phys[phy_count]);
}
}
static int tegra_pcie_enable_phy(struct tegra_pcie_dw *pcie)
{
unsigned int i;
int ret;
for (i = 0; i < pcie->phy_count; i++) {
ret = phy_init(pcie->phys[i]);
if (ret < 0)
goto phy_power_off;
ret = phy_power_on(pcie->phys[i]);
if (ret < 0)
goto phy_exit;
}
return 0;
phy_power_off:
while (i--) {
phy_power_off(pcie->phys[i]);
phy_exit:
phy_exit(pcie->phys[i]);
}
return ret;
}
static int tegra_pcie_dw_parse_dt(struct tegra_pcie_dw *pcie)
{
struct device_node *np = pcie->dev->of_node;
int ret;
ret = of_property_read_u32(np, "nvidia,aspm-cmrt-us", &pcie->aspm_cmrt);
if (ret < 0) {
dev_info(pcie->dev, "Failed to read ASPM T_cmrt: %d\n", ret);
return ret;
}
ret = of_property_read_u32(np, "nvidia,aspm-pwr-on-t-us",
&pcie->aspm_pwr_on_t);
if (ret < 0)
dev_info(pcie->dev, "Failed to read ASPM Power On time: %d\n",
ret);
ret = of_property_read_u32(np, "nvidia,aspm-l0s-entrance-latency-us",
&pcie->aspm_l0s_enter_lat);
if (ret < 0)
dev_info(pcie->dev,
"Failed to read ASPM L0s Entrance latency: %d\n", ret);
ret = of_property_read_u32(np, "num-lanes", &pcie->num_lanes);
if (ret < 0) {
dev_err(pcie->dev, "Failed to read num-lanes: %d\n", ret);
return ret;
}
pcie->max_speed = of_pci_get_max_link_speed(np);
ret = of_property_read_u32_index(np, "nvidia,bpmp", 1, &pcie->cid);
if (ret) {
dev_err(pcie->dev, "Failed to read Controller-ID: %d\n", ret);
return ret;
}
ret = of_property_count_strings(np, "phy-names");
if (ret < 0) {
dev_err(pcie->dev, "Failed to find PHY entries: %d\n",
ret);
return ret;
}
pcie->phy_count = ret;
if (of_property_read_bool(np, "nvidia,update-fc-fixup"))
pcie->update_fc_fixup = true;
pcie->supports_clkreq =
of_property_read_bool(pcie->dev->of_node, "supports-clkreq");
pcie->enable_cdm_check =
of_property_read_bool(np, "snps,enable-cdm-check");
return 0;
}
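/* Ask the BPMP firmware, via an MRQ_UPHY request, to enable or disable this PCIe controller */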
static int tegra_pcie_bpmp_set_ctrl_state(struct tegra_pcie_dw *pcie,
bool enable)
{
struct mrq_uphy_response resp;
struct tegra_bpmp_message msg;
struct mrq_uphy_request req;
/* Controller-5 doesn't need to have its state set by BPMP-FW */
if (pcie->cid == 5)
return 0;
memset(&req, 0, sizeof(req));
memset(&resp, 0, sizeof(resp));
req.cmd = CMD_UPHY_PCIE_CONTROLLER_STATE;
req.controller_state.pcie_controller = pcie->cid;
req.controller_state.enable = enable;
memset(&msg, 0, sizeof(msg));
msg.mrq = MRQ_UPHY;
msg.tx.data = &req;
msg.tx.size = sizeof(req);
msg.rx.data = &resp;
msg.rx.size = sizeof(resp);
return tegra_bpmp_transfer(pcie->bpmp, &msg);
}
static void tegra_pcie_downstream_dev_to_D0(struct tegra_pcie_dw *pcie)
{
struct pcie_port *pp = &pcie->pci.pp;
struct pci_bus *child, *root_bus = NULL;
struct pci_dev *pdev;
/*
* With some endpoints, the link doesn't go into the L2 state on Tegra
* if they are not in the D0 state. So make sure that the immediate
* downstream devices are in D0 before sending PME_Turn_Off to put the
* link into L2.
* This is as per PCI Express Base r4.0 v1.0, September 27 2017,
* sec 5.2 Link State Power Management (page 428).
*/
list_for_each_entry(child, &pp->root_bus->children, node) {
/* Bring downstream devices to D0 if they are not already in */
if (child->parent == pp->root_bus) {
root_bus = child;
break;
}
}
if (!root_bus) {
dev_err(pcie->dev, "Failed to find downstream devices\n");
return;
}
list_for_each_entry(pdev, &root_bus->devices, bus_list) {
if (PCI_SLOT(pdev->devfn) == 0) {
if (pci_set_power_state(pdev, PCI_D0))
dev_err(pcie->dev,
"Failed to transition %s to D0 state\n",
dev_name(&pdev->dev));
}
}
}
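/* The 3.3V and 12V slot supplies are optional; -ENODEV just means the platform has none */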
static int tegra_pcie_get_slot_regulators(struct tegra_pcie_dw *pcie)
{
pcie->slot_ctl_3v3 = devm_regulator_get_optional(pcie->dev, "vpcie3v3");
if (IS_ERR(pcie->slot_ctl_3v3)) {
if (PTR_ERR(pcie->slot_ctl_3v3) != -ENODEV)
return PTR_ERR(pcie->slot_ctl_3v3);
pcie->slot_ctl_3v3 = NULL;
}
pcie->slot_ctl_12v = devm_regulator_get_optional(pcie->dev, "vpcie12v");
if (IS_ERR(pcie->slot_ctl_12v)) {
if (PTR_ERR(pcie->slot_ctl_12v) != -ENODEV)
return PTR_ERR(pcie->slot_ctl_12v);
pcie->slot_ctl_12v = NULL;
}
return 0;
}
static int tegra_pcie_enable_slot_regulators(struct tegra_pcie_dw *pcie)
{
int ret;
if (pcie->slot_ctl_3v3) {
ret = regulator_enable(pcie->slot_ctl_3v3);
if (ret < 0) {
dev_err(pcie->dev,
"Failed to enable 3.3V slot supply: %d\n", ret);
return ret;
}
}
if (pcie->slot_ctl_12v) {
ret = regulator_enable(pcie->slot_ctl_12v);
if (ret < 0) {
dev_err(pcie->dev,
"Failed to enable 12V slot supply: %d\n", ret);
goto fail_12v_enable;
}
}
/*
* According to PCI Express Card Electromechanical Specification
* Revision 1.1, Table-2.4, T_PVPERL (Power stable to PERST# inactive)
* should be a minimum of 100ms.
*/
if (pcie->slot_ctl_3v3 || pcie->slot_ctl_12v)
msleep(100);
return 0;
fail_12v_enable:
if (pcie->slot_ctl_3v3)
regulator_disable(pcie->slot_ctl_3v3);
return ret;
}
static void tegra_pcie_disable_slot_regulators(struct tegra_pcie_dw *pcie)
{
if (pcie->slot_ctl_12v)
regulator_disable(pcie->slot_ctl_12v);
if (pcie->slot_ctl_3v3)
regulator_disable(pcie->slot_ctl_3v3);
}
static int tegra_pcie_config_controller(struct tegra_pcie_dw *pcie,
bool en_hw_hot_rst)
{
int ret;
u32 val;
ret = tegra_pcie_bpmp_set_ctrl_state(pcie, true);
if (ret) {
dev_err(pcie->dev,
"Failed to enable controller %u: %d\n", pcie->cid, ret);
return ret;
}
ret = tegra_pcie_enable_slot_regulators(pcie);
if (ret < 0)
goto fail_slot_reg_en;
ret = regulator_enable(pcie->pex_ctl_supply);
if (ret < 0) {
dev_err(pcie->dev, "Failed to enable regulator: %d\n", ret);
goto fail_reg_en;
}
ret = clk_prepare_enable(pcie->core_clk);
if (ret) {
dev_err(pcie->dev, "Failed to enable core clock: %d\n", ret);
goto fail_core_clk;
}
ret = reset_control_deassert(pcie->core_apb_rst);
if (ret) {
dev_err(pcie->dev, "Failed to deassert core APB reset: %d\n",
ret);
goto fail_core_apb_rst;
}
if (en_hw_hot_rst) {
/* Enable HW_HOT_RST mode */
val = appl_readl(pcie, APPL_CTRL);
val &= ~(APPL_CTRL_HW_HOT_RST_MODE_MASK <<
APPL_CTRL_HW_HOT_RST_MODE_SHIFT);
val |= APPL_CTRL_HW_HOT_RST_EN;
appl_writel(pcie, val, APPL_CTRL);
}
ret = tegra_pcie_enable_phy(pcie);
if (ret) {
dev_err(pcie->dev, "Failed to enable PHY: %d\n", ret);
goto fail_phy;
}
/* Update CFG base address */
appl_writel(pcie, pcie->dbi_res->start & APPL_CFG_BASE_ADDR_MASK,
APPL_CFG_BASE_ADDR);
/* Configure this core for RP mode operation */
appl_writel(pcie, APPL_DM_TYPE_RP, APPL_DM_TYPE);
appl_writel(pcie, 0x0, APPL_CFG_SLCG_OVERRIDE);
val = appl_readl(pcie, APPL_CTRL);
appl_writel(pcie, val | APPL_CTRL_SYS_PRE_DET_STATE, APPL_CTRL);
val = appl_readl(pcie, APPL_CFG_MISC);
val |= (APPL_CFG_MISC_ARCACHE_VAL << APPL_CFG_MISC_ARCACHE_SHIFT);
appl_writel(pcie, val, APPL_CFG_MISC);
if (!pcie->supports_clkreq) {
val = appl_readl(pcie, APPL_PINMUX);
val |= APPL_PINMUX_CLKREQ_OUT_OVRD_EN;
val |= APPL_PINMUX_CLKREQ_OUT_OVRD;
appl_writel(pcie, val, APPL_PINMUX);
}
/* Update iATU_DMA base address */
appl_writel(pcie,
pcie->atu_dma_res->start & APPL_CFG_IATU_DMA_BASE_ADDR_MASK,
APPL_CFG_IATU_DMA_BASE_ADDR);
reset_control_deassert(pcie->core_rst);
pcie->pcie_cap_base = dw_pcie_find_capability(&pcie->pci,
PCI_CAP_ID_EXP);
/* Disable ASPM-L1SS advertisement as there is no CLKREQ routing */
if (!pcie->supports_clkreq) {
disable_aspm_l11(pcie);
disable_aspm_l12(pcie);
}
return ret;
fail_phy:
reset_control_assert(pcie->core_apb_rst);
fail_core_apb_rst:
clk_disable_unprepare(pcie->core_clk);
fail_core_clk:
regulator_disable(pcie->pex_ctl_supply);
fail_reg_en:
tegra_pcie_disable_slot_regulators(pcie);
fail_slot_reg_en:
tegra_pcie_bpmp_set_ctrl_state(pcie, false);
return ret;
}
static int __deinit_controller(struct tegra_pcie_dw *pcie)
{
int ret;
ret = reset_control_assert(pcie->core_rst);
if (ret) {
dev_err(pcie->dev, "Failed to assert \"core\" reset: %d\n",
ret);
return ret;
}
tegra_pcie_disable_phy(pcie);
ret = reset_control_assert(pcie->core_apb_rst);
if (ret) {
dev_err(pcie->dev, "Failed to assert APB reset: %d\n", ret);
return ret;
}
clk_disable_unprepare(pcie->core_clk);
ret = regulator_disable(pcie->pex_ctl_supply);
if (ret) {
dev_err(pcie->dev, "Failed to disable regulator: %d\n", ret);
return ret;
}
tegra_pcie_disable_slot_regulators(pcie);
ret = tegra_pcie_bpmp_set_ctrl_state(pcie, false);
if (ret) {
dev_err(pcie->dev, "Failed to disable controller %d: %d\n",
pcie->cid, ret);
return ret;
}
return ret;
}
static int tegra_pcie_init_controller(struct tegra_pcie_dw *pcie)
{
struct dw_pcie *pci = &pcie->pci;
struct pcie_port *pp = &pci->pp;
int ret;
ret = tegra_pcie_config_controller(pcie, false);
if (ret < 0)
return ret;
pp->ops = &tegra_pcie_dw_host_ops;
ret = dw_pcie_host_init(pp);
if (ret < 0) {
dev_err(pcie->dev, "Failed to add PCIe port: %d\n", ret);
goto fail_host_init;
}
return 0;
fail_host_init:
return __deinit_controller(pcie);
}
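
/*
 * Try to put the link into L2: if the link is up, request a PME_Turn_Off
 * through APPL_RADM_STATUS and poll APPL_DEBUG until the link reports L2 or
 * PME_ACK_TIMEOUT expires. Returns 0 on success (or if the link was already
 * down), non-zero otherwise.
 */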
static int tegra_pcie_try_link_l2(struct tegra_pcie_dw *pcie)
{
u32 val;
if (!tegra_pcie_dw_link_up(&pcie->pci))
return 0;
val = appl_readl(pcie, APPL_RADM_STATUS);
val |= APPL_PM_XMT_TURNOFF_STATE;
appl_writel(pcie, val, APPL_RADM_STATUS);
return readl_poll_timeout_atomic(pcie->appl_base + APPL_DEBUG, val,
val & APPL_DEBUG_PM_LINKST_IN_L2_LAT,
1, PME_ACK_TIMEOUT);
}
static void tegra_pcie_dw_pme_turnoff(struct tegra_pcie_dw *pcie)
{
u32 data;
int err;
if (!tegra_pcie_dw_link_up(&pcie->pci)) {
dev_dbg(pcie->dev, "PCIe link is not up...!\n");
return;
}
if (tegra_pcie_try_link_l2(pcie)) {
dev_info(pcie->dev, "Link didn't transition to L2 state\n");
		/*
		 * The TX lane clock frequency resets to Gen1 only when the
		 * link is in the L2 or detect state, so assert PERST#
		 * (pex_rst) to the endpoint to force the Root Port into the
		 * detect state.
		 */
data = appl_readl(pcie, APPL_PINMUX);
data &= ~APPL_PINMUX_PEX_RST;
appl_writel(pcie, data, APPL_PINMUX);
err = readl_poll_timeout_atomic(pcie->appl_base + APPL_DEBUG,
data,
((data &
APPL_DEBUG_LTSSM_STATE_MASK) >>
APPL_DEBUG_LTSSM_STATE_SHIFT) ==
LTSSM_STATE_PRE_DETECT,
1, LTSSM_TIMEOUT);
if (err) {
dev_info(pcie->dev, "Link didn't go to detect state\n");
} else {
/* Disable LTSSM after link is in detect state */
data = appl_readl(pcie, APPL_CTRL);
data &= ~APPL_CTRL_LTSSM_EN;
appl_writel(pcie, data, APPL_CTRL);
}
}
	/*
	 * DBI registers may not be accessible after this point, since PLL-E
	 * may be powered down depending on how the endpoint pulls CLKREQ#.
	 */
data = appl_readl(pcie, APPL_PINMUX);
data |= (APPL_PINMUX_CLKREQ_OVERRIDE_EN | APPL_PINMUX_CLKREQ_OVERRIDE);
/* Cut REFCLK to slot */
data |= APPL_PINMUX_CLK_OUTPUT_IN_OVERRIDE_EN;
data &= ~APPL_PINMUX_CLK_OUTPUT_IN_OVERRIDE;
appl_writel(pcie, data, APPL_PINMUX);
}
static int tegra_pcie_deinit_controller(struct tegra_pcie_dw *pcie)
{
tegra_pcie_downstream_dev_to_D0(pcie);
dw_pcie_host_deinit(&pcie->pci.pp);
tegra_pcie_dw_pme_turnoff(pcie);
return __deinit_controller(pcie);
}
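
/*
 * One-time Root Port bring-up at probe time: fetch the MSI interrupt (when
 * CONFIG_PCI_MSI is enabled), enable runtime PM, select the default pinctrl
 * state for the sideband pins, initialize the controller and, if the link
 * came up, create the debugfs directory. -ENOMEDIUM is returned when the
 * link does not come up; the probe path treats that case as non-fatal.
 */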
static int tegra_pcie_config_rp(struct tegra_pcie_dw *pcie)
{
struct pcie_port *pp = &pcie->pci.pp;
struct device *dev = pcie->dev;
char *name;
int ret;
if (IS_ENABLED(CONFIG_PCI_MSI)) {
pp->msi_irq = of_irq_get_byname(dev->of_node, "msi");
		if (pp->msi_irq <= 0) {
dev_err(dev, "Failed to get MSI interrupt\n");
return -ENODEV;
}
}
pm_runtime_enable(dev);
ret = pm_runtime_get_sync(dev);
if (ret < 0) {
dev_err(dev, "Failed to get runtime sync for PCIe dev: %d\n",
ret);
goto fail_pm_get_sync;
}
ret = pinctrl_pm_select_default_state(dev);
if (ret < 0) {
dev_err(dev, "Failed to configure sideband pins: %d\n", ret);
goto fail_pinctrl;
}
tegra_pcie_init_controller(pcie);
pcie->link_state = tegra_pcie_dw_link_up(&pcie->pci);
if (!pcie->link_state) {
ret = -ENOMEDIUM;
goto fail_host_init;
}
name = devm_kasprintf(dev, GFP_KERNEL, "%pOFP", dev->of_node);
if (!name) {
ret = -ENOMEM;
goto fail_host_init;
}
pcie->debugfs = debugfs_create_dir(name, NULL);
if (!pcie->debugfs)
dev_err(dev, "Failed to create debugfs\n");
else
init_debugfs(pcie);
return ret;
fail_host_init:
tegra_pcie_deinit_controller(pcie);
fail_pinctrl:
pm_runtime_put_sync(dev);
fail_pm_get_sync:
pm_runtime_disable(dev);
return ret;
}
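
/*
 * Probe: allocate driver state, parse the DT, then acquire the slot and
 * "vddio-pex-ctl" regulators, the "core" clock, the "apb" and "core" resets,
 * the "appl", "dbi" and "atu_dma" regions, the per-lane P2U PHYs, the "intr"
 * interrupt and the BPMP handle before handing off to tegra_pcie_config_rp().
 */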
static int tegra_pcie_dw_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct resource *atu_dma_res;
struct tegra_pcie_dw *pcie;
struct resource *dbi_res;
struct pcie_port *pp;
struct dw_pcie *pci;
struct phy **phys;
char *name;
int ret;
u32 i;
pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
if (!pcie)
return -ENOMEM;
pci = &pcie->pci;
pci->dev = &pdev->dev;
pci->ops = &tegra_dw_pcie_ops;
pp = &pci->pp;
pcie->dev = &pdev->dev;
ret = tegra_pcie_dw_parse_dt(pcie);
if (ret < 0) {
dev_err(dev, "Failed to parse device tree: %d\n", ret);
return ret;
}
ret = tegra_pcie_get_slot_regulators(pcie);
if (ret < 0) {
dev_err(dev, "Failed to get slot regulators: %d\n", ret);
return ret;
}
pcie->pex_ctl_supply = devm_regulator_get(dev, "vddio-pex-ctl");
if (IS_ERR(pcie->pex_ctl_supply)) {
ret = PTR_ERR(pcie->pex_ctl_supply);
if (ret != -EPROBE_DEFER)
dev_err(dev, "Failed to get regulator: %ld\n",
PTR_ERR(pcie->pex_ctl_supply));
return ret;
}
pcie->core_clk = devm_clk_get(dev, "core");
if (IS_ERR(pcie->core_clk)) {
dev_err(dev, "Failed to get core clock: %ld\n",
PTR_ERR(pcie->core_clk));
return PTR_ERR(pcie->core_clk);
}
pcie->appl_res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
"appl");
if (!pcie->appl_res) {
dev_err(dev, "Failed to find \"appl\" region\n");
return -ENODEV;
}
pcie->appl_base = devm_ioremap_resource(dev, pcie->appl_res);
if (IS_ERR(pcie->appl_base))
return PTR_ERR(pcie->appl_base);
pcie->core_apb_rst = devm_reset_control_get(dev, "apb");
if (IS_ERR(pcie->core_apb_rst)) {
dev_err(dev, "Failed to get APB reset: %ld\n",
PTR_ERR(pcie->core_apb_rst));
return PTR_ERR(pcie->core_apb_rst);
}
phys = devm_kcalloc(dev, pcie->phy_count, sizeof(*phys), GFP_KERNEL);
if (!phys)
return -ENOMEM;
for (i = 0; i < pcie->phy_count; i++) {
name = kasprintf(GFP_KERNEL, "p2u-%u", i);
if (!name) {
dev_err(dev, "Failed to create P2U string\n");
return -ENOMEM;
}
phys[i] = devm_phy_get(dev, name);
kfree(name);
if (IS_ERR(phys[i])) {
ret = PTR_ERR(phys[i]);
if (ret != -EPROBE_DEFER)
dev_err(dev, "Failed to get PHY: %d\n", ret);
return ret;
}
}
pcie->phys = phys;
dbi_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi");
if (!dbi_res) {
dev_err(dev, "Failed to find \"dbi\" region\n");
return -ENODEV;
}
pcie->dbi_res = dbi_res;
pci->dbi_base = devm_ioremap_resource(dev, dbi_res);
if (IS_ERR(pci->dbi_base))
return PTR_ERR(pci->dbi_base);
/* Tegra HW locates DBI2 at a fixed offset from DBI */
pci->dbi_base2 = pci->dbi_base + 0x1000;
atu_dma_res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
"atu_dma");
if (!atu_dma_res) {
dev_err(dev, "Failed to find \"atu_dma\" region\n");
return -ENODEV;
}
pcie->atu_dma_res = atu_dma_res;
pci->atu_base = devm_ioremap_resource(dev, atu_dma_res);
if (IS_ERR(pci->atu_base))
return PTR_ERR(pci->atu_base);
pcie->core_rst = devm_reset_control_get(dev, "core");
if (IS_ERR(pcie->core_rst)) {
dev_err(dev, "Failed to get core reset: %ld\n",
PTR_ERR(pcie->core_rst));
return PTR_ERR(pcie->core_rst);
}
pp->irq = platform_get_irq_byname(pdev, "intr");
	if (pp->irq < 0) {
		dev_err(dev, "Failed to get \"intr\" interrupt\n");
		return pp->irq;
}
ret = devm_request_irq(dev, pp->irq, tegra_pcie_irq_handler,
IRQF_SHARED, "tegra-pcie-intr", pcie);
if (ret) {
dev_err(dev, "Failed to request IRQ %d: %d\n", pp->irq, ret);
return ret;
}
pcie->bpmp = tegra_bpmp_get(dev);
if (IS_ERR(pcie->bpmp))
return PTR_ERR(pcie->bpmp);
platform_set_drvdata(pdev, pcie);
ret = tegra_pcie_config_rp(pcie);
if (ret && ret != -ENOMEDIUM)
goto fail;
else
return 0;
fail:
tegra_bpmp_put(pcie->bpmp);
return ret;
}
static int tegra_pcie_dw_remove(struct platform_device *pdev)
{
struct tegra_pcie_dw *pcie = platform_get_drvdata(pdev);
if (!pcie->link_state)
return 0;
debugfs_remove_recursive(pcie->debugfs);
tegra_pcie_deinit_controller(pcie);
pm_runtime_put_sync(pcie->dev);
pm_runtime_disable(pcie->dev);
tegra_bpmp_put(pcie->bpmp);
return 0;
}
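
/*
 * System sleep callbacks: suspend_late enables HW_HOT_RST mode, suspend_noirq
 * saves the MSI interrupt enable register and powers the controller down via
 * __deinit_controller(), resume_noirq reconfigures the controller (with
 * hardware hot reset enabled), re-initializes the host and restores the MSI
 * state, and resume_early switches HW_HOT_RST back to immediate-reset mode
 * and clears the enable bit.
 */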
static int tegra_pcie_dw_suspend_late(struct device *dev)
{
struct tegra_pcie_dw *pcie = dev_get_drvdata(dev);
u32 val;
if (!pcie->link_state)
return 0;
/* Enable HW_HOT_RST mode */
val = appl_readl(pcie, APPL_CTRL);
val &= ~(APPL_CTRL_HW_HOT_RST_MODE_MASK <<
APPL_CTRL_HW_HOT_RST_MODE_SHIFT);
val |= APPL_CTRL_HW_HOT_RST_EN;
appl_writel(pcie, val, APPL_CTRL);
return 0;
}
static int tegra_pcie_dw_suspend_noirq(struct device *dev)
{
struct tegra_pcie_dw *pcie = dev_get_drvdata(dev);
if (!pcie->link_state)
return 0;
/* Save MSI interrupt vector */
pcie->msi_ctrl_int = dw_pcie_readl_dbi(&pcie->pci,
PORT_LOGIC_MSI_CTRL_INT_0_EN);
tegra_pcie_downstream_dev_to_D0(pcie);
tegra_pcie_dw_pme_turnoff(pcie);
return __deinit_controller(pcie);
}
static int tegra_pcie_dw_resume_noirq(struct device *dev)
{
struct tegra_pcie_dw *pcie = dev_get_drvdata(dev);
int ret;
if (!pcie->link_state)
return 0;
ret = tegra_pcie_config_controller(pcie, true);
if (ret < 0)
return ret;
ret = tegra_pcie_dw_host_init(&pcie->pci.pp);
if (ret < 0) {
dev_err(dev, "Failed to init host: %d\n", ret);
goto fail_host_init;
}
/* Restore MSI interrupt vector */
dw_pcie_writel_dbi(&pcie->pci, PORT_LOGIC_MSI_CTRL_INT_0_EN,
pcie->msi_ctrl_int);
return 0;
fail_host_init:
return __deinit_controller(pcie);
}
static int tegra_pcie_dw_resume_early(struct device *dev)
{
struct tegra_pcie_dw *pcie = dev_get_drvdata(dev);
u32 val;
if (!pcie->link_state)
return 0;
/* Disable HW_HOT_RST mode */
val = appl_readl(pcie, APPL_CTRL);
val &= ~(APPL_CTRL_HW_HOT_RST_MODE_MASK <<
APPL_CTRL_HW_HOT_RST_MODE_SHIFT);
val |= APPL_CTRL_HW_HOT_RST_MODE_IMDT_RST <<
APPL_CTRL_HW_HOT_RST_MODE_SHIFT;
val &= ~APPL_CTRL_HW_HOT_RST_EN;
appl_writel(pcie, val, APPL_CTRL);
return 0;
}
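
/*
 * Shutdown: if the link was up, remove the debugfs entries, return downstream
 * devices to D0, mask the controller (and MSI) interrupts, request
 * PME_Turn_Off and power the controller down.
 */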
static void tegra_pcie_dw_shutdown(struct platform_device *pdev)
{
struct tegra_pcie_dw *pcie = platform_get_drvdata(pdev);
if (!pcie->link_state)
return;
debugfs_remove_recursive(pcie->debugfs);
tegra_pcie_downstream_dev_to_D0(pcie);
disable_irq(pcie->pci.pp.irq);
if (IS_ENABLED(CONFIG_PCI_MSI))
disable_irq(pcie->pci.pp.msi_irq);
tegra_pcie_dw_pme_turnoff(pcie);
__deinit_controller(pcie);
}
static const struct of_device_id tegra_pcie_dw_of_match[] = {
{
.compatible = "nvidia,tegra194-pcie",
},
{},
};
static const struct dev_pm_ops tegra_pcie_dw_pm_ops = {
.suspend_late = tegra_pcie_dw_suspend_late,
.suspend_noirq = tegra_pcie_dw_suspend_noirq,
.resume_noirq = tegra_pcie_dw_resume_noirq,
.resume_early = tegra_pcie_dw_resume_early,
};
static struct platform_driver tegra_pcie_dw_driver = {
.probe = tegra_pcie_dw_probe,
.remove = tegra_pcie_dw_remove,
.shutdown = tegra_pcie_dw_shutdown,
.driver = {
.name = "tegra194-pcie",
.pm = &tegra_pcie_dw_pm_ops,
.of_match_table = tegra_pcie_dw_of_match,
},
};
module_platform_driver(tegra_pcie_dw_driver);
MODULE_DEVICE_TABLE(of, tegra_pcie_dw_of_match);
MODULE_AUTHOR("Vidya Sagar <vidyas@nvidia.com>");
MODULE_DESCRIPTION("NVIDIA PCIe host controller driver");
MODULE_LICENSE("GPL v2");
......@@ -2237,14 +2237,15 @@ static int tegra_pcie_parse_dt(struct tegra_pcie *pcie)
err = of_pci_get_devfn(port);
if (err < 0) {
dev_err(dev, "failed to parse address: %d\n", err);
-		return err;
+		goto err_node_put;
}
index = PCI_SLOT(err);
if (index < 1 || index > soc->num_ports) {
dev_err(dev, "invalid port number: %d\n", index);
-		return -EINVAL;
+		err = -EINVAL;
+		goto err_node_put;
}
index--;
......@@ -2253,12 +2254,13 @@ static int tegra_pcie_parse_dt(struct tegra_pcie *pcie)
if (err < 0) {
dev_err(dev, "failed to parse # of lanes: %d\n",
err);
-			return err;
+			goto err_node_put;
}
if (value > 16) {
dev_err(dev, "invalid # of lanes: %u\n", value);
-			return -EINVAL;
+			err = -EINVAL;
+			goto err_node_put;
}
lanes |= value << (index << 3);
......@@ -2272,13 +2274,15 @@ static int tegra_pcie_parse_dt(struct tegra_pcie *pcie)
lane += value;
rp = devm_kzalloc(dev, sizeof(*rp), GFP_KERNEL);
-		if (!rp)
-			return -ENOMEM;
+		if (!rp) {
+			err = -ENOMEM;
+			goto err_node_put;
+		}
err = of_address_to_resource(port, 0, &rp->regs);
if (err < 0) {
dev_err(dev, "failed to parse address: %d\n", err);
-			return err;
+			goto err_node_put;
}
INIT_LIST_HEAD(&rp->list);
......@@ -2330,6 +2334,10 @@ static int tegra_pcie_parse_dt(struct tegra_pcie *pcie)
return err;
return 0;
+err_node_put:
+	of_node_put(port);
+	return err;
}
/*
......
......@@ -2591,6 +2591,59 @@ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_NVIDIA,
PCI_DEVICE_ID_NVIDIA_NVENET_15,
nvenet_msi_disable);
/*
 * PCIe spec r4.0 sec 7.7.1.2 and sec 7.7.2.2 say that if MSI/MSI-X is enabled,
 * then the device can't use INTx interrupts. Tegra's PCIe root ports don't
 * generate MSI interrupts for PME and AER events; instead, only INTx
 * interrupts are generated. Though Tegra's PCIe root ports can generate MSI
 * interrupts for other events, the PCIe specification doesn't allow a mix of
 * INTx and MSI/MSI-X, so MSI must be disabled to keep port service drivers
 * from registering their respective ISRs for MSIs.
*/
static void pci_quirk_nvidia_tegra_disable_rp_msi(struct pci_dev *dev)
{
dev->no_msi = 1;
}
DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x1ad0,
PCI_CLASS_BRIDGE_PCI, 8,
pci_quirk_nvidia_tegra_disable_rp_msi);
DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x1ad1,
PCI_CLASS_BRIDGE_PCI, 8,
pci_quirk_nvidia_tegra_disable_rp_msi);
DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x1ad2,
PCI_CLASS_BRIDGE_PCI, 8,
pci_quirk_nvidia_tegra_disable_rp_msi);
DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x0bf0,
PCI_CLASS_BRIDGE_PCI, 8,
pci_quirk_nvidia_tegra_disable_rp_msi);
DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x0bf1,
PCI_CLASS_BRIDGE_PCI, 8,
pci_quirk_nvidia_tegra_disable_rp_msi);
DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x0e1c,
PCI_CLASS_BRIDGE_PCI, 8,
pci_quirk_nvidia_tegra_disable_rp_msi);
DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x0e1d,
PCI_CLASS_BRIDGE_PCI, 8,
pci_quirk_nvidia_tegra_disable_rp_msi);
DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x0e12,
PCI_CLASS_BRIDGE_PCI, 8,
pci_quirk_nvidia_tegra_disable_rp_msi);
DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x0e13,
PCI_CLASS_BRIDGE_PCI, 8,
pci_quirk_nvidia_tegra_disable_rp_msi);
DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x0fae,
PCI_CLASS_BRIDGE_PCI, 8,
pci_quirk_nvidia_tegra_disable_rp_msi);
DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x0faf,
PCI_CLASS_BRIDGE_PCI, 8,
pci_quirk_nvidia_tegra_disable_rp_msi);
DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x10e5,
PCI_CLASS_BRIDGE_PCI, 8,
pci_quirk_nvidia_tegra_disable_rp_msi);
DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x10e6,
PCI_CLASS_BRIDGE_PCI, 8,
pci_quirk_nvidia_tegra_disable_rp_msi);
/*
* Some versions of the MCP55 bridge from Nvidia have a legacy IRQ routing
* config register. This register controls the routing of legacy
......
......@@ -7,3 +7,10 @@ config PHY_TEGRA_XUSB
To compile this driver as a module, choose M here: the module will
be called phy-tegra-xusb.
config PHY_TEGRA194_P2U
tristate "NVIDIA Tegra194 PIPE2UPHY PHY driver"
depends on ARCH_TEGRA_194_SOC || COMPILE_TEST
select GENERIC_PHY
help
Enable this to support the P2U (PIPE to UPHY) that is part of Tegra 19x SOCs.
......@@ -6,3 +6,4 @@ phy-tegra-xusb-$(CONFIG_ARCH_TEGRA_124_SOC) += xusb-tegra124.o
phy-tegra-xusb-$(CONFIG_ARCH_TEGRA_132_SOC) += xusb-tegra124.o
phy-tegra-xusb-$(CONFIG_ARCH_TEGRA_210_SOC) += xusb-tegra210.o
phy-tegra-xusb-$(CONFIG_ARCH_TEGRA_186_SOC) += xusb-tegra186.o
obj-$(CONFIG_PHY_TEGRA194_P2U) += phy-tegra194-p2u.o
// SPDX-License-Identifier: GPL-2.0+
/*
* P2U (PIPE to UPHY) driver for Tegra T194 SoC
*
* Copyright (C) 2019 NVIDIA Corporation.
*
* Author: Vidya Sagar <vidyas@nvidia.com>
*/
#include <linux/err.h>
#include <linux/io.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_platform.h>
#include <linux/phy/phy.h>
#define P2U_PERIODIC_EQ_CTRL_GEN3 0xc0
#define P2U_PERIODIC_EQ_CTRL_GEN3_PERIODIC_EQ_EN BIT(0)
#define P2U_PERIODIC_EQ_CTRL_GEN3_INIT_PRESET_EQ_TRAIN_EN BIT(1)
#define P2U_PERIODIC_EQ_CTRL_GEN4 0xc4
#define P2U_PERIODIC_EQ_CTRL_GEN4_INIT_PRESET_EQ_TRAIN_EN BIT(1)
#define P2U_RX_DEBOUNCE_TIME 0xa4
#define P2U_RX_DEBOUNCE_TIME_DEBOUNCE_TIMER_MASK 0xffff
#define P2U_RX_DEBOUNCE_TIME_DEBOUNCE_TIMER_VAL 160
struct tegra_p2u {
void __iomem *base;
};
static inline void p2u_writel(struct tegra_p2u *phy, const u32 value,
const u32 reg)
{
writel_relaxed(value, phy->base + reg);
}
static inline u32 p2u_readl(struct tegra_p2u *phy, const u32 reg)
{
return readl_relaxed(phy->base + reg);
}
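
/*
 * PHY power-on hook: for Gen3, disable periodic equalization and enable the
 * initial preset EQ training; for Gen4, enable the initial preset EQ
 * training; then program the RX debounce timer to 160.
 */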
static int tegra_p2u_power_on(struct phy *x)
{
struct tegra_p2u *phy = phy_get_drvdata(x);
u32 val;
val = p2u_readl(phy, P2U_PERIODIC_EQ_CTRL_GEN3);
val &= ~P2U_PERIODIC_EQ_CTRL_GEN3_PERIODIC_EQ_EN;
val |= P2U_PERIODIC_EQ_CTRL_GEN3_INIT_PRESET_EQ_TRAIN_EN;
p2u_writel(phy, val, P2U_PERIODIC_EQ_CTRL_GEN3);
val = p2u_readl(phy, P2U_PERIODIC_EQ_CTRL_GEN4);
val |= P2U_PERIODIC_EQ_CTRL_GEN4_INIT_PRESET_EQ_TRAIN_EN;
p2u_writel(phy, val, P2U_PERIODIC_EQ_CTRL_GEN4);
val = p2u_readl(phy, P2U_RX_DEBOUNCE_TIME);
val &= ~P2U_RX_DEBOUNCE_TIME_DEBOUNCE_TIMER_MASK;
val |= P2U_RX_DEBOUNCE_TIME_DEBOUNCE_TIMER_VAL;
p2u_writel(phy, val, P2U_RX_DEBOUNCE_TIME);
return 0;
}
static const struct phy_ops ops = {
.power_on = tegra_p2u_power_on,
.owner = THIS_MODULE,
};
static int tegra_p2u_probe(struct platform_device *pdev)
{
struct phy_provider *phy_provider;
struct device *dev = &pdev->dev;
struct phy *generic_phy;
struct tegra_p2u *phy;
struct resource *res;
phy = devm_kzalloc(dev, sizeof(*phy), GFP_KERNEL);
if (!phy)
return -ENOMEM;
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ctl");
phy->base = devm_ioremap_resource(dev, res);
if (IS_ERR(phy->base))
return PTR_ERR(phy->base);
platform_set_drvdata(pdev, phy);
generic_phy = devm_phy_create(dev, NULL, &ops);
if (IS_ERR(generic_phy))
return PTR_ERR(generic_phy);
phy_set_drvdata(generic_phy, phy);
phy_provider = devm_of_phy_provider_register(dev, of_phy_simple_xlate);
if (IS_ERR(phy_provider))
return PTR_ERR(phy_provider);
return 0;
}
static const struct of_device_id tegra_p2u_id_table[] = {
{
.compatible = "nvidia,tegra194-p2u",
},
{}
};
MODULE_DEVICE_TABLE(of, tegra_p2u_id_table);
static struct platform_driver tegra_p2u_driver = {
.probe = tegra_p2u_probe,
.driver = {
.name = "tegra194-p2u",
.of_match_table = tegra_p2u_id_table,
},
};
module_platform_driver(tegra_p2u_driver);
MODULE_AUTHOR("Vidya Sagar <vidyas@nvidia.com>");
MODULE_DESCRIPTION("NVIDIA Tegra194 PIPE2UPHY PHY driver");
MODULE_LICENSE("GPL v2");
......@@ -714,7 +714,9 @@
#define PCI_EXT_CAP_ID_DPC 0x1D /* Downstream Port Containment */
#define PCI_EXT_CAP_ID_L1SS 0x1E /* L1 PM Substates */
#define PCI_EXT_CAP_ID_PTM 0x1F /* Precision Time Measurement */
-#define PCI_EXT_CAP_ID_MAX	PCI_EXT_CAP_ID_PTM
+#define PCI_EXT_CAP_ID_DLF	0x25	/* Data Link Feature */
+#define PCI_EXT_CAP_ID_PL_16GT	0x26	/* Physical Layer 16.0 GT/s */
+#define PCI_EXT_CAP_ID_MAX	PCI_EXT_CAP_ID_PL_16GT
#define PCI_EXT_CAP_DSN_SIZEOF 12
#define PCI_EXT_CAP_MCAST_ENDPOINT_SIZEOF 40
......@@ -1054,4 +1056,14 @@
#define PCI_L1SS_CTL1_LTR_L12_TH_SCALE 0xe0000000 /* LTR_L1.2_THRESHOLD_Scale */
#define PCI_L1SS_CTL2 0x0c /* Control 2 Register */
/* Data Link Feature */
#define PCI_DLF_CAP 0x04 /* Capabilities Register */
#define PCI_DLF_EXCHANGE_ENABLE 0x80000000 /* Data Link Feature Exchange Enable */
/* Physical Layer 16.0 GT/s */
#define PCI_PL_16GT_LE_CTRL 0x20 /* Lane Equalization Control Register */
#define PCI_PL_16GT_LE_CTRL_DSP_TX_PRESET_MASK 0x0000000F
#define PCI_PL_16GT_LE_CTRL_USP_TX_PRESET_MASK 0x000000F0
#define PCI_PL_16GT_LE_CTRL_USP_TX_PRESET_SHIFT 4
#endif /* LINUX_PCI_REGS_H */
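
/*
 * Illustrative sketch only (not part of this series): one way a driver could
 * probe the new Data Link Feature and Physical Layer 16.0 GT/s capabilities
 * using the generic config-space helpers. The function name and messages are
 * made up for the example; "pdev" is assumed to be a valid, already-enabled
 * PCI device, and only lane 0's equalization control dword is read.
 */
#include <linux/pci.h>

static void example_report_gen4_caps(struct pci_dev *pdev)
{
	u16 dlf, pl16gt;
	u32 val;

	/* Locate the Data Link Feature extended capability, if present */
	dlf = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_DLF);
	if (dlf) {
		pci_read_config_dword(pdev, dlf + PCI_DLF_CAP, &val);
		pci_info(pdev, "DLF exchange %s\n",
			 (val & PCI_DLF_EXCHANGE_ENABLE) ? "enabled" : "disabled");
	}

	/* Locate the Physical Layer 16.0 GT/s extended capability, if present */
	pl16gt = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_PL_16GT);
	if (pl16gt) {
		pci_read_config_dword(pdev, pl16gt + PCI_PL_16GT_LE_CTRL, &val);
		pci_info(pdev, "16.0 GT/s lane 0 DSP Tx preset: %#x\n",
			 val & PCI_PL_16GT_LE_CTRL_DSP_TX_PRESET_MASK);
	}
}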