Commit 86f26a77 authored by Linus Torvalds

Merge tag 'pci-v5.7-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull pci updates from Bjorn Helgaas:
 "Enumeration:

   - Revert sysfs "rescan" renames that broke apps (Kelsey Skunberg)

   - Add more 32 GT/s link speed decoding and improve the implementation
     (Yicong Yang)

  Resource management:

   - Add support for sizing programmable host bridge apertures and fix a
     related alpha Nautilus regression (Ivan Kokshaysky)

  Interrupts:

   - Add boot interrupt quirk mechanism for Xeon chipsets and document
     boot interrupts (Sean V Kelley)

  PCIe native device hotplug:

   - When possible, disable in-band presence detect and use PDS
     (Alexandru Gagniuc)

   - Add DMI table for devices that don't use in-band presence detection
     but don't advertise that correctly (Stuart Hayes)

   - Fix hang when powering slots up/down via sysfs (Lukas Wunner)

   - Fix an MSI interrupt race (Stuart Hayes)

  Virtualization:

   - Add ACS quirks for Zhaoxin devices (Raymond Pang)

  Error handling:

   - Add Error Disconnect Recover (EDR) support so firmware can report
     devices disconnected via DPC and we can try to recover (Kuppuswamy
     Sathyanarayanan)

  Peer-to-peer DMA:

   - Add Intel Sky Lake-E Root Ports B, C, D to the whitelist (Andrew
     Maier)

  ASPM:

   - Reduce severity of common clock config message (Chris Packham)

   - Clear the correct bits when enabling L1 substates, so we don't go
     to the wrong state (Yicong Yang)

  Endpoint framework:

   - Replace EPF linkup ops with notifier call chain and improve locking
     (Kishon Vijay Abraham I)

   - Fix concurrent memory allocation in OB address region (Kishon Vijay
     Abraham I)

   - Move PF function number assignment to EPC core to support multiple
     function creation methods (Kishon Vijay Abraham I)

   - Fix issue with clearing configfs "start" entry (Kunihiko Hayashi)

   - Fix issue with endpoint MSI-X ignoring BAR Indicator and Table
     Offset (Kishon Vijay Abraham I)

   - Add support for testing DMA transfers (Kishon Vijay Abraham I)

   - Add support for testing > 10 endpoint devices (Kishon Vijay Abraham I)

   - Add support for tests to clear IRQ (Kishon Vijay Abraham I)

   - Add common DT schema for endpoint controllers (Kishon Vijay Abraham I)

  Amlogic Meson PCIe controller driver:

   - Add DT bindings for AXG PCIe PHY, shared MIPI/PCIe analog PHY (Remi
     Pommarel)

   - Add Amlogic AXG PCIe PHY, AXG MIPI/PCIe analog PHY drivers (Remi
     Pommarel)

  Cadence PCIe controller driver:

   - Add Root Complex/Endpoint DT schema for Cadence PCIe (Kishon Vijay
     Abraham I)

  Intel VMD host bridge driver:

   - Add two VMD Device IDs that require bus restriction mode (Sushma
     Kalakota)

  Mobiveil PCIe controller driver:

   - Refactor and modularize mobiveil driver (Hou Zhiqiang)

   - Add support for Mobiveil GPEX Gen4 host (Hou Zhiqiang)

  Microsoft Hyper-V host bridge driver:

   - Add support for Hyper-V PCI protocol version 1.3 and
     PCI_BUS_RELATIONS2 (Long Li)

   - Refactor to prepare for virtual PCI on non-x86 architectures (Boqun
     Feng)

   - Fix memory leak in hv_pci_probe()'s error path (Dexuan Cui)

  NVIDIA Tegra PCIe controller driver:

   - Use pci_parse_request_of_pci_ranges() (Rob Herring)

   - Add support for endpoint mode and related DT updates (Vidya Sagar)

   - Reduce -EPROBE_DEFER error message log level (Thierry Reding)

  Qualcomm PCIe controller driver:

   - Restrict class fixup to specific Qualcomm devices (Bjorn Andersson)

  Synopsys DesignWare PCIe controller driver:

   - Refactor core initialization code for endpoint mode (Vidya Sagar)

   - Fix endpoint MSI-X to use correct table address (Kishon Vijay
     Abraham I)

  TI DRA7xx PCIe controller driver:

   - Fix MSI IRQ handling (Vignesh Raghavendra)

  TI Keystone PCIe controller driver:

   - Allow AM654 endpoint to raise MSI-X interrupt (Kishon Vijay Abraham I)

  Miscellaneous:

   - Quirk ASMedia XHCI USB to avoid "PME# from D0" defect (Kai-Heng
     Feng)

   - Use ioremap(), not phys_to_virt(), for platform ROM to fix video
     ROM mapping with CONFIG_HIGHMEM (Mikel Rychliski)"

* tag 'pci-v5.7-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (96 commits)
  misc: pci_endpoint_test: remove duplicate macro PCI_ENDPOINT_TEST_STATUS
  PCI: tegra: Print -EPROBE_DEFER error message at debug level
  misc: pci_endpoint_test: Use full pci-endpoint-test name in request_irq()
  misc: pci_endpoint_test: Fix to support > 10 pci-endpoint-test devices
  tools: PCI: Add 'e' to clear IRQ
  misc: pci_endpoint_test: Add ioctl to clear IRQ
  misc: pci_endpoint_test: Avoid using module parameter to determine irqtype
  PCI: keystone: Allow AM654 PCIe Endpoint to raise MSI-X interrupt
  PCI: dwc: Fix dw_pcie_ep_raise_msix_irq() to get correct MSI-X table address
  PCI: endpoint: Fix ->set_msix() to take BIR and offset as arguments
  misc: pci_endpoint_test: Add support to get DMA option from userspace
  tools: PCI: Add 'd' command line option to support DMA
  misc: pci_endpoint_test: Use streaming DMA APIs for buffer allocation
  PCI: endpoint: functions/pci-epf-test: Print throughput information
  PCI: endpoint: functions/pci-epf-test: Add DMA support to transfer data
  PCI: pciehp: Fix MSI interrupt race
  PCI: pciehp: Fix indefinite wait on sysfs requests
  PCI: endpoint: Fix clearing start entry in configfs
  PCI: tegra: Add support for PCIe endpoint mode in Tegra194
  PCI: sysfs: Revert "rescan" file renames
  ...
parents 0ad5b053 86ce3c90
.. SPDX-License-Identifier: GPL-2.0

===============
Boot Interrupts
===============

:Author: - Sean V Kelley <sean.v.kelley@linux.intel.com>

Overview
========

On PCI Express, interrupts are represented with either MSI or inbound
interrupt messages (Assert_INTx/Deassert_INTx). The integrated IO-APIC in a
given Core IO converts the legacy interrupt messages from PCI Express to
MSI interrupts. If the IO-APIC is disabled (via the mask bits in the
IO-APIC table entries), the messages are routed to the legacy PCH. This
in-band interrupt mechanism was traditionally necessary for systems that
did not support the IO-APIC and for boot. Intel in the past has used the
term "boot interrupts" to describe this mechanism. Further, the PCI Express
protocol describes this in-band legacy wire-interrupt INTx mechanism for
I/O devices to signal PCI-style level interrupts. The subsequent paragraphs
describe problems with the Core IO handling of INTx message routing to the
PCH and mitigation within BIOS and the OS.

Issue
=====

When in-band legacy INTx messages are forwarded to the PCH, they in turn
trigger a new interrupt for which the OS likely lacks a handler. When an
interrupt goes unhandled over time, it is tracked by the Linux kernel as
a spurious interrupt. The IRQ will be disabled by the Linux kernel after it
reaches a specific count with the error "nobody cared". This disabled IRQ
now prevents valid usage by an existing interrupt which may happen to share
the IRQ line::

  irq 19: nobody cared (try booting with the "irqpoll" option)
  CPU: 0 PID: 2988 Comm: irq/34-nipalk Tainted: 4.14.87-rt49-02410-g4a640ec-dirty #1
  Hardware name: National Instruments NI PXIe-8880/NI PXIe-8880, BIOS 2.1.5f1 01/09/2020
  Call Trace:

  <IRQ>
  ? dump_stack+0x46/0x5e
  ? __report_bad_irq+0x2e/0xb0
  ? note_interrupt+0x242/0x290
  ? nNIKAL100_memoryRead16+0x8/0x10 [nikal]
  ? handle_irq_event_percpu+0x55/0x70
  ? handle_irq_event+0x4f/0x80
  ? handle_fasteoi_irq+0x81/0x180
  ? handle_irq+0x1c/0x30
  ? do_IRQ+0x41/0xd0
  ? common_interrupt+0x84/0x84
  </IRQ>

  handlers:
  irq_default_primary_handler threaded usb_hcd_irq
  Disabling IRQ #19

Conditions
==========

The use of threaded interrupts is the most likely condition to trigger
this problem today. Threaded interrupts may not be reenabled after the IRQ
handler wakes. These "one shot" conditions mean that the threaded interrupt
needs to keep the interrupt line masked until the threaded handler has run.
Especially when dealing with high data rate interrupts, the thread needs to
run to completion; otherwise some handlers will end up in stack overflows
since the interrupt of the issuing device is still active.
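The one-shot condition described above can be seen in how a driver registers a threaded handler. A minimal sketch (not a complete driver; the function names are illustrative): with IRQF_ONESHOT the interrupt line stays masked from the hard handler until the threaded handler returns, and that window is exactly when a boot interrupt rerouted to the PCH can fire and be counted as spurious.

```c
#include <linux/interrupt.h>

/* Hard handler: quiet the device just enough, then defer to the thread. */
static irqreturn_t demo_hardirq(int irq, void *dev)
{
	return IRQ_WAKE_THREAD;
}

/* Threaded handler: heavy lifting runs here; the line is still masked. */
static irqreturn_t demo_thread_fn(int irq, void *dev)
{
	return IRQ_HANDLED;
}

static int demo_setup(unsigned int irq, void *dev)
{
	/* IRQF_ONESHOT keeps the line masked until demo_thread_fn returns. */
	return request_threaded_irq(irq, demo_hardirq, demo_thread_fn,
				    IRQF_ONESHOT, "demo", dev);
}
```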

Affected Chipsets
=================

The legacy interrupt forwarding mechanism exists today in a number of
devices including but not limited to chipsets from AMD/ATI, Broadcom, and
Intel. Changes made through the mitigations below have been applied to
drivers/pci/quirks.c.

Starting with ICX there are no longer any IO-APICs in the Core IO's
devices. IO-APIC is only in the PCH. Devices connected to the Core IO's
PCIe Root Ports will use native MSI/MSI-X mechanisms.

Mitigations
===========

The mitigations take the form of PCI quirks. The preference has been to
first identify and make use of a means to disable the routing to the PCH.
In such a case a quirk to disable boot interrupt generation can be
added. [1]

Intel® 6300ESB I/O Controller Hub
   Alternate Base Address Register:
      BIE: Boot Interrupt Enable

         ==  ===========================
         0   Boot interrupt is enabled.
         1   Boot interrupt is disabled.
         ==  ===========================

Intel® Sandy Bridge through Sky Lake based Xeon servers:
   Coherent Interface Protocol Interrupt Control
      dis_intx_route2pch/dis_intx_route2ich/dis_intx_route2dmi2:
         When this bit is set, local INTx messages received from the
         Intel® Quick Data DMA/PCI Express ports are not routed to the
         legacy PCH - they are either converted into MSI via the
         integrated IO-APIC (if the IO-APIC mask bit is clear in the
         appropriate entries) or cause no further action (when the mask
         bit is set).
In the absence of a way to directly disable the routing, another approach
has been to make use of PCI Interrupt pin to INTx routing tables for
purposes of redirecting the interrupt handler to the rerouted interrupt
line by default. Therefore, on chipsets where this INTx routing cannot be
disabled, the Linux kernel will reroute the valid interrupt to its legacy
interrupt. This redirection of the handler will prevent the occurrence of
the spurious interrupt detection which would ordinarily disable the IRQ
line due to excessive unhandled counts. [2]

The config option X86_REROUTE_FOR_BROKEN_BOOT_IRQS exists to enable (or
disable) the redirection of the interrupt handler to the PCH interrupt
line. The option can be overridden by either pci=ioapicreroute or
pci=noioapicreroute. [3]

More Documentation
==================

There is an overview of the legacy interrupt handling in several datasheets
(6300ESB and 6700PXH below). While largely the same, it provides insight
into the evolution of its handling with chipsets.

Example of disabling of the boot interrupt
------------------------------------------

- Intel® 6300ESB I/O Controller Hub (Document # 300641-004US)

  5.7.3 Boot Interrupt
  https://www.intel.com/content/dam/doc/datasheet/6300esb-io-controller-hub-datasheet.pdf

- Intel® Xeon® Processor E5-1600/2400/2600/4600 v3 Product Families
  Datasheet - Volume 2: Registers (Document # 330784-003)

  6.6.41 cipintrc Coherent Interface Protocol Interrupt Control
  https://www.intel.com/content/dam/www/public/us/en/documents/datasheets/xeon-e5-v3-datasheet-vol-2.pdf

Example of handler rerouting
----------------------------

- Intel® 6700PXH 64-bit PCI Hub (Document # 302628)

  2.15.2 PCI Express Legacy INTx Support and Boot Interrupt
  https://www.intel.com/content/dam/doc/datasheet/6700pxh-64-bit-pci-hub-datasheet.pdf

If you have any legacy PCI interrupt questions that aren't answered, email me.

Cheers,
    Sean V Kelley
    sean.v.kelley@linux.intel.com

[1] https://lore.kernel.org/r/12131949181903-git-send-email-sassmann@suse.de/
[2] https://lore.kernel.org/r/12131949182094-git-send-email-sassmann@suse.de/
[3] https://lore.kernel.org/r/487C8EA7.6020205@suse.de/
@@ -16,3 +16,4 @@ Linux PCI Bus Subsystem
    pci-error-recovery
    pcieaer-howto
    endpoint/index
+   boot-interrupts
@@ -156,12 +156,6 @@ default reset_link function, but different upstream ports might
 have different specifications to reset pci express link, so all
 upstream ports should provide their own reset_link functions.
 
-In struct pcie_port_service_driver, a new pointer, reset_link, is
-added.
-
-::
-
-	pci_ers_result_t (*reset_link) (struct pci_dev *dev);
 
 Section 3.2.2.2 provides more detailed info on when to call
 reset_link.
@@ -212,15 +206,10 @@ error_detected(dev, pci_channel_io_frozen) to all drivers within
 a hierarchy in question. Then, performing link reset at upstream is
 necessary. As different kinds of devices might use different approaches
 to reset link, AER port service driver is required to provide the
-function to reset link. Firstly, kernel looks for if the upstream
-component has an aer driver. If it has, kernel uses the reset_link
-callback of the aer driver. If the upstream component has no aer driver
-and the port is downstream port, we will perform a hot reset as the
-default by setting the Secondary Bus Reset bit of the Bridge Control
-register associated with the downstream port. As for upstream ports,
-they should provide their own aer service drivers with reset_link
-function. If error_detected returns PCI_ERS_RESULT_CAN_RECOVER and
-reset_link returns PCI_ERS_RESULT_RECOVERED, the error handling goes
+function to reset link via callback parameter of pcie_do_recovery()
+function. If reset_link is not NULL, recovery function will use it
+to reset the link. If error_detected returns PCI_ERS_RESULT_CAN_RECOVER
+and reset_link returns PCI_ERS_RESULT_RECOVERED, the error handling goes
 to mmio_enabled.
 
 helper functions
@@ -243,9 +232,9 @@ messages to root port when an error is detected.
 ::
 
-	int pci_cleanup_aer_uncorrect_error_status(struct pci_dev *dev);
+	int pci_aer_clear_nonfatal_status(struct pci_dev *dev);
 
-pci_cleanup_aer_uncorrect_error_status cleanups the uncorrectable
-error status register.
+pci_aer_clear_nonfatal_status clears non-fatal errors in the uncorrectable
+error status register.
 
 Frequent Asked Questions
...
@@ -18,7 +18,6 @@ Required properties:
 - reg-names: Must be
 	- "elbi"	External local bus interface registers
 	- "cfg"		Meson specific registers
-	- "phy"		Meson PCIE PHY registers for AXG SoC Family
 	- "config"	PCIe configuration space
 - reset-gpios: The GPIO to generate PCIe PERST# assert and deassert signal.
 - clocks: Must contain an entry for each entry in clock-names.
@@ -26,13 +25,13 @@ Required properties:
 	- "pclk"	PCIe GEN 100M PLL clock
 	- "port"	PCIe_x(A or B) RC clock gate
 	- "general"	PCIe Phy clock
-	- "mipi"	PCIe_x(A or B) 100M ref clock gate for AXG SoC Family
 - resets: phandle to the reset lines.
-- reset-names: must contain "phy" "port" and "apb"
-	- "phy"		Share PHY reset for AXG SoC Family
+- reset-names: must contain "port" and "apb"
 	- "port"	Port A or B reset
 	- "apb"		Share APB reset
-- phys: should contain a phandle to the shared phy for G12A SoC Family
+- phys: should contain a phandle to the PCIE phy
+- phy-names: must contain "pcie"
 - device_type:
 	should be "pci". As specified in designware-pcie.txt
@@ -43,9 +42,8 @@ Example configuration:
 		compatible = "amlogic,axg-pcie", "snps,dw-pcie";
 		reg = <0x0 0xf9800000 0x0 0x400000
 			0x0 0xff646000 0x0 0x2000
-			0x0 0xff644000 0x0 0x2000
 			0x0 0xf9f00000 0x0 0x100000>;
-		reg-names = "elbi", "cfg", "phy", "config";
+		reg-names = "elbi", "cfg", "config";
 		reset-gpios = <&gpio GPIOX_19 GPIO_ACTIVE_HIGH>;
 		interrupts = <GIC_SPI 177 IRQ_TYPE_EDGE_RISING>;
 		#interrupt-cells = <1>;
@@ -58,17 +56,15 @@ Example configuration:
 		ranges = <0x82000000 0 0 0x0 0xf9c00000 0 0x00300000>;
 
 		clocks = <&clkc CLKID_USB
-			&clkc CLKID_MIPI_ENABLE
 			&clkc CLKID_PCIE_A
 			&clkc CLKID_PCIE_CML_EN0>;
 		clock-names = "general",
-			"mipi",
 			"pclk",
 			"port";
-		resets = <&reset RESET_PCIE_PHY>,
-			<&reset RESET_PCIE_A>,
+		resets = <&reset RESET_PCIE_A>,
 			<&reset RESET_PCIE_APB>;
-		reset-names = "phy",
-			"port",
+		reset-names = "port",
 			"apb";
+		phys = <&pcie_phy>;
+		phy-names = "pcie";
 	};
* Cadence PCIe endpoint controller
Required properties:
- compatible: Should contain "cdns,cdns-pcie-ep" to identify the IP used.
- reg: Should contain the controller register base address and AXI interface
region base address respectively.
- reg-names: Must be "reg" and "mem" respectively.
- cdns,max-outbound-regions: Set to maximum number of outbound regions
Optional properties:
- max-functions: Maximum number of functions that can be configured (default 1).
- phys: From PHY bindings: List of Generic PHY phandles. One per lane if more
than one in the list. If only one PHY listed it must manage all lanes.
- phy-names: List of names to identify the PHY.
Example:
pcie@fc000000 {
compatible = "cdns,cdns-pcie-ep";
reg = <0x0 0xfc000000 0x0 0x01000000>,
<0x0 0x80000000 0x0 0x40000000>;
reg-names = "reg", "mem";
cdns,max-outbound-regions = <16>;
max-functions = /bits/ 8 <8>;
phys = <&ep_phy0 &ep_phy1>;
phy-names = "pcie-lane0","pcie-lane1";
};
# SPDX-License-Identifier: GPL-2.0-only
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/cdns,cdns-pcie-ep.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Cadence PCIe EP Controller

maintainers:
  - Tom Joseph <tjoseph@cadence.com>

allOf:
  - $ref: "cdns-pcie.yaml#"
  - $ref: "pci-ep.yaml#"

properties:
  compatible:
    const: cdns,cdns-pcie-ep

  reg:
    maxItems: 2

  reg-names:
    items:
      - const: reg
      - const: mem

required:
  - reg
  - reg-names

examples:
  - |
    bus {
        #address-cells = <2>;
        #size-cells = <2>;

        pcie-ep@fc000000 {
            compatible = "cdns,cdns-pcie-ep";
            reg = <0x0 0xfc000000 0x0 0x01000000>,
                  <0x0 0x80000000 0x0 0x40000000>;
            reg-names = "reg", "mem";
            cdns,max-outbound-regions = <16>;
            max-functions = /bits/ 8 <8>;
            phys = <&pcie_phy0>;
            phy-names = "pcie-phy";
        };
    };
...
* Cadence PCIe host controller
This PCIe controller inherits the base properties defined in
host-generic-pci.txt.
Required properties:
- compatible: Should contain "cdns,cdns-pcie-host" to identify the IP used.
- reg: Should contain the controller register base address, PCIe configuration
window base address, and AXI interface region base address respectively.
- reg-names: Must be "reg", "cfg" and "mem" respectively.
- #address-cells: Set to <3>
- #size-cells: Set to <2>
- device_type: Set to "pci"
- ranges: Ranges for the PCI memory and I/O regions
- #interrupt-cells: Set to <1>
- interrupt-map-mask and interrupt-map: Standard PCI properties to define the
mapping of the PCIe interface to interrupt numbers.
Optional properties:
- cdns,max-outbound-regions: Set to maximum number of outbound regions
(default 32)
- cdns,no-bar-match-nbits: Set into the no BAR match register to configure the
number of least significant bits kept during inbound (PCIe -> AXI) address
translations (default 32)
- vendor-id: The PCI vendor ID (16 bits, default is design dependent)
- device-id: The PCI device ID (16 bits, default is design dependent)
- phys: From PHY bindings: List of Generic PHY phandles. One per lane if more
than one in the list. If only one PHY listed it must manage all lanes.
- phy-names: List of names to identify the PHY.
Example:
pcie@fb000000 {
compatible = "cdns,cdns-pcie-host";
device_type = "pci";
#address-cells = <3>;
#size-cells = <2>;
bus-range = <0x0 0xff>;
linux,pci-domain = <0>;
cdns,max-outbound-regions = <16>;
cdns,no-bar-match-nbits = <32>;
vendor-id = /bits/ 16 <0x17cd>;
device-id = /bits/ 16 <0x0200>;
reg = <0x0 0xfb000000 0x0 0x01000000>,
<0x0 0x41000000 0x0 0x00001000>,
<0x0 0x40000000 0x0 0x04000000>;
reg-names = "reg", "cfg", "mem";
ranges = <0x02000000 0x0 0x42000000 0x0 0x42000000 0x0 0x1000000>,
<0x01000000 0x0 0x43000000 0x0 0x43000000 0x0 0x0010000>;
#interrupt-cells = <0x1>;
interrupt-map = <0x0 0x0 0x0 0x1 &gic 0x0 0x0 0x0 14 0x1
0x0 0x0 0x0 0x2 &gic 0x0 0x0 0x0 15 0x1
0x0 0x0 0x0 0x3 &gic 0x0 0x0 0x0 16 0x1
0x0 0x0 0x0 0x4 &gic 0x0 0x0 0x0 17 0x1>;
interrupt-map-mask = <0x0 0x0 0x0 0x7>;
msi-parent = <&its_pci>;
phys = <&pcie_phy0>;
phy-names = "pcie-phy";
};
# SPDX-License-Identifier: GPL-2.0-only
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/cdns,cdns-pcie-host.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Cadence PCIe host controller

maintainers:
  - Tom Joseph <tjoseph@cadence.com>

allOf:
  - $ref: /schemas/pci/pci-bus.yaml#
  - $ref: "cdns-pcie-host.yaml#"

properties:
  compatible:
    const: cdns,cdns-pcie-host

  reg:
    maxItems: 3

  reg-names:
    items:
      - const: reg
      - const: cfg
      - const: mem

  msi-parent: true

required:
  - reg
  - reg-names

examples:
  - |
    bus {
        #address-cells = <2>;
        #size-cells = <2>;

        pcie@fb000000 {
            compatible = "cdns,cdns-pcie-host";
            device_type = "pci";
            #address-cells = <3>;
            #size-cells = <2>;
            bus-range = <0x0 0xff>;
            linux,pci-domain = <0>;
            cdns,max-outbound-regions = <16>;
            cdns,no-bar-match-nbits = <32>;
            vendor-id = <0x17cd>;
            device-id = <0x0200>;
            reg = <0x0 0xfb000000 0x0 0x01000000>,
                  <0x0 0x41000000 0x0 0x00001000>,
                  <0x0 0x40000000 0x0 0x04000000>;
            reg-names = "reg", "cfg", "mem";
            ranges = <0x02000000 0x0 0x42000000 0x0 0x42000000 0x0 0x1000000>,
                     <0x01000000 0x0 0x43000000 0x0 0x43000000 0x0 0x0010000>;
            #interrupt-cells = <0x1>;
            interrupt-map = <0x0 0x0 0x0 0x1 &gic 0x0 0x0 0x0 14 0x1>,
                            <0x0 0x0 0x0 0x2 &gic 0x0 0x0 0x0 15 0x1>,
                            <0x0 0x0 0x0 0x3 &gic 0x0 0x0 0x0 16 0x1>,
                            <0x0 0x0 0x0 0x4 &gic 0x0 0x0 0x0 17 0x1>;
            interrupt-map-mask = <0x0 0x0 0x0 0x7>;
            msi-parent = <&its_pci>;
            phys = <&pcie_phy0>;
            phy-names = "pcie-phy";
        };
    };
...
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: "http://devicetree.org/schemas/pci/cdns-pcie-host.yaml#"
$schema: "http://devicetree.org/meta-schemas/core.yaml#"

title: Cadence PCIe Host

maintainers:
  - Tom Joseph <tjoseph@cadence.com>

allOf:
  - $ref: "/schemas/pci/pci-bus.yaml#"
  - $ref: "cdns-pcie.yaml#"

properties:
  cdns,no-bar-match-nbits:
    description:
      Set into the no BAR match register to configure the number of least
      significant bits kept during inbound (PCIe -> AXI) address translations
    allOf:
      - $ref: /schemas/types.yaml#/definitions/uint32
    minimum: 0
    maximum: 64
    default: 32

  msi-parent: true
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: "http://devicetree.org/schemas/pci/cdns-pcie.yaml#"
$schema: "http://devicetree.org/meta-schemas/core.yaml#"

title: Cadence PCIe Core

maintainers:
  - Tom Joseph <tjoseph@cadence.com>

properties:
  cdns,max-outbound-regions:
    description: maximum number of outbound regions
    allOf:
      - $ref: /schemas/types.yaml#/definitions/uint32
    minimum: 1
    maximum: 32
    default: 32

  phys:
    description:
      One per lane if more than one in the list. If only one PHY listed it
      must manage all lanes.
    minItems: 1
    maxItems: 16

  phy-names:
    items:
      - const: pcie-phy
    # FIXME: names when more than 1
NXP Layerscape PCIe Gen4 controller
This PCIe controller is based on the Mobiveil PCIe IP and thus inherits all
the common properties defined in mobiveil-pcie.txt.
Required properties:
- compatible: should contain the platform identifier such as:
"fsl,lx2160a-pcie"
- reg: base addresses and lengths of the PCIe controller register blocks.
"csr_axi_slave": Bridge config registers
"config_axi_slave": PCIe controller registers
- interrupts: A list of interrupt outputs of the controller. Must contain an
entry for each entry in the interrupt-names property.
- interrupt-names: May include the following entries:
  "intr": The interrupt that is asserted for controller interrupts
  "aer": Asserted for AER interrupts when the chip supports an AER interrupt
         with none of the MSI/MSI-X/INTx modes, but provides a dedicated
         interrupt line for AER.
  "pme": Asserted for PME interrupts when the chip supports a PME interrupt
         with none of the MSI/MSI-X/INTx modes, but provides a dedicated
         interrupt line for PME.
- dma-coherent: Indicates that the hardware IP block can ensure the coherency
  of the data transferred from/to the IP block. This can avoid software
  cache flush/invalidate actions and significantly improve performance.
- msi-parent : See the generic MSI binding described in
Documentation/devicetree/bindings/interrupt-controller/msi.txt.
Example:
pcie@3400000 {
compatible = "fsl,lx2160a-pcie";
reg = <0x00 0x03400000 0x0 0x00100000 /* controller registers */
0x80 0x00000000 0x0 0x00001000>; /* configuration space */
reg-names = "csr_axi_slave", "config_axi_slave";
interrupts = <GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>, /* AER interrupt */
<GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>, /* PME interrupt */
<GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>; /* controller interrupt */
interrupt-names = "aer", "pme", "intr";
#address-cells = <3>;
#size-cells = <2>;
device_type = "pci";
apio-wins = <8>;
ppio-wins = <8>;
dma-coherent;
bus-range = <0x0 0xff>;
msi-parent = <&its>;
ranges = <0x82000000 0x0 0x40000000 0x80 0x40000000 0x0 0x40000000>;
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 7>;
interrupt-map = <0000 0 0 1 &gic 0 0 GIC_SPI 109 IRQ_TYPE_LEVEL_HIGH>,
<0000 0 0 2 &gic 0 0 GIC_SPI 110 IRQ_TYPE_LEVEL_HIGH>,
<0000 0 0 3 &gic 0 0 GIC_SPI 111 IRQ_TYPE_LEVEL_HIGH>,
<0000 0 0 4 &gic 0 0 GIC_SPI 112 IRQ_TYPE_LEVEL_HIGH>;
};
NVIDIA Tegra PCIe controller (Synopsys DesignWare Core based) NVIDIA Tegra PCIe controller (Synopsys DesignWare Core based)
This PCIe host controller is based on the Synopsis Designware PCIe IP This PCIe controller is based on the Synopsis Designware PCIe IP
and thus inherits all the common properties defined in designware-pcie.txt. and thus inherits all the common properties defined in designware-pcie.txt.
Some of the controller instances are dual mode where in they can work either
in root port mode or endpoint mode but one at a time.
Required properties: Required properties:
- compatible: For Tegra19x, must contain "nvidia,tegra194-pcie".
- device_type: Must be "pci"
- power-domains: A phandle to the node that controls power to the respective - power-domains: A phandle to the node that controls power to the respective
PCIe controller and a specifier name for the PCIe controller. Following are PCIe controller and a specifier name for the PCIe controller. Following are
the specifiers for the different PCIe controllers the specifiers for the different PCIe controllers
...@@ -32,6 +32,32 @@ Required properties: ...@@ -32,6 +32,32 @@ Required properties:
entry for each entry in the interrupt-names property. entry for each entry in the interrupt-names property.
- interrupt-names: Must include the following entries: - interrupt-names: Must include the following entries:
"intr": The Tegra interrupt that is asserted for controller interrupts "intr": The Tegra interrupt that is asserted for controller interrupts
- clocks: Must contain an entry for each entry in clock-names.
See ../clocks/clock-bindings.txt for details.
- clock-names: Must include the following entries:
- core
- resets: Must contain an entry for each entry in reset-names.
See ../reset/reset.txt for details.
- reset-names: Must include the following entries:
- apb
- core
- phys: Must contain a phandle to P2U PHY for each entry in phy-names.
- phy-names: Must include an entry for each active lane.
"p2u-N": where N ranges from 0 to one less than the total number of lanes
- nvidia,bpmp: Must contain a pair of phandle to BPMP controller node followed
by controller-id. Following are the controller ids for each controller.
0: C0
1: C1
2: C2
3: C3
4: C4
5: C5
- vddio-pex-ctl-supply: Regulator supply for PCIe side band signals
RC mode:
- compatible: Tegra19x must contain "nvidia,tegra194-pcie"
- device_type: Must be "pci" for RC mode
- interrupt-names: Must include the following entries:
"msi": The Tegra interrupt that is asserted when an MSI is received "msi": The Tegra interrupt that is asserted when an MSI is received
- bus-range: Range of bus numbers associated with this controller - bus-range: Range of bus numbers associated with this controller
- #address-cells: Address representation for root ports (must be 3) - #address-cells: Address representation for root ports (must be 3)
...@@ -60,27 +86,15 @@ Required properties: ...@@ -60,27 +86,15 @@ Required properties:
- interrupt-map-mask and interrupt-map: Standard PCI IRQ mapping properties - interrupt-map-mask and interrupt-map: Standard PCI IRQ mapping properties
Please refer to the standard PCI bus binding document for a more detailed Please refer to the standard PCI bus binding document for a more detailed
explanation. explanation.
- clocks: Must contain an entry for each entry in clock-names.
See ../clocks/clock-bindings.txt for details. EP mode:
- clock-names: Must include the following entries: In Tegra194, Only controllers C0, C4 & C5 support EP mode.
- core - compatible: Tegra19x must contain "nvidia,tegra194-pcie-ep"
- resets: Must contain an entry for each entry in reset-names. - reg-names: Must include the following entries:
See ../reset/reset.txt for details. "addr_space": Used to map remote RC address space
- reset-names: Must include the following entries: - reset-gpios: Must contain a phandle to a GPIO controller followed by
- apb GPIO that is being used as PERST input signal. Please refer to pci.txt
- core document.
- phys: Must contain a phandle to P2U PHY for each entry in phy-names.
- phy-names: Must include an entry for each active lane.
"p2u-N": where N ranges from 0 to one less than the total number of lanes
- nvidia,bpmp: Must contain a pair of phandle to BPMP controller node followed
by controller-id. Following are the controller ids for each controller.
0: C0
1: C1
2: C2
3: C3
4: C4
5: C5
- vddio-pex-ctl-supply: Regulator supply for PCIe side band signals
Optional properties: Optional properties:
- pinctrl-names: A list of pinctrl state names. - pinctrl-names: A list of pinctrl state names.
...@@ -104,6 +118,8 @@ Optional properties: ...@@ -104,6 +118,8 @@ Optional properties:
specified in microseconds specified in microseconds
- nvidia,aspm-l0s-entrance-latency-us: ASPM L0s entrance latency to be - nvidia,aspm-l0s-entrance-latency-us: ASPM L0s entrance latency to be
specified in microseconds specified in microseconds
RC mode:
- vpcie3v3-supply: A phandle to the regulator node that supplies 3.3V to the slot
  if the platform has one such slot. (Ex:- x16 slot owned by C5 controller
  in p2972-0000 platform).
...@@ -111,11 +127,18 @@ Optional properties:
  if the platform has one such slot. (Ex:- x16 slot owned by C5 controller
  in p2972-0000 platform).
EP mode:
- nvidia,refclk-select-gpios: Must contain a phandle to a GPIO controller
followed by GPIO that is being used to enable REFCLK to controller from host
NOTE:- On Tegra194's P2972-0000 platform, only C5 controller can be enabled to
operate in the endpoint mode because of the way the platform is designed.
Examples:
=========

Tegra194 RC mode:
-----------------

pcie@14180000 {
	compatible = "nvidia,tegra194-pcie", "snps,dw-pcie";
...@@ -169,3 +192,53 @@ Tegra194:
		<&p2u_hsio_5>;
	phy-names = "p2u-0", "p2u-1", "p2u-2", "p2u-3";
};
Tegra194 EP mode:
-----------------

pcie_ep@141a0000 {
	compatible = "nvidia,tegra194-pcie-ep", "snps,dw-pcie-ep";
	power-domains = <&bpmp TEGRA194_POWER_DOMAIN_PCIEX8A>;
	reg = <0x00 0x141a0000 0x0 0x00020000   /* appl registers (128K)      */
	       0x00 0x3a040000 0x0 0x00040000   /* iATU_DMA reg space (256K)  */
	       0x00 0x3a080000 0x0 0x00040000   /* DBI reg space (256K)       */
	       0x1c 0x00000000 0x4 0x00000000>; /* Address Space (16G)        */
	reg-names = "appl", "atu_dma", "dbi", "addr_space";

	num-lanes = <8>;
	num-ib-windows = <2>;
	num-ob-windows = <8>;

	pinctrl-names = "default";
	pinctrl-0 = <&clkreq_c5_bi_dir_state>;

	clocks = <&bpmp TEGRA194_CLK_PEX1_CORE_5>;
	clock-names = "core";

	resets = <&bpmp TEGRA194_RESET_PEX1_CORE_5_APB>,
		 <&bpmp TEGRA194_RESET_PEX1_CORE_5>;
	reset-names = "apb", "core";

	interrupts = <GIC_SPI 53 IRQ_TYPE_LEVEL_HIGH>;	/* controller interrupt */
	interrupt-names = "intr";

	nvidia,bpmp = <&bpmp 5>;

	nvidia,aspm-cmrt-us = <60>;
	nvidia,aspm-pwr-on-t-us = <20>;
	nvidia,aspm-l0s-entrance-latency-us = <3>;

	vddio-pex-ctl-supply = <&vdd_1v8ao>;

	reset-gpios = <&gpio TEGRA194_MAIN_GPIO(GG, 1) GPIO_ACTIVE_LOW>;

	nvidia,refclk-select-gpios = <&gpio_aon TEGRA194_AON_GPIO(AA, 5)
				      GPIO_ACTIVE_HIGH>;

	phys = <&p2u_nvhs_0>, <&p2u_nvhs_1>, <&p2u_nvhs_2>,
	       <&p2u_nvhs_3>, <&p2u_nvhs_4>, <&p2u_nvhs_5>,
	       <&p2u_nvhs_6>, <&p2u_nvhs_7>;

	phy-names = "p2u-0", "p2u-1", "p2u-2", "p2u-3", "p2u-4",
		    "p2u-5", "p2u-6", "p2u-7";
};
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/pci-ep.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: PCI Endpoint Controller Schema

description: |
  Common properties for PCI Endpoint Controller Nodes.

maintainers:
  - Kishon Vijay Abraham I <kishon@ti.com>

properties:
  $nodename:
    pattern: "^pcie-ep@"

  max-functions:
    description: Maximum number of functions that can be configured
    allOf:
      - $ref: /schemas/types.yaml#/definitions/uint8
    minimum: 1
    default: 1
    maximum: 255

  max-link-speed:
    allOf:
      - $ref: /schemas/types.yaml#/definitions/uint32
    enum: [ 1, 2, 3, 4 ]

  num-lanes:
    description: maximum number of lanes
    allOf:
      - $ref: /schemas/types.yaml#/definitions/uint32
    minimum: 1
    default: 1
    maximum: 16

required:
  - compatible
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
%YAML 1.2
---
$id: "http://devicetree.org/schemas/phy/amlogic,meson-axg-mipi-pcie-analog.yaml#"
$schema: "http://devicetree.org/meta-schemas/core.yaml#"

title: Amlogic AXG shared MIPI/PCIE analog PHY

maintainers:
  - Remi Pommarel <repk@triplefau.lt>

properties:
  compatible:
    const: amlogic,axg-mipi-pcie-analog-phy

  reg:
    maxItems: 1

  "#phy-cells":
    const: 1

required:
  - compatible
  - reg
  - "#phy-cells"

additionalProperties: false

examples:
  - |
    mpphy: phy@0 {
          compatible = "amlogic,axg-mipi-pcie-analog-phy";
          reg = <0x0 0x0 0x0 0xc>;
          #phy-cells = <1>;
    };
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
%YAML 1.2
---
$id: "http://devicetree.org/schemas/phy/amlogic,meson-axg-pcie.yaml#"
$schema: "http://devicetree.org/meta-schemas/core.yaml#"

title: Amlogic AXG PCIE PHY

maintainers:
  - Remi Pommarel <repk@triplefau.lt>

properties:
  compatible:
    const: amlogic,axg-pcie-phy

  reg:
    maxItems: 1

  resets:
    maxItems: 1

  phys:
    maxItems: 1

  phy-names:
    const: analog

  "#phy-cells":
    const: 0

required:
  - compatible
  - reg
  - phys
  - phy-names
  - resets
  - "#phy-cells"

additionalProperties: false

examples:
  - |
    #include <dt-bindings/reset/amlogic,meson-axg-reset.h>
    #include <dt-bindings/phy/phy.h>
    pcie_phy: pcie-phy@ff644000 {
          compatible = "amlogic,axg-pcie-phy";
          reg = <0x0 0xff644000 0x0 0x1c>;
          resets = <&reset RESET_PCIE_PHY>;
          phys = <&mipi_analog_phy PHY_TYPE_PCIE>;
          phy-names = "analog";
          #phy-cells = <0>;
    };
...@@ -12857,7 +12857,7 @@ PCI DRIVER FOR CADENCE PCIE IP
M:	Tom Joseph <tjoseph@cadence.com>
L:	linux-pci@vger.kernel.org
S:	Maintained
F:	Documentation/devicetree/bindings/pci/cdns,*
F:	drivers/pci/controller/cadence/

PCI DRIVER FOR FREESCALE LAYERSCAPE
...@@ -12870,6 +12870,14 @@ L: linux-arm-kernel@lists.infradead.org
S:	Maintained
F:	drivers/pci/controller/dwc/*layerscape*
PCI DRIVER FOR NXP LAYERSCAPE GEN4 CONTROLLER
M: Hou Zhiqiang <Zhiqiang.Hou@nxp.com>
L: linux-pci@vger.kernel.org
L: linux-arm-kernel@lists.infradead.org
S: Maintained
F: Documentation/devicetree/bindings/pci/layerscape-pcie-gen4.txt
F: drivers/pci/controller/mobiveil/pcie-layerscape-gen4.c
PCI DRIVER FOR GENERIC OF HOSTS
M:	Will Deacon <will@kernel.org>
L:	linux-pci@vger.kernel.org
...@@ -12912,7 +12920,7 @@ M: Hou Zhiqiang <Zhiqiang.Hou@nxp.com>
L:	linux-pci@vger.kernel.org
S:	Supported
F:	Documentation/devicetree/bindings/pci/mobiveil-pcie.txt
F:	drivers/pci/controller/mobiveil/pcie-mobiveil*

PCI DRIVER FOR MVEBU (Marvell Armada 370 and Armada XP SOC support)
M:	Thomas Petazzoni <thomas.petazzoni@bootlin.com>
......
...@@ -187,10 +187,6 @@ nautilus_machine_check(unsigned long vector, unsigned long la_ptr)
extern void pcibios_claim_one_bus(struct pci_bus *);
static struct resource irongate_io = {
.name = "Irongate PCI IO",
.flags = IORESOURCE_IO,
};
static struct resource irongate_mem = {
	.name = "Irongate PCI MEM",
	.flags = IORESOURCE_MEM,
...@@ -208,17 +204,19 @@ nautilus_init_pci(void)
	struct pci_controller *hose = hose_head;
	struct pci_host_bridge *bridge;
	struct pci_bus *bus;
	unsigned long bus_align, bus_size, pci_mem;
	unsigned long memtop = max_low_pfn << PAGE_SHIFT;

	bridge = pci_alloc_host_bridge(0);
	if (!bridge)
		return;
	/* Use default IO. */
	pci_add_resource(&bridge->windows, &ioport_resource);
	/* Irongate PCI memory aperture, calculate required size before
	   setting it up. */
	pci_add_resource(&bridge->windows, &irongate_mem);

	pci_add_resource(&bridge->windows, &busn_resource);
	bridge->dev.parent = NULL;
	bridge->sysdata = hose;
...@@ -226,59 +224,49 @@ nautilus_init_pci(void)
	bridge->ops = alpha_mv.pci_ops;
	bridge->swizzle_irq = alpha_mv.pci_swizzle;
	bridge->map_irq = alpha_mv.pci_map_irq;
	bridge->size_windows = 1;
	/* Scan our single hose.  */
	if (pci_scan_root_bus_bridge(bridge)) {
		pci_free_host_bridge(bridge);
		return;
	}

	bus = hose->bus = bridge->bus;
	pcibios_claim_one_bus(bus);
irongate = pci_get_domain_bus_and_slot(pci_domain_nr(bus), 0, 0);
bus->self = irongate;
bus->resource[0] = &irongate_io;
bus->resource[1] = &irongate_mem;
	pci_bus_size_bridges(bus);

	/* Now we've got the size and alignment of PCI memory resources
	   stored in irongate_mem.  Set up the PCI memory range: limit is
	   hardwired to 0xffffffff, base must be aligned to 16Mb.  */
	bus_align = irongate_mem.start;
	bus_size = irongate_mem.end + 1 - bus_align;
	if (bus_align < 0x1000000UL)
		bus_align = 0x1000000UL;
	pci_mem = (0x100000000UL - bus_size) & -bus_align;
	irongate_mem.start = pci_mem;
	irongate_mem.end = 0xffffffffUL;

	/* Register our newly calculated PCI memory window in the resource
	   tree.  */
	if (request_resource(&iomem_resource, &irongate_mem) < 0)
		printk(KERN_ERR "Failed to request MEM on hose 0\n");

	printk(KERN_INFO "Irongate pci_mem %pR\n", &irongate_mem);
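The window placement above packs two ideas into one expression: the aperture must end at the hardwired 4GB limit, and its base must be aligned down to `bus_align` (at least 16MB). A minimal user-space sketch of that arithmetic — `place_pci_mem` is a local name for this illustration, not a kernel function:

```c
#include <assert.h>
#include <stdint.h>

/* Given the size and alignment that bridge sizing computed, place the
 * window so it ends at 4GB and its base is a multiple of bus_align. */
static uint64_t place_pci_mem(uint64_t bus_size, uint64_t bus_align)
{
	if (bus_align < 0x1000000ULL)	/* enforce the 16MB minimum */
		bus_align = 0x1000000ULL;
	/* "& -bus_align" clears the low bits: rounds the base down to
	 * the next multiple of bus_align (bus_align is a power of two). */
	return (0x100000000ULL - bus_size) & -bus_align;
}
```

Because the base is rounded *down*, the resulting window is at least `bus_size` bytes and never crosses the 4GB limit.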
	if (pci_mem < memtop)
		memtop = pci_mem;
	if (memtop > alpha_mv.min_mem_address) {
		free_reserved_area(__va(alpha_mv.min_mem_address),
				   __va(memtop), -1, NULL);
		printk(KERN_INFO "nautilus_init_pci: %ldk freed\n",
		       (memtop - alpha_mv.min_mem_address) >> 10);
	}
	if ((IRONGATE0->dev_vendor >> 16) > 0x7006)	/* Albacore? */
		IRONGATE0->pci_mem = pci_mem;

	pci_bus_assign_resources(bus);
/* pci_common_swizzle() relies on bus->self being NULL
for the root bus, so just clear it. */
bus->self = NULL;
	pci_bus_add_devices(bus);
}
......
...@@ -376,6 +376,7 @@ struct hv_tsc_emulation_status {
#define HVCALL_SEND_IPI_EX			0x0015
#define HVCALL_POST_MESSAGE			0x005c
#define HVCALL_SIGNAL_EVENT			0x005d
#define HVCALL_RETARGET_INTERRUPT		0x007e
#define HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_SPACE 0x00af
#define HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_LIST 0x00b0
...@@ -405,6 +406,8 @@ enum HV_GENERIC_SET_FORMAT {
	HV_GENERIC_SET_ALL,
};

#define HV_PARTITION_ID_SELF		((u64)-1)

#define HV_HYPERCALL_RESULT_MASK	GENMASK_ULL(15, 0)
#define HV_HYPERCALL_FAST_BIT		BIT(16)
#define HV_HYPERCALL_VARHEAD_OFFSET	17
...@@ -909,4 +912,42 @@ struct hv_tlb_flush_ex {
struct hv_partition_assist_pg {
	u32 tlb_lock_count;
};
union hv_msi_entry {
	u64 as_uint64;
	struct {
		u32 address;
		u32 data;
	} __packed;
};

struct hv_interrupt_entry {
	u32 source;			/* 1 for MSI(-X) */
	u32 reserved1;
	union hv_msi_entry msi_entry;
} __packed;

/*
 * flags for hv_device_interrupt_target.flags
 */
#define HV_DEVICE_INTERRUPT_TARGET_MULTICAST		1
#define HV_DEVICE_INTERRUPT_TARGET_PROCESSOR_SET	2

struct hv_device_interrupt_target {
	u32 vector;
	u32 flags;
	union {
		u64 vp_mask;
		struct hv_vpset vp_set;
	};
} __packed;

/* HvRetargetDeviceInterrupt hypercall */
struct hv_retarget_device_interrupt {
	u64 partition_id;		/* use "self" */
	u64 device_id;
	struct hv_interrupt_entry int_entry;
	u64 reserved2;
	struct hv_device_interrupt_target int_target;
} __packed __aligned(8);
#endif
...@@ -4,6 +4,7 @@
#include <linux/types.h>
#include <linux/nmi.h>
#include <linux/msi.h>
#include <asm/io.h>
#include <asm/hyperv-tlfs.h>
#include <asm/nospec-branch.h>
...@@ -242,6 +243,13 @@ bool hv_vcpu_is_preempted(int vcpu);
static inline void hv_apic_init(void) {}
#endif
static inline void hv_set_msi_entry_from_desc(union hv_msi_entry *msi_entry,
					      struct msi_desc *msi_desc)
{
	msi_entry->address = msi_desc->msg.address_lo;
	msi_entry->data = msi_desc->msg.data;
}
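The helper above works because `union hv_msi_entry` overlays the two 32-bit MSI fields with a single `u64`, so the entry can be filled field-by-field and then passed to the hypercall as one quadword. A stand-alone mock-up of that layout (`msi_entry`, `pack_msi`, and `msi_address` are local names for this sketch, not the kernel definitions):

```c
#include <assert.h>
#include <stdint.h>

/* Plain-C mirror of the address/data-over-u64 union pattern. */
union msi_entry {
	uint64_t as_uint64;
	struct {
		uint32_t address;
		uint32_t data;
	} u;
};

/* Fill the fields and hand back the raw quadword, as the hypercall
 * argument marshalling does. */
static uint64_t pack_msi(uint32_t address_lo, uint32_t data)
{
	union msi_entry e = { 0 };

	e.u.address = address_lo;	/* mirrors msi_desc->msg.address_lo */
	e.u.data = data;		/* mirrors msi_desc->msg.data */
	return e.as_uint64;
}

/* Recover the address field from a packed quadword. */
static uint32_t msi_address(uint64_t packed)
{
	union msi_entry e;

	e.as_uint64 = packed;
	return e.u.address;
}
```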
#else /* CONFIG_HYPERV */
static inline void hyperv_init(void) {}
static inline void hyperv_setup_mmu_ops(void) {}
......
...@@ -131,6 +131,7 @@ static struct pci_osc_bit_struct pci_osc_support_bit[] = {
	{ OSC_PCI_CLOCK_PM_SUPPORT, "ClockPM" },
	{ OSC_PCI_SEGMENT_GROUPS_SUPPORT, "Segments" },
	{ OSC_PCI_MSI_SUPPORT, "MSI" },
	{ OSC_PCI_EDR_SUPPORT, "EDR" },
	{ OSC_PCI_HPX_TYPE_3_SUPPORT, "HPX-Type3" },
};
...@@ -141,6 +142,7 @@ static struct pci_osc_bit_struct pci_osc_control_bit[] = {
	{ OSC_PCI_EXPRESS_AER_CONTROL, "AER" },
	{ OSC_PCI_EXPRESS_CAPABILITY_CONTROL, "PCIeCapability" },
	{ OSC_PCI_EXPRESS_LTR_CONTROL, "LTR" },
	{ OSC_PCI_EXPRESS_DPC_CONTROL, "DPC" },
};

static void decode_osc_bits(struct acpi_pci_root *root, char *msg, u32 word,
...@@ -440,6 +442,8 @@ static void negotiate_os_control(struct acpi_pci_root *root, int *no_aspm,
		support |= OSC_PCI_ASPM_SUPPORT | OSC_PCI_CLOCK_PM_SUPPORT;
	if (pci_msi_enabled())
		support |= OSC_PCI_MSI_SUPPORT;
	if (IS_ENABLED(CONFIG_PCIE_EDR))
		support |= OSC_PCI_EDR_SUPPORT;
	decode_osc_support(root, "OS supports", support);
	status = acpi_pci_osc_support(root, support);
control |= OSC_PCI_EXPRESS_AER_CONTROL; control |= OSC_PCI_EXPRESS_AER_CONTROL;
} }
/*
* Per the Downstream Port Containment Related Enhancements ECN to
* the PCI Firmware Spec, r3.2, sec 4.5.1, table 4-5,
* OSC_PCI_EXPRESS_DPC_CONTROL indicates the OS supports both DPC
* and EDR.
*/
if (IS_ENABLED(CONFIG_PCIE_DPC) && IS_ENABLED(CONFIG_PCIE_EDR))
control |= OSC_PCI_EXPRESS_DPC_CONTROL;
requested = control; requested = control;
status = acpi_pci_osc_control_set(handle, &control, status = acpi_pci_osc_control_set(handle, &control,
OSC_PCI_EXPRESS_CAPABILITY_CONTROL); OSC_PCI_EXPRESS_CAPABILITY_CONTROL);
...@@ -916,6 +929,8 @@ struct pci_bus *acpi_pci_root_create(struct acpi_pci_root *root,
		host_bridge->native_pme = 0;
	if (!(root->osc_control_set & OSC_PCI_EXPRESS_LTR_CONTROL))
		host_bridge->native_ltr = 0;
	if (!(root->osc_control_set & OSC_PCI_EXPRESS_DPC_CONTROL))
		host_bridge->native_dpc = 0;

	/*
	 * Evaluate the "PCI Boot Configuration" _DSM Function. If it
......
...@@ -192,30 +192,35 @@ static bool amdgpu_read_bios_from_rom(struct amdgpu_device *adev)
static bool amdgpu_read_platform_bios(struct amdgpu_device *adev)
{
	phys_addr_t rom = adev->pdev->rom;
	size_t romlen = adev->pdev->romlen;
	void __iomem *bios;

	adev->bios = NULL;

	if (!rom || romlen == 0)
		return false;

	adev->bios = kzalloc(romlen, GFP_KERNEL);
	if (!adev->bios)
		return false;

	bios = ioremap(rom, romlen);
	if (!bios)
		goto free_bios;

	memcpy_fromio(adev->bios, bios, romlen);
	iounmap(bios);

	if (!check_atom_bios(adev->bios, romlen))
		goto free_bios;

	adev->bios_size = romlen;

	return true;
free_bios:
	kfree(adev->bios);
	return false;
}

#ifdef CONFIG_ACPI
......
...@@ -101,9 +101,13 @@ platform_init(struct nvkm_bios *bios, const char *name)
	else
		return ERR_PTR(-ENODEV);

	if (!pdev->rom || pdev->romlen == 0)
		return ERR_PTR(-ENODEV);

	if ((priv = kmalloc(sizeof(*priv), GFP_KERNEL))) {
		priv->size = pdev->romlen;
		if (ret = -ENODEV,
		    (priv->rom = ioremap(pdev->rom, pdev->romlen)))
			return priv;
		kfree(priv);
	}
...@@ -111,11 +115,20 @@ platform_init(struct nvkm_bios *bios, const char *name)
	return ERR_PTR(ret);
}

static void
platform_fini(void *data)
{
	struct priv *priv = data;

	iounmap(priv->rom);
	kfree(priv);
}

const struct nvbios_source
nvbios_platform = {
	.name	= "PLATFORM",
	.init	= platform_init,
	.fini	= platform_fini,
	.read	= pcirom_read,
	.rw	= true,
};
...@@ -108,25 +108,33 @@ static bool radeon_read_bios(struct radeon_device *rdev)
static bool radeon_read_platform_bios(struct radeon_device *rdev)
{
	phys_addr_t rom = rdev->pdev->rom;
	size_t romlen = rdev->pdev->romlen;
	void __iomem *bios;

	rdev->bios = NULL;

	if (!rom || romlen == 0)
		return false;

	rdev->bios = kzalloc(romlen, GFP_KERNEL);
	if (!rdev->bios)
		return false;

	bios = ioremap(rom, romlen);
	if (!bios)
		goto free_bios;

	memcpy_fromio(rdev->bios, bios, romlen);
	iounmap(bios);

	if (rdev->bios[0] != 0x55 || rdev->bios[1] != 0xaa)
		goto free_bios;

	return true;
free_bios:
	kfree(rdev->bios);
	return false;
}
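The `0x55`/`0xaa` comparison above is the standard PCI expansion ROM signature check: every option ROM image must begin with those two bytes. A minimal stand-alone version of that validation (`rom_signature_ok` is a name local to this sketch):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Return nonzero iff the buffer starts with the PCI expansion ROM
 * signature bytes 0x55 0xaa. */
static int rom_signature_ok(const uint8_t *rom, size_t len)
{
	return len >= 2 && rom[0] == 0x55 && rom[1] == 0xaa;
}
```

Checking the length first keeps the helper safe on truncated or empty ROM copies, mirroring the `romlen == 0` guard in the driver.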
#ifdef CONFIG_ACPI
......
...@@ -3508,9 +3508,9 @@ static pci_ers_result_t ice_pci_err_slot_reset(struct pci_dev *pdev)
		result = PCI_ERS_RESULT_DISCONNECT;
	}

	err = pci_aer_clear_nonfatal_status(pdev);
	if (err)
		dev_dbg(&pdev->dev, "pci_aer_clear_nonfatal_status() failed, error %d\n",
			err);
		/* non-fatal, continue */
......
...@@ -2674,8 +2674,8 @@ static int idt_init_pci(struct idt_ntb_dev *ndev)
	ret = pci_enable_pcie_error_reporting(pdev);
	if (ret != 0)
		dev_warn(&pdev->dev, "PCIe AER capability disabled\n");
	else /* Cleanup nonfatal error status before getting to init */
		pci_aer_clear_nonfatal_status(pdev);

	/* First enable the PCI device */
	ret = pcim_enable_device(pdev);
......
...@@ -213,16 +213,6 @@ config PCIE_MEDIATEK
	  Say Y here if you want to enable PCIe controller support on
	  MediaTek SoCs.
config PCIE_MOBIVEIL
	bool "Mobiveil AXI PCIe controller"
	depends on ARCH_ZYNQMP || COMPILE_TEST
	depends on OF
	depends on PCI_MSI_IRQ_DOMAIN
	help
	  Say Y here if you want to enable support for the Mobiveil AXI PCIe
	  Soft IP. It has up to 8 outbound and inbound windows
	  for address translation and it is a PCIe Gen4 IP.
config PCIE_TANGO_SMP8759
	bool "Tango SMP8759 PCIe controller (DANGEROUS)"
	depends on ARCH_TANGO && PCI_MSI && OF
...@@ -269,5 +259,6 @@ config PCI_HYPERV_INTERFACE
	  have a common interface with the Hyper-V PCI frontend driver.

source "drivers/pci/controller/dwc/Kconfig"
source "drivers/pci/controller/mobiveil/Kconfig"
source "drivers/pci/controller/cadence/Kconfig"
endmenu
...@@ -25,12 +25,12 @@ obj-$(CONFIG_PCIE_ROCKCHIP) += pcie-rockchip.o
obj-$(CONFIG_PCIE_ROCKCHIP_EP) += pcie-rockchip-ep.o
obj-$(CONFIG_PCIE_ROCKCHIP_HOST) += pcie-rockchip-host.o
obj-$(CONFIG_PCIE_MEDIATEK) += pcie-mediatek.o
obj-$(CONFIG_PCIE_MOBIVEIL) += pcie-mobiveil.o
obj-$(CONFIG_PCIE_TANGO_SMP8759) += pcie-tango.o
obj-$(CONFIG_VMD) += vmd.o
obj-$(CONFIG_PCIE_BRCMSTB) += pcie-brcmstb.o

# pcie-hisi.o quirks are needed even without CONFIG_PCIE_DW
obj-y += dwc/
obj-y += mobiveil/

# The following drivers are for devices that use the generic ACPI
......
...@@ -248,14 +248,37 @@ config PCI_MESON
	  implement the driver.

config PCIE_TEGRA194
	tristate

config PCIE_TEGRA194_HOST
	tristate "NVIDIA Tegra194 (and later) PCIe controller - Host Mode"
	depends on ARCH_TEGRA_194_SOC || COMPILE_TEST
	depends on PCI_MSI_IRQ_DOMAIN
	select PCIE_DW_HOST
	select PHY_TEGRA194_P2U
	select PCIE_TEGRA194
	help
	  Enables support for the PCIe controller in the NVIDIA Tegra194 SoC to
	  work in host mode. There are two instances of PCIe controllers in
	  Tegra194. This controller can work either as EP or RC. In order to
	  enable host-specific features PCIE_TEGRA194_HOST must be selected and
	  in order to enable device-specific features PCIE_TEGRA194_EP must be
	  selected. This uses the DesignWare core.

config PCIE_TEGRA194_EP
	tristate "NVIDIA Tegra194 (and later) PCIe controller - Endpoint Mode"
	depends on ARCH_TEGRA_194_SOC || COMPILE_TEST
	depends on PCI_ENDPOINT
	select PCIE_DW_EP
	select PHY_TEGRA194_P2U
	select PCIE_TEGRA194
	help
	  Enables support for the PCIe controller in the NVIDIA Tegra194 SoC to
	  work in endpoint mode. There are two instances of PCIe controllers in
	  Tegra194. This controller can work either as EP or RC. In order to
	  enable host-specific features PCIE_TEGRA194_HOST must be selected and
	  in order to enable device-specific features PCIE_TEGRA194_EP must be
	  selected. This uses the DesignWare core.

config PCIE_UNIPHIER
	bool "Socionext UniPhier PCIe controllers"
......
...@@ -215,10 +215,6 @@ static int dra7xx_pcie_host_init(struct pcie_port *pp)
	return 0;
}
static const struct dw_pcie_host_ops dra7xx_pcie_host_ops = {
	.host_init = dra7xx_pcie_host_init,
};
static int dra7xx_pcie_intx_map(struct irq_domain *domain, unsigned int irq,
				irq_hw_number_t hwirq)
{
...@@ -233,43 +229,77 @@ static const struct irq_domain_ops intx_domain_ops = {
	.xlate = pci_irqd_intx_xlate,
};
static int dra7xx_pcie_handle_msi(struct pcie_port *pp, int index)
{
	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
	unsigned long val;
	int pos, irq;

	val = dw_pcie_readl_dbi(pci, PCIE_MSI_INTR0_STATUS +
				(index * MSI_REG_CTRL_BLOCK_SIZE));
	if (!val)
		return 0;

	pos = find_next_bit(&val, MAX_MSI_IRQS_PER_CTRL, 0);
	while (pos != MAX_MSI_IRQS_PER_CTRL) {
		irq = irq_find_mapping(pp->irq_domain,
				       (index * MAX_MSI_IRQS_PER_CTRL) + pos);
		generic_handle_irq(irq);
		pos++;
		pos = find_next_bit(&val, MAX_MSI_IRQS_PER_CTRL, pos);
	}

	return 1;
}
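The status walk above repeatedly finds the next set bit in the per-controller status word and dispatches that vector, with the hwirq number formed as `index * MAX_MSI_IRQS_PER_CTRL + pos`. A user-space sketch of that mapping, collecting hwirq numbers instead of calling `generic_handle_irq()` (`MAX_PER_CTRL` and `walk_status` are local names standing in for the kernel's `MAX_MSI_IRQS_PER_CTRL` and the loop body):

```c
#include <assert.h>
#include <stdint.h>

#define MAX_PER_CTRL 32	/* stand-in for MAX_MSI_IRQS_PER_CTRL */

/* Walk the set bits of one controller's status word and record the
 * hwirq each bit corresponds to. Returns the number of bits found. */
static int walk_status(uint32_t val, int index, int *hwirqs)
{
	int n = 0;

	for (int pos = 0; pos < MAX_PER_CTRL; pos++)
		if (val & (1u << pos))
			hwirqs[n++] = index * MAX_PER_CTRL + pos;
	return n;
}
```

Returning the count (nonzero when any bit was set) mirrors the driver's `return 1`, which lets the caller rescan until every status word reads back zero.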
static void dra7xx_pcie_handle_msi_irq(struct pcie_port *pp)
{
	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
	int ret, i, count, num_ctrls;

	num_ctrls = pp->num_vectors / MAX_MSI_IRQS_PER_CTRL;

	/**
	 * Need to make sure all MSI status bits read 0 before exiting.
	 * Else, new MSI IRQs are not registered by the wrapper. Have an
	 * upperbound for the loop and exit the IRQ in case of IRQ flood
	 * to avoid locking up system in interrupt context.
	 */
	count = 0;
	do {
		ret = 0;

		for (i = 0; i < num_ctrls; i++)
			ret |= dra7xx_pcie_handle_msi(pp, i);
		count++;
	} while (ret && count <= 1000);

	if (count > 1000)
		dev_warn_ratelimited(pci->dev,
				     "Too many MSI IRQs to handle\n");
}

static void dra7xx_pcie_msi_irq_handler(struct irq_desc *desc)
{
	struct irq_chip *chip = irq_desc_get_chip(desc);
	struct dra7xx_pcie *dra7xx;
	struct dw_pcie *pci;
	struct pcie_port *pp;
	unsigned long reg;
	u32 virq, bit;

	chained_irq_enter(chip, desc);

	pp = irq_desc_get_handler_data(desc);
	pci = to_dw_pcie_from_pp(pp);
	dra7xx = to_dra7xx_pcie(pci);

	reg = dra7xx_pcie_readl(dra7xx, PCIECTRL_DRA7XX_CONF_IRQSTATUS_MSI);
	dra7xx_pcie_writel(dra7xx, PCIECTRL_DRA7XX_CONF_IRQSTATUS_MSI, reg);

	switch (reg) {
	case MSI:
		dra7xx_pcie_handle_msi_irq(pp);
		break;
	case INTA:
	case INTB:
...@@ -283,9 +313,7 @@ static irqreturn_t dra7xx_pcie_msi_irq_handler(int irq, void *arg)
		break;
	}

	chained_irq_exit(chip, desc);
}
static irqreturn_t dra7xx_pcie_irq_handler(int irq, void *arg)
...@@ -347,6 +375,145 @@ static irqreturn_t dra7xx_pcie_irq_handler(int irq, void *arg)
	return IRQ_HANDLED;
}
static int dra7xx_pcie_init_irq_domain(struct pcie_port *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
struct device *dev = pci->dev;
struct dra7xx_pcie *dra7xx = to_dra7xx_pcie(pci);
struct device_node *node = dev->of_node;
struct device_node *pcie_intc_node = of_get_next_child(node, NULL);
if (!pcie_intc_node) {
dev_err(dev, "No PCIe Intc node found\n");
return -ENODEV;
}
irq_set_chained_handler_and_data(pp->irq, dra7xx_pcie_msi_irq_handler,
pp);
dra7xx->irq_domain = irq_domain_add_linear(pcie_intc_node, PCI_NUM_INTX,
&intx_domain_ops, pp);
of_node_put(pcie_intc_node);
if (!dra7xx->irq_domain) {
dev_err(dev, "Failed to get a INTx IRQ domain\n");
return -ENODEV;
}
return 0;
}
static void dra7xx_pcie_setup_msi_msg(struct irq_data *d, struct msi_msg *msg)
{
struct pcie_port *pp = irq_data_get_irq_chip_data(d);
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
u64 msi_target;
msi_target = (u64)pp->msi_data;
msg->address_lo = lower_32_bits(msi_target);
msg->address_hi = upper_32_bits(msi_target);
msg->data = d->hwirq;
dev_dbg(pci->dev, "msi#%d address_hi %#x address_lo %#x\n",
(int)d->hwirq, msg->address_hi, msg->address_lo);
}
static int dra7xx_pcie_msi_set_affinity(struct irq_data *d,
const struct cpumask *mask,
bool force)
{
return -EINVAL;
}
static void dra7xx_pcie_bottom_mask(struct irq_data *d)
{
struct pcie_port *pp = irq_data_get_irq_chip_data(d);
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
unsigned int res, bit, ctrl;
unsigned long flags;
raw_spin_lock_irqsave(&pp->lock, flags);
ctrl = d->hwirq / MAX_MSI_IRQS_PER_CTRL;
res = ctrl * MSI_REG_CTRL_BLOCK_SIZE;
bit = d->hwirq % MAX_MSI_IRQS_PER_CTRL;
pp->irq_mask[ctrl] |= BIT(bit);
dw_pcie_writel_dbi(pci, PCIE_MSI_INTR0_MASK + res,
pp->irq_mask[ctrl]);
raw_spin_unlock_irqrestore(&pp->lock, flags);
}
static void dra7xx_pcie_bottom_unmask(struct irq_data *d)
{
struct pcie_port *pp = irq_data_get_irq_chip_data(d);
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
unsigned int res, bit, ctrl;
unsigned long flags;
raw_spin_lock_irqsave(&pp->lock, flags);
ctrl = d->hwirq / MAX_MSI_IRQS_PER_CTRL;
res = ctrl * MSI_REG_CTRL_BLOCK_SIZE;
bit = d->hwirq % MAX_MSI_IRQS_PER_CTRL;
pp->irq_mask[ctrl] &= ~BIT(bit);
dw_pcie_writel_dbi(pci, PCIE_MSI_INTR0_MASK + res,
pp->irq_mask[ctrl]);
raw_spin_unlock_irqrestore(&pp->lock, flags);
}
static void dra7xx_pcie_bottom_ack(struct irq_data *d)
{
struct pcie_port *pp = irq_data_get_irq_chip_data(d);
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
unsigned int res, bit, ctrl;
ctrl = d->hwirq / MAX_MSI_IRQS_PER_CTRL;
res = ctrl * MSI_REG_CTRL_BLOCK_SIZE;
bit = d->hwirq % MAX_MSI_IRQS_PER_CTRL;
dw_pcie_writel_dbi(pci, PCIE_MSI_INTR0_STATUS + res, BIT(bit));
}
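The mask, unmask, and ack callbacks above all locate a vector with the same arithmetic: the hwirq is split into a controller index, a register-block offset, and a bit position. A standalone sketch of that decomposition, assuming the usual DWC constants from pcie-designware.h (MAX_MSI_IRQS_PER_CTRL = 32, MSI_REG_CTRL_BLOCK_SIZE = 12, PCIE_MSI_INTR0_MASK = 0x82C):

```c
#include <assert.h>
#include <stdint.h>

/* Constants assumed from pcie-designware.h: each MSI controller handles
 * 32 vectors, and its ENABLE/MASK/STATUS registers form a 12-byte block. */
#define MAX_MSI_IRQS_PER_CTRL	32
#define MSI_REG_CTRL_BLOCK_SIZE	12
#define PCIE_MSI_INTR0_MASK	0x82C

/* Controller index that owns this hwirq. */
static unsigned int msi_ctrl(unsigned int hwirq)
{
	return hwirq / MAX_MSI_IRQS_PER_CTRL;
}

/* DBI offset of the MASK register for this hwirq's controller. */
static unsigned int msi_mask_reg(unsigned int hwirq)
{
	return PCIE_MSI_INTR0_MASK + msi_ctrl(hwirq) * MSI_REG_CTRL_BLOCK_SIZE;
}

/* Bit within that register corresponding to this hwirq. */
static uint32_t msi_bit(unsigned int hwirq)
{
	return 1u << (hwirq % MAX_MSI_IRQS_PER_CTRL);
}
```

For example, hwirq 37 lands in controller 1, bit 5 of the second register block.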
static struct irq_chip dra7xx_pci_msi_bottom_irq_chip = {
.name = "DRA7XX-PCI-MSI",
.irq_ack = dra7xx_pcie_bottom_ack,
.irq_compose_msi_msg = dra7xx_pcie_setup_msi_msg,
.irq_set_affinity = dra7xx_pcie_msi_set_affinity,
.irq_mask = dra7xx_pcie_bottom_mask,
.irq_unmask = dra7xx_pcie_bottom_unmask,
};
static int dra7xx_pcie_msi_host_init(struct pcie_port *pp)
{
struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
u32 ctrl, num_ctrls;
pp->msi_irq_chip = &dra7xx_pci_msi_bottom_irq_chip;
num_ctrls = pp->num_vectors / MAX_MSI_IRQS_PER_CTRL;
/* Initialize IRQ Status array */
for (ctrl = 0; ctrl < num_ctrls; ctrl++) {
pp->irq_mask[ctrl] = ~0;
dw_pcie_writel_dbi(pci, PCIE_MSI_INTR0_MASK +
(ctrl * MSI_REG_CTRL_BLOCK_SIZE),
pp->irq_mask[ctrl]);
dw_pcie_writel_dbi(pci, PCIE_MSI_INTR0_ENABLE +
(ctrl * MSI_REG_CTRL_BLOCK_SIZE),
~0);
}
return dw_pcie_allocate_domains(pp);
}
static const struct dw_pcie_host_ops dra7xx_pcie_host_ops = {
.host_init = dra7xx_pcie_host_init,
.msi_host_init = dra7xx_pcie_msi_host_init,
};
 static void dra7xx_pcie_ep_init(struct dw_pcie_ep *ep)
 {
 	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
@@ -467,14 +634,6 @@ static int __init dra7xx_add_pcie_port(struct dra7xx_pcie *dra7xx,
 		return pp->irq;
 	}
-	ret = devm_request_irq(dev, pp->irq, dra7xx_pcie_msi_irq_handler,
-			       IRQF_SHARED | IRQF_NO_THREAD,
-			       "dra7-pcie-msi",	dra7xx);
-	if (ret) {
-		dev_err(dev, "failed to request irq\n");
-		return ret;
-	}
 	ret = dra7xx_pcie_init_irq_domain(pp);
 	if (ret < 0)
 		return ret;
......
@@ -959,6 +959,9 @@ static int ks_pcie_am654_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
 	case PCI_EPC_IRQ_MSI:
 		dw_pcie_ep_raise_msi_irq(ep, func_no, interrupt_num);
 		break;
+	case PCI_EPC_IRQ_MSIX:
+		dw_pcie_ep_raise_msix_irq(ep, func_no, interrupt_num);
+		break;
 	default:
 		dev_err(pci->dev, "UNKNOWN IRQ type\n");
 		return -EINVAL;
@@ -970,7 +973,7 @@ static int ks_pcie_am654_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
 static const struct pci_epc_features ks_pcie_am654_epc_features = {
 	.linkup_notifier = false,
 	.msi_capable = true,
-	.msix_capable = false,
+	.msix_capable = true,
 	.reserved_bar = 1 << BAR_0 | 1 << BAR_1,
 	.bar_fixed_64bit = 1 << BAR_0,
 	.bar_fixed_size[2] = SZ_1M,
......
@@ -66,7 +66,6 @@
 #define PORT_CLK_RATE			100000000UL
 #define MAX_PAYLOAD_SIZE		256
 #define MAX_READ_REQ_SIZE		256
-#define MESON_PCIE_PHY_POWERUP		0x1c
 #define PCIE_RESET_DELAY		500
 #define PCIE_SHARED_RESET		1
 #define PCIE_NORMAL_RESET		0
@@ -81,26 +80,19 @@ enum pcie_data_rate {
 struct meson_pcie_mem_res {
 	void __iomem *elbi_base;
 	void __iomem *cfg_base;
-	void __iomem *phy_base;
 };
 struct meson_pcie_clk_res {
 	struct clk *clk;
-	struct clk *mipi_gate;
 	struct clk *port_clk;
 	struct clk *general_clk;
 };
 struct meson_pcie_rc_reset {
-	struct reset_control *phy;
 	struct reset_control *port;
 	struct reset_control *apb;
 };
-struct meson_pcie_param {
-	bool has_shared_phy;
-};
 struct meson_pcie {
 	struct dw_pcie pci;
 	struct meson_pcie_mem_res mem_res;
@@ -108,7 +100,6 @@ struct meson_pcie {
 	struct meson_pcie_rc_reset mrst;
 	struct gpio_desc *reset_gpio;
 	struct phy *phy;
-	const struct meson_pcie_param *param;
 };
 static struct reset_control *meson_pcie_get_reset(struct meson_pcie *mp,
@@ -130,13 +121,6 @@ static int meson_pcie_get_resets(struct meson_pcie *mp)
 {
 	struct meson_pcie_rc_reset *mrst = &mp->mrst;
-	if (!mp->param->has_shared_phy) {
-		mrst->phy = meson_pcie_get_reset(mp, "phy", PCIE_SHARED_RESET);
-		if (IS_ERR(mrst->phy))
-			return PTR_ERR(mrst->phy);
-		reset_control_deassert(mrst->phy);
-	}
 	mrst->port = meson_pcie_get_reset(mp, "port", PCIE_NORMAL_RESET);
 	if (IS_ERR(mrst->port))
 		return PTR_ERR(mrst->port);
@@ -162,22 +146,6 @@ static void __iomem *meson_pcie_get_mem(struct platform_device *pdev,
 	return devm_ioremap_resource(dev, res);
 }
-static void __iomem *meson_pcie_get_mem_shared(struct platform_device *pdev,
-					       struct meson_pcie *mp,
-					       const char *id)
-{
-	struct device *dev = mp->pci.dev;
-	struct resource *res;
-	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, id);
-	if (!res) {
-		dev_err(dev, "No REG resource %s\n", id);
-		return ERR_PTR(-ENXIO);
-	}
-	return devm_ioremap(dev, res->start, resource_size(res));
-}
 static int meson_pcie_get_mems(struct platform_device *pdev,
 			       struct meson_pcie *mp)
 {
@@ -189,14 +157,6 @@ static int meson_pcie_get_mems(struct platform_device *pdev,
 	if (IS_ERR(mp->mem_res.cfg_base))
 		return PTR_ERR(mp->mem_res.cfg_base);
-	/* Meson AXG SoC has two PCI controllers use same phy register */
-	if (!mp->param->has_shared_phy) {
-		mp->mem_res.phy_base =
-			meson_pcie_get_mem_shared(pdev, mp, "phy");
-		if (IS_ERR(mp->mem_res.phy_base))
-			return PTR_ERR(mp->mem_res.phy_base);
-	}
 	return 0;
 }
@@ -204,7 +164,6 @@ static int meson_pcie_power_on(struct meson_pcie *mp)
 {
 	int ret = 0;
-	if (mp->param->has_shared_phy) {
 	ret = phy_init(mp->phy);
 	if (ret)
 		return ret;
@@ -214,27 +173,24 @@ static int meson_pcie_power_on(struct meson_pcie *mp)
 		phy_exit(mp->phy);
 		return ret;
 	}
-	} else
-		writel(MESON_PCIE_PHY_POWERUP, mp->mem_res.phy_base);
 	return 0;
 }
+static void meson_pcie_power_off(struct meson_pcie *mp)
+{
+	phy_power_off(mp->phy);
+	phy_exit(mp->phy);
+}
 static int meson_pcie_reset(struct meson_pcie *mp)
 {
 	struct meson_pcie_rc_reset *mrst = &mp->mrst;
 	int ret = 0;
-	if (mp->param->has_shared_phy) {
 	ret = phy_reset(mp->phy);
 	if (ret)
 		return ret;
-	} else {
-		reset_control_assert(mrst->phy);
-		udelay(PCIE_RESET_DELAY);
-		reset_control_deassert(mrst->phy);
-		udelay(PCIE_RESET_DELAY);
-	}
 	reset_control_assert(mrst->port);
 	reset_control_assert(mrst->apb);
@@ -286,12 +242,6 @@ static int meson_pcie_probe_clocks(struct meson_pcie *mp)
 	if (IS_ERR(res->port_clk))
 		return PTR_ERR(res->port_clk);
-	if (!mp->param->has_shared_phy) {
-		res->mipi_gate = meson_pcie_probe_clock(dev, "mipi", 0);
-		if (IS_ERR(res->mipi_gate))
-			return PTR_ERR(res->mipi_gate);
-	}
 	res->general_clk = meson_pcie_probe_clock(dev, "general", 0);
 	if (IS_ERR(res->general_clk))
 		return PTR_ERR(res->general_clk);
@@ -562,7 +512,6 @@ static const struct dw_pcie_ops dw_pcie_ops = {
 static int meson_pcie_probe(struct platform_device *pdev)
 {
-	const struct meson_pcie_param *match_data;
 	struct device *dev = &pdev->dev;
 	struct dw_pcie *pci;
 	struct meson_pcie *mp;
@@ -576,16 +525,9 @@ static int meson_pcie_probe(struct platform_device *pdev)
 	pci->dev = dev;
 	pci->ops = &dw_pcie_ops;
-	match_data = of_device_get_match_data(dev);
-	if (!match_data) {
-		dev_err(dev, "failed to get match data\n");
-		return -ENODEV;
-	}
-	mp->param = match_data;
-	if (mp->param->has_shared_phy) {
 	mp->phy = devm_phy_get(dev, "pcie");
-	if (IS_ERR(mp->phy))
+	if (IS_ERR(mp->phy)) {
+		dev_err(dev, "get phy failed, %ld\n", PTR_ERR(mp->phy));
 		return PTR_ERR(mp->phy);
 	}
@@ -636,30 +578,16 @@ static int meson_pcie_probe(struct platform_device *pdev)
 	return 0;
 err_phy:
-	if (mp->param->has_shared_phy) {
-		phy_power_off(mp->phy);
-		phy_exit(mp->phy);
-	}
+	meson_pcie_power_off(mp);
 	return ret;
 }
-static struct meson_pcie_param meson_pcie_axg_param = {
-	.has_shared_phy = false,
-};
-static struct meson_pcie_param meson_pcie_g12a_param = {
-	.has_shared_phy = true,
-};
 static const struct of_device_id meson_pcie_of_match[] = {
 	{
 		.compatible = "amlogic,axg-pcie",
-		.data = &meson_pcie_axg_param,
 	},
 	{
 		.compatible = "amlogic,g12a-pcie",
-		.data = &meson_pcie_g12a_param,
 	},
 	{},
 };
......
@@ -18,6 +18,15 @@ void dw_pcie_ep_linkup(struct dw_pcie_ep *ep)
 	pci_epc_linkup(epc);
 }
 EXPORT_SYMBOL_GPL(dw_pcie_ep_linkup);
+void dw_pcie_ep_init_notify(struct dw_pcie_ep *ep)
+{
+	struct pci_epc *epc = ep->epc;
+	pci_epc_init_notify(epc);
+}
+EXPORT_SYMBOL_GPL(dw_pcie_ep_init_notify);
 static void __dw_pcie_ep_reset_bar(struct dw_pcie *pci, enum pci_barno bar,
 				   int flags)
@@ -125,6 +134,7 @@ static void dw_pcie_ep_clear_bar(struct pci_epc *epc, u8 func_no,
 	dw_pcie_disable_atu(pci, atu_index, DW_PCIE_REGION_INBOUND);
 	clear_bit(atu_index, ep->ib_window_map);
+	ep->epf_bar[bar] = NULL;
 }
 static int dw_pcie_ep_set_bar(struct pci_epc *epc, u8 func_no,
@@ -158,6 +168,7 @@ static int dw_pcie_ep_set_bar(struct pci_epc *epc, u8 func_no,
 		dw_pcie_writel_dbi(pci, reg + 4, 0);
 	}
+	ep->epf_bar[bar] = epf_bar;
 	dw_pcie_dbi_ro_wr_dis(pci);
 	return 0;
@@ -269,7 +280,8 @@ static int dw_pcie_ep_get_msix(struct pci_epc *epc, u8 func_no)
 	return val;
 }
-static int dw_pcie_ep_set_msix(struct pci_epc *epc, u8 func_no, u16 interrupts)
+static int dw_pcie_ep_set_msix(struct pci_epc *epc, u8 func_no, u16 interrupts,
+			       enum pci_barno bir, u32 offset)
 {
 	struct dw_pcie_ep *ep = epc_get_drvdata(epc);
 	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
@@ -278,12 +290,22 @@ static int dw_pcie_ep_set_msix(struct pci_epc *epc, u8 func_no, u16 interrupts)
 	if (!ep->msix_cap)
 		return -EINVAL;
+	dw_pcie_dbi_ro_wr_en(pci);
 	reg = ep->msix_cap + PCI_MSIX_FLAGS;
 	val = dw_pcie_readw_dbi(pci, reg);
 	val &= ~PCI_MSIX_FLAGS_QSIZE;
 	val |= interrupts;
-	dw_pcie_dbi_ro_wr_en(pci);
 	dw_pcie_writew_dbi(pci, reg, val);
+	reg = ep->msix_cap + PCI_MSIX_TABLE;
+	val = offset | bir;
+	dw_pcie_writel_dbi(pci, reg, val);
+	reg = ep->msix_cap + PCI_MSIX_PBA;
+	val = (offset + (interrupts * PCI_MSIX_ENTRY_SIZE)) | bir;
+	dw_pcie_writel_dbi(pci, reg, val);
 	dw_pcie_dbi_ro_wr_dis(pci);
 	return 0;
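The new Table and PBA writes above pack a BAR Indicator (BIR) into the low bits of each register and place the PBA right after the table. A minimal sketch of that packing, assuming the spec-defined 16-byte MSI-X entry size and that `interrupts` is the entry count used for the PBA placement:

```c
#include <assert.h>
#include <stdint.h>

/* Per the PCI spec, each MSI-X table entry is 16 bytes; the BIR lives in
 * the low 3 bits of the Table/PBA registers, the offset in the rest. */
#define PCI_MSIX_ENTRY_SIZE 16

/* Value written to the MSI-X Table register: offset ORed with the BIR. */
static uint32_t msix_table_reg(uint32_t offset, uint8_t bir)
{
	return offset | bir;
}

/* Value written to the MSI-X PBA register: the PBA is placed immediately
 * after a table of 'interrupts' entries in the same BAR. */
static uint32_t msix_pba_reg(uint32_t offset, uint8_t bir, uint16_t interrupts)
{
	return (offset + (uint32_t)interrupts * PCI_MSIX_ENTRY_SIZE) | bir;
}
```

So an 8-entry table at offset 0x100 in BAR 2 yields a Table register of 0x102 and a PBA register of 0x182.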
@@ -409,55 +431,41 @@ int dw_pcie_ep_raise_msix_irq(struct dw_pcie_ep *ep, u8 func_no,
 			      u16 interrupt_num)
 {
 	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
+	struct pci_epf_msix_tbl *msix_tbl;
 	struct pci_epc *epc = ep->epc;
-	u16 tbl_offset, bir;
-	u32 bar_addr_upper, bar_addr_lower;
-	u32 msg_addr_upper, msg_addr_lower;
+	struct pci_epf_bar *epf_bar;
 	u32 reg, msg_data, vec_ctrl;
-	u64 tbl_addr, msg_addr, reg_u64;
-	void __iomem *msix_tbl;
+	unsigned int aligned_offset;
+	u32 tbl_offset;
+	u64 msg_addr;
 	int ret;
+	u8 bir;
 	reg = ep->msix_cap + PCI_MSIX_TABLE;
 	tbl_offset = dw_pcie_readl_dbi(pci, reg);
 	bir = (tbl_offset & PCI_MSIX_TABLE_BIR);
 	tbl_offset &= PCI_MSIX_TABLE_OFFSET;
-	reg = PCI_BASE_ADDRESS_0 + (4 * bir);
-	bar_addr_upper = 0;
-	bar_addr_lower = dw_pcie_readl_dbi(pci, reg);
-	reg_u64 = (bar_addr_lower & PCI_BASE_ADDRESS_MEM_TYPE_MASK);
-	if (reg_u64 == PCI_BASE_ADDRESS_MEM_TYPE_64)
-		bar_addr_upper = dw_pcie_readl_dbi(pci, reg + 4);
-	tbl_addr = ((u64) bar_addr_upper) << 32 | bar_addr_lower;
-	tbl_addr += (tbl_offset + ((interrupt_num - 1) * PCI_MSIX_ENTRY_SIZE));
-	tbl_addr &= PCI_BASE_ADDRESS_MEM_MASK;
-	msix_tbl = ioremap(ep->phys_base + tbl_addr,
-			   PCI_MSIX_ENTRY_SIZE);
-	if (!msix_tbl)
-		return -EINVAL;
-	msg_addr_lower = readl(msix_tbl + PCI_MSIX_ENTRY_LOWER_ADDR);
-	msg_addr_upper = readl(msix_tbl + PCI_MSIX_ENTRY_UPPER_ADDR);
-	msg_addr = ((u64) msg_addr_upper) << 32 | msg_addr_lower;
-	msg_data = readl(msix_tbl + PCI_MSIX_ENTRY_DATA);
-	vec_ctrl = readl(msix_tbl + PCI_MSIX_ENTRY_VECTOR_CTRL);
-	iounmap(msix_tbl);
+	epf_bar = ep->epf_bar[bir];
+	msix_tbl = epf_bar->addr;
+	msix_tbl = (struct pci_epf_msix_tbl *)((char *)msix_tbl + tbl_offset);
+	msg_addr = msix_tbl[(interrupt_num - 1)].msg_addr;
+	msg_data = msix_tbl[(interrupt_num - 1)].msg_data;
+	vec_ctrl = msix_tbl[(interrupt_num - 1)].vector_ctrl;
 	if (vec_ctrl & PCI_MSIX_ENTRY_CTRL_MASKBIT) {
 		dev_dbg(pci->dev, "MSI-X entry ctrl set\n");
 		return -EPERM;
 	}
+	aligned_offset = msg_addr & (epc->mem->page_size - 1);
 	ret = dw_pcie_ep_map_addr(epc, func_no, ep->msi_mem_phys, msg_addr,
 				  epc->mem->page_size);
 	if (ret)
 		return ret;
-	writel(msg_data, ep->msi_mem);
+	writel(msg_data, ep->msi_mem + aligned_offset);
 	dw_pcie_ep_unmap_addr(epc, func_no, ep->msi_mem_phys);
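The `aligned_offset` fix above matters because the outbound ATU window is page-aligned: mapping `msg_addr` through a `page_size` window effectively maps the page containing it, so the doorbell write must add back the address's offset within that page. A standalone sketch of the arithmetic (power-of-two `page_size` assumed, as in the EPC memory layer):

```c
#include <assert.h>
#include <stdint.h>

/* Page-aligned base the ATU window effectively covers. */
static uint64_t atu_page_base(uint64_t msg_addr, uint64_t page_size)
{
	return msg_addr & ~(page_size - 1);
}

/* Offset of msg_addr inside that page; added to the CPU-side window
 * pointer before the doorbell write (ep->msi_mem + aligned_offset). */
static unsigned int atu_aligned_offset(uint64_t msg_addr, uint64_t page_size)
{
	return (unsigned int)(msg_addr & (page_size - 1));
}
```

For a 4 KiB page size, a message address of 0xFEE01040 splits into base 0xFEE01000 plus offset 0x40.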
@@ -492,19 +500,54 @@ static unsigned int dw_pcie_ep_find_ext_capability(struct dw_pcie *pci, int cap)
 	return 0;
 }
-int dw_pcie_ep_init(struct dw_pcie_ep *ep)
+int dw_pcie_ep_init_complete(struct dw_pcie_ep *ep)
 {
+	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
+	unsigned int offset;
+	unsigned int nbars;
+	u8 hdr_type;
+	u32 reg;
 	int i;
+	hdr_type = dw_pcie_readb_dbi(pci, PCI_HEADER_TYPE);
+	if (hdr_type != PCI_HEADER_TYPE_NORMAL) {
+		dev_err(pci->dev,
+			"PCIe controller is not set to EP mode (hdr_type:0x%x)!\n",
+			hdr_type);
+		return -EIO;
+	}
+	ep->msi_cap = dw_pcie_find_capability(pci, PCI_CAP_ID_MSI);
+	ep->msix_cap = dw_pcie_find_capability(pci, PCI_CAP_ID_MSIX);
+	offset = dw_pcie_ep_find_ext_capability(pci, PCI_EXT_CAP_ID_REBAR);
+	if (offset) {
+		reg = dw_pcie_readl_dbi(pci, offset + PCI_REBAR_CTRL);
+		nbars = (reg & PCI_REBAR_CTRL_NBAR_MASK) >>
+			PCI_REBAR_CTRL_NBAR_SHIFT;
+		dw_pcie_dbi_ro_wr_en(pci);
+		for (i = 0; i < nbars; i++, offset += PCI_REBAR_CTRL)
+			dw_pcie_writel_dbi(pci, offset + PCI_REBAR_CAP, 0x0);
+		dw_pcie_dbi_ro_wr_dis(pci);
+	}
+	dw_pcie_setup(pci);
+	return 0;
+}
+EXPORT_SYMBOL_GPL(dw_pcie_ep_init_complete);
+int dw_pcie_ep_init(struct dw_pcie_ep *ep)
+{
 	int ret;
-	u32 reg;
 	void *addr;
-	u8 hdr_type;
-	unsigned int nbars;
-	unsigned int offset;
 	struct pci_epc *epc;
 	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
 	struct device *dev = pci->dev;
 	struct device_node *np = dev->of_node;
+	const struct pci_epc_features *epc_features;
 	if (!pci->dbi_base || !pci->dbi_base2) {
 		dev_err(dev, "dbi_base/dbi_base2 is not populated\n");
@@ -563,13 +606,6 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep)
 	if (ep->ops->ep_init)
 		ep->ops->ep_init(ep);
-	hdr_type = dw_pcie_readb_dbi(pci, PCI_HEADER_TYPE);
-	if (hdr_type != PCI_HEADER_TYPE_NORMAL) {
-		dev_err(pci->dev, "PCIe controller is not set to EP mode (hdr_type:0x%x)!\n",
-			hdr_type);
-		return -EIO;
-	}
 	ret = of_property_read_u8(np, "max-functions", &epc->max_functions);
 	if (ret < 0)
 		epc->max_functions = 1;
@@ -587,23 +623,13 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep)
 		dev_err(dev, "Failed to reserve memory for MSI/MSI-X\n");
 		return -ENOMEM;
 	}
-	ep->msi_cap = dw_pcie_find_capability(pci, PCI_CAP_ID_MSI);
-	ep->msix_cap = dw_pcie_find_capability(pci, PCI_CAP_ID_MSIX);
-	offset = dw_pcie_ep_find_ext_capability(pci, PCI_EXT_CAP_ID_REBAR);
-	if (offset) {
-		reg = dw_pcie_readl_dbi(pci, offset + PCI_REBAR_CTRL);
-		nbars = (reg & PCI_REBAR_CTRL_NBAR_MASK) >>
-			PCI_REBAR_CTRL_NBAR_SHIFT;
-		dw_pcie_dbi_ro_wr_en(pci);
-		for (i = 0; i < nbars; i++, offset += PCI_REBAR_CTRL)
-			dw_pcie_writel_dbi(pci, offset + PCI_REBAR_CAP, 0x0);
-		dw_pcie_dbi_ro_wr_dis(pci);
-	}
-	dw_pcie_setup(pci);
-	return 0;
+	if (ep->ops->get_features) {
+		epc_features = ep->ops->get_features(ep);
+		if (epc_features->core_init_notifier)
+			return 0;
+	}
+	return dw_pcie_ep_init_complete(ep);
 }
 EXPORT_SYMBOL_GPL(dw_pcie_ep_init);
@@ -233,6 +233,7 @@ struct dw_pcie_ep {
 	phys_addr_t msi_mem_phys;
 	u8 msi_cap;	/* MSI capability offset */
 	u8 msix_cap;	/* MSI-X capability offset */
+	struct pci_epf_bar *epf_bar[PCI_STD_NUM_BARS];
 };
 struct dw_pcie_ops {
@@ -411,6 +412,8 @@ static inline int dw_pcie_allocate_domains(struct pcie_port *pp)
 #ifdef CONFIG_PCIE_DW_EP
 void dw_pcie_ep_linkup(struct dw_pcie_ep *ep);
 int dw_pcie_ep_init(struct dw_pcie_ep *ep);
+int dw_pcie_ep_init_complete(struct dw_pcie_ep *ep);
+void dw_pcie_ep_init_notify(struct dw_pcie_ep *ep);
 void dw_pcie_ep_exit(struct dw_pcie_ep *ep);
 int dw_pcie_ep_raise_legacy_irq(struct dw_pcie_ep *ep, u8 func_no);
 int dw_pcie_ep_raise_msi_irq(struct dw_pcie_ep *ep, u8 func_no,
@@ -428,6 +431,15 @@ static inline int dw_pcie_ep_init(struct dw_pcie_ep *ep)
 	return 0;
 }
+static inline int dw_pcie_ep_init_complete(struct dw_pcie_ep *ep)
+{
+	return 0;
+}
+static inline void dw_pcie_ep_init_notify(struct dw_pcie_ep *ep)
+{
+}
 static inline void dw_pcie_ep_exit(struct dw_pcie_ep *ep)
 {
 }
......
@@ -1439,7 +1439,13 @@ static void qcom_fixup_class(struct pci_dev *dev)
 {
 	dev->class = PCI_CLASS_BRIDGE_PCI << 8;
 }
-DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, PCI_ANY_ID, qcom_fixup_class);
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x0101, qcom_fixup_class);
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x0104, qcom_fixup_class);
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x0106, qcom_fixup_class);
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x0107, qcom_fixup_class);
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x0302, qcom_fixup_class);
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x1000, qcom_fixup_class);
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x1001, qcom_fixup_class);
 static struct platform_driver qcom_pcie_driver = {
 	.probe = qcom_pcie_probe,
......
# SPDX-License-Identifier: GPL-2.0
menu "Mobiveil PCIe Core Support"
depends on PCI
config PCIE_MOBIVEIL
bool
config PCIE_MOBIVEIL_HOST
bool
depends on PCI_MSI_IRQ_DOMAIN
select PCIE_MOBIVEIL
config PCIE_MOBIVEIL_PLAT
bool "Mobiveil AXI PCIe controller"
depends on ARCH_ZYNQMP || COMPILE_TEST
depends on OF
depends on PCI_MSI_IRQ_DOMAIN
select PCIE_MOBIVEIL_HOST
help
	  Say Y here if you want to enable support for the Mobiveil AXI PCIe
	  Soft IP. It is a PCIe Gen4 IP with up to 8 outbound and 8 inbound
	  windows for address translation.
config PCIE_LAYERSCAPE_GEN4
bool "Freescale Layerscape PCIe Gen4 controller"
depends on PCI
depends on OF && (ARM64 || ARCH_LAYERSCAPE)
depends on PCI_MSI_IRQ_DOMAIN
select PCIE_MOBIVEIL_HOST
help
Say Y here if you want PCIe Gen4 controller support on
Layerscape SoCs.
endmenu
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_PCIE_MOBIVEIL) += pcie-mobiveil.o
obj-$(CONFIG_PCIE_MOBIVEIL_HOST) += pcie-mobiveil-host.o
obj-$(CONFIG_PCIE_MOBIVEIL_PLAT) += pcie-mobiveil-plat.o
obj-$(CONFIG_PCIE_LAYERSCAPE_GEN4) += pcie-layerscape-gen4.o
// SPDX-License-Identifier: GPL-2.0
/*
* PCIe Gen4 host controller driver for NXP Layerscape SoCs
*
* Copyright 2019-2020 NXP
*
* Author: Zhiqiang Hou <Zhiqiang.Hou@nxp.com>
*/
#include <linux/kernel.h>
#include <linux/interrupt.h>
#include <linux/init.h>
#include <linux/of_pci.h>
#include <linux/of_platform.h>
#include <linux/of_irq.h>
#include <linux/of_address.h>
#include <linux/pci.h>
#include <linux/platform_device.h>
#include <linux/resource.h>
#include <linux/mfd/syscon.h>
#include <linux/regmap.h>
#include "pcie-mobiveil.h"
/* LUT and PF control registers */
#define PCIE_LUT_OFF 0x80000
#define PCIE_PF_OFF 0xc0000
#define PCIE_PF_INT_STAT 0x18
#define PF_INT_STAT_PABRST BIT(31)
#define PCIE_PF_DBG 0x7fc
#define PF_DBG_LTSSM_MASK 0x3f
#define PF_DBG_LTSSM_L0 0x2d /* L0 state */
#define PF_DBG_WE BIT(31)
#define PF_DBG_PABR BIT(27)
#define to_ls_pcie_g4(x) platform_get_drvdata((x)->pdev)
struct ls_pcie_g4 {
struct mobiveil_pcie pci;
struct delayed_work dwork;
int irq;
};
static inline u32 ls_pcie_g4_lut_readl(struct ls_pcie_g4 *pcie, u32 off)
{
return ioread32(pcie->pci.csr_axi_slave_base + PCIE_LUT_OFF + off);
}
static inline void ls_pcie_g4_lut_writel(struct ls_pcie_g4 *pcie,
u32 off, u32 val)
{
iowrite32(val, pcie->pci.csr_axi_slave_base + PCIE_LUT_OFF + off);
}
static inline u32 ls_pcie_g4_pf_readl(struct ls_pcie_g4 *pcie, u32 off)
{
return ioread32(pcie->pci.csr_axi_slave_base + PCIE_PF_OFF + off);
}
static inline void ls_pcie_g4_pf_writel(struct ls_pcie_g4 *pcie,
u32 off, u32 val)
{
iowrite32(val, pcie->pci.csr_axi_slave_base + PCIE_PF_OFF + off);
}
static int ls_pcie_g4_link_up(struct mobiveil_pcie *pci)
{
struct ls_pcie_g4 *pcie = to_ls_pcie_g4(pci);
u32 state;
state = ls_pcie_g4_pf_readl(pcie, PCIE_PF_DBG);
state = state & PF_DBG_LTSSM_MASK;
if (state == PF_DBG_LTSSM_L0)
return 1;
return 0;
}
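The link-up check above reduces to masking the LTSSM field out of the PEX_PF0_DBG register and comparing it against the L0 encoding. A standalone sketch, reusing the constants defined earlier in the driver:

```c
#include <assert.h>
#include <stdint.h>

/* Constants copied from the driver above. */
#define PF_DBG_LTSSM_MASK	0x3f
#define PF_DBG_LTSSM_L0		0x2d	/* L0 state */

/* The link is up only when the LTSSM field of PEX_PF0_DBG reads L0;
 * any other training state counts as link down. */
static int ltssm_link_up(uint32_t pf_dbg)
{
	return (pf_dbg & PF_DBG_LTSSM_MASK) == PF_DBG_LTSSM_L0;
}
```

Note that the high debug bits (e.g. PF_DBG_WE at bit 31) are masked off, so they never affect the comparison.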
static void ls_pcie_g4_disable_interrupt(struct ls_pcie_g4 *pcie)
{
struct mobiveil_pcie *mv_pci = &pcie->pci;
mobiveil_csr_writel(mv_pci, 0, PAB_INTP_AMBA_MISC_ENB);
}
static void ls_pcie_g4_enable_interrupt(struct ls_pcie_g4 *pcie)
{
struct mobiveil_pcie *mv_pci = &pcie->pci;
u32 val;
/* Clear the interrupt status */
mobiveil_csr_writel(mv_pci, 0xffffffff, PAB_INTP_AMBA_MISC_STAT);
val = PAB_INTP_INTX_MASK | PAB_INTP_MSI | PAB_INTP_RESET |
PAB_INTP_PCIE_UE | PAB_INTP_IE_PMREDI | PAB_INTP_IE_EC;
mobiveil_csr_writel(mv_pci, val, PAB_INTP_AMBA_MISC_ENB);
}
static int ls_pcie_g4_reinit_hw(struct ls_pcie_g4 *pcie)
{
struct mobiveil_pcie *mv_pci = &pcie->pci;
struct device *dev = &mv_pci->pdev->dev;
u32 val, act_stat;
int to = 100;
/* Poll for pab_csb_reset to set and PAB activity to clear */
do {
usleep_range(10, 15);
val = ls_pcie_g4_pf_readl(pcie, PCIE_PF_INT_STAT);
act_stat = mobiveil_csr_readl(mv_pci, PAB_ACTIVITY_STAT);
} while (((val & PF_INT_STAT_PABRST) == 0 || act_stat) && to--);
if (to < 0) {
dev_err(dev, "Poll PABRST&PABACT timeout\n");
return -EIO;
}
/* clear PEX_RESET bit in PEX_PF0_DBG register */
val = ls_pcie_g4_pf_readl(pcie, PCIE_PF_DBG);
val |= PF_DBG_WE;
ls_pcie_g4_pf_writel(pcie, PCIE_PF_DBG, val);
val = ls_pcie_g4_pf_readl(pcie, PCIE_PF_DBG);
val |= PF_DBG_PABR;
ls_pcie_g4_pf_writel(pcie, PCIE_PF_DBG, val);
val = ls_pcie_g4_pf_readl(pcie, PCIE_PF_DBG);
val &= ~PF_DBG_WE;
ls_pcie_g4_pf_writel(pcie, PCIE_PF_DBG, val);
mobiveil_host_init(mv_pci, true);
to = 100;
while (!ls_pcie_g4_link_up(mv_pci) && to--)
usleep_range(200, 250);
if (to < 0) {
dev_err(dev, "PCIe link training timeout\n");
return -EIO;
}
return 0;
}
static irqreturn_t ls_pcie_g4_isr(int irq, void *dev_id)
{
struct ls_pcie_g4 *pcie = (struct ls_pcie_g4 *)dev_id;
struct mobiveil_pcie *mv_pci = &pcie->pci;
u32 val;
val = mobiveil_csr_readl(mv_pci, PAB_INTP_AMBA_MISC_STAT);
if (!val)
return IRQ_NONE;
if (val & PAB_INTP_RESET) {
ls_pcie_g4_disable_interrupt(pcie);
schedule_delayed_work(&pcie->dwork, msecs_to_jiffies(1));
}
mobiveil_csr_writel(mv_pci, val, PAB_INTP_AMBA_MISC_STAT);
return IRQ_HANDLED;
}
static int ls_pcie_g4_interrupt_init(struct mobiveil_pcie *mv_pci)
{
struct ls_pcie_g4 *pcie = to_ls_pcie_g4(mv_pci);
struct platform_device *pdev = mv_pci->pdev;
struct device *dev = &pdev->dev;
int ret;
pcie->irq = platform_get_irq_byname(pdev, "intr");
if (pcie->irq < 0) {
dev_err(dev, "Can't get 'intr' IRQ, errno = %d\n", pcie->irq);
return pcie->irq;
}
ret = devm_request_irq(dev, pcie->irq, ls_pcie_g4_isr,
IRQF_SHARED, pdev->name, pcie);
if (ret) {
dev_err(dev, "Can't register PCIe IRQ, errno = %d\n", ret);
return ret;
}
return 0;
}
static void ls_pcie_g4_reset(struct work_struct *work)
{
struct delayed_work *dwork = container_of(work, struct delayed_work,
work);
struct ls_pcie_g4 *pcie = container_of(dwork, struct ls_pcie_g4, dwork);
struct mobiveil_pcie *mv_pci = &pcie->pci;
u16 ctrl;
ctrl = mobiveil_csr_readw(mv_pci, PCI_BRIDGE_CONTROL);
ctrl &= ~PCI_BRIDGE_CTL_BUS_RESET;
mobiveil_csr_writew(mv_pci, ctrl, PCI_BRIDGE_CONTROL);
if (!ls_pcie_g4_reinit_hw(pcie))
return;
ls_pcie_g4_enable_interrupt(pcie);
}
static struct mobiveil_rp_ops ls_pcie_g4_rp_ops = {
.interrupt_init = ls_pcie_g4_interrupt_init,
};
static const struct mobiveil_pab_ops ls_pcie_g4_pab_ops = {
.link_up = ls_pcie_g4_link_up,
};
static int __init ls_pcie_g4_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct pci_host_bridge *bridge;
struct mobiveil_pcie *mv_pci;
struct ls_pcie_g4 *pcie;
struct device_node *np = dev->of_node;
int ret;
if (!of_parse_phandle(np, "msi-parent", 0)) {
dev_err(dev, "Failed to find msi-parent\n");
return -EINVAL;
}
bridge = devm_pci_alloc_host_bridge(dev, sizeof(*pcie));
if (!bridge)
return -ENOMEM;
pcie = pci_host_bridge_priv(bridge);
mv_pci = &pcie->pci;
mv_pci->pdev = pdev;
mv_pci->ops = &ls_pcie_g4_pab_ops;
mv_pci->rp.ops = &ls_pcie_g4_rp_ops;
mv_pci->rp.bridge = bridge;
platform_set_drvdata(pdev, pcie);
INIT_DELAYED_WORK(&pcie->dwork, ls_pcie_g4_reset);
ret = mobiveil_pcie_host_probe(mv_pci);
if (ret) {
dev_err(dev, "Fail to probe\n");
return ret;
}
ls_pcie_g4_enable_interrupt(pcie);
return 0;
}
static const struct of_device_id ls_pcie_g4_of_match[] = {
{ .compatible = "fsl,lx2160a-pcie", },
{ },
};
static struct platform_driver ls_pcie_g4_driver = {
.driver = {
.name = "layerscape-pcie-gen4",
.of_match_table = ls_pcie_g4_of_match,
.suppress_bind_attrs = true,
},
};
builtin_platform_driver_probe(ls_pcie_g4_driver, ls_pcie_g4_probe);
// SPDX-License-Identifier: GPL-2.0
/*
* PCIe host controller driver for Mobiveil PCIe Host controller
*
* Copyright (c) 2018 Mobiveil Inc.
* Copyright 2019 NXP
*
* Author: Subrahmanya Lingappa <l.subrahmanya@mobiveil.co.in>
* Hou Zhiqiang <Zhiqiang.Hou@nxp.com>
*/
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/of_pci.h>
#include <linux/pci.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
#include "pcie-mobiveil.h"
static int mobiveil_pcie_probe(struct platform_device *pdev)
{
struct mobiveil_pcie *pcie;
struct pci_host_bridge *bridge;
struct device *dev = &pdev->dev;
/* allocate the PCIe port */
bridge = devm_pci_alloc_host_bridge(dev, sizeof(*pcie));
if (!bridge)
return -ENOMEM;
pcie = pci_host_bridge_priv(bridge);
pcie->rp.bridge = bridge;
pcie->pdev = pdev;
return mobiveil_pcie_host_probe(pcie);
}
static const struct of_device_id mobiveil_pcie_of_match[] = {
{.compatible = "mbvl,gpex40-pcie",},
{},
};
MODULE_DEVICE_TABLE(of, mobiveil_pcie_of_match);
static struct platform_driver mobiveil_pcie_driver = {
.probe = mobiveil_pcie_probe,
.driver = {
.name = "mobiveil-pcie",
.of_match_table = mobiveil_pcie_of_match,
.suppress_bind_attrs = true,
},
};
builtin_platform_driver(mobiveil_pcie_driver);
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("Mobiveil PCIe host controller driver");
MODULE_AUTHOR("Subrahmanya Lingappa <l.subrahmanya@mobiveil.co.in>");
// SPDX-License-Identifier: GPL-2.0
/*
* PCIe host controller driver for Mobiveil PCIe Host controller
*
* Copyright (c) 2018 Mobiveil Inc.
* Copyright 2019 NXP
*
* Author: Subrahmanya Lingappa <l.subrahmanya@mobiveil.co.in>
* Hou Zhiqiang <Zhiqiang.Hou@nxp.com>
*/
#include <linux/delay.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/pci.h>
#include <linux/platform_device.h>
#include "pcie-mobiveil.h"
/*
 * mobiveil_pcie_sel_page - select the page for a paged register access
 *
 * Registers whose offset is greater than PAGED_ADDR_BNDRY (0xc00) are paged.
 * For this scheme to work, the upper 6 bits of the offset are written to the
 * pg_sel field of the PAB_CTRL register, and the remaining lower 10 bits,
 * ORed with PAGED_ADDR_BNDRY, are used as the register offset within the page.
 */
static void mobiveil_pcie_sel_page(struct mobiveil_pcie *pcie, u8 pg_idx)
{
u32 val;
val = readl(pcie->csr_axi_slave_base + PAB_CTRL);
val &= ~(PAGE_SEL_MASK << PAGE_SEL_SHIFT);
val |= (pg_idx & PAGE_SEL_MASK) << PAGE_SEL_SHIFT;
writel(val, pcie->csr_axi_slave_base + PAB_CTRL);
}
static void __iomem *mobiveil_pcie_comp_addr(struct mobiveil_pcie *pcie,
u32 off)
{
if (off < PAGED_ADDR_BNDRY) {
/* For directly accessed registers, clear the pg_sel field */
mobiveil_pcie_sel_page(pcie, 0);
return pcie->csr_axi_slave_base + off;
}
mobiveil_pcie_sel_page(pcie, OFFSET_TO_PAGE_IDX(off));
return pcie->csr_axi_slave_base + OFFSET_TO_PAGE_ADDR(off);
}
static int mobiveil_pcie_read(void __iomem *addr, int size, u32 *val)
{
if ((uintptr_t)addr & (size - 1)) {
*val = 0;
return PCIBIOS_BAD_REGISTER_NUMBER;
}
switch (size) {
case 4:
*val = readl(addr);
break;
case 2:
*val = readw(addr);
break;
case 1:
*val = readb(addr);
break;
default:
*val = 0;
return PCIBIOS_BAD_REGISTER_NUMBER;
}
return PCIBIOS_SUCCESSFUL;
}
static int mobiveil_pcie_write(void __iomem *addr, int size, u32 val)
{
if ((uintptr_t)addr & (size - 1))
return PCIBIOS_BAD_REGISTER_NUMBER;
switch (size) {
case 4:
writel(val, addr);
break;
case 2:
writew(val, addr);
break;
case 1:
writeb(val, addr);
break;
default:
return PCIBIOS_BAD_REGISTER_NUMBER;
}
return PCIBIOS_SUCCESSFUL;
}
u32 mobiveil_csr_read(struct mobiveil_pcie *pcie, u32 off, size_t size)
{
void __iomem *addr;
u32 val;
int ret;
addr = mobiveil_pcie_comp_addr(pcie, off);
ret = mobiveil_pcie_read(addr, size, &val);
if (ret)
dev_err(&pcie->pdev->dev, "read CSR address failed\n");
return val;
}
void mobiveil_csr_write(struct mobiveil_pcie *pcie, u32 val, u32 off,
size_t size)
{
void __iomem *addr;
int ret;
addr = mobiveil_pcie_comp_addr(pcie, off);
ret = mobiveil_pcie_write(addr, size, val);
if (ret)
dev_err(&pcie->pdev->dev, "write CSR address failed\n");
}
bool mobiveil_pcie_link_up(struct mobiveil_pcie *pcie)
{
if (pcie->ops->link_up)
return pcie->ops->link_up(pcie);
return (mobiveil_csr_readl(pcie, LTSSM_STATUS) &
LTSSM_STATUS_L0_MASK) == LTSSM_STATUS_L0;
}
void program_ib_windows(struct mobiveil_pcie *pcie, int win_num,
u64 cpu_addr, u64 pci_addr, u32 type, u64 size)
{
u32 value;
u64 size64 = ~(size - 1);
if (win_num >= pcie->ppio_wins) {
dev_err(&pcie->pdev->dev,
"ERROR: max inbound windows reached !\n");
return;
}
value = mobiveil_csr_readl(pcie, PAB_PEX_AMAP_CTRL(win_num));
value &= ~(AMAP_CTRL_TYPE_MASK << AMAP_CTRL_TYPE_SHIFT | WIN_SIZE_MASK);
value |= type << AMAP_CTRL_TYPE_SHIFT | 1 << AMAP_CTRL_EN_SHIFT |
(lower_32_bits(size64) & WIN_SIZE_MASK);
mobiveil_csr_writel(pcie, value, PAB_PEX_AMAP_CTRL(win_num));
mobiveil_csr_writel(pcie, upper_32_bits(size64),
PAB_EXT_PEX_AMAP_SIZEN(win_num));
mobiveil_csr_writel(pcie, lower_32_bits(cpu_addr),
PAB_PEX_AMAP_AXI_WIN(win_num));
mobiveil_csr_writel(pcie, upper_32_bits(cpu_addr),
PAB_EXT_PEX_AMAP_AXI_WIN(win_num));
mobiveil_csr_writel(pcie, lower_32_bits(pci_addr),
PAB_PEX_AMAP_PEX_WIN_L(win_num));
mobiveil_csr_writel(pcie, upper_32_bits(pci_addr),
PAB_PEX_AMAP_PEX_WIN_H(win_num));
pcie->ib_wins_configured++;
}
/*
* routine to program the outbound windows
*/
void program_ob_windows(struct mobiveil_pcie *pcie, int win_num,
u64 cpu_addr, u64 pci_addr, u32 type, u64 size)
{
u32 value;
u64 size64 = ~(size - 1);
if (win_num >= pcie->apio_wins) {
dev_err(&pcie->pdev->dev,
"ERROR: max outbound windows reached !\n");
return;
}
/*
* program Enable Bit to 1, Type Bit to (00) base 2, AXI Window Size Bit
* to 4 KB in PAB_AXI_AMAP_CTRL register
*/
value = mobiveil_csr_readl(pcie, PAB_AXI_AMAP_CTRL(win_num));
value &= ~(WIN_TYPE_MASK << WIN_TYPE_SHIFT | WIN_SIZE_MASK);
value |= 1 << WIN_ENABLE_SHIFT | type << WIN_TYPE_SHIFT |
(lower_32_bits(size64) & WIN_SIZE_MASK);
mobiveil_csr_writel(pcie, value, PAB_AXI_AMAP_CTRL(win_num));
mobiveil_csr_writel(pcie, upper_32_bits(size64),
PAB_EXT_AXI_AMAP_SIZE(win_num));
/*
* program AXI window base with appropriate value in
* PAB_AXI_AMAP_AXI_WIN0 register
*/
mobiveil_csr_writel(pcie,
lower_32_bits(cpu_addr) & (~AXI_WINDOW_ALIGN_MASK),
PAB_AXI_AMAP_AXI_WIN(win_num));
mobiveil_csr_writel(pcie, upper_32_bits(cpu_addr),
PAB_EXT_AXI_AMAP_AXI_WIN(win_num));
mobiveil_csr_writel(pcie, lower_32_bits(pci_addr),
PAB_AXI_AMAP_PEX_WIN_L(win_num));
mobiveil_csr_writel(pcie, upper_32_bits(pci_addr),
PAB_AXI_AMAP_PEX_WIN_H(win_num));
pcie->ob_wins_configured++;
}
int mobiveil_bringup_link(struct mobiveil_pcie *pcie)
{
int retries;
/* check if the link is up or not */
for (retries = 0; retries < LINK_WAIT_MAX_RETRIES; retries++) {
if (mobiveil_pcie_link_up(pcie))
return 0;
usleep_range(LINK_WAIT_MIN, LINK_WAIT_MAX);
}
dev_err(&pcie->pdev->dev, "link never came up\n");
return -ETIMEDOUT;
}
@@ -824,8 +824,8 @@ static int brcm_pcie_setup(struct brcm_pcie *pcie)
 	cls = FIELD_GET(PCI_EXP_LNKSTA_CLS, lnksta);
 	nlw = FIELD_GET(PCI_EXP_LNKSTA_NLW, lnksta);
 	dev_info(dev, "link up, %s x%u %s\n",
-		 PCIE_SPEED2STR(cls + PCI_SPEED_133MHz_PCIX_533),
-		 nlw, ssc_good ? "(SSC)" : "(!SSC)");
+		 pci_speed_string(pcie_link_speed[cls]), nlw,
+		 ssc_good ? "(SSC)" : "(!SSC)");

 	/* PCIe->SCB endian mode for BAR */
 	tmp = readl(base + PCIE_RC_CFG_VENDOR_VENDOR_SPECIFIC_REG1);
@@ -29,7 +29,6 @@ struct pci_epc_group {
 	struct config_group group;
 	struct pci_epc *epc;
 	bool start;
-	unsigned long function_num_map;
 };

 static inline struct pci_epf_group *to_pci_epf_group(struct config_item *item)
@@ -58,6 +57,7 @@ static ssize_t pci_epc_start_store(struct config_item *item, const char *page,
 	if (!start) {
 		pci_epc_stop(epc);
+		epc_group->start = 0;
 		return len;
 	}
@@ -89,37 +89,22 @@ static int pci_epc_epf_link(struct config_item *epc_item,
			       struct config_item *epf_item)
 {
 	int ret;
-	u32 func_no = 0;
 	struct pci_epf_group *epf_group = to_pci_epf_group(epf_item);
 	struct pci_epc_group *epc_group = to_pci_epc_group(epc_item);
 	struct pci_epc *epc = epc_group->epc;
 	struct pci_epf *epf = epf_group->epf;

-	func_no = find_first_zero_bit(&epc_group->function_num_map,
-				      BITS_PER_LONG);
-	if (func_no >= BITS_PER_LONG)
-		return -EINVAL;
-
-	set_bit(func_no, &epc_group->function_num_map);
-	epf->func_no = func_no;
-
 	ret = pci_epc_add_epf(epc, epf);
 	if (ret)
-		goto err_add_epf;
+		return ret;

 	ret = pci_epf_bind(epf);
-	if (ret)
-		goto err_epf_bind;
-
-	return 0;
-
-err_epf_bind:
-	pci_epc_remove_epf(epc, epf);
-err_add_epf:
-	clear_bit(func_no, &epc_group->function_num_map);
+	if (ret) {
+		pci_epc_remove_epf(epc, epf);
+		return ret;
+	}

-	return ret;
+	return 0;
 }

 static void pci_epc_epf_unlink(struct config_item *epc_item,
@@ -134,7 +119,6 @@ static void pci_epc_epf_unlink(struct config_item *epc_item,
 	epc = epc_group->epc;
 	epf = epf_group->epf;
-	clear_bit(epf->func_no, &epc_group->function_num_map);
 	pci_epf_unbind(epf);
 	pci_epc_remove_epf(epc, epf);
 }
@@ -79,6 +79,7 @@ int __pci_epc_mem_init(struct pci_epc *epc, phys_addr_t phys_base, size_t size,
 	mem->page_size = page_size;
 	mem->pages = pages;
 	mem->size = size;
+	mutex_init(&mem->lock);

 	epc->mem = mem;
@@ -122,7 +123,7 @@ void __iomem *pci_epc_mem_alloc_addr(struct pci_epc *epc,
				     phys_addr_t *phys_addr, size_t size)
 {
 	int pageno;
-	void __iomem *virt_addr;
+	void __iomem *virt_addr = NULL;
 	struct pci_epc_mem *mem = epc->mem;
 	unsigned int page_shift = ilog2(mem->page_size);
 	int order;
@@ -130,15 +131,18 @@ void __iomem *pci_epc_mem_alloc_addr(struct pci_epc *epc,
 	size = ALIGN(size, mem->page_size);
 	order = pci_epc_mem_get_order(mem, size);

+	mutex_lock(&mem->lock);
 	pageno = bitmap_find_free_region(mem->bitmap, mem->pages, order);
 	if (pageno < 0)
-		return NULL;
+		goto ret;

 	*phys_addr = mem->phys_base + ((phys_addr_t)pageno << page_shift);
 	virt_addr = ioremap(*phys_addr, size);
 	if (!virt_addr)
 		bitmap_release_region(mem->bitmap, pageno, order);

+ret:
+	mutex_unlock(&mem->lock);
 	return virt_addr;
 }
 EXPORT_SYMBOL_GPL(pci_epc_mem_alloc_addr);
@@ -164,7 +168,9 @@ void pci_epc_mem_free_addr(struct pci_epc *epc, phys_addr_t phys_addr,
 	pageno = (phys_addr - mem->phys_base) >> page_shift;
 	size = ALIGN(size, mem->page_size);
 	order = pci_epc_mem_get_order(mem, size);
+	mutex_lock(&mem->lock);
 	bitmap_release_region(mem->bitmap, pageno, order);
+	mutex_unlock(&mem->lock);
 }
 EXPORT_SYMBOL_GPL(pci_epc_mem_free_addr);
@@ -84,6 +84,7 @@ struct controller {
 	struct pcie_device *pcie;

 	u32 slot_cap;				/* capabilities and quirks */
+	unsigned int inband_presence_disabled:1;

 	u16 slot_ctrl;				/* control register access */
 	struct mutex ctrl_lock;
@@ -291,6 +291,9 @@ static const struct pci_p2pdma_whitelist_entry {
 	{PCI_VENDOR_ID_INTEL,	0x2f01,	REQ_SAME_HOST_BRIDGE},
 	/* Intel SkyLake-E */
 	{PCI_VENDOR_ID_INTEL,	0x2030,	0},
+	{PCI_VENDOR_ID_INTEL,	0x2031,	0},
+	{PCI_VENDOR_ID_INTEL,	0x2032,	0},
+	{PCI_VENDOR_ID_INTEL,	0x2033,	0},
 	{PCI_VENDOR_ID_INTEL,	0x2020,	0},
 	{}
 };
@@ -141,3 +141,13 @@ config PCIE_BW
	  This enables PCI Express Bandwidth Change Notification.  If
	  you know link width or rate changes occur only to correct
	  unreliable links, you may answer Y.
+
+config PCIE_EDR
+	bool "PCI Express Error Disconnect Recover support"
+	depends on PCIE_DPC && ACPI
+	help
+	  This option adds Error Disconnect Recover support as specified
+	  in the Downstream Port Containment Related Enhancements ECN to
+	  the PCI Firmware Specification r3.2.  Enable this if you want to
+	  support hybrid DPC model which uses both firmware and OS to
+	  implement DPC.