Commit 56e520c7 authored by Linus Torvalds

Merge tag 'iommu-updates-v4.9' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu

Pull IOMMU updates from Joerg Roedel:

 - support for interrupt virtualization in the AMD IOMMU driver. These
   patches were shared with the KVM tree and are already merged through
   that tree.

 - generic DT-binding support for the ARM-SMMU driver. With this the
   driver now makes use of the generic DMA-API code. This also required
   some changes outside of the IOMMU code, but these are acked by the
   respective maintainers.

 - more cleanups and fixes all over the place.

* tag 'iommu-updates-v4.9' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: (40 commits)
  iommu/amd: No need to wait iommu completion if no dte irq entry change
  iommu/amd: Free domain id when free a domain of struct dma_ops_domain
  iommu/amd: Use standard bitmap operation to set bitmap
  iommu/amd: Clean up the cmpxchg64 invocation
  iommu/io-pgtable-arm: Check for v7s-incapable systems
  iommu/dma: Avoid PCI host bridge windows
  iommu/dma: Add support for mapping MSIs
  iommu/arm-smmu: Set domain geometry
  iommu/arm-smmu: Wire up generic configuration support
  Docs: dt: document ARM SMMU generic binding usage
  iommu/arm-smmu: Convert to iommu_fwspec
  iommu/arm-smmu: Intelligent SMR allocation
  iommu/arm-smmu: Add a stream map entry iterator
  iommu/arm-smmu: Streamline SMMU data lookups
  iommu/arm-smmu: Refactor mmu-masters handling
  iommu/arm-smmu: Keep track of S2CR state
  iommu/arm-smmu: Consolidate stream map entry state
  iommu/arm-smmu: Handle stream IDs more dynamically
  iommu/arm-smmu: Set PRIVCFG in stage 1 STEs
  iommu/arm-smmu: Support non-PCI devices with SMMUv3
  ...
parents d09ba131 13a08259
@@ -27,6 +27,12 @@ the PCIe specification.
 * "cmdq-sync" - CMD_SYNC complete
 * "gerror"    - Global Error activated

+- #iommu-cells      : See the generic IOMMU binding described in
+                      devicetree/bindings/pci/pci-iommu.txt
+                      for details. For SMMUv3, must be 1, with each cell
+                      describing a single stream ID. All possible stream
+                      IDs which a device may emit must be described.
+
 ** SMMUv3 optional properties:

 - dma-coherent      : Present if DMA operations made by the SMMU (page
@@ -54,6 +60,6 @@ the PCIe specification.
                      <GIC_SPI 79 IRQ_TYPE_EDGE_RISING>;
         interrupt-names = "eventq", "priq", "cmdq-sync", "gerror";
         dma-coherent;
-        #iommu-cells = <0>;
+        #iommu-cells = <1>;
         msi-parent = <&its 0xff0000>;
 };
@@ -35,12 +35,16 @@ conditions.
                   interrupt per context bank. In the case of a single,
                   combined interrupt, it must be listed multiple times.

-- mmu-masters    : A list of phandles to device nodes representing bus
-                   masters for which the SMMU can provide a translation
-                   and their corresponding StreamIDs (see example below).
-                   Each device node linked from this list must have a
-                   "#stream-id-cells" property, indicating the number of
-                   StreamIDs associated with it.
+- #iommu-cells   : See Documentation/devicetree/bindings/iommu/iommu.txt
+                   for details. With a value of 1, each "iommus" entry
+                   represents a distinct stream ID emitted by that device
+                   into the relevant SMMU.
+
+                   SMMUs with stream matching support and complex masters
+                   may use a value of 2, where the second cell represents
+                   an SMR mask to combine with the ID in the first cell.
+                   Care must be taken to ensure the set of matched IDs
+                   does not result in conflicts.

 ** System MMU optional properties:

@@ -56,9 +60,20 @@ conditions.
                   aliases of secure registers have to be used during
                   SMMU configuration.

-Example:
+** Deprecated properties:
+
+- mmu-masters (deprecated in favour of the generic "iommus" binding) :
+                   A list of phandles to device nodes representing bus
+                   masters for which the SMMU can provide a translation
+                   and their corresponding Stream IDs. Each device node
+                   linked from this list must have a "#stream-id-cells"
+                   property, indicating the number of Stream ID
+                   arguments associated with its phandle.
+
+** Examples:

-        smmu {
+        /* SMMU with stream matching or stream indexing */
+        smmu1: iommu {
                 compatible = "arm,smmu-v1";
                 reg = <0xba5e0000 0x10000>;
                 #global-interrupts = <2>;
@@ -68,11 +83,29 @@ Example:
                              <0 35 4>,
                              <0 36 4>,
                              <0 37 4>;
+                #iommu-cells = <1>;
+        };
+
+        /* device with two stream IDs, 0 and 7 */
+        master1 {
+                iommus = <&smmu1 0>,
+                         <&smmu1 7>;
+        };
+
+        /* SMMU with stream matching */
+        smmu2: iommu {
+                ...
+                #iommu-cells = <2>;
+        };
+
+        /* device with stream IDs 0 and 7 */
+        master2 {
+                iommus = <&smmu2 0 0>,
+                         <&smmu2 7 0>;
+        };

-        /*
-         * Two DMA controllers, the first with two StreamIDs (0xd01d
-         * and 0xd01e) and the second with only one (0xd11c).
-         */
-        mmu-masters = <&dma0 0xd01d 0xd01e>,
-                      <&dma1 0xd11c>;
+        /* device with stream IDs 1, 17, 33 and 49 */
+        master3 {
+                iommus = <&smmu2 1 0x30>;
         };
This document describes the generic device tree binding for describing the
relationship between PCI(e) devices and IOMMU(s).
Each PCI(e) device under a root complex is uniquely identified by its Requester
ID (AKA RID). A Requester ID is a triplet of a Bus number, Device number, and
Function number.
For the purpose of this document, when treated as a numeric value, a RID is
formatted such that:
* Bits [15:8] are the Bus number.
* Bits [7:3] are the Device number.
* Bits [2:0] are the Function number.
* Any other bits required for padding must be zero.
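The layout above can be captured in a pair of tiny helpers; the sketch below is illustrative only, and the helper names (pci_rid_pack and friends) are ours, not part of this binding or of the kernel.

```c
#include <stdint.h>

/*
 * Illustrative helpers for the RID layout described above.
 * Bus is 8 bits, device 5 bits, function 3 bits, so a RID fits in 16 bits.
 */
static inline uint16_t pci_rid_pack(uint8_t bus, uint8_t dev, uint8_t fn)
{
	return ((uint16_t)bus << 8) | ((dev & 0x1f) << 3) | (fn & 0x7);
}

static inline uint8_t pci_rid_bus(uint16_t rid) { return rid >> 8; }
static inline uint8_t pci_rid_dev(uint16_t rid) { return (rid >> 3) & 0x1f; }
static inline uint8_t pci_rid_fn(uint16_t rid)  { return rid & 0x7; }
```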
IOMMUs may distinguish PCI devices through sideband data derived from the
Requester ID. While a given PCI device can only master through one IOMMU, a
root complex may split masters across a set of IOMMUs (e.g. with one IOMMU per
bus).
The generic 'iommus' property is insufficient to describe this relationship,
and a mechanism is required to map from a PCI device to its IOMMU and sideband
data.
For generic IOMMU bindings, see
Documentation/devicetree/bindings/iommu/iommu.txt.
PCI root complex
================
Optional properties
-------------------
- iommu-map: Maps a Requester ID to an IOMMU and associated iommu-specifier
data.
The property is an arbitrary number of tuples of
(rid-base,iommu,iommu-base,length).
Any RID r in the interval [rid-base, rid-base + length) is associated with
the listed IOMMU, with the iommu-specifier (r - rid-base + iommu-base).
- iommu-map-mask: A mask to be applied to each Requester ID prior to being
mapped to an iommu-specifier per the iommu-map property.
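A minimal sketch of the lookup these two properties imply follows. It is for illustration only and does not mirror the kernel's actual parsing code; the struct name and layout are assumptions made here for clarity.

```c
#include <stdbool.h>
#include <stdint.h>

/* One (rid-base, iommu, iommu-base, length) tuple from an "iommu-map". */
struct iommu_map_entry {
	uint32_t rid_base;
	const void *iommu;	/* stands in for the phandle target */
	uint32_t iommu_base;
	uint32_t length;
};

/*
 * Translate a Requester ID into (iommu, iommu-specifier): mask the RID
 * first (iommu-map-mask), find the tuple whose [rid_base, rid_base + length)
 * interval contains it, then offset into the iommu-specifier space.
 * Returns false if no entry matches.
 */
static bool map_rid(const struct iommu_map_entry *map, int nr_entries,
		    uint32_t map_mask, uint32_t rid,
		    const void **iommu, uint32_t *spec)
{
	int i;

	rid &= map_mask;	/* treat a missing iommu-map-mask as 0xffff */
	for (i = 0; i < nr_entries; i++) {
		if (rid >= map[i].rid_base &&
		    rid < map[i].rid_base + map[i].length) {
			*iommu = map[i].iommu;
			*spec = rid - map[i].rid_base + map[i].iommu_base;
			return true;
		}
	}
	return false;
}
```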
Example (1)
===========
/ {
#address-cells = <1>;
#size-cells = <1>;
iommu: iommu@a {
reg = <0xa 0x1>;
compatible = "vendor,some-iommu";
#iommu-cells = <1>;
};
pci: pci@f {
reg = <0xf 0x1>;
compatible = "vendor,pcie-root-complex";
device_type = "pci";
/*
* The sideband data provided to the IOMMU is the RID,
* identity-mapped.
*/
iommu-map = <0x0 &iommu 0x0 0x10000>;
};
};
Example (2)
===========
/ {
#address-cells = <1>;
#size-cells = <1>;
iommu: iommu@a {
reg = <0xa 0x1>;
compatible = "vendor,some-iommu";
#iommu-cells = <1>;
};
pci: pci@f {
reg = <0xf 0x1>;
compatible = "vendor,pcie-root-complex";
device_type = "pci";
/*
* The sideband data provided to the IOMMU is the RID with the
* function bits masked out.
*/
iommu-map = <0x0 &iommu 0x0 0x10000>;
iommu-map-mask = <0xfff8>;
};
};
Example (3)
===========
/ {
#address-cells = <1>;
#size-cells = <1>;
iommu: iommu@a {
reg = <0xa 0x1>;
compatible = "vendor,some-iommu";
#iommu-cells = <1>;
};
pci: pci@f {
reg = <0xf 0x1>;
compatible = "vendor,pcie-root-complex";
device_type = "pci";
/*
* The sideband data provided to the IOMMU is the RID,
* but the high bits of the bus number are flipped.
*/
iommu-map = <0x0000 &iommu 0x8000 0x8000>,
<0x8000 &iommu 0x0000 0x8000>;
};
};
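As a quick sanity check of Example (3), using only the values from the example above: a RID on bus 0x01 falls in the first tuple and comes out with the top bus bit set, and vice versa.

```c
#include <assert.h>
#include <stdint.h>

/* Apply the two iommu-map tuples from Example (3). */
static uint32_t example3_specifier(uint32_t rid)
{
	if (rid < 0x8000)		/* <0x0000 &iommu 0x8000 0x8000> */
		return rid - 0x0000 + 0x8000;
	return rid - 0x8000 + 0x0000;	/* <0x8000 &iommu 0x0000 0x8000> */
}

int main(void)
{
	assert(example3_specifier(0x0123) == 0x8123);	/* bus 0x01 -> 0x81 */
	assert(example3_specifier(0x8123) == 0x0123);	/* bus 0x81 -> 0x01 */
	return 0;
}
```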
Example (4)
===========
/ {
#address-cells = <1>;
#size-cells = <1>;
iommu_a: iommu@a {
reg = <0xa 0x1>;
compatible = "vendor,some-iommu";
#iommu-cells = <1>;
};
iommu_b: iommu@b {
reg = <0xb 0x1>;
compatible = "vendor,some-iommu";
#iommu-cells = <1>;
};
iommu_c: iommu@c {
reg = <0xc 0x1>;
compatible = "vendor,some-iommu";
#iommu-cells = <1>;
};
pci: pci@f {
reg = <0xf 0x1>;
compatible = "vendor,pcie-root-complex";
device_type = "pci";
/*
* Devices with bus number 0-127 are mastered via IOMMU
* a, with sideband data being RID[14:0].
* Devices with bus number 128-255 are mastered via
* IOMMU b, with sideband data being RID[14:0].
* No devices master via IOMMU c.
*/
iommu-map = <0x0000 &iommu_a 0x0000 0x8000>,
<0x8000 &iommu_b 0x0000 0x8000>;
};
};
@@ -828,7 +828,7 @@ static bool do_iommu_attach(struct device *dev, const struct iommu_ops *ops,
 	 * then the IOMMU core will have already configured a group for this
 	 * device, and allocated the default domain for that group.
 	 */
-	if (!domain || iommu_dma_init_domain(domain, dma_base, size)) {
+	if (!domain || iommu_dma_init_domain(domain, dma_base, size, dev)) {
 		pr_warn("Failed to set up IOMMU for device %s; retaining platform DMA ops\n",
 			dev_name(dev));
 		return false;
...
@@ -255,7 +255,6 @@ CONFIG_RTC_CLASS=y
 CONFIG_DMADEVICES=y
 CONFIG_EEEPC_LAPTOP=y
 CONFIG_AMD_IOMMU=y
-CONFIG_AMD_IOMMU_STATS=y
 CONFIG_INTEL_IOMMU=y
 # CONFIG_INTEL_IOMMU_DEFAULT_ON is not set
 CONFIG_EFI_VARS=y
...
@@ -66,7 +66,7 @@ static inline int __exynos_iommu_create_mapping(struct exynos_drm_private *priv,
 	if (ret)
 		goto free_domain;

-	ret = iommu_dma_init_domain(domain, start, size);
+	ret = iommu_dma_init_domain(domain, start, size, NULL);
 	if (ret)
 		goto put_cookie;
...
@@ -309,7 +309,7 @@ config ARM_SMMU

 config ARM_SMMU_V3
 	bool "ARM Ltd. System MMU Version 3 (SMMUv3) Support"
-	depends on ARM64 && PCI
+	depends on ARM64
 	select IOMMU_API
 	select IOMMU_IO_PGTABLE_LPAE
 	select GENERIC_MSI_IRQ_DOMAIN
...
@@ -103,7 +103,7 @@ struct flush_queue {
 	struct flush_queue_entry *entries;
 };

-DEFINE_PER_CPU(struct flush_queue, flush_queue);
+static DEFINE_PER_CPU(struct flush_queue, flush_queue);

 static atomic_t queue_timer_on;
 static struct timer_list queue_timer;
@@ -1361,7 +1361,8 @@ static u64 *alloc_pte(struct protection_domain *domain,
 		__npte = PM_LEVEL_PDE(level, virt_to_phys(page));

-		if (cmpxchg64(pte, __pte, __npte)) {
+		/* pte could have been changed somewhere. */
+		if (cmpxchg64(pte, __pte, __npte) != __pte) {
 			free_page((unsigned long)page);
 			continue;
 		}
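For context on the alloc_pte() change above: cmpxchg64() returns the value that was previously stored, so success has to be detected by comparing that return value with the expected old value. The minimal sketch below (the helper name is ours, not the driver's) shows the corrected pattern.

```c
/*
 * Minimal sketch of the lockless "install a new PTE or retry" pattern.
 * cmpxchg64() returns the previous contents of *pte; the swap succeeded
 * only if that equals the value we expected.  Testing the return value
 * for "non-zero" (as the old code did) misreads a successful swap against
 * a non-zero old value as a lost race.
 */
static bool install_pte(u64 *pte, u64 expected, u64 new_val)
{
	return cmpxchg64(pte, expected, new_val) == expected;
}
```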
@@ -1741,6 +1742,9 @@ static void dma_ops_domain_free(struct dma_ops_domain *dom)

 	free_pagetable(&dom->domain);

+	if (dom->domain.id)
+		domain_id_free(dom->domain.id);
+
 	kfree(dom);
 }
@@ -3649,7 +3653,7 @@ static struct irq_remap_table *get_irq_table(u16 devid, bool ioapic)

 	table = irq_lookup_table[devid];
 	if (table)
-		goto out;
+		goto out_unlock;

 	alias = amd_iommu_alias_table[devid];
 	table = irq_lookup_table[alias];
@@ -3663,7 +3667,7 @@ static struct irq_remap_table *get_irq_table(u16 devid, bool ioapic)
 	/* Nothing there yet, allocate new irq remapping table */
 	table = kzalloc(sizeof(*table), GFP_ATOMIC);
 	if (!table)
-		goto out;
+		goto out_unlock;

 	/* Initialize table spin-lock */
 	spin_lock_init(&table->lock);
@@ -3676,7 +3680,7 @@ static struct irq_remap_table *get_irq_table(u16 devid, bool ioapic)
 	if (!table->table) {
 		kfree(table);
 		table = NULL;
-		goto out;
+		goto out_unlock;
 	}

 	if (!AMD_IOMMU_GUEST_IR_GA(amd_iommu_guest_ir))
@@ -4153,6 +4157,7 @@ static int irq_remapping_alloc(struct irq_domain *domain, unsigned int virq,
 	}
 	if (index < 0) {
 		pr_warn("Failed to allocate IRTE\n");
+		ret = index;
 		goto out_free_parent;
 	}
...
@@ -20,6 +20,7 @@
 #include <linux/pci.h>
 #include <linux/acpi.h>
 #include <linux/list.h>
+#include <linux/bitmap.h>
 #include <linux/slab.h>
 #include <linux/syscore_ops.h>
 #include <linux/interrupt.h>
@@ -2285,7 +2286,7 @@ static int __init early_amd_iommu_init(void)
 	 * never allocate domain 0 because its used as the non-allocated and
 	 * error value placeholder
 	 */
-	amd_iommu_pd_alloc_bitmap[0] = 1;
+	__set_bit(0, amd_iommu_pd_alloc_bitmap);

 	spin_lock_init(&amd_iommu_pd_lock);
...
@@ -79,12 +79,6 @@ static inline int amd_iommu_create_irq_domain(struct amd_iommu *iommu)
 extern int amd_iommu_complete_ppr(struct pci_dev *pdev, int pasid,
 				  int status, int tag);

-#ifndef CONFIG_AMD_IOMMU_STATS
-
-static inline void amd_iommu_stats_init(void) { }
-
-#endif /* !CONFIG_AMD_IOMMU_STATS */
-
 static inline bool is_rd890_iommu(struct pci_dev *pdev)
 {
 	return (pdev->vendor == PCI_VENDOR_ID_ATI) &&
...
@@ -30,10 +30,13 @@
 #include <linux/msi.h>
 #include <linux/of.h>
 #include <linux/of_address.h>
+#include <linux/of_iommu.h>
 #include <linux/of_platform.h>
 #include <linux/pci.h>
 #include <linux/platform_device.h>

+#include <linux/amba/bus.h>
+
 #include "io-pgtable.h"

 /* MMIO registers */
@@ -123,6 +126,10 @@
 #define CR2_RECINVSID			(1 << 1)
 #define CR2_E2H				(1 << 0)

+#define ARM_SMMU_GBPA			0x44
+#define GBPA_ABORT			(1 << 20)
+#define GBPA_UPDATE			(1 << 31)
+
 #define ARM_SMMU_IRQ_CTRL		0x50
 #define IRQ_CTRL_EVTQ_IRQEN		(1 << 2)
 #define IRQ_CTRL_PRIQ_IRQEN		(1 << 1)
...@@ -260,6 +267,9 @@ ...@@ -260,6 +267,9 @@
#define STRTAB_STE_1_SHCFG_INCOMING 1UL #define STRTAB_STE_1_SHCFG_INCOMING 1UL
#define STRTAB_STE_1_SHCFG_SHIFT 44 #define STRTAB_STE_1_SHCFG_SHIFT 44
#define STRTAB_STE_1_PRIVCFG_UNPRIV 2UL
#define STRTAB_STE_1_PRIVCFG_SHIFT 48
#define STRTAB_STE_2_S2VMID_SHIFT 0 #define STRTAB_STE_2_S2VMID_SHIFT 0
#define STRTAB_STE_2_S2VMID_MASK 0xffffUL #define STRTAB_STE_2_S2VMID_MASK 0xffffUL
#define STRTAB_STE_2_VTCR_SHIFT 32 #define STRTAB_STE_2_VTCR_SHIFT 32
...@@ -606,12 +616,9 @@ struct arm_smmu_device { ...@@ -606,12 +616,9 @@ struct arm_smmu_device {
struct arm_smmu_strtab_cfg strtab_cfg; struct arm_smmu_strtab_cfg strtab_cfg;
}; };
/* SMMU private data for an IOMMU group */ /* SMMU private data for each master */
struct arm_smmu_group { struct arm_smmu_master_data {
struct arm_smmu_device *smmu; struct arm_smmu_device *smmu;
struct arm_smmu_domain *domain;
int num_sids;
u32 *sids;
struct arm_smmu_strtab_ent ste; struct arm_smmu_strtab_ent ste;
}; };
...@@ -713,19 +720,15 @@ static void queue_inc_prod(struct arm_smmu_queue *q) ...@@ -713,19 +720,15 @@ static void queue_inc_prod(struct arm_smmu_queue *q)
writel(q->prod, q->prod_reg); writel(q->prod, q->prod_reg);
} }
static bool __queue_cons_before(struct arm_smmu_queue *q, u32 until) /*
{ * Wait for the SMMU to consume items. If drain is true, wait until the queue
if (Q_WRP(q, q->cons) == Q_WRP(q, until)) * is empty. Otherwise, wait until there is at least one free slot.
return Q_IDX(q, q->cons) < Q_IDX(q, until); */
static int queue_poll_cons(struct arm_smmu_queue *q, bool drain, bool wfe)
return Q_IDX(q, q->cons) >= Q_IDX(q, until);
}
static int queue_poll_cons(struct arm_smmu_queue *q, u32 until, bool wfe)
{ {
ktime_t timeout = ktime_add_us(ktime_get(), ARM_SMMU_POLL_TIMEOUT_US); ktime_t timeout = ktime_add_us(ktime_get(), ARM_SMMU_POLL_TIMEOUT_US);
while (queue_sync_cons(q), __queue_cons_before(q, until)) { while (queue_sync_cons(q), (drain ? !queue_empty(q) : queue_full(q))) {
if (ktime_compare(ktime_get(), timeout) > 0) if (ktime_compare(ktime_get(), timeout) > 0)
return -ETIMEDOUT; return -ETIMEDOUT;
...@@ -896,8 +899,8 @@ static void arm_smmu_cmdq_skip_err(struct arm_smmu_device *smmu) ...@@ -896,8 +899,8 @@ static void arm_smmu_cmdq_skip_err(struct arm_smmu_device *smmu)
static void arm_smmu_cmdq_issue_cmd(struct arm_smmu_device *smmu, static void arm_smmu_cmdq_issue_cmd(struct arm_smmu_device *smmu,
struct arm_smmu_cmdq_ent *ent) struct arm_smmu_cmdq_ent *ent)
{ {
u32 until;
u64 cmd[CMDQ_ENT_DWORDS]; u64 cmd[CMDQ_ENT_DWORDS];
unsigned long flags;
bool wfe = !!(smmu->features & ARM_SMMU_FEAT_SEV); bool wfe = !!(smmu->features & ARM_SMMU_FEAT_SEV);
struct arm_smmu_queue *q = &smmu->cmdq.q; struct arm_smmu_queue *q = &smmu->cmdq.q;
...@@ -907,20 +910,15 @@ static void arm_smmu_cmdq_issue_cmd(struct arm_smmu_device *smmu, ...@@ -907,20 +910,15 @@ static void arm_smmu_cmdq_issue_cmd(struct arm_smmu_device *smmu,
return; return;
} }
spin_lock(&smmu->cmdq.lock); spin_lock_irqsave(&smmu->cmdq.lock, flags);
while (until = q->prod + 1, queue_insert_raw(q, cmd) == -ENOSPC) { while (queue_insert_raw(q, cmd) == -ENOSPC) {
/* if (queue_poll_cons(q, false, wfe))
* Keep the queue locked, otherwise the producer could wrap
* twice and we could see a future consumer pointer that looks
* like it's behind us.
*/
if (queue_poll_cons(q, until, wfe))
dev_err_ratelimited(smmu->dev, "CMDQ timeout\n"); dev_err_ratelimited(smmu->dev, "CMDQ timeout\n");
} }
if (ent->opcode == CMDQ_OP_CMD_SYNC && queue_poll_cons(q, until, wfe)) if (ent->opcode == CMDQ_OP_CMD_SYNC && queue_poll_cons(q, true, wfe))
dev_err_ratelimited(smmu->dev, "CMD_SYNC timeout\n"); dev_err_ratelimited(smmu->dev, "CMD_SYNC timeout\n");
spin_unlock(&smmu->cmdq.lock); spin_unlock_irqrestore(&smmu->cmdq.lock, flags);
} }
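The drain/space distinction in the new queue_poll_cons() relies on the queue's wrap-bit encoding. The self-contained toy model below is our own simplification of the driver's Q_IDX()/Q_WRP() macros; it only shows how "empty" and "full" are told apart.

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Toy model of the SMMUv3 queue pointers: a queue of 1 << shift entries,
 * with the wrap bit stored just above the index bits.
 */
struct toy_queue {
	uint32_t prod, cons;
	uint32_t shift;		/* log2(number of entries) */
};

static uint32_t q_idx(const struct toy_queue *q, uint32_t p)
{
	return p & ((1u << q->shift) - 1);
}

static uint32_t q_wrp(const struct toy_queue *q, uint32_t p)
{
	return p & (1u << q->shift);
}

/* Empty: producer and consumer agree on both index and wrap bit. */
static bool toy_queue_empty(const struct toy_queue *q)
{
	return q_idx(q, q->prod) == q_idx(q, q->cons) &&
	       q_wrp(q, q->prod) == q_wrp(q, q->cons);
}

/* Full: same index, but the producer has wrapped one more time. */
static bool toy_queue_full(const struct toy_queue *q)
{
	return q_idx(q, q->prod) == q_idx(q, q->cons) &&
	       q_wrp(q, q->prod) != q_wrp(q, q->cons);
}
```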
/* Context descriptor manipulation functions */ /* Context descriptor manipulation functions */
...@@ -1073,7 +1071,9 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_device *smmu, u32 sid, ...@@ -1073,7 +1071,9 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_device *smmu, u32 sid,
#ifdef CONFIG_PCI_ATS #ifdef CONFIG_PCI_ATS
STRTAB_STE_1_EATS_TRANS << STRTAB_STE_1_EATS_SHIFT | STRTAB_STE_1_EATS_TRANS << STRTAB_STE_1_EATS_SHIFT |
#endif #endif
STRTAB_STE_1_STRW_NSEL1 << STRTAB_STE_1_STRW_SHIFT); STRTAB_STE_1_STRW_NSEL1 << STRTAB_STE_1_STRW_SHIFT |
STRTAB_STE_1_PRIVCFG_UNPRIV <<
STRTAB_STE_1_PRIVCFG_SHIFT);
if (smmu->features & ARM_SMMU_FEAT_STALLS) if (smmu->features & ARM_SMMU_FEAT_STALLS)
dst[1] |= cpu_to_le64(STRTAB_STE_1_S1STALLD); dst[1] |= cpu_to_le64(STRTAB_STE_1_S1STALLD);
...@@ -1161,36 +1161,66 @@ static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev) ...@@ -1161,36 +1161,66 @@ static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev)
struct arm_smmu_queue *q = &smmu->evtq.q; struct arm_smmu_queue *q = &smmu->evtq.q;
u64 evt[EVTQ_ENT_DWORDS]; u64 evt[EVTQ_ENT_DWORDS];
while (!queue_remove_raw(q, evt)) { do {
u8 id = evt[0] >> EVTQ_0_ID_SHIFT & EVTQ_0_ID_MASK; while (!queue_remove_raw(q, evt)) {
u8 id = evt[0] >> EVTQ_0_ID_SHIFT & EVTQ_0_ID_MASK;
dev_info(smmu->dev, "event 0x%02x received:\n", id); dev_info(smmu->dev, "event 0x%02x received:\n", id);
for (i = 0; i < ARRAY_SIZE(evt); ++i) for (i = 0; i < ARRAY_SIZE(evt); ++i)
dev_info(smmu->dev, "\t0x%016llx\n", dev_info(smmu->dev, "\t0x%016llx\n",
(unsigned long long)evt[i]); (unsigned long long)evt[i]);
}
}
/*
* Not much we can do on overflow, so scream and pretend we're
* trying harder.
*/
if (queue_sync_prod(q) == -EOVERFLOW)
dev_err(smmu->dev, "EVTQ overflow detected -- events lost\n");
} while (!queue_empty(q));
/* Sync our overflow flag, as we believe we're up to speed */ /* Sync our overflow flag, as we believe we're up to speed */
q->cons = Q_OVF(q, q->prod) | Q_WRP(q, q->cons) | Q_IDX(q, q->cons); q->cons = Q_OVF(q, q->prod) | Q_WRP(q, q->cons) | Q_IDX(q, q->cons);
return IRQ_HANDLED; return IRQ_HANDLED;
} }
static irqreturn_t arm_smmu_evtq_handler(int irq, void *dev) static void arm_smmu_handle_ppr(struct arm_smmu_device *smmu, u64 *evt)
{ {
irqreturn_t ret = IRQ_WAKE_THREAD; u32 sid, ssid;
struct arm_smmu_device *smmu = dev; u16 grpid;
struct arm_smmu_queue *q = &smmu->evtq.q; bool ssv, last;
sid = evt[0] >> PRIQ_0_SID_SHIFT & PRIQ_0_SID_MASK;
ssv = evt[0] & PRIQ_0_SSID_V;
ssid = ssv ? evt[0] >> PRIQ_0_SSID_SHIFT & PRIQ_0_SSID_MASK : 0;
last = evt[0] & PRIQ_0_PRG_LAST;
grpid = evt[1] >> PRIQ_1_PRG_IDX_SHIFT & PRIQ_1_PRG_IDX_MASK;
dev_info(smmu->dev, "unexpected PRI request received:\n");
dev_info(smmu->dev,
"\tsid 0x%08x.0x%05x: [%u%s] %sprivileged %s%s%s access at iova 0x%016llx\n",
sid, ssid, grpid, last ? "L" : "",
evt[0] & PRIQ_0_PERM_PRIV ? "" : "un",
evt[0] & PRIQ_0_PERM_READ ? "R" : "",
evt[0] & PRIQ_0_PERM_WRITE ? "W" : "",
evt[0] & PRIQ_0_PERM_EXEC ? "X" : "",
evt[1] & PRIQ_1_ADDR_MASK << PRIQ_1_ADDR_SHIFT);
if (last) {
struct arm_smmu_cmdq_ent cmd = {
.opcode = CMDQ_OP_PRI_RESP,
.substream_valid = ssv,
.pri = {
.sid = sid,
.ssid = ssid,
.grpid = grpid,
.resp = PRI_RESP_DENY,
},
};
/* arm_smmu_cmdq_issue_cmd(smmu, &cmd);
* Not much we can do on overflow, so scream and pretend we're }
* trying harder.
*/
if (queue_sync_prod(q) == -EOVERFLOW)
dev_err(smmu->dev, "EVTQ overflow detected -- events lost\n");
else if (queue_empty(q))
ret = IRQ_NONE;
return ret;
} }
static irqreturn_t arm_smmu_priq_thread(int irq, void *dev) static irqreturn_t arm_smmu_priq_thread(int irq, void *dev)
...@@ -1199,63 +1229,19 @@ static irqreturn_t arm_smmu_priq_thread(int irq, void *dev) ...@@ -1199,63 +1229,19 @@ static irqreturn_t arm_smmu_priq_thread(int irq, void *dev)
struct arm_smmu_queue *q = &smmu->priq.q; struct arm_smmu_queue *q = &smmu->priq.q;
u64 evt[PRIQ_ENT_DWORDS]; u64 evt[PRIQ_ENT_DWORDS];
while (!queue_remove_raw(q, evt)) { do {
u32 sid, ssid; while (!queue_remove_raw(q, evt))
u16 grpid; arm_smmu_handle_ppr(smmu, evt);
bool ssv, last;
sid = evt[0] >> PRIQ_0_SID_SHIFT & PRIQ_0_SID_MASK; if (queue_sync_prod(q) == -EOVERFLOW)
ssv = evt[0] & PRIQ_0_SSID_V; dev_err(smmu->dev, "PRIQ overflow detected -- requests lost\n");
ssid = ssv ? evt[0] >> PRIQ_0_SSID_SHIFT & PRIQ_0_SSID_MASK : 0; } while (!queue_empty(q));
last = evt[0] & PRIQ_0_PRG_LAST;
grpid = evt[1] >> PRIQ_1_PRG_IDX_SHIFT & PRIQ_1_PRG_IDX_MASK;
dev_info(smmu->dev, "unexpected PRI request received:\n");
dev_info(smmu->dev,
"\tsid 0x%08x.0x%05x: [%u%s] %sprivileged %s%s%s access at iova 0x%016llx\n",
sid, ssid, grpid, last ? "L" : "",
evt[0] & PRIQ_0_PERM_PRIV ? "" : "un",
evt[0] & PRIQ_0_PERM_READ ? "R" : "",
evt[0] & PRIQ_0_PERM_WRITE ? "W" : "",
evt[0] & PRIQ_0_PERM_EXEC ? "X" : "",
evt[1] & PRIQ_1_ADDR_MASK << PRIQ_1_ADDR_SHIFT);
if (last) {
struct arm_smmu_cmdq_ent cmd = {
.opcode = CMDQ_OP_PRI_RESP,
.substream_valid = ssv,
.pri = {
.sid = sid,
.ssid = ssid,
.grpid = grpid,
.resp = PRI_RESP_DENY,
},
};
arm_smmu_cmdq_issue_cmd(smmu, &cmd);
}
}
/* Sync our overflow flag, as we believe we're up to speed */ /* Sync our overflow flag, as we believe we're up to speed */
q->cons = Q_OVF(q, q->prod) | Q_WRP(q, q->cons) | Q_IDX(q, q->cons); q->cons = Q_OVF(q, q->prod) | Q_WRP(q, q->cons) | Q_IDX(q, q->cons);
return IRQ_HANDLED; return IRQ_HANDLED;
} }
static irqreturn_t arm_smmu_priq_handler(int irq, void *dev)
{
irqreturn_t ret = IRQ_WAKE_THREAD;
struct arm_smmu_device *smmu = dev;
struct arm_smmu_queue *q = &smmu->priq.q;
/* PRIQ overflow indicates a programming error */
if (queue_sync_prod(q) == -EOVERFLOW)
dev_err(smmu->dev, "PRIQ overflow detected -- requests lost\n");
else if (queue_empty(q))
ret = IRQ_NONE;
return ret;
}
static irqreturn_t arm_smmu_cmdq_sync_handler(int irq, void *dev) static irqreturn_t arm_smmu_cmdq_sync_handler(int irq, void *dev)
{ {
/* We don't actually use CMD_SYNC interrupts for anything */ /* We don't actually use CMD_SYNC interrupts for anything */
...@@ -1288,15 +1274,11 @@ static irqreturn_t arm_smmu_gerror_handler(int irq, void *dev) ...@@ -1288,15 +1274,11 @@ static irqreturn_t arm_smmu_gerror_handler(int irq, void *dev)
if (active & GERROR_MSI_GERROR_ABT_ERR) if (active & GERROR_MSI_GERROR_ABT_ERR)
dev_warn(smmu->dev, "GERROR MSI write aborted\n"); dev_warn(smmu->dev, "GERROR MSI write aborted\n");
if (active & GERROR_MSI_PRIQ_ABT_ERR) { if (active & GERROR_MSI_PRIQ_ABT_ERR)
dev_warn(smmu->dev, "PRIQ MSI write aborted\n"); dev_warn(smmu->dev, "PRIQ MSI write aborted\n");
arm_smmu_priq_handler(irq, smmu->dev);
}
if (active & GERROR_MSI_EVTQ_ABT_ERR) { if (active & GERROR_MSI_EVTQ_ABT_ERR)
dev_warn(smmu->dev, "EVTQ MSI write aborted\n"); dev_warn(smmu->dev, "EVTQ MSI write aborted\n");
arm_smmu_evtq_handler(irq, smmu->dev);
}
if (active & GERROR_MSI_CMDQ_ABT_ERR) { if (active & GERROR_MSI_CMDQ_ABT_ERR) {
dev_warn(smmu->dev, "CMDQ MSI write aborted\n"); dev_warn(smmu->dev, "CMDQ MSI write aborted\n");
...@@ -1569,6 +1551,8 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain) ...@@ -1569,6 +1551,8 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain)
return -ENOMEM; return -ENOMEM;
domain->pgsize_bitmap = pgtbl_cfg.pgsize_bitmap; domain->pgsize_bitmap = pgtbl_cfg.pgsize_bitmap;
domain->geometry.aperture_end = (1UL << ias) - 1;
domain->geometry.force_aperture = true;
smmu_domain->pgtbl_ops = pgtbl_ops; smmu_domain->pgtbl_ops = pgtbl_ops;
ret = finalise_stage_fn(smmu_domain, &pgtbl_cfg); ret = finalise_stage_fn(smmu_domain, &pgtbl_cfg);
...@@ -1578,20 +1562,6 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain) ...@@ -1578,20 +1562,6 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain)
return ret; return ret;
} }
static struct arm_smmu_group *arm_smmu_group_get(struct device *dev)
{
struct iommu_group *group;
struct arm_smmu_group *smmu_group;
group = iommu_group_get(dev);
if (!group)
return NULL;
smmu_group = iommu_group_get_iommudata(group);
iommu_group_put(group);
return smmu_group;
}
static __le64 *arm_smmu_get_step_for_sid(struct arm_smmu_device *smmu, u32 sid) static __le64 *arm_smmu_get_step_for_sid(struct arm_smmu_device *smmu, u32 sid)
{ {
__le64 *step; __le64 *step;
...@@ -1614,27 +1584,17 @@ static __le64 *arm_smmu_get_step_for_sid(struct arm_smmu_device *smmu, u32 sid) ...@@ -1614,27 +1584,17 @@ static __le64 *arm_smmu_get_step_for_sid(struct arm_smmu_device *smmu, u32 sid)
return step; return step;
} }
static int arm_smmu_install_ste_for_group(struct arm_smmu_group *smmu_group) static int arm_smmu_install_ste_for_dev(struct iommu_fwspec *fwspec)
{ {
int i; int i;
struct arm_smmu_domain *smmu_domain = smmu_group->domain; struct arm_smmu_master_data *master = fwspec->iommu_priv;
struct arm_smmu_strtab_ent *ste = &smmu_group->ste; struct arm_smmu_device *smmu = master->smmu;
struct arm_smmu_device *smmu = smmu_group->smmu;
if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) { for (i = 0; i < fwspec->num_ids; ++i) {
ste->s1_cfg = &smmu_domain->s1_cfg; u32 sid = fwspec->ids[i];
ste->s2_cfg = NULL;
arm_smmu_write_ctx_desc(smmu, ste->s1_cfg);
} else {
ste->s1_cfg = NULL;
ste->s2_cfg = &smmu_domain->s2_cfg;
}
for (i = 0; i < smmu_group->num_sids; ++i) {
u32 sid = smmu_group->sids[i];
__le64 *step = arm_smmu_get_step_for_sid(smmu, sid); __le64 *step = arm_smmu_get_step_for_sid(smmu, sid);
arm_smmu_write_strtab_ent(smmu, sid, step, ste); arm_smmu_write_strtab_ent(smmu, sid, step, &master->ste);
} }
return 0; return 0;
...@@ -1642,13 +1602,11 @@ static int arm_smmu_install_ste_for_group(struct arm_smmu_group *smmu_group) ...@@ -1642,13 +1602,11 @@ static int arm_smmu_install_ste_for_group(struct arm_smmu_group *smmu_group)
static void arm_smmu_detach_dev(struct device *dev) static void arm_smmu_detach_dev(struct device *dev)
{ {
struct arm_smmu_group *smmu_group = arm_smmu_group_get(dev); struct arm_smmu_master_data *master = dev->iommu_fwspec->iommu_priv;
smmu_group->ste.bypass = true; master->ste.bypass = true;
if (arm_smmu_install_ste_for_group(smmu_group) < 0) if (arm_smmu_install_ste_for_dev(dev->iommu_fwspec) < 0)
dev_warn(dev, "failed to install bypass STE\n"); dev_warn(dev, "failed to install bypass STE\n");
smmu_group->domain = NULL;
} }
static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev) static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
...@@ -1656,16 +1614,20 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev) ...@@ -1656,16 +1614,20 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
int ret = 0; int ret = 0;
struct arm_smmu_device *smmu; struct arm_smmu_device *smmu;
struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain); struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
struct arm_smmu_group *smmu_group = arm_smmu_group_get(dev); struct arm_smmu_master_data *master;
struct arm_smmu_strtab_ent *ste;
if (!smmu_group) if (!dev->iommu_fwspec)
return -ENOENT; return -ENOENT;
master = dev->iommu_fwspec->iommu_priv;
smmu = master->smmu;
ste = &master->ste;
/* Already attached to a different domain? */ /* Already attached to a different domain? */
if (smmu_group->domain && smmu_group->domain != smmu_domain) if (!ste->bypass)
arm_smmu_detach_dev(dev); arm_smmu_detach_dev(dev);
smmu = smmu_group->smmu;
mutex_lock(&smmu_domain->init_mutex); mutex_lock(&smmu_domain->init_mutex);
if (!smmu_domain->smmu) { if (!smmu_domain->smmu) {
...@@ -1684,21 +1646,21 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev) ...@@ -1684,21 +1646,21 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
goto out_unlock; goto out_unlock;
} }
/* Group already attached to this domain? */ ste->bypass = false;
if (smmu_group->domain) ste->valid = true;
goto out_unlock;
smmu_group->domain = smmu_domain;
/* if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
* FIXME: This should always be "false" once we have IOMMU-backed ste->s1_cfg = &smmu_domain->s1_cfg;
* DMA ops for all devices behind the SMMU. ste->s2_cfg = NULL;
*/ arm_smmu_write_ctx_desc(smmu, ste->s1_cfg);
smmu_group->ste.bypass = domain->type == IOMMU_DOMAIN_DMA; } else {
ste->s1_cfg = NULL;
ste->s2_cfg = &smmu_domain->s2_cfg;
}
ret = arm_smmu_install_ste_for_group(smmu_group); ret = arm_smmu_install_ste_for_dev(dev->iommu_fwspec);
if (ret < 0) if (ret < 0)
smmu_group->domain = NULL; ste->valid = false;
out_unlock: out_unlock:
mutex_unlock(&smmu_domain->init_mutex); mutex_unlock(&smmu_domain->init_mutex);
...@@ -1757,40 +1719,19 @@ arm_smmu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova) ...@@ -1757,40 +1719,19 @@ arm_smmu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova)
return ret; return ret;
} }
static int __arm_smmu_get_pci_sid(struct pci_dev *pdev, u16 alias, void *sidp) static struct platform_driver arm_smmu_driver;
{
*(u32 *)sidp = alias;
return 0; /* Continue walking */
}
static void __arm_smmu_release_pci_iommudata(void *data) static int arm_smmu_match_node(struct device *dev, void *data)
{ {
kfree(data); return dev->of_node == data;
} }
static struct arm_smmu_device *arm_smmu_get_for_pci_dev(struct pci_dev *pdev) static struct arm_smmu_device *arm_smmu_get_by_node(struct device_node *np)
{ {
struct device_node *of_node; struct device *dev = driver_find_device(&arm_smmu_driver.driver, NULL,
struct platform_device *smmu_pdev; np, arm_smmu_match_node);
struct arm_smmu_device *smmu = NULL; put_device(dev);
struct pci_bus *bus = pdev->bus; return dev ? dev_get_drvdata(dev) : NULL;
/* Walk up to the root bus */
while (!pci_is_root_bus(bus))
bus = bus->parent;
/* Follow the "iommus" phandle from the host controller */
of_node = of_parse_phandle(bus->bridge->parent->of_node, "iommus", 0);
if (!of_node)
return NULL;
/* See if we can find an SMMU corresponding to the phandle */
smmu_pdev = of_find_device_by_node(of_node);
if (smmu_pdev)
smmu = platform_get_drvdata(smmu_pdev);
of_node_put(of_node);
return smmu;
} }
static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid) static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
...@@ -1803,94 +1744,91 @@ static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid) ...@@ -1803,94 +1744,91 @@ static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
return sid < limit; return sid < limit;
} }
static struct iommu_ops arm_smmu_ops;
static int arm_smmu_add_device(struct device *dev) static int arm_smmu_add_device(struct device *dev)
{ {
int i, ret; int i, ret;
u32 sid, *sids;
struct pci_dev *pdev;
struct iommu_group *group;
struct arm_smmu_group *smmu_group;
struct arm_smmu_device *smmu; struct arm_smmu_device *smmu;
struct arm_smmu_master_data *master;
struct iommu_fwspec *fwspec = dev->iommu_fwspec;
struct iommu_group *group;
/* We only support PCI, for now */ if (!fwspec || fwspec->ops != &arm_smmu_ops)
if (!dev_is_pci(dev))
return -ENODEV; return -ENODEV;
/*
pdev = to_pci_dev(dev); * We _can_ actually withstand dodgy bus code re-calling add_device()
group = iommu_group_get_for_dev(dev); * without an intervening remove_device()/of_xlate() sequence, but
if (IS_ERR(group)) * we're not going to do so quietly...
return PTR_ERR(group); */
if (WARN_ON_ONCE(fwspec->iommu_priv)) {
smmu_group = iommu_group_get_iommudata(group); master = fwspec->iommu_priv;
if (!smmu_group) { smmu = master->smmu;
smmu = arm_smmu_get_for_pci_dev(pdev);
if (!smmu) {
ret = -ENOENT;
goto out_remove_dev;
}
smmu_group = kzalloc(sizeof(*smmu_group), GFP_KERNEL);
if (!smmu_group) {
ret = -ENOMEM;
goto out_remove_dev;
}
smmu_group->ste.valid = true;
smmu_group->smmu = smmu;
iommu_group_set_iommudata(group, smmu_group,
__arm_smmu_release_pci_iommudata);
} else { } else {
smmu = smmu_group->smmu; smmu = arm_smmu_get_by_node(to_of_node(fwspec->iommu_fwnode));
} if (!smmu)
return -ENODEV;
master = kzalloc(sizeof(*master), GFP_KERNEL);
if (!master)
return -ENOMEM;
/* Assume SID == RID until firmware tells us otherwise */ master->smmu = smmu;
pci_for_each_dma_alias(pdev, __arm_smmu_get_pci_sid, &sid); fwspec->iommu_priv = master;
for (i = 0; i < smmu_group->num_sids; ++i) {
/* If we already know about this SID, then we're done */
if (smmu_group->sids[i] == sid)
goto out_put_group;
} }
/* Check the SID is in range of the SMMU and our stream table */ /* Check the SIDs are in range of the SMMU and our stream table */
if (!arm_smmu_sid_in_range(smmu, sid)) { for (i = 0; i < fwspec->num_ids; i++) {
ret = -ERANGE; u32 sid = fwspec->ids[i];
goto out_remove_dev;
}
/* Ensure l2 strtab is initialised */ if (!arm_smmu_sid_in_range(smmu, sid))
if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB) { return -ERANGE;
ret = arm_smmu_init_l2_strtab(smmu, sid);
if (ret)
goto out_remove_dev;
}
/* Resize the SID array for the group */ /* Ensure l2 strtab is initialised */
smmu_group->num_sids++; if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB) {
sids = krealloc(smmu_group->sids, smmu_group->num_sids * sizeof(*sids), ret = arm_smmu_init_l2_strtab(smmu, sid);
GFP_KERNEL); if (ret)
if (!sids) { return ret;
smmu_group->num_sids--; }
ret = -ENOMEM;
goto out_remove_dev;
} }
/* Add the new SID */ group = iommu_group_get_for_dev(dev);
sids[smmu_group->num_sids - 1] = sid; if (!IS_ERR(group))
smmu_group->sids = sids; iommu_group_put(group);
out_put_group:
iommu_group_put(group);
return 0;
out_remove_dev: return PTR_ERR_OR_ZERO(group);
iommu_group_remove_device(dev);
iommu_group_put(group);
return ret;
} }
static void arm_smmu_remove_device(struct device *dev) static void arm_smmu_remove_device(struct device *dev)
{ {
struct iommu_fwspec *fwspec = dev->iommu_fwspec;
struct arm_smmu_master_data *master;
if (!fwspec || fwspec->ops != &arm_smmu_ops)
return;
master = fwspec->iommu_priv;
if (master && master->ste.valid)
arm_smmu_detach_dev(dev);
iommu_group_remove_device(dev); iommu_group_remove_device(dev);
kfree(master);
iommu_fwspec_free(dev);
}
static struct iommu_group *arm_smmu_device_group(struct device *dev)
{
struct iommu_group *group;
/*
* We don't support devices sharing stream IDs other than PCI RID
* aliases, since the necessary ID-to-device lookup becomes rather
* impractical given a potential sparse 32-bit stream ID space.
*/
if (dev_is_pci(dev))
group = pci_device_group(dev);
else
group = generic_device_group(dev);
return group;
} }
static int arm_smmu_domain_get_attr(struct iommu_domain *domain, static int arm_smmu_domain_get_attr(struct iommu_domain *domain,
...@@ -1937,6 +1875,11 @@ static int arm_smmu_domain_set_attr(struct iommu_domain *domain, ...@@ -1937,6 +1875,11 @@ static int arm_smmu_domain_set_attr(struct iommu_domain *domain,
return ret; return ret;
} }
static int arm_smmu_of_xlate(struct device *dev, struct of_phandle_args *args)
{
return iommu_fwspec_add_ids(dev, args->args, 1);
}
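For readers unfamiliar with iommu_fwspec: of_xlate() runs once per "iommus" specifier in the master's DT node, and each call appends that specifier's single cell (the stream ID) to a per-device ID list. The toy structure below is a conceptual sketch only, not the kernel's struct iommu_fwspec.

```c
/*
 * Conceptual sketch only: a per-device list of firmware-provided IDs.
 * The real struct iommu_fwspec also records the IOMMU's fwnode and ops,
 * and grows its ID array dynamically.
 */
struct toy_fwspec {
	unsigned int num_ids;
	u32 ids[8];		/* arbitrary toy limit */
};

static int toy_fwspec_add_id(struct toy_fwspec *fw, u32 id)
{
	if (fw->num_ids >= ARRAY_SIZE(fw->ids))
		return -ENOSPC;
	fw->ids[fw->num_ids++] = id;
	return 0;
}
```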
static struct iommu_ops arm_smmu_ops = { static struct iommu_ops arm_smmu_ops = {
.capable = arm_smmu_capable, .capable = arm_smmu_capable,
.domain_alloc = arm_smmu_domain_alloc, .domain_alloc = arm_smmu_domain_alloc,
...@@ -1948,9 +1891,10 @@ static struct iommu_ops arm_smmu_ops = { ...@@ -1948,9 +1891,10 @@ static struct iommu_ops arm_smmu_ops = {
.iova_to_phys = arm_smmu_iova_to_phys, .iova_to_phys = arm_smmu_iova_to_phys,
.add_device = arm_smmu_add_device, .add_device = arm_smmu_add_device,
.remove_device = arm_smmu_remove_device, .remove_device = arm_smmu_remove_device,
.device_group = pci_device_group, .device_group = arm_smmu_device_group,
.domain_get_attr = arm_smmu_domain_get_attr, .domain_get_attr = arm_smmu_domain_get_attr,
.domain_set_attr = arm_smmu_domain_set_attr, .domain_set_attr = arm_smmu_domain_set_attr,
.of_xlate = arm_smmu_of_xlate,
.pgsize_bitmap = -1UL, /* Restricted during device attach */ .pgsize_bitmap = -1UL, /* Restricted during device attach */
}; };
...@@ -2151,6 +2095,24 @@ static int arm_smmu_write_reg_sync(struct arm_smmu_device *smmu, u32 val, ...@@ -2151,6 +2095,24 @@ static int arm_smmu_write_reg_sync(struct arm_smmu_device *smmu, u32 val,
1, ARM_SMMU_POLL_TIMEOUT_US); 1, ARM_SMMU_POLL_TIMEOUT_US);
} }
/* GBPA is "special" */
static int arm_smmu_update_gbpa(struct arm_smmu_device *smmu, u32 set, u32 clr)
{
int ret;
u32 reg, __iomem *gbpa = smmu->base + ARM_SMMU_GBPA;
ret = readl_relaxed_poll_timeout(gbpa, reg, !(reg & GBPA_UPDATE),
1, ARM_SMMU_POLL_TIMEOUT_US);
if (ret)
return ret;
reg &= ~clr;
reg |= set;
writel_relaxed(reg | GBPA_UPDATE, gbpa);
return readl_relaxed_poll_timeout(gbpa, reg, !(reg & GBPA_UPDATE),
1, ARM_SMMU_POLL_TIMEOUT_US);
}
static void arm_smmu_free_msis(void *data) static void arm_smmu_free_msis(void *data)
{ {
struct device *dev = data; struct device *dev = data;
...@@ -2235,10 +2197,10 @@ static int arm_smmu_setup_irqs(struct arm_smmu_device *smmu) ...@@ -2235,10 +2197,10 @@ static int arm_smmu_setup_irqs(struct arm_smmu_device *smmu)
/* Request interrupt lines */ /* Request interrupt lines */
irq = smmu->evtq.q.irq; irq = smmu->evtq.q.irq;
if (irq) { if (irq) {
ret = devm_request_threaded_irq(smmu->dev, irq, ret = devm_request_threaded_irq(smmu->dev, irq, NULL,
arm_smmu_evtq_handler,
arm_smmu_evtq_thread, arm_smmu_evtq_thread,
0, "arm-smmu-v3-evtq", smmu); IRQF_ONESHOT,
"arm-smmu-v3-evtq", smmu);
if (ret < 0) if (ret < 0)
dev_warn(smmu->dev, "failed to enable evtq irq\n"); dev_warn(smmu->dev, "failed to enable evtq irq\n");
} }
...@@ -2263,10 +2225,10 @@ static int arm_smmu_setup_irqs(struct arm_smmu_device *smmu) ...@@ -2263,10 +2225,10 @@ static int arm_smmu_setup_irqs(struct arm_smmu_device *smmu)
if (smmu->features & ARM_SMMU_FEAT_PRI) { if (smmu->features & ARM_SMMU_FEAT_PRI) {
irq = smmu->priq.q.irq; irq = smmu->priq.q.irq;
if (irq) { if (irq) {
ret = devm_request_threaded_irq(smmu->dev, irq, ret = devm_request_threaded_irq(smmu->dev, irq, NULL,
arm_smmu_priq_handler,
arm_smmu_priq_thread, arm_smmu_priq_thread,
0, "arm-smmu-v3-priq", IRQF_ONESHOT,
"arm-smmu-v3-priq",
smmu); smmu);
if (ret < 0) if (ret < 0)
dev_warn(smmu->dev, dev_warn(smmu->dev,
...@@ -2296,7 +2258,7 @@ static int arm_smmu_device_disable(struct arm_smmu_device *smmu) ...@@ -2296,7 +2258,7 @@ static int arm_smmu_device_disable(struct arm_smmu_device *smmu)
return ret; return ret;
} }
static int arm_smmu_device_reset(struct arm_smmu_device *smmu) static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
{ {
int ret; int ret;
u32 reg, enables; u32 reg, enables;
...@@ -2397,8 +2359,17 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu) ...@@ -2397,8 +2359,17 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu)
return ret; return ret;
} }
/* Enable the SMMU interface */
enables |= CR0_SMMUEN; /* Enable the SMMU interface, or ensure bypass */
if (!bypass || disable_bypass) {
enables |= CR0_SMMUEN;
} else {
ret = arm_smmu_update_gbpa(smmu, 0, GBPA_ABORT);
if (ret) {
dev_err(smmu->dev, "GBPA not responding to update\n");
return ret;
}
}
ret = arm_smmu_write_reg_sync(smmu, enables, ARM_SMMU_CR0, ret = arm_smmu_write_reg_sync(smmu, enables, ARM_SMMU_CR0,
ARM_SMMU_CR0ACK); ARM_SMMU_CR0ACK);
if (ret) { if (ret) {
...@@ -2597,6 +2568,15 @@ static int arm_smmu_device_dt_probe(struct platform_device *pdev) ...@@ -2597,6 +2568,15 @@ static int arm_smmu_device_dt_probe(struct platform_device *pdev)
struct resource *res; struct resource *res;
struct arm_smmu_device *smmu; struct arm_smmu_device *smmu;
struct device *dev = &pdev->dev; struct device *dev = &pdev->dev;
bool bypass = true;
u32 cells;
if (of_property_read_u32(dev->of_node, "#iommu-cells", &cells))
dev_err(dev, "missing #iommu-cells property\n");
else if (cells != 1)
dev_err(dev, "invalid #iommu-cells value (%d)\n", cells);
else
bypass = false;
smmu = devm_kzalloc(dev, sizeof(*smmu), GFP_KERNEL); smmu = devm_kzalloc(dev, sizeof(*smmu), GFP_KERNEL);
if (!smmu) { if (!smmu) {
...@@ -2649,7 +2629,24 @@ static int arm_smmu_device_dt_probe(struct platform_device *pdev) ...@@ -2649,7 +2629,24 @@ static int arm_smmu_device_dt_probe(struct platform_device *pdev)
platform_set_drvdata(pdev, smmu); platform_set_drvdata(pdev, smmu);
/* Reset the device */ /* Reset the device */
return arm_smmu_device_reset(smmu); ret = arm_smmu_device_reset(smmu, bypass);
if (ret)
return ret;
/* And we're up. Go go go! */
of_iommu_set_ops(dev->of_node, &arm_smmu_ops);
#ifdef CONFIG_PCI
pci_request_acs();
ret = bus_set_iommu(&pci_bus_type, &arm_smmu_ops);
if (ret)
return ret;
#endif
#ifdef CONFIG_ARM_AMBA
ret = bus_set_iommu(&amba_bustype, &arm_smmu_ops);
if (ret)
return ret;
#endif
return bus_set_iommu(&platform_bus_type, &arm_smmu_ops);
} }
static int arm_smmu_device_remove(struct platform_device *pdev) static int arm_smmu_device_remove(struct platform_device *pdev)
...@@ -2677,22 +2674,14 @@ static struct platform_driver arm_smmu_driver = { ...@@ -2677,22 +2674,14 @@ static struct platform_driver arm_smmu_driver = {
static int __init arm_smmu_init(void) static int __init arm_smmu_init(void)
{ {
struct device_node *np; static bool registered;
int ret; int ret = 0;
np = of_find_matching_node(NULL, arm_smmu_of_match);
if (!np)
return 0;
of_node_put(np);
ret = platform_driver_register(&arm_smmu_driver);
if (ret)
return ret;
pci_request_acs();
return bus_set_iommu(&pci_bus_type, &arm_smmu_ops); if (!registered) {
ret = platform_driver_register(&arm_smmu_driver);
registered = !ret;
}
return ret;
} }
static void __exit arm_smmu_exit(void) static void __exit arm_smmu_exit(void)
...@@ -2703,6 +2692,20 @@ static void __exit arm_smmu_exit(void) ...@@ -2703,6 +2692,20 @@ static void __exit arm_smmu_exit(void)
subsys_initcall(arm_smmu_init); subsys_initcall(arm_smmu_init);
module_exit(arm_smmu_exit); module_exit(arm_smmu_exit);
static int __init arm_smmu_of_init(struct device_node *np)
{
int ret = arm_smmu_init();
if (ret)
return ret;
if (!of_platform_device_create(np, NULL, platform_bus_type.dev_root))
return -ENODEV;
return 0;
}
IOMMU_OF_DECLARE(arm_smmuv3, "arm,smmu-v3", arm_smmu_of_init);
MODULE_DESCRIPTION("IOMMU API for ARM architected SMMUv3 implementations"); MODULE_DESCRIPTION("IOMMU API for ARM architected SMMUv3 implementations");
MODULE_AUTHOR("Will Deacon <will.deacon@arm.com>"); MODULE_AUTHOR("Will Deacon <will.deacon@arm.com>");
MODULE_LICENSE("GPL v2"); MODULE_LICENSE("GPL v2");
...@@ -28,6 +28,7 @@ ...@@ -28,6 +28,7 @@
#define pr_fmt(fmt) "arm-smmu: " fmt #define pr_fmt(fmt) "arm-smmu: " fmt
#include <linux/atomic.h>
#include <linux/delay.h> #include <linux/delay.h>
#include <linux/dma-iommu.h> #include <linux/dma-iommu.h>
#include <linux/dma-mapping.h> #include <linux/dma-mapping.h>
...@@ -40,6 +41,8 @@ ...@@ -40,6 +41,8 @@
#include <linux/module.h> #include <linux/module.h>
#include <linux/of.h> #include <linux/of.h>
#include <linux/of_address.h> #include <linux/of_address.h>
#include <linux/of_device.h>
#include <linux/of_iommu.h>
#include <linux/pci.h> #include <linux/pci.h>
#include <linux/platform_device.h> #include <linux/platform_device.h>
#include <linux/slab.h> #include <linux/slab.h>
...@@ -49,15 +52,9 @@ ...@@ -49,15 +52,9 @@
#include "io-pgtable.h" #include "io-pgtable.h"
/* Maximum number of stream IDs assigned to a single device */
#define MAX_MASTER_STREAMIDS 128
/* Maximum number of context banks per SMMU */ /* Maximum number of context banks per SMMU */
#define ARM_SMMU_MAX_CBS 128 #define ARM_SMMU_MAX_CBS 128
/* Maximum number of mapping groups per SMMU */
#define ARM_SMMU_MAX_SMRS 128
/* SMMU global address space */ /* SMMU global address space */
#define ARM_SMMU_GR0(smmu) ((smmu)->base) #define ARM_SMMU_GR0(smmu) ((smmu)->base)
#define ARM_SMMU_GR1(smmu) ((smmu)->base + (1 << (smmu)->pgshift)) #define ARM_SMMU_GR1(smmu) ((smmu)->base + (1 << (smmu)->pgshift))
...@@ -165,21 +162,27 @@ ...@@ -165,21 +162,27 @@
#define ARM_SMMU_GR0_SMR(n) (0x800 + ((n) << 2)) #define ARM_SMMU_GR0_SMR(n) (0x800 + ((n) << 2))
#define SMR_VALID (1 << 31) #define SMR_VALID (1 << 31)
#define SMR_MASK_SHIFT 16 #define SMR_MASK_SHIFT 16
#define SMR_MASK_MASK 0x7fff
#define SMR_ID_SHIFT 0 #define SMR_ID_SHIFT 0
#define SMR_ID_MASK 0x7fff
#define ARM_SMMU_GR0_S2CR(n) (0xc00 + ((n) << 2)) #define ARM_SMMU_GR0_S2CR(n) (0xc00 + ((n) << 2))
#define S2CR_CBNDX_SHIFT 0 #define S2CR_CBNDX_SHIFT 0
#define S2CR_CBNDX_MASK 0xff #define S2CR_CBNDX_MASK 0xff
#define S2CR_TYPE_SHIFT 16 #define S2CR_TYPE_SHIFT 16
#define S2CR_TYPE_MASK 0x3 #define S2CR_TYPE_MASK 0x3
#define S2CR_TYPE_TRANS (0 << S2CR_TYPE_SHIFT) enum arm_smmu_s2cr_type {
#define S2CR_TYPE_BYPASS (1 << S2CR_TYPE_SHIFT) S2CR_TYPE_TRANS,
#define S2CR_TYPE_FAULT (2 << S2CR_TYPE_SHIFT) S2CR_TYPE_BYPASS,
S2CR_TYPE_FAULT,
};
#define S2CR_PRIVCFG_SHIFT 24 #define S2CR_PRIVCFG_SHIFT 24
#define S2CR_PRIVCFG_UNPRIV (2 << S2CR_PRIVCFG_SHIFT) #define S2CR_PRIVCFG_MASK 0x3
enum arm_smmu_s2cr_privcfg {
S2CR_PRIVCFG_DEFAULT,
S2CR_PRIVCFG_DIPAN,
S2CR_PRIVCFG_UNPRIV,
S2CR_PRIVCFG_PRIV,
};
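Purely as an illustration of how the fields above combine (the real driver does this in its own register-write helper, which may differ in detail), an S2CR value could be composed like this:

```c
/*
 * Illustrative only: pack type, privcfg and context-bank index into an
 * S2CR register value using the shifts/masks defined above.
 */
static u32 example_s2cr_val(enum arm_smmu_s2cr_type type,
			    enum arm_smmu_s2cr_privcfg privcfg,
			    u8 cbndx)
{
	return ((type & S2CR_TYPE_MASK) << S2CR_TYPE_SHIFT) |
	       ((privcfg & S2CR_PRIVCFG_MASK) << S2CR_PRIVCFG_SHIFT) |
	       ((cbndx & S2CR_CBNDX_MASK) << S2CR_CBNDX_SHIFT);
}
```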
/* Context bank attribute registers */ /* Context bank attribute registers */
#define ARM_SMMU_GR1_CBAR(n) (0x0 + ((n) << 2)) #define ARM_SMMU_GR1_CBAR(n) (0x0 + ((n) << 2))
...@@ -217,6 +220,7 @@ ...@@ -217,6 +220,7 @@
#define ARM_SMMU_CB_TTBR0 0x20 #define ARM_SMMU_CB_TTBR0 0x20
#define ARM_SMMU_CB_TTBR1 0x28 #define ARM_SMMU_CB_TTBR1 0x28
#define ARM_SMMU_CB_TTBCR 0x30 #define ARM_SMMU_CB_TTBCR 0x30
#define ARM_SMMU_CB_CONTEXTIDR 0x34
#define ARM_SMMU_CB_S1_MAIR0 0x38 #define ARM_SMMU_CB_S1_MAIR0 0x38
#define ARM_SMMU_CB_S1_MAIR1 0x3c #define ARM_SMMU_CB_S1_MAIR1 0x3c
#define ARM_SMMU_CB_PAR 0x50 #define ARM_SMMU_CB_PAR 0x50
...@@ -239,7 +243,6 @@ ...@@ -239,7 +243,6 @@
#define SCTLR_AFE (1 << 2) #define SCTLR_AFE (1 << 2)
#define SCTLR_TRE (1 << 1) #define SCTLR_TRE (1 << 1)
#define SCTLR_M (1 << 0) #define SCTLR_M (1 << 0)
#define SCTLR_EAE_SBOP (SCTLR_AFE | SCTLR_TRE)
#define ARM_MMU500_ACTLR_CPRE (1 << 1) #define ARM_MMU500_ACTLR_CPRE (1 << 1)
...@@ -296,23 +299,33 @@ enum arm_smmu_implementation { ...@@ -296,23 +299,33 @@ enum arm_smmu_implementation {
CAVIUM_SMMUV2, CAVIUM_SMMUV2,
}; };
struct arm_smmu_s2cr {
struct iommu_group *group;
int count;
enum arm_smmu_s2cr_type type;
enum arm_smmu_s2cr_privcfg privcfg;
u8 cbndx;
};
#define s2cr_init_val (struct arm_smmu_s2cr){ \
.type = disable_bypass ? S2CR_TYPE_FAULT : S2CR_TYPE_BYPASS, \
}
struct arm_smmu_smr { struct arm_smmu_smr {
u8 idx;
u16 mask; u16 mask;
u16 id; u16 id;
bool valid;
}; };
struct arm_smmu_master_cfg { struct arm_smmu_master_cfg {
int num_streamids; struct arm_smmu_device *smmu;
u16 streamids[MAX_MASTER_STREAMIDS]; s16 smendx[];
struct arm_smmu_smr *smrs;
};
struct arm_smmu_master {
struct device_node *of_node;
struct rb_node node;
struct arm_smmu_master_cfg cfg;
}; };
#define INVALID_SMENDX -1
#define __fwspec_cfg(fw) ((struct arm_smmu_master_cfg *)fw->iommu_priv)
#define fwspec_smmu(fw) (__fwspec_cfg(fw)->smmu)
#define for_each_cfg_sme(fw, i, idx) \
for (i = 0; idx = __fwspec_cfg(fw)->smendx[i], i < fw->num_ids; ++i)
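A hedged sketch of how these helpers are meant to be used; the two write helpers named below are assumed from elsewhere in this patch and may not match the driver exactly.

```c
/*
 * Illustration only: walk every stream-map entry (SME) index recorded for
 * a device's fwspec and reprogram it.  INVALID_SMENDX marks slots that
 * were never allocated.
 */
static void example_sync_smes(struct iommu_fwspec *fwspec)
{
	struct arm_smmu_device *smmu = fwspec_smmu(fwspec);
	int i, idx;

	for_each_cfg_sme(fwspec, i, idx) {
		if (idx == INVALID_SMENDX)
			continue;
		arm_smmu_write_smr(smmu, idx);	/* assumed helper */
		arm_smmu_write_s2cr(smmu, idx);	/* assumed helper */
	}
}
```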
struct arm_smmu_device { struct arm_smmu_device {
struct device *dev; struct device *dev;
...@@ -346,7 +359,11 @@ struct arm_smmu_device { ...@@ -346,7 +359,11 @@ struct arm_smmu_device {
atomic_t irptndx; atomic_t irptndx;
u32 num_mapping_groups; u32 num_mapping_groups;
DECLARE_BITMAP(smr_map, ARM_SMMU_MAX_SMRS); u16 streamid_mask;
u16 smr_mask_mask;
struct arm_smmu_smr *smrs;
struct arm_smmu_s2cr *s2crs;
struct mutex stream_map_mutex;
unsigned long va_size; unsigned long va_size;
unsigned long ipa_size; unsigned long ipa_size;
...@@ -357,9 +374,6 @@ struct arm_smmu_device { ...@@ -357,9 +374,6 @@ struct arm_smmu_device {
u32 num_context_irqs; u32 num_context_irqs;
unsigned int *irqs; unsigned int *irqs;
struct list_head list;
struct rb_root masters;
u32 cavium_id_base; /* Specific to Cavium */ u32 cavium_id_base; /* Specific to Cavium */
}; };
...@@ -397,15 +411,6 @@ struct arm_smmu_domain { ...@@ -397,15 +411,6 @@ struct arm_smmu_domain {
struct iommu_domain domain; struct iommu_domain domain;
}; };
struct arm_smmu_phandle_args {
struct device_node *np;
int args_count;
uint32_t args[MAX_MASTER_STREAMIDS];
};
static DEFINE_SPINLOCK(arm_smmu_devices_lock);
static LIST_HEAD(arm_smmu_devices);
struct arm_smmu_option_prop { struct arm_smmu_option_prop {
u32 opt; u32 opt;
const char *prop; const char *prop;
...@@ -413,6 +418,8 @@ struct arm_smmu_option_prop { ...@@ -413,6 +418,8 @@ struct arm_smmu_option_prop {
static atomic_t cavium_smmu_context_count = ATOMIC_INIT(0); static atomic_t cavium_smmu_context_count = ATOMIC_INIT(0);
static bool using_legacy_binding, using_generic_binding;
static struct arm_smmu_option_prop arm_smmu_options[] = { static struct arm_smmu_option_prop arm_smmu_options[] = {
{ ARM_SMMU_OPT_SECURE_CFG_ACCESS, "calxeda,smmu-secure-config-access" }, { ARM_SMMU_OPT_SECURE_CFG_ACCESS, "calxeda,smmu-secure-config-access" },
{ 0, NULL}, { 0, NULL},
...@@ -444,131 +451,86 @@ static struct device_node *dev_get_dev_node(struct device *dev) ...@@ -444,131 +451,86 @@ static struct device_node *dev_get_dev_node(struct device *dev)
while (!pci_is_root_bus(bus)) while (!pci_is_root_bus(bus))
bus = bus->parent; bus = bus->parent;
return bus->bridge->parent->of_node; return of_node_get(bus->bridge->parent->of_node);
} }
return dev->of_node; return of_node_get(dev->of_node);
} }
static struct arm_smmu_master *find_smmu_master(struct arm_smmu_device *smmu, static int __arm_smmu_get_pci_sid(struct pci_dev *pdev, u16 alias, void *data)
struct device_node *dev_node)
{ {
struct rb_node *node = smmu->masters.rb_node; *((__be32 *)data) = cpu_to_be32(alias);
return 0; /* Continue walking */
while (node) {
struct arm_smmu_master *master;
master = container_of(node, struct arm_smmu_master, node);
if (dev_node < master->of_node)
node = node->rb_left;
else if (dev_node > master->of_node)
node = node->rb_right;
else
return master;
}
return NULL;
} }
static struct arm_smmu_master_cfg * static int __find_legacy_master_phandle(struct device *dev, void *data)
find_smmu_master_cfg(struct device *dev)
{ {
struct arm_smmu_master_cfg *cfg = NULL; struct of_phandle_iterator *it = *(void **)data;
struct iommu_group *group = iommu_group_get(dev); struct device_node *np = it->node;
int err;
if (group) {
cfg = iommu_group_get_iommudata(group); of_for_each_phandle(it, err, dev->of_node, "mmu-masters",
iommu_group_put(group); "#stream-id-cells", 0)
} if (it->node == np) {
*(void **)data = dev;
return cfg; return 1;
}
it->node = np;
return err == -ENOENT ? 0 : err;
} }
static int insert_smmu_master(struct arm_smmu_device *smmu, static struct platform_driver arm_smmu_driver;
struct arm_smmu_master *master) static struct iommu_ops arm_smmu_ops;
static int arm_smmu_register_legacy_master(struct device *dev,
struct arm_smmu_device **smmu)
{ {
struct rb_node **new, *parent; struct device *smmu_dev;
struct device_node *np;
new = &smmu->masters.rb_node; struct of_phandle_iterator it;
parent = NULL; void *data = &it;
while (*new) { u32 *sids;
struct arm_smmu_master *this __be32 pci_sid;
= container_of(*new, struct arm_smmu_master, node); int err;
parent = *new; np = dev_get_dev_node(dev);
if (master->of_node < this->of_node) if (!np || !of_find_property(np, "#stream-id-cells", NULL)) {
new = &((*new)->rb_left); of_node_put(np);
else if (master->of_node > this->of_node) return -ENODEV;
new = &((*new)->rb_right);
else
return -EEXIST;
} }
rb_link_node(&master->node, parent, new); it.node = np;
rb_insert_color(&master->node, &smmu->masters); err = driver_for_each_device(&arm_smmu_driver.driver, NULL, &data,
return 0; __find_legacy_master_phandle);
} smmu_dev = data;
of_node_put(np);
static int register_smmu_master(struct arm_smmu_device *smmu, if (err == 0)
struct device *dev, return -ENODEV;
struct arm_smmu_phandle_args *masterspec) if (err < 0)
{ return err;
int i;
struct arm_smmu_master *master;
master = find_smmu_master(smmu, masterspec->np); if (dev_is_pci(dev)) {
if (master) { /* "mmu-masters" assumes Stream ID == Requester ID */
dev_err(dev, pci_for_each_dma_alias(to_pci_dev(dev), __arm_smmu_get_pci_sid,
"rejecting multiple registrations for master device %s\n", &pci_sid);
masterspec->np->name); it.cur = &pci_sid;
return -EBUSY; it.cur_count = 1;
} }
if (masterspec->args_count > MAX_MASTER_STREAMIDS) { err = iommu_fwspec_init(dev, &smmu_dev->of_node->fwnode,
dev_err(dev, &arm_smmu_ops);
"reached maximum number (%d) of stream IDs for master device %s\n", if (err)
MAX_MASTER_STREAMIDS, masterspec->np->name); return err;
return -ENOSPC;
}
master = devm_kzalloc(dev, sizeof(*master), GFP_KERNEL); sids = kcalloc(it.cur_count, sizeof(*sids), GFP_KERNEL);
if (!master) if (!sids)
return -ENOMEM; return -ENOMEM;
master->of_node = masterspec->np; *smmu = dev_get_drvdata(smmu_dev);
master->cfg.num_streamids = masterspec->args_count; of_phandle_iterator_args(&it, sids, it.cur_count);
err = iommu_fwspec_add_ids(dev, sids, it.cur_count);
for (i = 0; i < master->cfg.num_streamids; ++i) { kfree(sids);
u16 streamid = masterspec->args[i]; return err;
if (!(smmu->features & ARM_SMMU_FEAT_STREAM_MATCH) &&
(streamid >= smmu->num_mapping_groups)) {
dev_err(dev,
"stream ID for master device %s greater than maximum allowed (%d)\n",
masterspec->np->name, smmu->num_mapping_groups);
return -ERANGE;
}
master->cfg.streamids[i] = streamid;
}
return insert_smmu_master(smmu, master);
}
static struct arm_smmu_device *find_smmu_for_device(struct device *dev)
{
struct arm_smmu_device *smmu;
struct arm_smmu_master *master = NULL;
struct device_node *dev_node = dev_get_dev_node(dev);
spin_lock(&arm_smmu_devices_lock);
list_for_each_entry(smmu, &arm_smmu_devices, list) {
master = find_smmu_master(smmu, dev_node);
if (master)
break;
}
spin_unlock(&arm_smmu_devices_lock);
return master ? smmu : NULL;
} }
static int __arm_smmu_alloc_bitmap(unsigned long *map, int start, int end) static int __arm_smmu_alloc_bitmap(unsigned long *map, int start, int end)
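[Editor's note] For the legacy "mmu-masters" path rewritten above, PCI masters get their stream IDs from the assumption that Stream ID == Requester ID: pci_for_each_dma_alias() reports the 16-bit alias, and __arm_smmu_get_pci_sid() stores it big-endian so it can be fed through the same of_phandle_iterator plumbing as real DT cells. A minimal host-side sketch of the requester-ID encoding that assumption relies on (the helper name is illustrative, not from the driver):

        #include <stdint.h>
        #include <stdio.h>

        /* PCI Requester ID: bus in bits [15:8], devfn (device/function) in [7:0]. */
        static uint16_t pci_requester_id(uint8_t bus, uint8_t devfn)
        {
                return ((uint16_t)bus << 8) | devfn;
        }

        int main(void)
        {
                /* e.g. device 02:01.0 -> devfn = (1 << 3) | 0 = 0x08 */
                uint16_t rid = pci_requester_id(0x02, 0x08);

                /*
                 * The legacy path treats this RID as the SMMU stream ID; the
                 * driver stores it with cpu_to_be32() because DT cells are
                 * big-endian, so it can reuse of_phandle_iterator_args().
                 */
                printf("assumed stream ID = 0x%04x\n", rid);
                return 0;
        }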
...@@ -738,7 +700,7 @@ static irqreturn_t arm_smmu_global_fault(int irq, void *dev) ...@@ -738,7 +700,7 @@ static irqreturn_t arm_smmu_global_fault(int irq, void *dev)
static void arm_smmu_init_context_bank(struct arm_smmu_domain *smmu_domain, static void arm_smmu_init_context_bank(struct arm_smmu_domain *smmu_domain,
struct io_pgtable_cfg *pgtbl_cfg) struct io_pgtable_cfg *pgtbl_cfg)
{ {
u32 reg; u32 reg, reg2;
u64 reg64; u64 reg64;
bool stage1; bool stage1;
struct arm_smmu_cfg *cfg = &smmu_domain->cfg; struct arm_smmu_cfg *cfg = &smmu_domain->cfg;
...@@ -781,14 +743,22 @@ static void arm_smmu_init_context_bank(struct arm_smmu_domain *smmu_domain, ...@@ -781,14 +743,22 @@ static void arm_smmu_init_context_bank(struct arm_smmu_domain *smmu_domain,
/* TTBRs */ /* TTBRs */
if (stage1) { if (stage1) {
reg64 = pgtbl_cfg->arm_lpae_s1_cfg.ttbr[0]; u16 asid = ARM_SMMU_CB_ASID(smmu, cfg);
reg64 |= ((u64)ARM_SMMU_CB_ASID(smmu, cfg)) << TTBRn_ASID_SHIFT; if (cfg->fmt == ARM_SMMU_CTX_FMT_AARCH32_S) {
writeq_relaxed(reg64, cb_base + ARM_SMMU_CB_TTBR0); reg = pgtbl_cfg->arm_v7s_cfg.ttbr[0];
writel_relaxed(reg, cb_base + ARM_SMMU_CB_TTBR0);
reg64 = pgtbl_cfg->arm_lpae_s1_cfg.ttbr[1]; reg = pgtbl_cfg->arm_v7s_cfg.ttbr[1];
reg64 |= ((u64)ARM_SMMU_CB_ASID(smmu, cfg)) << TTBRn_ASID_SHIFT; writel_relaxed(reg, cb_base + ARM_SMMU_CB_TTBR1);
writeq_relaxed(reg64, cb_base + ARM_SMMU_CB_TTBR1); writel_relaxed(asid, cb_base + ARM_SMMU_CB_CONTEXTIDR);
} else {
reg64 = pgtbl_cfg->arm_lpae_s1_cfg.ttbr[0];
reg64 |= (u64)asid << TTBRn_ASID_SHIFT;
writeq_relaxed(reg64, cb_base + ARM_SMMU_CB_TTBR0);
reg64 = pgtbl_cfg->arm_lpae_s1_cfg.ttbr[1];
reg64 |= (u64)asid << TTBRn_ASID_SHIFT;
writeq_relaxed(reg64, cb_base + ARM_SMMU_CB_TTBR1);
}
} else { } else {
reg64 = pgtbl_cfg->arm_lpae_s2_cfg.vttbr; reg64 = pgtbl_cfg->arm_lpae_s2_cfg.vttbr;
writeq_relaxed(reg64, cb_base + ARM_SMMU_CB_TTBR0); writeq_relaxed(reg64, cb_base + ARM_SMMU_CB_TTBR0);
...@@ -796,28 +766,36 @@ static void arm_smmu_init_context_bank(struct arm_smmu_domain *smmu_domain, ...@@ -796,28 +766,36 @@ static void arm_smmu_init_context_bank(struct arm_smmu_domain *smmu_domain,
/* TTBCR */ /* TTBCR */
if (stage1) { if (stage1) {
reg = pgtbl_cfg->arm_lpae_s1_cfg.tcr; if (cfg->fmt == ARM_SMMU_CTX_FMT_AARCH32_S) {
writel_relaxed(reg, cb_base + ARM_SMMU_CB_TTBCR); reg = pgtbl_cfg->arm_v7s_cfg.tcr;
if (smmu->version > ARM_SMMU_V1) { reg2 = 0;
reg = pgtbl_cfg->arm_lpae_s1_cfg.tcr >> 32; } else {
reg |= TTBCR2_SEP_UPSTREAM; reg = pgtbl_cfg->arm_lpae_s1_cfg.tcr;
writel_relaxed(reg, cb_base + ARM_SMMU_CB_TTBCR2); reg2 = pgtbl_cfg->arm_lpae_s1_cfg.tcr >> 32;
reg2 |= TTBCR2_SEP_UPSTREAM;
} }
if (smmu->version > ARM_SMMU_V1)
writel_relaxed(reg2, cb_base + ARM_SMMU_CB_TTBCR2);
} else { } else {
reg = pgtbl_cfg->arm_lpae_s2_cfg.vtcr; reg = pgtbl_cfg->arm_lpae_s2_cfg.vtcr;
writel_relaxed(reg, cb_base + ARM_SMMU_CB_TTBCR);
} }
writel_relaxed(reg, cb_base + ARM_SMMU_CB_TTBCR);
/* MAIRs (stage-1 only) */ /* MAIRs (stage-1 only) */
if (stage1) { if (stage1) {
reg = pgtbl_cfg->arm_lpae_s1_cfg.mair[0]; if (cfg->fmt == ARM_SMMU_CTX_FMT_AARCH32_S) {
reg = pgtbl_cfg->arm_v7s_cfg.prrr;
reg2 = pgtbl_cfg->arm_v7s_cfg.nmrr;
} else {
reg = pgtbl_cfg->arm_lpae_s1_cfg.mair[0];
reg2 = pgtbl_cfg->arm_lpae_s1_cfg.mair[1];
}
writel_relaxed(reg, cb_base + ARM_SMMU_CB_S1_MAIR0); writel_relaxed(reg, cb_base + ARM_SMMU_CB_S1_MAIR0);
reg = pgtbl_cfg->arm_lpae_s1_cfg.mair[1]; writel_relaxed(reg2, cb_base + ARM_SMMU_CB_S1_MAIR1);
writel_relaxed(reg, cb_base + ARM_SMMU_CB_S1_MAIR1);
} }
/* SCTLR */ /* SCTLR */
reg = SCTLR_CFIE | SCTLR_CFRE | SCTLR_M | SCTLR_EAE_SBOP; reg = SCTLR_CFIE | SCTLR_CFRE | SCTLR_AFE | SCTLR_TRE | SCTLR_M;
if (stage1) if (stage1)
reg |= SCTLR_S1_ASIDPNE; reg |= SCTLR_S1_ASIDPNE;
#ifdef __BIG_ENDIAN #ifdef __BIG_ENDIAN
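[Editor's note] In the stage-1 LPAE branch above, the ASID is folded into the upper bits of the 64-bit TTBR value before the writeq_relaxed(), while the new AArch32-short branch writes the ASID to CONTEXTIDR instead. A standalone sketch of the LPAE composition, assuming the driver's TTBRn_ASID_SHIFT of 48:

        #include <stdint.h>
        #include <stdio.h>

        /* Assumed position of the ASID field in TTBRn (TTBRn_ASID_SHIFT). */
        #define TTBRN_ASID_SHIFT        48

        /* Combine a page-table base address with the context's ASID. */
        static uint64_t make_ttbr(uint64_t pgd_phys, uint16_t asid)
        {
                return pgd_phys | ((uint64_t)asid << TTBRN_ASID_SHIFT);
        }

        int main(void)
        {
                uint64_t ttbr0 = make_ttbr(0x80001000ULL, 5);

                printf("TTBR0 = 0x%016llx\n", (unsigned long long)ttbr0);
                return 0;
        }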
...@@ -841,12 +819,6 @@ static int arm_smmu_init_domain_context(struct iommu_domain *domain, ...@@ -841,12 +819,6 @@ static int arm_smmu_init_domain_context(struct iommu_domain *domain,
if (smmu_domain->smmu) if (smmu_domain->smmu)
goto out_unlock; goto out_unlock;
/* We're bypassing these SIDs, so don't allocate an actual context */
if (domain->type == IOMMU_DOMAIN_DMA) {
smmu_domain->smmu = smmu;
goto out_unlock;
}
/* /*
* Mapping the requested stage onto what we support is surprisingly * Mapping the requested stage onto what we support is surprisingly
* complicated, mainly because the spec allows S1+S2 SMMUs without * complicated, mainly because the spec allows S1+S2 SMMUs without
...@@ -880,6 +852,11 @@ static int arm_smmu_init_domain_context(struct iommu_domain *domain, ...@@ -880,6 +852,11 @@ static int arm_smmu_init_domain_context(struct iommu_domain *domain,
*/ */
if (smmu->features & ARM_SMMU_FEAT_FMT_AARCH32_L) if (smmu->features & ARM_SMMU_FEAT_FMT_AARCH32_L)
cfg->fmt = ARM_SMMU_CTX_FMT_AARCH32_L; cfg->fmt = ARM_SMMU_CTX_FMT_AARCH32_L;
if (IS_ENABLED(CONFIG_IOMMU_IO_PGTABLE_ARMV7S) &&
!IS_ENABLED(CONFIG_64BIT) && !IS_ENABLED(CONFIG_ARM_LPAE) &&
(smmu->features & ARM_SMMU_FEAT_FMT_AARCH32_S) &&
(smmu_domain->stage == ARM_SMMU_DOMAIN_S1))
cfg->fmt = ARM_SMMU_CTX_FMT_AARCH32_S;
if ((IS_ENABLED(CONFIG_64BIT) || cfg->fmt == ARM_SMMU_CTX_FMT_NONE) && if ((IS_ENABLED(CONFIG_64BIT) || cfg->fmt == ARM_SMMU_CTX_FMT_NONE) &&
(smmu->features & (ARM_SMMU_FEAT_FMT_AARCH64_64K | (smmu->features & (ARM_SMMU_FEAT_FMT_AARCH64_64K |
ARM_SMMU_FEAT_FMT_AARCH64_16K | ARM_SMMU_FEAT_FMT_AARCH64_16K |
...@@ -899,10 +876,14 @@ static int arm_smmu_init_domain_context(struct iommu_domain *domain, ...@@ -899,10 +876,14 @@ static int arm_smmu_init_domain_context(struct iommu_domain *domain,
oas = smmu->ipa_size; oas = smmu->ipa_size;
if (cfg->fmt == ARM_SMMU_CTX_FMT_AARCH64) { if (cfg->fmt == ARM_SMMU_CTX_FMT_AARCH64) {
fmt = ARM_64_LPAE_S1; fmt = ARM_64_LPAE_S1;
} else { } else if (cfg->fmt == ARM_SMMU_CTX_FMT_AARCH32_L) {
fmt = ARM_32_LPAE_S1; fmt = ARM_32_LPAE_S1;
ias = min(ias, 32UL); ias = min(ias, 32UL);
oas = min(oas, 40UL); oas = min(oas, 40UL);
} else {
fmt = ARM_V7S;
ias = min(ias, 32UL);
oas = min(oas, 32UL);
} }
break; break;
case ARM_SMMU_DOMAIN_NESTED: case ARM_SMMU_DOMAIN_NESTED:
...@@ -958,6 +939,8 @@ static int arm_smmu_init_domain_context(struct iommu_domain *domain, ...@@ -958,6 +939,8 @@ static int arm_smmu_init_domain_context(struct iommu_domain *domain,
/* Update the domain's page sizes to reflect the page table format */ /* Update the domain's page sizes to reflect the page table format */
domain->pgsize_bitmap = pgtbl_cfg.pgsize_bitmap; domain->pgsize_bitmap = pgtbl_cfg.pgsize_bitmap;
domain->geometry.aperture_end = (1UL << ias) - 1;
domain->geometry.force_aperture = true;
/* Initialise the context bank with our page table cfg */ /* Initialise the context bank with our page table cfg */
arm_smmu_init_context_bank(smmu_domain, &pgtbl_cfg); arm_smmu_init_context_bank(smmu_domain, &pgtbl_cfg);
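[Editor's note] The two lines added just above give the domain an explicit geometry: the last usable IOVA is (1UL << ias) - 1, and force_aperture marks that range as mandatory for callers such as the DMA layer. A trivial sketch of the arithmetic:

        #include <stdint.h>
        #include <stdio.h>

        /* Last valid IOVA for a given input address size, as set on the domain. */
        static uint64_t aperture_end(unsigned int ias)
        {
                return (1ULL << ias) - 1;
        }

        int main(void)
        {
                printf("ias=32 -> 0x%llx\n", (unsigned long long)aperture_end(32));
                printf("ias=48 -> 0x%llx\n", (unsigned long long)aperture_end(48));
                return 0;
        }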
...@@ -996,7 +979,7 @@ static void arm_smmu_destroy_domain_context(struct iommu_domain *domain) ...@@ -996,7 +979,7 @@ static void arm_smmu_destroy_domain_context(struct iommu_domain *domain)
void __iomem *cb_base; void __iomem *cb_base;
int irq; int irq;
if (!smmu || domain->type == IOMMU_DOMAIN_DMA) if (!smmu)
return; return;
/* /*
...@@ -1030,8 +1013,8 @@ static struct iommu_domain *arm_smmu_domain_alloc(unsigned type) ...@@ -1030,8 +1013,8 @@ static struct iommu_domain *arm_smmu_domain_alloc(unsigned type)
if (!smmu_domain) if (!smmu_domain)
return NULL; return NULL;
if (type == IOMMU_DOMAIN_DMA && if (type == IOMMU_DOMAIN_DMA && (using_legacy_binding ||
iommu_get_dma_cookie(&smmu_domain->domain)) { iommu_get_dma_cookie(&smmu_domain->domain))) {
kfree(smmu_domain); kfree(smmu_domain);
return NULL; return NULL;
} }
...@@ -1055,162 +1038,197 @@ static void arm_smmu_domain_free(struct iommu_domain *domain) ...@@ -1055,162 +1038,197 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
kfree(smmu_domain); kfree(smmu_domain);
} }
static int arm_smmu_master_configure_smrs(struct arm_smmu_device *smmu, static void arm_smmu_write_smr(struct arm_smmu_device *smmu, int idx)
struct arm_smmu_master_cfg *cfg)
{ {
int i; struct arm_smmu_smr *smr = smmu->smrs + idx;
struct arm_smmu_smr *smrs; u32 reg = smr->id << SMR_ID_SHIFT | smr->mask << SMR_MASK_SHIFT;
void __iomem *gr0_base = ARM_SMMU_GR0(smmu);
if (!(smmu->features & ARM_SMMU_FEAT_STREAM_MATCH)) if (smr->valid)
return 0; reg |= SMR_VALID;
writel_relaxed(reg, ARM_SMMU_GR0(smmu) + ARM_SMMU_GR0_SMR(idx));
}
if (cfg->smrs) static void arm_smmu_write_s2cr(struct arm_smmu_device *smmu, int idx)
return -EEXIST; {
struct arm_smmu_s2cr *s2cr = smmu->s2crs + idx;
u32 reg = (s2cr->type & S2CR_TYPE_MASK) << S2CR_TYPE_SHIFT |
(s2cr->cbndx & S2CR_CBNDX_MASK) << S2CR_CBNDX_SHIFT |
(s2cr->privcfg & S2CR_PRIVCFG_MASK) << S2CR_PRIVCFG_SHIFT;
smrs = kmalloc_array(cfg->num_streamids, sizeof(*smrs), GFP_KERNEL); writel_relaxed(reg, ARM_SMMU_GR0(smmu) + ARM_SMMU_GR0_S2CR(idx));
if (!smrs) { }
dev_err(smmu->dev, "failed to allocate %d SMRs\n",
cfg->num_streamids);
return -ENOMEM;
}
/* Allocate the SMRs on the SMMU */ static void arm_smmu_write_sme(struct arm_smmu_device *smmu, int idx)
for (i = 0; i < cfg->num_streamids; ++i) { {
int idx = __arm_smmu_alloc_bitmap(smmu->smr_map, 0, arm_smmu_write_s2cr(smmu, idx);
smmu->num_mapping_groups); if (smmu->smrs)
if (idx < 0) { arm_smmu_write_smr(smmu, idx);
dev_err(smmu->dev, "failed to allocate free SMR\n"); }
goto err_free_smrs;
}
smrs[i] = (struct arm_smmu_smr) { static int arm_smmu_find_sme(struct arm_smmu_device *smmu, u16 id, u16 mask)
.idx = idx, {
.mask = 0, /* We don't currently share SMRs */ struct arm_smmu_smr *smrs = smmu->smrs;
.id = cfg->streamids[i], int i, free_idx = -ENOSPC;
};
}
/* It worked! Now, poke the actual hardware */ /* Stream indexing is blissfully easy */
for (i = 0; i < cfg->num_streamids; ++i) { if (!smrs)
u32 reg = SMR_VALID | smrs[i].id << SMR_ID_SHIFT | return id;
smrs[i].mask << SMR_MASK_SHIFT;
writel_relaxed(reg, gr0_base + ARM_SMMU_GR0_SMR(smrs[i].idx));
}
cfg->smrs = smrs; /* Validating SMRs is... less so */
return 0; for (i = 0; i < smmu->num_mapping_groups; ++i) {
if (!smrs[i].valid) {
/*
* Note the first free entry we come across, which
* we'll claim in the end if nothing else matches.
*/
if (free_idx < 0)
free_idx = i;
continue;
}
/*
* If the new entry is _entirely_ matched by an existing entry,
* then reuse that, with the guarantee that there also cannot
* be any subsequent conflicting entries. In normal use we'd
* expect simply identical entries for this case, but there's
* no harm in accommodating the generalisation.
*/
if ((mask & smrs[i].mask) == mask &&
!((id ^ smrs[i].id) & ~smrs[i].mask))
return i;
/*
* If the new entry has any other overlap with an existing one,
* though, then there always exists at least one stream ID
* which would cause a conflict, and we can't allow that risk.
*/
if (!((id ^ smrs[i].id) & ~(smrs[i].mask | mask)))
return -EINVAL;
}
err_free_smrs: return free_idx;
while (--i >= 0)
__arm_smmu_free_bitmap(smmu->smr_map, smrs[i].idx);
kfree(smrs);
return -ENOSPC;
} }
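[Editor's note] arm_smmu_find_sme() decides whether a new (id, mask) pair can reuse an existing stream-match register or must be rejected using two bit tests: the existing entry covers the new one when its mask covers the new mask and the IDs agree outside that mask, and the two conflict when at least one stream ID would match both. A host-side sketch using the same expressions as the diff (struct and function names are illustrative):

        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        struct smr { uint16_t id, mask; };

        /* New entry is entirely matched by an existing one: safe to reuse it. */
        static bool smr_covers(struct smr old, uint16_t id, uint16_t mask)
        {
                return (mask & old.mask) == mask && !((id ^ old.id) & ~old.mask);
        }

        /* At least one stream ID would match both entries: must be rejected. */
        static bool smr_conflicts(struct smr old, uint16_t id, uint16_t mask)
        {
                return !((id ^ old.id) & ~(old.mask | mask));
        }

        int main(void)
        {
                /* Existing SMR matching stream IDs 0x0100..0x01ff */
                struct smr old = { .id = 0x0100, .mask = 0x00ff };

                printf("covers 0x0140/0x000f:    %d\n", smr_covers(old, 0x0140, 0x000f));    /* 1 */
                printf("conflicts 0x01f0/0xff00: %d\n", smr_conflicts(old, 0x01f0, 0xff00)); /* 1 */
                printf("conflicts 0x0200/0x0000: %d\n", smr_conflicts(old, 0x0200, 0x0000)); /* 0 */
                return 0;
        }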
static void arm_smmu_master_free_smrs(struct arm_smmu_device *smmu, static bool arm_smmu_free_sme(struct arm_smmu_device *smmu, int idx)
struct arm_smmu_master_cfg *cfg)
{ {
int i; if (--smmu->s2crs[idx].count)
void __iomem *gr0_base = ARM_SMMU_GR0(smmu); return false;
struct arm_smmu_smr *smrs = cfg->smrs;
if (!smrs)
return;
/* Invalidate the SMRs before freeing back to the allocator */
for (i = 0; i < cfg->num_streamids; ++i) {
u8 idx = smrs[i].idx;
writel_relaxed(~SMR_VALID, gr0_base + ARM_SMMU_GR0_SMR(idx)); smmu->s2crs[idx] = s2cr_init_val;
__arm_smmu_free_bitmap(smmu->smr_map, idx); if (smmu->smrs)
} smmu->smrs[idx].valid = false;
cfg->smrs = NULL; return true;
kfree(smrs);
} }
static int arm_smmu_domain_add_master(struct arm_smmu_domain *smmu_domain, static int arm_smmu_master_alloc_smes(struct device *dev)
struct arm_smmu_master_cfg *cfg)
{ {
int i, ret; struct iommu_fwspec *fwspec = dev->iommu_fwspec;
struct arm_smmu_device *smmu = smmu_domain->smmu; struct arm_smmu_master_cfg *cfg = fwspec->iommu_priv;
void __iomem *gr0_base = ARM_SMMU_GR0(smmu); struct arm_smmu_device *smmu = cfg->smmu;
struct arm_smmu_smr *smrs = smmu->smrs;
struct iommu_group *group;
int i, idx, ret;
/* mutex_lock(&smmu->stream_map_mutex);
* FIXME: This won't be needed once we have IOMMU-backed DMA ops /* Figure out a viable stream map entry allocation */
* for all devices behind the SMMU. Note that we need to take for_each_cfg_sme(fwspec, i, idx) {
* care configuring SMRs for devices both a platform_device and u16 sid = fwspec->ids[i];
* and a PCI device (i.e. a PCI host controller) u16 mask = fwspec->ids[i] >> SMR_MASK_SHIFT;
*/
if (smmu_domain->domain.type == IOMMU_DOMAIN_DMA)
return 0;
/* Devices in an IOMMU group may already be configured */ if (idx != INVALID_SMENDX) {
ret = arm_smmu_master_configure_smrs(smmu, cfg); ret = -EEXIST;
if (ret) goto out_err;
return ret == -EEXIST ? 0 : ret; }
for (i = 0; i < cfg->num_streamids; ++i) { ret = arm_smmu_find_sme(smmu, sid, mask);
u32 idx, s2cr; if (ret < 0)
goto out_err;
idx = ret;
if (smrs && smmu->s2crs[idx].count == 0) {
smrs[idx].id = sid;
smrs[idx].mask = mask;
smrs[idx].valid = true;
}
smmu->s2crs[idx].count++;
cfg->smendx[i] = (s16)idx;
}
idx = cfg->smrs ? cfg->smrs[i].idx : cfg->streamids[i]; group = iommu_group_get_for_dev(dev);
s2cr = S2CR_TYPE_TRANS | S2CR_PRIVCFG_UNPRIV | if (!group)
(smmu_domain->cfg.cbndx << S2CR_CBNDX_SHIFT); group = ERR_PTR(-ENOMEM);
writel_relaxed(s2cr, gr0_base + ARM_SMMU_GR0_S2CR(idx)); if (IS_ERR(group)) {
ret = PTR_ERR(group);
goto out_err;
} }
iommu_group_put(group);
/* It worked! Now, poke the actual hardware */
for_each_cfg_sme(fwspec, i, idx) {
arm_smmu_write_sme(smmu, idx);
smmu->s2crs[idx].group = group;
}
mutex_unlock(&smmu->stream_map_mutex);
return 0; return 0;
out_err:
while (i--) {
arm_smmu_free_sme(smmu, cfg->smendx[i]);
cfg->smendx[i] = INVALID_SMENDX;
}
mutex_unlock(&smmu->stream_map_mutex);
return ret;
} }
static void arm_smmu_domain_remove_master(struct arm_smmu_domain *smmu_domain, static void arm_smmu_master_free_smes(struct iommu_fwspec *fwspec)
struct arm_smmu_master_cfg *cfg)
{ {
int i; struct arm_smmu_device *smmu = fwspec_smmu(fwspec);
struct arm_smmu_device *smmu = smmu_domain->smmu; struct arm_smmu_master_cfg *cfg = fwspec->iommu_priv;
void __iomem *gr0_base = ARM_SMMU_GR0(smmu); int i, idx;
/* An IOMMU group is torn down by the first device to be removed */
if ((smmu->features & ARM_SMMU_FEAT_STREAM_MATCH) && !cfg->smrs)
return;
/* mutex_lock(&smmu->stream_map_mutex);
* We *must* clear the S2CR first, because freeing the SMR means for_each_cfg_sme(fwspec, i, idx) {
* that it can be re-allocated immediately. if (arm_smmu_free_sme(smmu, idx))
*/ arm_smmu_write_sme(smmu, idx);
for (i = 0; i < cfg->num_streamids; ++i) { cfg->smendx[i] = INVALID_SMENDX;
u32 idx = cfg->smrs ? cfg->smrs[i].idx : cfg->streamids[i];
u32 reg = disable_bypass ? S2CR_TYPE_FAULT : S2CR_TYPE_BYPASS;
writel_relaxed(reg, gr0_base + ARM_SMMU_GR0_S2CR(idx));
} }
mutex_unlock(&smmu->stream_map_mutex);
arm_smmu_master_free_smrs(smmu, cfg);
} }
static void arm_smmu_detach_dev(struct device *dev, static int arm_smmu_domain_add_master(struct arm_smmu_domain *smmu_domain,
struct arm_smmu_master_cfg *cfg) struct iommu_fwspec *fwspec)
{ {
struct iommu_domain *domain = dev->archdata.iommu; struct arm_smmu_device *smmu = smmu_domain->smmu;
struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain); struct arm_smmu_s2cr *s2cr = smmu->s2crs;
enum arm_smmu_s2cr_type type = S2CR_TYPE_TRANS;
u8 cbndx = smmu_domain->cfg.cbndx;
int i, idx;
for_each_cfg_sme(fwspec, i, idx) {
if (type == s2cr[idx].type && cbndx == s2cr[idx].cbndx)
continue;
dev->archdata.iommu = NULL; s2cr[idx].type = type;
arm_smmu_domain_remove_master(smmu_domain, cfg); s2cr[idx].privcfg = S2CR_PRIVCFG_UNPRIV;
s2cr[idx].cbndx = cbndx;
arm_smmu_write_s2cr(smmu, idx);
}
return 0;
} }
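[Editor's note] arm_smmu_domain_add_master() now just fills in the relevant s2cr entries and lets arm_smmu_write_s2cr() pack type, context-bank index and privilege override into one register word. A sketch of that packing; the field positions and enum encodings below follow the usual SMMUv2 layout mirrored by the driver's macros, but are stated here as assumptions:

        #include <stdint.h>
        #include <stdio.h>

        /* Assumed SMMUv2 S2CR field layout (mirrors the driver's shift/mask macros). */
        #define S2CR_CBNDX_SHIFT        0
        #define S2CR_CBNDX_MASK         0xff
        #define S2CR_TYPE_SHIFT         16
        #define S2CR_TYPE_MASK          0x3
        #define S2CR_PRIVCFG_SHIFT      24
        #define S2CR_PRIVCFG_MASK       0x3

        enum s2cr_type    { S2CR_TYPE_TRANS, S2CR_TYPE_BYPASS, S2CR_TYPE_FAULT };
        enum s2cr_privcfg { S2CR_PRIVCFG_DEFAULT, S2CR_PRIVCFG_RES1,
                            S2CR_PRIVCFG_UNPRIV, S2CR_PRIVCFG_PRIV };

        static uint32_t make_s2cr(enum s2cr_type type, unsigned int cbndx,
                                  enum s2cr_privcfg privcfg)
        {
                return (type & S2CR_TYPE_MASK) << S2CR_TYPE_SHIFT |
                       (cbndx & S2CR_CBNDX_MASK) << S2CR_CBNDX_SHIFT |
                       (privcfg & S2CR_PRIVCFG_MASK) << S2CR_PRIVCFG_SHIFT;
        }

        int main(void)
        {
                /* Translate through context bank 3, forcing unprivileged transactions. */
                printf("S2CR = 0x%08x\n",
                       make_s2cr(S2CR_TYPE_TRANS, 3, S2CR_PRIVCFG_UNPRIV));
                return 0;
        }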
static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev) static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
{ {
int ret; int ret;
struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain); struct iommu_fwspec *fwspec = dev->iommu_fwspec;
struct arm_smmu_device *smmu; struct arm_smmu_device *smmu;
struct arm_smmu_master_cfg *cfg; struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
smmu = find_smmu_for_device(dev); if (!fwspec || fwspec->ops != &arm_smmu_ops) {
if (!smmu) {
dev_err(dev, "cannot attach to SMMU, is it on the same bus?\n"); dev_err(dev, "cannot attach to SMMU, is it on the same bus?\n");
return -ENXIO; return -ENXIO;
} }
smmu = fwspec_smmu(fwspec);
/* Ensure that the domain is finalised */ /* Ensure that the domain is finalised */
ret = arm_smmu_init_domain_context(domain, smmu); ret = arm_smmu_init_domain_context(domain, smmu);
if (ret < 0) if (ret < 0)
...@@ -1228,18 +1246,7 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev) ...@@ -1228,18 +1246,7 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
} }
/* Looks ok, so add the device to the domain */ /* Looks ok, so add the device to the domain */
cfg = find_smmu_master_cfg(dev); return arm_smmu_domain_add_master(smmu_domain, fwspec);
if (!cfg)
return -ENODEV;
/* Detach the dev from its current domain */
if (dev->archdata.iommu)
arm_smmu_detach_dev(dev, cfg);
ret = arm_smmu_domain_add_master(smmu_domain, cfg);
if (!ret)
dev->archdata.iommu = domain;
return ret;
} }
static int arm_smmu_map(struct iommu_domain *domain, unsigned long iova, static int arm_smmu_map(struct iommu_domain *domain, unsigned long iova,
...@@ -1358,110 +1365,113 @@ static bool arm_smmu_capable(enum iommu_cap cap) ...@@ -1358,110 +1365,113 @@ static bool arm_smmu_capable(enum iommu_cap cap)
} }
} }
static int __arm_smmu_get_pci_sid(struct pci_dev *pdev, u16 alias, void *data) static int arm_smmu_match_node(struct device *dev, void *data)
{ {
*((u16 *)data) = alias; return dev->of_node == data;
return 0; /* Continue walking */
} }
static void __arm_smmu_release_pci_iommudata(void *data) static struct arm_smmu_device *arm_smmu_get_by_node(struct device_node *np)
{ {
kfree(data); struct device *dev = driver_find_device(&arm_smmu_driver.driver, NULL,
np, arm_smmu_match_node);
put_device(dev);
return dev ? dev_get_drvdata(dev) : NULL;
} }
static int arm_smmu_init_pci_device(struct pci_dev *pdev, static int arm_smmu_add_device(struct device *dev)
struct iommu_group *group)
{ {
struct arm_smmu_device *smmu;
struct arm_smmu_master_cfg *cfg; struct arm_smmu_master_cfg *cfg;
u16 sid; struct iommu_fwspec *fwspec = dev->iommu_fwspec;
int i; int i, ret;
cfg = iommu_group_get_iommudata(group);
if (!cfg) {
cfg = kzalloc(sizeof(*cfg), GFP_KERNEL);
if (!cfg)
return -ENOMEM;
iommu_group_set_iommudata(group, cfg, if (using_legacy_binding) {
__arm_smmu_release_pci_iommudata); ret = arm_smmu_register_legacy_master(dev, &smmu);
fwspec = dev->iommu_fwspec;
if (ret)
goto out_free;
} else if (fwspec) {
smmu = arm_smmu_get_by_node(to_of_node(fwspec->iommu_fwnode));
} else {
return -ENODEV;
} }
if (cfg->num_streamids >= MAX_MASTER_STREAMIDS) ret = -EINVAL;
return -ENOSPC; for (i = 0; i < fwspec->num_ids; i++) {
u16 sid = fwspec->ids[i];
u16 mask = fwspec->ids[i] >> SMR_MASK_SHIFT;
/* if (sid & ~smmu->streamid_mask) {
* Assume Stream ID == Requester ID for now. dev_err(dev, "stream ID 0x%x out of range for SMMU (0x%x)\n",
* We need a way to describe the ID mappings in FDT. sid, smmu->streamid_mask);
*/ goto out_free;
pci_for_each_dma_alias(pdev, __arm_smmu_get_pci_sid, &sid); }
for (i = 0; i < cfg->num_streamids; ++i) if (mask & ~smmu->smr_mask_mask) {
if (cfg->streamids[i] == sid) dev_err(dev, "SMR mask 0x%x out of range for SMMU (0x%x)\n",
break; sid, smmu->smr_mask_mask);
goto out_free;
/* Avoid duplicate SIDs, as this can lead to SMR conflicts */ }
if (i == cfg->num_streamids) }
cfg->streamids[cfg->num_streamids++] = sid;
return 0;
}
static int arm_smmu_init_platform_device(struct device *dev,
struct iommu_group *group)
{
struct arm_smmu_device *smmu = find_smmu_for_device(dev);
struct arm_smmu_master *master;
if (!smmu) ret = -ENOMEM;
return -ENODEV; cfg = kzalloc(offsetof(struct arm_smmu_master_cfg, smendx[i]),
GFP_KERNEL);
if (!cfg)
goto out_free;
master = find_smmu_master(smmu, dev->of_node); cfg->smmu = smmu;
if (!master) fwspec->iommu_priv = cfg;
return -ENODEV; while (i--)
cfg->smendx[i] = INVALID_SMENDX;
iommu_group_set_iommudata(group, &master->cfg, NULL); ret = arm_smmu_master_alloc_smes(dev);
if (ret)
goto out_free;
return 0; return 0;
}
static int arm_smmu_add_device(struct device *dev) out_free:
{ if (fwspec)
struct iommu_group *group; kfree(fwspec->iommu_priv);
iommu_fwspec_free(dev);
group = iommu_group_get_for_dev(dev); return ret;
if (IS_ERR(group))
return PTR_ERR(group);
iommu_group_put(group);
return 0;
} }
static void arm_smmu_remove_device(struct device *dev) static void arm_smmu_remove_device(struct device *dev)
{ {
struct iommu_fwspec *fwspec = dev->iommu_fwspec;
if (!fwspec || fwspec->ops != &arm_smmu_ops)
return;
arm_smmu_master_free_smes(fwspec);
iommu_group_remove_device(dev); iommu_group_remove_device(dev);
kfree(fwspec->iommu_priv);
iommu_fwspec_free(dev);
} }
static struct iommu_group *arm_smmu_device_group(struct device *dev) static struct iommu_group *arm_smmu_device_group(struct device *dev)
{ {
struct iommu_group *group; struct iommu_fwspec *fwspec = dev->iommu_fwspec;
int ret; struct arm_smmu_device *smmu = fwspec_smmu(fwspec);
struct iommu_group *group = NULL;
int i, idx;
if (dev_is_pci(dev)) for_each_cfg_sme(fwspec, i, idx) {
group = pci_device_group(dev); if (group && smmu->s2crs[idx].group &&
else group != smmu->s2crs[idx].group)
group = generic_device_group(dev); return ERR_PTR(-EINVAL);
group = smmu->s2crs[idx].group;
}
if (IS_ERR(group)) if (group)
return group; return group;
if (dev_is_pci(dev)) if (dev_is_pci(dev))
ret = arm_smmu_init_pci_device(to_pci_dev(dev), group); group = pci_device_group(dev);
else else
ret = arm_smmu_init_platform_device(dev, group); group = generic_device_group(dev);
if (ret) {
iommu_group_put(group);
group = ERR_PTR(ret);
}
return group; return group;
} }
...@@ -1510,6 +1520,19 @@ static int arm_smmu_domain_set_attr(struct iommu_domain *domain, ...@@ -1510,6 +1520,19 @@ static int arm_smmu_domain_set_attr(struct iommu_domain *domain,
return ret; return ret;
} }
static int arm_smmu_of_xlate(struct device *dev, struct of_phandle_args *args)
{
u32 fwid = 0;
if (args->args_count > 0)
fwid |= (u16)args->args[0];
if (args->args_count > 1)
fwid |= (u16)args->args[1] << SMR_MASK_SHIFT;
return iommu_fwspec_add_ids(dev, &fwid, 1);
}
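[Editor's note] arm_smmu_of_xlate() flattens a one- or two-cell "iommus" specifier into a single 32-bit firmware ID: the stream ID sits in the low half and the optional SMR mask in the high half, assuming the driver's SMR_MASK_SHIFT of 16. arm_smmu_add_device() and arm_smmu_master_alloc_smes() later split the value back apart. A sketch of the round trip (the example specifier values are hypothetical):

        #include <stdint.h>
        #include <stdio.h>

        #define SMR_MASK_SHIFT  16      /* assumed: SMR mask occupies the upper half-word */

        static uint32_t pack_fwid(uint16_t sid, uint16_t mask)
        {
                return (uint32_t)sid | ((uint32_t)mask << SMR_MASK_SHIFT);
        }

        int main(void)
        {
                /* Hypothetical two-cell specifier: stream ID 0x210, SMR mask 0x7. */
                uint32_t fwid = pack_fwid(0x210, 0x7);
                uint16_t sid  = (uint16_t)fwid;                     /* low 16 bits  */
                uint16_t mask = (uint16_t)(fwid >> SMR_MASK_SHIFT); /* high 16 bits */

                printf("fwid=0x%08x sid=0x%x mask=0x%x\n", fwid, sid, mask);
                return 0;
        }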
static struct iommu_ops arm_smmu_ops = { static struct iommu_ops arm_smmu_ops = {
.capable = arm_smmu_capable, .capable = arm_smmu_capable,
.domain_alloc = arm_smmu_domain_alloc, .domain_alloc = arm_smmu_domain_alloc,
...@@ -1524,6 +1547,7 @@ static struct iommu_ops arm_smmu_ops = { ...@@ -1524,6 +1547,7 @@ static struct iommu_ops arm_smmu_ops = {
.device_group = arm_smmu_device_group, .device_group = arm_smmu_device_group,
.domain_get_attr = arm_smmu_domain_get_attr, .domain_get_attr = arm_smmu_domain_get_attr,
.domain_set_attr = arm_smmu_domain_set_attr, .domain_set_attr = arm_smmu_domain_set_attr,
.of_xlate = arm_smmu_of_xlate,
.pgsize_bitmap = -1UL, /* Restricted during device attach */ .pgsize_bitmap = -1UL, /* Restricted during device attach */
}; };
...@@ -1531,19 +1555,19 @@ static void arm_smmu_device_reset(struct arm_smmu_device *smmu) ...@@ -1531,19 +1555,19 @@ static void arm_smmu_device_reset(struct arm_smmu_device *smmu)
{ {
void __iomem *gr0_base = ARM_SMMU_GR0(smmu); void __iomem *gr0_base = ARM_SMMU_GR0(smmu);
void __iomem *cb_base; void __iomem *cb_base;
int i = 0; int i;
u32 reg, major; u32 reg, major;
/* clear global FSR */ /* clear global FSR */
reg = readl_relaxed(ARM_SMMU_GR0_NS(smmu) + ARM_SMMU_GR0_sGFSR); reg = readl_relaxed(ARM_SMMU_GR0_NS(smmu) + ARM_SMMU_GR0_sGFSR);
writel(reg, ARM_SMMU_GR0_NS(smmu) + ARM_SMMU_GR0_sGFSR); writel(reg, ARM_SMMU_GR0_NS(smmu) + ARM_SMMU_GR0_sGFSR);
/* Mark all SMRn as invalid and all S2CRn as bypass unless overridden */ /*
reg = disable_bypass ? S2CR_TYPE_FAULT : S2CR_TYPE_BYPASS; * Reset stream mapping groups: Initial values mark all SMRn as
for (i = 0; i < smmu->num_mapping_groups; ++i) { * invalid and all S2CRn as bypass unless overridden.
writel_relaxed(0, gr0_base + ARM_SMMU_GR0_SMR(i)); */
writel_relaxed(reg, gr0_base + ARM_SMMU_GR0_S2CR(i)); for (i = 0; i < smmu->num_mapping_groups; ++i)
} arm_smmu_write_sme(smmu, i);
/* /*
* Before clearing ARM_MMU500_ACTLR_CPRE, need to * Before clearing ARM_MMU500_ACTLR_CPRE, need to
...@@ -1632,6 +1656,7 @@ static int arm_smmu_device_cfg_probe(struct arm_smmu_device *smmu) ...@@ -1632,6 +1656,7 @@ static int arm_smmu_device_cfg_probe(struct arm_smmu_device *smmu)
void __iomem *gr0_base = ARM_SMMU_GR0(smmu); void __iomem *gr0_base = ARM_SMMU_GR0(smmu);
u32 id; u32 id;
bool cttw_dt, cttw_reg; bool cttw_dt, cttw_reg;
int i;
dev_notice(smmu->dev, "probing hardware configuration...\n"); dev_notice(smmu->dev, "probing hardware configuration...\n");
dev_notice(smmu->dev, "SMMUv%d with:\n", dev_notice(smmu->dev, "SMMUv%d with:\n",
...@@ -1690,39 +1715,55 @@ static int arm_smmu_device_cfg_probe(struct arm_smmu_device *smmu) ...@@ -1690,39 +1715,55 @@ static int arm_smmu_device_cfg_probe(struct arm_smmu_device *smmu)
dev_notice(smmu->dev, dev_notice(smmu->dev,
"\t(IDR0.CTTW overridden by dma-coherent property)\n"); "\t(IDR0.CTTW overridden by dma-coherent property)\n");
/* Max. number of entries we have for stream matching/indexing */
size = 1 << ((id >> ID0_NUMSIDB_SHIFT) & ID0_NUMSIDB_MASK);
smmu->streamid_mask = size - 1;
if (id & ID0_SMS) { if (id & ID0_SMS) {
u32 smr, sid, mask; u32 smr;
smmu->features |= ARM_SMMU_FEAT_STREAM_MATCH; smmu->features |= ARM_SMMU_FEAT_STREAM_MATCH;
smmu->num_mapping_groups = (id >> ID0_NUMSMRG_SHIFT) & size = (id >> ID0_NUMSMRG_SHIFT) & ID0_NUMSMRG_MASK;
ID0_NUMSMRG_MASK; if (size == 0) {
if (smmu->num_mapping_groups == 0) {
dev_err(smmu->dev, dev_err(smmu->dev,
"stream-matching supported, but no SMRs present!\n"); "stream-matching supported, but no SMRs present!\n");
return -ENODEV; return -ENODEV;
} }
smr = SMR_MASK_MASK << SMR_MASK_SHIFT; /*
smr |= (SMR_ID_MASK << SMR_ID_SHIFT); * SMR.ID bits may not be preserved if the corresponding MASK
* bits are set, so check each one separately. We can reject
* masters later if they try to claim IDs outside these masks.
*/
smr = smmu->streamid_mask << SMR_ID_SHIFT;
writel_relaxed(smr, gr0_base + ARM_SMMU_GR0_SMR(0)); writel_relaxed(smr, gr0_base + ARM_SMMU_GR0_SMR(0));
smr = readl_relaxed(gr0_base + ARM_SMMU_GR0_SMR(0)); smr = readl_relaxed(gr0_base + ARM_SMMU_GR0_SMR(0));
smmu->streamid_mask = smr >> SMR_ID_SHIFT;
mask = (smr >> SMR_MASK_SHIFT) & SMR_MASK_MASK; smr = smmu->streamid_mask << SMR_MASK_SHIFT;
sid = (smr >> SMR_ID_SHIFT) & SMR_ID_MASK; writel_relaxed(smr, gr0_base + ARM_SMMU_GR0_SMR(0));
if ((mask & sid) != sid) { smr = readl_relaxed(gr0_base + ARM_SMMU_GR0_SMR(0));
dev_err(smmu->dev, smmu->smr_mask_mask = smr >> SMR_MASK_SHIFT;
"SMR mask bits (0x%x) insufficient for ID field (0x%x)\n",
mask, sid); /* Zero-initialised to mark as invalid */
return -ENODEV; smmu->smrs = devm_kcalloc(smmu->dev, size, sizeof(*smmu->smrs),
} GFP_KERNEL);
if (!smmu->smrs)
return -ENOMEM;
dev_notice(smmu->dev, dev_notice(smmu->dev,
"\tstream matching with %u register groups, mask 0x%x", "\tstream matching with %lu register groups, mask 0x%x",
smmu->num_mapping_groups, mask); size, smmu->smr_mask_mask);
} else {
smmu->num_mapping_groups = (id >> ID0_NUMSIDB_SHIFT) &
ID0_NUMSIDB_MASK;
} }
/* s2cr->type == 0 means translation, so initialise explicitly */
smmu->s2crs = devm_kmalloc_array(smmu->dev, size, sizeof(*smmu->s2crs),
GFP_KERNEL);
if (!smmu->s2crs)
return -ENOMEM;
for (i = 0; i < size; i++)
smmu->s2crs[i] = s2cr_init_val;
smmu->num_mapping_groups = size;
mutex_init(&smmu->stream_map_mutex);
if (smmu->version < ARM_SMMU_V2 || !(id & ID0_PTFS_NO_AARCH32)) { if (smmu->version < ARM_SMMU_V2 || !(id & ID0_PTFS_NO_AARCH32)) {
smmu->features |= ARM_SMMU_FEAT_FMT_AARCH32_L; smmu->features |= ARM_SMMU_FEAT_FMT_AARCH32_L;
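[Editor's note] The probe code above first derives a candidate streamid_mask from IDR0.NUMSIDB and then refines it by writing the candidate into SMR(0) and reading back which ID and MASK bits the hardware actually kept. A sketch of the first step; the field position below is illustrative rather than the driver's ID0_* constants:

        #include <stdint.h>
        #include <stdio.h>

        /* Illustrative IDR0.NUMSIDB field position; not the driver's ID0_* constants. */
        #define NUMSIDB_SHIFT   9
        #define NUMSIDB_MASK    0xf

        /* Candidate stream-ID mask: size - 1, as computed in the probe. */
        static uint16_t streamid_mask_from_idr0(uint32_t idr0)
        {
                uint32_t numsidb = (idr0 >> NUMSIDB_SHIFT) & NUMSIDB_MASK;

                return (uint16_t)((1u << numsidb) - 1);
        }

        int main(void)
        {
                uint32_t idr0 = 10u << NUMSIDB_SHIFT;   /* pretend NUMSIDB == 10 */

                printf("candidate streamid_mask = 0x%x\n", streamid_mask_from_idr0(idr0));
                return 0;
        }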
...@@ -1855,15 +1896,24 @@ MODULE_DEVICE_TABLE(of, arm_smmu_of_match); ...@@ -1855,15 +1896,24 @@ MODULE_DEVICE_TABLE(of, arm_smmu_of_match);
static int arm_smmu_device_dt_probe(struct platform_device *pdev) static int arm_smmu_device_dt_probe(struct platform_device *pdev)
{ {
const struct of_device_id *of_id;
const struct arm_smmu_match_data *data; const struct arm_smmu_match_data *data;
struct resource *res; struct resource *res;
struct arm_smmu_device *smmu; struct arm_smmu_device *smmu;
struct device *dev = &pdev->dev; struct device *dev = &pdev->dev;
struct rb_node *node;
struct of_phandle_iterator it;
struct arm_smmu_phandle_args *masterspec;
int num_irqs, i, err; int num_irqs, i, err;
bool legacy_binding;
legacy_binding = of_find_property(dev->of_node, "mmu-masters", NULL);
if (legacy_binding && !using_generic_binding) {
if (!using_legacy_binding)
pr_notice("deprecated \"mmu-masters\" DT property in use; DMA API support unavailable\n");
using_legacy_binding = true;
} else if (!legacy_binding && !using_legacy_binding) {
using_generic_binding = true;
} else {
dev_err(dev, "not probing due to mismatched DT properties\n");
return -ENODEV;
}
smmu = devm_kzalloc(dev, sizeof(*smmu), GFP_KERNEL); smmu = devm_kzalloc(dev, sizeof(*smmu), GFP_KERNEL);
if (!smmu) { if (!smmu) {
...@@ -1872,8 +1922,7 @@ static int arm_smmu_device_dt_probe(struct platform_device *pdev) ...@@ -1872,8 +1922,7 @@ static int arm_smmu_device_dt_probe(struct platform_device *pdev)
} }
smmu->dev = dev; smmu->dev = dev;
of_id = of_match_node(arm_smmu_of_match, dev->of_node); data = of_device_get_match_data(dev);
data = of_id->data;
smmu->version = data->version; smmu->version = data->version;
smmu->model = data->model; smmu->model = data->model;
...@@ -1923,37 +1972,6 @@ static int arm_smmu_device_dt_probe(struct platform_device *pdev) ...@@ -1923,37 +1972,6 @@ static int arm_smmu_device_dt_probe(struct platform_device *pdev)
if (err) if (err)
return err; return err;
i = 0;
smmu->masters = RB_ROOT;
err = -ENOMEM;
/* No need to zero the memory for masterspec */
masterspec = kmalloc(sizeof(*masterspec), GFP_KERNEL);
if (!masterspec)
goto out_put_masters;
of_for_each_phandle(&it, err, dev->of_node,
"mmu-masters", "#stream-id-cells", 0) {
int count = of_phandle_iterator_args(&it, masterspec->args,
MAX_MASTER_STREAMIDS);
masterspec->np = of_node_get(it.node);
masterspec->args_count = count;
err = register_smmu_master(smmu, dev, masterspec);
if (err) {
dev_err(dev, "failed to add master %s\n",
masterspec->np->name);
kfree(masterspec);
goto out_put_masters;
}
i++;
}
dev_notice(dev, "registered %d master devices\n", i);
kfree(masterspec);
parse_driver_options(smmu); parse_driver_options(smmu);
if (smmu->version == ARM_SMMU_V2 && if (smmu->version == ARM_SMMU_V2 &&
...@@ -1961,8 +1979,7 @@ static int arm_smmu_device_dt_probe(struct platform_device *pdev) ...@@ -1961,8 +1979,7 @@ static int arm_smmu_device_dt_probe(struct platform_device *pdev)
dev_err(dev, dev_err(dev,
"found only %d context interrupt(s) but %d required\n", "found only %d context interrupt(s) but %d required\n",
smmu->num_context_irqs, smmu->num_context_banks); smmu->num_context_irqs, smmu->num_context_banks);
err = -ENODEV; return -ENODEV;
goto out_put_masters;
} }
for (i = 0; i < smmu->num_global_irqs; ++i) { for (i = 0; i < smmu->num_global_irqs; ++i) {
...@@ -1974,59 +1991,39 @@ static int arm_smmu_device_dt_probe(struct platform_device *pdev) ...@@ -1974,59 +1991,39 @@ static int arm_smmu_device_dt_probe(struct platform_device *pdev)
if (err) { if (err) {
dev_err(dev, "failed to request global IRQ %d (%u)\n", dev_err(dev, "failed to request global IRQ %d (%u)\n",
i, smmu->irqs[i]); i, smmu->irqs[i]);
goto out_put_masters; return err;
} }
} }
INIT_LIST_HEAD(&smmu->list); of_iommu_set_ops(dev->of_node, &arm_smmu_ops);
spin_lock(&arm_smmu_devices_lock); platform_set_drvdata(pdev, smmu);
list_add(&smmu->list, &arm_smmu_devices);
spin_unlock(&arm_smmu_devices_lock);
arm_smmu_device_reset(smmu); arm_smmu_device_reset(smmu);
return 0;
out_put_masters: /* Oh, for a proper bus abstraction */
for (node = rb_first(&smmu->masters); node; node = rb_next(node)) { if (!iommu_present(&platform_bus_type))
struct arm_smmu_master *master bus_set_iommu(&platform_bus_type, &arm_smmu_ops);
= container_of(node, struct arm_smmu_master, node); #ifdef CONFIG_ARM_AMBA
of_node_put(master->of_node); if (!iommu_present(&amba_bustype))
bus_set_iommu(&amba_bustype, &arm_smmu_ops);
#endif
#ifdef CONFIG_PCI
if (!iommu_present(&pci_bus_type)) {
pci_request_acs();
bus_set_iommu(&pci_bus_type, &arm_smmu_ops);
} }
#endif
return err; return 0;
} }
static int arm_smmu_device_remove(struct platform_device *pdev) static int arm_smmu_device_remove(struct platform_device *pdev)
{ {
int i; struct arm_smmu_device *smmu = platform_get_drvdata(pdev);
struct device *dev = &pdev->dev;
struct arm_smmu_device *curr, *smmu = NULL;
struct rb_node *node;
spin_lock(&arm_smmu_devices_lock);
list_for_each_entry(curr, &arm_smmu_devices, list) {
if (curr->dev == dev) {
smmu = curr;
list_del(&smmu->list);
break;
}
}
spin_unlock(&arm_smmu_devices_lock);
if (!smmu) if (!smmu)
return -ENODEV; return -ENODEV;
for (node = rb_first(&smmu->masters); node; node = rb_next(node)) {
struct arm_smmu_master *master
= container_of(node, struct arm_smmu_master, node);
of_node_put(master->of_node);
}
if (!bitmap_empty(smmu->context_map, ARM_SMMU_MAX_CBS)) if (!bitmap_empty(smmu->context_map, ARM_SMMU_MAX_CBS))
dev_err(dev, "removing device with active domains!\n"); dev_err(&pdev->dev, "removing device with active domains!\n");
for (i = 0; i < smmu->num_global_irqs; ++i)
devm_free_irq(smmu->dev, smmu->irqs[i], smmu);
/* Turn the thing off */ /* Turn the thing off */
writel(sCR0_CLIENTPD, ARM_SMMU_GR0_NS(smmu) + ARM_SMMU_GR0_sCR0); writel(sCR0_CLIENTPD, ARM_SMMU_GR0_NS(smmu) + ARM_SMMU_GR0_sCR0);
...@@ -2044,41 +2041,14 @@ static struct platform_driver arm_smmu_driver = { ...@@ -2044,41 +2041,14 @@ static struct platform_driver arm_smmu_driver = {
static int __init arm_smmu_init(void) static int __init arm_smmu_init(void)
{ {
struct device_node *np; static bool registered;
int ret; int ret = 0;
/*
* Play nice with systems that don't have an ARM SMMU by checking that
* an ARM SMMU exists in the system before proceeding with the driver
* and IOMMU bus operation registration.
*/
np = of_find_matching_node(NULL, arm_smmu_of_match);
if (!np)
return 0;
of_node_put(np);
ret = platform_driver_register(&arm_smmu_driver);
if (ret)
return ret;
/* Oh, for a proper bus abstraction */
if (!iommu_present(&platform_bus_type))
bus_set_iommu(&platform_bus_type, &arm_smmu_ops);
#ifdef CONFIG_ARM_AMBA
if (!iommu_present(&amba_bustype))
bus_set_iommu(&amba_bustype, &arm_smmu_ops);
#endif
#ifdef CONFIG_PCI if (!registered) {
if (!iommu_present(&pci_bus_type)) { ret = platform_driver_register(&arm_smmu_driver);
pci_request_acs(); registered = !ret;
bus_set_iommu(&pci_bus_type, &arm_smmu_ops);
} }
#endif return ret;
return 0;
} }
static void __exit arm_smmu_exit(void) static void __exit arm_smmu_exit(void)
...@@ -2089,6 +2059,25 @@ static void __exit arm_smmu_exit(void) ...@@ -2089,6 +2059,25 @@ static void __exit arm_smmu_exit(void)
subsys_initcall(arm_smmu_init); subsys_initcall(arm_smmu_init);
module_exit(arm_smmu_exit); module_exit(arm_smmu_exit);
static int __init arm_smmu_of_init(struct device_node *np)
{
int ret = arm_smmu_init();
if (ret)
return ret;
if (!of_platform_device_create(np, NULL, platform_bus_type.dev_root))
return -ENODEV;
return 0;
}
IOMMU_OF_DECLARE(arm_smmuv1, "arm,smmu-v1", arm_smmu_of_init);
IOMMU_OF_DECLARE(arm_smmuv2, "arm,smmu-v2", arm_smmu_of_init);
IOMMU_OF_DECLARE(arm_mmu400, "arm,mmu-400", arm_smmu_of_init);
IOMMU_OF_DECLARE(arm_mmu401, "arm,mmu-401", arm_smmu_of_init);
IOMMU_OF_DECLARE(arm_mmu500, "arm,mmu-500", arm_smmu_of_init);
IOMMU_OF_DECLARE(cavium_smmuv2, "cavium,smmu-v2", arm_smmu_of_init);
MODULE_DESCRIPTION("IOMMU API for ARM architected SMMU implementations"); MODULE_DESCRIPTION("IOMMU API for ARM architected SMMU implementations");
MODULE_AUTHOR("Will Deacon <will.deacon@arm.com>"); MODULE_AUTHOR("Will Deacon <will.deacon@arm.com>");
MODULE_LICENSE("GPL v2"); MODULE_LICENSE("GPL v2");
...@@ -25,10 +25,29 @@ ...@@ -25,10 +25,29 @@
#include <linux/huge_mm.h> #include <linux/huge_mm.h>
#include <linux/iommu.h> #include <linux/iommu.h>
#include <linux/iova.h> #include <linux/iova.h>
#include <linux/irq.h>
#include <linux/mm.h> #include <linux/mm.h>
#include <linux/pci.h>
#include <linux/scatterlist.h> #include <linux/scatterlist.h>
#include <linux/vmalloc.h> #include <linux/vmalloc.h>
struct iommu_dma_msi_page {
struct list_head list;
dma_addr_t iova;
phys_addr_t phys;
};
struct iommu_dma_cookie {
struct iova_domain iovad;
struct list_head msi_page_list;
spinlock_t msi_lock;
};
static inline struct iova_domain *cookie_iovad(struct iommu_domain *domain)
{
return &((struct iommu_dma_cookie *)domain->iova_cookie)->iovad;
}
int iommu_dma_init(void) int iommu_dma_init(void)
{ {
return iova_cache_get(); return iova_cache_get();
...@@ -43,15 +62,19 @@ int iommu_dma_init(void) ...@@ -43,15 +62,19 @@ int iommu_dma_init(void)
*/ */
int iommu_get_dma_cookie(struct iommu_domain *domain) int iommu_get_dma_cookie(struct iommu_domain *domain)
{ {
struct iova_domain *iovad; struct iommu_dma_cookie *cookie;
if (domain->iova_cookie) if (domain->iova_cookie)
return -EEXIST; return -EEXIST;
iovad = kzalloc(sizeof(*iovad), GFP_KERNEL); cookie = kzalloc(sizeof(*cookie), GFP_KERNEL);
domain->iova_cookie = iovad; if (!cookie)
return -ENOMEM;
return iovad ? 0 : -ENOMEM; spin_lock_init(&cookie->msi_lock);
INIT_LIST_HEAD(&cookie->msi_page_list);
domain->iova_cookie = cookie;
return 0;
} }
EXPORT_SYMBOL(iommu_get_dma_cookie); EXPORT_SYMBOL(iommu_get_dma_cookie);
...@@ -63,32 +86,58 @@ EXPORT_SYMBOL(iommu_get_dma_cookie); ...@@ -63,32 +86,58 @@ EXPORT_SYMBOL(iommu_get_dma_cookie);
*/ */
void iommu_put_dma_cookie(struct iommu_domain *domain) void iommu_put_dma_cookie(struct iommu_domain *domain)
{ {
struct iova_domain *iovad = domain->iova_cookie; struct iommu_dma_cookie *cookie = domain->iova_cookie;
struct iommu_dma_msi_page *msi, *tmp;
if (!iovad) if (!cookie)
return; return;
if (iovad->granule) if (cookie->iovad.granule)
put_iova_domain(iovad); put_iova_domain(&cookie->iovad);
kfree(iovad);
list_for_each_entry_safe(msi, tmp, &cookie->msi_page_list, list) {
list_del(&msi->list);
kfree(msi);
}
kfree(cookie);
domain->iova_cookie = NULL; domain->iova_cookie = NULL;
} }
EXPORT_SYMBOL(iommu_put_dma_cookie); EXPORT_SYMBOL(iommu_put_dma_cookie);
static void iova_reserve_pci_windows(struct pci_dev *dev,
struct iova_domain *iovad)
{
struct pci_host_bridge *bridge = pci_find_host_bridge(dev->bus);
struct resource_entry *window;
unsigned long lo, hi;
resource_list_for_each_entry(window, &bridge->windows) {
if (resource_type(window->res) != IORESOURCE_MEM &&
resource_type(window->res) != IORESOURCE_IO)
continue;
lo = iova_pfn(iovad, window->res->start - window->offset);
hi = iova_pfn(iovad, window->res->end - window->offset);
reserve_iova(iovad, lo, hi);
}
}
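[Editor's note] iova_reserve_pci_windows() removes the host bridge's outbound windows from the IOVA space so DMA addresses can never collide with addresses the bridge would claim; each window's bus address is its resource range minus the window offset, converted to IOVA page frames. A standalone sketch of that conversion, assuming a 4 KiB IOVA granule and a hypothetical window:

        #include <stdint.h>
        #include <stdio.h>

        #define IOVA_GRANULE    4096ULL         /* assumed IOMMU page size */

        static uint64_t iova_pfn(uint64_t addr)
        {
                return addr / IOVA_GRANULE;
        }

        int main(void)
        {
                /* Hypothetical window: CPU 0x40000000..0x5fffffff, bus address 0x0. */
                uint64_t res_start = 0x40000000ULL, res_end = 0x5fffffffULL;
                uint64_t offset    = 0x40000000ULL;     /* CPU address - bus address */

                uint64_t lo = iova_pfn(res_start - offset);
                uint64_t hi = iova_pfn(res_end - offset);

                /* reserve_iova(iovad, lo, hi) then excludes PFNs lo..hi from allocation. */
                printf("reserve PFNs 0x%llx..0x%llx\n",
                       (unsigned long long)lo, (unsigned long long)hi);
                return 0;
        }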
/** /**
* iommu_dma_init_domain - Initialise a DMA mapping domain * iommu_dma_init_domain - Initialise a DMA mapping domain
* @domain: IOMMU domain previously prepared by iommu_get_dma_cookie() * @domain: IOMMU domain previously prepared by iommu_get_dma_cookie()
* @base: IOVA at which the mappable address space starts * @base: IOVA at which the mappable address space starts
* @size: Size of IOVA space * @size: Size of IOVA space
* @dev: Device the domain is being initialised for
* *
* @base and @size should be exact multiples of IOMMU page granularity to * @base and @size should be exact multiples of IOMMU page granularity to
* avoid rounding surprises. If necessary, we reserve the page at address 0 * avoid rounding surprises. If necessary, we reserve the page at address 0
* to ensure it is an invalid IOVA. It is safe to reinitialise a domain, but * to ensure it is an invalid IOVA. It is safe to reinitialise a domain, but
* any change which could make prior IOVAs invalid will fail. * any change which could make prior IOVAs invalid will fail.
*/ */
int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base, u64 size) int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
u64 size, struct device *dev)
{ {
struct iova_domain *iovad = domain->iova_cookie; struct iova_domain *iovad = cookie_iovad(domain);
unsigned long order, base_pfn, end_pfn; unsigned long order, base_pfn, end_pfn;
if (!iovad) if (!iovad)
...@@ -124,6 +173,8 @@ int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base, u64 size ...@@ -124,6 +173,8 @@ int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base, u64 size
iovad->dma_32bit_pfn = end_pfn; iovad->dma_32bit_pfn = end_pfn;
} else { } else {
init_iova_domain(iovad, 1UL << order, base_pfn, end_pfn); init_iova_domain(iovad, 1UL << order, base_pfn, end_pfn);
if (dev && dev_is_pci(dev))
iova_reserve_pci_windows(to_pci_dev(dev), iovad);
} }
return 0; return 0;
} }
...@@ -155,7 +206,7 @@ int dma_direction_to_prot(enum dma_data_direction dir, bool coherent) ...@@ -155,7 +206,7 @@ int dma_direction_to_prot(enum dma_data_direction dir, bool coherent)
static struct iova *__alloc_iova(struct iommu_domain *domain, size_t size, static struct iova *__alloc_iova(struct iommu_domain *domain, size_t size,
dma_addr_t dma_limit) dma_addr_t dma_limit)
{ {
struct iova_domain *iovad = domain->iova_cookie; struct iova_domain *iovad = cookie_iovad(domain);
unsigned long shift = iova_shift(iovad); unsigned long shift = iova_shift(iovad);
unsigned long length = iova_align(iovad, size) >> shift; unsigned long length = iova_align(iovad, size) >> shift;
...@@ -171,7 +222,7 @@ static struct iova *__alloc_iova(struct iommu_domain *domain, size_t size, ...@@ -171,7 +222,7 @@ static struct iova *__alloc_iova(struct iommu_domain *domain, size_t size,
/* The IOVA allocator knows what we mapped, so just unmap whatever that was */ /* The IOVA allocator knows what we mapped, so just unmap whatever that was */
static void __iommu_dma_unmap(struct iommu_domain *domain, dma_addr_t dma_addr) static void __iommu_dma_unmap(struct iommu_domain *domain, dma_addr_t dma_addr)
{ {
struct iova_domain *iovad = domain->iova_cookie; struct iova_domain *iovad = cookie_iovad(domain);
unsigned long shift = iova_shift(iovad); unsigned long shift = iova_shift(iovad);
unsigned long pfn = dma_addr >> shift; unsigned long pfn = dma_addr >> shift;
struct iova *iova = find_iova(iovad, pfn); struct iova *iova = find_iova(iovad, pfn);
...@@ -294,7 +345,7 @@ struct page **iommu_dma_alloc(struct device *dev, size_t size, gfp_t gfp, ...@@ -294,7 +345,7 @@ struct page **iommu_dma_alloc(struct device *dev, size_t size, gfp_t gfp,
void (*flush_page)(struct device *, const void *, phys_addr_t)) void (*flush_page)(struct device *, const void *, phys_addr_t))
{ {
struct iommu_domain *domain = iommu_get_domain_for_dev(dev); struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
struct iova_domain *iovad = domain->iova_cookie; struct iova_domain *iovad = cookie_iovad(domain);
struct iova *iova; struct iova *iova;
struct page **pages; struct page **pages;
struct sg_table sgt; struct sg_table sgt;
...@@ -386,7 +437,7 @@ dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page, ...@@ -386,7 +437,7 @@ dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
{ {
dma_addr_t dma_addr; dma_addr_t dma_addr;
struct iommu_domain *domain = iommu_get_domain_for_dev(dev); struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
struct iova_domain *iovad = domain->iova_cookie; struct iova_domain *iovad = cookie_iovad(domain);
phys_addr_t phys = page_to_phys(page) + offset; phys_addr_t phys = page_to_phys(page) + offset;
size_t iova_off = iova_offset(iovad, phys); size_t iova_off = iova_offset(iovad, phys);
size_t len = iova_align(iovad, size + iova_off); size_t len = iova_align(iovad, size + iova_off);
...@@ -495,7 +546,7 @@ int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg, ...@@ -495,7 +546,7 @@ int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
int nents, int prot) int nents, int prot)
{ {
struct iommu_domain *domain = iommu_get_domain_for_dev(dev); struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
struct iova_domain *iovad = domain->iova_cookie; struct iova_domain *iovad = cookie_iovad(domain);
struct iova *iova; struct iova *iova;
struct scatterlist *s, *prev = NULL; struct scatterlist *s, *prev = NULL;
dma_addr_t dma_addr; dma_addr_t dma_addr;
...@@ -587,3 +638,81 @@ int iommu_dma_mapping_error(struct device *dev, dma_addr_t dma_addr) ...@@ -587,3 +638,81 @@ int iommu_dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
{ {
return dma_addr == DMA_ERROR_CODE; return dma_addr == DMA_ERROR_CODE;
} }
static struct iommu_dma_msi_page *iommu_dma_get_msi_page(struct device *dev,
phys_addr_t msi_addr, struct iommu_domain *domain)
{
struct iommu_dma_cookie *cookie = domain->iova_cookie;
struct iommu_dma_msi_page *msi_page;
struct iova_domain *iovad = &cookie->iovad;
struct iova *iova;
int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO;
msi_addr &= ~(phys_addr_t)iova_mask(iovad);
list_for_each_entry(msi_page, &cookie->msi_page_list, list)
if (msi_page->phys == msi_addr)
return msi_page;
msi_page = kzalloc(sizeof(*msi_page), GFP_ATOMIC);
if (!msi_page)
return NULL;
iova = __alloc_iova(domain, iovad->granule, dma_get_mask(dev));
if (!iova)
goto out_free_page;
msi_page->phys = msi_addr;
msi_page->iova = iova_dma_addr(iovad, iova);
if (iommu_map(domain, msi_page->iova, msi_addr, iovad->granule, prot))
goto out_free_iova;
INIT_LIST_HEAD(&msi_page->list);
list_add(&msi_page->list, &cookie->msi_page_list);
return msi_page;
out_free_iova:
__free_iova(iovad, iova);
out_free_page:
kfree(msi_page);
return NULL;
}
void iommu_dma_map_msi_msg(int irq, struct msi_msg *msg)
{
struct device *dev = msi_desc_to_dev(irq_get_msi_desc(irq));
struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
struct iommu_dma_cookie *cookie;
struct iommu_dma_msi_page *msi_page;
phys_addr_t msi_addr = (u64)msg->address_hi << 32 | msg->address_lo;
unsigned long flags;
if (!domain || !domain->iova_cookie)
return;
cookie = domain->iova_cookie;
/*
* We disable IRQs to rule out a possible inversion against
* irq_desc_lock if, say, someone tries to retarget the affinity
* of an MSI from within an IPI handler.
*/
spin_lock_irqsave(&cookie->msi_lock, flags);
msi_page = iommu_dma_get_msi_page(dev, msi_addr, domain);
spin_unlock_irqrestore(&cookie->msi_lock, flags);
if (WARN_ON(!msi_page)) {
/*
* We're called from a void callback, so the best we can do is
* 'fail' by filling the message with obviously bogus values.
* Since we got this far due to an IOMMU being present, it's
* not like the existing address would have worked anyway...
*/
msg->address_hi = ~0U;
msg->address_lo = ~0U;
msg->data = ~0U;
} else {
msg->address_hi = upper_32_bits(msi_page->iova);
msg->address_lo &= iova_mask(&cookie->iovad);
msg->address_lo += lower_32_bits(msi_page->iova);
}
}
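[Editor's note] When the MSI doorbell is itself translated by the IOMMU, the message must carry the doorbell's IOVA rather than its physical address; the code above keeps the offset within the IOMMU page (iova_mask()) and substitutes the mapped, page-aligned IOVA. A standalone sketch of that rewrite, assuming a 4 KiB granule:

        #include <stdint.h>
        #include <stdio.h>

        #define GRANULE_MASK    0xfffULL        /* assumed 4 KiB IOMMU page */

        struct msi_msg { uint32_t address_hi, address_lo, data; };

        /* Rewrite the doorbell address to the (page-aligned) IOVA it was mapped at. */
        static void rewrite_msi(struct msi_msg *msg, uint64_t msi_iova)
        {
                msg->address_hi  = (uint32_t)(msi_iova >> 32);
                msg->address_lo &= GRANULE_MASK;        /* keep offset inside the page */
                msg->address_lo += (uint32_t)msi_iova;  /* add the IOVA page address   */
        }

        int main(void)
        {
                /* Doorbell at physical 0x08040040, remapped to IOVA 0xfffff000. */
                struct msi_msg msg = { .address_hi = 0, .address_lo = 0x08040040, .data = 42 };

                rewrite_msi(&msg, 0xfffff000ULL);
                printf("address_hi=0x%x address_lo=0x%x\n", msg.address_hi, msg.address_lo);
                return 0;
        }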
...@@ -1345,8 +1345,8 @@ static int __init exynos_iommu_of_setup(struct device_node *np) ...@@ -1345,8 +1345,8 @@ static int __init exynos_iommu_of_setup(struct device_node *np)
exynos_iommu_init(); exynos_iommu_init();
pdev = of_platform_device_create(np, NULL, platform_bus_type.dev_root); pdev = of_platform_device_create(np, NULL, platform_bus_type.dev_root);
if (IS_ERR(pdev)) if (!pdev)
return PTR_ERR(pdev); return -ENODEV;
/* /*
* use the first registered sysmmu device for performing * use the first registered sysmmu device for performing
......
...@@ -2452,20 +2452,15 @@ static int get_last_alias(struct pci_dev *pdev, u16 alias, void *opaque) ...@@ -2452,20 +2452,15 @@ static int get_last_alias(struct pci_dev *pdev, u16 alias, void *opaque)
return 0; return 0;
} }
/* domain is initialized */ static struct dmar_domain *find_or_alloc_domain(struct device *dev, int gaw)
static struct dmar_domain *get_domain_for_dev(struct device *dev, int gaw)
{ {
struct device_domain_info *info = NULL; struct device_domain_info *info = NULL;
struct dmar_domain *domain, *tmp; struct dmar_domain *domain = NULL;
struct intel_iommu *iommu; struct intel_iommu *iommu;
u16 req_id, dma_alias; u16 req_id, dma_alias;
unsigned long flags; unsigned long flags;
u8 bus, devfn; u8 bus, devfn;
domain = find_domain(dev);
if (domain)
return domain;
iommu = device_to_iommu(dev, &bus, &devfn); iommu = device_to_iommu(dev, &bus, &devfn);
if (!iommu) if (!iommu)
return NULL; return NULL;
...@@ -2487,9 +2482,9 @@ static struct dmar_domain *get_domain_for_dev(struct device *dev, int gaw) ...@@ -2487,9 +2482,9 @@ static struct dmar_domain *get_domain_for_dev(struct device *dev, int gaw)
} }
spin_unlock_irqrestore(&device_domain_lock, flags); spin_unlock_irqrestore(&device_domain_lock, flags);
/* DMA alias already has a domain, uses it */ /* DMA alias already has a domain, use it */
if (info) if (info)
goto found_domain; goto out;
} }
/* Allocate and initialize new domain for the device */ /* Allocate and initialize new domain for the device */
...@@ -2501,28 +2496,67 @@ static struct dmar_domain *get_domain_for_dev(struct device *dev, int gaw) ...@@ -2501,28 +2496,67 @@ static struct dmar_domain *get_domain_for_dev(struct device *dev, int gaw)
return NULL; return NULL;
} }
/* register PCI DMA alias device */ out:
if (dev_is_pci(dev) && req_id != dma_alias) {
tmp = dmar_insert_one_dev_info(iommu, PCI_BUS_NUM(dma_alias),
dma_alias & 0xff, NULL, domain);
if (!tmp || tmp != domain) { return domain;
domain_exit(domain); }
domain = tmp;
}
if (!domain) static struct dmar_domain *set_domain_for_dev(struct device *dev,
return NULL; struct dmar_domain *domain)
{
struct intel_iommu *iommu;
struct dmar_domain *tmp;
u16 req_id, dma_alias;
u8 bus, devfn;
iommu = device_to_iommu(dev, &bus, &devfn);
if (!iommu)
return NULL;
req_id = ((u16)bus << 8) | devfn;
if (dev_is_pci(dev)) {
struct pci_dev *pdev = to_pci_dev(dev);
pci_for_each_dma_alias(pdev, get_last_alias, &dma_alias);
/* register PCI DMA alias device */
if (req_id != dma_alias) {
tmp = dmar_insert_one_dev_info(iommu, PCI_BUS_NUM(dma_alias),
dma_alias & 0xff, NULL, domain);
if (!tmp || tmp != domain)
return tmp;
}
} }
found_domain:
tmp = dmar_insert_one_dev_info(iommu, bus, devfn, dev, domain); tmp = dmar_insert_one_dev_info(iommu, bus, devfn, dev, domain);
if (!tmp || tmp != domain)
return tmp;
return domain;
}
if (!tmp || tmp != domain) { static struct dmar_domain *get_domain_for_dev(struct device *dev, int gaw)
{
struct dmar_domain *domain, *tmp;
domain = find_domain(dev);
if (domain)
goto out;
domain = find_or_alloc_domain(dev, gaw);
if (!domain)
goto out;
tmp = set_domain_for_dev(dev, domain);
if (!tmp || domain != tmp) {
domain_exit(domain); domain_exit(domain);
domain = tmp; domain = tmp;
} }
out:
return domain; return domain;
} }
...@@ -3394,17 +3428,18 @@ static unsigned long intel_alloc_iova(struct device *dev, ...@@ -3394,17 +3428,18 @@ static unsigned long intel_alloc_iova(struct device *dev,
static struct dmar_domain *__get_valid_domain_for_dev(struct device *dev) static struct dmar_domain *__get_valid_domain_for_dev(struct device *dev)
{ {
struct dmar_domain *domain, *tmp;
struct dmar_rmrr_unit *rmrr; struct dmar_rmrr_unit *rmrr;
struct dmar_domain *domain;
struct device *i_dev; struct device *i_dev;
int i, ret; int i, ret;
domain = get_domain_for_dev(dev, DEFAULT_DOMAIN_ADDRESS_WIDTH); domain = find_domain(dev);
if (!domain) { if (domain)
pr_err("Allocating domain for %s failed\n", goto out;
dev_name(dev));
return NULL; domain = find_or_alloc_domain(dev, DEFAULT_DOMAIN_ADDRESS_WIDTH);
} if (!domain)
goto out;
/* We have a new domain - setup possible RMRRs for the device */ /* We have a new domain - setup possible RMRRs for the device */
rcu_read_lock(); rcu_read_lock();
...@@ -3423,6 +3458,18 @@ static struct dmar_domain *__get_valid_domain_for_dev(struct device *dev) ...@@ -3423,6 +3458,18 @@ static struct dmar_domain *__get_valid_domain_for_dev(struct device *dev)
} }
rcu_read_unlock(); rcu_read_unlock();
tmp = set_domain_for_dev(dev, domain);
if (!tmp || domain != tmp) {
domain_exit(domain);
domain = tmp;
}
out:
if (!domain)
pr_err("Allocating domain for %s failed\n", dev_name(dev));
return domain; return domain;
} }
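[Editor's note] The refactor above splits domain lookup into find_or_alloc_domain(), which only produces a candidate, and set_domain_for_dev(), which publishes it and returns whichever domain actually ended up attached, so __get_valid_domain_for_dev() can program RMRRs on a fresh domain before attaching it. A minimal userspace sketch of that publish-and-reconcile pattern (all names and types are illustrative):

        #include <pthread.h>
        #include <stdio.h>
        #include <stdlib.h>

        struct domain { int id; };

        static struct domain *dev_domain;       /* per-device slot, illustrative */
        static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

        /* Publish 'candidate' for the device; return whichever domain is attached. */
        static struct domain *set_domain_for_dev(struct domain *candidate)
        {
                struct domain *winner;

                pthread_mutex_lock(&lock);
                if (!dev_domain)
                        dev_domain = candidate;
                winner = dev_domain;
                pthread_mutex_unlock(&lock);
                return winner;
        }

        int main(void)
        {
                struct domain *domain = malloc(sizeof(*domain));   /* "find or alloc" step */
                struct domain *tmp;

                if (!domain)
                        return 1;
                domain->id = 1;
                /* ... set up RMRR-style mappings on 'domain' here ... */

                tmp = set_domain_for_dev(domain);
                if (tmp != domain) {            /* lost the race: drop ours, use theirs */
                        free(domain);
                        domain = tmp;
                }
                printf("using domain %d\n", domain->id);
                return 0;
        }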
......
...@@ -633,6 +633,10 @@ static struct io_pgtable *arm_v7s_alloc_pgtable(struct io_pgtable_cfg *cfg, ...@@ -633,6 +633,10 @@ static struct io_pgtable *arm_v7s_alloc_pgtable(struct io_pgtable_cfg *cfg,
{ {
struct arm_v7s_io_pgtable *data; struct arm_v7s_io_pgtable *data;
#ifdef PHYS_OFFSET
if (upper_32_bits(PHYS_OFFSET))
return NULL;
#endif
if (cfg->ias > ARM_V7S_ADDR_BITS || cfg->oas > ARM_V7S_ADDR_BITS) if (cfg->ias > ARM_V7S_ADDR_BITS || cfg->oas > ARM_V7S_ADDR_BITS)
return NULL; return NULL;
......
...@@ -31,6 +31,7 @@ ...@@ -31,6 +31,7 @@
#include <linux/err.h> #include <linux/err.h>
#include <linux/pci.h> #include <linux/pci.h>
#include <linux/bitops.h> #include <linux/bitops.h>
#include <linux/property.h>
#include <trace/events/iommu.h> #include <trace/events/iommu.h>
static struct kset *iommu_group_kset; static struct kset *iommu_group_kset;
...@@ -1613,3 +1614,60 @@ int iommu_request_dm_for_dev(struct device *dev) ...@@ -1613,3 +1614,60 @@ int iommu_request_dm_for_dev(struct device *dev)
return ret; return ret;
} }
int iommu_fwspec_init(struct device *dev, struct fwnode_handle *iommu_fwnode,
const struct iommu_ops *ops)
{
struct iommu_fwspec *fwspec = dev->iommu_fwspec;
if (fwspec)
return ops == fwspec->ops ? 0 : -EINVAL;
fwspec = kzalloc(sizeof(*fwspec), GFP_KERNEL);
if (!fwspec)
return -ENOMEM;
of_node_get(to_of_node(iommu_fwnode));
fwspec->iommu_fwnode = iommu_fwnode;
fwspec->ops = ops;
dev->iommu_fwspec = fwspec;
return 0;
}
EXPORT_SYMBOL_GPL(iommu_fwspec_init);
void iommu_fwspec_free(struct device *dev)
{
struct iommu_fwspec *fwspec = dev->iommu_fwspec;
if (fwspec) {
fwnode_handle_put(fwspec->iommu_fwnode);
kfree(fwspec);
dev->iommu_fwspec = NULL;
}
}
EXPORT_SYMBOL_GPL(iommu_fwspec_free);
int iommu_fwspec_add_ids(struct device *dev, u32 *ids, int num_ids)
{
struct iommu_fwspec *fwspec = dev->iommu_fwspec;
size_t size;
int i;
if (!fwspec)
return -EINVAL;
size = offsetof(struct iommu_fwspec, ids[fwspec->num_ids + num_ids]);
if (size > sizeof(*fwspec)) {
fwspec = krealloc(dev->iommu_fwspec, size, GFP_KERNEL);
if (!fwspec)
return -ENOMEM;
}
for (i = 0; i < num_ids; i++)
fwspec->ids[fwspec->num_ids + i] = ids[i];
fwspec->num_ids += num_ids;
dev->iommu_fwspec = fwspec;
return 0;
}
EXPORT_SYMBOL_GPL(iommu_fwspec_add_ids);
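[Editor's note] iommu_fwspec_add_ids() grows the per-device fwspec in place: the IDs live in a flexible array member, so the required size is just offsetof(struct iommu_fwspec, ids[num]) and krealloc() extends the allocation once that exceeds the base structure. A userspace sketch of the same sizing trick (the struct layout is illustrative):

        #include <stddef.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        struct fwspec {
                unsigned int num_ids;
                uint32_t ids[];                 /* flexible array member */
        };

        /* Grow the fwspec to hold n more IDs, sized via offsetof() as in the kernel. */
        static struct fwspec *fwspec_add_ids(struct fwspec *fw, const uint32_t *ids,
                                             unsigned int n)
        {
                size_t size = offsetof(struct fwspec, ids[fw->num_ids + n]);
                struct fwspec *grown = realloc(fw, size);

                if (!grown)
                        return NULL;
                memcpy(&grown->ids[grown->num_ids], ids, n * sizeof(*ids));
                grown->num_ids += n;
                return grown;
        }

        int main(void)
        {
                struct fwspec *fw = calloc(1, sizeof(struct fwspec));
                uint32_t sids[] = { 0x210, 0x211 };

                if (!fw)
                        return 1;
                fw = fwspec_add_ids(fw, sids, 2);
                if (!fw)
                        return 1;
                printf("%u IDs, first 0x%x\n", fw->num_ids, fw->ids[0]);
                free(fw);
                return 0;
        }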
...@@ -636,7 +636,7 @@ static int ipmmu_add_device(struct device *dev) ...@@ -636,7 +636,7 @@ static int ipmmu_add_device(struct device *dev)
spin_unlock(&ipmmu_devices_lock); spin_unlock(&ipmmu_devices_lock);
if (ret < 0) if (ret < 0)
return -ENODEV; goto error;
for (i = 0; i < num_utlbs; ++i) { for (i = 0; i < num_utlbs; ++i) {
if (utlbs[i] >= mmu->num_utlbs) { if (utlbs[i] >= mmu->num_utlbs) {
......
...@@ -22,6 +22,7 @@ ...@@ -22,6 +22,7 @@
#include <linux/limits.h> #include <linux/limits.h>
#include <linux/of.h> #include <linux/of.h>
#include <linux/of_iommu.h> #include <linux/of_iommu.h>
#include <linux/of_pci.h>
#include <linux/slab.h> #include <linux/slab.h>
static const struct of_device_id __iommu_of_table_sentinel static const struct of_device_id __iommu_of_table_sentinel
...@@ -134,6 +135,47 @@ const struct iommu_ops *of_iommu_get_ops(struct device_node *np) ...@@ -134,6 +135,47 @@ const struct iommu_ops *of_iommu_get_ops(struct device_node *np)
return ops; return ops;
} }
static int __get_pci_rid(struct pci_dev *pdev, u16 alias, void *data)
{
struct of_phandle_args *iommu_spec = data;
iommu_spec->args[0] = alias;
return iommu_spec->np == pdev->bus->dev.of_node;
}
static const struct iommu_ops
*of_pci_iommu_configure(struct pci_dev *pdev, struct device_node *bridge_np)
{
const struct iommu_ops *ops;
struct of_phandle_args iommu_spec;
/*
* Start by tracing the RID alias down the PCI topology as
* far as the host bridge whose OF node we have...
* (we're not even attempting to handle multi-alias devices yet)
*/
iommu_spec.args_count = 1;
iommu_spec.np = bridge_np;
pci_for_each_dma_alias(pdev, __get_pci_rid, &iommu_spec);
/*
* ...then find out what that becomes once it escapes the PCI
* bus into the system beyond, and which IOMMU it ends up at.
*/
iommu_spec.np = NULL;
if (of_pci_map_rid(bridge_np, iommu_spec.args[0], "iommu-map",
"iommu-map-mask", &iommu_spec.np, iommu_spec.args))
return NULL;
ops = of_iommu_get_ops(iommu_spec.np);
if (!ops || !ops->of_xlate ||
iommu_fwspec_init(&pdev->dev, &iommu_spec.np->fwnode, ops) ||
ops->of_xlate(&pdev->dev, &iommu_spec))
ops = NULL;
of_node_put(iommu_spec.np);
return ops;
}
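The alias that __get_pci_rid() records above is a 16-bit PCI requester ID: bus number in the upper byte and device/function ("devfn") in the lower byte, possibly rewritten by bridges on the way up to the host bridge. A stand-alone sketch of that packing, mirroring the kernel's PCI_DEVID()/PCI_DEVFN() macros for a made-up endpoint at 02:1f.3:

#include <stdint.h>
#include <stdio.h>

/* Local mirrors of the kernel's PCI_DEVFN()/PCI_DEVID() packing. */
#define PCI_DEVFN(slot, func)	((((slot) & 0x1f) << 3) | ((func) & 0x07))
#define PCI_DEVID(bus, devfn)	((((uint16_t)(bus)) << 8) | (devfn))

int main(void)
{
	/* Hypothetical endpoint at 02:1f.3 */
	uint16_t rid = PCI_DEVID(0x02, PCI_DEVFN(0x1f, 0x3));

	printf("requester ID = 0x%04x (bus %02x, slot %02x, fn %x)\n",
	       rid, rid >> 8, (rid >> 3) & 0x1f, rid & 0x7);
	return 0;
}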
const struct iommu_ops *of_iommu_configure(struct device *dev, const struct iommu_ops *of_iommu_configure(struct device *dev,
struct device_node *master_np) struct device_node *master_np)
{ {
...@@ -142,12 +184,8 @@ const struct iommu_ops *of_iommu_configure(struct device *dev, ...@@ -142,12 +184,8 @@ const struct iommu_ops *of_iommu_configure(struct device *dev,
const struct iommu_ops *ops = NULL; const struct iommu_ops *ops = NULL;
int idx = 0; int idx = 0;
/*
* We can't do much for PCI devices without knowing how
* device IDs are wired up from the PCI bus to the IOMMU.
*/
if (dev_is_pci(dev)) if (dev_is_pci(dev))
return NULL; return of_pci_iommu_configure(to_pci_dev(dev), master_np);
/* /*
* We don't currently walk up the tree looking for a parent IOMMU. * We don't currently walk up the tree looking for a parent IOMMU.
...@@ -160,7 +198,9 @@ const struct iommu_ops *of_iommu_configure(struct device *dev, ...@@ -160,7 +198,9 @@ const struct iommu_ops *of_iommu_configure(struct device *dev,
np = iommu_spec.np; np = iommu_spec.np;
ops = of_iommu_get_ops(np); ops = of_iommu_get_ops(np);
if (!ops || !ops->of_xlate || ops->of_xlate(dev, &iommu_spec)) if (!ops || !ops->of_xlate ||
iommu_fwspec_init(dev, &np->fwnode, ops) ||
ops->of_xlate(dev, &iommu_spec))
goto err_put_node; goto err_put_node;
of_node_put(np); of_node_put(np);
......
...@@ -16,6 +16,7 @@ ...@@ -16,6 +16,7 @@
#define pr_fmt(fmt) "GICv2m: " fmt #define pr_fmt(fmt) "GICv2m: " fmt
#include <linux/acpi.h> #include <linux/acpi.h>
#include <linux/dma-iommu.h>
#include <linux/irq.h> #include <linux/irq.h>
#include <linux/irqdomain.h> #include <linux/irqdomain.h>
#include <linux/kernel.h> #include <linux/kernel.h>
...@@ -108,6 +109,8 @@ static void gicv2m_compose_msi_msg(struct irq_data *data, struct msi_msg *msg) ...@@ -108,6 +109,8 @@ static void gicv2m_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
if (v2m->flags & GICV2M_NEEDS_SPI_OFFSET) if (v2m->flags & GICV2M_NEEDS_SPI_OFFSET)
msg->data -= v2m->spi_offset; msg->data -= v2m->spi_offset;
iommu_dma_map_msi_msg(data->irq, msg);
} }
static struct irq_chip gicv2m_irq_chip = { static struct irq_chip gicv2m_irq_chip = {
......
...@@ -19,6 +19,7 @@ ...@@ -19,6 +19,7 @@
#include <linux/bitmap.h> #include <linux/bitmap.h>
#include <linux/cpu.h> #include <linux/cpu.h>
#include <linux/delay.h> #include <linux/delay.h>
#include <linux/dma-iommu.h>
#include <linux/interrupt.h> #include <linux/interrupt.h>
#include <linux/irqdomain.h> #include <linux/irqdomain.h>
#include <linux/acpi_iort.h> #include <linux/acpi_iort.h>
...@@ -659,6 +660,8 @@ static void its_irq_compose_msi_msg(struct irq_data *d, struct msi_msg *msg) ...@@ -659,6 +660,8 @@ static void its_irq_compose_msi_msg(struct irq_data *d, struct msi_msg *msg)
msg->address_lo = addr & ((1UL << 32) - 1); msg->address_lo = addr & ((1UL << 32) - 1);
msg->address_hi = addr >> 32; msg->address_hi = addr >> 32;
msg->data = its_get_event_id(d); msg->data = its_get_event_id(d);
iommu_dma_map_msi_msg(d->irq, msg);
} }
static struct irq_chip its_irq_chip = { static struct irq_chip its_irq_chip = {
......
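Both compose_msi_msg() hunks above rely on the 64-bit doorbell address being split into the 32-bit address_lo/address_hi halves of the MSI message, which the newly added iommu_dma_map_msi_msg() call can then rewrite with an IOVA when the device sits behind an IOMMU. A stand-alone sketch of the split, with a made-up doorbell address and event ID:

#include <stdint.h>
#include <stdio.h>

struct msi_msg_example {		/* trimmed-down echo of struct msi_msg */
	uint32_t address_lo;
	uint32_t address_hi;
	uint32_t data;
};

int main(void)
{
	uint64_t doorbell = 0x80a0050000ULL;	/* hypothetical GITS_TRANSLATER address */
	struct msi_msg_example msg;

	msg.address_lo = doorbell & ((1ULL << 32) - 1);
	msg.address_hi = doorbell >> 32;
	msg.data = 42;				/* made-up event ID */

	printf("address_hi=0x%08x address_lo=0x%08x data=%u\n",
	       msg.address_hi, msg.address_lo, msg.data);
	return 0;
}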
...@@ -26,6 +26,7 @@ ...@@ -26,6 +26,7 @@
#include <linux/module.h> #include <linux/module.h>
#include <linux/of.h> #include <linux/of.h>
#include <linux/of_irq.h> #include <linux/of_irq.h>
#include <linux/of_pci.h>
#include <linux/string.h> #include <linux/string.h>
#include <linux/slab.h> #include <linux/slab.h>
...@@ -592,87 +593,16 @@ static u32 __of_msi_map_rid(struct device *dev, struct device_node **np, ...@@ -592,87 +593,16 @@ static u32 __of_msi_map_rid(struct device *dev, struct device_node **np,
u32 rid_in) u32 rid_in)
{ {
struct device *parent_dev; struct device *parent_dev;
struct device_node *msi_controller_node;
struct device_node *msi_np = *np;
u32 map_mask, masked_rid, rid_base, msi_base, rid_len, phandle;
int msi_map_len;
bool matched;
u32 rid_out = rid_in; u32 rid_out = rid_in;
const __be32 *msi_map = NULL;
/* /*
* Walk up the device parent links looking for one with a * Walk up the device parent links looking for one with a
* "msi-map" property. * "msi-map" property.
*/ */
for (parent_dev = dev; parent_dev; parent_dev = parent_dev->parent) { for (parent_dev = dev; parent_dev; parent_dev = parent_dev->parent)
if (!parent_dev->of_node) if (!of_pci_map_rid(parent_dev->of_node, rid_in, "msi-map",
continue; "msi-map-mask", np, &rid_out))
msi_map = of_get_property(parent_dev->of_node,
"msi-map", &msi_map_len);
if (!msi_map)
continue;
if (msi_map_len % (4 * sizeof(__be32))) {
dev_err(parent_dev, "Error: Bad msi-map length: %d\n",
msi_map_len);
return rid_out;
}
/* We have a good parent_dev and msi_map, let's use them. */
break;
}
if (!msi_map)
return rid_out;
/* The default is to select all bits. */
map_mask = 0xffffffff;
/*
* Can be overridden by "msi-map-mask" property. If
* of_property_read_u32() fails, the default is used.
*/
of_property_read_u32(parent_dev->of_node, "msi-map-mask", &map_mask);
masked_rid = map_mask & rid_in;
matched = false;
while (!matched && msi_map_len >= 4 * sizeof(__be32)) {
rid_base = be32_to_cpup(msi_map + 0);
phandle = be32_to_cpup(msi_map + 1);
msi_base = be32_to_cpup(msi_map + 2);
rid_len = be32_to_cpup(msi_map + 3);
if (rid_base & ~map_mask) {
dev_err(parent_dev,
"Invalid msi-map translation - msi-map-mask (0x%x) ignores rid-base (0x%x)\n",
map_mask, rid_base);
return rid_out;
}
msi_controller_node = of_find_node_by_phandle(phandle);
matched = (masked_rid >= rid_base &&
masked_rid < rid_base + rid_len);
if (msi_np)
matched &= msi_np == msi_controller_node;
if (matched && !msi_np) {
*np = msi_np = msi_controller_node;
break; break;
}
of_node_put(msi_controller_node);
msi_map_len -= 4 * sizeof(__be32);
msi_map += 4;
}
if (!matched)
return rid_out;
rid_out = masked_rid - rid_base + msi_base;
dev_dbg(dev,
"msi-map at: %s, using mask %08x, rid-base: %08x, msi-base: %08x, length: %08x, rid: %08x -> %08x\n",
dev_name(parent_dev), map_mask, rid_base, msi_base,
rid_len, rid_in, rid_out);
return rid_out; return rid_out;
} }
......
...@@ -308,3 +308,105 @@ struct msi_controller *of_pci_find_msi_chip_by_node(struct device_node *of_node) ...@@ -308,3 +308,105 @@ struct msi_controller *of_pci_find_msi_chip_by_node(struct device_node *of_node)
EXPORT_SYMBOL_GPL(of_pci_find_msi_chip_by_node); EXPORT_SYMBOL_GPL(of_pci_find_msi_chip_by_node);
#endif /* CONFIG_PCI_MSI */ #endif /* CONFIG_PCI_MSI */
/**
* of_pci_map_rid - Translate a requester ID through a downstream mapping.
* @np: root complex device node.
* @rid: PCI requester ID to map.
* @map_name: property name of the map to use.
* @map_mask_name: optional property name of the mask to use.
* @target: optional pointer to a target device node.
* @id_out: optional pointer to receive the translated ID.
*
* Given a PCI requester ID, look up the appropriate implementation-defined
* platform ID and/or the target device which receives transactions on that
* ID, as per the "iommu-map" and "msi-map" bindings. Either of @target or
* @id_out may be NULL if only the other is required. If @target points to
* a non-NULL device node pointer, only entries targeting that node will be
* matched; if it points to a NULL value, it will receive the device node of
* the first matching target phandle, with a reference held.
*
* Return: 0 on success or a standard error code on failure.
*/
int of_pci_map_rid(struct device_node *np, u32 rid,
const char *map_name, const char *map_mask_name,
struct device_node **target, u32 *id_out)
{
u32 map_mask, masked_rid;
int map_len;
const __be32 *map = NULL;
if (!np || !map_name || (!target && !id_out))
return -EINVAL;
map = of_get_property(np, map_name, &map_len);
if (!map) {
if (target)
return -ENODEV;
/* Otherwise, no map implies no translation */
*id_out = rid;
return 0;
}
if (!map_len || map_len % (4 * sizeof(*map))) {
pr_err("%s: Error: Bad %s length: %d\n", np->full_name,
map_name, map_len);
return -EINVAL;
}
/* The default is to select all bits. */
map_mask = 0xffffffff;
/*
* Can be overridden by "{iommu,msi}-map-mask" property.
* If of_property_read_u32() fails, the default is used.
*/
if (map_mask_name)
of_property_read_u32(np, map_mask_name, &map_mask);
masked_rid = map_mask & rid;
for ( ; map_len > 0; map_len -= 4 * sizeof(*map), map += 4) {
struct device_node *phandle_node;
u32 rid_base = be32_to_cpup(map + 0);
u32 phandle = be32_to_cpup(map + 1);
u32 out_base = be32_to_cpup(map + 2);
u32 rid_len = be32_to_cpup(map + 3);
if (rid_base & ~map_mask) {
pr_err("%s: Invalid %s translation - %s-mask (0x%x) ignores rid-base (0x%x)\n",
np->full_name, map_name, map_name,
map_mask, rid_base);
return -EFAULT;
}
if (masked_rid < rid_base || masked_rid >= rid_base + rid_len)
continue;
phandle_node = of_find_node_by_phandle(phandle);
if (!phandle_node)
return -ENODEV;
if (target) {
if (*target)
of_node_put(phandle_node);
else
*target = phandle_node;
if (*target != phandle_node)
continue;
}
if (id_out)
*id_out = masked_rid - rid_base + out_base;
pr_debug("%s: %s, using mask %08x, rid-base: %08x, out-base: %08x, length: %08x, rid: %08x -> %08x\n",
np->full_name, map_name, map_mask, rid_base, out_base,
			 rid_len, rid, masked_rid - rid_base + out_base);
return 0;
}
pr_err("%s: Invalid %s translation - no match for rid 0x%x on %s\n",
np->full_name, map_name, rid,
target && *target ? (*target)->full_name : "any target");
return -EFAULT;
}
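The translation of_pci_map_rid() performs is a windowed rebase: mask the incoming ID, check it against a (rid-base, length) window, then add out-base. A stand-alone sketch of that arithmetic for one hypothetical iommu-map entry (all numbers invented); fed the bus-02 endpoint from the earlier example it yields stream ID 0xfb:

#include <stdint.h>
#include <stdio.h>

/* One (rid-base, out-base, length) window, as found in an iommu-map/msi-map entry. */
struct map_entry {
	uint32_t rid_base;
	uint32_t out_base;
	uint32_t length;
};

/* Returns 0 and fills *out on a match, -1 when the masked ID falls outside the window. */
static int map_rid(const struct map_entry *e, uint32_t mask, uint32_t rid, uint32_t *out)
{
	uint32_t masked = rid & mask;

	if (masked < e->rid_base || masked >= e->rid_base + e->length)
		return -1;
	*out = masked - e->rid_base + e->out_base;
	return 0;
}

int main(void)
{
	/* Hypothetical entry: RIDs 0x0200-0x02ff (all of bus 02) map onto stream IDs 0x00-0xff. */
	struct map_entry e = { .rid_base = 0x200, .out_base = 0x0, .length = 0x100 };
	uint32_t out;

	if (!map_rid(&e, 0xffff, 0x02fb, &out))
		printf("rid 0x02fb -> stream ID 0x%x\n", out);
	else
		printf("rid 0x02fb: no match\n");
	return 0;
}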
...@@ -26,7 +26,7 @@ ...@@ -26,7 +26,7 @@
#define LARB0_PORT_OFFSET 0 #define LARB0_PORT_OFFSET 0
#define LARB1_PORT_OFFSET 11 #define LARB1_PORT_OFFSET 11
#define LARB2_PORT_OFFSET 21 #define LARB2_PORT_OFFSET 21
#define LARB3_PORT_OFFSET 43 #define LARB3_PORT_OFFSET 44
#define MT2701_M4U_ID_LARB0(port) ((port) + LARB0_PORT_OFFSET) #define MT2701_M4U_ID_LARB0(port) ((port) + LARB0_PORT_OFFSET)
#define MT2701_M4U_ID_LARB1(port) ((port) + LARB1_PORT_OFFSET) #define MT2701_M4U_ID_LARB1(port) ((port) + LARB1_PORT_OFFSET)
......
...@@ -41,6 +41,7 @@ struct device_node; ...@@ -41,6 +41,7 @@ struct device_node;
struct fwnode_handle; struct fwnode_handle;
struct iommu_ops; struct iommu_ops;
struct iommu_group; struct iommu_group;
struct iommu_fwspec;
struct bus_attribute { struct bus_attribute {
struct attribute attr; struct attribute attr;
...@@ -765,6 +766,7 @@ struct device_dma_parameters { ...@@ -765,6 +766,7 @@ struct device_dma_parameters {
* gone away. This should be set by the allocator of the * gone away. This should be set by the allocator of the
* device (i.e. the bus driver that discovered the device). * device (i.e. the bus driver that discovered the device).
* @iommu_group: IOMMU group the device belongs to. * @iommu_group: IOMMU group the device belongs to.
* @iommu_fwspec: IOMMU-specific properties supplied by firmware.
* *
* @offline_disabled: If set, the device is permanently online. * @offline_disabled: If set, the device is permanently online.
* @offline: Set after successful invocation of bus type's .offline(). * @offline: Set after successful invocation of bus type's .offline().
...@@ -849,6 +851,7 @@ struct device { ...@@ -849,6 +851,7 @@ struct device {
void (*release)(struct device *dev); void (*release)(struct device *dev);
struct iommu_group *iommu_group; struct iommu_group *iommu_group;
struct iommu_fwspec *iommu_fwspec;
bool offline_disabled:1; bool offline_disabled:1;
bool offline:1; bool offline:1;
......
...@@ -21,6 +21,7 @@ ...@@ -21,6 +21,7 @@
#ifdef CONFIG_IOMMU_DMA #ifdef CONFIG_IOMMU_DMA
#include <linux/iommu.h> #include <linux/iommu.h>
#include <linux/msi.h>
int iommu_dma_init(void); int iommu_dma_init(void);
...@@ -29,7 +30,8 @@ int iommu_get_dma_cookie(struct iommu_domain *domain); ...@@ -29,7 +30,8 @@ int iommu_get_dma_cookie(struct iommu_domain *domain);
void iommu_put_dma_cookie(struct iommu_domain *domain); void iommu_put_dma_cookie(struct iommu_domain *domain);
/* Setup call for arch DMA mapping code */ /* Setup call for arch DMA mapping code */
int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base, u64 size); int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
u64 size, struct device *dev);
/* General helpers for DMA-API <-> IOMMU-API interaction */ /* General helpers for DMA-API <-> IOMMU-API interaction */
int dma_direction_to_prot(enum dma_data_direction dir, bool coherent); int dma_direction_to_prot(enum dma_data_direction dir, bool coherent);
...@@ -62,9 +64,13 @@ void iommu_dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nents, ...@@ -62,9 +64,13 @@ void iommu_dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nents,
int iommu_dma_supported(struct device *dev, u64 mask); int iommu_dma_supported(struct device *dev, u64 mask);
int iommu_dma_mapping_error(struct device *dev, dma_addr_t dma_addr); int iommu_dma_mapping_error(struct device *dev, dma_addr_t dma_addr);
/* The DMA API isn't _quite_ the whole story, though... */
void iommu_dma_map_msi_msg(int irq, struct msi_msg *msg);
#else #else
struct iommu_domain; struct iommu_domain;
struct msi_msg;
static inline int iommu_dma_init(void) static inline int iommu_dma_init(void)
{ {
...@@ -80,6 +86,10 @@ static inline void iommu_put_dma_cookie(struct iommu_domain *domain) ...@@ -80,6 +86,10 @@ static inline void iommu_put_dma_cookie(struct iommu_domain *domain)
{ {
} }
static inline void iommu_dma_map_msi_msg(int irq, struct msi_msg *msg)
{
}
#endif /* CONFIG_IOMMU_DMA */ #endif /* CONFIG_IOMMU_DMA */
#endif /* __KERNEL__ */ #endif /* __KERNEL__ */
#endif /* __DMA_IOMMU_H */ #endif /* __DMA_IOMMU_H */
...@@ -331,10 +331,32 @@ extern struct iommu_group *pci_device_group(struct device *dev); ...@@ -331,10 +331,32 @@ extern struct iommu_group *pci_device_group(struct device *dev);
/* Generic device grouping function */ /* Generic device grouping function */
extern struct iommu_group *generic_device_group(struct device *dev); extern struct iommu_group *generic_device_group(struct device *dev);
/**
* struct iommu_fwspec - per-device IOMMU instance data
* @ops: ops for this device's IOMMU
* @iommu_fwnode: firmware handle for this device's IOMMU
* @iommu_priv: IOMMU driver private data for this device
* @num_ids: number of associated device IDs
* @ids: IDs which this device may present to the IOMMU
*/
struct iommu_fwspec {
const struct iommu_ops *ops;
struct fwnode_handle *iommu_fwnode;
void *iommu_priv;
unsigned int num_ids;
u32 ids[1];
};
int iommu_fwspec_init(struct device *dev, struct fwnode_handle *iommu_fwnode,
const struct iommu_ops *ops);
void iommu_fwspec_free(struct device *dev);
int iommu_fwspec_add_ids(struct device *dev, u32 *ids, int num_ids);
#else /* CONFIG_IOMMU_API */ #else /* CONFIG_IOMMU_API */
struct iommu_ops {}; struct iommu_ops {};
struct iommu_group {}; struct iommu_group {};
struct iommu_fwspec {};
static inline bool iommu_present(struct bus_type *bus) static inline bool iommu_present(struct bus_type *bus)
{ {
...@@ -541,6 +563,23 @@ static inline void iommu_device_unlink(struct device *dev, struct device *link) ...@@ -541,6 +563,23 @@ static inline void iommu_device_unlink(struct device *dev, struct device *link)
{ {
} }
static inline int iommu_fwspec_init(struct device *dev,
struct fwnode_handle *iommu_fwnode,
const struct iommu_ops *ops)
{
return -ENODEV;
}
static inline void iommu_fwspec_free(struct device *dev)
{
}
static inline int iommu_fwspec_add_ids(struct device *dev, u32 *ids,
int num_ids)
{
return -ENODEV;
}
#endif /* CONFIG_IOMMU_API */ #endif /* CONFIG_IOMMU_API */
#endif /* __LINUX_IOMMU_H */ #endif /* __LINUX_IOMMU_H */
...@@ -17,6 +17,9 @@ int of_irq_parse_and_map_pci(const struct pci_dev *dev, u8 slot, u8 pin); ...@@ -17,6 +17,9 @@ int of_irq_parse_and_map_pci(const struct pci_dev *dev, u8 slot, u8 pin);
int of_pci_parse_bus_range(struct device_node *node, struct resource *res); int of_pci_parse_bus_range(struct device_node *node, struct resource *res);
int of_get_pci_domain_nr(struct device_node *node); int of_get_pci_domain_nr(struct device_node *node);
void of_pci_check_probe_only(void); void of_pci_check_probe_only(void);
int of_pci_map_rid(struct device_node *np, u32 rid,
const char *map_name, const char *map_mask_name,
struct device_node **target, u32 *id_out);
#else #else
static inline int of_irq_parse_pci(const struct pci_dev *pdev, struct of_phandle_args *out_irq) static inline int of_irq_parse_pci(const struct pci_dev *pdev, struct of_phandle_args *out_irq)
{ {
...@@ -52,6 +55,13 @@ of_get_pci_domain_nr(struct device_node *node) ...@@ -52,6 +55,13 @@ of_get_pci_domain_nr(struct device_node *node)
return -1; return -1;
} }
static inline int of_pci_map_rid(struct device_node *np, u32 rid,
const char *map_name, const char *map_mask_name,
struct device_node **target, u32 *id_out)
{
return -EINVAL;
}
static inline void of_pci_check_probe_only(void) { } static inline void of_pci_check_probe_only(void) { }
#endif #endif
......