Commit 6eae81a5 authored by Linus Torvalds

Merge tag 'iommu-updates-v4.2' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu

Pull IOMMU updates from Joerg Roedel:
 "This time with bigger changes than usual:

   - A new IOMMU driver for the ARM SMMUv3.

     This IOMMU is pretty different from SMMUv1 and v2 in that it is
     configured through in-memory structures and not through the MMIO
     register region.  The ARM SMMUv3 also supports IO demand paging for
     PCI devices with PRI/PASID capabilities, but this is not
     implemented in the driver yet.

   - Lots of cleanups and device-tree support for the Exynos IOMMU
     driver.  This is part of the effort to bring Exynos DRM support
     upstream.

   - Introduction of default domains into the IOMMU core code.

      The rationale behind this is to move functionality out of the IOMMU
      drivers into common code to achieve unified behavior across
     different drivers.  The patches here introduce a default domain for
     iommu-groups (isolation groups).

      A device will now always be attached to a domain, either the
      default domain or another domain handled by the device driver.
      The IOMMU drivers have to be modified to make use of this feature.
      So far only the AMD IOMMU driver has been converted, with others
      to follow (see the first sketch after this list).

   - Patches for the Intel VT-d driver to fix DMAR faults that happen
     when a kdump kernel boots.

     When the kdump kernel boots it re-initializes the IOMMU hardware,
     which destroys all mappings from the crashed kernel.  As this
     happens before the endpoint devices are re-initialized, any
      in-flight DMA causes a DMAR fault.  These faults trigger PCI
      master aborts, which some devices can't handle properly; such
      devices end up in an undefined state, the device driver in the
      kdump kernel fails to initialize them, and the dump fails.

      This is now fixed by copying over the mapping structures (only
      context tables and interrupt remapping tables) from the old kernel
      and keeping the old mappings in place until the device drivers of
      the new kernel take over.  This emulates the behavior without an
      IOMMU as closely as possible (see the second sketch after this
      list).

   - A couple of other small fixes and cleanups"
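
To make the default-domain idea concrete, here is a minimal sketch in C.
It is illustrative only: the structure and helper names below are
simplified stand-ins, not the actual iommu core API.

/* Stand-ins for struct iommu_domain / struct iommu_group */
struct domain;

struct group {
	struct domain *default_domain;	/* created with the group */
	struct domain *domain;		/* domain the devices currently use */
};

static void group_attach(struct group *g, struct domain *d)
{
	g->domain = d;			/* driver-managed domain takes over */
}

static void group_detach(struct group *g)
{
	/* Never leave the group unattached: fall back to the default */
	g->domain = g->default_domain;
}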
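The kdump fix follows the same spirit.  A rough sketch of the handover,
again illustrative rather than the real VT-d implementation; the two
helpers marked "hypothetical" stand in for the actual register access
and table-walking code:

#include <linux/io.h>
#include <linux/errno.h>

static int copy_translation_tables(void __iomem *iommu_regs)
{
	phys_addr_t old_root;
	void *root;

	/* Where did the crashed kernel put its root table? */
	old_root = read_old_root_table_addr(iommu_regs);	/* hypothetical */

	/* Map the old kernel's table instead of allocating a fresh one;
	 * assumes old-kernel memory can be mapped, e.g. via memremap(). */
	root = memremap(old_root, PAGE_SIZE, MEMREMAP_WB);
	if (!root)
		return -ENOMEM;

	/* Copy context/IR entries and mark them so their domain ids are
	 * not reused while devices still rely on them. */
	return copy_and_mark_context_entries(root);		/* hypothetical */
}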

* tag 'iommu-updates-v4.2' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: (69 commits)
  iommu/amd: Handle large pages correctly in free_pagetable
  iommu/vt-d: Don't disable IR when it was previously enabled
  iommu/vt-d: Make sure copied over IR entries are not reused
  iommu/vt-d: Copy IR table from old kernel when in kdump mode
  iommu/vt-d: Set IRTA in intel_setup_irq_remapping
  iommu/vt-d: Disable IRQ remapping in intel_prepare_irq_remapping
  iommu/vt-d: Move QI initialization to intel_setup_irq_remapping
  iommu/vt-d: Move EIM detection to intel_prepare_irq_remapping
  iommu/vt-d: Enable Translation only if it was previously disabled
  iommu/vt-d: Don't disable translation prior to OS handover
  iommu/vt-d: Don't copy translation tables if RTT bit needs to be changed
  iommu/vt-d: Don't do early domain assignment if kdump kernel
  iommu/vt-d: Allocate si_domain in init_dmars()
  iommu/vt-d: Mark copied context entries
  iommu/vt-d: Do not re-use domain-ids from the old kernel
  iommu/vt-d: Copy translation tables from old kernel
  iommu/vt-d: Detect pre enabled translation
  iommu/vt-d: Make root entry visible for hardware right after allocation
  iommu/vt-d: Init QI before root entry is allocated
  iommu/vt-d: Cleanup log messages
  ...
parents 54245ed8 5ffde2f6
* ARM SMMUv3 Architecture Implementation

The SMMUv3 architecture is a significant departure from previous
revisions, replacing the MMIO register interface with in-memory command
and event queues and adding support for the ATS and PRI components of
the PCIe specification.

** SMMUv3 required properties:

- compatible        : Should include:
                      * "arm,smmu-v3" for any SMMUv3 compliant
                        implementation. This entry should be last in the
                        compatible list.

- reg               : Base address and size of the SMMU.

- interrupts        : Non-secure interrupt list describing the wired
                      interrupt sources corresponding to entries in
                      interrupt-names. If no wired interrupts are
                      present then this property may be omitted.

- interrupt-names   : When the interrupts property is present, should
                      include the following:
                      * "eventq"    - Event Queue not empty
                      * "priq"      - PRI Queue not empty
                      * "cmdq-sync" - CMD_SYNC complete
                      * "gerror"    - Global Error activated

** SMMUv3 optional properties:

- dma-coherent      : Present if DMA operations made by the SMMU (page
                      table walks, stream table accesses etc) are cache
                      coherent with the CPU.

                      NOTE: this only applies to the SMMU itself, not
                      masters connected upstream of the SMMU.
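
** Example (illustrative only; the base address, region size and
   interrupt numbers below are invented for this sketch):

	smmu@2b400000 {
		compatible = "arm,smmu-v3";
		reg = <0x0 0x2b400000 0x0 0x20000>;
		interrupts = <0 74 1>, <0 75 1>, <0 76 1>, <0 77 1>;
		interrupt-names = "eventq", "priq", "cmdq-sync", "gerror";
		dma-coherent;
	};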
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1637,11 +1637,12 @@ F:	drivers/i2c/busses/i2c-cadence.c
 F:	drivers/mmc/host/sdhci-of-arasan.c
 F:	drivers/edac/synopsys_edac.c
 
-ARM SMMU DRIVER
+ARM SMMU DRIVERS
 M:	Will Deacon <will.deacon@arm.com>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:	Maintained
 F:	drivers/iommu/arm-smmu.c
+F:	drivers/iommu/arm-smmu-v3.c
 F:	drivers/iommu/io-pgtable-arm.c
 
 ARM64 PORT (AARCH64 ARCHITECTURE)
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -339,6 +339,7 @@ config SPAPR_TCE_IOMMU
 	  Enables bits of IOMMU API required by VFIO. The iommu_ops
 	  is not implemented as it is not necessary for VFIO.
 
+# ARM IOMMU support
 config ARM_SMMU
 	bool "ARM Ltd. System MMU (SMMU) Support"
 	depends on (ARM64 || ARM) && MMU
@@ -352,4 +353,16 @@ config ARM_SMMU
 	  Say Y here if your SoC includes an IOMMU device implementing
 	  the ARM SMMU architecture.
 
+config ARM_SMMU_V3
+	bool "ARM Ltd. System MMU Version 3 (SMMUv3) Support"
+	depends on ARM64 && PCI
+	select IOMMU_API
+	select IOMMU_IO_PGTABLE_LPAE
+	help
+	  Support for implementations of the ARM System MMU architecture
+	  version 3 providing translation support to a PCIe root complex.
+
+	  Say Y here if your system includes an IOMMU device implementing
+	  the ARM SMMUv3 architecture.
+
 endif # IOMMU_SUPPORT
--- a/drivers/iommu/Makefile
+++ b/drivers/iommu/Makefile
@@ -9,6 +9,7 @@ obj-$(CONFIG_MSM_IOMMU) += msm_iommu.o msm_iommu_dev.o
 obj-$(CONFIG_AMD_IOMMU) += amd_iommu.o amd_iommu_init.o
 obj-$(CONFIG_AMD_IOMMU_V2) += amd_iommu_v2.o
 obj-$(CONFIG_ARM_SMMU) += arm-smmu.o
+obj-$(CONFIG_ARM_SMMU_V3) += arm-smmu-v3.o
 obj-$(CONFIG_DMAR_TABLE) += dmar.o
 obj-$(CONFIG_INTEL_IOMMU) += intel-iommu.o
 obj-$(CONFIG_IPMMU_VMSA) += ipmmu-vmsa.o
This diff is collapsed.
--- a/drivers/iommu/amd_iommu_init.c
+++ b/drivers/iommu/amd_iommu_init.c
@@ -226,6 +226,7 @@ static enum iommu_init_state init_state = IOMMU_START_STATE;
 
 static int amd_iommu_enable_interrupts(void);
 static int __init iommu_go_to_state(enum iommu_init_state state);
+static void init_device_table_dma(void);
 
 static inline void update_last_devid(u16 devid)
 {
@@ -1389,8 +1390,14 @@ static int __init amd_iommu_init_pci(void)
 			break;
 	}
 
-	ret = amd_iommu_init_devices();
+	init_device_table_dma();
+
+	for_each_iommu(iommu)
+		iommu_flush_all_caches(iommu);
+
+	ret = amd_iommu_init_api();
 
-	print_iommu_info();
+	if (!ret)
+		print_iommu_info();
 
 	return ret;
@@ -1829,8 +1836,6 @@ static bool __init check_ioapic_information(void)
 
 static void __init free_dma_resources(void)
 {
-	amd_iommu_uninit_devices();
-
 	free_pages((unsigned long)amd_iommu_pd_alloc_bitmap,
 		   get_order(MAX_DOMAIN_ID/8));
@@ -2023,27 +2028,10 @@ static bool detect_ivrs(void)
 
 static int amd_iommu_init_dma(void)
 {
-	struct amd_iommu *iommu;
-	int ret;
-
 	if (iommu_pass_through)
-		ret = amd_iommu_init_passthrough();
+		return amd_iommu_init_passthrough();
 	else
-		ret = amd_iommu_init_dma_ops();
-
-	if (ret)
-		return ret;
-
-	init_device_table_dma();
-
-	for_each_iommu(iommu)
-		iommu_flush_all_caches(iommu);
-
-	amd_iommu_init_api();
-	amd_iommu_init_notifier();
-
-	return 0;
+		return amd_iommu_init_dma_ops();
 }
 
 /****************************************************************************
--- a/drivers/iommu/amd_iommu_proto.h
+++ b/drivers/iommu/amd_iommu_proto.h
@@ -30,7 +30,7 @@ extern void amd_iommu_reset_cmd_buffer(struct amd_iommu *iommu);
 extern int amd_iommu_init_devices(void);
 extern void amd_iommu_uninit_devices(void);
 extern void amd_iommu_init_notifier(void);
-extern void amd_iommu_init_api(void);
+extern int amd_iommu_init_api(void);
 
 /* Needed for interrupt remapping */
 extern int amd_iommu_prepare(void);
--- a/drivers/iommu/amd_iommu_types.h
+++ b/drivers/iommu/amd_iommu_types.h
@@ -447,8 +447,6 @@ struct aperture_range {
  * Data container for a dma_ops specific protection domain
  */
 struct dma_ops_domain {
-	struct list_head list;
-
 	/* generic protection domain information */
 	struct protection_domain domain;
@@ -463,12 +461,6 @@ struct dma_ops_domain {
 
 	/* This will be set to true when TLB needs to be flushed */
 	bool need_flush;
-
-	/*
-	 * if this is a preallocated domain, keep the device for which it was
-	 * preallocated in this variable
-	 */
-	u16 target_dev;
 };
 
 /*
@@ -553,9 +545,6 @@ struct amd_iommu {
 	/* if one, we need to send a completion wait command */
 	bool need_sync;
 
-	/* default dma_ops domain for that IOMMU */
-	struct dma_ops_domain *default_dom;
-
 	/* IOMMU sysfs device */
 	struct device *iommu_dev;
This diff is collapsed.
--- a/drivers/iommu/arm-smmu.c
+++ b/drivers/iommu/arm-smmu.c
@@ -202,8 +202,7 @@
 #define ARM_SMMU_CB_S1_TLBIVAL		0x620
 #define ARM_SMMU_CB_S2_TLBIIPAS2	0x630
 #define ARM_SMMU_CB_S2_TLBIIPAS2L	0x638
-#define ARM_SMMU_CB_ATS1PR_LO		0x800
-#define ARM_SMMU_CB_ATS1PR_HI		0x804
+#define ARM_SMMU_CB_ATS1PR		0x800
 #define ARM_SMMU_CB_ATSR		0x8f0
 
 #define SCTLR_S1_ASIDPNE		(1 << 12)
@@ -247,7 +246,7 @@
 #define FSYNR0_WNR			(1 << 4)
 
 static int force_stage;
-module_param_named(force_stage, force_stage, int, S_IRUGO | S_IWUSR);
+module_param_named(force_stage, force_stage, int, S_IRUGO);
 MODULE_PARM_DESC(force_stage,
 	"Force SMMU mappings to be installed at a particular stage of translation. A value of '1' or '2' forces the corresponding stage. All other values are ignored (i.e. no stage is forced). Note that selecting a specific stage will disable support for nested translation.");
@@ -1229,18 +1228,18 @@ static phys_addr_t arm_smmu_iova_to_phys_hard(struct iommu_domain *domain,
 	void __iomem *cb_base;
 	u32 tmp;
 	u64 phys;
+	unsigned long va;
 
 	cb_base = ARM_SMMU_CB_BASE(smmu) + ARM_SMMU_CB(smmu, cfg->cbndx);
 
-	if (smmu->version == 1) {
-		u32 reg = iova & ~0xfff;
-		writel_relaxed(reg, cb_base + ARM_SMMU_CB_ATS1PR_LO);
-	} else {
-		u32 reg = iova & ~0xfff;
-		writel_relaxed(reg, cb_base + ARM_SMMU_CB_ATS1PR_LO);
-		reg = ((u64)iova & ~0xfff) >> 32;
-		writel_relaxed(reg, cb_base + ARM_SMMU_CB_ATS1PR_HI);
-	}
+	/* ATS1 registers can only be written atomically */
+	va = iova & ~0xfffUL;
+#ifdef CONFIG_64BIT
+	if (smmu->version == ARM_SMMU_V2)
+		writeq_relaxed(va, cb_base + ARM_SMMU_CB_ATS1PR);
+	else
+#endif
+		writel_relaxed(va, cb_base + ARM_SMMU_CB_ATS1PR);
 
 	if (readl_poll_timeout_atomic(cb_base + ARM_SMMU_CB_ATSR, tmp,
 				      !(tmp & ATSR_ACTIVE), 5, 50)) {
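The rewritten sequence above relies on readl_poll_timeout_atomic() from
<linux/iopoll.h>.  As a self-contained illustration of that
poll-until-idle idiom (the device, register offset and status bit below
are made up for the example):

#include <linux/io.h>
#include <linux/iopoll.h>

#define DEV_STATUS	0x10		/* hypothetical status register */
#define STATUS_BUSY	(1 << 0)	/* hypothetical busy bit */

/* Spin-read DEV_STATUS every 5us until the busy bit clears, giving up
 * after 50us; returns 0 on success or -ETIMEDOUT. */
static int dev_wait_idle(void __iomem *base)
{
	u32 val;

	return readl_poll_timeout_atomic(base + DEV_STATUS, val,
					 !(val & STATUS_BUSY), 5, 50);
}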
--- a/drivers/iommu/dmar.c
+++ b/drivers/iommu/dmar.c
@@ -26,7 +26,7 @@
 * These routines are used by both DMA-remapping and Interrupt-remapping
 */
 
-#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt /* has to precede printk.h */
+#define pr_fmt(fmt) "DMAR: " fmt
 
 #include <linux/pci.h>
 #include <linux/dmar.h>
@@ -555,7 +555,7 @@ static int dmar_walk_remapping_entries(struct acpi_dmar_header *start,
 			break;
 		} else if (next > end) {
 			/* Avoid passing table end */
-			pr_warn(FW_BUG "record passes table end\n");
+			pr_warn(FW_BUG "Record passes table end\n");
 			ret = -EINVAL;
 			break;
 		}
@@ -802,7 +802,7 @@ int __init dmar_table_init(void)
 	ret = parse_dmar_table();
 	if (ret < 0) {
 		if (ret != -ENODEV)
-			pr_info("parse DMAR table failure.\n");
+			pr_info("Parse DMAR table failure.\n");
 	} else if (list_empty(&dmar_drhd_units)) {
 		pr_info("No DMAR devices found\n");
 		ret = -ENODEV;
@@ -847,7 +847,7 @@ dmar_validate_one_drhd(struct acpi_dmar_header *entry, void *arg)
 	else
 		addr = early_ioremap(drhd->address, VTD_PAGE_SIZE);
 	if (!addr) {
-		pr_warn("IOMMU: can't validate: %llx\n", drhd->address);
+		pr_warn("Can't validate DRHD address: %llx\n", drhd->address);
 		return -EINVAL;
 	}
@@ -921,14 +921,14 @@ static int map_iommu(struct intel_iommu *iommu, u64 phys_addr)
 	iommu->reg_size = VTD_PAGE_SIZE;
 
 	if (!request_mem_region(iommu->reg_phys, iommu->reg_size, iommu->name)) {
-		pr_err("IOMMU: can't reserve memory\n");
+		pr_err("Can't reserve memory\n");
 		err = -EBUSY;
 		goto out;
 	}
 
 	iommu->reg = ioremap(iommu->reg_phys, iommu->reg_size);
 	if (!iommu->reg) {
-		pr_err("IOMMU: can't map the region\n");
+		pr_err("Can't map the region\n");
 		err = -ENOMEM;
 		goto release;
 	}
@@ -952,13 +952,13 @@ static int map_iommu(struct intel_iommu *iommu, u64 phys_addr)
 		iommu->reg_size = map_size;
 
 		if (!request_mem_region(iommu->reg_phys, iommu->reg_size,
					iommu->name)) {
-			pr_err("IOMMU: can't reserve memory\n");
+			pr_err("Can't reserve memory\n");
 			err = -EBUSY;
 			goto out;
 		}
 
 		iommu->reg = ioremap(iommu->reg_phys, iommu->reg_size);
 		if (!iommu->reg) {
-			pr_err("IOMMU: can't map the region\n");
+			pr_err("Can't map the region\n");
 			err = -ENOMEM;
 			goto release;
 		}
@@ -1014,14 +1014,14 @@ static int alloc_iommu(struct dmar_drhd_unit *drhd)
 		return -ENOMEM;
 
 	if (dmar_alloc_seq_id(iommu) < 0) {
-		pr_err("IOMMU: failed to allocate seq_id\n");
+		pr_err("Failed to allocate seq_id\n");
 		err = -ENOSPC;
 		goto error;
 	}
 
 	err = map_iommu(iommu, drhd->reg_base_addr);
 	if (err) {
-		pr_err("IOMMU: failed to map %s\n", iommu->name);
+		pr_err("Failed to map %s\n", iommu->name);
 		goto error_free_seq_id;
 	}
@@ -1045,8 +1045,8 @@ static int alloc_iommu(struct dmar_drhd_unit *drhd)
 	iommu->node = -1;
 
 	ver = readl(iommu->reg + DMAR_VER_REG);
-	pr_info("IOMMU %d: reg_base_addr %llx ver %d:%d cap %llx ecap %llx\n",
-		iommu->seq_id,
+	pr_info("%s: reg_base_addr %llx ver %d:%d cap %llx ecap %llx\n",
+		iommu->name,
 		(unsigned long long)drhd->reg_base_addr,
 		DMAR_VER_MAJOR(ver), DMAR_VER_MINOR(ver),
 		(unsigned long long)iommu->cap,
@@ -1646,13 +1646,13 @@ int dmar_set_interrupt(struct intel_iommu *iommu)
 	if (irq > 0) {
 		iommu->irq = irq;
 	} else {
-		pr_err("IOMMU: no free vectors\n");
+		pr_err("No free IRQ vectors\n");
 		return -EINVAL;
 	}
 
 	ret = request_irq(irq, dmar_fault, IRQF_NO_THREAD, iommu->name, iommu);
 	if (ret)
-		pr_err("IOMMU: can't request irq\n");
+		pr_err("Can't request irq\n");
 	return ret;
 }
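The pr_fmt() change above is what gives every message in this file its
uniform "DMAR: " prefix: pr_fmt() is expanded into the format string of
each pr_*() call.  A simplified sketch of the mechanism (the pr_err()
definition below is abbreviated from <linux/printk.h>):

#define pr_fmt(fmt) "DMAR: " fmt	/* must be defined before printk.h */

/* Abbreviated form of the real macro in <linux/printk.h> */
#define pr_err(fmt, ...) \
	printk(KERN_ERR pr_fmt(fmt), ##__VA_ARGS__)

/* So this call ... */
pr_err("Can't reserve memory\n");
/* ... hits the log as: "DMAR: Can't reserve memory" at KERN_ERR level. */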
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -227,6 +227,7 @@ iova_insert_rbtree(struct rb_root *root, struct iova *iova)
 	/* Figure out where to put new node */
 	while (*new) {
 		struct iova *this = container_of(*new, struct iova, node);
+
 		parent = *new;
 
 		if (iova->pfn_lo < this->pfn_lo)
@@ -350,6 +351,7 @@ void
 free_iova(struct iova_domain *iovad, unsigned long pfn)
 {
 	struct iova *iova = find_iova(iovad, pfn);
+
 	if (iova)
 		__free_iova(iovad, iova);
@@ -369,6 +371,7 @@ void put_iova_domain(struct iova_domain *iovad)
 	node = rb_first(&iovad->rbroot);
 	while (node) {
 		struct iova *iova = container_of(node, struct iova, node);
+
 		rb_erase(node, &iovad->rbroot);
 		free_iova_mem(iova);
 		node = rb_first(&iovad->rbroot);
@@ -482,6 +485,7 @@ copy_reserved_iova(struct iova_domain *from, struct iova_domain *to)
 	for (node = rb_first(&from->rbroot); node; node = rb_next(node)) {
 		struct iova *iova = container_of(node, struct iova, node);
 		struct iova *new_iova;
+
 		new_iova = reserve_iova(to, iova->pfn_lo, iova->pfn_hi);
 		if (!new_iova)
 			printk(KERN_ERR "Reserve iova range %lx@%lx failed\n",
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.