1. 01 Apr, 2014 1 commit
  2. 28 Mar, 2014 2 commits
  3. 25 Mar, 2014 1 commit
    • 
      iommu/vt-d: Check for NULL pointer in dmar_acpi_dev_scope_init() · 11f1a776
      Joerg Roedel authored
      When ir_dev_scope_init() is called via a rootfs initcall it
      checks irq_remapping_enabled before calling (indirectly)
      into dmar_acpi_dev_scope_init(), which dereferences the
      dmar_tbl pointer without any NULL check.
      
      The AMD IOMMU driver also sets the irq_remapping_enabled
      flag which causes the dmar_acpi_dev_scope_init() function to
      be called on systems with AMD IOMMU hardware too, causing a
      boot-time kernel crash.
      Signed-off-by: Joerg Roedel <joro@8bytes.org>
  4. 24 Mar, 2014 30 commits
  5. 20 Mar, 2014 3 commits
  6. 19 Mar, 2014 3 commits
    • 
      iommu/vt-d: Be less pessimistic about domain coherency where possible · d0501960
      David Woodhouse authored
      In commit 2e12bc29 ("intel-iommu: Default to non-coherent for domains
      unattached to iommus") we decided to err on the side of caution and
      always assume that it's possible that a device will be attached which is
      behind a non-coherent IOMMU.
      
      In some cases, however, that just *cannot* happen. If there *are* no
      IOMMUs in the system which are non-coherent, then we don't need to do
      it. And flushing the dcache is a *significant* performance hit.
      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
    • 
      iommu/vt-d: Clean up and fix page table clear/free behaviour · ea8ea460
      David Woodhouse authored
      There is a race condition between the existing clear/free code and the
      hardware. The IOMMU is actually permitted to cache the intermediate
      levels of the page tables, and doesn't need to walk the table from the
      very top of the PGD each time. So the existing back-to-back calls to
      dma_pte_clear_range() and dma_pte_free_pagetable() can lead to a
      use-after-free where the IOMMU reads from a freed page table.
      
      When freeing page tables we actually need to do the IOTLB flush, with
      the 'invalidation hint' bit clear to indicate that it's not just a
      leaf-node flush, after unlinking each page table page from the next level
      up but before actually freeing it.
      
      So in the rewritten domain_unmap() we just return a list of pages (using
      pg->freelist to make a list of them), and then the caller is expected to
      do the appropriate IOTLB flush (or tear down the domain completely,
      whatever), before finally calling dma_free_pagelist() to free the pages.
      
      As an added bonus, we no longer need to flush the CPU's data cache for
      pages which are about to be *removed* from the page table hierarchy anyway,
      in the non-cache-coherent case. This drastically improves the performance
      of large unmaps.
      
      As a side-effect of all these changes, this also fixes the fact that
      intel_iommu_unmap() was neglecting to free the page tables for the range
      in question after clearing them.
      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>