    iommu/vt-d: Don't over-free page table directories · f7116e11
    David Dillow authored
    dma_pte_free_level() recurses down the IOMMU page tables and frees
    directory pages that are entirely contained in the given PFN range.
    Unfortunately, it incorrectly calculates the starting address covered
    by the PTE under consideration, which can lead to it clearing an entry
    that is still in use.
    
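    As a rough illustration of the walk described above, here is a
    simplified user-space model of the decision that goes wrong. It is not
    the kernel's exact code, and may_free_directory() is a hypothetical
    name, but level_size()/level_mask() follow the VT-d layout in which a
    level-N entry covers 2^(9*(N-1)) 4 KB pages:

        #include <stdbool.h>

        /* A level-N entry covers 2^(9*(N-1)) 4 KB pages:
         * level 2 = 2 MB, level 3 = 1 GB. */
        static unsigned long level_size(int level)
        {
                return 1UL << (9 * (level - 1));
        }

        static unsigned long level_mask(int level)
        {
                return ~(level_size(level) - 1);
        }

        /* Should the directory behind the PTE that maps 'pfn' at 'level'
         * be freed while clearing start_pfn..last_pfn?  Only if the PTE's
         * entire coverage lies inside that range. */
        static bool may_free_directory(int level, unsigned long pfn,
                                       unsigned long start_pfn,
                                       unsigned long last_pfn)
        {
                /* Buggy: aligned to the level below, so level_pfn can sit
                 * in the middle of the region this PTE actually covers:
                 *     level_pfn = pfn & level_mask(level - 1);
                 * Fixed: align to the current level's coverage. */
                unsigned long level_pfn = pfn & level_mask(level);

                return start_pfn <= level_pfn &&
                       last_pfn >= level_pfn + level_size(level) - 1;
        }
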
    This occurs if we have a scatterlist with an entry that has a length
    greater than 1026 MB and is aligned to 2 MB for both the IOMMU and
    physical addresses. For example, if __domain_mapping() is asked to map a
    two-entry scatterlist with 2 MB and 1028 MB segments to PFN 0xffff80000,
    it will ask dma_pte_free_pagetable() to free the PFNs from
    0xffff80200 to 0xffffc05ff, and because of this issue it will also
    incorrectly clear the PFNs from 0xffff80000 to 0xffff801ff. The
    current code sets level_pfn to 0xffff80200, and 0xffff80200-0xffffc01ff
    fits inside the range being cleared, so the directory entry looks
    entirely contained and is freed. Properly setting level_pfn for the
    level under consideration catches that this PTE also covers addresses
    outside of the range being cleared.
    
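    To make the arithmetic of that example concrete, the following
    stand-alone check (assuming a 64-bit host; the masks are the
    level_size()/level_mask() values for levels 2 and 3, inlined) shows
    why the buggy level_pfn passes the containment test while the
    corrected one does not:

        #include <assert.h>
        #include <stdio.h>

        int main(void)
        {
                /* Range handed to dma_pte_free_pagetable() in the example. */
                unsigned long start_pfn = 0xffff80200UL;
                unsigned long last_pfn  = 0xffffc05ffUL;

                /* A level-3 PTE covers 1 GB = 0x40000 4 KB pages. */
                unsigned long size3 = 1UL << 18;

                /* pfn & level_mask(2) -- the buggy alignment */
                unsigned long buggy = start_pfn & ~((1UL << 9) - 1);
                /* pfn & level_mask(3) -- the corrected alignment */
                unsigned long fixed = start_pfn & ~(size3 - 1);

                printf("buggy level_pfn = %#lx\n", buggy);  /* 0xffff80200 */
                printf("fixed level_pfn = %#lx\n", fixed);  /* 0xffff80000 */

                /* Buggy value: 0xffff80200..0xffffc01ff lies inside the
                 * range, so the level-2 table (which still maps
                 * 0xffff80000-0xffff801ff) would be freed. */
                assert(start_pfn <= buggy && last_pfn >= buggy + size3 - 1);

                /* Fixed value: start_pfn > level_pfn, so the PTE is kept. */
                assert(!(start_pfn <= fixed && last_pfn >= fixed + size3 - 1));
                return 0;
        }
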
    This patch also changes the value passed into dma_pte_free_level() when
    it recurses. This only affects the first PTE of the range being cleared,
    and is handled by the existing code that ensures we start our cursor no
    lower than start_pfn.
    
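    The existing code referred to here is presumably the cursor clamp at
    the top of dma_pte_free_level() that starts the walk no lower than
    start_pfn. A minimal sketch, with a hypothetical name and none of the
    kernel's PTE handling, of why a level_pfn below start_pfn is harmless:

        /* Sketch: the level_pfn passed down for the first PTE may now lie
         * below start_pfn, but the cursor is clamped before any entry is
         * visited, so nothing below start_pfn is walked or freed. */
        static void free_level_sketch(int level, unsigned long pfn,
                                      unsigned long start_pfn,
                                      unsigned long last_pfn)
        {
                unsigned long step = 1UL << (9 * (level - 1)); /* level_size */

                if (pfn < start_pfn)            /* max(start_pfn, pfn) */
                        pfn = start_pfn;

                do {
                        /* ... visit the entry covering 'pfn', recursing
                         * into its child table with the level-aligned
                         * level_pfn ... */
                        pfn += step;
                } while (pfn <= last_pfn);
        }
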
    This was found when using dma_map_sg() to map large chunks of contiguous
    memory, which immediately led to faults on the first access of the
    erroneously-deleted mappings.
    
    Fixes: 3269ee0b ("intel-iommu: Fix leaks in pagetable freeing")
    Reviewed-by: Benjamin Serebrin <serebrin@google.com>
    Signed-off-by: David Dillow <dillow@google.com>
    Signed-off-by: Joerg Roedel <jroedel@suse.de>