Commit 2d6ffc63 authored by Lu Baolu, committed by Will Deacon

iommu/vt-d: Fix unaligned addresses for intel_flush_svm_range_dev()

The VT-d hardware ignores the Addr bits that are masked out by the AM
field in the PASID-based-IOTLB invalidation descriptor. As a result, if
the starting address in the descriptor is not aligned to the address
mask, some IOTLB entries are not invalidated. For example, a two-page
flush (AM = 1) whose start address has bit 12 set only covers the
naturally aligned 8KiB block containing that address and misses the
second page. Hence users will see errors like the ones below.

[ 1093.704661] dmar_fault: 29 callbacks suppressed
[ 1093.704664] DMAR: DRHD: handling fault status reg 3
[ 1093.712738] DMAR: [DMA Read] Request device [7a:02.0] PASID 2
               fault addr 7f81c968d000 [fault reason 113]
               SM: Present bit in first-level paging entry is clear

Fix this by using aligned addresses for the PASID-based-IOTLB invalidation.

Fixes: 1c4f88b7 ("iommu/vt-d: Shared virtual address in scalable mode")
Reported-and-tested-by: Guo Kaijie <Kaijie.Guo@intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/20201231005323.2178523-2-baolu.lu@linux.intel.com
Signed-off-by: Will Deacon <will@kernel.org>
parent 7c29ada5
@@ -118,8 +118,10 @@ void intel_svm_check(struct intel_iommu *iommu)
 	iommu->flags |= VTD_FLAG_SVM_CAPABLE;
 }
 
-static void intel_flush_svm_range_dev (struct intel_svm *svm, struct intel_svm_dev *sdev,
-				unsigned long address, unsigned long pages, int ih)
+static void __flush_svm_range_dev(struct intel_svm *svm,
+				  struct intel_svm_dev *sdev,
+				  unsigned long address,
+				  unsigned long pages, int ih)
 {
 	struct qi_desc desc;
 
@@ -170,6 +172,22 @@ static void intel_flush_svm_range_dev (struct intel_svm *svm, struct intel_svm_d
 	}
 }
 
+static void intel_flush_svm_range_dev(struct intel_svm *svm,
+				      struct intel_svm_dev *sdev,
+				      unsigned long address,
+				      unsigned long pages, int ih)
+{
+	unsigned long shift = ilog2(__roundup_pow_of_two(pages));
+	unsigned long align = (1ULL << (VTD_PAGE_SHIFT + shift));
+	unsigned long start = ALIGN_DOWN(address, align);
+	unsigned long end = ALIGN(address + (pages << VTD_PAGE_SHIFT), align);
+
+	while (start < end) {
+		__flush_svm_range_dev(svm, sdev, start, align >> VTD_PAGE_SHIFT, ih);
+		start += align;
+	}
+}
+
 static void intel_flush_svm_range(struct intel_svm *svm, unsigned long address,
 				unsigned long pages, int ih)
 {
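For illustration only (not part of the commit), below is a small stand-alone C sketch of the range-splitting math that the new intel_flush_svm_range_dev() wrapper performs. The kernel helpers ALIGN, ALIGN_DOWN, ilog2() and __roundup_pow_of_two() are re-implemented locally so the example compiles in user space, and the sample address and page count are made up for the demonstration.

/* Stand-alone illustration of the aligned-range computation above. */
#include <stdio.h>

#define VTD_PAGE_SHIFT	12UL

/* Local stand-ins for the kernel helpers used in the patch. */
static unsigned long align_down(unsigned long x, unsigned long a)
{
	return x & ~(a - 1);
}

static unsigned long align_up(unsigned long x, unsigned long a)
{
	return (x + a - 1) & ~(a - 1);
}

static unsigned long roundup_pow_of_two(unsigned long x)
{
	unsigned long r = 1;

	while (r < x)
		r <<= 1;
	return r;
}

static unsigned long ilog2(unsigned long x)
{
	unsigned long l = 0;

	while (x > 1) {
		x >>= 1;
		l++;
	}
	return l;
}

int main(void)
{
	/* Hypothetical unaligned request: 2 pages at a non-8KiB-aligned VA. */
	unsigned long address = 0x7f81c968d000UL;
	unsigned long pages = 2;

	unsigned long shift = ilog2(roundup_pow_of_two(pages));
	unsigned long align = 1UL << (VTD_PAGE_SHIFT + shift);
	unsigned long start = align_down(address, align);
	unsigned long end = align_up(address + (pages << VTD_PAGE_SHIFT), align);

	/* Each iteration corresponds to one aligned PASID-based-IOTLB flush. */
	while (start < end) {
		printf("flush %#lx, %lu pages\n", start, align >> VTD_PAGE_SHIFT);
		start += align;
	}
	return 0;
}

With the sample values above the sketch prints two 8KiB flushes covering 0x7f81c968c000 through 0x7f81c9690000: the unaligned request is split into naturally aligned power-of-two chunks, so every descriptor's address is aligned to its AM field and no page in the requested range is skipped.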