Commit 66bdb147 authored by Christoph Hellwig, committed by Ingo Molnar

swiotlb: Use dma_direct_supported() for swiotlb_ops

swiotlb_alloc() calls dma_direct_alloc(), which can satisfy DMA mask requests below
32 bits using GFP_DMA if the architecture supports it.  Various x86 drivers rely on
this, so we need to keep supporting it.  At the same time the whole kernel expects a
32-bit DMA mask to just work, so the extra magic in swiotlb_dma_supported() isn't
actually needed either.
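
For example (an illustrative, made-up driver fragment, not taken from this patch;
foo_probe() and the 24-bit mask are hypothetical), a device that can only address
24 bits exercises exactly this path:

#include <linux/dma-mapping.h>

static int foo_probe(struct device *dev)
{
	dma_addr_t dma_handle;
	void *buf;

	/*
	 * The mask request is vetted by the ops' ->dma_supported() hook,
	 * i.e. dma_direct_supported() after this patch.
	 */
	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(24)))
		return -EIO;

	/*
	 * The allocation may fall back to GFP_DMA memory via
	 * dma_direct_alloc() so that the buffer is reachable with the
	 * 24-bit mask.
	 */
	buf = dma_alloc_coherent(dev, PAGE_SIZE, &dma_handle, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	dma_free_coherent(dev, PAGE_SIZE, buf, dma_handle);
	return 0;
}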
Reported-by: Dominik Brodowski <linux@dominikbrodowski.net>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: iommu@lists.linux-foundation.org
Fixes: 6e4bf586 ("x86/dma: Use generic swiotlb_ops")
Link: http://lkml.kernel.org/r/20180409091517.6619-2-hch@lst.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent 9a3b7e5e
@@ -1130,6 +1130,6 @@ const struct dma_map_ops swiotlb_dma_ops = {
 	.unmap_sg = swiotlb_unmap_sg_attrs,
 	.map_page = swiotlb_map_page,
 	.unmap_page = swiotlb_unmap_page,
-	.dma_supported = swiotlb_dma_supported,
+	.dma_supported = dma_direct_supported,
 };
 #endif /* CONFIG_DMA_DIRECT_OPS */
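
For reference, a rough sketch (not the exact upstream code) of what the new
->dma_supported() hook boils down to: it only rejects masks that the direct mapping
cannot serve at all, assuming a 24-bit ZONE_DMA floor (as on x86) where that zone
exists and treating a 32-bit mask as always workable otherwise.

#include <linux/dma-mapping.h>

/* Illustrative sketch only; the function name is made up. */
static int dma_direct_supported_sketch(struct device *dev, u64 mask)
{
	/*
	 * With ZONE_DMA the small zone (assumed 24 bits here) is the floor;
	 * without it a 32-bit mask is simply expected to just work.
	 */
	if (IS_ENABLED(CONFIG_ZONE_DMA))
		return mask >= DMA_BIT_MASK(24);
	return mask >= DMA_BIT_MASK(32);
}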