- 11 Sep, 2019 9 commits
-
-
Christoph Hellwig authored
Now that we know we always have the dma-noncoherent.h helpers available if we are on an architecture with support for non-coherent devices, we can just call them directly, and remove the calls to the dma-direct routines, including the dma_direct_map_page calls whose return value was ignored anyway. Instead we now have Xen wrappers for the arch_sync_dma_for_{device,cpu} helpers that call the special Xen versions of those routines for foreign pages. Note that the new helpers get the physical address passed in addition to the dma address to avoid another translation for the local cache maintenance. The pfn_valid checks remain on the dma address as in the old code, even if that looks a little funny. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org> Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
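For context, a minimal sketch of the wrapper shape this describes, modeled on the arch/arm/xen code of this era (the signature and the dma_cache_maint fallback are from memory and may differ in detail):

```c
#include <linux/dma-noncoherent.h>
#include <xen/page.h>

/*
 * Sketch: pages backed by local memory get the regular architecture
 * cache maintenance, operating on the physical address passed in
 * alongside the dma address so no extra translation is needed.
 * Foreign pages fall back to a hypercall-based flush. Note the
 * pfn_valid() check is done on the dma address, as described above.
 */
void xen_dma_sync_for_cpu(struct device *dev, dma_addr_t handle,
			  phys_addr_t paddr, size_t size,
			  enum dma_data_direction dir)
{
	if (pfn_valid(PFN_DOWN(handle)))
		arch_sync_dma_for_cpu(dev, paddr, size, dir);
	else if (dir != DMA_TO_DEVICE)
		dma_cache_maint(handle, size, GNTTAB_CACHE_INVAL);
}
```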
-
Christoph Hellwig authored
xen_dma_map_page uses a different and more complicated check for foreign pages than the other three cache maintenance helpers. Switch it to the simpler pfn_valid method as well, and document the scheme with a single improved comment in xen_dma_map_page. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
-
Christoph Hellwig authored
There is no need to wrap the common version, just wire them up directly. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
-
Christoph Hellwig authored
These routines are only used by swiotlb-xen, which cannot be modular. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
-
Christoph Hellwig authored
arm and arm64 can just use xen_swiotlb_dma_ops directly like x86, no need for a pointer indirection. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Julien Grall <julien.grall@arm.com> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
-
Christoph Hellwig authored
Calculate the required operation in the caller, and pass it directly instead of recalculating it for each page, and use simple arithmetic to get from the physical address to Xen page size aligned chunks. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
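The chunking this describes reads roughly as follows (a sketch modeled on arch/arm/xen/mm.c; op is the GNTTABOP_cache_flush operation chosen by the caller, e.g. GNTTAB_CACHE_CLEAN or GNTTAB_CACHE_INVAL):

```c
/* Sketch: issue one cache-flush hypercall per XEN_PAGE_SIZE chunk. */
static void dma_cache_maint(dma_addr_t handle, size_t size, u32 op)
{
	struct gnttab_cache_flush cflush;

	cflush.offset = xen_offset_in_page(handle);
	cflush.op = op;
	handle &= XEN_PAGE_MASK;	/* align down to a Xen page */

	do {
		cflush.a.dev_bus_addr = handle;

		/* Clamp each chunk to the end of the current Xen page. */
		if (size + cflush.offset > XEN_PAGE_SIZE)
			cflush.length = XEN_PAGE_SIZE - cflush.offset;
		else
			cflush.length = size;

		HYPERVISOR_grant_table_op(GNTTABOP_cache_flush, &cflush, 1);

		cflush.offset = 0;
		handle += cflush.length;
		size -= cflush.length;
	} while (size);
}
```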
-
Christoph Hellwig authored
Use the dma-noncoherent dev_is_dma_coherent helper instead of the home-grown variant. Note that both are always initialized to the same value in arch_setup_dma_ops. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Julien Grall <julien.grall@arm.com> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
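For reference, the common helper is essentially a flag read; a minimal sketch (the flag is recorded by arch_setup_dma_ops):

```c
/* Sketch: dev->dma_coherent is set once in arch_setup_dma_ops(),
 * so the query boils down to a trivial inline. */
static inline bool dev_is_dma_coherent(struct device *dev)
{
	return dev->dma_coherent;
}
```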
-
Christoph Hellwig authored
Share the duplicate arm/arm64 code in include/xen/arm/page-coherent.h. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
-
Christoph Hellwig authored
Copy the arm64 code that uses the dma-direct/swiotlb helpers for DMA on non-coherent devices. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
-
- 04 Sep, 2019 17 commits
-
-
Christoph Hellwig authored
Remove a few tiny wrappers around the generic dma remap code. Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Christoph Hellwig authored
A helper to find the backing page array based on a virtual address. This also ensures we do the same vm_flags check everywhere instead of slightly different or missing ones in a few places. Signed-off-by: Christoph Hellwig <hch@lst.de>
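A sketch of what such a lookup does, assuming the VM_DMA_COHERENT flag from the commit below (the helper name follows kernel/dma/remap.c, from memory):

```c
#include <linux/vmalloc.h>

/* Sketch: resolve a CPU address to its vmalloc area and return the
 * backing page array, but only for mappings created for DMA. */
struct page **dma_common_find_pages(void *cpu_addr)
{
	struct vm_struct *area = find_vm_area(cpu_addr);

	if (!area || area->flags != VM_DMA_COHERENT)
		return NULL;
	return area->pages;
}
```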
-
Christoph Hellwig authored
Currently the generic dma remap allocator gets passed a vm_flags value by the caller, which is a little confusing. We just introduced a generic vmalloc-level flag to identify the dma coherent allocations, so use that everywhere and remove the now pointless argument. Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Christoph Hellwig authored
The arm architecture had a VM_ARM_DMA_CONSISTENT flag to mark DMA coherent remapping for a while. Lift this flag to common code so that we can use it generically. We also check it in the only place VM_USERMAP is directly checked, so that we can entirely replace that flag as well (I'm not even sure why we'd want to allow remapping DMA allocations, but I'd rather not change behavior). Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Christoph Hellwig authored
Most dma_map_ops instances are IOMMUs that work perfectly fine in 32 bits of IOVA space, and the generic direct mapping code already provides its own routine that adapts to the amount of memory actually present. Wire up the dma-direct routine for the ARM direct mapping code as well, and otherwise default to a constant 32-bit mask. This way we only need to override it for the occasional odd IOMMU that requires 64-bit IOVA support, or IOMMU drivers that are more efficient if they can fall back to the direct mapping. Signed-off-by: Christoph Hellwig <hch@lst.de>
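The resulting default might be sketched like this (modeled on kernel/dma/mapping.c of this era, where a NULL ops pointer meant dma-direct):

```c
/* Sketch: prefer the dma-direct heuristic, then a per-ops override,
 * and fall back to a plain 32-bit mask for everything else. */
u64 dma_get_required_mask(struct device *dev)
{
	const struct dma_map_ops *ops = get_dma_ops(dev);

	if (dma_is_direct(ops))
		return dma_direct_get_required_mask(dev);
	if (ops->get_required_mask)
		return ops->get_required_mask(dev);
	return DMA_BIT_MASK(32);
}
```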
-
Christoph Hellwig authored
dma_declare_coherent_memory is something that the platform setup code (which pretty much means the device tree these days) needs to do so that drivers can use the memory as declared by the platform. Drivers themselves have no business calling this function. Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Christoph Hellwig authored
Remoteproc started using dma_declare_coherent_memory recently, which is a bad idea from drivers, and the maintainers agreed to fix that. But until that is fixed, only allow the driver to be built in, so that we can remove the dma_declare_coherent_memory export and prevent other drivers from "accidentally" using it like remoteproc. Note that, as things stand, the driver would also leak the declared coherent memory on unload if it were actually built as a module. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Bjorn Andersson <bjorn.andersson@linaro.org>
-
Christoph Hellwig authored
dma_mmap_from_dev_coherent is only used by dma_map_ops instances, none of which is modular. Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Christoph Hellwig authored
This function is entirely unused given that declared memory is generally provided by platform setup code. Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Christoph Hellwig authored
We can already use DMA_ATTR_WRITE_COMBINE or the _wc prefixed version, so remove the third way of doing things. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Reviewed-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
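For reference, the surviving _wc helper is already a thin wrapper over dma_alloc_attrs(), so a third spelling added nothing; a sketch:

```c
/* Sketch: write-combine allocation expressed via the attrs interface. */
static inline void *dma_alloc_wc(struct device *dev, size_t size,
				 dma_addr_t *dma_addr, gfp_t gfp)
{
	return dma_alloc_attrs(dev, size, dma_addr, gfp,
			       DMA_ATTR_WRITE_COMBINE);
}
```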
-
Christoph Hellwig authored
CONFIG_ARCH_NO_COHERENT_DMA_MMAP is now functionally identical to !CONFIG_MMU, so remove the separate symbol. The only difference is that arm did not set it for !CONFIG_MMU, but arm uses a separate dma mapping implementation including its own mmap method, which is handled by moving the CONFIG_MMU check in dma_can_mmap so that it only applies to the dma-direct case, just as the other ifdefs for it. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> # m68k
-
Christoph Hellwig authored
parisc is the only architecture that sets ARCH_NO_COHERENT_DMA_MMAP when an MMU is enabled. AFAIK this is because parisc CPUs use VIVT caches, which means exporting normally cacheable memory to userspace is relatively dangerous due to cache aliasing. But normally cacheable memory is only allocated by dma_alloc_coherent on parisc when using the sba_iommu or ccio_iommu drivers, so just remove the .mmap implementation for them so that we don't have to set ARCH_NO_COHERENT_DMA_MMAP, which I plan to get rid of. Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Christoph Hellwig authored
There is no need to go through dma_common_mmap for the arm-nommu dma mmap implementation as the only possible memory not handled above could be that from the per-device coherent pool. Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Christoph Hellwig authored
Replace the local hack with the dma_can_mmap helper to check if a given device supports mapping DMA allocations to userspace. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Takashi Iwai <tiwai@suse.de>
-
Christoph Hellwig authored
Add a helper to check if DMA allocations for a specific device can be mapped to userspace using dma_mmap_*. Signed-off-by: Christoph Hellwig <hch@lst.de>
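A simplified sketch of the helper's shape (the real dma-direct branch also considers CONFIG_DMA_NONCOHERENT_MMAP and similar details):

```c
/* Sketch: dma_mmap_* works if dma-direct can remap on this config,
 * or if the dma_map_ops instance wired up a ->mmap method. */
bool dma_can_mmap(struct device *dev)
{
	const struct dma_map_ops *ops = get_dma_ops(dev);

	if (dma_is_direct(ops))
		return IS_ENABLED(CONFIG_MMU) && dev_is_dma_coherent(dev);
	return ops->mmap != NULL;
}
```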
-
Christoph Hellwig authored
While the default ->mmap and ->get_sgtable implementations work for the majority of our dma_map_ops implementations, they are inherently unsafe for others that don't use the page allocator or CMA and/or use their own way of remapping not covered by the common code. So remove the defaults if these methods are not wired up, and instead wire up the default implementations for all safe instances. Fixes: e1c7e324 ("dma-mapping: always provide the dma_map_ops based implementation") Signed-off-by: Christoph Hellwig <hch@lst.de>
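In practice "explicitly wired up" means each safe instance now names the generic helpers itself; a sketch with a hypothetical ops instance (the example_* names are made up, dma_common_mmap and dma_common_get_sgtable are the generic implementations):

```c
static const struct dma_map_ops example_dma_ops = {
	.alloc		= example_alloc,	/* hypothetical */
	.free		= example_free,		/* hypothetical */
	/* Previously inherited implicitly, now an explicit opt-in: */
	.mmap		= dma_common_mmap,
	.get_sgtable	= dma_common_get_sgtable,
};
```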
-
Christoph Hellwig authored
The comments are spot on and should be near the central API, not just near a single implementation. Signed-off-by: Christoph Hellwig <hch@lst.de>
-
- 03 Sep, 2019 5 commits
-
-
Andy Shevchenko authored
After commit cf65a0f6 ("dma-mapping: move all DMA mapping code to kernel/dma") some of the files refer to outdated information, i.e. the old file names of the DMA mapping sources. Fix that here. Note, the lines with "Glue code for..." have been removed completely. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Yoshihiro Shimoda authored
This patch adds a new dma_map_ops callback, get_merge_boundary(), to expose the DMA merge boundary if the domain type is IOMMU_DOMAIN_DMA. Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com> Reviewed-by: Simon Horman <horms+renesas@verge.net.au> Acked-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Yoshihiro Shimoda authored
This patch adds a new DMA API "dma_get_merge_boundary". This function returns the DMA merge boundary if the DMA layer can merge the segments. This patch also adds the implementation for a new dma_map_ops pointer. Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com> Reviewed-by: Simon Horman <horms+renesas@verge.net.au> Signed-off-by: Christoph Hellwig <hch@lst.de>
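A sketch of the API's shape (modeled on kernel/dma/mapping.c; a return value of 0 means the segments cannot be merged):

```c
/* Sketch: delegate to the dma_map_ops instance when it can report a
 * merge boundary, e.g. the IOVA granule of an IOMMU. */
unsigned long dma_get_merge_boundary(struct device *dev)
{
	const struct dma_map_ops *ops = get_dma_ops(dev);

	if (!ops || !ops->get_merge_boundary)
		return 0;	/* can't merge */
	return ops->get_merge_boundary(dev);
}
```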
-
Yoshihiro Shimoda authored
When the max_segs of an mmc host is smaller than 512, the mmc subsystem tries to use 512 segments if the DMA map layer can merge the segments, and then exposes this capability to the block layer by using blk_queue_can_use_dma_map_merging(). Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com> Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org> Reviewed-by: Simon Horman <horms+renesas@verge.net.au> Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Yoshihiro Shimoda authored
This patch adds a helper function to check whether a queue can merge segments via the DMA map layer (e.g. via an IOMMU). Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Simon Horman <horms+renesas@verge.net.au> Signed-off-by: Christoph Hellwig <hch@lst.de>
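A sketch of the helper, assuming the dma_get_merge_boundary() API from the commits above (modeled on block/blk-settings.c):

```c
/* Sketch: only enable virt-boundary based merging when the DMA layer
 * reports a non-zero merge boundary for this device. */
bool blk_queue_can_use_dma_map_merging(struct request_queue *q,
				       struct device *dev)
{
	unsigned long boundary = dma_get_merge_boundary(dev);

	if (!boundary)
		return false;
	blk_queue_virt_boundary(q, boundary);
	return true;
}
```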
-
- 29 Aug, 2019 4 commits
-
-
Christoph Hellwig authored
Based on an email from Paul Burton, quoting section 4.8 "Cacheability and Coherency Attributes and Access Types" of "MIPS Architecture Volume 1: Introduction to the MIPS32 Architecture" (MD00080, revision 6.01). Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Paul Burton <paul.burton@mips.com>
-
Christoph Hellwig authored
Based on an email from Will Deacon. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Will Deacon <will@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com>
-
Christoph Hellwig authored
The memory allocated for the atomic pool needs to have the same mapping attributes that we use for remapping, so use pgprot_dmacoherent instead of open coding it. Also deduce a suitable zone to allocate the memory from based on the presence of the DMA zones. Signed-off-by: Christoph Hellwig <hch@lst.de>
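The zone deduction amounts to picking the most restrictive configured DMA zone; a sketch with a hypothetical helper name (atomic_pool_gfp is made up for illustration):

```c
/* Sketch: prefer ZONE_DMA, then ZONE_DMA32, else plain kernel memory;
 * the remap itself then uses pgprot_dmacoherent(PAGE_KERNEL). */
static gfp_t atomic_pool_gfp(void)
{
	if (IS_ENABLED(CONFIG_ZONE_DMA))
		return GFP_KERNEL | GFP_DMA;
	if (IS_ENABLED(CONFIG_ZONE_DMA32))
		return GFP_KERNEL | GFP_DMA32;
	return GFP_KERNEL;
}
```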
-
Christoph Hellwig authored
arch_dma_mmap_pgprot is used for two things: 1) to override the "normal" uncached page attributes for mapping memory coherent to devices that can't snoop the CPU caches 2) to provide the special DMA_ATTR_WRITE_COMBINE semantics on older arm systems and some mips platforms Replace the former with the pgprot_dmacoherent macro that arm already provides and that is much simpler to use, and lift the DMA_ATTR_WRITE_COMBINE handling to common code with an explicit arch opt-in. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> # m68k Acked-by: Paul Burton <paul.burton@mips.com> # mips
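The lifted common logic might look like this sketch (modeled on the dma_pgprot() helper this series introduces; the non-consistent special cases are elided):

```c
/* Sketch: coherent devices keep cacheable attributes; otherwise honor
 * DMA_ATTR_WRITE_COMBINE only where the architecture opted in, and
 * default to the uncached pgprot_dmacoherent() attributes. */
pgprot_t dma_pgprot(struct device *dev, pgprot_t prot, unsigned long attrs)
{
	if (dev_is_dma_coherent(dev))
		return prot;
	if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_WRITE_COMBINE) &&
	    (attrs & DMA_ATTR_WRITE_COMBINE))
		return pgprot_writecombine(prot);
	return pgprot_dmacoherent(prot);
}
```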
-
- 26 Aug, 2019 2 commits
-
-
Christoph Hellwig authored
Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Christoph Hellwig authored
Signed-off-by: Christoph Hellwig <hch@lst.de>
-
- 25 Aug, 2019 3 commits
-
-
Linus Torvalds authored
-
Linus Torvalds authored
Pull auxdisplay cleanup from Miguel Ojeda: "Make ht16k33_fb_fix and ht16k33_fb_var constant (Nishka Dasgupta)" * tag 'auxdisplay-for-linus-v5.3-rc7' of git://github.com/ojeda/linux: auxdisplay: ht16k33: Make ht16k33_fb_fix and ht16k33_fb_var constant
-
Linus Torvalds authored
Pull UML fix from Richard Weinberger: "Fix time travel mode" * tag 'for-linus-5.3-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/rw/uml: um: fix time travel mode
-