- 19 Oct, 2018 9 commits
-
-
Christoph Hellwig authored
Handle architectures that are not cache coherent directly in the main swiotlb code by calling arch_sync_dma_for_{device,cpu} in all the right places from the various dma_map/unmap/sync methods when the device is non-coherent. Because swiotlb now uses dma_direct_alloc for the coherent allocation, that side is already taken care of by the dma-direct code calling into arch_dma_{alloc,free} for devices that are non-coherent. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
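A minimal sketch of the call pattern this describes, assuming the dma-noncoherent helpers from this same series (the wrapper name is hypothetical, not the upstream diff):

```c
#include <linux/dma-direct.h>
#include <linux/dma-noncoherent.h>

/* Illustrative wrapper: after setting up a streaming mapping, a
 * non-coherent device still needs the arch cache-maintenance hook. */
static void swiotlb_sync_after_map(struct device *dev, dma_addr_t dma_addr,
				   size_t size, enum dma_data_direction dir)
{
	/* coherent devices need no cache maintenance */
	if (dev_is_dma_coherent(dev))
		return;
	arch_sync_dma_for_device(dev, dma_to_phys(dev, dma_addr), size, dir);
}
```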
-
Christoph Hellwig authored
All architectures that support swiotlb also have a zone that backs up these less-than-full-addressing allocations (usually ZONE_DMA32). Because of that, it is rather pointless to fall back to the global swiotlb buffer if the normal dma direct allocation failed - the only thing this will do is eat up bounce buffers that would be more useful for serving streaming mappings. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
Christoph Hellwig authored
Remove the somewhat useless map_single function, and replace it with a swiotlb_bounce_page handler that handles everything related to actually bouncing a page. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
Christoph Hellwig authored
No need to duplicate the code - map_sg is equivalent to map_page for each page in the scatterlist. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
Christoph Hellwig authored
Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
Christoph Hellwig authored
Like all other dma mapping drivers, just return an error code instead of an actual memory buffer. The reason for the overflow buffer was that, at the time swiotlb was invented, there was no way to check for dma mapping errors, but this has long been fixed. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
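The driver-side pattern that makes the overflow buffer unnecessary, as a short sketch (the helper name is hypothetical):

```c
#include <linux/dma-mapping.h>

/* Hypothetical driver helper: every streaming mapping is checked with
 * dma_mapping_error() and the driver backs off on failure. */
static int queue_tx_buffer(struct device *dev, void *buf, size_t len,
			   dma_addr_t *dma_out)
{
	dma_addr_t addr = dma_map_single(dev, buf, len, DMA_TO_DEVICE);

	if (dma_mapping_error(dev, addr))
		return -ENOMEM;		/* back off instead of using a bogus address */
	*dma_out = addr;
	return 0;
}
```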
-
Christoph Hellwig authored
All properly written drivers now have error handling in the dma_map_single / dma_map_page callers. As swiotlb_tbl_map_single already prints a useful warning when running out of swiotlb pool space, we can also remove swiotlb_full entirely, as it serves no purpose now. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Robin Murphy <robin.murphy@arm.com>
-
Christoph Hellwig authored
Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
Christoph Hellwig authored
This comment describes an aspect of the map_sg interface that isn't even exploited by swiotlb. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
- 09 Oct, 2018 3 commits
-
-
Christoph Hellwig authored
Respect the DMA_ATTR_NO_WARN flags for allocations in dma-direct. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Robin Murphy <robin.murphy@arm.com>
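A short sketch of how the attribute is typically honoured in an allocator (the helper name is illustrative): DMA_ATTR_NO_WARN maps naturally onto __GFP_NOWARN before the page allocator is called.

```c
#include <linux/dma-mapping.h>
#include <linux/gfp.h>

/* Hypothetical helper: fold the DMA attribute into the gfp flags so a
 * failed allocation stays quiet. */
static gfp_t dma_attrs_to_gfp(unsigned long attrs, gfp_t gfp)
{
	if (attrs & DMA_ATTR_NO_WARN)
		gfp |= __GFP_NOWARN;
	return gfp;
}
```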
-
Christoph Hellwig authored
This allows all dma_map_ops instances to entirely rely on DMA_ATTR_NO_WARN going forward. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Robin Murphy <robin.murphy@arm.com>
-
Christoph Hellwig authored
What we are doing here isn't quite obvious, so add a comment explaining it. Signed-off-by: Christoph Hellwig <hch@lst.de>
-
- 08 Oct, 2018 1 commit
-
-
Stephen Boyd authored
I recently debugged a DMA mapping oops where a driver was trying to map a buffer returned from request_firmware() with dma_map_single(). Memory returned from request_firmware() is mapped into the vmalloc region and this isn't a valid region to map with dma_map_single() per the DMA documentation's "What memory is DMA'able?" section. Unfortunately, we don't really check that in the DMA debugging code, so enabling DMA debugging doesn't help catch this problem. Let's add a new DMA debug function to check for a vmalloc address or an invalid virtual address and print a warning if this happens. This makes it a little easier to debug these sorts of problems, instead of seeing odd behavior or crashes when drivers attempt to map the vmalloc space for DMA. Cc: Marek Szyprowski <m.szyprowski@samsung.com> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Stephen Boyd <swboyd@chromium.org> Signed-off-by: Christoph Hellwig <hch@lst.de>
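A sketch of the kind of check the new debug hook performs, assuming is_vmalloc_addr() and virt_addr_valid() (the function name and message wording here are illustrative, not the exact dma-debug symbols):

```c
#include <linux/device.h>
#include <linux/mm.h>

/* Illustrative check: warn if a driver hands dma_map_single() a vmalloc
 * or otherwise invalid virtual address. */
static void debug_check_single_mapping(struct device *dev, const void *addr)
{
	if (is_vmalloc_addr(addr))
		dev_warn(dev, "DMA-API: mapping vmalloc address %p\n", addr);
	else if (!virt_addr_valid(addr))
		dev_warn(dev, "DMA-API: mapping invalid virtual address %p\n",
			 addr);
}
```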
-
- 05 Oct, 2018 1 commit
-
-
Alexander Duyck authored
It appears that in commit 9d7a224b ("dma-direct: always allow dma mask <= physiscal memory size") the logic of the test was changed from a "<" to a ">=", but I don't see any reason for that change. I am assuming that some additional change was planned - specifically, I suspect the logic was intended to be reversed and possibly used for a return. Since that is the case, I have gone ahead and done that. This addresses issues I had on my system that prevented me from booting with the above-mentioned commit applied on an x86_64 system w/ Intel IOMMU. Fixes: 9d7a224b ("dma-direct: always allow dma mask <= physiscal memory size") Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com> Acked-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
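A hedged reconstruction of the corrected test (ARCH_ZONE_DMA_BITS and the exact phys_to_dma variant are era/arch-specific assumptions; the in-tree function is dma_direct_supported()):

```c
#include <linux/dma-direct.h>

/* Sketch: the device is supported when its mask can reach the lowest
 * physical range the direct allocator may fall back to. */
static int dma_direct_supported_sketch(struct device *dev, u64 mask)
{
	u64 min_mask = IS_ENABLED(CONFIG_ZONE_DMA) ?
			DMA_BIT_MASK(ARCH_ZONE_DMA_BITS) : DMA_BIT_MASK(32);

	/* never require more than the end of actual memory */
	min_mask = min_t(u64, min_mask, ((u64)max_pfn - 1) << PAGE_SHIFT);

	return mask >= phys_to_dma(dev, min_mask);
}
```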
-
- 02 Oct, 2018 1 commit
-
-
Christoph Hellwig authored
This avoids a warning on powerpc. Signed-off-by: Christoph Hellwig <hch@lst.de> Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
-
- 01 Oct, 2018 5 commits
-
-
Christoph Hellwig authored
This way an architecture with less than 4G of RAM can support a dma_mask smaller than 32 bits without a ZONE_DMA. Apparently that is a common case on powerpc. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Robin Murphy <robin.murphy@arm.com>
-
Christoph Hellwig authored
Instead of rejecting devices with a too-small bus_dma_mask, we can handle them by taking the bus dma_mask into account for allocations and bounce-buffering decisions. Signed-off-by: Christoph Hellwig <hch@lst.de>
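A one-line sketch of the idea, assuming the bus_dma_mask field in struct device from this era (the helper name is hypothetical):

```c
#include <linux/device.h>
#include <linux/kernel.h>

/* Sketch: the effective addressing limit is the smaller of the device
 * mask and the bus mask; this single value then drives both allocation
 * and "do we need to bounce?" decisions. */
static u64 dma_direct_limit(struct device *dev)
{
	return min_not_zero(*dev->dma_mask, dev->bus_dma_mask);
}
```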
-
Christoph Hellwig authored
We need to take the DMA offset and encryption bit into account when selecting a zone. Use the opportunity to factor out the zone selection into a helper for reuse. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Robin Murphy <robin.murphy@arm.com>
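A sketch of such a factored-out helper (the name is illustrative; the in-tree helper also handles the force-unencrypted case separately, and ARCH_ZONE_DMA_BITS is an assumption of this era's code):

```c
#include <linux/dma-direct.h>
#include <linux/gfp.h>

/* Sketch: translate the device's dma mask back to a physical limit
 * (undoing the DMA offset and encryption bit) before picking a zone. */
static gfp_t dma_direct_gfp_zone(struct device *dev, u64 dma_mask)
{
	u64 phys_limit = dma_to_phys(dev, dma_mask);

	if (phys_limit <= DMA_BIT_MASK(ARCH_ZONE_DMA_BITS))
		return GFP_DMA;
	if (phys_limit <= DMA_BIT_MASK(32))
		return GFP_DMA32;
	return 0;
}
```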
-
Christoph Hellwig authored
This is somewhat modelled after the powerpc version, and differs from the legacy fallback in that it uses fls64 instead of pointlessly splitting the address into low and high dwords, and in that it takes (__)phys_to_dma into account. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Reviewed-by: Robin Murphy <robin.murphy@arm.com>
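A sketch of the fls64-based computation described above (function name and the exact phys_to_dma variant are assumptions):

```c
#include <linux/bitops.h>
#include <linux/dma-direct.h>

/* Sketch: take the dma address of the top of RAM and round it up to a
 * power-of-two-minus-one mask, instead of splitting low/high dwords. */
static u64 dma_direct_required_mask_sketch(struct device *dev)
{
	u64 max_dma = phys_to_dma(dev, ((u64)max_pfn - 1) << PAGE_SHIFT);

	return (1ULL << (fls64(max_dma) - 1)) * 2 - 1;
}
```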
-
Christoph Hellwig authored
This saves some duplication for ia64, and makes the interface more general. In the long run we want each dma_map_ops instance to fill this out, but that will take a little more prep work. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 30 Sep, 2018 1 commit
-
-
Christoph Hellwig authored
unicore32 is a bog standard 32-bit port without larger physical address space, highmem or any other obvious addressing limitation. There should be no need to bounce buffer using swiotlb. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Guan Xuetao <gxt@pku.edu.cn>
-
- 25 Sep, 2018 1 commit
-
-
Christoph Hellwig authored
This reverts commit 46053c73. This change breaks architectures setting up dma_ops in their own magic way and not using arch_setup_dma_ops, so revert it. Reported-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Christoph Hellwig <hch@lst.de>
-
- 20 Sep, 2018 7 commits
-
-
Christoph Hellwig authored
We can use the arch_dma_coherent_to_pfn hook to provide a ->get_sgtable implementation. Note that this isn't an endorsement of this interface (which is a horribly bad idea), but it is required to move arm64 over to the generic code without a loss of functionality. Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Christoph Hellwig authored
The only functional difference (modulo a few missing fixes in the arch code) is that architectures without coherent caches need a hook to convert a virtual or dma address into a pfn, given that we don't have the kernel linear mapping available for the otherwise easy virt_to_page call. As a side effect we can support mmap of the per-device coherent area even on architectures not providing the callback, and we make the previously dangerous default dma_common_mmap method actually safe for non-coherent architectures by rejecting the mmap when the right helper is missing. In addition we need a hook so that some architectures can override the protection bits when mmapping a dma coherent allocation. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Paul Burton <paul.burton@mips.com> # MIPS parts
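A rough sketch of the mmap path this enables, assuming the arch_dma_coherent_to_pfn hook from this series (argument checking and partial-mapping handling omitted; the function name is illustrative):

```c
#include <linux/dma-noncoherent.h>
#include <linux/mm.h>

/* Sketch: with no kernel linear mapping, the pfn comes from the arch
 * hook rather than virt_to_page()/page_to_pfn(). */
static int mmap_coherent_sketch(struct device *dev, struct vm_area_struct *vma,
				void *cpu_addr, dma_addr_t dma_addr)
{
	unsigned long pfn = arch_dma_coherent_to_pfn(dev, cpu_addr, dma_addr);

	return remap_pfn_range(vma, vma->vm_start, pfn + vma->vm_pgoff,
			       vma_pages(vma) << PAGE_SHIFT, vma->vm_page_prot);
}
```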
-
Christoph Hellwig authored
All the cache maintenance is already stubbed out when not enabled, but merging the two allows us to nicely handle the case where cache maintenance is required for some devices but not others. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Paul Burton <paul.burton@mips.com> # MIPS parts
-
Christoph Hellwig authored
Various architectures support both coherent and non-coherent dma on a per-device basis. Move the dma_noncoherent flag from the mips archdata field to struct device proper to prepare the infrastructure for reuse on other architectures. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Paul Burton <paul.burton@mips.com> Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
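A sketch of the accessor pattern this move makes possible (the name here is hypothetical; the actual field is conditionally compiled into struct device and the in-tree helper lives in the dma-noncoherent header):

```c
#include <linux/device.h>

/* Sketch: common code can now ask the device itself whether it is
 * dma coherent, instead of digging into arch-specific archdata. */
static inline bool device_dma_is_coherent(struct device *dev)
{
	return dev->dma_coherent;	/* new per-device flag */
}
```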
-
Christoph Hellwig authored
While both options select a form of conditional dma coherence, they don't actually share any code in the implementation, so untangle them. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Paul Burton <paul.burton@mips.com>
-
Christoph Hellwig authored
The patch adding the infrastructure failed to actually add the symbol declaration, oops.. Fixes: faef8772 ("dma-noncoherent: add a arch_sync_dma_for_cpu_all hook") Signed-off-by: Christoph Hellwig <hch@lst.de>
-
He Zhe authored
early_cma does not check the input argument before passing it to simple_strtoull. The argument would be a NULL pointer if "cma", without its value, is set in the command line, and thus causes the following panic.

PANIC: early exception 0xe3 IP 10:ffffffffa3e9db8d error 0 cr2 0x0
[ 0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 4.19.0-rc3-yocto-standard+ #7
[ 0.000000] RIP: 0010:_parse_integer_fixup_radix+0xd/0x70
...
[ 0.000000] Call Trace:
[ 0.000000]  simple_strtoull+0x29/0x70
[ 0.000000]  memparse+0x26/0x90
[ 0.000000]  early_cma+0x17/0x6a
[ 0.000000]  do_early_param+0x57/0x8e
[ 0.000000]  parse_args+0x208/0x320
[ 0.000000]  ? rdinit_setup+0x30/0x30
[ 0.000000]  parse_early_options+0x29/0x2d
[ 0.000000]  ? rdinit_setup+0x30/0x30
[ 0.000000]  parse_early_param+0x36/0x4d
[ 0.000000]  setup_arch+0x336/0x99e
[ 0.000000]  start_kernel+0x6f/0x4e6
[ 0.000000]  x86_64_start_reservations+0x24/0x26
[ 0.000000]  x86_64_start_kernel+0x6f/0x72
[ 0.000000]  secondary_startup_64+0xa4/0xb0

This patch adds a check to prevent the panic. Signed-off-by: He Zhe <zhe.he@windriver.com> Reviewed-by: Marek Szyprowski <m.szyprowski@samsung.com> Cc: stable@vger.kernel.org Signed-off-by: Christoph Hellwig <hch@lst.de>
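A minimal reconstruction of the fix, assuming the early_cma parser in kernel/dma/contiguous.c (error message wording is an assumption):

```c
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/printk.h>

static phys_addr_t size_cmdline __initdata = -1;	/* shown for context */

static int __init early_cma(char *p)
{
	if (!p) {
		/* "cma" given on the command line with no value */
		pr_err("Config string not provided\n");
		return -EINVAL;
	}

	size_cmdline = memparse(p, &p);
	return 0;
}
early_param("cma", early_cma);
```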
-
- 08 Sep, 2018 8 commits
-
-
Christoph Hellwig authored
There is no reason to leave the per-device dma_ops around when deconfiguring a device, so move this code from arm64 into the common code. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Robin Murphy <robin.murphy@arm.com>
-
Christoph Hellwig authored
This goes through a lot of hooks just to call arch_teardown_dma_ops. Replace it with a direct call instead. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Robin Murphy <robin.murphy@arm.com>
-
Christoph Hellwig authored
There is no good reason for this indirection given that the method always exists. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Robin Murphy <robin.murphy@arm.com>
-
Christoph Hellwig authored
We can just use the default implementation. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Robin Murphy <robin.murphy@arm.com>
-
Christoph Hellwig authored
Switch to the generic noncoherent direct mapping implementation. Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Christoph Hellwig authored
This method needs to provide the equivalent of sync_single_for_device for each S/G list element, but that was missing. Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Christoph Hellwig authored
hexagon does all the required cache maintenance at dma map time, and none at unmap time. It thus has to implement sync_single_for_device to match the map case for buffer reuse, but there is no point in doing another invalidation in the sync_single_for_cpu case, which in terms of cache maintenance is equivalent to the unmap case. Signed-off-by: Christoph Hellwig <hch@lst.de>
-
git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux
Linus Torvalds authored
Pull i2c fixes from Wolfram Sang:
- bugfixes for uniphier, i801, and xiic drivers
- ID removal (never produced) for imx
- one MAINTAINER addition

* 'i2c/for-current' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux:
  i2c: xiic: Record xilinx i2c with Zynq fragment
  i2c: xiic: Make the start and the byte count write atomic
  i2c: i801: fix DNV's SMBCTRL register offset
  i2c: imx-lpi2c: Remove mx8dv compatible entry
  dt-bindings: imx-lpi2c: Remove mx8dv compatible entry
  i2c: uniphier-f: issue STOP only for last message or I2C_M_STOP
  i2c: uniphier: issue STOP only for last message or I2C_M_STOP
-
- 07 Sep, 2018 3 commits
-
-
git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc
Linus Torvalds authored
Pull ARC updates from Vineet Gupta:
- Fix for atomic_fetch_#op [Will Deacon]
- Enable per device IOC [Eugeniy Paltsev]
- Remove redundant gcc version checks [Masahiro Yamada]
- Misc platform config/DT updates [Alexey Brodkin]

* tag 'arc-4.19-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc:
  ARC: don't check for HIGHMEM pages in arch_dma_alloc
  ARC: IOC: panic if both IOC and ZONE_HIGHMEM enabled
  ARC: dma [IOC] Enable per device io coherency
  ARC: dma [IOC]: mark DMA devices connected as dma-coherent
  ARC: atomics: unbork atomic_fetch_##op()
  arc: remove redundant GCC version checks
  ARC: sort Kconfig
  ARC: cleanup show_faulting_vma()
  ARC: [plat-axs*]: Enable SWAP
  ARC: [plat-axs*/plat-hsdk]: Allow U-Boot to pass MAC-address to the kernel
  ARC: configs: cleanup
-
David Howells authored
Fix the cell specification mechanism to allow cells to be pre-created without having to specify at least one address (the addresses will be upcalled for). This allows the cell information preload service to avoid the need to issue loads of DNS lookups during boot to get the addresses for each cell (500+ lookups for the 'standard' cell list[*]). The lookups can be done later as each cell is accessed through the filesystem. Also remove the print statement that prints a line every time a new cell is added. [*] There are 144 cells in the list. Each cell is first looked up for an SRV record, and if that fails, for an AFSDB record. These get a list of server names, each of which then has to be looked up to get the addresses for that server. E.g.: dig srv _afs3-vlserver._udp.grand.central.org Signed-off-by: David Howells <dhowells@redhat.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
git://git.kernel.org/pub/scm/linux/kernel/git/shli/md
Linus Torvalds authored
Pull MD fixes from Shaohua Li:
- Fix a locking issue for md-cluster (Guoqing)
- Fix a sync crash for raid10 (Ni)
- Fix a reshape bug with raid5 cache enabled (me)

* tag 'md/4.19-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/shli/md:
  md-cluster: release RESYNC lock after the last resync message
  RAID10 BUG_ON in raise_barrier when force is true and conf->barrier is 0
  md/raid5-cache: disable reshape completely
-