- 19 Jan, 2024 4 commits
-
-
Randy Dunlap authored
Fix warnings of "Excess struct member" by removing those lines. They are extraneous. imx-sdma.c:467: warning: Excess struct member 'context_loaded' description in 'sdma_channel' imx-sdma.c:467: warning: Excess struct member 'bd_pool' description in 'sdma_channel' imx-sdma.c:500: warning: Excess struct member 'script_addrs' description in 'sdma_firmware_header' Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Cc: Sascha Hauer <s.hauer@pengutronix.de> Cc: Shawn Guo <shawnguo@kernel.org> Cc: Vinod Koul <vkoul@kernel.org> Cc: dmaengine@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org Link: https://lore.kernel.org/r/20240119032832.4051-1-rdunlap@infradead.orgSigned-off-by: Vinod Koul <vkoul@kernel.org>
-
Nathan Chancellor authored
Clang warns (or errors with CONFIG_WERROR=y):

  drivers/dma/xilinx/xdma.c:894:3: error: variable 'desc' is uninitialized when used here [-Werror,-Wuninitialized]
    894 |         desc->error = true;
        |         ^~~~

The initialization of desc was moved too far forward; move it back so that this assignment does not result in a potential crash at runtime, while clearing up the warning.

Closes: https://github.com/ClangBuiltLinux/linux/issues/1972
Fixes: 2f8f90cd ("dmaengine: xilinx: xdma: Implement interleaved DMA transfers")
Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Link: https://lore.kernel.org/r/20231222-dma-xilinx-xdma-clang-fixes-v1-2-84a18ff184d2@kernel.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
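A minimal sketch of the bug class being fixed (the helper name, channel variable and status mask below are assumptions standing in for the driver's own code, not the literal patch): the pointer must be assigned before any branch dereferences it.

  struct virt_dma_desc *vd;
  struct xdma_desc *desc = NULL;

  /* FIX: resolve the descriptor pointer first ... */
  vd = vchan_next_desc(&xchan->vchan);
  if (vd)
          desc = to_xdma_desc(vd);

  /* ... so the error path no longer reads an uninitialized variable. */
  if (desc && (status & CHAN_ERR_MASK))
          desc->error = true;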
-
Nathan Chancellor authored
Clang warns (or errors with CONFIG_WERROR=y):

  drivers/dma/xilinx/xdma.c:757:68: error: operator '?:' has lower precedence than '+'; '+' will be evaluated first [-Werror,-Wparentheses]
    757 |                 src_addr += dmaengine_get_src_icg(xt, &xt->sgl[i]) + xt->src_inc ?
    758 |                             xt->sgl[i].size : 0;
  drivers/dma/xilinx/xdma.c:759:68: error: operator '?:' has lower precedence than '+'; '+' will be evaluated first [-Werror,-Wparentheses]
    759 |                 dst_addr += dmaengine_get_dst_icg(xt, &xt->sgl[i]) + xt->dst_inc ?
    760 |                             xt->sgl[i].size : 0;

together with notes suggesting to either place parentheses around the '+' expression to silence the warning, or around the '?:' expression to evaluate it first.

The src_inc and dst_inc members of 'struct dma_interleaved_template' are booleans, so it does not make sense for the addition to happen first. Wrap the conditional operator in parentheses so it is evaluated first.

Closes: https://github.com/ClangBuiltLinux/linux/issues/1971
Fixes: 2f8f90cd ("dmaengine: xilinx: xdma: Implement interleaved DMA transfers")
Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Link: https://lore.kernel.org/r/20231222-dma-xilinx-xdma-clang-fixes-v1-1-84a18ff184d2@kernel.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
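A sketch of the resulting expressions (indentation is illustrative): with the conditional parenthesized, the boolean selects between the chunk size and zero before the inter-chunk gap is added.

  src_addr += dmaengine_get_src_icg(xt, &xt->sgl[i]) +
              (xt->src_inc ? xt->sgl[i].size : 0);
  dst_addr += dmaengine_get_dst_icg(xt, &xt->sgl[i]) +
              (xt->dst_inc ? xt->sgl[i].size : 0);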
-
Vinod Koul authored
dmaengine updates for v6.8

New support:
 - Loongson LS2X APB DMA controller
 - sf-pdma: mpfs-pdma support
 - Qualcomm X1E80100 GPI dma controller support

Updates:
 - Xilinx XDMA updates to support interleaved DMA transfers
 - TI PSIL threads for AM62P and J722S and cfg register regions description
 - axi-dmac: improving the cyclic DMA transfers
 - Tegra: support for the dma-channel-mask property
 - Remaining platform remove callback returning void conversions
-
- 22 Dec, 2023 8 commits
-
-
Vinod Koul authored
xdma_prep_interleaved_dma() was local to the file but not declared static, leading to the warning:

  drivers/dma/xilinx/xdma.c:729:1: warning: no previous prototype for 'xdma_prep_interleaved_dma' [-Wmissing-prototypes]
    729 | xdma_prep_interleaved_dma(struct dma_chan *chan

Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Link: https://lore.kernel.org/r/20231222094001.731889-1-vkoul@kernel.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
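A hedged sketch of the fix (the parameter list is the standard interleaved-prep callback signature and is assumed here rather than quoted from the driver): marking the file-local definition static removes the -Wmissing-prototypes warning.

  static struct dma_async_tx_descriptor *
  xdma_prep_interleaved_dma(struct dma_chan *chan,
                            struct dma_interleaved_template *xt,
                            unsigned long flags)
  {
          /* descriptor construction elided in this sketch */
          return NULL;
  }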
-
Vinod Koul authored
Increase the length passed to snprintf() so it is large enough to overcome the following compilation error. The buf is large enough for this purpose.

  drivers/dma/xilinx/xilinx_dpdma.c: In function ‘xilinx_dpdma_debugfs_desc_done_irq_read’:
  drivers/dma/xilinx/xilinx_dpdma.c:313:39: error: ‘snprintf’ output may be truncated before the last format character [-Werror=format-truncation=]
    313 |         snprintf(buf, out_str_len, "%d",
  drivers/dma/xilinx/xilinx_dpdma.c:313:9: note: ‘snprintf’ output between 2 and 6 bytes into a destination of size 5
    313 |         snprintf(buf, out_str_len, "%d",
    314 |                  dpdma_debugfs.xilinx_dpdma_irq_done_count);

Signed-off-by: Vinod Koul <vkoul@kernel.org>
Link: https://lore.kernel.org/r/20231222094017.731917-1-vkoul@kernel.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
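A hedged sketch of the idea (the buffer name and size are illustrative, not the actual dpdma values): the length handed to snprintf() has to cover the widest rendering of the format, including the terminating NUL.

  char buf[16];   /* comfortably more than the 11 bytes a 32-bit "%d" can need */

  snprintf(buf, sizeof(buf), "%d",
           dpdma_debugfs.xilinx_dpdma_irq_done_count);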
-
Bumyong Lee authored
According to DMA-330 errata notice [1] 71930, DMAKILL cannot clear the internal signal named pipeline_req_active. Because of this, if the pl330 receives a dma request before entering the WFP state, it will wait forever in WFP even though the peripheral has already sent the request. The errata suggests polling until the WFP state is entered as a workaround; only then are peripherals allowed to issue the dma request.

[1]: https://developer.arm.com/documentation/genc008428/latest

Signed-off-by: Bumyong Lee <bumyong.lee@samsung.com>
Link: https://lore.kernel.org/r/20231219055026.118695-1-bumyong.lee@samsung.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Jan Kuliga authored
Interleaved DMA functionality allows dmaengine clients to express DMA transfers in an arbitrary way. This is extremely useful in FPGA environments, where a greater transfer flexibility is needed. For instance, in one FPGA design there may be a need to do DMA to/from a FIFO at a fixed address, and also to do DMA to/from (non)contiguous RAM memory.

Introduce a separate tx preparation callback and add tx-flags handling logic. Their behavior is based on the description of interleaved DMA transfers in both the source code and the DMAEngine documentation. Since XDMA is a fully-fledged scatter-gather dma engine, the logic of xdma_prep_interleaved_dma() is fairly simple and similar to the other tx preparation callbacks. The whole tx-flags handling logic resides in xdma_channel_isr().

Transfer of a single frame from an interleaved DMA transfer template is pretty similar to a single sg transaction. Therefore, the transaction of the whole interleaved DMA transfer template is basically a cyclic dma transaction with finite cycles/periods (equal to the frame count) of single sg transfers.

Signed-off-by: Jan Kuliga <jankul@alatek.krakow.pl>
Link: https://lore.kernel.org/r/20231218113943.9099-9-jankul@alatek.krakow.pl
Signed-off-by: Vinod Koul <vkoul@kernel.org>
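As a client-side illustration of what interleaved transfers let one express (all addresses, sizes and the omitted error handling are illustrative assumptions, not taken from the driver), a single fixed-size frame from RAM to a fixed-address FIFO could be described as:

  struct dma_interleaved_template *xt;
  struct dma_async_tx_descriptor *tx;

  xt = kzalloc(struct_size(xt, sgl, 1), GFP_KERNEL);   /* one chunk per frame */
  xt->src_start   = ram_dma_addr;      /* assumed DMA-mapped source address */
  xt->dst_start   = fifo_dma_addr;     /* fixed FIFO address */
  xt->dir         = DMA_MEM_TO_DEV;
  xt->src_inc     = true;              /* walk through RAM */
  xt->dst_inc     = false;             /* FIFO does not advance */
  xt->src_sgl     = true;
  xt->dst_sgl     = false;
  xt->numf        = 1;                 /* one frame ... */
  xt->frame_size  = 1;                 /* ... made of one chunk */
  xt->sgl[0].size = 4096;
  xt->sgl[0].icg  = 0;

  tx = dmaengine_prep_interleaved_dma(chan, xt, DMA_PREP_INTERRUPT);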
-
Jan Kuliga authored
Make generic code generic. As descriptor-filling logic stays the same regardless of a dmaengine's type of transfer, it is possible to write the descriptor-filling function in a generic way, so that it can be used for every single type of transfer preparation callback.

Signed-off-by: Jan Kuliga <jankul@alatek.krakow.pl>
Link: https://lore.kernel.org/r/20231218113943.9099-8-jankul@alatek.krakow.pl
Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Jan Kuliga authored
Extend the capability of transfer status reporting. Introduce an error flag, which allows reporting an error in case of an interrupt-reported error condition.

Signed-off-by: Jan Kuliga <jankul@alatek.krakow.pl>
Link: https://lore.kernel.org/r/20231218113943.9099-7-jankul@alatek.krakow.pl
Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Jan Kuliga authored
Check and clear the status register value before proceeding any further in xdma_channel_isr(). It is necessary to do so, since the interrupt may occur on any error condition enabled at the start of a transfer.

Signed-off-by: Jan Kuliga <jankul@alatek.krakow.pl>
Link: https://lore.kernel.org/r/20231218113943.9099-6-jankul@alatek.krakow.pl
Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Jan Kuliga authored
Simplify xdma_xfer_stop(). Stop the dma engine and clear its status register unconditionally - just do what its name states. This change also allows calling it without grabbing a lock, which minimizes the total time spent with a spinlock held.

Delete the currently processed vd.node from the vc.desc_issued list prior to passing it to vchan_terminate_vdesc(). In case there's more than one descriptor pending on the vc.desc_issued list, calling vchan_terminate_vdesc() results in losing the link between the vc.desc_issued list head and the second descriptor on the list. Doing so results in resource leakage, as vchan_dma_desc_free_list() won't be able to properly free memory resources attached to descriptors, resulting in dma_pool_destroy() failure.

Don't call vchan_dma_desc_free_list() from within xdma_terminate_all(). Move all terminated descriptors to the vc.desc_terminated list instead. This allows postponing the freeing of memory resources associated with descriptors until the call to vchan_synchronize(), which is called from the xdma_synchronize() callback. This is the right way to do it - xdma_terminate_all() should return as soon as possible, while freeing resources (which may be time consuming in case of a large number of descriptors) can be done safely later.

Fixes: f5c392d1 ("dmaengine: xilinx: xdma: Add terminate_all/synchronize callbacks")
Signed-off-by: Jan Kuliga <jankul@alatek.krakow.pl>
Link: https://lore.kernel.org/r/20231218113943.9099-5-jankul@alatek.krakow.pl
Signed-off-by: Vinod Koul <vkoul@kernel.org>
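A hedged sketch of the terminate/synchronize split described above (only the currently running descriptor is shown; the channel structure, to_xdma_chan() helper and remaining bookkeeping are assumptions, not the literal patch):

  /* Terminate quickly; defer freeing to ->device_synchronize(). */
  static int xdma_terminate_all(struct dma_chan *chan)
  {
          struct xdma_chan *xchan = to_xdma_chan(chan);
          struct virt_dma_desc *vd;
          unsigned long flags;

          xdma_xfer_stop(xchan);                  /* now safe without the lock */

          spin_lock_irqsave(&xchan->vchan.lock, flags);
          vd = vchan_next_desc(&xchan->vchan);
          if (vd) {
                  list_del(&vd->node);            /* unlink before terminating */
                  vchan_terminate_vdesc(vd);      /* parks it on vc.desc_terminated */
          }
          spin_unlock_irqrestore(&xchan->vchan.lock, flags);

          return 0;
  }

  static void xdma_synchronize(struct dma_chan *chan)
  {
          /* Waits out termination and frees everything on desc_terminated. */
          vchan_synchronize(&to_xdma_chan(chan)->vchan);
  }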
-
- 21 Dec, 2023 22 commits
-
-
Jan Kuliga authored
According to the XDMA datasheet (PG195), the address of any descriptor must be 32 byte aligned. The datasheet also states that a contiguous block of descriptors must not cross a 4k address boundary. Therefore, it is possible to ease the pressure put on the dma_pool allocator just by requiring sufficient alignment and boundary values. Add proper macro definition and change the values passed into the dma_pool_create().

Signed-off-by: Jan Kuliga <jankul@alatek.krakow.pl>
Link: https://lore.kernel.org/r/20231218113943.9099-4-jankul@alatek.krakow.pl
Signed-off-by: Vinod Koul <vkoul@kernel.org>
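A hedged sketch of such a pool creation (the macro names and descriptor struct are assumptions; only the 32-byte alignment and 4k boundary come from the datasheet requirement above):

  #define XDMA_DESC_ALIGNMENT  32      /* PG195: descriptors are 32-byte aligned */
  #define XDMA_DESC_BOUNDARY   SZ_4K   /* PG195: a descriptor block must not cross 4k */

  pool = dma_pool_create("xdma_desc_pool", dev,
                         sizeof(struct xdma_hw_desc),
                         XDMA_DESC_ALIGNMENT,
                         XDMA_DESC_BOUNDARY);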
-
Jan Kuliga authored
Complete the missing bits describing the status/control register values. Add macros describing the status/control registers.

Signed-off-by: Jan Kuliga <jankul@alatek.krakow.pl>
Link: https://lore.kernel.org/r/20231218113943.9099-3-jankul@alatek.krakow.pl
Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Jan Kuliga authored
Get rid of duplicated macro definitions, as these macros are defined earlier in the file. Also, get rid of an unused member of 'struct xdma_desc'.

Signed-off-by: Jan Kuliga <jankul@alatek.krakow.pl>
Link: https://lore.kernel.org/r/20231218113943.9099-2-jankul@alatek.krakow.pl
Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Miquel Raynal authored
The driver is capable of starting scatter-gather transfers and needs to wait until their end. It is also capable of starting cyclic transfers and will only be "reset" the next time the channel is reused.

In practice most of the time we hear no audio glitch because the sound card stops the flow on its side, so the DMA transfers are just discarded. There are however some cases (when playing a bit with the number of frames and with a discontinuous sound file) when the sound card seems to be slightly too slow at stopping the flow, leading to a glitch that can be heard.

In all cases, we need to gain better control of the DMA engine, and adding proper ->device_terminate_all() and ->device_synchronize() callbacks feels totally relevant. With these two callbacks, no glitch can be heard anymore.

Fixes: cd8c732c ("dmaengine: xilinx: xdma: Support cyclic transfers")
Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
Tested-by: Lizhi Hou <lizhi.hou@amd.com>
Link: https://lore.kernel.org/r/20231130111315.729430-5-miquel.raynal@bootlin.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Miquel Raynal authored
The driver internal scatter-gather logic is:

 * set busy to true
 * start transfer
 <irq>
 * set busy to false
 * trigger next transfer if any
 * set busy to true
 </irq>

Setting busy to false in cyclic transfers does not make any sense and is conceptually wrong. In order to ease the integration of additional callbacks, let's move this change to the scatter-gather path.

Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
Link: https://lore.kernel.org/r/20231130111315.729430-4-miquel.raynal@bootlin.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Miquel Raynal authored
We support both modes, but they perform totally different tasks in the interrupt handler. Clarify what shall be done in each case.

Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
Link: https://lore.kernel.org/r/20231130111315.729430-3-miquel.raynal@bootlin.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Miquel Raynal authored
The Xilinx DMA engine is capable of keeping track of the number of elapsed periods, and this is an increasing 32-bit counter which is only reset when turning off the engine. No need to add this value to our local counter.

Fixes: cd8c732c ("dmaengine: xilinx: xdma: Support cyclic transfers")
Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
Link: https://lore.kernel.org/r/20231130111315.729430-2-miquel.raynal@bootlin.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Rex Zhang authored
Task may be rescheduled within dma_free_coherent(), so dma_free_coherent() can't be called between spin_lock() and spin_unlock(), to avoid the following Call Trace:

  Call Trace:
   <TASK>
   dump_stack_lvl+0x37/0x50
   __might_resched+0x16a/0x1c0
   vunmap+0x2c/0x70
   __iommu_dma_free+0x96/0x100
   idxd_device_evl_free+0xd5/0x100 [idxd]
   device_release_driver_internal+0x197/0x200
   unbind_store+0xa1/0xb0
   kernfs_fop_write_iter+0x120/0x1c0
   vfs_write+0x2d3/0x400
   ksys_write+0x63/0xe0
   do_syscall_64+0x44/0xa0
   entry_SYSCALL_64_after_hwframe+0x6e/0xd8

Move the call out of the locked region.

Fixes: 244da66c ("dmaengine: idxd: setup event log configuration")
Signed-off-by: Rex Zhang <rex.zhang@intel.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Fenghua Yu <fenghua.yu@intel.com>
Link: https://lore.kernel.org/r/20231212022158.358619-2-rex.zhang@intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
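A hedged sketch of the general pattern (variable and field names are illustrative, not the actual idxd code): capture what needs freeing while holding the lock, then free after unlocking, since dma_free_coherent() may sleep.

  void *evl_log;
  dma_addr_t evl_dma;
  size_t evl_size;

  spin_lock(&evl->lock);
  evl_log  = evl->log;          /* remember what to free ... */
  evl_dma  = evl->dma;
  evl_size = evl->log_size;
  evl->log = NULL;              /* ... and detach it while still under the lock */
  spin_unlock(&evl->lock);

  dma_free_coherent(dev, evl_size, evl_log, evl_dma);   /* may sleep, so do it unlocked */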
-
Vignesh Raghavendra authored
Add PSIL thread information and enable UDMA support for AM62P and J722S SoC. J722S SoC family is a superset of AM62P, thus common PSIL thread ID map is reused for both devices.

For those interested, more details about the SoC can be found in the Technical Reference Manual here:
AM62P - https://www.ti.com/lit/pdf/spruj83
J722S - https://www.ti.com/lit/zip/sprujb3

Signed-off-by: Vignesh Raghavendra <vigneshr@ti.com>
Signed-off-by: Bryan Brattlof <bb@ti.com>
Signed-off-by: Vaishnav Achath <vaishnav.a@ti.com>
Reviewed-by: Jai Luthra <j-luthra@ti.com>
Link: https://lore.kernel.org/r/20231213081318.26203-1-vaishnav.a@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Amelie Delaunay authored
__dma_async_device_channel_register() can fail. In case of failure, chan->local is freed (with free_percpu()), and chan->local is nullified. When dma_async_device_unregister() is called (because of managed API or intentionally by the DMA controller driver), channels are unconditionally unregistered, leading to this NULL pointer dereference:

  [    1.318693] Unable to handle kernel NULL pointer dereference at virtual address 00000000000000d0
  [...]
  [    1.484499] Call trace:
  [    1.486930]  device_del+0x40/0x394
  [    1.490314]  device_unregister+0x20/0x7c
  [    1.494220]  __dma_async_device_channel_unregister+0x68/0xc0

Looking at the dma_async_device_register() function error path, channel device unregistration is done only if chan->local is not NULL. So add the same condition at the beginning of the __dma_async_device_channel_unregister() function, to avoid the NULL pointer issue whatever the API used to reach this function.

Fixes: d2fb0a04 ("dmaengine: break out channel registration")
Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/20231213160452.2598073-1-amelie.delaunay@foss.st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
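A hedged sketch of the guard described above (the function body is abbreviated; only the early return on chan->local is the point):

  static void __dma_async_device_channel_unregister(struct dma_device *device,
                                                    struct dma_chan *chan)
  {
          /* Channel never completed registration (chan->local was freed and
           * NULLed): nothing to unregister, bail out instead of dereferencing
           * a NULL pointer in device_unregister(). */
          if (chan->local == NULL)
                  return;

          /* ... existing teardown: sysfs/device removal, free_percpu(), ... */
  }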
-
Frank Li authored
Refactor the code to use the common dt-binding header file, fsl-edma.h. Renaming ARGS* to FSL_EDMA*, ensuring no functional changes.

Signed-off-by: Frank Li <Frank.Li@nxp.com>
Link: https://lore.kernel.org/r/20231114154824.3617255-4-Frank.Li@nxp.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Frank Li authored
Introduce a common dt-bindings header file, fsl-edma.h, shared between the driver and dts files. This addition aims to eliminate hardcoded values in dts files, promoting maintainability and consistency. The dts header file does not support the BIT() macro yet, so 2^n numbers are used directly.

Signed-off-by: Frank Li <Frank.Li@nxp.com>
Reviewed-by: Rob Herring <robh@kernel.org>
Link: https://lore.kernel.org/r/20231114154824.3617255-3-Frank.Li@nxp.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Frank Li authored
The eDMAv4 channel mux has a limitation where certain requests must use even channels, while others must use odd numbers. Add two flags (ARGS_EVEN_CH and ARGS_ODD_CH) to reflect this limitation. The device tree source (dts) files need to be updated accordingly.

This issue was identified by the following commit:
commit a7259905 ("arm64: dts: imx93: Fix the dmas entries order")
Reverting channel orders triggered this problem.

Fixes: 72f5801a ("dmaengine: fsl-edma: integrate v3 support")
Signed-off-by: Frank Li <Frank.Li@nxp.com>
Link: https://lore.kernel.org/r/20231114154824.3617255-2-Frank.Li@nxp.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Paul Cercueil authored
For cyclic transfers, chain the last descriptor to the first one, and disable IRQ generation if there is no callback registered with the cyclic transfer.

Signed-off-by: Paul Cercueil <paul@crapouillou.net>
Link: https://lore.kernel.org/r/20231215131313.23840-6-paul@crapouillou.net
Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Paul Cercueil authored
Instead of notifying userspace in the end-of-transfer (EOT) interrupt and programming the hardware in the start-of-transfer (SOT) interrupt, we can do both things in the EOT, allowing us to mask the SOT, and halve the number of interrupts sent by the HDL core.

Signed-off-by: Paul Cercueil <paul@crapouillou.net>
Link: https://lore.kernel.org/r/20231215131313.23840-5-paul@crapouillou.net
Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Paul Cercueil authored
Implement support for scatter-gather transfers. Build a chain of hardware descriptors, each one corresponding to a segment of the transfer, and linked to the next one. The hardware will transfer the chain and only fire interrupts when the whole chain has been transferred.

Support for scatter-gather is automatically enabled when the driver detects that the hardware supports it, by writing then reading the AXI_DMAC_REG_SG_ADDRESS register. If not available, the driver will fall back to standard DMA transfers.

Signed-off-by: Paul Cercueil <paul@crapouillou.net>
Link: https://lore.kernel.org/r/20231215131313.23840-4-paul@crapouillou.net
Signed-off-by: Vinod Koul <vkoul@kernel.org>
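A hedged sketch of that detection (the probe value, base pointer and flag name are assumptions; only the register name comes from the text above): if the written value reads back, the register exists and hardware scatter-gather is available.

  writel(0xffffffff, dmac->base + AXI_DMAC_REG_SG_ADDRESS);
  if (readl(dmac->base + AXI_DMAC_REG_SG_ADDRESS))
          dmac->hw_sg = true;    /* register implemented: use hardware SG chains */
  else
          dmac->hw_sg = false;   /* reads back as zero: fall back to plain transfers */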
-
Paul Cercueil authored
Change where and how the DMA transfers meta-data is stored, to prepare for the upcoming introduction of scatter-gather support. Allocate hardware descriptors in the format that the HDL core will be expecting them when the scatter-gather feature is enabled, and use these fields to store the data that was previously stored in the axi_dmac_sg structure.

Note that the 'x_len' and 'y_len' fields now contain the transfer length minus one, since that's what the hardware will expect in these fields.

Signed-off-by: Paul Cercueil <paul@crapouillou.net>
Link: https://lore.kernel.org/r/20231215131313.23840-3-paul@crapouillou.net
Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Paul Cercueil authored
Use a for() loop instead of a while() loop in axi_dmac_fill_linear_sg(). This makes the code leaner and cleaner overall, and does not introduce any functional change.

Signed-off-by: Paul Cercueil <paul@crapouillou.net>
Link: https://lore.kernel.org/r/20231215131313.23840-2-paul@crapouillou.net
Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Binbin Zhou authored
The Loongson LS2X APB DMA controller is available on Loongson-2K chips. It is a single-channel, configurable DMA controller IP core based on the AXI bus, whose main function is to integrate DMA functionality on a chip dedicated to carrying data between memory and peripherals on the APB bus (e.g. nand).

Signed-off-by: Binbin Zhou <zhoubinbin@loongson.cn>
Signed-off-by: Yingkun Meng <mengyingkun@loongson.cn>
Link: https://lore.kernel.org/r/8df2a0199434fba3535831082966c2442ecf1cae.1702365725.git.zhoubinbin@loongson.cn
Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Binbin Zhou authored
Add Loongson LS2X APB DMA controller binding with DT schema format using json-schema.

Reviewed-by: Conor Dooley <conor.dooley@microchip.com>
Signed-off-by: Binbin Zhou <zhoubinbin@loongson.cn>
Link: https://lore.kernel.org/r/078307641077edaf46dd986c6d31cea15545a208.1702365725.git.zhoubinbin@loongson.cn
Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Randy Dunlap authored
Correct kernel-doc warnings as reported by the kernel test robot:

  ste_dma40.c:57: warning: Excess struct member 'dev_tx' description in 'stedma40_platform_data'
  ste_dma40.c:57: warning: Excess struct member 'dev_rx' description in 'stedma40_platform_data'

Also correct spelling errors as reported by codespell.

Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202312171417.izbQThoU-lkp@intel.com/
Cc: Linus Walleij <linus.walleij@linaro.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: Vinod Koul <vkoul@kernel.org>
Cc: dmaengine@vger.kernel.org
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Link: https://lore.kernel.org/r/20231218060834.19222-1-rdunlap@infradead.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Christophe JAILLET authored
ida_alloc() and ida_free() should be preferred to the deprecated ida_simple_get() and ida_simple_remove(). This is less verbose.

Note that the upper limit of ida_simple_get() is exclusive, but the one of ida_alloc_range() is inclusive. So this change allows one more device. MINORMASK is ((1U << MINORBITS) - 1), so allowing MINORMASK as a maximum value makes sense. It is also consistent with other "ida_.*MINORMASK" and "ida_*MINOR()" usages.

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Reviewed-by: Fenghua Yu <fenghua.yu@intel.com>
Acked-by: Lijun Pan <lijun.pan@intel.com>
Link: https://lore.kernel.org/r/ac991f5f42112fa782a881d391d447529cbc4a23.1702967302.git.christophe.jaillet@wanadoo.fr
Signed-off-by: Vinod Koul <vkoul@kernel.org>
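A hedged sketch of the substitution (the ida variable name is illustrative); note the off-by-one in the upper bound mentioned above:

  /* Before: the 'end' argument is exclusive, so MINORMASK itself was never handed out. */
  minor = ida_simple_get(&idxd_cdev_ida, 0, MINORMASK, GFP_KERNEL);

  /* After: 'max' is inclusive, so ids 0..MINORMASK are all usable (one more device). */
  minor = ida_alloc_range(&idxd_cdev_ida, 0, MINORMASK, GFP_KERNEL);
  if (minor < 0)
          return minor;

  /* ida_free() replaces ida_simple_remove() on the release path. */
  ida_free(&idxd_cdev_ida, minor);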
-
- 11 Dec, 2023 6 commits
-
-
Amelie Delaunay authored
Source and destination data buffers are allocated with the GFP_KERNEL flag. It means that, if the DDR is more than 2GB, buffers can be allocated above the 32-bit addressable space. In this case, and if the dma controller is only 32-bit compatible, the swiotlb bounce buffer, located in the 32-bit addressable space, is used and introduces a memcpy.

To prevent this extra memcpy, due to swiotlb bounce buffer use when the source or destination data buffer is allocated above the 32-bit addressable space, force source and destination data buffer allocation with GFP_DMA instead, when the nobounce parameter is true.

Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com>
Link: https://lore.kernel.org/r/20231124160235.2459326-1-amelie.delaunay@foss.st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
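A hedged sketch of the allocation switch (variable names are illustrative, not the literal patch): GFP_DMA keeps the buffer in the low, DMA-addressable zone so a 32-bit controller can reach it without a swiotlb bounce.

  gfp_t gfp = GFP_KERNEL;

  if (nobounce)                 /* parameter described above */
          gfp |= GFP_DMA;       /* allocate from the low, DMA-addressable zone */

  src_buf = kmalloc(buf_size, gfp);
  dst_buf = kmalloc(buf_size, gfp);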
-
Frank Li authored
The allocated channel count consistently increases due to a missing source ID (srcid) cleanup in the fsl_edma_free_chan_resources() function for imx93 eDMAv4. Reset 'srcid' in fsl_edma_free_chan_resources().

Cc: stable@vger.kernel.org
Fixes: 72f5801a ("dmaengine: fsl-edma: integrate v3 support")
Signed-off-by: Frank Li <Frank.Li@nxp.com>
Link: https://lore.kernel.org/r/20231127214325.2477247-1-Frank.Li@nxp.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
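A hedged sketch of the cleanup (the surrounding teardown is elided and to_fsl_edma_chan() is assumed to be the driver's usual container_of helper):

  static void fsl_edma_free_chan_resources(struct dma_chan *chan)
  {
          struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan);

          /* ... stop the hardware channel and release its descriptors ... */

          fsl_chan->srcid = 0;  /* forget the mux source so the channel can be re-requested */
  }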
-
Mohan Kumar authored
To support the flexibility of reserving specific dma channels, add support for the dma-channel-mask property in the tegra210-adma driver.

Signed-off-by: Mohan Kumar <mkumard@nvidia.com>
Link: https://lore.kernel.org/r/20231128071615.31447-3-mkumard@nvidia.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Mohan Kumar authored
Add dma-channel-mask binding doc support to nvidia,tegra210-adma to reserve the adma channel usage.

Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Mohan Kumar <mkumard@nvidia.com>
Link: https://lore.kernel.org/r/20231128071615.31447-2-mkumard@nvidia.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
-
Yang Yingliang authored
device_link_add() returns a NULL pointer, not PTR_ERR(), when it fails, so replace the IS_ERR() check with a NULL pointer check.

Fixes: 72f5801a ("dmaengine: fsl-edma: integrate v3 support")
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Link: https://lore.kernel.org/r/20231129090000.841440-1-yangyingliang@huaweicloud.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
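A hedged sketch of the corrected check (the supplier device, flags and error handling are illustrative): device_link_add() reports failure with NULL, never an ERR_PTR().

  struct device_link *link;

  link = device_link_add(dev, pd_dev,
                         DL_FLAG_AUTOREMOVE_CONSUMER | DL_FLAG_PM_RUNTIME);
  if (!link) {                  /* NULL on failure, so IS_ERR() would never trigger */
          dev_err(dev, "Failed to add device link\n");
          return -EINVAL;
  }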
-
Shravan Chippa authored
SiFive platform dma (sf-pdma) has both in-order and out-of-order configurations, but the sf-pdma driver is configured to do in-order DMA transfers; the out-of-order configuration gives better throughput on the PolarFire SoC platform. Add a PolarFire SoC specific compatible and code to support out-of-order dma transfers.

Reviewed-by: Emil Renner Berthing <emil.renner.berthing@canonical.com>
Signed-off-by: Shravan Chippa <shravan.chippa@microchip.com>
Link: https://lore.kernel.org/r/20231208103856.3732998-4-shravan.chippa@microchip.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
-