Commit 78e8696c authored by Linus Torvalds

Merge tag 'dmaengine-4.21-rc1' of git://git.infradead.org/users/vkoul/slave-dma

Pull dmaengine updates from Vinod Koul:
 "This includes a new driver, removes R-Mobile APE6 as it is no longer
  used, sprd cyclic dma support, last batch of dma_slave_config
  direction removal and random updates to bunch of drivers.

  Summary:
   - New driver for UniPhier MIO DMA controller
   - Remove R-Mobile APE6 support
   - Sprd driver updates and support for cyclic link-list
   - Remove dma_slave_config direction usage from rest of drivers
   - Minor updates to dmatest, dw-dmac, zynqmp and bcm dma drivers"

* tag 'dmaengine-4.21-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (48 commits)
  dmaengine: qcom_hidma: convert to DEFINE_SHOW_ATTRIBUTE
  dmaengine: pxa: remove DBGFS_FUNC_DECL()
  dmaengine: mic_x100_dma: convert to DEFINE_SHOW_ATTRIBUTE
  dmaengine: amba-pl08x: convert to DEFINE_SHOW_ATTRIBUTE
  dmaengine: Documentation: Add documentation for multi chan testing
  dmaengine: dmatest: Add transfer_size parameter
  dmaengine: dmatest: Add alignment parameter
  dmaengine: dmatest: Use fixed point div to calculate iops
  dmaengine: dmatest: Add support for multi channel testing
  dmaengine: rcar-dmac: Document R8A774C0 bindings
  dt-bindings: dmaengine: usb-dmac: Add binding for r8a774c0
  dmaengine: zynqmp_dma: replace spin_lock_bh with spin_lock_irqsave
  dmaengine: sprd: Add me as one of the module authors
  dmaengine: sprd: Support DMA 2-stage transfer mode
  dmaengine: sprd: Support DMA link-list cyclic callback
  dmaengine: sprd: Set cur_desc as NULL when free or terminate one dma channel
  dmaengine: sprd: Fix the last link-list configuration
  dmaengine: sprd: Get transfer residue depending on the transfer direction
  dmaengine: sprd: Remove direction usage from struct dma_slave_config
  dmaengine: dmatest: fix a small memory leak in dmatest_func()
  ...
parents fcf01044 66061182
 * Renesas R-Car (RZ/G) DMA Controller Device Tree bindings
 
-Renesas R-Car Generation 2 SoCs have multiple multi-channel DMA
+Renesas R-Car (Gen 2/3) and RZ/G SoCs have multiple multi-channel DMA
 controller instances named DMAC capable of serving multiple clients. Channels
 can be dedicated to specific clients or shared between a large number of
 clients.
@@ -20,6 +20,8 @@ Required Properties:
       - "renesas,dmac-r8a7744" (RZ/G1N)
       - "renesas,dmac-r8a7745" (RZ/G1E)
       - "renesas,dmac-r8a77470" (RZ/G1C)
+      - "renesas,dmac-r8a774a1" (RZ/G2M)
+      - "renesas,dmac-r8a774c0" (RZ/G2E)
       - "renesas,dmac-r8a7790" (R-Car H2)
       - "renesas,dmac-r8a7791" (R-Car M2-W)
       - "renesas,dmac-r8a7792" (R-Car V2H)
...
@@ -6,6 +6,9 @@ Required Properties:
       - "renesas,r8a7743-usb-dmac" (RZ/G1M)
       - "renesas,r8a7744-usb-dmac" (RZ/G1N)
       - "renesas,r8a7745-usb-dmac" (RZ/G1E)
+      - "renesas,r8a77470-usb-dmac" (RZ/G1C)
+      - "renesas,r8a774a1-usb-dmac" (RZ/G2M)
+      - "renesas,r8a774c0-usb-dmac" (RZ/G2E)
       - "renesas,r8a7790-usb-dmac" (R-Car H2)
       - "renesas,r8a7791-usb-dmac" (R-Car M2-W)
       - "renesas,r8a7793-usb-dmac" (R-Car M2-N)
...
@@ -27,6 +27,10 @@ Optional properties:
   general purpose DMA channel allocator. False if not passed.
 - multi-block: Multi block transfers supported by hardware. Array property with
   one cell per channel. 0: not supported, 1 (default): supported.
+- snps,dma-protection-control: AHB HPROT[3:1] protection setting.
+  The default value is 0 (for non-cacheable, non-buffered,
+  unprivileged data access).
+  Refer to include/dt-bindings/dma/dw-dmac.h for possible values.
 
 Example:
...
+UniPhier Media IO DMA controller
+
+This works as an external DMA engine for SD/eMMC controllers etc.
+found in UniPhier LD4, Pro4, sLD8 SoCs.
+
+Required properties:
+- compatible: should be "socionext,uniphier-mio-dmac".
+- reg: offset and length of the register set for the device.
+- interrupts: a list of interrupt specifiers associated with the DMA channels.
+- clocks: a single clock specifier.
+- #dma-cells: should be <1>. The single cell represents the channel index.
+
+Example:
+    dmac: dma-controller@5a000000 {
+        compatible = "socionext,uniphier-mio-dmac";
+        reg = <0x5a000000 0x1000>;
+        interrupts = <0 68 4>, <0 68 4>, <0 69 4>, <0 70 4>,
+                     <0 71 4>, <0 72 4>, <0 73 4>, <0 74 4>;
+        clocks = <&mio_clk 7>;
+        #dma-cells = <1>;
+    };
+
+Note:
+In the example above, "interrupts = <0 68 4>, <0 68 4>, ..." is not a typo.
+The first two channels share a single interrupt line.
@@ -30,28 +30,43 @@ Part 2 - When dmatest is built as a module
 
 Example of usage::
 
-    % modprobe dmatest channel=dma0chan0 timeout=2000 iterations=1 run=1
+    % modprobe dmatest timeout=2000 iterations=1 channel=dma0chan0 run=1
 
 ...or::
 
     % modprobe dmatest
-    % echo dma0chan0 > /sys/module/dmatest/parameters/channel
     % echo 2000 > /sys/module/dmatest/parameters/timeout
     % echo 1 > /sys/module/dmatest/parameters/iterations
+    % echo dma0chan0 > /sys/module/dmatest/parameters/channel
    % echo 1 > /sys/module/dmatest/parameters/run
 
 ...or on the kernel command line::
 
-    dmatest.channel=dma0chan0 dmatest.timeout=2000 dmatest.iterations=1 dmatest.run=1
+    dmatest.timeout=2000 dmatest.iterations=1 dmatest.channel=dma0chan0 dmatest.run=1
+
+Example of multi-channel test usage::
+
+    % modprobe dmatest
+    % echo 2000 > /sys/module/dmatest/parameters/timeout
+    % echo 1 > /sys/module/dmatest/parameters/iterations
+    % echo dma0chan0 > /sys/module/dmatest/parameters/channel
+    % echo dma0chan1 > /sys/module/dmatest/parameters/channel
+    % echo dma0chan2 > /sys/module/dmatest/parameters/channel
+    % echo 1 > /sys/module/dmatest/parameters/run
+
+Note: the channel parameter should always be the last parameter set prior to
+running the test (setting run=1); this is because upon setting the channel
+parameter, that specific channel is requested using the dmaengine and a thread
+is created with the existing parameters. This thread is set as pending
+and will be executed once run is set to 1. Any parameters set after the thread
+is created are not applied.
 
 .. hint::
   available channel list could be extracted by running the following command::
 
     % ls -1 /sys/class/dma/
 
-Once started a message like "dmatest: Started 1 threads using dma0chan0" is
-emitted. After that only test failure messages are reported until the test
-stops.
+Once started a message like " dmatest: Added 1 threads using dma0chan0" is
+emitted. A thread for that specific channel is created and is now pending;
+the pending thread is started once run is set to 1.
 
 Note that running a new test will not stop any in progress test.
@@ -116,3 +131,85 @@ Example::
 
 The details of a data miscompare error are also emitted, but do not follow the
 above format.
+
+Part 5 - Handling channel allocation
+====================================
+
+Allocating Channels
+-------------------
+
+Channels are required to be configured prior to starting the test run.
+Attempting to run the test without configuring the channels will fail.
+
+Example::
+
+    % echo 1 > /sys/module/dmatest/parameters/run
+    dmatest: Could not start test, no channels configured
+
+Channels are registered using the "channel" parameter. Channels can be
+requested by their name; once requested, the channel is registered and a
+pending thread is added to the test list.
+
+Example::
+
+    % echo dma0chan2 > /sys/module/dmatest/parameters/channel
+    dmatest: Added 1 threads using dma0chan2
+
+More channels can be added by repeating the example above.
+Reading back the channel parameter will return the name of the last channel
+that was added successfully.
+
+Example::
+
+    % echo dma0chan1 > /sys/module/dmatest/parameters/channel
+    dmatest: Added 1 threads using dma0chan1
+    % echo dma0chan2 > /sys/module/dmatest/parameters/channel
+    dmatest: Added 1 threads using dma0chan2
+    % cat /sys/module/dmatest/parameters/channel
+    dma0chan2
+
+Another method of requesting channels is to request a channel with an empty
+string. Doing so will request all channels available to be tested:
+
+Example::
+
+    % echo "" > /sys/module/dmatest/parameters/channel
+    dmatest: Added 1 threads using dma0chan0
+    dmatest: Added 1 threads using dma0chan3
+    dmatest: Added 1 threads using dma0chan4
+    dmatest: Added 1 threads using dma0chan5
+    dmatest: Added 1 threads using dma0chan6
+    dmatest: Added 1 threads using dma0chan7
+    dmatest: Added 1 threads using dma0chan8
+
+At any point during the test configuration, reading the "test_list" parameter
+will print the list of currently pending tests.
+
+Example::
+
+    % cat /sys/module/dmatest/parameters/test_list
+    dmatest: 1 threads using dma0chan0
+    dmatest: 1 threads using dma0chan3
+    dmatest: 1 threads using dma0chan4
+    dmatest: 1 threads using dma0chan5
+    dmatest: 1 threads using dma0chan6
+    dmatest: 1 threads using dma0chan7
+    dmatest: 1 threads using dma0chan8
+
+Note: Channels will have to be configured for each test run as channel
+configurations do not carry across to the next test run.
+
+Releasing Channels
+-------------------
+
+Channels can be freed by setting run to 0.
+
+Example::
+
+    % echo dma0chan1 > /sys/module/dmatest/parameters/channel
+    dmatest: Added 1 threads using dma0chan1
+    % cat /sys/class/dma/dma0chan1/in_use
+    1
+    % echo 0 > /sys/module/dmatest/parameters/run
+    % cat /sys/class/dma/dma0chan1/in_use
+    0
+
+Channels allocated by previous test runs are automatically freed when a new
+channel is requested after completing a successful test run.
@@ -2279,6 +2279,7 @@ F: arch/arm/mm/cache-uniphier.c
 F: arch/arm64/boot/dts/socionext/uniphier*
 F: drivers/bus/uniphier-system-bus.c
 F: drivers/clk/uniphier/
+F: drivers/dmaengine/uniphier-mdmac.c
 F: drivers/gpio/gpio-uniphier.c
 F: drivers/i2c/busses/i2c-uniphier*
 F: drivers/irqchip/irq-uniphier-aidet.c
@@ -14628,9 +14629,11 @@ SYNOPSYS DESIGNWARE DMAC DRIVER
 M: Viresh Kumar <vireshk@kernel.org>
 R: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
 S: Maintained
+F: Documentation/devicetree/bindings/dma/snps-dma.txt
+F: drivers/dma/dw/
+F: include/dt-bindings/dma/dw-dmac.h
 F: include/linux/dma/dw.h
 F: include/linux/platform_data/dma-dw.h
-F: drivers/dma/dw/
 
 SYNOPSYS DESIGNWARE ENTERPRISE ETHERNET DRIVER
 M: Jose Abreu <Jose.Abreu@synopsys.com>
...
@@ -587,6 +587,17 @@ config TIMB_DMA
     help
       Enable support for the Timberdale FPGA DMA engine.
 
+config UNIPHIER_MDMAC
+    tristate "UniPhier MIO DMAC"
+    depends on ARCH_UNIPHIER || COMPILE_TEST
+    depends on OF
+    select DMA_ENGINE
+    select DMA_VIRTUAL_CHANNELS
+    help
+      Enable support for the MIO DMAC (Media I/O DMA controller) on the
+      UniPhier platform. This DMA controller is used as the external
+      DMA engine of the SD/eMMC controllers of the LD4, Pro4, sLD8 SoCs.
+
 config XGENE_DMA
     tristate "APM X-Gene DMA support"
     depends on ARCH_XGENE || COMPILE_TEST
...
@@ -70,6 +70,7 @@ obj-$(CONFIG_TXX9_DMAC) += txx9dmac.o
 obj-$(CONFIG_TEGRA20_APB_DMA) += tegra20-apb-dma.o
 obj-$(CONFIG_TEGRA210_ADMA) += tegra210-adma.o
 obj-$(CONFIG_TIMB_DMA) += timb_dma.o
+obj-$(CONFIG_UNIPHIER_MDMAC) += uniphier-mdmac.o
 obj-$(CONFIG_XGENE_DMA) += xgene-dma.o
 obj-$(CONFIG_ZX_DMA) += zx_dma.o
 obj-$(CONFIG_ST_FDMA) += st_fdma.o
...
@@ -2505,24 +2505,14 @@ static int pl08x_debugfs_show(struct seq_file *s, void *data)
     return 0;
 }
 
-static int pl08x_debugfs_open(struct inode *inode, struct file *file)
-{
-    return single_open(file, pl08x_debugfs_show, inode->i_private);
-}
-
-static const struct file_operations pl08x_debugfs_operations = {
-    .open = pl08x_debugfs_open,
-    .read = seq_read,
-    .llseek = seq_lseek,
-    .release = single_release,
-};
+DEFINE_SHOW_ATTRIBUTE(pl08x_debugfs);
 
 static void init_pl08x_debugfs(struct pl08x_driver_data *pl08x)
 {
     /* Expose a simple debugfs interface to view all clocks */
     (void) debugfs_create_file(dev_name(&pl08x->adev->dev),
                    S_IFREG | S_IRUGO, NULL, pl08x,
-                   &pl08x_debugfs_operations);
+                   &pl08x_debugfs_fops);
 }
 
 #else
...
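The pl08x hunk above (and the mic_x100_dma, pxa_dma and qcom_hidma hunks further down) all replace the same hand-rolled seq_file boilerplate with DEFINE_SHOW_ATTRIBUTE(). For reference, the macro from include/linux/seq_file.h expands roughly as follows (a sketch of the 4.20-era definition, trimmed to what these drivers rely on):

/* Approximate expansion of DEFINE_SHOW_ATTRIBUTE(__name). It requires a
 * __name##_show(struct seq_file *, void *) function and generates both the
 * open callback and the __name##_fops that drivers previously wrote by hand.
 */
#define DEFINE_SHOW_ATTRIBUTE(__name)                                   \
static int __name ## _open(struct inode *inode, struct file *file)     \
{                                                                       \
    return single_open(file, __name ## _show, inode->i_private);       \
}                                                                       \
                                                                        \
static const struct file_operations __name ## _fops = {                \
    .owner   = THIS_MODULE,                                             \
    .open    = __name ## _open,                                         \
    .read    = seq_read,                                                \
    .llseek  = seq_lseek,                                               \
    .release = single_release,                                          \
}

The macro's naming contract (<name>_show in, <name>_fops out) is why the pxa and hidma patches below also rename their show routines (e.g. dbg_show_state() becomes state_show()) and switch the debugfs_create_file() calls over to the generated *_fops.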
+// SPDX-License-Identifier: GPL-2.0+
 /*
  * BCM2835 DMA engine support
  *
@@ -18,16 +19,6 @@
  *
  *  MARVELL MMP Peripheral DMA Driver
  *  Copyright 2012 Marvell International Ltd.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
  */
 #include <linux/dmaengine.h>
 #include <linux/dma-mapping.h>
@@ -1056,4 +1047,4 @@ module_platform_driver(bcm2835_dma_driver);
 MODULE_ALIAS("platform:bcm2835-dma");
 MODULE_DESCRIPTION("BCM2835 DMA engine driver");
 MODULE_AUTHOR("Florian Meier <florian.meier@koalo.de>");
-MODULE_LICENSE("GPL v2");
+MODULE_LICENSE("GPL");
@@ -1802,13 +1802,10 @@ static struct dma_chan *coh901318_xlate(struct of_phandle_args *dma_spec,
 static int coh901318_config(struct coh901318_chan *cohc,
                 struct coh901318_params *param)
 {
-    unsigned long flags;
     const struct coh901318_params *p;
     int channel = cohc->id;
     void __iomem *virtbase = cohc->base->virtbase;
 
-    spin_lock_irqsave(&cohc->lock, flags);
-
     if (param)
         p = param;
     else
@@ -1828,8 +1825,6 @@ static int coh901318_config(struct coh901318_chan *cohc,
     coh901318_set_conf(cohc, p->config);
     coh901318_set_ctrl(cohc, p->ctrl_lli_last);
 
-    spin_unlock_irqrestore(&cohc->lock, flags);
-
     return 0;
 }
...
... (diff collapsed)
@@ -160,12 +160,14 @@ static void dwc_initialize_chan_idma32(struct dw_dma_chan *dwc)
 
 static void dwc_initialize_chan_dw(struct dw_dma_chan *dwc)
 {
+    struct dw_dma *dw = to_dw_dma(dwc->chan.device);
     u32 cfghi = DWC_CFGH_FIFO_MODE;
     u32 cfglo = DWC_CFGL_CH_PRIOR(dwc->priority);
     bool hs_polarity = dwc->dws.hs_polarity;
 
     cfghi |= DWC_CFGH_DST_PER(dwc->dws.dst_id);
     cfghi |= DWC_CFGH_SRC_PER(dwc->dws.src_id);
+    cfghi |= DWC_CFGH_PROTCTL(dw->pdata->protctl);
 
     /* Set polarity of handshake interface */
     cfglo |= hs_polarity ? DWC_CFGL_HS_DST_POL | DWC_CFGL_HS_SRC_POL : 0;
...
@@ -162,6 +162,12 @@ dw_dma_parse_dt(struct platform_device *pdev)
             pdata->multi_block[tmp] = 1;
     }
 
+    if (!of_property_read_u32(np, "snps,dma-protection-control", &tmp)) {
+        if (tmp > CHAN_PROTCTL_MASK)
+            return NULL;
+        pdata->protctl = tmp;
+    }
+
     return pdata;
 }
 #else
...
@@ -200,6 +200,10 @@ enum dw_dma_msize {
 #define DWC_CFGH_FCMODE         (1 << 0)
 #define DWC_CFGH_FIFO_MODE      (1 << 1)
 #define DWC_CFGH_PROTCTL(x)     ((x) << 2)
+#define DWC_CFGH_PROTCTL_DATA   (0 << 2) /* data access - always set */
+#define DWC_CFGH_PROTCTL_PRIV   (1 << 2) /* privileged -> AHB HPROT[1] */
+#define DWC_CFGH_PROTCTL_BUFFER (2 << 2) /* bufferable -> AHB HPROT[2] */
+#define DWC_CFGH_PROTCTL_CACHE  (4 << 2) /* cacheable -> AHB HPROT[3] */
 #define DWC_CFGH_DS_UPD_EN      (1 << 5)
 #define DWC_CFGH_SS_UPD_EN      (1 << 6)
 #define DWC_CFGH_SRC_PER(x)     ((x) << 7)
...
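The four PROTCTL values above are the shifted forms of a 3-bit field that the new snps,dma-protection-control DT property feeds through pdata->protctl (see the platform.c and core.c hunks earlier). A sketch of how such a value composes, assuming a hypothetical helper name and an arbitrary example setting, not code from this series:

/* The 3 protctl bits map one-to-one onto AHB HPROT[3:1]; HPROT[0]
 * (data access) is always driven by the controller.  Example choice:
 * privileged (bit 0) plus cacheable (bit 2) accesses, i.e. protctl = 0x5.
 */
static u32 dw_protctl_to_cfghi(u32 protctl)    /* hypothetical helper */
{
    if (protctl & ~0x7)    /* out of range, like the DT parser's check */
        return 0;          /* fall back to the default of 0 */

    /* 0x5 << 2 == DWC_CFGH_PROTCTL_PRIV | DWC_CFGH_PROTCTL_CACHE */
    return DWC_CFGH_PROTCTL(protctl);
}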
@@ -997,7 +997,7 @@ ep93xx_dma_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest,
     for (offset = 0; offset < len; offset += bytes) {
         desc = ep93xx_dma_desc_get(edmac);
         if (!desc) {
-            dev_warn(chan2dev(edmac), "couln't get descriptor\n");
+            dev_warn(chan2dev(edmac), "couldn't get descriptor\n");
             goto fail;
         }
 
@@ -1069,7 +1069,7 @@ ep93xx_dma_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
         desc = ep93xx_dma_desc_get(edmac);
         if (!desc) {
-            dev_warn(chan2dev(edmac), "couln't get descriptor\n");
+            dev_warn(chan2dev(edmac), "couldn't get descriptor\n");
             goto fail;
         }
 
@@ -1149,7 +1149,7 @@ ep93xx_dma_prep_dma_cyclic(struct dma_chan *chan, dma_addr_t dma_addr,
     for (offset = 0; offset < buf_len; offset += period_len) {
         desc = ep93xx_dma_desc_get(edmac);
         if (!desc) {
-            dev_warn(chan2dev(edmac), "couln't get descriptor\n");
+            dev_warn(chan2dev(edmac), "couldn't get descriptor\n");
             goto fail;
         }
...
@@ -335,6 +335,7 @@ struct sdma_desc {
  * @sdma:      pointer to the SDMA engine for this channel
  * @channel:   the channel number, matches dmaengine chan_id + 1
  * @direction: transfer type. Needed for setting SDMA script
+ * @slave_config: Slave configuration
  * @peripheral_type: Peripheral type. Needed for setting SDMA script
  * @event_id0: aka dma request line
  * @event_id1: for channels that use 2 events
@@ -362,6 +363,7 @@ struct sdma_channel {
     struct sdma_engine          *sdma;
     unsigned int                channel;
     enum dma_transfer_direction direction;
+    struct dma_slave_config     slave_config;
     enum sdma_peripheral_type   peripheral_type;
     unsigned int                event_id0;
     unsigned int                event_id1;
@@ -440,6 +442,10 @@ struct sdma_engine {
     struct sdma_buffer_descriptor *bd0;
 };
 
+static int sdma_config_write(struct dma_chan *chan,
+               struct dma_slave_config *dmaengine_cfg,
+               enum dma_transfer_direction direction);
+
 static struct sdma_driver_data sdma_imx31 = {
     .chnenbl0 = SDMA_CHNENBL0_IMX31,
     .num_events = 32,
@@ -671,9 +677,7 @@ static int sdma_load_script(struct sdma_engine *sdma, void *buf, int size,
     int ret;
     unsigned long flags;
 
-    buf_virt = dma_alloc_coherent(NULL,
-                      size,
-                      &buf_phys, GFP_KERNEL);
+    buf_virt = dma_alloc_coherent(NULL, size, &buf_phys, GFP_KERNEL);
     if (!buf_virt) {
         return -ENOMEM;
     }
@@ -1122,18 +1126,6 @@ static int sdma_config_channel(struct dma_chan *chan)
     sdmac->shp_addr = 0;
     sdmac->per_addr = 0;
 
-    if (sdmac->event_id0) {
-        if (sdmac->event_id0 >= sdmac->sdma->drvdata->num_events)
-            return -EINVAL;
-        sdma_event_enable(sdmac, sdmac->event_id0);
-    }
-
-    if (sdmac->event_id1) {
-        if (sdmac->event_id1 >= sdmac->sdma->drvdata->num_events)
-            return -EINVAL;
-        sdma_event_enable(sdmac, sdmac->event_id1);
-    }
-
     switch (sdmac->peripheral_type) {
     case IMX_DMATYPE_DSP:
         sdma_config_ownership(sdmac, false, true, true);
@@ -1431,6 +1423,8 @@ static struct dma_async_tx_descriptor *sdma_prep_slave_sg(
     struct scatterlist *sg;
     struct sdma_desc *desc;
 
+    sdma_config_write(chan, &sdmac->slave_config, direction);
+
     desc = sdma_transfer_init(sdmac, direction, sg_len);
     if (!desc)
         goto err_out;
@@ -1515,6 +1509,8 @@ static struct dma_async_tx_descriptor *sdma_prep_dma_cyclic(
 
     dev_dbg(sdma->dev, "%s channel: %d\n", __func__, channel);
 
+    sdma_config_write(chan, &sdmac->slave_config, direction);
+
     desc = sdma_transfer_init(sdmac, direction, num_periods);
     if (!desc)
         goto err_out;
@@ -1570,17 +1566,18 @@ static struct dma_async_tx_descriptor *sdma_prep_dma_cyclic(
     return NULL;
 }
 
-static int sdma_config(struct dma_chan *chan,
-               struct dma_slave_config *dmaengine_cfg)
+static int sdma_config_write(struct dma_chan *chan,
+               struct dma_slave_config *dmaengine_cfg,
+               enum dma_transfer_direction direction)
 {
     struct sdma_channel *sdmac = to_sdma_chan(chan);
 
-    if (dmaengine_cfg->direction == DMA_DEV_TO_MEM) {
+    if (direction == DMA_DEV_TO_MEM) {
         sdmac->per_address = dmaengine_cfg->src_addr;
         sdmac->watermark_level = dmaengine_cfg->src_maxburst *
             dmaengine_cfg->src_addr_width;
         sdmac->word_size = dmaengine_cfg->src_addr_width;
-    } else if (dmaengine_cfg->direction == DMA_DEV_TO_DEV) {
+    } else if (direction == DMA_DEV_TO_DEV) {
         sdmac->per_address2 = dmaengine_cfg->src_addr;
         sdmac->per_address = dmaengine_cfg->dst_addr;
         sdmac->watermark_level = dmaengine_cfg->src_maxburst &
@@ -1594,10 +1591,33 @@ static int sdma_config(struct dma_chan *chan,
             dmaengine_cfg->dst_addr_width;
         sdmac->word_size = dmaengine_cfg->dst_addr_width;
     }
-    sdmac->direction = dmaengine_cfg->direction;
+    sdmac->direction = direction;
     return sdma_config_channel(chan);
 }
 
+static int sdma_config(struct dma_chan *chan,
+               struct dma_slave_config *dmaengine_cfg)
+{
+    struct sdma_channel *sdmac = to_sdma_chan(chan);
+
+    memcpy(&sdmac->slave_config, dmaengine_cfg, sizeof(*dmaengine_cfg));
+
+    /* Set ENBLn earlier to make sure dma request triggered after that */
+    if (sdmac->event_id0) {
+        if (sdmac->event_id0 >= sdmac->sdma->drvdata->num_events)
+            return -EINVAL;
+        sdma_event_enable(sdmac, sdmac->event_id0);
+    }
+
+    if (sdmac->event_id1) {
+        if (sdmac->event_id1 >= sdmac->sdma->drvdata->num_events)
+            return -EINVAL;
+        sdma_event_enable(sdmac, sdmac->event_id1);
+    }
+
+    return 0;
+}
+
 static enum dma_status sdma_tx_status(struct dma_chan *chan,
                       dma_cookie_t cookie,
                       struct dma_tx_state *txstate)
...
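All three dma_slave_config conversions in this batch (imx-sdma above, mmp_pdma and pl330 below) use the same pattern: the device_config callback only caches the configuration, and the cached values are applied at prep time using the direction argument of the prep call. Seen from a client driver, the direction now travels exclusively through the prep API; a minimal sketch under that assumption (the FIFO address and burst size are placeholders, not values from this diff):

#include <linux/dmaengine.h>

/* Minimal client-side sketch: configure the slave channel, then name the
 * transfer direction in the prep call rather than in dma_slave_config.
 */
static int start_rx(struct dma_chan *chan, struct scatterlist *sgl,
                    unsigned int sg_len, dma_addr_t fifo_addr)
{
    struct dma_slave_config cfg = {
        .src_addr       = fifo_addr,    /* placeholder device FIFO */
        .src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES,
        .src_maxburst   = 8,            /* placeholder burst size */
        /* no .direction: deprecated, ignored by converted drivers */
    };
    struct dma_async_tx_descriptor *desc;
    dma_cookie_t cookie;
    int ret;

    ret = dmaengine_slave_config(chan, &cfg);
    if (ret)
        return ret;

    desc = dmaengine_prep_slave_sg(chan, sgl, sg_len, DMA_DEV_TO_MEM,
                                   DMA_PREP_INTERRUPT);
    if (!desc)
        return -ENOMEM;

    cookie = dmaengine_submit(desc);
    if (dma_submit_error(cookie))
        return -EIO;

    dma_async_issue_pending(chan);
    return 0;
}

Deferring the register writes to prep time also means one cached dma_slave_config can serve either direction of a bidirectional channel, with the direction chosen per transfer.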
@@ -11,3 +11,16 @@ config MTK_HSDMA
       This controller provides the channels which is dedicated to
       memory-to-memory transfer to offload from CPU through ring-
       based descriptor management.
+
+config MTK_CQDMA
+    tristate "MediaTek Command-Queue DMA controller support"
+    depends on ARCH_MEDIATEK || COMPILE_TEST
+    select DMA_ENGINE
+    select DMA_VIRTUAL_CHANNELS
+    select ASYNC_TX_ENABLE_CHANNEL_SWITCH
+    help
+      Enable support for Command-Queue DMA controller on MediaTek
+      SoCs.
+
+      This controller provides the channels which is dedicated to
+      memory-to-memory transfer to offload from CPU.
 
 obj-$(CONFIG_MTK_HSDMA) += mtk-hsdma.o
+obj-$(CONFIG_MTK_CQDMA) += mtk-cqdma.o
... (diff collapsed)
@@ -676,7 +676,7 @@ static void mic_dma_dev_unreg(struct mic_dma_device *mic_dma_dev)
 }
 
 /* DEBUGFS CODE */
-static int mic_dma_reg_seq_show(struct seq_file *s, void *pos)
+static int mic_dma_reg_show(struct seq_file *s, void *pos)
 {
     struct mic_dma_device *mic_dma_dev = s->private;
     int i, chan_num, first_chan = mic_dma_dev->start_ch;
@@ -707,23 +707,7 @@ static int mic_dma_reg_seq_show(struct seq_file *s, void *pos)
     return 0;
 }
 
-static int mic_dma_reg_debug_open(struct inode *inode, struct file *file)
-{
-    return single_open(file, mic_dma_reg_seq_show, inode->i_private);
-}
-
-static int mic_dma_reg_debug_release(struct inode *inode, struct file *file)
-{
-    return single_release(inode, file);
-}
-
-static const struct file_operations mic_dma_reg_ops = {
-    .owner   = THIS_MODULE,
-    .open    = mic_dma_reg_debug_open,
-    .read    = seq_read,
-    .llseek  = seq_lseek,
-    .release = mic_dma_reg_debug_release
-};
+DEFINE_SHOW_ATTRIBUTE(mic_dma_reg);
 
 /* Debugfs parent dir */
 static struct dentry *mic_dma_dbg;
@@ -747,7 +731,7 @@ static int mic_dma_driver_probe(struct mbus_device *mbdev)
         if (mic_dma_dev->dbg_dir)
             debugfs_create_file("mic_dma_reg", 0444,
                         mic_dma_dev->dbg_dir, mic_dma_dev,
-                        &mic_dma_reg_ops);
+                        &mic_dma_reg_fops);
     }
     return 0;
 }
...
@@ -96,6 +96,7 @@ struct mmp_pdma_chan {
     struct dma_async_tx_descriptor desc;
     struct mmp_pdma_phy *phy;
     enum dma_transfer_direction dir;
+    struct dma_slave_config slave_config;
 
     struct mmp_pdma_desc_sw *cyclic_first;  /* first desc_sw if channel
                                              * is in cyclic mode */
@@ -140,6 +141,10 @@ struct mmp_pdma_device {
 #define to_mmp_pdma_dev(dmadev) \
     container_of(dmadev, struct mmp_pdma_device, device)
 
+static int mmp_pdma_config_write(struct dma_chan *dchan,
+               struct dma_slave_config *cfg,
+               enum dma_transfer_direction direction);
+
 static void set_desc(struct mmp_pdma_phy *phy, dma_addr_t addr)
 {
     u32 reg = (phy->idx << 4) + DDADR;
@@ -537,6 +542,8 @@ mmp_pdma_prep_slave_sg(struct dma_chan *dchan, struct scatterlist *sgl,
 
     chan->byte_align = false;
 
+    mmp_pdma_config_write(dchan, &chan->slave_config, dir);
+
     for_each_sg(sgl, sg, sg_len, i) {
         addr = sg_dma_address(sg);
         avail = sg_dma_len(sgl);
@@ -619,6 +626,7 @@ mmp_pdma_prep_dma_cyclic(struct dma_chan *dchan,
         return NULL;
 
     chan = to_mmp_pdma_chan(dchan);
+    mmp_pdma_config_write(dchan, &chan->slave_config, direction);
 
     switch (direction) {
     case DMA_MEM_TO_DEV:
@@ -684,8 +692,9 @@ mmp_pdma_prep_dma_cyclic(struct dma_chan *dchan,
     return NULL;
 }
 
-static int mmp_pdma_config(struct dma_chan *dchan,
-               struct dma_slave_config *cfg)
+static int mmp_pdma_config_write(struct dma_chan *dchan,
+               struct dma_slave_config *cfg,
+               enum dma_transfer_direction direction)
 {
     struct mmp_pdma_chan *chan = to_mmp_pdma_chan(dchan);
     u32 maxburst = 0, addr = 0;
@@ -694,12 +703,12 @@ static int mmp_pdma_config(struct dma_chan *dchan,
     if (!dchan)
         return -EINVAL;
 
-    if (cfg->direction == DMA_DEV_TO_MEM) {
+    if (direction == DMA_DEV_TO_MEM) {
         chan->dcmd = DCMD_INCTRGADDR | DCMD_FLOWSRC;
         maxburst = cfg->src_maxburst;
         width = cfg->src_addr_width;
         addr = cfg->src_addr;
-    } else if (cfg->direction == DMA_MEM_TO_DEV) {
+    } else if (direction == DMA_MEM_TO_DEV) {
         chan->dcmd = DCMD_INCSRCADDR | DCMD_FLOWTRG;
         maxburst = cfg->dst_maxburst;
         width = cfg->dst_addr_width;
@@ -720,7 +729,7 @@ static int mmp_pdma_config(struct dma_chan *dchan,
     else if (maxburst == 32)
         chan->dcmd |= DCMD_BURST32;
 
-    chan->dir = cfg->direction;
+    chan->dir = direction;
     chan->dev_addr = addr;
     /* FIXME: drivers should be ported over to use the filter
      * function. Once that's done, the following two lines can
@@ -732,6 +741,15 @@ static int mmp_pdma_config(struct dma_chan *dchan,
     return 0;
 }
 
+static int mmp_pdma_config(struct dma_chan *dchan,
+               struct dma_slave_config *cfg)
+{
+    struct mmp_pdma_chan *chan = to_mmp_pdma_chan(dchan);
+
+    memcpy(&chan->slave_config, cfg, sizeof(*cfg));
+    return 0;
+}
+
 static int mmp_pdma_terminate_all(struct dma_chan *dchan)
 {
     struct mmp_pdma_chan *chan = to_mmp_pdma_chan(dchan);
...
@@ -448,6 +448,7 @@ struct dma_pl330_chan {
     /* DMA-mapped view of the FIFO; may differ if an IOMMU is present */
     dma_addr_t fifo_dma;
     enum dma_data_direction dir;
+    struct dma_slave_config slave_config;
 
     /* for cyclic capability */
     bool cyclic;
@@ -542,6 +543,10 @@ struct _xfer_spec {
     struct dma_pl330_desc *desc;
 };
 
+static int pl330_config_write(struct dma_chan *chan,
+            struct dma_slave_config *slave_config,
+            enum dma_transfer_direction direction);
+
 static inline bool _queue_full(struct pl330_thread *thrd)
 {
     return thrd->req[0].desc != NULL && thrd->req[1].desc != NULL;
@@ -2220,20 +2225,21 @@ static int fixup_burst_len(int max_burst_len, int quirks)
         return max_burst_len;
 }
 
-static int pl330_config(struct dma_chan *chan,
-            struct dma_slave_config *slave_config)
+static int pl330_config_write(struct dma_chan *chan,
+            struct dma_slave_config *slave_config,
+            enum dma_transfer_direction direction)
 {
     struct dma_pl330_chan *pch = to_pchan(chan);
 
     pl330_unprep_slave_fifo(pch);
-    if (slave_config->direction == DMA_MEM_TO_DEV) {
+    if (direction == DMA_MEM_TO_DEV) {
         if (slave_config->dst_addr)
             pch->fifo_addr = slave_config->dst_addr;
         if (slave_config->dst_addr_width)
            pch->burst_sz = __ffs(slave_config->dst_addr_width);
         pch->burst_len = fixup_burst_len(slave_config->dst_maxburst,
             pch->dmac->quirks);
-    } else if (slave_config->direction == DMA_DEV_TO_MEM) {
+    } else if (direction == DMA_DEV_TO_MEM) {
         if (slave_config->src_addr)
             pch->fifo_addr = slave_config->src_addr;
         if (slave_config->src_addr_width)
@@ -2245,6 +2251,16 @@ static int pl330_config(struct dma_chan *chan,
     return 0;
 }
 
+static int pl330_config(struct dma_chan *chan,
+            struct dma_slave_config *slave_config)
+{
+    struct dma_pl330_chan *pch = to_pchan(chan);
+
+    memcpy(&pch->slave_config, slave_config, sizeof(*slave_config));
+
+    return 0;
+}
+
 static int pl330_terminate_all(struct dma_chan *chan)
 {
     struct dma_pl330_chan *pch = to_pchan(chan);
@@ -2661,6 +2677,8 @@ static struct dma_async_tx_descriptor *pl330_prep_dma_cyclic(
         return NULL;
     }
 
+    pl330_config_write(chan, &pch->slave_config, direction);
+
     if (!pl330_prep_slave_fifo(pch, direction))
         return NULL;
@@ -2815,6 +2833,8 @@ pl330_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
     if (unlikely(!pch || !sgl || !sg_len))
         return NULL;
 
+    pl330_config_write(chan, &pch->slave_config, direction);
+
     if (!pl330_prep_slave_fifo(pch, direction))
         return NULL;
...
@@ -189,7 +189,7 @@ static bool pxad_filter_fn(struct dma_chan *chan, void *param);
 #include <linux/uaccess.h>
 #include <linux/seq_file.h>
 
-static int dbg_show_requester_chan(struct seq_file *s, void *p)
+static int requester_chan_show(struct seq_file *s, void *p)
 {
     struct pxad_phy *phy = s->private;
     int i;
@@ -220,7 +220,7 @@ static int is_phys_valid(unsigned long addr)
 #define PXA_DCSR_STR(flag) (dcsr & PXA_DCSR_##flag ? #flag" " : "")
 #define PXA_DCMD_STR(flag) (dcmd & PXA_DCMD_##flag ? #flag" " : "")
 
-static int dbg_show_descriptors(struct seq_file *s, void *p)
+static int descriptors_show(struct seq_file *s, void *p)
 {
     struct pxad_phy *phy = s->private;
     int i, max_show = 20, burst, width;
@@ -263,7 +263,7 @@ static int dbg_show_descriptors(struct seq_file *s, void *p)
     return 0;
 }
 
-static int dbg_show_chan_state(struct seq_file *s, void *p)
+static int chan_state_show(struct seq_file *s, void *p)
 {
     struct pxad_phy *phy = s->private;
     u32 dcsr, dcmd;
@@ -306,7 +306,7 @@ static int dbg_show_chan_state(struct seq_file *s, void *p)
     return 0;
 }
 
-static int dbg_show_state(struct seq_file *s, void *p)
+static int state_show(struct seq_file *s, void *p)
 {
     struct pxad_device *pdev = s->private;
 
@@ -317,22 +317,10 @@ static int dbg_show_state(struct seq_file *s, void *p)
     return 0;
 }
 
-#define DBGFS_FUNC_DECL(name) \
-static int dbg_open_##name(struct inode *inode, struct file *file) \
-{ \
-    return single_open(file, dbg_show_##name, inode->i_private); \
-} \
-static const struct file_operations dbg_fops_##name = { \
-    .open    = dbg_open_##name, \
-    .llseek  = seq_lseek, \
-    .read    = seq_read, \
-    .release = single_release, \
-}
-
-DBGFS_FUNC_DECL(state);
-DBGFS_FUNC_DECL(chan_state);
-DBGFS_FUNC_DECL(descriptors);
-DBGFS_FUNC_DECL(requester_chan);
+DEFINE_SHOW_ATTRIBUTE(state);
+DEFINE_SHOW_ATTRIBUTE(chan_state);
+DEFINE_SHOW_ATTRIBUTE(descriptors);
+DEFINE_SHOW_ATTRIBUTE(requester_chan);
 
 static struct dentry *pxad_dbg_alloc_chan(struct pxad_device *pdev,
                       int ch, struct dentry *chandir)
@@ -348,13 +336,13 @@ static struct dentry *pxad_dbg_alloc_chan(struct pxad_device *pdev,
 
     if (chan)
         chan_state = debugfs_create_file("state", 0400, chan, dt,
-                         &dbg_fops_chan_state);
+                         &chan_state_fops);
     if (chan_state)
         chan_descr = debugfs_create_file("descriptors", 0400, chan, dt,
-                         &dbg_fops_descriptors);
+                         &descriptors_fops);
     if (chan_descr)
         chan_reqs = debugfs_create_file("requesters", 0400, chan, dt,
-                        &dbg_fops_requester_chan);
+                        &requester_chan_fops);
     if (!chan_reqs)
         goto err_state;
 
@@ -375,7 +363,7 @@ static void pxad_init_debugfs(struct pxad_device *pdev)
         goto err_root;
 
     pdev->dbgfs_state = debugfs_create_file("state", 0400, pdev->dbgfs_root,
-                        pdev, &dbg_fops_state);
+                        pdev, &state_fops);
     if (!pdev->dbgfs_state)
         goto err_state;
...
@@ -85,11 +85,11 @@ static void hidma_ll_devstats(struct seq_file *s, void *llhndl)
 }
 
 /*
- * hidma_chan_stats: display HIDMA channel statistics
+ * hidma_chan_show: display HIDMA channel statistics
  *
  * Display the statistics for the current HIDMA virtual channel device.
  */
-static int hidma_chan_stats(struct seq_file *s, void *unused)
+static int hidma_chan_show(struct seq_file *s, void *unused)
 {
     struct hidma_chan *mchan = s->private;
     struct hidma_desc *mdesc;
@@ -117,11 +117,11 @@ static int hidma_chan_stats(struct seq_file *s, void *unused)
 }
 
 /*
- * hidma_dma_info: display HIDMA device info
+ * hidma_dma_show: display HIDMA device info
  *
  * Display the info for the current HIDMA device.
  */
-static int hidma_dma_info(struct seq_file *s, void *unused)
+static int hidma_dma_show(struct seq_file *s, void *unused)
 {
     struct hidma_dev *dmadev = s->private;
     resource_size_t sz;
@@ -138,29 +138,8 @@ static int hidma_dma_info(struct seq_file *s, void *unused)
     return 0;
 }
 
-static int hidma_chan_stats_open(struct inode *inode, struct file *file)
-{
-    return single_open(file, hidma_chan_stats, inode->i_private);
-}
-
-static int hidma_dma_info_open(struct inode *inode, struct file *file)
-{
-    return single_open(file, hidma_dma_info, inode->i_private);
-}
-
-static const struct file_operations hidma_chan_fops = {
-    .open    = hidma_chan_stats_open,
-    .read    = seq_read,
-    .llseek  = seq_lseek,
-    .release = single_release,
-};
-
-static const struct file_operations hidma_dma_fops = {
-    .open    = hidma_dma_info_open,
-    .read    = seq_read,
-    .llseek  = seq_lseek,
-    .release = single_release,
-};
+DEFINE_SHOW_ATTRIBUTE(hidma_chan);
+DEFINE_SHOW_ATTRIBUTE(hidma_dma);
 
 void hidma_debug_uninit(struct hidma_dev *dmadev)
 {
...
@@ -17,7 +17,6 @@
 #include <linux/kernel.h>
 #include <linux/module.h>
 #include <linux/platform_device.h>
-#include <linux/sa11x0-dma.h>
 #include <linux/slab.h>
 #include <linux/spinlock.h>
 
@@ -830,6 +829,14 @@ static const struct dma_slave_map sa11x0_dma_map[] = {
     { "sa11x0-ssp", "rx", "Ser4SSPRc" },
 };
 
+static bool sa11x0_dma_filter_fn(struct dma_chan *chan, void *param)
+{
+    struct sa11x0_dma_chan *c = to_sa11x0_dma_chan(chan);
+    const char *p = param;
+
+    return !strcmp(c->name, p);
+}
+
 static int sa11x0_dma_init_dmadev(struct dma_device *dmadev,
                   struct device *dev)
 {
@@ -1087,18 +1094,6 @@ static struct platform_driver sa11x0_dma_driver = {
     .remove = sa11x0_dma_remove,
 };
 
-bool sa11x0_dma_filter_fn(struct dma_chan *chan, void *param)
-{
-    if (chan->device->dev->driver == &sa11x0_dma_driver.driver) {
-        struct sa11x0_dma_chan *c = to_sa11x0_dma_chan(chan);
-        const char *p = param;
-
-        return !strcmp(c->name, p);
-    }
-    return false;
-}
-EXPORT_SYMBOL(sa11x0_dma_filter_fn);
-
 static int __init sa11x0_dma_init(void)
 {
     return platform_driver_register(&sa11x0_dma_driver);
...
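With the filter function made static, sa11x0 clients stop including linux/sa11x0-dma.h and resolve channels through the dma_slave_map table registered above. A minimal client-side sketch (the device pointer and the "rx" name mirror the "sa11x0-ssp" map entry shown in the hunk; error handling trimmed):

/* Sketch: look the channel up via the slave map instead of calling the
 * formerly exported sa11x0_dma_filter_fn() through dma_request_channel().
 */
struct dma_chan *chan;

chan = dma_request_chan(&pdev->dev, "rx");
if (IS_ERR(chan))
    return PTR_ERR(chan);    /* may be -EPROBE_DEFER while the DMAC probes */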
+# SPDX-License-Identifier: GPL-2.0
 #
 # DMA engine configuration for sh
 #
@@ -12,7 +13,7 @@ config RENESAS_DMA
 
 config SH_DMAE_BASE
     bool "Renesas SuperH DMA Engine support"
-    depends on SUPERH || ARCH_RENESAS || COMPILE_TEST
+    depends on SUPERH || COMPILE_TEST
     depends on !SUPERH || SH_DMA
     depends on !SH_DMA_API
     default y
@@ -30,15 +31,6 @@ config SH_DMAE
     help
       Enable support for the Renesas SuperH DMA controllers.
 
-if SH_DMAE
-
-config SH_DMAE_R8A73A4
-    def_bool y
-    depends on ARCH_R8A73A4
-    depends on OF
-
-endif
-
 config RCAR_DMAC
     tristate "Renesas R-Car Gen2 DMA Controller"
     depends on ARCH_RENESAS || COMPILE_TEST
...
@@ -10,7 +10,6 @@ obj-$(CONFIG_SH_DMAE_BASE) += shdma-base.o shdma-of.o
 #
 
 shdma-y := shdmac.o
-shdma-$(CONFIG_SH_DMAE_R8A73A4) += shdma-r8a73a4.o
 shdma-objs := $(shdma-y)
 
 obj-$(CONFIG_SH_DMAE) += shdma.o
...
-// SPDX-License-Identifier: GPL-2.0
-/*
- * Renesas SuperH DMA Engine support for r8a73a4 (APE6) SoCs
- *
- * Copyright (C) 2013 Renesas Electronics, Inc.
- */
-#include <linux/sh_dma.h>
-
-#include "shdma-arm.h"
-
-static const unsigned int dma_ts_shift[] = SH_DMAE_TS_SHIFT;
-
-static const struct sh_dmae_slave_config dma_slaves[] = {
-    {
-        .chcr    = CHCR_TX(XMIT_SZ_32BIT),
-        .mid_rid = 0xd1,    /* MMC0 Tx */
-    }, {
-        .chcr    = CHCR_RX(XMIT_SZ_32BIT),
-        .mid_rid = 0xd2,    /* MMC0 Rx */
-    }, {
-        .chcr    = CHCR_TX(XMIT_SZ_32BIT),
-        .mid_rid = 0xe1,    /* MMC1 Tx */
-    }, {
-        .chcr    = CHCR_RX(XMIT_SZ_32BIT),
-        .mid_rid = 0xe2,    /* MMC1 Rx */
-    },
-};
-
-#define DMAE_CHANNEL(a, b)                  \
-    {                                       \
-        .offset       = (a) - 0x20,         \
-        .dmars        = (a) - 0x20 + 0x40,  \
-        .chclr_bit    = (b),                \
-        .chclr_offset = 0x80 - 0x20,        \
-    }
-
-static const struct sh_dmae_channel dma_channels[] = {
-    DMAE_CHANNEL(0x8000, 0),
-    DMAE_CHANNEL(0x8080, 1),
-    DMAE_CHANNEL(0x8100, 2),
-    DMAE_CHANNEL(0x8180, 3),
-    DMAE_CHANNEL(0x8200, 4),
-    DMAE_CHANNEL(0x8280, 5),
-    DMAE_CHANNEL(0x8300, 6),
-    DMAE_CHANNEL(0x8380, 7),
-    DMAE_CHANNEL(0x8400, 8),
-    DMAE_CHANNEL(0x8480, 9),
-    DMAE_CHANNEL(0x8500, 10),
-    DMAE_CHANNEL(0x8580, 11),
-    DMAE_CHANNEL(0x8600, 12),
-    DMAE_CHANNEL(0x8680, 13),
-    DMAE_CHANNEL(0x8700, 14),
-    DMAE_CHANNEL(0x8780, 15),
-    DMAE_CHANNEL(0x8800, 16),
-    DMAE_CHANNEL(0x8880, 17),
-    DMAE_CHANNEL(0x8900, 18),
-    DMAE_CHANNEL(0x8980, 19),
-};
-
-const struct sh_dmae_pdata r8a73a4_dma_pdata = {
-    .slave         = dma_slaves,
-    .slave_num     = ARRAY_SIZE(dma_slaves),
-    .channel       = dma_channels,
-    .channel_num   = ARRAY_SIZE(dma_channels),
-    .ts_low_shift  = TS_LOW_SHIFT,
-    .ts_low_mask   = TS_LOW_BIT << TS_LOW_SHIFT,
-    .ts_high_shift = TS_HI_SHIFT,
-    .ts_high_mask  = TS_HI_BIT << TS_HI_SHIFT,
-    .ts_shift      = dma_ts_shift,
-    .ts_shift_num  = ARRAY_SIZE(dma_ts_shift),
-    .dmaor_init    = DMAOR_DME,
-    .chclr_present = 1,
-    .chclr_bitwise = 1,
-};
@@ -58,11 +58,4 @@ struct sh_dmae_desc {
 #define to_sh_dev(chan) container_of(chan->shdma_chan.dma_chan.device,\
                      struct sh_dmae_device, shdma_dev.dma_dev)
 
-#ifdef CONFIG_SH_DMAE_R8A73A4
-extern const struct sh_dmae_pdata r8a73a4_dma_pdata;
-#define r8a73a4_shdma_devid (&r8a73a4_dma_pdata)
-#else
-#define r8a73a4_shdma_devid NULL
-#endif
-
 #endif /* __DMA_SHDMA_H */
@@ -665,12 +665,6 @@ static const struct shdma_ops sh_dmae_shdma_ops = {
     .get_partial = sh_dmae_get_partial,
 };
 
-static const struct of_device_id sh_dmae_of_match[] = {
-    {.compatible = "renesas,shdma-r8a73a4", .data = r8a73a4_shdma_devid,},
-    {}
-};
-MODULE_DEVICE_TABLE(of, sh_dmae_of_match);
-
 static int sh_dmae_probe(struct platform_device *pdev)
 {
     const enum dma_slave_buswidth widths =
@@ -915,7 +909,6 @@ static struct platform_driver sh_dmae_driver = {
     .driver = {
         .pm   = &sh_dmae_pm,
         .name = SH_DMAE_DRV_NAME,
-        .of_match_table = sh_dmae_of_match,
     },
     .remove = sh_dmae_remove,
 };
...
@@ -36,6 +36,8 @@
 #define SPRD_DMA_GLB_CHN_EN_STS     0x1c
 #define SPRD_DMA_GLB_DEBUG_STS      0x20
 #define SPRD_DMA_GLB_ARB_SEL_STS    0x24
+#define SPRD_DMA_GLB_2STAGE_GRP1    0x28
+#define SPRD_DMA_GLB_2STAGE_GRP2    0x2c
 #define SPRD_DMA_GLB_REQ_UID(uid)   (0x4 * ((uid) - 1))
 #define SPRD_DMA_GLB_REQ_UID_OFFSET 0x2000
@@ -57,6 +59,18 @@
 #define SPRD_DMA_CHN_SRC_BLK_STEP   0x38
 #define SPRD_DMA_CHN_DES_BLK_STEP   0x3c
 
+/* SPRD_DMA_GLB_2STAGE_GRP register definition */
+#define SPRD_DMA_GLB_2STAGE_EN      BIT(24)
+#define SPRD_DMA_GLB_CHN_INT_MASK   GENMASK(23, 20)
+#define SPRD_DMA_GLB_LIST_DONE_TRG  BIT(19)
+#define SPRD_DMA_GLB_TRANS_DONE_TRG BIT(18)
+#define SPRD_DMA_GLB_BLOCK_DONE_TRG BIT(17)
+#define SPRD_DMA_GLB_FRAG_DONE_TRG  BIT(16)
+#define SPRD_DMA_GLB_TRG_OFFSET     16
+#define SPRD_DMA_GLB_DEST_CHN_MASK  GENMASK(13, 8)
+#define SPRD_DMA_GLB_DEST_CHN_OFFSET 8
+#define SPRD_DMA_GLB_SRC_CHN_MASK   GENMASK(5, 0)
+
 /* SPRD_DMA_CHN_INTC register definition */
 #define SPRD_DMA_INT_MASK           GENMASK(4, 0)
 #define SPRD_DMA_INT_CLR_OFFSET     24
@@ -118,6 +132,10 @@
 #define SPRD_DMA_SRC_TRSF_STEP_OFFSET 0
 #define SPRD_DMA_TRSF_STEP_MASK     GENMASK(15, 0)
 
+/* define DMA channel mode & trigger mode mask */
+#define SPRD_DMA_CHN_MODE_MASK      GENMASK(7, 0)
+#define SPRD_DMA_TRG_MODE_MASK      GENMASK(7, 0)
+
 /* define the DMA transfer step type */
 #define SPRD_DMA_NONE_STEP          0
 #define SPRD_DMA_BYTE_STEP          1
@@ -159,6 +177,7 @@ struct sprd_dma_chn_hw {
 struct sprd_dma_desc {
     struct virt_dma_desc    vd;
     struct sprd_dma_chn_hw  chn_hw;
+    enum dma_transfer_direction dir;
 };
 
 /* dma channel description */
@@ -169,6 +188,8 @@ struct sprd_dma_chn {
     struct dma_slave_config slave_cfg;
     u32                     chn_num;
     u32                     dev_id;
+    enum sprd_dma_chn_mode  chn_mode;
+    enum sprd_dma_trg_mode  trg_mode;
     struct sprd_dma_desc    *cur_desc;
 };
@@ -205,6 +226,16 @@ static inline struct sprd_dma_desc *to_sprd_dma_desc(struct virt_dma_desc *vd)
     return container_of(vd, struct sprd_dma_desc, vd);
 }
 
+static void sprd_dma_glb_update(struct sprd_dma_dev *sdev, u32 reg,
+                u32 mask, u32 val)
+{
+    u32 orig = readl(sdev->glb_base + reg);
+    u32 tmp;
+
+    tmp = (orig & ~mask) | val;
+    writel(tmp, sdev->glb_base + reg);
+}
+
 static void sprd_dma_chn_update(struct sprd_dma_chn *schan, u32 reg,
                 u32 mask, u32 val)
 {
@@ -331,6 +362,17 @@ static void sprd_dma_stop_and_disable(struct sprd_dma_chn *schan)
     sprd_dma_disable_chn(schan);
 }
 
+static unsigned long sprd_dma_get_src_addr(struct sprd_dma_chn *schan)
+{
+    unsigned long addr, addr_high;
+
+    addr = readl(schan->chn_base + SPRD_DMA_CHN_SRC_ADDR);
+    addr_high = readl(schan->chn_base + SPRD_DMA_CHN_WARP_PTR) &
+            SPRD_DMA_HIGH_ADDR_MASK;
+
+    return addr | (addr_high << SPRD_DMA_HIGH_ADDR_OFFSET);
+}
+
 static unsigned long sprd_dma_get_dst_addr(struct sprd_dma_chn *schan)
 {
     unsigned long addr, addr_high;
@@ -377,6 +419,49 @@ static enum sprd_dma_req_mode sprd_dma_get_req_type(struct sprd_dma_chn *schan)
     return (frag_reg >> SPRD_DMA_REQ_MODE_OFFSET) & SPRD_DMA_REQ_MODE_MASK;
 }
 
+static int sprd_dma_set_2stage_config(struct sprd_dma_chn *schan)
+{
+    struct sprd_dma_dev *sdev = to_sprd_dma_dev(&schan->vc.chan);
+    u32 val, chn = schan->chn_num + 1;
+
+    switch (schan->chn_mode) {
+    case SPRD_DMA_SRC_CHN0:
+        val = chn & SPRD_DMA_GLB_SRC_CHN_MASK;
+        val |= BIT(schan->trg_mode - 1) << SPRD_DMA_GLB_TRG_OFFSET;
+        val |= SPRD_DMA_GLB_2STAGE_EN;
+        sprd_dma_glb_update(sdev, SPRD_DMA_GLB_2STAGE_GRP1, val, val);
+        break;
+
+    case SPRD_DMA_SRC_CHN1:
+        val = chn & SPRD_DMA_GLB_SRC_CHN_MASK;
+        val |= BIT(schan->trg_mode - 1) << SPRD_DMA_GLB_TRG_OFFSET;
+        val |= SPRD_DMA_GLB_2STAGE_EN;
+        sprd_dma_glb_update(sdev, SPRD_DMA_GLB_2STAGE_GRP2, val, val);
+        break;
+
+    case SPRD_DMA_DST_CHN0:
+        val = (chn << SPRD_DMA_GLB_DEST_CHN_OFFSET) &
+            SPRD_DMA_GLB_DEST_CHN_MASK;
+        val |= SPRD_DMA_GLB_2STAGE_EN;
+        sprd_dma_glb_update(sdev, SPRD_DMA_GLB_2STAGE_GRP1, val, val);
+        break;
+
+    case SPRD_DMA_DST_CHN1:
+        val = (chn << SPRD_DMA_GLB_DEST_CHN_OFFSET) &
+            SPRD_DMA_GLB_DEST_CHN_MASK;
+        val |= SPRD_DMA_GLB_2STAGE_EN;
+        sprd_dma_glb_update(sdev, SPRD_DMA_GLB_2STAGE_GRP2, val, val);
+        break;
+
+    default:
+        dev_err(sdev->dma_dev.dev, "invalid channel mode setting %d\n",
+            schan->chn_mode);
+        return -EINVAL;
+    }
+
+    return 0;
+}
+
 static void sprd_dma_set_chn_config(struct sprd_dma_chn *schan,
                     struct sprd_dma_desc *sdesc)
 {
@@ -410,6 +495,13 @@ static void sprd_dma_start(struct sprd_dma_chn *schan)
	list_del(&vd->node);
	schan->cur_desc = to_sprd_dma_desc(vd);

+	/*
+	 * Set 2-stage configuration if the channel starts one 2-stage
+	 * transfer.
+	 */
+	if (schan->chn_mode && sprd_dma_set_2stage_config(schan))
+		return;
+
	/*
	 * Copy the DMA configuration from DMA descriptor to this hardware
	 * channel.
@@ -427,6 +519,7 @@ static void sprd_dma_stop(struct sprd_dma_chn *schan)
	sprd_dma_stop_and_disable(schan);
	sprd_dma_unset_uid(schan);
	sprd_dma_clear_int(schan);
+	schan->cur_desc = NULL;
 }

 static bool sprd_dma_check_trans_done(struct sprd_dma_desc *sdesc,
@@ -450,7 +543,7 @@ static irqreturn_t dma_irq_handle(int irq, void *dev_id)
	struct sprd_dma_desc *sdesc;
	enum sprd_dma_req_mode req_type;
	enum sprd_dma_int_type int_type;
-	bool trans_done = false;
+	bool trans_done = false, cyclic = false;
	u32 i;

	while (irq_status) {
@@ -465,13 +558,19 @@ static irqreturn_t dma_irq_handle(int irq, void *dev_id)
		sdesc = schan->cur_desc;

-		/* Check if the dma request descriptor is done. */
-		trans_done = sprd_dma_check_trans_done(sdesc, int_type,
-						       req_type);
-		if (trans_done == true) {
-			vchan_cookie_complete(&sdesc->vd);
-			schan->cur_desc = NULL;
-			sprd_dma_start(schan);
+		/* cyclic mode schedule callback */
+		cyclic = schan->linklist.phy_addr ? true : false;
+		if (cyclic == true) {
+			vchan_cyclic_callback(&sdesc->vd);
+		} else {
+			/* Check if the dma request descriptor is done. */
+			trans_done = sprd_dma_check_trans_done(sdesc, int_type,
+							       req_type);
+			if (trans_done == true) {
+				vchan_cookie_complete(&sdesc->vd);
+				schan->cur_desc = NULL;
+				sprd_dma_start(schan);
+			}
		}
		spin_unlock(&schan->vc.lock);
	}
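With vchan_cyclic_callback() in the interrupt path, a channel backed by a circular link-list now fires one client callback per period instead of completing a cookie. A minimal client-side sketch of the generic dmaengine cyclic API this feeds into; the channel name, FIFO address, and period sizes are illustrative assumptions, not taken from this patch set:

#include <linux/dmaengine.h>

#define PERIOD_BYTES	4096	/* illustrative */
#define NR_PERIODS	8	/* illustrative */

static int start_cyclic_rx(struct device *dev, dma_addr_t buf_phys,
			   void (*period_cb)(void *), void *cb_arg)
{
	struct dma_async_tx_descriptor *desc;
	struct dma_slave_config cfg = {
		.src_addr	= 0x70b00000,	/* illustrative FIFO address */
		.src_addr_width	= DMA_SLAVE_BUSWIDTH_4_BYTES,
	};
	struct dma_chan *chan;

	chan = dma_request_chan(dev, "rx");	/* "rx" name is illustrative */
	if (IS_ERR(chan))
		return PTR_ERR(chan);

	dmaengine_slave_config(chan, &cfg);

	/* One interrupt, and one callback, per completed period. */
	desc = dmaengine_prep_dma_cyclic(chan, buf_phys,
					 NR_PERIODS * PERIOD_BYTES,
					 PERIOD_BYTES, DMA_DEV_TO_MEM,
					 DMA_PREP_INTERRUPT);
	if (!desc)
		return -ENOMEM;

	desc->callback = period_cb;
	desc->callback_param = cb_arg;
	dmaengine_submit(desc);
	dma_async_issue_pending(chan);
	return 0;
}

The Spreadtrum driver reaches the cyclic path through its link-list support rather than a dedicated cyclic prep callback, but the per-period callback contract the sketch relies on is the same.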
@@ -534,7 +633,12 @@ static enum dma_status sprd_dma_tx_status(struct dma_chan *chan,
		else
			pos = 0;
	} else if (schan->cur_desc && schan->cur_desc->vd.tx.cookie == cookie) {
-		pos = sprd_dma_get_dst_addr(schan);
+		struct sprd_dma_desc *sdesc = to_sprd_dma_desc(vd);
+
+		if (sdesc->dir == DMA_DEV_TO_MEM)
+			pos = sprd_dma_get_dst_addr(schan);
+		else
+			pos = sprd_dma_get_src_addr(schan);
	} else {
		pos = 0;
	}
@@ -593,6 +697,7 @@ static int sprd_dma_fill_desc(struct dma_chan *chan,
 {
	struct sprd_dma_dev *sdev = to_sprd_dma_dev(chan);
	struct sprd_dma_chn *schan = to_sprd_dma_chan(chan);
+	enum sprd_dma_chn_mode chn_mode = schan->chn_mode;
	u32 req_mode = (flags >> SPRD_DMA_REQ_SHIFT) & SPRD_DMA_REQ_MODE_MASK;
	u32 int_mode = flags & SPRD_DMA_INT_MASK;
	int src_datawidth, dst_datawidth, src_step, dst_step;
@@ -604,7 +709,16 @@ static int sprd_dma_fill_desc(struct dma_chan *chan,
			dev_err(sdev->dma_dev.dev, "invalid source step\n");
			return src_step;
		}
-		dst_step = SPRD_DMA_NONE_STEP;
+
+		/*
+		 * For 2-stage transfer, destination channel step can not be 0,
+		 * since destination device is AON IRAM.
+		 */
+		if (chn_mode == SPRD_DMA_DST_CHN0 ||
+		    chn_mode == SPRD_DMA_DST_CHN1)
+			dst_step = src_step;
+		else
+			dst_step = SPRD_DMA_NONE_STEP;
	} else {
		dst_step = sprd_dma_get_step(slave_cfg->dst_addr_width);
		if (dst_step < 0) {
@@ -674,13 +788,11 @@ static int sprd_dma_fill_desc(struct dma_chan *chan,

	/* link-list configuration */
	if (schan->linklist.phy_addr) {
-		if (sg_index == sglen - 1)
-			hw->frg_len |= SPRD_DMA_LLIST_END;
-
		hw->cfg |= SPRD_DMA_LINKLIST_EN;

		/* link-list index */
-		temp = (sg_index + 1) % sglen;
+		temp = sglen ? (sg_index + 1) % sglen : 0;
+
		/* Next link-list configuration's physical address offset */
		temp = temp * sizeof(*hw) + SPRD_DMA_CHN_SRC_ADDR;

		/*
@@ -804,6 +916,8 @@ sprd_dma_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
	if (!sdesc)
		return NULL;

+	sdesc->dir = dir;
+
	for_each_sg(sgl, sg, sglen, i) {
		len = sg_dma_len(sg);
@@ -831,6 +945,12 @@ sprd_dma_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
		}
	}

+	/* Set channel mode and trigger mode for 2-stage transfer */
+	schan->chn_mode =
+		(flags >> SPRD_DMA_CHN_MODE_SHIFT) & SPRD_DMA_CHN_MODE_MASK;
+	schan->trg_mode =
+		(flags >> SPRD_DMA_TRG_MODE_SHIFT) & SPRD_DMA_TRG_MODE_MASK;
+
	ret = sprd_dma_fill_desc(chan, &sdesc->chn_hw, 0, 0, src, dst, len,
				 dir, flags, slave_cfg);
	if (ret) {
@@ -847,9 +967,6 @@ static int sprd_dma_slave_config(struct dma_chan *chan,
	struct sprd_dma_chn *schan = to_sprd_dma_chan(chan);
	struct dma_slave_config *slave_cfg = &schan->slave_cfg;

-	if (!is_slave_direction(config->direction))
-		return -EINVAL;
-
	memcpy(slave_cfg, config, sizeof(*config));
	return 0;
 }
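Dropping the is_slave_direction() check is part of the tree-wide removal of dma_slave_config::direction: the direction is now taken from the prep call, not from the channel config. From the client side the sequence looks roughly like this (buffer and FIFO names are illustrative):

	struct dma_slave_config cfg = {
		.dst_addr	= fifo_phys,	/* illustrative device FIFO */
		.dst_addr_width	= DMA_SLAVE_BUSWIDTH_4_BYTES,
		/* note: no .direction, the field is deprecated */
	};
	struct dma_async_tx_descriptor *desc;

	dmaengine_slave_config(chan, &cfg);
	/* The direction travels with each transfer instead: */
	desc = dmaengine_prep_slave_single(chan, buf_phys, len,
					   DMA_MEM_TO_DEV, DMA_PREP_INTERRUPT);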
@@ -1109,4 +1226,5 @@ module_platform_driver(sprd_dma_driver);
 MODULE_LICENSE("GPL v2");
 MODULE_DESCRIPTION("DMA driver for Spreadtrum");
 MODULE_AUTHOR("Baolin Wang <baolin.wang@spreadtrum.com>");
+MODULE_AUTHOR("Eric Long <eric.long@spreadtrum.com>");
 MODULE_ALIAS("platform:sprd-dma");
@@ -442,6 +442,7 @@ struct d40_base;
  * @queue: Queued jobs.
  * @prepare_queue: Prepared jobs.
  * @dma_cfg: The client configuration of this dma channel.
+ * @slave_config: DMA slave configuration.
  * @configured: whether the dma_cfg configuration is valid
  * @base: Pointer to the device instance struct.
  * @src_def_cfg: Default cfg register setting for src.
@@ -468,6 +469,7 @@ struct d40_chan {
	struct list_head queue;
	struct list_head prepare_queue;
	struct stedma40_chan_cfg dma_cfg;
+	struct dma_slave_config slave_config;
	bool configured;
	struct d40_base *base;
	/* Default register configurations */
@@ -625,6 +627,10 @@ static void __iomem *chan_base(struct d40_chan *chan)
 #define chan_err(d40c, format, arg...) \
	d40_err(chan2dev(d40c), format, ## arg)

+static int d40_set_runtime_config_write(struct dma_chan *chan,
+					struct dma_slave_config *config,
+					enum dma_transfer_direction direction);
+
 static int d40_pool_lli_alloc(struct d40_chan *d40c, struct d40_desc *d40d,
			      int lli_len)
 {
@@ -2216,6 +2222,8 @@ d40_prep_sg(struct dma_chan *dchan, struct scatterlist *sg_src,
		return NULL;
	}

+	d40_set_runtime_config_write(dchan, &chan->slave_config, direction);
+
	spin_lock_irqsave(&chan->lock, flags);

	desc = d40_prep_desc(chan, sg_src, sg_len, dma_flags);
@@ -2634,11 +2642,22 @@ dma40_config_to_halfchannel(struct d40_chan *d40c,
	return 0;
 }

-/* Runtime reconfiguration extension */
 static int d40_set_runtime_config(struct dma_chan *chan,
				  struct dma_slave_config *config)
 {
	struct d40_chan *d40c = container_of(chan, struct d40_chan, chan);
+
+	memcpy(&d40c->slave_config, config, sizeof(*config));
+
+	return 0;
+}
+
+/* Runtime reconfiguration extension */
+static int d40_set_runtime_config_write(struct dma_chan *chan,
+					struct dma_slave_config *config,
+					enum dma_transfer_direction direction)
+{
+	struct d40_chan *d40c = container_of(chan, struct d40_chan, chan);
	struct stedma40_chan_cfg *cfg = &d40c->dma_cfg;
	enum dma_slave_buswidth src_addr_width, dst_addr_width;
	dma_addr_t config_addr;
@@ -2655,7 +2674,7 @@ static int d40_set_runtime_config(struct dma_chan *chan,
	dst_addr_width = config->dst_addr_width;
	dst_maxburst = config->dst_maxburst;

-	if (config->direction == DMA_DEV_TO_MEM) {
+	if (direction == DMA_DEV_TO_MEM) {
		config_addr = config->src_addr;

		if (cfg->dir != DMA_DEV_TO_MEM)
@@ -2671,7 +2690,7 @@ static int d40_set_runtime_config(struct dma_chan *chan,
		if (dst_maxburst == 0)
			dst_maxburst = src_maxburst;

-	} else if (config->direction == DMA_MEM_TO_DEV) {
+	} else if (direction == DMA_MEM_TO_DEV) {
		config_addr = config->dst_addr;

		if (cfg->dir != DMA_MEM_TO_DEV)
@@ -2689,7 +2708,7 @@ static int d40_set_runtime_config(struct dma_chan *chan,
	} else {
		dev_err(d40c->base->dev,
			"unrecognized channel direction %d\n",
-			config->direction);
+			direction);
		return -EINVAL;
	}
@@ -2746,12 +2765,12 @@ static int d40_set_runtime_config(struct dma_chan *chan,

	/* These settings will take precedence later */
	d40c->runtime_addr = config_addr;
-	d40c->runtime_direction = config->direction;
+	d40c->runtime_direction = direction;
	dev_dbg(d40c->base->dev,
		"configured channel %s for %s, data width %d/%d, "
		"maxburst %d/%d elements, LE, no flow control\n",
		dma_chan_name(chan),
-		(config->direction == DMA_DEV_TO_MEM) ? "RX" : "TX",
+		(direction == DMA_DEV_TO_MEM) ? "RX" : "TX",
		src_addr_width, dst_addr_width,
		src_maxburst, dst_maxburst);
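Across these ste_dma40 hunks the conversion has one shape: device_config only caches the client's dma_slave_config, and the prep callback replays it through the _write variant once the direction argument is authoritative. Stripped to a skeleton (the foo_* names are hypothetical, not from this driver):

static int foo_slave_config(struct dma_chan *chan,
			    struct dma_slave_config *config)
{
	struct foo_chan *fc = to_foo_chan(chan);	/* hypothetical */

	/* Cache only; nothing direction-dependent is computed here. */
	memcpy(&fc->slave_config, config, sizeof(*config));
	return 0;
}

static struct dma_async_tx_descriptor *
foo_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
		  unsigned int sg_len, enum dma_transfer_direction dir,
		  unsigned long flags, void *context)
{
	struct foo_chan *fc = to_foo_chan(chan);

	/* Apply the cached config now that the direction is known. */
	foo_config_write(chan, &fc->slave_config, dir);	/* hypothetical */
	/* ... build and return the descriptor ... */
	return NULL;	/* elided */
}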
...
@@ -190,6 +190,8 @@
 /* AXI CDMA Specific Masks */
 #define XILINX_CDMA_CR_SGMODE          BIT(3)

+#define xilinx_prep_dma_addr_t(addr)	\
+	((dma_addr_t)((u64)addr##_##msb << 32 | (addr)))
+
 /**
  * struct xilinx_vdma_desc_hw - Hardware Descriptor
  * @next_desc: Next Descriptor Pointer @0x00
@@ -887,6 +889,24 @@ static int xilinx_dma_alloc_chan_resources(struct dma_chan *dchan)
			chan->id);
		return -ENOMEM;
	}

+	/*
+	 * For cyclic DMA mode we need to program the tail Descriptor
+	 * register with a value which is not a part of the BD chain
+	 * so allocating a desc segment during channel allocation for
+	 * programming tail descriptor.
+	 */
+	chan->cyclic_seg_v = dma_zalloc_coherent(chan->dev,
+					sizeof(*chan->cyclic_seg_v),
+					&chan->cyclic_seg_p, GFP_KERNEL);
+	if (!chan->cyclic_seg_v) {
+		dev_err(chan->dev,
+			"unable to allocate desc segment for cyclic DMA\n");
+		dma_free_coherent(chan->dev, sizeof(*chan->seg_v) *
+				XILINX_DMA_NUM_DESCS, chan->seg_v,
+				chan->seg_p);
+		return -ENOMEM;
+	}
+	chan->cyclic_seg_v->phys = chan->cyclic_seg_p;

	for (i = 0; i < XILINX_DMA_NUM_DESCS; i++) {
		chan->seg_v[i].hw.next_desc =
@@ -922,24 +942,6 @@ static int xilinx_dma_alloc_chan_resources(struct dma_chan *dchan)
		return -ENOMEM;
	}

-	if (chan->xdev->dma_config->dmatype == XDMA_TYPE_AXIDMA) {
-		/*
-		 * For cyclic DMA mode we need to program the tail Descriptor
-		 * register with a value which is not a part of the BD chain
-		 * so allocating a desc segment during channel allocation for
-		 * programming tail descriptor.
-		 */
-		chan->cyclic_seg_v = dma_zalloc_coherent(chan->dev,
-					sizeof(*chan->cyclic_seg_v),
-					&chan->cyclic_seg_p, GFP_KERNEL);
-		if (!chan->cyclic_seg_v) {
-			dev_err(chan->dev,
-				"unable to allocate desc segment for cyclic DMA\n");
-			return -ENOMEM;
-		}
-		chan->cyclic_seg_v->phys = chan->cyclic_seg_p;
-	}
-
	dma_cookie_init(dchan);

	if (chan->xdev->dma_config->dmatype == XDMA_TYPE_AXIDMA) {
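Note the new failure handling: because the cyclic segment is now allocated after chan->seg_v rather than at the end of the function, an allocation failure has to unwind the earlier buffer or it would leak. The general shape of that pattern, with placeholder names:

	first = dma_alloc_coherent(dev, first_sz, &first_phys, GFP_KERNEL);
	if (!first)
		return -ENOMEM;

	second = dma_alloc_coherent(dev, second_sz, &second_phys, GFP_KERNEL);
	if (!second) {
		/* Free in reverse order so the first buffer is not leaked. */
		dma_free_coherent(dev, first_sz, first, first_phys);
		return -ENOMEM;
	}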
@@ -1245,8 +1247,10 @@ static void xilinx_cdma_start_transfer(struct xilinx_dma_chan *chan)

		hw = &segment->hw;

-		xilinx_write(chan, XILINX_CDMA_REG_SRCADDR, hw->src_addr);
-		xilinx_write(chan, XILINX_CDMA_REG_DSTADDR, hw->dest_addr);
+		xilinx_write(chan, XILINX_CDMA_REG_SRCADDR,
+			     xilinx_prep_dma_addr_t(hw->src_addr));
+		xilinx_write(chan, XILINX_CDMA_REG_DSTADDR,
+			     xilinx_prep_dma_addr_t(hw->dest_addr));

		/* Start the transfer */
		dma_ctrl_write(chan, XILINX_DMA_REG_BTT,
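The new xilinx_prep_dma_addr_t() works by token pasting: addr##_##msb glues "_msb" onto the last token of the argument, so the SRCADDR write above expands to roughly:

	xilinx_write(chan, XILINX_CDMA_REG_SRCADDR,
		     (dma_addr_t)((u64)hw->src_addr_msb << 32 | (hw->src_addr)));

i.e. one macro fetches both 32-bit halves of a 64-bit descriptor address from the src_addr/src_addr_msb field pair.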
...
@@ -163,7 +163,7 @@ struct zynqmp_dma_desc_ll {
	u32 ctrl;
	u64 nxtdscraddr;
	u64 rsvd;
-}; __aligned(64)
+};
/** /**
* struct zynqmp_dma_desc_sw - Per Transaction structure * struct zynqmp_dma_desc_sw - Per Transaction structure
...@@ -375,9 +375,10 @@ static dma_cookie_t zynqmp_dma_tx_submit(struct dma_async_tx_descriptor *tx) ...@@ -375,9 +375,10 @@ static dma_cookie_t zynqmp_dma_tx_submit(struct dma_async_tx_descriptor *tx)
struct zynqmp_dma_chan *chan = to_chan(tx->chan); struct zynqmp_dma_chan *chan = to_chan(tx->chan);
struct zynqmp_dma_desc_sw *desc, *new; struct zynqmp_dma_desc_sw *desc, *new;
dma_cookie_t cookie; dma_cookie_t cookie;
unsigned long irqflags;
new = tx_to_desc(tx); new = tx_to_desc(tx);
spin_lock_bh(&chan->lock); spin_lock_irqsave(&chan->lock, irqflags);
cookie = dma_cookie_assign(tx); cookie = dma_cookie_assign(tx);
if (!list_empty(&chan->pending_list)) { if (!list_empty(&chan->pending_list)) {
...@@ -393,7 +394,7 @@ static dma_cookie_t zynqmp_dma_tx_submit(struct dma_async_tx_descriptor *tx) ...@@ -393,7 +394,7 @@ static dma_cookie_t zynqmp_dma_tx_submit(struct dma_async_tx_descriptor *tx)
} }
list_add_tail(&new->node, &chan->pending_list); list_add_tail(&new->node, &chan->pending_list);
spin_unlock_bh(&chan->lock); spin_unlock_irqrestore(&chan->lock, irqflags);
return cookie; return cookie;
} }
@@ -408,12 +409,13 @@ static struct zynqmp_dma_desc_sw *
 zynqmp_dma_get_descriptor(struct zynqmp_dma_chan *chan)
 {
	struct zynqmp_dma_desc_sw *desc;
+	unsigned long irqflags;

-	spin_lock_bh(&chan->lock);
+	spin_lock_irqsave(&chan->lock, irqflags);
	desc = list_first_entry(&chan->free_list,
				struct zynqmp_dma_desc_sw, node);
	list_del(&desc->node);
-	spin_unlock_bh(&chan->lock);
+	spin_unlock_irqrestore(&chan->lock, irqflags);

	INIT_LIST_HEAD(&desc->tx_list);
	/* Clear the src and dst descriptor memory */
@@ -643,10 +645,11 @@ static void zynqmp_dma_complete_descriptor(struct zynqmp_dma_chan *chan)
 static void zynqmp_dma_issue_pending(struct dma_chan *dchan)
 {
	struct zynqmp_dma_chan *chan = to_chan(dchan);
+	unsigned long irqflags;

-	spin_lock_bh(&chan->lock);
+	spin_lock_irqsave(&chan->lock, irqflags);
	zynqmp_dma_start_transfer(chan);
-	spin_unlock_bh(&chan->lock);
+	spin_unlock_irqrestore(&chan->lock, irqflags);
 }
/** /**
...@@ -667,10 +670,11 @@ static void zynqmp_dma_free_descriptors(struct zynqmp_dma_chan *chan) ...@@ -667,10 +670,11 @@ static void zynqmp_dma_free_descriptors(struct zynqmp_dma_chan *chan)
static void zynqmp_dma_free_chan_resources(struct dma_chan *dchan) static void zynqmp_dma_free_chan_resources(struct dma_chan *dchan)
{ {
struct zynqmp_dma_chan *chan = to_chan(dchan); struct zynqmp_dma_chan *chan = to_chan(dchan);
unsigned long irqflags;
spin_lock_bh(&chan->lock); spin_lock_irqsave(&chan->lock, irqflags);
zynqmp_dma_free_descriptors(chan); zynqmp_dma_free_descriptors(chan);
spin_unlock_bh(&chan->lock); spin_unlock_irqrestore(&chan->lock, irqflags);
dma_free_coherent(chan->dev, dma_free_coherent(chan->dev,
(2 * ZYNQMP_DMA_DESC_SIZE(chan) * ZYNQMP_DMA_NUM_DESCS), (2 * ZYNQMP_DMA_DESC_SIZE(chan) * ZYNQMP_DMA_NUM_DESCS),
chan->desc_pool_v, chan->desc_pool_p); chan->desc_pool_v, chan->desc_pool_p);
@@ -743,8 +747,9 @@ static void zynqmp_dma_do_tasklet(unsigned long data)
 {
	struct zynqmp_dma_chan *chan = (struct zynqmp_dma_chan *)data;
	u32 count;
+	unsigned long irqflags;

-	spin_lock(&chan->lock);
+	spin_lock_irqsave(&chan->lock, irqflags);

	if (chan->err) {
		zynqmp_dma_reset(chan);
@@ -764,7 +769,7 @@ static void zynqmp_dma_do_tasklet(unsigned long data)
		zynqmp_dma_start_transfer(chan);

 unlock:
-	spin_unlock(&chan->lock);
+	spin_unlock_irqrestore(&chan->lock, irqflags);
 }
/** /**
...@@ -776,11 +781,12 @@ static void zynqmp_dma_do_tasklet(unsigned long data) ...@@ -776,11 +781,12 @@ static void zynqmp_dma_do_tasklet(unsigned long data)
static int zynqmp_dma_device_terminate_all(struct dma_chan *dchan) static int zynqmp_dma_device_terminate_all(struct dma_chan *dchan)
{ {
struct zynqmp_dma_chan *chan = to_chan(dchan); struct zynqmp_dma_chan *chan = to_chan(dchan);
unsigned long irqflags;
spin_lock_bh(&chan->lock); spin_lock_irqsave(&chan->lock, irqflags);
writel(ZYNQMP_DMA_IDS_DEFAULT_MASK, chan->regs + ZYNQMP_DMA_IDS); writel(ZYNQMP_DMA_IDS_DEFAULT_MASK, chan->regs + ZYNQMP_DMA_IDS);
zynqmp_dma_free_descriptors(chan); zynqmp_dma_free_descriptors(chan);
spin_unlock_bh(&chan->lock); spin_unlock_irqrestore(&chan->lock, irqflags);
return 0; return 0;
} }
@@ -804,19 +810,20 @@ static struct dma_async_tx_descriptor *zynqmp_dma_prep_memcpy(
	void *desc = NULL, *prev = NULL;
	size_t copy;
	u32 desc_cnt;
+	unsigned long irqflags;

	chan = to_chan(dchan);

	desc_cnt = DIV_ROUND_UP(len, ZYNQMP_DMA_MAX_TRANS_LEN);

-	spin_lock_bh(&chan->lock);
+	spin_lock_irqsave(&chan->lock, irqflags);
	if (desc_cnt > chan->desc_free_cnt) {
-		spin_unlock_bh(&chan->lock);
+		spin_unlock_irqrestore(&chan->lock, irqflags);
		dev_dbg(chan->dev, "chan %p descs are not available\n", chan);
		return NULL;
	}
	chan->desc_free_cnt = chan->desc_free_cnt - desc_cnt;
-	spin_unlock_bh(&chan->lock);
+	spin_unlock_irqrestore(&chan->lock, irqflags);

	do {
		/* Allocate and populate the descriptor */
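Every zynqmp_dma hunk above is the same mechanical conversion. spin_lock_bh() only masks softirqs, so it is unsafe if the lock can also be taken from hard interrupt context; the irqsave variant works from any context at the cost of saving and restoring the interrupt state. Reduced to its shape (a sketch, not driver code):

	unsigned long irqflags;

	spin_lock_irqsave(&chan->lock, irqflags);	/* was spin_lock_bh() */
	/* ... touch state shared with the interrupt handler ... */
	spin_unlock_irqrestore(&chan->lock, irqflags);	/* was spin_unlock_bh() */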
...
/* SPDX-License-Identifier: (GPL-2.0 OR MIT) */
#ifndef __DT_BINDINGS_DMA_DW_DMAC_H__
#define __DT_BINDINGS_DMA_DW_DMAC_H__
/*
* Protection Control bits provide protection against illegal transactions.
* The protection bits[0:2] are one-to-one mapped to AHB HPROT[3:1] signals.
*/
#define DW_DMAC_HPROT1_PRIVILEGED_MODE (1 << 0) /* Privileged Mode */
#define DW_DMAC_HPROT2_BUFFERABLE (1 << 1) /* DMA is bufferable */
#define DW_DMAC_HPROT3_CACHEABLE (1 << 2) /* DMA is cacheable */
#endif /* __DT_BINDINGS_DMA_DW_DMAC_H__ */
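These masks are meant to be OR'd together; per the comment above, bit n of the combined value drives AHB HPROT[n+1]. An illustrative combination (not taken from any in-tree user):

	#include <dt-bindings/dma/dw-dmac.h>

	/* Privileged + bufferable + cacheable => HPROT[3:1] = 0b111 */
	u32 prot = DW_DMAC_HPROT1_PRIVILEGED_MODE |
		   DW_DMAC_HPROT2_BUFFERABLE |
		   DW_DMAC_HPROT3_CACHEABLE;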
@@ -3,9 +3,65 @@
 #ifndef _SPRD_DMA_H_
 #define _SPRD_DMA_H_

-#define SPRD_DMA_REQ_SHIFT 16
-#define SPRD_DMA_FLAGS(req_mode, int_type) \
-	((req_mode) << SPRD_DMA_REQ_SHIFT | (int_type))
+#define SPRD_DMA_REQ_SHIFT 8
+#define SPRD_DMA_TRG_MODE_SHIFT 16
+#define SPRD_DMA_CHN_MODE_SHIFT 24
+#define SPRD_DMA_FLAGS(chn_mode, trg_mode, req_mode, int_type) \
+	((chn_mode) << SPRD_DMA_CHN_MODE_SHIFT | \
+	 (trg_mode) << SPRD_DMA_TRG_MODE_SHIFT | \
+	 (req_mode) << SPRD_DMA_REQ_SHIFT | (int_type))
+/*
+ * The Spreadtrum DMA controller supports channel 2-stage transfer, that means
+ * we can request 2 dma channels, one for source channel, and another one for
+ * destination channel. Each channel is independent, and has its own
+ * configurations. Once the source channel's transaction is done, it will
+ * trigger the destination channel's transaction automatically by hardware
+ * signal.
+ *
+ * To support 2-stage transfer, we must configure the channel mode and trigger
+ * mode as below definition.
+ */
+
+/*
+ * enum sprd_dma_chn_mode: define the DMA channel mode for 2-stage transfer
+ * @SPRD_DMA_CHN_MODE_NONE: No channel mode setting which means channel doesn't
+ * support the 2-stage transfer.
+ * @SPRD_DMA_SRC_CHN0: Channel used as source channel 0.
+ * @SPRD_DMA_SRC_CHN1: Channel used as source channel 1.
+ * @SPRD_DMA_DST_CHN0: Channel used as destination channel 0.
+ * @SPRD_DMA_DST_CHN1: Channel used as destination channel 1.
+ *
+ * Now the DMA controller supports 2 groups of 2-stage transfer.
+ */
+enum sprd_dma_chn_mode {
+	SPRD_DMA_CHN_MODE_NONE,
+	SPRD_DMA_SRC_CHN0,
+	SPRD_DMA_SRC_CHN1,
+	SPRD_DMA_DST_CHN0,
+	SPRD_DMA_DST_CHN1,
+};
+
+/*
+ * enum sprd_dma_trg_mode: define the DMA channel trigger mode for 2-stage
+ * transfer
+ * @SPRD_DMA_NO_TRG: No trigger setting.
+ * @SPRD_DMA_FRAG_DONE_TRG: Trigger the transaction of destination channel
+ * automatically once the source channel's fragment request is done.
+ * @SPRD_DMA_BLOCK_DONE_TRG: Trigger the transaction of destination channel
+ * automatically once the source channel's block request is done.
+ * @SPRD_DMA_TRANS_DONE_TRG: Trigger the transaction of destination channel
+ * automatically once the source channel's transfer request is done.
+ * @SPRD_DMA_LIST_DONE_TRG: Trigger the transaction of destination channel
+ * automatically once the source channel's link-list request is done.
+ */
+enum sprd_dma_trg_mode {
+	SPRD_DMA_NO_TRG,
+	SPRD_DMA_FRAG_DONE_TRG,
+	SPRD_DMA_BLOCK_DONE_TRG,
+	SPRD_DMA_TRANS_DONE_TRG,
+	SPRD_DMA_LIST_DONE_TRG,
+};
 /*
  * enum sprd_dma_req_mode: define the DMA request mode
...
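Taken together, the reworked SPRD_DMA_FLAGS() packs four fields into the flags word passed to the prep call: [31:24] chn_mode, [23:16] trg_mode, [15:8] req_mode, [7:0] int_type (this follows directly from the shift definitions). A hypothetical pairing of two channels for a 2-stage transfer might compose its flags like this (SPRD_DMA_FRAG_REQ and SPRD_DMA_TRANS_INT come from the request and interrupt enums elsewhere in this header; the pairing itself is illustrative):

	/* Source half of the pair: hand off to destination channel 0 of
	 * group 1 once the whole transfer is done. */
	unsigned long src_flags = SPRD_DMA_FLAGS(SPRD_DMA_SRC_CHN0,
						 SPRD_DMA_TRANS_DONE_TRG,
						 SPRD_DMA_FRAG_REQ,
						 SPRD_DMA_TRANS_INT);

	/* Destination half: triggered by hardware, so no trigger of its own. */
	unsigned long dst_flags = SPRD_DMA_FLAGS(SPRD_DMA_DST_CHN0,
						 SPRD_DMA_NO_TRG,
						 SPRD_DMA_FRAG_REQ,
						 SPRD_DMA_TRANS_INT);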
@@ -49,6 +49,7 @@ struct dw_dma_slave {
  * @data_width: Maximum data width supported by hardware per AHB master
  *		(in bytes, power of 2)
  * @multi_block: Multi block transfers supported by hardware per channel.
+ * @protctl: Protection control signals setting per channel.
  */
 struct dw_dma_platform_data {
	unsigned int	nr_channels;
@@ -65,6 +66,11 @@ struct dw_dma_platform_data {
	unsigned char	nr_masters;
	unsigned char	data_width[DW_DMA_MAX_NR_MASTERS];
	unsigned char	multi_block[DW_DMA_MAX_NR_CHANNELS];
+#define CHAN_PROTCTL_PRIVILEGED		BIT(0)
+#define CHAN_PROTCTL_BUFFERABLE		BIT(1)
+#define CHAN_PROTCTL_CACHEABLE		BIT(2)
+#define CHAN_PROTCTL_MASK		GENMASK(2, 0)
	unsigned char	protctl;
 };

 #endif /* _PLATFORM_DATA_DMA_DW_H */
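With the new field in place, board or glue code can opt channels into protected transfers through platform data, e.g. (values illustrative):

	static struct dw_dma_platform_data dw_pdata = {
		.nr_channels	= 8,	/* illustrative */
		/* Assert the privileged and bufferable HPROT signals: */
		.protctl	= CHAN_PROTCTL_PRIVILEGED |
				  CHAN_PROTCTL_BUFFERABLE,
	};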
/*
* SA11x0 DMA Engine support
*
* Copyright (C) 2012 Russell King
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef __LINUX_SA11X0_DMA_H
#define __LINUX_SA11X0_DMA_H
struct dma_chan;
#if defined(CONFIG_DMA_SA11X0) || defined(CONFIG_DMA_SA11X0_MODULE)
bool sa11x0_dma_filter_fn(struct dma_chan *, void *);
#else
static inline bool sa11x0_dma_filter_fn(struct dma_chan *c, void *d)
{
return false;
}
#endif
#endif
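A client requests a channel through a filter function like this one via the classic capability-mask API; roughly (the request name string is illustrative):

	dma_cap_mask_t mask;
	struct dma_chan *chan;

	dma_cap_zero(mask);
	dma_cap_set(DMA_SLAVE, mask);
	/* sa11x0_dma_filter_fn matches on the name passed as filter data. */
	chan = dma_request_channel(mask, sa11x0_dma_filter_fn, "Ser4SSPTr");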
-/*
+/* SPDX-License-Identifier: GPL-2.0
+ *
  * Dmaengine driver base library for DMA controllers, found on SH-based SoCs
  *
  * extracted from shdma.c and headers
@@ -7,10 +8,6 @@
  * Copyright (C) 2009 Nobuhiro Iwamatsu <iwamatsu.nobuhiro@renesas.com>
  * Copyright (C) 2009 Renesas Solutions, Inc. All rights reserved.
  * Copyright (C) 2007 Freescale Semiconductor, Inc. All rights reserved.
- *
- * This is free software; you can redistribute it and/or modify
- * it under the terms of version 2 of the GNU General Public License as
- * published by the Free Software Foundation.
  */

 #ifndef SHDMA_BASE_H
...