Commit 77c32bbb authored by Linus Torvalds

Merge branch 'for-linus' of git://git.infradead.org/users/vkoul/slave-dma

Pull slave-dmaengine updates from Vinod Koul:
 - new Xilinx VDMA driver from Srikanth
 - bunch of updates for edma driver by Thomas, Joel and Peter
 - fixes and updates on dw, ste_dma, freescale, mpc512x, sudmac etc

* 'for-linus' of git://git.infradead.org/users/vkoul/slave-dma: (45 commits)
  dmaengine: sh: don't use dynamic static allocation
  dmaengine: sh: fix print specifier warnings
  dmaengine: sh: make shdma_prep_dma_cyclic static
  dmaengine: Kconfig: Update MXS_DMA help text to include MX6Q/MX6DL
  of: dma: Grammar s/requests/request/, s/used required/required/
  dmaengine: shdma: Enable driver compilation with COMPILE_TEST
  dmaengine: rcar-hpbdma: Include linux/err.h
  dmaengine: sudmac: Include linux/err.h
  dmaengine: sudmac: Keep #include sorted alphabetically
  dmaengine: shdmac: Include linux/err.h
  dmaengine: shdmac: Keep #include sorted alphabetically
  dmaengine: s3c24xx-dma: Add cyclic transfer support
  dmaengine: s3c24xx-dma: Process whole SG chain
  dmaengine: imx: correct sdmac->status for cyclic dma tx
  dmaengine: pch: fix compilation for alpha target
  dmaengine: dw: check return code of dma_async_device_register()
  dmaengine: dw: fix regression in dw_probe() function
  dmaengine: dw: enable clock before access
  dma: pch_dma: Fix Kconfig dependencies
  dmaengine: mpc512x: add support for peripheral transfers
  ...
parents fad0701e 06822788
* MARVELL MMP DMA controller

Marvell Peripheral DMA Controller
Used platforms: pxa688, pxa910, pxa3xx, etc

Required properties:
- compatible: Should be "marvell,pdma-1.0"
- reg: Should contain DMA registers location and length.
- interrupts: Either contain all of the per-channel DMA interrupts
  or one irq for pdma device

Optional properties:
- #dma-channels: Number of DMA channels supported by the controller (defaults
  to 32 when not specified)

"marvell,pdma-1.0"
Used platforms: pxa25x, pxa27x, pxa3xx, pxa93x, pxa168, pxa910, pxa688.

Examples:
...@@ -45,7 +48,7 @@ pdma: dma-controller@d4000000 {

Marvell Two Channel DMA Controller used specifically for audio
Used platforms: pxa688, pxa910
Required properties:
- compatible: Should be "marvell,adma-1.0" or "marvell,pxa910-squ"
......
Xilinx AXI VDMA engine, it does transfers between memory and video devices.
It can be configured to have one channel or two channels. If configured
as two channels, one is to transmit to the video device and another is
to receive from the video device.
Required properties:
- compatible: Should be "xlnx,axi-vdma-1.00.a"
- #dma-cells: Should be <1>, see "dmas" property below
- reg: Should contain VDMA registers location and length.
- xlnx,num-fstores: Should be the number of framebuffers as configured in h/w.
- dma-channel child node: Should have at least one channel and can have up to
two channels per device. This node specifies the properties of each
DMA channel (see child node properties below).
Optional properties:
- xlnx,include-sg: Tells whether the hardware is configured for Scatter-Gather
  mode.
- xlnx,flush-fsync: Tells which channel to flush on frame sync.
  It takes the following values:
  {1}, flush both channels
  {2}, flush mm2s channel
  {3}, flush s2mm channel
Required child node properties:
- compatible: It should be either "xlnx,axi-vdma-mm2s-channel" or
"xlnx,axi-vdma-s2mm-channel".
- interrupts: Should contain per channel VDMA interrupts.
- xlnx,data-width: Should contain the stream data width, takes values
  {32,64...1024}.
Optional child node properties:
- xlnx,include-dre: Tells whether the hardware is configured with the Data
  Realignment Engine.
- xlnx,genlock-mode: Tells whether Genlock synchronization is
  enabled in the hardware.
Example:
++++++++
axi_vdma_0: axivdma@40030000 {
compatible = "xlnx,axi-vdma-1.00.a";
#dma-cells = <1>;
reg = < 0x40030000 0x10000 >;
xlnx,num-fstores = <0x8>;
xlnx,flush-fsync = <0x1>;
dma-channel@40030000 {
compatible = "xlnx,axi-vdma-mm2s-channel";
interrupts = < 0 54 4 >;
xlnx,datawidth = <0x40>;
} ;
dma-channel@40030030 {
compatible = "xlnx,axi-vdma-s2mm-channel";
interrupts = < 0 53 4 >;
xlnx,datawidth = <0x40>;
} ;
} ;
* DMA client
Required properties:
- dmas: a list of <[Video DMA device phandle] [Channel ID]> pairs,
where Channel ID is '0' for write/tx and '1' for read/rx
channel.
- dma-names: a list of DMA channel names, one per "dmas" entry
Example:
++++++++
vdmatest_0: vdmatest@0 {
compatible ="xlnx,axi-vdma-test-1.00.a";
dmas = <&axi_vdma_0 0
&axi_vdma_0 1>;
dma-names = "vdma0", "vdma1";
} ;
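Not part of this merge, but for orientation: a client driver consumes the two channels above through the generic dmaengine slave API. A minimal sketch follows; the function name, the config values, and the header path are illustrative assumptions, while dma_request_slave_channel(), dma_release_channel() and the xilinx_vdma_channel_set_config() helper exported by this series are real interfaces.

#include <linux/dmaengine.h>
#include <linux/amba/xilinx_dma.h>	/* assumed path of the header added by this series */

/* Hypothetical client: request the channels named in "dma-names" and
 * apply a VDMA configuration to the write (MM2S) channel. */
static int vdmatest_request_channels(struct device *dev)
{
	struct xilinx_vdma_config cfg = {
		.frm_cnt_en = 1,
		.coalesc = 1,		/* illustrative: one interrupt per frame */
	};
	struct dma_chan *tx, *rx;

	tx = dma_request_slave_channel(dev, "vdma0");	/* Channel ID 0: write/tx */
	rx = dma_request_slave_channel(dev, "vdma1");	/* Channel ID 1: read/rx */
	if (!tx || !rx) {
		if (tx)
			dma_release_channel(tx);
		if (rx)
			dma_release_channel(rx);
		return -EPROBE_DEFER;
	}

	return xilinx_vdma_channel_set_config(tx, &cfg);
}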
...@@ -296,7 +296,7 @@ ipic-msi@7c0 {
};

dma@2c000 {
compatible = "fsl,mpc8308-dma";
reg = <0x2c000 0x1800>;
interrupts = <3 0x8
              94 0x8>;
......
...@@ -265,7 +265,7 @@ ipic-msi@7c0 {
};

dma@2c000 {
compatible = "fsl,mpc8308-dma";
reg = <0x2c000 0x1800>;
interrupts = <3 0x8
              94 0x8>;
......
...@@ -234,7 +234,7 @@ config PL330_DMA
config PCH_DMA
	tristate "Intel EG20T PCH / LAPIS Semicon IOH(ML7213/ML7223/ML7831) DMA"
	depends on PCI && (X86_32 || COMPILE_TEST)
	select DMA_ENGINE
	help
	  Enable support for Intel EG20T PCH DMA engine.

...@@ -269,7 +269,7 @@ config MXS_DMA
	select DMA_ENGINE
	help
	  Support the MXS DMA engine. This engine including APBH-DMA
	  and APBX-DMA is integrated into Freescale i.MX23/28/MX6Q/MX6DL chips.

config EP93XX_DMA
	bool "Cirrus Logic EP93xx DMA support"

...@@ -361,6 +361,20 @@ config FSL_EDMA
	  multiplexing capability for DMA request sources(slot).
	  This module can be found on Freescale Vybrid and LS-1 SoCs.
config XILINX_VDMA
tristate "Xilinx AXI VDMA Engine"
depends on (ARCH_ZYNQ || MICROBLAZE)
select DMA_ENGINE
help
Enable support for Xilinx AXI VDMA Soft IP.
This engine provides high-bandwidth direct memory access
between memory and AXI4-Stream video type target
peripherals including peripherals which support AXI4-
Stream Video Protocol. It has two stream interfaces/
channels, Memory Mapped to Stream (MM2S) and Stream to
Memory Mapped (S2MM) for the data transfers.
config DMA_ENGINE
	bool
......
...@@ -46,3 +46,4 @@ obj-$(CONFIG_K3_DMA) += k3dma.o
obj-$(CONFIG_MOXART_DMA) += moxart-dma.o
obj-$(CONFIG_FSL_EDMA) += fsl-edma.o
obj-$(CONFIG_QCOM_BAM_DMA) += qcom_bam_dma.o
obj-y += xilinx/
...@@ -1493,6 +1493,13 @@ int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata)
dw->regs = chip->regs;
chip->dw = dw;

dw->clk = devm_clk_get(chip->dev, "hclk");
if (IS_ERR(dw->clk))
	return PTR_ERR(dw->clk);
err = clk_prepare_enable(dw->clk);
if (err)
	return err;

dw_params = dma_read_byaddr(chip->regs, DW_PARAMS);
autocfg = dw_params >> DW_PARAMS_EN & 0x1;

...@@ -1500,15 +1507,19 @@ int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata)
if (!pdata && autocfg) {
	pdata = devm_kzalloc(chip->dev, sizeof(*pdata), GFP_KERNEL);
	if (!pdata) {
		err = -ENOMEM;
		goto err_pdata;
	}

	/* Fill platform data with the default values */
	pdata->is_private = true;
	pdata->chan_allocation_order = CHAN_ALLOCATION_ASCENDING;
	pdata->chan_priority = CHAN_PRIORITY_ASCENDING;
} else if (!pdata || pdata->nr_channels > DW_DMA_MAX_NR_CHANNELS) {
	err = -EINVAL;
	goto err_pdata;
}

if (autocfg)
	nr_channels = (dw_params >> DW_PARAMS_NR_CHAN & 0x7) + 1;

...@@ -1517,13 +1528,10 @@ int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata)
dw->chan = devm_kcalloc(chip->dev, nr_channels, sizeof(*dw->chan),
			GFP_KERNEL);
if (!dw->chan) {
	err = -ENOMEM;
	goto err_pdata;
}

/* Get hardware configuration parameters */
if (autocfg) {

...@@ -1553,7 +1561,8 @@ int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata)
		sizeof(struct dw_desc), 4, 0);
if (!dw->desc_pool) {
	dev_err(chip->dev, "No memory for descriptors dma pool\n");
	err = -ENOMEM;
	goto err_pdata;
}

tasklet_init(&dw->tasklet, dw_dma_tasklet, (unsigned long)dw);

...@@ -1561,7 +1570,7 @@ int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata)
err = request_irq(chip->irq, dw_dma_interrupt, IRQF_SHARED,
		  "dw_dmac", dw);
if (err)
	goto err_pdata;

INIT_LIST_HEAD(&dw->dma.channels);
for (i = 0; i < nr_channels; i++) {

...@@ -1650,12 +1659,20 @@ int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata)
dma_writel(dw, CFG, DW_CFG_DMA_EN);

err = dma_async_device_register(&dw->dma);
if (err)
	goto err_dma_register;

dev_info(chip->dev, "DesignWare DMA Controller, %d channels\n",
	 nr_channels);

return 0;

err_dma_register:
	free_irq(chip->irq, dw);
err_pdata:
	clk_disable_unprepare(dw->clk);
	return err;
}
EXPORT_SYMBOL_GPL(dw_dma_probe);

...@@ -1676,6 +1693,8 @@ int dw_dma_remove(struct dw_dma_chip *chip)
	channel_clear_bit(dw, CH_EN, dwc->mask);
}

clk_disable_unprepare(dw->clk);

return 0;
}
EXPORT_SYMBOL_GPL(dw_dma_remove);
......
...@@ -93,19 +93,13 @@ static int dw_pci_resume_early(struct device *dev)
return dw_dma_resume(chip);
};

#endif /* CONFIG_PM_SLEEP */

static const struct dev_pm_ops dw_pci_dev_pm_ops = {
	SET_LATE_SYSTEM_SLEEP_PM_OPS(dw_pci_suspend_late, dw_pci_resume_early)
};

static const struct pci_device_id dw_pci_id_table[] = {
	/* Medfield */
	{ PCI_VDEVICE(INTEL, 0x0827), (kernel_ulong_t)&dw_pci_pdata },
	{ PCI_VDEVICE(INTEL, 0x0830), (kernel_ulong_t)&dw_pci_pdata },
......
...@@ -256,7 +256,7 @@ MODULE_DEVICE_TABLE(acpi, dw_dma_acpi_id_table);

#ifdef CONFIG_PM_SLEEP

static int dw_suspend_late(struct device *dev)
{
struct platform_device *pdev = to_platform_device(dev);
struct dw_dma_chip *chip = platform_get_drvdata(pdev);

...@@ -264,7 +264,7 @@ static int dw_suspend_noirq(struct device *dev)
return dw_dma_suspend(chip);
}

static int dw_resume_early(struct device *dev)
{
struct platform_device *pdev = to_platform_device(dev);
struct dw_dma_chip *chip = platform_get_drvdata(pdev);

...@@ -272,20 +272,10 @@ static int dw_resume_noirq(struct device *dev)
return dw_dma_resume(chip);
}

#endif /* CONFIG_PM_SLEEP */

static const struct dev_pm_ops dw_dev_pm_ops = {
	SET_LATE_SYSTEM_SLEEP_PM_OPS(dw_suspend_late, dw_resume_early)
};

static struct platform_driver dw_driver = {
......
This diff is collapsed.
...@@ -607,8 +607,6 @@ static void sdma_handle_channel_loop(struct sdma_channel *sdmac)
if (bd->mode.status & BD_RROR)
	sdmac->status = DMA_ERROR;

bd->mode.status |= BD_DONE;
sdmac->buf_tail++;
......
...@@ -29,8 +29,8 @@
#define DALGN		0x00a0
#define DINT		0x00f0
#define DDADR		0x0200
#define DSADR(n)	(0x0204 + ((n) << 4))
#define DTADR(n)	(0x0208 + ((n) << 4))
#define DCMD		0x020c

#define DCSR_RUN	BIT(31)	/* Run Bit (read / write) */

...@@ -277,7 +277,7 @@ static void mmp_pdma_free_phy(struct mmp_pdma_chan *pchan)
return;

/* clear the channel mapping in DRCMR */
reg = DRCMR(pchan->drcmr);
writel(0, pchan->phy->base + reg);

spin_lock_irqsave(&pdev->phy_lock, flags);

...@@ -748,11 +748,92 @@ static int mmp_pdma_control(struct dma_chan *dchan, enum dma_ctrl_cmd cmd,
return 0;
}
static unsigned int mmp_pdma_residue(struct mmp_pdma_chan *chan,
dma_cookie_t cookie)
{
struct mmp_pdma_desc_sw *sw;
u32 curr, residue = 0;
bool passed = false;
bool cyclic = chan->cyclic_first != NULL;
/*
* If the channel does not have a phy pointer anymore, it has already
* been completed. Therefore, its residue is 0.
*/
if (!chan->phy)
return 0;
if (chan->dir == DMA_DEV_TO_MEM)
curr = readl(chan->phy->base + DTADR(chan->phy->idx));
else
curr = readl(chan->phy->base + DSADR(chan->phy->idx));
list_for_each_entry(sw, &chan->chain_running, node) {
u32 start, end, len;
if (chan->dir == DMA_DEV_TO_MEM)
start = sw->desc.dtadr;
else
start = sw->desc.dsadr;
len = sw->desc.dcmd & DCMD_LENGTH;
end = start + len;
/*
* 'passed' will be latched once we found the descriptor which
* lies inside the boundaries of the curr pointer. All
* descriptors that occur in the list _after_ we found that
* partially handled descriptor are still to be processed and
* are hence added to the residual bytes counter.
*/
if (passed) {
residue += len;
} else if (curr >= start && curr <= end) {
residue += end - curr;
passed = true;
}
/*
* Descriptors that have the ENDIRQEN bit set mark the end of a
* transaction chain, and the cookie assigned with it has been
* returned previously from mmp_pdma_tx_submit().
*
* In case we have multiple transactions in the running chain,
* and the cookie does not match the one the user asked us
* about, reset the state variables and start over.
*
* This logic does not apply to cyclic transactions, where all
* descriptors have the ENDIRQEN bit set, and for which we
* can't have multiple transactions on one channel anyway.
*/
if (cyclic || !(sw->desc.dcmd & DCMD_ENDIRQEN))
continue;
if (sw->async_tx.cookie == cookie) {
return residue;
} else {
residue = 0;
passed = false;
}
}
/* We should only get here in case of cyclic transactions */
return residue;
}
static enum dma_status mmp_pdma_tx_status(struct dma_chan *dchan,
					  dma_cookie_t cookie,
					  struct dma_tx_state *txstate)
{
struct mmp_pdma_chan *chan = to_mmp_pdma_chan(dchan);
enum dma_status ret;

ret = dma_cookie_status(dchan, cookie, txstate);
if (likely(ret != DMA_ERROR))
	dma_set_residue(txstate, mmp_pdma_residue(chan, cookie));

return ret;
}

/**
...@@ -858,8 +939,7 @@ static int mmp_pdma_chan_init(struct mmp_pdma_device *pdev, int idx, int irq)
struct mmp_pdma_chan *chan;
int ret;

chan = devm_kzalloc(pdev->dev, sizeof(*chan), GFP_KERNEL);
if (chan == NULL)
	return -ENOMEM;

...@@ -946,8 +1026,7 @@ static int mmp_pdma_probe(struct platform_device *op)
irq_num++;
}

pdev->phy = devm_kcalloc(pdev->dev, dma_channels, sizeof(*pdev->phy),
			 GFP_KERNEL);
if (pdev->phy == NULL)
	return -ENOMEM;
......
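The residue computed by mmp_pdma_residue() above reaches users through the generic status call. A hedged client-side sketch (the helper name is illustrative; dmaengine_tx_status() and struct dma_tx_state are standard dmaengine interfaces):

#include <linux/dmaengine.h>

/* Illustrative helper: how many bytes of a submitted transfer are still
 * outstanding, as reported via dma_set_residue() in the driver. */
static size_t my_remaining_bytes(struct dma_chan *chan, dma_cookie_t cookie)
{
	struct dma_tx_state state;

	if (dmaengine_tx_status(chan, cookie, &state) == DMA_COMPLETE)
		return 0;

	return state.residue;
}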
This diff is collapsed.
...@@ -21,6 +21,7 @@
#include <linux/dma-mapping.h>
#include <linux/init.h>
#include <linux/pci.h>
#include <linux/slab.h>
#include <linux/interrupt.h>
#include <linux/module.h>
#include <linux/pch_dma.h>

...@@ -996,7 +997,7 @@ static void pch_dma_remove(struct pci_dev *pdev)
#define PCI_DEVICE_ID_ML7831_DMA1_8CH	0x8810
#define PCI_DEVICE_ID_ML7831_DMA2_4CH	0x8815

const struct pci_device_id pch_dma_id_table[] = {
	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_EG20T_PCH_DMA_8CH), 8 },
	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_EG20T_PCH_DMA_4CH), 4 },
	{ PCI_VDEVICE(ROHM, PCI_DEVICE_ID_ML7213_DMA1_8CH), 8}, /* UART Video */
......
...@@ -164,6 +164,7 @@ struct s3c24xx_sg {
 * @disrcc: value for source control register
 * @didstc: value for destination control register
 * @dcon: base value for dcon register
 * @cyclic: indicate cyclic transfer
 */
struct s3c24xx_txd {
	struct virt_dma_desc vd;

...@@ -173,6 +174,7 @@ struct s3c24xx_txd {
	u32 disrcc;
	u32 didstc;
	u32 dcon;
	bool cyclic;
};

struct s3c24xx_dma_chan;

...@@ -669,8 +671,10 @@ static irqreturn_t s3c24xx_dma_irq(int irq, void *data)
/* when more sg's are in this txd, start the next one */
if (!list_is_last(txd->at, &txd->dsg_list)) {
	txd->at = txd->at->next;
	if (txd->cyclic)
		vchan_cyclic_callback(&txd->vd);
	s3c24xx_dma_start_next_sg(s3cchan, txd);
} else if (!txd->cyclic) {
	s3cchan->at = NULL;
	vchan_cookie_complete(&txd->vd);

...@@ -682,6 +686,12 @@ static irqreturn_t s3c24xx_dma_irq(int irq, void *data)
		s3c24xx_dma_start_next_txd(s3cchan);
	else
		s3c24xx_dma_phy_free(s3cchan);
} else {
	vchan_cyclic_callback(&txd->vd);

	/* Cyclic: reset at beginning */
	txd->at = txd->dsg_list.next;
	s3c24xx_dma_start_next_sg(s3cchan, txd);
}
}

spin_unlock(&s3cchan->vc.lock);

...@@ -877,6 +887,104 @@ static struct dma_async_tx_descriptor *s3c24xx_dma_prep_memcpy(
return vchan_tx_prep(&s3cchan->vc, &txd->vd, flags);
}
static struct dma_async_tx_descriptor *s3c24xx_dma_prep_dma_cyclic(
struct dma_chan *chan, dma_addr_t addr, size_t size, size_t period,
enum dma_transfer_direction direction, unsigned long flags,
void *context)
{
struct s3c24xx_dma_chan *s3cchan = to_s3c24xx_dma_chan(chan);
struct s3c24xx_dma_engine *s3cdma = s3cchan->host;
const struct s3c24xx_dma_platdata *pdata = s3cdma->pdata;
struct s3c24xx_dma_channel *cdata = &pdata->channels[s3cchan->id];
struct s3c24xx_txd *txd;
struct s3c24xx_sg *dsg;
unsigned sg_len;
dma_addr_t slave_addr;
u32 hwcfg = 0;
int i;
dev_dbg(&s3cdma->pdev->dev,
"prepare cyclic transaction of %zu bytes with period %zu from %s\n",
size, period, s3cchan->name);
if (!is_slave_direction(direction)) {
dev_err(&s3cdma->pdev->dev,
"direction %d unsupported\n", direction);
return NULL;
}
txd = s3c24xx_dma_get_txd();
if (!txd)
return NULL;
txd->cyclic = 1;
if (cdata->handshake)
txd->dcon |= S3C24XX_DCON_HANDSHAKE;
switch (cdata->bus) {
case S3C24XX_DMA_APB:
txd->dcon |= S3C24XX_DCON_SYNC_PCLK;
hwcfg |= S3C24XX_DISRCC_LOC_APB;
break;
case S3C24XX_DMA_AHB:
txd->dcon |= S3C24XX_DCON_SYNC_HCLK;
hwcfg |= S3C24XX_DISRCC_LOC_AHB;
break;
}
/*
* Always assume our peripheral destination is a fixed
* address in memory.
*/
hwcfg |= S3C24XX_DISRCC_INC_FIXED;
/*
* Individual dma operations are requested by the slave,
* so serve only single atomic operations (S3C24XX_DCON_SERV_SINGLE).
*/
txd->dcon |= S3C24XX_DCON_SERV_SINGLE;
if (direction == DMA_MEM_TO_DEV) {
txd->disrcc = S3C24XX_DISRCC_LOC_AHB |
S3C24XX_DISRCC_INC_INCREMENT;
txd->didstc = hwcfg;
slave_addr = s3cchan->cfg.dst_addr;
txd->width = s3cchan->cfg.dst_addr_width;
} else {
txd->disrcc = hwcfg;
txd->didstc = S3C24XX_DIDSTC_LOC_AHB |
S3C24XX_DIDSTC_INC_INCREMENT;
slave_addr = s3cchan->cfg.src_addr;
txd->width = s3cchan->cfg.src_addr_width;
}
sg_len = size / period;
for (i = 0; i < sg_len; i++) {
dsg = kzalloc(sizeof(*dsg), GFP_NOWAIT);
if (!dsg) {
s3c24xx_dma_free_txd(txd);
return NULL;
}
list_add_tail(&dsg->node, &txd->dsg_list);
dsg->len = period;
/* Check last period length */
if (i == sg_len - 1)
dsg->len = size - period * i;
if (direction == DMA_MEM_TO_DEV) {
dsg->src_addr = addr + period * i;
dsg->dst_addr = slave_addr;
} else { /* DMA_DEV_TO_MEM */
dsg->src_addr = slave_addr;
dsg->dst_addr = addr + period * i;
}
}
return vchan_tx_prep(&s3cchan->vc, &txd->vd, flags);
}
static struct dma_async_tx_descriptor *s3c24xx_dma_prep_slave_sg(
	struct dma_chan *chan, struct scatterlist *sgl,
	unsigned int sg_len, enum dma_transfer_direction direction,

...@@ -961,7 +1069,6 @@ static struct dma_async_tx_descriptor *s3c24xx_dma_prep_slave_sg(
		dsg->src_addr = slave_addr;
		dsg->dst_addr = sg_dma_address(sg);
	}
}

return vchan_tx_prep(&s3cchan->vc, &txd->vd, flags);

...@@ -1198,6 +1305,7 @@ static int s3c24xx_dma_probe(struct platform_device *pdev)
/* Initialize slave engine for SoC internal dedicated peripherals */
dma_cap_set(DMA_SLAVE, s3cdma->slave.cap_mask);
dma_cap_set(DMA_CYCLIC, s3cdma->slave.cap_mask);
dma_cap_set(DMA_PRIVATE, s3cdma->slave.cap_mask);
s3cdma->slave.dev = &pdev->dev;
s3cdma->slave.device_alloc_chan_resources =

...@@ -1207,6 +1315,7 @@ static int s3c24xx_dma_probe(struct platform_device *pdev)
s3cdma->slave.device_tx_status = s3c24xx_dma_tx_status;
s3cdma->slave.device_issue_pending = s3c24xx_dma_issue_pending;
s3cdma->slave.device_prep_slave_sg = s3c24xx_dma_prep_slave_sg;
s3cdma->slave.device_prep_dma_cyclic = s3c24xx_dma_prep_dma_cyclic;
s3cdma->slave.device_control = s3c24xx_dma_control;

/* Register as many memcpy channels as there are physical channels */
......
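The device_prep_dma_cyclic callback added above (and the shdma one further below) is reached through the generic dmaengine slave API. A minimal, hedged sketch of a client starting a cyclic DEV_TO_MEM transfer, e.g. for an audio ring buffer; the function and parameter names are illustrative, while the dmaengine calls themselves are standard:

#include <linux/dmaengine.h>

/* Illustrative: start a cyclic transfer over a ring buffer split into
 * fixed-size periods; 'chan' was obtained and slave-configured elsewhere. */
static int my_start_cyclic_rx(struct dma_chan *chan, dma_addr_t buf,
			      size_t buf_len, size_t period_len,
			      dma_async_tx_callback period_done, void *arg)
{
	struct dma_async_tx_descriptor *desc;

	desc = dmaengine_prep_dma_cyclic(chan, buf, buf_len, period_len,
					 DMA_DEV_TO_MEM, DMA_PREP_INTERRUPT);
	if (!desc)
		return -ENOMEM;

	/* Invoked once per completed period (vchan_cyclic_callback in the driver) */
	desc->callback = period_done;
	desc->callback_param = arg;

	dmaengine_submit(desc);
	dma_async_issue_pending(chan);

	return 0;
}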
...@@ -4,7 +4,7 @@
config SH_DMAE_BASE
	bool "Renesas SuperH DMA Engine support"
	depends on (SUPERH && SH_DMA) || ARCH_SHMOBILE || COMPILE_TEST
	depends on !SH_DMA_API
	default y
	select DMA_ENGINE
......
...@@ -18,6 +18,7 @@
#include <linux/dmaengine.h>
#include <linux/delay.h>
#include <linux/err.h>
#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/module.h>
......
...@@ -73,8 +73,7 @@ static void shdma_chan_xfer_ld_queue(struct shdma_chan *schan)
static dma_cookie_t shdma_tx_submit(struct dma_async_tx_descriptor *tx)
{
struct shdma_desc *chunk, *c, *desc =
	container_of(tx, struct shdma_desc, async_tx);
struct shdma_chan *schan = to_shdma_chan(tx->chan);
dma_async_tx_callback callback = tx->callback;
dma_cookie_t cookie;

...@@ -98,19 +97,20 @@ static dma_cookie_t shdma_tx_submit(struct dma_async_tx_descriptor *tx)
    &chunk->node == &schan->ld_free))
	break;
chunk->mark = DESC_SUBMITTED;
if (chunk->chunks == 1) {
	chunk->async_tx.callback = callback;
	chunk->async_tx.callback_param = tx->callback_param;
} else {
	/* Callback goes to the last chunk */
	chunk->async_tx.callback = NULL;
}
chunk->cookie = cookie;
list_move_tail(&chunk->node, &schan->ld_queue);

dev_dbg(schan->dev, "submit #%d@%p on %d\n",
	tx->cookie, &chunk->async_tx, schan->id);
}

if (power_up) {
	int ret;
	schan->pm_state = SHDMA_PM_BUSY;
...@@ -304,6 +304,7 @@ static dma_async_tx_callback __ld_cleanup(struct shdma_chan *schan, bool all)
dma_async_tx_callback callback = NULL;
void *param = NULL;
unsigned long flags;
LIST_HEAD(cyclic_list);

spin_lock_irqsave(&schan->chan_lock, flags);
list_for_each_entry_safe(desc, _desc, &schan->ld_queue, node) {

...@@ -369,10 +370,16 @@ static dma_async_tx_callback __ld_cleanup(struct shdma_chan *schan, bool all)
if (((desc->mark == DESC_COMPLETED ||
      desc->mark == DESC_WAITING) &&
     async_tx_test_ack(&desc->async_tx)) || all) {

	if (all || !desc->cyclic) {
		/* Remove from ld_queue list */
		desc->mark = DESC_IDLE;
		list_move(&desc->node, &schan->ld_free);
	} else {
		/* reuse as cyclic */
		desc->mark = DESC_SUBMITTED;
		list_move_tail(&desc->node, &cyclic_list);
	}

	if (list_empty(&schan->ld_queue)) {
		dev_dbg(schan->dev, "Bring down channel %d\n", schan->id);

...@@ -389,6 +396,8 @@ static dma_async_tx_callback __ld_cleanup(struct shdma_chan *schan, bool all)
 */
schan->dma_chan.completed_cookie = schan->dma_chan.cookie;

list_splice_tail(&cyclic_list, &schan->ld_queue);

spin_unlock_irqrestore(&schan->chan_lock, flags);

if (callback)
...@@ -521,7 +530,7 @@ static struct shdma_desc *shdma_add_desc(struct shdma_chan *schan,
 */
static struct dma_async_tx_descriptor *shdma_prep_sg(struct shdma_chan *schan,
	struct scatterlist *sgl, unsigned int sg_len, dma_addr_t *addr,
	enum dma_transfer_direction direction, unsigned long flags, bool cyclic)
{
struct scatterlist *sg;
struct shdma_desc *first = NULL, *new = NULL /* compiler... */;

...@@ -569,6 +578,10 @@ static struct dma_async_tx_descriptor *shdma_prep_sg(struct shdma_chan *schan,
if (!new)
	goto err_get_desc;

new->cyclic = cyclic;
if (cyclic)
	new->chunks = 1;
else
	new->chunks = chunks--;
list_add_tail(&new->node, &tx_list);
} while (len);

...@@ -612,7 +625,8 @@ static struct dma_async_tx_descriptor *shdma_prep_memcpy(
sg_dma_address(&sg) = dma_src;
sg_dma_len(&sg) = len;

return shdma_prep_sg(schan, &sg, 1, &dma_dest, DMA_MEM_TO_MEM,
		     flags, false);
}

static struct dma_async_tx_descriptor *shdma_prep_slave_sg(

...@@ -640,7 +654,58 @@ static struct dma_async_tx_descriptor *shdma_prep_slave_sg(
slave_addr = ops->slave_addr(schan);

return shdma_prep_sg(schan, sgl, sg_len, &slave_addr,
		     direction, flags, false);
}
#define SHDMA_MAX_SG_LEN 32
static struct dma_async_tx_descriptor *shdma_prep_dma_cyclic(
struct dma_chan *chan, dma_addr_t buf_addr, size_t buf_len,
size_t period_len, enum dma_transfer_direction direction,
unsigned long flags, void *context)
{
struct shdma_chan *schan = to_shdma_chan(chan);
struct shdma_dev *sdev = to_shdma_dev(schan->dma_chan.device);
const struct shdma_ops *ops = sdev->ops;
unsigned int sg_len = buf_len / period_len;
int slave_id = schan->slave_id;
dma_addr_t slave_addr;
struct scatterlist sgl[SHDMA_MAX_SG_LEN];
int i;
if (!chan)
return NULL;
BUG_ON(!schan->desc_num);
if (sg_len > SHDMA_MAX_SG_LEN) {
dev_err(schan->dev, "sg length %d exceeds limit %d",
sg_len, SHDMA_MAX_SG_LEN);
return NULL;
}
/* Someone calling slave DMA on a generic channel? */
if (slave_id < 0 || (buf_len < period_len)) {
dev_warn(schan->dev,
"%s: bad parameter: buf_len=%zu, period_len=%zu, id=%d\n",
__func__, buf_len, period_len, slave_id);
return NULL;
}
slave_addr = ops->slave_addr(schan);
sg_init_table(sgl, sg_len);
for (i = 0; i < sg_len; i++) {
dma_addr_t src = buf_addr + (period_len * i);
sg_set_page(&sgl[i], pfn_to_page(PFN_DOWN(src)), period_len,
offset_in_page(src));
sg_dma_address(&sgl[i]) = src;
sg_dma_len(&sgl[i]) = period_len;
}
return shdma_prep_sg(schan, sgl, sg_len, &slave_addr,
direction, flags, true);
}

static int shdma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,

...@@ -915,6 +980,7 @@ int shdma_init(struct device *dev, struct shdma_dev *sdev,
/* Compulsory for DMA_SLAVE fields */
dma_dev->device_prep_slave_sg = shdma_prep_slave_sg;
dma_dev->device_prep_dma_cyclic = shdma_prep_dma_cyclic;
dma_dev->device_control = shdma_control;
dma_dev->dev = dev;
......
...@@ -18,21 +18,22 @@
 *
 */

#include <linux/delay.h>
#include <linux/dmaengine.h>
#include <linux/err.h>
#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/kdebug.h>
#include <linux/module.h>
#include <linux/notifier.h>
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
#include <linux/rculist.h>
#include <linux/sh_dma.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

#include "../dmaengine.h"
#include "shdma.h"
......
...@@ -14,12 +14,13 @@
 * published by the Free Software Foundation.
 */

#include <linux/dmaengine.h>
#include <linux/err.h>
#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
#include <linux/sudmac.h>

struct sudmac_chan {
......
...@@ -556,7 +556,6 @@ struct d40_gen_dmac {
 *	       later
 * @reg_val_backup_chan: Backup data for standard channel parameter registers.
 * @gcc_pwr_off_mask: Mask to maintain the channels that can be turned off.
 * @gen_dmac: the struct for generic registers values to represent u8500/8540
 *	      DMA controller
 */

...@@ -594,7 +593,6 @@ struct d40_base {
u32 reg_val_backup_v4[BACKUP_REGS_SZ_MAX];
u32 *reg_val_backup_chan;
u16 gcc_pwr_off_mask;
struct d40_gen_dmac gen_dmac;
};

...@@ -1056,62 +1054,6 @@ static int d40_sg_2_dmalen(struct scatterlist *sgl, int sg_len,
return len;
}
static int __d40_execute_command_phy(struct d40_chan *d40c,
				     enum d40_command command)
{

...@@ -1495,8 +1437,8 @@ static int d40_pause(struct d40_chan *d40c)
if (!d40c->busy)
	return 0;

pm_runtime_get_sync(d40c->base->dev);
spin_lock_irqsave(&d40c->lock, flags);

res = d40_channel_execute_command(d40c, D40_DMA_SUSPEND_REQ);
...@@ -2998,18 +2940,88 @@ static int __init d40_dmaengine_init(struct d40_base *base,
}

/* Suspend resume functionality */

#ifdef CONFIG_PM_SLEEP
static int dma40_suspend(struct device *dev)
{
struct platform_device *pdev = to_platform_device(dev);
struct d40_base *base = platform_get_drvdata(pdev);
int ret;

ret = pm_runtime_force_suspend(dev);
if (ret)
	return ret;

if (base->lcpa_regulator)
	ret = regulator_disable(base->lcpa_regulator);
return ret;
}
static int dma40_resume(struct device *dev)
{
struct platform_device *pdev = to_platform_device(dev);
struct d40_base *base = platform_get_drvdata(pdev);
int ret = 0;
if (base->lcpa_regulator) {
ret = regulator_enable(base->lcpa_regulator);
if (ret)
return ret;
}
return pm_runtime_force_resume(dev);
}
#endif
#ifdef CONFIG_PM
static void dma40_backup(void __iomem *baseaddr, u32 *backup,
u32 *regaddr, int num, bool save)
{
int i;
for (i = 0; i < num; i++) {
void __iomem *addr = baseaddr + regaddr[i];
if (save)
backup[i] = readl_relaxed(addr);
else
writel_relaxed(backup[i], addr);
}
}
static void d40_save_restore_registers(struct d40_base *base, bool save)
{
int i;
/* Save/Restore channel specific registers */
for (i = 0; i < base->num_phy_chans; i++) {
void __iomem *addr;
int idx;
if (base->phy_res[i].reserved)
continue;
addr = base->virtbase + D40_DREG_PCBASE + i * D40_DREG_PCDELTA;
idx = i * ARRAY_SIZE(d40_backup_regs_chan);
dma40_backup(addr, &base->reg_val_backup_chan[idx],
d40_backup_regs_chan,
ARRAY_SIZE(d40_backup_regs_chan),
save);
}
/* Save/Restore global registers */
dma40_backup(base->virtbase, base->reg_val_backup,
d40_backup_regs, ARRAY_SIZE(d40_backup_regs),
save);
/* Save/Restore registers only existing on dma40 v3 and later */
if (base->gen_dmac.backup)
dma40_backup(base->virtbase, base->reg_val_backup_v4,
base->gen_dmac.backup,
base->gen_dmac.backup_size,
save);
}
static int dma40_runtime_suspend(struct device *dev)
{
struct platform_device *pdev = to_platform_device(dev);

...@@ -3030,36 +3042,20 @@ static int dma40_runtime_resume(struct device *dev)
struct platform_device *pdev = to_platform_device(dev);
struct d40_base *base = platform_get_drvdata(pdev);

d40_save_restore_registers(base, false);

writel_relaxed(D40_DREG_GCC_ENABLE_ALL,
	       base->virtbase + D40_DREG_GCC);
return 0;
}
#endif

static const struct dev_pm_ops dma40_pm_ops = {
	SET_LATE_SYSTEM_SLEEP_PM_OPS(dma40_suspend, dma40_resume)
	SET_PM_RUNTIME_PM_OPS(dma40_runtime_suspend,
			      dma40_runtime_resume,
			      NULL)
};

/* Initialization functions. */
...@@ -3645,12 +3641,6 @@ static int __init d40_probe(struct platform_device *pdev)
	goto failure;
}

if (base->plat_data->use_esram_lcla) {
	base->lcpa_regulator = regulator_get(base->dev, "lcla_esram");

...@@ -3671,7 +3661,15 @@ static int __init d40_probe(struct platform_device *pdev)
	}
}

writel_relaxed(D40_DREG_GCC_ENABLE_ALL, base->virtbase + D40_DREG_GCC);

pm_runtime_irq_safe(base->dev);
pm_runtime_set_autosuspend_delay(base->dev, DMA40_AUTOSUSPEND_DELAY);
pm_runtime_use_autosuspend(base->dev);
pm_runtime_mark_last_busy(base->dev);
pm_runtime_set_active(base->dev);
pm_runtime_enable(base->dev);

ret = d40_dmaengine_init(base, num_reserved_chans);
if (ret)
	goto failure;

...@@ -3754,7 +3752,7 @@ static struct platform_driver d40_driver = {
.driver = {
	.owner = THIS_MODULE,
	.name = D40_NAME,
	.pm = &dma40_pm_ops,
	.of_match_table = d40_match,
},
};
......
obj-$(CONFIG_XILINX_VDMA) += xilinx_vdma.o
This diff is collapsed.
/*
* Xilinx DMA Engine drivers support header file
*
* Copyright (C) 2010-2014 Xilinx, Inc. All rights reserved.
*
* This is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*/
#ifndef __DMA_XILINX_DMA_H
#define __DMA_XILINX_DMA_H
#include <linux/dma-mapping.h>
#include <linux/dmaengine.h>
/**
* struct xilinx_vdma_config - VDMA Configuration structure
* @frm_dly: Frame delay
* @gen_lock: Whether in gen-lock mode
* @master: Master that it syncs to
* @frm_cnt_en: Enable frame count
* @park: Whether to park on a frame
* @park_frm: Frame to park on
* @coalesc: Interrupt coalescing threshold
* @delay: Delay counter
* @reset: Reset Channel
* @ext_fsync: External Frame Sync source
*/
struct xilinx_vdma_config {
int frm_dly;
int gen_lock;
int master;
int frm_cnt_en;
int park;
int park_frm;
int coalesc;
int delay;
int reset;
int ext_fsync;
};
int xilinx_vdma_channel_set_config(struct dma_chan *dchan,
struct xilinx_vdma_config *cfg);
#endif
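For context, a hedged sketch of how a caller might fill this structure and apply it to a channel; the field values and the function name are illustrative, while xilinx_vdma_channel_set_config() is the helper declared above (it assumes this header is included).

/* Illustrative only: put an MM2S channel into gen-lock slave mode and
 * coalesce frame-completion interrupts. */
static int my_configure_vdma(struct dma_chan *chan)
{
	struct xilinx_vdma_config cfg = {
		.gen_lock = 1,		/* sync to an external master */
		.master = 0,
		.frm_cnt_en = 1,
		.coalesc = 4,		/* one interrupt per 4 frames (assumed policy) */
	};

	return xilinx_vdma_channel_set_config(chan, &cfg);
}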
...@@ -292,7 +292,7 @@ struct dma_chan_dev {
};

/**
 * enum dma_slave_buswidth - defines bus width of the DMA slave
 * device, source or target buses
 */
enum dma_slave_buswidth {
......
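The bus widths documented by this enum are handed to a channel through dma_slave_config before transfers are prepared. A hedged sketch follows; the register address and burst size are assumptions, the structure fields and dmaengine_slave_config() are standard:

#include <linux/dmaengine.h>

/* Illustrative: describe the device side of a DEV_TO_MEM channel. */
static int my_config_rx(struct dma_chan *chan, dma_addr_t fifo_addr)
{
	struct dma_slave_config cfg = {
		.direction = DMA_DEV_TO_MEM,
		.src_addr = fifo_addr,
		.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES,
		.src_maxburst = 8,	/* assumed burst size */
	};

	return dmaengine_slave_config(chan, &cfg);
}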
...@@ -54,6 +54,7 @@ struct shdma_desc {
dma_cookie_t cookie;
int chunks;
int mark;
bool cyclic;			/* used as cyclic transfer */
};

struct shdma_chan {
......