Commit 13bf2cf9 authored by Linus Torvalds

Merge tag 'dmaengine-4.19-rc1' of git://git.infradead.org/users/vkoul/slave-dma

Pull DMAengine updates from Vinod Koul:
 "This round brings a couple of framework changes, a new driver and the
  usual driver updates:

   - new managed helper for dmaengine framework registration

   - split dmaengine pause capability to pause and resume and allow
     drivers to report that individually

   - update dma_request_chan_by_mask() to handle deferred probing

   - move imx-sdma to use virt-dma

   - new driver for Actions Semi Owl family S900 controller

   - minor updates to intel, renesas, mv_xor, pl330 etc"

* tag 'dmaengine-4.19-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (46 commits)
  dmaengine: Add Actions Semi Owl family S900 DMA driver
  dt-bindings: dmaengine: Add binding for Actions Semi Owl SoCs
  dmaengine: sh: rcar-dmac: Should not stop the DMAC by rcar_dmac_sync_tcr()
  dmaengine: mic_x100_dma: use the new helper to simplify the code
  dmaengine: add a new helper dmaenginem_async_device_register
  dmaengine: imx-sdma: add memcpy interface
  dmaengine: imx-sdma: add SDMA_BD_MAX_CNT to replace '0xffff'
  dmaengine: dma_request_chan_by_mask() to handle deferred probing
  dmaengine: pl330: fix irq race with terminate_all
  dmaengine: Revert "dmaengine: mv_xor_v2: enable COMPILE_TEST"
  dmaengine: mv_xor_v2: use {lower,upper}_32_bits to configure HW descriptor address
  dmaengine: mv_xor_v2: enable COMPILE_TEST
  dmaengine: mv_xor_v2: move unmap to before callback
  dmaengine: mv_xor_v2: convert callback to helper function
  dmaengine: mv_xor_v2: kill the tasklets upon exit
  dmaengine: mv_xor_v2: explicitly freeup irq
  dmaengine: sh: rcar-dmac: Add dma_pause operation
  dmaengine: sh: rcar-dmac: add a new function to clear CHCR.DE with barrier
  dmaengine: idma64: Support dmaengine_terminate_sync()
  dmaengine: hsu: Support dmaengine_terminate_sync()
  ...
parents bbd60bff 3257d861
* Actions Semi Owl SoCs DMA controller
This binding follows the generic DMA bindings defined in dma.txt.
Required properties:
- compatible: Should be "actions,s900-dma".
- reg: Should contain DMA registers location and length.
- interrupts: Should contain 4 interrupts shared by all channels.
- #dma-cells: Must be <1>. Used to represent the number of integer
cells in the dmas property of client device.
- dma-channels: Physical channels supported.
- dma-requests: Number of DMA request signals supported by the controller.
Refer to Documentation/devicetree/bindings/dma/dma.txt
- clocks: Phandle and Specifier of the clock feeding the DMA controller.
Example:
Controller:
	dma: dma-controller@e0260000 {
		compatible = "actions,s900-dma";
		reg = <0x0 0xe0260000 0x0 0x1000>;
		interrupts = <GIC_SPI 57 IRQ_TYPE_LEVEL_HIGH>,
			     <GIC_SPI 58 IRQ_TYPE_LEVEL_HIGH>,
			     <GIC_SPI 59 IRQ_TYPE_LEVEL_HIGH>,
			     <GIC_SPI 60 IRQ_TYPE_LEVEL_HIGH>;
		#dma-cells = <1>;
		dma-channels = <12>;
		dma-requests = <46>;
		clocks = <&clock CLK_DMAC>;
	};
Client:
DMA clients connected to the Actions Semi Owl SoCs DMA controller must
use the format described in the dma.txt file, using a two-cell specifier
for each channel.
The two cells in order are:
1. A phandle pointing to the DMA controller.
2. The channel id.
	uart5: serial@e012a000 {
		...
		dma-names = "tx", "rx";
		dmas = <&dma 26>, <&dma 27>;
		...
	};
@@ -29,6 +29,7 @@ Required Properties:
 		- "renesas,dmac-r8a77965" (R-Car M3-N)
 		- "renesas,dmac-r8a77970" (R-Car V3M)
 		- "renesas,dmac-r8a77980" (R-Car V3H)
+		- "renesas,dmac-r8a77990" (R-Car E3)
 		- "renesas,dmac-r8a77995" (R-Car D3)
 - reg: base address and length of the registers block for the DMAC
......
@@ -66,6 +66,8 @@ Optional child node properties:
 Optional child node properties for VDMA:
 - xlnx,genlock-mode: Tells Genlock synchronization is
	enabled/disabled in hardware.
+- xlnx,enable-vert-flip: Tells vertical flip is
+	enabled/disabled in hardware (S2MM path).
 Optional child node properties for AXI DMA:
 - dma-channels: Number of dma channels in child node.
......
@@ -240,6 +240,7 @@ CLOCK
   devm_of_clk_add_hw_provider()

 DMA
+  dmaenginem_async_device_register()
   dmam_alloc_coherent()
   dmam_alloc_attrs()
   dmam_declare_coherent_memory()
......
@@ -42,6 +42,8 @@ static struct page *pq_scribble_page;
 #define P(b, d) (b[d-2])
 #define Q(b, d) (b[d-1])

+#define MAX_DISKS 255
+
 /**
  * do_async_gen_syndrome - asynchronously calculate P and/or Q
  */
@@ -184,7 +186,7 @@ async_gen_syndrome(struct page **blocks, unsigned int offset, int disks,
 	struct dma_device *device = chan ? chan->device : NULL;
 	struct dmaengine_unmap_data *unmap = NULL;

-	BUG_ON(disks > 255 || !(P(blocks, disks) || Q(blocks, disks)));
+	BUG_ON(disks > MAX_DISKS || !(P(blocks, disks) || Q(blocks, disks)));

 	if (device)
 		unmap = dmaengine_get_unmap_data(device->dev, disks, GFP_NOWAIT);
@@ -196,7 +198,7 @@ async_gen_syndrome(struct page **blocks, unsigned int offset, int disks,
 	    is_dma_pq_aligned(device, offset, 0, len)) {
 		struct dma_async_tx_descriptor *tx;
 		enum dma_ctrl_flags dma_flags = 0;
-		unsigned char coefs[src_cnt];
+		unsigned char coefs[MAX_DISKS];
 		int i, j;

 		/* run the p+q asynchronously */
@@ -299,11 +301,11 @@ async_syndrome_val(struct page **blocks, unsigned int offset, int disks,
 	struct dma_chan *chan = pq_val_chan(submit, blocks, disks, len);
 	struct dma_device *device = chan ? chan->device : NULL;
 	struct dma_async_tx_descriptor *tx;
-	unsigned char coefs[disks-2];
+	unsigned char coefs[MAX_DISKS];
 	enum dma_ctrl_flags dma_flags = submit->cb_fn ? DMA_PREP_INTERRUPT : 0;
 	struct dmaengine_unmap_data *unmap = NULL;

-	BUG_ON(disks < 4);
+	BUG_ON(disks < 4 || disks > MAX_DISKS);

 	if (device)
 		unmap = dmaengine_get_unmap_data(device->dev, disks, GFP_NOWAIT);
......
@@ -81,11 +81,13 @@ static void raid6_dual_recov(int disks, size_t bytes, int faila, int failb, stru
 		init_async_submit(&submit, 0, NULL, NULL, NULL, addr_conv);
 		tx = async_gen_syndrome(ptrs, 0, disks, bytes, &submit);
 	} else {
-		struct page *blocks[disks];
+		struct page *blocks[NDISKS];
 		struct page *dest;
 		int count = 0;
 		int i;

+		BUG_ON(disks > NDISKS);
+
 		/* data+Q failure. Reconstruct data from P,
 		 * then rebuild syndrome
 		 */
......
@@ -250,6 +250,7 @@ config IMX_SDMA
 	tristate "i.MX SDMA support"
 	depends on ARCH_MXC
 	select DMA_ENGINE
+	select DMA_VIRTUAL_CHANNELS
 	help
 	  Support the i.MX SDMA engine. This engine is integrated into
 	  Freescale i.MX25/31/35/51/53/6 chips.
@@ -413,6 +414,14 @@ config NBPFAXI_DMA
 	help
 	  Support for "Type-AXI" NBPF DMA IPs from Renesas

+config OWL_DMA
+	tristate "Actions Semi Owl SoCs DMA support"
+	depends on ARCH_ACTIONS
+	select DMA_ENGINE
+	select DMA_VIRTUAL_CHANNELS
+	help
+	  Enable support for the Actions Semi Owl SoCs DMA controller.
+
 config PCH_DMA
 	tristate "Intel EG20T PCH / LAPIS Semicon IOH(ML7213/ML7223/ML7831) DMA"
 	depends on PCI && (X86_32 || COMPILE_TEST)
......
@@ -52,6 +52,7 @@ obj-$(CONFIG_MV_XOR_V2) += mv_xor_v2.o
 obj-$(CONFIG_MXS_DMA) += mxs-dma.o
 obj-$(CONFIG_MX3_IPU) += ipu/
 obj-$(CONFIG_NBPFAXI_DMA) += nbpfaxi.o
+obj-$(CONFIG_OWL_DMA) += owl-dma.o
 obj-$(CONFIG_PCH_DMA) += pch_dma.o
 obj-$(CONFIG_PL330_DMA) += pl330.o
 obj-$(CONFIG_PPC_BESTCOMM) += bestcomm/
......
@@ -500,12 +500,8 @@ int dma_get_slave_caps(struct dma_chan *chan, struct dma_slave_caps *caps)
 	caps->max_burst = device->max_burst;
 	caps->residue_granularity = device->residue_granularity;
 	caps->descriptor_reuse = device->descriptor_reuse;
-
-	/*
-	 * Some devices implement only pause (e.g. to get residuum) but no
-	 * resume. However cmd_pause is advertised as pause AND resume.
-	 */
-	caps->cmd_pause = !!(device->device_pause && device->device_resume);
+	caps->cmd_pause = !!device->device_pause;
+	caps->cmd_resume = !!device->device_resume;
 	caps->cmd_terminate = !!device->device_terminate_all;

 	return 0;
@@ -774,8 +770,14 @@ struct dma_chan *dma_request_chan_by_mask(const dma_cap_mask_t *mask)
 		return ERR_PTR(-ENODEV);

 	chan = __dma_request_channel(mask, NULL, NULL);
-	if (!chan)
-		chan = ERR_PTR(-ENODEV);
+	if (!chan) {
+		mutex_lock(&dma_list_mutex);
+		if (list_empty(&dma_device_list))
+			chan = ERR_PTR(-EPROBE_DEFER);
+		else
+			chan = ERR_PTR(-ENODEV);
+		mutex_unlock(&dma_list_mutex);
+	}

 	return chan;
 }
@@ -1139,6 +1141,41 @@ void dma_async_device_unregister(struct dma_device *device)
 }
 EXPORT_SYMBOL(dma_async_device_unregister);

+static void dmam_device_release(struct device *dev, void *res)
+{
+	struct dma_device *device;
+
+	device = *(struct dma_device **)res;
+	dma_async_device_unregister(device);
+}
+
+/**
+ * dmaenginem_async_device_register - registers DMA devices found
+ * @device: &dma_device
+ *
+ * The operation is managed and will be undone on driver detach.
+ */
+int dmaenginem_async_device_register(struct dma_device *device)
+{
+	void *p;
+	int ret;
+
+	p = devres_alloc(dmam_device_release, sizeof(void *), GFP_KERNEL);
+	if (!p)
+		return -ENOMEM;
+
+	ret = dma_async_device_register(device);
+	if (!ret) {
+		*(struct dma_device **)p = device;
+		devres_add(device->dev, p);
+	} else {
+		devres_free(p);
+	}
+
+	return ret;
+}
+EXPORT_SYMBOL(dmaenginem_async_device_register);
+
 struct dmaengine_unmap_pool {
 	struct kmem_cache *cache;
 	const char *name;
......
@@ -413,6 +413,13 @@ static void hsu_dma_free_chan_resources(struct dma_chan *chan)
 	vchan_free_chan_resources(to_virt_chan(chan));
 }

+static void hsu_dma_synchronize(struct dma_chan *chan)
+{
+	struct hsu_dma_chan *hsuc = to_hsu_dma_chan(chan);
+
+	vchan_synchronize(&hsuc->vchan);
+}
+
 int hsu_dma_probe(struct hsu_dma_chip *chip)
 {
 	struct hsu_dma *hsu;
@@ -459,6 +466,7 @@ int hsu_dma_probe(struct hsu_dma_chip *chip)
 	hsu->dma.device_pause = hsu_dma_pause;
 	hsu->dma.device_resume = hsu_dma_resume;
 	hsu->dma.device_terminate_all = hsu_dma_terminate_all;
+	hsu->dma.device_synchronize = hsu_dma_synchronize;

 	hsu->dma.src_addr_widths = HSU_DMA_BUSWIDTHS;
 	hsu->dma.dst_addr_widths = HSU_DMA_BUSWIDTHS;
......
@@ -496,6 +496,13 @@ static int idma64_terminate_all(struct dma_chan *chan)
 	return 0;
 }

+static void idma64_synchronize(struct dma_chan *chan)
+{
+	struct idma64_chan *idma64c = to_idma64_chan(chan);
+
+	vchan_synchronize(&idma64c->vchan);
+}
+
 static int idma64_alloc_chan_resources(struct dma_chan *chan)
 {
 	struct idma64_chan *idma64c = to_idma64_chan(chan);
@@ -583,6 +590,7 @@ static int idma64_probe(struct idma64_chip *chip)
 	idma64->dma.device_pause = idma64_pause;
 	idma64->dma.device_resume = idma64_resume;
 	idma64->dma.device_terminate_all = idma64_terminate_all;
+	idma64->dma.device_synchronize = idma64_synchronize;

 	idma64->dma.src_addr_widths = IDMA64_BUSWIDTHS;
 	idma64->dma.dst_addr_widths = IDMA64_BUSWIDTHS;
......
@@ -688,6 +688,12 @@ static void ioat_restart_channel(struct ioatdma_chan *ioat_chan)
 {
 	u64 phys_complete;

+	/* set the completion address register again */
+	writel(lower_32_bits(ioat_chan->completion_dma),
+	       ioat_chan->reg_base + IOAT_CHANCMP_OFFSET_LOW);
+	writel(upper_32_bits(ioat_chan->completion_dma),
+	       ioat_chan->reg_base + IOAT_CHANCMP_OFFSET_HIGH);
+
 	ioat_quiesce(ioat_chan, 0);
 	if (ioat_cleanup_preamble(ioat_chan, &phys_complete))
 		__cleanup(ioat_chan, phys_complete);
......
@@ -470,11 +470,6 @@ static void mic_dma_chan_destroy(struct mic_dma_chan *ch)
 	mic_dma_chan_mask_intr(ch);
 }

-static void mic_dma_unregister_dma_device(struct mic_dma_device *mic_dma_dev)
-{
-	dma_async_device_unregister(&mic_dma_dev->dma_dev);
-}
-
 static int mic_dma_setup_irq(struct mic_dma_chan *ch)
 {
 	ch->cookie =
@@ -630,7 +625,7 @@ static int mic_dma_register_dma_device(struct mic_dma_device *mic_dma_dev,
 		list_add_tail(&mic_dma_dev->mic_ch[i].api_ch.device_node,
 			      &mic_dma_dev->dma_dev.channels);
 	}
-	return dma_async_device_register(&mic_dma_dev->dma_dev);
+	return dmaenginem_async_device_register(&mic_dma_dev->dma_dev);
 }

 /*
@@ -678,7 +673,6 @@ static struct mic_dma_device *mic_dma_dev_reg(struct mbus_device *mbdev,
 static void mic_dma_dev_unreg(struct mic_dma_device *mic_dma_dev)
 {
-	mic_dma_unregister_dma_device(mic_dma_dev);
 	mic_dma_uninit(mic_dma_dev);
 	kfree(mic_dma_dev);
 }
......
@@ -174,6 +174,7 @@ struct mv_xor_v2_device {
 	int desc_size;
 	unsigned int npendings;
 	unsigned int hw_queue_idx;
+	struct msi_desc *msi_desc;
 };

 /**
@@ -588,11 +589,9 @@ static void mv_xor_v2_tasklet(unsigned long data)
 		 */
 		dma_cookie_complete(&next_pending_sw_desc->async_tx);

-		if (next_pending_sw_desc->async_tx.callback)
-			next_pending_sw_desc->async_tx.callback(
-				next_pending_sw_desc->async_tx.callback_param);
-
 		dma_descriptor_unmap(&next_pending_sw_desc->async_tx);
+		dmaengine_desc_get_callback_invoke(
+				&next_pending_sw_desc->async_tx, NULL);
 	}

 	dma_run_dependencies(&next_pending_sw_desc->async_tx);
@@ -643,9 +642,9 @@ static int mv_xor_v2_descq_init(struct mv_xor_v2_device *xor_dev)
 	       xor_dev->dma_base + MV_XOR_V2_DMA_DESQ_SIZE_OFF);

 	/* write the DESQ address to the DMA enngine*/
-	writel(xor_dev->hw_desq & 0xFFFFFFFF,
+	writel(lower_32_bits(xor_dev->hw_desq),
 	       xor_dev->dma_base + MV_XOR_V2_DMA_DESQ_BALR_OFF);
-	writel((xor_dev->hw_desq & 0xFFFF00000000) >> 32,
+	writel(upper_32_bits(xor_dev->hw_desq),
 	       xor_dev->dma_base + MV_XOR_V2_DMA_DESQ_BAHR_OFF);

 	/*
@@ -780,6 +779,7 @@ static int mv_xor_v2_probe(struct platform_device *pdev)
 	msi_desc = first_msi_entry(&pdev->dev);
 	if (!msi_desc)
 		goto free_msi_irqs;
+	xor_dev->msi_desc = msi_desc;

 	ret = devm_request_irq(&pdev->dev, msi_desc->irq,
 			       mv_xor_v2_interrupt_handler, 0,
@@ -897,8 +897,12 @@ static int mv_xor_v2_remove(struct platform_device *pdev)
 			  xor_dev->desc_size * MV_XOR_V2_DESC_NUM,
 			  xor_dev->hw_desq_virt, xor_dev->hw_desq);

+	devm_free_irq(&pdev->dev, xor_dev->msi_desc->irq, xor_dev);
+
 	platform_msi_domain_free_irqs(&pdev->dev);

+	tasklet_kill(&xor_dev->irq_tasklet);
+
 	clk_disable_unprepare(xor_dev->clk);

 	return 0;
......
@@ -479,6 +479,7 @@ static size_t nbpf_xfer_size(struct nbpf_device *nbpf,
 	default:
 		pr_warn("%s(): invalid bus width %u\n", __func__, width);
+		/* fall through */
 	case DMA_SLAVE_BUSWIDTH_1_BYTE:
 		size = burst;
 	}
......
@@ -1046,13 +1046,16 @@ static bool _start(struct pl330_thread *thrd)
 		if (_state(thrd) == PL330_STATE_KILLING)
 			UNTIL(thrd, PL330_STATE_STOPPED)
+		/* fall through */

 	case PL330_STATE_FAULTING:
 		_stop(thrd);
+		/* fall through */

 	case PL330_STATE_KILLING:
 	case PL330_STATE_COMPLETING:
 		UNTIL(thrd, PL330_STATE_STOPPED)
+		/* fall through */

 	case PL330_STATE_STOPPED:
 		return _trigger(thrd);
@@ -1779,8 +1782,6 @@ static inline void _free_event(struct pl330_thread *thrd, int ev)
 static void pl330_release_channel(struct pl330_thread *thrd)
 {
-	struct pl330_dmac *pl330;
-
 	if (!thrd || thrd->free)
 		return;
@@ -1789,8 +1790,6 @@ static void pl330_release_channel(struct pl330_thread *thrd)
 	dma_pl330_rqcb(thrd->req[1 - thrd->lstenq].desc, PL330_ERR_ABORT);
 	dma_pl330_rqcb(thrd->req[thrd->lstenq].desc, PL330_ERR_ABORT);

-	pl330 = thrd->dmac;
-
 	_free_event(thrd, thrd->ev);
 	thrd->free = true;
 }
@@ -2257,13 +2256,14 @@ static int pl330_terminate_all(struct dma_chan *chan)
 	pm_runtime_get_sync(pl330->ddma.dev);
 	spin_lock_irqsave(&pch->lock, flags);
+
 	spin_lock(&pl330->lock);
 	_stop(pch->thread);
-	spin_unlock(&pl330->lock);
-
 	pch->thread->req[0].desc = NULL;
 	pch->thread->req[1].desc = NULL;
 	pch->thread->req_running = -1;
+	spin_unlock(&pl330->lock);

 	power_down = pch->active;
 	pch->active = false;
......
+// SPDX-License-Identifier: GPL-2.0
 /*
  * Renesas R-Car Gen2 DMA Controller Driver
  *
  * Copyright (C) 2014 Renesas Electronics Inc.
  *
  * Author: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
- *
- * This is free software; you can redistribute it and/or modify
- * it under the terms of version 2 of the GNU General Public License as
- * published by the Free Software Foundation.
  */

 #include <linux/delay.h>
@@ -431,7 +428,8 @@ static void rcar_dmac_chan_start_xfer(struct rcar_dmac_chan *chan)
 		chcr |= RCAR_DMACHCR_DPM_DISABLED | RCAR_DMACHCR_IE;
 	}

-	rcar_dmac_chan_write(chan, RCAR_DMACHCR, chcr | RCAR_DMACHCR_DE);
+	rcar_dmac_chan_write(chan, RCAR_DMACHCR,
+			     chcr | RCAR_DMACHCR_DE | RCAR_DMACHCR_CAIE);
 }

 static int rcar_dmac_init(struct rcar_dmac *dmac)
@@ -761,21 +759,15 @@ static void rcar_dmac_chcr_de_barrier(struct rcar_dmac_chan *chan)
 		dev_err(chan->chan.device->dev, "CHCR DE check error\n");
 }

-static void rcar_dmac_sync_tcr(struct rcar_dmac_chan *chan)
+static void rcar_dmac_clear_chcr_de(struct rcar_dmac_chan *chan)
 {
 	u32 chcr = rcar_dmac_chan_read(chan, RCAR_DMACHCR);

-	if (!(chcr & RCAR_DMACHCR_DE))
-		return;
-
 	/* set DE=0 and flush remaining data */
 	rcar_dmac_chan_write(chan, RCAR_DMACHCR, (chcr & ~RCAR_DMACHCR_DE));

 	/* make sure all remaining data was flushed */
 	rcar_dmac_chcr_de_barrier(chan);
-
-	/* back DE */
-	rcar_dmac_chan_write(chan, RCAR_DMACHCR, chcr);
 }

 static void rcar_dmac_chan_halt(struct rcar_dmac_chan *chan)
@@ -783,7 +775,8 @@ static void rcar_dmac_chan_halt(struct rcar_dmac_chan *chan)
 	u32 chcr = rcar_dmac_chan_read(chan, RCAR_DMACHCR);

 	chcr &= ~(RCAR_DMACHCR_DSE | RCAR_DMACHCR_DSIE | RCAR_DMACHCR_IE |
-		  RCAR_DMACHCR_TE | RCAR_DMACHCR_DE);
+		  RCAR_DMACHCR_TE | RCAR_DMACHCR_DE |
+		  RCAR_DMACHCR_CAE | RCAR_DMACHCR_CAIE);
 	rcar_dmac_chan_write(chan, RCAR_DMACHCR, chcr);
 	rcar_dmac_chcr_de_barrier(chan);
 }
@@ -812,12 +805,7 @@ static void rcar_dmac_chan_reinit(struct rcar_dmac_chan *chan)
 	}
 }

-static void rcar_dmac_stop(struct rcar_dmac *dmac)
-{
-	rcar_dmac_write(dmac, RCAR_DMAOR, 0);
-}
-
-static void rcar_dmac_abort(struct rcar_dmac *dmac)
+static void rcar_dmac_stop_all_chan(struct rcar_dmac *dmac)
 {
 	unsigned int i;
@@ -826,14 +814,24 @@ static void rcar_dmac_abort(struct rcar_dmac *dmac)
 		struct rcar_dmac_chan *chan = &dmac->channels[i];

 		/* Stop and reinitialize the channel. */
-		spin_lock(&chan->lock);
+		spin_lock_irq(&chan->lock);
 		rcar_dmac_chan_halt(chan);
-		spin_unlock(&chan->lock);
-
-		rcar_dmac_chan_reinit(chan);
+		spin_unlock_irq(&chan->lock);
 	}
 }

+static int rcar_dmac_chan_pause(struct dma_chan *chan)
+{
+	unsigned long flags;
+	struct rcar_dmac_chan *rchan = to_rcar_dmac_chan(chan);
+
+	spin_lock_irqsave(&rchan->lock, flags);
+	rcar_dmac_clear_chcr_de(rchan);
+	spin_unlock_irqrestore(&rchan->lock, flags);
+
+	return 0;
+}
+
 /* -----------------------------------------------------------------------------
  * Descriptors preparation
  */
@@ -1355,9 +1353,6 @@ static unsigned int rcar_dmac_chan_get_residue(struct rcar_dmac_chan *chan,
 		residue += chunk->size;
 	}

-	if (desc->direction == DMA_DEV_TO_MEM)
-		rcar_dmac_sync_tcr(chan);
-
 	/* Add the residue for the current chunk. */
 	residue += rcar_dmac_chan_read(chan, RCAR_DMATCRB) << desc->xfer_shift;
@@ -1522,11 +1517,26 @@ static irqreturn_t rcar_dmac_isr_channel(int irq, void *dev)
 	u32 mask = RCAR_DMACHCR_DSE | RCAR_DMACHCR_TE;
 	struct rcar_dmac_chan *chan = dev;
 	irqreturn_t ret = IRQ_NONE;
+	bool reinit = false;
 	u32 chcr;

 	spin_lock(&chan->lock);

 	chcr = rcar_dmac_chan_read(chan, RCAR_DMACHCR);
+	if (chcr & RCAR_DMACHCR_CAE) {
+		struct rcar_dmac *dmac = to_rcar_dmac(chan->chan.device);
+
+		/*
+		 * We don't need to call rcar_dmac_chan_halt()
+		 * because channel is already stopped in error case.
+		 * We need to clear register and check DE bit as recovery.
+		 */
+		rcar_dmac_write(dmac, RCAR_DMACHCLR, 1 << chan->index);
+		rcar_dmac_chcr_de_barrier(chan);
+		reinit = true;
+		goto spin_lock_end;
+	}
+
 	if (chcr & RCAR_DMACHCR_TE)
 		mask |= RCAR_DMACHCR_DE;
 	rcar_dmac_chan_write(chan, RCAR_DMACHCR, chcr & ~mask);
@@ -1539,8 +1549,16 @@ static irqreturn_t rcar_dmac_isr_channel(int irq, void *dev)
 	if (chcr & RCAR_DMACHCR_TE)
 		ret |= rcar_dmac_isr_transfer_end(chan);

+spin_lock_end:
 	spin_unlock(&chan->lock);

+	if (reinit) {
+		dev_err(chan->chan.device->dev, "Channel Address Error\n");
+
+		rcar_dmac_chan_reinit(chan);
+		ret = IRQ_HANDLED;
+	}
+
 	return ret;
 }
@@ -1597,24 +1615,6 @@ static irqreturn_t rcar_dmac_isr_channel_thread(int irq, void *dev)
 	return IRQ_HANDLED;
 }

-static irqreturn_t rcar_dmac_isr_error(int irq, void *data)
-{
-	struct rcar_dmac *dmac = data;
-
-	if (!(rcar_dmac_read(dmac, RCAR_DMAOR) & RCAR_DMAOR_AE))
-		return IRQ_NONE;
-
-	/*
-	 * An unrecoverable error occurred on an unknown channel. Halt the DMAC,
-	 * abort transfers on all channels, and reinitialize the DMAC.
-	 */
-	rcar_dmac_stop(dmac);
-	rcar_dmac_abort(dmac);
-	rcar_dmac_init(dmac);
-
-	return IRQ_HANDLED;
-}
-
 /* -----------------------------------------------------------------------------
  * OF xlate and channel filter
  */
@@ -1784,8 +1784,6 @@ static int rcar_dmac_probe(struct platform_device *pdev)
 	struct rcar_dmac *dmac;
 	struct resource *mem;
 	unsigned int i;
-	char *irqname;
-	int irq;
 	int ret;

 	dmac = devm_kzalloc(&pdev->dev, sizeof(*dmac), GFP_KERNEL);
@@ -1824,17 +1822,6 @@ static int rcar_dmac_probe(struct platform_device *pdev)
 	if (IS_ERR(dmac->iomem))
 		return PTR_ERR(dmac->iomem);

-	irq = platform_get_irq_byname(pdev, "error");
-	if (irq < 0) {
-		dev_err(&pdev->dev, "no error IRQ specified\n");
-		return -ENODEV;
-	}
-
-	irqname = devm_kasprintf(dmac->dev, GFP_KERNEL, "%s:error",
-				 dev_name(dmac->dev));
-	if (!irqname)
-		return -ENOMEM;
-
 	/* Enable runtime PM and initialize the device. */
 	pm_runtime_enable(&pdev->dev);
 	ret = pm_runtime_get_sync(&pdev->dev);
@@ -1871,6 +1858,7 @@ static int rcar_dmac_probe(struct platform_device *pdev)
 	engine->device_prep_slave_sg = rcar_dmac_prep_slave_sg;
 	engine->device_prep_dma_cyclic = rcar_dmac_prep_dma_cyclic;
 	engine->device_config = rcar_dmac_device_config;
+	engine->device_pause = rcar_dmac_chan_pause;
 	engine->device_terminate_all = rcar_dmac_chan_terminate_all;
 	engine->device_tx_status = rcar_dmac_tx_status;
 	engine->device_issue_pending = rcar_dmac_issue_pending;
@@ -1885,14 +1873,6 @@ static int rcar_dmac_probe(struct platform_device *pdev)
 		goto error;
 	}

-	ret = devm_request_irq(&pdev->dev, irq, rcar_dmac_isr_error, 0,
-			       irqname, dmac);
-	if (ret) {
-		dev_err(&pdev->dev, "failed to request IRQ %u (%d)\n",
-			irq, ret);
-		return ret;
-	}
-
 	/* Register the DMAC as a DMA provider for DT. */
 	ret = of_dma_controller_register(pdev->dev.of_node, rcar_dmac_of_xlate,
 					 NULL);
@@ -1932,7 +1912,7 @@ static void rcar_dmac_shutdown(struct platform_device *pdev)
 {
 	struct rcar_dmac *dmac = platform_get_drvdata(pdev);

-	rcar_dmac_stop(dmac);
+	rcar_dmac_stop_all_chan(dmac);
 }

 static const struct of_device_id rcar_dmac_of_ids[] = {
...
@@ -555,6 +555,7 @@ struct d40_gen_dmac {
 * @reg_val_backup_v4: Backup of registers that only exits on dma40 v3 and
 *		       later
 * @reg_val_backup_chan: Backup data for standard channel parameter registers.
+ * @regs_interrupt: Scratch space for registers during interrupt.
 * @gcc_pwr_off_mask: Mask to maintain the channels that can be turned off.
 * @gen_dmac: the struct for generic registers values to represent u8500/8540
 *	      DMA controller
@@ -592,6 +593,7 @@ struct d40_base {
 	u32  reg_val_backup[BACKUP_REGS_SZ];
 	u32  reg_val_backup_v4[BACKUP_REGS_SZ_MAX];
 	u32 *reg_val_backup_chan;
+	u32 *regs_interrupt;
 	u16  gcc_pwr_off_mask;
 	struct d40_gen_dmac gen_dmac;
 };
@@ -1637,7 +1639,7 @@ static irqreturn_t d40_handle_interrupt(int irq, void *data)
 	struct d40_chan *d40c;
 	unsigned long flags;
 	struct d40_base *base = data;
-	u32 regs[base->gen_dmac.il_size];
+	u32 *regs = base->regs_interrupt;
 	struct d40_interrupt_lookup *il = base->gen_dmac.il;
 	u32 il_size = base->gen_dmac.il_size;
@@ -3258,13 +3260,22 @@ static struct d40_base * __init d40_hw_detect_init(struct platform_device *pdev)
 	if (!base->lcla_pool.alloc_map)
 		goto free_backup_chan;

+	base->regs_interrupt = kmalloc_array(base->gen_dmac.il_size,
+					     sizeof(*base->regs_interrupt),
+					     GFP_KERNEL);
+	if (!base->regs_interrupt)
+		goto free_map;
+
 	base->desc_slab = kmem_cache_create(D40_NAME, sizeof(struct d40_desc),
 					    0, SLAB_HWCACHE_ALIGN,
 					    NULL);
 	if (base->desc_slab == NULL)
-		goto free_map;
+		goto free_regs;

 	return base;

+ free_regs:
+	kfree(base->regs_interrupt);
 free_map:
 	kfree(base->lcla_pool.alloc_map);
 free_backup_chan:
...
@@ -594,7 +594,7 @@ static void stm32_dma_start_transfer(struct stm32_dma_chan *chan)
 	chan->busy = true;

-	dev_dbg(chan2dev(chan), "vchan %p: started\n", &chan->vchan);
+	dev_dbg(chan2dev(chan), "vchan %pK: started\n", &chan->vchan);
 }

 static void stm32_dma_configure_next_sg(struct stm32_dma_chan *chan)
@@ -693,7 +693,7 @@ static void stm32_dma_issue_pending(struct dma_chan *c)
 	spin_lock_irqsave(&chan->vchan.lock, flags);
 	if (vchan_issue_pending(&chan->vchan) && !chan->desc && !chan->busy) {
-		dev_dbg(chan2dev(chan), "vchan %p: issued\n", &chan->vchan);
+		dev_dbg(chan2dev(chan), "vchan %pK: issued\n", &chan->vchan);
 		stm32_dma_start_transfer(chan);
 	}
...
@@ -1170,7 +1170,7 @@ static void stm32_mdma_start_transfer(struct stm32_mdma_chan *chan)
 	chan->busy = true;

-	dev_dbg(chan2dev(chan), "vchan %p: started\n", &chan->vchan);
+	dev_dbg(chan2dev(chan), "vchan %pK: started\n", &chan->vchan);
 }

 static void stm32_mdma_issue_pending(struct dma_chan *c)
@@ -1183,7 +1183,7 @@ static void stm32_mdma_issue_pending(struct dma_chan *c)
 	if (!vchan_issue_pending(&chan->vchan))
 		goto end;

-	dev_dbg(chan2dev(chan), "vchan %p: issued\n", &chan->vchan);
+	dev_dbg(chan2dev(chan), "vchan %pK: issued\n", &chan->vchan);

 	if (!chan->desc && !chan->busy)
 		stm32_mdma_start_transfer(chan);
@@ -1203,7 +1203,7 @@ static int stm32_mdma_pause(struct dma_chan *c)
 	spin_unlock_irqrestore(&chan->vchan.lock, flags);

 	if (!ret)
-		dev_dbg(chan2dev(chan), "vchan %p: pause\n", &chan->vchan);
+		dev_dbg(chan2dev(chan), "vchan %pK: pause\n", &chan->vchan);

 	return ret;
 }
@@ -1240,7 +1240,7 @@ static int stm32_mdma_resume(struct dma_chan *c)
 	spin_unlock_irqrestore(&chan->vchan.lock, flags);

-	dev_dbg(chan2dev(chan), "vchan %p: resume\n", &chan->vchan);
+	dev_dbg(chan2dev(chan), "vchan %pK: resume\n", &chan->vchan);

 	return 0;
 }
...
@@ -115,6 +115,9 @@
 #define XILINX_VDMA_REG_START_ADDRESS(n)	(0x000c + 4 * (n))
 #define XILINX_VDMA_REG_START_ADDRESS_64(n)	(0x000c + 8 * (n))

+#define XILINX_VDMA_REG_ENABLE_VERTICAL_FLIP	0x00ec
+#define XILINX_VDMA_ENABLE_VERTICAL_FLIP	BIT(0)
+
 /* HW specific definitions */
 #define XILINX_DMA_MAX_CHANS_PER_DEVICE	0x20
@@ -340,6 +343,7 @@ struct xilinx_dma_tx_descriptor {
 * @start_transfer: Differentiate b/w DMA IP's transfer
 * @stop_transfer: Differentiate b/w DMA IP's quiesce
 * @tdest: TDEST value for mcdma
+ * @has_vflip: S2MM vertical flip
 */
struct xilinx_dma_chan {
 	struct xilinx_dma_device *xdev;
@@ -376,6 +380,7 @@ struct xilinx_dma_chan {
 	void (*start_transfer)(struct xilinx_dma_chan *chan);
 	int (*stop_transfer)(struct xilinx_dma_chan *chan);
 	u16 tdest;
+	bool has_vflip;
 };

/**
@@ -1092,6 +1097,14 @@ static void xilinx_vdma_start_transfer(struct xilinx_dma_chan *chan)
 			   desc->async_tx.phys);

 	/* Configure the hardware using info in the config structure */
+	if (chan->has_vflip) {
+		reg = dma_read(chan, XILINX_VDMA_REG_ENABLE_VERTICAL_FLIP);
+		reg &= ~XILINX_VDMA_ENABLE_VERTICAL_FLIP;
+		reg |= config->vflip_en;
+		dma_write(chan, XILINX_VDMA_REG_ENABLE_VERTICAL_FLIP,
+			  reg);
+	}
+
 	reg = dma_ctrl_read(chan, XILINX_DMA_REG_DMACR);

 	if (config->frm_cnt_en)
@@ -2105,6 +2118,8 @@ int xilinx_vdma_channel_set_config(struct dma_chan *dchan,
 	}

 	chan->config.frm_cnt_en = cfg->frm_cnt_en;
+	chan->config.vflip_en = cfg->vflip_en;
+
 	if (cfg->park)
 		chan->config.park_frm = cfg->park_frm;
 	else
@@ -2428,6 +2443,13 @@ static int xilinx_dma_chan_probe(struct xilinx_dma_device *xdev,
 		chan->direction = DMA_DEV_TO_MEM;
 		chan->id = chan_id;
 		chan->tdest = chan_id - xdev->nr_channels;
+		chan->has_vflip = of_property_read_bool(node,
+					"xlnx,enable-vert-flip");
+		if (chan->has_vflip) {
+			chan->config.vflip_en = dma_read(chan,
+				XILINX_VDMA_REG_ENABLE_VERTICAL_FLIP) &
+				XILINX_VDMA_ENABLE_VERTICAL_FLIP;
+		}

 		chan->ctrl_offset = XILINX_DMA_S2MM_CTRL_OFFSET;
 		if (xdev->dma_config->dmatype == XDMA_TYPE_VDMA) {
...
@@ -27,6 +27,7 @@
 * @delay: Delay counter
 * @reset: Reset Channel
 * @ext_fsync: External Frame Sync source
+ * @vflip_en: Vertical Flip enable
 */
struct xilinx_vdma_config {
 	int frm_dly;
@@ -39,6 +40,7 @@ struct xilinx_vdma_config {
 	int delay;
 	int reset;
 	int ext_fsync;
+	bool vflip_en;
 };

int xilinx_vdma_channel_set_config(struct dma_chan *dchan,
...
@@ -415,7 +415,9 @@ enum dma_residue_granularity {
 *	     each type, the dma controller should set BIT(<TYPE>) and same
 *	     should be checked by controller as well
 * @max_burst: max burst capability per-transfer
- * @cmd_pause: true, if pause and thereby resume is supported
+ * @cmd_pause: true, if pause is supported (i.e. for reading residue or
+ *	       for resume later)
+ * @cmd_resume: true, if resume is supported
 * @cmd_terminate: true, if terminate cmd is supported
 * @residue_granularity: granularity of the reported transfer residue
 * @descriptor_reuse: if a descriptor can be reused by client and
@@ -427,6 +429,7 @@ struct dma_slave_caps {
 	u32 directions;
 	u32 max_burst;
 	bool cmd_pause;
+	bool cmd_resume;
 	bool cmd_terminate;
 	enum dma_residue_granularity residue_granularity;
 	bool descriptor_reuse;
@@ -1403,6 +1406,7 @@ static inline int dmaengine_desc_free(struct dma_async_tx_descriptor *desc)
/* --- DMA device --- */

int dma_async_device_register(struct dma_device *device);
+int dmaenginem_async_device_register(struct dma_device *device);
void dma_async_device_unregister(struct dma_device *device);
void dma_run_dependencies(struct dma_async_tx_descriptor *tx);
struct dma_chan *dma_get_slave_channel(struct dma_chan *chan);
...
@@ -147,7 +147,7 @@ static int dmaengine_pcm_set_runtime_hwparams(struct snd_pcm_substream *substream
 	ret = dma_get_slave_caps(chan, &dma_caps);
 	if (ret == 0) {
-		if (dma_caps.cmd_pause)
+		if (dma_caps.cmd_pause && dma_caps.cmd_resume)
 			hw.info |= SNDRV_PCM_INFO_PAUSE | SNDRV_PCM_INFO_RESUME;
 		if (dma_caps.residue_granularity <= DMA_RESIDUE_GRANULARITY_SEGMENT)
 			hw.info |= SNDRV_PCM_INFO_BATCH;
...