Commit a5255bc3 authored by Linus Torvalds

Merge tag 'dmaengine-5.5-rc1' of git://git.infradead.org/users/vkoul/slave-dma

Pull dmaengine updates from Vinod Koul:
 "Here are the changes this time around, couple of new drivers and
  updates to few more:

   - New drivers for SiFive PDMA, Socionext Milbeaut HDMAC and XDMAC,
     Freescale dpaa2 qDMA

   - Support for X1000 in JZ4780

   - Xilinx dma updates and support for Xilinx AXI MCDMA controller

   - New bindings for rcar R8A774B1

   - Minor updates to dw, dma-jz4780, ti-edma, sprd drivers"

* tag 'dmaengine-5.5-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (61 commits)
  dmaengine: Fix Kconfig indentation
  dmaengine: sf-pdma: move macro to header file
  dmaengine: sf-pdma: replace /** with /* for non-function comment
  dmaengine: ti: edma: fix missed failure handling
  dmaengine: mmp_pdma: add missed of_dma_controller_free
  dmaengine: mmp_tdma: add missed of_dma_controller_free
  dmaengine: sprd: Add wrap address support for link-list mode
  MAINTAINERS: Add Green as SiFive PDMA driver maintainer
  dmaengine: sf-pdma: add platform DMA support for HiFive Unleashed A00
  dt-bindings: dmaengine: sf-pdma: add bindings for SiFive PDMA
  dmaengine: zx: remove: removed dmam_pool_destroy
  dmaengine: mediatek: hsdma_probe: fixed a memory leak when devm_request_irq fails
  dmaengine: iop-adma: clean up an indentation issue
  dmaengine: milbeaut-xdmac: remove redundant error log
  dmaengine: milbeaut-hdmac: remove redundant error log
  dmaengine: dma-jz4780: add missed clk_disable_unprepare in remove
  dmaengine: JZ4780: Add support for the X1000.
  dt-bindings: dmaengine: Add X1000 bindings.
  dmaengine: xilinx_dma: Add Xilinx AXI MCDMA Engine driver support
  dmaengine: xilinx_dma: Extend dma_config struct to store irq routine handle
  ...
parents 596cf45c 67805a4b
......@@ -25,11 +25,18 @@ properties:
Used to provide DMA controller specific information.
dma-channel-mask:
$ref: /schemas/types.yaml#/definitions/uint32
description:
Bitmask of available DMA channels in ascending order that are
not reserved by firmware and are available to the kernel, i.e.
the first channel corresponds to the LSB.
The first item in the array is for channels 0-31, the second is for
channels 32-63, etc.
allOf:
- $ref: /schemas/types.yaml#/definitions/uint32-array
items:
minItems: 1
# Should be enough
maxItems: 255
dma-channels:
$ref: /schemas/types.yaml#/definitions/uint32
......
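A minimal consumer-side sketch of the dma-channel-mask property described in the hunk above, mirroring the ti/edma.c change later in this pull; the function name and the driver state (channels_mask, num_channels) are assumptions for illustration:

#include <linux/bitmap.h>
#include <linux/kernel.h>
#include <linux/of.h>

static int parse_channel_mask(struct device_node *np,
			      unsigned long *channels_mask,
			      unsigned int num_channels)
{
	u32 max_words = DIV_ROUND_UP(num_channels, 32);
	int ret;

	/* Default: every channel is usable by the kernel. */
	bitmap_fill(channels_mask, num_channels);

	/* Accept 1..max_words u32 items; LSB of item 0 = channel 0. */
	ret = of_property_read_variable_u32_array(np, "dma-channel-mask",
						  (u32 *)channels_mask,
						  1, max_words);
	if (ret < 0 && ret != -EINVAL)	/* -EINVAL: property absent */
		return ret;

	return 0;
}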
......@@ -7,10 +7,11 @@ Required properties:
* ingenic,jz4725b-dma
* ingenic,jz4770-dma
* ingenic,jz4780-dma
* ingenic,x1000-dma
- reg: Should contain the DMA channel registers location and length, followed
by the DMA controller registers location and length.
- interrupts: Should contain the interrupt specifier of the DMA controller.
- clocks: Should contain a clock specifier for the JZ4780 PDMA clock.
- clocks: Should contain a clock specifier for the JZ4780/X1000 PDMA clock.
- #dma-cells: Must be <2>. Number of integer cells in the dmas property of
DMA clients (see below).
......
* Milbeaut AHB DMA Controller
The Milbeaut AHB DMA controller has the following transfer capabilities:
- device-to-memory transfer
- memory-to-device transfer
Required property:
- compatible: Should be "socionext,milbeaut-m10v-hdmac"
- reg: Should contain DMA registers location and length.
- interrupts: Should contain all of the per-channel DMA interrupts.
Number of channels is configurable - 2, 4 or 8, so
the number of interrupts specified should be {2,4,8}.
- #dma-cells: Should be 1. Specify the ID of the slave.
- clocks: Phandle to the clock used by the HDMAC module.
Example:
hdmac1: dma-controller@1e110000 {
compatible = "socionext,milbeaut-m10v-hdmac";
reg = <0x1e110000 0x10000>;
interrupts = <0 132 4>,
<0 133 4>,
<0 134 4>,
<0 135 4>,
<0 136 4>,
<0 137 4>,
<0 138 4>,
<0 139 4>;
#dma-cells = <1>;
clocks = <&dummy_clk>;
};
* Milbeaut AXI DMA Controller
The Milbeaut AXI DMA controller has only memory-to-memory transfer capability.
* DMA controller
Required property:
- compatible: Should be "socionext,milbeaut-m10v-xdmac"
- reg: Should contain DMA registers location and length.
- interrupts: Should contain all of the per-channel DMA interrupts.
Number of channels is configurable - 2, 4 or 8, so
the number of interrupts specified should be {2,4,8}.
- #dma-cells: Should be 1.
Example:
xdmac0: dma-controller@1c250000 {
compatible = "socionext,milbeaut-m10v-xdmac";
reg = <0x1c250000 0x1000>;
interrupts = <0 17 0x4>,
<0 18 0x4>,
<0 19 0x4>,
<0 20 0x4>;
#dma-cells = <1>;
};
......@@ -21,6 +21,7 @@ Required Properties:
- "renesas,dmac-r8a7745" (RZ/G1E)
- "renesas,dmac-r8a77470" (RZ/G1C)
- "renesas,dmac-r8a774a1" (RZ/G2M)
- "renesas,dmac-r8a774b1" (RZ/G2N)
- "renesas,dmac-r8a774c0" (RZ/G2E)
- "renesas,dmac-r8a7790" (R-Car H2)
- "renesas,dmac-r8a7791" (R-Car M2-W)
......
# SPDX-License-Identifier: GPL-2.0
%YAML 1.2
---
$id: http://devicetree.org/schemas/dma/sifive,fu540-c000-pdma.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: SiFive Unleashed Rev C000 Platform DMA
maintainers:
- Green Wan <green.wan@sifive.com>
- Palmer Dabbelt <palmer@sifive.com>
- Paul Walmsley <paul.walmsley@sifive.com>
description: |
Platform DMA is the DMA engine of the SiFive Unleashed. It supports
4 channels; each channel has 2 interrupts, one for DMA done and the
other for DMA error.
On different SoCs the DMA may be attached to different IRQ lines, so
the DT file needs to be adjusted to match. For the technical
documentation, see:
https://static.dev.sifive.com/FU540-C000-v1.0.pdf
properties:
compatible:
items:
- const: sifive,fu540-c000-pdma
reg:
maxItems: 1
interrupts:
minItems: 1
maxItems: 8
'#dma-cells':
const: 1
required:
- compatible
- reg
- interrupts
- '#dma-cells'
examples:
- |
dma@3000000 {
compatible = "sifive,fu540-c000-pdma";
reg = <0x0 0x3000000 0x0 0x8000>;
interrupts = <23 24 25 26 27 28 29 30>;
#dma-cells = <1>;
};
...
......@@ -42,6 +42,11 @@ Optional properties:
- ti,edma-reserved-slot-ranges: PaRAM slot ranges which should not be used by
the driver; they are allocated for use by, for example, the
DSP. See example.
- dma-channel-mask: Mask of usable channels.
A single uint32 for EDMA with 32 channels, an array of two uint32 for
EDMA with 64 channels. See the example and
Documentation/devicetree/bindings/dma/dma-common.yaml
------------------------------------------------------------------------------
eDMA3 Transfer Controller
......@@ -91,6 +96,9 @@ edma: edma@49000000 {
ti,edma-memcpy-channels = <20 21>;
/* The following PaRAM slots are reserved: 35-44 and 100-109 */
ti,edma-reserved-slot-ranges = <35 10>, <100 10>;
/* The following channels are reserved: 35-44 */
dma-channel-mask = <0xffffffff /* Channel 0-31 */
0xffffe007>; /* Channel 32-63 */
};
edma_tptc0: tptc@49800000 {
......
......@@ -11,9 +11,16 @@ is to receive from the device.
Xilinx AXI CDMA engine, it does transfers between memory-mapped source
address and a memory-mapped destination address.
Xilinx AXI MCDMA engine, it does transfer between memory and AXI4 stream
target devices. It can be configured to have up to 16 independent transmit
and receive channels.
Required properties:
- compatible: Should be "xlnx,axi-vdma-1.00.a" or "xlnx,axi-dma-1.00.a" or
"xlnx,axi-cdma-1.00.a""
- compatible: Should be one of-
"xlnx,axi-vdma-1.00.a"
"xlnx,axi-dma-1.00.a"
"xlnx,axi-cdma-1.00.a"
"xlnx,axi-mcdma-1.00.a"
- #dma-cells: Should be <1>, see "dmas" property below
- reg: Should contain VDMA registers location and length.
- xlnx,addrwidth: Should be the vdma addressing size in bits (e.g. 32 bits).
......@@ -29,7 +36,7 @@ Required properties:
"m_axis_mm2s_aclk", "s_axis_s2mm_aclk"
For CDMA:
Required elements: "s_axi_lite_aclk", "m_axi_aclk"
FOR AXIDMA:
For AXIDMA and MCDMA:
Required elements: "s_axi_lite_aclk"
Optional elements: "m_axi_mm2s_aclk", "m_axi_s2mm_aclk",
"m_axi_sg_aclk"
......@@ -37,12 +44,11 @@ Required properties:
Required properties for VDMA:
- xlnx,num-fstores: Should be the number of framebuffers as configured in h/w.
Optional properties for AXI DMA:
Optional properties for AXI DMA and MCDMA:
- xlnx,sg-length-width: Should be set to the width in bits of the length
register as configured in h/w. Takes values {8...26}. If the property
is missing or invalid then the default value 23 is used. This is the
maximum value that is supported by all IP versions.
- xlnx,mcdma: Tells whether configured for multi-channel mode in the hardware.
Optional properties for VDMA:
- xlnx,flush-fsync: Tells which channel to Flush on Frame sync.
It takes following values:
......@@ -55,8 +61,8 @@ Required child node properties:
For VDMA: It should be either "xlnx,axi-vdma-mm2s-channel" or
"xlnx,axi-vdma-s2mm-channel".
For CDMA: It should be "xlnx,axi-cdma-channel".
For AXIDMA: It should be either "xlnx,axi-dma-mm2s-channel" or
"xlnx,axi-dma-s2mm-channel".
For AXIDMA and MCDMA: It should be either "xlnx,axi-dma-mm2s-channel"
or "xlnx,axi-dma-s2mm-channel".
- interrupts: Should contain per channel VDMA interrupts.
- xlnx,datawidth: Should contain the stream data width, take values
{32,64...1024}.
......@@ -69,8 +75,8 @@ Optional child node properties for VDMA:
enabled/disabled in hardware.
- xlnx,enable-vert-flip: Tells vertical flip is
enabled/disabled in hardware(S2MM path).
Optional child node properties for AXI DMA:
-dma-channels: Number of dma channels in child node.
Optional child node properties for MCDMA:
- dma-channels: Number of dma channels in child node.
Example:
++++++++
......
......@@ -14953,6 +14953,12 @@ F: drivers/media/usb/siano/
F: drivers/media/usb/siano/
F: drivers/media/mmc/siano/
SIFIVE PDMA DRIVER
M: Green Wan <green.wan@sifive.com>
S: Maintained
F: drivers/dma/sf-pdma/
F: Documentation/devicetree/bindings/dma/sifive,fu540-c000-pdma.yaml
SIFIVE DRIVERS
M: Palmer Dabbelt <palmer@dabbelt.com>
M: Paul Walmsley <paul.walmsley@sifive.com>
......
......@@ -342,6 +342,26 @@ config MCF_EDMA
minimal intervention from a host processor.
This module can be found on Freescale ColdFire mcf5441x SoCs.
config MILBEAUT_HDMAC
tristate "Milbeaut AHB DMA support"
depends on ARCH_MILBEAUT || COMPILE_TEST
depends on OF
select DMA_ENGINE
select DMA_VIRTUAL_CHANNELS
help
Say yes here to support the Socionext Milbeaut
HDMAC device.
config MILBEAUT_XDMAC
tristate "Milbeaut AXI DMA support"
depends on ARCH_MILBEAUT || COMPILE_TEST
depends on OF
select DMA_ENGINE
select DMA_VIRTUAL_CHANNELS
help
Say yes here to support the Socionext Milbeaut
XDMAC device.
config MMP_PDMA
bool "MMP PDMA support"
depends on ARCH_MMP || ARCH_PXA || COMPILE_TEST
......@@ -635,6 +655,10 @@ config XILINX_DMA
destination address.
AXI DMA engine provides high-bandwidth one dimensional direct
memory access between memory and AXI4-Stream target peripherals.
AXI MCDMA engine provides high-bandwidth direct memory access
between memory and AXI4-Stream target peripherals. It provides
a scatter-gather interface with independent configuration for
multiple channels.
config XILINX_ZYNQMP_DMA
tristate "Xilinx ZynqMP DMA Engine"
......@@ -665,10 +689,14 @@ source "drivers/dma/dw-edma/Kconfig"
source "drivers/dma/hsu/Kconfig"
source "drivers/dma/sf-pdma/Kconfig"
source "drivers/dma/sh/Kconfig"
source "drivers/dma/ti/Kconfig"
source "drivers/dma/fsl-dpaa2-qdma/Kconfig"
# clients
comment "DMA Clients"
depends on DMA_ENGINE
......
......@@ -45,6 +45,8 @@ obj-$(CONFIG_INTEL_IOP_ADMA) += iop-adma.o
obj-$(CONFIG_INTEL_MIC_X100_DMA) += mic_x100_dma.o
obj-$(CONFIG_K3_DMA) += k3dma.o
obj-$(CONFIG_LPC18XX_DMAMUX) += lpc18xx-dmamux.o
obj-$(CONFIG_MILBEAUT_HDMAC) += milbeaut-hdmac.o
obj-$(CONFIG_MILBEAUT_XDMAC) += milbeaut-xdmac.o
obj-$(CONFIG_MMP_PDMA) += mmp_pdma.o
obj-$(CONFIG_MMP_TDMA) += mmp_tdma.o
obj-$(CONFIG_MOXART_DMA) += moxart-dma.o
......@@ -60,6 +62,7 @@ obj-$(CONFIG_PL330_DMA) += pl330.o
obj-$(CONFIG_PPC_BESTCOMM) += bestcomm/
obj-$(CONFIG_PXA_DMA) += pxa_dma.o
obj-$(CONFIG_RENESAS_DMA) += sh/
obj-$(CONFIG_SF_PDMA) += sf-pdma/
obj-$(CONFIG_SIRF_DMA) += sirf-dma.o
obj-$(CONFIG_STE_DMA40) += ste_dma40.o ste_dma40_ll.o
obj-$(CONFIG_STM32_DMA) += stm32-dma.o
......@@ -75,6 +78,7 @@ obj-$(CONFIG_UNIPHIER_MDMAC) += uniphier-mdmac.o
obj-$(CONFIG_XGENE_DMA) += xgene-dma.o
obj-$(CONFIG_ZX_DMA) += zx_dma.o
obj-$(CONFIG_ST_FDMA) += st_fdma.o
obj-$(CONFIG_FSL_DPAA2_QDMA) += fsl-dpaa2-qdma/
obj-y += mediatek/
obj-y += qcom/
......
......@@ -1957,21 +1957,16 @@ static int atmel_xdmac_resume(struct device *dev)
static int at_xdmac_probe(struct platform_device *pdev)
{
struct resource *res;
struct at_xdmac *atxdmac;
int irq, size, nr_channels, i, ret;
void __iomem *base;
u32 reg;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!res)
return -EINVAL;
irq = platform_get_irq(pdev, 0);
if (irq < 0)
return irq;
base = devm_ioremap_resource(&pdev->dev, res);
base = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(base))
return PTR_ERR(base);
......
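The at_xdmac hunk above is the first of many identical conversions in this pull (jz4780, k3-dma, mtk-cqdma, mtk-uart-apdma, owl, sprd, uniphier, zx, rcar). A minimal sketch of the pattern with a hypothetical driver: devm_platform_ioremap_resource() folds platform_get_resource() plus devm_ioremap_resource() into one call, which is why the explicit resource lookup and its NULL check can be dropped.

#include <linux/platform_device.h>

static int example_probe(struct platform_device *pdev)	/* hypothetical */
{
	void __iomem *base;

	/* Look up IORESOURCE_MEM index 0 and ioremap it, devm-managed. */
	base = devm_platform_ioremap_resource(pdev, 0);
	if (IS_ERR(base))
		return PTR_ERR(base);

	return 0;
}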
......@@ -858,13 +858,7 @@ static int jz4780_dma_probe(struct platform_device *pdev)
jzdma->soc_data = soc_data;
platform_set_drvdata(pdev, jzdma);
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!res) {
dev_err(dev, "failed to get I/O memory\n");
return -EINVAL;
}
jzdma->chn_base = devm_ioremap_resource(dev, res);
jzdma->chn_base = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(jzdma->chn_base))
return PTR_ERR(jzdma->chn_base);
......@@ -987,6 +981,7 @@ static int jz4780_dma_remove(struct platform_device *pdev)
of_dma_controller_free(pdev->dev.of_node);
clk_disable_unprepare(jzdma->clk);
free_irq(jzdma->irq, jzdma);
for (i = 0; i < jzdma->soc_data->nb_channels; i++)
......@@ -1019,11 +1014,18 @@ static const struct jz4780_dma_soc_data jz4780_dma_soc_data = {
.flags = JZ_SOC_DATA_ALLOW_LEGACY_DT | JZ_SOC_DATA_PROGRAMMABLE_DMA,
};
static const struct jz4780_dma_soc_data x1000_dma_soc_data = {
.nb_channels = 8,
.transfer_ord_max = 7,
.flags = JZ_SOC_DATA_PROGRAMMABLE_DMA,
};
static const struct of_device_id jz4780_dma_dt_match[] = {
{ .compatible = "ingenic,jz4740-dma", .data = &jz4740_dma_soc_data },
{ .compatible = "ingenic,jz4725b-dma", .data = &jz4725b_dma_soc_data },
{ .compatible = "ingenic,jz4770-dma", .data = &jz4770_dma_soc_data },
{ .compatible = "ingenic,jz4780-dma", .data = &jz4780_dma_soc_data },
{ .compatible = "ingenic,x1000-dma", .data = &x1000_dma_soc_data },
{},
};
MODULE_DEVICE_TABLE(of, jz4780_dma_dt_match);
......
......@@ -66,7 +66,7 @@ static int dw_probe(struct platform_device *pdev)
data->chip = chip;
chip->clk = devm_clk_get(chip->dev, "hclk");
chip->clk = devm_clk_get_optional(chip->dev, "hclk");
if (IS_ERR(chip->clk))
return PTR_ERR(chip->clk);
err = clk_prepare_enable(chip->clk);
......
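A note on the dw hunk above, as a minimal annotated sketch: devm_clk_get_optional() returns NULL rather than an error when the clock is not described, and the common clk API treats a NULL clk as a no-op, so the rest of probe needs no special casing. chip and err are the driver's own variables.

	chip->clk = devm_clk_get_optional(chip->dev, "hclk");
	if (IS_ERR(chip->clk))
		return PTR_ERR(chip->clk);	/* real errors, e.g. -EPROBE_DEFER */

	err = clk_prepare_enable(chip->clk);	/* a NULL clk returns 0 here */
	if (err)
		return err;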
menuconfig FSL_DPAA2_QDMA
tristate "NXP DPAA2 QDMA"
depends on ARM64
depends on FSL_MC_BUS && FSL_MC_DPIO
select DMA_ENGINE
select DMA_VIRTUAL_CHANNELS
help
NXP Data Path Acceleration Architecture 2 QDMA driver,
using the NXP MC bus driver.
# SPDX-License-Identifier: GPL-2.0
# Makefile for the NXP DPAA2 qDMA controllers
obj-$(CONFIG_FSL_DPAA2_QDMA) += dpaa2-qdma.o dpdmai.o
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright 2019 NXP */
#ifndef __DPAA2_QDMA_H
#define __DPAA2_QDMA_H
#define DPAA2_QDMA_STORE_SIZE 16
#define NUM_CH 8
struct dpaa2_qdma_sd_d {
u32 rsv:32;
union {
struct {
u32 ssd:12; /* source stride distance */
u32 sss:12; /* source stride size */
u32 rsv1:8;
} sdf;
struct {
u32 dsd:12; /* Destination stride distance */
u32 dss:12; /* Destination stride size */
u32 rsv2:8;
} ddf;
} df;
u32 rbpcmd; /* Route-by-port command */
u32 cmd;
} __attribute__((__packed__));
/* Source descriptor command read transaction type for RBP=0: */
/* coherent copy of cacheable memory */
#define QDMA_SD_CMD_RDTTYPE_COHERENT (0xb << 28)
/* Destination descriptor command write transaction type for RBP=0: */
/* coherent copy of cacheable memory */
#define QDMA_DD_CMD_WRTTYPE_COHERENT (0x6 << 28)
#define LX2160_QDMA_DD_CMD_WRTTYPE_COHERENT (0xb << 28)
#define QMAN_FD_FMT_ENABLE BIT(0) /* frame list table enable */
#define QMAN_FD_BMT_ENABLE BIT(15) /* bypass memory translation */
#define QMAN_FD_BMT_DISABLE (0) /* bypass memory translation */
#define QMAN_FD_SL_DISABLE (0) /* short length disabled */
#define QMAN_FD_SL_ENABLE BIT(14) /* short length enabled */
#define QDMA_FINAL_BIT_DISABLE (0) /* final bit disable */
#define QDMA_FINAL_BIT_ENABLE BIT(31) /* final bit enable */
#define QDMA_FD_SHORT_FORMAT BIT(11) /* short format */
#define QDMA_FD_LONG_FORMAT (0) /* long format */
#define QDMA_SER_DISABLE (8) /* no notification */
#define QDMA_SER_CTX BIT(8) /* notification by FQD_CTX[fqid] */
#define QDMA_SER_DEST (2 << 8) /* notification by destination desc */
#define QDMA_SER_BOTH (3 << 8) /* source and dest notification */
#define QDMA_FD_SPF_ENALBE BIT(30) /* source prefetch enable */
#define QMAN_FD_VA_ENABLE BIT(14) /* Address used is virtual address */
#define QMAN_FD_VA_DISABLE (0) /* Address used is a real address */
/* Flow Context: 49bit physical address */
#define QMAN_FD_CBMT_ENABLE BIT(15)
#define QMAN_FD_CBMT_DISABLE (0) /* Flow Context: 64bit virtual address */
#define QMAN_FD_SC_DISABLE (0) /* stashing control */
#define QDMA_FL_FMT_SBF (0x0) /* Single buffer frame */
#define QDMA_FL_FMT_SGE (0x2) /* Scatter gather frame */
#define QDMA_FL_BMT_ENABLE BIT(15) /* enable bypass memory translation */
#define QDMA_FL_BMT_DISABLE (0x0) /* disable bypass memory translation */
#define QDMA_FL_SL_LONG (0x0) /* long length */
#define QDMA_FL_SL_SHORT (0x1) /* short length */
#define QDMA_FL_F (0x1) /* last frame list bit */
/* Description of Frame list table structure */
struct dpaa2_qdma_chan {
struct dpaa2_qdma_engine *qdma;
struct virt_dma_chan vchan;
struct virt_dma_desc vdesc;
enum dma_status status;
u32 fqid;
/* spinlock used by dpaa2 qdma driver */
spinlock_t queue_lock;
struct dma_pool *fd_pool;
struct dma_pool *fl_pool;
struct dma_pool *sdd_pool;
struct list_head comp_used;
struct list_head comp_free;
};
struct dpaa2_qdma_comp {
dma_addr_t fd_bus_addr;
dma_addr_t fl_bus_addr;
dma_addr_t desc_bus_addr;
struct dpaa2_fd *fd_virt_addr;
struct dpaa2_fl_entry *fl_virt_addr;
struct dpaa2_qdma_sd_d *desc_virt_addr;
struct dpaa2_qdma_chan *qchan;
struct virt_dma_desc vdesc;
struct list_head list;
};
struct dpaa2_qdma_engine {
struct dma_device dma_dev;
u32 n_chans;
struct dpaa2_qdma_chan chans[NUM_CH];
int qdma_wrtype_fixup;
int desc_allocated;
struct dpaa2_qdma_priv *priv;
};
/*
* dpaa2_qdma_priv - driver private data
*/
struct dpaa2_qdma_priv {
int dpqdma_id;
struct iommu_domain *iommu_domain;
struct dpdmai_attr dpdmai_attr;
struct device *dev;
struct fsl_mc_io *mc_io;
struct fsl_mc_device *dpdmai_dev;
u8 num_pairs;
struct dpaa2_qdma_engine *dpaa2_qdma;
struct dpaa2_qdma_priv_per_prio *ppriv;
struct dpdmai_rx_queue_attr rx_queue_attr[DPDMAI_PRIO_NUM];
u32 tx_fqid[DPDMAI_PRIO_NUM];
};
struct dpaa2_qdma_priv_per_prio {
int req_fqid;
int rsp_fqid;
int prio;
struct dpaa2_io_store *store;
struct dpaa2_io_notification_ctx nctx;
struct dpaa2_qdma_priv *priv;
};
static struct soc_device_attribute soc_fixup_tuning[] = {
{ .family = "QorIQ LX2160A"},
{ },
};
/* FD pool size: one FD + 3 Frame list + 2 source/destination descriptor */
#define FD_POOL_SIZE (sizeof(struct dpaa2_fd) + \
sizeof(struct dpaa2_fl_entry) * 3 + \
sizeof(struct dpaa2_qdma_sd_d) * 2)
static void dpaa2_dpdmai_free_channels(struct dpaa2_qdma_engine *dpaa2_qdma);
static void dpaa2_dpdmai_free_comp(struct dpaa2_qdma_chan *qchan,
struct list_head *head);
#endif /* __DPAA2_QDMA_H */
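A minimal sketch of how the per-channel pools declared above might be created from FD_POOL_SIZE; the function name and the alignment choice are assumptions, not taken from the driver:

static int example_alloc_fd_pool(struct device *dev,
				 struct dpaa2_qdma_chan *qchan)	/* hypothetical */
{
	/* One pool entry holds an FD, 3 frame-list entries and 2 SDDs. */
	qchan->fd_pool = dma_pool_create("fd_pool", dev, FD_POOL_SIZE,
					 sizeof(struct dpaa2_fd), 0);
	if (!qchan->fd_pool)
		return -ENOMEM;
	return 0;
}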
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright 2019 NXP */
#ifndef __FSL_DPDMAI_H
#define __FSL_DPDMAI_H
/* DPDMAI Version */
#define DPDMAI_VER_MAJOR 2
#define DPDMAI_VER_MINOR 2
#define DPDMAI_CMD_BASE_VERSION 0
#define DPDMAI_CMD_ID_OFFSET 4
#define DPDMAI_CMDID_FORMAT(x) (((x) << DPDMAI_CMD_ID_OFFSET) | \
DPDMAI_CMD_BASE_VERSION)
/* Command IDs */
#define DPDMAI_CMDID_CLOSE DPDMAI_CMDID_FORMAT(0x800)
#define DPDMAI_CMDID_OPEN DPDMAI_CMDID_FORMAT(0x80E)
#define DPDMAI_CMDID_CREATE DPDMAI_CMDID_FORMAT(0x90E)
#define DPDMAI_CMDID_ENABLE DPDMAI_CMDID_FORMAT(0x002)
#define DPDMAI_CMDID_DISABLE DPDMAI_CMDID_FORMAT(0x003)
#define DPDMAI_CMDID_GET_ATTR DPDMAI_CMDID_FORMAT(0x004)
#define DPDMAI_CMDID_RESET DPDMAI_CMDID_FORMAT(0x005)
#define DPDMAI_CMDID_IS_ENABLED DPDMAI_CMDID_FORMAT(0x006)
#define DPDMAI_CMDID_SET_IRQ DPDMAI_CMDID_FORMAT(0x010)
#define DPDMAI_CMDID_GET_IRQ DPDMAI_CMDID_FORMAT(0x011)
#define DPDMAI_CMDID_SET_IRQ_ENABLE DPDMAI_CMDID_FORMAT(0x012)
#define DPDMAI_CMDID_GET_IRQ_ENABLE DPDMAI_CMDID_FORMAT(0x013)
#define DPDMAI_CMDID_SET_IRQ_MASK DPDMAI_CMDID_FORMAT(0x014)
#define DPDMAI_CMDID_GET_IRQ_MASK DPDMAI_CMDID_FORMAT(0x015)
#define DPDMAI_CMDID_GET_IRQ_STATUS DPDMAI_CMDID_FORMAT(0x016)
#define DPDMAI_CMDID_CLEAR_IRQ_STATUS DPDMAI_CMDID_FORMAT(0x017)
#define DPDMAI_CMDID_SET_RX_QUEUE DPDMAI_CMDID_FORMAT(0x1A0)
#define DPDMAI_CMDID_GET_RX_QUEUE DPDMAI_CMDID_FORMAT(0x1A1)
#define DPDMAI_CMDID_GET_TX_QUEUE DPDMAI_CMDID_FORMAT(0x1A2)
#define MC_CMD_HDR_TOKEN_O 32 /* Token field offset */
#define MC_CMD_HDR_TOKEN_S 16 /* Token field size */
#define MAKE_UMASK64(_width) \
((u64)((_width) < 64 ? ((u64)1 << (_width)) - 1 : (u64)-1))
/* Data Path DMA Interface API
* Contains initialization APIs and runtime control APIs for DPDMAI
*/
/**
* Maximum number of Tx/Rx priorities per DPDMAI object
*/
#define DPDMAI_PRIO_NUM 2
/* DPDMAI queue modification options */
/**
* Select to modify the user's context associated with the queue
*/
#define DPDMAI_QUEUE_OPT_USER_CTX 0x1
/**
* Select to modify the queue's destination
*/
#define DPDMAI_QUEUE_OPT_DEST 0x2
/**
* struct dpdmai_cfg - Structure representing DPDMAI configuration
* @priorities: Priorities for the DMA hardware processing; valid priorities are
* configured with values 1-8; the entry following last valid entry
* should be configured with 0
*/
struct dpdmai_cfg {
u8 priorities[DPDMAI_PRIO_NUM];
};
/**
* struct dpdmai_attr - Structure representing DPDMAI attributes
* @id: DPDMAI object ID
* @version: DPDMAI version
* @num_of_priorities: number of priorities
*/
struct dpdmai_attr {
int id;
/**
* struct version - DPDMAI version
* @major: DPDMAI major version
* @minor: DPDMAI minor version
*/
struct {
u16 major;
u16 minor;
} version;
u8 num_of_priorities;
};
/**
* enum dpdmai_dest - DPDMAI destination types
* @DPDMAI_DEST_NONE: Unassigned destination; The queue is set in parked mode
* and does not generate FQDAN notifications; user is expected to dequeue
* from the queue based on polling or other user-defined method
* @DPDMAI_DEST_DPIO: The queue is set in schedule mode and generates FQDAN
* notifications to the specified DPIO; user is expected to dequeue
* from the queue only after notification is received
* @DPDMAI_DEST_DPCON: The queue is set in schedule mode and does not generate
* FQDAN notifications, but is connected to the specified DPCON object;
* user is expected to dequeue from the DPCON channel
*/
enum dpdmai_dest {
DPDMAI_DEST_NONE = 0,
DPDMAI_DEST_DPIO = 1,
DPDMAI_DEST_DPCON = 2
};
/**
* struct dpdmai_dest_cfg - Structure representing DPDMAI destination parameters
* @dest_type: Destination type
* @dest_id: Either DPIO ID or DPCON ID, depending on the destination type
* @priority: Priority selection within the DPIO or DPCON channel; valid values
* are 0-1 or 0-7, depending on the number of priorities in that
* channel; not relevant for 'DPDMAI_DEST_NONE' option
*/
struct dpdmai_dest_cfg {
enum dpdmai_dest dest_type;
int dest_id;
u8 priority;
};
/**
* struct dpdmai_rx_queue_cfg - DPDMAI RX queue configuration
* @options: Flags representing the suggested modifications to the queue;
* Use any combination of 'DPDMAI_QUEUE_OPT_<X>' flags
* @user_ctx: User context value provided in the frame descriptor of each
* dequeued frame;
* valid only if 'DPDMAI_QUEUE_OPT_USER_CTX' is contained in 'options'
* @dest_cfg: Queue destination parameters;
* valid only if 'DPDMAI_QUEUE_OPT_DEST' is contained in 'options'
*/
struct dpdmai_rx_queue_cfg {
struct dpdmai_dest_cfg dest_cfg;
u32 options;
u64 user_ctx;
};
/**
* struct dpdmai_rx_queue_attr - Structure representing attributes of Rx queues
* @user_ctx: User context value provided in the frame descriptor of each
* dequeued frame
* @dest_cfg: Queue destination configuration
* @fqid: Virtual FQID value to be used for dequeue operations
*/
struct dpdmai_rx_queue_attr {
struct dpdmai_dest_cfg dest_cfg;
u64 user_ctx;
u32 fqid;
};
int dpdmai_open(struct fsl_mc_io *mc_io, u32 cmd_flags,
int dpdmai_id, u16 *token);
int dpdmai_close(struct fsl_mc_io *mc_io, u32 cmd_flags, u16 token);
int dpdmai_create(struct fsl_mc_io *mc_io, u32 cmd_flags,
const struct dpdmai_cfg *cfg, u16 *token);
int dpdmai_enable(struct fsl_mc_io *mc_io, u32 cmd_flags, u16 token);
int dpdmai_disable(struct fsl_mc_io *mc_io, u32 cmd_flags, u16 token);
int dpdmai_reset(struct fsl_mc_io *mc_io, u32 cmd_flags, u16 token);
int dpdmai_get_attributes(struct fsl_mc_io *mc_io, u32 cmd_flags,
u16 token, struct dpdmai_attr *attr);
int dpdmai_set_rx_queue(struct fsl_mc_io *mc_io, u32 cmd_flags, u16 token,
u8 priority, const struct dpdmai_rx_queue_cfg *cfg);
int dpdmai_get_rx_queue(struct fsl_mc_io *mc_io, u32 cmd_flags, u16 token,
u8 priority, struct dpdmai_rx_queue_attr *attr);
int dpdmai_get_tx_queue(struct fsl_mc_io *mc_io, u32 cmd_flags,
u16 token, u8 priority, u32 *fqid);
#endif /* __FSL_DPDMAI_H */
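A minimal sketch of the DPDMAI bring-up sequence implied by the API above (open, query attributes, configure the Rx queue, enable); error unwinding is compressed and the queue configuration is a placeholder:

static int example_dpdmai_setup(struct fsl_mc_io *mc_io, int dpdmai_id)	/* hypothetical */
{
	struct dpdmai_rx_queue_cfg rx_cfg = { .options = 0 };	/* no modifications */
	struct dpdmai_attr attr;
	u16 token;
	int err;

	err = dpdmai_open(mc_io, 0, dpdmai_id, &token);
	if (err)
		return err;

	err = dpdmai_get_attributes(mc_io, 0, token, &attr);
	if (err)
		goto out_close;

	/* Configure the Rx queue for priority 0; parked mode here. */
	err = dpdmai_set_rx_queue(mc_io, 0, token, 0, &rx_cfg);
	if (err)
		goto out_close;

	err = dpdmai_enable(mc_io, 0, token);

out_close:
	if (err)
		dpdmai_close(mc_io, 0, token);
	return err;
}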
......@@ -1155,6 +1155,9 @@ static int fsl_qdma_probe(struct platform_device *pdev)
return ret;
fsl_qdma->irq_base = platform_get_irq_byname(pdev, "qdma-queue0");
if (fsl_qdma->irq_base < 0)
return fsl_qdma->irq_base;
fsl_qdma->feature = of_property_read_bool(np, "big-endian");
INIT_LIST_HEAD(&fsl_qdma->dma_dev.channels);
......
......@@ -1359,9 +1359,11 @@ static int iop_adma_probe(struct platform_device *pdev)
iop_adma_device_clear_err_status(iop_chan);
for (i = 0; i < 3; i++) {
irq_handler_t handler[] = { iop_adma_eot_handler,
static const irq_handler_t handler[] = {
iop_adma_eot_handler,
iop_adma_eoc_handler,
iop_adma_err_handler };
iop_adma_err_handler
};
int irq = platform_get_irq(pdev, i);
if (irq < 0) {
ret = -ENXIO;
......
......@@ -835,13 +835,8 @@ static int k3_dma_probe(struct platform_device *op)
const struct k3dma_soc_data *soc_data;
struct k3_dma_dev *d;
const struct of_device_id *of_id;
struct resource *iores;
int i, ret, irq = 0;
iores = platform_get_resource(op, IORESOURCE_MEM, 0);
if (!iores)
return -EINVAL;
d = devm_kzalloc(&op->dev, sizeof(*d), GFP_KERNEL);
if (!d)
return -ENOMEM;
......@@ -850,7 +845,7 @@ static int k3_dma_probe(struct platform_device *op)
if (!soc_data)
return -EINVAL;
d->base = devm_ioremap_resource(&op->dev, iores);
d->base = devm_platform_ioremap_resource(op, 0);
if (IS_ERR(d->base))
return PTR_ERR(d->base);
......
......@@ -819,15 +819,7 @@ static int mtk_cqdma_probe(struct platform_device *pdev)
INIT_LIST_HEAD(&cqdma->pc[i]->queue);
spin_lock_init(&cqdma->pc[i]->lock);
refcount_set(&cqdma->pc[i]->refcnt, 0);
res = platform_get_resource(pdev, IORESOURCE_MEM, i);
if (!res) {
dev_err(&pdev->dev, "No mem resource for %s\n",
dev_name(&pdev->dev));
return -EINVAL;
}
cqdma->pc[i]->base = devm_ioremap_resource(&pdev->dev, res);
cqdma->pc[i]->base = devm_platform_ioremap_resource(pdev, i);
if (IS_ERR(cqdma->pc[i]->base))
return PTR_ERR(cqdma->pc[i]->base);
......
......@@ -997,7 +997,7 @@ static int mtk_hsdma_probe(struct platform_device *pdev)
if (err) {
dev_err(&pdev->dev,
"request_irq failed with err %d\n", err);
goto err_unregister;
goto err_free;
}
platform_set_drvdata(pdev, hsdma);
......@@ -1006,6 +1006,8 @@ static int mtk_hsdma_probe(struct platform_device *pdev)
return 0;
err_free:
of_dma_controller_free(pdev->dev.of_node);
err_unregister:
dma_async_device_unregister(dd);
......
......@@ -475,7 +475,6 @@ static int mtk_uart_apdma_probe(struct platform_device *pdev)
struct device_node *np = pdev->dev.of_node;
struct mtk_uart_apdmadev *mtkd;
int bit_mask = 32, rc;
struct resource *res;
struct mtk_chan *c;
unsigned int i;
......@@ -532,13 +531,7 @@ static int mtk_uart_apdma_probe(struct platform_device *pdev)
goto err_no_dma;
}
res = platform_get_resource(pdev, IORESOURCE_MEM, i);
if (!res) {
rc = -ENODEV;
goto err_no_dma;
}
c->base = devm_ioremap_resource(&pdev->dev, res);
c->base = devm_platform_ioremap_resource(pdev, i);
if (IS_ERR(c->base)) {
rc = PTR_ERR(c->base);
goto err_no_dma;
......
......@@ -945,6 +945,8 @@ static int mmp_pdma_remove(struct platform_device *op)
struct mmp_pdma_phy *phy;
int i, irq = 0, irq_num = 0;
if (op->dev.of_node)
of_dma_controller_free(op->dev.of_node);
for (i = 0; i < pdev->dma_channels; i++) {
if (platform_get_irq(op, i) > 0)
......
......@@ -544,6 +544,9 @@ static void mmp_tdma_issue_pending(struct dma_chan *chan)
static int mmp_tdma_remove(struct platform_device *pdev)
{
if (pdev->dev.of_node)
of_dma_controller_free(pdev->dev.of_node);
return 0;
}
......
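The two mmp hunks above restore a pairing worth spelling out; a minimal sketch, assuming a hypothetical driver that registered itself with of_dma_controller_register() in probe:

#include <linux/dmaengine.h>
#include <linux/of_dma.h>
#include <linux/platform_device.h>

static int example_remove(struct platform_device *pdev)	/* hypothetical */
{
	struct dma_device *dd = platform_get_drvdata(pdev);

	/* Undo of_dma_controller_register() before tearing down the engine. */
	if (pdev->dev.of_node)
		of_dma_controller_free(pdev->dev.of_node);
	dma_async_device_unregister(dd);
	return 0;
}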
......@@ -1045,18 +1045,13 @@ static int owl_dma_probe(struct platform_device *pdev)
{
struct device_node *np = pdev->dev.of_node;
struct owl_dma *od;
struct resource *res;
int ret, i, nr_channels, nr_requests;
od = devm_kzalloc(&pdev->dev, sizeof(*od), GFP_KERNEL);
if (!od)
return -ENOMEM;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!res)
return -EINVAL;
od->base = devm_ioremap_resource(&pdev->dev, res);
od->base = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(od->base))
return PTR_ERR(od->base);
......
config SF_PDMA
tristate "Sifive PDMA controller driver"
select DMA_ENGINE
select DMA_VIRTUAL_CHANNELS
help
Support the SiFive PDMA controller.
obj-$(CONFIG_SF_PDMA) += sf-pdma.o
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* SiFive FU540 Platform DMA driver
* Copyright (C) 2019 SiFive
*
* Based partially on:
* - drivers/dma/fsl-edma.c
* - drivers/dma/dw-edma/
* - drivers/dma/pxa-dma.c
*
* See the following sources for further documentation:
* - Chapter 12 "Platform DMA Engine (PDMA)" of
* SiFive FU540-C000 v1.0
* https://static.dev.sifive.com/FU540-C000-v1.0.pdf
*/
#ifndef _SF_PDMA_H
#define _SF_PDMA_H
#include <linux/dmaengine.h>
#include <linux/dma-direction.h>
#include "../dmaengine.h"
#include "../virt-dma.h"
#define PDMA_NR_CH 4
#if (PDMA_NR_CH != 4)
#error "Please define PDMA_NR_CH to 4"
#endif
#define PDMA_BASE_ADDR 0x3000000
#define PDMA_CHAN_OFFSET 0x1000
/* Register Offset */
#define PDMA_CTRL 0x000
#define PDMA_XFER_TYPE 0x004
#define PDMA_XFER_SIZE 0x008
#define PDMA_DST_ADDR 0x010
#define PDMA_SRC_ADDR 0x018
#define PDMA_ACT_TYPE 0x104 /* Read-only */
#define PDMA_REMAINING_BYTE 0x108 /* Read-only */
#define PDMA_CUR_DST_ADDR 0x110 /* Read-only */
#define PDMA_CUR_SRC_ADDR 0x118 /* Read-only */
/* CTRL */
#define PDMA_CLEAR_CTRL 0x0
#define PDMA_CLAIM_MASK GENMASK(0, 0)
#define PDMA_RUN_MASK GENMASK(1, 1)
#define PDMA_ENABLE_DONE_INT_MASK GENMASK(14, 14)
#define PDMA_ENABLE_ERR_INT_MASK GENMASK(15, 15)
#define PDMA_DONE_STATUS_MASK GENMASK(30, 30)
#define PDMA_ERR_STATUS_MASK GENMASK(31, 31)
/* Transfer Type */
#define PDMA_FULL_SPEED 0xFF000008
/* Error Recovery */
#define MAX_RETRY 1
#define SF_PDMA_REG_BASE(ch) (pdma->membase + (PDMA_CHAN_OFFSET * (ch)))
struct pdma_regs {
/* read-write regs */
void __iomem *ctrl; /* 4 bytes */
void __iomem *xfer_type; /* 4 bytes */
void __iomem *xfer_size; /* 8 bytes */
void __iomem *dst_addr; /* 8 bytes */
void __iomem *src_addr; /* 8 bytes */
/* read-only */
void __iomem *act_type; /* 4 bytes */
void __iomem *residue; /* 8 bytes */
void __iomem *cur_dst_addr; /* 8 bytes */
void __iomem *cur_src_addr; /* 8 bytes */
};
struct sf_pdma_desc {
u32 xfer_type;
u64 xfer_size;
u64 dst_addr;
u64 src_addr;
struct virt_dma_desc vdesc;
struct sf_pdma_chan *chan;
bool in_use;
enum dma_transfer_direction dirn;
struct dma_async_tx_descriptor *async_tx;
};
enum sf_pdma_pm_state {
RUNNING = 0,
SUSPENDED,
};
struct sf_pdma_chan {
struct virt_dma_chan vchan;
enum dma_status status;
enum sf_pdma_pm_state pm_state;
u32 slave_id;
struct sf_pdma *pdma;
struct sf_pdma_desc *desc;
struct dma_slave_config cfg;
u32 attr;
dma_addr_t dma_dev_addr;
u32 dma_dev_size;
struct tasklet_struct done_tasklet;
struct tasklet_struct err_tasklet;
struct pdma_regs regs;
spinlock_t lock; /* protect chan data */
bool xfer_err;
int txirq;
int errirq;
int retries;
};
struct sf_pdma {
struct dma_device dma_dev;
void __iomem *membase;
void __iomem *mappedbase;
u32 n_chans;
struct sf_pdma_chan chans[PDMA_NR_CH];
};
#endif /* _SF_PDMA_H */
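A minimal sketch of how the register map above resolves to per-channel pointers via SF_PDMA_REG_BASE() (the macro references the local pdma variable by name); the helper name is an assumption:

static void example_map_chan_regs(struct sf_pdma *pdma,
				  struct sf_pdma_chan *chan, int ch)	/* hypothetical */
{
	void __iomem *base = SF_PDMA_REG_BASE(ch);

	chan->regs.ctrl      = base + PDMA_CTRL;
	chan->regs.xfer_type = base + PDMA_XFER_TYPE;
	chan->regs.xfer_size = base + PDMA_XFER_SIZE;
	chan->regs.dst_addr  = base + PDMA_DST_ADDR;
	chan->regs.src_addr  = base + PDMA_SRC_ADDR;
	chan->regs.act_type  = base + PDMA_ACT_TYPE;	/* read-only from here */
	chan->regs.residue   = base + PDMA_REMAINING_BYTE;
	chan->regs.cur_dst_addr = base + PDMA_CUR_DST_ADDR;
	chan->regs.cur_src_addr = base + PDMA_CUR_SRC_ADDR;
}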
......@@ -203,19 +203,27 @@ struct rcar_dmac {
unsigned int n_channels;
struct rcar_dmac_chan *channels;
unsigned int channels_mask;
u32 channels_mask;
DECLARE_BITMAP(modules, 256);
};
#define to_rcar_dmac(d) container_of(d, struct rcar_dmac, engine)
/*
* struct rcar_dmac_of_data - This driver's OF data
* @chan_offset_base: DMAC channels base offset
* @chan_offset_stride: DMAC channels offset stride
*/
struct rcar_dmac_of_data {
u32 chan_offset_base;
u32 chan_offset_stride;
};
/* -----------------------------------------------------------------------------
* Registers
*/
#define RCAR_DMAC_CHAN_OFFSET(i) (0x8000 + 0x80 * (i))
#define RCAR_DMAISTA 0x0020
#define RCAR_DMASEC 0x0030
#define RCAR_DMAOR 0x0060
......@@ -1726,6 +1734,7 @@ static const struct dev_pm_ops rcar_dmac_pm = {
static int rcar_dmac_chan_probe(struct rcar_dmac *dmac,
struct rcar_dmac_chan *rchan,
const struct rcar_dmac_of_data *data,
unsigned int index)
{
struct platform_device *pdev = to_platform_device(dmac->dev);
......@@ -1735,7 +1744,8 @@ static int rcar_dmac_chan_probe(struct rcar_dmac *dmac,
int ret;
rchan->index = index;
rchan->iomem = dmac->iomem + RCAR_DMAC_CHAN_OFFSET(index);
rchan->iomem = dmac->iomem + data->chan_offset_base +
data->chan_offset_stride * index;
rchan->mid_rid = -EINVAL;
spin_lock_init(&rchan->lock);
......@@ -1800,7 +1810,15 @@ static int rcar_dmac_parse_of(struct device *dev, struct rcar_dmac *dmac)
return -EINVAL;
}
/*
* If the dma-channel-mask property cannot be read, the driver
* assumes that it can use all channels.
*/
dmac->channels_mask = GENMASK(dmac->n_channels - 1, 0);
of_property_read_u32(np, "dma-channel-mask", &dmac->channels_mask);
/* Clear any mask bits that refer to non-existent channels */
dmac->channels_mask &= GENMASK(dmac->n_channels - 1, 0);
return 0;
}
......@@ -1813,10 +1831,14 @@ static int rcar_dmac_probe(struct platform_device *pdev)
DMA_SLAVE_BUSWIDTH_32_BYTES | DMA_SLAVE_BUSWIDTH_64_BYTES;
struct dma_device *engine;
struct rcar_dmac *dmac;
struct resource *mem;
const struct rcar_dmac_of_data *data;
unsigned int i;
int ret;
data = of_device_get_match_data(&pdev->dev);
if (!data)
return -EINVAL;
dmac = devm_kzalloc(&pdev->dev, sizeof(*dmac), GFP_KERNEL);
if (!dmac)
return -ENOMEM;
......@@ -1848,8 +1870,7 @@ static int rcar_dmac_probe(struct platform_device *pdev)
return -ENOMEM;
/* Request resources. */
mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
dmac->iomem = devm_ioremap_resource(&pdev->dev, mem);
dmac->iomem = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(dmac->iomem))
return PTR_ERR(dmac->iomem);
......@@ -1901,7 +1922,7 @@ static int rcar_dmac_probe(struct platform_device *pdev)
if (!(dmac->channels_mask & BIT(i)))
continue;
ret = rcar_dmac_chan_probe(dmac, &dmac->channels[i], i);
ret = rcar_dmac_chan_probe(dmac, &dmac->channels[i], data, i);
if (ret < 0)
goto error;
}
......@@ -1948,8 +1969,16 @@ static void rcar_dmac_shutdown(struct platform_device *pdev)
rcar_dmac_stop_all_chan(dmac);
}
static const struct rcar_dmac_of_data rcar_dmac_data = {
.chan_offset_base = 0x8000,
.chan_offset_stride = 0x80,
};
static const struct of_device_id rcar_dmac_of_ids[] = {
{ .compatible = "renesas,rcar-dmac", },
{
.compatible = "renesas,rcar-dmac",
.data = &rcar_dmac_data,
},
{ /* Sentinel */ }
};
MODULE_DEVICE_TABLE(of, rcar_dmac_of_ids);
......
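With the channel offsets parameterized as in the rcar hunk above, a future SoC with a different register layout only needs a new of_data entry; a sketch with made-up values:

static const struct rcar_dmac_of_data hypothetical_dmac_data = {
	.chan_offset_base = 0x0,	/* channels follow the common registers */
	.chan_offset_stride = 0x1000,
};

/* plus a matching rcar_dmac_of_ids[] entry:
 * { .compatible = "renesas,hypothetical-dmac", .data = &hypothetical_dmac_data },
 */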
......@@ -99,6 +99,7 @@
/* DMA_CHN_WARP_* register definition */
#define SPRD_DMA_HIGH_ADDR_MASK GENMASK(31, 28)
#define SPRD_DMA_LOW_ADDR_MASK GENMASK(31, 0)
#define SPRD_DMA_WRAP_ADDR_MASK GENMASK(27, 0)
#define SPRD_DMA_HIGH_ADDR_OFFSET 4
/* SPRD_DMA_CHN_INTC register definition */
......@@ -118,6 +119,8 @@
#define SPRD_DMA_SWT_MODE_OFFSET 26
#define SPRD_DMA_REQ_MODE_OFFSET 24
#define SPRD_DMA_REQ_MODE_MASK GENMASK(1, 0)
#define SPRD_DMA_WRAP_SEL_DEST BIT(23)
#define SPRD_DMA_WRAP_EN BIT(22)
#define SPRD_DMA_FIX_SEL_OFFSET 21
#define SPRD_DMA_FIX_EN_OFFSET 20
#define SPRD_DMA_LLIST_END BIT(19)
......@@ -804,6 +807,8 @@ static int sprd_dma_fill_desc(struct dma_chan *chan,
temp |= req_mode << SPRD_DMA_REQ_MODE_OFFSET;
temp |= fix_mode << SPRD_DMA_FIX_SEL_OFFSET;
temp |= fix_en << SPRD_DMA_FIX_EN_OFFSET;
temp |= schan->linklist.wrap_addr ?
SPRD_DMA_WRAP_EN | SPRD_DMA_WRAP_SEL_DEST : 0;
temp |= slave_cfg->src_maxburst & SPRD_DMA_FRG_LEN_MASK;
hw->frg_len = temp;
......@@ -831,6 +836,12 @@ static int sprd_dma_fill_desc(struct dma_chan *chan,
hw->llist_ptr = lower_32_bits(llist_ptr);
hw->src_blk_step = (upper_32_bits(llist_ptr) << SPRD_DMA_LLIST_HIGH_SHIFT) &
SPRD_DMA_LLIST_HIGH_MASK;
if (schan->linklist.wrap_addr) {
hw->wrap_ptr |= schan->linklist.wrap_addr &
SPRD_DMA_WRAP_ADDR_MASK;
hw->wrap_to |= dst & SPRD_DMA_WRAP_ADDR_MASK;
}
} else {
hw->llist_ptr = 0;
hw->src_blk_step = 0;
......@@ -939,9 +950,11 @@ sprd_dma_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
schan->linklist.phy_addr = ll_cfg->phy_addr;
schan->linklist.virt_addr = ll_cfg->virt_addr;
schan->linklist.wrap_addr = ll_cfg->wrap_addr;
} else {
schan->linklist.phy_addr = 0;
schan->linklist.virt_addr = 0;
schan->linklist.wrap_addr = 0;
}
/*
......@@ -1080,7 +1093,6 @@ static int sprd_dma_probe(struct platform_device *pdev)
struct device_node *np = pdev->dev.of_node;
struct sprd_dma_dev *sdev;
struct sprd_dma_chn *dma_chn;
struct resource *res;
u32 chn_count;
int ret, i;
......@@ -1126,8 +1138,7 @@ static int sprd_dma_probe(struct platform_device *pdev)
dev_warn(&pdev->dev, "no interrupts for the dma controller\n");
}
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
sdev->glb_base = devm_ioremap_resource(&pdev->dev, res);
sdev->glb_base = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(sdev->glb_base))
return PTR_ERR(sdev->glb_base);
......
......@@ -260,6 +260,13 @@ struct edma_cc {
*/
unsigned long *slot_inuse;
/*
* For tracking channels reserved for use by the DSP.
* If a bit is cleared, the channel is allocated to the DSP and
* Linux must not touch it.
*/
unsigned long *channels_mask;
struct dma_device dma_slave;
struct dma_device *dma_memcpy;
struct edma_chan *slave_chans;
......@@ -716,6 +723,12 @@ static int edma_alloc_channel(struct edma_chan *echan,
struct edma_cc *ecc = echan->ecc;
int channel = EDMA_CHAN_SLOT(echan->ch_num);
if (!test_bit(echan->ch_num, ecc->channels_mask)) {
dev_err(ecc->dev, "Channel%d is reserved, can not be used!\n",
echan->ch_num);
return -EINVAL;
}
/* ensure access through shadow region 0 */
edma_or_array2(ecc, EDMA_DRAE, 0, EDMA_REG_ARRAY_INDEX(channel),
EDMA_CHANNEL_BIT(channel));
......@@ -2249,10 +2262,8 @@ static int edma_probe(struct platform_device *pdev)
{
struct edma_soc_info *info = pdev->dev.platform_data;
s8 (*queue_priority_mapping)[2];
int i, off;
const s16 (*rsv_slots)[2];
const s16 (*xbar_chans)[2];
int irq;
const s16 (*reserved)[2];
int i, irq;
char *irq_name;
struct resource *mem;
struct device_node *node = pdev->dev.of_node;
......@@ -2331,15 +2342,32 @@ static int edma_probe(struct platform_device *pdev)
if (!ecc->slot_inuse)
return -ENOMEM;
ecc->channels_mask = devm_kcalloc(dev,
BITS_TO_LONGS(ecc->num_channels),
sizeof(unsigned long), GFP_KERNEL);
if (!ecc->channels_mask)
return -ENOMEM;
/* Mark all channels available initially */
bitmap_fill(ecc->channels_mask, ecc->num_channels);
ecc->default_queue = info->default_queue;
if (info->rsv) {
/* Set the reserved slots in inuse list */
rsv_slots = info->rsv->rsv_slots;
if (rsv_slots) {
for (i = 0; rsv_slots[i][0] != -1; i++)
bitmap_set(ecc->slot_inuse, rsv_slots[i][0],
rsv_slots[i][1]);
reserved = info->rsv->rsv_slots;
if (reserved) {
for (i = 0; reserved[i][0] != -1; i++)
bitmap_set(ecc->slot_inuse, reserved[i][0],
reserved[i][1]);
}
/* Clear channels not usable for Linux */
reserved = info->rsv->rsv_chans;
if (reserved) {
for (i = 0; reserved[i][0] != -1; i++)
bitmap_clear(ecc->channels_mask, reserved[i][0],
reserved[i][1]);
}
}
......@@ -2349,14 +2377,6 @@ static int edma_probe(struct platform_device *pdev)
edma_write_slot(ecc, i, &dummy_paramset);
}
/* Clear the xbar mapped channels in unused list */
xbar_chans = info->xbar_chans;
if (xbar_chans) {
for (i = 0; xbar_chans[i][1] != -1; i++) {
off = xbar_chans[i][1];
}
}
irq = platform_get_irq_byname(pdev, "edma3_ccint");
if (irq < 0 && node)
irq = irq_of_parse_and_map(node, 0);
......@@ -2399,12 +2419,15 @@ static int edma_probe(struct platform_device *pdev)
if (!ecc->legacy_mode) {
int lowest_priority = 0;
unsigned int array_max;
struct of_phandle_args tc_args;
ecc->tc_list = devm_kcalloc(dev, ecc->num_tc,
sizeof(*ecc->tc_list), GFP_KERNEL);
if (!ecc->tc_list)
return -ENOMEM;
if (!ecc->tc_list) {
ret = -ENOMEM;
goto err_reg1;
}
for (i = 0;; i++) {
ret = of_parse_phandle_with_fixed_args(node, "ti,tptcs",
......@@ -2420,6 +2443,18 @@ static int edma_probe(struct platform_device *pdev)
info->default_queue = i;
}
}
/* See if we have optional dma-channel-mask array */
array_max = DIV_ROUND_UP(ecc->num_channels, BITS_PER_TYPE(u32));
ret = of_property_read_variable_u32_array(node,
"dma-channel-mask",
(u32 *)ecc->channels_mask,
1, array_max);
if (ret > 0 && ret != array_max)
dev_warn(dev, "dma-channel-mask is not complete.\n");
else if (ret == -EOVERFLOW || ret == -ENODATA)
dev_warn(dev,
"dma-channel-mask is out of range or empty\n");
}
/* Event queue priority mapping */
......@@ -2437,6 +2472,10 @@ static int edma_probe(struct platform_device *pdev)
edma_dma_init(ecc, legacy_mode);
for (i = 0; i < ecc->num_channels; i++) {
/* Do not touch reserved channels */
if (!test_bit(i, ecc->channels_mask))
continue;
/* Assign all channels to the default queue */
edma_assign_channel_eventq(&ecc->slave_chans[i],
info->default_queue);
......
......@@ -382,7 +382,6 @@ static int uniphier_mdmac_probe(struct platform_device *pdev)
struct device *dev = &pdev->dev;
struct uniphier_mdmac_device *mdev;
struct dma_device *ddev;
struct resource *res;
int nr_chans, ret, i;
nr_chans = platform_irq_count(pdev);
......@@ -398,8 +397,7 @@ static int uniphier_mdmac_probe(struct platform_device *pdev)
if (!mdev)
return -ENOMEM;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
mdev->reg_base = devm_ioremap_resource(dev, res);
mdev->reg_base = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(mdev->reg_base))
return PTR_ERR(mdev->reg_base);
......
......@@ -754,18 +754,13 @@ static struct dma_chan *zx_of_dma_simple_xlate(struct of_phandle_args *dma_spec,
static int zx_dma_probe(struct platform_device *op)
{
struct zx_dma_dev *d;
struct resource *iores;
int i, ret = 0;
iores = platform_get_resource(op, IORESOURCE_MEM, 0);
if (!iores)
return -EINVAL;
d = devm_kzalloc(&op->dev, sizeof(*d), GFP_KERNEL);
if (!d)
return -ENOMEM;
d->base = devm_ioremap_resource(&op->dev, iores);
d->base = devm_platform_ioremap_resource(op, 0);
if (IS_ERR(d->base))
return PTR_ERR(d->base);
......@@ -894,7 +889,6 @@ static int zx_dma_remove(struct platform_device *op)
list_del(&c->vc.chan.device_node);
}
clk_disable_unprepare(d->clk);
dmam_pool_destroy(d->pool);
return 0;
}
......
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* This header provides macros for X1000 DMA bindings.
*
* Copyright (c) 2019 Zhou Yanjie <zhouyanjie@zoho.com>
*/
#ifndef __DT_BINDINGS_DMA_X1000_DMA_H__
#define __DT_BINDINGS_DMA_X1000_DMA_H__
/*
* Request type numbers for the X1000 DMA controller (written to the DRTn
* register for the channel).
*/
#define X1000_DMA_DMIC_RX 0x5
#define X1000_DMA_I2S0_TX 0x6
#define X1000_DMA_I2S0_RX 0x7
#define X1000_DMA_AUTO 0x8
#define X1000_DMA_UART2_TX 0x10
#define X1000_DMA_UART2_RX 0x11
#define X1000_DMA_UART1_TX 0x12
#define X1000_DMA_UART1_RX 0x13
#define X1000_DMA_UART0_TX 0x14
#define X1000_DMA_UART0_RX 0x15
#define X1000_DMA_SSI0_TX 0x16
#define X1000_DMA_SSI0_RX 0x17
#define X1000_DMA_MSC0_TX 0x1a
#define X1000_DMA_MSC0_RX 0x1b
#define X1000_DMA_MSC1_TX 0x1c
#define X1000_DMA_MSC1_RX 0x1d
#define X1000_DMA_PCM0_TX 0x20
#define X1000_DMA_PCM0_RX 0x21
#define X1000_DMA_SMB0_TX 0x24
#define X1000_DMA_SMB0_RX 0x25
#define X1000_DMA_SMB1_TX 0x26
#define X1000_DMA_SMB1_RX 0x27
#define X1000_DMA_SMB2_TX 0x28
#define X1000_DMA_SMB2_RX 0x29
#endif /* __DT_BINDINGS_DMA_X1000_DMA_H__ */
......@@ -118,6 +118,9 @@ enum sprd_dma_int_type {
* struct sprd_dma_linklist - DMA link-list address structure
* @virt_addr: link-list virtual address to configure link-list node
* @phy_addr: link-list physical address to link DMA transfer
* @wrap_addr: the wrap address for link-list mode, which means once the
* transfer address reaches the wrap address, the next transfer address
* will jump to the address specified by the wrap_to register.
*
* The Spreadtrum DMA controller supports the link-list mode, that means slaves
* can supply several groups configurations (each configuration represents one
......@@ -181,6 +184,7 @@ enum sprd_dma_int_type {
struct sprd_dma_linklist {
unsigned long virt_addr;
phys_addr_t phy_addr;
phys_addr_t wrap_addr;
};
#endif
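A minimal client-side sketch of filling the extended link-list configuration above; the helper and buffer names are placeholders:

static void example_fill_ll_cfg(struct sprd_dma_linklist *ll_cfg,
				void *node_cpu, dma_addr_t node_dma,
				dma_addr_t buf_dma, size_t buf_size)	/* hypothetical */
{
	ll_cfg->virt_addr = (unsigned long)node_cpu;	/* link-list nodes, CPU view */
	ll_cfg->phy_addr = node_dma;			/* link-list nodes, device view */
	/* Once the transfer address reaches wrap_addr, the hardware jumps
	 * to the wrap_to address (the destination, per the driver change).
	 */
	ll_cfg->wrap_addr = buf_dma + buf_size;
}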